As humanoid robotics comes of age and a growing number of mechanical functions become possible, a robot's perceptual and cognitive functions continue to grow in importance and R&D focus. While the perceptual abilities of machines generally still lag behind human performance, machines already exceed human capabilities in areas where human cognitive limitations constrain our performance. For example, machines could theoretically handle hundreds of languages and process vast amounts of information on the Internet. Can future humanoid robotic assistants (in addition to handling mechanical chores) perhaps provide language and cultural support in a seamless and natural fashion?
In this talk, I will present results from our efforts to make this dream a reality. We will discuss and demonstrate various multi- and cross-lingual communication and interpretation devices and services that we have already deployed, including multilingual dialog systems, pocket speech translators on smartphones, and automatic simultaneous interpretation services. I will then discuss the many open issues that remain, from basic performance problems in speech and language processing to providing better and more natural forms of deployment.
Depending on the scenario and interaction style, rather different strategies appear to be effective, requiring different degrees of sophistication in the user interfaces and in speech, translation, and dialog processing.