Monday, April 25, 2011

Paper Reading #24: Usage patterns and latent semantic analyses for task goal inference of multimodal user interactions

Comments:

TBD

Reference Information:

Title: Usage patterns and latent semantic analyses for task goal inference of multimodal user interactions

Authors:  Pui-Yu Hui, Wai-Kit Lo, and Helen Meng of The Chinese University of Hong Kong, Hong Kong

Presentation Venue: IUI '10 Proceedings of the 15th international conference on Intelligent user interfaces

Summary:

This paper describes a system capable of interpreting both spoken and written user input. The proposed system infers the semantic meaning behind the communication, specifically the task goal the user intends, and uses statistical analysis of prior usage patterns to do so, demonstrating a form of machine learning.


The system combines three processing stages to achieve this result. Spoken locative references (SLRs) are extracted from parsed Chinese utterances, then passed, after further processing, to a latent semantic modeling (LSM) stage that uses singular value decomposition (SVD) to infer the user's task goal. So far, the researchers are reporting roughly 99% accuracy with this technique.
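To make the LSM/SVD stage concrete, here is a minimal sketch of how latent semantic modeling via SVD can infer a task goal from a user expression. This is not the authors' implementation: the vocabulary, the task-goal categories, and the toy co-occurrence counts below are all hypothetical, and the fold-in step is the standard LSA query projection rather than anything specific to this paper.

```python
import numpy as np

# Hypothetical term-by-goal co-occurrence matrix: rows are terms seen in
# user expressions, columns are task-goal categories (toy counts only).
terms = ["move", "left", "rotate", "turn", "delete", "erase", "here"]
goals = ["relocate", "rotate", "remove"]
X = np.array([
    [5.0, 0.0, 0.0],  # "move"   -> mostly relocate
    [4.0, 0.0, 0.0],  # "left"   -> mostly relocate
    [0.0, 4.0, 0.0],  # "rotate" -> mostly rotate
    [0.0, 3.0, 0.0],  # "turn"   -> mostly rotate
    [0.0, 0.0, 3.0],  # "delete" -> mostly remove
    [0.0, 0.0, 2.0],  # "erase"  -> mostly remove
    [1.0, 1.0, 1.0],  # "here"   -> shared deictic term
])

# Latent semantic modeling: a truncated SVD projects terms and goals
# into a shared low-rank "semantic" space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2  # number of latent dimensions to keep
Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T

def infer_goal(expression):
    """Fold a new expression into the latent space and return the
    goal whose latent vector is most similar (cosine similarity)."""
    counts = np.array([expression.split().count(t) for t in terms], float)
    q = counts @ Uk / sk       # standard LSA query fold-in
    goal_vecs = Vk * sk        # goal coordinates, scaled by singular values
    sims = (goal_vecs @ q) / (
        np.linalg.norm(goal_vecs, axis=1) * np.linalg.norm(q) + 1e-12)
    return goals[int(np.argmax(sims))]

print(infer_goal("move this left"))  # -> "relocate" with these toy counts
```

With real data the matrix would be far larger and sparser, and the low-rank projection is what lets the model generalize: expressions that share no exact terms with a goal's training data can still land near it in the latent space.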

Discussion:

I think this could have a profound impact on semantic recognition systems in the future. The Chinese language comprises many complex characters and has been described as one of the most difficult languages to learn. A machine learning system that can infer semantic meaning from user input could go a long way toward more accurate translation and understanding of the language.


This same technique could be applied to other languages. Ultimately, this could lead to machines that accept written and spoken input in any language and accurately convey concepts and meaning between people, regardless of cultural or linguistic background.
