Use this identifier to cite this record: http://hdl.handle.net/10071/29220
Authors: Freitas, J.
Teixeira, A.
Dias, M. S.
Editors: Bilmes, J., Fosler-Lussier, E., Hasegawa-Johnson, M., and Livescu, K.
Date: 2013
Title: Multimodal silent speech interface based on video, depth, surface electromyography and ultrasonic Doppler: Data collection and first recognition results
Book title and volume: Workshop on Speech Production in Automatic Speech Recognition (SPASR-2013)
Pages: 44-49
Event title: Workshop on Speech Production in Automatic Speech Recognition (SPASR-2013)
Citation: Freitas, J., Teixeira, A., & Dias, M. S. (2013). Multimodal silent speech interface based on video, depth, surface electromyography and ultrasonic Doppler: Data collection and first recognition results. In J. Bilmes, E. Fosler-Lussier, M. Hasegawa-Johnson, & K. Livescu (Eds.), Workshop on Speech Production in Automatic Speech Recognition (SPASR-2013) (pp. 44-49). International Speech Communication Association. https://www.isca-speech.org/archive/spasr_2013/freitas13_spasr.html
ISSN: 2308-457X
Keywords: Silent speech interfaces
Multimodal
Video and depth information
Surface electromyography
Ultrasonic Doppler sensing
Abstract: Silent Speech Interfaces use data from the speech production process, such as visual information of face movements. However, using a single modality limits the amount of available information. In this study we start to explore the use of multiple input modalities in order to acquire a more complete representation of the speech production model. We selected four non-invasive modalities (visual data from video and depth, surface electromyography, and ultrasonic Doppler) and created a system that explores the synchronous combination of all four, or of a subset of them, in a multimodal Silent Speech Interface (SSI). This paper describes the system design, the data collection, and the first word recognition results. Because the first corpora acquired for this SSI are necessarily small, we use for classification an example-based recognition approach: Dynamic Time Warping (DTW) followed by a weighted k-Nearest Neighbor (k-NN) classifier. The first classification results, obtained with different vocabularies (digits, a small set of commands related to Ambient Assisted Living, and minimal nasal pairs), show that word recognition benefits from the multimodal approach.
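The record contains no code; the sketch below only illustrates the example-based recognition scheme named in the abstract, DTW as the sequence distance followed by distance-weighted k-NN voting over training templates. The Euclidean frame distance, the inverse-distance weighting, the value of k, and all names are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two feature sequences
    a and b of shape (frames, features)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance (Euclidean, an assumption)
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def weighted_knn(query, templates, labels, k=3):
    """Label a query by distance-weighted voting over its k nearest
    training templates, with DTW as the distance measure."""
    dists = [dtw_distance(query, t) for t in templates]
    nearest = np.argsort(dists)[:k]
    votes = {}
    for i in nearest:
        w = 1.0 / (dists[i] + 1e-9)   # closer templates vote more strongly
        votes[labels[i]] = votes.get(labels[i], 0.0) + w
    return max(votes, key=votes.get)

# Hypothetical usage: each sample is a (frames x features) array, e.g.
# per-frame features concatenated across the synchronously acquired modalities.
rng = np.random.default_rng(0)
templates = [rng.normal(size=(20, 8)) for _ in range(6)]
labels = ["one", "one", "two", "two", "three", "three"]
query = rng.normal(size=(18, 8))
print(weighted_knn(query, templates, labels, k=3))
```

Example-based matching of this kind requires no trained model, which is consistent with the abstract's motivation: the first acquired corpora are too small to train a statistical recognizer.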
Peer reviewed: yes
Access: Open Access
Appears in collections: ISTAR-CRI - International conference communications

Files in this record:
File: conferenceobject_96464.pdf (1.08 MB, Adobe PDF)


