Please use this identifier to cite or link to this item:
http://hdl.handle.net/10071/25552
Author(s): | Freitas, J.; Teixeira, A.; Dias, J. |
Editor: | Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk and Stelios Piperidis |
Date: | 2014 |
Title: | Multimodal corpora for silent speech interaction |
Book title/volume: | Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2014) |
Pages: | 4507 - 4511 |
Event title: | 9th International Conference on Language Resources and Evaluation, LREC 2014 |
Reference: | Freitas, J., Teixeira, A., & Dias, J. (2014). Multimodal corpora for silent speech interaction. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2014) (pp. 4507-4511). European Language Resources Association (ELRA) |
ISBN: | 978-2-9517408-8-4 |
Keywords: | Silent speech; Multimodal HCI; Data collection |
Abstract: | A Silent Speech Interface (SSI) allows speech communication to take place in the absence of an acoustic signal. This type of interface is an alternative to conventional Automatic Speech Recognition, which is not adequate for users with certain speech impairments or in the presence of environmental noise. The work presented here creates the conditions to explore and analyze complex combinations of input modalities applicable in SSI research. Focusing on non-invasive and promising modalities, we have selected the following sensing technologies used in human-computer interaction: Video and Depth input, Ultrasonic Doppler sensing, and Surface Electromyography. This paper describes a novel data collection methodology in which these independent streams of information are synchronously acquired with the aim of supporting research and development of a multimodal SSI. The reported recordings were divided into two rounds: a first in which the prompts were silently uttered, and a second in which speakers pronounced the scripted prompts in an audible, normal tone. In the first round of recordings, a total of 53.94 minutes were captured, of which 30.25% was estimated to be silent speech. In the second round of recordings, a total of 30.45 minutes were obtained, of which 30.05% was audible speech. |
Peer reviewed: | yes |
Access type: | Open Access |
Appears in Collections: | ISTAR-CRI - Comunicações a conferências internacionais |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
conferenceobject_71799.pdf | Publisher Version | 670.81 kB | Adobe PDF | View/Open |