Use this identifier to reference this record: http://hdl.handle.net/10071/27819
Full record
DC Field | Value | Language
dc.contributor.author | Medeiros, H. | -
dc.contributor.author | Moniz, H. | -
dc.contributor.author | Batista, F. | -
dc.contributor.author | Trancoso, I. | -
dc.contributor.author | Nunes, L. | -
dc.contributor.editor | Bimbot, F., Cerisara, C., Fougeron, C., Gravier, G., Lamel, L., Pellegrino, F., and Perrier, P. | -
dc.date.accessioned | 2023-02-08T15:42:13Z | -
dc.date.available | 2023-02-08T15:42:13Z | -
dc.date.issued | 2013 | -
dc.identifier.citation | Medeiros, H., Moniz, H., Batista, F., Tjalve, M., Trancoso, I., & Nunes, L. (2013). Disfluency detection based on prosodic features for university lectures. In F. Bimbot, C. Cerisara, C. Fougeron, G. Gravier, L. Lamel, F. Pellegrino, & P. Perrier (Eds.), Proceedings of the 14th Annual Conference of the International Speech Communication Association (INTERSPEECH 2013) (vol. 4, pp. 2629-2633). International Speech Communication Association. https://doi.org/10.21437/Interspeech.2013-605 | -
dc.identifier.isbn | 978-1-62993-443-3 | -
dc.identifier.issn | 2308-457X | -
dc.identifier.uri | http://hdl.handle.net/10071/27819 | -
dc.description.abstract | This paper focuses on the identification of disfluent sequences and their distinct structural regions, based on acoustic and prosodic features. The reported experiments are based on a corpus of university lectures in European Portuguese, comprising roughly 32 hours of speech with a relatively high percentage of disfluencies (7.6%). The set of features automatically extracted from the corpus proved discriminative of the regions involved in the production of a disfluency. Several machine learning methods were applied, but the best results were achieved using Classification and Regression Trees (CART). The most informative feature set for cross-region identification encompasses word duration ratios, word confidence scores, silence ratios, and pitch and energy slopes. Features such as the number of phones and syllables per word proved more useful for identifying the interregnum, whereas energy slopes were best suited for identifying the interruption point. | eng
dc.language.iso | eng | -
dc.publisher | International Speech Communication Association | -
dc.relation | info:eu-repo/grantAgreement/FCT/PIDDAC/SFRH%2FBD%2F44671%2F2008/PT | -
dc.relation | FP7-ICT-2011-7-288121 | -
dc.relation | info:eu-repo/grantAgreement/FCT/6817 - DCRRNI ID/PEst-OE%2FEEI%2FLA0021%2F2013/PT | -
dc.relation.ispartof | Proceedings of the 14th Annual Conference of the International Speech Communication Association (INTERSPEECH 2013) | -
dc.rights | openAccess | -
dc.subject | Prosodic features | eng
dc.subject | Automatic disfluency detection | eng
dc.subject | Corpus of university lectures | eng
dc.subject | Machine learning | eng
dc.title | Disfluency detection based on prosodic features for university lectures | eng
dc.type | conferenceObject | -
dc.event.title | 14th Annual Conference of the International Speech Communication Association (INTERSPEECH 2013) | -
dc.event.type | Conference | pt
dc.event.location | Lyon | eng
dc.event.date | 2013 | -
dc.pagination | 2629-2633 | -
dc.peerreviewed | yes | -
dc.volume | 4 | -
dc.date.updated | 2023-02-08T15:39:56Z | -
dc.description.version | info:eu-repo/semantics/publishedVersion | -
dc.identifier.doi | 10.21437/Interspeech.2013-605 | -
dc.subject.fos | Domain/Scientific Area::Natural Sciences::Physical Sciences | por
dc.subject.fos | Domain/Scientific Area::Engineering and Technology::Materials Engineering | por
dc.subject.fos | Domain/Scientific Area::Medical Sciences::Clinical Medicine | por
iscte.identifier.ciencia | https://ciencia.iscte-iul.pt/id/ci-pub-42668 | -
iscte.alternateIdentifiers.wos | WOS:000395050001064 | -
iscte.alternateIdentifiers.scopus | 2-s2.0-84906221162 | -
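
The abstract above attributes the best disfluency-region results to Classification and Regression Trees (CART) trained on features such as word duration ratios, confidence scores, silence ratios, and pitch and energy slopes. As a minimal illustrative sketch, the Python snippet below trains scikit-learn's DecisionTreeClassifier (a CART implementation) on a stand-in feature table; the feature names, region labels, and randomly generated data are assumptions made for illustration, not the authors' pipeline, corpus, or results.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical per-word feature matrix: one row per word token.
# Feature names follow the abstract; the extraction step is assumed done.
feature_names = [
    "word_duration_ratio",   # word duration vs. expected duration
    "word_confidence",       # ASR confidence score for the word
    "silence_ratio_before",  # silent-pause ratio preceding the word
    "pitch_slope",           # F0 slope over the word
    "energy_slope",          # energy slope over the word
]
X = rng.normal(size=(2000, len(feature_names)))  # stand-in for real features
# Hypothetical region labels for each word token.
y = rng.choice(["fluent", "reparandum", "interregnum", "repair"], size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# DecisionTreeClassifier is scikit-learn's CART implementation; the depth
# cap only keeps this illustrative tree small.
clf = DecisionTreeClassifier(max_depth=5, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

With real per-word features and annotated region labels, the report would show per-class precision and recall for the reparandum, interregnum, and repair regions; on the random stand-in data above it only demonstrates the training and evaluation flow.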
Appears in collections: ISTAR-CRI - International conference communications
IT-CRI - International conference communications

Files in this record:
File | Size | Format
conferenceobject_42668.pdf | 219.68 kB | Adobe PDF


