Unconstrained Many-to-Many Alignment for Automatic Pronunciation Annotation

dc.contributor.author Kubo, Keigo
dc.contributor.author Kawanami, Hiromichi
dc.contributor.author Saruwatari, Hiroshi
dc.contributor.author Shikano, Kiyohiro
dc.date.accessioned 2012-08-30T05:51:31Z
dc.date.available 2012-08-30T05:51:31Z
dc.date.issued 2011-10
dc.identifier.uri http://hdl.handle.net/10061/8293
dc.description APSIPA ASC 2011: 2011 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, October 18-21, 2011, Xi'an, China.
dc.description.abstract An alignment between graphemes and phonemes is vital for annotating the pronunciation of out-of-vocabulary words. We want such an alignment to be (1) many-to-many and (2) fine-grained. A traditional one-to-one alignment model does not represent an intuitive mapping for logograms, such as Chinese characters, and has previously been reported to give inferior performance in phoneme prediction. A conventional many-to-many alignment model prefers mappings consisting of longer substrings, which degrades the generalization ability of the prediction model, especially for out-of-vocabulary words. To obtain a highly generalized model, we introduce city block distance into the conventional many-to-many alignment, so that fine-grained mappings are inferred without constraining the maximum lengths of either grapheme or phoneme substrings. Experimental results show that our extension improves the baseline grapheme-to-phoneme conversion on several language data sets.
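The abstract's key idea can be illustrated with a small sketch: each grapheme–phoneme substring pairing is penalized by its city block distance from a one-to-one mapping, so fine-grained segmentations win without any hard cap on substring lengths. This is only an illustrative Viterbi-style search, not the paper's actual alignment model; the penalty form |x−1| + |y−1| and all function names here are assumptions for illustration.

```python
def city_block_penalty(x, y):
    # |x - 1| + |y - 1|: zero for a 1-to-1 pairing, growing with
    # substring length, so fine-grained mappings are preferred.
    # (Assumed penalty form for illustration.)
    return abs(x - 1) + abs(y - 1)

def align(graphemes, phonemes):
    # Minimum-penalty segmentation of the two strings into substring
    # pairs, with NO maximum-length constraint on either side.
    G, P = len(graphemes), len(phonemes)
    INF = float("inf")
    best = [[INF] * (P + 1) for _ in range(G + 1)]
    back = [[None] * (P + 1) for _ in range(G + 1)]
    best[0][0] = 0.0
    for i in range(G + 1):          # states in topological order:
        for j in range(P + 1):      # transitions only increase i and j
            if best[i][j] == INF:
                continue
            for x in range(1, G - i + 1):       # grapheme substring length
                for y in range(1, P - j + 1):   # phoneme substring length
                    cost = best[i][j] + city_block_penalty(x, y)
                    if cost < best[i + x][j + y]:
                        best[i + x][j + y] = cost
                        back[i + x][j + y] = (i, j)
    # Trace back the minimum-penalty many-to-many alignment.
    pairs, i, j = [], G, P
    while (i, j) != (0, 0):
        pi, pj = back[i][j]
        pairs.append((graphemes[pi:i], phonemes[pj:j]))
        i, j = pi, pj
    return list(reversed(pairs))
```

When the two strings have equal length, the zero-penalty one-to-one segmentation is recovered; when they differ, the search absorbs the length mismatch with the smallest possible substrings rather than one long chunk.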
dc.language.iso en
dc.rights Copyright 2011 APSIPA
dc.title Unconstrained Many-to-Many Alignment for Automatic Pronunciation Annotation
dc.type.nii Conference Paper
dc.textversion Publisher
dc.identifier.NAIST-ID 73292492

