Mapping Techniques for Voice Conversion

Helander, Elina
Abstract

Speaker identity plays an important role in human communication. In addition to the linguistic content, speech utterances carry acoustic information about the speaker's characteristics. This thesis focuses on voice conversion, a technique that aims at changing the voice of one speaker (the source speaker) into the voice of another specific speaker (the target speaker) without changing the linguistic information. The relationship between the source and target speaker characteristics is learned from training data. Voice conversion can be used in various applications and fields: text-to-speech systems, dubbing, speech-to-speech translation, games, voice restoration, voice pathology, and others.

Voice conversion poses many challenges: which features to extract from speech, how to find linguistic correspondences (alignment) between source and target features, which machine learning techniques to use for creating a mapping function between the features of the speakers, and finally, how to make the desired modifications to the speech waveform. The features can be any parameters that describe the speech and the speaker identity, e.g. the spectral envelope, excitation, fundamental frequency, and phone durations. The main focus of the thesis is on the design of suitable mapping techniques between frame-level source and target features, but aspects related to parallel data alignment and prosody conversion are also addressed.

The perceived quality and the success of the identity conversion are largely subjective. Conventional statistical techniques are able to produce good similarity between the original and the converted target voices, but the quality is usually degraded. The objective of this thesis is to design conversion techniques that enable successful identity conversion while maintaining the original speech quality.

Due to the limited amount of data, statistical techniques are usually used to extract the mapping function. The most popular technique is based on a Gaussian mixture model (GMM). However, conventional GMM-based conversion suffers from many problems that result in degraded speech quality. These problems are analyzed in this thesis, and a technique that combines GMM-based conversion with partial least squares regression is introduced to alleviate them. Additionally, approaches to solve the time-independent mapping problem associated with many algorithms are proposed. The most significant contribution of the thesis is the proposed novel dynamic kernel partial least squares regression technique, which allows creating a non-linear mapping function and improves temporal correlation. The technique is straightforward, efficient, and requires very little tuning. It is shown to outperform the state-of-the-art GMM-based technique in both subjective and objective tests over a variety of speaker pairs. In addition, quality is further improved when aperiodicity and binary voicing values are predicted using the same technique.
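For illustration, the sketch below shows the kind of conventional joint-density GMM mapping referred to above: a GMM is fitted on stacked source-target vectors of time-aligned frames, and each source frame is converted by a posterior-weighted sum of per-component linear regressions. This is a minimal sketch of the generic baseline under assumed inputs, not the implementation used in the thesis; the arrays src_feats and tgt_feats (time-aligned spectral feature frames) and the component count are hypothetical placeholders.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(src_feats, tgt_feats, n_components=8):
    """Fit a GMM on stacked [source | target] vectors of time-aligned frames.

    src_feats, tgt_feats: hypothetical arrays of shape (n_frames, d),
    e.g. mel-cepstral coefficients extracted from parallel, aligned utterances.
    """
    joint = np.hstack([src_feats, tgt_feats])          # shape (n_frames, 2*d)
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", reg_covar=1e-4)
    return gmm.fit(joint)

def convert_frame(gmm, x):
    """Minimum mean-squared-error conversion of one source frame x, shape (d,)."""
    d = x.shape[0]
    mu_x, mu_y = gmm.means_[:, :d], gmm.means_[:, d:]
    sxx = gmm.covariances_[:, :d, :d]                  # source covariances
    syx = gmm.covariances_[:, d:, :d]                  # target-source cross-covariances
    # Posterior component probabilities p(m | x) under the marginal source GMM.
    log_post = np.array([np.log(gmm.weights_[m])
                         + multivariate_normal.logpdf(x, mu_x[m], sxx[m])
                         for m in range(gmm.n_components)])
    post = np.exp(log_post - logsumexp(log_post))
    # Posterior-weighted sum of per-component linear regressions:
    # y_hat = sum_m p(m|x) * (mu_y_m + Sigma_yx_m Sigma_xx_m^{-1} (x - mu_x_m))
    y_hat = np.zeros(d)
    for m in range(gmm.n_components):
        y_hat += post[m] * (mu_y[m] + syx[m] @ np.linalg.solve(sxx[m], x - mu_x[m]))
    return y_hat
```

The partial least squares and dynamic kernel PLS techniques proposed in the thesis address weaknesses of this kind of mapping; a plain linear PLS regressor could be sketched with sklearn.cross_decomposition.PLSRegression, but the dynamic kernel formulation in the thesis goes beyond such a sketch.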
The vast majority of existing voice conversion algorithms concern the transformation of spectral envelopes. However, prosodic features, such as fundamental frequency movements and speaking rhythm, also carry important cues to identity. It is shown in the thesis that prosody alone can be used, to some extent, to recognize speakers who are familiar to the listeners. Furthermore, a prosody conversion technique is proposed that transforms fundamental frequency contours and durations at the syllable level. The technique is shown to improve similarity to the target speaker's prosody and to reduce roboticness compared to a conventional frame-based conversion technique.

Recently, the trend has shifted from text-dependent to text-independent use cases, meaning that no parallel data are available. The techniques proposed in the thesis currently assume parallel data, i.e. that the same texts have been spoken by both speakers. However, excluding the prosody conversion algorithm, the proposed techniques require no phonetic information and are applicable with small amounts of training data. Moreover, many text-independent approaches are based on extracting some form of alignment as a pre-processing step, so the techniques proposed in the thesis can be exploited after the alignment process.
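For context, the frame alignment of parallel utterances that such techniques rely on is commonly obtained with dynamic time warping over frame-level features. The following self-contained sketch is a generic illustration under assumed inputs, not the alignment procedure used in the thesis; src and tgt are hypothetical feature matrices (frames by dimensions) extracted from the same sentence spoken by the two speakers.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dtw_align(src, tgt):
    """Align two parallel utterances (arrays of shape (n_frames, d)) with
    standard dynamic time warping; returns matched (source, target) frame pairs."""
    cost = cdist(src, tgt)                               # local Euclidean distances
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)                # accumulated cost matrix
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j - 1],  # match both
                                                 acc[i - 1, j],      # advance source only
                                                 acc[i, j - 1])      # advance target only
    # Backtrack from the end to recover the warping path.
    path, i, j = [], n, m
    while i > 1 or j > 1:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    path.append((0, 0))
    return path[::-1]
```

Likewise, the conventional frame-based fundamental frequency conversion mentioned above is usually a mean-and-variance transformation of log-F0. The minimal sketch below assumes per-speaker log-F0 statistics estimated from each speaker's voiced training frames; all names are hypothetical, and the syllable-level technique proposed in the thesis is considerably more elaborate than this baseline.

```python
import numpy as np

def convert_f0(src_f0, src_stats, tgt_stats):
    """Frame-level log-domain mean-variance F0 conversion (a common baseline).

    src_f0: F0 contour of an utterance in Hz, with 0 marking unvoiced frames.
    src_stats, tgt_stats: (mean, std) of log-F0 over each speaker's voiced frames.
    """
    mu_s, sigma_s = src_stats
    mu_t, sigma_t = tgt_stats
    out = np.zeros_like(src_f0, dtype=float)
    voiced = src_f0 > 0
    # Shift and scale the source log-F0 to match the target speaker's statistics.
    out[voiced] = np.exp((np.log(src_f0[voiced]) - mu_s) / sigma_s * sigma_t + mu_t)
    return out
```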

Year:
2012
Note:
Awarding institution: Tampereen teknillinen yliopisto - Tampere University of Technology