SmartKom - towards an intuitive multimodal human-machine interaction

The SmartKom project aims at developing a human-machine interface that is intuitive to use, self-explanatory, and adaptive to the user's needs and preferences. The system will recognize speech, gesture, and facial-expression input, and it will generate text, graphics, and speech output. The user and the system are intended to use whichever modality is most appropriate for the particular task, the type of information, the user's preferences, and the application scenario. There are three such scenarios: a home/office working environment, public access to the internet and to information services, and a mobile device (essentially a far-advanced cell phone). The project is funded by the German Ministry of Education and Research (BMBF) for the period from September 1999 through August 2003.
Speech synthesis in SmartKom (excerpt of the synthesis@smartkom project plan)

The goal of the speech synthesis project within the SmartKom consortium is to develop a speech synthesis module capable of producing natural-sounding German speech. This general goal is achieved when the user of the SmartKom system is satisfied with the system's voice. This includes intelligible speech and a friendly voice, but also the appropriateness of the system response for a given task and within a certain dialogue state. The speech output component to be developed has to be in accordance with the other modes of multimodal interaction possibly used in parallel to speech. The goal therefore includes the successful integration of the module into the SmartKom system. An additional goal of this project is to invent and explore innovative methods within the framework of speech synthesis, in order to contribute to the research carried out in this field.
Objectives

A. Natural speech.
B. Friendly voice.
D. Innovative methods.
E. System integration.
General Approach and Contractual Aspects

As the technical task is to develop a speech output module for a multimodal system, all sub-tasks of a text-to-speech (TTS) system and of a concept-to-speech (CTS) system have to be taken care of. Detailed work will be carried out within the project on the following items.
A. Selection of a friendly voice
B. Creation of a new diphone voice
C. Construction of the speech synthesis database
D. Development and integration of natural speech synthesis methods
E. Development of a prosody module for multimodal speech synthesis
F. Definition of the interfaces between SmartKom modules
G. Evaluation of speech synthesis
H. System integration
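Item B refers to diphone synthesis, in which recorded transitions between adjacent phoneme pairs are concatenated to produce speech. The following minimal sketch illustrates only the unit-selection step of that idea; the toy lexicon, the phoneme symbols, and the silence marker `_` are invented for illustration and are not part of the SmartKom plan.

```python
# Toy sketch of diphone unit selection: text -> phonemes -> diphone names.
# The lexicon below is a stand-in for a real pronunciation dictionary.
TOY_LEXICON = {
    "guten": ["g", "u:", "t", "@", "n"],
    "tag": ["t", "a:", "k"],
}

def phonemize(text):
    """Look up each word in the toy lexicon (no letter-to-sound rules here)."""
    phones = []
    for word in text.lower().split():
        phones.extend(TOY_LEXICON[word])
    return phones

def diphones(phones):
    """Pair each phoneme with its successor, padding with silence '_' at the edges."""
    padded = ["_"] + phones + ["_"]
    return [f"{a}-{b}" for a, b in zip(padded, padded[1:])]

# Each resulting name would index one recorded unit in a diphone database.
print(diphones(phonemize("guten tag")))
```

In a full system, each diphone name would address a stored speech segment recorded from the chosen voice, which is why items A (voice selection) and C (database construction) precede the synthesis methods themselves.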