Apart from automatic feature extraction and subsequent speech recognition, our chair deals with the following topics: spoken dialogue systems; the recognition and processing of unknown, so-called out-of-vocabulary words; and the automatic analysis and classification of prosodic phenomena such as accents and phrase boundaries. Another core topic is the automatic recognition of emotion-related, affective user states based on acoustic and linguistic features; for this task we also use multi-modal information, including the analysis of facial expressions, gestures, and physiological parameters. A further topic is the multi-modal recognition of the user's focus of attention in human-machine interaction. Finally, we work on the analysis of pathological speech, such as speech from children with cleft lip and palate or from patients after laryngectomy (removal of the larynx due to cancer).