Project Description

The aim of the SEMAINE project is to build a Sensitive Artificial Listener: a multimodal dialogue system with the social interaction skills needed for a sustained conversation with a human user. The system will emphasise “soft” communication skills, i.e. non-verbal, social and emotional perception, interaction and behaviour capabilities. The Sensitive Artificial Listener paradigm involves only very limited verbal capabilities, but has been shown to be suited for prolonged human-machine interaction.

In this paradigm, we will build a real-time, robust interactive system that perceives a human user’s facial expression, gaze, and voice, and engages with the user through an Embodied Conversational Agent’s body, face and voice. The agent will exhibit audiovisual listener feedback in real time while the user is speaking, and will take the user’s feedback into account while the agent itself is speaking. The agent will pursue different dialogue strategies depending on the user’s state; it will learn to interpret the user’s non-verbal behaviour and adapt its own behaviour accordingly.
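The perceive–decide–act loop implied by this description (sense the user's non-verbal state, select a dialogue strategy, emit listener feedback) can be sketched as below. This is a minimal illustrative sketch only: the state variables, strategy names and feedback behaviours are assumptions made for the example, not the project's actual components.

```python
# Illustrative sketch of a Sensitive-Artificial-Listener-style loop.
# All thresholds, strategy names and behaviours are hypothetical.

from dataclasses import dataclass


@dataclass
class UserState:
    """Fused estimate of the user's non-verbal signals."""
    arousal: float   # -1.0 (passive) .. 1.0 (excited)
    valence: float   # -1.0 (negative) .. 1.0 (positive)
    speaking: bool   # voice-activity estimate


def choose_strategy(state: UserState) -> str:
    """Map the perceived user state to a dialogue strategy."""
    if state.valence < -0.3:
        return "soothe"       # calm a negative user
    if state.arousal > 0.3:
        return "mirror"       # match an engaged user's energy
    return "encourage"        # draw out a passive user


def listener_feedback(state: UserState, strategy: str) -> str:
    """Backchannel behaviour emitted while the user is speaking."""
    if not state.speaking:
        return "take_turn"    # user paused: the agent may speak
    return {"soothe": "slow_nod",
            "mirror": "smile_nod",
            "encourage": "head_tilt"}[strategy]


# One tick of the real-time loop:
state = UserState(arousal=0.6, valence=0.2, speaking=True)
strategy = choose_strategy(state)
behaviour = listener_feedback(state, strategy)
```

In a real system each tick would be driven by the audiovisual analysers, and the chosen behaviour rendered on the agent's face, body and voice; the point of the sketch is only the separation of perception, strategy selection, and feedback generation.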