Socially assistive robots can provide cognitive assistance with activities of daily living and promote social interaction for people living with cognitive impairments and/or social disorders. They can be used as aids for a number of different populations, including people living with dementia or autism spectrum disorder, and stroke patients during post-stroke rehabilitation [1]. Our research focuses on developing socially assistive intelligent robots capable of partaking in natural human-robot interactions (HRI). In particular, we have been working on the emotional aspects of these interactions to provide engaging settings, which in turn lead to better acceptance by the intended users. Herein, we present a novel multimodal affect recognition system for the robot Luke, Fig. 1(a), to engage in emotional assistive interactions.

Current multimodal affect recognition systems mainly focus on inputs from facial expressions and vocal intonation [2], [3]. Body language has also been used to determine human affect during social interactions, but it has yet to be explored in the development of multimodal recognition systems. Body language is strongly correlated with vocal intonation [4], and the combined modalities provide complementary emotional information due to the temporal development underlying the neural interaction in audiovisual perception [5].

In this paper, we present a novel multimodal recognition system that uniquely combines inputs from body language and vocal intonation to autonomously determine user affect during assistive HRI.
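The abstract does not specify how the two modalities are combined. As one illustrative possibility only, a decision-level (late) fusion scheme could weight and merge the class-probability outputs of separate body-language and vocal-intonation classifiers; the function names, label set, and weights in the sketch below are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical decision-level (late) fusion of two affect classifiers.
# body_probs and voice_probs are class-probability vectors produced by
# separate body-language and vocal-intonation classifiers (details not
# given in the paper); weights reflect assumed per-modality confidence.

AFFECT_CLASSES = ["positive", "neutral", "negative"]  # placeholder label set


def fuse_modalities(body_probs, voice_probs, weights=(0.5, 0.5)):
    """Combine per-modality class probabilities and return the fused label."""
    w_body, w_voice = weights
    fused = w_body * np.asarray(body_probs) + w_voice * np.asarray(voice_probs)
    fused /= fused.sum()  # renormalize to a proper distribution
    return AFFECT_CLASSES[int(np.argmax(fused))]


# Example: body language leans neutral, vocal intonation leans positive.
print(fuse_modalities([0.2, 0.6, 0.2], [0.7, 0.2, 0.1]))  # -> "positive"
```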
