INTRODUCTION

Sound is an important interaction modality, and a large part of human interaction happens in the aural domain. While research in Human-Robot Interaction (HRI) has long explored spoken language for interacting with humans, sound as a broader and, to a significant degree, nonlexical (i.e., without words) medium has received comparably less attention. Yet the range of sounds that robots can produce is vast, encompassing, among others, mechanical noise, music, and utterances that mimic human and animal vocalizations with varying degrees of realism. The sound of a robot's machinery can shape our perceptions and expectations [11, 17], music serves as a medium for robots to engage and communicate [18], and shared musical experiences can strengthen the bond between humans and robots [5]. Sonifications may enhance the legibility of movement and gestures [4, 7, 15], and beep sounds may be used to communicate emotions [2, 14]. Getting closer to the margins of language, robots may take inspiration from non-lexical fillers such as "uh" [13, 16] and backchannels such as "mhmm" [8, 12]. More generally, pitch, intensity, and other human prosodic variations may be drawn on in robot sound design [3, 10].

The information that can be extracted from sound in a robot's environment is equally rich. Beyond the recognition of semantic content, robots use, for example, sound source localization to gain a better understanding of their environment [9], or analyze a human's voice timbre and tone to distinguish speakers [6] and detect emotion [1].
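To make the prosodic cues mentioned above concrete, the following is a minimal sketch, not drawn from any of the cited systems, of how a robot might summarize the pitch and intensity of a recorded human utterance. It assumes Python with the librosa library; the function name prosody_summary and the input file utterance.wav are hypothetical.

    # Minimal sketch (illustrative only): summarize two prosodic cues,
    # fundamental frequency (pitch) and RMS intensity, from one audio file.
    import numpy as np
    import librosa

    def prosody_summary(wav_path: str) -> dict:
        """Return coarse pitch and intensity statistics for an utterance."""
        y, sr = librosa.load(wav_path, sr=None)  # keep the native sample rate

        # Frame-wise fundamental frequency via the pYIN tracker;
        # unvoiced frames are returned as NaN and ignored below.
        f0, voiced_flag, _ = librosa.pyin(
            y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7")
        )

        # Frame-wise intensity as root-mean-square energy.
        rms = librosa.feature.rms(y=y)[0]

        voiced = bool(np.any(voiced_flag))
        return {
            "mean_f0_hz": float(np.nanmean(f0)) if voiced else None,
            "f0_range_hz": float(np.nanmax(f0) - np.nanmin(f0)) if voiced else None,
            "mean_rms": float(rms.mean()),
        }

    if __name__ == "__main__":
        print(prosody_summary("utterance.wav"))  # hypothetical input file

Statistics like these could feed either side of the picture sketched above: as features for analyzing a human speaker's voice, or as design parameters for shaping a robot's own nonlexical output.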