Co-speech gestures for human-robot collaboration

A. Ekrekli, A. Angleraud, G. Sharma, R. Pieters

Research output: Conference article › Scientific › peer-reviewed


Abstract

Collaboration between humans and robots requires effective modes of communication to assign robot tasks and coordinate activities. As communication can utilize different modalities, a multi-modal approach can be more expressive than single-modal models alone. In this work, we propose a co-speech gesture model that can assign robot tasks for human-robot collaboration. Human gestures and speech, detected by computer vision and speech recognition, can thus refer to objects in the scene and apply robot actions to them. We present an experimental evaluation of the multi-modal co-speech model on a real-world industrial use case. Results demonstrate that multi-modal communication is easy to achieve and can provide benefits for collaboration compared to single-modal tools.
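To illustrate the kind of fusion the abstract describes, the following is a minimal Python sketch, not the paper's implementation: it combines a recognized verb from a speech transcript with a pointing gesture to select a scene object and form a robot task. All names (SceneObject, fuse_command, the verb table) and the 2D pointing-ray simplification are assumptions made for this example.

```python
import math
from dataclasses import dataclass

# Hypothetical scene objects with 2D positions, standing in for
# detections from a computer-vision pipeline (not the paper's code).
@dataclass
class SceneObject:
    name: str
    x: float
    y: float

def object_nearest_to_pointing(objects, origin, direction):
    """Return the object closest to the pointing ray origin + t*direction."""
    dx, dy = direction
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm
    def distance_to_ray(obj):
        # Perpendicular distance from the object to the pointing ray.
        ox, oy = obj.x - origin[0], obj.y - origin[1]
        t = max(0.0, ox * dx + oy * dy)  # project onto the ray
        px, py = origin[0] + t * dx, origin[1] + t * dy
        return math.hypot(obj.x - px, obj.y - py)
    return min(objects, key=distance_to_ray)

def fuse_command(transcript, objects, pointing):
    """Combine a recognized verb from speech with the pointed-at object."""
    verbs = {"pick": "pick_up", "place": "place", "inspect": "inspect"}
    action = next((verbs[w] for w in transcript.lower().split() if w in verbs), None)
    if action is None:
        return None  # no actionable verb recognized in the utterance
    target = object_nearest_to_pointing(objects, *pointing)
    return {"action": action, "target": target.name}

scene = [SceneObject("bolt", 0.4, 0.2), SceneObject("housing", 0.1, 0.6)]
# Pointing ray: origin at the detected hand, direction e.g. from forearm keypoints.
task = fuse_command("pick that one", scene, ((0.0, 0.0), (0.9, 0.5)))
print(task)  # {'action': 'pick_up', 'target': 'bolt'}
```

The deictic phrase ("that one") carries no target on its own; the gesture resolves the reference, which is the expressiveness gain of the multi-modal approach over either channel alone.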
Original language: English
Title of host publication: IEEE International Conference on Robotic Computing (IRC)
Publisher: IEEE
Pages: 110-114
ISBN (electronic): 979-8-3503-9574-7
DOI - permanent links
Status: Published - 30 Nov 2023
OKM publication type: A4 Article in conference proceedings
Event: IEEE International Conference on Robotic Computing (IRC) - Laguna Hills, California, United States
Duration: 11 Dec 2023 - 13 Dec 2023

Conference

Conference: IEEE International Conference on Robotic Computing (IRC)
Country/Territory: United States
City: Laguna Hills, California
Period: 11/12/23 - 13/12/23

Publication forum level

  • Jufo level 1

