2021 Virtual Undergraduate Research Symposium

Is it Pointless? Modeling and Evaluation of Category Transitions of Spatial Gestures

PROJECT NUMBER: 40 | AUTHOR: Adam Stogsdill, Computer Science

MENTOR: Tom Williams, Computer Science

ABSTRACT

To enable robots to select between different types of nonverbal behavior when accompanying spatial language, we must first understand the factors that guide human selection between such behaviors.
In this work, we argue that to enable appropriate spatial gesture selection, HRI researchers must answer four questions: (1) What are the factors that determine the form of gesture used to accompany spatial language? (2) What parameters of these factors cause speakers to switch between these categories? (3) How do the parameterizations of these factors inform the performance of gestures within these categories? and (4) How does human generation of gestures differ from human expectations of how robots should generate such gestures?
In this work, we consider the first three questions and make two key contributions: (1) a human-human interaction experiment investigating how human gestures transition between the deictic and non-deictic categories as contextual factors change, and (2) a model of gesture category transition informed by the results of that experiment.
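
The abstract does not reproduce the model itself, but a minimal sketch of what a threshold-based gesture category transition rule could look like is given below, using two contextual factors raised in the discussion (referent distance and visibility). The names (Context, select_gesture_category, max_pointing_distance_m) and the threshold value are illustrative assumptions, not the authors' actual Gesture Selection Model.

```python
# Hypothetical sketch of a gesture-category selection rule.
# The factors (distance, visibility) are taken from the poster discussion;
# the structure and threshold are illustrative only.

from dataclasses import dataclass
from enum import Enum


class GestureCategory(Enum):
    DEICTIC = "deictic"          # e.g., pointing at the referent
    NON_DEICTIC = "non_deictic"  # e.g., iconic or descriptive gestures


@dataclass
class Context:
    distance_m: float       # speaker-to-referent distance in meters
    referent_visible: bool  # whether the referent is in view


def select_gesture_category(ctx: Context,
                            max_pointing_distance_m: float = 10.0) -> GestureCategory:
    """Choose a gesture category from contextual factors.

    A referent that is visible and near enough to point at plausibly
    invites a deictic gesture; otherwise the speaker may fall back on
    non-deictic gestures.
    """
    if ctx.referent_visible and ctx.distance_m <= max_pointing_distance_m:
        return GestureCategory.DEICTIC
    return GestureCategory.NON_DEICTIC


if __name__ == "__main__":
    near_visible = Context(distance_m=2.0, referent_visible=True)
    far_hidden = Context(distance_m=50.0, referent_visible=False)
    print(select_gesture_category(near_visible))  # GestureCategory.DEICTIC
    print(select_gesture_category(far_hidden))    # GestureCategory.NON_DEICTIC
```

In the actual model, such hard thresholds would presumably be replaced by parameters fit to the human-human experiment data, so that the transition point between categories reflects observed human behavior.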

AUTHOR BIOGRAPHY

Adam Stogsdill is an undergraduate senior majoring in Computer Science on the Robotics and Intelligent Systems track. He is working under Dr. Tom Williams in the MirrorLab. His research focuses on non-verbal communication and gesture generation techniques. He hopes to extend this research into the machine learning domain and develop techniques that help robots communicate more effectively.

2 Comments

  1. Hello, your poster was overall well put together and appealing. The biggest question I came into the presentation with was about how to categorize the factors you describe, like visibility and distance. You explain this very well, and the pseudo-code you provide for the Gesture Selection Model is helpful. I'm curious what other algorithms you'll try against GAN, and which you expect to do well.

  2. Hello Adam, this project is incredibly interesting! Your explanation and the four questions that were provided do a great job of conveying what you plan to achieve. Your Gesture Selection Model looks very promising, and I would be interested in learning more about any future work that stems from this project.
