Transcript

Tom Williams: 

Robots have a lot of influence over people. Robots are able to exert influence over what other people view as appropriate because people interpret robots as social agents and as moral agents. So they view them as capable of reasoning socially and taking actions that threaten or affirm social needs, and they view robots as moral agents that are capable of taking moral actions. So people view those robots as community members and take cues from what they do. So we really need to be either really intentionally focused on these high-level moral communication issues or really explicit about making sure that people understand the limits of the robot’s capabilities.

Tom Williams: 

My name is Tom Williams, and I am an assistant professor of computer science at Colorado School of Mines. 

The Conveyor: 

You’re listening to The Conveyor, the podcast that brings you the latest research, new discoveries and world-changing ideas from Colorado School of Mines. 

The Conveyor: 

Tom, your research examines a broad range of HRI, or human-robot interaction, all the way from robots in space to robots in the classroom. Can you give an example of HRI in action?

Tom Williams: 

Yeah, absolutely. So we are doing some work that is sponsored by NASA. NASA has these newer robots on the ISS called the Astrobees, and they are specifically designed for human-robot teaming. The Astrobees are designed with verbal and social interaction in mind. So they still might be doing tasks for astronauts, like going around and performing inspections and things like that, but they provide a great platform for us to look at things that are much more fundamental, like: when is the right time for those robots to be interrupting people and sharing information with them? How do people think about networks of these robots that, in reality, sort of share a hive mind but might be presenting themselves as individual robots, each with their own name? And so they’re a really exciting platform to be able to think about these more fundamental social and cognitive aspects of human-robot interaction.

The Conveyor: 

Right, so robots being able to recognize certain social cues within their environments. 

Tom Williams: 

Yes. 

The Conveyor: 

And with your recently awarded NSF grant—congratulations by the way. 

Tom Williams: 

Thank you. 

The Conveyor: 

You are studying working memory in those types of robots. What exactly does that include?

Tom Williams: 

In this project, what we want to do is study how robots can be given these sort of short-term memory or working memory stores of information so that they can keep information on hand ready to access for common tasks.  

The Conveyor: 

And that term is called “caching,” right, like what your internet browser does to retrieve information quickly?  

Tom Williams: 

Exactly. So the point is keeping information that you think is going to be helpful on hand so that you can make use of it later. And so a question we have then is, “Well, what information should we put into these working memory stores of helpful information, and then what should our policy be for kicking things out?” So for example, I’m currently holding in my hand a blue water bottle. And if I wanted to talk about it, there are a lot of properties I could use. I could call it the blue water bottle. I could call it the water bottle. I could call it the clear plastic water bottle. And if I use one of those to refer to it, say I call it the blue water bottle, and then you want to talk about it, you’re going to be maybe more likely to refer to it as the water bottle or the blue water bottle, as opposed to the Nalgene. So if I know I’m going to be performing a lot of tasks with that water bottle five minutes from now, because we’re performing a water-bottle-filling task together, then that might lead me to keep more facts about that bottle around.
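To make the caching idea concrete, here is a minimal sketch in Python of the kind of fixed-capacity working-memory store being described. The class name, the capacity, and the least-recently-used eviction policy are illustrative assumptions added for this transcript, not the project’s actual design:

```python
# Illustrative sketch only: a fixed-capacity working-memory store that keeps
# facts about objects on hand and evicts the least recently used entry when
# full. The LRU policy is an assumption; what the right eviction policy
# should be is exactly what the project studies.
from collections import OrderedDict

class WorkingMemory:
    def __init__(self, capacity=5):
        self.capacity = capacity
        self.store = OrderedDict()  # object id -> known properties

    def remember(self, obj_id, properties):
        """Cache facts about an object, evicting the stalest entry if full."""
        if obj_id in self.store:
            self.store.move_to_end(obj_id)  # refresh recency
        self.store[obj_id] = properties
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # kick out least recently used

    def recall(self, obj_id):
        """Return cached properties if present, refreshing their recency."""
        if obj_id in self.store:
            self.store.move_to_end(obj_id)
            return self.store[obj_id]
        return None

wm = WorkingMemory(capacity=3)
wm.remember("bottle-1", {"type": "water bottle", "color": "blue",
                         "material": "clear plastic"})
print(wm.recall("bottle-1"))  # facts on hand for "the blue water bottle"
```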

Something we also want to do over the course of this project is use different methods for assessing the robot’s future plan of attack, and what actions and what objects are likely to be involved in those plans, and then use those to inform these models of exactly what we keep around and why. And this is particularly important for us when we’re talking about interactive robots, where we want to develop robots that are going to interact with people in a human-like way, so that people understand how to interact with the robot, know sort of what to expect, and act in a way that the robot, in turn, will find predictable. And so being able to design robots that evoke these very human-like ways of interacting is a really powerful design strategy.
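The plan-informed retention idea can be sketched the same way. Below, a hypothetical scoring function weights each object by how soon and how often it appears in the robot’s upcoming plan steps, so that objects like the water bottle in the filling task stay cached; the function name, the plan format, and the discounting scheme are assumptions, not the project’s published model:

```python
# Illustrative sketch only: score objects by how soon and how often they
# appear in the robot's plan; high-scoring objects stay in working memory.
def retention_scores(plan, discount=0.9):
    """Earlier and more frequent appearances yield higher scores."""
    scores = {}
    for step, action in enumerate(plan):
        for obj in action["objects"]:
            scores[obj] = scores.get(obj, 0.0) + discount ** step
    return scores

plan = [
    {"action": "grasp",   "objects": ["bottle-1"]},
    {"action": "move-to", "objects": ["sink-1"]},
    {"action": "fill",    "objects": ["bottle-1", "sink-1"]},
]
print(retention_scores(plan))
# bottle-1 scores highest, so its properties are worth keeping cached
```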

The Conveyor: 

Yeah, it’s incredibly powerful. So how far then do we go towards fully humanizing robots, like having a regular seamless interaction between us and AI? 

Tom Williams: 

That’s a really interesting question, because it taps into a couple of very different types of questions relevant to different areas of HRI. So, is it reasonable to envision a future in which robots are interacting seamlessly with humans? Yeah, sure. Of course, developing a fully conscious robot may be hundreds of years away. But if we’re talking about just fluidly interacting with humans, that might be as simple as: I can’t even talk, but I pass you a wrench right when you’re expecting me to, right? Just a really seamless handover interaction, which doesn’t require me to make small talk about the weather in a human-like way, but is really human-like and flexible in that small-scale interaction.

At a slightly higher level, in the human-robot interaction community there’s a lot of emphasis on non-verbal interaction: on using gaze cues, gestures, and these other types of things beyond what we say in order to help direct people’s attention or have our own attention directed. We are doing a lot of work on that in our lab, and robots really are getting better and better at those types of non-verbal cues, which I think are the key to having this seamless interaction, beyond the scope of chit-chat.

The Conveyor: 

Yeah, being able to read people and make decisions from there. But that also raises some ethical questions, which, speaking of, what sort of controversial concerns have you come across? 

Tom Williams: 

In our lab, we’ve had a number of discussions surrounding the use of robotic technologies in policing and military applications, and we’ve had to make decisions for ourselves as to what we’re okay with working on. These are all questions that extend well beyond just algorithm development, and they relate to questions that a lot of people in the AI community are now grappling with, especially surrounding topics like face recognition. Face recognition is a really important, sensitive topic for us in social robotics, because in a lot of cases you want to know who you’re talking to, and an easy way to do that is face recognition.

But this gets really complicated, in part because face recognition algorithms basically do not work for people of color, especially women of color, where the error rates are something like 40 percent. That means that if you’re developing a robotic technology that relies on face recognition, what you’re really doing is developing the robotic equivalent of a whites-only water fountain: a piece of technology that, regardless of your intent, is only going to work for white people.

The Conveyor: 

All right. To wrap things up, we often see human-robot interaction go wrong in science fiction, say in movies, books, you name it. How realistic are those plotlines when it comes to AI? 

Tom Williams: 

It depends on what type of problems you’re talking about. If you’re talking about the Terminator and the robots rising up and destroying us all, no, those aren’t really very realistic concerns, not least because robots have enough difficulty just going through doorways right now; fitting through doors is a really hard problem for some robots. Robots are nowhere near that capability. I think it’s interesting to think about, and there are people doing good work thinking about those types of problems, because maybe they’ll be relevant in a couple hundred years. But right now, they are very much a distraction from very serious questions about how we’re using robots and AI in society right now.
 

The Conveyor:

Thanks for listening to The Conveyor. To learn more about how Colorado School of Mines is solving some of the world’s biggest engineering and scientific challenges, visit mines.edu and then join us back here for our next episode. 

 

This episode of The Conveyor was produced by Ashley Spurgeon and was hosted and edited by Dannon Cox. 


The viewpoints and opinions expressed by featured guests do not necessarily represent those of Colorado School of Mines.