As automated machines grow in popularity, the technology and design behind them must keep pace. To this end, Wendy Ju of the Jacobs Technion-Cornell Institute at Cornell Tech is studying human-robot interaction.
“It’s delightful for me to watch people move around in a space,” shares Ju. “We are not always thinking about the communicative signaling of our actions, and yet we know very well what we’re doing.”
Ju is working to identify human movements and interactions so that such behaviors can be better integrated into artificial intelligence. Her research goal is to understand what various machines need in order to create a more seamless process, one that mimics the ease people have with each other in daily interactions.
How cars play a role
The automotive industry presents an ideal case study for this research. Ju has long studied human-car interactions. She was part of the team at the Center for Design Research at Stanford that observed interactions between people and driverless vehicles.
She found that, typically, more attention is paid to the movement of the car than to the driver in the car. At crosswalks, pedestrians focus on the car's wheels; if they sense the car is going to stop, they don't look any further.
However, the focus shifted to the driver when there was an anomaly. For example, if the car appeared to brake but then eased up near the crosswalk, pedestrians would look up at the driver's seat. Even then, they would usually still cross the road.
"Walking down the road is so automatic that we do that when we are doing all sorts of other things," says Ju. "Our priority is to safely get across the road, then post-analyze the situation."
Before Ju's study, the automotive industry had considered adding signs and lights to driverless cars to signal the car's intentions to pedestrians. Her experiment showed that these would not be effective, since pedestrians decide what the car will do before they would ever notice such signals.
Voice interactions
In her next research project, conducted with the Toyota Research Institute, Ju's team ran a study called "Is Now a Good Time?" that examined the responses of 60 drivers to 3,000 in-car voice interactions. The study aimed to predict the best times to initiate in-car voice interactions without interrupting the driver.
In-car voice interaction systems are on an evolutionary path: they are smart, but they need to get smarter to serve as an ideal human-machine interface. For now, it is the driver who must initiate a conversation with the car's system. The next step is to let the car give the driver important updates and information without being prompted. But the timing of such interactions is critical, as poor timing can be lethal.
The study also revealed some eye-opening observations: people do not want to talk to a voice agent after they have misunderstood a direction, missed a light, or lost their way. Likewise, voice interactions should be avoided while the car is approaching a light or stop sign, but waiting for a light to turn green can be a good moment to interact (a rough sketch of such a timing policy follows below).
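To illustrate how findings like these might translate into a timing policy, here is a minimal sketch. The `DrivingState` fields, thresholds, and rules are hypothetical stand-ins, not the actual model from Ju's study.

```python
from dataclasses import dataclass

@dataclass
class DrivingState:
    """Hypothetical snapshot of driving context (illustrative only)."""
    approaching_intersection: bool  # nearing a light or stop sign
    stopped_at_red_light: bool      # waiting for a light to turn green
    recent_navigation_error: bool   # missed a turn or light, lost the route
    speed_mps: float                # current speed in meters per second

def is_good_time_to_speak(state: DrivingState) -> bool:
    """Crude rule-of-thumb policy loosely inspired by the study's observations."""
    # Drivers don't want to talk right after a navigation mistake.
    if state.recent_navigation_error:
        return False
    # Avoid the high-workload moments while approaching a light or stop sign.
    if state.approaching_intersection and not state.stopped_at_red_light:
        return False
    # Waiting at a red light is a comparatively good moment to interact.
    if state.stopped_at_red_light:
        return True
    # Otherwise, default to speaking only when effectively stationary.
    return state.speed_mps < 1.0

# Example: a driver idling at a red light with no recent mistakes
print(is_good_time_to_speak(DrivingState(True, True, False, 0.0)))  # True
```

A production system would of course learn such a policy from driver responses rather than hard-code it, but the sketch shows the shape of the decision the study was probing.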
Insights like these will play a key role in improving the experience of voice-interaction systems. Ju explains, "There is an important role academia plays in a very applied space. There are day-to-day implications for the work that we're doing. It's an exciting space to be in, and it's easy to stay motivated."
Adding in robotics
Ju is now leading a different project observing human interactions with robotic chairs. It explores other important questions, such as how threatening people find a chair that moves with biological motion.
A robotic chair has some remarkable abilities: it can offer a person a seat, follow people, or lead people to follow it. Ju's team mounted the chairs on robotic vacuums to control their movement. When a chair needed someone to move out of its way, it could shift back and forth, move side to side, or simply pause.
While the first gesture was perceived as aggressive, the others didn't register any success. So the team gave the chair a virtual gaze.
“The performed gaze makes them feel acknowledged and that they can move. Everyone has the same reaction. From a purely logical perspective, it doesn’t make sense,” she explains. “The chair doesn’t really see you. But it works.”
She concludes by saying, “We’re going through the world with a lot of assumed rules. They’re so assumed, we couldn’t even articulate them, but in the moment, we follow them. We have to teach machines these rules, and also how to change under different circumstances and environments.”