By Rebecca Gibian / March 24, 2019

Bringing A Scientific Ear to Robot Voices

A conversation with Kozminski University roboticist Aleksandra Przegalinska.

A girl talks with an AI robot by Canbot during the China International Robot Show 2018 (CIROS 2018) at National Exhibition and Convention Centre in Shanghai, China. (VCG)

According to Wired, most people find humanoid robots that look almost, but not quite, realistic to be unsettling. As a result, scientists have focused on refining robot faces and bodies, while paying far less attention to robot voices. Kozminski University roboticist Aleksandra Przegalinska, who is also a research fellow at MIT, is bringing a scientific ear to the growing world of chatbots and voice assistants such as Alexa. Wired interviewed Przegalinska about the future of robots and their voices.

Przegalinska said that replicating human intonation is particularly difficult, but also very important. She explained that she and her students spent a year talking to a chatbot they had built, and that her students were often mean to it. The relationship between a human and a chatbot is a strange, almost master-and-assistant dynamic: the chatbot remains unfailingly polite, which gives people license to be cruel to it with no real repercussions.

Przegalinska also discussed how chatbots try to mirror the person they are talking to. If that person said they liked sports, the chatbot would say it liked sports too. As a result, the chatbot would flip its positions frequently, simply reflecting the opinions of whoever it was speaking with.

During the interview, Przegalinska also noted that what throws people off about bot voices is that robots struggle to understand the tone and context of what you are saying.
