Creating a Computer Voice That People Like
When computers speak, how human should they sound?
This was a question that a team of six IBM linguists, engineers and marketers faced in 2009, when they began designing a function that turned text into speech for Watson, the company’s “Jeopardy!”-playing artificial intelligence program.
Eighteen months later, a carefully crafted voice — sounding not quite human but also not quite like HAL 9000 from the movie “2001: A Space Odyssey” — expressed Watson’s synthetic character in a highly publicized match in which the program defeated two of the best human “Jeopardy!” players.
The challenge of creating a computer “personality” is now one that a growing number of software designers are grappling with, as computers become portable and users whose hands and eyes are busy increasingly rely on voice interaction.
Machines are listening, understanding and speaking, and not just computers and smartphones. Voices have been added to a wide range of everyday objects like cars and toys, as well as household information “appliances” like the home-companion robots Pepper and Jibo, and Alexa, the voice of the Amazon Echo speaker device.
A new design science is emerging in the pursuit of building what are called “conversational agents,” software programs that understand natural language and speech and can respond to human voice commands.
However, the creation of such systems, led by researchers in a field known as human-computer interaction design, is still as much an art as it is a science.
It is not yet possible to create a computerized voice that is indistinguishable from a human one for anything longer than short phrases that might be used for weather forecasts or communicating driving directions.
Most software designers acknowledge that they have yet to cross the “uncanny valley,” in which voices that are almost human-sounding strike listeners as disturbing or jarring. The phrase was coined by the Japanese roboticist Masahiro Mori in 1970. He observed that as graphical animations became more humanlike, they reached a point where they seemed creepy and unsettling before improving enough to become indistinguishable from video of real humans.
The same is true for speech.
“Jarring is the way I would put it,” said Brian Langner, senior speech scientist at ToyTalk, a technology firm in San Francisco that creates digital speech for things like the Barbie doll. “When the machine gets some of those things correct, people tend to expect that it will get everything correct.”
Beyond correct pronunciation, there is the even larger challenge of correctly placing human qualities like inflection and emotion into speech. Linguists call this “prosody,” the ability to add correct stress, intonation or sentiment to spoken language.
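In practice, developers often convey prosody to a speech synthesizer through markup rather than plain text. The sketch below is illustrative only, assuming a text-to-speech engine that accepts the W3C Speech Synthesis Markup Language (SSML); the tag values and example sentences are hypothetical, not details of Watson’s or ToyTalk’s systems.

```python
# A minimal, illustrative sketch: annotating text with W3C SSML prosody
# markup before handing it to a hypothetical text-to-speech engine.
# The rate/pitch values and example sentences are assumptions for
# illustration, not details reported in the article.

def with_prosody(text: str, rate: str = "medium", pitch: str = "+0%") -> str:
    """Wrap plain text in SSML <prosody> tags so a compatible TTS
    engine can vary speaking rate and pitch."""
    return (
        "<speak>"
        f'<prosody rate="{rate}" pitch="{pitch}">{text}</prosody>'
        "</speak>"
    )

# Stress and intonation change how the same words come across when spoken.
emphatic = (
    "<speak>"
    'I said <emphasis level="strong">turn left</emphasis>, '
    '<prosody rate="slow" pitch="-10%">not right.</prosody>'
    "</speak>"
)

print(with_prosody("Partly cloudy with a high of 60 degrees.", rate="fast"))
print(emphatic)
```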
Today, even with all the progress, artificial intelligence cannot yet fully reproduce the rich emotion carried in human speech. The first experimental results, drawn from machine-learning algorithms trained on huge databases of emotion-laden speech, are only now becoming available to speech scientists.