AI Weekly: If Duplex is the future, then Google Assistant, Alexa, and Cortana should start working together


Last week, Google gave the world more information about Duplex, its experimental conversational AI that makes phone calls to schedule appointments or make restaurant reservations on your behalf. With initial trials expected to begin in the coming weeks, the company shared additional details about how it will navigate communication between a Google Assistant user and businesses.

We’ll soon find out just how effective Google Assistant is at speaking for users, but should Duplex succeed, the logical next question is what will happen when AI that carries out tasks for people goes mainstream and must communicate not just with people, but with other bots?

If Duplex is a harbinger of things to come, bot-to-bot communication will soon be critical, which is why the makers of assistants like Siri, Alexa, and Google Assistant need to work together.

AI assistants have been around for a while, and their growing adoption can be attributed in large part to declining word error rates and an increased ability to understand human language. But collaboration between the major players will be essential if the next generation of assistants is to develop helpful use cases like Duplex and enjoy widespread adoption.

The most significant example of cooperation between assistant makers so far is the Alexa-Cortana partnership announced in 2017. Amazon and Microsoft plan to provide Cortana access on Echo devices and Alexa access on Windows 10 PCs because, the companies said, we will come to live in a multi-assistant world.

This feature, which we got to see in action for the first time in May, essentially makes the most widely available assistant in homes (Alexa) work with the most widely available assistant on personal computers (Cortana). That is more interaction than any other pair of assistants offers today (Alexa can even tell jokes about Cortana), but the partnership only changes where you can access each assistant. It does not appear to include the sort of intercommunication that would make living in a multi-assistant world a viable option.

A model for how this could work is the Open Neural Network Exchange (ONNX) format, made available last year to provide interoperability among Microsoft's Cognitive Toolkit, Facebook's Caffe2, and PyTorch. ONNX brings together an ecosystem of engines and frameworks for training and deploying AI.
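
As a rough sketch of what that interoperability looks like in practice, the snippet below exports a toy PyTorch model to the shared ONNX format and then runs it in a separate engine (ONNX Runtime). The model, file name, and shapes are illustrative assumptions, not anything the article describes.

```python
# A minimal sketch of the interoperability ONNX provides, assuming PyTorch
# and ONNX Runtime are installed. The toy model, file name, and shapes are
# illustrative only.
import numpy as np
import torch
import torch.nn as nn

# A tiny network standing in for any trained model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Export to the shared ONNX format; the dummy input fixes the graph's shapes.
dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, "toy_model.onnx")

# A different engine can now load and run the exact same model.
import onnxruntime as ort

session = ort.InferenceSession("toy_model.onnx")
input_name = session.get_inputs()[0].name
result = session.run(None, {input_name: np.random.randn(1, 4).astype(np.float32)})
print(result[0].shape)  # (1, 2)
```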

If the makers of assistants also agreed upon a common framework for communication, it could encourage users to rely on these assistants for an increasing number of tasks. This would open up opportunities for the makers of Siri, Alexa, and the like, but it would also boost the entire conversational computing space. People could come to entrust more tasks to an uber assistant like Alexa or make room in their lives for specialized assistants designed to help them do their job more effectively or accomplish other goals.
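
To make the idea concrete, here is a purely hypothetical sketch of what a shared task-delegation message between assistants might look like. No such standard exists today; the schema, field names, and intent values are all invented for illustration.

```python
# A purely hypothetical sketch of a shared inter-assistant message format.
# No such standard exists; every field name below is invented for illustration.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AssistantMessage:
    sender: str     # which assistant is asking, e.g. "alexa"
    recipient: str  # which assistant should act, e.g. "google-assistant"
    intent: str     # the task being delegated
    slots: dict = field(default_factory=dict)  # structured task parameters

# Hypothetically, Alexa hands a reservation task to Google Assistant,
# which could carry it out with something like Duplex.
request = AssistantMessage(
    sender="alexa",
    recipient="google-assistant",
    intent="book_restaurant",
    slots={"party_size": 2, "time": "19:00", "restaurant": "Example Bistro"},
)

# Any assistant that speaks the shared schema can parse this payload.
print(json.dumps(asdict(request), indent=2))
```

The specific fields matter less than the principle: any assistant that speaks an agreed-upon schema could hand a task to any other, much as ONNX lets one framework hand a model to another.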

That would be a very different world from the one we live in today, where people are still most likely to use AI assistants to play music, set a timer, or check the weather.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers, and guest post submissions to Cosette Jarrett.

Thanks for reading,

Khari Johnson
AI Staff Writer

