With iOS 10, scheduled for full release this fall, Siri’s voice becomes the last of the four components to be transformed by machine learning. Again, a deep neural network has replaced a previously licensed implementation. Essentially, Siri’s remarks come from a database of recordings collected in a voice center; each sentence is a stitched-together patchwork of those chunks. Machine learning, says Gruber, smooths them out and makes Siri sound more like an actual person.
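The "stitched-together patchwork" described above is concatenative synthesis: pre-recorded chunks from a voice database are joined into a sentence, and the seams have to be smoothed. A toy sketch of that idea, with made-up sample data and a simple linear crossfade standing in for the smoothing (Apple's actual system uses a deep neural network for selection and smoothing, not this hand-tuned blend):

```python
# Toy illustration of concatenative speech synthesis. The "voice database"
# here is just short lists of fake audio samples; real systems store
# recorded speech units. All names and data below are hypothetical.

def crossfade(a, b, overlap):
    """Join two sample chunks, linearly blending `overlap` samples at the seam."""
    if overlap == 0:
        return a + b
    head, tail = a[:-overlap], a[-overlap:]
    lead, rest = b[:overlap], b[overlap:]
    # Fade out the end of `a` while fading in the start of `b`.
    blended = [
        t * (1 - i / overlap) + l * (i / overlap)
        for i, (t, l) in enumerate(zip(tail, lead))
    ]
    return head + blended + rest

def stitch(chunks, overlap=2):
    """Concatenate recorded chunks into one utterance, smoothing each join."""
    out = chunks[0]
    for chunk in chunks[1:]:
        out = crossfade(out, chunk, overlap)
    return out

# Three "recordings" at mismatched levels; blending softens the joins.
units = [[0.0, 0.1, 0.2, 0.3], [0.9, 0.8, 0.7, 0.6], [0.1, 0.2, 0.3, 0.4]]
print(stitch(units, overlap=2))
```

The hand-written crossfade is exactly the kind of brittle heuristic the article says machine learning replaces: a trained model can learn where to cut and how to blend so the result sounds like one continuous speaker.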

Acero does a demo — first the familiar Siri voice, with the robotic elements that we’ve all become accustomed to. Then the new one, which says, “Hi, what can I do for you?” with a sultry fluency. What made the difference? “Deep learning, baby,” he says.


Fascinating read about how Apple uses machine learning and artificial intelligence across so many parts of its ecosystem.
