With iOS 10, scheduled for full release this fall, Siri’s voice becomes the last of the four components to be transformed by machine learning. Again, a deep neural network has replaced a previously licensed implementation. Essentially, Siri’s remarks come from a database of recordings collected in a voice center; each sentence is a stitched-together patchwork of those chunks. Machine learning, says Gruber, smooths them out and makes Siri sound more like an actual person.
Acero does a demo — first the familiar Siri voice, with the robotic elements that we’ve all become accustomed to. Then the new one, which says, “Hi, what can I do for you?” with a sultry fluency. What made the difference? “Deep learning, baby,” he says.
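The "stitched-together patchwork" Levy describes is concatenative unit selection: pick a recorded audio chunk per target sound, then smooth the seams. A toy sketch, assuming a hypothetical per-phoneme unit database and using a simple linear crossfade as a stand-in for the smoothing step (Apple's actual system applies deep learning here, not a crossfade):

```python
def synthesize(targets, unit_db, crossfade=2):
    """Concatenate per-phoneme sample chunks, crossfading at each join.

    targets: list of phoneme labels to render, in order.
    unit_db: hypothetical mapping of phoneme -> recorded samples (floats).
    crossfade: number of samples to blend at each seam.
    """
    out = []
    for phoneme in targets:
        chunk = list(unit_db[phoneme])  # best recorded unit for this phoneme
        if out and crossfade:
            n = min(crossfade, len(out), len(chunk))
            # Linear crossfade: blend the tail of `out` into the head of `chunk`
            # so the join is less audible -- a crude proxy for ML smoothing.
            for i in range(n):
                w = (i + 1) / (n + 1)
                out[-n + i] = out[-n + i] * (1 - w) + chunk[i] * w
            chunk = chunk[n:]
        out.extend(chunk)
    return out

# Illustrative mini unit database: phoneme -> recorded samples.
unit_db = {"h": [0.0, 0.2, 0.4], "ai": [0.8, 0.6, 0.4, 0.2]}
audio = synthesize(["h", "ai"], unit_db)
```

With a 2-sample crossfade, the joined output is 2 samples shorter than the raw concatenation, since the overlapping region is blended rather than duplicated.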
Fascinating read about how Apple uses machine learning and artificial intelligence in so many parts of its ecosystem.