
Where is Apple in the AI race?

I’m impressed by Microsoft’s Copilot for Office 365 announcement, and it got me thinking. Using AI tools is a real-world competitive advantage, yet there is no Copilot in Xcode or iWork. I think Apple is uniquely positioned to one-up the tech industry, but I’m not sure if they’re too distracted by the VR goggles the rumor mill is focused on.

Earlier today, I posted this on Mastodon:

With Copilot expanding from code to office products, it feels like Microsoft caught Google and Apple asleep at the wheel. Where is Xcode’s copilot equivalent? Where are the deep integrations of Siri into apps other than Photos?

Maybe Apple will surprise us all by adding a large language model system-wide to macOS, available as an API to apps, but I think they’re too distracted building VR goggles.

I want to expand on this. Microsoft, which owns GitHub, has been building its Copilot brand for a while now, and a few days ago, it expanded Copilot to Office 365. Until now, Copilot was a developer product integrated into editors like VS Code, helping developers write code more quickly by partially writing it for them. Microsoft seems to have caught the industry off guard by turning Copilot into a brand and adding it to all Office 365 products, just weeks after adding an AI chatbot to Bing that seems to be improving rapidly. Copilot in Office will analyze your spreadsheets for key insights, build presentations, and write documents for you.[1] Using Copilot will put you and your organization at an advantage, and this holds true for AI in general. So, where is Apple in all of this?

Apple’s AI products have one property the others lack: they run on the device you’re interacting with. Even Siri processes your voice on-device, unlike competing systems.[2] [3] This on-device processing deserves far more attention than it gets, as it is a game-changer on multiple levels: privacy, availability, speed, and cost.

To be fair, VALL-E is not yet publicly available. But hacked recordings of voice-assistant users won’t magically disappear once VALL-E or a similar system becomes readily available.

On-device processing is generally faster, provided your computer, tablet, or smartphone has enough power, and it obviously works without an internet connection. While many of us are constantly connected, mobile data is still expensive in most parts of the world and often fairly slow. For providers of AI tools, the cost structure is also very different: every task an AI tool sends to “the cloud” costs its provider money. Running the servers for an AI service can be so expensive that most AI tools are only available to paying subscribers.
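Developers can already see this trade-off in Apple’s current APIs. Here is a minimal sketch using the Speech framework that insists transcription never leaves the device. The `fileURL` parameter and the bare-bones error handling are my own simplifications, but `requiresOnDeviceRecognition` is a real, shipping flag and captures the idea.

```swift
import Speech

// Transcribe an audio file without sending a single byte to a server.
// Sketch only: real code would keep the recognition task around to cancel it.
func transcribeLocally(fileURL: URL) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(),
              recognizer.supportsOnDeviceRecognition else {
            print("On-device speech recognition is not available here")
            return
        }

        let request = SFSpeechURLRecognitionRequest(url: fileURL)
        // The key line: refuse to fall back to Apple's servers.
        request.requiresOnDeviceRecognition = true

        _ = recognizer.recognitionTask(with: request) { result, error in
            if let result = result, result.isFinal {
                print(result.bestTranscription.formattedString)
            } else if let error = error {
                print("Recognition failed: \(error.localizedDescription)")
            }
        }
    }
}
```

Nothing in that sketch needs a network connection or a paid API key, which is exactly the property the rest of this post is about.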

In light of Apple’s approach to on-device computation, what should we expect from Apple on the large language model (chatbot AI) front? Apple shocked its competition in recent years by releasing its M-series Apple Silicon chips. These energy-efficient yet powerful chips might enable Apple to one-up the tech industry once more: on-device LLMs as a free-to-use API at the OS level. In plain English: Apple might add a language model to macOS that is available for free to all apps running on your Mac. That means any app running on a Mac would instantly be able to offer AI tools to its users without additional cost to the app developer.
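To make that speculation a bit more tangible, here is a purely hypothetical sketch of what such an OS-level API could feel like to a third-party app. None of these types exist in any Apple SDK; `SystemLanguageModel`, `complete(_:maxTokens:)`, and `MeetingNotesSummarizer` are invented for illustration only.

```swift
import Foundation

// Entirely hypothetical: an OS-provided language model handed to apps for free.
protocol SystemLanguageModel {
    /// Imagined to run on the local Neural Engine: no network round-trip, no per-token bill.
    func complete(_ prompt: String, maxTokens: Int) async throws -> String
}

// A third-party app could then ship an "AI feature" in a few lines,
// without running its own GPU servers or charging a subscription to cover them.
struct MeetingNotesSummarizer {
    let model: SystemLanguageModel  // injected by the system, not by a cloud vendor

    func summarize(_ transcript: String) async throws -> String {
        try await model.complete(
            "Summarize these meeting notes in three bullet points:\n\(transcript)",
            maxTokens: 200
        )
    }
}
```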

This would be a game changer and something nobody could easily copy. Microsoft has very limited control over the hardware its products run on, and Google remains mostly a website. It would put the Mac in a league of its own as the first computing platform to democratize access to a new technology that will likely be as disruptive as the World Wide Web or the telephone.


  1. (link: https://youtu.be/hGb9UZ8DyDc text: The Microsoft 365 Copilot AI Event in Less than 3 Minutes target: _blank) published March 21, 2023 by Microsoft on youtube.com. ↩︎

  2. (link: https://www.theverge.com/2021/6/7/22522993/apple-siri-on-device-speech-recognition-no-internet-wwdc text: Apple’s Siri will finally work without an internet connection with on-device speech recognition target: _blank) published June 7, 2021 by James Vincent on theverge.com. ↩︎

  3. Other voice assistants, like Google’s, upload what you say and analyze it on their servers. And with modern AI tools such as VALL-E, just a few seconds of audio can be enough to synthesize your voice. Imagine a phone call in your voice to your grandparents, asking them to send money for an emergency.

    • (link: https://support.google.com/googlenest/answer/7072285?hl=en#zippy=%2Cwhat-information-does-google-collect-when-i-interact-with-google-assistant text: Data security and privacy on devices that work with Assistant target: _blank) on Google.com.
    • (link: https://arstechnica.com/information-technology/2023/01/microsofts-new-ai-can-simulate-anyones-voice-with-3-seconds-of-audio/ text: Microsoft’s new AI can simulate anyone’s voice with 3 seconds of audio target: _blank) published January 9, 2023 by Benj Edwards on arstechnica.com.
    ↩︎