Apple and AI: time for a strategic change?
Apple is doubling down on its development of Artificial Intelligence (AI) technology, dropping heavy hints that it will provide clarification during its Worldwide Developers Conference (WWDC) in June. Apple’s senior vice president of worldwide marketing, Greg Joswiak, commented on X that the event would be “Absolutely Incredible” (the capitalization suggesting AI?). Could this herald an updated, enhanced version of Apple’s Siri digital assistant, or will the company pivot in a different direction - potentially partnering with another technology firm to provide more advanced AI/GenAI features?
Background
Most Apple users had their first experience of the company’s AI technology with the launch of Siri, the digital assistant first integrated into Apple hardware - the iPhone 4s - in 2011. A relatively early example of conversational AI, Siri and its peers use Natural Language Processing (NLP) to understand and synthesize human speech, enabling simple tasks to be automated: setting reminders and calendar entries, taking notes, executing simple service requests, etc. Siri was later joined by Google Now and Samsung’s S Voice on the Galaxy S3 in 2012, while Amazon released its Alexa assistant in 2014 - the same year that Microsoft launched Cortana.
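To make that pipeline a little more concrete, the sketch below uses Apple’s Speech framework to perform just the first step - turning recorded speech into text. It is a minimal illustration rather than a description of how Siri itself is built: the audio file URL and locale are placeholders, and a real assistant would go on to parse the transcript into an intent and act on it.

```swift
import Speech

// Minimal sketch: transcribe a recorded audio clip with Apple's Speech framework.
// The audio file URL is a placeholder; a full assistant would also map the
// transcript to an intent ("set a reminder", "add a calendar entry") and execute it.
func transcribe(audioFileURL: URL) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
              recognizer.isAvailable else { return }

        let request = SFSpeechURLRecognitionRequest(url: audioFileURL)
        _ = recognizer.recognitionTask(with: request) { result, _ in
            if let result = result, result.isFinal {
                // The recognized text the assistant would then interpret.
                print(result.bestTranscription.formattedString)
            }
        }
    }
}
```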
As a product of one of the major technology firms, Siri was the first of its kind: something Apple is generally not known for. “First mover advantage” works well for many companies entering a new market, but Apple has historically succeeded (where others have failed) by refining emerging product and service strategies into integrated hardware/software experiences on its own platform(s). While not the first manufacturer of mobile phones, laptops, PCs, tablets, earbuds, etc., Apple builds - then iterates on - its vision of a sector-leading product, often to strong commercial success.
Perhaps because of its early prominence and ubiquitous presence - available on every iPhone and Apple desktop/laptop - Siri receives far more attention than similar technologies. At the same time, it is clear that a large proportion of users feel that Siri’s development has languished. Since its release, Siri has been criticized for its lack of accuracy: not recognizing spoken words (including accented speech), mishearing commands, providing vague or unhelpful answers, or even activating randomly without being asked. The same criticisms can be levelled at other digital assistants such as Amazon’s Alexa. But a significant number of those who use both Siri and Alexa - myself included - can attest to Alexa’s ability to distinguish speech in noisy environments, its more consistent handling of varied pronunciation, etc. Some of the differences come down to architecture and privacy considerations, including Apple’s “walled garden” strategy. Siri works best when it is ‘trained’ on a user’s voice, and it integrates well with Apple’s ecosystem of devices. Siri also appears to work best with native Apple applications - phone/contacts, email, maps, etc. - but many commentators assert that it struggles to provide a consistent experience with third-party apps and platforms.
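For context on that third-party gap, Apple does give developers a route into Siri, most recently via the App Intents framework (the successor to the older SiriKit intents). The sketch below is a minimal, hypothetical example - the NoteStore type is an assumed stand-in for an app’s own storage - showing roughly how a note-taking app might expose a “create note” action that Siri and Shortcuts can invoke.

```swift
import AppIntents

// Assumed stand-in for a third-party app's own persistence layer.
struct NoteStore {
    static let shared = NoteStore()
    func add(text: String) async throws {
        print("Saved note: \(text)")
    }
}

// Exposes a "Create Note" action to Siri and Shortcuts via the App Intents framework.
struct CreateNoteIntent: AppIntent {
    static var title: LocalizedStringResource = "Create Note"

    @Parameter(title: "Note Text")
    var text: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        try await NoteStore.shared.add(text: text)   // assumed app-specific call
        return .result(dialog: "Saved your note.")
    }
}
```

Pairing an intent like this with an App Shortcut phrase is what makes it invokable by voice; whether Siri then handles the request as smoothly as it does for Apple’s own apps is, as noted above, where much of the criticism lies.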
From Conversational AI to Generative AI
Unless you’ve been living under a rock for the last year or so, it has been impossible to avoid the coverage of Generative AI (GenAI). While it’s unfair to compare the conversational AI features of digital assistants such as Siri or Alexa with the GenAI capabilities of, say, ChatGPT, the average consumer doesn’t necessarily differentiate between the two. GenAI is a natural progression from conversational AI, and tech firms have been quick to develop their own technology or invest in others’. The largest global cloud providers have set their strategies: Microsoft with Copilot and OpenAI (developer of ChatGPT), Google with its own Gemini technology, and Amazon Web Services (AWS) supporting multiple Large Language Models (LLMs) with its Bedrock product and Q, its GenAI-powered chatbot. Others are emerging: Anthropic’s recently announced partnership with AWS and consulting firm Accenture will provide both firms with AI capabilities via its Claude foundation models.
So, what about Apple? It seems there has been a decisive shift in priorities. Earlier this year, commentators alleged that resources from the firm’s unannounced Apple Car initiative, “Project Titan”, had been reassigned to AI work, with some suggesting that the car effort may have ‘distracted’ Apple from pursuing its own GenAI development. Bloomberg’s seasoned Apple-watcher, Mark Gurman, recently suggested that Apple is in talks with Google to license the use of Gemini to enhance its GenAI capabilities. To what extent this may or may not happen remains to be seen - including whether Gemini would replace Siri, or enhance it to bring additional features for more complex GenAI requirements.
Although it’s unlikely Apple would replace Siri’s functionality completely, it seems the company is more comfortable with a strategic adoption of Google’s technology than it would be with that of rivals such as Microsoft. It is worth pointing out, however, that Google faced an industry backlash when it was revealed that Gemini - its “largest and most capable AI model” - did not have the level of functionality that its launch video appeared to show. The much-touted real-time responses to human voice and video input were in fact prompted by text input and a selection of still images taken from that video. While Google included notes indicating that latency had been reduced and responses shortened, many felt the lack of transparency was detrimental to Gemini’s launch. Whatever strategy Apple decides on, it would not want its AI announcements to be tainted by similar scepticism.
The Quick Tech Take
In many ways, conversational AI assistants paved the way for the adoption of GenAI tools such as ChatGPT. They do different things, but there is some functional overlap. Many Apple users would probably be happy with a “Siri 2.0” - something that is more consistently responsive, less error-prone, and plays more nicely with devices and applications outside the Apple ecosystem. ‘Fixing’ Siri might not be in Apple’s plans though: a complete overhaul is probably necessary, especially if GenAI functionality is prioritized. Whether this involves enhancements to Siri that enable integration with, for example, Google Gemini, or a differentiation in functionality (GenAI ‘Air’ vs ‘Pro’ perhaps?), Apple needs to show that it can bridge the competitive gap it is facing. Apple is late to market with its GenAI offering, but given that this was also the case with its mobile phone and tablet technologies, it is arguably the one company that could end up redefining the sector and enabling the mass adoption of GenAI.