Bloomberg’s Mark Gurman wrote Wednesday that Apple has been quietly developing its own generative AI models, and may even try to compete with OpenAI and Google in the chatbot wars. Gurman’s (unnamed) sources say Apple management is still deciding how the company might publicly release the technology, which has a tendency to invent facts and, at times, invade privacy.
Apple has reportedly built a new framework, referred to internally as “Ajax,” for developing large language models. Using that framework, Apple has already created a chatbot that some employees are calling “Apple GPT,” a term that became popular on Twitter Wednesday. Gurman reports that the LLM chatbot project has become “a major effort” within Apple, already involving collaboration between several teams.
All of this was inevitable.
The reality is that Big Tech companies, including Apple, are beholden to the beliefs and whims of investors, and the investment community is all in on generative AI. Apple’s stock jumped 2.3% (a quick gain of $60 billion in market cap) after the Bloomberg story appeared, while Microsoft, which holds a major stake in OpenAI, saw its shares slip 1%. It’s more evidence of the gravity of the November 2022 release of OpenAI’s ChatGPT, which kicked off the current breathless excitement about LLMs and other types of generative AI.
Apple’s actions aren’t entirely motivated by Wall Street. Large language models, whether or not you believe they’re capable of true intelligence, will likely prove meaningfully helpful in both work and personal settings once they become better trained and more predictable. The potential impact of generative AI on Apple’s core hardware business was less apparent than its impact on some of its peers’ core businesses, such as Google’s Search (generative AI could bring conversational, question-based search) or Microsoft’s productivity apps (AI could generate emails or document summaries). That may have given Apple executives reason to stay on the sidelines of the growing AI arms race.
But Gurman’s sources suggest that view has changed at Apple. Executives have “grown concerned” that the company might fail to recognize a big shift in the way consumers want to interact with their personal tech devices, like iPhones. Apple is famously focused on the user interface, the technology that mediates between the human and the silicon. In the past, that’s been a graphical user interface (the touch screen), but in the future it might be a highly personalized AI that runs on a chip on the user’s device. If Apple remains locked into an old UX mindset, the $320 billion in revenue it makes from hardware sales could be put in jeopardy.
And Apple has some real advantages over other AI developers. For one thing, it has a vast and ready distribution network through its billions of iPhones and apps (Messages, notably) in use around the world, points out Perplexity CEO Aravind Srinivas. So Apple could quickly serve new AI-powered apps or features to its users. And Apple has for years been setting the stage for running AI models on its devices. Its work designing chips purpose-built for iDevices has yielded impressive results, and some of its chips are powerful enough to run large AI models on-device. “Their M1, M2, and M2 Ultra chips are getting more and more powerful,” Srinivas says, “and it’s rumored iPhones will have those chips soon too.”
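To make the on-device point concrete, here is a minimal Swift sketch of loading a compiled Core ML model and running a single prediction on Apple silicon. The model file name (“LanguageModel”), the input feature name (“tokens”), and the input shape are placeholders for illustration only; nothing here reflects Apple’s unannounced “Ajax” work.

```swift
import Foundation
import CoreML

// Minimal sketch: load a compiled Core ML model bundled with an app and run
// one forward pass entirely on-device. Model and feature names are placeholders.
func runOnDeviceModel() throws {
    // A compiled .mlmodelc bundle shipped inside the app (hypothetical name).
    guard let modelURL = Bundle.main.url(forResource: "LanguageModel",
                                         withExtension: "mlmodelc") else {
        fatalError("Model not bundled with the app")
    }

    // Let Core ML schedule work on the Neural Engine or GPU when available,
    // falling back to the CPU otherwise.
    let config = MLModelConfiguration()
    config.computeUnits = .all

    let model = try MLModel(contentsOf: modelURL, configuration: config)

    // Wrap input token IDs in the feature format the model expects.
    // The feature name "tokens" and the 1x128 shape are assumptions.
    let tokens = try MLMultiArray(shape: [1, 128], dataType: .int32)
    let input = try MLDictionaryFeatureProvider(
        dictionary: ["tokens": MLFeatureValue(multiArray: tokens)]
    )

    // A single prediction; no data leaves the device.
    let output = try model.prediction(from: input)
    print(output.featureNames)
}
```

The relevant design point is that the heavy lifting happens on the phone’s own silicon, which is exactly the advantage Srinivas is describing.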
Apple is likely to continue lagging behind its peers in training AI models, Srinivas says, but the company’s approach to user data privacy could soften that perception. “People just blindly believe if Apple says they are the most private AI,” he says. “They have built a brand around it, so even if the models may not match the capabilities of OpenAI, Meta, and Google, they might still try to go hard on privacy.”
Generative AI’s impact on one Apple product was always clear: Siri. Consumers have already begun drawing direct comparisons between ChatGPT (good AI) and Siri (bad AI). A large language model could put a foundation of general knowledge underneath the more specific informational and task-oriented data that Siri typically calls up, giving Siri the kind of basic understanding of how the world works that ChatGPT displays, and therefore a better framework for understanding and helping users. If Apple is planning to offer an “Apple GPT” chatbot, it’s not a big leap to put that model underneath Siri and give it a human voice.
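If a language model does end up underneath Siri, one plausible shape is a thin routing layer: today’s task-oriented requests keep their dedicated handlers, while open-ended questions fall through to a general model. The Swift sketch below is speculative; the LanguageModel protocol and the request types are assumptions for illustration, not any published Apple API.

```swift
import Foundation

// Hypothetical stand-in for whatever on-device LLM interface might exist;
// this protocol is an assumption, not a real API.
protocol LanguageModel {
    func complete(prompt: String) -> String
}

// The kinds of requests an assistant already handles, plus a catch-all
// for open-ended questions.
enum AssistantRequest {
    case setTimer(minutes: Int)
    case sendMessage(to: String, body: String)
    case generalQuestion(String)
}

struct Assistant {
    let llm: LanguageModel

    func handle(_ request: AssistantRequest) -> String {
        switch request {
        case .setTimer(let minutes):
            // Task-oriented requests keep their deterministic code paths.
            return "Timer set for \(minutes) minutes."
        case .sendMessage(let recipient, let body):
            return "Sending \"\(body)\" to \(recipient)."
        case .generalQuestion(let question):
            // Open-ended questions fall through to the model's general
            // knowledge, the gap ChatGPT exposed in Siri.
            return llm.complete(prompt: question)
        }
    }
}
```

The point of the sketch is only that an LLM would sit under, rather than replace, the existing intent system.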