Apple’s new Vision Pro virtual reality headset was on display at Apple’s Worldwide Developers Conference (WWDC) at the Apple Park campus in Cupertino, California on June 5, 2023.
Josh Edelson | AFP | Getty Images
For years, Apple has avoided using the acronym “AI” when talking about its products, but that’s no longer the case.
The boom in generative artificial intelligence, kicked off by OpenAI's launch of ChatGPT in late 2022, has been the biggest story in tech, lifting chipmaker Nvidia to a market capitalization of $3 trillion and dramatically shifting corporate priorities. Microsoft, Google and Amazon are racing to incorporate the technology into their core products and services.
Investors and customers are now eager to see what the iPhone maker has in store.
The new AI features are expected to be unveiled at Apple's Worldwide Developers Conference (WWDC) on Monday at the company's campus in Cupertino, California. Apple CEO Tim Cook has hinted at "big plans," a change of approach for a company that doesn't like to talk about products before they're released.
WWDC isn't usually a big event for investors. On the first day, Apple unveils the annual updates to its iOS, iPadOS, watchOS and macOS software in a roughly two-hour, prerecorded keynote hosted by Cook, which this year will be screened at Apple's headquarters. App developers then attend parties and virtual workshops throughout the week to learn about Apple's new software.
The event gives Apple fans a sneak peek at the software that will ship on the iPhone and lets developers get to work on app updates; new hardware, if there is any, is rarely the focus.
But this year, everyone’s attention will be on the tech industry’s most talked-about acronym.
With more than 1 billion iPhones in use, Wall Street is eager to hear what AI features will make the iPhone more competitive against Android rivals and how the company can justify investing in developing its own chips.
Investors have rewarded companies that demonstrate a clear AI strategy and vision. Shares of Nvidia, a leading maker of AI processors, have tripled in the past year. Microsoft, which has been actively incorporating OpenAI into its products, is up 28% in the past year. Apple is up just 9% over the same period and is trailing the other two in market capitalization.
“This is the most important event for Cook and Cupertino in a decade,” Wedbush analyst Dan Ives told CNBC. “AI strategy is the missing piece of Apple’s growth puzzle, and this event needs to wow the crowd, not just be a shrug.”
On stage will be executives including software chief Craig Federighi, who is expected to explain how Apple will actually use AI: whether models run locally or on large cloud clusters, what gets built into the operating system versus shipped as an app, and more.
Privacy will also be a key issue, with attendees eager to know how Apple can deploy data-intensive technology without invading user privacy, something that has been a central part of the company’s marketing for more than five years.
“At WWDC, we expect Apple to unveil its long-term vision for implementing generative AI across its diverse ecosystem of personal devices,” DA Davidson analyst Gil Luria wrote in a note this week. “We believe the impact of generative AI on Apple’s business will be one of the largest in all of technology. Unlike many of the AI innovations that impact developers and enterprises, Apple has a clear opportunity to reach billions of consumer devices with generative AI capabilities.”
Siri Upgrade
Last month, OpenAI launched a voice mode for ChatGPT powered by its new GPT-4o model.
In a short demo, an OpenAI researcher held up an iPhone and spoke directly to the bot in the ChatGPT app. The conversation was snappy, the bot offered advice, and the voice sounded human. Further demos at the live event showed the bot singing, teaching trigonometry, translating and telling jokes.
To Apple users and critics, OpenAI's demo looked like a preview of what Siri could become. Apple's voice assistant debuted in 2011 and has since gained a reputation for being ineffective: it's inflexible and can answer only a small set of well-defined questions, in part because it's based on older machine learning techniques.
Apple may announce a partnership with OpenAI to upgrade Siri next week, and it has also discussed licensing chatbot technology from other companies, including Google and Cohere, The New York Times reported.
Apple declined to comment on a possible OpenAI partnership.
One possibility is that Apple's new Siri won't compete directly with full-featured chatbots, but will instead improve on its current capabilities and hand off questions only a chatbot can answer to a partner. That would be closer to how Apple's Spotlight search and Siri work today: Apple's software tries to answer the query itself, and when it can't, it turns to Google, an arrangement that is part of a search deal worth about $18 billion a year to Apple.
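As a rough illustration of that local-first design, here is a minimal Python sketch of an assistant that tries an on-device answerer first and falls back to an external chatbot only for open-ended queries. The function names, intent list and keyword matching are hypothetical stand-ins, not Apple's actual implementation.

```python
# Hypothetical local-first assistant: handle simple intents on device,
# defer open-ended questions to an external chatbot partner.
from typing import Callable, Optional

# Toy keyword table; a real system would use an on-device model.
LOCAL_INTENTS = {
    "set_timer": ("timer",),
    "weather": ("weather", "forecast"),
    "play_music": ("play", "music"),
}

def classify_intent(query: str) -> Optional[str]:
    """Return a local intent if any of its keywords appear in the query."""
    q = query.lower()
    for intent, keywords in LOCAL_INTENTS.items():
        if any(k in q for k in keywords):
            return intent
    return None

def answer(query: str, fallback: Callable[[str], str]) -> str:
    intent = classify_intent(query)
    if intent is not None:
        return f"[handled on device: {intent}]"
    # No confident local answer: hand off to the partner chatbot.
    return fallback(query)

if __name__ == "__main__":
    mock_chatbot = lambda q: f"[cloud chatbot answers: {q!r}]"
    print(answer("please set a timer for 10 minutes", mock_chatbot))
    print(answer("explain quantum entanglement simply", mock_chatbot))
```

The design keeps routine requests on the device and sends only what the device can't handle to the cloud, which is consistent with the privacy posture described below.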
Apple may be reluctant to partner with OpenAI or go all-in on chatbots, in part because of the embarrassing headlines it could face if a chatbot malfunctions, and because doing so could undermine the company's emphasis on user privacy and personal control over data.
“Data security will be a key strength for the company and we expect the company to spend time talking about its privacy efforts at WWDC,” Citi analyst Atif Malik said in a recent note.
OpenAI's technology is trained on scraped web data, and ChatGPT's user interactions are used to improve the model itself, practices that may clash with some of Apple's privacy principles.
Large language models like OpenAI's still struggle with inaccuracies and "hallucinations," as when Google's search AI said last month that President Barack Obama was the first Muslim president. And OpenAI CEO Sam Altman recently found himself at the center of a messy public debate over deepfakes and consent when he denied actress Scarlett Johansson's accusation that OpenAI's voice mode had copied her voice. It's exactly the kind of controversy Apple executives want to avoid.
Efficient or large scale?
Craig Federighi, Apple’s senior vice president of software engineering, speaks before the start of the Apple Worldwide Developers Conference at the company’s headquarters in Cupertino, California, on June 5, 2023. Apple CEO Tim Cook kicked off the company’s annual developers conference, WWDC23.
Justin Sullivan | Getty Images News | Getty Images
Outside of Apple, AI has come to rely on massive server farms combining powerful Nvidia processors with terabytes of memory to do the math.
In contrast, Apple wants to run AI features on its battery-powered iPhones, iPads and Macs, and Cook emphasized that Apple’s own chips are good at running AI models.
“We believe in the transformative power and potential of AI, and believe our unique and seamless integration of hardware, software and services, breakthrough Apple Silicon with our industry-leading Neural Engine, and unwavering focus on privacy give us distinct advantages in this new era,” Cook told investors during an earnings call in May.
“We expect Apple’s presentation at the WWDC keynote will be heavily focused on features and on-device capabilities, as well as the GenAI models running on-device to enable those capabilities,” JPMorgan analyst Samik Chatterjee wrote in a note this month.
In April, Apple published research on "efficient language models," AI models small enough to run on a phone; Microsoft has pursued the same concept. One of Apple's "OpenELM" models has 1.1 billion parameters, or weights, far smaller than OpenAI's 2020 GPT-3 model at 175 billion parameters, and smaller even than the 70 billion-parameter version of Meta's Llama, one of the most widely used language models.
In the paper, Apple researchers benchmarked the model on a MacBook Pro with Apple's M2 Max chip, showing that these efficient models don't need a cloud connection. That improves response speed and adds a layer of privacy, since sensitive questions can be answered on the device itself rather than being sent to Apple's servers.
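The OpenELM weights are publicly available, so the on-device idea is easy to try. Below is a minimal Python sketch using the Hugging Face transformers library to run the 1.1 billion-parameter model locally. The model ID is real, and per Apple's model card OpenELM reuses the Llama-2 tokenizer; treat the exact arguments as a plausible recipe rather than a vetted one.

```python
# Minimal sketch: run Apple's OpenELM-1.1B locally with Hugging Face
# transformers. Note: the Llama-2 tokenizer is gated and requires
# accepting Meta's license on Hugging Face before it will download.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-1_1B",
    trust_remote_code=True,  # OpenELM ships custom modeling code
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "On-device language models are useful because"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

At roughly a billion parameters, the whole model fits comfortably in a laptop's or phone's memory, which is the point of the "efficient" approach.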
Features built into Apple's software could include summaries of unread text messages, generation of new emoji images, code completion in the company's Xcode development software, and drafted email replies, according to Bloomberg.
Apple may also equip its data centers with M2 Ultra chips to handle AI queries that require more processing power, Bloomberg reported.
Green Bubbles and Vision Pro
A customer uses Apple’s Vision Pro headset at the Apple Fifth Avenue store in Manhattan, New York City, USA on February 2, 2024.
Brendan McDiarmid | Reuters
WWDC isn’t strictly about AI.
The company has more than 2.2 billion devices in use, and customers are demanding software improvements and new apps.
One possible upgrade is Apple's adoption of RCS, an improvement over the older text messaging system known as SMS. Apple's Messages app routes texts between iPhones through its own iMessage system, which shows conversations in blue bubbles. Texts sent from an iPhone to an Android phone show up in green bubbles, and many features, such as typing indicators, aren't available.
Google led the development of RCS, which adds features such as encryption to text messages. Apple has confirmed it will support RCS alongside iMessage, and the debut of iOS 18 would be a good time to show off the results.
The conference also marks the one-year anniversary of the unveiling of Apple's virtual and augmented reality headset, the Vision Pro, which launched in the U.S. in February. Apple may announce its expansion to more countries, including China and the U.K.
In announcing WWDC, Apple said Vision Pro would be a big focus. The headset is still running the first version of its operating system, with core features such as the Persona videoconferencing avatars still in beta.
For Vision Pro owners, Apple will offer some of the event's sessions virtually in 3D environments.