Apple plans to equip iPhones with large language models (LLMs), the technology that powers generative AI, a move that could transform the way we use smartphones, recent research suggests.
While Microsoft, Google and Samsung have all built generative AI services into their platforms in one way or another, Apple is the only big tech firm largely left behind in deploying AI since the launch of OpenAI’s ChatGPT. Apple has reportedly been working on an AI chatbot, which employees call “AppleGPT”, but the company has made no official announcement about it.
However, a research paper recently published by Apple clearly signals that the iPhone maker plans to join the generative artificial intelligence race by deploying LLMs on the iPhone.
LLMs (AI) in Apple iPhones
The paper, titled “LLM in a Flash”, offers a “solution to a current computational bottleneck,” its researchers write. Normally, the large language models that power AI services run in vast data centers with far more computing power than a smartphone or laptop. Apple’s paper, however, aims to run LLMs on devices with limited memory. Apple is thus working on ways to bring large AI models to smartphones, and perhaps computers, which could significantly transform the way we use them.
The research could make AI assistants on iPhones quicker and more responsive than cloud-based ones, and could even let them work offline. Answering queries on the user’s own device, without sending data to the cloud, is also likely to bring privacy benefits, a key differentiator for Apple in recent years.
“Our experiment is designed to optimize inference efficiency on personal devices,” the researchers write. “Our method involves constructing an inference cost model that harmonizes with the flash memory behavior, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks,” explained lead author Keivan Alizadeh.
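The second idea, reading weights in larger, more contiguous chunks, can be illustrated with a small sketch. This is not Apple’s implementation; it is a hypothetical Python example showing how a loader might merge adjacent weight rows into runs, so each flash read fetches one contiguous block instead of many small scattered ones:

```python
# Hypothetical sketch (not Apple's code): given the sorted indices of the
# weight rows needed for the active neurons, merge adjacent rows into
# contiguous (start, stop) runs so each run is fetched with a single
# large sequential read from flash.

def contiguous_chunks(indices):
    """Group sorted row indices into (start, stop) runs of adjacent rows."""
    runs = []
    start = prev = indices[0]
    for i in indices[1:]:
        if i == prev + 1:          # still adjacent: extend the current run
            prev = i
        else:                      # gap found: close the run, start a new one
            runs.append((start, prev + 1))
            start = prev = i
    runs.append((start, prev + 1))
    return runs

# Six needed rows collapse into three contiguous reads.
rows = [2, 3, 4, 10, 11, 20]
print(contiguous_chunks(rows))  # → [(2, 5), (10, 12), (20, 21)]
```

Because flash storage delivers far higher throughput on large sequential reads than on many small random ones, fewer and larger reads directly cut inference latency on a memory-constrained device.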
Apple tested its approach on models including Falcon 7B, a smaller version of an open source LLM originally developed by the Technology Innovation Institute in Abu Dhabi.
Apple’s previous AI paper
LLM in a Flash isn’t the first AI paper from Apple. The firm earlier published a paper on an AI model that generates animated 3D avatars from monocular videos (videos shot with a single camera). Called HUGS (Human Gaussian Splats), the model takes a short monocular video of 50-100 frames shot on a phone, automatically learns to separate the objects in the video, and produces a fully animatable human avatar within 30 minutes.
HUGS trains and renders 100 times faster than previous avatar generation models. It also outperforms state-of-the-art techniques like Vid2Avatar and NeuMan in 3D reconstruction quality, according to VentureBeat.
As firms gear up to incorporate AI into smartphones, Counterpoint estimates that more than 100 million AI-focused smartphones will ship in 2024. Apple’s iPhones could well join that list.
(For more such interesting information on technology and innovation, keep reading The Inner Detail.)