One of the most visible, and frankly the dullest, trends in the smartphone business over the past couple of years has been the incessant talk of AI experiences. Chipmakers, in particular, often touted how their latest mobile processor would enable on-device AI tasks such as video generation.
We're already there, albeit not entirely. Amid all the hype surrounding hit-and-miss AI tricks for smartphone users, the conversation rarely went beyond the glitzy presentations about new processors and ever-evolving chatbots.
It was only when Gemini Nano's absence on the Google Pixel 8 raised eyebrows that the masses came to understand the critical importance of RAM capacity for AI on mobile devices. Soon after, Apple also made it clear that it was keeping Apple Intelligence locked to devices with at least 8GB of RAM.
But the "AI phone" picture is not all about memory capacity. How well your phone can perform AI-powered tasks also depends on invisible RAM optimizations, as well as the storage modules. And no, I'm not just talking about capacity.
Memory innovations headed to AI phones

Digital Trends sat down with Micron, a global leader in memory and storage solutions, to break down the role of RAM and storage in AI processes on smartphones. The advancements Micron has made should be on your radar the next time you go shopping for a top-tier phone.
The latest from the Idaho-based company includes G9 NAND mobile UFS 4.1 storage and 1γ (1-gamma) LPDDR5X RAM modules for flagship smartphones. So how exactly do these memory solutions push the cause of AI on smartphones, apart from boosting capacity?
Let's start with the G9 NAND UFS 4.1 storage solution. The overarching promise is frugal power consumption, lower latency, and high bandwidth. The UFS 4.1 standard can hit peak sequential read and write speeds of 4,100 MB/s, a 15% gain over the UFS 4.0 generation, while trimming latency, too.
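To put those numbers in perspective, here's a quick back-of-envelope calculation (my own sketch built from the figures above, not a Micron benchmark) of how long it would take to read a roughly 7GB on-device AI bundle, about the size Apple Intelligence requires, at each generation's peak sequential speed:

```python
# Back-of-envelope: time to read a ~7GB on-device AI bundle at
# peak sequential speeds. Speeds are the article's figures; real
# reads are rarely this fast.
MODEL_SIZE_MB = 7 * 1024            # ~7GB, in MB
UFS_4_1_MBPS = 4100                 # UFS 4.1 peak sequential read
UFS_4_0_MBPS = UFS_4_1_MBPS / 1.15  # ~15% slower prior generation

for name, speed in [("UFS 4.1", UFS_4_1_MBPS), ("UFS 4.0", UFS_4_0_MBPS)]:
    print(f"{name}: {MODEL_SIZE_MB / speed:.2f} seconds")
```

That works out to roughly 1.7 seconds versus 2 seconds per full read, a gap that compounds quickly when weights are swapped in and out many times per session.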
Another crucial benefit is that Micron's next-gen mobile storage modules go all the way up to 2TB of capacity. Moreover, Micron has managed to shrink their physical size, making them an ideal fit for foldables and next-gen slim phones such as the Samsung Galaxy S25 Edge.

Moving over to the RAM progress, Micron has developed what it calls 1γ LPDDR5X RAM modules. They deliver a peak speed of 9,200 MT/s, pack 30% more transistors thanks to a die shrink, and consume 20% less power while at it. Micron has already shipped the slightly slower 1β (1-beta) RAM solution inside the Samsung Galaxy S25 series smartphones.
The interplay of storage and AI
Ben Rivera, Director of Product Marketing in Micron's Mobile Business Unit, tells me that Micron has made four crucial enhancements atop its latest storage solutions to ensure faster AI operations on mobile devices: Zoned UFS, Data Defragmentation, Pinned WriteBooster, and Intelligent Latency Tracker.
"This feature enables the processor or host to identify and isolate or 'pin' a smartphone's most frequently used data to an area of the storage device called the WriteBooster buffer (akin to a cache) to enable quick, fast access," explains Rivera of the Pinned WriteBooster feature.
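The pinning idea will be familiar from software caches. Here's a minimal Python sketch of the general concept (my illustration, not Micron's firmware logic): pinned entries survive eviction, so the hottest data always stays in the fast tier.

```python
from collections import OrderedDict

class PinnedCache:
    """LRU-style cache where 'pinned' keys are never evicted,
    loosely mimicking how frequently used data stays in a fast
    buffer while ordinary data falls back to slower storage."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()   # oldest entries first
        self.pinned = set()

    def pin(self, key):
        self.pinned.add(key)

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        while len(self.store) > self.capacity:
            # Evict the least-recently-used *unpinned* entry.
            for k in self.store:
                if k not in self.pinned:
                    del self.store[k]
                    break
            else:
                break  # everything left is pinned; stop evicting

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)
            return self.store[key]
        return None  # miss: would fall back to slower storage

cache = PinnedCache(capacity=3)
cache.pin("ai_weights")
cache.put("ai_weights", "...")
for k in ("photo1", "photo2", "photo3"):
    cache.put(k, "...")
print(list(cache.store))  # 'ai_weights' survives eviction
```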

Every AI model that seeks to perform on-device tasks (think Google Gemini or ChatGPT) needs its own set of instruction files stored locally on the mobile device. Apple Intelligence, for example, needs 7GB of storage for all its shenanigans.
To perform a task, you can't hand the entire AI package over to the RAM, because the RAM needs room to handle other crucial chores, such as calls or interactions with other important apps. To work within that constraint, a memory map is created that loads only the needed AI weights from the Micron storage module onto the RAM.
When resources get tight, what you need is faster data swapping and reading. That ensures your AI tasks are executed without affecting the speed of other crucial tasks. Thanks to Pinned WriteBooster, this data exchange is sped up by 30%, ensuring AI tasks are handled without delays.
So, let's say you ask Gemini to pull up a PDF for analysis. The fast memory swap ensures the needed AI weights are quickly moved from storage to the RAM module.
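The memory-map trick is easy to demonstrate in code. In this Python sketch (the file name and tensor dimensions are invented for illustration), NumPy's memmap keeps the weights file on storage and pages in only the slices a task actually touches:

```python
import numpy as np

# Hypothetical weights file and shape, purely for illustration.
WEIGHTS_PATH = "model_weights.bin"
SHAPE = (10_000, 4_096)  # ~160MB of float32 "weights"

# One-time setup: write a dummy weights file to disk.
np.memmap(WEIGHTS_PATH, dtype=np.float32, mode="w+", shape=SHAPE).flush()

# Memory-map the file: nothing is read into RAM yet. The OS pages
# in only the regions we touch, which is the same basic trick that
# lets a phone load just the AI weights a given task needs.
weights = np.memmap(WEIGHTS_PATH, dtype=np.float32, mode="r", shape=SHAPE)

# Touch only the rows one task needs; only those pages travel
# from storage into RAM.
needed = weights[128:256]
print(needed.mean())
```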
Next, we have Data Defrag. Think of it as a desk or wardrobe organizer, one that ensures items are neatly grouped into distinct categories and placed in their own cabinets so that they're easy to find.

In the context of smartphones, as more data is saved over an extended period of use, it usually ends up stored in a rather haphazard manner. The net effect is that when the onboard system needs a certain kind of file, it becomes harder to find them all, leading to slower operation.
According to Rivera, Data Defrag not only helps with the orderly storage of data, but also changes the route of interaction between the storage and the device controller. In doing so, it boosts data read speeds by an impressive 60%, which naturally speeds up all kinds of user-device interactions, including AI workflows.
"This feature can help expedite AI features, such as when a generative AI model, like one used to generate an image from a text prompt, is called from storage to memory, allowing data to be read faster from storage into memory," the Micron executive told Digital Trends.
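A toy simulation shows why scattered data reads slower. The cost numbers below are invented, and real flash storage pays per-command and address-translation overhead rather than mechanical seeks, but the shape of the effect is the same:

```python
import random

# Toy model: contiguous blocks can be fetched in long runs, while
# scattered blocks each pay a fixed per-request overhead. All
# numbers are invented for illustration.
OVERHEAD_US = 50    # cost whenever the next block isn't adjacent
TRANSFER_US = 5     # per-block transfer cost
NUM_BLOCKS = 1000

def read_time_us(addresses):
    total, prev = 0, None
    for addr in addresses:
        if prev is None or addr != prev + 1:
            total += OVERHEAD_US   # a new request is needed
        total += TRANSFER_US
        prev = addr
    return total

contiguous = list(range(NUM_BLOCKS))
fragmented = random.sample(range(NUM_BLOCKS * 100), NUM_BLOCKS)

print("contiguous:", read_time_us(contiguous), "us")
print("fragmented:", read_time_us(fragmented), "us")
```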
Intelligent Latency Tracker is another feature that essentially keeps an eye on lag events and the factors that may be slowing down your phone's usual pace. It then helps with debugging and optimizing the phone's performance so that regular as well as AI tasks don't run into speed bumps.
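The general idea behind latency tracking is simple to sketch: time each operation, log the ones that blow past a threshold, and hand that log to whoever is debugging. A conceptual Python sketch (mine, not Micron's telemetry):

```python
import time
from functools import wraps

SLOW_THRESHOLD_MS = 10.0   # illustrative threshold
slow_events = []           # log of (operation, elapsed_ms)

def track_latency(fn):
    """Record operations that exceed the latency threshold, the
    kind of signal a latency tracker surfaces for later debugging
    and performance tuning."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        if elapsed_ms > SLOW_THRESHOLD_MS:
            slow_events.append((fn.__name__, elapsed_ms))
        return result
    return wrapper

@track_latency
def load_weights():
    time.sleep(0.02)  # simulate a slow ~20ms storage read

load_weights()
print(slow_events)  # [('load_weights', ~20.0)]
```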

The final storage enhancement is Zoned UFS. This system ensures that data with a similar I/O nature is stored in an orderly fashion. That's crucial because it makes it easier for the system to locate the required files, instead of wasting time rummaging through all the folders and directories.
"Micron's ZUFS feature helps organize data so that when the system needs to locate specific data for a task, it's a faster and smoother process," Rivera told us.
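Conceptually, zoning means routing writes into separate regions based on their I/O profile so that similar data stays clustered. Here's a rough sketch of the idea (the zone names and classification rule are made up; the real ZUFS feature operates at the storage-protocol level, not in application code):

```python
from collections import defaultdict

# Conceptual sketch: route writes into zones by I/O profile so
# similar data stays physically clustered and is faster to find.
zones = defaultdict(list)

def classify(filename: str) -> str:
    if filename.endswith(".weights"):
        return "sequential-read-heavy"   # big AI model files
    if filename.endswith(".db"):
        return "random-write-heavy"      # app databases
    return "general"

def write(filename: str, data: bytes):
    zones[classify(filename)].append((filename, data))

write("gemini_nano.weights", b"...")
write("messages.db", b"...")
write("photo.jpg", b"...")

for zone, files in zones.items():
    print(zone, [name for name, _ in files])
```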
Going beyond RAM capacity
When it comes to AI workflows, you need a certain amount of RAM. The more, the better. While Apple has set the baseline at 8GB for its Apple Intelligence stack, players in the Android ecosystem have moved to 12GB as the safe default. Why so?
"AI experiences are also extremely data-intensive and thus power-hungry. So, in order to deliver on the promise of AI, memory and storage need to deliver low latency and high performance at the utmost power efficiency," explains Rivera.
With its next-gen 1γ (1-gamma) LPDDR5X RAM solution for smartphones, Micron has managed to reduce the operational voltage of the memory modules. Then there's the all-too-important question of raw performance. Rivera says the new memory modules can hum along at up to 9.6 gigabits per second, ensuring top-notch AI performance.
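Note that 9.6 Gbps is a per-pin figure. To translate it into usable bandwidth, you multiply by the bus width; the 64-bit bus in this quick calculation is my assumption (typical for flagship phones), not a Micron spec:

```python
# Rough bandwidth math for LPDDR5X at 9.6 Gbps per pin. The
# 64-bit bus width is an assumption, not a figure from Micron.
GBPS_PER_PIN = 9.6
BUS_WIDTH_BITS = 64

peak_gbps = GBPS_PER_PIN * BUS_WIDTH_BITS   # total gigabits/s
peak_gb_per_s = peak_gbps / 8               # gigabytes/s

print(f"Peak theoretical bandwidth: {peak_gb_per_s:.1f} GB/s")
# ~76.8 GB/s: enough headroom to stream a multi-gigabyte model's
# weights between RAM and the processor many times per second.
```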

Micron says improvements in the extreme ultraviolet (EUV) lithography process have opened the doors not only to higher speeds, but also to a healthy 20% jump in energy efficiency.
The road to more private AI experiences?
Micron's next-gen RAM and storage solutions for smartphones are targeted not just at boosting AI performance, but also the general pace of your day-to-day smartphone chores. I was curious whether the G9 NAND mobile UFS 4.1 storage and 1γ (1-gamma) LPDDR5X RAM enhancements would also speed up offline AI processing.
Smartphone makers as well as AI labs are increasingly shifting toward local AI processing. That means that instead of sending your queries to a cloud server, where the operation is handled and the result is sent back to your phone over an internet connection, the entire workflow is executed locally on your phone.

From transcribing calls and voice notes to processing your complex research material in PDF files, everything happens on your phone, and no personal data ever leaves your device. It's a safer approach that's also faster, but at the same time, it demands beefy system resources. A faster, more efficient memory module is one of those prerequisites.
Can Micron's next-gen solutions help with local AI processing? They can. In fact, they'll also speed up processes that require a cloud connection, such as generating videos using Google's Veo model, which still requires powerful compute servers.
"A local AI app running directly on the device would have the most data traffic since not only is it reading user data from the storage device, it's also conducting AI inferencing on the device. In this case, our solutions would help optimize data flow for both," Rivera tells me.
So, how soon can you expect phones equipped with the latest Micron solutions to land on shelves? Rivera says all major smartphone manufacturers will adopt Micron's next-gen RAM and storage modules. As far as market arrival goes, "flagship models launching in late 2025 or early 2026" should be on your shopping radar.