
Blog Post

Seagate Areal Density Gains Enable the AI Era

Senior VP Jason Feist Offers Insight on Bloomberg’s Tech Disruptors


We know that GPUs are critical to AI workflows. But what about hard drives? Seagate’s Senior Vice President of Products and Markets Jason Feist recently appeared on an episode of the Bloomberg Intelligence Tech Disruptors podcast with host Woo Jin Ho. They discussed HAMR-enabled technology, the new Mozaic™ 3+ platform, and how hard drives enable the era of AI. 

The following is an abridged version of that Q&A, edited for length. 

Listen to the full podcast episode with Jason Feist on Bloomberg, Apple Podcasts, or Spotify. 

What’s the Seagate elevator pitch and why do you think it’s a disruptor? 

Seagate has been a storage leader across the transformation of how storage is used with compute. We've gone from mainframes to mobile clients to today's hyperscale cloud computing world. We continue to see disruption and opportunity as the megatrend of AI takes off and as [AI] is used to generate more value from data.

Data is at the core of everything we do. It's one of the key tenets we talk about every day, in terms of technologies that span a wide range of skills and engineering principles: chemistry, robotics, firmware, software, physics, and materials science.

Today, we're focused on the launch of our Mozaic 3+ HAMR technology. Seagate is very well positioned, in close collaboration with our customers, to continue to deliver that value, built on the backbone of HAMR technology.

You started off by talking about Mozaic, [drives enabled by] the HAMR technology. Could you just walk us through why it matters, and how HAMR differs from existing technologies, such as PMR and SMR? 

We’re always trying to pack more and more information into a smaller location. The term is areal density. How many bits and tracks can we pack into a specific location? 

There are other mechanisms to grow capacity over time. You can add more heads. You can add more platters. You can change the form factor. The disadvantage of approaches like that is they add cost. You do those things only when you don't have areal density capability.
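As a rough illustration of that trade-off: drive capacity is, to first order, capacity per platter times platter count. The sketch below assumes a 10-platter drive; the figures are illustrative, not Seagate specifications. It shows why areal density is the preferred lever: it grows capacity without adding components.

```python
# Illustrative sketch of the two capacity levers; all figures are
# assumptions for illustration, not Seagate specifications.

PLATTERS = 10  # assumed platter count for a modern helium-sealed 3.5" drive

def drive_capacity_tb(tb_per_platter: float, platters: int = PLATTERS) -> float:
    """Drive capacity scales linearly with areal density and platter count."""
    return tb_per_platter * platters

# Lever 1: areal density (same component count, so roughly flat build cost)
for tb in (3, 4, 5):
    print(f"{tb} TB/platter x {PLATTERS} platters = {drive_capacity_tb(tb):.0f} TB")

# Lever 2: more heads and platters (each platter adds heads, media, and cost)
print(f"3 TB/platter x 12 platters = {drive_capacity_tb(3, 12):.0f} TB, at higher cost")
```

Note how the 3TB, 4TB, and 5TB per-platter roadmap discussed next maps directly onto the 30TB, 40TB, and 50TB drive capacities that come up later in the interview.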

So HAMR has added significant technological advancement to magnetic recording, creating a roadmap for us that we see as 3TB per platter, 4TB per platter, 5TB per platter, and beyond. A new media technology allows us to reduce the size of each bit. The key is a fundamental magnetic recording property called coercivity: how do you hold a magnetic state, a one or a zero, as you're recording it?
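For readers who want a feel for the coercivity point: a stored bit is thermally stable only if its grains' anisotropy energy (anisotropy constant K_u times grain volume) comfortably exceeds thermal energy k_B*T; a ratio of roughly 60 is a common rule of thumb. The sketch below uses textbook order-of-magnitude values, not Seagate figures, to show why shrinking grains push the industry toward high-coercivity media such as the iron-platinum alloys used in HAMR.

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # operating temperature, K

def stability_ratio(k_u: float, grain_diameter_nm: float) -> float:
    """Thermal stability factor K_u*V / (k_B*T) for a roughly cubic grain.

    Below ~60, bits start to flip spontaneously over a drive's lifetime
    (the superparamagnetic limit); k_u is anisotropy energy density in J/m^3.
    """
    volume = (grain_diameter_nm * 1e-9) ** 3
    return k_u * volume / (K_B * T)

# Conventional PMR-class media (K_u on the order of 1e5-1e6 J/m^3):
print(f"PMR-class media, 5 nm grain:  {stability_ratio(3e5, 5):.0f}")    # ~9, unstable

# High-anisotropy FePt media used for HAMR (K_u on the order of 1e6-1e7 J/m^3):
print(f"HAMR-class media, 5 nm grain: {stability_ratio(6.6e6, 5):.0f}")  # ~199, stable
```

The catch is that media this magnetically hard can't be written at room temperature. That is the problem the heat-assisted approach, described next, solves: briefly heating the bit lowers its coercivity long enough to write it.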

We've also had to innovate a ton around how to write such media, and that's where the heat-assisted part comes in. We use a light delivery mechanism and a near-field transducer to create surface plasmons, focus that light, and transfer that energy to the media. That couples with a magnetic writer and allows us to flip between a one and a zero in real time while spinning a platter at 7,200 RPM.

You mentioned 3TB, 4TB, 5TB. Is the product roadmap limited to 5TB per platter or a 50TB drive?

It is not. That is what's really exciting, and disruptive, about HAMR technology. We have constant research going on, both in our organization and in universities around the globe, to underpin forward-looking technologies all the way out to 8-10TB per platter, based on fundamental recording physics. It's really exciting. We know how to do 3TB, 4TB, and 5TB per platter, and we have innovation started and research underway to take us beyond that, out to 8-10TB.

How do you think it’s going to play out over the next few years? 

Right now, a lot of people are storing a lot more information, and there's a ton of data center growth. You can see every day, when you open up the news, that more footprint is being allocated to data centers around the globe. The signals are pretty clear that there will be more data growth. We're capitalizing on that by making sure the storage required to fill those data centers is the most innovative and technologically capable it can be.

How about the existing footprint? Because we're probably upgrading from a 16TB or 20TB drive to a 30TB drive. Is there a replacement cycle, or replacement opportunity, there?

There's always refresh. Data center operators will continue to look at the lifecycle of any component within their data center, whether it be power delivery, networking gear, CPU advancement, or storage density improvement. They are selling a service, they are monetizing all of those devices, and those devices all have a lifecycle built into them.

That's no different for us, as a storage device manufacturer, than it is for those other components. Operators will continue to look at the return on keeping two 16TB drives that are five years old versus deploying a single 32TB drive today. There is a clear conversation we have with our customers about that lifecycle and refresh cycle, as well as about deploying new storage demand on the denser solution from the get-go.
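The refresh math behind that conversation is, at its simplest, a slots-and-power calculation. Here is a deliberately oversimplified sketch; every figure (power draw, slot overhead, electricity price) is a made-up assumption for illustration, not Seagate or operator data.

```python
import math

# Oversimplified refresh sketch: serving 1 PB on aging 16TB drives vs. 32TB drives.
# Every figure below is an illustrative assumption, not real operator data.

TARGET_PB = 1.0           # capacity the operator must serve
WATTS_PER_DRIVE = 8.0     # assumed average operating power per drive
SLOT_USD_PER_YEAR = 40.0  # assumed yearly overhead per occupied slot
                          # (enclosure share, cooling, maintenance)
USD_PER_KWH = 0.10        # assumed electricity price

def yearly_opex(drive_tb: int) -> tuple[int, float]:
    """Drives needed for TARGET_PB and their yearly power-plus-slot cost."""
    drives = math.ceil(TARGET_PB * 1000 / drive_tb)
    power_usd = drives * WATTS_PER_DRIVE * 24 * 365 / 1000 * USD_PER_KWH
    return drives, power_usd + drives * SLOT_USD_PER_YEAR

for capacity_tb in (16, 32):
    drives, opex = yearly_opex(capacity_tb)
    print(f"{capacity_tb} TB drives: {drives} drives, ~${opex:,.0f}/year")
```

Doubling per-drive capacity roughly halves the drive count, power, and slot overhead for the same stored capacity. A real model would also weigh acquisition cost, failure rates, and migration effort.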

What if I don’t need 30, 40, or 50? Can I leverage the technology, [so] I can keep 20 on five platters? Is that an opportunity ahead of you as well?  

That storage optionality is what's exciting about areal density per platter, rather than only getting capacity through adding heads and platters. When you create innovation around areal density, those that need the most capacity, i.e., the hyperscalers, are going to go out and deploy the largest device they can get their hands on as fast as possible. That is the TCO equation we've observed over many years in their market. However, there are many other use cases, like video and image analytics, network-attached storage, and enterprise applications, that would love that optionality, because they operate in different price bands. They're selling business-to-business, not business-to-consumer or business-to-many-user-type applications.

So having the optionality to make a more cost-optimized 8TB, 12TB, 16TB, or 20TB drive is something we know customers are asking about. We know markets and use cases will benefit from it, and it will allow us to go to market with the same technology we're using in the highest-capacity cloud drives, while utilizing fewer heads and platters than what's deployed in those solutions today at the lower capacities. We're able to optimize that footprint, and optimize our usage of heads and media, our fabs, and our production capabilities, to meet all of that demand in different markets.
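As a hypothetical sketch of that optionality: hold the per-platter technology fixed and vary the head and platter count to hit lower-capacity price bands. The 4TB-per-platter figure is one point on the roadmap discussed above; the platter counts are assumptions chosen purely for illustration.

```python
# Hypothetical derating of one platform: same recording technology,
# fewer heads and platters per drive. Figures are illustrative only.

TB_PER_PLATTER = 4  # one point on the 3-5 TB/platter roadmap discussed above

for platters in (2, 3, 4, 5):
    heads = platters * 2  # one read/write head per platter surface
    capacity_tb = platters * TB_PER_PLATTER
    print(f"{platters} platters / {heads} heads -> {capacity_tb} TB drive")
```

Each lower capacity point consumes fewer heads and platters per unit, which is the fab and component optimization described above.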

You identified markets and opportunities with fewer platters per drive. Does the form factor have to change and, if it does, is there any other infrastructure behind it that needs to change along with it before that market really takes off?

No form factor needs to change; today, those are standardized slots. The optimization we're doing is within the box: we're creating fewer components inside while staying within the same physical footprint. Put another way, changing form factors has very long lead times, because you have to stand up new infrastructure and then change what's already deployed in the field. By utilizing a strategy like this, we already know the installed base for all of these mass-capacity hard drives, and now we can refresh the legacy products that are out there using fewer heads and platters, keeping those applications running, as well as develop new routes to market with this more optimal hardware architecture. And, most importantly, it uses the same component and platform technology that we scale to very high volumes with our hyperscale customers.

We all know that the hottest topic in technology is AI. How do hard drives play a critical role in an AI storage infrastructure? 

Today, GPUs are tightly coupled with high-bandwidth memory, a form of DRAM, because a GPU can process things so quickly. What's great about that architecture is it analyzes information at a very high rate, and it's going to create a ton of new applications and software development, which in turn will generate more information.

As those applications take shape, get used more, get deployed into workflows, and are integrated into our day-to-day activities, they're going to generate more content that, ultimately, we'll want stored and then used to retrain future models. All of that today is very well suited to hard drive storage, with large pools of information in close proximity to those GPUs and high-bandwidth memory.

In the end, we're excited. Just as the cloud has a symbiotic relationship between the different device types, as we've engaged with customers across different usages, we see that symbiotic relationship proliferating. If anything, it's going to continue to scale, because so many more companies are now participating in AI development, architecture, and infrastructure, whereas before there were only maybe four to 10 companies really participating in cloud.

Any last takeaways? 

We continue to have some of the best and brightest engineers and employees around the globe, across such a diverse set of skills, capabilities, and cultures. It's truly remarkable to see that engagement across the board, every day, in a 24/7 business.

We've navigated transitions in what the storage industry needs multiple times over the last 45 years, we've shipped over four billion terabytes of capacity, and we work on things that are different all the time: devices, systems, and markets that span consumers all the way to cloud.

It’s one of the unique places, as I talk to many different companies and interact with many different individuals, where truly, every day you get to come in and experience something new, something unique, and something exciting. And at the same time, you get to see how what you work on every day delivers value to so many users around the globe, and how so much information is stored on the devices that we’re creating.  

Listen Now

Listen to the full Tech Disruptors podcast with Jason Feist on Bloomberg, Apple Podcasts, or Spotify.