Advanced storage architecture to power AI in data centers.
Built for the future of AI, a joint solution from Supermicro, Seagate, and OSNexus is engineered to drive both AI efficiency and scalability.
The rise of artificial intelligence (AI) has driven unprecedented demand for scalable, high-performance, and cost-effective data center storage solutions. This white paper presents a comprehensive solution combining Supermicro hardware, Seagate Exos hard drives enabled by Seagate’s HAMR-based Mozaic 3+™ technology, and OSNexus QuantaStor software. The joint solution addresses the explosive growth in AI-driven data storage needs with a robust architecture that supports both scale-up and scale-out configurations. Key benefits include:

- Enhanced scalability to accommodate growing AI workloads.
- Exceptional performance with high throughput and low latency.
- Optimized cost efficiency through fewer physical drives and lower power consumption.
- A unified management platform that simplifies operations.
- Advanced security features for compliance.
- Reduced environmental impact through energy-efficient storage.
The rapid evolution of AI and machine learning (ML) technologies has fundamentally transformed the data storage landscape. Advances in computational power, democratized access for developers, and faster development tools have led to an explosion of AI-driven innovation. As AI models become more advanced, the need for scalable, high-performance storage solutions has never been greater. Data is the backbone of AI, and the ability to store, manage, and access vast amounts of data efficiently is crucial for training AI models and deploying AI applications. Traditional storage solutions often fall short of meeting these demands, necessitating the development of new architectures tailored to the needs of AI workloads.
AI workloads present unique challenges that traditional storage solutions struggle to meet. AI models require vast amounts of data for training, often reaching petabyte scale. This data must be readily accessible, as the efficiency of the training process heavily depends on fast data retrieval. Furthermore, AI applications often involve large-scale data processing tasks which demand high throughput and low latency to deliver real-time insights.
The computational intensity of AI workloads also generates significant amounts of metadata, which must be managed efficiently to prevent bottlenecks. Traditional storage solutions, with their limited scalability and performance, are ill-suited for these demands. They often lack the flexibility to handle dynamic workloads, leading to inefficiencies and increased operational costs.
AI-driven innovation necessitates storage solutions that can scale rapidly, handle large volumes of unstructured data, and provide seamless access to this data. For instance, training a complex AI model involves iterative processing of vast data sets to refine algorithms and improve accuracy. The sheer volume of data required for these iterations can overwhelm traditional storage systems, causing delays and reducing the overall efficiency of AI operations.
Moreover, AI applications are increasingly deployed in real-time environments where immediate data processing is critical. This includes applications such as autonomous vehicles, predictive maintenance, and personalized healthcare. These use cases require storage solutions that not only offer high capacity but also deliver exceptional performance to support instantaneous data analysis and decision-making.
The joint solution from Supermicro, Seagate, and OSNexus combines cutting-edge hardware and software to deliver a robust, scalable, and cost-effective storage infrastructure for AI workloads. The core components of this solution include Supermicro servers and JBODs, Seagate Mozaic 3+ hard drives, Seagate Nytro NVMe SSDs, and OSNexus QuantaStor software.
The architecture of the joint solution supports both scale-up and scale-out configurations, catering to diverse deployment needs.
Scaling up (or vertical scaling) involves increasing the capacity of a single storage system or server by adding more resources, such as CPUs, memory, and/or storage drives. This approach maximizes the performance of individual units but has inherent limitations in scalability.
Scaling out (or horizontal scaling), on the other hand, involves adding more storage nodes or servers to a system, distributing the workload across multiple units. This approach allows for virtually unlimited scalability, enabling systems to handle larger, more complex AI workloads by expanding the architecture seamlessly as demand grows.
Scale-up configurations are ideal for smaller, cost-sensitive deployments, offering throughput in the 5 to 10 GB/s range. In contrast, scale-out configurations are designed for larger deployments, with performance scaling linearly as additional nodes are incorporated. This scalability allows the solution to reach aggregate throughput of hundreds of gigabytes per second, meeting the demands of intensive AI workloads.
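To make the scaling model concrete, here is a minimal sketch that estimates aggregate cluster throughput under the linear-scaling assumption described above; the per-node figure and node counts are illustrative assumptions, not measured results.

```python
# Illustrative estimate of scale-out throughput under a linear-scaling assumption.
# PER_NODE_GBPS is an assumed figure for a single storage node, not a benchmark.

PER_NODE_GBPS = 5.0  # assumed sustained throughput per node, in GB/s


def aggregate_throughput_gbps(node_count: int, per_node_gbps: float = PER_NODE_GBPS) -> float:
    """Estimate aggregate throughput (GB/s) for a cluster of `node_count` nodes."""
    return node_count * per_node_gbps


if __name__ == "__main__":
    for nodes in (4, 16, 64):
        print(f"{nodes:>3} nodes ≈ {aggregate_throughput_gbps(nodes):.0f} GB/s aggregate")
```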
The seamless integration of Supermicro servers, Seagate drives, and QuantaStor software forms a cohesive and efficient storage solution. This architecture supports both file and object storage, providing organizations with the flexibility to choose the most suitable configuration for their specific needs. The unified management provided by QuantaStor ensures that all components work harmoniously, delivering optimal performance and reliability. The ability to manage both scale-up and scale-out configurations within a single platform simplifies operations and reduces the complexity associated with maintaining multiple storage systems.
The architecture comprises Supermicro servers, Seagate Exos Mozaic 3+ hard drives, and Seagate Nytro NVMe SSDs, all orchestrated by OSNexus QuantaStor software. This combination meets the intense demands of AI/ML workloads, which require high throughput, low latency, and the ability to handle massive datasets efficiently.
Deployment infrastructure considerations.
Depending on the specific performance requirements and data capacity needs of AI/ML workloads, different configurations may be necessary to achieve optimal results. Factors such as the volume of data being processed and the speed at which data needs to be accessed will dictate whether a hybrid or all-flash configuration is the best fit for the scenario. Additionally, budget considerations and scalability requirements will influence the design choices for the architecture.
Effective management and optimization are critical for ensuring that AI/ML workloads perform at their best within the storage architecture. QuantaStor's advanced management features streamline operations, providing comprehensive control and oversight across diverse configurations.
Different AI/ML workloads require tailored storage solutions to achieve optimal performance and cost-efficiency. Depending on the scale and complexity of the workload, scale-up, scale-out, or mixed configurations can be deployed to meet the specific demands of various industries and applications.
The technological advancements embodied in this solution are critical to its effectiveness. The Seagate Exos Mozaic 3+ hard drives represent a significant leap forward in storage technology. By utilizing HAMR technology, these drives achieve unprecedented areal density, allowing for greater storage capacity within the same physical footprint. This advance not only addresses the need for large-scale data storage but also improves energy efficiency as fewer drives are required to store the same amount of data.
The TCO advantages of Mozaic 3+ hard drives are considerable, including 3× the storage capacity in the same data center footprint for 25% less cost per TB, 60% lower power consumption per TB, and 70% reduced embodied carbon per TB (compared with 10TB PMR drives, a capacity commonly due for an upgrade in data centers today). The drives’ lower power consumption translates to reduced energy costs, while the higher density reduces the need for physical space, leading to savings in data center infrastructure. Additionally, the drives' lower embodied carbon makes them a more environmentally friendly option, aligning with sustainability goals that are increasingly important for modern enterprises.
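As a back-of-the-envelope illustration of those per-TB deltas, the sketch below compares drive count and relative power for an assumed deployment; the target capacity, drive capacities, and baseline power figure are assumptions chosen for the example, not vendor specifications.

```python
# Back-of-the-envelope TCO illustration using the per-TB deltas quoted above.
# The deployment size, drive capacities, and baseline power are assumptions.
import math

TARGET_TB = 10_000          # assumed deployment target: 10 PB
LEGACY_DRIVE_TB = 10        # baseline 10 TB PMR drive (per the comparison above)
MOZAIC_DRIVE_TB = 30        # assumed ~3x the capacity in the same footprint
LEGACY_W_PER_TB = 1.0       # assumed baseline power per TB (relative units)

legacy_drives = math.ceil(TARGET_TB / LEGACY_DRIVE_TB)
mozaic_drives = math.ceil(TARGET_TB / MOZAIC_DRIVE_TB)

legacy_power = TARGET_TB * LEGACY_W_PER_TB
mozaic_power = legacy_power * (1 - 0.60)  # 60% lower power consumption per TB

print(f"Drive count:    {legacy_drives} legacy vs {mozaic_drives} Mozaic 3+")
print(f"Relative power: {legacy_power:.0f} vs {mozaic_power:.0f} units")
```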
The integration of Seagate Nytro NVMe SSDs adds another layer of enhanced performance. These high-speed drives are essential for managing the intensive read and write operations typical of AI workloads. Their low latency ensures that data can be accessed and processed in real time, which is crucial for training AI models and deploying AI applications. The dual-ported design of the SSDs enhances reliability, as it allows for continuous operation even if one port fails.
OSNexus QuantaStor software further enhances the solution by providing intelligent data management and advanced security features. The software's auto-tiering capabilities ensure that data is stored in the most appropriate tier, optimizing both performance and cost. The end-to-end encryption and compliance with industry standards help protect data by addressing the security and privacy concerns that are paramount in AI applications, particularly in industries like healthcare and finance where sensitive data is frequently handled.
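To illustrate the idea behind auto-tiering, here is a minimal, hypothetical policy sketch; the tier names, thresholds, and function names are invented for illustration and do not represent QuantaStor's implementation or API.

```python
# Hypothetical auto-tiering policy sketch (illustrative only, not QuantaStor's API):
# keep hot data on NVMe flash, demote cold data to high-capacity HDDs.
import time
from dataclasses import dataclass

HOT_WINDOW_SECONDS = 24 * 3600  # assumed threshold: accessed within the last day
HOT_ACCESS_COUNT = 10           # assumed threshold: at least 10 reads in the window


@dataclass
class ObjectStats:
    name: str
    last_access: float  # UNIX timestamp of the most recent read
    access_count: int   # reads observed in the current window


def choose_tier(stats: ObjectStats, now: float) -> str:
    """Return 'nvme' for hot data and 'hdd' for cold data."""
    recently_used = (now - stats.last_access) < HOT_WINDOW_SECONDS
    frequently_used = stats.access_count >= HOT_ACCESS_COUNT
    return "nvme" if (recently_used and frequently_used) else "hdd"


if __name__ == "__main__":
    now = time.time()
    print(choose_tier(ObjectStats("training-shard-001", now - 600, 42), now))      # nvme
    print(choose_tier(ObjectStats("archive-2023-logs", now - 30 * 86400, 1), now)) # hdd
```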
The joint solution from Supermicro, Seagate, and OSNexus offers several key benefits that address the specific needs of AI/ML workloads. These include enhanced scalability, exceptional performance with high throughput and low latency, optimized cost efficiency, unified management, advanced security for compliance, and reduced environmental impact.
The solution is versatile enough to support a wide range of use cases across various industries, including healthcare, finance, media, manufacturing, and research.
The joint AI solution developed by Supermicro, Seagate, and OSNexus offers a comprehensive, scalable, and cost-effective storage architecture tailored to the unique demands of AI/ML workloads. By combining advanced hardware and software technologies, the solution delivers exceptional performance, reliability, and efficiency, making it an ideal choice for organizations looking to leverage AI to gain a competitive edge. Whether deployed in healthcare, finance, media, manufacturing, or research, this solution provides the robust infrastructure needed to support the next generation of AI applications and pave the way for the future of AI-driven innovation across industries.
| Topology | Product | Resiliency Model | Raw Capacity | Usable Capacity | Detailed Specification |
|---|---|---|---|---|---|
| Scale-up | SBB hybrid | Triple parity | 2039 TB | 1512 TB | link |
| Scale-up | SBB all-flash | Double parity (4d+2p) | 737 TB | 553 TB | link |
| Scale-out | Hyper all-flash | EC2k+2m/REP3 | 1106 TB | 533 TB | link |
| Scale-out | 4U/36 | EC4k+2m/REP3 | 3974 TB | 2513 TB | link |
| Scale-out | 4U/36 | EC8k+3m/REP3 | 8342 TB | 5786 TB | link |
| Scale-out | Dual-node top loading | EC8k+3m/REP3 | 11981 TB | 8406 TB | link |
Acronyms and additional information.
SBB: Storage Bridge Bay.
EC: Erasure Coding.
“Double-parity” and “triple-parity” refer to the number of parity blocks used to provide data redundancy and fault tolerance.
Numerical strings in the resiliency model describe the data-to-redundancy layout: 4d+2p denotes four data blocks protected by two parity blocks; EC2k+2m, EC4k+2m, and EC8k+3m denote erasure coding with the stated number of data (k) and coding (m) chunks; and REP3 denotes three-way replication.
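To make the notation concrete, the sketch below computes the upper bound that each resiliency model places on usable capacity; actual usable figures in the table also depend on pool layout, spare capacity, and filesystem overhead, so they will not match these simple ratios exactly.

```python
# Illustrative sketch: how the data-to-redundancy ratio bounds usable capacity.
# Actual usable figures depend on pool layout, spares, and filesystem overhead,
# so they will not match these simple ratios exactly.

def ec_usable_bound_tb(raw_tb: float, k: int, m: int) -> float:
    """Usable-capacity bound for erasure coding with k data and m coding chunks."""
    return raw_tb * k / (k + m)


def rep3_usable_bound_tb(raw_tb: float) -> float:
    """Usable-capacity bound under three-way replication (REP3)."""
    return raw_tb / 3


if __name__ == "__main__":
    print(f"EC4k+2m on 3974 TB raw: <= {ec_usable_bound_tb(3974, 4, 2):.0f} TB")
    print(f"EC8k+3m on 8342 TB raw: <= {ec_usable_bound_tb(8342, 8, 3):.0f} TB")
    print(f"REP3    on 1106 TB raw: <= {rep3_usable_bound_tb(1106):.0f} TB")
```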