The Benefits of Composability in Private Cloud Architecture

Explore the cost and performance benefits of composability in a private cloud architecture.

Table of Contents

Limitations of the Traditional Data Center
How Applications Impact the Private Cloud Data Center
The Benefits of Private Cloud Disaggregation
Efficiency of the Modern Composable Data Center for Private Cloud
A Single View of All Resources
Cost, Performance, Efficiency

Putting together a private cloud data center is like building a sports car from scratch. You need to pick the right engine, parts, and equipment to meet the performance demands of the road and driver. Thanks to hardware and software innovations, IT architects can compose their data centers to perform more like a Lamborghini and less like yesterday's clunker.

On-premises private clouds require constant oversight, management, and maintenance. Operational costs, such as replacing drives or overprovisioning, can add up quickly over time. This raises the total cost of ownership (TCO) of data centers and eats into a company's bottom line, especially as data capture, creation, and utilization grow exponentially every year.

Those issues are precisely why what Seagate calls the composability megatrend is currently underway. In the modern, composable data center, each private cloud server is composed of components that are disaggregated yet interconnected with optimal fabric types and bandwidth. Composability provides a flexible way to manage and access data across multiple applications and data workflows.

IT architects must decide whether to use an on-premises private cloud solution or a public/private cloud hosted by a third party. Public clouds share computing services among different customers and are hosted at an external data center. Third-party cloud providers also offer hosted private clouds, which are also externally hosted but the services are not shared.

Private on-premises clouds are managed and maintained by a company's own data center and do not share services with other organizations. The major benefits of using an on-premises private cloud include a greater sense of security, more flexibility, and higher performance. Users can customize their resources and services so that the hardware specifically matches software requirements, rather than settling for a one-size-fits-all approach. Users also have complete control over the security, scalability, and configurability of their servers.

As the demands of applications increase or decrease, components and servers communicate with one another to shift the workload. IT architects can now fashion data centers with a wider array of hardware and components from various manufacturers. In effect, they're taking apart—or disaggregating—the traditional data center infrastructure. Once the chassis has been gutted, the data center can be reconstructed for more efficient use of installed resources. IT architects can avoid purchasing unnecessary hardware at extra cost, and components can be replaced without downtime.

Data center disaggregation and composability in the private cloud evolved from traditional network architecture into today's dynamic infrastructure that enables powerful—and demanding—systems to function. Composable disaggregation has several key benefits, including reduced latency and increased security and control over data.

Limitations of the Traditional Data Center

Traditional IT architecture is reaching its limits due to exponential data growth and the increasing complexity of software applications. Central processing units (CPUs), dynamic random-access memory (DRAM), storage class memory (SCM), graphics processing units (GPUs), solid-state drives (SSDs), and hard disk drives (HDDs) are among the critical components that comprise a data center. These components are typically housed together in one box or server and are the foundation upon which the data center is built. In this traditional enterprise cloud architecture, each component, such as an HDD, is directly attached to the rest of the server.

Data centers originally operated under a one-application-per-one-box paradigm. And when applications outgrew the storage and data processing capabilities that single servers could provide, IT architects began grouping multiple servers into clusters, all of which could be drawn on as a pool of resources. Proprietary solutions from the likes of IBM, EMC, NetApp, and Seagate's own Dot Hill team led the industry's initial foray into pooled server resources.

Data centers could then scale upward and satisfy the needs of software applications as they grew in complexity: if an application required more storage, bandwidth, or CPU power, additional servers—or nodes—could be added to the cluster. The clustered model of pooled resources forms the basis of what we now know as converged and hyperconverged enterprise cloud infrastructure, using enterprise hypervisor applications such as VMware and others.

Node clusters served their purpose in the cloud's infancy but are prone to overprovisioning. This occurs when IT architects purchase more servers, which are bound to contain more resources of one kind or another than are needed—resources that then don’t get used. Although the clustered approach has its benefits—such as guaranteeing sufficient storage and processing power—unused resources within servers are inefficient. Nevertheless, IT architects had to rely on overprovisioning to meet scale, since there was no way for data centers to dynamically scale only specific resources or workloads to the demands of software applications. Excess costs are a natural byproduct of overprovisioning.
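To make the cost of overprovisioning concrete, here is a minimal sketch with hypothetical numbers (not measurements from any real deployment) comparing per-cluster provisioning, where each application's cluster is sized for its own peak, with a shared pool sized for the combined peak.

```python
# Hypothetical illustration of stranded capacity under per-cluster provisioning.
# All figures are invented for the example.

# Peak storage demand (TB) for three applications, each on its own dedicated cluster.
peak_demand_tb = {"analytics": 400, "backup": 250, "web": 150}

# Per-cluster model: every cluster is sized for its own peak plus 30% headroom.
per_cluster_provisioned = sum(peak * 1.3 for peak in peak_demand_tb.values())

# Pooled model: the applications rarely peak at the same time, so the shared pool
# is sized for the observed combined peak (assumed here to be 80% of the sum)
# plus the same headroom.
combined_peak = 0.8 * sum(peak_demand_tb.values())
pooled_provisioned = combined_peak * 1.3

print(f"Per-cluster provisioning:  {per_cluster_provisioned:.0f} TB")
print(f"Pooled provisioning:       {pooled_provisioned:.0f} TB")
print(f"Stranded capacity avoided: {per_cluster_provisioned - pooled_provisioned:.0f} TB")
```

Under these assumed numbers, pooling trims roughly 200 TB of capacity that would otherwise be purchased and left idle.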

IT architects were also limited in terms of which hardware components they could use to build a server. Hardware for each server or cluster had to be purchased from a single manufacturer for compatibility purposes, and there were no open application programming interfaces (APIs) available to help hardware from various manufacturers communicate and coordinate. If architects wanted to swap out a CPU for a faster one from another manufacturer, they were often out of luck due to incompatibility.

But hardware incompatibilities aren't the only thing straining traditional data infrastructure. There's also the issue of the massive amounts of data that need to be collected, stored, and analyzed. The explosion of big data is not only pushing the storage limits of traditional private cloud clusters, it’s also creating a data processing bottleneck. Each CPU is often too busy on local data processing to share resources with other applications, resulting in overall resource scaling inefficiencies in the data center.

For instance, complex artificial intelligence (AI) applications are predicated on the ability to process large amounts of data in a short amount of time. When an AI application is utilizing a clustered data center, bottlenecks in data collection and processing tend to occur. And if the application needs more processing power, there's no way to shift additional workload to other clusters. Latency—or delays in data transmission between devices—is another negative consequence.

For example, there might be two server clusters in the same data center, one of which is overburdened and the other underutilized. The application using the overburdened cluster might slow down or have performance issues—a problem that could easily be solved by integrating the underused, over-provisioned cluster to help. But the resource pool that application can pull from is strictly limited to the single cluster dedicated to that application. It's a perfect illustration of why IT architects have been searching for more efficient ways to compose a data center.

The proprietary era is coming to an end—if it hasn't already. Sophisticated software applications demand more processing and storage power than the traditional clustered data center can provide. And IT architects are limited in which hardware they can use because of a lack of open APIs that allow inter-device communication. For IT architects to move forward, they'll need a deeper understanding of how modern applications impact private cloud architecture and how composability can help overcome traditional IT infrastructure challenges.

How Applications Impact the Private Cloud Data Center

One of the biggest forces driving the composability megatrend is software application demand. Software such as AI or business analytics imposes an increasingly complex set of hardware requirements specific to each application. This creates fierce competition for shared pools of storage and processing resources.

As noted, traditional data centers have now reached a breaking point, where the processing power required by various apps regularly exceeds the limits of a clustered model. Application requirements also evolve constantly over time, and changes can take place quickly. The new version of a business app, for example, might demand twice the storage or processing power of the old one. If it exceeds the limits of its dedicated cluster, more hardware must be purchased. Advances in software stress what a traditional clustered data center can offer.

Composability gives applications access to resource pools outside of their dedicated cluster, unlocking the processing power—or other resources—available within overprovisioned servers. Each CPU, GPU, or storage node can be scaled independently according to the precise needs of each application.

Additional processing demands also create bottlenecks within the traditional data center fabric. The data center fabric is what connects the various nodes and clusters. Ideally, a composable fabric that meets the needs of modern software applications should create a pool of flexible fabric capacity. This fabric should be instantly configurable to provision infrastructure and resources dynamically as the processing needs of an application increase. The goal is to provide advanced applications with not just faster processing, but real-time processing that enables these applications to run at optimal speeds.

Composability and disaggregation are essential to meeting the demands of advanced software applications. Traditional clustered architecture simply isn't up to the task, and information can't travel over Ethernet fabrics quickly enough to allow advanced applications like AI to function properly. By disaggregating components within a server box and giving them a way to communicate through open API protocols, data centers can serve complex apps in a cost-efficient manner.

The Benefits of Private Cloud Disaggregation

Disaggregating a private cloud data center means completely doing away with the traditional server box model. Resource components—such as CPUs, GPUs, all tiers of memory, SSDs, and HDD storage—can all be disaggregated and re-composed à la carte within their proper fabrics. These resources can then be utilized based on what a specific application needs, not based on how the components are configured inside a specific, physical server. Anything that can be accessed on a network fabric can be disaggregated and later re-composed.

For example, the storage resource pool that an application draws from might consist of HDDs in 10 different server racks in various locations in a data center. If the application needs more storage than it's currently using, one HDD can simply communicate with another HDD that has space and transmit data seamlessly. Processing workload can also be shifted dynamically when application demands increase. Conversely, when app demands decrease, storage and processing can be reallocated in the most energy-efficient way possible to reduce—or eliminate—costly overprovisioning.
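The allocation logic behind that example can be sketched in a few lines. The class and rack names below are hypothetical, not a real product API; a composable system would perform the same placement through its management and fabric layers.

```python
# Minimal sketch of a shared HDD pool spanning several racks. Rack names and
# capacities are hypothetical; real systems do this through management APIs.

class StoragePool:
    def __init__(self, racks):
        # racks: mapping of rack name -> free capacity in TB
        self.free_tb = dict(racks)

    def allocate(self, app, needed_tb):
        """Satisfy a request by drawing from whichever racks have free space."""
        placements = {}
        for rack, free in sorted(self.free_tb.items(), key=lambda kv: -kv[1]):
            if needed_tb <= 0:
                break
            take = min(free, needed_tb)
            if take > 0:
                placements[rack] = take
                self.free_tb[rack] -= take
                needed_tb -= take
        if needed_tb > 0:
            raise RuntimeError(f"Pool exhausted while placing {app}")
        return placements

pool = StoragePool({"rack-01": 40, "rack-02": 25, "rack-07": 60})
print(pool.allocate("ai-training", 70))  # e.g. {'rack-07': 60, 'rack-01': 10}
```

Because the pool spans racks, the same request is satisfied whether the free capacity happens to sit in one enclosure or ten.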

It's a stark change from JBODs, or just a bunch of disks, being confined to a single server rack. JBODs evolved into pools that applications can call on at any time, making data center resource allocation more intelligent. Architects then began turning toward standardized external storage devices that could communicate with one another.

Disaggregation also introduces standardized interface monitoring and allows IT architects to manage an entire composable data center. Selecting requirements-based hardware—whether it's SSDs, HDDs, CPUs, or fabric components—is only one part of disaggregating the traditional data center and shifting toward creating a composable one. Architects still require the correct open API protocols—like Redfish or Swordfish—for seamless integration and a single user interface to manage the data center. An open API allows hardware and software that speak different languages to communicate and work together.
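For a sense of what that single, standardized interface looks like, here is a minimal sketch that walks a Redfish service's standard /redfish/v1/Chassis collection and reports each enclosure's health. The endpoint address and credentials are placeholders, and Swordfish layers storage-specific resources (pools, volumes, drives) onto the same model.

```python
# Minimal Redfish enumeration sketch. Host and credentials are placeholders;
# any Redfish-conformant service exposes its root at /redfish/v1/.
import requests

BASE = "https://192.0.2.10"   # hypothetical management endpoint
AUTH = ("admin", "password")  # placeholder credentials

def get(path):
    # TLS verification is disabled only to keep the sketch short.
    resp = requests.get(BASE + path, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

# Walk the standard Chassis collection and report each member's health status.
for member in get("/redfish/v1/Chassis").get("Members", []):
    chassis = get(member["@odata.id"])
    name = chassis.get("Name", "unknown")
    health = chassis.get("Status", {}).get("Health", "unknown")
    print(f"{name}: {health}")
```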

Letting the application define how the data center is composed, instead of vice versa, results in software-defined networking (SDN) and software-defined storage (SDS). The next evolution of SDN and SDS is the hyper-composed data center. This could put private cloud architecture on par with some of the hyperscalers like Amazon Web Services (AWS) and Microsoft Azure. Hyperscalers are large data center providers that can increase processing and storage on a massive scale. Ethernet fabrics—the network backbone of a data center—can even be custom built to run in tandem with low-latency HDD and SSD devices. This customization decreases delays in data traffic or processing, as applications can draw from disaggregated resource pools operating on low-latency fabrics.

Open API protocols like Redfish and Swordfish are critical to all disaggregated components working in harmony. Seagate also maintains its own legacy REST API for its class of data center products to promote inter-device operability. In the past, changing out devices could take weeks of installation and integration. API protocols enable data center architects to take an à-la-carte, plug-and-play approach to sourcing hardware, so new devices from disparate manufacturers can be installed and functional in record time.
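To illustrate what that plug-and-play composition can look like through an open API, the sketch below follows the Redfish CompositionService model: list the available resource blocks, then request a new composed system bound to a few of them. The endpoint, credentials, and request body are illustrative only; which composition operations a given service supports, and the exact payload it expects, vary by implementation.

```python
# Sketch of Redfish-style composition: enumerate resource blocks, then request
# a composed system. Endpoint, credentials, and payload are illustrative;
# check the target service's schema before relying on any of them.
import requests

BASE = "https://192.0.2.10"           # hypothetical management endpoint
session = requests.Session()
session.auth = ("admin", "password")  # placeholder credentials
session.verify = False                # sketch only; verify certificates in practice

# 1. Discover the resource blocks the composition service exposes.
blocks = session.get(BASE + "/redfish/v1/CompositionService/ResourceBlocks").json()
block_refs = [m["@odata.id"] for m in blocks.get("Members", [])]
print("Available resource blocks:", block_refs)

# 2. Request a new composed system from specific blocks (the "specific
#    composition" pattern: POST to the Systems collection with links to
#    the chosen resource blocks).
payload = {
    "Name": "composed-ai-node",
    "Links": {"ResourceBlocks": [{"@odata.id": ref} for ref in block_refs[:3]]},
}
resp = session.post(BASE + "/redfish/v1/Systems", json=payload)
print("Composition request status:", resp.status_code)
```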

Disaggregating the data center is what makes composability possible. Once architects have chosen the hardware devices specific to their software application needs, they can then construct, operate, and optimize tomorrow's composed data center.

Efficiency of the Modern Composable Data Center for Private Cloud

Data center composability not only connects all the disaggregated components, it also helps improve key performance indicators (KPIs) related to the cost efficiency and performance of data centers. In traditional data centers, overprovisioning is one of the biggest drains on budgets. When data centers are overprovisioned, servers and resources are paid for but go unused. In essence, IT architects are paying for resource pools that sit underutilized.

Overprovisioning leaves behind what are known as orphan pools, or pools of resources that sit unused because no application can reach them. These can include CPU, GPU, field programmable gate array (FPGA), dynamic random-access memory (DRAM), SSD, HDD, or storage class memory (SCM) resources. In a composable data center, these building blocks instead get dynamically composed into application-specific hardware through software APIs.

Before composable, open source data center architecture, private clouds were typically constructed using hardware from a single vendor. These physically integrated data centers cost more up front, and organizations are often locked into a vendor-specific architecture that becomes even more costly over time.

The composability trend began to gain traction in public cloud architectures; AWS and Microsoft Azure are two examples. A similar approach can be taken with private cloud deployments, saving money by avoiding vendor lock-in and composing data centers with devices from multiple vendors.

This frees IT managers to allocate more of the budget to storage capacity, which in turn empowers them to extract insights and value from stored data.

Organizations can now take a third-party storage solution, drop it into their data center, and seamlessly integrate it with an open source API. If an IT architect wants an SSD from one manufacturer but their data center is built from components of a different manufacturer, that SSD can be added without much headache. APIs ensure that all components work in concert. Data center managers don't need to obsess over how components will communicate via telemetry, for example, which reduces the cost and stress of integrating parts from vendors not already represented in the data center.

Another key benefit of pooled or shared memory is that applications can now survive a node failure without state disruption to the virtual machines or container clusters running on that node. Other nodes can either copy the last state of the virtual machine's memory into their own memory space (in the case of pooled memory architectures) or use a zero-copy process, redirecting pointers across the ultra-low-latency fabric (in the case of shared memory architectures), and continue seamlessly where the failed node left off. This is a significant evolution in the fault tolerance capabilities of data centers, providing higher reliability and availability.

A composable architecture also allows for faster collection and processing of data from multiple sources and, overall, a more efficient use of installed resources. Composable data source architecture typically includes physical sensors, data simulations, user-generated information, and telemetry communications. Data center managers can then spin up resources in a matter of minutes, sharing them between applications on the fly.

A Single View of All Resources

Orchestrating and managing a private cloud data center is also easier with a composable architecture and a containerized orchestration client. All hardware can be disaggregated, composed, and monitored from a single interface. Data center managers get a clear, real-time picture of which resource pools are being utilized and can ensure there's no overprovisioning. In many cases, management is conducted with standard software, which can be either proprietary or open source.
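As one assumed example of such an open source route (the article does not prescribe a specific tool), the sketch below uses the Kubernetes Python client to print a single consolidated view of every node's allocatable CPU and memory; the same pattern applies to whatever orchestration layer a data center standardizes on.

```python
# Single-pane resource view using an open source orchestration client.
# Kubernetes is assumed here purely for illustration.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

print(f"{'NODE':<30}{'CPU':<10}{'MEMORY':<15}")
for node in v1.list_node().items:
    alloc = node.status.allocatable or {}
    print(f"{node.metadata.name:<30}{alloc.get('cpu', '?'):<10}{alloc.get('memory', '?'):<15}")
```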

Deployment of new software applications in an open source, containerized environment is more flexible as well, especially because of how flexibly storage resources can be purchased and laid out. For example, data center architects no longer need to overprovision expensive flash storage tiers, because any application workload can be instantly shifted to an existing set of SSD resources as needed, no matter where it physically sits. This prevents overprovisioning and frees up physical space that can be used for other purposes, such as increased raw storage capacity.

Creating more raw storage capacity is becoming more necessary due to the explosive growth of raw data. In addition, organizations are striving to gain even more value and actionable insights from that data to enable new opportunities that enhance the bottom line. What's crucial is that IT architects are able to design infrastructures that collect, store, and transmit massive amounts of data. More efficient computing hardware layouts mean more room is available for the raw mass storage necessary for today's big data environment.

The solution is the composable data center, which connects any CPU and any storage pool with other devices as needed. Devices plug into the management network using open interfaces such as CSI, Redfish, and Swordfish, orchestrated by software engines, and the data center's building blocks can be dynamically composed into application-specific systems through those APIs.

Cost, Performance, Efficiency

The disaggregation and composability trend in data centers is being driven by cost, performance, and efficiency. Gone are the days of the CPU being the center of the known universe, with all other devices being stuffed into the same box along with it. Private cloud architects can now choose the most appropriate devices, hardware, and software based on use case and specific needs.

The traditional clustered data center opened the door for more complex apps, but today's software is so performance-hungry that a more dynamic architecture has become necessary. And in the future, composable GPUs, FPGAs, and memory will be enabled by ultra-low-latency interfaces.

Composability means that processing workload is distributed in real time, sharing the burden with underutilized devices and eliminating orphan data pools. The result is a fully orchestrated private cloud infrastructure that helps data centers to process workloads faster and costs less money to operate. With disaggregation and composability, IT architects can meet the needs of the most demanding software applications. The composable data center is truly more than the sum of its parts.