The Benefits of Composability in Private Cloud Architecture

Explore the cost and performance benefits of composability in a private cloud architecture.

Table of Contents

Limitations of the Traditional Data Centre
How Applications Impact the Private Cloud Data Centre
The Benefits of Private Cloud Disaggregation
Efficiency of the Modern Composable Data Centre for Private Cloud
A Single View of All Resources
Cost, Performance, Efficiency

Putting together a private cloud data centre is like building a sports car from scratch. You need to pick the right engine, parts, and equipment to meet the performance demands of the road and driver. Thanks to hardware and software innovations, IT architects can compose their data centres to perform more like a Lamborghini and less like yesterday's banger.

On-premises private clouds require constant oversight, management and maintenance. Operational costs, such as replacing drives or over provisioning, can add up quickly over time. This raises the total cost of ownership (TCO) of data centres and eats into a company's bottom line, especially with data capture, creation, and utilisation growing exponentially every year.

Those issues are precisely why what Seagate calls the composability megatrend is currently underway. The modern, composable data centre allows each private cloud server to be comprised of components that are disaggregated but interconnected with optimal fabric types and bandwidth. Composability provides a flexible way to manage and access data across multiple applications and data workflows.

IT architects must decide whether to use an on-premises private cloud solution or a public/private cloud hosted by a third party. Public clouds share computing services among different customers and are hosted at an external data centre. Third-party cloud providers also offer hosted private clouds, which are also externally hosted but the services are not shared.

Private on-premises clouds are managed and maintained by a company's own data centre and do not share services with other organisations. The major benefits of using an on-premises private cloud include greater sense of security, flexibility and higher performance. Users can customise their resources and services so that the hardware specifically matches software requirements, rather than a one-size-fits-all approach. Users also have complete control over the security, scalability and configurability of their servers.

As the demands of applications increase or decrease, components and servers communicate with one another to shift the workload. IT architects can now fashion data centres with a wider array of hardware and components from various manufacturers. In effect, they're taking apart — or disaggregating — the traditional data centre infrastructure. Once they have gutted the chassis, the data centre can then be reconstructed for more efficient use of installed resources. IT architects can avoid purchasing unnecessary hardware at extra cost and components can be easily replaced without any downtime.

Data centre disaggregation and composability in the private cloud evolved from traditional network architecture into today's dynamic infrastructure that enables powerful — and demanding — systems to function. Composable disaggregation has several key benefits, including reduced latency and increased security and control over data.

Limitations of the Traditional Data Centre

Traditional IT architecture is reaching its limits due to exponential data growth and the increasing complexity of software applications. Central processing units (CPUs), dynamic random-access memory (DRAM), storage class memory (SCM), graphics processing units (GPUs), solid-state drives (SSDs), and hard disk drives (HDDs) are among the critical components that comprise a data centre. These components are typically housed together in one box or server and are the foundation upon which the data centre is built. In this traditional enterprise cloud architecture, each component, such as an HDD, is directly attached to the rest of the server.

Data centres originally operated under a one-application-per-one-box paradigm. And when applications outgrew the storage and data processing capabilities that single servers could provide, IT architects began grouping multiple servers into clusters, all of which could be drawn on as a pool of resources. Proprietary solutions from the likes of IBM, EMC, NetApp, and Seagate's own Dot Hill team led the industry's initial foray into pooled server resources.

Data centres could then scale upward and satisfy the needs of software applications as they grew in complexity in this way: if an application required more storage, bandwidth, or CPU power, additional servers — or nodes — could then be added to the cluster. The clustered model of pooled resources forms the basis of what we now know as converged and hyperconverged enterprise cloud infrastructure, using enterprise hypervisor applications like VMware and others.

Node clusters served their purpose in the cloud's infancy, but they are prone to over provisioning. This occurs when IT architects purchase more servers than strictly needed, and those servers inevitably contain resources of one kind or another that never get used. Although the clustered approach guarantees sufficient storage and processing power, unused resources locked inside servers are inefficient. Nevertheless, IT architects had to rely on over provisioning to meet scale, since there was no way for data centres to dynamically scale only specific resources or workloads to the demands of software applications. Excess cost is a natural by-product of over provisioning.
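The cost of siloed clusters can be made concrete with a toy capacity model (all numbers hypothetical): when each application can draw only on its own cluster, spare capacity elsewhere is stranded, while a shared pool could be sized much closer to total demand.

```python
# Toy model (hypothetical numbers): the same total demand served by
# siloed clusters versus a shared, composable pool.
clusters = {"app_a": {"capacity": 100, "demand": 95},
            "app_b": {"capacity": 100, "demand": 40}}

# Siloed: each application can only use its own cluster, so spare
# capacity in app_b's cluster is stranded ("orphaned").
stranded = sum(c["capacity"] - c["demand"] for c in clusters.values())

# Pooled: one shared pool sized to total demand needs far less headroom.
total_demand = sum(c["demand"] for c in clusters.values())
pooled_capacity_needed = total_demand  # plus whatever headroom policy requires

print(stranded)                # 65 units sit idle in the siloed model
print(pooled_capacity_needed)  # 135 units would suffice when pooled
```

The arithmetic is trivial, but it is the same inefficiency at data centre scale: 200 units purchased to serve 135 units of demand.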

IT architects were also limited in which hardware components they could use to build a server. For compatibility, hardware for each server or cluster had to be purchased from a single manufacturer, and there were no open application programming interfaces (APIs) to help hardware from different manufacturers communicate and coordinate. If architects wanted to swap out a CPU for a faster one from another manufacturer, they were often simply out of luck.

But hardware incompatibilities aren't the only thing straining traditional data infrastructure. There's also the issue of the massive amounts of data that need to be collected, stored and analysed. The explosion of big data is not only pushing the storage limits of traditional private cloud clusters, it’s also creating a data processing bottleneck. Each CPU is often too busy on local data processing to share resources with other applications, resulting in overall resource scaling inefficiencies in the data centre.

For instance, complex artificial intelligence (AI) applications are predicated on the ability to process large amounts of data in a short amount of time. When an AI application is utilising a clustered data centre, bottlenecks in data collection and processing tend to occur. And if the application needs more processing power, there's no way to shift additional workload to other clusters. Latency — or delays in data transmission between devices — is another negative consequence.

For example, there might be two server clusters in the same data centre, one of which is overburdened and the other underutilised. The application using the overburdened cluster might slow down or have performance issues — a problem that could easily be solved by integrating the underused, over-provisioned cluster to help. But the resource pool that application can pull from is strictly limited to the single cluster dedicated to that application. It's a perfect illustration of why IT architects have been searching for more efficient ways to compose a data centre.

The proprietary era is coming to an end — if it hasn't already. Sophisticated software applications demand more processing and storage power than the traditional clustered data centre can provide. And IT architects are limited in which hardware they can use because of a lack of open APIs that allow inter-device communication. For IT architects to move forward, they'll need a deeper understanding of how modern applications impact private cloud architecture and how composability can help overcome traditional IT infrastructure challenges.

How Applications Impact the Private Cloud Data Centre

One of the biggest forces driving the composability megatrend is software application demands. Software like AI or business analytics demands an increasingly complex set of hardware requirements specific to that application's needs. This creates fierce competition for resource pools of storage and processing.

As noted, traditional data centres have now reached a breaking point, where the processing power required by various apps regularly exceeds the limits of a clustered model. Application requirements also evolve constantly over time, and changes can take place quickly. The new version of a business app, for example, might demand twice the storage or processing power of the old one. If it exceeds the limits of its dedicated cluster, more hardware must be purchased. Advances in software stress what a traditional clustered data centre can offer.

Composability gives applications access to resource pools outside of their dedicated cluster, unlocking the processing power, or other resources, available within over provisioned servers. Each CPU, GPU, or storage node can be scaled up independently according to the precise needs of each application.

Additional processing demands also create bottlenecks within the traditional data centre fabric. The data centre fabric is what connects various nodes and clusters. Ideally, a composable fabric that meets the needs of modern software applications should create a pool of flexible fabric capacity. This fabric should be instantly configurable to provision infrastructure and resources dynamically as the processing needs of an application increase. The goal is to provide advanced applications with not just faster processing, but real-time processing that facilitates these applications to run at optimal speeds.

Composability and disaggregation are essential to meeting the demands of advanced software applications. Traditional clustered architecture simply isn't up to the task, and information can't travel over ethernet fabrics quickly enough to allow advanced applications like AI to function properly. By disaggregating components within a server box and giving them a way to communicate with API protocols, data centres can serve complex apps in a cost-efficient manner.

The Benefits of Private Cloud Disaggregation

Disaggregating a private cloud data centre means completely doing away with the traditional server box model. Resource components — such as CPUs, GPUs, all tiers of memory, SSDs, and HDD storage — can all be disaggregated and re-composed à la carte within their proper fabrics. These resources can then be utilised based on what a specific application needs, not based on how the components are configured inside a specific, physical server. Anything that can be accessed on a network fabric can be disaggregated and later re-composed.

For example, the storage resource pool that an application draws from might consist of HDDs in 10 different server racks in various locations in a data centre. If the application needs more storage than it's currently using, one HDD can simply communicate with another HDD that has space and transmit data seamlessly. Processing workload can also be shifted dynamically when application demands increase. Conversely, when app demands decrease, storage and processing can be reallocated in the most energy-efficient way possible to reduce — or eliminate — costly over provisioning.
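The location-independent placement described above can be sketched as a simple allocator. This is a conceptual toy, not a real product API; the pool, rack, and drive names are purely illustrative.

```python
# Illustrative sketch of a disaggregated storage pool that places data
# on whichever drive has space, regardless of which rack it sits in.
class StoragePool:
    def __init__(self, drives):
        # drives: {drive_id: free_gb}, possibly spread over many racks
        self.drives = dict(drives)

    def allocate(self, size_gb):
        """Place a request on the drive with the most free space."""
        drive = max(self.drives, key=self.drives.get)
        if self.drives[drive] < size_gb:
            raise RuntimeError("pool exhausted")
        self.drives[drive] -= size_gb
        return drive

pool = StoragePool({"rack1/hdd0": 500, "rack7/hdd3": 900})
print(pool.allocate(400))  # rack7/hdd3, chosen even though it sits in another rack
```

A real composer would also weigh fabric latency and failure domains when placing data, but the principle is the same: the application sees one pool, not ten racks.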

It's a stark change from JBODs, or just a bunch of disks, being confined to a single server rack. JBODs evolved into pools that applications can call on at any time, making data centre resource allocation more intelligent. Architects then began turning toward standardised external storage systems that could communicate with one another.

Disaggregation also introduces standardised interface monitoring and allows IT architects to manage an entire composable data centre. Selecting requirements-based hardware, whether SSDs, HDDs, CPUs, or fabric components, is only one part of disaggregating the traditional data centre and shifting toward a composable one. Architects still require the correct open API protocols, such as Redfish or Swordfish, for seamless integration, along with a single user interface to manage the data centre. An open API allows hardware and software that speak different languages to communicate and work together.
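Redfish, for instance, models hardware as a tree of JSON resources linked by "@odata.id" references and fetched over HTTPS. The sketch below walks such a tree offline; the resource contents are simplified illustrations, not output from a real baseboard management controller.

```python
# Minimal offline sketch of the Redfish resource model: hardware is a
# tree of JSON documents linked by "@odata.id" references. A real
# client would fetch each document over HTTPS; here the tree is a dict.
tree = {
    "/redfish/v1/Systems": {
        "Members": [{"@odata.id": "/redfish/v1/Systems/1"}]},
    "/redfish/v1/Systems/1": {
        "Id": "1", "PowerState": "On",
        "Storage": {"@odata.id": "/redfish/v1/Systems/1/Storage"}},
    "/redfish/v1/Systems/1/Storage": {
        "Members": [{"@odata.id": "/redfish/v1/Systems/1/Storage/hdd0"}]},
    "/redfish/v1/Systems/1/Storage/hdd0": {
        "Id": "hdd0", "CapacityBytes": 16_000_000_000_000},
}

def get(path):
    """Stand-in for an authenticated HTTPS GET against the controller."""
    return tree[path]

# Enumerate every drive reachable from the Systems collection.
drives = []
for member in get("/redfish/v1/Systems")["Members"]:
    system = get(member["@odata.id"])
    storage = get(system["Storage"]["@odata.id"])
    for d in storage["Members"]:
        drives.append(get(d["@odata.id"])["Id"])
print(drives)  # ['hdd0']
```

Because every vendor exposes the same tree shape, the same walk discovers drives from any manufacturer, which is exactly the plug-and-play property the article describes.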

Letting the application define how the data centre is composed, instead of vice versa, results in software-defined networking (SDN) and software-defined storage (SDS). The next evolution of SDN and SDS is the hyper-composed data centre. This could put private cloud architecture on par with some of the hyperscalers like Amazon Web Services (AWS) and Microsoft Azure. Hyperscalers are large data centre providers that can increase processing and storage on a massive scale. Ethernet fabrics — the network backbone of a data centre — can even be custom built to run in tandem with low-latency HDD and SSD devices. This customisation decreases delays in data traffic or processing, as applications can draw from disaggregated resource pools operating on low-latency fabrics.

Open API protocols like Redfish and Swordfish are critical to all disaggregated components working in harmony. Seagate maintains its own legacy REST API for its class of data centre products to promote inter-device operability. In the past, changing out devices could take weeks to install and integrate. API protocols enable data centre architects to take an à-la-carte, plug-and-play approach to sourcing hardware. New devices from disparate manufacturers can be installed and functional in record time.

Disaggregating the data centre is what makes composability possible. Once architects have chosen the hardware devices specific to their software application needs, they can then construct, operate, and optimise tomorrow's composed data centre.

Efficiency of the Modern Composable Data Centre for Private Cloud

Data centre composability not only connects all the disaggregated components, it also helps improve key performance indicators (KPIs) related to the cost efficiency and performance of data centres. In traditional data centres, over provisioning is one of the biggest drains on budgets. When data centres are over provisioned, servers and resources are paid for but go unused. In essence, IT architects are paying for resource pools that sit underutilised.

These underutilised resources result in what are known as orphan pools: pools of resources that sit idle because no application can reach them. Orphan pools can include CPU, GPU, field programmable gate array (FPGA), DRAM, SSD, HDD, or SCM resources. In a composable architecture, these same building blocks are instead dynamically composed into application-specific hardware through software APIs.

Before composable, open source data centre architecture, private clouds were typically constructed using hardware from a single vendor. These physically integrated data centres cost more up front, and organisations are often locked into a vendor-specific architecture that grows more expensive over time.

The composability trend first gained traction in public cloud architectures; AWS and Microsoft Azure are two examples. A similar approach can be taken with private cloud deployments, saving money by avoiding vendor lock-in and composing data centres with devices from multiple vendors.

This enables IT managers to allocate more budgetary resources to greater storage capacity that empowers them to extract insights and value from stored data.

Organisations can now use a third-party storage solution, drop it into their data centre, and seamlessly integrate it with an open source API. If an IT architect wants an SSD from one manufacturer but their data centre is built from components of a different manufacturer, that SSD can be placed in the data centre without much headache. APIs ensure that all components work in concert. Data centre managers don't need to obsess over how components will communicate via telemetry, for example, reducing the cost and stress associated with parts from vendors that the data centre isn't currently composed of.

Another key benefit of pooled or shared memory is that applications can now survive a node failure without a state disruption on the virtual machines or container clusters running on that node. Other nodes can either copy the last state of the virtual machine's memory into their own memory space (in the case of pooled memory architectures) or use a zero-copy process via pointer redirection across the ultra-low-latency fabric (in the case of shared memory architectures) and continue seamlessly where the failed node left off. This is a significant evolution in data centre fault tolerance, providing higher reliability and availability.
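The pooled-memory variant of that failover can be sketched minimally. This is a conceptual toy, assuming an architecture where each node checkpoints its VM state into a shared pool; the node and VM names are invented for illustration.

```python
# Conceptual sketch of failover with pooled memory: each node keeps its
# VM state in a shared pool, so a surviving node can resume from the
# failed node's last checkpoint instead of restarting the VM cold.
shared_pool = {}  # vm_id -> last known memory snapshot (illustrative)

def checkpoint(vm_id, state):
    """Pooled-memory architecture: copy the VM's state out to the pool."""
    shared_pool[vm_id] = dict(state)

def resume_on(node, vm_id):
    """A surviving node picks up the last state and carries on."""
    state = shared_pool[vm_id]
    return {"node": node, "vm": vm_id, "state": state}

checkpoint("vm42", {"pc": 1024, "dirty_pages": 7})
takeover = resume_on("node_b", "vm42")
print(takeover["state"]["pc"])  # 1024, resumed where the failed node left off
```

In the shared-memory variant, the copy disappears entirely: the surviving node simply redirects a pointer across the fabric to the same physical memory.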

A composable architecture also allows for faster collection and processing of data from multiple sources and, overall, a more efficient use of installed resources. Composable data source architecture typically includes physical sensors, data simulations, user-generated information and telemetry communications. Data centre managers can then spin up resources in a matter of minutes, sharing them between applications on the fly.

A Single View of All Resources

Orchestrating and managing a private cloud data centre is also easier with a composable architecture using a containerised orchestration client. All hardware can be disaggregated, composed, and monitored from a single interface. Data centre managers can get a clear, real-time picture of which resource pools are being utilised and ensure there's no over provisioning. In many cases, management will be conducted with standard software, either proprietary or open source.
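A minimal sketch of such a single-view utilisation check, with made-up pool names and an arbitrary idle threshold:

```python
# Sketch of a "single pane of glass" utilisation check: aggregate
# per-pool telemetry and flag pools idle enough to suggest over
# provisioning (pool names and the 25% threshold are arbitrary examples).
pools = {"cpu": {"used": 70, "total": 100},
         "ssd": {"used": 10, "total": 100},
         "hdd": {"used": 85, "total": 100}}

def underutilised(pools, threshold=0.25):
    """Return the pools whose utilisation is below the threshold."""
    return [name for name, p in pools.items()
            if p["used"] / p["total"] < threshold]

print(underutilised(pools))  # ['ssd'], a candidate for reclaiming
```

In a real deployment the numbers would come from telemetry gathered over the open APIs discussed above, but the decision logic is exactly this simple: find the orphaned capacity and recompose it.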

Deployment of new software applications in an open source, containerised environment is more flexible as well, especially given how flexibly storage resources can be purchased and laid out. For example, data centre architects no longer need to over provision expensive flash storage tiers, because any application workload can be instantly shifted to an existing set of SSD resources as needed, no matter where those resources physically sit. This prevents over provisioning and frees physical space for other purposes, such as increased raw storage capacity.

Creating more raw storage capacity is becoming more necessary due to the continued explosion of raw data. In addition, organisations are striving to gain even more value and actionable insights from that data to enable new opportunities that enhance the bottom line. What's crucial is that IT architects are able to design infrastructures that collect, store, and transmit massive amounts of data. More efficient computing hardware layouts mean more room is available for the raw mass storage today's big data environment demands.

The solution is the composable data centre, which connects any CPU and any storage pool with other devices as needed. Devices plug into the management network using interfaces such as CSI, Redfish, and Swordfish, orchestrated by software engines, and the data centre's building blocks can be dynamically composed to become application specific through software APIs.

Cost, Performance, Efficiency

The disaggregation and composability trend in data centres is being driven by cost, performance and efficiency. Gone are the days of the CPU being the centre of the known universe, with all other devices stuffed into the same box along with it. Private cloud architects can now choose the most appropriate devices, hardware, and software based on use case and specific needs.

The traditional clustered data centre opened the door for more complex apps, but today's software is so demanding that a more dynamic architecture has become necessary. And in the future, composable GPUs, FPGAs, and memory will be enabled by ultra-low-latency interfaces.

Composability means that processing workload is distributed in real time, sharing the burden with underutilised devices and eliminating orphan resource pools. The result is a fully orchestrated private cloud infrastructure that processes workloads faster and costs less to operate. With disaggregation and composability, IT architects can meet the needs of the most demanding software applications. The composable data centre is truly more than the sum of its parts.