Hyper-converged systems are being discussed with increasing regularity by the trade press and analyst community. The largest storage vendors in the world are now talking about a transition to hyper-converged systems (the merging of compute and storage), validating the new infrastructure model that Nutanix introduced a few years ago.
Hyper-convergence is a fundamentally different architecture from the traditional three-tier infrastructure solutions prevalent in data centers today: an architectural model for IT infrastructure in which compute and storage resources are physically packaged in small servers, typically x86 based.
Hyper-converged systems are by their nature distributed systems. The building block of a hyper-converged solution is a physical x86 server with a processor, memory and storage (including Flash). Customers deploy these solutions by buying several of these servers or nodes and connecting them to a top-of-rack Ethernet switch.
The distributed nature of hyper-converged systems requires a fundamental rethink of how data and metadata are stored across nodes and accessed for storage operations. In a traditional scale-up storage system, all the data is stored in drives that sit behind a pair of storage controllers. These controllers are responsible for all I/O, as well as for storage operations such as deduplication, compression and snapshots.
In contrast to this centralized approach to storage, hyper-converged systems have data spread across several nodes. Any centralized treatment of the metadata or storage operations will severely limit the scalability of the cluster and/or introduce points of failure or weakness in the system.
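To make the contrast concrete, here is a minimal, hypothetical sketch of one common way distributed systems spread ownership of data and metadata across nodes: a consistent-hash ring. This is an illustration of the general technique, not any particular vendor's implementation; node names and the hash scheme are assumptions for the example. The key property is that no single controller pair mediates every operation, and adding or removing a node moves only a small fraction of the keys.

```python
import hashlib
from bisect import bisect_right

class HashRing:
    """Toy consistent-hash ring: maps keys to nodes so that adding or
    removing a node relocates only a small fraction of the keys."""

    def __init__(self, nodes, vnodes=64):
        # Each physical node gets several virtual points on the ring,
        # which smooths out the key distribution and avoids hotspots.
        self.ring = sorted(
            (self._hash(f"{node}:{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self.points = [p for p, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Walk clockwise to the first virtual point at or after the key's hash.
        idx = bisect_right(self.points, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

# Hypothetical cluster: every node owns a slice of the metadata keyspace.
ring = HashRing(["node-1", "node-2", "node-3", "node-4"])
print(ring.node_for("vm-42/disk-0/extent-1017"))  # owner of this extent's metadata
```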
This is where web-scale architectures come in. Companies such as Facebook, Google and Amazon built large-scale distributed systems that delivered the ability to quickly and predictably add infrastructure resources when needed with zero disruption to the system, and to manage the growing footprint with few admins.
In building these distributed systems these companies pioneered a transformative approach to buying, deploying, managing and scaling infrastructure. This is what web-scale is all about. The principles of web-scale are tried and tested rules for building distributed systems the right way.
What does this mean for enterprise IT customers? Not all hyper-converged systems are created equal. If you’re considering hyper-converged systems to simplify your data center and scale your infrastructure predictably as you grow, make sure the solution is built on core web-scale principles and architectures. Some questions to ask (a short sketch after the list illustrates several of these properties):
1. Can you start with a few nodes and grow in small increments (one node) when needed?
2. Is data distributed across all nodes in the cluster to ensure no hotspots?
3. Do all nodes participate in serving I/O, without any one node limiting throughput or becoming a bottleneck?
4. Is metadata distributed across all nodes in the cluster for linear scalability without limits, or are there special metadata or name nodes that limit deployment size or eventually degrade performance?
5. Do all nodes participate equally in storage activities such as deduplication and compression across the cluster to maximize performance by using all CPU resources across the system?
6. Is the cluster able to withstand the failure of individual nodes or components and ensure data availability without data loss? This is an essential foundation for always-on operation.
7. If a node were to fail, do all other nodes participate in the recovery process to accelerate the rebuilding of data and quickly restore state?
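As a rough illustration of questions 2, 6 and 7, the following hypothetical Python sketch spreads replicas of data extents across distinct nodes and then shows how, after a node failure, the rebuild work is naturally distributed across the surviving nodes rather than funneled through one controller. The node names, replication factor and random placement are illustrative assumptions, not a specific product's placement policy (real systems also weigh capacity, locality and failure domains).

```python
import random
from collections import defaultdict

def place_replicas(extents, nodes, rf=2):
    """Place rf replicas of each extent on distinct nodes (toy policy)."""
    return {e: random.sample(nodes, rf) for e in extents}

def rebuild_work(placement, failed_node):
    """After a failure, each affected extent has a surviving replica
    somewhere in the cluster, so rebuild sources are spread cluster-wide
    and recovery bandwidth grows with cluster size."""
    work = defaultdict(list)
    for extent, owners in placement.items():
        if failed_node in owners:
            survivors = [n for n in owners if n != failed_node]
            work[survivors[0]].append(extent)  # a survivor re-replicates it
    return work

nodes = [f"node-{i}" for i in range(1, 9)]
placement = place_replicas([f"extent-{i}" for i in range(10_000)], nodes)
work = rebuild_work(placement, "node-3")
print({n: len(e) for n, e in work.items()})  # rebuild load spread across survivors
```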
Predictions for the Private and Public Cloud Market – 5 years and 10 years
5-year timeframe
Hardware
A small number of cloud service providers (CSPs), the hyper-scale companies, will command a disproportionate share of the market. They will continue to innovate and push the IT hardware complex to rethink the role of hardware in an evolving IT landscape, with disaggregation, open platforms, custom pods and possibly dense multicore servers. These companies will serve as the vanguard for IT transformation.
For a majority of CSPs, the economics of custom and customizable hardware will not make sense given their lack of scale. For these providers, the decision will come down to whether to build their own infrastructure solutions based on open-source system software such as OpenStack Swift/Ceph or buy turnkey solutions that incorporate web-scale principles. In either case, the future of the cloud points to commodity off-the-shelf hardware and intelligent software, and away from proprietary, single-purpose, specialty-built hardware devices.
Software
Web-scale applications built on scale-out platforms will become commonplace across a wide range of industries, including healthcare and medical research, manufacturing, retail and education. These cloud-native applications will leverage the elasticity of public cloud infrastructure, as well as internally managed private clouds built with similar scale-out architecture, to handle unpredictable customer demand and enable data-driven decision making.
Even traditional off-the-shelf enterprise applications will be re-architected to scale predictably on demand. We are already starting to see vendors like Microsoft make this transition to a scale-out world. In five years, this new approach to designing software will be the norm, and monster VMs built to run on scale-up infrastructure will go the way of the mainframe.
The vision of the software-defined datacenter will become pervasive, and even traditionally embedded IT and datacenter services, such as security and encryption, data management and manipulation, will become software-driven services that are instantiated in real time, anywhere in the datacenter, when needed by the application.
Innovations in AI and machine learning will continue to drive operational efficiency in how clouds are run. Data-driven insights will power prescriptive, proactive and predictive intelligence in IT. Cloud environments will be the forerunners in bringing about this transition.
Cultural and Political
Enterprise customers will be able to “think globally and consume locally” with public cloud services. In other words, they’ll be able to plan and manage their workloads in the public cloud across the world while leveraging the flexibility provided by CSPs to keep data local in regions where regulations restrict data movement across national boundaries.
As infrastructure gets converged, automated and simplified, IT organizations will see a breakdown of rigid management silos. Deep specializations in storage, networking and virtualization will give way to an IT generalist role. Though different teams handling backup, security and the like may still exist, the overall culture will embrace collaboration and agility.
Industry-specific cloud services, such as those focused on financial services, government and healthcare, will tackle the regulatory requirements specific to these industries while also offering business models tailored to their needs.
Concerns around security and privacy in the cloud will be largely mitigated or addressed.
10-year timeframe
In ten years, clouds will disappear. In other words, public clouds will become ubiquitous and seamless to the point that IT infrastructure will be an invisible resource, much like electricity. Barriers between public and private clouds, such as hypervisor and container incompatibilities, will be overcome, and data and services will flow seamlessly between clouds. In this future, policy-driven intelligence will take into account economics, regulations and performance requirements to determine where data and services are placed at any time.
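To give a flavor of what such policy-driven placement might look like, here is a deliberately simplified, hypothetical sketch: filter candidate clouds by regulatory and performance constraints, then choose on economics. All names, fields and thresholds are invented for illustration; a real policy engine would weigh far more dimensions and re-evaluate continuously.

```python
from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    region: str
    cost_per_gb: float   # economics
    latency_ms: float    # performance

@dataclass
class Workload:
    name: str
    data_region: str     # region where regulation pins the data, or None
    max_latency_ms: float

def place(workload, clouds):
    """Toy policy engine: enforce regulation and performance constraints,
    then pick the cheapest remaining cloud."""
    candidates = [
        c for c in clouds
        if (workload.data_region is None or c.region == workload.data_region)
        and c.latency_ms <= workload.max_latency_ms
    ]
    return min(candidates, key=lambda c: c.cost_per_gb, default=None)

clouds = [
    Cloud("public-east", "us", 0.021, 40.0),
    Cloud("public-eu", "eu", 0.024, 55.0),
    Cloud("private-dc", "eu", 0.035, 5.0),
]
# Regulated data pinned to the EU with a tight latency budget lands on-premises.
print(place(Workload("patient-records", "eu", 20.0), clouds).name)  # private-dc
```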
With its current innovations and involvement with some of the largest cloud providers in the world, Nutanix is, and will continue to be, at the forefront of cloud evolution.