Drilling Down To The Data Problem Inside The Data Center

Legacy data centers have an efficiency problem, with multiple systems processing the same data several times. It’s time to rethink that.

IT departments are under growing pressure to increase efficiency, which generally means changing the way IT operates, whether through small course corrections or major initiatives. Storage efficiency has typically referred to processes that reduce storage capacity and bandwidth requirements.

Compression, thin provisioning, data deduplication and even storage virtualization have had a major impact on storage utilization, IT efficiency and, ultimately, the total cost of ownership (TCO) of enterprise storage. These technologies are pervasive across data center services such as production storage, backup, WAN optimization and archiving.
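To make the capacity side of these techniques concrete, here is a minimal sketch, in Python, of fixed-block deduplication combined with compression. The 4 KB block size, SHA-256 fingerprints and zlib compression are illustrative assumptions, not a description of how any particular product implements them.

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # illustrative fixed block size; real systems vary


def dedupe_and_compress(data: bytes):
    """Split data into fixed-size blocks and store each unique block once, compressed.

    Returns the block store (fingerprint -> compressed block) and the recipe of
    fingerprints needed to reconstruct the original data.
    """
    store = {}
    recipe = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:            # only new, unique blocks consume capacity
            store[digest] = zlib.compress(block)
        recipe.append(digest)
    return store, recipe


# Example: three VMs cloned from the same image share most of their blocks,
# so the store holds far less than the logical amount of data written.
base_image = b"operating system bits" * 10_000
vms = [base_image + f"vm-{i} config".encode() for i in range(3)]

store, _ = dedupe_and_compress(b"".join(vms))
logical = sum(len(v) for v in vms)
physical = sum(len(b) for b in store.values())
print(f"logical bytes: {logical}, physical bytes after dedupe and compression: {physical}")
```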

In today’s post-virtualization data center, virtualized workloads with different IO streams share the physical resources of the host. The result is highly random IO competing for resources, and new efficiency requirements have emerged as the IOPS needed to service virtual workloads have increased. Common band-aids for the IOPS problem include over-provisioning HDDs and investing in SSDs/flash, both of which raise the cost per gigabyte of storage allocated to each virtual machine.
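The cost effect of that over-provisioning is easy to see with back-of-the-envelope arithmetic. The sketch below uses purely hypothetical drive and workload numbers to show how sizing an HDD pool for random IOPS, rather than for capacity, inflates the cost per allocated gigabyte.

```python
# Hypothetical, round numbers chosen for illustration only.
VM_COUNT = 100
IOPS_PER_VM = 50            # random IOPS demanded by each virtual workload
CAPACITY_PER_VM_GB = 100    # capacity each VM actually needs

HDD_IOPS = 150              # rough random IOPS of a single spindle
HDD_CAPACITY_GB = 4000
HDD_COST = 150              # hypothetical unit cost

iops_needed = VM_COUNT * IOPS_PER_VM
capacity_needed_gb = VM_COUNT * CAPACITY_PER_VM_GB

# Spindles required to satisfy IOPS vs. spindles required to satisfy capacity.
drives_for_iops = -(-iops_needed // HDD_IOPS)            # ceiling division
drives_for_capacity = -(-capacity_needed_gb // HDD_CAPACITY_GB)
drives = max(drives_for_iops, drives_for_capacity)

cost_per_gb_allocated = (drives * HDD_COST) / capacity_needed_gb
print(f"drives needed for capacity alone: {drives_for_capacity}")
print(f"drives needed to meet IOPS: {drives_for_iops}")
print(f"effective cost per allocated GB: ${cost_per_gb_allocated:.3f}")
```

With these assumed figures, the pool must be sized for IOPS rather than capacity, so most of the spindles exist only to absorb random IO, and the effective cost per gigabyte rises accordingly.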

Data efficiency builds on familiar storage efficiency technologies, such as compression and deduplication, but applies them in a way that improves both capacity and IOPS, and therefore cost, in today’s data centers. This is one of the key benefits of hyperconverged infrastructure, which, at the highest level, is a way to deliver cloudlike economics and scale without compromising the performance, reliability and availability expected in a data center.
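One way to picture the capacity-plus-IOPS benefit is deduplication executed inline in the write path: a duplicate block is acknowledged from metadata alone, so it costs neither additional capacity nor a backend write. The following is a conceptual sketch under that assumption, not the implementation of any specific hyperconverged platform.

```python
import hashlib


class InlineDedupeWritePath:
    """Conceptual sketch: deduplicate blocks inline, before they reach the backend."""

    def __init__(self):
        self.block_store = {}     # fingerprint -> block payload
        self.backend_writes = 0   # IOs that actually hit persistent media
        self.logical_writes = 0   # IOs the virtual machines issued

    def write_block(self, block: bytes) -> str:
        self.logical_writes += 1
        fingerprint = hashlib.sha256(block).hexdigest()
        if fingerprint not in self.block_store:
            self.block_store[fingerprint] = block   # unique data: pay capacity and IO
            self.backend_writes += 1
        return fingerprint                          # duplicate data: metadata update only


path = InlineDedupeWritePath()
for _ in range(10):                                 # ten VMs writing the same guest OS block
    path.write_block(b"\x00" * 4096)

print(f"logical writes: {path.logical_writes}, backend writes: {path.backend_writes}")
```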