20 Aug Drilling Down To The Data Problem Inside The Data Center
Legacy data centers have an efficiency problem, with multiple systems processing the same data several times. It’s time to rethink that.
IT departments are under growing pressure to increase efficiency, which generally means changing the way IT operates — anything from small course corrections to major initiatives. Storage efficiency has typically referred to processes that reduce storage and bandwidth capacity requirements.
Compression, thin provisioning, data deduplication and even storage virtualization have had a huge impact on storage, IT efficiency and, ultimately, the total cost of ownership (TCO) of enterprise storage. These technologies are pervasive in data center services, such as production storage, backup, WAN optimization and archiving.
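The core idea behind data deduplication can be sketched in a few lines: split data into blocks, hash each block, and store each unique block only once. The fixed 4 KiB block size and SHA-256 hashing below are illustrative choices; production systems often use variable-size chunking and their own hash schemes.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size; real systems often chunk variably


def dedupe(data: bytes):
    """Split data into fixed-size blocks and store each unique block once.

    Returns (store, recipe): store maps a hash to its block bytes, and
    recipe is the ordered list of hashes needed to rebuild the data.
    """
    store = {}   # hash -> block bytes, kept once per unique block
    recipe = []  # ordered references into the store
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        recipe.append(digest)
    return store, recipe


def restore(store, recipe):
    """Reassemble the original bytes from the recipe."""
    return b"".join(store[h] for h in recipe)


# Ten identical 4 KiB blocks deduplicate down to a single stored block.
data = b"x" * BLOCK_SIZE * 10
store, recipe = dedupe(data)
assert restore(store, recipe) == data
assert len(store) == 1 and len(recipe) == 10
```

The same principle underlies backup deduplication and WAN optimization: redundant blocks are replaced with small references, cutting both capacity and bandwidth requirements.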
In today’s post-virtualization data center, virtualized workloads with different IO streams share the physical resources of the host. The result is random IO streams competing for resources — and new efficiency requirements, as the IOPS needed to service virtual workloads have increased. Common band-aids for the IOPS problem include over-provisioning HDDs or investing in SSDs/flash; both raise the cost per gigabyte of storage allocated to each virtual machine.
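A back-of-the-envelope calculation shows why over-provisioning HDDs for IOPS inflates cost per gigabyte. All figures below (per-spindle IOPS, drive capacity, unit price, workload size) are illustrative assumptions, not vendor specifications:

```python
# Sketch of the IOPS over-provisioning problem: when random IOPS, not
# capacity, dictates the spindle count, cost per usable gigabyte climbs.
# Every figure here is an illustrative assumption.

IOPS_PER_HDD = 150           # rough random-IOPS ceiling for one 10K RPM spindle
HDD_CAPACITY_GB = 900        # hypothetical usable capacity per drive
COST_PER_HDD = 300.0         # hypothetical unit price in dollars

required_iops = 6000         # aggregate random IOPS for the virtual workloads
required_capacity_gb = 10000 # aggregate capacity actually needed

drives_for_capacity = -(-required_capacity_gb // HDD_CAPACITY_GB)  # ceil -> 12
drives_for_iops = -(-required_iops // IOPS_PER_HDD)                # ceil -> 40

drives = max(drives_for_capacity, drives_for_iops)
cost_per_gb = drives * COST_PER_HDD / required_capacity_gb

print(drives)       # 40: performance, not capacity, sets the spindle count
print(cost_per_gb)  # 1.2: dollars per usable gigabyte under these assumptions
```

Under these assumptions the array needs more than three times the spindles that capacity alone would require, which is exactly the kind of hidden cost that drives interest in new efficiency technologies.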