
When Data and IT Infrastructure Are Overly Complex

By Eric Herzog

As efforts to simplify IT infrastructure through automation continue, pinpointing where complexity resides is key. Data complexity not only complicates the user experience but also wreaks havoc on the back end for administrators and IT decision-makers.

As data has grown exponentially and spilled beyond the borders of the enterprise, managing it and the infrastructure that supports it has become an even bigger challenge, one that needs to be unpacked, analyzed, and addressed on a continuing basis. It is important to know how to identify the signs of data complexity.

Two signs that data is overly complex are: (a) the data is trapped in silos, and (b) it is costly to use. This can happen anywhere in an organization's interconnected systems, which can easily span data centers around the world. It is particularly common in legacy systems and in implementations where additional capacity was added ad hoc over time.

For example, adding one storage array on top of another, and then another, over a period of years may have made sense at the moment each decision was made. But it is time for enterprises to step back and look at the complexity that has gripped their data as a result of too many storage arrays strung together based on short-term operational decisions.

One reason data complexity arises is having too many storage arrays. If your organization has 25 arrays, finding which physical box a specific piece of data resides on is naturally more complex than with two modern arrays that can easily handle all the applications and workloads of the 25 older ones. Complexity creates risk for an enterprise; it also slows down operations and applications, increases expenses, creates frustration, and wastes time. This is a nightmare that data storage administrators too often tolerate.

When you consolidate storage arrays, however, you reduce the number of independent silos you have to manage, dramatically simplifying the entire data process. You know the data is on one of two boxes, which significantly reduces complexity and operational effort. CIOs value consolidation because reducing complexity means lowering costs, cutting waste, and streamlining IT processes.

Vast sets of arrays mean more rack space, more power, more cooling, more floor space, more networking, and more daily operational management. With consolidation, enterprises can substantially reduce CAPEX, OPEX, and management overhead. A set-it-and-forget-it approach lowers operational effort even further: you do not need to set up RAID groups or LUNs, tune the storage for each application's performance needs, or do any other manual configuration.
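To make the scale of those savings concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it (rack units, wattage, and admin hours per array) is an illustrative assumption, not measured data from any vendor or study; the point is simply that footprint and operational effort scale roughly with array count.

```python
# Back-of-envelope consolidation estimate. All per-array figures below are
# illustrative assumptions, not measurements.

RACK_UNITS_PER_ARRAY = 4         # assumed rack space per legacy array
WATTS_PER_ARRAY = 1_200          # assumed average power draw per legacy array
ADMIN_HOURS_PER_ARRAY_MONTH = 6  # assumed routine care per array per month

def footprint(array_count: int) -> dict:
    """Rough physical and operational footprint for a given array count."""
    return {
        "rack_units": array_count * RACK_UNITS_PER_ARRAY,
        "watts": array_count * WATTS_PER_ARRAY,
        "admin_hours_per_month": array_count * ADMIN_HOURS_PER_ARRAY_MONTH,
    }

before = footprint(25)  # 25 legacy arrays, one workload each
after = footprint(2)    # the same workloads consolidated onto 2 modern arrays

for key, old in before.items():
    new = after[key]
    print(f"{key}: {old} -> {new} (saves {old - new})")
```

Even with different assumed numbers, the shape of the result is the same: consolidation cuts rack space, power, and day-to-day administration by roughly the same ratio as the reduction in array count.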

Silos + Cost = Complexity

Consolidation solves the silo problem, while the automation it unlocks solves the cost problem. By tearing down silos and the barriers around data, you also dramatically reduce complexity and speed up data and application management.

Thanks to the high performance and low latency of a software-defined storage architecture, a growing number of organizations can consolidate multiple workloads onto a single storage array, dramatically cutting CAPEX and OPEX. There is no longer any need for 25 or 50 older arrays, each running one application or workload, when all of those applications and workloads can fit on just one or two modern arrays. This translates into savings on watts, slots, power, cooling, floor space, and operational manpower.

Additionally, storage built from the ground up with autonomous automation substantially reduces the operational effort required, and therefore OPEX. Having the storage system automatically adjust caching and other performance parameters on the fly, or configure itself without manual intervention, reduces the workload on your IT, data center, and storage administrators.
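The sketch below illustrates the general idea behind that kind of autonomous tuning: the system watches a performance signal and nudges a parameter itself, so no administrator has to. It is not any vendor's actual implementation; the function name, thresholds, and step sizes are hypothetical, chosen only to show a simple feedback loop.

```python
# Hypothetical sketch of autonomous cache tuning: grow the cache when the
# hit rate falls below target, shrink it when there is comfortable headroom.
# All thresholds and step sizes are illustrative assumptions.

def autotune_cache(cache_gb: float, hit_rate: float,
                   target: float = 0.95, step_gb: float = 8.0,
                   min_gb: float = 64.0, max_gb: float = 512.0) -> float:
    """Return an adjusted cache allocation based on the observed hit rate."""
    if hit_rate < target and cache_gb < max_gb:
        return min(cache_gb + step_gb, max_gb)   # grow cache when misses hurt
    if hit_rate > target + 0.03 and cache_gb > min_gb:
        return max(cache_gb - step_gb, min_gb)   # reclaim memory when safe
    return cache_gb                              # already in the sweet spot

# Example: one tuning pass with an observed 91% hit rate on a 128 GB cache.
print(autotune_cache(128.0, 0.91))  # -> 136.0
```

A real autonomous storage system would run this kind of loop continuously across many parameters, but the administrator-facing outcome is the same: nobody has to hand-tune caching per application.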

Coupling that with AIOps-centric storage monitoring and metrics software backed by proactive support lowers your OPEX and CAPEX as well. In fact, some storage companies have tightly integrated their AIOps storage software with that of data center AIOps vendors, reducing OPEX not only for your storage but for your overall data center. At the end of the day, addressing complexity pays off.
