The High Cost of Bad Data

In a 2013 study, Gartner surveyed a wide variety of companies and found that poor data quality costs them an estimated (and staggering) $14.2 million annually, on average. Those surveyed also believed the problems will only get worse given the technology trends driving most corporate environments today – big data and analytics, cloud-based services, mobility and BYOD, and social/collaboration. Each of these will only add to data quality issues, hindering the IT organization’s ability to:

  • Optimize performance
  • Make better decisions
  • Manage risk
  • Cut costs

Along with cloud and security, big data analytics is one of the top three priorities on the CIO agenda. It has crossed over into the functional areas of the business as well, with other C-suite executives wanting to understand how to harness data to improve all four areas listed above. However, it has become apparent that if the underlying data is bad, then so are any decisions based on its analysis.

Within the IT organization, this explosion of data has created a management and operations nightmare. Even with the implementation of processes, tools, and improvement programs (such as ITIL, COBIT, and Six Sigma), data quality problems persist. And bad data carries both practical and hidden costs.

The larger discussion of data quality includes all the data an organization uses. But we want to focus here on data used by the IT organization, primarily data stored and shared within the primary IT Service Management (ITSM) tools: HPE Service Manager, HPE Asset Manager, and HPE Configuration Management System (CMS)/UCMDB.

These key ITSM applications can combine to create a single platform for managing the delivery of IT services and for supporting key ITIL processes such as Incident, Problem, and Change (a weak spot in today’s dynamic IT environments). They also share data. But the out-of-the-box integrations contribute to the data quality problem: if the underlying data isn’t current and accurate, the systems are sub-optimized at the most fundamental level.

An untuned or badly integrated system causes all the problems identified in the Gartner study, creating four main problem areas with ITSM data:

  • Duplicate or Conflicting Data
  • Inconsistent Data
  • Irrelevant Data
  • Outdated Data

The average CMS/UCMDB usually contains between 50% and 75% junk data, including duplicate, inaccurate, and out-of-date records. More alarming still, the majority of this junk appears right out of the box, with the first discovery run (for everything you need to evaluate data quality in your own environment, check out our Webinar and queries).
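
If you want a quick, rough read on your own environment before digging into those queries, something like the sketch below can score a CI export for junk. The file name, column names, and 90-day staleness threshold are all assumptions; adjust them to whatever your CMS/UCMDB export actually contains.

```python
"""Rough sketch: estimate how much junk is in a CI export.

Everything here is an assumption: the file name, the column names
('name', 'serial_number', 'last_discovered' as ISO dates), and the
90-day staleness threshold. Adjust them to match your own export.
"""
import csv
from datetime import datetime, timedelta

EXPORT_FILE = "ci_export.csv"      # hypothetical CSV export of CIs
STALE_AFTER = timedelta(days=90)   # CIs not rediscovered in 90 days count as outdated


def audit(path: str) -> None:
    with open(path, newline="", encoding="utf-8") as handle:
        rows = list(csv.DictReader(handle))
    if not rows:
        print("No records found.")
        return

    cutoff = datetime.now() - STALE_AFTER
    seen_keys = set()
    flagged = {"duplicate": set(), "outdated": set(), "incomplete": set()}

    for i, row in enumerate(rows):
        name = (row.get("name") or "").strip().lower()
        serial = (row.get("serial_number") or "").strip().lower()

        # Incomplete: key identifying attributes are missing.
        if not name or not serial:
            flagged["incomplete"].add(i)

        # Duplicate: another CI with the same name + serial was already seen.
        key = (name, serial)
        if key in seen_keys:
            flagged["duplicate"].add(i)
        seen_keys.add(key)

        # Outdated: last_discovered is older than the threshold (or unreadable).
        try:
            if datetime.fromisoformat(row.get("last_discovered") or "") < cutoff:
                flagged["outdated"].add(i)
        except ValueError:
            flagged["outdated"].add(i)

    junk = set().union(*flagged.values())
    print(f"{len(rows)} CIs audited, {len(junk)} flagged "
          f"(~{100 * len(junk) // len(rows)}% junk)")
    for reason, hits in flagged.items():
        print(f"  {reason}: {len(hits)}")


if __name__ == "__main__":
    audit(EXPORT_FILE)
```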

At Effectual, we have witnessed these data problems firsthand. We believe they are the number one reason most CMDB initiatives stall or fail. The initial enthusiasm over the benefits of a fully integrated system soon gives way to the daunting task of managing questionable data across multiple tools. In the worst cases, data cannot be shared, takes longer to reconcile (and is still not reconciled properly), and ends up being “offloaded” into spreadsheets. As you inevitably add external service providers (such as SaaS or cloud services), the problem only gets worse.

To get an overall feel, let’s take a simple example and put a “cost” on the data quality issue. Change management is a weak spot for service management, so let’s start there. This hypothetical example uses the average pay for one change-professional man-hour and assumes a rate of one hour per change. Many changes take longer, and many organizations have far more monthly changes, but this example should demonstrate how quickly costs stack up:

Bad Data Cost Savings Table

As you can see above, an improvement of just 25% in time spent on changes would equate to an annual savings of approximately $150,000. It would also free up roughly 3,000 man-hours to be reallocated to other areas of your IT service delivery, time better spent on improvements or new projects. And that’s just changes alone.
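
For readers who want to plug in their own numbers, here is the same back-of-the-envelope math as a short script. The change volume and hourly rate are assumptions chosen only to reproduce the roughly $150,000 and 3,000 man-hour figures above; substitute your own monthly change count and loaded cost per man-hour.

```python
# Back-of-the-envelope version of the table above. The change volume and
# hourly rate are assumptions picked to reproduce the ~$150,000 / 3,000-hour
# result; plug in your own figures.
changes_per_month = 1_000   # assumed monthly change volume
hours_per_change = 1.0      # one man-hour per change, as in the example
hourly_rate = 50.0          # assumed loaded cost per man-hour, in dollars
improvement = 0.25          # 25% reduction in time spent on changes

annual_hours = changes_per_month * 12 * hours_per_change   # 12,000 hours
hours_saved = annual_hours * improvement                   # 3,000 man-hours
annual_savings = hours_saved * hourly_rate                 # $150,000

print(f"Man-hours reclaimed per year: {hours_saved:,.0f}")
print(f"Annual savings: ${annual_savings:,.0f}")
```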

This is a very simple example, but experience shows that a 25% improvement is easily achievable with Effectual’s Packaged Integration Enhancements (PIE). Reliable data enables the automation that makes HPE Service Manager such a great product, which in turn reduces the time spent on change activity.

Trying to “scrub” your data every once in a while is costly and time-consuming. It’s also not productive: by the time you’re done cleaning up the old data, a new batch of bad data has already piled up. This approach treats the symptom rather than the disease.

But what if you could prevent bad data from being created in the first place? With our combined solutions, we have seen bad data reduced on average from 50-75% to less than 10%. This translates to improved discovery times, better data integration between tools, and the elimination of data duplication.

We have been asked over and over again, “What can be done about bad data?” Our answer is fairly simple: get rid of the old bad data and prevent new bad data. In most cases, this requires rebuilding the data and re-implementing the architecture of the CMS to include a second tier dedicated exclusively to Discovery efforts. This allows our clients to quickly identify data quality problems, focus on building better data, and waste less time wading through the junk. When your data is consistently good, the overall efficiency, performance, and business value of the IT organization automatically increase.
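
To make the two-tier idea concrete, here is a minimal sketch of a “promotion gate” sitting between the discovery tier and the production CMS. The field names, the 30-day freshness threshold, and the push_to_cms() hook are hypothetical placeholders, not real UCMDB or Service Manager interfaces; the point is simply that incomplete, outdated, or duplicate records never reach the production tier.

```python
"""Minimal sketch of a promotion gate between a discovery tier and the
production CMS. Field names, thresholds, and push_to_cms() are hypothetical
placeholders; a real integration would use your own CMS/UCMDB interfaces.
"""
from datetime import datetime, timedelta

REQUIRED_FIELDS = ("name", "ci_type", "serial_number", "last_discovered")
MAX_AGE = timedelta(days=30)   # only promote CIs discovery has seen recently


def is_promotable(ci: dict, known_keys: set) -> bool:
    """Promote only CIs that are complete, fresh, and not already promoted."""
    # Incomplete records stay in the discovery tier.
    if any(not ci.get(field) for field in REQUIRED_FIELDS):
        return False
    # Outdated records stay behind until discovery sees them again.
    if datetime.now() - ci["last_discovered"] > MAX_AGE:
        return False
    # Duplicates of already-promoted CIs are dropped.
    key = (ci["name"].lower(), ci["serial_number"].lower())
    if key in known_keys:
        return False
    known_keys.add(key)
    return True


def promote(discovered: list, push_to_cms) -> int:
    """Push clean records upstream; return how many were promoted."""
    known_keys = set()
    promoted = 0
    for ci in discovered:
        if is_promotable(ci, known_keys):
            push_to_cms(ci)
            promoted += 1
    return promoted


if __name__ == "__main__":
    now = datetime.now()
    sample = [
        {"name": "web01", "ci_type": "server", "serial_number": "SN-1", "last_discovered": now},
        {"name": "web01", "ci_type": "server", "serial_number": "SN-1", "last_discovered": now},  # duplicate
        {"name": "db02", "ci_type": "server", "serial_number": "", "last_discovered": now},       # incomplete
    ]
    print(promote(sample, push_to_cms=lambda ci: None))  # -> 1
```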

To find out how you can get good data and keep it good in as little as six weeks, simply Contact Us.