Webinar: How to Measure UCMDB Data Quality

The first installment of our 2015 webinar series, How To Measure Data Quality In Your UCMDB, was a great success! In case you missed it, we’ve summarized some of the main points here. If you’d like to know more, you can register below for the complete recording, presentation slides, and the queries you’ll need to evaluate your environment.

Why is the UCMDB so important?

To fully appreciate why UCMDB data quality is important, you must first understand the role of the configuration management database (CMDB). Within the HPE ITOM Service Management platform, that tool is the Universal Configuration Management Database, or simply UCMDB. The UCMDB provides a single system of record for IT to manage and support ITIL processes, allowing professionals to monitor the entire IT infrastructure from a comprehensive set of data.

The primary value of the UCMDB is its ability to serve as a decision support system. A full (and accurate) UCMDB allows the IT organization to understand the environment as more than a set of physical assets, including the relationships between them. If, for example, there is a Zero Day Threat (read our article about Sandworm here), IT wants to quickly see which systems could be affected and how they relate to the broader infrastructure. Without a complete, accurate UCMDB, IT’s support organizations will spend countless hours backtracking through asset information, contracts, service requests, change requests, and incident reports. With the right automation (and reliable data quality), all of this can be accomplished with a quick set of queries.
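To make that concrete, here is a minimal sketch of the kind of impact analysis such a query performs, written in plain Python rather than the UCMDB’s own TQL. The relationship pairs, CI names, and export step are all hypothetical stand-ins for data you would pull from your own environment:

```python
from collections import defaultdict, deque

# Hypothetical input: (parent_ci, dependent_ci) pairs exported from
# the UCMDB, e.g. from a TQL query result or a CSV dump.
relationships = [
    ("web-server-01", "app-service-billing"),
    ("app-service-billing", "business-svc-invoicing"),
    ("web-server-02", "app-service-billing"),
]

def impacted_cis(start_ci, relationships):
    """Walk the dependency graph outward from a vulnerable CI and
    return every CI that could be affected by it."""
    graph = defaultdict(set)
    for parent, dependent in relationships:
        graph[parent].add(dependent)
    seen, queue = set(), deque([start_ci])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(impacted_cis("web-server-01", relationships))
# e.g. {'app-service-billing', 'business-svc-invoicing'}
```

With trustworthy relationship data, impact analysis becomes a quick graph walk like this instead of a manual audit; with bad data, the same traversal confidently returns the wrong answer.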

Integrating the UCMDB into core ITIL processes allows IT to quickly track its work and understand issues in the environment, as well as how Incidents, Problems, Changes, Requests, Releases, and other processes affect Configuration Items. A clean and accurate UCMDB supports and ensures quality IT service delivery and end user support.

What is “Bad Data”?

So how does data quality come into play here? Put simply, without reliable data the CMDB completely loses its value. At its core, UCMDB is a data collection, storage, and processing application that discovers important relationships between IT assets. The discovery process within the UCMDB is an ongoing automated method of gathering data from systems connected to the network, so it is constantly creating and updating CI records. If the CI data is “bad,” it will be replicated and reused throughout the UCMDB hundreds, or in many cases thousands, of times. Sadly, many integrations create bad CIs almost as a rule, clogging the system and greatly hindering both performance and business value.

We’ve identified three areas where data quality problems continually arise within the UCMDB. We regularly see UCMDBs with over three million (3,000,000+) CIs, of which 50% to 80% is “junk” data clogging the system. These junk CIs break down as follows:

  • 10% – Duplicates that are exact matches. Although these are less common in more mature UCMDB implementations, they are also among the more difficult to clean up.
  • 30% to 50% – Duplicates that are near matches. These occur with Out-of-the-Box Discovery and account for the majority of the data quality issues within UCMDB.
  • 20% – Duplicates that are very different. This problem increases over time because of reconciliation, virtualization, integration, and external data (usually spreadsheet) imports.

All three types have one thing in common: they cost your enterprise time and money.
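A rough way to gauge how much of this applies to your own environment is to export your Node CIs and bucket them by a normalized key. The sketch below is a heuristic, not a UCMDB feature: the CSV file and its name column are assumptions about your export format. Groups that share a normalized name but differ in raw spelling correspond to the near matches described above:

```python
import csv
from collections import defaultdict

def normalize(name):
    """Collapse cosmetic differences that produce near-match duplicates:
    case, surrounding whitespace, and DNS domain suffixes."""
    return name.strip().lower().split(".")[0]

def classify_duplicates(rows):
    """Group CI rows by normalized name, then split the groups into
    exact matches (same raw name repeated) and near matches (raw
    names differ). The 'name' column is an assumption about the export."""
    groups = defaultdict(list)
    for row in rows:
        groups[normalize(row["name"])].append(row["name"])
    exact = {k: v for k, v in groups.items() if len(v) > 1 and len(set(v)) == 1}
    near = {k: v for k, v in groups.items() if len(set(v)) > 1}
    return exact, near

with open("node_cis.csv", newline="") as f:  # hypothetical export file
    exact, near = classify_duplicates(list(csv.DictReader(f)))
print(f"exact-match groups: {len(exact)}, near-match groups: {len(near)}")
```

Real cleanup has to go through the UCMDB’s reconciliation machinery, but a count like this tells you quickly which bucket dominates your data.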

Root Causes

As we look beyond the numbers and begin to explore why data quality can be so low, we need to look at root causes. At Effectual, we have identified the handful of root causes behind the majority of data quality problems. Solve these, and you will have eliminated most of the situations where data quality issues arise.

  • Out-of-the-Box (OOTB) Discovery Triggers “boil the ocean” during every run, and treat known (and previously discovered) CIs as if they were new CIs.
  • For Discovery, job order matters. Inexperience and random order of Discovery jobs create malformed CIs and relationships.
  • Mixed credential use and creation of multiple Agents create multiple versions of the same actual CI.
  • Inconsistent usage of Discovery jobs over time leads to some CIs and relationships “aging out” and changing, then being recreated. This can also lead to CI attribute and relationship flip-flopping (illustrated in the sketch after this list).
  • Legacy issues with UCMDB and Probe Performance lead to incorrectly merged and reconciled CIs.
  • Out-of-date results processed after more recent results cause errors and incorrect updates, reverting CIs to an older version of the environment rather than the current one.
  • Some elements of the CI Types escape Aging through manual editing/touches, modeling, enrichments, and errors, which then block model updates.
  • Failure to monitor and understand bulk-in errors and result processing.
  • Incorrect usage of enriching relationships, lack of understanding of the CIT, and inconsistent, brute-force use of relationships and related CI functionality.
  • Jobs that fail regularly are often still believed to be running, and so believed to be adding information and updating CIs, when they aren’t.
  • Inattention to the entire CIT model contents, all job results, and the scheduling of jobs.
  • Importing and syncing large data sets isn’t flawless when reconciling relationships. Reconciliation rules often need to be tweaked for this process to work correctly.
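The attribute flip-flopping noted above is easy to spot once you have CI history in hand. Below is a minimal sketch, assuming a history export already ordered by change time (the rows and field layout are hypothetical); any (CI, attribute) pair whose value returns to one seen earlier is a likely victim of competing Discovery jobs or integrations:

```python
from collections import defaultdict

# Hypothetical CI history export: (ci_id, attribute, value) rows,
# already ordered by change time. Real data would come from the
# UCMDB history database or a history report.
history = [
    ("ci-42", "name", "srv01"),
    ("ci-42", "name", "srv01.corp.local"),
    ("ci-42", "name", "srv01"),            # reverted: a flip-flop
    ("ci-42", "os_family", "linux"),
]

def flip_flops(history):
    """Flag (ci_id, attribute) pairs whose value returns to one seen
    before, the signature of competing Discovery jobs or integrations."""
    seen = defaultdict(set)   # (ci, attr) -> values observed so far
    last = {}                 # (ci, attr) -> most recent value
    flagged = set()
    for ci, attr, value in history:
        key = (ci, attr)
        if value in seen[key] and last.get(key) != value:
            flagged.add(key)
        seen[key].add(value)
        last[key] = value
    return flagged

print(flip_flops(history))   # {('ci-42', 'name')}
```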

We have seen these root causes over and over again. The damage begins at the outset of an implementation and compounds over time, so in most cases the more mature a UCMDB, the more problems we see with data quality. If the data is never fully scrutinized and data quality issues are not addressed, the UCMDB will continue to operate on a sub-par basis, creating performance issues and eroding trust in any CMDB-related solutions.

Take Action

Data quality goes beyond the “strength” of the CI data collected and stored. A large amount of this data bloat can occur when Discovery or Reconciliation is incomplete. As we stated above, these bad CIs often comprise in excess of half the data in a given UCMDB. Primarily due to the generic nature of the Discovery and Reconciliation functions within the UCMDB, we notice a higher incidence of bad data when looking specifically at Nodes and Computers.
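One quick heuristic for surfacing these weak Node CIs is to count how many identity attributes each record actually carries; CIs with too few are the ones Reconciliation cannot match safely. The attribute list and column names below are assumptions about a CSV export, not the UCMDB schema itself:

```python
import csv

# Attributes that typically let a Node be reconciled with other sources.
# These column names are assumptions about how the CIs were exported.
KEY_ATTRS = ("name", "primary_ip", "serial_number", "mac_address")

def weak_nodes(rows, min_keys=2):
    """Return node CIs carrying fewer than `min_keys` populated key
    attributes; such CIs are prime candidates for bad merges and bloat."""
    return [row for row in rows
            if sum(bool((row.get(a) or "").strip()) for a in KEY_ATTRS) < min_keys]

with open("node_cis.csv", newline="") as f:  # hypothetical export file
    suspects = weak_nodes(list(csv.DictReader(f)))
print(f"{len(suspects)} node CIs lack enough identity data to reconcile safely")
```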

So what can you actually do about it? Here are a few concrete actions you can take today to begin cleansing your UCMDB data and regain control over the environment:

  • Stop discovery, delete the junk.
  • Tune your Trigger Queries for each job.
  • Know your jobs. Isolate jobs by Purpose and Credential, maybe even Region.
  • Tune your Discovery Job Order & Properties.
  • Tune Your Job Create/Update/Delete requirements.
  • Identify CITs that can use more aggressive Aging (see the sketch after this list).
  • Identify related CITs that change frequently.
  • Learn to manually remove candidates for deletion.
  • Understand the lifecycle of Discovery per CIT/device.
  • Ensure that your Probes & UCMDB are performing everything that is required of them (all daily Discovery should complete in one day).
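For the Aging item above, a simple first pass is to measure, per CI Type, how many CIs Discovery has not touched recently. This sketch assumes a CSV export with ci_type and last_discovered (ISO timestamp) columns and a 30-day staleness threshold; all three are placeholders to tune against your own Discovery schedule:

```python
import csv
from collections import Counter
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=30)  # assumed threshold; tune per schedule

def aging_candidates(rows, now=None):
    """Count, per CI Type, how many CIs have not been rediscovered
    within STALE_AFTER. The 'ci_type' and 'last_discovered' column
    names are assumptions about the export."""
    now = now or datetime.now()
    stale, total = Counter(), Counter()
    for row in rows:
        total[row["ci_type"]] += 1
        if now - datetime.fromisoformat(row["last_discovered"]) > STALE_AFTER:
            stale[row["ci_type"]] += 1
    return {cit: (stale[cit], total[cit]) for cit in total if stale[cit]}

with open("ci_inventory.csv", newline="") as f:  # hypothetical export file
    report = aging_candidates(list(csv.DictReader(f)))
for cit, (n_stale, n_total) in sorted(report.items()):
    print(f"{cit}: {n_stale}/{n_total} CIs stale beyond {STALE_AFTER.days} days")
```

CITs where a large share of CIs sits stale are the candidates for more aggressive Aging; CITs with near-zero staleness probably need none.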

Of course, if you aren’t confident in your knowledge of these areas, or if you just want to skip the manual overhead, Effectual has already done this work for you. Our PIE for Universal Discovery and Universal Discovery Trigger Package are both available to trim the fat, eliminate wasted overhead, reduce run times so you can run Discovery more often and with more confidence, and eradicate recurring data quality issues permanently.

We believe data quality is the top issue that stands in the way of a successful CMDB project. To find out more, register below to access the entire webinar, the presentation slides, and the complete queries and instructions you’ll need to start down the path to real CMDB success.

Download UCMDB Data Quality Webinar and Queries