Over the past few weeks, we’ve presented a webinar, a white paper, and a few blog articles, all in support of the idea of Data Quality. It may appear that we are beating the drum too loudly. But at Effectual, we believe it cannot be overstated: without the foundation of good data, it is impossible to gain value from your HPE ITOM Service Management tools.
In previous articles, we’ve pointed out how things can go wrong when your data is bad. Today we’ll look at “a day in the life” of a CI, and then focus on how things go right when your data quality is high.
Changes can wreak havoc on data quality, yet they are a constant reality within your environment. Though our systems may appear static at times, there is a perpetual movement of data and changes happening automatically behind the scenes. With each event, there exists the possibility of changes that need to be recorded and tracked.
The easiest way to visualize this process is to think about what can happen to a single Configuration Item (CI) during a single day. Take a single server or computer, for example. In one day it can change hundreds of times based upon any number of events:
- It is pinged, then resolved
- Another device/computer/server connects to it
- It connects to another device/computer/server
- It is inventoried (potentially multiple times throughout a given day)
- It connects to a process to satisfy other job requirements or applications
- It is Age Checked
- It is History Checked
- It is checked, rechecked, and used dozens to hundreds of times with or without user involvement, depending on views, enrichments, patterns, and any integration activity
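To make the arithmetic concrete, here is a minimal Python sketch — purely illustrative, with event names and a `CIChangeLog` structure of our own invention, not part of any HPE product — showing how quickly recorded touches accumulate against one CI:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CIChangeLog:
    """Tracks every recorded touch against a single Configuration Item."""
    ci_name: str
    events: List[str] = field(default_factory=list)

    def record(self, event: str) -> None:
        self.events.append(event)

# One hypothetical hour in the life of a single server CI:
server = CIChangeLog("app-server-01")
for source in ("ping/resolve", "inbound connection", "outbound connection",
               "inventory scan", "age check", "history check"):
    server.record(source)

# Even at this modest rate, touches compound over a full day:
touches_per_day = len(server.events) * 24
print(touches_per_day)  # 6 event types x 24 hours = 144 recorded touches
```

If any one of those 144 recorded values is wrong, every later touch repeats and spreads the error.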
As you can see, a single data element can be changed, edited, analyzed, replaced, modified, combined, and shared many times over. And if that one data element is wrong, and then duplicated, the cascading effect on other ITSM tools and support systems is significant. It’s as though the system is playing a game of telephone with itself. As a result, any decisions made from that data, and any business processes or reports that rely on it, carry significant risk. The potential for lost gains is enormous.
Put simply: bad data makes bad decisions, and bad decisions cost you money. To put it another way, a good decision based on bad data is still a bad decision. And once it becomes clear that the underlying data is bad, stakeholders will assume all the data is wrong.
It comes down to trust. Once it is established that the data is unreliable, one of two things happens: people either won’t use it at all, sacrificing the expense and utility of the tools altogether, or they double- and triple-check everything, adding huge manual overhead to processes that should be automated. Enormous sums are spent on research, meetings, and conference calls to reach consensus. One customer was holding two four-hour calls a week, attended by many people, just to discuss changes; the cost of those calls alone is astronomical, not to mention the thousands of hours spent trying to manually “scrub” the data.
Without a system that supports good data, all the steps a CI takes along the way – the people that are influenced by it, all the time and resources spent on support – are wasted. Not because our companies or people are inefficient, but rather because the collective corrupting effect of all those single elements of data creates a system where good people make bad decisions.
Better Decisions Mean Better Value
It doesn’t take long to imagine how the positive effects of clean data start to actually return dividends. Even small improvements in quality at the data element level can restore thousands of hours to an overworked IT department. Hours previously spent sorting through poor-quality data can now be used to support new services and new implementations, close down projects, analyze performance and utilization data, and pursue the many value-added activities of a highly efficient IT organization.
Let’s take a look at what happens when you have significantly increased data quality within your UCMDB:
- Automation you can trust – Bi-directional flow of accurate data between ITSM tools and support systems creates reliable, up-to-date data throughout the platform. When you can trust your data, more processes can be automated, and automation can run faster, greatly increasing efficiency. Increasing data quality saves the countless hours otherwise spent discarding results or manually filtering data.
- Faster automated operations – Many of our clients had discovery tasks or changes that took hours or days to complete (or that would time out and never complete at all). After implementing our solutions, these same processes completed in a fraction of the time: days became hours, hours became minutes.
- Faster response and resolution times – Highly tuned discovery and integrations enable automated workflows that detect changes and potential policy breaches, so updates arrive hourly instead of days or weeks after they’re needed. Zero-day events are detected earlier and managed more easily with automated workflows that track compliance efforts and call out any breaches in policy.
- Increased efficiency in Incident, Problem, and Change processes – Business owners and first responders can quickly see when an event occurs and what it affects. More accurate impact analysis builds confidence, and more timely and reliable decisions can be made from the data presented.
- Asset & Configuration data align to support the Service Catalog – Hardware and maintenance contracts, and the costs associated with data center infrastructure, can be viewed alongside application server VM configurations and their related software contracts and licenses in a single view. This provides a better understanding of the infrastructure, licensing, contractual, and service delivery costs of a service throughout the lifecycles of its component parts.
It has been our experience that once Effectual’s Packaged Integration Enhancements (PIE) are implemented, customers see dramatic improvements in productivity. These are not small, incremental gains, but order-of-magnitude improvements in data quality, runtimes, and system speed, with both automated and ad-hoc queries executing in minutes rather than hours or days. Your data will have better validity, consistency, accuracy, and relevance.
When you start with quality data, it is easier to both scale and improve the ITSM tools and processes. With Effectual’s PIE, we implement bi-directional data flow not only between the UCMDB and Service Manager, but also include full data flow between the UCMDB and Asset Manager. We take what were previously separate (but industry-leading) tools and create a real single integrated platform. This true Configuration Management System (CMS) allows our customers to support all core use cases within their HPE ITSM tools, and to finally achieve a SACM outcome.
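Conceptually, bi-directional flow means each tool both contributes updates and accepts them, rather than one system dumping data one way. The toy Python sketch below — our own illustration, far simpler than a real UCMDB/Asset Manager integration — reconciles two views of the same CI by keeping the newest value for each attribute:

```python
from typing import Any, Dict, Tuple

Record = Dict[str, Tuple[Any, int]]  # attribute -> (value, updated_at)

def reconcile(cmdb_rec: Record, asset_rec: Record) -> Record:
    """Merge two views of one CI, keeping the newest value per attribute.

    The 'newest wins' rule is a deliberately simple stand-in for a real
    reconciliation policy; the names and timestamps are made up.
    """
    merged: Record = {}
    for attr in set(cmdb_rec) | set(asset_rec):
        candidates = [rec[attr] for rec in (cmdb_rec, asset_rec) if attr in rec]
        merged[attr] = max(candidates, key=lambda pair: pair[1])
    return merged

cmdb = {"os": ("RHEL 8", 20230105), "owner": ("ops", 20230110)}
asset = {"os": ("RHEL 9", 20230201), "cost_center": ("CC-42", 20230115)}

merged = reconcile(cmdb, asset)
# "newest wins": merged["os"] is ("RHEL 9", 20230201), taken from the
# asset-side record because it carries the later timestamp.
```

Each side ends up with attributes the other contributed (`owner` from the CMDB side, `cost_center` from the asset side), which is the single-view effect described above.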
With high-quality data, it is possible to support a greater number of CIs and a higher volume of information exchange – more CIs, more relationships, more attributes – all mapped and synchronized between tools. These enhancements strengthen and expand day-to-day operations and the Incident, Problem, and Change processes. They are also there to support you when a major event happens (such as a Zero-day vulnerability or an auto-update gone awry), giving your enterprise a truly agile advantage.
The one thing we want to emphasize is this: data quality starts at the data element and radiates outward from that point. A system that creates bad data can never be current or accurate; you’ll always be chasing your tail. When it comes to data quality, the benefits more than pay for themselves in overall cost, speed, and efficiency.
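Starting at the data element means validating a value once, at the point of entry, before any downstream tool can duplicate it. A minimal Python sketch (the serial-number format here is entirely made up for illustration):

```python
import re

def validate_serial(serial: str) -> str:
    """Reject malformed serial numbers at the point of entry.

    The format (three uppercase letters, a dash, six digits) is a
    hypothetical example; the point is that one check at the element
    level stops a bad value from radiating into every other system.
    """
    if not re.fullmatch(r"[A-Z]{3}-\d{6}", serial):
        raise ValueError(f"malformed serial: {serial!r}")
    return serial

validate_serial("SRV-001234")  # accepted, safe to propagate
# validate_serial("srv-1234") would raise ValueError here, before the
# bad value could be duplicated into any downstream tool.
```

One cheap check at the source replaces the thousands of hours of manual “scrubbing” described earlier.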
To find out how to get good data and keep it good, just