How to Measure Data Accuracy?


Explaining the Rise in Pickpocketing

Winston Chen

If you believe that better data quality has huge business value, and you believe the old axiom that you cannot improve something if you cannot measure it, then it follows that measuring data quality is very, very important. And it’s not a one-time exercise. Data quality should be measured regularly to establish a baseline and trend; otherwise continuous improvement wouldn’t be possible.

Measuring data quality is not simple. We have all been exposed to metrics like accuracy, completeness, timeliness, integrity, consistency, appropriateness, etc. Wikipedia’s entry for Data Quality says there are over 200 such metrics. Some metrics, like completeness and integrity, are relatively easy to measure: most data quality tools and ETL tools can express them as executable rules. But others are a lot harder to measure.
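To make that concrete, here is a minimal sketch of what such an executable rule might look like, assuming the crime reports sit in a pandas DataFrame; the table and column names (`crime_type`, `officer_id`) are invented for illustration, and real data quality or ETL tools would express the same checks in their own rule languages.

```python
import pandas as pd

# Hypothetical extract of crime reports; table and column names are invented.
reports = pd.DataFrame({
    "report_id": [1, 2, 3, 4],
    "crime_type": ["Pickpocketing", None, "Burglary", "Pickpocketing"],
    "officer_id": [101, 102, 999, 103],
})
valid_officer_ids = {101, 102, 103}  # reference data the integrity rule checks against

# Completeness: share of reports with a non-null crime_type.
completeness = reports["crime_type"].notna().mean()

# Integrity: share of reports whose officer_id exists in the reference set.
integrity = reports["officer_id"].isin(valid_officer_ids).mean()

print(f"crime_type completeness: {completeness:.0%}")   # 75%
print(f"officer_id integrity:    {integrity:.0%}")       # 75%
```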

Accuracy is notorious. Let me give you an example. A Canadian law enforcement agency saw that in its crime statistics, pickpocketing was unusually high. Further investigation revealed that in the application for entering crime reports, “Pickpocketing” is the first item in the dropdown list box for crime type. So, how would one go about measuring the accuracy of this field? I can only think of two good ways.

First is to manually audit a sample. Take a small percentage of new crime reports and have data analysts go through them to determine if, given other pieces of descriptive information, the crime type field is accurate.
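As a rough illustration of that audit, once analysts have labelled a random sample as accurate or not, the result can be turned into an accuracy estimate with a margin of error. The sample size and error count below are made up, and the normal-approximation interval is just one simple choice.

```python
import math
import random

# Hypothetical population of new crime report IDs awaiting audit.
report_ids = list(range(1, 5001))

# Draw a small random sample for analysts to review by hand.
sample = random.sample(report_ids, 200)

# Suppose the analysts judge 23 of the 200 sampled reports to have an
# inaccurate crime type (made-up numbers, purely for illustration).
inaccurate = 23
p = 1 - inaccurate / len(sample)           # estimated accuracy of the field
se = math.sqrt(p * (1 - p) / len(sample))  # standard error of that estimate
low, high = p - 1.96 * se, p + 1.96 * se   # ~95% normal-approximation interval

print(f"Estimated accuracy: {p:.1%} (95% CI {low:.1%} to {high:.1%})")
```

The wider that interval, the larger the sample analysts need to audit before a trend in the metric means anything.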

Second is to allow anyone in the organization to identify data inaccuracies and raise issues. The issues can then be routed to the right person for correction. And the issues can be rolled up to compile metrics. This approach is akin to crowd-sourcing.
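A bare-bones sketch of how such crowd-sourced issue raising might hang together, with entirely hypothetical field names and routing rules rather than any particular product’s API:

```python
from collections import Counter
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DataIssue:
    record_id: str
    field_name: str
    description: str
    reported_by: str
    domain: str                       # used to route the issue to a steward
    raised_at: datetime = field(default_factory=datetime.utcnow)
    status: str = "open"

# Hypothetical routing table: data domain -> responsible data steward.
stewards = {"crime_reports": "records_unit", "offenders": "analyst_team"}

issues = [
    DataIssue("CR-1042", "crime_type",
              "Listed as pickpocketing, narrative describes a burglary",
              "j.smith", "crime_reports"),
]

for issue in issues:
    owner = stewards.get(issue.domain, "data_governance")
    print(f"Routing issue on {issue.record_id} ({issue.field_name}) to {owner}")

# Roll the backlog up into a metric, e.g. open issues per field.
open_issues_by_field = Counter(i.field_name for i in issues if i.status == "open")
print(open_issues_by_field)
```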

I’ve seen other ways, but I don’t think they’re very effective. You could compare the data with authoritative records. But if you had authoritative records, this wouldn’t be a problem in the first place! You could also monitor the statistical distribution and detect anomalies. For example, pickpocketing typically represents 10% of all crimes; if it goes up to 15%, then there may be a problem. But it’s very hard to tell whether the data is wrong or there has been an actual change in the real world. You end up resorting to manual auditing again.
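For what it’s worth, the distribution check itself is easy to automate, even if interpreting a hit still needs a human. A small sketch, with invented baseline and counts:

```python
import math

# Invented monthly figures for illustration.
baseline_share = 0.10   # pickpocketing has historically been ~10% of all crimes
total_crimes = 2_000
pickpocketing = 300     # 15% this month

observed = pickpocketing / total_crimes
# Simple z-test of the observed proportion against the historical baseline.
se = math.sqrt(baseline_share * (1 - baseline_share) / total_crimes)
z = (observed - baseline_share) / se

if abs(z) > 3:
    print(f"Pickpocketing at {observed:.0%} vs. baseline {baseline_share:.0%} "
          f"(z = {z:.1f}): flag for manual audit - bad data or a real change.")
```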

Of these techniques, I think crowd-sourcing is the best. The trick is to provide end users with a dead easy way to raise an issue the moment an inaccuracy is discovered. Both Kalido MDM and Data Governance Director provide browser interfaces for raising issues. We also have an open API for issues to be reported, tracked, and acted upon.

Ideally, in every screen that presents data to end users, whether it’s a business application, dashboard, or report, there is a button for raising data issues. So SAP and Oracle, what are you waiting for?


8 Responses to “How to Measure Data Accuracy?”

  1. Dylan Jones July 30, 2010 at 1:53 am #

    One approach I’ve seen to reducing these kinds of data-entry-related inaccuracies is to design contextual, dynamic forms.

    For example, if you are entering details of a pickpocket, you may wish to enter details of the victim, time of day, street, pickpocket approach – was it violent/in busy crowd/at a concert etc.

    If the crime was burglary then there would be a completely different set of fields.

    The point being that sometimes the form design itself creates inaccuracies; by making it easier for staff to enter the correct information, I’ve seen far better accuracy.
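    A toy sketch of the idea, with made-up crime types and field names: the fields the form asks for (and validates) depend on the crime type chosen.

    ```python
    # Hypothetical mapping from crime type to the contextual fields the form
    # should show and require.
    FIELDS_BY_CRIME_TYPE = {
        "Pickpocketing": ["victim_name", "time_of_day", "street", "approach"],
        "Burglary": ["property_address", "point_of_entry", "items_taken"],
    }

    def missing_fields(crime_type: str, submitted: dict) -> list:
        """Return the contextual fields that were left blank."""
        required = FIELDS_BY_CRIME_TYPE.get(crime_type, [])
        return [f for f in required if not submitted.get(f)]

    print(missing_fields("Pickpocketing", {"victim_name": "A. Doe", "street": "Main St"}))
    # -> ['time_of_day', 'approach']
    ```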

    I agree completely with your point, though, that it is far too difficult for downstream data users to flag issues with the data; fixing this is just a matter of common sense and basic process improvement.

    • Winston Chen July 30, 2010 at 6:58 am #

      Dylan, yes, form design can absolutely improve accuracy. And this is something application vendors should pay more attention to. Also, as you said, process improvement is ultimately the most effective cure for data quality problems. Thanks for your comment.

      • Julian Schwarzenbach July 31, 2010 at 6:17 am #

        Winston, Dylan,

        Another way to counteract the problem of the default option being left unchanged is to set the default value as “Please select”. This makes it even easier to spot those that have not bothered to enter a suitable value!

        Julian

  2. Julian Schwarzenbach July 30, 2010 at 2:48 am #

    Winston,

    I fully agree that measuring accuracy is both a vital activity and also one that is difficult to undertake. Your ‘pickpocket’ example is a good one, as it will be difficult to go back to those involved in a crime to confirm the details of the events.

    In the physical asset management world accuracy checking is made difficult for a number of reasons:
    1. Assets are frequently widely dispersed, so accuracy checking may involve significant amounts of travel
    2. Assets may be in hazardous locations which prevent easy access and may require permits to work, multi-person teams etc.
    3. Assets such as pipes and cables will typically be buried, so cannot be accessed to check the data accuracy
    4. Due to the wide variations in types and ages of assets deployed, it can be difficult to ensure that samples of assets checked for accuracy represent a valid subset of the overall asset stock
    5. Relying on checking data when someone has to respond to a problem will not be representative of the full population of assets

    Although all these points indicate the difficulty of assessing the accuracy of asset data, these should not be used as excuses for not assessing your data accuracy. Without a valid assessment of accuracy there is a risk that resulting business decisions may be compromised.

    Julian

    • Winston Chen July 30, 2010 at 7:00 am #

      Julian, thanks for your comment. I heard a story from an oil pipeline operator about how often a truck drives far out to perform maintenance on an asset, only for the driver to realize that the data about the asset is wrong and he or she has brought the wrong equipment. You’re right, physical assets present their own unique challenges.

  3. Ken O'Connor August 2, 2010 at 11:23 am #

    Hi Winston,

    Great post – well done. I really like the idea of empowering everyone in the organisation to flag data quality issues.

    Your post prompted me to write a new post about what I call the “Ryanair Data Entry Model”.

    Rgds Ken

  4. Sushil Kumra December 16, 2010 at 12:10 pm #

    Measuring data quality is a challenging but not impossible task. There is no silver bullet. One has to define valid values for each data element collected so that one knows what one is measuring against. Descriptive data collection and validation is always a challenge. In descriptive data collection, drop-downs are often used to minimize keystrokes and improve data accuracy. Humans being human will make mistakes and select the wrong choice.
    To fix this problem, one needs to develop data validation based on the event context. If we are collecting data about a crime, as Dylan suggested, there are some data elements unique to a particular crime. For example, a pickpocketing location that is a house address definitely raises suspicion about whether the captured crime type is accurate. This validation needs to take place as the data is being submitted to be saved and stored. Developing context-based validation is a daunting task, but I believe it will be effective.
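    A rough sketch of one such context rule, using invented field names: flag a report whose crime type and location type do not plausibly go together, before it is saved.

    ```python
    # Hypothetical context rule: combinations of crime type and location type
    # that are implausible and should be flagged for review at submission time.
    IMPLAUSIBLE = {
        ("Pickpocketing", "private residence"),
        ("Shoplifting", "private residence"),
    }

    def context_warnings(report: dict) -> list:
        combo = (report.get("crime_type"), report.get("location_type"))
        if combo in IMPLAUSIBLE:
            return [f"crime_type '{combo[0]}' is unusual for location_type '{combo[1]}'"]
        return []

    print(context_warnings({"crime_type": "Pickpocketing",
                            "location_type": "private residence"}))
    ```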
    Another simple way to measure data quality is by using a “Data Profiling” tool. One can determine what kinds of data quality issues are there and take appropriate measures to fix them.

    • Winston Chen January 6, 2011 at 10:22 am #

      Thanks Sushil for your comment. You’re absolutely right that event context is the key to solving the problem, but it is not easy. Context is a hard thing for computers to get — which makes automation hard.
