10 Key Characteristics of Data Quality

Data quality means ensuring your organization’s information is appropriate for its intended purpose and free of errors. To determine whether your data is accurate, your organization must know how it will use the data. A customer address that is verified correct only to the state level, for example, may be adequate for identifying the names of an individual’s United States Senators. Finding the same person’s congressperson in the US House of Representatives, however, requires greater address granularity.

One way of ensuring data quality is enforcing standard acceptable values. The abbreviations the US Postal Service uses for addresses are a good example. In addresses, boulevards are always “Blvd”, never “Boul” or “Boulv”. Expressways are always “Expy”, never “Expr”, “Expw”, or “Express”.
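As a simple illustration of enforcing standard acceptable values, here is a minimal sketch in Python that maps a handful of street-suffix variants to their USPS abbreviations. The lookup table is only a small fragment of the full USPS suffix list and the addresses are made up for the example.

```python
# Minimal sketch: normalize street-suffix variants to USPS standard abbreviations.
# The mapping covers only a few suffixes; a real routine would use the complete
# USPS suffix table.

SUFFIX_MAP = {
    "boulevard": "Blvd", "boul": "Blvd", "boulv": "Blvd", "blvd": "Blvd",
    "expressway": "Expy", "expr": "Expy", "expw": "Expy", "express": "Expy",
    "street": "St", "str": "St",
    "avenue": "Ave", "av": "Ave", "ave": "Ave",
}

def standardize_suffix(address_line: str) -> str:
    """Replace the trailing street suffix with its USPS abbreviation, if known."""
    words = address_line.split()
    if not words:
        return address_line
    suffix = words[-1].rstrip(".").lower()
    if suffix in SUFFIX_MAP:
        words[-1] = SUFFIX_MAP[suffix]
    return " ".join(words)

print(standardize_suffix("100 Sunset Boulv"))      # -> 100 Sunset Blvd
print(standardize_suffix("42 Riverside Express"))  # -> 42 Riverside Expy
```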

Continuous validation is important for maintaining data quality. Your data can be corrupted or overwritten with new values, or acceptable data values can change. For instance, communities change street names more often than you might think. A data record with a valid street name might easily be invalid a few months later, when a city names a thoroughfare after a prominent citizen.
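Continuous validation can be as simple as periodically rechecking stored values against current reference data. The sketch below is a generic illustration, not any vendor’s API; valid_street_names stands in for whatever authoritative, regularly refreshed source your organization uses.

```python
# Minimal sketch of periodic revalidation: flag records whose street name is no
# longer in the current reference list.

def revalidate(records, valid_street_names):
    """Return the records whose street name no longer matches the reference data."""
    return [r for r in records if r["street"] not in valid_street_names]

records = [
    {"id": 1, "street": "Oak St"},
    {"id": 2, "street": "Mill Rd"},   # street renamed since the record was captured
]
valid_street_names = {"Oak St", "Johnson Pkwy"}  # current reference data

for stale in revalidate(records, valid_street_names):
    print(f"Record {stale['id']} needs review: '{stale['street']}' is no longer valid")
```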

Here are ten key characteristics that determine the quality of your data (a few of them are illustrated in the code sketch after the list):

  1. Accuracy—Is the data free of mistakes?
  2. Accessibility—Can the data be obtained when needed?
  3. Comprehensiveness—Is all the data present as required by the applications that use it?
  4. Consistency—How reliable is the data? Is the value of the data the same across all systems?
  5. Currency—How recently was the data collected or updated?
  6. Definition—Is the meaning of the data clear or is it open to interpretation?
  7. Granularity—Is the data stored at a level of detail adequate for its purpose?
  8. Precision—Does the data fall within the range of acceptable values?
  9. Relevancy—Is the data pertinent to the applications that use it?
  10. Timeliness—Is the data stored soon after it is collected, or has it gone stale?
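A few of these characteristics lend themselves to automated checks. The sketch below is a generic illustration with hypothetical field names and thresholds; it spot-checks comprehensiveness, precision, and currency for a single record.

```python
# Minimal sketch of automated checks for three of the characteristics:
# comprehensiveness (required fields present), precision (values within an
# acceptable range), and currency (recently updated). Field names and limits
# are hypothetical.

from datetime import date, timedelta

REQUIRED_FIELDS = {"name", "email", "last_updated"}
MAX_AGE = timedelta(days=365)

def check_record(record, today=None):
    """Return a list of data quality issues found in one record."""
    today = today or date.today()
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"comprehensiveness: missing {sorted(missing)}")
    age = record.get("age")
    if age is not None and not (0 <= age <= 120):
        issues.append(f"precision: age {age} outside acceptable range")
    updated = record.get("last_updated")
    if updated is not None and today - updated > MAX_AGE:
        issues.append(f"currency: last updated {updated}")
    return issues

record = {"name": "Pat Doe", "age": 140, "last_updated": date(2020, 1, 15)}
print(check_record(record, today=date(2024, 6, 1)))
```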

Data quality management is the process you use to ensure the integrity of your data. As companies rely more on artificial intelligence (AI) and less on human interpretation for some tasks, the quality of the data to which AI has access becomes even more important.

Some data is specialized and requires custom routines to scan, correct, or reject bad data. Product names within a manufacturing company, for example, can’t be validated with a common utility. The spelling, categorization, and use of product names are unique to the company. Fortunately, address data, one of the most valuable pieces of information an organization may store, can be compared to a set of standards and automatically corrected.
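As a sketch of what such a custom routine might look like, the example below checks product names against a hypothetical company-maintained catalog and suggests the closest match for near-miss spellings. The catalog entries and matching threshold are assumptions for illustration only.

```python
# Minimal sketch of a company-specific routine: validate product names against an
# internally maintained catalog and suggest the closest match for near-miss spellings.
# Catalog entries are hypothetical.

import difflib

PRODUCT_CATALOG = ["Widget Pro 200", "Widget Lite 100", "Gasket Seal X"]

def validate_product_name(name):
    """Return (is_valid, suggestion); suggestion is None when no close match exists."""
    if name in PRODUCT_CATALOG:
        return True, None
    matches = difflib.get_close_matches(name, PRODUCT_CATALOG, n=1, cutoff=0.8)
    return False, matches[0] if matches else None

print(validate_product_name("Widget Pro 200"))   # (True, None)
print(validate_product_name("Widjet Pro 200"))   # (False, 'Widget Pro 200')
print(validate_product_name("Unknown Item"))     # (False, None)
```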

Firstlogic and SAP’s data quality software ensures address data is accurate, standardized, and current. For updating addresses, the Move Update module compares addresses on file with the change-of-address requests people submit to the USPS and replaces old addresses with new ones. The software also augments data stored in corporate databases with added information you can use to develop comprehensive profiles of your customers and prospects.

For descriptions of all Firstlogic and SAP’s data quality products, visit www.firstlogic.com.

Data quality can be a big job. The more data sources, volumes, uses, and collection points you have, the more complex the task. Attending to the ten key characteristics of data quality helps ensure your data is reliable and yields accurate, actionable results.