Health Language Blog

3 Ways Data Normalization Helps Improve Quality Measures and Reporting

Posted on 09/09/14


Healthcare providers and payers are increasingly on the hook for monitoring the effectiveness and safety of care and generating reports for internal use and external regulators.

A number of factors contribute to the heightened focus on quality measures and reporting. Population health management, promoted under the Affordable Care Act (ACA), seeks to identify at-risk populations, tailor interventions for individuals, and then measure the clinical impact.  Value-based purchasing requires quality reporting  “to measure, report, and reward excellence in health care delivery.”

The Medicare and Medicaid programs administered by the Centers for Medicare & Medicaid Services (CMS) have offered a number of incentives for quality reporting. The Physician Quality Reporting System (PQRS) offers incentive payments for reporting on quality measures. The EHR incentive program (often referred to as “Meaningful Use”) has broadened quality reporting. In order to qualify for the financial incentives under Meaningful Use, physicians and hospitals must submit clinical quality measure (CQM) data from a certified EHR system.

With Medicare incentives for Accountable Care Organizations (ACOs) under the ACA, incentives are evolving from “pay for reporting” to actual “pay for performance.” Those ACOs established to coordinate the care of a Medicare patient population can become eligible to share in any savings that stem from quality improvements and cost reduction. But ACOs must monitor 33 quality measures, as well as document cost savings, in order to qualify for the shared savings.

Beyond Medicare and Medicaid, many health plans have been using the National Committee for Quality Assurance’s Healthcare Effectiveness Data and Information Set (HEDIS) as a quality-of-care yardstick. As the ACO model advances into the working age population, requirements for quality reporting will surely increase.

Quality monitoring and reporting rely on the collection of data from a variety of sources. Unfortunately, the explosive growth in health IT (HIT) has resulted in patient data being scattered across an array of rapidly proliferating IT systems – each with its own way of representing clinical terms.

The key to making this sort of analytic initiative work is ensuring your ecosystem consists of structured data—data that can be mined for quality reporting. Data normalization is required to 1) standardize local content to controlled formats or ‘terminologies’ and 2) semantically translate data between standards to eliminate ambiguity of meaning.
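To make those two steps concrete, here is a minimal Python sketch. The lookup tables below are illustrative placeholders (the example term and codes are for type 2 diabetes); a real system would draw on a curated terminology service, not hard-coded dictionaries.

```python
# Step 1: standardize a local term to a controlled terminology (here, SNOMED CT).
# Step 2: semantically translate between standards (SNOMED CT -> ICD-9-CM).
# All mappings are illustrative, not production terminology content.
LOCAL_TO_SNOMED = {"sugar diabetes": "44054006"}  # type 2 diabetes mellitus
SNOMED_TO_ICD9 = {"44054006": "250.00"}

def normalize(local_term):
    """Standardize a local term, then translate it to ICD-9-CM (or None)."""
    snomed = LOCAL_TO_SNOMED.get(local_term.strip().lower())
    return SNOMED_TO_ICD9.get(snomed) if snomed else None
```

In practice each step would be backed by far larger maps and human curation, but the pipeline shape is the same: local text in, standard codes out.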

Here are three ways data normalization helps improve quality measures and reporting:

1. Cleansing Data From A Range of Sources

As noted above, quality reporting typically involves a range of IT applications and data sources.

Meaningful Use CQMs, for instance, measure such items as clinical processes, patient safety, care coordination and population health. The corresponding data collection effort spans multiple systems – including EHRs and practice management systems – many of which use different terminology and coding schemas. Data normalization cleans and organizes that data so quality can be effectively monitored, measured and reported.

As an example, take the case of tracking hemoglobin A1c values in an environment that contains several IT systems that use unstructured text and local code sets. Tracking hemoglobin A1c values is important in assessing the quality of care for the population of patients with diabetes. However, local laboratory codes are notoriously unstandardized; the same test may be referred to as “HbA1c” at one institution, “A1c” at a second, and “glycosylated hemoglobin” at a third. Normalizing these to a common LOINC code allows for a comprehensive apples-to-apples cross-institutional view for population management.
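A sketch of that lab normalization might look like the following. The synonym table is illustrative, though 4548-4 is the standard LOINC code for hemoglobin A1c in blood; a production mapping would be maintained in a terminology platform rather than in code.

```python
# Illustrative local-name-to-LOINC lookup for hemoglobin A1c results.
# In practice this table would be curated and far more extensive.
LOCAL_TO_LOINC = {
    "hba1c": "4548-4",
    "a1c": "4548-4",
    "glycosylated hemoglobin": "4548-4",
}

def normalize_lab(local_name):
    """Return the LOINC code for a local lab name, or None if unmapped."""
    return LOCAL_TO_LOINC.get(local_name.strip().lower())
```

Once every institution’s results resolve to the same LOINC code, the cross-institutional A1c report becomes a simple query instead of a string-matching exercise.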

As another example, consider a quality measure that tracks the number of patients with asthma who have been admitted to the hospital. The measure is a simple formula:

(Patients discharged from the hospital with a principal diagnosis code (ICD-9-CM) for asthma)/
(Total population of patients with a diagnosis of asthma)

Now, let’s say the quality measure analysis engine responsible for computing this measure is based solely on claims data (ICD-9-CM codes) sourced from a payer that is part of the ACO. However, you may also be able to identify patients with an active problem of asthma based on EMR data feeds. Problem list entries for asthma may be represented as SNOMED CT codes or free-text entries. If you’re not able to map (semantically translate) these to ICD-9-CM, you may be missing patients with asthma who belong in the denominator of your quality measure. Missing these patients will inflate your hospitalization rate for patients with asthma, making it appear that your organization is providing suboptimal care. This can undermine your ACO reimbursement and contracting. In this case, you need data normalization to semantically translate between two different terminology standards – ICD-9-CM and SNOMED CT – and possibly to map free-text entries to these standards as well.
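Here is a rough sketch of how that denominator could be built from both feeds. The record shapes and the one-entry mapping table are hypothetical (195967001 is the SNOMED CT concept for asthma and 493.x is the ICD-9-CM asthma range); the point is the union of sources before the measure is computed.

```python
# Illustrative SNOMED CT -> ICD-9-CM map; a real one comes from a terminology service.
SNOMED_TO_ICD9 = {"195967001": "493.90"}  # asthma -> asthma, unspecified

def asthma_denominator(claims, emr_problems):
    """Union patient IDs with an asthma diagnosis across claims and EMR feeds."""
    patients = {c["patient_id"] for c in claims if c["icd9"].startswith("493")}
    for p in emr_problems:
        icd9 = SNOMED_TO_ICD9.get(p["snomed"])
        if icd9 and icd9.startswith("493"):
            patients.add(p["patient_id"])
    return patients

def asthma_admission_rate(discharged_ids, denominator_ids):
    """The measure above: discharged asthma patients over all asthma patients."""
    return len(discharged_ids & denominator_ids) / len(denominator_ids) if denominator_ids else 0.0
```

Dropping the EMR feed from `asthma_denominator` shrinks the denominator and inflates the rate, which is exactly the reporting distortion described above.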


2. Bridging the Claims and Clinical Data Gap

Quality measures and reporting originated on the payer side. Payer organizations have invested in analytical systems that ingest quality data and crunch the numbers. These systems have been predominantly built using administrative or claims data. More recently, the adoption of EHRs means that clinical data is also funneling into quality reporting. While quality reporting programs still frequently rely on either administrative data or clinical data, some healthcare entities such as ACOs seek to use both for a more comprehensive view of patient care.

For example, claims systems may store medication data using NDC codes, while EMR systems may use a variety of proprietary medication terminologies for the same purpose. Data normalization can translate these disparate sources into RxNorm codes. This ensures accurate computation of quality measures such as selecting the most appropriate antibiotic for community-acquired pneumonia (an eligible hospital CQM) or providing beta blockers for patients with heart failure (an eligible professional CQM).
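A minimal sketch of that bridging step might look like this. Every code value and table entry below is a made-up placeholder; the structure simply shows claims-side NDC codes and EMR-side proprietary codes converging on a single RxNorm identifier.

```python
# Illustrative placeholder mappings; real NDC-to-RxNorm and local-to-RxNorm
# tables come from terminology services, not hard-coded dictionaries.
NDC_TO_RXNORM = {"00000-1111-22": "123456"}
LOCAL_TO_RXNORM = {"MED_BBLOCKER_01": "123456"}

def to_rxnorm(source_system, code):
    """Map a source-specific drug code to an RxNorm identifier, if known."""
    table = NDC_TO_RXNORM if source_system == "claims" else LOCAL_TO_RXNORM
    return table.get(code)
```

Once both feeds resolve to RxNorm, a single quality rule (e.g., “patient with heart failure is on a beta blocker”) can be evaluated against claims and clinical data together.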

3. Filtering and De-duplicating Redundant Data

Differing terminologies and code sets aren’t the only data normalization challenge in quality measurement and reporting. When data is pulled in from disparate sources, there’s considerable potential for duplication. As healthcare delivery organizations attempt to aggregate data from multiple sources to present, for example, a unified problem list, the last thing a physician wants to see is a list of problems that all mean the same thing. Duplication can also introduce inaccuracies that skew quality measures. For example, when attempting to analyze clinical data, the term ‘ascites’ can be represented by many different terms, such as ‘hydroperitoneum’, ‘edematous abdomen’, or even ‘abdominal dropsy’. Removing these duplicates and representing them as a single problem or diagnosis is critical to ensure the patient isn’t counted multiple times or represented with multiple illnesses in a report. Data normalization counters this problem, de-duplicating data from different sources.
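De-duplication boils down to mapping synonyms to one normalized concept before counting. The synonym table below is a toy version built from the ascites example above; a real system would use the synonym sets maintained in a terminology platform.

```python
# Illustrative synonym table: many surface terms, one normalized concept.
SYNONYMS = {
    "ascites": "ascites",
    "hydroperitoneum": "ascites",
    "edematous abdomen": "ascites",
    "abdominal dropsy": "ascites",
}

def dedupe_problems(entries):
    """Collapse synonymous problem-list entries, preserving first-seen order."""
    seen, result = set(), []
    for term in entries:
        concept = SYNONYMS.get(term.strip().lower(), term.strip().lower())
        if concept not in seen:
            seen.add(concept)
            result.append(concept)
    return result
```

After this pass, a patient whose aggregated record lists ‘ascites’, ‘hydroperitoneum’, and ‘abdominal dropsy’ contributes one problem, not three.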

Ready To Report?

As a physician, hospital administrator or health plan manager, you will probably have multiple reporting requirements. How are you dealing with quality measures? How do you manage the process? Leave your comments below.


Topics: data normalization

About the Author

Dr. Steve Ross, MD is a physician informaticist with Health Language, part of Wolters Kluwer Health. Dr. Ross joined Health Language after 16 years as faculty in the University of Colorado Division of General Internal Medicine, researching personal health records and health information exchanges.