The growth of electronically available data in healthcare presents a quandary.
On the one hand, electronic health records and other health IT systems free clinical information from paper charts and file rooms. On the other hand, that patient information is spread across multiple healthcare players and myriad IT systems. Healthcare providers need to exchange data to collaborate on patient care in emerging delivery models such as accountable care organizations (ACOs). In addition, healthcare researchers need to extract and aggregate data from a variety of electronic sources for data warehousing and analysis.
But sharing data among disparate IT systems has proven difficult. Data aggregation has also run into hurdles. The reason in both cases is the same: different data sources use different terminology content sets. The industry lacks a single, universally accepted standard that nails down the meaning of various types of healthcare data in an unambiguous way. Two labs, for example, may code and report test results differently. If computers are to successfully share or analyze that data, a method must be found to map the data to a commonly understood meaning. Otherwise, making like-to-like comparisons becomes impossible.
Finding Common Ground
This is where data normalization plays a pivotal role. The task here is to harmonize data from different sources into standard terminologies. A data normalization solution can provide a shared vocabulary that can ease data exchange and help improve data analytics-driven initiatives such as population health management.
Here are three characteristics to look for in a solution:
1. The ability to automatically map local terminologies to standards
Localized terminologies proliferated among individual providers, labs and other healthcare entities before the era of electronic records. One example is the glut of local, proprietary laboratory test codes that have surfaced over the years. But without electronic data and mechanisms to send electronic messages back and forth, no one was particularly worried about terminology incompatibilities. The healthcare industry was not yet focused on the issue of machine-to-machine communication and semantic interoperability.
Now that healthcare organizations aim to share and aggregate data, the task is to bridge the gap between local terminologies and common standards. A data normalization solution should be able to match the terminology from local healthcare data sources to those broader standards to promote communication. Moreover, this mapping should occur automatically. Why? The volume of data healthcare providers will need to exchange every day demands that a large percentage of it be handled automatically. Manual intervention and exception management won’t work in a high-volume environment.
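To make the idea concrete, here is a minimal sketch of automated mapping with an exception queue: local lab codes are translated to LOINC by lookup, and only unmatched codes fall through to manual review. The function and table names, and the local codes themselves, are illustrative assumptions, not part of any real product; the LOINC codes shown are the published codes for serum glucose and blood hemoglobin.

```python
# Hypothetical local-code-to-LOINC lookup table. The local codes
# ("GLU-SER", "HGB-BLD") are invented; the LOINC codes are real.
LOCAL_TO_LOINC = {
    "GLU-SER": "2345-7",   # Glucose [Mass/volume] in Serum or Plasma
    "HGB-BLD": "718-7",    # Hemoglobin [Mass/volume] in Blood
}

def normalize_messages(messages):
    """Map each message's local lab code to LOINC automatically;
    queue only the unmapped leftovers for manual review."""
    normalized, exceptions = [], []
    for msg in messages:
        loinc = LOCAL_TO_LOINC.get(msg["local_code"])
        if loinc:
            normalized.append({**msg, "loinc": loinc})
        else:
            exceptions.append(msg)  # exception management handles only these
    return normalized, exceptions

batch = [
    {"local_code": "GLU-SER", "value": 95},
    {"local_code": "XYZ-123", "value": 7},   # no mapping exists
]
mapped, review = normalize_messages(batch)
# mapped carries the LOINC code; review holds the one unmapped message
```

The point of the structure is the ratio: in a high-volume environment, nearly everything should flow through the lookup path, leaving the exception queue small enough for humans to manage.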
2. The ability to model and create custom mappings to standards
As it turns out, there’s more than one way to map data; the right approach depends on a healthcare organization’s objectives for a given data exchange. In addition, some mapping efforts -- such as those matching ICD-9 to ICD-10 or localized, proprietary lab codes to LOINC -- provide only “approximate maps.” An organization may therefore need custom maps that provide a tighter fit between codes. The need to manage custom content becomes inevitable, regardless of whether an organization is a provider or a health plan, so a data normalization solution should offer the ability to create value-added content and custom maps.
Consider this example: A health plan will want to know how many of its providers prescribe a Codeine 100 MG Extended Release Tablet. Provider A may represent a local drug code as “codERtab.1gm.” But if the health plan wants to aggregate that data point into a report, it must first model and map the local code to the standard code RxNorm 248550.
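The health-plan example above can be sketched in a few lines: each provider’s local drug code is first mapped to the standard RxNorm code, and only then can the plan count prescribers across its network. Provider A’s local code and RxNorm 248550 come from the example; Provider B’s local code and the mapping-table layout are invented for illustration.

```python
# Hypothetical per-provider mapping table keyed by (provider, local code).
# Provider B's "COD-ER-100" is an invented local code for illustration.
LOCAL_TO_RXNORM = {
    ("Provider A", "codERtab.1gm"): "248550",
    ("Provider B", "COD-ER-100"): "248550",
}

# Sample prescription records aggregated from two providers' systems.
prescriptions = [
    {"provider": "Provider A", "local_code": "codERtab.1gm"},
    {"provider": "Provider B", "local_code": "COD-ER-100"},
    {"provider": "Provider A", "local_code": "codERtab.1gm"},
]

# After normalization, "how many providers prescribe RxNorm 248550?"
# becomes a simple like-to-like query over one standard code.
prescribers = {
    rx["provider"]
    for rx in prescriptions
    if LOCAL_TO_RXNORM.get((rx["provider"], rx["local_code"])) == "248550"
}
```

Without the mapping step, “codERtab.1gm” and “COD-ER-100” would count as two unrelated drugs; with it, both roll up to the same standard code and the report comes out right.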
3. The availability of supporting tools
A comprehensive data normalization solution should include tools, resources and workflows that ease the task of mapping disparate data. A mapping tool, for instance, helps organizations create and maintain their own mappings as they transition from local lab codes to LOINC. Alternatively, a healthcare entity might prefer to use a professional services group associated with the data normalization solution. That group would use the same mapping tools, and apply its own workflow expertise, to assist the healthcare customer with its mapping efforts.
Overall, a data normalization solution must provide automated mapping and the ability to create custom maps. Those are the core features that make normalization viable in the complex, data-heavy healthcare industry. But situations will arise where the base solution isn’t quite enough. A particularly tricky mapping project may call for a specialized tool. Or a healthcare organization may lack the staff to run a tool and so requires outside assistance. In light of those cases, a data normalization solution should include supplemental tools, professional services staff and workflow optimization resources.
Enable Sharing and Aggregation
Normalization enables data sharing and aggregation. ACOs, big data analytics, population health management and data sharing on any level all require normalized data to ensure smooth and reliable communication.
Do you have tools for normalizing data today? Leave your comments below.