Health information exchange (HIE) is a critical component of the future of healthcare. As more systems are created to improve patient outcomes and operational efficiencies, it will become increasingly important for various health data systems to be able to communicate with one another.
More payments are being tied to successful reporting on a number of fronts, such as Meaningful Use and the Physician Quality Reporting System, so practices and payers will have to make sure the data in their HIT systems is clean in order to be paid.
Unfortunately, payers and providers have acquired a variety of systems over the years that use different terminologies. This makes it hard for these systems to communicate with each other and even harder for healthcare providers to develop a comprehensive view of billing and patient data.
This is where data normalization comes into play. Data normalization allows for apples-to-apples comparisons of information from different systems by 1) standardizing local content to terminology standards and 2) semantically translating data between standards to eliminate any ambiguity of meaning.
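To make the first step concrete, here is a minimal sketch of terminology normalization: each source system's local codes are mapped to a shared standard so that records describing the same concept become directly comparable. The system names, local codes, and mappings below are hypothetical examples, not a real terminology service.

```python
# Hypothetical local-to-standard code maps for two source systems.
# The target codes are illustrative LOINC-style identifiers.
SYSTEM_A_MAP = {"GLU": "2345-7", "HGB": "718-7"}      # System A local lab codes
SYSTEM_B_MAP = {"GLUC-SER": "2345-7", "HB": "718-7"}  # System B local lab codes

def normalize(record, code_map):
    """Replace a record's local code with its standard equivalent."""
    standard = code_map.get(record["code"])
    if standard is None:
        raise ValueError(f"Unmapped local code: {record['code']}")
    return {**record, "code": standard}

# Two records that mean the same test but arrive with different local codes.
a = normalize({"code": "GLU", "value": 95}, SYSTEM_A_MAP)
b = normalize({"code": "GLUC-SER", "value": 102}, SYSTEM_B_MAP)

# After normalization both records share one standard code, enabling
# apples-to-apples aggregation across systems.
assert a["code"] == b["code"]
```

In practice the mapping tables would be maintained in a terminology management system rather than hard-coded, but the principle is the same: translate once at the boundary, then aggregate on the standard code.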
In general, data normalization establishes a foundation for achieving semantic interoperability and creates an infrastructure that enables data sharing and aggregation. From meeting the interoperability and terminology requirements of the Meaningful Use initiative to ensuring the clinical data upon which an ACO is built is complete and semantically understood, data normalization is critical to healthcare.
A well-designed data normalization solution can provide better visibility into operations, quality measures and other criteria — visibility that is simply not feasible with disparate systems that do not share a single source of terminology truth.
What are some of the biggest data normalization challenges you’ve faced? Share your answers in the comments.