Terminology Standards Blog Series: Part 1
In the seminal work Crossing the Quality Chasm, published by the Institute of Medicine in 2001, there was a clear call to action for the U.S. healthcare system. This work has driven much of what we are seeing in healthcare information management today. Crossing the Quality Chasm called for healthcare to be safe, effective, patient-centered, timely, efficient, and equitable. It argued that the adoption of information technology is critical to meeting these goals. I would agree.
So how are we doing? By 2015, about 75% of physician offices and 97% of hospitals were using an electronic health record (HHS/ONC Health IT Dashboard, February 2016: http://dashboard.healthit.gov/report-to-congress/2015-update-adoption-health-information-technology-executive-summary.php). These systems provide much-needed data about our patient populations. With this data in hand, CMS is upping the ante on quality reporting and pay-for-performance measures. All of this together means that the role of HIM professionals is becoming increasingly complex. In fact, in its 2016 strategy, AHIMA adopted three goals directly related to the capture, storage, and retrieval of clinical information through information governance, informatics, and innovation (http://www.ahima.org/about/aboutahima?tabid=strategy).
Only when you can merge clinical, claims, and patient-collected data can you generate a holistic view of the patient and his or her care over time. Once you can describe an individual patient, aggregating data across patient populations becomes possible and enables robust population health analytics. That data can then feed evidence-based medicine, epidemiologic research, cost containment solutions, and many other initiatives. Sounds great! The problem is that information systems in a healthcare organization rarely capture information in the same way. Often the data is not codified using a standard terminology, and much of the rich clinical data is free text with syntactic variations. How many ways can a provider say "hypertension"? What exactly is going on with the patient diagnosed with "Other specified heart block"? Is it "A1C test" or "Hemoglobin A1C"? Data normalization provides a means to codify all of that data by mapping it to nationally recognized standards.
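To make the idea concrete, here is a toy sketch of what a normalization lookup can do: collapse the syntactic variations above onto a single standard code. The mapping table is illustrative only (a real normalization service would use licensed terminology content and far more sophisticated matching), and the SNOMED CT and LOINC codes shown are examples, not an authoritative crosswalk.

```python
# Illustrative normalization table: local, free-text terms mapped to a
# standard terminology code. Codes below are examples for discussion only.
LOCAL_TO_STANDARD = {
    # Syntactic variations of "hypertension" map to one SNOMED CT concept
    "htn": ("SNOMED CT", "38341003", "Hypertensive disorder"),
    "hypertension": ("SNOMED CT", "38341003", "Hypertensive disorder"),
    "high blood pressure": ("SNOMED CT", "38341003", "Hypertensive disorder"),
    # Variations of the same lab test map to one LOINC code
    "a1c test": ("LOINC", "4548-4", "Hemoglobin A1c/Hemoglobin.total in Blood"),
    "hemoglobin a1c": ("LOINC", "4548-4", "Hemoglobin A1c/Hemoglobin.total in Blood"),
}

def normalize(local_term: str):
    """Return (code_system, code, preferred_name) for a local term, or None."""
    return LOCAL_TO_STANDARD.get(local_term.strip().lower())
```

Once every source system's variant resolves to the same code, "HTN" and "High Blood Pressure" are recognizable as the same clinical concept, which is what makes aggregation across systems possible.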
Once normalized, data can be exchanged between information systems in a contextual, clinically appropriate manner without losing the meaning of the message; this is semantic interoperability. Unfortunately, it is not yet common practice for healthcare organizations. A study published in 2014 in the Journal of Biomedical Informatics found that less than 17% of the laboratory data sent to a public health entity in Indiana was coded in LOINC. Even worse, a similar entity in Wisconsin receives only 13% of its clinical data coded in the standard terminology SNOMED (Journal of Biomedical Informatics 2014 June; 49: 3–8. doi:10.1016/j.jbi.2014.03.011). The remainder of the information being exchanged is not normalized to a standard terminology, leaving the receiving institution to make sense of disparate data in order to monitor and respond to public health concerns.
This blog series is intended to help HIM professionals better understand the standard terminologies needed to achieve semantic interoperability and cross the quality chasm. We will discuss the common standard code sets used in the clinical setting, including their history, governing bodies, and maintenance, as well as when you should use them (and when you shouldn't) to help you organize and manage data throughout your organization. The standards to be addressed will cover administrative data, code sets used in revenue cycle management, LOINC, SNOMED, and RxNorm. Once you understand the basics of terminology, you will be well on your way to semantic interoperability.
Read the next blog in series: Medical Billing and Coding: Exciting Changes Ahead