Health Language Blog

Webinar Recap: Quality Data: Three Steps to Simplify Data Governance, Enable Semantic Interoperability, and Enhance Your Reporting and Analytics

Posted on 05/30/19


With healthcare spending at an all-time high, healthcare leaders face increasing pressure to cut costs while still delivering on organizational initiatives and improved patient care. To be successful, healthcare organizations are looking to innovative technology solutions such as automation, analytics, and artificial intelligence (AI) to help staff be more effective and efficient.

With that goal in mind, healthcare organizations must figure out how to extract value from all available data assets. However, the data generated within the healthcare industry is overwhelming: it is estimated that the amount of healthcare data doubles every three years; by 2020 it is projected to double every 73 days.1 Hidden inside this data is the potential to transform healthcare delivery and improve patient outcomes, but most healthcare organizations lack the resources, technology, and time required to fully leverage its value.

Because the healthcare industry relies so heavily on data to inform mission-critical strategies and organizational initiatives, it is imperative that the data is accurate and reliable. By establishing a strong foundation of quality data, stakeholders can leverage advanced analytics and AI for high-level initiatives.

In the recent webinar “Quality Data: Three Steps to Simplify Data Governance, Enable Interoperability, and Enhance your Reporting Analytics” (watch now on demand), hosted by HealthLeaders, an expert from Wolters Kluwer and a medical informatics consultant discussed the importance of high-quality data, explored common challenges healthcare organizations face due to poor data quality, and discussed how to address those challenges to get the most value from their data assets.

Sarah Bryan, Director of Product Management for Health Language solutions at Wolters Kluwer, kicked off the webinar by stating that data is the foundation of most organizational initiatives, including analytics, AI, clinical decision support, and quality measures. The most important first step in pursuing new technology is data transformation and enrichment. This process includes assembling all available data sources, extracting data from unstructured text, normalizing or codifying the data to common standards, and categorizing the data into clinical concepts.
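The four steps Bryan described can be sketched as a simple pipeline. Everything below is an illustrative stub under assumed record shapes, not the Health Language implementation:

```python
# A schematic pass over the four steps: assemble, extract, normalize,
# categorize. Every function body is a hypothetical stand-in for a real
# enterprise terminology system.
def assemble(sources):
    """Gather raw records from all available feeds (EHR, claims, labs...)."""
    return [record for source in sources for record in source]

def extract(record):
    """Pull discrete values out of unstructured text (stubbed as a no-op)."""
    return record

def normalize(record, code_map):
    """Map a local term to a standard code where a mapping exists."""
    record["code"] = code_map.get(record["local_term"])
    return record

def categorize(record):
    """Bucket coded records into clinical concepts."""
    record["concept"] = "lab_result" if record["code"] else "unmapped"
    return record

# Illustrative run over two tiny feeds with one known local term.
feeds = [[{"local_term": "Hgb A1C"}], [{"local_term": "progress note"}]]
code_map = {"Hgb A1C": "4548-4"}  # assumed local-to-LOINC mapping entry
records = [categorize(normalize(extract(r), code_map)) for r in assemble(feeds)]
```

In a production system, each of these stubs would be backed by terminology content and mapping services; the point here is only the ordering of the steps.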

For most healthcare organizations, this is a complex undertaking due to the proliferation of disparate data sources across the healthcare continuum. Medical informatics consultant Brian Levy, M.D., discussed how poor data quality can have negative downstream consequences on quality measures reporting, billing, and analytics, and can ultimately impact your bottom line.

To illustrate this challenge, Dr. Levy pointed to a common population health initiative: improving outcomes for patients with diabetes by drawing on laboratory A1c data. Healthcare organizations must be able to aggregate disparate A1c values from across the continuum to manage their diabetic populations, but many struggle because these values are represented in countless ways across electronic health record systems. In one striking example, Dr. Levy shared that a recent client used over 100 different terms to represent an A1c test.
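The A1c problem can be sketched in a few lines: a toy normalizer that collapses varied local lab descriptions onto a single standard code. The string rule below is a deliberate simplification standing in for the machine-learning-driven mapping the webinar describes, and the sample terms are invented; 4548-4 is the LOINC code for hemoglobin A1c in blood.

```python
import re

# Hypothetical local lab descriptions illustrating the kind of variation
# Dr. Levy described; real systems see far more variants.
LOCAL_A1C_TERMS = [
    "HbA1c", "Hgb A1C", "hemoglobin A1c (whole blood)",
    "A1C %", "GLYCOHEMOGLOBIN", "Hb A1c, POC",
]

A1C_LOINC = "4548-4"  # LOINC: Hemoglobin A1c in blood

def normalize_lab_term(term):
    """Map a local lab description to a standard code, or None if unmapped."""
    compact = re.sub(r"[^a-z0-9]", "", term.lower())
    if "a1c" in compact or "glycohemoglobin" in compact:
        return A1C_LOINC
    return None

# All six local variants collapse onto the same standard code.
mapped = {term: normalize_lab_term(term) for term in LOCAL_A1C_TERMS}
```

Once every variant resolves to one code, aggregating A1c values across the continuum becomes a straightforward query rather than a string-matching exercise.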

The variety of data collected only adds to the complexity. Dr. Levy described how, across disparate sources such as billing data, EHR data, and emerging data feeds, the same information is represented in a variety of data types, all of which need to be leveraged to get a complete, 360-degree view of a patient's health.

Specifically, Dr. Levy explored:

  • Structured data such as claims information derived from CPT®, ICD-10, MS-DRGs, and HCPCS codes.
  • Semi-structured data such as labs, medications, and social determinants of health (SDoH) found in EHR drop-down menus.
  • Unstructured data such as free text found in clinician notes and PDF documents.

To address this challenge, organizations would benefit from leveraging an advanced enterprise terminology management system such as the Health Language solution from Wolters Kluwer, which can comprehensively address data quality for optimized downstream initiatives.

An effective enterprise terminology management vendor must have the capabilities, technology, and expertise to harmonize your data from three vantage points:

  • First, your vendor must have a Reference Data Management solution that streamlines data governance by establishing a single source of truth for all reference data.
  • Second, your vendor must have a Data Normalization solution that can automatically map non-standard data such as local labs or drugs to a standard terminology. The Health Language Data Normalization and Interoperability solutions, for example, automate the mapping of disparate data to appropriate standards using domain-specific algorithms powered by machine learning techniques. Bryan emphasized that the business case for investing in automation to support data normalization is an easy one to make, as error-prone manual processes draw heavily on resources and typically fall short.
  • Third, your vendor must offer a Clinical Natural Language Processing (CNLP) solution that leverages a foundation of comprehensive clinical data and provider-specific synonyms and acronyms to extract valuable data such as problems, diagnoses, labs, medications, procedures, and immunizations from unstructured text fields.
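The third capability can be illustrated with a deliberately simplified, dictionary-based sketch: scan free text for known synonyms and emit coded concepts. Real clinical NLP also handles negation, context, and acronym disambiguation; the synonym list here is invented for illustration, while the codes themselves are standard (ICD-10 E11.9 for type 2 diabetes, ICD-10 I10 for hypertension, RxNorm 6809 for metformin).

```python
# A minimal sketch of the idea behind clinical NLP: match provider-specific
# synonyms and acronyms in free text against a coded dictionary.
SYNONYMS = {
    "type 2 diabetes": ("ICD-10", "E11.9"),
    "t2dm": ("ICD-10", "E11.9"),
    "htn": ("ICD-10", "I10"),
    "metformin": ("RxNorm", "6809"),
}

def extract_concepts(note):
    """Return (matched phrase, code system, code) for each synonym found."""
    text = note.lower()
    return [(phrase, system, code)
            for phrase, (system, code) in SYNONYMS.items()
            if phrase in text]

note = "Pt with T2DM and HTN, continues metformin 500mg BID."
concepts = extract_concepts(note)
```

Running this over the sample note surfaces a diagnosis, a comorbidity, and a medication from a single free-text sentence, which is exactly the kind of value locked in unstructured fields that the webinar highlights.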

When healthcare organizations implement practices to achieve high data quality, they can more effectively leverage their own data assets using emerging technologies such as AI. Dr. Levy discussed use cases where AI is being implemented today within the healthcare industry:

  • Leveraging AI to build intelligent content. Example: Article tagging and pharmacovigilance use cases
  • Enabling interoperability through data normalization. Example: Normalizing discrete and unstructured data elements
  • Ensuring accurate quality measures. Examples: CQM, HEDIS
  • Providing tailored clinical decision support. Example: Extracting family history to customize clinical decision support
  • Using predictive analytics to improve outcomes. Examples: Improving C. difficile and sepsis detection
  • Using emerging applications. Examples: Chatbots and telemedicine; smart devices; wearables; and other data sources

To get started, Bryan suggested that healthcare organizations consider three steps when thinking about tackling data quality initiatives:

  • Identify high-value areas that could benefit most from improved data quality, using “low-hanging fruit” as a starting point.
  • Identify a partner with deep clinical expertise and technology capabilities to accelerate data quality and AI initiatives.
  • Allocate a budget to support data quality before investing in analytics and AI initiatives, to ensure maximum effectiveness.

To watch the full webinar, click here.

If you are ready to get started, contact a Health Language solutions expert to support your data quality initiatives. 

1 2011 study in Transactions of the American Clinical and Climatological Association. Link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3116346

CPT® is a registered trademark of the American Medical Association (AMA).

Topics: data normalization, interoperability, mapping, quality reporting, NLP, Natural Language Processing, Reference Data Management, artificial intelligence, Machine learning, clinical decision support, quality measure reporting, value-based care, clinical and claims data, enabling interoperability

About the Author

Ali Gilinger has over eight years of experience in the healthcare industry, with primary focus on product management and strategic product marketing. Prior to joining the Wolters Kluwer Health Language team, Ali was a Solutions Manager for the pharmacy automation division of Swisslog Healthcare. Ali is responsible for strategic product marketing of the complete Health Language solution portfolio.