This Week Health
December 7, 2021

Predictive Analytic Models with Angelique Russell

Change is inevitable, especially in the healthcare industry. Angelique Russell, Senior Clinical Data Scientist and Informaticist at Cogitativo, offered insight into advancements in predictive analytic models, data, and the future of machine-driven technologies.


Angelique Russell, Senior Clinical Data Scientist and Informaticist at Cogitativo

Population Health Needs Comprehensive Statistics

The delivery of care contributes between 10% and 20% to health outcomes, according to research from the American Action Forum.

Russell explained that the best data for attacking this problem may not yet exist. However, health systems may be the "second best" source (after a personal doctor) because of the social determinants and known health information they already hold.

There is an opportunity for data collaboration between various stakeholders, Russell explained. She does not yet see a dream dataset available combining patient location, social determinants, behavior, health information, and genomics.

"I don't think that that comprehensive data set has really been put together," she said.

The Signals of Predictive Analytics

Within predictive analytic models, two signals, vitals and lab values, reveal a patient's condition. Lab values overlap with the treatment domain, influencing physician decisions.

Russell explained that data from the treatment domain depends on how a patient is treated. Treatment options change as time goes on, meaning signals once used for prediction in an algorithm may no longer be useful.

As treatment options, guidelines, and order sets change over time, models can drift. This presents challenges for predictive analytic models that rely on these signals.

Historical bias is another influential factor, seen recently through the pandemic. Early in 2020, when many clinicians believed they were facing a bad flu year, there was uncertainty about who actually had COVID-19. With strict testing criteria, Russell explained, data was frequently mislabeled until patterns emerged.

"There isn't always a confirmatory test to rely on, and that mislabeled data can send all kinds of wonky signals if what you're trying to do is, for example, detect COVID," she said.

Avoiding Model Drifts in Predictive Analytics

Order sets constantly change. Therefore, Russell emphasized the importance of staying aware of changes that affect the model so that unreliable signals are not used. Early in the model design process, she recommends limiting the model to signals that are less likely to change.

There is an idea in healthcare data science that deep learning can be applied and left unsupervised, having algorithms find signals in vast databases. This attitude presents risks, as there are signals that will, over time, be inconsistent, she explained.

"They might be related to things that are in flux, and you won't know that they're in flux because you let the algorithm find the signal. You don't really know what the signal is. That kind of black-box approach, I don't think it works at all in healthcare. And it certainly can create problems like model drift," she said.

Traditionally, training a model means teaching an algorithm to detect patterns. The data is split into a training set and a test set: one set with 80 to 90% of the data for training, and the remaining 10 to 20% held back for confirmation.

Considering how data changes, she suggests also holding out the most recent data. Data spanning five years, for example, will not be fully consistent with the most recent year's data. Holding out data this way is necessary, according to Russell.
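To make the splitting approach concrete, here is a minimal sketch, assuming a pandas DataFrame of encounters with a timestamp column; the file and column names are hypothetical. It shows the conventional random 80/20 split alongside the chronological holdout Russell suggests, where the most recent year is reserved to check whether the model still holds up on newer data:

import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical extract: one row per encounter, with a timestamp and an outcome label
df = pd.read_csv("encounters.csv", parse_dates=["encounter_time"])

# Conventional random split: roughly 80% to train on, 20% held back to confirm
train_df, test_df = train_test_split(df, test_size=0.2, random_state=42)

# Chronological holdout: train on older encounters and validate on the most
# recent year, so changes in order sets and guidelines show up in validation
cutoff = df["encounter_time"].max() - pd.DateOffset(years=1)
historical_train = df[df["encounter_time"] <= cutoff]
recent_holdout = df[df["encounter_time"] > cutoff]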

Russell Explores Predictive Model Failures in Sepsis Patients

In a LinkedIn article, Russell explored four reasons why predictive sepsis models fail, following reports of poor accuracy in Epic Systems' prediction tool. Earlier this year, researchers found the tool was correctly identifying patients 63% of the time.

1. Lack of Timely Automated EHR Data

EDs and ICUs rely on pulse oximetry and vital signs that flow automatically into the EHR system every 30 to 120 minutes.

For example, less-monitored patients, such as those recovering from procedures or on a med-surg floor, do not have devices that consistently and automatically enter basic vital signs. Russell explained that nurses often record these vitals on paper and input the data hours later.

According to Russell, this lack of timeliness ruins the potential for accurate and reliable algorithms, which depend on the most recent data available.
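A small sketch of the kind of freshness check this implies is shown below; the vitals extract, column names, and two-hour staleness window are assumptions, not from Russell or any specific product:

import pandas as pd

MAX_AGE_MINUTES = 120  # assumed freshness window for a real-time score

# Hypothetical extract: one row per charted vital sign per patient
vitals = pd.read_csv("vitals.csv", parse_dates=["charted_time"])

# Find each patient's most recently charted vital sign
latest = vitals.groupby("patient_id")["charted_time"].max()
age_minutes = (pd.Timestamp.now() - latest).dt.total_seconds() / 60

# Patients whose newest vitals exceed the window get flagged rather than scored
stale_patients = age_minutes[age_minutes > MAX_AGE_MINUTES].index.tolist()
print(f"{len(stale_patients)} patients have vitals older than {MAX_AGE_MINUTES} minutes; "
      "real-time risk scores for them may be unreliable.")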

2. Upcoding: An Uncomfortable Truth and Source of Label Bias

The goal of non-profit health systems is to recover as much revenue as possible for each patient to cover the cost of care, Russell explained. Conditions like strokes bring reimbursements that do not match the cost of care. This is the driver behind upcoding, Russell noted.

Systems like computer-aided coding help recover the cost of care by identifying key terms in the chart; individual coders and physicians then sign off to ensure accurate bills.

Patients exhibiting certain symptoms will be tested to rule out sepsis, and a sepsis code may be added to the chart even when sepsis is not thought to be the probable cause. This causes confusion for outsiders, according to Russell.

"It's so hard for outsiders to kind of understand this concept because we think of things like sepsis as being totally objective. Like there must be a definition for sepsis," she said.

However, the medical community has not reached consensus on a definition of sepsis over the last 15 years. Russell explained this is partly because clinicians cannot rely on cultures to confirm sepsis.

3. Sepsis Models May Not Generalize to Other Patient Populations

With differing definitions of sepsis, a model may not generalize to other populations. Russell explained this presents difficulties when underlying patterns are physiologically different across populations, or when a population skews toward a certain age range.

According to Russell, to confirm the model works, validation datasets are stratified so accuracy can be checked across different demographics.

"Do I have vulnerable patients such as pediatrics, elderly, or immunocompromised that are distinctly different from a general population and how accurate it is in that population?" she asked.

The Epic dataset relies on claims data, which are huge datasets. According to Russell, claims data are not the best source of labels; rather, telemetry data is the most beneficial for sepsis models.

Russell described the optimal way to label a sepsis dataset as relying on objective rules within the data, though this still requires resolving ambiguous cases.
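To illustrate what rule-based labeling could look like, here is a deliberately simplified sketch; the rule, thresholds, and column names are placeholders for illustration only, not a clinical definition of sepsis:

import pandas as pd

def label_suspected_sepsis(row):
    """Illustrative, simplified labeling rule (not a clinical definition):
    culture ordered AND antibiotics started AND some evidence of organ stress."""
    suspected_infection = row["blood_culture_ordered"] and row["antibiotics_started"]
    organ_stress = row["lactate_mmol_l"] >= 2.0 or row["systolic_bp"] < 90
    return int(suspected_infection and organ_stress)

# Hypothetical per-encounter extract with the flags and values used above
encounters = pd.read_csv("encounter_features.csv")
encounters["sepsis_label"] = encounters.apply(label_suspected_sepsis, axis=1)

# Ambiguous cases (e.g., culture ordered but no antibiotics) still need human review
ambiguous = encounters[
    encounters["blood_culture_ordered"].astype(bool)
    & ~encounters["antibiotics_started"].astype(bool)
]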

4. Now What?

According to the study on Epic's model, the system detected only 7% of missed sepsis cases. For Russell, this statistic highlights the need to know exactly what to predict or detect.

"In the case of sepsis, where we have preventable mortality, is usually we missed it [detection]. We dropped the ball and that's what we want to prevent. We don't want to miss a sepsis case or delay intervention or care in a sepsis case," she said.

Moving forward, Russell suggested starting over: deciding what needs to be detected and how to label those cases in the dataset.

The Future of AI, Machine Learning, and NLP

Russell expressed that the future of machine-driven technologies is understanding how to augment human decisions. Breakthrough algorithms available at the bedside are usually tools giving physicians insight not previously available.

In the case of Watson Health, the organization tried to build a quasi-recommender system. Based upon medical literature and treatment notes, the system would recommend treatment from historical patterns.

However, Russell found this added little value because the past is not always optimal. Using past patterns to predict future actions, or to automate them, requires being realistic that not all current treatment decisions are optimal.

"If we only rely on past patterns, then that those treatment decisions and bias and suboptimal care is going to be what we predict and recommend going forward, which no one wants," she said.

Russell explained that there needs to be an emphasis on learning how to use data now that we have machine learning capabilities.

“In medicine today, even when we're not in a pandemic, there are still a lot of decisions where we're nowhere near-optimal. And there's so much potential to use our data to get to optimal,” she said.

 

** Editor's Note: All attributions to "Russell" in this article refer to guest Angelique Russell, not This Week in Health IT's Bill Russell **

Contributions

Want more from this interview? Enjoy the full episode on your favorite listening platform.
