Health systems use machine learning to predict costly care

Health care providers and payers looking to reduce costs believe the answer lies with a small group of patients who account for a disproportionate share of spending.

If they can catch these patients – often called “high utilizers” or “high-cost, high-need” – before their conditions worsen, providers and insurers can refer them to primary care or to social programs, such as food services, that keep them out of the emergency department. A growing number also want to identify patients at risk of being readmitted to the hospital, which can drive costs even higher. To find them, they build their own algorithms that draw on past claims data, prescription drug history, and demographic factors such as age and sex.
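
For illustration, a bare-bones version of such a model might look like the sketch below. It assumes a hypothetical claims extract with invented column names and an arbitrary threshold, and flags patients whose next-year spending lands in the top 5 percent; it is not any particular health system’s method.

```python
# Illustrative sketch only: train a simple classifier to flag likely
# "high utilizers" from claims-style features. All column names, the file
# name, and the cost threshold are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

claims = pd.read_csv("claims_history.csv")  # hypothetical per-patient claims summary

# Features: prior spending, prescription and ED visit counts, demographics.
features = ["prior_year_cost", "num_prescriptions", "num_ed_visits", "age", "sex_female"]
X = claims[features]

# Label: did the patient land in the top 5% of spending the following year?
y = (claims["next_year_cost"] >= claims["next_year_cost"].quantile(0.95)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Check how well the model ranks patients by risk of becoming high cost.
print("AUROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```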

A growing number of the providers he works with around the world are testing and using predictive technologies for prevention, said Mutaz Shegevi, a research director at a market research firm, where he focuses on provider IT globally.

Nigam Shah, a professor of biomedical informatics at Stanford, said that if these models are accurate and carefully constructed, they can significantly reduce costs while also keeping patients healthier. “We can use algorithms to do good – to find the people who are potentially costly, and then identify the ones we can actually do something for.”

But that requires a level of coordination and trust that remains rare in the use of health care algorithms. There is no guarantee that these models, often homegrown by insurers and health systems, will work as intended. If they rely solely on past spending to predict future spending and medical need, they risk overlooking patients with a history of poor access to care. And experts warn the predictions won’t help at all if providers, payers, and social services don’t streamline their workflows to actually connect these patients with preventive programs.

“There is very little of that organization,” King said. “Industry standardization is definitely needed, both in terms of what you do and what you do with the information.”

The first problem, experts say, is that there is no agreed-upon definition of what constitutes high utilization. When health care providers and insurers develop new models, Shah said, they need to be very precise – and transparent – about whether their algorithms measure medical costs to identify potentially expensive patients, the number of visits relative to a baseline, or medical need based on clinical data.
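
To make that point concrete, the sketch below shows three hypothetical ways of defining the prediction target – by cost, by visit counts against a baseline, or by clinical need. The dataset, column names, and thresholds are invented; the point is only that each definition selects a different set of patients.

```python
# Illustrative only: three different ways to define "high utilization,"
# using hypothetical column names. Each yields a different target population,
# so the definition should be stated explicitly up front.
import pandas as pd

patients = pd.read_csv("patient_year_summary.csv")  # hypothetical per-patient summary

# 1. Cost-based: top 5% of total spending next year.
label_cost = patients["next_year_cost"] >= patients["next_year_cost"].quantile(0.95)

# 2. Visit-based: emergency department visits well above the population baseline.
baseline = patients["ed_visits"].median()
label_visits = patients["ed_visits"] >= baseline + 4  # arbitrary illustrative cutoff

# 3. Need-based: a clinical criterion, e.g. high comorbidity burden plus an
#    uncontrolled chronic condition documented in clinical data.
label_need = (patients["charlson_index"] >= 3) & (patients["hba1c"] >= 9.0)

# The definitions overlap only partially; compare cost-based vs. need-based.
print(pd.crosstab(label_cost, label_need))
```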

Some models use cost as a proxy for medical need, but they often fail to account for inequities in individuals’ ability to access care in the first place. In a widely cited 2019 paper that studied an algorithm used by Optum, researchers concluded that a tool using past costs to predict patients’ needs passed over Black patients who were just as sick as the white patients it referred for additional follow-up care.
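
One way to surface that kind of skew is to compare, among patients with a similar illness burden, how often a cost-trained model flags each group for extra care. The sketch below is a generic, hypothetical audit along those lines – not the method used in the 2019 paper – and all file and column names are invented.

```python
# Illustrative audit sketch: for patients at similar illness levels (proxied
# here by a hypothetical chronic-condition count), compare how often a
# cost-trained model flags each group for extra follow-up care.
import pandas as pd

df = pd.read_csv("scored_patients.csv")  # hypothetical: model scores plus clinical data
df["flagged"] = df["predicted_cost_score"] >= df["predicted_cost_score"].quantile(0.97)

# Bin patients into comparable illness-burden groups, then compare flag rates.
df["illness_bin"] = pd.qcut(df["num_chronic_conditions"], q=4, duplicates="drop")
audit = df.groupby(["illness_bin", "race"], observed=True)["flagged"].mean().unstack()

# Large gaps in flag rates at the same illness level suggest the cost proxy is skewed.
print(audit)
```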

“Predicting future high-cost patients may differ from predicting patients with high medical needs because of confounding factors such as insurance status,” said Irene Chen, a computer science researcher at MIT who co-authored an article on bias in health care algorithms.

If a cost-prediction algorithm is inaccurate or exacerbates bias, it can be difficult to tell – especially when the models are developed and deployed inside individual health systems, without government or industry oversight or external audits. A group of Democratic lawmakers has proposed a bill that would require organizations using AI to evaluate it for bias and would establish a public repository of these systems at the Federal Trade Commission, though it is not yet clear whether the measure will advance.

For now, that leaves the responsibility for making models fair, accurate, and beneficial to all patients with the health systems and insurers themselves. King suggested that developers of any cost-prediction model – especially payers outside the clinical system – check the data with providers to make sure the patients being targeted also have higher medical needs.

“If we were able to know who is having the problem, the medical problem, fully realizing that cost is a factor … we can involve human processes to prevent this from happening,” he said.

Another key question about using algorithms to identify costly patients is what, exactly, health systems and payers should do with the information.

“Even if you can predict that a person will have high costs next year because this year they have stage 3 colon cancer, you can’t wish away their cancer, so those costs aren’t preventable,” King said.

For now, the difficult task of deciding what to do with the predictions the algorithms generate is left in the hands of the health systems developing their own models – as is the data collection needed to understand whether those interventions make any difference in patient outcomes or costs.

At the UTHealth Harris County Psychiatric Center, a safety-net facility that primarily serves low-income patients in Houston, researchers are using machine learning to better understand which patients have the greatest needs and to scale up resources for that population. In one study, researchers found that factors such as dropping out of high school or a schizophrenia diagnosis were associated with frequent, costly visits. Another analysis showed that lack of income was strongly linked to homelessness, which in turn is associated with expensive psychiatric hospitalizations.
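
Associations like these are often estimated with a regression model. The sketch below is a generic, hypothetical example of that kind of analysis – the dataset and variable names are invented, and it is not the center’s actual code.

```python
# Illustrative sketch: estimate how strongly hypothetical patient factors are
# associated with frequent, costly visits, using logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

visits = pd.read_csv("psych_center_patients.csv")  # hypothetical dataset

# Outcome: 1 if the patient had frequent visits in the study window, else 0.
model = smf.logit(
    "frequent_visits ~ schizophrenia_dx + no_high_school_diploma + homeless + no_income",
    data=visits,
).fit()

# Odds ratios quantify the strength of each association.
print(np.exp(model.params))
```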

Some of these findings may seem obvious, but measuring the strength of these connections helps hospital decision-makers with limited staff and resources choose which social determinants of health to address, said study author Jane Hamilton, an assistant professor of psychiatry and behavioral sciences at the University of Texas Health Science Center at Houston’s medical school.

The homelessness findings, for example, have led to more local interventions, such as “downstream” housing programs for people with serious mental illness. “It really takes all the social workers actually selling it to the social work department and the medical department to focus in on a particular finding,” Hamilton said.

The predictive technology has not yet been incorporated directly into the electronic health record system, so it is not yet part of clinical decision support. Instead, social workers, physicians, nurses, and case managers are briefed separately on the factors the algorithm flags for readmission risk, so they can refer certain patients to interventions such as short-term visits, said Lokesh Shahani, the hospital’s chief medical officer and an associate professor in UTHealth’s Department of Psychiatry and Behavioral Sciences. “We rely on the profile the algorithm identifies and then pass that information on to our clinicians,” Shahani said.

“Deploying a sophisticated algorithm in a hospital’s EHR and changing the workflow is a little harder,” Hamilton said. Shahani said the psychiatric hospital plans to integrate the two systems over the next few months, so that risk factors appear in individual patient records.

Part of changing hospital operations is determining which visits can actually be avoided and which are part of the normal course of care. “We’re really looking at modifiable factors,” Hamilton said. “What else can we do?”
