Research & Innovation

Kleinberg Receives $2.3 Million to Develop Artificial Intelligence that Improves Doctor–Patient Collaborations

With three new grants, Samantha Kleinberg, associate professor of computer science at Stevens Institute of Technology, will investigate how multiple factors influence the behavior of patients, leading to greater trust in their doctor’s care

Samantha Kleinberg, associate professor of computer science at Stevens Institute of Technology, was recently awarded three grants totaling $2.3 million to develop artificial intelligence that personalizes the information patients receive in order to help them make better decisions about their health.

“We are investigating how people think and make decisions,” said Kleinberg. “We are developing methods for understanding a person’s mental model (what their beliefs are) and then figuring out how to model people’s behavior based on this.”

Each investigation will utilize artificial intelligence and machine learning in its methods.

“Uniting Causal and Mental Models for Shared Decision-Making in Diabetes”

Aiming to improve the shared decision-making approach, Kleinberg is developing tools that personalize the information patients receive so they can better decide on a treatment plan, one that takes into account the doctor’s recommendations as well as the patient’s preferences. A grant of $917,879 from the National Science Foundation is funding this study.

When a patient comes to a doctor, trust in their care is paramount to achieving positive health outcomes. Furthermore, when patients engage in shared decision-making, a healthcare model in which providers and patients collaborate to plan the course of treatment, they are more likely to comply with that plan. However, patients come to appointments with their own set of biases and knowledge, which may or may not be rooted in evidence. While a doctor understands the medical model of a disease and its treatment from extensive education, research, and experience, a patient relies on limited, less reliable sources, including the news, the internet, and popular conceptions. For example, web forums provide peer support from other patients, but content written by peers is not always reliable or moderated.

Kleinberg will use artificial intelligence first to understand what patients think they know, and then to develop algorithms that automatically personalize the information presented to them, with the ultimate aim of achieving a mutual doctor–patient understanding of the disease and its treatment. An educational plan can then be developed that better leads patients to accept recommendations. This research is being conducted with patients with diabetes.

“The challenge is that it’s difficult to have shared decision-making and trust if you’re not considering the patient’s beliefs,” Kleinberg said. “We want to understand how beliefs impact shared decisions and trust.”

Consider the following scenarios:

If a model of treatment and disease—in this case, the presentation of facts about diabetes—presents information that is nearly right, but with one wrong message, then an individual may ignore the whole model.

If a model is wrong but an individual believes it to be correct, then that person continues to believe it, without questioning their assumptions.

Here’s where it gets tricky: If a model is completely right (e.g., it is presented by the treating physician), but an individual has different beliefs, then that person may ignore the model and withhold trust from their doctor.

In the last scenario, while a doctor and patient may share the same goals, the patient is less willing to collaborate by following a prescribed plan, because they lack foundational trust in the materials presented to them. This is one of the greatest challenges in applied healthcare: advances in medicine are foiled by a patient’s lack of understanding and compliance. This is where Kleinberg’s algorithm can intervene.

“People do well in cases when they don’t know anything,” Kleinberg explained, “but if they already have beliefs then they become less confident in what we show them—so we are working on decision-making. We’ll be working on figuring out the right approach for imparting knowledge—how you can build upon existing knowledge and correct faulty beliefs.”

With recognition of their own limited understanding, patients may become more willing to trust their doctors. But how do we know what information to present without overwhelming the patient? Kleinberg’s algorithm will enable a more calculated, individually tailored presentation of facts.
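
To make that idea concrete, here is a minimal sketch in Python of what one tailoring step could look like. It is purely illustrative and not Kleinberg’s actual algorithm: the Fact structure, the belief probabilities, and the rank-by-mismatch heuristic are all assumptions invented for this example.

```python
# Hypothetical sketch (not the funded project's method): choose a small set of
# facts to present by comparing a patient's stated beliefs with a clinical
# reference model and prioritizing the largest, most decision-relevant gaps.
from dataclasses import dataclass

@dataclass
class Fact:
    statement: str          # e.g., "Exercise lowers blood glucose"
    clinical_truth: float   # probability the clinical model assigns (0 to 1)
    decision_weight: float  # how much this fact matters for the treatment choice

def select_facts(facts, patient_beliefs, budget=3):
    """Return at most `budget` facts, ranked by weighted belief mismatch.

    patient_beliefs maps a statement to the patient's subjective probability;
    statements the patient has never considered default to 0.5 (uncertain).
    """
    def mismatch(fact):
        believed = patient_beliefs.get(fact.statement, 0.5)
        return abs(fact.clinical_truth - believed) * fact.decision_weight

    return sorted(facts, key=mismatch, reverse=True)[:budget]

if __name__ == "__main__":
    facts = [
        Fact("Exercise lowers blood glucose", 0.95, 0.9),
        Fact("Skipping meals is a safe way to control glucose", 0.05, 0.8),
        Fact("Insulin causes diabetes complications", 0.02, 0.7),
    ]
    beliefs = {"Insulin causes diabetes complications": 0.8}  # faulty belief
    for fact in select_facts(facts, beliefs, budget=2):
        print(fact.statement)
```

The design point is simply that a short, ranked subset of facts targets the beliefs that diverge most from the clinical model, rather than presenting everything at once.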

“We want to intervene on people’s beliefs based on what they think they know, and try to make these beliefs more accurate,” she said.

Psychology also plays an important role in shaping a person’s beliefs, which is why Kleinberg, in collaboration with cognitive scientist Jessecae Marsh at Lehigh University, is creating training modules to educate clinicians about how patient beliefs influence trust and decision-making.

“Moving Beyond Knowledge to Action: Evaluating and Improving the Utility of Causal Inference”

Kleinberg’s work with people with diabetes has demonstrated that they often fare worse when presented with more complete and complex models of disease and treatment, and make better decisions when given fewer details. With this National Science Foundation grant of $499,454, Kleinberg will evaluate the utility of algorithms that identify not just the most accurate model, but what information will help individuals make better decisions.

Research in artificial intelligence has put considerable effort into determining cause and effect, with a focus on developing methods to mine data to find these causal structures. However, understanding causes is not always enough to influence or even evaluate behavior. Rather than focusing on causal inferences alone, Kleinberg’s new research direction will focus on what makes the output of an algorithm useful in decision-making.

“There is an assumption that if you find more of the causes and a more accurate model, then it will be a more useful model. But this hasn’t been evaluated,” said Kleinberg. As it turns out, “Accuracy is not a proxy for utility.”

Kleinberg’s line of investigation aims to add utility to these causal models by distilling the information provided to people, ultimately helping them take better actions based on more usable, personalized information.
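
As a rough illustration of why accuracy and utility can come apart, the toy Python example below scores two causal models by the decisions a simulated person would make with them, not by how many causes they list. The behavioral rule (people act on a short list but tune out a long one) and every number are invented for the example; none of it comes from the funded study.

```python
# Toy comparison (invented numbers): a "complete" causal model lists every
# contributing cause, a "simple" one lists only the two largest. The models are
# scored by the utility of the action a simulated person takes, not by coverage.

def simulated_choice(shown_causes):
    """Behavioral assumption: people act on a short list but tune out a long one."""
    if len(shown_causes) > 3:
        return None  # overwhelmed: no change in behavior
    return max(shown_causes, key=shown_causes.get)  # act on the biggest cause

def utility(action, true_effects):
    return true_effects.get(action, 0.0)

true_effects = {"reduce sugary drinks": 0.5, "walk after meals": 0.4,
                "sleep more": 0.1, "reduce stress": 0.1, "eat earlier": 0.05}

complete_model = dict(true_effects)  # accurate but dense
simple_model = {k: true_effects[k] for k in ("reduce sugary drinks", "walk after meals")}

for name, model in (("complete", complete_model), ("simple", simple_model)):
    action = simulated_choice(model)
    print(f"{name} model -> action: {action}, utility: {utility(action, true_effects)}")
```

Under these assumptions the more accurate model yields the worse decision, which is the kind of gap between accuracy and utility that this project sets out to evaluate.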

One problem with existing algorithms is that sometimes their theoretical basis is not applicable, because they assume you can do things like control heart rate or turn diabetes on or off like a computer switch. Such variables can be manipulated and controlled in an algorithm, but when it comes to human behavior, changing one part of a system often leads to changes in other parts of the system—because people compensate for the change in different ways.

For example, imagine bikers who wear helmets as a safety precaution. It should follow that they are safer than bikers without helmets. However, when a driver sees the biker’s helmet, the driver may then be less cautious, altering the safety dynamics in this system. The same goes for wearing seatbelts—drivers who wear seatbelts may then take more risks.

“You do one intervention, and then people change other parts of the system,” Kleinberg said. “We want to figure out which interventions change multiple parts of a system.”
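
The helmet example can be turned into a toy simulation. The Python sketch below, with entirely invented parameters, shows how an estimate that holds driver behavior fixed overstates the benefit of an intervention once drivers compensate.

```python
# Toy model of the compensation effect described above. All numbers are invented.

def injury_risk(helmet: bool, driver_caution: float) -> float:
    base = 0.10 * (1.0 - driver_caution)    # risk falls as drivers take more care
    return base * (0.5 if helmet else 1.0)  # helmets halve the harm per crash

baseline = injury_risk(helmet=False, driver_caution=0.6)

# Naive reasoning: flip the helmet variable and hold everything else fixed.
naive = injury_risk(helmet=True, driver_caution=0.6)

# Observed behavior: drivers give helmeted cyclists less margin, so caution drops.
compensated = injury_risk(helmet=True, driver_caution=0.4)

print(f"no helmet:          {baseline:.3f}")
print(f"naive estimate:     {naive:.3f}")
print(f"with compensation:  {compensated:.3f}")
```

In this invented setup the helmet still helps, but compensation erodes part of the predicted benefit, which is why an algorithm that ignores how people respond to an intervention can misjudge its value.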

Using machine learning, Kleinberg will investigate everyday decisions surrounding diet and exercise, and identify opportunities for intervention.

“Harnessing Patient Generated Data to Identify Causes and Effects of Nutrition during Pregnancy”

Gestational diabetes affects about nine percent of women during pregnancy and increases their risk of developing type 2 diabetes later in life. Kleinberg, in collaboration with Andrea Deierlein, a nutritional and reproductive epidemiologist at New York University, will use wearable sensors to obtain patient-generated health data, with the aim of identifying the factors that cause the disease and targets for early intervention. She obtained $864,220 from the National Institutes of Health for this investigation.

"Wouldn’t it be great if you could buy something like a fitbit for nutrition?” Kleinberg imagined. In a previous study, this idea became a reality. Her team computed wearable sensors that accurately tracked what people were eating and how much, without the need for users to log this data—as manual logs often are not reliable. In addition, they measured heart rate, stress, and activity level.

Next, Kleinberg will use these monitoring devices to obtain similar nutritional data during the well-defined period of a pregnancy. Her team has already validated these methods, and this study will build upon that research by gathering data from 150 women over four weeks during two distinct stages of pregnancy.

“Gestational diabetes presents a perfect test case, because a nine-month pregnancy [lends itself to a] defined study period,” she said. “This is also a good place to intervene, because people are willing to change their diets during pregnancy.”

With this new data, Kleinberg will evaluate why people’s diets are changing, and what factors cause gestational diabetes. She hopes that these findings will help to guide decision-making during pregnancy.

Better Doctor–Patient Relationships, Better Decisions, Better Health Outcomes

With these three inquiries, Kleinberg hopes to better understand a patient’s model of a disease and its treatment, and their ensuing actions. This could open new doors to effective education and intervention, ultimately leading patients to place trust in their doctor’s care and make better decisions about their health.

In addition to these investigations, Kleinberg has three papers and an edited book volume being published this fall.