Artificial Intelligence (AI) is one of those hot topics that we all talk about, yet at the same time we have a hard time grasping what AI really is and how it is implemented in our daily lives. And understandably so: AI is a complex and extensive topic. Needless to say, implementing AI models is difficult, because so many (ethical) variables play a role in a successful, acceptable, and above all beneficial implementation. It’s one of the most interesting fields to study, and that’s exactly what LUMC PhD candidates Marieke van Buchem and Anne de Hond are doing, so that they can contribute to better validation and implementation of AI models in medical care. Their exchange to Stanford University helps them move their research forward.
Background and premise of their research
Both Marieke and Anne are part of the LUMC’s CAIRELab and its Value Based Healthcare & AI team. The team focuses on unravelling the needs of medical professionals that AI models can potentially fulfil. For Marieke and Anne, their PhDs revolve around the questions: ‘How can we create valuable AI models and implement them in medical care?’ and ‘How do we prevent models from ending up in the trash can?’ The latter does happen when a model isn’t solid enough. It’s for this reason that Marieke and Anne will look at how we should develop AI models, validate them, and ensure they add value to medical care before implementing them. “What’s currently missing is a clear strategy and pipeline to develop and implement AI models. If we can contribute to a clear strategy with our research, we can really help push the implementation of AI models forward”, says Anne.
For their research specifically, Anne and Marieke will work on predictive AI models for real-time patient care, each working with a different type of dataset: Anne on structured data, such as parameters from the Electronic Health Record (EHR), and Marieke on unstructured data, such as natural language exchanges between doctors and patients.
What Anne and Marieke are trying to achieve are predictive AI models based on patient data from the EHR that assist medical professionals in decision-making processes or help discover a patient’s condition more efficiently. These predictive models are targeted towards real-time care. “For instance, a patient arrives at the Emergency Room and a medical professional needs to determine whether he/she needs to be hospitalized or not. There are many boxes that need to be ticked before making a decision like this. This is often a time-consuming process and also prevents the medical professional from attending to other patients. An AI model may help the doctor accelerate some of the decision making. As a result, patients get the right treatment sooner, the medical professional gets to spend his/her time more efficiently and productively, and other patients can be attended to sooner”, says Anne.
Marieke mentions working on a speech recognition project as an example. “The speech recognition model is trained to extract important information from patient-doctor conversations and turn it into a structured clinical note. A tool like this relieves a medical professional of a lot of time-consuming administrative tasks, helping the doctor spend his/her time more efficiently and productively, just like in Anne’s example”, says Marieke. Both models are examples of needs expressed by medical professionals themselves. “We don’t invent these models because we think they’re useful, we create them because the doctors have expressed a need for them. Our goal is to facilitate them and the patients”, says Anne.
Stanford University School of Medicine and the LUMC collaboration
Going to Stanford University School of Medicine helps the PhD candidates further develop their own skills and research. “The LUMC is one of the frontrunners in the Netherlands in medical AI research, but there’s still a lot we can learn from frontrunners in the United States, such as Stanford; institutes that are a bit further advanced than we are in some regards. An experience at such a top institute can really help us take valuable steps in implementing AI models in medical care”, says Anne. “There are very few people trained in both the development and implementation aspects of AI models. That’s what makes this collaboration so interesting for both the LUMC and Stanford”, says Marieke.
The Stanford project and future
At Stanford, Marieke and Anne will work on detecting symptoms of depression early on in cancer patients, using free-text data combined with structured data. “After being diagnosed with cancer, patients often suffer from all kinds of mental complaints, which can negatively influence their treatment and quality of life. With the proper data to discover tendencies towards depressive behaviour, we can intervene early on and offer proper support to the patient”, says Marieke. It’s a prime example of the value of AI models in practice, which will hopefully strengthen the case for AI model usage in medical care.