- Unlocking new possibilities for artificial intelligence (AI) in patient care
Key takeaways
- Artificial intelligence technology has progressed rapidly in recent years and has many practical applications in the healthcare space.
- Laboratories could benefit from artificial intelligence, which has the potential to change the way they operate.
- Regulatory challenges to the adoption of artificial intelligence must be overcome to enable more widespread use and benefits.
Artificial intelligence (AI) technology has progressed rapidly over the past few years. It has shown value in its ability to apply complex algorithms and complete routine tasks quickly and accurately, with performance levels comparable to experts in the relevant fields.1
As test utilization grows increasingly important in providing answers to healthcare professionals, the application of AI to laboratory medicine is gaining attention as a way to manage the huge amount of complex test results produced in labs every day.
At the recent Association for Diagnostics and Laboratory Medicine (ADLM) conference, Peter McCaffrey, MD, MS, FCAP, Chief AI Officer at the University of Texas Medical Branch, discussed how data-rich laboratory environments can utilize AI to enable efficient processing, and ultimately bring the lab to the front and center of the patient care workflow.
Reducing inefficiency and increasing patient interaction
Many routine, structured day-to-day tasks and functions can be made more efficient and less labor intensive through the introduction of AI, and this is where Dr. McCaffrey sees an initial use in the healthcare space.2
He points out that a lot of doctors’ time is taken up with tasks that are not focused on human interaction with a patient. He believes AI can help by unburdening people from documentation and by removing some of the more inefficient administrative tasks that can interrupt patient care workflows. “We, at the University of Texas Medical Branch (UTMB), see opportunity in offloading the things that are not the doctor talking to the patient about their health, (but) learning what they need, following them, counseling them, the sort of humanistic touch,” he says.
To enable doctors to provide the best possible care, Dr. McCaffrey sees the lab playing an “enormous role.” “If you think about what doctors want and what people want from healthcare, they want clinical inference all the time on all kinds of things,” he points out, and this is where he says the test results from the lab come in. “The humanistic things like enriching the relationship are really driven in large part by what we provide to the physicians — the guidance we give and the results that we give.”
Currently, the interpretation of results takes a lot of time and requires physician expertise, which is costly in terms of resources, but increasingly AI is able to improve performance in this area and reduce the time required for the task. “I think we're going to see a trend where there's a cheapening, if you will, of the value of an interpretation, but an expectation to do it 10 times more as a lab,” Dr. McCaffrey says.
Potential to change the way labs operate with AI tools
There are many applications for AI in the form of predictive tools, generative tools, and operational tools that can help process and analyze individual data, and even give recommendations for the next steps. Dr. McCaffrey believes these AI tools mean the clinical laboratory will become critical to the patient care workflow.
Predictive tools
AI in the form of computational software tools can now help to predict individuals at risk for certain conditions. When his lab receives a complete blood count, Dr. McCaffrey asks what information can be taken from it. “Can we predict who's going to become anemic? Can we use that to triage workflows for colonoscopies or follow-ups? We do this across radiology as well…if you get a chest x-ray, or a CT at UTMB those will be scrutinized for aortic calcification, coronary artery calcification,” he says.
He believes that adopting “opportunistic screening” as a routine practice would bring better outcomes for patients, because patients create data feeds no matter why they’re at the hospital: whether it's an image, a tissue specimen, or a blood tube, there is valuable diagnostic information in it. He says, “Our perspective is it shouldn't require an individual to decide it's worth scrutinizing for it to be scrutinized, it should be scrutinized basically for free all the time so that the things that are anomalous about it can be used to move the care workflow forward. And I think this brings better outcomes to patients.”
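As a rough illustration of the kind of predictive tool described above, the sketch below scores a complete blood count (CBC) for anemia risk and flags high-risk results for follow-up triage. The feature weights, threshold, and field names are hypothetical stand-ins for illustration only, not an actual clinical model.

```python
import math

def anemia_risk(hgb_g_dl, mcv_fl, rdw_pct):
    """Toy logistic risk score from CBC features; weights are illustrative, not clinical."""
    z = 6.0 - 0.6 * hgb_g_dl - 0.02 * (mcv_fl - 90) + 0.15 * (rdw_pct - 13)
    return 1 / (1 + math.exp(-z))  # squash to a 0..1 probability-like score

def triage(cbc):
    """Flag a CBC result for follow-up when the toy risk score crosses a threshold."""
    risk = anemia_risk(**cbc)
    return {"risk": round(risk, 3), "flag_for_followup": risk >= 0.5}

# A low-hemoglobin, microcytic, high-RDW result gets flagged for follow-up
result = triage({"hgb_g_dl": 10.0, "mcv_fl": 78.0, "rdw_pct": 16.0})
```

In practice such a score would come from a validated model trained on real outcomes; the point is only that a screening score computed "for free" on every result can feed a triage queue.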
Generative tools
A more complex application of AI in healthcare is through generative AI. Dr. McCaffrey has seen the introduction of this technology in the areas of interpretation and imperative guidance, resulting in AI-guided workflows. For example, at UTMB, the team has put together a process for toxicology interpretation with a draft written by GPT-4, with some prompt engineering, and using Retrieval-Augmented Generation (RAG) technology, which can tap into a huge amount of validated medical knowledge to provide additional information.3
The workflow now follows the steps below:
- A urine mass spectrometry drug screen is ordered.
- Completed liquid chromatography–mass spectrometry results are produced.
- Results are picked up by an AI agent (a software program to perform specific tasks).
- Using RAG technology, the AI agent pulls additional information from electronic health records such as active meds, patient age, patient sex, patient ethnicity, and other parameters.
- The knowledge base of interpretive guidance is used to write the report, which is pushed back into electronic health records automatically.
- The physician reviews their outstanding list and can sign out the patient report, or edit it if required.
This process allows more context to be provided to physicians and this is where Dr. McCaffrey hopes to see AI driving things forward to enrich healthcare provider knowledge and improve patient care.
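The workflow above can be sketched in code. Everything here is a hypothetical stand-in: the function names, data shapes, and stubbed steps only illustrate how an AI agent could chain EHR context, a guidance knowledge base, and a drafting step into a report awaiting physician sign-out. A real deployment would call an LLM and the EHR (e.g., via FHIR) rather than returning canned values.

```python
def fetch_patient_context(patient_id):
    # Stand-in for a RAG-style pull from the EHR (active meds, demographics)
    return {"age": 54, "sex": "F", "active_meds": ["sertraline"]}

def retrieve_guidance(analytes, knowledge_base):
    # Stand-in for retrieval over a validated interpretive-guidance knowledge base
    return [knowledge_base[a] for a in analytes if a in knowledge_base]

def draft_report(results, context, guidance):
    # Stand-in for the LLM drafting step (GPT-4 in the example above)
    lines = [f"Detected: {a} ({v})" for a, v in results.items()]
    lines += [f"Guidance: {g}" for g in guidance]
    lines.append(f"Context: age {context['age']}, meds {', '.join(context['active_meds'])}")
    return "\n".join(lines)

def run_tox_interpretation(patient_id, lcms_results, knowledge_base):
    # Chain the steps: EHR context -> retrieval -> draft -> queue for sign-out
    context = fetch_patient_context(patient_id)
    guidance = retrieve_guidance(lcms_results.keys(), knowledge_base)
    report = draft_report(lcms_results, context, guidance)
    return {"patient_id": patient_id, "report": report, "status": "pending_signout"}
```

The key design point is that the draft is never final: the output lands in a "pending sign-out" state so a physician always reviews, edits, or signs out the interpretation.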
Operational tools
Dr. McCaffrey also sees many uses for AI in a more clerical or administrative capacity. Currently, UTMB is using AI to help with tasks such as prior authorization, clerical documentation, and collating the variety of faxes and emails that come from different stakeholders.
UTMB staff have also found value in utilizing AI to answer Standard Operating Procedure (SOP) guidance questions and for document review. Where previously Dr. McCaffrey might have been asked for input, staff can now ask an AI agent questions such as, ‘What do you do with this tube?’ They have also used AI to perform tasks like examining SOPs for contradictions between policies, which has flagged some inaccuracies. “That says 5 mls, that says 4 mls, that says a green top, that says an orange top,” he says. “You know, it's like 10,000 pages of PDF, no one's going to look through that, but AI can look through that. There's another area where it's very helpful.”
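A toy version of the SOP contradiction check might look like the following. A real system would use an LLM over thousands of PDF pages; this regex-based sketch, with hypothetical chunk and field names, only illustrates the idea of flagging conflicting specimen volumes and tube tops for the same test.

```python
import re
from collections import defaultdict

def extract_specs(sop_chunks):
    """Collect every stated volume and tube-top color per test across SOP chunks."""
    specs = defaultdict(set)
    for chunk in sop_chunks:
        test = chunk["test"]
        for vol in re.findall(r"(\d+)\s*m[Ll]", chunk["text"]):
            specs[(test, "volume_ml")].add(vol)
        for color in re.findall(r"(green|orange|red|lavender)[- ]top", chunk["text"], re.I):
            specs[(test, "tube_top")].add(color.lower())
    return specs

def find_contradictions(sop_chunks):
    """Report any spec that has more than one distinct stated value."""
    return {k: sorted(v) for k, v in extract_specs(sop_chunks).items() if len(v) > 1}
```

Even this crude version captures the shape of the task: exhaustively compare every stated spec, something no human will do across 10,000 pages.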
Okay, thank you. Thank you for that introduction. I really appreciate it. So over the next 20 minutes, I want some of you to count how many times we can use the word AI and artificial intelligence. Since it's one of the biggest trigger words, we're gonna see how many we can go for in this talk. So we're gonna start off. So this is Dr. Peter McCaffrey, the chief of AI at the University of Texas Medical Branch. And so we really appreciate you all joining today, and thank you so much for flying in here. Thank you for having me. Perfect. All right.
So first, can you share with us about the current state of AI in the clinical laboratory? Yeah, it's a good question. So I'll start with some context where I think we see AI broadly around the lab, 'cause it relates to where it will go in the lab. In my role at UTMB, I'm a pathologist; I'm a CP-trained pathologist, but I kind of lead AI for the whole enterprise. Some of it's HR, some of it's billing, some of it's scheduling, some of it's lab diagnostics. And so when we look at where AI can be helpful in healthcare, a lot of it is in getting rid of the inefficient things that kind of contaminate the workflows that we all have. So there's the stuff we all think about, like using AI for surgical planning and pushing the boundaries of what's possible. But I think more tangibly, it's enriching the doctor-patient relationship. It's deburdening people from documentation. It's a lot of these kinds of things. We at UTMB see opportunity in offloading the things that are not the doctor talking to the patient about their health, learning what they need, following them, counseling them, the sort of humanistic touch that can be provided there. 'Cause healthcare is full of non-humanistic stuff that has to go on around that. So where does that relate to the lab and where do we kind of see interesting things happening in the lab? So I'll take a step back and say when we talk about healthcare, I'll talk about Texas, but I think this is true of many places, we talk about a physician shortage in Texas. We talk about the fact that of the 250-some-odd counties, 230 have a shortage, and 33 have no primary care doctors. It's a huge shortage. So I would upfront say we are not in a situation where we think AI will replace doctors. I think the need for doctors so far outpaces the availability of doctors. It'd be great to be at a point to even think about that, but that's not the reality. We need to scale the doctors that we have. And even if we add more, scale them and preserve their time.
So back to the lab, if you think about what doctors want and what people want from healthcare, they want clinical inference all the time on all kinds of things. So whenever I get that blood test back, whenever I'm about to eat that dessert, will it impact my hemoglobin, A1C, my glucose? What do I need to do to get insulin? What I do here, what do I do there? People want to draw on that, 24/7, everywhere. And so the real question is who's gonna meet that? And I think the lab plays an enormous role in meeting that, 'cause a tremendous amount of those questions are about what we do actually and the guidance that we give to patients and to doctors about how to care for patients. So the humanistic things like enriching the relationship are really driven in large part by what we provide to the physicians, the guidance we give and the results that we give. So in summary, I see AI making the clinical lab really front and center of the care workflow, because we're not just giving you a la carte orders and numbers; we're giving you prescriptive guidance. And that's what we desperately need for everybody. And so we are in the middle of that, and it's exciting that we can think about doing that and scaling that. Wonderful, thank you for that.
Now, have there been any challenges with doing that in the lab? Yes, quite a few. So I think the challenges are numerous, and some of them are things we're familiar with. There's things like how well does a model perform? Is it sensitive enough, specific enough, and biased, these kinds of things. Obviously we would aspire to have models that perform perfectly and are unbiased. Realistically speaking, nothing performs perfectly and is unbiased. I think the milestone that we face, the big challenge as a field, is can we at least describe the behavior of the models? Can we get there first? Then we can pursue optimal behavior. So a big challenge overall I see for healthcare, and especially labs, is how do you determine the model is good or bad? Who measures the sensitivity specificity? Who measures the bias? On what dataset? How do we do it collectively? You know, if you look out at some of the recent thought leadership from groups like CHAI, the Coalition for Health AI, they espouse this vision of AI assurance labs to fill something like this, which are sort of leading groups, hospitals, systems that would come forward and help provide this. But as a lab, we don't have regulatory clarity, we don't have technical clarity, we don't have political clarity, we don't have logistical clarity on how to deploy something, where, what to do about monitoring it, what the extent of our behavior is there. So this needs to be crystallized. A framework, they kind of exist primordially. They need to mature and be democratized for us to use, like we have with in vitro testing. And I think it's doable, but that we have to get there. 'Cause right now it's just like a lot of uncertainty for the way forward for hospitals and labs especially. Wonderful, thank you for sharing that with me. Yeah.
So now can you share some examples of how you used AI in the lab? Yeah, for sure. So there are several that we do that span predictive tools, documentation, sort of clerically assistive tools and generative tools: some are kind of new, some are not as new. As a quick aside, I'll just mention that I think the laboratory, if I can put some terminology on it, really offers what are like cognitive utilities, if that makes sense. So if I can take a moment as a pathologist to make a forward-looking statement of the field, if I may, Please. which I think guides why we pursue certain projects. So we are used to a model, a valuation model of healthcare and pathology that's RVU-driven in a large part. And the premise of the RVU-driven workflow is that there's a large premium on your intellect and your time to devote that to interpreting this person's test or that person's test. A lot of effort goes into it; because of the effort there's compensation. But what we see AI able to do is scale at least part of that, and seems like increasingly more of that, which dilutes the time required to do that. So if the interpretation that you did render took 30 minutes, now it takes five minutes, the compensation will follow, the RVU tabulation will follow. I think we're gonna see a trend where there's a cheapening, if you will, of the value of an interpretation, but an expectation to do it 10 times more as a lab, to infuse it all over the place to everyone who's a stakeholder about it. And so you can only do that if you approach it as a system that could support that kind of a scale. And, you know, we all hear the stat, like 70% of medical decisions are based on lab tests. Yes, at least, maybe even more. I mean, the more lab tests we have, the more decisions are driven by them and the more guidance we can offer to physicians in this regard.
I think some of the things at UTMB that we are interested in, I'll talk about predictive tools, are things like LGI Flag. Now, can we look at CBCs? Can we predict who's gonna become anemic? Can we use that to triage workflows for colonoscopy or follow-ups? We do this across radiology as well. Just as an example, we do things like if you get a CXR or a CT at UTMB, it'll be scrutinized for aortic calcification, coronary artery calcification. Then we have the discussion of what happens when those things are positive or anomalous, what do they reflect? What do those workflows look like? And so I think the way we approach the diagnostic part is it's like a ramp. There are some things that are always on, probably higher sensitivity, very sensitive, maybe not so specific that might be used to trigger and cascade other types of AI models. So LGI Flag is one, Sepsis is another one. Can we at least offer detection ubiquitously to the front lines, as a cognitive service really, to the provider base. But then the questions that follow that, is then what do you do about things that are positive, and how much can you and should you automate the decision-making around that? And I think the answer is you can automate quite a bit, actually. And arguably, you should, because a lot of the very early top-of-funnel workflows are not mysterious and yet also variable.
And they put a lot of weight on this one provider thinking of this one thing at this one time, making this one decision to do something. And we want to systematize that whole thing. So specifically for predictive LGI Flag, Sepsis, I would also say things like no-show, risk of unplanned readmission, risk of hospital appointment no-show. Some of these reach into Epic cognitive models. Some of them are also predictive around reimbursement status for things like coding optimization; not just for revenue, but to things like avoiding the patient getting, you know, balanced billing kinds of things happening to them. So there's lots of this that we can offer too. Radiology, the term they use in radiology is opportunistic screening. I don't know if we want to use that term, but that's the common term. Dr. Topol has written a lot about this. But things like you're creating data feeds no matter why you're at the hospital, whether it's an image, whether it's a tissue specimen, whether it's a blood tube. There is latent diagnostic information in that. And I think at the core, our perspective is that it shouldn't require an individual to decide it's worth scrutinizing for it to be scrutinized. It should be scrutinized basically for free all the time, so that then the things that are anomalous about it can be used to move the care workflow forward. And I think this brings, you know, better outcomes to patients. So in the radiology side, we have like 45 modules. Now we talk about how we merge the two. So what happens when someone comes in, you have a high CAC predicted score, can we reflex the lipid testing? Can we move to apoB and Lpa, what do we do there? Can we reflex to a cardiology nurse manager, follow-up and routing? So those are some specific things.
Another thing that we are really excited about, again, to this point of scaling interpretation and imperative guidance is where generative AI plays a role. It plays a lot of obvious roles like clerical stuff, billing and interpreting the facts from the payer, what do they need from here and there, but it has a diagnostic role too. So at UTMB, we've put together some things around generative AI for toxicology interpretation, for example. So it's finally verified by a person, but the draft is written by GPT-4, with some prompt engineering, some special RAG databases. So the workflow is like this: someone orders, you know, a urine mass spec drug screen; that gets completed, LC-MS results come out, an agent picks that up, goes and pulls the results from Epic via FHIR, pulls the active meds from Epic via FHIR, pulls the patient age, the patient sex, the patient ethnicity and other parameters, uses this knowledge base of interpretive guidance, writes the report and pushes it into Epic automatically. So when you go in as a physician to an outstanding list, you can just sign out the interp or edit it if you want. And this kind of thing means that we can provide that to more people, more, in more context. And I think this is again where we hope to see AI driving things forward. That's another big aspect. And, you know, things like prior authorization, clerical documentation, and putting together a lot of the distributed faxes and emails that come from payors about certain things is an area that AI has been good for that we've used. And I guess the fourth bucket is operational. So SOPs, we've done some things with AI that have been like asking questions about the SOP for guidance, which usually would be a page that I get, you know, what do I do with this tube? But instead you could ask an agent: What do you do with this tube? We've even used AI to do things like examine the SOPs for contradictions between the policies. 
And we found all kinds of things there, like, oh, that says five mls, that says four mls, that says a green top, that says an orange top. Whoops. You know, it's like 10,000 pages of PDFs. No one's gonna look through that. But AI can look through that. This is another area where it's very helpful. I think that the generative stuff, especially, the comment I make is so broadly applicable that the innovations really are like you as a frontline person in a workflow, and how you can cleverly apply it to your workflow. It used to be for a long time, I might be preaching to the choir here, that to do any kind of AI innovation required deep technical expertise to build a deep learning model. While that's definitely still a thing, you can be a bleeding-edge innovator in how you wire together a preexisting model, a preexisting prompt, a preexisting data set. And the ability to do that and innovate as a clinic is not something that we necessarily saw a lot before. And this is something that we are super excited about. And again, I think pathology is at a unique spot to lead that, because we sit on all the data, really. The idea of who owns the data, like this department versus that department is always a bit of a question. But we are an inherently data-centric profession. We run the internal operations for results, distribution for order intake. We can reach around other pieces of the chart. We are kind of the natural nexus for data services, and therefore also for cognitive services, I would say.
So those are a couple of our specific initiatives. And right now, if you come to our hospital with a facial droop, it'll be an AI that activates the stroke team first. If you come and you think we have a PE, it'll be an AI that activates the PE team first. If you come and you get a urinalysis, it'll be an AI that wrote your interp first. If you come and you get pharmacogenomics, it'll be an AI that wrote your interp first. If you come and we say you should go have a follow-up because we think you have LGI bleed, it'll be an AI that found that anomaly first. And I think this is just a trend we're going to see, because, like, how else could we offer that unless we hired 200 people we can't hire. So we see the benefit and we see what looks like the future. So yeah. - Well, thank you for that. So one more question, and then we'll jump to questions from the audience. But, you know, are there any thoughts about, you know, anything else in AI or where you would like to go with AI in the hospital? - Yeah, what I would love to see in the hospital, overall, and I get that I'm a laboratorian, so I speak with a biased perspective, is that we let the lab have more, I don't wanna use the word ownership; more providence, I guess, over the care pathways. I think what we can see is that more care, at least in common phenotypes, is pathway-driven. And it's a question of do you classify it into the pathway, and can we execute the pathway. Especially for diagnostics, we should do that more standardly. And I think AI finally can let us do that. And I have good faith that it will show the value of course in doing that. And so generally, that's what I want to see. Specifically, I want to see these kinds of things happen in practice: Sepsis, tox, PGx, anemia and LGI bleed, AKI, cardiovascular be automated detections and automated nominations into care pathways and intervention. And I think we can really get there, and we're seeing the first inkling. 
And as a sidebar, I think the regulatory mysteries, the liability mysteries, while nontrivial, will be solved along the way, because they're certainly feasible to solve. And they'll be solved by the market pressures and the collective will of the field bringing them forth. We've brought forth lots of complicated stuff, like mass spec in the past. I don't think this is more complex, really, just a slightly different use case. - Wonderful, thank you for that. So now we're gonna jump to the audience. Is there anybody who wants to be first up in asking questions? Any questions? Sounds good. Well, thank you guys all for your attention today. We really appreciate it. We are gonna be offstage, so if you want to ask anything privately, we can do so. Thank you so much for your time.
Digital Solutions Unlock New Possibilities for Better Care
Artificial intelligence can assist in fostering a holistic approach to better care that moves beyond transactional operations. Hear the Roche-moderated conversation with Peter McCaffrey, chief AI officer for the University of Texas Medical Branch, on how navify® and other AI-driven solutions can unlock possibilities with cutting-edge algorithms.
View more Roche Idea Lab sessions on timely topics in diagnostics and lab medicine.
Challenges to the adoption of AI in healthcare
Although there is clear value for AI in the laboratory space, there are challenges to more widespread adoption, such as attitudes toward the technology and uncertainty over regulation.4
“We don't have regulatory clarity, we don't have technical clarity, we don't have political clarity,” Dr. McCaffrey says. “We don't have logistical clarity on how to deploy something, or where, or what to do about monitoring it, and what the extent of our behavior is there. So, this needs to be crystallized.”
He believes clarity is possible and that market pressure, along with a collective will for change will bring about solutions to these challenges. He notes, “We’ve brought forth lots of complicated stuff, like mass spec, in the past. I don't think this is more complex really, just a slightly different use case.”
His hope for the future, once a regulatory framework is in place, is that labs will be given more ‘providence’ over care pathways due to the large amount of information they can offer to healthcare providers. In practice, this could mean AI-automated detections and nominations into patient care pathways for certain conditions. “We should be doing that more standardly and I think AI, finally, can let us do that and I have good faith that it will show the value in doing that,” he says.
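A minimal sketch of what "automated nomination into a care pathway" could look like as a rule: a detection event scoring above a confidence threshold maps to a pathway referral. The pathway names, threshold, and data shapes below are illustrative assumptions, not UTMB's actual configuration.

```python
# Hypothetical mapping from detected conditions to care-pathway referrals
PATHWAYS = {
    "lgi_bleed": "GI follow-up / colonoscopy triage",
    "sepsis": "Sepsis response team",
    "high_cac": "Cardiology nurse-manager follow-up",
}

def nominate(detections, threshold=0.7):
    """Return pathway nominations for detections scoring at or above the threshold."""
    return [
        {"condition": d["condition"], "pathway": PATHWAYS[d["condition"]], "score": d["score"]}
        for d in detections
        if d["score"] >= threshold and d["condition"] in PATHWAYS
    ]
```

The design choice worth noting is that detection and nomination are decoupled: detectors can run "always on" over every result, while the nomination rule decides which anomalies actually move a patient into a pathway.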
For more examples of the use of AI from Dr. McCaffrey, watch the full presentation, “Digital Solutions Unlock New Possibilities for Better Care.”
Contributors
Peter McCaffrey, MD, MS, FCAP
References
- Hou H et al. (2024). Clinica Chimica Acta, 559, 119724. Paper available from https://www.sciencedirect.com/science/article/abs/pii/S0009898124019752 [Accessed October 2024]
- Forbes. (2024). Article available from https://www.forbes.com/sites/bernardmarr/2024/06/17/what-jobs-will-ai-replace-first/ [Accessed October 2024]
- KeyReply. (2024). Article available from https://www.keyreply.com/blog/leveraging-rag-to-rethink-healthcare-a-new-era-of-ai-integration [Accessed October 2024]
- Mennella C et al. (2024). Heliyon 10, e26297. Paper available from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10879008/ [Accessed October 2024]