Key takeaways
- Agentic AI is proactive, goal-driven, and capable of independent decision-making within defined guardrails
- It doesn’t replace people, but it does enhance human roles by automating complex workflows and enabling more meaningful patient care
- Human oversight, transparency, and governance must remain central to any strategy for implementing AI agents
The role of artificial intelligence in healthcare is undergoing a profound evolution, moving beyond analytical tools and toward active digital partnership. This new frontier is the domain of agentic AI in healthcare—designed not merely to assist, but to anticipate needs, respond to commands, take initiative, and orchestrate complex clinical workflows.
To explore the immense potential and inherent complexities of this shift, we sought the expertise of Shweta Maniar, Global Director for Healthcare and Life Sciences at Google. She illuminates what makes agentic systems a fundamentally different class of technology, examines how they are already beginning to reshape care delivery, and discusses the imperative for keeping human oversight at the very core of this transformation.
Defining agentic AI
HT: Can you explain the key differences between traditional AI and agentic AI?
Shweta Maniar: Traditional AI, often called narrow AI, is designed for specific, well-defined tasks. It might process imaging data, extract information from clinical notes, or match keywords in a patient record. It’s rule-based, reactive, and dependent on predefined inputs. It does what it’s told, nothing more.
Agentic AI is a different class entirely. These are proactive systems that can plan, adapt, and act independently within defined constraints. Think of it as a digital collaborator, not just a tool. It understands context, pursues goals, and learns from outcomes to improve performance over time. Where traditional AI helps with one step in a workflow, an AI agent in healthcare can orchestrate the entire sequence.1
Why agentic AI is a true collaborator
HT: What makes agentic AI a true collaborator rather than just an assistant in healthcare settings?
Shweta Maniar: It’s the ability to take initiative within a clinical or operational context. An assistant might flag an issue or summarize a note. A collaborator will flag the issue, propose a solution, and begin executing steps, while still keeping the human in control.
For example, if a patient cancels a visit, an agentic AI system might automatically reschedule based on clinician availability, location logistics, and urgency of care. It anticipates needs, makes decisions based on logic and past patterns, and reduces administrative burden, all without requiring manual prompting at every step.
It’s also context-aware. In clinical settings, that means not just retrieving data, but analyzing it in light of comorbidities, medications, or care history. And critically, these systems improve over time. They’re designed to learn, not just to repeat.
Transforming the patient-provider relationship
HT: How do you think agentic AI will transform patient-provider interactions in the future?
Shweta Maniar: We’re already seeing it begin. One of the biggest pain points in healthcare is the administrative weight on clinicians, from filling out forms to managing records to coordinating follow-up. Agentic AI can take much of that load off their plate.
Imagine walking into a visit where the intake is already completed, summaries are prepared, and potential follow-up actions are queued up. That frees the clinician to focus fully on the patient, not the keyboard. It also allows for more personalized and empathetic care, because the AI is handling the background orchestration.
For patients, it means a smoother, more connected journey. From pre-visit reminders to post-visit instructions, agentic systems can provide timely, tailored communications in the format the patient prefers—whether that’s a text, email, or in-app message.
Where agentic AI is already delivering value in the healthcare industry
HT: What are some of the most promising use cases of agentic AI in healthcare?
Shweta Maniar: A few stand out. Clinical trial matching is a major one. Agentic AI can sift through large datasets (EMRs, trial criteria, patient registries) and proactively identify candidates, reducing the time and resources needed for recruitment.2
Another exciting area is surgical support. While we’re not talking about autonomous surgery, we are seeing systems that assist in real time by flagging anomalies, suggesting interventions, or pulling up relevant imaging without a single click.
Medication management is another use case with significant potential. Instead of simply reminding a patient to refill a prescription, agentic AI can check pharmacy inventory, detect non-adherence patterns, and automatically request authorization, creating a more seamless continuum of care.
Enhancing clinical search and summarization
HT: How can agentic AI enhance clinical search and summarization for healthcare providers?
Shweta Maniar: This is a space where we’re seeing a lot of traction. Traditional search tools retrieve documents. Agentic AI retrieves answers. It can understand the clinical intent behind a query, navigate multiple sources, and synthesize a concise, relevant response.
For example, instead of surfacing five studies about resistant hypertension in renal patients, it will extract key recommendations from the latest guidelines, match them to the patient’s profile, and offer them up in seconds during the consultation.
It’s also valuable during real-time patient interactions. When a provider mentions a symptom, the system can proactively suggest differential diagnoses, potential lab orders, or drug interactions to watch for. That kind of in-the-moment intelligence is a huge leap forward.
Accelerating innovation and discovery
HT: In your opinion, how will agentic AI in healthcare accelerate scientific discoveries?
Shweta Maniar: The research implications are immense. Agentic AI in healthcare can rapidly analyze multimodal datasets (genomic data, imaging, lab results) to identify patterns that might take years for human researchers to uncover.2
It can also generate new hypotheses by connecting previously unrelated variables and even simulate early-stage experiments in silico. That means you can test ideas faster, refine them more effectively, and move to benchwork or clinical trials with greater confidence.3
It’s also incredibly useful in data curation, automating the process of cleaning, labeling, and organizing complex datasets. That’s a huge time saver and allows scientists to focus on the creative and strategic aspects of discovery.
Important considerations for agentic AI in healthcare now and in the future
HT: What’s the role of human oversight in the age of agentic AI?
Shweta Maniar: It’s critical. As powerful as agentic AI is, it must operate within frameworks that reflect human judgment, clinical experience, and ethical standards. We can’t afford “set it and forget it” systems in healthcare.
That’s why governance is essential: ensuring transparency, auditability, and responsibility. In practice, that means every AI-driven action still has a human checkpoint where necessary. It also means building trust over time, by proving the system’s reliability in low-risk scenarios before moving into more sensitive areas.
Agentic AI is a partner, not a replacement. Just like any partnership, it works best when both sides are empowered to do what they do best.
Challenges for implementing agentic AI in healthcare
HT: We’ve talked a lot about the incredible potential of agentic AI in healthcare. Do you see any potential problems that could arise from its implementation, and what advice would you give as we look to the future with this technology?
Shweta Maniar: It’s true that the potential of agentic AI in healthcare is incredible, but it’s also smart to think about the potential problems and how we’ll handle them. A big one could be a lack of transparency. If an AI suggests a treatment but its reasoning is a total “black box,” it’s tough for both doctors and patients to trust it. We also have to be careful about biased data; if the AI learns from unfair data, it could actually make healthcare disparities worse. The good news is, we can get ahead of these issues by building a culture where the AI’s logic is clear and its data is fair from the very start.
My advice for the future is to really lean into collaboration and accountability. We shouldn’t just let AI developers build this tech in a vacuum. We need to bring doctors, nurses, patients, and ethics experts to the table to make sure these tools are designed to help, not to replace, human expertise. We also need to set up clear rules for checking the AI’s performance regularly. By making sure these agents are not only effective but also fair and unbiased, we can use them to make healthcare more efficient, more personal, and ultimately better for everyone.
References
- Bansod PB. (2025). Tata Institute of Social Sciences. Article available from https://arxiv.org/html/2506.01438v1 [Accessed August 2025]
- Karunanayake N. (2025). Informatics and Health 2(2), 73-83. Paper available from https://www.sciencedirect.com/science/article/pii/S2949953425000141 [Accessed August 2025]
- Zou J, Topol EJ. (2025). The Lancet 405(10477), 457. Paper available from https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(25)00202-8/abstract [Accessed August 2025]