
The urgent need for health technology assessment in the AI era

Published on November 7, 2025 | 6 min read

Key takeaways

  • Medical and pharmaceutical evaluation frameworks are ill-suited to dynamic AI and digital health technologies
  • Building clinical trust and adopting continuous evidence generation methods are vital for scaling AI
  • Structural changes, a shared language, and public-private collaboration are necessary to overcome regulatory challenges and advance health technology assessment globally

Building value-based AI and digital health evaluation frameworks

The integration of artificial intelligence and digital health technologies into health care promises sweeping solutions to systemic problems such as workforce shortages, soaring costs, and the high burden of chronic diseases.1 Yet a profound gap exists between this potential and practical deployment, largely due to evaluation systems that are not “fit for purpose” for dynamic, evolving technologies.2,3 To highlight a recent report from the London School of Economics and Political Science (LSE), Healthcare Transformers hosted a virtual panel of experts to reflect on the crisis of evidence and to propose a path forward for health technology assessment and for responsibly scaling digital health and AI technologies.

After a presentation by Robin van Kessel of LSE Health, the panelists discussed a wide range of issues: clinical trust, regulatory fragmentation, evidence generation, and the need for structural change and cross-sector alignment. 

Moving beyond promises

Robin van Kessel of LSE Health set the stage by noting that health care systems are in “dire straits” due to financial constraints and accelerating workforce shortages. Investing just 24 US cents per patient per year in digital health could save over two million lives over the next decade.4 However, adoption is stalled because the discourse is driven by “promises, not evidence”. Current evaluation processes, derived from pharmaceuticals, are “heavily skewed towards patients,” often ignoring health care professionals’ unique needs, and are ineffective mechanisms for evaluating digital health and AI technologies. Van Kessel noted that while most AI technologies currently classified under Software as a Medical Device are low-to-medium risk, the impending EU AI Act is likely to classify medical AI systems as high risk. The LSE report’s key contribution is an “evidence-based taxonomy for professional-facing digital health and AI technologies,” a “building blocks framework” based on seven dimensions (e.g., intended use case, data inputs, driving technology) designed to allow evaluators to compare “apples to apples”. Furthermore, while randomized controlled trials (RCTs) remain the gold standard, probabilistic methods such as Bayesian analysis are being recognized for their particular relevance to communicating uncertainty for inherently probabilistic AI technologies.
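
To make the “building blocks” idea concrete, the sketch below models a technology profile along such dimensions as a simple record and checks whether two tools are comparable. Only the three dimensions named above (intended use case, data inputs, driving technology) come from the article; the remaining fields, the comparison rule, and all names are hypothetical illustrations, not the LSE report’s actual framework.

```python
from dataclasses import dataclass, asdict

@dataclass
class TechnologyProfile:
    """Illustrative 'building blocks' record for a professional-facing AI tool.

    Only the first three dimensions (intended use case, data inputs,
    driving technology) are named in the article; the remaining fields are
    hypothetical stand-ins for the report's other dimensions.
    """
    intended_use_case: str   # e.g. triage support, image interpretation
    data_inputs: list[str]   # e.g. imaging, EHR fields, sensor streams
    driving_technology: str  # e.g. deep learning, rules-based logic
    care_setting: str        # hypothetical dimension
    end_user: str            # hypothetical dimension
    degree_of_autonomy: str  # hypothetical dimension
    update_mechanism: str    # hypothetical dimension (static vs. continually learning)

def comparable(a: TechnologyProfile, b: TechnologyProfile) -> bool:
    """Crude 'apples to apples' check: compare tools only when core dimensions match."""
    core = ("intended_use_case", "driving_technology", "end_user")
    return all(asdict(a)[k] == asdict(b)[k] for k in core)
```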

The imperative of clinical trust for digital health and AI

Jochen Klucken, Professor and Chair of Digital Medicine at the University of Luxembourg, focused on the clinical perspective, emphasizing that scaling AI requires the acceptance and trust of health care professionals. He noted that doctors currently struggle with AI and that significant education is needed. Beyond trust, the evidence must demonstrate real clinical impact, as traditional metrics such as precision and accuracy are not enough. There is also a fundamental methodological mismatch: AI inherently gives a “probabilistic answer,” unlike the binary outcomes of classical RCTs. He advocated for Bayesian models to evaluate AI effectiveness, which better align with the probabilistic nature of both AI and a doctor’s decision-making built on a priori knowledge.
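
As a minimal illustration of the kind of Bayesian reporting Klucken describes, the sketch below updates a prior belief about a diagnostic tool’s sensitivity with hypothetical validation results and reports probability statements rather than a binary verdict. All numbers (the Beta(8, 2) prior, the 46-of-50 result, the 0.90 threshold) are invented for illustration.

```python
from scipy import stats

# Hypothetical prior: a clinician's a priori belief about the tool's sensitivity,
# encoded as Beta(8, 2) (mean 0.80). Illustrative numbers only.
prior_alpha, prior_beta = 8, 2

# Hypothetical validation data: the tool flags 46 of 50 true-positive cases.
detected, missed = 46, 4

# Conjugate Beta-Binomial update: posterior distribution over sensitivity.
posterior = stats.beta(prior_alpha + detected, prior_beta + missed)

# Report probability statements that carry the uncertainty with them,
# rather than a single binary "significant / not significant" verdict.
print(f"Posterior mean sensitivity: {posterior.mean():.3f}")
print(f"95% credible interval: [{posterior.ppf(0.025):.3f}, {posterior.ppf(0.975):.3f}]")
print(f"P(sensitivity > 0.90): {posterior.sf(0.90):.3f}")
```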


The risks of regulatory failure in health technology assessment

Antonio Spina of the World Economic Forum noted that “public-private collaboration has always been fundamental” to regulation and health technology assessment, but that the changing nature of digital and AI innovation, with continuous life cycles and new actors, demands new models of collaboration, specifically the co-creation of regulatory assessment. He warned of the consequences if stakeholders fail to convene and align: regulatory fragmentation (inconsistencies and duplication), misdirected regulation (too restrictive, or too lax regarding emerging risks like model drift), and ultimately loss of competitiveness. The biggest risk, however, is that already strained health systems will be unable to address modern global health challenges unless technology can be responsibly scaled.

Embracing continuous evidence generation

Speaking from the innovator’s perspective, Chaohui Guo of Roche Diagnostics affirmed that the fundamental challenge is that dynamic AI solutions cannot be measured with traditional static RCTs. Guo advocated for a definitive shift toward a “more pragmatic and continuous evidence generation approach,” embracing pragmatic clinical studies, real-world evidence, and simulation-based methodologies. Guo hoped that alignment among stakeholders on such approaches would be a “gamechanger for evidence generation”, and that harmonization would provide “clarity and transparency” to innovators. Furthermore, if health technology assessment explicitly values data from wearables and electronic health records, it will incentivize investment in robust data infrastructure and encourage a shift towards a health system that learns continuously.
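
A minimal sketch of what such continuous, post-deployment evidence generation could look like in practice is shown below: a rolling window of confirmed real-world outcomes is compared against a baseline, and a review is flagged when performance degrades (one simple signal of model drift). The window size, baseline, tolerance, and simulated data are all illustrative assumptions, not thresholds proposed by the panel or any regulator.

```python
import random
from collections import deque

def monitor_performance(outcomes, window_size=200, baseline=0.88, tolerance=0.05):
    """Minimal sketch of continuous post-deployment monitoring.

    `outcomes` yields True/False flags as real-world predictions are confirmed
    (e.g., against EHR follow-up). The baseline and tolerance are illustrative
    assumptions, not regulatory thresholds.
    """
    window = deque(maxlen=window_size)
    for i, correct in enumerate(outcomes, start=1):
        window.append(correct)
        if len(window) == window_size:
            accuracy = sum(window) / window_size
            if accuracy < baseline - tolerance:
                yield {"case_index": i,
                       "rolling_accuracy": round(accuracy, 3),
                       "flag": "performance below tolerance - review for drift"}

# Simulated outcome stream: accuracy degrades from 0.90 to 0.78 after case 1,000.
random.seed(0)
simulated = [random.random() < (0.90 if i < 1000 else 0.78) for i in range(2000)]
first_alert = next(monitor_performance(simulated), None)
print(first_alert)
```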

The grim reality of health technology assessment

Katarzyna Markiewicz-Barreaux, AI Strategic Intelligence Lead at Philips and a Health Technology Assessment Committee Co-chair at MedTech Europe, provided sobering statistics to underscore the urgency of the issue. She noted that research indicates that only a very small percentage of AI medical devices have sufficient evidence to support a full health technology assessment, particularly regarding economic impact and safety, even after receiving a CE mark.5 Markiewicz-Barreaux warned that this lack of evaluation is dangerous, as improper use risks the technology making crucial mistakes at clinical sites and eroding trust. She described the process for innovators as “Russian roulette” due to a lack of transparency regarding evidence requirements.

Structural change is essential for progress

George Wharton of LSE Health provided structural guidance, identifying the current inertia as a “problem of coordination” because “no single actor is responsible for the whole process” of approval, HTA, adoption, and reimbursement. For professional-facing tools, the lack of defined “outcome measures” (e.g., workflow efficiency versus time saved) reinforces risk aversion. Distinguishing between “necessities” and “niceties,” Wharton argued that the “essential” interventions are those that address structural problems and integrate entire classes of evolving technologies systematically. The absolute “prerequisite” is a “shared language and taxonomy” (like the one provided in the report), because regulators use risk-based classifications while HTA agencies use function-based categories, making coordination difficult. He emphasized the need for “evidence standards to be aligned with those functions, with those use cases” and for a commitment to “iteration and experimentation” to prevent the evaluation gap from widening.

Building a robust structural foundation to ensure adequate health technology assessment

The discussion confirmed that health care systems are operating under a self-imposed paradox: desiring rapid AI innovation while maintaining static, risk-averse evaluation gates. The speakers issued a clear mandate for next steps: policy efforts must focus on structural, systemic factors. This requires immediate investment in developing a shared language and taxonomy across jurisdictions and stakeholders, coupled with the adoption of continuous, pragmatic, and probabilistic methods (such as Bayesian analysis) for evidence generation. Ultimately, public and private sectors must co-create a system that commits to rapid iteration and experimentation, ensuring the regulatory architecture is flexible enough to manage technologies that change and relearn over time. The future of health care hinges on whether policymakers can pivot from incremental improvements to building a robust structural foundation that ensures adequate health technology assessment.


Contributor

Robin van Kessel, PhD

André Hoffmann Fellow, London School of Economics and Political Science

Robin van Kessel is the André Hoffmann Fellow on Health System Financing and Payment Models at LSE Health and the World Economic Forum. At the LSE, he co-founded and leads the digital health research unit with a particular focus on the regulation, implementation, and financing of digital health and AI technologies. He holds a PhD in Comparative Health Policy from Maastricht University. His main research portfolio focuses on the intersection of digital health and artificial intelligence, health systems and policy, and health inequalities. Dr van Kessel’s work is published in leading medical and health policy journals such as npj Digital Medicine, The Lancet Digital Health, The Lancet Regional Health - Europe, the Bulletin of the World Health Organization, and The BMJ.

George Wharton, MSc

Deputy Head of Department, LSE Health

George Wharton is Associate Professor of Health Policy, with an academic background in International Relations and Health Policy. George joined LSE in 2018 with the launch of the Department of Health Policy, on the Professionally Qualified Faculty scheme. Before LSE, George worked in the UK Civil Service in a variety of policy delivery roles before joining Healthcare UK, a government agency which promotes the UK’s health and life sciences sectors overseas. George’s current work focuses on a broad range of themes in comparative international health policy. He has consulted and written for national governments, international organizations, philanthropic foundations and industry bodies, and has co-authored articles on health systems and policies in the BMJ, The Lancet, and others.

Katarzyna Markiewicz-Barreaux, PhD

Global AI Strategic Intelligence Lead, Philips

Katarzyna Markiewicz-Barreaux has been with Philips for the last 9 years, currently in the position of Global AI Strategic Intelligence Lead. Within MedTech Europe, the European medical technology industry association, Katarzyna leads the Health Technology Assessment (HTA) Committee focusing on Digital Health and AI. In this role, she is concentrating on the successful implementation of the HTA Regulation (HTA-R).

Chaohui Guo, MPhil, PhD

Head, Clinical Value & Validation Chapter for Roche Information Solutions, Roche Diagnostics

Chaohui Guo is the Head of the Clinical Validation Chapter at Roche Information Solutions (RIS), Roche Diagnostics. She joined Roche in 2019 and has worked on end-to-end evidence generation for digital health products. Prior to Roche, she worked at McKinsey & Company, focusing on the healthcare industry. She has an MPhil in Biology from the University of Cambridge, and a PhD in Neuroeconomics from Zurich University. Chaohui collaborates broadly with industry and academic partners to bring innovative solutions to address healthcare challenges, and publishes her work in peer-reviewed scientific publications and journals.

Antonio Spina

Global Lead for Digital Health, World Economic Forum

Antonio Spina is the Global Lead for Digital Health at the World Economic Forum. At the Forum, he is responsible for the Digital Healthcare Transformation (DHT) Initiative, including health AI scalability and health data collaboration across a network of 200+ industry, government, and civil society partner organizations. He has led projects and convenings across North America, Europe, Africa, India, the Middle East, China, and elsewhere. Prior to joining the Forum, Antonio led management consulting and strategy programs in healthcare, technology, and other sectors, and was a fellow at the Gates Foundation in life sciences partnerships. Antonio holds a degree in Biomedical Engineering from Johns Hopkins University.

Jochen Klucken, MD

Professor and Chair of Digital Medicine, University of Luxembourg

Jochen Klucken is a neurologist and neuroscientist with an interest in medical technologies and digital applications in neurodegenerative diseases. As a full professor and Chair of Digital Medicine and head of the Digital Medicine research group (dMed) at the University of Luxembourg and the Centre Hospitalier de Luxembourg, his research focuses on (I) shaping and innovating personalized digital healthcare solutions, (II) evaluating the medico-socio-economic benefit of new digital medical devices and services, and (III) assessing the societal impact of AI-driven health technologies on patients with Parkinson disease, their healthcare professionals, related researchers, regulators, and policy makers. This implementation research approach targets all healthcare stakeholder needs: patient empowerment, healthcare provider quality, payers’ efficiency, social transformation and acceptance, as well as regulatory and policy changes. His professional focus lies in conducting translational research on the development of digital medical devices and services and their integration into healthcare procedures, as well as the evidence generation for their clinical impact. Prof. Klucken also co-leads, as a rapporteur, a task force of European governmental HTA bodies and healthcare policy makers to harmonize European health technology assessment frameworks in light of the EHDS, DGA, HTAR, and the AI Act. He has broad interdisciplinary experience in health technology development and deployment, including start-ups, public and private clinical research requirements for innovative healthcare technologies, business cases, and sustainable data-driven healthcare services.

References

  1. Meskó, B. et al. (2018). Will artificial intelligence solve the human resource crisis in healthcare? Available from: https://bmchealthservres.biomedcentral.com/articles/10.1186/s12913-018-3359-4
  2. Farah, L. et al. (2024). Suitability of the Current Health Technology Assessment of Innovative Artificial Intelligence-Based Medical Devices: Scoping Literature Review. Available from: https://www.jmir.org/2024/1/e51514/
  3. Park, Y. et al. (2020). Evaluating artificial intelligence in medicine: phases of clinical research. Available from: https://pubmed.ncbi.nlm.nih.gov/33215066/
  4. World Health Organization (WHO) & International Telecommunication Union (ITU). (2024). Going digital for noncommunicable diseases: The case for action. Available from: https://www.who.int/publications/b/71552
  5. Farah, L. et al. (2023). Are current clinical studies on artificial intelligence-based medical devices comprehensive enough to support a full health technology assessment? A systematic review. Available from: https://doi.org/10.1016/j.artmed.2023.102547