    Q&A with EHR Association AI Task Force Leadership

    October 21, 2025


    Artificial intelligence (AI) is evolving rapidly, reshaping the health IT landscape while state and federal governments race to put regulations in place to ensure it is safe, effective, and accessible. For these reasons, AI has emerged as a priority for the EHR Association. We sat down with EHR Association AI Task Force Chair Tina Joros, JD (Veradigm), and Vice Chair Stephen Speicher, MD (Flatiron Health), to discuss the direction of AI regulations, the anticipated impact on adoption and use, and what the EHR Association sees as its priorities moving forward.

    Stephen Speicher, MD

    EHR: What are the EHR Association’s priorities in the next 12-18 months, and how, if at all, is AI changing them?

    Regulatory requirements from both D.C. and state governments are a significant driver for the decisions made by the provider organizations that use our collective products, so a lot of the work the EHR Association does relates to public policy. We’re currently spending a fair amount of our time working on AI-related conversations, as they’re a high-priority topic, as well as tracking and responding to deregulatory adjustments being made by the Trump administration. Other key areas of focus are anticipated changes to the ASTP/ONC certification program, rules that increase the burdens on providers and vendors, and working to address areas of industry frustration, such as the prior authorization process.

    EHR: How has the Association adapted since its establishment, and what areas of the health IT industry require immediate attention, if any?

    The EHR Association is structured to adapt quickly to industry trends. Our Workgroups and Task Forces, all of which are led by volunteers, are evaluated periodically throughout the year to ensure we’re giving our members a chance to meet and discuss the most pressing topics on their minds. Most recently, that has meant the addition of new efforts specific to both consent management and AI, given the prevalence of those topics within the general health IT policy conversation taking place at both the federal and state levels.

    Tina Joros

    EHR: If you were to welcome young healthcare entrepreneurs to take on the sector’s most pressing challenges, what guidance would you offer them?

    Health IT is a great sector for entrepreneurs to focus on. The work is always interesting because it evolves so quickly, both from a technological perspective and the fact that public policy impacting health IT is getting a lot of attention at the federal and state levels. There are a lot of paths to work in the industry, so it’s always helpful for both entrepreneurs and potential health IT company team members to have a clear understanding of the complexities of our nation’s healthcare system and how the business of healthcare works. Plus, they need a good grasp of the increasingly critical role of data in clinical and administrative processes in hospitals, physician practices, and other care settings.

    EHR: What principles are critical to the safe and responsible development of AI in healthcare? How do they reflect the Association’s priorities and position on current AI governance issues?

    One of the first things the AI Task Force did when it was formed was to identify certain principles that we believe are essential for ensuring the safe and high-quality development of AI-driven software tools in healthcare. These guiding principles should also be part of the conversation when developing state and federal policies and regulations regarding the use of AI in health IT.

    1. Focus on high-risk AI applications by prioritizing governance of tools that impact critical clinical decisions or add significant privacy or security risk. Fewer restrictions on other use cases, such as administrative workflows, will help ensure rapid innovation and adoption. This risk-based approach should guide oversight and reference frameworks like the FDA risk analysis.
    2. Align liability with the appropriate actor. Clinicians, not AI vendors, retain direct responsibility for patient care when AI is used, provided vendors supply clear documentation and training.
    3. Require ongoing AI monitoring and regular updates to prevent outdated or biased inputs, as well as transparency in model updates and performance tracking.
    4. Support AI utilization by all healthcare organizations, regardless of size, by considering the varying technical capabilities of large hospitals vs. small clinics. This will make AI adoption feasible for all healthcare providers, ensuring equitable access to AI tools and avoiding exacerbation of the already significant digital divide in US healthcare.

     Our goal with these principles is to strike a balance between innovation and patient safety, thereby ensuring that AI enhances healthcare without unnecessary regulatory burdens.

    EHR: In its January 2025 letter to the US Senate HELP Committee, the EHR Association cited its preference for consolidating regulatory action at the federal level. Since then, a flurry of state-level activity has introduced new AI regulations, while federal regulatory agencies work on finding their footing under the Trump Administration. Has the EHR Association’s position on regulation changed as a result?

    Our preference continues to be a federal approach to AI regulation, which would eliminate the growing complexity we face in complying with multiple and often conflicting state laws. Consolidating regulations at the Federal level would also ensure consistency across the healthcare ecosystem, which would reduce confusion for software developers and providers with locations in multiple states.

    However, while our position hasn’t changed, the regulatory landscape has. In the months since submitting our letter to the HELP Committee, California, Colorado, Texas, and several other states have enacted laws regulating AI that take effect in 2026. Even if the appetite for legislative action was there, it’s unlikely the federal government could act quickly enough to put in place a regulatory framework that would preempt those state laws. Faced with that reality, we’re working on a dual track of supporting our member companies’ compliance efforts at the state level while continuing to push for a federal regulatory framework.

    EHR: What benefits will be realized by focusing regulations on AI use cases with direct implications for high-risk clinical workflows?

    Centering AI regulations on high-risk clinical workflows makes sense because those workflows carry a greater possibility of patient harm, and that focus would simultaneously preserve room for innovation on lower-risk use cases. Our collective clients have many ideas for how AI could help them address areas of frustration, and that is where our member companies want room to move from development to adoption more expediently, unencumbered by regulation. Examples include administrative AI use cases like patient communication support, claims remittance, and streamlined benefits verification, all of which our internal polling shows are in high demand among physicians and provider organizations.

    A smart, efficient risk-based regulatory framework would be grounded in the understanding that not all AI use cases have a direct or consequential impact on patient care and safety. That differentiation, however, is not happening in many states that have passed or are contemplating AI regulations. They tend to categorize everything as high-risk, even when the AI tools have no direct impact on the delivery of care or the risk to patients is minimal.

    The unintended consequence of this one-size-fits-all approach is that it stifles AI innovation and adoption. It’s why we believe the better approach is granular, differentiating between high- and low-risk workflows, and leveraging existing frameworks that stratify risk based on the probability of occurrence, severity, and positive impact or benefit. This also helps ease the reporting burden on all technologies incorporated into an EHR that may be used at the point of care.
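    Purely as an illustration of the risk-stratification idea described above — not the Association's actual framework or any regulator's rule — a tiering check based on clinical impact, severity, and probability of error might be sketched like this. All names and thresholds here are hypothetical assumptions:

```python
from dataclasses import dataclass

# Hypothetical model of an AI use case; fields and thresholds are
# illustrative only, not drawn from any regulation or published framework.
@dataclass
class AIUseCase:
    name: str
    affects_clinical_decisions: bool  # direct impact on patient care?
    severity_if_wrong: int            # 1 (minor) .. 5 (potential patient harm)
    probability_of_error: float       # estimated, 0.0 .. 1.0

def risk_tier(uc: AIUseCase) -> str:
    """Assign a 'high' or 'low' oversight tier to a use case."""
    # Direct, consequential clinical impact -> high-risk tier
    if uc.affects_clinical_decisions and uc.severity_if_wrong >= 3:
        return "high"
    # Frequent, moderately severe failures also warrant scrutiny
    if uc.probability_of_error * uc.severity_if_wrong >= 2.0:
        return "high"
    # Everything else (e.g. administrative workflows) -> low-risk tier
    return "low"

for uc in [
    AIUseCase("clinical decision support", True, 5, 0.05),
    AIUseCase("benefits verification", False, 2, 0.10),
]:
    print(uc.name, "->", risk_tier(uc))
```

The point of the sketch is the differentiation itself: a clinical-decision tool lands in the high-oversight tier while an administrative workflow does not, which is the granularity the one-size-fits-all state approaches lack.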

    EHR: Where should the ultimate liability for outcomes involving AI tools lie, with developers or end users, and why?

    This is an interesting aspect of AI regulation that remains largely undefined. Until recently, there hasn’t been any discussion about liability in state rulemaking. For example, New York became one of the first states to address liability when a bill was introduced that holds everyone involved in creating an AI tool responsible, although it’s not specific to healthcare. California recently enacted legislation stating that a defendant—including developers, deployers, and users—cannot avoid liability by blaming AI for misinformation.

    Given the criticality of “human-in-the-loop” approaches to technology use—the concept that providers are ultimately accountable for reviewing the recommendations of AI tools and making final decisions about patient care—our stance is that liability for patient care ultimately lies with clinicians, including when AI is used as a tool. Existing liability frameworks should be followed for instances of medical malpractice that may involve AI technologies.

    EHR: Why must human-in-the-loop or human override safeguards be incorporated into AI use cases? What are the top considerations for ensuring those safeguards add value and mitigate risk?

    The Association strongly advocates both for technologies that incorporate, and for public policy that requires, human-in-the-loop or human override capabilities, ensuring that an appropriately trained and knowledgeable person remains central to decisions involving patient care. This approach also ensures that clinicians use AI recommendations, insights, or other information only to inform their decisions, not to make them.

    For truly high-risk use cases, we also support the configuration of human-in-the-loop or human override safeguards, along with other reasonable transparency requirements, when implementing and using AI tools. Finally, end users should be required to implement workflows that prioritize human-in-the-loop principles for using AI tools in patient care.

    Interestingly, we are seeing some states address the idea of human oversight in proposed legislation. Texas recently passed a law that exempts healthcare practitioners from liability when using AI tools to assist with medical decision-making, provided the practitioner reviews all AI-generated records in accordance with standards set by the Texas Medical Board. It doesn’t offer blanket immunity, but it does emphasize accountability through oversight. California, Colorado, and Utah also have elements of human oversight built into some of their AI regulations.

    by Scott Rupp
    Tags:
    EHR Association, Flatiron Health, Stephen Speicher, Tina Joros, Veradigm

    © 2025 SglatestNews. All rights reserved.
