Where the AI Action Plan falls short on health care trust

In a recent article published by The Hill, Drs. John Whyte and Margaret Lozovatsky praise the U.S. administration's new AI Action Plan as an exciting first step toward trust in health care AI.
They claim that the plan “shows particular attention to building public and professional trust in AI technology through transparent and ethical journeys [sic] and accelerating national safety, performance and interoperability standards.”
To be clear, AI holds great promise for health care. And there are aspects of the plan that deserve praise, such as its push to accelerate AI innovation in diagnostics and treatment options, its expansion of public-private partnerships and its emphasis on interoperability. But these advantages are overshadowed by three key concerns that will have a disproportionate impact on vulnerable populations if the plan is implemented as written.
Privacy risks of unified health records
A major selling point of the AI Action Plan is the rollout of a data-sharing system that will let patients more easily share personal health information (PHI) with providers. The tradeoff is that large technology companies will gain access to details that were previously shared only among patients, providers and insurance companies.
This shift creates risk by centralizing large amounts of sensitive medical data, such as diagnoses, prescriptions and lab results, in systems that become attractive targets for cybercriminals. Unlike isolated breaches at individual practices, a compromise of unified records could expose patients' most sensitive information all at once.
Patients who rely on providers with fewer cybersecurity resources, such as community health centers, are especially exposed. These patients also tend to be less digitally literate and face greater consequences from health-data discrimination, such as being denied employment or insurance after a breach of mental health or genetic data.
As written, the plan offers few safeguards beyond existing regulations, which were not designed for AI-driven health data systems at this scale. Without stronger encryption standards, mandatory breach-notification deadlines and explicit protections for PHI, the convenience of easier data sharing poses an unacceptable risk to patient privacy.
Vague standards and a punitive approach
Effective AI governance requires clear, robust regulatory standards. In my opinion, a unified federal framework would serve health care better than the patchwork of state laws the United States currently operates under. But because the AI Action Plan pushes deregulation at the expense of patient safety, going so far as to penalize states with “burdensome AI regulations,” now is clearly not the time for that federal framework.
It was therefore encouraging to see the Senate vote overwhelmingly last month to strip the AI moratorium from H.R. 1, a provision that would have prevented states from regulating AI independently. The AI Action Plan, however, takes the opposite approach, calling for the removal of “onerous” rules without defining what it actually considers burdensome or onerous.
This vague approach becomes more worrying given the plan's stated philosophy: a “build, baby, build” mentality, referenced on page 1, that prioritizes speed over safety. Such an approach creates particular risks in health care, where the stakes are higher than in other industries. Under this framework, states like Illinois, which just passed legislation banning the use of AI for mental health decisions, could face penalties for treating patient protections as essential rather than as administrative red tape.
The plan also fails to explain how AI systems will be monitored after deployment, leaving oversight to voluntary industry practice. Because AI algorithms continue to learn and change over time, they can develop new biases or errors that affect the quality of patient care. Without robust monitoring requirements, patients, especially those in under-resourced communities, become involuntary test subjects for evolving AI systems.
Instead of relying on voluntary industry oversight, health care would benefit from stricter enforcement of clearly defined regulations that monitor AI performance, make algorithmic decision-making more transparent and validate systems across diverse patient populations. These protections are especially critical for vulnerable communities, which often lack the resources to seek alternative care when AI systems fail.
Amplification of health care disparities
Finally, the plan brushes aside concerns about AI bias by stripping diversity, equity and inclusion (DEI) requirements from oversight frameworks. But in health care, algorithmic bias is not a political issue; it is a patient safety problem that already costs lives in underserved communities.
The best-known example is how AI models trained predominantly on data from white patients underestimated breast cancer risk in Black women who were in fact at high risk of developing the disease. This likely led to fewer follow-ups and to cases of breast cancer going undiagnosed or untreated, worsening health outcomes and contributing to higher mortality rates among Black women.
This is not an isolated case. Similar biases have been documented across multiple health care applications, from pain-assessment tools that underestimate discomfort in Black patients to diagnostic algorithms that miss heart disease in women. Yet the plan's removal of anything DEI-related means there will be no built-in checks and balances to keep these biases from being baked into new health care AI systems.
Without mandates to test algorithms across diverse populations, such disparities will only become more widespread as AI adoption accelerates.
Key takeaways
As written, the AI Action Plan actively discourages the kind of rigorous, equity-focused AI governance that patient safety requires. Without a course correction, health care AI risks widening, rather than closing, existing gaps in care quality and access.
This is made especially clear by a troubling dynamic: states trying to protect vulnerable patients from AI risks could face federal financial penalties for maintaining “burdensome” regulations. That effectively pressures states to lower their standards precisely when stronger protections are most needed.
Inadequate privacy safeguards will only deepen systemic vulnerabilities. To address, rather than amplify, existing health disparities in the United States, bias monitoring and prevention mechanisms must be strengthened, not eliminated.
Photo: Narvo Vexar, Getty Images
Lauren Spiller is a business analyst at ManageEngine, where she explores how emerging technologies like AI are transforming digital workplaces. Her research and writing focus on governance, security and the human side of technology adoption. Before joining ManageEngine, she worked at Gartner, developing data-driven insights to help business leaders and software buyers make smarter decisions in fast-moving markets.
This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in health care on MedCity News through MedCity Influencers. Click here to find out how.


