AI In Medical Devices: How Does The FDA Regulate To Reduce Patient Risk?



AI (artificial intelligence) algorithms are now able to make decisions or simplify tedious manual healthcare tasks that once required human intervention. At present, the FDA website lists 692 device clearance applications for medical devices enabled by AI or ML (machine learning).[1]

Of the 692, twenty-one are de novo devices, meaning devices of an entirely new type. The list will only grow as AI and ML technology improves and becomes more widespread.

AI In Healthcare: What’s The Impact?

The common thread among these medical devices is that they automate tasks that place a high cognitive load on the physician, as one might expect, since these devices are “intelligent.”

They often do this through a “teaching” (training) session that familiarizes the computer with the types of scenarios it is most likely to encounter, after which the machine produces results on new cases.

For example, a machine might be taught to spot small abnormalities in a radiology image that might otherwise be overlooked. Other uses include predictive analytics to monitor patient status and tedious tasks such as counting the lesions on an image.
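
To make the “teaching” idea concrete, here is a minimal sketch of the train-then-predict workflow in Python. It is illustrative only: the synthetic data, features, and model choice are assumptions for this example, not any cleared device’s algorithm.

```python
# Hypothetical example: classifying image regions as "lesion" vs. "no lesion"
# from pre-extracted numeric features. Synthetic data stands in for real scans.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(seed=0)

# Stand-in data: 500 image regions, 16 features each (e.g., texture, contrast).
X = rng.normal(size=(500, 16))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "lesion" label

# The "teaching" session: fit the model on labeled examples.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The machine then produces results on cases it has not seen before.
print(classification_report(y_test, model.predict(X_test)))
```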

AI + ML Diagnosis & Treatment

A systematic literature review available through the NIH demonstrates that AI and ML have tremendous potential for the diagnosis and treatment of medical conditions.[2] These devices can be life-changing for patients and labor-saving for healthcare professionals.

But the news is filled with instances of AI “hallucinations,” in which AI yields false or misleading information. AI- and ML-enabled devices therefore need controls to assure accurate healthcare information and to prevent costly and hazardous medical errors.

AI Isn’t Perfect

In recent memory, misuse of AI tools produced a legal brief with fictitious footnoted case references, caught only after it had been submitted to the judge for review.[3] (Apparently, the attorney authoring the brief didn’t take the time to proofread.) But this is not merely a symptom of carelessness.

Hallucinations In AI

In a professional setting, these AI hallucinations can be difficult for anyone to detect, even the most careful, skilled eye. AI- or ML-enabled applications therefore need a “proofreading” step built into the device to prevent medical errors: no decision made by an AI bot can be left to stand on its own. One way to implement such a step is sketched below.
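
As one illustration, a device might route every AI finding through a confidence check, so that uncertain outputs go to detailed human review and even confident outputs still require sign-off. This is a minimal sketch under assumed names and a hypothetical threshold, not a prescribed FDA control:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    case_id: str
    label: str
    confidence: float  # model's probability for its predicted label

# Hypothetical value; in practice the threshold is set through risk analysis.
REVIEW_THRESHOLD = 0.95

def route_finding(finding: Finding) -> str:
    """Decide where an AI finding goes next; nothing is auto-released."""
    if finding.confidence >= REVIEW_THRESHOLD:
        return "release_pending_physician_signoff"  # still needs approval
    return "queue_for_detailed_human_review"        # flagged as uncertain

print(route_finding(Finding("case-001", "lesion detected", 0.97)))
print(route_finding(Finding("case-002", "lesion detected", 0.62)))
```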

Yikes, ChatGPT Misdiagnosed

Indeed, ChatGPT misdiagnosed more than eighty percent of pediatric case studies in a recent study.[5] Clearly, AI cannot be used on its own to formulate a reliable medical decision.

Ten Pillars Of Responsible AI Practice

A KPMG report on ethical AI usage cites ten pillars of responsible AI practice: fairness, transparency, explainability, accountability, data integrity (a basic need for the medical device industry), reliability, security (specifically cybersecurity), safety (another basic need for medical devices), privacy, and sustainability.[6] Medical devices employing AI/ML technology clearly need to keep these principles in mind.

Safety Is Key

The most important of these pillars are the ones that assure the safety of the medical device. The safety controls will vary with the purpose of the device, but they may include a requirement that a physician approve the results.

This may take the form of a review screen that highlights the specific lesions of interest so the physician can confirm that the AI focused on the correct image features, as sketched below. For a device without decision-making features, other controls may be necessary.
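
For illustration, here is a minimal sketch of such an approval gate, in which AI-detected lesions are committed to a report only after explicit physician confirmation. All names and fields are assumptions for this example, not a specific device’s API:

```python
from dataclasses import dataclass, field

@dataclass
class Lesion:
    region: tuple          # (x, y, width, height) in image coordinates
    ai_confidence: float
    approved: bool = False

@dataclass
class Report:
    case_id: str
    lesions: list = field(default_factory=list)

def physician_review(report: Report, decisions: dict) -> Report:
    """Keep only the lesions the physician explicitly approved."""
    for i, lesion in enumerate(report.lesions):
        lesion.approved = decisions.get(i, False)  # default: not approved
    report.lesions = [l for l in report.lesions if l.approved]
    return report

draft = Report("case-001", [Lesion((40, 52, 8, 8), 0.91),
                            Lesion((110, 20, 5, 6), 0.58)])
final = physician_review(draft, decisions={0: True, 1: False})
print(f"{len(final.lesions)} lesion(s) confirmed for the final report")
```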

Design Control Is Essential

From the FDA’s perspective, design control is an essential element of understanding the risks of AI device features. An extensive risk analysis should drive that design control, ensuring that AI/ML medical devices have fail-safes, along with the cybersecurity and data integrity features needed to prevent false medical information that could harm patients.

And because an AI algorithm is ever-changing as the machine refines its understanding of the data, the output of the AI process remains difficult to predict.

Should Limitations Be Placed On AI?

Given the volume of misinformation on the internet, another type of control is to limit the sources of the device’s machine learning data. For example, if the device is intended to assist with the diagnosis of medical conditions, the information accessible to the machine might be limited to certain pre-vetted sources.

This prevents spurious medical information from entering the AI library, where it might lead to misdiagnosis or to diagnosis of an unrecognized medical condition. One way to enforce such a limit is sketched below.
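
As a simple illustration, the device’s data-ingestion step could check each document’s provenance against an allowlist of vetted sources before admitting it to the library. The source names and record fields here are assumptions for this example:

```python
# Pre-vetted sources; names are illustrative assumptions.
VETTED_SOURCES = {"peer_reviewed_journal", "curated_clinical_registry"}

def admit_to_library(documents: list[dict]) -> list[dict]:
    """Admit only documents whose provenance is on the allowlist."""
    admitted = []
    for doc in documents:
        if doc.get("source") in VETTED_SOURCES:
            admitted.append(doc)
        else:
            # Rejections should be logged for audit (data integrity).
            print(f"rejected: {doc.get('id')} (source: {doc.get('source')})")
    return admitted

library = admit_to_library([
    {"id": "doc-1", "source": "peer_reviewed_journal"},
    {"id": "doc-2", "source": "anonymous_web_forum"},
])
print(f"{len(library)} document(s) admitted to the AI library")
```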

Consider Another Pair Of Eyes

As time moves forward, however, we expect AI to become more transparent to industry professionals. Until then, with AI still in its infancy, Compliance Team urges care in developing AI/ML medical devices; an outside “pair of eyes” can review and confirm your FDA compliance and help strengthen your risk assessment.

If you need third-party review of your AI device’s design controls, Compliance Team has you covered. We understand the process for validating software-enabled medical devices.

Our FDA product submission experts can help you understand how your algorithms will be scrutinized and reviewed before a medical device clearance or approval. Let us help you minimize product risk and shorten timelines.

[1] The Food and Drug Administration. “Artificial intelligence and machine learning (AI/ML) enabled medical devices,” December 6, 2023. Accessed online on 11 Jan 2024 via “Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices | FDA.”

[2] Kumar, Yogesh, et al. “Artificial intelligence in disease diagnosis: a systematic literature review, synthesizing framework and future research agenda.” J Ambient Intell Humaniz Comput. 2023; 14(7): 8459–8486. Accessed online on 11 Jan 2024 via “Artificial intelligence in disease diagnosis: a systematic literature review, synthesizing framework and future research agenda – PMC (nih.gov).”

[3] The Associated Press. “Michael Cohen says he unwittingly sent AI-generated fake legal cases to his attorney.” December 30, 2023. Accessed online on 11 Jan 2024 via “Michael Cohen sent AI-generated fake legal cases to his lawyer : NPR.”

[4] Axios Pro. “AI legislation, lawmakers and companies to watch right now.” Accessed online on 11 Jan 2024 via “AI legislation, lawmakers and companies to watch right now (axios.com).”

[5] Choi, Joseph. “ChatGPT incorrectly diagnosed more than 8 in 10 pediatric case studies, research finds.” The Hill, January 3, 2024. Accessed online on 11 Jan 2024 via “ChatGPT incorrectly diagnosed more than 8 in 10 pediatric case studies, research finds | The Hill.”

[6] KPMG. “We are committed to using AI ethically and responsibly.” KPMG Careers and Culture page. Accessed online on 11 Jan 2024 via “KPMG Trusted AI.”