Nearly half of FDA-approved AI-powered medical devices lack clinical validation data


A recent study has found that nearly half of all artificial intelligence-powered medical devices authorized for use by the Food and Drug Administration (FDA) have not been validated on real patient data and lack publicly available evidence of their safety and effectiveness.

Recently published in Nature Medicine, the study found that about 43 percent of the 521 AI health devices approved by the FDA between 2016 and 2022 lack publicly available clinical validation data showing they were tested on real patient data. In other words, nearly half of these tools were not necessarily validated on actual patient cases, raising concerns about how they will perform in real-world medical settings.

The researchers involved in the study are calling for more transparency and better standards in the development and approval of AI-powered medical devices. They argue that clearer guidelines are needed to determine which devices are truly effective and which might need more testing.

Lead author Sammy Chouffani El Fassi, a medical student at the University of North Carolina at Chapel Hill, emphasized that there is no standard for gauging the quality and reliability of these devices. (Related: AI-powered medical devices could enhance detection and differentiation of skin cancers – but could also give false readings.)

The absence of public data does not always mean the data doesn’t exist. The FDA reviews extensive confidential information before approving a device, which might include real patient data. However, this lack of transparency could make clinicians hesitant to adopt new AI tools, as they may be unsure how these devices will perform in real-world scenarios. According to Chouffani El Fassi, physicians are unlikely to trust devices that haven’t been rigorously tested in real-world conditions.

The study also highlighted the opportunity for greater involvement from clinicians and researchers in testing these devices. By actively evaluating how AI tools perform on patients, health care professionals and respected academic institutions can improve the quality and reliability of AI in medicine.

Most of the AI tools examined were Class II devices, which are considered to have a moderate risk to patients and are typically approved based on their similarity to existing technologies. Interestingly, more than half of the devices lacking clinical validation were radiology tools, which are often used for image archiving and communication. These functions may not require prospective validation as they don’t always directly impact patient care.

Dr. Nigam H. Shah, a professor and chief data scientist at Stanford Health Care, likewise noted that some of these tools might not need real-world validation to prove their effectiveness, as they can be tested using any data, even non-medical images.

How clinicians can make a difference

The number of AI and machine learning-enabled devices authorized by the FDA now stands at around 950. This surge reflects growing interest in AI technology, but it also makes it harder for regulators to keep up. According to Shah, this is where academics and clinicians can play a critical role.

Clinicians and researchers can contribute by validating AI devices as part of their regular work, such as during medical fellowships. Device companies could also collaborate with organizations like the Coalition for Health AI to conduct thorough prospective validations.

This kind of research does not have to be overly complicated. For example, testing an AI tool that helps identify brain hemorrhages or strokes in CT scans could involve simple comparisons, like measuring the time to diagnosis with and without AI or asking radiologists to rate the tool’s effectiveness on a simple scale.

Even a basic feedback method, such as a five-point Likert scale, can provide valuable insights. If clinicians find a tool difficult to use or not beneficial in practice, that feedback helps determine the tool’s clinical value. This, in turn, could lead to greater acceptance and use of AI devices in health care.
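As a rough illustration of how lightweight this kind of evaluation can be, the short Python sketch below compares time-to-diagnosis figures recorded with and without an AI tool and summarizes clinicians' five-point Likert ratings. All numbers, names and thresholds here are hypothetical examples for illustration only; they are not drawn from the study or from any real device.

# Minimal sketch of the kind of lightweight evaluation described above.
# All data and names here are hypothetical, not from the study.

from statistics import mean, median

# Minutes from scan completion to diagnosis, with and without the AI tool
time_to_diagnosis_without_ai = [42, 55, 38, 61, 47, 50, 44]
time_to_diagnosis_with_ai = [31, 40, 29, 45, 36, 38, 33]

# Radiologists' ratings of the tool on a five-point Likert scale
# (1 = not useful at all, 5 = extremely useful)
likert_ratings = [4, 5, 3, 4, 2, 5, 4]

def summarize(label, minutes):
    """Print simple summary statistics for a list of diagnosis times."""
    print(f"{label}: mean {mean(minutes):.1f} min, median {median(minutes):.1f} min")

summarize("Without AI", time_to_diagnosis_without_ai)
summarize("With AI", time_to_diagnosis_with_ai)

# A crude headline number: average minutes saved per case when the AI tool is used
minutes_saved = mean(time_to_diagnosis_without_ai) - mean(time_to_diagnosis_with_ai)
print(f"Average time saved per case: {minutes_saved:.1f} min")

# Summarize clinician feedback: mean rating and share of ratings at 4 or above
favorable = sum(1 for r in likert_ratings if r >= 4) / len(likert_ratings)
print(f"Mean Likert rating: {mean(likert_ratings):.1f}/5, "
      f"{favorable:.0%} of clinicians rated the tool 4 or higher")

Even a comparison this simple, run on enough real cases, would give clinicians and regulators a concrete basis for judging whether a tool is worth adopting.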

Despite the potential advantages of AI, only 38 percent of physicians were using it, according to a recent survey by the American Medical Association. And while 65 percent of physicians believe AI could benefit health care, many are hesitant because they aren’t convinced the benefits justify the costs.

Watch this video to learn what’s next for AI in healthcare in 2023.

This video is from the Daily Videos channel on Brighteon.com.

More related stories:

Apple warns users with medical devices to keep iPhones away from the body because they emit EMF.

Next cyberattack target? Medical devices.

Congress allows FDA to ban off-label use of medical devices.

FDA approval of medical devices based on complete science fraud.

Surgeons receive millions from Big Pharma to promote medical devices in journals.

Sources include:

TheEpochTimes.com

Nature.com

FDA.gov

HealthcareDive.com

Retently.com

AMA-Assn.org

Brighteon.com

