On October 14, 2021, the United States Food and Drug Administration ("FDA" or "the Agency") hosted a virtual workshop titled Transparency of Artificial Intelligence ("AI")/Machine Learning ("ML")-Enabled Medical Devices. The workshop builds on the Agency's previous efforts in the area of AI/ML.
In 2019, the FDA released a discussion paper and request for comment titled Proposed Regulatory Framework for Modifications to AI/ML-Based Software as a Medical Device ("SaMD"). To support the continued development of the framework and to increase collaboration and innovation among key stakeholders and specialists, the FDA established the Digital Health Center of Excellence in 2020. And, in January 2021, the FDA released an AI/ML Action Plan, based, in part, on stakeholders' comments on the 2019 discussion paper.
The October 2021 workshop aimed to advance the FDA's current action plan. The objectives of the workshop included exploring what transparency means for manufacturers, providers, and patients with respect to AI/ML-enabled medical devices; why such transparency is important; and how transparency can be achieved. Overall, the workshop emphasized the role of transparency in creating safer and more effective AI/ML-enabled devices and in building trust among the patients and providers using or prescribing such products. Below we describe other key themes of the workshop.
Device data and product development
The workshop highlighted the importance of transparency in data and product development for fostering confidence in AI/ML-enabled medical devices. Some stakeholders, including patients, providers, and software developers, recommended transparency focused on the data sources used to train and create device software, as well as the data used to validate the software. The workshop included a discussion of the history of bias in the medical industry and the reasons for including diverse populations in the data used to train and validate software. Stakeholders recommended that the FDA and device manufacturers provide information regarding the underlying device data, including the demographics of the populations involved in testing and validation of the medical device.
The FDA also raised questions regarding software and technology development, specifically asking what degree of transparency is needed to build trust between the patient/provider and the device manufacturer. Whether the device software uses "static" or "dynamic" training, some patients and providers indicated that they would like visibility into the software source code and any changes to it. While recognizing that transparency is important for fostering patient and provider confidence, the device manufacturers in attendance explained that such transparency can raise significant issues related to the protection of proprietary information.
During the workshop, some providers and patients suggested that AI/ML-enabled medical devices include training and educational materials to provide sufficient transparency. These stakeholders recommended that potential users of AI/ML-enabled medical devices have access to information regarding the risks of the device, instructions on the safe use of the device, and ongoing training on how to interpret device data. Additionally, patients and a data scientist suggested that medical device manufacturers contact patients and providers in the event of product recalls, device malfunctions, or necessary software updates.
Cost and accessibility
Some workshop participants identified issues related to the cost and accessibility of devices. For example, some providers and patients raised concerns about insurance coverage for AI/ML-enabled medical devices. These stakeholders suggested that the FDA and device manufacturers work closely with insurance companies and hospitals to promote access to transparent information about costs and coverage (for example, whether an AI/ML-enabled device would involve ongoing costs for updates or product changes). Other stakeholders expressed a desire for a better understanding of product accessibility (for example, what types of resources, such as internet access or a smartphone, a patient would need to use the device correctly).
The FDA encouraged all interested entities to submit comments regarding the workshop by November 15, 2021. Comments should be submitted to docket number FDA-2019-N-1185 at www.regulations.gov/. The Agency explained that public comments would help guide potential next steps in the AI/ML regulatory space. The FDA provides an up-to-date list of AI/ML-enabled medical devices here.
A "statically" trained device uses a "locked" algorithm in its software, which produces consistent output whenever an identical input is provided. A "dynamically" trained device uses an "adaptive" or "continuous learning" algorithm in its software, which means that if the same input is provided, the output may differ each time.