Risk Management for Medical Devices

Speaker: Christopher Perry, Senior Principal Systems Engineer

In this MassMEDIC and Sunrise Labs webinar, you will learn how to align disparate stakeholders by creating a common terminology and a shared understanding of the Risk Management Process. Misaligned stakeholders can extend the schedule and inflate the budget. Sunrise presents methods to improve alignment and lay the groundwork for the Risk Management Process.

KEY TAKEAWAYS:

  • A basic understanding of the Risk Management Process
  • Strategies for Fostering Team Alignment
  • Mastering the Language of Risk: Key Terms Explained & Training Best Practices

Questions asked during the presentation:

When you set probability of occurrence how do you know you got it ‘right’?

Determining the “right” probability of occurrence for risk management in medical device development is a complex task that involves a combination of expert judgment, data analysis, and risk assessment methodologies.

It’s important to note that determining the “right” probability is often subjective and can be influenced by various factors, including:

  • Project Complexity: More complex projects may have higher inherent risks.
  • Regulatory Requirements: Stringent regulatory requirements can increase the likelihood of certain risks.
  • Organizational Culture: The risk appetite and tolerance of the organization can affect how probabilities are assessed.
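
As an illustration, many teams reduce this subjectivity by defining a semi-quantitative probability scale in the risk management plan before any ratings are assigned, so "getting it right" becomes a matter of justifying which band an estimate falls into. The sketch below shows one possible mapping; the category names and numeric bands are assumptions for illustration, not values taken from ISO 14971.

```python
# Illustrative only: the category names and numeric bands are assumptions for
# this sketch; real projects define and justify them in the risk management plan.
PROBABILITY_BANDS = [
    ("Frequent",   1e-3),   # estimated rate per use >= 1e-3
    ("Probable",   1e-4),
    ("Occasional", 1e-5),
    ("Remote",     1e-6),
    ("Improbable", 0.0),    # anything below 1e-6
]

def probability_category(estimated_rate_per_use: float) -> str:
    """Map a numeric occurrence estimate onto the plan's qualitative scale."""
    for name, lower_bound in PROBABILITY_BANDS:
        if estimated_rate_per_use >= lower_bound:
            return name
    return PROBABILITY_BANDS[-1][0]

print(probability_category(3e-5))  # -> "Occasional"
```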

Regarding risk analysis, according to ISO 14971:2019, we cannot use FMEA only, right?

That’s correct. While FMEA (Failure Mode and Effects Analysis) is a valuable tool for risk analysis in medical device development, it is a bottom-up technique focused on failure modes and does not, by itself, cover hazards that arise during normal use. ISO 14971:2019 requires hazards to be identified in both normal and fault conditions, and its companion guidance ISO/TR 24971 notes that FMEA alone is not sufficient to satisfy the standard’s risk analysis requirements.

Can we use the instructions for users as a risk control measure according to ISO 14971:2019?

Yes, instructions for use can be considered a risk control measure according to ISO 14971:2019; they fall under ‘information for safety’, one of the risk control options listed in clause 7.1 of the standard.

Are there any FDA guidance documents related to risk management?

Yes, the FDA has issued several guidance documents related to risk management in medical device development. These documents provide recommendations and best practices for manufacturers to ensure the safety and effectiveness of their products.

Here are a few key FDA guidance documents on risk management:

  • ISO 14971:2019, Medical Devices – Application of Risk Management to Medical Devices – While not an FDA guidance document itself, this international standard is recognized by the FDA as a consensus standard for medical device risk management.
  • Medical Device Risk Management: A Guide for Manufacturers – This guidance document provides a comprehensive overview of risk management principles and practices for medical device manufacturers.
  • Quality System Regulation (QSR) – While not specifically focused on risk management, the QSR requires medical device manufacturers to have a comprehensive quality system in place, which includes risk management as a key component.
  • Guidance for Industry: Postmarket Management of Medical Devices – This guidance document emphasizes the importance of ongoing risk management activities after a device is on the market.

Does the EU require risks to be controlled as far as possible?

Yes, the European Union (EU) requires that risks to patients and users of medical devices be controlled as far as possible. This is a key requirement of the Medical Device Regulation (MDR) 2017/745, which sets out the regulatory framework for medical devices in the EU.

Is the FDA ok with controlling to the point that the residual risk is acceptable?

Yes, the FDA is generally satisfied with controlling risks to the point that the residual risk is acceptable. This means that while it is not always possible to eliminate all risks associated with a medical device, the risks that remain should be minimized and managed in a way that ensures the device is safe for use.

Can you talk about how to resolve disagreements within the team that invariably arise when: a. deciding the harm and b. determining the probability and severity ratings (e.g. is the exposure of identifiable patient data a low, medium, or high severity? How do you decide?)

Disagreements within a team regarding risk assessment, particularly when determining harm, probability, and severity ratings, are common. Here are some strategies to foster constructive dialogue and reach consensus:

  • Facilitate Open and Honest Communication
    • Encourage respectful discussion: Create a safe environment where team members feel comfortable expressing their opinions without fear of judgment or reprisal.
    • Active listening: Ensure that everyone is heard and understood. Paraphrase and summarize others’ points to confirm comprehension.
  • Leverage Expert Knowledge
    • Consult experts: Involve subject matter experts who can provide insights and guidance on specific areas of concern. For example, a medical professional can help assess the potential harm of a medical device.
    • Seek external validation: Consider consulting external experts or industry standards to validate team assessments.
  • Use Structured Tools and Techniques
    • Risk matrix: Use a risk matrix to visually represent the severity and likelihood of risks. This can help facilitate discussions and identify areas of agreement or disagreement.
    • Decision trees: Break down complex decisions into smaller, more manageable components to help identify potential outcomes and their associated risks.
    • Scenario planning: Explore different scenarios to assess potential consequences and identify risk mitigation strategies.
  • Consider Multiple Perspectives
    • Diverse viewpoints: Encourage team members to consider different perspectives and potential consequences. This can help identify blind spots and ensure a more comprehensive risk assessment.
    • Scenario analysis: Explore various scenarios to assess potential outcomes and their associated risks.
  • Use Data-Driven Decision Making
    • Evidence-based approach: Whenever possible, base decisions on data and evidence rather than solely on subjective opinions.
    • Historical data: Analyze past incidents and trends to inform risk assessments.
  • Establish Clear Criteria
    • Defined criteria: Develop clear and consistent criteria for assessing harm, probability, and severity. This can help reduce subjectivity and improve the accuracy of risk assessments.
    • Consensus building: Work together to establish agreed-upon criteria that are relevant to the specific context.
  • Mediation or Facilitation
    • Neutral party: If disagreements persist, consider involving a neutral third party (e.g., a facilitator or mediator) to help the team reach a consensus.

I’ve heard some folks argue that training or warnings in manuals are only last-ditch risk control measures – can you speak to this?

That’s a common misconception. ISO 14971 does place information for safety (training, warnings, instructions) last in its priority order of risk control options, after inherently safe design and protective measures, but that does not make it a ‘last ditch’ effort. Training and warnings are often essential components of a comprehensive risk management plan.

So “Patient Safety” is the theme here, including supporting FMEA. How about CFMEA?

CFMEA (Critical Failure Mode and Effects Analysis) is a valuable tool for identifying and assessing critical failures in medical devices. It’s a more focused approach than traditional FMEA, specifically targeting failures that could have a significant impact on patient safety.

I always worry I go too far down the rabbit hole with risk management. ‘A micrometeor kills the driver’ is clearly too rare to put into the spreadsheet. What are methods to ensure the team is doing the proper breadth and depth? When do you declare you are done?

That’s a great point. It’s easy to get caught up in the details of risk management and lose sight of the big picture. Here are some strategies to ensure your team is striking the right balance between breadth and depth:

  • Prioritization and Focus
    • Identify critical risks: Use techniques like FMEA (Failure Mode and Effects Analysis) or CFMEA (Critical FMEA) to prioritize risks based on their likelihood and severity.
    • Focus on high-impact risks: Concentrate your efforts on risks that could have a significant impact on patient safety, product performance, or regulatory compliance.
  • Risk Matrix and Thresholds
    • Establish thresholds: Define clear thresholds for when a risk should be considered high, medium, or low based on its severity and likelihood.
    • Risk matrix: Use a risk matrix to visually represent the risks and their corresponding levels of severity and likelihood. This can help you prioritize risks and allocate resources accordingly.
  • Risk Control Measures
    • Assess effectiveness: Evaluate the effectiveness of risk control measures to ensure they are adequate to address the identified risks.
    • Continuous monitoring: Regularly review and update your risk management plan as the project progresses to ensure that it remains relevant and effective.
  • Regulatory Requirements and Standards
    • Compliance: Ensure that your risk management activities comply with relevant regulatory requirements and industry standards, such as ISO 14971.
    • Alignment: Align your risk management efforts with the specific needs and requirements of your medical device.
  • Team Expertise and Consensus
    • Leverage expertise: Utilize the collective knowledge and experience of your team to assess risks and determine appropriate control measures.
    • Consensus building: Foster open communication and collaboration to ensure that the team is in agreement on the prioritization and management of risks.

When to Declare “Done”

  • Risk acceptance: Once you have identified and assessed all significant risks, and implemented appropriate control measures, you can declare that your risk management activities are complete.
  • Residual risk: Ensure that the residual risk (the risk that remains after implementing control measures) is acceptable and aligned with your organization’s risk appetite.
  • Regulatory compliance: Verify that your risk management activities comply with all relevant regulatory requirements.

Remember, risk management is an ongoing process. Even after declaring “done,” it’s important to continue monitoring and updating your risk management plan as the project progresses and new information becomes available.

Do you need to assess if there are new risks introduced with risk control measures? Where and how do you show this was considered?

Yes, it is essential to assess if new risks are introduced with risk control measures. This is because implementing new measures can sometimes inadvertently create new hazards or vulnerabilities.

Here’s how you can demonstrate that this consideration was included in your risk management process:

  • Risk Control Effectiveness Assessment
    • Regular reviews: Conduct periodic reviews of risk control measures to assess their effectiveness in reducing or eliminating identified risks.
    • Identify unintended consequences: Look for any unintended side effects or new risks that may have arisen as a result of the control measures.
    • Document findings: Record the results of your assessments, including any new risks that were identified.
  • Risk Control Failure Analysis
    • Scenario analysis: Consider potential failure scenarios for each risk control measure.
    • Identify potential consequences: Determine what could happen if a risk control measure fails.
    • Assess new risks: Evaluate if the failure of a risk control measure could introduce new risks.
  • Risk Control Interaction Analysis
    • Consider interdependencies: Examine how different risk control measures interact with each other.
    • Identify potential conflicts: Look for any potential conflicts or contradictions that could lead to new risks.
    • Document findings: Record your analysis of risk control interactions and any identified risks.
  • Risk Control Monitoring and Review
    • Continuous monitoring: Implement a system for continuously monitoring the effectiveness of risk control measures.
    • Regular reviews: Conduct regular reviews to identify any changes in the risk landscape or the effectiveness of control measures.
    • Update risk management plan: If new risks are identified, update your risk management plan to address them.
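
One lightweight way to show that this consideration happened is to give every risk control entry in the risk file an explicit field for newly introduced risks, so that a ‘none identified’ entry is a deliberate statement rather than an omission. The sketch below illustrates the idea; the record structure and field names are hypothetical, not terms from ISO 14971.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record structure; the class and field names are assumptions for
# this sketch, not terms defined in ISO 14971.
@dataclass
class RiskControlRecord:
    control_id: str
    description: str
    verification_evidence: str                          # e.g., a test report reference
    new_risks_introduced: List[str] = field(default_factory=list)
    disposition_of_new_risks: str = "None identified"   # forces an explicit statement

# Example: the control itself (an audible alarm) introduces a new use-related
# risk (alarm fatigue) that must be fed back into the hazard analysis / FMEAs.
record = RiskControlRecord(
    control_id="RC-012",
    description="Audible occlusion alarm added to pump firmware",
    verification_evidence="VER-0457",
    new_risks_introduced=["Alarm fatigue leading to ignored alarms"],
    disposition_of_new_risks="New hazardous situation added to hazard analysis entry HA-031",
)
print(record.disposition_of_new_risks)
```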

How do you define the ‘Criteria for Risk Acceptability’?

Criteria for Risk Acceptability in medical device development are the standards or benchmarks used to determine whether a level of residual risk is acceptable. This involves balancing the benefits of the device against the potential risks associated with its use.
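
As a minimal illustration, acceptability criteria are often expressed as a severity-by-probability matrix defined in the risk management plan and justified against the benefit the device provides. The sketch below assumes simple three-level scales; the scales and the acceptable/unacceptable assignments are placeholders that each organization must define and justify for its own device.

```python
# A minimal sketch of acceptability criteria as a severity x probability matrix.
# The 3x3 scales and the cell values are assumptions for illustration only.
SEVERITY = ["Negligible", "Serious", "Critical"]
PROBABILITY = ["Improbable", "Occasional", "Frequent"]

ACCEPTABILITY = {
    # (severity, probability): acceptable?
    ("Negligible", "Improbable"): True,
    ("Negligible", "Occasional"): True,
    ("Negligible", "Frequent"):   True,
    ("Serious",    "Improbable"): True,
    ("Serious",    "Occasional"): False,
    ("Serious",    "Frequent"):   False,
    ("Critical",   "Improbable"): False,
    ("Critical",   "Occasional"): False,
    ("Critical",   "Frequent"):   False,
}

def is_acceptable(severity: str, probability: str) -> bool:
    """Look up the residual-risk acceptability decision for a rated risk."""
    return ACCEPTABILITY[(severity, probability)]

print(is_acceptable("Serious", "Improbable"))  # True under these example criteria
```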

Could you expand on the difference between “mitigation” and “risk control?”

Mitigation and risk control are often used interchangeably in risk management, but they have slightly different meanings.

Mitigation refers to the actions taken to reduce the likelihood or severity of a risk. This can involve implementing preventive measures, such as design changes, process improvements, or training.

Risk control is a broader term that encompasses all activities aimed at managing risks, including both mitigation and other strategies. This might involve avoiding risks altogether, transferring risks to a third party, or accepting risks.

In essence, mitigation is a specific strategy within the broader framework of risk control. By effectively mitigating risks, you can significantly reduce their impact and improve the overall safety and effectiveness of your medical device.

After doing dFMEA and uFMEA, do you go back to your hazard analysis and add those more detailed causes in there?

Yes, it’s generally recommended to revisit your hazard analysis after conducting a dFMEA (Design Failure Mode and Effects Analysis) and uFMEA (Use Failure Mode and Effects Analysis).

Shouldn’t part of Risk Management be determining, testing the Failure Envelope of the design parameters, and some 1.25 or 1.5x Safety Margin beyond the Use Design Intent?

Absolutely, determining and testing the failure envelope of design parameters, along with incorporating a safety margin, is a critical component of comprehensive risk management.
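
As a minimal sketch of that idea, assuming a hypothetical load-bearing component and an illustrative 1.5x factor (the appropriate margin and test method depend on the device, its materials, and applicable standards):

```python
# A minimal sketch of a design-margin check; the function name, the 1.5x factor,
# and the numbers are assumptions for illustration.
def passes_margin_test(max_use_load_n: float,
                       demonstrated_failure_load_n: float,
                       safety_factor: float = 1.5) -> bool:
    """Check that the demonstrated failure load exceeds the intended-use load
    by at least the chosen safety factor."""
    return demonstrated_failure_load_n >= safety_factor * max_use_load_n

# Intended maximum use load 100 N; component failed at 160 N in bench testing.
print(passes_margin_test(100.0, 160.0))  # True: 160 >= 1.5 * 100
print(passes_margin_test(100.0, 130.0))  # False: margin not demonstrated
```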

Are there regional differences for risk approaches?

Yes, there are regional differences in risk approaches, particularly when it comes to medical device development. These differences can be influenced by factors such as:

  • Regulatory requirements: Different countries or regions may have varying regulatory standards and expectations for risk management. For example, the FDA in the United States and the EU Medical Device Regulation (MDR) have specific requirements for risk assessment and control.
  • Cultural factors: Cultural differences can influence how risks are perceived and prioritized. For instance, some cultures may have a higher tolerance for risk, while others may be more risk-averse.
  • Healthcare systems: The structure and organization of healthcare systems can also impact risk management practices. For example, countries with centralized healthcare systems may have different approaches to risk management than those with decentralized systems.

Some examples of regional differences in risk approaches include:

  • Asia: Some countries in Asia may have a more risk-averse approach to medical device development, with a strong emphasis on safety and quality.
  • Europe: The EU MDR places a significant emphasis on risk management, with strict requirements for risk assessment and control.
  • United States: The FDA has specific guidance documents on risk management for medical devices, which outline the expectations for manufacturers.

We have struggled to agree on what the pre-mitigated system is… When is an aspect of the design inherent vs a risk control measure that the team incorporated in the initial design?

Understanding the distinction between inherent aspects of a design and risk control measures is crucial for effective risk management.

Inherent aspects are those characteristics or properties of a design that are intrinsic to its nature and cannot be easily changed without altering the fundamental concept or function of the device. These might include:

  • Material properties: The choice of materials used in the design can have inherent risks associated with them (e.g., allergies, biocompatibility).
  • Design principles: The underlying principles or concepts on which the design is based (e.g., mechanical design, electrical circuitry) can have inherent risks.
  • Intended use: The specific function or purpose of the device can introduce inherent risks (e.g., a surgical instrument designed for a particular procedure may have risks associated with its intended use).

Risk control measures, on the other hand, are actions taken to reduce or eliminate risks that are identified during the design process. These measures can be added to the design to mitigate or prevent potential hazards. Examples of risk control measures might include:

  • Safety features: Incorporating additional features to enhance safety, such as alarms, warnings, or interlocks.
  • Design changes: Modifying the design to reduce the likelihood of failure or adverse events.
  • User instructions: Providing clear and concise instructions to guide users in the safe and effective use of the device.

Here are some tips for distinguishing between inherent aspects and risk control measures:

  • Consider the “as-designed” state: Ask yourself if the feature or characteristic was present in the initial design concept or if it was added later to address a specific risk.
  • Evaluate necessity: Determine if the feature is essential for the device’s function or if it was added to mitigate a risk.
  • Consider alternatives: If a feature could be removed or modified without affecting the device’s core function, it is likely a risk control measure.

What’s your take on changing the severity after implementation of controls (i.e., reducing the severity number in the FMEA)? Some of my colleagues (in the Medical Device industry) challenge this practice and their opinion is that severity should not be changed post-control (in most cases).

The practice of changing severity ratings in an FMEA after implementing controls is a complex issue with valid arguments on both sides.

Those who argue against changing severity ratings often cite the following reasons:

  • Maintaining original assessment: They believe that the original severity rating reflects the inherent risk of the failure mode, and that implementing controls should not alter this fundamental assessment.
  • Preventing complacency: Changing severity ratings could potentially lead to complacency, as it might suggest that the risk has been completely eliminated or significantly reduced.
  • Regulatory compliance: Some regulatory bodies may have specific guidelines or expectations regarding the treatment of severity ratings in risk management.

However, there are also valid arguments in favor of changing severity ratings in certain cases:

  • Effective control measures: If a control measure is highly effective in reducing the likelihood or severity of a failure mode, it may be appropriate to reassess the severity rating to reflect the reduced impact.
  • Continuous improvement: Changing severity ratings can be a useful tool for monitoring the effectiveness of risk control measures and identifying areas for improvement.
  • Risk prioritization: By adjusting severity ratings based on the effectiveness of control measures, organizations can prioritize risks more accurately and allocate resources accordingly.

Ultimately, the decision of whether or not to change severity ratings after implementing controls should be based on a careful evaluation of the specific circumstances and the organization’s risk management objectives. It’s important to consider the following factors:

  • Effectiveness of control measures: How effective are the implemented control measures in reducing the likelihood or severity of the failure mode?
  • Regulatory requirements: Are there any specific guidelines or expectations from regulatory bodies regarding the treatment of severity ratings?
  • Risk prioritization: How will changing the severity rating affect the prioritization of risks and the allocation of resources?
  • Organizational culture: What is the organization’s risk tolerance and its approach to risk management?

Can you expand on when it is acceptable to reduce the severity level in an FMEA?

Here are some scenarios where it might be acceptable to reduce the severity level in an FMEA after implementing controls:

  • Highly Effective Controls: If the control measures implemented are demonstrably effective in significantly reducing the likelihood or severity of a failure mode, it may be reasonable to reassess the severity rating. For example, if a design change virtually eliminates the possibility of a critical failure, reducing the severity rating could be justified.
  • Risk Tolerance and Acceptance: In some cases, an organization may be willing to accept a certain level of risk, even if it’s considered high. If a control measure reduces the risk to an acceptable level within the organization’s risk tolerance, it might be appropriate to reduce the severity rating.
  • Regulatory Compliance: If a regulatory body allows for a reduction in severity rating based on the effectiveness of control measures, it may be acceptable to do so to maintain compliance.
  • Risk Prioritization: If a high-severity failure mode is significantly reduced in likelihood or severity due to control measures, it may be appropriate to reduce the severity rating to allow for more resources to be allocated to other, higher-risk areas.

However, it’s important to exercise caution when reducing severity ratings. Always ensure that the control measures are truly effective and that the reduced severity rating accurately reflects the residual risk. Additionally, document your rationale for any changes to severity ratings to support your risk management activities.

Here are some questions to consider when evaluating whether to reduce a severity rating:

  • Has the severity of the consequences been significantly reduced?
  • Are the control measures in place reliable and effective?
  • Does reducing the severity rating align with the organization’s risk tolerance and regulatory requirements?
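
As a minimal illustration of the more common practice, risk controls are usually credited against the occurrence rating while the severity rating is held constant unless the nature of the harm itself changes; the scales and numbers below are assumptions for illustration only.

```python
# A minimal sketch: severity held constant, occurrence reduced by the control.
def risk_index(severity: int, occurrence: int) -> int:
    """Simple severity x occurrence index on 1-5 scales (higher = worse)."""
    return severity * occurrence

pre_control  = risk_index(severity=4, occurrence=4)   # 16
post_control = risk_index(severity=4, occurrence=2)   # 8: occurrence reduced,
                                                      # severity unchanged
print(pre_control, post_control)
```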

Can you talk a bit about post market surveillance activities, specifically using risk levels deciding whether an action is required to reduce complaints, knowing that we have both the hazard analysis and FMEAs? Do you recommend using the hazard analysis table to make that determination instead of FMEAs?

Clause 10 of ISO 14971 specifically covers the requirements for managing production and post-production (post-market surveillance) activities, and clause 10.3 addresses directly how surveillance data should be reviewed.

In short, you need to assess the complaint to decide what it is describing:

  • If the complaint describes a new hazard, the hazard analysis should be updated to include this new information, with suitable risk controls applied. If this new hazard can arise from device failure modes / use errors, then the design / use FMEAs will also need to be updated to capture this information, again with suitable risk controls applied.
  • If the complaint describes a new failure mode / use error, then the relevant FMEAs will need to be updated with new entries, with suitable risk controls applied.
  • If the complaint doesn’t describe new hazards / failure modes / use errors, then it generally indicates that you should revisit your residual probability assessment for these issues and decide if more or new risk control measures are needed to further reduce the residual probability.
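
A minimal sketch of this triage logic, with hypothetical function and flag names, simply encodes which risk documents to revisit for a given complaint:

```python
# Hypothetical helper; it encodes the three triage outcomes described above.
def documents_to_update(new_hazard: bool,
                        new_failure_mode_or_use_error: bool) -> list[str]:
    updates = []
    if new_hazard:
        updates.append("Hazard analysis: add hazard + risk controls")
        if new_failure_mode_or_use_error:
            updates.append("Design/use FMEAs: add failure mode or use error + risk controls")
    elif new_failure_mode_or_use_error:
        updates.append("Relevant FMEA: add new entry + risk controls")
    else:
        updates.append("Revisit residual probability; decide if further controls are needed")
    return updates

print(documents_to_update(new_hazard=False, new_failure_mode_or_use_error=True))
```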

Can you speak a little more about the P1/P2 breakdown? What are some examples of when you would break them apart, and when would it be better to move forward with a single probability?

Usually P2 (the probability of a hazardous situation leading to harm) is best assessed by your clinical representative, whereas estimating P1 (the probability of the hazardous situation occurring) is generally best undertaken by the engineering team. Using the ‘crossing the street’ example, it’s a lot easier for engineers to quantify how often a car passes a spot on the street (P1) than to judge how likely a pedestrian in the road is to be struck and injured (P2).

If your project can get P2 values for each hazardous situation / harm agreed upon at the start of the project, then the engineering team doesn’t have to struggle with that part of the risk assessment, which streamlines the risk assessment process.

Alas, this rarely is the case, so for mechanical / electrical / use FMEAs, the team generally needs to estimate the overall probability.

On the other hand, section B.4.3 of IEC 62304 does include the following language, ‘When software is present in a sequence or combination of events leading to a HAZARDOUS SITUATION, the probability of the software failure occurring cannot be considered in estimating the RISK for the HAZARDOUS SITUATION. In such cases, considering a worst case probability is appropriate, and the probability for the software failure occurring [P1] should be set to 1.’

I generally suggest that when conducting software dFMEAs you either:

  • Include separate columns for P1 & P2, set the former to 1, have the team assess P2, and calculate the overall probability accordingly (a minimal sketch of this approach follows this list).
  • Have the team assess the overall probability assuming that the software has failed; this will need to be policed throughout the process to assure the logic is consistently applied. Either way, the software risk assessment process should be explained somewhere in the risk management file, such as the risk management plan or within a scope section of the risk document in question.
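
A minimal sketch of the first option, assuming illustrative P2 values (real values would come from the clinical assessment defined in the risk management plan):

```python
# A minimal sketch of separate P1/P2 columns with P1 fixed at 1 for software
# causes, per IEC 62304 B.4.3. The P2 scale values are assumptions.
P2_SCALE = {"Low": 0.01, "Medium": 0.1, "High": 0.5}   # illustrative values only

def overall_probability(p2_category: str, software_cause: bool,
                        p1_estimate: float = 1.0) -> float:
    """P = P1 * P2. For software causes, P1 defaults to 1 (worst case)."""
    p1 = 1.0 if software_cause else p1_estimate
    return p1 * P2_SCALE[p2_category]

# Software failure leading to a hazardous situation; clinical team rates P2 "Medium".
print(overall_probability("Medium", software_cause=True))                     # 0.1
# Same hazardous situation via a hardware cause with an estimated P1 of 1e-3.
print(overall_probability("Medium", software_cause=False, p1_estimate=1e-3))  # 1e-4
```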

Are there tools you recommend for performing risk analysis and linking?

I’ve used both DOORS and Helix to build risk management files / requirements documents. DOORS is particularly configurable and can be extensively automated. Helix is more limited in its capabilities but easier to configure for smaller projects / organizations.

Jama is reportedly a very feature-rich tool, though Sunrise doesn’t have any recent experience with it. In all cases, these tools take significant effort to set up and maintain, so your organization needs to make sure there are resources to do so.

Do you have a dedicated “risk management team” or are hazards identification, hazards analysis, FMEA tasks for the design engineering team? Is there an ideal team makeup?

Both approaches are practiced: some organizations build risk files with a dedicated team, others use engineering team members, and which is used probably depends in part on organization size. Some organizations have risk assessments performed exclusively by the quality team.

At Sunrise, if our customers request our assistance in building their risk file, the assigned systems engineer is responsible for building it. The systems engineer will construct the hazard analysis, pull in design team members as necessary to construct dFMEAs in their areas of expertise, and supervise the design review process.

My opinion is that involving the design team is key to getting them to understand what the problems are and to leveraging their knowledge to craft the most cost-effective risk controls. Their involvement does take time away from their other deliverables, but in my experience this is well worth the tradeoff.

If someone reviewing / auditing a risk management matrix finds a risk that is not listed in the document, how do they know whether the risk was considered and determined not applicable (e.g., too low a probability) or was not considered at all?

When it isn’t obvious if a hazard should be included, then I default to including it. If it doesn’t present a significant risk, then the assessment will reflect this and the notes can further explain the reason for its inclusion and why it presents such a low risk. Alternatively, if there are a class of hazards that the author reasons shouldn’t be included, this should be explained somewhere in the risk management file, such as the risk management plan or within a scope section of the risk document in question.

In the case of the reviewer identifying ‘missed’ risk assessments, they should work with the author to have the document or risk file updated as I’ve described. Even if there is sound reasoning for not including an entry, the fact that the reviewer perceives a deficiency indicates that others may conclude the same.

We should strive to build risk management files that the team, including reviewers, agree are complete. This may result in a larger file, but this is a fair tradeoff when considering the cost of a deficiency during a regulatory review.

I have seen the acceptability matrix broken up into varying degrees. What is your opinion of that?

Risk management files can certainly include more than two risk levels. A typical arrangement might be:
  • ‘Unacceptable’
  • ‘As Low As Reasonably Practicable’ / ‘As Far As Possible’
  • ‘Broadly Acceptable’

The terms for these intermediate risk levels are discussed in Note 1 of clause 4.2 of ISO 14971, and their use is described in ISO/TR 24971 and in Annex D of earlier editions of ISO 14971. They can be part of a strategy for ‘flagging’ where additional risk controls are needed, where certain risk control strategies may be used, or where specific items should be discussed as part of the risk-benefit analysis.

I’ve included these terms in my RMFs before, but they can complicate the risk assessment process and can be cumbersome when assessing overall residual risk, as required by clause 8 of ISO 14971. I’ve been moving away from their use: Ultimately, we have to decide if the risk posed by a Hazardous Situation / Failure Mode is acceptable, even if there might be caveats.

Can you please explain why the probability of a software failure occurring should be set to 1, which makes p=100% even after risk control according to my understanding. Is this statement accurate?

The assessment of P1 in software dFMEAs is based on the following language in section B.4.3 of IEC 62304: ‘When software is present in a sequence or combination of events leading to a HAZARDOUS SITUATION, the probability of the software failure occurring cannot be considered in estimating the RISK for the HAZARDOUS SITUATION. In such cases, considering a worst case probability is appropriate, and the probability for the software failure occurring should be set to 1.’

To be clear, this is only the assessment of P1, not P, i.e., the probability of entering the HAZARDOUS SITUATION, not the probability that the patient/caregiver/operator automatically receives a ‘High’ severity HARM as a result.

Also, this assessment of P1 is only the default for the initial risk assessment, not necessarily the residual risk assessment. Applying risk control measures that reduce P1 allows you to assume less than ‘1’ for the residual probability. The trick is that it’s very hard to justify reducing the residual P1 by software means, unless you implement a redundant software system with diversity. The more robust means of reducing residual P1 is to utilize a risk control measure that is independent of software.


The Massachusetts Medical Device Industry Council (MassMEDIC) represents the thriving health technology sector in Massachusetts and New England, comprising more than 300 MedTech companies. Through advocacy, events, mentoring networks, and matchmaking, MassMEDIC is the engine of one of the most powerful life sciences clusters in the world. Join MassMEDIC to get access to events like this and many more benefits.

