Moe Alsumidaie, María Proupín-Pérez, PhD, Artem Andrianov, PhD, Beat Widler, PhD, Peter Schiemann, PhD, Johanna Schenk, MD, PhD
Originally published on Applied Clinical Trials


Quality Risk Management (QRM), Quality by Design (QbD), Risk-based Monitoring (RbM), data-driven monitoring and centralized monitoring have become interrelated terms of a hot topic, and there is hardly an organization that does not claim to already apply, or to be considering, such an approach for clinical development and pharmacovigilance. This raises the question: is it just hype or a paradigm shift? We are convinced that it is more than hype. However, the paradigm shift has not happened yet either. We – the RbM Consortium, which includes recognized experts in this area – observe many approaches to risk-based study management, but we have come to the conclusion that it is too early to celebrate the advent of a new era in clinical trials and development. We notice that the majority of the approaches proposed lack fundamental elements of Quality by Design and Quality Risk Management strategies.
We are observing five major flaws:

  1. Lack of a uniform and broad understanding of underlying principles, methodology and approach:
    A shared understanding of terminology, scope of tasks, deliverables as well as roles and responsibilities across the entire stakeholder community involved in the planning, conduct, analysis, reporting and assessment of clinical trials – e.g., sponsors, contract research organizations (CROs), investigators (and, as a result, patients) – is important to the realization of a risk-based study management approach.
  2. Decisions about risks are not based on soundly defined objective decision criteria:
    Risk assessments and their resulting decisions today are mostly based on the evaluation and opinion of teams or individuals. What happens if those teams or individuals are replaced by others? Will their decisions be the same? This is highly doubtful, since individuals base their decisions on their experience and on the situation they are currently in. As important as experience is for establishing a risk-based system, independent decision criteria or decision frameworks are needed to keep the human factor as small as possible once the system goes live.
  3. Emphasis is not put on the foundation of risk-based monitoring:
    The foundation of a sound risk-based approach starts much earlier than the trial itself, with aspects such as protocol design, study start-up assessments, site qualification and patient enrollment optimization, to name a few.
    The common approach focuses only on a centralized review of incoming data or on a strategy of reduced source document verification (SDV) and site monitoring, while protocol design aspects – the blueprint for the trial – are neglected. As a result, protocols of inadequate design (structure and content) are implemented with a study management approach that is likely to be unfit for the complexity of the protocol.
  4. No integrated quality strategy is developed and implemented:
    Such a strategy must include study design, site selection (investigators who have the required therapeutic and Good Clinical Practice (GCP) experience as well as access to the patients to be enrolled), study management (centralized/data-driven monitoring) and general oversight aspects (quality management plan, third-party oversight, audits, etc.).
  5. Risk assessments are conducted in silos and not shared within and between sponsors:
    In order to save time and money, sharing assessments of “common” entities, such as CROs, central laboratories and others, can contribute to a leaner and more effective approach to managing clinical trials. In addition, networks of service providers should be created to serve as a one-stop shop for the sponsor of a clinical study, covering all elements from protocol design to reporting of trial results, including disclosure, submissions, training, qualifications, risk assessments, safety duties and more.

Based on the deficiencies outlined above, we have put together a catalogue of ten key questions on how to navigate QbD, QRM and RbM and how to implement a sound Risk-based Study Management Strategy.

1. Is it true that only larger pharma or device companies can apply a risk-based approach?

The RbM Consortium’s Position:
Proactively and early identifying – by means of measurable, objective criteria – the impact, likelihood and detectability of scientific/medical, regulatory and operational risks, and implementing mitigating and/or preventive actions based on transparent and predefined outcome measures, is the foundation of a risk-based approach that can (and should) be applied by any organization.

Designing and implementing a risk-based approach does not depend on the size of a company. Smaller biotech, pharmaceutical and device companies as well as academia can do it, too. All will benefit as much as Big Pharma from a smart approach to monitoring. We all face the same challenges: lack of resources, time pressure and growing regulatory requirements.
It is important that RbM solutions meet the criteria defined in our position above. There are many ways to implement RbM, whether built with a spreadsheet or supported by a highly sophisticated IT system with automated data analysis. The more clinical studies an organization runs and the more complex they are (e.g., global, multiple arms, comparators), the greater the benefit of a solution that is technically sound and highly automated. However, this is not needed in all scenarios, as we explain below.

2. Do I need sophisticated IT tools to implement a risk-based approach?

The RbM Consortium’s Position:
A risk-based approach means a change in the planning and conduct of a clinical trial. Central elements of this change are a structured analysis, based on objective criteria, as well as the mitigation of any risks. IT tools are technical enablers but not a condition for implementing a risk-based approach.

A risk-based approach starts with the identification of risks, followed by an analysis of their impact, likelihood and detectability, and of the possible actions to mitigate them. For such an assessment, no sophisticated IT tools are required. The tracking of risk profiles – at the sponsor, CRO or clinical trial center level – and of the actions triggered by the risk assessments can be performed with off-the-shelf office tools (e.g., spreadsheets). If larger volumes of data need to be assessed, more sophisticated web-based tools may be more user-friendly.
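To make this concrete, the following minimal sketch (in Python, though the same structure fits a spreadsheet) shows a hypothetical risk register in which each risk is scored for impact, likelihood and detectability. The three-point scales, the FMEA-style priority score and the example risks are our own illustrative assumptions, not a prescribed method.

```python
# Minimal, spreadsheet-like risk register: each risk is scored on impact,
# likelihood and detectability (1 = low, 3 = high); the priority score and
# the ranking logic below are illustrative assumptions, not a prescribed method.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    impact: int          # 1-3: consequence if the risk materializes
    likelihood: int      # 1-3: probability of the risk materializing
    detectability: int   # 1-3: 3 = hard to detect before it causes harm
    mitigation: str

    @property
    def score(self) -> int:
        # FMEA-style risk priority number (higher = more attention needed)
        return self.impact * self.likelihood * self.detectability

register = [
    Risk("Complex dosing schedule leads to protocol deviations", 3, 2, 2,
         "Simplify schedule; add site training and a dosing diary"),
    Risk("Key inclusion criterion excludes most real-world patients", 3, 3, 1,
         "Verify criterion against EHR data before finalizing the protocol"),
    Risk("Single-source IMP supply disruption", 2, 1, 1,
         "Qualify a back-up depot"),
]

# Review risks in descending order of priority, as a study team would
# when deciding where to put mitigation effort first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"score={risk.score:2d}  {risk.description}  -> {risk.mitigation}")
```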
However, when clinical trial data are to be regularly reviewed for outliers, errors or other non-compliances, and large volumes of data need to be processed, a validated and well-designed computerized system is required to keep the workload manageable. In addition, the data need to be presented in reports that are easy to understand, classified in risk categories based on objective criteria, and that lead to actionable mitigation measures.

3. Is it true that we need to change the way we write protocols and set up trials to successfully implement a risk-based approach?

The RbM Consortium’s Position:
Protocols need to be based on a clear rationale reflecting the needs of prescribers, patients and payers, and on one or at most two primary study objectives. These, as well as the inclusion and exclusion criteria, the study schedule and other critical protocol elements, need to be verified by means of real-life data in order to meet the expectations and capabilities of investigators and patients. Nor should we forget that the protocol needs a structure that can be easily understood by those who have to execute the trial.
Such careful planning is key to avoiding unnecessary protocol amendments.

There is consensus that the current approach to clinical development is not sustainable due to the following reasons:

  • About 50 % of sites fail to reach the planned recruitment targets [1].
  • Most if not all milestones are missed: more than 95 % of clinical trials do not end on time and on budget as originally planned [1].
  • 90 % of studies eventually meet their recruitment goals, but mostly at the cost of roughly twice the originally planned time [1]. The costs attributable to these delays are enormous.
  • Too many avoidable protocol amendments, with a first amendment implemented even before the very first patient has been enrolled – most of them due to the rush towards “First-Patient-In”.
  • Compliance flaws causing delays and additional costs.
  • Bottom line: trials take much longer and cost much more than originally planned.

A senior inspector recently summarized her reservations about the current approach as follows: “If sponsors and CROs do not learn to write better protocols, moving from the current trial monitoring approach to a risk-based or centralized monitoring approach is likely to end in a compliance disaster”.
We are also absolutely convinced that the quality of protocols needs substantial improvement, for example by implementing the following changes to planning:

  • Focusing on one study rationale and one or two primary objectives.
  • Verifying with real-life data that inclusion criteria are fit for purpose and support the value proposition of the new product.
  • Verifying that the study set-up is aligned with patients’ and investigators’ priorities and capabilities through patient-centric models and simulations.
  • Verifying with real-life data that enrollment plans (recruitment goal and rate) are realistic and the right sites are being involved.

We should never forget that the rationale and objectives of a protocol should target a clear value proposition and that the key stakeholders of a protocol are not the editors of a scientific journal but the professionals performing reviews at health authorities and Ethics Committees/Institutional Review Boards (EC/IRB) prior to clinical trial approval or in the context of Health Technology Assessments (HTA) for the decision on pricing and reimbursement.

4. What are KRIs and KPIs, and how can they support a risk-based approach?

The RbM Consortium’s Position:
A KRI (Key Risk Indicator) is an objective measurement of a study-related parameter against a pre-set threshold (and therefore digital – “on” or “off”), providing a signal about the risk to a study process or any of its deliverables. KRIs are different from KPIs (Key Performance Indicators): a KPI measures the achievement of an operational or performance target, such as completing patient enrollment within the planned timelines.

KRIs are used to measure in an objective manner the risk imposed on a process or system. We distinguish leading and lagging KRIs. A leading KRI measures parameters that indicate a problem building up before it materializes, so there is still time to correct it. Lagging KRIs indicate deviations that have already happened; a single such deviation would not have a great impact, but when deviations pile up they impose a risk on the process or deliverables concerned. Generally, leading KRIs are more effective but also more difficult to measure than lagging ones. A proper suite combining leading and lagging KRIs allows a study management team to plan and execute a risk-based strategy, and KRIs can be benchmarked via empirical data analyses.
Delayed Monitoring Visits (compared to the timelines set in the Study Monitoring Plan) is an example of a leading KRI. A delay in on-site monitoring does not per se threaten patients’ safety, integrity and rights, or data integrity, but it increases the risk of late detection of a GCP issue, and as such it is sensible from a study oversight perspective to address the delay proactively. Protocol Deviations is an example of a lagging KRI: the GCP deviation has already occurred and, although it cannot be undone, its recurrence can be prevented.
KRIs can be designed as “digital” triggers, i.e., the KRI is either on or off: when the threshold is reached the KRI “fires” and generates a signal (a minimal illustration of such a trigger follows the list below). This approach, rather than “multi-level” KRI thresholds, avoids complexity in the calculation logic and also facilitates the “validation” of the threshold. Indeed, any threshold needs to be justified by means of objective criteria. Validation of a KRI threshold can in essence be achieved in three ways, with a fallback if none of them applies:

  • a) Thresholds taken from regulatory, SOP, industry or other standards, e.g., the KRI for processing a serious adverse event would be set at day 13 to avoid non-compliance with the 15-day reporting timeline.
  • b) Analyzing compliance data from past studies, i.e., analyzing the behavior of a KRI threshold in studies known for good compliance vs. studies with recognized GCP deficiencies.
  • c) Analyzing “real world” data, e.g., analyzing Electronic Health Records (EHR) to get clues about realistic expectations vs. protocol requirements. For instance, EHRs can be analyzed to determine how frequently and to what extent a patient’s visit deviates from the scheduled visit date.
  • d) If none of the above approaches can be applied, an arbitrary threshold can be set and the adequacy of the threshold can be verified or fine-tuned later through on-site monitoring and auditing approaches.
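To illustrate the “digital” trigger design and the threshold rationale from a) above, here is a minimal Python sketch. The function name, the data model and the application of the day-13 margin to an individual SAE are illustrative assumptions rather than a prescribed implementation.

```python
# Minimal sketch of a "digital" (on/off) KRI trigger using the SAE-processing
# example above: the threshold is set at day 13 to leave a margin before the
# 15-day expedited reporting timeline. Names and data model are illustrative.
from datetime import date
from typing import Optional

SAE_PROCESSING_THRESHOLD_DAYS = 13  # justified by the 15-day reporting timeline

def sae_processing_kri(awareness_date: date,
                       reported_date: Optional[date],
                       today: date) -> bool:
    """Return True (the KRI 'fires') when processing time breaches the threshold."""
    end = reported_date or today  # still-open SAEs are measured against today
    return (end - awareness_date).days >= SAE_PROCESSING_THRESHOLD_DAYS

# Example: an SAE still open after 14 days fires the KRI; one closed after 8 days does not.
print(sae_processing_kri(date(2015, 3, 1), None, date(2015, 3, 15)))              # True
print(sae_processing_kri(date(2015, 3, 1), date(2015, 3, 9), date(2015, 3, 15)))  # False
```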

When defining KRIs, study teams must take care to measure what has an impact on compliance and process effectiveness, not what can easily be measured. For instance, measuring compliance against timelines may induce team members to take shortcuts, which can become the root cause of new deficiencies.
Reports aggregating KRI analysis data can be presented via tables, graphical maps, gauges and other analytical visualizations.

5. Will a risk-based approach make site selection/qualification and patient recruitment into my trials any easier, and will it help to get better data faster and cheaper?

The RbM Consortium’s Position:
Time invested to carefully plan the clinical study and to verify plans with real-life data streamlines site selection, patient recruitment and patient retention, increases investigator satisfaction, drives GCP compliance and reduces, if not eliminates, enrollment-related and other unnecessary protocol amendments.

Risk-based approaches apply not only to monitoring, but also to many facets of a clinical trial, such as site selection/qualification, protocol design and subject enrollment/retention. Sponsors can mitigate clinical trial risks in the following areas:

  • Predict Study Site Enrollment and Quality Potential Prior to Initiation: Initiating poorly performing study sites is common and costly. Up to one third of study sites never enroll a single patient [2], and it can cost up to 50,000 USD per site to seek, initiate, maintain and close poorly performing sites [3]. With the availability of numerous data sources (e.g., patient populations, number of ongoing clinical trials at an investigative site, competitive trials, site resources, years of investigator experience), we advise sponsors/CROs to conduct empirical analyses and run predictive models against historical site performance to unveil the factors that affect quality and enrollment outcomes (see the modeling sketch after this list). Quantifying site enrollment and quality potential prior to site initiation enables sponsors to minimize exposure to poorly performing sites, which saves costs and improves quality outcomes.
  • Protocol Design Optimization: According to FDA and EU inspectors, a well-designed protocol is the blueprint for a quality clinical trial. Many studies experience timeline slippage and compliance issues because of poorly designed protocols, which result in numerous amendments at a cost of more than 400,000 USD each [4]. In order to minimize timeline slippage, we advise sponsors/CROs to test protocols against real-time aggregated Electronic Medical Records data. This enables sponsors/CROs to optimize protocol inclusion/exclusion criteria to better match real-world patient medical criteria, which can minimize enrollment slippage and the number of protocol amendments.
  • Retain and Engage Patients: Missing clinical trial data due to subject dropout has a significant impact on clinical trial timelines, budgets and quality outcomes. Reasons for subject dropout vary, from subject geo-demographics, protocol complexity, subject commitment, education, distance from the study site and employment status to physical activity and much more [5, 6]. Although there are technologies that focus on engaging patients, many are unproven in the realm of healthcare and clinical trials. We advise sponsors/CROs not only to test out feasible technologies, but also to engage patients through validated technologies such as SMS (Short Messaging Service, also known as text messaging), which is a proven and cost-effective way to improve medication adherence [7], improve attendance at medical appointments [8] and motivate patients to stay in a clinical trial.
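As a sketch of the predictive modeling suggested in the first bullet above, the following Python example fits a simple model of historical site performance and scores a candidate site before initiation. The feature set, the synthetic data, the choice of logistic regression and the 0.5 cut-off are all illustrative assumptions, not a validated model.

```python
# Sketch of modeling historical site performance to predict which sites are
# likely to enroll adequately. Feature names, the synthetic data and the choice
# of a logistic regression are illustrative assumptions, not a prescribed model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_sites = 200

# Hypothetical historical features per site
X = np.column_stack([
    rng.integers(0, 15, n_sites),     # years of investigator experience
    rng.integers(0, 8, n_sites),      # number of competing trials at the site
    rng.integers(50, 2000, n_sites),  # eligible patients in the site's catchment
])
# Hypothetical outcome: 1 = site met its enrollment target in past trials
y = (0.2 * X[:, 0] - 0.5 * X[:, 1] + 0.002 * X[:, 2]
     + rng.normal(0, 1, n_sites)) > 1.5

model = LogisticRegression(max_iter=1000).fit(X, y.astype(int))

# Score a candidate site before initiation and review it more closely if the
# predicted probability of adequate enrollment falls below an arbitrary 0.5.
candidate = np.array([[3, 5, 300]])
p_success = model.predict_proba(candidate)[0, 1]
print(f"Predicted probability of adequate enrollment: {p_success:.2f}")
```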

6. Will a risk-based approach allow me to stop SDV (source document verification) and reduce the burden of on-site monitoring?

The RbM Consortium’s Position:
Traditionally, SDV was used to help improve data quality in clinical trials, but research has shown that SDV has a limited impact on data quality [9]. However, reducing SDV without taking any other action is one of the major RbM pitfalls.
Source document verification in a risk-based environment is a tool in the study team’s toolbox to supplement information that cannot be obtained through central review and to confirm conclusions drawn based on the central review.

With RbM, source document verification (SDV) does not become obsolete, but the way it is applied needs to change, as outlined below. The FDA clearly states: “No single approach to monitoring is appropriate or necessary for every clinical trial. FDA recommends that each sponsor defines a monitoring plan that is tailored to the specific human subject protection and data integrity risks of the trial. Ordinarily, such a risk-based plan would include a mix of centralized and on-site monitoring practices.” [10]. Therefore, through the systematic use of a centralized review of the study data, we can expect a reduction in the burden of on-site monitoring, improvements in clinical data quality and an increase in the clinical monitors’ effectiveness. All sources of data should be mined and analyzed: CRF data and its metadata (e.g., audit trail data), the CTMS, the TMF, the safety database, etc. (a minimal example of such a centralized check follows below). When such an approach to study management is applied, SDV serves as a root cause analysis tool when needed rather than a data comparison tool. We should always keep in mind that the goal of any monitoring activity – regardless of the tools used – is to protect patients’ safety, integrity and rights and to ensure data integrity. A proactive approach to study oversight increases efficiency by reducing the need for corrective actions.
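As one hedged illustration of the kind of centralized check such data mining can support, the Python sketch below compares each site’s variability in a reported measurement against the pooled study data and flags sites that look implausibly uniform. The synthetic data, the chosen variable and the 50 % flagging rule are illustrative assumptions, not a prescribed statistical monitoring method.

```python
# One possible centralized data check: compare each site's variability in a
# reported measurement against the pooled study data and flag sites that look
# implausibly uniform. Data, variable and thresholds are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Synthetic "CRF" extract: systolic blood pressure readings per site
df = pd.DataFrame({
    "site": np.repeat([f"S{i:02d}" for i in range(1, 7)], 40),
    "sbp": rng.normal(130, 12, 6 * 40),
})
df.loc[df["site"] == "S03", "sbp"] = rng.normal(130, 2, 40)  # suspiciously uniform site

pooled_sd = df["sbp"].std()
per_site = df.groupby("site")["sbp"].agg(["mean", "std", "count"])

# Flag sites whose spread is far below the pooled spread (possible data issue)
per_site["flag_low_variability"] = per_site["std"] < 0.5 * pooled_sd
print(per_site)
```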

7. How much money do I save by implementing a risk-based approach?

The RbM Consortium’s Position:
A risk-based management approach to quality in clinical trials will lead to savings in the long run. Currently there is no trustworthy evidence that any RbM approach has saved a substantial amount of money, although the occasional company claims some modest savings. We are convinced that the implementation of a risk-based approach to managing clinical trials, considering all aspects from QbD to RbM, will lead to clinical trials actually ending on time and on budget as planned. This will shorten the time to market launch, which is the ultimate saving in clinical development.

Depending on the complexity of a clinical study, the costs of monitoring (personnel, travel, expenses, etc.) can reach 25-30 % of the clinical trial budget [11]. For quite some time this line item has been a target for cost-saving exercises at many pharmaceutical and biotech companies, yet none has actually achieved the goal of reducing it. With risk-based monitoring, many now believe the time has come, and the tools are finally available, to reduce the monitoring budget.
Although certain tasks can now be performed centrally, there is still a need to visit the centers to ensure that patients’ safety and the integrity of the data are safeguarded. Therefore, the current monitoring effort, which is distributed rather evenly, will be shifted towards the sites in greater need: centers with many patients, centers inexperienced in conducting GCP trials, or centers with potential problems indicated through KRIs and central data analysis/risk assessment. This means we will see a shift in effort rather than a reduction of it. The positive result of this shift, however, is that the sites in need now get the attention and support necessary to help them deliver good results.
Savings in the long run will be realized when well-planned and well-implemented clinical studies shorten the timelines to database lock and ultimately to the launch of the product, which ensures a longer period of market exclusivity under patent. Taking a medicinal product with annual sales of 500 million USD and dividing those sales by 365 days shows that real savings of roughly 1.4 million USD per day are waiting for those who set up their studies in the QbD spirit.

8. How will Health Authorities react if they discover a major or critical finding that was not detected or not addressed through your risk management approach?

The RbM Consortium’s Position:
A critical finding – whether detected through internal audits or Health Authority inspections – is any process or data deficiency that evidences a breach of patients’ safety, integrity and rights or of data integrity, or that fails to demonstrate the absence of such a breach. Even though audits are only a snapshot of the current situation, if a critical finding is detected, the impact on the credibility and integrity of the study must be carefully assessed, and a sensitivity analysis should be performed. An RbM approach does not eliminate the possibility that audits or inspections will detect previously unknown critical deficiencies and GCP violations. However, by focusing on the areas where it matters, most deficiencies that would lead to a critical finding can be addressed in time, especially with leading KRIs in place.

We have discussed this aspect multiple times with Health Authority inspectors, and their feedback was clear and consistent with regulators’ messages about RbM: “errors in clinical trials are acceptable to regulators as long as, had we perfect data, we would still make the same decision and come to the same conclusion”. In other words, inspectors will continue to detect and report non-compliances; however, if inspectors and reviewers conclude that those non-compliances did not compromise patients’ safety, integrity and rights or data integrity, a trial/system/process will still be considered compliant in the sense that the goal of GCP (patients’ safety, rights and integrity, and data integrity) was reached. Obviously, even if such findings are not considered critical, the sponsor must still implement adequate CAPAs.
Additionally, a risk management approach has the advantage of formalizing knowledge and incorporating critical findings into the future risk-monitoring process, which leads to continuous improvement.

9. How does a risk-based approach impact my organization, i.e., study site, Quality Assurance, Data Management, Clinical Operations, Drug Safety, Biometrics, etc.?

The RbM Consortium’s Position:
The switch from the current traditional approach in managing a clinical study to a risk-based approach will affect all functions involved in clinical development activities. Study oversight activities by different functions must be aligned and consolidated in a comprehensive proactive Risk-based Study Management Plan. Addressing the change management tasks and challenges from the very start of a Risk-based Monitoring (RbM)/Quality Risk Management (QRM) project is probably the most critical “must do”. The other critical condition for success is the full and undivided support by senior management.

Our experience shows that change management activities around the development and rollout of an RbM/QRM strategy in study management and oversight are key to success. A “must do” is early, honest communication and proactive alignment of the entire organization with the plans, the goals and – most importantly – the organizational and people implications.
Unfortunately, still today, a silo approach to study oversight activities is the norm. Within a sponsor company, Clinical Operations develops monitoring plans, QA develops audit plans, and Clinical Science develops plans for the benefit–risk assessments. Resources are allocated by function and not in an integrated manner. For instance, the data management budget may allow developing an EDC and data management solution but include no resources for programming and maintaining KRIs. Similarly, the biostatisticians may be tasked to develop a Statistical Analysis Plan (SAP) but have no resources to develop programs for the early detection of outliers or abnormalities in the data structure. When CROs are involved, their wealth of insight about process efficiencies and deficiencies is rarely shared across clients to generate learnings and trigger process improvements.
Therefore, when a company decides to adopt an RbM approach, a holistic, integrated risk-based study management plan needs to be developed. It follows that all stakeholders need to work together to identify the required tools, activities and minimum tolerable error levels. The latter is nothing other than the definition of the “design space” for a given study and its justification with objective evidence. RbM requires a change in mindset and, therefore, change management introduced at the very beginning of an RbM project is key to its success.

10. What is the best implementation strategy for a risk-based approach?

The RbM Consortium’s Position:
A timely rollout strategy and plan are essential for the successful implementation of an RbM approach. Since the switch to an RbM approach affects all teams and all jobs of staff members involved in clinical development activities, exhaustive and honest information about the goals of the RbM strategy and its impact on the organization is critical. Communicating success stories is an effective means of showing that RbM works.

Our experience has shown that a systematic, stepwise introduction of an RbM approach, rather than a big-bang launch, is the best implementation strategy for such a project. Indeed, a big-bang strategy is likely to overwhelm an organization, underestimate the technical and organizational challenges and thus trigger resistance amongst the staff members concerned. A four-step approach has repeatedly proven to work best:

Step 1. Predictive Modeling and Empirical Analysis:
In the predictive modeling and empirical analysis phase, study teams need to (a) determine the clinical operations/business research objectives, and (b) conduct empirical research on data from numerous completed clinical trials to uncover factors that impact clinical trial risk and performance outcomes, for example factors that predict clinical trial failure. Once teams have identified risk factors, we advise them to conduct statistical analyses to determine analytical parameters that can be used as benchmarks and to predict future outcomes. After developing a portfolio of statistically linked factors and expected analytical outcomes, study teams can use these parameters to proceed to the proof-of-concept phase (step 2) and examine the application of these predictive models in real time.
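As a small, hedged illustration of this empirical step, the Python sketch below derives a benchmark threshold for one candidate KRI (hypothetically, query resolution time in days) from pooled data of well-run historical studies and then applies it to current data for one site. The data, the choice of KRI and the 90th-percentile rule are all assumptions made for illustration.

```python
# Small illustration of the empirical step: deriving a benchmark threshold for a
# candidate KRI (here, hypothetically, query resolution time in days) from
# completed studies with known good compliance. The data, the 90th percentile
# and the KRI itself are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

# Query resolution times (days) pooled from historical, well-run studies
historical_good = rng.gamma(shape=2.0, scale=4.0, size=5000)

# Use an upper percentile of the historical distribution as the KRI threshold:
# in ongoing studies, sites exceeding it would raise a signal for review.
threshold_days = np.percentile(historical_good, 90)
print(f"Benchmark KRI threshold: {threshold_days:.1f} days")

# Applying the benchmark to (synthetic) current-study data for one site
current_site = rng.gamma(shape=2.0, scale=7.0, size=60)
share_over = np.mean(current_site > threshold_days)
print(f"Share of queries over threshold at this site: {share_over:.0%}")
```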

Step 2. Proof of Concept (PoC):
In the PoC phase, the tools, methodologies and processes needed to implement a Quality by Design (QbD) and RbM approach are developed and applied in one to at most three studies, with teams that are eager to move to a modern RbM methodology. Involving people enthusiastic about RbM is essential, as it allows the team to overcome hurdles and setbacks without also having to fight skepticism. The PoC generates important learnings on how to implement RbM in the sponsor organization – specifically, how to address gaps, what the conditions for success are, and which limitations are imposed on RbM by external constraints. For instance, it can reveal limitations of current IT systems that cannot easily be fixed. The PoC also delivers “success stories” that can be shared within the sponsor organization in preparation for step 3. Typically, in the PoC there is little IT involvement and the tools developed and employed use simple technology, such as structured questionnaires built in spreadsheets.

Step 3. Pilot:
In the pilot phase, the tools, approaches and processes developed during the PoC are applied to a larger number of trials or to an entire development program in order to fine-tune the methodology and to start automating some of the tools developed during the PoC. The pilot should also involve teams that may be more skeptical about an RbM approach, so as to gain experience in managing the expectations of all types of team members. The pilot provides the roadmap for the full-fledged implementation of RbM in the sponsor’s organization. In this phase, there is heavy involvement of IT, Data Management and Biometrics to develop the QbD assessment, the KRIs and the tools for verifying the plausibility of trial data. In parallel, the new processes are finalized based on the learnings from the pilot phase, and the supporting organization is designed and introduced.

Step 4. Full Deployment:
In the full deployment phase, all trials managed by the sponsor will now use an RbM approach. Tools, processes and methodologies are continuously refined to integrate feedback from users. The new organization, or changes to the current one, are implemented to support the new procedures and to participate in the fine-tuning. The goal is to implement an integrated quality oversight strategy from study planning to regulatory submissions.

 

References:
[1] Tufts Center for the Study of Drug Development Impact Report “New Research from Tufts Characterizes Effectiveness and Variability of Patient Recruitment and Retention Practices”. 15 (1), January/February 2013.
[2] S. Young. Non-Enrolling Sites Come at a Price. Medidata Solutions, blog.medsol.com, July 2012.
[3] D. Handelsman. Optimizing Clinical Research Operations with Business Analytics, SAS Global Forum 2011, paper 204.
[4] K. A. Getz et al. Measuring the Incidence, Causes, and Repercussions of Protocol Amendments. Drug Information Journal 2011, 45, 265-275.
[5] J. S. K. B. Yadlapalli and I. G. Martin. Seeking Predictable Subject Characteristics That Influence Clinical Trial Discontinuation. Drug Information Journal 2012, 46, 313-319.
[6] A. Siddiqi, et al. Early Participant Attrition from Clinical Trials: Role of Trial Design and Logistics. Clin Trials 2008, 5, 328-335.
[7] S. Khonsari et al. Effect of a Reminder System Using an Automated Short Messaging Service on Medication Adherence Following Acute Coronary Syndrome. Eur J Cardiovasc Nurs. Online 2 February 2014.
[8] S. Prasad and R. Anand. Use of Mobile Telephone Short Message Service as a Reminder: the Effect on Patient Attendance. International Dental Journal 2012, 62, 21-26.
[9] Medidata. Medidata and TransCelerate BioPharma Inc. Announce Findings of Joint Research Initiative on Clinical Trial Site Monitoring Methods. Online Press Release 19 November 2014.
[10] U.S. Department of Health and Human Services, Food and Drug Administration (FDA): Guidance for Industry, Oversight of Clinical Investigations – A Risk-Based Approach to Monitoring, August 2013.
[11] E. L. Eisenstein. Reducing the Costs of Phase III Cardiovascular Clinical Trials. American Heart Journal 2005, 149, 482-488.