Dear Asma,
I see that you have a few options here depending on what samples remain and how the data is being used.
First, you could disqualify the analysts with higher variability on this method and retest the samples, using the analysts with better precision. This works if you have sufficient remaining sample volume.
Second, you can report the data with the actual ISR results for the original samples and the additional samples (but not the retesting of the ±50% subset, because that looks like testing into compliance). Although the ISR failed, you can report what the assay variability was in-study and ask the DMPK scientist whether it impacts their analysis. Much will depend on the data scatter: not just how many samples fall within a given percentage, but how different the individual values are. If the variability does not adversely affect the conclusions, a failed ISR may still be sufficient to meet the study needs. Going this route requires a deviation discussed and signed off by the study director.
When the root cause is analyst-attributed, it is often training. It is not uncommon for variability to increase in the testing phase relative to assay validation, simply because testing volume takes priority and attention to detail wavers as the assay starts to feel routine. Thinking of ways to combat this is a challenge. Retraining is a possibility, but performance can still slip afterward. Are there ways to reduce the mental and physical strain that comes from repeatedly performing the same task, so that focus can be maintained during key steps? Are there ways to incentivize strong technique? (Be careful not to disincentivize it by overloading your best analysts.)
------------------------------
Joleen White Ph.D.
AAPS 2024 Global Health Community Chair
Bioanalytical 101 Course Development
Senior Advisor
BioData Solutions LLC
[email protected]
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
------------------------------
Original Message:
Sent: 05-29-2025 10:52
From: Asma Ejaz
Subject: ISR in Preclinical study
Hello everyone,
We are currently facing an issue where we were unable to pass Incurred Sample Reanalysis (ISR) for a preclinical study, and I'm uncertain how best to address this gap from a documentation standpoint. I'd greatly appreciate any suggestions or experiences you could share.
An investigation was initiated to evaluate potential causes of variability, including pipetting technique and other technical factors. While the assay passed robustness criteria during validation, results from specific analysts exhibited higher variability. Additionally, most study samples were near the ULOQ or LLOQ, regions known to present higher ISR failure rates.
As a mitigation step, we retested ISR samples that were within ±50% of the original values and included additional samples for evaluation. Despite these efforts, we did not meet the 67% acceptance threshold (within ±30% difference). Of the total ISR results, only 61% fell within ±30%, and 69% were within ±35% of the original values.
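For context, the acceptance check described above follows the usual ISR calculation (as in ICH M10 for ligand-binding assays): the percent difference between repeat and original results, relative to their mean, must fall within ±30% for at least two-thirds of reanalyzed samples. A minimal sketch of that arithmetic, with purely illustrative function names and made-up concentration pairs:

```python
# Hypothetical ISR evaluation sketch; the formula is the standard
# ISR %difference, but all names and data here are illustrative.

def isr_percent_difference(original, repeat):
    # %difference = (repeat - original) / mean(original, repeat) * 100
    mean = (original + repeat) / 2.0
    return (repeat - original) / mean * 100.0

def isr_pass_rate(pairs, limit=30.0):
    """Fraction of (original, repeat) pairs whose %difference is within ±limit."""
    within = sum(1 for o, r in pairs if abs(isr_percent_difference(o, r)) <= limit)
    return within / len(pairs)

# Made-up (original, repeat) concentration pairs
pairs = [(100, 110), (50, 80), (200, 190), (75, 74), (10, 16)]
rate = isr_pass_rate(pairs, limit=30.0)
print(f"{rate:.0%} within ±30% -> {'PASS' if rate >= 2 / 3 else 'FAIL'}")
# -> 60% within ±30% -> FAIL
```

Widening the window (e.g. ±35%) or lowering the pass threshold amounts to changing `limit` or the `2/3` comparison, which is why it cannot be done after the fact without a pre-specified justification.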
Has anyone encountered a similar scenario? How did you document the ISR failure and justify the integrity of the data in regulatory or internal reports? Any input on best practices for addressing such gaps would be very helpful. Can we change the acceptance criterion from a 67% pass rate to 61%, or, instead of a ±30% difference, use ±35%?
Thank you in advance!
Best regards,