Hi Jason,
Thanks for raising a number of questions that I'm sure are on people's minds. I have long advocated the concept that immunogenicity is a biomarker, so I have thought a fair bit about these points.
I agree that we have no good way to measure the analytical variability of the true analyte in pre-study validation. We can, however, analytically characterize assay performance with regard to the PC, imperfect as that is. It at least tells you the analytical variability associated with measuring the same sample multiple times, and that is a starting place. The power of using all the S/N data from all time points (not invoking a cut point) is that full subject/patient profiles provide a much more comprehensive view of immunogenicity development over time. Additionally, placebo 'profiles' provide insight into the biological variability, both cross-sectionally and longitudinally, in the study population over the duration of the study, allowing for a better understanding of what magnitude of response exceeds the analytical and biological variability seen in the untreated population.
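To make that concrete, here is a minimal sketch of the kind of summary I have in mind. It is illustrative only; the values, column names, and the 95th-percentile rule for the placebo bound are assumptions for the example, not a prescribed method.

import numpy as np
import pandas as pd

def percent_cv(values):
    # %CV of repeat measurements (sample SD / mean * 100)
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean() * 100

# Pre-study: analytical precision of the PC across runs (hypothetical values)
pc_sn = [8.2, 7.9, 8.6, 8.1, 8.4]
print(f"PC %CV: {percent_cv(pc_sn):.1f}%")

# In-study: placebo S/N values across subjects and time points (hypothetical)
placebo = pd.DataFrame({
    "subject": ["P1", "P1", "P2", "P2", "P3", "P3"],
    "visit":   [1, 2, 1, 2, 1, 2],
    "sn":      [1.1, 1.3, 0.9, 1.2, 1.0, 1.4],
})
# One assumed way to express untreated-population variability: 95th percentile
placebo_bound = placebo["sn"].quantile(0.95)
print(f"Placebo 95th percentile S/N: {placebo_bound:.2f}")

# A treated subject's full S/N profile can then be read against that bound
treated_profile = pd.Series([1.2, 1.1, 4.8, 9.6, 12.3])
print(treated_profile.gt(placebo_bound))

The point is simply that PC repeats tell you about analytical precision, while placebo profiles tell you how much S/N movement to expect in the absence of treatment-related ADA.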
Your proposed hypothetical situation does not align with the real data sets I have evaluated. When the assay is analytically robust, as determined with PCs, I have observed similar analytical precision with study samples; most S/N repeat-measure analyses of study samples that I have undertaken report CVs <10%. In pre-study validation, you can use your PC precision as a starting place, and then in-study you can use your Pbo samples as a guide for incorporating biological variability. At the end of the day, what we are looking for is clinically relevant immunogenicity, and in every post-hoc case I've examined to date, complete S/N profiles from the screening assay have provided a clearer, more granular view of the development of ADA and differentiated meaningful responses from biological variability as well as from real, but low-level, clinically irrelevant responses. Additionally, unlike other biomarkers, clinically relevant immunogenicity appears, increases, and persists; these profiles are easily discerned from transient profiles and biological noise.
I'd approach each assay considering its context of use, demonstrate pre-study analytical precision to a degree that will meet that context, and then carefully evaluate my in-study data to assure myself that the assay is meeting the needs of its intended use. The 3-tiered paradigm already uses S/N to declare a sample 'positive', and then we are happy to report titers that are inherently imprecise, which many consider precise enough for most immunogenicity analyses. The good news here is that, even as currently designed, the screening assays perform well for S/N, and if we take the opportunity to look at the full data sets they offer, we already gain more insight. This is likely because we already develop these assays to meet S/N precision requirements for the PC at multiple levels.
------------------------------
Lauren Stevenson Ph.D.
Chief Scientific Officer
Immunologix Laboratories
Tampa FL
[email protected]
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
------------------------------
Original Message:
Sent: 12-05-2025 18:46
From: Jason Delcarpini
Subject: Analytical Variability of Reactive ADA Samples
I wanted to post this here because, as the paradigm for ADA assays shifts, something has been gnawing at me. I'm fully on board with moving away from the three-tier approach, and nothing I'm raising here is "fixed" by keeping it. But if we're now using language like "ADA are a biomarker," then I think there are analytical implications to that shift that I haven't seen well explained. Thank you to Robert for the recent presentation; it pushed me to try to make my thoughts more coherent.
Before ADA appears, the only analytical variance we can characterize is assay noise, because we have no relevant surrogate with which to measure the analytical variability of an ADA response. The mixture is too heterogeneous to represent meaningfully until it exists in a study sample. But once ADA appears, the assay is measuring an analyte; a mixture, yes (IgM, IgG, low affinity, high affinity), but that mixture is the analyte.
At that point the meaningful question becomes: how consistently can we measure that analyte?
I've heard the argument that because the analyte mixture may shift from day to day or timepoint to timepoint, that variability should be treated as biological. But that doesn't sit well with me if we are treating ADA assays like biomarker assays. With other biomarkers, we do not assume precision; we measure it using endogenous samples because they contain the analyte in the true matrix. ADA assays seem to be the only place where we go straight from "the signal is above assay noise" to "this might be biologically meaningful," without ever asking whether the measurement itself is stable enough to make that determination.
Consider a simple hypothetical. Take one ADA-positive sample, split it into identical aliquots, freeze them, and run them on different days. You might get S:N values of 5, 7, 10, 6, 9. The biology is unchanged, same antibodies, same complexes, yet the measured ADA response swings twofold simply because the assay was run on a different day. To me, that level of variation is potentially meaningful. If modelers receive a data transfer with an S:N of 5 vs 10, they may arrive at different conclusions as to what constitutes a clinically meaningful response. It is also important to point out that those divergent interpretations come entirely from analytical variation, repeated measures of the same sample.
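For what it's worth, here is a quick back-of-the-envelope on those hypothetical numbers (just a sketch, nothing more):

import statistics
sn = [5, 7, 10, 6, 9]        # the hypothetical repeat S:N values above
mean = statistics.mean(sn)   # 7.4
sd = statistics.stdev(sn)    # ~2.07 (sample SD)
cv = 100 * sd / mean         # ~28%
print(f"mean = {mean:.1f}, SD = {sd:.2f}, CV = {cv:.0f}%")

That works out to roughly a 28% CV, coming entirely from repeated measurement of the same sample.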
So if ADA is a biomarker and S:N is the quantity we intend to report, the three-tier approach and cut points are gone, but analytical rigor doesn't go away; it shifts. It is worth asking whether we are now responsible for understanding the uncertainty around the S:N measurement itself. If a sample is above assay noise, do we need to re-analyze it multiple times to understand the analytical variation around that sample before drawing PK, PD, or clinical conclusions? That is exactly how other ligand binding biomarker assays are handled: precision is established against the real (endogenous) analyte in the real matrix. Why would ADA be the exception? Because the analyte mixture changes from time point to time point? Because it's harder?
If we don't want to go to the level of reporting a CV or an SD from an n=6 for every sample above the assay noise, then I think the median of an n=3 would be a far more accurate S:N measurement, on average, than simply going with the S:N we received in the first test.
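If it helps, here is a rough simulation of that intuition. The lognormal noise model and the 20% analytical CV are assumptions I'm making purely for illustration, not a claim about any particular assay.

import numpy as np

rng = np.random.default_rng(0)
true_sn = 7.0                        # hypothetical "true" S:N of the sample
cv = 0.20                            # assumed analytical CV, illustration only
sigma = np.sqrt(np.log(1 + cv**2))   # lognormal sigma giving that CV

n_sim = 100_000
noise_single = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=n_sim)
noise_triple = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=(n_sim, 3))

singles = true_sn * noise_single
medians = true_sn * np.median(noise_triple, axis=1)

# Typical size of the error in each estimate of the true S:N
print("median |error|, single test:   ", round(np.median(np.abs(singles - true_sn)), 2))
print("median |error|, median of n=3: ", round(np.median(np.abs(medians - true_sn)), 2))

Under those assumed numbers, the median of three runs tracks the true value noticeably more tightly than a single run, which is all I'm suggesting.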
And just to be clear, the goal here is not to do extra work, but logically that seems to be the direction it is heading. Thoughts?
------------------------------
Jason Delcarpini
Director
Moderna
Cambridge MA
[email protected]
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
------------------------------