Therapeutic Product Immunogenicity Community

  • 1.  Strategies to mitigate ultra-low cut points

    Posted 10-01-2025 22:13

    Hello everyone,

    I would appreciate hearing your perspectives on strategies to avoid ultra-low ADA screening cut points. I have encountered discussions suggesting the use of a minimum cut point (for example, a cut point of 1.2 to incorporate the accepted CV allowance), even if that causes the FPR to fall below 5%, and leads to missing some low-titer, inconsequential ADA responses. Can anyone share their experiences/strategies on this topic?

    Additionally, are there any publications/guidances on this topic that I may have missed?

    Thank you

    Arkadeep



    ------------------------------
    Arkadeep Sinha, PhD
    Director, Bioanalytical Sciences
    Upstream Bio
    [email protected]

    Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
    ------------------------------


  • 2.  RE: Strategies to mitigate ultra-low cut points

    Posted 10-02-2025 21:27
    Arkadeep,

    The cut-point incorporates two values - the average S/N ratio of the cut-point samples, and the inter-subject variability of those values. The first thing I would ask myself is whether the NC is representative of the population. The average S/N should be very close to 1.0, once the outliers have been removed. (Most labs are using S/N rather than raw signal, to account for inter-run variability).
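    As an illustration of the two components described above (a minimal parametric sketch with hypothetical numbers, not any one lab's exact method; many labs use nonparametric percentiles or mixed-effects approaches instead):

```python
import statistics

def screening_cut_point(sn_ratios, z=1.645):
    """Parametric screening cut point from outlier-cleaned S/N values:
    mean + z * SD. z = 1.645 is the upper 5th percentile of a normal
    distribution, targeting a ~5% false-positive rate."""
    return statistics.mean(sn_ratios) + z * statistics.stdev(sn_ratios)

# Hypothetical negative population, centred near S/N = 1.0 as it should be
sn = [0.95, 0.98, 1.00, 1.01, 1.02, 1.03, 0.99, 1.04, 0.97, 1.05]
cp = screening_cut_point(sn)  # a low cut point, because variability is low
```

    Note how a tight, well-centred negative population mathematically forces a low cut point: the result here lands just above 1.05, not because anything is wrong, but because both terms are small.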

    Second, make sure your outlier removal strategy is not overzealous. The intent is to remove samples with pre-existing ADA, while retaining the natural variability of the negative population. A single round of outlier testing using boxplot with a fence of Q3+3.0*IQR will identify only extreme outliers. 
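    A sketch of that single-round boxplot rule (my own illustration with made-up numbers): only values beyond the Q3 + 3.0*IQR fence are flagged, so the natural variability of the negative population is retained.

```python
import statistics

def extreme_outliers(values, fence=3.0):
    """Flag only extreme high outliers beyond Q3 + fence*IQR,
    per a single round of boxplot testing."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    upper = q3 + fence * iqr
    return [v for v in values if v > upper]

# Hypothetical S/N values; one sample with likely pre-existing ADA
sn = [0.95, 0.97, 0.98, 0.99, 1.00, 1.01, 1.02, 1.03, 1.04, 1.05, 2.50]
flagged = extreme_outliers(sn)  # only the 2.50 sample is removed
```

    With a 1.5*IQR fence the same data would lose additional legitimate tail values, which is exactly the overzealous removal to avoid.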

    The idea of a minimum cut-point was first proposed by Kubiak et al (J Imm Methods, 2018). It is based on the observed intra-well variance of the NC. If your assay has excellent precision, then the minimum cut-point is going to be very low. I'm not sure an arbitrary 1.20 minimum cut-point would be scientifically justified.

    My final point is that very low screening cut-points are not bad per se. If the NC is representative of the cut-point samples, and the cut-point samples are representative of the study population, then you should see ~5% false positives. Excess false positives can be eliminated in the confirmatory assay. The problem comes when you have many confirmed positive results that are of no clinical significance. That is not really a problem of the cut-point.

    John Kamerud







  • 3.  RE: Strategies to mitigate ultra-low cut points

    Posted 10-04-2025 17:26

    Thank you, John, for your detailed explanation and the reference to Robert's paper. In my past experience dealing with ultra-low cut points, I have applied several of the approaches you mentioned, such as ensuring NC suitability and using S/N rather than raw signals. I have typically used 1.5*IQR for outlier removal, resorting to 3*IQR only when outliers exceed a certain percentage.

    With these methods, I have generally observed FPR within the recommended 2-11% in the screening assay. My concern, however, lies with the actual low-titer positives that I detect due to these ultra-low cut points (which, as you mentioned, are not really a problem of the cut point). In many cases, these true low-titer ADA positives lack clinical relevance. Given the recent discussions around the identification of only clinically relevant ADAs, and talk of changing the testing paradigm, my post was intended to question whether it might be time to reconsider the relevance of these ultra-low cut points.

    Thank you

    Arkadeep



    ------------------------------
    Arkadeep Sinha, PhD
    Director, Bioanalytical Sciences
    Upstream Bio
    [email protected]

    Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
    ------------------------------



  • 4.  RE: Strategies to mitigate ultra-low cut points

    Posted 10-06-2025 21:19

    Hi Arkadeep,

    I can't resist adding my 5 cents to this discussion. I strongly believe that it's time to re-think the immunogenicity testing paradigm and make some changes in accordance with roughly 20 years of industry experience with clinical immunogenicity data. We have become quite proficient at estimating the 95th or 99th percentile of population responses in the screening and confirmatory tiers. However, the billion-dollar question (my rough estimate of how much the industry has spent on cut point experiments and data analyses) is: are these cut points relevant to evaluating the immune response of study subjects? Here are some problems that I have with the idea of a cut point:

    • 1) Formation of ADA in an individual subject is independent of other subjects in the population. My immune system doesn't know and doesn't care about the other 95% or 99% of individuals selected for cut point determination. However, my positive/negative classification is going to be judged against the responses of other individuals.
    • 2) Individuals with the top 5% responses in the population are considered ADA-positive regardless of their responses post-baseline. At the same time, the bottom 95% of the population must increase their post-baseline responses to be considered ADA-positive. In other words, the potential for a false negative classification is < 95%. This is not because the cut point is too high; it's because we chose to apply a cut point.
    • 3) We like to assume that a cut point determined at baseline for drug-naïve subjects (whether pre- or in-study) is suitable for evaluating samples post-baseline. This is wishful thinking on our part, because study subjects are no longer drug-naïve post-baseline, and as a result of drug exposure, their plasma may undergo changes which are difficult to predict. Patients taking the drug are getting better (hopefully) while patients in the control group may be getting worse. There is no way of knowing whether the pre-dose cut point is suitable for the post-dose time points.
    • 4) Formation of ADA is a pharmacodynamic response (something that the drug does to our body). Perhaps we should re-focus our efforts from detecting binding events to monitoring changes in response over time. In other words, instead of identifying the lowest response indicative of binding (and over-interpreting it as ADA), look at the smallest change of response that is inconsistent with normal assay variability.
    • 5) We can recognize ADA by the time-course of response and the related impact on PK and PD. We don't really need a cut point for that. Most screening and confirmatory cut point values reported in the literature are around 1.3 and 33%, respectively, regardless of the assay platform, drug modality, indication, and even the species of the subjects. This tells me that these cut points are, for the most part, numbers we happen to like rather than something that describes a study population. I know that determination of cut points keeps us gainfully employed, but maybe it's time to stop the busy work and redirect our efforts to maximize benefits to our patients.
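    The change-over-time idea in point 4 can be sketched very simply (my own illustration, not Robert's published method; the CV and multiplier are hypothetical): flag a subject when the post-baseline response moves away from that subject's own baseline by more than normal assay variability would explain.

```python
def exceeds_assay_variability(baseline_sn, post_sn, assay_cv=0.10, k=3.0):
    """True if the fold-change from the subject's OWN baseline exceeds
    k times the assay CV, i.e., is inconsistent with noise alone.
    assay_cv and k are placeholder values for illustration."""
    fold_change = post_sn / baseline_sn
    return fold_change > 1.0 + k * assay_cv

# A 1.6-fold rise clearly exceeds 3 * 10% CV; a 1.1-fold rise does not.
rising = exceeds_assay_variability(1.0, 1.6)   # True
stable = exceeds_assay_variability(1.0, 1.1)   # False
```

    Note that no population cut point appears anywhere: each subject is compared only against their own baseline and the assay's precision.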

    This was more like 50 cents worth of my opinions, but I am very opinionated on this subject.

    Cheers!

    Robert



    ------------------------------
    Robert J. Kubiak, PhD
    Director, Head of Bioanalytical Science
    Third Arc Bio
    [email protected]

    Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
    ------------------------------



  • 5.  RE: Strategies to mitigate ultra-low cut points

    Posted 10-04-2025 10:14

    Hello Arkadeep,

    I'm posting this on the Immunogenicity board and the Bioanalytical board to cover both communication streams. Apologies to those of you who are reading this twice. John Kamerud and Robert Neely both share excellent points regarding ultra-low cut points. You should definitely check to make sure you aren't excluding too many outliers (I start paying attention above 10% and get nervous if >15% are excluded), and make sure your negative control is representative of the cut point individuals.

    The timing of your question is excellent, as I'm working on a collaborative paper with two CROs and BioData Solutions to discuss this topic in greater detail. Our conclusion is that ultra-low cut points are not bad at all: they don't lead to inflated rates of immunogenicity, and their runs don't fail more often than those with higher cut points. Outside the scope of the manuscript's discussion is the clinical interpretation of immunogenicity, and Robert Neely brings up a good point that reporting S/N can add nuanced information about the immune response that is superior to Positive/Negative/Titer.

    Here is a sneak peek of what is in the paper: we tabulated cut points from 185 assays; over half of the assays had cut points at 1.20 or lower, and a third had cut points at or below 1.10, thus ultra-low. We performed an in-depth analysis of ~12 ultra-low cut point case studies using various assay formats and drug modalities, and only two required in-study cut points. In both cases there were assignable causes for the ultra-low cut points, rather than low assay and biological variability: the first case was overzealous removal of outliers (>15%), and the second was a negative control that read above the cut point individuals' responses. We presented a portion of this data at EBF, so if you'd like a copy of the poster, please let me know.

    In short, ultra-low cut points were found to be quite robust and shouldn't cause undue concern for bioanalytical scientists, as long as an excessive number of outliers is not excluded and the negative control is representative of the population.

    Robert



    ------------------------------
    Robert Kernstock
    Principal Scientist
    Astellas Research Institute of America LLC
    Northbrook IL
    [email protected]

    Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
    ------------------------------



  • 6.  RE: Strategies to mitigate ultra-low cut points

    Posted 10-04-2025 17:46

    Thank you for reposting it, Robert. With the approaches of NC suitability and controlled outlier removal, I have generally observed FPR within the recommended 2-11% in the screening assay. As you mentioned, assay-wise there were no issues either. My concern, however, lies with the actual low-titer positives that I detect due to these ultra-low cut points. In many cases, these true low-titer ADA positives lack clinical relevance. Given the recent discussions around the identification of only clinically relevant ADAs, and talk of changing the testing paradigm, my post was intended to question whether it might be time to reconsider the relevance of these ultra-low cut points. I would love to have a copy of your poster.

    Thank you

    Arkadeep



    ------------------------------
    Arkadeep Sinha, PhD
    Director, Bioanalytical Sciences
    Upstream Bio
    [email protected]

    Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
    ------------------------------



  • 7.  RE: Strategies to mitigate ultra-low cut points

    Community Leadership
    Posted 10-05-2025 14:21

    This is a great point, @Arkadeep Sinha, and I think a lot of us are wondering how to establish a clinically relevant threshold, which in practice is quite similar to the diagnostic space, where you need to collect the data first to establish an appropriate threshold.

    If you are using a cut point factor, that gives you detectable incidence rather than clinically relevant incidence. Later you would still want to use the clinical data to establish which reportable result is potentially clinically relevant, historically titer but it could be S/N. Patients who exceed that threshold would be included in the clinically relevant incidence. The risk you face with a higher cut point factor is that you don't go into data analysis a priori knowing the distribution of the true clinically relevant positives, so a fixed false positive rate is used to reduce the risk of false negatives. There is no guarantee that the higher false positive rate results in a meaningfully lower false negative rate, however; it's just a trend that many analytes follow. Conversely, setting it higher for a lower false positive rate doesn't necessarily increase the false negative rate, but you have no way of knowing. I guess what I'm saying is that I agree with Robert K that it doesn't make a meaningful difference.

    Per Robert N's response in the Bioanalytical Community, the proposed approach of plotting all S/N data without the false binary is basically a cut point factor of 0: everything is reported, without even regard to an assay LOD. A key advantage here is that you have essentially maximized the false positive rate, since all data is included. In this context, there is no confusion about detectable incidence versus clinically relevant incidence, because you never calculate a detectable incidence. But it is harder to describe in the earlier stages, other than describing tentative lower bounds for clinically relevant thresholds for at least PK interference, which may be the only other data you have in Phase 1.

    Tying back to fit-for-purpose: if there are no safety consequences should ADA develop, and your drug concentrations are in the µg/mL range, you may be able to limit your detection with a higher threshold without risking missing anything clinically relevant. In this case, however, I might prefer to keep the false positive rate but not push assay sensitivity as much, in order to have a more robust assay. Perhaps setting a higher cut point factor would also accomplish this, but I find the former easier to do prior to sample testing.



    ------------------------------
    Joleen White Ph.D.
    AAPS 2026 National Biotechnology Conference Track Chair
    Bioanalytical 101 Course Development
    Senior Advisor
    BioData Solutions LLC
    [email protected]

    Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
    ------------------------------



  • 8.  RE: Strategies to mitigate ultra-low cut points

    Posted 10-06-2025 12:53
    I agree with Robert that low CPs are not a particular problem. It is important to distinguish the Cut Point from the Cut Point Factor, the latter being normalized to the plate NCs, and which value I believe Robert is presenting. I dare say, as I have repeatedly over the years, that the CPF can be <1, since the pool comprising the NC (as a single data point among the individuals, it can reside anywhere in the population distribution used to determine the CP) can have a value above the population of individuals that comprise the CP. In that sense the NC is in fact a "normalizing factor" and need not be truly "negative", i.e., below the determined CP. This makes most people nervous, but a CPF <1 serves exactly the same purpose, and just as well, as a CPF >1.
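    A toy numeric illustration of this point (entirely hypothetical signals, with a simple mean + 1.645*SD cut point for the sake of the arithmetic): when the NC pool happens to read above the individuals used to set the CP, the plate-normalized factor CP/NC falls below 1, yet classification against the CP itself is unchanged.

```python
import statistics

# Hypothetical raw signals from the cut point individuals
individuals = [80, 85, 88, 90, 92, 95, 97, 100, 103, 110]

# NC pool that happens to read above the individual population
nc_signal = 115

# Simple parametric cut point on the raw signals (illustrative only)
cp = statistics.mean(individuals) + 1.645 * statistics.stdev(individuals)

# Cut point factor normalized to the plate NC: here it lands below 1,
# because the NC is acting purely as a normalizing factor, not as a
# "truly negative" sample
cpf = cp / nc_signal
```

    A sample's classification is the same whether you compare its raw signal to cp or its S/N (signal/nc_signal) to cpf, which is exactly why a CPF <1 serves the same purpose as a CPF >1.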





  • 9.  RE: Strategies to mitigate ultra-low cut points

    Posted 10-07-2025 21:25

    Arkadeep, Lauren, John, Robert N., Robert Kubiak, Joleen, and Eric,

    Once again, I'm replying on both discussion boards. I'm so thankful for your initial post and all of the responses so far.  

    The publication on ultra-low cut points, as Lauren suggested, is targeted toward the bioanalytical scientist, to give them confidence that their cut point factor is solid (reliable and assay appropriate) as long as they assess the outlier removal and the NC response vs. the population. I couldn't agree more that the approach toward the assessment of unwanted immunogenicity could use a refresh. I love the idea of looking at S/N, which in most cases follows titer, as a means of teasing out the impact of ADA on pharmacokinetics, safety, and efficacy. Additionally, S/N has the advantage of a single sample measurement in the screening assay rather than going through the three-tiered approach, cutting down on unnecessary sample analysis costs and timelines. If I may be so bold, S/N is a surrogate for ADA concentration, just like titer is, and who knows, in the future immunogenicity assays may have a standard curve applied to them, resulting in relative ADA 'concentrations'. To be clear, immunogenicity assays are not PK assays; they are biomarker assays, and many of the challenges with biomarker assays regarding reference standards would apply to quantitative immunogenicity assays. I'm not advocating for such an approach, but it is one that could be taken.

    Now to address Arkadeep's original point about clinical impact. I would say that in the case studies we explored, the rates of immunogenicity were not elevated, but I can't say for certain whether those positive immunogenicity results were clinically impactful. My guess is they were not. I'd also say that immunogenicity assessments are almost always best understood with supplemental data, in retrospect. It would be nearly impossible to say that a cut point is appropriate before conducting the clinical study (see Joleen's comment). In my situation, and probably similar to what is being done with Lauren at Immunologix, you start stratifying the ADA responses with PK, efficacy, and safety data. For example, does an ADA S/N < 3 (or titer < 400) impact drug concentrations? If not, then you've got great supporting data to suggest that while 'low-level' ADA may be detected, below a certain threshold these ADA have no apparent clinical safety/efficacy impact. Then you can state things like: 20% of subjects developed ADA, but only 5% developed ADA that led to decreased efficacy.

    It is a shame that there is so much swirl when we report high rates of immunogenicity where the bulk of the data shows that they are not impactful (summarizing Lauren's comment). I've been on programs where immunogenicity results led to product termination when there were no data showing impact on efficacy or safety; nevertheless, the "rate of immunogenicity" was a marketing concern. I'm so glad that bioanalytical scientists (and AAPS) are having these discussions and publishing white papers and other examples demonstrating the use of alternative strategies to summarize the impact of ADA (e.g., S/N vs. titer, and the utility of confirmatory assays). I hope that smart science continues to lead us toward better outcomes.



    ------------------------------
    Robert Kernstock
    Principal Scientist
    Astellas Research Institute of America LLC
    Northbrook IL
    [email protected]

    Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
    ------------------------------



  • 10.  RE: Strategies to mitigate ultra-low cut points

    Posted 10-08-2025 22:40

    @Joleen White @Robert Kernstock @Robert Kubiak @Eric Wakshull @John Kamerud. Thank you all very much for your insightful perspectives on this topic. Lots of great points and different approaches to consider.

    Arkadeep



    ------------------------------
    Arkadeep Sinha, PhD
    Director, Bioanalytical Sciences
    Upstream Bio
    [email protected]

    Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
    ------------------------------