Therapeutic Product Immunogenicity Community

  • 1.  Options when In-study and Validated Cut points are different

    This message was posted by a user wishing to remain anonymous
    Posted 01-15-2025 10:57

    Hello all,

    I recently ran into an issue where the in-study and validation cut points were found to be different. Both the variances (by Levene's test) and the means (by t-test) for the study population were found to be different from the population used for the cut point calculation during validation. Additionally, the in-study cut point was calculated to be less than 1.
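    For anyone wanting to see how such a comparison is typically run, here is a minimal sketch using `scipy.stats`; the S/N values below are entirely hypothetical, chosen only so the two populations differ in mean:

    ```python
    from scipy import stats

    # hypothetical log-transformed S/N values, purely illustrative
    validation = [0.92, 1.01, 0.97, 1.05, 0.99, 1.03, 0.95, 1.00, 0.98, 1.02]
    in_study   = [0.80, 0.84, 0.78, 0.86, 0.82, 0.79, 0.85, 0.81, 0.83, 0.77]

    # Levene's test: do the two populations have comparable variances?
    lev_stat, lev_p = stats.levene(validation, in_study)

    # Welch's t-test (no equal-variance assumption): comparable means?
    t_stat, t_p = stats.ttest_ind(validation, in_study, equal_var=False)

    print(f"Levene p = {lev_p:.3f}, t-test p = {t_p:.3g}")
    ```

    A small p-value on either test (as for the means in this toy data) is the situation described above, pointing to the bottom-right branch of the Devanarayan et al. (2017) flow chart.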


    So, following the Fig. 8 flow chart from Devanarayan et al. (2017) (screenshot attached), we are in the bottom right section.

    • Option 2 does not work for us, as the new cut point will make the NCs screen positive and cause plate failure.
    • This leaves us with Option 1. We can screen different individual samples to confirm a lower response in the assay than the current NC pool and prepare a new pool from those.

    To reduce additional validation work, I was wondering if there is any precedent for using two different cut points on the same plate. That is, the validation cut point for the controls on the plate, and the in-study cut point for the samples? Given that this assay will be used for multiple indications, the goal is that if the other indications also look different, I do not have to keep changing NCs again and again. One issue I do see from not changing the NC (in this specific case) is with titration runs, where I will have to dilute the samples with the NC itself, and the signals might never cross the titer cut point.

    Your advice on how you have dealt with issues like this would be highly appreciated.

    Thank you





  • 2.  RE: Options when In-study and Validated Cut points are different

    Community Leadership
    Posted 01-15-2025 17:48

    Hi there,

    One of the assumptions for the NC is that it is representative of the population of interest. If you are seeing the study samples running (much) lower than the validation NC, then I would strongly recommend identifying/creating a new NC pool that is closer to the study population. This might require running numerous individual samples to identify those that are most representative of the study population before pooling (commercial pools might be skewed by a small number of individuals).

    Whilst it is theoretically possible to run the analysis with a negative cut-point factor, it could be questioned whether the day-to-day variation in signal of the NC tracks with the day-to-day variation in the study samples.

    On the point of titering, if you are not able to reproducibly dilute samples below the screening cut-point (SCP), it is common practice to set a titer cut-point (TCP) at the 99th or 99.9th percentile to move out of the noise. This is described in both the Devanarayan et al. (2017) paper and the Anti-Drug Antibody Validation Testing and Reporting Harmonization paper (Myler et al., 2022).
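    As a rough sketch of how a percentile-based TCP sits above the SCP, using synthetic drug-naive responses (the log-normal noise model and all values here are assumptions for illustration only):

    ```python
    import numpy as np

    # synthetic drug-naive responses: 200 log-transformed S/N values
    rng = np.random.default_rng(42)
    log_sn = rng.normal(loc=0.0, scale=0.1, size=200)

    # a screening cut point commonly targets roughly the 95th percentile;
    # a titer cut point at the 99th or 99.9th percentile sits above the noise
    scp = float(np.exp(np.percentile(log_sn, 95)))
    tcp = float(np.exp(np.percentile(log_sn, 99.9)))

    print(f"SCP ~ {scp:.3f}, TCP ~ {tcp:.3f}")
    ```

    The point is simply that the TCP, being a higher percentile of the same negative-population distribution, is always at or above the SCP, so titration endpoints are read against a threshold clear of routine assay noise.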

    Thanks,

    Rob



    ------------------------------
    Robert Nelson
    Scientific Officer, Senior Director
    BioAgilytix Europe GmbH
    Hamburg Germany
    [email protected]

    Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or other entities to which I am affiliated.
    ------------------------------



  • 3.  RE: Options when In-study and Validated Cut points are different

    Community Leadership
    Posted 01-15-2025 17:51

    There's also some discussion on in-study cut-points in the ADA Harmonization paper and some nice examples/figures (prepared by Devan...) for assessing when and when not to apply an in-study cut-point, or indeed a sub-population cut-point when you're dealing with multiple indications in a single trial.



    ------------------------------
    Robert Nelson
    Scientific Officer, Senior Director
    BioAgilytix Europe GmbH
    Hamburg Germany
    [email protected]

    Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or other entities to which I am affiliated.
    ------------------------------



  • 4.  RE: Options when In-study and Validated Cut points are different

    Posted 01-16-2025 11:40

    Hi,

    At my previous job, we used the validation CP for the plate controls, and for each indication population we applied a population-specific in-study CP. I have seen in-study CPs that were <1.0 (around 0.8). So this is not new, but I assume that the individual samples you used for NC pool preparation and validation CP screening did not have any pre-existing ADAs, and that you addressed the biological or statistical outliers among the CP individuals during validation.

    I'm not sure how to solve the titer issue, though. If you have a titer CP established separately from the screening CP during validation, just stick to that one?

    Also, to save you some matrix pool, we never use 100% pool to dilute the samples; we do the MRD dilution with assay buffer first, and then do further dilutions beyond the MRD with assay buffer containing x% matrix pool (pool diluted at the MRD).
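    The arithmetic behind that scheme is simple; a quick sketch (the MRD value of 1:20 is made up purely for illustration):

    ```python
    # Hypothetical titration scheme: dilute to the minimum required dilution
    # (MRD) in plain assay buffer, then make further serial dilutions in
    # assay buffer spiked with matrix pool at the MRD level, so every well
    # sees the same matrix percentage.
    mrd = 20                       # e.g. a 1:20 minimum required dilution
    matrix_pct_at_mrd = 100 / mrd  # 5% matrix present in every tested well

    # serial 2-fold titration beyond the MRD
    dilutions = [mrd * 2**i for i in range(6)]

    print(f"{matrix_pct_at_mrd}% matrix, dilution series: {dilutions}")
    ```

    Because the diluent beyond the MRD already contains matrix at the MRD level, the matrix content stays constant across the series, which is what keeps the matrix-driven background comparable from well to well while consuming far less pool.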

    Regards,

    Rachel



    ------------------------------
    Rachel Wang, Ph.D.
    Bioanalytical assay development lead
    Spark Therapeutics
    Philadelphia, PA.
    [email protected]

    Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
    ------------------------------



  • 5.  RE: Options when In-study and Validated Cut points are different

    Posted 01-17-2025 16:38

    Dear Anonymous,

    I'm afraid I don't have any advice for you that wouldn't require lots of tedious work. However, your conundrum underscores the weakness of the cut point idea as some threshold response that delineates ADA "negative" from "positive". There are a couple of reasons for this. First, a typical bridging assay format does not detect ADA – it detects species that can bind multiple molecules of the drug. It could be the drug target, the drug target in complex with some other protein(s), or it could be the ADA. And you don't need to know if the cut point is 1.06 or 1.60 to detect the telltale signs of ADA presence. All it takes is a look at the time course of the S/N response in your ADA assay and its relation to PK and PD measurements. It's rather preposterous to call a sample ADA-positive just because it gives a response ≥ cut point. The cut point has nothing to do with ADA; it's just part of a statistical description of the drug-naïve (and presumably ADA-negative) population.

    Which leads us to the second weakness of the cut point approach. Cut points are determined in drug-naïve commercial samples, which are very different from the study population. Study samples are not drug-naïve; they contain the drug, and they may also contain ADA. Additionally, if the drug has the desired biological activity, the study population is unlike any other; these are people who were previously sick and now are better because of the treatment. In other words, study samples are very different from anything you can buy from a commercial source. Perhaps the best example of the disconnect between the validation cut point and the study samples is the cut point for nAb assays. The nAb cut point is determined in a drug-naïve population, but the samples tested in the nAb assay come from a subset of patients in the drug-treated population who are ADA-positive. Believing that a validation cut point determined in drug-naïve samples is relevant to drug-treated and ADA-positive samples is just wishful thinking on our part.

    I could ramble longer, but I invite you and everyone in the community to re-think our approach to immunogenicity testing. We have lots of new data on ADA and its impact since the first white papers were published back in 2004 and 2008. It's time to examine and draw lessons from about 20 years of data. It seems to me that during assay validation we do a lot of work for little return, while in study we tend to waste patients' samples by subjecting them to multiple tests of low or no value. 

    Cheers!

    Robert



    ------------------------------
    Robert J. Kubiak, PhD
    Director, Head of Bioanalytical Science
    Third Arc Bio
    [email protected]

    Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
    ------------------------------



  • 6.  RE: Options when In-study and Validated Cut points are different

    Community Leadership
    Posted 01-20-2025 11:22

    @Robert, you make some great points, so I can't help chiming in here.  The response I hear most when I challenge the need for cut points is that we 'need to know what is positive or not'.  But, to your point, the cut point is a statistical calculation, not an arbiter of clinical significance.  This is where I think we have fallen down a bit of a rabbit hole chasing what we can theoretically measure vs. what we originally intended to measure.  As you point out, looking at the time course of S/N and its relationship with PK, PD and safety/clinical measures will identify relevant responses, while avoiding the swirl that often accompanies 'high' incidence (of clinically irrelevant responses).

    In every instance to date where I have reanalyzed my ADA screening data in this fashion, I have come to conclusions that are similar to, or more insightful than (mainly the latter), those achieved with the current paradigm.  This analysis is faster, less expensive, and provides greater depth of information.  While it can be difficult to effect a paradigm shift, I try to remind myself that this is why I chose science - because the answers change as we build larger data sets and iterate our thinking.



    ------------------------------
    Lauren Stevenson Ph.D.
    Chief Scientific Officer
    Immunologix Laboratories
    Tampa FL
    [email protected]

    Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
    ------------------------------



  • 7.  RE: Options when In-study and Validated Cut points are different

    Posted 01-24-2025 20:52

    Hi @Lauren. I sort of expected that you might chime in 😊, and we do need a community discussion on various forums. I understand people may be concerned that discarding the cut point would mean that we can no longer tell what's positive vs. negative. I agree that you know ADA when you see it, but when programmers program tables/figures/listings, what should they look at to "see" ADA as well? Examination of ADA time profiles is very informative at the subject level, but at the population level we still need to summarize immunogenicity results from hundreds or thousands of patients in language that can fit in a single paragraph on the prescribing label. I tried to put some ideas together, but at this point they are too garbled to share even in this informal setting.

    Need more data to play with.



    ------------------------------
    Robert J. Kubiak, PhD
    Director, Head of Bioanalytical Science
    Third Arc Bio
    [email protected]

    Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
    ------------------------------



  • 8.  RE: Options when In-study and Validated Cut points are different

    Community Leadership
    Posted 01-25-2025 09:18

    @Robert, this is another question I hear frequently.  At the end of the day (or maybe I should say at the beginning of the day?) immunogenicity is fundamentally a biomarker - it is a biological response to a therapeutic intervention.  The context of use is to determine what level of response has clinical relevance.  We manage biomarker data sets and describe and summarize them in clinical reports all the time.  The beauty of no cut point is that you can use all the data to inform your analysis.  The placebo population helps you define the true biological noise, and responses above that noise can be parsed as you see fit.  For biomarkers, people often start by looking at quartiles or tertiles of the response magnitudes and try to correlate those groups with clinical impact, but in truth one could look at any number of potential cut-offs and re-evaluate the data.

    Does this mean you can't just call POS or NEG out of the gate? Yes.  (But we know that the POS/NEG call is frequently misleading and inflates concerns over incidence without impact.)  Conversely, does it allow you to use a data-driven approach to identify meaningful responses? Also yes.  If one proposed to regulators that they were going to do a statistical calculation on the signal from known negative samples to identify a relevant biomarker, they'd get nowhere.  Identifying clinical relevance requires fully analyzing the data set.  Cut points simply cause us to put blinders on, look at only a subset of the data, and then over-emphasize the importance of the POS data regardless of magnitude.  My recent presentation at EBF included multiple case studies where the CP caused important data to be overlooked and erroneous conclusions about the overall study to be drawn, while also incurring unnecessary time, money and resources.

    So, in short, the data need to be handled as for any other continuous response/impact analysis, just as we do for PD and other biomarkers.  It's different from how we have reported and summarized immunogenicity data in the past, but it is not a process with which we are unfamiliar.  Ultimately, under the current paradigm, we are trying to get to clinical impact, which is typically described as "subjects with titers > x had impact y".  Here we have an opportunity to get there more quickly and with more granularity, with the conclusion instead being "subjects with S/N > x had impact y".

    But, in so many cases with modern drugs, we don't see any clinically relevant ADA or by bucketing POS vs NEG we miss insights because meaningful responses in a few subjects get lost in the noise of low level, irrelevant responses. Additionally, titer imprecision adds noise to the analysis.  We adopted the current paradigm for all the right reasons at the time - we were in new territory and didn't have data.  Now we are sitting on over two decades of data and in that same timeframe we have learned a lot about biomarker analysis.  Armed with that knowledge, I believe we can build a path forward that reduces complexity and confusion and delivers greater clarity.



    ------------------------------
    Lauren Stevenson Ph.D.
    Chief Scientific Officer
    Immunologix Laboratories
    Tampa FL
    [email protected]

    Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
    ------------------------------



  • 9.  RE: Options when In-study and Validated Cut points are different

    Posted 01-30-2025 21:18

    Hi @Lauren. I definitely agree that a cut point determined as the 95th (or 99th) percentile of the subject population can be misleading. Subjects in the top 5% of the population are always going to be marked as "positive", while subjects in the bottom 5% may start seroconverting but we ignore them because they remain below the population cut point. Sure, we can start removing "outliers" to avoid such situations, but then we end up with super low cut points, not to mention that we don't remove subjects from a study just because they look funny in our assay.  If I am a patient, or a doctor taking care of the patient, I don't really care about other individuals in the population - I'm only interested in how one particular patient responds to the treatment.

    However, I think there may be some value in a cut point defined as the "limit of blank", i.e., the highest response that can be explained by the analytical variability of the assay. Previously, we called this a "minimum cut point", or the cut point that would be obtained if the same negative sample were tested multiple times. It appears that many cut points reported in the literature are below the minimum cut point, which means that random assay noise can end up as ADA-positive.
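    A minimal sketch of that "minimum cut point" idea, using the common parametric one-sided 95% bound (mean + 1.645 × SD) on replicate NC measurements, in the spirit of the limit-of-blank convention; all values are hypothetical:

    ```python
    import statistics

    # repeated S/N measurements of the same negative control (hypothetical)
    nc_runs = [0.98, 1.03, 1.01, 0.97, 1.05, 0.99, 1.02, 1.00, 0.96, 1.04]

    mean_nc = statistics.mean(nc_runs)
    sd_nc = statistics.stdev(nc_runs)

    # "minimum cut point": the highest response explainable by assay noise
    # alone, here the mean plus 1.645 standard deviations (one-sided 95%)
    min_cut_point = mean_nc + 1.645 * sd_nc

    print(f"mean = {mean_nc:.3f}, minimum cut point = {min_cut_point:.3f}")
    ```

    Any validation cut point that lands below this bound would classify pure inter-assay noise as positive, which is exactly the concern raised above.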

    We could combine the two ideas above; for each individual subject we are interested in the post-baseline changes of S/N that cannot be explained by the normal inter-assay analytical variability. I think this is close to what you propose (as I understand it): the pattern of signal changes over time analyzed in context of PK and PD for every subject.

    For me, the difficult part is providing a description of the ADA response for each subject and for the whole population. For example, when we collect PK data, we are interested in Cmax, tmax, AUC, t1/2, and other well-defined parameters which can be pre-specified in the statistical analysis plan. I have no idea what ADA parameters are going to be of interest; is it maximum signal, time to the maximum response, duration of response? I always liked the idea of an AUC for ADA, i.e., S/N multiplied by time. This AUC should be proportional to the total amount of ADA generated during treatment. The problem with this idea is that two subjects with the same AUC could have different "shapes" of response, not to mention ADAs with different affinity, avidity, class, etc. In short, that's not going to be easy ☹
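    For what it's worth, the S/N-times-time idea is just a trapezoidal integral of the baseline-corrected time course; a toy sketch with a made-up subject profile:

    ```python
    # hypothetical per-subject time course: sampling weeks and S/N readings
    weeks = [0, 4, 8, 12, 16, 24]
    sn    = [1.0, 1.2, 2.5, 3.1, 2.0, 1.1]

    # baseline-correct, then integrate with the trapezoidal rule; this is
    # (roughly) proportional to total ADA burden over the study, but, as
    # noted above, it is blind to the shape of the response
    corrected = [s - sn[0] for s in sn]
    auc = sum((corrected[i] + corrected[i + 1]) / 2 * (weeks[i + 1] - weeks[i])
              for i in range(len(weeks) - 1))

    print(f"baseline-corrected ADA AUC = {auc} S/N-weeks")
    ```

    A transient spike and a sustained low-grade response can produce the same number here, which is exactly the shape-degeneracy problem described above.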



    ------------------------------
    Robert J. Kubiak, PhD
    Director, Head of Bioanalytical Science
    Third Arc Bio
    [email protected]

    Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
    ------------------------------