HCP Assay Development: Managing Risks with Evolving Technologies

Host-cell proteins (HCPs) are major impurities of concern in biomanufacturing. When present in drug formulations, they can reduce efficacy (by compromising product stability), introduce toxicity, and increase a recipient’s risk for long-term immunogenicity. Understanding HCP profiles and integrating effective removal strategies are important parts of developing new biological drugs — to fulfill regulatory guidelines and to ensure patient safety through product quality.

HCP populations can be both complex and structurally diverse, and some changes in upstream culture conditions can affect their concentrations and thus influence how control strategies work. Accurate and reliable HCP quantitation is essential to monitoring the effects of process adjustments and for optimizing purification steps to ensure adequate removal of impurities.

Removal of HCPs from drug substances is critical to manufacturing high-quality drug products. It is predicated on thorough analysis of HCP contaminants, which vary considerably among expression systems and, to a lesser extent, with culture-media components and process parameters. A well-developed, broadly reactive, and qualified HCP immunoassay is vital to such analyses for demonstrating process consistency and final drug-substance purity.

How well a particular assay recognizes all HCPs depends on how well its antibodies recognize the actual HCP profile: Commercial “generic” enzyme-linked immunosorbent assay (ELISA) kits can differ substantially in their ability to detect similar types and levels of HCPs. Generic immunoassays are based on polyclonal antibodies raised against HCPs from the cell line cultured in a generic process. Such antibody preparations ideally would react with all potential HCP impurities, but no single assay can suffice in all cases. The reactivity of the antibodies depends on how the antigen was prepared to induce them, the immunization method used, and how the antibodies were purified. In-house–developed platform and process-specific assays are necessary, but because of the high risk of drug failure, most companies put off establishing them until later stages of product development.

In chapter <1132> “Residual Host Cell Protein Measurement in Biopharmaceuticals,” the US Pharmacopeia (USP) recommends combining two-dimensional (2D) differential gel electrophoresis (DIGE) with Western blot or immunoaffinity approaches as complementary methods. Meanwhile, many laboratories are implementing liquid chromatography with mass spectrometry (LC-MS) workflows for HCP analysis. Understanding the available options can help biopharmaceutical analysts implement a coverage strategy that optimizes accuracy, minimizes risk to patient safety, and reduces the likelihood of product-approval delays.

Denise Krawitz is principal consultant at CMC Paradigms in California. With over 20 years of strategic and technical chemistry, manufacturing, and controls (CMC) experience, she has served in multiple technical and team leadership roles at Genentech, BioMarin, and Ambrx. Krawitz holds a doctor of philosophy in molecular and cell biology from the University of California at Berkeley. She also studied protein folding at the University of Regensburg in Germany on a Fulbright fellowship.

In January 2021, we discussed a number of topics and trends in HCP assay development. What follows is the core of our conversation.

The New Cool Tool
ELISA is the established “gold standard” for HCP analysis, but in recent years I’ve heard a lot of talk about MS as a more rapid and automatable option. Is that a replacement trend or more of an orthogonal development? MS and ELISA are very orthogonal technologies — and neither one is perfect. We’ve been talking for many years about the imperfections of ELISA when it comes to coverage, standards, and whatever. We know it’s not a perfect method, and it’s unlikely ever to be. And although MS is the shiny new tool in our laboratories, it’s actually not that new a technology. I’ve been using it for HCPs since 2002, when we were using the mouse genome to try to find Chinese hamster ovary (CHO) cell proteins. MS has become much more accessible, sophisticated, and powerful over the past 10 years.

What major changes have driven that improvement? The sensitivity of MS improves logarithmically every few years, the software is getting much better, and the genome databases are improving. But what people don’t talk about enough is that MS is also not perfect. It does not detect everything, and there are issues associated with sample preparation and with quantitation. Generally, it is a fantastic tool. I would never run an HCP program now without using MS. But we need to remember that it has its own inherent biases and weaknesses. Every method has its own bias. The reason I don’t think MS will replace ELISA is that one works very well in quality control (QC) laboratory workflows, and one does not. I wouldn’t move to a challenging method that is imperfect over a substantially easier method that is also imperfect. That said, I will continue to use MS; I’m just not a fan of putting it into a control system.

You would hope that their imperfections are complementary. And they are. It also kind of depends on the stage of development. By the time you get to a commercial stage, you need your HCP ELISA to function more for process control than product purity. So MS can be more important as a characterization tool at later stages in development and commercialization.

USP is working on new standards to support MS analysis of HCPs. Do you know what stage that project is at now? I think the effort at USP is not directed at getting MS into control systems; it’s about standardizing and recommending best practices for identification, characterization, and quantitation of HCPs. They are going to be writing a chapter numbered above <999>, which means that it is not something people must do. Chapters numbered 1000 and above are considered guidances and best practices, so even the HCP chapter <1132> is not enforceable.

Note that things are different in Europe with the European Pharmacopoeia. If one of its chapters gets referenced in a monograph, then it does become enforceable. That has happened in Europe for HCPs, but in the United States, it’s still just guidance.

The topics the USP is considering for the new chapter are quantitative/qualitative analysis and relative abundance, as well as some standardization on sample preparation, which actually can make a big difference in the results that you get from MS. Standards and data acquisition/processing are both major topics.

Think of this type of standard as an assay calibration: If you’re trying to use MS to quantify total HCPs, then how do you do that? If you’re using it to quantify individual proteins, then how do you do that? The first question is difficult to answer, and there are some great methods being considered — also imperfect, but they’re very good, and I use them. (Editor’s Note: See the article by Sjöblom, Duus, and Amstrup in this month’s issue for discussion of a biolayer interferometry method.) If you know you have a problem with a particular HCP, and you want an MS assay for that, then you can develop a standard method specific to that individual protein. Such a highly specific quantitative assay could end up in a control system. As far as generic HCP assays go, I don’t see MS replacing ELISA, but for individual proteins — or maybe a certain small subset of HCPs — that could be possible.

As for physical standards, it’s generally recognized that the ideal would be intact proteins. Each such protein would go through the same preparation process as your sample. But it’s very hard to generate intact-protein standards for all these HCPs. USP is starting with peptides, which are easier to standardize and distribute globally. They’re good for quantification in an MS method, but using them as standards won’t account for sample preparation. So you have to assume that you get 100% HCP recovery from the sample digest — because if you don’t, then you’re underquantifying HCPs in your sample. If you’re using peptides to quantify your HCPs, then you’re quantifying only what you have recovered, so you’d better be able to show that you get good recovery. In practice, recovery can be very different from peptide to peptide. Theoretically, the stoichiometry of all peptides in a protein should be exactly the same. But methods are imperfect, and we rarely observe that with our MS methods.
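
To make that recovery caveat concrete, here is a minimal Python sketch of quantitation against a spiked heavy-peptide standard. Every name and number in it is hypothetical (it is not from any USP or validated method); it only shows how assuming 100% digest recovery understates the true HCP level.

```python
# Hypothetical example: quantifying one HCP against a spiked heavy-peptide
# standard. All values are illustrative, not from any validated method.

def hcp_ng_per_mg(light_area, heavy_area, heavy_fmol_spiked,
                  protein_mw_da, product_mg, digest_recovery=1.0):
    """Estimate an HCP level in ng per mg of product (ppm) from a
    light/heavy peptide area ratio, optionally corrected for the
    fraction of protein actually recovered through the digest."""
    # fmol of endogenous (light) peptide, assuming equal response factors
    light_fmol = (light_area / heavy_area) * heavy_fmol_spiked
    # correct for incomplete digestion/recovery (1.0 assumes 100%)
    light_fmol /= digest_recovery
    # peptide fmol -> protein ng: 1 fmol of protein weighs MW * 1e-6 ng
    protein_ng = light_fmol * protein_mw_da * 1e-6
    return protein_ng / product_mg

# Same measurement, two recovery assumptions: assuming 100% recovery
# reports ~6 ppm, but if true recovery was 60%, the level is ~10 ppm.
print(hcp_ng_per_mg(2.0e5, 1.0e5, 50, 60_000, 1.0))
print(hcp_ng_per_mg(2.0e5, 1.0e5, 50, 60_000, 1.0, digest_recovery=0.6))
```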

There is a lot of value to peptide standards and having some standardization across the industry. Using a given method, I know I can detect this peptide at this dilution or at this quantity, or whatever. There is a ton of value to that. But generally speaking, there are weaknesses with only using peptide standards to quantify HCPs.

And around the world, people’s knowledge and approaches to sample preparation vary widely. Absolutely. If you only talk to MS people, they’ll tell you it’s fine. But integrating the technology into the overall biology of a system can be a challenge. Some MS folks are really good at both, and I treasure those people.

Managing Data and Risks
Data analysis appears to be an important aspect of LC-MS workflows for HCPs. What databases and software are most useful? There are a lot of good software packages out there. Many are tied to the MS platform: If you buy from Waters, then you get the Waters system; if you buy from Agilent, you get the Agilent system; and so on. Other packages aren’t tied to a system, so you can analyze all your data, then export them and pull them into a different package. There are many ways to analyze the data.

I’m no expert in this MS methodology, so I don’t use those packages directly. But I know that the way you set up search parameters and threshold levels for hits and false positives can be very important. Many people say, “Oh, I analyzed a sample, and I got 5,000 hits.” And I doubt that. How many of those are false positives? You want to kill your CMC team? Give them a false positive.

What’s the next step when you’re trying to figure out what’s a false positive? Well, databases can be messy. The same protein can have different names and multiple entries, so the database often is the weak link in trying to get a meaningful list of HCPs. You tell me you found 5,000 things, and honestly, I don’t believe you. When I’ve gone through and curated those kinds of results, I’ve found database errors and demonstrated that a couple of times by cloning genes from the CHO cells, then sequencing them.

Sometimes it’s an error, sometimes it’s a genetic variation, and both of those things can be true. In at least one case, I saw a frameshift and was able to contact Kelvin Lee (at the University of Delaware), who does a lot of genome maintenance. He went back and looked at the original sequencing data, and yes, in fact, it was ambiguous and an error.

That was a while ago, by the way. It’s getting better every day. But it’s important to understand what database you’re using and how you’re sorting through and identifying hits. Some people are doing great work on this, and we always have a focus on it at the Biopharmaceutical Emerging Best Practices Association (BEBPA) meetings. Martha Staples and Frieder Kroener, for example, are well versed in parsing what you can crank out of a software package from what’s meaningful for your team. Those are often different answers.

What do people do when suspecting false positives or trying to deal with them? What’s the next step? First, determine how many peptides you have identified from a given protein, going back to tandem mass spectrometry (MS-MS) to see what the quality of the sequence was and to make sure it’s correct. Make sure that the sequence is tied unambiguously to a given protein. There are also sensitivity controls in our laboratories, making sure that you’re not getting carryover from sample to sample. Those things are just good laboratory practices and part of being a diligent scientist — not necessarily trying to crank out the biggest list possible, but rather trying to understand what’s relevant for your product and your process.
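
To illustrate that triage, here is a short Python sketch with an invented hit list; the field names and thresholds are placeholders, not any vendor’s output format. It applies the two filters described above (multiple unique peptides, adequate score) and collapses duplicate database entries for the same protein.

```python
# Hypothetical triage of an MS search-engine hit list. Thresholds and
# records are invented for illustration only.

hits = [
    {"accession": "A0A3L7", "name": "Clusterin",  "unique_peptides": 6, "score": 88.1},
    {"accession": "G3H2B1", "name": "clusterin",  "unique_peptides": 1, "score": 21.5},
    {"accession": "Q9EPH8", "name": "PLBL2",      "unique_peptides": 4, "score": 64.0},
    {"accession": "X1Y2Z3", "name": "Unnamed 17", "unique_peptides": 1, "score": 12.3},
]

MIN_PEPTIDES = 2   # single-peptide hits are prime false-positive candidates
MIN_SCORE = 30.0   # engine-specific; verify against MS-MS spectra regardless

curated = {}
for h in hits:
    if h["unique_peptides"] < MIN_PEPTIDES or h["score"] < MIN_SCORE:
        continue  # park for manual MS-MS review instead of reporting
    key = h["name"].lower()  # merge duplicate entries for the same protein
    if key not in curated or h["score"] > curated[key]["score"]:
        curated[key] = h

for h in curated.values():
    print(h["accession"], h["name"], h["unique_peptides"], h["score"])
```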

At the same time, you don’t want to miss anything, right? You’d rather have a large list that you can pare down than a very small list that’s clearly missing something. Ideally, you want to be somewhere in between. I’ve seen spreadsheets with 5,000 proteins on them, and you could sit there for weeks trying to sort through them and figure out what’s relevant, and that’s awful.

A false negative would be something that’s there but not detected. So the question is whether you didn’t detect it because it’s not there, or because it’s not at a high enough abundance, or because there’s a limitation in your method.

How do you define high-enough abundance? It’s not the same for every protein. Limit of detection is not going to be the same for every molecule. At some point, you have to invoke practicality. We could push technology and find a protein that’s present at 0.1 ppb. But is that important for my product or process just because the MS can tell me it’s there? Unfortunately, it’s not that simple. It depends on the patient population, indication, dose level, and all those risk-assessment considerations (1–4). But we don’t need to push our technology past the point at which it’s clinically meaningful to the safety, efficacy, and stability of a product.

The practical approach some people are using is to develop standards for a mixture of 50 or so proteins, then show that their methods can detect pretty much any protein out of that standard panel down to 10 ng/mg of final product. If you can say, “Here is a universal protein standard of 48 different proteins (with a wide range of molecular weights and pI levels), and I can detect every one of those down to 10 ppm when spiked into my antibody,” well, that’s pretty reasonable. That’s a practical way to define a limit of detection (LoD) for the method.
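
A minimal sketch of that panel check follows; the panel membership and the “results” are invented. The point is simply that every member of the spiked standard must be seen at the 10-ppm challenge level, or the claimed practical LoD does not hold panel-wide.

```python
# Hypothetical check that every protein in a 48-member spiked standard
# panel was detected at the 10 ppm (10 ng/mg) challenge level.

panel = ["protein_%02d" % i for i in range(1, 49)]
detected_at_10ppm = set(panel) - {"protein_07", "protein_31"}  # pretend results

missed = [p for p in panel if p not in detected_at_10ppm]
rate = 100.0 * (len(panel) - len(missed)) / len(panel)

print("Panel detection at 10 ppm: %.1f%%" % rate)
if missed:
    # Any gap means the claimed LoD does not hold across the panel.
    print("Not detected at 10 ppm:", ", ".join(missed))
```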

That brings biosimilars to mind — how you want them to be as much like the originators as possible. But now technology has advanced to the point that you can find things or do things in your process that might be even better than the originator could do. So how “similar” do you have to be? Well, for HCPs with biosimilars, it’s actually pretty clear: You do not have to be the same. Given the length of patent protection, a lot of originator products were made with older technologies and are just “dirtier” — but that is only becoming obvious now. There are a few published cases (5–10) comparing HCPs in an originator product and a biosimilar. The biosimilar always has fewer HCPs, and sometimes the profiles overlap. That becomes another data point indicating that the HCPs copurifying with your product probably have as much to do with your product as with your process.

At the online BEBPA conference in October 2020, one MS company was able to show some data on HCPs found in commercial products. I think they’ve done that work with clients working on biosimilars. I expect biosimilars to be cleaner than their original counterparts. And when companies overhaul the production or downstream process for a legacy product, it also needs to end up cleaner. You don’t have to match HCPs, but the profile can’t be worse.

The more you can do, the more you have to do. This indicates why CHO is so dominant and probably will remain dominant for some time: because we know so much about it. We know a lot more about its HCP makeup than just about any other cell line. Any time you want to try something new, you need standards to compare it with. We have a long history of giving millions of doses of CHO-derived biologics to humans, with an excellent safety record especially when it comes to HCPs. Only a handful of case studies have shown clinical issues related to HCPs. Some of this is because the industry takes this subject seriously and does the work to remove them, and we’re getting better at doing that over time. And with the doses administered in clinical trials, it’s hard to attribute any safety events to HCPs at all.

Safety issues can depend on the expression system. Consider Escherichia coli, for example. Our immune system is primed to detect certain E. coli proteins, so those might be a little more concerning if they show up in a product.

It’s impossible to think about a safety profile without putting it into the context of a full risk assessment: Where does a given HCP come from? Who’s getting the final product? Is it going into a pregnant woman, or a baby, or a 90-year-old cancer patient? Is it going into an immune-suppressed population or an immune-activated population? There are many questions to think through. Risk assessment is difficult. But our MS specialists are getting better all the time, and we’re continuing to name specific HCPs. We have to learn how to manage HCP risk when we know the names of the HCPs in biologics. There’s probably not a single cell-derived product that is 100% free of them.

It seems impossible. That’s right. As our technologies get more sensitive, we’ll get better at detecting those things. Then the question becomes, “How do I know this is okay?” And that’s a hard question to answer. One of the first times I had to deal with that was with the issue of PLBL2 [hamster phospholipase B-like 2] at Genentech (11–13). We had to work with toxicologists and our clinical team to understand the full context of what was going on with that protein.

For More Discussion
The BioPharmaceutical Emerging Best Practices Association (BEBPA) is a nonprofit organization created to serve as an international forum for discussion of scientific issues and problems encountered in product and process development. The group provides a platform for industrial scientists to discuss common technical problems, suggest potential solutions, and openly discuss their merits. BEBPA conferences include two to three days of presentations, workshops, and round-table discussions on topics of current interest to the analytical biopharmaceutical community. BEBPA’s annual bioassay conferences will be 22–25 March 2021 (virtual) and 22–24 September 2021 (Rome, Italy). The annual host-cell protein conference will be virtual this year, 17–19 May 2021. Find more information online at https://www.bebpa.org/conferences. On the same website, you can access presentation abstracts from past conferences as well as a number of white papers, survey results, and other resources.

Setting Specifications
We talked a little bit about specification limits. What’s the latest thinking? There tend to be what I would call “generic” limits that people apply particularly in earlier clinical development, usually 100 ng/mg (100 ppm). We reference that as the acceptable number. But if you look back to where that number came from, someone just made it up. It’s been working for decades, but it’s important for people to understand that there was no scientific basis for establishing that as a generic limit.

So there could have been other arbitrary numbers back in the day that didn’t pan out very well? That’s right. We’re talking about CHO-based products. It’s hard to say anything about the safety of HCPs without taking context into account. It is different for E. coli, though many people use 100 ppm for that too. But I’d want to do a little deeper digging with MS before I said 100 ppm was okay — make sure nothing in there looks scary. Anyway, I don’t see a lot of pushback on that 100-ppm number. Even though it’s based on nothing, the industry has tons of experience to back it up.

As you go toward commercial manufacturing, those generic limits don’t apply anymore. Some people argue that they can. But commercial-stage limits need to be tied to the product quality used in clinical trials, in which safety and efficacy were demonstrated. With a typical monoclonal antibody, you’re unlikely to have more than 10 ng/mg in clinical materials. For a commercial specification, I wouldn’t go as high as 100 ppm. If you always run your commercial process below 10 ng/mg, and suddenly you see a batch at 80 ng/mg, then you know something happened with your process. And that’s when you need your HCP ELISA as a process-monitoring tool. I wouldn’t release that batch without understanding what happened with my process. It would be crazy, if all of your clinical experience is really low, to set your specification up high.

Nonetheless, some people try to do it. I think that at the biologics license application (BLA) stage, your specs need to be tied to clinical experience. This is where you pull in your biostatistician for better understanding.

At that stage, you should have lots of data to analyze. That’s right. And you can set a limit based on a confidence interval, for example, but what you really want is to know whether your process is delivering something unexpected. Then you will want to investigate that. You can set an alert limit and acceptance criteria or just acceptance criteria. Say you’ve never had more than 10 ng/mg, then maybe you want to set an alert limit at 15 ng/mg. So if you see something, you’ll at least look at it before product gets released. There needs to be some mechanism that shows when the process is delivering something unexpected.
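
For illustration only, here is a trivial Python sketch of such an alert limit using mean plus three standard deviations on invented batch history. A real program would have a biostatistician choose and justify the model (a tolerance interval, for example).

```python
# Hypothetical alert limit from historical release data (ng HCP per mg
# product). Batch values and the 3-SD rule are illustrative only.

import statistics

batch_history = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.8, 4.1, 5.2]

mean = statistics.mean(batch_history)
sd = statistics.stdev(batch_history)
alert_limit = mean + 3 * sd

print("Historical mean: %.1f ng/mg" % mean)
print("Alert limit (mean + 3 SD): %.1f ng/mg" % alert_limit)

new_batch = 80.0  # the unexpected result described above
if new_batch > alert_limit:
    print("Batch at %.0f ng/mg exceeds the alert limit: investigate "
          "before release." % new_batch)
```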

Does that relate to design space? So you have different levels from ideal to OK to acceptable to unacceptable? Absolutely. When people do those process validation studies to define that design space, often I’ll say, “Do a worst-case linkage study. Find the worst possible parameters for purity and what your HCPs look like.” That design space investigation for HCP clearance also can be a part of your biostatistician’s analysis. The important thing to say about setting specifications is that your commercial limits need to be tied to clinical experience; they cannot be based on some platform.

Assay Coverage
To determine acceptable coverage levels for ELISAs, laboratories use 2D Western blotting, immunoaffinity chromatography (IAC), and dye-based 2D DIGE. What are the main concerns in this kind of assay qualification effort? It’s for ELISA characterization. You’re asking how good the antibodies are. I’ll simplify: Let’s say that at harvest there are 100 HCPs. How many can my antibody detect? If only five of them, then I have 5% coverage and a terrible monitoring tool. If my antibody detects 95% of them, I’m detecting almost everything coming through, so I probably have a pretty good monitoring tool.

An ELISA monitors only a subset of the HCP profile — hopefully a large subset, but still only a subset. It’s a statistical sampling of the HCPs that went into your downstream process. So if you have high coverage, that’s a pretty good statistical sample; if it’s low coverage, then it’s a pretty poor statistical sample. If your process goes out of control, then as you’re sampling these groups of proteins, you should see changes in the readout.

When an HCP copurifies with a product, that’s not random. Usually it is because that protein interacts with your product in some way. People used to say that shared biophysical and biochemical characteristics let an HCP copurify. But we’re getting more evidence that HCPs coming all the way through often have some sort of affinity for the protein of interest.

Wow, that’s even worse. They’re not just alongside; they’re bound to it! That’s right. It could be a hydrophobic interaction. Some products are glutathionated in cell culture, so glutathione S-transferase would bind to those. And then after you figure out that connection, you just smack yourself in the head: “Of course.” If I see an HCP copurify with a product, and I don’t immediately understand why that’s happening, then I just have to figure that out.

Does structural analysis help there? It could. Often, it’s a hydrophobic interaction, so you just need to wash your protein A column differently.

HCPs coming through with your product are not some random statistical sample. When you’re talking about coverage, you’re really talking about your ability to monitor process control. Because you can miss proteins that specifically bind your product, you need orthogonal techniques like MS.

And then there’s the other question of what percent coverage is good enough. Is it 5%? No. Is it 95%? Of course. But usually we’re somewhere between those extremes. My experience has been that better than 70% is good. This is where the third piece comes in: determining what that coverage is — and that’s really hard. I guarantee that every one of the different methods to do it will give you different results.

You can use IAC: Set up a column of your ELISA antibodies and pour your HCPs over it to find out what binds and what doesn’t. That’s a great method, but there are some issues with it. For one thing, proteins in solution don’t just float around separately. They bind with other molecules. So if protein 1 binds the resin, and protein 2 is just stuck onto protein 1, then it’s going to look like you have coverage for protein 2 when you don’t.

Another problem is that I can keep pouring more sample over that column and drive low-affinity interactions that aren’t representative of real binding on an ELISA well plate.

Then there are 2D gels and blots, where one spot does not equal one protein. It’s hard to tell visually what’s going on. If you give two people the same image, you’ll get two different numbers. They might be in the same ballpark, but they will differ. When you look at a 2D gel and see only a few proteins lighting up with your Western blot, then you know you have a problem. If it is such a complex pattern that you have trouble counting, then you’re more likely to be in good shape.

The higher the coverage, the messier the result? Pretty much. Health authorities expect a number for coverage, and it’s hard to get an unbiased analysis of that number. So what you can do as a responsible assay developer is to look at the totality of data. Use a couple of different methods and show you’re getting that good statistical sampling. If not, then your ELISA can’t tell you if your process is out of control.
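
To make that “totality of data” idea concrete, here is a trivial sketch (all counts invented) that reports coverage estimates from orthogonal methods side by side rather than leaning on any single number.

```python
# Invented coverage estimates. Each method counts antibody-detected
# proteins against its own total, so the percentages rarely agree.

methods = {
    "2D Western blot": (68, 95),    # (antibody-detected, total observed)
    "IAC + MS":        (81, 102),
    "2D DIGE":         (60, 88),
}

for name, (detected, total) in methods.items():
    print("%-16s %3d/%3d = %2.0f%% coverage"
          % (name, detected, total, 100.0 * detected / total))
```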

How is that subset determined? There are different ways to think about that. For a process-specific ELISA, if you had the time and resources, you could use MS on every column pool in downstream processing. You would determine what HCPs are cleared at each step. You could analyze the coverage with a combination of techniques and assess what HCPs you’re actually detecting. And you’d call those that persist further into the process your “higher-risk HCPs” — from a process standpoint, though, not necessarily for patients.

They’re not necessarily the most dangerous, but they’re the most problematic. That’s right. So I would be happy if my antibodies covered and recognized all of those higher-risk proteins. But perfection doesn’t exist. That’s what I would drive for, and then if there are gaps — one HCP goes all the way through and the ELISA can’t detect it — then I consult the process validation studies to see how well the process can control that particular impurity. I can use MS to ask that question. If it is well within control, then I don’t have to worry about it so much. If it’s not well within control, or it’s been somewhat variable, then I need to think about controlling that HCP. If my ELISA can’t monitor it, then I need another method: an individual-protein ELISA or a multiple-reaction monitoring (MRM) MS method, for example, depending on the protein.
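
Here is a minimal sketch of that gap assessment, with invented MS results and an invented risk-based threshold: the HCP the ELISA misses is judged on how low and how stable its levels run across validation batches.

```python
# Hypothetical gap assessment for one ELISA-invisible HCP tracked by MS
# across process-validation batches. All numbers are invented.

levels_ng_per_mg = [0.8, 1.1, 0.9, 1.3, 0.7]  # per-batch MS results
risk_threshold = 10.0                          # illustrative, risk-based

worst_case = max(levels_ng_per_mg)
spread = worst_case - min(levels_ng_per_mg)

# Margin and consistency criteria are placeholders for a real assessment.
if worst_case < 0.5 * risk_threshold and spread < 0.2 * risk_threshold:
    print("Consistently low and stable: rely on demonstrated process control.")
else:
    print("Variable or near threshold: consider a specific method, such as "
          "an individual-protein ELISA or an MRM assay.")
```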

But it’s not practical to do all that kind of testing for each step in a downstream process. You can, but it’s expensive.

Is the alternative to look at your harvest and your drug substance, and then go from there? You have to test the harvest because that’s how you establish coverage. It’s upstream-process specific. From a downstream perspective, you can bookend it with capture and final product. If something comes through the protein A pool, for example, then it’s probably a higher risk for your process. Then you look at the end and compare the two. That’s the more practical approach.
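
That bookend comparison reduces to simple set logic. In this hypothetical sketch (protein names invented), identifications from the capture pool are compared with the drug substance to separate HCPs that persist from those cleared downstream of capture.

```python
# Hypothetical "bookend" comparison of MS identifications: capture
# (protein A) pool versus drug substance. Names are invented.

protein_a_pool = {"clusterin", "plbl2", "gst", "actin", "hsp70"}
drug_substance = {"plbl2", "clusterin"}

persisting = drug_substance & protein_a_pool  # higher priority for the process
cleared = protein_a_pool - drug_substance     # removed by downstream steps

print("Persist into drug substance:", sorted(persisting))
print("Cleared downstream of capture:", sorted(cleared))
```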

Some people do a full process characterization. But how many processes do you analyze? Do you consider different variations of a process — both upstream and downstream? You can end up with an enormous amount of data. You can’t do it comprehensively; there are too many variables, and MS is not high-throughput enough to work at that scale.

Is it something you do to support process changes? Absolutely — and process transfers.

But if I wanted to determine whether an HCP ELISA is acceptable for process control, then I begin with three questions: Does my standard represent the HCPs that enter my process? Do the antibodies recognize a significant portion of those proteins? And do those antibodies recognize HCPs that should be removed by the process? If the ELISA shows no reduction in HCPs over a given step, then it’s not recognizing the proteins that are removed by that step.
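
As a sketch of that last check (all numbers invented), step-to-step ELISA results can be scanned for steps that show little apparent reduction, which could mean either that the step genuinely removes little or that the ELISA does not recognize what the step removes.

```python
# Hypothetical step-clearance scan of total-HCP ELISA results (ng/mg).
# The 2x flag threshold is arbitrary, chosen only for illustration.

elisa_ng_per_mg = {
    "harvest":        250_000,
    "protein A pool":   3_000,
    "polish 1 pool":    2_900,
    "drug substance":       8,
}

steps = list(elisa_ng_per_mg)
for before, after in zip(steps, steps[1:]):
    fold = elisa_ng_per_mg[before] / elisa_ng_per_mg[after]
    note = "" if fold >= 2 else "  <- little apparent reduction; check coverage"
    print("%s -> %s: %.1fx reduction%s" % (before, after, fold, note))
```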

Then you can dig in a little deeper and say, “These are the HCPs in my capture pool,” or “These are the ones most commonly seen copurifying with my product. Do my antibodies recognize those proteins?” That’s another level that’s not required to show as part of assay characterization, but I think it’s good practice.

There has to be a practical limit. With HCPs, you could go down a long and expensive rabbit hole that is not necessarily relevant for your product. At a certain point, there’s only so much you can do. It’s hard to know which questions will be meaningful. It’s reasonable to ask whether your HCP ELISA detects the three proteins that frequently copurify with your product. But I’d be a little nervous if health authorities started asking that specifically — because I’d want to know where they were going with it. But I recommend that my clients know the answer to the question.

When you hear people talking about laboratory automation freeing up your time to worry about other things, that’s what they mean. You can’t automate risk assessment, which is really important. It’s going to become more so as our technologies get better.

References
1 Wang F, et al. Host-Cell Protein Risk Management and Control During Bioprocess Development: A Consolidated Biotech Industry Review. BioProcess Int. 16(5–6) 2018; http://lne.e92.mwp.accessdomain.com/business/risk-management/host-cell-protein-risk-management-and-control-during-bioprocess-development-a-consolidated-biotech-industry-review-part-1; http://lne.e92.mwp.accessdomain.com/business/risk-management/host-cell-protein-risk-management-and-control-during-bioprocess-development-a-consolidated-biotech-industry-review-part-2.

2 de Zafra CLZ, et al. Host Cell Proteins in Biotechnology-Derived Products: A Risk Assessment Framework. Biotechnol. Bioeng. 112(11) 2015: 2284–2291; https://doi.org/10.1002/bit.25647.

3 Jawa V, et al. Evaluating Immunogenicity Risk Due to Host Cell Protein Impurities in Antibody-Based Biotherapeutics. AAPS J. 18(6) 2016: 1439–1452; https://doi.org/10.1208/s12248-016-9948-4.

4 Wang X, Hunter AK, Mozier NM. Host Cell Proteins in Biologics Development: Identification, Quantitation and Risk Assessment. Biotechnol. Bioeng. 103(3) 2009: 446–458; https://doi.org/10.1002/bit.22304.

5 CBER/CDER. Quality Considerations in Demonstrating Biosimilarity of a Therapeutic Protein Product to a Reference Product: Guidance for Industry. US Food and Drug Administration: Rockville, MD, April 2015; https://www.fda.gov/media/135612/download.

6 Mihara K, et al. Host Cell Proteins: The Hidden Side of Biosimilarity Assessment. J. Pharm. Sci. 104(12) 2015: 3991–3996; https://doi.org/10.1002/jps.24642.

7 Reichert J. Next Generation and Biosimilar Monoclonal Antibodies: Essential Considerations Towards Regulatory Acceptance in Europe, February 3–4, 2011, Freiburg, Germany. mAbs 3(3) 2011: 223–240; https://doi.org/10.4161/mabs.3.3.15475.

8 Fang J, et al. Advanced Assessment of the Physicochemical Characteristics of Remicade and Inflectra By Sensitive LC/MS Techniques. mAbs 8(6) 2016: 1021–1034; https://doi.org/10.1080/19420862.2016.1193661.

9 Liu J, et al. Assessing Analytical Similarity of Proposed Amgen Biosimilar ABP 501 to Adalimumab. BioDrugs 30, 2016: 321–338; https://doi.org/10.1007/s40259-016-0184-3.

10 EMA/CHMP/BWP/247713/2012. Guideline on Similar Biological Medicinal Products Containing Biotechnology-Derived Proteins As Active Substance: Quality Issues (Revision 1). European Medicines Agency: London, UK, 2014; https://www.ema.europa.eu/en/similar-biological-medicinal-products-containing-biotechnology-derived-proteins-active-substance#current-effective-version-section.

11 Vanderlaan M, et al. Hamster Phospholipase B-Like 2 (PLBL2): A Host-Cell Protein Impurity in Therapeutic Monoclonal Antibodies Derived from Chinese Hamster Ovary Cells. BioProcess Int. 13(4) 2015: 18–29, 55; http://lne.e92.mwp.accessdomain.com/analytical/downstream-validation/hamster-phospholipase-b-like-2-plbl2-a-host-cell-protein-impurity-in-therapeutic-monoclonal-antibodies-derived-from-chinese-hamster-ovary-cells.

12 Tran B, et al. Investigating Interactions Between Phospholipase B-Like 2 and Antibodies During Protein A Chromatography. J. Chromatogr. A 1438, 2016: 31–38; https://doi.org/10.1016/j.chroma.2016.01.047.

13 Zhang S, et al. Putative Phospholipase B-Like 2 Is Not Responsible for Polysorbate Degradation in Monoclonal Antibody Drug Products. J. Pharm. Sci. 109(9) 2020: 2710–2718; https://doi.org/10.1016/j.xphs.2020.05.028.

Further Reading
Levy NE. Identification and Characterization of Host Cell Protein Product-Associated Impurities in Monoclonal Antibody Bioprocessing. Biotechnol. Bioeng. 111(5) 2014: https://doi.org/10.1002/bit.25158.

Nogal B, Chhiba K, Emery JC. Select Host Cell Proteins Coelute with Monoclonal Antibodies in Protein A Chromatography. Biotechnol. Prog. 28(2) 2012: 454–458; https://doi.org/10.1002/btpr.1514.

Seisenberger C, et al. Questioning Coverage Values Determined By 2D Western Blots: A Critical Study on the Characterization of Anti-HCP ELISA Reagents. Biotechnol. Bioeng. 2020: 1–11; https://doi.org/10.1002/bit.27635.

Singh SK, et al. Understanding the Mechanism of Copurification of “Difficult to Remove” Host Cell Proteins in Rituximab Biosimilar Products. Biotechnol. Prog. 36, 2020: 1–12; https://doi.org/10.1002/btpr.2936.

Sisodiya VN, et al. Studying Host Cell Protein Interactions with Monoclonal Antibodies Using High Throughput Protein A Chromatography. Biotechnol. J. 7, 2012: 1233–1241; https://doi.org/10.1002/biot.201100479.

Vanderlaan M. Experience with Host Cell Protein Impurities in Biopharmaceuticals. Biotechnol. Prog. 2018; https://doi.org/10.1002/btpr.2640.

Wilson MR, Easterbrook-Smith SB. Clusterin Binds By a Multivalent Mechanism to the Fc and Fab Regions of IgG. Biochim. Biophys. Acta 1159, 1992: 319–326; https://doi.org/10.1016/0167-4838(92)90062-i.

Zhang Q, et al. Characterization of the Co-Elution of Host Cell Proteins with Monoclonal Antibodies during Protein A Purification. Biotechnol. Prog. 32(3) 2016: https://doi.org/10.1002/btpr.2272.

Denise Krawitz, PhD, is principal consultant with CMC Paradigms LLC, 49 Oak Springs Drive, San Anselmo, CA 94960; denisekra@gmail.com. She also teaches a FasTrain course on host-cell protein methods (https://fastraincourses.com). Cheryl Scott is cofounder and senior technical editor of BioProcess International, part of Informa Connect, PO Box 70, Dexter, OR 97431; cheryl.scott@informa.com.