RT-PCR Validations: Square Peg/Round Hole?


Jacob McDonald, CSO – Lovelace Biomedical

I'll set a disclaimer right away: I have been called a jack of all trades and, lately, a master of probably none (don't tell my kids). I have been pontificating over the past couple of years on the quandary of quantitative PCR for support of gene therapy studies. In these studies, it is important to understand the titer of the material (often not determined by quantitative PCR alone), and it is typically required to assess the biodistribution of the vectors (i.e., DNA, RNA, etc.) to understand where they may make an impact in the body (did they hit more than the target of interest?) and whether they are eliminated through any excreta (shedding) that could result in environmental or person-to-person contamination. The quandary of quantitation by PCR, as I see it, stems from a tool box that has historically been applied to academic work and from the relative youth of gene therapy compared to small molecule therapeutics. As a result, there is a shortage of regulatory guidance, trained staff, and experienced people with a combined background in regulatory science, molecular biology, and quantitative analysis. To address this, we have had to develop these hybrid people, or bring them in. Below are some high-level comments on where we have seen PCR evolve, without getting too deep into the technical weeds.

Lovelace first started performing quantitative PCR approximately 12 years ago when we initiated our Gene Therapy Center with the University of Pennsylvania and the NIH. It was early in the gene therapy world, and we were working hand in hand with the innovators and the FDA to try to generate the best science with the tools at hand. The trick from the get-go was how to perform studies in a GLP environment using molecular tools and/or animal models that had never been considered for that environment. The animal models were often derived from academic laboratories that had developed phenotypes from some sort of altered genotype. Typically, there was no historical pharm/tox data, but we made it work as best we could. When it came to analysis of biodistribution, we sort of scratched our heads in collaboration with the FDA. There were plenty of regulatory guidance documents on analytical chemistry approaches, and there was a fair amount of guidance for bioanalysis by LC-MS, etc. However, there was little to no guidance on molecular techniques such as PCR. We developed a hybrid approach where we identified the major elements of a bioanalytical validation and adapted them to a PCR validation. Ultimately, in the early days, the FDA allowed us to exempt those validations from the GLPs. We knew, however, that this luxury wouldn't last forever.

What are the challenges? I won't be able to capture all of them in one assessment. However, especially early on, many of the tools were not designed with GLP in mind in terms of their characterization, instrument vendor support, etc. Much of this has changed, and it is now quite easy to ensure that the equipment has adequate software, IQ/OQ, etc. Further, the reagents that were mostly research grade early on have increased dramatically in quality. However, the analytical/bioanalytical standards used in PCR are often not readily available and/or not properly characterized, leading to significant lab-to-lab variability. I know we (the royal we) have learned from silly mistakes that have helped us design our assays, and even our laboratories, better. Much like sensitive mass spectrometers, PCR has the 'advantage' of being extremely sensitive. So much so that it is critical to design the laboratory so that the work performed before and after amplification occurs in separate work areas to avoid contamination. All consumables, supplies, equipment, gloves, etc. should be separated. Further, specimen processing should be done in a dedicated area, and the workflow should be planned to avoid contamination. Beyond contamination, issues associated with creating good quantitative PCR assays (and things that can bite you) include RNA degradation, lack of internal controls (blanks, positive/negative controls with and without the gene present), and lack of specificity. All of these should be considered both in the design of the assay and in the design of the experiments to ensure that the assay is performing properly.
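To make the internal-controls point concrete, here is a minimal Python sketch of a run-acceptance check. The function name `run_is_acceptable` and the positive-control Cq window are hypothetical, not drawn from any guidance, but they illustrate the kind of gate we mean: no-template controls must stay negative (a contamination flag), and positive controls must land where the assay says they should.

```python
# Minimal sketch of a run-acceptance check for a qPCR plate, assuming
# hypothetical acceptance criteria: every no-template control (NTC) must
# show no amplification, and every positive control must fall within a
# pre-set Cq window. The numbers are illustrative, not regulatory limits.

NO_AMPLIFICATION = None  # Cq reported as None when no signal crosses threshold

def run_is_acceptable(ntc_cqs, pos_ctrl_cqs, pos_ctrl_range=(24.0, 28.0)):
    """Return (ok, reasons) for a single qPCR run."""
    reasons = []
    # Any signal in a no-template control suggests contamination.
    if any(cq is not NO_AMPLIFICATION for cq in ntc_cqs):
        reasons.append("NTC amplified: possible contamination")
    # Positive controls must amplify inside the expected Cq window.
    lo, hi = pos_ctrl_range
    if any(cq is NO_AMPLIFICATION or not (lo <= cq <= hi) for cq in pos_ctrl_cqs):
        reasons.append("positive control outside expected Cq window")
    return (not reasons, reasons)

ok, reasons = run_is_acceptable(ntc_cqs=[None, None], pos_ctrl_cqs=[25.8, 26.1])
print(ok, reasons)  # True, []
```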

After addressing the equipment and workflow, we needed to develop SOPs that maintain the flexibility of PCR but integrate some of the quality standards we use for mass spectrometry, based on the regulatory guidance in that area. We literally took our bioanalytical SOP for MS and a methods SOP for PCR and put them side by side to see which processes we could practically integrate, taking advantage of the process benefits we have learned from experience and regulation on the MS side. We then focused on how to adapt accuracy, specificity, linearity, selectivity, robustness, precision, etc. in the context of PCR. Overall, the definitions of these terms are quite similar for PCR and LC-MS. Of course, the tests are a little different, but the goals are the same. For example, the goal of determining specificity in both cases is the ability to analyze without interfering substances; only the nature of the interferences differs. The limit of quantitation is determined both by amplification sensitivity (how much starting material is needed to get a good result) and variant sensitivity (how much of the target needs to be present to be detected), all while meeting the precision and accuracy requirements.
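As one concrete example of how the LC-MS vocabulary translates, the sketch below fits a qPCR standard curve in Python. The dilution series and Cq values are made up, but the math is the standard treatment: linearity is the R^2 of Cq against log10 input copies, and amplification efficiency comes from the slope via E = 10^(-1/slope) - 1, which should sit near 100% for a clean assay.

```python
# Back-of-the-envelope sketch mapping bioanalytical validation terms onto a
# qPCR standard curve: linearity as R^2 of Cq vs. log10(copies), efficiency
# from the slope. Example inputs are made up for illustration.
import math

def standard_curve(copies, cqs):
    """Least-squares fit of Cq against log10(input copies)."""
    x = [math.log10(c) for c in copies]
    n = len(x)
    mx, my = sum(x) / n, sum(cqs) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, cqs))
    slope = sxy / sxx
    intercept = my - slope * mx
    # R^2 measures linearity across the quantitation range.
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, cqs))
    ss_tot = sum((yi - my) ** 2 for yi in cqs)
    r2 = 1 - ss_res / ss_tot
    efficiency = 10 ** (-1 / slope) - 1  # fraction; 1.0 == perfect doubling
    return slope, intercept, r2, efficiency

# Ten-fold dilution series with hypothetical Cq values:
slope, intercept, r2, eff = standard_curve(
    [1e2, 1e3, 1e4, 1e5, 1e6], [33.1, 29.8, 26.4, 23.1, 19.8]
)
print(f"slope={slope:.2f}, R^2={r2:.4f}, efficiency={eff:.1%}")
```

A slope near -3.32 per decade corresponds to ~100% efficiency (perfect doubling each cycle), which is why the slope, rather than the raw Cq values, carries the validation weight here.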

In the case of biodistribution studies, we often struggle with the validation of each tissue matrix. In most academic settings, it is assumed that since you are digesting the tissue and isolating out the target (RNA, DNA, etc.), the matrix it came from should not matter. Of course, this is enough to make a traditional bioanalytical chemist squirm. We think this is an area of struggle for many laboratories, and there is likely a happy medium between validating every tissue and validating none. Our approach, for what it's worth, is to start (in most cases) with five tissues that are expected to be the main targets. If there are any differences among these tissues, we extend the validation to the additional tissues. If there is no difference, we may do a modest additional quality control check in the additional tissues but don't perform a full validation. Sometimes there are over 20 tissues per animal; validating that many tissues when the science does not support it could be a money and time pit.
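For what it's worth, that tiered approach can be written down as a simple decision rule. The sketch below uses made-up spike-recovery numbers and a hypothetical 80-120% acceptance window (in practice you would set the window from the assay's precision/accuracy criteria): any tissue whose spiked QC recovers outside the window gets extended to a full validation.

```python
# Sketch of the tiered tissue-matrix approach described above, with made-up
# recovery numbers and a hypothetical 80-120% acceptance window. Tissues
# whose spiked QC recovers outside the window are flagged for full validation.

ACCEPT_LOW, ACCEPT_HIGH = 0.80, 1.20  # illustrative acceptance window

def tissues_needing_full_validation(spike_results):
    """spike_results: {tissue: (measured_copies, spiked_copies)}."""
    flagged = []
    for tissue, (measured, spiked) in spike_results.items():
        recovery = measured / spiked
        if not (ACCEPT_LOW <= recovery <= ACCEPT_HIGH):
            flagged.append((tissue, recovery))
    return flagged

results = {
    "liver":  (9.5e4, 1e5),
    "spleen": (1.1e5, 1e5),
    "lung":   (6.8e4, 1e5),  # low recovery -> extend validation here
}
print(tissues_needing_full_validation(results))  # [('lung', 0.68)]
```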

One of the issues that comes up in our laboratory, and in others, is lab-to-lab variation. This can occur because of subtle (or not so subtle) differences in protocol or quality control. Differences can include the source and availability of a standard, the way the standard curve is implemented (circular vs. linearized plasmid), quality errors in pipetting (this is big), the type of probe used in the assay (TaqMan vs. LNA vs. SYBR), the chemical purity and homogeneity of primers and probes, the type of real-time instrument and its compatibility with the dyes used, and the quality/source of the master mix. I know that in our laboratory we just invested a decent house's worth (at least in Albuquerque) in robotics to help shore up our pipetting precision and efficiency. Some of the error and differences associated with the curve can be solved with droplet digital PCR, which gives an absolute quantification of the sample. That approach has its own strengths and limitations, which I won't go into, but one is that it is less widely used (though it has promise) and is still being optimized for industry use.
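To show why ddPCR sidesteps the standard curve entirely, here is a minimal sketch of the underlying Poisson arithmetic; the droplet volume is an assumed nominal value and the droplet counts are made up. Because target molecules distribute randomly across droplets, counting the negative droplets yields the mean copies per droplet directly, and therefore an absolute concentration without any calibration curve.

```python
# Minimal sketch of ddPCR absolute quantification. With targets randomly
# partitioned into droplets, copies per droplet follow a Poisson
# distribution, so lambda = -ln(fraction of negative droplets). The droplet
# volume below is an assumed nominal value used only for illustration.
import math

DROPLET_VOLUME_NL = 0.85  # assumed nominal droplet volume, nanoliters

def ddpcr_concentration(positive, total, droplet_volume_nl=DROPLET_VOLUME_NL):
    """Copies per microliter from positive/total droplet counts."""
    fraction_negative = (total - positive) / total
    lam = -math.log(fraction_negative)   # mean copies per droplet
    copies_per_nl = lam / droplet_volume_nl
    return copies_per_nl * 1000.0        # nL -> uL

# e.g., 4,000 positive droplets out of 15,000 accepted:
print(f"{ddpcr_concentration(4000, 15000):.0f} copies/uL")  # ~365 copies/uL
```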

Overall, we are excited about the gene therapy movement and about the opportunity to work among scientists and regulators to help determine the future, building on our past of working in regulated environments. At first, we thought of the process of taking an academic tool and validating it to a bioanalytical standard as trying to fit a square peg into a round hole. However, over years of working internally and with colleagues and regulators, we feel the feasibility and approach are becoming more and more standardized. In fact, this year at WRIB (the Workshop on Recent Issues in Bioanalysis, an industry/regulatory working group that focuses on bioanalytical technical and regulatory issues) we attended a great session on PCR, the first we had seen there. The FDA also held a workshop this past year, published online, on the challenges of PCR for bioanalysis. We expect formal FDA guidance to evolve over the next few years, and we look forward to working with them. In the meantime, we are pleased with the progress we have made in helping to shave the square peg of PCR to fit into the round hole of regulatory-driven bioanalysis.

See more content, and follow us on our LinkedIn and Twitter pages!