19 July 2021 | Monday | Opinion
Source: PerkinElmer
Translational medicine is a lab- and data-driven practice that aspires to a bench-to-bedside approach for developing therapeutic strategies efficiently. It identifies biomarkers that can then be used to characterize a patient's molecular profile and disease etiology. Biomarkers can include blood sugar levels that identify patients with diabetes, or certain gene mutations that signal a patient's risk of developing cancer.
By establishing molecular profiles, translational medicine has helped create drugs that are specialized to target the specific pathways implicated in a patient's diagnosis. Executed well, it is an effective method that delivers financial benefits on top of health gains, but it is also a data-heavy activity. Here's an overview of working with data in translational medicine:
Establishing molecular profiles, such as mutations or expression levels, supports targeted, cost-effective development and better overall drug response compared with traditional pharma R&D strategies. Drugs designed in accordance with translational medicine principles can zero in on the specific pathways identified in patient diagnoses. Accordingly, they stand a greater chance of delivering superior benefits with fewer side effects than drugs created under a one-size-fits-all approach that simply groups individuals together based on a shared ailment.
In areas like oncology, translational medicine principles have crossed over into the clinical domain to help find the right drug for the right patient. Clinicians now use patient diagnostic profiles to determine whether a given treatment could deliver the greatest benefit with minimal side effects. The breast cancer medication Herceptin, for instance, usually works well for patients whose tumors overexpress the HER2 gene, but less so when expression levels are wildtype. Recognizing this disparity via translational medicine spares the patient the prospect of experiencing severe side effects, and possibly paying a significant sum on top of insurance, all while seeing no benefit.
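As a rough illustration of how a diagnostic profile can gate treatment selection, here is a minimal sketch that flags patients whose tumors overexpress HER2. The patient table, column names and scoring threshold are hypothetical stand-ins for illustration, not clinical criteria or any vendor's tooling.

```python
import pandas as pd

# Hypothetical patient table; HER2 IHC scores of 0/1 are negative,
# 2 is equivocal and 3 indicates overexpression.
patients = pd.DataFrame({
    "patient_id": ["P001", "P002", "P003", "P004"],
    "her2_ihc_score": [3, 0, 2, 1],
})

def her2_targeted_therapy_candidate(ihc_score: int) -> bool:
    """Flag patients whose tumors overexpress HER2 (IHC 3+)."""
    return ihc_score == 3

patients["her2_candidate"] = patients["her2_ihc_score"].apply(
    her2_targeted_therapy_candidate
)
print(patients)
```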
Executed properly, translational medicine can achieve superior health and financial outcomes. It can also yield major savings for drugmakers thanks to more accurate predictions of drug failure; even a 10% improvement in predicting failures can save $100 million per drug. However, successful translational medicine requires concerted focus on everything from preclinical hypotheses to the processes that produce the informatics needed for accelerated scientific discovery and application. Let's go through a checklist of what to prioritize.
Before any clinical work begins under a translational medicine-informed approach to drug development, there are preclinical tasks to complete, related to the expected utility of the associated biomarkers and companion diagnostics. Such preclinical due diligence determines if the biomarkers in question will have an application in a clinical setting.
Hypothesis creation and experimental design, in particular, are essential at this stage; it is important to gather as much information as possible to inform the in vitro, in vivo and preclinical testing of translational medicine hypotheses. In some cases, the available companion diagnostics for the biomarkers may illuminate a relatively straightforward link between a pathway, a condition and a potentially effective drug, as with Herceptin (targeting the HER2 protein) and Keytruda (targeting PD-1). In other words, their clinical utility is well established.
It is not always that simple. Many minor biomarkers that carry little significance individually can still have a major cumulative effect. Evaluating their overall utility takes significant time and effort, and requires close collaboration with clinicians working under clear clinical guidance.
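To make the cumulative-effect idea concrete, the sketch below combines several individually weak biomarkers into a single composite score with a logistic regression. The data is synthetic and the setup is an assumption for illustration, not a validated biomarker model.

```python
# A minimal sketch of combining individually weak biomarkers into one
# composite risk score. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_patients, n_markers = 200, 10

# Each marker contributes only a small signal toward responder status.
X = rng.normal(size=(n_patients, n_markers))
true_weights = rng.uniform(0.1, 0.3, size=n_markers)  # weak individual effects
logits = X @ true_weights
y = (logits + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

model = LogisticRegression().fit(X, y)
composite_score = model.predict_proba(X)[:, 1]  # per-patient response probability
print(f"mean score, responders:     {composite_score[y == 1].mean():.2f}")
print(f"mean score, non-responders: {composite_score[y == 0].mean():.2f}")
```

No single marker separates the groups on its own, but the combined score does, which is why evaluating such panels demands far more clinical input than a one-gene diagnostic.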
The exact duration of hypothesis testing, experimental design and other preclinical work will depend on this level of clinician communication, along with how well the disease is understood pathologically. Considering translational medicine is still very much an exploratory activity, it can take time to get everything right and find an optimal process.
Translational medicine is a data-heavy activity. Copious amounts of information are generated by the numerous technologies at play, from bench to bedside. But for the purposes of drug development, it is not enough just to have a large quantity of data to work with; its quality matters, too. That data will eventually have to be analyzed, integrated and applied to translational medicine processes, tasks that are far easier if the data was collected cleanly and in a standardized way from the start.
Some best practices for data collection, aggregation and analysis include:
- Collect data cleanly and in a standardized way from the outset.
- Standardize on a single experimental platform wherever possible.
- Harmonize analysis pipelines so informational silos don't form.
- Choose analysis tools that non-expert team members can use.
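As one way to operationalize clean, standardized collection, here is a minimal sketch that validates incoming assay records against a simple schema before aggregation. The field names, types and rules are hypothetical assumptions, not a published specification.

```python
# A minimal sketch of schema validation for assay records prior to
# aggregation. Field names, types and rules are hypothetical.
from typing import Any

SCHEMA = {
    "sample_id": str,     # unique sample identifier
    "gene": str,          # gene symbol, e.g. "HER2"
    "expression": float,  # normalized expression value
    "platform": str,      # assay platform; standardized project-wide
}
ALLOWED_PLATFORMS = {"rnaseq_v2"}  # enforce a single agreed-upon platform

def validate_record(record: dict[str, Any]) -> list[str]:
    """Return a list of problems; an empty list means the record is clean."""
    problems = []
    for field, expected_type in SCHEMA.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"{field}: expected {expected_type.__name__}")
    if record.get("platform") not in ALLOWED_PLATFORMS:
        problems.append("platform not on the standardized list")
    return problems

record = {"sample_id": "S-104", "gene": "HER2",
          "expression": 7.2, "platform": "rnaseq_v2"}
print(validate_record(record))  # [] -> clean record
```

Records that fail validation can be quarantined before they enter the aggregate dataset, keeping downstream integration and analysis clean.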
Following these guidelines to attain high-quality data is helpful not only for developing drugs that work but also for performing retrospective analysis on the ones that didn’t. A notable drug for treating non-small cell lung cancer, Keytruda, emerged only after its underlying compound had initially been shelved as an ineffective treatment for controlling immune response in autoimmune diseases.
The reopening of research into Keytruda’s potential uses followed a new hypothesis, which was based on the findings of an article published in the New England Journal of Medicine. As a result, the drug maker implemented a biomarker-based strategy for its clinical trials, allowing for the stringent screening of trial participants to find those most likely to respond to treatment. The eventual success of this pharmaceutical stemmed from the application of translational medicine best practices to effectively identify a stratified patient population.
In large-scale genome projects, all personnel are advised to standardize on one platform when performing experiments. The subsequent data analysis, however, tends to fragment.
Many different analysis pipelines get created during this process, along with informational silos. This can complicate the progress of the overall project as translational scientists, clinicians and others struggle to collaborate. Accordingly, it is prudent to break down these silos and harmonize these disjointed processes — but what is the best way to accomplish that goal?
A good starting point is to implement tools that are usable by non-experts, so that members across a team can better conceptualize the data and apply their biological understanding to the analyzed results. In reality, however, accessing the right data for cross-study analysis can be highly cumbersome for non-expert users.
Such inefficiency in data access is further compounded by the fact that even routine analytics, such as comparing the survival rates of different patient groups in oncology, must be performed by experts. Introducing solutions that streamline data access for cross-study analysis saves time and helps keep development on track.
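To ground the survival-rate comparison mentioned above, here is a minimal sketch using the open-source lifelines library to fit Kaplan-Meier estimates and run a log-rank test between two patient groups. The follow-up times, event flags and group labels are synthetic assumptions for illustration.

```python
# A minimal sketch comparing survival between two patient groups with
# Kaplan-Meier estimates and a log-rank test. All data here is synthetic.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

# Hypothetical follow-up times in months; 1 = event observed, 0 = censored.
time_a = rng.exponential(scale=24, size=80)  # e.g. biomarker-positive group
time_b = rng.exponential(scale=14, size=80)  # e.g. biomarker-negative group
event_a = rng.binomial(1, 0.8, size=80)
event_b = rng.binomial(1, 0.8, size=80)

kmf_a = KaplanMeierFitter().fit(time_a, event_observed=event_a, label="group A")
kmf_b = KaplanMeierFitter().fit(time_b, event_observed=event_b, label="group B")
print(f"median survival, group A: {kmf_a.median_survival_time_:.1f} months")
print(f"median survival, group B: {kmf_b.median_survival_time_:.1f} months")

# Log-rank test: are the two survival curves statistically distinguishable?
result = logrank_test(time_a, time_b,
                      event_observed_A=event_a, event_observed_B=event_b)
print(f"log-rank p-value: {result.p_value:.4f}")
```

Making this kind of routine comparison available through accessible tooling, rather than gating it behind a specialist queue, is exactly the sort of streamlining that keeps cross-study analysis on schedule.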
Translational medicine has made great strides in oncology, and its significant potential is also being realized in clinical research and drug development across other therapeutic areas. Through careful experimental design and hypothesis testing, scalable data management and the use of intuitive, accessible data analysis tools, R&D teams can be in a better position than ever to create drugs that deliver the most value for pharmaceutical companies and patients alike.