Funding, Deals & Partnerships: BIOLOGICS & MEDICAL DEVICES; BioMed e-Series; Medicine and Life Sciences Scientific Journal – http://PharmaceuticalIntelligence.com
Despite heated discussion over whether it works, the FDA has approved Aduhelm, bringing a new ray of hope to the Alzheimer’s patients.
Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc
On Monday, 7 June 2021, a controversial new Alzheimer's disease treatment was approved in the United States, the first in nearly 20 years, sparking calls for it to be made available worldwide despite conflicting evidence about its usefulness. The drug is intended for people with mild cognitive impairment, not severe dementia, and it is designed to delay the progression of Alzheimer's disease rather than merely alleviate symptoms.
The route to FDA clearance for Aducanumab has been bumpy – and contentious.
Though doctors, patients, and the organizations that assist them are in desperate need of therapies that can delay mental decline, scientists question the efficacy of the new medicine, Aducanumab (Aduhelm). In March 2019, two trials were halted because the medication appeared to be ineffective. "The futility analysis revealed that the studies were most likely to fail," said Dr. Richard Isaacson of Weill Cornell Medicine and NewYork-Presbyterian. Several months later, Biogen, the drug's manufacturer, revealed that a fresh analysis with more participants found that individuals who received high doses of Aducanumab exhibited a reduction in clinical decline in one trial. Patients treated with high-dose Aducanumab showed 22% less clinical decline in their cognitive health at 18 months, indicating that the advancement of their early Alzheimer's disease was slowed, according to FDA briefing documents from last year.
When the FDA's advisory committee reviewed the application in November, its members were split on the merits, and the application was rejected. Three of its advisers went public in a scientific journal, claiming that there was insufficient evidence that the drug worked. They were concerned that approving the medicine might lower the threshold for future approvals, given the scarcity of Alzheimer's treatments.
Dr. Caleb Alexander, a drug safety and effectiveness expert at the Johns Hopkins Bloomberg School of Public Health, was one of the FDA advisers concerned that the data presented to the agency was a reanalysis performed after the trials were stopped. It was "like the Texas sharpshooter fallacy," he told the New York Times, "where the sharpshooter fires at a barn and then goes and paints a bullseye around the cluster of holes."
Some organizations, such as the non-profit Public Citizen’s Health Research Group, claimed that the FDA should not approve Aducanumab for the treatment of Alzheimer’s disease because there is insufficient proof of its efficacy.
The drug is a monoclonal antibody that targets the amyloid protein plaques in the brain that are thought to be a cause of Alzheimer's disease. The majority of experimental Alzheimer's medications have attempted to clear these plaques.
Aducanumab appears to do this in some patients, but only when the disease is in its early stages. This means that people must be tested to see whether they have the disease, yet many people with memory loss are hesitant to undergo testing because, until now, no treatment has been available.
The few Alzheimer's medications available appear to have limited effectiveness. When donepezil (marketed as Aricept) was approved more than 20 years ago, there was a major battle to get it through. It was heralded as a breakthrough at the time, partly due to the lack of anything else, but it has since become obvious that it slows mental decline for a few months and has little effect in the long run.
The findings of another trial backed up those conclusions for some patients.
Biogen submitted a Biologics License Application to the FDA in July 2020, requesting approval of the medicine.
The FDA’s decision has been awaited by Alzheimer’s disease researchers, clinicians, and patients since then.
Support for approval of the drug
Other groups, such as the Alzheimer’s Association, have supported the drug’s approval.
The Alzheimer’s Association‘s website stated on Friday, “This is a critical time, regardless of the FDA’s final judgment. We’ve never been this close to approving an Alzheimer’s drug that could affect the disease’s development rather than just the symptoms. We can keep working together to achieve our goal of a world free of Alzheimer’s disease and other dementias.”
The drug has gotten so much attention that the Knight Alzheimer Disease Research Center at Washington University in St. Louis issued a statement on Friday stating that even if it is approved, “it will still likely take several months for the medication to pass other regulatory steps and become available to patients.”
Biogen officials told KGO-TV on Monday that the medicine will be ready to ship in about two weeks and that they have identified more than 900 facilities across the United States that they feel will be medically and commercially suitable.
Officials stated the corporation will also provide financial support to qualifying patients so that their out-of-pocket payments are as low as possible. Biogen has also pledged not to raise the price for at least the next four years.
Most Medicare customers with supplemental plans, according to the firm, will have a limited or capped co-pay.
Case studies connected to the drug approval
Case 1
Ann Lange, one of several Chicago-area clinical trial volunteers who received the breakthrough Alzheimer’s treatment, said,
It really offers us so much hope for a long, healthy life.
Lange, 60, was diagnosed with Alzheimer's disease five years ago. Her memory has improved as a result of the monthly infusions, she claims.
She said,
I'd forget what I'd done in the shower, so I'd scribble 'shampoo, conditioner, face, body' on the door. Otherwise, I'd lose track of what I was doing," Lange remarked. "I'm not required to do that any longer.
Case 2
Jenny Knap, 69, has been receiving infusions of the Aducanumab medication for about a year as part of two six-month research trials. She told CNN that she had been receiving treatment for roughly six months before the trial was halted in 2019, and that she had recently resumed treatment.
Knap said,
I can’t say I noticed it on a daily basis, but I do think I’m doing a lot better in terms of checking for where my glasses are and stuff like that.
When Knap was diagnosed with mild cognitive impairment, a clinical precursor to Alzheimer's disease, in 2015, the symptoms were slight but present.
Her glasses were frequently misplaced, and she would repeat herself, forgetting previous conversations, according to her husband, Joe Knap.
Joe added,
We were aware that things were starting to fall between the cracks as these instances became more frequent.
Jenny went to the Lou Ruvo Center for Brain Health at the Cleveland Clinic in Ohio for testing and obtained her diagnosis. Jenny found she was qualified to join in clinical trials for the Biogen medicine Aducanumab at the Cleveland Clinic a few years later, in early 2017. She volunteered and has been a part of the trial ever since.
It turns out that Jenny was in the placebo group for the first year and a half, Joe explained, meaning she didn't get the treatment.
They didn't realize she was in the placebo group until recently because the trial was blinded. As the trial progressed, Joe stated, she was given the medicine around August 2018 and continued until February 2019. The trial was halted by Biogen in March 2019, but it was restarted last October, when Jenny resumed getting infusions.
Jenny now receives Aducanumab infusions every four weeks at the Cleveland Clinic, which is roughly a half-hour drive from their house, with Joe by her side. Jenny added that, despite the fact that she has only recently begun therapy, she believes it is benefiting her, combined with a balanced diet and regular exercise (she runs four miles).
"The hope of Aducanumab is to halt the progression of the disease rather than to improve cognition. We didn't appreciate any significant reduction in her condition," Jenny's doctor, Dr. Babak Tousi, who headed Aducanumab clinical studies at the Cleveland Clinic, wrote to CNN in an email.
"This treatment is unlike anything we've ever had before. There has never been a drug that has slowed the progression of Alzheimer's disease," he stated. "Right now, existing medications like donepezil and memantine help with symptoms but do not slow the disease's progression."
Jenny claims that the medicine has had no significant negative effects on her.
"There were signs of some very minor bleeding in the brain at one point, which was quite some time ago. It was at very low levels, in fact," Joe said, expressing some concern about Jenny but adding that the physicians were unconcerned.
According to Tousi, with repeated therapy "blood vessels may become leaky, allowing fluid and red blood cells to flow out to the surrounding area," and microhemorrhages have been documented in 19.1% of trial participants who received the maximal dose of therapy.
According to Joe, the infusions and a healthy lifestyle have improved Jenny's and his outlook on the future. They were also delighted to take part in the trial, which they saw as an opportunity to make a positive difference in other people's lives.
"There was this apprehension of what was ahead before we went into the clinical trial," Joe recalled. "The medical aspect of the infusion gives us reason to be optimistic. However, doing the activity on a daily basis provides us with immediate benefits."
The drug’s final commercialization announcement
Aducanumab, which will be marketed as Aduhelm, is a monthly intravenous infusion that is designed to halt cognitive decline in patients with mild memory and thinking issues. It is the first FDA-approved medication for Alzheimer’s disease that targets the disease process rather than just the symptoms.
The manufacturer, Biogen, stated Monday afternoon that the annual list price will be $56,000. In addition, diagnostic tests and brain imaging will almost certainly add tens of thousands of dollars.
The FDA granted approval for the medicine to be used but ordered Biogen to conduct a new clinical trial, acknowledging that prior trials of the medicine had offered insufficient evidence to indicate effectiveness.
Biogen Inc said on Tuesday that it expects to start shipping Aduhelm, a newly licensed Alzheimer’s medicine, in approximately two weeks and that it has prepared over 900 healthcare facilities for the intravenous infusion treatment.
Other Relevant Articles
Gene Therapy could be a Boon to Alzheimer’s disease (AD): A first-in-human clinical trial proposed
Alnylam Announces First-Ever FDA Approval of an RNAi Therapeutic, ONPATTRO™ (patisiran) for the Treatment of the Polyneuropathy of Hereditary Transthyretin-Mediated Amyloidosis in Adults
Anybody can do something once; the problem is: can you do it twice, or for that matter, over and over again?
This is the essential issue faced by those personnel in the throes of the commercialization process.
Any successful commercial process has to meet a number of criteria:
The process must be reproducible — it must yield the same product/results given the same inputs.
The process must be economically viable: given the constraints of raw material, energy, and labor costs, depreciation schedules for equipment, expected process failures, and R/D, Marketing, and Sales support costs, the process needs to yield both a profit and positive cash flow (a rough unit-economics check is sketched after this list).
The process should be implemented using readily available commercial components and control instrumentation. On occasion, successful implementation of a project will require specialized components; however, these components themselves must meet the criteria for successful commercialization.
The process must be "simple" enough so that suitably trained operators can manage it. A unit that requires Ph.D.'s to maintain operations is doomed to failure.
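To make the economic-viability criterion concrete, here is a minimal back-of-the-envelope sketch; every figure and variable name in it is a hypothetical placeholder, not data from this article.

```python
# Rough annual unit economics for a candidate process; all numbers hypothetical.

def annual_economics(units, price, raw_material, energy, labor,
                     overhead, depreciation, failure_rate):
    """Return (profit, cash_flow) for one year of operation."""
    good_units = units * (1 - failure_rate)   # expected process failures
    revenue = good_units * price
    variable_cost = units * (raw_material + energy + labor)
    profit = revenue - variable_cost - overhead - depreciation
    cash_flow = profit + depreciation         # depreciation is a non-cash charge
    return profit, cash_flow

profit, cash = annual_economics(
    units=100_000, price=12.0,
    raw_material=3.0, energy=0.8, labor=1.5,   # per-unit costs
    overhead=250_000,       # R/D, Marketing, and Sales support
    depreciation=150_000,   # equipment depreciation schedule
    failure_rate=0.05,
)
print(f"profit = ${profit:,.0f}, cash flow = ${cash:,.0f}")
# Both must be positive for the process to be commercially viable.
```

The point of even so crude a model is that profit and cash flow must be checked separately: once depreciation and expected process failures are accounted for, a process can clear one hurdle and still fail the other.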
History is replete with novel processes that worked on the lab scale, but were failures when a commercial operation was attempted. The issues that are most responsible for lab-to-production failure are listed under the general classification of “scale-up”. Scale-up principles are covered in my monograph, “The Art of Scale-up” (www.artofscaleup.com), but in general follow these rules:
Identification of those process parameters that will have major impact on commercial viability: reaction kinetics, mass transfer vs. temperature/kinetic control; if multi-phase systems are involved, the type and energy of required stirring; heat transfer considerations; side reactions; etc.
Materials of construction; raw material and product hazards; etc.
Regulatory considerations: FDA, OSHA, EPA.
Failure to address any of these issues prior to commercialization will lead to surprises during commercialization.
In addition to the engineering/scale-up aspects of commercialization, there are several other criteria that may need attention:
When to launch a product – where will the new product fit into the overall corporate product portfolio?
Where is the proper location to launch? A product aimed at flu symptom suppression in cold-weather conditions may not do well in Florida; super-sweet tea does well in the South and not so well in New England, so a sugar-replacement product might do well in the South.
Who is going to use the product? Are you targeting doctor’s offices, hospitals, or direct to consumer routes?
How to launch – social media and “influencers” have given rise to new avenues of product introductions.
The old aphorism of "measure twice, cut once" has a special resonance when commercializing a new process or product. The more the process is thought out ahead of time, the fewer issues there will be down the road. In the commercial world, there is constant pressure to rush things to meet management deadlines, which always leads to problems and extra expense. A crusty R/D chemist once remarked, "There is never time to do it right, but always time to do it twice." Everyone needs to keep this in the back of their mind.
Crystal Resolution in Raman Spectroscopy for Pharmaceutical Analysis
Curator: Larry H. Bernstein, MD, FCAP
Investigating Crystallinity Using Low Frequency Raman Spectroscopy: Applications in Pharmaceutical Analysis
Crystallinity is an important factor when producing pharmaceuticals because it directly affects the bioavailability of the drug. Low-frequency Raman spectroscopy offers some advantages to the detection and analysis of crystallinity in pharmaceutical samples. Here the experimental requirements for low-frequency Raman measurements are described. The application of the technique to the study of crystallinity with a number of examples is discussed and the advantages and limitations are highlighted and compared with other techniques.
Raman spectroscopy is typically a nondestructive technique that uses lasers to probe for information about intra- and intermolecular bond vibrations. It involves irradiating the sample with monochromatic light to excite molecules within the sample to a virtual excited state. The molecules then relax to a higher (Stokes scattering) or lower (anti-Stokes scattering) vibrational level—resulting in the scattered light being lower or higher in frequency than the irradiating photons, respectively. Raman scattering is produced when this inelastic scattering occurs, and can be used to deduce information about the nature of the sample. Relative to the incident light, Raman scattering is a rare phenomenon, occurring in only about 10⁻⁶ of the irradiated molecules (1). What occurs more commonly is elastic scattering, Rayleigh scattering, where the light emitted from the sample has the same energy as the incident light. The intensity of Raman scattering is determined by whether the vibrational mode of the molecule has a change in polarizability along its normal coordinate—similar to how infrared (IR) spectroscopy requires a changing dipole moment (1). For example, H2O has vibrational modes that have large dipole moment changes along the normal coordinate and as such it has strong IR bands. However, as H2O is a σ-bonded compound, the electrons have low polarizability and the change in polarizability with vibration is also low, hence the Raman scattering is weak. Conversely, the active pharmaceutical ingredients (APIs) in many drugs are generally π-bonded and easily polarizable, thus producing strong features in Raman spectra (2). This is one of the reasons why Raman spectroscopy is actively used in pharmaceutical studies.
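The energy bookkeeping described above reduces to simple arithmetic. The sketch below (an illustration of the standard conversion, not code from the cited references) computes where Stokes and anti-Stokes bands of a given Raman shift appear in wavelength terms for a 785-nm laser:

```python
# Stokes/anti-Stokes line positions for a given excitation line and Raman shift.

def to_wavenumber(nm):
    """Convert a wavelength in nm to an absolute wavenumber in cm^-1."""
    return 1e7 / nm

laser_nm = 785.0   # excitation wavelength
shift = 100.0      # Raman shift of interest, in cm^-1

laser_wn = to_wavenumber(laser_nm)    # ~12739 cm^-1
stokes_wn = laser_wn - shift          # scattered photon loses energy to the molecule
anti_stokes_wn = laser_wn + shift     # scattered photon gains energy from the molecule

print(f"Stokes line:      {1e7 / stokes_wn:.2f} nm")       # ~791.2 nm
print(f"anti-Stokes line: {1e7 / anti_stokes_wn:.2f} nm")  # ~778.9 nm
```

This also shows why a 100 cm-1 band sits only about 6 nm from a 785-nm laser line, which is what makes Rayleigh rejection so demanding at low frequencies.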
Crystallinity and Why It Is Important for Pharmaceuticals
The term crystalline is used to describe solids in which the atoms or molecules are arranged in an ordered manner. For many pharmaceuticals, the crystalline form is more thermodynamically stable than the amorphous form, which typically results in crystalline solids being less soluble, and therefore less bioavailable, than their amorphous counterparts. The influence that crystallinity has over the solubility of a solid is what makes this an important factor when manufacturing pharmaceuticals, as those which are supposed to be fast-acting should be readily soluble in the body, while slow-acting drugs should dissolve relatively slowly. This has been shown to be the case in previous studies of bioavailability of APIs in the literature (3–5). Because of this, it is important to be able to control the crystallinity of a drug and monitor it. However, the crystallinity of a sample cannot be assumed to be simply either 100% ordered or 100% amorphous. Instead, the literature has indicated that disorder occurs along a continuous scale where a sample can gradually become more or less crystalline before becoming fully ordered or disordered (6). Crystallinity can be controlled in a number of different ways. One such method involves creating a fully crystalline pharmaceutical product and introducing disorder into the structure mechanically by milling the sample (4,5). Previous studies have also shown that amorphous pharmaceuticals can be produced through different drying methods (7,8). Even accidental adjustment of properties such as moisture level can lead to a change in crystallinity (9,10). Methods to monitor the crystallinity of a sample are therefore useful.
Established methods for monitoring crystallinity include calorimetry and X-ray diffraction (XRD) (6). Terahertz spectroscopy is arguably a more recently implemented method of analysis of crystallinity and is the only one of these three techniques described in any detail here. The first terahertz technique described is terahertz absorption spectroscopy, or far-IR spectroscopy. Terahertz radiation has a wavelength between that of IR and microwaves (0.1–1 mm, or 10–100 cm-1) and allows observation of various low energy vibrations within the sample (11–13). Vibrations can essentially be divided into two categories when studying crystalline samples: external and internal vibrational modes. Internal vibrations can be considered the local vibrations that occur within each molecule (that is, intramolecular bond vibrations). External vibrations can be considered the vibrations involving the overall lattice of the crystal, such as phonon modes or torsional vibrations (14,15). Terahertz spectroscopy tends to focus on the external vibrations. The technique has been known for more than 100 years, but two experimental challenges have inhibited its widespread use. The first of these is the presence of strong water absorption above 60 cm-1, which has complicated the analysis of wet samples (16); the second is the presence of ambient terahertz radiation that compromises spectral detection (17). The second of these issues has been solved in large part with the advent of terahertz time-domain spectroscopy, also known as terahertz pulsed spectroscopy (TPS), which generates terahertz radiation using femtosecond laser pulses at a laser wavelength of around 780 nm. For more in-depth detail about terahertz spectroscopy, refer to references 11–13. Despite its relative underutilization, this technique has been found to be an effective method for observing and even quantifying crystallinity in various papers in the literature. Model compounds such as cellulose and gelatin–amino acid mixtures have been used previously to determine the efficacy of the technique when compared to XRD (18) or to determine whether it would be an efficient method for in-line and off-line analysis of pharmaceutical crystallinity (19). In particular, the use of terahertz spectroscopy with a multivariate analysis technique—partial least squares (PLS)—permitted the quantification of the crystallinity index (CI) of cellulose (18). Not only have model systems been analyzed using the technique; drugs such as indomethacin, ketoprofen, carbamazepine, enalapril maleate, fenoprofen, irbesartan, and diclofenac acid have also been studied (11,19–23). The use of PLS also proved effective when working with pharmaceutical products, allowing not only real formulations to be distinguished from one another, but also the different crystalline forms to be distinguished. These forms were indicated by Strachan and colleagues to include polymorphic, liquid crystalline, and amorphous states (11,24). Therefore, terahertz spectroscopy has seen growing interest based on its success with the study of pharmaceuticals. Other reviews have also compared terahertz spectroscopy with other forms of vibrational spectroscopy for pharmaceutical analysis (25,26).
Low-Frequency Raman
Raman spectroscopy has been used for decades to analyze and characterize polymorphs (27,28); however, like terahertz spectroscopy, low-frequency Raman spectra have been used more recently (29–31). Mid-frequency and high-frequency regions of Raman spectra are typically used to analyze the intramolecular bonding of molecules within a sample (internal vibrations). However, low-frequency Raman spectroscopy involves the analysis of the low-frequency regions of Raman spectra, which tend to contain features attributable to external vibrations of the crystalline lattice (14). Frequencies in the same region as those observed with terahertz spectroscopy can be observed using standard continuous wave lasers. Because Raman scattering is such a rare process compared to Rayleigh scattering, the Rayleigh signal must be removed before recording spectra. Many ways of doing this have been developed. Until the 1990s it was most commonly performed by the use of a triple-grating spectrograph in conjunction with a photomultiplier tube (32,33). This method is reliable but because of the many mirrors and diffraction gratings necessary, the ultimate efficiency or throughput of the system is inevitably low. To compensate for this, high laser powers and long acquisition times are needed (30). A less commonly used method involves the stabilization of iodine gas at specific temperatures to allow for the absorption of the unwanted laser signal (34). However, these methods are cumbersome and it later became more common to use holographic notch or edge filters that reject light with a specific wavelength. These filters can be designed to reject nearly any wavelength of light to match that of the laser used while allowing the rest of the spectrum to pass through. This method allows for the use of more efficient single grating spectrographs. In conjunction with array detectors or charge coupled devices (CCDs) an entire Raman spectrum can be acquired simultaneously. However, the filters also block a portion of the Raman scattering signal in addition to the Rayleigh line. In most cases, the spectra can only be collected above about 100 cm-1 (35).
Over the past decade new filtering technologies have been developed that allow for the acquisition of Raman spectra reaching very close to the laser frequency (35) while using standard dispersive Raman systems with all of their advantages. These filters are typically based on holographic reflective volume Bragg gratings (VBGs) and can achieve frequencies as low as 5 cm-1 (34).
In most cases low-frequency Raman is measured using near-IR (NIR) diode laser sources for excitation. This is because the selectivity of VBGs decreases as wavelength decreases. Therefore, it is easier to acquire low-frequency Raman data with longer-wavelength lasers (34).
One issue that arises when performing low-frequency Raman measurements is the presence of artifacts caused by the laser itself (34). These may not be apparent when standard holographic filters are used because they appear so close to the laser line and would normally be blocked. These artifacts arise from laser instability, temperature fluctuations, amplified spontaneous emission, and plasma lines. VBG filters are therefore also used to clean up the laser line before it is focused onto the sample. In some optical designs a single VBG is used for both purposes simultaneously (34,35).
The resulting spectrum then allows detection of crystalline or amorphous forms, as the low-frequency Raman bands are associated with the low energy bond vibrations, hydrogen bonds, and phonon modes. Therefore, the overall environment and arrangement of the molecules will have a considerable effect on these vibrations. If the sample is crystalline, then the molecules will be highly ordered, and the low energy bond vibrations will show up in the spectrum as sharp peaks because the bonds will have a very similar environment. Conversely, if the sample is amorphous, there will be very little order and there will be a wide variety of molecular environments which, in turn, will typically produce only one broad band, known as the boson peak, and no other low-frequency features (36). Illustrations of crystalline and amorphous spectra are provided later in this article. This distinct difference between crystalline and amorphous materials is what suggests that low-frequency Raman spectroscopy could prove to be extremely useful in the study of pharmaceuticals.
Analysis
Overview of the Setup of a Low-Frequency Raman System
The exemplar data presented in this article were collected using two different low-frequency Raman systems, the first of which (Figure 1) is a home-built system based on a wavelength-stabilized 80-mW, 785-nm laser module (Ondax, Inc.). The laser line is initially filtered by use of two BragGrate reflective VBGs (OptiGrate Corp.) to remove amplified spontaneous emission. The sample is arranged in a 135° backscattering geometry relative to a collection lens (f/2.3). The collected light is passed through a pair of VBGs (OptiGrate Corp. BragGrate 785 nm, OD3). The collimated, filtered light is then focused by a second f/2.3 lens onto a fiber-optic cable. The cable is coupled to an LS 785 spectrograph (Princeton Instruments). A third VBG is used to further filter light imaged onto the entrance slit of the spectrograph. Detection is achieved using a CCD (Princeton Instruments thermoelectrically cooled PIXIS 100 BR CCD). A typical spectral range for this experiment is about 2400 cm-1. Both the Stokes and anti-Stokes region of the spectrum can be seen in addition to a small remaining laser line signal. Raman standards such as sulfur and 1,4-bis(2-methylstyryl)benzene (BMB) are used to calibrate the spectrometer (37).
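As an illustration of the calibration step just mentioned, the snippet below converts measured scattered-light wavelengths into Raman shifts relative to the 785-nm line. The peak wavelengths are hypothetical values chosen to land near sulfur's well-known low-frequency bands (approximately 154, 219, and 473 cm-1), not measurements from the described system.

```python
import numpy as np

# Convert scattered-light wavelengths (nm) to Raman shifts (cm^-1)
# relative to the excitation line.
laser_nm = 785.0

def raman_shift(scattered_nm, laser_nm=laser_nm):
    return 1e7 / laser_nm - 1e7 / scattered_nm

measured_nm = np.array([794.5, 798.7, 815.3])  # hypothetical sulfur peak positions
print(np.round(raman_shift(measured_nm), 1))   # ~[152.3 218.5 473.4] cm^-1
```

Comparing the computed shifts against the standard's tabulated band positions gives the wavelength-axis correction to apply to subsequent spectra.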
Figure 1: Illustration of an exemplar low-frequency Raman setup with a 785-nm laser.
The second system is based on a pre-built SureBlock XLF-CLM THz-Raman system from Ondax Inc. The laser (830 nm, 200 mW), cleanup filters, and laser line filters are all self-contained inside of the instrument but operate on the same principles as the 785-nm system. The sample is arranged in a 180° backscattering geometry relative to a 10× microscope lens. This system is then coupled via a fiber-optic cable to a Princeton Instruments SP2150i spectrograph and PIXIS 100 CCD camera. The 0.15-m spectrograph is used in conjunction with either a 1200- or 1800-groove/mm blazed diffraction grating to adjust the resolution and spectral range.
Crystalline Versus Amorphous Samples
The Raman spectra of crystalline and amorphous solids differ greatly in the low-frequency region (see Figure 2) because of the highly ordered and highly disordered molecular environments of the respective solids. However, the mid-frequency region can also be noticeably altered by the changing environment (Figure 3).
A potential issue is optical artifacts, and these may be identified by the analysis of both Stokes and anti-Stokes spectra. One advantage of the experimental setups described is that signal from the sample may be measured within minutes and it is nondestructive, thus allowing Raman spectra to be collected from a single sample using both techniques at virtually the same time. This approach permits the examination of low-frequency Raman data with 785-nm and 830-nm excitation and allows comparison with Fourier transform (FT)-Raman spectra, in which it is possible to collect meaningful data down to a Raman shift of 50 cm-1. The benefits are demonstrated in Figure 4. In this data, each technique produces consistent bands with similar Raman shifts and relative intensities. While Raman data were not collected below 50 cm-1 using the 1064-nm system, the bands at 69 and 96 cm-1 are consistent with the 785- and 830-nm data. Furthermore, the latter two methods show consistency with bands appearing around 32 and 46 cm-1 for both techniques.
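A minimal way to automate the Stokes/anti-Stokes cross-check described above is to require each candidate band to have a mirrored counterpart at the negative of its shift; genuine Raman bands appear on both sides of the laser line, while laser artifacts generally do not. The peak lists and tolerance below are illustrative assumptions:

```python
import numpy as np

# Flag candidate Stokes bands that lack a mirrored anti-Stokes counterpart.
stokes_peaks = np.array([32.0, 46.0, 69.0, 96.0, 120.0])    # cm^-1 (last one fake)
anti_stokes_peaks = np.array([-31.8, -46.3, -68.7, -96.2])  # cm^-1

tolerance = 1.0  # cm^-1; in practice set by the spectrometer resolution

for s in stokes_peaks:
    mirrored = np.any(np.abs(anti_stokes_peaks + s) < tolerance)
    print(f"{s:6.1f} cm^-1: {'real band' if mirrored else 'possible artifact'}")
```

A fuller treatment would also weight the anti-Stokes intensities by the Boltzmann factor, since anti-Stokes bands weaken rapidly with increasing shift.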
Figure 4: Comparison of the low-frequency region of three Raman spectroscopic techniques.
Case Studies
So far there have been few studies to utilize low-frequency Raman spectroscopy in the analysis of pharmaceutical crystallinity. Despite this, the literature does contain articles that demonstrate the promising applicability of the technique.
Mah and colleagues (38) studied the level of crystallinity of griseofulvin using low-frequency Raman spectroscopy with PLS analysis. In this study a batch of amorphous griseofulvin (which was checked using X-ray powder diffractometry) was prepared by melting the griseofulvin and rapidly cooling it using liquid nitrogen. Condensed water was removed by placing the sample over phosphorus pentoxide, and the glassy sample was then ground using a mortar and pestle. Calibration samples of 2%, 4%, 6%, 8%, and 10% crystallinity were then created through geometric mixing of the amorphous and crystalline samples; following this mixing, the samples were pressed into tablets. Tablets were then stored at differing temperatures (30 °C, 35 °C, and 40 °C) at 0% humidity. Low-frequency 785-nm, mid-frequency 785-nm, and FT-Raman spectroscopies were performed simultaneously on each sample. After PLS analysis, limits of detection (LOD) and limits of quantification (LOQ) were calculated. The results of this research showed that each of these three techniques was capable of quantifying crystallinity. They also showed that the FT-Raman and low-frequency Raman techniques were able to both detect and quantify crystallinity earlier than the mid-frequency 785-nm Raman technique. The respective LOD and LOQ values for FT-Raman, low-frequency Raman, and mid-frequency Raman are as follows: LOD values: 0.6%, 1.1%, and 1.5%; LOQ values: 1.8%, 3.4%, and 4.6%. The root mean squared errors of prediction (RMSEP) were also calculated and, like the LOD and LOQ values, indicated that the FT-Raman data had the lowest error, followed by the low-frequency Raman data, with mid-frequency Raman having the largest errors of the three techniques. The recrystallization tests that were performed indicated that higher temperatures produced a distinct increase in the rate of recrystallization and that each technique provided similar results (within experimental error). It is also important to note that each technique gave similar spectra (where applicable), which provides supporting evidence that the data is meaningful. Overall, the conclusion of this research was that low-frequency predictions of crystallinity are at least as accurate as the predictions made using mid-frequency Raman techniques. It is arguable that low-frequency Raman is better because its spectral features are stronger and intrinsically linked with crystallinity.
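The PLS-plus-LOD/LOQ workflow in this study can be sketched in a few lines. The example below is entirely synthetic: it simulates spectra as mixtures of a sharp "crystalline" component and a broad "amorphous" one, fits a PLS model with scikit-learn, and applies the common 3.3σ/S and 10σ/S conventions for LOD and LOQ; the actual statistical procedure of Mah and colleagues may differ.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated low-frequency spectra: sharp phonon-like peaks for the crystalline
# component, one broad boson-like band for the amorphous component.
wn = np.linspace(5, 250, 400)  # Raman shift axis, cm^-1
crystalline = sum(np.exp(-0.5 * ((wn - c) / 3) ** 2) for c in (35, 60, 90))
amorphous = np.exp(-0.5 * ((wn - 60) / 40) ** 2)

y = rng.uniform(0, 10, 200)  # % crystallinity of each simulated sample
X = (np.outer(y / 10, crystalline)
     + np.outer(1 - y / 10, amorphous)
     + rng.normal(0, 0.01, (200, wn.size)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
y_pred = pls.predict(X_te).ravel()

rmsep = np.sqrt(np.mean((y_pred - y_te) ** 2))  # root mean squared error of prediction
slope = np.polyfit(y_te, y_pred, 1)[0]          # calibration slope
print(f"RMSEP = {rmsep:.2f}%")
print(f"LOD ~ {3.3 * rmsep / slope:.2f}%, LOQ ~ {10 * rmsep / slope:.2f}%")
```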
Hédoux and colleagues (36) investigated the crystallinity of indomethacin using low-frequency Raman spectroscopy and compared the results with high-frequency data. The regions of interest were indicated to be 5–250 cm-1 and 1500–1750 cm-1. Samples of indomethacin were milled using a cryogenic mill to avoid mechanical heating of the sample, with fully amorphous samples being obtained after 25 min of milling. Methods used in this study included Raman spectroscopy, isothermal differential scanning calorimetry (DSC), and X-ray diffractometry, as well as the milling technique. The primary objective of this research was to use all of these techniques to monitor the crystallization of amorphous indomethacin to the more stable γ-state while the sample was at room temperature, well below the glass transition temperature (Tg = 43 °C). The results of this research did in fact show that low-frequency Raman spectroscopy is a very sensitive technique for identifying very small amounts of crystallinity within mostly amorphous samples. The data were supported by the well-established methods for monitoring crystallinity: XRD and DSC. This paper particularly noted the benefit of the low acquisition times associated with low-frequency Raman spectroscopy compared with the other techniques used.
Low-frequency Raman spectroscopy was also used to monitor two polymorphic forms of caffeine after grinding and pressurization of the samples (39). Pressurization was performed hydrostatically using a gasketed membrane diamond anvil cell (MDAC), while ball milling was used as the method of grinding the sample. Analysis methods used were low-frequency Raman and X-ray diffraction. Low-frequency Raman spectra revealed that, upon slight pressurization, caffeine form I transforms into a metastable state slightly different from that of form II and that a disordered (amorphous) state is achieved in both forms when pressurized above 2 GPa. In contrast, it is concluded that grinding results in the transformation of each form into the other with precise grinding times, thus also generating an intermediate form, which was found to only be observable using low-frequency Raman spectroscopy. The caffeine data, as well as the low-frequency data obtained for indomethacin were further discussed by Hédoux and colleagues (40).
Larkin and colleagues (41) used low-frequency Raman in conjunction with other techniques to characterize several different APIs and their various forms. The other techniques include FT-Raman spectroscopy, X-ray powder diffraction (XRPD), and single-crystal X-ray diffractometry. The APIs studied include carbamazepine, apixaban diacid co-crystals, theophylline, and caffeine and were prepared in various ways that are not detailed here. During this research, low-frequency Raman spectroscopy played an important role in understanding the structures while in their various forms. However, more importantly, low-frequency Raman spectroscopy produced information-rich regions below 200 cm-1 for each of the crystalline samples and noticeably broad features when the APIs were in solution.
Wang and colleagues (42) investigated the applicability of low-frequency Raman spectroscopy in the analysis of respirable dosage forms of various pharmaceuticals. The analyzed pharmaceuticals were involved in the treatment of asthma or chronic obstructive pulmonary disease (COPD) and include salmeterol xinafoate, formoterol fumarate, glycopyrronium bromide, fluticasone propionate, mometasone furoate, and salbutamol sulfate. Various formulations of amino acid excipients were also analyzed in this study. Results indicated that the use of low-frequency Raman analysis was beneficial because of the large features found in the region and allowed for reliable identification of each of the dosage forms. Not only this, it also allowed unambiguous discrimination of two similar formulations of the same bronchodilator, salbutamol (albuterol), marketed as Ventolin and Airomir.
Heyler and colleagues (43) collected both the low-frequency and fingerprint region of Raman spectra from several polymorphs of carbamazepine, an anticonvulsant and mood stabilizer. This study found that the different polymorphs of this API could be distinguished effectively using these two regions. Similarly, Al-Dulaimi and colleagues (44) demonstrated that polymorphic forms of paracetamol, flufenamic acid, and imipramine hydrochloride could be screened using low-frequency Raman and only milligram quantities of each drug. In this study, paracetamol and flufenamic acid were used as the model compounds for comparison with a previously unstudied system (imipramine hydrochloride). Features within the low-frequency Raman regions of spectra were shown to be significantly different between forms of each drug. Therefore this study also indicated that the polymorphs were highly distinguishable using the technique. Hence, like all other previously mentioned case studies, these investigations further demonstrate the utility of low-frequency Raman spectroscopy as a fast and effective method for screening pharmaceuticals for crystallinity.
Conclusions
Low-frequency Raman spectroscopy is a relatively new technique in the field of pharmaceuticals, as well as in general studies of crystallinity, despite previous studies indicating the technique's innate ability to identify crystalline materials and, in some cases, to quantify crystallinity. Arguably one of the most beneficial aspects of this technique is the relatively small amount of time necessary to prepare and analyze samples when compared with XRD or DSC. This should ensure the growing use of low-frequency Raman spectroscopy not only in pharmaceutical crystallinity studies but also in crystallinity studies of other substances.
References
J.R. Ferraro and K. Nakamoto, Introductory Raman Spectroscopy, 1st Edition (Academic Press, San Diego, 1994).
K.C. Gordon and C.M. McGoverin, Int. J. Pharm. 417, 151–162 (2011).
D. Law et al., J. Pharm. Sci. 90, 1015–1025 (2001).
G.H. Ward and R.K. Schultz, Pharm. Res. 12, 773–779 (1995).
M.D. Ticehurst et al., Int. J. Pharm. 193, 247–259 (2000).
M. Rani, R. Govindarajan, R. Surana, and R. Suryanarayanan, Pharm. Res. 23, 2356–2367 (2006).
M.J. Pikal, in Polymorphs of Pharmaceutical Solids, H.G. Brittain, Ed. (Marcel Dekker, New York, 1999), pp. 395–419.
M. Ohta and G. Buckton, Int. J. Pharm. 289, 31–38 (2005).
J. Han and R. Suryanarayanan, Pharm. Dev. Technol. 3, 587–596 (1998).
S. Debnath and R. Suryanarayanan, AAPS PharmSciTech 5, 1–11 (2004).
C.J. Strachan, T. Rades, D.A. Newnham, K.C. Gordon, M. Pepper, and P.F. Taday, Chem. Phys. Lett. 390, 20–24 (2004).
Y.C. Shen, Int. J. Pharm. 417, 48–60 (2011).
G.W. Chantry, Submillimeter Spectroscopy: A Guide to the Theoretical and Experimental Physics of the Far Infrared, 1st Edition (Academic Press Inc. Ltd., Waltham, 1971).
D. Tuschel, Spectroscopy 30(9), 18–31 (2015).
P.M.A. Sherwood, Vibrational Spectroscopy of Solids (Cambridge University Press, Cambridge, 1972).
L. Ho et al., J. Control. Release 119, 253–261 (2007).
V.P. Wallace et al., Faraday Discuss. 126, 255–263 (2004).
F.S. Vieira and C. Pasquini, Anal. Chem. 84, 3780–3786 (2014).
J. Darkwah, G. Smith, I. Ermolina, and M. Mueller-Holtz, Int. J. Pharm. 455, 357–364 (2013).
S. Kojima, T. Shibata, H. Igawa, and T. Mori, IOP Conf. Ser. Mater. Sci. Eng. 54, 1–6 (2014).
T. Shibata, T. Mori, and S. Kojima, Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 150, 207–211 (2015).
S.P. Delaney, D. Pan, M. Galella, S.X. Yin, and T.M. Korter, Cryst. Growth Des. 12, 5017–5024 (2012).
The concept of group wavenumbers is defined, and the importance of recognizing patterns in infrared (IR) spectra is discussed. Continuing our theme of investigating the IR spectra of hydrocarbons, we look at the nature of aromatic bonding and why aromatic rings have unique structures, bonding, and IR spectra. The IR spectrum of benzene is analyzed in detail as a prototype example of an aromatic hydrocarbon.
A look at the spectrum of any pure molecule will disclose that many functional groups have multiple peaks. It is this pattern of peaks that defines the presence of a specific functional group in a sample, not one specific peak. Thus, interpreting spectra is not about memorizing peak positions, but is instead an exercise in pattern recognition. The human brain has evolved to be great at pattern recognition. Computers are good at many things except pattern recognition, hence the need to use our eyeballs and brains to interpret infrared (IR) spectra. A computer’s inability to interpret spectra is part of why there is a need for a column series like this one. Since humans are good pattern recognizers, I believe most people can learn to interpret IR spectra. My approach going forward will be to emphasize the pattern of peaks that defines the presence of a functional group in a sample rather than throwing hundreds of peak positions at you.
The peaks that define a functional group are what I will call its group wavenumbers. Some people call these "group frequencies," but this is a misnomer: the x-axes of IR spectra are plotted in wavenumber, not frequency (the distinction is made concrete in the short sketch after the list below). A good group wavenumber peak has three useful properties:
It will be intense so it is easy to see.
It will appear in a unique wavenumber region where no other functional groups absorb.
It will fall in a narrow wavenumber range regardless of what molecule the functional group appears in.
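To make the frequency/wavenumber distinction concrete: wavenumber is the reciprocal of wavelength, and it converts to a true frequency by multiplying by the speed of light. The sketch below is my own illustration, not from the original column:

```python
# Wavenumber (cm^-1) and frequency (Hz) are proportional but not identical.
c = 2.998e10  # speed of light in cm/s, so c * (cm^-1) gives Hz

def wavenumber_to_frequency(wn_cm1):
    """nu = c * nu_tilde."""
    return c * wn_cm1

for wn in (700, 1478, 3000):  # typical IR band positions, cm^-1
    print(f"{wn} cm^-1  ->  {wavenumber_to_frequency(wn):.3e} Hz")
# 3000 cm^-1 corresponds to ~9e13 Hz, i.e., roughly 90 THz.
```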
We have talked about group wavenumbers and peak patterns in previous articles without realizing it—for example, the methyl and methylene C-H stretches that were introduced previously (1). Now, we have given these peak patterns a proper name.
To be clear, not all of the peaks from a given functional group will be useful group wavenumbers. For example, a peak may be too weak to be seen reliably, may appear in a region where lots of other peaks appear, or may move around a lot from molecule to molecule. Many of the upcoming columns will be devoted to the useful group wavenumbers of economically important functional groups.
One final thought: I compare interpreting IR spectra to playing a piano. You can't just walk up to a Steinway and play a Beethoven sonata; you have to practice first. Similarly, you can't just walk up to a spectrum and pull information out of it; you have to practice first. The purpose of these columns is to give you the knowledge you need to interpret spectra, and the problem spectra give you the opportunity to practice what you have learned.
Introduction to Aromatic Molecules
To be able to interpret IR spectra, one must have a nodding familiarity with the nomenclature and structures of organic chemistry. In today's world many people interpreting spectra do not have this background, so I have discussed, and will continue to discuss, the structure and nomenclature of the functional groups whose spectra we examine.
Aromatic molecules were originally so named because some of them smell nice. However, if you have ever smelled pyridine, you know that not all aromatic molecules smell nice. Rather, it is the bonding in aromatic molecules that has been found to be unique.
The prototypical aromatic molecule is benzene, C6H6, whose structure is represented in Figure 1.
Figure 1: The chemical structure of the benzene molecule, C6H6.
The drawing in Figure 1 is that of a six-membered ring, or hexagon. A carbon atom is located at each vertex of the hexagon, and a hydrogen atom is attached to each carbon, although these are not drawn in. The circle inside the ring represents the delocalization of the electrons, which is illustrated in Figure 2.
Figure 2: Top: The P orbitals on each of the six carbon atoms in benzene that contribute an electron to the ring. Bottom: the collection of delocalized P orbital electrons forming a cloud of electron density above and below the benzene ring.
Each of the carbon atoms in a benzene ring has a P orbital containing a lone electron that is perpendicular to the plane of the ring, as seen in the top of Figure 2. There is enough orbital overlap that these electrons, rather than being confined between two carbon atoms as might be expected, instead delocalize and form clouds of electron density above and below the plane of the ring. This type of bonding is called aromatic bonding (2), and a ring that has aromatic bonding is called an aromatic ring. It is aromatic bonding that gives aromatic rings their unique structures, chemistry, and IR spectra. Benzene is simply a commonly found aromatic ring. Other types of aromatic molecules include polycyclic aromatic hydrocarbons (PAHs), such as naphthalene, that contain two or more benzene rings that are fused (which means adjacent rings share two carbon atoms), and heterocyclic aromatic rings, which are aromatic rings that contain a noncarbon atom such as nitrogen. Pyridine is an example of one of these. The interpretation of the IR spectra of these latter aromatic molecules will be discussed in future articles.
Figure 3: The IR spectrum of benzene, measured as a capillary thin film between two KBr windows.
……
The area labeled B in Figure 3 refers to a region in aromatic ring spectra called the summation bands. More information on these peaks will come in a later column. The label C in Figure 3 at 1478 cm-1 is an example of a ring mode peak. A ring mode is a vibration that involves the stretching and contracting of the carbon-carbon bonds in an aromatic ring. These are typically sharp, but vary in number and intensity depending upon the molecule. They fall between 1620 and 1400 cm-1 as stated in Table I. The region labeled D in Figure 3 is where aromatic ring C-H in-plane bending peaks fall. These peaks are generally medium to weak in intensity, show up in a very busy spectral region, and hence are not useful group wavenumbers.
The most intense peak in Figure 3, labeled E, is an out-of-plane C-H bend. Since aromatic rings are planar, all the hydrogens are in the plane of the molecule. When these hydrogens bend above and below the plane of the molecule they are undergoing a C-H out-of-plane bend, which is sometimes called a wag because of the vibration’s resemblance to the wagging of a dog’s tail. This vibration gives rise to a large peak that typically falls between 1000 cm-1 and 700 cm-1. In the spectrum of benzene, this peak falls at 674 cm-1 because the molecule is unsubstituted.
Conclusion
To review, then: the useful group wavenumbers for benzene rings are one or more C-H stretches between 3100 and 3000 cm-1, one or more sharp ring modes between 1620 and 1400 cm-1, and an intense C-H out-of-plane bend between 1000 and 700 cm-1. Most of the time when a benzene ring is encountered it contains one or more substituents. The IR spectra of mono- and disubstituted benzene rings will be the topic of the next installment of this column.
Super-Resolution Fluorescence Microscopy: Where To Go Now? Bernd Rieger, Quantitative Imaging Group Leader, Delft University of Technology
09:30
Keynote Presentation
From Molecules To Whole Organs Francesco Pavone, Principal Investigator, LENS, University of Florence
Some examples of correlative microscopies, combining linear and nonlinear techniques, will be described. Particular attention will be devoted to Alzheimer's disease and to neural plasticity after damage as neurobiological applications.
10:15
Super-Resolution Imaging by dSTORM Markus Sauer, Professor, Julius-Maximilians-Universität Würzburg
10:45
Coffee and Networking in Exhibition Hall
11:15
Correlated Fluorescence And X-Ray Tomography: Finding Molecules In Cellular CT Scans Carolyn Larabell, Professor, University of California San Francisco
11:45
Integrating Advanced Fluorescence Microscopy Techniques Reveals Nanoscale Architecture And Mesoscale Dynamics Of Cytoskeletal Structures Promoting Cell Migration And Invasion Alessandra Cambi, Assistant Professor, University of Nijmegen
This lecture will describe our efforts to exploit and integrate a variety of advanced microscopy techniques to unravel the nanoscale structural and dynamic complexity of individual podosomes as well as formation, architecture and function of mesoscale podosome clusters.
12:15
Multi-Photon-Like Fluorescence Microscopy Using Two-Step Imaging Probes George Patterson, Investigator, National Institutes of Health
12:45
Lunch & Networking in Exhibition Hall
14:15
Technology Spotlight
14:30
3D Single Particle Tracking: Following Mitochondria in Zebrafish Embryos Don Lamb, Professor, Ludwig-Maximilians-University
15:00
Visualizing Mechano-Biology: Quantitative Bioimaging Tools To Study The Impact Of Mechanical Stress On Cell Adhesion And Signalling Bernhard Wehrle-Haller, Group Leader, University of Geneva
15:30
Superresolution Imaging Of Clathrin-Mediated Endocytosis In Yeast Jonas Ries, Group Leader, EMBL Heidelberg
We use single-molecule localization microscopy to investigate the dynamic structural organization of the yeast endocytic machinery. We discovered a striking ring-shaped pre-patterning of the actin nucleation zone, which is key for efficient force generation and membrane invagination.
16:00
Coffee and Networking in Exhibition Hall
16:30
Optical Imaging of Molecular Mechanisms of Disease Clemens Kaminski, Professor, University of Cambridge
17:00
3-D Optical Tomography For Ex Vivo And In Vivo Imaging James McGinty, Professor, Imperial College London
17:30
End Of Day One
Wednesday, 15 June 2016
09:00
Imaging Gene Regulation in Living Cells at the Single Molecule Level James Zhe Liu, Group Leader, Janelia Research Campus, Howard Hughes Medical Institute
09:30
Keynote Presentation
Super-Resolution Microscopy With DNA Molecules Ralf Jungmann, Group Leader, Max Planck Institute of Biochemistry
10:15
A Revolutionary Miniaturised Instrument For Single-Molecule Localization Microscopy And FRET Achillefs Kapanidis, Professor, University of Oxford
10:45
Coffee and Networking in Exhibition Hall
11:15
Democratising Live-Cell High-Speed Super-Resolution Microscopy Ricardo Henriques, Group Leader, University College London
Information In Localisation Microscopy Susan Cox, Professor, Kings College London
12:45
Lunch & Networking in Exhibition Hall
14:15
Technology Spotlight
14:30
High-Content Imaging Approaches For Drug Discovery For Neglected Tropical Diseases Manu De Rycker, Team Leader, University of Dundee
The development of new drugs for intracellular parasitic diseases is hampered by difficulties in developing relevant high-throughput cell-based assays. Here we present how we have used image-based high-content screening approaches to address some of these issues.
15:00
High Resolution In Vivo Histology: Clinical in vivo Subcellular Imaging using Femtosecond Laser Multiphoton/CARS Tomography Karsten König, Professor, Saarland University
We report on a certified, medical, transportable, multipurpose nonlinear microscopic imaging system based on a femtosecond excitation source and a photonic crystal fiber with multiple miniaturized time-correlated single-photon counting detectors.
15:30
Coffee and Networking in Exhibition Hall
16:00
Lateral Organization Of Plasma Membrane Constituents At The Nanoscale Gerhard Schutz, Professor, Vienna University of Technology
It is of interest how proteins are spatially distributed over the membrane, and whether they conjoin and move as part of multi-molecular complexes. In my lecture, I will discuss methods for approaching the two questions, and provide biological examples.
16:30
Correlative Light And Electron Microscopy In Structural Cell Biology Wanda Kukulski, Group Leader, University of Cambridge
Human Factor Engineering: New Regulations Impact Drug Delivery, Device Design And Human Interaction
Curator: Stephen J. Williams, Ph.D.
The Institute of Medicine report To Err Is Human (Kohn, Corrigan, & Donaldson, 1999) brought medical errors to the forefront of healthcare and the American public's attention, estimating that between 44,000 and 98,000 Americans die each year as a result of medical errors.
An obstetric nurse connects a bag of pain medication intended for an epidural catheter to the mother's intravenous (IV) line, resulting in a fatal cardiac arrest. Newborns in a neonatal intensive care unit are given full-dose heparin instead of low-dose flushes, leading to three deaths from intracranial bleeding. An elderly man experiences cardiac arrest while hospitalized, but when the code blue team arrives, they are unable to administer a potentially life-saving shock because the defibrillator pads and the defibrillator itself cannot be physically connected.
Human factors engineering is the discipline that attempts to identify and address these issues. It is the discipline that takes into account human strengths and limitations in the design of interactive systems that involve people, tools and technology, and work environments to ensure safety, effectiveness, and ease of use.
Several drug delivery devices are on a draft list of medical technologies that will be subject to a final guidance calling for the application of human factors and usability engineering to medical devices. The guidance calls for validation testing of devices, with data collected through interviews, observation, knowledge testing, and, in some cases, usability testing of a device under actual conditions of use. The drug delivery devices on the list include anesthesia machines, autoinjectors, dialysis systems, infusion pumps (including implanted ones), hemodialysis systems, insulin pumps, and negative pressure wound therapy devices intended for home use. Studies have consistently shown that patients struggle to properly use drug delivery devices such as autoinjectors, which are becoming increasingly prevalent due to the rise of self-administered injectable biologics. The trend toward home healthcare is another driver of usability issues on the patient side, while professionals sometimes struggle with unclear interfaces or instructions for use.
Human-factors engineering, also called ergonomics or human engineering, is the science dealing with the application of information on physical and psychological characteristics to the design of devices and systems for human use (for more detail, see Britannica.com).
The term human-factors engineering is used to designate equally a body of knowledge, a process, and a profession. As a body of knowledge, human-factors engineering is a collection of data and principles about human characteristics, capabilities, and limitations in relation to machines, jobs, and environments. As a process, it refers to the design of machines, machine systems, work methods, and environments to take into account the safety, comfort, and productiveness of human users and operators. As a profession, human-factors engineering includes a range of scientists and engineers from several disciplines that are concerned with individuals and small groups at work.
The terms human-factors engineering and human engineering are used interchangeably on the North American continent. In Europe, Japan, and most of the rest of the world the prevalent term is ergonomics, a word made up of the Greek words, ergon, meaning “work,” and nomos, meaning “law.” Despite minor differences in emphasis, the terms human-factors engineering and ergonomics may be considered synonymous. Human factors and human engineering were used in the 1920s and ’30s to refer to problems of human relations in industry, an older connotation that has gradually dropped out of use. Some small specialized groups prefer such labels as bioastronautics, biodynamics, bioengineering, and manned-systems technology; these represent special emphases whose differences are much smaller than the similarities in their aims and goals.
The data and principles of human-factors engineering are concerned with human performance, behaviour, and training in man-machine systems; the design and development of man-machine systems; and systems-related biological or medical research. Because of its broad scope, human-factors engineering draws upon parts of such social or physiological sciences as anatomy, anthropometry, applied physiology, environmental medicine, psychology, sociology, and toxicology, as well as parts of engineering, industrial design, and operations research.
Two general premises characterize the approach of the human-factors engineer in practical design work. The first is that the engineer must solve the problems of integrating humans into machine systems by rigorous scientific methods and not rely on logic, intuition, or common sense. In the past the typical engineer tended either to ignore the complex and unpredictable nature of human behaviour or to deal with it summarily with educated guesses. Human-factors engineers have tried to show that with appropriate techniques it is possible to identify man-machine mismatches and that it is usually possible to find workable solutions to these mismatches through the use of methods developed in the behavioral sciences.
The second important premise of the human-factors approach is that, typically, design decisions cannot be made without a great deal of trial and error. There are only a few thousand human-factors engineers, while the millions of engineers in the world are designing novel machines, machine systems, and environments much faster than behavioral scientists can accumulate data on how humans will respond to them. More problems, therefore, are created than there are ready answers for them, and the human-factors specialist is almost invariably forced to resort to trying things out with various degrees of rigour to find solutions. Thus, while human-factors engineering aims at substituting scientific method for guesswork, its specific techniques are usually empirical rather than theoretical.
The Man-Machine Model: Human-factors engineers regard humans as an element in systems
The simple man-machine model provides a convenient way for organizing some of the major concerns of human engineering: the selection and design of machine displays and controls; the layout and design of workplaces; design for maintainability; and the work environment.
Components of the Man-Machine Model
The human operator first has to sense what is referred to as a machine display, a signal that tells him something about the condition or the functioning of the machine.
Having sensed the display, the operator interprets it, perhaps performs some computation, and reaches a decision. In so doing, the worker may use a number of human abilities. Psychologists commonly refer to these activities as higher mental functions; human-factors engineers generally refer to them as information processing.
Having reached a decision, the human operator normally takes some action. This action is usually exercised on some kind of a control—a pushbutton, lever, crank, pedal, switch, or handle.
Action upon one or more of these controls exerts an influence on the machine and on its output, which in turn changes the display, so that the cycle is continuously repeated.
Driving an automobile is a familiar example of a simple man-machine system. In driving, the operator receives inputs from outside the vehicle (sounds and visual cues from traffic, obstructions, and signals) and from displays inside the vehicle (such as the speedometer, fuel indicator, and temperature gauge). The driver continually evaluates this information, decides on courses of action, and translates those decisions into actions upon the vehicle’s controls—principally the accelerator, steering wheel, and brake. Finally, the driver is influenced by such environmental factors as noise, fumes, and temperature.
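To make the cycle concrete, here is a minimal Python sketch of the man-machine loop described above, using the driving example; the function names, the 5-mph speed adjustments, and the target speed are all invented for illustration.

```python
# Minimal sketch of the man-machine feedback cycle: sense the display,
# process the information, act on a control, and let the machine's new
# output update the display. All names and numbers are illustrative.

def read_display(machine_state):
    """Sense: the machine display reports the current speed."""
    return machine_state["speed"]

def decide(displayed_speed, target_speed):
    """Information processing: compare the display to a goal, choose an action."""
    if displayed_speed < target_speed:
        return "accelerate"
    if displayed_speed > target_speed:
        return "brake"
    return "hold"

def actuate(machine_state, action):
    """Control: the chosen action influences the machine and its output."""
    if action == "accelerate":
        machine_state["speed"] += 5
    elif action == "brake":
        machine_state["speed"] -= 5
    return machine_state

machine = {"speed": 40}
for step in range(5):  # the cycle repeats continuously
    shown = read_display(machine)
    action = decide(shown, target_speed=55)
    machine = actuate(machine, action)
    print(f"step {step}: display={shown} mph, action={action}")
```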
How BD Uses Human Factors to Design Drug-Delivery Systems
Posted in Design Services by Jamie Hartford on August 30, 2013
Human factors testing has been vital to the success of the company’s BD Physioject Disposable Autoinjector.
The BD Physioject Disposable Autoinjector offers users a 360° view of the drug injection process and features a one-touch injection button.
Improving the administration and compliance of drug delivery is a common lifecycle strategy employed to enhance short- and long-term product adoption in the biotechnology and pharmaceutical industries. With increased competition in the industry and heightened regulatory requirements for end-user safety, significant advances in product improvements have been achieved in the injectable market, for both healthcare professionals and patients. Injection devices that facilitate preparation, ease administration, and ensure safety are increasingly prevalent in the marketplace.
Traditionally, human factors engineering addresses individualized aspects of development for each self-injection device, including the following:
Task analysis and design.
Device evaluation and usability.
Patient acceptance, compliance, and concurrence.
Anticipated training and education requirements.
System resilience and failure.
To achieve this, human factors scientists and engineers study the disease, patient, and desired outcome across multiple domains, including cognitive and organizational psychology, industrial and systems engineering, human performance, and economic theory—including formative usability testing that starts with the exploratory stage of the device and continues through all stages of conceptual design. Validation testing performed with real users is conducted as the final stage of the process.
To design the BD Physioject Disposable Autoinjector System, BD conducted multiple human factors studies and clinical studies to assess all aspects of performance, safety, efficiency, patient acceptance, and ease of use, including pain perception compared with prefilled syringes. The studies provided essential insights regarding the overall user-product interface and highlighted that patients had a strong and positive response to both the product design and the user experience.
As a result of human factors testing, the BD Physioject Disposable Autoinjector System provides multiple features designed to aid patient safety and ease of use, allowing the patient to control the start of the injection once the autoinjector is placed on the skin and the cap is removed. Specific design features included in the BD Physioject Disposable Autoinjector System include the following:
Ergonomic design that is easy to handle and use, especially in patients with limited dexterity.
A 360° view of the drug and injection process, allowing the patient to confirm full dose delivery.
A simple, one-touch injection button for activation.
A hidden needle before and during injection to reduce needle-stick anxiety.
A protected needle before and after injection to reduce the risk of needle stick injury.
YouTube VIDEO: Integrating Human Factors Engineering (HFE) into Drug Delivery
Notes:
The following is a SlideShare presentation on Parenteral Drug Delivery Issues in the Future.
The Dangers of Medical Devices
The FDA receives on average 100,000 medical device incident reports per year, and more than a third involve user error.
In an FDA recall study, 44% of medical device recalls were found to be due to design problems, and user error is often linked to the poor design of a product.
Drug developers need to take safe drug dosage into consideration, and this consideration requires the application of thorough processes for Risk Management and Human Factors Engineering (HFE).
Although unintended, medical devices can sometimes harm patients or the people administering the healthcare. The potential harm arises from two main sources:
failure of the device and
actions of the user or user-related errors. A number of factors can lead to these user-induced errors, including the facts that medical devices are often used under stressful conditions and that users may think differently than the device designer.
Instead of blaming test participants for use errors, look more carefully at your device’s design.
A great post on the reasons typical design flaws creep into medical devices and where a company should integrate fixes in product design. Posted in Design Services by Jamie Hartford on July 8, 2013.
YouTube VIDEO: Integrating Human Factors Engineering into Medical Devices
Notes:
Regulatory Considerations
Unlike other medication dosage forms, combination products require user interaction
Combination products are unique in that their safety profile and product efficacy depend on user interaction
Human Factors Review: FDA Outlines Highest Priority Devices
The US Food and Drug Administration (FDA) on Tuesday released new draft guidance to inform medical device manufacturers which device types should have human factors data included in premarket submissions, as well as final guidance, first drafted in 2011, on applying human factors and usability engineering to medical devices.
FDA said it believes these device types have “clear potential for serious harm resulting from use error and that review of human factors data in premarket submissions will help FDA evaluate the safety and effectiveness and substantial equivalence of these devices.”
Manufacturers should provide FDA with a report that summarizes the human factors or usability engineering processes they have followed, including any preliminary analyses and evaluations and human factors validation testing, results and conclusions, FDA says.
The list was based on knowledge obtained through Medical Device Reports (MDRs) and recall data, and includes:
Auto injectors (when CDRH is lead Center; e.g., KZE, KZH, NSC)
Automated external defibrillators
Duodenoscopes (on the reprocessing; e.g., FDT) with elevator channels
Gastroenterology-urology endoscopic ultrasound systems (on the reprocessing; e.g., ODG) with elevator channels
Hemodialysis and peritoneal dialysis systems (e.g., FKP, FKT, FKX, KDI, KPF, ODX, ONW)
Implanted infusion pumps (e.g., LKK, MDY)
Infusion pumps (e.g., FRN, LZH, MEA, MRZ)
Insulin delivery systems (e.g., LZG, OPP)
Negative-pressure wound therapy (e.g., OKO, OMP) intended for home use
Robotic catheter manipulation systems (e.g., DXX)
Robotic surgery devices (e.g., NAY)
Ventilators (e.g., CBK, NOU, ONZ)
Ventricular assist devices (e.g., DSQ, PCK)
Final Guidance
In addition to the draft list, FDA finalized guidance from 2011 on applying human factors and usability engineering to medical devices.
The agency said it received over 600 comments on the draft guidance, which deals mostly with design and user interface. The comments "were generally supportive of the draft guidance document, but requested clarification in a number of areas. The most frequent types of comments requested revisions to the language or structure of the document, or clarification on risk mitigation and human factors testing methods, user populations for testing, training of test participants, determining the appropriate sample size in human factors testing, reporting of testing results in premarket submissions, and collecting human factors data as part of a clinical study."
In response to these comments, FDA said it revised the guidance, which supersedes guidance from 2000 entitled “Medical Device Use-Safety: Incorporating Human Factors Engineering into Risk Management,” to clarify “the points identified and restructured the information for better readability and comprehension.”
Details
The goal of the guidance, according to FDA, is to ensure that the device user interface has been designed such that use errors which could cause harm or degrade medical treatment are either eliminated or reduced to the extent possible.
FDA said the most effective strategies to employ during device design to reduce or eliminate use-related hazards involve modifications to the device user interface, which should be logical and intuitive.
In its conclusion, FDA also outlined the ways that device manufacturers were able to save money through the use of human factors engineering (HFE) and usability engineering (UE).
Agilent was created as a spin-off from Hewlett-Packard Company in 1999.
Agilent Technologies Inc. is engaged in the life sciences, diagnostics and applied chemical markets. The Company provides application focused solutions that include instruments, software, services and consumables for the entire laboratory workflow. The Company has three business segments:
the life sciences and applied markets business,
the diagnostics and genomics business, and
the Agilent Cross Lab business
The Company’s life sciences and applied markets business segment brings together the Company’s analytical laboratory instrumentation and informatics.
The Company’s diagnostics and genomics business segment consists of three businesses: the Dako business, the genomics business and the nucleic acid solutions business.
The Company’s Agilent Cross Lab business segment combines its analytical laboratory services and consumables business
CARPINTERIA, Calif.–(BUSINESS WIRE)–Dako, an Agilent Technologies company and a worldwide provider of cancer diagnostics, today announced the U.S. Food and Drug Administration has approved a new test that can identify PD-L1 expression levels on the surface of non-small cell lung cancer tumor cells and provide information on the survival benefit with OPDIVO® (nivolumab) for patients with non-squamous NSCLC.
Argentina | Australia | Austria | Brazil | Canada | Chile | China | Colombia | Czech Republic | Denmark | Ecuador | Finland | Germany | Hong Kong | Israel | Italy | Japan | Korea | Malaysia | Mexico | New Zealand | Norway | Paraguay | Peru | Philippines | Poland | Romania | Singapore | South Africa | Spain | Sweden | Switzerland | Taiwan ROC | Thailand | Turkey | United Kingdom | Uruguay | Vietnam
Gen9 is building on advances in synthetic biology to power a scalable fabrication capability that will significantly increase the world’s capacity to produce DNA content. The privately held company’s next-generation gene synthesis technology allows for the high-throughput, automated production of DNA constructs at lower cost and higher accuracy than previous methods on the market. Founded by world leaders in synthetic biology, Gen9 aims to ensure the constructive application of synthetic biology in industries ranging from enzyme and chemical production to pharmaceuticals and biofuels.
SERVICES
Synthetic Biology
Gene Synthesis Services
Variant Libraries
Gene Sequence Design Services
INVESTORS
Agilent Technologies : Private Equity
CAMBRIDGE, Mass. and SANTA CLARA, Calif. — April 24, 2013 — Gen9 Receives $21 Million Strategic Investment from Agilent Technologies
GenScript is the largest gene synthesis provider in the USA
GenScript Corporation, a biology contract research organization, provides biological research and drug discovery services to pharmaceutical companies, biotech firms, and research institutions in the United States, Europe, and Japan. It offers bio-reagent, custom molecular biology, custom peptide, protein production, custom antibody production, drug candidates testing, assay development and screening, lead optimization, antibody drug development, gene synthesis, and assay-ready cell line production services.
The company also offers molecular biology, peptide, protein, immunoassay, chemicals, and cell biology products. It offers its products through distributors in Tokyo, Japan; and Seoul, Korea. GenScript Corporation has a strategic partnership with Immunologix, Inc. The company was founded in 2002 and is based in Piscataway, New Jersey. It has subsidiaries in France, Japan, and China.
Note: As of October 24, 2011, Immunologix, Inc. was acquired by Intrexon Corporation. Immunologix, Inc. develops and produces antibody-based therapeutics for various biological targets. It produces human monoclonal antibodies against viral, bacterial, and tumor antigens, as well as human autoantigens.
Intrexon Corporation, founded in 1998, is a leader in synthetic biology focused on collaborating with companies in Health, Food, Energy, Environment and Consumer sectors to create biologically based products that improve quality of life and the health of the planet.
PRODUCTS AND SERVICES
Gene synthesis
Antibody services
Protein Services
Peptide services
INVESTORS
Note: The Balloch Group ('TBG') was established in 2001 by Howard Balloch (Canada's ambassador to China from 1996 to 2001). TBG has since grown from a market-entry consultancy working with North American clients in China to a leading advisory and merchant banking firm serving both domestic Chinese companies and multinational corporations. TBG was ranked as the number one boutique investment bank in China by ChinaVenture in 2008.
Monica Heger: SAN FRANCISCO (GenomeWeb) – Illumina today announced two new next-generation sequencing platforms, a targeted sequencing system called MiniSeq and a semiconductor sequencer that is still under development.
Illumina disclosed the initiatives during a presentation at the JP Morgan Healthcare conference held here today. During the presentation, Illumina CEO Jay Flatley also announced a new genotyping array called Infinium XT; a partnership with Bio-Rad to develop a single-cell sequencing workflow; preliminary estimates of its fourth-quarter 2015 revenues; and an update on existing products. The presentation followed the company’s announcement on Sunday that it has launched a new company called Grail to develop a next-generation sequencing test for early cancer detection from patient blood samples.
The MiniSeq system, which is based on Illumina’s current sequencing technology, will begin shipping early this quarter and has a list price of $49,500. It can perform a variety of targeted DNA and RNA applications, from single-gene to pathway sequencing, and promises “all-in” prices, including library prep and sequencing, of $200 to $300 per sample, Flatley said during the JP Morgan presentation.
Integrated DNA Technologies, Inc. (IDT), the global leader in nucleic acid synthesis, serving all areas of life sciences research and development, offers products for a broad range of genomics applications. IDT’s primary business is the production of custom, synthetic nucleic acids for molecular biology applications, including qPCR, sequencing, synthetic biology, and functional genomics. The company manufactures and ships an average of 44,000 custom nucleic acids per day to more than 82,000 customers worldwide. For more information, visit idtdna.com.
Dyes | GMP for Molecular Diagnostics | Large-Scale Oligo Synthesis
Note: Skokie, IL – December 1, 2015. Integrated DNA Technologies Inc. ("IDT"), the global leader in custom nucleic acid synthesis, has entered into a definitive agreement to acquire the oligonucleotide synthesis business of AITbiotech Pte. Ltd. in Singapore ("AITbiotech"). With this acquisition, IDT expands its customer base across Southeast Asia, making it possible for these additional customers to now have access to its broad range of products for genomic applications. AITbiotech will continue operations in its other core business areas.
With over 20 years of experience in oligonucleotide development and production, and over 1000 sequences manufactured, Avecia has played an integral role in the advancing oligo therapeutic market. Our mission is to continue to build value for our customers, as they progress through drug development into commercialization. And as a member of the Nitto Denko Corporation (nitto.com), Avecia is committed to the future of the oligonucleotide market. We are driven by innovative ideas and flexible solutions, designed to provide our customers with the best in service, quality, and technology.
OriGene Technologies, Inc. develops, manufactures, and sells genome wide research and diagnostic products for pharmaceutical, biotechnology, and academic research applications. The company offers cDNA clones, including TrueORF cDNA, viral ORF, destination vectors, TrueClones (human), TrueClones (mouse), organelle marker plasmids, MicroRNA tools, mutant and variant clones, plasmid purification kits, transfection reagents, and gene synthesis service; and HuSH shRNA, siRNA, miRNA, qPCR reagents, plasmid purification products, transfection reagents, PolyA+ and total RNA products, first-strand cDNA synthesis, and CRISPR/Cas9 genome products. It also provides proteins and lysates, such as purified human proteins, over-expression cell lysates, mass spectrometry standard proteins, and protein purification reagents; UltraMAB IHC antibodies, TrueMAB primary antibodies, anti-tag and fluorescent proteins, ELISA antibodies, luminex antibodies, secondary antibodies, and controls and others; and anatomic pathology products, including IHC antibodies, detection systems, and IHC accessories.
The company offers luminex and ELISA antibody pairs, autoantibody profiling arrays, ELISA kits, cell assay kits, assay reagents, custom development, and fluorogenic cell assays; TissueFocus search tools; tissue sections; tissue microarrays, cancer protein lysate arrays, TissueScan cDNA arrays, tissue blocks, and quality control products, as well as tissue RNA, DNA, and protein lysates; and lab essentials. Its research areas include cancer biomarker research, RNAi, pathology IHC, stem cell research, ion channels, and protein kinase products. The company provides gene synthesis and molecular biology services, genome editing, custom cloning, custom shRNA, purified protein, monoclonal antibody development, and assay development. It sells its products through distributors worldwide, as well as online. OriGene Technologies, Inc. was incorporated in 1995 and is based in Rockville, Maryland.
St. Louis, MO – November 18, 2015 – Merck KGaA, Darmstadt, Germany, Completes Sigma-Aldrich Acquisition
Merck KGaA today announced the completion of its $17 billion acquisition of Sigma-Aldrich, creating one of the leaders in the $130 billion global industry to help solve the toughest problems in life science.
Press Release: 18-Nov-2015
Letter to our Life Science Customers from Dr. Udit Batra
The life science business of Merck KGaA, Darmstadt, Germany brings together the world-class products and services, innovative capabilities and exceptional talent of EMD Millipore and Sigma-Aldrich to create a global leader in the life science industry.
“Everything we do starts with our shared purpose – to solve the toughest problems in life science by collaborating with the global scientific community.
This combination is built on complementary strengths, which will enable us to serve you even better as one organization than either company could alone.
This means providing a broader portfolio with a catalog of more than 300,000 products, including many of the most respected brands in the industry, greater geographic reach, and an unmatched combination of industry-leading capabilities.”
Thermo Fisher Scientific Inc. is a provider of analytical instruments, equipment, reagents and consumables, software and services for research, manufacturing, analysis, discovery and diagnostics. The company operates through four segments: Life Sciences Solutions, which provides reagents, instruments and consumables used in biological and medical research, discovery and production of new drugs and vaccines as well as diagnosis of disease; Analytical Instruments, which provides instruments, consumables, software and services that are used in the laboratory; Specialty Diagnostics, which offers diagnostic test kits, reagents, culture media, instruments and associated products; and Laboratory Products and Services, which offers self-manufactured and sourced products for the laboratory.
WALTHAM, Mass. & SANTA CLARA, Calif.–(BUSINESS WIRE)–Jan. 8, 2016– Thermo Fisher Scientific Inc. (NYSE:TMO), the world leader in serving science, and Affymetrix Inc. (NASDAQ:AFFX), a leading provider of cellular and genetic analysis products, today announced that their boards of directors have unanimously approved Thermo Fisher’s acquisition of Affymetrix for $14.00 per share in cash. The transaction represents a purchase price of approximately $1.3 billion.
Oxazolidinones represent a novel chemical class of synthetic antimicrobial agents. Linezolid represents the first member of this class to be used clinically. Oxazolidinones display activity against important Gram-positive human and veterinary pathogens including methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant enterococci (VRE) and penicillin-resistant Streptococcus pneumoniae (PRSP). The oxazolidinones also show activity against Gram-negative aerobic bacteria, and Gram-positive and Gram-negative anaerobes (Diekema DJ et al., Lancet 2001; 358: 1975-82).
Various oxazolidinones and their methods of preparation are disclosed in the literature. International Publication No. WO 1995/25106 discloses substituted piperidino phenyloxazolidinones and International Publication No. WO 1996/13502 discloses phenyloxazolidinones having a multisubstituted azetidinyl or pyrrolidinyl moiety. US Patent Publication No. 2004/0063954, International Publication Nos. WO 2004/007489 and WO 2004/007488 disclose piperidinyl phenyl oxazolidinones for antimicrobial use.
Pyrrolidinyl/piperidinyl phenyl oxazolidinone antibacterial agents are also described in Kim HY et al., Bioorg. & Med. Chem. Lett. (2003), 13:2227-2230. International Publication No. WO 1996/35691 discloses spirocyclic and bicyclic diazinyl and carbazinyl oxazolidinone derivatives. Diazepeno phenyloxazolidinone derivatives are disclosed in International Publication No. WO 1999/24428. International Publication No. WO 2002/06278 discloses substituted aminopiperidino phenyloxazolidinone derivatives.
Various other methods of preparation of oxazolidinones are reported in US Patent No. 7087784, US Patent No. 6740754, US Patent No. 4948801, US Patent No. 3654298, US Patent No. 5837870, Canadian Patent No. 681830, J. Med. Chem., 32, 1673 (1989), Tetrahedron, 45, 1323 (1989), J. Med. Chem., 33, 2569 (1990), Tetrahedron Letters, 37, 7937-40 (1996) and Organic Process Research and Development, 11, 739-741 (2007).
Indian Patent Application No. 2534/MUM/2007 discloses a process for the preparation of substituted piperidino phenyloxazolidinones. International Publication No. WO 2012/059823 further discloses the process for the preparation of phosphoric acid mono-(1-{4-[(S)-5-(acetylaminomethyl)-2-oxo-oxazolidin-3-yl]-2,6-difluorophenyl}-4-methoxymethyl-piperidin-4-yl) ester.
US Patent No. 8217058 discloses (5S)-N-{3-[3,5-difluoro-4-(4-hydroxy-4-methoxymethyl-piperidin-l-yl)-phenyl]-2-oxo-oxazolidin-5-ylmethyl}-acetamide as an antibacterial agent and its process for preparation.
A process for the preparation of phosphoric acid mono-(1-{4-[(S)-5-(acetylaminomethyl)-2-oxo-oxazolidin-3-yl]-2,6-difluorophenyl}-4-methoxymethyl-piperidin-4-yl) ester of Formula (A) comprises the steps of:
a) Converting intermediate of Formula (1) into intermediate of Formula (3)
b) Converting intermediate of Formula (3) into intermediate of Formula (5)
c) Converting intermediate of Formula (5) into intermediate of structure (6)
d) Converting intermediate of Formula (6) into intermediate of Formula (10)
e) Converting intermediate of Formula (10) into intermediate of Formula (11)
f) Converting intermediate of Formula (11) into compound of Formula (A) or pharmaceutically acceptable salts thereof
MinION could help achieve NIH's goal of $1,000 human genome sequencing and shift testing away from medical laboratories to remote clinics and outbreak zones
Point-of-care DNA sequencing technology is edging ever closer to widespread commercial use as the Oxford Nanopore MinION sequencer draws praise and registers successes in pre-release testing.
A pocketsize gene-sequencing machine such as the MinION could transform the marketplace by shifting DNA testing to remote clinics and outbreak zones while eliminating the need to return samples to clinical laboratories for analysis. Such devices also are expected to increase the need for trained genetic pathologists and medical technologists.
After Much Anticipation, MinION Delivers on Promises
The MinION, produced by United Kingdom-based Oxford Nanopore Technologies, is a miniaturized instrument about the size of a USB memory stick that plugs directly into a PC or laptop computer’s USB port. Unlike bench-top sequencers, the MinION uses nanopore “strand sequencing” technology to deliver ultra-long-read-length single-molecule sequence data.
"The USB-powered sequencer contains thousands of wells, each containing nanopores—narrow protein channels that are only wide enough for a single strand of DNA. When DNA enters the channels, each base gives off a unique electronic signature that can be detected by the system, providing a readout of the DNA sequence," one news report explained.
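As a toy illustration of that idea, the Python sketch below assigns each measured current level to the base with the nearest canonical signature. The picoamp values and the one-event-per-base model are hypothetical simplifications; real nanopore basecalling models k-mer contexts with far more sophisticated statistics.

```python
# Toy decoder: each base is assumed to produce a distinct current level
# as it passes through the pore. All numbers are invented for illustration.

CANONICAL_LEVELS = {"A": 80.0, "C": 95.0, "G": 110.0, "T": 125.0}  # picoamps, hypothetical

def call_base(measured_pa):
    """Assign the base whose canonical level is closest to the measurement."""
    return min(CANONICAL_LEVELS, key=lambda base: abs(CANONICAL_LEVELS[base] - measured_pa))

signal = [81.2, 112.3, 96.0, 124.1, 79.5]  # simulated trace, one event per base
read = "".join(call_base(level) for level in signal)
print(read)  # -> AGCTA
```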
After several years of unfulfilled promises, Oxford began delivering the MinION in the spring of 2014 to researchers participating in its early access program, called MAP. For a $1,000 access fee, participants receive a starter kit and may purchase consumable supplies. The current price for additional flow cells ranges from $900 for one to $500 per piece when purchased in 48-unit quantities.
Nick Loman, an Independent Research Fellow in the Institute for Microbiology and Infection at the University of Birmingham, UK, had questioned if MinION’s promise would ever be realized. But the USB-size sequencer won him over after he used it to detect Salmonella within 15 minutes in samples sent from a local hospital.
Loman received the MinION in May 2014 as part of the MAP program and quickly tested its usefulness. After using the device to sequence a strain of Pseudomonas aeruginosa, a common hospital-acquired infection (HAI), he next helped solve the riddle of an outbreak of Salmonella infection in a Birmingham hospital that had affected 30 patients and staff.
“The hospital wanted to understand quickly what was happening,” Loman stated. “But routine genome sequencing is quite slow. It usually takes weeks or even months to get information back.”
Using MinION, Loman detected Salmonella in some of the samples sent from the hospital in less than 15 minutes. Ultimately, the main source of the outbreak was traced to a German egg supplier.
“The MinION just blew me away,” Loman stated in Wired. “The idea that you could do sequencing on a sort of USB stick that you can chuck around does stretch credulity.”
Portable Sequencing Opens Up Intriguing Possibilities for Pathologists
In May 2015, Oxford released a second version of the device, the MinION MkI. According to the company website, the updated MinION is a "full production device featuring improvements of performance and ease of use," such as improved temperature control and an updated mechanism to engage the device with the consumable flow cells.
“The bench-top sequencers opened up the market to a certain degree,” Loman says. “You started seeing [them] in intensive research groups and in the clinic. But what if anyone could have this hanging off their key ring and go do sequencing? That’s an insane idea, and we don’t really know what it’s going to mean in terms of the potential applications. We’re very much at the start of thinking about what we might be able to do, if anyone can just sequence anything, anywhere they are.”
Joshua Quick, a PhD candidate at the University of Birmingham, UK believes Oxford Nanopore Technologies’ portable and inexpensive device will change the gene sequencing landscape.
Accuracy One Trade-off for Portability
Beta-testers have shown that the miniature device can read out relatively long stretches of genetic sequence with increasing accuracy, but according to the report in the journal Nature, the MinION MkI will need to correct several shortcomings found in the original sequencer:
• It is not practical to sequence large genomes with the device, with some experts estimating it would take a year for the original version to sequence the equivalent of a human genome.
• The machine has a high error rate compared with those of existing full-sized sequencers, misidentifying DNA sequence 5%–30% of the time (see the quick calculation after this list).
• It also has difficulties reading sections of genome that contain long stretches of a single DNA base.
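A quick back-of-the-envelope calculation shows why that per-base error rate is the key limitation. Assuming errors strike bases independently (a simplification made only for this illustration), the chance of reading a stretch perfectly falls off exponentially with its length:

```python
# Error rates from the text above; the independence assumption is a
# simplification made only for this illustration.

for per_base_error in (0.05, 0.15, 0.30):
    p_perfect_100 = (1 - per_base_error) ** 100  # 100-base stretch read with no errors
    print(f"error rate {per_base_error:.0%}: P(100 bases all correct) = {p_perfect_100:.2%}")
```

Even at the low end of the quoted range, a 100-base stretch is read perfectly well under 1% of the time, which is why long nanopore reads are typically paired with error-tolerant downstream analysis.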
Yet researchers who have used the device remain enthusiastic about the future of this fourth-generation sequencing technique, which may have the potential to achieve the $1,000-per-human-genome goal set by the National Institutes of Health (NIH).
“This is the democratization of sequencing,” Joshua Quick, a PhD candidate at the University of Birmingham, told Nature. “You don’t have to rely on expensive infrastructure and costly equipment.”
News accounts did not provide information about Oxford Nanopore's plans to obtain a CE mark for its MinION device in the European Union. That will be the next step in demonstrating that the device is ready for widespread clinical use. At the same time, clinical laboratory managers and pathologists should take note of the capabilities of the MinION MkI as described above. Researchers are already finding it useful for identifying infectious diseases in clinical settings where other diagnostic methods have not yet identified the agent causing the infection.
Pathway and Network Analysis of Complex ’Omics Data
Larry H. Bernstein, MD, FCAP, Curator
LPBI
While blood tests can be used to detect some cancers, the FDA said a San Diego company has no proof its blood test works in patients who have not already been diagnosed with some form of the disease.
WASHINGTON, Sept. 25 (UPI) — A San Diego company selling an early cancer detection test was notified by the U.S. Food and Drug Administration it can find no evidence the test actually works, and is concerned it could prove to be harmful for some people.
Pathway Genomics debuted its CancerIntercept test in early September with claims it can detect cancer cell DNA in the blood, picking up mutations linked to as many as 10 different cancers. The goal is to catch cancer early in people who are “otherwise healthy” and not showing symptoms of the disease.
“Based on our review of your promotional materials and the research publication cited above, we believe you are offering a high risk test that has not received adequate clinical validation and may harm the public health,” said FDA Deputy Director James L. Woods in a letter to the company.
CancerIntercept is billed by the company as a blood test looking for DNA fragments in the bloodstream and testing them for 96 genomic markers it says are found in several specific tumor types.
The direct-to-consumer test can be purchased through the Pathway Genomics website, with programs ranging from a one-time test to a quarterly “subscription” for people who want regular testing.
The company states, in several sections of its website, “the presence of one or more of these genomic markers in a patient’s bloodstream may indicate that the patient has a previously undetected cancer. However, the test is not diagnostic, and thus, follow-up screening and clinical testing would be required to confirm the presence or absence of a specific cancer in the patient.”
The FDA is concerned that people may seek treatment for tumors that do not require medical attention, or spend money and possibly seek out treatment they do not need at all — in either case, unnecessary treatment for cancer is potentially harmful to people, the agency said.
CancerIntercept has not been approved by the FDA for use as a medical device, nor has it been subjected to peer review as most tests of its type would be. The company published a white paper on its website which outlines how the test works, supporting its efficacy with references to several clinical trials on detection of mutated DNA in the bloodstream.
Glenn Braunstein, Chief Medical Officer at Pathway Genomics, told The Verge that Pathway had validated its tests with "hundreds" of patients, though those patients had well-defined, often advanced cancers.
In the letter from the FDA, Woods requests the company provide a timeline for meeting with the agency to review plans for future longitudinal studies on the product and specific details on studies that have been conducted before it was made available to consumers.
The clinical laboratory is an essential player in the treatment of cancer, providing a diagnostic, potentially a prognostic, and follow-up treatment armamentarium. The laboratory diagnostics industry has grown over the last half century into a highly accurate, well-regulated industry with highly automated and point-of-care technologies. The tests that are put on the market have to be validated prior to introduction.
How are they validated?
The most common approach is for the test to be used concomitantly with treatment in a clinical trial. Measurements may be made prior to surgical biopsy and treatment, and again at one month, six months, or a year later. The pharmaceutical and diagnostics industries are independent, even though a large company may have both pharmaceutical and diagnostic divisions. Consequently, the integration of diagnostics and therapeutics occurs on the front lines of patient care.
How this discrepancy between the FDA and the manufacturer could occur is not clear, because prior to introduction the test would have to be rigorously reviewed by the American Association for Clinical Chemistry, the largest and most competent organization to cover the scientific work, with its industry-based committees. The only problem is that the companies may have products that are patented and have competing claims or interests. This is perhaps most likely to be problematic in the competitive environment of genomics testing.
The company reported on here is Pathway Genomics, which offers Ingenuity for pathway and variant analysis. There is no concern about the analysis methods, which are well studied. The concern is the validation of such a method for screening patients without a prior diagnosis.
Model, analyze, and understand the complex biological and chemical systems at the core of life science research with IPA
QIAGEN’S Ingenuity Pathway Analysis (IPA) has been broadly adopted by the life science research community and is cited in thousands of peer-reviewed journal articles.
For the analysis and interpretation of ’omics data
Market Leading Pathway Analysis
Unlock the insights buried in experimental data by quickly identifying relationships, mechanisms, functions, and pathways of relevance.
Predictive Causal Analytics
Powerful causal analytics at your fingertips help you to build a more complete regulatory picture and a better understanding of the biology underlying a given gene expression study.
NGS/RNA-Seq Data Analysis
Get a better understanding of the isoform-specific biology resulting from RNA-Seq experiments.
Identify causal variants from human sequencing data
Ingenuity Variant Analysis combines analytical tools and integrated content to help you rapidly identify and prioritize variants by drilling down to a small, targeted subset of compelling variants based both upon published biological evidence and your own knowledge of disease biology. With Variant Analysis, you can interrogate your variants from multiple biological perspectives, explore different biological hypotheses, and identify the most promising variants for follow-up.
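As a rough sketch of what that prioritization looks like, the toy Python filter below drills down to rare, predicted-damaging variants with published disease evidence. The field names, thresholds, and example variants are hypothetical and are not drawn from Ingenuity's actual schema.

```python
# Hypothetical variant records; a real analysis would ingest VCF annotations.
variants = [
    {"gene": "TP53",  "population_freq": 0.0001, "predicted_impact": "damaging", "in_disease_db": True},
    {"gene": "OR4F5", "population_freq": 0.12,   "predicted_impact": "benign",   "in_disease_db": False},
    {"gene": "BRCA1", "population_freq": 0.0005, "predicted_impact": "damaging", "in_disease_db": True},
]

def prioritize(variants, max_freq=0.01):
    """Drill down to compelling variants: rare in the population,
    predicted damaging, and supported by published disease evidence."""
    return [v for v in variants
            if v["population_freq"] < max_freq
            and v["predicted_impact"] == "damaging"
            and v["in_disease_db"]]

for v in prioritize(variants):
    print(v["gene"])  # -> TP53, BRCA1
```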
Variant Analysis used in NCI-60 Interpretation of Genomic Variants
The NCI-60 Data Set offers tremendous promise in the development and prescription of cancer drugs
97% of surveyed researchers are satisfied with the ease of use of Ingenuity Variant Analysis and we are honored that they chose to share the data through our Publish tool.
“Being a bioinformatician, I appreciated the speed and the complexity of analysis. Without Variant Analysis, I couldn’t have completed the analysis of 700 exomes in such a short time …. I found Variant Analysis very intuitive and easy to use.”
Francesco Lescai, Senior Research Associate in Genome Analysis, University College London.
This appears to be the new, rocky road to verifying validity in diagnostic and treatment applications.
Advances in Gene Editing Technology: New Gene Therapy Options in Personalized Medicine
Curators: Stephen J Williams, PhD and Aviva Lev-Ari, PhD, RN
Recent Advances in Gene Editing Technology Adds New Therapeutic Potential for the Genomic Era
Author and Curator:Stephen J Williams, PhD
2.1.3.2 Advances in Gene Editing Technology: New Gene Therapy Options in Personalized Medicine, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair
The fundamental shift presently occurring within the medical field, as well as in our understanding of the underlying biology, has been brought on by revolutionary advances in the disciplines referred to as 'OMICS' (genomics, metabolomics, transcriptomics, proteomics). This paradigm shift has brought a new, more "personalized" mindset to investigating, detecting, and treating disease, to policy decision-making, and to the physician-patient relationship. This Volume One of Genomics explains this paradigm shift as our classical understanding of the gene has evolved, with rapid development of molecular technologies and high-end computational methods, into a vision beyond the classic model. This new model involves big data focused on the "code of the OMIC signature," moving from our investigational focus of "one gene at a time" to analysis of the changes in the networks of protein and gene expression occurring during disease progression.
Moving toward this promise of genome-based therapeutics has required the concomitant development of methodologies unavailable to the researcher and drug developer for most of the 20th century. These new technologies have allowed for the sequencing of the whole genome (advanced and inexpensive pyrosequencing), analysis of the proteome for changes in post-translational modifications (new mass spectrometry techniques combined with automated high-throughput gel electrophoresis on robotic platforms), the ability to track all the changes happening to a patient's metabolic profile (LC-MS in combination with an array of biocurated database functions), and the development of new therapeutics based on discrete disease-specific changes in protein, enzyme, and DNA/RNA (mutational analysis, and advanced molecular techniques that allow for manipulation of DNA/RNA such as gene editing and therapeutic vectors), all advancements being dependent on the massive advancements in computing power and software development.
Although this final chapter on a specific technology (Cas9-mediated gene editing) might seem out of place to the reader given the subject of this Genomics volume, as discussed above, the development of these omics-related technologies has spurred the advent of personalized therapies. For example, in the 1990s (as highlighted in the earlier chapters of this book) Dr. Craig Venter founded Celera Genomics with the goals of 1) sequencing the human genome in a cost-effective manner (using new DNA sequencing technology and workflow he and colleagues had developed), and 2) using the information from whole genome sequencing to develop a new line of genomic-based therapeutics. Other companies such as Human Genome Sciences, Myriad Genetics, Seattle Genetics and, recently, new ventures from 23andMe and Google Ventures were also founded on the promise that high-end sequencing information could directly lead to this new era of genome-based therapeutics. And although many in the medical field have felt that the primary goal of these companies, in particular using genomic analysis to enhance drug development, has been a bit disappointing, AS IN ALL SCIENTIFIC AND MEDICAL DISCOVERY, which involves both SERENDIPITY and INDIRECT HAPPENSTANCE, three important breakthroughs, directly related to the development of a post-genomics-era personalized medicine approach, resulted from the aforementioned efforts. These were:
The detection of disease-specific mutations in exomes, resulting in "druggable" protein targets, and the ability to define the respective drug-responsive patient cohorts
Chronic myelogenous (or myeloid or myelocytic) leukemia (CML) was one of the first cancers attributed to a specific chromosomal aberration, namely the translocation event resulting in a fusion protein between part of the BCR ("breakpoint cluster region") gene from chromosome 22 and the ABL gene on chromosome 9. Early drug development efforts were directed against the tyrosine kinase activity of the aberrant BCR/ABL protein. The first of this new class of drugs, imatinib mesylate (Gleevec™), showed early success, but it was later noticed that a subset of patients had significantly greater response rates. This led to more detailed investigation of Gleevec's mechanism of action, and it was determined that Gleevec's therapeutic action depends on the drug's ability to bind to an ATP-binding pocket within BCR/ABL. Patients with specific mutations in this ATP pocket (C944T and T1052C) were found resistant to Gleevec. This finding, that a patient's DNA could be sequenced to stratify them into responder versus nonresponder groups, became a cornerstone of tyrosine kinase inhibitor (TKI) development for various cancers. One example is the development of crizotinib, a TKI directed against a mutant version of the anaplastic lymphoma kinase (ALK) enzyme, namely in patients carrying the ALK-EML4 fusion gene. As with Gleevec, certain mutations in the ATP-binding pocket confer resistance to the inhibitory effects of crizotinib. Therefore, whole exome sequencing (WES) has shown its utility not only in drug development against cancer-specific mutant targets but also in stratifying patient cohorts into eligible versus non-eligible for a specific personalized therapy.
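A minimal sketch of that stratification logic, using the two ATP-pocket mutations named above; the patient records and the simple presence/absence rule are invented for illustration.

```python
# Stratify patients into predicted responder vs. non-responder cohorts by
# the presence of known resistance mutations. Mutation labels follow the
# text above; the patient data are invented.

RESISTANCE_MUTATIONS = {"C944T", "T1052C"}  # ATP-pocket mutations noted above

patients = {
    "patient_01": {"C944T"},
    "patient_02": set(),
    "patient_03": {"T1052C"},
}

def stratify(mutations):
    """Predicted non-responder if any known resistance mutation is present."""
    return "non-responder" if mutations & RESISTANCE_MUTATIONS else "responder"

for pid, muts in patients.items():
    print(pid, stratify(muts))
```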
Ability to define at-risk populations based on genomic data and development of corresponding genetic risk assessment for disease
Tremendous advances have been made in the area of risk assessment for a plethora of diseases, including various malignancies, heart disease, and metabolic diseases. These risk factors have been identified thanks to our advances in whole genome sequencing, proteomics, and metabolomics. And although the aforementioned companies had not developed therapeutic agents using these technologies, their major contribution has been the development of the diagnostic tests which identify at-risk patients and susceptible populations for a given disease. For example, the development of tests for carriers of the BRCA1/BRCA2 breast/ovarian cancer susceptibility mutations or APC (for colon cancer) has led to the appearance of Family Risk Assessment Programs and radically changed the discourse between patient and physician. And although determining risk factors for a disease such as cardiac disease in a large population can be fraught with complexities, the advanced research tools together with the gene-directed technologies discussed in this Volume and the current chapter may give better clarity in this regard. In essence, the technology had been developed well before its use in the clinic had been identified.
Supplying and verifying linkages of specific genetic alterations to heritable diseases and offering a framework for future advances in gene-replacement and mutation-correction therapy
Our ability to phenotypically correct inheritable diseases through gene therapy (either by gene replacement or correction of mutated genes) has been hampered in three main areas. First, identifying the specific mutations for a given inheritable disease used to be an arduous, time-consuming process (linkage analysis), especially in small affected populations; however, as whole exome sequencing rapidly evolved, this is no longer a rate-limiting step toward the development of a gene-directed therapy. Second, and more troubling, was devising a process that could deliver therapeutic genes in a safe, reliable, and persistent manner. The first attempts at gene therapy, relying on DNA virus and retroviral-based delivery, met with disaster and set back the field of gene therapy for decades (this story is too long for an introduction but for reference see the link). Recently there have been improvements in therapeutic gene-delivery systems, such as the use of conditionally replicative adenoviruses (CRAds) and novel-serotype AAV (recombinant adeno-associated virus, a nonpathogenic single-stranded DNA human parvovirus), which have greatly improved safety and therapeutic profiles. The third issue, directly related to this chapter on Cas9-mediated DNA editing, is the ability to integrate therapeutic DNA into the genome in a safe manner, or to correct mutations in their proper place. It is well established that the random integration of pieces of DNA has spurious effects on gene expression or contributes to transformation by an insertional mutagenesis mechanism.
This chapter will discuss how CRISPR/Cas9-mediated gene editing is being used in ex vivo strategies, namely to insert T-cell-specific genes, in definable and safe loci, for the development of the new CAR-T cancer immuno-based therapies. In addition, CRISPR/Cas9-mediated gene editing holds much hope and promise for correcting specific mutations related to inheritable diseases, although investigations are at an early yet rapidly expanding stage. As discussed above, new technologies have preceded their clinical use, mostly in a serendipitous and advantageous manner. It is therefore a natural progression, using the concepts and curations in previous chapters, to investigate how a new technology, such as CRISPR/Cas9-mediated gene editing, will fit into the 'OMICS era of medicine.
Introduction
Larry H Bernstein, MD, FCAP
This document is a review of the brilliant accomplishments of the Doudna Laboratory at the University of California, Berkeley. It also traces the developments leading up to this groundbreaking work. The principal investigator is a young woman of significant accomplishments, with the astounding publication of 4 papers at this time in 2015 and 20 in 2014. She is a member of the National Academy of Sciences, a recipient of the Breakthrough Prize and the Lurie Prize in Biomedical Sciences, and was R. B. Woodward Visiting Professor, Harvard University (2000-2001). She was named Henry Ford II Professor of Molecular Biophysics and Biochemistry, Center for Structural Biology, Department of Molecular Biophysics and Biochemistry, Yale University (1994-2002), nine years after completion of her B.A. at Pomona College and her Ph.D. under Jack Szostak at Harvard in 1989; she became a Searle Scholar in 1996 and a Howard Hughes Investigator in 1997.
Her work has encompassed the editing of genes using the CRISPR-Cas9 system, and her team replaced a gene in a human cell, a result that was convincingly replicated in the Broad Laboratory at Harvard. The laboratory is currently working on the just-reported immunological implications of CRISPR-Cas9, with respect to prokaryotic CRISPR-Cas genomic loci that encode RNA-mediated adaptive immune systems bearing some functional similarities with eukaryotic RNA interference. This is because acquired and heritable immunity against bacteriophage and plasmids begins with integration of ∼30 base pair foreign DNA sequences into the host genome.
Of special note are the following applications:
21.4.2 CRISPR: Applications for Autoimmune Diseases @UCSF
Doudna's Interview from the National Academy of Sciences in 2004
Doudna discusses her current work with signal recognition particles, a type of RNA that is found in virtually all cell types and is responsible for directing specific proteins to specific membranes. She also discusses how advances in genomic sequencing may help catalog the complete range of functional RNA molecules. (9 minutes)
The Doudna lab pursues mechanistic understanding of fundamental biological processes involving RNA molecules. Research in the lab is currently focused on three major areas:
In January, the pharmaceutical giant Novartis announced that it would be using Doudna’s CRISPR technology for its research into cancer treatments. It plans to edit the genes of immune cells so that they will attack tumors.
At Editas, a company based in Cambridge, Massachusetts, scientists have been investigating the Cas9 enzyme made by another species of bacteria, Staphylococcus aureus, in addition to that of Streptococcus pyogenes.
Heritable human genetic modifications pose serious risks, and the therapeutic benefits are tenuous, warn Edward Lanphier, Fyodor Urnov and colleagues.
The CRISPR technique has dramatically expanded research on genome editing. But we cannot imagine a situation in which its use in human embryos would offer a therapeutic benefit over existing and developing methods. It would be difficult to control exactly how many cells are modified. Increasing the dose of nuclease used would increase the likelihood that the mutated gene will be corrected, but also raise the risk of cuts being made elsewhere in the genome.
The RNA-guided, site-specific DNA cleavage tool CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) and its CRISPR-associated protein 9 (Cas9) system have been developed from the Streptococcus pyogenes type II CRISPR adaptive immune system.
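To make "site-specific" concrete: S. pyogenes Cas9 is directed by its guide RNA to a 20-nucleotide protospacer that must be immediately followed by an NGG PAM. The Python sketch below scans one strand of an invented DNA sequence for candidate sites; real guide design also considers the reverse strand, off-target matches, and much more.

```python
import re

def find_spcas9_sites(dna):
    """Return (position, protospacer, PAM) for every 20-mer followed by NGG
    on the given strand. Overlapping sites are found via a lookahead."""
    return [(m.start(), m.group(1), m.group(2))
            for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", dna)]

dna = "TTGACCTGAATGGAAGGATTGGAGCTACGGGGGTGGGGGTGGGGTACCA"  # invented sequence
for position, protospacer, pam in find_spcas9_sites(dna)[:3]:
    print(position, protospacer, pam)
```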
The Structural Biology of CRISPR-Cas Systems
Curr Opin Struct Biol. 2015 Feb 24;30C:100-111
Authors: Jiang F, Doudna JA
Abstract
Prokaryotic CRISPR-Cas genomic loci encode RNA-mediated adaptive immune systems that bear some functional similarities with eukaryotic RNA interference. Acquired and heritable immunity against bacteriophage and plasmids begins with integration of ∼30 base pair foreign DNA sequences into the host genome. CRISPR-derived transcripts assemble with CRISPR-associated (Cas) proteins to target complementary nucleic acids for degradation. Here we review recent advances in the structural biology of these targeting complexes, with a focus on structural studies of the multisubunit Type I CRISPR RNA-guided surveillance and the Cas9 DNA endonuclease found in Type II CRISPR-Cas systems. These complexes have distinct structures that are each capable of site-specific double-stranded DNA binding and local helix unwinding.
PMID: 25723899 [PubMed – as supplied by publisher]
Genome Engineering: CRISPR & MAGE
Multiplex Automated Genome Engineering (MAGE) is an intentionally broad term. In practice, it has come to be associated with a very efficient oligonucleotide allele-replacement method (lambda Red Beta), so far restricted mainly to E. coli. CRISPR, in contrast, works in nearly every organism tested.
CRISPR-Cas is a prokaryotic defense system against invading genetic elements. In a collaboration with John van der Oost's laboratory, we are studying the structure and function of the effector complex of the Type III-A CRISPR-Cas system of Thermus thermophilus: the Csm complex (TtCsm). Recently, we showed that multiple Cas proteins and a crRNA guide assemble to recognize and cleave invader RNAs at multiple sites. Our negative stain EM structure of the TtCsm complex exhibits the characteristic architecture of Type I and Type III CRISPR-associated ribonucleoprotein complexes, suggesting a model for cleavage of the target RNA at periodic intervals (in collaboration with Eva Nogales, UC Berkeley, HHMI).
Double-stranded RNA induces potent and specific gene silencing in a broad range of eukaryotic organisms through a pathway known as RNA interference (RNAi). RNAi begins with the processing of endogenous or introduced precursor RNA into micro-RNAs (miRNAs) and small interfering RNAs (siRNAs) 21-25 nucleotides in length by the enzyme Dicer. We previously determined the crystal structure of an intact Dicer enzyme, revealing how Dicer functions as a molecular ruler to measure and cleave duplex RNAs of a specific length. Current work focuses on the mechanism of a complex of proteins known as the RISC loading complex (RLC), which loads miRNA into the endonuclease Argonaute. The RLC contains the enzyme Dicer as well as TRBP, an RNA-binding protein hypothesized to interact with miRNA and Dicer during RISC loading. We seek to determine the molecular underpinnings of these interactions, along with the role of TRBP in RISC loading.
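A toy rendering of the "molecular ruler" idea: cleave a precursor duplex into products of one fixed length. The 22-nt product size falls within the 21-25-nt range cited above; the precursor sequence and the simple end-to-end cleavage model are invented.

```python
# Dicer-as-ruler sketch: cut a precursor RNA into consecutive fragments
# of a fixed length, discarding any final piece shorter than the ruler.

def dice(precursor, product_len=22):
    """Cleave a precursor into consecutive fragments of fixed length."""
    return [precursor[i:i + product_len]
            for i in range(0, len(precursor) - product_len + 1, product_len)]

precursor = "ACGU" * 17  # 68-nt toy precursor
for fragment in dice(precursor):
    print(len(fragment), fragment)  # three 22-nt products; the 2-nt remainder is dropped
```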
MicroRNAs (miRNAs) regulate endogenous eukaryotic genes by repressing gene expression through direct base-pairing interactions with their target messenger RNAs (mRNAs). To date, the rules used to predict miRNA-mRNA interactions have been based on one-dimensional sequence analysis. A more complete picture of miRNA-mRNA interactions should take into account the ability of RNA to form two- and three-dimensional structures. We are investigating the role of mRNA structure in the efficiency and specificity of targeting by miRNAs. Specifically, we are investigating the structure of Alu elements found within the 3′ untranslated regions (UTRs) of many human mRNAs and whether these structured domains serve as targets of a subset of human miRNAs. We are using in vitro biochemical methods and cell-based assays to probe the relationship between miRNA binding and mRNA structure.
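The one-dimensional sequence analysis mentioned above can be sketched in a few lines: scan a 3′ UTR for perfect complements of the miRNA "seed" (positions 2-7). The let-7 miRNA sequence is widely published; the UTR fragment is invented, and real prediction tools add conservation, pairing context, and, as argued here, structure.

```python
def reverse_complement(rna):
    """Reverse complement of an RNA string."""
    return rna.translate(str.maketrans("ACGU", "UGCA"))[::-1]

def seed_matches(mirna, utr):
    """Positions in the UTR exactly complementary to the miRNA seed (nt 2-7)."""
    site = reverse_complement(mirna[1:7])
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"     # let-7a, a widely published sequence
utr   = "AAACUACCUCAGGGAAACUACCUCA"  # invented 3' UTR fragment
print(seed_matches(mirna, utr))      # -> [4, 18]
```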
The 5′ UTR of mRNA is also the site of multiple regulatory mechanisms, including upstream open reading frames (uORFs), internal ribosome entry sites (IRESs), protein binding sites, and stable secondary structures. Genes that profoundly influence cellular state are often controlled by several of these regulatory mechanisms. We are attempting to further understand regulatory elements in the 5′ UTR of mammalian mRNAs using a combination of in vitro, cell-based, and high-throughput techniques.
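Of these elements, uORFs lend themselves most readily to a toy computation: any AUG in the 5′ UTR opens a potential upstream reading frame that can dampen translation of the main ORF. The sketch below is a minimal, hypothetical scan that reports each upstream AUG and its in-frame stop codon; it deliberately ignores Kozak context, uORF length, and reinitiation.

```python
# Minimal uORF scan over a 5' UTR: report each upstream AUG and the first
# in-frame stop codon (None if no stop falls within the UTR). Toy sequence.

STOPS = {"UAA", "UAG", "UGA"}

def find_uorfs(utr: str) -> list[tuple[int, int | None]]:
    """Return (start, stop) codon positions of candidate uORFs in the UTR."""
    uorfs = []
    for i in range(len(utr) - 2):
        if utr[i : i + 3] == "AUG":
            stop = next((j for j in range(i + 3, len(utr) - 2, 3)
                         if utr[j : j + 3] in STOPS), None)
            uorfs.append((i, stop))
    return uorfs

utr = "GCCAUGGCAUAAGGCGCAUGCCGCCACC"  # toy 5' UTR with one complete uORF
print(find_uorfs(utr))                # -> [(3, 9), (17, None)]
```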
What does this mean for the development of therapeutics in the near future?
New methods for programming cell phenotype have broadly enabled drug screening, disease modeling, and regenerative medicine. Current research explores genome engineering tools, such as CRISPR/Cas9-based gene regulation and epigenome editing, to more precisely reprogram gene networks and control cellular decision making.
Donald Zack, M.D., Ph.D., Associate Professor of Ophthalmology and Neuroscience, Johns Hopkins University School of Medicine, is using CRISPR/Cas9 technology to generate retinal cell type-specific reporter ES and iPS lines and to introduce retinal degeneration-associated mutations. These reporter lines can be used to follow retinal neuronal specification during differentiation; they allow the purification of specific cell types by sorting and immunopanning; and they are also useful for the development of drug screening assays.
Jacquin C. Niles, M.D., Ph.D., Associate Professor of Biological Engineering, Massachusetts Institute of Technology, is using CRISPR/Cas9 technology to study functional genetics in the human malaria parasite, Plasmodium falciparum. The team has established strategies for achieving controllable gene expression and has integrated these into an experimental framework that facilitates efficient interrogation of virtually any target parasite gene using CRISPR/Cas9 editing.
Sidi Chen, Ph.D., Postdoctoral Fellow, Laboratory of Dr. Feng Zhang, Broad Institute and the Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, observes that cancer genomics has revealed hundreds to thousands of mutations associated with human cancer. To test the roles of these mutations, his team applied a CRISPR/Cas9-mediated genome-editing platform to engineer specific mutations in oncogenes and tumor suppressor genes, resulting in tumorigenesis in several internal organs in mice. This method expedites the modeling of multigenic cancer with virtually any combination of mutations.
Samuel Hasson, Ph.D., Principal Investigator, Neuroscience, Pfizer, Inc., observes that while RNAi-based functional genomics is a staple of gene-pathway and drug-target exploration, there is a need for tools that provide rapid orthogonal validation of gene candidates emerging from RNAi campaigns. CRISPR, CRISPRi, and CRISPRa are not only developing into primary screening platforms; they are also a promising way to complement RNAi and enhance the quality of functional genomic datasets.
These are some of the developments that will be discussed in detail at an upcoming meeting in Boston, MA in June titled ‘Gene Editing for Drug Discovery’.
Staals RH, Zhu Y, Taylor DW, Kornfeld JE, Sharma K, Barendregt A, Koehorst JJ, Vlot M, Neupane N, Varossieau K, Sakamoto K, Suzuki T, Dohmae N, Yokoyama S, Schaap PJ, Urlaub H, Heck AJ, Nogales E, Doudna JA, Shinkai A, van der Oost J. RNA targeting by the Type III-A CRISPR-Cas Csm complex of Thermus thermophilus. Mol Cell. 2014 Nov 20;56(4):518–530.
Jinek M, Jiang F, Taylor DW, Sternberg SH, Kaya E, Ma E, Anders C, Hauer M, Zhou K, Lin S, Kaplan M, Iavarone AT, Charpentier E, Nogales E, Doudna JA. Structures of Cas9 endonucleases reveal RNA-mediated conformational activation. Science. 2014 Feb 6.
The Doudna lab pursues mechanistic understanding of fundamental biological processes involving RNA molecules. Research in the lab is currently focused on three major areas: CRISPR-Cas immunity, RNA interference, and mRNA regulation, as described above.
Protecting Your Biotech IP and Market Strategy: Notes from Life Sciences Collaborative 2015 Meeting
Achievement Beyond Regulatory Approval – Design for Commercial Success
Stephen J. Williams, Ph.D.: Reporter
The Life Sciences Collaborative, a select Mid-Atlantic group of industry veterans and executives from the pharmaceutical, biotechnology, and medical device sectors, has a mission to increase the success of emerging life sciences businesses in the Mid-Atlantic region through networking, education, training, and mentorship. The group met Tuesday, March 3, 2015, at the University of the Sciences in Philadelphia (USP) to discuss post-approval regulatory issues and concerns, such as designing strong patent protection, developing strategies for insurance reimbursement, and securing financing at any stage of a business.
The meeting was divided into three panel discussions and a keynote speech:
Panel 1: Design for Market Protection– Intellectual Property Strategy Planning
Panel 2: Design for Market Success– Commercial Strategy Planning
Panel 3: Design for Investment– Financing Each Stage
Keynote Speaker: Robert Radie, President & CEO Egalet Corporation
Below are Notes from each PANEL Discussion:
For more information about the Life Sciences Collaborative SEE
Panel 1: Design for Market Protection; Intellectual Property Strategy Planning
Take-home Message: Developing a very strong Intellectual Property (IP) portfolio and strategy for a startup is CRITICALLY IMPORTANT for its long-term success. Potential investors, partners, and acquirers will focus on the strength of a startup’s IP, so it is important to take advantage of the legal services available. Do your DUE DILIGENCE.
Panelists:
John F. Ritter, J.D., MBA; Director, Office of Technology Licensing, Princeton University
Panel Moderator: Dipanjan “DJ” Nag, PhD, MBA, CLP, RTTP; President & CEO, IP Shakti, LLC
Notes:
Dr. Nag:
Sometimes IP can be a double-edged sword; e.g., Herbert Boyer, Paul Berg, and Stanley Cohen are credited with developing recombinant DNA technology, but they did not keep tight control of the IP, which opened the door for a biotech revolution (see the nice review from the Chemical Heritage Foundation).
Naked patent licenses are most profitable when trying to sell IP
John Ritter: Mr. Ritter gave Princeton University’s perspective on developing and promoting a university-based IP portfolio.
30-40% of Princeton’s IP portfolio is related to life sciences
Universities prefer to seek provisional patent status, as it is a quicker process and allows for publication
Princeton will work closely with investigators to walk them through the process – it is very important to have a support system in place, INCLUDING helping investigators and early startups establish a STRONG startup MANAGEMENT TEAM, making important introductions, and DEVELOPING RELATIONSHIPS with investors and angels
Good to cast a wide net when looking at early development partners like pharma
A good example of a university that takes an active role in developing startups is the University of Pennsylvania’s Penn UPstart program.
In the last 2 years, many universities have been filing patents for startups as micro-entities
Comment from an attendee: Universities are not using enough of their endowments for the purpose of startups. Princeton is only using $500,000 for its accelerator program.
Cozette McAvoy: Mrs. McAvoy talked about monetizing your IP from an industry perspective
Industry is now looking at “indirect monetization” of their own and others’ IP portfolios. Indirect monetization refers to unlocking the “indirect value” of intellectual property – for example, research tools and processes, which may or may not be related to a tangible product.
Good to make a contractual bundle of IP – “the days of the million-dollar check are gone”
Big companies like big pharma look to the PR (press relations) buzz surrounding new technologies and products, SO IT IS IMPORTANT FOR A STARTUP TO FOCUS ON ITS PR
Ryan O’Donnell: talked about how life science IP has changed, especially due to the America Invents Act
Need to develop a GLOBAL IP strategy so that, whether a drug or a device, the product can be marketed in multiple countries
Diagnostics and genes are not patentable now – a major shift in patent strategy
Companies like Unified Patents can protect you against patent trolls – if a patent is threatened by a patent troll (patent assertion entity), they will file a petition with the USPTO (US Patent and Trademark Office) requesting institution of inter partes review (IPR); this may cost $40,000 BUT IS WELL WORTH the money – BE PROACTIVE about your patents and IP
Panel 2: Design for Market Success; Commercial Strategy Planning
Take-home Message: Commercial strategy development is defined by market-facing data, reimbursement strategies, and commercial planning that inform labeling requirements, clinical study designs, healthcare economic outcomes, and pricing targets. Clarity from payers is extremely important for developing any market strategy. Develop this strategy early and seek advice from payers.
Panelists:
David Blaszczak; Founder, Precipio Health Strategies
Terri Bernacchi, PharmD, MBA; Founder & President Cambria Health Advisory Professionals
Paul Firuta; President US Commercial Operations, NPS Pharma
Panel Moderator: Matt Cabrey; Executive Director, Select Greater Philadelphia
Notes:
David Blaszczak:
Commercial payers are bundling payments: it is most important to get clarity from these payers
Payers are using clinical trials to alter marketing (labeling), so it is IMPORTANT to BUILD THE LABEL in early clinical trial phases (Phase I or II)
When a small company is in its early phases, it is best to team or partner with Medicare or a PBM (pharmacy benefit manager) and payers, who can help develop and spot tier 1 and tier 2 companies in their area
Terri Bernacchi:
Building a relationship with the payer is very important, but firms like hers will also look to patients and advocacy groups to see how they respond to a given therapy, and will decrease the price risk by bundling
Value-based contracting with manufacturers can save patients and payers $$
As most PBM formularies are 80% generics, the goal is how to make money off of generics
Patent extension would have the greatest impact on price and value
Paul Firuta:
NPS Pharma is developing a pharmacy benefit program for orphan diseases
How you are paid now depends on the mix of Medicare and private payers
The most important change that could affect price is a change in compliance regulations
Panel 3: Design for Investment; Financing Each Stage
Take-home Message: Venture capital is a personal relationship, so spend time building those relationships. Do your preparation on your value and your market. Look to non-VC avenues: they are out there.
Panelists:
Ting Pau Oei; Managing Director, Easton Capital (NYC)
Manya Deehr; CEO & Founder, Pediva Therapeutics
Sanjoy Dutta, PhD; Assistant VP, Translational Devel. & Intl. Res., Juvenile Diabetes Research Foundation
Notes:
Ting Pau Oei:
In 2000, his experience was that raising first capital was about “what are your assets”; now it has changed to value
Your very first capital is all about VALUE – so plan where you add value
Venture Capital is a PERSONAL RELATIONSHIP
1) You need the management team, and 2) you must be able to communicate effectively (PowerPoint, elevator pitch, business plan); #1 and #2 will get you the important second venture capital meeting – VCs don’t decide anything in the first meeting
VCs don’t normally do a good job of premarket valuation or premarket due diligence, but they know post-market valuation well
Best advice: show some Phase 2 milestones and VCs will knock on your door
Manya Deehr:
Investment is more niche-oriented, so find your niche investors
Define your product first and then match the investors
Biggest failure she has experienced: companies that go out too early looking for capital
Dr. Dutta: funding from a non-profit patient advocacy group’s perspective
Your First Capital: find alliances that can help you get through the “valley of death”
Develop a targeted product and patient treatment profile
Non-profit groups ask three questions:
1) What is the value to patients? (Non-profits want to partner.)
2) What is your timeline? (Non-profits can wait longer than VCs; for example, the Cystic Fibrosis Foundation waited a long time but got great returns for its patients with Kalydeco™.)
3) When can we see a return?
Long-term market projections (the landscape) are a knowledge gap for startups, which don’t have all the competitive intelligence
Have a plan B every step of the way
Other posts on this site related to Philadelphia Biotech, Startup Funding, Payer Issues, and Intellectual Property Issues include: