
Archive for the ‘Bio Instrumentation in Experimental Life Sciences Research’ Category

Author: Tilda Barliya PhD

Category: Nanotechnology in drug delivery

Nanotechnology is simply defined as the technology to manipulate matter on the atomic and/or molecular scale. It generally refers to materials, devices and structures with dimensions at the nanoscale of 1 to 1000 nanometers (nm) (1,2).

Nanotechnology can be applied to many fields, including sensors, biomaterials for tissue engineering, and nanostructures or 3D materials for molecular imaging and drug delivery, among others. In medicine, nanotechnology is essentially a multidisciplinary field drawing on physics, organic and polymer chemistry, molecular biology, pharmacology and engineering. These fields team up to design a better and more appropriate treatment option for a disease using “the right drug, the right vehicle and the right route of administration”. In the pharmaceutical industry, a new molecular entity (NME) that demonstrates potent biological activity but poor water solubility, or a very short circulating half-life, will likely face significant development challenges or be deemed undevelopable. There is always a degree of compromise, and such tradeoffs may inevitably result in the production of less-than-ideal drugs. However, with the emerging trends and recent advances in nanotechnology, it has become increasingly possible to address some of the shortcomings associated with potential NMEs. By using nanoscale delivery vehicles, the pharmacological properties (e.g., solubility and circulating half-life) of such NMEs can be drastically improved, essentially leading to the discovery of optimally safe and effective drug candidates (3,4).

This is just one example of the degree to which nanotechnology may revolutionize the rules and possibilities of drug discovery and change the landscape of the pharmaceutical industry (5).

Nanomedicine still faces many challenges in overcoming biological barriers and in reaching and accumulating at the target site. Advances in nanoparticle engineering, as well as in understanding the importance of nanoparticle characteristics such as size, shape and surface properties for biological interactions, are therefore necessary to create new opportunities for the development of nanoparticles for therapeutic applications (6).

Compared to conventional drug delivery, first-generation nanosystems provide a number of advantages. In particular, they can enhance therapeutic activity by prolonging drug half-life, improving the solubility of hydrophobic drugs, reducing potential immunogenicity, and/or releasing drugs in a sustained or stimuli-triggered fashion. Thus, the toxic side effects of drugs can be reduced, as can the administration frequency. In addition, nanoscale particles can passively accumulate in specific tissues (e.g., tumors) through the enhanced permeability and retention (EPR) effect. Beyond these clinically efficacious nanosystems, nanotechnology has been utilized to enable new therapies and to develop next-generation nanosystems for “smart” drug delivery (such as gene therapy).

In summary, there are several factors that need to be included in a rational nanocarrier design:

  • Protect the drug from premature degradation
  • Protect the drug from premature interaction with the biological environment
  • Enhance absorption of the drug into the selected tissue site
  • Improve intracellular drug penetration
  • Improve and control the drug’s pharmacokinetics and distribution profile

Moreover, there are several other factors that need to be taken into consideration to effectively influence the clinical translation of a drug delivery system (DDS), i.e., materials that are biodegradable and biocompatible, easily functionalized, and that exhibit high differential uptake efficiency, etc. (7-9).

In the next few chapters, we will try to address some of these factors, as well as some examples that succeeded in the clinical setting and some that failed.

References:

  1. Ochekpe NA, Olorunfemi PO, Ngwuluka NC. Nanotechnology and drug delivery. Part 1: Background and applications. Tropical Journal of Pharmaceutical Research 2009; 8(3): 265-274. http://www.tjpr.org/vol8_no3/2009_8_3_11_Ochekpe.pdf
  2. Davis ME, Chen Z, Shin DM. Nanoparticle therapeutics: an emerging treatment modality for cancer. Nature Rev Drug Discov 2008; 7: 771-782. http://www.nature.com/nrd/journal/v7/n9/abs/nrd2614.html
  3. Shi J, Votruba AR, Farokhzad OC, Langer R. Nanotechnology in drug delivery and tissue engineering: from discovery to applications. Nano Lett 2010; 10: 3223-3230. http://engineering.unl.edu/academicunits/chemical-engineering/research/focuslab/kidambi_lab/CHME_896_496_files/Impact%20of%20Nanotechnology%20on%20Drug%20Delivery-Langer_ACSNano’09.pdf
  4. Sengupta S, et al. Temporal targeting of tumour cells and neovasculature with a nanoscale delivery system. Nature 2005; 436: 568-572. http://www.ncbi.nlm.nih.gov/pubmed/16049491
  5. Torchilin VP. Recent advances with liposomes as pharmaceutical carriers. Nature Rev Drug Discov 2005; 4: 145-160. http://www.chem.umass.edu/~thompson/Courses/chem697a/papers/TorchilinReviewLiposomeCarriers.pdf
  6. Decuzzi P, et al. Size and shape effects in the biodistribution of intravascularly injected particles. J Control Release 2010; 141: 320-327. http://www.ncbi.nlm.nih.gov/pubmed?term=Decuzzi%2C%20P.%20et%20al.%20Size%20and%20shape%20effects%20in%20the%20biodistribution%20of%20intravascularly%20injected%20particles.%20J.%20Control.%20Release%20141%2C%20320%E2%80%93327%20(2010)
  7. Peer D, Karp JM, Hong S, Farokhzad OC, Margalit R, Langer R. Nanocarriers as an emerging platform for cancer therapy. Nature Nanotechnology 2007; 2: 751-760. http://www.nature.com/nnano/journal/v2/n12/abs/nnano.2007.387.html
  8. Alonso MJ. Nanomedicines for overcoming biological barriers. Biomed Pharmacother 2004; 58: 168-172. http://www.ncbi.nlm.nih.gov/pubmed/15082339
  9. Torchilin VP. Recent advances with liposomes as pharmaceutical carriers. Nat Rev Drug Discov 2005; 4: 145-160. http://www.chem.umass.edu/~thompson/Courses/chem697a/papers/TorchilinReviewLiposomeCarriers.pdf


Expanding the Genetic Alphabet and Linking the Genome to the Metabolome

The citric acid cycle, also known as the tricarboxylic acid cycle (TCA cycle) or the Krebs cycle. Produced at WikiPathways. (Photo credit: Wikipedia)

Reporter & Curator: Larry Bernstein, MD, FCAP

Unlocking the diversity of genomic expression within tumorigenesis and “tailoring” of therapeutic options

1. Reshaping the DNA landscape between diseases and within diseases by the linking of DNA to treatments

In the New York Times of 9/24/2012, Gina Kolata reports on four types of breast cancer and the reshaping of breast cancer treatment based on the findings of the genetically distinct types, each of which has common “cluster” features that are driving many cancers.  The discoveries were published online in the journal Nature on Sunday (9/23).  The study is considered the first comprehensive genetic analysis of breast cancer and has been called a road map to future breast cancer treatments.  I consider that if this is a landmark study in cancer genomics leading to personalized drug management of patients, it is also a fitting of the treatment to measurable “combinatorial feature sets” that tie into population biodiversity with respect to known conditions.   The researchers caution that it will take years to establish transformative treatments, clearly because within the genetic types there are subsets that have a bearing on treatment “tailoring”.   In addition, there is growing evidence that the Watson-Crick model of the gene is itself being modified by an expansion of the alphabet used to construct the DNA library, which will open opportunities to explain some of what has been considered junk DNA and which may carry essential information with respect to metabolic pathways and pathway regulation.  The breast cancer study is tied to the “Cancer Genome Atlas” Project, already reported.  It is expected that this work will tie into building maps of genetic changes in common cancers, such as breast, colon, and lung.  What is not explicit, I presume, is a closely related concept: that the translational challenge is closely related to the suppression of key proteomic processes tied into manipulating the metabolome.

Saha S. Impact of evolutionary selection on functional regions: The imprint of evolutionary selection on ENCODE regulatory elements is manifested between species and within human populations. 9/12/2012. PharmaceuticalIntelligence.Wordpress.com

Hawrylycz MJ, Lein ES, Guillozet-Bongaarts AL, Shen EH, Ng L, et al. An anatomically comprehensive atlas of the adult human brain transcriptome. Nature  Sept 14-20, 2012

Sarkar A. Prediction of Nucleosome Positioning and Occupancy Using a Statistical Mechanics Model. 9/12/2012. PharmaceuticalIntelligence.WordPress.com

Heijden et al.   Connecting nucleosome positions with free energy landscapes. (Proc Natl Acad Sci U S A. 2012, Aug 20 [Epub ahead of print]).  http://www.ncbi.nlm.nih.gov/pubmed/22908247

2. Fiddling with an expanded genetic alphabet – greater flexibility in design of treatment (pharmaneogenesis?)

Diagram of DNA polymerase extending a DNA strand and proof-reading. (Photo credit: Wikipedia)

A clear indication of this emerging remodeling of the genetic alphabet is a new study led by scientists at The Scripps Research Institute, which appeared in the June 3, 2012 issue of Nature Chemical Biology and indicates that the genetic code as we know it may be expanded to include synthetic and unnatural sequence pairing (Study Suggests Expanding the Genetic Alphabet May Be Easier than Previously Thought, Genome). They infer that the genetic instructions for living organisms, which are composed of four bases (C, G, A and T), are open to unnatural letters. An expanded “DNA alphabet” could carry more information than natural DNA, potentially coding for a much wider range of molecules and enabling a variety of powerful applications. Such an expansion would further extend the translation of portions of DNA into new transcriptional proteins that are heretofore unknown but have metabolic relevance and therapeutic potential. The existence of such pairing in nature has been studied in eukaryotes for at least a decade, and may have a role in biodiversity. The investigators show how a previously identified pair of artificial DNA bases can go through the DNA replication process almost as efficiently as the four natural bases.  This could as well be translated into human diversity and human diseases.

The Romesberg laboratory collaborated on the new study and has been trying to find a way to extend the DNA alphabet since the late 1990s. In 2008, it developed the efficiently replicating bases NaM and 5SICS, which come together as a complementary base pair within the DNA helix, much as, in normal DNA, the base adenine (A) pairs with thymine (T), and cytosine (C) pairs with guanine (G). It had been clear that their chemical structures lack the ability to form the hydrogen bonds that join natural base pairs in DNA. Such bonds had been thought to be an absolute requirement for successful DNA replication, but that is not the case, because other bonds can be in play.

The data strongly suggested that NaM and 5SICS do not even approximate the edge-to-edge geometry of natural base pairs—termed the Watson-Crick geometry, after the co-discoverers of the DNA double helix. Instead, they join in a looser, overlapping, “intercalated” fashion that resembles a ‘mispair.’ In test after test, the NaM-5SICS pair was efficiently replicable even though it appeared that the DNA polymerase didn’t recognize it. The structural data showed that the NaM-5SICS pair maintains an abnormal, intercalated structure within double-helix DNA—but remarkably adopts the normal, edge-to-edge, “Watson-Crick” positioning when gripped by the polymerase during the crucial moments of DNA replication. NaM and 5SICS, lacking hydrogen bonds, are held together in the DNA double helix by “hydrophobic” forces, which cause certain molecular structures (like those found in oil) to be repelled by water molecules, and thus to cling together in a watery medium.

The finding suggests that NaM-5SICS, and potentially other hydrophobically bound base pairs, could be used to extend the DNA alphabet, and that Evolution’s choice of the existing four-letter DNA alphabet—on this planet—may not be the only one possible, leaving open the development of life based on other genetic systems.

3. Studies that consider a DNA triplet model that includes one or more NATURAL nucleosides and appears closely allied to the formation of the disulfide bond and oxidation-reduction reactions

This independent work is being conducted based on a similar concept. John Berger, founder of Triplex DNA, has commented on this. He emphasizes sulfur as the most important element for understanding the evolution of metabolic pathways in the human transcriptome. It is a combination of sulfur-34 and sulfur-32 (atomic mass units). S34 is element 16 plus fluorine, while S32 is element 16 plus phosphorus. The cysteine-cystine bond is the bridge and controller between inorganic chemistry (fluorine) and organic chemistry (phosphorus). He uses a dual spelling, “sulfphur”, to combine the two when referring to the master catalyst of oxidation-reduction reactions and its various isotopic alleles (please note the duality principle, which is nature’s most important pattern). Sulfphur is methionine, S-adenosylmethionine, cysteine, cystine, taurine, glutathione, acetyl coenzyme A, biotin, lipoic acid, H2S, H2SO4, HSO3-, cytochromes, thioredoxins, ferredoxins, purple sulfphur anaerobic bacteria (prokaryotes), hydrocarbons, green sulfphur bacteria, garlic, penicillin and many antibiotics, and hundreds of CNS drugs for parasites and fungal antagonists. These are but a few names which come to mind. It is at the heart of the Krebs cycle of oxidative phosphorylation, i.e., ATP. It is also a second pathway to purine metabolism and nucleic acids. It literally is the key enzyme chemistry between RNA and DNA, i.e., the SH thiol bond oxidized to SS (DNA) cysteine through thioredoxins, ferredoxins, and nitrogenase. The immune system is founded upon sulfphur compounds and processes. In photosynthesis, Fe4S4 to Fe2S3 absorbs the entire electromagnetic spectrum, which is filtered by the Van Allen belt some 75 miles above earth. Look up Chromatium vinosum or Allochromatium species.  There is reasonable evidence that this is the first symbiotic species of sulfphur anaerobic bacteria (Fe4S4), with high-potential millivolts, which drives photosynthesis while making glucose with H2S.

He envisions a sulfphur control map to automate human metabolism with exact timing sequences, at specific three-dimensional coordinates on Bravais crystalline lattices. He proposes adding the inosine-xanthosine family to the current five-nucleotide genetic code. Finally, he adds, the expanded genetic code is populated with “synthetic nucleosides and nucleotides” with all kinds of customized functional side groups, which often reshape nature’s allosteric and physiochemical properties. The inosine family is nature’s natural evolutionary partner with the adenosine and guanosine families in purine synthesis de novo, salvage, and catabolic degradation. Inosine has three major enzyme systems (IMPDH 1, 2 & 3 for purine ring closure, HGPRT for purine salvage, and xanthine oxidase and xanthine dehydrogenase).

DNA replication or DNA synthesis is the process of copying a double-stranded DNA molecule. This process is paramount to all life as we know it. (Photo credit: Wikipedia)

4. Nutritional regulation of gene expression, an essential role of sulfur, and metabolic control

Finally, the research carried out for decades by Yves Ingenbleek and the late Vernon Young warrants mention. According to their work, sulfur is again tagged as essential for health. Sulfur (S) is the seventh most abundant element measurable in human tissues, and its provision is mainly ensured by the intake of methionine (Met) found in plant and animal proteins. Met is endowed with unique functional properties, as it controls the ribosomal initiation of protein synthesis, governs a myriad of major metabolic and catalytic activities, and may be subjected to reversible redox processes that help safeguard protein integrity.

Diets with inadequate amounts of methionine (Met) produce overt or subclinical protein malnutrition, which has serious morbid consequences. The result is a reduction in the size of the lean body mass (LBM), best identified by serial measurement of plasma transthyretin (TTR), which is seen with unachieved replenishment (chronic malnutrition, strict veganism) or excessive losses (trauma, burns, inflammatory diseases).  This status is accompanied by a rise in homocysteine and a concomitant fall in methionine.  The S:N ratio is quite invariant but depends on the source: it is typically 1:20 for plant protein sources and 1:14.5 for animal protein sources.  The key enzyme involved in the control of Met in man is cystathionine-β-synthase, which declines with inadequate dietary provision of S, and the loss is not compensated by cobalamin for CH3- transfer.

As a result of the disordered metabolic state from inadequate sulfur intake (the S:N ratio is lower in plants than in animals), the transsulfuration pathway is depressed at the cystathionine-β-synthase (CβS) level, triggering the upstream sequestration of homocysteine (Hcy) in biological fluids and promoting its conversion back to Met. Comparable remethylation reactions from Hcy are stimulated, indicating that Met homeostasis benefits from high metabolic priority. Maintenance of beneficial Met homeostasis is counterpoised by the drop of cysteine (Cys) and glutathione (GSH) values downstream of CβS, depleting the reducing molecules implicated in the regulation of the three desulfuration pathways.

5. The effect of protein malnutrition and/or the inflammatory state on accretion of LBM: in closer focus

Hepatic synthesis is influenced by nutritional and inflammatory circumstances working concomitantly, and liver production of TTR integrates the dietary and stressful components of any disease spectrum. Thus we have a depletion of visceral transport proteins made by the liver and fat-free weight loss secondary to protein catabolism. This is most accurately reflected by TTR, which is a rapid-turnover protein involved in transport; it is essential for thyroid function (thyroxine-binding prealbumin) and is tied to retinol-binding protein. Furthermore, protein accretion is dependent on a sulfonation reaction with 2 ATP.  Consequently, kwashiorkor is associated with thyroid goiter, as the pituitary-thyroid axis is a major sulfonation target. With this in mind, it is not surprising that TTR is the sole plasma protein whose evolution over time closely follows the shape outlined by LBM fluctuations. Serial measurement of TTR therefore provides unequaled information on the alterations affecting overall protein nutritional status. Recent advances in TTR physiopathology emphasize the detecting power and preventive role played by the protein in hyperhomocysteinemic states.

Individuals submitted to N-restricted regimens are basically able to maintain N homeostasis until very late in the starvation process, but the N balance study only provides an overall estimate of N gains and losses and fails to identify the tissue sites and specific interorgan fluxes involved. Using vastly improved methods, the LBM has been measured in its components. The LBM of the reference man contains 98% of total body potassium (TBK) and the bulk of total body sulfur (TBS). TBK and TBS reach equal intracellular amounts (140 g each) and share distribution patterns (half in skeletal muscle (SM) and half in the rest of the cell mass). The body content of K and S largely exceeds that of magnesium (19 g), iron (4.2 g) and zinc (2.3 g).

Total body nitrogen (TBN) and TBK are highly correlated in healthy subjects, and both parameters manifest an age-dependent curvilinear decline with an accelerated decrease after 65 years. Skeletal muscle undergoes a 15% reduction in size per decade, an involutive process. The trend toward sarcopenia is more marked and rapid in elderly men than in elderly women, decreasing strength and functional capacity. The downward SM slope may be somewhat slowed by physical training or accelerated by supranormal cytokine status, as reported in apparently healthy aged persons suffering low-grade inflammation or in critically ill patients whose muscle mass undergoes proteolysis.

6. The results of the events described are:

  • Declining generation of hydrogen sulfide (H2S) from enzymatic sources and in the non-enzymatic reduction of elemental S to H2S.
  • The biogenesis of H2S via non-enzymatic reduction is further inhibited in areas where earth’s crust is depleted in elemental sulfur (S8) and sulfate oxyanions.
  • Elemental S operates as co-factor of several (apo)enzymes critically involved in the control of oxidative processes.

The combination of dietary protein and sulfur deficiencies constitutes a novel clinical entity threatening plant-eating population groups. Affected individuals have defective production of the Cys, GSH and H2S reductants, explaining the persistence of an oxidative burden.

7. The clinical entity increases the risk of developing:

  • cardiovascular diseases (CVD) and
  • stroke

in plant-eating populations regardless of Framingham criteria and vitamin-B status.
Met molecules supplied by dietary proteins are submitted to transmethylation processes resulting in the release of Hcy which:

  • either undergoes Hcy → Met remethylation (RM) pathways or
  • is committed to transsulfuration decay.

Impairment of CβS activity, as described in protein malnutrition, entails supranormal accumulation of Hcy in body fluids, stimulation of remethylation activity, and maintenance of Met homeostasis. The data show that combined protein and S deficiencies work in concert to deplete Cys, GSH and H2S from their body reserves, hence preventing these reducing molecules from properly facing the oxidative stress imposed by hyperhomocysteinemia.

Although unrecognized up to now, this nutritional disorder is one of the commonest worldwide, reaching top prevalence in populated regions of Southeastern Asia. Increased risk of hyperhomocysteinemia and oxidative stress may also affect individuals suffering from intestinal malabsorption, or members of westernized communities who have adopted vegan dietary lifestyles.

Ingenbleek Y. Hyperhomocysteinemia is a biomarker of sulfur-deficiency in human morbidities. Open Clin. Chem. J. 2009 ; 2 : 49-60.

8. Dysfunctional metabolism in the transformation of cells to cancer

A third development is also important and possibly related. The transition a cell goes through in becoming cancerous tends to be driven by changes to the cell’s DNA, but that is not the whole story. Large-scale studies of the metabolic processes going on in cancer cells are being carried out at Oxford, UK, in collaboration with Japanese workers. This thread will extend our insight into the metabolome. Otto Warburg, the pioneer in respiration studies, pointed out in the early 1900s that most cancer cells get the energy they need predominantly through a high utilization of glucose with lower respiration (the metabolic process that breaks down glucose to release energy). This helps the cancer cells deal with the low oxygen levels that tend to be present in a tumor; the tissue reverts to a metabolic profile of anaerobiosis.  Studies of the genetic basis of cancer and of dysfunctional metabolism in cancer cells are complementary. Tomoyoshi Soga’s large lab in Japan has been at the forefront of developing the technology for metabolomics research over the past couple of decades (metabolomics being the ugly-sounding term used to describe research that studies all metabolic processes at once, much as genomics is the study of the entire genome).

Their results have led to the idea that some metabolic compounds, or metabolites, when they accumulate in cells, can cause changes to metabolic processes and set cells off on a path toward cancer. The collaborators have published a perspective article in the journal Frontiers in Molecular and Cellular Oncology that proposes fumarate as such an ‘oncometabolite’. Fumarate is a standard compound involved in cellular metabolism. The researchers summarize work showing how accumulation of fumarate, when an enzyme goes wrong, affects various biological pathways in the cell. It shifts the balance of metabolic processes and disrupts the cell in ways that could favor the development of cancer.  This is of particular interest because fumarate is the intermediate in the TCA cycle that is converted to malate.

Animation of the structure of a section of DNA. The bases lie horizontally between the two spiraling strands. (Photo credit: Wikipedia)

The Keio group is able to label glucose or glutamine, basic biological sources of fuel for cells, and track the pathways cells use to burn up the fuel.  As these studies proceed, they could profile the metabolites in a cohort of tumor samples and matched normal tissue. This would produce a dataset of the concentrations of hundreds of different metabolites in each group. Statistical approaches could suggest which metabolic pathways were abnormal. These would then be the subject of experiments targeting the pathways to confirm the relationship between changed metabolism and uncontrolled growth of the cancer cells.



Typical changes in CK-MB and cardiac troponin in acute myocardial infarction. (Photo credit: Wikipedia)

Reporter and curator: Larry H Bernstein, MD, FCAP

This posting is a follow-up on two previous posts covering the design and handling of health information technology (HIT) to improve healthcare outcomes and lower costs through better workflow and diagnostics, a process that is self-correcting over time.

The first example is a non-technology method designed by Lee Goldman (the Goldman algorithm) that was later implemented at Cook County Hospital in Chicago with great success.  It has been known that there is over-triage of patients to intensive care beds, adding to the costs of medical care.  If the differentiation between acute myocardial infarction (AMI) and other causes of chest pain could be made more accurate, the quantity of scarce resources used on unnecessary admissions could be reduced.  The Goldman algorithm was introduced in 1982 during a training phase at Yale-New Haven Hospital based on 482 patients, and later validated at the BWH (in Boston) on 468 patients. They demonstrated improvement in sensitivity as well as specificity (67% to 77%) and positive predictive value (34% to 42%).  They modified the computer-derived algorithm in 1988 to achieve better results in the triage to the ICU of patients with chest pain, based on a study group of 1379 patients.  The process was tested prospectively on 4770 patients at two university and four community hospitals.  The specificity in recognizing the absence of AMI improved with the algorithm versus physician judgment (74% vs 71%), while the sensitivity for admission was not different (88%).  Decisions based solely on the protocol would have decreased admissions of patients without AMI by 11.5% without adverse effects.  The study was repeated by Qamar et al. with equal success.
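
To make the performance figures above concrete, the sketch below shows how sensitivity, specificity, and positive predictive value are computed from a two-by-two triage table. The function and counts are hypothetical illustrations of the standard definitions, not data from the Goldman or Qamar studies.

```python
def triage_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic-performance measures for a binary triage decision
    (admit vs. do not admit) against the true diagnosis (AMI vs. no AMI)."""
    return {
        "sensitivity": tp / (tp + fn),  # AMI patients correctly admitted
        "specificity": tn / (tn + fp),  # non-AMI patients correctly not admitted
        "ppv": tp / (tp + fp),          # probability of AMI given an admit decision
        "npv": tn / (tn + fn),          # probability of no AMI given a non-admit decision
    }

# Hypothetical counts for illustration only.
print(triage_metrics(tp=88, fp=120, tn=250, fn=12))
```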

Pain in acute myocardial infarction (front). (Photo credit: Wikipedia)

An ECG showing Pardee waves indicating acute myocardial infarction in the inferior leads II, III and aVF, with reciprocal changes in the anterolateral leads. (Photo credit: Wikipedia)

Acute myocardial infarction with coagulative necrosis (4). (Photo credit: Wikipedia)

Goldman L, Cook EF, Brand DA, Lee TH, Rouan GW, Weisberg MC, et al. A computer protocol to predict myocardial infarction in emergency department patients with chest pain. N Engl J Med. 1988;318:797-803.

A Qamar, C McPherson, J Babb, L Bernstein, M Werdmann, D Yasick, S Zarich. The Goldman algorithm revisited: prospective evaluation of a computer-derived algorithm versus unaided physician judgment in suspected acute myocardial infarction. Am Heart J 1999; 138(4 Pt 1):705-709. ICID: 825629

The usual accepted method for determining the decision value of a predictive variable is the receiver operating characteristic (ROC) curve, which requires mapping each value of the variable against the percent with disease on the Y-axis.   This requires a review of every case entered into the study.  The ROC curve is also used to validate classification studies, for example of data on leukemia markers for research purposes, as shown by Jay Magidson in his demonstration of Correlated Component Regression (2012) (Statistical Innovations, Inc.).  The contribution of each predictor is measured by the Akaike Information Criterion and the Bayes Information Criterion, which have proved to be essential tests over the last 20 years.
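
For readers unfamiliar with the two criteria named above, the sketch below shows the standard formulas; the log-likelihoods and parameter counts are hypothetical values chosen only to illustrate how a more complex model is penalized.

```python
import math

def aic(log_likelihood: float, k: int) -> float:
    """Akaike Information Criterion: 2k - 2*ln(L); lower values are preferred."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood: float, k: int, n: int) -> float:
    """Bayes (Schwarz) Information Criterion: k*ln(n) - 2*ln(L); lower is preferred."""
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical comparison of a 3-predictor and a 6-predictor model fitted to 482 cases.
for label, ll, k in [("3 predictors", -210.3, 3), ("6 predictors", -205.9, 6)]:
    print(label, round(aic(ll, k), 1), round(bic(ll, k, n=482), 1))
```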

I go back 20 years and revisit the application of these principles in clinical diagnostics, although the ROC was introduced to medicine earlier, in radiology.   A full rendering of this matter can be found in the following:
R A Rudolph, L H Bernstein, J Babb. Information induction for predicting acute myocardial infarction. Clin Chem 1988; 34(10):2031-2038. ICID: 825568.

Rypka EW. Methods to evaluate and develop the decision process in the selection of tests. Clin Lab Med 1992; 12:355

Rypka EW. Syndromic Classification: A process for amplifying information using S-Clustering. Nutrition 1996;12(11/12):827-9.

Christianson R. Foundations of inductive reasoning. 1964.  Entropy Publications. Lincoln, MA.

Inability to classify information is a major problem in deriving and validating hypotheses from PRIMARY data sets necessary to establish a measure of outcome effectiveness.  When using quantitative data, decision limits have to be determined that best distinguish the populations investigated.   We are concerned with accurate assignment into uniquely verifiable groups by information in test relationships.  Uncertainty in assigning to a supervisory classification can only be relieved by providing sufficient data.

A method for examining the endogenous information in the data is used to determine decision points.  The reference or null set is defined as a class having no information.  When information is present in the data, the entropy (uncertainty in the data set) is reduced by the amount of information provided.  This reduction is measurable and may be referred to as the Kullback-Leibler distance, which was extended by Akaike to include statistical theory.   An approach using EW Rypka’s S-clustering has been devised to find optimal decision values that separate the groups being classified.  Further, it is possible to obtain PRIMARY data on-line and continually create primary classifications (learning matrices).  From the primary classifications, test-minimized sets of features are determined with optimal, useful and sufficient information for accurately distinguishing elements (patients).  Primary classifications can be continually created from PRIMARY data.  More recent and complex work classifying hematology data, using a 30,000-patient data set and 16 variables to identify the anemias, moderate SIRS, sepsis, and lymphocytic and platelet disorders, has been published and recently presented.  Another classification, for malnutrition and stress hypermetabolism, is now validated and in press in the journal Nutrition (2012), Elsevier.
G David, LH Bernstein, RR Coifman. Generating Evidence Based Interpretation of Hematology Screens via Anomaly Characterization. Open Clinical Chemistry Journal 2011; 4(1): 10-16. ICID: 939928

G David; LH Bernstein; RR Coifman. The Automated Malnutrition Assessment. Accepted 29 April 2012.
http://www.nutritionjrnl.com. Nutrition (2012), doi:10.1016/j.nut.2012.04.017.

Keywords: Network Algorithm; unsupervised classification; malnutrition screening; protein energy malnutrition (PEM); malnutrition risk; characteristic metric; characteristic profile; data characterization; non-linear differential diagnosis

Summary: We propose an automated nutritional assessment (ANA) algorithm that provides a method for malnutrition risk prediction with high accuracy and reliability. The problem of rapidly identifying risk and severity of malnutrition is crucial for minimizing medical and surgical complications. We characterized for each patient a unique profile and mapped similar patients into a classification. We also found that the laboratory parameters were sufficient for the automated risk prediction.
We here propose a simple, workable algorithm that provides assistance for interpreting any set of data from the screen of a blood analysis with high accuracy, reliability, and interoperability with an electronic medical record. This has become possible recently as a result of advances in mathematics, low computational costs, and rapid transmission of the necessary data for computation.  In this example, acute myocardial infarction (AMI) is classified using isoenzyme CK-MB activity, total LD, and isoenzyme LD-1, and repeated studies have shown the high power of laboratory features for the diagnosis of AMI, especially with NSTEMI.  A later study includes the scale values for chest pain and for ECG changes to create the model.

LH Bernstein, A Qamar, C McPherson, S Zarich.  Evaluating a new graphical ordinal logit method (GOLDminer) in the diagnosis of myocardial infarction utilizing clinical features and laboratory data. Yale J Biol Med 1999; 72(4):259-268. ICID: 825617

The quantitative measure of information, Shannon entropy, treats data as a message transmission.  We are interested in classifying data with near-errorless discrimination.  The method assigns upper limits of normal to tests computed from Rudolph’s maximum-entropy definition of group-based normal reference.  Using the Bernoulli trial to determine the maximum-entropy reference, we determine from the entropy in the data a probability of a positive result that is the same for each test and conditionally independent of other results, by setting the binary decision level for each test.  The entropy of the discrete distribution is calculated from the probabilities of the distribution. The probability distribution of the binary patterns is not flat, and the entropy decreases when there is information in the data.  The decrease in entropy is the Kullback-Leibler distance.

The basic principle of separatory clustering is extracting features from endogenous data that amplify or maximize structural information into disjoint or separable classes.  This differs from other methods because it finds in a database a theoretic (or greater) number of variables with the required VARIETY that map closest to an ideal, theoretic, or structural information standard. Scaling allows using variables with different numbers of message choices (number bases) in the same matrix: binary, ternary, etc. (representing yes-no; small, modest, large, largest).   The ideal number of classes is defined by x^n.   In viewing a variable value we think of it as low, normal, high, very high, etc.  A system works with related parts in harmony.  This frame of reference improves the applicability of S-clustering.  By definition, a unit of information is log_r(r) = 1.

The method of creating a syndromic classification to control variety in the system also performs a semantic function by attributing a term to a Port Royal class.  If any of the attributes is removed, the class loses its meaning.  Any significant overlap between the groups would be reduced by adding the requisite variety.  S-clustering is an objective and most desirable way to find the shortest route to diagnosis, and an objective way of determining practice parameters.

Multiple-test binary decision patterns, where the decision values are CK-MB = 18 U/L, LD-1 = 36 U/L, and %LD-1 = 32%.

No.   Pattern   Freq    P(i)      Self-information (bits)    Weighted information (bits)
0     000       26      0.1831    2.4493                      0.4485
1     001        3      0.0211    5.5648                      0.1176
2     010        4      0.0282    5.1497                      0.1451
3     011        2      0.0141    6.1497                      0.0866
4     100        6      0.0423    4.5648                      0.1929
6     110        8      0.0563    4.1497                      0.2338
7     111       93      0.6549    0.6106                      0.3999

Entropy (sum of weighted information, i.e., the average): 1.6243 bits

The effective information values determine the least-error decision points. Non-AMI patients exhibit patterns 0, 1, 2, 3, and 4; AMI patients exhibit patterns 6 and 7.  There is one false positive (in pattern 4) and one false negative (in pattern 6); the error rate is 1.4% (2/142).
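
As a check on the arithmetic in the table above, the short sketch below recomputes the self-information and weighted-information columns and the 1.6243-bit entropy from the pattern frequencies, and then takes the maximum-entropy (uniform) distribution over the 2^3 = 8 possible three-test patterns as the no-information reference to obtain a Kullback-Leibler distance; the choice of the uniform reference here is our reading of the text, not a detail given in it.

```python
import math

# Frequencies of the three-test binary decision patterns from the table above
# (pattern 101 did not occur in this series of 142 patients).
freq = {"000": 26, "001": 3, "010": 4, "011": 2, "100": 6, "110": 8, "111": 93}
total = sum(freq.values())

entropy = 0.0
for pattern, count in freq.items():
    p = count / total               # relative frequency of the pattern
    info = -math.log2(p)            # self-information, in bits
    entropy += p * info             # weighted contribution to the average
    print(f"{pattern}  p={p:.4f}  I={info:.4f}  p*I={p * info:.4f}")

print(f"entropy = {entropy:.4f} bits")          # ~1.6243 bits

# Kullback-Leibler distance from the uniform (maximum-entropy) reference over
# the 8 possible patterns: KL(P || U) = log2(8) - H(P).
print(f"KL distance from maximum entropy = {math.log2(8) - entropy:.4f} bits")
```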

Summary:

A major problem in using quantitative data is the lack of a justifiable definition of reference (normal).  Our information model consists of a population group, a set of attributes derived from observations, and basic definitions using Shannon’s information measure, entropy. In this model, the population set and its values for its variables are considered to be the only information available.  The finding of a flat distribution with the Bernoulli test defines the reference population, which has no information.  The complementary syndromic group, treated in the same way, produces a distribution that is not flat and has less than maximum information uncertainty.

The vector of probabilities – (1/2), (1/2), … (1/2) – can be related to the path calculated from the Rypka-Fletcher equation,

Ct = (1 − 2^−k) / (1 − 2^−n),

which determines the theoretical maximum comprehension from the test of n attributes.  We constructed an ROC curve from the original iris data of R Fisher, from four measurements of sepal and petal, with a result obtained using information-based induction principles to determine discriminant points without the classification that had to be used for the discriminant analysis.   The principle of maximum entropy, as formulated by Jaynes and Tribus, proposes that for problems of statistical inference – which, as defined, are problems of induction – the probabilities should be assigned so that the entropy function is maximized.  Good proposed that maximum entropy be used to define the null hypothesis, and Rudolph proposed that medical reference be defined as at maximum entropy.
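
A minimal sketch of the Rypka-Fletcher expression above, under the assumption that k is the number of attributes actually examined and n the total number available; the example values are illustrative only.

```python
def comprehension(k: int, n: int) -> float:
    """Rypka-Fletcher fraction of theoretical maximum comprehension:
    Ct = (1 - 2**-k) / (1 - 2**-n)."""
    return (1 - 2.0 ** -k) / (1 - 2.0 ** -n)

# With 3 of 4 binary attributes examined, about 93% of the obtainable
# separation is already achieved; with all 4, Ct = 1.
print(comprehension(k=3, n=4))   # ~0.9333
print(comprehension(k=4, n=4))   # 1.0
```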

Rudolph RA. A general purpose information processing automation: generating Port Royal Classes with probabilistic information. Intl Proc Soc Gen systems Res 1985;2:624-30.

Jaynes ET. Information theory and statistical mechanics. Phys Rev 1957;106:620-30.

Tribus M. Where do we stand after 30 years of maximum entropy? In: Levine RD, Tribus M, eds. The maximum entropy formalism. Cambridge, MA: MIT Press, 1978.

Good IJ. Maximum entropy for hypothesis formulation, especially for multidimensional contingency tables. Ann Math Stat 1963;34:911-34.

The most important reason for using as many tests as is practicable is derived from the prominent role of redundancy in transmitting information (Noisy Channel Theorem).  The proof of this theorem does not tell how to accomplish nearly errorless discrimination, but redundancy is essential.

In conclusion, we have been using the effective information (derived from the Kullback-Leibler distance) provided by more than one test to determine normal reference and locate decision values.  The syndromes and patterns that are extracted are empirically verifiable.



Carotid Stenting: Vascular surgeons have pointed to more minor strokes in the stenting group and cardiologists to more myocardial infarctions in the CEA cohort.

Reporter: Aviva Lev-Ari, PhD, RN

Why CREST is a Game Changer for Carotid Stenting

 

Speaker: Gary Roubin, MD, PhD

The CREST Trial was the largest, most rigorous, and, because of the way it was conducted, the most relevant investigation into the role of carotid stenting to date.

The National Institute of Neurological Disorders and Stroke, after critical review, concluded that the study “Demonstrated that Endarterectomy and Stenting were equally efficacious methods of preventing stroke caused by carotid bifurcation stenoses.”

The primary endpoint was unequivocal, but the components of this combined endpoint have been dissected by various groups to support different conclusions. Vascular surgeons have pointed to more minor strokes in the stenting group and cardiologists to more myocardial infarctions in the CEA cohort. The CREST Trial demonstrated remarkable safety for both procedures, with a very low and similar major stroke and death rate.  The small number of excess strokes in the stenting group were minor strokes, and, importantly, further analyses of temporal trends have demonstrated that this delta disappeared over the course of the study.  Stenting stroke rates improved over time, probably related to better selection of younger patients with more suitable anatomy for stenting.

If CREST were to restart in 2012, it is extremely unlikely that any difference whatsoever would be seen in comparing CEA and stenting.

Importantly, minor strokes were not associated with a later excess mortality, while a periprocedural MI was associated with death over the follow-up period.

Quality-of-life analyses reflected the minor, non-disabling nature of the small number of excess minor strokes.  The comprehensive panel of SF-36 mental and physical quality-of-life measures demonstrated no difference whatsoever between stenting and CEA.

Despite the study being completed with first-generation stents and embolic protection devices, the outcomes were gratifying.  A critical FDA panel subsequently approved the extension of labeling for stenting use in standard-risk CEA patients.

Now we await a considered response from CMS to acknowledge the demonstration of “reasonable safety and efficacy” and the long-awaited reimbursement for this patient-friendly, percutaneous procedure.

We are now experiencing a curious pushback from some in the neurological community, and even some surgeons, who argue that neither CEA nor stenting is needed in the treatment of asymptomatic patients.

  • This—despite multiple trials that have demonstrated the superiority of revascularization and markedly improved revascularization results.
  • This—despite no scientific evidence to support the equivalence of medical therapy in preventing stroke in carotid bifurcation disease.

 

Panel Discussion

 

NICK HOPKINS

The CREST data stand on their own merits. Looking at the survival curves for minor stroke versus myocardial infarction, as a surgeon who does a large volume of CEA and stenting, I am impressed with the benign outcome in the minor strokes and the bad outcomes associated with M.I. Again, as a surgeon, I doubt we can do anything to reduce the incidence of MI, but as a stent operator, I feel we can do much more to further reduce the incidence of stroke events.

For example, the neurological community has focused on the ICSS Trial sub-study that demonstrated a significant incidence of MRI-DWI defects after stenting. We don’t really know what DWI changes mean, but the neurological community assumes they are bad.  We see a 15-20% incidence with just a routine angiogram, and these temporary defects are probably caused by micro-bubbles. In the ICSS trial, the incidence of these lesions was 50% in the stent arm and 12% in the CEA arm.   The conclusion was that stenting was “bad,” but embolic protection devices were used in only 75% of the ICSS patients. Now we have new devices, such as proximal occlusion balloons, that have been shown to markedly reduce the incidence of these lesions. So this is just one example of new stenting techniques that will reduce the incidence of stroke to even lower rates. There is also a lot of activity in the industry to make carotid stents covered with a fine, semi-permeable membrane that will reduce the chance of embolic debris from the procedure. With current devices, and certainly the stent used in CREST, debris may be forced through the stent struts when you dilate.

So to me these are just two examples of things that will improve the stroke rate from the current 1% to 3% to near zero.

TY COLLINS

I am not sure what happened at the Medicare Coverage Advisory Meeting last week (January 2012) but basically the committee did not appear to focus on the CREST data.

JIRI VITEK

One of the most important differentiating features of CREST compared to the European Trials was the emphasis on operator credentialing and ongoing training of operators over the 8 years of recruitment.  This is evidenced by the improvement of stent outcomes over the time course of the trial.

I also want to point out that none of these trials place enough emphasis on cranial nerve injuries that are an exclusive and important complication of CEA.  In CREST there was a 4.5% incidence of cranial nerve palsy in the CEA cohort, and 2% were still present at 6 months.

BARRY KATZEN

I firmly believe that CREST is a “game changer.” I spent a full day at the Medicare Coverage Advisory Committee last week (January 2012). Two things happened. The first was that the entire discussion was derailed by irrelevant discussion of the supposed value of medical therapy. As Ty said before, nobody appeared to be focused on the CREST data but was distracted by arguments about the value of medical management. A neurologist, Anne Abbott, took a large amount of time basically “trouncing” any type of revascularization therapy. 

The critical consideration by the FDA and their approval of the devices for this indication is a better representation of where we stand today. Interdisciplinary factional disputes and politics aside, I believe CMS will want to expand the coverage for carotid stenting in some way.

JIM ZIDAR

I will say that although CREST is a “game changer”, it seems the cards may be stacked against stenting.  Besides the cost issues that may be associated with expanded coverage, they are influenced by the self-interest of the Society of Vascular Surgery, which decided it was not going to support the data. If coverage is not expanded for stenting, I wonder if it is not unreasonable for other professional societies to conclude and pronounce that CEA should not be reimbursed in asymptomatic patients.

NICK HOPKINS

CMS would be interested in that.  I actually think that given the dialogue on the day, it definitely could have gone that way.

BARRY KATZEN

It is fascinating to think about this. Given all of the level 1 scientific data supporting revascularization over medical therapy, in the United States today, CEA is the standard of care. Primary care physicians throughout the country recommend this for patients with severe stenoses. CMS is basically talking about erasing that standard of care or now an equivalent procedure from a reimbursement standpoint.

GARY ROUBIN

Let us all be clear about the evidence from CREST. In the younger patients, male patients and asymptomatic patients there was no significant difference in outcomes for stroke and death between stenting and CEA. 

 


 

Reporter: Aviva Lev-Ari, PhD, RN

NATIONAL CENTERS FOR BIOMEDICAL COMPUTING

An overarching approach to several disciplines:

  • Other genomics-related subdisciplines
  • The biomedical computing space

An illustration of the systems approach to biology

http://en.wikipedia.org/wiki/Systems_biology

 

The National Centers for Biomedical Computing (NCBCs) are part of the U.S. NIH plan to develop and implement the core of a universal computing infrastructure that is urgently needed to speed progress in biomedical research. Their mission is to create innovative software programs and other tools that will enable the biomedical community to integrate, analyze, model, simulate, and share data on human health and disease.

Biomedical Information Science and Technology Initiative (BISTI): Recognizing the potential benefits to human health that can be realized from applying and advancing the field of biomedical computing, the Biomedical Information Science and Technology Initiative (BISTI) was launched at the NIH in April 2000. This initiative is aimed at making optimal use of computer science and technology to address problems in biology and medicine. The full text of the original BISTI Report (June 1999) is available.

Current Centers

  • National Center for Simulation of Biological Structures (SimBioS) at Stanford University
  • National Center for the Multiscale Analysis of Genomic and Cellular Networks (MAGNet) at Columbia University
  • National Alliance for Medical Image Computing (NA-MIC) at Brigham and Women’s Hospital, Boston, MA
  • Integrating Biology and the Bedside (I2B2) at Brigham and Women’s Hospital, Boston, MA
  • National Center for Biomedical Ontology (NCBO) at Stanford University
  • Integrate Data for Analysis, Anonymization, and Sharing (IDASH) at the University of California, San Diego

Biositemap is a way for a biomedical research institution or organisation to show how biological information is distributed throughout their Information Technology systems and networks. This information may be shared with other organisations and researchers.

The Biositemap enables web browsers, crawlers and robots to easily access and process the information for use in other systems, media and computational formats. Biositemaps protocols provide clues for the Biositemap web harvesters, allowing them to find resources and content across the whole interlinked Biositemap system. This means that human or machine users can access any relevant information on any topic across all organisations throughout the Biositemap system and bring it to their own systems for assimilation or analysis.
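
To illustrate the idea of a machine-harvestable resource description, the sketch below shows a minimal, hypothetical record of the kind a biositemap might expose, together with a toy harvester; the field names, values, and function are invented for illustration and are not the official Biositemaps schema.

```python
# Hypothetical resource record; field names are illustrative, not the official schema.
resource_record = {
    "resource_name": "Example Alignment Toolkit",
    "organization": "Example University Biomedical Computing Center",
    "description": "Command-line tools for multiple sequence alignment.",
    "resource_type": "software",
    "keywords": ["sequence alignment", "bioinformatics"],
    "url": "https://example.org/tools/alignment",
}

def harvest(records):
    """Toy index: map each keyword to the names of resources that mention it,
    mimicking how a harvester aggregates records from many institutions."""
    index = {}
    for record in records:
        for keyword in record["keywords"]:
            index.setdefault(keyword, []).append(record["resource_name"])
    return index

print(harvest([resource_record]))
```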

http://en.wikipedia.org/wiki/Biositemaps

http://www.ncbcs.org/

For

Genome and Genetics: Resources @Stanford, @MIT, @NIH’s NCBCS

go to

http://pharmaceuticalintelligence.com/2012/09/18/genome-and-genetics-resources/

 

Biomedical Computation Review (BCR) is a quarterly, open-access magazine funded by the National Institutes of Health and published by Simbios, one of the National Centers for Biomedical Computing, located at Stanford University. First published in 2005, BCR covers such topics as molecular dynamics, genomics, proteomics, physics-based simulation, systems biology, and other research involving computational biology. BCR’s articles are targeted to those with a general science or biology background, in order to build a community among biomedical computational researchers who come from a variety of disciplines.

http://en.wikipedia.org/wiki/Biomedical_Computation_Review

 


 

Read Full Post »

 

Molecular pathogen identification comes to the bedside

Reporter:  Larry H Bernstein, MD, FCAP

The developments in molecular diagnostics have been proceeding at a rapid pace, so it is not surprising that they reached clinical microbiology early. Microbiology and virology have many methods for confirming the type of pathogen, but identification of a new pathogen can be delayed when specimens must be referred to a state laboratory. This will be less of an issue with the consolidation of regional facilities and associated laboratories.

I present an example of point-of-care technology from the University of California, Davis, developed by Gerald Kost and colleagues with the UC Lawrence Livermore National Point-of-Care Technologies Center.

Tran NK, Wisner DH, Albertson TE, Cohen S, et al.  Multiplex polymerase chain reaction pathogen detection in patients with suspected septicemia after trauma, emergency, and burn surgery. Surgery 2012 Mar;151(3):456-63. Epub 2011 Oct 5.  nktran@ucdavis.edu

The goal of the study: to determine the clinical value of multiplex polymerase chain reaction (PCR) testing for enhancing pathogen detection in patients with suspected septicemia after trauma, emergency, and burn surgery.

Finding: PCR-based pathogen detection quickly reveals occult bloodstream infections in these high-risk patients and may accelerate the initiation of targeted antimicrobial therapy.

Type of study: prospective observational study

Population:  30 trauma and emergency surgery patients compared to 20 burn patients.

Method: Whole blood collected at the time of routine blood cultures (BCs) was tested using a new multiplex, PCR-based pathogen detection system, and PCR results were compared with culture data.

Arbitrated Case Review

Arbitrated case review was performed by a medical intensivist, 3 trauma surgeons, 3 burn surgeons, 1 microbiologist, and an infectious disease physician to determine antimicrobial adequacy based on paired PCR/BC results. The arbitrated case review process was adapted from a previous study. Physicians were first presented cases with only BC results. Cases were then re-presented with PCR results included.

Results:

  • PCR detected more pathogens, and detected them more rapidly, than culture methods.
  • Acute Physiology and Chronic Health Evaluation II (APACHE II), Sequential Organ Failure Assessment (SOFA), and Multiple Organ Dysfunction (MODS) scores were greater in PCR-positive versus PCR-negative trauma and emergency surgery patients (P ≤ .033).
  • Negative PCR results (odds ratio, 0.194; 95% confidence interval, 0.045-0.840; P = .028) acted as an independent predictor of survival for the combined surgical patient population.
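The study reports the negative PCR result as an independent predictor of survival, expressed as an adjusted odds ratio with a 95% confidence interval. As a minimal illustration of how an odds ratio and its interval are formed, the sketch below computes an unadjusted odds ratio with a Woolf (log-odds) 95% confidence interval from a hypothetical 2x2 table of PCR status versus outcome; the counts are invented for illustration and are not the study's data, and the study's own estimate came from a regression model rather than this simple calculation.

```python
import math

# Hypothetical 2x2 table (NOT the study's counts): PCR result versus outcome
pcr_negative = {"died": 4, "survived": 26}
pcr_positive = {"died": 9, "survived": 11}

a, b = pcr_negative["died"], pcr_negative["survived"]
c, d = pcr_positive["died"], pcr_positive["survived"]

# Odds of death with a negative PCR relative to the odds with a positive PCR
odds_ratio = (a / b) / (c / d)

# Woolf's method: the log odds ratio is approximately normal with this standard error
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
z = 1.96  # two-sided 95%
ci_low = math.exp(math.log(odds_ratio) - z * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + z * se_log_or)

print(f"OR = {odds_ratio:.3f}, 95% CI ({ci_low:.3f}, {ci_high:.3f})")
```

An odds ratio below 1 with a confidence interval that excludes 1, as reported in the study, indicates that the absence of pathogen DNA is associated with better survival.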

CONCLUSION:

  • PCR results were reported faster than blood culture results.
  • Severity scores were significantly greater in PCR-positive trauma and emergency surgery patients.
  • The lack of pathogen DNA as determined by PCR served as a significant predictor of survival in the combined patient population.
  • PCR testing independent of traditional prompts for culturing may have clinical value in burn patients.

NK Tran, et al. Multiplex polymerase chain reaction pathogen detection in trauma, emergency, and burn surgery patients with suspected septicemia. Surgery. 2012 March;151(3):456–463. Published online 2011 October 5. doi: 10.1016/j.surg.2011.07.030. PMID: 21975287 [PubMed – indexed for MEDLINE]; PMCID: PMC3304499; NIHMSID: NIHMS288960.

Polymerase chain reaction, PCR (Photo credit: Wikipedia)

 

Read Full Post »

 

Reporter: Aviva Lev-Ari, PhD, RN

Ten Biotech Powerhouses Such as Abbott Laboratories (ABT) and AstraZeneca PLC (AZN) Unite to Form TransCelerate BioPharma Inc. to Accelerate the Development of New Meds

TransCelerate – New Non-Profit Organization to Speed Pharmaceutical R&D,  headquartered in Philadelphia


9/19/2012 9:29:28 AM

PHILADELPHIA, Sept. 19, 2012 /PRNewswire/ — Ten leading biopharmaceutical companies announced today that they have formed a non-profit organization to accelerate the development of new medicines. Abbott, AstraZeneca, Boehringer Ingelheim, Bristol-Myers Squibb, Eli Lilly and Company, GlaxoSmithKline, Johnson & Johnson, Pfizer, Genentech (a member of the Roche Group), and Sanofi launched TransCelerate BioPharma Inc. (“TransCelerate”), the largest-ever initiative of its kind, to identify and solve common drug development challenges with the end goals of improving the quality of clinical studies and bringing new medicines to patients faster.

 

Through participation in TransCelerate, each of the ten founding companies will combine financial and other resources, including personnel, to solve industry-wide challenges in a collaborative environment. Together, member companies have agreed to specific outcome-oriented objectives and established guidelines for sharing meaningful information and expertise to advance collaboration.

“There is widespread alignment among the heads of R&D at major pharmaceutical companies that there is a critical need to substantially increase the number of innovative new medicines, while eliminating inefficiencies that drive up R&D costs,” said newly appointed acting CEO of TransCelerate BioPharma, Garry Neil, MD, Partner at Apple Tree Partners and formerly Corporate Vice President, Science & Technology, Johnson & Johnson. “Our mission at TransCelerate BioPharma is to work together across the global research and development community and share research and solutions that will simplify and accelerate the delivery of exciting new medicines for patients.”

Members of TransCelerate have identified clinical study execution as the initiative’s initial area of focus. Five projects have been selected by the group for funding and development, including: development of a shared user interface for investigator site portals, mutual recognition of study site qualification and training, development of risk-based site monitoring approach and standards, development of clinical data standards, and establishment of a comparator drug supply model.

As shared solutions in clinical research and other areas are developed, TransCelerate will involve industry alliances including Clinical Data Interchange Standards Consortium (CDISC), Critical-Path Institute (C-Path), Clinical Trials Transformation Initiative (CTTI), Innovative Medicines Initiative (IMI), regulatory bodies including the US Food and Drug Administration (FDA) and European Medicines Agency (EMA), and Contract Research Organizations (CROs).

Janet Woodcock, MD, director of FDA’s Center for Drug Evaluation and Research, said, “We applaud the companies in TransCelerate BioPharma for joining forces to address a series of longstanding challenges in new drug development. This collaborative approach in the pre-competitive arena, utilizing the collective experience and resources of 10 leading drug companies and others to follow, has the promise to lead to new paradigms and cost savings in drug development, all of which would strengthen the industry and its ability to develop innovative and much-needed therapies for patients.”

“These leading pharmaceutical companies are in a position to significantly influence changes in the way that clinical trials are done, so that better answers about the benefits and risks of drugs and other therapies are provided in a more efficient manner,” said Robert Califf, MD, Co-Chair of CTTI and Director of the Duke Translational Medicine Institute. “This initiative is complementary to efforts of CTTI, and we look forward to working with TransCelerate BioPharma to improve the conduct of clinical trials.”

TransCelerate BioPharma evolved from relationships fostered via the Hever Group, a forum for executive R&D leadership to discuss relevant issues facing the industry and solutions for addressing common challenges. TransCelerate was incorporated in early August 2012 and will file for non-profit status this fall. The Board of Directors includes the R&D heads of the ten member companies. Membership in TransCelerate is open to all pharmaceutical and biotechnology companies that can contribute to and benefit from these shared solutions. TransCelerate’s headquarters will be located in Philadelphia, PA.

http://news.bms.com/press-release/rd-news/ten-pharmaceutical-companies-unite-accelerate-development-new-medicines-0&t=634836499683795253

 

Read Full Post »

Reporter: Aviva Lev-Ari, PhD, RN

A New Approach Uses Compression to Speed Up Genome Analysis

Public-Domain Computing Resources

Structural Bioinformatics

The BetaWrap program detects the right-handed parallel beta-helix super-secondary structural motif in primary amino acid sequences by using beta-strand interactions learned from non-beta-helix structures.
Wrap-and-pack detects beta-trefoils in protein sequences by using both pairwise beta-strand interactions and 3-D energetic packing information
The BetaWrapPro program predicts right-handed beta-helices and beta-trefoils by using both sequence profiles and pairwise beta-strand interactions, and returns coordinates for the structure.
The MSARi program identifies conserved RNA secondary structure in non-coding RNA genes and mRNAs by searching multiple sequence alignments of a large set of candidate catalogs for correlated arrangements of reverse-complementary regions.
The Paircoil2 program predicts coiled-coil domains in protein sequences by using pairwise residue correlations obtained from a coiled-coil database. The original Paircoil program is still available for use.
The MultiCoil program predicts the location of coiled-coil regions in amino acid sequences and classifies the predictions as dimeric or trimeric. An updated version, Multicoil2, will soon be available.
The LearnCoil Histidase Kinase program uses an iterative learning algorithm to detect possible coiled-coil domains in histidase kinase receptors.
The LearnCoil-VMF program uses an iterative learning algorithm to detect coiled-coil-like regions in viral membrane-fusion proteins.
The Trilogy program discovers novel sequence-structure patterns in proteins by exhaustively searching through three-residue motifs using both sequence and structure information.
The ChainTweak program efficiently samples from the neighborhood of a given base configuration by iteratively modifying a conformation using a dihedral angle representation.
The TreePack program uses a tree-decomposition based algorithm to solve the side-chain packing problem more efficiently. This algorithm is more efficient than SCWRL 3.0 while maintaining the same level of accuracy.
PartiFold: Ensemble prediction of transmembrane protein structures. Using statistical mechanics principles, partiFold computes residue contact probabilities and samples super-secondary structures from sequence only.
tFolder: Prediction of beta-sheet folding pathways. Predicts a coarse-grained representation of the folding pathway of beta-sheet proteins in a couple of minutes.
RNAmutants: Algorithms for exploring the RNA mutational landscape. Predicts the effect of mutations on structures and, reciprocally, the influence of structures on mutations. A tool for molecular evolution studies and RNA design.
AmyloidMutants is a statistical mechanics approach for de novo prediction and analysis of wild-type and mutant amyloid structures. Based on the premise of protein mutational landscapes, AmyloidMutants energetically quantifies the effects of sequence mutation on fibril conformation and stability.

Genomics

GLASS aligns large orthologous genomic regions using an iterative global alignment system. Rosetta identifies genes based on conservation of exonic features in sequences aligned by GLASS.
RNAiCut – Automated Detection of Significant Genes from Functional Genomic Screens.
MinoTar – Predict microRNA Targets in Coding Sequence.

Systems Biology

The Struct2Net program predicts protein-protein interactions (PPIs) by integrating structure-based information with other functional annotations (e.g., GO terms, co-expression, and co-localization). The structure-based interaction prediction is conducted using the protein threading server RAPTOR plus logistic regression.
IsoRank is an algorithm for global alignment of multiple protein-protein interaction (PPI) networks. The intuition is that a protein in one PPI network is a good match for a protein in another network if the former’s neighbors are good matches for the latter’s neighbors.
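The IsoRank intuition, that a pair of proteins match well when their neighbors match well, can be written as a recurrence over node pairs and solved by power iteration. The following minimal sketch runs that recurrence on two tiny hypothetical networks; it illustrates the idea only and is not the IsoRank software itself, which also folds in sequence similarity and handles multiple networks.

```python
import itertools

# Two tiny, hypothetical PPI networks given as adjacency lists (invented for illustration)
net1 = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
net2 = {"x": ["y", "z"], "y": ["x", "z"], "z": ["x", "y", "w"], "w": ["z"]}

# IsoRank-style recurrence: R(i, j) = sum over neighbors u of i and v of j
# of R(u, v) / (deg(u) * deg(v)), solved by power iteration with renormalization.
pairs = list(itertools.product(net1, net2))
R = {p: 1.0 / len(pairs) for p in pairs}  # uniform starting scores

for _ in range(50):
    new_R = {}
    for i, j in pairs:
        new_R[(i, j)] = sum(
            R[(u, v)] / (len(net1[u]) * len(net2[v]))
            for u in net1[i]
            for v in net2[j]
        )
    total = sum(new_R.values())
    R = {p: s / total for p, s in new_R.items()}

# Print the five top-scoring pairs. With topology alone the scores track node degree,
# so the hub pair (c, z) ranks first; the real IsoRank also blends in sequence
# similarity to break such degree-driven ties.
for (i, j), score in sorted(R.items(), key=lambda kv: -kv[1])[:5]:
    print(i, j, round(score, 4))
```

From these pairwise scores, a one-to-one network alignment is then extracted, for example greedily or by bipartite matching.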

Other

t-sample is an online algorithm for time-series experiments that allows an experimenter to determine which biological samples should be hybridized to arrays to recover expression profiles within a given error bound.

http://people.csail.mit.edu/bab/computing_new.html#systems

Compressive genomics

http://www.nature.com/nbt/journal/v30/n7/abs/nbt.2241.html

Nature Biotechnology 30, 627–630 (2012) doi:10.1038/nbt.2241

Published online 10 July 2012
Algorithms that compute directly on compressed genomic data allow analyses to keep pace with data generation.


Introduction

In the past two decades, genomic sequencing capabilities have increased exponentially [1–3], outstripping advances in computing power [4–8]. Extracting new insights from the data sets currently being generated will require not only faster computers, but also smarter algorithms. However, most genomes currently sequenced are highly similar to ones already collected [9]; thus, the amount of new sequence information is growing much more slowly.
Here we show that this redundancy can be exploited by compressing sequence data in such a way as to allow direct computation on the compressed data using methods we term ‘compressive’ algorithms. This approach reduces the task of computing on many similar genomes to only slightly more than that of operating on just one. Moreover, its relative advantage over existing algorithms will grow with the accumulation of genomic data. We demonstrate this approach by implementing compressive versions of both the Basic Local Alignment Search Tool (BLAST) [10] and the BLAST-Like Alignment Tool (BLAT) [11], and we emphasize how compressive genomics will enable biologists to keep pace with current data.

Conclusions

Compressive algorithms for genomics have the great advantage of becoming proportionately faster with the size of the available data. Although the compression schemes for BLAST and BLAT that we presented yield an increase in computational speed and, more importantly, in scaling, they are only a first step. Many enhancements of our proof-of-concept implementations are possible; for example, hierarchical compression structures, which respect the phylogeny underlying a set of sequences, may yield additional long-term performance gains. Moreover, analyses of such compressive structures will lead to insights as well. As sequencing technologies continue to improve, the compressive genomic paradigm will become critical to fully realizing the potential of large-scale genomics. Software is available at http://cast.csail.mit.edu/.
References
  1. Lander, E.S. et al. Nature 409, 860–921 (2001).
  2. Venter, J.C. et al. Science 291, 1304–1351 (2001).
  3. Kircher, M. & Kelso, J. Bioessays 32, 524–536 (2010).
  4. Kahn, S.D. Science 331, 728–729 (2011).
  5. Gross, M. Curr. Biol. 21, R204–R206 (2011).
  6. Huttenhower, C. & Hofmann, O. PLoS Comput. Biol. 6, e1000779 (2010).
  7. Schatz, M., Langmead, B. & Salzberg, S. Nat. Biotechnol. 28, 691–693 (2010).
  8. 1000 Genomes Project data available on Amazon Cloud. NIH press release, 29 March 2012.
  9. Stratton, M. Nat. Biotechnol. 26, 65–66 (2008).
  10. Altschul, S.F., Gish, W., Miller, W., Myers, E.W. & Lipman, D.J. J. Mol. Biol. 215, 403–410 (1990).
  11. Kent, W.J. Genome Res. 12, 656–664 (2002).
  12. Grumbach, S. & Tahi, F. J. Inf. Process. Manag. 30, 875–886 (1994).
  13. Chen, X., Li, M., Ma, B. & Tromp, J. Bioinformatics 18, 1696–1698 (2002).
  14. Christley, S., Lu, Y., Li, C. & Xie, X. Bioinformatics 25, 274–275 (2009).
  15. Brandon, M.C., Wallace, D.C. & Baldi, P. Bioinformatics 25, 1731–1738 (2009).
  16. Mäkinen, V., Navarro, G., Sirén, J. & Välimäki, N. in Research in Computational Molecular Biology, vol. 5541 of Lecture Notes in Computer Science (Batzoglou, S., ed.) 121–137 (Springer Berlin/Heidelberg, 2009).
  17. Kozanitis, C., Saunders, C., Kruglyak, S., Bafna, V. & Varghese, G. in Research in Computational Molecular Biology, vol. 6044 of Lecture Notes in Computer Science (Berger, B., ed.) 310–324 (Springer Berlin/Heidelberg, 2010).
  18. Hsi-Yang Fritz, M., Leinonen, R., Cochrane, G. & Birney, E. Genome Res. 21, 734–740 (2011).
  19. Mäkinen, V., Navarro, G., Sirén, J. & Välimäki, N. J. Comput. Biol. 17, 281–308 (2010).
  20. Deorowicz, S. & Grabowski, S. Bioinformatics 27, 2979–2986 (2011).
  21. Li, H., Ruan, J. & Durbin, R. Genome Res. 18, 1851–1858 (2008).
  22. Li, H. & Durbin, R. Bioinformatics 25, 1754–1760 (2009).
  23. Langmead, B., Trapnell, C., Pop, M. & Salzberg, S. Genome Biol. 10, R25 (2009).
  24. Carter, D.M. Saccharomyces genome resequencing project. Wellcome Trust Sanger Institute http://www.sanger.ac.uk/Teams/Team118/sgrp/ (2005).
  25. Tweedie, S. et al. Nucleic Acids Res. 37, D555–D559 (2009).

Primary authors

Po-Ru Loh and Michael Baym (P.-R.L. and M.B. contributed equally to this work).

Affiliations

  1. Po-Ru Loh, Michael Baym and Bonnie Berger are in the Department of Mathematics and Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA.
  2. Michael Baym is also in the Department of Systems Biology, Harvard Medical School, Boston, Massachusetts, USA.

Competing financial interests

The authors declare no competing financial interests.


September 2012

Compressing a dataset with specialized algorithms is typically done in the context of data storage, where compression tools can shrink data to save space on a hard drive. But a group of researchers at MIT has developed tools that compute directly on compressed genomic datasets by exploiting the fact that most sequenced genomes are very similar to previously sequenced genomes.


Led by MIT professor Bonnie Berger, the group has recently released tools called CaBlast and CaBlat, compressive versions of the widely used Blast and Blat alignment tools, respectively.

In a Nature Biotechnology paper published in July, Berger and her colleagues describe how the algorithms deliver alignment and analysis results up to four times faster than Blast and Blat when searching for a particular sequence in 36 yeast genomes.

“What we demonstrate is that the more highly similar genomes there are in a database, the greater the relative speed of CaBlast and CaBlat compared to the original non-compressive versions,” Berger says. “As we increase the number of genomes, the amount of work required for compressive algorithms scales only linearly in the amount of non-redundant data. The idea is that we’ve already done most of the work on the first genome.”

These two algorithms are still in the beta phase, and the MIT team has several refinements planned for future release to optimize performance. To that end, Berger has made the code for both algorithms available with the hope that developers will help them build “industrial-strength” software that can be used by the research community.

“To achieve optimal performance in real-use cases, we expect the code will need to be tuned for the engineering trade-offs specific to the application at hand,” she says. “The algorithm used to find and compress similar sequences in the database may need to be tweaked to take this issue into account, and the coarse- and fine-search steps should be aware of these constraints as well.”
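As a rough, hypothetical sketch of the two-phase idea behind these tools (not the CaBlast/CaBlat code itself): the database of highly similar sequences is reduced to a small set of unique representatives plus links from every original sequence to its representative; a query is first matched against the representatives (the coarse step), and only sequences linked to the hit representatives are then examined (the fine step). In the sketch below, similarity grouping uses difflib and exact substring matching stands in for BLAST-style local alignment; all names, sequences, and thresholds are illustrative assumptions.

```python
import difflib

def build_compressed_db(sequences, similarity=0.9):
    """Group near-duplicate sequences under a single representative.

    Returns (representatives, links), where links[i] is the index of the
    representative covering sequences[i]. A toy stand-in for the real
    compression step, which stores edits against previously seen sequence.
    """
    representatives, links = [], []
    for seq in sequences:
        for r_idx, rep in enumerate(representatives):
            if difflib.SequenceMatcher(None, seq, rep).ratio() >= similarity:
                links.append(r_idx)
                break
        else:
            representatives.append(seq)
            links.append(len(representatives) - 1)
    return representatives, links

def compressive_search(query, sequences, representatives, links):
    """Coarse search on the representatives, then a fine search restricted to
    the sequences linked to any representative that matched the query."""
    coarse_hits = {i for i, rep in enumerate(representatives) if query in rep}
    return [seq for seq, link in zip(sequences, links)
            if link in coarse_hits and query in seq]

# Tiny hypothetical example: three near-identical "genomes" plus one unrelated sequence
db = ["ACGTACGTTTGACC", "ACGTACGTTTGACG", "ACGTACGATTGACC", "GGGCCCGGGCCCAAA"]
reps, links = build_compressed_db(db, similarity=0.8)
print(len(reps), "representatives for", len(db), "sequences")
print(compressive_search("ACGTTTG", db, reps, links))
```

Because the coarse step touches only the non-redundant representatives, the work grows with the amount of unique sequence rather than with the total number of genomes, which is the scaling behavior Berger describes above.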

While computing resources are becoming increasingly powerful, Berger contends that better algorithms and the use of compression technology will play a crucial role in helping researchers to keep up with the production of next-generation sequencing data.

Matthew Dublin is a senior writer at Genome Technology.

Read Full Post »

 

Reporter: Aviva Lev-Ari, PhD, RN

 

A research team from Massachusetts and Maryland used array-based transcriptome profiling to explore the genetic basis of a progressive neuromuscular condition called facioscapulohumeral muscular dystrophy, or FSHD. By testing bicep and deltoid muscle biopsy samples from dozens of individuals with FSHD and almost as many unaffected relatives of those subjects, the team tracked down hundreds of genes showing expression shifts in those with FSHD. Of those, 29 genes were differentially expressed in both bicep and deltoid muscle samples, the researchers report. And, they found expression levels at 15 genes could distinguish between bicep samples from those with or without the disease around 90 percent of the time in follow-up experiments. The accuracy was closer to 80 percent when classifying deltoid tissue based on expression of these genes. Those involved in the study say such a ‘molecular signature’ of FSHD could help in understanding the disease and in testing new treatments for it.

http://www.genomeweb.com//node/1126816?hq_e=el&hq_m=1349154&hq_l=4&hq_v=09187c3305

Transcriptional profiling in facioscapulohumeral muscular dystrophy to identify candidate biomarkers

Fedik Rahimov (a,b,1), Oliver D. King (b,c,1), Doris G. Leung (d,e), Genila M. Bibat (d), Charles P. Emerson, Jr. (b,c), Louis M. Kunkel (a,b,f,2), and Kathryn R. Wagner (d,e,g,2)

Author affiliations:

a. Program in Genomics, Division of Genetics, Boston Children’s Hospital, Harvard Medical School, Boston, MA 02115
b. The Senator Paul D. Wellstone Muscular Dystrophy Cooperative Research Center and
c. Boston Biomedical Research Institute, Watertown, MA 02472
d. Hugo W. Moser Research Institute at Kennedy Krieger Institute, Baltimore, MD 21205; Departments of
e. Neurology and
g. Neuroscience, The Johns Hopkins School of Medicine, Baltimore, MD 21205; and
f. The Manton Center for Orphan Disease Research, Boston Children’s Hospital, Boston, MA 02115

Contributed by Louis M. Kunkel, June 4, 2012 (sent for review May 24, 2012)

Abstract

Facioscapulohumeral muscular dystrophy (FSHD) is a progressive neuromuscular disorder caused by contractions of repetitive elements within the macrosatellite D4Z4 on chromosome 4q35. The pathophysiology of FSHD is unknown and, as a result, there is currently no effective treatment available for this disease. To better understand the pathophysiology of FSHD and develop mRNA-based biomarkers of affected muscles, we compared global analysis of gene expression in two distinct muscles obtained from a large number of FSHD subjects and their unaffected first-degree relatives. Gene expression in two muscle types was analyzed using GeneChip Gene 1.0 ST arrays: biceps, which typically shows an early and severe disease involvement; and deltoid, which is relatively uninvolved. For both muscle types, the expression differences were mild: using relaxed cutoffs for differential expression (fold change ≥1.2; nominal P value <0.01), we identified 191 and 110 genes differentially expressed between affected and control samples of biceps and deltoid muscle tissues, respectively, with 29 genes in common. Controlling for a false-discovery rate of <0.25 reduced the number of differentially expressed genes in biceps to 188 and in deltoid to 7. Expression levels of 15 genes altered in this study were used as a “molecular signature” in a validation study of an additional 26 subjects and predicted them as FSHD or control with 90% accuracy based on biceps and 80% accuracy based on deltoids.
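The screening described in the abstract (a per-gene two-group comparison with relaxed cutoffs of fold change ≥ 1.2 and nominal P < 0.01, followed by false-discovery-rate control) can be expressed compactly. The sketch below applies those cutoffs to a simulated expression matrix using a per-gene Welch t-test and a Benjamini-Hochberg adjustment; the data, group sizes, and spiked-in effect are invented for illustration, and the study itself used GeneChip-specific normalization and statistics rather than this exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical log2 expression matrix: 1,000 genes x (20 FSHD + 20 control) samples
n_genes, n_fshd, n_ctrl = 1000, 20, 20
expr_fshd = rng.normal(8.0, 1.0, size=(n_genes, n_fshd))
expr_ctrl = rng.normal(8.0, 1.0, size=(n_genes, n_ctrl))
expr_fshd[:25] += 0.6  # spike in a few genuinely shifted genes for illustration

# Per-gene Welch t-test and log2 fold change (difference of means on the log2 scale)
t_stat, p_val = stats.ttest_ind(expr_fshd, expr_ctrl, axis=1, equal_var=False)
log2_fc = expr_fshd.mean(axis=1) - expr_ctrl.mean(axis=1)

# Relaxed cutoffs analogous to the paper's: |fold change| >= 1.2 and nominal P < 0.01
passes = (np.abs(log2_fc) >= np.log2(1.2)) & (p_val < 0.01)
print("nominally differentially expressed genes:", int(passes.sum()))

# Benjamini-Hochberg adjusted P values, for a false-discovery-rate cutoff such as < 0.25
order = np.argsort(p_val)
bh = p_val[order] * n_genes / np.arange(1, n_genes + 1)
bh = np.minimum.accumulate(bh[::-1])[::-1]   # enforce monotonicity from the largest P down
p_adj = np.empty(n_genes)
p_adj[order] = np.clip(bh, 0.0, 1.0)
print("genes with BH-adjusted P < 0.25:", int((p_adj < 0.25).sum()))
```

Moving from the nominal cutoff to the FDR-controlled one is the step that trimmed the paper's lists from 191 to 188 genes in biceps and from 110 to 7 in deltoid.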


 

Read Full Post »

Reporter: Aviva Lev-Ari, PhD, RN

During Investor Day, Roche Highlights Personalized Medicine as Key Area for Future Growth

September 12, 2012

As regulators and payors around the world are demanding more evidence that healthcare products improve patient outcomes and save money, Roche this week attempted to reassure investors that its strategy to develop innovative products — with a strong focus on molecularly guided personalized medicines — will place it ahead of competitors.

Through several presentations during an investor day in London, Roche officials highlighted a number of drugs for cancer, neuropsychiatric conditions, and autoimmune diseases for which the company is investigating biomarkers that can help target treatment to specific groups of patients. The company said that more than 60 percent of the compounds in its drug pipeline are currently paired with a companion diagnostic and that it has more than 200 companion diagnostic projects underway across its pharma and diagnostic business groups.

Personalized medicines are not only a major part of Roche’s plan for future growth, but they also represent a way for the company to differentiate its products from competitors. By setting its drugs apart from other me-too treatments in the marketplace, the company is hoping that its products won’t be as heavily affected by the pricing pressures currently plaguing the pharma and biotech sectors.

“Yes, regulators are very stringent. But if I look back at our most recent launches, particularly in the US, if you have true medical innovation, then regulators are very willing to bring those medicines and novel diagnostics to the market,” Roche CEO Severin Schwan said during the investor conference. He highlighted that the US Food and Drug Administration reviewed and approved the BRAF inhibitor Zelboraf for metastatic melanoma and its companion diagnostic in record time and that the recent approval of the HER2-targeted breast cancer drug Perjeta also occurred ahead of schedule (PGx Reporter 8/17/2011 and 6/13/2012).

“Likewise, if you look at the payors, there is cost pressure,” Schwan reflected, but he noted that the “innovative nature” of its portfolio helps it to “negotiate better prices with payors.”

Despite this optimistic forecast, Roche has experienced some pushback from cost-conscious national payors in Europe. For example, in June the UK’s National Institute for Health and Clinical Excellence deemed Zelboraf, which costs more than $82,000 for a seven-month treatment, too pricey. Zelboraf, which Roche launched in the US market last year and in European countries earlier this year, netted the company around $97 million in revenue for the six months ended June 30.

In an effort to battle pushback from national payors, Roche is in discussions with European governments about value-based pricing schemes for several of its products. In this regard, high priced personalized medicine drugs are well suited to these types of arrangements. David Loew, chief marketing officer at Roche, told investors that governments are increasingly developing registries to track how individual patients are doing on various treatments. This information will help governments move from a volume-based pricing model for drugs to paying for them based on the drug’s indication.

He noted that in Germany, for example, Roche has developed a payment scheme for colorectal cancer under which patients pay a certain amount for up to 10 grams of the oncology drug Avastin, then receive it free for up to 12 months, after which the scheme repeats. For personalized medicines, such as Herceptin, Perjeta, T-DM1, and Zelboraf, “we will have to think about different ways of pricing those new combinations,” Loew said.

Schwan highlighted that one of the major advantages for Roche in this difficult environment is that it has both drug and diagnostic capabilities in house. This, according to Schwan, enables Roche to have significant internal capabilities in early-phase research, and makes the company attractive for partnerships, as well. Roche currently has more than 70 new molecular entities in clinical development and since 2011 there have been 25 late-stage clinical trials that have yielded positive results. The firm plans to bring three more products into late-stage clinical trials by the end of the year and would like to move 10 products into late-stage development in 2013.

On the diagnostics side, newly hired chief operating officer Roland Diggelmann said that Roche is aiming to grow its presence in the testing market by becoming “the partner of choice” for developing companion assays and collaborating internally with Roche pharma to advance personalized medicine.

“We need to make sure that science translates into great medicines by designing trials that take smart risk into account, that really focus on ensuring that the molecules are being developed in the right diseases; to make sure we have the right dose; to make sure, whenever possible, we have the … companion diagnostic strategies,” Chief Medical Officer Hal Barron said at the meeting. “This whole strategy needs to result in a higher probability of success so that the return on investment is above the cost of capital and an important driver for our business.”

While Roche plans on identifying new product opportunities through a mix of its internal capabilities and external collaborations, growth through large mergers and acquisitions – a strategy that other large pharmaceutical companies have readily utilized to expand product portfolios – doesn’t seem to be a priority at the company. While acknowledging that there may be opportunities for smaller M&A deals, Alan Hippe, chief financial and information technology officer, said that at Roche, “we are not big fans of big mergers and big M&A.”

Targeting Cancer

A large portion of Roche’s personalized medicine strategy will be directed toward oncology, where the company has allocated 50 percent of its research and development budget.

In June, the FDA approved Perjeta in combination with Herceptin and docetaxel chemotherapy as a treatment for metastatic breast cancer patients whose tumors overexpress the HER2 protein. The agency simultaneously approved two companion tests that can help doctors discern the best responders to the treatment (PGx Reporter 6/13/2012).

Herceptin (trastuzumab), approved in 1998, still comprises a big chunk of Roche’s therapeutics business, contributing 11 percent of the $18.2 billion the firm netted in overall drug sales in the first half of the year. Roche is hoping to preserve earnings from this blockbuster drug — often hailed as the first personalized medicine success story — by combining it with Perjeta and linking it with a derivative of the chemotherapy maytansine, DM1.

Recently, Roche announced data from a late-stage clinical trial called EMILIA that showed that advanced breast cancer patients receiving the antibody drug conjugate trastuzumab emtansine, or T-DM1, lived “significantly” longer than those treated with a combination of Genentech’s Xeloda (capecitabine) and GlaxoSmithKline’s Tykerb (lapatinib). The patients in EMILIA had to have progressed after initial treatment with Herceptin and taxane chemotherapy.

According to Loew, the company is currently conducting a study looking at T-DM1 as a potential option for first-line metastatic breast cancer patients. In addition, Roche is also studying T-DM1 as an adjuvant treatment in early-stage breast cancer patients with residual disease; comparing T-DM1 plus Perjeta against Herceptin plus Perjeta in the adjuvant early-stage breast cancer setting; and looking at T-DM1-based chemotherapy in the neoadjuvant setting.

“So if we are successfully delivering those results, I think the HER2-positive breast cancer space has been completely changed and redefined,” Loew told investors.

At the end of the year, another study, called the Protocol of Herceptin Adjuvant with Reduced Exposure, or PHARE, is slated to report results, and the outcome could have a negative impact on Herceptin sales. PHARE is comparing whether patients given Herceptin for 12 months, which is currently the standard of care in the US, fare better than those given the drug for six months.

Industry observers have projected that Perjeta and T-DM1 could be a sufficient buffer against a scenario in which six months of Herceptin is found to be non-inferior to a year of the drug.

Barron noted that Roche is readily applying the strategy behind antibody-drug conjugates such as T-DM1 – in which antibodies attach to antigens on the surface of cancer cells to localize chemotherapy delivery and reduce adverse reactions – in 25 projects across its portfolio. He added that antibody-drug conjugates offer a promising mechanism for personalizing treatments.

In non-small cell lung cancer, Roche is studying MetMab (onartuzumab) in combination with Tarceva in patients with tumors that overexpress the Met protein. Data from this Phase III trial, called METLUNG, is expected in 2014. Data from a Phase II study looking at MetMab and Tarceva as a second-line NSCLC treatment yielded negative results when all comers were considered. However, the subgroup of patients who over-expressed Met had a “doubling” of progression-free survival and a “pronounced” effect on overall survival compared to the low-Met group.

Roche is also investigating MetMab in metastatic gastric cancer (Phase III), triple-negative breast cancer (Phase II), metastatic colorectal cancer (Phase II), and glioblastoma (Phase II), as well as in combination with Avastin in various cancer indications.

Other Areas of Personalization

Outside of oncology, Roche is exploring biomarker strategies to personalize drugs for Alzheimer’s disease and schizophrenia. Phase I data from a study involving gantenerumab, an IgG1 monoclonal antibody, suggest that the drug could potentially reduce amyloid plaque in Alzheimer’s patients’ brains.

Investigational drugs targeting beta-amyloid, which many researchers believe to be involved in the pathogenesis of Alzheimer’s disease, haven’t fared well in clinical trials. Most recently, Johnson & Johnson/Pfizer’s drug bapineuzumab, which also targeted the β-amyloid protein, failed to benefit Alzheimer’s patients who were non-carriers of APOE4 gene variations.

Wall Street analysts are hoping that Roche’s biomarker-driven strategy for gantenerumab will help it avoid a similar fate. The company is currently conducting a 770-patient trial called Scarlet Road, in which researchers will measure Tau/Aβ levels in study participants’ spinal fluid to identify early onset or prodromal Alzheimer’s patients and treat them with gantenerumab. Roche is developing a companion test to gauge Tau/Aβ levels in trial participants. Results from Scarlet Road are expected in 2015.

Roche subsidiary Genentech is testing another compound, crenezumab, to see if it can prevent Alzheimer’s in a population genetically predisposed to getting the disease. Genentech, in collaboration with Banner Alzheimer’s Institute and the National Institutes of Health, is conducting a Phase II trial investigating crenezumab in the residents of Medellin, Colombia, where people share a common ancestor and have a high prevalence of mutations in the presenilin 1 gene. Those harboring the dominant gene mutation will start to lose their memory in their mid-40s and their cognitive functions will deteriorate by age 50.

The five-year study will involve approximately 300 participants, of whom approximately 100 mutation carriers will receive crenezumab and another 100 mutation carriers will receive a placebo. In a third arm, approximately 100 participants who don’t carry the mutations will receive a placebo. Study investigators will begin recruiting patients for this study next year.

In schizophrenia, Roche is exploring bitopertin, a glycine reuptake inhibitor, in six Phase III studies slated for completion next year. Three of these studies are looking at the drug’s ability to control negative symptoms in schizophrenia, while the other three trials are studying the drug’s impact on sub-optimally controlled disease symptoms. “A companion diagnostics assay is in development to validate the hypothesis for an exploratory biomarker predicting response to therapy with bitopertin,” Roche said in a statement.

For lupus, Roche is conducting a proof of concept Phase II trial involving rontalizumab, an anti-interferon-alpha antibody, in which researchers are using a biomarker to identify patients most likely to respond to the drug. Data from this trial will be presented at a medical conference later this year.

Growing Role of Diagnostics

Daniel O’Day, who served as CEO of Roche Molecular Diagnostics until last week when he was appointed chief operating officer of the company’s pharma division, valued the worldwide diagnostics market at $53 billion. “We represent 20 percent of that, or around 10 billion Swiss francs ($11 billion),” he said in his investor day presentation.

While molecular diagnostics promise to be a growing part of Roche’s business in the coming years, these products currently only represent a single-digit percent of Roche’s overall diagnostics business. For the first half of this year, molecular diagnostics comprised around 6 percent of Roche’s diagnostics sales of $5.3 billion.

Roche’s Ventana Medical Systems subsidiary will likely play a large role in advancing Roche’s presence in the companion diagnostics space. This year, Ventana announced it was developing companion tests for a number of drug makers, including Aeterna Zentaris, Syndax Pharmaceuticals, Pfizer, and Bayer (PGx Reporter 1/18/2012).

In addition to these external collaborations, Roche officials highlighted the company’s internal diagnostics capabilities as particularly advantageous for expanding its presence in the personalized medicine space. For example, Roche developed the BRAF companion test for Zelboraf. The company is also developing a companion EGFR-mutation test for its non-small cell lung cancer drug Tarceva in the first-line setting, and a test to gauge so-called “super-responders” to the investigational asthma drug lebrikizumab being developed by Genentech.

In terms of molecular diagnostics, O’Day highlighted a test that gauges the overexpression of the p16 gene in cervical Pap test samples to gauge whether women have precancerous lesions.

Additionally, the FDA this year approved the use of Ventana’s INFORM HER2 Dual ISH DNA Probe cocktail on the BenchMark ULTRA automated slide staining platform, which allows labs to analyze fluorescent in situ hybridization and immunohistochemistry samples in one assay. According to O’Day, this test has been more successful than standard FISH tests in identifying HER2 status in difficult-to-diagnose patients. The company will be publishing data on this test soon, showing that it can “identify about 4 percent more [HER2-positive patients] than FISH alone.”

When it comes to molecular technologies, Roche, like other pharma and biotech players, appears to be sticking to tried and tested technologies, such as IHC, FISH, and PCR, and reserving whole-genome sequencing for research use. “Today, sequencing is predominantly a research tool. And it’s a very valuable research tool in the future,” O’Day said, estimating that sequencing-based tests will “go into the clinic” in the next half decade.

Turna Ray is the editor of GenomeWeb’s Pharmacogenomics Reporter. She covers pharmacogenomics, personalized medicine, and companion diagnostics. E-mail her here or follow her GenomeWeb Twitter account at @PGxReporter.


 

Read Full Post »
