
Archive for the ‘Genomic Endocrinology’ Category


Reporter and Curator: Dr. Sudipta Saha, Ph.D.

 

In an African cichlid fish, Astatotilapia burtoni, fertile females select a mate and perform a stereotyped spawning / mating routine, offering quantifiable behavioral outputs of neural circuits. A male fish attracts a fertile female by rapidly quivering his brightly colored body. If she chooses him, he guides her back to his territory, where he quivers some more as she pecks at fish egg–colored spots on his anal fin. Next, she lays eggs and quickly scoops them up in her mouth. With a mouthful of eggs, she continues pecking at the male’s spots, “believing” them to be eggs to be collected. As she does, he releases sperm from near his anal fin, which she also gathers. This fertilizes the eggs, and she carries the embryos in her mouth for two weeks as they develop.

 

The question, then, was how these females time their reproduction to coincide with fertility. Female fish will not approach or choose males until they are ready to reproduce, so something in their brains must signal when sexual behavior is required. The scientists began by considering signaling molecules previously associated with sexual behavior and reproduction and showed that injection of prostaglandin F2α (PGF2α) activates a naturalistic pattern of sexual behavior in female Astatotilapia burtoni. Injected females engaged in mating behavior even when non-fertile, performing the quiver dance with males, but did not actually lay eggs since they had none.

 

The scientists also identified cells in the brain that transduce the prostaglandin signal into mating behavior and showed that the gonadal steroid 17α,20β-dihydroxyprogesterone modulates mRNA levels of the putative PGF2α receptor. They focused on a PGF2α receptor in the preoptic area (POA) of the hypothalamus, a brain region involved in sexual behavior across animals, and suspected that when PGF2α levels rise in the fish, the molecule binds this receptor and triggers sexual behavior. They then used CRISPR/Cas9 to generate PGF2α receptor knockout fish. The knockout uncoupled sexual behavior from fertility status, demonstrating that the PGF2α receptor is necessary for the initiation of sexual behavior.

 

The finding has parallels across vertebrates and may inform the understanding of social behavior in humans. The next steps for this work will involve identifying other behaviors regulated by this receptor, and the finding provides insight into the evolution of reproduction and sexual behaviors. In mammals and other vertebrates, PGF2α promotes the onset of labor and maternal behaviors, and the present research, coupled with other studies, suggests that PGF2α signaling has a common ancestral function associated with birth and its related behaviors.

 

References:

 

http://www.ncbi.nlm.nih.gov/pubmed/26996507

 

http://news.stanford.edu/news/2016/march/fish-mating-behavior-031716.html

 

 

http://www.academia.edu/676252/The_Genetics_of_Female_Sexual_Behaviour

 

https://scifeeds.com/news/scientists-identify-genetic-switch-for-female-sexual-behavior/



Fat Cells Reprogrammed to Make Insulin

Curator: Larry H. Bernstein, MD, FCAP

 

A New Use for Love Handles: Insulin-Producing Beta Cells

http://www.genengnews.com/gen-news-highlights/a-new-use-for-love-handles-insulin-producing-beta-cells/81252612/

http://www.genengnews.com/Media/images/GENHighlight/112856_web9772135189.jpg

 

Scientists at the Swiss Federal Institute of Technology (ETH) in Zurich have found an exciting new use for the cells that reside in the undesirable flabby tissue—creating pancreatic beta cells. The ETH researchers extracted stem cells from a 50-year-old test subject’s fatty tissue and reprogrammed them into mature, insulin-producing beta cells.

The findings from this study were published recently in Nature Communications in an article entitled “A Programmable Synthetic Lineage-Control Network That Differentiates Human IPSCs into Glucose-Sensitive Insulin-Secreting Beta-Like Cells.”

The investigators added a highly complex synthetic network of genes to the stem cells to recreate precisely the key growth factors involved in this maturation process. Central to the process were the growth factors Ngn3, Pdx1, and MafA; the researchers found that concentrations of these factors change during the differentiation process.

For instance, MafA is not present at the start of maturation. Only on day 4, in the final maturation step, does it appear, its concentration rising steeply and then remaining at a high level. The changes in the concentrations of Ngn3 and Pdx1, however, are very complex: while the concentration of Ngn3 rises and then falls again, the level of Pdx1 rises at the beginning and toward the end of maturation.
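
As an illustrative aid only (not part of the original study), the qualitative trajectories described above can be sketched as simple curves: MafA absent until day 4 and then high, Ngn3 rising and then falling, Pdx1 high early and again toward the end. The functional forms and numbers below are assumptions chosen to match the verbal description, not measured data.

```python
import numpy as np

days = np.linspace(0, 11, 111)  # illustrative differentiation time course (days)

# Assumed, purely qualitative trajectories (arbitrary units):
ngn3 = np.exp(-((days - 3.0) / 1.5) ** 2)                              # rises, peaks, then falls (OFF-ON-OFF)
pdx1 = 0.5 * (np.exp(-days / 2.0) + 1 / (1 + np.exp(-(days - 8.0))))   # high early, dips, rises late (ON-OFF-ON)
mafa = np.where(days < 4.0, 0.0, 1 / (1 + np.exp(-(days - 5.0))))      # absent before day 4, then rises and stays high (OFF-ON)

for d in (0, 2, 4, 6, 8, 10):
    i = int(np.argmin(np.abs(days - d)))
    print(f"day {d:2d}: Ngn3={ngn3[i]:.2f}  Pdx1={pdx1[i]:.2f}  MafA={mafa[i]:.2f}")
```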

Senior study author Martin Fussenegger, Ph.D., professor of biotechnology and bioengineering at ETH Zurich’s department of biosystems science and engineering, stressed that it was essential to reproduce these natural processes as closely as possible to produce functioning beta cells, stating that “the timing and the quantities of these growth factors are extremely important.”

The ETH researchers believe that their work is a real breakthrough, in that a synthetic gene network has been used successfully to achieve genetic reprogramming that delivers beta cells. Until now, scientists have controlled such stem cell differentiation processes by adding various chemicals and proteins exogenously.

“It’s not only really hard to add just the right quantities of these components at just the right time, but it’s also inefficient and impossible to scale up,” Dr. Fussenegger noted.

The beta cells not only looked very similar to their natural counterparts—containing dark spots known as granules that store insulin—but also functioned in a very similar manner. However, the researchers acknowledge that more work needs to be done to increase the insulin output.

“At the present time, the quantities of insulin they secrete are not as great as with natural beta cells,” Dr. Fussenegger stated. Yet, the key point is that the researchers have for the first time succeeded in reproducing the entire natural process chain, from stem cell to differentiated beta cell.

In the future, the ETH scientists’ novel technique might make it possible to implant new functional beta cells, made from a patient’s own adipose tissue, into people with diabetes. While beta cells have been transplanted in the past, this has always required subsequent suppression of the recipient’s immune system—as with any transplant of donor organs or tissue.

“With our beta cells, there would likely be no need for this action since we can make them using endogenous cell material taken from the patient’s own body,” Dr. Fussenegger said. “This is why our work is of such interest in the treatment of diabetes.”

A programmable synthetic lineage-control network that differentiates human IPSCs into glucose-sensitive insulin-secreting beta-like cells

Pratik Saxena, Boon Chin Heng, Peng Bai, Marc Folcher, Henryk Zulewski & Martin Fussenegger
Nature Communications 7, Article number: 11247 (2016)
doi:10.1038/ncomms11247

Synthetic biology has advanced the design of standardized transcription control devices that programme cellular behaviour. By coupling synthetic signalling cascade- and transcription factor-based gene switches with reverse and differential sensitivity to the licensed food additive vanillic acid, we designed a synthetic lineage-control network combining vanillic acid-triggered mutually exclusive expression switches for the transcription factors Ngn3 (neurogenin 3; OFF-ON-OFF) and Pdx1 (pancreatic and duodenal homeobox 1; ON-OFF-ON) with the concomitant induction of MafA (V-maf musculoaponeurotic fibrosarcoma oncogene homologue A; OFF-ON). This designer network consisting of different network topologies orchestrating the timely control of transgenic and genomic Ngn3, Pdx1 and MafA variants is able to programme human induced pluripotent stem cells (hIPSCs)-derived pancreatic progenitor cells into glucose-sensitive insulin-secreting beta-like cells, whose glucose-stimulated insulin-release dynamics are comparable to human pancreatic islets. Synthetic lineage-control networks may provide the missing link to genetically programme somatic cells into autologous cell phenotypes for regenerative medicine.

Cell-fate decisions during development are regulated by various mechanisms, including morphogen gradients, regulated activation and silencing of key transcription factors, microRNAs, epigenetic modification and lateral inhibition. The latter implies that the decision of one cell to adopt a specific phenotype is associated with the inhibition of neighbouring cells to enter the same developmental path. In mammals, insights into the role of key transcription factors that control development of highly specialized organs like the pancreas were derived from experiments in mice, especially various genetically modified animals1, 2, 3, 4. Normal development of the pancreas requires the activation of pancreatic duodenal homeobox protein (Pdx1) in pre-patterned cells of the endoderm. Inactivating mutations of Pdx1 are associated with pancreas agenesis in mouse and humans5, 6. A similar cell fate decision occurs later with the activation of Ngn3 that is required for the development of all endocrine cells in the pancreas7. Absence of Ngn3 is associated with the loss of pancreatic endocrine cells, whereas the activation of Ngn3 not only allows the differentiation of endocrine cells but also induces lateral inhibition of neighbouring cells—via Delta-Notch pathway—to enter the same pancreatic endocrine cell fate8. This Ngn3-mediated cell-switch occurs at a specific time point and for a short period of time in mice9. Thereafter, it is silenced and becomes almost undetectable in postnatal pancreatic islets. Conversely, Pdx1-positive Ngn3-positive cells reduce Pdx1 expression, as Ngn3-positive cells are Pdx1 negative10. They re-express Pdx1, however, as they go on their path towards glucose-sensitive insulin-secreting cells with parallel induction of MafA that is required for proper differentiation and maturation of pancreatic beta cells11. Data supporting these expression dynamics are derived from mice experiments1, 11, 12. A synthetic gene-switch governing cell fate decision in human induced pluripotent stem cells (hIPSCs) could facilitate the differentiation of glucose-sensitive insulin-secreting cells.

In recent years, synthetic biology has significantly advanced the rational design of synthetic gene networks that can interface with host metabolism, correct physiological disturbances13 and provide treatment strategies for a variety of metabolic disorders, including gouty arthritis14, obesity15 and type-2 diabetes16. Currently, synthetic biology principles may provide the componentry and gene network topologies for the assembly of synthetic lineage-control networks that can programme cell-fate decisions and provide targeted differentiation of stem cells into terminally differentiated somatic cells. Synthetic lineage-control networks may therefore provide the missing link between human pluripotent stem cells17 and their true impact on regenerative medicine18, 19, 20. The use of autologous stem cells in regenerative medicine holds great promise for curing many diseases, including type-1 diabetes mellitus (T1DM), which is characterized by the autoimmune destruction of insulin-producing pancreatic beta cells, thus making patients dependent on exogenous insulin to control their blood glucose21, 22. Although insulin therapy has changed the prospects and survival of T1DM patients, these patients still suffer from diabetic complications arising from the lack of physiological insulin secretion and excessive glucose levels23. The replacement of the pancreatic beta cells either by pancreas transplantation or by transplantation of pancreatic islets has been shown to normalize blood glucose and even improve existing complications of diabetes24. However, insulin independence 5 years after islet transplantation can only be achieved in up to 55% of the patients even when using the latest generation of immune suppression strategies25, 26. Transplantation of human islets or the entire pancreas has allowed T1DM patients to become somewhat insulin independent, which provides a proof-of-concept for beta-cell replacement therapies27, 28. However, because of the shortage of donor pancreases and islets, as well as the significant risk associated with transplantation and life-long immunosuppression, the rational differentiation of stem cells into functional beta-cells remains an attractive alternative29, 30. Nevertheless, a definitive cure for T1DM should address both the beta-cell deficit and the autoimmune response to cells that express insulin. Any beta-cell mimetic should be able to store large amounts of insulin and secrete it on demand, such as in response to glucose stimulation29, 31. The most effective protocols for the in vitro generation of bona fide insulin-secreting beta-like cells that are suitable for transplantation have been the result of sophisticated trial-and-error studies elaborating the timely addition of complex growth factor and small-molecule compound cocktails to human pancreatic progenitor cells32, 33, 34. The differentiation of pancreatic progenitor cells to beta-like cells is the most challenging part, as current protocols provide inconsistent results and limited success in programming pancreatic progenitor cells into glucose-sensitive insulin-secreting beta-like cells35, 36, 37. One of the reasons for these observations could be the heterogeneity in endocrine differentiation and maturation towards a beta cell phenotype.
Here we show that a synthetic lineage-control network programming the dynamic expression of the transcription factors Ngn3, Pdx1 and MafA enables the differentiation of hIPSC-derived pancreatic progenitor cells to glucose-sensitive insulin-secreting beta-like cells (Supplementary Fig. 1).

 

Vanillic acid-programmable positive band-pass filter

The differentiation pathway from pancreatic progenitor cells to glucose-sensitive insulin-secreting pancreatic beta-cells combines the transient mutually exclusive expression switches of Ngn3 (OFF-ON-OFF) and Pdx1 (ON-OFF-ON) with the concomitant induction of MafA (OFF-ON) expression10,11. Since independent control of the pancreatic transcription factors Ngn3, Pdx1 and MafA by different antibiotic transgene control systems responsive to tetracycline, erythromycin and pristinamycin did not result in the desired differential control dynamics (Supplementary Fig. 2), we have designed a vanillic acid-programmable synthetic lineage-control network that programmes hIPSC-derived pancreatic progenitor cells to specifically differentiate into glucose-sensitive insulin-secreting beta-like cells in a seamless and self-sufficient manner. The timely coordination of mutually exclusive Ngn3 and Pdx1 expression with MafA induction requires the trigger-controlled execution of a complex genetic programme that orchestrates two overlapping antagonistic band-pass filter expression profiles (OFF-ON-OFF and ON-OFF-ON), a positive band-pass filter for Ngn3 (OFF-ON-OFF) and a negative band-pass filter, also known as band-stop filter, for Pdx1 (ON-OFF-ON), the ramp-up expression phase of which is linked to a graded induction of MafA (OFF-ON).

The core of the synthetic lineage-control network consists of two transgene control devices that are sensitive to the food component and licensed food additive vanillic acid. These devices are a synthetic vanillic acid-inducible (ON-type) signalling cascade that is gradually induced by increasing the vanillic acid concentration and a vanillic acid-repressible (OFF-type) gene switch that is repressed in a vanillic acid dose-dependent manner (Fig. 1a,b). The designer cascade consists of the vanillic acid-sensitive mammalian olfactory receptor MOR9-1, which sequentially activates the G protein Sα (GSα) and adenylyl cyclase to produce a cyclic AMP (cAMP) second messenger surge38 that is rewired via the cAMP-responsive protein kinase A-mediated phospho-activation of the cAMP-response element-binding protein 1 (CREB1) to the induction of synthetic promoters (PCRE) containing CREB1-specific cAMP response elements (CRE; Fig. 1a). The co-transfection of pCI-MOR9-1 (PhCMV-MOR9-1-pASV40) and pCK53 (PCRE-SEAP-pASV40) into human mesenchymal stem cells (hMSC-TERT) confirmed the vanillic acid-adjustable secreted alkaline phosphatase (SEAP) induction of the designer cascade (>10nM vanillic acid; Fig. 1a). The vanillic acid-repressible gene switch consists of the vanillic acid-dependent transactivator (VanA1), which binds and activates vanillic acid-responsive promoters (for example, P1VanO2) at low and medium vanillic acid levels (<2μM). At high vanillic acid concentrations (>2μM), VanA1 dissociates from P1VanO2, which results in the dose-dependent repression of transgene expression39 (Fig. 1b). The co-transfection of pMG250 (PSV40-VanA1-pASV40) and pMG252 (P1VanO2-SEAP-pASV40) into hMSC-TERT corroborated the fine-tuning of the vanillic acid-repressible SEAP expression (Fig. 1b).

Figure 1: Design of a vanillic acid-responsive positive band-pass filter providing an OFF-ON-OFF expression profile.


http://www.nature.com/ncomms/2016/160411/ncomms11247/images_article/ncomms11247-f1.jpg

(a) Vanillic acid-inducible transgene expression. The constitutively expressed vanillic acid-sensitive olfactory G protein-coupled receptor MOR9-1 (pCI-MOR9-1; PhCMV-MOR9-1-pA) senses extracellular vanillic acid levels and triggers G protein (Gs)-mediated activation of the membrane-bound adenylyl cyclase (AC) that converts ATP into cyclic AMP (cAMP). The resulting intracellular cAMP surge activates PKA (protein kinase A), whose catalytic subunits translocate into the nucleus to phosphorylate cAMP response element-binding protein 1 (CREB1). Activated CREB1 binds to synthetic promoters (PCRE) containing cAMP-response elements (CRE) and induces PCRE-driven expression of human placental secreted alkaline phosphatase (SEAP; pCK53, PCRE-SEAP-pA). Co-transfection of pCI-MOR9-1 and pCK53 into human mesenchymal stem cells (hMSC-TERT) grown for 48h in the presence of increasing vanillic acid concentrations results in a dose-inducible SEAP expression profile. (b) Vanillic acid-repressible transgene expression. The constitutively expressed, vanillic acid-dependent transactivator VanA1 (pMG250, PSV40-VanA1-pA; VanA1, VanR-VP16) binds and activates the chimeric promoter P1VanO2 (pMG252, P1VanO2-SEAP-pA) in the absence of vanillic acid. In the presence of increasing vanillic acid concentrations, VanA1 is released from P1VanO2, and transgene expression is shut down. Co-transfection of pMG250 and pMG252 into hMSC-TERT grown for 48h in the presence of increasing vanillic acid concentrations results in a dose-repressible SEAP expression profile. (c) Positive band-pass expression filter. Serial interconnection of the synthetic vanillic acid-inducible signalling cascade (a) with the vanillic acid-repressible transcription factor-based gene switch (b) by PCRE-mediated expression of VanA1 (pSP1, PCRE-VanA1-pA) results in a two-level feed-forward cascade. Owing to the opposing responsiveness and differential sensitivity to vanillic acid, this synthetic gene network programmes SEAP expression with a positive band-pass filter profile (OFF-ON-OFF) as vanillic acid levels are increased. Medium vanillic acid levels activate MOR9-1, which induces PCRE-driven VanA1 expression. VanA1 remains active and triggers P1VanO2-mediated SEAP expression in a feed-forward manner, which increases to maximum levels. At high vanillic acid concentrations, MOR9-1 maintains PCRE-driven VanA1 expression, but the transactivator dissociates from P1VanO2, which shuts SEAP expression down. Co-transfection of pCI-MOR9-1, pSP1 and pMG252 into hMSC-TERT grown for 48h in the presence of increasing vanillic acid concentrations programmes SEAP expression with a positive band-pass profile (OFF-ON-OFF). Data are the means±s.d. of triplicate experiments (n=9).

The opposing responsiveness and differential sensitivity of the control devices to vanillic acid are essential to programme band-pass filter expression profiles. Upon daisy-chaining the designer cascade (pCI-MOR9-1; PhCMV-MOR9-1-pASV40; pSP1, PCRE-VanA1-pASV40) and the gene switch (pSP1, PCRE-VanA1-pASV40; pMG252, P1VanO2-SEAP-pASV40) in the same cell, the network executes a band-pass filter SEAP expression profile when exposed to increasing concentrations of vanillic acid (Fig. 1c). Medium vanillic acid levels (10nM to 2μM) activate MOR9-1, which induces PCRE-driven VanA1 expression. VanA1 remains active within this concentration range and, in a feed-forward amplifier manner, triggers P1VanO2-mediated SEAP expression, which gradually increases to maximum levels (Fig. 1c). At high vanillic acid concentrations (2μM to 400μM), MOR9-1 maintains PCRE-driven VanA1 expression, but the transactivator is inactivated and dissociates from P1VanO2, which results in the gradual shutdown of SEAP expression (Fig. 1c).
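
One way to see why chaining an ON-type sensor with an OFF-type switch produces an OFF-ON-OFF output is to multiply an increasing and a decreasing dose-response curve. The sketch below is a minimal toy model, not the authors’ analysis: it uses generic Hill functions whose half-maximal points are placed near the thresholds quoted in the text (roughly 10 nM for cascade activation and 2 μM for VanA1 inactivation); the Hill coefficients and exact constants are assumptions.

```python
import numpy as np

def hill_up(c, k, n=2):
    """Increasing Hill response (cascade: MOR9-1 -> PCRE-driven VanA1 expression)."""
    return c**n / (k**n + c**n)

def hill_down(c, k, n=2):
    """Decreasing Hill response (switch: VanA1 activity on P1VanO2 lost at high vanillic acid)."""
    return k**n / (k**n + c**n)

vanillic = np.logspace(-10, -3, 8)   # 0.1 nM to 1 mM, molar
k_on, k_off = 10e-9, 2e-6            # assumed half-maximal points near the quoted thresholds

# Feed-forward band-pass: output requires the cascade to be ON and the switch not yet repressed.
seap = hill_up(vanillic, k_on) * hill_down(vanillic, k_off)

for c, s in zip(vanillic, seap):
    print(f"vanillic acid {c:9.2e} M -> relative SEAP {s:.2f}")
```

In this toy model, the complementary ON-OFF-ON (band-stop) profile used for Pdx1 would simply be 1 - seap.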

Vanillic acid-programmable lineage-control network

For the design of the vanillic acid-programmable synthetic lineage-control network, constitutive MOR9-1 expression and PCRE-driven VanA1 expression were combined with pSP12 (pASV40-Ngn3cm←P3VanO2→mFT-miR30Pdx1g-shRNA-pASV40) for endocrine specification and pSP17 (PCREm-Pdx1cm-2A-MafAcm-pASV40) for maturation of developing beta-cells (Fig. 2a,b). The pSP12-encoded expression unit enables the VanA1-controlled induction of the optimized bidirectional vanillic acid-responsive promoter (P3VanO2) that drives expression of a codon-modified Ngn3cm, the nucleic acid sequence of which is distinct from its genomic counterpart (Ngn3g) to allow for quantitative reverse transcription–PCR (qRT–PCR)-based discrimination. In the opposite direction, P3VanO2 transcribes miR30Pdx1g-shRNA, which exclusively targets genomic Pdx1 (Pdx1g) transcripts for RNA interference-based destruction and is linked to the production of a blue-to-red medium fluorescent timer40 (mFT) for precise visualization of the unit’s expression dynamics in situ. pSP17 contains a dicistronic expression unit in which the modified high-tightness and lower-sensitivity PCREm promoter (see below) drives co-cistronic expression of Pdx1cm and MafAcm, which are codon-modified versions producing native transcription factors that specifically differ from their genomic counterparts (Pdx1g, MafAg) in their nucleic acid sequence. After individual validation of the vanillic acid-controlled expression and functionality of all network components (Supplementary Figs 2–9), the lineage-control network was ready to be transfected into hIPSC-derived pancreatic progenitor cells. These cells are characterized by high levels of Pdx1g and Nkx6.1 expression and the absence of Ngn3g and MafAg production32, 33, 34 (day 0; Supplementary Figs 10–16).
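
The codon-modified/genomic distinction exploited here rests on the degeneracy of the genetic code: two transcripts can encode an identical protein while differing in nucleotide sequence, which is what allows qRT–PCR primers and the shRNA to recognize one variant and not the other. A toy illustration follows; the three-codon sequences are invented and are not taken from Ngn3 or Pdx1.

```python
# Minimal codon table covering this example; synonymous codons encode the same amino acid.
CODON_TABLE = {"ATG": "M", "GCT": "A", "GCC": "A", "AAA": "K", "AAG": "K"}

def translate(dna):
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3))

genomic_like   = "ATGGCTAAA"   # hypothetical 'genomic' coding sequence
codon_modified = "ATGGCCAAG"   # same protein, different nucleotides

assert translate(genomic_like) == translate(codon_modified) == "MAK"
assert genomic_like != codon_modified  # distinct targets for sequence-specific primers or shRNA
print(translate(genomic_like), genomic_like, codon_modified)
```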

 

Figure 2: Synthetic lineage-control network programming differential expression dynamics of pancreatic transcription factors.


 

http://www.nature.com/ncomms/2016/160411/ncomms11247/images/ncomms11247-f2.jpg

(a) Schematic of the synthetic lineage-control network. The constitutively expressed, vanillic acid-sensitive olfactory G protein-coupled receptor MOR9-1 (pCI-MOR9-1; PhCMV-MOR9-1-pA) senses extracellular vanillic acid levels and triggers a synthetic signalling cascade, inducing PCRE-driven expression of the transcription factor VanA1 (pSP1, PCRE-VanA1-pA). At medium vanillic acid concentrations (purple arrows), VanA1 binds and activates the bidirectional vanillic acid-responsive promoter P3VanO2 (pSP12, pA-Ngn3cm←P3VanO2→mFT-miR30Pdx1g-shRNA-pA), which drives the induction of codon-modified Neurogenin 3 (Ngn3cm) as well as the coexpression of both the blue-to-red medium fluorescent timer (mFT) for precise visualization of the unit’s expression dynamics and miR30Pdx1g-shRNA (a small hairpin RNA programming the exclusive destruction of genomic pancreatic and duodenal homeobox 1 (Pdx1g) transcripts). Consequently, Ngn3cm levels switch from low to high (OFF-to-ON), and Pdx1g levels toggle from high to low (ON-to-OFF). In addition, Ngn3cm triggers the transcription of Ngn3g from its genomic promoter, which initiates a positive-feedback loop. At high vanillic acid levels (orange arrows), VanA1 is inactivated, and both Ngn3cm and miR30Pdx1g-shRNA are shut down. At the same time, the MOR9-1-driven signalling cascade induces the modified high-tightness and lower-sensitivity PCREm promoter that drives the co-cistronic expression of the codon-modified variants of Pdx1 (Pdx1cm) and V-maf musculoaponeurotic fibrosarcoma oncogene homologue A (MafAcm; pSP17, PCREm-Pdx1cm-2A-MafAcm-pA). Consequently, Pdx1cm and MafAcm become fully induced. As Pdx1cm expression ramps up, it initiates a positive-feedback loop by inducing the genomic counterparts Pdx1g and MafAg. Importantly, Pdx1cm levels are not affected by miR30Pdx1g-shRNA because the latter is specific for genomic Pdx1g transcripts and because the positive feedback loop-mediated amplification of Pdx1g expression becomes active only after the shutdown of miR30Pdx1g-shRNA. Overall, the synthetic lineage-control network provides vanillic acid-programmable, transient, mutually exclusive expression switches for Ngn3 (OFF-ON-OFF) and Pdx1 (ON-OFF-ON) as well as the concomitant induction of MafA (OFF-ON) expression, which can be followed in real time (Supplementary Movies 1 and 2). (b) Schematic illustrating the individual differentiation steps from human IPSCs towards beta-like cells. The colours match the cell phenotypes reached during the individual differentiation stages programmed by the lineage-control network shown in a.
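
The legend’s logic can be restated compactly as a three-state lookup of which species are ON in each vanillic acid regime. The snippet below simply encodes the behaviour described above (a restatement for readability, not a model); "Pdx1g" refers to the genomic transcript targeted by the shRNA.

```python
# Network outputs in each vanillic acid regime, as described in the figure legend.
network_state = {
    "low":    {"Ngn3cm": "OFF", "miR30Pdx1g-shRNA": "OFF", "Pdx1g": "ON",  "Pdx1cm": "OFF", "MafAcm": "OFF"},
    "medium": {"Ngn3cm": "ON",  "miR30Pdx1g-shRNA": "ON",  "Pdx1g": "OFF", "Pdx1cm": "OFF", "MafAcm": "OFF"},
    "high":   {"Ngn3cm": "OFF", "miR30Pdx1g-shRNA": "OFF", "Pdx1g": "ON",  "Pdx1cm": "ON",  "MafAcm": "ON"},
}

for regime, outputs in network_state.items():
    print(f"{regime:>6} vanillic acid: " + ", ".join(f"{k}={v}" for k, v in outputs.items()))
```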

Following the co-transfection of pCI-MOR9-1 (PhCMV-MOR9-1-pASV40), pSP1 (PCRE-VanA1-pASV40), pSP12 (pASV40-Ngn3cm←P3VanO2→mFT-miR30Pdx1g-shRNA-pASV40) and pSP17 (PCREm-Pdx1cm-2A-MafAcm-pASV40) into hIPSC-derived pancreatic progenitor cells, the synthetic lineage-control network should override random endogenous differentiation activities and execute the pancreatic beta-cell-specific differentiation programme in a vanillic acid remote-controlled manner. To confirm that the lineage-control network operates as programmed, we cultivated network-containing and pEGFP-N1-transfected (negative-control) cells for 4 days at medium (2μM) and then 7 days at high (400μM) vanillic acid concentrations and profiled the differential expression dynamics of all of the network components and their genomic counterparts as well as the interrelated transcription factors and hormones in both whole populations and individual cells at days 0, 4, 11 and 14 (Figs 2 and 3 and Supplementary Figs 11–17).

 

Figure 3: Dynamics of the lineage-control network.


http://www.nature.com/ncomms/2016/160411/ncomms11247/images/ncomms11247-f3.jpg

(a,b) Quantitative RT–PCR-based expression profiling of the pancreatic transcription factors Ngn3cm/g, Pdx1cm/g and MafAcm/g in hIPSC-derived pancreatic progenitor cells containing the synthetic lineage-control network at days 4 and 11. Data are the means±s.d. of triplicate experiments (n=9). (c–g) Immunocytochemistry of pancreatic transcription factors Ngn3cm/g, Pdx1cm/g and MafAcm/g in hIPSC-derived pancreatic progenitor cells containing the synthetic lineage-control network at days 4 and 11. hIPSC-derived pancreatic progenitor cells were co-transfected with the lineage-control vectors pCI-MOR9-1 (PhCMV-MOR9-1-pA), pSP1 (PCRE-VanA1-pA), pSP12 (pA-Ngn3cm←P3VanO2→mFT-miR30Pdx1g-shRNA-pA) and pSP17 (PCREm-Pdx1cm-2A-MafAcm) and immunocytochemically stained for (c) VanA1 and Pdx1 (day 4), (d) VanA1 and Ngn3 (day 4), (e) VanA1 and Pdx1 (day 11), (f) MafA and Pdx1 (day 11) as well as (g) VanA1 and insulin (C-peptide) (day 11). The cells staining positive for VanA1 contain the lineage-control network. DAPI, 4′,6-diamidino-2-phenylindole. Scale bar, 100μm.

…….

Multicellular organisms, including humans, consist of a highly structured assembly of a multitude of specialized cell phenotypes that originate from the same zygote and have traversed a preprogrammed multifactorial developmental plan that orchestrates sequential differentiation steps with high precision in space and time19, 51. Because of the complexity of terminally differentiated cells, the function of damaged tissues can for most medical indications only be restored via the transplantation of donor material, which is in chronically short supply52.

Despite significant progress in regenerative medicine and the availability of stem cells, the design of protocols that replicate natural differentiation programmes and provide fully functional cell mimetics remains challenging29, 53. For example, efforts to generate beta-cells from human embryonic stem cells (hESCs) have led to reliable protocols involving the sequential administration of growth factors (activin A, bone morphogenetic protein 4 (BMP-4), basic fibroblast growth factor (bFGF), FGF-10, Noggin, vascular endothelial growth factor (VEGF) and Wnt3A) and small-molecule compounds (cyclopamine, forskolin, indolactam V, IDE1, IDE2, nicotinamide, retinoic acid, SB−431542 and γ-secretase inhibitor) that modulate differentiation-specific signalling pathways31, 54, 55. In vitro differentiation of hESC-derived pancreatic progenitor cells into beta-like cells is more challenging and has been achieved recently by a complex media formulation with chemicals and growth factors32, 33, 34.

hIPSCs have become a promising alternative to hESCs; however, their use remains restricted in many countries56. Most hIPSCs used for directed differentiation studies were derived from a juvenescent cell source that is expected to show a higher degree of differentiation potential compared with older donors that typically have a higher need for medical interventions37, 57, 58. We previously succeeded in producing mRNA-reprogrammed hIPSCs from adipose tissue-derived mesenchymal stem cells of a 50-year-old donor, demonstrating that the reprogramming of cells from a donor of advanced age is possible in principle59.

Recent studies applying similar hESC-based differentiation protocols to hIPSCs have produced cells that release insulin in response to high glucose32, 33, 34. This observation suggests that functional beta-like cells can eventually be derived from hIPSCs32, 33. In our hands, the growth-factor/chemical-based technique for differentiating human IPSCs resulted in beta-like cells with poor glucose responsiveness. Recent studies have revealed significant variability in the lineage specification propensity of different hIPSC lines35, 60 and substantial differences in the expression profiles of key transcription factors in hIPSC-derived beta-like cells33. Therefore, the growth-factor/chemical-based protocols may require further optimization and need to be customized for specific hIPSC lines35. Synthetic lineage-control networks providing precise dynamic control of transcription factor expression may overcome the challenges associated with the programming of beta-like cells from different hIPSC lines.

Rather than exposing hIPSCs to a refined compound cocktail that triggers the desired differentiation in a fraction of the stem cell population, we chose to design a synthetic lineage-control network to enable single input-programmable differentiation of hIPSC-derived pancreatic progenitor cells into glucose-sensitive insulin-secreting beta-like cells. In contrast with the use of growth-factor/chemical-based cocktails, synthetic lineage-control networks are expected to (i) be more economical because of in situ production of the required transcription factors, (ii) enable simultaneous control of ectopic and chromosomally encoded transcription factor variants, (iii) tap into endogenous pathways and not be limited to cell-surface input, (iv) display improved reversibility that is not dependent on the removal of exogenous growth factors via culture media replacement, (v) provide lateral inhibition, thereby reducing the random differentiation of neighbouring cells and (vi) enable trigger-programmable and (vii) precise differential transcription factor expression switches.

The synthetic lineage-control network that precisely replicates the endogenous relative expression dynamics of the transcription factors Pdx-1, Ngn3 and MafA required the design of a new network topology that interconnects a synthetic signalling cascade and a gene switch with differential and opposing sensitivity to the food additive vanillic acid. This differentiation device provides different band-pass filter, time-delay and feed-forward amplifier topologies that interface with endogenous positive-feedback loops to orchestrate the timely expression and repression of heterologous and chromosomally encoded Ngn3, Pdx1 and MafA variants. The temporary nature of the engineering intervention, which consists of transient transfection of the genetic lineage-control components in the absence of any selection, is expected to avoid stable modification of host chromosomes and alleviate potential safety concerns. In addition, the resulting beta-cell mass could be encapsulated inside vascularized microcontainers28, a proven containment strategy in prototypic cell-based therapies currently being tested in animal models of prominent human diseases14, 15, 16, 61, 62 as well as in human clinical trials28.

The hIPSC-derived beta-like cells resulting from this trigger-induced synthetic lineage-control network exhibited glucose-stimulated insulin-release dynamics and capacity matching the human physiological range, and transcriptional profiling, flow cytometric analysis and electron microscopy corroborated that the lineage-controlled stem cells had reached a mature beta-cell phenotype. In principle, the combination of hIPSCs derived from the adipose tissue of a 50-year-old donor59 with a synthetic lineage-control network programming glucose-sensitive insulin-secreting beta-like cells closes the design cycle of regenerative medicine63. However, hIPSCs that are derived from T1DM patients, differentiated into beta-like cells and transplanted back into the donor would still be targeted by the immune system, as demonstrated in the transplantation of segmental pancreatic grafts from identical twins64. Therefore, any beta-cell-replacement therapy will require complementary modulation of the immune system either via drugs30, 65, engineering or cell-based approaches66, 67 or packaging inside vascularizing, semi-permeable immunoprotective microcontainers28.

Capitalizing on the design principles of synthetic biology, we have successfully constructed and validated a synthetic lineage-control network that replicates the differential expression dynamics of critical transcription factors and mimics the native differentiation pathway to programme hIPSC-derived pancreatic progenitor cells into glucose-sensitive insulin-secreting beta-like cells that closely resemble human pancreatic islets. The design of input-triggered synthetic lineage-control networks that execute a preprogrammed sequential differentiation agenda coordinating the timely induction and repression of multiple genes could provide a new impetus for the advancement of developmental biology and regenerative medicine.

Other related articles published in this Open Access Online Scientific Journal include the following:

Adipocyte Derived Stroma Cells: Their Usage in Regenerative Medicine and Reprogramming into Pancreatic Beta-Like Cells

Curator: Evelina Cohn, Ph.D.

https://pharmaceuticalintelligence.com/2016/03/03/adipocyte-derived-stroma-cells-their-usage-in-regenerative-medicine-and-reprogramming-into-pancreatic-beta-like-cells/



Genomics and epigenetics link to DNA structure

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Sequence and Epigenetic Factors Determine Overall DNA Structure

http://www.genengnews.com/gen-news-highlights/sequence-and-epigenetic-factors-determine-overall-dna-structure/81252592/

http://www.genengnews.com/Media/images/GENHighlight/Atomiclevelsimulationsshowingelectrostaticforcesbetweeneachatom1259202113.jpg

Atomic-level simulations show electrostatic forces between each atom. [Alek Aksimentiev, University of Illinois at Urbana-Champaign]

 

The traditionally held hypothesis about the highly ordered organization of DNA describes the interaction of various proteins with DNA sequences to mediate the dynamic structure of the molecule. However, recent evidence has emerged that stretches of homologous DNA sequences can associate preferentially with one another, even in the absence of proteins.

Researchers at the University of Illinois Center for the Physics of Living Cells, Johns Hopkins University, and Ulsan National Institute of Science and Technology (UNIST) in South Korea found that DNA molecules interact directly with one another in ways that are dependent on the sequence of the DNA and epigenetic factors, such as methylation.

The researchers described evidence they found for sequence-dependent attractive interactions between double-stranded DNA molecules that neither involve intermolecular strand exchange nor are mediated by DNA-binding proteins.

“DNA molecules tend to repel each other in water, but in the presence of special types of cations, they can attract each other just like nuclei pulling each other by sharing electrons in between,” explained lead study author Hajin Kim, Ph.D., assistant professor of biophysics at UNIST. “Our study suggests that the attractive force strongly depends on the nucleic acid sequence and also the epigenetic modifications.”

The investigators used atomic-level supercomputer simulations to measure the forces between a pair of double-stranded DNA helices and proposed that the distribution of methyl groups on the DNA was the key to regulating this sequence-dependent attraction. To verify their findings experimentally, the scientists were able to observe a single pair of DNA molecules within nanoscale bubbles.

“Here we combine molecular dynamics simulations with single-molecule fluorescence resonance energy transfer experiments to examine the interactions between duplex DNA in the presence of spermine, a biological polycation,” the authors wrote. “We find that AT-rich DNA duplexes associate more strongly than GC-rich duplexes, regardless of the sequence homology. Methyl groups of thymine act as a steric block, relocating spermine from major grooves to interhelical regions, thereby increasing DNA–DNA attraction.”

The findings from this study were published recently in Nature Communications in an article entitled “Direct Evidence for Sequence-Dependent Attraction Between Double-Stranded DNA Controlled by Methylation.”

After conducting numerous further simulations, the research team concluded that direct DNA–DNA interactions could play a central role in how chromosomes are organized in the cell and which ones are expanded or folded up compactly, ultimately determining functions of different cell types or regulating the cell cycle.

“Biophysics is a fascinating subject that explores the fundamental principles behind a variety of biological processes and life phenomena,” Dr. Kim noted. “Our study requires cross-disciplinary efforts from physicists, biologists, chemists, and engineering scientists and we pursue the diversity of scientific disciplines within the group.”

Dr. Kim concluded by stating that “in our lab, we try to unravel the mysteries within human cells based on the principles of physics and the mechanisms of biology. In the long run, we are seeking for ways to prevent chronic illnesses and diseases associated with aging.”

 

Direct evidence for sequence-dependent attraction between double-stranded DNA controlled by methylation

Jejoong Yoo, Hajin Kim, Aleksei Aksimentiev, and Taekjip Ha
Nature Communications 7, 11045 (2016). DOI: 10.1038/ncomms11045

http://bionano.physics.illinois.edu/sites/default/files/styles/large/public/telepathy_figures_0.png?itok=VUJIHX2_

Although proteins mediate highly ordered DNA organization in vivo, theoretical studies suggest that homologous DNA duplexes can preferentially associate with one another even in the absence of proteins. Here we combine molecular dynamics simulations with single-molecule fluorescence resonance energy transfer experiments to examine the interactions between duplex DNA in the presence of spermine, a biological polycation. We find that AT-rich DNA duplexes associate more strongly than GC-rich duplexes, regardless of the sequence homology. Methyl groups of thymine act as a steric block, relocating spermine from major grooves to interhelical regions, thereby increasing DNA–DNA attraction. Indeed, methylation of cytosines makes attraction between GC-rich DNA as strong as that between AT-rich DNA. Recent genome-wide chromosome organization studies showed that remote contact frequencies are higher for AT-rich and methylated DNA, suggesting that the direct DNA–DNA interactions we report here may play a role in chromosome organization and gene regulation.

Formation of a DNA double helix occurs through Watson–Crick pairing mediated by the complementary hydrogen bond patterns of the two DNA strands and base stacking. Interactions between double-stranded (ds)DNA molecules in typical experimental conditions containing mono- and divalent cations are repulsive1, but can turn attractive in the presence of high-valence cations2. Theoretical studies have identified the ion–ion correlation effect as a possible microscopic mechanism of the DNA condensation phenomena3, 4, 5. Theoretical investigations have also suggested that sequence-specific attractive forces might exist between two homologous fragments of dsDNA6, and this ‘homology recognition’ hypothesis was supported by in vitro atomic force microscopy7 and in vivo point mutation assays8. However, the systems used in these measurements were too complex to rule out other possible causes such as Watson–Crick strand exchange between partially melted DNA or protein-mediated association of DNA.

Here we present direct evidence for sequence-dependent attractive interactions between dsDNA molecules that neither involve intermolecular strand exchange nor are mediated by proteins. Further, we find that the sequence-dependent attraction is controlled not by homology—contrary to the ‘homology recognition’ hypothesis6—but by a methylation pattern. Unlike the previous in vitro study that used monovalent (Na+) or divalent (Mg2+) cations7, we presumed that for the sequence-dependent attractive interactions to operate, polyamines would have to be present. Polyamines are biological polycations present at millimolar concentrations in most eukaryotic cells and are essential for cell growth and proliferation9, 10. Polyamines are also known to condense DNA in a concentration-dependent manner2, 11. In this study, we use spermine4+ (Sm4+), which contains four positively charged amine groups per molecule.

Sequence dependence of DNA–DNA forces

To characterize the molecular mechanisms of DNA–DNA attraction mediated by polyamines, we performed molecular dynamics (MD) simulations where two effectively infinite parallel dsDNA molecules, 20 base pairs (bp) each in a periodic unit cell, were restrained to maintain a prescribed inter-DNA distance; the DNA molecules were free to rotate about their axes. The two DNA molecules were submerged in a 100mM aqueous solution of NaCl that also contained 20 Sm4+ molecules; thus, the total charge of Sm4+, 80 e, was equal in magnitude to the total charge of DNA (2 × 2 × 20 e, two unit charges per base pair; Fig. 1a). Repeating such simulations at various inter-DNA distances and applying weighted histogram analysis12 yielded the change in the interaction free energy (ΔG) as a function of the DNA–DNA distance (Fig. 1b,c). In broad agreement with previous experimental findings13, ΔG had a minimum, ΔGmin, at the inter-DNA distance of 25−30Å for all sequences examined, indeed showing that two duplex DNA molecules can attract each other. The free energy of inter-duplex attraction was at least an order of magnitude smaller than the Watson–Crick interaction free energy of a DNA duplex of the same length. A minimum of ΔG was not observed in the absence of polyamines, for example, when divalent or monovalent ions were used instead14, 15.
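
As a quick arithmetic check of the quoted charge balance (bookkeeping only, not part of the study): each Sm4+ carries four positive charges and each base pair of duplex DNA carries two negative backbone charges, so 20 spermine molecules exactly neutralize two 20-bp duplexes.

```python
# Charge bookkeeping for the simulation box described above.
n_spermine = 20
charge_per_spermine = 4            # Sm4+: four positively charged amine groups
n_duplexes = 2
bp_per_duplex = 20
charge_per_bp = 2                  # two backbone phosphates (one per strand) per base pair

cation_charge = n_spermine * charge_per_spermine           # +80 e
dna_charge = -n_duplexes * bp_per_duplex * charge_per_bp   # -80 e
print(cation_charge, dna_charge, cation_charge + dna_charge)  # 80 -80 0
```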

Figure 1: Polyamine-mediated DNA sequence recognition observed in MD simulations and smFRET experiments.

(a) Set-up of MD simulations. A pair of parallel 20-bp dsDNA duplexes is surrounded by aqueous solution (semi-transparent surface) containing 20 Sm4+ molecules (which exactly compensate the charge of the DNA) and 100mM NaCl. Under periodic boundary conditions, the DNA molecules are effectively infinite. A harmonic potential (not shown) is applied to maintain the prescribed distance between the dsDNA molecules. (b,c) Interaction free energy of the two DNA helices as a function of the DNA–DNA distance for repeat-sequence DNA fragments (b) and DNA homopolymers (c). (d) Schematic of experimental design. A pair of 120-bp dsDNA labelled with a Cy3/Cy5 FRET pair was encapsulated in a ~200-nm diameter lipid vesicle; the vesicles were immobilized on a quartz slide through biotin–neutravidin binding. Sm4+ molecules added after immobilization penetrated into the porous vesicles. The fluorescence signals were measured using a total internal reflection microscope. (e) Typical fluorescence signals indicative of DNA–DNA binding. Brief jumps in the FRET signal indicate binding events. (f) The fraction of traces exhibiting binding events at different Sm4+ concentrations for AT-rich, GC-rich, AT nonhomologous and CpG-methylated DNA pairs. The sequence of the CpG-methylated DNA specifies the methylation sites (CG sequence, orange), restriction sites (BstUI, triangle) and primer region (underlined). The degree of attractive interaction for the AT nonhomologous and CpG-methylated DNA pairs was similar to that of the AT-rich pair. All measurements were done at [NaCl]=50mM and T=25°C. (g) Design of the hybrid DNA constructs: 40-bp AT-rich and 40-bp GC-rich regions were flanked by 20-bp common primers. The two labelling configurations permit distinguishing parallel from anti-parallel orientation of the DNA. (h) The fraction of traces exhibiting binding events as a function of NaCl concentration at fixed concentration of Sm4+ (1mM). The fraction is significantly higher for parallel orientation of the DNA fragments.

Unexpectedly, we found that DNA sequence has a profound impact on the strength of attractive interaction. The absolute value of ΔG at minimum relative to the value at maximum separation, |ΔGmin|, showed a clearly rank-ordered dependence on the DNA sequence: |ΔGmin| of (A)20 > |ΔGmin| of (AT)10 > |ΔGmin| of (GC)10 > |ΔGmin| of (G)20. Two trends can be noted. First, AT-rich sequences attract each other more strongly than GC-rich sequences16. For example, |ΔGmin| of (AT)10 (1.5 kcal mol−1 per turn) is about twice |ΔGmin| of (GC)10 (0.8 kcal mol−1 per turn) (Fig. 1b). Second, duplexes having identical AT content but different partitioning of the nucleotides between the strands (that is, (A)20 versus (AT)10 or (G)20 versus (GC)10) exhibit statistically significant differences (~0.3 kcal mol−1 per turn) in the value of |ΔGmin|.
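
For a sense of scale, the quoted per-turn free energies can be converted to thermal-energy units; assuming T = 298 K (kBT ≈ 0.593 kcal mol−1), the AT-rich attraction is roughly 2–3 kBT per helical turn and the GC-rich attraction just over 1 kBT. This back-of-the-envelope conversion is ours, not the authors’.

```python
# Rough unit conversion of the quoted per-turn free energies (kcal/mol -> units of kB*T at 298 K).
KT_KCAL_PER_MOL = 0.593  # kB*T at 298 K

for label, dg_kcal in [("(AT)10", 1.5), ("(GC)10", 0.8)]:
    print(f"|dG_min| {label}: {dg_kcal} kcal/mol per turn ~= {dg_kcal / KT_KCAL_PER_MOL:.1f} kT per turn")
```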

To validate the findings of MD simulations, we performed single-molecule fluorescence resonance energy transfer (smFRET)17 experiments of vesicle-encapsulated DNA molecules. Equimolar mixture of donor- and acceptor-labelled 120-bp dsDNA molecules was encapsulated in sub-micron size, porous lipid vesicles18 so that we could observe and quantitate rare binding events between a pair of dsDNA molecules without triggering large-scale DNA condensation2. Our DNA constructs were long enough to ensure dsDNA–dsDNA binding that is stable on the timescale of an smFRET measurement, but shorter than the DNA’s persistence length (~150bp (ref. 19)) to avoid intramolecular condensation20. The vesicles were immobilized on a polymer-passivated surface, and fluorescence signals from individual vesicles containing one donor and one acceptor were selectively analysed (Fig. 1d). Binding of two dsDNA molecules brings their fluorescent labels in close proximity, increasing the FRET efficiency (Fig. 1e).

FRET signals from individual vesicles were diverse. Sporadic binding events were observed in some vesicles, while others exhibited stable binding; traces indicative of frequent conformational transitions were also observed (Supplementary Fig. 1A). Such diverse behaviours could be expected from non-specific interactions of two large biomolecules having structural degrees of freedom. No binding events were observed in the absence of Sm4+ (Supplementary Fig. 1B) or when no DNA molecules were present. To quantitatively assess the propensity of forming a bound state, we chose to use the fraction of single-molecule traces that showed any binding events within the observation time of 2min (Methods). This binding fraction for the pair of AT-rich dsDNAs (AT1, 100% AT in the middle 80-bp section of the 120-bp construct) reached a maximum at ~2mM Sm4+ (Fig. 1f), which is consistent with the results of previous experimental studies2, 3. In accordance with the prediction of our MD simulations, GC-rich dsDNAs (GC1, 75% GC in the middle 80bp) showed a much lower binding fraction at all Sm4+ concentrations (Fig. 1b,c). Regardless of the DNA sequence, the binding fraction dropped back to zero at high Sm4+ concentrations, likely due to the resolubilization of now positively charged DNA–Sm4+ complexes2, 3, 13.
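
The binding-fraction metric itself reduces to simple bookkeeping: for each vesicle trace, record whether at least one binding event occurred within the 2-min window, then average over traces. A minimal sketch with hypothetical event times (the data below are made up purely to show the calculation):

```python
# Each trace lists binding-event times (in seconds) detected within the observation window.
traces = [
    [],            # no binding events
    [12.4, 85.0],  # sporadic binding
    [3.1],         # a single event
    [],            # no binding events
]

OBSERVATION_WINDOW_S = 120.0  # 2-minute observation time

def binding_fraction(traces, window=OBSERVATION_WINDOW_S):
    """Fraction of traces showing at least one binding event inside the observation window."""
    hits = sum(1 for events in traces if any(t <= window for t in events))
    return hits / len(traces)

print(binding_fraction(traces))  # 0.5 for this hypothetical set
```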

Because the donor and acceptor fluorophores were attached to the same sequence of DNA, it remained possible that the sequence homology between the donor-labelled DNA and the acceptor-labelled DNA was necessary for their interaction6. To test this possibility, we designed another AT-rich DNA construct AT2 by scrambling the central 80-bp section of AT1 to remove the sequence homology (Supplementary Table 1). The fraction of binding traces for this nonhomologous pair of donor-labelled AT1 and acceptor-labelled AT2 was comparable to that for the homologous AT-rich pair (donor-labelled AT1 and acceptor-labelled AT1) at all Sm4+ concentrations tested (Fig. 1f). Furthermore, this data set rules out the possibility that the higher binding fraction observed experimentally for the AT-rich constructs was caused by inter-duplex Watson–Crick base pairing of the partially melted constructs.

Next, we designed a DNA construct named ATGC, containing, in its middle section, a 40-bp AT-rich segment followed by a 40-bp GC-rich segment (Fig. 1g). By attaching the acceptor to the end of either the AT-rich or GC-rich segments, we could compare the likelihood of observing the parallel binding mode that brings the two AT-rich segments together and the anti-parallel binding mode. Measurements at 1mM Sm4+ and 25 or 50mM NaCl indicated a preference for the parallel binding mode by ~30% (Fig. 1h). Therefore, AT content can modulate DNA–DNA interactions even in a complex sequence context. Note that increasing the concentration of NaCl while keeping the concentration of Sm4+ constant enhances competition between Na+ and Sm4+ counterions, which reduces the concentration of Sm4+ near DNA and hence the frequency of dsDNA–dsDNA binding events (Supplementary Fig. 2).

Methylation determines the strength of DNA–DNA attraction

Analysis of the MD simulations revealed the molecular mechanism of the polyamine-mediated sequence-dependent attraction (Fig. 2). In the case of the AT-rich fragments, the bulky methyl group of the thymine base blocks Sm4+ binding to the N7 nitrogen atom of adenine, which is the cation-binding hotspot21, 22. As a result, Sm4+ is not found in the major grooves of the AT-rich duplexes and resides mostly near the DNA backbone (Fig. 2a,d). Such relocated Sm4+ molecules bridge the two DNA duplexes better, accounting for the stronger attraction16, 23, 24, 25. In contrast, a significant amount of Sm4+ is adsorbed to the major groove of the GC-rich helices, which lacks the cation-blocking methyl group (Fig. 2b,e).

Figure 2: Molecular mechanism of polyamine-mediated DNA sequence recognition.

(a–c) Representative configurations of Sm4+ molecules at the DNA–DNA distance of 28Å for the (AT)10–(AT)10 (a), (GC)10–(GC)10 (b) and (GmC)10–(GmC)10 (c) DNA pairs. The backbone and bases of DNA are shown as ribbon and molecular bond, respectively; Sm4+ molecules are shown as molecular bonds. Spheres indicate the location of the N7 atoms and the methyl groups. (d–f) The average distributions of cations for the three sequence pairs featured in a–c. Top: density of Sm4+ nitrogen atoms (d=28Å) averaged over the corresponding MD trajectory and the z axis. White circles (20Å in diameter) indicate the location of the DNA helices. Bottom: the average density of Sm4+ nitrogen (blue), DNA phosphate (black) and sodium (red) atoms projected onto the DNA–DNA distance axis (x axis). The plot was obtained by averaging the corresponding heat map data over y=[−10, 10] Å. See Supplementary Figs 4 and 5 for the cation distributions at d=30, 32, 34 and 36Å.

If indeed the extra methyl group in thymine, which is not found in cytosine, is responsible for stronger DNA–DNA interactions, we can predict that cytosine methylation, which occurs naturally in many eukaryotic organisms and is an essential epigenetic regulation mechanism26, would also increase the strength of DNA–DNA attraction. MD simulations showed that the GC-rich helices containing methylated cytosines (mC) lose the adsorbed Sm4+ (Fig. 2c,f) and that |ΔGmin| of (GC)10 increases on methylation of cytosines to become similar to |ΔGmin| of (AT)10 (Fig. 1b).

To experimentally assess the effect of cytosine methylation, we designed another GC-rich construct GC2 that had the same GC content as GC1 but a higher density of CpG sites (Supplementary Table 1). The CpG sites were then fully methylated using M. SssI methyltransferase (Supplementary Fig. 3; Methods). As predicted from the MD simulations, methylation of the GC-rich constructs increased the binding fraction to the level of the AT-rich constructs (Fig. 1f).

The sequence dependence of |ΔGmin| and its relation to the Sm4+ adsorption patterns can be rationalized by examining the number of Sm4+ molecules shared by the dsDNA molecules (Fig. 3a). An Sm4+ cation adsorbed to the major groove of one dsDNA is separated from the other dsDNA by at least 10Å, contributing much less to the effective DNA–DNA attractive force than a cation positioned between the helices, that is, the ‘bridging’ Sm4+ (ref. 23). An adsorbed Sm4+ also repels other Sm4+ molecules due to like-charge repulsion, lowering the concentration of bridging Sm4+. To demonstrate that the concentration of bridging Sm4+ controls the strength of DNA–DNA attraction, we computed the number of bridging Sm4+ molecules, Nspm (Fig. 3b). Indeed, the number of bridging Sm4+ molecules ranks in the same order as |ΔGmin|: Nspm of (A)20 > Nspm of (AT)10 ≈ Nspm of (GmC)10 > Nspm of (GC)10 > Nspm of (G)20. Thus, the number density of nucleotides carrying a methyl group (T and mC) is the primary determinant of the strength of attractive interaction between two dsDNA molecules. At the same time, the spatial arrangement of the methyl group-carrying nucleotides can affect the interaction strength as well (Fig. 3c). The number of methyl groups and their distribution in the (AT)10 and (GmC)10 duplex DNA are identical, and so are their interaction free energies, |ΔGmin| of (AT)10 ≈ |ΔGmin| of (GmC)10. For AT-rich DNA sequences, clustering of the methyl groups repels Sm4+ from the major groove more efficiently than when the same number of methyl groups is distributed along the DNA (Fig. 3b). Hence, |ΔGmin| of (A)20 > |ΔGmin| of (AT)10. For GC-rich DNA sequences, clustering of the cation-binding sites (N7 nitrogen) attracts more Sm4+ than when such sites are distributed along the DNA (Fig. 3b), hence |ΔGmin| is larger for (GC)10 than for (G)20.

Figure 3: Methylation modulates the interaction free energy of two dsDNA molecules by altering the number of bridging Sm4+.

(a) Typical spatial arrangement of Sm4+ molecules around a pair of DNA helices. The phosphate groups of DNA and the amine groups of Sm4+ are shown as red and blue spheres, respectively. ‘Bridging’ Sm4+ molecules reside between the DNA helices. Orange rectangles illustrate the volume used for counting the number of bridging Sm4+ molecules. (b) The number of bridging amine groups as a function of the inter-DNA distance. The total number of Sm4+ nitrogen atoms was computed by averaging over the corresponding MD trajectory and the 10Å (x axis) by 20Å (y axis) rectangle prism volume (a) centred between the DNA molecules. (c) Schematic representation of the dependence of the interaction free energy of two DNA molecules on their nucleotide sequence. The number and spatial arrangement of nucleotides carrying a methyl group (T or mC) determine the interaction free energy of two dsDNA molecules.
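For readers who want to reproduce this kind of bridging-cation count from their own trajectories, here is a minimal numpy sketch. It assumes the Sm4+ nitrogen coordinates for one frame and the x positions of the two helix axes have already been extracted (for example, with MDAnalysis), and that x runs along the inter-DNA axis with y along the 20Å dimension centred on the helices; the array names and the helper function are hypothetical, not part of the published analysis.

```python
import numpy as np

def count_bridging_amines(n_coords, x_axis_dna1, x_axis_dna2,
                          half_width=5.0, half_height=10.0):
    """Count Sm4+ nitrogen atoms inside the 10 A x 20 A rectangular prism
    centred between the two helices, for a single trajectory frame.

    n_coords    : (N, 3) array of Sm4+ nitrogen coordinates, in angstrom
    x_axis_dna1 : x position of the first helix axis
    x_axis_dna2 : x position of the second helix axis
    """
    mid_x = 0.5 * (x_axis_dna1 + x_axis_dna2)              # midpoint of the DNA pair
    in_x = np.abs(n_coords[:, 0] - mid_x) <= half_width    # 10 A slab along x
    in_y = np.abs(n_coords[:, 1]) <= half_height           # 20 A extent along y
    return int(np.count_nonzero(in_x & in_y))

# Averaging the per-frame counts over the trajectory gives N_spm as plotted in Fig. 3b:
# n_spm = np.mean([count_bridging_amines(frame, x1, x2) for frame in frames])
```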

Genome-wide investigations of chromosome conformations using the Hi-C technique revealed that AT-rich loci form tight clusters in the human nucleus27, 28. Gene or chromosome inactivation is often accompanied by increased methylation of DNA29 and compaction of facultative heterochromatin regions30. The consistency between those phenomena and our findings suggests the possibility that the polyamine-mediated, sequence-dependent DNA–DNA interaction might play a role in chromosome folding and epigenetic regulation of gene expression.

  1. Rau, D. C., Lee, B. & Parsegian, V. A. Measurement of the repulsive force between polyelectrolyte molecules in ionic solution: hydration forces between parallel DNA double helices. Proc. Natl Acad. Sci. USA 81, 2621–2625 (1984).
  2. Raspaud, E., Olvera de la Cruz, M., Sikorav, J. L. & Livolant, F. Precipitation of DNA by polyamines: a polyelectrolyte behavior. Biophys. J. 74, 381–393 (1998).
  3. Besteman, K., Van Eijk, K. & Lemay, S. G. Charge inversion accompanies DNA condensation by multivalent ions. Nat. Phys. 3, 641–644 (2007).
  4. Lipfert, J., Doniach, S., Das, R. & Herschlag, D. Understanding nucleic acid-ion interactions. Annu. Rev. Biochem. 83, 813–841 (2014).
  5. Grosberg, A. Y., Nguyen, T. T. & Shklovskii, B. I. The physics of charge inversion in chemical and biological systems. Rev. Mod. Phys. 74, 329–345 (2002).
  6. Kornyshev, A. A. & Leikin, S. Sequence recognition in the pairing of DNA duplexes. Phys. Rev. Lett. 86, 3666–3669 (2001).
  7. Danilowicz, C. et al. Single molecule detection of direct, homologous, DNA/DNA pairing. Proc. Natl Acad. Sci. USA 106, 19824–19829 (2009).
  8. Gladyshev, E. & Kleckner, N. Direct recognition of homology between double helices of DNA in Neurospora crassa. Nat. Commun. 5, 3509 (2014).
  9. Tabor, C. W. & Tabor, H. Polyamines. Annu. Rev. Biochem. 53, 749–790 (1984).
  10. Thomas, T. & Thomas, T. J. Polyamines in cell growth and cell death: molecular mechanisms and therapeutic applications. Cell. Mol. Life Sci. 58, 244–258 (2001).

Read Full Post »


Triglycerides: Are They a Risk Factor or a Risk Marker for Atherosclerosis and Cardiovascular Disease? The Impact of Genetic Mutations in the ANGPTL4 Gene, Encoder of the angiopoietin-like 4 Protein, an Inhibitor of Lipoprotein Lipase

 

Reporters, Curators and Authors: Aviva Lev-Ari, PhD, RN and Larry H. Bernstein, MD, FCAP

Introduction

The role of triglycerides as a risk factor for cardiovascular disease is not new, going back to Donald Fredrickson’s classification of hyperlipidemias, at least with respect to Type I and Type IIb. Whether there was a mechanism beyond the observations remained an open question. The paper that follows addresses that question.

 

Large Genetic Studies Support Role For Triglycerides In Cardiovascular Disease

SOURCE

http://cardiobrief.org/2016/03/03/large-genetic-studies-support-role-for-triglycerides-in-cardiovascular-disease/#comments

 

Two papers published in the New England Journal of Medicine offer new genetic evidence to support the increasingly accepted though still controversial view that triglycerides play an important causal role in cardiovascular disease. If fully validated, the new findings could lead to new drugs to prevent and treat cardiovascular disease, though others caution that there is still a long way to go before this could happen.

Both studies describe the impact of genetic mutations on a gene (ANGPTL4) that encodes a protein (angiopoietin-like 4) that inhibits lipoprotein lipase, an enzyme that plays a key role in breaking down and removing triglycerides from the blood. The large studies found that people with mutations that inactivate ANGPTL4 have lower levels of triglycerides, higher levels of HDL cholesterol, and decreased risk for cardiovascular disease.

The findings, writes Sander Kersten (Wageningen University, the Netherlands) in an accompanying editorial, “suggest that lowering plasma triglyceride levels is a viable approach to reducing the risk of coronary artery disease.”

The Genetics of Dyslipidemia — When Less Is More

Sander Kersten, Ph.D.   Mar 2, 2016   http://dx.doi.org/10.1056/NEJMe1601117

Two groups of investigators now describe in the Journal important genetic evidence showing a causal role of plasma triglycerides in coronary heart disease. Stitziel and colleagues2 tested 54,003 coding-sequence variants covering 13,715 human genes in more than 72,000 patients with coronary artery disease and 120,000 controls. Dewey and colleagues3 sequenced the exons of the gene encoding angiopoietin-like 4 (ANGPTL4) in samples obtained from nearly 43,000 participants in the DiscovEHR human genetics study. The two groups found a significant association between an inactivating mutation (E40K) in ANGPTL4 and both low plasma triglyceride levels and high levels of HDL cholesterol. ANGPTL4 is an inhibitor of lipoprotein lipase, the enzyme that breaks down plasma triglycerides along the capillaries in heart, muscle, and fat.4 Extensive research has shown that ANGPTL4 orchestrates the processing of triglyceride-rich lipoproteins during physiologic conditions such as fasting, exercise, and cold exposure.4 The E40K mutation in ANGPTL4 was previously shown to nearly eliminate the ability of ANGPTL4 to inhibit lipoprotein lipase, a mechanism that may result in part from the destabilization of ANGPTL4.5

The key finding in each study was that carriers of the E40K mutation and other rare mutations in ANGPTL4 had a lower risk of coronary artery disease than did noncarriers, a result that is consistent with the lower triglyceride levels and higher HDL cholesterol levels among mutation carriers. These findings confirm previous data6 and provide convincing genetic evidence that an elevated plasma triglyceride level increases the risk of coronary heart disease. In combination with extensive recent data on other genetic variants that modulate plasma triglyceride levels, the studies suggest that lowering plasma triglyceride levels is a viable approach to reducing the risk of coronary artery disease.

However, as a cautionary note, Talmud and colleagues7 previously found that the presence of the E40K variant was associated with an increased risk of coronary heart disease after adjustment for the altered plasma lipids. Consistent with this hypothesis, the overexpression of Angptl4 in mice was found to protect against atherosclerosis independent of plasma lipids.8

The studies also “implicate targeted inactivation of ANGPTL4 as a potential weapon in the war on heart disease,” Kersten writes, though he also points to a previous study that did not support this hypothesis. Sekar Kathiresan (Broad Institute), senior author of one of the NEJM studies, told me that the previous study was small and “basically got the result wrong. Between the two papers in this NEJM issue, we are looking at 10X more data.”

Recent large genetic studies have resulted in an important change in the field. Many researchers now believe that HDL, which was once thought to play an important protective role in atherosclerosis, is only a marker of disease. In contrast, triglycerides are now thought by many to play an important functional role.

One of the NEJM papers showed that a human monoclonal antibody to ANGPTL4 lowered triglyceride levels in animals. The study was funded by Regeneron and was performed by researchers at Regeneron and Geisinger, as part of an ongoing collaboration using deidentified genetic data from Geisinger patients. In their NEJM paper the researchers reported inflammation and other side effects in the animals treated with the antibody, but they said that no such problem has been observed in humans who have mutations that have the same functional effect as the antibody.

Coding Variation in ANGPTL4, LPL, and SVEP1 and the Risk of Coronary Disease
Myocardial Infarction Genetics and CARDIoGRAM Exome Consortia Investigators

March 2, 2016     http://dx.doi.org/10.1056/NEJMoa1507652

Although genomewide association studies have identified more than 56 loci associated with the risk of coronary artery disease,1-3 the disease-associated variants are typically common (minor-allele frequency >5%) and located in noncoding sequences; this has made it difficult to pinpoint causal genes and affected pathways. This lack of a causal mechanism has in part hindered the immediate translation of the findings of genomewide association studies into new therapeutic targets. However, the discovery of rare or low-frequency coding-sequence variants that affect the risk of coronary artery disease has facilitated advances in the prevention and treatment of disease. The most recent example of such advances is the development of a new class of therapeutic agents that is based on the discovery of the gene encoding proprotein convertase subtilisin/kexin type 9 (PCSK9) as a regulator of low-density lipoprotein (LDL) cholesterol4 and the discovery that low-frequency and loss-of-function variants in this gene protect against coronary artery disease.5,6

Recently, low-frequency coding variation across the genome was systematically tabulated with the use of next-generation exome and whole-genome sequencing data from more than 12,000 persons of various ancestries (including a major contribution from the National Heart, Lung, and Blood Institute Exome Sequencing Project). Protein-altering variants (i.e., nonsynonymous, splice-site, and nonsense single-nucleotide substitutions) that were observed at least twice among these 12,000 persons were included in a genotyping array (hereafter referred to as the exome array). In addition, the exome array contains previously described variants from genomewide association studies, a sparse genomewide grid of common markers, markers that are informative with regard to ancestry (i.e., African American, Native American, and European), and some additional content. Additional information on the design of the exome array is provided at http://genome.sph.umich.edu/wiki/Exome_Chip_Design. In this study, we focused on the 220,231 autosomal variants that were present on the array and were expected to alter protein sequence (i.e., missense, nonsense, splice-site, and frameshift variants) and used these to test the contribution of low-frequency coding variation to the risk of coronary artery disease.

Low-Frequency Coding Variants Associated with Coronary Artery Disease

The discovery cohort comprised 120,575 persons (42,335 patients and 78,240 controls) (Table S1 in the Supplementary Appendix). In the discovery cohort, we found significant associations between low-frequency coding variants in the LPA and PCSK9 genes and coronary artery disease (Table 1: Low-Frequency Coding Variations Previously Associated with Coronary Artery Disease). Both gene loci also harbor common noncoding variants associated with coronary artery disease that had previously been discovered through genomewide association studies. These variants were also present on the exome array and had significant associations with coronary artery disease in our study (Table 1). In a conditional analysis, the associations between coronary artery disease and the low-frequency coding variants in both LPA and PCSK9 were found to be independent of the associations between coronary artery disease and the more common variants (Table 1). ….

We found a significant association between SVEP1 p.D2702G and blood pressure (Table 3: Association between Low-Frequency Variants and Traditional Risk Factors; and Table S7 in the Supplementary Appendix). The allele associated with an increased risk of coronary artery disease was also associated with higher systolic blood pressure (0.94 mm Hg higher for each copy of the allele among allele carriers, P=3.0×10−7) and higher diastolic blood pressure (0.57 mm Hg higher for each copy of the allele among allele carriers, P=4.4×10−7). We did not find an association between SVEP1 p.D2702G and any plasma lipid trait. In contrast, ANGPTL4 p.E40K was not associated with blood pressure but instead was found to be associated with significantly lower levels of triglycerides (approximately 0.3 standard deviation units lower for each copy of the allele among allele carriers, P=1.6×10−13) (Table 3) and with higher levels of high-density lipoprotein (HDL) cholesterol (approximately 0.3 standard deviation units higher for each copy of the allele among allele carriers, P=8.2×10−11) (Table 3). In a conditional analysis, these effects appeared to be at least partially independent of each other (Table S8 in the Supplementary Appendix). We did not observe any significant association between ANGPTL4 p.E40K and LDL cholesterol level (Table 3). Both SVEP1 p.D2702G and ANGPTL4 p.E40K were nominally associated with type 2 diabetes in a direction concordant with the associated risk of coronary artery disease.

ANGPTL4 Loss-of-Function Mutations, Plasma Lipid Levels, and Coronary Artery Disease

The finding that a missense allele in ANGPTL4 reduced the risk of coronary artery disease, potentially by reducing triglyceride levels, raised the possibility that complete loss-of-function variants in ANGPTL4 may have an even more dramatic effect on triglyceride concentrations and the risk of coronary artery disease. We therefore examined sequence data for the seven protein-coding exons of ANGPTL4 in 6924 patients with early-onset myocardial infarction and 6834 controls free from coronary artery disease (details of the patients and controls are provided in Table S3 in the Supplementary Appendix). We discovered a total of 10 variants that were predicted to lead to loss of gene function (Figure 1A: Loss-of-Function Alleles in ANGPTL4 and Plasma Lipid Levels; and Table S9 in the Supplementary Appendix), carried by 28 heterozygous persons; no homozygous or compound heterozygous persons were discovered. Carriers of loss-of-function alleles had significantly lower levels of triglycerides than did noncarriers (a mean of 35% lower among carriers, P=0.003) (Figure 1B, and Table S10 in the Supplementary Appendix), with no significant difference in LDL or HDL cholesterol levels. Moreover, we found a lower risk of coronary artery disease among carriers of loss-of-function alleles (9 carriers among 6924 patients vs. 19 carriers among 6834 controls; odds ratio for disease, 0.47; P=0.04) (Table S11 in the Supplementary Appendix). A similar investigation was performed for the 48 protein-coding exons of SVEP1; however, only 3 loss-of-function allele carriers were discovered (2 carriers among 6924 patients vs. 1 carrier among 6834 controls).
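As a quick sanity check on the carrier counts just quoted, the reported odds ratio of 0.47 can be reproduced directly from the 2×2 counts:

```python
# Reproducing the reported odds ratio for ANGPTL4 loss-of-function carriers:
# 9 carriers among 6924 early-onset MI patients vs. 19 among 6834 controls.
carriers_cases, n_cases = 9, 6924
carriers_ctrls, n_ctrls = 19, 6834

odds_cases = carriers_cases / (n_cases - carriers_cases)   # 9 / 6915
odds_ctrls = carriers_ctrls / (n_ctrls - carriers_ctrls)   # 19 / 6815
print(round(odds_cases / odds_ctrls, 2))                   # 0.47, matching the paper
```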

Coding Variation in LPL and the Risk of Coronary Artery Disease

On the basis of the fact that a loss of ANGPTL4 function was associated with reduced risk of coronary artery disease and that ANGPTL4 inhibits lipoprotein lipase (LPL), one would expect a gain of LPL function to also be associated with a lower risk of coronary artery disease, whereas a loss of LPL function would be expected to be associated with a higher risk. In observations consistent with these expectations, we found a low-frequency missense variant in LPL on the exome array that was associated with an increased risk of coronary artery disease (p.D36N; minor-allele frequency, 1.9%; odds ratio for disease, 1.13; P=2.0×10−4) (Table S12 in the Supplementary Appendix); previous studies have shown that this allele (also known as p.D9N) is associated with LPL activity that is 20% lower in allele carriers than in noncarriers.8 We also identified a nonsense mutation in LPL on the exome array that was significantly associated with a reduced risk of coronary artery disease (p.S447*; minor-allele frequency, 9.9%; odds ratio, 0.94; P=2.5×10−7) (Table S12 in the Supplementary Appendix). Contrary to most instances in which the premature introduction of a stop codon leads to loss of gene function, this nonsense mutation, which occurs in the penultimate codon of the gene, paradoxically induces a gain of LPL function.9 …..

Through large-scale exomewide screening, we identified a low-frequency coding variant in ANGPTL4 that was associated with protection against coronary artery disease and a low-frequency coding variant in SVEP1 that was associated with an increased risk of the disease. Moreover, our results highlight LPL as a significant contributor to the risk of coronary artery disease and support the hypothesis that a gain of LPL function or loss of ANGPTL4 inhibition protects against the disease.

ANGPTL4 has previously been found to be involved in cancer pathogenesis and wound healing.10 Previous functional studies also revealed that ANGPTL4 regulates plasma triglyceride concentration by inhibiting LPL.11 The minor allele at p.E40K has previously been associated with lower levels of triglycerides and higher levels of HDL cholesterol.12 We now provide independent confirmation of these lipid effects. In vitro and in vivo experimental evidence suggests that the lysine allele at p.E40K results in destabilization of ANGPTL4 after its secretion from the cell in which it was synthesized. It may be that the p.E40K variant leads to increases in the enzymatic activity of LPL because of this destabilization.13 Previous, smaller studies produced conflicting results regarding p.E40K and the risk of coronary artery disease14,15; we now provide robust support for an association between p.E40K and a reduced risk of coronary artery disease.

An important caveat to this research is that it is still very early. Most promising therapeutic targets do not work out. James Stein (University of Wisconsin) praised the papers but also offered a word of caution. “This is great science and important research that sheds light on the genetic regulation of TG-rich lipoproteins, serum TG levels, and CVD risk,” he said. “Since it is hard, if not impossible, to disconnect TG-rich lipoproteins from LDL, we should be humble in extrapolating these findings to clinical medicine in an era of low LDL due to statins and PCSK9 inhibitors. I hope this research identifies new targets for drug therapy and better understanding of CVD risk prediction– only time will tell.”

Previous studies with fibrates and other drugs have failed to convincingly show that lowering triglycerides is beneficial. Kathiresan said that what really seems to matter is “how you alter the plasma triglyceride-rich lipoproteins (TRLs).” Some genes that alter TRLs have other metabolic effects. As an example he cited a gene that lowers TRLs but increases the risk for type 2 diabetes. The NEJM papers, by observing the effect of specific mutations, therefore point the way to targets that may be clinically significant.

Conclusions: The work presented here puts a new light on the possible role of triglycerides in the development of congenitally predetermined cardiovascular disease. It does not necessarily establish a general link to the mechanism of cardiovascular disease, but it opens up new pathways to our understanding.

SOURCE

http://cardiobrief.org/2016/03/03/large-genetic-studies-support-role-for-triglycerides-in-cardiovascular-disease/#comments

John Contois commented:
“Are triglycerides a CHD risk factor? The answer is still maybe. Triglyceride-rich lipoproteins are inextricably linked to LDL metabolism and LDL particle number (and apo B). Still these are important new data and targets for novel therapeutics.”

 


 

Risk of Dis-lipids Syndrome in Modern Society

Aurelian Udristioiu¹, Manole Cojocaru²
¹Department of Biochemistry, Clinical Laboratory, Emergency County Hospital Targu Jiu & Titu Maiorescu University, Bucharest, Romania
²Department of Physiology, Faculty of Medicine, Titu Maiorescu University, Bucharest, Romania

Abstract
The aim of this work was to emphasize the preclinical evaluation of dis-lipid syndrome types in patients who presented for a routine health-status check in the hospital ambulatory clinic.
Material and Method:
Sixty patients registered in the Clinical Laboratory were analyzed on the Hitachi 912 Analyzer for the principal biochemical parameters of lipid metabolism: cholesterol, triglycerides, and the cholesterol fractions HDL and LDL. Of the 60 patients, 35 were female and 25 were male.
Results
Persons with an alarm signal for the atherosclerotic process accounted for 28%, and persons with low HDL for 17%. Cases with an elevated atherogenic index (LDL/HDL ratio >3.5 for men and >2.5 for women) accounted for 14%; cases with a predictive coronary-risk ratio (cholesterol/HDL >5) for 5%; and cases of dis-lipid syndrome types 2–4, with high cholesterol and triglycerides, for 30%.
Conclusions
Monitoring lipids and their fractions is necessary to prevent the atherosclerotic process at its incipient stage.
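For illustration only, the ratio cut-offs quoted in the Results can be expressed as a simple screening calculation. The sketch below is hypothetical (not part of the study); the "CO/HDL" ratio is read here as total cholesterol divided by HDL, and the 200 mg/dL and 150 mg/dL limits used for "high" cholesterol and triglycerides are common reference values, not thresholds given in the abstract.

```python
# Hypothetical sketch of the screening ratios quoted above; the 200 mg/dL and
# 150 mg/dL limits are assumed common reference values, not study thresholds.
def lipid_risk_flags(total_chol, triglycerides, hdl, ldl, sex):
    """Return simple screening flags from a lipid panel (values in mg/dL)."""
    ldl_hdl_cutoff = 3.5 if sex == "M" else 2.5                 # atherogenic index cut-off
    return {
        "atherogenic_index_high": ldl / hdl > ldl_hdl_cutoff,   # LDL/HDL ratio
        "coronary_risk_high": total_chol / hdl > 5.0,           # cholesterol/HDL ratio
        "mixed_dyslipidemia": total_chol > 200 and triglycerides > 150,
    }

# Example: lipid_risk_flags(240, 180, 38, 160, "M") flags all three categories.
```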

 

http://video.epccs.eu/video_1466.html

 

 

REFERENCES

http://www.nejm.org/doi/full/10.1056/NEJMe1601117

http://www.nejm.org/doi/full/10.1056/NEJMoa1507652

March 2, 2016 – Regeneron Genetics Center Publication in New England Journal of Medicine Links ANGPTL4 Inhibition and Risk of Coronary Artery Disease; Demonstrates Power of Large-Scale Precision Medicine Initiatives

http://files.shareholder.com/downloads/REGN/1634352863x0x879015/042D3D02-04CB-4DD1-89FE-53927F422025/REGN_News_2016_3_2_General_Releases.pdf

e-Books by Leaders in Pharmaceutical Business Intelligence (LPBI) Group

  • Perspectives on Nitric Oxide in Disease Mechanisms, on Amazon since 6/2/2013

http://www.amazon.com/dp/B00DINFFYC

http://www.amazon.com/dp/B012BB0ZF0

  • Genomics Orientations for Personalized Medicine, on Amazon since 11/23/2015

http://www.amazon.com/dp/B018DHBUO6

  • Milestones in Physiology: Discoveries in Medicine, Genomics and Therapeutics, on Amazon.com since 12/27/2015

http://www.amazon.com/dp/B019VH97LU

  • Cardiovascular, Volume Two: Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation, on Amazon since 11/30/2015

http://www.amazon.com/dp/B018Q5MCN8

  • Cardiovascular Diseases, Volume Three: Etiologies of Cardiovascular Diseases: Epigenetics, Genetics and Genomics, on Amazon since 11/29/2015

http://www.amazon.com/dp/B018PNHJ84

  • Cardiovascular Diseases, Volume Four: Regenerative and Translational Medicine: The Therapeutics Promise for Cardiovascular Diseases, on Amazon since 12/26/2015

http://www.amazon.com/dp/B019UM909A

Other related articles on this topic published in this Open Access Online Scientific Journal include the following:

Editorial & Publication of Articles in e-Books by  Leaders in Pharmaceutical Business Intelligence:  Contributions of Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2014/10/16/editorial-publication-of-articles-in-e-books-by-leaders-in-pharmaceutical-business-intelligence-contributions-of-larry-h-bernstein-md-fcap/

Editorial & Publication of Articles in e-Books by Leaders in Pharmaceutical Business Intelligence: Contributions of Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2014/10/16/editorial-publication-of-articles-in-e-books-by-leaders-in-pharmaceutical-business-intelligence-contributions-of-aviva-lev-ari-phd-rn/

TRIGLYCERIDES

https://pharmaceuticalintelligence.com/?s=Triglycerides

HDL

https://pharmaceuticalintelligence.com/?s=HDL

PCSK9

https://pharmaceuticalintelligence.com/?s=PCSK9

STATINS

https://pharmaceuticalintelligence.com/?s=Statins

STATIN

https://pharmaceuticalintelligence.com/?s=STATIN

Read Full Post »


Breast Cancer Extratumor Microenvironment has Effect on Progression

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Tumor Microenvironment Diversity Predicts Breast Cancer Outcomes

GEN News Highlights   Feb 17, 2016   http://www.genengnews.com/gen-news-highlights/tumor-microenvironment-diversity-predicts-breast-cancer-outcomes/81252378/

 

Intratumor heterogeneity, it is known, can complicate cancer treatments. Now it appears the same may be true of tumor microenvironment heterogeneity. According to a new study from the Institute of Cancer Research (ICR), London, breast cancers that develop within an “ecologically diverse” breast cancer microenvironment are particularly likely to progress and lead to death.

The study took an unusual approach: It combined ecological scoring methods with genome-wide profiling data. This approach, besides showing clinical utility in the evaluation of breast cancer outcomes, demonstrated that even so contextual a discipline as genomics can benefit from being placed within a larger context. In this case, the context is essentially Darwinian, albeit at a small scale.

Natural selection is typically studied at the level of ecosystems consisting of animals and plants. In the current study, however, it was assessed at the level of the tumor microenvironment, which consists of cancer cells, immune system lymphocytes, and stromal cells.

The ICR scientists, led by Yinyin Yuan, Ph.D., presented their work February 16 in the journal PLoS Medicine, in an article entitled “Microenvironmental Heterogeneity Parallels Breast Cancer Progression: A Histology–Genomic Integration Analysis.” The article describes how the scientists developed a tumor ecosystem diversity index (EDI), a scoring system that indicates the degree of microenvironmental heterogeneity along three spatial dimensions in solid tumors. EDI scores take account of “fully automated histology image analysis coupled with statistical measures commonly used in ecology.”

“[EDI] was compared with disease-specific survival, key mutations, genome-wide copy number, and expression profiling data in a retrospective study of 510 breast cancer patients as a test set and 516 breast cancer patients as an independent validation set,” wrote the authors. “In high-grade (grade 3) breast cancers, we uncovered a striking link between high microenvironmental heterogeneity measured by EDI and a poor prognosis that cannot be explained by tumor size, genomics, or any other data types.”

By using the EDI, the ICR team was able to identify several particularly aggressive subgroups of breast cancer. In fact, the EDI was a stronger predictor of survival than many established markers for the disease.

The ICR researchers also looked at the EDI in addition to genetic factors. For example, the researchers found that the prognostic value of EDI was enhanced with the addition of TP53 mutation status. By integrating EDI data and genome-wide profiling data, the researchers identified losses of specific genes on 4p14 and 5q13 that were enriched in grade 3 tumors. These tumors, which showed high microenvironmental diversity, substratified patients into poor prognostic groups.

“Our findings show that mathematical models of ecological diversity can spot more aggressive cancers,” said Dr. Yuan. “By analyzing images of the environment around a tumor based on Darwinian natural selection principles, we can predict survival in some breast cancer types even more effectively than many of the measures used now in the clinic.

“In the future, we hope that by combining cell diversity scores with other factors that influence cancer survival, such as genetics and tumor size, we will be able to tell apart patients with more or less aggressive disease so we can identify those who might need different types of treatment.”

“This ingenious study…teaches us a valuable lesson,” added Paul Workman, Ph.D., chief executive of the ICR. “[We] should always remember that cancer cells are not developing and growing in isolation, but are part of a complex ecosystem that involves normal human cells, too. By better understanding these ecosystems, we aim to create new ways to diagnose, monitor and treat cancer.”

 

Microenvironmental Heterogeneity Parallels Breast Cancer Progression: A Histology–Genomic Integration Analysis

 

Background

The intra-tumor diversity of cancer cells is under intense investigation; however, little is known about the heterogeneity of the tumor microenvironment that is key to cancer progression and evolution. We aimed to assess the degree of microenvironmental heterogeneity in breast cancer and correlate this with genomic and clinical parameters.

Methods and Findings

We developed a quantitative measure of microenvironmental heterogeneity along three spatial dimensions (3-D) in solid tumors, termed the tumor ecosystem diversity index (EDI), using fully automated histology image analysis coupled with statistical measures commonly used in ecology. This measure was compared with disease-specific survival, key mutations, genome-wide copy number, and expression profiling data in a retrospective study of 510 breast cancer patients as a test set and 516 breast cancer patients as an independent validation set. In high-grade (grade 3) breast cancers, we uncovered a striking link between high microenvironmental heterogeneity measured by EDI and a poor prognosis that cannot be explained by tumor size, genomics, or any other data types. However, this association was not observed in low-grade (grade 1 and 2) breast cancers. The prognostic value of EDI was superior to known prognostic factors and was enhanced with the addition of TP53 mutation status (multivariate analysis test set, p = 9 × 10−4, hazard ratio = 1.47, 95% CI 1.17–1.84; validation set, p = 0.0011, hazard ratio = 1.78, 95% CI 1.26–2.52). Integration with genome-wide profiling data identified losses of specific genes on 4p14 and 5q13 that were enriched in grade 3 tumors with high microenvironmental diversity that also substratified patients into poor prognostic groups. Limitations of this study include the number of cell types included in the model, that EDI has prognostic value only in grade 3 tumors, and that our spatial heterogeneity measure was dependent on spatial scale and tumor size.
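The multivariate results quoted above (hazard ratio 1.47 in the test set and 1.78 in the validation set for EDI combined with TP53 status) come from Cox proportional-hazards modelling of disease-specific survival. As a rough, hedged illustration of that kind of analysis (not the authors' actual METABRIC code), the sketch below uses the lifelines package on a small synthetic data frame; all column names and values are hypothetical.

```python
# Hedged sketch of a multivariate Cox model of the kind summarized above;
# the data frame is synthetic and the column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

# One row per grade 3 tumour: follow-up (months), disease-specific death (1/0),
# and the two binary covariates discussed in the abstract.
df = pd.DataFrame({
    "months":   [12, 60, 34, 90, 25, 110, 48, 70, 15, 95],
    "event":    [1,  0,  1,  1,  1,  0,   0,  1,  1,  0],
    "edi_high": [1,  0,  1,  0,  1,  0,   1,  0,  1,  0],
    "tp53_mut": [1,  1,  0,  0,  1,  0,   0,  1,  1,  0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
print(cph.summary[["exp(coef)", "p"]])  # hazard ratios and p-values per covariate
```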

Conclusions

To our knowledge, this is the first study to couple unbiased measures of microenvironmental heterogeneity with genomic alterations to predict breast cancer clinical outcome. We propose a clinically relevant role of microenvironmental heterogeneity for advanced breast tumors, and highlight that ecological statistics can be translated into medical advances for identifying a new type of biomarker and, furthermore, for understanding the synergistic interplay of microenvironmental heterogeneity with genomic alterations in cancer cells.

Background

The human body contains millions of cells, all of which grow, divide, and die in an orderly fashion to build tissues during early life and to replace worn-out or dying cells and repair injuries during adult life. Sometimes, however, normal cells acquire genetic changes (mutations) that allow them to divide uncontrollably and to move around the body (metastasize), resulting in cancer. Because any cell in the body can acquire the mutations needed for cancer development, there are many types of cancer. For example, breast cancer, the most common cancer in women, begins when the cells in the breast that normally make milk become altered. Moreover, different types of cancer progress and evolve differently—some cancers grow quickly and kill their “host” soon after diagnosis, whereas others can be successfully treated with drugs, surgery, or radiotherapy. The behavior of individual cancers depends both on the characteristics of the cancer cells within the tumor and on the interactions between the cancer cells and the normal stromal cells (the connective tissue cells of organs) and other cells (for example, immune cells) that surround and feed cancer cells (the tumor microenvironment).

Why Was This Study Done?

Although recent studies have highlighted the importance of the tumor microenvironment for disease-related outcomes, little is known about how the heterogeneity of the tumor microenvironment—the diversity of non-cancer cells within the tumor—affects outcomes. Mathematical modeling suggests that tumors with heterogeneous and homogeneous microenvironments have different growth patterns and that heterogeneous microenvironments are more likely to be associated with aggressive cancers than homogenous microenvironments. However, the lack of methods to quantify the spatial variability and cellular composition across solid tumors has prevented confirmation of these predictions. Here, the researchers develop a computational system for quantifying microenvironmental heterogeneity in breast cancer based on tumor morphology (shape and form) in histological sections (tissue samples taken from tumors that are examined microscopically). They then use this system to analyze the associations between clinical outcomes, molecular changes, and microenvironmental heterogeneity in breast cancer.

What Did the Researchers Do and Find?

The researchers used automated image analysis and statistical analysis to develop the ecosystem diversity index (EDI), a numerical measure of microenvironmental heterogeneity in solid tumors. They compared the EDI with prognosis (likely outcome), key mutations, genome-wide copy number (tumor cells often contain abnormal numbers of copies of specific genes), and expression profiling data (the expression of several key proteins is altered in tumors) in a test set of 510 samples from patients with breast cancer and in a validation set of 516 additional samples. Among high-grade breast cancers (grade 3 cancers; the grade of a cancer indicates what the cells look like; high-grade breast cancers have a poor prognosis), but not among low-grade breast cancers (grades 1 and 2), a high EDI (high microenvironmental heterogeneity) was associated with a poor prognosis. Specifically, patients with grade 3 tumors and a high EDI had a ten-year disease-specific survival rate of 51%, whereas the remaining patients with grade 3 tumors had a ten-year survival rate of 70%. Notably, the combination of a high EDI with specific DNA alterations—mutations in a gene called TP53 and loss of genes on Chromosomes 4p14 and 5q13—improved the accuracy of prognosis among patients with grade 3 breast cancer and stratified them into subgroups with disease-specific five-year survival rates of 35%, 9%, and 32%, respectively.

What Do These Findings Mean?

These findings establish a method for measuring the spatial heterogeneity of the microenvironment of solid tumors and suggest that the measurement of tumor microenvironmental heterogeneity can be coupled with information about genomic alterations to provide an accurate way to predict outcomes among patients with high-grade breast cancer. The association between EDI, specific genomic alterations, and outcomes needs to be confirmed in additional patients. However, these findings suggest that microenvironmental heterogeneity might provide an additional biomarker to help clinicians identify those patients with advanced breast cancer who have a particularly bad prognosis. The ability to identify these patients is important because it will help clinicians target aggressive treatments to individuals with a poor prognosis and avoid the overtreatment of patients whose prognosis is more favorable. Finally, and more generally, these findings describe a new way to investigate the interactions between the tumor microenvironment and genomic alterations in cancer cells.

Additional Information

This list of resources contains links that can be accessed when viewing the PDF on a device or via the online version of the article at http://dx.doi.org/10.1371/journal.pmed.1001961.

Citation: Natrajan R, Sailem H, Mardakheh FK, Arias Garcia M, Tape CJ, Dowsett M, et al. (2016) Microenvironmental Heterogeneity Parallels Breast Cancer Progression: A Histology–Genomic Integration Analysis.
PLoS Med 13(2): e1001961.     http://dx.doi.org/10.1371/journal.pmed.1001961
Fig 1. In silico tumor dissection pipeline for quantifying spatial diversity in the tumor ecosystem. (A) Flow diagram depicting the overall study design. (B) Schematic of our pipeline for quantifying spatial diversity in pathological samples. H&E sections are morphologically classified and divided into regions to be spatially scored. The number of clusters k in the regional scores is indicative of the number of sub-populations of cell types in the tumor regions. (C) Examples of tumor regions with low and high diversity scores using the Shannon diversity index, accounting for cancer cells (outlined in green), lymphocytes (blue), and stromal cells (red). Cell classification is automated by image analysis. (D) The 3-D landscape of cell diversity scores on an example H&E section; the x- and y-axes are the geometric axes of the image, and the z-axis is cell diversity computed on a region-by-region basis. (E) The distribution of regional scores in a tumor from the METABRIC study with two regional clusters identified using Gaussian mixture clustering (grey shading: histogram; dashed black line: density; solid black lines: mixture components/clusters).
Fig 2. Application of EDI to 1,026 breast tumors from the METABRIC study. (A) The frequencies of EDI scores in breast tumors. (B) H&E staining, distribution of classified cells (green: cancer; blue: lymphocyte; red: stromal cells), and the heatmap of regional diversity scores for a tumor with the highest EDI score (EDI = 5). (C) Representative regions from each of the clusters k1–k5 are shown in a tumor with EDI = 5, with cluster k1 having the lowest diversity score and k5 the highest. By mapping regional clusters to the H&E image, we can begin to interpret these clusters with different cell diversity. We observed predominantly cancer cells in k1, increasingly more stromal cells and ductal in situ carcinoma cells (DCIS) in k2, and a vessel in k3. Cluster k4 features extensive stromal lymphocytes between ductal in situ carcinoma components, while k5 shows tumor-infiltrating lymphocytes (TIL) associated with invasive carcinoma cells.
Fig 3. Reproducibility, stability, and independence of the EDI-high group in 507 grade 3 breast tumors. (A) Kaplan–Meier curves of disease-specific survival to illustrate the prognosis of EDI-high samples compared to other grade 3 samples in two independent patient cohorts. Shown below the graph are the number of patients (the number of disease-specific events) per group for EDI-low (grey) and EDI-high (red). (B) Agreement of the EDI subtyping between 100% data and resampling with progressively fewer tumor regions in 200 repeats. (C) Distribution of known subtypes in grade 3 tumors stratified by EDI; asterisks mark subtypes enriched in the EDI-high group. (D) Kaplan–Meier curves illustrating the duration of disease-specific survival according to tumor size (left) and improvement of stratification with the addition of EDI information (right).
Accumulating evidence suggests that the interactions of cancer cells and stromal cells within their microenvironment govern disease progression, metastasis, and, ultimately, the evolution of therapeutic resistance [1–3]. Recent reports have highlighted the significance of the contribution of stromal gene expression and morphological structure as powerful prognostic determinants for a number of tumor types, emphasizing the importance of the tumor microenvironment in disease-related outcomes [4–7]. In breast cancer, a number of studies have demonstrated the prognostic correlation of individual cell types, including the immune cell infiltrate that predicts response to therapy [8–10], and the high percentage of tumor stroma that predicts poor prognosis in triple-negative disease but good prognosis in estrogen receptor (ER)–positive disease [11,12]. Nevertheless, different types of cells coexist with varying degrees of heterogeneity within a tumor. This fundamental feature of human tumors and the combinatorial effects of cell types have been largely ignored, and the collective implications for clinical outcome remain elusive. Consistent observations from mathematical models have highlighted that tumors with diverse microenvironments show growth patterns dramatically different from those of tumors with homogeneous environments [13] and are more likely to be associated with aggressive cancer phenotypes [2] that select for cell migration and eventual metastasis by allowing cancer cells to evolve more rapidly [14]. These observations highlight the need to understand the collective physiological characteristics and heterogeneity of tumor microenvironments. However, there is a lack of methods to quantify the high spatial variability and diverse cellular composition across different solid tumors. Moreover, the interplay of genomic alterations in cancer cells and microenvironmental heterogeneity and its subsequent role in treatment response have not been explored.
Our aims were (i) to develop a computational system for quantifying microenvironmental heterogeneity based on tumor morphology in routine histological sections, (ii) to define the clinical implications of microenvironmental heterogeneity, and (iii) to integrate this histology-based index with RNA gene expression and DNA copy number profiling data to identify molecular changes associated with microenvironmental heterogeneity.
The Ecosystem Diversity Index

To characterize the tumor ecosystem based on cell compositions, we developed a new index to be used in conjunction with our image analysis tool [16]. First, we used our automated morphological classification method [16] to identify and classify cells into cancer, lymphocyte, or stromal cell classes in H&E sections (Fig 1B). We next divided sections into smaller spatial regions and quantified the diversity of the tumor ecosystem in a tumor region j using the Shannon diversity index:

$$ d_j = -\sum_{i=1}^{m} p_i \log p_i \qquad (1) $$

where m is the number of cell types and p_i is the proportion of the ith cell type (Fig 1B and 1C). A high value of the Shannon diversity index d_j reports a heterogeneous environment populated by many cell types, whilst a low value indicates a homogeneous environment (Fig 1C). Compared to other methods such as the Simpson index, the Shannon diversity index accounts for rare species and, hence, is less dominated by main species [17]. Subsequently, we derived the ecosystem diversity index (EDI) by applying unsupervised clustering that identifies the optimum number of clusters in the dataset in an unbiased manner, in order to group tumor regions and quantify the degree of spatial heterogeneity. Let D = d_1, d_2, …, d_n be the Shannon indices for the n regions in a tumor. We used Gaussian mixture models to fit the data D:

$$ D \sim \sum_{k=1}^{K} \omega_k \, N(\mu_k, \sigma_k^2) \qquad (2) $$

where \mu_k, \sigma_k^2, and \omega_k are the mean, variance, and weight of Gaussian distribution k, and K is the number of clusters. The Bayesian information criterion was then used to select the best number of clusters K [18]. We used K = 1–5 as the range of K to avoid small EDI groups (S1 Text). The final value of K is thus a measurement of heterogeneity and the EDI score for a tumor.
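Equations (1) and (2) translate almost directly into code. The following is a minimal sketch, assuming each tumor region has already been reduced to counts of the three classified cell types (cancer, lymphocyte, stromal); the function names are hypothetical, and scikit-learn's GaussianMixture with BIC selection stands in for the authors' own implementation.

```python
# Minimal sketch of the EDI computation described above; region_counts and the
# function names are hypothetical, and sklearn's GaussianMixture is used as a
# stand-in for the authors' model-selection procedure.
import numpy as np
from sklearn.mixture import GaussianMixture

def shannon_diversity(counts):
    """Shannon diversity d_j of one region from per-cell-type counts (eq. 1)."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()                 # proportions of the observed cell types
    return float(-np.sum(p * np.log(p)))

def ecosystem_diversity_index(region_counts, k_max=5, seed=0):
    """EDI = number of Gaussian mixture clusters (chosen by BIC over K = 1..k_max)
    fitted to the regional Shannon scores of a tumor; needs at least k_max regions."""
    d = np.array([shannon_diversity(c) for c in region_counts]).reshape(-1, 1)
    bics = []
    for k in range(1, k_max + 1):
        gm = GaussianMixture(n_components=k, random_state=seed).fit(d)
        bics.append(gm.bic(d))             # eq. 2 fitted for each candidate K
    return int(np.argmin(bics) + 1)        # best K is the EDI score

# e.g. ecosystem_diversity_index([(90, 5, 5), (40, 30, 30), (10, 80, 10), ...])
```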
Fig 5. The relationship between ecological heterogeneity and cancer genomic aberrations in 507 grade 3 tumors. (A) Genome-wide copy number aberrations in grade 3 breast tumors and genomic coordinates of genes with copy number aberrations enriched in the EDI-high group. Lengths of black lines denote level of enrichment significance with copy number gains (above the horizontal line) or losses (below the horizontal line). (B) Kaplan–Meier curves illustrating the duration of disease-specific survival in grade 3 breast cancer patients according to copy number loss of the 4p14 region (left) and the EDI-high group with additional information of 4p14 copy number loss (right). (C) Kaplan–Meier curves illustrating the duration of disease-specific survival according to copy number loss of the 5q13 region (left) and the EDI-high group with additional information of 5q13 copy number loss (right).
This study has a number of limitations. The motivation for our computational development was to use a data-driven model and measure the degree of spatial heterogeneity in tumor pathological specimens. In this model, only three major cell types in breast tumors were considered. Further sub-classification of the different types of stromal and immune cells by immunohistochemistry may add additional discriminatory value to our model. For dissecting spatial heterogeneity, we chose to use square regions with equal sizes. We found that EDI was correlated with the size of the region chosen for calculation of the Shannon diversity index, and as such the spatial heterogeneity is scale dependent. This phenomenon has been well described in a number of studies in ecology that show that a scale needs to be chosen that is appropriate for the ecological process under study [38,39], further highlighting the analogy between tumor studies and ecology. Similar to the recent observation that breast cancer subclonal heterogeneity is correlated with tumor size [35], we also found an association between microenvironmental heterogeneity and tumor size; hence, EDI may have more limited value in smaller tumors. However, small tumors were present in the EDI-high group, and addition of EDI within tumors grouped by size further stratified their prognosis. We found that EDI was prognostic only in grade 3 tumors in our study, which could be a limitation, given the possible discordance in grading between pathologists.
The identification of additional biomarkers in subgroups of patients that identify them as high risk is important for patient management and to avoid overtreatment for low-risk patients. We envision that the use of our measure of microenvironment heterogeneity, together with key genomic alterations, will enable the diagnosis of patients at very high risk of relapse and facilitate the enrollment of these patients into additional clinical trials for novel therapies or treatment intensification. Our novel computational approach provides a fully automated tool that is relatively easy to implement. Integration of this measure with genomic profiling provides additional prognostic information independent of known clinical parameters. The results of this study highlight the possibility of a grade-3-specific prognostic tool that may aid in further classification of high-grade breast cancer patients beyond standard assays such as ER and HER2 status.

 

Read Full Post »


Blocking miRNAs in Triple Negative Breast Cancer

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Why Blocking microRNAs in Triple Negative Breast Cancer Is so Difficult

http://www.genengnews.com/gen-news-highlights/why-blocking-micrornas-in-triple-negative-breast-cancer-is-so-difficult/81252048/

New research uncovers why attempts at blocking microRNAs for triple negative breast cancer often fail. [National Breast Cancer Foundation]

http://www.genengnews.com/Media/images/GENHighlight/thumb_Dec2_2015_NationalBreastCancerFoundation_MicroRNAs4198253108.jpg

 

While a triple negative score is hardly ever a good thing, for breast cancer it is especially troubling. Triple-negative breast cancer (TNBC) refers to a disease scenario where the cancer cells do not express the genes for estrogen, progesterone, or HER2/neu receptors simultaneously—making this cancer particularly aggressive and difficult to treat as most chemotherapies target one of these receptors.

Over the past several years, researchers have discovered that various microRNAs (miRNAs) underlie the expression of certain genes that can enable cancer cells to proliferate faster. However, the ability to block these miRNAs in TNBC has been met with failure. Yet now, scientists from Thomas Jefferson University in Philadelphia believe they have discovered the reason conventional methods to block these miRNAs has been thus far unsuccessful.

“Triple-negative breast cancer is one of the most aggressive forms of breast cancer, and there’s been a lot of excitement in blocking the microRNAs that appear to make this type of cancer grow faster and resist conventional treatment,” explained senior author Eric Wickstrom, Ph.D., professor in the department of biochemistry and molecular biology at TJU. “However blocking microRNAs hasn’t met with great success and this paper offers one explanation for why that might be the case.”

The findings from this study were published online today in PLOS ONE through an article entitled “Non-specific blocking of miR-17-5p guide strand in triple negative breast cancer cells by amplifying passenger strand activity.” Insight from this study may enable new and more effective design of blockers against previously intractable miRNAs.

“Triple negative breast cancer strikes younger women, tragically killing them in as little as two years,” noted lead author Yuan-Yuan Jin, a doctoral candidate in the department of biochemistry and molecular biology at TJU. “Only chemotherapy and radiation are approved therapies for triple negative breast cancer. We want to treat a genetic target that will keep patients alive with a good quality of life.”

The investigators targeted the miRNA molecule miR-17, which has been shown previously to cause a surge in TNBC growth by altering genes that would normally signal a diseased or early cancerous cell to die—specifically, the tumor suppressor genes PDCD4 and PTEN.

However, when the TJU researchers tried to reduce the levels of miR-17 in TNBC cells, rather than an increase in the levels of the tumor suppressor genes, as they had anticipated, they saw an even larger decrease in these genes than in unmodified controls.

The TJU team was acting under the current assumption that miRNAs, which are double stranded, silence genes using only one of their two strands, the one complementary to parts of the messenger RNA coding sequence. The matching, or so-called passenger strand, was thought to be discarded and degraded by the cell.

Using a method to silence RNAs, which involves flooding the cell with modified RNA sequences that mimic the passenger strand and bind to the single-stranded microRNA before it reaches its target, the TJU team saw more silencing of the PDCD4 and PTEN genes.  After some bioinformatic and folding energy calculations, the authors realized that both strands of miR-17 were active in downregulating the tumor suppressor genes.

“Rather than blocking miR-17, we were inadvertently boosting its levels, and, therefore, boosting the cell’s cancerous potential,” noted Jin.

The results of the current study should help to open a pathway to designing specific blockers of one microRNA strand without imitating the opposite strand. Dr. Wickstrom added that “we are now testing new miR-17 blocker designs made possible by these results.”

Read Full Post »


Clinical Laboratory Challenges

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

CLINICAL LABORATORY NEWS   

The Lab and CJD: Safe Handling of Infectious Prion Proteins

Body fluids from individuals with possible Creutzfeldt-Jakob disease (CJD) present distinctive safety challenges for clinical laboratories. Sporadic, iatrogenic, and familial CJD (known collectively as classic CJD), along with variant CJD, kuru, Gerstmann-Sträussler-Scheinker, and fatal familial insomnia, are prion diseases, also known as transmissible spongiform encephalopathies. Prion diseases affect the central nervous system, and from the onset of symptoms follow a typically rapid progressive neurological decline. While prion diseases are rare, it is not uncommon for the most prevalent form—sporadic CJD—to be included in the differential diagnosis of individuals presenting with rapid cognitive decline. Thus, laboratories may deal with a significant number of possible CJD cases, and should have protocols in place to process specimens, even if a confirmatory diagnosis of CJD is made in only a fraction of these cases.

The Lab’s Role in Diagnosis

Laboratory protocols for handling specimens from individuals with possible, probable, and definitive cases of CJD are important to ensure timely and appropriate patient management. When the differential includes CJD, an attempt should be made to rule in or rule out other causes of rapid neurological decline. Laboratories should be prepared to process blood and cerebrospinal fluid (CSF) specimens in such cases for routine analyses.

Definitive diagnosis requires identification of prion aggregates in brain tissue, which can be achieved by immunohistochemistry, a Western blot for proteinase K-resistant prions, and/or by the presence of prion fibrils. Thus, confirmatory diagnosis is typically achieved at autopsy. A probable diagnosis of CJD is supported by elevated concentration of 14-3-3 protein in CSF (a non-specific marker of neurodegeneration), EEG, and MRI findings. Thus, the laboratory may be required to process and send CSF samples to a prion surveillance center for 14-3-3 testing, as well as blood samples for sequencing of the PRNP gene (in inherited cases).

Processing Biofluids

Laboratories should follow standard protective measures when working with biofluids potentially containing abnormally folded prions, such as donning standard personal protective equipment (PPE); avoiding or minimizing the use of sharps; using single-use disposable items; and processing specimens to minimize formation of aerosols and droplets. An additional safety consideration is the use of single-use disposal PPE; otherwise, re-usable items must be either cleaned using prion-specific decontamination methods, or destroyed.

Blood. In experimental models, infectivity has been detected in the blood; however, there have been no cases of secondary transmission of classical CJD via blood product transfusions in humans. As such, blood has been classified, on epidemiological evidence by the World Health Organization (WHO), as containing “no detectible infectivity,” which means it can be processed by routine methods. Similarly, except for CSF, all other body fluids contain no infectivity and can be processed following standard procedures.

In contrast to classic CJD, there have been four cases of suspected secondary transmission of variant CJD via transfused blood products in the United Kingdom. Variant CJD, the prion disease associated with mad cow disease, is unique in its distribution of prion aggregates outside of the central nervous system, including the lymph nodes, spleen, and tonsils. For regions where variant CJD is a concern, laboratories should consult their regulatory agencies for further guidance.

CSF. Relative to highly infectious tissues of the brain, spinal cord, and eye, infectivity has been identified less often in CSF and is considered to have “low infectivity,” along with kidney, liver, and lung tissue. Since CSF can contain infectious material, WHO has recommended that analyses not be performed on automated equipment due to challenges associated with decontamination. Laboratories should perform a risk assessment of their CSF processes, and, if deemed necessary, consider using manual methods as an alternative to automated systems.

Decontamination

The infectious agent in prion disease is unlike any other infectious pathogen encountered in the laboratory; it is formed of misfolded and aggregated prion proteins. This aggregated proteinaceous material forms the infectious unit, which is incredibly resilient to degradation. Moreover, in vitro studies have demonstrated that disrupting large aggregates into smaller aggregates increases cytotoxicity. Thus, if the aim is to abolish infectivity, all aggregates must be destroyed. Disinfection procedures used against viral, bacterial, and fungal pathogens (alcohol, boiling, formalin, dry heat below 300°C, autoclaving at 121°C for 15 minutes, and ionizing, ultraviolet, or microwave radiation) are either ineffective or only variably effective against aggregated prions.

The only means to ensure no risk of residual infectious prions is to use disposable materials. This is not always practical, as, for instance, a biosafety cabinet cannot be discarded if there is a CSF spill in the hood. Fortunately, there are several protocols considered sufficient for decontamination. For surfaces and heat-sensitive instruments, such as a biosafety cabinet, WHO recommends flooding the surface with 2N NaOH or undiluted NaClO, letting stand for 1 hour, mopping up, and rinsing with water. If the surface cannot tolerate NaOH or NaClO, thorough cleaning will remove most infectivity by dilution. Laboratories may derive some additional benefit by using one of the partially effective methods discussed previously. Non-disposable heat-resistant items preferably should be immersed in 1N NaOH, heated in a gravity displacement autoclave at 121°C for 30 min, cleaned and rinsed in water, then sterilized by routine methods. WHO has outlined several alternate decontamination methods. Using disposable cover sheets is one simple solution to avoid contaminating work surfaces and associated lengthy decontamination procedures.

With standard PPE—augmented by a few additional safety measures and prion-specific decontamination procedures—laboratories can safely manage biofluid testing in cases of prion disease.

 

The Microscopic World Inside Us  

Emerging Research Points to Microbiome’s Role in Health and Disease

Thousands of species of microbes—bacteria, viruses, fungi, and protozoa—inhabit every internal and external surface of the human body. Collectively, these microbes, known as the microbiome, outnumber the body’s human cells by about 10 to 1 and include more than 1,000 species of microorganisms and several million genes residing in the skin and in the respiratory, urogenital, and gastrointestinal tracts. The microbiome’s complicated relationship with its human host is increasingly considered so crucial to health that researchers sometimes call it “the forgotten organ.”

Disturbances to the microbiome can arise from nutritional deficiencies, antibiotic use, and antiseptic modern life. Imbalances in the microbiome’s diverse microbial communities, which interact constantly with cells in the human body, may contribute to chronic health conditions, including diabetes, asthma and allergies, obesity and the metabolic syndrome, digestive disorders including irritable bowel syndrome (IBS), and autoimmune disorders like multiple sclerosis and rheumatoid arthritis, research shows.

While study of the microbiome is a growing research enterprise that has attracted enthusiastic media attention and venture capital, its findings are largely preliminary. But some laboratorians are already developing a greater appreciation for the microbiome’s contributions to human biochemistry and are considering a future in which they expect to measure changes in the microbiome to monitor disease and inform clinical practice.

Pivot Toward the Microbiome

Following the National Institutes of Health (NIH) Human Genome Project, many scientists noted the considerable genetic signal from microbes in the body and the existence of technology to analyze these microorganisms. That realization led NIH to establish the Human Microbiome Project in 2007, said Lita Proctor, PhD, its program director. In the project’s first phase, researchers studied healthy adults to produce a reference set of microbiomes and a resource of metagenomic sequences of bacteria in the airways, skin, oral cavities, and the gastrointestinal and vaginal tracts, plus a catalog of microbial genome sequences of reference strains. Researchers also evaluated specific diseases associated with disturbances in the microbiome, including gastrointestinal diseases such as Crohn’s disease, ulcerative colitis, IBS, and obesity, as well as urogenital conditions, those that involve the reproductive system, and skin diseases like eczema, psoriasis, and acne.

Phase 1 studies determined the composition of many parts of the microbiome, but did not define how that composition affects health or specific disease. The project’s second phase aims to “answer the question of what microbes actually do,” explained Proctor. Researchers are now examining properties of the microbiome including gene expression, protein, and human and microbial metabolite profiles in studies of pregnant women at risk for preterm birth, the gut hormones of patients at risk for IBS, and nasal microbiomes of patients at risk for type 2 diabetes.

Promising Lines of Research

Cystic fibrosis and microbiology investigator Michael Surette, PhD, sees promising microbiome research not just in terms of evidence of its effects on specific diseases, but also in what drives changes in the microbiome. Surette is Canada Research Chair in Interdisciplinary Microbiome Research in the Farncombe Family Digestive Health Research Institute at McMaster University in Hamilton, Ontario.

One type of study on factors driving microbiome change examines how alterations in composition and imbalances in individual patients relate to improving or worsening disease. “IBS, cystic fibrosis, and chronic obstructive pulmonary disease all have periods of instability or exacerbation,” he noted. Surette hopes that one day, tests will provide clinicians the ability to monitor changes in microbial composition over time and even predict when a patient’s condition is about to deteriorate. Monitoring perturbations to the gut microbiome might also help minimize collateral damage to the microbiome during aggressive antibiotic therapy for hospitalized patients, he added.
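
Clinically, such monitoring would come down to comparing a patient’s microbial community profile across serial samples. The sketch below shows one common way to quantify a composition shift, Bray-Curtis dissimilarity computed on relative-abundance profiles; the taxa, abundance values, and the idea of flagging large shifts are purely illustrative assumptions, not the specific method used by Surette’s group.

```python
# Sketch: quantifying the shift in a patient's gut microbiome between two time points
# using Bray-Curtis dissimilarity on relative-abundance profiles.
# Taxon names and abundance values are illustrative only.

def bray_curtis(sample_a: dict, sample_b: dict) -> float:
    """Bray-Curtis dissimilarity: 0 = identical composition, 1 = no shared taxa."""
    taxa = set(sample_a) | set(sample_b)
    shared = sum(min(sample_a.get(t, 0.0), sample_b.get(t, 0.0)) for t in taxa)
    total = sum(sample_a.values()) + sum(sample_b.values())
    return 1.0 - (2.0 * shared / total)

baseline = {"Bacteroides": 0.42, "Faecalibacterium": 0.18, "Escherichia": 0.02, "Roseburia": 0.10}
flare_up = {"Bacteroides": 0.20, "Faecalibacterium": 0.05, "Escherichia": 0.35, "Roseburia": 0.02}

shift = bray_curtis(baseline, flare_up)
print(f"Composition shift (Bray-Curtis): {shift:.2f}")  # larger values flag bigger perturbations
```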

Monitoring changes to the microbiome also might be helpful for “culture negative” patients, who now may receive multiple, unsuccessful courses of different antibiotics that drive antibiotic resistance. Frustration with standard clinical microbiology diagnosis of lung infections in cystic fibrosis patients first sparked Surette’s investigations into the microbiome. He hopes that future tests involving the microbiome might also help asthma patients with neutrophilia, community-acquired pneumonia patients who harbor complex microbial lung communities lacking obvious pathogens, and hospitalized patients with pneumonia or sepsis. He envisions microbiome testing that would look for short-term changes indicating whether or not a drug is effective.

Companion Diagnostics

Daniel Peterson, MD, PhD, an assistant professor of pathology at Johns Hopkins University School of Medicine in Baltimore, believes the future of clinical testing involving the microbiome lies in companion diagnostics for novel treatments, and points to companies that are already developing and marketing tests that will require such assays.

Examples of microbiome-focused enterprises abound, including Genetic Analysis, based in Oslo, Norway, with its high-throughput test that uses 54 probes targeted to specific bacteria to measure intestinal gut flora imbalances in inflammatory bowel disease and irritable bowel syndrome patients. Paris, France-based Enterome is developing both novel drugs and companion diagnostics for microbiome-related diseases such as IBS and some metabolic diseases. Second Genome, based in South San Francisco, has developed an experimental drug, SGM-1019, that the company says blocks damaging activity of the microbiome in the intestine. Cambridge, Massachusetts-based Seres Therapeutics has received Food and Drug Administration orphan drug designation for SER-109, an oral therapeutic intended to correct microbial imbalances to prevent recurrent Clostridium difficile infection in adults.

One promising clinical use of the microbiome is fecal transplantation, which both prospective and retrospective studies have shown to be effective in patients with C. difficile infections who do not respond to front-line therapies, said James Versalovic, MD, PhD, director of Texas Children’s Hospital Microbiome Center and professor of pathology at Baylor College of Medicine in Houston. “Fecal transplants and other microbiome replacement strategies can radically change the composition of the microbiome in hours to days,” he explained.

But NIH’s Proctor discourages too much enthusiasm about fecal transplant. “Natural products like stool can have [side] effects,” she pointed out. “The [microbiome research] field needs to mature and we need to verify outcomes before anything becomes routine.”

Hurdles for Lab Testing

While he is hopeful that labs someday will use the microbiome to produce clinically useful information, Surette pointed to several problems that must be solved beforehand. First, the molecular methods commonly used today need to become more quantitative and accurate. Additionally, research on the microbiome encompasses a wide variety of protocols, some of which are better at extracting particular types of bacteria and therefore can give biased views of the communities living in the body. Also, tests may need to distinguish between dead and live microbes. Another hurdle is that labs using varied bioinformatic methods may produce different results from the same sample, a problem that Surette sees as ripe for a solution from clinical laboratorians, who have expertise in standardizing robust protocols and in automating tests.

One way laboratorians can prepare for future, routine microbiome testing is to expand their notion of clinical chemistry to include both microbial and human biochemistry. “The line between microbiome science and clinical science is blurring,” said Versalovic. “When developing future assays to detect biochemical changes in disease states, we must consider the contributions of microbial metabolites and proteins and how to tailor tests to detect them.” In the future, clinical labs may test for uniquely microbial metabolites in various disease states, he predicted.

 

Automated Review of Mass Spectrometry Results  

Can We Achieve Autoverification?

Author: Katherine Alexander and Andrea R. Terrell, PhD  // Date: NOV.1.2015  // Source:Clinical Laboratory News

https://www.aacc.org/publications/cln/articles/2015/november/automated-review-of-mass-spectrometry-results-can-we-achieve-autoverification

 

Paralleling the upswing in prescription drug misuse, clinical laboratories are receiving more requests for mass spectrometry (MS) testing as physicians rely on its specificity to monitor patient compliance with prescription regimens. However, as volume has increased, reimbursement has declined, forcing toxicology laboratories both to increase capacity and to lower their operational costs—without sacrificing quality or turnaround time. New solutions are now available that bring automation to MS testing and help laboratories meet the growing demand for toxicology and other testing.

What is the typical MS workflow?

A typical workflow includes a long list of manual steps. By the time a sample is loaded onto the mass spectrometer, it has been collected, logged into the lab information management system (LIMS), and prepared for analysis using a variety of wet chemistry techniques.

Most commercial clinical laboratories receive enough samples for MS analysis to batch analyze those samples. A batch consists of calibrator(s), quality control (QC) samples, and patient/donor samples. Historically, the method would be selected (e.g., “analysis of opiates”), sample identification information would be entered manually into the MS software, and the instrument would begin analyzing each sample. Upon successful completion of the batch, the MS operator would view all of the analytical data, ensure the QC results were acceptable, and review each patient/donor specimen, looking at characteristics such as peak shape, ion ratios, retention time, and calculated concentration.

The operator would then post acceptable results into the LIMS manually or through an interface, and unacceptable results would be rescheduled or dealt with according to lab-specific protocols. In our laboratory we perform a final certification step for quality assurance by reviewing all information about the batch again, prior to releasing results for final reporting through the LIMS.
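
The manual review criteria described above (peak shape, ion ratios, retention time, calculated concentration) map naturally onto machine-checkable rules. The sketch below illustrates the kind of rule-based gate an autoverification layer might apply before a result posts to the LIMS; the analyte, tolerances, and reportable range are hypothetical assumptions, not vendor or assay specifications.

```python
# Sketch: rule-based checks an autoverification layer might apply to each quantified
# peak before results post to the LIMS. Thresholds are illustrative, not assay-specific.

from dataclasses import dataclass

@dataclass
class PeakResult:
    analyte: str
    retention_time: float      # minutes
    expected_rt: float         # minutes, from calibrator
    ion_ratio: float           # qualifier/quantifier area ratio
    expected_ion_ratio: float  # from calibrator
    concentration: float       # ng/mL

def review_peak(p: PeakResult,
                rt_tol_min: float = 0.1,
                ion_ratio_tol: float = 0.20,
                reportable_range=(1.0, 2000.0)) -> list:
    """Return a list of flags; an empty list means the result can auto-post."""
    flags = []
    if abs(p.retention_time - p.expected_rt) > rt_tol_min:
        flags.append("retention time outside window")
    if abs(p.ion_ratio - p.expected_ion_ratio) > ion_ratio_tol * p.expected_ion_ratio:
        flags.append("ion ratio outside +/-20% of expected")
    if not (reportable_range[0] <= p.concentration <= reportable_range[1]):
        flags.append("concentration outside reportable range")
    return flags

peak = PeakResult("morphine", 2.34, 2.31, 0.48, 0.45, 152.0)
issues = review_peak(peak)
print("auto-post" if not issues else f"hold for manual review: {issues}")
```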

What problems are associated with this workflow?

The workflow described above results in too many highly trained chemists performing manual data entry and reviewing perfectly acceptable analytical results. Lab managers would prefer that MS operators and certifying scientists focus on troubleshooting problem samples rather than reviewing mounds of good data. Not only is the current process inefficient, it is mundane work prone to user errors. This risks fatigue, disengagement, and complacency by our highly skilled scientists.

Importantly, manual processes also take time. In most clinical lab environments, turnaround time is critical for patient care and industry competitiveness. Lab directors and managers are looking for solutions to automate mundane, error-prone tasks to save time and costs, reduce staff burnout, and maintain high levels of quality.

How can software automate data transfer from MS systems to LIMS?

Automation is not a new concept in the clinical lab. Labs have automated processes in shipping and receiving, sample preparation, liquid handling, and data delivery to the end user. As more labs implement MS, companies have begun to develop special software to automate data analysis and review workflows.

In July 2011, AIT Labs incorporated ASCENT into our workflow, eliminating the initial manual peak review step. ASCENT is an algorithm-based peak-picking and data review system designed specifically for chromatographic data. The software applies robust statistical and modeling approaches to the raw instrument data to recover the true signal, which can often be obscured by noise or matrix components.

The system also uses an exponentially modified Gaussian (EMG) equation to apply a best-fit model to integrated peaks through what is often a noisy signal. In our experience, applying the EMG yields cleaner data from what might otherwise appear to be poor chromatography, which ultimately allows us to reduce the number of samples we would otherwise rerun.
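
For readers unfamiliar with the EMG model, the sketch below fits an exponentially modified Gaussian to a synthetic noisy chromatographic peak using SciPy. It illustrates the general approach of modeling a Gaussian peak with an exponential tail; it is not ASCENT’s proprietary algorithm, and all parameter values are made up for demonstration.

```python
# Sketch: fitting an exponentially modified Gaussian (EMG) to a noisy chromatographic
# peak with SciPy. Generic illustration only; parameters and data are synthetic.

import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def emg(t, area, mu, sigma, tau):
    """Exponentially modified Gaussian: Gaussian peak convolved with an exponential tail."""
    arg_exp = sigma**2 / (2 * tau**2) + (mu - t) / tau
    arg_erfc = (mu - t) / (np.sqrt(2) * sigma) + sigma / (np.sqrt(2) * tau)
    return (area / (2 * tau)) * np.exp(arg_exp) * erfc(arg_erfc)

# Synthetic noisy peak: retention time ~2.3 min with a tailing factor
rng = np.random.default_rng(0)
t = np.linspace(2.0, 3.0, 200)
signal = emg(t, area=50.0, mu=2.3, sigma=0.02, tau=0.05) + rng.normal(0, 5.0, t.size)

# Best-fit EMG through the noisy trace; the integrated area comes out as a fit parameter
popt, _ = curve_fit(emg, t, signal, p0=[40.0, 2.3, 0.03, 0.05])
print(f"fitted area={popt[0]:.1f}, retention time={popt[1]:.3f} min")
```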

How do you validate the quality of results?

We’ve developed a robust validation protocol to ensure that results are, at minimum, equivalent to results from our manual review. We begin by building the assay in ASCENT, entering assay-specific information from our internal standard operating procedure (SOP). Once the assay is configured, validation proceeds with parallel batch processing to compare results between software-reviewed data and staff-reviewed data. For new implementations we run eight to nine batches of 30–40 samples each; when we are modifying or upgrading an existing implementation we run a smaller number of batches. The parallel batches should contain multiple positive and negative results for all analytes in the method, preferably spanning the analytical measurement range of the assay.

The next step is to compare the results and calculate the percent difference between the data review methods. We require that two-thirds of the automated results fall within 20% of the manually reviewed result. In addition to validating patient sample correlation, we also test numerous quality assurance rules that should initiate a flag for further review.
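
The acceptance rule described above is simple to express in code. The sketch below checks whether at least two-thirds of automated results fall within 20% of their manually reviewed counterparts; the paired concentrations are illustrative only.

```python
# Sketch: applying the acceptance rule described above -- at least two-thirds of
# automated results must fall within 20% of the manually reviewed result.
# Paired concentrations below are illustrative only.

def passes_parallel_validation(manual, automated, tolerance=0.20, required_fraction=2/3):
    """Return True when enough automated results agree with manual review."""
    assert len(manual) == len(automated)
    within = sum(
        1 for m, a in zip(manual, automated)
        if m != 0 and abs(a - m) / abs(m) <= tolerance
    )
    return within / len(manual) >= required_fraction

manual_ng_ml    = [12.0, 55.0, 210.0, 8.4, 95.0, 330.0]
automated_ng_ml = [11.1, 60.2, 205.0, 10.9, 96.5, 318.0]

print(passes_parallel_validation(manual_ng_ml, automated_ng_ml))  # 5 of 6 within 20% -> True
```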

What are the biggest challenges during implementation and continual improvement initiatives?

On the technological side, our largest hurdle was loading the sequence files into ASCENT. We had created an in-house mechanism for our chemists to upload the 96-well plate map for their batch into the MS software. We had some difficulty transferring this information to ASCENT, but once we resolved this issue, the technical workflow proceeded fairly smoothly.

The greater challenge was changing our employees’ mindset from one of fear that automation would displace them, to a realization that learning this new technology would actually make them more valuable. Automating a non-mechanical process can be a difficult concept for hands-on scientists, so managers must be patient and help their employees understand that this kind of technology leverages the best attributes of software and people to create a powerful partnership.

We recommend that labs considering automated data analysis engage staff in the validation and implementation to spread the workload and the knowledge. As is true with most technology, it is best not to rely on just one or two super users. We also found it critical to add supervisor level controls on data file manipulation, such as removing a sample that wasn’t run from the sequence table. This can prevent inadvertent deletion of a file, requiring reinjection of the entire batch!

 

Understanding Fibroblast Growth Factor 23

Author: Damien Gruson, PhD  // Date: OCT.1.2015  // Source: Clinical Laboratory News

https://www.aacc.org/publications/cln/articles/2015/october/understanding-fibroblast-growth-factor-23

What is the relationship of FGF-23 to heart failure?

Heart failure (HF) is an increasingly common syndrome associated with high morbidity, elevated hospital readmission rates, and high mortality. Improving diagnosis, prognosis, and treatment of HF requires a better understanding of its different sub-phenotypes. As researchers gained a comprehensive understanding of neurohormonal activation—one of the hallmarks of HF—they discovered several biomarkers, including natriuretic peptides, which now are playing an important role in sub-phenotyping HF and in driving more personalized management of this chronic condition.

Like the natriuretic peptides, fibroblast growth factor 23 (FGF-23) could become important in risk-stratifying and managing HF patients. Produced by osteocytes, FGF-23 is a key regulator of phosphorus homeostasis. It binds to renal and parathyroid FGF-Klotho receptor heterodimers, resulting in phosphate excretion, decreased 1-α-hydroxylation of 25-hydroxyvitamin D, and decreased parathyroid hormone (PTH) secretion. The relationship to PTH is important because impaired homeostasis of cations and decreased glomerular filtration rate might contribute to the rise of FGF-23. The amino-terminal portion of FGF-23 (amino acids 1-24) serves as a signal peptide allowing secretion into the blood, and the carboxyl-terminal portion (aa 180-251) participates in its biological action.

How might FGF-23 improve HF risk assessment?

Studies have shown that FGF-23 is related to the risk of cardiovascular diseases and mortality. It was first demonstrated that FGF-23 levels were independently associated with left ventricular mass index and hypertrophy as well as mortality in patients with chronic kidney disease (CKD). FGF-23 also has been associated with left ventricular dysfunction and atrial fibrillation in coronary artery disease subjects, even in the absence of impaired renal function.

FGF-23 and FGF receptors are both expressed in the myocardium. It is possible that FGF-23 has direct effects on the heart and participates in the pathophysiology of cardiovascular diseases and HF. Experiments have shown that FGF-23 stimulates pathological hypertrophy in cultured rat cardiomyocytes by activating the calcineurin-NFAT pathway, and that intra-myocardial or intravenous injection of FGF-23 in wild-type mice resulted in left ventricular hypertrophy. As such, FGF-23 appears to be a potential stimulus of myocardial hypertrophy, and increased levels may contribute to the worsening of heart failure and long-term cardiovascular death.

Researchers have documented that HF patients have elevated FGF-23 circulating levels. They have also found a significant correlation between plasma levels of FGF-23 and B-type natriuretic peptide, a biomarker related to ventricular stretch and cardiac hypertrophy, in patients with left ventricular hypertrophy. As such, measuring FGF-23 levels might be a useful tool to predict long-term adverse cardiovascular events in HF patients.

Interestingly, researchers have documented a significant relationship between FGF-23 and PTH in both CKD and HF patients. As PTH stimulates FGF-23 expression, it could be that in HF patients, increased PTH levels increase the bone expression of FGF-23, which enhances its effects on the heart.

 

The Past, Present, and Future of Western Blotting in the Clinical Laboratory

Author: Curtis Balmer, PhD  // Date: OCT.1.2015  // Source: Clinical Laboratory News

https://www.aacc.org/publications/cln/articles/2015/october/the-past-present-and-future-of-western-blotting-in-the-clinical-laboratory

Much of the discussion about Western blotting centers around its performance as a biological research tool. This isn’t surprising. Since its introduction in the late 1970s, the Western blot has been adopted by biology labs of virtually every stripe, and become one of the most widely used techniques in the research armamentarium. However, Western blotting has also been employed in clinical laboratories to aid in the diagnosis of various diseases and disorders—an equally important and valuable application. Yet there has been relatively little discussion of its use in this context, or of how advances in Western blotting might affect its future clinical use.

Highlighting the clinical value of Western blotting, Stanley Naides, MD, medical director of Immunology at Quest Diagnostics observed that, “Western blotting has been a very powerful tool in the laboratory and for clinical diagnosis. It’s one of many various methods that the laboratorian brings to aid the clinician in the diagnosis of disease, and the selection and monitoring of therapy.” Indeed, Western blotting has been used at one time or the other to aid in the diagnosis of infectious diseases including hepatitis C (HCV), HIV, Lyme disease, and syphilis, as well as autoimmune disorders such as paraneoplastic disease and myositis conditions.

However, Naides was quick to point out that the choice of assays to use clinically is based on their demonstrated sensitivity and performance, and that the search for something better is never-ending. “We’re constantly looking for methods that improve detection of our target [protein],” Naides said. “There have been a number of instances where we’ve moved away from Western blotting because another method proves to be more sensitive.” But this search can also lead back to Western blotting. “We’ve gone away from other methods because there’s been a Western blot that’s been developed that’s more sensitive and specific. There’s that constant movement between methods as new tests are developed.”

In recent years, this quest has been leading clinical laboratories away from Western blotting toward more sensitive and specific diagnostic assays, at least for some diseases. Using confirmatory diagnosis of HCV infection as an example, Sai Patibandla, PhD, director of the immunoassay group at Siemens Healthcare Diagnostics, explained that movement away from Western blotting for confirmatory diagnosis of HCV infection began with a technical modification called Recombinant Immunoblotting Assay (RIBA). RIBA streamlines the conventional Western blot protocol by spotting recombinant antigen onto strips which are used to screen patient samples for antibodies against HCV. This approach eliminates the need to separate proteins and transfer them onto a membrane.

The RIBA HCV assay was initially manufactured by Chiron Corporation (acquired by Novartis Vaccines and Diagnostics in 2006). It received Food and Drug Administration (FDA) approval in 1999 and was marketed as the Chiron RIBA HCV 3.0 Strip Immunoblot Assay. Patibandla explained that, at the time, the Chiron assay “…was the only FDA-approved confirmatory testing for HCV.” In 2013 the assay was discontinued and withdrawn from the market due to reports that it was producing false-positive results.

Since then, clinical laboratories have continued to move away from Western blot-based assays for confirmation of HCV in favor of the more sensitive technique of nucleic acid testing (NAT). “The migration is toward NAT for confirmation of HCV [diagnosis]. We don’t use immunoblots anymore. We don’t even have a blot now to confirm HCV,” Patibandla said.

Confirming HIV infection has followed a similar path. Indeed, in 2014 the Centers for Disease Control and Prevention issued updated recommendations for HIV testing that, in part, replaced Western blotting with NAT. This change was in response to the recognition that the HIV-1 Western blot assay was producing false-negative or indeterminate results early in the course of HIV infection.

At this juncture it is difficult to predict if this trend away from Western blotting in clinical laboratories will continue. One thing that is certain, however, is that clinicians and laboratorians are infinitely pragmatic, and will eagerly replace current techniques with ones shown to be more sensitive, specific, and effective. This raises the question of whether any of the many efforts currently underway to improve Western blotting will produce an assay that exceeds the sensitivity of currently employed techniques such as NAT.

Some of the most exciting and groundbreaking work in this area is being done by Amy Herr, PhD, a professor of bioengineering at University of California, Berkeley. Herr’s group has taken on some of the most challenging limitations of Western blotting, and is developing techniques that could revolutionize the assay. For example, the Western blot is semi-quantitative at best. This weakness dramatically limits the types of answers it can provide about changes in protein concentrations under various conditions.

To make Western blotting more quantitative, Herr’s group is, among other things, identifying losses of protein sample mass during the assay protocol. About this, Herr explains that the conventional Western blot is an “open system” that involves lots of handling of assay materials, buffers, and reagents that makes it difficult to account for protein losses. Or, as Kevin Lowitz, a senior product manager at Thermo Fisher Scientific, described it, “Western blot is a [simple] technique, but a really laborious one, and there are just so many steps and so many opportunities to mess it up.”

Herr’s approach is to reduce the open aspects of Western blot. “We’ve been developing these more closed systems that allow us at each stage of the assay to account for [protein mass] losses. We can’t do this exactly for every target of interest, but it gives us a really good handle [on protein mass losses],” she said. One of the major mechanisms Herr’s lab is using to accomplish this is to secure proteins to the blot matrix with covalent bonding rather than with the much weaker hydrophobic interactions that typically keep the proteins in place on the membrane.

Herr’s group also has been developing microfluidic platforms that allow Western blotting to be done on single cells, “In our system we’re doing thousands of independent Westerns on single cells in four hours. And, hopefully, we’ll cut that down to one hour over the next couple years.”

Other exciting modifications that stand to dramatically increase the sensitivity, quantitation, and through-put of Western blotting also are being developed and explored. For example, the use of capillary electrophoresis—in which proteins are conveyed through a small electrolyte-filled tube and separated according to size and charge before being dropped onto a blotting membrane—dramatically reduces the amount of protein required for Western blot analysis, and thereby allows Westerns to be run on proteins from rare cells or for which quantities of sample are extremely limited.

Jillian Silva, PhD, an associate specialist at the University of California, San Francisco Helen Diller Family Comprehensive Cancer Center, explained that advances in detection are also extending the capabilities of Western blotting. “With the advent of fluorescence detection we have a way to quantitate Westerns, and it is now more quantitative than it’s ever been,” said Silva.

Whether or not these advances produce an assay that is adopted by clinical laboratories remains to be seen. The emphasis on Western blotting as a research rather than a clinical tool may bias advances in favor of the needs and priorities of researchers rather than clinicians, and as Patibandla pointed out, “In the research world Western blotting has a certain purpose. [Researchers] are always coming up with new things, and are trying to nail down new proteins, so you cannot take Western blotting away.” In contrast, she suggested that for now, clinical uses of Western blotting remain “limited.”

 

Adapting Next Generation Technologies to Clinical Molecular Oncology Service

Author: Ronald Carter, PhD, DVM  // Date: OCT.1.2015  // Source: Clinical Laboratory News

https://www.aacc.org/publications/cln/articles/2015/october/adapting-next-generation-technologies-to-clinical-molecular-oncology-service

Next generation technologies (NGT) deliver huge improvements in cost efficiency, accuracy, robustness, and the amount of information they provide. Microarrays, high-throughput sequencing platforms, digital droplet PCR, and other technologies all offer unique combinations of desirable performance.

As stronger evidence of genetic testing’s clinical utility influences patterns of patient care, demand for NGT testing is increasing. This presents several challenges to clinical laboratories, including increased urgency, clinical importance, and breadth of application in molecular oncology, as well as greater integration of genetic tests into synoptic reporting. Laboratories need to add NGT-based protocols while still providing established tests, and the pace of change is increasing. What follows is one viewpoint on the major challenges in adopting NGTs into diagnostic molecular oncology service.

Choosing a Platform

Instrument selection is a critical decision that has to align with intended test applications, sequencing chemistries, and analytical software. Although multiple platforms are available, a mainstream standard has not emerged. Depending on their goals, laboratories might set up NGTs for improved accuracy of mutation detection, massively higher sequencing capacity per test, massively more targets combined in one test (multiplexing), greater range in sequencing read length, much lower cost per base pair assessed, and economy of specimen volume.

When high-throughput instruments first made their appearance, laboratories paid more attention to the accuracy of base-reading: less accurate sequencing meant more data cleaning and resequencing (1). Now, new instrument designs have narrowed the differences, and test chemistry can have a comparatively large impact on analytical accuracy (Figure 1). The robustness of technical performance can also vary significantly depending upon specimen type. For example, Life Technologies’ sequencing platforms appear to be comparatively more tolerant of low DNA quality and concentration, which is an important consideration for fixed and processed tissues.

https://www.aacc.org/~/media/images/cln/articles/2015/october/carter_fig1_cln_oct15_ed.jpg

Figure 1 Comparison of Sequencing Chemistries

Sequence pile-ups of the same target sequence (2 large genes), all performed on the same analytical instrument. Results from 4 different chemistries, as designed and supplied by reagent manufacturers prior to optimization in the laboratory. Red lines represent limits of exons. Height of blue columns proportional to depth of coverage. In this case, the intent of the test design was to provide high depth of coverage so that reflex Sanger sequencing would not be necessary. Courtesy B. Sadikovic, U. of Western Ontario.

 

In addition, batching, robotics, workload volume patterns, maintenance contracts, software licenses, and platform lifetime affect the cost per analyte and per specimen considerably. Royalties and reagent contracts also factor into the cost of operating NGT: In some applications, fees for intellectual property can represent more than 50% of the bench cost of performing a given test, and increase substantially without warning.

Laboratories must also deal with the problem of obsolescence. Investing in a new platform brings the angst of knowing that better machines and chemistries are just around the corner. Laboratories are buying bigger pieces of equipment with shorter service lives. Before NGTs, major instruments could confidently be expected to remain current for at least 6 to 8 years. Now, a major instrument is obsolete much sooner, often within 2 to 3 years. This means that keeping it in service might cost more than investing in a new platform. Lease-purchase arrangements help mitigate year-to-year fluctuations in capital equipment costs, and maximize the value of old equipment at resale.

One Size Still Does Not Fit All

Laboratories face numerous technical considerations to optimize sequencing protocols, but the test has to be matched to the performance criteria needed for the clinical indication (2). For example, measuring response to treatment depends first upon the diagnostic recognition of mutation(s) in the tumor clone; the marker(s) then have to be quantifiable and indicative of tumor volume throughout the course of disease (Table 1).

As a result, diagnostic tests need to cover many different potential mutations, yet accurately identify any clinically relevant mutations actually present. On the other hand, tests for residual disease need to provide standardized, sensitive, and accurate quantification of a selected marker mutation against the normal background. A diagnostic panel might need 1% to 3% sensitivity across many different mutations. But quantifying early response to induction—and later assessment of minimal residual disease—needs a test that is reliably accurate to the 10⁻⁴ or 10⁻⁵ range for a specific analyte.
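
To see why the 10⁻⁴ to 10⁻⁵ range is so much more demanding than a 1% to 3% diagnostic panel, a back-of-envelope binomial calculation helps. The sketch below assumes ideal, error-free sequencing and requires at least five mutant reads for a call; real minimal residual disease assays must also model background error rates and use enrichment, so these numbers are a simplified lower bound, not an assay specification.

```python
# Sketch: back-of-envelope read-depth requirement for detecting a variant at a given
# allele fraction, assuming error-free sequencing and simple binomial sampling.

from math import comb

def detection_probability(depth: int, allele_fraction: float, min_mutant_reads: int = 5) -> float:
    """P(at least min_mutant_reads mutant reads) under Binomial(depth, allele_fraction)."""
    p_fewer = sum(
        comb(depth, k) * allele_fraction**k * (1 - allele_fraction) ** (depth - k)
        for k in range(min_mutant_reads)
    )
    return 1.0 - p_fewer

for af in (0.01, 1e-4, 1e-5):             # diagnostic-panel vs. MRD-level targets
    for depth in (500, 100_000, 1_000_000):
        p = detection_probability(depth, af)
        print(f"AF={af:g}, depth={depth:,}: P(detect) = {p:.3f}")
```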

Covering all types of mutations in one diagnostic test is not yet possible. For example, subtyping of acute myeloid leukemia is both old school (karyotype, fluorescent in situ hybridization, and/or PCR-based or array-based testing for fusion rearrangements, deletions, and segmental gains) and new school (NGT-based panel testing for molecular mutations).

Chemistries that cover both structural variants and copy number variants are not yet in general use, but the advantages of NGTs compared to traditional methods are becoming clearer, such as in colorectal cancer (3). Researchers are also using cell-free DNA (cfDNA) to quantify residual disease and detect resistance mutations (4). Once a clinically significant clone is identified, enrichment techniques help enable extremely sensitive quantification of residual disease (5).

Validation and Quality Assurance

Beyond choosing a platform, two distinct challenges arise in bringing NGTs into the lab. The first is assembling the resources for validation and quality assurance. The second is keeping tests up-to-date as new analytes are needed. Even if a given test chemistry has the flexibility to add analytes without revalidating the entire panel, keeping up with clinical advances is a constant priority.

Due to their throughput and multiplexing capacities, NGT platforms typically require considerable upfront investment to adopt, and training staff to perform testing takes even more time. Proper validation is harder to document: Assembling positive controls, documenting test performance criteria, developing quality assurance protocols, and conducting proficiency testing are all demanding. Labs meet these challenges in different ways. Laboratory-developed tests (LDTs) allow self-determined choice in design, innovation, and control of the test protocol, but can be very expensive to set up.

Food and Drug Administration (FDA)-approved methods are attractive but not always an option. More FDA-approved methods will be marketed, but FDA approval itself brings other trade-offs. There is a cost premium compared to LDTs, and the test methodologies are locked down and not modifiable. This is particularly frustrating for NGTs, which have the specific attraction of extensive multiplexing capacity and accommodating new analytes.

IT and the Evolution of Molecular Oncology Reporting Standards

The options for information technology (IT) pipelines for NGTs are improving rapidly. At the same time, recent studies still show significant inconsistencies and lack of reproducibility when it comes to interpreting variants in array comparative genomic hybridization, panel testing, tumor expression profiling, and tumor genome sequencing. It can be difficult to duplicate published performances in clinical studies because of a lack of sufficient information about the protocol (chemistry) and software. Building bioinformatics capacity is a key requirement, yet skilled people are in short supply and the qualifications needed to work as a bioinformatician in a clinical service are not yet clearly defined.

Tumor biology brings another level of complexity. Bioinformatic analysis must distinguish tumor-specific variants from germline genomic variants. Sequencing of paired normal tissue is often performed as a control, but virtual normal controls may have intriguing advantages (6). One of the biggest challenges is to reproducibly interpret the clinical significance of interactions between different mutations, even with commonly known, well-defined mutations (7). For multiple analyte panels, such as predictive testing for breast cancer, only the performance of the whole panel in a population of patients can be compared; individual patients may be scored into different risk categories by different tests, all for the same test indication.
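
The core of the paired tumor/normal comparison is straightforward to sketch: a variant is provisionally called somatic when it is well supported in tumor reads but absent, or nearly so, in the matched normal. The thresholds, gene names, and read counts below are illustrative assumptions and far simpler than production somatic callers or the virtual-normal approach of reference 6.

```python
# Sketch: basic paired tumor/normal filter for calling a variant somatic.
# Thresholds and example calls are illustrative only.

from dataclasses import dataclass

@dataclass
class VariantCall:
    locus: str
    tumor_alt_reads: int
    tumor_depth: int
    normal_alt_reads: int
    normal_depth: int

def classify(v: VariantCall, tumor_min_vaf=0.05, normal_max_vaf=0.01) -> str:
    tumor_vaf = v.tumor_alt_reads / v.tumor_depth
    normal_vaf = v.normal_alt_reads / v.normal_depth
    if tumor_vaf >= tumor_min_vaf and normal_vaf <= normal_max_vaf:
        return "somatic"
    if tumor_vaf >= tumor_min_vaf and normal_vaf > 0.3:
        return "likely germline"
    return "ambiguous / needs review"

calls = [
    VariantCall("KRAS p.G12D", 48, 400, 1, 350),     # tumor-only signal -> somatic
    VariantCall("TP53 p.R175H", 190, 410, 160, 340), # present in both -> likely germline
]
for c in calls:
    print(c.locus, "->", classify(c))
```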

In large scale sequencing of tumor genomes, which types of mutations are most informative in detecting, quantifying, and predicting the behavior of the tumor over time? The amount and complexity of mutation varies considerably across different tumor types, and while some mutations are more common, stable, and clinically informative than others, the utility of a given tumor marker varies in different clinical situations. And, for a given tumor, treatment effect and metastasis lead to retesting for changes in drug sensitivities.

These complexities mean that IT must be designed into the process from the beginning. Like robotics, IT represents a major ancillary decision. One approach many labs choose is licensed technologies with shared databases that are updated in real time. These are attractive, despite their cost and licensing fees. New tests that incorporate proprietary IT with NGT platforms link the genetic signatures of tumors to clinically significant considerations like tumor classification, recommended methodologies for monitoring response, predicted drug sensitivities, eligible clinical trials, and prognostic classifications. In-house development of such solutions will be difficult, so licensing platforms from commercial partners is more likely to be the norm.

The Commercial Value of Health Records and Test Data

The future of cancer management likely rests on large-scale databases that link hereditary and somatic tumor testing with clinical outcomes. Multiple centers have such large studies underway, and data extraction and analysis is providing increasingly refined interpretations of clinical significance.

Extracting health outcomes to correlate with molecular test results is commercially valuable, as the pharmaceutical, insurance, and healthcare sectors focus on companion diagnostics, precision medicine, and evidence-based health technology assessment. Laboratories that can develop tests based on large-scale integration of test results to clinical utility will have an advantage.

NGTs do offer opportunities for net reductions in the cost of healthcare. But the lag between availability of a test and peer-evaluated demonstration of clinical utility can be considerable. Technical developments arise faster than evidence of clinical utility. For example, immunohistochemistry, estrogen receptor/progesterone receptor status, HER2/neu, and histology are still the major pathological criteria for prognostic evaluation of breast cancer at diagnosis, even though multiple analyte tumor profiling has been described for more than 15 years. Healthcare systems need a more concerted assessment of clinical utility if they are to take advantage of the promises of NGTs in cancer care.

Disruptive Advances

Without a doubt, “disruptive” is an appropriate buzzword in molecular oncology, and new technical advances are about to change how, where, and for whom testing is performed.

• Predictive Testing

Besides cost per analyte, one of the drivers for taking up new technologies is that they enable multiplexing many more analytes with less biopsy material. Single-analyte sequential testing for epidermal growth factor receptor (EGFR), anaplastic lymphoma kinase, and other targets on small biopsies is not sustainable when many more analytes are needed, and even now, a significant proportion of test requests cannot be completed due to lack of suitable biopsy material. Large panels incorporating all the mutations needed to cover multiple tumor types are replacing individual tests in companion diagnostics.

• Cell-Free Tumor DNA

Challenges of cfDNA include standardizing the collection and processing methodologies, timing sampling to minimize the effect of therapeutic toxicity on analytical accuracy, and identifying the most informative sample (DNA, RNA, or protein). But for more and more tumor types, it will be possible to differentiate benign versus malignant lesions, perform molecular subtyping, predict response, monitor treatment, or screen for early detection—all without a surgical biopsy.

cfDNA technologies can also be integrated into core laboratory instrumentation. For example, blood-based EGFR analysis for lung cancer is being developed on the Roche cobas 4800 platform, which will be a significant change from the current standard of testing based upon single tests of DNA extracted from formalin-fixed, paraffin-embedded sections selected by a pathologist (8).

• Whole Genome and Whole Exome Sequencing

Whole genome and whole exome tumor sequencing approaches provide a wealth of biologically important information, and will replace individual or multiple gene test panels as the technical cost of sequencing declines and interpretive accuracy improves (9). Laboratories can apply informatics selectively or broadly to extract much more information at relatively little increase in cost, and the interpretation of individual analytes will be improved by the context of the whole sequence.

• Minimal Residual Disease Testing

Massive resequencing and enrichment techniques can be used to detect minimal residual disease, and will provide an alternative to flow cytometry as costs decline. The challenge is to develop robust analytical platforms that can reliably produce results in a high proportion of patients with a given tumor type, despite using post-treatment specimens with therapy-induced degradation, and a very low proportion of target (tumor) sequence to benign background sequence.

The tumor markers should remain informative for the burden of disease despite clonal evolution over the course of multiple samples taken during progression of the clinical course and treatment. Quantification needs to be accurate and sensitive down to the 10⁻⁵ range, and cost competitive with flow cytometry.

• Point-of-Care Test Methodologies

Small, rapid, cheap, and single use point-of-care (POC) sequencing devices are coming. Some can multiplex with analytical times as short as 20 minutes. Accurate and timely testing will be possible in places like pharmacies, oncology clinics, patient service centers, and outreach programs. Whether physicians will trust and act on POC results alone, or will require confirmation by traditional laboratory-based testing, remains to be seen. However, in the simplest type of application, such as a patient known to have a particular mutation, the advantages of POC-based testing to quantify residual tumor burden are clear.

Conclusion

Molecular oncology is moving rapidly from an esoteric niche of diagnostics to a mainstream, required component of integrated clinical laboratory services. While NGTs are markedly reducing the cost per analyte and per specimen, and will certainly broaden the scope and volume of testing performed, the resources required to choose, install, and validate these new technologies are daunting for smaller labs. More rapid obsolescence and increased regulatory scrutiny for LDTs also present significant challenges. Aligning test capacity with approved clinical indications will require careful and constant attention to ensure competitiveness.

References

1. Liu L, Li Y, Li S, et al. Comparison of next-generation sequencing systems. J Biomed Biotechnol 2012; doi:10.1155/2012/251364.

2. Brownstein CA, Beggs AH, Homer N, et al. An international effort towards developing standards for best practices in analysis, interpretation and reporting of clinical genome sequencing results in the CLARITY Challenge. Genome Biol 2014;15:R53.

3. Haley L, Tseng LH, Zheng G, et al. Performance characteristics of next-generation sequencing in clinical mutation detection of colorectal cancers. [Epub ahead of print] Modern Pathol July 31, 2015 as doi:10.1038/modpathol.2015.86.

4. Butler TM, Johnson-Camacho K, Peto M, et al. Exome sequencing of cell-free DNA from metastatic cancer patients identifies clinically actionable mutations distinct from primary disease. PLoS One 2015;10:e0136407.

5. Castellanos-Rizaldos E, Milbury CA, Guha M, et al. COLD-PCR enriches low-level variant DNA sequences and increases the sensitivity of genetic testing. Methods Mol Biol 2014;1102:623–39.

6. Hiltemann S, Jenster G, Trapman J, et al. Discriminating somatic and germline mutations in tumor DNA samples without matching normals. Genome Res 2015;25:1382–90.

7. Lammers PE, Lovly CM, Horn L. A patient with metastatic lung adenocarcinoma harboring concurrent EGFR L858R, EGFR germline T790M, and PIK3CA mutations: The challenge of interpreting results of comprehensive mutational testing in lung cancer. J Natl Compr Canc Netw 2015;12:6–11.

8. Weber B, Meldgaard P, Hager H, et al. Detection of EGFR mutations in plasma and biopsies from non-small cell lung cancer patients by allele-specific PCR assays. BMC Cancer 2014;14:294.

9. Vogelstein B, Papadopoulos N, Velculescu VE, et al. Cancer genome landscapes. Science 2013;339:1546–58.

10. Heitzer E, Auer M, Gasch C, et al. Complex tumor genomes inferred from single circulating tumor cells by array-CGH and next-generation sequencing. Cancer Res 2013;73:2965–75.

11. Healy B. BRCA genes — Bookmaking, fortunetelling, and medical care. N Engl J Med 1997;336:1448–9.

 

 

 

Read Full Post »
