
Archive for the ‘Biomedical Measurement Science’ Category

New method for 3D imaging of brain tumors

Larry H. Bernstein, MD, FCAP, Curator

LPBI


Third-Harmonic Generation Microscopy Provides In Situ Brain Tumor Imaging

AMSTERDAM, Netherlands, April 25, 2016 — A technique involving third-harmonic generation microscopy could allow neurosurgeons to image and assess brain tumor boundaries during surgery, providing optical biopsies in near-real time and increasing the accuracy of tissue removal.

Pathologists typically use staining methods, in which chemicals like hematoxylin and eosin turn different tissue components blue and red, revealing the tissue's structure and whether any tumor cells are present. A definitive diagnosis can take up to 24 hours, meaning surgeons may not realize that some cancerous tissue escaped their attention until after surgery — requiring a second operation and additional risk.

Tissue from a patient diagnosed with low-grade glioma. The green image is taken with the new method, while the pink uses conventional hematoxylin and eosin staining. From the upper left to the lower right, both images show increasing cell density due to more tumor tissue. The insets reveal the high density of tumor cells. Courtesy of N.V. Kuzmin et al./VU University Amsterdam.

Brain tumors — specifically glial brain tumors — are often spread out and mixed in with the healthy tissue, presenting a particular challenge. Surgery, irradiation and chemotherapy often cause substantial collateral damage to the surrounding brain tissue.

Now researchers from VU University Amsterdam, led by professor Marloes Groot, have demonstrated a label-free optical method for imaging cancerous brain tissue. They were able to produce most images in under a minute; smaller ones took less than a second, while larger images of a few square millimeters took about five minutes.

The study involved firing short, 200-fs, 1200-nm laser pulses into the tissue. When two or three photons converged at the same time and place, they interacted with the nonlinear optical properties of the tissue. Through the phenomenon of harmonic generation, these interactions produced a single 600- or 400-nm photon (in the case of second or third harmonic generation, respectively).
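As a quick worked check of the wavelengths involved, here is a minimal Python sketch of the photon bookkeeping (constants rounded; the printed values match the 400- and 600-nm figures above):

```python
# Photon bookkeeping for second- and third-harmonic generation with a
# 1200-nm fundamental (values from the article above).
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

fundamental_nm = 1200.0

for n, name in [(2, "SHG"), (3, "THG")]:
    harmonic_nm = fundamental_nm / n   # n photons combine into one at lambda/n
    photon_energy_ev = h * c / (harmonic_nm * 1e-9) / 1.602e-19
    print(f"{name}: {n} x {fundamental_nm:.0f} nm -> {harmonic_nm:.0f} nm "
          f"({photon_energy_ev:.2f} eV)")
# SHG: 2 x 1200 nm -> 600 nm (2.07 eV)
# THG: 3 x 1200 nm -> 400 nm (3.10 eV)
```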

The shorter-wavelength photon scatters in the tissue, and when it reaches a detector — in this case a high-sensitivity GaAsP photomultiplier tube — it reveals what the tissue looks like inside. The resulting images enabled clear recognition of cellularity, nuclear pleomorphism and rarefaction of neuropil in the tissue.

While this technique has been used in other applications — to image insects and fish embryos, for example — the researchers said this is the first time it’s been used to analyze glial brain tumors.

Groot and her team are now developing a handheld device for tumor border detection during surgery. The incoming laser pulses can only reach a depth of about 100 μm into the tissue currently; to reach further, Groot envisions attaching a needle that can pierce the tissue and deliver photons deeper.

The research was published in Biomedical Optics Express, a publication of The Optical Society (OSA) (doi: 10.1364/BOE.7.001889).

 

Third harmonic generation imaging for fast, label-free pathology of human brain tumors

Biomedical Optics Express 2016; 7(5):1889–1904. doi: 10.1364/BOE.7.001889

In brain tumor surgery, recognition of tumor boundaries is key. However, intraoperative assessment of tumor boundaries by the neurosurgeon is difficult. Therefore, there is an urgent need for tools that provide the neurosurgeon with pathological information during the operation. We show that third harmonic generation (THG) microscopy provides label-free, real-time images of histopathological quality; increased cellularity, nuclear pleomorphism, and rarefaction of neuropil in fresh, unstained human brain tissue could be clearly recognized. We further demonstrate THG images taken with a GRIN objective, as a step toward in situ THG microendoscopy of tumor boundaries. THG imaging is thus a promising tool for optical biopsies.

 

Glial tumors (gliomas) account for almost 80% of the tumors originating from brain tissue. The vast majority of these tumors are so-called ‘diffuse gliomas’ as they show very extensive (‘diffuse’) growth into the surrounding brain parenchyma. With surgical resection, irradiation, and/or chemotherapy it is impossible to eliminate all glioma cells without serious damage to the brain tissue. As a consequence, until now, patients with a diffuse glioma have had a poor prognosis, a situation which strongly contributes to the fact that brain tumor patients experience more years of life lost than patients with any other type of cancer [1,2].

Meanwhile it has also been demonstrated that the prognosis of patients with a diffuse glioma correlates with the extent of resection [3–5]. During brain surgery, however, it is extremely difficult for the neurosurgeon to determine the boundary of the tumor, i.e. whether a brain area contains tumor cells or not. If the neurosurgeon could have histopathological information on the tumor boundaries during brain surgery, then recognition of these tumor boundaries and with that, the surgical resection, could be significantly improved.

Occasionally, intra-operative analysis using hematoxylin-and-eosin (H&E) stained sections of snap-frozen material or smear preparations is performed by the pathologist to help establish brain tumor boundaries, but this procedure only allows analysis of small, selected regions, can only be performed on tissue fragments that have already been resected, and is either rather time-consuming (frozen section diagnosis) or does not allow analysis of the tumor in its histological context (smear preparations). Fluorescence imaging techniques are increasingly used during surgery [6,7] but are associated with several drawbacks, such as heterogeneous delivery and nonspecific staining [8,9]. In particular, low-grade gliomas and normal brain tissue have an intact blood-brain barrier and take up little circulating dye [10–12]. Alternative techniques are therefore required that can detect the presence of tumor cells in tissue without fluorescent labels and with a speed that enables ‘live’ feedback to the surgeon while he/she operates.

The past year has seen exciting new developments in which optical coherence tomography [13] and stimulated Raman microscopy [14,15] were reported to reliably detect tumor tissue in the brain of human glioma patients, and a handheld Raman spectroscopy device was even implemented intra-surgically to assess brain tissue prior to excision [16]. These techniques are especially sensitive in densely tumor-infiltrated areas, and for the Raman spectroscopy device study a sensitivity limit of 17 tumor cells in an area of 150 × 150 μm² was reported. The discriminating power of the Raman techniques is based on subtle differences in the vibrational spectra of tumor tissue and healthy tissue, and they require extensive comparison of experimental spectra against libraries of reference spectra. A technique capable of directly visualizing the classical histopathological hallmark criteria currently used by pathologists for classification of tumor tissue could potentially be even more reliable and could ease the transition from the current practice—histopathological analysis of fixated tissue—to in situ optical biopsy. Diffuse gliomas are histopathologically characterized by variably increased cellularity, nuclear pleomorphism and—especially in higher-grade neoplasms—brisk mitotic activity, microvascular proliferation, and necrosis. To visualize these features in live tissue, a technique that elucidates the morphology of the tissue is required.

In this context, third harmonic generation (THG) microscopy is a promising tool because of its capacity to visualize almost the full morphology of tissue. THG is a nonlinear optical process that relies on spatial variations of the third-order nonlinear susceptibility χ(3) intrinsic to the tissue and (in the case of brain tissue) mainly arises from interfaces with lipid-rich molecules [17–27]. SHG signals arise from a nonlinear optical process involving non-centrosymmetric molecules present in, for example, microtubules and collagen. THG has been successfully applied to image unstained samples such as insect embryos, plant seeds and intact mammalian tissue [28], epithelial tissues [29–31], zebrafish embryos [32], and the zebrafish nervous system [33]. In brain tissue of mice, augmented by co-recording of SHG signals, THG was shown to visualize cells, nuclei, the inner and outer contours of axons, blood cells, and vessels, resulting in the visualization of both gray and white matter (GM and WM) as well as vascularization, up to a depth of 350 μm [24,26]. Here, we explore the potential of THG and SHG imaging for real-time analysis of ex-vivo human brain tissue in the challenging cases of diffuse tumor invasion in low-grade brain tumors as well as of high-grade gliomas and structurally normal brain tissues.

 

Multiphoton imaging

THG and SHG are nonlinear optical processes that may occur in tissue depending on the nonlinear susceptibility coefficients χ(3) and χ(2) of the tissue and upon satisfying phase-matching conditions [17–19,21,23–27]. In the THG process, three incident photons are converted into one photon with triple the energy and one third of the wavelength (Fig. 1(A)). In the SHG process, signals result from the conversion of an incident photon pair into one photon with twice the energy and half the wavelength. Two- and three-photon excited fluorescence signals (2PF, 3PF) may simultaneously be generated by intrinsic proteins (Fig. 1(B)). As a result, a set of distinct (harmonic) and broadband (autofluorescence) spectral peaks is generated in the visible range.

The imaging setup (Fig. 1(C)) used to generate and collect these signals consisted of a commercial two-photon laser-scanning microscope (TriMScope I, LaVision BioTec GmbH) and a femtosecond laser source. The laser source was an optical parametric oscillator (Mira-OPO, APE) pumped at 810 nm by a Ti:sapphire oscillator (Coherent Chameleon Ultra II). The OPO generates 200-fs pulses at 1200 nm with a repetition rate of 80 MHz. We selected this wavelength as it falls in the tissue transparency window, providing deeper penetration and reduced photodamage compared to the 700–1000 nm range, and because the harmonic signals are generated in the visible wavelength range, facilitating their collection and detection with conventional objectives and detectors. We focused the OPO beam on the sample using a 25×/1.10 water-dipping objective (MO, Nikon APO LWD). The focal spot size of the 1200-nm beam on the sample was ~0.7 μm (lateral) and ~4.1 μm (axial), as measured with 0.175-μm fluorescent microspheres (see Section 3.4), yielding two-photon resolution values of ~0.5 μm (lateral) and ~2.9 μm (axial), and three-photon values of ~0.4 μm (lateral) and ~2.4 μm (axial).

Two high-sensitivity GaAsP photomultiplier tubes (PMT, Hamamatsu H7422-40) equipped with narrowband filters at 400 nm and 600 nm were used to collect the THG and SHG signals, respectively, as a function of the position of the focus in the sample. The signals were separated from the 1200-nm fundamental photons by a dichroic mirror (Chroma T800LPXRXT, DM1), split into SHG and THG channels by a second dichroic mirror (Chroma T425LPXR, DM2), and passed through narrow-band interference filters (F) for SHG (Chroma D600/10X) and THG (Chroma Z400/10X) detection. The efficient back-scattering of the harmonic signals allowed for their detection in the epi-direction. The laser beam was transversely scanned over the sample by a pair of galvo mirrors (GM). THG and SHG modalities are intrinsically confocal and therefore provide direct depth sectioning. We obtained a full 3D image of the tissue volume by scanning the microscope objective with a stepper motor in the vertical (z) direction; mosaic imaging of the sample was performed by transverse (xy) scanning of the motorized translation stage. Imaging data were acquired with the TriMScope I software (“Imspector Pro”); image stacks were stored in 16-bit TIFF format and further processed and analyzed with ImageJ (ver. 1.49m, NIH, USA). All images were processed with logarithmic contrast enhancement.
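That last processing step is simple enough to sketch. Below is a minimal numpy version of logarithmic contrast enhancement on a 16-bit stack, analogous to the ImageJ step just described (the file name and the use of the tifffile library are illustrative assumptions):

```python
# Minimal sketch of logarithmic contrast enhancement on a 16-bit TIFF
# stack (the paper used ImageJ; file name and tifffile are assumptions).
import numpy as np
import tifffile

stack = tifffile.imread("thg_stack.tif").astype(np.float64)  # (z, y, x) counts

# log(1 + I) compresses the bright THG interface signal and lifts weak
# features, then rescale back to the full 16-bit display range.
log_img = np.log1p(stack)
enhanced = (log_img - log_img.min()) / (log_img.max() - log_img.min())
enhanced16 = (enhanced * 65535).astype(np.uint16)

tifffile.imwrite("thg_stack_logenhanced.tif", enhanced16)
```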

Fig. 1 THG/SHG microscopy for brain tissue imaging. (A) Energy level diagram of the second (SHG) and third (THG) harmonic generation process. (B) Energy level diagram of the two- (2PF) and three-photon (3PF) excited auto-fluorescence process. (C) Multiphoton microscope setup: Laser producing 200 fs pulses at 1200 nm; GM – X-Y galvo-scanner mirrors; SL – scan lens; TL – tube lens; MO – microscope objective; DM1 – dichroic mirror reflecting back-scattered THG/SHG photons to the PMT detectors; DM2 – dichroic mirror splitting SHG and THG channels; F – narrow-band SHG and THG interference filters; L – focusing lenses; PMT – photomultiplier tube detectors. (D) Infrared photons (white arrow) are focused deep in the brain tissue, converted to THG (green) and SHG (red) photons, scattered back (green/red arrows) and epi-detected. The nonlinear optical processes result in label-free contrast images with sub-cellular resolution and intrinsic depth sectioning. (E and F) Freshly-excised low-grade (E) and high-grade (F) glioma tissue samples in artificial cerebrospinal fluid (ACSF) in a Petri dish with a millimeter paper underneath for scale. (G) An agar-embedded tumor tissue sample under 0.17 mm glass cover slip with the microscope objective (MO) on top.

Endomicroscopy imaging

For endomicroscopic imaging we used a commercial high-numerical-aperture (NA) multi-element micro-objective lens (GT-MO-080-018-810, GRINTECH) composed of a plano-convex lens and two GRaded INdex (GRIN) lenses with aberration compensation, with object NA = 0.80 and object working distance 200 µm (in water), image NA = 0.18 and image working distance 200 µm (in air), 4.8× magnification, and a field-of-view diameter of 200 μm. The GRIN lenses and the plano-convex lens were mounted in a waterproof stainless-steel housing with an outer diameter of 1.4 mm and a total length of 7.5 mm. Although originally designed for a wavelength range of 800–900 nm [36–41], this micro-objective lens was used to focus 1200-nm femtosecond pulses and to collect back-scattered harmonic and fluorescence photons. A coupling lens with f = 40 mm (NA = 0.19, Qioptiq, ARB2 NIR, dia. 25 mm) focused the scanned laser beam in the image plane of the micro-objective lens and forwarded the epi-detected harmonic and fluorescence photons to the PMTs.

We characterized the lateral (x) and axial (z) resolution of the micro-objective lens by 3D imaging of fluorescent microspheres (PS-Speck Microscope Point Source Kit, P7220, Molecular Probes). We used “blue” and “deep red” microspheres, 0.175 ± 0.005 μm in diameter, with excitation/emission maxima at 360/440 nm and 630/660 nm, to obtain three-photon (3P) and two-photon (2P) point spread function (PSF) profiles. The excitation wavelength was 1200 nm, and fluorescence signals were detected in the 400 ± 5 nm (3P) and 600 ± 5 nm (2P) spectral windows, just as in the brain tissue imaging experiments. 1 μL of each of the “blue” and “deep red” sphere suspensions was applied to a propanol-cleaned 75 × 26 × 1 mm³ glass slide. The mixed microsphere suspension was left to dry for 20 min and was then imaged with the micro-objective lens via a water-immersion layer. The assembly of the coupling lens and the micro-objective lens was vertically (z) scanned with a step of 0.5 μm, and stacks of two-/three-photon images were recorded. Line profiles were then taken over the lateral (xy) images of the fluorescent spheres at maximal intensity (in focus), and fluorescence counts were plotted as a function of the lateral coordinate (x). The axial (z) scan values of the two- and three-photon fluorescence signals were acquired by averaging the total fluorescence counts of the corresponding spheres and were plotted as a function of the axial coordinate (z). The lateral (x) and axial (z) 2P/3P profiles were then fitted with Gaussian functions, and full width at half-maximum (FWHM) values were extracted.
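The fitting step is compact enough to illustrate. Here is a minimal scipy sketch of the Gaussian fit and FWHM extraction, run on a synthetic bead profile (the profile values are invented stand-ins for the microsphere data described above):

```python
# Fit a Gaussian to a bead line profile and report the FWHM.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, sigma, offset):
    return amp * np.exp(-((x - x0) ** 2) / (2 * sigma ** 2)) + offset

# Hypothetical lateral profile: position (um) vs. fluorescence counts.
x = np.linspace(-2, 2, 81)
counts = gaussian(x, amp=1000, x0=0.05, sigma=0.17, offset=20)
counts += np.random.default_rng(0).normal(0, 15, x.size)  # noise stand-in

popt, _ = curve_fit(gaussian, x, counts, p0=(counts.max(), 0, 0.2, 0))
fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(popt[2])  # FWHM = 2*sqrt(2 ln 2)*sigma
print(f"Lateral FWHM ~ {fwhm:.2f} um")  # ~0.4 um, as for the 3P PSF above
```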

… Results …

Conclusions

The results shown here provide the first evidence that—by applying the same microscopic criteria that are used by the pathologist, i.e. increased cellularity, nuclear pleomorphism, and rarefaction of neuropil—THG/SHG ex-vivo microscopy can be used to recognize the presence of diffuse infiltrative glioma in fresh, unstained human brain tissue. Images and a first diagnosis can be provided in seconds in ‘inspection mode’, by moving the sample under the scanning microscope (see Visualization 4 and Visualization 5), or in about 5 minutes if an area has to be inspected in sub-cellular detail. The sensitivity of THG to interfaces provides images with excellent contrast in which cell-by-cell variations are visualized. The quality of the images and the speed with which they can be recorded make THG a promising tool for quick assessment of the nature of excised tissue. Importantly, because THG/SHG images are very close to those of histological slides, we expect that the surgeon (or pathologist) will need very little additional training to interpret the images adequately. We are planning to construct a THG/SHG ex-vivo tabletop device, consisting of a compact laser source and a laser-scanning microscope with a physical footprint of only 1 m², to be placed in an operating room, enabling immediate feedback to the surgeon on the nature of excised tissue during the operation. With this device, we will perform a quantitative study of the added value of rapid THG/SHG pathological feedback during surgery for the final success of the neurosurgery. Finally, we note that THG/SHG imaging does not induce artifacts associated with fixation, freezing, and staining; therefore, tissue fragments examined ex-vivo can still be used for subsequent immunochemical and/or molecular analysis.

The microendoscopy THG/SHG imaging results represent an important step toward the development of a THG/SHG-based bioptic needle, and show that the use of such a needle for in situ optical sampling for optimal resection of gliomas is indeed a viable prospect, as has also been demonstrated before for other multiphoton microscopies [38,49–54]. Although there are several issues associated with the operation of a needle-like optical device, such as the fact that blood in the surgical cavity may obscure the view, and the fact that only small areas can be biopsied with a needle, it may be a valuable tool in cases where sparing healthy tissue is of such vital importance as in brain surgery. Therefore, the reasonably good quality of the THG images taken with the GRIN micro-objective shown here, together with the developments in the field of microendoscopy, warrants further development of THG/SHG into a true handheld device. This next step, a true handheld bioptic needle, requires an optical fiber to transport the light from a small-footprint laser to the GRIN micro-objective, and a small 2D scanner unit, to enable placing the laser at a sufficient distance from the patient. Patient-safe irradiation levels for THG imaging will have to be determined but are expected to lie in the 10–50 mW range [55–58]. This implies that only minor optimization of signal collection efficiency needs to be achieved, because the images of Fig. 10 were measured with 50 mW incident power.

THG/SHG imaging thus holds great promise for improving surgical procedures, thereby reducing the need for second surgeries and the loss of function by excising non-infiltrated brain tissue, as well as improving survival and quality of life of the patients. In addition, the success in the challenging case of diffuse gliomas promises great potential of THG/SHG-based histological analysis for a much wider spectrum of diagnostic applications.

References and links

1. N. G. Burnet, S. J. Jefferies, R. J. Benson, D. P. Hunt, and F. P. Treasure, “Years of life lost (YLL) from cancer is an important measure of population burden – and should be considered when allocating research funds,” Br. J. Cancer 92(2), 241–245 (2005).

2. J. A. Schwartzbaum, J. L. Fisher, K. D. Aldape, and M. Wrensch, “Epidemiology and molecular pathology of glioma,” Nat. Clin. Pract. Neurol. 2(9), 494–516 (2006).

3. J. S. Smith, E. F. Chang, K. R. Lamborn, S. M. Chang, M. D. Prados, S. Cha, T. Tihan, S. Vandenberg, M. W. McDermott, and M. S. Berger, “Role of extent of resection in the long-term outcome of low-grade hemispheric gliomas,” J. Clin. Oncol. 26(8), 1338–1345 (2008).

4. N. Sanai and M. S. Berger, “Glioma extent of resection and its impact on patient outcome,” Neurosurgery 62(4), 753–766 (2008).

5. I. Y. Eyüpoglu, M. Buchfelder, and N. E. Savaskan, “Surgical resection of malignant gliomas – role in optimizing patient outcome,” Nat. Rev. Neurol. 9(3), 141–151 (2013).

6. U. Pichlmeier, A. Bink, G. Schackert, and W. Stummer, “Resection and survival in glioblastoma multiforme: An RTOG recursive partitioning analysis of ALA study patients,” Neuro-oncol. 10(6), 1025–1034 (2008).

7. W. Stummer, J. C. Tonn, C. Goetz, W. Ullrich, H. Stepp, A. Bink, T. Pietsch, and U. Pichlmeier, “5-Aminolevulinic Acid-Derived Tumor Fluorescence: The Diagnostic Accuracy of Visible Fluorescence Qualities as Corroborated by Spectrometry and Histology and Postoperative Imaging,” Neurosurgery 74(3), 310–320 (2014).

….. more


Table 1 Pre-operative diagnoses and cell densities observed in the studied brain tissue samples by THG imaging and corresponding H&E histopathology.


Crystal Resolution in Raman Spectroscopy for Pharmaceutical Analysis, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Crystal Resolution in Raman Spectroscopy for Pharmaceutical Analysis

Curator: Larry H. Bernstein, MD, FCAP

 

Investigating Crystallinity Using Low Frequency Raman Spectroscopy: Applications in Pharmaceutical Analysis

http://images.alfresco.advanstar.com/alfresco_images/pharma/2016/01/21/52099838-6354-4ad4-b6f9-9c7c8061f307/Gordon-figure01_web.jpg

Figure 1: Illustration of an exemplar low-frequency Raman setup with a 785-nm laser.

The second system is based on a pre-built SureBlock XLF-CLM THz-Raman system from Ondax Inc. The laser (830 nm, 200 mW), cleanup filters, and laser-line filters are all self-contained inside the instrument but operate on the same principles as the 785-nm system. The sample is arranged in a 180° backscattering geometry relative to a 10× microscope lens. This system is coupled via a fiber-optic cable to a Princeton Instruments SP2150i spectrograph and PIXIS 100 CCD camera. The 0.15-m spectrograph is used in conjunction with either a 1200- or 1800-groove/mm blazed diffraction grating to adjust the resolution and spectral range.
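To see why switching between the two gratings trades spectral range against resolution, here is a back-of-envelope sketch of the reciprocal linear dispersion (first diffraction order and a small-angle approximation are assumptions, so the numbers are only indicative, not the instrument's specification):

```python
# Reciprocal linear dispersion of a 0.15-m spectrograph with the two
# gratings mentioned above (first order, cos(beta) ~ 1 assumed).
focal_length_mm = 150.0

for grooves_per_mm in (1200, 1800):
    d_nm = 1e6 / grooves_per_mm                # groove spacing in nm
    recip_dispersion = d_nm / focal_length_mm  # nm of spectrum per mm at the CCD
    print(f"{grooves_per_mm} g/mm: ~{recip_dispersion:.1f} nm/mm at the focal plane")
# 1200 g/mm: ~5.6 nm/mm; 1800 g/mm: ~3.7 nm/mm. The finer grating spreads
# the spectrum more (higher resolution) over a narrower spectral range.
```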

Crystalline Versus Amorphous Samples

The Raman spectra of crystalline and amorphous solids differ greatly in the low-frequency region (see Figure 2) because of the highly ordered and highly disordered molecular environments of the respective solids. However, the mid-frequency region can also be noticeably altered by the changing environment (Figure 3).


http://images.alfresco.advanstar.com/alfresco_images/pharma/2016/01/21/52099838-6354-4ad4-b6f9-9c7c8061f307/Gordon-figure03_web.jpg

Figure 3: Raman spectra of griseofulvin

Ensuring Accuracy

A potential issue is optical artifacts, which may be identified by analyzing both the Stokes and anti-Stokes spectra. One advantage of the experimental setups described is that signal from the sample may be measured within minutes and the measurement is nondestructive, allowing Raman spectra to be collected from a single sample using both techniques at virtually the same time. This approach permits the examination of low-frequency Raman data with 785-nm and 830-nm excitation and allows comparison with Fourier transform (FT)-Raman spectra, in which it is possible to collect meaningful data down to a Raman shift of 50 cm⁻¹. The benefits are demonstrated in Figure 4. In these data, each technique produces consistent bands with similar Raman shifts and relative intensities. While Raman data were not collected below 50 cm⁻¹ using the 1064-nm system, the bands at 69 and 96 cm⁻¹ are consistent with the 785- and 830-nm data. Furthermore, the latter two methods show consistent bands appearing around 32 and 46 cm⁻¹.
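The Stokes/anti-Stokes check rests on a textbook relation: in thermal equilibrium, the anti-Stokes-to-Stokes intensity ratio of a band at shift ν̃ is ((ν̃₀ + ν̃)/(ν̃₀ − ν̃))⁴ · exp(−hcν̃/kT); bands that deviate strongly from this ratio are candidates for artifacts. A short sketch evaluating it for the bands in Figure 4 (room temperature and 785-nm excitation assumed):

```python
# Expected anti-Stokes/Stokes ratio for low-frequency Raman bands.
import numpy as np

h, c, k = 6.626e-34, 2.998e10, 1.381e-23   # c in cm/s so shifts are in cm^-1
T = 295.0                                  # room temperature, K

nu0 = 1e7 / 785.0                          # 785-nm excitation in cm^-1
for nu in (32, 46, 69, 96):                # low-frequency bands from Figure 4
    ratio = ((nu0 + nu) / (nu0 - nu)) ** 4 * np.exp(-h * c * nu / (k * T))
    print(f"{nu:>3} cm^-1: expected I_aS/I_S ~ {ratio:.2f}")
# Low-frequency modes are heavily thermally populated, so the ratio stays
# near 1, which is what makes anti-Stokes detection practical here.
```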

http://images.alfresco.advanstar.com/alfresco_images/pharma/2016/01/21/52099838-6354-4ad4-b6f9-9c7c8061f307/Gordon-figure04_web.jpg

Figure 4: Comparison of the low-frequency region of three Raman spectroscopic techniques.

Case Studies

So far, few studies have utilized low-frequency Raman spectroscopy in the analysis of pharmaceutical crystallinity. Nevertheless, the literature does contain articles that demonstrate the promising applicability of the technique.

Mah and colleagues (38) studied the level of crystallinity of griseofulvin using low-frequency Raman spectroscopy with PLS analysis. In this study, a batch of amorphous griseofulvin (verified using X-ray powder diffractometry) was prepared by melting the griseofulvin and rapidly cooling it using liquid nitrogen. Condensed water was removed by placing the sample over phosphorus pentoxide, and the glassy sample was then ground using a mortar and pestle. Calibration samples of 2%, 4%, 6%, 8%, and 10% crystallinity were then created through geometric mixing of the amorphous and crystalline samples; following this mixing, the samples were pressed into tablets. The tablets were then stored at different temperatures (30 °C, 35 °C, and 40 °C) at 0% humidity. Low-frequency 785-nm, mid-frequency 785-nm, and FT-Raman spectroscopies were performed simultaneously on each sample. After PLS analysis, limits of detection (LOD) and limits of quantification (LOQ) were calculated. The results showed that each of the three techniques was capable of quantifying crystallinity. They also showed that the FT-Raman and low-frequency Raman techniques were able to detect and quantify crystallinity earlier than the mid-frequency 785-nm Raman technique. The respective LOD and LOQ values for FT-Raman, low-frequency Raman, and mid-frequency Raman were as follows: LOD values of 0.6%, 1.1%, and 1.5%; LOQ values of 1.8%, 3.4%, and 4.6%. The root mean squared errors of prediction (RMSEP) were also calculated and, like the LOD and LOQ values, indicated that the FT-Raman data had the lowest error, followed by low-frequency Raman, with mid-frequency Raman having the largest errors of the three techniques. The recrystallization tests indicated that higher temperatures produced a distinct increase in the rate of recrystallization and that each technique provided similar results (within experimental error). It is also worth noting that each technique gave similar spectra (where applicable), which supports the meaningfulness of the data. Overall, the conclusion of this research was that low-frequency predictions of crystallinity are at least as accurate as the predictions made using mid-frequency Raman techniques. It is arguable that low-frequency Raman is better because of the presence of stronger spectral features that are intrinsically linked with crystallinity.
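To make the PLS workflow concrete, here is a hedged sketch of such a calibration with ICH-style LOD/LOQ estimates (the spectra are synthetic, and the 3.3σ/10σ convention is a common choice; neither is taken from the paper itself):

```python
# PLS calibration of % crystallinity from (toy) low-frequency spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
crystallinity = np.repeat([2, 4, 6, 8, 10], 5).astype(float)  # % levels, 5 reps

# Toy spectra: a phonon band whose intensity scales with crystallinity, plus noise.
shift = np.linspace(10, 200, 400)                             # Raman shift, cm^-1
band = np.exp(-((shift - 50) ** 2) / (2 * 8 ** 2))
X = crystallinity[:, None] * band[None, :] + rng.normal(0, 0.3, (25, 400))

pls = PLSRegression(n_components=3).fit(X, crystallinity)
pred = pls.predict(X).ravel()

# LOD/LOQ from the residual standard error and the calibration slope.
slope = np.polyfit(crystallinity, pred, 1)[0]
sigma = np.std(pred - crystallinity, ddof=1)
print(f"LOD ~ {3.3 * sigma / slope:.2f}%   LOQ ~ {10 * sigma / slope:.2f}%")
```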

Hédoux and colleagues (36) investigated the crystallinity of indomethacin using low-frequency Raman spectroscopy and compared the results with high-frequency data. The regions of interest were the 5–250 cm⁻¹ and 1500–1750 cm⁻¹ ranges. Samples of indomethacin were milled using a cryogenic mill to avoid mechanical heating of the sample, with fully amorphous samples being obtained after 25 min of milling. Methods used in this study included Raman spectroscopy, isothermal differential scanning calorimetry (DSC), and X-ray diffractometry, as well as the milling technique. The primary objective of this research was to use all of these techniques to monitor the crystallization of amorphous indomethacin to the more stable γ-state while the sample was at room temperature, well below the glass transition temperature (Tg = 43 °C). The results showed that low-frequency Raman spectroscopy is a very sensitive technique for identifying very small amounts of crystallinity within mostly amorphous samples. The data were supported by the well-established methods for monitoring crystallinity, XRD and DSC. This paper particularly noted the short acquisition times of low-frequency Raman spectroscopy compared with the other techniques used.

Low-frequency Raman spectroscopy was also used to monitor two polymorphic forms of caffeine after grinding and pressurization of the samples (39). Pressurization was performed hydrostatically using a gasketed membrane diamond anvil cell (MDAC), while ball milling was used to grind the sample. The analysis methods used were low-frequency Raman and X-ray diffraction. Low-frequency Raman spectra revealed that, upon slight pressurization, caffeine form I transforms into a metastable state slightly different from form II, and that a disordered (amorphous) state is reached in both forms when pressurized above 2 GPa. In contrast, grinding transforms each form into the other, depending on the grinding time, and also generates an intermediate form that was found to be observable only by low-frequency Raman spectroscopy. The caffeine data, as well as the low-frequency data obtained for indomethacin, were further discussed by Hédoux and colleagues (40).

Larkin and colleagues (41) used low-frequency Raman in conjunction with other techniques to characterize several different APIs and their various forms. The other techniques included FT-Raman spectroscopy, X-ray powder diffraction (XRPD), and single-crystal X-ray diffractometry. The APIs studied included carbamazepine, apixaban diacid co-crystals, theophylline, and caffeine, prepared in various ways that are not detailed here. In this research, low-frequency Raman spectroscopy played an important role in understanding the structures of the various forms. More importantly, it produced information-rich regions below 200 cm⁻¹ for each of the crystalline samples and noticeably broad features when the APIs were in solution.

Wang and colleagues (42) investigated the applicability of low-frequency Raman spectroscopy to the analysis of respirable dosage forms of various pharmaceuticals. The analyzed pharmaceuticals are used in the treatment of asthma or chronic obstructive pulmonary disease (COPD) and included salmeterol xinafoate, formoterol fumarate, glycopyrronium bromide, fluticasone propionate, mometasone furoate, and salbutamol sulfate. Various formulations of amino acid excipients were also analyzed in this study. The results indicated that low-frequency Raman analysis was beneficial because of the large spectral features found in this region, which allowed reliable identification of each dosage form. It also allowed unambiguous identification of two similar bronchodilators, albuterol (Ventolin) and salbutamol (Airomir).

Heyler and colleagues (43) collected both the low-frequency and fingerprint regions of the Raman spectra of several polymorphs of carbamazepine, an anticonvulsant and mood stabilizer. This study found that the different polymorphs of this API could be distinguished effectively using these two regions. Similarly, Al-Dulaimi and colleagues (44) demonstrated that polymorphic forms of paracetamol, flufenamic acid, and imipramine hydrochloride could be screened using low-frequency Raman with only milligram quantities of each drug. In this study, paracetamol and flufenamic acid were used as model compounds for comparison with a previously unstudied system (imipramine hydrochloride). Features within the low-frequency Raman regions were shown to differ significantly between the forms of each drug, indicating that the polymorphs are highly distinguishable using the technique. Hence, like the other case studies mentioned above, these investigations further demonstrate the utility of low-frequency Raman spectroscopy as a fast and effective method for screening pharmaceuticals for crystallinity.

Conclusions

Low-frequency Raman spectroscopy is a new technique in the field of pharmaceuticals, as well as in general studies of crystallinity, despite previous studies indicating the technique's innate ability to identify crystalline materials and, in some cases, to quantify crystallinity. Arguably one of the most beneficial aspects of this technique is the relatively small amount of time needed to prepare and analyze samples compared with XRD or DSC. This should ensure the growing use of low-frequency Raman spectroscopy not only in pharmaceutical crystallinity studies but also in crystallinity studies of other substances.

References

  1. J.R. Ferraro and K. Nakamoto, Introductory Raman Spectroscopy, 1st Edition (Academic Press, San Diego, 1994).
  2. K.C. Gordon and C.M. McGoverin, Int. J. Pharm. 417, 151–162 (2011).
  3. D. Law et al., J. Pharm. Sci. 90, 1015–1025 (2001).
  4. G.H. Ward and R.K. Schultz, Pharm. Res. 12, 773–779 (1995).
  5. M.D. Ticehurst et al., Int. J. Pharm. 193, 247–259 (2000).
  6. M. Rani, R. Govindarajan, R. Surana, and R. Suryanarayanan, Pharm. Res. 23, 2356–2367 (2006).
  7. M.J. Pikal, in Polymorphs of Pharmaceutical Solids, H.G. Brittain, Ed. (Marcel Dekker, New York, 1999), pp. 395–419.
  8. M. Ohta and G. Buckton, Int. J. Pharm. 289, 31–38 (2005).
  9. J. Han and R. Suryanarayanan, Pharm. Dev. Technol. 3, 587–596 (1998).
  10. S. Debnath and R. Suryanarayanan, AAPS PharmSciTech. 5, 1–11 (2004).
  11. C.J. Strachan, T. Rades, D.A. Newnham, K.C. Gordon, M. Pepper, and P.F. Taday, Chem. Phys. Lett. 390, 20–24 (2004).
  12. Y.C. Shen, Int. J. Pharm. 417, 48–60 (2011).
  13. G.W. Chantry, in Submillimeter Spectroscopy: A Guide to the Theoretical and Experimental Physics of the Far Infrared, 1st Edition (Academic Press Inc. Ltd., Waltam, 1971).
  14. D. Tuschel, Spectroscopy 30(9), 18–31 (2015).
  15. P.M.A. Sherwood, Vibrational Spectroscopy of Solids (Cambridge University Press, Cambridge, 1972).
  16. L. Ho et al., J. Control. Release. 119, 253–261 (2007).
  17. V.P. Wallace et al., Faraday Discuss. 126, 255–263 (2004).
  18. F.S. Vieira and C. Pasquini, Anal. Chem. 84, 3780–3786 (2014).
  19. J. Darkwah, G. Smith, I. Ermolina, and M. Mueller-Holtz, Int. J. Pharm. 455, 357–364 (2013).
  20. S. Kojima, T. Shibata, H. Igawa, and T. Mori, IOP Conf. Ser. Mater. Sci. Eng. 54, 1–6 (2014).
  21. T. Shibata, T. Mori, and S. Kojima, Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 150, 207–211 (2015).
  22. S.P. Delaney, D. Pan, M. Galella, S.X. Yin, and T.M. Korter, Cryst. Growth Des. 12, 5017–5024 (2012).
  23. M.D. King, W.D. Buchanan, and T.M. Korter, Anal. Chem. 83, 3786–3792 (2011).
  24. C.J. Strachan et al., J. Pharm. Sci. 94, 837–846 (2005).
  25. C.M. McGoverin, T. Rades, and K.C. Gordon, J. Pharm. Sci. 97, 4598–4621 (2008).
  26. A. Heinz, C.J. Strachan, K.C. Gordon, and T. Rades, J. Pharm. Pharmacol. 61, 971–988 (2009).
  27. H.G. Brittain, J. Pharm. Sci. 86, 405–412 (1997).
  28. L. Yu, S.M. Reutzel, and G.A. Stephenson, Pharm. Sci. Technol. Today 1, 118–127 (1998).
  29. M. Dracínský, E. Procházková, J. Kessler, J. Šebestík, P. Matejka, and P. Bour, J. Phys. Chem. B. 117, 7297–7307 (2013).
  30. P. Sharma et al., J. Raman Spectrosc., DOI: 10.1002/jrs.4834 (2015).
  31. A.P. Ayala, Vib. Spectrosc. 45, 112–116 (2007).
  32. J.F. Scott, Spex Speak. 17, 1–12 (1972).
  33. D.P. Strommen and K. Nakamoto, in Laboratory Raman Spectroscopy, 1st Edition (John Wiley & Sons Inc., New York, 1984).
  34. A.L. Glebov, O. Mokhun, A. Rapaport, S. Vergnole, V. Smirnov, and L.B. Glebov, Proc. SPIE. 8428, 84280C1–84280C11 (2012).
  35. E.P.J. Parrott and J.A. Zeitler, Appl. Spectrosc. 69, 1–25 (2015).
  36. A. Hédoux, L. Paccou, Y. Guinet, J.F. Willart, and M. Descamps, Eur. J. Pharm. Sci. 38, 156–164 (2009).
  37. R.L. McCreery, in Raman Spectroscopy for Chemical Analysis, 1st Edition (John Wiley & Sons Inc., New York, 2000).
  38. P.T. Mah, S.J. Fraser, M.E. Reish, T. Rades, K.C. Gordon, and C.J. Strachan, Vib. Spectrosc. 77, 10–16 (2015).
  39. A. Hédoux, A.A. Decroix, Y. Guinet, L. Paccou, P. Derollez, and M. Descamps, J. Phys. Chem. B. 115, 5746–5753 (2011).
  40. A. Hédoux, Y. Guinet, and M. Descamps, Int. J. Pharm. 417, 17–31 (2011).
  41. P.J. Larkin, M. Dabros, B. Sarsfield, E. Chan, J.T. Carriere, and B.C. Smith, Appl. Spectrosc. 68, 758–776 (2014).
  42. H. Wang, M. A. Boraey, L. Williams, D. Lechuga-Ballesteros, and R. Vehring, Int. J. Pharm. 469, 197–205 (2014).
  43. R. Heyler, J. Carriere, and B. Smith, in “Raman Technology for Today’s Spectroscopists,” supplement to Spectroscopy (June), 44–50 (2013).
  44. S. Al-Dulaimi, A. Aina, and J. Burley, CrystEngComm 12, 1038–1040 (2010).

 

The drawing in Figure 1 is that of a six-membered ring, or hexagon. A carbon atom is located at each vertex of the hexagon, and a hydrogen atom is attached to each carbon, although the hydrogens are not drawn in. The circle inside the ring indicates that the electrons are delocalized, as illustrated in Figure 2.

http://images.alfresco.advanstar.com/alfresco_images/pharma/2016/02/12/645ee751-2432-4444-8af1-ded62697ee27/IR-Spectral-figure02_web.jpg

Figure 2: Top: The P orbitals on each of the six carbon atoms in benzene that contribute an electron to the ring. Bottom: the collection of delocalized P orbital electrons forming a cloud of electron density above and below the benzene ring.

Each of the carbon atoms in a benzene ring has an unhybridized p orbital containing a lone electron, oriented perpendicular to the plane of the ring, as seen in the top of Figure 2. There is enough orbital overlap that these electrons, rather than being confined between two carbon atoms as might be expected, instead delocalize and form clouds of electron density above and below the plane of the ring. This type of bonding is called aromatic bonding (2), and a ring that has aromatic bonding is called an aromatic ring. It is aromatic bonding that gives aromatic rings their unique structures, chemistry, and IR spectra. Benzene is simply a commonly encountered aromatic ring. Other types of aromatic molecules include polycyclic aromatic hydrocarbons (PAHs), such as naphthalene, which contain two or more benzene rings that are fused (meaning adjacent rings share two carbon atoms), and heterocyclic aromatic rings, which are aromatic rings that contain a noncarbon atom such as nitrogen; pyridine is an example. The interpretation of the IR spectra of these latter aromatic molecules will be discussed in future articles.

The IR Spectrum of Benzene

The IR spectrum of benzene is shown in Figure 3.

http://images.alfresco.advanstar.com/alfresco_images/pharma/2016/02/12/645ee751-2432-4444-8af1-ded62697ee27/IR-Spectral-figure03_web.jpg

 

09:00

Super-Resolution Fluorescence Microscopy: Where To Go Now?
Bernd Rieger, Quantitative Imaging Group Leader, Delft University of Technology

09:30

Keynote Presentation

From Molecules To Whole Organs
Francesco Pavone, Principal Investigator, LENS, University of Florence

Some examples of correlative microscopies combining linear and nonlinear techniques will be described. Particular attention will be devoted to Alzheimer's disease and to neural plasticity after damage as neurobiological applications.

10:15

Super-Resolution Imaging by dSTORM
Markus Sauer, Professor, Julius-Maximilians-Universität Würzburg

10:45

Coffee and Networking in Exhibition Hall

11:15

Correlated Fluorescence And X-Ray Tomography: Finding Molecules In Cellular CT Scans
Carolyn Larabell, Professor, University of California San Francisco

11:45

Integrating Advanced Fluorescence Microscopy Techniques Reveals Nanoscale Architecture And Mesoscale Dynamics Of Cytoskeletal Structures Promoting Cell Migration And Invasion
Alessandra Cambi, Assistant Professor, University of Nijmegen

This lecture will describe our efforts to exploit and integrate a variety of advanced microscopy techniques to unravel the nanoscale structural and dynamic complexity of individual podosomes as well as formation, architecture and function of mesoscale podosome clusters.

12:15

Multi-Photon-Like Fluorescence Microscopy Using Two-Step Imaging Probes
George Patterson, Investigator, National Institutes of Health

12:45

Lunch & Networking in Exhibition Hall

14:15

Technology Spotlight

14:30

3D Single Particle Tracking: Following Mitochondria in Zebrafish Embryos
Don Lamb, Professor, Ludwig-Maximilians-University

15:00

Visualizing Mechano-Biology: Quantitative Bioimaging Tools To Study The Impact Of Mechanical Stress On Cell Adhesion And Signalling
Bernhard Wehrle-Haller, Group Leader, University of Geneva

15:30

Superresolution Imaging Of Clathrin-Mediated Endocytosis In Yeast
Jonas Ries, Group Leader, EMBL Heidelberg

We use single-molecule localization microscopy to investigate the dynamic structural organization of the yeast endocytic machinery. We discovered a striking ring-shaped pre-patterning of the actin nucleation zone, which is key for efficient force generation and membrane invagination.

16:00

Coffee and Networking in Exhibition Hall

16:30

Optical Imaging of Molecular Mechanisms of Disease
Clemens Kaminski, Professor, University of Cambridge

17:00

3-D Optical Tomography For Ex Vivo And In Vivo Imaging
James McGinty, Professor, Imperial College London

17:30

End Of Day One

Wednesday, 15 June 2016

09:00

Imaging Gene Regulation in Living Cells at the Single Molecule Level
James Zhe Liu, Group Leader, Janelia Research Campus, Howard Hughes Medical Institute

09:30

Keynote Presentation

Super-Resolution Microscopy With DNA Molecules
Ralf Jungmann, Group Leader, Max Planck Institute of Biochemistry

10:15

A Revolutionary Miniaturised Instrument For Single-Molecule Localization Microscopy And FRET
Achillefs Kapanidis, Professor, University of Oxford

10:45

Coffee and Networking in Exhibition Hall

11:15

Democratising Live-Cell High-Speed Super-Resolution Microscopy
Ricardo Henriques, Group Leader, University College London

11:45

Democratising Live-Cell High-Speed Super-Resolution Microscopy

12:15

Information In Localisation Microscopy
Susan Cox, Professor, Kings College London

12:45

Lunch & Networking in Exhibition Hall

14:15

Technology Spotlight

14:30

High-Content Imaging Approaches For Drug Discovery For Neglected Tropical Diseases
Manu De Rycker, Team Leader, University of Dundee

The development of new drugs for intracellular parasitic diseases is hampered by difficulties in developing relevant high-throughput cell-based assays. Here we present how we have used image-based high-content screening approaches to address some of these issues.

15:00

High-Resolution In Vivo Histology: Clinical In Vivo Subcellular Imaging Using Femtosecond Laser Multiphoton/CARS Tomography
Karsten König, Professor, Saarland University

We report on a certified, medical, transportable, multipurpose nonlinear microscopic imaging system based on a femtosecond excitation source and a photonic crystal fiber, with multiple miniaturized time-correlated single-photon counting detectors.

15:30

Coffee and Networking in Exhibition Hall

16:00

Lateral Organization Of Plasma Membrane Constituents At The Nanoscale
Gerhard Schutz, Professor, Vienna University of Technology

It is of interest how proteins are spatially distributed over the membrane, and whether they conjoin and move as part of multi-molecular complexes. In my lecture, I will discuss methods for approaching the two questions, and provide biological examples.

16:30

Correlative Light And Electron Microscopy In Structural Cell Biology
Wanda Kukulski, Group Leader, University of Cambridge

17:00

Close of Conference

 


Imaging of Cancer Cells, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Imaging of Cancer Cells

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Microscope uses nanosecond-speed laser and deep learning to detect cancer cells more efficiently

April 13, 2016

Scientists at the California NanoSystems Institute at UCLA have developed a new technique for identifying cancer cells in blood samples faster and more accurately than the current standard methods.

In one common approach to testing for cancer, doctors add biochemicals to blood samples. Those biochemicals attach biological “labels” to the cancer cells, and those labels enable instruments to detect and identify them. However, the biochemicals can damage the cells and render the samples unusable for future analyses. There are other current techniques that don’t use labeling but can be inaccurate because they identify cancer cells based only on one physical characteristic.

Time-stretch quantitative phase imaging (TS-QPI) and analytics system

The new technique images cells without destroying them and can identify 16 physical characteristics — including size, granularity and biomass — instead of just one.

The new technique combines two components that were invented at UCLA:

A “photonic time stretch” microscope, which is capable of quickly imaging cells in blood samples. Invented by Bahram Jalali, professor and Northrop-Grumman Optoelectronics Chair in electrical engineering, it works by taking pictures of flowing blood cells using laser bursts (similar to how a camera uses a flash). Each flash lasts only nanoseconds (billionths of a second) to avoid damaging the cells, but that normally means the images are both too weak to be detected and too fast to be digitized by normal instrumentation. The new microscope overcomes those challenges by using specially designed optics that amplify and boost the clarity of the images, and simultaneously slow them down enough to be detected and digitized at a rate of 36 million images per second.

A deep learning computer program, which identifies cancer cells with more than 95 percent accuracy. Deep learning is a form of artificial intelligence that uses complex algorithms to extract patterns and knowledge from rich multidimensional datasets, with the goal of achieving accurate decision making.

The study was published in the open-access journal Scientific Reports. The researchers write in the paper that the system could lead to data-driven diagnoses based on cells' physical characteristics, which could allow quicker and earlier diagnoses of cancer, for example, and a better understanding of tumor-specific gene expression in cells, which could facilitate new treatments for disease.

The research was supported by NantWorks, LLC.

 

Abstract of Deep Learning in Label-free Cell Classification

Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. This system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.

References:

Claire Lifan Chen, Ata Mahjoubfar, Li-Chia Tai, Ian K. Blaby, Allen Huang, Kayvan Reza Niazi & Bahram Jalali. Deep Learning in Label-free Cell Classification. Scientific Reports 6, Article number: 21471 (2016); doi:10.1038/srep21471 (open access)

Supplementary Information

 

Deep Learning in Label-free Cell Classification

Claire Lifan Chen, Ata Mahjoubfar, Li-Chia Tai, Ian K. Blaby, Allen Huang, Kayvan Reza Niazi & Bahram Jalali

Scientific Reports 6, Article number: 21471 (2016). http://dx.doi.org/10.1038/srep21471

Deep learning extracts patterns and knowledge from rich multidimensional datasets. While it is extensively used for image recognition and speech processing, its application to label-free classification of cells has not been exploited. Flow cytometry is a powerful tool for large-scale cell analysis due to its ability to measure anisotropic elastic light scattering of millions of individual cells as well as emission of fluorescent labels conjugated to cells1,2. However, each cell is represented with single values per detection channel (forward scatter, side scatter, and emission bands) and often requires labeling with specific biomarkers for acceptable classification accuracy1,3. Imaging flow cytometry4,5, on the other hand, captures images of cells, revealing significantly more information about the cells. For example, it can distinguish clusters and debris that would otherwise result in false-positive identification in a conventional flow cytometer based on light scattering6.

In addition to classification accuracy, throughput is another critical specification of a flow cytometer. Indeed, a high throughput, typically 100,000 cells per second, is needed to screen a large enough cell population to find rare abnormal cells that are indicative of early-stage diseases. However, there is a fundamental trade-off between throughput and accuracy in any measurement system7,8. For example, imaging flow cytometers face a throughput limit imposed by the speed of the CCD or CMOS cameras, approximately 2,000 cells/s for present systems9. Higher flow rates lead to blurred cell images due to the finite camera shutter speed. Many applications of flow analyzers, such as cancer diagnostics, drug discovery, biofuel development, and emulsion characterization, require classification of large sample sizes with a high degree of statistical accuracy10. This has fueled research into alternative optical diagnostic techniques for the characterization of cells and particles in flow.

Recently, our group has developed a label-free imaging flow-cytometry technique based on a coherent optical implementation of the photonic time stretch concept11. This instrument overcomes the trade-off between sensitivity and speed by using Amplified Time-stretch Dispersive Fourier Transform12,13,14,15. In time-stretched imaging16, the object’s spatial information is encoded in the spectrum of laser pulses within a sub-nanosecond pulse duration (Fig. 1). Each pulse, representing one frame of the camera, is then stretched in time so that it can be digitized in real-time by an electronic analog-to-digital converter (ADC). The ultra-fast pulse illumination freezes the motion of high-speed cells or particles in flow to achieve blur-free imaging. Detection sensitivity is challenged by the low number of photons collected during the ultra-short shutter time (optical pulse width) and the drop in the peak optical power resulting from the time stretch. These issues are solved in time stretch imaging by implementing a low noise-figure Raman amplifier within the dispersive device that performs time stretching8,11,16. Moreover, warped stretch transform17,18 can be used in time stretch imaging to achieve optical image compression and nonuniform spatial resolution over the field-of-view19. In the coherent version of the instrument, the time stretch imaging is combined with spectral interferometry to measure quantitative phase and intensity images in real-time and at high throughput20. Integrated with a microfluidic channel, the coherent time stretch imaging system in this work measures both the quantitative optical phase shift and the loss of individual cells as a high-speed imaging flow cytometer, capturing 36 million images per second at flow rates as high as 10 meters per second, reaching up to 100,000 cells per second throughput.
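Those last figures can be cross-checked with one line of arithmetic (values taken from the sentence above):

```python
# At a 36-MHz line-scan rate and 10 m/s flow, how far does a cell move
# between consecutive rainbow flashes?
line_rate = 36e6     # line scans per second (one per laser pulse)
flow_speed = 10.0    # meters per second

step_um = flow_speed / line_rate * 1e6   # micrometers per line scan
print(f"~{step_um:.2f} um between line scans")
# ~0.28 um: finer than a cell, so each cell is oversampled and its motion
# is effectively frozen (blur-free imaging).
```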

Figure 1: Time stretch quantitative phase imaging (TS-QPI) and analytics system; A mode-locked laser followed by a nonlinear fiber, an erbium doped fiber amplifier (EDFA), and a wavelength-division multiplexing (WDM) filter generate and shape a train of broadband optical pulses. http://www.nature.com/article-assets/npg/srep/2016/160315/srep21471/images_hires/m685/srep21471-f1.jpg

 

Box 1: The pulse train is spatially dispersed into a train of rainbow flashes illuminating the target as line scans. The spatial features of the target are encoded into the spectrum of the broadband optical pulses, each representing a one-dimensional frame. The ultra-short optical pulse illumination freezes the motion of cells during high-speed flow to achieve blur-free imaging with a throughput of 100,000 cells/s. The phase shift and intensity loss at each location within the field of view are embedded into the spectral interference patterns using a Michelson interferometer. Box 2: The interferogram pulses are then stretched in time so that spatial information can be mapped into time through time-stretch dispersive Fourier transform (TS-DFT), and then captured by a single-pixel photodetector and an analog-to-digital converter (ADC). The loss of sensitivity at high shutter speed is compensated by stimulated Raman amplification during time stretch. Box 3: (a) Pulse synchronization; the time-domain signal carrying serially captured rainbow pulses is transformed into a series of one-dimensional spatial maps, which are used for forming line images. (b) The biomass density of a cell leads to a spatially varying optical phase shift. When a rainbow flash passes through the cells, the changes in refractive index at different locations cause phase walk-off at the interrogation wavelengths. Hilbert transformation and phase unwrapping are used to extract the spatial phase shift. (c) Decoding the phase shift in each pulse at each wavelength and remapping it into a pixel reveals the protein concentration distribution within cells. The optical loss induced by the cells, embedded in the pulse intensity variations, is obtained from the amplitude of the slowly varying envelope of the spectral interferograms. Thus, quantitative optical phase shift and intensity loss images are captured simultaneously. Both images are calibrated based on the regions where the cells are absent. Cell features describing morphology, granularity, biomass, etc. are extracted from the images. (d) These biophysical features are used in a machine learning algorithm for high-accuracy label-free classification of the cells.
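Step (b) is the most algorithmic part of the pipeline and is easy to sketch. The toy example below applies scipy's Hilbert transform and phase unwrapping to a synthetic interferogram (the 20-GHz fringe carrier and the Gaussian phase bump are illustrative assumptions, not the paper's parameters):

```python
# Extract a cell-induced phase shift from a (simulated) spectral interferogram.
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 1e-9, 4096)                  # one stretched pulse, ~1 ns
carrier = 2 * np.pi * 20e9 * t                  # interference fringe carrier (assumed)
cell_phase = 1.5 * np.exp(-((t - 5e-10) ** 2) / (2 * (5e-11) ** 2))  # phase bump

interferogram = 1 + 0.8 * np.cos(carrier + cell_phase)

analytic = hilbert(interferogram - interferogram.mean())  # analytic signal
phase = np.unwrap(np.angle(analytic))           # unwrapped total phase
phase_shift = phase - carrier                   # remove the known fringe carrier
# 'phase_shift' now traces the cell-induced optical phase across the line
# scan, which maps to biomass (protein concentration) in the final image.
```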

On another note, surface markers used to label cells, such as EpCAM21, are unavailable in some applications; for example, melanoma or pancreatic circulating tumor cells (CTCs) as well as some cancer stem cells are EpCAM-negative and will escape EpCAM-based detection platforms22. Furthermore, large-population cell sorting opens the doors to downstream operations, where the negative impacts of labels on cellular behavior and viability are often unacceptable23. Cell labels may cause activating/inhibitory signal transduction, altering the behavior of the desired cellular subtypes, potentially leading to errors in downstream analysis, such as DNA sequencing and subpopulation regrowth. In this way, quantitative phase imaging (QPI) methods24,25,26,27 that categorize unlabeled living cells with high accuracy are needed. Coherent time stretch imaging is a method that enables quantitative phase imaging at ultrahigh throughput for non-invasive label-free screening of large number of cells.

In this work, information from the quantitative optical loss and phase images is fused into expert-designed features, leading to record label-free classification accuracy when combined with deep learning. Image mining techniques are applied, for the first time, to time stretch quantitative phase imaging to measure biophysical attributes including protein concentration, optical loss, and morphological features of single cells at an ultrahigh flow rate and in a label-free fashion. These attributes differ widely28,29,30,31 among cells, and their variations reflect important information about genotypes and physiological stimuli32. The multiplexed biophysical features thus lead to an information-rich, hyper-dimensional representation of the cells for label-free classification with high statistical precision.

We further improved the accuracy, repeatability, and the balance between sensitivity and specificity of our label-free cell classification with a novel machine learning pipeline, which harnesses the advantages of multivariate supervised learning as well as unique training by evolutionary global optimization of receiver operating characteristics (ROC). To demonstrate the sensitivity, specificity, and accuracy of multi-feature label-free flow cytometry using our technique, we classified (1) OT-II hybridoma T-lymphocytes and SW-480 colon cancer epithelial cells, and (2) Chlamydomonas reinhardtii algal cells (herein referred to as Chlamydomonas) based on their lipid content, which is related to the yield in biofuel production. Our preliminary results show that, compared to classification by individual biophysical parameters, our label-free hyperdimensional technique improves the detection accuracy from 77.8% to 95.5%; in other words, it reduces the classification error by about five times (from 22.2% to 4.5%).

 

Feature Extraction

The decomposed components of sequential line scans form pairs of spatial maps, namely, optical phase and loss images as shown in Fig. 2 (see Section Methods: Image Reconstruction). These images are used to obtain biophysical fingerprints of the cells8,36. With domain expertise, raw images are fused and transformed into a suitable set of biophysical features, listed in Table 1, which the deep learning model further converts into learned features for improved classification.

The new technique combines two components that were invented at UCLA:

A “photonic time stretch” microscope, which is capable of quickly imaging cells in blood samples. Invented by Bahram Jalali, professor and Northrop-Grumman Optoelectronics Chair in electrical engineering, it works by taking pictures of flowing blood cells using laser bursts (similar to how a camera uses a flash). Each flash lasts only nanoseconds (billionths of a second) to avoid damage to cells, but that normally means the images are both too weak to be detected and too fast to be digitized by normal instrumentation. The new microscope overcomes those challenges by using specially designed optics that amplify and boost the clarity of the images, and simultaneously slow them down enough to be detected and digitized at a rate of 36 million images per second.

A deep learning computer program, which identifies cancer cells with more than 95 percent accuracy. Deep learning is a form of artificial intelligence that uses complex algorithms to extract patterns and knowledge from rich multidimensional datasets, with the goal of achieving accurate decision making.

The study was published in the open-access journal Nature Scientific Reports. The researchers write in the paper that the system could lead to data-driven diagnoses based on cells’ physical characteristics, which could allow quicker and earlier diagnoses of cancer, for example, and a better understanding of tumor-specific gene expression in cells, which could facilitate new treatments for disease.

The research was supported by NantWorks, LLC.

 

http://www.nature.com/article-assets/npg/srep/2016/160315/srep21471/images_hires/m685/srep21471-f2.jpg

The optical loss images of the cells are affected by the attenuation of multiplexed wavelength components passing through the cells. The attenuation itself is governed by the absorption of the light in cells as well as the scattering from the surface of the cells and from the internal cell organelles. The optical loss image is derived from the low frequency component of the pulse interferograms. The optical phase image is extracted from the analytic form of the high frequency component of the pulse interferograms using Hilbert transformation, followed by a phase unwrapping algorithm. Details of these derivations can be found in Section Methods. Also, supplementary Videos 1 and 2 show measurements of cell-induced optical path length difference by TS-QPI at four different points along the rainbow for OT-II and SW-480, respectively.
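As a rough illustration of this decoding step (and of the Hilbert-transform procedure described in Box 3 of Figure 1), the Python sketch below separates a simulated single-pixel interferogram into a slowly varying envelope (the loss line) and an unwrapped carrier phase (the phase line). It is a minimal sketch, not the authors' code: the sampling rate, carrier frequency, filter orders, and cutoffs are all assumptions chosen only to make the example self-contained.

```python
# Minimal sketch: decode one time-stretched interferogram line.
# All signal parameters below are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def decode_interferogram(signal, fs, carrier_hz):
    """Split an interferogram into a loss line (envelope) and a phase line."""
    # Low-pass of the rectified signal approximates the slowly varying
    # envelope, i.e. the intensity-loss profile along the rainbow.
    b, a = butter(4, 0.2 * carrier_hz, fs=fs, btype="low")
    loss_line = filtfilt(b, a, np.abs(signal))

    # High-pass keeps the interference fringes around the carrier.
    b, a = butter(4, 0.5 * carrier_hz, fs=fs, btype="high")
    fringes = filtfilt(b, a, signal)

    # Analytic signal via Hilbert transform; unwrapping the angle and
    # removing the linear carrier ramp leaves the cell-induced phase shift.
    phase = np.unwrap(np.angle(hilbert(fringes)))
    t = np.arange(signal.size) / fs
    phase_line = phase - 2 * np.pi * carrier_hz * t
    return loss_line, phase_line

# Toy interferogram: 10 MHz carrier at 1 GS/s with a Gaussian "cell" that
# both attenuates the envelope and adds ~1.5 rad of phase.
fs, fc = 1e9, 10e6
t = np.arange(4096) / fs
bump = np.exp(-((t - t.mean()) / 4e-7) ** 2)
sig = (1 - 0.3 * bump) * np.cos(2 * np.pi * fc * t + 1.5 * bump)
loss, phase = decode_interferogram(sig, fs, fc)
```

In the real system, the recovered phase and loss lines would then be calibrated against cell-free regions and stacked into two-dimensional images, as the caption above describes.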

Table 1: List of extracted features.

Feature Name    Description         Category

 

Figure 3: Biophysical features formed by image fusion.

(a) Pairwise correlation matrix visualized as a heat map. The map depicts the correlation between all 16 major features extracted from the quantitative images. Diagonal elements of the matrix represent the correlation of each parameter with itself, i.e., the autocorrelation. The subsets in box 1, box 2, and box 3 show high correlation because they are mainly related to the morphological, optical phase, and optical loss feature categories, respectively. (b) Ranking of biophysical features based on their AUCs in single-feature classification. Blue bars show the performance of the morphological parameters, which include diameter along the interrogation rainbow, diameter along the flow direction, tight cell area, loose cell area, perimeter, circularity, major axis length, orientation, and median radius. As expected, morphology contains the most information, but other biophysical features can contribute to improved performance of label-free cell classification. Orange bars show optical phase shift features, i.e., optical path length differences and refractive index difference. Green bars show optical loss features representing scattering and absorption by the cell. The best-performing feature in each of these three categories is marked in red.
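For readers who want to compute quantities of this kind themselves, the sketch below builds a pairwise correlation matrix and a single-feature AUC ranking from a synthetic feature table. The column names are hypothetical stand-ins for Table 1 entries, not the paper's actual data.

```python
# Illustrative feature-table analysis: correlation matrix + per-feature AUC.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "diameter_rainbow": rng.normal(12, 2, n),   # hypothetical morphology
    "diameter_flow": rng.normal(11, 2, n),
    "circularity": rng.uniform(0.6, 1.0, n),
    "opd_mean": rng.normal(0.8, 0.2, n),        # hypothetical phase feature
    "loss_mean": rng.normal(0.10, 0.03, n),     # hypothetical loss feature
})
label = (df["diameter_rainbow"] + rng.normal(0, 1, n) > 12).astype(int)

# (a) Pairwise Pearson correlations: the numbers behind the heat map.
corr = df.corr()

# (b) Rank features by single-feature AUC. Scores below 0.5 are flipped so
# anti-correlated features still get credit for their discriminating power.
aucs = {c: max(roc_auc_score(label, df[c]), 1 - roc_auc_score(label, df[c]))
        for c in df.columns}
for name, auc in sorted(aucs.items(), key=lambda kv: -kv[1]):
    print(f"{name:18s} AUC = {auc:.3f}")
```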

Figure 4: Machine learning pipeline. Information from the quantitative optical phase and loss images is fused to extract multivariate biophysical features of each cell, which are fed into a fully-connected neural network.

The neural network maps input features by a chain of weighted sum and nonlinear activation functions into learned feature space, convenient for classification. This deep neural network is globally trained via area under the curve (AUC) of the receiver operating characteristics (ROC). Each ROC curve corresponds to a set of weights for connections to an output node, generated by scanning the weight of the bias node. The training process maximizes AUC, pushing the ROC curve toward the upper left corner, which means improved sensitivity and specificity in classification.
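A bare-bones version of this AUC-driven global training is sketched below: a tiny one-hidden-layer network whose flattened weights are searched by an evolutionary optimizer so as to maximize ROC AUC directly. The architecture, bounds, and optimizer settings are placeholder assumptions, not the paper's configuration.

```python
# Sketch: train a tiny network by directly maximizing ROC AUC with an
# evolutionary global optimizer (differential evolution).
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 16))                 # 16 biophysical features/cell
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 300) > 0).astype(int)

H = 4                                          # hidden units (assumption)
n_params = 16 * H + H + H + 1                  # W1 + b1 + w2 + b2

def score(w, X):
    """Forward pass; the raw output is monotone in the class-1 score."""
    W1 = w[:16 * H].reshape(16, H)
    b1 = w[16 * H:16 * H + H]
    w2 = w[16 * H + H:16 * H + 2 * H]
    b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ w2 + b2

def neg_auc(w):
    # AUC is invariant to monotone rescaling, so raw scores suffice.
    return -roc_auc_score(y, score(w, X))

res = differential_evolution(neg_auc, bounds=[(-3, 3)] * n_params,
                             maxiter=50, seed=1, polish=False)
print("training AUC:", -res.fun)
```

Because AUC is a piecewise-constant function of the weights, a gradient-free global search of this kind is a natural fit; maximizing it is what "pushes the ROC curve toward the upper left corner" in the description above.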

How to cite this article: Chen, C. L. et al. Deep Learning in Label-free Cell Classification. Sci. Rep. 6, 21471; doi:10.1038/srep21471

 

Computer Algorithm Helps Characterize Cancerous Genomic Variations

http://www.genengnews.com/gen-news-highlights/computer-algorithm-helps-characterize-cancerous-genomic-variations/81252626/

To better characterize the functional context of genomic variations in cancer, researchers developed a new computer algorithm called REVEALER. [UC San Diego Health]

Scientists at the University of California San Diego School of Medicine and the Broad Institute say they have developed a new computer algorithm—REVEALER—to better characterize the functional context of genomic variations in cancer. The tool, described in a paper (“Characterizing Genomic Alterations in Cancer by Complementary Functional Associations”) published in Nature Biotechnology, is designed to help researchers identify groups of genetic variations that together associate with a particular way cancer cells get activated, or how they respond to certain treatments.

REVEALER is available for free to the global scientific community via the bioinformatics software portal GenePattern.org.

“This computational analysis method effectively uncovers the functional context of genomic alterations, such as gene mutations, amplifications, or deletions, that drive tumor formation,” said senior author Pablo Tamayo, Ph.D., professor and co-director of the UC San Diego Moores Cancer Center Genomics and Computational Biology Shared Resource.

Dr. Tamayo and team tested REVEALER using The Cancer Genome Atlas (TCGA), the NIH’s database of genomic information from more than 500 human tumors representing many cancer types. The tool uncovered gene alterations associated with the activation of several cellular processes known to play a role in tumor development and response to certain drugs. Some of these gene mutations were already known, but others were new.

For example, the researchers discovered new activating genomic abnormalities for beta-catenin, a cancer-promoting protein, and for the oxidative stress response that some cancers hijack to increase their viability.

REVEALER requires as input high-quality genomic data and a significant number of cancer samples, which can be a challenge, according to Dr. Tamayo. But compared to other methods, REVEALER is more sensitive at detecting similarities between different types of genomic features and less dependent on simplifying statistical assumptions, he added.

“This study demonstrates the potential of combining functional profiling of cells with the characterizations of cancer genomes via next-generation sequencing,” said co-senior author Jill P. Mesirov, Ph.D., professor and associate vice chancellor for computational health sciences at UC San Diego School of Medicine.

 

Characterizing genomic alterations in cancer by complementary functional associations

Jong Wook Kim, Olga B Botvinnik, Omar Abudayyeh, Chet Birger, et al.

Nature Biotechnology (2016)    doi:10.1038/nbt.3527

Systematic efforts to sequence the cancer genome have identified large numbers of mutations and copy number alterations in human cancers. However, elucidating the functional consequences of these variants, and their interactions to drive or maintain oncogenic states, remains a challenge in cancer research. We developed REVEALER, a computational method that identifies combinations of mutually exclusive genomic alterations correlated with functional phenotypes, such as the activation or gene dependency of oncogenic pathways or sensitivity to a drug treatment. We used REVEALER to uncover complementary genomic alterations associated with the transcriptional activation of β-catenin and NRF2, MEK-inhibitor sensitivity, and KRAS dependency. REVEALER successfully identified both known and new associations, demonstrating the power of combining functional profiles with extensive characterization of genomic alterations in cancer genomes.
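The search strategy can be caricatured in a few lines: greedily add the alteration whose logical OR with the already-selected set best tracks the target profile, so that each new pick tends to explain samples the previous picks missed (hence the mutual exclusivity). The sketch below substitutes plain Pearson correlation for REVEALER's conditional information coefficient, so it approximates the idea rather than reproducing the published algorithm.

```python
# Caricature of a REVEALER-style greedy search over binary alterations.
import numpy as np

def greedy_complementary_search(target, alterations, names, k=3):
    """Greedily pick alterations whose logical OR best tracks `target`."""
    selected = []
    summary = np.zeros(target.shape, dtype=bool)
    for _ in range(k):
        best = None
        for j, name in enumerate(names):
            if name in selected:
                continue
            candidate = summary | alterations[:, j]
            r = np.corrcoef(candidate.astype(float), target)[0, 1]
            if not np.isnan(r) and (best is None or r > best[1]):
                best = (name, r, candidate)
        if best is None:
            break
        selected.append(best[0])
        summary = best[2]
        print(f"picked {best[0]}: r = {best[1]:.2f}")
    return selected

# Toy data: two mutually exclusive alterations jointly drive a high target.
rng = np.random.default_rng(2)
n = 200
alt = rng.random((n, 5)) < 0.15                 # binary alteration matrix
target = (alt[:, 0] | alt[:, 1]).astype(float) + rng.normal(0, 0.3, n)
greedy_complementary_search(target, alt, [f"alt_{j}" for j in range(5)])
```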

 

Figure 2: REVEALER results for transcriptional activation of β-catenin in cancer.

(a) This heatmap illustrates the use of the REVEALER approach to find complementary genomic alterations that match the transcriptional activation of β-catenin in cancer. The target profile is a TCF4 reporter that provides an estimate of…

 

An imaging-based platform for high-content, quantitative evaluation of therapeutic response in 3D tumour models

Jonathan P. Celli, Imran Rizvi, Adam R. Blanden, Iqbal Massodi, Michael D. Glidden, Brian W. Pogue & Tayyaba Hasan

Scientific Reports 4: 3751 (2014)    doi:10.1038/srep03751

While it is increasingly recognized that three-dimensional (3D) cell culture models recapitulate drug responses of human cancers with more fidelity than monolayer cultures, a lack of quantitative analysis methods limits their implementation for reliable and routine assessment of emerging therapies. Here, we introduce an approach based on computational analysis of fluorescence image data to provide high-content readouts of dose-dependent cytotoxicity, growth inhibition, treatment-induced architectural changes and size-dependent response in 3D tumour models. We demonstrate this approach in adherent 3D ovarian and pancreatic multiwell extracellular matrix tumour overlays subjected to a panel of clinically relevant cytotoxic modalities and appropriately designed controls for reliable quantification of fluorescence signal. This streamlined methodology reads out the high density of information embedded in 3D culture systems, while maintaining a level of speed and efficiency traditionally achieved with global colorimetric reporters in order to facilitate broader implementation of 3D tumour models in therapeutic screening.

The attrition rates for preclinical development of oncology therapeutics are particularly dismal due to a complex set of factors which includes 1) the failure of pre-clinical models to recapitulate determinants of in vivo treatment response, and 2) the limited ability of available assays to extract treatment-specific data integral to the complexities of therapeutic responses1,2,3. Three-dimensional (3D) tumour models have been shown to restore crucial stromal interactions which are missing in the more commonly used 2D cell culture and that influence tumour organization and architecture4,5,6,7,8, as well as therapeutic response9,10, multicellular resistance (MCR)11,12, drug penetration13,14, hypoxia15,16, and anti-apoptotic signaling17. However, such sophisticated models can only have an impact on therapeutic guidance if they are accompanied by robust quantitative assays, not only for cell viability but also for providing mechanistic insights related to the outcomes. While numerous assays for drug discovery exist18, they are generally not developed for use in 3D systems and are often inherently unsuitable. For example, colorimetric conversion products have been noted to bind to extracellular matrix (ECM)19 and traditional colorimetric cytotoxicity assays reduce treatment response to a single number reflecting a biochemical event that has been equated to cell viability (e.g. tetrazolium salt conversion20). Such approaches fail to provide insight into the spatial patterns of response within colonies, morphological or structural effects of drug response, or how overall culture viability may be obscuring the status of sub-populations that are resistant or partially responsive. Hence, the full benefit of implementing 3D tumour models in therapeutic development has yet to be realized for lack of analytical methods that describe the very aspects of treatment outcome that these systems restore.

Motivated by these factors, we introduce a new platform for quantitative in situ treatment assessment (qVISTA) in 3D tumour models based on computational analysis of information-dense biological image datasets (bioimage-informatics)21,22. This methodology provides software end-users with multiple levels of complexity in output content, from rapidly interpreted dose response relationships to higher-content quantitative insights into treatment-dependent architectural changes, spatial patterns of cytotoxicity within fields of multicellular structures, and statistical analysis of nodule-by-nodule size-dependent viability. The approach introduced here is cognizant of tradeoffs between optical resolution, data sampling (statistics), depth of field, and widespread usability (instrumentation requirements). Specifically, it is optimized for interpretation of fluorescent signals for disease-specific 3D tumour micronodules that are sufficiently small that thousands can be imaged simultaneously with little or no optical bias from widefield integration of signal along the optical axis of each object. At the core of our methodology is the premise that the copious numerical readouts gleaned from segmentation and interpretation of fluorescence signals in these image datasets can be converted into usable information to classify treatment effects comprehensively, without sacrificing the throughput of traditional screening approaches. It is hoped that this comprehensive treatment-assessment methodology will have significant impact in facilitating more sophisticated implementation of 3D cell culture models in preclinical screening, by providing a level of content and biological relevance impossible with existing assays in monolayer cell culture, in order to focus therapeutic targets and strategies before costly and tedious testing in animal models.

Using two different cell lines, and as depicted in Figure 1, we adopt an ECM overlay method pioneered originally for 3D breast cancer models23 and developed in our previous studies to model micrometastatic ovarian cancer19,24. This system leads to the formation of adherent multicellular 3D acini in approximately the same focal plane atop a laminin-rich ECM bed, implemented here in glass-bottom multiwell imaging plates for automated microscopy. The 3D nodules, resulting from restoration of ECM signaling5,8, are heterogeneous in size24, in contrast to other 3D spheroid methods, such as rotary or hanging drop cultures10, in which cells are driven to aggregate into uniformly sized spheroids due to the lack of an appropriate substrate to adhere to. Although the latter processes are also biologically relevant, it is the adherent tumour populations characteristic of advanced metastatic disease that are more likely to be managed with medical oncology, and these are the focus of therapeutic evaluation herein. The heterogeneity in 3D structures formed via ECM overlay is validated here by endoscopic imaging of in vivo tumours in orthotopic xenografts derived from the same cells (OVCAR-5).

 

Figure 1: A simplified schematic flow chart of imaging-based quantitative in situ treatment assessment (qVISTA) in 3D cell culture.

(This figure was prepared in Adobe Illustrator® software by MD Glidden, JP Celli and I Rizvi). A detailed breakdown of the image processing (Step 4) is provided in Supplemental Figure 1.

A critical component of the imaging-based strategy introduced here is the rational tradeoff of image-acquisition parameters for field of view, depth of field, and optical resolution, and the development of image processing routines for appropriate removal of background, scaling of fluorescence signals from more than one channel, and reliable segmentation of nodules. Obtaining depth-resolved 3D structures for each nodule at sub-micron lateral resolution with a laser-scanning confocal system would require ~40 hours to image a single plate with the coverage achieved in this study (approximately 100 fields per well with a 20× objective, times 1 minute per field for a coarse z-stack, times 24 wells = 2,400 minutes). Even if the resources were available to devote to such time-intensive image acquisition, not to mention the processing, the optical properties of the fluorophores would change during the required time frame, even with environmental controls to maintain culture viability during such extended imaging. The approach developed here, with a mind toward adaptation into high-throughput screening, provides a rational balance of speed, requiring less than 30 minutes per plate, and statistical rigour, providing images of thousands of nodules in this time, as required for the high-content analysis developed in this study. These parameters can be further optimized for specific scenarios. For example, we obtain the same number of images in a 96-well plate as for a 24-well plate by acquiring only a single field from each well, rather than 4 stitched fields. This quadruples the number of conditions assayed in a single run, at the expense of the number of nodules per condition, and therefore of the ability to obtain statistical data sets for size-dependent response, Dfrac, and other segmentation-dependent numerical readouts.
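A minimal sketch of the per-nodule readout stage is shown below: segment nodules on a background-subtracted viability channel, then report each nodule's area and live fraction from the two channels. The thresholding rule, size cutoff, and synthetic images are placeholder assumptions, not the paper's calibrated pipeline.

```python
# Sketch: per-nodule viability readout from a two-channel widefield image.
import numpy as np
from scipy import ndimage

def per_nodule_viability(live_img, dead_img, min_area=50):
    """Segment nodules on the live channel; return area and live fraction."""
    # Crude background subtraction using the median pixel value.
    live = np.clip(live_img - np.median(live_img), 0, None)
    dead = np.clip(dead_img - np.median(dead_img), 0, None)

    # Simple global threshold; a real pipeline would calibrate this.
    mask = live > live.mean() + 2 * live.std()
    labels, n = ndimage.label(mask)

    results = []
    for i in range(1, n + 1):
        region = labels == i
        area = int(region.sum())
        if area < min_area:          # discard debris below the size cutoff
            continue
        l, d = live[region].sum(), dead[region].sum()
        results.append({"area_px": area,
                        "live_fraction": float(l / (l + d + 1e-9))})
    return results                   # one record per nodule

# Synthetic demo: one Gaussian "nodule" with mixed live/dead signal.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
blob = np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / 200.0)
live_img = 100 * blob + rng.normal(5, 1, (128, 128))
dead_img = 20 * blob + rng.normal(5, 1, (128, 128))
print(per_nodule_viability(live_img, dead_img))
```

Collecting one record per nodule is what enables the nodule-by-nodule, size-dependent response statistics discussed above.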

 

We envision that the system for high-content interrogation of therapeutic response in 3D cell culture could have widespread impact in multiple arenas from basic research to large scale drug development campaigns. As such, the treatment assessment methodology presented here does not require extraordinary optical instrumentation or computational resources, making it widely accessible to any research laboratory with an inverted fluorescence microscope and modestly equipped personal computer. And although we have focused here on cancer models, the methodology is broadly applicable to quantitative evaluation of other tissue models in regenerative medicine and tissue engineering. While this analysis toolbox could have impact in facilitating the implementation of in vitro 3D models in preclinical treatment evaluation in smaller academic laboratories, it could also be adopted as part of the screening pipeline in large pharma settings. With the implementation of appropriate temperature controls to handle basement membranes in current robotic liquid handling systems, our analyses could be used in ultra high-throughput screening. In addition to removing non-efficacious potential candidate drugs earlier in the pipeline, this approach could also yield the additional economic advantage of minimizing the use of costly time-intensive animal models through better estimates of dose range, sequence and schedule for combination regimens.

 

Microscope Uses AI to Find Cancer Cells More Efficiently

Thu, 04/14/2016 – by Shaun Mason

http://www.mdtmag.com/news/2016/04/microscope-uses-ai-find-cancer-cells-more-efficiently

Scientists at the California NanoSystems Institute at UCLA have developed a new technique for identifying cancer cells in blood samples faster and more accurately than the current standard methods.

In one common approach to testing for cancer, doctors add biochemicals to blood samples. Those biochemicals attach biological “labels” to the cancer cells, and those labels enable instruments to detect and identify them. However, the biochemicals can damage the cells and render the samples unusable for future analyses.

There are other current techniques that don’t use labeling but can be inaccurate because they identify cancer cells based only on one physical characteristic.

The new technique images cells without destroying them and can identify 16 physical characteristics — including size, granularity and biomass — instead of just one. It combines two components that were invented at UCLA: a photonic time stretch microscope, which is capable of quickly imaging cells in blood samples, and a deep learning computer program that identifies cancer cells with over 95 percent accuracy.

Deep learning is a form of artificial intelligence that uses complex algorithms to extract meaning from data with the goal of achieving accurate decision making.

The study, which was published in the journal Nature Scientific Reports, was led by Bahram Jalali, professor and Northrop-Grumman Optoelectronics Chair in electrical engineering; Claire Lifan Chen, a UCLA doctoral student; and Ata Mahjoubfar, a UCLA postdoctoral fellow.

Photonic time stretch was invented by Jalali, and he holds a patent for the technology. The new microscope is just one of many possible applications; it works by taking pictures of flowing blood cells using laser bursts in the way that a camera uses a flash. This process happens so quickly — in nanoseconds, or billionths of a second — that the images would be too weak to be detected and too fast to be digitized by normal instrumentation.

The new microscope overcomes those challenges using specially designed optics that boost the clarity of the images and simultaneously slow them enough to be detected and digitized at a rate of 36 million images per second. It then uses deep learning to distinguish cancer cells from healthy white blood cells.

“Each frame is slowed down in time and optically amplified so it can be digitized,” Mahjoubfar said. “This lets us perform fast cell imaging that the artificial intelligence component can distinguish.”

Normally, taking pictures in such minuscule periods of time would require intense illumination, which could destroy live cells. The UCLA approach also eliminates that problem.

“The photonic time stretch technique allows us to identify rogue cells in a short time with low-level illumination,” Chen said.

The researchers write in the paper that the system could lead to data-driven diagnoses based on cells’ physical characteristics, which could allow quicker and earlier diagnoses of cancer, for example, and a better understanding of tumor-specific gene expression in cells, which could facilitate new treatments for disease. See also: http://www.nature.com/article-assets/npg/srep/2016/160315/srep21471/images_hires/m685/srep21471-f1.jpg

Chen, C. L. et al. Deep Learning in Label-free Cell Classification. Sci. Rep. 6, 21471; doi:10.1038/srep21471

 

 

Read Full Post »

Protein profiling in cancer and metabolic diseases

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Deep Protein Profiling Key

The company is encouraged by two recent reports that emphasise the importance of protein profiling to improve outcomes in cancer treatment.

http://www.technologynetworks.com/Proteomics/news.aspx?ID=190145

Proteome Sciences plc has been strongly encouraged by two recent reports that emphasise the importance of protein profiling in improving outcomes in cancer treatment. These highlight the growing need for more detailed, personal assessment of protein profiles to improve the management of cancer treatment.

In the first study, two groups from University College London and Cancer Research UK demonstrated that genetic mutations in cancer can lead to changes in the proteins on the cell surface1. These new sequences are seen as foreign by the body’s immune system and, with appropriate immunotherapy, the level of response in lung cancer was greatly enhanced.

However, many of the patients with these types of mutations unfortunately still did not respond, which highlights the need for deeper analysis of protein expression in tumours in order to better appreciate the mechanisms that contribute to treatment failure.

The second study, led by Professor Nigel Bundred of Manchester University, reported that two drugs acting on the same breast cancer target, an over-expressed protein called Her-2, were able to eradicate detectable tumours in around 10% of those treated in just 11 days, with 87% of those treated showing a proteomic change indicating that cells had stopped growing and/or cell death had increased2.

Whilst these results appear very promising, it is worth noting that the over-expressed Her-2 target is present in only about 20% of breast tumours, meaning this combination therapy was successful in clearing tumours in just 2% of the total breast cancer population.

Dr. Ian Pike, Chief Operating Officer of Proteome Sciences commented, “Both these recent studies should rightly be recognised as important steps forward towards better cancer treatment. However, in order to overcome the limitations of current drug therapy programs, a much deeper and more comprehensive analysis of the complex protein networks that regulate tumour growth and survival is required and will be essential to achieve a major advance in the battle to treat cancer.

“Our SysQuant® workflows provide that solution. As an example, in pancreatic cancer3 we have successfully mapped the complex network of regulatory processes and demonstrate the ability to devise personalised treatment combinations on an individual basis for each patient. A retrospective study with SysQuant® to predict response to the targeted drug Sorafenib in liver cancer is in process and we are planning further prospective trials to guide personalised treatment selection in liver cancer.

“We are already delivering systems-wide biology solutions through SysQuant® and TMTcalibrator™ programs to our clients that are generating novel biological data and results using more sensitive profiling that are helping them to better understand their drug development programs and to provide new biomarkers for tracking patient response in clinical trials.

“We are strongly positioned to deliver more comprehensive analysis of proteins and cellular pathways across other areas of disease and in particular to extend the use of SysQuant® with other leading cancer research groups in liver and other cancers.”

Proteome Sciences has also expanded its offering in personalised medicine through the use of its TMTcalibrator™ technology to uniquely identify protein biomarkers that reveal active cancer and other disease processes in body fluid samples. The importance of these ‘mechanistic’ biomarkers is that they are essential for monitoring that drugs are having their intended effect, and they can be used as early biomarkers of disease recurrence.

Using SysQuant® and TMTcalibrator™, Proteome Sciences can deliver more comprehensive analysis and provide unparalleled levels of sensitivity and breadth of coverage of the proteome, enabling faster, more efficient drug development and more accurate disease diagnosis.

 

Discovering ‘Outlier’ Enzymes

Researchers at TSRI and the Salk Institute have discovered ‘outlier’ enzymes that could offer new targets to treat type 2 diabetes and inflammatory disorders.

A team led by scientists at The Scripps Research Institute (TSRI) and the Salk Institute for Biological Studies has discovered two enzymes that appear to play a role in metabolism and inflammation—and might someday be targeted with drugs to treat type 2 diabetes and inflammatory disorders. The discovery is unusual because the enzymes do not bear a resemblance—in their structures or amino-acid sequences—to any known class of enzymes.

The team of scientists nevertheless identified them as “outlier” members of the serine/threonine hydrolase class, using newer techniques that detect biochemical activity. “A huge fraction of the human ‘proteome’ remains uncharacterized, and this paper shows how chemical approaches can be used to uncover proteins of a given functionality that have eluded classification based on sequence or predicted structure,” said co-senior author Benjamin F. Cravatt, chair of TSRI’s Department of Chemical Physiology.

“In this study, we found two genes that control levels of lipids with anti-diabetic and anti-inflammatory activity, suggesting exciting targets for diabetes and inflammatory diseases,” said co-senior author Alan Saghatelian, who holds the Dr. Frederik Paulsen Chair at the Salk Institute. The study, which appeared as a Nature Chemical Biology Advance Online Publication on March 28, 2016, began as an effort in the Cravatt laboratory to discover and characterize new serine/threonine hydrolases using fluorophosphonate (FP) probes—molecules that selectively bind and, in effect, label the active sites of these enzymes.

Pulling FP-binding proteins out of the entire proteome of test cells and identifying them using mass spectrometry techniques, the team matched nearly all to known hydrolases. The major outlier was a protein called androgen-induced gene 1 protein (AIG1). The only other one was a distant cousin in terms of sequence, a protein called ADTRP. “Neither of these proteins had been characterized as an enzyme; in fact, there had been little functional characterization of them at all,” said William H. Parsons, a research associate in the Cravatt laboratory who was co-first author of the study.

Experiments on AIG1 and ADTRP revealed that they do their enzymatic work in a unique way. “It looks like they have an active site that is novel—it had never been described in the literature,” said Parsons. Initial tests with panels of different enzyme inhibitors showed that AIG1 and ADTRP are moderately inhibited by inhibitors of lipases—enzymes that break down fats and other lipids. But on what specific lipids do these newly discovered outlier enzymes normally work?

At the Salk Institute, the Saghatelian laboratory was investigating a class of lipids it had discovered in 2014. Known as fatty acid esters of hydroxy fatty acids (FAHFAs), these molecules showed strong therapeutic potential. Saghatelian and his colleagues had found that boosting the levels of one key FAHFA lipid normalizes glucose levels in diabetic mice and also reduces inflammation.

“[Ben Cravatt’s] lab was screening panels of lipids to find the ones that their new enzymes work on,” said Saghatelian, who is a former research associate in the Cravatt laboratory. “We suggested they throw FAHFAs in there—and these turned out to be very good substrates.” The Cravatt laboratory soon developed powerful inhibitors of the newly discovered enzymes, and the two labs began working together, using the inhibitors and genetic techniques to explore the enzymes’ functions in vitro and in cultured cells.

Co-first author Matthew J. Kolar, an MD-PhD student, performed most of the experiments in the Saghatelian lab. The team concluded that AIG1 and ADTRP, at least in the cell types tested, appear to work mainly to break down FAHFAs and not any other major class of lipid. In principle, inhibitors of AIG1 and ADTRP could be developed into FAHFA-boosting therapies.

“Our prediction,” said Saghatelian, “is that if FAHFAs do what we think they’re doing, then using an enzyme inhibitor to block their degradation would make FAHFA levels go up and should thus reduce inflammation as well as improve glucose levels and insulin sensitivity.” The two labs are now collaborating on further studies of the new enzymes—and the potential benefits of inhibiting them—in mouse models of diabetes, inflammation and autoimmune disease.

“One of the neat things this study shows,” said Cravatt, “is that even for enzyme classes as well studied as the hydrolases, there may still be hidden members that, presumably by convergent evolution, arrived at that basic enzyme mechanism despite sharing no sequence or structural homology.”

Other co-authors of the study, “AIG1 and ADTRP are atypical integral membrane hydrolases that degrade bioactive FAHFAs,” were Siddhesh S. Kamat, Armand B. Cognetta III, Jonathan J. Hulce and Enrique Saez, of TSRI; and co-senior author Barbara B. Kahn of Beth Israel Deaconess Medical Center and Harvard Medical School.

 

New Weapon Against Breast Cancer

Molecular marker in healthy tissue can predict a woman’s risk of getting the disease, research says.

Harvard Stem Cell Institute (HSCI) researchers at Dana-Farber Cancer Institute (DFCI) and collaborators at Brigham and Women’s Hospital (BWH) have identified a molecular marker in normal breast tissue that can predict a woman’s risk for developing breast cancer, the leading cause of death in women with cancer worldwide.

The work, led by HSCI principal faculty member Kornelia Polyak and Rulla Tamimi of BWH, was published in an early online release and in the April 1 issue of Cancer Research.

The study builds on Polyak’s earlier research finding that women already identified as having a high risk of developing cancer — namely those with a mutation called BRCA1 or BRCA2 — or women who did not give birth before their 30s had a higher number of mammary gland progenitor cells.

In the latest study, Polyak, Tamimi, and their colleagues examined biopsies, some taken as many as four decades ago, from 302 participants in the Nurses’ Health Study and the Nurses’ Health Study II who had been diagnosed with benign breast disease. The researchers compared tissue from the 69 women who later developed cancer to the tissue from the 233 women who did not. They found that women were five times as likely to develop cancer if they had a higher percentage of Ki67, a molecular marker that identifies proliferating cells, in the cells that line the mammary ducts and milk-producing lobules. These cells, called the mammary epithelium, undergo drastic changes throughout a woman’s life, and the majority of breast cancers originate in these tissues.

Doctors already test breast tumors for Ki67 levels, which can inform decisions about treatment, but this is the first time scientists have been able to link Ki67 to precancerous tissue and use it as a predictive tool.

“Instead of only telling women that they don’t have cancer, we could test the biopsies and tell women if they were at high risk or low risk for developing breast cancer in the future,” said Polyak, a breast cancer researcher at Dana-Farber and co-senior author of the paper.

“Currently, we are not able to do a very good job at distinguishing women at high and low risk of breast cancer,” added co-senior author Tamimi, an associate professor at the Harvard T.H. Chan School of Public Health and Harvard Medical School. “By identifying women at high risk of breast cancer, we can better develop individualized screening and also target risk reducing strategies.”

To date, mammograms are the best tool for early detection, but there are risks associated with screening. False positive and negative results and over-diagnosis could cause psychological distress, delay treatment, or lead to overtreatment, according to the National Cancer Institute (NCI).

Mammography machines also use low doses of radiation. While a single mammogram is unlikely to cause harm, repeated screening can potentially cause cancer, though the NCI writes that the benefits “nearly always outweigh the risks.”

“If we can minimize unnecessary radiation for women at low risk, that would be good,” said Tamimi.

Screening for Ki67 levels would “be easy to apply in the current setting,” said Polyak, though the researchers first want to reproduce the results in an independent cohort of women.

 

AIG1 and ADTRP are atypical integral membrane hydrolases that degrade bioactive FAHFAs

William H Parsons, Matthew J Kolar, …, Barbara B Kahn, Alan Saghatelian & Benjamin F Cravatt

Nature Chemical Biology, 28 March 2016    doi:10.1038/nchembio.2051

Enzyme classes may contain outlier members that share mechanistic, but not sequence or structural, relatedness with more common representatives. The functional annotation of such exceptional proteins can be challenging. Here, we use activity-based profiling to discover that the poorly characterized multipass transmembrane proteins AIG1 and ADTRP are atypical hydrolytic enzymes that depend on conserved threonine and histidine residues for catalysis. Both AIG1 and ADTRP hydrolyze bioactive fatty acid esters of hydroxy fatty acids (FAHFAs) but not other major classes of lipids. We identify multiple cell-active, covalent inhibitors of AIG1 and show that these agents block FAHFA hydrolysis in mammalian cells. These results indicate that AIG1 and ADTRP are founding members of an evolutionarily conserved class of transmembrane threonine hydrolases involved in bioactive lipid metabolism. More generally, our findings demonstrate how chemical proteomics can excavate potential cases of convergent or parallel protein evolution that defy conventional sequence- and structure-based predictions.

Figure 1: Discovery and characterization of AIG1 and ADTRP as FP-reactive proteins in the human proteome.

 

http://www.nature.com/nchembio/journal/vaop/ncurrent/carousel/nchembio.2051-F1.jpg

(a) Competitive ABPP-SILAC analysis to identify FP-alkyne-inhibited proteins, in which protein enrichment and inhibition were measured in proteomic lysates from SKOV3 cells treated with FP-alkyne (20 μM, 1 h) or DMSO using the FP-biotin…

 

  1. Willems, L.I., Overkleeft, H.S. & van Kasteren, S.I. Current developments in activity-based protein profiling. Bioconjug. Chem. 25, 1181–1191 (2014).
  2. Niphakis, M.J. & Cravatt, B.F. Enzyme inhibitor discovery by activity-based protein profiling. Annu. Rev. Biochem. 83, 341–377 (2014).
  3. Berger, A.B., Vitorino, P.M. & Bogyo, M. Activity-based protein profiling: applications to biomarker discovery, in vivo imaging and drug discovery. Am. J. Pharmacogenomics 4, 371–381 (2004).
  4. Liu, Y., Patricelli, M.P. & Cravatt, B.F. Activity-based protein profiling: the serine hydrolases. Proc. Natl. Acad. Sci. USA 96, 14694–14699 (1999).
  5. Simon, G.M. & Cravatt, B.F. Activity-based proteomics of enzyme superfamilies: serine hydrolases as a case study. J. Biol. Chem. 285, 11051–11055 (2010).
  6. Bachovchin, D.A. et al. Superfamily-wide portrait of serine hydrolase inhibition achieved by library-versus-library screening. Proc. Natl. Acad. Sci. USA 107, 20941–20946 (2010).
  7. Jessani, N. et al. A streamlined platform for high-content functional proteomics of primary human specimens. Nat. Methods 2, 691–697 (2005).
  8. Higa, H.H., Diaz, S. & Varki, A. Biochemical and genetic evidence for distinct membrane-bound and cytosolic sialic acid O-acetyl-esterases: serine-active-site enzymes. Biochem. Biophys. Res. Commun. 144, 1099–1108 (1987).

Academic cross-fertilization by public screening yields a remarkable class of protein phosphatase methylesterase-1 inhibitors

Proc Natl Acad Sci U S A. 2011 Apr 26; 108(17): 6811–6816.    doi: 10.1073/pnas.1015248108
National Institutes of Health (NIH)-sponsored screening centers provide academic researchers with a special opportunity to pursue small-molecule probes for protein targets that are outside the current interest of, or beyond the standard technologies employed by, the pharmaceutical industry. Here, we describe the outcome of an inhibitor screen for one such target, the enzyme protein phosphatase methylesterase-1 (PME-1), which regulates the methylesterification state of protein phosphatase 2A (PP2A) and is implicated in cancer and neurodegeneration. Inhibitors of PME-1 have not yet been described, which we attribute, at least in part, to a dearth of substrate assays compatible with high-throughput screening. We show that PME-1 is assayable by fluorescence polarization-activity-based protein profiling (fluopol-ABPP) and use this platform to screen the 300,000+ member NIH small-molecule library. This screen identified an unusual class of compounds, the aza-β-lactams (ABLs), as potent (IC50 values of approximately 10 nM), covalent PME-1 inhibitors. Interestingly, ABLs did not derive from a commercial vendor but rather an academic contribution to the public library. We show using competitive-ABPP that ABLs are exquisitely selective for PME-1 in living cells and mice, where enzyme inactivation leads to substantial reductions in demethylated PP2A. In summary, we have combined advanced synthetic and chemoproteomic methods to discover a class of ABL inhibitors that can be used to selectively perturb PME-1 activity in diverse biological systems. More generally, these results illustrate how public screening centers can serve as hubs to create spontaneous collaborative opportunities between synthetic chemistry and chemical biology labs interested in creating first-in-class pharmacological probes for challenging protein targets.
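For context on how a potency number like the ~10 nM IC50 above is typically derived, the sketch below fits a standard four-parameter logistic (Hill) curve to a concentration-response series. The data points are synthetic placeholders, not measurements from the study.

```python
# Sketch: four-parameter logistic (Hill) fit to estimate an IC50.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Fraction of remaining enzyme activity at inhibitor concentration conc."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([1e-10, 3e-10, 1e-9, 3e-9, 1e-8, 3e-8, 1e-7, 1e-6])  # molar
activity = np.array([0.98, 0.95, 0.85, 0.62, 0.45, 0.22, 0.08, 0.02])

params, _ = curve_fit(four_pl, conc, activity,
                      p0=[0.0, 1.0, 1e-8, 1.0], maxfev=10000)
print(f"fitted IC50 = {params[2]:.2e} M")   # expect on the order of 1e-8 M
```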

Protein phosphorylation is a pervasive and dynamic posttranslational protein modification in eukaryotic cells. In mammals, more than 500 protein kinases catalyze the phosphorylation of serine, threonine, and tyrosine residues on proteins (1). A much more limited number of phosphatases are responsible for reversing these phosphorylation events (2). For instance, protein phosphatase 2A (PP2A) and PP1 are thought to be responsible together for > 90% of the total serine/threonine phosphatase activity in mammalian cells (3). Specificity is imparted on PP2A activity by multiple mechanisms, including dynamic interactions between the catalytic subunit (C) and different protein-binding partners (B subunits), as well as a variety of posttranslational chemical modifications (2, 4). Within the latter category is an unusual methylesterification event found at the C terminus of the catalytic subunit of PP2A that is introduced and removed by a specific methyltransferase (leucine carboxyl methyltransferase-1 or LCMT1) (5, 6) and methylesterase (protein phosphatase methylesterase-1 or PME-1) (7), respectively (Fig. 1A). PP2A carboxymethylation (hereafter referred to as “methylation”) has been proposed to regulate PP2A activity, at least in part, by modulating the binding interaction of the C subunit with various regulatory B subunits (8–10). A predicted outcome of these shifts in subunit association is the targeting of PP2A to different protein substrates in cells. PME-1 has also been hypothesized to stabilize inactive forms of nuclear PP2A (11), and recent structural studies have shed light on the physical interactions between PME-1 and the PP2A holoenzyme (12).

There were several keys to the success of our probe development effort. First, screening for inhibitors of PME-1 benefited from the fluopol-ABPP technology, which circumvented the limited throughput of previously described substrate assays for this enzyme. Second, we were fortunate that the NIH compound library contained several members of the ABL class of small molecules. These chiral compounds, which represent an academic contribution to the NIH library, occupy an unusual portion of structural space that is poorly accessed by commercial compound collections. Although at the time of their original synthesis (23) it may not have been possible to predict whether these ABLs would show specific biological activity, their incorporation into the NIH library provided a forum for screening against many proteins and cellular targets, culminating in their identification as PME-1 inhibitors. We then used advanced chemoproteomic assays to confirm the remarkable selectivity displayed by ABLs for PME-1 across (and beyond) the serine hydrolase superfamily. That the mechanism for PME-1 inhibition involves acylation of the enzyme’s conserved serine nucleophile (Fig. 3) suggests that exploration of a more structurally diverse set of ABLs might uncover inhibitors for other serine hydrolases. In this way, the chemical information gained from a single high-throughput screen may be leveraged to initiate probe development programs for additional enzyme targets.

Projecting forward, this research provides an example of how public small-molecule screening centers can serve as a portal for spawning academic collaborations between chemical biology and synthetic chemistry labs. By continuing to develop versatile high-throughput screens and combining them with a small-molecule library of expanding structural diversity conferred by advanced synthetic methodologies, academic biologists and chemists are well-positioned to collaboratively deliver pharmacological probes for a wide range of proteins and pathways in cell biology.

 


 

New Group of Aging-Related Proteins Discovered

http://www.genengnews.com/gen-news-highlights/new-group-of-aging-related-proteins-discovered/81252599/

Scientists have discovered a group of six proteins that may help to divulge secrets of how we age, potentially unlocking new insights into diabetes, Alzheimer’s, cancer, and other aging-related diseases.

The proteins appear to play several roles in our bodies’ cells, from decreasing the amount of damaging free radicals and controlling the rate at which cells die to boosting metabolism and helping tissues throughout the body respond better to insulin. The naturally occurring amounts of each protein decrease with age, leading investigators to believe that they play an important role in the aging process and the onset of diseases linked to older age.

The research team, led by Pinchas Cohen, M.D., dean and professor of the University of Southern California Leonard Davis School of Gerontology, identified the proteins, traced their origin to the mitochondria, and characterized their roles in metabolism and cell survival. This latest finding builds upon prior research by Dr. Cohen and his team that uncovered two significant proteins, humanin and MOTS-c, hormones that appear to play important roles in metabolism and diseases of aging.

Unlike most other proteins, humanin and MOTS-c are encoded in mitochondria. Dr. Cohen’s team used computer analysis to see if the part of the mitochondrial genome that provides the code for humanin was coding for other proteins as well. The analysis uncovered the genes for six new proteins, which were dubbed small humanin-like peptides, or SHLPs, 1 through 6 (pronounced “schlep”).

After identifying the six SHLPs and successfully developing antibodies to test for several of them, the team examined both mouse tissues and human cells to determine their abundance in different organs as well as their functions. The proteins were distributed quite differently among organs, which suggests that the proteins have varying functions based on where they are in the body. Of particular interest is SHLP 2, according to Dr. Cohen.  The protein appears to have insulin-sensitizing, antidiabetic effects as well as neuroprotective activity that may emerge as a strategy to combat Alzheimer’s disease. He added that SHLP 6 is also intriguing, with a unique ability to promote cancer cell death and thus potentially target malignant diseases.


 

The cell proliferation antigen Ki-67 organises heterochromatin

Michal Sobecki et al.

Antigen Ki-67 is a nuclear protein expressed in proliferating mammalian cells. It is widely used in cancer histopathology but its functions remain unclear. Here, we show that Ki-67 controls heterochromatin organisation. Altering Ki-67 expression levels did not significantly affect cell proliferation in vivo. Ki-67 mutant mice developed normally and cells lacking Ki-67 proliferated efficiently. Conversely, upregulation of Ki-67 expression in differentiated tissues did not prevent cell cycle arrest. Ki-67 interactors included proteins involved in nucleolar processes and chromatin regulators. Ki-67 depletion disrupted nucleologenesis but did not inhibit pre-rRNA processing. In contrast, it altered gene expression. Ki-67 silencing also had wide-ranging effects on chromatin organisation, disrupting heterochromatin compaction and long-range genomic interactions. Trimethylation of histone H3K9 and H4K20 was relocalised within the nucleus. Finally, overexpression of human or Xenopus Ki-67 induced ectopic heterochromatin formation. Altogether, our results suggest that Ki-67 expression in proliferating cells spatially organises heterochromatin, thereby controlling gene expression.

 

A protein called Ki-67 is only produced in actively dividing cells, where it is located in the nucleus – the structure that contains most of the cell’s DNA. Researchers often use Ki-67 as a marker to identify which cells are actively dividing in tissue samples from cancer patients, and previous studies indicated that Ki-67 is needed for cells to divide. However, the exact role of this protein was not clear. Before cells can divide they need to make large amounts of new proteins using molecular machines called ribosomes and it has been suggested that Ki-67 helps to produce ribosomes.

Now, Sobecki et al. used genetic techniques to study the role of Ki-67 in mice. The experiments show that Ki-67 is not required for cells to divide in the laboratory or to make ribosomes. Instead, Ki-67 alters the way that DNA is packaged in the nucleus. Loss of Ki-67 from mouse cells resulted in DNA becoming less compact, which in turn altered the activity of genes in those cells.

Read Full Post »

The late Cambridge Mayor Alfred Vellucci welcomed Life Sciences Labs to Cambridge, MA – June 1976

Reporter: Aviva Lev-Ari, PhD, RN

How Cambridge became the Life Sciences Capital

Worth watching is the video below, which captures the initial Cambridge City Council hearing on recombinant DNA research from June 1976. The first speaker is the late Cambridge mayor Alfred Vellucci.

Vellucci hoped to pass a two-year moratorium on gene splicing in Cambridge. Instead, the council passed a three-month moratorium, and created a board of nine Cambridge citizens — including a nun and a nurse — to explore whether the work should be allowed, and if so, what safeguards would be necessary. A few days after the board was created, the pro and con tables showed up at the Kendall Square marketplace.

At the time, says Phillip Sharp, an MIT professor, Cambridge felt like a manufacturing town that had seen better days. He recalls being surrounded by candy, textile, and leather factories. Sharp hosted the citizens review committee at MIT, explaining what the research scientists there planned to do. “I think we built a relationship,” he says.

By early 1977, the citizens committee had proposed a framework to ensure that any DNA-related experiments were done under fairly stringent safety controls, and Cambridge became the first city in the world to regulate research using genetic material.

 

WATCH VIDEO

http://www.betaboston.com/news/2016/03/17/how-cambridge-became-the-life-sciences-capital/


SOURCE

How Cambridge became the life sciences capital

http://www.betaboston.com/news/2016/03/17/how-cambridge-became-the-life-sciences-capital/

Read Full Post »

3-D Printed Liver

Curator: Larry H. Bernstein, MD, FCAP

 

 

3D-printing a new lifelike liver tissue for drug screening

Could let pharmaceutical companies quickly do pilot studies on new drugs
February 15, 2016    http://www.kurzweilai.net/3d-printing-a-new-lifelike-liver-tissue-for-drug-screening

Images of the 3D-printed parts of the biomimetic liver tissue: liver cells derived from human induced pluripotent stem cells (left), endothelial and mesenchymal supporting cells (center), and the resulting organized combination of multiple cell types (right). (credit: Chen Laboratory, UC San Diego)

 

University of California, San Diego researchers have 3D-printed a tissue that closely mimics the human liver’s sophisticated structure and function. The new model could be used for patient-specific drug screening and disease modeling and could help pharmaceutical companies save time and money when developing new drugs, according to the researchers.

The liver plays a critical role in how the body metabolizes drugs and produces key proteins, so liver models are increasingly being developed in the lab as platforms for drug screening. However, so far, the models lack both the complex micro-architecture and diverse cell makeup of a real liver. For example, the liver receives a dual blood supply with different pressures and chemical constituents.

So the team employed a novel bioprinting technology that can rapidly produce complex 3D microstructures that mimic the sophisticated features found in biological tissues.

The liver tissue was printed in two steps.

  • The team printed a honeycomb pattern of 900-micrometer-sized hexagons, each containing liver cells derived from human induced pluripotent stem cells. An advantage of human induced pluripotent stem cells is that they are patient-specific, which makes them ideal materials for building patient-specific drug screening platforms. And since these cells are derived from a patient’s own skin cells, researchers don’t need to extract any cells from the liver to build liver tissue.
  • Then, endothelial and mesenchymal supporting cells were printed in the spaces between the stem-cell-containing hexagons.

The entire structure — a 3 × 3 millimeter square, 200 micrometers thick — takes just seconds to print. The researchers say this is a vast improvement over other methods of printing liver models, which typically take hours. Their printed model was able to maintain essential functions over a longer time period than other liver models, and it expressed a relatively high level of a key enzyme considered to be involved in metabolizing many of the drugs administered to patients.
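As a purely geometric illustration of the layout described above (a honeycomb of 900-micrometer hexagons covering a 3 × 3 mm field), the sketch below generates hexagon-center coordinates on a triangular lattice. It is not the printer's toolpath, and treating the 900-micrometer figure as each hexagon's across-flats width is an assumption.

```python
# Sketch: centers of a honeycomb lattice covering the stated print area.
import numpy as np

def hex_centers(width_um=3000.0, height_um=3000.0, cell_um=900.0):
    """(x, y) centers of regular hexagons tiling a rectangular field.
    cell_um is taken as the across-flats width of each hexagon."""
    dx = cell_um                        # pitch within a row
    dy = cell_um * np.sqrt(3) / 2       # pitch between rows
    centers, row, y = [], 0, 0.0
    while y <= height_um:
        x = cell_um / 2 if row % 2 else 0.0   # stagger alternate rows
        while x <= width_um:
            centers.append((x, y))
            x += dx
        y += dy
        row += 1
    return np.array(centers)

print(len(hex_centers()), "hexagonal cells in the 3 x 3 mm field")
```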

“It typically takes about 12 years and $1.8 billion to produce one FDA-approved drug,” said Shaochen Chen, NanoEngineering professor at the UC San Diego Jacobs School of Engineering. “That’s because over 90 percent of drugs don’t pass animal tests or human clinical trials. We’ve made a tool that pharmaceutical companies could use to do pilot studies on their new drugs, and they won’t have to wait until animal or human trials to test a drug’s safety and efficacy on patients. This would let them focus on the most promising drug candidates earlier on in the process.”

The work was published the week of Feb. 8 in the online early edition of Proceedings of the National Academy of Sciences.


Abstract of Deterministically patterned biomimetic human iPSC-derived hepatic model via rapid 3D bioprinting

The functional maturation and preservation of hepatic cells derived from human induced pluripotent stem cells (hiPSCs) are essential to personalized in vitro drug screening and disease study. Major liver functions are tightly linked to the 3D assembly of hepatocytes, with the supporting cell types from both endodermal and mesodermal origins in a hexagonal lobule unit. Although there are many reports on functional 2D cell differentiation, few studies have demonstrated the in vitro maturation of hiPSC-derived hepatic progenitor cells (hiPSC-HPCs) in a 3D environment that depicts the physiologically relevant cell combination and microarchitecture. The application of rapid, digital 3D bioprinting to tissue engineering has allowed 3D patterning of multiple cell types in a predefined biomimetic manner. Here we present a 3D hydrogel-based triculture model that embeds hiPSC-HPCs with human umbilical vein endothelial cells and adipose-derived stem cells in a microscale hexagonal architecture. In comparison with 2D monolayer culture and a 3D HPC-only model, our 3D triculture model shows both phenotypic and functional enhancements in the hiPSC-HPCs over weeks of in vitro culture. Specifically, we find improved morphological organization, higher liver-specific gene expression levels, increased metabolic product secretion, and enhanced cytochrome P450 induction. The application of bioprinting technology in tissue engineering enables the development of a 3D biomimetic liver model that recapitulates the native liver module architecture and could be used for various applications such as early drug screening and disease modeling.

Fernando

I wonder how equivalent these hepatic cells derived from human induced pluripotent stem cells (hiPSCs) are to real hepatic cell populations.
All cells in our organism share the same DNA, but every tissue is special in which genes it expresses and in its specific localization in the body (which means a different surrounding environment for each tissue). I am not sure how much of a step forward this is. Induced hepatic cells are already known, and this 3-D print has neither the shape of a liver nor the different cell subtypes you would find in one.

I agree with your observation that sharing the same DNA does not account for the variability of cell function within an organ. The regulation of expression lies in RNA translation, which is subject to regulatory factors related to noncoding RNAs and to structural factors in protein folding. The result is that chronic diseases affected by the synthetic capabilities of the liver remain problematic: toxicology, diabetes, the inflammatory response, and amino acid metabolism as well. Nevertheless, this is a very significant step for the testing of pharmaceuticals. Because of the liver’s double circulation, hypoxia is less of an issue than for heart or skeletal muscle, or for mesothelial tissues. I call your attention to the outstanding work of Nathan O. Kaplan on the transhydrogenases, and to his stipulation, ignored for over 40 years, that organs that are anabolic differ significantly from those that are catabolic in TPNH/DPNH. Nothing is quite as simple as we would like.


Read Full Post »

Tunable light sources

Larry H. Bernstein, MD, FCAP, Curator

LPBI


Putting Tunable Light Sources to the Test

Common goals of spectroscopy applications such as studying the chemical or biological properties of a material often dictate the requirements of the measurement system’s lamp, power supply and monochromator.

JEFF ENG AND JOHN PARK, PH.D., NEWPORT CORP.   http://www.photonics.com/Article.aspx?AID=58302

Many common spectroscopic measurements require the coordinated operation of a detection instrument and light source, as well as data acquisition and processing. Integration of individual components can be challenging and various applications may have different requirements. Conventional lamp-based tunable light sources are a popular choice for applications requiring a measurement system with this degree of capability.

Many types of tunable light sources are available, and differences in the performance of individual components translate directly to the performance of the system as a whole. Tunable light sources are especially well suited to one application in particular: quantum efficiency and spectral responsivity characterization of photonic sensors, such as solar cells.

Xenon and mercury xenon lamps, two examples of DC arc lamps.

http://www.photonics.com/images/Web/Articles/2016/2/10/Light_Lamps.jpg


The tunable light source’s (TLS) versatility as both a broadband and high-resolution monochromatic light source makes the unit suitable for a variety of applications, such as the study of wavelength-dependent chemical or biological properties or wavelength-induced physical changes of materials. These light sources can also be used in color analysis and reflectivity measurements of materials for quality purposes.

Among their unique attributes, TLSs can produce monochromatic light from the UV to the near-infrared (NIR). Lamp-based TLSs feature two major components: a light source and a monochromator. Common lamps used in TLSs are the DC arc lamp and the quartz tungsten halogen (QTH) lamp. While both of these lamps have a broad emission spectrum, arc lamps show characteristic wavelength emission lines, whereas QTH lamps have a relatively smooth spectral output curve. A stable power supply for the lamp is a critical component, since most applications require high light output power stability [1].

Smooth spectral output vs. monochromator throughput

DC arc lamps are excellent sources of continuous wave, broadband light. They consist of two electrodes (an anode and a cathode) separated by a gas such as neon, argon, mercury or xenon. Light is generated by ionizing the gas between the electrodes. The bright broadband emission from this short arc between the anode and cathode makes these lamps high-intensity point sources, capable of being collimated with the proper lens configuration.

DC arc lamps also offer the advantages of long lifetime, superior monochromator throughput (particularly in the UV range) and a smaller divergence angle. They are particularly well-suited for fiber coupling applications [2]. (See Figure 1.)

Figure 1. A xenon arc lamp housed in an Oriel Research lamp housing. Photo courtesy of Newport Corp.

Xenon (Xe) arc lamps, in particular, have a relatively smooth emission curve in the UV to visible spectrum, with characteristic wavelengths emitted from 380 to 750 nm. Strong xenon peaks, however, are emitted between 750 and 1000 nm.

Their sunlike emission spectrum and about 5800 K color temperature make them a popular choice for solar simulation applications. (See Figure 2.)
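
As a quick sanity check on the “sunlike” description, Wien’s displacement law puts the emission peak of a roughly 5800 K blackbody near the middle of the visible band, just as for the sun:

```python
# Wien's displacement law: peak emission wavelength of a blackbody at T kelvin.
WIEN_B = 2.898e-3   # m*K, Wien's displacement constant
T_LAMP = 5800.0     # K, color temperature quoted for Xe arc lamps

peak_nm = WIEN_B / T_LAMP * 1e9
print(f"Blackbody peak at ~{peak_nm:.0f} nm")   # ~500 nm, mid-visible
```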

Arc lamps can have the following specialty characteristics:

Ozone-free: Wavelength emissions below about 260 nm convert oxygen into toxic ozone, so a standard arc lamp should be operated in a room with adequate ventilation to protect the user from the ozone created. Ozone-free lamps use an envelope material that blocks these short wavelengths.

UV-enhanced: For applications requiring additional UV light intensity, UV-enhanced lamps should be used. These lamps provide the same visible to NIR performance of an arc lamp while providing high-intensity UV output due to changes in the material of the lamp’s glass envelope.

High-stability: High-stability arc lamps are made with a higher-quality cathode than is typically used in arc lamp construction. As a result, arc wander is minimized, allowing the lamp to maintain a consistent output intensity throughout its lifetime.

Figure 2. The spectral output of 3000-W Xe and 250-W QTH lamps used in Oriel’s Tunable Light Sources. Photo courtesy of Newport Corp.

QTH lamps produce light by heating a filament wire with an electric current. The hot filament is surrounded by a vacuum or an inert gas to prevent oxidation. QTH lamps are not very efficient at converting electricity to light, but they offer very accurate color reproduction thanks to their continuous blackbody spectrum. These lamps are a popular alternative to arc lamps because of their higher output intensity stability and their freedom from intense UV emission, from spectral emission lines in the output curve and from toxic ozone production. These advantages over traditional DC arc lamps make QTH lamps preferable for radiometric and photometric applications, as well as for excitation sources of visible to NIR light. QTH lamps are also easier to handle and install, and they produce a smooth output spectrum. Selecting the most appropriate lamp type is a matter of deciding which performance criteria matter most.

Constant current vs. constant power

The power supply is a vital component for operating a DC arc or QTH lamp with minimum light ripple. The lamps are operated in either constant current or constant power mode and are used in applications such as radiometric measurements, where a stable light output is required for accurate measurement. Providing stable electrical power to the lamp is important since fluctuations in the wavelength and output intensity of the light source impact the accuracy of measurement.

There is very little difference in short-term output stability when operating an arc lamp or a QTH lamp in constant current or constant power mode. The differences appear, however, as the lamp ages. For arc lamps, even with a stable power supply, deposits form on the inside of the lamp envelope as the electrodes degrade, and electrode wear causes an unstable arc position, changing the electrical characteristics of the lamp. The distance between the cathode and anode increases, raising the lamp’s operating voltage. For QTH lamps, deposits form on the inside of the envelope as the filament degrades, changing the electrical and spectral characteristics of the lamp.

In constant power mode, the lamp is operated at a fixed power setting. As the lamp’s operating voltage drifts with age, the supply raises or lowers the current to hold the power at the set level. As the lamp ages, its radiant output decreases, but lamp lifetime is prolonged.

In constant current mode, the lamp is operated at a fixed current setting. As the lamp’s operating voltage rises with age, the input power delivered to the lamp increases. This results in greater output power, which, to some extent, may help compensate for a darkening lamp envelope. However, the lamp’s lifetime is greatly reduced by the increased power.
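
The practical difference between the two modes can be sketched numerically. The voltages and setpoints below are hypothetical illustrations, not specifications of any particular lamp or supply: as the lamp voltage drifts upward with age, constant power mode trims the current, while constant current mode lets the delivered power, and the stress on the lamp, climb.

```python
# Hypothetical aging lamp: operating voltage drifts upward over the lamp's life.
# Constant power mode holds P = V * I by trimming I; constant current mode
# holds I fixed, so P rises with V. All numbers are illustrative assumptions.
P_SET = 300.0   # W, constant-power setpoint
I_SET = 15.0    # A, constant-current setpoint (300 W at the 20 V "new" voltage)

for label, v_lamp in (("new lamp ", 20.0), ("aged lamp", 26.0)):
    i_power_mode = P_SET / v_lamp      # current trimmed to hold power constant
    p_current_mode = I_SET * v_lamp    # power rises as the voltage drifts up
    print(f"{label}: V = {v_lamp:.0f} V | constant power: I = {i_power_mode:.1f} A"
          f" | constant current: P = {p_current_mode:.0f} W")
```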

Although power supplies are highly regulated, factors beyond the power supply’s control may affect the light output, including lamp aging, ambient temperature fluctuations and filament erosion. For applications in which highly stable output intensity is especially critical, optical feedback control of the power supply is suggested in order to compensate for such factors [3]. (See Figure 3.)

Figure 3. Oriel’s OPS Series Power Supplies offer the option of operating a lamp in constant power, constant current or intensity operation modes. Photo courtesy of Newport Corp.

Diffraction gratings narrow the wavelength band

Monochromators use diffraction gratings to spatially isolate and select a narrow band of wavelengths from a light source with broader emission. They are valuable instruments because they can be used both to create quasi-monochromatic light and to take high-precision spectral measurements. A high-precision stepper motor is typically used to select the desired wavelength and to switch between diffraction gratings quickly, without sacrificing instrument performance.

The choice of slit width is a trade-off between light throughput and the resolution required for the measurement: a larger slit width allows more light throughput, but at the cost of poorer resolution. When choosing a slit width at which to operate the monochromator, both the input and output ports must be set to the same slit width. (See Figure 4.) Focused light enters the monochromator through the entrance slit and is redirected by the collimating mirror toward the grating. The grating directs the light toward the focusing mirror, which then redirects the chosen wavelength toward the exit slit, where quasi-monochromatic light is emitted [4].
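
The slit-width trade-off can be quantified with the usual rule of thumb that a grating monochromator’s bandpass is approximately the slit width multiplied by the instrument’s reciprocal linear dispersion. The dispersion value below is an assumed figure for illustration, not a Cornerstone 130 specification.

```python
# Rule of thumb: bandpass (nm) ~ slit width (mm) x reciprocal linear dispersion (nm/mm).
# The dispersion depends on grating groove density and focal length; the value
# here is assumed purely for illustration.
RLD_NM_PER_MM = 6.5   # nm/mm, assumed reciprocal linear dispersion

for slit_um in (600, 300, 150):   # matched entrance and exit slit widths
    bandpass_nm = (slit_um / 1000.0) * RLD_NM_PER_MM
    print(f"{slit_um:3d} um slits -> ~{bandpass_nm:.1f} nm bandpass")
# Halving the slit width halves the bandpass (finer resolution)
# but also cuts the light throughput.
```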

Figure 4. A fixed-width slit being installed into an Oriel Cornerstone 130 monochromator. Photo courtesy of Newport Corp.

Measuring quantum efficiencies

Measuring quantum efficiency (QE) across a range of wavelengths, to determine a device’s QE at each photon energy, is an ideal task for a tunable light source. The QE of a photoelectric material is zero for photons with energy below the band gap. The QE value of a light-sensing device such as a solar cell indicates how much current the cell will produce when irradiated by photons of a particular wavelength. The principle of QE measurement is to count the proportion of carriers extracted from the material’s valence band relative to the number of photons impinging on the surface. To do this, it is necessary to shine calibrated, tunable light on the cell while simultaneously measuring the output current. The key to an accurate QE (internal photon-to-current efficiency) measurement is knowing precisely how much of the scanning light is incident on the device under test and how much current is generated. Thus, the light output must be measured with a NIST (National Institute of Standards and Technology) traceable calibrated detector prior to testing, since illumination at a known absolute optical power is required.

External quantum efficiency (EQE) is the ratio of the number of charge carriers generated to the number of photons incident on the solar cell. Internal quantum efficiency (IQE) also accounts for internal losses, that is, the photons absorbed by nonactive layers of the cell. By comparison, EQE is much more straightforward to measure, and it gives a direct parameter of how much current is contributed to the output circuit per incident photon at a given wavelength. IQE is a more in-depth parameter, taking into account the photoelectric efficiency of all composite layers of the material. In an IQE measurement, the losses from nonactive layers of the material are measured in order to calculate a net quantum efficiency, a much truer efficiency measurement.

Understanding the conversion efficiency as a function of the wavelength of light impingent on the cell makes QE measurement critical for materials research and solar cell design. With this data, the solar cell composition and topography can be modified to optimize conversion over the broadest possible range of wavelengths.

As a formula, it is given by IQE = EQE/(1 − R), where R is the reflectivity, direct and diffuse, of the solar cell. The IQE indicates the capacity of the active layers of the solar cell to make good use of the absorbed photons. It is always higher than the EQE, but should never exceed 100 percent, with the exception of multiple-exciton generation. Figure 5 illustrates how the tunable light source is used to illuminate the solar cell to perform an IQE measurement. The software controls all components of the measurement system, including the monochromator and data acquisition [5].
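
The arithmetic behind a single QE data point is straightforward: convert the calibrated optical power into a photon flux, convert the measured photocurrent into an electron flux, and take the ratio. The sketch below uses hypothetical readings, not measured data, and applies the IQE = EQE/(1 − R) correction from the formula above.

```python
# EQE = (electrons out per second) / (photons in per second);
# IQE = EQE / (1 - R). Example readings are hypothetical.
H = 6.626e-34   # J*s, Planck constant
C = 2.998e8     # m/s, speed of light
Q = 1.602e-19   # C, elementary charge

def eqe(current_a: float, power_w: float, wavelength_nm: float) -> float:
    """External quantum efficiency from photocurrent and calibrated power."""
    photons_per_s = power_w * (wavelength_nm * 1e-9) / (H * C)
    electrons_per_s = current_a / Q
    return electrons_per_s / photons_per_s

def iqe(eqe_value: float, reflectivity: float) -> float:
    """Internal quantum efficiency, correcting for total reflectivity R."""
    return eqe_value / (1.0 - reflectivity)

# Hypothetical reading: 1.2 uA photocurrent under 5 uW at 550 nm, R = 0.08.
e = eqe(1.2e-6, 5e-6, 550.0)
print(f"EQE = {e:.2f}, IQE = {iqe(e, 0.08):.2f}")
```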

Figure 5. A sample QE measurement system using the components of a tunable light source. Photo courtesy of Newport Corp.

To measure quantum efficiency in 10-nm wavelength steps, the monochromator slit is typically a few hundred micrometers wide. The slit width must be roughly halved if 5-nm wavelength increments are desired; however, halving the slit width reduces the monochromator’s output power by more than 50 percent. Lower optical power affects the QE measurement, since the solar cell responds to the diminished illumination with a lower output current. This can result in a poor signal-to-noise ratio, making a QE measurement error more likely. Detecting such low currents requires very sensitive equipment capable of measuring down to the picoampere level. To make the signal easier to measure, the optical power is typically increased. A DC arc source is the better choice for QE measurements made in 5-nm increments or finer, because the lamp’s small arc size yields better monochromator throughput. A QTH lamp, however, is the better choice when output stability better than 0.1 percent is required, with the trade-off of coarser wavelength increments than an arc lamp allows.

The balance between optical power and resolution is an important consideration, as it affects the quality of the QE measurement. The choice of lamp type and monochromator specifications are likewise important in TLS design. To suit the majority of spectroscopic applications, a TLS requires high output power and stability, a long lamp lifetime, and broadband spectral emission with high-resolution capability.

Meet the authors

John Park, new product development manager at Newport Corp., has designed and developed numerous spectroscopy instruments for the photonics industry over more than 10 years. He holds two granted patents and earned a Ph.D. in electrical engineering from the University of California, Irvine; email: john.park@newport.com. Jeff Eng is a product specialist for Oriel Spectroscopy Products at Newport Corp. His work experience includes application support and business-to-business sales and marketing of photonic light sources and detectors. He is a graduate of Rutgers University; email: jeff.eng@newport.com.

References

1. Newport Corp., Oriel Instruments TLS datasheet: Tunable Xe arc lamp sources. http://assets.newport.com/webDocuments-EN/images/39191.pdf.

2. Newport Corp., Oriel Instruments handbook: The Book of Photon Tools, light source section.

3. Newport Corp., Oriel Instruments OPS datasheet: OPS-A series arc lamp power supplies. http://assets.newport.com/webDocuments-EN/images/OPS-A%20Series%20Power%20Supply%20Datasheet.pdf.

4. J. M. Lerner and A. Thevenon (1988). The Optics of Spectroscopy. Edison, N.J.: Optical Systems/Instruments SA Inc.

5. K. Emery (2005). Handbook of Photovoltaic Science and Engineering, eds. A. Luque and S. Hegedus. Chapter 16: Measurement and characterization of solar cells and modules. Hoboken, N.J.: John Wiley & Sons Ltd.

Read Full Post »

Tau and IGF1 in Alzheimer’s Disease

Larry H. Bernstein, MD, FCAP, Curator

LPBI

TAU links growth factor to development of Alzheimer’s disease

https://english.tau.ac.il/sites/default/files/styles/reaserch_main_image_580_x_330/public/alz2580.jpg


The mechanisms underlying the stability and plasticity of neural circuits in the hippocampus, the part of the brain responsible for spatial memory and for the memory of everyday facts and events, have been a major focus of study in the field of neuroscience. Understanding precisely how a “healthy” brain stores and processes information is crucial to preventing and reversing the memory failures associated with Alzheimer’s disease (AD), the most common form of late-life dementia.


Hyperactivity of the hippocampus is known to be associated with conditions that confer risk for AD, including amnestic mild cognitive impairment. A new Tel Aviv University study finds that the insulin-like growth factor 1 receptor (IGF-1R), the “master” lifespan regulator, plays a vital role in directly regulating the transfer and processing of information in hippocampal neural circuits. The research reveals IGF-1R as a differential regulator of two different modes of transmission — spontaneous and evoked — in hippocampal circuits of the brain. The researchers hope their findings can be used to indicate a new direction for therapy used to treat patients in the early stages of Alzheimer’s disease.


The study was led by Dr. Inna Slutsky of TAU’s Sagol School of Neuroscience and Sackler School of Medicine and conducted by doctoral student Neta Gazit. It was recently published in the journal Neuron. “People who are at risk for AD show hyperactivity of the hippocampus, and our results suggest that IGF-1R activity may be an important contributor to this abnormality,” Dr. Slutsky concluded.


Resolving a controversy

“We know that IGF-1R signaling controls growth, development and lifespan, but its role in AD has remained controversial,” said Dr. Slutsky. “To resolve this controversy, we had to understand how IGF-1R functions physiologically in synaptic transfer and plasticity.”


Using brain cultures and slices, the researchers developed an integrated approach characterizing the brain system on different scales — from the level of protein interactions to the level of single synapses, neuronal connections and the entire hippocampal network. The team sought to address two important questions: whether IGF-1Rs are active in synapses and transduce signalling at rest, and how they affect synaptic function.


“We used fluorescence resonance energy transfer (FRET) to estimate the receptor activation at the single-synapse level,” said Dr. Slutsky. “We found IGF-1Rs to be fully activated under resting conditions, modulating release of neurotransmitters from synapses.”
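
FRET can report on receptor activation at this scale because its efficiency falls off with the sixth power of the donor-acceptor distance, E = 1/(1 + (r/R0)^6). The sketch below uses a Förster radius of 5 nm, a typical magnitude assumed here for illustration, not the value for the fluorophore pair used in the study.

```python
# FRET efficiency vs. donor-acceptor distance: E = 1 / (1 + (r/R0)^6).
R0_NM = 5.0   # nm, assumed Forster radius (distance at 50% efficiency)

def fret_efficiency(r_nm: float) -> float:
    """Energy-transfer efficiency at donor-acceptor separation r."""
    return 1.0 / (1.0 + (r_nm / R0_NM) ** 6)

for r in (3.0, 5.0, 7.0):
    print(f"r = {r:.0f} nm -> E = {fret_efficiency(r):.2f}")
# The steep distance dependence makes FRET a nanometer-scale ruler for
# conformational changes such as receptor activation.
```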


While acute application of the IGF-1 hormone was found to be ineffective, the introduction of various IGF-1R blockers produced robust dual effects: inhibition of the neurotransmitter release evoked by spikes (electrical pulses in the brain), together with enhancement of spontaneous neurotransmitter release.


A test for Alzheimer’s?

“When we modified the level of IGF-1R expression, synaptic transmission and plasticity were altered at hippocampal synapses, and an increase in the IGF-1R expression caused an augmented release of glutamate, enhancing the activity of hippocampal neurons,” said Gazit.


“We suggest that IGF-1R small-molecule inhibitors, which are currently under development for cancer, be tested for reducing aberrant brain activity at early stages of Alzheimer’s disease,” said Dr. Slutsky.


The researchers are currently planning to study how IGF-1R signaling controls the stability of neural circuits over an extended timescale.


Dr. Irena Vertkin, Dr. Ilana Shapira, Edden Slomowitz, Maayan Sheiba and Yael Mor of Dr. Slutsky’s lab at TAU, and Martin Helm and Prof. Silvio Rizzoli of the University of Göttingen in Germany, contributed to this research.


This article was originally published by AFTAU.


Read Full Post »

A Magnetically controlled Mechanical Propeller for Immotile Sperm

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

Researchers from the Institute for Integrative Nanosciences, IFW Dresden, Germany, and Material Systems for Nanoelectronics, Chemnitz University of Technology, Germany, have developed the “spermbot,” a remotely controlled micromotor for steering sperm that could help create the babies of the future. It is a magnetically powered robotic “suit” that straps itself to an individual sperm cell and helps guide it faster toward the egg. According to the inventors, all of the initial tests with the spermbot have delivered promising results.

The purpose of the spermbot is to address one of the most widely discussed causes of infertility in men: poor sperm motility. Low sperm motility, that is, otherwise healthy sperm that simply cannot swim well, can be a big factor in infertility. While the spermbot is in its early stages of development, it is already being discussed as a promising alternative to existing techniques that are expensive and come with a high failure rate, such as in-vitro fertilization and artificial insemination. Only about 30 percent of attempts with the traditional “spray-and-pray” approach end in success, which warranted the need for an alternative procedure like the spermbot. According to the report, initial experiments show a marked increase in the probability of a spermbot-assisted sperm reaching its intended destination. The process of fertilization can be completed inside the body or in the lab, in a petri dish.

The spermbot is a coat of microscopic metal polymers shaped into a helix. It attaches itself to the tail of the spermatozoon and then, acting as a hybrid micromotor, helps propel the sperm faster toward the egg. The direction the sperm takes is controlled by a rotating magnetic field; in fact, even the motion of the sperm can be remote-controlled simply by adjusting this field. Once the spermbot propels the sperm to the egg and the sperm manages to implant itself, the bionic part of the spermbot detaches from the tail.
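
As a rough idealization of how a rotating field translates into forward motion (a sketch of standard helical-micromotor kinematics, not the authors’ model): below its step-out frequency, a magnetic helix rotates in sync with the field and advances a fraction of its pitch per revolution. The pitch, slip factor and step-out frequency below are all assumed values.

```python
# Idealized corkscrew kinematics for a magnetically rotated helix.
# Below the step-out frequency the helix spins with the field and advances
# SLIP * PITCH per revolution. All parameter values are assumptions.
PITCH_UM = 10.0      # um, assumed helix pitch
SLIP = 0.3           # assumed slip factor (< 1: the helix slips in the fluid)
STEP_OUT_HZ = 40.0   # Hz, assumed step-out frequency

def forward_speed_um_s(field_hz: float) -> float:
    """Forward speed in um/s; capping at step-out is a simplification
    (real micromotors desynchronize and slow abruptly past step-out)."""
    return SLIP * PITCH_UM * min(field_hz, STEP_OUT_HZ)

for f in (10.0, 20.0, 40.0, 60.0):
    print(f"{f:4.0f} Hz rotating field -> ~{forward_speed_um_s(f):.0f} um/s")
```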

While the initial experiments look promising, there is still some way to go before the spermbot technique can be used routinely. To start with, scientists have too small a sample size to evaluate the results correctly, and unless more comprehensive tests are carried out, it will not be possible to use spermbots on human subjects. Another major stumbling block is that there is currently no way to film the spermbot in action while it is moving inside the body, which also means that doctors would not be able to direct it toward the egg. A further concern is the response of the body’s immune system: the spermbot could trigger an immune reaction whose results cannot be predicted without comprehensive clinical trials. The idea of the spermbot looks promising right now, but it is still too early to call it a replacement for tried and tested methods like in-vitro fertilization and artificial insemination. Even if clinical trials are completed successfully, it would take a few years for the procedure to become available to patients.

References:

http://pubs.acs.org/doi/abs/10.1021/acs.nanolett.5b04221

http://www.acs.org/content/acs/en/pressroom/presspacs/2016/acs-presspac-january-13-2016/spermbots-could-help-women-trying-to-conceive-video.html

http://www.inquisitr.com/2711435/spermbot-robot-sperm-infertility-treatment/#utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+google%2FyDYq+%28The+Inquisitr+-+News%29

http://www.slate.com/articles/video/video/2016/01/spermbot_attached_to_sperm_and_delivers_it_quickly_to_an_egg_video.html

http://www.sciencemag.org/news/2016/01/video-motorized-spermbot-helps-sperm-reach-egg

Read Full Post »

Graphene Interaction with Neurons

Larry H. Bernstein, MD, FCAP, Curator

LPBI


Graphene Shown to Safely Interact with Neurons in the Brain

University of Cambridge

(Source: University of Cambridge)

http://www.biosciencetechnology.com/sites/biosciencetechnology.com/files/bt1601_cambridge_graphene.png


Researchers have successfully demonstrated how it is possible to interface graphene – a two-dimensional form of carbon – with neurons, or nerve cells, while maintaining the integrity of these vital cells. The work may be used to build graphene-based electrodes that can safely be implanted in the brain, offering promise for the restoration of sensory functions for amputee or paralyzed patients, or for individuals with motor disorders such as epilepsy or Parkinson’s disease.

The research, published in the journal ACS Nano, was an interdisciplinary collaboration coordinated by the University of Trieste in Italy and the Cambridge Graphene Centre.

Previously, other groups had shown that it is possible to use treated graphene to interact with neurons. However, the signal-to-noise ratio of this interface was very low. By developing methods of working with untreated graphene, the researchers retained the material’s electrical conductivity, making it a significantly better electrode.

“For the first time we interfaced graphene to neurons directly,” said Professor Laura Ballerini of the University of Trieste in Italy. “We then tested the ability of neurons to generate electrical signals known to represent brain activities, and found that the neurons retained their neuronal signaling properties unaltered. This is the first functional study of neuronal synaptic activity using uncoated graphene based materials.”

Our understanding of the brain has increased to such a degree that by interfacing directly between the brain and the outside world we can now harness and control some of its functions. For instance, by measuring the brain’s electrical impulses, sensory functions can be recovered. This can be used to control robotic arms for amputee patients or any number of basic processes for paralyzed patients – from speech to movement of objects in the world around them. Alternatively, by interfering with these electrical impulses, motor disorders (such as epilepsy or Parkinson’s) can start to be controlled.

Scientists have made this possible by developing electrodes that can be placed deep within the brain. These electrodes connect directly to neurons and transmit their electrical signals away from the body, allowing their meaning to be decoded.

However, the interface between neurons and electrodes has often been problematic: not only do the electrodes need to be highly sensitive to electrical impulses, but they need to be stable in the body without altering the tissue they measure.

Too often, the modern electrodes used for this interface (based on tungsten or silicon) suffer from partial or complete loss of signal over time. This is often caused by the formation of scar tissue from the electrode insertion, together with the electrode’s rigidity, which prevents it from moving with the natural movements of the brain.

Graphene has been shown to be a promising material to solve these problems, because of its excellent conductivity, flexibility, biocompatibility and stability within the body.

Based on experiments conducted in rat brain cell cultures, the researchers found that untreated graphene electrodes interfaced well with neurons. By studying the neurons with electron microscopy and immunofluorescence, the researchers found that the cells remained healthy, transmitting normal electric impulses and, importantly, showing none of the adverse reactions that lead to damaging scar tissue.

According to the researchers, this is the first step towards using pristine graphene-based materials as an electrode for a neuro-interface. In future, the researchers will investigate how different forms of graphene, from multiple layers to monolayers, are able to affect neurons, and whether tuning the material properties of graphene might alter the synapses and neuronal excitability in new and unique ways. “Hopefully this will pave the way for better deep brain implants to both harness and control the brain, with higher sensitivity and fewer unwanted side effects,” said Ballerini.

“We are currently involved in frontline research in graphene technology towards biomedical applications,” said Professor Maurizio Prato from the University of Trieste. “In this scenario, the development and translation in neurology of graphene-based high-performance biodevices requires the exploration of the interactions between graphene nano- and micro-sheets with the sophisticated signalling machinery of nerve cells. Our work is only a first step in that direction.”

“These initial results show how we are just at the tip of the iceberg when it comes to the potential of graphene and related materials in bio-applications and medicine,” said Professor Andrea Ferrari, Director of the Cambridge Graphene Centre. “The expertise developed at the Cambridge Graphene Centre allows us to produce large quantities of pristine material in solution, and this study proves the compatibility of our process with neuro-interfaces.”

The research was funded by the Graphene Flagship, a European initiative which promotes a collaborative approach to research with an aim of helping to translate graphene out of the academic laboratory, through local industry and into society.

Source: University of Cambridge


Remembering to Remember Supported by Two Distinct Brain Processes

http://www.biosciencetechnology.com/news/2013/08/remembering-remember-supported-two-distinct-brain-processes

To investigate how prospective memory is processed in the brain, psychological scientist Mark McDaniel of Washington University in St. Louis and colleagues had participants lie in an fMRI scanner and asked them to press one of two buttons to indicate whether a word that popped up on a screen was a member of a designated category.  In addition to this ongoing activity, participants were asked to try to remember to press a third button whenever a special target popped up. The task was designed to tap into participants’ prospective memory, or their ability to remember to take certain actions in response to specific future events.

When McDaniel and colleagues analyzed the fMRI data, they observed that two distinct brain activation patterns emerged when participants made the correct button press for a special target.

When the special target was not relevant to the ongoing activity—such as a syllable like “tor”—participants seemed to rely on top-down brain processes supported by the prefrontal cortex. In order to answer correctly when the special syllable flashed up on the screen, the participants had to sustain their attention and monitor for the special syllable throughout the entire task. In the grocery bag scenario, this would be like remembering to bring the grocery bags by constantly reminding yourself that you can’t forget them.

When the special target was integral to the ongoing activity—such as a whole word, like “table”—participants recruited a different set of brain regions, and they didn’t show sustained activation in these regions. The findings suggest that remembering what to do when the special target was a whole word didn’t require the same type of top-down monitoring. Instead, the target word seemed to act as an environmental cue that prompted participants to make the appropriate response—like reminding yourself to bring the grocery bags by leaving them near the front door.

“These findings suggest that people could make use of several different strategies to accomplish prospective memory tasks,” says McDaniel.

McDaniel and colleagues are continuing their research on prospective memory, examining how this phenomenon might change with age.

Co-authors on this research include Pamela LaMontagne, Michael Scullin, Todd Braver of Washington University in St. Louis; and Stefanie Beck of Technische Universität Dresden.

This research was funded by the National Institute on Aging, the Washington University Institute of Clinical and Translation Sciences, the National Center for Advancing Translational Sciences, and the German Science Foundation.

Read Full Post »

« Newer Posts - Older Posts »