Archive for the ‘BioPrinting in Regenerative Medicine’ Category

3-D Printed Ovaries Produce Healthy Offspring

Reporter: Irina Robu, PhD


Each year about 120,000 organs are transplanted from one human being to another, most of the time from a living volunteer. But a lack of suitable donors means the supply of such organs is inadequate. Countless people consequently die waiting for a transplant, which has led researchers to study how to build organs from scratch.

One promising approach is to print them, but “bioprinting” remains largely experimental. Nevertheless, bioprinted tissue is already being sold for drug testing, and the first transplantable tissues are anticipated to be ready for use in a few years’ time. Among the first 3D-printed organs are bioprosthetic ovaries, constructed from 3D-printed scaffolds seeded with immature eggs, which have succeeded in boosting hormone production and restoring fertility. They were developed by Teresa K. Woodruff, a reproductive scientist and director of the Women’s Health Research Institute at the Feinberg School of Medicine, Northwestern University, in Illinois.

What sets these bioprosthetic ovaries apart is the architecture of the scaffold. The material is a gelatin made from broken-down collagen that is safe in humans; it is self-supporting and can be built up in multiple layers.

The 3-D printed “scaffold,” or “skeleton,” is implanted into a female, and its pores can be tuned to optimize how follicles, or immature eggs, lodge within the scaffold. The scaffold supports the survival of the mouse’s immature egg cells and of the cells that produce hormones. The open construction leaves room for the egg cells to mature and ovulate, and for blood vessels to form within the implant, enabling hormones to circulate and trigger lactation after giving birth. The purpose of the scaffold is to recapitulate how an ovary functions.
The scientists’ objective in developing the bioprosthetic ovaries was to help re-establish fertility and hormone production in women who have undergone cancer treatments and now face higher risks of infertility and hormone-based developmental issues.



Printed human body parts could soon be available for transplant


3D printed ovaries produce healthy offspring giving hope to infertile women


Brave new world: 3D-printed ovaries produce healthy offspring


3-D-printed scaffolds restore ovary function in infertile mice


Our Grandkids May Be Born From 3D-Printed Ovaries



Read Full Post »

Nanostraws Developed at Stanford Sample a Cell’s Contents without Damage

Reporter: Irina Robu, PhD

Cells within our bodies divide and change over time, with thousands of chemical reactions occurring within each cell daily. Nicholas Melosh, associate professor of materials science and engineering at Stanford, has developed a new, non-destructive system for sampling cells with nanoscale straws, which could help uncover mysteries about how cells function.

Currently, cells are sampled by lysing, which ruptures the cell membrane and means the cell can never be sampled again. The sampling system Dr. Melosh invented relies on tiny tubes 600 times smaller than a strand of hair that allow researchers to sample a single cell at a time. The nanostraws penetrate a cell’s outer membrane without damaging it and draw out proteins and genetic material from the cell’s salty interior.

The nanostraw sampling technique, according to Melosh, will significantly advance our understanding of cell development and could lead to much safer and more effective medical therapies, because it allows long-term, non-destructive monitoring. The sampling technique could also inform cancer treatments and answer questions about why some cancer cells are resistant to chemotherapy while others are not. The platform on which the nanostraws are grown is tiny, about the size of a gumball. It is called the Nanostraw Extraction (NEX) sampling system, and it was designed to mimic biology itself.

The goal of developing this technology was to make an impact in medical biology by providing a platform that any lab could build.


Read Full Post »

Topical Solution for Combination Oncology Drug Therapy: Patch that delivers Drug, Gene, and Light-based Therapy to Tumor

Reporter: Aviva Lev-Ari, PhD, RN


Self-assembled RNA-triple-helix hydrogel scaffold for microRNA modulation in the tumour microenvironment


  1. Massachusetts Institute of Technology, Institute for Medical Engineering and Science, Harvard-MIT Division for Health Sciences and Technology, Cambridge, Massachusetts 02139, USA
    • João Conde,
    • Nuria Oliva,
    • Mariana Atilano,
    • Hyun Seok Song &
    • Natalie Artzi
  2. School of Engineering and Materials Science, Queen Mary University of London, London E1 4NS, UK
    • João Conde
  3. Grup d’Enginyeria de Materials, Institut Químic de Sarrià-Universitat Ramon Llull, Barcelona 08017, Spain
    • Mariana Atilano
  4. Division of Bioconvergence Analysis, Korea Basic Science Institute, Yuseong, Daejeon 169-148, Republic of Korea
    • Hyun Seok Song
  5. Broad Institute of MIT and Harvard, Cambridge, Massachusetts 02142, USA
    • Natalie Artzi
  6. Department of Medicine, Biomedical Engineering Division, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts 02115, USA
    • Natalie Artzi


J.C. and N.A. conceived the project and designed the experiments. J.C., N.O., H.S.S. and M.A. performed the experiments, collected and analysed the data. J.C. and N.A. co-wrote the manuscript. All authors discussed the results and reviewed the manuscript.

Nature Materials
Received: 22 April 2015
Accepted: 26 October 2015
Published online: 07 December 2015

The therapeutic potential of miRNA (miR) in cancer is limited by the lack of efficient delivery vehicles. Here, we show that a self-assembled dual-colour RNA-triple-helix structure comprising two miRNAs—a miR mimic (tumour suppressor miRNA) and an antagomiR (oncomiR inhibitor)—provides outstanding capability to synergistically abrogate tumours. Conjugation of RNA triple helices to dendrimers allows the formation of stable triplex nanoparticles, which form an RNA-triple-helix adhesive scaffold upon interaction with dextran aldehyde, the latter able to chemically interact and adhere to natural tissue amines in the tumour. We also show that the self-assembled RNA-triple-helix conjugates remain functional in vitro and in vivo, and that they lead to nearly 90% levels of tumour shrinkage two weeks post-gel implantation in a triple-negative breast cancer mouse model. Our findings suggest that the RNA-triple-helix hydrogels can be used as an efficient anticancer platform to locally modulate the expression of endogenous miRs in cancer.




Patch that delivers drug, gene, and light-based therapy to tumor sites shows promising results

In mice, device destroyed colorectal tumors and prevented remission after surgery.

Helen Knight | MIT News Office
July 25, 2016

Approximately one in 20 people will develop colorectal cancer in their lifetime, making it the third-most prevalent form of the disease in the U.S. In Europe, it is the second-most common form of cancer.

The most widely used first line of treatment is surgery, but this can result in incomplete removal of the tumor. Cancer cells can be left behind, potentially leading to recurrence and increased risk of metastasis. Indeed, while many patients remain cancer-free for months or even years after surgery, tumors are known to recur in up to 50 percent of cases.

Conventional therapies used to prevent tumors recurring after surgery do not sufficiently differentiate between healthy and cancerous cells, leading to serious side effects.

In a paper published today in the journal Nature Materials, researchers at MIT describe an adhesive patch that can stick to the tumor site, either before or after surgery, to deliver a triple-combination of drug, gene, and photo (light-based) therapy.

Releasing this triple combination therapy locally, at the tumor site, may increase the efficacy of the treatment, according to Natalie Artzi, a principal research scientist at MIT’s Institute for Medical Engineering and Science (IMES) and an assistant professor of medicine at Brigham and Women’s Hospital, who led the research.

The general approach to cancer treatment today is the use of systemic, or whole-body, therapies such as chemotherapy drugs. But the lack of specificity of anticancer drugs means they produce undesired side effects when systemically administered.

What’s more, only a small portion of the drug reaches the tumor site itself, meaning the primary tumor is not treated as effectively as it should be.

Indeed, recent research in mice has found that only 0.7 percent of nanoparticles administered systemically actually found their way to the target tumor.

“This means that we are treating both the source of the cancer — the tumor — and the metastases resulting from that source, in a suboptimal manner,” Artzi says. “That is what prompted us to think a little bit differently, to look at how we can leverage advancements in materials science, and in particular nanotechnology, to treat the primary tumor in a local and sustained manner.”

The researchers have developed a triple-therapy hydrogel patch, which can be used to treat tumors locally. This is particularly effective as it can treat not only the tumor itself but any cells left at the site after surgery, preventing the cancer from recurring or metastasizing in the future.

Firstly, the patch contains gold nanorods, which heat up when near-infrared radiation is applied to the local area. This is used to thermally ablate, or destroy, the tumor.

These nanorods are also equipped with a chemotherapy drug, which is released when they are heated, to target the tumor and its surrounding cells.

Finally, gold nanospheres that do not heat up in response to the near-infrared radiation are used to deliver RNA, or gene therapy to the site, in order to silence an important oncogene in colorectal cancer. Oncogenes are genes that can cause healthy cells to transform into tumor cells.

The researchers envision that a clinician could remove the tumor, and then apply the patch to the inner surface of the colon, to ensure that no cells that are likely to cause cancer recurrence remain at the site. As the patch degrades, it will gradually release the various therapies.

The patch can also serve as a neoadjuvant, a therapy designed to shrink tumors prior to their resection, Artzi says.

When the researchers tested the treatment in mice, they found that in 40 percent of cases where the patch was not applied after tumor removal, the cancer returned.

But when the patch was applied after surgery, the treatment resulted in complete remission.

Indeed, even when the tumor was not removed, the triple-combination therapy alone was enough to destroy it.

The technology is an extraordinary and unprecedented synergy of three concurrent modalities of treatment, according to Mauro Ferrari, president and CEO of the Houston Methodist Research Institute, who was not involved in the research.

“What is particularly intriguing is that by delivering the treatment locally, multimodal therapy may be better than systemic therapy, at least in certain clinical situations,” Ferrari says.

Unlike existing colorectal cancer surgery, this treatment can also be applied in a minimally invasive manner. In the next phase of their work, the researchers hope to move to experiments in larger models, in order to use colonoscopy equipment not only for cancer diagnosis but also to inject the patch to the site of a tumor, when detected.

“This administration modality would enable, at least in early-stage cancer patients, the avoidance of open field surgery and colon resection,” Artzi says. “Local application of the triple therapy could thus improve patients’ quality of life and therapeutic outcome.”

Artzi is joined on the paper by João Conde, Nuria Oliva, and Yi Zhang, of IMES. Conde is also at Queen Mary University in London.


Other related articles published in this Open Access Online Scientific Journal include the following:

The Development of siRNA-Based Therapies for Cancer

Author: Ziv Raviv, PhD


Targeted Liposome Based Delivery System to Present HLA Class I Antigens to Tumor Cells: Two papers

Reporter: Stephen J. Williams, Ph.D.


Blast Crisis in Myeloid Leukemia and the Activation of a microRNA-editing Enzyme called ADAR1

Curator: Larry H. Bernstein, MD, FCAP


First challenge to make use of the new NCI Cloud Pilots – Somatic Mutation Challenge – RNA: Best algorithms for detecting all of the abnormal RNA molecules in a cancer cell

Reporter: Aviva Lev-Ari, PhD, RN


miRNA Therapeutic Promise

Curator: Larry H. Bernstein, MD, FCAP

Read Full Post »

Bioprinting basics

Curator: Larry H. Bernstein, MD, FCAP



The ABCs of 3D Bioprinting of Living Tissues, Organs   5/06/2016 

(Credit: Ozbolat Lab/Penn State University)

Although it originated in 2003, the world of bioprinting is still very new and ambiguous. Nevertheless, as the need for organ donation continues to increase worldwide, and organ and tissue shortages prevail, a handful of scientists have started applying this cutting-edge science and technology to various areas of regenerative medicine to possibly fill that organ-shortage void.

Among these scientists is Ibrahim Tarik Ozbolat, an associate professor in the Engineering Science and Mechanics Department and the Huck Institutes of the Life Sciences at Penn State University, who has been studying bioprinting and tissue engineering for years.

While Ozbolat did not originate 3D bioprinting research, he is the first at Penn State University to spearhead such studies, at the Ozbolat Lab, Leading Bioprinting Research.

“Tissue engineering is a big need. Regenerative medicine, biofabrication of tissues and organs that can replace the damage or diseases, is important,” Ozbolat told R&D Magazine after his seminar presentation at Interphex last week in New York City, titled “3D Bioprinting of Living Tissues & Organs.”

3D bioprinting is the process of creating cell patterns in a confined space using 3D-printing technologies, where cell function and viability are preserved within the printed construct.


“If we’re able to make organs on demand, that will be highly beneficial to society,” said Ozbolat. “We have the capability to pattern cells, locate them and then make the same thing that exists in the body.”

3D bioprinting of tissues and organs

Sean V Murphy & Anthony Atala
Nature Biotechnology 32, 773–785 (2014)       doi:10.1038/nbt.2958


Additive manufacturing, otherwise known as three-dimensional (3D) printing, is driving major innovations in many areas, such as engineering, manufacturing, art, education and medicine. Recent advances have enabled 3D printing of biocompatible materials, cells and supporting components into complex 3D functional living tissues. 3D bioprinting is being applied to regenerative medicine to address the need for tissues and organs suitable for transplantation. Compared with non-biological printing, 3D bioprinting involves additional complexities, such as the choice of materials, cell types, growth and differentiation factors, and technical challenges related to the sensitivities of living cells and the construction of tissues. Addressing these complexities requires the integration of technologies from the fields of engineering, biomaterials science, cell biology, physics and medicine. 3D bioprinting has already been used for the generation and transplantation of several tissues, including multilayered skin, bone, vascular grafts, tracheal splints, heart tissue and cartilaginous structures. Other applications include developing high-throughput 3D-bioprinted tissue models for research, drug discovery and toxicology.


Future Technologies : Bioprinting

3D printing is increasingly permitting the direct digital manufacture (DDM) of a wide variety of plastic and metal items. While this in itself may trigger a manufacturing revolution, far more startling is the recent development of bioprinters. These artificially construct living tissue by outputting layer-upon-layer of living cells. Currently all bioprinters are experimental. However, in the future, bioprinters could revolutionize medical practice as yet another element of the New Industrial Convergence.

Bioprinters may be constructed in various configurations. However, all bioprinters output cells from a bioprint head that moves left and right, back and forth, and up and down, in order to place the cells exactly where required. Over a period of several hours, this permits an organic object to be built up in a great many very thin layers.

In addition to outputting cells, most bioprinters also output a dissolvable gel to support and protect cells during printing. A possible design for a future bioprinter, shown in the final stages of printing out a replacement human heart, appears below.



Bioprinting Pioneers

Several experimental bioprinters have already been built. For example, in 2002 Professor Makoto Nakamura realized that the droplets of ink in a standard inkjet printer are about the same size as human cells. He therefore decided to adapt the technology, and by 2008 had created a working bioprinter that can print out biotubing similar to a blood vessel. In time, Professor Nakamura hopes to be able to print entire replacement human organs ready for transplant. You can learn more about this groundbreaking work here or read this message from Professor Nakamura. The movie below shows in real-time the biofabrication of a section of biotubing using his modified inkjet technology.


Another bioprinting pioneer is Organovo. This company was set up by a research group led by Professor Gabor Forgacs from the University of Missouri, and in March 2008 managed to bioprint functional blood vessels and cardiac tissue using cells obtained from a chicken. Their work relied on a prototype bioprinter with three print heads. The first two of these output cardiac and endothelial cells, while the third dispensed a collagen scaffold — now termed ‘bio-paper’ — to support the cells during printing.

Since 2008, Organovo has worked with a company called Invetech to create a commercial bioprinter called the NovoGen MMX. This is loaded with bioink spheroids that each contain an aggregate of tens of thousands of cells. To create its output, the NovoGen first lays down a single layer of a water-based bio-paper made from collagen, gelatin or other hydrogels. Bioink spheroids are then injected into this water-based material. As illustrated below, more layers are subsequently added to build up the final object. Amazingly, Nature then takes over and the bioink spheroids slowly fuse together. As this occurs, the biopaper dissolves away or is otherwise removed, thereby leaving a final bioprinted body part or tissue.
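The alternating deposit-then-inject sequence described above (lay down a hydrogel bio-paper layer, place bioink spheroids into it, repeat, then let the spheroids fuse) can be sketched as a simple loop. This is a purely illustrative sketch; the class and function names are hypothetical and are not Organovo’s actual control software.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One printed layer: a hydrogel 'bio-paper' sheet plus the
    bioink spheroids injected into it at (x, y) drop positions."""
    biopaper: str
    spheroids: list = field(default_factory=list)

def print_construct(layer_plans, biopaper="collagen"):
    """Build a construct bottom-up: for each planned layer, deposit
    bio-paper, then inject each spheroid into the gel."""
    construct = []
    for plan in layer_plans:
        layer = Layer(biopaper=biopaper)
        for xy in plan:
            layer.spheroids.append(xy)   # inject one spheroid
        construct.append(layer)
    # Fusion of spheroids and removal of the bio-paper happen
    # naturally after printing, as described in the text.
    return construct

# A toy three-layer tube: spheroids on the outer layers, one in the middle.
tube = print_construct([[(0, 0), (1, 0)], [(0, 1)], [(0, 0), (1, 0)]])
print(len(tube))  # 3
```

The key design point the text emphasizes is that the printer only places material; the biological self-organization afterwards is not something the control loop has to model.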


bioprinting stages

As Organovo have demonstrated, using their bioink printing process it is not necessary to print all of the details of an organ with a bioprinter, as once the relevant cells are placed in roughly the right place Nature completes the job. This point is powerfully illustrated by the fact that the cells contained in a bioink spheroid are capable of rearranging themselves after printing. For example, experimental blood vessels have been bioprinted using bioink spheroids comprised of an aggregate mix of endothelial, smooth muscle and fibroblast cells. Once placed in position by the bioprint head, and with no technological intervention, the endothelial cells migrate to the inside of the bioprinted blood vessel, the smooth muscle cells move to the middle, and the fibroblasts migrate to the outside.

In more complex bioprinted materials, intricate capillaries and other internal structures also naturally form after printing has taken place. The process may sound almost magical. However, as Professor Forgacs explains, it is no different to the cells in an embryo knowing how to configure into complicated organs. Nature has been evolving this amazing capability for millions of years. Once in the right places, appropriate cell types somehow just know what to do.

In December 2010, Organovo created the first blood vessels to be bioprinted using cells cultured from a single person. The company has also successfully implanted bioprinted nerve grafts into rats, and anticipates human trials of bioprinted tissues by 2015. However, it also expects that the first commercial application of its bioprinters will be to produce simple human tissue structures for toxicology tests. These will enable medical researchers to test drugs on bioprinted models of the liver and other organs, thereby reducing the need for animal tests.

In time, and once human trials are complete, Organovo hopes that its bioprinters will be used to produce blood vessel grafts for use in heart bypass surgery. The intention is then to develop a wider range of tissue-on-demand and organs-on-demand technologies. To this end, researchers are now working on tiny mechanical devices that can artificially exercise and hence strengthen bioprinted muscle tissue before it is implanted into a patient.

Organovo anticipates that its first artificial human organ will be a kidney. This is because, in functional terms, kidneys are one of the more straightforward parts of the body. The first bioprinted kidney may in fact not even need to look just like its natural counterpart or duplicate all of its features. Rather, it will simply have to be capable of cleaning waste products from the blood. You can read more about the work of Organovo and Professor Forgacs in this article from Nature.

Regenerative Scaffolds and Bones

A further research team with the long-term goal of producing human organs-on-demand has created the Envisiontec Bioplotter. Like Organovo’s NovoGen MMX, this outputs bio-ink ‘tissue spheroids’ and supportive scaffold materials including fibrin and collagen hydrogels. But in addition, the Envisiontec can also print a wider range of biomaterials. These include biodegradable polymers and ceramics that may be used to support and help form artificial organs, and which may even be used as bioprinting substitutes for bone.

Talking of bone, a team led by Jeremy Mao at the Tissue Engineering and Regenerative Medicine Lab at Columbia University is working on the application of bioprinting in dental and bone repairs. Already, a bioprinted, mesh-like 3D scaffold in the shape of an incisor has been implanted into the jaw bone of a rat. This featured tiny, interconnecting microchannels that contained ‘stem cell-recruiting substances’. In just nine weeks after implantation, these triggered the growth of fresh periodontal ligaments and newly formed alveolar bone. In time, this research may enable people to be fitted with living, bioprinted teeth, or else scaffolds that will cause the body to grow new teeth all by itself. You can read more about this development in this article from The Engineer.

In another experiment, Mao’s team implanted bioprinted scaffolds in place of the hip bones of several rabbits. Again these were infused with growth factors. As reported in The Lancet, over a four-month period the rabbits all grew new and fully functional joints around the mesh. Some even began to walk and otherwise place weight on their new joints only a few weeks after surgery. Sometime next decade, human patients may therefore be fitted with bioprinted scaffolds that will trigger the growth of replacement hips and other bones. In a similar development, a team from Washington State University has also recently reported on four years of work using 3D printers to create a bone-like material that may in the future be used to repair injuries to human bones.

In Situ Bioprinting

The aforementioned research progress will in time permit organs to be bioprinted in a lab from a culture of a patient’s own cells. Such developments could therefore spark a medical revolution. Nevertheless, others are already trying to go further by developing techniques that will enable cells to be printed directly onto or into the human body in situ. Sometime next decade, doctors may therefore be able to scan wounds and spray on layers of cells to very rapidly heal them.

Already a team of bioprinting researchers led by Anthony Atala at the Wake Forest School of Medicine has developed a skin printer. In initial experiments they have taken 3D scans of test injuries inflicted on some mice and have used the data to control a bioprint head that has sprayed skin cells, a coagulant and collagen onto the wounds. The results are very promising, with the wounds healing in just two or three weeks, compared to about five or six weeks in a control group. Funding for the skin-printing project is coming in part from the US military, which is keen to develop in situ bioprinting to help heal wounds on the battlefield. At present the work is still in a pre-clinical phase, with Atala progressing his research using pigs. However, trials with human burn victims could be as little as five years away.

The potential to use bioprinters to repair our bodies in situ is pretty mind blowing. In perhaps no more than a few decades it may be possible for robotic surgical arms tipped with bioprint heads to enter the body, repair damage at the cellular level, and then also repair their point of entry on their way out. Patients would still need to rest and recuperate for a few days as bioprinted materials fully fused into mature living tissue. However, most patients could potentially recover from very major surgery in less than a week.

Cosmetic Applications …

Bioprinting Implications …

More information on bioprinting can be found in my books 3D Printing: Second Edition and The Next Big Thing. There is also a bioprinting section in my 3D Printing Directory. Oh, and there is also a great infographic about bioprinting here. Enjoy!


How to print out a blood vessel

New work moves closer to the age of organs on demand.

Blood vessels can now be ‘printed out’ by machine. Could bigger structures be in the future? (Image: Susumu Nishinaga / Science Photo Library)

Read Full Post »

New method for 3D imaging of brain tumors

Larry H. Bernstein, MD, FCAP, Curator




Third-Harmonic Generation Microscopy Provides In Situ Brain Tumor Imaging

AMSTERDAM, Netherlands, April 25, 2016 — A technique involving third-harmonic generation microscopy could allow neurosurgeons to image and assess brain tumor boundaries during surgery, providing optical biopsies in near-real time and increasing the accuracy of tissue removal.

Pathologists typically use staining methods, in which chemicals like hematoxylin and eosin turn different tissue components blue and red, revealing their structure and whether any tumor cells are present. A definitive diagnosis can take up to 24 hours, meaning surgeons may not realize that some cancerous tissue escaped their attention until after surgery — requiring a second operation and more risk.


Tissue from a patient diagnosed with low-grade glioma. The green image is taken with the new method, while the pink uses conventional hematoxylin and eosin staining. From the upper left to the lower right, both images show increasing cell density due to more tumor tissue. The insets reveal the high density of tumor cells. Courtesy of N.V. Kuzmin et al./VU University Amsterdam.

Brain tumors — specifically glial brain tumors — are often spread out and mixed in with the healthy tissue, presenting a particular challenge. Surgery, irradiation and chemotherapy often cause substantial collateral damage to the surrounding brain tissue.

Now researchers from VU University Amsterdam, led by Professor Marloes Groot, have demonstrated a label-free optical method for imaging cancerous brain tissue. They were able to produce most images in under a minute; smaller ones took less than 1 s, while larger images of a few square millimeters took 5 min.

The study involved firing short, 200-fs, 1200-nm laser pulses into the tissue. When three photons converged at the same time and place, they interacted with the nonlinear optical properties of the tissue. Through the phenomenon of third harmonic generation, the interactions produced a single 400- or 600-nm photon (in the case of third or second harmonic generation, respectively).

The shorter-wavelength photon scatters in the tissue, and when it reaches a detector — in this case a high-sensitivity GaAsP photomultiplier tube — it reveals what the tissue looks like inside. The resulting images enabled clear recognition of cellularity, nuclear pleomorphism and rarefaction of neuropil in the tissue.
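The wavelengths quoted above follow from energy conservation: n pump photons of the same wavelength merge into one photon with n times the energy, and hence one n-th the wavelength. A minimal illustrative calculation (not code from the paper):

```python
def harmonic_wavelength_nm(pump_nm: float, order: int) -> float:
    """Wavelength of the photon emitted by an order-n harmonic process.

    Energy conservation: n pump photons of wavelength pump_nm combine
    into a single photon of wavelength pump_nm / n.
    """
    if order < 2:
        raise ValueError("harmonic order must be >= 2")
    return pump_nm / order

# 1200-nm pump pulses, as used in the study:
print(harmonic_wavelength_nm(1200, 3))  # third harmonic: 400.0 nm
print(harmonic_wavelength_nm(1200, 2))  # second harmonic: 600.0 nm
```

This is why the detector in the setup looks for 400-nm (THG) and 600-nm (SHG) light against the 1200-nm excitation background.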

While this technique has been used in other applications — to image insects and fish embryos, for example — the researchers said this is the first time it’s been used to analyze glial brain tumors.

Groot and her team are now developing a handheld device for tumor border detection during surgery. The incoming laser pulses can only reach a depth of about 100 μm into the tissue currently; to reach further, Groot envisions attaching a needle that can pierce the tissue and deliver photons deeper.

The research was published in Biomedical Optics Express, a publication of The Optical Society (OSA) (doi: 10.1364/boe.7.001889).


Third harmonic generation imaging for fast, label-free pathology of human brain tumors

Biomedical Optics Express 2016; 7(5): 1889–1904.    doi: 10.1364/BOE.7.001889

In brain tumor surgery, recognition of tumor boundaries is key. However, intraoperative assessment of tumor boundaries by the neurosurgeon is difficult. Therefore, there is an urgent need for tools that provide the neurosurgeon with pathological information during the operation. We show that third harmonic generation (THG) microscopy provides label-free, real-time images of histopathological quality; increased cellularity, nuclear pleomorphism, and rarefaction of neuropil in fresh, unstained human brain tissue could be clearly recognized. We further demonstrate THG images taken with a GRIN objective, as a step toward in situ THG microendoscopy of tumor boundaries. THG imaging is thus a promising tool for optical biopsies.


Glial tumors (gliomas) account for almost 80% of the tumors originating from brain tissue. The vast majority of these tumors are so-called ‘diffuse gliomas’ as they show very extensive (‘diffuse’) growth into the surrounding brain parenchyma. With surgical resection, irradiation, and/or chemotherapy it is impossible to eliminate all glioma cells without serious damage to the brain tissue. As a consequence, until now, patients with a diffuse glioma have had a poor prognosis, a situation which strongly contributes to the fact that brain tumor patients experience more years of life lost than patients with any other type of cancer [1,2].

Meanwhile it has also been demonstrated that the prognosis of patients with a diffuse glioma correlates with the extent of resection [3–5]. During brain surgery, however, it is extremely difficult for the neurosurgeon to determine the boundary of the tumor, i.e. whether a brain area contains tumor cells or not. If the neurosurgeon could have histopathological information on the tumor boundaries during brain surgery, then recognition of these tumor boundaries and with that, the surgical resection, could be significantly improved.

Occasionally, intra-operative analysis using hematoxylin-and-eosin (H&E) stained sections of snap-frozen material or smear preparations is performed by the pathologist to help establish brain tumor boundaries, but this procedure only allows analysis of small, selected regions, can only be performed on tissue fragments that have already been resected, and is either rather time-consuming (frozen section diagnosis) or does not allow analysis of the tumor in its histological context (smear preparations). Fluorescence imaging techniques are increasingly used during surgery [6,7] but are associated with several drawbacks, such as heterogeneous delivery and nonspecific staining [8,9]. In particular, low-grade gliomas and normal brain tissue have an intact blood-brain barrier and take up little circulating dye [10–12]. Alternative techniques are therefore required that can detect the presence of tumor cells in tissue without fluorescent labels and with a speed that enables ‘live’ feedback to the surgeon while he/she operates.

The past year has seen exciting new developments in which optical coherence tomography [13] and stimulated Raman microscopy [14,15] were reported to reliably detect tumor tissue in the brain of human glioma patients, and a handheld Raman spectroscopy device was even implemented intra-surgically to assess brain tissue prior to excision [16]. These techniques are especially sensitive in densely tumor-infiltrated areas, and for the Raman spectroscopy device study a sensitivity limit of 17 tumor cells in an area of 150 × 150 μm2 was reported. The discriminating power of the Raman techniques is based on subtle differences in the vibrational spectra of tumor tissue and healthy tissue, and they require extensive comparison of experimental spectra against libraries of reference spectra. A technique capable of directly visualizing the classical histopathological hallmark criteria currently used by pathologists for classification of tumor tissue could potentially be even more reliable and make the transition from the current practice—histopathological analysis of fixed tissue—to in situ optical biopsy easier. Diffuse gliomas are histopathologically characterized by variably increased cellularity, nuclear pleomorphism and—especially in higher-grade neoplasms—brisk mitotic activity, microvascular proliferation, and necrosis. To visualize these features in live tissue, a technique that elucidates the morphology of tissue is required. In this context, third harmonic generation (THG) microscopy is a promising tool because of its capacity to visualize almost the full morphology of tissue. THG is a nonlinear optical process that relies on spatial variations of the third-order non-linear susceptibility χ(3) intrinsic to the tissue and (in the case of brain tissue) mainly arises from interfaces with lipid-rich molecules [17–27]. SHG signals arise from an optical nonlinear process involving non-centrosymmetric molecules present in, for example, microtubules and collagen.
THG has been successfully applied to image unstained samples such as insect embryos, plant seeds and intact mammalian tissue [28], epithelial tissues [29–31], zebrafish embryos [32], and the zebrafish nervous system [33]. In brain tissue of mice, augmented by co-recording of SHG signals, THG was shown to visualize cells, nuclei, the inner and outer contours of axons, blood cells, and vessels, resulting in the visualization of both gray and white matter (GM and WM) as well as vascularization, up to a depth of 350 μm [24,26]. Here, we explore the potential of THG and SHG imaging for real-time analysis of ex-vivo human brain tissue in the challenging cases of diffuse tumor invasion by low-grade brain tumors, as well as of high-grade gliomas and structurally normal brain tissue.


Multiphoton imaging

THG and SHG are nonlinear optical processes that may occur in tissue depending on the nonlinear susceptibility coefficients χ(3) and χ(2) of the tissue and upon satisfying phase-matching conditions [17–19,21,23–27]. In the THG process, three incident photons are converted into one photon with triple the energy and one-third of the wavelength (Fig. 1(A)). In the SHG process, signals result from the conversion of an incident photon pair into one photon with twice the energy and half the wavelength. Two- and three-photon excited fluorescence signals (2PF, 3PF) may simultaneously be generated by intrinsic proteins (Fig. 1(B)). As a result, a set of distinct (harmonic) and broadband (autofluorescence) spectral peaks is generated in the visible range. The imaging setup (Fig. 1(C)) to generate and collect these signals consisted of a commercial two-photon laser-scanning microscope (TriMScope I, LaVision BioTec GmbH) and a femtosecond laser source. The laser source was an optical parametric oscillator (Mira-OPO, APE) pumped at 810 nm by a Ti:sapphire oscillator (Coherent Chameleon Ultra II). The OPO generates 200 fs pulses at 1200 nm with a repetition rate of 80 MHz. We selected this wavelength as it falls in the tissue transparency window, providing deeper penetration and reduced photodamage compared to the 700–1000 nm range, as well as harmonic signals generated in the visible wavelength range, facilitating their collection and detection with conventional objectives and detectors. We focused the OPO beam on the sample using a 25×/1.10 (Nikon APO LWD) water-dipping objective (MO). The 1200 nm beam focal spot size on the sample was dlateral ~0.7 μm and daxial ~4.1 μm. It was measured with 0.175 μm fluorescent microspheres (see Section 3.4) yielding two- and three-photon resolution values Δ2P,lateral ~0.5 μm, Δ2P,axial ~2.9 μm, Δ3P,lateral ~0.4 μm, and Δ3P,axial ~2.4 μm.
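As a quick numerical sanity check (ours, not from the paper), the harmonic wavelengths follow from energy conservation: the 1200 nm fundamental yields THG at 400 nm and SHG at 600 nm, matching the detection filters used in the setup.

```python
# Sanity check of the harmonic-conversion arithmetic described above:
# THG merges three fundamental photons into one at lambda/3,
# SHG merges two photons into one at lambda/2 (energy conservation).
H = 6.62607015e-34  # Planck constant, J*s
C = 2.99792458e8    # speed of light, m/s

def photon_energy_j(wavelength_nm):
    """Photon energy in joules for a wavelength given in nm."""
    return H * C / (wavelength_nm * 1e-9)

fundamental_nm = 1200.0           # OPO output used in the setup
thg_nm = fundamental_nm / 3       # 400 nm, the THG detection band
shg_nm = fundamental_nm / 2       # 600 nm, the SHG detection band

# Three (two) fundamental photons carry the energy of one THG (SHG) photon:
assert abs(3 * photon_energy_j(fundamental_nm) - photon_energy_j(thg_nm)) < 1e-25
assert abs(2 * photon_energy_j(fundamental_nm) - photon_energy_j(shg_nm)) < 1e-25
```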
Two high-sensitivity GaAsP photomultiplier tubes (PMT, Hamamatsu H7422-40) equipped with narrowband filters at 400 nm and 600 nm were used to collect the THG and SHG signals, respectively, as a function of position of the focus in the sample. The signals were filtered from the 1200 nm fundamental photons by a dichroic mirror (Chroma T800LPXRXT, DM1), split into SHG and THG channels by a dichroic mirror (Chroma T425LPXR, DM2), and passed through narrow-band interference filters (F) for SHG (Chroma D600/10X) and THG (Chroma Z400/10X) detection. The efficient back-scattering of the harmonic signals allowed for their detection in epi-direction. The laser beam was transversely scanned over the sample by a pair of galvo mirrors (GM). THG and SHG modalities are intrinsically confocal and therefore provide direct depth sectioning. We obtained a full 3D image of the tissue volume by scanning the microscope objective with a stepper motor in the vertical (z) direction. The mosaic imaging of the sample was performed by transverse (xy) scanning of the motorized translation stage. Imaging data was acquired with the TriMScope I software (“Imspector Pro”); image stacks were stored in 16-bit tiff-format and further processed and analyzed with “ImageJ” software (ver. 1.49m, NIH, USA). All images were processed with logarithmic contrast enhancement.
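The logarithmic contrast enhancement mentioned above can be sketched in a few lines. This is a minimal stand-in assuming the common out = log(1 + I) mapping normalized to the 16-bit range; the exact ImageJ routine the authors applied may differ.

```python
import numpy as np

def log_contrast(stack, out_max=65535):
    """Logarithmic contrast enhancement for a 16-bit image stack,
    assuming out = out_max * log(1 + I) / log(1 + I_max).
    A sketch of the processing step described above."""
    stack = np.asarray(stack, dtype=np.float64)
    peak = stack.max()
    if peak == 0:
        return np.zeros_like(stack, dtype=np.uint16)
    enhanced = out_max * np.log1p(stack) / np.log1p(peak)
    return enhanced.astype(np.uint16)

# A weak harmonic signal (counts of 10) is lifted relative to the peak,
# making dim structures visible alongside bright ones:
frame = np.array([[0, 10], [100, 1000]])
out = log_contrast(frame)
```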

Fig. 1 THG/SHG microscopy for brain tissue imaging. (A) Energy level diagram of the second (SHG) and third (THG) harmonic generation process. (B) Energy level diagram of the two- (2PF) and three-photon (3PF) excited auto-fluorescence process. (C) Multiphoton microscope setup: Laser producing 200 fs pulses at 1200 nm; GM – X-Y galvo-scanner mirrors; SL – scan lens; TL – tube lens; MO – microscope objective; DM1 – dichroic mirror reflecting back-scattered THG/SHG photons to the PMT detectors; DM2 – dichroic mirror splitting SHG and THG channels; F – narrow-band SHG and THG interference filters; L – focusing lenses; PMT – photomultiplier tube detectors. (D) Infrared photons (white arrow) are focused deep in the brain tissue, converted to THG (green) and SHG (red) photons, scattered back (green/red arrows) and epi-detected. The nonlinear optical processes result in label-free contrast images with sub-cellular resolution and intrinsic depth sectioning. (E and F) Freshly-excised low-grade (E) and high-grade (F) glioma tissue samples in artificial cerebrospinal fluid (ACSF) in a Petri dish with a millimeter paper underneath for scale. (G) An agar-embedded tumor tissue sample under 0.17 mm glass cover slip with the microscope objective (MO) on top.

Endomicroscopy imaging

For endomicroscopic imaging we used a commercial high-numerical-aperture (NA) multi-element micro-objective lens (GT-MO-080-018-810, GRINTECH) composed of a plano-convex lens and two GRaded INdex (GRIN) lenses with aberration compensation, object NA = 0.80 and object working distance 200 µm (in water), image NA = 0.18 and image working distance 200 µm (in air), magnification × 4.8 and field-of-view diameter of 200 μm. The GRIN lenses and the plano-convex lens were mounted in a waterproof stainless steel housing with an outer diameter of 1.4 mm and a total length of 7.5 mm. Originally designed for a wavelength range of 800–900 nm [36–41], this micro-objective lens was used for focusing of 1200 nm femtosecond pulses and collection of back-scattered harmonic and fluorescence photons. A coupling lens with f = 40 mm (NA = 0.19, Qioptiq, ARB2 NIR, dia. 25 mm) focused the scanned laser beam in the image plane of the micro-objective lens and forwarded the epi-detected harmonic and fluorescence photons to the PMTs.

We characterized the lateral (x) and axial (z) resolution of the micro-objective lens by 3D imaging of fluorescence microspheres (PS-Speck Microscope Point Source Kit, P7220, Molecular Probes). We used “blue” and “deep red” microspheres, 0.175 ± 0.005 μm in diameter, with excitation/emission maxima at 360/440 nm and 630/660 nm to obtain three-photon (3P) and two-photon (2P) point spread function (PSF) profiles. The excitation wavelength was 1200 nm, and fluorescence signals were detected in the 400 ± 5 nm (3P) and 600 ± 5 nm (2P) spectral windows, just as in the brain tissue imaging experiments. 1 μL of “blue” and “deep red” sphere suspensions were applied to a propanol-cleaned 75 × 26 × 1 mm3 glass slide. The mixed microsphere suspension was left to dry for 20 min and was then imaged with the micro-objective lens via a water immersion layer. The assembly of the coupling lens and the micro-objective lens was vertically (z) scanned with a step of 0.5 μm, and stacks of two-/three-photon images were recorded. Line profiles were then taken over the lateral (xy) images of the fluorescent spheres with maximal intensity (in focus), and fluorescence counts were plotted as a function of the lateral coordinate (x). The axial (z) scan values of the two- and three-photon fluorescence signals were acquired by averaging the total fluorescence counts of the corresponding spheres and were plotted as a function of the axial coordinate (z). The lateral (x) and axial (z) 2P/3P profiles were then fitted with Gaussian functions, and full-width-at-half-maximum (FWHM) values were measured.
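The PSF measurement described above reduces to estimating an FWHM from a bead intensity profile. The sketch below uses synthetic data (not the authors' measurements) and estimates the FWHM directly from the half-maximum crossings; the paper instead fits Gaussians, which gives an equivalent width for clean profiles.

```python
import numpy as np

def fwhm(x, profile):
    """Estimate the full width at half maximum of a roughly Gaussian
    intensity profile by linear interpolation at the half-max crossings."""
    y = np.asarray(profile, dtype=float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # Interpolate the left and right half-max crossing positions
    left = x[i0] if i0 == 0 else np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    right = x[i1] if i1 == len(y) - 1 else np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return right - left

# Synthetic axial profile with sigma chosen so FWHM = 2.4 um by
# construction (the reported 3P axial value for the GRIN-free objective):
z = np.linspace(-5, 5, 201)                    # axial position, um
sigma = 2.4 / (2 * np.sqrt(2 * np.log(2)))     # FWHM = 2*sqrt(2 ln 2)*sigma
profile = np.exp(-z**2 / (2 * sigma**2))
print(fwhm(z, profile))                        # ≈ 2.4
```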

….. Results …..

Conclusions

The results shown here provide the first evidence that—by applying the same microscopic criteria that are used by the pathologist, i.e. increased cellularity, nuclear pleomorphism, and rarefaction of neuropil—THG/SHG ex-vivo microscopy can be used to recognize the presence of diffuse infiltrative glioma in fresh, unstained human brain tissue. Images and a first diagnosis can be provided in seconds, with the ‘inspection mode’, by moving the sample under the scanning microscope (see Visualization 4 and Visualization 5), or in about 5 minutes if an area has to be inspected with sub-cellular detail. The sensitivity of THG to interfaces provides images with excellent contrast in which cell-by-cell variations are visualized. The quality of the images and the speed with which they can be recorded make THG a promising tool for quick assessment of the nature of excised tissue. Importantly, because THG/SHG images are very close to those of histological slides, we expect that the surgeon (or pathologist) will need very little additional training for adequate interpretation of the images. We are planning to construct a THG/SHG ex-vivo tabletop device consisting of a compact laser source and a laser-scanning microscope requiring a physical footprint of only 1 m2, to be placed in an operating room, enabling immediate feedback to the surgeon on the nature of excised tissue, during the operation. With this device, we will perform a quantitative study of the added value of rapid THG/SHG pathological feedback during surgery for the final success of the neurosurgery. Finally, we note that THG/SHG imaging does not induce artifacts associated with fixation, freezing, and staining; therefore, tissue fragments examined ex-vivo can still be used for subsequent immunochemical and/or molecular analysis.

The microendoscopy THG/SHG imaging results represent an important step toward the development of a THG/SHG-based bioptic needle, and show that the use of such a needle for in situ optical sampling for optimal resection of gliomas is indeed a viable prospect, as has been demonstrated also before for multi-photon microscopies [38,49–54]. Although there are several issues associated with the operation of a needle-like optical device, such as the fact that blood in the surgical cavity may obscure the view, and the fact that only small areas can be biopsied with a needle, it may be a valuable tool in cases where sparing healthy tissue is of such vital importance as in brain surgery. Therefore, the reasonably good quality of the THG images taken with the GRIN micro-objective shown here, together with the developments in the field of microendoscopy, warrant further development of THG/SHG into a true handheld device. This next step, a true handheld bioptic needle, requires an optical fiber to transport the light from a small footprint laser to the GRIN micro-objective, and a small 2D scanner unit, to enable placing the laser at a sufficient distance from the patient. Patient-safe irradiation levels for THG imaging will have to be determined but are expected to lie in the 10–50 mW range [55–58]. This implies that only minor optimization of signal collection efficiency needs to be achieved, because the images of Fig. 10 were measured with 50 mW incident power.

THG/SHG imaging thus holds great promise for improving surgical procedures, thereby reducing the need for second surgeries and the loss of function by excising non-infiltrated brain tissue, as well as improving survival and quality of life of the patients. In addition, the success in the challenging case of diffuse gliomas promises great potential of THG/SHG-based histological analysis for a much wider spectrum of diagnostic applications.

References and links

1. N. G. Burnet, S. J. Jefferies, R. J. Benson, D. P. Hunt, and F. P. Treasure, “Years of life lost (YLL) from cancer is an important measure of population burden–and should be considered when allocating research funds,” Br. J. Cancer 92(2), 241–245 (2005).

2. J. A. Schwartzbaum, J. L. Fisher, K. D. Aldape, and M. Wrensch, “Epidemiology and molecular pathology of glioma,” Nat. Clin. Pract. Neurol. 2(9), 494–516 (2006).

3. J. S. Smith, E. F. Chang, K. R. Lamborn, S. M. Chang, M. D. Prados, S. Cha, T. Tihan, S. Vandenberg, M. W. McDermott, and M. S. Berger, “Role of extent of resection in the long-term outcome of low-grade hemispheric gliomas,” J. Clin. Oncol. 26(8), 1338–1345 (2008).

4. N. Sanai and M. S. Berger, “Glioma extent of resection and its impact on patient outcome,” Neurosurgery 62(4), 753–766 (2008).

5. I. Y. Eyüpoglu, M. Buchfelder, and N. E. Savaskan, “Surgical resection of malignant gliomas-role in optimizing patient outcome,” Nat. Rev. Neurol. 9(3), 141–151 (2013).

6. U. Pichlmeier, A. Bink, G. Schackert, and W. Stummer, “Resection and survival in glioblastoma multiforme: An RTOG recursive partitioning analysis of ALA study patients,” Neuro-oncol. 10(6), 1025–1034 (2008).

7. W. Stummer, J. C. Tonn, C. Goetz, W. Ullrich, H. Stepp, A. Bink, T. Pietsch, and U. Pichlmeier, “5-Aminolevulinic Acid-Derived Tumor Fluorescence: The Diagnostic Accuracy of Visible Fluorescence Qualities as Corroborated by Spectrometry and Histology and Postoperative Imaging,” Neurosurgery 74(3), 310–320 (2014).

….. more

Tables (1)


Table 1 Pre-operative diagnoses and cell densities observed in the studied brain tissue samples by THG imaging and corresponding H&E histopathology.


Imaging of Cancer Cells

Larry H. Bernstein, MD, FCAP, Curator



Microscope uses nanosecond-speed laser and deep learning to detect cancer cells more efficiently

April 13, 2016

Scientists at the California NanoSystems Institute at UCLA have developed a new technique for identifying cancer cells in blood samples faster and more accurately than the current standard methods.

In one common approach to testing for cancer, doctors add biochemicals to blood samples. Those biochemicals attach biological “labels” to the cancer cells, and those labels enable instruments to detect and identify them. However, the biochemicals can damage the cells and render the samples unusable for future analyses. There are other current techniques that don’t use labeling but can be inaccurate because they identify cancer cells based only on one physical characteristic.

Time-stretch quantitative phase imaging (TS-QPI) and analytics system

The new technique images cells without destroying them and can identify 16 physical characteristics — including size, granularity and biomass — instead of just one.

The new technique combines two components that were invented at UCLA:

A “photonic time stretch” microscope, which is capable of quickly imaging cells in blood samples. Invented by Bahram Jalali, professor and Northrop-Grumman Optoelectronics Chair in electrical engineering, it works by taking pictures of flowing blood cells using laser bursts (similar to how a camera uses a flash). Each flash only lasts nanoseconds (billionths of a second) to avoid damage to cells, but that normally means the images are both too weak to be detected and too fast to be digitized by normal instrumentation. The new microscope overcomes those challenges by using specially designed optics that amplify and boost the clarity of the images, and simultaneously slow them down enough to be detected and digitized at a rate of 36 million images per second.

A deep learning computer program, which identifies cancer cells with more than 95 percent accuracy. Deep learning is a form of artificial intelligence that uses complex algorithms to extract patterns and knowledge from rich multidimensional datasets, with the goal of achieving accurate decision making.

The study was published in the open-access journal Scientific Reports. The researchers write in the paper that the system could lead to data-driven diagnoses based on cells’ physical characteristics, which could allow quicker and earlier diagnoses of cancer, for example, and better understanding of the tumor-specific gene expression in cells, which could facilitate new treatments for disease.

The research was supported by NantWorks, LLC.


Abstract of Deep Learning in Label-free Cell Classification

Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. This system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.


Claire Lifan Chen, Ata Mahjoubfar, Li-Chia Tai, Ian K. Blaby, Allen Huang, Kayvan Reza Niazi & Bahram Jalali. Deep Learning in Label-free Cell Classification. Scientific Reports 6, Article number: 21471 (2016); doi:10.1038/srep21471 (open access)


Deep learning extracts patterns and knowledge from rich multidimensional datasets. While it is extensively used for image recognition and speech processing, its application to label-free classification of cells has not been exploited. Flow cytometry is a powerful tool for large-scale cell analysis due to its ability to measure anisotropic elastic light scattering of millions of individual cells as well as emission of fluorescent labels conjugated to cells1,2. However, each cell is represented by a single value per detection channel (forward scatter, side scatter, and emission bands) and often requires labeling with specific biomarkers for acceptable classification accuracy1,3. Imaging flow cytometry4,5 on the other hand captures images of cells, revealing significantly more information about the cells. For example, it can distinguish clusters and debris that would otherwise result in false positive identification in a conventional flow cytometer based on light scattering6.

In addition to classification accuracy, throughput is another critical specification of a flow cytometer. Indeed, high throughput, typically 100,000 cells per second, is needed to screen a large enough cell population to find rare abnormal cells that are indicative of early-stage diseases. However, there is a fundamental trade-off between throughput and accuracy in any measurement system7,8. For example, imaging flow cytometers face a throughput limit imposed by the speed of the CCD or CMOS cameras, a number that is approximately 2000 cells/s for present systems9. Higher flow rates lead to blurred cell images due to the finite camera shutter speed. Many applications of flow analyzers such as cancer diagnostics, drug discovery, biofuel development, and emulsion characterization require classification of large sample sizes with a high degree of statistical accuracy10. This has fueled research into alternative optical diagnostic techniques for characterization of cells and particles in flow.

Recently, our group has developed a label-free imaging flow-cytometry technique based on coherent optical implementation of the photonic time stretch concept11. This instrument overcomes the trade-off between sensitivity and speed by using Amplified Time-stretch Dispersive Fourier Transform12,13,14,15. In time stretched imaging16, the object’s spatial information is encoded in the spectrum of laser pulses within a sub-nanosecond pulse duration (Fig. 1). Each pulse representing one frame of the camera is then stretched in time so that it can be digitized in real-time by an electronic analog-to-digital converter (ADC). The ultra-fast pulse illumination freezes the motion of high-speed cells or particles in flow to achieve blur-free imaging. Detection sensitivity is challenged by the low number of photons collected during the ultra-short shutter time (optical pulse width) and the drop in the peak optical power resulting from the time stretch. These issues are solved in time stretch imaging by implementing a low noise-figure Raman amplifier within the dispersive device that performs time stretching8,11,16. Moreover, warped stretch transform17,18 can be used in time stretch imaging to achieve optical image compression and nonuniform spatial resolution over the field-of-view19. In the coherent version of the instrument, the time stretch imaging is combined with spectral interferometry to measure quantitative phase and intensity images in real-time and at high throughput20. Integrated with a microfluidic channel, the coherent time stretch imaging system in this work measures both the quantitative optical phase shift and loss of individual cells as a high-speed imaging flow cytometer, capturing 36 million images per second at flow rates as high as 10 meters per second, reaching up to 100,000 cells per second throughput.
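The quoted figures support a useful back-of-envelope check on spatial sampling along the flow direction. The derivation below is ours, assuming one line scan per laser pulse and a nominal 10 μm cell diameter (not numbers stated in this paragraph):

```python
# Back-of-envelope throughput arithmetic for the time-stretch imager
# described above (a sketch; actual duty cycles and cell spacing vary).
line_rate = 36e6    # line scans (laser pulses) per second
flow_speed = 10.0   # m/s, maximum quoted flow rate

# Spacing between successive line scans along the flow direction:
line_spacing_nm = flow_speed / line_rate * 1e9   # ~278 nm per line

# So a nominal 10 um cell is sampled by roughly 36 line scans
# even at full flow speed, which is why imaging stays blur-free:
lines_per_cell = 10e-6 / (flow_speed / line_rate)
```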

Figure 1: Time stretch quantitative phase imaging (TS-QPI) and analytics system; A mode-locked laser followed by a nonlinear fiber, an erbium doped fiber amplifier (EDFA), and a wavelength-division multiplexing (WDM) filter generate and shape a train of broadband optical pulses.


Box 1: The pulse train is spatially dispersed into a train of rainbow flashes illuminating the target as line scans. The spatial features of the target are encoded into the spectrum of the broadband optical pulses, each representing a one-dimensional frame. The ultra-short optical pulse illumination freezes the motion of cells during high-speed flow to achieve blur-free imaging with a throughput of 100,000 cells/s. The phase shift and intensity loss at each location within the field of view are embedded into the spectral interference patterns using a Michelson interferometer.

Box 2: The interferogram pulses were then stretched in time so that spatial information could be mapped into time through time-stretch dispersive Fourier transform (TS-DFT), and then captured by a single-pixel photodetector and an analog-to-digital converter (ADC). The loss of sensitivity at high shutter speed is compensated by stimulated Raman amplification during time stretch.

Box 3: (a) Pulse synchronization; the time-domain signal carrying serially captured rainbow pulses is transformed into a series of one-dimensional spatial maps, which are used for forming line images. (b) The biomass density of a cell leads to a spatially varying optical phase shift. When a rainbow flash passes through the cells, the changes in refractive index at different locations will cause phase walk-off at interrogation wavelengths. Hilbert transformation and phase unwrapping are used to extract the spatial phase shift. (c) Decoding the phase shift in each pulse at each wavelength and remapping it into a pixel reveals the protein concentration distribution within cells. The optical loss induced by the cells, embedded in the pulse intensity variations, is obtained from the amplitude of the slowly varying envelope of the spectral interferograms. Thus, quantitative optical phase shift and intensity loss images are captured simultaneously. Both images are calibrated based on the regions where the cells are absent. Cell features describing morphology, granularity, biomass, etc. are extracted from the images. (d) These biophysical features are used in a machine learning algorithm for high-accuracy label-free classification of the cells.

On another note, surface markers used to label cells, such as EpCAM21, are unavailable in some applications; for example, melanoma or pancreatic circulating tumor cells (CTCs) as well as some cancer stem cells are EpCAM-negative and will escape EpCAM-based detection platforms22. Furthermore, large-population cell sorting opens the door to downstream operations, where the negative impacts of labels on cellular behavior and viability are often unacceptable23. Cell labels may cause activating/inhibitory signal transduction, altering the behavior of the desired cellular subtypes, potentially leading to errors in downstream analysis, such as DNA sequencing and subpopulation regrowth. Thus, quantitative phase imaging (QPI) methods24,25,26,27 that categorize unlabeled living cells with high accuracy are needed. Coherent time stretch imaging is a method that enables quantitative phase imaging at ultrahigh throughput for non-invasive label-free screening of large numbers of cells.

In this work, the information of the quantitative optical loss and phase images is fused into expert-designed features, leading to a record label-free classification accuracy when combined with deep learning. Image mining techniques are applied, for the first time, to time stretch quantitative phase imaging to measure biophysical attributes including protein concentration, optical loss, and morphological features of single cells at an ultrahigh flow rate and in a label-free fashion. These attributes differ widely28,29,30,31 among cells, and their variations reflect important information about genotypes and physiological stimuli32. The multiplexed biophysical features thus lead to an information-rich hyper-dimensional representation of the cells for label-free classification with high statistical precision.

We further improved the accuracy, repeatability, and the balance between sensitivity and specificity of our label-free cell classification by a novel machine learning pipeline, which harnesses the advantages of multivariate supervised learning, as well as unique training by evolutionary global optimization of receiver operating characteristics (ROC). To demonstrate sensitivity, specificity, and accuracy of multi-feature label-free flow cytometry using our technique, we classified (1) OT-II hybridoma T-lymphocytes and SW-480 colon cancer epithelial cells, and (2) Chlamydomonas reinhardtii algal cells (herein referred to as Chlamydomonas) based on their lipid content, which is related to the yield in biofuel production. Our preliminary results show that compared to classification by individual biophysical parameters, our label-free hyperdimensional technique improves the detection accuracy from 77.8% to 95.5%, or in other words, reduces the classification inaccuracy by about five times.     ……..
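The "about five times" figure follows directly from the two reported accuracies, as a quick check shows:

```python
# The reported accuracies imply the quoted ~5x drop in inaccuracy:
single_feature_acc = 0.778   # best single biophysical parameter
hyperdim_acc = 0.955         # multi-feature + deep learning pipeline

error_before = 1 - single_feature_acc   # 22.2% misclassified
error_after = 1 - hyperdim_acc          # 4.5% misclassified
reduction = error_before / error_after  # ~4.9, i.e. "about five times"
```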


Feature Extraction

The decomposed components of sequential line scans form pairs of spatial maps, namely, optical phase and loss images as shown in Fig. 2 (see Section Methods: Image Reconstruction). These images are used to obtain biophysical fingerprints of the cells8,36. With domain expertise, raw images are fused and transformed into a suitable set of biophysical features, listed in Table 1, which the deep learning model further converts into learned features for improved classification.


The optical loss images of the cells are affected by the attenuation of multiplexed wavelength components passing through the cells. The attenuation itself is governed by the absorption of the light in cells as well as the scattering from the surface of the cells and from the internal cell organelles. The optical loss image is derived from the low frequency component of the pulse interferograms. The optical phase image is extracted from the analytic form of the high frequency component of the pulse interferograms using Hilbert Transformation, followed by a phase unwrapping algorithm. Details of these derivations can be found in Section Methods. Also, supplementary Videos 1 and 2 show measurements of cell-induced optical path length difference by TS-QPI at four different points along the rainbow for OT-II and SW-480, respectively.
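The phase-image pipeline described here takes the analytic signal via a Hilbert transformation and then removes the 2π ambiguities of the raw phase. A pure-Python sketch of just that unwrapping step, assuming the wrapped phases have already been extracted from the interferogram:

```python
import math

def unwrap(wrapped):
    """Unwrap a 1-D phase sequence (radians): shift each sample by a
    multiple of 2*pi so consecutive differences lie in (-pi, pi],
    mirroring the unwrapping applied after the Hilbert transform."""
    out = [wrapped[0]]
    for p in wrapped[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))  # fold jump into (-pi, pi]
        out.append(out[-1] + d)
    return out

# A wrapped jump from +3 rad to -3 rad is really a small step forward:
print(unwrap([0.0, 3.0, -3.0])[-1])   # ~3.283 (= 2*pi - 3)
```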

Table 1: List of extracted features.

Feature Name    Description         Category


Figure 3: Biophysical features formed by image fusion.

(a) Pairwise correlation matrix visualized as a heat map. The map depicts the correlation between all 16 major features extracted from the quantitative images. Diagonal elements of the matrix represent the correlation of each parameter with itself, i.e. the autocorrelation. The subsets in box 1, box 2, and box 3 show high correlation because they are mainly related to morphological, optical phase, and optical loss feature categories, respectively. (b) Ranking of biophysical features based on their AUCs in single-feature classification. Blue bars show the performance of the morphological parameters, which include diameter along the interrogation rainbow, diameter along the flow direction, tight cell area, loose cell area, perimeter, circularity, major axis length, orientation, and median radius. As expected, morphology contains the most information, but other biophysical features can contribute to improved performance of label-free cell classification. Orange bars show optical phase shift features, i.e. optical path length differences and refractive index difference. Green bars show optical loss features representing scattering and absorption by the cell. The best-performing features in these three categories are marked in red.
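Panel (a) is simply a pairwise Pearson correlation matrix over the extracted features. A small illustration with invented per-cell feature vectors standing in for the real 16:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-cell features: two correlated size measures and
# an unrelated optical-loss proxy (all values invented).
features = {
    "diameter":  [10.0, 12.0, 9.0, 15.0, 11.0],
    "cell_area": [80.0, 115.0, 65.0, 180.0, 95.0],
    "loss":      [0.3, 0.1, 0.4, 0.2, 0.3],
}
names = list(features)
corr = [[pearson(features[a], features[b]) for b in names] for a in names]
print(round(corr[0][0], 3), round(corr[0][1], 3))  # diagonal is 1.0; size pair ~1
```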

Figure 4: Machine learning pipeline. Information of quantitative optical phase and loss images are fused to extract multivariate biophysical features of each cell, which are fed into a fully-connected neural network.

The neural network maps input features by a chain of weighted sum and nonlinear activation functions into learned feature space, convenient for classification. This deep neural network is globally trained via area under the curve (AUC) of the receiver operating characteristics (ROC). Each ROC curve corresponds to a set of weights for connections to an output node, generated by scanning the weight of the bias node. The training process maximizes AUC, pushing the ROC curve toward the upper left corner, which means improved sensitivity and specificity in classification.
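For a single output node, scanning the bias weight is equivalent to sweeping a decision threshold over that node's scores, and each sweep traces one ROC curve. A hedged sketch of that sweep over toy scores (not the trained network):

```python
def roc_curve(scores, labels):
    """(FPR, TPR) points obtained by sweeping the decision threshold,
    the single-output analogue of scanning the bias-node weight."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and not y)
        points.append((fp / neg, tp / pos))
    return points

# Toy scores for 3 positive and 3 negative cells, with one overlap;
# a good classifier pushes these points toward the upper-left corner.
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]
labels = [1, 1, 1, 0, 0, 0]
print(roc_curve(scores, labels))
```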

How to cite this article: Chen, C. L. et al. Deep Learning in Label-free Cell Classification. Sci. Rep. 6, 21471 (2016).


Computer Algorithm Helps Characterize Cancerous Genomic Variations

To better characterize the functional context of genomic variations in cancer, researchers developed a new computer algorithm called REVEALER. [UC San Diego Health]

Scientists at the University of California San Diego School of Medicine and the Broad Institute say they have developed a new computer algorithm—REVEALER—to better characterize the functional context of genomic variations in cancer. The tool, described in a paper (“Characterizing Genomic Alterations in Cancer by Complementary Functional Associations”) published in Nature Biotechnology, is designed to help researchers identify groups of genetic variations that together associate with a particular way cancer cells get activated, or how they respond to certain treatments.

REVEALER is available for free to the global scientific community via the bioinformatics software portal

“This computational analysis method effectively uncovers the functional context of genomic alterations, such as gene mutations, amplifications, or deletions, that drive tumor formation,” said senior author Pablo Tamayo, Ph.D., professor and co-director of the UC San Diego Moores Cancer Center Genomics and Computational Biology Shared Resource.

Dr. Tamayo and team tested REVEALER using The Cancer Genome Atlas (TCGA), the NIH’s database of genomic information from more than 500 human tumors representing many cancer types. REVEALER revealed gene alterations associated with the activation of several cellular processes known to play a role in tumor development and response to certain drugs. Some of these gene mutations were already known, but others were new.

For example, the researchers discovered new activating genomic abnormalities for beta-catenin, a cancer-promoting protein, and for the oxidative stress response that some cancers hijack to increase their viability.

REVEALER requires as input high-quality genomic data and a significant number of cancer samples, which can be a challenge, according to Dr. Tamayo. But REVEALER is more sensitive at detecting similarities between different types of genomic features and less dependent on simplifying statistical assumptions, compared to other methods, he adds.

“This study demonstrates the potential of combining functional profiling of cells with the characterizations of cancer genomes via next-generation sequencing,” said co-senior author Jill P. Mesirov, Ph.D., professor and associate vice chancellor for computational health sciences at UC San Diego School of Medicine.


Characterizing genomic alterations in cancer by complementary functional associations

Jong Wook Kim, Olga B Botvinnik, Omar Abudayyeh, Chet Birger, et al.

Nature Biotechnology (2016)    

Systematic efforts to sequence the cancer genome have identified large numbers of mutations and copy number alterations in human cancers. However, elucidating the functional consequences of these variants, and their interactions to drive or maintain oncogenic states, remains a challenge in cancer research. We developed REVEALER, a computational method that identifies combinations of mutually exclusive genomic alterations correlated with functional phenotypes, such as the activation or gene dependency of oncogenic pathways or sensitivity to a drug treatment. We used REVEALER to uncover complementary genomic alterations associated with the transcriptional activation of β-catenin and NRF2, MEK-inhibitor sensitivity, and KRAS dependency. REVEALER successfully identified both known and new associations, demonstrating the power of combining functional profiles with extensive characterization of genomic alterations in cancer genomes.
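REVEALER's core idea, accumulating mutually exclusive binary alterations whose union best matches a continuous target profile, can be caricatured as a greedy search. The toy below uses plain Pearson correlation where the published method uses a mutual-information-based "information coefficient", and every sample value and gene name is invented for illustration:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def reveal(target, alterations, rounds=2):
    """Greedily pick alterations whose element-wise OR (the running
    "summary feature") is most correlated with the target profile."""
    chosen, summary = [], [0] * len(target)
    for _ in range(rounds):
        best = max(alterations, key=lambda a: pearson(
            [s | v for s, v in zip(summary, alterations[a])], target))
        chosen.append(best)
        summary = [s | v for s, v in zip(summary, alterations[best])]
    return chosen, summary

# Invented example: the target (e.g. a beta-catenin activation score)
# is high in samples 0-3; two mutually exclusive alterations
# jointly explain it, while a third is irrelevant.
target = [0.9, 0.8, 0.85, 0.95, 0.1, 0.2, 0.15, 0.1]
alterations = {
    "CTNNB1_mut": [1, 1, 0, 0, 0, 0, 0, 0],
    "APC_del":    [0, 0, 1, 1, 0, 0, 0, 0],
    "OTHER_mut":  [0, 0, 0, 0, 1, 0, 0, 0],
}
chosen, summary = reveal(target, alterations)
print(chosen)    # the two complementary alterations
print(summary)   # [1, 1, 1, 1, 0, 0, 0, 0]
```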


Figure 2: REVEALER results for transcriptional activation of β-catenin in cancer.

(a) This heatmap illustrates the use of the REVEALER approach to find complementary genomic alterations that match the transcriptional activation of β-catenin in cancer. The target profile is a TCF4 reporter that provides an estimate of…


An imaging-based platform for high-content, quantitative evaluation of therapeutic response in 3D tumour models

Jonathan P. Celli, Imran Rizvi, Adam R. Blanden, Iqbal Massodi, Michael D. Glidden, Brian W. Pogue & Tayyaba Hasan

Scientific Reports 4, 3751 (2014)

While it is increasingly recognized that three-dimensional (3D) cell culture models recapitulate drug responses of human cancers with more fidelity than monolayer cultures, a lack of quantitative analysis methods limits their implementation for reliable and routine assessment of emerging therapies. Here, we introduce an approach based on computational analysis of fluorescence image data to provide high-content readouts of dose-dependent cytotoxicity, growth inhibition, treatment-induced architectural changes and size-dependent response in 3D tumour models. We demonstrate this approach in adherent 3D ovarian and pancreatic multiwell extracellular matrix tumour overlays subjected to a panel of clinically relevant cytotoxic modalities and appropriately designed controls for reliable quantification of fluorescence signal. This streamlined methodology reads out the high density of information embedded in 3D culture systems, while maintaining a level of speed and efficiency traditionally achieved with global colorimetric reporters in order to facilitate broader implementation of 3D tumour models in therapeutic screening.

The attrition rates for preclinical development of oncology therapeutics are particularly dismal due to a complex set of factors which includes 1) the failure of pre-clinical models to recapitulate determinants of in vivo treatment response, and 2) the limited ability of available assays to extract treatment-specific data integral to the complexities of therapeutic responses1,2,3. Three-dimensional (3D) tumour models have been shown to restore crucial stromal interactions which are missing in the more commonly used 2D cell culture and that influence tumour organization and architecture4,5,6,7,8, as well as therapeutic response9,10, multicellular resistance (MCR)11,12, drug penetration13,14, hypoxia15,16, and anti-apoptotic signaling17. However, such sophisticated models can only have an impact on therapeutic guidance if they are accompanied by robust quantitative assays, not only for cell viability but also for providing mechanistic insights related to the outcomes. While numerous assays for drug discovery exist18, they are generally not developed for use in 3D systems and are often inherently unsuitable. For example, colorimetric conversion products have been noted to bind to extracellular matrix (ECM)19 and traditional colorimetric cytotoxicity assays reduce treatment response to a single number reflecting a biochemical event that has been equated to cell viability (e.g. tetrazolium salt conversion20). Such approaches fail to provide insight into the spatial patterns of response within colonies, morphological or structural effects of drug response, or how overall culture viability may be obscuring the status of sub-populations that are resistant or partially responsive. Hence, the full benefit of implementing 3D tumour models in therapeutic development has yet to be realized for lack of analytical methods that describe the very aspects of treatment outcome that these systems restore.

Motivated by these factors, we introduce a new platform for quantitative in situ treatment assessment (qVISTA) in 3D tumour models based on computational analysis of information-dense biological image datasets (bioimage-informatics)21,22. This methodology provides software end-users with multiple levels of complexity in output content, from rapidly-interpreted dose response relationships to higher content quantitative insights into treatment-dependent architectural changes, spatial patterns of cytotoxicity within fields of multicellular structures, and statistical analysis of nodule-by-nodule size-dependent viability. The approach introduced here is cognizant of tradeoffs between optical resolution, data sampling (statistics), depth of field, and widespread usability (instrumentation requirement). Specifically, it is optimized for interpretation of fluorescent signals for disease-specific 3D tumour micronodules that are sufficiently small that thousands can be imaged simultaneously with little or no optical bias from widefield integration of signal along the optical axis of each object. At the core of our methodology is the premise that the copious numerical readouts gleaned from segmentation and interpretation of fluorescence signals in these image datasets can be converted into usable information to classify treatment effects comprehensively, without sacrificing the throughput of traditional screening approaches. It is hoped that this comprehensive treatment-assessment methodology will have significant impact in facilitating more sophisticated implementation of 3D cell culture models in preclinical screening by providing a level of content and biological relevance impossible with existing assays in monolayer cell culture in order to focus therapeutic targets and strategies before costly and tedious testing in animal models.
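The segmentation-and-readout idea at the heart of qVISTA can be caricatured in a few lines: label connected nodules in a binary fluorescence mask, then report a per-nodule readout rather than one plate-wide number. This is an invented miniature, not the qVISTA implementation; the grid, the second "live" channel, and all values are made up:

```python
def label_nodules(mask):
    """4-connected component labelling of a binary 2-D mask,
    returning {label: [(row, col), ...]} for each nodule."""
    h, w = len(mask), len(mask[0])
    seen, nodules = set(), {}
    for r in range(h):
        for c in range(w):
            if mask[r][c] and (r, c) not in seen:
                stack, pixels = [(r, c)], []
                seen.add((r, c))
                while stack:                      # iterative flood fill
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                nodules[len(nodules) + 1] = pixels
    return nodules

# Two invented "nodules" plus a parallel live/dead channel per pixel;
# the per-nodule live fraction exposes partially responsive colonies
# that a single plate-wide viability number would hide.
mask = [[1, 1, 0, 0, 0],
        [1, 1, 0, 1, 1],
        [0, 0, 0, 1, 1]]
live = [[1, 1, 0, 0, 0],
        [1, 0, 0, 0, 1],
        [0, 0, 0, 0, 0]]
for label, px in label_nodules(mask).items():
    frac = sum(live[y][x] for y, x in px) / len(px)
    print(label, len(px), frac)   # nodule id, size, live fraction
```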

Using two different cell lines and as depicted in Figure 1, we adopt an ECM overlay method pioneered originally for 3D breast cancer models23, and developed in previous studies by us to model micrometastatic ovarian cancer19,24. This system leads to the formation of adherent multicellular 3D acini in approximately the same focal plane atop a laminin-rich ECM bed, implemented here in glass-bottom multiwell imaging plates for automated microscopy. The 3D nodules resultant from restoration of ECM signaling5,8 are heterogeneous in size24, in contrast to other 3D spheroid methods, such as rotary or hanging drop cultures10, in which cells are driven to aggregate into uniformly sized spheroids due to lack of an appropriate substrate to adhere to. Although the latter processes are also biologically relevant, it is the adherent tumour populations characteristic of advanced metastatic disease that are more likely to be managed with medical oncology, which are the focus of therapeutic evaluation herein. The heterogeneity in 3D structures formed via ECM overlay is validated here by endoscopic imaging of in vivo tumours in orthotopic xenografts derived from the same cells (OVCAR-5).


Figure 1: A simplified schematic flow chart of imaging-based quantitative in situ treatment assessment (qVISTA) in 3D cell culture.

(This figure was prepared in Adobe Illustrator® software by MD Glidden, JP Celli and I Rizvi). A detailed breakdown of the image processing (Step 4) is provided in Supplemental Figure 1.

A critical component of the imaging-based strategy introduced here is the rational tradeoff of image-acquisition parameters for field of view, depth of field and optical resolution, and the development of image processing routines for appropriate removal of background, scaling of fluorescence signals from more than one channel and reliable segmentation of nodules. Obtaining depth-resolved 3D structures for each nodule at sub-micron lateral resolution with a laser-scanning confocal system would require ~40 hours to image a single plate with the same coverage achieved in this study (approximately 100 fields for each well with a 20× objective, times 1 minute/field for a coarse z-stack, times 24 wells). Even if the resources were available to devote to such time-intensive image acquisition, not to mention the processing, the optical properties of the fluorophores would change during the required time frame for image acquisition, even with environmental controls to maintain culture viability during such extended imaging. The approach developed here, with a mind toward adaptation into high-throughput screening, provides a rational balance of speed, requiring less than 30 minutes/plate, and statistical rigour, providing images of thousands of nodules in this time, as required for the high-content analysis developed in this study. These parameters can be further optimized for specific scenarios. For example, we obtain the same number of images in a 96-well plate as for a 24-well plate by acquiring only a single field from each well, rather than 4 stitched fields. This quadruples the number of conditions assayed in a single run, at the expense of the number of nodules per condition, and therefore the ability to obtain statistical data sets for size-dependent response, Dfrac and other segmentation-dependent numerical readouts.
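The acquisition-time arithmetic in this paragraph can be checked directly; every number below is taken from the text:

```python
fields_per_well = 100    # ~100 confocal fields per well (20x objective)
minutes_per_field = 1    # coarse z-stack per field
wells = 24

confocal_minutes = fields_per_well * minutes_per_field * wells
print(confocal_minutes / 60)                  # 40.0 hours per plate

widefield_minutes = 30                        # the approach here: <30 min/plate
print(confocal_minutes / widefield_minutes)   # 80.0x faster
```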


We envision that the system for high-content interrogation of therapeutic response in 3D cell culture could have widespread impact in multiple arenas from basic research to large scale drug development campaigns. As such, the treatment assessment methodology presented here does not require extraordinary optical instrumentation or computational resources, making it widely accessible to any research laboratory with an inverted fluorescence microscope and modestly equipped personal computer. And although we have focused here on cancer models, the methodology is broadly applicable to quantitative evaluation of other tissue models in regenerative medicine and tissue engineering. While this analysis toolbox could have impact in facilitating the implementation of in vitro 3D models in preclinical treatment evaluation in smaller academic laboratories, it could also be adopted as part of the screening pipeline in large pharma settings. With the implementation of appropriate temperature controls to handle basement membranes in current robotic liquid handling systems, our analyses could be used in ultra high-throughput screening. In addition to removing non-efficacious potential candidate drugs earlier in the pipeline, this approach could also yield the additional economic advantage of minimizing the use of costly time-intensive animal models through better estimates of dose range, sequence and schedule for combination regimens.


Microscope Uses AI to Find Cancer Cells More Efficiently

Thu, 04/14/2016 – by Shaun Mason

Scientists at the California NanoSystems Institute at UCLA have developed a new technique for identifying cancer cells in blood samples faster and more accurately than the current standard methods.

In one common approach to testing for cancer, doctors add biochemicals to blood samples. Those biochemicals attach biological “labels” to the cancer cells, and those labels enable instruments to detect and identify them. However, the biochemicals can damage the cells and render the samples unusable for future analyses.

There are other current techniques that don’t use labeling but can be inaccurate because they identify cancer cells based only on one physical characteristic.

The new technique images cells without destroying them and can identify 16 physical characteristics — including size, granularity and biomass — instead of just one. It combines two components that were invented at UCLA: a photonic time stretch microscope, which is capable of quickly imaging cells in blood samples, and a deep learning computer program that identifies cancer cells with over 95 percent accuracy.

Deep learning is a form of artificial intelligence that uses complex algorithms to extract meaning from data with the goal of achieving accurate decision making.

The study, which was published in the journal Nature Scientific Reports, was led by Bahram Jalali, professor and Northrop-Grumman Optoelectronics Chair in electrical engineering; Claire Lifan Chen, a UCLA doctoral student; and Ata Mahjoubfar, a UCLA postdoctoral fellow.

Photonic time stretch was invented by Jalali, and he holds a patent for the technology. The new microscope is just one of many possible applications; it works by taking pictures of flowing blood cells using laser bursts in the way that a camera uses a flash. This process happens so quickly — in nanoseconds, or billionths of a second — that the images would be too weak to be detected and too fast to be digitized by normal instrumentation.

The new microscope overcomes those challenges using specially designed optics that boost the clarity of the images and simultaneously slow them enough to be detected and digitized at a rate of 36 million images per second. It then uses deep learning to distinguish cancer cells from healthy white blood cells.

“Each frame is slowed down in time and optically amplified so it can be digitized,” Mahjoubfar said. “This lets us perform fast cell imaging that the artificial intelligence component can distinguish.”

Normally, taking pictures in such minuscule periods of time would require intense illumination, which could destroy live cells. The UCLA approach also eliminates that problem.

“The photonic time stretch technique allows us to identify rogue cells in a short time with low-level illumination,” Chen said.

The researchers write in the paper that the system could lead to data-driven diagnoses by cells’ physical characteristics, which could allow quicker and earlier diagnoses of cancer, for example, and better understanding of the tumor-specific gene expression in cells, which could facilitate new treatments for disease. See also:

Chen, C. L. et al. Deep Learning in Label-free Cell Classification. Sci. Rep. 6, 21471 (2016).




Colon cancer and organoids

Larry H. Bernstein, MD, FCAP, Curator





Guts and Glory

An open mind and collaborative spirit have taken Hans Clevers on a journey from medicine to developmental biology, gastroenterology, cancer, and stem cells.

By Anna Azvolinsky

“I have had to talk a lot about my science recently and it’s made me think about how science works,” says Hans Clevers. “Scientists are trained to think science is driven by hypotheses, but for [my lab], hypothesis-driven research has never worked. Instead, it has been about trying to be as open-minded as possible—which is not natural for our brains,” adds the Utrecht University molecular genetics professor. “The human mind is such that it tries to prove it’s right, so pursuing a hypothesis can result in disaster. My advice to my own team and others is to not preformulate an answer to a scientific question, but just observe and never be afraid of the unknown. What has worked well for us is to keep an open mind and do the experiments. And find a collaborator if it is outside our niche.”

“One thing I have learned is that hypothesis-driven research tends not to be productive when you are in an unknown territory.”

Clevers entered medical school at Utrecht University in The Netherlands in 1978 while simultaneously pursuing a master’s degree in biology. Drawn to working with people in the clinic, Clevers had a training position in pediatrics lined up after medical school, but then mentors persuaded him to spend an additional year converting the master’s degree to a PhD in immunology. “At the end of that year, looking back, I got more satisfaction from the research than from seeing patients.” Clevers also had an aptitude for benchwork, publishing four papers from his PhD year. “They were all projects I had made up myself. The department didn’t do the kind of research I was doing,” he says. “Now that I look back, it’s surprising that an inexperienced PhD student could come up with a project and publish independently.”

Clevers studied T- and B-cell signaling; he set up assays to visualize calcium ion flux and demonstrated that the ions act as messengers to activate human B cells, signaling through antibodies on the cell surface. “As soon as the experiment worked, I got T cells from the lab next door and did the same experiment. That was my strategy: as soon as something worked, I would apply it elsewhere and didn’t stop just because I was a B-cell biologist and not a T-cell biologist. What I learned then, that I have continued to benefit from, is that a lot of scientists tend to adhere to a niche. They cling to these niches and are not that flexible. You think scientists are, but really most are not.”

Here, Clevers talks about promoting a collaborative spirit in research, the art of doing a pilot experiment, and growing miniature organs in a dish.

Clevers Creates

Re-search? Clevers was born in Eindhoven, in the south of The Netherlands. The town was headquarters to Philips Electronics, where his father worked as a businessman, and his mother took care of Clevers and his three brothers. Clevers did well in school but his passion was sports, especially tennis and field hockey, “a big thing in Holland.” Then in 1975, at age 18, he moved to Utrecht University, where he entered an intensive, biology-focused program. “I knew I wanted to be a biology researcher since I was young. In Dutch, the word for research is ‘onderzoek’ and I knew the English word ‘research’ and had wondered why there was the ‘re’ in the word, because I wanted to search but I didn’t want to do re-search—to find what someone else had already found.”

Opportunity to travel. “I was very disappointed in my biology studies, which were old-fashioned and descriptive,” says Clevers. He thought medicine might be more interesting and enrolled in medical school while still pursuing a master’s degree in biology at Utrecht. For the master’s, Clevers had to do three rotations. He spent a year at the International Laboratory for Research on Animal Diseases (ILRAD) in Nairobi, Kenya, and six months in Bethesda, Maryland, at the National Institutes of Health. “Holland is really small, so everyone travels.” Clevers saw those two rotations more as travel explorations. In Nairobi, he went on safaris and explored the country in Land Rovers borrowed from the institute. While in Maryland in 1980, Clevers—with the consent of his advisor, who thought it was a good idea for him to get a feel for the U.S.—flew to Portland, Oregon, and drove back to Boston with a musician friend along the Canadian border. He met the fiancé of political activist and academic Angela Davis in New York City and even stayed in their empty apartment there.

Life and lab lessons. Back in Holland, Clevers joined Rudolf Eugène Ballieux’s lab at Utrecht University to pursue his PhD, for which he studied immune cell signaling. “I didn’t learn much science from him, but I learned that you always have to create trust and to trust people around you. This became a major theme in my own lab. We don’t distrust journals or reviewers or collaborators. We trust everyone and we share. There will be people who take advantage, but there have only been a few of those. So I learned from Ballieux to give everyone maximum trust and then change this strategy only if they fail that trust. We collaborate easily because we give out everything and we also easily get reagents and tools that we may need. It’s been valuable to me in my career. And it is fun!”

Clevers Concentrates

On a mission. “Once I decided to become a scientist, I knew I needed to train seriously. Up to that point, I was totally self-trained.” From an extensive reading of the immunology literature, Clevers became interested in how T cells recognize antigens, and headed off to spend a postdoc studying the problem in Cox Terhorst’s lab at Dana-Farber Cancer Institute in Boston. “Immunology was young, but it was very exciting and there was a lot to discover. I became a professional scientist there and experienced how tough science is.” In 1988, Clevers cloned and characterized the gene for a component of the T-cell receptor (TCR) called CD3-epsilon, which binds antigen and activates intracellular signaling pathways.

On the fast track in Holland. Clevers returned to Utrecht University in 1989 as a professor of immunology. Within one month of setting up his lab, he had two graduate students and a technician, and the lab had cloned the first T cell–specific transcription factor, which they called TCF-1, in human T cells. When his former thesis advisor retired, Clevers was asked, at age 33, to become head of the immunology department. While the appointment was high-risk for him and for the department, Clevers says, he was chosen because he was good at multitasking and because he got along well with everyone.

Problem-solving strategy. “My strategy in research has always been opportunistic. One thing I have learned is that hypothesis-driven research tends not to be productive when you are in an unknown territory. I think there is an art to doing pilot experiments. So we have always just set up systems in which something happens and then you try and try things until a pattern appears and maybe you formulate a small hypothesis. But as soon as it turns out not to be exactly right, you abandon it. It’s a very open-minded type of research where you question whether what you are seeing is a real phenomenon without spending a year on doing all of the proper controls.”

Trial and error. Clevers’s lab found that while TCF-1 bound to DNA, it did not alter gene expression, despite the researchers’ tinkering with promoter and enhancer assays. “For about five years this was a problem. My first PhD students were leaving and they thought the whole TCF project was a failure,” says Clevers. His lab meanwhile cloned TCF homologs from several model organisms and made many reagents including antibodies against these homologs. To try to figure out the function of TCF-1, the lab performed a two-hybrid screen and identified components of the Wnt signaling pathway as binding partners of TCF-1. “We started to read about Wnt and realized that you study Wnt not in T cells but in frogs and flies, so we rapidly transformed into a developmental biology lab. We showed that we held the key for a major issue in developmental biology, the final protein in the Wnt cascade: TCF-1 binds β-catenin when β-catenin becomes available and activates transcription.” In 1996, Clevers published the mechanism of how the TCF-1 homolog in Xenopus embryos, called XTcf-3, is integrated into the Wnt signaling pathway.

Clevers Catapults


Crypt building and colon cancer.

Clevers next collaborated with Bert Vogelstein’s lab at Johns Hopkins, linking TCF to Wnt signaling in colon cancer. In colon cancer cell lines with mutated forms of the tumor suppressor gene APC, the APC protein can’t rein in β-catenin, which accumulates in the cytoplasm, forms a complex with TCF-4 (later renamed TCF7L2) in the nucleus, and can initiate colon cancer by changing gene expression. Then, the lab showed that Wnt signaling is necessary for self-renewal of adult stem cells, as mice missing TCF-4 do not have intestinal crypts, the site in the gut where stem cells reside. “This was the first time Wnt was shown to play a role in adults, not just during development, and to be crucial for adult stem cell maintenance,” says Clevers. “Then, when I started thinking about studying the gut, I realized it was by far the best way to study stem cells. And I also realized that almost no one in the world was studying the healthy gut. Almost everyone who researched the gut was studying a disease.” The main advantages of the murine model are rapid cell turnover and the presence of millions of stereotypic crypts throughout the entire intestine.

Against the grain. In 2007, Nick Barker, a senior scientist in the Clevers lab, identified the Wnt target gene Lgr5 as a unique marker of adult stem cells in several epithelial organs, including the intestine, hair follicle, and stomach. In the intestine, the gene codes for a plasma membrane protein on crypt stem cells that enable the intestinal epithelium to self-renew, but can also give rise to adenomas of the gut. Upon making mice with adult stem cell populations tagged with a fluorescent Lgr5-binding marker, the lab helped to overturn assumptions that “stem cells are rare, impossible to find, quiescent, and divide asymmetrically.”

On to organoids. Once the lab could identify adult stem cells within the crypts of the gut, postdoc Toshiro Sato discovered that a single stem cell, in the presence of Matrigel and just three growth factors, could generate a miniature crypt structure—what is now called an organoid. “Toshi is very Japanese and doesn’t always talk much,” says Clevers. “One day I had asked him, while he was at the microscope, if the gut stem cells were growing, and he said, ‘Yes.’ Then I looked under the microscope and saw the beautiful structures and said, ‘Why didn’t you tell me?’ and he said, ‘You didn’t ask.’ For three months he had been growing them!” The lab has since also grown mini-pancreases, -livers, -stomachs, and many other mini-organs.

Tumor Organoids. Clevers showed that organoids can be grown from diseased patients’ samples, a technique that could be used in the future to screen drugs. The lab is also building biobanks of organoids derived from tumor samples and adjacent normal tissue, which could be especially useful for monitoring responses to chemotherapies. “It’s a similar approach to getting a bacterium cultured to identify which antibiotic to take. The most basic goal is not to give a toxic chemotherapy to a patient who will not respond anyway,” says Clevers. “Tumor organoids grow more slowly than healthy organoids, which seems counterintuitive, but with cancer cells, often they try to divide and often things go wrong because they don’t have normal numbers of chromosomes and [have] lots of mutations. So, I am not yet convinced that this approach will work for every patient. Sometimes, the tumor organoids may just grow too slowly.”

Selective memory. “When I received the Breakthrough Prize in 2013, I invited everyone who has ever worked with me to Amsterdam, about 100 people, and the lab organized a symposium where many of the researchers gave an account of what they had done in the lab,” says Clevers. “In my experience, my lab has been a straight line from cloning TCF-1 to where we are now. But when you hear them talk it was ‘Hans told me to try this and stop this’ and ‘Half of our knockout mice were never published,’ and I realized that the lab is an endless list of failures,” Clevers recalls. “The one thing we did well is that we would start something and, as soon as it didn’t look very good, we would stop it and try something else. And the few times when we seemed to hit gold, I would regroup my entire lab. We just tried a lot of things, and the 10 percent of what worked, those are the things I remember.”

Greatest Hits

  • Cloned the first T cell–specific transcription factor, TCF-1, and identified homologous genes in model organisms including the fruit fly, frog, and worm
  • Found that transcriptional activation by the abundant β-catenin/TCF-4 [TCF7L2] complex drives cancer initiation in colon cells missing the tumor suppressor protein APC
  • First to extend the role of Wnt signaling from developmental biology to adult stem cells by showing that the two Wnt pathway transcription factors, TCF-1 and TCF-4, are necessary for maintaining the stem cell compartments in the thymus and in the crypt structures of the small intestine, respectively
  • Identified Lgr5 as an adult stem cell marker of many epithelial stem cells including those of the colon, small intestine, hair follicle, and stomach, and found that Lgr5-expressing crypt cells in the small intestine divide constantly and symmetrically, disproving the common belief that stem cell division is asymmetrical and uncommon
  • Established a three-dimensional, stable model, the “organoid,” grown from adult stem cells, to study diseased patients’ tissues from the gut, stomach, liver, and prostate
Regenerative Medicine Comes of Age

“Anti-Aging Medicine” Sounds Vaguely Disreputable, So Serious Scientists Prefer to Speak of “Regenerative Medicine”
  • Induced pluripotent stem cells (iPSCs) and genome-editing techniques have facilitated manipulation of living organisms in innumerable ways at the cellular and genetic levels, respectively, and will underpin many aspects of regenerative medicine as it continues to evolve.

    An attitudinal change is also occurring. Experts in regenerative medicine have increasingly begun to embrace the view that comprehensively repairing the damage of aging is a practical and feasible goal.

    A notable proponent of this view is Aubrey de Grey, Ph.D., a biomedical gerontologist who has pioneered a regenerative medicine approach called Strategies for Engineered Negligible Senescence (SENS). He works to “develop, promote, and ensure widespread access to regenerative medicine solutions to the disabilities and diseases of aging” as CSO and co-founder of the SENS Research Foundation. He is also the editor-in-chief of Rejuvenation Research, published by Mary Ann Liebert.

    Dr. de Grey points out that stem cell treatments for age-related conditions such as Parkinson’s are already in clinical trials, and immune therapies to remove molecular waste products in the extracellular space, such as amyloid in Alzheimer’s, have succeeded in such trials. Recently, there has been progress in animal models in removing toxic cells that the body is failing to kill. The most encouraging work is in cancer immunotherapy, which is rapidly advancing after decades in the doldrums.

    Many damage-repair strategies are at an early stage of research. Although these strategies look promising, they are handicapped by a lack of funding. If that does not change soon, the scientific community is at risk of failing to capitalize on the relevant technological advances.

    Regenerative medicine has moved beyond boutique applications. In degenerative disease, cells lose their function or suffer elimination because they harbor genetic defects. iPSC therapies have the potential to be curative, replacing the defective cells and eliminating symptoms in their entirety. One of the biggest hurdles to commercialization of iPSC therapies is manufacturing.

  • Building Stem Cell Factories

    Cellular Dynamics International (CDI) has been developing clinically compatible induced pluripotent stem cells (iPSCs) and iPSC-derived human retinal pigment epithelial (RPE) cells. CDI’s MyCell Retinal Pigment Epithelial Cells are part of a possible therapy for macular degeneration. They can be grown on bioengineered, nanofibrous scaffolds, and then the RPE cell–enriched scaffolds can be transplanted into patients’ eyes. In this pseudo-colored image, RPE cells are shown growing over the nanofibers. Each cell has thousands of “tongue” and “rod” protrusions that could naturally support rod and cone cells in the eye.

    “Now that an infrastructure is being developed to make unlimited cells for the tools business, new opportunities are being created. These cells can be employed in a therapeutic context, and they can be used to understand the efficacy and safety of drugs,” asserts Chris Parker, executive vice president and CBO, Cellular Dynamics International (CDI). “CDI has the capability to make a lot of cells from a single iPSC line that represents one person (a capability termed scale-up) as well as the capability to do it in parallel for multiple individuals (a capability termed scale-out).”

    Minimally manipulated adult stem cells have progressed relatively quickly to the clinic. In this scenario, cells are taken out of the body, expanded unchanged, then reintroduced. More preclinical rigor applies to potential iPSC therapy. In this case, hematopoietic blood cells are used to make stem cells, which are manufactured into the cell type of interest before reintroduction. Preclinical tests must demonstrate that iPSC-derived cells perform as intended, are safe, and possess little or no off-target activity.

    For example, CDI developed a Parkinsonian model in which iPSC-derived dopaminergic neurons were introduced to primates. The model showed engraftment and innervation, and it appeared to be free of proliferative stem cells.

    • “You will see iPSCs first used in clinical trials as a surrogate to understand efficacy and safety,” notes Mr. Parker. “In an ongoing drug-repurposing trial with GlaxoSmithKline and Harvard University, iPSC-derived motor neurons will be produced from patients with amyotrophic lateral sclerosis and tested in parallel with the drug.” CDI has three cell-therapy programs in their commercialization pipeline focusing on macular degeneration, Parkinson’s disease, and postmyocardial infarction.

    • Keeping an Eye on Aging Eyes

      The California Project to Cure Blindness is evaluating a stem cell–based treatment strategy for age-related macular degeneration. The strategy involves growing retinal pigment epithelium (RPE) cells on a biostable, synthetic scaffold, then implanting the RPE cell–enriched scaffold to replace RPE cells that are dying or dysfunctional. One of the project’s directors, Dennis Clegg, Ph.D., a researcher at the University of California, Santa Barbara, provided this image, which shows stem cell–derived RPE cells. Cell borders are green, and nuclei are red.

      The eye has multiple advantages over other organ systems for regenerative medicine. Advanced surgical methods can access the back of the eye, noninvasive imaging methods can follow the transplanted cells, good outcome parameters exist, and relatively few cells are needed.

      These advantages have attracted many groups to tackle ocular disease, in particular age-related macular degeneration, the leading cause of blindness in the elderly in the United States. Most cases of age-related macular degeneration are thought to be due to the death or dysfunction of cells in the retinal pigment epithelium (RPE). RPE cells are crucial support cells for the rods and cones, the eye’s photoreceptors. When RPE cells stop working or die, the photoreceptors die and a vision deficit results.

      A regenerated and restored RPE might prevent the irreversible loss of photoreceptors, possibly via the transplantation of functionally polarized RPE monolayers derived from human embryonic stem cells. This approach is being explored by the California Project to Cure Blindness, a collaborative effort involving the University of Southern California (USC), the University of California, Santa Barbara (UCSB), the California Institute of Technology, City of Hope, and Regenerative Patch Technologies.

      The project, which is funded by the California Institute of Regenerative Medicine (CIRM), started in 2010, and an IND was filed in early 2015. Clinical trial recruitment has begun.

      One of the project’s leaders is Dennis Clegg, Ph.D., Wilcox Family Chair in BioMedicine, UCSB. His laboratory developed the protocol to turn undifferentiated H9 embryonic stem cells into a homogenous population of RPE cells.

      “These are not easy experiments,” remarks Dr. Clegg. “Figuring out the biology and how to make the cell of interest is a challenge that everyone in regenerative medicine faces. About 100,000 RPE cells will be grown as a sheet on a 3 × 5 mm biostable, synthetic scaffold, and then implanted in the patients to replace the cells that are dying or dysfunctional. The idea is to preserve the photoreceptors and to halt disease progression.”

      Moving therapies such as this RPE treatment from concept to clinic is a huge team effort and requires various kinds of expertise. Besides benefitting from Dr. Clegg’s contribution, the RPE project incorporates the work of Mark Humayun, M.D., Ph.D., co-director of the USC Eye Institute, director of the USC Institute for Biomedical Therapeutics, and recipient of the National Medal of Technology and Innovation, and David Hinton, Ph.D., a researcher at USC who has studied how activated RPE cells can alter the local retinal microenvironment.
