
Archive for the ‘BioPrinting in Regenerative Medicine’ Category


Topical Solution for Combination Oncology Drug Therapy: Patch that delivers Drug, Gene, and Light-based Therapy to Tumor

Reporter: Aviva Lev-Ari, PhD, RN

 

Self-assembled RNA-triple-helix hydrogel scaffold for microRNA modulation in the tumour microenvironment

Affiliations

  1. Massachusetts Institute of Technology, Institute for Medical Engineering and Science, Harvard-MIT Division for Health Sciences and Technology, Cambridge, Massachusetts 02139, USA
    • João Conde,
    • Nuria Oliva,
    • Mariana Atilano,
    • Hyun Seok Song &
    • Natalie Artzi
  2. School of Engineering and Materials Science, Queen Mary University of London, London E1 4NS, UK
    • João Conde
  3. Grup d’Enginyeria de Materials, Institut Químic de Sarrià-Universitat Ramon Llull, Barcelona 08017, Spain
    • Mariana Atilano
  4. Division of Bioconvergence Analysis, Korea Basic Science Institute, Yuseong, Daejeon 169-148, Republic of Korea
    • Hyun Seok Song
  5. Broad Institute of MIT and Harvard, Cambridge, Massachusetts 02142, USA
    • Natalie Artzi
  6. Department of Medicine, Biomedical Engineering Division, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts 02115, USA
    • Natalie Artzi

Contributions

J.C. and N.A. conceived the project and designed the experiments. J.C., N.O., H.S.S. and M.A. performed the experiments, collected and analysed the data. J.C. and N.A. co-wrote the manuscript. All authors discussed the results and reviewed the manuscript.

Nature Materials 15, 353–363 (2016). doi:10.1038/nmat4497
Received 22 April 2015 | Accepted 26 October 2015 | Published online 07 December 2015

The therapeutic potential of miRNA (miR) in cancer is limited by the lack of efficient delivery vehicles. Here, we show that a self-assembled dual-colour RNA-triple-helix structure comprising two miRNAs—a miR mimic (tumour suppressor miRNA) and an antagomiR (oncomiR inhibitor)—provides outstanding capability to synergistically abrogate tumours. Conjugation of RNA triple helices to dendrimers allows the formation of stable triplex nanoparticles, which form an RNA-triple-helix adhesive scaffold upon interaction with dextran aldehyde, the latter able to chemically interact and adhere to natural tissue amines in the tumour. We also show that the self-assembled RNA-triple-helix conjugates remain functional in vitro and in vivo, and that they lead to nearly 90% levels of tumour shrinkage two weeks post-gel implantation in a triple-negative breast cancer mouse model. Our findings suggest that the RNA-triple-helix hydrogels can be used as an efficient anticancer platform to locally modulate the expression of endogenous miRs in cancer.

SOURCE

http://www.nature.com/nmat/journal/v15/n3/abs/nmat4497.html#author-information

 

 

Patch that delivers drug, gene, and light-based therapy to tumor sites shows promising results

In mice, device destroyed colorectal tumors and prevented remission after surgery.

Helen Knight | MIT News Office
July 25, 2016

Approximately one in 20 people will develop colorectal cancer in their lifetime, making it the third-most prevalent form of the disease in the U.S. In Europe, it is the second-most common form of cancer.

The most widely used first line of treatment is surgery, but this can result in incomplete removal of the tumor. Cancer cells can be left behind, potentially leading to recurrence and increased risk of metastasis. Indeed, while many patients remain cancer-free for months or even years after surgery, tumors are known to recur in up to 50 percent of cases.

Conventional therapies used to prevent tumors recurring after surgery do not sufficiently differentiate between healthy and cancerous cells, leading to serious side effects.

In a paper published today in the journal Nature Materials, researchers at MIT describe an adhesive patch that can stick to the tumor site, either before or after surgery, to deliver a triple-combination of drug, gene, and photo (light-based) therapy.

Releasing this triple combination therapy locally, at the tumor site, may increase the efficacy of the treatment, according to Natalie Artzi, a principal research scientist at MIT’s Institute for Medical Engineering and Science (IMES) and an assistant professor of medicine at Brigham and Women’s Hospital, who led the research.

The general approach to cancer treatment today is the use of systemic, or whole-body, therapies such as chemotherapy drugs. But the lack of specificity of anticancer drugs means they produce undesired side effects when systemically administered.

What’s more, only a small portion of the drug reaches the tumor site itself, meaning the primary tumor is not treated as effectively as it should be.

Indeed, recent research in mice has found that only 0.7 percent of nanoparticles administered systemically actually found their way to the target tumor.

“This means that we are treating both the source of the cancer — the tumor — and the metastases resulting from that source, in a suboptimal manner,” Artzi says. “That is what prompted us to think a little bit differently, to look at how we can leverage advancements in materials science, and in particular nanotechnology, to treat the primary tumor in a local and sustained manner.”

The researchers have developed a triple-therapy hydrogel patch, which can be used to treat tumors locally. This is particularly effective as it can treat not only the tumor itself but any cells left at the site after surgery, preventing the cancer from recurring or metastasizing in the future.

Firstly, the patch contains gold nanorods, which heat up when near-infrared radiation is applied to the local area. This is used to thermally ablate, or destroy, the tumor.

These nanorods are also equipped with a chemotherapy drug, which is released when they are heated, to target the tumor and its surrounding cells.

Finally, gold nanospheres that do not heat up in response to the near-infrared radiation are used to deliver RNA, or gene therapy to the site, in order to silence an important oncogene in colorectal cancer. Oncogenes are genes that can cause healthy cells to transform into tumor cells.

The researchers envision that a clinician could remove the tumor, and then apply the patch to the inner surface of the colon, to ensure that no cells that are likely to cause cancer recurrence remain at the site. As the patch degrades, it will gradually release the various therapies.

The patch can also serve as a neoadjuvant, a therapy designed to shrink tumors prior to their resection, Artzi says.

When the researchers tested the treatment in mice, they found that in 40 percent of cases where the patch was not applied after tumor removal, the cancer returned.

But when the patch was applied after surgery, the treatment resulted in complete remission.

Indeed, even when the tumor was not removed, the triple-combination therapy alone was enough to destroy it.

The technology is an extraordinary and unprecedented synergy of three concurrent modalities of treatment, according to Mauro Ferrari, president and CEO of the Houston Methodist Research Institute, who was not involved in the research.

“What is particularly intriguing is that by delivering the treatment locally, multimodal therapy may be better than systemic therapy, at least in certain clinical situations,” Ferrari says.

Unlike existing colorectal cancer surgery, this treatment can also be applied in a minimally invasive manner. In the next phase of their work, the researchers hope to move to experiments in larger models, in order to use colonoscopy equipment not only for cancer diagnosis but also to inject the patch to the site of a tumor, when detected.

“This administration modality would enable, at least in early-stage cancer patients, the avoidance of open field surgery and colon resection,” Artzi says. “Local application of the triple therapy could thus improve patients’ quality of life and therapeutic outcome.”

Artzi is joined on the paper by João Conde, Nuria Oliva, and Yi Zhang, of IMES. Conde is also at Queen Mary University of London.

SOURCE

http://news.mit.edu/2016/patch-delivers-drug-gene-light-based-therapy-tumor-0725

Other related articles published in this Open Access Online Scientific Journal include the following:

The Development of siRNA-Based Therapies for Cancer

Author: Ziv Raviv, PhD

https://pharmaceuticalintelligence.com/2013/05/09/the-development-of-sirna-based-therapies-for-cancer/

 

Targeted Liposome Based Delivery System to Present HLA Class I Antigens to Tumor Cells: Two papers

Reporter: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2016/07/20/targeted-liposome-based-delivery-system-to-present-hla-class-i-antigens-to-tumor-cells-two-papers/

 

Blast Crisis in Myeloid Leukemia and the Activation of a microRNA-editing Enzyme called ADAR1

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/06/10/blast-crisis-in-myeloid-leukemia-and-the-activation-of-a-microrna-editing-enzyme-called-adar1/

 

First challenge to make use of the new NCI Cloud Pilots – Somatic Mutation Challenge – RNA: Best algorithms for detecting all of the abnormal RNA molecules in a cancer cell

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/07/17/first-challenge-to-make-use-of-the-new-nci-cloud-pilots-somatic-mutation-challenge-rna-best-algorithms-for-detecting-all-of-the-abnormal-rna-molecules-in-a-cancer-cell/

 

miRNA Therapeutic Promise

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/05/01/mirna-therapeutic-promise/




Bioprinting basics

Curator: Larry H. Bernstein, MD, FCAP

 

 

The ABCs of 3D Bioprinting of Living Tissues, Organs   5/06/2016 

(Credit: Ozbolat Lab/Penn State University)

Although it originated in 2003, the field of bioprinting is still very new and loosely defined. Nevertheless, as the need for organ donation continues to increase worldwide and organ and tissue shortages prevail, a handful of scientists have started applying this cutting-edge science and technology to various areas of regenerative medicine in the hope of filling that organ-shortage void.

Among these scientists is Ibrahim Tarik Ozbolat, an associate professor in the Department of Engineering Science and Mechanics and the Huck Institutes of the Life Sciences at Penn State University, who’s been studying bioprinting and tissue engineering for years.

While Ozbolat did not originate 3D bioprinting research, he is the first to spearhead such studies at Penn State University, at the Ozbolat Lab.

“Tissue engineering is a big need. Regenerative medicine, biofabrication of tissues and organs that can replace damage or disease is important,” Ozbolat told R&D Magazine after his seminar presentation at Interphex last week in New York City, titled “3D Bioprinting of Living Tissues & Organs.”

3D bioprinting is the process of creating cell patterns in a confined space using 3D-printing technologies, where cell function and viability are preserved within the printed construct.

Recent progress has allowed 3D printing of biocompatible materials, cells and supporting components into complex 3D functional living tissues. The technology is being applied to regenerative medicine to address the need for tissues and organs suitable for transplantation. Compared with non-biological printing, 3D bioprinting involves additional complexities, such as the choice of materials, cell types, growth and differentiation factors, and technical challenges related to the sensitivities of living cells and the construction of tissues. Addressing these complexities requires the integration of technologies from the fields of engineering, biomaterials science, cell biology, physics and medicine, according to nature.com.

“If we’re able to make organs on demand, that will be highly beneficial to society,” said Ozbolat. “We have the capability to pattern cells, locate them and then make the same thing that exists in the body.”

3D bioprinting of tissues and organs

Sean V Murphy & Anthony Atala
Nature Biotechnology 32, 773–785 (2014). doi:10.1038/nbt.2958

 

Additive manufacturing, otherwise known as three-dimensional (3D) printing, is driving major innovations in many areas, such as engineering, manufacturing, art, education and medicine. Recent advances have enabled 3D printing of biocompatible materials, cells and supporting components into complex 3D functional living tissues. 3D bioprinting is being applied to regenerative medicine to address the need for tissues and organs suitable for transplantation. Compared with non-biological printing, 3D bioprinting involves additional complexities, such as the choice of materials, cell types, growth and differentiation factors, and technical challenges related to the sensitivities of living cells and the construction of tissues. Addressing these complexities requires the integration of technologies from the fields of engineering, biomaterials science, cell biology, physics and medicine. 3D bioprinting has already been used for the generation and transplantation of several tissues, including multilayered skin, bone, vascular grafts, tracheal splints, heart tissue and cartilaginous structures. Other applications include developing high-throughput 3D-bioprinted tissue models for research, drug discovery and toxicology.

 

Future Technologies: Bioprinting

3D printing is increasingly permitting the direct digital manufacture (DDM) of a wide variety of plastic and metal items. While this in itself may trigger a manufacturing revolution, far more startling is the recent development of bioprinters. These artificially construct living tissue by outputting layer-upon-layer of living cells. Currently all bioprinters are experimental. However, in the future, bioprinters could revolutionize medical practice as yet another element of the New Industrial Convergence.

Bioprinters may be constructed in various configurations. However, all bioprinters output cells from a bioprint head that moves left and right, back and forth, and up and down, in order to place the cells exactly where required. Over a period of several hours, this permits an organic object to be built up in a great many very thin layers.

In addition to outputting cells, most bioprinters also output a dissolvable gel to support and protect cells during printing. One envisioned design depicts a future bioprinter in the final stages of printing out a replacement human heart.
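To make the layering logic concrete, here is a minimal illustrative sketch in Python (hypothetical pseudo-control code, not any real printer’s interface): a 3D design is traversed one thin layer at a time, and at each position the head deposits either cells or dissolvable support gel, exactly as described above.

import numpy as np

def print_layers(design, layer_height_um=20.0):
    """design[z, y, x] is True where living cells should be placed."""
    commands = []
    for z, layer in enumerate(design):                 # build bottom-up, layer by layer
        height = z * layer_height_um
        for (y, x) in np.argwhere(layer):              # cell voxels in this layer
            commands.append(("move", x, y, height))
            commands.append(("deposit", "cells"))
        for (y, x) in np.argwhere(~layer):             # remaining voxels get support gel
            commands.append(("move", x, y, height))
            commands.append(("deposit", "support_gel"))
    return commands

# Toy design: a 3-layer, 4 x 4 grid with a solid column of cells in the middle.
design = np.zeros((3, 4, 4), dtype=bool)
design[:, 1:3, 1:3] = True
print(len(print_layers(design)), "head commands")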


 

Bioprinting Pioneers

Several experimental bioprinters have already been built. For example, in 2002 Professor Makoto Nakamura realized that the droplets of ink in a standard inkjet printer are about the same size as human cells. He therefore decided to adapt the technology, and by 2008 had created a working bioprinter that can print out biotubing similar to a blood vessel. In time, Professor Nakamura hopes to be able to print entire replacement human organs ready for transplant.

 

Another bioprinting pioneer is Organovo. This company was set up by a research group led by Professor Gabor Forgacs from the University of Missouri, and in March 2008 managed to bioprint functional blood vessels and cardiac tissue using cells obtained from a chicken. Their work relied on a prototype bioprinter with three print heads. The first two of these output cardiac and endothelial cells, while the third dispensed a collagen scaffold — now termed ‘bio-paper’ — to support the cells during printing.

Since 2008, Organovo has worked with a company called Invetech to create a commercial bioprinter called the NovoGen MMX. This is loaded with bioink spheroids that each contain an aggregate of tens of thousands of cells. To create its output, the NovoGen first lays down a single layer of a water-based bio-paper made from collagen, gelatin or other hydrogels. Bioink spheroids are then injected into this water-based material, and more layers are subsequently added to build up the final object. Amazingly, Nature then takes over and the bioink spheroids slowly fuse together. As this occurs, the biopaper dissolves away or is otherwise removed, thereby leaving a final bioprinted body part or tissue.

 


As Organovo has demonstrated, using its bioink printing process it is not necessary to print all of the details of an organ with a bioprinter, as once the relevant cells are placed in roughly the right place Nature completes the job. This point is powerfully illustrated by the fact that the cells contained in a bioink spheroid are capable of rearranging themselves after printing. For example, experimental blood vessels have been bioprinted using bioink spheroids composed of an aggregate mix of endothelial, smooth muscle and fibroblast cells. Once placed in position by the bioprint head, and with no technological intervention, the endothelial cells migrate to the inside of the bioprinted blood vessel, the smooth muscle cells move to the middle, and the fibroblasts migrate to the outside.

In more complex bioprinted materials, intricate capillaries and other internal structures also naturally form after printing has taken place. The process may sound almost magical. However, as Professor Forgacs explains, it is no different to the cells in an embryo knowing how to configure into complicated organs. Nature has been evolving this amazing capability for millions of years. Once in the right places, appropriate cell types somehow just know what to do.

In December 2010, Organovo created the first blood vessels to be bioprinted using cells cultured from a single person. The company has also successfully implanted bioprinted nerve grafts into rats, and anticipates human trials of bioprinted tissues by 2015. However, it also expects that the first commercial application of its bioprinters will be to produce simple human tissue structures for toxicology tests. These will enable medical researchers to test drugs on bioprinted models of the liver and other organs, thereby reducing the need for animal tests.

In time, and once human trials are complete, Organovo hopes that its bioprinters will be used to produce blood vessel grafts for use in heart bypass surgery. The intention is then to develop a wider range of tissue-on-demand and organs-on-demand technologies. To this end, researchers are now working on tiny mechanical devices that can artificially exercise and hence strengthen bioprinted muscle tissue before it is implanted into a patient.

Organovo anticipates that its first artificial human organ will be a kidney. This is because, in functional terms, kidneys are one of the more straightforward parts of the body. The first bioprinted kidney may in fact not even need to look just like its natural counterpart or duplicate all of its features. Rather, it will simply have to be capable of cleaning waste products from the blood. You can read more about the work of Organovo and Professor Forgacs in this article from Nature.

Regenerative Scaffolds and Bones

A further research team with the long-term goal of producing human organs-on-demand has created the EnvisionTEC Bioplotter. Like Organovo’s NovoGen MMX, this outputs bio-ink ‘tissue spheroids’ and supportive scaffold materials including fibrin and collagen hydrogels. But in addition, the EnvisionTEC can also print a wider range of biomaterials. These include biodegradable polymers and ceramics that may be used to support and help form artificial organs, and which may even be used as bioprinting substitutes for bone.

Talking of bone, a team led by Jeremy Mao at the Tissue Engineering and Regenerative Medicine Lab at Columbia University is working on the application of bioprinting in dental and bone repairs. Already, a bioprinted, mesh-like 3D scaffold in the shape of an incisor has been implanted into the jaw bone of a rat. This featured tiny, interconnecting microchannels that contained ‘stem cell-recruiting substances’. In just nine weeks after implantation, these triggered the growth of fresh periodontal ligaments and newly formed alveolar bone. In time, this research may enable people to be fitted with living, bioprinted teeth, or else scaffolds that will cause the body to grow new teeth all by itself. You can read more about this development in this article from The Engineer.

In another experiment, Mao’s team implanted bioprinted scaffolds in the place of the hip bones of several rabbits. Again these were infused with growth factors. As reported in The Lancet, over a four-month period the rabbits all grew new and fully-functional joints around the mesh. Some even began to walk and otherwise place weight on their new joints only a few weeks after surgery. Sometime next decade, human patients may therefore be fitted with bioprinted scaffolds that will trigger the growth of replacement hip and other bones. In a similar development, a team from Washington State University has also recently reported on four years of work using 3D printers to create a bone-like material that may in the future be used to repair injuries to human bones.

In Situ Bioprinting

The aforementioned research progress will in time permit organs to be bioprinted in a lab from a culture of a patient’s own cells. Such developments could therefore spark a medical revolution. Nevertheless, others are already trying to go further by developing techniques that will enable cells to be printed directly onto or into the human body in situ. Sometime next decade, doctors may therefore be able to scan wounds and spray on layers of cells to very rapidly heal them.

Already a team of bioprinting researchers led by Anthony Atala at the Wake Forest School of Medicine has developed a skin printer. In initial experiments they have taken 3D scans of test injuries inflicted on some mice and have used the data to control a bioprint head that has sprayed skin cells, a coagulant and collagen onto the wounds. The results are very promising, with the wounds healing in just two or three weeks compared to about five or six weeks in a control group. Funding for the skin-printing project is coming in part from the US military, who are keen to develop in situ bioprinting to help heal wounds on the battlefield. At present the work is still in a pre-clinical phase, with Atala progressing his research using pigs. However, trials with human burn victims could be as little as five years away.

The potential to use bioprinters to repair our bodies in situ is pretty mind-blowing. In perhaps no more than a few decades it may be possible for robotic surgical arms tipped with bioprint heads to enter the body, repair damage at the cellular level, and then also repair their point of entry on their way out. Patients would still need to rest and recuperate for a few days as bioprinted materials fully fused into mature living tissue. However, most patients could potentially recover from very major surgery in less than a week.

Cosmetic Applications …

Bioprinting Implications …

More information on bioprinting can be found in my books 3D Printing: Second Edition and The Next Big Thing. There is also a bioprinting section in my 3D Printing Directory. Oh, and there is also a great infographic about bioprinting here. Enjoy!

 

How to print out a blood vessel

New work moves closer to the age of organs on demand.

Blood vessels can now be ‘printed out’ by machine. Could bigger structures be in the future? (Image: Susumu Nishinaga / Science Photo Library)



New method for 3D imaging of brain tumors

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

 

Third-Harmonic Generation Microscopy Provides In Situ Brain Tumor Imaging

AMSTERDAM, Netherlands, April 25, 2016 — A technique involving third-harmonic generation microscopy could allow neurosurgeons to image and assess brain tumor boundaries during surgery, providing optical biopsies in near-real time and increasing the accuracy of tissue removal.

Pathologists typically use staining methods, in which chemicals like hematoxylin and eosin turn different tissue components blue and red, revealing tissue structure and whether any tumor cells are present. A definitive diagnosis can take up to 24 hours, meaning surgeons may not realize some cancerous tissue has escaped their attention until after surgery — requiring a second operation and more risk.

Tissue from a patient diagnosed with low-grade glioma. The green image is taken with the new method, while the pink uses conventional hematoxylin and eosin staining. From the upper left to the lower right, both images show increasing cell density due to more tumor tissue. The insets reveal the high density of tumor cells. Courtesy of N.V. Kuzmin et al./VU University Amsterdam.

Brain tumors — specifically glial brain tumors — are often spread out and mixed in with the healthy tissue, presenting a particular challenge. Surgery, irradiation and chemotherapy often cause substantial collateral damage to the surrounding brain tissue.

Now researchers from VU University Amsterdam, led by professor Marloes Groot, have demonstrated a label-free optical method for imaging cancerous brain tissue. They were able to produce most images in under a minute; smaller ones took <1 s, while larger images of a few square millimeters took 5 min.

The study involved firing short, 200-fs, 1200-nm laser pulses into the tissue. When three photons converged at the same time and place, they interacted with the nonlinear optical properties of the tissue. Through the phenomenon of harmonic generation, these interactions produced a single 400-nm photon (third harmonic generation, from three incident photons) or a single 600-nm photon (second harmonic generation, from two).
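The reported wavelengths follow directly from energy conservation: the emitted harmonic photon carries the combined energy of the incident photons, so its wavelength is the pump wavelength divided by the number of photons combined. As a quick check against the figures above:

\lambda_{\mathrm{THG}} = \frac{\lambda_{\mathrm{pump}}}{3} = \frac{1200\ \mathrm{nm}}{3} = 400\ \mathrm{nm}, \qquad \lambda_{\mathrm{SHG}} = \frac{\lambda_{\mathrm{pump}}}{2} = \frac{1200\ \mathrm{nm}}{2} = 600\ \mathrm{nm}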

The shorter-wavelength photon scatters in the tissue, and when it reaches a detector — in this case a high-sensitivity GaAsP photomultiplier tube — it reveals what the tissue looks like inside. The resulting images enabled clear recognition of cellularity, nuclear pleomorphism and rarefaction of neuropil in the tissue.

While this technique has been used in other applications — to image insects and fish embryos, for example — the researchers said this is the first time it’s been used to analyze glial brain tumors.

Groot and her team are now developing a handheld device for tumor border detection during surgery. The incoming laser pulses can only reach a depth of about 100 μm into the tissue currently; to reach further, Groot envisions attaching a needle that can pierce the tissue and deliver photons deeper.

The research was published in Biomedical Optics Express, a publication of The Optical Society (OSA) (doi: 10.1364/boe.7.001889).

 

Third harmonic generation imaging for fast, label-free pathology of human brain tumors

Biomedical Optics Express 2016; 7(5): 1889–1904. doi:10.1364/BOE.7.001889

In brain tumor surgery, recognition of tumor boundaries is key. However, intraoperative assessment of tumor boundaries by the neurosurgeon is difficult. Therefore, there is an urgent need for tools that provide the neurosurgeon with pathological information during the operation. We show that third harmonic generation (THG) microscopy provides label-free, real-time images of histopathological quality; increased cellularity, nuclear pleomorphism, and rarefaction of neuropil in fresh, unstained human brain tissue could be clearly recognized. We further demonstrate THG images taken with a GRIN objective, as a step toward in situ THG microendoscopy of tumor boundaries. THG imaging is thus a promising tool for optical biopsies.

 

Glial tumors (gliomas) account for almost 80% of the tumors originating from brain tissue. The vast majority of these tumors are so-called ‘diffuse gliomas’ as they show very extensive (‘diffuse’) growth into the surrounding brain parenchyma. With surgical resection, irradiation, and/or chemotherapy it is impossible to eliminate all glioma cells without serious damage to the brain tissue. As a consequence, until now, patients with a diffuse glioma have had a poor prognosis, a situation which strongly contributes to the fact that brain tumor patients experience more years of life lost than patients with any other type of cancer [1,2].

Meanwhile it has also been demonstrated that the prognosis of patients with a diffuse glioma correlates with the extent of resection [3–5]. During brain surgery, however, it is extremely difficult for the neurosurgeon to determine the boundary of the tumor, i.e. whether a brain area contains tumor cells or not. If the neurosurgeon could have histopathological information on the tumor boundaries during brain surgery, then recognition of these tumor boundaries and with that, the surgical resection, could be significantly improved.

Occasionally, intra-operative analysis using hematoxylin-and-eosin (H&E) stained sections of snap-frozen material or smear preparations is performed by the pathologist to help establish brain tumor boundaries, but this procedure only allows analysis of small, selected regions, can only be performed on tissue fragments that are already resected, and is rather time consuming (frozen section diagnosis) or does not allow analysis of tumor in the histological context (smear preparations). Fluorescence imaging techniques are increasingly used during surgery [6,7] but are associated with several drawbacks, such as heterogeneous delivery and nonspecific staining [8,9]. In particular, low-grade gliomas and normal brain tissue have an intact blood-brain barrier and take up little circulating dye [10–12]. Alternative techniques are therefore required, that can detect the presence of tumor cells in tissue without fluorescent labels and with a speed that enables ‘live’ feedback to the surgeon while he/she operates.

The past year has seen exciting new developments in which optical coherence tomography [13] and stimulated Raman microscopy [14,15] were reported to reliably detect tumor tissue in the brain of human glioma patients, and a handheld Raman spectroscopy device was even implemented intra-surgically to assess brain tissue prior to excision [16]. These techniques are especially sensitive in densely tumor-infiltrated areas, and for the Raman spectroscopy device study a sensitivity limit of 17 tumor cells in an area of 150 × 150 μm² was reported. The discriminating power of the Raman techniques is based on subtle differences in the vibrational spectra of tumor tissue and healthy tissue, and they require extensive comparison of experimental spectra against libraries of reference spectra. A technique capable of directly visualizing the classical histopathological hallmark criteria currently used by pathologists for classification of tumor tissue could potentially be even more reliable and make the transition from the current practice—histopathological analysis of fixated tissue—to in situ optical biopsy easier. Diffuse gliomas are histopathologically characterized by variably increased cellularity, nuclear pleomorphism and—especially in higher-grade neoplasms—brisk mitotic activity, microvascular proliferation, and necrosis. To visualize these features in live tissue, a technique that elucidates the morphology of tissue is required. In this context, third harmonic generation (THG) microscopy is a promising tool because of its capacity to visualize almost the full morphology of tissue. THG is a nonlinear optical process that relies on spatial variations of the third-order non-linear susceptibility χ(3) intrinsic to the tissue and (in the case of brain tissue) mainly arises from interfaces with lipid-rich molecules [17–27]. SHG signals arise from an optical nonlinear process involving non-centrosymmetric molecules present in, for example, microtubules and collagen. THG has been successfully applied to image unstained samples such as insect embryos, plant seeds and intact mammalian tissue [28], epithelial tissues [29–31], zebra fish embryos [32], and the zebra fish nervous system [33]. In brain tissue of mice, augmented by co-recording of SHG signals, THG was shown to visualize cells, nuclei, the inner and outer contours of axons, blood cells, and vessels, resulting in the visualization of both gray and white matter (GM and WM) as well as vascularization, up to a depth of 350 μm [24,26]. Here, we explore the potential of THG and SHG imaging for real-time analysis of ex-vivo human brain tissue in the challenging cases of diffuse tumor invasion in low-grade brain tumors as well as of high-grade gliomas and structurally normal brain tissues.

 

Multiphoton imaging

THG and SHG are nonlinear optical processes that may occur in tissue depending on the nonlinear susceptibility coefficients χ(3) and χ(2) of the tissue and upon satisfying phase matching conditions [17–19,21,23–27]. In the THG process, three incident photons are converted into one photon with triple energy and one third of the wavelength (Fig. 1(A)). In the SHG process, signals result from the conversion of an incident photon pair into one photon with twice the energy and half the wavelength. Two- and three-photon excited fluorescence signals (2PF, 3PF) may simultaneously be generated by intrinsic proteins (Fig. 1(B)). As a result, a set of distinct (harmonic) and broadband (autofluorescence) spectral peaks is generated in the visible range. The imaging setup (Fig. 1(C)) to generate and collect these signals consisted of a commercial two-photon laser-scanning microscope (TriMScope I, LaVision BioTec GmbH) and a femtosecond laser source. The laser source was an optical parametric oscillator (Mira-OPO, APE) pumped at 810 nm by a Ti-sapphire oscillator (Coherent Chameleon Ultra II). The OPO generates 200 fs pulses at 1200 nm with a repetition rate of 80 MHz. We selected this wavelength as it falls in the tissue transparency window, providing deeper penetration and reduced photodamage compared to the 700–1000 nm range, as well as harmonic signals generated in the visible wavelength range, facilitating their collection and detection with conventional objectives and detectors. We focused the OPO beam on the sample using a 25×/1.10 (Nikon APO LWD) water-dipping objective (MO). The 1200 nm beam focal spot size on the sample was d_lateral ~0.7 μm and d_axial ~4.1 μm. It was measured with 0.175 μm fluorescent microspheres (see Section 3.4), yielding two- and three-photon resolution values Δ_2P,lateral ~0.5 μm, Δ_2P,axial ~2.9 μm, Δ_3P,lateral ~0.4 μm, and Δ_3P,axial ~2.4 μm. Two high-sensitivity GaAsP photomultiplier tubes (PMT, Hamamatsu H7422-40) equipped with narrowband filters at 400 nm and 600 nm were used to collect the THG and SHG signals, respectively, as a function of position of the focus in the sample. The signals were filtered from the 1200 nm fundamental photons by a dichroic mirror (Chroma T800LPXRXT, DM1), split into SHG and THG channels by a dichroic mirror (Chroma T425LPXR, DM2), and passed through narrow-band interference filters (F) for SHG (Chroma D600/10X) and THG (Chroma Z400/10X) detection. The efficient back-scattering of the harmonic signals allowed for their detection in epi-direction. The laser beam was transversely scanned over the sample by a pair of galvo mirrors (GM). THG and SHG modalities are intrinsically confocal and therefore provide direct depth sectioning. We obtained a full 3D image of the tissue volume by scanning the microscope objective with a stepper motor in the vertical (z) direction. The mosaic imaging of the sample was performed by transverse (xy) scanning of the motorized translation stage. Imaging data were acquired with the TriMScope I software (“Imspector Pro”); image stacks were stored in 16-bit tiff-format and further processed and analyzed with “ImageJ” software (ver. 1.49m, NIH, USA). All images were processed with logarithmic contrast enhancement.

Fig. 1 THG/SHG microscopy for brain tissue imaging. (A) Energy level diagram of the second (SHG) and third (THG) harmonic generation process. (B) Energy level diagram of the two- (2PF) and three-photon (3PF) excited auto-fluorescence process. (C) Multiphoton microscope setup: Laser producing 200 fs pulses at 1200 nm; GM – X-Y galvo-scanner mirrors; SL – scan lens; TL – tube lens; MO – microscope objective; DM1 – dichroic mirror reflecting back-scattered THG/SHG photons to the PMT detectors; DM2 – dichroic mirror splitting SHG and THG channels; F – narrow-band SHG and THG interference filters; L – focusing lenses; PMT – photomultiplier tube detectors. (D) Infrared photons (white arrow) are focused deep in the brain tissue, converted to THG (green) and SHG (red) photons, scattered back (green/red arrows) and epi-detected. The nonlinear optical processes result in label-free contrast images with sub-cellular resolution and intrinsic depth sectioning. (E and F) Freshly-excised low-grade (E) and high-grade (F) glioma tissue samples in artificial cerebrospinal fluid (ACSF) in a Petri dish with a millimeter paper underneath for scale. (G) An agar-embedded tumor tissue sample under 0.17 mm glass cover slip with the microscope objective (MO) on top.

Endomicroscopy imaging

For endomicroscopic imaging we used a commercial high-numerical-aperture (NA) multi-element micro-objective lens (GT-MO-080-018-810, GRINTECH) composed of a plano-convex lens and two GRaded INdex (GRIN) lenses with aberration compensation, object NA = 0.80 and object working distance 200 µm (in water), image NA = 0.18 and image working distance 200 µm (in air), magnification × 4.8 and field-of-view diameter of 200 μm. The GRIN lenses and the plano-convex lens were mounted in a waterproof stainless steel housing with an outer diameter of 1.4 mm and a total length of 7.5 mm. Originally designed for a wavelength range of 800–900 nm [36–41], this micro-objective lens was used for focusing of 1200 nm femtosecond pulses and collection of back-scattered harmonic and fluorescence photons. A coupling lens with f = 40 mm (NA = 0.19, Qioptiq, ARB2 NIR, dia. 25 mm) focused the scanned laser beam in the image plane of the micro-objective lens and forwarded the epi-detected harmonic and fluorescence photons to the PMTs.

We characterized the lateral (x) and axial (z) resolution of the micro-objective lens by 3D imaging of fluorescence microspheres (PS-Speck Microscope Point Source Kit, P7220, Molecular Probes). We used “blue” and “deep red” microspheres, 0.175 ± 0.005 μm in diameter, with excitation/emission maxima at 360/440 nm and 630/660 nm to obtain three-photon (3P) and two-photon (2P) point spread function (PSF) profiles. The excitation wavelength was 1200 nm, and fluorescence signals were detected in the 400 ± 5 nm (3P) and 600 ± 5 nm (2P) spectral windows, just as in the brain tissue imaging experiments. 1 μL of “blue” and “deep red” sphere suspensions were applied to a propanol-cleaned 75 × 26 × 1 mm³ glass slide. The mixed microsphere suspension was left to dry for 20 min and was then imaged with the micro-objective lens via a water immersion layer. The assembly of the coupling lens and the micro-objective lens was vertically (z) scanned with a step of 0.5 μm, and stacks of two-/three-photon images were recorded. The line profiles were then taken over the lateral (xy) images of the fluorescent spheres with maximal intensity (in focus), and fluorescence counts were plotted as a function of the lateral coordinate (x). The axial (z) scan values of the two- and three-photon fluorescence signals were acquired by averaging of the total fluorescence counts of the corresponding spheres and were plotted as a function of the axial coordinate (z). Lateral (x) and axial (z) 2P/3P points were then fitted with Gaussian functions and full width at half-maximum (FWHM) values were measured.
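For readers who want to see the essence of this calibration step, the following minimal Python sketch (illustrative only, with synthetic data; not the authors’ code) fits a Gaussian to a measured line profile and reports the resolution as FWHM = 2*sqrt(2 ln 2)*sigma:

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) + offset

# Synthetic line profile across a 0.175 um fluorescent microsphere.
x = np.linspace(-2.0, 2.0, 81)                          # lateral position, um
counts = gaussian(x, 1000.0, 0.0, 0.21, 20.0)           # sigma 0.21 um ~ 0.5 um FWHM
counts += np.random.default_rng(0).normal(0.0, 10.0, x.size)  # detector noise

popt, _ = curve_fit(gaussian, x, counts, p0=(800.0, 0.0, 0.3, 0.0))
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])  # FWHM = 2*sqrt(2 ln 2)*sigma
print(f"fitted lateral FWHM ~ {fwhm:.2f} um")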

…… Results ……

Conclusions

The results shown here provide the first evidence that—by applying the same microscopic criteria that are used by the pathologist, i.e. increased cellularity, nuclear pleomorphism, and rarefaction of neuropil—THG/SHG ex-vivo microscopy can be used to recognize the presence of diffuse infiltrative glioma in fresh, unstained human brain tissue. Images and a first diagnosis can be provided in seconds, with the ‘inspection mode’, by moving the sample under the scanning microscope (see Visualization 4 and Visualization 5), or in about 5 minutes if an area has to be inspected with sub-cellular detail. The sensitivity of THG to interfaces provides images with excellent contrast in which cell-by-cell variations are visualized. The quality of the images and the speed with which they can be recorded make THG a promising tool for quick assessment of the nature of excised tissue. Importantly, because THG/SHG images are very close to those of histological slides, we expect that the surgeon (or pathologist) will need very little additional training for adequate interpretation of the images. We are planning to construct a THG/SHG ex-vivo tabletop device consisting of a compact laser source and a laser-scanning microscope requiring a physical footprint of only 1 m², to be placed in an operating room, enabling immediate feedback to the surgeon on the nature of excised tissue, during the operation. With this device, we will perform a quantitative study of the added value of rapid THG/SHG pathological feedback during surgery for the final success of the neurosurgery. Finally, we note that THG/SHG imaging does not induce artifacts associated with fixation, freezing, and staining; therefore, tissue fragments examined ex-vivo can still be used for subsequent immunochemical and/or molecular analysis.

The microendoscopy THG/SHG imaging results represent an important step toward the development of a THG/SHG-based bioptic needle, and show that the use of such a needle for in situ optical sampling for optimal resection of gliomas is indeed a viable prospect, as has been demonstrated also before for multi-photon microscopies [38,49–54]. Although there are several issues associated with the operation of a needle-like optical device, such as the fact that blood in the surgical cavity may obscure the view, and the fact that only small areas can be biopsied with a needle, it may be a valuable tool in cases where sparing healthy tissue is of such vital importance as in brain surgery. Therefore, the reasonably good quality of the THG images taken with the GRIN micro-objective shown here, together with the developments in the field of microendoscopy, warrant further development of THG/SHG into a true handheld device. This next step, a true handheld bioptic needle, requires an optical fiber to transport the light from a small footprint laser to the GRIN micro-objective, and a small 2D scanner unit, to enable placing the laser at a sufficient distance from the patient. Patient-safe irradiation levels for THG imaging will have to be determined but are expected to lie in the 10–50 mW range [55–58]. This implies that only minor optimization of signal collection efficiency needs to be achieved, because the images of Fig. 10 were measured with 50 mW incident power.

THG/SHG imaging thus holds great promise for improving surgical procedures, thereby reducing the need for second surgeries and the loss of function by excising non-infiltrated brain tissue, as well as improving survival and quality of life of the patients. In addition, the success in the challenging case of diffuse gliomas promises great potential of THG/SHG-based histological analysis for a much wider spectrum of diagnostic applications.

References and links

1. N. G. Burnet, S. J. Jefferies, R. J. Benson, D. P. Hunt, and F. P. Treasure, “Years of life lost (YLL) from cancer is an important measure of population burden–and should be considered when allocating research funds,” Br. J. Cancer 92(2), 241–245 (2005). [PubMed]  

2. J. A. Schwartzbaum, J. L. Fisher, K. D. Aldape, and M. Wrensch, “Epidemiology and molecular pathology of glioma,” Nat. Clin. Pract. Neurol. 2(9), 494–516 (2006). [CrossRef]   [PubMed]  

3. J. S. Smith, E. F. Chang, K. R. Lamborn, S. M. Chang, M. D. Prados, S. Cha, T. Tihan, S. Vandenberg, M. W. McDermott, and M. S. Berger, “Role of extent of resection in the long-term outcome of low-grade hemispheric gliomas,” J. Clin. Oncol. 26(8), 1338–1345 (2008). [CrossRef]   [PubMed]  

4. N. Sanai and M. S. Berger, “Glioma extent of resection and its impact on patient outcome,” Neurosurgery 62(4), 753–766 (2008). [CrossRef]   [PubMed]  

5. I. Y. Eyüpoglu, M. Buchfelder, and N. E. Savaskan, “Surgical resection of malignant gliomas-role in optimizing patient outcome,” Nat. Rev. Neurol. 9(3), 141–151 (2013). [CrossRef]  [PubMed]  

6. U. Pichlmeier, A. Bink, G. Schackert, and W. Stummer, “Resection and survival in glioblastoma multiforme: An RTOG recursive partitioning analysis of ALA study patients,” Neuro-oncol. 10(6), 1025–1034 (2008). [CrossRef]   [PubMed]  

7. W. Stummer, J. C. Tonn, C. Goetz, W. Ullrich, H. Stepp, A. Bink, T. Pietsch, and U. Pichlmeier, “5-Aminolevulinic Acid-Derived Tumor Fluorescence: The Diagnostic Accuracy of Visible Fluorescence Qualities as Corroborated by Spectrometry and Histology and Postoperative Imaging,” Neurosurgery 74(3), 310–320 (2014). [CrossRef]   [PubMed]  

….. more


Table 1 Pre-operative diagnoses and cell densities observed in the studied brain tissue samples by THG imaging and corresponding H&E histopathology.



Imaging of Cancer Cells

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Microscope uses nanosecond-speed laser and deep learning to detect cancer cells more efficiently

April 13, 2016

Scientists at the California NanoSystems Institute at UCLA have developed a new technique for identifying cancer cells in blood samples faster and more accurately than the current standard methods.

In one common approach to testing for cancer, doctors add biochemicals to blood samples. Those biochemicals attach biological “labels” to the cancer cells, and those labels enable instruments to detect and identify them. However, the biochemicals can damage the cells and render the samples unusable for future analyses. There are other current techniques that don’t use labeling but can be inaccurate because they identify cancer cells based only on one physical characteristic.

Time-stretch quantitative phase imaging (TS-QPI) and analytics system

The new technique images cells without destroying them and can identify 16 physical characteristics — including size, granularity and biomass — instead of just one.

The new technique combines two components that were invented at UCLA:

A “photonic time stretch” microscope, which is capable of quickly imaging cells in blood samples. Invented by Bahram Jalali, professor and Northrop-Grumman Optoelectronics Chair in electrical engineering, it works by taking pictures of flowing blood cells using laser bursts (similar to how a camera uses a flash). Each flash only lasts nanoseconds (billionths of a second) to avoid damage to cells, but that normally means the images are both too weak to be detected and too fast to be digitized by normal instrumentation. The new microscope overcomes those challenges by using specially designed optics that amplify and boost the clarity of the images, and simultaneously slow them down enough to be detected and digitized at a rate of 36 million images per second.

A deep learning computer program, which identifies cancer cells with more than 95 percent accuracy. Deep learning is a form of artificial intelligence that uses complex algorithms to extract patterns and knowledge from rich multidimensional datasets, with the goal of achieving accurate decision making.

The study was published in the open-access journal Scientific Reports. The researchers write in the paper that the system could lead to data-driven diagnoses by cells’ physical characteristics, which could allow quicker and earlier diagnoses of cancer, for example, and better understanding of the tumor-specific gene expression in cells, which could facilitate new treatments for disease.

The research was supported by NantWorks, LLC.

 

Abstract of Deep Learning in Label-free Cell Classification

Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. This system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.

References:

Claire Lifan Chen, Ata Mahjoubfar, Li-Chia Tai, Ian K. Blaby, Allen Huang, Kayvan Reza Niazi & Bahram Jalali. Deep Learning in Label-free Cell Classification. Scientific Reports 6, Article number: 21471 (2016); doi:10.1038/srep21471 (open access)

Supplementary Information

 

Deep Learning in Label-free Cell Classification

Claire Lifan Chen, Ata Mahjoubfar, Li-Chia Tai, Ian K. Blaby, Allen Huang, Kayvan Reza Niazi & Bahram Jalali

Scientific Reports 6, Article number: 21471 (2016)    http://dx.doi.org/10.1038/srep21471

Deep learning extracts patterns and knowledge from rich multidimensional datasets. While it is extensively used for image recognition and speech processing, its application to label-free classification of cells has not been exploited. Flow cytometry is a powerful tool for large-scale cell analysis due to its ability to measure anisotropic elastic light scattering of millions of individual cells as well as emission of fluorescent labels conjugated to cells1,2. However, each cell is represented with single values per detection channel (forward scatter, side scatter, and emission bands) and often requires labeling with specific biomarkers for acceptable classification accuracy1,3. Imaging flow cytometry4,5 on the other hand captures images of cells, revealing significantly more information about the cells. For example, it can distinguish clusters and debris that would otherwise result in false positive identification in a conventional flow cytometer based on light scattering6.

In addition to classification accuracy, the throughput is another critical specification of a flow cytometer. Indeed high throughput, typically 100,000 cells per second, is needed to screen a large enough cell population to find rare abnormal cells that are indicative of early stage diseases. However there is a fundamental trade-off between throughput and accuracy in any measurement system7,8. For example, imaging flow cytometers face a throughput limit imposed by the speed of the CCD or the CMOS cameras, a number that is approximately 2000 cells/s for present systems9. Higher flow rates lead to blurred cell images due to the finite camera shutter speed. Many applications of flow analyzers such as cancer diagnostics, drug discovery, biofuel development, and emulsion characterization require classification of large sample sizes with a high-degree of statistical accuracy10. This has fueled research into alternative optical diagnostic techniques for characterization of cells and particles in flow.

Recently, our group has developed a label-free imaging flow-cytometry technique based on coherent optical implementation of the photonic time stretch concept11. This instrument overcomes the trade-off between sensitivity and speed by using Amplified Time-stretch Dispersive Fourier Transform12,13,14,15. In time stretched imaging16, the object’s spatial information is encoded in the spectrum of laser pulses within a pulse duration of sub-nanoseconds (Fig. 1). Each pulse representing one frame of the camera is then stretched in time so that it can be digitized in real-time by an electronic analog-to-digital converter (ADC). The ultra-fast pulse illumination freezes the motion of high-speed cells or particles in flow to achieve blur-free imaging. Detection sensitivity is challenged by the low number of photons collected during the ultra-short shutter time (optical pulse width) and the drop in the peak optical power resulting from the time stretch. These issues are solved in time stretch imaging by implementing a low noise-figure Raman amplifier within the dispersive device that performs time stretching8,11,16. Moreover, warped stretch transform17,18 can be used in time stretch imaging to achieve optical image compression and nonuniform spatial resolution over the field-of-view19. In the coherent version of the instrument, the time stretch imaging is combined with spectral interferometry to measure quantitative phase and intensity images in real-time and at high throughput20. Integrated with a microfluidic channel, the coherent time stretch imaging system in this work measures both quantitative optical phase shift and loss of individual cells as a high-speed imaging flow cytometer, capturing 36 million images per second in flow rates as high as 10 meters per second, reaching up to 100,000 cells per second throughput.
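As a rough consistency check (our arithmetic, not a figure quoted in the paper): at 36 million line scans per second and the maximum flow speed of 10 m/s, successive rainbow flashes sample the stream at a pitch of

\Delta x = \frac{v}{f_{\mathrm{rep}}} = \frac{10\ \mathrm{m/s}}{36 \times 10^{6}\ \mathrm{s^{-1}}} \approx 0.28\ \mathrm{\mu m},

finer than the micrometre-scale cell features being imaged, which is why even cells in very fast flow can be captured without gaps or motion blur.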

Figure 1: Time stretch quantitative phase imaging (TS-QPI) and analytics system. A mode-locked laser followed by a nonlinear fiber, an erbium doped fiber amplifier (EDFA), and a wavelength-division multiplexing (WDM) filter generate and shape a train of broadband optical pulses.
http://www.nature.com/article-assets/npg/srep/2016/160315/srep21471/images_hires/m685/srep21471-f1.jpg

 

Box 1: The pulse train is spatially dispersed into a train of rainbow flashes illuminating the target as line scans. The spatial features of the target are encoded into the spectrum of the broadband optical pulses, each representing a one-dimensional frame. The ultra-short optical pulse illumination freezes the motion of cells during high speed flow to achieve blur-free imaging with a throughput of 100,000 cells/s. The phase shift and intensity loss at each location within the field of view are embedded into the spectral interference patterns using a Michelson interferometer.

Box 2: The interferogram pulses were then stretched in time so that spatial information could be mapped into time through time-stretch dispersive Fourier transform (TS-DFT), and then captured by a single pixel photodetector and an analog-to-digital converter (ADC). The loss of sensitivity at high shutter speed is compensated by stimulated Raman amplification during time stretch.

Box 3: (a) Pulse synchronization; the time-domain signal carrying serially captured rainbow pulses is transformed into a series of one-dimensional spatial maps, which are used for forming line images. (b) The biomass density of a cell leads to a spatially varying optical phase shift. When a rainbow flash passes through the cells, the changes in refractive index at different locations will cause phase walk-off at interrogation wavelengths. Hilbert transformation and phase unwrapping are used to extract the spatial phase shift. (c) Decoding the phase shift in each pulse at each wavelength and remapping it into a pixel reveals the protein concentration distribution within cells. The optical loss induced by the cells, embedded in the pulse intensity variations, is obtained from the amplitude of the slowly varying envelope of the spectral interferograms. Thus, quantitative optical phase shift and intensity loss images are captured simultaneously. Both images are calibrated based on the regions where the cells are absent. Cell features describing morphology, granularity, biomass, etc. are extracted from the images. (d) These biophysical features are used in a machine learning algorithm for high-accuracy label-free classification of the cells.
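The phase-recovery step in Box 3(b), Hilbert transformation followed by phase unwrapping, can be sketched in a few lines of Python (illustrative only, on a synthetic fringe signal; the real system applies this to spectrally encoded, time-stretched pulses):

import numpy as np
from scipy.signal import hilbert

t = np.linspace(0.0, 1.0, 4000)
carrier = 2 * np.pi * 200.0 * t                       # interferometric fringe carrier
cell_phase = 1.5 * np.exp(-(t - 0.5) ** 2 / 0.005)    # slow phase bump from a "cell"
interferogram = np.cos(carrier + cell_phase)

analytic = hilbert(interferogram)                     # complex analytic signal
total_phase = np.unwrap(np.angle(analytic))           # unwrapped instantaneous phase
recovered = total_phase - carrier                     # subtract the known carrier
print(f"peak recovered phase ~ {recovered.max():.2f} rad (expected ~1.50)")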

Surface markers used to label cells, such as EpCAM21, are unavailable in some applications; for example, melanoma and pancreatic circulating tumor cells (CTCs), as well as some cancer stem cells, are EpCAM-negative and will escape EpCAM-based detection platforms22. Furthermore, large-population cell sorting opens the door to downstream operations, where the negative impacts of labels on cellular behavior and viability are often unacceptable23. Cell labels may trigger activating or inhibitory signal transduction, altering the behavior of the desired cellular subtypes and potentially leading to errors in downstream analysis, such as DNA sequencing and subpopulation regrowth. Quantitative phase imaging (QPI) methods24,25,26,27 that categorize unlabeled living cells with high accuracy are therefore needed. Coherent time stretch imaging is a method that enables quantitative phase imaging at ultrahigh throughput for non-invasive, label-free screening of large numbers of cells.

In this work, the information from the quantitative optical loss and phase images is fused into expert-designed features, leading to record label-free classification accuracy when combined with deep learning. Image mining techniques are applied, for the first time, to time stretch quantitative phase imaging to measure biophysical attributes, including protein concentration, optical loss, and morphological features of single cells, at an ultrahigh flow rate and in a label-free fashion. These attributes differ widely28,29,30,31 among cells, and their variations reflect important information about genotypes and physiological stimuli32. The multiplexed biophysical features thus lead to an information-rich, hyper-dimensional representation of the cells for label-free classification with high statistical precision.

We further improved the accuracy, repeatability, and the balance between sensitivity and specificity of our label-free cell classification with a novel machine learning pipeline, which harnesses the advantages of multivariate supervised learning as well as unique training by evolutionary global optimization of the receiver operating characteristic (ROC). To demonstrate the sensitivity, specificity, and accuracy of multi-feature label-free flow cytometry using our technique, we classified (1) OT-II hybridoma T-lymphocytes and SW-480 colon cancer epithelial cells, and (2) Chlamydomonas reinhardtii algal cells (herein referred to as Chlamydomonas) based on their lipid content, which is related to the yield in biofuel production. Our preliminary results show that, compared to classification by individual biophysical parameters, our label-free hyperdimensional technique improves the detection accuracy from 77.8% to 95.5%; in other words, it reduces the classification inaccuracy by about five times.
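The gain from feature fusion can be illustrated on synthetic data (invented numbers, not the paper’s measurements): fusing many individually weak features outperforms the best single feature.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic illustration of the claim above (invented data, not the paper's):
# many individually weak biophysical features, fused, classify far better than
# any single feature alone.
rng = np.random.default_rng(0)
n_cells = 2000
labels = rng.integers(0, 2, n_cells)                       # two cell classes
features = rng.normal(size=(n_cells, 16)) + 0.4 * labels[:, None]

clf = LogisticRegression(max_iter=1000)
acc_single = cross_val_score(clf, features[:, :1], labels, cv=5).mean()
acc_fused = cross_val_score(clf, features, labels, cv=5).mean()
print(f"single feature: {acc_single:.3f}   fused 16 features: {acc_fused:.3f}")
```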

 

Feature Extraction

The decomposed components of sequential line scans form pairs of spatial maps, namely, optical phase and loss images as shown in Fig. 2 (see Section Methods: Image Reconstruction). These images are used to obtain biophysical fingerprints of the cells8,36. With domain expertise, raw images are fused and transformed into a suitable set of biophysical features, listed in Table 1, which the deep learning model further converts into learned features for improved classification.
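A hedged sketch of this feature-extraction step (a hypothetical helper built on scikit-image, not the paper’s code) that computes a few Table 1-style morphological and phase features per segmented cell:

```python
import numpy as np
from skimage import filters, measure

# Hypothetical helper (the paper's own extraction is more elaborate): threshold
# the reconstructed phase image, then compute per-cell morphology and phase stats.
def extract_features(phase_img):
    mask = phase_img > filters.threshold_otsu(phase_img)
    cells = []
    for region in measure.regionprops(measure.label(mask), intensity_image=phase_img):
        circularity = 4 * np.pi * region.area / max(region.perimeter, 1.0) ** 2
        cells.append({
            "diameter": region.equivalent_diameter,
            "area": region.area,
            "perimeter": region.perimeter,
            "circularity": circularity,
            "mean_phase": region.mean_intensity,   # proxy for protein concentration
        })
    return cells
```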

The new technique combines two components that were invented at UCLA:

A “photonic time stretch” microscope, which is capable of quickly imaging cells in blood samples. Invented by Bahram Jalali, professor and Northrop-Grumman Optoelectronics Chair in electrical engineering, it works by taking pictures of flowing blood cells using laser bursts (similar to how a camera uses a flash). Each flash lasts only nanoseconds (billionths of a second) to avoid damage to cells, but that normally means the images are both too weak to be detected and too fast to be digitized by normal instrumentation. The new microscope overcomes those challenges by using specially designed optics that amplify and boost the clarity of the images, and simultaneously slow them down enough to be detected and digitized at a rate of 36 million images per second.

A deep learning computer program, which identifies cancer cells with more than 95 percent accuracy. Deep learning is a form of artificial intelligence that uses complex algorithms to extract patterns and knowledge from rich multidimensional datasets, with the goal of achieving accurate decision making.

The study was published in the open-access journal Nature Scientific Reports. The researchers write in the paper that the system could lead to data-driven diagnoses by cells’ physical characteristics, which could allow quicker and earlier diagnoses of cancer, for example, and better understanding of the tumor-specific gene expression in cells, which could facilitate new treatments for disease.

The research was supported by NantWorks, LLC.

 

Figure 2 (quantitative optical phase and loss images): http://www.nature.com/article-assets/npg/srep/2016/160315/srep21471/images_hires/m685/srep21471-f2.jpg

The optical loss images of the cells are affected by the attenuation of multiplexed wavelength components passing through the cells. The attenuation itself is governed by the absorption of the light in cells as well as the scattering from the surface of the cells and from the internal cell organelles. The optical loss image is derived from the low frequency component of the pulse interferograms. The optical phase image is extracted from the analytic form of the high frequency component of the pulse interferograms using Hilbert Transformation, followed by a phase unwrapping algorithm. Details of these derivations can be found in Section Methods. Also, supplementary Videos 1 and 2 show measurements of cell-induced optical path length difference by TS-QPI at four different points along the rainbow for OT-II and SW-480, respectively.
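The envelope/phase decomposition described above can be sketched on a synthetic fringe signal (illustrative sampling rate and fringe frequency; not the authors’ pipeline):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Toy interferogram: a cell-like phase bump riding on a fringe carrier, with a
# matching dip in transmission. All numbers below are assumptions.
fs = 50e9                                   # assumed sampling rate (Hz)
t = np.arange(2048) / fs
f_fringe = 2e9                              # assumed interference fringe frequency
bump = np.exp(-((t - t.mean()) / (np.ptp(t) / 8)) ** 2)
true_phase = 1.5 * bump                     # cell-induced phase shift (rad)
envelope = 1.0 - 0.3 * bump                 # cell-induced attenuation
signal = envelope * np.cos(2 * np.pi * f_fringe * t + true_phase)

# Slowly varying envelope -> optical-loss profile
loss_profile = np.abs(hilbert(signal))      # instantaneous amplitude ~ envelope
b, a = butter(4, 0.5e9 / (fs / 2))
loss_profile = filtfilt(b, a, loss_profile)

# High-frequency fringe term -> Hilbert phase -> unwrap -> optical path length
phase = np.unwrap(np.angle(hilbert(signal))) - 2 * np.pi * f_fringe * t
phase -= np.median(phase[:100])             # calibrate on a cell-free region
```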

Table 1: List of extracted features.

Feature Name    Description         Category

 

Figure 3: Biophysical features formed by image fusion.

(a) Pairwise correlation matrix visualized as a heat map. The map depicts the correlation between all 16 major features extracted from the quantitative images. Diagonal elements of the matrix represent the correlation of each parameter with itself, i.e., the autocorrelation. The subsets in box 1, box 2, and box 3 show high correlation because they are mainly related to the morphological, optical phase, and optical loss feature categories, respectively. (b) Ranking of biophysical features based on their AUCs in single-feature classification. Blue bars show the performance of the morphological parameters, which include diameter along the interrogation rainbow, diameter along the flow direction, tight cell area, loose cell area, perimeter, circularity, major axis length, orientation, and median radius. As expected, morphology contains the most information, but other biophysical features can contribute to improved performance of label-free cell classification. Orange bars show optical phase shift features, i.e., optical path length differences and refractive index difference. Green bars show optical loss features, representing scattering and absorption by the cell. The best-performing feature in each of these three categories is marked in red.
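The single-feature ranking of panel (b) amounts to scoring each feature by the AUC it achieves alone; a sketch on synthetic stand-in data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the Fig. 3b ranking: 16 features of varying strength.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 1000)
X = rng.normal(size=(1000, 16)) + np.linspace(0.1, 0.8, 16) * y[:, None]

aucs = []
for j in range(X.shape[1]):
    a = roc_auc_score(y, X[:, j])
    aucs.append(max(a, 1.0 - a))            # orientation-independent AUC
for j in np.argsort(aucs)[::-1]:
    print(f"feature {j:2d}: AUC = {aucs[j]:.3f}")
```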

Figure 4: Machine learning pipeline. Information from the quantitative optical phase and loss images is fused to extract multivariate biophysical features of each cell, which are fed into a fully connected neural network.

The neural network maps input features by a chain of weighted sum and nonlinear activation functions into learned feature space, convenient for classification. This deep neural network is globally trained via area under the curve (AUC) of the receiver operating characteristics (ROC). Each ROC curve corresponds to a set of weights for connections to an output node, generated by scanning the weight of the bias node. The training process maximizes AUC, pushing the ROC curve toward the upper left corner, which means improved sensitivity and specificity in classification.
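A minimal stand-in for this pipeline (an off-the-shelf scikit-learn network rather than the authors’ custom AUC-trained model): a small fully connected net scores each cell, and sweeping the decision threshold on that score, the analogue of scanning the bias-node weight, traces out the ROC curve whose AUC the training seeks to maximize.

```python
import numpy as np
from sklearn.metrics import auc, roc_curve
from sklearn.neural_network import MLPClassifier

# Synthetic data; the network architecture and all numbers are assumptions.
rng = np.random.default_rng(2)
y = rng.integers(0, 2, 2000)
X = rng.normal(size=(2000, 16)) + 0.4 * y[:, None]

net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
net.fit(X, y)
fpr, tpr, thresholds = roc_curve(y, net.predict_proba(X)[:, 1])
print(f"AUC = {auc(fpr, tpr):.3f} over {len(thresholds)} threshold settings")
```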

How to cite this article: Chen, C. L. et al. Deep Learning in Label-free Cell Classification. Sci. Rep. 6, 21471; http://dx.doi.org/10.1038/srep21471

 

Computer Algorithm Helps Characterize Cancerous Genomic Variations

http://www.genengnews.com/gen-news-highlights/computer-algorithm-helps-characterize-cancerous-genomic-variations/81252626/

To better characterize the functional context of genomic variations in cancer, researchers developed a new computer algorithm called REVEALER. [UC San Diego Health]

Scientists at the University of California San Diego School of Medicine and the Broad Institute say they have developed a new computer algorithm—REVEALER—to better characterize the functional context of genomic variations in cancer. The tool, described in a paper (“Characterizing Genomic Alterations in Cancer by Complementary Functional Associations”) published in Nature Biotechnology, is designed to help researchers identify groups of genetic variations that together associate with a particular way cancer cells get activated, or how they respond to certain treatments.

REVEALER is available for free to the global scientific community via the bioinformatics software portal GenePattern.org.

“This computational analysis method effectively uncovers the functional context of genomic alterations, such as gene mutations, amplifications, or deletions, that drive tumor formation,” said senior author Pablo Tamayo, Ph.D., professor and co-director of the UC San Diego Moores Cancer Center Genomics and Computational Biology Shared Resource.

Dr. Tamayo and team tested REVEALER using The Cancer Genome Atlas (TCGA), the NIH’s database of genomic information from more than 500 human tumors representing many cancer types. REVEALER identified gene alterations associated with the activation of several cellular processes known to play a role in tumor development and response to certain drugs. Some of these gene mutations were already known, but others were new.

For example, the researchers discovered new activating genomic abnormalities for beta-catenin, a cancer-promoting protein, and for the oxidative stress response that some cancers hijack to increase their viability.

REVEALER requires as input high-quality genomic data and a significant number of cancer samples, which can be a challenge, according to Dr. Tamayo. But compared to other methods, he adds, REVEALER is more sensitive at detecting similarities between different types of genomic features and less dependent on simplifying statistical assumptions.

“This study demonstrates the potential of combining functional profiling of cells with the characterizations of cancer genomes via next-generation sequencing,” said co-senior author Jill P. Mesirov, Ph.D., professor and associate vice chancellor for computational health sciences at UC San Diego School of Medicine.

 

Characterizing genomic alterations in cancer by complementary functional associations

Jong Wook Kim, Olga B Botvinnik, Omar Abudayyeh, Chet Birger, et al.

Nature Biotechnology (2016)    http://dx.doi.org/10.1038/nbt.3527

Systematic efforts to sequence the cancer genome have identified large numbers of mutations and copy number alterations in human cancers. However, elucidating the functional consequences of these variants, and their interactions to drive or maintain oncogenic states, remains a challenge in cancer research. We developed REVEALER, a computational method that identifies combinations of mutually exclusive genomic alterations correlated with functional phenotypes, such as the activation or gene dependency of oncogenic pathways or sensitivity to a drug treatment. We used REVEALER to uncover complementary genomic alterations associated with the transcriptional activation of β-catenin and NRF2, MEK-inhibitor sensitivity, and KRAS dependency. REVEALER successfully identified both known and new associations, demonstrating the power of combining functional profiles with extensive characterization of genomic alterations in cancer genomes.
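REVEALER’s core loop can be caricatured as a greedy search (a heavily simplified sketch: rank correlation stands in for the paper’s information coefficient, and all names are hypothetical):

```python
import numpy as np
from scipy.stats import spearmanr

# Heavily simplified sketch of REVEALER's idea: greedily grow a set of binary
# alteration profiles whose union best tracks a continuous functional target,
# which naturally favors mutually exclusive, complementary alterations.
def greedy_search(target, alterations, n_rounds=3):
    """target: (n_samples,) functional score; alterations: {name: 0/1 vector}."""
    summary = np.zeros_like(target)
    chosen = []
    for _ in range(n_rounds):
        scores = {}
        for name, vec in alterations.items():
            if name in chosen:
                continue
            u = np.maximum(summary, vec)        # logical OR with current summary
            scores[name] = 0.0 if u.min() == u.max() else spearmanr(u, target)[0]
        best = max(scores, key=scores.get)
        chosen.append(best)
        summary = np.maximum(summary, alterations[best])
    return chosen
```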

 

Figure 2: REVEALER results for transcriptional activation of β-catenin in cancer.

(a) This heatmap illustrates the use of the REVEALER approach to find complementary genomic alterations that match the transcriptional activation of β-catenin in cancer. The target profile is a TCF4 reporter that provides an estimate of…

 

An imaging-based platform for high-content, quantitative evaluation of therapeutic response in 3D tumour models

Jonathan P. Celli, Imran Rizvi, Adam R. Blanden, Iqbal Massodi, Michael D. Glidden, Brian W. Pogue & Tayyaba Hasan

Scientific Reports 4, 3751 (2014)    http://dx.doi.org/10.1038/srep03751

While it is increasingly recognized that three-dimensional (3D) cell culture models recapitulate drug responses of human cancers with more fidelity than monolayer cultures, a lack of quantitative analysis methods limits their implementation for reliable and routine assessment of emerging therapies. Here, we introduce an approach based on computational analysis of fluorescence image data to provide high-content readouts of dose-dependent cytotoxicity, growth inhibition, treatment-induced architectural changes and size-dependent response in 3D tumour models. We demonstrate this approach in adherent 3D ovarian and pancreatic multiwell extracellular matrix tumour overlays subjected to a panel of clinically relevant cytotoxic modalities and appropriately designed controls for reliable quantification of fluorescence signal. This streamlined methodology reads out the high density of information embedded in 3D culture systems, while maintaining a level of speed and efficiency traditionally achieved with global colorimetric reporters, in order to facilitate broader implementation of 3D tumour models in therapeutic screening.

The attrition rates for preclinical development of oncology therapeutics are particularly dismal due to a complex set of factors which includes 1) the failure of pre-clinical models to recapitulate determinants of in vivo treatment response, and 2) the limited ability of available assays to extract treatment-specific data integral to the complexities of therapeutic responses1,2,3. Three-dimensional (3D) tumour models have been shown to restore crucial stromal interactions which are missing in the more commonly used 2D cell culture and that influence tumour organization and architecture4,5,6,7,8, as well as therapeutic response9,10, multicellular resistance (MCR)11,12, drug penetration13,14, hypoxia15,16, and anti-apoptotic signaling17. However, such sophisticated models can only have an impact on therapeutic guidance if they are accompanied by robust quantitative assays, not only for cell viability but also for providing mechanistic insights related to the outcomes. While numerous assays for drug discovery exist18, they are generally not developed for use in 3D systems and are often inherently unsuitable. For example, colorimetric conversion products have been noted to bind to extracellular matrix (ECM)19 and traditional colorimetric cytotoxicity assays reduce treatment response to a single number reflecting a biochemical event that has been equated to cell viability (e.g. tetrazolium salt conversion20). Such approaches fail to provide insight into the spatial patterns of response within colonies, morphological or structural effects of drug response, or how overall culture viability may be obscuring the status of sub-populations that are resistant or partially responsive. Hence, the full benefit of implementing 3D tumour models in therapeutic development has yet to be realized for lack of analytical methods that describe the very aspects of treatment outcome that these systems restore.

Motivated by these factors, we introduce a new platform for quantitative in situ treatment assessment (qVISTA) in 3D tumour models based on computational analysis of information-dense biological image datasets (bioimage-informatics)21,22. This methodology provides software end-users with multiple levels of complexity in output content, from rapidly-interpreted dose response relationships to higher content quantitative insights into treatment-dependent architectural changes, spatial patterns of cytotoxicity within fields of multicellular structures, and statistical analysis of nodule-by-nodule size-dependent viability. The approach introduced here is cognizant of tradeoffs between optical resolution, data sampling (statistics), depth of field, and widespread usability (instrumentation requirement). Specifically, it is optimized for interpretation of fluorescent signals for disease-specific 3D tumour micronodules that are sufficiently small that thousands can be imaged simultaneously with little or no optical bias from widefield integration of signal along the optical axis of each object. At the core of our methodology is the premise that the copious numerical readouts gleaned from segmentation and interpretation of fluorescence signals in these image datasets can be converted into usable information to classify treatment effects comprehensively, without sacrificing the throughput of traditional screening approaches. It is hoped that this comprehensive treatment-assessment methodology will have significant impact in facilitating more sophisticated implementation of 3D cell culture models in preclinical screening by providing a level of content and biological relevance impossible with existing assays in monolayer cell culture in order to focus therapeutic targets and strategies before costly and tedious testing in animal models.
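As a concrete, if simplified, example of such a readout (hypothetical two-channel input; the published pipeline is more involved), per-nodule size and viability can be pulled from segmented live/dead fluorescence images:

```python
import numpy as np
from skimage import filters, measure

# Hypothetical helper: segment nodules from the summed live/dead channels and
# keep per-nodule size and viability -- the nodule-by-nodule information that a
# single plate-reader number discards.
def nodule_viability(live_img, dead_img):
    total = live_img + dead_img
    mask = total > filters.threshold_otsu(total)
    nodules = []
    for region in measure.regionprops(measure.label(mask)):
        rr, cc = region.coords[:, 0], region.coords[:, 1]
        live, dead = live_img[rr, cc].sum(), dead_img[rr, cc].sum()
        nodules.append({"area_px": region.area,
                        "viability": float(live / (live + dead + 1e-9))})
    return nodules   # feed into size-dependent dose-response statistics
```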

Using two different cell lines, and as depicted in Figure 1, we adopt an ECM overlay method pioneered originally for 3D breast cancer models23 and developed in our previous studies to model micrometastatic ovarian cancer19,24. This system leads to the formation of adherent multicellular 3D acini in approximately the same focal plane atop a laminin-rich ECM bed, implemented here in glass-bottom multiwell imaging plates for automated microscopy. The 3D nodules resulting from restoration of ECM signaling5,8 are heterogeneous in size24, in contrast to other 3D spheroid methods, such as rotary or hanging drop cultures10, in which cells are driven to aggregate into uniformly sized spheroids due to the lack of an appropriate substrate to adhere to. Although the latter processes are also biologically relevant, it is the adherent tumour populations characteristic of advanced metastatic disease that are more likely to be managed with medical oncology, and these are the focus of therapeutic evaluation herein. The heterogeneity in 3D structures formed via ECM overlay is validated here by endoscopic imaging of in vivo tumours in orthotopic xenografts derived from the same cells (OVCAR-5).

 

Figure 1: A simplified schematic flow chart of imaging-based quantitative in situ treatment assessment (qVISTA) in 3D cell culture.

(This figure was prepared in Adobe Illustrator® software by MD Glidden, JP Celli and I Rizvi). A detailed breakdown of the image processing (Step 4) is provided in Supplemental Figure 1.

A critical component of the imaging-based strategy introduced here is the rational tradeoff of image-acquisition parameters for field of view, depth of field and optical resolution, and the development of image processing routines for appropriate removal of background, scaling of fluorescence signals from more than one channel, and reliable segmentation of nodules. In order to obtain depth-resolved 3D structures for each nodule at sub-micron lateral resolution using a laser-scanning confocal system, it would require ~40 hours (approximately 100 fields for each well with a 20× objective, times 1 minute per field for a coarse z-stack, times 24 wells) to image a single plate with the same coverage achieved in this study. Even if the resources were available to devote to such time-intensive image acquisition, not to mention the processing, the optical properties of the fluorophores would change during the required time frame for image acquisition, even with environmental controls to maintain culture viability during such extended imaging. The approach developed here, with a mind toward adaptation into high-throughput screening, provides a rational balance of speed, requiring less than 30 minutes per plate, and statistical rigour, providing images of thousands of nodules in this time, as required for the high-content analysis developed in this study. These parameters can be further optimized for specific scenarios. For example, we obtain the same number of images in a 96-well plate as for a 24-well plate by acquiring only a single field from each well, rather than 4 stitched fields. This quadruples the number of conditions assayed in a single run, at the expense of the number of nodules per condition, and therefore of the ability to obtain statistical data sets for size-dependent response, Dfrac, and other segmentation-dependent numerical readouts.

 

We envision that the system for high-content interrogation of therapeutic response in 3D cell culture could have widespread impact in multiple arenas from basic research to large scale drug development campaigns. As such, the treatment assessment methodology presented here does not require extraordinary optical instrumentation or computational resources, making it widely accessible to any research laboratory with an inverted fluorescence microscope and modestly equipped personal computer. And although we have focused here on cancer models, the methodology is broadly applicable to quantitative evaluation of other tissue models in regenerative medicine and tissue engineering. While this analysis toolbox could have impact in facilitating the implementation of in vitro 3D models in preclinical treatment evaluation in smaller academic laboratories, it could also be adopted as part of the screening pipeline in large pharma settings. With the implementation of appropriate temperature controls to handle basement membranes in current robotic liquid handling systems, our analyses could be used in ultra high-throughput screening. In addition to removing non-efficacious potential candidate drugs earlier in the pipeline, this approach could also yield the additional economic advantage of minimizing the use of costly time-intensive animal models through better estimates of dose range, sequence and schedule for combination regimens.

 

Microscope Uses AI to Find Cancer Cells More Efficiently

Thu, 04/14/2016 – by Shaun Mason

http://www.mdtmag.com/news/2016/04/microscope-uses-ai-find-cancer-cells-more-efficiently

Scientists at the California NanoSystems Institute at UCLA have developed a new technique for identifying cancer cells in blood samples faster and more accurately than the current standard methods.

In one common approach to testing for cancer, doctors add biochemicals to blood samples. Those biochemicals attach biological “labels” to the cancer cells, and those labels enable instruments to detect and identify them. However, the biochemicals can damage the cells and render the samples unusable for future analyses.

There are other current techniques that don’t use labeling but can be inaccurate because they identify cancer cells based only on one physical characteristic.

The new technique images cells without destroying them and can identify 16 physical characteristics — including size, granularity and biomass — instead of just one. It combines two components that were invented at UCLA: a photonic time stretch microscope, which is capable of quickly imaging cells in blood samples, and a deep learning computer program that identifies cancer cells with over 95 percent accuracy.

Deep learning is a form of artificial intelligence that uses complex algorithms to extract meaning from data with the goal of achieving accurate decision making.

The study, which was published in the journal Nature Scientific Reports, was led by Bahram Jalali, professor and Northrop-Grumman Optoelectronics Chair in electrical engineering; Claire Lifan Chen, a UCLA doctoral student; and Ata Mahjoubfar, a UCLA postdoctoral fellow.

Photonic time stretch was invented by Jalali, and he holds a patent for the technology. The new microscope is just one of many possible applications; it works by taking pictures of flowing blood cells using laser bursts in the way that a camera uses a flash. This process happens so quickly — in nanoseconds, or billionths of a second — that the images would be too weak to be detected and too fast to be digitized by normal instrumentation.

The new microscope overcomes those challenges using specially designed optics that boost the clarity of the images and simultaneously slow them enough to be detected and digitized at a rate of 36 million images per second. It then uses deep learning to distinguish cancer cells from healthy white blood cells.

“Each frame is slowed down in time and optically amplified so it can be digitized,” Mahjoubfar said. “This lets us perform fast cell imaging that the artificial intelligence component can distinguish.”

Normally, taking pictures in such minuscule periods of time would require intense illumination, which could destroy live cells. The UCLA approach also eliminates that problem.

“The photonic time stretch technique allows us to identify rogue cells in a short time with low-level illumination,” Chen said.

The researchers write in the paper that the system could lead to data-driven diagnoses by cells’ physical characteristics, which could allow quicker and earlier diagnoses of cancer, for example, and better understanding of the tumor-specific gene expression in cells, which could facilitate new treatments for disease. See also: http://www.nature.com/article-assets/npg/srep/2016/160315/srep21471/images_hires/m685/srep21471-f1.jpg

Chen, C. L. et al. Deep Learning in Label-free Cell Classification. Sci. Rep. 6, 21471; http://dx.doi.org/10.1038/srep21471

 

 

Read Full Post »


Colon cancer and organoids

Larry H. Bernstein, MD, FCAP, Curator

LPBI


Guts and Glory

An open mind and collaborative spirit have taken Hans Clevers on a journey from medicine to developmental biology, gastroenterology, cancer, and stem cells.

By Anna Azvolinsky    http://www.the-scientist.com/?articles.view/articleNo/45580/title/Guts-and-Glory

“I have had to talk a lot about my science recently and it’s made me think about how science works,” says Hans Clevers. “Scientists are trained to think science is driven by hypotheses, but for [my lab], hypothesis-driven research has never worked. Instead, it has been about trying to be as open-minded as possible—which is not natural for our brains,” adds the Utrecht University molecular genetics professor. “The human mind is such that it tries to prove it’s right, so pursuing a hypothesis can result in disaster. My advice to my own team and others is to not preformulate an answer to a scientific question, but just observe and never be afraid of the unknown. What has worked well for us is to keep an open mind and do the experiments. And find a collaborator if it is outside our niche.”

“One thing I have learned is that hypothesis-driven research tends not to be productive when you are in an unknown territory.”

Clevers entered medical school at Utrecht University in The Netherlands in 1978 while simultaneously pursuing a master’s degree in biology. Drawn to working with people in the clinic, Clevers had a training position in pediatrics lined up after medical school, but then mentors persuaded him to spend an additional year converting the master’s degree to a PhD in immunology. “At the end of that year, looking back, I got more satisfaction from the research than from seeing patients.” Clevers also had an aptitude for benchwork, publishing four papers from his PhD year. “They were all projects I had made up myself. The department didn’t do the kind of research I was doing,” he says. “Now that I look back, it’s surprising that an inexperienced PhD student could come up with a project and publish independently.”

Clevers studied T- and B-cell signaling; he set up assays to visualize calcium ion flux and demonstrated that the ions act as messengers to activate human B cells, signaling through antibodies on the cell surface. “As soon as the experiment worked, I got T cells from the lab next door and did the same experiment. That was my strategy: as soon as something worked, I would apply it elsewhere and didn’t stop just because I was a B-cell biologist and not a T-cell biologist. What I learned then, that I have continued to benefit from, is that a lot of scientists tend to adhere to a niche. They cling to these niches and are not that flexible. You think scientists are, but really most are not.”

Here, Clevers talks about promoting a collaborative spirit in research, the art of doing a pilot experiment, and growing miniature organs in a dish.

Clevers Creates

Re-search? Clevers was born in Eindhoven, in the south of The Netherlands. The town was headquarters to Philips Electronics, where his father worked as a businessman, and his mother took care of Clevers and his three brothers. Clevers did well in school but his passion was sports, especially tennis and field hockey, “a big thing in Holland.” Then in 1975, at age 18, he moved to Utrecht University, where he entered an intensive, biology-focused program. “I knew I wanted to be a biology researcher since I was young. In Dutch, the word for research is ‘onderzoek’ and I knew the English word ‘research’ and had wondered why there was the ‘re’ in the word, because I wanted to search but I didn’t want to do re-search—to find what someone else had already found.”

Opportunity to travel. “I was very disappointed in my biology studies, which were old-fashioned and descriptive,” says Clevers. He thought medicine might be more interesting and enrolled in medical school while still pursuing a master’s degree in biology at Utrecht. For the master’s, Clevers had to do three rotations. He spent a year at the International Laboratory for Research on Animal Diseases (ILRAD) in Nairobi, Kenya, and six months in Bethesda, Maryland, at the National Institutes of Health. “Holland is really small, so everyone travels.” Clevers saw those two rotations more as travel explorations. In Nairobi, he went on safaris and explored the country in Land Rovers borrowed from the institute. While in Maryland in 1980, Clevers—with the consent of his advisor, who thought it was a good idea for him to get a feel for the U.S.—flew to Portland, Oregon, and drove back to Boston with a musician friend along the Canadian border. He met the fiancé of political activist and academic Angela Davis in New York City and even stayed in their empty apartment there.

Life and lab lessons. Back in Holland, Clevers joined Rudolf Eugène Ballieux’s lab at Utrecht University to pursue his PhD, for which he studied immune cell signaling. “I didn’t learn much science from him, but I learned that you always have to create trust and to trust people around you. This became a major theme in my own lab. We don’t distrust journals or reviewers or collaborators. We trust everyone and we share. There will be people who take advantage, but there have only been a few of those. So I learned from Ballieux to give everyone maximum trust and then change this strategy only if they fail that trust. We collaborate easily because we give out everything and we also easily get reagents and tools that we may need. It’s been valuable to me in my career. And it is fun!”

Clevers Concentrates

On a mission. “Once I decided to become a scientist, I knew I needed to train seriously. Up to that point, I was totally self-trained.” From an extensive reading of the immunology literature, Clevers became interested in how T cells recognize antigens, and headed off to spend a postdoc studying the problem in Cox Terhorst’s lab at Dana-Farber Cancer Institute in Boston. “Immunology was young, but it was very exciting and there was a lot to discover. I became a professional scientist there and experienced how tough science is.” In 1988, Clevers cloned and characterized the gene for a component of the T-cell receptor (TCR) called CD3-epsilon, which binds antigen and activates intracellular signaling pathways.

On the fast track in Holland. Clevers returned to Utrecht University in 1989 as a professor of immunology. Within one month of setting up his lab, he had two graduate students and a technician, and the lab had cloned the first T cell–specific transcription factor, which they called TCF-1, in human T cells. When his former thesis advisor retired, Clevers was asked, at age 33, to become head of the immunology department. While the appointment was high-risk for him and for the department, Clevers says, he was chosen because he was good at multitasking and because he got along well with everyone.

Problem-solving strategy. “My strategy in research has always been opportunistic. One thing I have learned is that hypothesis-driven research tends not to be productive when you are in an unknown territory. I think there is an art to doing pilot experiments. So we have always just set up systems in which something happens and then you try and try things until a pattern appears and maybe you formulate a small hypothesis. But as soon as it turns out not to be exactly right, you abandon it. It’s a very open-minded type of research where you question whether what you are seeing is a real phenomenon without spending a year on doing all of the proper controls.”

Trial and error. Clevers’s lab found that while TCF-1 bound to DNA, it did not alter gene expression, despite the researchers’ tinkering with promoter and enhancer assays. “For about five years this was a problem. My first PhD students were leaving and they thought the whole TCF project was a failure,” says Clevers. His lab meanwhile cloned TCF homologs from several model organisms and made many reagents, including antibodies against these homologs. To try to figure out the function of TCF-1, the lab performed a two-hybrid screen and identified components of the Wnt signaling pathway as binding partners of TCF-1. “We started to read about Wnt and realized that you study Wnt not in T cells but in frogs and flies, so we rapidly transformed into a developmental biology lab. We showed that we held the key for a major issue in developmental biology, the final protein in the Wnt cascade: TCF-1 binds β-catenin when β-catenin becomes available and activates transcription.” In 1996, Clevers published the mechanism of how the TCF-1 homolog in Xenopus embryos, called XTcf-3, is integrated into the Wnt signaling pathway.

Clevers Catapults

COURTESY OF HANS CLEVERS AND JEROEN HUIJBEN, NYMUS3D

Crypt building and colon cancer.

Clevers next collaborated with Bert Vogelstein’s lab at Johns Hopkins, linking TCF to Wnt signaling in colon cancer. In colon cancer cell lines with mutated forms of the tumor suppressor gene APC, the APC protein can’t rein in β-catenin, which accumulates in the cytoplasm, forms a complex with TCF-4 (later renamed TCF7L2) in the nucleus, and can initiate colon cancer by changing gene expression. Then, the lab showed that Wnt signaling is necessary for self-renewal of adult stem cells, as mice missing TCF-4 do not have intestinal crypts, the site in the gut where stem cells reside. “This was the first time Wnt was shown to play a role in adults, not just during development, and to be crucial for adult stem cell maintenance,” says Clevers. “Then, when I started thinking about studying the gut, I realized it was by far the best way to study stem cells. And I also realized that almost no one in the world was studying the healthy gut. Almost everyone who researched the gut was studying a disease.” The main advantages of the murine model are rapid cell turnover and the presence of millions of stereotypic crypts throughout the entire intestine.

Against the grain. In 2007, Nick Barker, a senior scientist in the Clevers lab, identified the Wnt target gene Lgr5 as a unique marker of adult stem cells in several epithelial organs, including the intestine, hair follicle, and stomach. In the intestine, the gene codes for a plasma membrane protein on crypt stem cells that enable the intestinal epithelium to self-renew, but can also give rise to adenomas of the gut. Upon making mice with adult stem cell populations tagged with a fluorescent Lgr5-binding marker, the lab helped to overturn assumptions that “stem cells are rare, impossible to find, quiescent, and divide asymmetrically.”

On to organoids. Once the lab could identify adult stem cells within the crypts of the gut, postdoc Toshiro Sato discovered that a single stem cell, in the presence of Matrigel and just three growth factors, could generate a miniature crypt structure—what is now called an organoid. “Toshi is very Japanese and doesn’t always talk much,” says Clevers. “One day I had asked him, while he was at the microscope, if the gut stem cells were growing, and he said, ‘Yes.’ Then I looked under the microscope and saw the beautiful structures and said, ‘Why didn’t you tell me?’ and he said, ‘You didn’t ask.’ For three months he had been growing them!” The lab has since also grown mini-pancreases, -livers, -stomachs, and many other mini-organs.

Tumor Organoids. Clevers showed that organoids can be grown from diseased patients’ samples, a technique that could be used in the future to screen drugs. The lab is also building biobanks of organoids derived from tumor samples and adjacent normal tissue, which could be especially useful for monitoring responses to chemotherapies. “It’s a similar approach to getting a bacterium cultured to identify which antibiotic to take. The most basic goal is not to give a toxic chemotherapy to a patient who will not respond anyway,” says Clevers. “Tumor organoids grow slower than healthy organoids, which seems counterintuitive, but with cancer cells, often they try to divide and often things go wrong because they don’t have normal numbers of chromosomes and [have] lots of mutations. So, I am not yet convinced that this approach will work for every patient. Sometimes, the tumor organoids may just grow too slowly.”

Selective memory. “When I received the Breakthrough Prize in 2013, I invited everyone who has ever worked with me to Amsterdam, about 100 people, and the lab organized a symposium where many of the researchers gave an account of what they had done in the lab,” says Clevers. “In my experience, my lab has been a straight line from cloning TCF-1 to where we are now. But when you hear them talk it was ‘Hans told me to try this and stop this’ and ‘Half of our knockout mice were never published,’ and I realized that the lab is an endless list of failures,” Clevers recalls. “The one thing we did well is that we would start something and, as soon as it didn’t look very good, we would stop it and try something else. And the few times when we seemed to hit gold, I would regroup my entire lab. We just tried a lot of things, and the 10 percent of what worked, those are the things I remember.”

Greatest Hits

  • Cloned the first T cell–specific transcription factor, TCF-1, and identified homologous genes in model organisms including the fruit fly, frog, and worm
  • Found that transcriptional activation by the abundant β-catenin/TCF-4 [TCF7L2] complex drives cancer initiation in colon cells missing the tumor suppressor protein APC
  • First to extend the role of Wnt signaling from developmental biology to adult stem cells by showing that the two Wnt pathway transcription factors, TCF-1 and TCF-4, are necessary for maintaining the stem cell compartments in the thymus and in the crypt structures of the small intestine, respectively
  • Identified Lgr5 as an adult stem cell marker of many epithelial stem cells including those of the colon, small intestine, hair follicle, and stomach, and found that Lgr5-expressing crypt cells in the small intestine divide constantly and symmetrically, disproving the common belief that stem cell division is asymmetrical and uncommon
  • Established a three-dimensional, stable model, the “organoid,” grown from adult stem cells, to study diseased patients’ tissues from the gut, stomach, liver, and prostate
 Regenerative Medicine Comes of Age   
“Anti-Aging Medicine” Sounds Vaguely Disreputable, So Serious Scientists Prefer to Speak of “Regenerative Medicine”
  • Induced pluripotent stem cells (iPSCs) and genome-editing techniques have facilitated manipulation of living organisms in innumerable ways at the cellular and genetic levels, respectively, and will underpin many aspects of regenerative medicine as it continues to evolve.

    An attitudinal change is also occurring. Experts in regenerative medicine have increasingly begun to embrace the view that comprehensively repairing the damage of aging is a practical and feasible goal.

    A notable proponent of this view is Aubrey de Grey, Ph.D., a biomedical gerontologist who has pioneered a regenerative medicine approach called Strategies for Engineered Negligible Senescence (SENS). He works to “develop, promote, and ensure widespread access to regenerative medicine solutions to the disabilities and diseases of aging” as CSO and co-founder of the SENS Research Foundation. He is also the editor-in-chief of Rejuvenation Research, published by Mary Ann Liebert.

    Dr. de Grey points out that stem cell treatments for age-related conditions such as Parkinson’s are already in clinical trials, and immune therapies to remove molecular waste products in the extracellular space, such as amyloid in Alzheimer’s, have succeeded in such trials. Recently, there has been progress in animal models in removing toxic cells that the body is failing to kill. The most encouraging work is in cancer immunotherapy, which is rapidly advancing after decades in the doldrums.

    Many damage-repair strategies are at an early stage of research. Although these strategies look promising, they are handicapped by a lack of funding. If that does not change soon, the scientific community is at risk of failing to capitalize on the relevant technological advances.

    Regenerative medicine has moved beyond boutique applications. In degenerative disease, cells lose their function or suffer elimination because they harbor genetic defects. iPSC therapies have the potential to be curative, replacing the defective cells and eliminating symptoms in their entirety. One of the biggest hurdles to commercialization of iPSC therapies is manufacturing.

  • Building Stem Cell Factories

    Cellular Dynamics International (CDI) has been developing clinically compatible induced pluripotent stem cells (iPSCs) and iPSC-derived human retinal pigment epithelial (RPE) cells. CDI’s MyCell Retinal Pigment Epithelial Cells are part of a possible therapy for macular degeneration. They can be grown on bioengineered, nanofibrous scaffolds, and then the RPE cell–enriched scaffolds can be transplanted into patients’ eyes. In this pseudo-colored image, RPE cells are shown growing over the nanofibers. Each cell has thousands of “tongue” and “rod” protrusions that could naturally support rod and cone cells in the eye.

    “Now that an infrastructure is being developed to make unlimited cells for the tools business, new opportunities are being created. These cells can be employed in a therapeutic context, and they can be used to understand the efficacy and safety of drugs,” asserts Chris Parker, executive vice president and CBO, Cellular Dynamics International (CDI). “CDI has the capability to make a lot of cells from a single iPSC line that represents one person (a capability termed scale-up) as well as the capability to do it in parallel for multiple individuals (a capability termed scale-out).”

    Minimally manipulated adult stem cells have progressed relatively quickly to the clinic. In this scenario, cells are taken out of the body, expanded unchanged, then reintroduced. More preclinical rigor applies to potential iPSC therapy. In this case, hematopoietic blood cells are used to make stem cells, which are manufactured into the cell type of interest before reintroduction. Preclinical tests must demonstrate that iPSC-derived cells perform as intended, are safe, and possess little or no off-target activity.

    For example, CDI developed a Parkinsonian model in which iPSC-derived dopaminergic neurons were introduced to primates. The model showed engraftment and innervation, and it appeared to be free of proliferative stem cells.

    • “You will see iPSCs first used in clinical trials as a surrogate to understand efficacy and safety,” notes Mr. Parker. “In an ongoing drug-repurposing trial with GlaxoSmithKline and Harvard University, iPSC-derived motor neurons will be produced from patients with amyotrophic lateral sclerosis and tested in parallel with the drug.” CDI has three cell-therapy programs in their commercialization pipeline focusing on macular degeneration, Parkinson’s disease, and postmyocardial infarction.

    • Keeping an Eye on Aging Eyes

      The California Project to Cure Blindness is evaluating a stem cell–based treatment strategy for age-related macular degeneration. The strategy involves growing retinal pigment epithelium (RPE) cells on a biostable, synthetic scaffold, then implanting the RPE cell–enriched scaffold to replace RPE cells that are dying or dysfunctional. One of the project’s directors, Dennis Clegg, Ph.D., a researcher at the University of California, Santa Barbara, provided this image, which shows stem cell–derived RPE cells. Cell borders are green, and nuclei are red.

      The eye has multiple advantages over other organ systems for regenerative medicine. Advanced surgical methods can access the back of the eye, noninvasive imaging methods can follow the transplanted cells, good outcome parameters exist, and relatively few cells are needed.

      These advantages have attracted many groups to tackle ocular disease, in particular age-related macular degeneration, the leading cause of blindness in the elderly in the United States. Most cases of age-related macular degeneration are thought to be due to the death or dysfunction of cells in the retinal pigment epithelium (RPE). RPE cells are crucial support cells for the photoreceptors, the rods and cones. When RPE cells stop working or die, the photoreceptors die and a vision deficit results.

      A regenerated and restored RPE might prevent the irreversible loss of photoreceptors, possibly via the transplantation of functionally polarized RPE monolayers derived from human embryonic stem cells. This approach is being explored by the California Project to Cure Blindness, a collaborative effort involving the University of Southern California (USC), the University of California, Santa Barbara (UCSB), the California Institute of Technology, City of Hope, and Regenerative Patch Technologies.

      The project, which is funded by the California Institute for Regenerative Medicine (CIRM), started in 2010, and an IND was filed in early 2015. Clinical trial recruitment has begun.

      One of the project’s leaders is Dennis Clegg, Ph.D., Wilcox Family Chair in BioMedicine, UCSB. His laboratory developed the protocol to turn undifferentiated H9 embryonic stem cells into a homogenous population of RPE cells.

      “These are not easy experiments,” remarks Dr. Clegg. “Figuring out the biology and how to make the cell of interest is a challenge that everyone in regenerative medicine faces. About 100,000 RPE cells will be grown as a sheet on a 3 × 5 mm biostable, synthetic scaffold, and then implanted in the patients to replace the cells that are dying or dysfunctional. The idea is to preserve the photoreceptors and to halt disease progression.”

      Moving therapies such as this RPE treatment from concept to clinic is a huge team effort and requires various kinds of expertise. Besides benefitting from Dr. Clegg’s contribution, the RPE project incorporates the work of Mark Humayun, M.D., Ph.D., co-director of the USC Eye Institute and director of the USC Institute for Biomedical Therapeutics and recipient of the National Medal of Technology and Innovation, and David Hinton, Ph.D., a researcher at USC who has studied how activated RPE cells can alter the local retinal microenvironment.

Read Full Post »


3D revolution and tissue repair

Curator: Larry H. Bernstein, MD, FCAP


Berkeley Lab captures first high-res 3D images of DNA segments

DNA segments could serve as building blocks for molecular computer memory and electronic devices, as nanoscale drug-delivery systems, and as markers for biological research and for imaging disease-relevant proteins

In a Berkeley Lab-led study, flexible double-helix DNA segments (purple, with green DNA models) connected to gold nanoparticles (yellow) are revealed from the 3D density maps reconstructed from individual samples using a Berkeley Lab-developed technique called individual-particle electron tomography (IPET). Projections of the structures are shown in the green background grid. (credit: Berkeley Lab)

An international research team working at the Lawrence Berkeley National Laboratory (Berkeley Lab) has captured the first high-resolution 3D images of double-helix DNA segments attached at either end to gold nanoparticles — which could act as building blocks for molecular computer memory and electronic devices (see World’s smallest electronic diode made from single DNA molecule), nanoscale drug-delivery systems, and as markers for biological research and for imaging disease-relevant proteins.

The researchers connected coiled DNA strands between polygon-shaped gold nanoparticles and then reconstructed 3D images, using a cutting-edge electron microscope technique coupled with a protein-staining process and sophisticated software that provided structural details at the scale of about 2 nanometers.

“We had no idea about what the double-strand DNA would look like between the gold nanoparticles,” said Gang “Gary” Ren, a Berkeley Lab scientist who led the research. “This is the first time for directly visualizing an individual double-strand DNA segment in 3D,” he said.

The results were published in an open-access paper in the March 30 edition of Nature Communications.

The method developed by this team, called individual-particle electron tomography (IPET), had earlier captured the 3-D structure of a single protein that plays a key role in human cholesterol metabolism. By grabbing 2D images of an object from different angles, the technique allows researchers to assemble a 3D image of that object.
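The underlying tomographic idea can be caricatured in 2D (a toy sketch; IPET itself works in 3D with per-particle iterative refinement): each projection is smeared back along its viewing angle and the smears are summed, so density present in every view reinforces.

```python
import numpy as np
from scipy.ndimage import rotate

# Toy unfiltered backprojection in 2D: projections[i] is a 1D projection of the
# object recorded at angles_deg[i]. Real IPET reconstructions add filtering and
# iterative per-particle refinement on top of this principle.
def backproject(projections, angles_deg):
    size = len(projections[0])
    recon = np.zeros((size, size))
    for proj, angle in zip(projections, angles_deg):
        smear = np.tile(proj, (size, 1))                 # constant along the beam
        recon += rotate(smear, angle, reshape=False, order=1)
    return recon / len(angles_deg)
```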

The team has also used the technique to uncover the fluctuation of another well-known flexible protein, human immunoglobulin 1, which plays a role in the human immune system.

https://youtu.be/lQrbmg9ry90
Berkeley Lab | 3-D Reconstructions of Double strand DNA and Gold Nanoparticle Structures

For this new study of DNA nanostructures, Ren used an electron-beam study technique called cryo-electron microscopy (cryo-EM) to examine frozen DNA-nanogold samples, and used IPET to reconstruct 3-D images from samples stained with heavy metal salts. The team also used molecular simulation tools to test the natural shape variations (“conformations”) in the samples, and compared these simulated shapes with observations.

First visualization of DNA strand dynamics without the distortions of X-ray crystallography

Ren explained that the naturally flexible dynamics of samples, like a man waving his arms, cannot be fully detailed by any method that uses an average of many observations.

A popular way to view the nanoscale structural details of delicate biological samples is to form them into crystals and zap them with X-rays, but that destroys their natural shape, especially for the DNA-nanogold samples in this study, which the scientists say are incredibly challenging to crystallize. Other common research techniques may require a collection of thousands of near-identical objects, viewed with an electron microscope, to compile a single, averaged 3D structure. But an averaged 3D image may not adequately show the natural shape fluctuations of a given object.

The samples in the latest experiment were formed from individual polygon gold nanostructures, measuring about 5 nanometers across, connected to single DNA-segment strands with 84 base pairs. Base pairs are basic chemical building blocks that give DNA its structure. Each individual DNA segment and gold nanoparticle naturally zipped together with a partner to form the double-stranded DNA segment with a gold particle at either end.

https://youtu.be/RDOpgj62PLU
Berkeley Lab | These views compare the various shape fluctuations obtained from different samples of the same type of double-helix DNA segment (DNA renderings in green, 3D reconstructions in purple) connected to gold nanoparticles (yellow).

The samples were flash-frozen to preserve their structure for study with cryo-EM imaging. The distance between the two gold nanoparticles in individual samples varied from 20 to 30 nanometers, based on different shapes observed in the DNA segments.

Researchers used a cryo-electron microscope at Berkeley Lab’s Molecular Foundry for this study. They collected a series of tilted images of the stained objects, and reconstructed 14 electron-density maps that detailed the structure of individual samples using the IPET technique.

Sub-nanometer images next

Ren said that the next step will be to work to improve the resolution to the sub-nanometer scale.

“Even in this current state we begin to see 3-D structures at 1- to 2-nanometer resolution,” he said. “Through better instrumentation and improved computational algorithms, it would be promising to push the resolution to that of visualizing a single DNA helix within an individual protein.”

In future studies, researchers could attempt to improve the imaging resolution for complex structures that incorporate more DNA segments as a sort of “DNA origami,” Ren said. Researchers hope to build and better characterize nanoscale molecular devices using DNA segments that can, for example, store and deliver drugs to targeted areas in the body.

“DNA is easy to program, synthesize and replicate, so it can be used as a special material to quickly self-assemble into nanostructures and to guide the operation of molecular-scale devices,” he said. “Our current study is just a proof of concept for imaging these kinds of molecular devices’ structures.”

The team included researchers at UC Berkeley, the Kavli Energy NanoSciences Institute at Berkeley Lab and UC Berkeley, and Xi’an Jiaotong University in China. This work was supported by the National Science Foundation, DOE Office of Basic Energy Sciences, National Institutes of Health, the National Natural Science Foundation of China, Xi’an Jiaotong University in China, and the Ministry of Science and Technology in China.


Abstract of Three-dimensional structural dynamics and fluctuations of DNA-nanogold conjugates by individual-particle electron tomography

DNA base pairing has been used for many years to direct the arrangement of inorganic nanocrystals into small groupings and arrays with tailored optical and electrical properties. The control of DNA-mediated assembly depends crucially on a better understanding of three-dimensional structure of DNA-nanocrystal-hybridized building blocks. Existing techniques do not allow for structural determination of these flexible and heterogeneous samples. Here we report cryo-electron microscopy and negative-staining electron tomography approaches to image, and three-dimensionally reconstruct a single DNA-nanogold conjugate, an 84-bp double-stranded DNA with two 5-nm nanogold particles for potential substrates in plasmon-coupling experiments. By individual-particle electron tomography reconstruction, we obtain 14 density maps at ~2-nm resolution. Using these maps as constraints, we derive 14 conformations of dsDNA by molecular dynamics simulations. The conformational variation is consistent with that from liquid solution, suggesting that individual-particle electron tomography could be an expected approach to study DNA-assembling and flexible protein structure and dynamics.

 

World’s smallest electronic diode made from single DNA molecule

Electronic components 1,000 times smaller than with silicon may be possible
http://www.kurzweilai.net/worlds-smallest-electronic-diode-made-from-single-dna-molecule
By inserting a small “coralyne” molecule into DNA, scientists were able to create a single-molecule diode (connected here by two gold electrodes), which can be used as an active element in future nanoscale circuits. The diode circuit symbol is shown on the left. (credit: University of Georgia and Ben-Gurion University)

Nanoscale electronic components can be made from single DNA molecules, as researchers at the University of Georgia and at Ben-Gurion University in Israel have demonstrated, using a single molecule of DNA to create the world’s smallest diode.

DNA double helix with base pairs (credit: National Human Genome Research Institute)

A diode is a component vital to electronic devices that allows current to flow in one direction but prevents its flow in the other direction. The development could help stimulate development of DNA components for molecular electronics.

As noted in an open-access Nature Chemistry paper published this week, the researchers designed an 11-base-pair (bp) DNA molecule and inserted a small molecule named coralyne into the DNA.*

They found, surprisingly, that this caused the current flowing through the DNA to be 15 times stronger for negative voltages than for positive voltages, a necessary feature of a diode.
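
As an aside on the arithmetic: the figure of merit here is the rectification ratio, the current in the favoured bias direction divided by the current at the same magnitude of opposite bias. A minimal sketch with made-up currents, chosen only so the ratio lands near the reported ~15 at 1.1 V:

```python
import numpy as np

# Illustrative I-V data (amperes); the real measurements are in the
# Nature Chemistry paper and are not reproduced here.
voltages   = np.array([0.5, 0.8, 1.1])            # bias magnitudes, volts
i_negative = np.array([2.0e-10, 6e-10, 1.5e-9])   # current at -V (favoured direction)
i_positive = np.array([2.0e-11, 5e-11, 1.0e-10])  # current at +V

rectification_ratio = i_negative / i_positive
print(dict(zip(voltages.tolist(), rectification_ratio.round(1).tolist())))
# -> {0.5: 10.0, 0.8: 12.0, 1.1: 15.0}
```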

Electronic elements 1,000 times smaller than current components

“Our discovery can lead to progress in the design and construction of nanoscale electronic elements that are at least 1,000 times smaller than current components,” says the study’s lead author, Bingqian Xu, an associate professor in the UGA College of Engineering and an adjunct professor in chemistry and physics.

The research team plans to enhance the performance of the molecular diode and construct additional molecular devices, which may include a transistor (similar to a two-layer diode, but with one additional layer).

A theoretical model developed by Yonatan Dubi of Ben-Gurion University indicated that the diode-like behavior of DNA originates from the bias-voltage-induced breaking of spatial symmetry inside the DNA molecule after the coralyne is inserted.

The research is supported by the National Science Foundation.

*“We prepared the DNA–coralyne complex by specifically intercalating two coralyne molecules into a custom-designed 11-base-pair (bp) DNA molecule (5′-CGCGAAACGCG-3′) containing three mismatched A–A base pairs at the centre,” according to the authors.
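
That design is easy to verify: two copies of the strand, read antiparallel to one another, pair perfectly at the GC-rich ends but meet A-against-A at the three central positions. A quick self-check (the sequence is from the paper; the helper code is ours):

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

strand = "CGCGAAACGCG"   # 5'->3', as published
partner = strand[::-1]   # a second copy of the same strand, read antiparallel

mismatches = [i for i, (a, b) in enumerate(zip(strand, partner))
              if COMPLEMENT[a] != b]
print(mismatches)  # -> [4, 5, 6]: the three A-A mismatches at the centre
```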

UPDATE April 6, 2016 to clarify the coralyne intercalation (insertion) into the DNA molecule.


Abstract of Molecular rectifier composed of DNA with high rectification ratio enabled by intercalation

The predictability, diversity and programmability of DNA make it a leading candidate for the design of functional electronic devices that use single molecules, yet its electron transport properties have not been fully elucidated. This is primarily because of a poor understanding of how the structure of DNA determines its electron transport. Here, we demonstrate a DNA-based molecular rectifier constructed by site-specific intercalation of small molecules (coralyne) into a custom-designed 11-base-pair DNA duplex. Measured current–voltage curves of the DNA–coralyne molecular junction show unexpectedly large rectification with a rectification ratio of about 15 at 1.1 V, a counter-intuitive finding considering the seemingly symmetrical molecular structure of the junction. A non-equilibrium Green’s function-based model—parameterized by density functional theory calculations—revealed that the coralyne-induced spatial asymmetry in the electron state distribution caused the observed rectification. This inherent asymmetry leads to changes in the coupling of the molecular HOMO−1 level to the electrodes when an external voltage is applied, resulting in an asymmetric change in transmission.

 

A stem-cell repair system that can regenerate any kind of human tissue damaged by injury, disease, or aging; human trials next year
http://www.kurzweilai.net/a-stem-cell-repair-system-that-can-regenerate-any-kind-of-human-tissue

http://www.kurzweilai.net/images/spinal_disc_regeneration.jpg

UNSW researchers say the therapy has enormous potential for treating spinal disc injury and joint and muscle degeneration and could also speed up recovery following complex surgeries where bones and joints need to integrate with the body (credit: UNSW TV)

A stem cell therapy system capable of regenerating any human tissue damaged by injury, disease, or aging could be available within a few years, say University of New South Wales (UNSW Australia) researchers.

Their new repair system*, similar to the method used by salamanders to regenerate limbs, could be used to repair everything from spinal discs to bone fractures, and could transform current treatment approaches to regenerative medicine.

The UNSW-led research was published this week in the Proceedings of the National Academy of Sciences journal.

Reprogramming bone and fat cells

The system reprograms bone and fat cells into induced multipotent stem cells (iMS), which can regenerate multiple tissue types; it has been successfully demonstrated in mice, according to study lead author, haematologist, and UNSW Associate Professor John Pimanda.

“This technique is a significant advance on many of the current unproven stem cell therapies, which have shown little or no objective evidence they contribute directly to new tissue formation,” Pimanda said. “We have taken bone and fat cells, switched off their memory and converted them into stem cells so they can repair different cell types once they are put back inside the body.”

“We are currently assessing whether adult human fat cells reprogrammed into iMS cells can safely repair damaged tissue in mice, with human trials expected to begin in late 2017.”

http://www.kurzweilai.net/images/UNSW-stem-cell-repair.jpg

Advantages over stem-cell types

There are different types of stem cells including embryonic stem (ES) cells, which during embryonic development generate every type of cell in the human body, and adult stem cells, which are tissue-specific, but don’t regenerate multiple tissue types. Embryonic stem cells cannot be used to treat damaged tissues because of their tumor forming capacity. The other problem when generating stem cells is the requirement to use viruses to transform cells into stem cells, which is clinically unacceptable, the researchers note.

Research shows that up to 20% of spinal implants either fail to heal or heal only after a delay. The rates are higher for smokers, older people, and patients with diseases such as diabetes or kidney disease.

Human trials are planned next year once the safety and effectiveness of the technique using human cells in mice has been demonstrated.

* The technique involves extracting adult human fat cells and treating them with the compound 5-Azacytidine (AZA), along with platelet-derived growth factor-AB (PDGF-AB), for about two days. The cells are then treated with the growth factor alone for a further two to three weeks.

AZA is known to induce cell plasticity, which is crucial for reprogramming cells. The AZA compound relaxes the hard-wiring of the cell, which is expanded by the growth factor, transforming the bone and fat cells into iMS cells. When the stem cells are inserted into the damaged tissue site, they multiply, promoting growth and healing.
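
Restating that two-stage schedule in a structured form (the encoding and field names are ours, added only for clarity; the agents and durations are those described above):

```python
# Hypothetical encoding of the published two-stage treatment schedule
reprogramming_protocol = [
    {"stage": "plasticity induction",
     "agents": ["5-Azacytidine (AZA)", "PDGF-AB"],
     "duration_days": (2, 2)},        # "about two days"
    {"stage": "expansion",
     "agents": ["PDGF-AB"],
     "duration_days": (14, 21)},      # "a further two to three weeks"
]

for step in reprogramming_protocol:
    lo, hi = step["duration_days"]
    span = f"{lo} days" if lo == hi else f"{lo}-{hi} days"
    print(f'{step["stage"]}: {" + ".join(step["agents"])} for {span}')
```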

The new technique is similar to salamander limb regeneration, which is also dependent on the plasticity of differentiated cells, which can repair multiple tissue types, depending on which body part needs replacing.

Along with confirming that human adult fat cells reprogrammed into iMS stem cells can safely repair damaged tissue in mice, the researchers said further work is required to establish whether iMS cells remain dormant at the sites of transplantation and retain their capacity to proliferate on demand.

https://youtu.be/zAMCBNujzzw

Abstract of PDGF-AB and 5-Azacytidine induce conversion of somatic cells into tissue-regenerative multipotent stem cells

Current approaches in tissue engineering are geared toward generating tissue-specific stem cells. Given the complexity and heterogeneity of tissues, this approach has its limitations. An alternate approach is to induce terminally differentiated cells to dedifferentiate into multipotent proliferative cells with the capacity to regenerate all components of a damaged tissue, a phenomenon used by salamanders to regenerate limbs. 5-Azacytidine (AZA) is a nucleoside analog that is used to treat preleukemic and leukemic blood disorders. AZA is also known to induce cell plasticity. We hypothesized that AZA-induced cell plasticity occurs via a transient multipotent cell state and that concomitant exposure to a receptive growth factor might result in the expansion of a plastic and proliferative population of cells. To this end, we treated lineage-committed cells with AZA and screened a number of different growth factors with known activity in mesenchyme-derived tissues. Here, we report that transient treatment with AZA in combination with platelet-derived growth factor–AB converts primary somatic cells into tissue-regenerative multipotent stem (iMS) cells. iMS cells possess a distinct transcriptome, are immunosuppressive, and demonstrate long-term self-renewal, serial clonogenicity, and multigerm layer differentiation potential. Importantly, unlike mesenchymal stem cells, iMS cells contribute directly to in vivo tissue regeneration in a context-dependent manner and, unlike embryonic or pluripotent stem cells, do not form teratomas. Taken together, this vector-free method of generating iMS cells from primary terminally differentiated cells has significant scope for application in tissue regeneration.

 

First transistors made entirely of nanocrystal ‘inks’ in simplified process

Transistors and other electronic components to be built into flexible or wearable applications; 3D printing planned
http://www.kurzweilai.net/first-transistors-made-entirely-of-nanocrystal-inks
Because this process works at relatively low temperatures, many transistors can be made on a flexible backing at once. (credit: University of Pennsylvania)

University of Pennsylvania engineers have developed a simplified new approach for making transistors by sequentially depositing their components in the form of liquid nanocrystal “inks.” The new process opens the door for transistors and other electronic components to be built into flexible or wearable applications. It also avoids the highly complex current process for creating transistors, which requires high-temperature, high-vacuum equipment. In addition, the new lower-temperature process is compatible with a wide array of materials and can be applied to larger areas.

Transistors patterned on plastic backing

The researchers’ nanocrystal-based field effect transistors were patterned onto flexible plastic backings using spin coating, but could eventually be constructed by additive manufacturing systems, like 3D printers.

Published in the journal Science, the study was led by Cherie Kagan, the Stephen J. Angello Professor in the School of Engineering and Applied Science, and Ji-Hyuk Choi, then a member of her lab and now a senior researcher at the Korea Institute of Geoscience and Mineral Resources. Researchers at Korea University and Yonsei University were also involved.


Kagan’s group developed four nanocrystal inks that comprise the transistor, then deposited them on a flexible backing. (credit: University of Pennsylvania)

The researchers began by dispersing a specific type of nanocrystals in a liquid, creating nanocrystal inks. They developed a library of four of these inks: a conductor (silver), an insulator (aluminum oxide), a semiconductor (cadmium selenide), and a conductor combined with a dopant (a mixture of silver and indium). (“Doping” the semiconductor layer of a transistor with impurities controls whether the device creates a positive or negative charge.)

“These materials are colloids just like the ink in your inkjet printer,” Kagan said, “but you can get all the characteristics that you want and expect from the analogous bulk materials, such as whether they’re conductors, semiconductors or insulators.” Although the electrical properties of several of these nanocrystal inks had been independently verified, they had never been combined into full devices. “Our question was whether you could lay them down on a surface in such a way that they work together to form functional transistors.”

Laying down patterns in layers

Such a process entails layering or mixing them in precise patterns.

First, the conductive silver nanocrystal ink was deposited from liquid on a flexible plastic surface that was treated with a photolithographic mask, then rapidly spun to draw it out in an even layer. The mask was then removed to leave the silver ink in the shape of the transistor’s gate electrode.

The researchers followed that layer by spin-coating a layer of the aluminum oxide nanocrystal-based insulator, then a layer of the cadmium selenide nanocrystal-based semiconductor and finally another masked layer for the indium/silver mixture, which forms the transistor’s source and drain electrodes. Upon heating at relatively low temperatures, the indium dopant diffused from those electrodes into the semiconductor component.
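
Written out as an ordered stack, the sequence above looks like the sketch below. This is only a descriptive restatement of the steps in this article (the dataclass and names are ours; the researchers published no such code):

```python
from dataclasses import dataclass

@dataclass
class InkLayer:
    material: str
    role: str
    masked: bool  # deposited through a photolithographic mask?

stack = [
    InkLayer("silver nanocrystal ink", "gate electrode", masked=True),
    InkLayer("aluminum oxide nanocrystal ink", "gate insulator", masked=False),
    InkLayer("cadmium selenide nanocrystal ink", "semiconductor channel", masked=False),
    InkLayer("indium/silver nanocrystal ink", "source and drain electrodes", masked=True),
]

for n, layer in enumerate(stack, 1):
    mode = "masked spin-coat" if layer.masked else "blanket spin-coat"
    print(f"step {n}: {mode} of {layer.material} -> {layer.role}")
print("step 5: low-temperature anneal diffuses the indium dopant into the channel")
```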

“The trick with working with solution-based materials is making sure that, when you add the second layer, it doesn’t wash off the first, and so on,” Kagan said. “We had to treat the surfaces of the nanocrystals, both when they’re first in solution and after they’re deposited, to make sure they have the right electrical properties and that they stick together in the configuration we want.”

Because this entirely ink-based fabrication process works at lower temperatures than existing vacuum-based methods, the researchers were able to make several transistors on the same flexible plastic backing at the same time.


The inks’ specialized surface chemistry allowed them to stay in configuration without losing their electrical properties. (credit: University of Pennsylvania)


3D-printing transistors for wearables

“This is the first work,” Choi said, “showing that all the components, the metallic, insulating, and semiconducting layers of the transistors, and even the doping of the semiconductor, could be made from nanocrystals.”

“Making transistors over larger areas and at lower temperatures have been goals for an emerging class of technologies, when people think of the Internet of things, large area flexible electronics and wearable devices,” Kagan said. “We haven’t developed all of the necessary aspects so they could be printed yet, but because these materials are all solution-based, it demonstrates the promise of this materials class and sets the stage for additive manufacturing.”

The research was supported by the National Science Foundation, the U.S. Department of Energy, the Office of Naval Research, and the Korea Institute of Geoscience and Mineral Resources funded by the Ministry of Science, ICT, and Future Planning of Korea.


Abstract of Exploiting the colloidal nanocrystal library to construct electronic devices

Synthetic methods produce libraries of colloidal nanocrystals with tunable physical properties by tailoring the nanocrystal size, shape, and composition. Here, we exploit colloidal nanocrystal diversity and design the materials, interfaces, and processes to construct all-nanocrystal electronic devices using solution-based processes. Metallic silver and semiconducting cadmium selenide nanocrystals are deposited to form high-conductivity and high-mobility thin-film electrodes and channel layers of field-effect transistors. Insulating aluminum oxide nanocrystals are assembled layer by layer with polyelectrolytes to form high–dielectric constant gate insulator layers for low-voltage device operation. Metallic indium nanocrystals are codispersed with silver nanocrystals to integrate an indium supply in the deposited electrodes that serves to passivate and dope the cadmium selenide nanocrystal channel layer. We fabricate all-nanocrystal field-effect transistors on flexible plastics with electron mobilities of 21.7 square centimeters per volt-second.

Best textile manufacturing methods for creating human tissues with stem cells
Bioengineers determine three best processes for engineering tissues needed for organ and tissue repair
http://www.kurzweilai.net/best-textile-manufacturing-methods-for-creating-human-tissues-with-stem-cells
All four textile manufacturing processes and corresponding scaffold (structure) types studied exhibited the presence of lipid vacuoles (small red spheres, right column, indicating stem cells undergoing random differentiation), compared to control (left). Electrospun scaffolds (row a) exhibited only a monolayer of lipid vacuoles in a single focal plane, while meltblown, spunbond, and carded scaffolds (rows b, c, d) exhibited vacuoles in multiple planes throughout the fabric thickness. Scale bars: 100 μm (credit: S. A. Tuin et al./Biomedical Materials)

Elizabeth Loboa, dean of the University of Missouri College of Engineering, and her team have tested new tissue-engineering methods (based on textile manufacturing) to find ones that are most cost-effective and can be produced in larger quantities.

Tissue engineering is a process that uses novel biomaterials seeded with stem cells to grow and replace missing tissues. When certain types of materials are used, the “scaffolds” that are created to hold stem cells eventually degrade, leaving natural tissue in its place. The new tissues could help patients suffering from wounds caused by diabetes and circulation disorders, patients in need of cartilage or bone repair, and women who have had mastectomies by replacing their breast tissue. The challenge is creating enough of the material on a scale that clinicians need to treat patients.

Comparing textile manufacturing techniques

http://www.kurzweilai.net/images/electrospinning.png

Electrospinning experiment: nanofibers are collected into an ethanol bath and removed at predefined time intervals (credit: J. M. Coburn et al./The Johns Hopkins University/PNAS)

In typical tissue engineering approaches that use fibers as scaffolds, non-woven materials are often bonded together using an electrostatic field. This process, called electrospinning (see Nanoscale scaffolds and stem cells show promise in cartilage repair and Improved artificial blood vessels), creates the scaffolds needed to attach to stem cells.

However, large-scale production with electrospinning is not cost-effective. “Electrospinning produces weak fibers, scaffolds that are not consistent, and pores that are too small,” Loboa said. “The goal of ‘scaling up’ is to produce hundreds of meters of material that look the same, have the same properties, and can be used in clinical settings. So we investigated the processes that create textiles, such as clothing and window furnishings like drapery, to scale up the manufacturing process.”

The group published two papers using three industry-standard, high-throughput manufacturing techniques — meltblowing, spunbonding, and carding — to determine if they would create the materials needed to mimic native tissue.

Meltblowing is a technique in which nonwoven materials are created by extruding a molten polymer into continuous fibers. Spunbond materials are made much the same way, but the fibers are drawn into a web while in a solid state instead of a molten one. Carding involves the separation of fibers through the use of rollers, forming the web needed to hold stem cells in place.
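
Collapsed into a lookup table for quick comparison (the dictionary structure is ours, distilled only from the descriptions above):

```python
nonwoven_techniques = {
    "meltblowing":     "continuous fibers formed from a molten polymer",
    "spunbonding":     "fibers drawn into a web while in a solid state",
    "carding":         "fibers separated by rollers into a web",
    "electrospinning": "fibers drawn by an electrostatic field (lab-scale baseline)",
}

for name, mechanism in nonwoven_techniques.items():
    print(f"{name:16s} {mechanism}")
```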

http://www.kurzweilai.net/images/carded-scaffold-fabrication.jpg

Schematic of gilled fiber multifilament spinning and carded scaffold fabrication (credit: Stephen A. Tuin et al./Acta Biomaterialia)

Cost-effective methods

Loboa and her colleagues tested these techniques to create polylactic acid (PLA) scaffolds (a Food and Drug Administration-approved material used as collagen fillers), seeded with human stem cells. They then spent three weeks studying whether the stem cells remained healthy and if they began to differentiate into fat and bone pathways, which is the goal of using stem cells in a clinical setting when new bone and/or new fat tissue is needed at a defect site. Results showed that the three textile manufacturing methods proved as viable if not more so than electrospinning.

“These alternative methods are more cost-effective than electrospinning,” Loboa said. “A small sample of electrospun material could cost between $2 and $5. The cost for the three manufacturing methods is between $0.30 and $3.00; these methods proved to be effective and efficient. Next steps include testing how the different scaffolds created in the three methods perform once implanted in animals.”

Researchers at North Carolina State University and the University of North Carolina at Chapel Hill were also involved in the two studies, which were published in Biomedical Materials (open access) and Acta Biomaterialia. The National Science Foundation, the National Institutes of Health, and the Nonwovens Institute provided funding for the studies.


Abstract of Creating tissues from textiles: scalable nonwoven manufacturing techniques for fabrication of tissue engineering scaffolds

Electrospun nonwovens have been used extensively for tissue engineering applications due to their inherent similarities with respect to fibre size and morphology to that of native extracellular matrix (ECM). However, fabrication of large scaffold constructs is time consuming, may require harsh organic solvents, and often results in mechanical properties inferior to the tissue being treated. In order to translate nonwoven based tissue engineering scaffold strategies to clinical use, a high throughput, repeatable, scalable, and economic manufacturing process is needed. We suggest that nonwoven industry standard high throughput manufacturing techniques (meltblowing, spunbond, and carding) can meet this need. In this study, meltblown, spunbond and carded poly(lactic acid) (PLA) nonwovens were evaluated as tissue engineering scaffolds using human adipose derived stem cells (hASC) and compared to electrospun nonwovens. Scaffolds were seeded with hASC and viability, proliferation, and differentiation were evaluated over the course of 3 weeks. We found that nonwovens manufactured via these industry standard, commercially relevant manufacturing techniques were capable of supporting hASC attachment, proliferation, and both adipogenic and osteogenic differentiation of hASC, making them promising candidates for commercialization and translation of nonwoven scaffold based tissue engineering strategies.


Abstract of Fabrication of novel high surface area mushroom gilled fibers and their effects on human adipose derived stem cells under pulsatile fluid flow for tissue engineering applications

The fabrication and characterization of novel high surface area hollow gilled fiber tissue engineering scaffolds via industrially relevant, scalable, repeatable, high speed, and economical nonwoven carding technology is described. Scaffolds were validated as tissue engineering scaffolds using human adipose derived stem cells (hASC) exposed to pulsatile fluid flow (PFF). The effects of fiber morphology on the proliferation and viability of hASC, as well as effects of varied magnitudes of shear stress applied via PFF on the expression of the early osteogenic gene marker runt related transcription factor 2 (RUNX2) were evaluated. Gilled fiber scaffolds led to a significant increase in proliferation of hASC after seven days in static culture, and exhibited fewer dead cells compared to pure PLA round fiber controls. Further, hASC-seeded scaffolds exposed to 3 and 6 dyn/cm2 resulted in significantly increased mRNA expression of RUNX2 after one hour of PFF in the absence of soluble osteogenic induction factors. This is the first study to describe a method for the fabrication of high surface area gilled fibers and scaffolds. The scalable manufacturing process and potential fabrication across multiple nonwoven and woven platforms makes them promising candidates for a variety of applications that require high surface area fibrous materials.

Statement of Significance

We report here for the first time the successful fabrication of novel high surface area gilled fiber scaffolds for tissue engineering applications. Gilled fibers led to a significant increase in proliferation of human adipose derived stem cells after one week in culture, and a greater number of viable cells compared to round fiber controls. Further, in the absence of osteogenic induction factors, gilled fibers led to significantly increased mRNA expression of an early marker for osteogenesis after exposure to pulsatile fluid flow. This is the first study to describe gilled fiber fabrication and their potential for tissue engineering applications. The repeatable, industrially scalable, and versatile fabrication process makes them promising candidates for a variety of scaffold-based tissue engineering applications.

Read Full Post »


Update on FDA Policy Regarding 3D Bioprinted Material

Curator: Stephen J. Williams, Ph.D.

Late last October (2015), the FDA met to finalize a year-long process of drafting guidances for bioprinting human tissue and/or medical devices such as orthopedic devices. The importance of these draft guidances was highlighted in the series of articles below, namely that

  • there were no standards for bioprinting as a manufacturing process
  • the use of human tissues and materials in bioprinting could have unforeseen adverse events associated with the process

The last section of this post includes a recent FDA presentation, as well as an excellent PDF (BioprintingGwinnfinal) on the regulatory concerns of bioprinting, written by James Gwinn, a student at the University of Kentucky.

Bio-Printing Could Be Banned Or Regulated In Two Years

3D Printing News, January 30, 2014

Cross-section of multi-cellular bioprinted human liver tissue Credit: organovo.com

Bio-printing has been touted as the pinnacle of additive manufacturing and medical science, but it might be shut down before it ever splashes onto the medical scene. Research firm Gartner Inc. believes that the rapid development of bio-printing will spark calls to ban the technology for human and non-human tissue within two years.

A report released by Gartner predicts that the time is drawing near when 3D-bioprinted human organs will be readily available, causing widespread debate. They use an example of 3D printed liver tissue by a San Diego-based company named Organovo.

“At one university, they’re actually using cells from human and non-human organs,” said Pete Basiliere, a Gartner Research Director. “In this example, there was human amniotic fluid, canine smooth muscle cells, and bovine cells all being used. Some may feel those constructs are of concern.”

Bio-printing 

Bio-printing uses extruder needles or inkjet-like printers to lay down rows of living cells. Major challenges still face the technology, such as creating vascular structures to support tissue with oxygen and nutrients. Additionally, creating the connective tissue or scaffolding-like structures to support functional tissue is still a barrier that bio-printing will have to overcome.

Organovo has worked around a number of issues, and it hopes to print a fully functioning liver for the pharmaceutical industry by the end of this year. “We have achieved thicknesses of greater than 500 microns, and have maintained liver tissue in a fully functional state with native phenotypic behavior for at least 40 days,” said Mike Renard, Organovo’s executive vice president of commercial operations.

Clinical trials and testing of organs could take over a decade in the U.S. because of the strict rules the U.S. Food and Drug Administration (FDA) places on any new technology. Bio-printing research could outpace regulatory agencies’ ability to keep up.

“What’s going to happen, in some respects, is the research going on worldwide is outpacing regulatory agencies’ ability to keep up,” Basiliere said. “3D bio-printing facilities with the ability to print human organs and tissue will advance far faster than general understanding and acceptance of the ramifications of this technology.”

Other companies have been successful with bio-printing as well. Munich-based EnvisionTEC is already selling a printer called a Bioplotter that sells for $188,000 and can print 3D pieces of human tissue. China’s Hangzhou Dianzi University has developed a printer called Regenovo, which printed a small working kidney that lasted four months.

“These initiatives are well-intentioned, but raise a number of questions that remain unanswered. What happens when complex enhanced organs involving nonhuman cells are made? Who will control the ability to produce them? Who will ensure the quality of the resulting organs?” Basiliere said.

Gartner believes demand for bio-printing will explode in 2015, due to a burgeoning population and insufficient levels of healthcare in emerging markets. “The overall success rates of 3D printing use cases in emerging regions will escalate for three main reasons: the increasing ease of access and commoditization of the technology; ROI; and because it simplifies supply chain issues with getting medical devices to these regions,” Basiliere said. “Other primary drivers are a large population base with inadequate access to healthcare in regions often marred by internal conflicts, wars or terrorism.”

It’s interesting to hear Gartner’s bold predictions for bio-printing. Some of the experts we have talked to seem to think bio-printing is further off than many expect, possibly even 20 or 30 years away for fully functioning organs used in transplants on humans. However, less complicated bio-printing procedures and tissue is only a few years away.

 

FDA examining regulations for 3‑D printed medical devices

Renee Eaton Monday, October 27, 2014


The official purpose of a recent FDA-sponsored workshop was “to provide a forum for FDA, medical device manufacturers, additive manufacturing companies and academia to discuss technical challenges and solutions of 3-D printing.” The FDA wants “input to help it determine technical assessments that should be considered for additively manufactured devices to provide a transparent evaluation process for future submissions.”

Simply put, the FDA is trying to stay current with advanced manufacturing technologies that are revolutionizing patient care and, in some cases, democratizing its availability. When a next-door neighbor can print a medical device in his or her basement, it clearly has many positive and negative implications that need to be considered.

Ignoring the regulatory implications for a moment, the presentations at the workshop were fascinating.

STERIS representative Dr. Bill Brodbeck cautioned that the complex designs and materials now being created with additive manufacturing make sterilization practices challenging. For example, how will the manufacturer know if the implant is sterile or if the agent has been adequately removed? Also, some materials and designs cannot tolerate acids, heat or pressure, making sterilization more difficult.

Dr. Thomas Boland from the University of Texas at El Paso shared his team’s work on 3-D-printed tissues. Using inkjet technology, the researchers are evaluating the variables involved in successfully printing skin. Another bio-printing project being undertaken at Wake Forest by Dr. James Yoo involves constructing bladder-shaped prints using bladder cell biopsies and scaffolding.

Dr. Peter Liacouras at Walter Reed discussed his institution’s practice of using 3-D printing to create surgical guides and custom implants. In another biomedical project, work done at Children’s National Hospital by Drs. Axel Krieger and Laura Olivieri involves the physicians using printed cardiac models to “inform clinical decisions,” i.e. evaluate conditions, plan surgeries and reduce operating time.

As interesting as the presentations were, the subsequent discussions were arguably more important. In an attempt to identify and address all significant impacts of additive manufacturing on medical device production, the subject was organized into preprinting (input), printing (process) and post-printing (output) considerations. Panelists and other stakeholders shared their concerns and viewpoints on each topic in an attempt to inform and persuade FDA decision-makers.

An interesting (but expected) outcome was the relative positions of the various stakeholders. Well-established and large manufacturers proposed validation procedures: material testing, process operating guidelines, quality control, traceability programs, etc. Independent makers argued that this approach would impede, if not eliminate, their ability to provide low-cost prosthetic devices.

Comparing practices to the highly regulated food industry, one can understand and accept the need to adopt similar measures for some additively manufactured medical devices. An implant is going into someone’s body, so the manufacturer needs to evaluate and assure the quality of raw materials, processing procedures and finished product.

But, as in the food industry, this means the producer needs to know the composition of materials. Suppliers cannot hide behind proprietary formulations. If manufacturers are expected to certify that a device is safe, they need to know what ingredients are in the materials they are using.

Many in the industry are also lobbying the FDA to agree that manufacturers should be expected to certify the components and not the additive manufacturing process itself. They argue that what matters is whether the device is safe, not what process was used to make it.

Another distinction should be the product’s risk level. Devices should continue to be classified as I, II or III and that classification, not the process used, should determine its level of regulation.

 

 

Will the FDA Regulate Bioprinting?

Published by Sandra Helsel, May 21, 2014 10:20 am

(3DPrintingChannel) The FDA currently assesses 3D-printed medical devices and conventionally made products under the same guidelines, despite the different manufacturing methods involved. To receive device approval, manufacturers must prove that the device is equivalent to a product already on the market for the same use, or the device must undergo the process of attaining pre-market approval. However, the approval process for 3D-printed devices could become complicated because the devices are manufactured differently and can be customizable. Two teams at the agency are now trying to determine how the approval process should be tweaked to account for the changes.

3D Printing and 3D Bioprinting – Will the FDA Regulate Bioprinting?

This entry was posted by Bill Decker on May 20, 2014 at 8:52 am

VIEW VIDEO

https://www.youtube.com/watch?v=5KY-JZCXKXQ#action=share

 

The 3d printing revolution came to medicine and is making people happy while scaring them at the same time!

3-D printing—the process of making a solid object of any shape from a digital model—has grown increasingly common in recent years, allowing doctors to craft customized devices like hearing aids, dental implants, and surgical instruments. For example, University of Michigan researchers last year used a 3-D laser printer to create an airway splint out of plastic particles. In another case, a patient had 75% of his skull replaced with a 3-D printed implant customized to fit his head.

Printed hearts? Doctors are getting there
The FDA currently assesses 3-D printed medical devices and conventionally made products under the same guidelines, despite the different manufacturing methods involved. To receive device approval, manufacturers must prove that the device is equivalent to a product already on the market for the same use, or the device must undergo the process of attaining pre-market approval.

“We evaluate all devices, including any that utilize 3-D printing technology, for safety and effectiveness, and appropriate benefit and risk determination, regardless of the manufacturing technologies used,” FDA spokesperson Susan Laine said.
However, the approval process for 3-D printed devices could become complicated because the devices are manufactured differently and can be customizable. Two teams at the agency are now trying to determine how the approval process should be tweaked to account for the changes:

http://product-liability.weil.com/news/the-stuff-of-innovation-3d-bioprinting-and-fdas-possible-reorganization/

Print This Post

The Stuff of Innovation – 3D Bioprinting and FDA’s Possible Reorganization

Weil Product Liability Monitor on September 10, 2013 ·

Posted in News

Contributing Author: Meghan A. McCaffrey

With 3D printers, what used to exist only in the realm of science fiction — who doesn’t remember the Star Trek food replicator that could materialize a drink or meal with the mere press of a button — is now becoming more widely available, with food on demand, prosthetic devices, tracheal splints, skull implants, and even liver tissue all having recently been printed, used, implanted or consumed. 3D printing, while exciting, also presents a unique hybrid of technology and biology, making it a potentially unique and difficult area to regulate and oversee. Given all of the recent technological advances surrounding 3D printer technology, the FDA recently announced in a blog post that it too was going 3D, using the technology to “expand our research efforts and expand our capabilities to review innovative medical products.” In addition, the agency will be investigating how 3D printing technology impacts medical devices and manufacturing processes. This will, in turn, raise the additional question of how such technology — one of the goals of which, at least in the medical world, is to create unique and custom printed devices, tissue and other living organs for use in medical procedures — can be properly evaluated, regulated and monitored.
In medicine, 3D printing is known as “bioprinting,” where so-called bioprinters print cells in liquid or gel format in an attempt to engineer cartilage, bone, skin, blood vessels, and even small pieces of liver and other human tissues [see a recent New York Times article here].  Not to overstate the obvious, but this is truly cutting edge science that could have significant health and safety ramifications for end users.  And more importantly for regulatory purposes, such bioprinting does not fit within the traditional category of a “device” or a “biologic.”  As was noted in Forbes, “more of the products that FDA is tasked with regulating don’t fit into the traditional categories in which FDA has historically divided its work.  Many new medical products transcend boundaries between drugs, devices, and biologics…In such a world, the boundaries between FDA’s different centers may no longer make as much sense.”  To that end, Forbes reported that FDA Commissioner Peggy Hamburg announced Friday the formation of a “Program Alignment Group” at the FDA whose goal is to identify and develop plans “to best adapt to the ongoing rapid changes in the regulatory environment, driven by scientific innovation, globalization, the increasing complexity of regulated products, new legal authorities and additional user fee programs.”

It will be interesting to see if the FDA can retool the agency to make it a more flexible, responsive, and function-specific organization.  In the short term, the FDA has tasked two laboratories in the Office of Science and Engineering Laboratories with investigating how the new 3D technology can impact the safety and efficacy of devices and materials manufactured using the technology.  The Functional Performance and Device Use Laboratory is evaluating “the effect of design changes on the safety and performance of devices when used in different patient populations” while the Laboratory for Solid Mechanics is assessing “how different printing techniques and processes affect the strength and durability of the materials used in medical devices.”  Presumably, all of this information will help the FDA evaluate at some point in the future whether a 3D printed heart is safe and effective for use in the patient population.

In any case, this type of hybrid technology can present a risk for companies and manufacturers creating and using such devices.  It remains to be seen what sort of regulations will be put in place to determine, for example, what types of clinical trials and information will have to be provided before a 3D printer capable of printing a human heart is approved for use by the FDA.  Or even on a different scale, what regulatory hurdles (and on-going monitoring, reporting, and studies) will be required before bioprinted cartilage can be implanted in a patient’s knee.  Are food replicators and holodecks far behind?

http://www.raps.org/regulatory-focus/news/2014/05/19000/FDA-3D-Printing-Guidance-and-Meeting/

Home / Regulatory Focus / News

FDA Plans Meeting to Explore Regulation, Medical Uses of 3D Printing Technology

Posted 16 May 2014 By Alexander Gaffney, RAC

The US Food and Drug Administration (FDA) plans to soon hold a meeting to discuss the future of regulating medical products made using 3D printing techniques, it has announced.


Background

3D printing is a manufacturing process which layers printed materials on top of one another, creating three-dimensional parts (as opposed to injection molding or routing materials).

The manufacturing method has recently come into vogue with hobbyists, who have been driven by several factors only likely to accelerate in the near future:

  • The cost of 3D printers has come down considerably.
  • Electronic files which automate the printing process are shareable over the Internet, allowing anyone with the sufficient raw materials to build a part.
  • The technology behind 3D printing is becoming more advanced, allowing for the manufacture of increasingly durable parts.

While the technology has some alarming components—the manufacture of untraceable weapons, for example—it’s increasingly being looked at as the future source of medical product innovation, and in particular for medical devices like prosthetics.

Promise and Problems

But while 3D printing holds promise for patients, it poses immense challenges for regulators, who must assess how to—or whether to—regulate the burgeoning sector.

In a recent FDA Voice blog posting, FDA regulators noted that 3D-printed medical devices have already been used in FDA-cleared clinical interventions, and that it expects more devices to emerge in the future.

Already, FDA’s Office of Science and Engineering laboratories are working to investigate how the technology will affect the future of device manufacturing, and CDRH’s Functional Performance and Device Use Laboratory is developing and adapting computer modeling methods to help determine how small design changes could affect the safety of a device. And at the Laboratory for Solid Mechanics, FDA said it is investigating the materials used in the printing process and how those might affect durability and strength of building materials.

And as Focus noted in August 2013, there are myriad regulatory challenges to confront as well. For example: If a 3D printer makes a medical device, will that device be considered adulterated since it was not manufactured under Quality System Regulation-compliant conditions? Would each device be required to be registered with FDA? And would FDA treat shared design files as unauthorized promotion if they failed to make proper note of the device’s benefits and risks? What happens if a device was never cleared or approved by FDA?

The difficulties for FDA are seemingly endless.

Plans for a Guidance Document

But there have been indications that FDA has been thinking about this issue extensively.

In September 2013, Focus first reported that CDRH Director Jeffrey Shuren was planning to release a guidance on 3D printing in “less than two years.”

Responding to Focus, Shuren said the guidance would be primarily focused on the “manufacturing side,” and probably on how 3D printing occurs and the materials used rather than some of the loftier questions posed above.

“What you’re making, and how you’re making it, may have implications for how safe and effective that device is,” he said, explaining how various methods of building materials can lead to various weaknesses or problems.

“Those are the kinds of things we’re working through. ‘What are the considerations to take into account?'”

“We’re not looking to get in the way of 3D printing,” Shuren continued, noting the parallel between 3D printing and personalized medicine. “We’d love to see that.”

Guidance Coming ‘Soon’

In recent weeks there have been indications that the guidance could soon see a public release. Plastics News reported that CDRH’s Benita Dair, deputy director of the Division of Chemistry and Materials Science, said the 3D printing guidance would be announced “soon.”

“In terms of 3-D printing, I think we will soon put out a communication to the public about FDA’s thoughts,” Dair said, according to Plastics News. “We hope to help the market bring new devices to patients and bring them to the United States first. And we hope to play an integral part in that.”

Public Meeting

But FDA has now announced that it may be awaiting public input before it puts out that guidance document. In a 16 May 2014 Federal Register announcement, the agency said it will hold a meeting in October 2014 on the “technical considerations of 3D printing.”

“The purpose of this workshop is to provide a forum for FDA, medical device manufacturers, additive manufacturing companies, and academia to discuss technical challenges and solutions of 3-D printing. The Agency would like input regarding technical assessments that should be considered for additively manufactured devices to provide a transparent evaluation process for future submissions.”

That language—“transparent evaluation process for future submissions”—indicates that, at least on one level, FDA plans to treat 3D printing no differently than any other medical device, subjecting the products to the same rigorous premarket assessments that many devices now undergo.

FDA’s notice seems to focus on industrial applications for the technology—not individual ones. The agency notes that it has already “begun to receive submissions using additive manufacturing for both traditional and patient-matched devices,” and says it sees “many more on the horizon.”

Among FDA’s chief concerns, it said, are process verification and validation, which are both key parts of the medical device quality manufacturing regulations.

But the notice also indicates that existing guidance documents, such as those specific to medical device types, will still be in effect regardless of the 3D printing guidance.

Discussion Points

FDA’s proposed list of discussion topics include:

  • Preprinting considerations, including but not limited to:
    • material chemistry
    • physical properties
    • recyclability
    • part reproducibility
    • process validation
  • Printing considerations, including but not limited to:
    • printing process characterization
    • software used in the process
    • post-processing steps (hot isostatic pressing, curing)
    • additional machining
  • Post-printing considerations, including but not limited to:
    • cleaning/excess material removal
    • effect of complexity on sterilization and biocompatibility
    • final device mechanics
    • design envelope
    • verification


 


If you are interested in submitting comments to the FDA on this topic, post them by Nov. 10.

FDA Guidance Summary on 3D BioPrinting

(Slide images: FDA regulation guidelines for 3D bioprinting, slides 1-11)

Read Full Post »

Older Posts »