
Nanoscale photodynamic therapy

Larry H. Bernstein, MD, FCAP, Curator


Researchers from the Centre for Nanoscale BioPhotonics (CNBP), an Australian Research Centre of Excellence, have shown that nanoparticles used in combination with X-rays are a viable method for killing cancer cells deep within the living body.

The research, published in the journal Scientific Reports, is based on the successful quantification of singlet oxygen produced during photodynamic therapy for cancer. Singlet oxygen molecules (a highly reactive form of oxygen) are able to kill or inhibit the growth of cancer cells in the body due to their toxicity.

Co-lead author on the paper, Ewa Goldys, Deputy Director of the CNBP and Professor at Macquarie University explained, “Photodynamic therapy is where light sensitive compounds are placed near diseased cells, then activated by light, producing short lived molecular by-products that can destroy or damage the cells being targeted.”

“In this case, X-rays (a form of light) were used to stimulate cerium fluoride (CeF3) nanoparticles which had been placed near a group of cells. Singlet oxygen was produced as a by-product of the X-ray and CeF3 interaction, which was then successfully measured.”

Goldys believes the research is significant, as this is the first time that anyone has been able to accurately quantify the number of singlet oxygen molecules produced in this type of procedure.

“Singlet oxygen molecules are a far more reactive form of oxygen, but they can only kill cancer cells if generated in sufficient quantity,” said Goldys.

“In our testing we established that X-rays at therapeutic radiation doses produce enough singlet oxygen molecules to be effective in photodynamic therapy.”

According to Goldys, photodynamic therapy has traditionally utilised near-infrared or visible light, which is unable to penetrate far into the body, limiting its use to cancers on or near the surface of the skin.

“We’re looking to target cancer cells deeper in the body, hence the use of X-rays, which can penetrate into deeper levels of tissue and are already used in medical diagnostics and therapy.”

Concluded Goldys, “What we’ve shown through our measurements is the applicability of the photodynamic therapy approach to effectively treat tumours deep within the body.”

“The beauty of this type of treatment is that it uses different biological pathways to kill cells as compared to chemotherapy, radiotherapy and other current cancer practices.

“Deep tissue photodynamic therapy will potentially provide new treatment options for the cancer patients of the future.”

Next steps for this research will see different nanoparticles tested and measured for effectiveness in singlet oxygen production.




Counting Cancer-busting Oxygen Molecules

Macquarie University   http://www.rdmag.com/news/2016/02/counting-cancer-busting-oxygen-molecules




Confluence of Chemistry, Physics, and Biology

Curator: Larry H. Bernstein, MD, FCAP


  1. How Nanotechnology Works by Kevin Bonsor and Jonathan Strickland



There’s an unprecedented multidisciplinary convergence of scientists dedicated to the study of a world so small, we can’t see it — even with a light microscope. That world is the field of nanotechnology, the realm of atoms and nanostructures.

Nanotechnology is so new, no one is really sure what will come of it. Even so, predictions range from the ability to reproduce things like diamonds and food to the world being devoured by self-replicating nanorobots.

In order to understand the unusual world of nanotechnology, we need to get an idea of the units of measure involved. A centimeter is one-hundredth of a meter, a millimeter is one-thousandth of a meter, and a micrometer is one-millionth of a meter, but all of these are still huge compared to the nanoscale. A nanometer (nm) is one-billionth of a meter, smaller than the wavelength of visible light and a hundred-thousandth the width of a human hair.
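The unit relationships above are easy to sanity-check. The sketch below uses round, illustrative figures (a human hair ~100,000 nm wide, an atom ~0.1 nm across), not precise measurements:

```python
# A quick sanity check of the length scales in the article. Assumed round
# figures: a human hair ~100,000 nm wide, an atom ~0.1 nm across.

METER_IN_NM = 1e9  # one meter is a billion nanometers

def to_nm(value_in_meters: float) -> float:
    """Convert a length in meters to nanometers."""
    return value_in_meters * METER_IN_NM

hair_width_nm = 100_000   # ~width of a human hair
atom_diameter_nm = 0.1    # ~diameter of an atom

# A micrometer (one-millionth of a meter) is still a thousand nanometers:
print(to_nm(1e-6))                  # 1000.0
# A nanometer is one hundred-thousandth of the width of a hair...
print(hair_width_nm)                # 100000
# ...yet about ten atoms still fit across a single nanometer:
print(round(1 / atom_diameter_nm))  # 10
```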

As small as a nanometer is, it’s still large compared to the atomic scale. An atom has a diameter of about 0.1 nm. An atom’s nucleus is much smaller — about 0.00001 nm. Atoms are the building blocks for all matter in our universe. You and everything around you are made of atoms. Nature has perfected the science of manufacturing matter molecularly. For instance, our bodies are assembled in a specific manner from millions of living cells. Cells are nature’s nanomachines. At the atomic scale, elements are at their most basic level. On the nanoscale, we can potentially put these atoms together to make almost anything.

In a lecture called “Small Wonders: The World of Nanoscience,” Nobel Prize winner Dr. Horst Störmer said that the nanoscale is more interesting than the atomic scale because the nanoscale is the first point where we can assemble something — it’s not until we start putting atoms together that we can make anything useful.

In this article, we’ll learn about what nanotechnology means today and what the future of nanotechnology may hold. We’ll also look at the potential risks that come with working at the nanoscale.

In the next section, we’ll learn more about our world on the nanoscale.

The World of Nanotechnology

Experts sometimes disagree about what constitutes the nanoscale, but in general, you can think of nanotechnology as dealing with anything measuring between 1 and 100 nm. Larger than that is the microscale, and smaller than that is the atomic scale.

Nanotechnology is rapidly becoming an interdisciplinary field. Biologists, chemists, physicists and engineers are all involved in the study of substances at the nanoscale. Dr. Störmer hopes that the different disciplines develop a common language and communicate with one another. Only then, he says, can we effectively teach nanoscience, since you can’t understand the world of nanotechnology without a solid background in multiple sciences.

One of the exciting and challenging aspects of the nanoscale is the role that quantum mechanics plays in it. The rules of quantum mechanics are very different from classical physics, which means that the behavior of substances at the nanoscale can sometimes contradict common sense by behaving erratically. You can’t walk up to a wall and immediately teleport to the other side of it, but at the nanoscale an electron can — it’s called electron tunneling. Substances that are insulators, meaning they can’t carry an electric charge, in bulk form might become semiconductors when reduced to the nanoscale. Melting points can change due to an increase in surface area. Much of nanoscience requires that you forget what you know and start learning all over again.

So what does this all mean? Right now, it means that scientists are experimenting with substances at the nanoscale to learn about their properties and how we might be able to take advantage of them in various applications. Engineers are trying to use nano-size wires to create smaller, more powerful microprocessors. Doctors are searching for ways to use nanoparticles in medical applications. Still, we’ve got a long way to go before nanotechnology dominates the technology and medical markets.

In the next section, we’ll look at two important nanotechnology structures: nanowires and carbon nanotubes.


At the nanoscale, objects are so small that we can’t see them — even with a light microscope. Nanoscientists have to use tools like scanning tunneling microscopes or atomic force microscopes to observe anything at the nanoscale. Scanning tunneling microscopes use a weak electric current to probe the scanned material. Atomic force microscopes scan surfaces with an incredibly fine tip. Both microscopes send data to a computer, which can assemble the information and project it graphically onto a monitor.



Nanowires and Carbon Nanotubes

Currently, scientists find two nano-size structures of particular interest: nanowires and carbon nanotubes. Nanowires are wires with a very small diameter, sometimes as small as 1 nanometer. Scientists hope to use them to build tiny transistors for computer chips and other electronic devices. In the last couple of years, carbon nanotubes have overshadowed nanowires. We’re still learning about these structures, but what we’ve learned so far is very exciting.

A carbon nanotube is a nano-size cylinder of carbon atoms. Imagine a sheet of carbon atoms, which would look like a sheet of hexagons. If you roll that sheet into a tube, you’d have a carbon nanotube. Carbon nanotube properties depend on how you roll the sheet. In other words, even though all carbon nanotubes are made of carbon, they can be very different from one another based on how you align the individual atoms.
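The way the sheet is rolled is captured mathematically by a pair of chiral indices (n, m). A standard result is that the tube diameter follows from the graphene lattice constant (a ≈ 0.246 nm), and a common rule of thumb says the tube behaves like a metal when (n − m) is divisible by 3 and like a semiconductor otherwise. A minimal sketch:

```python
import math

# Chiral indices (n, m) describe how the graphene sheet is rolled up.
A_NM = 0.246  # graphene lattice constant, in nanometers

def diameter_nm(n: int, m: int) -> float:
    """Diameter of an (n, m) carbon nanotube in nanometers."""
    return A_NM * math.sqrt(n * n + n * m + m * m) / math.pi

def is_metallic(n: int, m: int) -> bool:
    """Rule of thumb: metallic when (n - m) is a multiple of 3."""
    return (n - m) % 3 == 0

# Same element, same sheet, different roll-up, different properties:
print(round(diameter_nm(10, 10), 3), is_metallic(10, 10))  # 1.356 True  (armchair: metallic)
print(round(diameter_nm(10, 0), 3), is_metallic(10, 0))    # 0.783 False (zigzag: semiconducting)
```

This is why two nanotubes made of identical carbon atoms can be as different as a wire and a transistor.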

With the right arrangement of atoms, you can create a carbon nanotube that’s hundreds of times stronger than steel, but six times lighter.

Engineers plan to make building material out of carbon nanotubes, particularly for things like cars and airplanes. Lighter vehicles would mean better fuel efficiency, and the added strength translates to increased passenger safety.

Carbon nanotubes can also be effective semiconductors with the right arrangement of atoms. Scientists are still working on finding ways to make carbon nanotubes a realistic option for transistors in microprocessors and other electronics.

In the next section, we’ll look at products that are taking advantage of nanotechnology.


What’s the difference between graphite and diamonds? Both materials are made of carbon, yet they have vastly different properties. Graphite is soft; diamonds are hard. Graphite conducts electricity, but diamonds are insulators. Graphite is opaque; diamonds are usually transparent. Graphite and diamonds have these properties because of the way the carbon atoms bond together at the nanoscale.

Products with Nanotechnology

You might be surprised to find out how many products on the market are already benefiting from nanotechnology.

Bridgestone engineers developed the Quick Response Liquid Powder Display, a flexible digital screen, using nanotechnology. (Image: Yoshikazu Tsuno/AFP/Getty Images)

  • Sunscreen – Many sunscreens contain nanoparticles of zinc oxide or titanium oxide. Older sunscreen formulas use larger particles, which is what gives most sunscreens their whitish color. Smaller particles are less visible, meaning that when you rub the sunscreen into your skin, it doesn’t give you a whitish tinge.
  • Self-cleaning glass – A company called Pilkington offers a product they call Activ Glass, which uses nanoparticles to make the glass photocatalytic and hydrophilic. The photocatalytic effect means that when UV radiation from light hits the glass, nanoparticles become energized and begin to break down and loosen organic molecules on the glass (in other words, dirt). Hydrophilic means that when water makes contact with the glass, it spreads across the glass evenly, which helps wash the glass clean.
  • Clothing – Scientists are using nanoparticles to enhance your clothing. By coating fabrics with a thin layer of zinc oxide nanoparticles, manufacturers can create clothes that give better protection from UV radiation. Some clothes have nanoparticles in the form of little hairs or whiskers that help repel water and other materials, making the clothing stain-resistant.
  • Scratch-resistant coatings – Engineers discovered that adding aluminum silicate nanoparticles to scratch-resistant polymer coatings made the coatings more effective, increasing resistance to chipping and scratching. Scratch-resistant coatings are common on everything from cars to eyeglass lenses.
  • Antimicrobial bandages – Scientist Robert Burrell created a process to manufacture antibacterial bandages using nanoparticles of silver. Silver ions block microbes’ cellular respiration. In other words, silver smothers harmful cells, killing them.

New products incorporating nanotechnology are coming out every day. Wrinkle-resistant fabrics, deep-penetrating cosmetics, liquid crystal displays (LCD) and other conveniences using nanotechnology are on the market. Before long, we’ll see dozens of other products that take advantage of nanotechnology, ranging from Intel microprocessors to bio-nanobatteries, capacitors only a few nanometers thick. While this is exciting, it’s only the tip of the iceberg as far as how nanotechnology may impact us in the future.

In the next section, we’ll look at some of the incredible things that nanotechnology may hold for us.­


Nanotechnology is making a big impact on the tennis world. In 2002, the tennis racket company Babolat introduced the VS Nanotube Power racket. They made the racket out of carbon nanotube-infused graphite, meaning the racket was very light, yet many times stronger than steel. Meanwhile, tennis ball manufacturer Wilson introduced the Double Core tennis ball. These balls have a coating of clay nanoparticles on the inner core. The clay acts as a sealant, making it very difficult for air to escape the ball.



The Future of Nanotechnology

In the world of “Star Trek,” machines called replicators can produce practically any physical object, from weapons to a steaming cup of Earl Grey tea. Long considered to be exclusively the product of science fiction, today some people believe replicators are a very real possibility. They call it molecular manufacturing, and if it ever does become a reality, it could drastically change the world.


Atoms and molecules stick together because they have complementary shapes that lock together, or charges that attract. Just like with magnets, a positively charged atom will stick to a negatively charged atom. As millions of these atoms are pieced together by nanomachines, a specific product will begin to take shape. The goal of molecular manufacturing is to manipulate atoms individually and place them in a pattern to produce a desired structure.

The first step would be to develop nanoscopic machines, called assemblers, that scientists can program to manipulate atoms and molecules at will. Rice University Professor Richard Smalley points out that it would take a single nanoscopic machine millions of years to assemble a meaningful amount of material. In order for molecular manufacturing to be practical, you would need trillions of assemblers working together simultaneously. Eric Drexler believes that assemblers could first replicate themselves, building other assemblers. Each generation would build another, resulting in exponential growth until there are enough assemblers to produce objects.
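The power of Drexler's self-replication argument comes from simple doubling arithmetic: a population that copies itself each generation reaches a trillion in only about forty generations, since 2^40 is just over 10^12. A small sketch of the count:

```python
# If every assembler builds one copy of itself per generation, the
# population doubles each generation: 1, 2, 4, 8, ...
def generations_to_reach(target: int) -> int:
    """Number of doublings for one assembler to become at least `target`."""
    count, generations = 1, 0
    while count < target:
        count *= 2
        generations += 1
    return generations

# Reaching the "trillions of assemblers" Smalley says are needed takes
# surprisingly few rounds of self-replication:
print(generations_to_reach(10**12))  # 40
```

This is why exponential self-replication, rather than building each assembler by hand, is central to the molecular-manufacturing proposal.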

Assemblers might have moving parts like the nanogears in this concept drawing.

Trillions of assemblers and replicators could fill an area smaller than a cubic millimeter, and could still be too small for us to see with the naked eye. Assemblers and replicators could work together to automatically construct products, and could eventually replace all traditional labor methods. This could vastly decrease manufacturing costs, thereby making consumer goods plentiful, cheaper and stronger. Eventually, we could be able to replicate anything, including diamonds, water and food. Famine could be eradicated by machines that fabricate foods to feed the hungry.

Nanotechnology may have its biggest impact on the medical industry. Patients will drink fluids containing nanorobots programmed to attack and reconstruct the molecular structure of cancer cells and viruses. There’s even speculation that nanorobots could slow or reverse the aging process, and life expectancy could increase significantly. Nanorobots could also be programmed to perform delicate surgeries — such nanosurgeons could work at a level a thousand times more precise than the sharpest scalpel.

By working on such a small scale, a nanorobot could operate without leaving the scars that conventional surgery does. Additionally, nanorobots could change your physical appearance. They could be programmed to perform cosmetic surgery, rearranging your atoms to change your ears, nose, eye color or any other physical feature you wish to alter.

Nanotechnology has the potential to have a positive effect on the environment. For instance, scientists could program airborne nanorobots to rebuild the thinning ozone layer. Nanorobots could remove contaminants from water sources and clean up oil spills. Manufacturing materials using the bottom-up method of nanotechnology also creates less pollution than conventional manufacturing processes. Our dependence on non-renewable resources would diminish with nanotechnology. Cutting down trees, mining coal or drilling for oil may no longer be necessary — nanomachines could produce those resources.

Many nanotechnology experts feel that these applications are well outside the realm of possibility, at least for the foreseeable future. They caution that the more exotic applications are only theoretical. Some worry that nanotechnology will end up like virtual reality — in other words, the hype surrounding nanotechnology will continue to build until the limitations of the field become public knowledge, and then interest (and funding) will quickly dissipate.

In the next section, we’ll look at some of the challenges and risks of nanotechnology.


In 1959, physicist and future Nobel Prize winner Richard Feynman gave a lecture to the American Physical Society called “There’s Plenty of Room at the Bottom.” His speech focused on the field of miniaturization and how he believed man would create increasingly small yet powerful devices.

In 1986, K. Eric Drexler wrote “Engines of Creation” and introduced the term nanotechnology. Scientific research in the field has expanded rapidly over the last decade. Inventors and corporations aren’t far behind — today, more than 13,000 patents registered with the U.S. Patent Office have the word “nano” in them.

Nanotechnology Challenges, Risks and Ethics


The most immediate challenge in nanotechnology is that we need to learn more about materials and their properties at the nanoscale. Universities and corporations across the world are rigorously studying how atoms fit together to form larger structures. We’re still learning about how quantum mechanics impacts substances at the nanoscale.

Because elements at the nanoscale behave differently than they do in their bulk form, there’s a concern that some nanoparticles could be toxic. Some doctors worry that nanoparticles are so small that they could easily cross the blood-brain barrier, a membrane that protects the brain from harmful chemicals in the bloodstream. If we plan on using nanoparticles to coat everything from our clothing to our highways, we need to be sure that they won’t poison us.

Closely related to the knowledge barrier is the technical barrier. In order for the incredible predictions regarding nanotechnology to come true, we have to find ways to mass produce nano-size products like transistors and nanowires. While we can use nanoparticles to build things like tennis rackets and make wrinkle-free fabrics, we can’t make really complex microprocessor chips with nanowires yet.

There are some hefty social concerns about nanotechnology, too. Nanotechnology may allow us to create more powerful weapons, both lethal and non-lethal. Some organizations are concerned that we’ll only get around to examining the ethical implications of nanotechnology in weaponry after these devices are built. They urge scientists and politicians to examine carefully all the possibilities of nanotechnology before designing increasingly powerful weapons.

If nanotechnology in medicine makes it possible for us to enhance ourselves physically, is that ethical? In theory, medical nanotechnology could make us smarter, stronger and give us other abilities ranging from rapid healing to night vision. Should we pursue such goals? Could we continue to call ourselves human, or would we become transhuman — the next step on man’s evolutionary path? Since almost every technology starts off as very expensive, would this mean we’d create two races of people — a wealthy race of modified humans and a poorer population of unaltered people? We don’t have answers to these questions, but several organizations are urging nanoscientists to consider these implications now, before it becomes too late.

Not all questions involve altering the human body — some deal with the world of finance and economics. If molecular manufacturing becomes a reality, how will that impact the world’s economy? Assuming we can build anything we need with the click of a button, what happens to all the manufacturing jobs? If you can create anything using a replicator, what happens to currency? Would we move to a completely electronic economy? Would we even need money?

Whether we’ll actually need to answer all of these questions is a matter of debate. Many experts think that concerns like grey goo and transhumans are at best premature, and probably unnecessary. Even so, nanotechnology will definitely continue to impact us as we learn more about the enormous potential of the nanoscale.


Eric Drexler, the man who introduced the word nanotechnology, presented a frightening apocalyptic vision — self-replicating nanorobots malfunctioning, duplicating themselves a trillion times over, rapidly consuming the entire world as they pull carbon from the environment to build more of themselves. It’s called the “grey goo” scenario, where a synthetic nano-size device replaces all organic material. Another scenario involves nanodevices made of organic material wiping out the Earth — the “green goo” scenario.

The Technion’s Russell Berrie Nanotechnology Institute is a world leader in nanotechnology research, having made seminal discoveries in the field.

Breakthroughs in Nanotechnology

  • Prof. Ester Segal and a team of Israeli and American researchers find that silicon nanomaterials used for the localized delivery of chemotherapy drugs behave differently in cancerous tumors than they do in healthy tissues. The findings could help scientists better design such materials to facilitate the controlled and targeted release of the chemotherapy drugs to tumors.
  • Associate Professor Alex Leshansky of the Faculty of Chemical Engineering is part of an international team that has created a tiny screw-shaped propeller that can move in a gel-like fluid, mimicking the environment in a living organism. The breakthrough brings closer the day when robots only nanometers (billionths of a meter) in length can maneuver and deliver medicine inside the human body, and possibly inside human cells.
  • Prof. Amit Miller and a team of researchers at the Technion and Boston University have discovered a simple way to control the passage of DNA molecules through nanopore sensors. The breakthrough could lead to low-cost, ultra-fast DNA sequencing that would revolutionize healthcare and biomedical research, and spark major advances in drug development, preventative medicine and personalized medicine.

– Israeli Prime Minister Benjamin Netanyahu presents U.S. President Barack Obama with nano-sized inscribed replicas of the Declarations of Independence of the United States and the State of Israel. The replicas were created by scientists at the Technion’s Russell Berrie Nanotechnology Institute (RBNI). (03/13)

– Prof. Nir Tessler has found a way to generate an electrical field inside solar cells that use inorganic nanocrystals or “quantum dots,” making them more suitable for building an energy-efficient nanocrystal solar cell. (11/11)

– Researchers led by Prof. Wayne Kaplan discover the nature of nanometer-thick layers between different materials and find that they have both solid and liquid properties. The results could enable scientists to improve the resilience of the bond between ceramic materials and metals, two types of materials that “do not like” to come into contact. Applications include cutting tools for metal-working; composites for brake pads; the joins between metal conducting wires and chips in computers; and the application of protective ceramic coatings on jet engine blades. (05/11)

– Israeli President Shimon Peres presents Pope Benedict XVI with a “Nano-Bible” smaller than a pinhead. Created by researchers at the Technion-Israel Institute of Technology, the complete punctuated and vowelized version of the Old Testament takes up just 0.5 square millimeters. The idea to write the Bible on such a tiny surface was conceived by Professor Uri Sivan, the first head of the university’s Russell Berrie Nanotechnology Institute (RBNI). (05/09)

Nanotechnology and medicine

Expert Opinion on Biological Therapy 2003; Volume 3, Issue 4, 655-663
Dwaine F Emerich & Christopher G Thanos   http://dx.doi.org/10.1517/14712598.3.4.655

Nanotechnology, or systems/device manufacture at the molecular level, is a multidisciplinary scientific field undergoing explosive development. The genesis of nanotechnology can be traced to the promise of revolutionary advances across medicine, communications, genomics and robotics. On the surface, miniaturisation provides cost effective and more rapidly functioning mechanical, chemical and biological components. Less obvious though is the fact that nanometre sized objects also possess remarkable self-ordering and assembly behaviours under the control of forces quite different from macro objects. These unique behaviours are what make nanotechnology possible, and by increasing our understanding of these processes, new approaches to enhancing the quality of human life will surely be developed. A complete list of the potential applications of nanotechnology is too vast and diverse to discuss in detail, but without doubt one of the greatest values of nanotechnology will be in the development of new and effective medical treatments (i.e., nanomedicine). This review focuses on the potential of nanotechnology in medicine, including the development of nanoparticles for diagnostic and screening purposes, artificial receptors, DNA sequencing using nanopores, manufacture of unique drug delivery systems, gene therapy applications and the enablement of tissue engineering.

Nanotechnology in Medicine – Nanomedicine

The use of nanotechnology in medicine offers some exciting possibilities. Some techniques are only imagined, while others are at various stages of testing, or actually being used today.

Nanotechnology in medicine involves applications of nanoparticles currently under development, as well as longer range research that involves the use of manufactured nano-robots to make repairs at the cellular level (sometimes referred to as nanomedicine).

Whatever you call it, the use of nanotechnology in the field of medicine could revolutionize the way we detect and treat damage to the human body and disease in the future, and many techniques only imagined a few years ago are making remarkable progress towards becoming realities.

Nanotechnology in Medicine Application: Drug Delivery

One application of nanotechnology in medicine currently being developed involves employing nanoparticles to deliver drugs, heat, light or other substances to specific types of cells (such as cancer cells). Particles are engineered so that they are attracted to diseased cells, which allows direct treatment of those cells. This technique reduces damage to healthy cells in the body and allows for earlier detection of disease.

For example, nanoparticles that deliver chemotherapy drugs directly to cancer cells are under development. Tests of targeted chemotherapy delivery are in progress, and final approval for use with cancer patients is pending. One company, CytImmune, has published the results of a Phase 1 clinical trial of its first targeted chemotherapy drug, and another company, BIND Biosciences, has published preliminary results of a Phase 1 clinical trial for its first targeted chemotherapy drug and is proceeding with a Phase 2 clinical trial.

Researchers at the University of Illinois have demonstrated that gelatin nanoparticles can be used to deliver drugs to damaged brain tissue.

Researchers at MIT are using nanoparticles to deliver vaccines. The nanoparticles protect the vaccine, allowing the vaccine time to trigger a stronger immune response.

Researchers are developing a method to release insulin that uses a sponge-like matrix that contains insulin as well as nanocapsules containing an enzyme. When the glucose level rises, the nanocapsules release hydrogen ions, which bind to the fibers making up the matrix. The hydrogen ions make the fibers positively charged, repelling each other and creating openings in the matrix through which insulin is released.
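The behavior described above is essentially a glucose-gated release valve. The toy model below captures only that control logic; the threshold (180 mg/dL) and release rate are illustrative assumptions, not values from the research:

```python
# Toy model of a glucose-responsive insulin matrix: release switches on
# only while glucose exceeds a threshold. All numbers are hypothetical.
GLUCOSE_THRESHOLD = 180.0   # mg/dL, assumed trigger level
RELEASE_PER_STEP = 2.0      # insulin units released per time step (assumed)

def insulin_released(glucose_readings: list[float]) -> float:
    """Total insulin released over a series of glucose readings."""
    total = 0.0
    for glucose in glucose_readings:
        if glucose > GLUCOSE_THRESHOLD:
            # High glucose -> enzyme nanocapsules release hydrogen ions,
            # matrix fibers repel, openings form, insulin escapes.
            total += RELEASE_PER_STEP
    return total

readings = [120, 150, 200, 240, 190, 160, 130]
print(insulin_released(readings))  # 6.0 (three readings above threshold)
```

The appeal of the real system is that this feedback loop runs chemically, with no sensor or pump: the matrix itself responds to glucose.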

Researchers are developing a nanoparticle that can be taken orally and pass through the lining of the intestines into the bloodstream. This should allow drugs that must now be delivered with a shot to be taken in pill form.

Researchers are also developing a nanoparticle to defeat viruses. The nanoparticle does not actually destroy virus particles, but delivers an enzyme that prevents their reproduction in the patient’s bloodstream.

Read more about nanomedicine in drug delivery

Nanotechnology in Medicine Application: Therapy Techniques

Researchers have developed “nanosponges” that absorb toxins and remove them from the bloodstream. The nanosponges are polymer nanoparticles coated with a red blood cell membrane. The red blood cell membrane allows the nanosponges to travel freely in the bloodstream and attract the toxins.

Researchers have demonstrated a method to generate sound waves that are powerful, but also tightly focused, that may eventually be used for noninvasive surgery. They use a lens coated with carbon nanotubes to convert light from a laser to focused sound waves. The intent is to develop a method that could blast tumors or other diseased areas without damaging healthy tissue.

Researchers are investigating the use of bismuth nanoparticles to concentrate radiation used in radiation therapy to treat cancer tumors. Initial results indicate that the bismuth nanoparticles would increase the radiation dose to the tumor by 90 percent.

Nanoparticles composed of polyethylene glycol-hydrophilic carbon clusters (PEG-HCC) have been shown to absorb free radicals at a much higher rate than the proteins our body uses for this function. This ability to absorb free radicals may reduce the harm that is caused by the release of free radicals after a brain injury.

Targeted heat therapy is being developed to destroy breast cancer tumors. In this method antibodies that are strongly attracted to proteins produced in one type of breast cancer cell are attached to nanotubes, causing the nanotubes to accumulate at the tumor. Infrared light from a laser is absorbed by the nanotubes and produces heat that incinerates the tumor.

Read more about nanomedicine therapy techniques

Nanotechnology in Medicine Application: Diagnostic Techniques

Researchers at MIT have developed a sensor using carbon nanotubes embedded in a gel that can be injected under the skin to monitor the level of nitric oxide in the bloodstream. The level of nitric oxide is important because it indicates inflammation, allowing easy monitoring of inflammatory diseases. In tests with laboratory mice the sensor remained functional for over a year.

Researchers at the University of Michigan are developing a sensor that can detect a very low level of cancer cells, as low as 3 to 5 cancer cells per milliliter of blood. They grow sheets of graphene oxide, to which they attach molecules containing an antibody that binds to the cancer cells. They then tag the cancer cells with fluorescent molecules to make them stand out under a microscope.

Researchers have demonstrated a way to use nanoparticles for early diagnosis of infectious disease. The nanoparticles attach to molecules in the bloodstream that indicate the start of an infection. When the sample is scanned for Raman scattering, the nanoparticles enhance the Raman signal, allowing detection of the molecules that indicate an infectious disease at a very early stage.

A test for early detection of kidney damage is being developed. The method uses gold nanorods functionalized to attach to the type of protein generated by damaged kidneys. As protein accumulates on a nanorod, the nanorod's color shifts. The test is designed to be done quickly and inexpensively for early detection of a problem.

Read more about nanomedicine diagnostic techniques

Nanotechnology in Medicine Application: Anti-Microbial Techniques

One of the earliest nanomedicine applications was the use of nanocrystalline silver as an antimicrobial agent for the treatment of wounds, as discussed on the Nucryst Pharmaceuticals Corporation website.

A nanoparticle cream has been shown to fight staph infections. The nanoparticles contain nitric oxide gas, which is known to kill bacteria. Studies on mice have shown that using the nanoparticle cream to release nitric oxide gas at the site of staph abscesses significantly reduced the infection.

Burn dressings can be coated with nanocapsules containing antibiotics. If an infection starts, the harmful bacteria in the wound cause the nanocapsules to break open, releasing the antibiotics. This allows much quicker treatment of an infection and reduces the number of times a dressing has to be changed.

A welcome idea in the early study stages is the elimination of bacterial infections in a patient within minutes, instead of delivering treatment with antibiotics over a period of weeks. You can read about design analysis for the antimicrobial nanorobot used in such treatments in the following article: Microbivores: Artificial Mechanical Phagocytes using Digest and Discharge Protocol.

Nanotechnology in Medicine Application: Cell Repair

Nanorobots could actually be programmed to repair specific diseased cells, functioning in a similar way to antibodies in our natural healing processes.  Read about design analysis for one such cell repair nanorobot in this article: The Ideal Gene Delivery Vector: Chromallocytes, Cell Repair Nanorobots for Chromosome Repair Therapy

Nanotechnology in Medicine: Company Directory

Company — Product
CytImmune — Gold nanoparticles for targeted delivery of drugs to tumors
NanoBio — Nanoemulsions for nasal delivery to fight viruses (such as the flu and colds) or through the skin to fight bacteria

More nanomedicine companies

Nanotechnology in Medicine: Resources

National Cancer Institute Alliance for Nanotechnology in Cancer; This alliance includes a Nanotechnology Characterization Lab as well as eight Centers of Cancer Nanotechnology Excellence.

Alliance for NanoHealth; This alliance includes eight research institutions performing collaborative research.

European Nanomedicine platform

The National Institutes of Health (NIH) is funding research at eight Nanomedicine Development Centers.

Page 2: Nanomedicine based upon nano-robots

Compiled by Earl Boysen of Hawk’s Perch Technical Writing, LLC and UnderstandingNano.com.

Future impact of nanotechnology on medicine and dentistry

Mallanagouda Patil,1 Dhoom Singh Mehta,2 and Sowjanya Guvva3

J Indian Soc Periodontol. 2008 May-Aug; 12(2): 34–40.

doi:  10.4103/0972-124X.44088  PMCID: PMC2813556

The human characteristics of curiosity, wonder, and ingenuity are as old as mankind. People around the world have long harnessed their curiosity into inquiry and the process of scientific methodology. Recent years have witnessed unprecedented growth in research in the area of nanoscience. There is increasing optimism that nanotechnology applied to medicine and dentistry will bring significant advances in the diagnosis, treatment, and prevention of disease. Growing interest in the future medical applications of nanotechnology is leading to the emergence of a new field called nanomedicine. To realize its applications, nanomedicine must overcome several challenges; in return, it promises to improve our understanding of the pathophysiologic basis of disease, bring more sophisticated diagnostic opportunities, and yield more effective therapies and preventive measures. When doctors gain access to medical robots, they will be able to quickly cure most known diseases that hobble and kill people today, to rapidly repair most physical injuries our bodies can suffer, and to vastly extend the human health span. Molecular technology is destined to become the core technology underlying all of 21st-century medicine and dentistry. In this article, we attempt an early glimpse of the future impact of nanotechnology on medicine and dentistry.

Keywords: Nanodentistry, nanomedicine, nanoscience, nanotechnology


The world began without man, and it will complete itself without him. …Claude Lévi-Strauss. Winfred Phillips, DSc, said, “You have to be able to fabricate things, you have to be able to analyze things, you have to be able to handle things smaller than ever imagined in ways not done before”.[1] Many researchers believe that scientific devices dwarfed by dust mites may one day be capable of grand biomedical miracles.

The vision of nanotechnology was introduced in 1959 by the late Nobel physicist Richard P Feynman, whose dinner talk “There’s Plenty of Room at the Bottom”[2] proposed employing machine tools to make smaller machine tools, these in turn to be used to make still smaller machine tools, and so on all the way down to the atomic level, noting that this is “a development which I think cannot be avoided”. He suggested that nanomachines, nanorobots, and nanodevices ultimately could be used to develop a wide range of atomically precise microscopic instrumentation and manufacturing tools, which could be applied to produce vast quantities of ultrasmall computers and various microscale and nanoscale robots.

Feynman’s idea remained largely undiscussed until the mid-1980s, when the MIT-educated engineer K Eric Drexler published “Engines of Creation”, a book that popularized the potential of molecular nanotechnology.[3]

Nano comes from the Greek word for dwarf. Nanotechnology is usually defined as the research and development of materials, devices, and systems exhibiting physical, chemical, and biological properties that are different from those found on a larger scale (matter smaller than the scale of things like molecules and viruses).[4]

Old rules don’t apply: small things behave differently. Researchers in nanoland are also making really, really small things with astonishing properties, like the carbon nanotube. Chris Papadopoulos, a nanotechnology researcher, says, “The carbon nanotube is the poster boy for nanotechnology”. It is a very thin sheet of graphite formed into a tube; its strength can be harnessed by embedding nanotubes in construction materials and, among other applications, nanotubes may be part of future improvements for high-performance aircraft.

In nanoland, tiny differences in size can add up to huge differences in function. Ted Sargent, author of The Dance of Molecules, says matter is tunable at the nanoscale. For example, change the length of a guitar string and you change the sound it makes; change the size of semiconductors called quantum dots, and you change the rainbow of colors obtainable from a single material. Sargent made a three-nanometer dot that glows blue, a four-nanometer dot that glows red, and a five-nanometer dot that emits infrared rays, or heat.
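The size tuning described above follows from quantum confinement: shrinking a dot adds a confinement term to its band gap, raising the emission energy. A minimal particle-in-a-box sketch illustrates the trend; the band gap and effective mass below are illustrative, CdSe-like assumptions, and the model is only a rough approximation (it ignores excitonic effects), so the absolute wavelengths should not be taken literally.

```python
# Particle-in-a-box estimate of quantum-dot emission: confinement adds
# roughly h^2 / (8 m* d^2) to the bulk band gap, so smaller dots emit
# at shorter (bluer) wavelengths. Material parameters are assumptions.
H = 6.626e-34             # Planck constant, J*s
C = 2.998e8               # speed of light, m/s
EV = 1.602e-19            # joules per electron-volt
E_GAP = 1.74              # assumed bulk band gap, eV (CdSe-like)
M_EFF = 0.12 * 9.109e-31  # assumed reduced effective mass, kg

def emission_wavelength_nm(diameter_nm):
    """Approximate emission wavelength of a dot of the given diameter."""
    d = diameter_nm * 1e-9
    confinement_ev = H**2 / (8 * M_EFF * d**2) / EV
    energy_ev = E_GAP + confinement_ev
    return H * C / (energy_ev * EV) * 1e9

for d in (3, 4, 5):
    print(f"{d} nm dot -> ~{emission_wavelength_nm(d):.0f} nm emission")
```

With these parameters the model reproduces only the qualitative blue shift with decreasing size; real dot spectra also depend on material, shape, and surface chemistry.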

Nanotechnology will affect everything, says William Atkinson, author of Nanocosm: Nanotechnology and the Big Changes Coming from the Inconceivably Small. “It’ll be like a blizzard; snowflakes whose weight you can’t detect can bring a city to a standstill. Nanotechnology is going to be like that.”

The unique quantum phenomena that happen at the nanoscale draw researchers from many different disciplines to the field, including medicine, chemistry, physics, engineering, and dentistry.

The scientists in the field of regenerative medicine and tissue engineering are continually looking for new ways to apply the principles of cell transplantation, material science, and bioengineering to construct biological substitutes that will restore and maintain normal function in diseased and injured tissue. Development of more refined means of delivering medications at therapeutic levels to specific sites is an important clinical issue for applications of such technology in medicine and dentistry.[5]


The field of “Nanomedicine” is the science and technology of diagnosing, treating, and preventing disease and traumatic injury, of relieving pain, and of preserving and improving human health, using nanoscale structured materials, biotechnology, and genetic engineering, and eventually complex machine systems and nanorobots.[5] It is perceived as embracing five main subdisciplines that in many ways overlap through common technical issues [Figure 1].

Figure 1

Dimensions in Nanomedicine


Nanodiagnostics

This is the use of nanodevices for early identification of disease or disease predisposition at the cellular and molecular level. In in vitro diagnostics, nanomedicine could increase the efficiency and reliability of diagnostics using human fluid or tissue samples, employing selective nanodevices to make multiple analyses at the subcellular scale. In in vivo diagnostics, nanomedicine could develop devices able to work inside the human body to identify the early presence of a disease, and to identify and quantify toxic molecules and tumor cells.

Regenerative medicine

This is an emerging multidisciplinary field that pursues the repair, improvement, and maintenance of cells, tissues, and organs by applying cell therapy and tissue engineering methods. With the help of nanotechnology it is possible to interact with cell components, to manipulate cell proliferation and differentiation, and to control the production and organization of extracellular matrices.

Present day nanomedicine exploits carefully structured nanoparticles such as dendrimers, carbon fullerenes (buckyballs), and nanoshells to target specific tissues and organs. These nanoparticles may serve as diagnostic and therapeutic antiviral, antitumor, or anticancer agents. Years ahead, complex nanodevices and even nanorobots will be fabricated, first of biological materials but later using more durable materials such as diamond to achieve the most powerful results.[6]

The human body is composed of molecules; hence, the availability of molecular nanotechnology will permit dramatic progress on medical problems, using molecular knowledge to maintain and improve human health at the molecular scale.

Applications in medicine

Within 10–20 years it should become possible to construct machines on the micrometer scale made up of parts on the nanometer scale. Subassemblies of such devices may include such useful robotic components as 100 nm manipulator arms, 10 nm sorting rotors for molecule-by-molecule reagent purification, and smooth superhard surfaces made of atomically flawless diamond.

Nanocomputers would assume the important task of activating, controlling, and deactivating such nanomechanical devices. Nanocomputers would store and execute mission plans, receive and process external signals and stimuli, communicate with other nanocomputers or external control and monitoring devices, and possess contextual knowledge to ensure safe functioning of the nanomechanical devices. Such technology has enormous medical and dental implications.

Programmable nanorobotic devices would allow physicians to perform precise interventions at the cellular and molecular level. Medical nanorobots have been proposed for gerontological[7] applications, for pharmaceutical research,[8] clinical diagnosis, and dentistry,[9] and also for mechanically reversing atherosclerosis, improving respiratory capacity, enabling near-instantaneous homeostasis, supplementing the immune system, rewriting or replacing DNA sequences in cells, repairing brain damage, and resolving gross cellular insults, whether caused by irreversible processes or by cryogenic storage of biological tissues.

Feynman offered the first known proposal for a nanorobotic surgical procedure to cure heart disease:[2] “A friend of mine (Albert R. Hibbs) suggests a very interesting possibility for relatively small machines. He says that, although it is a very wild idea, it would be interesting in surgery if you could swallow the surgeon. You put the mechanical surgeon inside the blood vessel and it goes into the heart and looks around. It finds out which valve is the faulty one and takes a little knife and slices it out—provided we can manufacture an object that maneuvers at that level. Other small machines might be permanently incorporated in the body to assist some inadequately functioning organs”.[2]

Many disease-causing culprits such as bacteria and viruses are nanosized, so it only makes sense that nanotechnology would offer us ways of fighting back. The ancient Greeks used silver to promote healing and prevent infection, but the treatment took a backseat when antibiotics came on the scene. Nucryst Pharmaceuticals (Canada) revived and improved this old cure by coating a burn-and-wound bandage with nanosized silver particles, which are more reactive than the bulk form of the metal. They penetrate into the skin and work steadily; as a result, burn victims can have their dressings changed just once a week.

Genomics and proteomics research is already rapidly elucidating the molecular basis of many diseases. This has brought new opportunities to develop powerful diagnostic tools able to identify genetic predisposition to disease. In the future, point-of-care diagnosis will be routinely used to identify patients requiring preventive medication, to select the most appropriate medication for individual patients, and to monitor response to treatment. Nanotechnology has a vital role to play in realizing cost-effective diagnostic tools.

Chris Backhouse is developing a lab-on-a-chip to give doctors immediate results from medical tests for cancer and viruses; it gets its information by analyzing the genetic material in individual cells. Advances in gene sequencing mean this can now be done quickly with tiny samples of body fluids or tissues such as blood, bone marrow, or tumors. The device can also detect the BK virus, a sign of trouble in patients who have had kidney transplants. Ultimately, Pilarski thinks, chip technology will be able to detect what kind of flu a person has, or even whether they have SARS or HIV.

Nanotechnology has the potential to offer invaluable advances, such as the use of nanocoatings to slow the release of asthma medication in the lungs, allowing people with asthma to experience longer periods of relief from symptoms after using inhalants. Essentially, nanotechnology makes drug particles in such a way that they don’t dissolve as fast.

Nanosensors developed for military use in recognizing airborne rogue agents and chemical weapons could be adapted to detect drugs and other substances in exhaled breath.[1] Basically, many drugs can be detected in breath, but the amount detected is related both to the dose taken and to how well the drug partitions between the blood and the breath. Monitoring drug abuse (marijuana, alcohol concentration, and the like), testing athletes for banned substances, and tracking individual drug-treatment programs are areas long overdue for breath-detection technologies, which may in future totally replace urine testing.

Currently, most legal and illegal drug overdoses have no specific way to be effectively neutralized; using nanoparticles as absorbents of toxic drugs is another area of medical nanoscience that is rapidly gaining momentum. The goal is to design nanostructures that effectively bind molecular entities for which there are currently no effective treatments. Nanosponges put into the bloodstream soak up toxic drug molecules, reducing the free amount in the blood, which in turn resolves the toxicity that existed before the nanosponges were introduced.

French and Italian researchers have come up with a completely new approach to render anticancer and antiviral nucleoside analogues significantly more potent. By linking the nucleoside analogues to squalene, a biochemical precursor to the whole family of steroids, the researchers observed the self-organization of the resulting amphiphilic molecules in water. These nanoassemblies exhibited superior anticancer activity in vitro in human cancer cells.

Laurie B Gower, PhD, has been researching bone formation and structure at the nanoscale level. She is examining biomimetic methods of constructing a synthetic bone graft substitute with a nanostructured architecture that matches natural bone, so that it would be accepted by the body and guide cells toward the mending of damaged bones. Biomineralization refers to minerals that are formed biologically, which have very different properties than geological minerals or lab-formed crystals. The crystal properties found in bone are manipulated at the nanoscale and are embedded within collagen fibers to create an interpenetrating organic–inorganic composite with unique mechanical properties. She foresees numerous implications of the material in the future of osteology.

Hicham Fenniri, a chemistry professor, is trying to make artificial joints act more like natural ones. Fenniri has made a nanotube coating for titanium hips and knees that is a very good mimic of collagen; as a result, the coating attracts and attaches more bone-forming cells (osteoblasts), which help bone grow more quickly than around an uncoated hip or knee.

There are ongoing attempts to build ‘medical microrobots’ for in vivo medical use.[10] In 2002, Ishiyama et al,[11] at Tohoku University developed tiny magnetically driven spinning screws intended to swim along veins and carry drugs to infected tissues, or even to burrow into tumors and kill them with heat. In 2005, Brad Nelson’s[12] team reported the fabrication of a microscopic robot small enough (approximately 200 µm) to be injected into the body through a syringe. They hope that this device or its descendants might someday be used to deliver drugs or perform minimally invasive eye surgery. Gordon’s[9,13] group at the University of Manitoba has also proposed magnetically controlled ‘cytobots’ and ‘karyobots’ for performing wireless intracellular and intranuclear surgery.

‘Respirocytes’, the first theoretical design study of a complete medical nanorobot ever published in a peer-reviewed journal, described a hypothetical artificial mechanical red blood cell or ‘respirocyte’ made of 18 billion precisely arranged structural atoms.[10,14] The respirocyte is a bloodborne spherical 1 µm diamondoid 1000-atmosphere pressure vessel with reversible molecule-selective surface pumps powered by endogenous serum glucose. This nanorobot would deliver 236 times more oxygen to body tissues per unit volume than natural red cells and would manage carbonic acidity, controlled by gas concentration sensors and an onboard nanocomputer.
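To get a feel for the numbers in the respirocyte design, the sketch below makes a back-of-envelope ideal-gas estimate of how many O2 molecules a 1 µm sphere at 1000 atmospheres could hold. This is only an order-of-magnitude upper bound, not a figure from the published design: real gases deviate strongly from ideality at 1000 atm, and part of the vessel volume is structural.

```python
import math

# Ideal-gas estimate: O2 molecules storable in a 1 um diameter sphere
# pressurized to 1000 atm at body temperature (n = PV / RT).
R = 8.314        # gas constant, J/(mol*K)
N_A = 6.022e23   # Avogadro's number, 1/mol
ATM = 101325.0   # pascals per atmosphere

diameter_m = 1e-6
pressure_pa = 1000 * ATM
temperature_k = 310.0  # ~body temperature

volume_m3 = (4 / 3) * math.pi * (diameter_m / 2) ** 3
moles = pressure_pa * volume_m3 / (R * temperature_k)
molecules = moles * N_A
print(f"~{molecules:.1e} O2 molecules")  # on the order of 10^10
```

Even this crude bound shows why high-pressure storage is attractive: a sub-femtoliter vessel can hold on the order of ten billion gas molecules.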

Nanorobotic microbivores

Artificial phagocytes called microbivores could patrol the bloodstream, seeking out and digesting unwanted pathogens including bacteria, viruses, or fungi.[10,15] Microbivores would achieve complete clearance of even the most severe septicemic infections in hours or less. The nanorobots do not increase the risk of sepsis or septic shock because the pathogens are completely digested into harmless sugars, amino acids, and the like, which are the only effluents from the nanorobot.

Surgical nanorobotics

A surgical nanorobot, programmed or guided by a human surgeon, could act as a semiautonomous on-site surgeon inside the human body when introduced through the vascular system or body cavities. Such a device could perform various functions, such as searching for pathology and then diagnosing and correcting lesions by nanomanipulation, coordinated by an onboard computer while maintaining contact with the supervising surgeon via coded ultrasound signals.[10]

The earliest forms of cellular nanosurgery are already being explored today. For example, a rapidly vibrating (100 Hz) micropipette with a <1 µm tip diameter has been used to completely cut dendrites from single neurons without damaging cell viability.[16] Axotomy of roundworm neurons was performed by femtosecond laser surgery, after which the axons functionally regenerated.[17] The femtolaser acts like a pair of nanoscissors, vaporizing tissue locally while leaving adjacent tissue unharmed. Femtosecond laser surgery has also been performed on individual chromosomes.[18]


Nanogenerators could make possible a new class of self-powered implantable medical devices, sensors, and portable electronics by converting mechanical energy from body movement, muscle stretching, or water flow into electricity.

Nanogenerators produce electric current by bending and then releasing zinc oxide nanowires, which are both piezoelectric and semiconducting. The nanowires can be grown on polymer-based films, and the use of flexible polymer substrates could one day allow portable devices to be powered by the movement of their users.

“Our bodies are good at converting chemical energy from glucose into the mechanical energy of our muscles,” Wang (faculty at Peking University and the National Center for Nanoscience and Technology of China) explained. “These nanogenerators can take mechanical energy and convert it to electrical energy for powering devices inside the body. This could open up tremendous possibilities for self-powered implantable medical devices.”


Nanodentistry will make possible the maintenance of comprehensive oral health by employing nanomaterials, biotechnology (including tissue engineering), and, ultimately, dental nanorobotics. New potential treatment opportunities in dentistry may include local anesthesia, dentition renaturalization, a permanent cure for hypersensitivity, complete orthodontic realignment during a single office visit, covalently bonded diamondized enamel, and continuous oral health maintenance using mechanical dentifrobots.

When the first micron-sized dental nanorobots can be constructed, they might use specific motility mechanisms to crawl or swim through human tissue with navigational precision, acquire energy, sense and manipulate their surroundings, achieve safe cytopenetration, and use any of a multitude of techniques to monitor, interrupt, or alter nerve impulse traffic in individual nerve cells in real time.

These nanorobot functions may be controlled by an onboard nanocomputer that executes preprogrammed instructions in response to local sensor stimuli. Alternatively, the dentist may issue strategic instructions by transmitting orders directly to in vivo nanorobots via acoustic signals or other means.

Inducing anesthesia

To induce oral anesthesia, one of the most common procedures in dental practice, dental professionals will instill a colloidal suspension containing millions of active analgesic micron-sized dental nanorobot ‘particles’ on the patient’s gingivae. After contacting the surface of the crown or mucosa, the ambulating nanorobots reach the dentin by migrating into the gingival sulcus and passing painlessly through the lamina propria or the 1–3-micron thick layer of loose tissue at the cementodentinal junction. On reaching the dentin, the nanorobots enter dentinal tubules, holes 1–4 microns in diameter, and proceed toward the pulp, guided by a combination of chemical gradients, temperature differentials, and even positional navigation, all under the control of the onboard nanocomputer as directed by the dentist.[9]

There are many pathways to choose from: near the cementoenamel junction (CEJ), midway between the junction and the pulp, and near the pulp. Tubule diameter increases toward the pulp, which may facilitate nanorobot movement, although circumpulpal tubule openings vary in number and size (tubule number density is roughly 22,000/mm² near the dentinoenamel junction, 37,000/mm² midway, and 48,000/mm² near the pulp). Tubule branching patterns, and the differences between primary and irregular secondary dentin, and between regular secondary dentin in young and old (sclerosing) teeth, may present a significant challenge to navigation.
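The tubule densities above imply the terrain becomes more crowded toward the pulp. A quick sketch, purely illustrative arithmetic on the figures quoted in the text, estimates the mean center-to-center tubule spacing as roughly 1/sqrt(areal density):

```python
import math

# Mean center-to-center spacing between dentinal tubules, estimated as
# 1/sqrt(areal density). Densities are the figures quoted in the text.
DENSITIES_PER_MM2 = {"near DEJ": 22_000, "midway": 37_000, "near pulp": 48_000}

def mean_spacing_um(density_per_mm2):
    """Approximate tubule spacing in micrometers for a given areal density."""
    return 1000.0 / math.sqrt(density_per_mm2)  # 1 mm = 1000 um

for depth, density in DENSITIES_PER_MM2.items():
    print(f"{depth}: ~{mean_spacing_um(density):.1f} um between tubule centers")
```

The spacing falls from roughly 7 µm near the DEJ to under 5 µm near the pulp, only a few tubule diameters, which is consistent with the navigation challenge described above.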

The presence of natural cells that are constantly in motion around and inside the teeth, including human gingival and pulpal fibroblasts, cementoblasts of the CDJ, bacteria inside dentinal tubules, odontoblasts near the pulp–dentin border, and lymphocytes within the pulp or lamina propria, suggests that such a journey should be feasible for cell-sized nanorobots of similar mobility.

Once installed in the pulp and having established control over nerve impulse traffic, the analgesic dental nanorobots may be commanded by the dentist to shut down all sensitivity in any particular tooth that requires treatment. When the dentist selects a tooth on the hand-held controller display, the selected tooth immediately becomes numb. After the oral procedure is completed, the dentist orders the nanorobots to restore all sensation, to relinquish control of nerve traffic, and to egress, followed by aspiration. Nanorobotic analgesics offer greater patient comfort and reduced anxiety, no needles, greater selectivity and controllability of the analgesic effect, fast and completely reversible switchable action, and avoidance of most side effects and complications.

Tooth repair

Nanorobotic manufacture and installation of a biologically autologous whole replacement tooth that includes both mineral and cellular components, that is, ‘complete dentition replacement therapy’ should become feasible within the time and economic constraints of a typical office visit through the use of an affordable desktop manufacturing facility, which would fabricate the new tooth in the dentist’s office.

Chen et al[19] took advantage of these latest developments in the area of nanotechnology to simulate the natural biomineralization process to create the hardest tissue in the human body, dental enamel, by using highly organized microarchitectural units of nanorod-like calcium hydroxyapatite crystals arranged roughly parallel to each other.

Dentin hypersensitivity

Naturally hypersensitive teeth have eight times the surface density of dentinal tubules, with diameters twice as large, compared with nonsensitive teeth. Reconstructive dental nanorobots, using native biological materials, could selectively and precisely occlude specific tubules within minutes, offering patients a quick and permanent cure.[9]

Tooth repositioning

Orthodontic nanorobots could directly manipulate the periodontal tissues, allowing rapid and painless tooth straightening, rotating and vertical repositioning within minutes to hours.

Tooth renaturalization

This procedure may become popular, providing perfect treatment methods for esthetic dentistry. This trend may begin with patients who desire to have their (1) old dental amalgams excavated and their teeth remanufactured with native biological materials, and (2) full coronal renaturalization procedures in which all fillings, crowns, and other 20th century modifications to the visible dentition are removed with the affected teeth remanufactured to become indistinguishable from original teeth.

Dental durability and cosmetics

Durability and appearance of teeth may be improved by replacing upper enamel layers with covalently bonded artificial materials such as sapphire or diamond,[20] which have 20–100 times the hardness and failure strength of natural enamel or contemporary ceramic veneers, along with good biocompatibility. Although pure sapphire and diamond are brittle and prone to fracture, they can be made more fracture resistant as part of a nanostructured composite material, possibly one incorporating embedded carbon nanotubes.

Nanorobotic dentifrice (dentifrobots) delivered by mouthwash or toothpaste could patrol all supragingival and subgingival surfaces at least once a day, metabolizing trapped organic matter into harmless and odorless vapors and performing continuous calculus debridement.

Properly configured dentifrobots could identify and destroy pathogenic bacteria residing in the plaque and elsewhere, while allowing the 500 species of harmless oral microflora to flourish in a healthy ecosystem. Dentifrobots would also provide a continuous barrier to halitosis, since bacterial putrefaction is the central metabolic process involved in oral malodor. With this kind of daily dental care available from an early age, conventional tooth decay and gingival diseases will disappear into the annals of medical history.

Potential benefits of nanotechnology are its ability to exploit the atomic or molecular properties of materials and the development of newer materials with better properties. Nanoproducts can be made either by building up particles from atomic elements or by using equipment to create mechanical nanoscale objects.

Nanotechnology has improved the properties of various kinds of fibers.[21] Polymer nanofibers, with diameters in the nanometer range, possess a larger surface area per unit mass and permit easier addition of surface functionalities compared with polymer microfibers.[21,22] Polymer nanofiber materials have been studied as drug delivery systems, scaffolds for tissue engineering, and filters. Carbon fibers with nanometer dimensions showed a selective increase in osteoblast adhesion, necessary for successful orthopedic/dental implant applications, owing to a high degree of nanometer-scale surface roughness.[23]

Nonagglomerated discrete nanoparticles are homogeneously distributed in resins or coatings to produce nanocomposites. The nanofiller used includes an aluminosilicate powder with a mean particle size of about 80 nm and a 1:4 molar ratio of alumina to silica. Advantages include superior hardness, flexural strength, modulus of elasticity, translucency and esthetic appeal, excellent color density, high polish and polish retention, and excellent handling properties[24] (e.g., Filtek Supreme Universal Restorative, a pure-nano composite).

Heliomolar, a microfilled composite resin: close examination of this composite suggests that a form of nanotechnology was in use years ago, yet never recognized.

Nanosolutions produce unique and dispersible nanoparticles that can be added to various solvents, paints, and polymers, in which they are dispersed homogeneously. Nanotechnology in bonding agents ensures homogeneity, so the operator can now be totally confident that the adhesive is perfectly mixed every time.

Nanofillers integrated into vinylsiloxanes produce a unique addition-siloxane impression material with better flow and improved hydrophilic properties, hence fewer voids at the margin, better model pouring, and enhanced detail precision.[25]


Nanotechnology is part of a predicted future in which dentistry and periodontal practice may become more high-tech and more effective, managing individual dental health on a microscopic level by enabling us to battle decay where it begins, with bacteria. Construction of a comprehensive research facility is crucial to meet the rigorous requirements for the development of nanotechnologies.

Researchers are looking at ways to use microscopic entities to perform tasks that are now done by hand or with equipment. This concept is known as nanotechnology. Tiny machines, known as nanoassemblers, could be controlled by computer to perform specialized jobs. The nanoassemblers could be smaller than a cell nucleus, so they could fit into places that are hard to reach by hand or with other technology. A computer could direct these tiny workers to destroy bacteria in the mouth that cause dental caries, or even to repair spots on the teeth where decay has set in.

Nanotechnology has tremendous potential, but social issues of public acceptance, ethics, regulation, and human safety must be addressed before molecular nanotechnology can fulfill its promise of providing high-quality dental care to the 80% of the world's population that currently receives no significant dental care.

The role of the periodontist will continue to evolve along the lines of currently visible trends. For example, cases of simple self-care neglect will become fewer, while cases involving cosmetic procedures, acute trauma, or rare disease conditions will become relatively more commonplace.

Trends in oral health and disease may also change the focus of specific diagnostic and treatment modalities. Increasingly effective preventive approaches will reduce the need for cures, making prevention the viable approach for most patients.

Diagnosis and treatment will be customized to match the preferences and genetics of each patient. Treatment options will become more numerous and exciting. All this will demand, even more so than today, the technical abilities and professional skills that are the hallmark of the contemporary dentist and periodontist. Developments are expected to accelerate significantly.

Nanometer-scale technologies such as nanotubes could be used to administer drugs more precisely. Such technology should be able to target specific cells in a patient suffering from cancer or other life-threatening conditions. Toxic drugs used to fight these illnesses would become much more direct and consequently less harmful to the body.


The visions described in this article may sound unlikely, implausible, or even heretical. Yet the theoretical and applied research needed to turn them into reality is progressing rapidly. Nanotechnology will change dentistry, healthcare, and human life more profoundly than many developments of the past. As with all technologies, nanotechnology carries significant potential for misuse and abuse, on a scale and scope never seen before. However, it also has the potential to bring about significant benefits, such as improved health, better use of natural resources, and reduced environmental pollution. These truly are the days of miracle and wonder.

Current work focuses on recent developments, particularly nanoparticles and nanotubes for periodontal management. The materials developed from these, such as hollow nanospheres, core-shell structures, nanocomposites, nanoporous materials, and nanomembranes, will play a growing role in materials development for the dental industry.

Once nanomechanics are available, the ultimate dream of every healer, medicine man, and physician throughout recorded history will at last become a reality. Programmable and controllable microscale robots, composed of nanoscale parts fabricated to nanometer precision, will allow medical doctors to execute curative and reconstructive procedures in the human body at the cellular and molecular levels. Nanomedical physicians of the 21st century will still make good use of the body's natural healing powers and homeostatic mechanisms, because, all else being equal, those interventions are best that intervene least.


Source of Support: Nil

Conflict of Interest: None declared.


  1. Castoro R. UF expects big things from the science of small, nanotechnology. Think Small. The POST 02-2005.
  2. Feynman RP. There’s plenty of room at the bottom. Eng Sci. 1960;23:22–36.
  3. Drexler KE. Engines of creation: The coming era of nanotechnology. New York: Anchor Press; 1986. pp. 99–129.
  4. Freitas RA Jr. Nanomedicine, Vol. 1: Basic capabilities. Georgetown, TX: Landes Bioscience; 1999. Available from: http://www.nanomedicine.com [last accessed 2000 Sep 26].
  5. European Science Foundation. Forward look on nanomedicine. 2005.
  6. Freitas RA Jr. Current status of nanomedicine and medical nanorobotics. J Comput Theor Nanosci. 2005;2:1–25.
  7. Fahy GM. Short-term and long-term possibilities for interventive gerontology. Mt Sinai J Med. 1991;58:328–40. [PubMed]
  8. Fahy GM. Molecular nanotechnology and its possible pharmaceutical implications. In: Bezold C, Halperin JA, Eng JL, editors. 2020 visions: Health care information standards and technologies. Rockville, MD: U.S. Pharmacopeial Convention; 1993. pp. 152–9.
  9. Freitas RA Jr. Nanodentistry. J Am Dent Assoc. 2000;131:1559–66. [PubMed]
  10. Freitas RA Jr. Nanotechnology, nanomedicine and nanosurgery. Int J Surg. 2005;3:243–6. [PubMed]
  11. Ishiyama K, Sendoh M, Arai KI. Magnetic micromachines for medical applications. J Magn Magn Mater. 2002;242:1163–5.
  12. Nelson B, Rajamani R. Biomedical micro-robotic system. In: Eighth International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI); 2005 Oct 26-29; Palm Springs, CA. Available from: http://www.miccai2005.org
  13. Chrusch DD, Podaima BW, Gordon R. Cytobots: Intracellular robotic micromanipulators. In: Kinsner W, Sebak A, editors. Conference proceedings, 2002 IEEE Canadian Conference on Electrical and Computer Engineering; 2002 May 12-15; Winnipeg, Canada. Winnipeg: IEEE; 2002.
  14. Freitas RA Jr. Exploratory design in medical nanotechnology: A mechanical artificial red cell. Artif Cells Blood Substit Immobil Biotechnol. 1998;26:411–30. [PubMed]
  15. Freitas RA Jr. Microbivores: Artificial mechanical phagocytes using digest and discharge protocol. J Evol Technol. 2005;14:1–52.
  16. Kirson ED, Yaari Y. A novel technique for micro-dissection of neuronal processes. J Neurosci Methods. 2000;98:119–22. [PubMed]
  17. Yanik MF, Cinar H, Cinar HN, Chisholm AD, Jin Y, Ben-Yakar A. Neurosurgery: Functional regeneration after laser axotomy. Nature. 2004;432:822. [PubMed]
  18. Konig K, Riemann I, Fischer P, Halbhuber KJ. Intracellular nanosurgery with near-infrared femtosecond laser pulses. Cell Mol Biol. 1999;45:195–201. [PubMed]
  19. Chen HF, Clarkson BH, Sun K, Mansfield JF. Self-assembly of synthetic hydroxyapatite nanorods into enamel prism-like structures. J Colloid Interface Sci. 2005;188:97–103. [PubMed]
  20. Shin SY, Park HN, Kim KH. Biologic evaluation of chitosan nanofiber membrane for guided bone regeneration. J Periodontol. 2005;76:1778–84. [PubMed]
  21. Reifman EM. Diamond teeth. In: Crandall BC, editor. Nanotechnology: Molecular speculations on global abundance. Cambridge, MA: MIT Press; 1996. pp. 81–6.
  22. Jayaraman K, Kotaki M, Zhang Y, Mo X, Ramakrishna S. Recent advances in polymer nanofibers. J Nanosci Nanotechnol. 2004;4:52–65. [PubMed]
  23. Katti DS, Robinson KW, Ko FK, Laurencin CT. Bioresorbable nanofiber-based systems for wound healing and drug delivery: Optimization of fabrication parameters. J Biomed Mater Res. 2004;70:282–96. [PubMed]
  24. Price RL, Ellison K, Haberstroh KM, Webster TJ. Nanometer surface roughness increases select osteoblast adhesion on carbon nanofiber compacts. J Biomed Mater Res. 2004;70:129–38. [PubMed]
  25. The A to Z of nanotechnology and nanomaterials. The Institute of Nanotechnology, AZoM.com Ltd; 2003.


The MIT-Harvard Center for Cancer Nanotechnology Excellence is a collaborative effort among MIT, Harvard University, Harvard Medical School, Massachusetts General Hospital, and Brigham and Women’s Hospital. It is one of eight Centers of Cancer Nanotechnology Excellence awarded by the National Cancer Institute (NCI), part of the National Institutes of Health (NIH). It focuses on developing a diversified portfolio of nanoscale devices for targeted delivery of cancer therapies, diagnostics, non-invasive imaging, and molecular sensing. In addition to general oncology applications, the Consortium focuses on prostate, brain, lung, ovarian, and colon cancer.

Examples of projects that the Consortium is undertaking include the development of:

  • Targeted nanoparticles for treating prostate cancer
  • Polymer nanoparticles and quantum dots for siRNA delivery
  • Next-generation magnetic nanoparticles for multimodal, non-invasive tumor imaging
  • Implantable, biodegradable microelectromechanical systems (MEMS), also known as lab-on-a-chip devices, for in vivo molecular sensing of tumor-associated biomolecules
  • Low-toxicity nanocrystal quantum dots for biomedical sensing

In addition to drawing on the scientific and technological expertise of its investigators, the Consortium uses available facilities for toxicology testing and the extensive collection of mouse models of cancer at the collaborating institutions.

  1. Nanotechnology and Cancer

     Nanotechnology is one of the most popular areas of scientific research, especially with regard to medical applications. We’ve already discussed some of the new detection methods that should bring about cheaper, faster and less invasive cancer diagnoses. But once the diagnosis occurs, there’s still the prospect of surgery, chemotherapy or radiation treatment to destroy the cancer. Unfortunately, these treatments can carry serious side effects. Chemotherapy can cause a variety of ailments, including hair loss, digestive problems, nausea, lack of energy and mouth ulcers.

     But nanotechnologists think they have an answer for treatment as well, and it comes in the form of targeted drug therapies. If scientists can load their cancer-detecting gold nanoparticles with anticancer drugs, they could attack the cancer exactly where it lives. Such a treatment means fewer side effects and less medication used. Nanoparticles also carry the potential for targeted and time-release drugs. A potent dose of drugs could be delivered to a specific area but engineered to release over a planned period to ensure maximum effectiveness and the patient’s safety.

     These treatments aim to take advantage of the power of nanotechnology and the voracious tendencies of cancer cells, which feast on everything in sight, including drug-laden nanoparticles. One experiment of this type used modified bacteria cells that were 20 percent the size of normal cells. These cells were equipped with antibodies that latched onto cancer cells before releasing the anticancer drugs they contained. Another used nanoparticles as a companion to other treatments. These particles were sucked up by cancer cells and the cells were then heated with a magnetic field to weaken them. The weakened cancer cells were then much more susceptible to chemotherapy.

     It may sound odd, but the dye in your blue jeans or your ballpoint pen has also been paired with gold nanoparticles to fight cancer. This dye, known as phthalocyanine, reacts with light. The nanoparticles take the dye directly to cancer cells while normal cells reject the dye. Once the particles are inside, scientists “activate” them with light to destroy the cancer. Similar therapies have existed to treat skin cancers with light-activated dye, but scientists are now working to use nanoparticles and dye to treat tumors deep in the body.

     From manufacturing to medicine to many types of scientific research, nanoparticles are now rather common, but some scientists have voiced concerns about their negative health effects. Nanoparticles’ small size allows them to infiltrate almost anywhere. That’s great for cancer treatment but potentially harmful to healthy cells and DNA. There are also questions about how to dispose of nanoparticles used in manufacturing or other processes. Special disposal techniques are needed to prevent harmful particles from ending up in the water supply or in the general environment, where they’d be impossible to track.

     Gold nanoparticles are a popular choice for medical research, diagnostic testing and cancer treatment, but there are numerous types of nanoparticles in use and in development. Bill Hammack, a professor of chemical engineering at the University of Illinois, warned that nanoparticles are “technologically sweet” [Source: Marketplace]. In other words, scientists are so wrapped up in what they can do, they’re not asking if they should do it. The Food and Drug Administration has a task force on nanotechnology, but as of yet, the government has exerted little oversight or regulation.
  2. The U.S. Food and Drug Administration (FDA) regulates a wide range of products, including foods, cosmetics, drugs, devices, veterinary products, and tobacco products, some of which may utilize nanotechnology or contain nanomaterials. Nanotechnology allows scientists to create, explore, and manipulate materials measured in nanometers (billionths of a meter). Such materials can have chemical, physical, and biological properties that differ from those of their larger counterparts. Guidance documents issued:
    • On June 24, 2014, FDA issued three final guidance documents related to the use of nanotechnology in regulated products, including cosmetics and food substances.
    • On August 5, 2015, FDA issued one final guidance document related to the use of nanotechnology in food for animals.
      • FDA Guidance on Nanotechnology
        1. Nanotechnology Fact Sheet
        2. FDA issues three final guidances related to nanotechnology applications in regulated products, including cosmetics and food substances (June 2014)
        3. FDA issues final guidance on the use of nanotechnology in food for animals (August 2015)
        4. Nanotechnology in Therapeutics: A Focus on Nanoparticles as a Drug Delivery System
           Suwussa Bamrungsap; Zilong Zhao; Tao Chen; Lin Wang; Chunmei Li; Ting Fu; Weihong Tan. Nanomedicine. 2012;7(8):1253-1271.

           Abstract
           Continuing improvement in the pharmacological and therapeutic properties of drugs is driving the revolution in novel drug delivery systems. In fact, a wide spectrum of therapeutic nanocarriers has been extensively investigated to address this emerging need. Accordingly, this article will review recent developments in the use of nanoparticles as drug delivery systems to treat a wide variety of diseases. Finally, we will introduce challenges and future nanotechnology strategies to overcome limitations in this field.

           Introduction
           Nanotechnology involves the engineering of functional systems at the molecular scale. Such systems are characterized by unique physical, optical and electronic features that are attractive for disciplines ranging from materials science to biomedicine. One of the most active research areas of nanotechnology is nanomedicine, which applies nanotechnology to highly specific medical interventions for the prevention, diagnosis and treatment of diseases.[1,2,401] The surge in nanomedicine research during the past few decades is now translating into considerable commercialization efforts around the globe, with many products on the market and a growing number in the pipeline. Currently, nanomedicine is dominated by drug delivery systems, accounting for more than 75% of total sales.[3]

          Nanomaterials fall into a size range similar to proteins and other macromolecular structures found inside living cells. As such, nanomaterials are poised to take advantage of existing cellular machinery to facilitate the delivery of drugs. Nanoparticles (NPs) containing encapsulated, dispersed, absorbed or conjugated drugs have unique characteristics that can lead to enhanced performance in a variety of dosage forms. When formulated correctly, drug particles are resistant to settling and can have higher saturation solubility, rapid dissolution and enhanced adhesion to biological surfaces, thereby providing rapid onset of therapeutic action and improved bioavailability. In addition, the vast majority of molecules in a nanostructure reside at the particle surface,[4] which maximizes the loading and delivery of cargos, such as therapeutic drugs, proteins and polynucleotides, to targeted cells and tissues. Highly efficient drug delivery, based on nanomaterials, could potentially reduce the drug dose needed to achieve therapeutic benefit, which, in turn, would lower the cost and/or reduce the side effects associated with particular drugs. Furthermore, NP size and surface characteristics can be easily manipulated to achieve both passive and active drug targeting. Site-specific targeting can be achieved by attaching targeting ligands, such as antibodies or aptamers, to the surface of particles, or by using guidance in the form of magnetic NPs. NPs can also control and sustain release of a drug during transport to, or at, the site of localization, altering drug distribution and subsequent clearance of the drug in order to improve therapeutic efficacy and reduce side effects.
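The claim that most molecules in a nanostructure reside at or near the particle surface follows from simple sphere geometry: the fraction of a sphere's volume within a thin shell of thickness t at radius r is 1 - (1 - t/r)^3. A minimal sketch (the 1 nm shell thickness is an assumed molecular length scale, not a figure from the article):

```python
def surface_fraction(radius_nm, shell_nm=1.0):
    """Fraction of a sphere's volume lying within a surface shell of
    thickness shell_nm: 1 - (1 - t/r)**3."""
    return 1.0 - (1.0 - shell_nm / radius_nm) ** 3

# A 5 nm-radius particle has nearly half its volume in the outer 1 nm,
# whereas a 1 um particle has well under 1% there.
for r in (5, 20, 100, 1000):
    print(f"r = {r:4d} nm -> {surface_fraction(r):5.1%} near the surface")
```

The steep growth of this fraction as size shrinks is what maximizes cargo loading at the nanoscale.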

          Nanotechnology could be strategically implemented in developing new drug delivery systems that can expand drug markets. Such a plan would be applied to drugs selected for full-scale development based on their safety and efficacy data, but which fail to reach clinical development because of poor biopharmacological properties, for example, poor solubility or poor permeability across the intestinal epithelium, situations that translate into poor bioavailability and undesirable pharmacokinetic properties.[5] The new drug delivery methods are expected to enable pharmaceutical companies to reformulate existing drugs on the market, thereby extending the lifetime of products and enhancing the performance of drugs by increasing effectiveness, safety and patient adherence, and ultimately reducing healthcare costs.[6–8]

          Commercialization of nanotechnology in pharmaceutical and medical science has made great progress. Taking the USA alone as an example, at least 15 new pharmaceuticals approved since 1990 have utilized nanotechnology in their design and drug delivery systems. In each case, both product development and safety data reviews were conducted on a case-by-case basis, using the best available methods and procedures, with an understanding that postmarketing vigilance for safety issues would be ongoing. Some representative examples of therapeutic nanocarriers on the market are briefly described in Table 1.

          In this review, we focus mainly on the application of nanotechnology to drug delivery and highlight several areas of opportunity where current and emerging nanotechnologies could enable novel classes of therapeutics. We look at challenges and general trends in pharmaceutical nanotechnology, and we also explore nanotechnology strategies to overcome limitations in drug delivery. However, this article can only serve to provide a glimpse into this rapidly evolving field, both now and what may be expected in the future.

          Nanocarriers & Their Applications

          Various nanoforms have been attempted as drug delivery systems, varying from biological substances, such as albumin, gelatin and phospholipids for liposomes, to chemical substances, such as various polymers and solid metal-containing NPs (Figure 1). Polymer–drug conjugates, which have high size variation, are normally not considered as NPs. However, since their size can still be controlled within 100 nm, they are also included in these nanodelivery systems. These nanodelivery systems can be designed to have drugs absorbed or conjugated onto the particle surface, encapsulated inside the polymer/lipid or dissolved within the particle matrix. As a consequence, drugs can be protected from a critical environment or their unfavorable biopharmaceutical properties can be masked and replaced with the properties of nanomaterials. In addition, nanocarriers can be accumulated preferentially at tumor, inflammatory and infectious sites by virtue of the enhanced permeability and retention (EPR) effect. The EPR effect involves site-specific characteristics, not associated with normal tissues or organs, thus resulting in increased selective targeting. Based on those properties, nanodrug delivery systems offer many advantages,[9–11] including:


          Figure 1.

          Some nanotechnology-based drug delivery platforms, including a nanocrystal, liposome, polymeric micelle, protein-based nanoparticle, dendrimer, carbon nanotube and polymer–drug conjugate.
          NP: Nanoparticle.

          • Improving the stability of hydrophobic drugs, rendering them suitable for administration;
          • Improving biodistribution and pharmacokinetics, resulting in improved efficacy;
          • Reducing adverse effects as a consequence of favored accumulation at target sites;
          • Decreasing toxicity by using biocompatible nanomaterials.

          By adopting nanotechnology, fundamental changes in drug production and delivery are expected to affect approximately half of the worldwide drug production in the next decade, totaling approximately US$380 billion in revenue.[12] Next, several main nanocarriers are briefly discussed.


          One of the most obvious and important nanotechnology tools for product development is the opportunity to convert existing drugs with poor water solubility and dissolution rate into readily water-soluble dispersions by converting them into nanosized drugs.[13,14] In other words, the drug itself may be formulated at a nanoscale such that it can function as its own ‘carrier’.[15] Many approaches have been studied, but the most practical strategy involves reducing the drug particle size to nanometer range and stabilizing the drug NP surface with a layer of nonionic surfactants or polymeric macromolecules.[16] By reducing the particle size of the active pharmaceutical ingredient, the drug’s surface area is increased considerably, thereby improving its solubility and dissolution and consequently increasing both the maximum plasma concentration and area under the curve. Once the drug is nanosized, it can be formulated into various dosage forms, such as oral, nasal and injectable. These nanocrystal drugs may have advantages over association colloids (micelle solutions) because the level of surfactant per amount of drug can be greatly minimized, using only the amount that is necessary to stabilize the solid–fluid interface.[15]
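The solubility argument above rests on how specific surface area scales with particle size: for monodisperse spheres it is 6/(rho*d), so a tenfold reduction in diameter yields a tenfold increase in surface area per gram. A quick sketch with illustrative density and sizes (not values from the article):

```python
def specific_surface_area(diameter_m, density_kg_m3):
    """Surface area per unit mass of monodisperse spheres: 6 / (rho * d)."""
    return 6.0 / (density_kg_m3 * diameter_m)

RHO = 1300.0  # assumed drug-crystal density in kg/m^3 (illustrative)
micron_sized = specific_surface_area(2e-6, RHO)   # 2 um particles
nano_sized = specific_surface_area(200e-9, RHO)   # 200 nm particles

print(f"2 um particles:   {micron_sized:8.0f} m^2/kg")
print(f"200 nm particles: {nano_sized:8.0f} m^2/kg")
print(f"gain: {nano_sized / micron_sized:.0f}x")  # area scales as 1/d, so 10x
```

Because dissolution rate is roughly proportional to exposed surface area (Noyes-Whitney relation), this geometric gain is what drives the faster dissolution and higher plasma exposure described above.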

          Furthermore, recent studies have shown that external agents, such as surfactants, for nanocrystal drug delivery can be eliminated. For example, a method was recently developed for the delivery of a hydrophobic photosensitizing anticancer drug in its pure form using nanocrystals.[17] Synthesized by the reprecipitation method, the resulting drug nanocrystals were stable in aqueous dispersion, without the necessity of any additional stabilizer. These nanocrystals are uniform in size distribution with an average diameter of 110 nm. Such nanocrystals were efficiently taken up by tumor cells in vitro, and irradiation of such cells with visible light (665 nm) resulted in significant cell death. An in vivo study of the nanocrystal drug also showed significant efficacy compared with the conventional surfactant-based delivery system. These results illustrate the potential of pure drug nanocrystals for photodynamic therapy. As shown in Table 1, a number of well-known drugs have already been commercialized using the nanocrystal approach.

          Organic Nanoplatforms

          Liposomes Liposomes are self-assembled artificial vesicles developed from amphiphilic phospholipids. These vesicles consist of a spherical bilayer structure surrounding an aqueous core domain, and their size can vary from 50 nm to several micrometers. Liposomes have attractive biological properties, including general biocompatibility, biodegradability, isolation of drugs from the surrounding environment and the ability to entrap both hydrophilic and hydrophobic drugs. Through the addition of agents to the lipid membrane, or the alteration of the surface chemistry, liposome properties, such as size, surface charge and functionality, can be easily tuned.

          Liposomes are the most clinically established nanosystems for drug delivery. Their efficacy has been demonstrated in reducing systemic effects and toxicity, as well as in attenuating drug clearance.[18,19] Modified liposomes at the nanoscale have been shown to have excellent pharmacokinetic profiles for the delivery of DNA, antisense oligonucleotide, siRNA, proteins and chemotherapeutic agents.[20] Examples of marketed liposomal drugs with higher efficacy and lower toxicity than their nonliposomal analogues are listed in Table 1. Doxorubicin is an anticancer drug that is widely used for the treatment of various types of tumors. It is a highly toxic compound affecting not only tumor tissue, but also heart and kidney, a fact that limits its therapeutic applications. However, the development of doxorubicin enclosed in liposomes culminated in an approved nanomedical drug delivery system.[21,22] This novel liposomal formulation has resulted in reduced delivery of doxorubicin to the heart and renal system, while elevating the accumulation in tumor tissue[23,24] by the EPR effect. Furthermore, a number of liposomal drugs are currently being investigated, including anticancer agents, such as camptothecin[25] and paclitaxel (PTX),[26] as well as antibiotics, such as vancomycin[27] and amikacin.[28]

          Liposomes are also subject to some limitations, including low encapsulation efficiency, fast burst release of drugs, poor storage stability and lack of tunable triggers for drug release.[29] Furthermore, since liposomes cannot usually permeate cells, drugs are released into the extracellular fluid.[30] As such, many efforts have focused on improving their stability and increasing circulation half-life for effective targeting or sustained drug action.[19,31] Surface modification is one method of conferring stability and structural integrity against a harsh bioenvironment after oral or parenteral administration.[32] Surface modification can be achieved by attaching polyethylene glycol (PEG) units, which form a protective layer over the liposome surface (known as stealth liposomes) to slow down liposome recognition, or by attaching other polymers, such as poly(methacrylic acid-co-cholesteryl methacrylate)[33] and poly(acrylic acid),[34] to improve the circulation time of liposomes in blood. To overcome the fast burst release of chemotherapeutic drugs from liposomes, drugs such as doxorubicin may be encapsulated in the liposomal aqueous phase by an ammonium sulphate gradient.[35] This strategy enables stable drug entrapment with negligible drug leakage during circulation, even after prolonged residence in the blood stream.[36] Further efforts to improve control over the rate of release and drug bioavailability have been made by designing liposomes whose release is environmentally triggered. Accordingly, drug release from liposome-responsive polymers, or hydrogels, is triggered by a change in pH, temperature, radiofrequency or magnetic field.[37] Liposomes have also been conjugated with active-targeting ligands, such as antibodies[38–40] or folate, for target-specific drug delivery.[41]

          Polymeric NPs Polymeric NPs are colloidal particles with a size range of 10–1000 nm, and they can be spherical, branched or core–shell structures. They have been fabricated using biodegradable synthetic polymers, such as polylactide–polyglycolide copolymers, polyacrylates and polycaprolactones, or natural polymers, such as albumin, gelatin, alginate, collagen and chitosan.[42] Various methods, such as solvent evaporation, spontaneous emulsification, solvent diffusion, salting out/emulsification-diffusion, use of supercritical CO2 and polymerization, have been used to prepare the NPs.[43] Advances in polymer science and engineering have resulted in the development of smart polymers (stimuli-sensitive polymers), which can change their physicochemical properties in response to environmental signals. Physical (temperature, ultrasound, light, electricity and mechanical stress), chemical (pH and ionic strength) and biological signals (enzymes and biomolecules) have been used as triggering stimuli. Various monomers sensitive to specific stimuli can be tailored into a homopolymer that responds to a certain signal, or into copolymers that answer multiple stimuli. The versatility of polymer sources and their easy combination make it possible to tune polymer sensitivity in response to a given stimulus within a narrow range, leading to more accurate and programmable drug delivery.

          Polymeric nanocarriers can be categorized based on three drug-incorporation mechanisms. The first includes polymeric carriers that use covalent chemistry for direct drug conjugation (e.g., linear polymers). The second group includes hydrophobic interactions between drugs and nanocarriers (e.g., polymeric micelles from amphiphilic block copolymers). Polymeric nanocarriers in the third group include hydrogels, which offer a water-filled depot for hydrophilic drug encapsulation.

          Polymer–Drug Conjugates (Prodrugs) Many polymer–drug conjugates have been developed since the first combination reported in the 1970s.[44,45] Conjugation of macromolecular polymers to drugs can significantly enhance the blood circulation time of the drugs. In particular, protein or peptide drugs, which can be readily digested inside the human body, can maintain their activity through conjugation of the water-soluble polymer PEG (PEGylation). For example, it was reported that PEGylated L-asparaginase increased its plasma half-life by up to 357 h.[46] Without PEG, the half-life of natural L-asparaginase is only 20 h. In addition to PEGylation of proteins, small-molecule anticancer drugs can also be PEGylated to improve their pharmacokinetics for cancer therapy. For instance, PEG-camptothecin (PROTHECAN®) has entered clinical trials for cancer therapy.[47]
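The half-life figures quoted above (roughly 20 h for native L-asparaginase versus up to 357 h when PEGylated) can be put in perspective with a simple one-compartment, first-order elimination model. This is a textbook simplification for illustration, not the analysis used in the cited study:

```python
import math

def fraction_remaining(t_hours, half_life_hours):
    """One-compartment first-order elimination: C(t)/C0 = exp(-ln2 * t / t_half)."""
    return math.exp(-math.log(2.0) * t_hours / half_life_hours)

# Half-lives from the text: ~20 h native, up to 357 h PEGylated.
for t in (24, 72, 168):  # 1 day, 3 days, 1 week
    print(f"t = {t:3d} h  native: {fraction_remaining(t, 20):6.1%}  "
          f"PEGylated: {fraction_remaining(t, 357):6.1%}")
```

Under this model the native enzyme is essentially gone within a week, while the PEGylated form still retains most of its initial concentration, which is the practical meaning of the enhanced circulation time described above.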

          Increasing the otherwise poor solubility of some drugs is another important function of polymer–drug conjugation. Specifically, conjugating water-soluble polymers to functional groups that already exist in the drug structure can significantly enhance the water solubility of the drug. Recently, a new category of polymer–drug conjugates called brush polymer–drug conjugates were prepared by ring-opening metathesis copolymerization.[48] In this report, as PEG was employed as the brush polymer side chains, the conjugates exhibited significant water solubility. However, polymer–drug conjugates require chemical modification of the existing drugs; as a consequence, their production could cost more, and additional purification steps are needed. Moreover, polymers that are chemically conjugated with drugs are often considered new chemical entities owing to a pharmacokinetic profile distinct from that of the parent drugs. As such, additional US FDA approval is required, even though the parent drug has already been approved. Despite the variety of novel drug targets and sophisticated chemistries available, only four drugs (doxorubicin, camptothecin, PTX and platinate) and four polymers (N-[2-hydroxylpropyl]methacrylamide [HPMA] copolymer, poly-L-glutamic acid [PGA], PEG and dextran) have been used to develop polymer–drug conjugates.[49–54] In addition to the commercially available polymer drugs listed in Table 1 , PGA-PTX (Xyotax™, CT-2103; Cell Therapeutics Inc./Chugai Pharmaceutical Co. Ltd.),[55] PGA-camptothecin (CT-2106; Cell Therapeutics Inc.)[56] and HPMA–doxorubicin (PK1/FCE-28068; Pfizer Inc./Cancer Research Campaign)[57] are now in clinical trials. As an example, PK1 has been evaluated in clinical trials as an anticancer agent, and a Phase I evaluation has been completed in patients with several types of tumors resistant to prior therapy, such as chemotherapy or radiation. 
However, although the clinical results for HPMA–doxorubicin conjugates look promising, PEG-based conjugation remains the gold-standard in the field of polymeric drug delivery. In addition, polymer–drug conjugates are still limited by their nonbiodegradability and the fate of polymers after in vivo administration.[58]

          Polymeric Micelles Polymeric micelles are formed when amphiphilic surfactants or polymeric molecules spontaneously associate in aqueous medium to form core–shell structures. The inner core of a micelle, which is hydrophobic, is surrounded by a shell of hydrophilic polymers, such as PEG.[59] Their hydrophobic core serves as a reservoir for poorly water-soluble and amphiphilic drugs; at the same time, their hydrophilic shell stabilizes the core, prolongs circulation time in blood and increases accumulation in tumor tissues.[41] So far, a large variety of drug molecules have been incorporated into polymeric micelles, either by physical encapsulation[60,61] or covalent attachment.[62] Genexol-PM® (Samyang, Korea), PEG-poly(D,L-lactide)-PTX, employs cremophor-free polymeric micelles loaded with PTX drugs. It was found to have a three-times higher maximum tolerated dose in nude mice and two- to threefold higher levels of biodistribution, compared with those of pristine PTX, in various tissues, including tumors. A Phase I clinical trial in patients showed that Genexol-PM is superior to conventional PTX for the delivery of higher doses without additional toxicity.[63] Recently, a series of novel dual-targeting micellar delivery systems were developed based on the self-assembled hyaluronic acid-octadecyl (HA-C18) copolymer and folic acid-conjugated HA-C18 (FA-HA-C18). PTX was successfully encapsulated by HA-C18 and FA-HA-C18 polymeric micelles, with a high encapsulation efficiency of 97.3%.
Since these copolymers are biodegradable, biocompatible and cell-specifically targetable, they are promising nanostructured carriers for hydrophobic anticancer drugs.[64] In addition, stimuli-responsive drug-loaded micelles[65–69] and multifunctional polymeric micelles containing imaging as well as therapeutic agents[70–72] are under active investigation and have the potential to become the mainstream of polymeric drug development in the near future. Furthermore, computer simulation can make the experimental preparation of drug-loaded polymeric micelles more efficient by providing insight into the formation of mesoscopic structures, serving as a complement to experiments.[73]
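Encapsulation efficiency figures such as the 97.3% quoted above are simply the ratio of drug actually entrapped in the carrier to the total drug initially added. A minimal sketch of the calculation, using hypothetical masses chosen only to reproduce that figure:

```python
# Encapsulation efficiency: EE% = (encapsulated drug / total drug added) * 100.
# The masses below are hypothetical, picked to match the 97.3% reported
# for PTX in the HA-C18 micelles discussed in the text.
drug_added_mg = 10.0          # hypothetical total PTX fed into the formulation
drug_encapsulated_mg = 9.73   # hypothetical PTX recovered inside the micelles

ee_percent = 100 * drug_encapsulated_mg / drug_added_mg
print(round(ee_percent, 1))   # -> 97.3
```

The same ratio is reported for essentially every loaded-carrier system in this review (micelles, hydrogels, liposomes); only the measurement of the encapsulated fraction differs.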

          Hydrogel NPs

          In recent years, hydrogel NPs have gained considerable attention as one of the most promising nanoparticulate drug delivery systems owing to their unique properties. Hydrogels are cross-linked networks of hydrophilic polymers that can absorb and retain more than 20% of their weight in water, while maintaining the distinct 3D structure of the polymer network. The swelling properties, network structure, permeability and mechanical stability of hydrogels can be controlled by external stimuli or physiological parameters.[74–78] Hydrogels have been extensively studied for controlled release of therapeutics, stimuli-responsive release and applications in biological implants.[75,79–81] However, the hydration response to changes in stimuli in most hydrogel systems is too slow for therapeutic applications. To overcome this limitation, further development of hydrogel structures at the micro- and nano-scale is needed.[82] Recent reports have shown progress in micro- and nanogels of poly-N-isopropylacrylamide with ultrafast responses and attractive rheological properties.[83,84] Ding et al. demonstrated that cisplatin-loaded polyacrylic acid hydrogel NPs could be implanted and plastered on tumor tissue.[85] This hydrogel system exhibited superior efficacy in impeding tumor growth and prolonging lifespan in mice. The in vivo biodistribution assay also demonstrated that the hydrogel implant results in high concentration and retention of the drug. A multifunctional hybrid hydrogel was developed by combining the magnetic properties of NPs with the typical characteristics of hydrogels.
These hybrid hydrogels can be loaded with large amounts of drug and transported to the target site by the application of an external magnetic field.[86] To improve the specificity of hydrogel drug delivery systems, core–shell nanogels were developed that utilize aptamers as the recognition element and near-infrared light as a triggering stimulus for drug delivery. In this system, gold (Au)–silver nanorods, which possess intense absorption bands in the near-infrared range, were coated with DNA cross-linked polymeric shells, so that drugs can be rapidly and controllably released upon near-infrared irradiation.[87] Because the fate of hydrogel NPs after in vivo administration may be a concern for clinical applications, biodegradable hydrogel NPs with diameters of approximately 200 nm have been synthesized via inverse miniemulsion reversible addition–fragmentation chain-transfer polymerization of 2-(dimethylamino)ethyl methacrylate. A disulfide cross-linker was used to cross-link the NPs, so that the polymer network can be degraded to its constituent primary chains on exposure to a reductive environment. These biodegradable hydrogel NPs are currently being investigated for encapsulation and controlled release of siRNA.[88] Although hydrogel NP-based drugs are not yet commercially available, their high biocompatibility and effective drug-loading properties make them strong candidates for future drug delivery systems.

          Protein-based NPs

          Hydrophobic drugs, such as taxanes, are highly active and widely used in a variety of solid tumor therapies. Both PTX and docetaxel, the commercially available taxanes for clinical treatment, are hydrophobic. Because of their solubility problems, they have been formulated as suspensions with nonionic surfactants, such as Cremophor EL® (BASF Corp.) for PTX and Tween-80 (ICI Americas, Inc.) for docetaxel. However, these surfactants are associated with hypersensitivity reactions and toxic side effects to tissues. To decrease toxicity, albumin-bound PTX (Abraxane®) has been formulated, yielding NPs approximately 130 nm in size, and has been approved by the FDA for breast cancer treatment.[89–91] In addition to reduced toxicity, albumin–PTX has been found to bind the albumin receptor (gp60) on endothelial cells, with further extravascular transport,[92–94] resulting in an increased drug concentration at tumor sites without hypersensitivity reactions. The albumin–PTX complex is approved in 38 countries for the treatment of metastatic breast cancer. Furthermore, Abraxane® is in various stages of investigation for the treatment of other cancers, such as non-small-cell lung cancer, malignant melanoma, and pancreatic and gastric cancer.

          Dendrimers

          Dendrimers are synthetic, branched macromolecules that form a tree-like structure. Unlike most linear polymers, the chemical composition and molecular weight of dendrimers can be precisely controlled; hence, it is relatively easy to predict their biocompatibility and pharmacokinetics.[95] Dendrimers are very uniform, with extremely low polydispersities, and they are commonly created with dimensions grown incrementally, in approximately nanometer steps, from 1 to over 10 nm. Their globular structure and the presence of internal cavities enable drugs to be encapsulated within the macromolecule interior for controlled release from the inner core.[96] Although the small size (up to 10 nm) of dendrimers limits extensive drug incorporation, their dendritic nature and branching allow drug loading onto the outside surface of the structure[97] via covalent binding or electrostatic interactions. Dendrimers can be synthesized by either divergent or convergent approaches. In the divergent approach, dendrimers are synthesized from the core outward, layer by layer; the layers are called generations. However, this method provides a low yield because the reactions must be conducted on a single molecule possessing a large number of equivalent reaction sites.[98] In addition, a large amount of reagents is required for the latter stages of synthesis, complicating purification. In the convergent approach, synthesis begins at the periphery of the dendrimer molecule and stops at the core; each synthesized generation can be purified as it is made.[98]

          Drug molecules associated with dendrimers can be utilized for cancer treatment,[99] enhancement of drug solubility and permeability (dendrimer–drug conjugates)[100] and intracellular delivery.[101] Some drugs can be physically encapsulated inside the dendrimer network or form linkages (either covalent or noncovalent) on the dendrimer surface.[102] Furthermore, functionalization of the dendrimer surface with specific ligands can enhance targeting. For example, Myc et al. reported a polyamidoamine dendrimer conjugate containing FA as the targeting agent and methotrexate as the therapeutic agent.[103] Cytotoxicity and specificity were tested with both FA receptor-expressing and nonexpressing cells. Both in vitro and in vivo results showed that the dendrimer conjugate was preferentially cytotoxic to the target cells. A polyamidoamine dendrimer conjugated with an anti-prostate-specific membrane antigen (PSMA) antibody has also been demonstrated.[104] The antibody–dendrimer conjugate bound specifically to PSMA-positive, but not PSMA-negative, cell lines. However, dendrimer toxicity and immunogenicity are the main concerns when dendrimers are applied for drug delivery. Since clinical experience with dendrimers has so far been limited, it is difficult to say whether they are intrinsically ‘safe’ or ‘toxic’.

          Inorganic Platforms

          Au NPs

          Noble metal NPs, such as Au NPs, have emerged as a promising scaffold for drug and gene delivery, providing a useful complement to more traditional delivery vehicles. The combination of inertness and low toxicity,[105] easy synthesis, very large surface area, well-established surface functionalization (generally through thiol linkages) and tunable stability gives Au NPs unique attributes that enable new delivery strategies. Moreover, excess loading of pharmaceuticals on NPs allows ‘drug reservoirs’ to accumulate for controlled and sustained release, thereby maintaining the drug level within the therapeutic window. An Au NP with a 2-nm core diameter could, in principle, be conjugated with ~100 molecules through the available ligands (n = 108) in the monolayer.[106] Zubarev et al. have recently succeeded in coupling 70 molecules of PTX, a chemotherapeutic drug, to an Au NP with a 2-nm core diameter.[107] Efficient release of these therapeutic agents can be triggered by internal (e.g., glutathione[108] or pH[109]) or external (e.g., light[110,111]) stimuli. In addition to serving as carriers for drug delivery, Au NPs can also be visualized using contrast imaging techniques. Once the Au NPs are targeted to the diseased site, such as a tumor, hyperthermia treatment can be used for tumor destruction. For example, a recent study demonstrated that PEGylated Au NPs could be employed for highly efficient drug delivery and in vivo photodynamic therapy of cancer.[112] Compared with conventional photodynamic therapy drug delivery in vivo, PEGylated Au NPs accelerated the administration of silicon phthalocyanine 4 by approximately two orders of magnitude, without side effects in treated mice. The key issue that remains to be addressed with Au NPs is the engineering of the particle surface for optimized properties, such as bioavailability and nonimmunogenicity.
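The ~100-ligand figure for a 2-nm core is consistent with a simple geometric estimate: divide the core's surface area by the footprint of a single thiol ligand. The footprint value below is an assumed, illustrative number (not a measured constant), chosen to show how such an estimate is made:

```python
import math

# Back-of-the-envelope estimate of how many thiol ligands fit on a 2-nm Au core.
core_diameter_nm = 2.0
ligand_footprint_nm2 = 0.13   # assumed area occupied per ligand (illustrative)

# Surface area of a sphere: 4 * pi * r^2 -> ~12.6 nm^2 for a 1-nm radius.
surface_area_nm2 = 4 * math.pi * (core_diameter_nm / 2) ** 2

# Dividing area by footprint gives roughly the ~100 ligands quoted in the text.
n_ligands = surface_area_nm2 / ligand_footprint_nm2
```

Real packing densities depend strongly on surface curvature and ligand chemistry, so this is an order-of-magnitude check, not a design rule.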

          Superparamagnetic NPs

          Magnetic NPs have been proposed as drug carriers, with a push towards clinical trials.[113] The superparamagnetic properties of iron oxide particles can be used to guide drug-loaded microcapsules into place by external magnetic fields. Another advantage of using magnetic NPs is the ability to heat the particles after internalization, which is known as the hyperthermia effect. For example, Brazel et al. developed a grafted thermosensitive polymeric system by embedding FePt NPs in poly(N-isopropylacrylamide)-based hydrogels, which can be triggered to release the loaded drug by a temperature increase induced by magnetic heating.[114] The grafted hydrogel system also exhibits a desirable positive thermal response, with an increased drug diffusion coefficient at temperatures above physiological temperature.[115]

          Besides being utilized for targeting and heating, magnetic NPs can also alter the permeability of microcapsules when external oscillating magnetic fields are applied, thereby releasing encapsulated materials.[116] For example, ferromagnetic Au-coated cobalt NPs (3 nm in diameter) were incorporated into the polymer walls of microcapsules. Subsequent application of external alternating magnetic fields of 100–300 Hz and 1200 Oe strength disturbed the capsule wall structures and dramatically increased their permeability to macromolecules. This work supports the hypothesis that magnetic NPs embedded in polyelectrolyte capsules can be used for the controlled release of substances by applying an external magnetic field.

          The main benefits of superparamagnetic NPs over classical cancer therapies are minimal invasiveness, accessibility of hidden tumors and minimal side effects. Conventional heating of a tissue by, for example, microwaves or laser light results in the destruction of healthy tissue surrounding the tumor. However, targeted paramagnetic particles provide a powerful strategy for localized heating of cancerous cells.

          Ceramic NPs

          Ceramic NPs are particles fabricated from inorganic compounds with porous characteristics, such as silica, alumina and titania.[117–119] Among these, silica NPs have attracted much research attention owing to their biocompatibility and ease of synthesis and surface modification.[120–122,301] Furthermore, the well-established silane chemistry facilitates the cross-linking of drugs to silica particles.[123,124] For example, recent breakthroughs in mesoporous silica NPs (MSNs) have brought new possibilities to this burgeoning area of research. MSNs contain hundreds of empty channels (mesopores) arranged in a 2D network with a honeycomb-like porous structure. In contrast to the low biocompatibility of other amorphous silica materials, recent studies have shown that MSNs exhibit superior biocompatibility at concentrations adequate for pharmacological applications.[125,126] Once the vehicle is localized in the cytoplasm, it is desirable to have effective control over the release of drug molecules in order to reach pharmacologically effective levels. The ability to selectively functionalize the external particle surface and/or the interior nanochannel surface of MSNs is advantageous in achieving this goal.[127,128] Different functional groups can be added using this methodology, including, for example, stimuli-responsive tethers that can be further attached to NPs (Au and iron oxide). These NPs can work as gatekeepers and be removed by either intracellular or external triggers, such as changes in pH, a reducing environment, enzymatic activity, light, an electromagnetic field or ultrasound.[128] The surface of MSNs can be engineered with cell-specific moieties, such as organic molecules, peptides, aptamers and antibodies, to achieve cell-type or tissue specificity. Moreover, optical and magnetic contrast agents can be introduced to develop multipurpose drug delivery systems.

          These studies demonstrate that target-specific MSN vehicles are promising in vitro; however, their application in vivo has not yet been reported. These particles are not biodegradable; consequently, there is a concern that they may accumulate in the human body and cause harmful effects.[117] For further in vivo applications, the biocompatibility, biodistribution, retention, degradation and clearance of MSNs must be systematically investigated.

          Carbon-based Nanomaterials

          Carbon-based nanomaterials have attracted particular interest because their surfaces can be functionalized for the grafting of nucleic acids, peptides and proteins. Carbon nanotubes (CNTs), fullerenes and nanodiamonds[129] have been extensively studied for drug delivery applications.[130] The size, geometry and surface characteristics of single-wall nanotubes (SWNTs), multiwall nanotubes and C60 fullerenes make them appealing as drug carriers. For example, PTX-conjugated SWNTs have shown promise for in vivo cancer treatment. SWNT delivery of PTX affords markedly improved treatment efficacy over clinical Taxol (Bristol-Myers Squibb Co.), as evidenced by its ability to slow tumor growth at a low PTX dose.[131]

          However, the primary drawback of carbon-based nanomaterials appears to be their toxicity. Experiments have shown that CNTs can inhibit cell proliferation and induce apoptosis. Although they are less toxic than carbon fibers and NPs, the toxicity of CNTs increases significantly when carbonyl, carboxyl and/or hydroxyl functional groups are present on their surface.[132] Because of the reported toxicity of CNTs,[133–137] studies involving their application for drug delivery are still being conducted.[138–140] In order to promote the application of CNTs for drug delivery, researchers have functionalized their surfaces to render them benign.[136] Unfortunately, concerns that functionalized CNTs may revert to a toxic state if the functional group detaches have limited the pursuit of these modified CNTs for biomedical applications.

          The toxicity of other forms of nanocarbons has also been reported.[132,140,141] One study of human lung tumor cells showed that carbon NPs are even more toxic than multiwall nanotubes and carbon nanofibers.[132] Given the mounting evidence demonstrating the toxicity of carbon NPs, the enthusiasm to develop carbon NPs for drug delivery has decreased significantly in recent years.

          Integrated Nanocomposite Particles

          A variety of nanoplatforms have been developed for a wide spectrum of applications, and each has unique advantages and limitations. By combining the specific functions of each material, new hybrid nanocomposite materials can be fabricated. For instance, liposomes and polymeric NPs are the two most widely studied drug delivery platforms, and attempts have been made to combine the advantages of both systems. A recent study reported the use of nanocells consisting of nuclear poly(lactic-co-glycolic acid) NPs within an extranuclear PEGylated phospholipid envelope for temporal targeting of tumor cells and neovasculature.[142] Moreover, liposomes are routinely coated with a hydrophilic polymer, such as PEG or poly(ethylene oxide), to improve circulation time in vivo, which is another example of a liposome–polymer composite.[143] Similarly, liposomal locked-in dendrimers, the combination of liposomes and dendrimers in one formulation, have resulted in higher drug loading and slower drug release from the composite, as compared with pure liposomes.[144] Another hybrid, the LipoMag formulation, consists of an oleic acid-coated magnetic nanocrystal core and a cationic lipid shell, and was magnetically guided to deliver and silence genes in cells and in tumors in mice.[145]

          Targeting Strategies

          Two basic requirements should be met in the design of nanocarriers to achieve effective drug delivery (Figure 2). First, drugs should be able to reach the desired tumor sites after administration, with minimal loss of volume and activity in the blood circulation. Second, drugs should kill only tumor cells, without harmful effects on healthy tissue.[146] These requirements may be met using two strategies: passive and active targeting of drugs.[147]


          Figure 2.

          Passive and active targeting.
          By the enhanced permeability and retention effect, nanoparticles (NPs) can be passively extravasated through leaky vascularization, allowing their accumulation at the tumor region (A). In this case, drugs may be released in the extracellular matrix and then diffuse through the tissue. Active targeting (B) can enhance the therapeutic efficacy of drugs by the increased accumulation and cellular uptake of NPs through receptor-mediated endocytosis. NPs can be engineered to incorporate ligands that bind to endothelial cell surface receptors. In this case, the enhanced permeability and retention effect does not pertain, and the presence of leaky vasculature is not required.

          Passive Targeting

          Passive targeting takes advantage of the unique pathophysiological characteristics of tumor vessels, enabling nanodrugs to accumulate in tumor tissues. Typically, tumor vessels are highly disorganized and dilated, with a high number of pores, resulting in enlarged gaps between endothelial cells and compromised lymphatic drainage. This ‘leaky’ vascularization, referred to as the enhanced permeability and retention (EPR) effect, allows migration of macromolecules up to 400 nm in diameter into the surrounding tumor region.[147–149] One of the earliest nanoscale technologies for passive targeting of drugs was based on the use of liposomes. More advanced liposomes are coated with a synthetic polymer that protects the agents from immune destruction.[150]

          Moreover, beyond the EPR effect, the microenvironment surrounding tumor tissue differs from that of healthy tissue, a physiological difference that also supports passive targeting. Because of their high metabolic rate, fast-growing tumor cells require more oxygen and nutrients. Consequently, glycolysis is upregulated to obtain extra energy, resulting in an acidic environment.[151] Taking advantage of this, pH-sensitive liposomes have been designed to be stable at the physiological pH of 7.4, but to degrade and release their drug molecules at acidic pH.[152]

          Although passive targeting approaches form the basis of clinical therapy, they suffer from several limitations. Ubiquitously targeting cells within a tumor is not always feasible because some drugs cannot diffuse efficiently, and the random nature of the approach makes it difficult to control the process. The passive strategy is further limited because certain tumors do not exhibit an EPR effect, and the permeability of vessels may not be the same throughout a single tumor.[153]

          Active Targeting

          One way to overcome the limitations of passive targeting is to attach affinity ligands (antibodies,[154] peptides,[155] aptamers[156] or small molecules[157] that bind only to specific receptors on the cell surface) to the surface of the nanocarriers by a variety of conjugation chemistries. Nanocarriers then recognize and bind to target cells through ligand–receptor interactions, based on the expression of receptors or epitopes on the cell surface. In order to achieve high specificity, those receptors should be highly expressed on tumor cells, but not on normal cells. Furthermore, the receptors should be expressed homogeneously and should not be shed into the blood circulation. Internalization of targeting conjugates can also occur by receptor-mediated endocytosis after binding to target cells, facilitating drug release inside the cells. In receptor-mediated endocytosis, targeting conjugates first bind to their receptors; the plasma membrane then encloses the ligand–receptor complex to form an endosome. The newly formed endosome is transferred to specific organelles, where drugs can be released by acidic pH or enzymes. Although the active targeting strategy is appealing, nanodrugs currently approved for clinical use are relatively simple and generally lack active targeting or triggered drug release components. Moreover, nanodrugs currently under clinical development also lack specific targeting. To fully explore the application of targeted drug delivery, we need to investigate whether specific diseases are the correct application for targeting, whether the properties of the therapeutic drugs, as well as their site and mode of action, are suited to targeting and whether the delivery vehicles are optimal for product development.[158]

          Key Factors Impacting Drug Delivery

          In order to achieve effective drug delivery, nanocarriers must have suitable circulation time to prevent the elimination of drugs before reaching their target. Based on previous investigations, size, shape and surface characteristics are key factors that impact the efficiency of drug delivery systems.


          Nanotechnology is an emerging field with the potential to revolutionize drug delivery. Advances in this area have allowed some nanomedicines in the market to achieve desirable pharmacokinetic properties, reduce toxicity and improve patient compliance, as well as clinical outcomes. Integration of nanoparticulate drug delivery technologies in preformulation work not only accelerates the development of new therapeutic moieties, but also helps in the reduction of attrition of new molecular entities caused by undesirable biopharmaceutical and pharmacokinetic properties.

          Optimizing the integration of nanomaterials into drug delivery systems will require standardized metrics for their classification, as well as protocols for their handling. This will, in turn, result in a better understanding of the interactions of nanomaterials with biological systems, which will facilitate better engineering of their properties specific to biomedical applications. The development of such drug carriers will require a greater understanding of both the surface chemistry of nanomaterials and the interaction chemistry of these nanomaterials with biological systems. This can only be achieved through collaborative efforts among scientists in different disciplines. Those who work in this emerging field should have up-to-date information on related toxicology issues, potential health and safety risks and the regulatory environment that will impact patient use. Understanding both the benefits and the risks of these new nanotechnology applications will be essential to good decision-making for drug developers, regulators and ultimately the consumers and patients who will be the beneficiaries of new drug delivery technologies.

        5. Nanoparticles wrapped inside human platelet membranes serve as new vehicles for targeted drug delivery.


Nanoparticles disguised as human platelets could greatly enhance the healing power of drug treatments for cardiovascular disease and systemic bacterial infections. These platelet-mimicking nanoparticles, developed by engineers at the University of California, San Diego, are capable of delivering drugs to targeted sites in the body — particularly injured blood vessels, as well as organs infected by harmful bacteria. Engineers demonstrated that by delivering the drugs just to the areas where the drugs were needed, these platelet copycats greatly increased the therapeutic effects of drugs that were administered to diseased rats and mice.

“This work addresses a major challenge in the field of nanomedicine: targeted drug delivery with nanoparticles,” said Liangfang Zhang, a nanoengineering professor at UC San Diego and the senior author of the study. “Because of their targeting ability, platelet-mimicking nanoparticles can directly provide a much higher dose of medication specifically to diseased areas without saturating the entire body with drugs.”


The study is an excellent example of using engineering principles and technology to achieve “precision medicine,” said Shu Chien, a professor of bioengineering and medicine, director of the Institute of Engineering in Medicine at UC San Diego, and a corresponding author on the study. “While this proof of principle study demonstrates specific delivery of therapeutic agents to treat cardiovascular disease and bacterial infections, it also has broad implications for targeted therapy for other diseases such as cancer and neurological disorders,” said Chien.

The ins and outs of the platelet copycats

On the outside, platelet-mimicking nanoparticles are cloaked with human platelet membranes, which enable the nanoparticles to circulate throughout the bloodstream without being attacked by the immune system. The platelet membrane coating has another beneficial feature: it preferentially binds to damaged blood vessels and certain pathogens such as MRSA bacteria, allowing the nanoparticles to deliver and release their drug payloads specifically to these sites in the body.

Enclosed within the platelet membranes are nanoparticle cores made of a biodegradable polymer that can be safely metabolized by the body. The nanoparticles can be packed with many small drug molecules that diffuse out of the polymer core and through the platelet membrane onto their targets.

To make the platelet-membrane-coated nanoparticles, engineers first separated platelets from whole blood samples using a centrifuge. The platelets were then processed to isolate the platelet membranes from the platelet cells. Next, the platelet membranes were broken up into much smaller pieces and fused to the surface of nanoparticle cores. The resulting platelet-membrane-coated nanoparticles are approximately 100 nanometers in diameter, which is one thousand times thinner than an average sheet of paper.
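The size comparison above checks out arithmetically, assuming a typical sheet of paper is about 100 µm (0.1 mm) thick (an assumed typical value, not from the source):

```python
# Sanity check: a ~100 nm particle vs. the thickness of a sheet of paper.
particle_diameter_nm = 100    # platelet-mimicking nanoparticle, from the text
paper_thickness_um = 100      # assumed average sheet of paper, ~0.1 mm

paper_thickness_nm = paper_thickness_um * 1000   # 1 um = 1000 nm
ratio = paper_thickness_nm / particle_diameter_nm
print(ratio)   # -> 1000.0, i.e. "one thousand times thinner"
```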

This cloaking technology is based on the strategy that Zhang’s research group had developed to cloak nanoparticles in red blood cell membranes. The researchers previously demonstrated that nanoparticles disguised as red blood cells are capable of removing dangerous pore-forming toxins produced by MRSA, poisonous snake bites and bee stings from the bloodstream.

By using the body’s own platelet membranes, the researchers were able to produce platelet mimics that contain the complete set of surface receptors, antigens and proteins naturally present on platelet membranes. This is unlike other efforts, which synthesize platelet mimics that replicate one or two surface proteins of the platelet membrane.

“Our technique takes advantage of the unique natural properties of human platelet membranes, which have a natural preference to bind to certain tissues and organisms in the body,” said Zhang. This targeting ability, which red blood cell membranes do not have, makes platelet membranes extremely useful for targeted drug delivery, researchers said.

Platelet copycats at work

In one part of this study, researchers packed platelet-mimicking nanoparticles with docetaxel, a drug used to prevent scar tissue formation in the lining of damaged blood vessels, and administered them to rats afflicted with injured arteries. Researchers observed that the docetaxel-containing nanoparticles selectively collected onto the damaged sites of arteries and healed them.

When packed with a small dose of antibiotics, platelet-mimicking nanoparticles can also greatly reduce bacterial infections that have entered the bloodstream and spread to various organs in the body. Researchers injected nanoparticles containing just one-sixth the clinical dose of the antibiotic vancomycin into a group of mice systemically infected with MRSA bacteria. The organs of these mice ended up with bacterial counts up to one thousand times lower than those of mice treated with the clinical dose of vancomycin alone.

“Our platelet-mimicking nanoparticles can increase the therapeutic efficacy of antibiotics because they can focus treatment on the bacteria locally without spreading drugs to healthy tissues and organs throughout the rest of the body,” said Zhang. “We hope to develop platelet-mimicking nanoparticles into new treatments for systemic bacterial infections and cardiovascular disease.”

6.  Sponge-like nanoporous gold could be key to new devices to detect disease-causing agents in humans and plants, according to UC Davis researchers.


A group from the UC Davis Department of Electrical and Computer Engineering has demonstrated that it could detect nucleic acids using nanoporous gold, a novel sensor coating material, in mixtures of other biomolecules that would gum up most detectors. This method enables sensitive detection of DNA in complex biological samples, such as serum from whole blood.

“Nanoporous gold can be imagined as a porous metal sponge with pore sizes that are a thousand times smaller than the diameter of a human hair,” said Erkin Şeker, assistant professor of electrical and computer engineering at UC Davis and the senior author on the papers. “What happens is the debris in biological samples, such as proteins, is too large to go through those pores, but the fiber-like nucleic acids that we want to detect can actually fit through them. It’s almost like a natural sieve.”


Rapid and sensitive detection of nucleic acids plays a crucial role in early identification of pathogenic microbes and disease biomarkers. Current sensor approaches usually require nucleic acid purification that relies on multiple steps and specialized laboratory equipment, which limit the sensors’ use in the field. The researchers’ method reduces the need for purification.

“So now we hope to have largely eliminated the need for extensive sample clean-up, which makes the process conducive to use in the field,” Şeker said.

The result is a faster and more efficient process that can be applied in many settings.

The researchers hope the technology can be translated into the development of miniature point-of-care diagnostic platforms for agricultural and clinical applications.

“The applications of the sensor are quite broad ranging from detection of plant pathogens to disease biomarkers,” said Şeker.

For example, in agriculture, scientists could detect whether a certain pathogen exists on a plant without seeing any symptoms. And in sepsis cases in humans, doctors might determine bacterial contamination much more quickly than at present, preventing any unnecessary treatments.

7.  Pushing the limits of lensless imaging


The Optical Society

WASHINGTON — Using ultrafast beams of extreme ultraviolet light streaming at 100,000 pulses per second, researchers from the Friedrich Schiller University Jena, Germany, have pushed the boundaries of a well-established imaging technique. Not only did they make the highest-resolution images ever achieved with this method at a given wavelength, they also created images fast enough to be used in real time. Their new approach could be used to study everything from semiconductor chips to cancer cells.

The team will present their work at the Frontiers in Optics, The Optical Society’s annual meeting and conference in San Jose, California, USA, on October 22, 2015.

The researchers wanted to improve on a lensless imaging technique called coherent diffraction imaging, which has been around since the 1980s. To take a picture with this method, scientists fire an X-ray or extreme ultraviolet laser at a target. The light scatters off, and some of those photons interfere with one another and find their way onto a detector, creating a diffraction pattern. By analyzing that pattern, a computer then reconstructs the path those photons must have taken, which generates an image of the target material—all without the lens that’s required in conventional microscopy.

“The computer does the imaging part—forget about the lens,” explained Michael Zürch, Friedrich Schiller University Jena, Germany and lead researcher. “The computer emulates the lens.”
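The reconstruction step Zürch alludes to is usually an iterative phase-retrieval algorithm: the detector records only intensities, so the computer must recover the missing phases before it can "emulate the lens." Below is a minimal sketch of Fienup-style error reduction; the toy object, support region, and iteration count are illustrative assumptions, not details of the Jena group's actual algorithm.

```python
import numpy as np

def error_reduction(magnitudes, support, n_iter=200, seed=0):
    """Fienup-style error reduction: alternate between enforcing the
    measured Fourier magnitudes and the known real-space support."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(magnitudes.shape))
    g = np.fft.ifft2(magnitudes * phase)           # random starting guess
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = magnitudes * np.exp(1j * np.angle(G))  # keep measured |F|, our phase
        g = np.fft.ifft2(G)
        g = np.where(support, np.maximum(g.real, 0.0), 0.0)  # support + positivity
    return g

# Toy "sample": two bright points inside a known support region.
n = 32
obj = np.zeros((n, n))
obj[10, 12] = 1.0
obj[20, 18] = 0.5
support = np.zeros((n, n), dtype=bool)
support[5:27, 5:27] = True

measured = np.abs(np.fft.fft2(obj))   # a detector records magnitudes only
recon = error_reduction(measured, support)
```

Each iteration projects the estimate onto the set of images consistent with the measured magnitudes, then onto the set consistent with prior knowledge (support and positivity); for sparse objects this kind of alternation typically converges to a recognizable image.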

Without a lens, the quality of the images primarily depends on the radiation source. Traditionally, researchers use big, powerful X-ray sources like the one at the SLAC National Accelerator Laboratory in Menlo Park, CA, USA. Over the last 10 years, researchers have developed smaller, cheaper machines that pump out coherent, laser-like beams in the laboratory setting. While those machines are convenient from a cost perspective, they come with performance drawbacks.

The table-top machines are unable to produce as many photons as the big, expensive ones, which limits their resolution. To achieve higher resolutions, the detector must be placed close to the target material—similar to placing a specimen close to a microscope to boost the magnification. At such short distances, however, hardly any photons bounce off the target at large enough angles to reach the detector, and without enough photons, image quality suffers.
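The geometry described above can be made concrete: the detector's half-width and its distance from the sample set the maximum collection half-angle, which acts like a numerical aperture (NA = sin θ) and fixes the achievable half-pitch resolution. A short sketch, where the detector dimensions are hypothetical and only the 33 nm wavelength comes from the study:

```python
import math

def cdi_resolution_nm(wavelength_nm, detector_half_width_mm, distance_mm):
    """Half-pitch diffraction-limited resolution in lensless CDI: the
    detector geometry sets the maximum scattering half-angle, which acts
    like a numerical aperture (NA = sin theta)."""
    theta = math.atan(detector_half_width_mm / distance_mm)
    return wavelength_nm / (2.0 * math.sin(theta))

# Hypothetical geometry (not from the paper), at the study's 33 nm wavelength:
res_far = cdi_resolution_nm(33, 13.3, 15)   # detector farther from the sample
res_near = cdi_resolution_nm(33, 13.3, 10)  # moved closer: finer resolution
```

Moving the detector closer increases the collected angle and improves (shrinks) the resolution, at the cost of needing many more photons at those large scattering angles.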

Zürch and a team of researchers from Jena University used a special, custom-built ultrafast laser that fires extreme ultraviolet photons a hundred times faster than conventional table-top machines. With more photons, at a wavelength of 33 nanometers, the researchers were able to make an image with a resolution of 26 nanometers — almost the theoretical limit. “Nobody has achieved such a high resolution with respect to the wavelength in the extreme ultraviolet before,” Zürch said.

The ultrafast laser also overcame another drawback of conventional table-top light sources: long exposure times. If researchers have to wait for images, they can’t get real-time feedback on the systems they study. Thanks to the new high-speed light source, Zürch and his colleagues have reduced the exposure time to only about a second — fast enough for real-time imaging. When taking snapshots every second, the researchers reached a resolution below 80 nanometers.

The prospect of high-resolution and real-time imaging using such a relatively small setup could lead to all kinds of applications, Zürch said. Engineers can use this to hunt for tiny defects in semiconductor chips. Biologists can zoom in on the organelles that make up a cell. Eventually, he said, the researchers might be able to cut down on the exposure times even more and reach even higher resolution levels.

About FiO/LS

Frontiers in Optics (FiO) 2015 is The Optical Society’s (OSA) 99th Annual Meeting and is being held together with Laser Science, the 31st annual meeting of the American Physical Society (APS) Division of Laser Science (DLS). The two meetings unite the OSA and APS communities for five days of quality, cutting-edge presentations, in-demand invited speakers and a variety of special events spanning a broad range of topics in optics and photonics—the science of light—across the disciplines of physics, biology and chemistry. The exhibit floor will feature leading optics companies, technology products and programs.

About The Optical Society

Founded in 1916, The Optical Society (OSA) is a leading professional organization for scientists, engineers, students and entrepreneurs who fuel discoveries, shape real-life applications and accelerate achievements in the science of light. Through world-renowned publications, meetings and membership initiatives, OSA provides quality research, inspired interactions and dedicated resources for its extensive global network of optics and photonics experts. OSA is a founding partner of the National Photonics Initiative and the 2015 International Year of Light.

SOURCE: The Optical Society

8.  Physicists determine three-dimensional positions of individual atoms for the first time

Katherine Kornei, UCLA
The scientists were able to plot the exact coordinates of nine layers of atoms with a precision of 19 trillionths of a meter. Courtesy of Mary Scott and Jianwei (John) Miao/UCLA

Atoms are the building blocks of all matter on Earth, and the patterns in which they are arranged dictate how strong, conductive or flexible a material will be. Now, scientists at UCLA have used a powerful microscope to image the three-dimensional positions of individual atoms to a precision of 19 trillionths of a meter, which is several times smaller than a hydrogen atom.

Their observations make it possible, for the first time, to infer the macroscopic properties of materials based on their structural arrangements of atoms, which will guide how scientists and engineers build aircraft components, for example. The research, led by Jianwei (John) Miao, a UCLA professor of physics and astronomy and a member of UCLA’s California NanoSystems Institute, is published September 21 in the online edition of the journal Nature Materials.

For more than 100 years, researchers have inferred how atoms are arranged in three-dimensional space using a technique called X-ray crystallography, which involves measuring how light waves scatter off of a crystal. However, X-ray crystallography only yields information about the average positions of many billions of atoms in the crystal, and not about individual atoms’ precise coordinates.

“It’s like taking an average of people on Earth,” Miao said. “Most people have a head, two eyes, a nose and two ears. But an image of the average person will still look different from you and me.”

Because X-ray crystallography doesn’t reveal the structure of a material on a per-atom basis, the technique can’t identify tiny imperfections in materials, such as the absence of a single atom. These imperfections, known as point defects, can weaken materials, which can be dangerous when the materials are components of machines like jet engines.

“Point defects are very important to modern science and technology,” Miao said.

Miao and his team used a technique known as scanning transmission electron microscopy, in which a beam of electrons narrower than a hydrogen atom is scanned over a sample while a detector measures how many electrons interact with the atoms at each scan position. The method reveals the atomic structure of materials because different arrangements of atoms cause electrons to interact in different ways.

However, scanning transmission electron microscopes only produce two-dimensional images. So, creating a 3-D picture requires scientists to scan the sample once, tilt it by a few degrees and re-scan it—repeating the process until the desired spatial resolution is achieved—before combining the data from each scan using a computer algorithm. The downside of this technique is that the repeated electron beam radiation can progressively damage the sample.
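The tilt-and-rescan procedure is, in essence, tomography. A crude, numpy-only sketch of the idea using a toy 2-D object and unfiltered back-projection; the actual study used a far more sophisticated iterative reconstruction, and the object, tilt range, and angle count here are illustrative only.

```python
import numpy as np

def rotate_nn(img, angle_deg):
    """Nearest-neighbour rotation about the array centre (inverse mapping)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    th = np.radians(angle_deg)
    ys, xs = np.mgrid[0:n, 0:n]
    xr = c + (xs - c) * np.cos(th) + (ys - c) * np.sin(th)
    yr = c - (xs - c) * np.sin(th) + (ys - c) * np.cos(th)
    out = img[np.clip(np.rint(yr).astype(int), 0, n - 1),
              np.clip(np.rint(xr).astype(int), 0, n - 1)]
    out[(xr < 0) | (xr > n - 1) | (yr < 0) | (yr > n - 1)] = 0.0
    return out

def project(img, angle_deg):
    """One tilt image: rotate the object, then integrate along the beam."""
    return rotate_nn(img, angle_deg).sum(axis=0)

def backproject(sinogram, angles_deg, n):
    """Unfiltered back-projection: smear each projection back across the
    grid at its tilt angle and average."""
    recon = np.zeros((n, n))
    for proj, ang in zip(sinogram, angles_deg):
        recon += rotate_nn(np.tile(proj, (n, 1)), -ang)
    return recon / len(angles_deg)

# Toy 2-D "sample": one dense block off-centre.
n = 64
obj = np.zeros((n, n))
obj[20:30, 35:45] = 1.0

angles = np.linspace(-60, 60, 25)   # a limited tilt range, as in real holders
sino = [project(obj, a) for a in angles]
recon = backproject(sino, angles, n)
```

Even this naive reconstruction peaks at the block's true position; it also shows why each extra tilt (and hence each extra dose of electrons) matters, since fewer angles mean a blurrier result.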

Using a scanning transmission electron microscope at the Lawrence Berkeley National Laboratory’s Molecular Foundry, Miao and his colleagues analyzed a small piece of tungsten, an element used in incandescent light bulbs. As the sample was tilted 62 times, the researchers were able to slowly assemble a 3-D model of 3,769 atoms in the tip of the tungsten sample.

The experiment was time consuming because the researchers had to wait several minutes after each tilt for the setup to stabilize.

“Our measurements are so precise, and any vibrations—like a person walking by—can affect what we measure,” said Peter Ercius, a staff scientist at Lawrence Berkeley National Laboratory and an author of the paper.

The researchers compared the images from the first and last scans to verify that the tungsten had not been damaged by the radiation, thanks to the electron beam energy being kept below the radiation damage threshold of tungsten.

Miao and his team showed that the atoms in the tip of the tungsten sample were arranged in nine layers, the sixth of which contained a point defect. The researchers believe the defect was either a hole in an otherwise filled layer of atoms or one or more interloping atoms of a lighter element such as carbon.

Regardless of the nature of the point defect, the researchers’ ability to detect its presence is significant, demonstrating for the first time that the coordinates of individual atoms and point defects can be recorded in three dimensions.

“We made a big breakthrough,” Miao said.

Miao and his team plan to build on their results by studying how atoms are arranged in materials that possess magnetism or energy storage functions, which will help inform our understanding of the properties of these important materials at the most fundamental scale.

“I think this work will create a paradigm shift in how materials are characterized in the 21st century,” he said. “Point defects strongly influence a material’s properties and are discussed in many physics and materials science textbooks. Our results are the first experimental determination of a point defect inside a material in three dimensions.”

The study’s co-authors include Rui Xu, Chien-Chun Chen, Li Wu, Mary Scott, Matthias Bartels, Yongsoo Yang and Michael Sawaya, all of UCLA; as well as Colin Ophus of Lawrence Berkeley National Laboratory; Wolfgang Theis of the University of Birmingham; Hadi Ramezani-Dakhel and Hendrik Heinz of the University of Akron; and Laurence Marks of Northwestern University.

This work was primarily supported by the U.S. Department of Energy’s Office of Basic Energy Sciences (grant DE-FG02-13ER46943 and contract DE-AC02-05CH11231).

9.  An SDSU chemist has developed a technique to identify potential cancer drugs that are less likely to produce side effects.


A class of therapeutic drugs known as protein kinase inhibitors has in the past decade become a powerful weapon in the fight against various life-threatening diseases, including certain types of leukemia, lung cancer, kidney cancer and squamous cell cancer of the head and neck. One problem with these drugs, however, is that they often inhibit many different targets, which can lead to side effects and complications in therapeutic use. A recent study by San Diego State University chemist Jeffrey Gustafson has identified a new technique for improving the selectivity of these drugs and possibly decreasing unwanted side effects in the future.

Why are protein kinase–inhibiting drugs so unpredictable? The answer lies in their molecular makeup.

Many of these drug candidates exhibit a phenomenon known as atropisomerism. To understand what this is, it’s helpful to understand a bit of the chemistry at work. Molecules can come in different forms that have exactly the same chemical formula and even the same bonds, just arranged differently in space. The different arrangements are mirror images of each other, with a left-handed and a right-handed version. The molecules’ “handedness” is referred to as chirality. Atropisomerism is a form of chirality that arises from restricted rotation about a single bond, called an axis of chirality. Picture two non-identical paper snowflakes tethered together by a rigid stick.

Some axes of chirality are rigid, while others can freely spin about their axis. In the latter case, this means that at any given time, you could have one of two different “versions” of the same molecule.
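Whether an axis spins freely or stays locked comes down to the height of the rotation barrier, which sets the interconversion half-life via the standard Eyring equation. A sketch at room temperature; the barrier values are illustrative, not measurements from Gustafson's compounds.

```python
import math

R_KCAL = 1.987e-3      # gas constant, kcal/(mol*K)
KB_OVER_H = 2.084e10   # Boltzmann constant / Planck constant, 1/(s*K)

def interconversion_half_life_s(barrier_kcal_per_mol, temp_k=298.0):
    """Eyring equation: rate of rotation about the chiral axis, converted
    to the half-life for the two atropisomers to interconvert."""
    k = KB_OVER_H * temp_k * math.exp(-barrier_kcal_per_mol / (R_KCAL * temp_k))
    return math.log(2) / k

# Illustrative barriers: a low barrier spins freely on the lab timescale,
# while a high one (e.g. after adding a bulky substituent) is locked.
fast = interconversion_half_life_s(20.0)   # roughly a minute
slow = interconversion_half_life_s(30.0)   # decades
```

The steep exponential dependence is why a single well-placed substituent, like the chlorine atom described below, can flip a compound from a racemizing mixture to two isolable, stable atropisomers.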

Watershed treatment

As the name suggests, kinase inhibitors interrupt the function of kinases—a particular type of enzyme—and effectively shut down the activity of proteins that contribute to cancer.

“Kinase inhibition has been a watershed for cancer treatment,” said Gustafson, who attended SDSU as an undergraduate before earning his Ph.D. in organic chemistry from Yale University, then working there as a National Institutes of Health postdoctoral fellow in chemical biology.

“However, it’s really hard to inhibit a single kinase,” he explained. “The majority of compounds identified inhibit not just one but many kinases, and that can lead to a number of side effects.”

Many kinase inhibitors possess axes of chirality that are freely spinning. The problem is that because you can’t control which “arrangement” of the molecule is present at a given time, the unwanted version could have unintended consequences.

In practice, this means that when medicinal chemists discover a promising kinase inhibitor that exists as two interchanging arrangements, they actually have two different inhibitors. Each one can have quite different biological effects, and it’s difficult to know which version of the molecule actually targets the right protein.

“I think this has really been under-recognized in the field,” Gustafson said. “The field needs strategies to weed out these side effects.”

Applying the brakes

So that’s what Gustafson did in a recently published study. He and his colleagues synthesized atropisomeric compounds known to target a particular family of kinases, the tyrosine kinases. To some of these compounds, the researchers added a single chlorine atom, which effectively served as a brake to keep the atropisomer from spinning around, locking the molecule into either a right-handed or a left-handed version.

When the researchers screened both the modified and unmodified versions against their target kinases, they found major differences in which kinases the different versions inhibited. The unmodified compound was like a shotgun blast, inhibiting a broad range of kinases. But the locked-in right-handed and left-handed versions were choosier.

“Just by locking them into one or another atropisomeric configuration, not only were they more selective, but they inhibited different kinases,” Gustafson explained.

If drug makers incorporated this technique into their early drug discovery process, he said, it would help identify which version of an atropisomeric compound actually targets the kinase they want to target, cutting the potential for side effects and helping to usher drugs past strict regulatory hurdles and into the hands of waiting patients.

11.  ‘Nanocubes’ Make PSA Test Over 100 Times More Sensitive


A new catalyst that improves the sensitivity of the standard PSA test more than 100-fold, pictured above, is made of palladium nanocubes coated with iridium. (Credit: Xiaohu Xia, Michigan Technological University)

Say you’ve been diagnosed with prostate cancer, the second-leading cause of cancer death in men. You opt for surgery to remove your prostate. Three months later, a prostate-specific antigen (PSA) test shows no prostate cells in your body. Everyone rejoices.

Until 18 months later, when another PSA test reveals that prostate cells have reappeared. What happened?

The first PSA test yielded what’s known as a false negative result. It did not detect the handful of cells that remained after surgery and later multiplied. Now a chemist at Michigan Technological University has made a discovery that could, among other things, slash the number of false negatives in PSA tests.

Xiaohu Xia and his team, including researchers from Louisiana State University and the University of Texas at Dallas, have developed a new catalyst that could make lab tests like the PSA much more sensitive. And it may even speed up reactions that neutralize toxic industrial chemicals before they enter lakes and streams.

A paper on the research, “Pd-Ir Core-Shell Nanocubes: A Type of Highly Efficient and Versatile Peroxidase Mimic,” was published online Sept. 3 in ACS Nano. In addition to Xia, the coauthors are graduate students Jingtuo Zhang, Jiabin Liu and Haihang Ye and undergraduate Erin McKenzie of Michigan Tech; Moon J. Kim and Ning Lu of the University of Texas at Dallas; and Ye Xu and Kushal Ghale of Louisiana State University. The LSU team conducted theoretical calculations, and the UT Dallas team contributed high-resolution electron microscopy images.

Their new catalyst mimics the action of similar biochemicals found in nature, called peroxidases. “In animals and plants, these peroxidases are important; for example, they get rid of hydrogen peroxide, which is harmful to the organism,” said Xia, an assistant professor of chemistry at Michigan Tech. In medicine, peroxidases have become powerful tools for accelerating chemical reactions in diagnostic tests; a peroxidase found in the horseradish root is commonly used in the standard PSA test.

However, these natural peroxidases have drawbacks. They can be difficult to extract and purify. “And, they are made of protein, which isn’t very stable,” Xia explained. “At high temperatures, they cook, like meat.”

“Moreover, their efficiency is just fair,” he added. “We wanted to develop a mimic peroxidase that was substantially more efficient than the natural peroxidase, which would lead to a more-sensitive PSA test.”

Their new catalyst, made from nanoscale cubes of palladium coated with a few layers of iridium atoms, does just that. PSA tests Xia’s team conducted using the palladium-iridium catalyst were 110 times more sensitive than tests completed with the conventional peroxidase.
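In assay terms, a more efficient catalyst means a steeper calibration curve and therefore a lower limit of detection. A sketch using the classic three-sigma LOD definition; the blank readings and calibration slopes are hypothetical, chosen only so the slope ratio echoes the ~110-fold improvement reported.

```python
import statistics

def limit_of_detection(blank_signals, calibration_slope):
    """Classic 3-sigma LOD: the analyte concentration whose signal rises
    three blank standard deviations above background."""
    return 3.0 * statistics.stdev(blank_signals) / calibration_slope

blanks = [0.010, 0.012, 0.011, 0.009]   # hypothetical blank absorbances

# Hypothetical calibration slopes (signal per ng/mL of PSA): a catalyst
# with ~110x the turnover gives ~110x the slope, hence ~110x lower LOD.
lod_hrp = limit_of_detection(blanks, calibration_slope=0.05)
lod_pd_ir = limit_of_detection(blanks, calibration_slope=5.5)
```

The blank noise is unchanged; all of the sensitivity gain comes from the catalyst amplifying the signal per unit of antigen.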

“After surgery, it’s vital to detect a tiny amount of prostate antigen, because otherwise you can get a false negative and perhaps delay treatment for cancer,” said Xia. “Our ultimate goal is to further refine our system for use in clinical diagnostic laboratories.”

Xia hopes that his mimic peroxidase will someday save lives through earlier detection of cancer and other maladies. He also plans to explore other applications, including how it compares with horseradish peroxidase in other catalytic reactions: breaking down toxic industrial-waste products like phenols into harmless substances.

Finally, the team wants to better understand why its palladium-iridium catalyst works so well. “We know the iridium coating is the key,” Xia said. “We think it makes the surface sticky, so the chemical reagents bind to it better.”

12.  Using Proteomics To Understand How Genetic Mutations Rewire Cancer Cells


SAN JOSE, Calif.–(BUSINESS WIRE)–Thermo Fisher Scientific and the Biotech Research and Innovation Center (BRIC) at the University of Copenhagen (UCPH) have shared results from two important scientific papers that advance understanding of how gene mutations drive cancer progression. The two landmark studies, published this week in the journal Cell, are some of the early results of the strategic collaboration between Thermo Fisher Scientific and the Linding Lab at BRIC, UCPH.

Using advanced Thermo Scientific Orbitrap Fusion mass spectrometry and next-generation sequencing technologies, researchers from the Universities of Copenhagen, Yale, Zurich, Rome and Tottori describe how specific cancer mutations target and damage the protein signaling networks within human cells on a global scale.

By developing advanced algorithms to integrate data from quantitative mass-spectrometry and next generation sequencing of tumor samples, the researchers have been able to uncover cancer-related changes to phosphorylation signaling networks. This new breakthrough allows researchers to identify the effects of mutations on the function of protein pathways in cancer for individual patients, even if those mutations are very rare.

Lead BRIC researcher Dr. Rune Linding said: “The identification of distinct changes within our tissues that could have the potential to help predict and treat cancer is a major step forward and we are confident that it can aid in the development of novel therapies and screening techniques.”

Since the human genome was decoded more than a decade ago, large-scale cancer genome studies have successfully identified gene mutations in individual patients and tumors. However, to develop improved cancer therapies, researchers need to explain and relate this genomic data to proteins, the targets of most pharmaceutical drugs. Creating this linkage provides powerful new insights into cancer biology and potential therapeutic approaches.

“The studies highlight the importance of integrating proteomics with genomics in future cancer studies and underscore the value of the broad technological expertise within Thermo Fisher,” said Ken Miller, vice president of research product marketing, life sciences mass spectrometry at Thermo Fisher. “It is becoming increasingly apparent that the genetic basis for each patient’s cancer is subtly, but importantly, different. This realization will inevitably lead to a need for tools to acquire and assess patient-specific information to develop highly personalized therapies with the potential for much greater efficacy. It is hoped that the novel approaches described in these studies, together with best-in-class enabling technologies such as the Orbitrap and Ion Torrent systems, will continue to improve our knowledge of cancer biology.”

The Biotech Research & Innovation Centre (BRIC) was established in 2003 by the Danish Ministry of Science, Technology and Innovation to form an elite centre in biomedical research.

The two studies will be available in advance online and printed in the September 24 issue of Cell, a premier journal in life and biological sciences. More information about the studies and links to media content can be found on http://www.lindinglab.science and http://www.bric.ku.dk. The work was supported by the European Research Council (ERC), the Lundbeck Foundation and Human Frontier Science Program.

13.  Multi-Ancestry GWAS Uncovers a Dozen New Loci Linked to Blood Pressure

Sep 21, 2015


NEW YORK (GenomeWeb) – In Nature Genetics, an international team described a dozen new loci influencing blood pressure patterns across individuals from multiple populations — a set that overlaps with variants implicated in epigenetic features of blood and other tissues.

Through a multi-stage genome-wide association study that relied on genotyping information for as many as 320,251 individuals of East Asian, South Asian, and European descent, the researchers focused in on SNPs at 12 blood pressure-associated sites in the genome, including loci previously linked to cardiac or metabolic functions.

In particular, the team saw blood pressure-linked variants in and around genes contributing to vascular smooth muscle and renal function. And a large proportion of the associated SNPs — or variants in linkage disequilibrium with them — turned up at sites already implicated in control of DNA methylation.

“We note an effect of genome-wide-associated sentinel SNPs on DNA methylation for traits in addition to blood pressure, suggesting that DNA methylation might have a wider role in linking common genetic variation to multiple phenotypes,” the study’s authors wrote.

More than a billion people around the world are affected by high blood pressure, the team explained, a condition that elevates the risk of heart disease, heart attack, stroke, and chronic kidney disease.

Because it occurs at especially high rates in East Asian and South Asian populations, the investigators reasoned that it might be possible to find both ancestry-specific and trans-ancestral genetic associations with high blood pressure.

The team started by analyzing imputed and directly genotyped SNPs in 31,516 individuals of East Asian ancestry, 35,352 individuals with European ancestry, and 33,126 individuals of South Asian descent, searching for variants associated with systolic blood pressure, diastolic blood pressure, pulse pressure, mean arterial pressure, and hypertension.

Through analyses on each population individually and in a meta-analysis of individuals from all three populations, the researchers initially identified 630 loci with suspected ties to at least one of the five blood pressure traits considered.
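Pooling association results across ancestry groups, as described here, is conventionally done with a fixed-effect inverse-variance meta-analysis. A minimal sketch; the per-ancestry effect sizes and standard errors are invented for illustration and do not reproduce the study's actual pipeline.

```python
import math

def fixed_effect_meta(betas, ses):
    """Inverse-variance fixed-effect meta-analysis: weight each ancestry
    group's effect estimate by 1/SE^2, then pool."""
    weights = [1.0 / se ** 2 for se in ses]
    beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return beta, se, beta / se   # pooled effect, pooled SE, z-score

# Invented per-ancestry effects of one SNP on systolic blood pressure
# (mmHg per allele) for East Asian, European, and South Asian strata.
beta, se, z = fixed_effect_meta([0.45, 0.38, 0.52], [0.11, 0.09, 0.14])
```

Because the pooled standard error shrinks as cohorts are combined, variants with consistent trans-ancestry effects can reach genome-wide significance even when no single population's sample is large enough on its own.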

They then compared the top SNP at each site against data on as many as 87,205 individuals tested for various blood pressure traits for the International Consortium on Blood Pressure GWAS, narrowing in on 19 loci with potential ties to blood pressure that were not described in the past.

The team confirmed blood pressure associations for SNPs at 12 of the new loci through testing on another 48,268 East Asians, 68,456 Europeans, and 16,328 South Asians.

The analysis also verified almost two dozen loci linked to blood pressure in the past and pointed to 17 sites in the genome with weaker ties to the traits of interest.

Variants at the 12 new loci seemed to have similar effects on the five traits in question, regardless of the population considered, while variants that first appeared to show population-specific effects in East Asians and Europeans did not pan out in replication testing.

By folding in linkage disequilibrium patterns for SNPs at the new blood pressure-associated sites, the researchers got a look at genes that fall near these linked SNPs — a collection that includes genes such as PDE3A, KCNK3, and PRDM6.

They also used these linkage patterns to look for overlap with DNA methylation-related SNPs, demonstrating that 28 of 35 SNPs at these loci seem to be linked to altered DNA methylation levels and related expression shifts in samples from thousands of Europeans or East Asians.

And the team saw similar effects in hundreds of cord blood samples subjected to methylation profiling, suggesting the effect is not simply a consequence of high blood pressure itself.

“The presence of these associations at an early stage of life, before substantial environmental exposure, lends support to the view that the sequence variants have a direct effect on DNA methylation and argues against reverse causation,” the study authors wrote.

14.  Elabela, A New Human Embryonic Stem Cell Growth Factor

September 20, 2015 by mburatov

When embryonic stem cell lines are made, they are traditionally grown on a layer of “feeder cells” that secrete growth factors that keep the embryonic stem cells (ESCs) from differentiating and drive them to grow. These feeder cells are usually irradiated mouse fibroblasts that coat the culture dish, but do not divide. Mouse ESCs can be grown without feeder cells if the growth factor LIF is provided in the medium. LIF, however, is not the growth factor required by human ESCs, and therefore, designing culture media for human ESCs to help them grow without feeder cells has proven more difficult.

Having said that, several laboratories have designed media that can be used to derive human embryonic stem cells without feeder cells. Such a procedure is very important if the cells are to be used for therapeutic purposes, since animal cells can harbor difficult-to-detect viruses and can transfer unusual sugars onto the surfaces of human ESCs in culture. These unusual sugars can elicit a strong immune response, and for this reason, ESCs must be cultivated or derived under feeder-free, animal product-free conditions. However, to design good feeder-free culture media, we must know more about the growth factors that ESCs require.

To that end, Bruno Reversade from The Institute of Molecular and Cell Biology in Singapore and others have identified a new growth factor that human ESCs secrete themselves. This protein, ELABELA (ELA), was first identified as a signal for heart development. However, Reversade’s laboratory has discovered that ELA is also abundantly secreted by human ESCs and is required for human ESCs to maintain their ability to self-renew.

Reversade and others deleted the ELA gene with the CRISPR/Cas9 system, and they also knocked down the expression of this gene in other cells with small interfering RNAs. Alternatively, they incubated human ESCs with antibodies against ELA, which neutralized the protein and prevented it from binding to the cell surface. However ELA was inhibited, the results were the same: reduced ESC growth, increased cell death, and loss of pluripotency.

How does ELA signal cells to grow? Global signaling studies of growing human ESCs showed that ELA activates the PI3K/AKT/mTORC1 signaling pathway, which other work has shown to be required for cell survival. By activating this pathway, ELA drives human ESCs through cell-cycle progression, activates protein synthesis, and inhibits stress-induced apoptosis.

Interestingly, INSULIN and ELA have partially overlapping functions in human ESC culture medium, but only ELA seems to prime human ESCs toward the endoderm lineage. In the heart, ELA binds to the Apelin receptor APLNR. This receptor, however, is not expressed in human ESCs, which suggests that another receptor, whose identity remains unknown at the moment, binds ELA in human ESCs.

Thus ELA, an endogenous growth factor secreted by human ESCs, appears to act through an alternative, as-yet-unidentified cell-surface receptor.

This paper was published in the journal Cell Stem Cell.

15.  Multiwavelength TIRF Microscopy Enables Insight into Actin Filaments


Researchers at the University of California, San Francisco (UCSF) are combining multiple laser excitation wavelengths in total internal reflection fluorescence (TIRF) microscopy to investigate the binding dynamics of individual actin filaments.


TIRF microscopy provides a unique method of imaging isolated molecules and complexes in vitro. Additionally, the use of sensitive, low-noise cameras enables researchers to study this behavior in real time. A new plug-and-play method of combining several fiber-delivered, digitally modulated lasers into a single instrument, such as a TIRF microscope, now enables multiple labeled proteins to be imaged pseudosimultaneously at high frame rates. This article explores how multiwavelength excitation is being combined with TIRF microscopy in the laboratory of Dr. Dyche Mullins, a professor at UCSF, and how it’s being used to gain new insights into complex biochemical interactions that control the stability and function of actin filaments.

TIRF microscopy in single-filament studies

The Mullins Lab, located at UCSF’s Mission Bay campus, is widely recognized as a leading authority on the study of actin filaments. These protein filaments are fundamental to many processes in virtually every eukaryotic cell — they act as structural elements that enable movement of internal cargoes, amoeboid cell migration, cell division and more. With these filaments playing so many different roles, it is not surprising that their combination of growth, branching, aggregation and movement involves many subtle control options, which are mediated by a range of different proteins. Sam Lord, the Mullins Lab’s microscope specialist, said, “One area of our research is studying how various proteins bind to actin filaments to enable aggregation, branching and other actions, and more specifically, how yet another set of proteins modulates these binding processes. Obviously, we do bulk studies in a cuvette that reveal overall kinetic data about these binding processes, but we also want to image these processes in real time to study the structural biochemistry.” In order to do so, the lab uses TIRF microscopy to observe single actin filaments.

This process involves excitation light that is introduced into the sample region through either a glass slide or a cover slip. The microscope’s optics are configured so that the light hits the glass/sample interface beyond the critical angle, meaning that all of the light will undergo total internal reflection (TIR). However, even with TIR, some of the light’s electric field, called the evanescent wave, penetrates into the sample by an incredibly short distance — typically around 100 nm — beyond the interface. This means that TIRF microscopy can be used to selectively excite fluorescence in molecules and complexes that are adhered to the interface. However, because the light does not penetrate into the bulk (i.e., background) sample region, this methodology will not excite fluorescence from the huge backdrop of molecules freely floating within this medium.
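The scale of the evanescent field can be estimated from the standard penetration-depth formula, d = λ / (4π √(n₁² sin²θ − n₂²)). The following is a minimal sketch, using illustrative refractive indices for a glass/water interface (the specific values are assumptions, not figures from the article):

```python
import math

def penetration_depth(wavelength_nm, n_glass, n_sample, angle_deg):
    """1/e penetration depth of the evanescent field beyond the interface:
    d = wavelength / (4*pi*sqrt(n1^2*sin^2(theta) - n2^2))."""
    theta = math.radians(angle_deg)
    term = (n_glass * math.sin(theta)) ** 2 - n_sample ** 2
    if term <= 0:
        raise ValueError("Below the critical angle: no total internal reflection.")
    return wavelength_nm / (4 * math.pi * math.sqrt(term))

# Illustrative values: immersion-oil/glass (n ~ 1.515) against water (n ~ 1.33)
n_glass, n_water = 1.515, 1.33
theta_c = math.degrees(math.asin(n_water / n_glass))  # critical angle, ~61.4 deg

# A 488-nm beam a few degrees past the critical angle
d = penetration_depth(488, n_glass, n_water, theta_c + 5)
print(f"critical angle ~{theta_c:.1f} deg, penetration depth ~{d:.0f} nm")
```

With these assumed indices, the depth comes out near the ~100 nm figure quoted above; steeper incidence angles shrink it further.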

TIRF microscopy is thus a 3D-resolved imaging technique. Its X-Y resolution is limited only by diffraction and/or the camera resolution, but the Z-axis sampling depth is much smaller than the diffraction limit. If there is sufficient signal for fast frame acquisition speeds, the important fourth dimension — time — enables dynamic processes, such as actin filament-protein binding, to be observed on a single filament or on a network of filaments, in real time.

In principle, both laser and nonlaser light sources may be used for fluorescence excitation in such TIRF-based applications. However, for experiments with naturally low signal levels, such as single-molecule monitoring, a laser beam’s extreme brightness is a critical advantage. In particular, a laser’s unique spatial brightness means that it is relatively simple to collimate and subsequently focus the beam into the sample with a narrow range of incidence angles, avoiding excitation of the bulk sample.

Through-objective TIRF microscopy

All TIRF microscope setups are based on one of two basic approaches: through-objective lens geometry or the prism-based method. In the former approach, light is directed in an off-axis geometry through an oil-immersion microscope objective so that the angle of incidence at the coverslip/sample interface is greater than the critical angle, as is shown schematically in Figure 1.

Figure 1.
 In TIRF microscopy, excitation light incident beyond the critical angle is completely reflected. The evanescent field at the refractive interface penetrates into the sample by about 100 nm, causing selective excitation of molecules and complexes adhered to this interface. TIRF microscopes are available with a choice of either through-objective excitation or prism excitation options.
In the prism-based method, the orientation of the sample is reversed with respect to the imaging objective. A light beam is introduced to the sample through a prism attached to the cover slip; the geometry of the prism ensures that the incidence angle at the sample is greater than the critical angle.

Depending on the type of experiment being performed, there are advantages and disadvantages to each of the above methods. For example, the prism method limits physical access to the sample. As Lord explained, the Mullins Lab uses a Nikon microscope in the through-objective configuration with a very high numerical aperture (NA = 1.49) for several reasons. “For single-molecule studies, fluorescence signal strength is always a major challenge, particularly since we are following processes that need fast frame rates. So, we need a high-NA objective with a small working distance to maximize light collection efficiency. These objectives require a coverglass of precise thickness and the sample near the top of the coverslip to minimize aberrations.” Lord also noted that care must be taken not to introduce scattering and other losses from viewing fluorescence through the bulk of the sample.

Multiple, simultaneous laser wavelengths

As has been noted, the mechanisms controlling the binding of regulatory proteins to actin filaments are quite complex. To better understand these processes, the Mullins Lab increasingly has been using sophisticated, multiwavelength TIRF-based experiments. In order to image multiple fluorophores, Lord explained, “We can use either multiple sequenced lasers or a scope equipped with multiple cameras — we have setups for both arrangements.” He continued, “Multiple excitation wavelengths that sequence at high rates enable us to selectively image multiple, differently labeled targets using a microscope equipped with a single high-sensitivity camera, and ensures near-perfect image registration.”

When using multiple lasers, the two technical challenges are to perfectly coalign the lasers into the microscope objective and then to be able to switch between different wavelengths. In order to follow fast binding processes in real time, researchers typically must switch wavelengths between alternate camera frames to build up pseudosimultaneous (i.e., interleaved) videos at two or sometimes three laser wavelengths. This switching must be performed with no undesirable dead time (i.e., shifts in the beam path) and without using mechanical shutters or a complex and costly approach, such as an acousto-optic tunable filter.

Lord noted, “As recently as five years ago, we simply didn’t have low-cost options to conduct single molecule studies using multiple laser wavelengths pseudosimultaneously at the requisite frame rates (30 fps) in order to follow critical binding processes.” He added, “Digitally controllable diode or solid-state lasers, hardware sequencing electronics and quad-band optical filters make it possible to achieve nearly simultaneous multicolor imaging with a single camera.”

In 2014, the lab acquired several digitally modulatable smart lasers to enable multiwavelength TIRF microscopy. These lasers included Coherent’s fiber-pigtailed OBIS FP modules that operate at 488, 561 and 640 nm. The lab also acquired the OBIS Galaxy, which enables simple plug-and-play combining of up to eight fiber-coupled lasers into one, single-mode output fiber. As was detailed by Coherent’s Dan Callen and Matthias Schulze in BioPhotonics’ November 2014 issue (“Laser Combiner Enables Scanning Fluorescence Endoscopy,” www.photonics.com/A56915), this passive module enables lasers to be added or subtracted (i.e., hot-swapped) to any fiber-coupled instrument or setup in a few minutes or less via standard fiber connectors, such as FC/UPC and FC/APC connectors.

Figure 2. The OBIS Galaxy (shown with the top cover removed) allows plug-and-play combining of up to eight separate fiber coupled lasers into a single output fiber. Courtesy of Sam Lord.
The timing hardware setup at the Mullins Lab is simple in design because these smart lasers support direct digital modulation. In each experiment, the frame rate is set by the microscope’s high-sensitivity camera, an Andor DU897. The camera’s TTL output trigger pulses are processed in either a programmable Arduino board or an ESio controller, which then directs TTL pulses to fire one of the three lasers without any hardware or software delays. Alternating wavelengths typically are used in most experiments, although any sequence of wavelength frames can easily be programmed using the Arduino and Micro-Manager software.1
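The trigger-routing logic described above is simple enough to sketch in a few lines. This hypothetical Python model mirrors what the microcontroller firmware does: each camera TTL pulse advances a repeating wavelength sequence, and the matching laser fires for that frame (the sequence and function names are illustrative, not the lab’s actual firmware):

```python
from itertools import cycle

# Repeating wavelength sequence in nm; any pattern could be programmed.
SEQUENCE = [488, 561, 640]

def route_triggers(n_frames, sequence=SEQUENCE):
    """Map each incoming camera TTL pulse to the laser that should fire.

    Returns a list of (frame_index, wavelength_nm) pairs, i.e. an
    interleaved 'pseudosimultaneous' acquisition schedule.
    """
    laser = cycle(sequence)
    return [(frame, next(laser)) for frame in range(n_frames)]

# Six frames interleave the three wavelengths twice: 488, 561, 640, 488, ...
schedule = route_triggers(6)
print(schedule)
```

Because the routing is purely a lookup on the frame counter, there is no mechanical shutter and no dead time between frames, which is the point of digitally modulated lasers.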

According to Lord, the flexibility of this arrangement supports future experimental setups that have even greater levels of complexity. In particular, he added, “We may well add a 405-nm laser option in the near future. If/when this arrives, we can simply plug it in and we are ready to go.”

Investigating modulation of actin binding processes

In the team’s work on the binding of actin filaments, this flexible TIRF setup enables the Mullins Lab to conduct experiments with several different approaches. For example, in typical two-wavelength experiments, the actin filament is labeled with one fluorophore, and the protein of interest is labeled with another fluorophore. The protein fluorophore only appears in the TIRF-produced images if/when it binds to the actin sitting on the cover slip. One use of the third wavelength is to image a second protein, which is labeled with a different fluorophore. The image sequences then may reveal, for example, whether the proteins are interspersed at different sites on the filament, or whether the second protein promotes filament growth or branching from a new site. Or, it may reveal that the second protein competitively displaces the first.

In a recently published study,2 Mullins Lab researchers used their multilaser TIRF setup to investigate the details of control mechanisms associated with the binding of tropomyosins to actin filaments. Tropomyosins are coiled-coil proteins whose known functions are to bind actin filaments and thereby regulate multiple cytoskeletal functions — including actin network dynamics near the leading edge of motile cells.

Mullins explained, “The binding of tropomyosins to actin filaments is known to be fundamentally important in actin dynamics. But, we do not yet fully understand how this binding is regulated, especially near the leading edge of migrating cells. Why, for example, are filaments in the lamellum coated with tropomyosin while filaments in the adjacent lamellipod are not?” (Lamellum and lamellipod are distinct, actin-based substructures involved in cell migration.) He went on to state that, prior to his team’s latest studies, previous research demonstrated that tropomyosins inhibit actin nucleation by the Arp2/3 protein complex and that this, in turn, prevented filament severing by the protein cofilin.3,4 “So, we have recently used TIRF and other methods to investigate if and how the Arp2/3 complex and cofilin in turn modulate the binding of tropomyosins to actin filaments,” he said.

Figure 3.
 TIRF images showing Tm1A binding preferentially to the pointed end of single actin filaments. The red signal is from Cy5 labeled Tm1A fluorescence excited at 640 nm, and the green signal is due to Alexa 488 labeled actin excited at 488 nm. Courtesy of J.Y. Hsiao, L.M. Goins, N.A. Petek, R.D. Mullins.
The team members studied these interactions in the specific case of nonmuscle Drosophila tropomyosin protein, Tm1A. They also compared some of these interactions in Tm1A to the same interactions in rabbit skeletal muscle tropomyosin, as other researchers previously have found that mammalian skeletal muscle tropomyosin is the least-effective Arp2/3 inhibitor.2

Figure 3 shows data from dual-wavelength TIRF microscopy applied to single filaments. The data show that Tm1A preferentially binds near the pointed end of actin filaments. By comparing similar data obtained under different experimental conditions, the researchers showed that pointed-end binding depends on the nucleotide state of the actin and on the Tm1A concentration.

Although a complete evaluation of all of the research’s results, conclusions and wider implications falls outside the scope of this article, Mullins summarized some of the key points. “Binding of cytoskeletal tropomyosin to actin filaments turns out to be more complicated than previously appreciated. Both nucleation and spreading of tropomyosin are strongly influenced by the conformation of the actin filament and the presence of other regulatory proteins.” Mullins added that, based on TIRF-produced images and other collected data, “We have been able to propose a model where the cooperation of the severing activity of cofilin and tropomyosin binding helps establish the border between the lamellipod and lamellum.” The role of cofilin in the model referenced by Mullins is shown in Figure 4.

Figure 4.
 These images summarize the role of cofilin in the model proposed by Hsiao et al.2 The branched actin network on the left shows the situation in the absence of cofilin, where tropomyosin binding is blocked by Arp2/3 branches. The branched actin network on the right illustrates that in the presence of cofilin, new pointed ends are created, which allows tropomyosin to bind. Once tropomyosin is bound, it protects the actin filaments from further cofilin severing, possibly resulting in the transition from the lamellipod to the lamellum. Courtesy of J.Y. Hsiao, L.M. Goins, N.A. Petek, R.D. Mullins.
In summary, TIRF microscopy is a well-established technique for imaging single molecular structures and protein complexes. This method also enables their respective dynamics to be observed in real time. By providing a method to rapidly switch between two or more excitation wavelengths, the latest lasers and laser-combining technologies are now enabling researchers to perform TIRF microscopy experiments with a greater number of separate labels. This capability is delivering unique insights into important and multifaceted processes in the study of cell biology.

Meet the author

Dan Callen is a product manager at Coherent Inc. in Santa Clara, Calif.; email: daniel.callen@coherent.com.


1. A.D. Edelstein et al. (2014). Advanced methods of microscope control using μManager software. J Biol Methods, Vol. 1, No. 2, e10.

2. J.Y. Hsiao et al. (2015). Arp2/3 complex and cofilin modulate binding of tropomyosin to branched actin networks. Curr Biol, pp. 1-10.

3. L. Blanchoin et al. (2001). Inhibition of the Arp2/3 complex-nucleated actin polymerization and branch formation by tropomyosin. Curr Biol, Vol. 11, No. 16, pp. 1300-1304.

4. J.H. Iwasa and R.D. Mullins (2007). Spatial and temporal relationships between actin-filament nucleation, capping and disassembly. Curr Biol, Vol. 17, No. 5, pp. 395-406.

18.  Using QCLs for MIR-Based Spectral Imaging — Applications in Tissue Pathology


A quantum cascade laser (QCL) microscope allows for fast data acquisition, real-time chemical imaging and the ability to collect only spectral frequencies of interest. Due to their high-quality, highly tunable illumination characteristics and excellent signal-to-noise performance, QCLs are paving the way for the next generation of mid-infrared (MIR) imaging methodologies.


H. Sreedhar*1, V. Varma*2, A. Graham3, Z. Richards1, F. Gambacorata4, A. Bhatt1,
P. Nguyen1, K. Meinke1, L. Nonn1, G. Guzman1, E. Fotheringham5, M. Weida5,
D. Arnone5, B. Mohar5, J. Rowlette5
Real-time, MIR chemical imaging microscopes could soon become powerful frontline screening tools for practicing pathologists. The ability to see differences in the biochemical makeup across a tissue sample greatly enhances a practitioner’s ability to detect early stages of disease or disease variants. Today, this is accomplished much as it was 100 years ago — through the use of specially formulated stains and dyes in combination with white light microscopy. A new MIR, QCL-based microscope from Daylight Solutions enables real-time, nondestructive biochemical imaging of tissues without the need to perturb the sample with chemical or heat treatments, thus preserving the sample for follow-on fluorescence tagging, histochemical staining or other “omics” testing within the workflow.
MIR chemical imaging is a well-established absorbance spectroscopy technique; it senses the relative amount of light that molecules absorb due to their unique vibrational resonances falling within the MIR portion of the electromagnetic spectrum (i.e., wavelengths from approximately 2 to 15 µm). This absorption can be detected with a variety of MIR detector types and can provide detailed information about the sample’s chemical composition.
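Quantitatively, absorbance at each MIR frequency is derived from the incident and transmitted intensities via the Beer-Lambert relation, A = −log₁₀(I/I₀). A minimal sketch with illustrative detector readings (the numbers are assumptions for demonstration):

```python
import math

def absorbance(transmitted, incident):
    """Absorbance A = -log10(I / I0) at a single MIR frequency."""
    return -math.log10(transmitted / incident)

# Illustrative detector readings at one wavenumber (arbitrary units):
# the sample transmits a quarter of the incident intensity.
I0, I = 1000.0, 250.0
print(f"A = {absorbance(I, I0):.3f}")  # A = 0.602
```

Repeating this measurement across pixels and tuned laser frequencies is what turns a point spectrum into a chemical image.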

The most common instrument for this type of measurement is known as a Fourier transform infrared (FTIR) spectrometer. FTIR systems use a broadband MIR light source, known as a globar, to illuminate a sample; the absorption spectrum is generated by the use of interferometry. Throughout the past decade, FTIR systems have incorporated linear arrays and 2D focal plane arrays (FPAs) in a microscope configuration to enable a technique known as chemical imaging.

19.  Inner Ear Undertakers

Support cells in the inner ear respond differently to two drugs that kill hair cells.

By Kerry Grens | September 1, 2015


The paper
E.L. Monzack et al., “Live imaging the phagocytic activity of inner ear supporting cells in response to hair cell death,” Cell Death Differ, doi:10.1038/cdd.2015.48, 2015.

Killer drugs
A number of commonly used medications can cause hearing loss by killing off cochlear hair cells, which translate sound waves into neural activity. To understand how these cells die, Lisa Cunningham and Elyssa Monzack of the National Institute on Deafness and Other Communication Disorders and colleagues turned to the utricle, a vestibular inner-ear structure involved in balance. Its hair cells are very similar to those of the cochlea, which are notoriously resistant to culturing when mature.

Body bags
The team developed a method to watch hair cells of whole mouse utricles die in real time after exposure to the chemotherapy drug cisplatin or the antibiotic neomycin. In response to the latter, supporting cells, glia-like neighbors of hair cells, appeared to form a phagosome around the corpses and engulf them. “You can see two, three, sometimes four supporting cells advancing simultaneously on that hair cell corpse,” says Cunningham—which suggests that the dying cell is giving off a specific and local signal.

Spilled guts
In contrast, cisplatin-induced hair cell death provoked hardly any phagocytic reaction from supporting cells, about half of which themselves succumbed. Cunningham says this could have clinical implications if dead hair cells then spill their cytoplasmic contents into the tissue, which can result in an immune response that can cause even further damage.

Distress call
Mark Warchol of Washington University in St. Louis says it will be important to identify the signal supporting cells are responding to after neomycin treatment. “There’s some molecular signal by which the hair cell causes [supporting cells] to execute this process. And with cisplatin, they’re just not capable of doing it.”



20.  Inner Ear Cartography

Scientists map the position of cells within the organ of Corti.

By Ruth Williams | September 1, 2015


Age-related hearing loss caused by damage to the sensory hair cells within the cochlea is extremely common, but studying the inner ear is tough. “It’s in the densest bone in the body, so you don’t have access,” says John Brigande of Oregon Health and Science University in Portland. Even if you can extract cells, he says, “there are so darn few of them.”

Despite these technical difficulties, researchers have gleaned gene-expression information about different cell types within the organ of Corti—home to the sensory cells within the cochlea. But “it’s not only important to know what a cell expresses,” says Robert Durruthy-Durruthy, a postdoc in the Stanford University lab of Stefan Heller. “It’s also important to know where it can be found within a tissue.”

To this end, Durruthy-Durruthy, Heller, and postdoc Jörg Waldhaus have derived a 2-D map of organ of Corti cells from neonatal mice. First, the team sorted all cell types across the medial-to-lateral axis (or width) of the organ based on marker gene expression. The approximately 900 sorted cells, representing nine cell types, were then each quantitatively analyzed for the expression of 192 selected genes. Computational analysis of these expression data then enabled reconstruction of the cells’ positions along the organ’s apical-to-basal (length) and medial-to-lateral axes. In principle, the technique, which harnesses gene-expression information to determine cells’ spatial organization, could be applied to generate 2-D maps of any complex tissue, says Durruthy-Durruthy.
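The core idea, inferring a cell's position from its expression profile, can be illustrated with a deliberately simplified single-gene toy example (the paper's actual method combines 192 genes across two axes; this sketch is illustrative only):

```python
# Toy data: 10 cells at known positions 0-9 along one axis. A 'gradient
# gene' is expressed in proportion to position, with a small deterministic
# perturbation standing in for measurement noise.
true_positions = list(range(10))
expression = {pos: pos + 0.2 * (-1) ** pos for pos in true_positions}

# Reconstruction: rank cells by expression level to recover their order
# along the axis, without ever using the positions directly.
inferred_order = sorted(expression, key=expression.get)
print(inferred_order == true_positions)  # True: ordering is recovered
```

With many genes, the same ranking idea generalizes: dimensionality-reduction of the expression matrix yields coordinates that can be matched to anatomical axes.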

Within the mammalian cochlea, apical cells retain regenerative capacity for a few weeks after birth, but basal cells do not. “Spatial mapping allows us to get at the differences [between these cells],” says Brigande, and that could ultimately highlight possible ways to reinstate regeneration in the adult ear. (Cell Reports, 11:1385-99, 2015)

  FROM ORGAN TO SINGLE CELLS: To build a map of cells within the organ of Corti—where sound is translated to neural activity—scientists divide the cochlea in two. Each half of the organ of Corti is then broken up into its constituent cells, which comprise nine cell types (represented by the nine colors) spanning the organ’s medial-to-lateral axis.


21.  Resveratrol Stabilizes Amyloid in Alzheimer’s

Pauline Anderson

September 17, 2015


High doses of purified resveratrol, a polyphenol found in some foods, appear to stabilize levels of amyloid beta (Aβ) in cerebrospinal fluid (CSF) and in plasma in patients with mild to moderate Alzheimer disease (AD), and are safe and well tolerated, a new phase 2 study has shown.

Although it is too soon to start recommending resveratrol supplements to patients, the research indicates that this compound is safe and promising, said lead author R. Scott Turner, MD, PhD, professor of neurology and director of the Memory Disorders Program at Georgetown University Medical Center, Washington, DC, one of 21 medical centers across the United States participating in the study.

“It seems to have some interesting effects, enough to justify further research into this strategy,” he told Medscape Medical News.

The study was published online September 11 as an Open Access article in Neurology.

Natural Compound

Resveratrol is a naturally occurring compound found in red grapes, red wine, dark chocolate, and some other foods, and is widely available as a supplement.

It is believed that resveratrol promotes resilience to stress, as levels increase in plants exposed to severe cold or to fungus, said Dr Turner. Animal research suggests that resveratrol may affect sirtuins, which are proteins that are activated with calorie restriction, which is a form of mild stress.

The study included 119 patients randomly assigned to either high doses of pure synthetic pharmaceutical grade resveratrol that is not available commercially (n = 64) or placebo (n = 55). The resveratrol used in the study was introduced at a dose of 500 mg a day and was increased every 3 months, so that by the end of the 1-year study, subjects were taking 2000 mg a day.
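The escalation schedule is straightforward arithmetic: starting at 500 mg/day and stepping up by 500 mg each quarter reaches 2000 mg/day in the final quarter of the 52-week study. A small sketch (the helper function is hypothetical, not from the trial protocol):

```python
def daily_dose_mg(week, start=500, step=500, interval_weeks=13):
    """Daily resveratrol dose under stepwise escalation (weeks 0-indexed)."""
    return start + step * (week // interval_weeks)

# Dose at the start of each 13-week quarter of the 52-week trial
print([daily_dose_mg(w) for w in (0, 13, 26, 39)])  # [500, 1000, 1500, 2000]
```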

Results showed that at 1 year, the treated group’s levels of Aβ40 in CSF declined from 6574 to 6513 ng/mL, whereas in the placebo group these levels went from 6560 to 5622 ng/mL, a statistically significant difference at week 52 (P = .002).

This difference was also found in secondary analyses of study completers, in the mild dementia subgroup, and in APOE4 carriers and noncarriers.

The treated group’s Aβ40 levels in plasma declined from 163 to 153 ng/mL, whereas in the placebo group these levels went from 165 to 132 ng/mL, also a statistically significant difference (P = .024).

“We can’t prove efficacy from this trial, but we’re looking for some movement in biomarkers, and we actually found that,” which is promising, said Dr Turner. “The major movement we found was in amyloid proteins in blood and CSF that were stabilized by resveratrol treatment compared to placebo, where it trended downhill, which is what happens with Alzheimer’s disease.”

This downhill trend could signal more amyloid being deposited in the brain. In contrast, the fact that resveratrol seemed to stabilize Aβ40 in the CSF and plasma suggests the drug was able to penetrate the blood–brain barrier.

Unfortunately, said Dr Turner, the study could not fund amyloid positron emission tomography scans, which might have shed more light on the Aβ status of subjects.

Although there were no significant effects of the treatment on other amyloid biomarkers, including CSF and plasma Aβ42, trends were similar to the findings with Aβ40.

There was no difference in CSF tau. There was a trend toward an increase in CSF phospho-tau 181 with treatment (P = .08) and in secondary analysis of mild dementia (P = .047).

As for brain volume determined through magnetic resonance imaging (MRI), results showed that volumes declined more in the treatment group (going from 866 to 839 mL) than in the placebo group (going from 850 to 840 mL). This result was “mysterious” and “unexpected,” said Dr Turner.

However, he noted that the same effect has been reported in other AD trials, including those investigating immunotherapy. “The working hypothesis is that by treating AD, we are also decreasing the amount of inflammation and swelling in the brain.”

The study showed no significant effects on the mini mental state exam or on other clinical scales, but the researchers note that the phase 2 trial was not powered to detect differences in clinical outcomes.

However, they did find that the activities of daily living scale declined less in the resveratrol group than in the placebo group. “That’s also promising, because even with this phase 2, we are seeing what we think might be a clinical benefit,” said Dr Turner.

A total of 657 adverse events were reported (355 in the treatment and 302 in the placebo groups), most of which were mild. The most common adverse events were gastrointestinal-related and included nausea and diarrhea.

Weight Loss

The placebo group gained about 1 pound of body weight, whereas the treated group lost almost 2 pounds. At the end of the study, the mean body mass index in the placebo group was 26.1, and in the treated group, it was 25.4.

“The weight loss is concerning, because Alzheimer disease itself causes weight loss, and we don’t want people to continue to lose weight,” said Dr Turner.

It is not clear whether the weight loss was a result of the adverse effects of diarrhea, nausea, and so on, or because of some metabolic effect.

Interestingly, six of the seven new neoplasms seen in study participants occurred in those taking placebo. This is of great interest to cancer researchers, said Dr Turner, adding that resveratrol and similar compounds are being tested in many age-related disorders, including diabetes and neurodegenerative disorders, as well as AD and cancer.

None of the 36 serious adverse events (19 on the drug and 17 on placebo), including three deaths, were deemed to be related to treatment.

Commenting on this study for Medscape Medical News, James Hendrix, PhD, director, Global Science Initiatives, Alzheimer’s Association, said that although the finding that resveratrol might stabilize Aβ40 is encouraging, the study needs to be followed up with a larger and longer phase 3 trial.

“The main focus of this study, and the main question it addressed, was whether a dose at such a high level is safe, and with the exception of some [gastrointestinal] discomfort for some people, it appears to be mostly safe.”

Dr Hendrix noted that the high dose used in the study is equivalent to 1000 bottles of red wine.

He pointed out that the study was relatively small, with 56 subjects completing the study in the treatment group, and only 48 in the placebo group.

The research was supported by a grant from the National Institute on Aging. Dr Turner reports no personal financial interests related to the study. Dr Hendrix is an employee of the Alzheimer’s Association, which has funded resveratrol grants in the past, but did not fund this study.

Neurology. Published online September 11, 2015. Full text


A randomized, double-blind, placebo-controlled trial of resveratrol for Alzheimer disease

ABSTRACT

Objective: A randomized, placebo-controlled, double-blind, multicenter 52-week phase 2 trial of resveratrol in individuals with mild to moderate Alzheimer disease (AD) examined its safety and tolerability and effects on biomarker (plasma Aβ40 and Aβ42; CSF Aβ40, Aβ42, tau, and phospho-tau 181) and volumetric MRI outcomes (primary outcomes) and clinical outcomes (secondary outcomes).

Methods: Participants (n = 119) were randomized to placebo or resveratrol 500 mg orally once daily (with dose escalation by 500-mg increments every 13 weeks, ending with 1,000 mg twice daily). Brain MRI and CSF collection were performed at baseline and after completion of treatment. Detailed pharmacokinetics were performed on a subset (n = 15) at baseline and weeks 13, 26, 39, and 52.

Results: Resveratrol and its major metabolites were measurable in plasma and CSF. The most common adverse events were nausea, diarrhea, and weight loss. CSF Aβ40 and plasma Aβ40 levels declined more in the placebo group than the resveratrol-treated group, resulting in a significant difference at week 52. Brain volume loss was increased by resveratrol treatment compared to placebo.

Conclusions: Resveratrol was safe and well-tolerated. Resveratrol and its major metabolites penetrated the blood–brain barrier to have CNS effects. Further studies are required to interpret the biomarker changes associated with resveratrol treatment.

Classification of evidence: This study provides Class II evidence that for patients with AD, resveratrol is safe, well-tolerated, and alters some AD biomarker trajectories. The study is rated Class II because more than 2 primary outcomes were designated. Neurology® 2015;85:1–9

Caloric restriction prevents aging-dependent phenotypes1 and activates sirtuins (including SIRT1), a highly conserved family of deacetylases that are regulated by NAD+/NADH and thus link energy metabolism to gene expression.2 SIRT1 substrates include FOXO and PGC-1α.3 A screen of SIRT1 activators identified resveratrol (trans-3,4′,5-trihydroxystilbene) as a potent compound.4 Similar to caloric restriction,5,6 resveratrol decreases aging-dependent cognitive decline and pathology in Alzheimer disease (AD) animal models.7,8

Resveratrol is under investigation to prevent age-related disorders including cancer, diabetes mellitus, and neurodegeneration.4,9–12 Due to its low bioavailability but high bioactivity,13,14 we increased the dose to the maximal amount considered safe and well-tolerated for this study.15 We conducted a randomized, placebo-controlled, double-blind, multicenter 52-week phase 2 trial of resveratrol in individuals with mild to moderate AD. The primary objectives were to (1) assess the safety and tolerability of resveratrol; (2) assess effect on plasma and CSF Aβ42 and Aβ40, CSF tau and phospho-tau 181, and volumetric MRI; and (3) examine pharmacokinetics. The secondary objectives were to (1) explore the effects of resveratrol on cognitive, functional, and behavioral outcomes; (2) examine the influence of APOE genotype; and (3) determine whether resveratrol affects insulin and glucose metabolism. We hypothesized that resveratrol would alter AD biomarker trajectories.

RESULTS A total of 179 participants were screened, of whom 60 were not randomized (50 screen-failed and 10 withdrew consent). Participants (119) were randomized as shown (figure 1). A total of 104 completed the study (12.6% dropout), and 77 completed 2 CSF collections (34% dropout). Eighteen participants discontinued treatment early and 15 discontinued the study. The population was English-speaking, 57% female, and 91% Caucasian.

Safety and tolerability. No differences between the resveratrol- and placebo-treated groups were found on vital signs, physical examinations, or neurologic examinations. Routine laboratory tests were normal. A total of 657 AEs (490 mild, 139 moderate, 28 severe) were reported (355 on drug, 302 on placebo) (table 2). A total of 113 of 119 (95%) participants reported at least 1 AE. The most common AEs were nausea and diarrhea (in 42% of individuals on drug vs 33% on placebo, p = 0.35). Few participants reported nausea and diarrhea (the most likely drug-related AEs) that led to treatment discontinuation, a treatment plateau at a lower dosage, or study discontinuation (figure 1). The placebo group gained 0.54 ± 3.2 kg body weight, while the treated group lost 0.92 ± 4.9 kg (mean ± SD, p = 0.038), resulting in a difference in body mass index (BMI). The treated group's BMI was 25.4 ± 4.0 vs the placebo group's 26.1 ± 4.1 at week 52 (mean ± SD, p = 0.047). Thirty-six serious AEs (SAEs) were reported (19 on drug, 17 on placebo), including 27 hospitalizations (14 on drug, 13 on placebo) and 3 deaths (1 on drug, 2 on placebo), none related to study drug. There were no differences in the proportion of participants who experienced at least one SAE (20.3% on drug, 18.2% on placebo), at least one hospitalization (18.8% drug, 16.4% placebo), or died (1.6% drug, 3.6% placebo). Seven new neoplasms were reported (1 on drug, 6 on placebo, p < 0.048) (table 2). Retrospective review of the brain MRIs of a placebo-enrolled participant with malignant glioma, which resulted in death, revealed that the tumor was present at screening. The other two participant deaths were due to lung melanoma (placebo group) and drowning (drug group).

AD duration from year of symptom onset, years, mean (SD): resveratrol 3.9 (2.3) vs placebo 5.5 (2.6), p < 0.001.

Outcomes. At week 52, CSF Aβ40 had declined from 6,574 ± 2,346 to 6,513 ± 2,279 ng/mL in the treated group and from 6,560 ± 2,190 to 5,622 ± 1,736 ng/mL with placebo, resulting in a difference at week 52 (mean ± SD, p = 0.002) (figure 2A). This difference was also found in secondary analyses of study completers (p = 0.002), in the mild dementia subgroup (p = 0.01), and in APOE4 carriers (p = 0.05) and noncarriers (p = 0.01) (table e-2). During the study, plasma Aβ40 (figure 2B) declined from 163 ± 58 to 153 ± 54 ng/mL in the treated group and from 165 ± 55 to 132 ± 54 ng/mL with placebo (mean ± SD, p = 0.024). Secondary analyses by APOE4 genotype revealed an effect of treatment on plasma Aβ40 in APOE4 carriers (p = 0.04) but not noncarriers (table e-2). There were no effects on CSF Aβ42 or plasma Aβ42 (figure 2, C and D), although trends were similar to Aβ40. There was no difference in CSF tau, and there was a trend toward an increase in CSF phospho-tau 181 with treatment (p = 0.08), and in a secondary analysis of mild dementia (p = 0.047) (data not shown). Volumetric MRI revealed that brain volume (excluding CSF, brainstem, and cerebellum) declined more in the treatment group (p = 0.025), with an increase in ventricular volume (p = 0.05), at week 52 (figure 3, A and B). In the treatment group, brain volume decreased from 866 ± 84 to 839 ± 85 mL and ventricular volume increased from 55 ± 24 to 81 ± 24 mL (mean ± SD). With placebo, brain volume decreased from 850 ± 99 to 840 ± 93 mL and ventricular volume increased from 56 ± 19 to 76 ± 25 mL (mean ± SD). Secondary analyses revealed that brain volume declined with treatment in APOE4 carriers (p = 0.02) but not noncarriers (table e-2). Similar results were found with ventricular volume, which increased with treatment in APOE4 carriers (p = 0.05) but not noncarriers. This phase 2 trial (underpowered to detect differences in clinical outcomes) found no significant effects on CDR-SOB, ADAS-cog, MMSE, or NPI.
The drug-treated group's ADCS-ADL declined from 63.7 ± 10.8 to 57.4 ± 12.3, versus 60.5 ± 10.7 to 51.3 ± 14.5 in the placebo group (mean ± SD, p = 0.03), indicating less decline with treatment. No drug effects were found on plasma glucose or insulin metabolism (data not shown). We also analyzed (post hoc) the subset of individuals with CSF Aβ42 < 600 ng/mL at baseline as a proxy of AD amyloid pathology. At week 52, differences between treatment groups persisted for CSF Aβ40 (p = 0.001, total n = 70) and plasma Aβ40 (p = 0.02, n = 83). In this analysis, we also found a treatment effect on CSF Aβ42 (p = 0.02, n = 70), but significance was lost for brain volume loss (p = 0.06, n = 83) and ADCS-ADL (p = 0.055, n = 88).

DISCUSSION High-dose oral resveratrol is safe and well-tolerated. The most common AEs were nausea and diarrhea, but rates were similar to placebo. Weight and fat loss with resveratrol are reported in some preclinical studies,4 but human studies are scarce and of shorter duration. A decrease in body fat and a trend toward weight loss were reported in a 26-week trial of 200 mg/day resveratrol in healthy older participants.33 Weight and fat loss may be related to enhanced mitochondrial biogenesis mediated by SIRT1 activation of PGC-1α.4,10,11 Aβ levels declined as dementia advanced. The altered CSF Aβ40 trajectory suggests that the drug penetrated the blood–brain barrier to have central effects. At week 52, the mean CSF levels of resveratrol, 3G-RES, 4G-RES, and S-RES were 3.3%, 0.4%, 0.4%, and 0.3%, respectively, of plasma levels at the same study visit. At the highest dosage, low µM levels of resveratrol and its metabolites were measured in plasma, with corresponding low nM levels found in CSF. Resveratrol has many targets, some engaged at µM concentrations.4 These findings suggest that a central molecular target may be engaged at nM concentrations. In addition to anti-inflammatory, antioxidant, and anti-Aβ-aggregation activities, putative targets include sirtuin activation with enhanced α-cleavage of amyloid precursor protein34 and promotion of autophagy.35 Further studies of banked CSF, plasma, cell pellets, DNA, and blood mononuclear cells from participants will examine mechanisms.

Resveratrol treatment increased brain volume loss. This finding persisted when participants with weight loss (table 2) were excluded (data not shown). The etiology and interpretation of the brain volume loss observed here and in other studies are unclear, but it was not associated with cognitive or functional decline. In the first human active Aβ immunization trial, antibody responders had greater brain volume loss, and greater volumetric changes were associated with higher antibody titers.36 In the phase 2 bapineuzumab trial, treatment resulted in greater ventricular enlargement, but only in APOE4 carriers.37 In the phase 3 bapineuzumab APOE4 carrier trial and the high-dose noncarrier study, treatment resulted in a trend toward greater brain atrophy.38 Since this phase 2 study lacks consistent changes in clinical outcomes, interpretation of the effects on the trajectories of plasma and CSF Aβ40 and of brain and ventricular volume remains uncertain.

Resveratrol altered levels of CSF Aβ40 (A) and plasma Aβ40 (B) (ng/mL, mean ± SE). Similar but nonsignificant trends were found for CSF Aβ42 (C) and plasma Aβ42 (D) (ng/mL, mean ± SE). Note the difference in scales. Sample sizes are indicated.

Resveratrol increased brain volume loss (A, C) (mL, mean ± SE) with a corresponding increase in ventricular volume (B, D) (mL, mean ± SE). Sample sizes are indicated.

This phase 2 study has limitations. It was designed to determine the safety and tolerability of resveratrol and to examine pharmacokinetics. Although some biomarker trajectories were altered, we found no effects of drug treatment on plasma Aβ42, CSF Aβ42, CSF tau, CSF phospho-tau 181, hippocampal volume, entorhinal cortex thickness, MMSE, CDR, ADAS-cog, NPI, or glucose or insulin metabolism. The altered biomarker trajectories must be interpreted with caution. Although they suggest CNS effects, they do not indicate benefit.

22.  Miniature VHS Solenoid Valves Play Significant Role in the Viability of 3D Bio-Printing of Human Cells  

The rapid development of viable inkjet technology for highly specialised applications, such as printing human cells, continues to generate significant interest. If successful, the realisation of this technology for specialised biological applications, generally known as ‘biofabrication’, has the potential to replace the long-established (and often controversial) process of using animals for testing new drugs. However, there are many challenges to overcome to enable the successful production of a valve-based cell printer for the formation of human embryonic stem cell spheroid aggregates. For example, printing techniques need to be developed which are both controllable and gentle enough to preserve the viability and function of human cells.

One cell printing project at an advanced stage, which has benefited from Lee Products’ miniature VHS solenoid valves and nozzles, is the result of pioneering work at Edinburgh’s Heriot-Watt University. Dr Will Shu of the University’s Biomedical Micro-engineering Group and his colleagues, including Alan Faulkner-Jones, a bioengineering PhD student, have successfully developed a bio-printer which has been demonstrated at the 3D Print show in London. Also involved in the development of the bio-printer are specialists at Roslin Cellab in Midlothian, a leading stem cell technology company.

The valve-based bio-printer has been validated to print highly viable cells in programmable patterns from two different bio-inks, with independent control of the volume of each droplet (with a lower limit of 2 nL, or fewer than five cells per droplet). Human embryonic stem cells (hESCs) were used to make spheroids by overprinting two opposing gradients of bio-ink: one of hESCs in medium and the other of medium alone.
The resulting array of uniform sized droplets with a gradient of cell concentrations was inverted to allow cells to aggregate and form spheroids via gravity.
The resulting aggregates have controllable and repeatable sizes, and consequently they can be made to order for specific applications. Droplets containing between 5 and 140 dissociated cells produced spheroids of 0.25–0.6 mm diameter. The success of the bio-printer demonstrates that a valve-based printing process is gentle enough to maintain stem cell viability, accurate enough to produce spheroids of uniform size, and that printed cells maintain their pluripotency.
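The figures quoted above (a 2 nL minimum droplet containing fewer than five cells) imply an upper bound on the bio-ink cell concentration of about 2.5 million cells/mL. A back-of-envelope check, where the assumed working concentration is hypothetical and not a figure from the article:

```python
droplet_volume_nl = 2.0        # minimum droplet volume quoted in the article
max_cells_per_droplet = 5      # "fewer than five cells per droplet"

NL_PER_ML = 1e6                # nanolitres per millilitre

# Implied upper bound on bio-ink cell concentration (cells per mL):
implied_conc = max_cells_per_droplet / droplet_volume_nl * NL_PER_ML
# -> 2.5e6 cells/mL

# Expected cells per droplet at an assumed (hypothetical) concentration:
assumed_conc = 1.0e6           # cells/mL, for illustration only
cells_in_droplet = assumed_conc * droplet_volume_nl / NL_PER_ML
# -> 2.0 cells per 2 nL droplet
```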
Looking closer at the design of the bio-printer platform reveals two dispensing systems, each comprising a Lee VHS nanolitre solenoid dispensing valve with a Teflon-coated 101.6 µm internal diameter Lee Minstac nozzle, controlled by an Arduino microcontroller. Each dispensing system is attached via flexible tubing to a static pressure reservoir for the bio-ink solution to be dispensed. The dispensing systems and bio-ink reservoirs are mounted within a custom-built enclosure on the tool head of a micrometer-resolution 3-axis 3D printing platform (High-Z S-400, CNC-Step) and controlled by a customized CNC controller (based on the G540, Geckodrive).

A relatively larger nozzle diameter (compared to the size of the cells that are printed) was selected to reduce the amount of shear stress that could be experienced by the cells during the dispensing process. The bio-ink reservoirs were kept as close as possible to the valves in order to minimise the amount of time it would take to charge the system with bio-ink and to purge it at the end of the experiment. A USB microscope is also included to enable visual inspection of the target substrate during the printing process. Due to the type of deposition system used, a direct line of sight view through the nozzle is not possible and therefore the USB microscope is mounted at an offset angle from the cell deposition system assemblies.
Commenting on the development of the bio-printer and the vital role played by Lee Products’ VHS solenoid valves, Dr Will Shu of Heriot-Watt University said: “Printing living cells is extremely challenging and, to the best of our knowledge, this is the first time that these cells have been 3D printed. The technique will allow us to create more accurate human tissue models, which are essential for in-vitro drug development and toxicity testing, and since the majority of drug discovery is targeting human disease, it makes sense to use human tissues.”
“The development of the bio-printer has taken many years of effort and we are very pleased with the performance of Lee’s VHS solenoid valves; they are a vital component within the bio-printer printhead and we recommend them to our colleagues working on similar projects.”
Dr Shu added, “We also acknowledge the support and interaction from our contacts at Lee Products which has helped us to overcome the challenges of this project.”
This highly specialised application is an excellent example of the performance of Lee’s range of VHS Micro-Dispense Solenoid Valves which provide precise, repeatable, non-contact dispensing of fluids in the nanolitre to microlitre range. The valves feature a number of port configurations to facilitate quick and convenient connections to Lee’s 062 MINSTAC fittings and press-on tubing. The 062 MINSTAC outlet port can be used with Lee 062 MINSTAC tubing or atomising nozzles. Custom configurations and voltages are also available to suit specific applications.




Posted on September 23, 2015 by Healthinnovations

Ten million Canadians are living with diabetes or pre-diabetes. The Canadian Diabetes Association reports that more than 20 Canadians are newly diagnosed with the disease every hour of every day. It is also the seventh leading cause of death in Canada, with associated health-care costs estimated at nearly $9 billion a year. Type 2 diabetes accounts for 90 per cent of all cases, increasing the risk of blindness, nerve damage, stroke, heart disease and several other serious health conditions.

Insulin secretion from β cells of the pancreatic islets of Langerhans is impaired in type 2 diabetes (T2D). Evidence suggests that metabolic amplification of insulin secretion occurs distally in the secretory pathway, possibly at the calcium-dependent exocytotic site. The regulation and amplification of insulin secretion is therefore an important target for researchers around the world.

Now, researchers from the University of Alberta have identified a new molecular pathway that manages the amount of insulin produced by pancreatic cells: essentially a ‘dimmer’ switch that adjusts how much or how little insulin is secreted when blood sugar increases. The team state that the dimmer appears to be lost in type 2 diabetes; however, it can be restored and ‘turned back on’, reviving proper control of insulin secretion from islet cells of people with type 2 diabetes. The open-access study is published in the Journal of Clinical Investigation.

Previous studies show that the canonical mechanism of glucose-stimulated insulin secretion, involving increases in metabolism-derived ATP, inhibition of KATP channels, and activation of voltage-dependent calcium channels (VDCCs), was first introduced more than 30 years ago and remains a cornerstone mechanism for the triggering of insulin secretion. The KATP channel mechanism does not define the entire secretory response, however: multiple metabolic coupling intermediates have been proposed as factors that amplify the secretory response to the Ca2+-triggered exocytotic signal, with the net export of mitochondrial substrates being of particular interest.

The current study examined pancreatic islet cells from 99 human organ donors. Results show that the glucose-dependent amplification of exocytosis in human β cells, which is disrupted in type 2 diabetes, requires mitochondrial export of isocitrate, which generates cytosolic NADPH and GSH. These then act through SENP1 to amplify the exocytosis of insulin, thereby controlling glucose homeostasis. The lab then validated these findings in a transgenic animal model.

The researchers state that the discovery is a potential game-changer in type 2 diabetes research, leading to a new way of thinking about the disease and its future treatment. They go on to add that understanding the islet cells in the pancreas that make insulin, how they work, and how they can fail, could lead to new ways to treat the disease, delaying or even preventing diabetes.

The team surmise that although the ability to restore and fix the dimmer switch in islet cells has been proven at the molecular level, finding a way to translate those findings into clinical use could yet take decades. Despite this, the group conclude that the findings mark an important new way forward.

Source: University of Alberta


Pancreatic islet–specific knockout of Senp1 blunts insulin secretion due to an impaired amplification of exocytosis. Proposed pathway linking mitochondrial export of (iso)citrate, glutathione biosynthesis (blue), and glutathione reduction (orange) pathways to the amplification of insulin exocytosis (yellow). Isocitrate-to-SENP1 signaling amplifies insulin secretion and rescues dysfunctional β cells. MacDonald et al 2015.

24.  Nanotechnology

Nanotechnology is science, engineering, and technology conducted at the nanoscale, which is about 1 to 100 nanometers.

Physicist Richard Feynman, the father of nanotechnology.

Nanoscience and nanotechnology are the study and application of extremely small things and can be used across all the other science fields, such as chemistry, biology, physics, materials science, and engineering.

The ideas and concepts behind nanoscience and nanotechnology started with a talk entitled “There’s Plenty of Room at the Bottom” by physicist Richard Feynman at an American Physical Society meeting at the California Institute of Technology (CalTech) on December 29, 1959, long before the term nanotechnology was used. In his talk, Feynman described a process in which scientists would be able to manipulate and control individual atoms and molecules. Over a decade later, in his explorations of ultraprecision machining, Professor Norio Taniguchi coined the term nanotechnology. It wasn’t until 1981, with the development of the scanning tunneling microscope that could “see” individual atoms, that modern nanotechnology began.

Medieval stained glass windows are an example of  how nanotechnology was used in the pre-modern era. (Courtesy: NanoBioNet)

It’s hard to imagine just how small nanotechnology is. One nanometer is a billionth of a meter, or 10⁻⁹ of a meter. Here are a few illustrative examples:

  • There are 25,400,000 nanometers in an inch
  • A sheet of newspaper is about 100,000 nanometers thick
  • On a comparative scale, if a marble were a nanometer, then one meter would be the size of the Earth
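The comparisons above are straightforward unit arithmetic. The snippet below (illustrative only) reproduces them from the definition 1 nm = 10⁻⁹ m; the newspaper thickness and Earth diameter are approximate figures:

```python
NM_PER_M = 1e9                   # nanometers per meter (1 nm = 1e-9 m)

# An inch is exactly 0.0254 m:
nm_per_inch = 0.0254 * NM_PER_M          # ~25,400,000 nm in an inch

# A sheet of newspaper is roughly 0.1 mm thick:
newspaper_nm = 1e-4 * NM_PER_M           # ~100,000 nm

# Marble-to-Earth comparison: scaling a ~1 cm marble by the same factor
# that takes a nanometer to a meter (1e9) gives something Earth-sized:
marble_m = 0.01
scaled_marble_m = marble_m * NM_PER_M    # 1e7 m
earth_diameter_m = 1.2742e7              # mean Earth diameter, ~12,742 km
```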

Nanoscience and nanotechnology involve the ability to see and to control individual atoms and molecules. Everything on Earth is made up of atoms—the food we eat, the clothes we wear, the buildings and houses we live in, and our own bodies.

But something as small as an atom is impossible to see with the naked eye. In fact, it’s impossible to see with the microscopes typically used in a high school science class. The microscopes needed to see things at the nanoscale were invented relatively recently, about 30 years ago.

Once scientists had the right tools, such as the scanning tunneling microscope (STM) and the atomic force microscope (AFM), the age of nanotechnology was born.

Although modern nanoscience and nanotechnology are quite new, nanoscale materials were used for centuries. Alternate-sized gold and silver particles created colors in the stained glass windows of medieval churches hundreds of years ago. The artists back then just didn’t know that the process they used to create these beautiful works of art actually led to changes in the composition of the materials they were working with.

Today’s scientists and engineers are finding a wide variety of ways to deliberately make materials at the nanoscale to take advantage of properties enhanced relative to their larger-scale counterparts, such as higher strength, lighter weight, increased control of the light spectrum, and greater chemical reactivity.


Education and workforce development are critical to the advancement of nanotechnology and are encompassed within one of the four goals of the National Nanotechnology Initiative (NNI): “Develop and sustain educational resources, a skilled workforce, and a dynamic infrastructure and toolset to advance nanotechnology.” As new knowledge is created through exploratory research and development, it is a challenge to translate this understanding into the educational system and to the broader public. Over the past fifteen years of the NNI, there have been several activities that have made significant contributions in this area: public outreach and informal education by the NSF Nanoscale Informal Science Education Network (NISE Net) through programs such as NanoDays; technician and workforce training through programs such as the NSF Advanced Technological Education Centers including the Nanotechnology Applications and Career Knowledge (NACK) Network; countless university courses and degree programs; and the emerging incorporation of nanoscience into the K-12 science education standards in states such as Virginia. To build upon this strong foundation, several announcements were made last week at the White House Forum on Small Business Challenges to Commercializing Nanotechnology, including the establishment of a Nano and Emerging Technologies Student Leaders conference, a webinar series focused on providing information for teachers, and a web portal of nanoscale science and engineering educational resources.

25.  Antimicrobial film for future implants


(Nanowerk News) The implantation of medical devices is not without risks. Bacterial or fungal infections can occur, and the body’s strong immune response may lead to rejection of the implant. Researchers at Unit 1121 “Biomaterials and Bio-engineering” (Inserm/Strasbourg University) have succeeded in creating a biofilm with antimicrobial, antifungal and anti-inflammatory properties. It may be used to cover titanium implants (orthopaedic prostheses, pacemakers…) to prevent or control post-operative infections. Other frequently used medical devices that cause numerous infectious problems, such as catheters, may also benefit.
These results are published in the journal Advanced Healthcare Materials (“Harnessing the Multifunctionality in Nature: A Bioactive Agent Release System with Self-Antimicrobial and Immunomodulatory Properties”).

26.  Characterizing the forces that hold everything together: UMass Amherst physicists offer new open source calculations for molecular interactions


UMass Amherst physicists, with others, provide a new software tool and database to help materials designers with the difficult calculations needed to predict the magnitude of van der Waals interactions between anisotropic or directionally dependent bodies such as those illustrated, with long-range torques. Though small, these forces are dominant on the nanoscale.

CREDIT: UMass Amherst

As electronic, medical and molecular-level biological devices grow smaller and smaller, approaching the nanometer scale, the chemical engineers and materials scientists devising them often struggle to predict the magnitude of molecular interactions on that scale and whether new combinations of materials will assemble and function as designed.


Amherst, MA | Posted on September 23rd, 2015

This is because the physics of interactions at these scales is difficult, say physicists at the University of Massachusetts Amherst, who with colleagues elsewhere this week unveil a project known as Gecko Hamaker, a new computational and modeling software tool plus an open science database to aid those who design nano-scale materials.

In the cover story in today’s issue of Langmuir, Adrian Parsegian, Gluckstern Chair in physics, physics doctoral student Jaime Hopkins and adjunct professor Rudolf Podgornik on the UMass Amherst team report calculations of van der Waals interactions between DNA, carbon nanotubes, proteins and various inorganic materials, with colleagues at Case Western Reserve University and the University of Missouri who make up the Gecko-Hamaker project team.

To oversimplify, van der Waals forces are the intermolecular attractions between atoms, molecules, and surfaces that control interactions at the molecular level. The Gecko Hamaker project makes available to its online users a large variety of calculations of nanometer-level interactions that help to predict molecular organization and evaluate whether new combinations of materials will actually stick together and work.
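To give a feel for the magnitudes involved, the standard textbook result for the nonretarded van der Waals attraction between two flat half-spaces separated by a gap d is a pressure P = A / (6πd³), where A is the Hamaker coefficient. The sketch below is an illustration of that classical formula only, not of the Gecko Hamaker code itself, and the 10⁻²⁰ J coefficient is a typical order-of-magnitude value rather than a number from the article:

```python
import math

def vdw_pressure(hamaker_J: float, gap_m: float) -> float:
    """Attractive van der Waals pressure (Pa) between two parallel
    half-spaces across a gap, in the nonretarded limit:
    P = A / (6 * pi * d^3)."""
    return hamaker_J / (6 * math.pi * gap_m**3)

A = 1e-20  # typical Hamaker coefficient for condensed media across water, J
for d_nm in (1, 5, 10):
    P = vdw_pressure(A, d_nm * 1e-9)
    print(f"d = {d_nm:2d} nm  ->  P ~ {P:.2e} Pa")
```

The cubic dependence on separation is why these forces, though weak at micrometer distances, dominate at the nanoscale.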

In this work supported by the U.S. Department of Energy, Parsegian and colleagues say their open-science software opens a whole range of insights into nano-scale interactions that materials scientists haven’t been able to access before.

Parsegian explains, “Van der Waals forces are small, but dominant on the nanoscale. We have created a bridge between deep physics and the world of new materials. All miniaturization, all micro- and nano-designs are governed by these forces and interactions, as is behavior of biological macromolecules such as proteins and lipid membranes. These relationships define the stability of materials.”

He adds, “People can try putting all kinds of new materials together. This new database and our calculations are going to be important to many different kinds of scientists interested in colloids, biomolecular engineering, those assembling molecular aggregates and working with virus-like nanoparticles, and to people working with membrane stability and stacking. It will be helpful in a broad range of other applications.”

Podgornik adds, “They need to know whether different molecules will stick together or not. It’s a complicated problem, so they try various tricks and different approaches.” One important contribution of Gecko Hamaker is that it includes experimental observations seemingly unrelated to the problem of interactions that help to evaluate the magnitude of van der Waals forces.

Podgornik explains, “Our work is fundamentally different from other approaches, as we don’t talk only about forces but also about torques. Our methodology allows us to address orientation, which is more difficult than simply describing van der Waals forces, because you have to add a lot more details to the calculations. It takes much more effort on the fundamental level to add in the orientational degrees of freedom.”

He points out that their methods also allow Gecko Hamaker to address non-isotropic, or non-spherical and other complex molecular shapes. “Many molecules don’t look like spheres, they look like rods. Certainly in that case, knowing only the forces isn’t enough. You must calculate how torque works on orientation. We bring the deeper theory and microscopic understanding to the problem. Van der Waals interactions are known in simple cases, but we’ve taken on the most difficult ones.”

Hopkins, the doctoral student, notes that as an open-science product, Gecko Hamaker’s calculations and data are transparent to users, and user feedback improves its quality and ease of use, while also verifying the reproducibility of the science.



27.  Researchers have succeeded in creating a biofilm with antimicrobial, antifungal and anti-inflammatory properties. (Image: Inserm / E.Falett)
Implantable medical devices (prosthesis/pacemakers) are an ideal interface for micro-organisms, which can easily colonize their surface. As such, bacterial infection may occur and lead to an inflammatory reaction. This may cause the implant to be rejected. These infections are mainly caused by bacteria such as Staphylococcus aureus, originating in the body, and Pseudomonas aeruginosa. These infections may also be fungal or caused by yeasts. The challenge presented by implanting medical devices in the body is preventing the occurrence of these infections, which lead to an immune response that compromises the success of the implant. Antibiotics are currently used during surgery or to coat certain implants. However, the emergence of multi-resistant bacteria now restricts their effectiveness.
A biofilm invisible to the naked eye…
It is within this context that researchers at the “Bioengineering and Biomaterials” Unit 1121 (Inserm/Strasbourg University) with four laboratories1 have developed a biofilm with antimicrobial and anti-inflammatory properties. Researchers have used a combination of two substances: polyarginine (PAR) and hyaluronic acid (HA), to develop and create a film invisible to the naked eye (between 400 and 600 nm thick) that is made of several layers. As arginine is metabolised by immune cells to fight pathogens, it has been used to communicate with the immune system to obtain the desired anti-inflammatory effect. Hyaluronic acid, a natural component of the body, was also chosen for its biocompatibility and inhibiting effect on bacterial growth.
…with embedded antimicrobial peptides,
The film is also unique in that it embeds natural antimicrobial peptides, in particular catestatin, to prevent possible infection around the implant. This is an alternative to the antibiotics that are currently used. As well as having a significant antimicrobial role, these peptides are not toxic to the body into which they are secreted. They are capable of killing bacteria by creating holes in their cell walls, leaving them no means of counter-attack.
…on a thin silver coating,
In this study, researchers show that poly(arginine) associated with hyaluronic acid possesses antimicrobial activity against Staphylococcus aureus (S. aureus) for over 24 hours. “In order to prolong this activity, we deposited a precursor silver coating before applying the film. Silver is an anti-infectious material currently used on catheters and dressings. This strategy allows us to extend the antimicrobial activity in the long term,” explains Philippe Lavalle, Research Director at Inserm.
…effectively reducing inflammation, preventing and controlling infection
The results of the numerous tests performed on this new film show that it reduces inflammation and prevents the most common bacterial and fungal infections.
On the one hand, researchers demonstrate, through contact with human blood, that the presence of the film on the implant suppresses the activation of inflammatory markers normally produced by immune cells in response to the implant. Moreover, “the film inhibits the growth and long-term proliferation of staphylococcal bacteria (Staphylococcus aureus), yeast strains (Candida albicans) and fungi (Aspergillus fumigatus) that frequently cause implant-related infection,” emphasises Philippe Lavalle.
Researchers conclude that this film may be used in vivo on implants or medical devices within a few years to control the complex microenvironment surrounding implants and to protect the body from infection.
Source: INSERM (Institut national de la santé et de la recherche médicale)

28.  Quantum dots light up under strain


Semiconductor nanocrystals, or quantum dots, are tiny, nanometer-sized particles with the ability to absorb light and re-emit it with well-defined colors. With low-cost fabrication, long-term stability and a wide palette of colors, they have become building blocks of display technology, improving the image quality of TVs, tablets, and mobile phones. Exciting quantum dot applications are also emerging in the fields of green energy, optical sensing, and bio-imaging.

Prospects have become even more appealing with a paper entitled “Band structure engineering via piezoelectric fields in strained anisotropic CdSe/CdS nanocrystals,” published in the journal Nature Communications last July. An international team formed by scientists at the Italian Institute of Technology (Italy), the University Jaume I (Spain), the IBM research lab in Zurich (Switzerland) and the University of Milano-Bicocca (Italy) demonstrated a radically new approach to manipulating the light emission of quantum dots.

The traditional operating principle of quantum dots is based on the so-called quantum confinement effect, whereby the particle size determines the color of the emitted light. The new strategy relies on a completely different physical mechanism: a strain-induced electric field inside the quantum dots. It is created by growing a thick shell around the dots, which compresses the inner core and creates an intense internal electric field. This field then becomes the dominant factor in determining the emission properties.
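The traditional size-to-color relationship can be illustrated with a back-of-the-envelope estimate based on the Brus confinement model. This is only a sketch: the CdSe material parameters below are approximate literature values assumed for illustration, and the model omits the Coulomb correction, so the numbers are qualitative.

```python
# Illustrative quantum-confinement estimate: emission energy of a spherical
# CdSe dot rises as the radius shrinks (Brus model, confinement term only).
import math

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19  # J per eV
M_E = 9.1093837015e-31      # electron rest mass, kg

def emission_energy_ev(radius_nm,
                       bulk_gap_ev=1.74,  # CdSe bulk band gap (approx.)
                       m_e_eff=0.13,      # effective electron mass (approx.)
                       m_h_eff=0.45):     # effective hole mass (approx.)
    """Bulk gap plus the particle-in-a-sphere confinement term."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2 / (2 * r**2)) * (
        1 / (m_e_eff * M_E) + 1 / (m_h_eff * M_E))
    return bulk_gap_ev + confinement / E_CHARGE

def wavelength_nm(energy_ev):
    # lambda = hc / E, with hc = 1239.84 eV*nm
    return 1239.84 / energy_ev

for radius in (1.5, 2.5, 4.0):
    e = emission_energy_ev(radius)
    print(f"radius {radius} nm -> {e:.2f} eV, ~{wavelength_nm(e):.0f} nm")
```

Smaller dots emit higher-energy (bluer) light, which is exactly the degree of freedom that the strain-induced field supplements.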

The result is a new generation of quantum dots whose properties go beyond those enabled by quantum confinement alone. This broadens the application scope not only of the well-known CdSe/CdS material system but also of other materials. “Our findings add an important new degree of freedom to the development of quantum dot-based technological devices,” the researchers say. “For example, the elapsed time between light absorption and emission can be extended to more than 100 times that of conventional quantum dots, which opens the way towards optical memories and new smart-pixel devices. The new material could also lead to optical sensors that are highly sensitive to electrical fields in the environment on the nanometer scale.”


More information: “Band structure engineering via piezoelectric fields in strained anisotropic CdSe/CdS nanocrystals” Nat Commun. 2015 Jul 29; 6:7905. DOI: 10.1038/ncomms8905

Journal reference: Nature Communications

Read more at: http://phys.org/news/2015-09-quantum-dots-strain.html#jCp

29. Turing Reaction-diffusion Model Confirmed



In 1952, the legendary British mathematician and cryptographer Alan Turing proposed a model in which complex patterns form through the chemical interaction of two diffusing reagents. Russian scientists have now shown that the corneal surface nanopatterns of 23 insect orders fit this model completely.

Their work is published in the Proceedings of the National Academy of Sciences.

The work was done by a team working in the Institute of Protein Research of the Russian Academy of Sciences (Pushchino, Russia) and the Department of Entomology at the Faculty of Biology of the Lomonosov Moscow State University. It was supervised by Professor Vladimir Katanaev, who also leads a lab at the University of Lausanne, Switzerland. Artem Blagodatskiy and Mikhail Kryuchkov selected and prepared the insect corneal samples and analyzed the data. Yulia Lopatina from the Lomonosov Moscow State University played the role of expert entomologist, while Anton Sergeev performed the atomic force microscopy.

The initial goal of the study was to characterize the antireflective three-dimensional nanopatterns covering insect eye corneae with respect to the taxonomy of the studied insects, and to gain insight into their possible evolutionary path.

The result was surprising: pattern morphology did not correlate with an insect’s position on the evolutionary tree. Instead, the scientists characterized four main morphological corneal nanopatterns, as well as transition forms between them, found throughout the insect class. Another finding was that all the observed forms of the patterns directly matched the array of patterns predicted by the famous Turing reaction-diffusion model published in 1952, which the scientists confirmed not by mere observation but by mathematical modeling as well. The model assumes that complex patterns form through the chemical interaction of two diffusing reagents.
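The kind of two-reagent system invoked here can be sketched with a minimal Gray-Scott reaction-diffusion simulation, a standard Turing-type model. The parameters below are generic demonstration values, not the ones the authors fitted to insect corneae; the point is only that two diffusing, reacting species spontaneously produce spatial patterns.

```python
# Minimal two-reagent (Gray-Scott) reaction-diffusion sketch.
# Generic demo parameters; a spatially varying v-field indicates
# that a Turing-type pattern has formed from a small perturbation.
import numpy as np

def laplacian(z):
    # 5-point stencil with periodic boundaries
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

def simulate(n=64, steps=2000, du=0.16, dv=0.08,
             feed=0.035, kill=0.060, seed=0):
    rng = np.random.default_rng(seed)
    u = np.ones((n, n))
    v = np.zeros((n, n))
    # seed a perturbed square in the middle so patterns can nucleate
    c = n // 2
    u[c-5:c+5, c-5:c+5] = 0.50
    v[c-5:c+5, c-5:c+5] = 0.25
    u += 0.02 * rng.random((n, n))
    for _ in range(steps):
        uvv = u * v * v
        u += du * laplacian(u) - uvv + feed * (1 - u)
        v += dv * laplacian(v) + uvv - (feed + kill) * v
    return u, v

u, v = simulate()
print("v range:", v.min(), v.max())
```

Varying the feed and kill rates shifts the output between spots, stripes and labyrinths, echoing the family of corneal nanopattern morphologies described above.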

The analysis of corneal surface nanopatterns in 23 insect orders was performed by means of atomic force microscopy, with resolution down to single nanometers.

“This method allowed us to drastically expand the previously available data, acquired through scanning electron microscopy; it also made it possible to characterize surface patterns directly, rather than through analysis of metal replicas. When possible, we always examined corneae belonging to distinct families of one order to get insight into intra-order pattern diversity,” Blagodatskiy said.

The main implication of the work is an understanding of the mechanisms underlying the formation of biological three-dimensional nanopatterns, and the first demonstration of the Turing reaction-diffusion model acting in the bio-nanoworld.

Interestingly, the Turing nanopatterning mechanism is common not only to the insect class but also to spiders, scorpions and centipedes; in other words, it is universal among arthropods. Given the antireflective properties of insect corneal nanocoatings, the revealed mechanisms pave the way for the design of artificial antireflective nanosurfaces.

“A promising future development of the project is planned to be a genetic analysis of corneal nanopattern formation on the platform of the well-studied Drosophila melanogaster (fruit fly) model. Wild-type fruit flies possess a nipple-array type of nanocoating on their eyes,” Blagodatskiy summarized.

Different combinations of overexpressed and underexpressed proteins known to be responsible for corneal development in Drosophila may alter the nipple pattern to another pattern type and thus shed light on the chemical nature of the compounds forming the Turing-type structures on insect eyes. Identifying the proteins and/or other agents responsible for nanopattern formation will be a direct clue for the artificial design of nanocoatings with desired properties. Another direction of project development will be the comparison

Citation: Artem Blagodatski, Anton Sergeev, Mikhail Kryuchkov, Yuliya Lopatina, Vladimir L. Katanaev. Diverse set of Turing nanopatterns coat corneae across insect lineages. Proceedings of the National Academy of Sciences, 2015; 112 (34): 10750. DOI: 10.1073/pnas.1505748112

30.  Germ-free mice gain weight when transplanted with gut microbes from obese humans, in a diet-dependent manner.

By Ed Yong | September 5, 2013

Physical traits like obesity and leanness can be “transmitted” to mice by inoculating the rodents with human gut microbes. A team of scientists led by Jeffrey Gordon from the Washington University School of Medicine in St. Louis found that germ-free mice put on weight when they were transplanted with gut microbes from an obese person, but not with those from a lean person.

The team also showed that a “lean” microbial community could infiltrate and displace an “obese” one, preventing mice from gaining weight so long as they were on a healthy diet. The results were published today (September 5) in Science.

Gordon emphasized that there are many causes of obesity beyond microbes. Still, he said that studies like these “provide a proof-of-principle for ameliorating diseases.” By understanding how microbes and food interact to influence human health, researchers may be able to design effective probiotics that can prevent obesity by manipulating the microbiome.

The human gut is home to tens of trillions of microbes, which play crucial roles in breaking down food and influencing health. Gordon’s group and others have now shown that obese and lean people differ in their microbial communities. Just last week, the MetaHIT consortium showed that a quarter of Danish people studied had a very low number of bacterial genes in their gut—an impoverished state that correlated with higher risks of both obesity and metabolic diseases.

However, descriptive studies like these cannot tell scientists whether such microbial differences are the cause of obesity or a consequence of it. “A lot of correlations are being made between microbe community configurations and disease states, but we don’t know if these are casual or causal,” said Gordon. By using germ-free mice as living laboratories, Gordon and his colleagues aim to start moving “beyond careful description to direct tests of function,” he added.

“It’s extremely exciting and powerful to go from descriptive studies in humans to mechanistic studies in mice,” said Oluf Pedersen, an endocrinologist who was involved in the MetaHIT studies. “That’s beautifully illustrated in this paper.”

Gordon lab graduate student Vanessa Ridaura inoculated the germ-free mice with gut microbes from four pairs of female twins; in each pair, one twin was obese and the other had a healthy weight. Mice that received the obese humans’ microbes gained more body fat, put on more weight, and showed stronger molecular signs of metabolic problems.

Once the transplanted microbes had taken hold in their guts, but before their bodies had started to change, Ridaura housed the two groups of mice together. Mice regularly eat one another’s feces, so these cage-mates inadvertently introduced their neighbors’ microbes to their own gut communities. Gordon called this the “Battle of the Microbiota.”

These co-housing experiments prevented the mice with “obese” microbes from putting on weight or developing metabolic problems, while those with the “lean” microbes remained at a healthy weight.

Gordon explains that the obese microbe communities, being less diverse than the lean ones, leave many “job openings” within the gut—niches that can be filled by the diverse lean microbes when they invade. “And obviously, those job openings aren’t there in the richer, lean gut community,” he said. “That’s why the invasion is one-directional.”

“But if invasion is so robust, why then isn’t there an epidemic of leanness?” asked Gordon. “The answer appears to be, in part, diet.”

In her initial experiments, Ridaura fed the mice standard chow, which is high in fiber and plant matter. She also blended up two new recipes, designed to reflect extremes of saturated fat versus fruit and vegetable consumption associated with Western diets.

If the mice were fed food low in fat and high in fruit and vegetables, Ridaura found the same results as before—the lean microbes could cancel out the effect of the obese ones. But when the mice were fed food low in fruit and vegetables and high in saturated fat, those with obese gut microbes still gained weight, no matter who their neighbors were.

This may be because the best colonizers among the lean communities were the Bacteroidetes—a group of bacteria that are excellent at breaking down the complex carbohydrates found in plant foods. When the mice ate plant-rich diets, the Bacteroidetes could fulfill a metabolic role that was vacant in the obese gut communities. When the mice ate unhealthy, plant-poor diets, “these vacancies weren’t there and the organisms couldn’t establish themselves,” said Gordon.

“We’re now trying to identify particular sets of organisms that can do what the complete community does,” Gordon added. The ultimate goal is to create a set of specific bacteria that could be safely administered as a probiotic that, along with a defined diet, could help these beneficial microbes to establish themselves and might effectively prevent weight gain.

“This study is an inspiration for us at MetaHIT,” said Pedersen. “It would be very interesting to take stools or cultures from extreme cases within our samples—people who have very rich or very poor gut microbiomes—and inoculate them into germ-free mice. . . . Now that we have a proof-of-concept, it’s obvious for us to follow up our findings through these studies.”

V.K. Ridaura et al., “Gut microbiota from twins discordant for obesity modulate metabolism in mice,” Science, doi: 10.1126/science.1241214, 2013.

31. Gut Microbes Treat Illness

Oral administration of a cocktail of bacteria derived from the human gut reduces colitis and allergy-invoked diarrhea in mice.

By Chris Palmer | July 10, 2013

Micrograph of a germ-free mouse colon colonized with 17 strains of human-derived Clostridia. Kenya Honda

An astounding array of microorganisms colonizes the human gut; our large intestines alone are home to 10¹⁴ bacteria from more than 1,000 species. Though scientists have long attempted to manipulate these microbial populations to affect health, probiotics have failed to reliably treat disease. However, a new study published today in Nature reports that a blend of specially selected strains of Clostridium bacteria derived from humans can significantly reduce symptoms of certain immune disorders in mice.

“[This work] shows that microbes can influence the balance and architecture of the immune system of their host,” said Sarkis Mazmanian, an immunologist at the California Institute of Technology who did not participate in the research. “I think it has tremendous potential for ameliorating human disease.”

Mammalian gut microbiota—the community of microorganisms that inhabit the gastrointestinal tract—have a long, intimate, and mostly symbiotic history with their hosts. The ubiquitous bugs are integral to some of the most basic of physiological functions, including metabolism and immune system development and function. However, specific gut microbes have also been linked to autoimmune disorders, obesity, inflammatory bowel disease, and possibly even neurological disorders. “It’s clear that gut microbes can affect many, many aspects of our physiology,” said Mazmanian.

Senior author Kenya Honda and his team previously reported that colonization of germ-free mice—mice that lack a microbiota—with a cocktail of a few dozen strains of Clostridium bacteria derived from wild-type mice promoted the activity of regulatory T cells (Treg) in the colon. Treg cells produce important anti-inflammatory immune molecules, including interleukin-10 and inducible T-cell co-stimulator, to prevent an overreaction of the immune system, and disruption of Treg cells is known to play a role in autoimmune disorders such as colitis, Crohn’s disease, food allergies, and type II diabetes. Indeed, mice treated with the Clostridium cocktail appeared more resistant to allergies and intestinal inflammation.

The Clostridia are best known for the species that produce tetanus and botulism toxins. “Clostridia are very diverse bacteria, and include some pathogens,” said Alexander Rudensky, an immunologist at the Memorial Sloan-Kettering Cancer Center in New York and a cofounder of Vedanta Biosciences, which he launched with the paper’s authors in 2010. “So their role [in disease] may be surprising to immunologists and the public, but not to microbiologists.”

To extend the clinical relevance of the previous results, Honda’s group repeated their experiment using Clostridium derived from a sample of human feces. As in the previous study, germ-free mice treated with specially selected strains of human-derived Clostridia displayed a significant increase in Treg cells. The treated mice also displayed reduced symptoms of colitis and allergy-induced diarrhea.

“This is a terrific advance to their previous studies where they showed that mouse microbiota can induce regulatory T cells,” said Mazmanian. “In this paper they’ve extended that to bacteria that come from humans, which they have tested in mice.”

The researchers used RNA sequencing of gut-tissue samples from mice treated with human microbes to identify 17 specific non-virulent strains of Clostridium responsible for the increased production of Treg cells. They then sequenced gut metagenomes from human ulcerative colitis patients and found that these tended to carry lower levels of the 17 strains, with 5 of the 17 showing a statistically significant reduction. “This work lays out the first instance of a rationally designed drug candidate isolated from human microbiota, which can be given to animals to treat autoimmune disease,” said study coauthor Bernat Olle, the chief operating officer of Vedanta Biosciences, which is developing therapies based on the new research.

Investigations into the mechanisms underlying Treg-cell induction pointed to short-chain fatty acids and bacterial antigens that are cooperatively produced by the 17 strains of Clostridium. These short-chain fatty acids and antigens in turn activate a transforming growth factor (TGF-beta) response that drives Treg-cell differentiation and expansion.

“It’s very valuable to see studies like this one, where detailed analysis of microbial compositions is linked to biology,” said Rudensky.

Atarashi et al., “Treg induction by a rationally selected mixture of Clostridia strains from the human microbiota,” Nature, doi:10.1038/nature12331, 2013.


32.  Foxp3 targets revealed

The first comprehensive — but preliminary — list of Foxp3 targets in mice could provide clues to how the protein helps regulate the immune system

By Chandra Shekhar | January 22, 2007

The first comprehensive catalogue of mouse genes targeted by the transcription factor Foxp3 appears in two papers published in this week’s Nature. The lists from the two studies don’t always match, but the combined findings represent a key step in understanding how the protein helps regulatory T-cells maintain immune system tolerance and prevent autoimmune diseases. “The papers provide the first look at relating the transcriptional DNA-binding activity of Foxp3 with specific target genes,” said Fred Ramsdell of ZymoGenetics in Seattle, who was not involved in either study. “This is something the field has been looking to do for the past five years.”

Expressed primarily in regulatory T-cells, Foxp3 is essential to both their development and normal function. Loss-of-function Foxp3 mutations in mice and humans result in fatal autoimmune diseases.

A research team led by Alexander Rudensky of the University of Washington in Seattle, with Ye Zheng as first author, used ex vivo T-cells from mice with Foxp3 knocked out or tagged with GFP. Using a chromatin immunoprecipitation (ChIP) protocol, the team located nearly 1,300 Foxp3 binding sites on the mouse genome, from which it identified 702 Foxp3-bound genes. “Unlike other transcription factors, Foxp3 binds to only a few sites in the genome,” observed Rudensky. “But its binding results in very efficient changes in gene expression.”

Another study, led by Richard Young of the Whitehead Institute in Cambridge, Mass., and Harald von Boehmer of the Dana-Farber Cancer Institute in Boston, also used ChIP to identify Foxp3 binding sites. Out of more than 1,500 binding sites, they identified 1,119 genes bound by Foxp3. Instead of ex vivo T-cells, however, the researchers used T-cell hybridomas transfected with Foxp3. This made it easier to observe the effects of T-cell receptor stimulation, explained the study’s first author, Alexander Marson.
“Foxp3 exerts a much stronger influence on its target genes in stimulated cells than in unstimulated cells,” he noted.

Ethan Shevach of the National Institutes of Health in Bethesda, Md., who was not involved in either study, said he preferred the use of normal T-cells — as in the Zheng et al. study — to hybridomas. “There is no evidence that the cell [Marson et al.] transfect with Foxp3 is a regulatory T-cell,” Shevach said.

Some of the direct targets of Foxp3 identified in the two studies — such as members of the irf family — are transcription factors in their own right, indicating a second layer of regulation mediated by Foxp3. The target lists also include a number of genes for cell-surface molecules, such as CD28, and for signal transduction, such as Cdc42. “Some of these targets are red herrings,” cautioned Shevach. “Foxp3 may bind to them, but they may have nothing to do with regulatory cell function.”

The results from the two studies differ significantly. For instance, Zheng et al. noted that ctla4 — an important T-cell inhibitor — was bound and strongly upregulated by Foxp3, but Marson et al. did not observe this. Conversely, while both studies found that Foxp3 bound to the receptor for IL2, a key player in immune response, only Marson et al. found IL2 itself to be a target. Further, while Zheng et al. determined that Foxp3 activated more genes than it suppressed, Marson et al. came to the opposite conclusion.

“What I found most striking was the amount of non-overlap between the two datasets,” said Steve Ziegler of the Benaroya Research Institute in Seattle. “This may reflect the fact that they used two different systems for their ChIP-on-chip analysis.”

Despite the discrepancies, experts said the studies would be a major help in research into immune tolerance. “Foxp3 is located in the nucleus and is hard to get at,” said Ziegler.
“Downstream targets of it may be more accessible and give us more tractable surrogate markers of regulatory T-cells.”

Y. Zheng et al., “Genome-wide analysis of Foxp3 target genes in developing and mature regulatory T cells,” Nature, January 2007.

A. Marson et al., “Foxp3 occupancy and regulation of key target genes during T-cell stimulation,” Nature, January 2007.
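The non-overlap between the two target-gene lists that Ziegler highlights can be quantified with a simple set comparison. The gene sets below are hypothetical placeholders drawn from names mentioned in the text, not the actual published lists of 702 and 1,119 genes.

```python
# Sketch of quantifying the overlap between two target-gene lists,
# as one might for the Zheng et al. and Marson et al. datasets.
# The gene sets here are illustrative placeholders only.
def overlap_stats(set_a, set_b):
    shared = set_a & set_b
    union = set_a | set_b
    return {
        "shared": len(shared),
        "only_a": len(set_a - set_b),
        "only_b": len(set_b - set_a),
        # Jaccard index: 1.0 means identical lists, 0.0 means disjoint
        "jaccard": len(shared) / len(union) if union else 0.0,
    }

zheng_targets = {"Ctla4", "Il2ra", "Irf4", "Cd28", "Cdc42"}
marson_targets = {"Il2", "Il2ra", "Irf4", "Ptpn22"}

stats = overlap_stats(zheng_targets, marson_targets)
print(stats)
```

A low Jaccard index on the real lists would make precise the “amount of non-overlap” the two ChIP approaches produced.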

33.  Lasker Winners Announced

This year’s prizes honor pioneering work on the unfolded protein response, deep-brain stimulation, and the discovery of cancer-related genes.

By Tracy Vence | September 8, 2014

Kazutoshi Mori of Kyoto University in Japan and Peter Walter of the University of California, San Francisco, have won the 2014 Lasker Award for basic medical research. Mori and Walter are being honored by the Albert and Mary Lasker Foundation for their work related to the unfolded protein response—a cellular stress response that has been implicated in several protein-folding diseases.

In its announcement, the foundation said that “Mori and Walter’s work has led to a better understanding of inherited diseases such as cystic fibrosis, retinitis pigmentosa, and certain elevated cholesterol conditions in which unfolded proteins overwhelm the unfolded protein response.”

Three years ago, the Lasker Foundation honored Franz-Ulrich Hartl and Arthur Horwich for their protein-folding work with its 2011 basic research award.

Meanwhile, Alim Louis Benabid of Joseph Fourier University in Grenoble, France, and Mahlon DeLong of the Emory University School of Medicine in Atlanta, Georgia, have won this year’s Lasker-DeBakey Clinical Medical Research Award for their work on deep-brain stimulation, which has been used to help restore motor function in patients with advanced Parkinson’s disease.

And the University of Washington’s Mary-Claire King has won the 2014 Lasker-Koshland Special Achievement Award in Medical Science for “bold, imaginative, and diverse contributions to medical science and human rights” related to her work to reunite missing persons or their remains with their families, as well as her discovery of the cancer-related BRCA1 gene locus. In a commentary published in JAMA today (September 8), King and her colleagues advocated for population-based screening for cancer-related genetic variants. “Population-wide screening will require significant efforts to educate the public and to develop new counseling strategies, but this investment will both save women’s lives and provide a model for other public health programs in genomic medicine,” they wrote.

This year’s recipients will receive a $250,000 honorarium per category. The awards will be presented on Friday, September 19, in New York City.

34.  Protein Binding


By Carolyn Bertozzi | October 28, 1996

Edited by: Thomas W. Durso
S.D. Rosen, C.R. Bertozzi, “The selectins and their ligands,” Current Opinion in Cell Biology, 6:663-73, 1994. (Cited in more than 60 publications through April 1996) Comments by Steven D. Rosen, University of California, San Francisco

The selectins are a trio of related proteins involved in leukocyte-endothelium interactions, affecting the ability of leukocytes (that is, white blood cells) to interact with blood vessel walls.

THREEPEAT: The selectins are a threesome of related proteins, says UC-San Francisco’s Steven Rosen.

“One of the novel aspects of the selectins is that they function as carbohydrate-binding receptor molecules, that is, they recognize specific carbohydrate structures as their ligands, or counter-receptors,” Rosen says. “This means that in principle, it’s possible to interrupt the function of selectins by determining what carbohydrates they bind to and providing mimics for those carbohydrates in the form of soluble small molecules, thereby arriving at a new class or classes of anti-inflammatory substances.”

The paper summarizes the three selectins and their physiological functions in leukocyte-endothelium interactions, and describes how they function. First identified at the molecular level in 1989 (L.M. Stoolman, Cell, 56:907-10, 1989), selectins are the topic of this review paper by Steven D. Rosen, a professor in the department of anatomy and program in immunology at the University of California, San Francisco, and Carolyn R. Bertozzi, a former postdoc in Rosen’s lab and now an assistant professor of chemistry at the University of California, Berkeley.

Rosen explains that with leukocytes moving from the blood into tissues, the leukocyte-endothelium interaction is critical to inflammatory reactions.

ONE PLACE: UC-Berkeley’s Carolyn Bertozzi, Rosen’s former postdoc, was coauthor of the review paper.

“Leukocytes in tissue sites are protecting the individual from bacterial invasions and foreign substances that the individual wants to eliminate, but leukocytes can have an arsenal of destructive capabilities which can be turned on the individual’s own tissues. So inflammatory reactions have a down side. There are a lot of inflammatory diseases, such as rheumatoid arthritis, multiple sclerosis, lupus, and other autoimmune diseases.”

“In many cases, inflammatory reactions lead to pathological problems,” he points out. “It’s a defense mechanism the body has, but leukocytes being in tissue sites can cause problems as well as be of value to the individual.”

He concludes: “The interest in the selectins was: Here’s a family of proteins that has involvement in leukocyte-endothelium interactions, therefore here’s a potential set of targets to prevent leukocyte entry into tissues and prevent inflammatory problems.”

Asked for his opinion on why this paper has been cited so much, Rosen replies: “There’s a huge amount of interest in the selectins, because there’s basic cell biology and biochemistry that everybody’s interested in here. . . . There’s a real convergence of the basic science with direct clinical applications. What you do in the lab can have immediate ramifications on the design of anti-inflammatory compounds. There’s tremendous biotech and pharmaceutical company interest in the selectins and their ligands.

“This has been a tremendously hot topic since 1989, and it will be for years to come. Our article put everything down in one place, from the basic cell biology to the clinical connections, and updated the carbohydrate information and ligand identification information in a very accessible way.”

In addition to reviewing the selectins, Rosen states, “the paper deals with what is known about the carbohydrates that the selectins recognize, and what is known about the macromolecules-the ligands-that carry these carbohydrates. What might make the carbohydrates that one selectin recognizes different from the carbohydrates that another selectin molecule might recognize-that is, what is the selectivity of carbohydrate binding among the three selectins?”

The paper also lists the animal models of inflammatory diseases in which selectins have been shown to play an important role, “where antagonism of the selectin leads to beneficial effects, in terms of decreasing damage,” Rosen notes.

Since the publication of this paper, he and Bertozzi have written a second review, updating ligand characterizations (S.D. Rosen, C.R. Bertozzi, Current Biology, 6:261-4, 1996).

“It has a lot more on carbohydrate specificity, and it’s got some new information on how one of the selectins recognizes its ligands,” Rosen notes. “Sulfation is important. At the time of the first review, sulfation was known to be important for the binding of one selectin to its ligands. . . . This review points to the importance of sulfation for the ligand of another selectin. The nature of the sulfation modifications of the ligands are very different for the two selectins.”

Additional LPBI articles:

MIT’s Promise for the MI Patient: A new cardiac patch uses Gold Nanowires to enhance Electrical Signaling between heart cells

Curator: Aviva Lev-Ari, PhD, RN


Nanotechnology and Heart Disease

Author and Curator:  Tilda Barliya PhD


AAAS February 14-18, 2013, Boston: Symposia – The Science of Uncertainty in Genomic Medicine

Reporter: Aviva Lev-Ari, PhD, RN


Robert S. Langer, Massachusetts Institute of Technology

Challenges and Opportunities at the Confluence of Biotechnology and Nanomaterials

Introduction to Tissue Engineering; Nanotechnology applications

Author, editor; Tilda Barliya PhD


Building a Drug-Delivery System(DDS): choice of polymers and drugs

Author: Tilda Barliya PhD



Augmentation of the ONTOLOGY of the 3D Printing Research

Curator: Larry H. Bernstein, MD, FCAP

Encore: Research Allows for 3D Printed Augmentation of Everyday Objects



If you have a 3D printer then you are likely overwhelmed by the sheer number of possible objects you can find online to print out. At the same time, the technology is somewhat limited, unless you have professional CAD skills or are incredibly creative. What, for instance, would you do if you wanted to take an everyday item such as a hot glue gun and print an attached stand for it, or turn your child’s favorite action figure into a magnet?

Four researchers at Carnegie Mellon University, Xiang ‘Anthony’ Chen, Stelian Coros, Jennifer Mankoff and Scott E. Hudson, believe that they can change this via a new WebGL-based tool under development called Encore.

Funded by the National Science Foundation under grant NSF IIS 1217929, Encore is a multifaceted tool which enables three different techniques to augment already existing objects. The researchers call these three techniques Print-Over, Print-to-Affix and Print-Through, all of which allow for the adherence of newly 3D printed attachments to other objects. Before we get into what each of these three techniques involves, one first should understand the computational pipeline involved in designing these attachments.

First, a user is required to design a basic attachment, or perhaps use a design that they’d like to iterate upon. Once they have a basic model of their desired attachment, the Encore system will geometrically analyze the model along with a 3D scan of the target object that the attachment will adhere to, in order to determine its printability, while also deciding whether it will be durable enough and usable once attached to the target object. Next comes the interactive exploration phase, where the tool will visualize and explore the various areas where the attachment can be affixed to the target object. The tool will also adjust the design of the attachment, if required, to better fit the target object. Once this is all settled, Encore generates a model which includes not only the attachment itself but also any connecting structures, and even supports to hold the target item or attachment in place. Below you will find the three techniques that the Encore tool can use to attach 3D printed items to a target object:
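The three-stage pipeline just described (analyze, explore, generate) might be sketched schematically as follows. Every class, function, threshold and score here is a hypothetical stand-in; Encore’s real WebGL-based geometric analysis is far more involved than these toy heuristics.

```python
# Schematic sketch of an analyze -> explore -> generate pipeline
# (hypothetical stand-ins, not the actual Encore implementation).
from dataclasses import dataclass

@dataclass
class Placement:
    location: tuple      # (x, y) on a simplified target surface
    printability: float  # 0..1: clearance for the print head (toy score)
    durability: float    # 0..1: bond-strength proxy (toy score)

def analyze(candidates):
    """Stage 1: wrap pre-scored candidate placements."""
    return [Placement(loc, p_score, d_score)
            for loc, p_score, d_score in candidates]

def explore(placements, min_print=0.5, min_dur=0.5):
    """Stage 2: keep viable placements, best combined score first."""
    viable = [p for p in placements
              if p.printability >= min_print and p.durability >= min_dur]
    return sorted(viable, key=lambda p: p.printability * p.durability,
                  reverse=True)

def generate(best):
    """Stage 3: emit a 'model' bundling attachment, connector, supports."""
    return {"attachment_at": best.location,
            "connector": "generated",
            "supports": "generated"}

candidates = [((0, 0), 0.9, 0.8), ((1, 2), 0.4, 0.9), ((3, 1), 0.7, 0.6)]
ranked = explore(analyze(candidates))
model = generate(ranked[0])
print(model)
```

The filtering step mirrors Encore rejecting placements that the print head cannot reach or that would not survive use, and the final stage mirrors its generation of connecting structures and supports.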

Print-Over

This technique, in my opinion, is one of the coolest, as it allows for the printing of an attachment directly on a given object. Once Encore establishes the parameters required and sizes up the model to fit the target object, it will automatically tell the printer to print supports to hold the target item in place. Once the target item is in place, Encore will then tell the printer to begin printing the attachment in a particular spot on the target item, based on its geometric analysis of that object.

“It is also important to ensure that the existing object will not impede the motion of the print head while the attachment is being printed,” warn the researchers.

An example used by the researchers for this technique was a magnet holder which they attached directly to a teddy bear figurine. They also printed an LED light onto a 9V battery by first placing a small amount of glue on the battery before printing on top of it.

Print-to-Affix

This approach is similar to the Print-Over technique, except that instead of printing an attachment directly onto the target object, Encore analyzes the geometry of the target object prior to printing in order to create an attachment which will fit perfectly on that object via glue, straps (zip-ties) or even snaps. Once the attachment is printed, the user can then use hot glue or another adhesive or attachment mechanism to affix the printed object onto the target.

Print-Through

This technique is perfect for items which you don’t want to physically attach, but would rather connect loosely to an object: for instance, a tag on the loop of a pair of scissors, or a charm on a bracelet. This process requires that the printer be paused while a user manually places the target object within the print field.

“Print-through has aesthetic qualities that distinguish it from print-to-affix and print-over – it typically creates a loose but permanent connection between two objects,” explained the researchers.
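The computational pipeline described earlier (design, then geometric analysis, then interactive exploration, then model generation) can be sketched roughly in Python. Every name, data structure and the area-based scoring below are hypothetical illustrations, not the actual tool’s algorithm:

```python
# Rough sketch of the Encore pipeline: geometric analysis, exploration,
# then model generation. All names and the scoring are hypothetical.

def analyze(attachment_area, target_surfaces):
    """Geometric analysis: keep surfaces large enough to host the attachment."""
    return [(name, area) for name, area in target_surfaces
            if area >= attachment_area]

def explore(candidates):
    """Exploration: rank feasible placements, here simply by contact area."""
    return sorted(candidates, key=lambda c: c[1], reverse=True)

def generate(placement, attachment_area):
    """Model generation: bundle attachment, connector site and supports."""
    site, _area = placement
    return {"attachment_area": attachment_area,
            "connector_at": site,
            "supports": True}

# A toy target object with three candidate surfaces (name, free area in cm^2).
surfaces = [("handle", 4.0), ("lid", 1.0), ("body", 9.0)]
feasible = analyze(2.0, surfaces)   # the too-small lid is dropped
model = generate(explore(feasible)[0], 2.0)
print(model["connector_at"])        # the largest feasible surface wins
```

In the real tool the analysis operates on meshes and 3D scans rather than labelled areas, but the flow through the three phases is the same.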

While the new Encore tool is still under development as researchers improve upon its analytical capabilities, it certainly seems to show promise to those of us wishing to do more than just fabricate new items. In fact, the researchers were able to show that via all three techniques they could save a substantial amount of time and material over printing an object in one single piece. As an example, they used Slic3r to estimate the print time and total material required for printing a typical Utah teapot with a torus-shaped handle, as well as just printing the handle onto an already fabricated Utah teapot. Their estimate showed that the time of fabricating the item could be cut by more than 80% and material use reduced by as much as 85% by using their techniques and the Encore tool.
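The reported savings are easy to sanity-check. The absolute numbers below are invented for illustration; only the >80% time and ~85% material ratios come from the article:

```python
# Back-of-the-envelope check of the reported savings: printing only the
# handle onto an already-fabricated teapot versus printing the whole
# teapot in one piece. Baseline figures are hypothetical.

whole_minutes, whole_grams = 300.0, 100.0      # hypothetical full-teapot job
handle_minutes = whole_minutes * (1 - 0.80)    # more than 80% time saved
handle_grams = whole_grams * (1 - 0.85)        # as much as 85% material saved

print(f"handle only: {handle_minutes:.0f} min, {handle_grams:.0f} g")
```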

There are many variables going into the tool’s decision making algorithms, such as determining where to place attachments for the best balance when holding an object, what placement of an attachment will result in the best adhesion, etc. More research is still required as the team continues to develop the tool, as well as new techniques to attach multiple parts to one object, but it certainly seems like something which could have a sizable impact on the industry in general. Let us know your thoughts on the Encore tool in the Augmenting 3D Prints forum thread on 3DPB.com.

The Limits of 3D Printing

Matthias Holweg

Harvard Business Review, June 2015   https://hbr.org/2015/06/the-limits-of-3d-printing

Contrary to what some say, 3D printing is not going to revolutionize the manufacturing sector, rendering traditional factories obsolete. The simple fact of the matter is that the economics of 3D printing, now and for the foreseeable future, make it an unfeasible way to produce the vast majority of parts manufactured today. So instead of looking at it as a substitute for existing manufacturing, we should look to new areas where it can exploit its unique capabilities to complement traditional manufacturing processes.

Additive manufacturing, or “3D printing” as it is commonly known, has understandably captured the popular imagination: New materials that can be “printed” are announced virtually every day, and the most recent generation of printers can even print several materials at the same time, opening up new opportunities. Exciting applications have already been demonstrated across all sectors — from aerospace and medical applications to biotechnology and food production.

Some predict that a day is coming when we’ll be able to make any part at the push of a button at a local printer, which might even render the global supply lines that dominate today’s world of manufacturing a thing of the past. Unfortunately, this vision does not stack up to economic reality. Early findings from a research project being conducted by the Additive Manufacturing and 3D Printing Research Group at the University of Nottingham and Saïd Business School at the University of Oxford show that there are both significant scale and learning effects inherent in the 3D printing process. (The project, in which I am a principal investigator, is focusing on industrial selective laser sintering (or more accurately, melting) processes and not fused deposition modelling or stereolithography processes that are more suited to rapid prototyping and home applications.)

Furthermore, the pre- and post-printing costs amount to a significant proportion of the total cost per printed part. So even when the costs of printers and materials come down, the labor-cost penalty will remain.

3D printing simply works best in areas where customization is key — from printing hearing aids and dental implants to printing a miniature of the happy couple for their wedding cake. Using a combination of 3D scanning and printing, implants can be customized to specific anatomic circumstances in a way that was simply not feasible before. However, we also know that 99% of all manufactured parts are standard and do not require customization. In these cases, 3D printing has to compete with scale-driven manufacturing processes and rather efficient logistics operations. A good example is the wrench that NASA printed on the International Space Station last year. The cost of shipping it to the space station would have been at least $400 (assuming an unpackaged weight of 18 grams per wrench and using the most recent cost data given by NASA for transporting goods into low-Earth orbit); in comparison, shipping it from China to the United States would cost only $0.002 per unit. Thus, while it makes a lot of sense to print the wrench on the space station, printing it for local consumption in the United States wouldn’t.
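The wrench example can be worked backwards to see why the comparison is so lopsided. The figures below are those quoted in the article; the per-kilogram cost is simply implied by them:

```python
# Recovering the implied launch cost from the wrench example: $400 to
# lift 18 g to the ISS, versus $0.002 per unit from China to the US.
wrench_kg = 0.018          # 18 g, unpackaged
to_orbit = 400.0           # "at least $400" to the space station
from_china = 0.002         # per-unit freight, China -> United States

per_kg = to_orbit / wrench_kg      # implied $/kg to low-Earth orbit
ratio = to_orbit / from_china      # orbit vs. ordinary freight

print(f"~${per_kg:,.0f}/kg to orbit, {ratio:,.0f}x ordinary freight")
```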

The simple fact is that when customization isn’t important, 3D printing is not competitive. For one, printing costs per part are highly sensitive to the utilization of the “build room,” the three-dimensional area inside the 3D printer where the laser fuses the metal or plastic powder. Therefore, contract manufacturers that perform 3D printing, such as Shapeways, generally wait to fill a batch that uses the entire build room. Printing just one part raises unit cost considerably; so economies of scale do matter. Interestingly, the economic case for the most-cited standard part in 3D volume production today, the GE fuel nozzle for the CFM LEAP engine, is that it is lighter and more fuel efficient, not that it has a lower manufacturing cost per se.
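The build-room economics can be captured in a toy model: each print run has a roughly fixed cost regardless of how full the chamber is, so unit cost falls sharply as the batch fills. All figures below are hypothetical; only the shape of the curve reflects the point made above:

```python
# Toy model of build-room utilization: a fixed run cost is spread over
# however many parts fill the chamber. Numbers are hypothetical.

def unit_cost(parts, run_cost=500.0, material_per_part=2.0, capacity=100):
    """Cost per part when a run costs `run_cost` however few parts it holds."""
    parts = max(1, min(parts, capacity))
    return run_cost / parts + material_per_part

print(unit_cost(1))     # 502.0 -- one part carries the whole run
print(unit_cost(100))   # 7.0   -- a full build room
```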

A second point that is often overlooked is the labor cost that remains. Counter to common perception, 3D printing does not happen “at the touch of a button”; it involves considerable pre- and post-processing, which incur non-trivial labor costs. The starting point for any 3D printing process is a 3D file that can be “printed.” Just having an electronic CAD drawing is not sufficient; currently, there is no way to automatically convert the CAD drawing into a 3D file.

Creating printable files involves two steps: creating a three-dimensional volume model that can be printed, and “slicing” that volume model in the best possible way to avoid material wastage and prevent printing errors. Both steps require tacit knowledge. Following the printing, the parts produced have to be recovered, cleaned, washed (or sanded and polished, in the case of metal prints), and inspected. This, in turn, means that using 3D printing for the aftermarket services — an application where it makes a lot of sense — requires making a significant upfront investment in generating the printable files of the spare parts that would likely be needed. This investment would have to outweigh the cost of keeping a lifetime supply of spare parts in inventory, which is a tough call for small bolts, brackets, and connectors that make up the bulk of aftermarket demand.

So while I, like many others, have fallen in love with the notion of the “ultimate lean supply chain” of having 3D printers at every other corner table to print single parts just in time where they are needed, I am afraid that this vision does not stack up against reality. 3D printing technology undoubtedly has great potential. However, it is unlikely to replace traditional manufacturing. Instead, we should see it as a complement, a new tool in the box, and exploit its unique capabilities — both in making existing products better as well as being able to manufacture entirely new ones that we previously could not make.

Medical implants and printable body parts to drive 3D printer growth

With 3D bio-printing in the pipeline, dental and medical applications could be worth $6bn by 2025


False teeth, hip joints and replacement knees – and potentially printable skin and organs – will drive growth in the burgeoning market for 3D printers over the next decade, according to new research.

A report suggests that dentistry and medicine will increasingly harness one of the 21st century’s most exciting technological breakthroughs.

The technology is better known to British households for its ability to replace broken crockery or produce awkward figurine “selfies.”

But a report by Cambridge-based market research firm IDTechEx says ceramic jaw or teeth implants and metal hip replacements will become increasingly common 3D fare.

The parts are created by nozzles laying down fine sedimentary layers of material that build a product indistinguishable from an item that has rolled off a factory conveyor belt.

The dental and medical market for 3D printers is expected to expand by 365% to $867m (£523m) by 2025, according to IDTechEx analysts, even before bio-printing technology is taken into account. If bio-printing becomes suitable for commercial use – which scientists hope will allow the printing of pieces of skin, liver or kidney using live cells – analysts estimate the medical market could reach a value of $6bn or more within 10 years.
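The forecast figures above also pin down the market’s current size. Working backwards from the analysts’ numbers:

```python
# Working backwards from IDTechEx's forecast: a 365% expansion to $867m
# by 2025 implies the size of the dental/medical 3D-printing market today.
forecast_m = 867.0
growth = 3.65                          # "expand by 365%"
today_m = forecast_m / (1 + growth)    # roughly $186m at present
print(f"implied current market: ${today_m:.0f}m")
```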

While printing of complete organs for transplants may be decades away, the use of pieces of tissue for laboratory toxicology tests for cosmetics or drugs could be ready within five years, helping the medical market for 3D printers overtake all other sectors.

Dr Jon Harrop, a director of IDTechEx, said: “Bio printing is a bit unsure as it doesn’t exist commercially at the moment but all the medical professionals we interviewed thought it was highly likely to be commercial within 10 years.”

In the US, dental labs have invested in technology that can scan a patient’s teeth so new teeth can be produced by pressing the print button.

Harrop said there are a number of stumbling blocks in the way of the commercial application of bio printing, but even in the past year, scientists have been able to extend the life of a piece of skin tissue created in the lab from just a few hours to 40 days, taking it closer to the three months required for toxicology tests.

At present, 3D printers are most widely used in the automotive industry where they help produce prototypes for new cars or car parts. The next biggest market is aerospace, where manufacturers are using the technology to make lighter versions of complex parts for aeroplanes.

Already, 3D printers have been used by the medical industry to create a jaw, a pelvis and several customised hip replacements from metal. This year, surgeons in Newcastle upon Tyne created a titanium pelvis for a man who lost half his original one to a rare bone cancer, while in May doctors in Southampton completed Britain’s first hip replacement made using a 3D printer. Professor Richard Oreffo at the University of Southampton, who helped develop the hip replacement technique, said at the time: “The 3D printing of the implant in titanium, from CT scans of the patient and stem cell graft, is cutting edge and offers the possibility of improved outcomes for patients.”

Dentists have been using 3D printers to create exact replicas of jaws or teeth in order to aid complex procedures for a few years, but increasingly they are creating implants made of durable plastic or medical ceramics.

The Future of 3D Printing

By Stephen F. DeAngelis


In his most recent State of the Union address, President Barack Obama stated, “Last year, we created our first manufacturing innovation institute in Youngstown, Ohio. A once-shuttered warehouse is now a state-of-the art lab where new workers are mastering the 3D printing that has the potential to revolutionize the way we make almost everything. There’s no reason this can’t happen in other towns. So tonight, I’m announcing the launch of three more of these manufacturing hubs, where businesses will partner with the Department of Defense and Energy to turn regions left behind by globalization into global centers of high-tech jobs. And I ask this Congress to help create a network of 15 of these hubs and guarantee that the next revolution in manufacturing is made right here in America.” [“Remarks by the President in the State of the Union Address,” White House, 12 February 2013]

Official White House Photo by Chuck Kennedy (not shown)

Clearly, the President believes that 3D printing marks a new era in manufacturing that could change the entire business landscape. Kinaxis analyst Andrew Bell agrees that “change is inevitable.” “The question that we need to answer,” he writes, “is how will it change?” [“What Could 3D Printing Mean for the Supply Chain?” The 21st Century Supply Chain, 9 January 2013] Bell offers a few thoughts on the subject; he writes:

“One thing is for sure, the supply chain isn’t going away. As usual, it will likely just get more complicated. Here are some of the areas that I propose will influence the supply chain as 3D printing becomes more and more mainstream, and I’m sure there are many more.

  • Local Manufacturing – More things will be made closer to their final destination. This will have a definite impact on the logistics industry, and will change the way businesses try to schedule their operations.
  • Customizability – It will be easier, faster, and more efficient for companies to provide made-to-order products to their end users.
  • Distribution of raw materials – There will need to be a dramatic shift in the way raw materials are distributed since these printers will require raw materials in order to produce the final product.
  • New replacement parts model – Businesses will be able to provide replacement parts as required instead of trying to predict the need and manufacture the stock well in advance (as they do today).
  • Blurred boundaries within businesses – A closer integration of the various departments of an organization will be mandatory. A siloed manufacturing department will no longer allow for a competitive business.

“My predictions may be right, or they may be wrong, but one thing I think all will agree on is that 3D printing will make the supply chain more complex and more difficult to manage.”

Jim Stockton notes that 3D printing (or additive manufacturing) has been around since the 1980s. It has only broken into the mainstream because of recent breakthroughs that now have everyone talking. [“Top Innovations in the World of 3D Printing,” BestDesignTuts, 12 February 2013] He writes:

“Some products produced with this technology that might already be a part of your daily life include shoes, jewellery, clothing accessories, and educational products. The field of engineering is also benefitting from the development of 3D printing, in terms of boosting geographic information systems, aerospace engineering, engineering projects, and construction processes. Furthermore, humans are already benefitting in the field of healthcare with dental and medical instruments and products being formulated with 3D printing. The automotive industry is able to make more reliable products at a quicker speed, and the industrial design and architecture fields are able to develop new products that were inconceivable only a decade ago.”

What is really exciting people, however, is not so much what is currently being manufactured but the potential of what could be manufactured using 3D printers. Stockton writes:

“Engineers and scientists around the world are looking forward to the near future, in which printers will be able to create equipment, tools and devices via open-source models. This kind of advancement will drastically change the ways in which research and practical medicine are performed. Chemists are even attempting to build chemical compounds using 3D printers, and scientists have already started to replicate fossils and other ancient materials to better understand their compositions and functions. Architects wonder if in the future, buildings themselves can be printed from the ground up. The future of 3D printing is very bright – both in small-scale products for individual users and for large-scale projects, such as printing meters of building materials in the course of an hour and making intricate parts of automobiles and planes.”


Vivek Srinivasan and Jarrod Bassan offer ten trends that will likely define the direction that will be taken by additive manufacturing in the years ahead. [“Manufacturing The Future: 10 Trends To Come In 3D Printing,” Forbes, 7 December 2012] They are:

  1. 3D printing becomes industrial strength. Once reserved for prototypes and toys, 3D printing will become industrial strength. You will take a flight on an airliner that includes 3D-printed components, making it lighter and more fuel efficient. In fact, there are aircraft that already contain some 3D-printed components. The technology will also start to be adopted for the direct manufacture of specialist components in industries like defense and automotive. Overall, the number of 3D printed parts in planes, cars and even appliances will increase without you knowing.
  2. 3D printing starts saving lives. 3D-printed medical implants will improve the quality of life of someone close to you. Because 3D printing allows products to be custom-matched to an exact body shape, it is being used today for making better titanium bone implants, prosthetic limbs and orthodontic devices. Experiments in printing soft tissue are underway, and may soon allow printed veins and arteries to be used in operations. Today’s research into medical applications of 3D printing covers nano-medicine, pharmaceuticals and even printing of organs. Taken to the extreme, 3D printing could one day enable custom medicines and reduce if not eliminate the organ donor shortage.
  3. Customization becomes the norm. You will buy a product, customized to your exact specifications, which is 3D-printed and delivered to your doorstep. Innovative companies will use 3D printing technologies to give themselves a competitive advantage by offering customization at the same price as their competitor’s standard products. At first this may range from novelty items like custom smartphone cases or ergonomic improvements to standard tools, but it will rapidly expand to new markets. The leaders will adjust their sales, distribution and marketing channels to take advantage of their capability to provide customization direct to the customer. Customization will also play a big role in healthcare devices such as 3D-printed hearing aids and artificial limbs.
  4. Product innovation is faster. Everything from new car models to better home appliances will be designed more rapidly, bringing innovation to you faster. Because rapid prototyping using 3D printers reduces the time to turn a concept into a production-ready design, it allows designers to focus on the function of products. Although the use of 3D printing for rapid prototyping is not new, the rapidly decreasing cost, improved design software and increasing range of printable materials means designers will have more access to printers, allowing them to innovate faster by 3D printing an object early in the design phase, modifying it, re-printing it, and so on. The result will be better products, designed faster.
  5. New companies develop innovative business models built on 3D printing. You will invest in a 3D printing company’s IPO. Start-up companies will flourish as a generation of innovators, hackers and “makers” take advantage of the capabilities of 3D printing to create new products or deliver services to the burgeoning 3D printer market. Some enterprises will fail, and there may be a boom-bust cycle, but 3D printing will spawn new and creative business models.
  6. 3D print shops open at the mall. 3D print shops will begin to appear, at first servicing local markets with high-quality 3D printing services. Initially designed to service rapid-prototyping and other niche capabilities, these shops will branch into the consumer marketplace. As retailers begin to “ship the design, not the product,” the local 3D print shop will one day be where you pick up your customized, locally manufactured products, just like you pick up your printed photos from the local Walmart today.
  7. Heated debates on who owns the rights emerge. As manufacturers and designers start to grapple with the prospect of their copyrighted designs being replicated easily on 3D printers, there will be high-profile test cases over the intellectual property of physical object designs. Just like file-sharing sites shook the music industry because they made it easy to copy and share music, the ability to easily copy, share, modify and print 3D objects will ignite a new wave of intellectual property issues.
  8. New products with magical properties will tantalize us. New products – that can only be created on 3D printers – will combine new materials, nanoscale structures and printed electronics to exhibit features that seem magical compared to today’s manufactured products. These printed products will be desirable and have a distinct competitive advantage. The secret sauce is that 3D printing can control material as it is printed, right down to the molecules and atoms. As today’s research is perfected into tomorrow’s commercially available printers, expect exciting and desirable new products with amazing capabilities. The question is: What are these products and who will be selling them?
  9. New machines grace the factory floor. Expect to see 3D printing machines appearing in factories. Already some niche components are produced more economically on 3D printers, but this is only on a small scale. Many manufacturers will begin experimenting with 3D printing for applications outside of prototyping. As the capabilities of 3D printers develop and manufacturers gain experience in integrating them into production lines and supply chains, expect hybrid manufacturing processes that incorporate some 3D-printed components. This will be further fueled by consumers desiring products that require 3D printers for their manufacture.
  10. “Look what I made!” Your children will bring home 3D printed projects from school. Digital literacy – including Web and app development, electronics, collaboration and 3D design – will be supported by 3D printers in schools. A number of middle schools and high schools already have 3D printers. As 3D printing costs continue to fall, more schools will sign on. Digital literacy will be about things as well as bits.

If, as President Obama believes, a manufacturing revolution, led by 3D printing, is coming, it behooves business leaders in every field to ask themselves how it could affect their business model and assumptions about the future.

Research Leads to the 3D Printing of Pure Graphene Nanostructures



Researchers in Korea have successfully 3D printed graphene nano-structures without the use of any other material. With the entire printed structure being composed of graphene, the strength, as well as full conductivity of the material can be taken advantage of.

3D printing graphene


There is no question that graphene has enormous potential, from solar cell technology to electronics to medicine. A key factor in developing practical and commercial applications of the one-atom-thick carbon sheets is aligning the material in the desired form depending on the application.

Now 3D printing of graphene is nearing a feasible stage, and companies such as Graphene 3D Lab are at the forefront of the technology.

However, there is a difference between 3D printing pure graphene and 3D printing a graphene/thermoplastic composite, as Graphene 3D has been doing.

Printing with composite materials, using a typical FDM/FFF or powder-based laser sintering process, keeps some of graphene’s superior properties intact, but most are lost. The plastic will eventually break down, leaving any prints weak and not much different from a typical object you’d print with a MakerBot Replicator.

Now researchers, led by Professor Seung Kwon Seol from the Korea Electrotechnology Research Institute (KERI), have recently published a paper in Advanced Materials in which they describe a new process for directly 3D printing pure graphene.

Their technique means that graphene nano-structures can be fabricated without the use of any other material. With the entire printed structure being composed of graphene, the strength, as well as the full conductivity, of the material can be realized.

“We developed a nanoscale 3D printing approach that exploits a size-controllable liquid meniscus to fabricate 3D reduced graphene oxide (rGO) nanowires,” Seol told Nanowerk. “Different from typical 3D printing approaches which use filaments or powders as printing materials, our method uses the stretched liquid meniscus of ink. This enables us to realize finer printed structures than a nozzle aperture, resulting in the manufacturing of nanostructures.”

“So far, to the best of our knowledge, nobody has reported 3D printed nanostructures composed entirely of graphene,” says Seol. “Several results reported the 3D printing (millimeter- or centimeter-scale) of graphene or carbon nanotube/plastic composite materials by using a conventional 3D printer. In such composite systems, the graphene (or CNT) plays an important role in improving the properties of the plastic materials currently used in 3D printers. However, the plastic materials used for producing the composite structures deteriorate the intrinsic properties of graphene (or CNT).”

“We are convinced that this approach will present a new paradigm for implementing 3D patterns in printed electronics,” says Seol.

For their technique, the team grew graphene oxide (GO) wires at room temperature using the meniscus formed at the tip of a micropipette filled with a colloidal dispersion of GO sheets, then reduced it by thermal or chemical treatment (with hydrazine).

The deposition of GO was obtained by pulling the micropipette as the solvent rapidly evaporated, thus enabling the growth of GO wires. The researchers were able to accurately control the radius of the rGO wires by tuning the pulling rate of the pipette; they managed to reach a minimum value of ca. 150 nm.

Using this technique, they were able to produce arrays of different freestanding rGO architectures, grown directly at chosen sites and in different directions: straight wires, bridges, suspended junctions, and woven structures.

Seol points out that this 3D nanoprinting approach can be used for manufacturing 2D patterns and 3D geometry in diverse devices such as printed circuit boards, transistors, light emitting devices, solar cells, sensors and so on.

A lot of work remains to reduce the 3D printable size to below 10 nm and increase the production yield.


Nanoparticle Solar Cells May Drive Down Price of Solar Cells


University of Alberta researchers have found that abundant materials in the Earth’s crust can be used to make inexpensive and easily manufactured nanoparticle-based solar cells.

The research, which was supported by the Natural Sciences and Engineering Research Council of Canada, is published in the latest issue of ACS Nano.

The discovery, several years in the making, is an important step forward in making solar power more accessible to parts of the world that are off the traditional electricity grid or face high power costs, such as the Canadian North, said researcher Jillian Buriak, a chemistry professor and senior research officer of the National Institute for Nanotechnology based on the U of A campus.

Buriak and her team have designed nanoparticles that absorb light and conduct electricity from two very common elements: phosphorus and zinc. Both materials are more plentiful than scarce materials such as cadmium and are free from manufacturing restrictions imposed on lead-based nanoparticles.

“Half the world already lives off the grid, and with demand for electrical power expected to double by the year 2050, it is important that renewable energy sources like solar power are made more affordable by lowering the costs of manufacturing,” Buriak said.

“My goal is that a store like Ikea could sell rolls of these things with simple instructions and baggies of screws and do-dads and you could install them yourself,” said Buriak.

Her team’s research supports a promising approach of making solar cells cheaply using mass manufacturing methods like roll-to-roll printing (as with newspaper presses) or spray-coating (similar to automotive painting). “Nanoparticle-based ‘inks’ could be used to literally paint or print solar cells of precise compositions,” Buriak said.

Buriak collaborated with U of A post-doctoral fellows Erik Luber of the U of A Faculty of Engineering and Hosnay Mobarok of the Faculty of Science to create the nanoparticles. The team was able to develop a synthetic method to make zinc phosphide nanoparticles, and demonstrated that the particles can be dissolved to form an ink and processed to make thin films that are responsive to light.

Buriak and her team are now experimenting with the nanoparticles, spray-coating them onto large solar cells to test their efficiency. The team has applied for a provisional patent and has secured funding to enable the next step to scale up for manufacturing.

Graphene-Based Solar Cells Get Major Boost


Graphene on silicon

Graphene has extreme conductivity and is completely transparent while being inexpensive and nontoxic. This makes it a perfect candidate material for transparent contact layers for use in solar cells to conduct electricity without reducing the amount of incoming light – at least in theory. Whether or not this holds true in a real-world setting is questionable, as there is no such thing as “ideal” graphene – a free-floating, flat honeycomb structure consisting of a single layer of carbon atoms: interactions with adjacent layers can change graphene’s properties dramatically.

The research recently appeared in the journal Applied Physics Letters.

“We examined how graphene’s conductive properties change if it is incorporated into a stack of layers similar to a silicon based thin film solar cell and were surprised to find that these properties actually change very little,” Marc Gluba explains.

To this end, they grew graphene on a thin copper sheet, next transferred it to a glass substrate, and finally coated it with a thin film of silicon. They examined two different versions that are commonly used in conventional silicon thin-film technologies: one sample contained an amorphous silicon layer, in which the silicon atoms are in a disordered state similar to a hardened molten glass; the other sample contained polycrystalline silicon to help them observe the effects of a standard crystallization process on graphene’s properties.
Even though the morphology of the top layer changed completely as a result of being heated to a temperature of several hundred degrees Celsius, the graphene is still detectable. “That’s something we didn’t expect to find, but our results demonstrate that graphene remains graphene even if it is coated with silicon,” says Norbert Nickel.

Their measurements of carrier mobility using the Hall-effect showed that the mobility of charge carriers within the embedded graphene layer is roughly 30 times greater than that of conventional zinc oxide based contact layers.
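For readers unfamiliar with how such a figure is obtained: for a two-dimensional sheet sample, the Hall voltage, drive current, magnetic field, and sheet resistance combine into the carrier mobility via the standard relations shown below. This is a generic sketch with purely illustrative numbers; the paper's actual measurement values are not reproduced here.

```python
ELEMENTARY_CHARGE = 1.602176634e-19  # C

def hall_mobility(hall_voltage, current, b_field, sheet_resistance):
    """Carrier mobility (m^2/(V*s)) of a 2D sheet from a Hall measurement.

    Sheet carrier density: n_s = I * B / (e * |V_H|)
    Mobility:              mu  = 1 / (e * n_s * R_s) = |V_H| / (I * B * R_s)
    """
    sheet_density = current * b_field / (ELEMENTARY_CHARGE * abs(hall_voltage))
    return 1.0 / (ELEMENTARY_CHARGE * sheet_density * sheet_resistance)

# Illustrative numbers only (not from the paper): 0.5 mV Hall voltage,
# 10 uA drive current, 0.5 T field, 1 kOhm/sq sheet resistance.
mu = hall_mobility(0.5e-3, 10e-6, 0.5, 1000.0)
print(f"mobility = {mu * 1e4:.0f} cm^2/(V*s)")  # 0.1 m^2/(V*s) = 1000 cm^2/(V*s)
```

The "30 times greater than zinc oxide" comparison in the paragraph above is exactly this kind of ratio of mobilities extracted from two such measurements.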

Says Gluba: “Admittedly, it’s been a real challenge connecting this thin contact layer, which is but one atomic layer thick, to external contacts. We’re still having to work on that.” Adds Nickel: “Our thin film technology colleagues are already pricking up their ears and wanting to incorporate it.” The researchers obtained their measurements on one square centimeter samples, although in practice it is feasible to coat much larger areas than that with graphene.

SOURCE  Helmholtz Zentrum Berlin

Fabricated data bodies: Reflections on 3D printed digital body objects in medical and health domains

Deborah Lupton

Social Theory & Health 13, 99-115 (May 2015) | http://dx.doi.org/10.1057/sth.2015.3

The advent of 3D printing technologies has generated new ways of representing and conceptualizing health and illness, medical practice and the body. There are many social, cultural and political implications of 3D printing, but a critical sociology of 3D printing is only beginning to emerge. In this article I seek to contribute to this nascent literature by addressing some of the ways in which 3D printing technologies are being used to convert digital data collected on human bodies and fabricate them into tangible forms that can be touched and held. I focus in particular on the use of 3D printing to manufacture non-organic replicas of individuals’ bodies, body parts or bodily functions and activities. The article is also a reflection on a specific set of digital data practices and the meaning of such data to individuals. In analyzing these new forms of human bodies, I draw on sociomaterialist perspectives as well as the recent work of scholars who have sought to theorize selfhood, embodiment, place and space in digital society and the nature of people’s interactions with digital data. I argue that these objects incite intriguing ways of thinking about the ways in which digital data on embodiment, health and illness are interpreted and used across a range of contexts. The article ends with some speculations about where these technologies may be headed and outlines future research directions.

Osteoconduction and osteoinduction of low-temperature 3D printed bioceramic implants.

Habibovic P, Gbureck U, Doillon CJ, Bassett DC, van Blitterswijk CA, Barralet JE

Europe PubMed Central    http://dx.doi.org/10.1016/j.biomaterials.2007.10.023

Rapid prototyping is a valuable implant production tool that enables the investigation of individual geometric parameters, such as shape, porosity, pore size and permeability, on the biological performance of synthetic bone graft substitutes. In the present study, we have employed low-temperature direct 3D printing to produce brushite and monetite implants with different geometries. Blocks predominantly consisting of brushite with channels either open or closed to the exterior were implanted on the decorticated lumbar transverse processes of goats for 12 weeks. In addition, similar blocks with closed channel geometry, consisting of either brushite or monetite were implanted intramuscularly. The design of the channels allowed investigation of the effect of macropore geometry (open and closed pores) and osteoinduction on bone formation orthotopically. Intramuscular implantation resulted in bone formation within the channels of both monetite and brushite, indicating osteoinductivity of these resorbable materials. Inside the blocks mounted on the transverse processes, initial channel shape did not seem to significantly influence the final amount of formed bone and osteoinduction was suggested to contribute to bone formation.

Read Full Post »

Nanotechnology Used for Prevention of Bone Infection

Larry H Bernstein, MD, FCAP, Reporter



Functionalised nanoscale coatings using layer-by-layer assembly for imparting antibacterial properties to polylactide-co-glycolide surfaces

Piergiorgio Gentile, Maria E. Frongia, Mar Cardellach, Cheryl A. Mille

In order to achieve high local biological activity and reduce the risk of side effects of antibiotics in the treatment of periodontal and bone infections, a localised and temporally controlled delivery system is desirable. The aim of this research was to develop a functionalised and resorbable surface to contact soft tissues to improve the antibacterial behaviour during the first week after its implantation in the treatment of periodontal and bone infections. Solvent-cast poly(d,l-lactide-co-glycolide acid) (PLGA) films were aminolysed and then modified by the Layer-by-Layer technique to obtain a nano-layered coating using poly(sodium 4-styrenesulfonate) (PSS) and poly(allylamine hydrochloride) (PAH) as polyelectrolytes. The water-soluble antibiotic, metronidazole (MET), was incorporated from the ninth layer. Infrared spectroscopy showed that the PSS and PAH absorption bands increased with the layer number. The contact angle values had a regular alternate behaviour from the ninth layer. X-ray Photoelectron Spectroscopy evidenced two distinct peaks, N1s and S2p, indicating PAH and PSS had been introduced. Atomic Force Microscopy showed the presence of polyelectrolytes on the surface with a measured roughness of about 10 nm after 20 layers’ deposition. The drug release was monitored by Ultraviolet–visible spectroscopy showing 80% loaded-drug delivery in 14 days. Finally, the biocompatibility was evaluated in vitro with L929 mouse fibroblasts and the antibacterial properties were demonstrated successfully against the keystone periodontal bacteria Porphyromonas gingivalis, which has an influence on implant failure, without compromising in vitro biocompatibility. In this study, PLGA was successfully modified to obtain a localised and temporally controlled drug delivery system, demonstrating the potential value of LbL as a coating technology for the manufacture of medical devices with advanced functional properties.
functionalized coating at nanoscale dimension




Read Full Post »

Imaging-guided cancer treatment


Writer & reporter: Dror Nir, PhD

It is estimated that the medical imaging market will exceed $30 billion in 2014 (FierceMedicalImaging). To put this amount in perspective: the global pharmaceutical market size for the same year is expected to be ~$1 trillion (IMS), while global health care spending as a percentage of Gross Domestic Product (GDP) will average 10.5% globally in 2014 (Deloitte); it will reach ~$3 trillion in the USA.

Recent technology advances, mainly miniaturization and improvements in electronic-processing components, are driving increased introduction of innovative medical-imaging devices into critical nodes of major diseases’ management pathways. Consequently, in contrast to its very small contribution to global health costs, medical imaging bears outstanding potential to reduce the future growth in spending on major segments of this market, mainly: drug development and regulation (e.g. companion diagnostics and imaging surrogate markers); disease management (e.g. non-invasive diagnosis, guided treatment and non-invasive follow-ups); and monitoring the aging population (e.g. imaging-based domestic sensors).

In The Role of Medical Imaging in Personalized Medicine I discussed at length the role medical imaging assumes in drug development. Integrating imaging into drug development processes, specifically at the early stages of drug discovery, as well as for monitoring drug delivery and the response of targeted processes to the therapy, is a growing trend. A nice (and short) review highlighting the processes, opportunities, and challenges of medical imaging in new drug development is: Medical imaging in new drug clinical development.

The following is dedicated to the role of imaging in guiding treatment.

Precise treatment is a major pillar of modern medicine. An important aspect of enabling accurate administration of treatment is complementing the accurate identification of the organ location that needs to be treated with a system and methods that ensure application of treatment only, or mainly, to that location. Imaging is, of course, a major component in such composite systems. Amongst the available solutions, functional-imaging modalities are gaining traction. Specifically, molecular imaging (e.g. PET, MRS) allows the visual representation, characterization, and quantification of biological processes at the cellular and subcellular levels within intact living organisms. In oncology, it can be used to depict the abnormal molecules as well as the aberrant interactions of altered molecules on which cancers depend. Being able to detect such fundamental fingerprints of cancer is key to improved matching between drug-based treatment and disease. Moreover, imaging-based quantified monitoring of changes in tumor metabolism and its microenvironment could provide a real-time non-invasive tool to predict the evolution and progression of primary tumors, as well as the development of tumor metastases.

A recent review-paper: Image-guided interventional therapy for cancer with radiotherapeutic nanoparticles nicely illustrates the role of imaging in treatment guidance through a comprehensive discussion of; Image-guided radiotherapeutic using intravenous nanoparticles for the delivery of localized radiation to solid cancer tumors.

 Graphical abstract


One of the major limitations of current cancer therapy is the inability to deliver tumoricidal agents throughout the entire tumor mass using traditional intravenous administration. Nanoparticles carrying beta-emitting therapeutic radionuclides [DN: radioactive isotopes that emit electrons as part of the decay process; a list of β-emitting radionuclides used in radiotherapeutic nanoparticle preparation is given in Table 1 of the paper] that are delivered using advanced image-guidance have significant potential to improve solid tumor therapy. The use of image-guidance in combination with nanoparticle carriers can improve the delivery of localized radiation to tumors. Nanoparticles labeled with certain beta-emitting radionuclides are intrinsically theranostic agents that can provide information regarding distribution and regional dosimetry within the tumor and the body. Image-guided thermal therapy results in increased uptake of intravenous nanoparticles within tumors, improving therapy. In addition, nanoparticles are ideal carriers for direct intratumoral infusion of beta-emitting radionuclides by convection enhanced delivery, permitting the delivery of localized therapeutic radiation without the requirement of the radionuclide exiting from the nanoparticle. With this approach, very high doses of radiation can be delivered to solid tumors while sparing normal organs. Recent technological developments in image-guidance, convection enhanced delivery and newly developed nanoparticles carrying beta-emitting radionuclides will be reviewed. Examples will be shown describing how this new approach has promise for the treatment of brain, head and neck, and other types of solid tumors.
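One property that makes β-emitting radionuclides tractable for dosimetry planning is simple exponential decay. As a generic back-of-the-envelope aid (not taken from the review; the half-life used below is the commonly cited value for yttrium-90, one of the β-emitters such reviews tabulate):

```python
import math

def remaining_activity(initial_activity, elapsed, half_life):
    """Activity left after `elapsed` time units: A(t) = A0 * 2**(-t / t_half)."""
    return initial_activity * math.exp(-math.log(2.0) * elapsed / half_life)

# Example: yttrium-90, half-life ~64.1 hours.
# After exactly one half-life, half the activity remains.
print(remaining_activity(100.0, 64.1, 64.1))    # ~50.0
print(remaining_activity(100.0, 7 * 24, 64.1))  # activity left after one week
```

The short half-lives of these isotopes are part of why retention of the nanoparticle within the tumor (rather than clearance through normal organs) dominates the delivered dose.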

The challenges this review discusses

  • Intravenously administered drugs are inhibited in their intratumoral penetration by high interstitial pressures, which prevent diffusion of drugs from the blood circulation into the tumor tissue [1–5].
  • Relatively rapid clearance of intravenously administered drugs from the blood circulation by the kidneys and liver.
  • Drugs that do reach the solid tumor by diffusion are inhomogeneously distributed at the micro-scale; this cannot be overcome by simply administering larger systemic doses, as toxicity to normal organs is generally the dose-limiting factor.
  • Even nanoparticulate drugs have poor penetration from the vascular compartment into the tumor, and the nanoparticles that do penetrate are most often heterogeneously distributed.

How imaging could mitigate the above mentioned challenges

  • The inclusion of an imaging probe during drug development can aid in determining the clearance kinetics and tissue distribution of the drug non-invasively. Such a probe can also be used to determine the likelihood of the drug reaching the tumor, and to what extent.

Note: Drugs that have increased accumulation within the targeted site are likely to be more effective as compared with others. In that respect, Nanoparticle-based drugs have an additional advantage over free drugs with their potential to be multifunctional carriers capable of carrying both therapeutic and diagnostic imaging probes (theranostic) in the same nanocarrier. These multifunctional nanoparticles can serve as theranostic agents and facilitate personalized treatment planning.

  • Imaging can also be used for localization of the tumor to improve the placement of a catheter or external device within tumors to cause cell death through thermal ablation or oxidative stress secondary to reactive oxygen species.

See the example of Vintfolide in The Role of Medical Imaging in Personalized Medicine


Note: Image guided thermal ablation methods include radiofrequency (RF) ablation, microwave ablation or high intensity focused ultrasound (HIFU). Photodynamic therapy methods using external light devices to activate photosensitizing agents can also be used to treat superficial tumors or deeper tumors when used with endoscopic catheters.

  • Quality control during and post treatment

For example: the use of high-intensity focused ultrasound (HIFU) combined with nanoparticle therapeutics. HIFU is applied to improve drug delivery and to trigger drug release from nanoparticles. Gas bubbles play the role of the drug’s nanocarrier; they are used both to increase drug transport into the cell and as ultrasound-imaging contrast material. The ultrasound is also used for drug release and ablation.


Additional example: multifunctional nanoparticles for tracking CED (convection enhanced delivery) distribution within tumors. A nanoparticle can serve as a carrier not only for therapeutic radionuclides but simultaneously for a therapeutic drug and four different types of imaging contrast agents, including an MRI contrast agent, PET and SPECT nuclear diagnostic imaging agents, and optical contrast agents. The ability to perform multiple types of imaging on the same nanoparticles allows studies investigating the distribution and retention of nanoparticles, initially in vivo using non-invasive imaging and later at the histological level using optical imaging.



Image-guided radiotherapeutic nanoparticles have significant potential for solid tumor cancer therapy. The current success of this therapy in animals is most likely due to the improved accumulation, retention and dispersion of nanoparticles within solid tumor following image-guided therapies as well as the micro-field of the β-particle which reduces the requirement of perfectly homogeneous tumor coverage. It is also possible that the intratumoral distribution of nanoparticles may benefit from their uptake by intratumoral macrophages although more research is required to determine the importance of this aspect of intratumoral radionuclide nanoparticle therapy. This new approach to cancer therapy is a fertile ground for many new technological developments as well as for new understandings in the basic biology of cancer therapy. The clinical success of this approach will depend on progress in many areas of interdisciplinary research including imaging technology, nanoparticle technology, computer and robot assisted image-guided application of therapies, radiation physics and oncology. Close collaboration of a wide variety of scientists and physicians including chemists, nanotechnologists, drug delivery experts, radiation physicists, robotics and software experts, toxicologists, surgeons, imaging physicians, and oncologists will best facilitate the implementation of this novel approach to the treatment of cancer in the clinical environment. Image-guided nanoparticle therapies including those with β-emission radionuclide nanoparticles have excellent promise to significantly impact clinical cancer therapy and advance the field of drug delivery.

Read Full Post »

Hastke Inc. Presents at 1st Pitch Life Sciences-Philadelphia-September 16, 2014

Reporter: Stephen J Williams, PhD



Hastke Inc. presented at Mid-Atlantic BioAngels 1st Pitch Life Sciences in Philadelphia on Tuesday, September 16, 2014.

Hastke, Inc., a Princeton University spin-out, captures dynamic cellular events IN REAL TIME in live cells at an unprecedented level of detail in 3D, using proprietary 3D microscopy in conjunction with nanotechnology-based tags and sensors. Resolution up to 10 nm in all directions with 10 µs timing precision can be achieved, orders of magnitude superior to other methods. The company is using this technology to determine the extent of drug uptake at the cellular level and to visualize drug-receptor interactions. Their goal is to use this ability to visualize compound-cell interaction and uptake to enhance the drug screening process.

The company currently comprises three team members:

Stephanie Budijono is the President and CEO of Hastke Inc. Prior to Hastke, she developed a nanoparticle platform for targeted cancer therapy and imaging. She received her PhD from Princeton University.

Haw Yang is the leading inventor of the technology. He is a Professor at Princeton University, leading a research lab developing new methods to understand molecular reactivity in complex systems.

Kevin Welsher is a co-inventor of the technology. He is a prolific scientist whose work has been consistently featured in world-leading journals. His previous experience also includes developing new materials for in-vivo fluorescent imaging. He received his PhD from Stanford University.

Read Full Post »

USPTO Guidance On Patentable Subject Matter


Curator and Reporter: Larry H Bernstein, MD, FCAP

LH Bernstein








Revised 4 July, 2014



I came across a few recent articles on the subject of US Patent Office guidance on patentability, as well as on Supreme Court rulings on claims. I filed several patents on clinical laboratory methods early in my career on the recommendation of my brother-in-law. Years later, with both my brother-in-law and my patent attorney no longer alive, I look back and ask what I have learned, more than $100,000 later, after many trips to the USPTO, opportunities not taken, and a one-year provisional patent behind me.

My conclusion is

(1) Patents are for the protection of the innovator, who might realize legal protection, but the cost and time investment can well exceed the cost of founding and building a small startup enterprise, which would be the next step.

(2) The other thing to consider is the capability of the lawyer or firm that represents you. A patent that is well done can be expected to take 5-7 years to prosecute with due diligence. I would not expect it to be done well by a university with many other competing demands. I might be wrong in this respect, as the climate has changed and research universities have sprouted engines for change. Experienced and productive faculty are encouraged or allowed to form their own such entities.

(3) The emergence of Big Data, computational biology, and very large data warehouses for data use and integration has changed the landscape. The resources required to pursue research along these lines are quite beyond an individual’s sole capacity without outside funding. In addition, the changed requirement of first-to-file has muddied the waters.

Of course, one can propose without anything published in the public domain. That makes it possible for corporate entities to file thousands of patents, whether or not there is actual validation at the time of filing. It would be a quite trying experience for anyone to pursue in the USPTO without some litigation over ownership of patent rights. At this stage of technology development, I have come to realize that the organization of research, peer review, and archiving of data is still at a stage where some of the best systems available for storing and accessing data come considerably short of what is needed for the most complex tasks, even though improvements have come at an exponential pace.

I shall not comment on the contested views held by physicists, chemists, biologists, and economists over the completeness of guiding theories strongly held.  Only history will tell.  Beliefs can hold a strong sway, and have many times held us back.

I am not an expert on legal matters, but it is incomprehensible to me that issues concerning technology innovation can be adjudicated by the Supreme Court, as has occurred in recent years. I have postgraduate degrees in Medicine and Developmental Anatomy, and post-medical training in pathology and laboratory medicine, as well as experience in analytical and research biochemistry. These types of cases are beyond the competencies expected when they come before the Supreme Court, or even the Federal District Courts, as we see with increasing frequency, as has occurred with respect to the development and application of the human genome.

I’m not sure that the developments can be resolved for the public good without a more full development of an open-access system of publishing. Now I present some recent publication about, or published by the USPTO.


Dr. Melvin Castro - Organic Chemistry and New Drug Development












USPTO Guidance On Patentable Subject Matter: Impediment to Biotech Innovation

Joanna T. Brougher, David A. Fazzolare. Journal of Commercial Biotechnology 2014; 20(3).














Abstract

In June 2013, the U.S. Supreme Court issued a unanimous decision upending more than three decades’ worth of established patent practice when it ruled that isolated gene sequences are no longer patentable subject matter under 35 U.S.C. Section 101. While many practitioners in the field believed that the USPTO would interpret the decision narrowly, the USPTO actually expanded the scope of the decision when it issued its guidelines for determining whether an invention satisfies Section 101.

The guidelines were met with intense backlash with many arguing that they unnecessarily expanded the scope of the Supreme Court cases in a way that could unduly restrict the scope of patentable subject matter, weaken the U.S. patent system, and create a disincentive to innovation. By undermining patentable subject matter in this way, the guidelines may end up harming not only the companies that patent medical innovations, but also the patients who need medical care.  This article examines the guidelines and their impact on various technologies.

Keywords:   patent, patentable subject matter, Myriad, Mayo, USPTO guidelines



35 U.S.C. Section 101 states: “Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.”

Mayo Collaborative Services v. Prometheus Laboratories, Inc., 566 U.S. ___ (2012)

Association for Molecular Pathology et al., v. Myriad Genetics, Inc., 569 U.S. ___ (2013).

Parke-Davis & Co. v. H.K. Mulford Co., 189 F. 95, 103 (C.C.S.D.N.Y. 1911)

USPTO. Guidance For Determining Subject Matter Eligibility Of Claims Reciting Or Involving Laws of Nature, Natural Phenomena, & Natural Products.


Funk Brothers Seed Co. v. Kalo Inoculant Co., 333 U.S. 127, 131 (1948)



Courtney C. Brinckerhoff, “The New USPTO Patent Eligibility Rejections Under Section 101.” PharmaPatentsBlog, published May 6, 2014, accessed http://www.pharmapatentsblog.com/2014/05/06/the-new-patent-eligibility-rejections-section-101/


DOI: http://dx.doi.org/10.5912/jcb664


Science 4 July 2014; 345 (6192): pp. 14-15  DOI: http://dx.doi.org/10.1126/science.345.6192.14


Biotech feels a chill from changing U.S. patent rules

A 2013 Supreme Court decision that barred human gene patents is scrambling patenting policies.


A year after the U.S. Supreme Court issued a landmark ruling that human genes cannot be patented, the biotech industry is struggling to adapt to a landscape in which inventions derived from nature are increasingly hard to patent. It is also pushing back against follow-on policies proposed by the U.S. Patent and Trademark Office (USPTO) to guide examiners deciding whether an invention is too close to a natural product to deserve patent protection. Those policies reach far beyond what the high court intended, biotech representatives say.

“Everything we took for granted a few years ago is now changing, and it’s generating a bit of a scramble,” says patent attorney Damian Kotsis of Harness Dickey in Troy, Michigan, one of more than 15,000 people who gathered here last week for the Biotechnology Industry Organization’s (BIO’s) International Convention.

At the meeting, attorneys and executives fretted over the fate of patent applications for inventions involving naturally occurring products—including chemical compounds, antibodies, seeds, and vaccines—and traded stories of recent, unexpected rejections by USPTO. Industry leaders warned that the uncertainty could chill efforts to commercialize scientific discoveries made at universities and companies. Some plan to appeal the rejections in federal court.

USPTO officials, meanwhile, implored attendees to send them suggestions on how to clarify and improve its new policies on patenting natural products, and even announced that they were extending the deadline for public comment by a month. “Each and every one of you in this room has a moral duty … to provide written comments to the PTO,” patent lawyer and former USPTO Deputy Director Teresa Stanek Rea told one audience.

At the heart of the shake-up are two Supreme Court decisions: the ruling last year in Association for Molecular Pathology v. Myriad Genetics Inc. that human genes cannot be patented because they occur naturally (Science, 21 June 2013, p. 1387); and the 2012 Mayo v. Prometheus decision, which invalidated a patent on a method of measuring blood metabolites to determine drug doses because it relied on a “law of nature” (Science, 12 July 2013, p. 137).

Myriad and Mayo are already having a noticeable impact on patent decisions, according to a study released here. It examined about 1000 patent applications that included claims linked to natural products or laws of nature that USPTO reviewed between April 2011 and March 2014. Overall, examiners rejected about 40%; Myriad was the basis for rejecting about 23% of the applications, and Mayo about 35%, with some overlap, the authors concluded. That rejection rate would have been in the single digits just 5 years ago, asserted Hans Sauer, BIO’s intellectual property counsel, at a press conference. (There are no historical numbers for comparison.) The study was conducted by the news service Bloomberg BNA and the law firm Robins, Kaplan, Miller & Ciresi in Minneapolis, Minnesota.
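A quick consistency check on those figures: if about 23% of applications were rejected citing Myriad, about 35% citing Mayo, and about 40% were rejected on either ground, inclusion-exclusion implies the overlap (rejected on both grounds) is roughly 18%. The sketch below uses the article's approximate percentages, so it is illustrative arithmetic only, not new data:

```python
def overlap(p_union, p_a, p_b):
    """Inclusion-exclusion: P(A and B) = P(A) + P(B) - P(A or B)."""
    return p_a + p_b - p_union

# ~23% Myriad-based, ~35% Mayo-based, ~40% rejected on either ground
both = overlap(0.40, 0.23, 0.35)
print(round(both, 2))  # 0.18
```

In other words, nearly half of the Mayo-based rejections also invoked Myriad, which is consistent with the authors' note that the two grounds overlapped.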

USPTO is extending the decisions far beyond diagnostics and DNA?

The numbers suggest USPTO is extending the decisions far beyond diagnostics and DNA, attorneys say. Harness Dickey’s Kotsis, for example, says a client recently tried to patent a plant extract with therapeutic properties; it was different from anything in nature, Kotsis argued, because the inventor had altered the relative concentrations of key compounds to enhance its effect. Nope, decided USPTO, too close to nature.

In March, USPTO released draft guidance designed to help its examiners decide such questions, setting out 12 factors for them to weigh. For example, if an examiner deems a product “markedly different in structure” from anything in nature, that counts in its favor. But if it has a “high level of generality,” it gets dinged.

The draft has drawn extensive criticism. “I don’t think I’ve ever seen anything as complicated as this,” says Kevin Bastian, a patent attorney at Kilpatrick Townsend & Stockton in San Francisco, California. “I just can’t believe that this will be the standard.”

USPTO officials appear eager to fine-tune the draft guidance, but patent experts fear the Supreme Court decisions have made it hard to draw clear lines. “The Myriad decision is hopelessly contradictory and completely incoherent,” says Dan Burk, a law professor at the University of California, Irvine. “We know you can’t patent genetic sequences,” he adds, but “we don’t really know why.”

Get creative in using Draft Guidelines!

For now, Kotsis says, applicants will have to get creative to reduce the chance of rejection. Rather than claim protection for a plant extract itself, for instance, an inventor could instead patent the steps for using it to treat patients. Other biotech attorneys may try to narrow their patent claims. But there’s a downside to that strategy, they note: Narrower patents can be harder to protect from infringement, making them less attractive to investors. Others plan to wait out the storm, predicting USPTO will ultimately rethink its guidance and ease the way for new patents.


Public comment period extended

USPTO has extended the deadline for public comment to 31 July, with no schedule for issuing final language. Regardless of the outcome, however, Stanek Rea warned a crowd of riled-up attorneys that, in the world of biopatents, “the easy days are gone.”


United States Patent and Trademark Office

Today we published and made electronically available a new edition of the Manual of Patent Examining Procedure (MPEP).

Manual of Patent Examining Procedure: http://www.uspto.gov/web/offices/pac/mpep/index.html

Summary of Changes

PDF Title Page
PDF Foreword
PDF Introduction
PDF Table of Contents
PDF Chapter 600 – Parts, Form, and Content of Application
PDF Chapter 700 – Examination of Applications
PDF Chapter 800 – Restriction in Applications Filed Under 35 U.S.C. 111; Double Patenting
PDF Chapter 900 – Prior Art, Classification, and Search
PDF Chapter 1000 – Matters Decided by Various U.S. Patent and Trademark Office Officials
PDF Chapter 1100 – Statutory Invention Registration (SIR); Pre-Grant Publication (PGPub) and Preissuance Submissions
PDF Chapter 1200 – Appeal
PDF Chapter 1300 – Allowance and Issue
PDF Appendix L – Patent Laws
PDF Appendix R – Patent Rules
PDF Appendix P – Paris Convention
PDF Subject Matter Index
PDF Zipped version of the MPEP current revision in the PDF format.

Manual of Patent Examining Procedure (MPEP), Ninth Edition, March 2014

The USPTO continues to offer an online discussion tool for commenting on selected chapters of the Manual. To participate in the discussion and to contribute your ideas, go to: http://uspto-mpep.ideascale.com.

Note: For current fees, refer to the Current USPTO Fee Schedule.
Consolidated Laws – The patent laws in effect as of May 15, 2014.
Consolidated Rules – The patent rules in effect as of May 15, 2014.
MPEP Archives (1948 – 2012)
Current MPEP: Searchable MPEP

The documents updated in the Ninth Edition of the MPEP, dated March 2014, include changes that became effective in November 2013 or earlier.
All of the documents have been updated for the Ninth Edition except Chapters 800, 900, 1000, 1300, 1700, 1800, 1900, 2000, 2300, 2400, 2500, and Appendix P.
More information about the changes and updates is available from the “Blue Page – Introduction” of the Searchable MPEP or from the “Summary of Changes” link to the HTML and PDF versions provided below.

Discuss the Manual of Patent Examining Procedure (MPEP)

Welcome to the MPEP discussion tool!

We have received many thoughtful ideas on Chapters 100-600 and 1800 of the MPEP as well as on how to improve the discussion site. Each and every idea submitted by you, the participants in this conversation, has been carefully reviewed by the Office, and many of these ideas have been implemented in the August 2012 revision of the MPEP and many will be implemented in future revisions of the MPEP. The August 2012 revision is the first version provided to the public in a web based searchable format. The new search tool is available at http://mpep.uspto.gov. We would like to thank everyone for participating in the discussion of the MPEP.

We have some great news! Chapters 1300, 1500, 1600 and 2400 of the MPEP are now available for discussion. Please submit any ideas and comments you may have on these chapters. Also, don’t forget to vote on ideas and comments submitted by other users. As before, our editorial staff will periodically be posting proposed new material for you to respond to, and in some cases will post responses to some of the submitted ideas and comments.

Recently, we have received several comments concerning the Leahy-Smith America Invents Act (AIA). Please note that comments regarding the implementation of the AIA should be submitted to the USPTO via email to aia_implementation@uspto.gov or via postal mail, as indicated at the America Invents Act Web site. Additional information regarding the AIA is available at www.uspto.gov/americainventsact

We have also received several comments suggesting policy changes, which have been routed to the appropriate offices for consideration. We really appreciate your thinking and recommendations!

FDA Guidance for Industry: Electronic Source Data in Clinical Investigations

The FDA published its new Guidance for Industry (GfI) – “Electronic Source Data in Clinical Investigations” in September 2013.
The Guidance defines the expectations of the FDA concerning electronic source data generated in the context of clinical trials. Find out more about this Guidance.

After more than 5 years and two draft versions, the final version of the Guidance for Industry (GfI) – “Electronic Source Data in Clinical Investigations” – was published in September 2013. This new FDA Guidance defines the FDA’s expectations for sponsors, CROs, investigators and other persons involved in the capture, review and retention of electronic source data generated in the context of FDA-regulated clinical trials.

In an effort to encourage the modernization and increased efficiency of processes in clinical trials, the FDA clearly supports the capture of electronic source data and emphasizes the agency’s intention to support activities aimed at ensuring the reliability, quality, integrity and traceability of this source data, from its electronic source to the electronic submission of the data in the context of an authorization procedure. The Guidance addresses aspects such as data capture, data review and record retention. When the computerized systems used in clinical trials are described, the FDA recommends that the description focus not only on the intended use of the system, but also on data protection measures and the flow of data across system components and interfaces. In practice, the pharmaceutical industry needs to meet significant requirements regarding the organisation, planning, specification and verification of computerized systems in the field of clinical trials. The FDA also mentions in the Guidance that it does not intend to apply 21 CFR Part 11 to electronic health records (EHR).

Author: Oliver Herrmann, Q-Infiity
Source: http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/
Webinar: https://collaboration.fda.gov/p89r92dh8wc



Prologue to Cancer – e-book Volume One – Where are we in this journey?

Author and Curator: Larry H. Bernstein, MD, FCAP

Consulting Reviewer and Contributor:  Jose Eduardo de Salles Roselino, MD


LH Bernstein

Jose Eduardo de Salles Roselino



This is a preface to the fourth in the ebook series of Leaders in Pharmaceutical Intelligence, a collaboration of experienced doctorate medical and pharmaceutical professionals. The topic is of great current interest: a group of neoplastic diseases that may develop at different periods in life, accounts for a significant part of current medical expenditure, and has come to supersede infection, or to eventuate in infectious disease, as an end-of-life event. The articles presented are an up-to-date account of a rapidly emerging field of medical research that has benefited enormously from progress in immunodiagnostics, radiodiagnostics, imaging, and predictive analytics, and from the genomic and proteomic discovery that followed the completion of the Human Genome Project: advances in qPCR, gene sequencing, genome mapping, signaling pathways, and exome identification, and the identification of therapeutic targets (inhibitors, activators, initiators) in the progression of cell metabolism, carcinogenesis, cell movement, and metastatic potential. The story is complicated because clinical events we would like to treat as similar are in fact dissimilar in expression and classification, whether within the same anatomic class or a different one. Thus we are faced with constructing an objective, evidence-based understanding that requires integrating several disciplinary approaches to see a clear picture. The failure to do so creates a high risk of failure in biopharmaceutical development.

The chapters that follow cover novel and important work in cancer-related research, development, diagnostics and treatment, and together present a substantial part of the tumor landscape, with some exceptions. Will there ever be a unifying concept, as might be hoped for? I certainly can’t see any such prediction on the horizon. Part of the problem is that disease classification is a human construct to guide us, and so are treatments, which have existed and been reexamined for over 2,000 years. In that time we have changed, our afflictions have been modified, and our environment has changed with respect to the microorganisms within and around us, viruses, the soil, radiation exposure, the impacts of war and starvation, and access to food. The outline has been given. Organic and inorganic chemistry combined with physics has given us a new enterprise in biosynthetics that is changing, and will change, our world. But let us keep in mind that this too is a human construct, just as drug-target development is such a construct: workable, with limitations.

What Molecular Biology Gained from Physics

We need greater clarity and completeness in defining the carcinogenic process. It is the beginning, but not the end. But we must first examine the evolution of the scientific structure that led to our present understanding. It was preceded by the studies of anatomy, physiology, and embryology that had to occur as a first step, followed by research into bacteria, fungi, sea urchins, and other evolutionarily earlier creatures that could be studied at a more primary scale of development. They are still major objects of study, with the expectation that we can derive lessons about comparative mechanisms that have been passed on through the ages and have common features with man. This became the serious intent of molecular biology, the discipline that turned to find an explanation for genetics and to carry out controlled experiments modelled on the disciplines that already had enormous success: physics, mathematics, and chemistry. When Max Planck hypothesized in 1900 that the energy of light emitted by a black body depended on the frequency of the oscillator that emitted it, the idea had important ramifications for chemistry and biology (see Appendix II and Footnote 1, Planck equation, energy and oscillation). The leading idea is to search below the large-scale observations of classical biology.
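The Planck relation invoked here (and in Footnote 1) has a standard one-line form, reproduced below for reference rather than taken from this text:

```latex
E = h\nu, \qquad h \approx 6.626 \times 10^{-34}\ \mathrm{J\,s}
```

where \(E\) is the energy of a single quantum and \(\nu\) the frequency of the emitting oscillator: a higher-frequency oscillation carries proportionally more energy, which is the point taken up in Footnote 1.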

The central dogma of molecular biology, in which genetic material is transcribed into RNA and then translated into protein, provides a starting point, but the construct is undergoing revision in light of emerging novel roles for RNA and signaling pathways. The term “molecular biology” was coined by Warren Weaver (director of Natural Sciences for the Rockefeller Foundation), who observed the emergence of significant change given recent advances in fields such as X-ray crystallography. Molecular biology also plays an important role in understanding the formation, action, and regulation of various parts of the cell, which can be used efficiently for targeting new drugs, diagnosing disease, and describing the physiology of the cell. The Nobel Prize in Physiology or Medicine in 1969 was shared by Max Delbrück, Alfred D. Hershey, and Salvador E. Luria, whose work on viral replication gave impetus to the field. Delbrück was a physicist who trained in Copenhagen under Bohr and specifically committed himself to the same rigor in biology as held in physics.

Dorothy Hodgkin, protein crystallography

Rosalind Franklin, crystallographer, double helix

Max Delbrück, molecular biology

Max Planck, quantum physics




We then stepped back from classical (descriptive) physiology, with its endless complexity, to molecular biology. This led us to the genetic code and the double-helix model. The double helix has recently been found insufficiently explanatory, given the recent construction of triplex and quadruplex models. These have the potential to account for hitherto unaccounted-for building blocks, such as inosine, and we do not know whether more than one model holds validity under different conditions. The other major development has come in proteomics, especially in protein-protein interactions and in the energetics of protein conformation, first called to our attention by the work of Jacob, Monod, and Changeux (see Footnote 2). Proteins are not rigid structures stamped out by the monotonously simple DNA-to-RNA-to-protein scheme. Nothing is ever quite so simple: just as there are epigenetic events, there are posttranslational events, and yet more.


JP Changeux


The Emergence of Molecular Biology

I now return the discussion to the topic of medicine, the emergence of molecular biology, and the need for convergence with biochemistry in the mid-20th century. Jose Eduardo de Salles Roselino recalls: “I refer to conformational energy as treated by R. Marcus in his revised Nobel lecture (J. of Electroanalytical Chemistry 1997; 438: 251-259) (see Footnote 1). His description of the energetic coordinates of the landscape of a chemical reaction is only a two-dimensional cut of what is in fact a volcano crater (in three dimensions), with nuclear coordinates on the abscissa (each component varies, but the sum of the two is constant: solvational + vibrational = 100% on the ordinate). If our research methods allowed us to discriminate pairs of energies degree by degree, we would most likely have 360 similar representations of the same phenomenon, and the real representation would take all 360 together. If our methodology were coarser, discriminating only differences of at least 10 degrees out of the 360 possible, we would have 36 partial representations of something that, to be perfectly represented, requires all 36 taken together. Can you reconcile that with ATGC? Yet when complete genome sequences were presented, they were described as though we would know everything about the living being. The most important problems in biology will always be viewed with limited vision, and this limitation is something we should acknowledge and teach. Our knowledge is therefore made up of partial representations. Even with entire genome data in hand, the most intricate biological problems are still not amenable to this level of reductionism. But going from a general view of signs and symptoms, we can get to the most detailed molecular view, and in this case the genome provides an anchor.”

The “Warburg effect” describes the preference of cancer cells for glycolysis and lactic acid fermentation over oxidative phosphorylation for energy production. Mitochondrial metabolism is an important and necessary component in the functioning and maintenance of the cell, and accumulating evidence suggests that dysfunction of mitochondrial metabolism plays a role in cancer. Progress has been made in understanding the mechanisms of the mitochondria-to-glycolysis switch in cancer development and how to target this metabolic switch.






Otto Heinrich Warburg (1883–1970)


Louis Pasteur

The expression “Pasteur effect” was coined by Warburg, inspired by Pasteur’s findings in yeast cells, when he investigated this metabolic observation in cancer cells. In yeast cells, Pasteur had found that the velocity of sugar utilization was greatly reduced in the presence of oxygen. Not to be confused with this, in the “Crabtree effect” the velocity of sugar metabolism was greatly increased, a reversal, when yeast cells were transferred from an aerobic to an anaerobic condition. Thus, the velocity of sugar metabolism in yeast cells was shown to be under metabolic regulatory control, responding to changes in environmental oxygen during growth. Warburg then had to verify whether cancer cells and the related normal mammalian cells have a similar control mechanism. He found that this control was present in the normal cells studied but absent in cancer cells: strikingly, cancer cells maintain high anaerobic glycolysis despite the presence of oxygen in their culture media (see Footnote 3).

Taking this a step further: food is digested and supplied to cells, in vertebrates mainly in the form of glucose, which is metabolized to produce adenosine triphosphate (ATP) by two pathways. Glycolysis occurs via anaerobic metabolism in the cytoplasm and is of major significance for making ATP quickly, but in a minuscule amount (2 molecules per glucose). In the presence of oxygen, the breakdown continues in the mitochondria via the Krebs cycle coupled with oxidative phosphorylation, which is far more efficient for ATP production (roughly 36 molecules per glucose). Cancer cells seem to depend on glycolysis. In the 1920s, Otto Warburg first proposed that cancer cells show increased glucose consumption and lactate fermentation even in the presence of ample oxygen (the “Warburg effect”). On this view, oxidative phosphorylation switches to glycolysis, which promotes the proliferation of cancer cells. Many studies have demonstrated glycolysis as the main metabolic pathway in cancer cells.
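The arithmetic behind this pathway comparison can be made explicit. The sketch below is purely illustrative: the 2 and 36 ATP figures are the textbook values quoted above (exact yields vary with the shuttle systems used), and the function name `glucose_needed` is ours, not from any cited work.

```python
# Back-of-the-envelope comparison of ATP yield per glucose for the
# two pathways described in the text.
GLYCOLYSIS_ATP = 2   # net ATP per glucose, anaerobic glycolysis
OXIDATIVE_ATP = 36   # ATP per glucose, Krebs cycle + oxidative phosphorylation

def glucose_needed(atp_demand, atp_per_glucose):
    """Glucose molecules required to meet a given ATP demand."""
    return atp_demand / atp_per_glucose

demand = 1_000_000  # an arbitrary ATP demand
ratio = glucose_needed(demand, GLYCOLYSIS_ATP) / glucose_needed(demand, OXIDATIVE_ATP)
print(f"glycolysis needs {ratio:.0f}x more glucose for the same ATP")
```

A cell relying on glycolysis alone must therefore take up about 18 times more glucose for the same ATP output, one rationale often given for the elevated glucose uptake of tumors.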

Albert Szent-Györgyi (Warburg’s student) and Otto Meyerhof both studied striated skeletal muscle metabolism in vertebrates, and they found the changes Pasteur had observed in yeast. The description of the anaerobic pathway is largely credited to Embden and Meyerhof. Whenever muscle work increases and the energy need exceeds what the blood supply can provide, cell metabolism changes from aerobic (where acetyl-CoA provides the chemical energy for aerobic ATP production) to anaerobic metabolism of glucose. In this condition, glucose is obtained directly from the muscle’s glycogen stores (not from hepatic glycogenolysis). This is the sole source of chemical energy that is independent of the oxygen supplied to the cell. It is a physiological change in muscle metabolism that favors autonomy: it depends neither on blood oxygen for aerobic metabolism, nor on blood-borne carbon metabolites from adipose tissue (free fatty acids) or muscle proteins (branched-chain amino acids), nor on vascular delivery of glucose. In that condition the muscle can perform contraction from its internal source of ATP, and it uses the conversion of pyruvate to lactate to regenerate much-needed NAD (by hydride transfer to pyruvate) as a replacement for the mitochondrial function. This regulatory change keeps glycolysis going at a fast rate to meet the ATP needs of the cell under a low-yield condition (only two or three ATP for each glucose converted into two lactate molecules), so it cannot last for long periods of time. The change is made in seconds to minutes and therefore involves proteins already present in the cell; it does not require the effect of transcription factors or changes in gene expression (see Footnotes 1, 2).

Other mammalian cell types also maintain permanently elevated anaerobic metabolism: cells of the lens of the eye (86% glycolysis plus the pentose shunt) and red blood cells (RBCs), both of which lack mitochondria, and the deep medullary layer of the kidney, where blood perfusion is normally reduced (a condition required for the countercurrent mechanism and our ability to concentrate urine). In the case of the RBC, a shunt of the glycolytic pathway produces 2,3-diphosphoglycerate, which is required to hold the hemoglobin macromolecule in an unstable equilibrium between its two forms (R and T, presented here in the simplified form of the Monod, Wyman and Changeux model; the full model is much more complex: see, for instance, the review in Nature 2007; 450: 964-972).
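The two-state R/T picture invoked here has a compact closed form. The expression below is the standard textbook statement of the Monod-Wyman-Changeux model, reproduced for reference rather than taken from the cited review:

```latex
\bar{Y} \;=\; \frac{L\,c\alpha\,(1 + c\alpha)^{\,n-1} \;+\; \alpha\,(1 + \alpha)^{\,n-1}}
                   {L\,(1 + c\alpha)^{\,n} \;+\; (1 + \alpha)^{\,n}},
\qquad
\alpha = \frac{[S]}{K_R},\quad c = \frac{K_R}{K_T},\quad L = \frac{[T_0]}{[R_0]}
```

Here \(\bar{Y}\) is the fractional saturation of a protein with \(n\) binding sites (\(n = 4\) for hemoglobin), and \(L\) is the T/R equilibrium constant in the absence of ligand. An effector such as 2,3-diphosphoglycerate, binding preferentially to the T state, effectively raises \(L\) and shifts the equilibrium, which is the role described in the paragraph above.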

Any tissue under the condition of ischemia required for some medical procedures (open heart surgery, organ transplants, etc.) displays this fast regulatory mechanism (see Footnotes 1, 2). A display of these regulatory metabolic changes can be seen in: Cardioplegia: the protection of the myocardium during open heart surgery: a review. D.J. Hearse, J. Physiol., Paris, 1980, 76, 751-756 (Fig 1). The following points are made:

1-       It is a fast regulatory response. Therefore, no genetic mechanism can be invoked to account for it.

2-       It moves from a reversible to an irreversible condition while the cells are still alive. Death can be seen at the bottom end of the arrow. Therefore, it cannot be reconciled with some of the assumptions of molecular biology:

A-       The genes reside inside the heart muscle cells, but in order to preserve intact the source of coded genetic information that the cell reads and transcribes, DNA must be kept at a minimum of chemical reactivity.

B-       If sequence alone determined conformation, activity and function, elevated blood potassium levels could not cause cardiac arrest.

In comparison with the conditions presented here, cancer cells keep both metabolic options for glucose metabolism open at the same time. These cells can use the glucose our body provides to them, or adopt, temporarily, an independent metabolic form without the usual requirement for oxygen (one form or the other for ATP generation). “ATP generation” is an over-simplification of the metabolic status, since the carbon flow for building blocks must also be considered; oxidative metabolism of glucose in cancer cells may be viewed as a rich source of the organic molecules, or building blocks, that dividing cells always need.

JES Roselino has conjectured that “most of the Krebs cycle reactions work as ideal reversible thermodynamic systems that can supply any organic molecule whose absence would prevent cell duplication.” In Warburg’s vision, cancer cells have a defect in the Pasteur-effect metabolic control; were it functioning normally, it would indicate which form of glucose metabolism is adequate for each condition. What is more, cancer cells lack differentiated cell function. Any role for transcription factors must be considered as that of factors leading to the stable phenotypic change of cancer cells. The failure of the Pasteur effect must be sought among the fast regulatory mechanisms that do not depend on gene expression (see Footnote 3).

Extending the thoughts of JES Roselino (Hepatology 1992; 16: 1055-1060): reduced blood flow caused by increased hydrostatic pressure in extrahepatic cholestasis decreases mitochondrial function (quoted in Hepatology) and, as part of the normal Pasteur-effect response, increases glycolysis under partial and/or functional anaerobiosis, thereby blocking the gluconeogenic activity of hepatocytes, which requires inhibited glycolysis. Here a clear energetic link can be perceived between the reduced energy supply and the ability to perform a differentiated hepatic function (gluconeogenesis). In cancer cells, the action of transcription factors, which can be viewed as shifting ensembles of kaleidoscopic pieces (their activities changing as cell conditions change), is clearly linked to the new stable phenotype. Regarding the extrahepatic cholestasis mentioned above, note that when a persistent chronic condition is studied, a secondary cirrhosis develops: an example of a persistent stable condition, difficult to reverse, that arises without the requirement for a genetic mutation (see Footnote 4).

The Rejection of Complexity

Most of our reasoning about genes was derived from scientific work in microorganisms. These works have provided great advances in biochemistry.

DNA diagram showing base pairing

double helix

genome cartoon

Triple helix

formation of a triplex DNA structure


1-      The “Gelehrter idea”: no matter what you are doing, you will always be better off if you have a gene (chapter 7, Principles of Medical Genetics, Gelehrter and Collins, Williams & Wilkins, 1990).

2-      The idea that the one gene-one enzyme relationship, which works fine for our understanding of metabolism, can account for every biological problem.

3-      The idea that whatever explains biochemistry in microorganisms also explains it in every other living being (J Nirenberg).

4-      The idea that biochemistry need not take time into account; time must be considered only in studies of genetic and biological evolution (S. Luria, in Life: The Unfinished Experiment, 1977, C. Scribner’s Sons, NY).

5-      Finally, the idea that everything in biology can be found in the genome, since all information in biology goes from DNA through RNA to proteins (or resides in DNA alone, if the strict line through RNA is set aside).

This last point can be accepted only if it is understood that all GENETIC information is in our DNA: genetics is part of life, not its total expression.

For example, when our body is informed that the ambient temperature is too low, or alternatively too high, it is receiving information that arrives from the environment. This external information will affect our proteins and eventually, after longer periods in the new condition, will cause an adaptive response that may include conformational changes in transcription factors (proteins), which will in turn produce new readings of the DNA. However, this is information that moves from the outside to proteins, not from DNA to proteins. Only the last leg of the pathway, in which transcription factors change their conformation and change the reading of DNA, follows the dogmatic view, as an adaptive response (see Footnotes 1-3).

However, when time is taken into account, the first reactions against cold or heat are those that occur through changes in protein conformation, activity and function, before any change in gene expression can be noticed at the protein level. These fast changes, in seconds or minutes, cannot be explained by changes in gene expression, and they are strongly linked to what is needed for the maintenance of life.

“It is possible, and desirable,” says Roselino, “to explain all these fast biochemical responses to changes in a living being’s condition as the sound foundation of medical practice, without a single mention of DNA. In case a failure in any mechanism necessary to life is found to be genetic in its origin, the genome must be taken into account in context with this huge set of transcription factors. This is the biochemical line of reasoning that I learned with Houssay and Leloir. It would be an honor to see it restored in modern terms.”

More on the Mechanism of Metabolic Control

Genomics has played a large role in medical research for the last 70 years, and that was important. There is also good reason to rethink the objections raised by the Nobelists James Watson and Randy Schekman in the past year, whatever discomfort they bring. Molecular biology has become a tautology, and as a result has deranged scientific rigor inside biology.

Crick & Watson with their DNA model, 1953

Randy Schekman, Berkeley



According to JES Roselino: “Consider that glycolysis is oscillatory thanks to the kinetic behavior of phosphofructokinase. Further, through its effect upon pyruvate kinase, mediated by the oscillatory levels of fructose 1,6-diphosphate, the inhibition of gluconeogenesis is also oscillatory. When the carbon flow through glycolysis is driven to a maximal level, gluconeogenesis is almost completely blocked. The reversal of the pyruvate kinase step in liver requires two enzymes, pyruvate carboxylase (maintenance of oxaloacetate levels) plus phosphoenolpyruvate carboxykinase (E.C. …), and energy-requiring reactions that most likely could not, as an ensemble, respond fast enough to the short periods of pyruvate kinase inhibition during high-frequency oscillations of glycolytic flow. Only when glycolysis oscillates at low frequency can the opposing reactions enable gluconeogenic carbon flow.”
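The oscillatory phosphofructokinase kinetics invoked in this quote are often illustrated with Sel'kov's minimal two-variable model of glycolysis. The sketch below is that generic textbook model, not the author's own analysis; the parameter values a = 0.08 and b = 0.6 are standard illustrative choices that place the system on a limit cycle.

```python
# Minimal sketch of oscillatory glycolysis: the Sel'kov (1968) two-variable
# model of phosphofructokinase kinetics, integrated with forward Euler.
# x ~ product (ADP), y ~ substrate (F6P); a, b are kinetic parameters.

def selkov(x, y, a=0.08, b=0.6):
    """Time derivatives (dx/dt, dy/dt) of the Sel'kov model."""
    dx = -x + a * y + x * x * y
    dy = b - a * y - x * x * y
    return dx, dy

def simulate(t_end=500.0, dt=0.01, x0=1.0, y0=1.0):
    """Integrate with forward Euler; return the trajectory of x."""
    xs = []
    x, y = x0, y0
    for _ in range(int(t_end / dt)):
        dx, dy = selkov(x, y)
        x += dx * dt
        y += dy * dt
        xs.append(x)
    return xs

xs = simulate()
late = xs[len(xs) // 2:]           # discard the transient
amplitude = max(late) - min(late)  # a sustained swing indicates a limit cycle
print(f"oscillation amplitude of x: {amplitude:.2f}")
```

The sustained, non-decaying swing in x is the point: the oscillation arises from the kinetics of a single enzyme step, on a timescale far faster than any change in gene expression.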

If this can be shown in a rather convincing way, the same reasoning could be applied to understand how simple replicative signals, inducing the G0 to G1 transition in cells, can easily overcome the more complex signals required for cell differentiation and differentiated function.

Perhaps the problem of overextending the equivalence between the DNA and what happens to the organism is also related to the initial reliance on a single-cell model to relieve the complexity (which it does not fully capture).

For instance, consider this fragment:
“Until only recently it was assumed that all proteins take on a clearly defined three-dimensional structure – i.e. they fold in order to be able to assume these functions.”
Compare:
J.C. Seidel and J. Gergely, Investigation of conformational changes in spin-labeled myosin: model for muscle contraction. Cold Spring Harbor Symp. Quant. Biol. 1973, p 187-193.
Huxley, A.F., 1971. Proc. Roy. Soc. (London) B 178: 1.
Huxley, A.F. and R.M. Simmons, 1971. Nature 233: 633.
J.C. Haselgrove, X-ray evidence for a conformational change in the actin-containing filaments… Cold Spring Harbor Symp. Quant. Biol. 1972; 37: 341-352.

These are only a very small sample indicating otherwise. Proteins were held to be interacting macromolecules, changing their conformation in regulatory response to changes in the microenvironment (see Footnote 2). DNA was the opposite: a non-interacting macromolecule, as stable as a library must be.

The dogma held that the properties of proteins could be read in DNA alone. Consequently, the few examples quoted above had to be ignored, and everyone had to believe that DNA alone, without any role for environmental factors, controls protein amino acid sequence (true), conformation (not true), activity (not true) and function (not true).

From the dogma it appeared, naively, correct to conclude from an interpretation of your genome: “You have a 50% increased risk of developing the following disease” (a deterministic statement). The correct form must be: “You belong to a population that has a 50% increase in the risk of…”, followed by what you must do to avoid increasing your personal risk, and the care you should take if you want a longer healthy life. Thus genetic and non-genetic diseases were treated as the same, and medical foundations were reinforced by magical considerations (dogmas), in a way very profitable for those involved besides the patient.


  1. There is a link of electricity with ions in biology and the oscillatory behavior of some electrical discharges.  In addition, the oscillatory form of electrical discharged may have allowed Planck to relate high energy content with higher frequencies and conversely, low energy content in low frequency oscillatory events.  One may think of high density as an indication of great amount of matter inside a volume in space.  This helps the understanding of Planck’s idea as a high-density-energy in time for a high frequency phenomenon.
  1. Take into account a protein that may have its conformation restricted by an S-S bridge. This protein also, may move to another more flexible conformation in case it is in HS HS condition when the S-S bridge is broken. Consider also that, it takes some time for a protein to move from one conformation for instance, the restricted conformation (S-S) to other conformations. Also, it takes a few seconds or minutes to return to the S-S conformation (This is the Daniel Koshland´s concept of induced fit and relaxation time used by him in order to explain allosteric behavior of monomeric proteins- Monod, Wyman and Changeux requires tetramer or at least, dimer proteins).
  1. In case you have glycolysis oscillating in a frequency much higher than the relaxation time you could lead to the prevalence of high NADH effect leading to high HS /HS condition and at low glycolytic frequency, you could have predominance of S-S condition affecting protein conformation. In case you have predominance of NAD effect upon protein S-S you would get the opposite results.  The enormous effort to display the effect of citrate and over Phosphofructokinase conformation was made by others. Take into account that ATP action as an inhibitor in this case, is a rather unusual one. It is a substrate of the reaction, and together with its action as activator  F1,6 P (or its equivalent F2,6 P) is also unusual. However, it explains oscillatory behaviour of glycolysis. (Goldhammer , A.R, and Paradies: PFK structure and function, Curr. Top Cell Reg 1979; 15:109-141).
  4. The results presented in our Hepatology work must be viewed as follows. When hepatic (oxygenated) blood flow is preserved, the bile secretory cells of the liver receive well-oxygenated blood (arterial branches bathe the secretory cells, while branches originating from the portal vein irrigate the hepatocytes). During extrahepatic cholestasis, the low-pressure portal blood flow is reduced and the hepatocytes do not receive enough oxygen to produce the ATP that gluconeogenesis demands. The hepatic artery does not replace this flow, since its branches join the portal blood flux only after the arterial pressure has fallen to low venous pressure, at the point where the hepatic vein forms; otherwise the flow in the portal vein would be reversed, running from liver to intestine. Invoking possible valves does not help this reasoning, since minimal arterial pressure is well above maximal venous pressure and this difference would keep such a valve permanently closed. Under low portal blood flow, the hepatocyte increases pyruvate kinase activity, and with increased pyruvate kinase activity gluconeogenesis is forbidden (see the Walsh & Cooper review, quoted in the Hepatology paper as ref. 23). For the hemodynamic considerations and the role of arteries and veins in the hepatic portal system, see references 44 and 45 (Rappaport and Schneiderman; Rappaport).


 Appendix I.

metabolic pathways

Signals Upstream and Targets Downstream of Lin28 in the Lin28 Pathway

1.  Functional Proteomics Adds to Our Understanding

Ben Schuler’s research group from the Institute of Biochemistry of the University of Zurich has now established that an increase in temperature leads to folded proteins collapsing and becoming smaller. Other environmental factors can trigger the same effect. The crowded environments inside cells lead to the proteins shrinking. As these proteins interact with other molecules in the body and bring other proteins together, understanding of these processes is essential “as they play a major role in many processes in our body, for instance in the onset of cancer”, comments study coordinator Ben Schuler.

Measurements using the “molecular ruler”

“The fact that unfolded proteins shrink at higher temperatures is an indication that cell water does indeed play an important role as to the spatial organisation eventually adopted by the molecules”, comments Schuler with regard to the impact of temperature on protein structure. For their studies the biophysicists use what is known as single-molecule spectroscopy. Small colour probes in the protein enable the observation of changes with an accuracy of more than one millionth of a millimetre. With this “molecular yardstick” it is possible to measure how molecular forces impact protein structure.

With computer simulations the researchers have mimicked the behaviour of disordered proteins. They want to use them in future for more accurate predictions of their properties and functions.

Correcting test tube results

That’s why it’s important, according to Schuler, to monitor the proteins not only in the test tube but also in the organism. “This takes into account the fact that it is very crowded on the molecular level in our body as enormous numbers of biomolecules are crammed into a very small space in our cells”, says Schuler. The biochemists have mimicked this “molecular crowding” and observed that in this environment disordered proteins shrink, too.

Given these results many experiments may have to be revisited as the spatial organisation of the molecules in the organism could differ considerably from that in the test tube according to the biochemist from the University of Zurich. “We have, therefore, developed a theoretical analytical method to predict the effects of molecular crowding.” In a next step the researchers plan to apply these findings to measurements taken directly in living cells.

More information: Andrea Soranno, Iwo Koenig, Madeleine B. Borgia, Hagen Hofmann, Franziska Zosel, Daniel Nettels, and Benjamin Schuler. Single-molecule spectroscopy reveals polymer effects of disordered proteins in crowded environments. PNAS, March 2014. DOI: 10.1073/pnas.1322611111


Effects of Hypoxia on Metabolic Flux

  2. Glucose-6-phosphate dehydrogenase regulation in the hepatopancreas of the anoxia-tolerant marine mollusc, Littorina littorea

JL Lama, RAV Bell and KB Storey

Glucose-6-phosphate dehydrogenase (G6PDH) gates flux through the pentose phosphate pathway and is key to cellular antioxidant defense due to its role in producing NADPH. Good antioxidant defenses are crucial for anoxia-tolerant organisms that experience wide variations in oxygen availability. The marine mollusc, Littorina littorea, is an intertidal snail that experiences daily bouts of anoxia/hypoxia with the tide cycle and shows multiple metabolic and enzymatic adaptations that support anaerobiosis. This study investigated the kinetic, physical and regulatory properties of G6PDH from hepatopancreas of L. littorea to determine if the enzyme is differentially regulated in response to anoxia, thereby providing altered pentose phosphate pathway functionality under oxygen stress conditions.

Several kinetic properties of G6PDH differed significantly between aerobic and 24 h anoxic conditions; compared with the aerobic state, anoxic G6PDH (assayed at pH 8) showed a 38% decrease in the Km for G6P and enhanced inhibition by urea, whereas in pH 6 assays the Km for NADP and the maximal activity changed significantly.

All these data indicated that the aerobic and anoxic forms of G6PDH were the high and low phosphate forms, respectively, and that phosphorylation state was modulated in response to selected endogenous protein kinases (PKA or PKG) and protein phosphatases (PP1 or PP2C). Anoxia-induced changes in the phosphorylation state of G6PDH may facilitate sustained or increased production of NADPH to enhance antioxidant defense during long term anaerobiosis and/or during the transition back to aerobic conditions when the reintroduction of oxygen causes a rapid increase in oxidative stress.

Lama et al. PeerJ 2013. http://dx.doi.org/10.7717/peerj.21
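The kinetic consequence of the reported Km shift can be illustrated with the standard Michaelis-Menten rate law. The Vmax and substrate values below are hypothetical, chosen only to show the direction of the effect; they are not measurements from the paper.

```python
# Michaelis-Menten rate: v = Vmax * [S] / (Km + [S]).
# Illustrates how the reported 38% drop in Km for G6P under anoxia
# raises flux at subsaturating substrate. All numbers are illustrative.

def mm_rate(vmax, km, s):
    """Michaelis-Menten reaction rate at substrate concentration s."""
    return vmax * s / (km + s)

vmax = 1.0                           # arbitrary units (hypothetical)
km_aerobic = 0.10                    # hypothetical Km for G6P (mM)
km_anoxic = km_aerobic * (1 - 0.38)  # 38% lower, as reported for anoxia
s = 0.05                             # subsaturating substrate (mM)

v_aer = mm_rate(vmax, km_aerobic, s)
v_anx = mm_rate(vmax, km_anoxic, s)
print(v_anx > v_aer)  # lower Km gives a higher rate at the same [S]
```

At saturating substrate the two forms would converge on the same Vmax; the Km change matters precisely in the subsaturating regime a cell is likely to occupy.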


  3. Structural Basis for Isoform-Selective Inhibition in Nitric Oxide Synthase

    TL Poulos and H Li

In the cardiovascular system, the enzyme nitric oxide synthase (NOS) converts L-arginine into L-citrulline and releases the important signaling molecule nitric oxide (NO). NO produced by endothelial NOS (eNOS) relaxes smooth muscle, which controls vascular tone and blood pressure. Neuronal NOS (nNOS) produces NO in the brain, where it influences a variety of neural functions such as neurotransmitter release. NO can also support the immune system, serving as a cytotoxic agent during infections. Despite all of these important functions, NO is a free radical and, when overproduced, it can cause tissue damage. This mechanism can operate in many neurodegenerative diseases, and as a result the development of drugs targeting nNOS is a desirable therapeutic goal.

However, the active sites of all three human isoforms are very similar, and designing inhibitors specific for nNOS is a challenging problem. It is critically important, for example, not to inhibit eNOS owing to its central role in controlling blood pressure. In this Account, we summarize our efforts in collaboration with Rick Silverman at Northwestern University to develop drug candidates that specifically target NOS using crystallography, computational chemistry, and organic synthesis. As a result, we have developed aminopyridine compounds that are 3800-fold more selective for nNOS than eNOS, some of which show excellent neuroprotective effects in animal models. Our group has solved approximately 130 NOS-inhibitor crystal structures which have provided the structural basis for our design efforts. Initial crystal structures of nNOS and eNOS bound to selective dipeptide inhibitors showed that a single amino acid difference (Asp in nNOS and Asn in eNOS) results in much tighter binding to nNOS. The NOS active site is open and rigid, which produces few large structural changes when inhibitors bind. However, we have found that relatively small changes in the active site and inhibitor chirality can account for large differences in isoform-selectivity. For example, we expected that the aminopyridine group on our inhibitors would form a hydrogen bond with a conserved Glu inside the NOS active site. Instead, in one group of inhibitors, the aminopyridine group extends outside of the active site where it interacts with a heme propionate. For this orientation to occur, a conserved Tyr side chain must swing out of the way. This unanticipated observation taught us about the importance of inhibitor chirality and active site dynamics. We also successfully used computational methods to gain insights into the contribution of the state of protonation of the inhibitors to their selectivity. 
Employing the lessons learned from the aminopyridine inhibitors, the Silverman lab designed and synthesized symmetric double-headed inhibitors with an aminopyridine at each end, taking advantage of their ability to make contacts both inside and outside of the active site. Crystal structures provided yet another unexpected surprise. Two of the double-headed inhibitor molecules bound to each enzyme subunit, and one molecule participated in the generation of a novel Zn site that required some side chains to adopt alternate conformations. Therefore, in addition to achieving our specific goal, the development of nNOS-selective compounds, we have learned how subtle differences in chirality and structure can control protein-ligand interactions, often in unexpected ways.



Nitric oxide synthase

arginine-NO-citrulline cycle

active site of eNOS (PDB_1P6L) and nNOS (PDB_1P6H).



NO – muscle, vasculature, mitochondria

Figure:  (A) Structure of one of the early dipeptide lead compounds, 1, that exhibits excellent isoform selectivity. (B, C) show the crystal structures of the dipeptide inhibitor 1 in the active site of eNOS (PDB: 1P6L) and nNOS (PDB: 1P6H). In nNOS, the inhibitor “curls”, which enables the inhibitor R-amino group to interact with both Glu592 and Asp597. In eNOS, Asn368 is the homologue to nNOS Asp597.

Acc Chem Res 2013; 46(2): 390-98.

  4. Jamming a Protein Signal

Interfering with a single cancer-promoting protein and its receptor can trigger this mechanism by initiating autophagy of the affected cells, according to researchers at The University of Texas MD Anderson Cancer Center in the journal Cell Reports. According to Dr. Anil Sood and Yunfei Wen, lead and first authors, blocking prolactin, a potent growth factor for ovarian cancer, sets off downstream events that result in cell death by autophagy, the process that recycles damaged organelles and proteins for reuse by the cell through the phagolysosome. This, in turn, provides a clinical rationale for blocking prolactin and its receptor to initiate sustained autophagy as an alternative strategy for treating cancers.

Steep reductions in tumor weight

Prolactin (PRL) is a hormone previously implicated in ovarian, endometrial and other cancer development and progression. When PRL binds to its cell membrane receptor, PRLR, activation of cancer-promoting cell signaling pathways follows. A variant of normal prolactin called G129R blocks the reaction between prolactin and its receptor. Sood and colleagues treated mice bearing two different lines of human ovarian cancer, both expressing the prolactin receptor, with G129R. Tumor weights fell by 50 percent for mice with either type of ovarian cancer after 28 days of treatment with G129R, and adding the taxane-based chemotherapy agent paclitaxel cut tumor weight by 90 percent. They surmise that higher doses of G129R may result in even greater therapeutic benefit.


3D experiments show death by autophagy




Next the team used the prolactin-mimicking peptide to treat cultures of cancer spheroids which sharply reduced their numbers, and blocked the activation of JAK2 and STAT signaling pathways.

Protein analysis of the treated spheroids showed increased presence of autophagy factors and genomic analysis revealed increased expression of a number of genes involved in autophagy progression and cell death.  Then a series of experiments using fluorescence and electron microscopy showed that the cytosol of treated cells had large numbers of cavities caused by autophagy.

The team also connected the G129R-induced autophagy to the activity of PEA-15, a known cancer inhibitor. Analysis of tumor samples from 32 ovarian cancer patients showed that tumors express higher levels of the prolactin receptor and lower levels of phosphorylated PEA-15 than normal ovarian tissue. However, patients with low levels of the prolactin receptor and higher PEA-15 had longer overall survival than those with high PRLR and low PEA-15.

Source: MD Anderson Cancer Center


  5. Chemists’ Work with Small Peptide Chains of Enzymes

Korendovych and his team designed seven simple peptides, each containing seven amino acids. They then allowed the molecules of each peptide to self-assemble, or spontaneously clump together, to form amyloids. (Zinc, a metal with catalytic properties, was introduced to speed up the reaction.) What they found was that four of the seven peptides catalyzed the hydrolysis of molecules known as esters, compounds that react with water to produce alcohols and acids, a feat not uncommon among certain enzymes.

“It was the first time that a peptide this small self-assembled to produce an enzyme-like catalyst,” says Korendovych. “Each enzyme has to be an exact fit for its respective substrate,” he says, referring to the molecule with which an enzyme reacts. “Even after millions of years, nature is still testing all the possible combinations of enzymes to determine which ones can catalyze metabolic reactions. Our results make an argument for the design of self-assembling nanostructured catalysts.”

Source: Syracuse University

Here are three articles emphasizing the value of combinatorial analysis, which can draw on genomic, clinical, and proteomic data sets.


  6. Comparative analysis of differential network modularity in tissue specific normal and cancer protein interaction networks

    F Islam, M Hoque, RS Banik, S Roy, SS Sumi, et al.

As most biological networks show modular properties, the analysis of differential modularity between normal and cancer protein interaction networks can be a good way to understand cancer more deeply. Two aspects of biological network modularity have been considered in this regard: detection of molecular complexes (potential modules or clusters) and identification of the crucial nodes that form overlapping modules.

A computational analysis of previously published protein interaction networks (PINs) was conducted to identify the molecular complexes and crucial nodes of the networks. Protein molecules involved in ten major cancer signal transduction pathways were used to construct the networks, based on expression data from five tissues (bone, breast, colon, kidney and liver) in both normal and cancer conditions.

Cancer PINs show a higher level of clustering (formation of molecular complexes) than the normal ones. In contrast, a lower level of modular overlapping is found in cancer PINs than in normal ones. Thus it can be proposed that some giant nodes with very high degree form in the cancer networks, reducing the overlap among network modules, even though the number of predicted molecular complexes is higher in cancer conditions.

Islam et al. Journal of Clinical Bioinformatics 2013, 3:19-32
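The clustering comparison described above can be sketched with the local clustering coefficient computed on toy adjacency lists. The two small graphs below are illustrative stand-ins, not the paper’s actual tissue-specific PINs.

```python
# Toy comparison of clustering (module formation) in two small protein
# interaction networks, echoing the observation that cancer PINs show
# higher clustering. Graphs and values are illustrative only.

from itertools import combinations

def clustering(adj, node):
    """Local clustering coefficient: fraction of a node's neighbour pairs that interact."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
    return 2 * links / (k * (k - 1))

def avg_clustering(adj):
    """Mean local clustering coefficient over all nodes."""
    return sum(clustering(adj, n) for n in adj) / len(adj)

# "normal" network: a path A-B-C-D (no triangles, no tight modules)
normal = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
# "cancer" network: a triangle A-B-C plus a pendant node (one tight module)
cancer = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}

print(avg_clustering(cancer) > avg_clustering(normal))  # True
```

The same comparison on real PINs would use published interaction data and a complex-detection algorithm, but the clustering coefficient is the basic quantity behind the "higher clustering in cancer" statement.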

  7. A new 12-gene diagnostic biomarker signature of melanoma revealed by integrated microarray analysis

    Wanting Liu, Yonghong Peng and Desmond J. Tobin
    PeerJ 1:e49;        http://dx.doi.org/10.7717/peerj.49

Here we present an integrated microarray analysis framework, based on a genome-wide relative significance (GWRS) and genome-wide global significance (GWGS) model. When applied to five microarray datasets on melanoma published between 2000 and 2011, this method revealed a new signature of 200 genes. When these were linked to so-called ‘melanoma driver’ genes involved in MAPK, Ca2+, and WNT signaling pathways, we were able to produce a new 12-gene diagnostic biomarker signature for melanoma (i.e., EGFR, FGFR2, FGFR3, IL8, PTPRF, TNC, CXCL13, COL11A1, CHP2, SHC4, PPP2R2C, and WNT4). We have begun to experimentally validate a subset of these genes involved in MAPK signaling at the protein level, including CXCL13, COL11A1, PTPRF and SHC4, and found these to be overexpressed in metastatic and primary melanoma cells in vitro and in situ compared to melanocytes cultured from healthy skin epidermis and normal healthy human skin.
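The core idea of combining per-dataset gene rankings into a genome-wide score can be sketched as follows. The plain average-rank combination and the toy scores are simplifying assumptions for illustration; they are not the authors’ actual GWRS/GWGS formulas.

```python
# Toy illustration of cross-dataset rank combination, a simplified
# stand-in for the GWRS/GWGS model: rank genes within each dataset,
# then average the ranks. Scores below are made up; this assumes
# every gene is scored in every dataset.

def rank_genes(scores):
    """Rank genes in one dataset (1 = most significant)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {gene: i + 1 for i, gene in enumerate(ordered)}

def global_rank(datasets):
    """Average each gene's rank across datasets; lower = more consistent."""
    genes = set().union(*datasets)
    per_dataset = [rank_genes(d) for d in datasets]
    return {g: sum(r[g] for r in per_dataset) / len(datasets) for g in genes}

# Three toy "datasets" of differential-expression scores for the same genes
d1 = {"EGFR": 0.9, "WNT4": 0.2, "TNC": 0.5}
d2 = {"EGFR": 0.8, "WNT4": 0.3, "TNC": 0.6}
d3 = {"EGFR": 0.7, "WNT4": 0.1, "TNC": 0.4}

g = global_rank([d1, d2, d3])
best = min(g, key=g.get)
print(best)  # EGFR: it ranks first in every toy dataset
```

The value of this style of integration is that a gene must rank well across all five microarray datasets, not just one, to enter the signature.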


catalytic amyloid forming particle

  8. PanelomiX: A threshold-based algorithm to create panels of biomarkers

    X Robin, N Turck, A Hainard, N Tiberti, et al.
    Translational Proteomics 2013.    http://dx.doi.org/10.1016/j.trprot.2013.04.003

The PanelomiX toolbox combines biomarkers and evaluates the performance of panels to classify patients better than single markers or other classifiers. The ICBT algorithm proved to be an efficient classifier, the results of which can easily be interpreted.
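A threshold-based panel of the kind PanelomiX builds can be sketched as follows: a sample is called positive when at least a minimum number of markers exceed their cutoffs. The marker names and cutoffs below are hypothetical, and the real ICBT algorithm additionally searches over thresholds and marker combinations; this only shows the decision rule.

```python
# Sketch of a threshold-based biomarker panel in the spirit of PanelomiX:
# positive when at least `min_hits` of the markers exceed their thresholds.
# Marker names and cutoff values are hypothetical.

def panel_classify(sample, thresholds, min_hits):
    """Return True if at least min_hits markers exceed their cutoffs."""
    hits = sum(1 for marker, cutoff in thresholds.items()
               if sample.get(marker, 0.0) > cutoff)
    return hits >= min_hits

thresholds = {"markerA": 1.5, "markerB": 40.0, "markerC": 0.8}  # hypothetical

patient = {"markerA": 2.1, "markerB": 35.0, "markerC": 1.0}
print(panel_classify(patient, thresholds, min_hits=2))  # True: A and C exceed cutoffs
```

Such a rule is easy to read off clinically ("positive if 2 of 3 markers are elevated"), which is the interpretability advantage the abstract claims for the ICBT results.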

Here are two current examples of the immense role played by signaling pathways in carcinogenic mechanisms and in treatment targeting, which is also confounded by acquired resistance.


  9. Triple-Negative Breast Cancer

  1. epidermal growth factor receptor (EGFR or ErbB1) and
  2. high activity of the phosphatidylinositol 3-kinase (PI3K)–Akt pathway

are both targeted in triple-negative breast cancer (TNBC).

  • activation of another EGFR family member [human epidermal growth factor receptor 3 (HER3) (or ErbB3)] may limit the antitumor effects of these drugs.

This study found that TNBC cell lines cultured with the EGFR or HER3 ligand EGF or heregulin, respectively, and treated with either an Akt inhibitor (GDC-0068) or a PI3K inhibitor (GDC-0941) had increased abundance and phosphorylation of HER3.

The phosphorylation of HER3 and EGFR in response to these treatments

  1. was reduced by the addition of a dual EGFR and HER3 inhibitor (MEHD7945A).
  2. MEHD7945A also decreased the phosphorylation (and activation) of EGFR and HER3 and
  3. the phosphorylation of downstream targets that occurred in response to the combination of EGFR ligands and PI3K-Akt pathway inhibitors.

In culture, inhibition of the PI3K-Akt pathway combined with either MEHD7945A or knockdown of HER3

  1. decreased cell proliferation compared with inhibition of the PI3K-Akt pathway alone.
  2. Combining either GDC-0068 or GDC-0941 with MEHD7945A inhibited the growth of xenografts derived from TNBC cell lines or from TNBC patient tumors, and
  3. this combination treatment was also more effective than combining either GDC-0068 or GDC-0941 with cetuximab, an EGFR-targeted antibody.
  4. After therapy with EGFR-targeted antibodies, some patients had residual tumors with increased HER3 abundance and EGFR/HER3 dimerization (an activating interaction).

Thus, we propose that concomitant blockade of EGFR, HER3, and the PI3K-Akt pathway in TNBC should be investigated in the clinical setting.

Reference: Antagonism of EGFR and HER3 Enhances the Response to Inhibitors of the PI3K-Akt Pathway in Triple-Negative Breast Cancer. JJ Tao, P Castel, N Radosevic-Robin, M Elkabets, et al.  Sci. Signal., 25 March 2014;
7(318), p. ra29   http://dx.doi.org/10.1126/scisignal.2005125


  10. Metastasis in RAS Mutant or Inhibitor-Resistant Melanoma Cells

The protein kinase BRAF is mutated in about 40% of melanomas, and BRAF inhibitors improve progression-free and overall survival in these patients. However, after a relatively short period of disease control, most patients develop resistance because of reactivation of the RAF–ERK (extracellular signal–regulated kinase) pathway, mediated in many cases by mutations in RAS. We found that BRAF inhibition induces invasion and metastasis in RAS mutant melanoma cells through a mechanism mediated by the reactivation of the MEK (mitogen-activated protein kinase kinase)–ERK pathway.

Reference: BRAF Inhibitors Induce Metastasis in RAS Mutant or Inhibitor-Resistant Melanoma Cells by Reactivating MEK and ERK Signaling. B Sanchez-Laorden, A Viros, MR Girotti, M Pedersen, G Saturno, et al., Sci. Signal., 25 March 2014;  7(318), p. ra30  http://dx.doi.org/10.1126/scisignal.2004815

Appendix II.

The world of physics in the twentieth century saw the end of the determinism established by Newton, which was characterized by discrete laws describing natural observations, as in gravity and in electricity. In an early phase of investigation, the era of galvanic or voltaic electricity represented a revolutionary break from the historical focus on frictional electricity. Alessandro Volta discovered that chemical reactions could be used to create positively charged anodes and negatively charged cathodes. In 1790, Prof. Luigi Aloisio Galvani of Bologna, while conducting experiments on “animal electricity“, noticed the twitching of a frog’s legs in the presence of an electric machine. He observed that a frog’s muscle, suspended on an iron balustrade by a copper hook passing through its dorsal column, underwent lively convulsions without any extraneous cause, the electric machine being at this time absent. Volta communicated a description of his pile to the Royal Society of London, and shortly thereafter Nicholson and Carlisle (1800) produced the decomposition of water by means of the electric current, using Volta’s pile as the source of electromotive force.

Siméon Denis Poisson attacked the difficult problem of induced magnetization, and his results provided  a first approximation. His innovation required the application of mathematics to physics.  His memoirs on the theory of electricity and magnetism created a new branch of mathematical physics.  The discovery of electromagnetic induction was made almost simultaneously and independently by Michael Faraday and Joseph Henry. Michael Faraday, the successor of Humphry Davy, began his epoch-making research relating to electric and electromagnetic induction in 1831. In his investigations of the peculiar manner in which iron filings arrange themselves on a cardboard or glass in proximity to the poles of a magnet, Faraday conceived the idea of magnetic “lines of force” extending from pole to pole of the magnet and along which the filings tend to place themselves. On the discovery being made that magnetic effects accompany the passage of an electric current in a wire, it was also assumed that similar magnetic lines of force whirled around the wire. He also posited that iron, nickel, cobalt, manganese, chromium, etc., are paramagnetic (attracted by magnetism), whilst other substances, such as bismuth, phosphorus, antimony, zinc, etc., are repelled by magnetism or are diamagnetic.

Around the mid-19th century, Fleeming Jenkin’s work on ‘Electricity and Magnetism’ and Clerk Maxwell’s ‘Treatise on Electricity and Magnetism’ were published. About 1850, Kirchhoff published his laws relating to branched or divided circuits. He also showed mathematically that, according to the then-prevailing electrodynamic theory, electricity would be propagated along a perfectly conducting wire with the velocity of light. Hermann Helmholtz investigated the effects of induction on the strength of a current and deduced mathematical equations, which experiment confirmed. In 1853, Sir William Thomson (later Lord Kelvin) predicted, as a result of mathematical calculations, the oscillatory nature of the electric discharge of a condenser circuit; Joseph Henry had already discerned the oscillatory nature of the Leyden jar discharge in 1842.

In 1864 James Clerk Maxwell announced his electromagnetic theory of light, which was perhaps the greatest single step in the world’s knowledge of electricity. Maxwell had studied and commented on the field of electricity and magnetism as early as 1855/6, when On Faraday’s lines of force was read to the Cambridge Philosophical Society. The paper presented a simplified model of Faraday’s work and described how the two phenomena were related. He reduced all of the current knowledge into a linked set of differential equations with 20 equations in 20 variables. This work was later published as On Physical Lines of Force in 1861. In order to determine the force acting on any part of the machine, we must find its momentum and then calculate the rate at which this momentum is being changed; this rate of change gives us the force. The method of calculation which it is necessary to employ was first given by Lagrange and afterwards developed, with some modifications, by Hamilton. Maxwell logically showed how these methods of calculation could be applied to the electro-magnetic field. The energy of a dynamical system is partly kinetic, partly potential; Maxwell supposed that the magnetic energy of the field is kinetic energy and the electric energy potential. Around 1862, while lecturing at King’s College, Maxwell calculated that the speed of propagation of an electromagnetic field is approximately that of the speed of light. Maxwell’s electromagnetic theory of light obviously involved the existence of electric waves in free space, and his followers set themselves the task of experimentally demonstrating the truth of the theory. By 1871, he presented the Remarks on the mathematical classification of physical quantities.

A Wave-Particle Dilemma at the Century End

In 1896 J.J. Thomson performed experiments indicating that cathode rays really were particles, found an accurate value for their charge-to-mass ratio e/m, and found that e/m was independent of cathode material. He made good estimates of both the charge e and the mass m, finding that cathode-ray particles, which he called “corpuscles”, had perhaps one thousandth of the mass of the least massive ion known (hydrogen). He further showed that the negatively charged particles produced by radioactive materials, by heated materials, and by illuminated materials were universal. In the late 19th century, the Michelson–Morley experiment was performed by Albert Michelson and Edward Morley at what is now Case Western Reserve University. It is generally considered to be the strongest evidence against the theory of a luminiferous aether, and has been referred to as “the kicking-off point for the theoretical aspects of the Second Scientific Revolution.” Primarily for this work, Albert Michelson was awarded the Nobel Prize in 1907.

Wave–particle duality is a theory that proposes that all matter exhibits the properties of not only particles, which have mass, but also waves, which transfer energy. A central concept of quantum mechanics, this duality addresses the inability of classical concepts like “particle” and “wave” to fully describe the behavior of quantum-scale objects. Standard interpretations of quantum mechanics explain this paradox as a fundamental property of the universe, while alternative interpretations explain the duality as an emergent, second-order consequence of various limitations of the observer. This treatment focuses on explaining the behavior from the perspective of the widely used Copenhagen interpretation, in which wave–particle duality serves as one aspect of the concept of complementarity: one can view phenomena in one way or in another, but not both simultaneously. Through the work of Max Planck, Albert Einstein, Louis de Broglie, Arthur Compton, Niels Bohr, and many others, current scientific theory holds that all particles also have a wave nature (and vice versa).

Beginning in 1670 and progressing over three decades, Isaac Newton argued that the perfectly straight lines of reflection demonstrated light’s particle nature, but Newton’s contemporaries Robert Hooke and Christiaan Huygens (and later Augustin-Jean Fresnel) mathematically refined the wave viewpoint, showing that if light traveled at different speeds in different media, refraction could be easily explained. The resulting Huygens–Fresnel principle was supported by Thomas Young’s discovery of double-slit interference, the beginning of the end for the particle-light camp. The final blow against corpuscular theory came when James Clerk Maxwell discovered that he could combine four simple equations, along with a slight modification, to describe self-propagating waves of oscillating electric and magnetic fields. When the propagation speed of these electromagnetic waves was calculated, the speed of light fell out. While the 19th century had seen the success of the wave theory at describing light, it had also witnessed the rise of the atomic theory at describing matter.

Matter and Light

In 1789, Antoine Lavoisier secured chemistry’s footing by introducing rigor and precision into his laboratory techniques. By discovering diatomic gases, Avogadro completed the basic atomic theory, allowing the correct molecular formulae of most known compounds, as well as the correct weights of atoms, to be deduced and categorized in a consistent manner. The final stroke in classical atomic theory came when Dmitri Mendeleev saw an order in recurring chemical properties and created a table presenting the elements in unprecedented order and symmetry. Chemistry was now an atomic science.

Black-body radiation, the emission of electromagnetic energy due to an object’s heat, could not be explained from classical arguments alone. The equipartition theorem of classical mechanics, the basis of all classical thermodynamic theories, stated that an object’s energy is partitioned equally among the object’s vibrational modes. This worked well when describing thermal objects, whose vibrational modes were defined as the speeds of their constituent atoms, and the speed distribution derived from egalitarian partitioning of these vibrational modes closely matched experimental results. Speeds much higher than the average speed were suppressed by the fact that kinetic energy is quadratic (doubling the speed requires four times the energy), so the number of atoms occupying high energy modes (high speeds) quickly drops off. Since light was known to be waves of electromagnetism, physicists hoped to describe this emission via classical laws. This became known as the black body problem. The Rayleigh–Jeans law, while correctly predicting the intensity of long-wavelength emissions, predicted infinite total energy, as the intensity diverges to infinity for short wavelengths.
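In standard notation, the divergence described above follows from the Rayleigh–Jeans form of the spectral radiance:

```latex
B_{\lambda}(T) = \frac{2 c k_B T}{\lambda^{4}}
```

which grows without bound as \(\lambda \to 0\), so the total energy integrated over all wavelengths diverges, the so-called ultraviolet catastrophe.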

The solution arrived in 1900 when Max Planck hypothesized that the frequency of light emitted by the black body depended on the frequency of the oscillator that emitted it, and the energy of these oscillators increased linearly with frequency (according to his constant h, where E = hν). By demanding that high-frequency light must be emitted by an oscillator of equal frequency, and further requiring that this oscillator occupy higher energy than one of a lesser frequency, Planck avoided any catastrophe: giving an equal partition to high-frequency oscillators produced successively fewer oscillators and less emitted light. And as in the Maxwell–Boltzmann distribution, the low-frequency, low-energy oscillators were suppressed by the onslaught of thermal jiggling from higher energy oscillators, which necessarily increased their energy and frequency. Planck had intentionally created an atomic theory of the black body, but had unintentionally generated an atomic theory of light, in which the black body never generates quanta of light at a given frequency with energy less than hν.
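Planck’s hypothesis amounts to restricting each oscillator of frequency ν to energies that are integer multiples of hν, which leads to his radiation law:

```latex
E_n = n h \nu, \qquad
B_{\nu}(T) = \frac{2 h \nu^{3}}{c^{2}} \, \frac{1}{e^{h\nu / k_B T} - 1}
```

For \(h\nu \ll k_B T\) the exponential can be expanded and the expression reduces to the classical Rayleigh–Jeans result, so the two laws agree at long wavelengths while Planck’s law stays finite at short ones.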

In 1905 Albert Einstein took Planck’s black-body model and saw in it a wonderful solution to another outstanding problem of the day: the photoelectric effect, the phenomenon in which electrons are emitted from atoms when they absorb energy from light. Only by increasing the frequency of the light, and thus increasing the energy of the photons, can one eject electrons with higher energy. Thus, using Planck’s constant h to determine the energy of the photons based upon their frequency, the energy of ejected electrons should also increase linearly with frequency, the gradient of the line being Planck’s constant. These results were not confirmed until 1915, when Robert Andrews Millikan produced experimental results in perfect accord with Einstein’s predictions. While the energy of ejected electrons reflected Planck’s constant, the existence of photons was not explicitly proven until the discovery of the photon antibunching effect. When Einstein received his Nobel Prize in 1921, it was for the photoelectric effect, the suggestion of quantized light. Einstein’s “light quanta” represented the quintessential example of wave–particle duality. Electromagnetic radiation propagates following linear wave equations, but can only be emitted or absorbed as discrete elements, thus acting as a wave and a particle simultaneously.
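Einstein's prediction, the one Millikan confirmed, has a compact standard form (φ, the work function of the metal, is a notational assumption, not named in the text):

```latex
K_{\max} = h\nu - \varphi
```

A plot of the maximum kinetic energy K_max of ejected electrons against light frequency ν is a straight line whose gradient is Planck's constant h.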

Radioactivity Changes the Scientific Landscape

The turn of the century also saw the discovery of radioactivity, which later came to the forefront during World War II with the Manhattan Project, the discovery of the chain reaction, and, ultimately, Hiroshima and Nagasaki.

Marie Curie





Marie Skłodowska-Curie was a Polish and naturalized-French physicist and chemist who conducted pioneering research on radioactivity. She was the first woman to win a Nobel Prize, the only woman to win in two fields, and the only person to win in multiple sciences. She was also the first woman to become a professor at the University of Paris, and in 1995 became the first woman to be entombed on her own merits in the Panthéon in Paris. She shared the 1903 Nobel Prize in Physics with her husband Pierre Curie and with physicist Henri Becquerel, and won the 1911 Nobel Prize in Chemistry. Her achievements included a theory of radioactivity (a term that she coined), techniques for isolating radioactive isotopes, and the discovery of polonium and radium. She named the first chemical element that she discovered, polonium, which she first isolated in 1898, after her native country. Under her direction, the world’s first studies were conducted into the treatment of neoplasms using radioactive isotopes. She founded the Curie Institutes in Paris and in Warsaw, which remain major centres of medical research today. During World War I, she established the first military field radiological centres. Curie died in 1934 of aplastic anemia brought on by exposure to radiation, mainly, it seems, during her World War I service in the mobile X-ray units she had created.



Progenitor Cell Transplant for MI and Cardiogenesis (Part 1)

Author and Curator: Larry H. Bernstein, MD, FCAP
Curator: Aviva Lev-Ari, PhD, RN
This article is Part I of a three-part review of perspectives on stem cell transplantation onto a substantial area of infarcted myocardium to generate cardiogenesis in tissue composed of both repair fibroblasts and cardiomyocytes, after an essentially nontransmural myocardial infarct.

Progenitor Cell Transplant for MI and Cardiogenesis (Part 1)

Larry H. Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN


Source of Stem Cells to Ameliorate Damaged Myocardium (Part 2)

Larry H. Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2013-10-29/larryhbern/Source_of_Stem_Cells_to_Ameliorate_Damaged_Myocardium/

An Acellular 3-Dimensional Collagen Scaffold Induces Neo-angiogenesis
 (Part 3)

Larry H. Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2013-10-29/larryhbern/An_Acellular_3-Dimensional_Collagen_Scaffold_Induces_Neo-angiogenesis/

The same approach is considered for stroke in one of these studies. These are the issues that need to be considered:
  1. Adult stem cells
  2. Umbilical cord tissue sourced cells
  3. Sheets of stem cells
  4. Available arterial supply at the margins
  5. Infarct diameter
  6. Depth of ischemic necrosis
  7. Distribution of stroke pressure
  8. Stroke volume
  9. Mean Arterial Pressure (MAP)
  10. Location of infarct
  11. Ratio of myocytes to fibrocytes
  12. Coexisting heart disease and, or
  13. Comorbidities predisposing to cardiovascular disease, hypertension
  14. Inflammatory reaction against the graft

Transplantation of cardiac progenitor cell sheet onto infarcted heart promotes cardiogenesis and improves function

L Zakharova1, D Mastroeni1, N Mutlu1, M Molina1, S Goldman2,3, E Diethrich4, and MA Gaballa1*
1Center for Cardiovascular Research, Banner Sun Health Research Institute, Sun City, AZ; 2Cardiology Section, Southern Arizona VA Health Care System, and 3Department of Internal Medicine, The University of Arizona, Tucson, AZ; and 4Arizona Heart Institute, Phoenix, AZ
Cardiovascular Research (2010) 87, 40–49   http://dx.doi.org/10.1093/cvr/cvq027



Cell-based therapy for myocardial infarction (MI) holds great promise; however, the ideal cell type and delivery system have not been established. Obstacles in the field are the massive cell death after direct injection and the small percentage of surviving cells differentiating into cardiomyocytes. To overcome these challenges we designed a novel study to deliver cardiac progenitor cells as a cell sheet.

Methods and results

Cell sheets composed of rat or human cardiac progenitor cells (cardiospheres), and cardiac stromal cells were transplanted onto the infarcted myocardium after coronary artery ligation in rats. Three weeks later, transplanted cells survived, proliferated, and differentiated into cardiomyocytes (14.6 ± 4.7%). Cell sheet transplantation suppressed cardiac wall thinning and increased capillary density (194 ± 20 vs. 97 ± 24 per mm2, P < 0.05) compared with the untreated MI. Cell migration from the sheet was observed along the necrotic trails within the infarcted area. The migrated cells were located in the vicinity of stromal-derived factor (SDF-1) released from the injured myocardium, and about 20% of these cells expressed CXCR4, suggesting that the SDF-1/CXCR4 axis plays, at least, a role in cell migration. Transplantation of cell sheets resulted in a preservation of cardiac contractile function after MI, as was shown by a greater ejection fraction and lower left ventricular end diastolic pressure compared with untreated MI.


The scaffold-free cardiosphere-derived cell sheet approach seeks to efficiently deliver cells and increase cell survival. These transplanted cells effectively rescue myocardial function after infarction by promoting not only neovascularization but also a significant level of cardiomyogenesis.
Keywords  Myocardial infarction • Cardiac progenitor cells • Cardiospheres • Cardiac regeneration • Contractility


Despite advances in cardiac treatment after myocardial infarction (MI), congestive heart failure remains the number one killer worldwide. MI results in an irreversible loss of functional cardiomyocytes followed by scar tissue formation. To date, heart transplant remains the gold standard for treatment of end-stage heart failure, a procedure which will always be limited by the availability of a donor heart. Hence, developing a new form of therapy is vital.
A number of adult non-cardiac progenitor cells have been tested for myocardial regeneration, including skeletal myoblasts,1 bone-marrow2, and endothelial progenitor cells.3,4 In addition, several cardiac resident stem cell populations have been characterized based on the expression of stem cell marker proteins.5–8 Among these, the c-Kit+ population has been reported to promote myocardial repair.5,9 Recently, an ex vivo method to expand cardiac-derived progenitor cells from human myocardial biopsies and murine hearts was developed.10 Using this approach, undifferentiated cells (or cardiospheres) grow as self-adherent clusters from postnatal atrium or ventricular biopsy specimens.11
To date, the most common technique for cell delivery is direct injection into the infarcted myocardium.12 This approach is inefficient because more than 90% of the delivered cells die by apoptosis and only a small number of the surviving cells differentiate into cardiomyocytes.13 An alternative approach to cell delivery is a biodegradable scaffold-based engineered tissue.14,15 This approach has the clear advantage of creating tissue patches of different shapes and sizes and of creating a beating heart by decellularization technology.16 Advances are being made to overcome the issue of small patch thickness and to minimize possible toxicity of the degraded substances from the scaffold.15 Recently, scaffold-free cell sheets were created from fibroblasts, mesenchymal cells, or neonatal myocytes.17,18 Transplantation of these sheets resulted in a limited improvement in cardiac function due to induced neovascularization and angiogenesis through secretion of angiogenic factors.17–19 However, few of those progenitor cells differentiated into cardiomyocytes.17 The need to improve cardiac contractile function suggests focusing on cells with a higher potential to differentiate into cardiomyocytes, combined with an improved delivery method.
In the present study, we report a cell-based therapeutic strategy that surpasses limitations inherent in previously used methodologies. We have created a scaffold-free sheet composed of cardiac progenitor cells (cardiospheres) incorporated into a layer of cardiac stromal cells. The progenitor cells survived when transplanted as a cell sheet onto the infarcted area, improved cardiac contractile function, and supported recovery of damaged myocardium by promoting not only vascularization but also a significant level of cardiomyogenesis. We also showed that cells from a sheet can be recruited to the site of injury, driven at least partially by the stromal-derived factor (SDF-1) gradient.


Detailed methods are provided in the Supplementary Methods.


Three-month-old Sprague Dawley male rats were used. Rats were randomly placed into four groups:
(1) sham-operated rats, n = 12;
(2) MI, n = 12;
(3) MI treated with rat sheet, n = 10; and
(4) MI treated with human sheet, n = 10.

Myocardial infarction

MI was created by the ligation of the left coronary artery.20 Animals were intubated and ventilated using a small animal ventilator (Harvard Apparatus). A left thoracotomy was performed via the third intercostal rib, and the left coronary artery was ligated. The extent of infarct was verified by measuring the area at risk: heart was perfused with PBS containing 4 mg/mL Evans Blue as previously described by our laboratory.20 The area at risk was estimated by recording the size of the under-perfused (pale-colored) area of myocardium (see Supplementary material online, Figure S1). Only animals with an area at risk >30% were used in the present study. Post-mortem infarct size was measured using triphenyl tetrazolium chloride staining as previously described by our laboratory.20
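The >30% inclusion criterion is a simple area ratio; a minimal sketch of the calculation, assuming hypothetical planimetry pixel counts (only the 30% threshold comes from the text, all numbers and names are illustrative):

```python
def area_at_risk_fraction(pale_pixels, total_lv_pixels):
    """Fraction of the left ventricle that is under-perfused (pale) after Evans Blue."""
    return pale_pixels / total_lv_pixels

# Hypothetical planimetry counts from one Evans Blue-perfused heart image
frac = area_at_risk_fraction(pale_pixels=4200, total_lv_pixels=12000)  # 0.35
included = frac > 0.30  # study inclusion criterion: area at risk > 30%
```

An animal with 4,200 pale pixels out of 12,000 left-ventricle pixels would have a 35% area at risk and be included.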

Isolation of cardiosphere-forming cells

Cardiospheres were generated as described10 from atrial tissues obtained from:
(1) human atrial resection samples obtained from patients (aged from 53 to 73 years old) undergoing cardiac bypass surgery at Arizona Heart Hospital (Phoenix, AZ) in compliance with Institutional Review Board protocol (n = 10),
(2) 3-month-old SD rats (n = 10). Briefly, tissues were cut into 1–2 mm3 pieces and tissue fragments were cultured ‘as explants’ in a complete explants medium for 4 weeks (Supplementary Methods).
Cell sheet preparation, labelling, handling, and transplantation
Cardiosphere-forming cells (CFCs) combined with cardiac stromal cells were seeded on double-coated plates (poly-L-lysine and collagen type IV from human placenta) in cardiosphere growing medium (Supplementary Methods). The sheets created from the same cell donors were divided into two groups,
one for transplantation and the other for characterization by immunostaining and RT–PCR (Supplementary Methods).
Prior to transplantation, rat cell sheets were labelled with 2 mM 1,1′-dioctadecyl-3,3,3′,3′-tetramethylindocarbocyanine (DiI) for tracking transplanted cells in the rat host myocardium (Molecular Probes, Eugene, OR). Sheets created using human cells were transplanted unlabelled. Sheets were gently peeled off the collagen-coated plate and folded twice to form four layers. The entire sheet with 200 µl of media was
  • gently aspirated into the pipette tip,
  • transferred to the supporting polycarbonate filter (Costar) and
  • spread off by adding media drops on the sheet (Figure 2A).
Polycarbonate filter was used as a flexible mechanical support for cell sheet to facilitate handling during the transplantation. Immediately after LAD occlusion, the cell sheet was transplanted onto the infarcted area, allowed to adhere to the ventricle for 5–7 min, and the filter was removed before closing the chest (Figure 2A).

Cardiac function

Three weeks after MI, closed-chest in vivo cardiac function was measured using a Millar pressure conductance catheter system (Millar Instruments, Houston, TX) (Supplementary Methods).

Cell sheet survival, engraftment, and cell migration

Rat host myocardium and cell sheet composition after transplantation were characterized by immunostaining (Supplementary Methods). Rat-originated cells were traced by DiI, while human-originated cells were identified by immunostaining with anti-human nuclei or human lamin antibodies.
  1. To assess sheet-originated cardiomyocytes within the host myocardium, the number of cells positive for both human nuclei and myosin heavy chain (MHC) (human sheet); or both DiI and MHC (rat sheet) were counted.
  2. To assess sheet-originated capillaries within the rat host myocardium, the number of cells positive for both human nuclei and von Willebrand factor (vWf) (human sheet); or both DiI and vWf (rat sheet) were counted. Cells were counted in five microscopic fields within cell sheet and area of infarct (n = 5). The number of cells expressing specific markers was normalized to the total number of cells determined by 4′,6-diamidino-2-phenylindole staining of the nuclei DNA.
  3. To assess the survival of transplanted cells, sections were stained with Ki-67 antibody followed by fluorescent detection and caspase 3 primary antibodies followed by DAB detection (Supplementary Methods).
  4. To evaluate human sheet engraftment, sections were stained with human lamin antibody followed by fluorescent detection (Supplementary Methods).
  5. Rat host inflammatory response to the transplanted human cell sheet 21 days after transplantation was evaluated by counting tissue mononuclear phagocytes and neutrophils (Supplementary Methods).
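The marker quantification described above (double-positive cells normalized to DAPI-counted nuclei, averaged over five fields) amounts to a simple ratio; a minimal sketch with purely hypothetical counts:

```python
def percent_positive(double_positive, dapi_total):
    """Percent of marker double-positive cells, normalized to DAPI-counted nuclei
    and averaged across microscopic fields."""
    fractions = [p / t for p, t in zip(double_positive, dapi_total)]
    return 100 * sum(fractions) / len(fractions)

# Hypothetical counts from five fields: (human nuclei + MHC)-positive vs. DAPI nuclei
pct_cardiomyocytes = percent_positive([12, 15, 10, 14, 11],
                                      [100, 100, 100, 100, 100])  # 12.4 %
```

A result of 12.4% would fall within the 14.6 ± 4.7% cardiomyocyte differentiation range reported in the abstract.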


Images were captured using an Olympus IX70 confocal microscope (Olympus Corp, Tokyo, Japan) equipped with argon and krypton lasers, or an Olympus IX-51 epifluorescence microscope using excitation/emission maximum filters: 490/520 nm, 570/595 nm, and 355/465 nm. Images were processed using DP2-BSW software (Olympus Corp).


All data are represented as mean ± SE. Significance (P < 0.05) was determined using ANOVA (StatView).
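The reported summary statistic (mean ± SE) can be sketched as follows; the group values are hypothetical, and the study's actual significance testing was ANOVA performed in StatView:

```python
from statistics import mean, stdev

def mean_se(values):
    """Mean and standard error (SE = sample SD / sqrt(n)), the summary reported here."""
    return mean(values), stdev(values) / len(values) ** 0.5

# Hypothetical capillary-density measurements (per mm^2) for one group of n = 5 rats
m, se = mean_se([190, 210, 180, 200, 190])  # m = 194
```

A mean ± SE such as this would be written "194 ± 5 per mm2" in the paper's convention.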


Generation of cardiospheres

Cardiospheres were generated from atrial tissue explants. After 7–14 days in culture, a layer of stromal cells arose from the attached explants (Supplementary material online, Figure S2a). CFCs, small phase-bright single cells, emerged from explants and bedded down on the stromal cell layer (Supplementary material online, Figure S2b).
  • After 4 weeks, single CFCs, as well as cardiospheres (spherical colonies generated from CFCs) were observed (Supplementary material online, Figure S2c).
Cellular characteristics of cardiospheres in vitro
Immunocytochemical analysis of dissociated cardiospheres revealed that
  • 30% of cells were c-Kit+, indicating that the CFCs maintain multipotency. About
  • 22 and 28% of cells expressed α/β-MHC and cardiac troponin I, respectively.
These cells represent an immature cardiomyocyte population because they were smaller (10–15 µm in length vs. 60–80 µm for mature cardiomyocytes) and no organized structure of MHC was detected. Furthermore
  • 17% of the cells expressed α-smooth muscle actin (SMA) and
  • 6% were positive for vimentin,
    • both are mesenchymal cell markers (Supplementary material online, Figure S3a and b).
  • Less than 5% of cells were positive for the endothelial cell marker vWf.
Cell characteristics of human cardiospheres are similar to those from rat tissues (Supplementary material online, Figure S3c).
Cardiospheres were further characterized based on the expression of c-Kit antigen. RT–PCR analysis was performed on both c-Kit+ and c-Kit− subsets isolated from re-suspended cardiospheres. KDR, kinase domain protein receptor, was recently identified as a marker for cardiovascular lineage progenitors in differentiating embryonic stem cells.21 Here, we found that
  • the c-Kit+ cells were also Nkx2.5- and GATA4-positive, but were low or negative for KDR (Supplementary material online, Figure S3d). In contrast,
  • c-Kit− cells strongly expressed KDR and GATA4, but were negative for Nkx2.5.
  • Neither the c-Kit+ nor the c-Kit− subset expressed Isl1, a marker for multipotent secondary heart field progenitors.22
Characteristics of cell sheet prior to transplantation
The cell sheet is a layer of cardiac stromal cells in which the cardiospheres were incorporated at a frequency of 21 ± 0.5 spheres per 100,000 viable cells (Figure 1A). The average diameter of cardiospheres within a sheet was 0.13 ± 0.02 mm and their average area was 0.2 ± 0.06 mm2 (Figure 1A). After sheets were peeled off the plate, they exhibited a heterogeneous thickness ranging from 0.05–0.1 mm (n = 10). H&E staining (Figure 1B) and Masson’s Trichrome staining (Figure 1C) of the sheet sections revealed tissue-like organized structures composed of muscle tissue intertwined with streaks of collagen with no necrotic core. Based on the immunostaining results, the sheet comprised several cell types including
  • SMA+ cardiac stromal cells (50%),
  • MHC+ cardiomyocytes (20%), and
  • vWf+ endothelial cells (10%) (Figure 1D and E).
  • 15% of the sheet-forming cells were c-Kit+, suggesting the cells’ multipotency (Figure 1E).
  • Cells within the sheet expressed the gap-junction protein Cx43, an indicator of electromechanical coupling between cells (Figure 1D).
  • 40% of cells were positive for the proliferation marker Ki-67, suggesting an active cell cycle state (Figure 1D, middle panel).
Human sheet expressed genes
  1. known to be upregulated in undifferentiated cardiovascular progenitors such as c-Kit and KDR;
  2. cardiac transcription factors Nkx2.5 and GATA4; genes related to adhesion, cell homing, and
  3. migration such as ICAM (intercellular adhesion molecule), CXCR4 (receptor for SDF-1), and
  4. matrix metalloprotease 2 (MMP2).
No expression of Isl1 was detected in human sheet (Figure 1F).
Figure 1 Cell sheet characteristics. (A) Fully formed cell sheet. Arrow indicates integrated cardiosphere. (B) H&E staining; pink colour (arrowhead) indicates cytosol and blue (arrows) indicates nuclear stain. Note that there is no necrotic core within the cell sheet. (C) Masson’s Trichrome staining of sheet section. Arrowhead indicates collagen deposition within the sheet. (D and E) Sheet sections were labelled with antibodies against the following markers: (D) vWf (green), Ki-67 (green), Cx43 (green); (E) c-Kit (green), MHC (red), SMA (red) as indicated on top of each panel. Nuclei were labelled with blue fluorescence of 4′,6-diamidino-2-phenylindole (DAPI). (F) Gene expression analysis of the cell sheet. Scale bars, 200 µm (A) or 50 µm (B–E).

Cell sheet survival and proliferation

Two approaches were used to track transplanted cells in the host myocardium.
  • rat cell sheets were labelled with red fluorescent dye, DiI, prior to the transplantation.
  • the sheet created from human cells (human sheet) were identified in rat host myocardium by immunostaining with human nuclei antibodies.
DiI-labelling together with trichrome staining showed engraftment of the cardiosphere-derived cell sheet to the infarcted myocardium (Figure 2B–D). In vivo, sheets grew into a stratum of heterogeneous thickness ranging from 0.1–0.5 mm over native tissue. The percentage of Ki-67+ cells within the sheet was 37.5 ± 6.5% (Figure 2F), whereas host tissue was mostly negative (except for the vasculature).
To assess the viability of transplanted cells, the heart sections were stained with the apoptosis marker, caspase 3. A low level of caspase 3 was detected within the sheet, suggesting that the majority of transplanted cells survived after transplantation (Figure 2G).
Figure 2 Transplantation and growth of cell sheet after transplantation.
(A) Sheet transplantation onto infarcted heart. Detached cell sheet on six-well plate (left); cell sheet folded on filter (middle); and transplanted onto left ventricle (right). Scale bar 2 mm. DiI-labelled cell sheets grafted above MI area at day 3
(B) and day 21
(C) after transplantation.
(D) LV section of untreated MI rat at day 21 showing no significant red fluorescence background.
Bottom row (B–D) demonstrates the enlargement of box-selected area of corresponding top panels.
(E) Similar sections stained with Masson’s Trichrome. Section of rat (F) or human (G) sheet treated rat at day 21 after MI.
(F) Section was stained with antibody against Ki-67 (green). Cell sheet was pre-labelled with DiI (red). Nuclei stained with blue fluorescence of DAPI.
(G) Section was double stained with human nuclei (blue) and caspase 3 (brown, arrows) antibodies and counterstained with eosin.
Asterisks (**) indicate cell sheet area. Scale bars 200 µm (B–D, top row), 100 µm (B–D, bottom row, and E) or 50 µm (F, G).
Identification of inflammatory response
Twenty-one days after transplantation of human cell sheet, inflammatory response of rat host was examined. Transplantation of human sheet on infarcted rats reduced the number of mononuclear phagocytes (ED1-like positive cells) compared with untreated MI control (Supplementary material online, Figure S4a–e and l). In addition, the number of neutrophils was similar in both control untreated MI and sheet-treated sections (Supplementary material online, Figure S4f–k and m). These data suggest that at 21 days post transplantation, human cell sheet was not associated with significant infiltration of host immune cells.

Cell sheet engraftment and migration

Development of new vasculature was determined in cardiac tissue sections by co-localization of DiI labelling and vWf staining (Figure 3C). Three weeks after transplantation, the capillary density of ischaemic myocardium in the sheet-treated group significantly increased compared with MI animals (194 ± 20 vs. 97 ± 24 per mm2, P < 0.05, Figure 3A and B). The capillaries originating from the sheet ranged in diameter from 10 to 40 µm (n = 30). A gradient in capillary density was observed, with higher density in the sheet area that decreased towards the underlying infarcted myocardium. Mature blood vessels were identified within the sheet area and in the underlying myocardium in close proximity to the sheet, evident by vWf and SMA double staining (Figure 3D).
Figure 3 Neovascularization of infarcted wall. (A) Frozen tissue sections stained with vWf antibody (green). LV section of control (sham), infarcted (MI), and MI treated with cell sheet (sheet) rats. Scale bar, 100 µm. (B) Capillary density decreased in the MI compared with sham (*P < 0.05) and improved after cell sheet treatment (#P < 0.05). (C) Neovascularization within cell sheet area was recognized by co-localization of DiI (red) and vWf (green) staining. Scale bar 100 µm. (D) Mature blood vessels (arrows) were identified by co-localization of SMA (red) and vWf (green) staining. Scale bar 50 µm.
Furthermore, 3 weeks after transplantation, a large number of human nuclei-positive or DiI-labelled cells were detected deep within the infarcted area, indicating cell migration from the epicardial surface to the infarct (Figure 4A, B, and D). Minor or no migration was detected when the cell sheet was transplanted onto non-infarcted myocardium, the sham control (Figure 4C). To evaluate engraftment of sheet-originated cells, sections were labelled with anti-human nuclear lamin antibody. Quantification of engraftment was performed using two approaches: fluorescence intensity and cell counting. Fluorescence intensity of the signal was analysed and compared for different areas of myocardium (Figure 4E–J). Since the transplanted sheets are created from human cells and are stained with human nuclear lamin labelled with green fluorescence, the signal intensity of the sheet is set to 100% (100% of cells are lamin-positive). Myocardial area with no or a limited number of labelled cells had the lowest level of fluorescence signal (13%, or 3.2 ± 1.4% of the total number of cells), while
  1. the area where the cells migrated from the sheet to the infarcted myocardium had higher signal intensity (47%, or 11.9 ± 1.7% of the total number of cells), indicating a higher number of sheet-originated cells engrafted in the infarcted area (Figure 4K and L).
  2. Migrated cells were positive for KDR (Supplementary material online, Figure S5).
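The two engraftment measures described above can be sketched as small helper functions; all numbers below are illustrative, not the study's data:

```python
def normalized_intensity(roi_total_signal, background_mean, roi_area):
    """Mean ROI fluorescence minus background, normalized to ROI area (arbitrary units)."""
    return roi_total_signal / roi_area - background_mean

def percent_engraftment(lamin_positive, total_cells):
    """Lamin-positive cells as a percentage of all cells in the ROI."""
    return 100 * lamin_positive / total_cells

# Hypothetical values: total signal 50,000 a.u. over a 1,000-unit ROI, background 5 a.u.
intensity = normalized_intensity(50_000, 5.0, 1_000)  # 45.0 a.u.
engraftment = percent_engraftment(119, 1_000)         # 11.9 %
```

With 119 lamin-positive cells in a 1,000-cell ROI, percent engraftment is 11.9%, the same scale as the 11.9 ± 1.7% reported for the infarcted area.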
Figure 4 Engraftment quantification of cells migrated from the sheet into the infarcted area of MI. Animals were treated with rat (A) or human (B–F) sheets. Cardiomyocytes were labelled with MHC antibody (A, green or B, red). Rat sheet-originated cells were identified with DiI-labelling, red (A). Arrows indicate the track of migrating cells. Human sheet-originated cells were identified by immunostaining with human nuclei antibody followed by secondary antibodies conjugated with either Alexa 488 (B, E and F, green) or AP (C, D, blue). No migration was detected when the cell sheet was transplanted onto non-infarcted myocardium (C). Heart sections were counterstained with eosin, pink (C–D). Higher magnification of area selected in the box is presented (D, right). Immunofluorescence of sheet (green) grafted to the myocardium surface (E) or cells migrated to the infarction area (F). Fluorescence profiles across the cell sheet itself (G, box 1), area underlying cell sheet (I, box 2) and infarction area with migrated cells (F, box 3). Mean fluorescence intensity of the grafted human (K) cells was determined by outlining the region of interest (ROI) and subtracting the background fluorescence for the same region. Fluorescence intensity was normalized to the area of ROI (n = 6). (L) Percent engraftment was defined as number of lamin-positive cells divided by total number of cells per ROI. ‘M’, myocardium, ‘S’ sheet, ‘I’ infarction. Scale bars 100 µm (A–C, D, left, E and F), or 50 µm (D, right).
To elucidate a possible mechanism of cell migration, sections were stained to detect SDF1 and its unique receptor CXCR4. The migration patterns of cells from the sheet coincided with SDF-1 expression. Within 3 days after MI, SDF-1 was expressed in the injured myocardium (Figure 5A). At 3 weeks after MI and sheet transplantation, SDF-1 was co-localized with the migrated labelled cells (Figure 5B). PCR analysis revealed CXCR4 expression in cell sheet before transplantation (Figure 1F). However, after transplantation only a fraction of migrated cells expressed CXCR4 (Figure 5C).
Figure 5 Migration of sheet-originated cells into the infarcted area. Confocal images of MI animals treated with sheets from rats (A and B) or human (C). SDF-1 (green) was detected at the border zone of the infarct at day 3 (A) and day 21 (B). Rat sheet-originated cells were identified with DiI-labelling (red). Note co-localization of DiI-positive sheet-originated cells with SDF-1 at 21 days after MI (B). Human cells were identified by immunostaining with human nuclei antibody, red (C). Note that human cells that migrated to the area of infarct express CXCR4 (green) (C). Scale bar, 200 µm (A, B) or 50 µm (C). ‘M’, myocardium, ‘S’ sheet, ‘I’ infarct.

Cardiac regeneration

The differentiation of migrating cells into cardiomyocytes was evident by the co-localization of MHC staining with either human nuclei (Figure 6A) or DiI (Figure 6B and C). In contrast to the immature cardiomyocyte-like cells within the pre-transplanted cell sheet, the migrated and newly differentiated cells within the myocardium were about 30–50 µm in size and co-expressed Cx43 (see Supplementary material online, Figure S6). Cardiomyogenesis within the infarcted myocardium was observed with the sheets created from either rat or human cells.
Figure 6 Cardiac regeneration. Sections of MI animals treated with human (A) or rat (B, C) sheets. Human sheet was identified by immunostaining with human nuclei antibody (green). Section was double-stained with MHC (red) antibody. Newly formed cardiomyocytes were identified by co-localization of human nuclei and MHC (yellow, arrow). (B) Rat sheet-originated cells were identified by DiI labelling (red). Section was double-stained with MHC (green) antibody. Newly formed cardiomyocytes were detected by co-localization of DiI with MHC (yellow, arrows). (C) Higher magnification of area selected in the boxes (B). Scale bars 200 µm (B), or 20 µm (A, C). ‘M’, myocardium, ‘S’ sheet, ‘I’ infarct.

Cell sheet improved cardiac contractile function and retarded LV remodelling after MI

Closed-chest in vivo cardiac function was derived from left ventricular (LV) pressure–volume loops (PV loops), which were measured using a solid-state Millar conductance catheter system. MI resulted in a characteristic decline in LV systolic parameters and an increase in diastolic parameters (Table 1). Cell sheet treatment improved both systolic and diastolic parameters (Table 1). Specifically, the load-dependent parameters of systolic function, ejection fraction (EF), dP/dTmax, and cardiac index (CI), were decreased in MI rats and increased towards sham control with the cell sheet treatment (Table 1). Diastolic function parameters, dP/dTmin, relaxation constant (Tau), EDV, and EDP, were increased in the MI rats and returned towards sham control values after sheet treatment (Table 1). However, the load-independent systolic function parameter, Emax, was decreased after MI; treatment with human sheet improved Emax, while treatment with rat sheet had no effect (Table 1). Treatment with either rat or human sheets retarded LV remodelling, increasing the ratio of anterolateral wall thickness/LV inner diameter (t/Di) and wall thickness/LV outer diameter (t/Do) (see Supplementary material online, Table S3). However, human sheets appear to further improve LV remodelling compared with rat sheets, as indicated by an increased ratio of wall thickness to ventricular diameter and decreases in both EDV and EDP (Table 1 and Supplementary material online, Table S3).
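Of the parameters in Table 1, ejection fraction is the simplest to make explicit; a minimal sketch using the standard definition EF = (EDV − ESV)/EDV, with hypothetical volumes (the study reports EF directly from the PV-loop system):

```python
def ejection_fraction(edv, esv):
    """Ejection fraction (%) from end-diastolic (EDV) and end-systolic (ESV)
    volumes taken off the pressure-volume loop."""
    return 100 * (edv - esv) / edv

# Hypothetical rat LV volumes in microliters
ef = ejection_fraction(edv=500, esv=300)  # 40.0 %
```

A heart ejecting 200 of 500 µl each beat has an EF of 40%; a decline in EF after MI and a recovery towards sham values with sheet treatment is the pattern Table 1 describes.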
Table 1 Hemodynamic parameters


The majority of the cardiac progenitor cells delivered using our scaffold-free cell sheet survived after transplantation onto the infarcted heart. A significant percentage of transplanted cells migrated from the cell sheet to the site of infarction and differentiated into cardiomyocytes and vasculature, improving cardiac contractile function and retarding LV remodelling. Thus, delivery of cardiac progenitor cells together with cardiac mesenchymal cells in the form of a scaffold-free cell sheet is an effective approach for cardiac regeneration after MI.
Consistent with previous studies,5,11 here we showed that cardiospheres are composed of multipotent precursors, which have the capacity to differentiate into cardiomyocytes and other cardiac cell types. When we fractionated cardiospheres based on c-Kit expression, we identified two subsets, Kit+/KDR−/low/Nkx2.5+ and Kit−/KDR+/Nkx2.5− (Supplementary material online, Figure S3d), which likely reflect cardiac and vascular progenitors.20
In the present study, delivery of cardiac progenitor cells as a cell sheet facilitated cell survival after transplantation. Necrotic cores, commonly observed in tissue-engineered patches,23,24 were absent in cardiosphere sheets prior to transplantation (Figure 1B and C). Poor cell survival is caused by multiple processes, such as ischemia from the lack of vasculature and anoikis due to cell detachment from substrate.25 A possible mechanism of cell survival within the sheet is the induction of neo-vessels soon after transplantation due to the presence of endothelial cells within the sheet before transplantation (Figure 10). The cell sheet continued to grow in vivo (Figure 2B and C), suppressed cardiac wall thinning, and prevented LV remodelling at 21 days after transplantation (see Supplementary material online, Table S3). This may be due to the induction of neovascularization (Figure 3), which may prevent ischemia-induced cell death (Figure 2G). Another likely mechanism of cell survival is that the cells within the scaffold-free sheet maintained cell-to-cell adhesion,16 as shown by ICAM expression (Figure 1F). The cells also exhibit Cx43-positive junctions (Figure 10; see Supplementary material online, Figure S6), which may facilitate electromechanical coupling between the transplanted cells and the native myocardium.
We observed cell migration from the sheet to the infarcted myocardium (Figure 4A, B, E and F), which may be facilitated by the strong expression of MMP2 in the cell sheet (Figure 1F). Although the mechanism of cardiac progenitor cell migration remains unclear, previous observations showed that SDF-1 is upregulated after MI and plays a role in bone-marrow and cardiac stem cell migration.26,27 Our data suggest that the SDF-1–CXCR4 axis plays a role, at least in part, in cardiac progenitor cell migration from the cell sheet to the infarcted myocardium. This conclusion is based on the following observations: (1) the cell sheet expresses CXCR4 prior to transplantation (Figure 1F), (2) migrated cells are located in the vicinity of SDF-1 release (Figure 5A and B), and (3) about 20% of migrated cells expressed CXCR4. Note that not all migrated cells expressed CXCR4, suggesting that other mechanisms are also involved in cell migration (Figure 5C).
Here we report that implanting a cardiosphere-generated cell sheet onto infarcted myocardium not only improved vascularization but also promoted cardiogenesis within the infarcted area (Figure 6). A larger number of newly formed cardiomyocytes were found deep within the infarct than at the cell sheet periphery. Notably, transplantation of the cell sheet resulted in a significant improvement of cardiac contractile function after MI, as shown by an increase in EF and a decrease in LV end-diastolic pressure (Table 1).
The beneficial effect of the cell sheet is due, in part, to the presence of a large number of activated cardiac mesenchymal stromal cells (myofibroblasts) within the sheet. Myofibroblasts are known to provide mechanical support for grafted cells, facilitating contraction,28 and to induce neovascularization through the release of cytokines.17 In addition, mesenchymal cells are uniquely immunotolerant. In xenograft models, unmatched mesenchymal cells transplanted into the hearts of immunocompetent rats were shown to suppress the host immune response,29 presumably due to inhibition of T-cell activation.30 Consistent with a previous study from our laboratory,31 here we demonstrated host tolerance to the cell sheet 21 days after MI. Finally, phase II and III clinical trials are currently under way in which allogeneic MSCs are used to treat MI in patients (Osiris Therapeutics, Inc.).
In summary, our results show that cardiac progenitor cells can be delivered as a cell sheet composed of a layer of cardiac stromal cells impregnated with cardiospheres. After transplantation, cells from the cell sheet migrated to the infarct, driven in part by the SDF-1 gradient, and differentiated into cardiomyocytes and vasculature. Transplantation of the cell sheet was associated with prevention of LV remodelling, reconstitution of cardiac mass, reversal of wall thinning, and significant improvement in cardiac contractile function after MI. Our data also suggest that strategies that utilize undigested cells, intact cell–cell interactions, and combined cell types, such as our scaffold-free cell sheet, should be considered in designing effective cell therapies.


Fuchs JR, Nasseri BA, Vacanti JP, Fauza DO. Postnatal myocardial augmentation with skeletal myoblast-based fetal tissue engineering. Surgery 2006;140:100–107.
Orlic D, Kajstura J, Chimenti S, Bodine DM, Leri A, Anversa P. Bone marrow stem cells regenerate infarcted myocardium. Pediatr Transplant 2003;7(Suppl. 3):86–88.
Kawamoto A, Tkebuchava T, Yamaguchi J, Nishimura H, Yoon YS, Milliken C et al. Intramyocardial transplantation of autologous endothelial progenitor cells for therapeutic neovascularization of myocardial ischemia. Circulation 2003;107:461–468.
Iwasaki H, Kawamoto A, Ishikawa M, Oyamada A, Nakamori S, Nishimura H et al. Dose-dependent contribution of CD34-positive cell transplantation to concurrent vasculogenesis and cardiomyogenesis for functional regenerative recovery after myocardial infarction. Circulation 2006;113:1311–1325.
Beltrami AP, Barlucchi L, Torella D, Baker M, Limana F, Chimenti S et al. Adult cardiac stem cells are multipotent and support myocardial regeneration. Cell 2003;114: 763–776.
Oh H, Bradfute SB, Gallardo TD, Nakamura T, Gaussin V, Mishina Y et al. Cardiac progenitor cells from adult myocardium: homing, differentiation, and fusion after infarction. Proc Natl Acad Sci USA 2003;100:12313–12318.
Laugwitz KL, Moretti A, Lam J, Gruber P, Chen Y, Woodard S et al. Postnatal isl1+ cardioblasts enter fully differentiated cardiomyocyte lineages. Nature 2005;433: 647–653.
Pfister O, Mouquet F, Jain M, Summer R, Helmes M, Fine A et al. CD31- but Not CD31+ cardiac side population cells exhibit functional cardiomyogenic differentiation. Circ Res 2005;97:52–61.
Dawn B, Stein AB, Urbanek K, Rota M, Whang B, Rastaldo R et al. Cardiac stem cells delivered intravascularly traverse the vessel barrier, regenerate infarcted myocardium, and improve cardiac function. Proc Natl Acad Sci USA 2005;102:3766–3771.



Author: Tilda Barliya PhD

Photoacoustic Tomography (PAT), also called optoacoustic or thermoacoustic (TA) tomography, is an imaging technique based on the reconstruction of an internal photoacoustic source distribution from measurements acquired by scanning ultrasound detectors over a surface that encloses the source under study. It is non-ionizing and non-invasive, and is among the fastest-growing new biomedical imaging methods, with clinical applications on the way.

Dr. Lihong Wang, a Distinguished Professor of Biomedical Engineering in the School of Engineering and Applied Science at Washington University in St. Louis, summarizes the state of the art in photoacoustic imaging (1).

The photoacoustic (PA) effect:

The fundamental principle of the PA effect can be described simply: an object absorbs electromagnetic (EM) radiation, the absorbed energy is converted into heat, and the temperature of the object rises. As the temperature increases, thermal expansion takes place, generating acoustic pressure in the surrounding medium. However, steady thermal expansion (time-invariant heating) does not generate acoustic waves; the heating source must therefore be time-variant.
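The magnitude of the effect can be sketched with the standard photoacoustic relation p0 = Γ · μa · F (Grüneisen parameter × optical absorption coefficient × local laser fluence). The numbers below are typical illustrative soft-tissue values, not measurements from the article.

```python
# Initial photoacoustic pressure rise: p0 = Gamma * mu_a * F.
# Gamma: dimensionless Grueneisen parameter; mu_a: optical absorption
# coefficient (1/cm); F: local fluence (J/cm^2). 1 J/cm^3 equals 1e6 Pa.

def initial_pressure_pa(gamma, mu_a_per_cm, fluence_j_per_cm2):
    return gamma * mu_a_per_cm * fluence_j_per_cm2 * 1e6

# Illustrative soft-tissue numbers: Gamma ~ 0.2, mu_a ~ 0.1 /cm, and a
# fluence of ~20 mJ/cm^2 (on the order of the ANSI skin exposure limit).
p0 = initial_pressure_pa(0.2, 0.1, 0.020)
print(f"p0 = {p0:.0f} Pa")  # a few hundred pascals
```

A pressure jump of this size launches a detectable ultrasound transient, which is why a nanosecond pulse (a strongly time-variant source) is used.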

Dr. Wang explains that “the trick of photoacoustic tomography is to convert light absorbed at depth to sound waves, which scatter a thousand times less than light, for transmission back to the surface. The tissue to be imaged is irradiated by a nanosecond-pulsed laser at an optical wavelength”.

Absorption of light by molecules beneath the surface creates a thermally induced pressure jump that launches sound waves; these are measured by ultrasound receivers at the surface and reassembled to create what is, in effect, a photograph.
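The "reassembly" step can be illustrated with a toy delay-and-sum reconstruction: each pixel sums the detector samples recorded at the corresponding acoustic time-of-flight. This is a simplified sketch of the idea, not a production PAT algorithm, and the geometry below is invented for the demo.

```python
import numpy as np

C = 1.5  # speed of sound in soft tissue, mm/us

def delay_and_sum(signals, det_pos, times, grid):
    """Sum each detector's sample at the time-of-flight to every pixel.
    signals: (n_det, n_t); det_pos: (n_det, 2) in mm; times: (n_t,) in us;
    grid: (n_pix, 2) pixel coordinates in mm."""
    image = np.zeros(len(grid))
    for s, p in zip(signals, det_pos):
        dist = np.linalg.norm(grid - p, axis=1)   # mm from detector to pixels
        image += np.interp(dist / C, times, s)    # pick sample at arrival time
    return image

# Demo: a point absorber at the origin viewed by a ring of 8 detectors.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
det_pos = 10.0 * np.column_stack([np.cos(angles), np.sin(angles)])
times = np.arange(0.0, 10.0, 0.02)
arrival = np.linalg.norm(det_pos, axis=1) / C     # time-of-flight from (0, 0)
signals = np.exp(-((times[None, :] - arrival[:, None]) / 0.05) ** 2)

xs = np.arange(-2.0, 2.01, 0.5)
grid = np.array([[x, y] for x in xs for y in xs])
image = delay_and_sum(signals, det_pos, times, grid)
print(grid[np.argmax(image)])  # brightest pixel sits at the source, (0, 0)
```

Real systems use more sophisticated back-projection or model-based inversion, but the core idea of mapping arrival times back to source locations is the same.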

Compared with other modalities, PAT has several great advantages:

Table 1 Comparison of imaging modalities.
Dr. Wang is already working with physicians at the Washington University School of Medicine to move four applications of photoacoustic tomography into clinical trials (2).

  • One is to visualize the sentinel lymph nodes that are important in breast cancer staging;
  • A second is to monitor early response to chemotherapy;
  • A third is to image melanomas;
  • The fourth is to image the gastrointestinal tract.

Sentinel node biopsy provides a good example of the improvement photoacoustic imaging promises over current imaging practice. Sentinel nodes are the nodes nearest a tumor, such as a breast tumor, to which cancerous cells would first migrate.

Currently, sentinel node biopsy involves injection of a radioactive substance, a dye, or both near the tumor. The body treats both substances as foreign, so they flow to the first draining node to be filtered and flushed from the body. A gamma probe or Geiger counter is used to locate the radioactive particles, and the surgeon must cut open the area and follow the dye visually to the sentinel lymph node.

Dr. Wang, however, offers a simpler method: injecting an optical dye that shows up so clearly in photoacoustic images that a hollow needle can be guided directly to the sentinel lymph node and a tissue sample taken through the needle.

Contrast agents:

Most photoacoustic (PA) contrast agents are designed to absorb laser light, especially in the near-infrared (NIR) spectral range. However, RF contrast agents are also desirable due to the superior penetration depth of RF in the body (1). A typical example is indocyanine green (ICG), an FDA-approved dye. ICG absorbs strongly in the NIR spectral region, and it has already been shown to increase the PA signal when injected into blood vessels. More recently, methylene blue was used as the contrast agent to detect the sentinel lymph node (SLN) (4).

Compared with dyes, nanoparticles possess a strong, tunable absorption spectrum and a longer circulation time (1). The absorption peak can be tuned by changing the shape and size of the particle. In addition, nanoparticles can be targeted to specific diseases by bio-conjugating them with proteins such as antibodies. Among the different nanoparticles, gold nanoparticles are favored in optical imaging due to their exceptional optical properties in the visible and NIR spectral ranges, including scattering, absorption, and photoluminescence. So far, however, no gold nanoparticles have been approved by the FDA (1).

One exciting aspect of photoacoustic tomography is that its images contain functional as well as structural information, because color reflects chemical composition and chemistry determines function. Photoacoustic tomography can, for example, detect the oxygen saturation of hemoglobin, which is bright red when carrying oxygen and turns darker red when it releases it (3). This is important because almost all diseases, especially cancer and diabetes, cause abnormal oxygen metabolism. For an example, see Image 1.
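The sO2 estimate rests on the different absorption spectra of oxy- and deoxyhemoglobin: with absorption measured at two wavelengths, the two concentrations can be unmixed linearly. The sketch below uses approximate tabulated molar extinction coefficients and synthetic absorption values, purely for illustration.

```python
import numpy as np

# Approximate molar extinction coefficients (cm^-1 / M) of oxyhemoglobin
# (HbO2) and deoxyhemoglobin (Hb); rows are the two laser wavelengths.
E = np.array([[518.0, 1405.0],    # 750 nm: HbO2, Hb
              [1058.0, 691.0]])   # 850 nm: HbO2, Hb

# Synthetic absorption "measurements" for known concentrations (sO2 = 0.80).
c_true = np.array([80e-6, 20e-6])  # mol/L of HbO2 and Hb (hypothetical)
mu_a = E @ c_true                  # absorption at each wavelength

# Linear unmixing: solve E @ c = mu_a for the two concentrations, then
# sO2 = [HbO2] / ([HbO2] + [Hb]).
c = np.linalg.solve(E, mu_a)
so2 = c[0] / c.sum()
print(f"sO2 = {so2:.2f}")  # recovers 0.80
```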

Image courtesy of Junjie Yao/Lihong Wang

Image 1: melanoma tumor (MT) cells were injected into a mouse ear on day 1. By day 7, there were noticeable changes in the blood flow rate (top graph, right) and the metabolic rate of oxygen usage (bottom graph, right). Counterintuitively, the tumor did not increase the oxygen extraction fraction (middle graph). The colors correspond to depth, with blue being superficial and red deep (3).

Wang’s team demonstrated that oxygen metabolism betrayed the presence of a melanoma within a few days of injection in animal models: oxygen use doubled within a week.

Photoacoustic images can thus offer several parameters:

  • Vessel cross-section,
  • Hemoglobin concentration and blood flow speed,
  • The gradient of oxygen saturation,
which together can be used to calculate the oxygen use by a region of tissue.
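The quantities in the list above combine, via Fick's principle, into a metabolic rate of oxygen (MRO2): oxygen delivered minus oxygen leaving a region. The sketch below uses a textbook relation and hypothetical values; it is an illustration, not the actual analysis pipeline used by Wang's group.

```python
# MRO2 via Fick's principle: (O2-binding capacity of Hb) x (Hb concentration)
# x (blood flow) x (sO2 drop across the region). All values are hypothetical.

O2_PER_GRAM_HB = 1.36  # mL of O2 bound per gram of hemoglobin

def metabolic_rate_o2(hb_g_per_ml, area_cm2, speed_cm_per_s, so2_in, so2_out):
    flow_ml_per_s = area_cm2 * speed_cm_per_s   # vessel cross-section x speed
    return O2_PER_GRAM_HB * hb_g_per_ml * flow_ml_per_s * (so2_in - so2_out)

mro2 = metabolic_rate_o2(hb_g_per_ml=0.15, area_cm2=1e-4,
                         speed_cm_per_s=0.5, so2_in=0.98, so2_out=0.75)
print(f"MRO2 = {mro2:.2e} mL O2/s")
```

Because PAT images every factor in this product (vessel geometry, hemoglobin concentration, flow, and sO2), it can estimate MRO2 without a radioactive tracer.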

Analysis of oxygen use is not new; it is frequently measured by positron emission tomography (PET), which requires the injection or inhalation of a radioactively labeled tracer and entails undesirable radiation exposure.

Photoacoustic Tomography is currently being investigated for (5):

  1. Breast cancer (microvasculature); for further information, see the article by Dr. Venkat Karra (I).
  2. Skin cancer (melanin)
  3. Brain tumors
  4. Cardiac disease – myocardial infarction (6)
  5. Ophthalmology – retinal disease (7)
  6. Osteoarthritis (8)


Photoacoustic tomography perfectly complements other biomedical imaging modalities by providing unique optical absorption contrast with highly scalable spatial resolution, penetration depth, and imaging speed. In light of its capabilities and flexibility, PAT is expected to play an increasingly essential role in biomedical studies and clinical practice.


1.  Changhui Li and Lihong V Wang. Photoacoustic tomography and sensing in biomedicine. Phys. Med. Biol. 2009 54 R59 doi:10.1088/0031-9155/54/19/R01  http://iopscience.iop.org/0031-9155/54/19/R01 http://iopscience.iop.org/0031-9155/54/19/R01/pdf/0031-9155_54_19_R01.pdf

2. Jiecheny Yin. Photoacoustic tomography in cancer detection. http://bme240.eng.uci.edu/students/08s/jiecheny/index.htm

3. Jim Goodwin. NEW IMAGING TECHNIQUE COULD SPEED CANCER DETECTION. http://www.siteman.wustl.edu/ContentPage.aspx?id=5788

4.  Song K H, Stein E W, Margenthaler J A and Wang L V. Noninvasive photoacoustic identification of sentinel lymph nodes containing methylene blue in vivo in a rat model J. Biomed. Opt. 2008: 13 054033–6.  http://oilab.seas.wustl.edu/epub/SongK_2008_J_Biomed_Opt_13_054033.pdf

5. Junjie Yao and Lihong V Wang.  Photoacoustic tomography: fundamentals, advances and prospects. Contrast Media Mol Imaging. 2011 September; 6(5): 332–345. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3205414/

6. Holotta M, Grossauer H, Kremser C, Torbica P, Völkl J, Degenhart G, Esterhammer R, Nuster R, Paltauf G, Jaschke W. Photoacoustic tomography of ex vivo mouse hearts with myocardial infarction. J Biomed Opt. 2011 Mar;16(3):036007. doi: 10.1117/1.3556720. http://www.ncbi.nlm.nih.gov/pubmed/21456870

7. Hao F. Zhang, Carmen A. Puliafito, and Shuliang Jiao. Photoacoustic Ophthalmoscopy for In Vivo Retinal Imaging: Current Status and Prospects. Ophthalmic Surg Lasers Imaging. 2011 July; 42(0): S106–S115. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3291958/

8. Yao Sun, Eric S. Sobel, and Huabei Jiang. First assessment of three-dimensional quantitative photoacoustic tomography for in vivo detection of osteoarthritis in the finger joints.  Med. Phys. 38, 4009 (2011); http://dx.doi.org/10.1118/1.3598113 . http://online.medphys.org/resource/1/mphya6/v38/i7/p4009_s1?isAuthorized=no

Other articles from our Open Access Journal:

I. By : Venkat Karra. Visualizing breast cancer without X-rays. https://pharmaceuticalintelligence.com/2012/05/08/visualizing-breast-cancer-without-x-rays/

II. By: Dr. Dror Nir. Ultrasound in Radiology – Results of a European Survey. https://pharmaceuticalintelligence.com/2013/07/21/ultrasound-in-radiology-results-of-a-european-survey/

III.  By: Dr. Dror Nir. Causes and imaging features of false positives and false negatives on 18F-PET/CT in oncologic imaging. https://pharmaceuticalintelligence.com/2013/05/18/causes-and-imaging-features-of-false-positives-and-false-negatives-on-18f-petct-in-oncologic-imaging/

