

Reporter: Stephen J. Williams, PhD

10:00-10:45 AM The Davids vs. the Cancer Goliath Part 1

Startups from diagnostics, biopharma, medtech, digital health and emerging tech will have 8 minutes to articulate their visions on how they aim to tame the beast.

Start Time End Time Company
10:00 10:08 Belong.Life
10:09 10:17 Care+Wear
10:18 10:26 OncoPower
10:27 10:35 PolyAurum LLC
10:36 10:44 Seeker Health

Speakers:
Karthik Koduru, MD, Co-Founder and Chief Oncologist, OncoPower
Eliran Malki, Co-Founder and CEO, Belong.Life
Chaitenya Razdan, Co-founder and CEO, Care+Wear @_crazdan
Debra Shipley Travers, President & CEO, PolyAurum LLC @polyaurum
Sandra Shpilberg, Founder and CEO, Seeker Health @sandrashpilberg

Belong.Life

  • 10,000 cancer patients a month use the Belong App to navigate their cancer care
  • the Belong ecosystem includes all their practitioners and uses trigger-based content delivery (posts, articles, etc.)
  • most importantly, it takes unstructured health data (images, social activity, patient compliance) and converts it to structured data

Care+Wear

personally designs PICC line covers for oncology patients

partners include the NBA, Major League Baseball, and Oscar de la Renta

designs easy-access PICC line gowns and shirts

OncoPower: Digital Health in a Blockchain Ecosystem

identified problems associated with patient adherence and developed a product to address them

  1. OncoPower Blockchain: HIPAA compliant, using the OncoPower security token to incentivize patients and oncologists to consult with each other, or oncologists to consult with tumor boards; this is not an initial coin offering

PolyAurum

  • spinout from UPenn; developing a nanoparticle-based radiation therapy; a glioblastoma mouse model showed a strong response to gold-based nanoparticles plus radiation
  • they see enhanced tumor penetration and retention of the gold nanoparticles
  • however, most nanoparticles need to be larger than 5 nm to see an effect, so they used a polymer-based particle; they see good uptake but excretion past a week, so re-dosing with Au nanoparticles is needed
  • they are looking for capital and expect to start trials in 2020

Seeker Health

  • trying to improve the efficiency of clinical trial enrollment
  • using social networks to find patients to enroll in clinical trials
  • steps they use: 1) find patients on Facebook, Google, Twitter; 2) engage with a patient screener; 3) screening at clinical sites
  • Seeker Portal is a patient management system: patients referred to a clinical site can now be tracked

11:00–11:45 AM Breakout: How to Scale Precision Medicine

The potential for precision medicine is real, but is limited by access to patient datasets. How are government entities, hospitals and startups bringing the promise of precision medicine to the masses of oncology patients?

Moderator: Sandeep Burugupalli, Senior Manager, Real World Data Innovation, Pfizer @sandeepburug
Speakers:
Ingo ​Chakravarty, President and CEO, Navican @IngoChakravarty
Eugean Jiwanmall, Senior Research Analyst for Medical Policy & Technology Evaluation, Independence Blue Cross @IBX
Andrew Norden, M.D., Chief Medical Officer, Cota @ANordenMD
Ankur Parikh, M.D., Medical Director of Precision Medicine, Cancer Treatment Centers of America @CancerCenter

Ingo: data is not ordered, only half of patients are tracked in some database, and reimbursement is a challenge

Eugean: identifying mutations as patients get more comprehensive genomic coverage; clinical trials are expanding more rapidly, as seen at the 2018 ASCO

Ingo: general principles related to health outcomes or policy or reimbursement: human studies are paramount, but payers may not allow for general principles (i.e., an ALK mutation in lung cancer with crizotinib treatment may be covered, but maybe not for glioblastoma or another cancer containing a similar ALK mutation; payers still depend on clinical trial results)

Andrew: using gene panels and NGS but only want to look for actionable targets; they establish an expert panel which reviews these NGS sequence results to determine actionable mutations

Ankur: they have molecular tumor boards, but still, if they want to prescribe off-label and can't find a clinical trial, there is no reimbursement

Andrew: going beyond actionable mutations: although many are doing WES (whole exome sequencing), can we use machine learning to see if there are actionable data in a WES?

Ingo: what we forget in datasets is that patients have needs today, and we need those payment systems and structures today

Eugean: the problem starts from cost (what the cost starts at and whether it was truly medically necessary)

Norden: there is not enough data sharing to make a decision; it takes an enormous amount of effort to get past business and technical limitations in data sharing; possibly policies need to be put in place to assimilate datasets and promote collaborations

Ingo: need to take out the middlemen between sequencing of a patient's tumor and the treatment decision; the middlemen are taking value out of the 'supply chain'

Andrew: PATIENTS DON’T OWN their DATA but MOST clinicians agree THEY SHOULD

Ankur: patients are willing to share data, but HIPAA compliance is a barrier

 

11:50 AM–12:30 PM Fireside Chat with Michael Pellini, M.D.

Building a Precision Medicine Business from the Ground Up: An Operating and Venture Perspective

Dr. Pellini has spent more than 20 years working on the operating side of four companies, each of which has pushed the boundaries of the standard of care. He will describe his most recent experience at Foundation Medicine, at the forefront of precision medicine, and how that experience can be leveraged on the venture side, where he now evaluates new healthcare technologies.

Speaker:
Michael Pellini, M.D., Managing Partner, Section 32 and Chairman, Foundation Medicine @MichaelPellini

Roche just bought Foundation Medicine for $2.5 billion.  They negotiated over 7 months and, critics aside, felt it was a great deal because it gives them, as a diagnostics venture, international reach and biotech expertise.  Foundation Medicine offered Roche expertise in the diagnostics space, including the ability to navigate the payer and regulatory aspects of the diagnostics business.  He feels it benefits all aspects of patient care and the work they do with other companies.

Moderator: Roche is doing multiple deals to ‘own’ a disease state.

Dr. Pellini:  Roche is closing a deal with Flatiron, just as Merck closed deals with genomics companies.  He feels it is best to build the best company on a stand-alone basis and provide for patients; then good things will happen.  However, the problem with achieving scale for precision medicine is reimbursement by payers.  They still have to keep collecting data and evolving services to suit pharma.  They didn't know if their model would work, but when he met with the FDA in 2011, they said: collect the data and we will keep working with you.

However, the payers aren't contributing to the effort.  They need to assist some of the young companies that can't raise the billion dollars needed for all the evidence that payers require.  Precision medicine still has problems, even though these companies have collected tremendous amounts of data and raised significant money.  From the private payer perspective there is no clear roadmap for success.

They recognized that the payers would be difficult, but they had a plan; he won't invest in companies that don't have a plan for getting reimbursement from payers.

Moderator: What is Section 32?

Pellini:  Their investment arm invests across the spectrum of precision healthcare companies, including tech companies.  They started with a digital pathology imaging system that went from looking through a scope to looking at a monitor, with software integrated with medical records.  Section 32 has $130 million under management and may go to $400 million, but they want to stay small.

Pellini: we get 4-5 AI pitches a week.

Moderator: Are you interested in companion diagnostics?

Pellini:  There may be 24 drug approvals expected in 2018, and 35% of them have a companion diagnostic (CDx) with them.  However, going out ten years, 70% may have a CDx associated with them.  Payers need to work with companies to figure out how to pay for these CDxs.

 

 




Current Advances in Medical Technology

Larry H. Bernstein, MD, FCAP, Curator

LPBI

Pumpkin-Shaped Molecule Enables 100-Fold Improved MRI Contrast

Tue, 10/13/2015 – 9:16 am, by Forschungsverbund Berlin e.V. (FVB)

http://www.mdtmag.com/news/2015/10/pumpkin-shaped-molecule-enables-100-fold-improved-mri-contrast

Assuming that we could visualize pathological processes such as cancer at a very early stage and additionally distinguish the various cell types, this would represent a giant step for personalized medicine. Xenon magnetic resonance imaging has the potential to fulfil this promise – if suitable contrast media are found that react sensitively enough to the “exposure”. Researchers at the Leibniz-Institut für Molekulare Pharmakologie in Berlin have now found that a class of pumpkin-shaped molecules called cucurbiturils, together with the inert gas xenon, enables particularly good image contrast – around 100 times better than has been possible up to now. This finding, published as the cover article of the November issue of Chemical Science by the Royal Society of Chemistry, points the way to tailoring new contrast agents to different cell types and has the potential to enable molecular diagnostics even without tissue samples in the future.

Personalized medicine instead of one treatment for all – especially in cancer medicine, this approach has led to a paradigm shift. Molecular diagnostics is the key that will give patients access to tailor-made therapy. However, if tumors are located in poorly accessible areas of the body or several tumor foci are already present, this often fails due to a lack of sufficient sensitivity of the diagnostic imaging. But such sensitivity is needed to determine the different cell types, which differ considerably even within a tumor. Although even the smallest of tumor foci and other pathological changes can be detected using the PET-CT, a differentiation according to cell type is usually not possible.

Scientists from the FMP are therefore focusing on xenon magnetic resonance imaging: The further development of standard magnetic resonance imaging makes use of the “illuminating power” of the inert gas xenon, which can provide a 10,000-fold enhanced signal in the MRI. To do this, it must be temporarily captured by so-called “cage molecules” in the diseased tissue. This has been more or less successful with the molecules used to date, but the experimental approach is still far from a medical application.

Cucurbituril Provides Stunning Image Contrasts
The research group led by Dr. Leif Schröder at the Leibniz-Institut für Molekulare Pharmakologie (FMP) has now discovered a molecule class for this purpose that eclipses all of the molecules used to date. Cucurbituril exchanges around 100 times more xenon per unit of time than its fellow molecules, which leads to a much better image contrast. “It very quickly became clear that cucurbituril might be suitable as a contrast medium,” reports Leif Schröder. “However, it was surprising that areas marked with it were imaged with a much better contrast than previously.” The explanation is to be found in the speed. Upon exposure, so to speak, cucurbituril generates contrast more rapidly than all molecules used to date: because it binds each xenon atom only very briefly, the radio waves used to detect the inert gas are relayed to very many xenon atoms within a fraction of a second. In this way, the inert gas is passed through the molecule much more efficiently.

In the study, which appeared in the specialist journal “Chemical Science”, the world’s first MRI images with cucurbituril were achieved. With the aid of a powerful laser and a vaporized alkali metal, the researchers initially greatly strengthened the magnetic properties of normal xenon. The hyperpolarized gas was then introduced into a test solution with the cage molecules. A subsequent MRI image showed the distribution of the xenon in the object. In a second image, the cucurbituril together with radio waves destroyed the magnetization of the xenon, leading to dark spots on the images.

“Comparison of the two images demonstrates that only the xenon in the cages has the right resonance frequency to produce a dark area,” explains Schröder. “This blackening is possible to a much better degree with cucurbituril than with previous cage molecules, for it works like a very light-sensitive photographic paper. The contrast is around 100 times stronger.”
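
To make the speed argument concrete, here is a toy saturation-transfer model (my own illustration, not from the paper): if every xenon atom that cycles through a cage during the saturation pulse loses its magnetization, the detectable signal decays exponentially at a rate set by the exchange rate and the fraction of xenon bound in cages. All rate constants below are invented for illustration.

    import numpy as np

    def remaining_signal(k_exchange_per_s, bound_fraction, saturation_time_s):
        # Xenon cycling through the cages is depolarized by the radio waves,
        # so the free-xenon pool decays at ~ k_exchange * bound_fraction.
        return np.exp(-k_exchange_per_s * bound_fraction * saturation_time_s)

    slow_cage, fast_cage = 40.0, 4000.0   # hypothetical rates, ~100x apart
    print(remaining_signal(slow_cage, 1e-4, 10.0))  # ~0.96: weak darkening
    print(remaining_signal(fast_cage, 1e-4, 10.0))  # ~0.02: near-black spot

Under these assumptions, a roughly 100-fold faster exchange turns a barely visible darkening into near-complete blackening at the same cage concentration, which is the qualitative effect the group reports.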

Time-of-Flight IC Revolutionizes Object Detection and Distance Measurement

Tue, 10/13/2015 – 9:07 am, by Intersil

New ISL29501 signal processing IC detects objects at distances up to two meters

http://www.mdtmag.com/product-release/2015/10/time-flight-ic-revolutionizes-object-detection-and-distance-measurement
Intersil Corporation has introduced an innovative time-of-flight (ToF) signal processing IC that provides a complete object detection and distance measurement solution when combined with an external emitter (LED or laser) and photodiode. The ISL29501 ToF device offers one-of-a-kind functionality, including ultra-small size, low-power consumption and superior performance ideal for connected devices that make up the Internet of Things (IoT), as well as consumer mobile devices and the emerging commercial drone market.

The ISL29501 overcomes the shortcomings of traditional amplitude-based proximity sensors and other ToF solutions that perform poorly in lighting conditions above 2,000 lux, or cannot provide distance information unless the object is perpendicular to the sensor.

The ISL29501 applies Intersil’s power management expertise to save power and extend battery life through several innovations.

“Prior to Intersil’s time-of-flight technology breakthrough, there was no practical way to measure distance up to two meters in a small form factor,” said Andrew Cowell, senior vice president of Mobile Power Products at Intersil. “The innovative ISL29501 provides customers a cost-effective, small footprint solution that also gives them the flexibility to use multiple devices to increase the field of view to a full 360 degrees for enhanced object detection capabilities.”

Key Features and Specifications

  • On-chip DSP calculates ToF for accurate proximity detection and distance measurement up to two meters
  • Modulation frequency of 4.5MHz prevents interference with other consumer products such as IR TV remote controls that operate at 40kHz
  • On-chip emitter DAC with programmable current up to 255mA allows designers to choose the desired current level to optimize distance measurement and power budget
  • Operates in single-shot mode for initial object detection and approximate distance measurement, while continuous mode improves distance accuracy
  • On-chip active ambient light rejection minimizes or eliminates the influence of ambient light during distance measurement
  • Programmable distance zones: allows the user to define three ToF distance zones for determining interrupt alerts
  • Interrupt controller generates interrupt alerts using distance measurements and user defined thresholds
  • Automatic gain control sets optimum analog signal levels to achieve best SNR response
  • Supply voltage range of 2.7V to 3.3V
  • I2C interface supports 1.8V and 3.3V bus

The ISL29501 can be combined with the ISL9120 buck-boost regulator to further reduce power consumption and extend battery life in consumer and home automation applications.
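
For readers unfamiliar with continuous-wave time-of-flight ranging, the sketch below shows the generic phase-to-distance relationship such devices rely on. It illustrates the principle only and is not the ISL29501’s actual register interface or firmware; the helper function is hypothetical, though the 4.5MHz modulation frequency is taken from the specifications above.

    import math

    C = 299_792_458.0   # speed of light, m/s
    F_MOD = 4.5e6       # modulation frequency from the spec list above, Hz

    def distance_from_phase(phase_rad):
        # Light travels to the target and back, so a target at distance d
        # delays the return signal by 2*pi*F_MOD*(2*d/C) radians.
        return C * phase_rad / (4.0 * math.pi * F_MOD)

    print(C / (2 * F_MOD))            # ~33.3 m unambiguous range at 4.5 MHz
    print(distance_from_phase(0.38))  # a 0.38 rad shift -> ~2.0 m

The unambiguous range at 4.5MHz is far beyond the chip’s specified two meters, so presumably emitter power and signal-to-noise, rather than range aliasing, set the two-meter limit.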

Optoelectronic Implantable Could Enable Two-Way Communication with Brain

Mon, 10/12/2015 – 4:04 pm, by Brown University

http://www.mdtmag.com/news/2015/10/optoelectronic-implantable-could-enable-two-way-communication-brain

Brown University researchers have created a new type of optoelectronic implantable device to access brain microcircuits, building on a technique that enables scientists to control the activity of brain cells using pulses of light. The invention, described in the journal Nature Methods, is a cortical microprobe that can optically stimulate multiple neuronal targets in specific patterns at the micrometer scale while simultaneously recording the effects of that stimulation in the underlying neural microcircuits of interest with millisecond precision.

“We think this is a window-opener,” said Joonhee Lee, a senior research associate in Professor Arto Nurmikko’s lab in the School of Engineering at Brown and one of the lead authors of the new paper. “The ability to rapidly perturb neural circuits according to specific spatial patterns, and at the same time reconstruct how the circuits involved are perturbed, is in our view a substantial advance.”

First introduced around 2005, optogenetics has enriched the ability of scientists seeking to understand brain function at the neuronal level. The technique involves genetically engineering neurons to express light-sensitive proteins on their membranes. With those proteins expressed, pulses of light can be used to either promote or suppress activity in those particular cells. The method gives researchers, in principle, unprecedented ability to control specific brain cells at specific times.

But until now, simultaneous optogenetic stimulation and recording of brain activity rapidly across multiple points within a brain microcircuit of interest has proven difficult. Doing it requires a device that can both generate a spatial pattern of light pulses and detect the dynamic patterns of electrical reverberations generated by the excited cellular activity. Previous attempts to do this involved devices that cobbled together separate components for light emission and electrical sensing. Such probes were physically bulky and not ideal for insertion into a brain. And because the emitters and the sensors were necessarily separated by hundreds of micrometers, a sizable distance at this scale, the link between stimulation and recorded signal was ambiguous.

The new compact, integrated device developed by Nurmikko’s lab begins with the unique advantages of a so-called wide-bandgap semiconductor called zinc oxide. It is optically transparent yet readily conducts an electrical current.

“Very few materials have that pair of physical properties,” Lee said. “The combination makes it possible to both stimulate and detect with the same material.”

Joonhee Lee, with Assistant Research Professor Ilker Ozden and Professor Yoon-Kyu Song at Seoul National University in Korea, co-developed with Nurmikko a novel microfabrication method to shape the material into a monolithic chip just a few millimeters square with sixteen micrometer-sized, pin-like “optoelectrodes,” each capable of both delivering light pulses and sensing electrical current. The array of optoelectrodes enables the device to couple to neural microcircuits composed of many neurons rather than single neurons.

Such ability to stimulate and record at the network level, on the spatial and time scales at which networks operate, is key, Nurmikko says. Brain functions are driven by neural circuits rather than single neurons.

“For example, when I move my hand, that’s an example of action driven by specific network-level activity in the brain,” he said. “Our new device approach gives scientists and engineers a tool in applying the full power of optogenetics as a means of neural stimulation, while providing the means to read activity of perturbed networks at multiple points at high spatial precision and time resolution.”

Ozden led the initial testing of the device in rodent models. The researchers looked at the extent to which different light intensities could stimulate network activity. The tests showed that increasing optical power led to distinct recruitment of neuronal circuits revealing functional connectivity in the targeted network.

“We went over a large range of optical power, over three orders of magnitude, and in so doing we got a range of network-related responses; in particular, we could replicate an activity pattern naturally occurring in the brain,” Ozden said. “It gave us new insight into how optogenetics operates at the network level. This gives us encouragement to go ahead and extend the repertoire and application of the device technology.”

Nurmikko’s group, together with the Song lab in Seoul, plans to continue development of the device, ultimately including wireless access. Their next steps anticipate the use of the new device technology as a chronic implant in non-human primates at potentially hundreds of points and, depending on progress in worldwide research on optogenetics, perhaps even one day in humans.

“At least the initial building blocks are here,” said Nurmikko, who conceived the idea with his Korean colleague Song.

Study Advances Possibility of Mind-Controlled Devices

Mon, 10/12/2015 – 10:50 am, by Ryan Bushey, Associate Editor, R&D

A study published in the journal Nature Medicine has shown a possible path to creating effective neural prosthetics.

http://www.mdtmag.com/blog/2015/10/study-advances-possibility-mind-controlled-devices

The study’s subjects, identified only as T6 and T7, have Amyotrophic Lateral Sclerosis (ALS). The scientists performed surgery on them one year ago to place a “neural recording device” in the part of the brain in charge of controlling hand function, notes Bloomberg.

The test documented in the study required T6 and T7 to perform a variety of tasks, such as moving a cursor to hit different targets on a computer screen. The device receives electrical impulses from the brain and translates them into a computer signal to operate the cursor.
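
As a rough illustration of what turning brain impulses into a cursor signal typically involves, here is a minimal linear decoding sketch. It is not the study’s actual decoder (the paper’s algorithm and parameters are not reproduced here); the channel count, weights, and bin width are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n_channels = 96                              # hypothetical electrode count
    W = rng.normal(size=(2, n_channels)) * 0.01  # maps rates -> (vx, vy)

    def decode_velocity(rates_hz):
        # One time-bin of per-channel firing rates becomes a 2-D velocity.
        return W @ rates_hz

    cursor = np.zeros(2)
    for _ in range(100):                            # 100 decode bins
        rates = rng.poisson(20.0, size=n_channels)  # simulated firing rates
        cursor += 0.02 * decode_velocity(rates)     # integrate each 20 ms bin

In real systems the weights are typically fit to activity recorded while the participant imagines movements, then refined as the participant practices with the decoder.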

Both test subjects achieved the highest published performance so far, even doubling the results of the previous clinical trial participant, according to the study.

The hope is that these devices can improve quality of life for people suffering from paralysis.

You can watch how T6 performed in her test below.

https://youtu.be/9P-qsiIORVU

Removing 62 Barriers to Pig-to-Human Organ Transplant in One Fell Swoop

Mon, 10/12/2015 – 9:09 am, by Wyss Institute for Biologically Inspired Engineering

The largest number of simultaneous gene edits ever accomplished in a genome could help bridge the gap between the scarcity of transplant organs and the countless patients who need them

http://www.mdtmag.com/news/2015/10/removing-62-barriers-pig%E2%80%93%E2%80%93human-organ-transplant-one-fell-swoop

Never before have scientists been able to make scores of simultaneous genetic edits to an organism’s genome. But now in a landmark study by George Church, Ph.D., and his team at the Wyss Institute for Biologically Inspired Engineering at Harvard University and Harvard Medical School, the gene editing system known as “CRISPR–Cas9” has been used to genetically engineer pig DNA in 62 locations – an explosive leap forward in CRISPR’s capability when compared to its previous record maximum of just six simultaneous edits. The 62 edits were executed by the team to inactivate retroviruses found natively in the pig genome that have so far inhibited pig organs from being suitable for transplant in human patients. With the retroviruses safely removed via genetic engineering, however, the road is now open toward the possibility that humans could one day receive life–saving organ transplants from pigs.
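
The reason one editing reagent can hit 62 sites is that the PERV copies share conserved sequence, so a single guide RNA can direct Cas9 to every copy. The sketch below illustrates that idea as a string search for a protospacer followed by the NGG PAM that Cas9 requires; the sequences are invented placeholders, not the study’s actual guide.

    import re

    def find_target_sites(genome, protospacer):
        # Positions where the 20-nt protospacer is immediately followed by
        # an NGG PAM; each hit is a potential Cas9 cut site.
        pam_pattern = f"(?={protospacer}[ACGT]GG)"
        return [m.start() for m in re.finditer(pam_pattern, genome)]

    mock_perv = "TTGACCTCAGTGGATGTTGCAGG"   # placeholder repeat unit
    genome = ("ACGTACGT" + mock_perv) * 62  # 62 mock PERV copies
    print(len(find_target_sites(genome, "TTGACCTCAGTGGATGTTGC")))  # -> 62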

Church is a Wyss Core Faculty member, the Robert Winthrop Professor of Genetics at Harvard Medical School (HMS) and Professor of Health Sciences and Technology at Harvard and MIT. The advance, reported by Church and his team including the study’s lead author Luhan Yang, Ph.D., a Postdoctoral Fellow at HMS and the Wyss Institute, was published in the October 11 issue of Science.

The concept of xenotransplantation, which is the transplant of an organ from one species to another, is nothing new. Researchers and clinicians have long hoped that one of the major challenges facing patients suffering from organ failure – the lack of available organs in the United States and worldwide – could be alleviated through the availability of suitable animal organs for transplant. Pigs in particular have been especially promising candidates due to their similar size and physiology to humans. In fact, pig heart valves are already commonly sterilized and de-cellularized for use in repairing or replacing human heart valves.

This artistic rendering shows pig chromosomes (background), which reside in the nucleus of pig cells, and the CRISPR–Cas9 complex (foreground), which contains a single strand of RNA and the Cas9 protein targeting DNA. The CRISPR–Cas9 gene editing system works like molecular scissors to precisely edit genes of interest. A new advance reported in Science by Wyss Core Faculty member George Church and his team used Cas9 to make 62 edits to the pig genome to remove latent retroviruses, presenting a solution to one of the largest safety concerns that has so far blocked progress in making pig organs compatible for xenotransplant in humans. (Credit: Wyss Institute at Harvard University)

But the transplant of whole, functional organs comprised of living cells and tissue constructs has presented a unique set of challenges for scientists. One of the primary problems has been the fact that most mammals including pigs contain repetitive, latent retrovirus fragments in their genomes – present in all their living cells – that are harmless to their native hosts but can cause disease in other species.

“The presence of this type of virus found in pigs – known as porcine endogenous retroviruses or PERVs – brought over a billion dollars of pharmaceutical industry investment into developing xenotransplant methods to a standstill by the early 2000s,” said Church. “PERVs, and the lack of ability to remove them from pig DNA, were a real showstopper on what had been a promising stage set for xenotransplantation.”

Now – using CRISPR–Cas9 like a pair of molecular scissors – Church and his team have inactivated all 62 repetitive genes containing a PERV in pig DNA, surpassing a significant obstacle on the path to bringing xenotransplantation to clinical reality. With more than 120,000 patients currently in the United States awaiting transplant and fewer than 30,000 transplants on average occurring annually, xenotransplantation could give patients and clinicians an alternative in the future.

“Pig kidneys can already function experimentally for months in baboons, but concern about the potential risks of PERVs has posed a problem for the field of xenotransplantation for many years,” said David H. Sachs, M.D., Director of the TBRC Laboratories at Massachusetts General Hospital, Paul S. Russell Professor of Surgery Emeritus at Harvard Medical School, and Professor of Surgical Sciences at Columbia University’s Center for Translational Immunology. Sachs has been developing special pigs for xenotransplantation for more than 30 years and is currently collaborating with Church on further genetic modifications of his pigs. “If Church and his team are able to produce pigs from genetically engineered embryos lacking PERVs by the use of CRISPR-Cas9, they would eliminate an important potential safety concern facing this field.”

Yang says the team hopes eventually they can completely eliminate the risk that PERVs could cause disease in clinical xenotransplantation by using modified pig cells to clone a line of pigs that would have their PERV genes inactivated.

“This advance overcomes a major hurdle that has until now halted the progress of xenotransplantation research and development,” said Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Professor of Bioengineering at the Harvard John A. Paulson School of Engineering and Applied Sciences. “The real value and potential impact is in the number of lives that could be saved if we can one day use xenotransplants to close the huge gap between the number of available functional organs and the number of people who desperately need them.”

The remarkable and newly demonstrated capability of CRISPR to edit tens of repetitive genes such as PERVs will also unlock new ways for scientists to study and understand repetitive regions in the genome, which have been estimated to comprise more than two-thirds of the human genome.

Contributors to the work also included: co–lead authors Marc Güell of the Wyss Institute and Harvard Medical School Department of Genetics, Dong Niu of HMS Dept. of Genetics and Zhejiang University’s College of Animal Sciences, and Haydy George of HMS Dept. of Genetics; and co–authors Emal Lesha, Dennis Grishin, Jürgen Poci, Ellen Shrock, and Rebeca Cortazio of HMS Dept. of Genetics, Weihong Xu of Massachusetts General Hospital Department of Surgery, and Robert Wilkinson and Jay Fishman of MGH’s Transplant Infection Disease & Compromised Host Program.

Novel Gut-on-a-Chip Nearly Indistinguishable from Human GI Tract

Fri, 10/09/2015 – 1:17 pm, by University of North Carolina Healthcare

http://www.mdtmag.com/news/2015/10/novel-gut-chip-nearly-indistinguishable-human-gi-tract?et_cid=4876632&et_rid=535648082

A team of researchers from the University of North Carolina at Chapel Hill and NC State University has received a $5.3 million, five-year Transformative Research (R01) Award from the National Institutes of Health (NIH) to create fully functioning versions of the human gut that fit on a chip the size of a dime.

Such “organs-on-a-chip” have become vital for biomedical research, as researchers seek alternatives to animal models for drug discovery and testing. The new grant will fund a technology that represents a major step forward for the field, overcoming limitations that have mired other efforts.

The technology will use primary cells derived directly from human biopsies, which are known to provide more relevant results than the immortalized cell lines used in current approaches. In addition, the device will sculpt these cells into the sophisticated architecture of the gut, rather than the disorganized ball of cells that are created in other miniature organ systems.

This is a schematic of colonic epithelial tissue. Crypt units point down, and the flat surface faces the center of the gut tube. Stem cells are red, progenitor cells are pink, and differentiated cells are grey, blue and green. Yellow cells are stem cell niche cells. The luminal surface is above the crypts. (Credit: Scott Magness, PhD, UNC School of Medicine)

“We are building a device that goes far beyond the organ-on-a-chip,” said Nancy L. Allbritton, MD, PhD, professor and chair of the UNC-NC State joint department of biomedical engineering and one of four principal investigators on the NIH grant. “We call it a ‘simulacrum,’ a term used in science fiction to describe a duplicate. The idea is to create something that is indistinguishable from your own gut.”

Allbritton is an expert at microfabrication and microengineering. Also on the team are intestinal stem cell expert Scott T. Magness, PhD, associate professor of medicine, biomedical engineering, and cell and molecular physiology in the UNC School of Medicine; microbiome expert Scott Bultman, PhD, associate professor of genetics in the UNC School of Medicine; and bioinformatics expert Shawn Gomez, associate professor of biomedical engineering at UNC-Chapel Hill and NC State.

The impetus for the “organ-on-chip” movement comes largely from the failings of the pharmaceutical industry. For a single drug to go through the discovery, testing, and approval process can take as many as 15 years and as much as $5 billion. Animal models are expensive to work with and often don’t respond to drugs and diseases the same way humans do. Human cells grown in flat sheets on Petri dishes are also a poor proxy. Three-dimensional “organoids” are an improvement, but these hollow balls are made of a mishmash of cells that doesn’t accurately mimic the structure and function of the real organ.

Basically, the human gut is a 30-foot-long hollow tube made up of a continuous single layer of specialized cells. Regenerative stem cells reside deep inside millions of small pits or “crypts” along the tube, and mature differentiated cells are linked to the pits and live further out toward the surface. The gut also contains trillions of microbes, which are estimated to outnumber human cells by ten to one. These diverse microbial communities — collectively known as the microbiota — process toxins and pharmaceuticals, stimulate immunity, and even release hormones to impact behavior.

These are fluorescent images of the side view of two synthetic crypts. Blue: nuclei of the cells. Red: proliferating stem cells in similar location to those in the human colon. (Credit: Scott Magness, PhD, UNC School of Medicine)

To create a dime-sized version of this complex microenvironment, the UNC-NC State team borrowed fabrication technologies from the electronics and microfluidics world. The device is composed of a polymer base containing an array of imprinted or shaped “hydrogels,” a mesh of molecules that can absorb water like a sponge. These hydrogels are specifically engineered to provide the structural support and biochemical cues for growing cells from the gut. Plugged into the device will be various kinds of plumbing that bring in chemicals, fluids, and gases to provide cues that tell the cells how and where to differentiate and grow. For example, the researchers will engineer a steep oxygen gradient into the device that will enable oxygen-loving human cells and anaerobic microbes to coexist in close proximity.

“The underlying concept — to simply grow a piece of human tissue in a dish — doesn’t seem that groundbreaking,” said Magness. “We have been doing that for a long time with cancer cells, but those efforts do not replicate human physiology. Using native stem cells from the small intestine or colon, we can now develop gut tissue layers in a dish that contains stem cells and all the differentiated cells of the gut. That is the thing stem cell biologists and engineers have been shooting for, to make real tissue behave properly in a dish to create better models for drug screening and cell-based therapies. With this work, we made a big leap toward that goal.”

Right now, the team has a working prototype that can physically and chemically guide mouse intestinal stem cells into the appropriate structure and function of the gut. For several years, Magness has been isolating and banking human stem cells from samples from patients undergoing routine colonoscopies at UNC Hospitals. As part of the grant, he will work with the rest of the team to apply these stem cells to the new device and create “simulacra” that are representative of each patient’s individual gut. The approach will enable researchers to explore in a personalized way how both the human and microbial cells of the gut behave during healthy and diseased states.

“Having a system like this will advance microbiota research tremendously,” said Bultman. “Right now microbiota studies involve taking samples, doing sequencing, and then compiling an inventory of all the microbes in the disease cases and healthy controls. These studies just draw associations, so it is difficult to glean cause and effect. This device will enable us to probe the microbiota, and gain a better understanding of whether changes in these microbial communities are the cause or the consequence of disease.”

On-Chip Optical Sensing Technique Detects Multiple Flu Strains

Tue, 10/06/2015 – 10:11 am, by University of California – Santa Cruz

http://www.mdtmag.com/news/2015/10/chip-optical-sensing-technique-detects-multiple-flu-strains?et_cid=4876632&et_rid=535648082

A schematic view shows the optical waveguide intersecting a fluidic microchannel containing target particles. Targets are optically excited as they flow past well-defined excitation spots created by multi-mode interference; fluorescence is collected by the liquid-core waveguide channel and routed into solid-core waveguides (red). (Credit: Ozcelik et al., PNAS 2015)

New chip-based optical sensing technologies developed by researchers at UC Santa Cruz and Brigham Young University enable the rapid detection and identification of multiple biomarkers. In a paper published October 5 in Proceedings of the National Academy of Sciences, researchers describe a novel method to perform diagnostic assays for multiple strains of flu virus on a small, dedicated chip.

“A standard flu test checks for about ten different flu strains, so it’s important to have an assay that can look at ten to 15 things at once. We showed a completely new way to do that on an optofluidic chip,” said senior author Holger Schmidt, the Kapany Professor of Optoelectronics in the Baskin School of Engineering at UC Santa Cruz.

Over the past decade, Schmidt and his collaborators at BYU have developed chip-based technology to optically detect single molecules without the need for high-end laboratory equipment. Diagnostic instruments based on their optofluidic chips could provide a rapid, low-cost, and portable option for identifying specific disease-related molecules or virus particles.

In the new study, Schmidt demonstrated a novel application of a principle called wavelength division multiplexing, which is widely used in fiber-optic communications. By superimposing multiple wavelengths of light in an optical waveguide on a chip, he was able to create wavelength-dependent spot patterns in an intersecting fluidic channel. Virus particles labeled with fluorescent markers give distinctive signals as they pass through the fluidic channel depending on which wavelength of light the markers absorb.

“Each color of light produces a different spot pattern in the channel, so if the virus particle is labeled to respond to blue light, for example, it will light up nine times as it goes through the channel, if it’s labeled for red it lights up seven times, and so on,” Schmidt explained.
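
The decoding step Schmidt describes amounts to counting fluorescence bursts and matching the count against each label’s known spot signature. Here is a minimal sketch of that logic (my own illustration; the spot counts follow the example above, and the peak threshold would be tuned to the real detector):

    import numpy as np
    from scipy.signal import find_peaks

    SIGNATURES = {9: "blue-labeled strain", 7: "red-labeled strain"}

    def classify_particle(trace, min_height):
        # Count the flashes a particle emits while crossing the spot pattern.
        peaks, _ = find_peaks(trace, height=min_height)
        return SIGNATURES.get(len(peaks), "unrecognized / combination label")

    # Synthetic trace: 9 evenly spaced bursts, as for the blue-labeled strain.
    trace = np.zeros(1000)
    trace[np.arange(50, 950, 100)] = 1.0
    print(classify_particle(trace, 0.5))   # -> "blue-labeled strain"

A particle carrying a combination of labels would light up under more than one wavelength, producing a summed burst pattern that maps to its own entry in the signature table.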

The researchers tested the device using three different influenza subtypes labeled with different fluorescent markers. Initially, each strain of the virus was labeled with a single dye color, and three wavelengths of light were used to detect them in a mixed sample. In a second test, one strain was labeled with a combination of the colors used to label the other two strains. Again, the detector could distinguish among the viruses based on the distinctive signals from each combination of markers. This combinatorial approach is important because it increases the number of different targets that can be detected with a given number of wavelengths of light.

For these tests, each viral subtype was separately labeled with fluorescent dye. For an actual diagnostic assay, fluorescently labeled antibodies could be used to selectively attach distinctive fluorescent markers to different strains of the flu virus.

While previous studies have shown the sensitivity of Schmidt’s optofluidic chips for detection of single molecules or particles, the demonstration of multiplexing adds another important feature for on-chip bioanalysis. Compact instruments based on the chip could provide a versatile tool for diagnostic assays targeting a variety of biological particles and molecular markers.

The optofluidic chip was fabricated by Schmidt’s collaborators at Brigham Young University led by Aaron Hawkins. The joint first authors of the PNAS paper are Damla Ozcelik and Joshua Parks, both graduate students in Schmidt’s lab at UC Santa Cruz. Other coauthors include Hong Cai and Joseph Parks at UC Santa Cruz and Thomas Wall and Matthew Stott at BYU.

In another recent paper, published September 25 in Scientific Reports, Schmidt’s team reported the development of a hybrid device that integrates an optofluidic chip for virus detection with a microfluidic chip for sample preparation.

“These two papers represent important milestones for us. Our goal has always been to use this technology to analyze clinically relevant samples, and now we are doing it,” Schmidt said.

Boom in Gene-Editing Studies amid Ethics Debate over Its Use

Mon, 10/12/2015 – 1:54 pm, by Lauran Neergaard, AP Medical Writer

http://www.mdtmag.com/news/2015/10/boom-gene-editing-studies-amid-ethics-debate-over-its-use-0

The hottest tool in biology has scientists using words like revolutionary as they describe the long-term potential: wiping out certain mosquitoes that carry malaria, treating genetic diseases like sickle cell, preventing babies from inheriting a life-threatening disorder.

It may sound like sci-fi, but research into genome editing is booming. So is a debate about its boundaries, what’s safe and what’s ethical to try in the quest to fight disease.

Does the promise warrant experimenting with human embryos? Researchers in China already have, and they’re poised to in Britain.

Should we change people’s genes in a way that passes traits to future generations? Beyond medicine, what about the environmental effects if, say, altered mosquitoes escape before we know how to use them?

“We need to try to get the balance right,” said Jennifer Doudna, a biochemist at the University of California, Berkeley. She helped develop new gene-editing technology and hears from desperate families, but urges caution in how it’s eventually used in people.

The U.S. National Academies of Science, Engineering and Medicine will bring international scientists, ethicists and regulators together in December to start determining that balance. The biggest debate is whether it ever will be appropriate to alter human heredity by editing an embryo’s genes.

“This isn’t a conversation on a cloud,” but something that families battling devastating rare diseases may want, Dr. George Daley of Boston Children’s Hospital told specialists meeting this week to plan the ethics summit. “There will be a drive to move this forward.”

Laboratories worldwide are embracing a technology to precisely edit genes inside living cells — turning them off or on, repairing or modifying them — like a biological version of cut-and-paste software. Researchers are building stronger immune cells, fighting muscular dystrophy in mice and growing human-like organs in pigs for possible transplant. Biotech companies have raised millions to develop therapies for sickle cell disease and other disorders.

The technique has a wonky name — CRISPR-Cas9 — and a humble beginning.

Doudna was studying how bacteria recognize and disable viral invaders, using a protein she calls “a genetic scalpel” to slice DNA. That system turned out to be programmable, she reported in 2012, letting scientists target virtually any gene in many species using a tailored CRISPR recipe.

There are older methods to edit genes, including one that led to an experimental treatment for the AIDS virus, but the CRISPR technique is faster and cheaper and allows altering of multiple genes simultaneously.

“It’s transforming almost every aspect of biology right now,” said National Institutes of Health genomics specialist Shawn Burgess.

In this photo provided by UC Berkeley Public Affairs, taken June 20, 2014 Jennifer Doudna, right, and her lab manager, Kai Hong, work in her laboratory in Berkeley, Calif. The hottest tool in biology has scientists using words like revolutionary as they describe the long-term potential: wiping out certain mosquitoes that carry malaria, treating genetic diseases like sickle-cell, preventing babies from inheriting a life-threatening disorder. “We need to try to get the balance right,” said Doudna. She helped develop new gene-editing technology and hears from desperate families, but urges caution in how it’s eventually used in people. (Cailey Cotner/UC Berkeley via AP)

CRISPR’s biggest use has nothing to do with human embryos. Scientists are engineering animals with human-like disorders more easily than ever before, to learn to fix genes gone awry and test potential drugs.

Engineering rodents to harbor autism-related genes once took a year. It takes weeks with CRISPR, said bioengineer Feng Zhang of the Broad Institute at MIT and Harvard, who also helped develop and patent the CRISPR technique. (Doudna’s university is challenging the patent.)

A peek inside an NIH lab shows how it works. Researchers inject a CRISPR-guided molecule into microscopic mouse embryos, to cause a gene mutation that a doctor suspects of causing a patient’s mysterious disorder. The embryos will be implanted into female mice that wake up from the procedure in warm blankets to a treat of fresh oranges. How the resulting mouse babies fare will help determine the gene defect’s role.

Experts predict the first attempt to treat people will be for blood-related diseases such as sickle cell, caused by a single gene defect that’s easy to reach. The idea is to use CRISPR in a way similar to a bone marrow transplant, but to correct someone’s own blood-producing cells rather than implanting donated ones.

“It’s like a race. Will the research provide a cure while we’re still alive?” asked Robert Rosen of Chicago, who has one of a group of rare bone marrow abnormalities that can lead to leukemia or other life-threatening conditions. He co-founded the MPN Research Foundation, which has begun funding some CRISPR-related studies.

So why the controversy? CRISPR made headlines last spring when Chinese scientists reported the first-known attempt to edit human embryos, working with unusable fertility clinic leftovers. They aimed to correct a deadly disease-causing gene but it worked in only a few embryos and others developed unintended mutations, raising fears of fixing one disease only to cause another.

If ever deemed safe enough to try in pregnancy, that type of gene change could be passed on to later generations. Then there are questions about designer babies, altered for other reasons than preventing disease.

In the U.S., the NIH has said it won’t fund such research in human embryos.

In Britain, regulators are considering researchers’ request to gene-edit human embryos — in lab dishes only — for a very different reason, to study early development.

Medicine aside, another issue is environmental: altering insects or plants in a way that ensures they pass genetic changes through wild populations as they reproduce. These engineered “gene drives” are in very early stage research, too, but one day might be used to eliminate invasive plants, make it harder for mosquitoes to carry malaria or even spread a defect that gradually kills off the main malaria-carrying species, said Kevin Esvelt of Harvard’s Wyss Institute for Biologically Inspired Engineering.

No one knows how that might also affect habitats, Esvelt said. His team is calling for the public to weigh in and for scientists to take special precautions. For example, Esvelt said colleagues are researching a tropical mosquito species unlikely to survive Boston’s cold even if one escaped the locked labs.

“There is no societal precedent whatsoever for a widely accessible and inexpensive technology capable of altering the shared environment,” Esvelt told a recent National Academy of Sciences hearing.

Researchers Use ‘Avatar’ Experiments to Get Leg Up On Locomotion

Mon, 10/12/2015 – 5:09 pm, by North Carolina State University

North Carolina State University scientists take a giant leap closer to understanding locomotion from the leg up

http://www.mdtmag.com/news/2015/10/researchers-use-avatar-experiments-get-leg-locomotion

Simple mechanical descriptions of the way people and animals walk, run, jump and hop liken whole leg behavior to a spring or pogo stick.

But until now, no one has mapped the body’s complex physiology – which in locomotion includes multiple leg muscle-tendons crossing the hip, knee and ankle joints, the weight of a body, and control signals from the brain – with the rather simple physics of spring-like limb behavior.

Using an “Avatar”-like bio-robotic motor system that integrates a real muscle and tendon along with a computer controlled nerve stimulator acting as the avatar’s spinal cord, North Carolina State University researchers have taken a giant leap closer to understanding locomotion from the leg up. The findings could help create robotic devices that begin to merge human and machine in order to assist human locomotion.

Despite the complicated physiology involved, NC State biomedical engineer Greg Sawicki and Temple University post-doctoral researcher Ben Robertson show that if you know the mass, the stiffness and the leverage of the ankle’s primary muscle-tendon unit, you can predict neural control strategies that will result in spring-like behavior.

“We tried to build locomotion from the bottom up by starting with a single muscle-tendon unit, the basic power source for locomotion in all things that move,” said Greg Sawicki, associate professor in the NC State and UNC-Chapel Hill Joint Department of Biomedical Engineering and co-author of a paper published in Proceedings of the National Academy of Sciences that describes the work. “We connected that muscle-tendon unit to a motor inside a custom robotic interface designed to simulate what the muscle-tendon unit ‘feels’ inside the leg, and then electrically stimulated the muscle to get contractions going on the benchtop.”

The researchers showed that resonance tuning is a likely mechanism behind springy leg behavior during locomotion. That is, the electrical system – in this case the body’s nervous system – drives the mechanical system – the leg’s muscle-tendon unit – at a frequency which provides maximum ‘bang for the buck’ in terms of efficient power output.

Sawicki likened resonance tuning to interacting with a slinky toy. “When you get it oscillating well, you hardly have to move your hand – it’s the timing of the interaction forces that matters.

“In locomotion, resonance comes from tuning the interaction between the nervous system and the leg so they work together,” Sawicki said. “It turns out that if I know the mass, leverage and stiffness of a muscle-tendon unit, I can tell you exactly how often I should stimulate it to get resonance in the form of spring-like, elastic behavior.”
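
In its simplest reading, that statement is the textbook natural frequency of a mass on a spring, with leverage rescaling the stiffness the load experiences. The sketch below is a minimal illustration under that assumption; the numbers are invented, not the paper’s measured muscle-tendon parameters.

    import math

    def resonant_stim_frequency_hz(stiffness_n_per_m, mass_kg, leverage=1.0):
        # A lever with moment-arm ratio r makes the tendon spring appear
        # r**2 times stiffer to the load; resonance is sqrt(k/m)/(2*pi).
        k_effective = stiffness_n_per_m * leverage ** 2
        return math.sqrt(k_effective / mass_kg) / (2.0 * math.pi)

    # e.g. a 100 N/m series-elastic tendon driving a 1 kg load via a 1:1 lever:
    print(resonant_stim_frequency_hz(100.0, 1.0))   # ~1.6 Hz

Stimulating at roughly that rate would let the tendon store and return elastic energy each cycle, the “bang for the buck” described above.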

The findings have design implications relevant to designing exoskeletons for able-bodied individuals, as well as exoskeleton or prosthetic systems for people with mobility impairments.

“In the end, we found that the same simple underlying principles that govern resonance in simple mechanical systems also apply to these extraordinarily complicated physiological systems,” said Robertson, the corresponding author of the paper.



Follow-up on Tomosynthesis

Writer & Curator: Dror Nir, PhD

Tomosynthesis is a method for performing high-resolution, limited-angle tomography (i.e., not a full 360° rotation but more like ~50°). The use of such systems in breast-cancer screening has been steadily increasing since the FDA cleared such a system in 2011; see my posts – Improving Mammography-based imaging for better treatment planning and State of the art in oncologic imaging of breast.

Many radiologists expect that Tomosynthesis will eventually replace conventional mammography because it increases the sensitivity of breast cancer detection, a claim supported by new peer-reviewed publications. In addition, the patient’s experience during Tomosynthesis is less painful because less pressure is applied to the breast, and while offering higher in-plane resolution and fewer imaging artifacts, the mean glandular dose of digital breast Tomosynthesis is comparable to that of full-field digital mammography. Because it is relatively new, Tomosynthesis is not available at every hospital, although the procedure is recognized for reimbursement by public-health schemes.
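
To see why a limited arc is enough to separate breast planes, consider shift-and-add, the simplest tomosynthesis reconstruction: features in the plane of interest are shifted back into registration across all projections and reinforce, while features above and below blur out. The sketch below is a toy illustration of the principle only; commercial systems use filtered or iterative reconstruction, and the geometry here is deliberately simplified.

    import numpy as np

    def shift_and_add(projections, angles_deg, plane_height_mm, pixel_mm):
        # A feature at height h appears laterally displaced by h*tan(angle)
        # in each projection; undoing that shift focuses the chosen plane.
        recon = np.zeros_like(projections[0], dtype=float)
        for proj, angle in zip(projections, angles_deg):
            shift = int(round(plane_height_mm * np.tan(np.radians(angle)) / pixel_mm))
            recon += np.roll(proj, -shift)
        return recon / len(projections)

    angles = np.linspace(-25.0, 25.0, 15)   # ~50 degree arc, 15 exposures
    detector = np.zeros((15, 200))
    for i, a in enumerate(angles):          # a point feature 20 mm above
        detector[i, 100 + int(round(20.0 * np.tan(np.radians(a))))] = 1.0
    focused = shift_and_add(detector, angles, plane_height_mm=20.0, pixel_mm=1.0)
    print(focused.argmax(), focused.max())  # -> 100 1.0: the plane is in focus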

A good summary of radiologist opinion on Tomosynthesis can be found in the following video:

Recent studies’ results with digital Tomosynthesis are promising. In addition to increased sensitivity for detection of small cancer lesions, researchers claim that this new breast imaging technique will make breast cancers easier to see in dense breast tissue. Here is a paper published online by The Lancet Oncology just a couple of months ago:

Integration of 3D digital mammography with tomosynthesis for population breast-cancer screening (STORM): a prospective comparison study

Stefano Ciatto†, Nehmat Houssami, Daniela Bernardi, Francesca Caumo, Marco Pellegrini, Silvia Brunelli, Paola Tuttobene, Paola Bricolo, Carmine Fantò, Marvi Valentini, Stefania Montemezzi, Petra Macaskill. Lancet Oncol. 2013 Jun;14(7):583-9. doi: 10.1016/S1470-2045(13)70134-7. Epub 2013 Apr 25.

Background Digital breast tomosynthesis with 3D images might overcome some of the limitations of conventional 2D mammography for detection of breast cancer. We investigated the effect of integrated 2D and 3D mammography in population breast-cancer screening.

Methods Screening with Tomosynthesis OR standard Mammography (STORM) was a prospective comparative study. We recruited asymptomatic women aged 48 years or older who attended population-based breast-cancer screening through the Trento and Verona screening services (Italy) from August, 2011, to June, 2012. We did screen-reading in two sequential phases—2D only and integrated 2D and 3D mammography—yielding paired data for each screen. Standard double-reading by breast radiologists determined whether to recall the participant based on positive mammography at either screen read. Outcomes were measured from final assessment or excision histology. Primary outcome measures were the number of detected cancers, the number of detected cancers per 1000 screens, the number and proportion of false positive recalls, and incremental cancer detection attributable to integrated 2D and 3D mammography. We compared paired binary data with McNemar’s test.

Findings 7292 women were screened (median age 58 years [IQR 54–63]). We detected 59 breast cancers (including 52 invasive cancers) in 57 women. Both 2D and integrated 2D and 3D screening detected 39 cancers. We detected 20 cancers with integrated 2D and 3D only versus none with 2D screening only (p<0.0001). Cancer detection rates were 5.3 cancers per 1000 screens (95% CI 3.8–7.3) for 2D only, and 8.1 cancers per 1000 screens (6.2–10.4) for integrated 2D and 3D screening. The incremental cancer detection rate attributable to integrated 2D and 3D mammography was 2.7 cancers per 1000 screens (1.7–4.2). 395 screens (5.5%; 95% CI 5.0–6.0) resulted in false positive recalls: 181 at both screen reads, and 141 with 2D only versus 73 with integrated 2D and 3D screening (p<0.0001). We estimated that conditional recall (positive integrated 2D and 3D mammography as a condition to recall) could have reduced false positive recalls by 17.2% (95% CI 13.6–21.3) without missing any of the cancers detected in the study population.

Interpretation Integrated 2D and 3D mammography improves breast-cancer detection and has the potential to reduce false positive recalls. Randomised controlled trials are needed to compare integrated 2D and 3D mammography with 2D mammography for breast cancer screening.

Funding National Breast Cancer Foundation, Australia; National Health and Medical Research Council, Australia; Hologic, USA; Technologic, Italy.
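
Before the full text, a quick check of the arithmetic behind the Findings paragraph; this is my own sketch re-deriving the reported rates and the McNemar comparison from the paired counts, not code from the study.

    from scipy.stats import binom

    screens = 7292
    both, only_3d, only_2d = 39, 20, 0   # paired cancer-detection counts

    print(1000 * (both + only_2d) / screens)  # 5.35 per 1000: 2D-only rate
    print(1000 * (both + only_3d) / screens)  # 8.09 per 1000: integrated rate
    print(1000 * only_3d / screens)           # 2.74 per 1000: incremental

    # The exact McNemar test uses only the discordant pairs (20 vs 0):
    n_discordant = only_3d + only_2d
    p = 2 * binom.cdf(min(only_3d, only_2d), n_discordant, 0.5)
    print(p)                                  # ~1.9e-06, i.e. p < 0.0001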

Introduction

Although controversial, mammography screening is the only population-level early detection strategy that has been shown to reduce breast-cancer mortality in randomised trials.1,2 Irrespective of which side of the mammography screening debate one supports,1–3 efforts should be made to investigate methods that enhance the quality of (and hence potential benefit from) mammography screening. A limitation of standard 2D mammography is the superimposition of breast tissue or parenchymal density, which can obscure cancers or make normal structures appear suspicious. This shortcoming reduces the sensitivity of mammography and increases false-positive screening. Digital breast tomosynthesis with 3D images might help to overcome these limitations. Several reviews4,5 have described the development of breast tomosynthesis technology, in which several low-dose radiographs are used to reconstruct a pseudo-3D image of the breast.4–6

Initial clinical studies of 3D mammography,6–10 though based on small or selected series, suggest that addition of 3D to 2D mammography could improve cancer detection and reduce the number of false positives. However, previous assessments of breast tomosynthesis might have been constrained by selection biases that distorted the potential effect of 3D mammography; thus, screening trials of integrated 2D and 3D mammography are needed.6

We report the results of a large prospective study (Screening with Tomosynthesis OR standard Mammography [STORM]) of 3D digital mammography. We investigated the effect of screen-reading using both standard 2D and 3D imaging with tomosynthesis compared with screening with standard 2D digital mammography only for population breast-cancer screening.

  

Methods

Study design and participants

STORM is a prospective population-screening study that compares mammography screen-reading in two sequential phases (figure)—2D only versus integrated 2D and 3D mammography with tomosynthesis—yielding paired results for each screening examination. Women aged 48 years or older who attended population-based screening through the Trento and Verona screening services, Italy, from August, 2011, to June, 2012, were invited to be screened with integrated 2D and 3D mammography. Participants in routine screening mammography (once every 2 years) were asymptomatic women at standard (population) risk for breast cancer. The study was granted institutional ethics approval at each centre, and participants gave written informed consent. Women who opted not to participate in the study received standard 2D mammography. Digital mammography has been used in the Trento breast-screening programme since 2005, and in the Verona programme since 2007; each service monitors outcomes and quality indicators as dictated by European standards, and both have published data for screening performance.11,12

 

[Figure: study design]

Procedures

All participants had digital mammography using a Selenia Dimensions Unit with integrated 2D and 3D mammography done in the COMBO mode (Hologic, Bedford, MA, USA): this setting takes 2D and 3D images at the same screening examination with a single breast position and compression. Each 2D and 3D image consisted of a bilateral two-view (mediolateral oblique and craniocaudal) mammogram. Screening mammograms were interpreted sequentially by radiologists, first on the basis of standard 2D mammography alone, and then by the same radiologist (on the same day) on the basis of integrated 2D and 3D mammography (figure). Thus, integrated 2D and 3D mammography screening refers to non-independent screen reading based on joint interpretation of 2D and 3D images, and does not refer to analytical combinations. Radiologists had to record whether or not to recall the participant at each screen-reading phase before progressing to the next phase of the sequence. For each screen, data were also collected for breast density (at the 2D screen-read), and the side and quadrant for any recalled abnormality (at each screen-read). All eight radiologists were breast radiologists with a mean of 8 years (range 3–13 years) of experience in mammography screening, and had received basic training in integrated 2D and 3D mammography. Several of the radiologists had also used 2D and 3D mammography for patients recalled after positive conventional mammography screening as part of previous studies of tomosynthesis.8,13

Mammograms were interpreted in two independent screen-reads done in parallel, as practised in most population breast-screening programmes in Europe. A screen was considered positive, and the woman recalled for further investigations, if either screen-reader recorded a positive result at either 2D or integrated 2D and 3D screening (figure). When previous screening mammograms were available, these were shown to the radiologist at the time of screen-reading, as is standard practice. For assessment of breast density, we used the Breast Imaging Reporting and Data System (BI-RADS)14 classification, with participants allocated to one of two groups (1–2 [low density] or 3–4 [high density]). Disagreement between readers about breast density was resolved by assessment by a third reader.

Our primary outcomes were the number of cancers detected, the number of cancers detected per 1000 screens, the number and percentage of false positive recalls, and the incremental cancer detection rate attributable to integrated 2D and 3D mammography screening. We compared the number of cancers that were detected only at 2D mammography screen-reading with those that were detected only at integrated 2D and 3D mammography screen-reading; we also did this analysis for false positive recalls. To explore the potential effect of integrated 2D and 3D screening on false-positive recalls, we also estimated how many false-positive recalls would have resulted from a hypothetical conditional recall approach, i.e., positive integrated 2D and 3D mammography as a condition of recall (screens recalled at 2D mammography only would not be recalled). Pre-planned secondary analyses were comparisons of outcome measures by age group and breast density.

Outcomes were assessed by excision histology for participants who had surgery, or by the complete assessment outcome (including investigative imaging with or without histology from core needle biopsy) for all recalled participants. Because our study focuses on the difference in detection between the two screening methods, some cancers might have been missed by both 2D and integrated 2D and 3D mammography; this possibility could be assessed at future follow-up to identify interval cancers. However, this outcome is not assessed in the present study and does not affect estimates of our primary outcomes, i.e., comparative true or false positive detection for 2D-only versus integrated 2D and 3D mammography.

 

Statistical analysis

The sample size was chosen to provide 80% power to detect a difference of 20% in cancer detection, assuming a detection probability of 80% for integrated 2D and 3D screening mammography and 60% for 2D only screening, with a two-sided significance threshold of 5%. Based on Lachenbruch's method15 for estimating sample size for studies that use McNemar's test for paired binary data, at least 40 cancers were needed. Because most screens in the participating centres were incident (repeat) screens (75–80%), we assumed an underlying breast-cancer prevalence of 0.5% and estimated that roughly 7500–8000 screens would be needed to identify 40 cancers in the study population.
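As a quick arithmetic check of this estimate, the required number of screens follows directly from the 40-cancer minimum and the assumed 0.5% prevalence quoted above; the sketch below is an illustration of that back-of-envelope step only, not a reproduction of Lachenbruch's full power calculation.

```python
# Back-of-envelope check of the STORM sample-size arithmetic.
# The 40-cancer minimum and 0.5% prevalence are quoted in the Methods;
# this is an illustration, not Lachenbruch's full power calculation.

min_cancers = 40      # cancers needed for McNemar's test (per Lachenbruch's method)
prevalence = 0.005    # assumed underlying breast-cancer prevalence (0.5%)

screens_needed = min_cancers / prevalence
print(f"screens needed: {screens_needed:.0f}")  # 8000, matching the quoted 7500-8000 range
```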

We calculated the Wilson CI for the false-positive recall ratio for integrated 2D and 3D screening with conditional recall compared with 2D only screening.16 All of the other analyses were done with SAS/STAT (version 9.2), using exact methods to compute 95% CIs and p-values.
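For readers who want to reproduce the primary paired comparison, the exact McNemar test reduces to a two-sided binomial test on the discordant pairs. A minimal sketch using the counts reported in the Findings; the study's own analysis was run in SAS/STAT, and SciPy (version 1.7 or later) is assumed here purely for illustration.

```python
# Exact McNemar test on the paired cancer-detection data: 39 cancers
# detected by both reads, 20 by integrated 2D+3D only, 0 by 2D only.
# With all discordant pairs on one side, the exact test is a two-sided
# binomial test with p = 0.5 on the discordant pairs.
from scipy.stats import binomtest

detected_3d_only = 20   # discordant pairs favouring integrated 2D+3D
detected_2d_only = 0    # discordant pairs favouring 2D alone

n_discordant = detected_3d_only + detected_2d_only
result = binomtest(detected_3d_only, n_discordant, p=0.5,
                   alternative="two-sided")
print(f"exact McNemar p-value: {result.pvalue:.2e}")  # ~1.9e-06, i.e. p<0.0001
```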

Role of the funding source

The sponsors of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding author (NH) had full access to all the data in the study and had final responsibility for the decision to submit for publication.

Results

7292 participants with a median age of 58 years (IQR 54–63, range 48–71) were screened between Aug 12, 2011, and June 29, 2012. Roughly 5% of invited women declined integrated 2D and 3D screening and received standard 2D mammography. We present data for 7294 screens because two participants had bilateral cancer (detected with different screen-reading techniques for one participant). We detected 59 breast cancers in 57 participants (52 invasive cancers and seven cases of ductal carcinoma in situ). Of the invasive cancers, most were invasive ductal (n=37); the others were invasive special types (n=7), invasive lobular (n=4), and mixed invasive types (n=4).

Table 1 shows the characteristics of the cancers. Mean tumour size (for the invasive cancers with known exact size) was 13.7 mm (SD 5.8) for cancers detected with both 2D and integrated 2D and 3D screening (n=29), and 13.5 mm (SD 6.7) for cancers detected only with integrated 2D and 3D screening (n=13).

 

Table 1

Of the 59 cancers, 39 were detected at both 2D and integrated 2D and 3D screening (table 2). 20 cancers were detected with only integrated 2D and 3D screening compared with none detected with only 2D screening (p<0.0001; table 2). 395 screens were false positive (5.5%, 95% CI 5.0–6.0); 181 occurred at both screen-readings, and 141 occurred at 2D screening only compared with 73 at integrated 2D and 3D screening (p<0.0001; table 2). These differences were still significant in sensitivity analyses that excluded the two participants with bilateral cancer (data not shown).


Table 2

5.3 cancers per 1000 screens (95% CI 3.8–7.3; table 3) were detected with 2D mammography only versus 8.1 cancers per 1000 screens (95% CI 6.2–10.4) with integrated 2D and 3D mammography (p<0.0001). The incremental cancer detection rate attributable to integrated 2D and 3D screening was 2.7 cancers per 1000 screens (95% CI 1.7–4.2), which is 33.9% (95% CI 22.1–47.4) of the cancers detected in the study population. In a sensitivity analysis that excluded the two participants with bilateral cancer, the estimated incremental cancer detection rate attributable to integrated 2D and 3D screening was 2.6 cancers per 1000 screens (95% CI 1.4–3.8). The stratified results show that integrated 2D and 3D mammography was associated with an incremental increase in the cancer detection rate in both age groups and in both density categories (tables 3–5). A minority (16.7%) of breasts were of high density (category 3–4), reducing the power of statistical comparisons in this subgroup (table 5). The incremental cancer detection rate was much the same in the low density and high density groups (2.8 per 1000 vs 2.5 per 1000; p=0.84; table 3).
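The per-1000 rates and CIs in this paragraph follow directly from the raw counts. A minimal sketch that recomputes them, assuming SciPy's exact (Clopper-Pearson) interval is an acceptable stand-in for the SAS exact methods used in the study:

```python
# Recomputing the cancer detection rates and exact 95% CIs quoted in
# the Results (counts from the text; scipy >= 1.7 assumed).
from scipy.stats import binomtest

n_screens = 7294   # screens (7292 women, two with bilateral cancers)

for label, detected in [("2D only", 39),
                        ("integrated 2D+3D", 59),
                        ("incremental (3D-only detections)", 20)]:
    ci = binomtest(detected, n_screens).proportion_ci(
        confidence_level=0.95, method="exact")
    print(f"{label}: {1000 * detected / n_screens:.1f} per 1000 "
          f"(95% CI {1000 * ci.low:.1f}-{1000 * ci.high:.1f})")
# Output should be close to the published 5.3 (3.8-7.3), 8.1 (6.2-10.4)
# and 2.7 (1.7-4.2) cancers per 1000 screens.
```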


Table 3

Table 4-5

Overall recall—any recall resulting in true or false positive screens—was 6.2% (95% CI 5.7–6.8), and the false-positive rate for the 7235 screens of participants who did not have breast cancer was 5.5% (5.0–6.0). Table 6 shows the contribution to false-positive recalls from 2D mammography only, integrated 2D and 3D mammography only, and both, and the estimated number of false positives if positive integrated 2D and 3D mammography had been a condition for recall (positive 2D only not recalled). Overall, more of the false-positive rate was driven by 2D mammography only than by integrated 2D and 3D, although almost half of the false-positive rate was a result of false positives recalled at both screen-reading phases (table 6). The findings were much the same when stratified by age and breast density (table 6). Had a conditional recall rule been applied, we estimate that the false-positive rate would have been 3.5% (95% CI 3.1–4.0; table 6), potentially preventing 68 of the 395 false positives (a reduction of 17.2%; 95% CI 13.6–21.3). The ratio between the number of false positives with integrated 2D and 3D screening with conditional recall (n=254) and with 2D only screening (n=322) was 0.79 (95% CI 0.71–0.87).
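Because three recall counts and two baselines are in play here, the arithmetic behind the 17.2% reduction and the 0.79 ratio is easy to misread. The sketch below is our reading of the published counts, not code from the study:

```python
# Arithmetic behind the conditional-recall estimates (counts from the
# Results text). This makes explicit which denominators produce the
# quoted 17.2% reduction and 0.79 ratio.

fp_both = 181       # false positives recalled at both screen-reads
fp_2d_only = 141    # recalled at the 2D read only
fp_3d_only = 73     # recalled at the integrated 2D+3D read only

fp_study = fp_both + fp_2d_only + fp_3d_only    # 395, study protocol (either read)
fp_2d_screening = fp_both + fp_2d_only          # 322, 2D-only screening
fp_conditional = fp_both + fp_3d_only           # 254, conditional recall

prevented = fp_2d_screening - fp_conditional    # 68 false positives prevented
print(f"{prevented / fp_study:.3f}")            # 0.172 -> 17.2% of the 395
print(f"{fp_conditional / fp_2d_screening:.2f}")  # 0.79 -> the quoted ratio
```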

Discussion

Our study showed that integrated 2D and 3D mammography screening significantly increases detection of breast cancer compared with conventional mammography screening. There was consistent evidence of an incremental improvement in detection from integrated 2D and 3D mammography across age-group and breast-density strata, although the analysis by breast density was limited by the low number of women with high-density breasts.

One should note that we investigated comparative cancer detection, and not absolute screening sensitivity. Under the study screen-reading protocol for integrated 2D and 3D mammography, only 1% of screens resulted in false-positive recalls attributable to the integrated 2D and 3D screen-reading alone (table 6). However, significantly more false positives resulted from 2D only mammography than from integrated 2D and 3D mammography, both overall and in the stratified analyses. Application of a conditional recall rule would have resulted in a false-positive rate of 3.5% instead of the actual false-positive rate of 5.5%. The estimated false-positive recall ratio of 0.79 for integrated 2D and 3D screening with conditional recall, compared with 2D only screening, suggests that integrated 2D and 3D screening could reduce false recalls by roughly a fifth. Had such a condition been adopted, none of the cancers detected in the study would have been missed, because no cancers were detected by 2D mammography only; this result might, however, reflect our design, which allowed an independent read for 2D only mammography whereas the integrated 2D and 3D read was an interpretation of a combination of 2D and 3D imaging. We do not recommend that such a conditional recall rule be used in breast-cancer screening until our findings are replicated in other mammography screening studies—STORM involved double-reading by experienced breast radiologists, and our results might not apply to other screening settings. Using a test set of 130 mammograms, Wallis and colleagues7 reported that adding tomosynthesis to 2D mammography increased the accuracy of inexperienced readers (but not of experienced readers); the use of experienced radiologists in STORM could therefore have led us to underestimate the effect of integrated 2D and 3D screen-reading.

No other population screening trials of integrated 2D and 3D mammography have reported final results (panel); however, an interim analysis of the Oslo trial,17 a large population-screening study, has shown that integrated 2D and 3D mammography substantially increases detection of breast cancer. The Oslo study investigators screened women with both 2D and 3D mammography, but randomised the reading strategies (with vs without 3D mammograms) and adjusted for the different screen-readers,17 whereas we used sequential screen-reading to keep the same reader for each examination. Our estimates for comparative cancer detection and for cancer detection rates are consistent with those of the interim analysis of the Oslo study.17 The recall methods differed between the Oslo study (which used an arbitration meeting to decide recall) and the STORM study (we recalled based on a decision by either screen-reader), yet both studies show that 3D mammography reduces false-positive recalls when added to standard mammography.

An editorial in The Lancet18 might indeed signal the closing of a chapter of debate about the benefits and harms of screening. We hope that our work might be the beginning of a new chapter for mammography screening: our findings should encourage new assessments of screening using 2D and 3D mammography, which should factor in several issues related to our study. First, we compared standard 2D mammography with integrated 2D and 3D mammography; the 3D mammograms were not interpreted independently of the 2D mammograms, so 3D mammography only (without the 2D images) might not provide the same results. Our experience with breast tomosynthesis and a review6 of 3D mammography underscore the importance of 2D images in integrated 2D and 3D screen-reading: the 2D images form the basis of the radiologist's ability to integrate the information from 3D images with that from 2D images. Second, although most screening in STORM was incident screening, the substantial increase in cancer detection rate with integrated 2D and 3D mammography results from the enhanced sensitivity of integrated 2D and 3D screening and is probably also a result of a prevalence effect (i.e., the effect of a first screening round with integrated 2D and 3D mammography). We did not assess the effect of repeat (incident) screening with integrated 2D and 3D mammography on cancer detection; it might have a smaller effect on cancer detection rates than what we report. Third, STORM was not designed to measure biological differences between the cancers detected only at integrated 2D and 3D screening and those detected at both screen-reading phases. Descriptive analyses suggest that, generally, breast cancers detected only at integrated 2D and 3D screening had features (e.g., histology, pathological tumour size, node status) similar to those detected at both screen-reading phases. Thus, some of the cancers detected only at 2D and 3D screening might represent early detection (and would be expected to confer screening benefit), whereas some might represent over-detection and a harm from screening, as for conventional screening mammography.1,19 The absence of consensus about over-diagnosis in breast-cancer screening should not detract from the importance of our study findings for applied screening research and screening practice; however, our trial was not done to assess the extent to which integrated 2D and 3D mammography might contribute to over-diagnosis.

The average dose of glandular radiation from the many low-dose projections taken during a single acquisition of 3D mammography is roughly the same as that from 2D mammography.6,20–22 Using integrated 2D and 3D mammography entails both a 2D and a 3D acquisition in one breast compression, which roughly doubles the radiation dose to the breast. Therefore, integrated 2D and 3D mammography for population screening might only be justifiable if improved outcomes were not defined solely in terms of improved detection. For example, it would be valuable to show that the increased detection with integrated 2D and 3D screening leads to reduced interval cancer rates at follow-up. A limitation of our study might be that data for interval cancers were not available; however, because of the paired design we used, future evaluation of interval cancer rates from our study will only apply to breast cancers that were not identified by either 2D only or integrated 2D and 3D screening. We know of two patients from our study who have developed interval cancers (follow-up range 8–16 months). We did not get this information from cancer registries and follow-up was very short, so these data should be interpreted very cautiously, especially because interval cancers would be expected to occur in the second year of the standard 2-year interval between screening rounds. Studies of interval cancer rates after integrated 2D and 3D mammography would need to be randomised controlled trials and have a very large sample size. Additionally, the development of reconstructed 2D images from a 3D mammogram23 provides a timely solution to concerns about radiation by providing both the 2D and 3D images from tomosynthesis, eliminating the need for two acquisitions.

We have shown that integrated 2D and 3D mammography in population breast-cancer screening increases detection of breast cancer and, depending on the recall strategy, can reduce false-positive recalls. Our results do not warrant an immediate change to breast-screening practice; instead, they show the urgent need for randomised controlled trials of integrated 2D and 3D versus 2D mammography, and for further translational research in breast tomosynthesis. We envisage that future screening trials investigating this issue will include measures of breast cancer detection, and will be designed to assess interval cancer rates as a surrogate endpoint for screening efficacy.

Contributors

SC had the idea for and designed the study, and collected and interpreted data. NH advised on study concepts and methods, analysed and interpreted data, searched the published work, and wrote and revised the report. DB and FC were lead radiologists, recruited participants, collected data, and commented on the draft report. MP, SB, PT, PB, PT, CF, and MV did the screen-reading, collected data, and reviewed the draft report. SM collected data and reviewed the draft report. PM planned the statistical analysis, analysed and interpreted data, and wrote and revised the report.

Conflicts of interest

SC, DB, FC, MP, SB, PT, PB, CF, MV, and SM received assistance from Hologic (Hologic USA; Technologic Italy) in the form of tomosynthesis technology and technical support for the duration of the study, and travel support to attend collaborators’ meetings. NH receives research support from a National Breast Cancer Foundation (NBCF Australia) Practitioner Fellowship, and has received travel support from Hologic to attend a collaborators’ meeting. PM receives research support through Australia’s National Health and Medical Research Council programme grant 633003 to the Screening & Test Evaluation Program.

 

References

1. Independent UK Panel on Breast Cancer Screening. The benefits and harms of breast cancer screening: an independent review. Lancet 2012; 380: 1778–86.

2. Glasziou P, Houssami N. The evidence base for breast cancer screening. Prev Med 2011; 53: 100–02.

3. Autier P, Esserman LJ, Flowers CI, Houssami N. Breast cancer screening: the questions answered. Nat Rev Clin Oncol 2012; 9: 599–605.

4. Baker JA, Lo JY. Breast tomosynthesis: state-of-the-art and review of the literature. Acad Radiol 2011; 18: 1298–310.

5. Helvie MA. Digital mammography imaging: breast tomosynthesis and advanced applications. Radiol Clin North Am 2010; 48: 917–29.

6. Houssami N, Skaane P. Overview of the evidence on digital breast tomosynthesis in breast cancer detection. Breast 2013; 22: 101–08.

7. Wallis MG, Moa E, Zanca F, Leifland K, Danielsson M. Two-view and single-view tomosynthesis versus full-field digital mammography: high-resolution X-ray imaging observer study. Radiology 2012; 262: 788–96.

8. Bernardi D, Ciatto S, Pellegrini M, et al. Prospective study of breast tomosynthesis as a triage to assessment in screening. Breast Cancer Res Treat 2012; 133: 267–71.

9. Michell MJ, Iqbal A, Wasan RK, et al. A comparison of the accuracy of film-screen mammography, full-field digital mammography, and digital breast tomosynthesis. Clin Radiol 2012; 67: 976–81.

10. Skaane P, Gullien R, Bjorndal H, et al. Digital breast tomosynthesis (DBT): initial experience in a clinical setting. Acta Radiol 2012; 53: 524–29.

11. Pellegrini M, Bernardi D, Di MS, et al. Analysis of proportional incidence and review of interval cancer cases observed within the mammography screening programme in Trento province, Italy. Radiol Med 2011; 116: 1217–25.

12. Caumo F, Vecchiato F, Pellegrini M, Vettorazzi M, Ciatto S, Montemezzi S. Analysis of interval cancers observed in an Italian mammography screening programme (2000–2006). Radiol Med 2009; 114: 907–14.

13. Bernardi D, Ciatto S, Pellegrini M, et al. Application of breast tomosynthesis in screening: incremental effect on mammography acquisition and reading time. Br J Radiol 2012; 85: e1174–78.

14. American College of Radiology. ACR BI-RADS: breast imaging reporting and data system, Breast Imaging Atlas. Reston: American College of Radiology, 2003.

15. Lachenbruch PA. On the sample size for studies based on McNemar's test. Stat Med 1992; 11: 1521–25.

16. Bonett DG, Price RM. Confidence intervals for a ratio of binomial proportions based on paired data. Stat Med 2006; 25: 3039–47.

17. Skaane P, Bandos AI, Gullien R, et al. Comparison of digital mammography alone and digital mammography plus tomosynthesis in a population-based screening program. Radiology 2013; published online Jan 3. http://dx.doi.org/10.1148/radiol.12121373.

18. The Lancet. The breast cancer screening debate: closing a chapter? Lancet 2012; 380: 1714.

19. Biesheuvel C, Barratt A, Howard K, Houssami N, Irwig L. Effects of study methods and biases on estimates of invasive breast cancer overdetection with mammography screening: a systematic review. Lancet Oncol 2007; 8: 1129–38.

20. Tagliafico A, Astengo D, Cavagnetto F, et al. One-to-one comparison between digital spot compression view and digital breast tomosynthesis. Eur Radiol 2012; 22: 539–44.

21. Tingberg A, Fornvik D, Mattsson S, Svahn T, Timberg P, Zackrisson S. Breast cancer screening with tomosynthesis—initial experiences. Radiat Prot Dosimetry 2011; 147: 180–83.

22. Feng SS, Sechopoulos I. Clinical digital breast tomosynthesis system: dosimetric characterization. Radiology 2012; 263: 35–42.

23. Gur D, Zuley ML, Anello MI, et al. Dose reduction in digital breast tomosynthesis (DBT) screening using synthetically reconstructed projection images: an observer performance study. Acad Radiol 2012; 19: 166–71.

A very good and down-to-earth comment on this article was made by Jules H Sumkin, who disclosed that he is an unpaid member of the scientific advisory board (SAB) of Hologic Inc and has a PI research agreement between the University of Pittsburgh and Hologic Inc.

The results of the study by Stefano Ciatto and colleagues1 are consistent with recently published prospective,2,3 retrospective,4 and observational5 reports on the same topic. The study1 had limitations, including that the same radiologist interpreted screens sequentially on the same day, without counterbalancing which examination was read first. Also, the false-negative findings for integrated 2D and 3D mammography, and therefore the absolute benefit of the procedure, could not be adequately assessed: cases recalled by 2D mammography alone (141 cases) did not yield a single additional cancer, whereas recalls from integrated 2D and 3D mammography alone (73 cases) resulted in the detection of 20 additional cancers. Nevertheless, the results are in strong agreement with other studies reporting substantial performance improvements when screening is done with integrated 2D and 3D mammography.

I disagree with the conclusion of the study with regard to the urgent need for randomised clinical trials of integrated 2D and 3D versus 2D mammography. First, to assess differences in mortality as a result of an imaging-based diagnostic method, a randomised trial will require several repeated screens by the same method in each study group, and the strong results from all studies to date will probably result in substantial crossover and self-selection biases over time. Second, because of the high survival rate (or low mortality rate) of breast cancer, the study will require long follow-up times of at least 10 years. In a rapidly changing environment in terms of improvements in screening technologies and therapeutic interventions, the avoidance of biases is likely to be very difficult, if not impossible. Using the number of interval cancers and possible shifts in stage at detection, while appropriately accounting for confounders, would be an almost equally daunting task. Third, the imaging detection of cancer is only the first step in many management decisions and interventions that can affect outcome. The appropriate control of biases related to patient management is highly unlikely. These arguments, in addition to the existing reports showing substantial improvements in cancer detection, particularly of invasive cancers, with a simultaneous reduction in recall rates, support the argument that a randomised trial is neither necessary nor warranted. The current technology might be obsolete by the time the results of an appropriately done and analysed randomised trial are made public.

To better link the information given by "scientific" papers to the context of patients' daily reality, I suggest spending some time reviewing a few of the videos at the links below:

  1. The following group of videos is featured on a website by Siemens. Nevertheless, the presenting radiologists are leading practitioners who affect thousands of lives every year – What the experts say about tomosynthesis – click on ECR 2013.
  2. Breast Tomosynthesis in Practice – part of a commercial ad by Washington Radiology Associates, featured on the website of Diagnostic Imaging. This practice, too, affects thousands of lives in the Washington area every year.

The pivotal questions yet to be answered are:

  1. What should be done to translate increased sensitivity and earlier detection into decreased mortality?

  2. What is the price of this increased sensitivity in terms of quality of life and health-care costs, and is it worthwhile to pay?

An article that positively summarises the experience of introducing tomosynthesis into routine screening practice was recently published in AJR:

Implementation of Breast Tomosynthesis in a Routine Screening Practice: An Observational Study

Stephen L. Rose1, Andra L. Tidwell1, Louis J. Bujnoch1, Anne C. Kushwaha1, Amy S. Nordmann1 and Russell Sexton, Jr.1

Affiliation: 1 All authors: TOPS Comprehensive Breast Center, 17030 Red Oak Dr, Houston, TX 77090.

Citation: American Journal of Roentgenology. 2013;200:1401-1408

 

ABSTRACT:

OBJECTIVE. Digital mammography combined with tomosynthesis is gaining clinical acceptance, but data are limited that show its impact in the clinical environment. We assessed the changes in performance measures, if any, after the introduction of tomosynthesis systems into our clinical practice.

MATERIALS AND METHODS. In this observational study, we used verified practice- and outcome-related databases to compute and compare recall rates, biopsy rates, cancer detection rates, and positive predictive values for six radiologists who interpreted screening mammography studies without (n = 13,856) and with (n = 9499) the use of tomosynthesis. We performed two-sided analyses (significance declared at p < 0.05) accounting for reader variability, participant age, and whether the examination in question was a baseline.

RESULTS. For the group as a whole, the introduction and routine use of tomosynthesis resulted in significant observed changes in recall rates from 8.7% to 5.5% (p < 0.001), nonsignificant changes in biopsy rates from 15.2 to 13.5 per 1000 screenings (p = 0.59), and cancer detection rates from 4.0 to 5.4 per 1000 screenings (p = 0.18). The invasive cancer detection rate increased from 2.8 to 4.3 per 1000 screening examinations (p = 0.07). The positive predictive value for recalls increased from 4.7% to 10.1% (p < 0.001).
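As a rough consistency check (not part of the AJR analysis), the positive predictive value for recalls is approximately the cancer detection rate divided by the recall rate. The sketch below uses the rates quoted in the abstract; small discrepancies from the reported 4.7% and 10.1% reflect rounding of the published figures.

```python
# PPV for recalls ~ cancer detection rate / recall rate, using the
# rates quoted in the AJR abstract. Rounding differences are expected.

for label, cdr_per_1000, recall_rate in [
        ("without tomosynthesis", 4.0, 0.087),
        ("with tomosynthesis",    5.4, 0.055)]:
    ppv = (cdr_per_1000 / 1000) / recall_rate
    print(f"{label}: PPV ~ {100 * ppv:.1f}%")
# ~4.6% and ~9.8%, in line with the reported 4.7% and 10.1%
```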

CONCLUSION. The introduction of breast tomosynthesis into our practice was associated with a significant reduction in recall rates and a simultaneous increase in breast cancer detection rates.

Here are the facts in tables and pictures from this article

[Table 1, AJR]

[Tables 2–3, AJR]

[Table 4, AJR]

[Figures 1–2, AJR]

Other articles related to the management of breast cancer were published on this Open Access Online Scientific Journal:

Automated Breast Ultrasound System (‘ABUS’) for full breast scanning: The beginning of structuring a solution for an acute need!

Introducing smart-imaging into radiologists’ daily practice.

Not applying evidence-based medicine drives up the costs of screening for breast-cancer in the USA.

New Imaging device bears a promise for better quality control of breast-cancer lumpectomies – considering the cost impact

Harnessing Personalized Medicine for Cancer Management, Prospects of Prevention and Cure: Opinions of Cancer Scientific Leaders @ http://pharmaceuticalintelligence.com

Predicting Tumor Response, Progression, and Time to Recurrence

“The Molecular pathology of Breast Cancer Progression”

Personalized medicine gearing up to tackle cancer

What could transform an underdog into a winner?

Mechanism involved in Breast Cancer Cell Growth: Function in Early Detection & Treatment

Nanotech Therapy for Breast Cancer

A Strategy to Handle the Most Aggressive Breast Cancer: Triple-negative Tumors

Breakthrough Technique Images Breast Tumors in 3-D With Great Clarity, Reduced Radiation

Closing the Mammography gap

Imaging: seeing or imagining? (Part 1)

Imaging: seeing or imagining? (Part 2)

Read Full Post »


 Reporter: Aviva Lev-Ari, PhD, RN

Ernst & Young ("E&Y") has published its fifth annual report on the state of the medical technology industry.

Below are a link to this report and a link to an excerpt from the report displaying charts of the industry's performance.

Definition of the Global Medical Technology Industry

In this report, medical technology (medtech) companies are defined as companies that primarily design and manufacture medical technology equipment and supplies and are headquartered within the United States or Europe. For the purposes of this report, we have placed Israel's data and analysis within the European market, and any grouping of the US and Europe has been referred to as "global."

This wide-ranging definition includes medical device, diagnostic, drug delivery and analytical/life science tool companies, but excludes distributors and service providers such as contract research organizations or contract manufacturing organizations.

By any measure, medical technology is an extraordinarily diverse industry. While developing a consistent and meaningful classification system is important, it is anything but straightforward. Existing taxonomies sometimes segregate companies into scores of thinly populated categories, making it difficult to identify and analyze industry trends.

Furthermore, they tend to combine categories based on products (such as imaging or tools) with those based on diseases targeted by those products (such as cardiovascular or oncology), which makes it harder to analyze trends consistently across either dimension. To address some of these challenges, we have categorized medtech companies across both dimensions — products and diseases targeted.

All publicly traded medtech companies were classified as belonging to one of five broad product groups:

Imaging:

companies developing products used to diagnose or monitor conditions via imaging technologies, including products such as MRI machines, computed tomography (CT) and X-ray imaging and optical biopsy systems

Non-imaging diagnostics:

companies developing products used to diagnose or monitor conditions via non-imaging technologies, which can include patient monitoring and in vitro testing equipment

Research and other equipment:

companies developing equipment used for research or other purposes, including analytical and life science tools, specialized laboratory equipment and furniture

Therapeutic devices:

companies developing products used to treat patients, including therapeutic medical devices, tools or drug delivery/infusion technologies

Other:

companies developing products that do not fit into any of the above categories were classified in this segment

In addition to product groups, this report tracks conglomerate companies that derive a significant part of their revenues from medical technologies. While a conglomerate medtech division’s technology could technically fall into one of the product groups listed above (e.g., General Electric into “imaging” and Allergan into “therapeutic devices”), all conglomerate data is kept separate from that of the nonconglomerates.

This is because, while conglomerates report revenues for their medtech divisions, they typically do not report other financial results for those divisions, such as research and development expense or net income.

Conglomerate companies:

United States

3M Health Care

Abbott: Medical Products

Agilent Technologies: Life Sciences and Chemical Analysis

Allergan: Medical Devices

Baxter International: Medical Products

Corning: Life Sciences

Danaher: Life Sciences & Diagnostics

Endo Health Solutions: AMS and HealthTronics

GE Healthcare

Hospira: Devices

IDEX: Health & Science Technologies

Johnson & Johnson: Medical Devices & Diagnostics

Kimberly-Clark: Health Care

Pall: Life Sciences

Europe

Agfa HealthCare

Bayer HealthCare: Medical Care

Beiersdorf: Hansaplast

Carl Zeiss Meditec

Dräger: Medical

Eckert & Ziegler: Medizintechnik

Fresenius Kabi

Halma: Health and Analysis

Jenoptik: Medical

Novartis: Alcon

Philips Healthcare

Quantel Medical

Roche Diagnostics

Sanofi: Genzyme Biosurgery

SCA Svenska Cellulosa Aktiebolaget: Personal Care

Sempermed

Siemens Healthcare

Smiths Medical

The big picture

Despite lingering financial and regulatory uncertainties, US and European publicly held medtech companies delivered another strong performance in 2011. For both conglomerates and pure-play companies, revenue growth in 2011 outpaced 2010 growth rates. Net income increased by 14% — the third consecutive year of double-digit growth, and certainly impressive in today's challenging economic climate.

So far, the medical technology industry appears to be weathering a period of slower global economic growth. However, for an industry that was accustomed to double-digit revenue growth, considerable margins and a predictable sales and regulatory environment, the long-term future may still be turbulent. The industry's financial performance will likely continue to be challenged by low economic growth in developed markets, the prospect of austerity measures in many countries, a looming Eurozone debt crisis and an imminent 2.3% medical device tax in the US. And while the US Supreme Court's upholding of the Affordable Care Act has removed some of the uncertainty in the US, the regulatory environment continues to grow ever more complex around the globe.

As payers tackle runaway health care costs, medtech will face rising pricing pressures and expanded use of comparative effectiveness — making organic growth in western markets more challenging. Efforts to heighten disease management and preventive care, and other efforts to drive efficiency within the health care system, may impact both product utilization and profitability. The cost of not adapting the traditional medtech business model to stay ahead of these trends could be disastrous.

Medical technology at a glance, 2010–2011
(US$b; data shown for US and European public companies, pure-play except where indicated)

Public company data                                      2011       2010    % change
Revenues                                               $331.7     $313.9          6%
  Conglomerates                                        $142.3     $132.8          7%
  Pure-play companies                                  $189.4     $181.0          5%
R&D expense                                             $12.6      $12.1          4%
SG&A expense                                            $60.3      $57.4          5%
Net income                                              $19.9      $17.4         14%
Cash, cash equivalents and short-term investments       $39.2      $39.4         -1%
Market capitalization                                  $436.1     $465.9         -6%
Number of employees                                   725,900    702,200          3%
Number of public companies                                411        423         -3%

Source: Ernst & Young and company financial statement data.
Numbers may appear to be inconsistent due to rounding.
Market capitalization data are shown for 30 June 2011 and 30 June 2012.

Medtech companies — long known for innovation, reinvention and risk-taking in product development — will need to apply the same principles to business model development. These trends and implications are discussed more fully in this year’s point of view article.


Since we first published Pulse of the industry back in 2008 (using 2007 figures), a number of medtech firms have seen their revenues grow significantly. It is notable that 6 of the 10 fastest-growing companies over the period 2007–11 — led by spinal device company NuVasive and Intuitive Surgical (maker of the da Vinci Surgical System) — expanded their top lines mostly through organic growth and without the assistance of sizeable mergers or acquisitions. Corning Life Sciences was the only conglomerate to make the top 10 list.

Selected fast-growing US medtechs by revenue growth, 2007–2011

(US$m)

Companies                   2007      2011    CAGR
NuVasive                    $154      $541     37%
Alere                       $767    $2,387     33%
Life Technologies         $1,282    $3,776     31%
Intuitive Surgical          $601    $1,757     31%
Illumina                    $367    $1,056     30%
Hologic                     $738    $1,789     25%
Corning Life Sciences       $305      $595     18%
Thoratec                    $235      $423     16%
Greatbatch                  $319      $569     16%
ResMed                      $716    $1,243     15%

Source: Ernst & Young and company financial statement data.

Companies in italics have made significant acquisitions between 2007 and 2011.

CAGR = compound annual growth rate.
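The CAGR column can be recomputed from the 2007 and 2011 revenue figures with the standard compounding formula; a minimal sketch, assuming four compounding years over the 2007–2011 span (values from the table above):

```python
# CAGR as used in these tables: (end / start)^(1 / years) - 1,
# with revenues in US$m and four years between 2007 and 2011.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end / start) ** (1 / years) - 1

print(f"NuVasive:           {cagr(154, 541, 4):.0%}")   # ~37%, matching the table
print(f"Intuitive Surgical: {cagr(601, 1757, 4):.0%}")  # ~31%, matching the table
```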

Selected fast-growing European medtechs by revenue growth, 2007–2011

(US$m)

Companies                        Location       2007      2011    CAGR
Fresenius Kabi                   Germany      $2,782    $5,515     19%
Sonova Holding                   Switzerland    $926    $1,827     19%
ELEKTA                           Sweden         $674    $1,217     16%
Qiagen                           Netherlands    $650    $1,170     16%
Stratec Biomedical Systems       Germany         $94      $165     15%
Sempermed                        Austria        $300      $517     15%
Syneron Medical                  Israel         $141      $228     13%
Given Imaging                    Israel         $113      $178     12%
William Demant Holding           Denmark      $1,010    $1,501     10%
Essilor International            France       $3,986    $5,829     10%

Source: Ernst & Young and company financial statement data.
Companies in italics have made significant acquisitions between 2007 and 2011.
CAGR = compound annual growth rate.

While the fastest-growing companies in the US were fueled largely by organic growth, the four fastest-growing firms in Europe were aided by significant acquisitions. Germany’s Fresenius Kabi holds the distinction of having the biggest expansion in both real dollar and percentage terms on this list.

The company’s growth was in large part fueled by the addition of APP Pharmaceuticals, which it acquired for US$3.7 billion in 2008. Of the six commercial leaders on this list, five had made sizeable purchases, while the smaller “other” companies grew mostly through organic means.

Future Growth

Fueling future growth: Mergers & acquisitions

The big picture

Merger and acquisition (M&A) activity among US and European medical technology companies remained vibrant in the year ended June 30, 2012. While 2011–12’s total of US$35.0 billion was well below the levels seen over the last two years, those two years were driven by megadeals done by Novartis (which paid US$41.2 billion to Nestlé for the remaining 75% of Alcon it didn’t already control) and Johnson & Johnson (which paid US$19.7 billion for Synthes). On a normalized basis (after removing the impact of the aforementioned megadeals), 2011–12’s total deal value was more in line with previous years — 25% below the prior year and 16% above the year before that.

Although no megadeals were consummated in 2011–12, there were eight transactions valued at more than US$1 billion, versus 12 the year before. The year's largest deal was between private equity firm Apax Partners, two Canadian pension funds and Texas-based wound care company Kinetic Concepts Inc. (KCI). The US$6.3 billion Apax/KCI deal was particularly notable, as it represented one of the largest leveraged buyouts — across all industries — since the onset of the financial crisis in 2008. Two other private equity firms were also involved in multibillion-dollar M&As: Cinven sold off Swedish diagnostics company Phadia to Thermo Fisher Scientific for US$3.5 billion, and TPG Capital acquired in vitro diagnostics maker Immucor for nearly US$2 billion.

SOURCES:

Pulse of the Industry – Ernst & Young

http://www.ey.com/Publication/vwLUAssets/Pulse_medical_technology_report_2012/$FILE/Pulse_medical_technology_report_2012.pdf

Pulse of the Industry: Medical Technology Report 2012 – Industry performance

http://www.ey.com/GL/en/Industries/Life-Sciences/Pulse–medical-technology-report-2012—Mergers-and-acquisitions—medtechdata 

Read Full Post »