Posts Tagged ‘Medical imaging’

Science Policy Forum: Should we trust healthcare explanations from AI predictive systems?

Some in industry voice their concerns

Curator: Stephen J. Williams, PhD

Post on AI healthcare and explainable AI

In a Policy Forum article in Science, “Beware explanations from AI in health care”, Boris Babic, Sara Gerke, Theodoros Evgeniou, and Glenn Cohen discuss the caveats of relying on explainable versus interpretable artificial intelligence (AI) and machine learning (ML) algorithms to make complex health decisions.  The FDA has already approved some AI/ML algorithms for the analysis of medical images for diagnostic purposes.  These approvals, and the issues arising from multi-center trials, have been discussed in prior posts on this site.  The authors of this perspective article argue that the choice between explainable and interpretable algorithms may have far-reaching consequences in health care.

Summary

Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for diagnosis and treatment of a variety of medical conditions (1). However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of its users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion (2). Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users’ skepticism, lack of trust, and slow uptake (3, 4). As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions (5). Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.

Source: https://science.sciencemag.org/content/373/6552/284

Types of AI/ML Algorithms: Explainable and Interpretable

  1. Interpretable AI: A typical AI/ML task requires constructing an algorithm that takes vector inputs and generates an output related to an outcome (like diagnosing a cardiac event from an image).  Generally the algorithm has to be trained on past data with known outcomes.  When an algorithm is called interpretable, this means that it uses a transparent or “white box” function that is easily understandable, for example a linear function whose parameters are simple rather than complex.  Although interpretable algorithms may not be as accurate as more complex black-box AI/ML models, they are open, transparent, and easily understood by their operators.
  2. Explainable AI/ML: Here the underlying model depends upon many complex parameters.  A “black box” model (such as a deep neural network) produces a first round of predictions, and a second, interpretable algorithm is then fitted to better approximate the outputs of the first.  This second algorithm is trained not on the original data but on the black box’s predictions.  The black-box approach is more accurate, or deemed more reliable, in prediction, but it is very complex and not easily understandable; the explanation is only an after-the-fact approximation.  Many medical devices that use an AI/ML algorithm are of this type; deep learning and neural networks are examples.  (A minimal sketch of this two-model pattern follows the next paragraph.)

Both methodologies aim to deal with the problem of opacity: predictions that come from a black box can undermine trust in the AI.
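To make the two-model pattern concrete, here is a minimal sketch, assuming scikit-learn and a synthetic tabular dataset (the models, feature count, and names are illustrative, not taken from the article): an accurate black-box model produces the predictions, and a shallow decision tree is then fitted to those predictions to serve as the after-the-fact explanation.

```python
# Minimal sketch of post-hoc "explainable AI": a black-box model makes the
# predictions, and an interpretable surrogate is trained to approximate it.
# Assumes scikit-learn; data, models, and sizes are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for training data (e.g., features derived from images).
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)

# 1. The "black box": accurate but hard to inspect.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2. The surrogate: an interpretable model trained on the black box's
#    predictions rather than the original labels, so it approximates the
#    first model instead of the underlying data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The shallow tree can be read as an (approximate) explanation of the black box.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```

Note that the surrogate only approximates the black box; how faithfully it does so limits how much the resulting explanation can be trusted, which is essentially the caveat the Science authors raise.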

For a deeper understanding of these two types of algorithms see here:

https://www.kdnuggets.com/2018/12/machine-learning-explainability-interpretability-ai.html

or https://www.bmc.com/blogs/machine-learning-interpretability-vs-explainability/

(a longer read but great explanation)

From the above blog post by Jonathan Johnson:

  • How interpretability is different from explainability
  • Why a model might need to be interpretable and/or explainable
  • Who is working to solve the black box problem—and how

What is interpretability?

Does Chipotle make your stomach hurt? Does loud noise accelerate hearing loss? Are women less aggressive than men? If a machine learning model can create a definition around these relationships, it is interpretable.

All models must start with a hypothesis. Human curiosity propels a being to intuit that one thing relates to another. “Hmm…multiple black people shot by policemen…seemingly out of proportion to other races…something might be systemic?” Explore.

People create internal models to interpret their surroundings. In the field of machine learning, these models can be tested and verified as either accurate or inaccurate representations of the world.

Interpretability means that the cause and effect can be determined.
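By contrast with the surrogate pattern sketched earlier, an interpretable (“white box”) model exposes its reasoning directly. A minimal illustration, with entirely hypothetical data and variable names:

```python
# Minimal sketch of an interpretable model: a linear function whose fitted
# parameters can be read directly. The data and names are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
years_smoking = rng.uniform(0, 40, 500)
bmi = rng.normal(27, 4, 500)
# Hypothetical outcome: a risk score with a known structure plus noise.
risk = 0.8 * years_smoking + 0.5 * bmi + rng.normal(0, 2, 500)

model = LinearRegression().fit(np.column_stack([years_smoking, bmi]), risk)
# Each coefficient states a relationship directly, e.g. "one additional
# year of smoking raises the predicted risk by about 0.8 units".
print(dict(zip(["years_smoking", "bmi"], model.coef_.round(2))))
```

Strictly speaking the coefficients describe associations rather than proven causes, but the point stands: nothing about the model's behavior is hidden from its operator.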

What is explainability?

ML models are often called black-box models because they allow a pre-set number of empty parameters, or nodes, to be assigned values by the machine learning algorithm. Specifically, the back-propagation step is responsible for updating the weights based on the error function.

To predict when a person might die—the fun gamble one might play when calculating a life insurance premium, and the strange bet a person makes against their own life when purchasing a life insurance package—a model will take in its inputs, and output a percent chance the given person has at living to age 80.

Below is an image of a neural network. The inputs are the yellow; the outputs are the orange. Like a rubric to an overall grade, explainability shows how significantly each of the parameters (all the blue nodes) contributes to the final decision.

In this neural network, the hidden layers (the two columns of blue dots) would be the black box.

For example, we have these data inputs:

  • Age
  • BMI score
  • Number of years spent smoking
  • Career category

If this model had high explainability, we’d be able to say, for instance:

  • The career category is about 40% important
  • The number of years spent smoking weighs in at 35% important
  • The age is 15% important
  • The BMI score is 10% important
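Percentages like these are not read directly off a neural network; they have to be estimated. Below is a minimal sketch using scikit-learn's permutation importance, with the four inputs above simulated, so the printed numbers are illustrative rather than the ones quoted:

```python
# Minimal sketch: estimating per-feature importance for a fitted model by
# permuting one feature at a time and measuring the drop in accuracy.
# The data are simulated stand-ins for the four inputs listed above.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.uniform(20, 80, n),   # age
    rng.normal(27, 5, n),     # BMI score
    rng.uniform(0, 40, n),    # number of years spent smoking
    rng.integers(0, 5, n),    # career category (integer-coded)
])
# Hypothetical label: whether the person lives to age 80.
y = ((80 - X[:, 0]) * 0.02 - 0.05 * X[:, 2] + rng.normal(0, 1, n) > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000,
                      random_state=1).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)

# Normalize the mean accuracy drops to percentages, as in the list above.
importances = np.clip(result.importances_mean, 0, None)
for name, share in zip(["age", "BMI", "years smoking", "career"],
                       importances / importances.sum()):
    print(f"{name}: {share:.0%}")
```

Permutation importance is only one of several ways to produce such a breakdown; SHAP values and saliency methods are common alternatives.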

Explainability: important, not always necessary

Explainability becomes significant in the field of machine learning because, often, it is not apparent. Explainability is often unnecessary. A machine learning engineer can build a model without ever having considered the model’s explainability. It is an extra step in the building process—like wearing a seat belt while driving a car. It is unnecessary for the car to perform, but offers insurance when things crash.

The benefit a deep neural net offers engineers is that it creates a black box of parameters (something like fake additional data points) against which a model can base its decisions. These fake data points go unknown to the engineer. The black box, or hidden layers, allows a model to make associations among the given data points to predict better results. For example, if we are deciding how long someone might have to live and we use career data as an input, it is possible the model sorts the careers into high- and low-risk options all on its own.

Perhaps we inspect a node and see it relates oil rig workers, underwater welders, and boat cooks to each other. It is possible the neural net makes connections between the lifespans of these individuals and puts a placeholder in the deep net to associate them. If we were to examine the individual nodes in the black box, we could note that this clustering interprets water-based careers as high-risk jobs.

In the previous chart, each one of the lines connecting from the yellow dot to the blue dot can represent a signal, weighing the importance of that node in determining the overall score of the output.

  • If that signal is high, that node is significant to the model’s overall performance.
  • If that signal is low, the node is insignificant.

With this understanding, we can define explainability as:

Knowledge of what one node represents and how important it is to the model’s performance.
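One crude but direct way to probe that definition is to inspect the trained weights themselves: the magnitude of a node's outgoing connections is a rough proxy for how much it can sway the output. A minimal sketch, assuming a scikit-learn MLP (a simplification, not a rigorous importance measure):

```python
# Crude sketch: rank hidden nodes by the magnitude of their outgoing
# weights, as a rough proxy for their importance to the final decision.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=2)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000,
                    random_state=2).fit(X, y)

# net.coefs_[1] holds the weights from the 8 hidden nodes to the output.
# A strong (high-magnitude) signal marks a node as significant, a weak one
# as insignificant -- mirroring the two bullet points above.
outgoing = np.abs(net.coefs_[1]).sum(axis=1)
for rank, node in enumerate(np.argsort(outgoing)[::-1], start=1):
    print(f"rank {rank}: hidden node {node}, total |weight| = {outgoing[node]:.2f}")
```

Understanding what a node represents (the first half of the definition) is harder, and usually requires inspecting which inputs activate it, as in the career-clustering example above.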

So how does choice of these two different algorithms make a difference with respect to health care and medical decision making?

The authors argue: 

“Regulators like the FDA should focus on those aspects of the AI/ML system that directly bear on its safety and effectiveness – in particular, how does it perform in the hands of its intended users?”

The authors suggest:

  • Enhanced, more involved clinical trials
  • Providing individuals added flexibility when interacting with a model, for example by inputting their own test data
  • More interaction between users and model developers
  • Determining which situations call for interpretable versus explainable AI (for instance, predicting which patients will require dialysis after kidney damage)

Other articles on AI/ML in medicine and healthcare on this Open Access Journal include

Applying AI to Improve Interpretation of Medical Imaging

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence #AI: Realizing Precision Medicine One Patient at a Time

LIVE Day Three – World Medical Innovation Forum ARTIFICIAL INTELLIGENCE, Boston, MA USA, Monday, April 10, 2019

Cardiac MRI Imaging Breakthrough: The First AI-assisted Cardiac MRI Scan Solution, HeartVista Receives FDA 510(k) Clearance for One Click™ Cardiac MRI Package

 


These twelve artificial intelligence innovations are expected to start impacting clinical care by the end of the decade.

Reporter: Gail S. Thornton, M.A.

This article is excerpted from Health IT Analytics, April 11, 2019.

 By Jennifer Bresnick

3.4.14   These twelve artificial intelligence innovations are expected to start impacting clinical care by the end of the decade, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 3: AI in Medicine

April 11, 2019 – There’s no question that artificial intelligence is moving quickly in the healthcare industry.  Even just a few months ago, AI was still a dream for the next generation: something that would start to enter regular care delivery in a couple of decades – maybe ten or fifteen years for the most advanced health systems.

Even Partners HealthCare, the Boston-based giant on the very cutting edge of research and reform, set a ten-year timeframe for artificial intelligence during its 2018 World Medical Innovation Forum, identifying a dozen AI technologies that had the potential to revolutionize patient care within the decade.

But over the past twelve months, research has progressed so rapidly that Partners has blown up that timeline. 

Instead of viewing AI as something still lingering on the distant horizon, this year’s Disruptive Dozen panel was tasked with assessing which AI innovations will be ready to fundamentally alter the delivery of care by 2020 – now less than a year away.

Sixty members of the Partners faculty participated in nominating and narrowing down the tools they think will have an almost immediate benefit for patients and providers, explained Erica Shenoy, MD, PhD, an infectious disease specialist at Massachusetts General Hospital (MGH).

“These are innovations that have a strong potential to make significant advancement in the field, and they are also technologies that are pretty close to making it to market,” she said.

The results include everything from mental healthcare and clinical decision support to coding and communication, offering patients and their providers a more efficient, effective, and cost-conscious ecosystem for improving long-term outcomes.

In order from least to greatest potential impact, here are the twelve artificial intelligence innovations poised to become integral components of the next decade’s data-driven care delivery system.

NARROWING THE GAPS IN MENTAL HEALTHCARE

Nearly twenty percent of US patients struggle with a mental health disorder, yet treatment is often difficult to access and expensive to use regularly.  Reducing barriers to access for mental and behavioral healthcare, especially during the opioid abuse crisis, requires a new approach to connecting patients with services.

AI-driven applications and therapy programs will be a significant part of the answer.

“The promise and potential for digital behavioral solutions and apps is enormous to address the gaps in mental healthcare in the US and across the world,” said David Ahern, PhD, a clinical psychologist at Brigham & Women’s Hospital (BWH). 

Smartphone-based cognitive behavioral therapy and integrated group therapy are showing promise for treating conditions such as depression, eating disorders, and substance abuse.

While patients and providers need to be wary of commercially available applications that have not been rigorously validated and tested, more and more researchers are developing AI-based tools that have the backing of randomized clinical trials and are showing good results.

A panel of experts from Partners HealthCare presents the Disruptive Dozen at WMIF19.

Source: Partners HealthCare

STREAMLINING WORKFLOWS WITH VOICE-FIRST TECHNOLOGY

Natural language processing is already a routine part of many behind-the-scenes clinical workflows, but voice-first tools are expected to make their way into the patient-provider encounter in a new way. 

Smart speakers in the clinic are prepping to relieve clinicians of their EHR burdens, capturing free-form conversations and translating the content into structured documentation.  Physicians and nurses will be able to collect and retrieve information more quickly while spending more time looking patients in the eye.

Patients may benefit from similar technologies at home as the consumer market for virtual assistants continues to grow.  With companies like Amazon achieving HIPAA compliance for their consumer-facing products, individuals may soon have more robust options for voice-first chronic disease management and patient engagement.

IDENTIFYING INDIVIDUALS AT HIGH RISK OF DOMESTIC VIOLENCE

Underreporting makes it difficult to know just how many people suffer from intimate partner violence (IPV), says Bharti Khurana, MD, an emergency radiologist at BWH.  But the symptoms are often hiding in plain sight for radiologists.

Using artificial intelligence to flag worrisome injury patterns or mismatches between patient-reported histories and the types of fractures present on x-rays can alert providers to when an exploratory conversation is called for.

“As a radiologist, I’m very excited because this will enable me to provide even more value to the patient instead of simply evaluating their injuries.  It’s a powerful tool for clinicians and social workers that will allow them to approach patients with confidence and with less worry about offending the patient or the spouse,” said Khurana.

REVOLUTIONIZING ACUTE STROKE CARE

Every second counts when a patient experiences a stroke.  In far-flung regions of the United States and in the developing world, access to skilled stroke care can take hours, drastically increasing the likelihood of significant long-term disability or death.

Artificial intelligence has the potential to close the gaps in access to high-quality imaging studies that can identify the type of stroke and the location of the clot or bleed.  Research teams are currently working on AI-driven tools that can automate the detection of stroke and support decision-making around the appropriate treatment for the individual’s needs.  

In rural or low-resource care settings, these algorithms can compensate for the lack of a specialist on-site and ensure that every stroke patient has the best possible chance of treatment and recovery.

AI revolutionizing stroke care

Source: Getty Images

REDUCING ADMINISTRATIVE BURDENS FOR PROVIDERS

The costs of healthcare administration are off the charts.  Recent data from the Center for American Progress indicate that providers spend about $282 billion per year on insurance and medical billing, and the burdens are only going to keep getting bigger.

Medical coding and billing is a perfect use case for natural language processing and machine learning.  NLP is well-suited to translating free-text notes into standardized codes, which can move the task off the plates of physicians and reduce the time and effort spent on complying with convoluted regulations.

“The ultimate goal is to help reduce the complexity of the coding and billing process through automation, thereby reducing the number of mistakes – and, in turn, minimizing the need for such intense regulatory oversight,” Partners says.

NLP is already in relatively wide use for this task, and healthcare organizations are expected to continue adopting this strategy as a way to control costs and speed up their billing cycles.

UNLEASHING HEALTH DATA THROUGH INFORMATION EXCHANGE

AI will combine with another game-changing technology, known as FHIR (Fast Healthcare Interoperability Resources), to unlock siloes of health data and support broader access to health information.

Patients, providers, and researchers will all benefit from a more fluid health information exchange environment, especially since artificial intelligence models are extremely data-hungry.

Stakeholders will need to pay close attention to maintaining the privacy and security of data as it moves across disparate systems, but the benefits have the potential to outweigh the risks.

“It completely depends on how everyone in the medical community advocates for, builds, and demands open interfaces and open business models,” said Samuel Aronson, Executive Director of IT at Partners Personalized Medicine.

“If we all row in the same direction, there’s a real possibility that we will see fundamental improvements to the healthcare system in 3 to 5 years.”

OFFERING NEW APPROACHES FOR EYE HEALTH AND DISEASE

Image-heavy disciplines have started to see early benefits from artificial intelligence since computers are particularly adept at analyzing patterns in pixels.  Ophthalmology is one area that could see major changes as AI algorithms become more accurate and more robust.

From glaucoma to diabetic retinopathy, millions of patients experience diseases that can lead to irreversible vision loss every year.  Employing AI for clinical decision support can extend access to eye health services in low-resource areas while giving human providers more accurate tools for catching diseases sooner.

REAL-TIME MONITORING OF BRAIN HEALTH

The brain is still the body’s most mysterious organ, but scientists and clinicians are making swift progress unlocking the secrets of cognitive function and neurological disease.  Artificial intelligence is accelerating discovery by helping providers interpret the incredibly complex data that the brain produces.

From predicting seizures by reading EEG tests to identifying the beginnings of dementia earlier than any human, artificial intelligence is allowing providers to access more detailed, continuous measurements – and helping patients improve their quality of life.

Seizures can happen in patients with other serious illnesses, such as kidney or liver failure, explained Brandon Westover, MD, PhD, executive director of the Clinical Data Animation Center at MGH, but many providers simply don’t know about it.

“Right now, we mostly ignore the brain unless there’s a special need for suspicion,” he said.  “In a year’s time, we’ll be catching a lot more seizures and we’ll be doing it with algorithms that can monitor patients continuously and identify more ambiguous patterns of dysfunction that can damage the brain in a similar manner to seizures.”

AUTOMATING MALARIA DETECTION IN DEVELOPING REGIONS

Malaria is a daily threat for approximately half the world’s population.  Nearly half a million people died from the mosquito-borne disease in 2017, according to the World Health Organization, and the majority of the victims are children under the age of five.

Deep learning tools can automate the process of quantifying malaria parasites in blood samples, a challenging task for providers working without pathologist partners.  One such tool achieved 90 percent accuracy and specificity, putting it on par with pathology experts.

This type of software can be run on a smartphone hooked up to a camera on a microscope, dramatically expanding access to expert-level diagnosis and monitoring.

AI for diagnosing and detecting malaria

Source: Getty Images

AUGMENTING DIAGNOSTICS AND DECISION-MAKING

Artificial intelligence has made especially swift progress in diagnostic specialties, including pathology. AI will continue to speed down the road to maturity in this area, predicts Annette Kim, MD, PhD, associate professor of pathology at BWH and Harvard Medical School.

“Pathology is at the center of diagnosis, and diagnosis underpins a huge percentage of all patient care.  We’re integrating a huge amount of data that funnels through us to come to a diagnosis.  As the number of data points increases, it negatively impacts the time we have to synthesize the information,” she said.

AI can help automate routine, high-volume tasks, prioritize and triage cases to ensure patients are getting speedy access to the right care, and make sure that pathologists don’t miss key information hidden in the enormous volumes of clinical and test data they must comb through every day.

“This is where AI can have a huge impact on practice by allowing us to use our limited time in the most meaningful manner,” Kim stressed.

PREDICTING THE RISK OF SUICIDE AND SELF-HARM

Suicide is the tenth leading cause of death in the United States, claiming 45,000 lives in 2016.  Suicide rates are on the rise due to a number of complex socioeconomic and mental health factors, and identifying patients at the highest risk of self-harm is a difficult and imprecise science.

Natural language processing and other AI methodologies may help providers identify high-risk patients earlier and more reliably.  AI can comb through social media posts, electronic health record notes, and other free-text documents to flag words or concepts associated with the risk of harm.

Researchers also hope to develop AI-driven apps to provide support and therapy to individuals likely to harm themselves, especially teenagers who commit suicide at higher rates than other age groups.

Connecting patients with mental health resources before they reach a time of crisis could save thousands of lives every year.

REIMAGINING THE WORLD OF MEDICAL IMAGING

Radiology is already one of AI’s early beneficiaries, but providers are just at the beginning of what they will be able to accomplish in the next few years as machine learning explodes into the imaging realm.

AI is predicted to bring earlier detection, more accurate assessment of complex images, and less expensive testing for patients across a huge number of clinical areas.

But as leaders in the AI revolution, radiologists also have a significant responsibility to develop and deploy best practices in terms of trustworthiness, workflow, and data protection.

“We certainly feel the onus on the radiology community to make sure we do deliver and translate this into improved care,” said Alexandra Golby, MD, a neurosurgeon and radiologist at BWH and Harvard Medical School.

“Can radiology live up to the expectations?  There are certainly some challenges, including trust and understanding of what the algorithms are delivering.  But we desperately need it, and we want to equalize care across the world.”

Radiologists have been among the first to overcome their trepidation about the role of AI in a changing clinical world, and are eagerly embracing the possibilities of this transformative approach to augmenting human skills.

“All of the imaging societies have opened their doors to the AI adventure,” Golby said.  “The community is very anxious to learn, codevelop, and work with all of the industry partners to turn this technology into truly valuable tools. We’re very optimistic and very excited, and we look forward to learning more about how AI can improve care.”

Source:

https://healthitanalytics.com/news/top-12-artificial-intelligence-innovations-disrupting-healthcare-by-2020

 


Cancer Biology and Genomics for Disease Diagnosis (Vol. I) Now Available for Amazon Kindle


Reporter: Stephen J Williams, PhD

Article ID #179: Cancer Biology and Genomics for Disease Diagnosis (Vol. I) Now Available for Amazon Kindle. Published on 8/14/2015

WordCloud Image Produced by Adam Tubman

Leaders in Pharmaceutical Business Intelligence would like to announce the First volume of their BioMedical E-Book Series C: e-Books on Cancer & Oncology

Volume One: Cancer Biology and Genomics for Disease Diagnosis

The volume is now available on Amazon Kindle at http://www.amazon.com/dp/B013RVYR2K.

This e-Book is a comprehensive review of recent Original Research on Cancer & Genomics, including related opportunities for Targeted Therapy, written by experts, authors, and writers. It highlights some of the recent trends and discoveries in cancer research and cancer treatment, with particular attention to how new technological and informatics advancements have ushered in paradigm shifts in how we think about, diagnose, and treat cancer. The Methodology of Curation adds value to the results of the Original Research for the e-Reader. The e-Book’s articles have been published in the Open Access Online Scientific Journal since April 2012, and new articles on this subject will continue to be incorporated as published, with periodic updates.

We invite e-Readers to write Article Reviews for this e-Book on Amazon. All forthcoming BioMed e-Book titles can be viewed at:

http://pharmaceuticalintelligence.com/biomed-e-books/

Leaders in Pharmaceutical Business Intelligence, an Open Access Online Scientific Journal launched in April 2012, is a scientific, medical, and business multi-expert authoring environment spanning several domains of the life sciences, pharmaceutical, healthcare & medicine industries. The venture operates as an online scientific intellectual exchange at its website http://pharmaceuticalintelligence.com for curation and reporting on frontiers in biomedical and biological sciences, healthcare economics, pharmacology, pharmaceuticals & medicine. In addition, the venture publishes a Medical E-book Series available on Amazon’s Kindle platform.

Analyzing and sharing the vast and rapidly expanding volume of scientific knowledge has never been so crucial to innovation in the medical field. We are addressing the need to overcome this scientific information overload by:

  • delivering curation and summary interpretations of latest findings and innovations
  • on an open-access, Web 2.0 platform, with the future goal of providing primarily concept-driven search
  • providing a social platform for scientists and clinicians to enter into discussion using social media
  • compiling recent discoveries and issues in yearly-updated Medical E-book Series on Amazon’s mobile Kindle platform

By providing these hybrid networks, this curation offers better organization and visibility for the critical information needed for the next innovations in academic, clinical, and industrial research.

Table of Contents for Cancer Biology and Genomics for Disease Diagnosis

Preface

Introduction  The evolution of cancer therapy and cancer research: How we got here?

Part I. Historical Perspective of Cancer Demographics, Etiology, and Progress in Research

Chapter 1:  The Occurrence of Cancer in World Populations

Chapter 2:  Rapid Scientific Advances Change Our View on How Cancer Forms

Chapter 3:  A Genetic Basis and Genetic Complexity of Cancer Emerge

Chapter 4: How Epigenetic and Metabolic Factors Affect Tumor Growth

Chapter 5: Advances in Breast and Gastrointestinal Cancer Research Support Hope for a Cure

Part II. Advent of Translational Medicine, “omics”, and Personalized Medicine Ushers in New Paradigms in Cancer Treatment and Advances in Drug Development

Chapter 6:  Treatment Strategies

Chapter 7:  Personalized Medicine and Targeted Therapy

Part III. Translational Medicine, Genomics, and New Technologies Converge to Improve Early Detection

Chapter 8:  Diagnosis                                     

Chapter 9:  Detection

Chapter 10:  Biomarkers

Chapter 11:  Imaging In Cancer

Chapter 12: Nanotechnology Imparts New Advances in Cancer Treatment, Detection, &  Imaging                                 

Epilogue by Larry H. Bernstein, MD, FACP: Envisioning New Insights in Cancer Translational Biology

 


Imaging Technology in Cancer Surgery

Author and curator: Dror Nir, PhD

The advent of medical-imaging technologies such as image fusion, functional imaging, and noninvasive tissue characterisation is playing an imperative role in answering the demand for personalized cancer medicine, thus transforming the concept into practice. To date, the main imaging systems that can provide a reasonable level of cancer detection and localization are CT, mammography, multi-sequence MRI, PET/CT, and ultrasound. All of these require skilled operators and experienced imaging interpreters to deliver what is required at a reasonable level. It is generally agreed by radiologists and oncologists that, in order to provide a comprehensive workflow that complies with the principles of personalized medicine, the future management of cancer patients will rely heavily on computerized image-interpretation applications that extract measurable imaging biomarkers from images in a standardized manner, leading to better clinical assessment of cancer patients.

As a consequence of the Human Genome Project and technological advances in gene sequencing, the understanding of cancer has advanced considerably, leading to an increase in treatment options. Yet surgical resection is still the leading form of therapy offered to patients with organ-confined tumors. Obtaining “cancer free” surgical margins is crucial to the surgical outcome in terms of overall survival and patients’ quality of life and morbidity. Currently, a significant portion of surgeries end with positive surgical margins, leading to poor clinical outcomes and increased costs. To improve on this, a large variety of intraoperative imaging devices aimed at resection guidance have been introduced and adapted in the last decade, and this trend is expected to continue.

The Status of Contemporary Image-Guided Modalities in Oncologic Surgery is a review paper presenting a variety of cancer imaging techniques that have been adapted or developed for intra-operative surgical guidance. It also covers novel, cancer-specific contrast agents that are in early-stage development and show significant promise for improving real-time detection of sub-clinical cancer in the operative setting.

Another good (free access) review paper is: uPAR-targeted multimodal tracer for pre- and intraoperative imaging in cancer surgery

Abstract

Pre- and intraoperative diagnostic techniques facilitating tumor staging are of paramount importance in colorectal cancer surgery. The urokinase receptor (uPAR) plays an important role in the development of cancer, tumor invasion, angiogenesis, and metastasis and over-expression is found in the majority of carcinomas. This study aims to develop the first clinically relevant anti-uPAR antibody-based imaging agent that combines nuclear (111In) and real-time near-infrared (NIR) fluorescent imaging (ZW800-1). Conjugation and binding capacities were investigated and validated in vitro using spectrophotometry and cell-based assays. In vivo, three human colorectal xenograft models were used including an orthotopic peritoneal carcinomatosis model to image small tumors. Nuclear and NIR fluorescent signals showed clear tumor delineation between 24h and 72h post-injection, with highest tumor-to-background ratios of 5.0 ± 1.3 at 72h using fluorescence and 4.2 ± 0.1 at 24h with radioactivity. 1-2 mm sized tumors could be clearly recognized by their fluorescent rim. This study showed the feasibility of an uPAR-recognizing multimodal agent to visualize tumors during image-guided resections using NIR fluorescence, whereas its nuclear component assisted in the pre-operative non-invasive recognition of tumors using SPECT imaging. This strategy can assist in surgical planning and subsequent precision surgery to reduce the number of incomplete resections.

INTRODUCTION
Diagnosis, staging, and surgical planning of colorectal cancer patients increasingly rely on imaging techniques that provide information about tumor biology and anatomical structures [1-3]. Single-photon emission computed tomography (SPECT) and positron emission tomography (PET) are preoperative nuclear imaging modalities used to provide insights into tumor location, tumor biology, and the surrounding micro-environment [4]. Both techniques depend on the recognition of tumor cells using radioactive ligands. Various monoclonal antibodies, initially developed as therapeutic agents (e.g. cetuximab, bevacizumab, labetuzumab), are labeled with radioactive tracers and evaluated for pre-operative imaging purposes [5-9].

Despite these techniques, during surgery the surgeons still rely mostly on their eyes and hands to distinguish healthy from malignant tissues, resulting in incomplete resections or unnecessary tissue removal in up to 27% of rectal cancer patients [10, 11]. Incomplete resections (R1) are shown to be a strong predictor of development of distant metastasis, local recurrence, and decreased survival of colorectal cancer patients [11, 12].

Fluorescence-guided surgery (FGS) is an intraoperative imaging technique already introduced and validated in the clinic for sentinel lymph node (SLN) mapping and biliary imaging [13]. Tumor-specific FGS can be regarded as an extension of SPECT/PET, using fluorophores instead of radioactive labels conjugated to tumor-specific ligands, but with higher spatial resolution than SPECT/PET imaging and real-time anatomical feedback [14]. A powerful synergy can be achieved when nuclear and fluorescent imaging modalities are combined, extending the nuclear diagnostic images with real-time intraoperative imaging. This combination can lead to improved diagnosis and management by integrating pre-, intra-, and postoperative imaging. Nuclear imaging enables pre-operative evaluation of tumor spread, while during surgery deeper-lying spots can be localized using the gamma probe counter. The (NIR) fluorescent signal aids the surgeon by providing real-time anatomical feedback to accurately recognize and resect malignant tissues. Postoperatively, malignant cells can be recognized using NIR fluorescence microscopy. Clinically, the advantages of multimodal agents in image-guided surgery have been shown in patients with melanoma and prostate cancer, but those studies used non-specific agents, following the natural lymph drainage pattern of colloidal tracers after peritumoral injection [15, 16].

The urokinase-type plasminogen activator receptor (uPAR) is implicated in many aspects of tumor growth and (micro) metastasis [17, 18]. The levels of uPAR are undetectable in normal tissues except for occasional macrophages and granulocytes in the uterus, thymus, kidneys and spleen [19]. Enhanced tumor levels of uPAR and its circulating form (suPAR) are independent prognostic markers for overall survival in colorectal cancer patients [20, 21]. The relatively selective and high overexpression of uPAR in a wide range of human cancers including colorectal, breast, and pancreas nominate uPAR as a widely applicable and potent molecular target [17, 22]. The current study aims to develop a clinically relevant uPAR-specific multimodal agent that can be used to visualize tumors pre- and intraoperatively after a single injection.
We combined the 111Indium isotope with NIR fluorophore ZW800-1 using a hybrid linker to an uPAR specific monoclonal antibody (ATN-658) and evaluated its performance using a pre-clinical SPECT system (U-SPECT-II) and a clinically-applied NIR fluorescence camera system (FLARE™).
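As an aside on the numbers quoted above, the tumor-to-background ratio is a simple summary statistic: the mean signal inside a tumor region of interest divided by the mean signal in surrounding normal tissue. A hedged illustration with made-up pixel values:

```python
# Illustrative only: tumor-to-background ratio (TBR) from two regions of
# interest in a fluorescence image. All pixel values here are made up.
import numpy as np

tumor_roi = np.array([412.0, 388.0, 450.0, 430.0])    # signal at the tumor rim
background_roi = np.array([85.0, 90.0, 78.0, 88.0])   # surrounding tissue

tbr = tumor_roi.mean() / background_roi.mean()
print(f"TBR = {tbr:.1f}")  # ~4.9; ratios near 5 match the reported 72 h value
```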


Robotic surgery is a growing trend, specifically in urology. The following review paper offers a good discussion of the added value of imaging in urologic robotic surgery:

The current and future use of imaging in urological robotic surgery: a survey of the European Association of Robotic Urological Surgeons

 Abstract

Background

With the development of novel augmented reality operating platforms the way surgeons utilize imaging as a real-time adjunct to surgical technique is changing.

Methods

A questionnaire was distributed via the European Robotic Urological Society mailing list. The questionnaire had three themes: surgeon demographics, current use of imaging and potential uses of an augmented reality operating environment in robotic urological surgery.

Results

117 of the 239 respondents (48.9%) were independently practicing robotic surgeons. 74% of surgeons reported having imaging available in theater for prostatectomy, 97% for robotic partial nephrectomy, and 95% for cystectomy. 87% felt there was a role for augmented reality as a navigation tool in robotic surgery.

Conclusions

This survey has revealed the contemporary robotic surgeon to be comfortable in the use of imaging for intraoperative planning; it also suggests that there is a desire for augmented reality platforms within the urological community. Copyright © 2014 John Wiley & Sons, Ltd.

 Introduction

Since Röntgen first utilized X-rays to image the carpal bones of the human hand in 1895, medical imaging has evolved and is now able to provide a detailed representation of a patient’s intracorporeal anatomy, with recent advances now allowing for 3-dimensional (3D) reconstructions. The visualization of anatomy in 3D has been shown to improve the ability to localize structures when compared with 2D with no change in the amount of cognitive loading [1]. This has allowed imaging to move from a largely diagnostic tool to one that can be used for both diagnosis and operative planning.

One potential interface to display 3D images, to maximize their potential as a tool for surgical guidance, is to overlay them onto the endoscopic operative scene (augmented reality). This addresses, in part, a criticism often leveled at robotic surgery: the loss of haptic feedback. Augmented reality has the potential to mitigate this sensory loss by enhancing the surgeon’s visual cues with information regarding subsurface anatomical relationships [2].

Augmented reality surgery is in its infancy for intra-abdominal procedures due in large part to the difficulties of applying static preoperative imaging to a constantly deforming intraoperative scene [3]. There are case reports and ex vivo studies in the literature examining the technology in minimal access prostatectomy [3-6] and partial nephrectomy [7-10], but there remains a lack of evidence determining whether surgeons feel there is a role for the technology and if so for what procedures they feel it would be efficacious.

This questionnaire-based study was designed to assess first, the pre- and intra-operative imaging modalities utilized by robotic urologists; second, the current use of imaging intraoperatively for surgical planning; and finally whether there is a desire for augmented reality among the robotic urological community.

Methods

Recruitment

A web-based survey instrument was designed and sent out, as part of a larger survey, to members of the EAU robotic urology section (ERUS). Only independently practicing robotic surgeons performing robot-assisted laparoscopic prostatectomy (RALP), robot-assisted partial nephrectomy (RAPN), and/or robotic cystectomy were included in the analysis; those surgeons exclusively performing other procedures were excluded. Respondents were offered no incentives to reply. All data collected were anonymous.

Survey design and administration

The questionnaire was created using the LimeSurvey platform (www.limesurvey.com) and hosted on their website. All responses (both complete and incomplete) were included in the analysis. The questionnaire was dynamic with the questions displayed tailored to the respondents’ previous answers.

When computing fractions or percentages, the denominator was the number of respondents who answered the question; this number varies due to the dynamic nature of the questionnaire.

Demographics

All respondents to the survey were asked in what country they practiced and which robotic urological procedures they performed. In addition, surgeons were asked to specify the number of cases they had undertaken for each procedure.

 Current imaging practice

Procedure-specific questions in this group were displayed according to the operations the respondent performed. A summary of the questions can be seen in Appendix 1. Procedure-nonspecific questions were also asked. Participants were asked whether they routinely used the Tile Pro™ function of the da Vinci console (Intuitive Surgical, Sunnyvale, USA) and whether they routinely viewed imaging intra-operatively.

 Augmented reality

Before answering questions in this section, participants were invited to watch a video demonstrating an augmented reality platform during RAPN, performed by our group at Imperial College London. A still from this video can be seen in Figure 1. They were then asked whether they felt augmented reality would be of use as a navigation or training tool in robotic surgery.


Figure 1. A still taken from a video of an augmented reality robot-assisted partial nephrectomy. Here the tumour has been painted into the operative view, allowing the surgeon to appreciate the relationship of the tumour with the surface of the kidney.

Once again, in this section, procedure-specific questions were displayed according to the operations the respondent performed. Only those respondents who felt augmented reality would be of use as a navigation tool were asked procedure-specific questions. Questions were asked to establish where in these procedures they felt an augmented reality environment would be of use.

Results

Demographics

Of the 239 respondents completing the survey, 117 were independently practising robotic surgeons and were therefore eligible for analysis. The majority of the surgeons had both trained (210/239, 87.9%) and worked in Europe (215/239, 90%). The median number of cases undertaken by those surgeons reporting their case volume was 120 (range 6–2000) for RALP, 9 (1–120) for robot-assisted cystectomy, and 30 (1–270) for RAPN.

 

Contemporary use of imaging in robotic surgery

When enquiring about the use of imaging for surgical planning, the majority of surgeons (57%, 65/115) routinely viewed pre-operative imaging intra-operatively, with only 9% (13/137) routinely capitalizing on the TilePro™ function in the console to display these images. Among surgeons who performed RAPN, 13.8% (9/65) reported using the TilePro™ technology routinely.

When assessing the imaging modalities that are available to a surgeon in theater, the majority of surgeons performing RALP (74%, 78/106) reported using MRI, with an additional 37% (39/106) reporting the use of CT for pre-operative staging and/or planning. For surgeons performing RAPN and robot-assisted cystectomy there was more of a consensus, with 97% (68/70) and 95% (54/57) of surgeons, respectively, using CT for routine preoperative imaging (Table 1).

Table 1. Which preoperative imaging modalities do you use for diagnosis and surgical planning?

                     CT          MRI         USS         None        Other
RALP (n = 106)       39.8% (39)  73.5% (78)  2% (3)      15.1% (16)  8.4% (9)
RAPN (n = 70)        97.1% (68)  42.9% (30)  17.1% (12)  0% (0)      2.9% (2)
Cystectomy (n = 57)  94.7% (54)  26.3% (15)  1.8% (1)    1.8% (1)    5.3% (3)

Those surgeons performing RAPN were found to have the most diversity in the way they viewed pre-operative images in theater, routinely viewing images in sagittal, coronal and axial slices (Table 2). The majority of these surgeons also viewed the images as 3D reconstructions (54%, 38/70).

Table 2. How do you typically view preoperative imaging in the OR? 3D recons = three-dimensional reconstructions

                     Axial slices  Coronal slices  Sagittal slices  3D recons.  Do not view
RALP (n = 106)       49.1% (52)    44.3% (47)      31.1% (33)       9.4% (10)   31.1% (33)
RAPN (n = 70)        68.6% (48)    74.3% (52)      60% (42)         54.3% (38)  0% (0)
Cystectomy (n = 57)  70.2% (40)    52.6% (30)      50.9% (29)       21.1% (12)  8.8% (5)

The majority of surgeons used ultrasound intra-operatively in RAPN (51%, 35/69) with a further 25% (17/69) reporting they would use it if they had access to a ‘drop-in’ ultrasound probe (Figure 2).


Figure 2. Chart demonstrating responses to the question – Do you use intraoperative ultrasound for robotic partial nephrectomy?

Desire for augmented reality

Overall, 87% of respondents envisaged a role for augmented reality as a navigation tool in robotic surgery and 82% (88/107) felt that there was an additional role for the technology as a training tool.

The greatest desire for augmented reality was among those surgeons performing RAPN with 86% (54/63) feeling the technology would be of use. The largest group of surgeons felt it would be useful in identifying tumour location, with significant numbers also feeling it would be efficacious in tumor resection (Figure 3).


Figure 3. Chart demonstrating responses to the question – In robotic partial nephrectomy which parts of the operation do you feel augmented reality image overlay would be of assistance?

When enquiring about the potential for augmented reality in RALP, 79% (20/96) of respondents felt it would be of use during the procedure, with the largest group feeling it would be helpful for nerve sparing (65%, 62/96) (Figure 4). The picture in cystectomy was similar, with 74% (37/50) of surgeons believing augmented reality would be of use, with both nerve sparing and apical dissection highlighted as specific examples (40%, 20/50) (Figure 5). The majority also felt that it would be useful for lymph node dissection in both RALP and robot-assisted cystectomy (55% (52/95) and 64% (32/50), respectively).


Figure 4. Chart demonstrating responses to the question – In robotic prostatectomy which parts of the operation do you feel augmented reality image overlay would be of assistance?


Figure 5. Chart demonstrating responses to the question – In robotic cystectomy which parts of the operation do you feel augmented reality overlay technology would be of assistance?

Discussion

The results from this study suggest that the contemporary robotic surgeon views imaging as an important adjunct to operative practice. The way these images are being viewed is changing; although the majority of surgeons continue to view images as two-dimensional (2D) slices a significant minority have started to capitalize on 3D reconstructions to give them an improved appreciation of the patient’s anatomy.

This study has highlighted surgeons’ willingness to take the next step in the utilization of imaging in operative planning, augmented reality, with 87% feeling it has a role to play in robotic surgery. Although there appears to be a considerable desire for augmented reality, the technology itself is still in its infancy with the limited evidence demonstrating clinical application reporting only qualitative results [3, 7, 11, 12].

There are a number of significant issues that need to be overcome before augmented reality can be adopted in routine clinical practice. The first of these is registration (the process by which two images are positioned in the same coordinate system such that the locations of corresponding points align [13]). This process has been performed both manually and using automated algorithms with varying degrees of accuracy [2, 14]. The second issue pertains to the use of static pre-operative imaging in a dynamic operative environment; in order for the pre-operative imaging to be accurately registered it must be deformable. This problem remains as yet unresolved.
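Registration as defined above is easiest to see in its rigid form, where corresponding points are aligned with a rotation and a translation. Below is a minimal sketch of the classic SVD-based (Kabsch) solution, using only numpy; it deliberately ignores the deformation problem the paragraph raises, which is precisely what makes intra-abdominal registration hard:

```python
# Minimal sketch of rigid point-based registration (Kabsch algorithm):
# find the rotation R and translation t that best map source points onto
# target points, so corresponding locations align. Deformation is ignored.
import numpy as np

def rigid_register(source: np.ndarray, target: np.ndarray):
    """For Nx3 point sets, return (R, t) minimizing ||R @ s_i + t - target_i||."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy check: recover a known rotation and translation of fiducial-like points.
rng = np.random.default_rng(3)
pts = rng.normal(size=(10, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
moved = pts @ R_true.T + np.array([5.0, -2.0, 1.0])
R_est, t_est = rigid_register(pts, moved)
print(np.allclose(R_est, R_true), np.allclose(t_est, [5.0, -2.0, 1.0]))  # True True
```

Automated surface- or image-based registration, and the deformable extensions the authors call for, build on this same least-squares idea but with far more degrees of freedom.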

Live intra-operative imaging circumvents the problems of tissue deformation, and in RAPN 51% of surgeons reported already using intra-operative ultrasound to aid in tumour resection. Cheung and colleagues [9] have published an ex vivo study highlighting the potential for intra-operative ultrasound in augmented reality partial nephrectomy. They report that overlaying ultrasound onto the operative scene improved the surgeon’s appreciation of the subsurface tumour anatomy; this improvement in anatomical appreciation resulted in improved resection quality over conventional ultrasound-guided resection [9]. Building on this work, the first in vivo use of overlaid ultrasound in RAPN has recently been reported [10]. Although good subjective feedback was received from the operating surgeon, the study was limited to a single case demonstrating feasibility and as such was not able to show an outcome benefit to the technology [10].

RAPN also appears to be the area in which augmented reality would be most readily adopted with 86% of surgeons claiming they see a use for the technology during the procedure. Within this operation there are two obvious steps to augmentation, anatomical identification (in particular vessel identification to facilitate both routine ‘full clamping’ and for the identification of secondary and tertiary vessels for ‘selective clamping’ [15]) and tumour resection. These two phases have different requirements from an augmented reality platform; the first phase of identification requires a gross overview of the anatomy without the need for high levels of registration accuracy. Tumor resection, however, necessitates almost sub-millimeter accuracy in registration and needs the system to account for the dynamic intra-operative environment. The step of anatomical identification is amenable to the use of non-deformable 3D reconstructions of pre-operative imaging while that of image-guided tumor resection is perhaps better suited to augmentation with live imaging such as ultrasound [2, 9, 16].

For RALP and robot-assisted cystectomy, the steps in which surgeons felt augmented reality would be of assistance were those of neurovascular bundle preservation and apical dissection. The relative perceived efficacy of augmented reality in these steps correlates with previous examinations of augmented reality in RALP [17, 18]. Although surgeon preference for utilizing augmented reality while undertaking robotic prostatectomy has been demonstrated, Thompson et al. failed to demonstrate an improvement in oncological outcomes in those patients undergoing AR RALP [18].

Both nerve sparing and apical dissection require a high level of registration accuracy and a necessity for either live imaging or the deformation of pre-operative imaging to match the operative scene; achieving this level of registration accuracy is made more difficult by the mobilization of the prostate gland during the operation [17]. These problems are equally applicable to robot-assisted cystectomy. Although guidance systems have been proposed in the literature for RALP [3-5, 12, 17], none have achieved the level of accuracy required to provide assistance during nerve sparing. In addition, there are still imaging challenges that need to be overcome. Although multiparametric MRI has been shown to improve decision making in opting for a nerve sparing approach to RALP [19] the imaging is not yet able to reliably discern the exact location of the neurovascular bundle. This said, significant advances are being made with novel imaging modalities on the horizon that may allow for imaging of the neurovascular bundle in the near future [20].

 

Limitations

The number of operations included represents a significant limitation of the study; had different index procedures been chosen, different results might have been seen. This being said, the index procedures selected represent the vast majority of uro-oncological robotic surgical practice, largely mitigating this shortfall.

Although the available ex vivo evidence suggests that introducing augmented reality operating environments into surgical practice would help to improve outcomes [9, 21], the in vivo experience to date is limited to small-volume case series reporting feasibility [2, 3, 14]. To date no study has demonstrated an in vivo outcome advantage to augmented reality guidance. In addition, augmented reality has been demonstrated to increase rates of inattention blindness among surgeons, suggesting there is a trade-off between increasing visual information and the surgeon’s ability to appreciate unexpected operative events [21].

 

Conclusions

This survey shows the contemporary robotic surgeon to be comfortable with the use of imaging to aid intra-operative planning; furthermore it highlights a significant interest among the urological community in augmented reality operating platforms.

Short- to medium-term development of augmented reality systems in robotic urology surgery would be best performed using RAPN as the index procedure. Not only was this the operation where surgeons saw the greatest potential benefits, but it may also be the operation where augmentation is most easily achievable by capitalizing on the respective benefits of technologies the surgeons are already using: pre-operative CT for anatomical identification and intra-operative ultrasound for tumour resection.

 

Conflict of interest

None of the authors have any conflicts of interest to declare.

Appendix 1

Demographics

  • In which country do you usually practise? (single best answer)
  • Which robotic procedures do you perform?* (single best answer)

Current Imaging Practice

  • What preoperative imaging modalities do you use for the staging and surgical planning in renal cancer? (multiple choice)
  • How do you typically view preoperative imaging in theatre for renal cancer surgery? (multiple choice)
  • Do you use intraoperative ultrasound for partial nephrectomy? (yes/no)
  • What preoperative imaging modalities do you use for the staging and surgical planning in prostate cancer? (multiple choice)
  • How do you typically view preoperative imaging in theatre for prostate cancer? (multiple choice)
  • Do you use intraoperative ultrasound for robotic partial nephrectomy? (yes/no)
  • Which preoperative imaging modality do you use for staging and surgical planning in muscle-invasive TCC? (multiple choice)
  • How do you typically view preoperative imaging in theatre for muscle-invasive TCC? (multiple choice)
  • Do you routinely refer to preoperative imaging intraoperatively? (yes/no)
  • Do you routinely use TilePro intraoperatively? (yes/no)

Augmented Reality

  • Do you feel there is a role for augmented reality as a navigation tool in robotic surgery? (yes/no)
  • Do you feel there is a role for augmented reality as a training tool in robotic surgery? (yes/no)
  • In robotic partial nephrectomy, which parts of the operation do you feel augmented reality image overlay technology would be of assistance? (multiple choice)
  • In robotic nephrectomy, which parts of the operation do you feel augmented reality image overlay technology would be of assistance? (multiple choice)
  • In robotic prostatectomy, which parts of the operation do you feel augmented reality image overlay technology would be of assistance? (multiple choice)
  • Would augmented reality guidance be of use in lymph node dissection in robotic prostatectomy? (yes/no)
  • In robotic cystectomy, which parts of the operation do you feel augmented reality image overlay technology would be of assistance? (multiple choice)
  • Would augmented reality guidance be of use in lymph node dissection in robotic cystectomy? (yes/no)

*The relevant procedure-related questions were displayed based on the answer to this question.

References

1. Foo J-L, Martinez-Escobar M, Juhnke B, et al. Evaluating mental workload of two-dimensional and three-dimensional visualization for anatomical structure localization. J Laparoendosc Adv Surg Tech A 2013; 23(1): 65–70.

2. Hughes-Hallett A, Mayer EK, Marcus HJ, et al. Augmented reality partial nephrectomy: examining the current status and future perspectives. Urology 2014; 83(2): 266–273.

3. Sridhar AN, Hughes-Hallett A, Mayer EK, et al. Image-guided robotic interventions for prostate cancer. Nat Rev Urol 2013; 10(8): 452–462.

4. Cohen D, Mayer E, Chen D, et al. Augmented reality image guidance in minimally invasive prostatectomy. Lect Notes Comput Sci 2010; 6367: 101–110.

5. Simpfendorfer T, Baumhauer M, Muller M, et al. Augmented reality visualization during laparoscopic radical prostatectomy. J Endourol 2011; 25(12): 1841–1845.

6. Teber D, Simpfendorfer T, Guven S, et al. In vitro evaluation of a soft-tissue navigation system for laparoscopic prostatectomy. J Endourol 2010; 24(9): 1487–1491.

7. Teber D, Guven S, Simpfendörfer T, et al. Augmented reality: a new tool to improve surgical accuracy during laparoscopic partial nephrectomy? Preliminary in vitro and in vivo results. Eur Urol 2009; 56(2): 332–338.

8. Pratt P, Mayer E, Vale J, et al. An effective visualisation and registration system for image-guided robotic partial nephrectomy. J Robot Surg 2012; 6(1): 23–31.

9. Cheung CL, Wedlake C, Moore J, et al. Fused video and ultrasound images for minimally invasive partial nephrectomy: a phantom study. Med Image Comput Comput Assist Interv 2010; 13(Pt 3): 408–415.

10. Hughes-Hallett A, Pratt P, Mayer E, et al. Intraoperative ultrasound overlay in robot-assisted partial nephrectomy: first clinical experience. Eur Urol 2014; 65(3): 671–672.

11. Nakamura K, Naya Y, Zenbutsu S, et al. Surgical navigation using three-dimensional computed tomography images fused intraoperatively with live video. J Endourol 2010; 24(4): 521–524.

12. Ukimura O, Gill IS. Imaging-assisted endoscopic surgery: Cleveland Clinic experience. J Endourol 2008; 22(4): 803–809.

13. Altamar HO, Ong RE, Glisson CL, et al. Kidney deformation and intraprocedural registration: a study of elements of image-guided kidney surgery. J Endourol 2011; 25(3): 511–517.

14. Nicolau S, Soler L, Mutter D, Marescaux J. Augmented reality in laparoscopic surgical oncology. Surg Oncol 2011; 20(3): 189–201.

15. Ukimura O, Nakamoto M, Gill IS. Three-dimensional reconstruction of renovascular-tumor anatomy to facilitate zero-ischemia partial nephrectomy. Eur Urol 2012; 61(1): 211–217.

16. Pratt P, Hughes-Hallett A, Di Marco A, et al. Multimodal reconstruction for image-guided interventions. In: Yang GZ, Darzi A (eds) Proceedings of the Hamlyn Symposium on Medical Robotics: London, 2013; 59–61.

17. Mayer EK, Cohen D, Chen D, et al. Augmented reality image guidance in minimally invasive prostatectomy. Eur Urol Suppl 2011; 10(2): 300.

18. Thompson S, Penney G, Billia M, et al. Design and evaluation of an image-guidance system for robot-assisted radical prostatectomy. BJU Int 2013; 111(7): 1081–1090.

19. Panebianco V, Salciccia S, Cattarino S, et al. Use of multiparametric MR with neurovascular bundle evaluation to optimize the oncological and functional management of patients considered for nerve-sparing radical prostatectomy. J Sex Med 2012; 9(8): 2157–2166.

20. Rai S, Srivastava A, Sooriakumaran P, Tewari A. Advances in imaging the neurovascular bundle. Curr Opin Urol 2012; 22(2): 88–96.

21. Dixon BJ, Daly MJ, Chan H, et al. Surgeons blinded by enhanced navigation: the effect of augmented reality on attention. Surg Endosc 2013; 27(2): 454–461.

Read Full Post »

The Role of Medical Imaging in Personalized Medicine

Writer & reporter: Dror Nir, PhD

The future of personalized medicine comprises quantifiable diagnosis and tailored treatment; i.e. delivering the right treatment at the right time. To achieve a standardized definition of what “right” means, the designated treatment location and lesion size are important factors, whether the treatment is focused on a single location or is systemic. The role of medical imaging is, and will continue to be, vital in that respect: patients’ stratification based on imaging biomarkers can help identify individuals suited for preventive intervention and can improve disease staging. In vivo visualization of loco-regional physiological, biochemical and biological processes using molecular imaging can detect diseases in pre-symptomatic phases or facilitate individualized drug delivery. Furthermore, as mentioned in most of my previous posts, imaging is essential to patient-tailored therapy planning, therapy monitoring, quantification of response to treatment and follow-up of disease progression. Especially with the rise of companion diagnostics/theranostics (therapeutics & diagnostics), imaging and treatment will have to be synchronized in real time to achieve the best control/guidance of the treatment.

It is worth noting that the new RECIST 1.1 criteria (used in oncological therapy monitoring) have been expanded to include the use of PET (in addition to lymph-node evaluation).


In previous posts I have already discussed many examples of the use of medical imaging in personalized medicine: patients’ stratification (Imaging-biomarkers is Imaging-based tissue characterization, The future of imaging-biomarkers in diagnostics, Ultrasound-based Screening for Ovarian Cancer); imaging-based guided therapies (Minimally invasive image-guided therapy for inoperable hepatocellular carcinoma); treatment follow-up (the importance of spatially-localized and quantified image interpretation in cancer management); and imaging-based assessment of response to treatment (Causes and imaging features of false positives and false negatives on 18F-PET/CT in oncologic imaging).

Browsing through our collaborative open-source initiative, one can find many more articles and discussions on the matter, e.g. Tumor Imaging and Targeting: Predicting Tumor Response to Treatment: Where we stand?, and In Search of Clarity on Prostate Cancer Screening, Post-Surgical Followup, and Prediction of Long Term Remission.

In this post I would like to highlight the potential contribution of medical imaging to the development of companion diagnostics. I do so through the story of the co-development of vintafolide (EC145) and etarfolatide (Endocyte/Merck). Etarfolatide is a folate-targeted molecular radiodiagnostic imaging agent that identifies tumors that overexpress the folate receptor. The folate receptor, a glycosylphosphatidylinositol-anchored cell surface receptor, is overexpressed on the vast majority of cancer tissues, while its expression is limited in healthy tissues and organs. Folate receptors are highly expressed in epithelial, ovarian, cervical, breast, lung, kidney, colorectal, and brain tumors. When expressed in normal tissue, folate receptors are restricted to the lungs, kidneys, placenta, and choroid plexus, and in these tissues they are limited to the apical surface of polarized epithelia. Folate, also known as pteroylglutamate, is a non-immunogenic, water-soluble B vitamin that is critical to DNA synthesis, methylation, and repair (folate is used to synthesize thymine).

Vintafolide (EC145) delivers a very potent vinca chemotherapy directly to cancer cells by targeting the folate receptor expressed on cancer cells. Approximately 80-90 percent of ovarian and lung cancers express the receptor, as do many other types of cancer. Clinical data have shown that patients with metastases that are all positive for the folate receptor, identified by etarfolatide, benefited the most from the treatment with vintafolide, the corresponding folate-targeted small molecule drug conjugate.

Because both the drug and the imaging agent rely on folate receptors within the patient’s body, Endocyte’s strategy was to develop the imaging agent and use it to accelerate R&D and regulatory approval. Endocyte and Merck entered into a partnership for vintafolide in April 2012, under which Merck was granted an exclusive license to develop, manufacture and commercialize vintafolide. Endocyte is responsible for conducting the PROCEED Phase 3 clinical study in women with platinum-resistant ovarian cancer and the Phase 2b second-line NSCLC (non-small cell lung cancer) study named TARGET; Merck is responsible for further clinical studies in additional indications. This co-development of a diagnostic and a therapeutic agent was conducted according to the FDA guidance on personalized medicine, and resulted in vintafolide being granted orphan drug status by the EMA as early as 2012.

 

 The following is an extract from a post by Phillip H. Kuo, MD, PhD, associate professor of medical imaging, medicine, and biomedical engineering; section chief of nuclear medicine; and director of PET/CT at the University of Arizona Cancer Center.


Figure 1 — Targeted Radioimaging Diagnostic and Small Molecule Drug Conjugate

Etarfolatide is composed of the targeting ligand folic acid (yellow), which has a high folate receptor binding affinity, and a Technetium-99m–based radioimaging agent (turquoise). Etarfolatide identifies metastases that express the folate receptor protein in real time (A). The folic acid targeting ligand is identical to that found on vintafolide, the corresponding therapeutic small molecule drug conjugate, which also contains a linker system (blue) and a potent chemotherapeutic drug (red) (B).

 


Figure 2 — Whole-Body Scan With 111In-DTPA-Folate 

Diagnostic images of whole-body scans obtained following administration of the targeted radioimaging agent 111In-DTPA-folate, which is constructed with the same folic acid ligand as that engineered in etarfolatide. The healthy patient image on the left shows no folate receptor-positive abdominal tumor; only the healthy kidneys (involved in excretion) are revealed. The patient on the right shows folate receptor-positive tumors in the abdomen and pelvis. Patients with metastases identified as folate receptor-positive with the companion imaging diagnostic etarfolatide are the most likely to respond to treatment with the corresponding small molecule drug conjugate vintafolide. Note: vintafolide is currently being evaluated in a phase 3 clinical trial for platinum-resistant ovarian cancer and a phase 2 trial for non–small-cell lung cancer. Both studies also are using etarfolatide.


Figure 3 — Vintafolide’s Mechanism of Action

Folate is required for cell division, and rapidly dividing cancer cells often express folate receptors to capture enough folate to support rapid cell growth. Elevated expression of the folate receptor occurs in many human malignancies, especially when associated with aggressively growing cancers. The folate-targeted small molecule drug conjugate vintafolide binds to the folate receptor (A) and subsequently is internalized by a natural endocytosis process (B). Once inside the cell, vintafolide’s serum-stable linker selectively releases a potent vinca alkaloid compound (C) to arrest cell division and induce cell death.

Epilogue

I think that those of you who have reached this point in my post deserve a special bonus! So here it is: a medical-imaging initiative that is as ambitious and complex as the initiative to send humans into deep space.

This is the European Population Imaging Infrastructure initiative of the Dutch Federation of University Medical Centres (NFU) and the Erasmus University Medical Centre Rotterdam, Department of Radiology, chaired by Professor Gabriel P. Krestin. The NFU has made initial funding available for the development of this initiative.

The European Population Imaging Infrastructure closely cooperates with the European Biomedical Imaging Infrastructure Project EURO-BioImaging, which is currently being developed.

The ultimate aim of the infrastructure is to help the development and implementation of strategies to prevent or effectively treat disease. It supports imaging in large, prospective epidemiological studies at the population level. Imaging-based markers of pre-symptomatic disease can be used to investigate causes of pathological alterations and to identify people at risk early.

More information on this infrastructure and on the role of the European Population Imaging Infrastructure can be found in the Netherlands Roadmap for Large-Scale Research Facilities, in the Euro-BioImaging application form for funding under the Roadmap for Large-Scale Research Facilities, and on the Euro-BioImaging website.

Certainly, many lessons will be learned as this initiative progresses. I recommend exploring the site. Enjoy!

Read Full Post »

Following (or not) the guidelines for use of imaging in management of prostate cancer.

Writer and curator: Dror Nir, PhD

Overdiagnosis and overtreatment have been a trend of the last two decades, leading to increased health-care costs and human misery.

The following headline on Medscape, Swedes Show That We Can Improve Imaging in Prostate Cancer, piqued my curiosity.

I was expecting “good news” – well, not this time!

Despite its general language, the study that the headline refers to does not address the overall use of imaging in the prostate cancer patient pathway; it is specific to the use of radionuclide bone scans as part of patient staging. The bad news is the realization that the Swedish government had to invest many man-years to achieve “success” in reducing unnecessary use of such imaging in low-risk patients. Moreover, the paper reveals under-use of such imaging for staging high-risk prostate cancer patients.

Based on this paper, one could conclude that, in reality, we are facing long-lasting non-conformity with established guidelines on the use of “full-body” imaging in the prostate cancer patient pathway, in both Europe and the USA.

Here is a link to the original paper:

Prostate Cancer Imaging Trends After a Nationwide Effort to Discourage Inappropriate Prostate Cancer Imaging. Danil V. Makarov, Stacy Loeb, David Ulmert, Linda Drevin, Mats Lambe and Pär Stattin. Correspondence to: Pär Stattin, MD, PhD, Department of Surgery and Perioperative Sciences, Urology and Andrology, Umeå University, SE-901 87 Umeå, Sweden (e-mail: par.stattin@urologi.umu.se).

JNCI J Natl Cancer Inst (2013); doi: 10.1093/jnci/djt175

 

For convenience, here are the highlights:

  • Reducing inappropriate use of imaging to stage incident prostate cancer is a challenging problem highlighted recently as a Physician Quality Reporting System quality measure and by the American Society of Clinical Oncology and the American Urological Association in the Choosing Wisely campaign.

 

  • Since 2000, the National Prostate Cancer Register (NPCR) of Sweden has led an effort to decrease national rates of inappropriate prostate cancer imaging by disseminating utilization data along with the latest imaging guidelines to urologists in Sweden.

  • Results: Thirty-six percent of men underwent imaging within 6 months of prostate cancer diagnosis. Overall, imaging use decreased over time, particularly in the low-risk category, among whom the imaging rate decreased from 45% to 3% (P < .001), but also in the high-risk category, among whom the rate decreased from 63% to 47% (P < .001). Despite substantial regional variation, all regions experienced clinically and statistically (P < .001) significant decreases in prostate cancer imaging.

 

[Tables 1 and 2 and Figures 1 and 2 of the original paper are shown here.]

  • These results may inform current efforts to promote guideline-concordant imaging in the United States and internationally.

  • In 1998, the baseline low-risk prostate cancer imaging rate in Sweden was 45%. Per the NCCN guidelines (7), none of these men should have received bone imaging unless they presented with symptoms suggestive of bone pain (8,24). In the United States, the imaging rate among men with low-risk prostate cancer has been reported to be 19% to 74% in a community cohort and 10% to 48% in a Surveillance Epidemiology and End Results (SEER)–Medicare cohort (10–13,16). It is challenging to compare these rates directly across the two countries because the NPCR aggregates all staging imaging into one variable. However, our sampling revealed that 88% of those undergoing imaging had at least a bone scan, whereas only 11% had any CTs and 10% had any MRI. This suggests that baseline rates of bone scan among low-risk men in Sweden were similar to those among their low-risk counterparts in the United States, whereas rates of axial imaging were likely much lower. During the study period, rates of prostate cancer imaging among low-risk men in Sweden decreased to 3%, substantially lower than those reported in the United States at any time.

  • Miller et al. describe a decline in imaging associated with a small-scale intervention administered in three urology practices located in the United States participating in a quality-improvement consortium. Our study’s contribution is to demonstrate that a similar strategy can be applied effectively at a national scale with an associated decline in inappropriate imaging rates, a finding of great interest for policy makers in the United States seeking to improve health-care quality.

  • In 1998, the baseline high-risk prostate cancer imaging rate in Sweden was 63%, and it decreased to 43% in 2008 (rising slightly to 47% in 2009). Based on our risk category definitions and the guidelines advocated in Sweden, all of these men should have undergone an imaging evaluation (8,24). Swedish rates of prostate cancer imaging among men with high-risk disease are considerably lower than those reported from the SEER–Medicare cohort, where 70% to 75% underwent bone scan and 57% to 58% underwent CT (13,16). These already low rates of imaging among men with high-risk prostate cancer only decreased further during the NPCR’s effort to promote guideline-concordant imaging. Clearly, in both countries, imaging for high-risk prostate cancer remains underused despite the general overuse of imaging and numerous guidelines encouraging its appropriate use (3–9).

Similar items I have covered on this Open Access Online Scientific Journal:

Not applying evidence-based medicine drives up the costs of screening for breast-cancer in the USA.

Read Full Post »

Follow-up on Tomosynthesis

Writer & Curator: Dror Nir, PhD

Tomosynthesis is a method for performing high-resolution, limited-angle tomography (i.e. not a full 360° rotation, but more like ~50°). The use of such systems in breast-cancer screening has been steadily increasing since the FDA cleared such a system in 2011; see my posts – Improving Mammography-based imaging for better treatment planning and State of the art in oncologic imaging of breast.

Many radiologists expect that tomosynthesis will eventually replace conventional mammography because it increases the sensitivity of breast cancer detection, a claim supported by new peer-reviewed publications. In addition, the patient’s experience during tomosynthesis is less painful because less pressure is applied to the breast, and, while offering higher in-plane resolution and fewer imaging artifacts, the mean glandular dose of digital breast tomosynthesis is comparable to that of full-field digital mammography. Because it is relatively new, tomosynthesis is not available at every hospital. The procedure is, however, recognized for reimbursement by public-health schemes.
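For readers curious about the algorithmic side, here is a toy Python sketch of the classic shift-and-add principle behind tomosynthesis reconstruction. It is only an illustration of mine: real scanners use filtered or iterative reconstruction, and the parallel-beam shift model below is a simplification.

import numpy as np

def shift_and_add(projections, angles_deg, plane_height_px):
    """Bring one plane of the breast into focus from limited-angle projections.

    Structures in the chosen plane add coherently; structures in other
    planes are shifted by different amounts and blur out.
    """
    recon = np.zeros_like(projections[0], dtype=float)
    for proj, angle in zip(projections, angles_deg):
        # Parallel-beam approximation: a plane at height h appears displaced
        # by h * tan(angle) in a projection acquired at that tube angle.
        shift = int(round(plane_height_px * np.tan(np.radians(angle))))
        recon += np.roll(proj, shift, axis=0)
    return recon / len(projections)

# Example: 15 low-dose projections acquired over a ~50 degree arc
angles = np.linspace(-25, 25, 15)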

A good summary of radiologist opinion on Tomosynthesis can be found in the following video:

Recent studies’ results with digital tomosynthesis are promising. In addition to an increase in sensitivity for the detection of small cancer lesions, researchers claim that this new breast imaging technique will make breast cancers easier to see in dense breast tissue. Here is a paper published online by The Lancet just a couple of months ago:

Integration of 3D digital mammography with tomosynthesis for population breast-cancer screening (STORM): a prospective comparison study

Stefano Ciatto†, Nehmat Houssami, Daniela Bernardi, Francesca Caumo, Marco Pellegrini, Silvia Brunelli, Paola Tuttobene, Paola Bricolo, Carmine Fantò, Marvi Valentini, Stefania Montemezzi, Petra Macaskill. Lancet Oncol 2013 Jun; 14(7): 583–589. doi: 10.1016/S1470-2045(13)70134-7. Epub 2013 Apr 25.

Background: Digital breast tomosynthesis with 3D images might overcome some of the limitations of conventional 2D mammography for detection of breast cancer. We investigated the effect of integrated 2D and 3D mammography in population breast-cancer screening.

Methods: Screening with Tomosynthesis OR standard Mammography (STORM) was a prospective comparative study. We recruited asymptomatic women aged 48 years or older who attended population-based breast-cancer screening through the Trento and Verona screening services (Italy) from August, 2011, to June, 2012. We did screen-reading in two sequential phases—2D only and integrated 2D and 3D mammography—yielding paired data for each screen. Standard double-reading by breast radiologists determined whether to recall the participant based on positive mammography at either screen read. Outcomes were measured from final assessment or excision histology. Primary outcome measures were the number of detected cancers, the number of detected cancers per 1000 screens, the number and proportion of false positive recalls, and incremental cancer detection attributable to integrated 2D and 3D mammography. We compared paired binary data with McNemar’s test.

Findings: 7292 women were screened (median age 58 years [IQR 54–63]). We detected 59 breast cancers (including 52 invasive cancers) in 57 women. Both 2D and integrated 2D and 3D screening detected 39 cancers. We detected 20 cancers with integrated 2D and 3D screening only, versus none with 2D screening only (p<0.0001). Cancer detection rates were 5.3 cancers per 1000 screens (95% CI 3.8–7.3) for 2D only, and 8.1 cancers per 1000 screens (6.2–10.4) for integrated 2D and 3D screening. The incremental cancer detection rate attributable to integrated 2D and 3D mammography was 2.7 cancers per 1000 screens (1.7–4.2). 395 screens (5.5%; 95% CI 5.0–6.0) resulted in false positive recalls: 181 at both screen reads, and 141 with 2D only versus 73 with integrated 2D and 3D screening (p<0.0001). We estimated that conditional recall (positive integrated 2D and 3D mammography as a condition to recall) could have reduced false positive recalls by 17.2% (95% CI 13.6–21.3) without missing any of the cancers detected in the study population.

Interpretation: Integrated 2D and 3D mammography improves breast-cancer detection and has the potential to reduce false positive recalls. Randomised controlled trials are needed to compare integrated 2D and 3D mammography with 2D mammography for breast cancer screening.

Funding: National Breast Cancer Foundation, Australia; National Health and Medical Research Council, Australia; Hologic, USA; Technologic, Italy.

Introduction

Although controversial, mammography screening is the only population-level early detection strategy that has been shown to reduce breast-cancer mortality in randomised trials.1,2 Irrespective of which side of the mammography screening debate one supports,1–3 efforts should be made to investigate methods that enhance the quality of (and hence potential benefit from) mammography screening. A limitation of standard 2D mammography is the superimposition of breast tissue or parenchymal density, which can obscure cancers or make normal structures appear suspicious. This shortcoming reduces the sensitivity of mammography and increases false-positive screening. Digital breast tomosynthesis with 3D images might help to overcome these limitations. Several reviews4,5 have described the development of breast tomosynthesis technology, in which several low-dose radiographs are used to reconstruct a pseudo-3D image of the breast.4–6

Initial clinical studies of 3D mammography,6–10 though based on small or selected series, suggest that addition of 3D to 2D mammography could improve cancer detection and reduce the number of false positives. However, previous assessments of breast tomosynthesis might have been constrained by selection biases that distorted the potential effect of 3D mammography; thus, screening trials of integrated 2D and 3D mammography are needed.6

We report the results of a large prospective study (Screening with Tomosynthesis OR standard Mammography [STORM]) of 3D digital mammography. We investigated the effect of screen-reading using both standard 2D and 3D imaging with tomosynthesis, compared with screening with standard 2D digital mammography only, for population breast-cancer screening.

  

Methods

Study design and participants

STORM is a prospective population-screening study that compares mammography screen-reading in two sequential phases (figure)—2D only versus integrated 2D and 3D mammography with tomosynthesis—yielding paired results for each screening examination. Women aged 48 years or older who attended population-based screening through the Trento and Verona screening services, Italy, from August, 2011, to June, 2012, were invited to be screened with integrated 2D and 3D mammography. Participants in routine screening mammography (once every 2 years) were asymptomatic women at standard (population) risk for breast cancer. The study was granted institutional ethics approval at each centre, and participants gave written informed consent. Women who opted not to participate in the study received standard 2D mammography. Digital mammography has been used in the Trento breast-screening programme since 2005, and in the Verona programme since 2007; each service monitors outcomes and quality indicators as dictated by European standards, and both have published data for screening performance.11,12

 

[Figure: STORM study design]

Procedures

All participants had digital mammography using a Selenia Dimensions Unit with integrated 2D and 3D mammography done in the COMBO mode (Hologic, Bedford, MA, USA): this setting takes 2D and 3D images at the same screening examination with a single breast position and compression. Each 2D and 3D image consisted of a bilateral two-view (mediolateral oblique and craniocaudal) mammogram. Screening mammograms were interpreted sequentially by radiologists, first on the basis of standard 2D mammography alone, and then by the same radiologist (on the same day) on the basis of integrated 2D and 3D mammography (figure). Thus, integrated 2D and 3D mammography screening refers to non-independent screen reading based on joint interpretation of 2D and 3D images, and does not refer to analytical combinations. Radiologists had to record whether or not to recall the participant at each screen-reading phase before progressing to the next phase of the sequence. For each screen, data were also collected for breast density (at the 2D screen-read), and the side and quadrant for any recalled abnormality (at each screen-read). All eight radiologists were breast radiologists with a mean of 8 years (range 3–13 years) of experience in mammography screening, and had received basic training in integrated 2D and 3D mammography. Several of the radiologists had also used 2D and 3D mammography for patients recalled after positive conventional mammography screening as part of previous studies of tomosynthesis.8,13

Mammograms were interpreted in two independent screen-reads done in parallel, as practiced in most population breast-screening programs in Europe. A screen was considered positive and the woman recalled for further investigations if either screen-reader recorded a positive result at either 2D or integrated 2D and 3D screening (figure). When previous screening mammograms were available, these were shown to the radiologist at the time of screen-reading, as is standard practice. For assessment of breast density, we used Breast Imaging Reporting and Data System (BI-RADS)14 classification, with participants allocated to one of two groups (1–2 [low density] or 3–4 [high density]). Disagreement between readers about breast density was resolved by assessment by a third reader.

Our primary outcomes were the number of cancers detected, the number of cancers detected per 1000 screens, the number and percentage of false positive recalls, and the incremental cancer detection rate attributable to integrated 2D and 3D mammography screening. We compared the number of cancers that were detected only at 2D mammography screen-reading and those that were detected only at 2D and 3D mammography screen-reading; we also did this analysis for false positive recalls. To explore the potential effect of integrated 2D and 3D screening on false-positive recalls, we also estimated how many false-positive recalls would have resulted from using a hypothetical conditional false-positive recall approach, i.e. positive integrated 2D and 3D mammography as a condition of recall (screens recalled at 2D mammography only would not be recalled). Pre-planned secondary analyses were comparison of outcome measures by age group and breast density.

Outcomes were assessed by excision histology for participants who had surgery, or the complete assessment outcome (including investigative imaging with or without histology from core needle biopsy) for all recalled participants. Because our study focuses on the difference in detection by the two screening methods, some cancers might have been missed by both 2D and integrated 2D and 3D mammography; this possibility could be assessed at future follow-up to identify interval cancers. However, this outcome is not assessed in the present study and does not affect estimates of our primary outcomes – i.e. comparative true or false positive detection for 2D-only versus integrated 2D and 3D mammography.

 

Statistical analysis

The sample size was chosen to provide 80% power to detect a difference of 20% in cancer detection, assuming a detection probability of 80% for integrated 2D and 3D screening mammography and 60% for 2D only screening, with a two-sided significance threshold of 5%. Based on the method of Lachenbruch15 for estimating sample size for studies that use McNemar’s test for paired binary data, a minimum of 40 cancers was needed. Because most screens in the participating centres were incident (repeat) screens (75–80%), we used an underlying breast-cancer prevalence of 0.5% to estimate that roughly 7500–8000 screens would be needed to identify 40 cancers in the study population.

We calculated the Wilson CI for the false-positive recall ratio for integrated 2D and 3D screening with conditional recall compared with 2D only screening.16 All of the other analyses were done with SAS/STAT (version 9.2), using exact methods to compute 95% CIs and p-values.
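The authors’ sample-size calculation used Lachenbruch’s method and SAS. As an illustrative cross-check (not the authors’ computation), the short Monte Carlo sketch below estimates the power of McNemar’s exact test with 40 cancers under the stated detection probabilities; the assumption that every cancer seen on 2D is also seen on the integrated 2D and 3D read is mine.

import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
n_cancers, n_sim, hits = 40, 2000, 0
for _ in range(n_sim):
    u = rng.random(n_cancers)
    det_3d = u < 0.80              # detected by integrated 2D+3D reading
    det_2d = u < 0.60              # assumption: 2D detections nested in 3D
    tbl = [[int((det_2d & det_3d).sum()), int((det_2d & ~det_3d).sum())],
           [int((det_3d & ~det_2d).sum()), int((~det_2d & ~det_3d).sum())]]
    if mcnemar(tbl, exact=True).pvalue < 0.05:
        hits += 1
print(f"estimated power with 40 cancers: {hits / n_sim:.2f}")  # close to 0.8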

Role of the funding source

The sponsors of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding author (NH) had full access to all the data in the study and had final responsibility for the decision to submit for publication.

Results

7292 participants with a median age of 58 years (IQR 54–63, range 48–71) were screened between Aug 12, 2011, and June 29, 2012. Roughly 5% of invited women declined integrated 2D and 3D screening and received standard 2D mammography. We present data for 7294 screens because two participants had bilateral cancer (detected with different screen-reading techniques for one participant). We detected 59 breast cancers in 57 participants (52 invasive cancers and seven ductal carcinoma in-situ). Of the invasive cancers, most were invasive ductal (n=37); others were invasive special types (n=7), invasive lobular (n=4), and mixed invasive types (n=4).

Table 1 shows the characteristics of the cancers. Mean tumour size (for the invasive cancers with known exact size) was 13.7 mm (SD 5.8) for cancers detected with both 2D alone and integrated 2D and 3D screening (n=29), and 13.5 mm (SD 6.7) for cancers detected only with integrated 2D and 3D screening (n=13).

 

[Table 1 of the original paper]

Of the 59 cancers, 39 were detected at both 2D and integrated 2D and 3D screening (table 2). 20 cancers were detected with only integrated 2D and 3D screening compared with none detected with only 2D screening (p<0.0001; table 2). 395 screens were false positive (5.5%, 95% CI 5.0–6.0); 181 occurred at both screen-readings, and 141 occurred at 2D screening only compared with 73 at integrated 2D and 3D screening (p<0.0001; table 2). These differences were still significant in sensitivity analyses that excluded the two participants with bilateral cancer (data not shown).


[Table 2 of the original paper]

5.3 cancers per 1000 screens (95% CI 3.8–7.3; table 3) were detected with 2D mammography only versus 8.1 cancers per 1000 screens (95% CI 6.2–10.4) with integrated 2D and 3D mammography (p<0.0001). The incremental cancer detection rate attributable to integrated 2D and 3D screening was 2.7 cancers per 1000 screens (95% CI 1.7–4.2), which is 33.9% (95% CI 22.1–47.4) of the cancers detected in the study population. In a sensitivity analysis that excluded the two participants with bilateral cancer, the estimated incremental cancer detection rate attributable to integrated 2D and 3D screening was 2.6 cancers per 1000 screens (95% CI 1.4–3.8). The stratified results show that integrated 2D and 3D mammography was associated with an incrementally increased cancer detection rate in both age groups and both density categories (tables 3–5). A minority (16.7%) of breasts were of high density (category 3–4), reducing the power of statistical comparisons in this subgroup (table 5). The incremental cancer detection rate was much the same in the low density and high density groups (2.8 per 1000 vs 2.5 per 1000; p=0.84; table 3).
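For readers who want to retrace the arithmetic, the following minimal Python sketch (using statsmodels rather than the authors’ SAS code, so the choice of exact CI method is an assumption) recomputes the paired McNemar comparison and the detection rates per 1000 screens from the counts reported above.

from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.proportion import proportion_confint

SCREENS = 7294                         # screens analysed (two bilateral cancers)
both, only_2d3d, only_2d = 39, 20, 0   # cancers by which read(s) detected them

# McNemar's exact test uses only the discordant detections (20 vs 0)
table = [[both, only_2d], [only_2d3d, 0]]
print(mcnemar(table, exact=True))      # p < 0.0001, as reported

# Detection rates per 1000 screens with exact (Clopper-Pearson) 95% CIs
for label, k in [("2D only", both + only_2d), ("integrated 2D+3D", both + only_2d3d)]:
    lo, hi = proportion_confint(k, SCREENS, method="beta")
    print(f"{label}: {1000 * k / SCREENS:.1f} per 1000 "
          f"(95% CI {1000 * lo:.1f}-{1000 * hi:.1f})")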


[Table 3 of the original paper]

[Tables 4 and 5 of the original paper]

Overall recall—any recall resulting in true or false positive screens—was 6.2% (95% CI 5.7–6.8), and the false-positive rate for the 7235 screens of participants who did not have breast cancer was 5.5% (5.0–6.0). Table 6 shows the contribution to false-positive recalls from 2D mammography only, integrated 2D and 3D mammography only, and both, and the estimated number of false positives if positive integrated 2D and 3D mammography was a condition for recall (positive 2D only not recalled). Overall, more of the false-positive rate was driven by 2D mammography only than by integrated 2D and 3D, although almost half of the false-positive rate was a result of false positives recalled at both screen-reading phases (table 6). The findings were much the same when stratified by age and breast density (table 6). Had a conditional recall rule been applied, we estimate that the false-positive rate would have been 3.5% (95% CI 3.1–4.0%; table 6) and could have potentially prevented 68 of the 395 false positives (a reduction of 17.2%; 95% CI 13.6–21.3). The ratio between the number of false positives with integrated 2D and 3D screening with conditional recall (n=254) versus 2D only screening (n=322) was 0.79 (95% CI 0.71–0.87).
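The conditional-recall accounting is easy to misread, so here is the bookkeeping spelled out as a short Python sketch built from the counts quoted above; the variable names are mine.

# False-positive recalls by which reading phase(s) flagged them
fp_both, fp_2d_only, fp_2d3d_only = 181, 141, 73
n_no_cancer = 7235                                    # screens without breast cancer

fp_study = fp_both + fp_2d_only + fp_2d3d_only        # 395: recall on either read (study rule)
fp_2d_screening = fp_both + fp_2d_only                # 322: conventional 2D-only screening
fp_conditional = fp_both + fp_2d3d_only               # 254: recall only if 2D+3D positive

print(fp_conditional / n_no_cancer)                   # 0.035 -> the 3.5% rate
print(fp_2d_screening - fp_conditional)               # 68 false positives prevented
print((fp_2d_screening - fp_conditional) / fp_study)  # 0.172 -> the 17.2% reduction
print(fp_conditional / fp_2d_screening)               # 0.79 -> the reported ratio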

Discussion

Our study showed that integrated 2D and 3D mammography screening significantly increases detection of breast cancer compared with conventional mammography screening. There was consistent evidence of an incremental improvement in detection from integrated 2D and 3D mammography across age-group and breast density strata, although the analysis by breast density was limited by the low number of women with high-density breasts.

One should note that we investigated comparative cancer detection, and not absolute screening sensitivity. By integrating 2D and 3D mammography using the study screen-reading protocol, 1% of false-positive recalls resulted from 2D and 3D screen-reading only (table 6). However, significantly more false positives resulted from 2D only mammography compared with integrated 2D and 3D mammography, both overall and in the stratified analyses. Application of a conditional recall rule would have resulted in a false-positive rate of 3.5% instead of the actual false-positive rate of 5.5%. The estimated false positive recall ratio of 0.79 for integrated 2D and 3D screening with conditional recall compared with 2D only screening suggests that integrated 2D and 3D screening could reduce false recalls by roughly a fifth. Had such a condition been adopted, none of the cancers detected in the study would have been missed because no cancers were detected by 2D mammography only, although this result might be because our design allowed an independent read for 2D only mammography whereas the integrated 2D and 3D read was an interpretation of a combination of 2D and 3D imaging. We do not recommend that such a conditional recall rule be used in breast-cancer screening until our findings are replicated in other mammography screening studies—STORM involved double-reading by experienced breast radiologists, and our results might not apply to other screening settings. Using a test set of 130 mammograms, Wallis and colleagues7 report that adding tomosynthesis to 2D mammography increased the accuracy of inexperienced readers (but not of experienced readers), therefore having experienced radiologists in STORM could have underestimated the effect of integrated 2D and 3D screen-reading.

No other population screening trials of integrated 2D and 3D mammography have reported final results (panel); however, an interim analysis of the Oslo trial,17 a large population screening study, has shown that integrated 2D and 3D mammography substantially increases detection of breast cancer. The Oslo study investigators screened women with both 2D and 3D mammography, but randomised reading strategies (with vs without 3D mammograms) and adjusted for the different screen-readers,17 whereas we used sequential screen-reading to keep the same reader for each examination. Our estimates for comparative cancer detection and for cancer detection rates are consistent with those of the interim analysis of the Oslo study.17 The applied recall methods differed between the Oslo study (which used an arbitration meeting to decide recall) and the STORM study (we recalled based on a decision by either screen-reader), yet both studies show that 3D mammography reduces false-positive recalls when added to standard mammography.

An editorial in The Lancet18 might indeed signal the closing of a chapter of debate about the benefits and harms of screening. We hope that our work might be the beginning of a new chapter for mammography screening: our findings should encourage new assessments of screening using 2D and 3D mammography, and these should factor in several issues related to our study. First, we compared standard 2D mammography with integrated 2D and 3D mammography; the 3D mammograms were not interpreted independently of the 2D mammograms, therefore 3D mammography only (without the 2D images) might not provide the same results. Our experience with breast tomosynthesis and a review6 of 3D mammography underscore the importance of 2D images in integrated 2D and 3D screen-reading: the 2D images form the basis of the radiologist’s ability to integrate the information from 3D images with that from 2D images. Second, although most screening in STORM was incident screening, the substantial increase in cancer detection rate with integrated 2D and 3D mammography results from the enhanced sensitivity of integrated 2D and 3D screening and is probably also a result of a prevalence effect (ie, the effect of a first screening round with integrated 2D and 3D mammography). We did not assess the effect of repeat (incident) screening with integrated 2D and 3D mammography on cancer detection; it might provide a smaller effect on cancer detection rates than what we report. Third, STORM was not designed to measure biological differences between the cancers detected at integrated 2D and 3D screening and those detected at both screen-reading phases. Descriptive analyses suggest that, generally, breast cancers detected only at integrated 2D and 3D screening had similar features (eg, histology, pathological tumour size, node status) to those detected at both screen-reading phases. Thus, some of the cancers detected only at 2D and 3D screening might represent early detection (and would be expected to receive screening benefit) whereas some might represent over-detection and a harm from screening, as for conventional screening mammography.1,19 The absence of consensus about over-diagnosis in breast-cancer screening should not detract from the importance of our study findings to applied screening research and to screening practice; however, our trial was not done to assess the extent to which integrated 2D and 3D mammography might contribute to over-diagnosis.

The average dose of glandular radiation from the many low-dose projections taken during a single acquisition of 3D mammography is roughly the same as that from 2D mammography.6,20–22 Using integrated 2D and 3D entails both a 2D and a 3D acquisition in one breast compression, which roughly doubles the radiation dose to the breast. Therefore, integrated 2D and 3D mammography for population screening might only be justifiable if improved outcomes were not defined solely in terms of improved detection. For example, it would be valuable to show that the increased detection with integrated 2D and 3D screening leads to reduced interval cancer rates at follow-up. A limitation of our study might be that data for interval cancers were not available; however, because of the paired design we used, future evaluation of interval cancer rates from our study will only apply to breast cancers that were not identified using 2D only or integrated 2D and 3D screening. We know of two patients from our study who have developed interval cancers (follow-up range 8–16 months). We did not get this information from cancer registries and follow-up was very short, so these data should be interpreted very cautiously, especially because interval cancers would be expected to occur in the second year of the standard 2-year interval between screening rounds. Studies of interval cancer rates after integrated 2D and 3D mammography would need to be randomised controlled trials and have a very large sample size. Additionally, the development of reconstructed 2D images from a 3D mammogram23 provides a timely solution to concerns about radiation by providing both the 2D and 3D images from tomosynthesis, eliminating the need for two acquisitions.

We have shown that integrated 2D and 3D mammography in population breast-cancer screening increases detection of breast cancer and can reduce false-positive recalls, depending on the recall strategy. Our results do not warrant an immediate change to breast-screening practice; instead, they show the urgent need for randomised controlled trials of integrated 2D and 3D versus 2D mammography, and for further translational research in breast tomosynthesis. We envisage that future screening trials investigating this issue will include measures of breast cancer detection, and will be designed to assess interval cancer rates as a surrogate endpoint for screening efficacy.

Contributors

SC had the idea for and designed the study, and collected and interpreted data. NH advised on study concepts and methods, analysed and interpreted data, searched the published work, and wrote and revised the report. DB and FC were lead radiologists, recruited participants, collected data, and commented on the draft report. MP, SB, PT, PB, CF, and MV did the screen-reading, collected data, and reviewed the draft report. SM collected data and reviewed the draft report. PM planned the statistical analysis, analysed and interpreted data, and wrote and revised the report.

Conflicts of interest

SC, DB, FC, MP, SB, PT, PB, CF, MV, and SM received assistance from Hologic (Hologic USA; Technologic Italy) in the form of tomosynthesis technology and technical support for the duration of the study, and travel support to attend collaborators’ meetings. NH receives research support from a National Breast Cancer Foundation (NBCF Australia) Practitioner Fellowship, and has received travel support from Hologic to attend a collaborators’ meeting. PM receives research support through Australia’s National Health and Medical Research Council programme grant 633003 to the Screening & Test Evaluation Program.

 

References

1. Independent UK Panel on Breast Cancer Screening. The benefits and harms of breast cancer screening: an independent review. Lancet 2012; 380: 1778–86.

2. Glasziou P, Houssami N. The evidence base for breast cancer screening. Prev Med 2011; 53: 100–102.

3. Autier P, Esserman LJ, Flowers CI, Houssami N. Breast cancer screening: the questions answered. Nat Rev Clin Oncol 2012; 9: 599–605.

4. Baker JA, Lo JY. Breast tomosynthesis: state-of-the-art and review of the literature. Acad Radiol 2011; 18: 1298–310.

5. Helvie MA. Digital mammography imaging: breast tomosynthesis and advanced applications. Radiol Clin North Am 2010; 48: 917–29.

6. Houssami N, Skaane P. Overview of the evidence on digital breast tomosynthesis in breast cancer detection. Breast 2013; 22: 101–08.

7. Wallis MG, Moa E, Zanca F, Leifland K, Danielsson M. Two-view and single-view tomosynthesis versus full-field digital mammography: high-resolution X-ray imaging observer study. Radiology 2012; 262: 788–96.

8. Bernardi D, Ciatto S, Pellegrini M, et al. Prospective study of breast tomosynthesis as a triage to assessment in screening. Breast Cancer Res Treat 2012; 133: 267–71.

9. Michell MJ, Iqbal A, Wasan RK, et al. A comparison of the accuracy of film-screen mammography, full-field digital mammography, and digital breast tomosynthesis. Clin Radiol 2012; 67: 976–81.

10. Skaane P, Gullien R, Bjorndal H, et al. Digital breast tomosynthesis (DBT): initial experience in a clinical setting. Acta Radiol 2012; 53: 524–29.

11. Pellegrini M, Bernardi D, Di MS, et al. Analysis of proportional incidence and review of interval cancer cases observed within the mammography screening programme in Trento province, Italy. Radiol Med 2011; 116: 1217–25.

12. Caumo F, Vecchiato F, Pellegrini M, Vettorazzi M, Ciatto S, Montemezzi S. Analysis of interval cancers observed in an Italian mammography screening programme (2000–2006). Radiol Med 2009; 114: 907–14.

13. Bernardi D, Ciatto S, Pellegrini M, et al. Application of breast tomosynthesis in screening: incremental effect on mammography acquisition and reading time. Br J Radiol 2012; 85: e1174–78.

14. American College of Radiology. ACR BI-RADS: breast imaging reporting and data system, Breast Imaging Atlas. Reston: American College of Radiology, 2003.

15. Lachenbruch PA. On the sample size for studies based on McNemar’s test. Stat Med 1992; 11: 1521–25.

16. Bonett DG, Price RM. Confidence intervals for a ratio of binomial proportions based on paired data. Stat Med 2006; 25: 3039–47.

17. Skaane P, Bandos AI, Gullien R, et al. Comparison of digital mammography alone and digital mammography plus tomosynthesis in a population-based screening program. Radiology 2013; published online Jan 3. http://dx.doi.org/10.1148/radiol.12121373.

18. The Lancet. The breast cancer screening debate: closing a chapter? Lancet 2012; 380: 1714.

19. Biesheuvel C, Barratt A, Howard K, Houssami N, Irwig L. Effects of study methods and biases on estimates of invasive breast cancer overdetection with mammography screening: a systematic review. Lancet Oncol 2007; 8: 1129–38.

20. Tagliafico A, Astengo D, Cavagnetto F, et al. One-to-one comparison between digital spot compression view and digital breast tomosynthesis. Eur Radiol 2012; 22: 539–44.

21. Tingberg A, Fornvik D, Mattsson S, Svahn T, Timberg P, Zackrisson S. Breast cancer screening with tomosynthesis—initial experiences. Radiat Prot Dosimetry 2011; 147: 180–83.

22. Feng SS, Sechopoulos I. Clinical digital breast tomosynthesis system: dosimetric characterization. Radiology 2012; 263: 35–42.

23. Gur D, Zuley ML, Anello MI, et al. Dose reduction in digital breast tomosynthesis (DBT) screening using synthetically reconstructed projection images: an observer performance study. Acad Radiol 2012; 19: 166–71.

A very good and down-to-earth comment on this article was made by Jules H. Sumkin, who disclosed that he is an unpaid member of the scientific advisory board of Hologic Inc and has a PI research agreement between the University of Pittsburgh and Hologic Inc.

The results of the study by Stefano Ciatto and colleagues1 are consistent with recently published prospective,2,3 retrospective,4 and observational5 reports on the same topic. The study1 had limitations, including the fact that the same radiologist interpreted screens sequentially on the same day, without cross-balancing which examination was read first. Also, the false-negative findings for integrated 2D and 3D mammography, and therefore the absolute benefit from the procedure, could not be adequately assessed, because cases recalled by 2D mammography alone (141 cases) did not result in a single detection of an additional cancer, while recalls from integrated 2D and 3D mammography alone (73 cases) resulted in the detection of 20 additional cancers. Nevertheless, the results are in strong agreement with other studies reporting substantial performance improvements when screening is done with integrated 2D and 3D mammography.

I disagree with the conclusion of the study with regard to the urgent need for randomised clinical trials of integrated 2D and 3D versus 2D mammography. First, to assess differences in mortality as a result of an imaging-based diagnostic method, a randomised trial would require several repeated screens by the same method in each study group, and the strong results from all studies to date will probably result in substantial crossover and self-selection biases over time. Second, because of the high survival rate (or low mortality rate) of breast cancer, the study would require long follow-up times of at least 10 years. In a rapidly changing environment in terms of improvements in screening technologies and therapeutic interventions, the avoidance of biases is likely to be very difficult, if not impossible. The use of the number of interval cancers and possible shifts in stage at detection, while appropriately accounting for confounders, would be almost as daunting a task. Third, the imaging detection of cancer is only the first step in many management decisions and interventions that can affect outcome. The appropriate control of biases related to patient management is highly unlikely. These arguments, in addition to the existing reports showing substantial improvements in cancer detection, particularly of invasive cancers, with a simultaneous reduction in recall rates, support the argument that a randomised trial is neither necessary nor warranted. The current technology might be obsolete by the time the results of an appropriately conducted and analysed randomised trial are made public.

In order to better link the information given by “scientific” papers to the context of patients’ daily reality, I suggest spending some time reviewing a few of the videos at the links below:

  1. The following group of videos is featured on a website by Siemens. Nevertheless, the presenting radiologists are leading practitioners who affect thousands of lives every year – What the experts say about tomosynthesis – click on ECR 2013.
  2. Breast Tomosynthesis in Practice – part of a commercial ad by Washington Radiology Associates featured on the website of Diagnostic Imaging. This practice, too, affects thousands of lives in the Washington area every year.

The pivotal questions yet to be answered are:

  1. What should be done in order to translate an increase in sensitivity and early detection into a decrease in mortality?

  2. What is the price of such an increase in sensitivity in terms of quality of life and health-care costs, and is it worthwhile to pay?

An article that positively summarises the experience of introducing tomosynthesis into routine screening practice was recently published in AJR:

Implementation of Breast Tomosynthesis in a Routine Screening Practice: An Observational Study

Stephen L. Rose, Andra L. Tidwell, Louis J. Bujnoch, Anne C. Kushwaha, Amy S. Nordmann and Russell Sexton, Jr.

Affiliation: All authors: TOPS Comprehensive Breast Center, 17030 Red Oak Dr, Houston, TX 77090.

Citation: American Journal of Roentgenology. 2013;200:1401-1408

 

ABSTRACT:

OBJECTIVE. Digital mammography combined with tomosynthesis is gaining clinical acceptance, but data are limited that show its impact in the clinical environment. We assessed the changes in performance measures, if any, after the introduction of tomosynthesis systems into our clinical practice.

MATERIALS AND METHODS. In this observational study, we used verified practice- and outcome-related databases to compute and compare recall rates, biopsy rates, cancer detection rates, and positive predictive values for six radiologists who interpreted screening mammography studies without (n = 13,856) and with (n = 9499) the use of tomosynthesis. Two-sided analyses (significance declared at p < 0.05) accounting for reader variability, age of participants, and whether the examination in question was a baseline were performed.

RESULTS. For the group as a whole, the introduction and routine use of tomosynthesis resulted in significant observed changes in recall rates from 8.7% to 5.5% (p < 0.001), nonsignificant changes in biopsy rates from 15.2 to 13.5 per 1000 screenings (p = 0.59), and cancer detection rates from 4.0 to 5.4 per 1000 screenings (p = 0.18). The invasive cancer detection rate increased from 2.8 to 4.3 per 1000 screening examinations (p = 0.07). The positive predictive value for recalls increased from 4.7% to 10.1% (p < 0.001).

CONCLUSION. The introduction of breast tomosynthesis into our practice was associated with a significant reduction in recall rates and a simultaneous increase in breast cancer detection rates.
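As a rough illustration of the headline recall-rate change, the unadjusted comparison can be approximated with a standard two-proportion test. The Python sketch below reconstructs approximate recall counts from the reported rates and study sizes, so it ignores the reader, age and baseline adjustments the authors actually performed and is not their analysis.

import numpy as np
from statsmodels.stats.proportion import proportions_ztest

screens = np.array([13856, 9499])                    # without vs. with tomosynthesis
recalls = np.round(np.array([0.087, 0.055]) * screens).astype(int)

# Unadjusted two-proportion z-test of the recall rates
stat, p = proportions_ztest(recalls, screens)
print(f"recall rates: {recalls / screens}, z = {stat:.1f}, p = {p:.2g}")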

Here are the facts, in tables and pictures, from this article:

[Tables 1–4 and example images from the AJR article]

Other articles related to the management of breast cancer were published on this Open Access Online Scientific Journal:

Automated Breast Ultrasound System (‘ABUS’) for full breast scanning: The beginning of structuring a solution for an acute need!

Introducing smart-imaging into radiologists’ daily practice.

Not applying evidence-based medicine drives up the costs of screening for breast-cancer in the USA.

New Imaging device bears a promise for better quality control of breast-cancer lumpectomies – considering the cost impact

Harnessing Personalized Medicine for Cancer Management, Prospects of Prevention and Cure: Opinions of Cancer Scientific Leaders @ http://pharmaceuticalintelligence.com

Predicting Tumor Response, Progression, and Time to Recurrence

“The Molecular pathology of Breast Cancer Progression”

Personalized medicine gearing up to tackle cancer

What could transform an underdog into a winner?

Mechanism involved in Breast Cancer Cell Growth: Function in Early Detection & Treatment

Nanotech Therapy for Breast Cancer

A Strategy to Handle the Most Aggressive Breast Cancer: Triple-negative Tumors

Breakthrough Technique Images Breast Tumors in 3-D With Great Clarity, Reduced Radiation

Closing the Mammography gap

Imaging: seeing or imagining? (Part 1)

Imaging: seeing or imagining? (Part 2)

Read Full Post »

Could Teleradiology contribute to “cross-borders” standardization of imaging protocols in cancer management?

Writer: Dror Nir, PhD

Teleradiology has been accepted as a legitimate medical service for several years now. It has many clinical utilities worldwide, ranging from expert or second-opinion services to comprehensive remote management of radiology departments in hospitals. Rapid advances in web-technology infrastructure have eliminated the barriers to transferring, reading and reporting radiology images from remote locations. Today’s main controversies relate to issues that are also relevant to “in-house” radiology departments, e.g. clinical governance, quality assessment, workflow and medico-legal issues.

The concept of teleradiology is as simple as the chart below shows.

[Chart: the teleradiology workflow]

Images are automatically uploaded from the imaging system itself or from the institution’s PACS. Reports are sent back to the “client” within a few hours.
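For the technically inclined, here is a minimal Python sketch of the upload step in the chart above: pushing one image to a remote reading service over a DICOM C-STORE using pydicom and pynetdicom. The host, port, AE titles and file name are hypothetical, and exact API details may vary between pynetdicom versions.

from pydicom import dcmread
from pynetdicom import AE, StoragePresentationContexts

ae = AE(ae_title="LOCAL_PACS")
ae.requested_contexts = StoragePresentationContexts   # offer standard storage SOP classes

# Hypothetical teleradiology endpoint
assoc = ae.associate("teleradiology.example.org", 11112, ae_title="REMOTE_READ")
if assoc.is_established:
    ds = dcmread("mammogram.dcm")                     # one image of the study
    status = assoc.send_c_store(ds)
    print("C-STORE status: 0x{0:04X}".format(status.Status))
    assoc.release()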

The value for the users goes well beyond mere image interpretation, for example:

  • On-site physicians have more time to spend with patients.
  • Offering of additional subspecialty/multidisciplinary expertise.
  • Comprehensive image-interpretation and reporting service at reduced time-span and reduced cost
  • Sharing images and reports with referring physicians and patients with no effort.

As an example of "cross-border" standardization of a major existing radiology service, let's consider the use-case of centralized review of mammography images. I know, quite ambitious! And politically very challenging!

But it seems to be technologically and clinically feasible, at least according to the publication quoted below:

Teleradiology with uncompressed digital mammograms: Clinical assessment

Julia Fruehwald-Pallamar, Marion Jantsch, Katja Pinker, Ricarda Hofmeister, Friedrich Semturs, Kathrin Piegler, Daniel Staribacher, Michael Weber, Thomas H. Helbich

published online 13 April 2012.

Abstract 

Purpose

The purpose of our study was to demonstrate the feasibility of sending uncompressed digital mammograms in a teleradiologic setting without loss of information by comparing image quality, lesion detection, and BI-RADS assessment.

Materials and methods

CDMAM phantoms were sent bidirectionally to two hospitals via the network. For the clinical aspect of the study, 200 patients were selected based on the BI-RADS system: 50% BI-RADS I and II, and 50% BI-RADS IV and V. Two hundred digital mammograms (800 views) were sent to two different institutions via a teleradiology network. Three readers evaluated those 200 mammography studies at institution 1, where the images originated, and at the two other institutions (institutions 2 and 3) to which the images were sent. The readers assessed image quality, lesion detection, and BI-RADS classification.

Results

Automatic readout showed that CDMAM image quality was identical before and after transmission. The image quality of the 200 studies (600 mammograms in total) was rated as very good or good in 90–97% before and after transmission. Depending on the institution and the reader, only 2.5–9.5% of all studies were rated as poor. The congruence of the readers with respect to the final BI-RADS assessment ranged from 90% to 91% for institution 1 vs. institution 2, and from 86% to 92% for institution 1 vs. institution 3. The agreement was even higher for conformity of content (BI-RADS I or II and BI-RADS IV or V). Reader agreement in the three different institutions with regard to the detection of masses and calcifications, as well as BI-RADS classification, was very good (κ: 0.775–0.884). Results for interreader agreement were similar.

Conclusion

Uncompressed digital mammograms can be transmitted to different institutions with different workstations, without loss of information. The transmission process does not significantly influence image quality, lesion detection, or BI-RADS rating.

Keywords: Breast cancer; Imaging; Digital mammography; Teleradiology; Comparative studies
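
The κ values quoted in the results are Cohen's kappa, which corrects raw inter-reader agreement for the agreement expected by chance. Here is a minimal sketch of the computation; the BI-RADS label vectors below are invented purely for illustration, not taken from the study:

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e)."""
    a, b = np.asarray(a), np.asarray(b)
    p_o = np.mean(a == b)                 # observed agreement
    # chance agreement from each rater's marginal label frequencies
    p_e = sum(np.mean(a == l) * np.mean(b == l) for l in np.union1d(a, b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical final BI-RADS categories assigned at two institutions
site1 = [1, 2, 2, 4, 5, 1, 4, 4, 2, 5]
site2 = [1, 2, 2, 4, 4, 1, 4, 5, 2, 5]
print(f"kappa = {cohens_kappa(site1, site2):.3f}")   # ~0.730
```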

 

What could be the benefits from centralizing mammography interpretation through Teleradiology?

  • A baseline protocol that could enable pooling large numbers of cases from different populations without having to worry about differences in reporters' practice and experience. This would enable better epidemiological studies of this disease.
  • A quantified, real-time measure of the relative imaging quality between institutions could help bring all screening services up to a maximal level.
  • Development of a comprehensive training program for radiologists involved in mammography-based screening for breast cancer.
  • Better information sharing between all players involved in each individual patient's pathway could improve clinical decision-making and patient support.
  • Lower costs of screening programs, disease treatment, and follow-up.

Who could organize and carry out such an operation?

There are many reputable large university hospitals already offering Teleradiology services. They are supported by government funds, and the service itself is profitable. I'm not listing any of them for obvious reasons, but googling "teleradiology" will bring up many results.

Read Full Post »

Imaging of Non-tumorous and Tumorous Human Brain Tissues

Reporter and Curator: Dror Nir, PhD

The point of interest in the article I feature below is that it represents a potential building block in a future system that will use full-field optical coherence tomography during brain surgery to improve the accuracy of cancer lesion resection. The article features promising results for differentiating tumor from normal brain tissue in large samples (on the order of 1–3 cm2), offering images with spatial resolution comparable to histological analysis, sufficient to distinguish microstructures of the human brain parenchyma. Easy to say, and hard to make… :) The goal: an intraoperative apparatus to guide the surgeon in real time during resection of brain tumors.

 

Imaging of non-tumorous and tumorous human brain tissues with full-field optical coherence tomography 

Open Access Article

Osnath Assayag (a), Kate Grieve (a), Bertrand Devaux (b,c), Fabrice Harms (a), Johan Pallud (b,c), Fabrice Chretien (b,c), Claude Boccara (a), Pascale Varlet (b,c)

a. Inserm U979 "Wave Physics For Medicine", ESPCI-ParisTech – Institut Langevin, 1 rue Jussieu, 75005 Paris, France

b. Centre Hospitalier Sainte-Anne, 1 rue Cabanis, 75014 Paris, France

c. University Paris Descartes, France

Abstract

A prospective study was performed on neurosurgical samples from 18 patients to evaluate the use of full-field optical coherence tomography (FF-OCT) in brain tumor diagnosis.

FF-OCT captures en face slices of tissue samples at 1 μm resolution in 3D to a penetration depth of around 200 μm. A 1 cm2 specimen is scanned at a single depth and processed in about 5 min. This rapid imaging process is non-invasive and requires neither contrast agent injection nor tissue preparation, which makes it particularly well suited to medical imaging applications.

Temporal chronic epileptic parenchyma and brain tumors such as meningiomas, low-grade and high-grade gliomas, and choroid plexus papilloma were imaged. A subpopulation of neurons, myelin fibers and CNS vasculature were clearly identified. Cortex could be discriminated from white matter, but individual glial cells such as astrocytes (normal or reactive) or oligodendrocytes were not observable.

This study reports for the first time on the feasibility of using FF-OCT in a real-time manner as a label-free non-invasive imaging technique in an intraoperative neurosurgical clinical setting to assess tumorous glial and epileptic margins.

Abbreviations

  • FF-OCT, full field optical coherence tomography;
  • OCT, optical coherence tomography

Keywords

Optical imaging; Digital pathology; Brain imaging; Brain tumor; Glioma

1. Introduction

1.1. Primary CNS tumors

Primary central nervous system (CNS) tumors represent a heterogeneous group of tumors with benign, malignant and slow-growing evolution. In France, 5000 new cases of primary CNS tumors are detected annually (Rigau et al., 2011). Despite considerable progress in diagnosis and treatment, the survival rate following a malignant brain tumor remains low and 3000 deaths are reported annually from CNS tumors in France (INCa, 2011). Overall survival from brain tumors depends on the complete resection of the tumor mass, as identified through postoperative imaging, associated with updated adjuvant radiation therapy and chemotherapy regimen for malignant tumors (Soffietti et al., 2010). Therefore, there is a need to evaluate the completeness of the tumor resection at the end of the surgical procedure, as well as to identify the different components of the tumor intraoperatively, i.e. tumor tissue, necrosis, infiltrated parenchyma (Kelly et al., 1987). In particular, the persistence of non-visible tumorous tissue or isolated tumor cells infiltrating brain parenchyma may lead to additional resection.

For low-grade tumors located close to eloquent brain areas, a maximally safe resection that spares functional tissue warrants the current use of intraoperative techniques that guide a more complete tumor resection. During awake surgery, speech or fine motor skills are monitored, while cortical and subcortical stimulations are performed to identify functional areas (Sanai et al., 2008). Intraoperative MRI provides images of the surgical site as well as tomographic images of the whole brain that are sufficient for an approximate evaluation of the abnormal excised tissue, but offers low resolution (typically 1 to 1.5 mm) and produces artifacts at the air-tissue boundary of the surgical site.

Histological and immunohistochemical analyses of neurosurgical samples remain the current gold standard for analyzing tumorous tissue, owing to their sub-cellular resolution and high contrast. However, these methods require lengthy (12 to 72 h), complex, multi-step processing and the use of carcinogenic chemical products, which would not be technically possible intraoperatively. In addition, the number of histological slides that can be reviewed and analyzed by a pathologist is limited, which in turn limits the number and size of sampled locations on the tumor or the surrounding tissue.

To obtain histology-like information in a short time period, intraoperative cytological smear tests are performed. However tissue architecture information is thereby lost and the analysis is carried out on only a limited area of the sample (1 mm × 1 mm).

Intraoperative optical imaging techniques are recently developed high resolution imaging modalities that may help the surgeon to identify the persistence of tumor tissue at the resection boundaries. Using a conventional operating microscope with Xenon lamp illumination gives an overall view of the surgical site, but performance is limited by the poor discriminative capacity of the white light illumination at the surgical site interface. Better discrimination between normal and tumorous tissues has been obtained using fluorescence properties of tumor cells labeled with preoperatively administered 5-ALA. Tumor tissue shows a strong ALA-induced PPIX fluorescence at 635 nm and 704 nm when the operative field is illuminated with a 440 nm-filtered lamp. More complete resections of high-grade gliomas have been demonstrated using 5-ALA fluorescence guidance (Stummer et al., 2000), however brain parenchyma infiltrated by isolated tumor cells is not fluorescent, reducing the interest of this technique when resecting low-grade gliomas.

Refinement of this induced fluorescence technique has been achieved using a confocal microscope and intraoperative injection of sodium fluorescein. A 488 nm laser illuminates the operative field and tissue contact analysis is performed using a handheld surgical probe (field of view less than 0.5 × 0.5 mm) which scans the fluorescence of the surgical interface at the 505–585 nm band. Fluorescent isolated tumor cells are clearly identified at depths from 0 to 500 μm from the resection border (Sanai et al., 2011), demonstrating the potential of this technique in low-grade glioma resection.

Reviewing the state-of-the-art, a need is identified for a quick and reliable method of providing the neurosurgeon with architectural and cellular information without the need for injection or oral intake of exogenous markers in order to guide the neurosurgeon and optimize surgical resections.

1.2. Full-field optical coherence tomography

Introduced in the early 1990s (Huang et al., 1991), optical coherence tomography (OCT) uses interference to precisely locate light deep inside tissue. The photons coming from the small volume of interest are distinguished from light scattered by the other parts of the sample by the use of an interferometer and a light source with short coherence length. Only the portion of light with the same path length as the reference arm of the interferometer, to within the coherence length of the source (typically a few μm), will produce interference. A two-dimensional B-scan image is captured by scanning. Recently, the technique has been improved, mainly in terms of speed and sensitivity, through spectral encoding (De Boer et al., 2003; Leitgeb et al., 2003; Wojtkowski et al., 2002).

A recent OCT technique called full-field optical coherence tomography (FF-OCT) enables both a large field of view and high resolution over the full field of observation (Dubois et al., 2002; Dubois et al., 2004). This allows navigation across the wide field image to follow the morphology at different scales and different positions. FF-OCT uses a simple halogen or light-emitting diode (LED) light source for full field illumination, rather than the lasers and point-by-point scanning components required for conventional OCT. The illumination level is low enough to maintain the sample integrity: the power incident on the sample is less than 1 mW/mm2 using deep red and near infrared light. FF-OCT provides the highest OCT 3D resolution of 1.5 × 1.5 × 1 μm3 (X × Y × Z) on unprepared label-free tissue samples down to depths of approximately 200 μm–300 μm (tissue-dependent) over a wide field of view that allows digital zooming down to the cellular level. Interestingly, it produces en face images in the native field of view (rather than the cross-sectional images of conventional OCT), which mimic the histology process, thereby facilitating the reading of images by pathologists. Moreover, as for conventional OCT, it does not require tissue slicing or modification of any kind (i.e. no tissue fixation, coloration, freezing or paraffin embedding). FF-OCT image acquisition and processing time is less than 5 min for a typical 1 cm2 sample (Assayag et al., in press) and the imaging performance has been shown to be equivalent in fresh or fixed tissue (Assayag et al., in press; Dalimier and Salomon, 2012). In addition, FF-OCT intrinsically provides digital images suitable for telemedicine.

Numerous studies have been published over the past two decades demonstrating the suitability of OCT for in vivo or ex vivo diagnosis. OCT imaging has been previously applied in a variety of tissues such as the eye (Grieve et al., 2004; Swanson et al., 1993), upper aerodigestive tract (Betz et al., 2008; Chen et al., 2007; Ozawa et al., 2009), gastrointestinal tract (Tearney et al., 1998), and breast tissue and lymph nodes (Adie and Boppart, 2009; Boppart et al., 2004; Hsiung et al., 2007; Luo et al., 2005; Nguyen et al., 2009; Zhou et al., 2010; Zysk and Boppart, 2006).

In the CNS, published studies that evaluate OCT (Bizheva et al., 2005; Böhringer et al., 2006; Böhringer et al., 2009; Boppart, 2003; Boppart et al., 1998) using time-domain (TD) or spectral-domain (SD) OCT systems had insufficient resolution (10 to 15 μm axial) for visualization of fine morphological details. A study of 9 patients with gliomas carried out using a TD-OCT system led to classification of the samples as malignant versus benign (Böhringer et al., 2009). However, the differentiation of tissues was achieved by considering the relative attenuation of the signal returning from the tumorous zones in relation to that returning from healthy zones. The classification was not possible by real recognition of CNS microscopic structures. Another study showed images of brain microstructures obtained with an OCT system equipped with an ultra-fast laser that offered axial and lateral resolution of 1.3 μm and 3 μm respectively (Bizheva et al., 2005). In this way, it was possible to differentiate malignant from healthy tissue by the presence of blood vessels, microcalcifications and cysts in the tumorous tissue. However, the images obtained were small (2 mm × 1 mm), captured on fixed tissue only, and required an expensive large laser, thereby limiting the possibility of clinical implementation.

Other studies have focused on animal brain. In rat brain in vivo, it has been shown that optical coherence microscopy (OCM) can reveal neuronal cell bodies and myelin fibers (Srinivasan et al., 2012), while FF-OCT can also reveal myelin fibers (Ben Arous et al., 2011), and movement of red blood cells in vessels (Binding et al., 2011).

En face images captured with confocal reflectance microscopy can closely resemble FF-OCT images. For example, a prototype system used by Wirth et al. (2012) achieves lateral and axial resolution of 0.9 μm and 3 μm respectively. However, the small field size prevents viewing of the wide-field architecture, and the slow acquisition speed prohibits mosaicking. In addition, the poorer axial resolution and lower penetration depth of confocal imaging in comparison to FF-OCT limit the ability to reconstruct cross-sections from the confocal image stack.

This study is the first to analyze non-tumorous and tumorous human brain tissue samples using FF-OCT.

2. Materials and methods

2.1. Instrument

The experimental arrangement of FF-OCT (Fig. 1A) is based on a configuration that is referred to as a Linnik interferometer (Dubois et al., 2002). A halogen lamp is used as a spatially incoherent source to illuminate the full field of an immersion microscope objective at a central wavelength of 700 nm, with spectral width of 125 nm. The signal is extracted from the background of incoherent backscattered light using a phase-shifting method implemented in custom-designed software. This study was performed on a commercial FF-OCT system (LightCT, LLTech, France).
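
Two of the numbers above can be tied together with a standard back-of-the-envelope calculation: for a roughly Gaussian spectrum, the coherence length that sets the axial resolution is about (2 ln 2 / π)·λ₀²/Δλ. And while the exact demodulation used by the commercial system is not spelled out here, the classic four-frame phase-shifting scheme illustrates how the interference amplitude is separated from the incoherent background. A minimal sketch of both, with the four-frame scheme as an assumed example:

```python
import numpy as np

# --- Theoretical axial resolution from the quoted source spectrum ---
lam0, dlam = 700e-9, 125e-9            # central wavelength, spectral width (m)
axial_res = (2 * np.log(2) / np.pi) * lam0**2 / dlam
print(f"axial resolution ~ {axial_res * 1e6:.1f} um in air")   # ~1.7 um
# (in an immersion medium the effective value shrinks by the index n ~ 1.3)

# --- Four-frame phase-shifting demodulation (one common scheme) ---
rng = np.random.default_rng(0)
A = rng.random((64, 64))               # true backscattered amplitude
phi = rng.uniform(0, 2 * np.pi, (64, 64))
bg = 10.0                              # incoherent background light
frames = [bg + A * np.cos(phi + k * np.pi / 2) for k in range(4)]

amp = 0.5 * np.sqrt((frames[0] - frames[2])**2 + (frames[1] - frames[3])**2)
print("max reconstruction error:", np.abs(amp - A).max())  # ~0: bg rejected
```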

 

[Fig. 1: (A) FF-OCT experimental setup (Linnik interferometer); (B) compact system; (C, D) sample holder]

Capturing “en face” images allows easy comparison with histological sections. The resolution, pixel number and sampling requirements result in a native field of view that is limited to about 1 mm2. The sample is moved on a high precision mechanical platform and a number of fields are stitched together (Beck et al., 2000) to display a significant field of view. The FF-OCT microscope is housed in a compact setup (Fig. 1B) that is about the size of a standard optical microscope (310 × 310 × 800 mm L × W × H).
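
As an illustration of the stitching step, here is a deliberately simplified sketch that places individually acquired fields onto a larger canvas using known stage positions and averages the overlaps; a real mosaicking pipeline (e.g. Beck et al., 2000) additionally registers neighboring fields and blends the seams:

```python
import numpy as np

def stitch(fields, positions, canvas_shape):
    """Place each field at its (row, col) stage position; average overlaps."""
    canvas = np.zeros(canvas_shape)
    weight = np.zeros(canvas_shape)
    for f, (r, c) in zip(fields, positions):
        h, w = f.shape
        canvas[r:r + h, c:c + w] += f
        weight[r:r + h, c:c + w] += 1
    return canvas / np.maximum(weight, 1)

# Four ~1 mm^2 fields (512 px each here) laid out 2x2 with ~10% overlap
rng = np.random.default_rng(1)
fields = [rng.random((512, 512)) for _ in range(4)]
step = 460                                  # 512 px minus ~10% overlap
positions = [(0, 0), (0, step), (step, 0), (step, step)]
mosaic = stitch(fields, positions, (step + 512, step + 512))
print(mosaic.shape)                         # (972, 972)
```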

2.2. Imaging protocol

All images presented in this study were captured on fresh brain tissue samples from patients operated on at the Neurosurgery Department of Sainte-Anne Hospital, Paris. Informed and written consent was obtained in all cases following the standard procedure at Sainte-Anne Hospital from patients who were undergoing surgical intervention. Fresh samples were collected from the operating theater immediately after resection and sent to the pathology department. A pathologist dissected each sample to obtain a 1–2 cm2 piece and made a macroscopic observation to orientate the specimen in order to decide which side to image. The sample was immersed in physiological serum, placed in a cassette, numbered, and brought to the FF-OCT imaging facility in a nearby laboratory (15 min distant) where the FF-OCT images were captured. The sample was placed in a custom holder with a coverslip on top (Fig. 1C, D). The sample was raised on a piston to rest gently against the coverslip in order to flatten the surface and so optimize the image capture. The sample is automatically scanned under a 10 × 0.3 numerical aperture (NA) immersion microscope objective. The immersion medium is a silicone oil of refractive index close to that of water, chosen to optimize index matching and slow evaporation. The entire area of each sample was imaged at a depth of 20 μm beneath the sample surface. This depth has been reported to be optimal for comparison of FF-OCT images to histology images in a previous study on breast tissue (Assayag et al., in press). There are several reasons for the choice of imaging depth: firstly, histology was also performed at approximately 20 μm from the edge of the block, i.e. the depth at which typically the whole tissue surface begins to be revealed. Secondly, FF-OCT signal is attenuated with depth due to multiple scattering in the tissue, and resolution is degraded with depth due to aberrations. The best FF-OCT images are therefore captured close to the surface, and the best matching is achieved by attempting to image at a similar depth as the slice in the paraffin block. It was also possible to capture image stacks down to several hundred μm in depth (where penetration depth is dependent on tissue type), for the purpose of reconstructing a 3D volume and imaging layers of neurons and myelin fibers. An example of such a stack in the cerebellum is shown as a video (Video 2) in supplementary material. Once FF-OCT imaging was done, each sample was immediately fixed in formaldehyde and returned to the pathology department where it underwent standard processing in order to compare the FF-OCT images to histology slides.

2.3. Matching FF-OCT to histology

The intention in all cases was to match as closely as possible to histology. FF-OCT images were captured 20 μm below the surface. Histology slices were captured 20 μm from the edge of the block. However the angle of the inclusion is hard to control and so some difference in the angle of the plane always exists when attempting matching. Various other factors that can cause differences stem from the histology process — fixing, dehydrating, paraffin inclusion etc. all alter the tissue and so precise correspondence can be challenging. Such difficulties are common in attempting to match histology to other imaging modalities (e.g. FF-OCT Assayag et al., in press; OCT Bizheva et al., 2005; confocal microscopy Wirth et al., 2012).

An additional parameter in the matching process is the slice thickness. Histology slides were 4 μm in thickness while FF-OCT optical slices have a 1 μm thickness. The finer slice of the FF-OCT image meant that lower cell densities were perceived on the FF-OCT images (in those cases where individual cells were seen, e.g. neurons in the cortex). This difference in slice thickness affects the accuracy of the FF-OCT to histology match. In order to improve matching, it would have been possible to capture four FF-OCT slices in 1 μm steps and sum the images to mimic the histology thickness. However, this would effectively degrade the resolution, which was deemed undesirable in evaluating the capacities of the FF-OCT method.
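
The thickness-matching option described above is simple to express in code: averaging four consecutive 1 μm en face sections approximates one 4 μm histology section, at the cost of axial resolution. A minimal sketch on a synthetic stack:

```python
import numpy as np

# Four consecutive 1-um en face optical sections (synthetic data)
stack = np.random.default_rng(2).random((4, 256, 256))
histology_like = stack.mean(axis=0)  # ~4-um effective section, blurred axially
print(histology_like.shape)          # (256, 256)
```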

3. Results

18 samples from 18 adult patients (4 males, 14 females), age range 19–81 years, were included in the study: 1 mesial temporal lobe epilepsy and 1 cerebellum adjacent to a pulmonary adenocarcinoma metastasis (serving as the non-tumor brain samples), 7 diffuse supratentorial gliomas (4 WHO grade II, 3 WHO grade III), 5 meningiomas, 1 hemangiopericytoma, and 1 choroid plexus papilloma. Patient characteristics are detailed in Table 1.

 

[Table 1: Patient characteristics]

3.1. FF-OCT imaging identifies myelinated axon fibers, neuronal cell bodies and vasculature in the human epileptic brain and cerebellum

The cortex and the white matter are clearly distinguished from one another (Fig. 2). Indeed, a subpopulation of neuronal cell bodies (Fig. 2B, C) as well as myelinated axon bundles leading to the white matter could be recognized (Fig. 2D, E). Neuronal cell bodies appear as dark triangles (Fig. 2C) in relation to the bright surrounding myelinated environment. The FF-OCT signal is produced by backscattered photons from tissues of differing refractive indices. The number of photons backscattered from the nuclei in neurons appears to be too few to produce a signal that allows their differentiation from the cytoplasm, and therefore the whole of the cell body (nucleus plus cytoplasm) appears dark.

[Fig. 2: Cortex and white matter in FF-OCT, with neuronal cell bodies, myelinated axon bundles, and vasculature]

 

Myelinated axons are numerous, well discernible as small fascicles and appear as bright white lines (Fig. 2E). As the cortex does not contain many myelinated axons, it appears dark gray. Brain vasculature is visible (Fig. 2F and G), and small vessels are distinguished by a thin collagen membrane that appears light gray. Video 1 in supplementary material shows a movie composed of a series of en face 1 μm thick optical slices captured over 100 μm into the depth of the cortex tissue. The myelin fibers and neuronal cell bodies are seen in successive layers.

The different regions of the human hippocampal formation are easily recognizable (Fig. 3). Indeed, CA1 field and its stratum radiatum, CA4 field, the hippocampal fissure, the dentate gyrus, and the alveus are easily distinguishable. Other structures become visible by zooming in digitally on the FF-OCT image. The large pyramidal neurons of the CA4 field (Fig. 3B) and the granule cells that constitute the stratum granulosum of the dentate gyrus are visible, as black triangles and as small round dots, respectively (Fig. 3D).

 

[Fig. 3: FF-OCT of the human hippocampal formation, showing CA1, CA4, the dentate gyrus, and the alveus]

In the normal cerebellum, the lamellar or foliar pattern of alternating cortex and central white matter is easily observed (Fig. 4A). By digital zooming, Purkinje and granular neurons also appear as black triangles or dots, respectively (Fig. 4C), and myelinated axons are visible as bright white lines (Fig. 4E). Video 2 in supplementary material shows a fly-through movie in the reconstructed axial slice orientation of a cortex region in cerebellum. The Purkinje and granular neurons are visible down to depths of 200 μm in the tissue.

 

[Fig. 4: FF-OCT of normal cerebellum, showing the foliar pattern, Purkinje and granular neurons, and myelinated axons]

3.2. FF-OCT images distinguish meningiomas from hemangiopericytoma in meningeal tumors

The classic morphological features of a meningioma are visible on the FF-OCT image: large lobules of tumorous cells appear in light gray (Fig. 5A), demarcated by collagen-rich bundles (Fig. 5B) which are highly scattering and appear a brilliant white in the FF-OCT images. The classic concentric tumorous cell clusters (whorls) are very clearly distinguished on the FF-OCT image (Fig. 5D). In addition the presence of numerous cell whorls with central calcifications (psammoma bodies) is revealed (Fig. 5F). Collagen balls appear bright white on the FF-OCT image (Fig. 5H). As the collagen balls progressively calcify, they are consumed by the black of the calcified area, generating a target-like image (Fig. 5H). Calcifications appear black in FF-OCT as they are crystalline and so allow no penetration of photons to their interior.

[Fig. 5: FF-OCT of meningiomas, showing lobules, whorls, psammoma bodies, and collagen balls]

Mesenchymal non-meningothelial tumors such as hemangiopericytomas represent a classic differential diagnosis of meningiomas. In FF-OCT, the hemangiopericytoma is more monotonous in appearance than the meningiomas, with a highly vascular branching component with staghorn-type vessels (Fig. 6A, C).

[Fig. 6: FF-OCT of hemangiopericytoma with staghorn-type vessels]

3.3. FF-OCT images identify choroid plexus papilloma

The choroid plexus papilloma appears as an irregular coalescence of multiple papillae composed of elongated fibrovascular axes covered by a single layer of choroid glial cells (Fig. 7). By zooming in on an edematous papilla, the axis appears as a black structure covered by a regular light gray line (Fig. 7B). If the papilla's central axis is hemorrhagic, the fine regular single layer is not distinguishable (Fig. 7C). Additional digital zooming in on the image reveals cellular-level information, and some nuclei of choroid plexus cells can be recognized. However, cellular atypia and mitoses are not visible. These represent key diagnostic criteria used to differentiate choroid plexus papilloma (grade I) from atypical plexus papilloma (grade II).

[Fig. 7: FF-OCT of choroid plexus papilloma]

3.4. FF-OCT images detect the brain tissue architecture modifications generated by diffusely infiltrative gliomas

Contrary to the choroid plexus papillomas, which have a very distinctive architecture in histology (cauliflower-like aspect) that is very easily recognized in the FF-OCT images (Fig. 7A to G), diffusely infiltrating gliomas do not present a specific tumor architecture (Fig. 8), as they diffusely permeate the normal brain architecture. Hence, the tumorous glial cells are largely dispersed through a nearly normal brain parenchyma (Fig. 8E). The presence of infiltrating tumorous glial cells attested by high-magnification histological observation (irregular atypical cell nuclei compared to normal oligodendrocytes) is not detectable with the current generation of FF-OCT devices, as FF-OCT cannot reliably distinguish the individual cell nuclei due to lack of contrast (as opposed to lack of resolution). In our experience, diffuse low-grade gliomas (less than 20% tumor cell density) are mistaken for normal brain tissue on FF-OCT images. However, in high-grade gliomas (Fig. 8G–K), the infiltration of the tumor has occurred to such an extent that the normal parenchyma architecture is lost. This architectural change is easily observed in FF-OCT and is successfully identified as high-grade glioma, even though the individual glial cell nuclei are not distinguished.

[Fig. 8: FF-OCT of diffusely infiltrating gliomas, low-grade vs. high-grade]

4. Discussion

We present here the first large-size images (on the order of 1–3 cm2) acquired using an OCT system that offer spatial resolution comparable to histological analysis, sufficient to distinguish microstructures of the human brain parenchyma.

Firstly, the FF-OCT technique and the images presented here combine several practical advantages. The imaging system is compact, it can be placed in the operating room, the tissue sample does not require preparation and image acquisition is rapid. This technique thus appears promising as an intraoperative tool to help neurosurgeons and pathologists.

Secondly, resolution is sufficient (on the order of 1 μm axial and lateral) to distinguish brain tissue microstructures. Indeed, it was possible to distinguish neuron cell bodies in the cortex and axon bundles going towards white matter. Individual myelin fibers of 1 μm in diameter are visible on the FF-OCT images. Thus FF-OCT may serve as a real-time anatomical locator.

Histological architectural characteristics of meningothelial, fibrous, transitional and psammomatous meningiomas were easily recognizable on the FF-OCT images (lobules and whorl formation, collagenous-septae, calcified psammoma bodies, thick vessels). Psammomatous and transitional meningiomas presented distinct architectural characteristics in FF-OCT images in comparison to those observed in hemangiopericytoma. Thus, FF-OCT may serve as an intraoperative tool, in addition to extemporaneous examination, to refine differential diagnosis between pathological entities with different prognoses and surgical managements.

Diffuse glioma was essentially recognized by the loss of normal parenchyma architecture. However, glioma could be detected on FF-OCT images only if the glial cell density is greater than around 20% (i.e. the point at which the effect on the architecture becomes noticeable). The FF-OCT technique is therefore not currently suitable for the evaluation of low tumorous infiltration or tumorous margins. Evaluation at the individual tumor cell level is only possible by IDH1R132 immunostaining in IDH1 mutated gliomas in adults (Preusser et al., 2011). One of the current limitations of the FF-OCT technique for use in diagnosis is the difficulty in estimating the nuclear/cytoplasmic boundaries and the size and form of nuclei as well as the nuclear-cytoplasmic ratio of cells. This prevents precise classification into tumor subtypes and grades.

To increase the accuracy of diagnosis of tumors where cell density measurement is necessary for grading, perspectives for the technique include development of a multimodal system (Harms et al., 2012) to allow simultaneous co-localized acquisition of FF-OCT and fluorescence images. The fluorescence channel images in this multimodal system show cell nuclei, which increase the possibility of diagnosis and tumor grading direct from optical images. However, the use of contrast agents for the fluorescence channel means that the multimodal imaging technique is no longer non-invasive, and this may be undesirable if the tissue is to progress to histology following optical imaging. This is a similar concern in confocal microscopy where use of dyes is necessary for fluorescence detection (Wirth et al., 2012).

In its current form therefore, FF-OCT is not intended to serve as a diagnostic tool, but should rather be considered as an additional intraoperative aid in order to determine in a short time whether or not there is suspicious tissue present in a sample. It does not aim to replace histological analyses but rather to complement them, by offering a tool at the intermediary stage of intraoperative tissue selection. In a few minutes, an image is produced that allows the surgeon or the pathologist to assess the content of the tissue sample. The selected tissue, once imaged with FF-OCT, may then proceed to conventional histology processing in order to obtain the full diagnosis (Assayag et al., in press and Dalimier and Salomon, 2012).

Development of FF-OCT to allow in vivo imaging is underway, and first steps include increasing camera acquisition speed. First results of in vivo rat brain imaging have been achieved with an FF-OCT prototype setup, and show real-time visualization of myelin fibers (Ben Arous et al., 2011) and movement of red blood cells in vessels (Binding et al., 2011). To respond more precisely to surgical needs, it would be preferable to integrate the FF-OCT system into a surgical probe. Work in this direction is currently underway and preliminary images of skin and breast tissue have been captured with a rigid probe FF-OCT prototype (Latrive and Boccara, 2011).

In conclusion, we have demonstrated the capacity of FF-OCT for imaging of human brain samples. This technique has potential as an intraoperative tool for determining tissue architecture and content in a few minutes. The 1 μm3 resolution and wide-field down to cellular-level views offered by the technique allowed identification of features of non-tumorous and tumorous tissues such as myelin fibers, neurons, microcalcifications, tumor cells, microcysts, and blood vessels. Correspondence with histological slides was good, indicating suitability of the technique for use in clinical practice for tissue selection for biobanking for example. Future work to extend the technique to in vivo imaging by rigid probe endoscopy is underway.

The following are the supplementary data related to this article.

Video 1.  Shows a movie composed of a series of en face 1 μm thick optical slices captured over 100 μm into the depth of the cortex tissue. The myelin fibers and neuronal cell bodies are seen in successive layers. Field size is 800 μm × 800 μm.

Video 2.  Shows a fly-through movie in the reconstructed cross-sectional orientation showing 1 μm steps through a 3D stack down to 200 μm depth in cerebellum cortical tissue. Purkinje and granular neurons are visible as dark spaces. Field size is 800 μm × 200 μm.

Acknowledgments

The authors wish to thank LLTech SAS for use of the LightCT Scanner.

References

 

  • Adie, Boppart. Optical Coherence Tomography for Cancer Detection. SpringerLink (2009), pp. 209–250.
  • Assayag et al. Large field, high resolution full field optical coherence tomography: a pre-clinical study of human breast tissue and cancer assessment. Technology in Cancer Research & Treatment, TCRT Express, 1 (1) (2013), p. e600254. http://dx.doi.org/10.7785/tcrtexpress.2013.600254
  • Beck et al. Computer-assisted visualizations of neural networks: expanding the field of view using seamless confocal montaging. Journal of Neuroscience Methods, 98 (2) (2000), pp. 155–163.
  • Ben Arous et al. Single myelin fiber imaging in living rodents without labeling by deep optical coherence microscopy. Journal of Biomedical Optics, 16 (11) (2011), p. 116012.
  • Betz, C.S., et al. A set of optical techniques for improving the diagnosis of early upper aerodigestive tract cancer. Medical Laser Application, 23 (2008), pp. 175–185.
  • Binding et al. Brain refractive index measured in vivo with high-NA defocus-corrected full-field OCT and consequences for two-photon microscopy. Optics Express, 19 (6) (2011), pp. 4833–4847.
  • Bizheva et al. Imaging ex vivo healthy and pathological human brain tissue with ultra-high-resolution optical coherence tomography. Journal of Biomedical Optics, 10 (2005), p. 011006. http://dx.doi.org/10.1117/1.1851513
  • Böhringer et al. Time domain and spectral domain optical coherence tomography in the analysis of brain tumor tissue. Lasers in Surgery and Medicine, 38 (2006), pp. 588–597. http://dx.doi.org/10.1002/lsm.20353
  • Böhringer et al. Imaging of human brain tumor tissue by near-infrared laser coherence tomography. Acta Neurochirurgica, 151 (2009), pp. 507–517. http://dx.doi.org/10.1007/s00701-009-0248-y
  • Boppart. Optical coherence tomography: technology and applications for neuroimaging. Psychophysiology, 40 (2003), pp. 529–541. http://dx.doi.org/10.1111/1469-8986.00055
  • Boppart et al. Optical coherence tomography for neurosurgical imaging of human intracortical melanoma. Neurosurgery, 43 (1998), pp. 834–841. http://dx.doi.org/10.1097/00006123-199810000-00068
  • Boppart et al. Optical coherence tomography: feasibility for basic research and image-guided surgery of breast cancer. Breast Cancer Research and Treatment, 84 (2004), pp. 85–97.
  • Chen et al. Ultrahigh resolution optical coherence tomography of Barrett's esophagus: preliminary descriptive clinical study correlating images with histology. Endoscopy, 39 (2007), pp. 599–605.
  • Dalimier, Salomon. Full-field optical coherence tomography: a new technology for 3D high-resolution skin imaging. Dermatology, 224 (2012), pp. 84–92. http://dx.doi.org/10.1159/000337423
  • De Boer et al. Improved signal-to-noise ratio in spectral-domain compared with time-domain optical coherence tomography. Optics Letters, 28 (2003), pp. 2067–2069.
  • Dubois et al. High-resolution full-field optical coherence tomography with a Linnik microscope. Applied Optics, 41 (4) (2002), p. 805.
  • Dubois et al. Ultrahigh-resolution full-field optical coherence tomography. Applied Optics, 43 (14) (2004), p. 2874.
  • Grieve et al. Ocular tissue imaging using ultrahigh-resolution, full-field optical coherence tomography. Investigative Ophthalmology & Visual Science, 45 (2004), pp. 4126–4131.
  • Harms et al. Multimodal full-field optical coherence tomography on biological tissue: toward all optical digital pathology. Proc. SPIE, Multimodal Biomedical Imaging VII, 8216 (2012).
  • Hsiung et al. Benign and malignant lesions in the human breast depicted with ultrahigh resolution and three-dimensional optical coherence tomography. Radiology, 244 (2007), pp. 865–874.

Read Full Post »

Ultrasound imaging as an instrument for measuring tissue elasticity: “Shear-wave Elastography” VS. “Strain-Imaging”

Writer and curator: Dror Nir, PhD

In the context of cancer management, imaging is pivotal. For decades, ultrasound has been used by clinicians to support every step of cancer pathways. Its popularity among clinicians is steadily increasing despite the perception that it is less accurate and less informative than CT and MRI. This is not only because ultrasound is easily accessible and relatively low cost, but also because advances in ultrasound technology, mainly the conversion into PC-based modalities, allow better, more reproducible imaging and, more importantly, clinically effective image interpretation.

The idea of relying on ultrasound physics to measure the stiffness of tissue lesions is not new. The motivation for such a measurement is that malignant lesions are often stiffer than non-malignant ones.

The article I bring below, http://digital.studio-web.be/digitalMagazine?issue_id=254, by Dr. Georg Salomon and his colleagues, is written for lay readers. I found it on one of the many portals that bring quasi-professional, usually industry-sponsored information on health issues: http://www.dieurope.com/ – The European Portal for Diagnostic Imaging. Note that when it comes to using ultrasound as a diagnostic aid in urology, Dr. Georg Salomon is known as one of the early adopters of new technologies and an established opinion leader who has published many frequently quoted peer-reviewed papers on Elastography.

The important take-away I would like to highlight for the reader: a quantified measure of tissue elasticity (whether obtained by ShearWave or another "Elastography" implementation) is information with real clinical value for the urologist who needs to decide on the right pathway for his patient!

Note: the highlights in the article below are added by me for the benefit of the reader.

Improvement in the visualization of prostate cancer through the use of ShearWave Elastography

by:

Dr Georg Salomon (1), Dr Lars Budaeus (1), Dr L Durner (2) & Dr K Boe (1)

1. Martini-Clinic — Prostate Cancer Center, University Hospital Hamburg-Eppendorf, Martinistrasse 52, 20253 Hamburg, Germany

2. Urologische Klinik Dr. Castringius, München-Planegg, Germeringer Str. 32, 82152 Planegg, Germany

Corresponding author: PD Dr. Georg Salomon

Associate Professor of Urology

Martini Clinic

Tel: 0049 40 7410 51300

gsalomon@uke.de

 

Prostate cancer is the most common cancer in males, with more than 910,000 annual cases worldwide. With early detection, excellent cure rates can be achieved. Today, prostate cancer is diagnosed by randomized transrectal ultrasound-guided biopsy. However, such randomized "blind" biopsies can miss cancer because conventional TRUS is, in most cases, unable to visualize small cancerous spots.

Elastography has been shown to improve visualization of prostate cancer.

The innovative ShearWave Elastography technique is an automated, user-friendly and quantifiable method for the determination of prostatic tissue stiffness.

The detection of prostate cancer (PCA) has become easier thanks to Prostate Specific Antigen (PSA) testing; the diagnosis of PCA has shifted towards an earlier stage of the disease.

Prostate cancer is, in more than 80% of cases, a heterogeneous and multifocal tumor. Conventional ultrasound has limitations in accurately defining tumor foci within the prostate. This is because most PCA foci are isoechogenic, so in these cases there is no differentiation between benign and malignant tissue. Because of this, a randomized biopsy is performed under ultrasound guidance with at least 10 to 12 biopsy cores, which should represent all areas of the prostate. Tumors, however, can be missed by this biopsy regimen since it is not a lesion-targeted biopsy. When PSA is rising — which usually occurs in most men — the originally negative biopsy has to be repeated.

What urologists expect from imaging and biopsy procedures is the detection of prostate cancer at an early stage and an accurate description of all foci within the prostate, with their different (Gleason) grades of differentiation, so that the best treatment options can be chosen.

In the past 10 years a number of innovative ultrasound techniques (computerized, contrast-enhanced and real-time elastography) have been introduced to the market, and their impact on the detection of early prostate cancer has been evaluated. The major benefit of elastography compared to the other techniques is its ability to provide visualization of suspicious areas and to guide the biopsy needle, in real time, to the suspicious and potentially malignant area.

Ultrasound-based elastography has been investigated over the years and has had considerable success in increasing the detection rate of prostate cancer and reducing the number of biopsy samples required [1-3]. Different companies have taken different approaches to the ultrasound elastography technique (strain elastography vs. shear wave elastography). Medical centers have seen an evolution towards better image quality, with more stable and reproducible results from these techniques.

One drawback of real-time strain elastography is that there is a significant learning curve to be climbed before reproducible elastograms can be generated. The technique has to be performed by compressing and then decompressing the ultrasound probe to derive a measurement of tissue displacement.

Today there are ultrasound scanners on the market which can produce elastograms without this "manual" assistance: this technique is called shear-wave elastography. While the ultrasound probe is inserted transrectally, the "elastograms" are generated automatically by calculating the shear wave velocity as the waves travel through the tissue being examined, thus providing measurements of tissue stiffness rather than displacement.
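
The kilopascal values that appear throughout the rest of this article come from the standard elastography relation between shear-wave speed c and Young's modulus, E = 3ρc², which assumes soft tissue is incompressible and locally homogeneous, with density ρ ≈ 1000 kg/m³. A minimal sketch showing how the 45–50 kPa suspicion thresholds discussed below correspond to shear-wave speeds of roughly 4 m/s:

```python
RHO = 1000.0  # soft-tissue density, kg/m^3 (standard assumption)

def youngs_modulus_kpa(shear_speed_m_s: float) -> float:
    """E = 3 * rho * c^2 for incompressible, locally homogeneous tissue."""
    return 3 * RHO * shear_speed_m_s**2 / 1000   # Pa -> kPa

for c in (3.0, 3.9, 4.1, 4.3):
    print(f"c = {c:.1f} m/s  ->  E ~ {youngs_modulus_kpa(c):.0f} kPa")
# c ~ 3.9-4.1 m/s maps onto the ~45-50 kPa cancer-suspicion threshold
```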

There are several different techniques for this type of elastography. The FibroScan system, which is not an ultrasound unit, uses shear waves (transient elastography) to evaluate the progression of liver stiffness. Another technique is Acoustic Radiation Force Impulse, or ARFI, also used for the liver. These non-real-time techniques only provide a shear wave velocity estimate for a single region of interest and are not currently used in prostate imaging.

A shear wave technology that provides specific quantification of tissue elasticity in real time is ShearWave Elastography, developed by SuperSonic Imagine. This technique measures elasticity in kilopascals and can provide a visual representation of tissue stiffness over the entire region of interest as a color-coded map on the ultrasound screen. On a split screen the investigator can see the conventional ultrasound B-mode image and the color-coded elastogram at the same time. This enables an anatomical view of the prostate along with the elasticity image of the tissue to guide the biopsy needle.

In short, ShearWave Elastography (SWE) is a different elastography technique that can be used for several applications. It automatically generates a real-time, reproducible, fully quantifiable color-coded image of tissue elasticity.

QUANTIFICATION OF TISSUE STIFFNESS

Such quantification can help to increase the chance that a targeted biopsy is positive for cancer.

It has been shown that elastography-targeted biopsies have an up to 4.7 times higher chance of being positive for cancer than a randomized biopsy [4]. ShearWave Elastography can not only visualize tissue stiffness in color but also quantify the stiffness (in kPa) in real time, for several organs including the prostate. Correas et al. reported that with tissue stiffness higher than 45 to 50 kPa the chance of prostate cancer is very high in patients undergoing a prostate biopsy. The data from Correas et al. showed a sensitivity of 80% and a high negative predictive value of up to 90%. Another group (Barr et al.) achieved a negative predictive value of up to 99.6%, with a sensitivity of 96.2% and a specificity of 96.2%. With a cut-off of 40 kPa, the positive biopsy rate for the ShearWave Elastography targeted biopsy was 50%, whereas for randomized biopsy it was 20.8%. In total 53 men were enrolled in this study.
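
Since the studies above are summarized through sensitivity, specificity, and negative predictive value, here is a small reference implementation of those metrics from a 2×2 confusion matrix; the counts used are invented for illustration, not taken from the cited papers:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard screening metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # cancers correctly flagged
        "specificity": tn / (tn + fp),   # benign correctly cleared
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Invented counts for illustration only
for name, value in diagnostic_metrics(tp=48, fp=20, fn=2, tn=130).items():
    print(f"{name}: {value:.1%}")
```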

Our group used SWE prior to radical prostatectomy to determine whether the ShearWave Elastography threshold had a high accuracy using a cutoff of >55 kPa (Figure 1).

We then compared the ShearWave results with the final histopathological results [Figure 1]. Our results showed the accuracy was around 78% for all tumor foci. We were also able to verify that ShearWave Elastography targeted biopsies were more likely to be positive compared to randomized biopsies. [Figures 2, 3]

[Figure 1]

[Figures 2 and 3]

CONCLUSION

SWE is a non-invasive method to visualize prostate cancer foci with high accuracy, in a user-friendly way. As Steven Kaplan put it in an editorial comment in the Journal of Urology, 2013: "Obviously, large-scale studies with multicenter corroboration need to be performed. Nevertheless, SWE is a potentially promising modality to increase our efficiency in evaluating prostate diseases."

 

REFERENCES

  1. Pallwein, L., et al. Sonoelastography of the prostate: comparison with systematic biopsy findings in 492 patients. European Journal of Radiology, 2008. 65(2): p. 304-10.
  2. Pallwein, L., et al. Comparison of sonoelastography guided biopsy with systematic biopsy: impact on prostate cancer detection. European Radiology, 2007. 17(9): p. 2278-85.
  3. Salomon, G., et al. Evaluation of prostate cancer detection with ultrasound real-time elastography: a comparison with step section pathological analysis after radical prostatectomy. European Urology, 2008. 54(6): p. 1354-62.
  4. Aigner, F., et al. Value of real-time elastography targeted biopsy for prostate cancer detection in men with prostate specific antigen 1.25 ng/ml or greater and 4.00 ng/ml or less. The Journal of Urology, 2010. 184(3): p. 813-7.

Other research papers related to the management of Prostate cancer and Elastography were published on this Scientific Web site:

Imaging: seeing or imagining? (Part 1)

Early Detection of Prostate Cancer: American Urological Association (AUA) Guideline

Today’s fundamental challenge in Prostate cancer screening

State of the art in oncologic imaging of Prostate.

From AUA2013: “HistoScanning”- aided template biopsies for patients with previous negative TRUS biopsies 

On the road to improve prostate biopsy

 

Read Full Post »
