

These twelve artificial intelligence innovations are expected to start impacting clinical care by the end of the decade.

Reporter: Gail S. Thornton, M.A.

 

This article is excerpted from Health IT Analytics, April 11, 2019.

 By Jennifer Bresnick

April 11, 2019 – There’s no question that artificial intelligence is moving quickly in the healthcare industry.  Even just a few months ago, AI was still a dream for the next generation: something that would start to enter regular care delivery in a couple of decades – maybe ten or fifteen years for the most advanced health systems.

Even Partners HealthCare, the Boston-based giant on the very cutting edge of research and reform, set a ten-year timeframe for artificial intelligence during its 2018 World Medical Innovation Forum, identifying a dozen AI technologies that had the potential to revolutionize patient care within the decade.

But over the past twelve months, research has progressed so rapidly that Partners has blown up that timeline. 

Instead of viewing AI as something still lingering on the distant horizon, this year’s Disruptive Dozen panel was tasked with assessing which AI innovations will be ready to fundamentally alter the delivery of care by 2020 – now less than a year away.

Sixty members of the Partners faculty participated in nominating and narrowing down the tools they think will have an almost immediate benefit for patients and providers, explained Erica Shenoy, MD, PhD, an infectious disease specialist at Massachusetts General Hospital (MGH).

“These are innovations that have a strong potential to make significant advancement in the field, and they are also technologies that are pretty close to making it to market,” she said.

The results include everything from mental healthcare and clinical decision support to coding and communication, offering patients and their providers a more efficient, effective, and cost-conscious ecosystem for improving long-term outcomes.

In order from least to greatest potential impact, here are the twelve artificial intelligence innovations poised to become integral components of the next decade’s data-driven care delivery system.

NARROWING THE GAPS IN MENTAL HEALTHCARE

Nearly twenty percent of US patients struggle with a mental health disorder, yet treatment is often difficult to access and expensive to use regularly.  Reducing barriers to access for mental and behavioral healthcare, especially during the opioid abuse crisis, requires a new approach to connecting patients with services.

AI-driven applications and therapy programs will be a significant part of the answer.

“The promise and potential for digital behavioral solutions and apps is enormous to address the gaps in mental healthcare in the US and across the world,” said David Ahern, PhD, a clinical psychologist at Brigham & Women’s Hospital (BWH). 

Smartphone-based cognitive behavioral therapy and integrated group therapy are showing promise for treating conditions such as depression, eating disorders, and substance abuse.

While patients and providers need to be wary of commercially available applications that have not been rigorously validated and tested, more and more researchers are developing AI-based tools that have the backing of randomized clinical trials and are showing good results.

A panel of experts from Partners HealthCare presents the Disruptive Dozen at WMIF19.

Source: Partners HealthCare

STREAMLINING WORKFLOWS WITH VOICE-FIRST TECHNOLOGY

Natural language processing is already a routine part of many behind-the-scenes clinical workflows, but voice-first tools are expected to make their way into the patient-provider encounter in a new way. 

Smart speakers in the clinic are prepping to relieve clinicians of their EHR burdens, capturing free-form conversations and translating the content into structured documentation.  Physicians and nurses will be able to collect and retrieve information more quickly while spending more time looking patients in the eye.

Patients may benefit from similar technologies at home as the consumer market for virtual assistants continues to grow.  With companies like Amazon achieving HIPAA compliance for their consumer-facing products, individuals may soon have more robust options for voice-first chronic disease management and patient engagement.

IDENTIFYING INDIVIDUALS AT HIGH RISK OF DOMESTIC VIOLENCE

Underreporting makes it difficult to know just how many people suffer from intimate partner violence (IPV), says Bharti Khurana, MD, an emergency radiologist at BWH.  But the symptoms are often hiding in plain sight for radiologists.

Using artificial intelligence to flag worrisome injury patterns, or mismatches between patient-reported histories and the types of fractures present on x-rays, can alert providers when an exploratory conversation is called for.

“As a radiologist, I’m very excited because this will enable me to provide even more value to the patient instead of simply evaluating their injuries.  It’s a powerful tool for clinicians and social workers that will allow them to approach patients with confidence and with less worry about offending the patient or the spouse,” said Khurana.

REVOLUTIONIZING ACUTE STROKE CARE

Every second counts when a patient experiences a stroke.  In far-flung regions of the United States and in the developing world, access to skilled stroke care can take hours, drastically increasing the likelihood of significant long-term disability or death.

Artificial intelligence has the potential to close the gaps in access to high-quality imaging studies that can identify the type of stroke and the location of the clot or bleed.  Research teams are currently working on AI-driven tools that can automate the detection of stroke and support decision-making around the appropriate treatment for the individual’s needs.  

In rural or low-resource care settings, these algorithms can compensate for the lack of a specialist on-site and ensure that every stroke patient has the best possible chance of treatment and recovery.

AI revolutionizing stroke care

Source: Getty Images

REDUCING ADMINISTRATIVE BURDENS FOR PROVIDERS

The costs of healthcare administration are off the charts.  Recent data from the Center for American Progress indicates that providers spend about $282 billion per year on insurance and medical billing, and the burdens are only going to keep getting bigger.

Medical coding and billing is a perfect use case for natural language processing and machine learning.  NLP is well-suited to translating free-text notes into standardized codes, which can move the task off the plates of physicians and reduce the time and effort spent on complying with convoluted regulations.
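To make the idea concrete, here is a deliberately naive sketch of keyword-based code suggestion. E11.9, I10, and R07.9 are real ICD-10 codes, but the three-entry matcher and the `suggest_codes` helper are toy illustrations, not any vendor's production coding engine:

```python
import re

# Toy phrase-to-code table.  The codes are real ICD-10 labels, but this
# tiny lookup is purely illustrative -- production coding engines use
# trained NLP models over the full code set.
CODE_PATTERNS = {
    r"\btype 2 diabetes\b": "E11.9",   # type 2 diabetes mellitus
    r"\bhypertension\b": "I10",        # essential hypertension
    r"\bchest pain\b": "R07.9",        # chest pain, unspecified
}

def suggest_codes(note: str) -> list[str]:
    """Return sorted candidate billing codes found in a free-text note."""
    text = note.lower()
    return sorted({code for pattern, code in CODE_PATTERNS.items()
                   if re.search(pattern, text)})

note = "Patient with type 2 diabetes presents with chest pain."
print(suggest_codes(note))  # ['E11.9', 'R07.9']
```

A real system must also handle negation ("denies chest pain"), abbreviations, and clinical context, which is where machine learning earns its keep over simple pattern matching.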

“The ultimate goal is to help reduce the complexity of the coding and billing process through automation, thereby reducing the number of mistakes – and, in turn, minimizing the need for such intense regulatory oversight,” Partners says.

NLP is already in relatively wide use for this task, and healthcare organizations are expected to continue adopting this strategy as a way to control costs and speed up their billing cycles.

UNLEASHING HEALTH DATA THROUGH INFORMATION EXCHANGE

AI will combine with another game-changing technology, known as FHIR, to unlock siloes of health data and support broader access to health information.

Patients, providers, and researchers will all benefit from a more fluid health information exchange environment, especially since artificial intelligence models are extremely data-hungry.

Stakeholders will need to pay close attention to maintaining the privacy and security of data as it moves across disparate systems, but the benefits have the potential to outweigh the risks.

“It completely depends on how everyone in the medical community advocates for, builds, and demands open interfaces and open business models,” said Samuel Aronson, Executive Director of IT at Partners Personalized Medicine.

“If we all row in the same direction, there’s a real possibility that we will see fundamental improvements to the healthcare system in 3 to 5 years.”

OFFERING NEW APPROACHES FOR EYE HEALTH AND DISEASE

Image-heavy disciplines have started to see early benefits from artificial intelligence since computers are particularly adept at analyzing patterns in pixels.  Ophthalmology is one area that could see major changes as AI algorithms become more accurate and more robust.

From glaucoma to diabetic retinopathy, millions of patients experience diseases that can lead to irreversible vision loss every year.  Employing AI for clinical decision support can extend access to eye health services in low-resource areas while giving human providers more accurate tools for catching diseases sooner.

REAL-TIME MONITORING OF BRAIN HEALTH

The brain is still the body’s most mysterious organ, but scientists and clinicians are making swift progress unlocking the secrets of cognitive function and neurological disease.  Artificial intelligence is accelerating discovery by helping providers interpret the incredibly complex data that the brain produces.

From predicting seizures by reading EEG tests to identifying the beginnings of dementia earlier than any human, artificial intelligence is allowing providers to access more detailed, continuous measurements – and helping patients improve their quality of life.

Seizures can happen in patients with other serious illnesses, such as kidney or liver failure, explained Brandon Westover, MD, PhD, executive director of the Clinical Data Animation Center at MGH, but many providers simply don’t know about it.

“Right now, we mostly ignore the brain unless there’s a special need for suspicion,” he said.  “In a year’s time, we’ll be catching a lot more seizures and we’ll be doing it with algorithms that can monitor patients continuously and identify more ambiguous patterns of dysfunction that can damage the brain in a similar manner to seizures.”

AUTOMATING MALARIA DETECTION IN DEVELOPING REGIONS

Malaria is a daily threat for approximately half the world’s population.  Nearly half a million people died from the mosquito-borne disease in 2017, according to the World Health Organization, and the majority of the victims are children under the age of five.

Deep learning tools can automate the process of quantifying malaria parasites in blood samples, a challenging task for providers working without pathologist partners.  One such tool achieved 90 percent accuracy and specificity, putting it on par with pathology experts.

This type of software can be run on a smartphone hooked up to a camera on a microscope, dramatically expanding access to expert-level diagnosis and monitoring.
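As a rough illustration of the image-analysis step (not the actual deep-learning tool described above), the sketch below counts dark-stained blobs in a synthetic grayscale smear using simple thresholding and connected-component labeling; the threshold and size values are arbitrary stand-ins:

```python
import numpy as np

def count_parasite_blobs(gray, threshold=80, min_size=5):
    """Count connected dark regions (candidate parasites) in a grayscale
    blood-smear image.  A classical stand-in for the deep-learning detector
    described in the article; threshold and min_size are illustrative."""
    mask = gray < threshold  # stained parasites appear darker than plasma
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                # flood fill to measure this connected component
                stack, size = [(i, j)], 0
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if size >= min_size:  # ignore single-pixel noise
                    count += 1
    return count

# Synthetic 40x40 smear: bright background with two dark 3x3 "parasites"
img = np.full((40, 40), 200, dtype=np.uint8)
img[5:8, 5:8] = 30
img[20:23, 30:33] = 30
print(count_parasite_blobs(img))  # → 2
```

The deep-learning tools cited in the article replace the hand-tuned threshold with learned features, which is what lets them approach pathologist-level accuracy on real smears.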

AI for diagnosing and detecting malaria

Source: Getty Images

AUGMENTING DIAGNOSTICS AND DECISION-MAKING

Artificial intelligence has made especially swift progress in diagnostic specialties, including pathology. AI will continue to speed down the road to maturity in this area, predicts Annette Kim, MD, PhD, associate professor of pathology at BWH and Harvard Medical School.

“Pathology is at the center of diagnosis, and diagnosis underpins a huge percentage of all patient care.  We’re integrating a huge amount of data that funnels through us to come to a diagnosis.  As the number of data points increases, it negatively impacts the time we have to synthesize the information,” she said.

AI can help automate routine, high-volume tasks, prioritize and triage cases to ensure patients are getting speedy access to the right care, and make sure that pathologists don’t miss key information hidden in the enormous volumes of clinical and test data they must comb through every day.

“This is where AI can have a huge impact on practice by allowing us to use our limited time in the most meaningful manner,” Kim stressed.

PREDICTING THE RISK OF SUICIDE AND SELF-HARM

Suicide is the tenth leading cause of death in the United States, claiming 45,000 lives in 2016.  Suicide rates are on the rise due to a number of complex socioeconomic and mental health factors, and identifying patients at the highest risk of self-harm is a difficult and imprecise science.

Natural language processing and other AI methodologies may help providers identify high-risk patients earlier and more reliably.  AI can comb through social media posts, electronic health record notes, and other free-text documents to flag words or concepts associated with the risk of harm.

Researchers also hope to develop AI-driven apps to provide support and therapy to individuals likely to harm themselves, especially teenagers who commit suicide at higher rates than other age groups.

Connecting patients with mental health resources before they reach a time of crisis could save thousands of lives every year.

REIMAGINING THE WORLD OF MEDICAL IMAGING

Radiology is already one of AI’s early beneficiaries, but providers are just at the beginning of what they will be able to accomplish in the next few years as machine learning explodes into the imaging realm.

AI is predicted to bring earlier detection, more accurate assessment of complex images, and less expensive testing for patients across a huge number of clinical areas.

But as leaders in the AI revolution, radiologists also have a significant responsibility to develop and deploy best practices in terms of trustworthiness, workflow, and data protection.

“We certainly feel the onus on the radiology community to make sure we do deliver and translate this into improved care,” said Alexandra Golby, MD, a neurosurgeon and radiologist at BWH and Harvard Medical School.

“Can radiology live up to the expectations?  There are certainly some challenges, including trust and understanding of what the algorithms are delivering.  But we desperately need it, and we want to equalize care across the world.”

Radiologists have been among the first to overcome their trepidation about the role of AI in a changing clinical world, and are eagerly embracing the possibilities of this transformative approach to augmenting human skills.

“All of the imaging societies have opened their doors to the AI adventure,” Golby said.  “The community is very anxious to learn, codevelop, and work with all of the industry partners to turn this technology into truly valuable tools. We’re very optimistic and very excited, and we look forward to learning more about how AI can improve care.”

Source:

https://healthitanalytics.com/news/top-12-artificial-intelligence-innovations-disrupting-healthcare-by-2020

 


Cancer Biology and Genomics for Disease Diagnosis (Vol. I) Now Available for Amazon Kindle



Reporter: Stephen J Williams, PhD

Leaders in Pharmaceutical Business Intelligence would like to announce the first volume of their BioMedical E-Book Series C: e-Books on Cancer & Oncology.

Volume One: Cancer Biology and Genomics for Disease Diagnosis

The e-book is now available on Amazon Kindle at http://www.amazon.com/dp/B013RVYR2K.

This e-book is a comprehensive review of recent original research on cancer and genomics, including related opportunities for targeted therapy, written by expert authors. It highlights recent trends and discoveries in cancer research and cancer treatment, with particular attention to how new technological and informatics advances have ushered in paradigm shifts in how we think about, diagnose, and treat cancer. The methodology of curation adds value to the original research for the e-reader. The e-book’s articles have been published in the Open Access Online Scientific Journal since April 2012, and new articles on this subject will continue to be incorporated as published, with periodic updates.

We invite readers to write a review of this e-book on Amazon. All forthcoming BioMed e-book titles can be viewed at:

https://pharmaceuticalintelligence.com/biomed-e-books/

Leaders in Pharmaceutical Business Intelligence, launched in April 2012 as an open-access online scientific journal, is a scientific, medical, and business multi-expert authoring environment spanning several domains of the life sciences, pharmaceutical, healthcare, and medicine industries. The venture operates as an online scientific intellectual exchange at http://pharmaceuticalintelligence.com, curating and reporting on frontiers in the biomedical and biological sciences, healthcare economics, pharmacology, pharmaceuticals, and medicine. In addition, the venture publishes a medical e-book series available on Amazon’s Kindle platform.

Analyzing and sharing the vast and rapidly expanding volume of scientific knowledge has never been so crucial to innovation in the medical field. We are addressing the need to overcome this scientific information overload by:

  • delivering curation and summary interpretations of the latest findings and innovations
  • publishing on an open-access, Web 2.0 platform, with the near-term goal of providing concept-driven search
  • providing a social platform for scientists and clinicians to enter into discussion using social media
  • compiling recent discoveries and issues in a yearly-updated Medical E-book Series on Amazon’s mobile Kindle platform

Through these hybrid networks, this curation offers better organization of, and visibility into, the critical information needed for the next innovations in academic, clinical, and industrial research.

Table of Contents for Cancer Biology and Genomics for Disease Diagnosis

Preface

Introduction  The evolution of cancer therapy and cancer research: How did we get here?

Part I. Historical Perspective of Cancer Demographics, Etiology, and Progress in Research

Chapter 1:  The Occurrence of Cancer in World Populations

Chapter 2:  Rapid Scientific Advances Change Our View on How Cancer Forms

Chapter 3:  A Genetic Basis and Genetic Complexity of Cancer Emerge

Chapter 4: How Epigenetic and Metabolic Factors Affect Tumor Growth

Chapter 5: Advances in Breast and Gastrointestinal Cancer Research Support Hope for a Cure

Part II. Advent of Translational Medicine, “omics”, and Personalized Medicine Ushers in New Paradigms in Cancer Treatment and Advances in Drug Development

Chapter 6:  Treatment Strategies

Chapter 7:  Personalized Medicine and Targeted Therapy

Part III. Translational Medicine, Genomics, and New Technologies Converge to Improve Early Detection

Chapter 8:  Diagnosis                                     

Chapter 9:  Detection

Chapter 10:  Biomarkers

Chapter 11:  Imaging In Cancer

Chapter 12: Nanotechnology Imparts New Advances in Cancer Treatment, Detection, &  Imaging                                 

Epilogue by Larry H. Bernstein, MD, FACP: Envisioning New Insights in Cancer Translational Biology

 

Read Full Post »


Imaging Technology in Cancer Surgery

Author and curator: Dror Nir, PhD

The advent of medical-imaging technologies such as image fusion, functional imaging, and noninvasive tissue characterisation is playing an imperative role in meeting the demand for personalized cancer care, transforming the concept of personalized medicine in cancer into practice. The leading modality in that respect is medical imaging. To date, the main imaging systems that can provide a reasonable level of cancer detection and localization are CT, mammography, multi-sequence MRI, PET/CT, and ultrasound. All of these require skilled operators and experienced imaging interpreters to deliver the required results at a reasonable level. Radiologists and oncologists generally agree that, in order to provide a comprehensive workflow that complies with the principles of personalized medicine, the management of future cancer patients will rely heavily on computerized image-interpretation applications that extract measurable imaging biomarkers from images in a standardized manner, leading to better clinical assessment of cancer patients.

As a consequence of the Human Genome Project and technological advances in gene sequencing, the understanding of cancer has advanced considerably, leading to an increase in treatment options. Yet surgical resection is still the leading form of therapy offered to patients with organ-confined tumors. Obtaining “cancer-free” surgical margins is crucial to the surgical outcome in terms of overall survival and patients’ quality of life and morbidity. Currently, a significant portion of surgeries ends with positive surgical margins, leading to poor clinical outcomes and increased costs. To improve on this, a large variety of intraoperative imaging devices aimed at resection guidance have been introduced and adopted over the last decade, and this trend is expected to continue.

The Status of Contemporary Image-Guided Modalities in Oncologic Surgery is a review paper presenting a variety of cancer imaging techniques that have been adapted or developed for intra-operative surgical guidance. It also covers novel, cancer-specific contrast agents that are in early-stage development and demonstrate significant promise for improving real-time detection of sub-clinical cancer in the operative setting.

Another good (free access) review paper is: uPAR-targeted multimodal tracer for pre- and intraoperative imaging in cancer surgery

Abstract

Pre- and intraoperative diagnostic techniques facilitating tumor staging are of paramount importance in colorectal cancer surgery. The urokinase receptor (uPAR) plays an important role in the development of cancer, tumor invasion, angiogenesis, and metastasis, and over-expression is found in the majority of carcinomas. This study aims to develop the first clinically relevant anti-uPAR antibody-based imaging agent that combines nuclear (111In) and real-time near-infrared (NIR) fluorescent imaging (ZW800-1). Conjugation and binding capacities were investigated and validated in vitro using spectrophotometry and cell-based assays. In vivo, three human colorectal xenograft models were used, including an orthotopic peritoneal carcinomatosis model to image small tumors. Nuclear and NIR fluorescent signals showed clear tumor delineation between 24h and 72h post-injection, with the highest tumor-to-background ratios of 5.0 ± 1.3 at 72h using fluorescence and 4.2 ± 0.1 at 24h using radioactivity. Tumors of 1-2 mm could be clearly recognized by their fluorescent rim. This study showed the feasibility of a uPAR-recognizing multimodal agent to visualize tumors during image-guided resections using NIR fluorescence, whereas its nuclear component assisted in the pre-operative non-invasive recognition of tumors using SPECT imaging. This strategy can assist in surgical planning and subsequent precision surgery to reduce the number of incomplete resections.

INTRODUCTION
Diagnosis, staging, and surgical planning of colorectal cancer patients increasingly rely on imaging techniques that provide information about tumor biology and anatomical structures [1-3]. Single-photon emission computed tomography (SPECT) and positron emission tomography (PET) are preoperative nuclear imaging modalities used to provide insights into tumor location, tumor biology, and the surrounding micro-environment [4]. Both techniques depend on the recognition of tumor cells using radioactive ligands. Various monoclonal antibodies, initially developed as therapeutic agents (e.g. cetuximab, bevacizumab, labetuzumab), are labeled with radioactive tracers and evaluated for pre-operative imaging purposes [5-9]. Despite these techniques, during surgery the surgeons still rely mostly on their eyes and hands to distinguish healthy from malignant tissues, resulting in incomplete resections or unnecessary tissue removal in up to 27% of rectal cancer patients [10, 11]. Incomplete resections (R1) are shown to be a strong predictor of development of distant metastasis, local recurrence, and decreased survival of colorectal cancer patients [11, 12].

Fluorescence-guided surgery (FGS) is an intraoperative imaging technique already introduced and validated in the clinic for sentinel lymph node (SLN) mapping and biliary imaging [13]. Tumor-specific FGS can be regarded as an extension of SPECT/PET, using fluorophores instead of radioactive labels conjugated to tumor-specific ligands, but with higher spatial resolution than SPECT/PET imaging and real-time anatomical feedback [14]. A powerful synergy can be achieved when nuclear and fluorescent imaging modalities are combined, extending the nuclear diagnostic images with real-time intraoperative imaging. This combination can lead to improved diagnosis and management by integrating pre-, intra-, and postoperative imaging.
Nuclear imaging enables pre-operative evaluation of tumor spread, while during surgery deeper-lying spots can be localized using the gamma probe counter. The (NIR) fluorescent signal aids the surgeon by providing real-time anatomical feedback to accurately recognize and resect malignant tissues. Postoperatively, malignant cells can be recognized using NIR fluorescent microscopy. Clinically, the advantages of multimodal agents in image-guided surgery have been shown in patients with melanoma and prostate cancer, but those studies used nonspecific agents, following the natural lymph drainage pattern of colloidal tracers after peritumoral injection [15, 16].

The urokinase-type plasminogen activator receptor (uPAR) is implicated in many aspects of tumor growth and (micro) metastasis [17, 18]. The levels of uPAR are undetectable in normal tissues except for occasional macrophages and granulocytes in the uterus, thymus, kidneys and spleen [19]. Enhanced tumor levels of uPAR and its circulating form (suPAR) are independent prognostic markers for overall survival in colorectal cancer patients [20, 21]. The relatively selective and high overexpression of uPAR in a wide range of human cancers including colorectal, breast, and pancreas nominates uPAR as a widely applicable and potent molecular target [17, 22].

The current study aims to develop a clinically relevant uPAR-specific multimodal agent that can be used to visualize tumors pre- and intraoperatively after a single injection. We combined the 111Indium isotope with NIR fluorophore ZW800-1 using a hybrid linker to a uPAR-specific monoclonal antibody (ATN-658) and evaluated its performance using a pre-clinical SPECT system (U-SPECT-II) and a clinically applied NIR fluorescence camera system (FLARE™).


Robotic surgery is a growing trend, specifically in urology. The following review paper offers a good discussion of the added value of imaging in urologic robotic surgery:

The current and future use of imaging in urological robotic surgery: a survey of the European Association of Robotic Urological Surgeons

 Abstract

Background

With the development of novel augmented reality operating platforms the way surgeons utilize imaging as a real-time adjunct to surgical technique is changing.

Methods

A questionnaire was distributed via the European Robotic Urological Society mailing list. The questionnaire had three themes: surgeon demographics, current use of imaging and potential uses of an augmented reality operating environment in robotic urological surgery.

Results

117 of the 239 respondents (48.9%) were independently practicing robotic surgeons. 74% of surgeons reported having imaging available in theater for prostatectomy, 97% for robotic partial nephrectomy, and 95% for cystectomy. 87% felt there was a role for augmented reality as a navigation tool in robotic surgery.

Conclusions

This survey has revealed the contemporary robotic surgeon to be comfortable with the use of imaging for intraoperative planning; it also suggests that there is a desire for augmented reality platforms within the urological community. Copyright © 2014 John Wiley & Sons, Ltd.

 Introduction

Since Röntgen first utilized X-rays to image the carpal bones of the human hand in 1895, medical imaging has evolved and is now able to provide a detailed representation of a patient’s intracorporeal anatomy, with recent advances now allowing for 3-dimensional (3D) reconstructions. The visualization of anatomy in 3D has been shown to improve the ability to localize structures when compared with 2D, with no change in the amount of cognitive loading [1]. This has allowed imaging to move from a largely diagnostic tool to one that can be used for both diagnosis and operative planning.

One potential interface for displaying 3D images, to maximize their potential as a tool for surgical guidance, is to overlay them onto the endoscopic operative scene (augmented reality). This addresses, in part, a criticism often leveled at robotic surgery: the loss of haptic feedback. Augmented reality has the potential to mitigate this sensory loss by enhancing the surgeon’s visual cues with information regarding subsurface anatomical relationships [2].

Augmented reality surgery is in its infancy for intra-abdominal procedures, due in large part to the difficulties of applying static preoperative imaging to a constantly deforming intraoperative scene [3]. There are case reports and ex vivo studies in the literature examining the technology in minimal access prostatectomy [3-6] and partial nephrectomy [7-10], but there remains a lack of evidence determining whether surgeons feel there is a role for the technology and, if so, for which procedures they feel it would be efficacious.

This questionnaire-based study was designed to assess first, the pre- and intra-operative imaging modalities utilized by robotic urologists; second, the current use of imaging intraoperatively for surgical planning; and finally whether there is a desire for augmented reality among the robotic urological community.

Methods

Recruitment

A web-based survey instrument was designed and sent out, as part of a larger survey, to members of the EAU robotic urology section (ERUS). Only independently practicing robotic surgeons performing robot-assisted laparoscopic prostatectomy (RALP), robot-assisted partial nephrectomy (RAPN) and/or robotic cystectomy were included in the analysis; surgeons exclusively performing other procedures were excluded. Respondents were offered no incentives to reply. All data collected was anonymous.

Survey design and administration

The questionnaire was created using the LimeSurvey platform (www.limesurvey.com) and hosted on their website. All responses (both complete and incomplete) were included in the analysis. The questionnaire was dynamic with the questions displayed tailored to the respondents’ previous answers.

When computing fractions or percentages, the denominator was the number of respondents who answered the question; this number varies due to the dynamic nature of the questionnaire.

Demographics

All respondents to the survey were asked in what country they practiced and what robotic urological procedures they performed. For each procedure performed, surgeons were also asked to specify the number of cases they had undertaken.

 Current imaging practice

Procedure-specific questions in this group were displayed according to the operations the respondent performed. A summary of the questions can be seen in Appendix 1. Procedure-nonspecific questions were also asked. Participants were asked whether they routinely used the TilePro™ function of the da Vinci console (Intuitive Surgical, Sunnyvale, USA) and whether they routinely viewed imaging intra-operatively.

 Augmented reality

Before answering questions in this section, participants were invited to watch a video demonstrating an augmented reality platform during RAPN, performed by our group at Imperial College London. A still from this video can be seen in Figure 1. They were then asked whether they felt augmented reality would be of use as a navigation or training tool in robotic surgery.


Figure 1. A still taken from a video of augmented reality robot-assisted partial nephrectomy. Here the tumour has been painted into the operative view, allowing the surgeon to appreciate the relationship of the tumour with the surface of the kidney.

Once again, in this section, procedure-specific questions were displayed according to the operations the respondent performed. Only those respondents who felt augmented reality would be of use as a navigation tool were asked procedure-specific questions. Questions were asked to establish where in these procedures they felt an augmented reality environment would be of use.

Results

Demographics

Of the 239 respondents completing the survey, 117 were independently practising robotic surgeons and were therefore eligible for analysis. The majority of the surgeons had both trained (210/239, 87.9%) and worked (215/239, 90%) in Europe. The median (range) number of cases undertaken by those surgeons reporting their case volume was 120 (6–2000), 9 (1–120) and 30 (1–270) for RALP, robot-assisted cystectomy and RAPN, respectively.

 

Contemporary use of imaging in robotic surgery

When enquiring about the use of imaging for surgical planning, the majority of surgeons (57%, 65/115) routinely viewed pre-operative imaging intra-operatively, with only 9% (13/137) routinely capitalizing on the TilePro™ function in the console to display these images. Among surgeons who performed RAPN, 13.8% (9/65) reported using the technology routinely.

When assessing the imaging modalities available to a surgeon in theatre, the majority of surgeons performing RALP (74%, 78/106) reported using MRI, with an additional 37% (39/106) reporting the use of CT for pre-operative staging and/or planning. For surgeons performing RAPN and robot-assisted cystectomy there was more of a consensus, with 97% (68/70) and 95% (54/57) of surgeons, respectively, using CT for routine preoperative imaging (Table 1).

Table 1. Which preoperative imaging modalities do you use for diagnosis and surgical planning?

                      CT           MRI          USS          None         Other
RALP (n = 106)        39.8% (39)   73.5% (78)   2% (3)       15.1% (16)   8.4% (9)
RAPN (n = 70)         97.1% (68)   42.9% (30)   17.1% (12)   0% (0)       2.9% (2)
Cystectomy (n = 57)   94.7% (54)   26.3% (15)   1.8% (1)     1.8% (1)     5.3% (3)

Those surgeons performing RAPN were found to have the most diversity in the way they viewed pre-operative images in theatre, routinely viewing images in sagittal, coronal and axial slices (Table 2). The majority of these surgeons also viewed the images as 3D reconstructions (54%, 38/70).

Table 2. How do you typically view preoperative imaging in the OR? 3D recons = three-dimensional reconstructions

                      Axial slices   Coronal slices   Sagittal slices   3D recons.   Do not view
RALP (n = 106)        49.1% (52)     44.3% (47)       31.1% (33)        9.4% (10)    31.1% (33)
RAPN (n = 70)         68.6% (48)     74.3% (52)       60% (42)          54.3% (38)   0% (0)
Cystectomy (n = 57)   70.2% (40)     52.6% (30)       50.9% (29)        21.1% (12)   8.8% (5)

The majority of surgeons used ultrasound intra-operatively in RAPN (51%, 35/69) with a further 25% (17/69) reporting they would use it if they had access to a ‘drop-in’ ultrasound probe (Figure 2).


Figure 2. Chart demonstrating responses to the question – Do you use intraoperative ultrasound for robotic partial nephrectomy?

Desire for augmented reality

Overall, 87% of respondents envisaged a role for augmented reality as a navigation tool in robotic surgery, and 82% (88/107) felt that there was an additional role for the technology as a training tool.

The greatest desire for augmented reality was among those surgeons performing RAPN, with 86% (54/63) feeling the technology would be of use. The largest group of surgeons felt it would be useful in identifying tumour location, with significant numbers also feeling it would be efficacious in tumour resection (Figure 3).


Figure 3. Chart demonstrating responses to the question – In robotic partial nephrectomy which parts of the operation do you feel augmented reality image overlay would be of assistance?

When enquiring about the potential for augmented reality in RALP, 79% (20/96) of respondents felt it would be of use during the procedure, with the largest group feeling it would be helpful for nerve sparing (65%, 62/96) (Figure 4). The picture in cystectomy was similar, with 74% (37/50) of surgeons believing augmented reality would be of use, and with both nerve sparing and apical dissection highlighted as specific examples (40%, 20/50) (Figure 5). The majority also felt that it would be useful for lymph node dissection in both RALP and robot-assisted cystectomy (55% (52/95) and 64% (32/50), respectively).


Figure 4. Chart demonstrating responses to the question – In robotic prostatectomy which parts of the operation do you feel augmented reality image overlay would be of assistance?


Figure 5. Chart demonstrating responses to the question – In robotic cystectomy which parts of the operation do you feel augmented reality overlay technology would be of assistance?

Discussion

The results from this study suggest that the contemporary robotic surgeon views imaging as an important adjunct to operative practice. The way these images are being viewed is changing; although the majority of surgeons continue to view images as two-dimensional (2D) slices, a significant minority have started to capitalize on 3D reconstructions to give them an improved appreciation of the patient’s anatomy.

This study has highlighted surgeons’ willingness to take the next step in the utilization of imaging in operative planning, augmented reality, with 87% feeling it has a role to play in robotic surgery. Although there appears to be considerable desire for augmented reality, the technology itself is still in its infancy, with the limited evidence demonstrating clinical application reporting only qualitative results [3, 7, 11, 12].

There are a number of significant issues that need to be overcome before augmented reality can be adopted in routine clinical practice. The first of these is registration, the process by which two images are positioned in the same coordinate system such that the locations of corresponding points align [13]. This process has been performed both manually and using automated algorithms, with varying degrees of accuracy [2, 14]. The second issue pertains to the use of static pre-operative imaging in a dynamic operative environment: for the pre-operative imaging to be accurately registered, it must be deformable. This problem remains unresolved.
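The registration step described above can be illustrated with a minimal sketch. The snippet below implements rigid point-based registration of paired fiducial points with the Kabsch algorithm, one standard building block for aligning two coordinate systems; it is an illustrative toy, not the method used by any of the cited systems, which must additionally handle automated correspondence finding and the tissue-deformation problem noted above.

```python
import numpy as np

def rigid_register(src, dst):
    """Kabsch algorithm: find rotation R and translation t such that
    R @ src[i] + t best matches dst[i] (least squares over paired points)."""
    src_c = src - src.mean(axis=0)                 # centre both point clouds
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)      # SVD of cross-covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Toy check: recover a known 30-degree rotation plus translation from 4 points.
rng = np.random.default_rng(0)
pts = rng.random((4, 3))
c, s = np.cos(np.deg2rad(30)), np.sin(np.deg2rad(30))
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
moved = pts @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_register(pts, moved)
print(np.allclose(pts @ R.T + t, moved))  # prints: True
```

A rigid transform like this suffices only when the anatomy can be treated as fixed; the deformable case, as the text notes, is the unresolved part.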

Live intra-operative imaging circumvents the problems of tissue deformation, and in RAPN 51% of surgeons reported already using intra-operative ultrasound to aid in tumour resection. Cheung and colleagues [9] have published an ex vivo study highlighting the potential for intra-operative ultrasound in augmented reality partial nephrectomy. They report that overlaying ultrasound onto the operative scene improved the surgeon’s appreciation of the subsurface tumour anatomy; this improvement in anatomical appreciation resulted in improved resection quality over conventional ultrasound-guided resection [9]. Building on this work, the first in vivo use of overlaid ultrasound in RAPN has recently been reported [10]. Although good subjective feedback was received from the operating surgeon, the study was limited to a single case demonstrating feasibility and as such was not able to show an outcome benefit of the technology [10].

RAPN also appears to be the area in which augmented reality would be most readily adopted, with 86% of surgeons claiming they see a use for the technology during the procedure. Within this operation there are two obvious steps to augmentation: anatomical identification (in particular vessel identification, to facilitate both routine ‘full clamping’ and the identification of secondary and tertiary vessels for ‘selective clamping’ [15]) and tumour resection. These two phases have different requirements of an augmented reality platform; the first phase, identification, requires a gross overview of the anatomy without the need for high levels of registration accuracy. Tumour resection, however, necessitates almost sub-millimetre registration accuracy and needs the system to account for the dynamic intra-operative environment. The step of anatomical identification is amenable to the use of non-deformable 3D reconstructions of pre-operative imaging, while image-guided tumour resection is perhaps better suited to augmentation with live imaging such as ultrasound [2, 9, 16].

For RALP and robot-assisted cystectomy, the steps in which surgeons felt augmented reality would be of assistance were neurovascular bundle preservation and apical dissection. The relative perceived efficacy of augmented reality in these steps correlates with previous examinations of augmented reality in RALP [17, 18]. Although surgeon preference for utilizing augmented reality while undertaking robotic prostatectomy has been demonstrated, Thompson et al. failed to demonstrate an improvement in oncological outcomes in patients undergoing AR-assisted RALP [18].

Both nerve sparing and apical dissection require a high level of registration accuracy and either live imaging or the deformation of pre-operative imaging to match the operative scene; achieving this level of registration accuracy is made more difficult by the mobilization of the prostate gland during the operation [17]. These problems are equally applicable to robot-assisted cystectomy. Although guidance systems for RALP have been proposed in the literature [3-5, 12, 17], none has achieved the level of accuracy required to provide assistance during nerve sparing. In addition, there are still imaging challenges to overcome: although multiparametric MRI has been shown to improve decision making in opting for a nerve-sparing approach to RALP [19], the imaging is not yet able to reliably discern the exact location of the neurovascular bundle. This said, significant advances are being made, with novel imaging modalities on the horizon that may allow imaging of the neurovascular bundle in the near future [20].

 

Limitations

The number of operations included represents a significant limitation of the study; had different index procedures been chosen, different results might have been seen. That said, the index procedures were chosen because they represent the vast majority of uro-oncological robotic surgical practice, largely mitigating this shortfall.

Although the available ex vivo evidence suggests that introducing augmented reality operating environments into surgical practice would help to improve outcomes [9, 21], the in vivo experience to date is limited to small-volume case series reporting feasibility [2, 3, 14]. To date no study has demonstrated an in vivo outcome advantage of augmented reality guidance. In addition, augmented reality has been demonstrated to increase rates of inattention blindness among surgeons, suggesting there is a trade-off between increasing visual information and the surgeon’s ability to appreciate unexpected operative events [21].

 

Conclusions

This survey shows the contemporary robotic surgeon to be comfortable with the use of imaging to aid intra-operative planning; furthermore, it highlights a significant interest among the urological community in augmented reality operating platforms.

Short- to medium-term development of augmented reality systems in robotic urological surgery would be best performed using RAPN as the index procedure. Not only was this the operation where surgeons saw the greatest potential benefit, but it may also be the operation where augmentation is most easily achievable, by capitalizing on the respective benefits of technologies the surgeons already use: pre-operative CT for anatomical identification and intra-operative ultrasound for tumour resection.

 

Conflict of interest

None of the authors have any conflicts of interest to declare.

Appendix 1

Question Asked Question Type
Demographics
In which country do you usually practise? Single best answer
Which robotic procedures do you perform?* Single best answer
Current Imaging Practice
What preoperative imaging modalities do you use for the staging and surgical planning in renal cancer? Multiple choice
How do you typically view preoperative imaging in theatre for renal cancer surgery? Multiple choice
Do you use intraoperative ultrasound for partial nephrectomy? Yes or No
What preoperative imaging modalities do you use for the staging and surgical planning in prostate cancer? Multiple choice
How do you typically view preoperative imaging in theatre for prostate cancer? Multiple choice
Do you use intraoperative ultrasound for robotic partial nephrectomy? Yes or No
Which preoperative imaging modality do you use for staging and surgical planning in muscle invasive TCC? Multiple choice
How do you typically view preoperative imaging in theatre for muscle invasive TCC? Multiple choice
Do you routinely refer to preoperative imaging intraoperatively? Yes or No
Do you routinely use TilePro™ intraoperatively? Yes or No
Augmented Reality
Do you feel there is a role for augmented reality as a navigation tool in robotic surgery? Yes or No
Do you feel there is a role for augmented reality as a training tool in robotic surgery? Yes or No
In robotic partial nephrectomy which parts of the operation do you feel augmented reality image overlay technology would be of assistance? Multiple choice
In robotic nephrectomy which parts of the operation do you feel augmented reality image overlay technology would be of assistance? Multiple choice
In robotic prostatectomy which parts of the operation do you feel augmented reality image overlay technology would be of assistance? Multiple choice
Would augmented reality guidance be of use in lymph node dissection in robotic prostatectomy? Yes or No
In robotic cystectomy which parts of the operation do you feel augmented reality image overlay technology would be of assistance? Multiple choice
Would augmented reality guidance be of use in lymph node dissection in robotic cystectomy? Yes or No
*The relevant procedure related questions were displayed based on the answer to this question

References

1. Foo J-L, Martinez-Escobar M, Juhnke B, et al. Evaluating mental workload of two-dimensional and three-dimensional visualization for anatomical structure localization. J Laparoendosc Adv Surg Tech A 2013; 23(1): 65–70.

2. Hughes-Hallett A, Mayer EK, Marcus HJ, et al. Augmented reality partial nephrectomy: examining the current status and future perspectives. Urology 2014; 83(2): 266–273.

3. Sridhar AN, Hughes-Hallett A, Mayer EK, et al. Image-guided robotic interventions for prostate cancer. Nat Rev Urol 2013; 10(8): 452–462.

4. Cohen D, Mayer E, Chen D, et al. Augmented reality image guidance in minimally invasive prostatectomy. Lect Notes Comput Sci 2010; 6367: 101–110.

5. Simpfendorfer T, Baumhauer M, Muller M, et al. Augmented reality visualization during laparoscopic radical prostatectomy. J Endourol 2011; 25(12): 1841–1845.

6. Teber D, Simpfendorfer T, Guven S, et al. In vitro evaluation of a soft-tissue navigation system for laparoscopic prostatectomy. J Endourol 2010; 24(9): 1487–1491.

7. Teber D, Guven S, Simpfendörfer T, et al. Augmented reality: a new tool to improve surgical accuracy during laparoscopic partial nephrectomy? Preliminary in vitro and in vivo results. Eur Urol 2009; 56(2): 332–338.

8. Pratt P, Mayer E, Vale J, et al. An effective visualisation and registration system for image-guided robotic partial nephrectomy. J Robot Surg 2012; 6(1): 23–31.

9. Cheung CL, Wedlake C, Moore J, et al. Fused video and ultrasound images for minimally invasive partial nephrectomy: a phantom study. Med Image Comput Comput Assist Interv 2010; 13(Pt 3): 408–415.

10. Hughes-Hallett A, Pratt P, Mayer E, et al. Intraoperative ultrasound overlay in robot-assisted partial nephrectomy: first clinical experience. Eur Urol 2014; 65(3): 671–672.

11. Nakamura K, Naya Y, Zenbutsu S, et al. Surgical navigation using three-dimensional computed tomography images fused intraoperatively with live video. J Endourol 2010; 24(4): 521–524.

12. Ukimura O, Gill IS. Imaging-assisted endoscopic surgery: Cleveland Clinic experience. J Endourol 2008; 22(4): 803–809.

13. Altamar HO, Ong RE, Glisson CL, et al. Kidney deformation and intraprocedural registration: a study of elements of image-guided kidney surgery. J Endourol 2011; 25(3): 511–517.

14. Nicolau S, Soler L, Mutter D, Marescaux J. Augmented reality in laparoscopic surgical oncology. Surg Oncol 2011; 20(3): 189–201.

15. Ukimura O, Nakamoto M, Gill IS. Three-dimensional reconstruction of renovascular-tumor anatomy to facilitate zero-ischemia partial nephrectomy. Eur Urol 2012; 61(1): 211–217.

16. Pratt P, Hughes-Hallett A, Di Marco A, et al. Multimodal reconstruction for image-guided interventions. In: Yang GZ, Darzi A (eds) Proceedings of the Hamlyn Symposium on Medical Robotics: London. 2013; 59–61.

17. Mayer EK, Cohen D, Chen D, et al. Augmented reality image guidance in minimally invasive prostatectomy. Eur Urol Suppl 2011; 10(2): 300.

18. Thompson S, Penney G, Billia M, et al. Design and evaluation of an image-guidance system for robot-assisted radical prostatectomy. BJU Int 2013; 111(7): 1081–1090.

19. Panebianco V, Salciccia S, Cattarino S, et al. Use of multiparametric MR with neurovascular bundle evaluation to optimize the oncological and functional management of patients considered for nerve-sparing radical prostatectomy. J Sex Med 2012; 9(8): 2157–2166.

20. Rai S, Srivastava A, Sooriakumaran P, Tewari A. Advances in imaging the neurovascular bundle. Curr Opin Urol 2012; 22(2): 88–96.

21. Dixon BJ, Daly MJ, Chan H, et al. Surgeons blinded by enhanced navigation: the effect of augmented reality on attention. Surg Endosc 2013; 27(2): 454–461.



The Role of Medical Imaging in Personalized Medicine

Writer & reporter: Dror Nir, PhD

The future of personalized medicine comprises quantifiable diagnosis and tailored treatments, i.e. delivering the right treatment at the right time. To achieve a standardized definition of what “right” means, the designated treatment location and lesion size are important factors, regardless of whether the treatment is focused on a location or is general. The role of medical imaging is, and will continue to be, vital in that respect: patient stratification based on imaging biomarkers can help identify individuals suited for preventive intervention and can improve disease staging. In vivo visualization of loco-regional physiological, biochemical and biological processes using molecular imaging can detect diseases in pre-symptomatic phases or facilitate individualized drug delivery. Furthermore, as mentioned in most of my previous posts, imaging is essential to patient-tailored therapy planning, therapy monitoring, quantification of response to treatment and follow-up of disease progression. Especially with the rise of companion diagnostics/theranostics (therapeutics & diagnostics), imaging and treatment will have to be synchronized in real time to achieve the best control/guidance of the treatment.

It is worth noting that the new RECIST 1.1 criteria (used in oncological therapy monitoring) have been expanded to include the use of PET (in addition to lymph node evaluation).


In previous posts I have already discussed many examples of the use of medical imaging in personalized medicine: patient stratification (Imaging-biomarkers is Imaging-based tissue characterization; The future of imaging-biomarkers in diagnostic; Ultrasound-based Screening for Ovarian Cancer); imaging-based guided therapies (Minimally invasive image-guided therapy for inoperable hepatocellular carcinoma); treatment follow-up (The importance of spatially-localized and quantified image interpretation in cancer management); and imaging-based assessment of response to treatment (Causes and imaging features of false positives and false negatives on 18F-PET/CT in oncologic imaging).

Browsing through our collaborative open-source initiative, one can find many more articles and discussions on the matter, e.g. Tumor Imaging and Targeting: Predicting Tumor Response to Treatment: Where we stand? and In Search of Clarity on Prostate Cancer Screening, Post-Surgical Followup, and Prediction of Long Term Remission.

In this post I would like to highlight the potential contribution of medical imaging to the development of companion diagnostics. I do so through the story of the co-development of vintafolide (EC145) and etarfolatide (Endocyte/Merck). Etarfolatide is a folate-targeted molecular radiodiagnostic imaging agent that identifies tumors that overexpress the folate receptor. The folate receptor, a glycosylphosphatidylinositol-anchored cell surface receptor, is overexpressed on the vast majority of cancer tissues, while its expression is limited in healthy tissues and organs. Folate receptors are highly expressed in epithelial, ovarian, cervical, breast, lung, kidney, colorectal, and brain tumors. When expressed in normal tissue, folate receptors are restricted to the lungs, kidneys, placenta, and choroid plexus; in these tissues, the receptors are limited to the apical surface of polarized epithelia. Folate, also known as pteroylglutamate, is a non-immunogenic, water-soluble B vitamin that is critical to DNA synthesis, methylation, and repair (folate is used to synthesize thymine).

Vintafolide (EC145) delivers a very potent vinca chemotherapy directly to cancer cells by targeting the folate receptor expressed on cancer cells. Approximately 80-90 percent of ovarian and lung cancers express the receptor, as do many other types of cancer. Clinical data have shown that patients with metastases that are all positive for the folate receptor, identified by etarfolatide, benefited the most from the treatment with vintafolide, the corresponding folate-targeted small molecule drug conjugate.

Because both the drug and the imaging agent rely on folate receptors within the patient’s body, Endocyte’s strategy was to develop the imaging agent and use it to accelerate R&D and regulatory approval. Endocyte and Merck entered into a partnership for vintafolide in April 2012, under which Merck was granted an exclusive license to develop, manufacture and commercialize vintafolide. Endocyte is responsible for conducting the PROCEED Phase 3 clinical study in women with platinum-resistant ovarian cancer and the Phase 2b second-line NSCLC (non-small cell lung cancer) study named TARGET; Merck is responsible for further clinical studies in additional indications. This co-development of a diagnostic and a therapeutic agent was conducted according to the FDA guidance on personalized medicine and resulted in vintafolide being granted orphan drug status by the EMA as early as 2012.

 

The following is an extract from a post by Phillip H. Kuo, MD, PhD, associate professor of medical imaging, medicine, and biomedical engineering; section chief of nuclear medicine; and director of PET/CT at the University of Arizona Cancer Center.


Figure 1 — Targeted Radioimaging Diagnostic and Small Molecule Drug Conjugate

Etarfolatide comprises the targeting ligand folic acid (yellow), which has a high folate receptor binding affinity, and a technetium-99m–based radioimaging agent (turquoise). Etarfolatide identifies metastases that express the folate receptor protein in real time (A). The folic acid targeting ligand is identical to that found on vintafolide, the corresponding therapeutic small molecule drug conjugate, which also contains a linker system (blue) and a potent chemotherapeutic drug (red) (B).

 


Figure 2 — Whole-Body Scan With 111In-DTPA-Folate 

Diagnostic images of whole-body scans obtained following administration of the targeted radioimaging agent 111In-DTPA-folate, which is constructed with the same folic acid ligand as that engineered in etarfolatide. The healthy patient on the left shows no folate receptor-positive abdominal tumor; only the healthy kidneys (involved in excretion) are revealed. The patient on the right shows folate receptor-positive tumors in the abdomen and pelvis. Patients with metastases identified by the companion imaging diagnostic etarfolatide as folate receptor-positive are most likely to respond to treatment with the corresponding small molecule drug conjugate vintafolide. Note: vintafolide currently is being evaluated in a phase 3 clinical trial for platinum-resistant ovarian cancer and a phase 2 trial for non-small-cell lung cancer; both studies also are using etarfolatide.


Figure 3 — Vintafolide’s Mechanism of Action

Folate is required for cell division, and rapidly dividing cancer cells often express folate receptors to capture enough folate to support rapid cell growth. Elevated expression of the folate receptor occurs in many human malignancies, especially when associated with aggressively growing cancers. The folate-targeted small molecule drug conjugate vintafolide binds to the folate receptor (A) and subsequently is internalized by a natural endocytosis process (B). Once inside the cell, vintafolide’s serum-stable linker selectively releases a potent vinca alkaloid compound (C) to arrest cell division and induce cell death.

Epilog

I think that those of you who have reached this point in my post deserve a special bonus! So here it is: a medical-imaging initiative that is as ambitious and complex as the initiative to send humans into deep space.

This is the European Population Imaging Infrastructure initiative of the Dutch Federation of University Medical Centres (NFU) and the Erasmus University Medical Centre Rotterdam, Department of Radiology, chaired by Professor Gabriel P. Krestin. The NFU has made initial funding available for the development of this initiative.

The European Population Imaging Infrastructure closely cooperates with the European Biomedical Imaging Infrastructure Project EURO-BioImaging which is currently being developed.

The ultimate aim of the infrastructure is to support the development and implementation of strategies to prevent or effectively treat disease. It supports imaging in large, prospective epidemiological studies at the population level. Imaging-specific markers of pre-symptomatic disease can be used to investigate the causes of pathological alterations and to identify people at risk early.

More information on this infrastructure and on the role of the European Population Imaging Infrastructure can be found in the Netherlands Roadmap for Large-Scale Research Facilities, the application for funding of the Roadmap Large-Scale Research Facilities (application form of the Roadmap EuroBioImaging), and on the Euro-BioImaging website.

Certainly, many lessons will be learned as this initiative progresses. I recommend exploring the site. Enjoy!



Following (or not) the guidelines for use of imaging in management of prostate cancer.

Writer and curator: Dror Nir, PhD

Overdiagnosis and overtreatment have been a trend of the last two decades, leading to increased health-care costs and human misery.

The following headline on Medscape, “Swedes Show That We Can Improve Imaging in Prostate Cancer”, elicited my curiosity.

I was expecting “good news” – well, not this time!

Despite its general language, the study the headline refers to does not address the global use of imaging in the prostate cancer patient pathway; it is specific to the use of radionuclide bone scans as part of patient staging. The bad news is the realization that the Swedish government had to invest many person-years to achieve “success” in reducing unnecessary use of such imaging in low-risk patients. Moreover, the paper reveals under-use of the same imaging technology for staging high-risk prostate cancer patients.

Based on this paper, one could conclude that, in reality, we are facing long-standing non-conformity with established guidelines on the use of “full-body” imaging in the prostate cancer patient pathway in Europe and the USA.

Here is a link to the original paper:

Prostate Cancer Imaging Trends After a Nationwide Effort to Discourage Inappropriate Prostate Cancer Imaging, Danil V. Makarov, Stacy Loeb, David Ulmert, Linda Drevin, Mats Lambe and Pär Stattin. Correspondence to: Pär Stattin, MD, PhD, Department of Surgery and Perioperative Sciences, Urology and Andrology, Umeå University, SE-901 87 Umeå, Sweden (e-mail: par.stattin@urologi.umu.se).

JNCI J Natl Cancer Inst (2013) doi: 10.1093/jnci/djt175

 

For convenience, here are the highlights:

  • Reducing inappropriate use of imaging to stage incident prostate cancer is a challenging problem highlighted recently as a Physician Quality Reporting System quality measure and by the American Society of Clinical Oncology and the American Urological Association in the Choosing Wisely campaign.

 

  • Since 2000, the National Prostate Cancer Register (NPCR) of Sweden has led an effort to decrease national rates of inappropriate prostate cancer imaging by disseminating utilization data along with the latest imaging guidelines to urologists in Sweden.

  • Results: Thirty-six percent of men underwent imaging within 6 months of prostate cancer diagnosis. Overall, imaging use decreased over time, particularly in the low-risk category, among whom the imaging rate decreased from 45% to 3% (P < .001), but also in the high-risk category, among whom the rate decreased from 63% to 47% (P < .001). Despite substantial regional variation, all regions experienced clinically and statistically (P < .001) significant decreases in prostate cancer imaging.

 


  • These results may inform current efforts to promote guideline-concordant imaging in the United States and internationally.

  • In 1998, the baseline low-risk prostate cancer imaging rate in Sweden was 45%. Per the NCCN guidelines (7), none of these men should have received bone imaging unless they presented with symptoms suggestive of bone pain (8,24). In the United States, the imaging rate among men with low-risk prostate cancer has been reported to be 19% to 74% in a community cohort and 10% to 48% in a Surveillance Epidemiology and End Results (SEER)–Medicare cohort (10–13,16). It is challenging to compare these rates directly across the two countries because the NPCR aggregates all staging imaging into one variable. However, our sampling revealed that 88% of those undergoing imaging had at least a bone scan, whereas only 11% had any CTs and 10% had any MRI. This suggests that baseline rates of bone scan among low-risk men in Sweden were similar to those among their low-risk counterparts in the United States, whereas rates of axial imaging were likely much lower. During the study period, rates of prostate cancer imaging among low-risk men in Sweden decreased to 3%, substantially lower than those reported in the United States at any time.

  • Miller et al. describe a decline in imaging associated with a small-scale intervention administered in three urology practices located in the United States participating in a quality-improvement consortium. Our study’s contribution is to demonstrate that a similar strategy can be applied effectively at a national scale with an associated decline in inappropriate imaging rates, a finding of great interest for policy makers in the United States seeking to improve health-care quality.

  • In 1998, the baseline high-risk prostate cancer imaging rate in Sweden was 63%, and it decreased to 43% in 2008 (rising slightly to 47% in 2009). Based on our risk category definitions and the guidelines advocated in Sweden, all of these men should have undergone an imaging evaluation (8,24). Swedish rates of prostate cancer imaging among men with high-risk disease are considerably lower than those reported from the SEER–Medicare cohort, where 70% to 75% underwent bone scan and 57% to 58% underwent CT (13,16). These already low rates of imaging among men with high-risk prostate cancer only decreased further during the NPCR’s effort to promote guideline-concordant imaging. Clearly in both countries, imaging for high-risk prostate cancer remains underused despite the general overuse of imaging and numerous guidelines encouraging its appropriate use (3–9).

Similar items I have covered on this Open Access Online Scientific Journal:

Not applying evidence-based medicine drives up the costs of screening for breast-cancer in the USA.



Follow-up on Tomosynthesis

Writer & Curator: Dror Nir, PhD

Tomosynthesis is a method for performing high-resolution, limited-angle tomography (i.e. not a full 360° rotation, but more like ~50°). The use of such systems in breast-cancer screening has been steadily increasing since the FDA cleared such a system in 2011; see my posts – Improving Mammography-based imaging for better treatment planning and State of the art in oncologic imaging of breast.

Many radiologists expect that Tomosynthesis will eventually replace conventional mammography because it increases the sensitivity of breast-cancer detection, a claim supported by new peer-reviewed publications. In addition, the patient’s experience during Tomosynthesis is less painful because less pressure is applied to the breast, and while it offers higher in-plane resolution and fewer imaging artifacts, the mean glandular dose of digital breast Tomosynthesis is comparable to that of full-field digital mammography. Because it is relatively new, Tomosynthesis is not available at every hospital, and the procedure is not yet recognized for reimbursement by every public-health scheme.

A good summary of radiologist opinion on Tomosynthesis can be found in the following video:

Recent studies’ results with digital Tomosynthesis are promising. In addition to an increase in sensitivity for the detection of small cancer lesions, researchers claim that this new breast-imaging technique will make breast cancers easier to see in dense breast tissue. Here is a paper published online in The Lancet Oncology just a couple of months ago:

Integration of 3D digital mammography with tomosynthesis for population breast-cancer screening (STORM): a prospective comparison study

Stefano Ciatto†, Nehmat Houssami, Daniela Bernardi, Francesca Caumo, Marco Pellegrini, Silvia Brunelli, Paola Tuttobene, Paola Bricolo, Carmine Fantò, Marvi Valentini, Stefania Montemezzi, Petra Macaskill. Lancet Oncol. 2013 Jun;14(7):583-9. doi: 10.1016/S1470-2045(13)70134-7. Epub 2013 Apr 25.

Background Digital breast tomosynthesis with 3D images might overcome some of the limitations of conventional 2D mammography for detection of breast cancer. We investigated the effect of integrated 2D and 3D mammography in population breast-cancer screening.

Methods Screening with Tomosynthesis OR standard Mammography (STORM) was a prospective comparative study. We recruited asymptomatic women aged 48 years or older who attended population-based breast-cancer screening through the Trento and Verona screening services (Italy) from August, 2011, to June, 2012. We did screen-reading in two sequential phases—2D only and integrated 2D and 3D mammography—yielding paired data for each screen. Standard double-reading by breast radiologists determined whether to recall the participant based on positive mammography at either screen read. Outcomes were measured from final assessment or excision histology. Primary outcome measures were the number of detected cancers, the number of detected cancers per 1000 screens, the number and proportion of false positive recalls, and incremental cancer detection attributable to integrated 2D and 3D mammography. We compared paired binary data with McNemar’s test.

Findings 7292 women were screened (median age 58 years [IQR 54–63]). We detected 59 breast cancers (including 52 invasive cancers) in 57 women. Both 2D and integrated 2D and 3D screening detected 39 cancers. We detected 20 cancers with integrated 2D and 3D only versus none with 2D screening only (p<0.0001). Cancer detection rates were 5.3 cancers per 1000 screens (95% CI 3.8–7.3) for 2D only, and 8.1 cancers per 1000 screens (6.2–10.4) for integrated 2D and 3D screening. The incremental cancer detection rate attributable to integrated 2D and 3D mammography was 2.7 cancers per 1000 screens (1.7–4.2). 395 screens (5.5%; 95% CI 5.0–6.0) resulted in false positive recalls: 181 at both screen reads, and 141 with 2D only versus 73 with integrated 2D and 3D screening (p<0.0001). We estimated that conditional recall (positive integrated 2D and 3D mammography as a condition to recall) could have reduced false positive recalls by 17.2% (95% CI 13.6–21.3) without missing any of the cancers detected in the study population.

Interpretation Integrated 2D and 3D mammography improves breast-cancer detection and has the potential to reduce false positive recalls. Randomised controlled trials are needed to compare integrated 2D and 3D mammography with 2D mammography for breast cancer screening.

Funding National Breast Cancer Foundation, Australia; National Health and Medical Research Council, Australia; Hologic, USA; Technologic, Italy.

Introduction

Although controversial, mammography screening is the only population-level early detection strategy that has been shown to reduce breast-cancer mortality in randomised trials.1,2 Irrespective of which side of the mammography screening debate one supports,1–3 efforts should be made to investigate methods that enhance the quality of (and hence potential benefit from) mammography screening. A limitation of standard 2D mammography is the superimposition of breast tissue or parenchymal density, which can obscure cancers or make normal structures appear suspicious. This shortcoming reduces the sensitivity of mammography and increases false-positive screening. Digital breast tomosynthesis with 3D images might help to overcome these limitations. Several reviews4,5 have described the development of breast tomosynthesis technology, in which several low-dose radiographs are used to reconstruct a pseudo-3D image of the breast.4–6

Initial clinical studies of 3D mammography,6–10 though based on small or selected series, suggest that addition of 3D to 2D mammography could improve cancer detection and reduce the number of false positives. However, previous assessments of breast tomosynthesis might have been constrained by selection biases that distorted the potential effect of 3D mammography; thus, screening trials of integrated 2D and 3D mammography are needed.6

We report the results of a large prospective study (Screening with Tomosynthesis OR standard Mammography [STORM]) of 3D digital mammography. We investigated the effect of screen-reading using both standard 2D and 3D imaging with tomosynthesis compared with screening with standard 2D digital mammography only for population breast-cancer screening.

  

Methods

Study design and participants

STORM is a prospective population-screening study that compares mammography screen-reading in two sequential phases (figure)—2D only versus integrated 2D and 3D mammography with tomosynthesis—yielding paired results for each screening examination. Women aged 48 years or older who attended population-based screening through the Trento and Verona screening services, Italy, from August, 2011, to June, 2012, were invited to be screened with integrated 2D and 3D mammography. Participants in routine screening mammography (once every 2 years) were asymptomatic women at standard (population) risk for breast cancer. The study was granted institutional ethics approval at each centre, and participants gave written informed consent. Women who opted not to participate in the study received standard 2D mammography. Digital mammography has been used in the Trento breast-screening programme since 2005, and in the Verona programme since 2007; each service monitors outcomes and quality indicators as dictated by European standards, and both have published data for screening performance.11,12

 

Figure: study design

Procedures

All participants had digital mammography using a Selenia Dimensions Unit with integrated 2D and 3D mammography done in the COMBO mode (Hologic, Bedford, MA, USA): this setting takes 2D and 3D images at the same screening examination with a single breast position and compression. Each 2D and 3D image consisted of a bilateral two-view (mediolateral oblique and craniocaudal) mammogram. Screening mammograms were interpreted sequentially by radiologists, first on the basis of standard 2D mammography alone, and then by the same radiologist (on the same day) on the basis of integrated 2D and 3D mammography (figure). Thus, integrated 2D and 3D mammography screening refers to non-independent screen reading based on joint interpretation of 2D and 3D images, and does not refer to analytical combinations. Radiologists had to record whether or not to recall the participant at each screen-reading phase before progressing to the next phase of the sequence. For each screen, data were also collected for breast density (at the 2D screen-read), and the side and quadrant for any recalled abnormality (at each screen-read). All eight radiologists were breast radiologists with a mean of 8 years (range 3–13 years) of experience in mammography screening, and had received basic training in integrated 2D and 3D mammography. Several of the radiologists had also used 2D and 3D mammography for patients recalled after positive conventional mammography screening as part of previous studies of tomosynthesis.8,13

Mammograms were interpreted in two independent screen-reads done in parallel, as practiced in most population breast-screening programs in Europe. A screen was considered positive and the woman recalled for further investigations if either screen-reader recorded a positive result at either 2D or integrated 2D and 3D screening (figure). When previous screening mammograms were available, these were shown to the radiologist at the time of screen-reading, as is standard practice. For assessment of breast density, we used Breast Imaging Reporting and Data System (BI-RADS)14 classification, with participants allocated to one of two groups (1–2 [low density] or 3–4 [high density]). Disagreement between readers about breast density was resolved by assessment by a third reader.

Our primary outcomes were the number of cancers detected, the number of cancers detected per 1000 screens, the number and percentage of false positive recalls, and the incremental cancer detection rate attributable to integrated 2D and 3D mammography screening. We compared the number of cancers that were detected only at 2D mammography screen-reading and those that were detected only at 2D and 3D mammography screen-reading; we also did this analysis for false positive recalls. To explore the potential effect of integrated 2D and 3D screening on false-positive recalls, we also estimated how many false-positive recalls would have resulted from using a hypothetical conditional recall approach, i.e. positive integrated 2D and 3D mammography as a condition of recall (screens recalled at 2D mammography only would not be recalled). Pre-planned secondary analyses were comparison of outcome measures by age group and breast density.

Outcomes were assessed by excision histology for participants who had surgery, or the complete assessment outcome (including investigative imaging with or without histology from core needle biopsy) for all recalled participants. Because our study focuses on the difference in detection by the two screening methods, some cancers might have been missed by both 2D and integrated 2D and 3D mammography; this possibility could be assessed at future follow-up to identify interval cancers. However, this outcome is not assessed in the present study and does not affect estimates of our primary outcomes – i.e. comparative true or false positive detection for 2D-only versus integrated 2D and 3D mammography.

 

Statistical analysis

The sample size was chosen to provide 80% power to detect a difference of 20% in cancer detection, assuming a detection probability of 80% for integrated 2D and 3D screening mammography and 60% for 2D only screening, with a two-sided significance threshold of 5%. Based on the method of Lachenbruch15 for estimating sample size for studies that use McNemar’s test for paired binary data, a minimum of 40 cancers was needed. Because most screens in the participating centres were incident (repeat) screening (75%–80%), we used an underlying breast-cancer prevalence of 0.5% to estimate that roughly 7500–8000 screens would be needed to identify 40 cancers in the study population.
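The prevalence step of this sample-size reasoning can be reproduced directly. A minimal sketch of that arithmetic only (it does not implement Lachenbruch’s full power calculation):

```python
import math

# A minimum of 40 cancers is required (per Lachenbruch's method for
# McNemar's test), and the assumed underlying breast-cancer prevalence
# in this mostly incident-screening population is 0.5%.
min_cancers = 40
prevalence = 0.005

# Screens needed so that the expected number of cancers reaches 40.
screens_needed = math.ceil(min_cancers / prevalence)
print(screens_needed)  # 8000, consistent with the stated 7500-8000
```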

We calculated the Wilson CI for the false-positive recall ratio for integrated 2D and 3D screening with conditional recall compared with 2D only screening.16 All of the other analyses were done with SAS/STAT (version 9.2), using exact methods to compute 95% CIs and p-values.
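For readers who want to check the paired comparisons, the exact McNemar test reduces to a binomial test on the discordant pairs. A stdlib-only sketch (the paper itself used SAS/STAT), applied to the counts reported in the Findings:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar p-value: a binomial test on the two
    discordant counts b and c with success probability 0.5."""
    n = b + c
    k = min(b, c)
    # Two-sided p-value: double the smaller tail, capped at 1.0.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Cancers detected by one reading mode only:
# 0 at 2D only vs 20 at integrated 2D and 3D only.
print(mcnemar_exact(0, 20))             # ~1.9e-06, i.e. p < 0.0001

# False-positive recalls at one reading phase only: 141 vs 73.
print(mcnemar_exact(141, 73) < 0.0001)  # True
```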

Role of the funding source

The sponsors of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding author (NH) had full access to all the data in the study and had final responsibility for the decision to submit for publication.

Results

7292 participants with a median age of 58 years (IQR 54–63, range 48–71) were screened between Aug 12, 2011, and June 29, 2012. Roughly 5% of invited women declined integrated 2D and 3D screening and received standard 2D mammography. We present data for 7294 screens because two participants had bilateral cancer (detected with different screen-reading techniques for one participant). We detected 59 breast cancers in 57 participants (52 invasive cancers and seven ductal carcinomas in situ). Of the invasive cancers, most were invasive ductal (n=37); others were invasive special types (n=7), invasive lobular (n=4), and mixed invasive types (n=4).

Table 1 shows the characteristics of the cancers. Mean tumour size (for the invasive cancers with known exact size) was 13.7 mm (SD 5.8) for cancers detected with both 2D alone and integrated 2D and 3D screening (n=29), and 13.5 mm (SD 6.7) for cancers detected only with integrated 2D and 3D screening (n=13).

 

Table 1

Of the 59 cancers, 39 were detected at both 2D and integrated 2D and 3D screening (table 2). 20 cancers were detected with only integrated 2D and 3D screening compared with none detected with only 2D screening (p<0.0001; table 2). 395 screens were false positive (5.5%, 95% CI 5.0–6.0); 181 occurred at both screen-readings, and 141 occurred at 2D screening only compared with 73 at integrated 2D and 3D screening (p<0.0001; table 2). These differences were still significant in sensitivity analyses that excluded the two participants with bilateral cancer (data not shown).


Table 2

5.3 cancers per 1000 screens (95% CI 3.8–7.3; table 3) were detected with 2D mammography only versus 8.1 cancers per 1000 screens (95% CI 6.2–10.4) with integrated 2D and 3D mammography (p<0.0001). The incremental cancer detection rate attributable to integrated 2D and 3D screening was 2.7 cancers per 1000 screens (95% CI 1.7–4.2), which is 33.9% (95% CI 22.1–47.4) of the cancers detected in the study population. In a sensitivity analysis that excluded the two participants with bilateral cancer the estimated incremental cancer detection rate attributable to integrated 2D and 3D screening was 2.6 cancers per 1000 screens (95% CI 1.4–3.8). The stratified results show that integrated 2D and 3D mammography was associated with an incrementally increased cancer detection rate in both age-groups and density categories (tables 3–5). A minority (16.7%) of breasts were of high density (category 3–4), reducing the power of statistical comparisons in this subgroup (table 5). The incremental cancer detection rate was much the same in low density versus high density groups (2.8 per 1000 vs 2.5 per 1000; p=0.84; table 3).
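The headline detection rates are simple arithmetic over the 7294 screens, and reproducing them is a useful sanity check (point estimates only; the paper’s confidence intervals used exact methods):

```python
screens = 7294       # 7292 women screened; two had bilateral cancers
cancers_2d = 39      # every 2D-detected cancer was also seen at 2D+3D
cancers_2d3d = 59    # 39 at both readings + 20 at 2D+3D only

# Cancer detection rates per 1000 screens.
rate_2d = 1000 * cancers_2d / screens
rate_2d3d = 1000 * cancers_2d3d / screens
incremental = rate_2d3d - rate_2d

print(round(rate_2d, 1))      # 5.3 per 1000 screens
print(round(rate_2d3d, 1))    # 8.1 per 1000 screens
print(round(incremental, 1))  # 2.7 per 1000 screens
```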


Table 3

Table 4-5

Overall recall—any recall resulting in true or false positive screens—was 6.2% (95% CI 5.7–6.8), and the false-positive rate for the 7235 screens of participants who did not have breast cancer was 5.5% (5.0–6.0). Table 6 shows the contribution to false-positive recalls from 2D mammography only, integrated 2D and 3D mammography only, and both, and the estimated number of false positives if positive integrated 2D and 3D mammography was a condition for recall (positive 2D only not recalled). Overall, more of the false-positive rate was driven by 2D mammography only than by integrated 2D and 3D, although almost half of the false-positive rate was a result of false positives recalled at both screen-reading phases (table 6). The findings were much the same when stratified by age and breast density (table 6). Had a conditional recall rule been applied, we estimate that the false-positive rate would have been 3.5% (95% CI 3.1–4.0%; table 6) and could have potentially prevented 68 of the 395 false positives (a reduction of 17.2%; 95% CI 13.6–21.3). The ratio between the number of false positives with integrated 2D and 3D screening with conditional recall (n=254) versus 2D only screening (n=322) was 0.79 (95% CI 0.71–0.87).
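The conditional-recall estimates follow from the three false-positive counts reported above; a sketch of that arithmetic:

```python
fp_both = 181       # false positives recalled at both screen-reads
fp_2d_only = 141    # recalled at the 2D read only
fp_int_only = 73    # recalled at the integrated 2D+3D read only
non_cancer_screens = 7235

# Conditional recall: a screen is recalled only if the integrated
# 2D+3D read is positive, so 2D-only recalls are dropped.
fp_conditional = fp_both + fp_int_only   # 254
fp_2d_screening = fp_both + fp_2d_only   # 322 under 2D-only screening

print(round(100 * fp_conditional / non_cancer_screens, 1))  # 3.5 (%)
print(round(fp_conditional / fp_2d_screening, 2))           # 0.79
```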

Discussion

Our study showed that integrated 2D and 3D mammography screening significantly increases detection of breast cancer compared with conventional mammography screening. There was consistent evidence of an incremental improvement in detection from integrated 2D and 3D mammography across age-group and breast density strata, although the analysis by breast density was limited by the low number of women with breasts of high density.

One should note that we investigated comparative cancer detection, and not absolute screening sensitivity. By integrating 2D and 3D mammography using the study screen-reading protocol, 1% of false-positive recalls resulted from 2D and 3D screen-reading only (table 6). However, significantly more false positives resulted from 2D only mammography compared with integrated 2D and 3D mammography, both overall and in the stratified analyses. Application of a conditional recall rule would have resulted in a false-positive rate of 3.5% instead of the actual false-positive rate of 5.5%. The estimated false-positive recall ratio of 0.79 for integrated 2D and 3D screening with conditional recall compared with 2D only screening suggests that integrated 2D and 3D screening could reduce false recalls by roughly a fifth. Had such a condition been adopted, none of the cancers detected in the study would have been missed because no cancers were detected by 2D mammography only, although this result might be because our design allowed an independent read for 2D only mammography whereas the integrated 2D and 3D read was an interpretation of a combination of 2D and 3D imaging. We do not recommend that such a conditional recall rule be used in breast-cancer screening until our findings are replicated in other mammography screening studies—STORM involved double-reading by experienced breast radiologists, and our results might not apply to other screening settings. Using a test set of 130 mammograms, Wallis and colleagues7 report that adding tomosynthesis to 2D mammography increased the accuracy of inexperienced readers (but not of experienced readers); therefore, the use of experienced radiologists in STORM could have led to an underestimate of the effect of integrated 2D and 3D screen-reading.

No other population screening trials of integrated 2D and 3D mammography have reported final results (panel); however, an interim analysis of the Oslo trial,17 a large population screening study, has shown that integrated 2D and 3D mammography substantially increases detection of breast cancer. The Oslo study investigators screened women with both 2D and 3D mammography, but randomised reading strategies (with vs without 3D mammograms) and adjusted for the different screen-readers,17 whereas we used sequential screen-reading to keep the same reader for each examination. Our estimates for comparative cancer detection and for cancer detection rates are consistent with those of the interim analysis of the Oslo study.17 The applied recall methods differed between the Oslo study (which used an arbitration meeting to decide recall) and the STORM study (we recalled based on a decision by either screen-reader), yet both studies show that 3D mammography reduces false-positive recalls when added to standard mammography.

An editorial in The Lancet18 might indeed signal the closing of a chapter of debate about the benefits and harms of screening. We hope that our work might be the beginning of a new chapter for mammography screening: our findings should encourage new assessments of screening using 2D and 3D mammography and should factor in several issues related to our study. First, we compared standard 2D mammography with integrated 2D and 3D mammography; the 3D mammograms were not interpreted independently of the 2D mammograms, so 3D mammography only (without the 2D images) might not provide the same results. Our experience with breast tomosynthesis and a review6 of 3D mammography underscore the importance of 2D images in integrated 2D and 3D screen-reading. The 2D images form the basis of the radiologist’s ability to integrate the information from 3D images with that from 2D images. Second, although most screening in STORM was incident screening, the substantial increase in cancer detection rate with integrated 2D and 3D mammography results from the enhanced sensitivity of integrated 2D and 3D screening and is probably also a result of a prevalence effect (i.e., the effect of a first screening round with integrated 2D and 3D mammography). We did not assess the effect of repeat (incident) screening with integrated 2D and 3D mammography on cancer detection; it might provide a smaller effect on cancer detection rates than what we report. Third, STORM was not designed to measure biological differences between the cancers detected at integrated 2D and 3D screening compared with those detected at both screen-reading phases. Descriptive analyses suggest that, generally, breast cancers detected only at integrated 2D and 3D screening had similar features (e.g., histology, pathological tumour size, node status) as those detected at both screen-reading phases.
Thus, some of the cancers detected only at 2D and 3D screening might represent early detection (and would be expected to receive screening benefit) whereas some might represent over-detection and a harm from screening, as for conventional screening mammography.1,19 The absence of consensus about over-diagnosis in breast-cancer screening should not detract from the importance of our study findings to applied screening research and to screening practice; however, our trial was not done to assess the extent to which integrated 2D and 3D mammography might contribute to over-diagnosis.

The average dose of glandular radiation from the many low-dose projections taken during a single acquisition of 3D mammography is roughly the same as that from 2D mammography.6,20–22 Using integrated 2D and 3D entails both a 2D and 3D acquisition in one breast compression, which roughly doubles the radiation dose to the breast. Therefore, integrated 2D and 3D mammography for population screening might only be justifiable if improved outcomes were not defined solely in terms of improved detection. For example, it would be valuable to show that the increased detection with integrated 2D and 3D screening leads to reduced interval cancer rates at follow-up. A limitation of our study might be that data for interval cancers were not available; however, because of the paired design we used, future evaluation of interval cancer rates from our study will only apply to breast cancers that were not identified using 2D only or integrated 2D and 3D screening. We know of two patients from our study who have developed interval cancers (follow-up range 8–16 months). We did not get this information from cancer registries and follow-up was very short, so these data should be interpreted very cautiously, especially because interval cancers would be expected to occur in the second year of the standard 2 year interval between screening rounds. Studies of interval cancer rates after integrated 2D and 3D mammography would need to be randomised controlled trials and have a very large sample size. Additionally, the development of reconstructed 2D images from a 3D mammogram23 provides a timely solution to concerns about radiation by providing both the 2D and 3D images from tomosynthesis, eliminating the need for two acquisitions.

We have shown that integrated 2D and 3D mammography in population breast-cancer screening increases detection of breast cancer and can reduce false-positive recalls depending on the recall strategy. Our results do not warrant an immediate change to breast-screening practice; instead, they show the urgent need for randomised controlled trials of integrated 2D and 3D versus 2D mammography, and for further translational research in breast tomosynthesis. We envisage that future screening trials investigating this issue will include measures of breast cancer detection, and will be designed to assess interval cancer rates as a surrogate endpoint for screening efficacy.

Contributors

SC had the idea for and designed the study, and collected and interpreted data. NH advised on study concepts and methods, analysed and interpreted data, searched the published work, and wrote and revised the report. DB and FC were lead radiologists, recruited participants, collected data, and commented on the draft report. MP, SB, PT, PB, CF, and MV did the screen-reading, collected data, and reviewed the draft report. SM collected data and reviewed the draft report. PM planned the statistical analysis, analysed and interpreted data, and wrote and revised the report.

Conflicts of interest

SC, DB, FC, MP, SB, PT, PB, CF, MV, and SM received assistance from Hologic (Hologic USA; Technologic Italy) in the form of tomosynthesis technology and technical support for the duration of the study, and travel support to attend collaborators’ meetings. NH receives research support from a National Breast Cancer Foundation (NBCF Australia) Practitioner Fellowship, and has received travel support from Hologic to attend a collaborators’ meeting. PM receives research support through Australia’s National Health and Medical Research Council programme grant 633003 to the Screening & Test Evaluation Program.

 

References

1. Independent UK Panel on Breast Cancer Screening. The benefits and harms of breast cancer screening: an independent review. Lancet 2012; 380: 1778–86.

2. Glasziou P, Houssami N. The evidence base for breast cancer screening. Prev Med 2011; 53: 100–102.

3. Autier P, Esserman LJ, Flowers CI, Houssami N. Breast cancer screening: the questions answered. Nat Rev Clin Oncol 2012; 9: 599–605.

4. Baker JA, Lo JY. Breast tomosynthesis: state-of-the-art and review of the literature. Acad Radiol 2011; 18: 1298–310.

5. Helvie MA. Digital mammography imaging: breast tomosynthesis and advanced applications. Radiol Clin North Am 2010; 48: 917–29.

6. Houssami N, Skaane P. Overview of the evidence on digital breast tomosynthesis in breast cancer detection. Breast 2013; 22: 101–08.

7. Wallis MG, Moa E, Zanca F, Leifland K, Danielsson M. Two-view and single-view tomosynthesis versus full-field digital mammography: high-resolution X-ray imaging observer study. Radiology 2012; 262: 788–96.

8. Bernardi D, Ciatto S, Pellegrini M, et al. Prospective study of breast tomosynthesis as a triage to assessment in screening. Breast Cancer Res Treat 2012; 133: 267–71.

9. Michell MJ, Iqbal A, Wasan RK, et al. A comparison of the accuracy of film-screen mammography, full-field digital mammography, and digital breast tomosynthesis. Clin Radiol 2012; 67: 976–81.

10. Skaane P, Gullien R, Bjorndal H, et al. Digital breast tomosynthesis (DBT): initial experience in a clinical setting. Acta Radiol 2012; 53: 524–29.

11. Pellegrini M, Bernardi D, Di MS, et al. Analysis of proportional incidence and review of interval cancer cases observed within the mammography screening programme in Trento province, Italy. Radiol Med 2011; 116: 1217–25.

12. Caumo F, Vecchiato F, Pellegrini M, Vettorazzi M, Ciatto S, Montemezzi S. Analysis of interval cancers observed in an Italian mammography screening programme (2000–2006). Radiol Med 2009; 114: 907–14.

13. Bernardi D, Ciatto S, Pellegrini M, et al. Application of breast tomosynthesis in screening: incremental effect on mammography acquisition and reading time. Br J Radiol 2012; 85: e1174–78.

14. American College of Radiology. ACR BI-RADS: breast imaging reporting and data system, Breast Imaging Atlas. Reston: American College of Radiology, 2003.

15. Lachenbruch PA. On the sample size for studies based on McNemar’s test. Stat Med 1992; 11: 1521–25.

16. Bonett DG, Price RM. Confidence intervals for a ratio of binomial proportions based on paired data. Stat Med 2006; 25: 3039–47.

17. Skaane P, Bandos AI, Gullien R, et al. Comparison of digital mammography alone and digital mammography plus tomosynthesis in a population-based screening program. Radiology 2013; published online Jan 3. http://dx.doi.org/10.1148/radiol.12121373.

18. The Lancet. The breast cancer screening debate: closing a chapter? Lancet 2012; 380: 1714.

19. Biesheuvel C, Barratt A, Howard K, Houssami N, Irwig L. Effects of study methods and biases on estimates of invasive breast cancer overdetection with mammography screening: a systematic review. Lancet Oncol 2007; 8: 1129–38.

20. Tagliafico A, Astengo D, Cavagnetto F, et al. One-to-one comparison between digital spot compression view and digital breast tomosynthesis. Eur Radiol 2012; 22: 539–44.

21. Tingberg A, Fornvik D, Mattsson S, Svahn T, Timberg P, Zackrisson S. Breast cancer screening with tomosynthesis—initial experiences. Radiat Prot Dosimetry 2011; 147: 180–83.

22. Feng SS, Sechopoulos I. Clinical digital breast tomosynthesis system: dosimetric characterization. Radiology 2012; 263: 35–42.

23. Gur D, Zuley ML, Anello MI, et al. Dose reduction in digital breast tomosynthesis (DBT) screening using synthetically reconstructed projection images: an observer performance study. Acad Radiol 2012; 19: 166–71.

A very good and down-to-earth comment on this article was made by Jules H Sumkin, who disclosed that he is an unpaid member of the scientific advisory board of Hologic Inc and has a PI research agreement between the University of Pittsburgh and Hologic Inc.

The results of the study by Stefano Ciatto and colleagues1 are consistent with recently published prospective,2,3 retrospective,4 and observational5 reports on the same topic. The study1 had limitations, including the fact that the same radiologist interpreted screens sequentially on the same day without cross-balancing which examination was read first. Also, the false-negative findings for integrated 2D and 3D mammography, and therefore the absolute benefit from the procedure, could not be adequately assessed, because cases recalled by 2D mammography alone (141 cases) did not result in a single detection of an additional cancer, while the recalls from integrated 2D and 3D mammography alone (73 cases) resulted in the detection of 20 additional cancers. Nevertheless, the results are in strong agreement with other studies reporting substantial performance improvements when screening is done with integrated 2D and 3D mammography.

I disagree with the conclusion of the study with regard to the urgent need for randomised clinical trials of integrated 2D and 3D versus 2D mammography. First, to assess differences in mortality as a result of an imaging-based diagnostic method, a randomised trial will require several repeated screens by the same method in each study group, and the strong results from all studies to date will probably result in substantial crossover and self-selection biases over time. Second, because of the high survival rate (or low mortality rate) of breast cancer, the study will require long follow-up times of at least 10 years. In a rapidly changing environment in terms of improvements in screening technologies and therapeutic interventions, the avoidance of biases is likely to be very difficult, if not impossible. The use of the number of interval cancers and possible shifts in stage at detection, while appropriately accounting for confounders, would be almost as daunting a task. Third, the imaging detection of cancer is only the first step in many management decisions and interventions that can affect outcome. The appropriate control of biases related to patient management is highly unlikely. The arguments above, in addition to the existing reports to date that show substantial improvements in cancer detection, particularly of invasive cancers, with a simultaneous reduction in recall rates, support the argument that a randomised trial is neither necessary nor warranted. The current technology might be obsolete by the time the results of an appropriately done and analysed randomised trial are made public.

To better link the information in "scientific" papers to the context of patients' daily reality, I suggest spending some time reviewing a few of the videos at the links below:

  1. The following group of videos is featured on a website by Siemens. Nevertheless, the presenting radiologists are leading practitioners who affect thousands of lives every year: What the experts say about tomosynthesis (click on ECR 2013).
  2. Breast Tomosynthesis in Practice, part of a commercial ad by Washington Radiology Associates featured on the Diagnostic Imaging website; this practice, too, affects thousands of lives in the Washington area every year.

The pivotal questions yet to be answered are:

  1. What should be done to translate the increase in sensitivity and early detection into a decrease in mortality?

  2. What is the price of this increase in sensitivity, in terms of quality of life and health-care costs, and is it worthwhile to pay?

An article that positively summarises the experience of introducing tomosynthesis into routine screening practice was recently published in AJR:

Implementation of Breast Tomosynthesis in a Routine Screening Practice: An Observational Study

Stephen L. Rose, Andra L. Tidwell, Louis J. Bujnoch, Anne C. Kushwaha, Amy S. Nordmann and Russell Sexton, Jr.

Affiliation: TOPS Comprehensive Breast Center, 17030 Red Oak Dr, Houston, TX 77090 (all authors).

Citation: American Journal of Roentgenology. 2013;200:1401-1408

 

ABSTRACT:

OBJECTIVE. Digital mammography combined with tomosynthesis is gaining clinical acceptance, but data are limited that show its impact in the clinical environment. We assessed the changes in performance measures, if any, after the introduction of tomosynthesis systems into our clinical practice.

MATERIALS AND METHODS. In this observational study, we used verified practice- and outcome-related databases to compute and compare recall rates, biopsy rates, cancer detection rates, and positive predictive values for six radiologists who interpreted screening mammography studies without (n = 13,856) and with (n = 9499) the use of tomosynthesis. Two-sided analyses (significance declared at p < 0.05) accounting for reader variability, age of participants, and whether the examination in question was a baseline were performed.

RESULTS. For the group as a whole, the introduction and routine use of tomosynthesis resulted in significant observed changes in recall rates from 8.7% to 5.5% (p < 0.001), nonsignificant changes in biopsy rates from 15.2 to 13.5 per 1000 screenings (p = 0.59), and cancer detection rates from 4.0 to 5.4 per 1000 screenings (p = 0.18). The invasive cancer detection rate increased from 2.8 to 4.3 per 1000 screening examinations (p = 0.07). The positive predictive value for recalls increased from 4.7% to 10.1% (p < 0.001).

CONCLUSION. The introduction of breast tomosynthesis into our practice was associated with a significant reduction in recall rates and a simultaneous increase in breast cancer detection rates.
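As a quick sanity check on the figures quoted above, the recall PPVs follow directly from the recall and cancer-detection rates; a back-of-the-envelope sketch (small differences from the published 4.7% and 10.1% reflect rounding of the published rates):

```python
# Reproduce the recall PPVs from the rates reported in the AJR abstract.
# PPV of a recall = cancers detected / women recalled.

def recall_ppv(cancers_per_1000, recall_rate_pct):
    recalled_per_1000 = recall_rate_pct / 100 * 1000
    return cancers_per_1000 / recalled_per_1000

ppv_without_tomo = recall_ppv(cancers_per_1000=4.0, recall_rate_pct=8.7)
ppv_with_tomo = recall_ppv(cancers_per_1000=5.4, recall_rate_pct=5.5)

print(f"PPV without tomosynthesis: {ppv_without_tomo:.1%}")  # 4.6%
print(f"PPV with tomosynthesis:    {ppv_with_tomo:.1%}")     # 9.8%
```

The doubling of PPV is driven as much by the drop in recalls (the denominator) as by the extra cancers detected (the numerator).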

Key data from the article, in tables and figures:

[Table 1, AJR]

[Tables 2 and 3, AJR]

[Table 4, AJR]

[Figure 1, AJR]

[Figure 2, AJR]

Other articles related to the management of breast cancer were published on this Open Access Online Scientific Journal:

Automated Breast Ultrasound System (‘ABUS’) for full breast scanning: The beginning of structuring a solution for an acute need!

Introducing smart-imaging into radiologists’ daily practice.

Not applying evidence-based medicine drives up the costs of screening for breast-cancer in the USA.

New Imaging device bears a promise for better quality control of breast-cancer lumpectomies – considering the cost impact

Harnessing Personalized Medicine for Cancer Management, Prospects of Prevention and Cure: Opinions of Cancer Scientific Leaders @ http://pharmaceuticalintelligence.com

Predicting Tumor Response, Progression, and Time to Recurrence

“The Molecular pathology of Breast Cancer Progression”

Personalized medicine gearing up to tackle cancer

What could transform an underdog into a winner?

Mechanism involved in Breast Cancer Cell Growth: Function in Early Detection & Treatment

Nanotech Therapy for Breast Cancer

A Strategy to Handle the Most Aggressive Breast Cancer: Triple-negative Tumors

Breakthrough Technique Images Breast Tumors in 3-D With Great Clarity, Reduced Radiation

Closing the Mammography gap

Imaging: seeing or imagining? (Part 1)

Imaging: seeing or imagining? (Part 2)



Could Teleradiology contribute to "cross-border" standardization of imaging protocols in cancer management?

Writer: Dror Nir, PhD

Teleradiology has been accepted as a legitimate medical service for several years now. It has many clinical uses worldwide, ranging from expert or second-opinion services to comprehensive remote management of hospital radiology departments. Rapid advances in web-technology infrastructure have eliminated the barriers to transferring, reading, and reporting radiology images from remote locations. Today's main controversies concern issues that are also relevant to "in-house" radiology departments, e.g. clinical governance, quality assessment, workflow, and medico-legal issues.

The concept of teleradiology is as simple as the chart below shows.

[Figure: the teleradiology workflow]

Images are uploaded automatically, either from the imaging system itself or from the institution's PACS. Reports are sent back to the "client" within a few hours.
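One practical way to confirm that such a transfer is truly lossless (an illustrative sketch, not part of any cited service) is to compare cryptographic digests of the image bytes at the sending and receiving sites:

```python
# Illustrative integrity check for a teleradiology transfer: identical
# SHA-256 digests mean the uncompressed study arrived bit-for-bit intact.
import hashlib

def digest(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

# Simulated payload standing in for a real mammogram file.
sent = b"\x02\x00\x00\x00UL\x04\x00" + b"pixel-data" * 1000
received_ok = bytes(sent)           # lossless transfer
received_bad = sent[:-1] + b"\x00"  # a single corrupted byte

assert digest(sent) == digest(received_ok)   # verified intact
assert digest(sent) != digest(received_bad)  # corruption detected
print("transfer verified:", digest(sent)[:16], "...")
```

The same check works regardless of workstation or PACS vendor, since it operates on the raw bytes rather than on any rendered view of the image.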

The value for users goes well beyond mere image interpretation; for example:

  • On-site physicians have more time to spend with patients.
  • Access to additional subspecialty and multidisciplinary expertise.
  • Comprehensive image-interpretation and reporting services at reduced turnaround time and cost.
  • Effortless sharing of images and reports with referring physicians and patients.

As an example of "cross-border" standardization of a major existing radiology service, let's consider the use case of centralized review of mammography images. I know, quite ambitious! And politically very challenging!

But it seems technologically and clinically feasible, at least according to the publication quoted below:

Teleradiology with uncompressed digital mammograms: Clinical assessment

Julia Fruehwald-Pallamar, Marion Jantsch, Katja Pinker, Ricarda Hofmeister, Friedrich Semturs, Kathrin Piegler, Daniel Staribacher, Michael Weber, Thomas H. Helbich

published online 13 April 2012.

Abstract 

Purpose

The purpose of our study was to demonstrate the feasibility of sending uncompressed digital mammograms in a teleradiologic setting without loss of information by comparing image quality, lesion detection, and BI-RADS assessment.

Materials and methods

CDMAM phantoms were sent bidirectionally to two hospitals via the network. For the clinical part of the study, 200 patients were selected on the basis of the BI-RADS system: 50% BI-RADS I and II, and 50% BI-RADS IV and V. Two hundred digital mammograms (800 views) were sent to two different institutions via a teleradiology network. Three readers evaluated the 200 mammography studies at institution 1, where the images originated, and at the two other institutions (institutions 2 and 3) to which the images were sent. The readers assessed image quality, lesion detection, and BI-RADS classification.

Results

Automatic readout showed that CDMAM image quality was identical before and after transmission. The image quality of the 200 studies (600 mammograms in total) was rated as very good or good in 90–97% of cases before and after transmission. Depending on the institution and the reader, only 2.5–9.5% of all studies were rated as poor. The congruence of the readers with respect to the final BI-RADS assessment was 90–91% for institution 1 vs. institution 2, and 86–92% for institution 1 vs. institution 3. The agreement was even higher for conformity of content (BI-RADS I or II vs. BI-RADS IV or V). Reader agreement across the three institutions with regard to the detection of masses and calcifications, as well as BI-RADS classification, was very good (κ: 0.775–0.884). Results for interreader agreement were similar.

Conclusion

Uncompressed digital mammograms can be transmitted to different institutions with different workstations, without loss of information. The transmission process does not significantly influence image quality, lesion detection, or BI-RADS rating.

Keywords: Breast cancer; Imaging; Digital mammography; Teleradiology; Comparative studies
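For readers unfamiliar with the κ values quoted above: Cohen's kappa corrects raw agreement for the agreement expected by chance. A minimal sketch with hypothetical counts (not the study's data), for two readers classifying 200 studies as BI-RADS I/II vs. IV/V:

```python
# Cohen's kappa from a 2x2 agreement table (hypothetical counts,
# chosen only to illustrate the formula behind kappa values
# like the reported 0.775-0.884).
#                     reader B: I/II   reader B: IV/V
table = [[95, 10],  # reader A: I/II
         [10, 85]]  # reader A: IV/V

n = sum(sum(row) for row in table)
observed = sum(table[i][i] for i in range(2)) / n             # p_o
row = [sum(table[i]) for i in range(2)]                       # A's marginals
col = [sum(table[i][j] for i in range(2)) for j in range(2)]  # B's marginals
expected = sum(row[i] * col[i] for i in range(2)) / n**2      # p_e (chance)
kappa = (observed - expected) / (1 - expected)

print(f"observed agreement p_o = {observed:.3f}")  # 0.900
print(f"Cohen's kappa          = {kappa:.3f}")     # ~0.80
```

Note that 90% raw agreement shrinks to κ ≈ 0.80 once chance agreement is removed, which is why kappa is the preferred reader-agreement statistic in such studies.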

 

What could be the benefits from centralizing mammography interpretation through Teleradiology?

  • A baseline protocol could enable pooling a large number of cases from different populations without having to worry about differences in reporters' practice and experience, enabling better epidemiology studies of this disease.
  • Real-time, quantified measures of the relative imaging quality between institutions could help bring all screening services up to a maximal level.
  • Development of a comprehensive training program for radiologists involved in mammography-based screening for breast cancer.
  • Better information sharing among all players involved in each individual patient's pathway could improve clinical decision making and patient support.
  • Lower costs of screening programs, disease treatment, and follow-up.

Who could organize and carry out such an operation?

There are many reputable large university hospitals already offering teleradiology services. They are supported by government funds, and the service itself is profitable. I am not listing any of them for obvious reasons, but googling "teleradiology" will return many results.

