Author and Curator: Dror Nir, PhD

Radiology congresses are all about imaging in medicine. Interestingly, radiology originates from radiation: it was the discovery of X-ray radiation at the end of the 19th century that opened the road to “seeing” the inside of the human body without harming it (until then, that meant cutting into the body).

Radiology meetings are about sharing experience and know-how on imaging-based management of patients. The main topic is always image interpretation: the bottom line of clinical radiology! This year’s European Congress of Radiology (ECR) dedicated a few of its sessions to recent developments in image-interpretation tools. I chose to discuss the one that I consider contributes the most to the future of cancer patients’ management.

In the refresher course dedicated to computer applications, the discussion was aimed at the question “How do image processing and CAD impact radiological daily practice?” Experts’ reviews gave the audience background information on the following subjects:

  A.  The link between image reconstruction and image analysis.
  B.  Semantic web technologies for sharing and reusing imaging-related information.
  C.  Image processing and CAD: workflow in clinical practice.

I find item A to be a fundamental educational item. More than once I have heard a radiologist say: “I know this is the lesion because it looks different on the image.” Being aware of the computational concepts behind image rendering, even at a very high level and without a deep understanding of the computational processes, will contribute to more balanced interpretations.

Item B addresses the dream of investigators worldwide. Imagine being able to perform a web search and find educational, curated materials linking visuals to related clinical information, including standardized pathology reporting. For that, we would only need search engines that follow agreed-upon search methods, and worldwide agreement on the method and language used to describe things. Having such tools is a prerequisite to successful pharmaceutical and bio-tech development.
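To make this concrete, here is a minimal sketch of how a single imaging finding could be expressed with semantic web technologies, using Python’s rdflib library. It is not part of the congress material, and the vocabulary (namespace and property names) is entirely hypothetical, standing in for whatever ontology the community would eventually agree on.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    # Hypothetical vocabulary; a real system would use an agreed-upon ontology.
    EX = Namespace("http://example.org/imaging/")

    g = Graph()
    g.bind("ex", EX)

    lesion = EX["lesion/0001"]
    g.add((lesion, RDF.type, EX.BreastLesion))
    g.add((lesion, EX.seenOn, EX["study/DCE-MRI-2012-03"]))               # imaging study
    g.add((lesion, EX.biradsCategory, Literal("4")))                      # standardized reporting code
    g.add((lesion, EX.pathologyFinding, Literal("invasive ductal carcinoma")))

    # Findings expressed as triples can be shared, linked and queried
    # across institutions with standard tools (e.g. SPARQL).
    print(g.serialize(format="turtle"))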

I find item C strongly linked to A, as all methods for better image interpretation must fit into a workflow. This is a design goal that is not trivial to achieve. To understand what I mean, try to think about how you could integrate the following examples into your daily workflow: what kind of expertise is needed to execute them, how much time would they take, and do you have the infrastructure?

In the rest of this post, I would like to highlight, through examples that were discussed during ECR 2012, the aspect of improving cancer patients’ clinical assessment by using information fusion to support better image interpretation.

  • Combining quantitative information from MR spectroscopy (which quantifies the biochemical properties of a target lesion) and Dynamic Contrast-Enhanced MR imaging (which highlights lesion vasculature).

Image provided by: Dr. Pascal Baltzer, director of mammography at the centre for radiology at Friedrich Schiller University in Jena, Germany

  • Registration of images generated by different imaging modalities (multi-modal image registration).

The following examples illustrate this: Fig. 2 demonstrates registration of a mammography image of a breast lesion to an MRI image of the same lesion. Fig. 3 demonstrates registration of an ultrasound image of a breast lesion scanned by an Automated Breast Ultrasound (ABUS) system to an MRI image of the same lesion.

Images provided by members of the HAMAM project (an EU, FP7 funded research project: Highly Accurate Breast Cancer Diagnosis through Integration of Biological Knowledge, Novel Imaging Modalities, and Modelling): http://www.hamam-project.org

 

Multi-modality image registration is usually based on the alignment of image features apparent in the scanned regions. For ABUS-MRI matching, these were: the location of the nipple and the breast thickness; the posterior of the nipple in both modalities; the medial-lateral distance of the nipple to the breast edge on ultrasound; and an approximation of the ribcage using a cylinder on the MRI. A mean accuracy of 14 mm was achieved.
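The HAMAM registration algorithm itself is only summarized above; as a minimal sketch of the underlying principle (estimating a spatial transform from corresponding image features), the following Python snippet computes a rigid transform from a handful of matched landmarks using the standard least-squares (Kabsch) solution. The landmark coordinates are invented purely for illustration.

    import numpy as np

    def rigid_landmark_registration(src, dst):
        """Estimate rotation R and translation t mapping landmarks src onto dst
        (both N x 3 arrays, e.g. nipple position and other reference points)
        by the least-squares (Kabsch) solution."""
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t

    # Invented landmark coordinates (mm) in ultrasound and MRI space.
    us_pts = np.array([[0., 0., 0.], [35., 10., 5.], [20., 40., 8.], [5., 25., 30.]])
    mri_pts = np.array([[2., 1., 1.], [36., 12., 4.], [22., 41., 10.], [6., 27., 29.]])

    R, t = rigid_landmark_registration(us_pts, mri_pts)
    mapped = us_pts @ R.T + t
    print("mean landmark error (mm):", np.linalg.norm(mapped - mri_pts, axis=1).mean())

A real ABUS-MRI pipeline would of course add the breast-thickness and rib-cage constraints described above; the sketch only shows how matched features determine the aligning transform.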

Also from the HAMAM project, registration of an ABUS image to a mammography image:

Registration of an ABUS image to a mammography image. Image provided by members of the HAMAM project (an EU, FP7 funded research project: Highly Accurate Breast Cancer Diagnosis through Integration of Biological Knowledge, Novel Imaging Modalities, and Modelling): http://www.hamam-project.org

  • Automatic segmentation of suspicious regions of interest seen in breast MRI images

Segmentation of the suspicious lesions on the image is the preliminary step in tumor evaluation, e.g. finding their size and location. Since lesions have different signal/image characteristics from the rest of the breast tissue, there is hope for the development of computerized segmentation techniques. If successful, such techniques bear the promise of enhancing standardization in the reporting of lesion size and location: very important information for the success of the treatment step.

Roberta Fusco of the National Cancer Institute of Naples Pascal Foundation, Naples/IT suggested the following automatic method for suspicious ROI selection within the breast using dynamic-derived information from DCE-MRI data.

 

Automatic segmentation of suspicious ROI in breast MRI images, image provided by Roberta Fusco of the National Cancer Institute of Naples Pascal Foundation, Naples/IT

 

Her algorithm includes three steps (Figure 2):

  (i) breast mask extraction by means of automatic intensity-threshold estimation (Otsu thresholding) on the parametric map obtained through the sum of intensity differences (SOD), calculated pixel by pixel;

  (ii) hole-filling and leakage repair by means of morphological operators: closing is required to fill the holes on the boundaries of the breast mask, filling is required to fill the holes within the breast, and erosion is required to reduce the dilation introduced by the closing operation;

  (iii) suspicious ROI extraction: a pixel is assigned to a suspicious ROI if it satisfies two conditions: the maximum of its normalized time-intensity curve should be greater than 0.3, and the maximum signal intensity should be reached before the end of the scan time. The first condition ensures that the pixels within the ROI show significant contrast-agent uptake (thus excluding type I and type II curves), and the second condition is required for the time-intensity pattern to be of type IV or V (thus excluding type III curves).
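As a rough Python sketch of these three steps (not the author’s actual implementation; the structuring element, the number of morphological iterations, and the normalization against the pre-contrast frame are my assumptions), one could write:

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import threshold_otsu

    def segment_suspicious_rois(dce, uptake_threshold=0.3):
        """dce: 3-D array (time, rows, cols) holding one DCE-MRI slice over time.
        Returns (breast_mask, suspicious_mask)."""
        # (i) SOD parametric map (pixel-wise sum of frame-to-frame intensity
        #     differences), then Otsu thresholding to extract the breast mask.
        sod = np.abs(np.diff(dce.astype(float), axis=0)).sum(axis=0)
        mask = sod > threshold_otsu(sod)

        # (ii) morphological repair: closing (boundary holes), hole filling
        #      (interior holes), erosion (undo the dilation from closing).
        struct = ndi.generate_binary_structure(2, 1)
        mask = ndi.binary_closing(mask, structure=struct, iterations=3)
        mask = ndi.binary_fill_holes(mask)
        mask = ndi.binary_erosion(mask, structure=struct, iterations=3)

        # (iii) suspicious ROI criteria on normalized time-intensity curves:
        #       peak enhancement > 0.3, and peak reached before the last frame.
        baseline = dce[0].astype(float)
        curves = (dce - baseline) / (baseline + 1e-6)   # normalized enhancement
        peak = curves.max(axis=0)
        t_peak = curves.argmax(axis=0)
        suspicious = mask & (peak > uptake_threshold) & (t_peak < dce.shape[0] - 1)
        return mask, suspicious

In practice, the uptake threshold and the mapping to curve types I-V would have to be tuned to the DCE-MRI protocol in use.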

Written by: Dror Nir, PhD
