

Live Conference Coverage @Medcitynews Converge 2018 Philadelphia: The Davids vs. the Cancer Goliath Part 2

8:40 – 9:25 AM The Davids vs. the Cancer Goliath Part 2

Startups from diagnostics, biopharma, medtech, digital health and emerging tech will have 8 minutes to articulate their visions on how they aim to tame the beast.

Start Time End Time Company
8:40 8:48 3Derm
8:49 8:57 CNS Pharmaceuticals
8:58 9:06 Cubismi
9:07 9:15 CytoSavvy
9:16 9:24 PotentiaMetrics

Speakers:
Liz Asai, CEO & Co-Founder, 3Derm Systems, Inc. @liz_asai
John M. Climaco, CEO, CNS Pharmaceuticals @cns_pharma 

John Freyhof, CEO, CytoSavvy
Robert Palmer, President & CEO, PotentiaMetrics @robertdpalmer 
Moira Schieke M.D., Founder, Cubismi, Adjunct Assistant Prof UW Madison @cubismi_inc

 

3Derm Systems

3Derm Systems is an image-analysis firm for dermatologic malignancies.  Their telemedicine platform accurately triages out benign lesions seen by the primary care physician, expedites urgent pathology cases to the dermatologist, and enables rapid consultation over a home or portable device (HIPAA compliant).  Their suite also includes a digital dermatology teaching resource, with digital training for students, and documentation services.

 

CNS Pharmaceuticals

CNS Pharmaceuticals is developing drugs against CNS malignancies, spun out of research at MD Anderson.  They are focusing on glioblastoma and Berubicin, an anthracycline antibiotic (topoisomerase II inhibitor) that can cross the blood-brain barrier.  Berubicin has shown good activity in a number of animal models.  Phase I results were very positive, and Phase II is scheduled for later in the year.  They hope that its cardiotoxicity profile is less severe than that of other anthracyclines.  The market opportunity will be in temozolomide-resistant glioblastoma.

Cubismi

They are using machine learning and biomarker-based imaging to visualize tumor heterogeneity. “Data is the new oil” (Intel CEO). We need prediction machines, so they developed a “my body one file” system: a cloud-based, data-rich file containing a 3D map of the human body.

Cubismi is on a mission to help deliver the future promise of precision medicine to cure disease and assure your optimal health.  We are building a patient-doctor health data exchange platform that will leverage revolutionary medical imaging technology and put the power of health data into the hands of you and your doctors.

 

CytoSavvy

CytoSavvy is a digital pathology company.  They feel AI has a fatal flaw: there is no way to tell how a decision was made. They use a Shape-Based Model Segmentation algorithm, which applies automated image analysis to provide objective, personalized pathology data.  They are partnering with three academic centers (OSU, UM, UPMC) to pool data and automate the rule base for image analysis.

CytoSavvy’s patented diagnostic dashboards are intuitive, easy–to-use and HIPAA compliant. Our patented Shape-Based Modeling Segmentation (SBMS) algorithms combine shape and color analysis capabilities to increase reliability, save time, and improve decisions. Specifications and capabilities for our web-based delivery system follow.

link to their white paper: https://www.cytosavvy.com/resources/healthcare-ai-value-proposition.pdf

PotentiaMetrics

They were developing diagnostic software for cardiology epidemiology, measuring outcomes; however, when a family member received a cancer diagnosis, they felt there was a need for outcomes-based models for cancer treatment and care.  They deliver real-world outcomes for personalized patient care, helping patients make decisions about their care by using socioeconomic modeling integrated with real-time clinical data.

Featured in the Wall Street Journal, the informed treatment decisions they generate achieve a 20% cost savings on average.  Their research was spun out of Washington University in St. Louis.

They have concentrated on urban markets; however, the CEO mentioned his desire to move into more rural areas of the country, as their models work well for patients in rural settings as well.

Please follow on Twitter using the following hashtags and @pharma_BI 

#MCConverge

#cancertreatment

#healthIT

#innovation

#precisionmedicine

#healthcaremodels

#personalizedmedicine

#healthcaredata

And at the following handles:

@pharma_BI

@medcitynews


Read Full Post »


Imaging Living Cells and Devices, Presentation by Danut Dragoi, PhD, LPBI Group

Imaging living cells has for many years been a hot topic in biology, physics, and chemistry, as well as in engineering and technology, where specific devices are produced to visualize living cells. This presentation gives my opinion on the current status of applied technology for visualizing living cells, as well as very small areas of interest.

Slide #1

Slide1

Slide #2

Slide2

As an overview, slide #2 outlines: higher-resolution imaging of living cells based on advanced CT and micro-CT scanners and their current technological trends; advanced optical microscopy; optical magnetic imaging of living cells; and conclusions.

Slide #3

Slide3

Slide #3 shows a schematic of computed tomography applied to a single cell; see the URL address inside the slide. The work is in progress as an SBIR application by a group of researchers from Arizona State. The partial section of the cell is intended to reveal the contents of the cell, which is very important in biology and medicine.

Slide #4

Slide4

Slide #4 describes the principle of computed tomography for relatively small objects: a soft x-ray source on the left exposes the sample, and an x-ray detector screen on the right records the projection radiograph of the sample. The sample is rotated in small discrete steps of a few degrees, and a picture is recorded at each step. Depending on the absorption of the sample, a 3D reconstruction of the object is possible. The resolution of the reconstructed object is a function of the number of pixels as well as the pitch distance d (in the slide, 0.127 microns). Because the sample is rotated, the precision of the axis of rotation is very important and becomes a challenging task for small objects.
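The rotate-project-reconstruct cycle described above can be sketched numerically. The following is a minimal, hedged illustration of parallel-beam tomography with unfiltered backprojection (real micro-CT reconstruction uses filtered backprojection or iterative methods); the phantom, angles, and grid size are invented for the example.

```python
# Minimal sketch of parallel-beam CT: project a phantom at discrete
# rotation angles, then reconstruct by simple (unfiltered) backprojection.
# Illustrative only; not a production reconstruction pipeline.
import numpy as np

def make_phantom(n=64):
    """Square image containing a bright off-center disk."""
    y, x = np.mgrid[:n, :n]
    img = np.zeros((n, n))
    img[(x - n // 2) ** 2 + (y - n // 3) ** 2 < (n // 8) ** 2] = 1.0
    return img

def rotate_nearest(img, deg):
    """Nearest-neighbor rotation about the image center."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    th = np.deg2rad(deg)
    y, x = np.mgrid[:n, :n]
    xs = np.cos(th) * (x - c) - np.sin(th) * (y - c) + c
    ys = np.sin(th) * (x - c) + np.cos(th) * (y - c) + c
    xi = np.clip(np.round(xs).astype(int), 0, n - 1)
    yi = np.clip(np.round(ys).astype(int), 0, n - 1)
    return img[yi, xi]

def sinogram(img, angles):
    """One projection (column sums of the rotated image) per angle."""
    return np.stack([rotate_nearest(img, a).sum(axis=0) for a in angles])

def backproject(sino, angles, n):
    """Smear each projection back across the image plane and average."""
    recon = np.zeros((n, n))
    for proj, a in zip(sino, angles):
        smear = np.tile(proj, (n, 1))       # constant along the beam direction
        recon += rotate_nearest(smear, -a)  # undo the sample rotation
    return recon / len(angles)

angles = np.arange(0, 180, 5)               # a projection every 5 degrees
phantom = make_phantom()
recon = backproject(sinogram(phantom, angles), angles, phantom.shape[0])
# The unfiltered reconstruction is blurry, but the disk's location survives.
```

The blur is exactly why practical scanners apply a ramp filter to each projection before backprojection; the number of angles and the detector pitch d play the resolution-limiting role described in the slide.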

Slide #5

Slide5

Slide #5 shows a sample taken from the URL address given below the picture. It represents an insect; future CT development is expected to produce similar images of single living cells.

Slide #6

Slide6

For many bio-labs, the inverted optical microscope is the workhorse. The slide above shows such a microscope with a cell culture inside a transparent box. The picture can be found at the address shown inside the slide.

Slide #7

Slide7

Slide #7 describes an innovative digital microscope from Keyence with which we can observe an object entirely in focus, with a 20x greater depth of field than an optical microscope, view objects from any angle, and measure lengths directly on screen.

Slide #8

Slide8

Slide #8 shows an actual innovative digital microscope from Keyence, see the website address at the bottom of the slide.

Slide #9

Slide9

As we know, samples visualized with a common optical microscope have to be flat, because there is no clear image above or below the focal plane, which coincides with the surface of the sample. With a confocal microscope the situation changes: objects can be visualized at different depths, and the recorded image files can be used to reconstruct a 3D image of the object.

Slide #10

Slide10

Slide #10 describes the principle of a confocal microscope: a green laser on the left excites molecules of the specimen at a given focal depth, and the molecules emit red light (less energetic than green light) that travels all the way to the photomultiplier. A small pinhole aperture in front of the photomultiplier blocks red rays (parasitic light) coming from out-of-focus regions. More details can be found at the URL address given at the bottom of the slide.

Slide #11

Confocal microscopy Leica

Slide #11 shows a sample of a living specimen taken with a Leica micro-system, see the website address inside the slide.

Slide #12

Slide12

Slide #12 shows the principle of the fluorescence microscope and how it works. A light source is filtered to pass blue light (energetic photons that excite the molecules of the specimen); the green light emitted passes through the objective and ocular lenses and on to the photomultiplier or digital camera.

Slide #13

Slide13

Eric Betzig, William Moerner, and Stefan Hell won the 2014 Nobel Prize in Chemistry for “the development of super-resolved fluorescence microscopy,” which brings optical microscopy into the nano-dimension.

Slide #14

Slide14

Slide #14 introduces improvements to micro-CT scanners for imaging living cells, now under heavy R&D.  The goal is to visualize the interior of living cells. The challenging tasks are miniaturization, responding to customer needs, low cost, and versatility.

Slide #15

Slide15

Slide #15 shows the schematic of an optical magnetic imaging microscope for visualizing living cells with at least one dimension smaller than 500 nm. The website address given describes the working principle in detail.

Slide #16

Slide16

Slide #16 shows a picture of a handheld microscope that can spot cancer in moments; see the website.

Slide #17

Slide17

Slide #17 shows a handheld MRI device that connects to an iPhone. It is a useful device for detecting cancer cells.

Slide #18

Slide18

Slide #18 compares a portable NMR device (left) with a lab NMR instrument whose height is greater than 5 ft. The spectrum on the left is that of toluene; a capillary sample holder is also shown next to the magnetic device.

Slide #19

Slide19

Slide #19 shows that the handheld MRI can recognize complex molecules, can diagnose cancer faster, can be connected to a smartphone, and can make accurate, precise measurements.

Slide #20

Slide20

An optical dental camera is shown in slide #20. It costs less than $100 and connects to a computer via a USB cable. It is very useful for every family for checking the status of teeth and gums.

Slide #21

Slide21

For detecting dental cavities, an x-ray source packaged as a camera, together with a sensor that connects to a computer, is a very useful tool in a dental office.

Slide #22

Slide22

Slide #22 shows the conclusions of the presentation, which we summarize: the automated confocal microscope partially satisfies current needs for imaging living cells; the optical magnetic imaging microscope for living cells is a promising technique; higher resolution is needed on all current microscopes; advanced CT and micro-CT scanners provide a new avenue for the investigation of living cells; and more research is needed on handheld MRI, which is a new solution for recognizing complex molecules, including in cancer.

Read Full Post »


3D Imaging of Cancer Cells

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

3D Imaging of Cancer Cells Could Lead to Improved Ability of Pathologists and Radiologists to Plan Cancer Treatments and Monitor Cell Interactions

Dark Daily, Apr 8, 2016, by Jon Stone

https://www.linkedin.com/pulse/3d-imaging-cancer-cells-could-lead-improved-ability-plan-joseph-colao

 


New technology from researchers at the University of Texas Southwestern Medical Center enables the ability to study cancer cells in their native microenvironments.

Imaging research is one step closer to giving clinicians a way to do high-resolution scans of malignant cells in order to diagnose cancer and help identify useful therapies. If this technology were to prove successful in clinical studies, it might change how anatomic pathologists and radiologists diagnose and treat cancer.

Researchers at the University of Texas Southwestern Medical Center developed a way to create near-isotropic, high-resolution scans of cells within their microenvironments. The process utilizes a combination of two-photon Bessel beams and specialized filtering.

New Imaging Approach Could be Useful to Both Pathologists and Radiologists

In a recent press release, senior author Reto Fiolka, PhD, said, “There is clear evidence that the environment strongly affects cellular behavior—thus, the value of cell culture experiments on glass must at least be questioned. Our microscope is one tool that may bring us a deeper understanding of the molecular mechanisms that drive cancer cell behavior, since it enables high-resolution imaging in more realistic tumor environments.”

In a study in Developmental Cell, Erik S. Welf, PhD, et al, described the new microenvironmental selective plane illumination microscopy (meSPIM). When developing the technology, the team outlined three goals:

1. The microscope design must not prohibitively constrain microenvironmental properties.

2. Spatial and temporal resolution must match the cellular features of interest.

3. Spatial resolution must be isotropic to avoid spatial bias in quantitative measurements.

This new technology offers pathologists and medical laboratory scientists a new look at cancer cells and other diseases. The study notes that meSPIM eliminates the influence of stiff barriers, such as glass slide covers, while also allowing a level of control over both mechanical and chemical influences that was previously impossible.

Early meSPIM Research Reveals New Cell Behaviors

Early use of meSPIM in observing melanoma cells is already offering new insights into the relationship between the cell behavior of cellular- and subcellular-scale mechanisms and the microenvironment in which these cells exist. The study notes, “The ability to image fine cellular details in controllable microenvironments revealed morphodynamic features not commonly observed in the narrow range of mechanical environments usually studied in vitro.”

One such difference is the appearance of blebbing. Created by melanoma cells and cell lines, these small protrusions are thought to aid in cell mobility and survival. Using meSPIM, observers could follow the blebbing process in real time. Formation of blebs on slides and within an extracellular matrix (ECM) showed significant differences in both formation and manipulation of the surrounding microenvironment.

The team is also using meSPIM to take a look at membrane-associated biosensor and cytosolic biosensor signals in 3D. They hope that investigation of proteins such as phosphatidylinositol 3-kinase (PI3K) and protein kinase C will help to further clarify the roles these signals play in reorientation of fibroblasts.

meSPIM combined with computer vision enables imaging, visualization, and quantification of how cells alter collagen fibers over large distances within an image volume measuring 100 µm on each side. (Photo Copyright: Welf and Driscoll et al.)

The research team believes this opens new possibilities for studying diseases at a subcellular level, saying, “Cell biology is necessarily restricted to studying what we can measure. Accordingly, while the last hundred years have yielded incredible insight into cellular processes, unfortunately most of these studies have involved cells plated onto flat, stiff surfaces that are drastically different from the in vivo microenvironment …

“Here, we introduce an imaging platform that enables detailed subcellular observations without compromising microenvironmental control and thus should open a window for addressing these fundamental questions of cell biology.”

Limitations of meSPIM

One significant issue associated with the use of meSPIM is the need to process the large quantity of data into useful information. Algorithms currently allow for automatic bleb detection; however, manual marking, while time-consuming, still provides increased accuracy. Researchers believe the next step in improving the quality of meSPIM scans lies in computer platforms designed to extract and process the scan data.

Until this process is automated, user bias, sample mounting, and data handling will remain risks for introducing errors into the collected data. Yet, even in its early stages, meSPIM offers new options for assessing the state of cancer cells and may eventually provide pathologists and radiologists with additional information when creating treatment plans or assessments.

 

Seeing cancer cells in 3-D (w/ Video)

http://phys.org/news/2016-02-cancer-cells-d-video.html

 

Cancer in 3-D

http://cdn.phys.org/newman/csz/news/800/2016/cancerin3d.png

Extracted surfaces of two cancer cells. (Left) A lung cancer cell colored by actin intensity near the cell surface. Actin is a structural molecule that is integral to cell movement. (Right) A melanoma cell colored by PI3-kinase activity near the cell surface. PI3K is a signaling molecule that is key to many cell processes. Credit: Welf and Driscoll et al.

Cancer cells don’t live on glass slides, yet the vast majority of images related to cancer biology come from the cells being photographed on flat, two-dimensional surfaces—images that are sometimes used to make conclusions about the behaviour of cells that normally reside in a more complex environment. But a new high-resolution microscope, presented February 22 in Developmental Cell, now makes it possible to visualize cancer cells in 3D and record how they are signaling to other parts of their environment, revealing previously unappreciated biology of how cancer cells survive and disperse within living things.

“There is clear evidence that the environment strongly affects cellular behavior—thus, the value of cell culture experiments on glass must at least be questioned,” says senior author Reto Fiolka, an optical scientist at the University of Texas Southwestern Medical Center. “Our microscope is one tool that may bring us a deeper understanding of the molecular mechanisms that drive cancer cell behavior, since it enables high-resolution imaging in more realistic tumor environments.”

Read more at: http://phys.org/news/2016-02-cancer-cells-d-video.html#jCp

Read Full Post »


Experience with Thyroid Cancer

Author: Larry H. Bernstein, MD, FCAP

 

 

I retired from my position as pathologist in charge of clinical laboratories after five years at New York Methodist Hospital, with great satisfaction in mentoring students from the nearby high schools and university undergraduate programs interested in science.  I was fortunate to experience the Brooklyn “cityscape” and vibrance, and to work with other physician educators in surgery, cardiology, and pulmonary medicine. Most of my students participated in presenting papers at professional meetings, and some coauthored published work.  But I was about to enter a new phase of life.  I returned to my home in Connecticut and immediately accepted a temporary position, for less than a year, as the Blood Bank and Transfusion Medicine Director at Norwalk Hospital, which also afforded the opportunity to help with the installation of an automated hematology system and with quality monitoring in Chemistry.  It was a good reprieve from the anxiety of having nothing to do after an intense professional career.  When that ended I went to the Yale University Department of Mathematics and found a collaborative project with a brilliant postdoc and his mentor, Professor and Emeritus Chairman Ronald Coifman.  A colleague of mine had done a project with automated hematology many years ago, but it was too early then for a good interpretive hemogram.  I had sufficient data in 8,000 lines containing all of the important information.  Over more than a year we developed an algorithm that would interpret the data and provide a list of probabilities for the physician, using part of the data set to create the algorithm and another set for validation.  In the meantime I became engaged in twice-weekly sessions of yoga, Pilates, and massage therapy, and did some swimming.  I also participated in discussions with a group of retired men up to 20 years my senior, and did two rounds of walking around the condominium that was home to my wife and me.

 

Then I noticed that I became weak and short of breath walking around the condominium streets, and had to stop and hold onto a tree or streetlamp.  I was a long-term diabetic and had been followed by a pulmonologist for sleep apnea for some five years.  This was an insidious presentation, as I had had good pulmonary and cardiac status up to that point.  Then an “aha!” moment occurred when my laboratory results showed a high level of thyroid-stimulating hormone.  It was one of the rare instances of hyperparathyroidism occurring with a thyroid tumor.

I then had radiological testing of the head and neck, which led to a thyroid biopsy.  I chose referral to the Yale University Health Sciences Center, where there was an excellent endocrinologist and a leading center for head and neck surgery.  All of this took many trips, much testing, biopsies of the thyroid, and its removal.  There were also 3 proximate lymph nodes.  While undergoing the tests, the technicians said they had never had a patient like me because of my questions and comments.  It was a papillary thyroid cancer involving the center and right lobe, with a characteristic appearance, identified by a histologically stained biomarker that I reviewed with my longtime friend and colleague, Dr. Marguerite Pinto.  The surgery and follow-up went well.

 

However, I developed double vision (diplopia) and was referred to one of a handful of neuro-ophthalmologists in Connecticut.  Perhaps related to the hyperthyroid condition, I had developed an anti-thyroid antibody that disturbed the lower muscle that moves the right eye.  This required many tests over months, and the wearing of a special attachable lens gradient to equalize the vision in both eyes.  The next requirement was to watch and wait: it could be corrected by surgery if it remained after a year.  Nevertheless, it subsided over a period of perhaps 9 months, and I removed the attachment with sufficient return of my previous sight.

In the meantime I was writing a lot over this period, and I also began to watch MSNBC and Turner Classic Movies on a regular basis and found relief.  I’m not a “laugher” and have had a long-term anxiety state.  I enjoyed watching the magic of Charlie Chaplin, Al Jolson, Lassie, and whatever caught my fancy.

My daughter was accepted for a tenure-earning faculty position, competing against a large field of candidates, as an Assistant Professor at Holyoke Community College in Western Massachusetts. Her husband had invested 15 years as a Navy physician and neurologist, having graduated from the Armed Services Medical School in Bethesda, and, given this opportunity, decided to forgo further service that would pay for their child’s future college education.  He is very bright, knowledgeable, and a blessing for a son-in-law.  We went through the sale of our house and the search for a living arrangement near our daughter, all while I was going through my therapy.  Moving near our daughter was undoubtedly the best decision.

The move became an enormous challenge.  It took time to sell the condominium, which was desirable, in a difficult market.  I became engaged in trashing what I did not need to save, but I had to review hundreds of published works, unpublished papers, saved publications, and hundreds of photographs large and small that I had kept over many years.  I had to dispose of my darkroom equipment, and we managed to give much away.  It was very engaging; it was impossible not to be overwhelmed, and it was tiring over the long haul.

Prior to moving, my wife had trouble swallowing, and she was subsequently found to have an esophageal carcinoma at 20 cm, invading the submucosa.  We made arrangements for treatment at Massachusetts General Hospital, which could be done at its cancer affiliate in Northampton, MA.  The move was made, and we took temporary residence in a townhouse in Northampton, soon to move to an adult living facility.  My wife was lucky enough to have a squamous cell carcinoma, not an adenocarcinoma.  Her treatment needed careful adjustments.  She decided to live it out, whatever the outcome.  However, she has done well.  She maintained her weight, underwent radiation and chemotherapy, which are now finished, and is returning to eating more than soft food and protein shakes. She has enjoyed being a grandmother to an incredible kid in kindergarten only a block away, and is engaged in reading and all sorts of puzzles and games.

My own health has seen a decline in ease of motion. I am starting physical therapy and also pulmonary therapy for my asthma.  Having a grandson is both a pleasure and an education. Being a grandparent, one is relieved of the responsibility of being a parent.

Throughout my wife’s serious illness, which precluded surgery, we have had daily phone calls from her sister, nonstop weekend visits, and more to come.  She has been very satisfied with the quality of care.
My triplet sister calls often for both of us.  We also call my 95-year-old aunt, my mother’s sister.  My mother’s younger brother enjoyed life; he left Hungary as a medical student in 1941 and became an insurance salesman in Cleveland. He lived to 99 years old and outlived 3 wives, all friends of my mother.
His daughter has called me for medical second opinions for a good fifteen years.  She was a very rare patient who had a growth hormone-secreting pituitary adenoma (acromegaly), for which she had two surgeries, and she regularly visits the Cleveland Clinic and the Jewish Hospital of Los Angeles.

 

 

 

Read Full Post »


Superresolution Microscopy

Larry H. Bernstein, MD, FCAP, Curator

LPBI

Lens-Free 3-D Microscope Sharp Enough for Pathology

LOS ANGELES, Dec. 18, 2014 —

http://www.photonics.com/Article.aspx?AID=57019&refer=TechMicroscopy&utm_source=TechMicroscopy_2015_10_13

A computational lens-free, holographic on-chip microscope could provide a faster and cheaper means of diagnosing cancer and other diseases at the cellular level.

Developed by researchers at the University of California, Los Angeles, the compact system illuminates tissue or blood samples with a laser or LED, while a sensor records the pattern of shadows created by the sample.

The device processes these patterns as a series of holograms using the transport-of-intensity equation, multiheight iterative phase retrieval and rotational field transformations. Computer algorithms correct for imaging artifacts and enhance contrast in the reconstructed 3-D images, which can be focused to any depth within the field of view even after image capture.
A tissue sample image created by a new chip-based, lens-free microscope. Images courtesy of Aydogan Ozcan/UCLA.
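The post-capture refocusing described above can be illustrated with the angular-spectrum method, a standard building block of digital holography. This is only a hedged sketch under assumed parameters (the wavelength, pixel pitch, and stand-in hologram are invented for the example); the actual UCLA pipeline additionally applies the transport-of-intensity equation and multi-height phase retrieval, which are not reproduced here.

```python
# Hedged sketch: numerically refocusing an in-line hologram by
# angular-spectrum free-space propagation. Illustrative parameters only.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel, dz):
    """Propagate a complex field a distance dz (same units as wavelength/pixel)."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=pixel)           # spatial frequencies along x
    fy = np.fft.fftfreq(n, d=pixel)           # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent terms dropped
    H = np.exp(1j * kz * dz)                        # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# "Refocusing" the recorded intensity: back-propagate by -z toward the
# sample plane. A real pipeline would iterate this with phase retrieval.
wavelength = 0.5e-6   # 500 nm illumination (assumed)
pixel = 1.12e-6       # sensor pixel pitch (assumed)
z = 1e-3              # sample-to-sensor distance (assumed)
hologram = np.ones((256, 256))                   # stand-in for a recorded pattern
refocused = angular_spectrum_propagate(np.sqrt(hologram), wavelength, pixel, -z)
```

Because propagation is a pure post-processing step on the recorded pattern, the focal depth can be chosen freely after capture, which is what gives the lens-free device its "focus to any depth within the field of view" property.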
With a field of view several hundred times larger than that of an optical microscope, the lens-free device could considerably speed up diagnostic imaging. It is also much smaller than conventional microscopes.

“While mobile health care has expanded rapidly with the growth of consumer electronics — cellphones in particular — pathology is still, by and large, constrained to advanced clinical laboratory settings,” said professor Dr. Aydogan Ozcan. “Accompanied by advances in its graphical user interface, this platform could scale up for use in clinical, biomedical, scientific, educational and citizen-science applications, among others.”

The researchers tested the device using Pap smears that indicated cervical cancer, tissue specimens containing cancerous breast cells and blood samples containing sickle cell anemia.

In a blind test, a board-certified pathologist analyzed sets of specimen images created by the lens-free technology and by conventional microscopes. The pathologist’s diagnoses using the lens-free microscopic images were accurate 99 percent of the time, the researchers said.

“By providing high-resolution images of large-area pathology samples with 3-D digital focus adjustment, lens-free on-chip microscopy can be useful in resource-limited and point-of-care settings,” the researchers wrote in Science Translational Medicine (doi: 10.1126/scitranslmed.3009850).

Funding for the project came from the Presidential Early Career Award for Scientists and Engineers, the National Science Foundation, the National Institutes of Health, the U.S. Army Research Office, the Office of Naval Research and the Howard Hughes Medical Institute.

Smartphone DNA measurements

Ozcan’s lab also recently demonstrated an optomechanical attachment that enables smartphones to perform fluorescence microscopy measurements on DNA.

The 3-D-printed device augments the phone’s camera by creating a high-contrast darkfield imaging setup with an inexpensive external lens, thin-film interference filters, a miniature dovetail stage and a laser diode for oblique excitation of fluorescent labels. The molecules are labeled and stretched on disposable chips that fit into the smartphone attachment.
A smartphone add-on enables imaging of DNA in the field.
The device also includes an app that transmits data to a server at UCLA, which measures the lengths of the individual DNA molecules.

This project was funded by the National Science Foundation. The results were published in ACS Nano (doi: 10.1021/nn505821y).

For more information, visit www.ucla.edu.

Expanded Tissues Show Confocal Microscopes More Detail

CAMBRIDGE, Mass., Jan. 26, 2015 —

http://www.photonics.com/Article.aspx?AID=57140&refer=TechMicroscopy&utm_source=TechMicroscopy_2015_10_13

Nobel Prize-winning superresolution microscopy techniques circumvent the diffraction limit of light to image the smallest details of cells. But there is another way.

Researchers at MIT have developed a method for making biological tissue samples physically larger, rendering their nanoscale features visible to conventional confocal microscopes.

Using inexpensive, commercially available chemicals and microscopes commonly found in research labs, the technique could give more scientists access to 3-D superresolution imaging.

“Instead of acquiring a new microscope to take images with nanoscale resolution, you can take the images on a regular microscope,” said MIT professor Dr. Edward Boyden. “You physically make the sample bigger, rather than trying to magnify the rays of light that are emitted by the sample.”

The process involves meshes of sodium polyacrylate, the superabsorbent chemical used in disposable diapers. When exposed to water, these meshes expand, and the cellular structures around them expand too.

Cells in rodent brain slices, as well as cells grown in vitro, are first fixed in formaldehyde and then gently stripped of their fatty membranes before being labeled with fluorescent markers.

A precursor is then added and heated to form the polyacrylate gel. Proteins that hold the specimen together are digested, allowing it to expand uniformly. Finally, the sample is washed in salt-free water to trigger the expansion.

Even though the proteins have been broken apart, the original location of each fluorescent label stays the same relative to the overall structure of the tissue because it is anchored to the polyacrylate gel by antibodies.
Scientists modified the superabsorbent diaper compound sodium polyacrylate to enlarge brain tissue and image it in 3-D using fluorescent tags and confocal microscopes. Courtesy of the Boyden Lab/MIT.
Samples were imaged before expansion using superresolution microscopes, and imaged afterward using confocal microscopes. Expansion gave the confocal microscopes an effective 70-nm lateral resolution — sharp enough to resolve details of the cell protein complexes, the spaces between rows of skeletal microtubule filaments and the two sides of synapses.
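The resolution gain from physical expansion is simple arithmetic: the effective resolution is the microscope's native limit divided by the linear expansion factor. A hedged back-of-envelope check (the ~300 nm confocal lateral limit and ~4.5x expansion factor are assumed illustrative values, not figures quoted in this article):

```python
# Back-of-envelope: effective resolution after physical sample expansion.
# Assumed illustrative values: ~300 nm confocal lateral resolution and
# a ~4.5x linear expansion factor for the polyacrylate gel.
diffraction_limit_nm = 300.0
expansion_factor = 4.5
effective_resolution_nm = diffraction_limit_nm / expansion_factor
print(round(effective_resolution_nm))  # ~67 nm, consistent with the ~70 nm figure
```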

Trade-offs

The diffraction limit means standard microscopes can’t resolve objects smaller than about 250 nm. “Unfortunately, in biology that’s right where things get interesting,” Boyden said.

Three inventors of superresolution microscopy won the 2014 Nobel Prize in chemistry. Superresolution techniques, however, have their own limitation: They work best with small, thin samples, and take a long time to image large samples. They can also be hampered by optical scattering in thick samples.

“If you want to map the brain, or understand how cancer cells are organized in a metastasizing tumor, or how immune cells are configured in an autoimmune attack, you have to look at a large piece of tissue with nanoscale precision,” Boyden said.

The MIT technique allowed imaging of samples approximately 500 × 200 × 100 µm in volume.
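The resolution figures quoted here follow from simple arithmetic: physically expanding the sample divides the feature size a diffraction-limited microscope can effectively resolve. A back-of-envelope sketch (the wavelength and numerical aperture are illustrative assumptions, not from the article; the ~4.5× linear expansion factor is the one reported in the Science paper):

```python
# Abbe diffraction limit for a conventional microscope (illustrative values).
wavelength_nm = 550.0      # green light, assumed
numerical_aperture = 1.1   # water-immersion objective, assumed

diffraction_limit = wavelength_nm / (2 * numerical_aperture)  # ~250 nm

# Expansion microscopy: ~4.5x linear expansion is reported in the paper,
# so a ~300 nm native confocal resolution becomes ~300 / 4.5 ~= 67 nm,
# consistent with the ~70 nm effective resolution quoted above.
expansion_factor = 4.5
effective_resolution_nm = 300.0 / expansion_factor

print(round(diffraction_limit), round(effective_resolution_nm))  # 250 67
```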

“The other methods currently have better resolution, but are harder to use, or slower,” said graduate student Paul Tillberg. “The benefits of our method are the ease of use and, more importantly, compatibility with large volumes, which is challenging with existing technologies.”

Funding came from the National Institutes of Health, National Science Foundation, New York Stem Cell Foundation, Jeremy and Joyce Wertheimer and the Fannie and John Hertz Foundation.

The research was published in Science (doi: 10.1126/science.1260088).

For more information, visit www.mit.edu.

Microscope Takes 3-D Images From Inside Moving Subjects

NEW YORK, Jan. 19, 2015 —

http://www.photonics.com/Article.aspx?AID=57106

A new kind of microscope enables rapid 3-D imaging of living and moving samples, potentially offering advantages over laser-scanning confocal, two-photon and light-sheet microscopy.

Developed by Columbia University professor Dr. Elizabeth Hillman and graduate student Matthew Bouchard, swept confocally aligned planar excitation (SCAPE) microscopy involves simplified equipment and does not require sample mounting or translation. The microscope scans a sheet of light through the sample, making it unnecessary to position the sample or the microscope’s single objective.

“The ability to perform real-time, 3-D imaging at cellular resolution in behaving organisms is a new frontier for biomedical and neuroscience research,” Hillman said. “With SCAPE, we can now image complex, living things, such as neurons firing in the rodent brain, crawling fruit fly larvae and single cells in the zebrafish heart while the heart is actually beating spontaneously.”
SCAPE yields data equivalent to conventional light-sheet microscopy, but with a single, stationary objective lens, no sample translation and high-speed 3-D imaging. This unique configuration permitted volumetric imaging of cortical dendrites in the awake, behaving mouse brain. Courtesy of Elizabeth Hillman/Columbia Engineering.
Conventional light-sheet microscopes use two orthogonal objectives and require that samples be in a fixed position. Confocal and two-photon microscopes can image a single plane within a living sample, but cannot generate 3-D images quickly enough to capture events like neurons firing.

SCAPE does have one drawback: Using a 488-nm laser, it cannot penetrate tissue as deeply as two-photon microscopy.

The new technique could be combined with optogenetics and other tissue manipulations, the researchers said. It could also be used for imaging cellular replication, function and motion in intact tissues, 3-D cell cultures and engineered tissue constructs; as well as imaging 3-D dynamics in microfluidics and flow-cell cytometry systems.

Hillman next plans to explore clinical applications of SCAPE, such as video-rate 3-D microendoscopy and intrasurgical imaging.

Funding for the project came from the National Institutes of Health, Human Frontier Science Program, Wallace H. Coulter Foundation, Dana Foundation and the U.S. Department of Defense.

The research was published in Nature Photonics (doi:10.1038/nphoton.2014.323).

For more information, visit www.engineering.columbia.edu.



Imaging Technology in Cancer Surgery

Author and curator: Dror Nir, PhD

The advent of medical-imaging technologies such as image fusion, functional imaging and noninvasive tissue characterisation is playing a pivotal role in turning the concept of personalized medicine in cancer into practice. To date, the main imaging systems that can provide a reasonable level of cancer detection and localization are CT, mammography, multi-sequence MRI, PET/CT and ultrasound. All of these require skilled operators and experienced image interpreters to deliver results at a reasonable level. Radiologists and oncologists generally agree that, in order to provide a comprehensive workflow that complies with the principles of personalized medicine, the management of future cancer patients will rely heavily on computerized image-interpretation applications that extract measurable imaging biomarkers from images in a standardized manner, leading to better clinical assessment of cancer patients.

As a consequence of the Human Genome Project and technological advances in gene sequencing, the understanding of cancer has advanced considerably. This has broadened the range of treatment options, yet surgical resection is still the leading form of therapy offered to patients with organ-confined tumors. Obtaining cancer-free surgical margins is crucial to the outcome of surgery in terms of overall survival and patients' quality of life and morbidity. Currently, a significant portion of surgeries ends with positive surgical margins, leading to poor clinical outcomes and increased costs. To improve on this, a large variety of intraoperative imaging devices aimed at resection guidance have been introduced and adopted in the last decade, and this trend is expected to continue.

The Status of Contemporary Image-Guided Modalities in Oncologic Surgery is a review paper presenting a variety of cancer-imaging techniques that have been adapted or developed for intra-operative surgical guidance. It also covers novel, cancer-specific contrast agents that are in early-stage development and show significant promise for improving real-time detection of sub-clinical cancer in the operative setting.

Another good (free access) review paper is: uPAR-targeted multimodal tracer for pre- and intraoperative imaging in cancer surgery

Abstract

Pre- and intraoperative diagnostic techniques facilitating tumor staging are of paramount importance in colorectal cancer surgery. The urokinase receptor (uPAR) plays an important role in the development of cancer, tumor invasion, angiogenesis, and metastasis, and over-expression is found in the majority of carcinomas. This study aims to develop the first clinically relevant anti-uPAR antibody-based imaging agent that combines nuclear (111In) and real-time near-infrared (NIR) fluorescent imaging (ZW800-1). Conjugation and binding capacities were investigated and validated in vitro using spectrophotometry and cell-based assays. In vivo, three human colorectal xenograft models were used, including an orthotopic peritoneal carcinomatosis model to image small tumors. Nuclear and NIR fluorescent signals showed clear tumor delineation between 24 h and 72 h post-injection, with the highest tumor-to-background ratios of 5.0 ± 1.3 at 72 h using fluorescence and 4.2 ± 0.1 at 24 h with radioactivity. Tumors of 1–2 mm could be clearly recognized by their fluorescent rim. This study showed the feasibility of a uPAR-recognizing multimodal agent to visualize tumors during image-guided resections using NIR fluorescence, whereas its nuclear component assisted in the pre-operative non-invasive recognition of tumors using SPECT imaging. This strategy can assist in surgical planning and subsequent precision surgery to reduce the number of incomplete resections.
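The tumor-to-background ratios reported in the abstract are, in essence, the mean signal inside a tumor region of interest divided by the mean signal in adjacent normal tissue. A minimal sketch of that computation (the pixel intensities below are invented for illustration, not study data):

```python
import numpy as np

def tumor_to_background_ratio(tumor_roi, background_roi):
    """Mean fluorescence (or count rate) in the tumor ROI over the background ROI."""
    return float(np.mean(tumor_roi) / np.mean(background_roi))

# Hypothetical pixel intensities for the two regions of interest.
tumor = np.array([500.0, 520.0, 480.0, 510.0])
background = np.array([100.0, 95.0, 105.0, 100.0])

print(tumor_to_background_ratio(tumor, background))  # 5.025
```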

INTRODUCTION
Diagnosis, staging, and surgical planning of colorectal cancer patients increasingly rely on imaging techniques that provide information about tumor biology and anatomical structures [1-3]. Single-photon emission computed tomography (SPECT) and positron emission tomography (PET) are preoperative nuclear imaging modalities used to provide insights into tumor location, tumor biology, and the surrounding micro-environment [4]. Both techniques depend on the recognition of tumor cells using radioactive ligands. Various monoclonal antibodies, initially developed as therapeutic agents (e.g. cetuximab, bevacizumab, labetuzumab), are labeled with radioactive tracers and evaluated for pre-operative imaging purposes [5-9]. Despite these techniques, during surgery surgeons still rely mostly on their eyes and hands to distinguish healthy from malignant tissues, resulting in incomplete resections or unnecessary tissue removal in up to 27% of rectal cancer patients [10, 11]. Incomplete resections (R1) have been shown to be a strong predictor of the development of distant metastasis, local recurrence, and decreased survival in colorectal cancer patients [11, 12]. Fluorescence-guided surgery (FGS) is an intraoperative imaging technique already introduced and validated in the clinic for sentinel lymph node (SLN) mapping and biliary imaging [13]. Tumor-specific FGS can be regarded as an extension of SPECT/PET, using fluorophores instead of radioactive labels conjugated to tumor-specific ligands, but with higher spatial resolution than SPECT/PET imaging and real-time anatomical feedback [14]. A powerful synergy can be achieved when nuclear and fluorescent imaging modalities are combined, extending the nuclear diagnostic images with real-time intraoperative imaging. This combination can lead to improved diagnosis and management by integrating pre-, intra- and postoperative imaging.
Nuclear imaging enables pre-operative evaluation of tumor spread, while during surgery deeper-lying lesions can be localized using a gamma probe. The near-infrared (NIR) fluorescent signal provides the surgeon with real-time anatomical feedback to accurately recognize and resect malignant tissues. Postoperatively, malignant cells can be recognized using NIR fluorescence microscopy. Clinically, the advantages of multimodal agents in image-guided surgery have been shown in patients with melanoma and prostate cancer, but those studies used non-specific agents, following the natural lymphatic drainage pattern of colloidal tracers after peritumoral injection [15, 16]. The urokinase-type plasminogen activator receptor (uPAR) is implicated in many aspects of tumor growth and (micro)metastasis [17, 18]. The levels of uPAR are undetectable in normal tissues except for occasional macrophages and granulocytes in the uterus, thymus, kidneys and spleen [19]. Enhanced tumor levels of uPAR and its circulating form (suPAR) are independent prognostic markers for overall survival in colorectal cancer patients [20, 21]. The relatively selective and high overexpression of uPAR in a wide range of human cancers, including colorectal, breast, and pancreatic, nominates uPAR as a widely applicable and potent molecular target [17, 22]. The current study aims to develop a clinically relevant uPAR-specific multimodal agent that can be used to visualize tumors pre- and intraoperatively after a single injection. We combined the 111In isotope with the NIR fluorophore ZW800-1, conjugated via a hybrid linker to a uPAR-specific monoclonal antibody (ATN-658), and evaluated its performance using a pre-clinical SPECT system (U-SPECT-II) and a clinically applied NIR fluorescence camera system (FLARE™).


Robotic surgery is a growing trend, particularly in urology. The following review paper offers a good discussion of the added value of imaging in urologic robotic surgery:

The current and future use of imaging in urological robotic surgery: a survey of the European Association of Robotic Urological Surgeons

 Abstract

Background

With the development of novel augmented reality operating platforms the way surgeons utilize imaging as a real-time adjunct to surgical technique is changing.

Methods

A questionnaire was distributed via the European Robotic Urological Society mailing list. The questionnaire had three themes: surgeon demographics, current use of imaging and potential uses of an augmented reality operating environment in robotic urological surgery.

Results

117 of the 239 respondents (48.9%) were independently practicing robotic surgeons. 74% of surgeons reported having imaging available in theater for prostatectomy, 97% for robotic partial nephrectomy and 95% for cystectomy. 87% felt there was a role for augmented reality as a navigation tool in robotic surgery.

Conclusions

This survey has revealed the contemporary robotic surgeon to be comfortable in the use of imaging for intraoperative planning; it also suggests that there is a desire for augmented reality platforms within the urological community. Copyright © 2014 John Wiley & Sons, Ltd.

 Introduction

Since Röntgen first utilized X-rays to image the carpal bones of the human hand in 1895, medical imaging has evolved and is now able to provide a detailed representation of a patient’s intracorporeal anatomy, with recent advances now allowing for 3-dimensional (3D) reconstructions. The visualization of anatomy in 3D has been shown to improve the ability to localize structures when compared with 2D with no change in the amount of cognitive loading [1]. This has allowed imaging to move from a largely diagnostic tool to one that can be used for both diagnosis and operative planning.

One potential interface for displaying 3D images, maximizing their potential as a tool for surgical guidance, is to overlay them onto the endoscopic operative scene (augmented reality). This addresses, in part, a criticism often leveled at robotic surgery: the loss of haptic feedback. Augmented reality has the potential to mitigate this sensory loss by enhancing the surgeon's visual cues with information regarding subsurface anatomical relationships [2].

Augmented reality surgery is in its infancy for intra-abdominal procedures due in large part to the difficulties of applying static preoperative imaging to a constantly deforming intraoperative scene [3]. There are case reports and ex vivo studies in the literature examining the technology in minimal access prostatectomy [3-6] and partial nephrectomy [7-10], but there remains a lack of evidence determining whether surgeons feel there is a role for the technology and if so for what procedures they feel it would be efficacious.

This questionnaire-based study was designed to assess first, the pre- and intra-operative imaging modalities utilized by robotic urologists; second, the current use of imaging intraoperatively for surgical planning; and finally whether there is a desire for augmented reality among the robotic urological community.

Methods

Recruitment

A web-based survey instrument was designed and sent out, as part of a larger survey, to members of the EAU robotic urology section (ERUS). Only independently practicing robotic surgeons performing robot-assisted laparoscopic prostatectomy (RALP), robot-assisted partial nephrectomy (RAPN) and/or robotic cystectomy were included in the analysis; those surgeons exclusively performing other procedures were excluded. Respondents were offered no incentives to reply. All data collected was anonymous.

Survey design and administration

The questionnaire was created using the LimeSurvey platform (www.limesurvey.com) and hosted on their website. All responses (both complete and incomplete) were included in the analysis. The questionnaire was dynamic with the questions displayed tailored to the respondents’ previous answers.

When computing fractions or percentages, the denominator was the number of respondents who answered the question; this number is variable due to the dynamic nature of the questionnaire.

Demographics

All respondents to the survey were asked in what country they practiced and what robotic urological procedures they performed. In addition, surgeons were asked to specify the number of cases they had undertaken for each procedure.

 Current imaging practice

Procedure-specific questions in this group were displayed according to the operations the respondent performed. A summary of the questions can be seen in Appendix 1. Procedure-nonspecific questions were also asked. Participants were asked whether they routinely used the TilePro™ function of the da Vinci console (Intuitive Surgical, Sunnyvale, USA) and whether they routinely viewed imaging intra-operatively.

 Augmented reality

Before answering questions in this section, participants were invited to watch a video demonstrating an augmented reality platform during RAPN, performed by our group at Imperial College London. A still from this video can be seen in Figure 1. They were then asked whether they felt augmented reality would be of use as a navigation or training tool in robotic surgery.

f1

Figure 1. A still taken from a video of augmented reality robot-assisted partial nephrectomy. Here the tumour has been painted into the operative view, allowing the surgeon to appreciate the relationship of the tumour with the surface of the kidney

Once again, in this section, procedure-specific questions were displayed according to the operations the respondent performed. Only those respondents who felt augmented reality would be of use as a navigation tool were asked procedure-specific questions. Questions were asked to establish where in these procedures they felt an augmented reality environment would be of use.

Results

Demographics

Of the 239 respondents completing the survey, 117 were independently practising robotic surgeons and were therefore eligible for analysis. The majority of the surgeons had both trained (210/239, 87.9%) and worked in Europe (215/239, 90%). The median number of cases undertaken by those surgeons reporting their case volume was 120 (range 6–2000) for RALP, 9 (1–120) for robot-assisted cystectomy and 30 (1–270) for RAPN.

 

Contemporary use of imaging in robotic surgery

When enquiring about the use of imaging for surgical planning, the majority of surgeons (57%, 65/115) routinely viewed pre-operative imaging intra-operatively, with only 9% (13/137) routinely capitalizing on the TilePro™ function in the console to display these images. When assessing the use of TilePro™ among surgeons who performed RAPN, 13.8% (9/65) reported using the technology routinely.

When assessing the imaging modalities that are available to a surgeon in theater, the majority of surgeons performing RALP (74%, 78/106) reported using MRI, with an additional 37% (39/106) reporting the use of CT for pre-operative staging and/or planning. For surgeons performing RAPN and robot-assisted cystectomy there was more of a consensus, with 97% (68/70) and 95% (54/57) of surgeons, respectively, using CT for routine preoperative imaging (Table 1).

Table 1. Which preoperative imaging modalities do you use for diagnosis and surgical planning?

                     CT          MRI         USS         None        Other
RALP (n = 106)       39.8% (39)  73.5% (78)  2% (3)      15.1% (16)  8.4% (9)
RAPN (n = 70)        97.1% (68)  42.9% (30)  17.1% (12)  0% (0)      2.9% (2)
Cystectomy (n = 57)  94.7% (54)  26.3% (15)  1.8% (1)    1.8% (1)    5.3% (3)
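As the Methods note, the denominator for each percentage is the number of respondents who answered that question. A quick sketch reproducing the RAPN row of Table 1 from its raw counts (a consistency check for illustration only, not part of the study):

```python
# Raw counts for RAPN respondents (n = 70), taken from Table 1.
counts = {"CT": 68, "MRI": 30, "USS": 12, "None": 0, "Other": 2}
n = 70

percentages = {modality: round(100 * c / n, 1) for modality, c in counts.items()}
print(percentages)
# {'CT': 97.1, 'MRI': 42.9, 'USS': 17.1, 'None': 0.0, 'Other': 2.9}
```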

Those surgeons performing RAPN were found to have the most diversity in the way they viewed pre-operative images in theater, routinely viewing images in sagittal, coronal and axial slices (Table 2). The majority of these surgeons also viewed the images as 3D reconstructions (54%, 38/70).

Table 2. How do you typically view preoperative imaging in the OR? 3D recons = three-dimensional reconstructions

                     Axial slices  Coronal slices  Sagittal slices  3D recons.  Do not view
RALP (n = 106)       49.1% (52)    44.3% (47)      31.1% (33)       9.4% (10)   31.1% (33)
RAPN (n = 70)        68.6% (48)    74.3% (52)      60% (42)         54.3% (38)  0% (0)
Cystectomy (n = 57)  70.2% (40)    52.6% (30)      50.9% (29)       21.1% (12)  8.8% (5)

The majority of surgeons used ultrasound intra-operatively in RAPN (51%, 35/69) with a further 25% (17/69) reporting they would use it if they had access to a ‘drop-in’ ultrasound probe (Figure 2).

f2

Figure 2. Chart demonstrating responses to the question – Do you use intraoperative ultrasound for robotic partial nephrectomy?

Desire for augmented reality

Overall, 87% of respondents envisaged a role for augmented reality as a navigation tool in robotic surgery and 82% (88/107) felt that there was an additional role for the technology as a training tool.

The greatest desire for augmented reality was among those surgeons performing RAPN with 86% (54/63) feeling the technology would be of use. The largest group of surgeons felt it would be useful in identifying tumour location, with significant numbers also feeling it would be efficacious in tumor resection (Figure 3).

f3

Figure 3. Chart demonstrating responses to the question – In robotic partial nephrectomy which parts of the operation do you feel augmented reality image overlay would be of assistance?

When enquiring about the potential for augmented reality in RALP, 79% (20/96) of respondents felt it would be of use during the procedure, with the largest group feeling it would be helpful for nerve sparing (65%, 62/96) (Figure 4). The picture in cystectomy was similar, with 74% (37/50) of surgeons believing augmented reality would be of use, with both nerve sparing and apical dissection highlighted as specific examples (40%, 20/50) (Figure 5). The majority also felt that it would be useful for lymph node dissection in both RALP and robot-assisted cystectomy (55% (52/95) and 64% (32/50), respectively).

f4

Figure 4. Chart demonstrating responses to the question – In robotic prostatectomy which parts of the operation do you feel augmented reality image overlay would be of assistance?

f5

Figure 5. Chart demonstrating responses to the question – In robotic cystectomy which parts of the operation do you feel augmented reality overlay technology would be of assistance?

Discussion

The results from this study suggest that the contemporary robotic surgeon views imaging as an important adjunct to operative practice. The way these images are being viewed is changing: although the majority of surgeons continue to view images as two-dimensional (2D) slices, a significant minority have started to capitalize on 3D reconstructions to gain an improved appreciation of the patient's anatomy.

This study has highlighted surgeons’ willingness to take the next step in the utilization of imaging in operative planning, augmented reality, with 87% feeling it has a role to play in robotic surgery. Although there appears to be a considerable desire for augmented reality, the technology itself is still in its infancy with the limited evidence demonstrating clinical application reporting only qualitative results [3, 7, 11, 12].

There are a number of significant issues that need to be overcome before augmented reality can be adopted in routine clinical practice. The first of these is registration (the process by which two images are positioned in the same coordinate system such that the locations of corresponding points align [13]). This process has been performed both manually and using automated algorithms with varying degrees of accuracy [2, 14]. The second issue pertains to the use of static pre-operative imaging in a dynamic operative environment; in order for the pre-operative imaging to be accurately registered it must be deformable. This problem remains as yet unresolved.
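For the rigid, point-based case, the registration step defined above can be made concrete: given matched fiducial landmarks in the pre-operative image and the operative scene, the least-squares rotation and translation aligning them can be recovered with the Kabsch algorithm. This is a generic sketch of that textbook computation, not the method of any system discussed here; the landmark coordinates and the ground-truth pose are invented for illustration:

```python
import numpy as np

def rigid_register(src, dst):
    """Kabsch algorithm: least-squares rigid transform (R, t) with dst ~= src @ R.T + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against an improper (reflected) fit
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Invented fiducial points and a known ground-truth pose to recover.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, 2.0, 3.0])

landmarks = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
observed = landmarks @ R_true.T + t_true

R_est, t_est = rigid_register(landmarks, observed)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```

This closed-form step is only the start of the problem described above: it assumes correct point correspondences and, crucially, rigid anatomy, which is exactly the assumption that deforming intraoperative tissue violates.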

Live intra-operative imaging circumvents the problems of tissue deformation, and in RAPN 51% of surgeons reported already using intra-operative ultrasound to aid in tumour resection. Cheung and colleagues [9] have published an ex vivo study highlighting the potential for intra-operative ultrasound in augmented reality partial nephrectomy. They report that overlaying ultrasound onto the operative scene improves the surgeon's appreciation of the subsurface tumour anatomy; this improvement in anatomical appreciation resulted in improved resection quality over conventional ultrasound-guided resection [9]. Building on this work, the first in vivo use of overlaid ultrasound in RAPN has recently been reported [10]. Although good subjective feedback was received from the operating surgeon, the study was limited to a single case demonstrating feasibility and as such was not able to show an outcome benefit of the technology [10].

RAPN also appears to be the area in which augmented reality would be most readily adopted with 86% of surgeons claiming they see a use for the technology during the procedure. Within this operation there are two obvious steps to augmentation, anatomical identification (in particular vessel identification to facilitate both routine ‘full clamping’ and for the identification of secondary and tertiary vessels for ‘selective clamping’ [15]) and tumour resection. These two phases have different requirements from an augmented reality platform; the first phase of identification requires a gross overview of the anatomy without the need for high levels of registration accuracy. Tumor resection, however, necessitates almost sub-millimeter accuracy in registration and needs the system to account for the dynamic intra-operative environment. The step of anatomical identification is amenable to the use of non-deformable 3D reconstructions of pre-operative imaging while that of image-guided tumor resection is perhaps better suited to augmentation with live imaging such as ultrasound [2, 9, 16].

For RALP and robot-assisted cystectomy, the steps in which surgeons felt augmented reality would be of assistance were those of neurovascular bundle preservation and apical dissection. The relative, perceived, efficacy of augmented reality in these steps correlates with previous examinations of augmented reality in RALP [17, 18]. Although surgeon preference for utilizing augmented reality while undertaking robotic prostatectomy has been demonstrated, Thompson et al. failed to demonstrate an improvement in oncological outcomes in those patients undergoing AR RALP [18].

Both nerve sparing and apical dissection require a high level of registration accuracy and a necessity for either live imaging or the deformation of pre-operative imaging to match the operative scene; achieving this level of registration accuracy is made more difficult by the mobilization of the prostate gland during the operation [17]. These problems are equally applicable to robot-assisted cystectomy. Although guidance systems have been proposed in the literature for RALP [3-5, 12, 17], none have achieved the level of accuracy required to provide assistance during nerve sparing. In addition, there are still imaging challenges that need to be overcome. Although multiparametric MRI has been shown to improve decision making in opting for a nerve sparing approach to RALP [19] the imaging is not yet able to reliably discern the exact location of the neurovascular bundle. This said, significant advances are being made with novel imaging modalities on the horizon that may allow for imaging of the neurovascular bundle in the near future [20].

 

Limitations

The number of operations included represents a significant limitation of the study; had different index procedures been chosen, different results may have been seen. This being said, the index procedures selected were chosen as they represent the vast majority of uro-oncological robotic surgical practice, largely mitigating this shortfall.

Although the available ex vivo evidence suggests that introducing augmented reality operating environments into surgical practice would help to improve outcomes [9, 21], the in vivo experience to date is limited to small-volume case series reporting feasibility [2, 3, 14]. To date no study has demonstrated an in vivo outcome advantage of augmented reality guidance. In addition, augmented reality has been demonstrated to increase rates of inattention blindness among surgeons, suggesting there is a trade-off between increasing visual information and the surgeon's ability to appreciate unexpected operative events [21].

 

Conclusions

This survey shows the contemporary robotic surgeon to be comfortable with the use of imaging to aid intra-operative planning; furthermore, it highlights a significant interest among the urological community in augmented reality operating platforms.

Short- to medium-term development of augmented reality systems in robotic urological surgery would be best performed using RAPN as the index procedure. Not only was this the operation where surgeons saw the greatest potential benefits, but it may also be the operation where augmentation is most easily achievable, capitalizing on the respective benefits of technologies surgeons are already using: pre-operative CT for anatomical identification and intra-operative ultrasound for tumour resection.

 

Conflict of interest

None of the authors have any conflicts of interest to declare.

Appendix 1

Question Asked Question Type
Demographics
In which country do you usually practise? Single best answer
Which robotic procedures do you perform?* Single best answer
Current Imaging Practice
What preoperative imaging modalities do you use for the staging and surgical planning in renal cancer? Multiple choice
How do you typically view preoperative imaging in theatre for renal cancer surgery? Multiple choice
Do you use intraoperative ultrasound for partial nephrectomy? Yes or No
What preoperative imaging modalities do you use for the staging and surgical planning in prostate cancer? Multiple choice
How do you typically view preoperative imaging in theatre for prostate cancer? Multiple choice
Do you use intraoperative ultrasound for robotic partial nephrectomy? Yes or No
Which preoperative imaging modality do you use for staging and surgical planning in muscle invasive TCC? Multiple choice
How do you typically view preoperative imaging in theatre for muscle invasive TCC? Multiple choice
Do you routinely refer to preoperative imaging intraoperatively? Yes or No
Do you routinely use TilePro intraoperatively? Yes or No
Augmented Reality
Do you feel there is a role for augmented reality as a navigation tool in robotic surgery? Yes or No
Do you feel there is a role for augmented reality as a training tool in robotic surgery? Yes or No
In robotic partial nephrectomy which parts of the operation do you feel augmented reality image overlay technology would be of assistance? Multiple choice
In robotic nephrectomy which parts of the operation do you feel augmented reality image overlay technology would be of assistance? Multiple choice
In robotic prostatectomy which parts of the operation do you feel augmented reality image overlay technology would be of assistance? Multiple choice
Would augmented reality guidance be of use in lymph node dissection in robotic prostatectomy? Yes or No
In robotic cystectomy which parts of the operation do you feel augmented reality image overlay technology would be of assistance? Multiple choice
Would augmented reality guidance be of use in lymph node dissection in robotic cystectomy? Yes or No
*The relevant procedure related questions were displayed based on the answer to this question

References

1. Foo J-L, Martinez-Escobar M, Juhnke B, et al. Evaluating mental workload of two-dimensional and three-dimensional visualization for anatomical structure localization. J Laparoendosc Adv Surg Tech A 2013; 23(1): 65–70.

2. Hughes-Hallett A, Mayer EK, Marcus HJ, et al. Augmented reality partial nephrectomy: examining the current status and future perspectives. Urology 2014; 83(2): 266–273.

3. Sridhar AN, Hughes-Hallett A, Mayer EK, et al. Image-guided robotic interventions for prostate cancer. Nat Rev Urol 2013; 10(8): 452–462.

4. Cohen D, Mayer E, Chen D, et al. Augmented reality image guidance in minimally invasive prostatectomy. Lect Notes Comput Sci 2010; 6367: 101–110.

5. Simpfendorfer T, Baumhauer M, Muller M, et al. Augmented reality visualization during laparoscopic radical prostatectomy. J Endourol 2011; 25(12): 1841–1845.

6. Teber D, Simpfendorfer T, Guven S, et al. In vitro evaluation of a soft-tissue navigation system for laparoscopic prostatectomy. J Endourol 2010; 24(9): 1487–1491.

7. Teber D, Guven S, Simpfendörfer T, et al. Augmented reality: a new tool to improve surgical accuracy during laparoscopic partial nephrectomy? Preliminary in vitro and in vivo results. Eur Urol 2009; 56(2): 332–338.

8. Pratt P, Mayer E, Vale J, et al. An effective visualisation and registration system for image-guided robotic partial nephrectomy. J Robot Surg 2012; 6(1): 23–31.

9. Cheung CL, Wedlake C, Moore J, et al. Fused video and ultrasound images for minimally invasive partial nephrectomy: a phantom study. Med Image Comput Comput Assist Interv 2010; 13(Pt 3): 408–415.

10. Hughes-Hallett A, Pratt P, Mayer E, et al. Intraoperative ultrasound overlay in robot-assisted partial nephrectomy: first clinical experience. Eur Urol 2014; 65(3): 671–672.

11. Nakamura K, Naya Y, Zenbutsu S, et al. Surgical navigation using three-dimensional computed tomography images fused intraoperatively with live video. J Endourol 2010; 24(4): 521–524.

12. Ukimura O, Gill IS. Imaging-assisted endoscopic surgery: Cleveland Clinic experience. J Endourol 2008; 22(4): 803–809.

13. Altamar HO, Ong RE, Glisson CL, et al. Kidney deformation and intraprocedural registration: a study of elements of image-guided kidney surgery. J Endourol 2011; 25(3): 511–517.

14. Nicolau S, Soler L, Mutter D, Marescaux J. Augmented reality in laparoscopic surgical oncology. Surg Oncol 2011; 20(3): 189–201.

15. Ukimura O, Nakamoto M, Gill IS. Three-dimensional reconstruction of renovascular-tumor anatomy to facilitate zero-ischemia partial nephrectomy. Eur Urol 2012; 61(1): 211–217.

16. Pratt P, Hughes-Hallett A, Di Marco A, et al. Multimodal reconstruction for image-guided interventions. In: Yang GZ, Darzi A (eds) Proceedings of the Hamlyn Symposium on Medical Robotics: London. 2013; 59–61.

17. Mayer EK, Cohen D, Chen D, et al. Augmented reality image guidance in minimally invasive prostatectomy. Eur Urol Suppl 2011; 10(2): 300.

18. Thompson S, Penney G, Billia M, et al. Design and evaluation of an image-guidance system for robot-assisted radical prostatectomy. BJU Int 2013; 111(7): 1081–1090.

19. Panebianco V, Salciccia S, Cattarino S, et al. Use of multiparametric MR with neurovascular bundle evaluation to optimize the oncological and functional management of patients considered for nerve-sparing radical prostatectomy. J Sex Med 2012; 9(8): 2157–2166.

20. Rai S, Srivastava A, Sooriakumaran P, Tewari A. Advances in imaging the neurovascular bundle. Curr Opin Urol 2012; 22(2): 88–96.

21. Dixon BJ, Daly MJ, Chan H, et al. Surgeons blinded by enhanced navigation: the effect of augmented reality on attention. Surg Endosc 2013; 27(2): 454–461.



Heroes in Medical Research: Green Fluorescent Protein and the Rough Road in Science

Curator: Stephen J. Williams, Ph.D.

In this series, “Heroes in Medical Research”, I would like to discuss people who made important contributions to science and medicine, contributions that underlie great transformative changes yet rarely earn the notoriety given to Nobel Laureates, or who seem to fly under the radar of popular news. Their work may be the development of research tools that enabled a great discovery and a line of transformative research, a moment of serendipity leading to the discovery of a cure, or simply contributions to the development of a new field or the mentoring of a new generation of scientists and clinicians. One such discovery, which has probably been pivotal to much of our own research, is the green fluorescent protein (GFP), now an invaluable tool for monitoring protein expression and subcellular localization. Although the development of research tools (imaging tools, vectors, animal models, cell lines, and the like) is rarely heralded, such tools always assist in the pivotal discoveries of our time. The following is a heartwarming story by Discover Magazine’s Yudhijit Bhattacharjee about Dr. Douglas Prasher’s discovery of the green fluorescent protein, his successful effort to clone and sequence its gene, his subsequent struggles in science, and, finally, the scientific recognition of his work. The story also illustrates Dr. Prasher’s perseverance, a trait necessary for every scientist.

http://discovermagazine.com/2011/apr/30-how-bad-luck-networking-cost-prasher-nobel

 

The following is a wonderful Wikipedia entry about Dr. Prasher:

http://en.wikipedia.org/wiki/Douglas_Prasher

including a listing of his publications, among them the seminal Science and PNAS papers1,2.

 

(Photo: Dr. Prasher in the lab at UCSD. Photo credit: UCSD and John Galstaldo)

In summary, Dr. Prasher had been working at Woods Hole in Massachusetts, trying to discover, isolate, and then clone the protein that allows a jellyfish of the cold North Pacific waters, Aequorea victoria, to emit a green glow. He eventually cloned the GFP gene but gave up on efforts to express it in mammalian cells. Before leaving Woods Hole he gave the gene to Dr. Roger Tsien, who with Dr. Martin Chalfie and Dr. Osamu Shimomura showed the utility of GFP as an intracellular tracer for visualizing, in real time, the expression and localization of GFP-tagged proteins (the three shared the 2008 Nobel Prize for this work). Dr. Tsien, however, realized how pivotal Prasher’s cloning work had been for their research, contacted him (by then, owing to the bad economy, working at a Toyota dealership in Alabama), and invited him to the Nobel Prize award ceremony in Sweden as his guest. Although Dr. Prasher had “left academic science,” he never really stopped his quest for a scientific career, using his spare time to review manuscripts.

Other researchers have likewise invited colleagues who made important contributions to their Nobel-winning work. One such guest was one of my colleagues, Dr. Leonard Cohen, who worked with Dr. Irwin Rose and Dr. Avram Hershko at the Institute for Cancer Research in Philadelphia on a cell-free system from clams used to discover the mechanism by which cyclin B is degraded during exit from the cell cycle (from A. Hershko’s Nobel lecture). Dr. Hershko acknowledged a slew of colleagues and highlighted their contributions to the ultimate work. It shows how even small discoveries can contribute to the sphere of scientific knowledge and to breakthroughs.

Luckily, in the end, perseverance has paid off: Dr. Prasher is now using his talents in Roger Tsien‘s group at the University of California, San Diego.

References:

1. Chalfie, M., Tu, Y., Euskirchen, G., Ward, W.W., Prasher, D.C., Green fluorescent protein as a marker for gene expression. Science, 263(5148), 802-805 (1994).

 

2. Heim, R., Prasher, D.C., Tsien, R.Y., Wavelength mutations and posttranslational autoxidation of green fluorescent protein. Proc. Natl. Acad. Sci. USA, 91(26), 12501-12504 (1994).

More posts on this site on Heroes in Medical Research series include:

Heroes in Medical Research: Developing Models for Cancer Research

Heroes in Medical Research: Dr. Carmine Paul Bianchi Pharmacologist, Leader, and Mentor

Heroes in Medical Research: Dr. Robert Ting, Ph.D. and Retrovirus in AIDS and Cancer

Heroes in Medical Research: Barnett Rosenberg and the Discovery of Cisplatin

 

