Abstract

Introduction: Surgery remains one of the primary methods for treating cancerous tumours. Minimally invasive robotic surgery, in particular, provides several benefits, such as filtering of hand tremor, more complex and flexible manipulation capabilities that yield increased dexterity and higher precision, and more comfortable seating for the surgeon. These benefits in turn lead to reduced blood loss, lower infection and complication rates, less post-operative pain, shorter hospital stays, and better overall surgical outcomes. Pre-operative 3D medical imaging modalities, mainly magnetic resonance imaging (MRI) and computed tomography (CT), are used for surgical planning, in which tumour excision margins are identified for maximal sparing of healthy tissue. However, transferring such plans from the pre-operative frame of reference to the dynamic intra-operative scene remains a necessary yet largely unsolved problem. We summarize our team's progress towards addressing this problem, focusing on robot-assisted partial nephrectomy (RAPN) performed with a da Vinci surgical robot.

Method: We perform pre-operative 3D image segmentation of the tumour and surrounding healthy tissue using interactive random walker image segmentation, which provides an uncertainty-encoding segmentation used to construct a 3D model of the segmented patient anatomy. We reconstruct the 3D geometry of the surgical scene from the stereo endoscopic video, regularized by the patient-specific shape prior. We process the endoscopic images to detect tissue boundaries and other features. We then align, first via rigid and then via deformable registration, the pre-operative segmentation to the 3D reconstructed scene and the endoscopic image features. Finally, we present to the surgeon an augmented reality view showing an overlay of the tumour resection targets on top of the endoscopic view, in a way that depicts uncertainty in localizing the tumour boundary.
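The uncertainty-encoding random walker step described above can be sketched as follows. This is an illustrative sketch using scikit-image on a synthetic 2D slice, not the authors' implementation; the toy image, seed placement, and beta value are assumptions made here for demonstration.

```python
# Sketch (assumed setup, not the paper's code): interactive random walker
# segmentation whose per-pixel label probabilities serve as the
# "uncertainty-encoding" output used to shade the overlay.
import numpy as np
from skimage.segmentation import random_walker

# Synthetic "slice" standing in for one CT/MR slice: a bright blob
# (tumour) on a darker background, plus noise.
rng = np.random.default_rng(0)
img = np.full((64, 64), 0.2)
img[24:40, 24:40] = 0.8                      # tumour region
img += 0.05 * rng.standard_normal(img.shape)

# Interactive seeds: 1 = tumour, 2 = healthy tissue, 0 = unlabeled.
seeds = np.zeros(img.shape, dtype=np.int32)
seeds[30:34, 30:34] = 1                      # user marks inside the tumour
seeds[:4, :] = 2                             # and clearly healthy tissue

# return_full_prob=True yields per-pixel label probabilities rather than
# only a hard labeling — this soft output is what encodes the uncertainty.
prob = random_walker(img, seeds, beta=130, return_full_prob=True)
tumour_prob = prob[0]                        # probability of the tumour label
hard_mask = tumour_prob > 0.5
print(hard_mask[32, 32], hard_mask[2, 2])    # → True False
```

Thresholding `tumour_prob` gives a conventional binary mask, while the probability map itself can drive a graded (e.g. transparency-coded) tumour-boundary overlay in the augmented reality view.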
Material: We collected pre-operative and intra-operative patient data in the context of RAPN, including stereo endoscopic video at full HD 1080i (da Vinci S HD Surgical System), CT images (Siemens Sensation 16- and 64-slice scanners), MR images (Siemens Avanto 1.5T), and US images (Ultrasonix SonixTablet with a flexible laparoscopic linear probe). We also acquired CT images and stereo video from silicone phantoms and ex-vivo lamb kidneys with artificial tumours for testing and validation purposes.

Results and Discussion: We successfully developed a novel proof-of-concept framework for a prior- and uncertainty-encoded augmented reality system that fuses pre-operative patient-specific information into the intra-operative surgical scene. Preliminary studies and initial feedback from surgeons on the developed augmented reality system are encouraging. Our future work will focus on investigating the use of intra-operative US data in our system, to leverage all imaging modalities available during surgeries. Before a full integration of these components can take place, improving the accuracy and speed of the aforementioned algorithms, as well as the intuitiveness of the augmented reality visualization, remain active research projects for our team.
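The rigid stage of the rigid-then-deformable registration described in the Method can be illustrated with the standard least-squares point-cloud alignment (the Kabsch closed form). This is a generic sketch on synthetic points, not the authors' pipeline; the point sets and the ground-truth transform are assumptions made here for demonstration.

```python
# Sketch (assumed setup, not the paper's code): closed-form rigid alignment
# of a pre-operative point set P onto an intra-operative point set Q.
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t with R @ p + t ≈ q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Toy check: transform a small "pre-operative" point set by a known
# rotation and translation, then recover that transform.
rng = np.random.default_rng(1)
P = rng.standard_normal((20, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true
R, t = kabsch(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # → True True
```

In practice the correspondences between the pre-operative model and the stereo-reconstructed scene are not known in advance, so a closed-form fit like this typically serves as the inner step of an iterative scheme (e.g. ICP) before the deformable stage refines the alignment.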

doi: 10.5339/qfarf.2013.ICTO-08 (published 2013-11-20)