Integration of stereo video into Slicer3
Latest revision as of 15:10, 26 June 2009


Key Investigators

  • Robarts Research Institute / University of Western Ontario: Mehdi Esteghamatian
  • Isomics: Alex Yarmarkovich
  • BWH (NCIGT): Nobuhiko Hata

Objective

The objective of this study is to grab and visualize video images in Slicer3 as soon as they are acquired. For my project the video source is a laparoscope or an ultrasound scanner, but in general it can be any modality capable of streaming video. I plan to integrate laparoscope images and intra-operative ultrasound with a pre-operative MR image. To present the video in the correct position with respect to the pre-operative MR, we need to track the laparoscope camera and/or the ultrasound transducer, so camera calibration and ultrasound calibration should be added to Slicer in the long run.
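The placement described above amounts to chaining calibration, tracking, and registration transforms. The sketch below illustrates that chain in plain Python; the matrix names and all numeric values are hypothetical placeholders, not Slicer3 code.

```python
# Sketch (not Slicer3 code): composing the transforms needed to place a
# video image in pre-operative MR space. All matrices are hypothetical.

def matmul4(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# T_image_to_camera: from camera calibration (identity here for simplicity)
T_image_to_camera = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
# T_camera_to_tracker: reported by the tracking system for each frame
T_camera_to_tracker = [[1, 0, 0, 10], [0, 1, 0, 20], [0, 0, 1, 30], [0, 0, 0, 1]]
# T_tracker_to_MR: from patient-to-image registration
T_tracker_to_MR = [[1, 0, 0, -5], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# Full chain: image -> camera -> tracker -> MR
T_image_to_MR = matmul4(T_tracker_to_MR,
                        matmul4(T_camera_to_tracker, T_image_to_camera))
```

Each video frame only needs the middle (tracking) transform updated; the two calibration transforms are computed once.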


Approach, Plan

Real-time video grabbing and visualization were previously implemented for ultrasound in AtamaiViewer by my colleague Danielle Pace; this time I am trying to do the same in Slicer3. So far I have studied two possible approaches to video grabbing in Slicer3. The first was to use an IGSTK library containing a VideoImager. However, that code was developed only recently and is currently under review by Andinet Enquobahrie, who believes it is not yet mature enough to be used with Slicer3.

The second alternative is to use vtkVideoSource and extend it for the targeted modality. For instance, vtkMILVideoSource provides an interface to the Matrox Meteor, MeteorII and Corona video digitizers through the Matrox Imaging Library, and vtkWin32VideoSource grabs frames or streaming video from a Video for Windows compatible device on the Win32 platform.
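The pattern behind vtkVideoSource is a base class whose frame-grabbing hook is overridden per digitizer. The mock below imitates that pattern in plain Python to show the shape of such an extension; the class and method names are illustrative only and are not the real VTK API.

```python
# Mock of the vtkVideoSource subclassing pattern (no VTK used): a base
# class exposes grab(), and each modality overrides internal_grab() the
# way vtkWin32VideoSource/vtkMILVideoSource override frame capture in VTK.

class VideoSourceBase:
    def __init__(self, width, height):
        self.width = width
        self.height = height
        self.frame = None

    def internal_grab(self):
        # Modality-specific digitizer call would go here.
        raise NotImplementedError

    def grab(self):
        """Fetch one frame and cache it for the renderer."""
        self.frame = self.internal_grab()
        return self.frame

class MockUltrasoundSource(VideoSourceBase):
    """Stands in for a real digitizer; returns a constant gray frame."""
    def internal_grab(self):
        return [[128] * self.width for _ in range(self.height)]

src = MockUltrasoundSource(4, 3)
frame = src.grab()
```

A laparoscope or ultrasound extension would replace internal_grab() with the vendor SDK's capture call while keeping the rest of the pipeline unchanged.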

Progress

  • I started with a simple GUI to show a 3D volume in the 3D view of Slicer3. I ran through the "Gradient Anisotropic Diffusion" module and modified the method associated with its 'Apply' button. To read and show an image, I reused code from the 'Volumes' module. However, visualizing an image this way is too slow to achieve an acceptable frame rate for a real-time video stream.
  • To reduce the visualization time, Steve pointed me to the 'vtkSlicerSliceLogic::CreateSliceModel' method. I am still working through the code.
  • I also talked with Alexander Yarmarkovich. He has implemented a module that communicates with a tracking machine over network sockets to transfer tracking information, using OpenIGTLink to send the tracking data from the machine to Slicer3. He uses this information to show the apparatus in the correct position in the 3D view of Slicer3. His code visualizes the apparatus quickly because it turns off unnecessary events during the visualization step.
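The OpenIGTLink messages carrying that tracking data are simple binary packets: a fixed-size header followed by a typed body. The sketch below packs a TRANSFORM message based on my reading of the version-1 wire format (2-byte version, 12-byte type, 20-byte device name, 8-byte timestamp, 8-byte body size, 8-byte CRC; body of 12 big-endian float32 values). It zeroes the CRC64 for brevity, which a real Slicer3 receiver would reject, so treat this as an illustration of the layout rather than a working client.

```python
import struct

# Hedged sketch of an OpenIGTLink v1 TRANSFORM message. The CRC64 field
# is left zero here; a conforming receiver validates it.

def pack_transform(device_name, matrix3x4, timestamp=0):
    # Body: the three rotation columns, then the translation, as
    # big-endian float32 (R11,R21,R31, R12,R22,R32, R13,R23,R33, TX,TY,TZ).
    values = [matrix3x4[r][c] for c in range(4) for r in range(3)]
    body = struct.pack(">12f", *values)
    header = struct.pack(">H12s20sQQQ",
                         1,                        # protocol version
                         b"TRANSFORM",             # type, null-padded to 12
                         device_name.encode()[:20],
                         timestamp,
                         len(body),
                         0)                        # CRC64 omitted in sketch
    return header + body

msg = pack_transform("Tracker", [[1, 0, 0, 10.0],
                                 [0, 1, 0, 20.0],
                                 [0, 0, 1, 30.0]])
```

Keeping the body to one small struct per frame is part of why this approach reaches interactive rates: the per-frame payload is 48 bytes plus a 58-byte header.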