Back to [[Algorithm:Stony Brook|Stony Brook University Algorithms]]

__NOTOC__

= Semi-Automatic Image Registration =

We recognize that the difference between failure of a fully automatic image registration approach and success of a semi-automatic method can be a small amount of user input. The goal of this work is to register two CT volumes of different patients that are related by a large misalignment. The user sets two thresholds for each image: one for the bone mask and another for the flesh tissue. This operation is not time-consuming, but it simplifies the registration task dramatically for the automatic algorithm.
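
The threshold-based masking can be illustrated in a few lines of MATLAB. The sketch below is illustrative only, assuming the CT volume is available as a NIfTI file; the file name and the threshold values are placeholders, not the project's actual settings.

<pre>
% Illustrative sketch (not the project's code): build bone and flesh masks
% from a CT volume with two user-chosen thresholds.  File name and
% threshold values (Hounsfield units) are placeholders.
ct = niftiread('patientA_ct.nii');           % hypothetical input volume

boneThresh  = 300;                           % user-selected bone threshold
fleshThresh = -200;                          % user-selected flesh threshold

boneMask  = ct >= boneThresh;                % dense bone
fleshMask = (ct >= fleshThresh) & ~boneMask; % soft tissue, excluding bone
</pre>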
  
 
= Description =

In this example, large misalignment is present between the two patients.

* [[Image:PreRegFleshSkeleton.png | PreRegFleshSkeleton| 400px]]

Original misalignment of the volumes.

Point clouds are generated from label maps of bone. The computed registration field, which is guaranteed to be injective, is applied to the original CT volumes.
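
One standard way to obtain such an invertible map (the construction used in this project may differ) is to generate the deformation as the flow of a smooth, time-dependent velocity field,

:<math>\dot{\varphi}_t(x) = v_t\big(\varphi_t(x)\big), \qquad \varphi_0(x) = x,</math>

which yields a diffeomorphism whose Jacobian determinant stays positive, <math>\det \nabla \varphi_t(x) > 0</math>.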

* [[Image:SkeletonMisalignedView1.png | SkeletonMisalignedView1| 400px]]  [[Image:SkeletonAlignedView1.png | SkeletonAlignedView1| 400px]]

Point clouds representing bone tissue of the patients (before and after registration).
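
As a minimal sketch (assuming the binary bone mask from the thresholding step above; the subsampling stride is an arbitrary placeholder), a bone point cloud can be drawn from the label map as follows:

<pre>
% Illustrative sketch: sample a point cloud from the binary bone label map.
% 'boneMask' comes from the thresholding step; the stride is a placeholder.
idx       = find(boneMask);                 % linear indices of bone voxels
[x, y, z] = ind2sub(size(boneMask), idx);   % voxel coordinates
bonePts   = [x, y, z];                      % N-by-3 point cloud

bonePts   = bonePts(1:50:end, :);           % keep every 50th point
</pre>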

Another set of point clouds is generated by sampling from label maps of flesh. To avoid undoing the previous registration, regions belonging to the registered bone tissue from above are constrained not to move. Again, an injective deformation field is computed.

* [[Image:FleshMis.png | FleshMis| 400px]]  [[Image:FleshAlignedView1.png | FleshAlignedView1| 400px]]

Point clouds representing flesh tissue of the patients (before and after registration). This registration step is constrained so that the previously aligned bone points are left in place.
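
A minimal sketch of how this constrained step might be set up, assuming the flesh mask from the thresholding step; 'registerWithFixedLandmarks', 'boneAlignedPts', and 'targetFleshPts' are hypothetical placeholders, since the actual constrained solver is not reproduced here.

<pre>
% Illustrative sketch: set up the second, constrained registration.
% Flesh points are sampled as before; the bone points aligned in step one
% are passed as fixed landmarks so this step cannot undo the first.
% 'registerWithFixedLandmarks', 'boneAlignedPts', and 'targetFleshPts'
% are hypothetical placeholders for the actual solver and its inputs.
idx       = find(fleshMask);
[x, y, z] = ind2sub(size(fleshMask), idx);
fleshPts  = [x, y, z];
fleshPts  = fleshPts(1:100:end, :);          % subsample (placeholder stride)

fixedPts  = boneAlignedPts;                  % bone cloud from step one
% phiFlesh = registerWithFixedLandmarks(fleshPts, targetFleshPts, fixedPts);
</pre>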

The result of applying the two deformations computed by the proposed process is shown below.

* [[Image:PostRegFleshSkeleton.png | PostRegFleshSkeleton| 400px]]

Aligned images using the two-step registration process.
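
A minimal sketch of how a dense deformation field could be applied to the original CT volume by resampling; storing the field as per-voxel displacements 'dx', 'dy', 'dz' is an assumption for illustration, not necessarily the project's representation.

<pre>
% Illustrative sketch: resample the original CT volume through a dense
% deformation field stored as per-voxel displacements dx, dy, dz
% (each the same size as 'ct').
[X, Y, Z] = meshgrid(1:size(ct,2), 1:size(ct,1), 1:size(ct,3));
warped = interp3(X, Y, Z, double(ct), X + dx, Y + dy, Z + dz, 'linear', -1000);
% For the two-step result, the bone and flesh deformations are composed
% (or applied one after the other) before the volume is resampled.
</pre>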
  
== Current State of Work ==
A pipeline composed of MATLAB and MEX-compiled C++ code has been implemented.
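
The sketch below illustrates how a MATLAB/MEX pipeline is typically organized; the file and function names are hypothetical, not the project's actual entry points.

<pre>
% Illustrative sketch: in a MATLAB/MEX pipeline the C++ core is compiled
% once and then called like any MATLAB function.
mex ptSetRegCore.cpp                          % hypothetical C++ registration core
% phi = ptSetRegCore(sourcePts, targetPts);   % then call it from MATLAB
</pre>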
  
 
= Key Investigators =

* Georgia Tech: Ivan Kolesov, Patricio Vela
* Boston University: Jehoon Lee, Allen Tannenbaum
* MGH: Gregory Sharp

= Publications =

''In Press''
  
I. Kolesov, J. Lee, P. Vela, G. Sharp, and A. Tannenbaum. Diffeomorphic Point Set Registration with Landmark Constraints. In preparation for PAMI.
