Projects:NonRigidRegistration

Goals

There are two components to this research:

  1. Identify registration algorithms that are suitable for the non-rigid registration problems that are endemic to NA-MIC.
  2. Develop implementations of those algorithms that take advantage of multi-core and multi-processor hardware.

Ron says: We need a set of non-rigid (i.e. deformable) registration solutions for several of our applications: the atlas-based segmentation, fMRI, and DTI efforts are all in need of an algorithm for non-rigid registration. The purpose of this discussion is to identify the best available algorithms and approaches among the ones that have been published and/or are available as ITK components.

Algorithmic Requirements and Use Cases

  • Requirements
    • relatively robust, with few parameters to tweak
    • runs on grayscale images
    • has already been published
    • relatively fast (ideally, a few minutes per volume-to-volume registration)
    • not patented
    • can be implemented in ITK and parallelized

Performance Requirements and Use Cases

  • Requirements
    • Single and multi-core machines
    • Single and multi-processor machines
    • AMD and Intel - Windows, Linux, and SunOS
  • Use-cases
    • <list specific machines here>

Workplan

  1. Quantify current performance and bottlenecks
    1. Identify timing tools (cross platform, multi-threaded)
    2. For each use-case
      1. Centralize data and provide easy access
      2. Identify relevant registration algorithm(s)
      3. Develop traditional ITK-style implementations
      4. Develop timing tests using the implementations and data (a minimal timing sketch follows this list)
    3. Across use-cases
      1. Identify ITK classes/functions common to implementations (e.g., interpolation/resampling)
      2. Develop timing tests specific to these common sub-classes
    4. Compute performance on multiple platforms
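
As a rough starting point for the timing tests in step 2.4 above, the sketch below times one full registration run end-to-end. It is Python using SimpleITK (the simplified wrapper around ITK); the file names, metric, and optimizer settings are placeholder assumptions rather than a chosen configuration, and TAU remains the tool for fine-grained, per-function profiling.

  import time
  import SimpleITK as sitk

  # Placeholder inputs; substitute the use-case data here.
  fixed = sitk.ReadImage("fixed.nii.gz", sitk.sitkFloat32)
  moving = sitk.ReadImage("moving.nii.gz", sitk.sitkFloat32)

  reg = sitk.ImageRegistrationMethod()
  reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
  reg.SetOptimizerAsRegularStepGradientDescent(
      learningRate=1.0, minStep=1e-4, numberOfIterations=100)
  reg.SetInterpolator(sitk.sitkLinear)
  reg.SetInitialTransform(sitk.CenteredTransformInitializer(
      fixed, moving, sitk.Euler3DTransform(),
      sitk.CenteredTransformInitializerFilter.GEOMETRY))

  # Coarse wall-clock measurement of the whole run.
  start = time.perf_counter()
  tx = reg.Execute(fixed, moving)
  print("registration took %.1f s, final metric %.4f"
        % (time.perf_counter() - start, reg.GetMetricValue()))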

Progress Highlights

  1. Quantify current performance and bottlenecks
    1. Identify timing tools
      • We have chosen TAU (http://www.cs.uoregon.edu/research/tau/home.php) as the performance quantification tool
      • A summary of selected performance quantification tools is available here

Algorithm Discussion

  • MR morphology to MR morphology: (Intersubject mapping)
    • (Kilian Pohl) When aligning two anatomical MRI volumes with each other, the results generated by BSpline or Demons generally look very nice visually. However, voxels within an area of similar intensity are randomly mapped from source to target. This is a huge problem for segmentation algorithms that rely on aligned prior information to identify weakly visible boundaries. For example, most boundaries between neighboring cortical structures are not clearly defined on anatomical MR images, such as between the middle and superior temporal gyri. Thus, segmentation algorithms use aligned prior information to distinguish these structures. However, the algorithm does not gain any information from the aligned prior information in that area, because the deformation field maps voxels to random locations.
  • Non-Rigid registration of anatomical MRIs specifically for the cortical region
  • (Stephen Aylward) Should it handle, or be part of, the registration process for patient-atlas registration in the presence of large tumors or resections?
    • Atlas - Image Registration
      • (Bruce Fischl) we have one that is part of our segmentation procedure that meets all these needs except the "relatively fast" one :) . It's quite robust; we've run it on hundreds of AD, schizophrenia, etc. cases, but it is also quite slow (15 hours or so). Gheorghe's NA-MIC project is also on non-rigid registration; although it's not yet published, we're hoping to write it up soon.
        • (Kilian Pohl) Just to make sure that we are all on the same boat, I have a couple of questions:
          What type of registration do you use as part of your segmentation ?
          Is there a paper that describes the registration in detail ?
          Does the registration rely on tissue classification ?
          If so, can you register brains with pathologies, such as MS lesions or meningiomas ?
          • (Bruce Fischl) yes, the linear part was described in our Neuron 2002 paper (pages 9-10), and the nonlinear extension in the IPAM thing that was published in NeuroImage (pages 5-7). It works fine with MS and white matter damage; not sure about tumors and such, as we haven't really tried it. It is designed to be part of our segmentation (the MRF stuff), and I doubt it's optimal for functional alignment, but it works quite well for classification. It doesn't require classification - the classification requires it.
        • (Ron Kikinis) Does it have the potential to be parallelized?
          • (Bruce Fischl) As far as parallelization, I don't see any reason why not. I think Anders may have been messing around with parallelizing it, I'm not sure.
  • EPI/MRI: Mapping EPI data sets into the morphology scans obtained at the same session.
    • (Sandy Wells) I have some concerns about the application of general-purpose non-rigid registration approaches to the EPI/MRI registration problem. While such approaches may produce pairs of EPI/MRI that "look better", I would be cautious about expecting that approach to be accurate. My feeling is that robust solutions to this problem will require some of the physics of the problem to be built into the solution, either by way of field maps, or physics simulations. I feel that this applies even more to Echo-Planar DTI data. (Randy Gollub) I agree.
  • DTI: registering the components of the diffusion tensor to compensate for non-linear distortions (susceptibility and eddy currents)
    • (Marc Niethammer): There seems to be no consensus on how to properly do DTI registration (tensor components vs. diffusion-weighted images, ...). The DTMRI module allows in principle for the registration of tensor components, but needs some additional work to become usable. We also had only very limited success with demons-based registration for DTI in Slicer (this may be in line with Kilian's comments on too many degrees of freedom), which produced strong artifacts (a minimal demons sketch follows this list). Scalar registrations, for example on FA, using the ITK-based nonlinear registration within Slicer, produced visually pleasing results; however, they counteracted previous findings, since many studies build on differences in diffusion properties between subjects. Those differences are then "removed" or greatly diminished by registration, underlining the need for DTI registration that goes beyond simple scalar measures of diffusivity.
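
Since demons comes up in both Kilian's and Marc's comments, here is a minimal sketch of the demons implementation that ships with ITK, via the SimpleITK wrapper. The file names and parameter values are placeholder assumptions; histogram matching is the usual preprocessing step because demons compares intensities directly.

  import SimpleITK as sitk

  fixed = sitk.ReadImage("fixed.nii.gz", sitk.sitkFloat32)
  moving = sitk.ReadImage("moving.nii.gz", sitk.sitkFloat32)

  # Demons assumes comparable intensities, so match histograms first.
  moving = sitk.HistogramMatching(moving, fixed)

  demons = sitk.DemonsRegistrationFilter()
  demons.SetNumberOfIterations(100)
  demons.SetStandardDeviations(1.0)  # Gaussian smoothing of the field

  field = demons.Execute(fixed, moving)  # dense displacement field
  warped = sitk.Resample(moving, fixed,
                         sitk.DisplacementFieldTransform(field),
                         sitk.sitkLinear, 0.0, moving.GetPixelID())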

Metrics

  • MI
    • (Sandy Wells) While the MI objective function can be nice to use, because it does not require domain knowledge, sometimes additional robustness can be gained by using objective functions that do, such as the recent contributions by Chung and Zollei. (A minimal sketch of evaluating the MI metric in ITK appears after this list.)
  • KLD
    • (Sandy Wells) Albert Chung's KLD approach has been shown experimentally to be substantially more robust than MI on MRI/CT registration on the Vanderbilt data set, though somewhat less accurate, i.e., there is a "bias vs capture" tradeoff.
  • "Dirichlet" approach
    • (Sandy Wells) Lilla Zollei's "Dirichlet" approach provides a natural generalization of the entropy approach that incorporates prior knowledge in a more controlled way, and it is better for EPI/MRI than MI is. It appeared in her thesis and in a recent WBIR paper (see her web page for those... just google "Lilla Zollei").
  • optimal transport
    • (Allen Tannenbaum) We have proposed a method for elastic registration, based on an optimization problem built around the Monge-Kantorovich distance taken as a similarity measure. The constraint that we put on the transformations considered is that they obey a mass preservation property. Thus, we are matching mass densities in this method, which may be thought of as weighted areas in 2D or weighted volumes in 3D. We will assume that a rigid (non-elastic) registration process has already been applied before applying our scheme. Our method has a number of distinguishing characteristics. It is parameter free. It utilizes all of the grayscale data in both images, and places the two images on equal footing. It is thus symmetrical, the optimal mapping from image A to image B being the inverse of the optimal mapping from B to A. It does not require that landmarks be specified. The minimizer of the distance functional involved is unique; there are no other local minimizers. Finally, it is specifically designed to take into account changes in density that result from changes in area or volume.
  • "Don't Care" feature"
    • (Stephen Aylward) Should it support the use of "don't care" regions across which the deformation is smoothly interpolated
  • feature-image registration metrics
    • (Stephen Aylward) The metric used in HAMMER registration is one example from this class; however, I am not promoting that particular metric - it is simply the best-known method from that class.
      • Features used in the metric could be tuned for the modalities/scales involved.
      • Using sparse, pre-selected features makes the metric very fast.
      • Parameters/features are limited by domain knowledge (e.g., presets for T1/fMRI registration).
      • The ITK image-image and feature-image registration frameworks support such metrics, in general - improvements are clearly possible/needed.
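
As referenced under MI above, here is a minimal sketch of evaluating the MI objective function with ITK, via the SimpleITK wrapper. The file names are placeholder assumptions; note that ITK's Mattes implementation returns the negated MI, since the framework minimizes.

  import SimpleITK as sitk

  fixed = sitk.ReadImage("fixed.nii.gz", sitk.sitkFloat32)
  moving = sitk.ReadImage("moving.nii.gz", sitk.sitkFloat32)

  reg = sitk.ImageRegistrationMethod()
  reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
  reg.SetInterpolator(sitk.sitkLinear)

  # Evaluate the (negated) MI at the identity transform, no optimization.
  reg.SetInitialTransform(sitk.TranslationTransform(fixed.GetDimension()))
  print("negated MI at identity:", reg.MetricEvaluate(fixed, moving))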

Transforms

  • affine
  • Low order
    • (Jim Miller) I am currently fond of the idea of using a network of low-order transformations to model a complicated deformation.
  • polynomial based
    • B-Spline
      • (Sandy Wells) One general-purpose method that is used pretty widely in neuroimaging is Daniel Rueckert's combination of the MI objective function with a B-spline mesh. My impression is that both of those things are in ITK already... do they play well together? (A minimal sketch combining them appears after this list.)
      • (Guido Gerig) At UNC, we have had excellent experience over more than 7 years with Daniel Rueckert's Rview/cisgvtk tool (freely available), which combines linear and nonlinear registration using a polynomial approach with a choice of grid spacing; a choice of 6 different image-match metrics such as MI, NMI, and cross-correlation; an excellent design of GUI/visualization/ROI selection/parameter settings and command-line execution as batch jobs; a very nice design of cascading deformation fields; and running the tool once to calculate the deformation and separately to apply the deformation field. This tool is embedded into all our brain processing pipelines, and we have experience with thousands of intra-patient, inter-patient, and inter-modality registrations and atlas building. This method/tool appears to have been reimplemented in ITK. Recently, our PhD student Marcel Prastawa integrated the ITK version into our EM-ITK Brain Segmentation Package, and we have very good experience with its use for atlas-moderated segmentation, i.e. to deform a probabilistic atlas to be used as a prior for multi-variate tissue segmentation. We use a grid sampling of at most 12x12x12 for this application.
      • (Kilian Pohl) One of the areas B-Splines are frequently used for is MR morphology to MR morphology mappings. Especially in the cortex area, the algorithm is not very reliable, as it has too many degrees of freedom (see my earlier comments about MR morphology to MR morphology mappings). Does anybody know for which exact tasks Guido's group uses the B-Spline implementation? What about constraining B-Splines even further using PCA on the deformation maps, as in Rueckert et al., TMI 03? It is my understanding that using PCA is not practical as it requires too many training samples. Is that correct?
  • high-dimensional
  • cascading several transformations
  • directed non-invertible
  • diffeomorphic
    • (Guido Gerig) For diffeomorphic high-dimensional transformations, we use the high-dimensional fluid deformation developed by Miller/Christensen/Joshi, which is also a central part of the population-based unbiased atlas building developed by Joshi et al. Since this method is diffeomorphic, and was extended to provide a symmetric transformation between pairs and populations of datasets, it can be used for transforming to an average/template but also to go backwards, mapping atlas segmentations back to the individual cases for automatic segmentation and statistical analysis. The fluid transformation is not part of ITK, and there are speed issues with the Fourier transforms/back-transforms if integrated into ITK, but my programmer colleagues might know better whether there is an ITK-based fluid deformation available. Speed issues were a concern a few years ago, but current versions take 30 to 60 minutes on standard cheap PCs, which is good enough for automatic batch processing.
  • fully invertible transformations
  • Multi-scale
    • (Stephen Aylward) insensitive to the moving/fixed images being at different resolutions, e.g., DTI/fMRI vs. 3D T1
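
In answer to Sandy's question under B-Spline above: both pieces are in ITK and can be wired together. A minimal sketch via the SimpleITK wrapper follows; the 12x12x12 mesh echoes Guido's grid-sampling comment, while the file names and optimizer settings are placeholder assumptions rather than a tuned pipeline.

  import SimpleITK as sitk

  fixed = sitk.ReadImage("fixed.nii.gz", sitk.sitkFloat32)
  moving = sitk.ReadImage("moving.nii.gz", sitk.sitkFloat32)

  # B-spline control-point grid over the fixed-image domain.
  bspline = sitk.BSplineTransformInitializer(fixed, [12, 12, 12])

  reg = sitk.ImageRegistrationMethod()
  reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
  reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                           numberOfIterations=100)
  reg.SetInterpolator(sitk.sitkLinear)
  reg.SetInitialTransform(bspline, inPlace=True)

  out_tx = reg.Execute(fixed, moving)
  warped = sitk.Resample(moving, fixed, out_tx, sitk.sitkLinear,
                         0.0, moving.GetPixelID())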

Optimizers

  • stochastic gradient descent
    • (Sandy Wells) I have found it, empirically, to be an effective (i.e., fast) choice. There was an interesting paper at the WBIR conference by one of Josien Pluim's students that evaluated a collection of optimizers on the Rueckert-style MI + B-spline approach. That paper showed winning performance by SGD.
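
In the ITK registration framework, the stochastic flavor comes from randomly subsampling the metric at each iteration, so each gradient estimate is cheap and noisy. A minimal sketch of that configuration via the SimpleITK wrapper is below; the sampling percentage, learning rate, and file names are illustrative assumptions, not tuned values.

  import SimpleITK as sitk

  fixed = sitk.ReadImage("fixed.nii.gz", sitk.sitkFloat32)
  moving = sitk.ReadImage("moving.nii.gz", sitk.sitkFloat32)

  reg = sitk.ImageRegistrationMethod()
  reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
  # Random subsampling is what makes the gradient estimate stochastic.
  reg.SetMetricSamplingStrategy(reg.RANDOM)
  reg.SetMetricSamplingPercentage(0.01)
  reg.SetOptimizerAsGradientDescent(learningRate=1.0,
                                    numberOfIterations=200)
  reg.SetInterpolator(sitk.sitkLinear)
  reg.SetInitialTransform(
      sitk.BSplineTransformInitializer(fixed, [12, 12, 12]), inPlace=True)
  tx = reg.Execute(fixed, moving)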

Example Solutions

Please post your pipeline here. Only open-source software please. ITK preferred, but not required. For help in posting, contact us.

  • HAMMER is available online at: https://www.rad.upenn.edu/sbia/rsoftware.html
  • Guido's recommendation: (Guido Gerig) At UNC, we have had excellent experience over more than 7 years with Daniel Rueckert's Rview/cisgvtk tool (freely available); see my detailed comments under Transforms above. [SNIP] The Rueckert tool is not invertible and does not mathematically guarantee that there is no overfolding of space. (A minimal sketch of applying a stored deformation field separately from computing it appears after this list.)
    • ITK's implementation of B-spline MI registration perhaps needs tuning. It has, however, already improved significantly since Guido's initial negative impression.
    • Also, I (Bill L) believe that vtkcisg is no longer available?
    • The Image Registration Toolkit (Studholme for rigid/affine and Rueckert/Schnabel for deformable/b-spline) is available at: http://www.doc.ic.ac.uk/~dr/software/index.html
  • For rigid/affine registration, see InsightApplications/LandmarkInitializedMutualInformationRegistration/ImageRegTool. It has an effective user interface and good command-line support (it exports command-line options as XML using metaCommand, as is now done for all Slicer Modules). It is about 1.5x faster than Rueckert's program for affine/rigid registration.
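
As referenced in Guido's recommendation above, computing a deformation and applying it can be decoupled. A minimal sketch via the SimpleITK wrapper is below; the transform and image file names are placeholder assumptions, and any transform written with sitk.WriteTransform can be re-read this way.

  import SimpleITK as sitk

  # Earlier, possibly on another machine:
  # sitk.WriteTransform(out_tx, "deformation.tfm")

  # Re-read the stored transform and apply it to a co-registered image.
  tx = sitk.ReadTransform("deformation.tfm")
  reference = sitk.ReadImage("fixed.nii.gz")
  labels = sitk.ReadImage("atlas_labels.nii.gz")

  # Nearest neighbor preserves label values during resampling.
  warped_labels = sitk.Resample(labels, reference, tx,
                                sitk.sitkNearestNeighbor, 0,
                                labels.GetPixelID())
  sitk.WriteImage(warped_labels, "atlas_labels_warped.nii.gz")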

Performance Discussion

How fast should it be?

  • Ron says: Fast is good, robust is better. Minutes are acceptable. If it takes more than an hour per case, it's a practical problem.


Related Meetings