Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]]

__NOTOC__

= Non-rigid Groupwise Registration =

We aim to provide efficient groupwise registration algorithms for population analysis of anatomical structures. Here we extend a previously demonstrated entropy-based groupwise registration method to include a free-form deformation model based on B-splines. We provide an efficient implementation using stochastic gradient descent in a multi-resolution setting. We demonstrate the method on a set of 50 MRI brain scans and compare the results to a pairwise approach, using segmentation labels to evaluate the quality of alignment. Our results indicate that increasing the complexity of the deformation model significantly improves registration accuracy, especially in cortical regions.
  
= Description =
  
We first describe the stack entropy cost function and the B-spline-based deformation model. Then we discuss implementation details. Finally, we compare groupwise registration to the pairwise method and evaluate both methods using label prediction values.
  
''Objective Function''
  
[[Image:GroupwiseStackBiModal.PNG|thumb|350px|Figure 1: On the left, a stack of images and a sample pixel stack around a cortical region. On the right, a Gaussian (blue) fitted to a real sample from our dataset, together with the non-parametric density estimate (red). Note that the distribution is bi-modal because of the white matter-gray matter transition.]]
  
To align all subjects in the population, we use the sum of pixelwise entropies as a joint alignment criterion. The justification for this approach is that if the images are aligned properly, the intensity values at corresponding coordinate locations across all images form a low-entropy distribution. This approach does not require a reference subject; all subjects are simultaneously driven toward the central tendency of the population.
  
We employ a kernel-based density estimation scheme to estimate the univariate entropies. The entropy measure gives a better treatment of transitions between different tissue types, such as the gray matter-white matter transitions in cortical regions, where intensity distributions can be bi-modal, as shown in Figure 1.
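
To make the objective concrete, the following is a minimal NumPy sketch of the stack-entropy criterion with a Gaussian Parzen-window density estimate. The array layout, kernel width and function names are illustrative assumptions; this is not the project's ITK implementation.

<pre>
import numpy as np

def stack_entropy(intensities, sigma=0.1):
    """Sum of per-voxel entropies over a stack of images.

    intensities: array of shape (n_subjects, n_voxels); row i holds the
    (normalized) intensities of subject i at the sampled voxel locations.
    sigma: Parzen-window (Gaussian kernel) width -- an illustrative choice.
    """
    n_subjects, n_voxels = intensities.shape
    total = 0.0
    for v in range(n_voxels):
        x = intensities[:, v]                   # the "pixel stack" at voxel v
        diff = x[:, None] - x[None, :]          # pairwise intensity differences
        kernel = np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        p = kernel.mean(axis=1)                 # Parzen density at each sample
        total += -np.mean(np.log(p + 1e-12))    # sample-based entropy estimate
    return total

# A perfectly aligned stack (identical pixel stacks) yields a lower value
# than a misaligned stack with scrambled intensities.
rng = np.random.default_rng(0)
aligned = np.tile(rng.random(100), (10, 1))     # 10 subjects, 100 sample voxels
misaligned = rng.random((10, 100))
print(stack_entropy(aligned) < stack_entropy(misaligned))   # True
</pre>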
  
''Deformation Model''

[[Image:GroupwiseBspline.png|thumb|350px|Figure 2: An example deformation field. The local neighborhood affecting the deformation is overlaid on the image.]]

For the nonrigid deformation model, we define a combined transformation consisting of a global and a local component:
:<math>
T(\mathbf{x}) = T_{local}(T_{global}(\mathbf{x}))
</math>

where <math>T_{global}</math> is a twelve-parameter affine transform and <math>T_{local}</math> is a deformation model based on B-splines.
  
The free-form deformation can be written as the 3-D tensor product of 1-D cubic B-splines:
 
:<math>
T_{local}(\mathbf{x}) = \mathbf{x} + \sum_{l=0}^3\sum_{m=0}^3\sum_{n=0}^3 B_l(u)B_m(v)B_n(w) \Phi_{i+l,j+m,k+n}
</math>
  
where <math>B_l</math> is the <math>l</math>-th cubic B-spline basis function and <math>(u,v,w)</math> is the position of <math>(x,y,z)</math> relative to the control point <math>\Phi_{i,j,k}</math>, as shown in Figure 2.

The deformation at a given point can be computed using only the control points in its neighborhood. Therefore, the optimization of the objective function can be implemented efficiently.
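
As an illustration of this tensor-product formulation and its local support, here is a minimal NumPy sketch that evaluates the local B-spline displacement at a single point. The control-grid layout, spacing and index convention are illustrative assumptions rather than the actual ITK implementation.

<pre>
import numpy as np

def bspline_basis(u):
    """The four cubic B-spline basis functions B_0..B_3 evaluated at u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6.0,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
        u ** 3 / 6.0,
    ])

def t_local(x, phi, spacing):
    """T_local(x) = x + sum_{l,m,n} B_l(u) B_m(v) B_n(w) phi_{i+l, j+m, k+n}.

    x       : point coordinates, shape (3,)
    phi     : control-point displacement vectors, shape (nx, ny, nz, 3)
    spacing : control-point grid spacing per axis, shape (3,)
    """
    s = np.asarray(x, dtype=float) / np.asarray(spacing, dtype=float)
    ijk = np.floor(s).astype(int)      # cell index (i, j, k) -- one common convention
    u, v, w = s - ijk                  # fractional position inside the cell
    Bu, Bv, Bw = bspline_basis(u), bspline_basis(v), bspline_basis(w)

    disp = np.zeros(3)
    for l in range(4):                 # only a 4x4x4 neighborhood of control
        for m in range(4):             # points contributes, which keeps the
            for n in range(4):         # evaluation and its gradient cheap
                disp += Bu[l] * Bv[m] * Bw[n] * phi[ijk[0] + l, ijk[1] + m, ijk[2] + n]
    return np.asarray(x, dtype=float) + disp

# Toy usage with a 10x10x10 grid of random control-point displacements.
rng = np.random.default_rng(0)
phi = 0.5 * rng.standard_normal((10, 10, 10, 3))
print(t_local([12.3, 7.8, 15.1], phi, spacing=[4.0, 4.0, 4.0]))
</pre>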
  
''Implementation''

[[Image:GroupwiseIncreasingScale.PNG|thumb|350px|Figure 3: A registration schedule with gradually increasing deformation field complexity. From left to right, deformation fields of increasing complexity.]]

We provide an efficient optimization scheme by combining line search with the gradient descent algorithm. For computational efficiency, we employ a stochastic subsampling procedure: in each iteration of the algorithm, a random subset is drawn from all samples, and the objective function is evaluated only on this subset.
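
The sketch below shows the structure of one such iteration: draw a random subset of voxel samples, compute the gradient on that subset, and keep the best of a few candidate step lengths. The `objective` and `grad` callables, batch size and step sizes are illustrative placeholders for the stack-entropy cost and its gradient with respect to the B-spline control points.

<pre>
import numpy as np

def stochastic_descent_step(params, objective, grad, all_samples,
                            batch_size=1000, step_sizes=(1.0, 0.5, 0.25, 0.1),
                            rng=None):
    """One gradient-descent iteration with stochastic subsampling and a
    simple line search over a fixed set of candidate step lengths."""
    rng = np.random.default_rng() if rng is None else rng
    # Evaluate the cost only on a random subset of the samples.
    idx = rng.choice(len(all_samples), size=min(batch_size, len(all_samples)),
                     replace=False)
    samples = all_samples[idx]

    g = grad(params, samples)
    best, best_f = params, objective(params, samples)
    for step in step_sizes:                      # crude line search
        candidate = params - step * g
        f = objective(candidate, samples)
        if f < best_f:
            best, best_f = candidate, f
    return best, best_f

# Toy usage on a quadratic cost standing in for the registration objective.
samples = np.arange(10000)
obj = lambda p, s: float(np.sum(p ** 2))
grd = lambda p, s: 2.0 * p
p = np.array([3.0, -2.0])
for _ in range(20):
    p, f = stochastic_descent_step(p, obj, grd, samples)
print(p)   # approaches the minimum at the origin
</pre>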
  
To obtain a dense deformation field that captures anatomical variations at different scales, we gradually increase the complexity of the deformation field by refining the grid of B-spline control points.

[[Image:GroupwiseMultiResolution.PNG|thumb|350px|Figure 4: An example of the multi-resolution scheme. The registration is first performed at a coarse scale by downsampling the input. Results from coarser scales are used to initialize optimization at finer scales. Note that the objective function is evaluated only on a small subset of input points.]]

As in every iterative search algorithm, local minima pose a significant problem. To avoid local minima, we use a multi-resolution optimization scheme at each resolution level of the deformation field.
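
The following skeleton illustrates how the two schedules above fit together: an image pyramid (coarse to fine) nested inside a progression of increasingly dense control-point grids. The downsampling, the grid spacings and `optimize_level` are illustrative stand-ins; the latter is a placeholder for the stochastic gradient descent described above, not actual project code.

<pre>
import numpy as np

def downsample(volume, factor=2):
    """Crude dyadic downsampling by block averaging (a stand-in for the
    smoothing and decimation a real image pyramid would use)."""
    nx, ny, nz = (np.array(volume.shape) // factor) * factor
    v = volume[:nx, :ny, :nz]
    return v.reshape(nx // factor, factor, ny // factor, factor,
                     nz // factor, factor).mean(axis=(1, 3, 5))

def multiresolution_schedule(volumes, image_levels=3, grid_spacings=(32, 16, 8)):
    """Coarse-to-fine registration schedule (structure only)."""
    def optimize_level(vols, spacing, init_params):
        # Placeholder: minimize the stack entropy over the control-point
        # displacements at this image resolution and grid spacing.
        return init_params

    params = None
    for spacing in grid_spacings:                     # refine the control grid
        for level in reversed(range(image_levels)):   # coarse -> fine images
            vols = volumes
            for _ in range(level):
                vols = [downsample(v) for v in vols]
            params = optimize_level(vols, spacing, params)
    return params

# e.g. multiresolution_schedule([np.zeros((64, 64, 64))] * 3)
</pre>
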
We implemented our groupwise registration method in a multi-threaded fashion using the Insight Toolkit (ITK) and made the implementation publicly available [http://www.na-mic.org/svn/NAMICSandBox/trunk/MultiImageRegistration/ (code)].

''Results''

[[Image:GroupwiseMeanImages.png|thumb|350px|Figure 5: Central slices of 3D volumes for groupwise registration. Rows show mean and standard deviation images, followed by label overlap images for the GM, WM and CSF labels. Columns display the results for affine registration and for B-splines with grid spacings of 32, 16 and 8 voxels, respectively.]]
[[Image:GroupwiseBarsWMGM.png|thumb|350px|Figure 6: GM and WM Dice measures computed for different deformation field resolution levels. Blue bars show the results for groupwise registration; red bars show the results for the registration-to-the-mean setting.]]
[[Image:GroupwiseBarsManual.png|thumb|350px|Figure 7: Dice measures for manually segmented labels. Bars correspond to the same settings as in Figure 6.]]

We tested the groupwise registration algorithm on an MR brain dataset. The dataset consists of 50 MR brain images from three subgroups: schizophrenic, affective disorder and normal control subjects. The MR images are T1 scans with 256x256x128 voxels and 0.9375x0.9375x1.5 mm<sup>3</sup> spacing. For each image in the dataset, an automatic tissue classification was performed, yielding gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) labels. In addition, manual segmentations of four subcortical regions (left and right hippocampus and amygdala) and four cortical regions (left and right superior temporal gyrus and para-hippocampus) were available for each MR image.

Increasing the complexity of the deformation model improves prediction accuracy. An interesting open problem is to automatically identify the appropriate deformation complexity before the registration overfits and prediction accuracy begins to decrease. We also note that the alignment of the subcortical structures is much better than that of the cortical regions. This is not surprising, as the registration algorithm does not use information about the geometry of the cortex to optimize cortical alignment. In addition, it has often been observed that cortical structures exhibit higher variability across subjects when considered in the 3D volume rather than modelled on the surface.

Our experiments highlight the need for further research in developing evaluation criteria for image alignment. We used the standard Dice measure, but it is not clear that this measure captures all the nuances of the resulting alignment.
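
For reference, the Dice overlap reported in Figures 6 and 7 can be computed as in the short sketch below; the binary label volumes here are toy arrays, not the study data.

<pre>
import numpy as np

def dice(label_a, label_b):
    """Dice overlap 2|A n B| / (|A| + |B|) between two binary label volumes."""
    a = np.asarray(label_a, dtype=bool)
    b = np.asarray(label_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else 1.0

# Toy example with two overlapping cubic masks.
a = np.zeros((10, 10, 10), dtype=bool); a[2:6, 2:6, 2:6] = True
b = np.zeros((10, 10, 10), dtype=bool); b[3:7, 3:7, 3:7] = True
print(round(dice(a, b), 3))   # 0.422
</pre>
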
Comparing the groupwise registration to the pairwise approach, we observe that the sharpness of the mean images and the tissue overlaps in Figure 5 look visually similar. From Figures 6 and 7, we note that groupwise registration performs slightly better than the pairwise setting in most cases, especially as we increase the complexity of the warp. This suggests that considering the population as a whole and registering subjects jointly brings the population into better alignment than matching each subject to a mean template image. However, the advantage shown here is only slight; more comparative studies of the two approaches are needed.

We compare our groupwise algorithm to a pairwise method in which each subject is registered to the mean intensity image using the sum of squared differences. In each iteration, the current mean image serves as the reference and every subject is registered to it; the mean image is then updated, and the pairwise registrations are repeated until convergence.
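
The structure of this baseline is sketched below. The `register_to` function is a hypothetical placeholder for a pairwise registration that warps the moving image to the fixed image by minimizing the sum of squared differences; it is not part of the project's code.

<pre>
import numpy as np

def register_all_to_mean(images, n_outer_iters=10):
    """Pairwise baseline: alternate between updating the mean (template) image
    and registering every subject to it."""
    def register_to(moving, fixed):
        # Placeholder: a real implementation would return `moving` warped to
        # `fixed` by minimizing the sum of squared differences.
        return moving

    warped = list(images)
    mean = np.mean(warped, axis=0)
    for _ in range(n_outer_iters):
        warped = [register_to(img, mean) for img in warped]   # pairwise registrations
        mean = np.mean(warped, axis=0)                        # update the template
    return warped, mean
</pre>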
 
The images in Figure 5 show central slices of the 3D volumes after registration. Visually, the mean images become sharper and the variance images become darker, especially around the central ventricles and cortical regions. Anatomical variability in cortical regions causes significant blur in the GM, WM and CSF structures under affine registration. Finer scales of the B-spline deformation field capture a significant part of this anatomical variability, and the tissue label overlap images become sharper.
 
= Asymmetric Image-Template Registration =

A natural requirement in pairwise image registration is that the resulting deformation be independent of the order of the images. This constraint is typically achieved via a symmetric cost function and has been shown to reduce the effects of local optima. Consequently, symmetric registration has been successfully applied to pairwise image registration as well as to the spatial alignment of individual images with a template. However, recent work has shown that the relationship between an image and a template is fundamentally asymmetric. In this work, we develop a method that reconciles the practical advantages of symmetric registration with the asymmetric nature of image-template registration by adding a simple correction factor to the symmetric cost function. We instantiate our model within a log-domain diffeomorphic registration framework. Our experiments show that exploiting the asymmetry in image-template registration improves alignment in the image coordinates.
 
= Key Investigators =

* MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Serdar K. Balci, Polina Golland.
* Harvard: Sylvain Bouix, Martha E. Shenton, Bruce Fischl, W.M. (Sandy) Wells.
* Kitware: Brad Davis, Louis Ibanez.
 
= Publications =

[http://www.na-mic.org/publications/pages/display?search=Projects%3AGroupwiseRegistration&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked NA-MIC Publications Database on Groupwise Registration]

[[Category:Registration]] [[Category:Segmentation]] [[Category:MRI]] [[Category:Schizophrenia]]
