  Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]], [[Algorithm:MGH|MGH Algorithms]]
__NOTOC__
= Learning Task-Optimal Registration Cost Functions =
  
We present a framework for learning the parameters of registration cost functions. These parameters -- for example, the tradeoff between the image similarity and regularization terms -- are typically determined manually by inspecting the resulting image alignment and are then fixed for all applications. We propose a principled approach to learning these parameters for a particular application.
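
A typical registration cost function balances an image dissimilarity term against a regularization term. As a minimal illustration of the parameters in question (notation ours, not taken from the paper), consider

<math>
E(\Gamma; w) = \mathrm{Dissim}\big(T, S \circ \Gamma\big) + w \, \mathrm{Reg}(\Gamma),
</math>

where <math>\Gamma</math> is the deformation, <math>T</math> and <math>S</math> are the template and subject images, and the tradeoff weight <math>w</math> is the kind of parameter that is usually set by hand and that our framework instead learns from training data.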

Image registration is ambiguous. For example, the figures below show two subjects with Brodmann areas overlaid on their cortical folding patterns. Brodmann areas are a parcellation of the cortex based on its cellular architecture. Perfectly aligning the inferior frontal sulcus misaligns the superior end of BA44 (Broca's language area). If our goal is to segment sulci and gyri, perfect alignment of the cortical folding pattern is ideal, but it is unclear whether perfectly aligning cortical folds is optimal for localizing Brodmann areas. We show that by taking the end-goal of registration into account, we not only improve the application performance but also potentially eliminate ambiguities in image registration.

<gallery>
Image:to.rh.pm1696.BAsALL.lat.cropped.caption.png
Image:to.rh.pm20784.BAsALL.lat.cropped.caption.png
</gallery>

= Description =
 
 
The key idea is to introduce a second layer of optimization over and above the usual registration. This second layer traverses the space of local minima, selecting registration parameters that result in good registration local minima as measured by the performance of the specific application on a training data set. The training data provide additional information not present in a test image, allowing the task-specific cost function to be evaluated during training. For example, if the task is segmentation, we assume the existence of a training data set with ground truth segmentations and a smooth cost function that evaluates segmentation accuracy. This segmentation accuracy serves as a proxy for registration accuracy.
  
If the registration cost function employs a single parameter, the optimal value can be found by exhaustive search. With multiple parameters, exhaustive search becomes intractable. Here, we demonstrate the optimization of thousands of parameters by gradient descent on the space of local minima, selecting registration parameters that result in good registration local minima as measured by the task-specific cost function on the training data set.
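
The nested structure can be summarized as an outer gradient descent over the cost-function parameters, where each evaluation runs the inner registration to a local minimum and scores that minimum with the task-specific cost on the training set. The following is a minimal, self-contained sketch on a synthetic one-dimensional "registration" problem; the toy problem, variable names, and finite-difference gradient are ours and only stand in for the actual registration and task-specific cost used in the paper.

<pre>
import numpy as np

# Toy illustration of the nested (bilevel) optimization described above.
# Inner problem: a 1-D "registration" whose cost is a data term plus a
# regularization term weighted by w (closed-form local minimum).
# Outer problem: gradient descent on w, using a finite-difference gradient
# of a task-specific loss evaluated at the inner minimum and averaged over
# a synthetic training set.

rng = np.random.default_rng(0)
n_train = 20
true_shifts = rng.uniform(5.0, 10.0, n_train)                   # task ground truth
observed_shifts = true_shifts + rng.normal(0.0, 1.0, n_train)   # noisy image evidence

def register(observed, w):
    """Inner optimization: argmin_s (s - observed)^2 + w * s^2 (closed form s = observed / (1 + w))."""
    return observed / (1.0 + w)

def task_loss(w):
    """Task-specific cost: how well registration recovers the ground truth."""
    return np.mean((register(observed_shifts, w) - true_shifts) ** 2)

# Outer optimization over the cost-function parameter w.
w, step, eps = 0.5, 0.01, 1e-6
for _ in range(200):
    grad = (task_loss(w + eps) - task_loss(w - eps)) / (2.0 * eps)
    w = max(w - step * grad, 0.0)          # keep the tradeoff weight non-negative

print(f"learned w = {w:.3f}, training task loss = {task_loss(w):.3f}")
</pre>

In the actual framework, the parameters are thousands of weights or template values rather than a single scalar, so the finite-difference gradient above would not scale; it is used here only to keep the sketch self-contained.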
  
Our formulation is related to the use of continuation methods for computing the entire path of solutions of learning problems (e.g., SVM or Lasso) as a function of a single regularization parameter. Because we deal with thousands of parameters, computing the entire solution manifold is infeasible. Instead, we trace a path within the solution manifold that improves the task-specific cost function.
 
  
Another advantage of our approach is that it does not require ground truth deformations. As the example above suggests, the concept of “ground truth deformations” may not always be well-defined, since the optimal registration may depend on the application at hand. Instead, our approach focuses on application performance, for which ground truth (e.g., segmentation labels) is better defined.

= Experimental Results =
 
  
We instantiate the framework for the alignment of hidden labels whose extents are not necessarily well-predicted by local image features. We consider the generic weighted Sum of Squared Differences (wSSD) cost function and estimate either (1) the optimal weights or (2) the cortical folding template for localizing cytoarchitectonic and functional regions based only on macroanatomical cortical folding information. We demonstrate state-of-the-art localization results in both histological and fMRI data sets.
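
For concreteness, the weighted SSD dissimilarity has the general form (notation ours; see the publications below for the exact cost function):

<math>
\mathrm{wSSD}(T, S, \Gamma; \lambda) = \sum_{i} \lambda_i \big( T(x_i) - S(\Gamma(x_i)) \big)^2 ,
</math>

where <math>T</math> is the template, <math>S</math> the subject image, <math>\Gamma</math> the deformation, and <math>\lambda_i \ge 0</math> a per-location weight. In case (1) the weights <math>\lambda_i</math> are learned, and in case (2) the template values <math>T(x_i)</math> are learned, in both cases with respect to the task-specific cost on the training set.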
 
  
== (1) Localizing Brodmann Areas Using Cortical Folding ==

In this experiment, we estimate the optimal template in the wSSD cost function for localizing Brodmann areas in 10 histologically-analyzed subjects. We compare three algorithms: the task-optimal template (red), FreeSurfer (green) [1], and the optimal uniform weights of the wSSD (black). The optimal uniform weights are found by setting all the weights to a single value and performing an exhaustive search over that value. The figure below shows that the task-optimal framework achieves the lowest localization errors.
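
The uniform-weights baseline thus reduces to a one-dimensional search: every <math>\lambda_i</math> is set to the same value, and that value is chosen by trying a grid of candidates. A minimal sketch of such a search (the localization_error function is a synthetic placeholder standing in for "register the training pairs with this weight and measure the Brodmann area boundary error"):

<pre>
import numpy as np

def localization_error(weight):
    # Placeholder for: run registration with all wSSD weights set to
    # `weight` and return the mean Brodmann area boundary distance.
    # A smooth synthetic curve keeps the snippet self-contained.
    return (np.log10(weight) + 2.0) ** 2 + 1.0

# Exhaustive search over a single uniform weight, spanning several
# orders of magnitude on a logarithmic grid.
candidates = np.logspace(-4.0, 2.0, 25)
errors = np.array([localization_error(w) for w in candidates])
best = candidates[np.argmin(errors)]
print(f"best uniform weight: {best:.3g} (error {errors.min():.3f})")
</pre>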
 
  
<gallery>
Image:to.Presentation.mean2.V1V2BA2.png
Image:to.Presentation.mean2.BA44BA45MT.png
</gallery>
  
== (1b) Interpreting Task-Optimal Template Estimation ==

Fig. (a) shows the initial cortical geometry of a template subject with its corresponding BA2 outlined in black. In this particular subject, the postcentral sulcus is more prominent than the central sulcus. Fig. (b) shows the cortical geometry of a test subject together with its BA2. In this subject, the central sulcus is more prominent than the postcentral sulcus. Consequently, with the uniform-weights method, the central sulcus of the test subject is wrongly mapped to the postcentral sulcus of the template, so that BA2 is misregistered, as shown by the green outline in Fig. (a). During task-optimal training, our method interrupts the geometry of the postcentral sulcus in the template, because an uninterrupted postcentral sulcus in the template is inconsistent with localizing BA2 in the training subjects. The final template is shown in Fig. (c). The BA2 of the subject (green) and of the task-optimal template (black) are now well aligned, although some localization error remains at the superior end of BA2.

<gallery>
Image:BA2_template.png|Initial Template
Image:BA2_subject.png|Test Subject
Image:BA2_final_template.png|Final Template
</gallery>
  
The video below visualizes the template at each iteration of the optimization.

<gallery>Image:lh.pm14686.BA2.gif</gallery>
  
== (2) Localizing fMRI-defined MT+ Using Cortical Folding ==

In this experiment, we consider 42 subjects with fMRI-defined MT+. MT+ is thought to include the cytoarchitectonically-defined MT as well as a small part of the medial superior temporal region. We use the ex-vivo MT template to predict MT+ in the 42 in-vivo subjects. Once again, the task-optimal template achieves better localization results than FreeSurfer.
  
<gallery>Image:exvivoMTPredictsInvivoMT.png</gallery>

== (2b) Cross-validating In-vivo Subjects ==

We now perform cross-validation within the in-vivo data set, using 9, 19, or 29 training subjects for the task-optimal template. FreeSurfer requires no training, so it has only one data point. As before, the task-optimal template achieves lower localization errors.
  
 +
<gallery>
Image:To.LeftInvivoCrossValidation.png|Left Hemisphere
Image:To.RightInvivoCrossValidation.png|Right Hemisphere
</gallery>
  
 
[1] B. Fischl, M. Sereno, R. Tootell, and A. Dale. High-resolution Intersubject Averaging and a Coordinate System for the Cortical Surface. Human Brain Mapping, 8(4):272–284, 1999.

[2] J. Thirion. Image Matching as a Diffusion Process: an Analogy with Maxwell’s Demons. Medical Image Analysis, 2(3):243–260, 1998.

[3] T. Vercauteren, X. Pennec, A. Perchant, and N. Ayache. Non-parametric Diffeomorphic Image Registration with the Demons Algorithm. In Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), volume 4792 of LNCS, pages 319–326, 2007.
 
  
 
= Key Investigators =

* MIT: [http://people.csail.mit.edu/ythomas/ B.T. Thomas Yeo], Mert Sabuncu, Polina Golland
* MGH: Bruce Fischl, Daphne Holt
* INRIA: Tom Vercauteren
* Aachen University: Katrin Amunts
* Research Center Juelich: Karl Zilles
  
 
= Publications =

[http://www.na-mic.org/publications/pages/display?search=Task-optimal NA-MIC Publications Database on Learning Task-Optimal Registration Cost Functions]
  
 
 
[[Category: Registration]] [[Category:Segmentation]]
