# Projects:MultimodalAtlas



# Multi-Modal Atlas

Today, computational anatomy studies are mainly hypothesis-driven, aiming to identify and characterize structural or functional differences between, for instance, a group of patients with a specific disorder and control subjects. This type of approach has two prerequisites: a clinical classification of the subjects and spatial correspondence across the images. In practice, achieving either can be challenging. First, the complex spectrum of symptoms of neuropsychiatric disorders like schizophrenia, and the overlap in symptoms between dementias like Alzheimer's disease and conditions such as delirium and depression, make a diagnosis based on standardized clinical tests like the mental status examination difficult. Second, across-subject correspondence in the images is a particularly hard problem that requires different approaches in different contexts. A popular technique is to normalize all subjects into a standard space, such as the Talairach space, by registering each image with a single, universal template image that represents an average brain. However, the quality of such an approach is limited by the accuracy with which the universal template represents the population under study.

With the increasing availability of medical images, data-driven algorithms offer the ability to probe a population and potentially discover sub-groups that may differ in unexpected ways. In this paper, we propose and demonstrate an efficient probabilistic clustering algorithm, called iCluster, that:

• computes a small number of templates that summarize a given population of images,
• simultaneously co-registers all the images using a nonlinear transformation model,
• assigns each input image to a template.

The templates are guaranteed to live in an affine-normalized space, i.e., they are spatially aligned with respect to an affine transformation model.

# Description

iCluster is derived from a simple generative model. We assume a fixed and known number of template images. An observed image is then generated as follows: first, a template is randomly drawn; note that the distribution governing this draw need not be uniform. Next, the chosen template is warped with a random transformation, and i.i.d. Gaussian noise is added to the warped image to produce the observed image. This process is repeated multiple times to generate a collection of images.
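This generative process can be sketched in a few lines. The following toy example works in 1-D and uses a circular shift as a stand-in for the random nonlinear warp; the template values, priors and noise level are made up purely for illustration.

```python
import random

def sample_image(templates, priors, noise_std, warp_strength=1, rng=None):
    """Draw one observed 1-D 'image' from the generative model:
    pick a template by its (possibly non-uniform) prior, apply a random
    transformation, and add i.i.d. Gaussian noise at every voxel."""
    rng = rng or random.Random()
    # 1. Draw a template index according to the prior probabilities.
    k = rng.choices(range(len(templates)), weights=priors)[0]
    # 2. Apply a random transformation (here: a small circular shift,
    #    standing in for the B-spline warp of the real model).
    shift = rng.randint(-warp_strength, warp_strength)
    warped = templates[k][shift:] + templates[k][:shift]
    # 3. Add i.i.d. Gaussian noise to every voxel.
    return [v + rng.gauss(0.0, noise_std) for v in warped], k

# Two hypothetical 1-D templates and a non-uniform prior.
templates = [[0, 0, 5, 5, 0, 0], [3, 3, 0, 0, 3, 3]]
rng = random.Random(0)
images = [sample_image(templates, [0.7, 0.3], 0.1, rng=rng) for _ in range(5)]
```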

We formulate the problem as maximum likelihood estimation and solve it with a Generalized Expectation-Maximization (GEM) algorithm. The GEM algorithm is derived using Jensen's inequality and has three steps:

• E-step: Given the estimates for the template images, template prior probabilities and noise variance image estimates from the previous iteration, the algorithm updates the memberships of each image as the posterior probability of an image being generated from a particular template.
• T-step: Given the membership estimates from the previous E-step, the algorithm updates the template image, template prior and noise variance estimates using closed-form expressions.
• R-step: Given the membership, template, template prior and noise variance estimates from the previous iterations, the algorithm updates the warps for each image. This step is a collection of pairwise registration instances, where each image is aligned with an effective template image. The effective template image is a weighted average of the current individual templates, where the weights are the current memberships.
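The E- and T-steps above can be sketched on toy 1-D "images" under the simple Gaussian model, ignoring the warps (the function names and data are hypothetical; the real algorithm operates on warped, subsampled 3-D volumes):

```python
import math

def e_step(images, templates, priors, var):
    """E-step: posterior membership of each image under each template,
    assuming i.i.d. Gaussian noise with variance `var`."""
    memberships = []
    for img in images:
        log_post = []
        for tmpl, pi in zip(templates, priors):
            sq = sum((a - b) ** 2 for a, b in zip(img, tmpl))
            log_post.append(math.log(pi) - sq / (2.0 * var))
        mx = max(log_post)                      # for numerical stability
        w = [math.exp(l - mx) for l in log_post]
        s = sum(w)
        memberships.append([x / s for x in w])  # normalized posteriors
    return memberships

def t_step(images, memberships, n_templates):
    """T-step: closed-form updates of templates and priors as
    membership-weighted voxel-wise averages of the images."""
    n_vox = len(images[0])
    templates, priors = [], []
    for k in range(n_templates):
        wk = [m[k] for m in memberships]
        total = sum(wk)
        templates.append([sum(w * img[v] for w, img in zip(wk, images)) / total
                          for v in range(n_vox)])
        priors.append(total / len(images))
    return templates, priors
```

On a trivial data set, two of the three images cluster onto the first template and the updated prior approaches 2/3, as expected for weighted averaging.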

The resulting algorithm is fast and efficient: each iteration's time and memory requirements are linear in the number of voxels, input images and templates. We employ a stochastic subsampling strategy in each of the E, T and R steps: a random subsample of voxels (typically less than 1% of the total) is used for the computations. In the R-step, we employ a B-spline nonlinear transformation model and optimize using gradient descent. During this optimization, the gradients are normalized so that each cluster (i.e., the images assigned to the same template) is subject to an average deformation of zero; this is achieved by subtracting the cluster-average gradient from each individual gradient, and extends the "anchoring" strategy used in groupwise registration algorithms.
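The anchoring normalization can be sketched as follows: within each cluster, the cluster-average gradient is subtracted from every member's warp gradient, so the cluster's mean deformation update is zero. This is a transformation-agnostic sketch with hypothetical names; real gradients would be vectors of B-spline coefficients.

```python
def anchor_gradients(gradients, cluster_ids):
    """Subtract the per-cluster mean gradient from each image's gradient,
    so that within every cluster the average update is exactly zero."""
    n_params = len(gradients[0])
    # Compute the mean gradient of each cluster.
    means = {}
    for c in set(cluster_ids):
        members = [g for g, cid in zip(gradients, cluster_ids) if cid == c]
        means[c] = [sum(g[p] for g in members) / len(members)
                    for p in range(n_params)]
    # Subtract the cluster mean from every member's gradient.
    return [[g[p] - means[c][p] for p in range(n_params)]
            for g, c in zip(gradients, cluster_ids)]
```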

## Results

We present two experiments. The first demonstrates the use of iCluster for building a multi-template atlas in a segmentation application. In the second experiment, we employ iCluster to compute multiple templates of a large data set that contains 416 brain MRI scans. Our results show that these templates correspond to different age groups. We find the correlation between the image-based clustering and the demographic and clinical characteristics particularly intriguing, given that iCluster did not employ the latter information.

### Experiment 1: Segmentation Label Alignment

In this experiment, we used a data set of 50 whole-brain MR images (of size 256x256x124 voxels, with voxel dimensions 0.9375x0.9375x1.5 mm) that contained 16 patients with first-episode schizophrenia (SZ), 17 patients with first-episode affective disorder (AFF) and 17 healthy control subjects (CON). First-episode patients are relatively free of chronicity-related confounds, such as the long-term effects of medication; thus, any structural differences between the three groups are subtle, local and difficult to identify in individual scans.

The 50 MR images also contained manual labels of certain medial temporal lobe structures: the superior temporal gyrus (STG), hippocampus (HIPP), amygdala (AMY) and parahippocampal gyrus (PHG). We used these manual labels to explore label alignment across subjects under different groupings: on the whole data set, on random partitionings of the data set into two subsets of equal size, on the clinical grouping, and on the image-based clustering as determined by iCluster.

We spatially normalized all the subjects into *a standard space* using the iCluster algorithm with a single template and a 32x32x32 B-spline transformation model, and explored the alignment of the manual labels for the clinical and image-based groupings. For each region of interest, such as the amygdala, we computed the modified Hausdorff distance (MHD) in the standard space. MHD is a non-symmetric distance measure between the boundaries of two labels and is zero for perfect alignment. The MHD values for each region of interest were then summed to obtain a total label distance for each ordered subject pair. The following figure shows the total label distance for all subject pairings under the different groupings. We note that the image-based clustering of iCluster (with both two and three templates) groups subjects that have better label alignment, whereas the clinical grouping demonstrates no such coherence.
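As a sketch of the evaluation metric, the directed (non-symmetric) modified Hausdorff distance between two label boundaries is the average, over the points of one boundary, of the distance to the nearest point of the other. The brute-force version below is for illustration only; the exact variant used in the experiments may differ in detail.

```python
def modified_hausdorff(a_pts, b_pts):
    """Directed modified Hausdorff distance from boundary A to boundary B:
    mean over points of A of the Euclidean distance to the closest point
    of B. Zero when every point of A coincides with a point of B."""
    def dist(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
    return sum(min(dist(a, b) for b in b_pts) for a in a_pts) / len(a_pts)
```

Note the asymmetry: `modified_hausdorff(a, b)` generally differs from `modified_hausdorff(b, a)`, which is why the text sums the values over ordered subject pairs.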

### Experiment 2: Age groups in the OASIS data set

In this experiment, we used the OASIS data set [1], which consists of 416 pre-processed (skull-stripped and gain-field corrected) brain MR images of subjects aged 18-96 years, including individuals with early-stage Alzheimer's disease (AD). We ran iCluster on the whole data set while varying the number of templates from 2 through 6. Each run took approximately 4-8 hours on a 16-processor PC with 128GB RAM. For two and three templates, the algorithm computed distinct and structurally different templates. We observed that these templates were robust: they remained the same for random subsets of the data set with as few as 60 subjects. For larger numbers of templates, however, the computed templates were not all unique, corresponded to single outlier subjects, or were not robust to random subsampling of the data set.

The following figure shows the three robust templates computed by iCluster.

The following figure shows the difference images between the three templates shown above.

(Figure: Difference_templates_oasis.png)

The following figure shows, for each cluster identified by the algorithm, the age distribution estimated using Parzen windowing with a Gaussian kernel with a standard deviation of 4 years.
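A Parzen-window (kernel density) estimate of this kind averages Gaussian kernels centred on each subject's age. A minimal sketch, where `ages` would hold the ages of the subjects assigned to one cluster (the function name is hypothetical):

```python
import math

def parzen_density(ages, x, bandwidth=4.0):
    """Parzen-window estimate of the age density at x: the average of
    Gaussian kernels (std = bandwidth; 4 years as in the text) centred
    on each subject's age."""
    norm = 1.0 / (bandwidth * math.sqrt(2.0 * math.pi))
    return sum(norm * math.exp(-((x - a) ** 2) / (2.0 * bandwidth ** 2))
               for a in ages) / len(ages)
```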

# Software

The algorithm is currently implemented using the Insight Toolkit (ITK) and will be made publicly available. We also plan to integrate it into Slicer.

# Key Investigators

• MIT: Mert R. Sabuncu, Serdar K. Balci and Polina Golland
• Harvard: M.E. Shenton, M. Kubicki and S. Bouix