https://www.na-mic.org/w/api.php?action=feedcontributions&user=Melonakos&feedformat=atomNAMIC Wiki - User contributions [en]2024-03-28T14:23:11ZUser contributionsMediaWiki 1.33.0https://www.na-mic.org/w/index.php?title=Projects:UtahAtlasSegmentation&diff=52433Projects:UtahAtlasSegmentation2010-05-13T15:51:14Z<p>Melonakos: </p>
<hr />
<div> Back to [[Algorithm:Utah2|Utah 2 Algorithms]]<br />
__NOTOC__<br />
<br />
= Atlas Based Classification (ABC) for Healthy Brain MRI =<br />
<br />
Automatic segmentation of brain MR images can be performed reliably using priors from brain atlases and an image generative model. We have developed a tool called ABC that provides an automatic segmentation pipeline in a modular framework.<br />
The processing pipeline is composed of tasks such as filtering the input images and registering the multimodal input images and the brain atlas to a common space, followed by iterative steps that interleave segmentation, inhomogeneity correction, and atlas warping.<br />
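As a rough illustration, the interleaved classification step can be sketched as an EM-style loop in which atlas priors multiply Gaussian class likelihoods. This is a hypothetical sketch, not the actual ABC implementation: the function names, the two-class initialization, and the placeholder bias update are all illustrative.<br />

```python
# Hypothetical sketch of an ABC-style iterative loop (illustrative, not ABC itself):
# EM classification with atlas priors, alternating with a (trivial) bias update.
import numpy as np

def classify(intensities, priors, means, variances):
    """Posterior class probabilities: Gaussian likelihoods times atlas priors."""
    lik = (np.exp(-(intensities[:, None] - means) ** 2 / (2 * variances))
           / np.sqrt(2 * np.pi * variances))
    post = lik * priors
    return post / post.sum(axis=1, keepdims=True)

def abc_iteration(image, priors, n_iter=10):
    means = np.array([30.0, 80.0])       # illustrative initial class means
    variances = np.array([100.0, 100.0])
    bias = np.zeros_like(image)
    for _ in range(n_iter):
        corrected = image - bias         # inhomogeneity correction (placeholder)
        post = classify(corrected, priors, means, variances)
        # M-step: update class statistics from the fuzzy classification
        w = post.sum(axis=0)
        means = (post * corrected[:, None]).sum(axis=0) / w
        variances = (post * (corrected[:, None] - means) ** 2).sum(axis=0) / w
        # a real pipeline would re-estimate bias and re-warp the atlas here
    return post, means
```

The real tool additionally warps the atlas priors and fits a smooth bias field each pass; here both are stubbed out to keep the loop readable.<br />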
<br />
Our tool generates bias corrected images, fuzzy classification maps, and discrete segmentation labels. The tool has been used to automatically segment thousands of adult and toddler images from the University of North Carolina (UNC), and is also being used as a skull stripping mechanism for DTI processing at UNC and Utah. An example of the output of the tool is shown below.<br />
<br />
{|<br />
|-<br />
| [[Image:UtahSegPlugin_result.png|thumb|center|400px|Output of the segmentation plugin, showing the bias corrected image and the probabilities for white and gray matter.]]<br />
| [[Image:TBI-seg.jpg|thumb|center|400px|ABC tool applied to TBI MRI data.]]<br />
|}<br />
<br />
<br />
The ABC tool has been integrated into Slicer as an extension, and it can also be executed as a stand-alone application. Both versions are available for download through NITRC: http://www.nitrc.org/projects/abc .<br />
{|<br />
|-<br />
| [[Image:UtahSegPlugin_screen.png|thumb|center|200px|Screen shot of the segmentation plugin in Slicer.]]<br />
| [[Image:UtahSegGUI_screen.png|thumb|center|400px|Screen shot of the stand-alone segmentation GUI.]]<br />
|}<br />
<br />
== External Use and Modifications of ABC ==<br />
<br />
An earlier version of ABC has been distributed to a large number of other research groups associated with ongoing image analysis projects at the University of North Carolina - Chapel Hill and the University of Utah, and to research groups linked to these investigators. High-profile projects include the Silvio Conte Center (http://www.psychiatry.unc.edu/conte/, PI John H. Gilmore, UNC), involving the processing of over 1000 infant MRI scans, and the Autism Centers of Excellence Project IBIS (http://www.ibis-network.org/, PI Joseph Piven, UNC), where over 650 infant brain MRI scans will be processed with this tool. In order to segment infant brains with an age range of 6 months to 2 years as part of this longitudinal study, new age-specific atlases have been developed for use as spatial priors for ABC. <br />
<br />
The ABC method has also been modified and extensively used by Hans Johnson from the University of Iowa as part of a large, multi-center Huntington Disease study. Experience with a couple of hundred datasets at Iowa has demonstrated the excellent robustness and reliability of the methodology.<br />
<br />
{|<br />
|-<br />
| [[Image:ABC-MIND1.png|thumb|center|250px|Output of the segmentation plugin, showing the bias corrected image and the probabilities for white and gray matter.]]<br />
| [[Image:ABC-MIND2.png|thumb|center|250px|BRAINSABC expanded prior class images.]]<br />
| [[Image:ABC-MIND3.png|thumb|center|250px|BRAINSABC expanded prior class images.]]<br />
|}<br />
<br />
The probability maps for the atlas were created from a set of 729 3T multi-modal image data sets (Iowa, Hans Johnson). Improvements included a measure of venous blood as part of the model and the addition of explicit tissue regions outside the head for tissue types that are not part of the brain. These atlas definitions are available from<br />
svn co https://www.nitrc.org/svn/brains/BRAINS/trunk/BRAINSTools/BRAINSABC/Atlas_20100510<br />
<br />
Under the guidance of Hans Johnson, a branch of the ABC NITRC code has been modified to meet the needs of the Iowa group. This includes integration of a new BSpline-based registration from BRAINSFit, integration of BRAINSROIAuto, support for data with arbitrary orientation, and modifications to the interface (svn: https://www.nitrc.org/svn/brains/BRAINS/trunk/BRAINSTools/BRAINSABC).<br />
<br />
== ABC Bias Correction Module ==<br />
<br />
In order to meet strong existing demand for a stand-alone MRI bias correction method for FLASH images, part of the ABC code has been made available as a separate module and distributed via NITRC (http://www.nitrc.org/projects/probbiascor) by the MIND Institute group led by Mark Scully and Jeremy Bockholt.<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display/?search=Projects%3AUtahAtlasSegmentation&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&searchbytag=checked&sponsors=checked| NA-MIC Publications Database on Atlas Based Classification (ABC) for Healthy Brain MRI]<br />
<br />
= Key Investigators =<br />
<br />
*Utah Algorithms: Marcel Prastawa, Guido Gerig<br />
*UNC Algorithms: Martin Styner<br />
*External research collaboration: Hans Johnson, The University of Iowa<br />
*External research collaboration: Jeremy Bockholt, MIND Institute</div>Melonakoshttps://www.na-mic.org/w/index.php?title=File:COPDGeneDashboard4.png&diff=52432File:COPDGeneDashboard4.png2010-05-13T15:41:07Z<p>Melonakos: uploaded a new version of "File:COPDGeneDashboard4.png"</p>
<hr />
<div>COPDGene Dashboard</div>Melonakoshttps://www.na-mic.org/w/index.php?title=NA-MIC_External_Collaborations&diff=52417NA-MIC External Collaborations2010-05-13T06:10:15Z<p>Melonakos: </p>
<hr />
<div> Back to [[NA-MIC_Collaborations|NA-MIC Collaborations]]<br />
<br />
=Projects funded by "Collaborations with NCBC PAR"=<br />
<br />
This section describes external collaborations with NA-MIC that are funded by NIH under the "Collaboration with NCBC" PAR. (Details for this funding mechanism are provided [[Collaborator:Resources|<big><big>here</big></big>]]).<br />
<br />
{| style="text-align:left;"<br />
| style="width:10%" | [[Image:Cli-mesh-quality-small-062607.png|300px]]<br />
| style="width:90%" |<br />
<br />
<br />
== [[NA-MIC_NCBC_Collaboration:Automated_FE_Mesh_Development|PAR-05-063: R01EB005973 Automated FE Mesh Development]] ==<br />
<br />
This project is funded under an NCBC collaboration grant to PIs Nicole Grosland and Vincent Magnotta at UIowa. The goal of this project is to integrate and expand methods to automate the development of specimen- / patient-specific finite element (FE) models into the [[NA-MIC-Kit|NA-MIC kit]]. [[NA-MIC_NCBC_Collaboration:Automated_FE_Mesh_Development|More...]]<br />
<br />
|-<br />
<br />
| | [[Image:NhpATOCPic.jpg|300px]]<br />
| |<br />
<br />
==[[NA-MIC_NCBC_Collaboration:Measuring_Alcohol_and_Stress_Interaction|PAR-07-249: R01AA016748 Measuring Alcohol and Stress Interaction with Structural and Perfusion MRI]]==<br />
<br />
This project is funded under an NCBC collaboration grant to PIs James Daunais, Robert Kraft, and Chris Wyatt. The goal of this project is to examine the effects of chronic alcohol self-administration on brain structure and function in the monkey brain. MRI image analysis tools from the [[NA-MIC-Kit|NA-MIC kit]] will be adapted for use with the monkey brain datasets. [[NA-MIC_NCBC_Collaboration:Measuring_Alcohol_and_Stress_Interaction|More...]]<br />
<br />
|-<br />
<br />
| | [[Image:LiverRFAPhantom.png|300px]]<br />
| |<br />
<br />
==[[NA-MIC_NCBC_Collaboration:An Integrated System for Image-Guided Radiofrequency Ablation of Liver Tumors|PAR-05-063: R01CA124377 An Integrated System for Image-Guided Radiofrequency Ablation of Liver Tumors]]==<br />
<br />
This project is funded under an NCBC collaboration grant to PI Kevin Cleary at Georgetown University. The goal of this project is to develop and validate an integrated system based on open source software for improved visualization and probe placement during radiofrequency ablation (RFA) of liver tumors. [[NA-MIC_NCBC_Collaboration:An Integrated System for Image-Guided Radiofrequency Ablation of Liver Tumors|More...]]<br />
|-<br />
<br />
| | [[Image:HammerABrain.png]]<br />
| |<br />
<br />
==[[NA-MIC_NCBC_Collaboration:Development and Dissemination of Robust Brain MRI Measurement Tools|PAR-07-249: R01EB006733 Development and Dissemination of Robust Brain MRI Measurement Tools]]==<br />
<br />
This project is funded under an NCBC collaboration grant to PI Dinggang Shen at UNC-Chapel Hill. The goal of this project is to develop and widely distribute a software package for robust measurement of brain structures in MR images using computational neuroanatomy methods. [[NA-MIC_NCBC_Collaboration:Development and Dissemination of Robust Brain MRI Measurement Tools|More...]]<br />
|-<br />
<br />
| | [[Image:Virtual Colonoscopy Auto Detection - Yoshida.png|300px]]<br />
| |<br />
<br />
==[[NA-MIC_NCBC_Collaboration:NA-MIC virtual colonoscopy|PAR-07-249: R01CA131718 NA-MIC Virtual Colonoscopy]]==<br />
<br />
This project is funded under an NCBC collaboration grant to PI Hiroyuki Yoshida. [[NA-MIC_NCBC_Collaboration:NA-MIC virtual colonoscopy|More...]]<br />
|-<br />
<br />
| | [[Image:JHUCollaboration.jpg|300px]]<br />
| |<br />
<br />
==[[NA-MIC_NCBC_Collaboration:3D Shape Analysis for Computational Anatomy|PAR-07-249: R01EB008171 3D Shape Analysis for Computational Anatomy]]==<br />
<br />
This project is funded under an NCBC collaboration grant to PI Michael Miller at JHU (with Joe Hennessey).<br />
|-<br />
<br />
| | [[Image:UtahCollaboration.jpg|300px]]<br />
| |<br />
<br />
==[[NA-MIC_NCBC_Collaboration:The Microstructural Basis of Abnormal Connectivity in Autism|PAR-07-249: R01MH084795 The Microstructural Basis of Abnormal Connectivity in Autism]]==<br />
<br />
This project is funded under an NCBC collaboration grant to PI Janet Lainhart, MD. <br />
It will use tools developed within NAMIC for a longitudinal neuroimaging, clinical, and neuropsychological study of late neurodevelopment in autism, combining analysis of connectivity and morphometry. <br />
|-<br />
<br />
<br />
| align="center"| [[Image:JHUSkullStripping.png|300px]] [[Image:JHU.jpg]]<br />
||<br />
==[[NA-MIC_JHU_Skull_Stripping_Collaboration | PAR-08-183: R21EB009900 Johns Hopkins Skull Stripping]]==<br />
<br />
The group at Johns Hopkins is developing software that enables the stripping of skull, scalp, and meninges from structural MRI scans in a fully automated fashion.<br />
|}<br />
<br />
----<br />
----<br />
<br />
=Additional External Collaborations=<br />
<br />
This section describes external collaborations with NA-MIC that are funded by other mechanisms:<br />
<br />
{| style="text-align:left;"<br />
| style="width:10%" | [[Image:BRAINS.gif|300px]]<br />
| style="width:90%" |<br />
<br />
==[[NA-MIC Brains Collaboration| PAR-05-057: R01NS050568 BRAINS Morphology and Image Analysis]]==<br />
<br />
This project is funded under a Continued Development and Maintenance of Software grant to PIs Vincent Magnotta, Hans Johnson, Jeremy Bockholt, and Nancy Andreasen at the University of Iowa. The goal of this project is to update the '''BRAINS''' image analysis software developed at the University of Iowa. [[NA-MIC Brains Collaboration|More...]]<br />
<br />
|-<br />
<br />
| | [[Image:27y-leftabdcan-T6SQ-voltage-withheart4.png |300px]]<br />
| |<br />
<br />
==[[NA-MIC Childrens Collaboration| Children's Pediatric Cardiology Collaboration with SCI/SPL/Northeastern]]==<br />
<br />
Collaboration with John Triedman, Matt Jolley, Dana Brooks, SCI.<br />
<br />
|-<br />
<br />
| | [[Image:NITRC.png|300px]]<br />
| |<br />
<br />
==[[NA-MIC and NITRC| U54EB005149-04S1 NA-MIC Collaboration with NITRC]]==<br />
<br />
The NA-MIC Project is working to make NA-MIC neuroimaging software available through the [http://www.nitrc.org/ NITRC web site]. Supplemental support is helping to create the [http://www.slicer.org/slicerWiki/index.php/Slicer3:Loadable_Modules Slicer3 Loadable Modules] project so that slicer plugins can be hosted on NITRC, allowing greater scalability for developers and users of Slicer.<br />
<br />
|-<br />
<br />
| | [[Image:NA-MIC-NAC-collaborations-ARRA-2010-03-14.png|300px]]<br />
| |<br />
<br />
==[[Collaboration:NAC| P41RR013218 NA-MIC Collaboration with NAC]]==<br />
<br />
NAC, the neuroimage analysis center, is a national resource center. NAC is relying on the NA-MIC kit for its general software environment. The mission of NAC is to develop novel concepts for the analysis of images of the brain and develop and disseminate tools based on those concepts. Several [[Projects:ARRASuplements|ARRA-funded supplements to the NAC grant]] have close ties to related efforts in NA-MIC.<br />
<br />
|-<br />
<br />
| | [[Image:Neurosurgery-slicer-fmri-dti-openigtlink.png|300px]]<br />
| |<br />
<br />
==[[Collaboration:NCIGT| U41RR019703 NA-MIC Collaboration with NCIGT]]==<br />
<br />
NCIGT is leveraging the NA-MIC kit as a platform for developing dedicated IGT capabilities.<br />
<br />
|-<br />
<br />
| | [[Image:SummerProjectWeek2009_ProstateRobot1.jpg|300px]]<br />
| |<br />
<br />
==[[Collaboration:Prostate BRP| R01CA111288 NA-MIC Collaboration with Prostate BRP]]==<br />
<br />
BRP is leveraging the NA-MIC kit as a platform for developing dedicated IGT capabilities.<br />
<br />
<br />
<br />
|-<br />
|-<br />
<br />
| | [[Image:snapshot.gif|300px]]<br />
| |<br />
<br />
==[[Collaboration:College of William and Mary|Real-Time Computing for Image Guided Neurosurgery]]==<br />
<br />
Using the Tera Grid to implement mesh-based non-rigid registration for Neurosurgery.<br />
<br />
|-<br />
<br />
| | [[Image:Catalyst_logo_final.jpg]]<br />
| | <br />
==[[Collaboration:Harvard CTSC|UL1RR025758 NA-MIC support for Harvard CTSC Translational Imaging Consortium]]==<br />
<br />
The Harvard CTSC Translational Imaging Consortium is using NA-MIC communication tools to facilitate the rapid deployment of expertise in medical imaging acquisition, analysis and visualization to clinical translational investigators. <br />
<br />
|-<br />
<br />
| | [[Image:MicroscopyTutorialSlide.jpg|300px]]<br />
| |<br />
<br />
==[[Collaboration:NCMIR Microscopy|NCBC Supplement for Microscopy and Slicer]]==<br />
<br />
An NCBC Supplement to NCMIR, UCSD focused on the utilization of Slicer with microscopy data and resulted in a tutorial for use of Slicer with confocal microscopy data.<br />
<br />
|-<br />
<br />
| | [[Image:Femur Patella Tibia.jpg|300px]]<br />
| | <br />
<br />
==[[Stanford_Simbios_group|U54GM072970 NCBC Stanford Simbios]]==<br />
<br />
Our sister NCBC at Stanford, dedicated to biomedical simulation, is working to adapt NA-MIC image analysis routines to generate simulation models directly from MRI scans.<br />
<br />
|-<br />
<br />
| | [[Image:I2b2_collage.jpg|100px]]<br />
| | <br />
<br />
==[[NCBC I2B3|U54LM008748 NCBC I2B2]]==<br />
<br />
Our sister NCBC at Harvard Medical School, dedicated to biomedical image informatics, is working with us through the Harvard based, NCRR funded CTSC (Catalyst) program, to develop common open source software for the community.<br />
<br />
|-<br />
<br />
| | [[Image:COPDGeneDashboard4.png|300px]]<br />
| | <br />
<br />
==[[Collaboration: COPDGene | NAMIC supports COPDGene® quantitative analysis]]==<br />
The Genetic Epidemiology of COPD (COPDGene®) Study is one of the largest studies ever to investigate the underlying genetic factors of Chronic Obstructive Pulmonary Disease or COPD. Through the enrollment of over 10,000 individuals, the COPDGene® Study aims to find inherited or genetic factors that make some people more likely than others to develop COPD. With the use of CT scans, COPDGene® also seeks to better classify COPD and understand how the disease may differ from person to person.<br />
<br />
|-<br />
<br />
| | [[Image:BIRNLogo.jpg|300px]]<br />
| | <br />
<br />
==[[Collaboration:BIRN-CC | U24RR025736 BIRN CC]]==<br />
The NCRR-funded efforts of the BIRN Coordinating Center (BIRN-CC) strive to apply state-of-the-art computer science techniques to the growing problem of working with large biomedical informatics datasets. Through a series of test bed projects and use case studies, the BIRN-CC refines and optimizes its offerings. NA-MIC provides an opportunity for BIRN-CC to work closely with the image analysis community and learn from the experience of NA-MIC DBPs.<br />
<br />
|-<br />
<br />
| | [[Image:fBIRNImage.jpg|300px]]<br />
| |<br />
<br />
==[[Collaboration:fBIRN | U24RR021992 fBIRN ]]==<br />
The Function BIRN (fBIRN) addresses the difficult problem of collecting and analyzing fMRI data collected at multiple sites in the context of schizophrenia research. A number of important and difficult scientific and engineering problems can only be addressed in the context of a multi-site consortium that aims to reproducibly quantify brain activity. Outcomes of fBIRN include quality assurance checks that have become industry standards, novel statistical approaches to identify and control for site-specific and scanner-specific biases, standardized stimulus paradigms, and fMRI informatics techniques for large scale datasets.<br />
<br />
|-<br />
<br />
| | [[Image:mBIRNImage.jpg|300px]]<br />
| |<br />
<br />
==[[Collaboration:mBIRN | U24RR021382 mBIRN]]==<br />
The Morphometry BIRN (mBIRN) seeks to support multi-site brain studies using structural and diffusion MRI. Standard acquisition protocols, analysis methods, and informatics techniques have all been studied and promulgated by the mBIRN community of researchers.<br />
<br />
|-<br />
<br />
| | [[Image:BIRNLogo.jpg|300px]]<br />
| | <br />
<br />
==[[Collaboration:BIRN-CTSN | U24RR026057 Collaborative Tools Support Network for BIRN]]==<br />
As the mBIRN and fBIRN test bed activities wind down, research labs that have adopted the BIRN tool suite are supported through the BIRN-CTSN efforts. This funding covers software maintenance and ongoing deployment support to ensure that these important resources remain a vital part of the research community.<br />
<br />
|-<br />
<br />
| | [[Image:BrainColor-logo.png|330px]]<br />
| | <br />
<br />
==[[Collaboration:BrainColor | BrainColor]]==<br />
<br />
[http://www.braincolor.org/ brainCOLOR] is a Collaborative Open Labeling Online Resource for creating high-quality manually segmented brain data sets.<br />
<br />
|}<br />
<br />
----<br />
----<br />
<br />
=International Collaborations=<br />
{| style="text-align:left;"<br />
<br />
| style="width:10%" |[[Image:LevelSetSegmentGUIModule_alpha.png|300px]]<br />
| style="width:90%" |<br />
<br />
==[[NA-MIC VMTK Collaboration | Vascular Modeling Toolkit Collaboration]]==<br />
<br />
Slicer as a platform for segmentation and geometric analysis of vascular segments and image-based computational fluid dynamics (CFD). [[NA-MIC VMTK Collaboration|More...]]<br />
<br />
Collaboration with [http://villacamozzi.marionegri.it/~luca/ Luca Antiga] of the [http://www.marionegri.it Mario Negri Institute].<br />
|-<br />
<br />
| | [[Image:AISTlogo.gif|200px]]<br />
| |<br />
<br />
==[[Collaboration:AIST| NA-MIC Collaboration with Research and Development Project on Intelligent Surgical Instruments]]==<br />
<br />
The Intelligent Surgical Instruments Project uses open-source software engineering tools developed by NA-MIC and leverages them for surgical robotics.<br />
<br />
<br />
|-<br />
<br />
| | [[Image:ISML.gif|200px]]<br />
| |<br />
<br />
==[[Collaboration:UWA-Perth| Real Time Computer Simulation of Human Soft Organ Deformation for Computer Assisted Surgery]]==<br />
<br />
Real Time Computer Simulation of Human Soft Organ Deformation for Computer Assisted Surgery.<br />
<br />
|-<br />
<br />
| |[[Image:CtkLogo.png|200px]]<br />
| |<br />
<br />
==[[Collaboration:CTK| Common Toolkit (CTK)]]==<br />
<br />
[http://commontk.org CTK] is a multi-institution international collaboration to share software development resources for medical imaging applications.<br />
<br />
|-<br />
<br />
| |[[Image:CO-ME-logo.png|200px]]<br />
| |<br />
<br />
==[[Collaboration:CO-ME| Computer Aided and Image Guidance Medical Interventions (CO-ME)]]==<br />
<br />
[http://www.co-me.ch CO-ME], the National Centre of Competence in Research (NCCR) Co-Me, is a network of leading clinics and engineering sites in Switzerland with strong links to industry and international partners.<br />
<br />
<br />
|-<br />
<br />
| |[[Image:OCAIRO animation.gif|200px]]<br />
| |<br />
<br />
==[[Collaboration:OCAIRO| Ontario Consortium of Adaptive Interventions for Radiation Oncology (OCAIRO)]]==<br />
<br />
OCAIRO is a cross-Ontario initiative led by Dr. Jaffray that will work towards developing adaptive radiation therapy, a new approach involving the creation of hardware, software, imaging, and database systems to enable oncologists to adapt radiation to each individual patient and their response during the course of therapy. <br />
<br />
|}</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:DTINoiseStatistics&diff=52243Projects:DTINoiseStatistics2010-05-11T20:30:40Z<p>Melonakos: </p>
<hr />
<div> Back to [[Algorithm:Utah2|Utah 2 Algorithms]]<br />
__NOTOC__<br />
= DTI Noise Statistics =<br />
<br />
Clinical time limitations on the acquisition of diffusion weighted volumes in DTI present several key challenges for quantitative statistics of diffusion tensors and tensor-derived measures. First, the signal-to-noise ratio (SNR) in each individual diffusion weighted volume is relatively low due to the need for quick acquisition. Second, the presence of Rician noise in MR imaging can introduce bias in the estimation of anisotropy and trace. Unlike structural MRI, where intensities are primarily used to obtain contrast, the goal of DTI is to quantify the local diffusion properties in each voxel. Therefore, an understanding of the influence of imaging noise on the distribution of measured values is important to interpret the results of statistical analysis and to design new imaging protocols.<br />
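The Rician bias mentioned above can be illustrated with a small Monte-Carlo simulation (illustrative only, not part of the project's code): the magnitude of a complex signal with Gaussian noise on both channels systematically overestimates the true signal, most severely at low SNR.<br />

```python
# Illustrative simulation of the Rician bias in magnitude MR data.
import numpy as np

def rician_samples(true_signal, sigma, n, rng):
    """Magnitude of a complex signal with i.i.d. Gaussian noise on both channels."""
    real = true_signal + rng.normal(0.0, sigma, n)
    imag = rng.normal(0.0, sigma, n)
    return np.hypot(real, imag)

rng = np.random.default_rng(42)
sigma = 1.0
# Estimated bias E[measured] - true at several SNR levels (signal in units of sigma)
bias = {snr: rician_samples(snr * sigma, sigma, 200_000, rng).mean() - snr * sigma
        for snr in (0.0, 1.0, 5.0)}
```

At zero signal the mean magnitude approaches sigma·sqrt(pi/2) (the Rayleigh mean), while at high SNR the bias shrinks toward roughly sigma²/(2·signal); this is why low-SNR diffusion-weighted volumes bias estimates of anisotropy and trace.<br />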
<br />
= Key Investigators =<br />
<br />
Utah: Casey Goodlett, Guido Gerig, Tom Fletcher, Ross Whitaker<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display?search=DTINoiseStatistics&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Influence of Imaging Noise on DTI Statistics]<br />
<br />
<br />
Project Week Results: [[Media:Riemannian_DTI_ProgWeek2006.ppt|Jan 2006]], [[Media:2006_Summer_Project_Week_DTI_Processing.ppt|Jun 2006]], [[Media:2007_Project_Half_Week_TensorEstimation.ppt|Jan 2007]]<br />
<br />
[[Category: Statistics]] [[Category:Diffusion MRI]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:LesionSegmentation&diff=52242Projects:LesionSegmentation2010-05-11T20:30:19Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[Algorithm:Utah2|Utah 2 Algorithms]]<br />
__NOTOC__<br />
<br />
= Lesion Segmentation =<br />
<br />
= Description =<br />
<br />
Quantification, analysis, and display of brain pathology such as white matter lesions as observed in MRI is important for diagnosis, monitoring of disease progression, improved understanding of pathological processes, and developing new therapies. The Utah Center for Neuroimage Analysis develops new methodology for extraction of brain lesions from volumetric MRI scans and for characterization of lesion patterns over time. The images show white matter lesions (yellow) displayed with ventricles (blue) and a transparent brain surface in a patient with an autoimmune disease (lupus). Lesions in white matter and possible correlations with cognitive deficits are also studied in patients with multiple sclerosis (MS), chronic depression, Alzheimer’s disease (AD), and in older persons.<br />
<br />
<center><br />
{| border="0" style="background:transparent;"<br />
|[[Image:prastawa-lupus-demo001.png|thumb|256px|Segmentation of a lupus case with large lesions.]]<br />
|[[Image:prastawa-lupus-case001-3T.png|thumb|256px|Segmentation of a 3T lupus case with small lesions.]]<br />
|-<br />
|}<br />
</center><br />
<br />
In addition to the identification of the location and shape of lesions in 3D, we are interested in analyzing the longitudinal series of brain MRI showing lesions. For this purpose, we have developed a method for estimating a physical model for lesion formation. The model that we use is an approximation using a reaction-diffusion process that is based on expected diffusion properties (as observed through DTI). This approach gives a richer parametrization of lesion changes in addition to volume and location, as the model estimation provides descriptions of growth and spread for individual lesions. In the future, we plan to incorporate this approach for analyzing lesion MRI of a subject over time by characterizing the change patterns through the physical model parameters.<br />
<br />
[[Image:prastawa-lesionest.png|thumb|center|400px|An example of the lesion model formation estimation result. Starting from an initial guess of the reaction-diffusion process, the method estimates a model that best fits the observed data. Left: initial guess. Center: final estimate. Right: observed patient data. Top row: the T2 intensities, bottom row: lesion probabilities]]<br />
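A minimal 1-D sketch of a reaction-diffusion forward model of the kind described above. The parameter values and function name are hypothetical; the actual method estimates such a model in 3-D from observed lesion probabilities, with diffusivity informed by DTI.<br />

```python
# Minimal 1-D reaction-diffusion forward model (illustrative parameters only):
# du/dt = d * u_xx + r * u * (1 - u), an explicit finite-difference scheme.
import numpy as np

def simulate_lesion(u0, diffusivity, growth, dt, steps):
    """Evolve a lesion probability profile under reaction-diffusion dynamics."""
    u = u0.copy()
    for _ in range(steps):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u   # periodic Laplacian
        u = u + dt * (diffusivity * lap + growth * u * (1 - u))
        u = np.clip(u, 0.0, 1.0)                        # keep probabilities in [0, 1]
    return u

# Seed a small lesion and let it both grow (reaction) and spread (diffusion).
x = np.zeros(100)
x[48:52] = 0.5
final = simulate_lesion(x, diffusivity=0.2, growth=0.5, dt=0.1, steps=500)
```

An explicit scheme like this is only stable for small dt·diffusivity; the estimation step, omitted here, would adjust the diffusivity and growth parameters so the simulated profile best fits the observed lesion maps.<br />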
<br />
= Key Investigators =<br />
<br />
*Utah Algorithms: Marcel Prastawa, Guido Gerig<br />
*Clinical Collaborators<br />
**MIND: Jeremy Bockholt, Mark Scully<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display?search=LesionSegmentation&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Lesion Segmentation]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:DTIPopulationAnalysis&diff=52241Projects:DTIPopulationAnalysis2010-05-11T20:29:59Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:DiffusionImageAnalysis|NA-MIC Collaborations]], [[Algorithm:Utah2|Utah2 Algorithms]], [[Algorithm:MIT|MIT Algorithms]], [[Algorithm:UNC|UNC Algorithms]]<br />
__NOTOC__<br />
= DTI Atlas Building =<br />
<br />
Our methodology for population analysis of DT-MRI is based on unbiased non-rigid registration of a population to a common coordinate system. The registration jointly produces an average DTI atlas, which is unbiased with respect to the choice of a template image, along with diffeomorphic correspondences between the images. The registration image match metric uses a feature detector for thin fiber structures of white matter, and interpolation and averaging of diffusion tensors use the Riemannian symmetric space framework. The anatomically significant correspondence provides a basis for comparison of tensor features and fiber tract geometry in clinical studies.<br />
<br />
[[Image:goodlett_dti_atlas_flowchart.png]]<br />
<br />
{| <br />
|- style="vertical-align:top"<br />
| [[Image:cbg-dtiatlas-tensors.png|thumb|none|<!-- Attempt give both boxes the same height.<br />
--><div style="float:right;clear:right;font-size:inherit;background:inherit;border:none;margin:0;"><!--<br />
--></div>Tensors in Splenium of averaged DTI atlas]]<br />
| [[Image:cbg-dtiatlas-tracts.png|thumb|none|<!-- Attempt give both boxes the same height.<br />
--><div style="float:right;clear:right;font-size:inherit;background:inherit;border:none;margin:0;"><!--<br />
--></div>Tractography through Corpus Callosum of averaged DTI atlas]]<br />
|}<br />
<br />
<br />
Our registration procedure is based on a scalar feature image which is sensitive to sheet-like structures. We have observed that the major fiber bundles of interest occur as sheet- or tube-like manifolds in the FA image of the brain. As the feature image we use the maximum eigenvalue of the Hessian of the FA image. Images are initially aligned using an affine registration and then deformed to a common coordinate system using the unbiased atlas-building procedure of Joshi et al. [http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=15501084&dopt=Citation]. The deformation fields produced by the registration process are applied to the tensor fields using appropriate methods for reorienting and interpolating tensors. The transformed images are averaged in the atlas space to produce a DTI atlas.<br />
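In NumPy terms, the Hessian-eigenvalue feature image might be computed as in the sketch below (synthetic data and function name are mine; the real pipeline would also smooth the FA volume at an appropriate scale before differentiation):<br />

```python
# Sketch: per-voxel maximum eigenvalue of the Hessian of a 3-D scalar image.
import numpy as np

def hessian_max_eigenvalue(volume):
    """Return the largest Hessian eigenvalue at each voxel of a 3-D image."""
    grads = np.gradient(volume)                       # first derivatives per axis
    hess = np.empty(volume.shape + (3, 3))
    for i, g in enumerate(grads):
        for j, gg in enumerate(np.gradient(g)):
            hess[..., i, j] = gg
    hess = 0.5 * (hess + np.swapaxes(hess, -1, -2))   # symmetrize numerical noise
    return np.linalg.eigvalsh(hess)[..., -1]          # eigvalsh sorts ascending

# A synthetic "FA" volume containing a bright sheet in the middle slice.
vol = np.zeros((16, 16, 16))
vol[8, :, :] = 1.0
feature = hessian_max_eigenvalue(vol)
```

For a bright sheet the eigenvalue across the sheet is strongly negative at its center and positive just beside it, so the response localizes the sheet boundary; the actual detector's sign convention and scale selection are not reproduced here.<br />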
<br />
An initial test was performed by applying the procedure to a set of images of healthy subjects at age one year. The results of the tensor averaging are shown on the right. Tractography was also performed on the mean atlas image, as shown.<br />
<br />
==Collaboration with [[DBP:Harvard|PNL]]==<br />
<br />
We have begun to apply the DTI atlas building procedure to data provided by the [[DBP:Harvard|PNL]]. A combined set of DTI scans from control and schizophrenic subjects was aligned using the procedure described above. In the atlas space, the SZ and CNTL groups are processed to produce voxel-wise statistics for each group. The figure below shows colored FA and mean diffusivity slices for both the CNTL and SZ groups. Preliminary work is now being done on region of interest (ROI) hypothesis testing between the two populations.<br />
<br />
{|<br />
|[[Image:ncatlasfibers-axial.jpg|thumb|256px|Axial view of fibers tracked in the control atlas]]<br />
|[[Image:ncatlasfibers-coronal.jpg|thumb|256px|Coronal view of fibers tracked in the control atlas]]<br />
|[[Image:ncatlasfibers-sagittal.jpg|thumb|256px|Saggital view of fibers tracked in the control atlas]]<br />
|}<br />
<br />
==Collaboration with [[DBP:MIT|MIT]]==<br />
<br />
[[Image:Gcasey-atlas-slices.png |thumb|480px|Slices from atlases built with only affine registration, fluid registration, and b-spline registration]]<br />
<br />
During the [[2008_Summer_Project_Week:PopulationDTIApplication|Summer 2008 project week]], we worked with the MIT group to incorporate [[Projects:GroupwiseRegistration|b-spline groupwise registration]] into our atlas building procedure to construct a 100% open source atlas building toolkit. The results of this work appear on initial inspection to be similar in quality to those obtained with the fluid-based registration.<br />
<br />
= DTI Quantitative Tract Analysis =<br />
<br />
This project proposes a framework for quantitative analysis of DTI data. The framework uses the full tensor information for statistical analysis, using the affine-invariant Riemannian metric to define operations such as interpolation and averaging on tensors. Furthermore, the results of tractography are used to provide a reference coordinate system that represents the underlying structure of fiber bundles. The tract modeling framework includes a model both of the geometry of the fiber bundle and of the diffusion properties along the bundle. A new anisotropy measure called geodesic anisotropy (GA) is also included in the framework.<br />
<br />
[[image:corouge_dti_statistics.jpg|thumb|320px|Fiber tracts colored with FA attributes]]<br />
<br />
[[image:corouge-tract-analysis-flowchart.jpg|550px]]<br />
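A compact NumPy sketch of these Riemannian tensor operations. The function names are mine, and the fixed-point iteration below is one standard way to compute the affine-invariant Fréchet mean, not necessarily the project's exact implementation; GA is taken as the Frobenius distance of log(D) from its isotropic part.<br />

```python
# Sketch: affine-invariant averaging of diffusion tensors and geodesic anisotropy.
import numpy as np

def spd_log(m):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, v = np.linalg.eigh(m)
    return (v * np.log(w)) @ v.T

def spd_exp(m):
    """Matrix exponential of a symmetric matrix."""
    w, v = np.linalg.eigh(m)
    return (v * np.exp(w)) @ v.T

def geodesic_anisotropy(tensor):
    """GA: Frobenius distance of log(D) from its isotropic (trace) part."""
    l = spd_log(tensor)
    iso = np.trace(l) / 3.0 * np.eye(3)
    return np.linalg.norm(l - iso)

def affine_invariant_mean(tensors, iters=20):
    """Fixed-point iteration for the Fréchet mean under the affine-invariant metric."""
    mean = sum(tensors) / len(tensors)        # Euclidean mean as a starting point
    for _ in range(iters):
        w, v = np.linalg.eigh(mean)
        rt = (v * np.sqrt(w)) @ v.T           # mean^{1/2}
        irt = (v / np.sqrt(w)) @ v.T          # mean^{-1/2}
        tangent = sum(spd_log(irt @ t @ irt) for t in tensors) / len(tensors)
        mean = rt @ spd_exp(tangent) @ rt     # exponential map back to SPD matrices
    return mean
```

GA vanishes for isotropic tensors and, unlike FA, is unbounded; interpolation along the geodesic between two tensors uses the same log/exp maps with a weight in [0, 1].<br />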
<br />
= Group Analysis of tract properties =<br />
We have combined the procedure for atlas building of DTI with the tract analysis to provide a method for tract-specific statistical analysis. This procedure is designed to do hypothesis testing and discrimination between two populations. The focus of the analysis is tract-specific comparison of diffusion properties such as fractional anisotropy (FA), mean diffusivity, etc., rather than comparison of the shape or size of fiber tracts. The procedure works by combining all images from both populations into a single DTI atlas using the procedure described above. In the atlas, fibers are computed to create a template geometry for fiber tracts of interest. The transformations mapping each subject into the atlas are then used to map the diffusion statistics of interest onto the reference atlas fiber tract. This creates a set of fiber bundles, one for each subject, with identical geometry but differing diffusion properties. The tract analysis procedure described above is then used to create a sampled function of these diffusion properties as a function of tract arc-length.<br />
<br />
Statistical analysis of the sampled functions requires understanding that the arc-length functions are sampled representations of a continuous underlying biology. Furthermore, diffusion statistics such as FA or MD may be highly correlated in the same voxel, and sampled locations along the tracts are likely to be highly spatially correlated. For these reasons, a method that operates on the space of continuous functions of the joint tensor properties is advantageous for statistical analysis. Our method fits b-spline basis functions to the sampled curves and performs PCA for data-driven reduction of the dimensionality. Within the PCA coefficient space, multivariate analysis is performed using a permutation test based on the Hotelling T2 metric. This provides a method for performing a single global hypothesis test between the tracts of the two populations.<br />
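The coefficient-space permutation test described above can be sketched in a few lines of numpy. This is an illustrative sketch under simplifying assumptions, not the project's implementation: for brevity, PCA is applied directly to the sampled arc-length profiles instead of to fitted b-spline coefficients, and all function names and the synthetic FA profiles are hypothetical.<br />

```python
import numpy as np

def hotelling_t2(x, y):
    """Two-sample Hotelling T^2 statistic with pooled covariance."""
    nx, ny = len(x), len(y)
    d = x.mean(axis=0) - y.mean(axis=0)
    s = ((nx - 1) * np.cov(x, rowvar=False) +
         (ny - 1) * np.cov(y, rowvar=False)) / (nx + ny - 2)
    return (nx * ny) / (nx + ny) * d @ np.linalg.solve(s, d)

def tract_group_test(curves_a, curves_b, n_modes=3, n_perm=2000, seed=0):
    """PCA on the pooled arc-length profiles, then a permutation test on
    the Hotelling T^2 statistic in the PCA coefficient space."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack([curves_a, curves_b])
    centered = pooled - pooled.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    coeffs = centered @ vt[:n_modes].T        # leading PCA coefficients
    na = len(curves_a)
    t_obs = hotelling_t2(coeffs[:na], coeffs[na:])
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        exceed += hotelling_t2(coeffs[idx[:na]], coeffs[idx[na:]]) >= t_obs
    return t_obs, (exceed + 1) / (n_perm + 1)

# Synthetic FA profiles along a tract; group B has a mid-tract FA decrease
rng = np.random.default_rng(1)
s = np.linspace(0.0, 1.0, 50)
bump = np.exp(-((s - 0.5) / 0.1) ** 2)
group_a = 0.6 + 0.02 * rng.standard_normal((12, 50))
group_b = 0.6 - 0.05 * bump + 0.02 * rng.standard_normal((12, 50))
t2, p = tract_group_test(group_a, group_b)
```

With the synthetic mid-tract FA decrease above, the permutation p-value comes out far below 0.05; for identical groups it would be roughly uniform on (0, 1].<br />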
<br />
== Software ==<br />
<br />
* Algorithms written in ITK. GUI of prototype software written in Qt ('''FiberViewer''' software). Prototype software tested in clinical studies at UNC. Validation tests with repeated DTI of the same subject (6 cases). [http://www.ia.unc.edu/dev/download/fiberviewer/index.htm| FiberViewer download]<br />
* Additionally available: ITK-compatible fiber tracking prototype tool '''FiberTracking''', used to study overlap/dissimilarity with other tools already available to NA-MIC. Functionality: reads raw DW-MRI data (6-direction Basser scheme), fiber tracking based on user-selected source and target regions (S. Mori scheme), display of fiber tracts and volumetric data; output: sets of streamlines in ITK polyline format attributed with DTI properties and display parameters (radius of tubes, local color, etc.). [http://www.ia.unc.edu/dev/download/fibertracking/index.htm| FiberTracking download]<br />
* Command line tools for DTI processing available [http://www.sci.utah.edu/~gcasey/software/ DWIProcess]<br />
<br />
<br />
== Application to pediatric data ==<br />
<br />
We have tested the analysis procedure on a dataset of pediatric development provided by John H Gilmore of UNC.<br />
<br />
= Key Investigators =<br />
*Utah Algorithms: Casey Goodlett, Isabelle Corouge, P. Thomas Fletcher, Guido Gerig<br />
*Clinical Collaborators<br />
**UNC: John H Gilmore<br />
**PNL: Marek Kubicki, Sylvain Bouix<br />
*Algorithm Collaborators<br />
**MIT: Polina Golland, Serdar Balci<br />
<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display?search=DTIPopulationAnalysis&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Group Analysis of DTI Fiber Tracts]<br />
<br />
<br />
[[Category: Registration]] [[Category:Diffusion MRI]] [[Category:Schizophrenia]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:TissueClassificationWithNeighborhoodStatistics&diff=52240Projects:TissueClassificationWithNeighborhoodStatistics2010-05-11T20:29:10Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[Algorithm:Utah|Utah Algorithms]]<br />
__NOTOC__<br />
= Tissue Classification with Neighborhood Statistics =<br />
<br />
We have implemented the MRI Tissue Classification Algorithm described in the reference below. Classes for non-parametric density estimation and automatic parameter selection have been implemented as the basic framework on which we build the classification algorithm.<br />
<br />
= Description =<br />
<br />
* The stochastic non-parametric density estimation framework is very general and allows the user to change kernel types (we have coded isotropic Gaussian, but additional kernels can easily be derived from the same parent class) and sampler types (for example local vs. global image sampling as well as sampling in non-image data) as template parameters.<br />
<br />
* The classification class uses the stochastic non-parametric density estimation framework to implement the algorithm in the reference below.<br />
<br />
* An existing ITK bias correction method has been incorporated into the method.<br />
<br />
* Currently, we are registering atlas images to our data using the stand-alone LandmarkInitializedMutualInformationRegistration application. Ideally, we'd like to incorporate an existing registration algorithm into our code so that classification can be carried out in one step. The initialization of the registration can be provided as command line arguments.<br />
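The non-parametric density estimation at the heart of the framework, with the isotropic Gaussian kernel mentioned above, can be sketched as a plain Parzen-window estimator. This is an illustrative numpy sketch with hypothetical names, not the ITK class hierarchy described:<br />

```python
import numpy as np

def parzen_density(samples, query, sigma):
    """Parzen-window density estimate with an isotropic Gaussian kernel:
    average of Gaussians of width sigma centered at each sample."""
    d = samples.shape[1]
    diff = query[:, None, :] - samples[None, :, :]
    sq = (diff ** 2).sum(axis=-1)
    norm = (2.0 * np.pi * sigma ** 2) ** (d / 2.0)
    return np.exp(-sq / (2.0 * sigma ** 2)).mean(axis=1) / norm

# 1-D sanity check: standard normal samples, estimate at the mode and a tail
rng = np.random.default_rng(0)
samples = rng.standard_normal((5000, 1))
q = np.array([[0.0], [3.0]])
dens = parzen_density(samples, q, sigma=0.2)
```

The estimate at the mode should be close to the standard normal peak (about 0.39, slightly smoothed by the kernel), and far smaller in the tail; in the actual framework the kernel and the sampler are template parameters.<br />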
<br />
= Key Investigators =<br />
<br />
Utah: Tolga Tasdizen, Suyash Awate, Ross Whitaker<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display?search=TissueClassificationWithNeighborhoodStatistics&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Tissue Classification with Neighborhood Statistics]<br />
<br />
[[Category:MRI]] [[Category: Statistics]] [[Category:Registration]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:DTIProcessingTools&diff=52239Projects:DTIProcessingTools2010-05-11T20:28:49Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[Algorithm:Utah|Utah Algorithms]]<br />
__NOTOC__<br />
= DTI Processing and Statistics Tools =<br />
<br />
<br />
* ''Differential Geometry'' We will provide methods for computing geodesics and distances between diffusion tensors. Several different metrics will be made available, including a simple linear metric and also a symmetric space (curved) metric. These routines are the building blocks for the routines below.<br />
<br />
* ''Statistics'' Given a collection of diffusion tensors, compute the average and covariance statistics. This can be done using the metrics and geometry routines above. A general method for testing differences between groups is planned. The hypothesis test also depends on the underlying geometry used.<br />
<br />
* ''Interpolation'' Interpolation routines will be implemented as a weighted averaging of diffusion tensors in the metric framework. The metric may be chosen so that the interpolation preserves desired properties of the tensors, e.g., orientation, size, etc.<br />
<br />
* ''Filtering'' We will provide anisotropic filtering of DTI using the full tensor data (as opposed to component-wise filtering). Filtering will also be able to use the different metrics, allowing control over which properties of the tensors are preserved in the smoothing. We have also developed methods for filtering the original diffusion weighted images (DWIs) that take the Rician distribution of MR noise into account (see MICCAI 2006 paper below).<br />
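The symmetric space (affine-invariant) operations above reduce to matrix logarithms and exponentials of symmetric positive-definite tensors, which can be computed via eigendecomposition. A minimal numpy sketch, with hypothetical function names (the actual toolkit API differs):<br />

```python
import numpy as np

def _sym_fun(a, fun):
    """Apply a scalar function to the eigenvalues of a symmetric matrix."""
    w, v = np.linalg.eigh(a)
    return (v * fun(w)) @ v.T

def affine_invariant_distance(a, b):
    """Geodesic distance between SPD tensors under the affine-invariant
    (symmetric space) metric: ||log(A^{-1/2} B A^{-1/2})||_F."""
    a_isqrt = _sym_fun(a, lambda w: 1.0 / np.sqrt(w))
    return np.linalg.norm(_sym_fun(a_isqrt @ b @ a_isqrt, np.log), 'fro')

def geodesic_interp(a, b, t):
    """Point at parameter t in [0, 1] on the geodesic from A to B."""
    a_sqrt = _sym_fun(a, np.sqrt)
    a_isqrt = _sym_fun(a, lambda w: 1.0 / np.sqrt(w))
    log_m = _sym_fun(a_isqrt @ b @ a_isqrt, np.log)
    return a_sqrt @ _sym_fun(t * log_m, np.exp) @ a_sqrt

a = np.diag([3.0, 1.0, 1.0])   # prolate tensor along x
b = np.diag([1.0, 3.0, 1.0])   # prolate tensor along y
d = affine_invariant_distance(a, b)
mid = geodesic_interp(a, b, 0.5)
```

For these two prolate tensors the geodesic midpoint is diag(√3, √3, 1), which preserves the determinant, whereas linear averaging would give diag(2, 2, 1) and inflate it; this determinant-preservation is one reason a curved metric is offered alongside the linear one.<br />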
<br />
{|<br />
|[[Image:DTIFiltering.jpg|thumb|512px|Coronal slice from a noisy DTI (left). The same slice after applying our Rician noise DTI filtering method (right).]]<br />
|}<br />
<br />
= Description =<br />
<br />
* Developed a Slicer module for our DT-MRI Rician noise removal during the [[2007_Project_Half_Week|2007 Project Half Week]]. Also enhanced the method by including an automatic method for determining the noise sigma in the image.<br />
<br />
* Developed prototype of DTI geometry package. This includes an abstract class for computing distances and geodesics between tensors, while derived classes can specify the particular metric to use. Current implemented subclasses are the basic linear metric and the symmetric space metric.<br />
<br />
* Developed prototype of DTI statistical package. A general class has been developed for computing averages and principal modes of variation of tensor data. The statistics class can use any of the metrics described above.<br />
<br />
* We have begun work on a general method for hypothesis testing of differences in two diffusion tensor groups. This method works on the full six-dimensional tensor information, rather than derived measures. The hypothesis testing class can also use any of the different tensor metrics.<br />
<br />
* Participated in the [[Engineering:Programmers_Week_Summer_2005|Programmer's Week]] (June 2005, Boston). During this week the DTI statistics code was developed and added to the NA-MIC toolkit. See our [[Progress_Report:Diffusion_Tensor_Statistics|Progress Report (July 2005)]].<br />
<br />
= Key Investigators =<br />
<br />
* Utah: Tom Fletcher, Ran Tao, Saurav Basu, Sylvain Gouttard, Ross Whitaker<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display?search=DTIProcessingTools&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on DTI Processing and Statistics Tools]<br />
<br />
= Software =<br />
<br />
* [http://www.nitrc.org/projects/dtiricianrem| Slicer3 Command Line Module]<br />
<br />
<br />
<br />
<br />
[[Category:Diffusion MRI]] [[Category:Statistics]] [[Category:Registration]] [[Category:Slicer]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:DTIVolumetricWhiteMatterConnectivity&diff=52238Projects:DTIVolumetricWhiteMatterConnectivity2010-05-11T20:28:27Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:DiffusionImageAnalysis|NA-MIC Collaborations]], [[Algorithm:Utah|Utah Algorithms]]<br />
__NOTOC__<br />
= DTI Volumetric White Matter Connectivity =<br />
= Description =<br />
<br />
We have developed a PDE-based approach to white matter connectivity from DTI that is founded on the principle of minimal paths through the tensor volume. Our method computes a volumetric representation of a white matter tract given two endpoint regions. We have also developed statistical methods for quantifying the full tensor data along these pathways, which should be useful in clinical studies using DT-MRI. This work has been accepted to IPMI 2007.<br />
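The project's solver is PDE-based; purely as an illustration of the minimal-path principle, the sketch below runs Dijkstra on a discrete 2-D grid in which a step e into a voxel with tensor D costs sqrt(e^T D^-1 e), so paths aligned with strong diffusion are cheap. The function names and the synthetic tensor field are hypothetical, and this is not the method's actual discretization.<br />

```python
import heapq
import numpy as np

def tensor_geodesic_distances(tensors, source):
    """Dijkstra front propagation on a 2-D grid of 2x2 diffusion tensors;
    the cost of a step e into a voxel with tensor D is sqrt(e^T D^-1 e)."""
    ny, nx = tensors.shape[:2]
    dist = np.full((ny, nx), np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    steps = [(0, 1), (0, -1), (1, 0), (-1, 0),
             (1, 1), (1, -1), (-1, 1), (-1, -1)]
    while heap:
        d0, (y, x) = heapq.heappop(heap)
        if d0 > dist[y, x]:
            continue                      # stale heap entry
        for dy, dx in steps:
            yn, xn = y + dy, x + dx
            if not (0 <= yn < ny and 0 <= xn < nx):
                continue
            e = np.array([dy, dx], float)
            d_inv = np.linalg.inv(tensors[yn, xn])
            nd = d0 + np.sqrt(e @ d_inv @ e)
            if nd < dist[yn, xn]:
                dist[yn, xn] = nd
                heapq.heappush(heap, (nd, (yn, xn)))
    return dist

# Synthetic 2-D "tensor field": isotropic background, one fast channel row
field = np.tile(0.1 * np.eye(2), (9, 20, 1, 1))
field[4] = np.eye(2)                      # high-diffusion channel at row 4
dist = tensor_geodesic_distances(field, (4, 0))
```

In the synthetic field, the geodesic distance grows by 1 per voxel along the high-diffusion channel but by roughly sqrt(10) per step off it, so the propagating front preferentially follows the tract, which is the behavior the volumetric connectivity solver exploits.<br />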
<br />
{|<br />
|[[Image:FiberTracts-top.jpg|thumb|256px|Five extracted fiber tracts (top view)]]<br />
|[[Image:FiberTracts-angle.jpg|thumb|256px|Five extracted fiber tracts (side angle view)]]<br />
|}<br />
<br />
''Efficient GPU implementation:'' We have recently implemented a fast solver for volumetric white matter connectivity using graphics hardware, i.e., the Graphics Processing Unit (GPU). This method takes advantage of the massively parallel nature of modern GPUs and runs 50-100 times faster than a standard CPU implementation. The fast solver allows interactive visualization of white matter pathways. We have developed a user interface in which a user selects two endpoint regions for the white matter tract of interest; the tract is typically computed and displayed within 1-3 seconds. This work has been submitted to VIS 2007.<br />
<br />
{|<br />
|[[Image:GPU-tract.jpg|thumb|320px|A screen shot from the interactive white matter connectivity solver. Shown are two selected endpoint regions and the resulting white matter pathway.]]<br />
|}<br />
<br />
= Key Investigators =<br />
Utah 1: Tom Fletcher, Ran Tao, Won-Ki Jeong, Ross Whitaker<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display?search=DTIVolumetricWhiteMatterConnectivity&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on DTI Volumetric White Matter Connectivity]<br />
<br />
[[Category:Statistics]] [[Category:Diffusion MRI]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:StructuralAndDWIPipeline&diff=52237Projects:StructuralAndDWIPipeline2010-05-11T20:27:43Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[Algorithm:Utah|Utah Algorithms]]<br />
__NOTOC__<br />
= A Framework for Joint Analysis of Structural and Diffusion MRIs =<br />
<br />
<br />
{|<br />
|[[Image:pipeline.png|thumb|300px|Joint structural and diffusion image analysis pipeline.]]<br />
|}<br />
<br />
= Description =<br />
This framework addresses the simultaneous alignment and filtering of DWI images to correct eddy current artifacts, and the subsequent alignment of those images to a structural T1 MRI to correct for susceptibility artifacts; the associated paper demonstrates the importance of performing these corrections. It also shows how a T1-based, group-specific atlas can be used to generate grey-matter regions of interest that can drive subsequent connectivity analyses. The result is a system that can be combined with a variety of MRI analysis tools for tissue classification, morphometry, and cortical parcellation. <br />
<br />
<br />
* ''Eddy Current Correction'' <br />
<br />
We implemented the diffusion-weighted image (DWI) registration model from the paper of G. K. Rohde et al. [[Image:DTIregistration.png|thumb|256px|Coronal slice from an unregistered DTI (left). The same slice after applying the registration model (right).]]<br />
Patient head motion and eddy current distortion cause artifacts in maps of diffusion parameters computed from DWI. <br />
This model corrects both distortions at the same time, including brightness correction.<br />
<br />
<br />
* ''Structural Image Preprocessing''<br />
Preprocess structural images to remove the skull, correct the bias field, normalize intensities, and segment tissue classes (to provide a white matter mask).<br />
<br />
* ''Group Atlas''<br />
Build a structural atlas from all subjects' T1 images. Seed regions for tract endpoints are manually delineated in the structural atlas and then mapped from the atlas to each individual. White matter tracts are then automatically segmented and their diffusion properties quantified using volumetric pathway analysis. <br />
<br />
{|<br />
|[[Image:seeds.png|thumb|256px|The structural atlas built from the five T1 images with manually outlined <br />
frontal forceps seeds (left). The seeds mapped to each of the individual cases (right).]]<br />
|[[Image:tracts.png|thumb|350px|Tracts on each of the individual cases.]]<br />
|}<br />
<br />
<br />
= Key Investigators =<br />
<br />
* Utah: Ran Tao, Tom Fletcher, Ross Whitaker<br />
<br />
= Publications =<br />
<br />
* [http://www.na-mic.org/publications/pages/display?search=Projects%3AStructuralAndDWIPipeline&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on A Framework for Joint Analysis of Structural and Diffusion MRI]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:BrainManifold&diff=52235Projects:BrainManifold2010-05-11T20:26:38Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[Algorithm:Utah|Utah Algorithms]]<br />
__NOTOC__<br />
= Brain Manifold Learning =<br />
[[Image:sgerber_brainmanifold_oasis_manifold.png|thumb|300px|Manifold learned from the OASIS database. The image shows a 2-dimensional parametrization of the database; green, red, and blue mark the mean, median, and mode images computed using the manifold representation.]]<br />
<br />
This work investigates the use of manifold-learning approaches in the context of brain population analysis. The goal is to construct, from a set of brain images, a manifold model that captures the variability in shape, i.e., a parametrization of the shape space.<br />
Such a manifold model is interesting in several ways:<br />
* The low-dimensional parametrization simplifies statistical analysis of populations.<br />
* It has applications to searching and browsing large databases.<br />
* The manifold represents a localized atlas, an alternative to template-based applications, for example as a segmentation prior.<br />
* It can aid clinical diagnosis: different regions on the manifold can indicate different pathologies.<br />
<br />
= Description =<br />
In many neuroimage applications a summary or representation of a population of brain images is needed. A common approach is to build a template, or atlas, that represents a population. Recent work introduced clustering-based approaches, which compute multiple templates in a data-driven fashion, each template representing a part of the population. In a different direction, researchers have proposed kernel-based regression of brain images with respect to an underlying parameter. This yields a continuous curve in the space of brain images that estimates the conditional expectation of a brain image given the parameter. A natural question that arises from these investigations is: can the space spanned by a set of brain images be approximated by a low-dimensional manifold? In other words, how effectively can a low-dimensional, nonlinear model represent the variability in brain anatomy?<br />
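As a generic illustration of the question posed above, and not the specific method of this project, the following numpy sketch implements classical Isomap: build a k-nearest-neighbor graph, approximate geodesic distances by shortest paths, and embed them with classical MDS. All names and the synthetic helix data are hypothetical.<br />

```python
import numpy as np

def isomap_embed(x, n_neighbors=6, n_dims=2):
    """Classical Isomap sketch: kNN graph -> shortest-path (geodesic)
    distances -> classical MDS on the geodesic distance matrix."""
    n = len(x)
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    g = np.full((n, n), np.inf)           # graph of neighbor distances
    np.fill_diagonal(g, 0.0)
    for i in range(n):
        nn = np.argsort(d[i])[1:n_neighbors + 1]
        g[i, nn] = d[i, nn]
        g[nn, i] = d[nn, i]               # symmetrize
    for k in range(n):                    # Floyd-Warshall shortest paths
        g = np.minimum(g, g[:, k:k + 1] + g[k:k + 1, :])
    j = np.eye(n) - np.ones((n, n)) / n   # classical MDS: double-centering
    b = -0.5 * j @ (g ** 2) @ j
    w, v = np.linalg.eigh(b)
    order = np.argsort(w)[::-1][:n_dims]
    return v[:, order] * np.sqrt(np.maximum(w[order], 0.0))

# Synthetic "population": points on a curved 1-D manifold in 3-D (a helix)
t = np.linspace(0.0, 3.0 * np.pi, 60)
pts = np.c_[np.cos(t), np.sin(t), 0.5 * t]
emb = isomap_embed(pts, n_neighbors=4, n_dims=2)
```

For the helix, the leading embedding coordinate recovers the underlying 1-dimensional parameter almost perfectly, which is the sense in which a low-dimensional manifold model can parametrize a curved set of high-dimensional data.<br />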
<br />
= Key Investigators =<br />
<br />
* Utah: Samuel Gerber, Tolga Tasdizen, Sarang Joshi, Tom Fletcher, Ross Whitaker<br />
<br />
= Publications =<br />
<br />
'' In Print ''<br />
''Published in MICCAI and ICCV''<br />
* [http://www.cs.utah.edu/~sgerber/research/ Manifold Learning Research Page]<br />
* [http://www.na-mic.org/publications/pages/display?search=BrainManifold&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Brain Manifold Learning]<br />
<br />
'' In Press ''<br />
<br />
* S Gerber, T Tasdizen, R Whitaker, Dimensionality Reduction and Principal Surfaces via Kernel Map, ICCV 2009<br />
* S Gerber, T Tasdizen, S Joshi, R Whitaker, On the Manifold Structure of the Space of Brain Images, MICCAI 2009<br />
<br />
<br />
[[Category:Statistics]] [[Category:Registration]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:PopulationBasedCorrespondence&diff=52234Projects:PopulationBasedCorrespondence2010-05-11T20:25:11Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[Algorithm:UNC|UNC Algorithms]]<br />
__NOTOC__<br />
= Correspondence of complex structures using (Curvature + Location) MDL =<br />
<br />
<br />
[[Image:UNCShape_ShapeCorrespondence.png|thumb|300px|]]<br />
<br />
<br />
The SPHARM-PDM based correspondence is a global correspondence method that performs well for many structures. In our studies, however, it has been shown to be inferior to population-based correspondence methods when assessing statistical modeling properties derived from the established correspondence, such as the specificity and generalization ability of a statistical model. Current methodology in population-based correspondence is based mainly on minimizing distribution properties of surface point locations and is thus not invariant to alignment.<br />
<br />
= Description =<br />
<br />
We have extended the population-based correspondence framework to include curvature-based measurements, such as the Koenderink shape index S and curvedness C, in combination with the standard location information. The implementation is based on ITK and uses the SPHARM-PDM correspondence as an initialization. We have favorably compared our combined "Curvature + Location" MDL to the standard MDL, as well as to the SPHARM approach. Especially for more complex structures, such as the femoral bone and the striatal structure (composed of caudate, nucleus accumbens, and putamen), our method outperforms the other methods. This also illustrates the potential of the approach for objects as complex as the human cortex, the object of study in the NAMIC year 07/08. In follow-up work using a particle-based entropy approach, local curvature-based shape measures have proven to improve cortical surface correspondence.<br />
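The Koenderink measures have simple closed forms in the principal curvatures k1 >= k2: the shape index S = (2/pi) arctan((k1 + k2) / (k1 - k2)), which lies in [-1, 1], and the curvedness C = sqrt((k1^2 + k2^2) / 2). A minimal numpy sketch, using one common sign convention (the implementation's convention may differ):<br />

```python
import numpy as np

def shape_index_curvedness(k1, k2):
    """Koenderink shape index S in [-1, 1] and curvedness C from the
    principal curvatures; arctan2 handles the umbilic case k1 == k2."""
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
    s = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
    c = np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)
    return s, c

# sphere of radius 2: k1 = k2 = 0.5  ->  S = 1 (cap), C = 0.5
s_sph, c_sph = shape_index_curvedness(0.5, 0.5)
# symmetric saddle: k1 = -k2         ->  S = 0
s_sad, c_sad = shape_index_curvedness(1.0, -1.0)
# cylinder: k1 = 2, k2 = 0           ->  S = 0.5 (ridge)
s_cyl, c_cyl = shape_index_curvedness(2.0, 0.0)
```

S separates local surface types by shape alone (cap, ridge, saddle, rut, cup), while C carries the scale information that S deliberately factors out; this is what makes the pair useful as an alignment-invariant complement to point locations in the MDL cost.<br />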
<br />
= Key Investigators =<br />
* UNC Algorithms: Ipek Oguz, Martin Styner<br />
<br />
= Publications =<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display?search=PopulationBasedCorrespondence&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&searchbytag=checked&sponsors=checked| NA-MIC Publications Database on Population Based Correspondence]<br />
<br />
= Links =<br />
<br />
* [[NA-MIC/Projects/Structural/Shape_Analysis/Correspondence|Shape Correspondence Based on Local Curvature]]<br />
<br />
Project Week Results: [[media:2006_MIT_Project_Week_LocalCurvatureBasedCorrespondence.ppt|Jun 2006]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:MethodEvaluationValidation&diff=52233Projects:MethodEvaluationValidation2010-05-11T20:24:50Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[Algorithm:UNC|UNC Algorithms]]<br />
__NOTOC__<br />
= Evaluation and Comparison of Medical Image Analysis Methods =<br />
<br />
In this project, we focus on the evaluation of medical image analysis methods for specific clinical applications, with respect to both the development of evaluation methodology and the organization of venues promoting such comparison and validation studies.<br />
<br />
= Description =<br />
<br />
[[Image:Cause07Competition.gif|thumb|300px|]]<br />
<br />
We have developed an open source tool for the evaluation of 3D segmentation algorithms, combining a variety of segmentation comparison methods with respect to the performance of human expert raters. This tool was employed during the workshop "3D Segmentation in the Clinic - A Grand Challenge", held on October 26, 2007 in conjunction with MICCAI 2007, and continues to be used on the workshop websites for caudate segmentation (cause07.org) and liver segmentation (sliver07.org). On these pages, you can browse the results of the various systems and read the respective papers and descriptions. Anybody can join the competition by registering a team, downloading the training and test datasets, and submitting segmentation results. The MICCAI 2007 workshop was organized in collaboration with Tobias Heimann (German Cancer Research Center, liver segmentation) and Bram van Ginneken (University of Utrecht, general organization and webpage development). There were close to 70 participants at the workshop, with overwhelmingly positive feedback.<br />
<br />
A version adapted for lesion segmentation (in light of the MIND DBP 2 application) was developed for the 2008 MICCAI workshop, with special focus on lesion detection metrics. The workshop consisted of three challenges: multiple sclerosis lesion segmentation, liver tumor segmentation, and artery central lumen line extraction. Next to our team, the workshop was co-organized with Children's Hospital Boston, Siemens Medical Systems, and Erasmus University (Rotterdam, Netherlands). As in the previous year, the workshop was a full success and, with over 70 participants, MICCAI's largest workshop that year. The MS lesion segmentation challenge continues online. The 2008 workshop proceedings were also publicly released in the online [http://www.midasjournal.org MIDAS journal].<br />
<br />
In 2009, the third installment of the workshop took place. We provided significant advice to that year's organizers but did not directly organize one of the challenges. The success in 2009 reassures us of the continued need for such evaluation workshops, and we are again directly involved in the 2010 MICCAI workshop. The challenges this year encompass knee bone and cartilage segmentation, parotid gland segmentation, and lung CT registration.<br />
<br />
= Key Investigators =<br />
* UNC: Martin Styner<br />
<br />
= Publications =<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display?search=MethodEvaluationValidation&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publication Database on Evaluation and Comparison of Medical Image Analysis Methods]<br />
<br />
= Links =<br />
* [http://grand-challenge2009.bigr.nl/ MICCAI 2009 workshop "3D Segmentation in the Clinic III - A Grand Challenge"]<br />
* Internal link to event: [[Miccai_2008_Segmentation_Workshop ]]<br />
* [http://grand-challenge2008.bigr.nl MICCAI 2008 workshop "3D Segmentation in the Clinic II - A Grand Challenge"]<br />
** [http://www.ia.unc.edu/MSseg MS lesion segmentation] <br />
** [http://www.midasjournal.org/browse/journal/44 MS lesion workshop proceedings]<br />
* [http://mbi.dkfz-heidelberg.de/grand-challenge2007 MICCAI 2007 workshop "3D Segmentation in the Clinic - A Grand Challenge"]<br />
** [http://cause07.org/ Caudate Segmentation Evaluation (2007)]<br />
** [http://sliver07.org Segmentation of the Liver (2007)]<br />
** [http://mbi.dkfz-heidelberg.de/grand-challenge2007/download/eval-src3.tar.gz Open source segmentation comparison tool]<br />
<br />
[[Category: Shape Analysis]] [[Category: Segmentation]] [[Category: Lesion]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:LocalStatisticalAnalysisViaPermutationTests&diff=52232Projects:LocalStatisticalAnalysisViaPermutationTests2010-05-11T20:24:30Z<p>Melonakos: </p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:UNC|UNC Algorithms]], [[Algorithm:Utah|Utah Algorithms]]<br />
__NOTOC__<br />
= Local Statistical Analysis via Permutation Tests =<br />
<br />
[[Image:UNCShape_CaudatePval_MICCAI06.png|thumb|right|300px|]]<br />
<br />
As part of NA-MIC, we have developed two statistical frameworks for shape analysis of 3D surfaces, both based on permutation tests. The first framework is useful for straightforward group-difference testing and computes differences locally for every surface point via the standard robust Hotelling T^2 two-sample metric. The second first computes a Generalized Linear Model (GLM) locally, followed by a MANCOVA-based metric computation for group-difference testing; it also allows the computation of correlation statistics with the GLM-fitted shape coordinates. In both cases, our tools provide statistical p-value maps, both raw and corrected for multiple comparisons, as well as mean difference magnitude and vector maps, group covariance maps, correlation maps, and z-score maps.<br />
<br />
= Description =<br />
<br />
== Shape Statistics: Permutation tests ==<br />
<br />
The local shape analysis involves testing from a few to many thousands of hypotheses (one per surface point) for statistically significant effects. At each surface point, we perform a separate statistical test that analyzes either the local coordinate (in a general group-difference shape study) or the change difference vector (in a longitudinal or twin study). Here we present methods for computing a) the local shape metric and b) the global correction for performing multiple (local) tests. For the shape metric, two separate methods were developed for different use scenarios, depending on whether correction for, or correlation with, patient variables such as gender, age, or clinical scores is needed. <br />
<br />
The most common measure of multiple false positives is the familywise error rate (FWER). The multiple testing problem has been an active area of research in the functional neuroimaging community. One of the widely used methods in the analysis of neuroimaging data makes inferences based on the maximum distribution. Common to both testing frameworks is the use of non-parametric permutation tests to estimate uniformly sensitive extrema distributions. The correction method is based on first computing the local p-values using permutation tests. The minimum of these p-values across the surface is then computed for every permutation. The appropriate corrected p-value at level α can then be obtained by computing the value at the α-quantile in the histogram of these minimum values. Using the minimum statistic of the p-values, this method correctly controls the FWER, or the false positives, but provides no control of the false negatives. The resulting corrected local significance values can thus be regarded as pessimistic estimates akin to a simple Bonferroni correction.<br />
<br />
In addition to the non-parametric permutation correction, we have also implemented and applied a False Discovery Rate (FDR) estimation method. The innovation of this procedure is that it controls the expected proportion of false positives only among those tests for which a local significance has been detected. The FDR method thus allows an expected proportion (usually 5%) of the FDR-corrected significance values to be false positives. The correction using FDR provides an interpretable and adaptive criterion with higher power than the non-parametric permutation tests. FDR thus results in a less conservative estimate of the false negatives.<br />
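Both corrections can be sketched compactly. The following numpy sketch is illustrative only: `minp_fwer_threshold` implements the minimum-p-value permutation correction described above, and `bh_fdr` a standard Benjamini-Hochberg procedure, which may differ in detail from the FDR variant used; the synthetic p-values and all names are hypothetical.<br />

```python
import numpy as np

def minp_fwer_threshold(null_pvals, alpha=0.05):
    """FWER correction via the permutation distribution of the minimum
    p-value over the surface; null_pvals has shape (n_perm, n_points).
    Local p-values below the returned threshold are significant."""
    min_p = null_pvals.min(axis=1)
    return np.quantile(min_p, alpha)

def bh_fdr(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure; returns a boolean mask of
    rejected (significant) tests at FDR level q."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m
    k = np.nonzero(passed)[0].max() + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

# Synthetic surface map: 1000 p-values, the first 50 truly significant
rng = np.random.default_rng(0)
p_obs = rng.uniform(size=1000)
p_obs[:50] = rng.uniform(0.0, 1e-4, size=50)
null_p = rng.uniform(size=(500, 1000))    # p-values under permuted labels
fwer_sig = p_obs < minp_fwer_threshold(null_p)
fdr_sig = bh_fdr(p_obs)
```

On the synthetic data, the FWER threshold rejects a subset of what BH-FDR rejects, matching the characterization above of the permutation correction as the more pessimistic and FDR as the more adaptive of the two.<br />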
<br />
== Hotelling T^2 metric ==<br />
<br />
For group mean difference testing of balanced, well-controlled groups, our original testing framework is best suited. Here we compute locally the standard robust Hotelling T^2 two-sample metric, then estimate p-values via standard permutation tests, and correct for multiple comparisons via FDR and permutation tests of the p-value extrema distributions.<br />
<br />
== GLM & MANCOVA ==<br />
<br />
When correlation statistics are needed, or for unbalanced or less well-controlled groups, the novel GLM & MANCOVA statistical framework should be employed. This method first computes a Generalized Linear Model (GLM), followed by a MANCOVA analysis and p-value computation and correction via the same permutation approach mentioned above.<br />
The main MANCOVA metric is the Hotelling trace, but other standard MANCOVA metrics can be used (such as Roy's maximum root). In addition to the permutation-based p-values, F-test based significance maps are generated. The MANCOVA-based testing framework also contains facilities for z-score testing and correlation testing.<br />
<br />
= Key Investigators =<br />
<br />
* UNC Algorithms: Martin Styner, Ipek Oguz, Marc Niethammer, Marc Macenko, Beatriz Paniagua, Christine Shun Xu, Hongtu Zhu, Corentin Hamel<br />
* Utah Algorithms: Guido Gerig<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display?search=LocalStatisticalAnalysisViaPermutationTests&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Local Statistical Analysis via Permutation Tests]<br />
<br />
= Links =<br />
* [[Projects:ShapeAnalysisFrameworkUsingSPHARMPDM | Main SPHARM shape analysis framework ]]<br />
* [[NA-MIC/Projects/Structural/Shape_Analysis/ShapeStatisticsWithPermTestCorrectionAndFDR|ITK statistical analysis using non-parametric permutation analysis and false discovery rate]]<br />
* [http://www.nitrc.org/projects/spharm-pdm NITRC page for SPHARM PDM toolbox]<br />
* [http://www.nitrc.org/projects/shape_mancova NITRC page for Shape analysis via GLM & MANCOVA]<br />
<br />
Project Week Results: [[media:2006_06_PW_StatAnal.ppt|Jun 2006]]<br />
<br />
[[Category: Statistics]] [[Category:Shape Analysis]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:ShapeAnalysisFrameworkUsingSPHARMPDM&diff=52231Projects:ShapeAnalysisFrameworkUsingSPHARMPDM2010-05-11T20:24:05Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:UNC|UNC Algorithms]], [[Algorithm:Utah|Utah Algorithms]]<br />
__NOTOC__<br />
= Shape Analysis Framework using SPHARM-PDM =<br />
<br />
[[Image:UNCShape_OverviewAnalysis_MICCAI06.gif|thumb|300px|]]<br />
<br />
The UNC shape analysis is based on an analysis framework for objects with spherical topology, described by sampled spherical harmonics (SPHARM-PDM). In summary, the input of the proposed shape analysis is a set of binary segmentations of a single brain structure, such as the hippocampus or caudate. These segmentations are first processed to fill any interior holes, followed by a minimal smoothing operation. The processed binary segmentations are converted to surface meshes, and a spherical parametrization is computed for the surface meshes using an area-preserving, distortion-minimizing spherical mapping. The SPHARM description is computed from the mesh and its spherical parametrization. Using the first-order ellipsoid from the spherical harmonic coefficients, the spherical parametrizations are aligned to establish correspondence across all surfaces. The SPHARM description is then sampled into triangulated surfaces (SPHARM-PDM) via icosahedron subdivision of the spherical parametrization. These SPHARM-PDM surfaces are all spatially aligned using rigid Procrustes alignment. Group differences between groups of surfaces are computed using the standard robust Hotelling T^2 two-sample metric. An alternative testing framework first applies a Generalized Linear Model and performs testing via a MANCOVA-based test statistic. Statistical p-values, both raw and corrected for multiple comparisons, result in significance maps. Additional visualization of the group tests is provided via mean difference magnitude and vector maps, as well as maps of the group covariance information.<br />
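The rigid Procrustes step in the pipeline above has a closed-form solution via the SVD (the Kabsch construction). A minimal numpy sketch on synthetic corresponding point sets; the names and data are hypothetical and this is not the toolkit's API:<br />

```python
import numpy as np

def rigid_procrustes(src, dst):
    """Least-squares rigid (rotation + translation) alignment of two
    point sets with known correspondence, via the SVD/Kabsch construction."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    h = (src - mu_s).T @ (dst - mu_d)     # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = mu_d - r @ mu_s
    return r, t

# Recover a known rotation/translation from corresponding surface points
rng = np.random.default_rng(0)
pts = rng.standard_normal((642, 3))       # e.g. an icosahedron-subdivision-sized point set
theta = 0.3
r_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
moved = pts @ r_true.T + np.array([1.0, -2.0, 0.5])
r, t = rigid_procrustes(pts, moved)
```

Because SPHARM-PDM establishes point-to-point correspondence before alignment, this least-squares rotation and translation can be recovered in closed form, with the determinant-sign correction preventing a reflection from sneaking into the fit.<br />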
<br />
The implementation has reached a stable state and has been disseminated to several collaborating labs within NAMIC (BWH, GeorgiaTech, Utah) and to many other labs outside of NAMIC.<br />
<br />
= Description =<br />
<br />
A considerable amount of work was spent on the development aspect of our shape analysis tools. The main visualization tool, KWMeshVisu, can be called directly from Slicer 3 for the overlay of scalar, vector and ellipsoid data onto surfaces (so-called attribution) and the application of a set of versatile colormaps. A saved "attributed" surface can then be displayed again within Slicer 3 to close the loop. This lean visualization tool fills a niche and is also used in our cortical thickness analysis tool. <br />
<br />
The individual shape analysis components have also been integrated into Slicer 3 modules. While it is entirely possible to run all steps of our shape analysis pipeline by calling the individual modules, this is highly inefficient. As a result, we are developing a separate shape pipeline tool to be called from within Slicer3; a first prototype is ready. This tool creates pipeline scripts for Batchmake, another NAMIC-supported project at Kitware, to run the shape analysis pipeline as a distributed background process. The whole shape analysis pipeline thus becomes entirely encapsulated and accessible to the trained clinical collaborator.<br />
<br />
In addition, a novel [[ Projects:LocalStatisticalAnalysisViaPermutationTests | statistical analysis]] was incorporated that allows controlling for patient covariates (such as gender and age) via a Generalized Linear Model and a MANCOVA-based testing framework. This also enables testing of interactions between shape and continuous patient variables such as test scores. This tool is included in the current release, and a corresponding Insight Journal article has been published.<br />
<br />
Currently, we are improving the embedding of our tools in Slicer, specifically the functionality for the use of XNAT to access the input data and store the results implemented directly in a Slicer module for the SPHARM-PDM shape analysis.<br />
<br />
The current toolset distribution (via [http://www.ia.unc.edu/dev NeuroLib]) now also contains open data for other researchers to evaluate novel shape analysis enhancements.<br />
<br />
= Key Investigators =<br />
* UNC Algorithms: Martin Styner, Ipek Oguz, Marc Niethammer, Beatriz Paniagua, Hongtu Zhu<br />
* Utah Algorithms: Guido Gerig<br />
<br />
= Publications =<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display/?search=ShapeAnalysisFrameworkUsingSPHARMPDM&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&searchbytag=checked&sponsors=checked| NA-MIC Publications Database on Shape Analysis Framework using SPHARM-PDM]<br />
<br />
= Links =<br />
<br />
* [http://www.nitrc.org/projects/spharm-pdm NITRC SPHARM PDM page] <br />
* [[ Projects:LocalStatisticalAnalysisViaPermutationTests | Statistical shape testing framework]]<br />
* [[Engineering:Project:Shape_Analysis|Shape Analysis]]<br />
* [[Engineering:Project:UNC_Shape_AnalysisLONI_pipeline|Shape Analysis LONI pipeline]]<br />
* [[Engineering:Project:2006_AHM_Programming:MeshVisu|MeshVisu]]<br />
* [[AHM_2006:ProjectsUNCLoniShap|AHM2006 - LONI Shape]]<br />
* [[NA-MIC/Projects/Structural/Shape_Analysis/FemaleSPDCaudates|Female SPD Caudates]]<br />
* [[Algorithm:GATech:Multiscale_Shape_Analysis|UNC shape analysis with Spherical Wavelet Features]]<br />
<br />
Project Week Results: [[media:ProgWeek05ProjectDescShapeDesc.ppt|Jun 2005-1]], [[media:ProgWeek05ProjectDescStatShapeAnalFrame.ppt|Jun 2005-2]], [[media:ProgWeek05ProjectDescLONI.ppt|Jun 2005-3]], [[media:2006_AHM_Programming_Half_week_MeshVisu.ppt|Jan 2006-1]], [[media:2006_AHM_Programming_Half_week_UNCShapeLONI.ppt|Jan 2006-2]], [[media:2006_06_PW_female_SPD.ppt|Jun 2006]], [[media:2007_Project_Half_Week_ShapeAnalysis_WithSphericalWavelets.ppt|Jan 2007-1]], [[media:2007_AHM_Programming_Half_week_MeshVisu.ppt|Jan 2007-2]]<br />
<br />
[[Category:Shape Analysis]] [[Category:Slicer]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:CorticalCorrespondenceWithParticleSystem&diff=52230Projects:CorticalCorrespondenceWithParticleSystem2010-05-11T20:23:44Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:UNC|UNC Algorithms]], [[Algorithm:Utah|Utah Algorithms]]<br />
__NOTOC__<br />
= Cortical Correspondence with Particle Systems =<br />
<br />
[[Image:Sulcaldepth.png|thumb|300px|]]<br />
<br />
In this project, we want to compute cortical correspondence on populations, using various features such as cortical structure, DTI connectivity, and vascular structure. This presents a challenge because of the highly convoluted surface of the cortex, as well as because of the differing properties of the data features we want to incorporate together.<br />
<br />
= Description =<br />
<br />
[[Image:CorticalCorrespondenceConnectivityIPMI_scheme.png|thumb|300px|Schematic processing pipeline]]<br />
<br />
We use a particle-based, entropy-minimizing system to compute correspondence in a population-based manner. This method best suits our needs since parameterization-based methods, such as MDL or SPHARM, require a spherical parametrization of the surface, which is hard to obtain for the highly convoluted cortical surface. Another advantage of the particle-based correspondence technique is that it does not require the surface to be of spherical topology; since the brain cortex is not of spherical topology, this means much less pre-processing for our method. Another strength of this method is that it would (eventually) enable correspondence computation on the subcortical structures and on the cortical surface within the same framework. We would also like to explore correspondence on the cerebellum, which is traditionally excluded from such studies (e.g., in FreeSurfer-based work).<br />
<br />
[[Image:CorticalCorrespondenceIPMI.png|thumb|300px|Impact of the brain deflation algorithm on surface connectivity values, using brainstem connectivity as an example. The noisy tracking around the temporal lobe is reflected in the connectivity map computed by simple averaging (without deflation). The surface deflation method ignores the noisy signal and yields a more accurate connectivity map. Note how strongly the averaging method depends on sulcal depth, illustrated in the highlighted regions.]]<br />
<br />
The main disadvantage of using the particle-based correspondence technique on the brain cortex is that it assumes the particles lie on local tangent planes, which is problematic for the highly convoluted cortical surface. We propose to overcome this difficulty by first 'inflating' the cortex surface. This way, we obtain a less convoluted, sphere-like surface on which the particles interact. However, we need a one-to-one correspondence between the original cortex surface and the inflated surface, since the data to be used for correspondence, such as the curvature and the vascular data, live on the original cortex surface. FreeSurfer offers a method that minimizes the distance distortion in the mapping while also smoothing the surface. FreeSurfer also preprocesses the input surface to generate a spherical topology. <br />
<br />
<br />
Once the framework for computing the correspondence given certain data features<br />
is established, the major challenge is to incorporate the various data forms<br />
that we would like to use together. We are using structural data as well as<br />
connectivity (DTI). We have first tested our method using structural metrics, namely, sulcal depth (as computed by FreeSurfer), and have demonstrated improved correspondence quality compared to traditional, location-only correspondence (using the particle-based entropy framework), and we have shown that our results are at least comparable to FreeSurfer. This comes as no surprise as we had already shown in [[Projects:PopulationBasedCorrespondence|our previous studies]] that correspondence can be enhanced by using local curvature in addition of point locations for objects with complex geometry. Next, we have developed a technique for representing the white matter connectivity information on the cortical surface in a manner that can be incorporated into the correspondence framework. We use probabilistic connectivity maps obtained by performing stochastic tractography (which is a [[Projects:DTIStochasticTractography | separate NA-MIC project]]) from various ROI's. The top left part on the image to the right shows this connectivity map overlaid with a coronal slice. The blue outline shows the cortical boundary. Note that the connectivity values on this surface are basically a function of sulcal depth, as illustrated on the far right. We use a cortex deflating scheme to overcome these problems. An example of this deflation process is shown on the bottom row of the figure. The connectivity values on the deflated surface provides a more accurate representation of the DTI data on the surface. We have found that using the connectivity maps in addition to sulcal depth and spatial location further improves correspondence quality.<br />
<br />
= Publications =<br />
<br />
'' In Print ''<br />
<br />
''Published in LNCS/IPMI: Information processing in medical imaging''<br />
* [http://www.na-mic.org/publications/pages/display?search=Projects%3ACorticalCorrespondenceWithParticleSystem&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&searchbytag=checked&sponsors=checked| NA-MIC Publication Database on Cortical Correspondence using Particle System]<br />
<br />
= Key Investigators =<br />
* UNC Algorithms: Ipek Oguz, Martin Styner<br />
* Utah Algorithms: Josh Cates, Tom Fletcher, Ross Whitaker<br />
<br />
Project Week Results: [[Summer_Project_Week_Slicer3_Cortical_Thickness_Pipeline|June 2009]]<br />
<br />
[[Category:fMRI]] [[Category:Shape Analysis]] [[Category: Diffusion MRI]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:ShapeAnalysis&diff=52229Projects:ShapeAnalysis2010-05-11T20:21:51Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],<br />
__NOTOC__<br />
= Shape based Population Analysis =<br />
<br />
In contrast to shape-based segmentation that utilizes a statistical model of the shape variability in one population (typically based on the Principal Component Analysis), we are interested in identifying and characterizing differences between two sets of shape examples. We use the discriminative framework to characterize the differences in shape by training a classifier function and studying its sensitivity to small perturbations in the input data. An additional benefit of employing the classification approach is that the resulting classifier function can be used to label new examples into one of the two populations, e.g., for early detection in population screening or prediction in longitudinal studies. We estimate the expected accuracy of the classifier in a jackknife procedure. We have also adapted a non-parametric permutation test to the classification setting to estimate the statistical significance of the detected differences and the observed classification accuracy.<br />
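The jackknife accuracy estimate and label-permutation test described above can be sketched as follows. A hypothetical nearest-class-mean classifier stands in for the actual trained classifier function; all names here are illustrative, and this is not the project's implementation:

```python
import numpy as np

def loo_accuracy(features, labels):
    """Leave-one-out (jackknife) accuracy of a simple
    nearest-class-mean classifier over two populations (labels 0/1)."""
    n = len(labels)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i
        x, y = features[mask], labels[mask]
        means = [x[y == c].mean(axis=0) for c in (0, 1)]
        dists = [np.linalg.norm(features[i] - m) for m in means]
        correct += int(np.argmin(dists) == labels[i])
    return correct / n

def permutation_p_value(features, labels, n_perm=1000, seed=0):
    """Non-parametric significance: the fraction of random label
    permutations whose jackknife accuracy meets or exceeds the observed one."""
    rng = np.random.default_rng(seed)
    observed = loo_accuracy(features, labels)
    count = sum(
        loo_accuracy(features, rng.permutation(labels)) >= observed
        for _ in range(n_perm)
    )
    return observed, (count + 1) / (n_perm + 1)
```

A small permutation p-value indicates that the observed classification accuracy is unlikely to arise if the two populations were interchangeable.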
<br />
= Description =<br />
<br />
Study of Hippocampal Shape in Schizophrenia.<br />
<br />
[[Image:HippocampalShapeDifferences.gif|thumb|left|400px|Detected shape differences in the study of the hippocampus in schizophrenia. The differences are represented as a deformation of a normal hippocampus (from blue - inward deformation, to green - no deformation, to red - outward deformation).]]<br />
<br />
<br clear="all" /><br />
<br />
[[Image:HippocampalShapeColorbar.jpg|left|200px]]<br />
<br />
<br clear="all" /><br />
''Recent Update''<br />
<br />
We have also started exploring alternative, surface-based descriptors for objects of spherical topology based on hierarchical decomposition of the surface functions into an over-complete basis. This work is still preliminary, but promises to improve our ability to detect and characterize subtle differences in the shape of anatomical structures due to diseases such as schizophrenia.<br />
<br />
''Structural Constellations for Population Analysis'' <br />
<br />
We investigate a framework where global properties of structural constellations in medical images, i.e., the ''configuration and size'' of multiple anatomical units, can be employed for population analysis of anatomical variability. The method takes advantage of the fact that cross-subject correspondence of certain structures is relatively well established. This is in contrast with the majority of today's morphology studies, which rely on local correspondence and/or employ a large number of features from each subject. Moreover, the representations we use can be interpreted in meaningful terms of global anatomy, allowing for the potential use of the analysis for exploring the pathology of neurodegenerative diseases like schizophrenia. Here, we demonstrate that with a small number of measurements per subject, one can achieve good separation between schizophrenics and matched controls. Our experiments indicate that the locations of certain structures can capture discriminative information which may not be available through volumetric (size) measurements. For example, we find that the normalized sagittal position of the parahippocampal gyrus is significantly different between schizophrenics and controls. As an example, we also employ a linear Support Vector Machine to achieve up to 85% leave-one-out classification accuracy.<br />
<br />
''Software''<br />
<br />
* Stand alone code for training a classifier, jackknifing and permutation testing.<br />
* Current plans include integration within the shape analysis pipeline, in collaboration with [[Algorithm:UNC|UNC (Martin Styner)]].<br />
* We are currently also porting the software into ITK.<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
<br />
*[http://www.na-mic.org/publications/pages/display?search=Projects%3AShapeAnalysis&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Population Analysis of Anatomical Variability]<br />
<br />
[[Category:Shape Analysis]] [[Category:Schizophrenia]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:DTIStochasticTractography&diff=52228Projects:DTIStochasticTractography2010-05-11T20:21:28Z<p>Melonakos: </p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:DiffusionImageAnalysis|NA-MIC Collaborations]], [[NAC| NAC Algorithms]], [[Algorithm:MIT|MIT Algorithms]], [[DBP1:Harvard|Harvard DBP1]]<br />
__NOTOC__<br />
= DTI Stochastic Tractography =<br />
<br />
Stochastic Tractography is a Bayesian approach to estimating nerve fiber tracts from DWMRI (Diffusion Weighted Magnetic Resonance Imaging) images. The Bayesian framework provides a measure of confidence regarding the estimated tracts. This measure of confidence allows the algorithm to generate tracts which pass through regions with uncertain fiber directions, revealing more details about structural connectivity than non-Bayesian tractography algorithms.<br />
<br />
= Description =<br />
<br />
Magnetic Resonance Imaging (MRI) is a valuable imaging modality for studying the brain in vivo. We can use MRI to differentiate between tissue types, which is valuable for anatomical studies. However, anatomical MRI provides a homogeneous image of white matter, making it difficult to characterize the white matter fiber tracts which pass through this region. Diffusion Weighted Magnetic Resonance Imaging (DWMRI) provides information about the diffusion of water molecules in the brain. DWMRI images can be used to construct a DTI data set which provides a complete description of water diffusion.<br />
<br />
Researchers have hypothesized that white matter abnormalities may underlie some neurological conditions. For instance, the neurological disease schizophrenia is characterized by its behavioral symptoms, which include auditory hallucinations, disordered thinking and delusions [6]. Studies have suggested that these behavioral symptoms are connected with the neuroanatomical abnormalities observed in schizophrenia patients. Using DTI, researchers can noninvasively investigate the relationship between brain white matter abnormalities and schizophrenia.<br />
<br />
We can visualize DTI data sets using a number of methods. DTI provides information about the diffusion of water at each voxel (volume element) in the form of diffusion tensors. A popular technique to visualize these diffusion tensors is to draw fiber tracts which utilize the diffusion information across many voxels. This technique is known as DTI White Matter Tractography.<br />
<br />
One possible method to perform tractography is to draw tracts which are oriented along the direction of maximal water diffusion of the voxels they pass through [1]. However, this method does not provide information about the uncertainty of the generated tracts due to noise or insufficient spatial resolution. Probabilistic white matter tractography addresses this problem by performing tractography under a probabilistic framework and provides a metric for assessing the uncertainty of generated fiber tracts. Several mathematical formulations of probabilistic tractography have existed for some time, the earliest being that of Behrens et al. [3].<br />
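As a toy illustration of the probabilistic idea (not Behrens's actual model or this project's algorithm), one can sample perturbed propagation directions around the local principal diffusion direction and accumulate, per voxel, the fraction of sampled streamlines that visit it:

```python
import numpy as np

def sample_direction(mean_dir, concentration, rng):
    """Draw a perturbed unit vector around mean_dir (a crude stand-in
    for sampling a posterior over fiber orientation)."""
    v = mean_dir + rng.normal(0.0, 1.0 / np.sqrt(concentration), 3)
    return v / np.linalg.norm(v)

def visit_counts(principal_dirs, seed_vox, n_samples=500, n_steps=40,
                 step=0.5, concentration=50.0, rng_seed=0):
    """Monte Carlo visit map over a (X, Y, Z, 3) field of principal
    directions: fraction of sampled streamlines entering each voxel."""
    rng = np.random.default_rng(rng_seed)
    counts = np.zeros(principal_dirs.shape[:3])
    shape = np.array(counts.shape)
    for _ in range(n_samples):
        pos = np.array(seed_vox, dtype=float)
        visited = set()
        for _ in range(n_steps):
            vox = tuple(np.floor(pos).astype(int))
            if np.any(np.array(vox) < 0) or np.any(np.array(vox) >= shape):
                break  # streamline left the volume
            visited.add(vox)
            d = sample_direction(principal_dirs[vox], concentration, rng)
            pos = pos + step * d
        for vox in visited:
            counts[vox] += 1
    return counts / n_samples
```

The resulting visit fractions play the role of a connectivity probability map: voxels consistently reached from the seed get values near 1, voxels reached only by improbable paths get values near 0.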
<br />
Ultimately the success of the algorithm will depend on its use in the research community. To this end, we have created a complete user interface to support the algorithm. This interface will be integrated into the popular 3D Slicer medical image visualization program. Additionally, the algorithm will be implemented within the ITK medical image analysis toolkit. ITK provides a standardized programming interface for a large collection of medical image processing algorithms which enable application developers to quickly incorporate the algorithms into new applications.<br />
<br />
''Progress''<br />
<br />
Here we estimate the distribution of Tract-Average FA and tract lengths (in mm) for tracts which originate from the right internal capsule and progress toward the prefrontal cortex.<br />
<br />
[[Image:ConnectivityMap.png|thumb|left|400px|Connectivity Probability Map. Colors indicate the probability that a voxel is connected to the right internal capsule (solid magenta) via a fiber tract which progresses towards the frontal cortex. Yellow indicates lower probability while blue is high probability of connection via these fibers.]]<br />
<br clear="all" /><br />
[[Image:FAdistribution.png|thumb|left|400px|Distribution of Tract-Average FA values]]<br />
[[Image:TractLengthDistribution.png|thumb|left|400px|Distribution of Tract Lengths]]<br />
[[Image:JointFAdistribution.png|thumb|left|400px|Joint Distribution of Tract-Average FA values and Tract Lengths.]]<br />
<br clear="all" /><br />
<br />
''NAMIC Software''<br />
<br />
A 3D Slicer module has been created using the command line module interface introduced in 3D Slicer version 3.<br />
<br clear="all" /><br />
[[Image:Slicer3CML.png|thumb|left|200px|Slicer 3 interface for Stochastic Tractography ITK Filter]]<br />
<br clear="all" /><br />
<br />
The software developed in this project includes:<br />
<br />
* New multithreaded ITK Filter (itkStochasticTractographyFilter)<br />
* 3D Slicer Command Line Module<br />
** Allows the algorithm to be executed without 3D Slicer.<br />
<br />
= Key Investigators =<br />
<br />
* MIT: Tri Ngo, Polina Golland <br />
* BWH/Harvard: NAC: Carl-Fredrik Westin<br />
* BWH/Harvard: DBP1: Marek Kubicki<br />
* Kitware: Brad Davis<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
<br />
*[http://www.na-mic.org/publications/pages/display?search=Projects%3ADTIStochasticTractographyClinical&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database - Stochastic Tractography Clinical Applications]<br />
<br />
*[http://www.na-mic.org/publications/pages/display?search=Projects%3ADTIStochasticTractographyAlgorithms&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database - Stochastic Tractography Algorithms Development]<br />
<br />
= Links =<br />
<br />
* Brigham and Women's Hospital. 3d slicer medical visualization and processing environment for research. http://www.slicer.org/.<br />
* Insight Software Consortium. National library of medicine insight segmentation and registration toolkit(itk). http://www.itk.org/.<br />
<br />
Project Week Results: [[Media:2007_Project_Half_Week_StochasticTractography.ppt|Jan 2007]]<br />
<br />
[[Category:Diffusion MRI]] [[Category:Schizophrenia]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:DTISegmentation&diff=52227Projects:DTISegmentation2010-05-11T20:21:01Z<p>Melonakos: </p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:DiffusionImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]], [[DBP1:Harvard|Harvard DBP1]], [[Engineering:UCLA|UCLA Engineering]]<br />
__NOTOC__<br />
=DTI Segmentation =<br />
<br />
Recent work shows that diffusion tensor imaging (DTI) can help resolve thalamic nuclei based on the characteristic fiber orientation of the corticothalamic/thalamocortical striations within each nucleus. In this project we develop a novel segmentation method based on spectral clustering.<br />
<br />
With the new segmentation methods, we can resolve the organization of the thalamic nuclei into groups and subgroups solely based on the voxel affinity matrix, avoiding the need for explicitly defined cluster centers. The identification of nuclear subdivisions can facilitate localization of functional activation and pathology to individual nuclear subgroups.<br />
<br />
Further, our methods do not make any assumptions specific to the thalamus, which could potentially allow us to extend our work to the segmentation of other gray matter structures.<br />
<br />
= Description =<br />
<br />
''Segmentation''<br />
<br />
We are using a modified spectral clustering algorithm to segment the thalamic data.<br />
<br />
[[Image:Thalamus_algo_outline.png|640px]]<br />
<br />
This image is a schematic outline of the spectral segmentation algorithm: (A) DTI data from an individual thalamic hemisphere, shown here as a single-slice cuboid map; (B) initial graph corresponding to the sparse affinity matrix; (C) unordered affinity matrix; (D) ordered and clustered affinity matrix; (E) clusters in the original data space; (F) clusters in 3D.<br />
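A generic normalized spectral clustering step on a voxel affinity matrix can be sketched as follows. This is a Ng-Jordan-Weiss style variant with a simple k-means, standing in for the modified algorithm described above; all names are illustrative:

```python
import numpy as np

def spectral_cluster(affinity, k, seed=0):
    """Normalized spectral clustering of a (symmetric, non-negative)
    voxel affinity matrix into k clusters."""
    d_inv_sqrt = 1.0 / np.sqrt(affinity.sum(axis=1))
    # Symmetrically normalized affinity: D^{-1/2} W D^{-1/2}
    L = affinity * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Embed each voxel using the top-k eigenvectors
    _, eigvecs = np.linalg.eigh(L)
    X = eigvecs[:, -k:]
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    # Simple k-means in the embedded space, farthest-point initialization
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(1, k):
        dist = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[dist.argmax()])
    centers = np.array(centers)
    for _ in range(50):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels
```

Reordering the affinity matrix by the resulting labels produces the block structure shown in panel (D) of the schematic.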
<br />
[[Image:Thalamus_results.png]]<br />
<br />
On the left is a 3D rendering of an expert segmentation of both hemispheres from one subject. On the right is the same subject segmented by the modified spectral clustering algorithm with 12 clusters. Clusters are colored according to their mean tensor orientations; similar colors therefore indicate similar mean tensor orientation.<br />
<br />
''Software''<br />
<br />
The algorithms are currently implemented in MATLAB.<br />
<br />
<br />
= Key Investigators =<br />
<br />
*MIT: Ulas Ziyan <br />
<br />
*UCLA Engineering: Jon Wisco<br />
<br />
*Harvard DBP1: Carl-Fredrik Westin<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
<br />
*[http://www.na-mic.org/publications/pages/display?search=Projects%3ADTISegmentation&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on DTI-based Segmentation]<br />
<br />
[[Category: Segmentation]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:fMRIDetection&diff=52226Projects:fMRIDetection2010-05-11T20:20:46Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:fMRIAnalysis|NA-MIC_Collaborations]], [[Algorithm:MIT|MIT Algorithms]], [[DBP1:Harvard|Harvard DBP1]]<br />
__NOTOC__<br />
= fMRI Detection =<br />
<br />
Validation is known as one of the most challenging problems in fMRI analysis, since the ground truth activation is unknown. In this work, we evaluate the proposed fMRI analysis methods with respect to their ability to detect activations from reduced-length time courses. The following flow chart depicts our evaluation process. We compare the detection results, obtained from partial-length time courses, to the pseudo ground truth activation map, created by majority voting over the GLM detection results from four full-length sessions (all 17 epochs) in fMRI experiments for the same subject. No smoothing is performed while creating the pseudo ground truth activation maps.<br />
<br />
<br />
[[Image:FMRIEvaluationchart.jpg|thumb|center|400px|This flow chart outlines the validation procedure using real fMRI data.]]<br />
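The majority-voting step that produces the pseudo ground truth can be sketched as follows, assuming binary per-session detection maps and a strict-majority rule (our assumptions for illustration; the function name is ours):

```python
import numpy as np

def majority_vote(maps):
    """Pseudo ground truth activation map: a voxel is marked active
    if it is active in a strict majority of the session-wise binary
    GLM detection maps."""
    maps = np.asarray(maps)
    return (2 * maps.sum(axis=0) > len(maps)).astype(int)
```

With four sessions, a strict majority means at least three sessions must agree for a voxel to be labeled active; ties count as inactive.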
<br />
Our comparison of the detectors yields similar results across runs and across subjects. We present the results in coronal view for one subject, for all detectors, in the following figures.<br />
<br />
{|<br />
|[[Image:Gt_sm007ep3_Sn36.tif|thumb|152px|Pseudo Ground truth activation]]<br />
|[[Image:Glm_sm007ep3_Sn36.tif|thumb|152px|Detection result without spatial regularization]]<br />
|[[Image:Gau_sm007ep3_Sn36.tif|thumb|152px|Detection result with Gaussian smoothing]]<br />
|[[Image:Mf_sm007ep3_Sn36.tif|thumb|152px|Detection result with MRF spatial regularization]]<br />
|[[Image:Mfseg_sm007ep3_Sn36.tif|thumb|152px|Detection result with anatomically-guided MRF spatial regularization]]<br />
|}<br />
<br />
Without spatial regularization, the GLM detector's activation map is more fragmented due to the loss in statistical power from reducing the length of the signals. The other two images illustrate the results of applying GLM with the MRF priors, as well as its anatomically-guided version. MRF regularization is able to capture activations with elongated spatial structures. This highlights the potential benefit of using the Markov priors in fMRI detection. Furthermore, anatomically-guided MRF produces activation maps that closely follow the highly folded cortical sheet. The MRF model benefits from using anatomical information to remove spurious activations. Our experiments demonstrate that employing anatomically-guided MRF spatial regularization leads to high detection accuracy from time courses of substantially reduced length.<br />
<br />
<br /><br />
<br />
= Description =<br />
<br />
<br />
We study Markov Random Fields (MRF) as spatial smoothing priors in fMRI detection. In this work, we investigate fast approximate inference algorithms for using MRFs in fMRI detection, propose a novel way to incorporate anatomical information into the detection framework, validate the methods through ROC analysis on simulated data and demonstrate their application in a real fMRI study. The following figures illustrate the detection results from phantom data by showing one axial slice of the estimated activation map using our proposed methods.<br />
<br />
{|<br />
|[[Image:Pattern10.tif|thumb|150px|Ground truth activation]]<br />
|[[Image:Glm_p10n0.tif|thumb|150px|Detection result without spatial regularization]]<br />
|[[Image:Gau_p10n0.tif|thumb|150px|Detection result with Gaussian smoothing]]<br />
|[[Image:Mf_p10n0.tif|thumb|150px|Detection result with MRF spatial regularization]]<br />
|[[Image:Mfseg_p10n0.tif|thumb|150px|Detection result with anatomically-guided MRF spatial regularization]]<br />
|}<br />
<br />
The detection results are obtained with thresholding at a 0.5% false positive rate. Yellow pixels indicate true positives, red pixels indicate false positives, and green pixels indicate false negatives.<br />
<br />
<br /><br />
<br />
''Implementation''<br />
<br />
<br /> Cosman [1] demonstrated the potential benefit of using a binary MRF as a spatial regularizer for fMRI detection. With binary states, an exact solution can be obtained in polynomial time. However, if one wants to go beyond binary states (e.g., treating positively and negatively activated voxels differently), the problem of estimating the optimal activation states becomes intractable and approximation algorithms must be used. Our work begins by adopting the Mean Field solver for the approximate MRF solution. The following graph depicts the corresponding graphical model. A detailed derivation of the Mean Field solver can be found in [2]. In our experiments, the Mean Field algorithm produced results comparable to those of the exact solver while reducing computation time by one to two orders of magnitude.<br />
<br />
[[Image:MRFnoSegM.fig2ps.tmp.jpg|thumb|left|200px||Graphical model for MRF. Xi and Zi denote activation state and voxel-by-voxel fMRI statistics of voxel i, respectively. Xi is the hidden variable, while Zi is the noisy observation.]]<br />
<br />
<br clear="all"/><br />
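The mean-field update for a binary (active/inactive) MRF can be sketched on a 2D slice as follows. This is a generic Potts-style formulation with synchronous updates, given as an illustration; the actual solver used in the project may differ:

```python
import numpy as np

def mean_field_mrf(log_lik, beta=1.0, n_iter=20):
    """Mean-field approximation for a binary (active/inactive) MRF.

    log_lik: (H, W, 2) per-voxel log-likelihoods of the fMRI statistic
    under the inactive (0) and active (1) states.
    Returns q, the (H, W, 2) approximate posterior over states.
    """
    # Initialize beliefs from the likelihood alone
    q = np.exp(log_lik - log_lik.max(axis=-1, keepdims=True))
    q /= q.sum(axis=-1, keepdims=True)
    for _ in range(n_iter):
        # Sum of neighboring beliefs (4-connectivity, zero-padded borders)
        nb = np.zeros_like(q)
        nb[1:, :] += q[:-1, :]
        nb[:-1, :] += q[1:, :]
        nb[:, 1:] += q[:, :-1]
        nb[:, :-1] += q[:, 1:]
        # Mean-field update: data term plus Potts smoothness term
        logits = log_lik + beta * nb
        q = np.exp(logits - logits.max(axis=-1, keepdims=True))
        q /= q.sum(axis=-1, keepdims=True)
    return q
```

The smoothness weight beta plays the role of the MRF coupling: larger values pull isolated voxels toward the state of their neighbors, which is what removes fragmented false detections.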
<br />
<br /> We further refine MRF spatial regularization by incorporating anatomical information. Similarly to segmentation, where a probabilistic atlas serves as a spatially varying prior on the tissue types, the anatomical information can provide a prior on the activation map. The following figure illustrates the graphical model of the MRF with anatomical information. Intuitively speaking, we want the prior to reflect the fact that activation is much more likely to occur in gray matter than in white matter, and not at all in cerebrospinal fluid (CSF) or bone. In addition, the spatial coherency of activation is strong within each tissue and not across tissue boundaries.<br />
<br />
[[Image:MRFSegM2.fig2ps.tmp.jpg|thumb|left|200px|Graphical model for MRF with anatomical information. Wi denotes the (potentially noisy) segmentation label of voxel i. Ui is the combination of the activation state and the true tissue type of voxel i. Wi and Zi are noisy observations, and Ui is the hidden variable.]]<br />
<br />
<br clear="all"/><br />
<br />
<br /> In our experiments, we compared our proposed spatial regularization methods, MRF solved by the Mean Field algorithm and MRF with anatomical information, with the most common spatial regularization method, Gaussian smoothing. We found that the MRF's detection rate is higher than that of Gaussian smoothing when the signal-to-noise ratio (SNR) of the data is relatively high. Incorporating anatomical information into either MRF or Gaussian smoothing further improves their performance regardless of the SNR level.<br />
<br />
''Software''<br />
<br />
MRF-based regularizers and anatomically-guided fMRI detection tools are being integrated into Slicer.<br />
<br />
<br />
= Key Investigators =<br />
<br />
* MIT: Wanmei Ou, Polina Golland<br />
* Harvard DBP1: Sandy Wells, Wendy Plesniak, Carsten Richter<br />
<br />
<br />
= Publications =<br />
''In Print''<br />
<br />
* [http://www.na-mic.org/publications/pages/display?search=Projects%3AfMRIDetection&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on fMRI Detection and Analysis]<br />
<br />
[[Category: fMRI]] [[Category: Slicer]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:DTIFiberRegistration&diff=52225Projects:DTIFiberRegistration2010-05-11T20:20:21Z<p>Melonakos: </p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:DiffusionImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]]<br />
__NOTOC__<br />
= Joint Registration and Segmentation of DWI Fiber Tractography =<br />
<br />
The purpose of this work is to jointly register and cluster DWI fiber tracts obtained from a group of subjects. We formulate a maximum likelihood problem that the proposed method solves within a generalized Expectation Maximization (EM) framework. Additionally, the algorithm employs an outlier rejection and denoising strategy to produce sharp probabilistic maps (an atlas) of certain bundles of interest. This atlas is potentially useful for making diffusion measurements in a common coordinate system to identify pathology-related changes or developmental trends.<br />
<br />
= Description =<br />
<br />
''Initial Registration''<br />
<br />
A spatial normalization is necessary to obtain a group-wise clustering of the resulting fibers. This initial normalization is performed on the Fractional Anisotropy (FA) images generated for each subject and aims to remove gross differences across subjects due to global head size and orientation. It is thus limited to a 9-parameter affine transformation that accounts for scaling, rotation, and translation. The resulting transformations are then applied to each of the computed fibers to map them into a common coordinate frame for clustering.<br />
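As a sketch of this normalization step, a 9-parameter affine built from per-axis scales, a rotation, and a translation can be applied directly to fiber points. The Python below is illustrative (the parameter values and function names are made up), not the registration code used here:

```python
import numpy as np

def affine_9param(scales, angles, translation):
    """Build a 9-parameter affine: per-axis scaling, rotation, translation."""
    sx, sy, sz = scales
    ax, ay, az = angles  # radians, composed as Rz @ Ry @ Rx
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0, 0, 1]])
    A = Rz @ Ry @ Rx @ np.diag([sx, sy, sz])
    return A, np.asarray(translation, float)

def warp_fiber(points, A, t):
    """Map an (N, 3) array of fiber points into the common frame."""
    return points @ A.T + t

# hypothetical parameters: anisotropic scaling, 90-degree in-plane rotation
A, t = affine_9param((1.1, 1.0, 0.9), (0.0, 0.0, np.pi / 2), (5.0, 0.0, 0.0))
fiber = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(warp_fiber(fiber, A, t))
```

Because the transform is linear, warping the fiber points directly is equivalent to resampling the tractography in the normalized FA space.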
<br />
<br />
[[Image:MIT_DTI_JointSegReg_ourapproach.jpg |thumb|400px|Our Approach]]<br />
<br />
''Initial Fiber Clustering''<br />
<br />
Organization of tract fibers into bundles, across the entire white matter, reveals anatomical connections such as the corpus callosum and corona radiata. By clustering fibers from multiple subjects into bundles, these common white matter structures can be discovered automatically, and the bundle models can be saved with expert anatomical labels to form an atlas. In this work, we take advantage of automatically segmented tractography that has been labeled (as bundles) with such an atlas for initialization.<br />
<br />
<br />
''Joint Registration and Segmentation''<br />
<br />
Once we obtain initial affine registration and clustering results using the high-dimensional atlas, we iteratively fine-tune both within a maximum likelihood framework, which is solved through a generalized EM algorithm. For the registration we use one set of affine parameters per fiber bundle, and combine these affine registrations into a single smooth and invertible warp field using a log-Euclidean poly-affine framework. Additionally, the algorithm employs an outlier rejection and denoising strategy while producing sharp probabilistic maps of certain bundles of interest.<br />
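The log-Euclidean poly-affine idea can be illustrated with a minimal sketch: each bundle's affine contributes its matrix logarithm to a blended stationary velocity field, which is then integrated to obtain a smooth, invertible warp. The fragment below (using SciPy's `logm`; the single-component example is hypothetical) is a sketch, not the paper's implementation:

```python
import numpy as np
from scipy.linalg import logm

def polyaffine_velocity(x, affines, weights):
    """Log-Euclidean poly-affine velocity at a point x.

    Each bundle's 4x4 homogeneous affine M contributes log(M), blended
    by (here pre-evaluated) spatial weights; integrating the resulting
    stationary velocity field gives a smooth, invertible warp.
    """
    xh = np.append(x, 1.0)  # homogeneous coordinates
    v = sum(w * (logm(M) @ xh) for w, M in zip(weights, affines))
    return np.real(v[:3])

def integrate(x, affines, weights, n_steps=64):
    """Simple forward-Euler integration of the velocity field."""
    x = np.asarray(x, float)
    for _ in range(n_steps):
        x = x + polyaffine_velocity(x, affines, weights) / n_steps
    return x

# one-component sanity check: integrating log(M) recovers the affine M
M = np.eye(4)
M[:3, 3] = [2.0, 0.0, 0.0]  # a pure translation
print(integrate(np.zeros(3), [M], [1.0]))  # approximately [2. 0. 0.]
```

With several components and smooth, normalized spatial weights, the blended field stays smooth across bundle boundaries while each bundle is driven by its own affine.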
<br />
We tested the registration component of this algorithm, without updating the clustering, on 26 major fiber bundles. The poly-affine warp fields, with a relatively limited number of components, resulted in registrations of similar quality to those of a benchmark non-linear registration algorithm run on FA images: <br />
<br />
{|<br />
|<br />
[[Image:FiberBundleReg.jpg|thumb|800px|Top Row: 3D renderings of the registered tracts of a subject (in green) and the template (in red) within 5mm of the central axial slice overlayed on the central FA slice of the template. ''Aff'' (left) stands for the FA based global affine, ''Dem'' (middle) for the demons algorithm and ''PA'' (right) for the proposed framework in this work. Arrows point to an area of differing qualities of registration. Overlapping of the red and green fibers is indicative of better registration. Bottom Row: Jacobian determinant images from the central slice of the volume: Yellow represents areas with small changes in size, and the shades of red and blue represent enlargement and shrinking, respectively. The Jacobian of the global affine registration is constant. The Jacobian of the demons algorithm is smooth due to the Gaussian regularization. The Jacobian of the new algorithm reflects the underlying anatomy because of the fiber bundle-based definition of the deformation.]]<br />
|<br />
|}<br />
<br />
<br />
Corpus Callosum, Cingulum and the Fornix were selected for further investigation because of the specific challenges they present. These three structures are in close proximity with each other, and that results in many mislabeled fibers when labeled using a high dimensional atlas (see figure below (left)). Their close proximity also results in a number of trajectories deviating from one structure to another. These are precisely the sorts of artifacts we wish to reduce through learning common spatial distributions of fiber bundles from a group of subjects.<br />
<br />
{|<br />
|<br />
[[Image:MIT_DTI_JointSegReg_beforeandafter.jpg|thumb|1000px|Tracts from Fornix (in green) and Cingulum (in purple) bundles along with a few selected tracts from Corpus Callosum (in black) as labeled using the high dimensional atlas (left) and after the EM algorithm with tract cuts (right). The tractography noise is evident in the images on the left as tracts deviating from one bundle to another. Also, these images contain instances where the high dimensional atlas failed to label the tracts correctly. The EM algorithm is able to remove the segments of tract bundles that are not consistent from subject to subject.]]<br />
|<br />
|}<br />
<br />
We also constructed two different atlases to compare the effects of labeling algorithms on the quality of resulting group maps. The first one is constructed using the initial labels from the high dimensional atlas. A second one is built using the proposed algorithm:<br />
<br />
{|<br />
|<br />
[[Image:MIT_DTI_JointSegReg_atlas2D.jpg|thumb|600px|Spatial distributions of Corpus Callosum, Cingulum and Fornix bundles from three single slices overlaid on their corresponding FA images. These maps are constructed using two different methods. a)High dimensional atlas, c) Proposed algorithm. The colorbars indicate the probability of each voxel in the spatial distribution of the corresponding fiber bundle. Note that the probabilities become higher in the central regions of the bundles and the number of sporadical voxels with non-zero probabilities decrease from left to right, indicating a sharper atlas through better registration and more consistent labeling of the subjects.]]<br />
|<br />
|}<br />
<br />
<br />
[[Image:MIT_DTI_JointSegReg_atlas3D.jpg|thumb|400px|Isoprobability surfaces of the spatial distributions of Fornix (in green) and Cingulum (in purple) bundles constructed from 15 subjects using the EM algorithm with tract cut operations. A few selected tracts from Corpus Callosum (in black) are also drawn to highlight the spatial proximity of the three bundles. These spatial distributions retain very little of the tractography noise that is apparent in the individuals' tract bundles.]]<br />
<br />
''Project Status''<br />
<br />
* Working 3D implementation in Matlab and C.<br />
<br />
= Key Investigators =<br />
<br />
* MIT: Ulas Ziyan, Mert R. Sabuncu<br />
* Harvard DBP1: Carl-Fredrik Westin, Lauren O'Donnell<br />
<br />
= Publications =<br />
''In Print''<br />
<br />
* [http://www.na-mic.org/publications/pages/display?search=Projects%3ADTIFiberRegistration&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on DTI Fiber Registration]<br />
<br />
[[Category: Registration]] [[Category: Segmentation]]</div>
Melonakos
https://www.na-mic.org/w/index.php?title=Projects:DTIClustering&diff=52224
Projects:DTIClustering
2010-05-11T20:20:00Z
<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:DiffusionImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]], [[DBP1:Harvard|Harvard DBP1]], [[Engineering:Kitware|Kitware Engineering]]<br />
__NOTOC__<br />
= DTI Clustering =<br />
<br />
In the past, we have demonstrated ways to characterize the strength of connectivity between selected regions in the brain, based on several alternative ways to integrate local diffusion tensor measurements into a global field that provides connection strength estimates for distant points. Our current work aims to provide a structural description of the white matter, partitioned into coherent fiber bundles and clusters, allowing automatic segmentation of tractography. Furthermore, we are using the segmentation method to enable analysis of diffusion properties along white matter fiber tracts (see the new tract-based morphometry method and the tumor measurement project, below).<br />
<br />
= Description =<br />
<br />
''NA-MIC Software Development''<br />
<br />
We are developing tools in the 3D Slicer for automatic clustering of tractographic paths through diffusion tensor MRI (DTI) data. By grouping tractographic paths based on shape and location, the white matter architecture may be more clearly visualized, and interesting properties of the clusters (such as FA or Westin's linear measure) may be quantified.<br />
<br />
<br clear="all" /><br />
[[Image:ClusterScreenshot.png|thumb|left|400px|Slicer clustering interface and example clusters.]]<br />
[[Image:Slicer-cluster-screenshot0028.png|thumb|left|300px|This image shows the same fibers as the bright green cluster in the previous image (part of the corpus callosum). Here randomly sampled tensors are displayed along the paths and colored with Westin's linear measure.]]<br />
<br clear="all" /><br />
<br />
<br />
''Clustering Implementation''<br />
<br />
Our implementation uses spectral clustering, a method for grouping data using eigenvectors of a data affinity matrix. This image gives an overview of the method. On the left example input tractographic paths are shown (these were created by manually seeding in the 3D Slicer). The center image shows an embedding of the tracts as points in 2D, where the distance between points is related to their shape similarity. This embedding was calculated as an intermediate step during spectral clustering. The image on the right shows the final output in the 3D Slicer, where tractographic paths are colored by cluster membership.<br />
<br />
[[Image:ClusterMethod.jpg|thumb|center|500px|Steps in clustering: shape comparison, spectral embedding, and output clusters]]<br />
<br clear="all" /><br />
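The three steps in the figure can be sketched in a few lines: the top eigenvectors of the normalized affinity matrix give the spectral embedding, and k-means on the embedded points gives the clusters. This is an illustrative two-cluster Python sketch with a toy affinity matrix, not the Slicer/ITK implementation:

```python
import numpy as np

def spectral_cluster_two(affinity, n_iter=20):
    """Two-cluster spectral clustering from a symmetric affinity matrix.

    Rows of the top eigenvectors of the normalized affinity form the
    low-dimensional embedding (the 2D point cloud in the figure above);
    a plain k-means on the embedded points then gives the clusters.
    """
    d = affinity.sum(axis=1)
    L = affinity / np.sqrt(np.outer(d, d))   # D^{-1/2} W D^{-1/2}
    _, vecs = np.linalg.eigh(L)              # eigenvalues in ascending order
    embed = vecs[:, -2:]                     # top two eigenvectors
    embed = embed / np.linalg.norm(embed, axis=1, keepdims=True)
    # deterministic k-means init: first point, plus the point farthest from it
    centers = np.array([embed[0], embed[((embed - embed[0]) ** 2).sum(1).argmax()]])
    for _ in range(n_iter):
        labels = ((embed[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        centers = np.array([embed[labels == k].mean(0) for k in range(2)])
    return labels

# toy affinity matrix with two obvious groups of "fibers"
W = np.array([[1.0, 0.9, 0.0, 0.0],
              [0.9, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.8],
              [0.0, 0.0, 0.8, 1.0]])
print(spectral_cluster_two(W))  # -> [0 0 1 1]
```

In the actual pipeline the affinity entries come from pairwise shape comparisons between tractographic paths rather than from a hand-written matrix.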
<br />
''Recent Update: Tract Based Morphometry Method''<br />
<br />
Multisubject statistical analyses of DTI in regions of specific white matter tracts have commonly measured only the mean value of a scalar invariant such as the fractional anisotropy (FA). The spatial patterns of FA along fiber tracts have not yet been studied in detail due to the difficulty of finding pointwise correspondences along the lengths of tracts from multiple subjects. We are investigating a new method for calculation of multisubject tract arc length coordinate systems, enabling tract-based morphometry (TBM), the group statistical analysis of tensors or scalar invariants along the length of fiber tracts. <br />
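A minimal sketch of the arc length coordinate idea: resampling each fiber at evenly spaced fractions of its total arc length yields a per-fiber coordinate at which scalar values such as FA can be compared pointwise. The Python below is purely illustrative and far simpler than the actual multisubject method:

```python
import numpy as np

def arc_length_coords(points, n_samples=20):
    """Resample one fiber (an (N, 3) array) at evenly spaced arc lengths.

    Returns the resampled points and their normalized arc length in
    [0, 1] -- a per-fiber coordinate at which scalar measurements can
    be matched across fibers and subjects.
    """
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative arc length
    s_new = np.linspace(0.0, s[-1], n_samples)
    resampled = np.column_stack(
        [np.interp(s_new, s, points[:, k]) for k in range(points.shape[1])]
    )
    return resampled, s_new / s[-1]

# toy fiber along the x-axis with uneven point spacing
fiber = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [4.0, 0.0, 0.0]])
pts, u = arc_length_coords(fiber, n_samples=5)
print(pts[:, 0])  # evenly spaced x-coordinates: 0, 1, 2, 3, 4
```

In TBM the hard part, solved by the method above, is making these coordinates correspond across subjects; the resampling itself is the easy step shown here.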
<br />
<br />
<br clear="all" /><br />
[[Image:cingulumAllSubjectsFibers.png|thumb|left|175px|Cingulum bundle with fibers from all 32 subjects (identified using group clustering).]]<br />
[[Image:cingulum1.png|thumb|left|175px|Arc length coordinates (in color) for one subject.]]<br />
[[Image:cingulum3.png|thumb|left|190px|Arc length coordinates (in color) for another subject.]]<br />
<br clear="all" /><br />
[[Image:cingulumMeanFARightVsLeftAlongTract.png|thumb|left|300px|Mean FA along tract (32 subjects). The left hemisphere FA is shown in blue and the right in red, and anterior is to the left in the plot. Each point represents the mean of subject mean FA values at that arc length coordinate. Bars are standard error across subject means.]]<br />
[[Image:cingulumPValueOnFiber.png|thumb|left|330px|P-value for left/right hemisphere difference in FA, overlaid on the prototype fiber (multiple comparison corrected using permutation testing). Significant differences are found especially in the anterior cingulum bundle.]]<br />
<br clear="all" /><br />
''Recent Update: Tumor Measurement Project''<br />
[[Image:TumorCSTForWiki.jpg|thumb|right|150px|Tumor and CST clusters displayed in 3D Slicer]]<br />
Primary brain tumors lead to changes in the diffusion properties of white matter due to edema, infiltration, tract displacement, and destruction. Despite investigation of diffusion changes in white matter bordering tumors, these changes have not been quantitatively determined along the length of white matter tracts that may be affected by a tumor. The study of these tracts is especially interesting because primary brain tumors spread and grow by infiltration along white matter tracts.<br />
<br />
Clustered fibers in the region of the corticospinal tract have been used to identify regions of interest for slice-by-slice measurements of this tract's diffusion properties in normals and in tumor subjects. A pilot study (with Monica Lemmond at Harvard Medical School and Stephen Whalen and Alexandra Golby at Brigham and Women's Hospital/HMS) has demonstrated changes in tumor-affected tracts (relative to the contralateral unaffected side) beyond the apparent tumor border. A larger study is currently underway.<br />
<br />
<br clear="all" /><br />
[[Image:Tumorfigure2.jpg|thumb|left|256px|Mean diffusivity along tract affected by tumor and contralateral unaffected tract. Axial level of tumor border on T2 is marked with dashed line.]]<br />
[[Image:Tumorfigure3.jpg|thumb|left|256px|Parallel diffusivity (major eigenvalue) along tract affected by tumor and contralateral. Axial level of tumor border on T2 is marked with dashed line.]]<br />
[[Image:Tumorfigure4.jpg|thumb|left|256px|Parallel diffusivity (major eigenvalue) along bilateral tracts in a normal subject]]<br />
<br clear="all" /><br />
<br />
''Thesis Results: Automatic Tractography Segmentation''<br />
<br />
Atlas creation and automatic labeling have been performed on high-quality DTI datasets from Susumu Mori. Images showing example segmentation results are below. Work is underway to apply this atlas to segment additional datasets to define regions of interest that may be used in the study of schizophrenia.<br />
<br />
''Example Results:''<br />
<br />
Selected anatomical regions, automatically labeled using the cluster atlas in 3 subjects.<br />
<br />
[[Image:Th_SM_14_41_labels-0001.png]] [[Image:Th_SM_15_labels-0001.png]] [[Image:Th_SM_17_labels-0001.png]]<br />
<br />
Subdivisions of the corpus callosum, labeled using the cluster atlas in 3 subjects.<br />
<br />
[[Image:Th_corpus-SM_020001.png]] [[Image:Th_corpus-SM_140001.png]] [[Image:Th_corpus-SM_150001.png]]<br />
<br />
<br /><br />
<br />
''Software''<br />
<br />
The software employed in this process includes:<br />
<br />
* Slicer DTMRI module VTK classes for tractography.<br />
* Slicer VTK class to compute tract path affinity matrix.<br />
** This matrix contains results of shape comparisons between tract paths.<br />
** Several methods are available: endpoint distance, Hausdorff distance, and mean/covariance distance.<br />
* New ITK class (itkSpectralClustering) clusters data based on an affinity matrix.<br />
** Generally applicable for clustering problems because input is just this matrix.<br />
* New and improved ITK Statistics framework which allows our embedding vectors (such as those in the middle image above) to have variable length and therefore employ more shape information when it is present. Thank you to Karthik Krishnan for reworking the ITK Statistics classes.<br />
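As an illustration of one of the shape comparisons listed above, the symmetric Hausdorff distance between two fibers, and a Gaussian affinity built from it, can be sketched as follows (assuming fibers stored as (N, 3) point arrays; `sigma` is a made-up scale, and this is not the Slicer VTK code):

```python
import numpy as np

def hausdorff(fa, fb):
    """Symmetric Hausdorff distance between two fibers, (N, 3) and (M, 3)."""
    d = np.linalg.norm(fa[:, None, :] - fb[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def affinity_matrix(fibers, sigma=10.0):
    """Gaussian affinity matrix from pairwise Hausdorff distances."""
    n = len(fibers)
    W = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            W[i, j] = W[j, i] = np.exp(-hausdorff(fibers[i], fibers[j]) ** 2 / sigma ** 2)
    return W

f1 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
f2 = np.array([[0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])  # f1 shifted by 1 in y
print(hausdorff(f1, f2))  # 1.0
```

The resulting symmetric matrix is exactly the kind of input the spectral clustering class expects.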
<br />
All of the code is part of NA-MIC. The VTK classes are in the 3D Slicer DTMRI module, while the ITK class is currently located in the NA-MIC Sandbox and will be included in ITK in the future.<br />
<br />
Additional matlab code is used for multiple subject clustering and atlas labeling.<br />
<br />
<br />
= Key Investigators =<br />
<br />
* MIT: Lauren O'Donnell<br />
* Harvard DBP1: Martha Shenton, Monica Lemmond, Alexandra Golby, Carl-Fredrik Westin<br />
* Kitware Engineering: Karthik Krishnan<br />
<br />
= Publications =<br />
''In Print''<br />
<br />
* [http://www.na-mic.org/publications/pages/display?search=Projects%3ADTIClustering&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on DTI Fiber Clustering and Fiber-Based Analysis]<br />
<br />
''In Press''<br />
<br />
* Monica E. Lemmond, Lauren J. O'Donnell, Stephen Whalen, and Alexandra J. Golby. Characterizing Diffusion Along White Matter Tracts Affected by Primary Brain Tumors. Accepted to HBM 2007. [http://people.csail.mit.edu/lauren/publications/LemmondHBM2007.pdf (pdf)]<br />
<br />
[[Category: Segmentation]] [[Category: Slicer]] [[Category:Schizophrenia]]</div>
Melonakos
https://www.na-mic.org/w/index.php?title=Projects:ShapeBasedSegmentationAndRegistration&diff=52223
Projects:ShapeBasedSegmentationAndRegistration
2010-05-11T20:19:27Z
<p>Melonakos: </p>
<hr />
<div> Back to [[Algorithm:MIT|MIT Algorithms]]<br />
__NOTOC__<br />
= Shape Based Segmentation and Registration =<br />
<br />
Standard image-based segmentation approaches perform poorly when there is little or no contrast along boundaries of different regions. In such cases, segmentation is mostly performed manually, using prior knowledge of the shape and relative location of the underlying structures combined with partially discernible boundaries. We present an automated approach guided by covariant shape deformations of neighboring structures, which serve as an additional source of prior knowledge. Captured by a shape atlas, these deformations are transformed into a statistical model using the logistic function. The mapping between atlas and image space, structure boundaries, anatomical labels, and image inhomogeneities are estimated simultaneously within an Expectation-Maximization formulation of the Maximum A Posteriori (MAP) estimation problem. These results are then fed into an Active Mean Field approach, which treats them as priors for a Mean Field approximation with a curve length prior. <br />
<br />
== EM-Based Shape Segmenter ==<br />
<br />
{|<br />
|<br />
[[Image:Progress_Registration_Segmentation_Model.jpg|Estimation Model]]<br />
|-<br />
| The Bayesian framework models the relationship between the observed data I (input image), the hidden data T (label map), and the parameter space (S,R,B) with shape model S, registration parameters R, and image inhomogeneities B. The optimal solution with respect to (S,R,B) is defined by the MAP estimate of the framework. We iteratively determine the solution of the estimation problem using an Expectation Maximization (EM) implementation. The E-Step calculates the ‘weights’ for each structure at voxel x. The M-Step updates the approximation of (S,B,R).<br />
|}<br />
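The E-step/M-step alternation can be illustrated with a stripped-down, intensity-only EM (no shape model, registration, or bias field). This is a toy sketch on synthetic 1D data, not the full estimator described above:

```python
import numpy as np

def em_segment(x, mu, var, pi, n_iter=20):
    """Toy EM for intensity-only tissue classification.

    E-step: posterior 'weight' of each class at every voxel.
    M-step: update the class means, variances, and mixing priors.
    """
    x = np.asarray(x, float)
    mu, var, pi = (np.asarray(a, float) for a in (mu, var, pi))
    for _ in range(n_iter):
        # E-step: weighted Gaussian likelihoods, normalized per voxel
        lik = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        w = lik / lik.sum(axis=1, keepdims=True)
        # M-step: weighted maximum likelihood updates
        nk = w.sum(axis=0)
        mu = (w * x[:, None]).sum(axis=0) / nk
        var = (w * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return w.argmax(axis=1), mu

rng = np.random.default_rng(0)
# two synthetic tissue classes, deliberately poor initial guesses
voxels = np.concatenate([rng.normal(30.0, 2.0, 200), rng.normal(80.0, 2.0, 200)])
labels, mu = em_segment(voxels, mu=[20.0, 90.0], var=[100.0, 100.0], pi=[0.5, 0.5])
print(np.round(mu))  # close to the true class means, 30 and 80
```

In the full method the M-step additionally re-estimates the shape, registration, and inhomogeneity parameters using the same E-step weights.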
<br />
== Shape Atlas ==<br />
<br />
{|<br />
|<br />
[[Image:Progress_Registration_Segmentation_Shape.jpg|Shape Model]]<br />
|<br />
The atlas captures shapes via the signed distance map representation and the covariation of shapes across structures through Principal Component Analysis (PCA). The atlas is constructed as follows (see [1] for further detail):<br />
<br />
* Turn training cases into vectors of distance maps of all structures.<br />
* Run PCA on distance maps to determine average distance map and modes of variations.<br />
|}<br />
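The two construction steps can be sketched as PCA via SVD on vectorized distance maps. The 1D "distance maps" below are toy data and this is not the atlas-building code:

```python
import numpy as np

def shape_pca(distance_maps, n_modes=2):
    """PCA over vectorized signed distance maps of training shapes.

    Returns the mean map and the leading modes of variation; a new
    shape is then expressed as mean + sum_k b_k * mode_k.
    """
    X = np.array([d.ravel() for d in distance_maps])  # (n_shapes, n_voxels)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]               # principal directions of variation
    coeffs = (X - mean) @ modes.T      # shape coefficients of the training set
    return mean, modes, coeffs

# toy "distance maps": 1D profiles of a boundary shifting position
maps = [np.arange(10.0) - c for c in (3.0, 4.0, 5.0)]
mean, modes, coeffs = shape_pca(maps, n_modes=1)
recon = mean + coeffs[0] @ modes       # reconstruct the first training shape
print(np.allclose(recon, maps[0]))     # True
```

Since the toy training set varies along a single direction, one mode reconstructs every training shape exactly; real distance-map atlases keep the few modes that explain most of the variance.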
<br />
== Registration Model ==<br />
<br />
{|<br />
|<br />
[[Image:Progress_Registration_Segmentation_Registration.jpg|Hierarchical Registration Model]]<br />
| The hierarchical registration framework represents the correspondence between the coordinate system of the atlas, which defines the shape model, and the MR image. The structure-independent parameters capture the correspondence between atlas and image space. The structure dependent parameters are the residual structure-specific deformations not adequately explained by the structure-independent registration parameters.<br />
|}<br />
<br />
== Image Inhomogeneity Model ==<br />
<br />
{|<br />
|<br />
[[Image:Progress_Registration_Segmentation_Inhomogeneity.jpg|Inhomogeneity of an MR Image]]<br />
| Image (a) shows an MR image corrupted by image inhomogeneities, noise, partial voluming, and other image artifacts. The image inhomogeneities of (a) are shown in (b). Unlike noise, image inhomogeneities are characterized by slowly varying values within the brain. Image (c) is the inhomogeneity-corrected version of (a). In our method, inhomogeneities are modeled as a Gaussian distribution over the image space. As Wells et al. [2] show, the inhomogeneity can then be approximated by the product of a low-pass filter, represented as a matrix, and the weighted residual between the estimated and observed MR images.<br />
|}<br />
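In that spirit, a single bias-update step can be sketched in 1D: the class-posterior-weighted residual of log intensities is low-pass filtered (a moving average stands in here for the Gaussian filter matrix). This is illustrative only, on synthetic data:

```python
import numpy as np

def estimate_bias(log_image, predicted, weights, kernel_size=15):
    """One bias-update step in the spirit of Wells et al.

    The residual between observed and class-predicted log intensities,
    weighted by the tissue posteriors, is low-pass filtered; the filter
    normalization also uses the smoothed weights.
    """
    residual = weights * (log_image - predicted)
    kernel = np.ones(kernel_size) / kernel_size
    num = np.convolve(residual, kernel, mode="same")
    den = np.convolve(weights, kernel, mode="same")
    return num / np.maximum(den, 1e-12)

# toy 1D example: uniform tissue plus a slowly varying bias field
x = np.linspace(0, 1, 100)
true_bias = 0.3 * np.sin(2 * np.pi * x)
observed = 4.0 + true_bias           # log intensity = class mean + bias
bias = estimate_bias(observed, np.full(100, 4.0), np.ones(100))
print(np.abs(bias - true_bias).max() < 0.1)  # the slow bias is recovered
```

Because the filter passes only slow variation, noise in the residual is suppressed while the smooth bias survives, which is exactly why the model distinguishes inhomogeneity from noise.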
<br />
== Experiment on 22 Cases ==<br />
<br />
{|<br />
|<br />
[[Image:Progress_Registration_Segmentation_Validation.jpg|Validation Study for Joint Registration and Segmentation]]<br />
|<br />
The experiment empirically demonstrates the utility of joining registration and shape-based segmentation in an EM implementation by comparing the accuracy of our new method (EM-Sim-Sh) to four other EM implementations:<br />
<br />
* EM-Affine = sequentially performing affine registration and segmentation without shape model<br />
* EM-NRigid = sequentially performing non-rigid registration and segmentation without shape model<br />
* EM-Sim-Af = integrated registration and segmentation approach without shape model<br />
* EM-Shape = sequentially performing affine registration and shape-based segmentation<br />
<br />
All methods segment 22 cases into the three brain tissue classes as well as the ventricles, thalamus, and caudate. We then measure the overlap between manual and automatic segmentations of the thalamus and caudate using the Dice coefficient. Only our new approach (EM-Sim-Sh) performs well for both structures.<br />
|}<br />
<br />
== Active Mean Field (AMF) ==<br />
<br />
The approach estimates the posterior probability of tissue labels. Conventional likelihood models are combined with a curve length prior on boundaries, and an approximate posterior distribution on labels is sought via the Mean Field approach. Optimizing the resulting estimator by gradient descent leads to a level set style algorithm where the level set functions are the logarithm-of-odds encoding of the posterior label probabilities in an unconstrained linear vector space. Applications with more than two labels are easily accommodated. The label assignment is accomplished by the Maximum ''A Posteriori'' rule, so there are no problems of 'overlap' or 'vacuum'. We test the method on synthetic images with additive noise. In addition, we segment a magnetic resonance scan into the major brain compartments and subcortical structures.<br />
<br />
== LogOdds Maps ==<br />
<br />
LogOdds maps relate to the certainty of object boundaries in images. Like signed distance maps, LogOdds encode the boundary of the shape via a zero-level set, which now represents the set of voxels with the highest uncertainty of being assigned to fore- or background. Unlike signed distance maps, the rest of the space is defined by the logarithm of the odds that a structure is present at that location, under the assumption that voxels in an image are independently distributed and the training set consists of aligned segmentations. This relationship with the odds of the presence of an anatomical label provides a natural way to capture boundary uncertainty. Importantly, the space of LogOdds is closed under addition and scalar multiplication, and as such it can be used for efficient and straightforward statistical modeling and inference of shape. <br />
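The LogOdds transform and its inverse are simply the logit and logistic functions; because the space is a vector space, maps can be averaged there and mapped back to valid probabilities. A small illustrative sketch (the expert values below are made up):

```python
import numpy as np

def logodds(p, eps=1e-6):
    """Probability map -> LogOdds space (the logit function)."""
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

def prob(t):
    """LogOdds -> probability (the logistic sigmoid, inverse of logit)."""
    return 1.0 / (1.0 + np.exp(-t))

# Because LogOdds space is closed under addition and scalar
# multiplication, expert label maps can be averaged there and mapped
# back to a valid probability map.
experts = np.array([[0.9, 0.6, 0.2],
                    [0.8, 0.4, 0.1]])
fused = prob(logodds(experts).mean(axis=0))
print(fused.round(2))
```

Averaging in LogOdds space keeps the result inside (0, 1) by construction, which is what makes statistical modeling in this parametrization straightforward.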
<br />
The figure below shows the LogOdds maps generated from label maps of six experts segmenting the right superior temporal gyrus. Dark blue and dark red indicate high certainty that the voxel is assigned to the background and foreground, respectively. All other colors represent statistical uncertainty about the assignment of the voxel.<br />
<br />
[[Image:POHL LogOdssVsDistanceMaps.jpg | Log Odds vs Signed Distance Map for representing the right Superior Temporal Gyrus]]<br />
<br />
''The AMF Algorithm''<br />
<br />
We now combine the Mean Field approximation with the level set framework by using the LogOdds parametrization: we embed the Mean Field parameters into the LogOdds space and determine the optimal parameters via gradient descent, which is realized in the level set formulation. This results in the AMF algorithm, which computes space-conditioned probabilities while incorporating regional as well as boundary properties of objects.<br />
<br />
''Results''<br />
<br />
We now discuss the curve evolution of our algorithm on a noisy image that was segmented by a Gaussian classifier into a fragmented label map. The corresponding probability maps are the inputs to our algorithm, which robustly identifies the boundary of the structure. We initialize our curve evolution with the distance map of a small circle (see the green circle in the top left image and the distance map below it), and the input is the noisy LogOdds map of the normalized likelihood (bottom right). The initial curve is disconnected from the square, forcing our method to split the zero-level set into two separate curves by Iteration 1. The circle connected to the square expands while the other curve shrinks. Our curve evolution further evolves both curves until the connected curve converges to the shape of the square and the disconnected curve vanishes. The evolution produces the LogOdds maps shown in the bottom row of the figure below. Initially, the dark blue region shrinks, i.e., the number of voxels with high certainty about the presence of the square decreases. The shrinking is due to the discrepancy between the initial LogOdds map and the input label likelihoods. As the method progresses, the blue region assimilates towards the predefined LogOdds map. Unlike the segmentation produced by thresholding the initial likelihoods, our level set method filters out the noise. The final LogOdds map is smooth, and the binary map shows the square as one connected region.<br />
<br />
[[Image:POHL_IPMI07_MovieNoisy.gif|Noisy Image Segmented via AMF]]<br />
<br />
The second experiment includes real MRI images, in which AMF automatically segments the major brain compartments as well as subcortical structures. Due to the LogOdds parametrization, our method naturally evolves families of curves.<br />
<br />
[[Image:POHL_IPMI07_MovieBalls.gif|Segmenting MR images]] <br />
<br /><br />
<br />
''Software''<br />
<br />
The algorithm is currently implemented in 3D Slicer Version 2.6 and a beta version is available in 3D Slicer Version 3.<br />
<br />
= Key Investigators =<br />
* MIT: K.M. Pohl, S. Bouix, M. Shenton, R. Kikinis, and W.M. Wells<br />
<br />
= Publications =<br />
''In Print''<br />
<br />
* [http://www.na-mic.org/publications/pages/display?search=Projects%3AShapeBasedSegmentationAndRegistration&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Shape Based Segmentation and Registration]<br />
<br />
[[Category: Shape Analysis]] [[Category: Registration]] [[Category: Segmentation]] [[Category:MRI]]</div>
Melonakos
https://www.na-mic.org/w/index.php?title=Projects:DTIModeling&diff=52222
Projects:DTIModeling
2010-05-11T20:19:03Z
<p>Melonakos: </p>
<hr />
<div> Back to [[Algorithm:MIT|MIT Algorithms]]<br />
__NOTOC__<br />
= DTI Modeling =<br />
<br />
We developed a novel approach for joint clustering and point-by-point mapping of white matter fiber pathways. Knowledge of the point correspondence along the fiber pathways is not only necessary for accurate clustering of the trajectories into fiber bundles, but also crucial for any tract-oriented quantitative analysis. The proposed approach is also capable of incorporating anatomical knowledge as prior information.<br />
<br />
= Description =<br />
<br />
We employ an expectation-maximization (EM) algorithm to cluster the trajectories in a Gamma mixture model context. The result of clustering is the probabilistic assignment of the fiber trajectories to each cluster and an estimate of the cluster parameters, i.e. spatial mean and variance, and point correspondences. The fiber bundles are modeled by the mean trajectory and its spatial variation. Point-by-point correspondence of the trajectories within a bundle is obtained by constructing a distance map and a label map from each cluster center at every iteration of the EM algorithm. This offers a time-efficient alternative to pairwise curve matching of all trajectories with respect to each cluster center. <br />
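The distance-map/label-map construction can be sketched with SciPy's Euclidean distance transform, which returns, for every voxel, both the distance to and the index of the nearest point on a rasterized cluster center. This 2D toy (with hypothetical coordinates) is a sketch, not the paper's implementation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def center_maps(center_points, shape):
    """Distance map and label map from a rasterized 2D cluster center.

    Every voxel stores its distance to the center curve and the index of
    the closest center point, so point correspondences for all
    trajectories can be read off in one pass instead of matching each
    trajectory against the center pairwise.
    """
    grid = np.ones(shape, dtype=bool)
    for p in center_points:
        grid[tuple(p)] = False                 # zeros mark the center curve
    dist, idx = distance_transform_edt(grid, return_indices=True)
    lookup = {tuple(p): k for k, p in enumerate(center_points)}
    label = np.zeros(shape, dtype=int)
    for v in np.ndindex(shape):
        label[v] = lookup[(int(idx[0][v]), int(idx[1][v]))]
    return dist, label

center = [(2, 0), (2, 2), (2, 4)]              # a short center curve
dist, label = center_maps(center, (5, 5))
print(dist[2, 0], label[0, 0], label[0, 4])    # 0.0 0 2
```

Any trajectory point then finds its corresponding arc length position on the cluster center by a single lookup into `label`, which is what makes the per-iteration correspondence step cheap.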
<br />
The proposed method has the potential to benefit from an anatomical atlas of fiber tracts by incorporating it as prior information in the EM algorithm. The algorithm is also capable of handling outliers in a principled way. Here are some examples of modeling/clustering the bundles:<br />
<br />
[[Image:models.jpg|Model of fiber tracts]]<br />
<br />
[[Image:wholebrain.jpg|Model of fiber tracts|600px]]<br />
<br />
One of the most difficult fiber tract bundles to cluster is the cingulum. Even starting tractography from a user-defined ROI results in a set of disordered trajectories, mostly short in length because of low FA. Also, due to its adjacency to the corpus callosum, many callosal trajectories are included that adversely affect any further analysis of the bundle. As shown in the following figure for two subjects, our method is well able to cluster these trajectories into the desired bundles. Two arbitrary trajectories, one from the superior and one from the posterior part of the cingulum, were selected as the initial cluster centers. Knowledge of the point correspondence, and hence rigorous calculation of the similarity measure, is essential for clustering such a disordered set of trajectories. <br />
<br />
[[Image:cingulum.jpg|400px]][[Image:Gamma.jpg|250px]]<br />
<br />
The figure on the right illustrates the evolution of the Gamma distribution for the clusters of the first case shown in the above figure. Convergence is achieved after just a few iterations of the EM algorithm. <br />
<br />
The proposed algorithm is being applied to several datasets. Below are two examples:<br />
<br />
'''Population Study on Pathological Subjects'''<br />
<br />
A population study of the cingulum bundle in controls and schizophrenia cases:<br />
<br />
[[Image:Populationstudy.jpg|400px]]<br />
<br />
'''Brain Development'''<br />
<br />
[[Image:braindevelopment.jpg|400px]][[Image:braindevelopment_qa.jpg|300px]]<br />
<br />
FA-colored trajectories from (a), (d) the cortico-spinal tract, (b), (e) the cingulum, and (c), (f) the uncinate fasciculus at 32-wk (top) and 42-wk (bottom) postmenstrual age. Spatial patterns of tract development are clearly seen. On the right, the figure shows the box-plot of FA variation along the tract arc length for part of the cingulum at 32-wk (top) and 42-wk (bottom) postmenstrual age. Only the posterior part shows a significant FA increase; ROI-based analysis fails to detect such spatial dependencies.<br />
<br />
<br />
''Software''<br />
<br />
Currently, all of the code is implemented in MATLAB.<br />
<br />
= Key Investigators =<br />
<br />
* MIT: Mahnaz Maddah, Eric Grimson.<br />
* Harvard: Sandy Wells, Simon Warfield, C-F Westin, Martha E. Shenton, Marek Kubicki.<br />
<br />
= Publications =<br />
<br />
*[http://www.na-mic.org/publications/pages/display?search=Projects%3ADTIModeling&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Fiber Tract Modeling, Clustering, and Quantitative Analysis]<br />
<br />
<br />
<br />
[[Category: Shape Analysis]] [[Category:Schizophrenia]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:ShapeAnalysisWithOvercompleteWavelets&diff=52221Projects:ShapeAnalysisWithOvercompleteWavelets2010-05-11T20:18:39Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[Algorithm:MIT|MIT Algorithms]]<br />
__NOTOC__<br />
= Shape Analysis with Overcomplete Wavelets =<br />
<br />
In this work, we extend the Euclidean wavelets to the sphere. The resulting over-complete spherical wavelets are invariant to rotations of the spherical image parameterization. We apply the over-complete spherical wavelets to cortical folding development and obtain significantly more consistent results, as well as improved sensitivity, compared with the previously used bi-orthogonal spherical wavelets. In particular, we are able to detect developmental asymmetry between the left and right hemispheres.<br />
<br />
= Description =<br />
<br />
Bi-orthogonal spherical wavelets have been shown to be<br />
powerful tools in the segmentation and shape analysis of 2D<br />
closed surfaces, but unfortunately they suffer from aliasing<br />
problems and are therefore not invariant under rotations of<br />
the underlying surface parameterization. See the toy example in the figure below.<br />
[[Image:NorthPoleBump.png|thumb|center|300px|Bump on the sphere (left side). When the north pole is right under the bump, both the bi-orthogonal and overcomplete wavelets detect the bump (first column on the right). When the north pole is rotated away from the bump, only the overcomplete wavelet detects the bump (second column on the right).]]<br />
<br />
Instead, we propose to use the over-complete spherical wavelets. These over-complete spherical wavelets are based on filter bank theory, directly extending the ideas of Euclidean steerable pyramid to the sphere. We demonstrate the theoretical advantage of over-complete<br />
wavelets over bi-orthogonal wavelets. We also show that over-complete spherical wavelets allow us to<br />
build more stable cortical folding development models, and detect a wider array of regions of folding development in a newborn dataset. The use of spherical wavelet transform in cortical<br />
shape analysis allows us to study cortical folds of different<br />
spatial scales, which are difficult to analyze by cortical<br />
folding analysis methods based on local features such as<br />
curvature and sulcal depth measurements.<br />
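A 1-D analogue (not the spherical construction itself) makes the aliasing issue concrete: critically sampled wavelet coefficients can change drastically, or vanish entirely, under a one-sample shift of the input, while an undecimated (overcomplete) transform simply shifts along with it. A toy sketch with Haar detail filters, mirroring the "bump on the sphere" example above:<br />

```python
def haar_decimated(x):
    # One level of the critically sampled Haar detail transform
    # (differences of even-indexed pairs, then subsampling by two).
    return [(x[i] - x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def haar_undecimated(x):
    # One level of the undecimated ("a trous") Haar detail transform:
    # no subsampling, circular boundary, so shifting the input merely
    # shifts the coefficients.
    n = len(x)
    return [(x[i] - x[(i + 1) % n]) / 2 for i in range(n)]

signal = [0, 0, 0, 1, 1, 0, 0, 0]       # a "bump"
shifted = signal[-1:] + signal[:-1]     # the same bump, shifted by one sample

d0, d1 = haar_decimated(signal), haar_decimated(shifted)
u0, u1 = haar_undecimated(signal), haar_undecimated(shifted)
```

Here the decimated coefficients of the shifted bump are all zero (the bump is "missed", as in the rotated-north-pole example), while the undecimated coefficients of the shifted bump are exactly the shifted coefficients of the original.<br />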
<br />
== Experimental Results ==<br />
<br />
'''Comparison with Bi-orthogonal Wavelets'''<br />
<br />
The two images on the left of the figure below show the cortical folding speed detected by the bi-orthogonal wavelets. Notice how the detection changes wildly with different parameterizations. In contrast, the results of the over-complete wavelets are stable across different parameterizations. Furthermore, the over-complete wavelets are much more sensitive than the bi-orthogonal wavelets in detecting changes.<br />
<br />
<center><br />
<table><br />
<tr><br />
<td><br />
[[Image:RobustnessOfWaveletAnalysis1.png|thumb|center|150px|Bi-orthogonal wavelets: Original Parameterization]]<br />
</td><br />
<td><br />
[[Image:RobustnessOfWaveletAnalysis2.png|thumb|center|150px|Bi-orthogonal wavelets: Different Parameterization]]<br />
</td><br />
<td><br />
[[Image:RobustnessOfWaveletAnalysis3.png|thumb|center|154px|Over-complete wavelets: Original Parameterization]]<br />
</td><br />
<td><br />
[[Image:RobustnessOfWaveletAnalysis4.png|thumb|center|154px|Over-complete wavelets: Different Parameterization]]<br />
</td><br />
</tr><br />
</table><br />
</center><br />
<br />
'''Overcomplete Wavelets Global Shape Analysis'''<br />
<br />
As seen in the figure below, we find that larger folds (lower wavelet scales) develop earlier but more slowly. This is consistent with a previous postmortem study [3]. Furthermore, we find that the fastest folding development occurs at a younger age in the left hemisphere than in the right. <br />
<br />
<center><br />
<table><br />
<tr><br />
<td><br />
[[Image:speed_bar_plot.png|thumb|center|222px|Maximum growth rate (1/week) for each wavelet scale]]<br />
</td><br />
<td><br />
[[Image:age_bar_plot.png|thumb|center|222px|Age (weeks) of fastest growth for each wavelet scale]]<br />
</td><br />
</tr><br />
</table><br />
</center><br />
<br />
'''Overcomplete Wavelets Local Shape Analysis'''<br />
<br />
The figure below shows the results of the regional analysis. Consistent with the global development results discussed above, regions that develop earlier (darker blue) also grow more slowly (more red). For example, the lateral side of the parietal lobe develops earlier on the left hemisphere than on the right, but at a slower speed.<br />
<br />
Also consistent with the global analysis, we find that larger folds develop earlier but more slowly. On the lateral side, the pre- and post-central gyri develop fastest during weeks 30-31 on both hemispheres, while smaller structures such as the supramarginal and angular gyri develop fastest at a much later time, as indicated at frequency level 3.<br />
<br />
<center><br />
<table><br />
<tr><br />
<td><br />
Maximum Growth Rate (Left)<br />
</td><br />
<td><br />
Maximum Growth Rate (Right)<br />
</td><br />
<td><br />
Age of Maximum Growth (Left)<br />
</td><br />
<td><br />
Age of Maximum Growth (Right)<br />
</td><br />
</tr><br />
<tr><br />
<td><br />
[[Image:Lh.smoothwm.csw.lsq.rate.level0_lat.png|center|200px|Maximum growth rate, left hemisphere (level 0)]]<br />
</td><br />
<td><br />
[[Image:Rh.smoothwm.csw.lsq.rate.level0_lat.png|center|200px|Maximum growth rate, right hemisphere (level 0)]]<br />
</td><br />
<td><br />
[[Image:Lh.smoothwm.csw.lsq.age.level0_lat.png|center|200px|Age of maximum growth, left hemisphere (level 0)]]<br />
</td><br />
<td><br />
[[Image:Rh.smoothwm.csw.lsq.rate.level0_lat.png|center|200px|Age of maximum growth, right hemisphere (level 0)]]<br />
</td><br />
</tr><br />
<tr><br />
<td><br />
[[Image:Lh.smoothwm.csw.lsq.rate.level1_lat.png|center|200px|Maximum growth rate, left hemisphere (level 1)]]<br />
</td><br />
<td><br />
[[Image:Rh.smoothwm.csw.lsq.rate.level1_lat.png|center|200px|Maximum growth rate, right hemisphere (level 1)]]<br />
</td><br />
<td><br />
[[Image:Lh.smoothwm.csw.lsq.age.level1_lat.png|center|200px|Age of maximum growth, left hemisphere (level 1)]]<br />
</td><br />
<td><br />
[[Image:Rh.smoothwm.csw.lsq.rate.level1_lat.png|center|200px|Age of maximum growth, right hemisphere (level 1)]]<br />
</td><br />
</tr><br />
<tr><br />
<td><br />
[[Image:Lh.smoothwm.csw.lsq.rate.level2_lat.png|center|200px|Maximum growth rate, left hemisphere (level 2)]]<br />
</td><br />
<td><br />
[[Image:Rh.smoothwm.csw.lsq.rate.level2_lat.png|center|200px|Maximum growth rate, right hemisphere (level 2)]]<br />
</td><br />
<td><br />
[[Image:Lh.smoothwm.csw.lsq.age.level2_lat.png|center|200px|Age of maximum growth, left hemisphere (level 2)]]<br />
</td><br />
<td><br />
[[Image:Rh.smoothwm.csw.lsq.rate.level2_lat.png|center|200px|Age of maximum growth, right hemisphere (level 2)]]<br />
</td><br />
</tr><br />
<tr><br />
<td><br />
[[Image:Lh.smoothwm.csw.lsq.rate.level3_lat.png|center|200px|Maximum growth rate, left hemisphere (level 3)]]<br />
</td><br />
<td><br />
[[Image:Rh.smoothwm.csw.lsq.rate.level3_lat.png|center|200px|Maximum growth rate, right hemisphere (level 3)]]<br />
</td><br />
<td><br />
[[Image:Lh.smoothwm.csw.lsq.age.level3_lat.png|center|200px|Age of maximum growth, left hemisphere (level 3)]]<br />
</td><br />
<td><br />
[[Image:Rh.smoothwm.csw.lsq.rate.level3_lat.png|center|200px|Age of maximum growth, right hemisphere (level 3)]]<br />
</td><br />
</tr><br />
</table><br />
</center><br />
<br />
= Key Investigators =<br />
<br />
* MIT: [http://people.csail.mit.edu/ythomas/ B.T. Thomas Yeo], Peng Yu, Wanmei Ou, Polina Golland.<br />
* Harvard: Ellen Grant, Bruce Fischl.<br />
<br />
= Publications =<br />
<br />
[http://www.na-mic.org/publications/pages/display?search=Projects%3AShapeAnalysisWithOvercompleteWavelets&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Shape Analysis With Overcomplete Wavelets]<br />
<br />
[[Category: Shape Analysis]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:RegistrationRegularization&diff=52220Projects:RegistrationRegularization2010-05-11T20:18:17Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[Algorithm:MIT|MIT Algorithms]]<br />
__NOTOC__<br />
= Registration Regularization =<br />
<br />
We propose a unified framework for computing atlases from manually labeled data sets at various degrees of “sharpness”, and for the joint registration and segmentation of a new brain with these atlases. In non-rigid registration, the tradeoff between warp regularization and image fidelity is typically set empirically. In atlas construction, this tradeoff determines the “sharpness” of the atlas: weak regularization results in well-aligned training images, producing a “sharp” atlas; strong regularization yields a “blurry” atlas. We study the effects of this tradeoff in the context of cortical surface parcellation, but the framework applies to volume registration as well. This is an important question because of the increasing availability of atlases in public databases and the development of registration algorithms separate from the atlas construction process.<br />
<br />
= Description =<br />
In image registration, one usually optimizes an objective function with two terms. The first term is the similarity between images; the second regularizes the warp. The smoothness parameter weighting the second term determines the tradeoff between the similarity measure and the regularization. In atlas-based segmentation, one is given a set of labeled training images. The training images are co-registered to a common space, and an atlas that summarizes the relationship between image features and labels is computed in this common space. This atlas is then used to segment and normalize a new image.<br />
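The tradeoff between the two terms can be made concrete with a toy 1-D registration: minimize SSD plus a weighted first-difference smoothness penalty on a displacement field by numerical gradient descent. This is an illustrative sketch only; the names, the tiny problem, and the finite-difference optimizer are ours, not the method of this work.<br />

```python
import math

def make_bump(center, n=32):
    # Toy 1-D "image": a Gaussian bump.
    return [math.exp(-((i - center) ** 2) / 8.0) for i in range(n)]

def interp(img, pos):
    # Linear interpolation with clamping at the borders.
    pos = min(max(pos, 0.0), len(img) - 1.0)
    i = int(pos)
    f = pos - i
    j = min(i + 1, len(img) - 1)
    return (1 - f) * img[i] + f * img[j]

def objective(image, template, d, lam):
    # Image fidelity (SSD) plus lam times the warp regularization term.
    ssd = sum((image[i] - interp(template, i + d[i])) ** 2
              for i in range(len(image)))
    smooth = sum((d[i + 1] - d[i]) ** 2 for i in range(len(d) - 1))
    return ssd + lam * smooth

def register(image, template, lam, steps=300, lr=0.05, eps=1e-4):
    # Gradient descent on the displacement field, with the gradient
    # estimated by central finite differences for simplicity.
    d = [0.0] * len(image)
    for _ in range(steps):
        grad = []
        for i in range(len(d)):
            d[i] += eps
            hi = objective(image, template, d, lam)
            d[i] -= 2 * eps
            lo = objective(image, template, d, lam)
            d[i] += eps
            grad.append((hi - lo) / (2 * eps))
        d = [di - lr * g for di, g in zip(d, grad)]
    return d

image, template = make_bump(12.0), make_bump(10.0)
d_weak = register(image, template, lam=0.01)   # weak regularization
d_strong = register(image, template, lam=1.0)  # strong regularization
```

With weak regularization the field tracks the data closely (rougher warp, better fit); with strong regularization the field is smoother but the data fit degrades. The same balance, applied during atlas construction, controls atlas sharpness.<br />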
<br />
[[Image:RegSeg.png|center|400px|]]<br />
<br />
We employ a generative model for the joint registration and segmentation of images (see figures below). The atlas construction process arises naturally as estimation of the model parameters. This framework allows the computation of unbiased atlases from manually labeled data at various degrees of "sharpness", as well as the joint registration and segmentation of a novel brain in a consistent manner.<br />
<br />
<center><br />
<table><br />
<tr><br />
<td><br />
[[Image:JointRegSeg.png|thumb|center|420px| Joint Registration-Segmentation]]<br />
</td><br />
<td><br />
[[Image:GraphicalModel.png|thumb|center|400px|"A" is an atlas used to generate the label map L' in some universal atlas space. The atlas A and label map L' generate image I'. S is the smoothness parameter that generates random warp field R. This warp is then applied to the label map L' and image I' to create the label map L and the image I. We assume the label map L is available for the training images, but not for the test image. The image I is observed in both training and test cases.]]<br />
</td><br />
</tr><br />
</table><br />
</center><br />
<br />
We use the generative model to compute atlases from manually labeled data at various degrees of “sharpness” and the joint registration-segmentation of a new brain with these atlases. Using this framework, we investigate the tradeoff between warp regularization and image fidelity, i.e. the smoothness of the new subject warp and the sharpness of the atlas. We compare three special cases of our framework, namely: <br />
<br />
(1) Progressive registration of a new brain to increasingly “sharp” atlases using increasingly flexible warps, by initializing each registration stage with the optimal warps from a “blurrier” atlas. We call this multiple atlases, multiple warp scales (MAMS).<br />
<br />
(2) Progressive registration to a single atlas with increasingly flexible warps. We call this single atlas, multiple warp scales (SAMS).<br />
<br />
(3) Registration to a single atlas with fixed constrained warps. We call this single atlas, single warp scale (SASS).<br />
<br />
== Experimental Results ==<br />
<br />
We use the Dice coefficient as the measure of segmentation quality. From the graph below, we note that the optimal algorithms correspond to a unique balance between atlas “sharpness” and warp regularization. Our experiments show that the optimal parameter values corresponding to this balance can be determined using cross-validation. The optimal parameter values are robust across subjects and are the same for both co-registration of the training data and registration of a new subject. This suggests that a single atlas at an optimal sharpness is sufficient to achieve the best segmentation results. Furthermore, our experiments suggest that segmentation accuracy is tolerant to a small mismatch between atlas sharpness and warp smoothness.<br />
<br />
[[Image:AvgResults.jpg|thumb|center|300px|Plot of Dice as a function of the warp smoothness S. Note that S is on a log scale. <math>\alpha</math> corresponds to the sharpness of the atlas used.]]<br />
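The Dice overlap used as the quality measure above can be sketched as follows, with binary label maps represented as sets of voxel indices (the example values are made up):<br />

```python
def dice(a, b):
    # Dice overlap between two binary label maps given as sets of voxel
    # indices: 2|A intersect B| / (|A| + |B|); 1.0 means perfect agreement.
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

auto = {(10, 12), (10, 13), (11, 12), (11, 13)}    # automatic segmentation
manual = {(10, 12), (10, 13), (11, 12), (12, 12)}  # manual segmentation
score = dice(auto, manual)  # 2*3 / (4+4) = 0.75
```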
<br />
In the figure below, we display the percentage improvement of SASS over FreeSurfer [1,2]. For each of the 35 structures for each hemisphere, we perform a one-sided paired-sampled t-test between SASS and FreeSurfer, where each subject is considered a sample. We use the False Discovery Rate (FDR) to correct for multiple comparisons. In the left hemisphere, SASS achieves statistically significant improvement over FreeSurfer for 17 structures (FDR < 0.05), while the remaining structures yield no statistical difference. In the right hemisphere, SASS achieves improvement for 11 structures (FDR < 0.05), while the remaining structures yield no statistical difference. The p-values for the left and right hemispheres are pooled together for the False Discovery Rate analysis.<br />
<br />
<center><br />
<table><br />
<tr><br />
<td><br />
[[Image:Lh.PercentImprove1.png|thumb|center|150px|Left Medial]]<br />
</td><br />
<td><br />
[[Image:Lh.PercentImprove2.png|thumb|center|150px|Left Lateral]]<br />
</td><br />
<td><br />
[[Image:Rh.PercentImprove2.png|thumb|center|150px|Right Lateral]]<br />
</td><br />
<td><br />
[[Image:Rh.PercentImprove1.png|thumb|center|142px|Right Medial]]<br />
</td><br />
</tr><br />
</table><br />
</center><br />
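The False Discovery Rate control used above is commonly implemented with the Benjamini-Hochberg step-up procedure (assumed here); a minimal sketch with illustrative p-values, not the study's actual values:<br />

```python
def fdr_bh(pvals, q=0.05):
    # Benjamini-Hochberg: find the largest k such that p_(k) <= (k/m) * q
    # over the sorted p-values, and reject the k smallest hypotheses.
    # Returns a per-test boolean list in the original order.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

# Eight hypothetical per-structure p-values.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
rejected = fdr_bh(pvals, q=0.05)
```

Note that five of these p-values fall below 0.05, but only the two smallest survive the FDR correction, which is exactly the multiple-comparison control applied to the pooled left- and right-hemisphere tests.<br />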
<br />
<br />
[1] Fischl, Sereno and Dale. High-resolution intersubject averaging and a coordinate system for the cortical surface. Human Brain Mapping, 8(4):272--284, 1999<br />
<br />
[2] Fischl, van der Kouwe, Destrieux, Halgren, Segonne, Salat, Busa, Seidman, Goldstein, Kennedy, Caviness, Makris, Rosen and Dale. Automatically Parcellating the Human cerebral Cortex. Cerebral Cortex, 14:11--22, 2004<br />
<br />
= Key Investigators =<br />
<br />
* MIT: [http://people.csail.mit.edu/ythomas/ B.T. Thomas Yeo], Mert Sabuncu, Polina Golland<br />
* Harvard: Rahul Desikan, Bruce Fischl<br />
<br />
= Publications =<br />
<br />
<br />
[http://www.na-mic.org/publications/pages/display?search=Projects:RegistrationRegularization&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Optimal Atlas Regularization in Image Segmentation]<br />
<br />
<br />
<br />
[[Category: Registration]] [[Category:Segmentation]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:GroupwiseRegistration&diff=52219Projects:GroupwiseRegistration2010-05-11T20:07:03Z<p>Melonakos: </p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]]<br />
__NOTOC__<br />
= Non-rigid Groupwise Registration =<br />
<br />
We aim to provide efficient groupwise registration algorithms <br />
for population analysis of anatomical structures.<br />
Here we extend a previously demonstrated entropy based groupwise registration method <br />
to include a free-form deformation model based on B-splines. <br />
We provide <br />
an efficient implementation using stochastic gradient descents <br />
in a multi-resolution setting. <br />
We demonstrate the method in application to a set of 50 MRI brain scans <br />
and compare the results to a pairwise approach <br />
using segmentation labels to evaluate the quality of alignment.<br />
Our results indicate that increasing the complexity of the deformation model<br />
improves registration accuracy significantly, especially at cortical regions.<br />
<br />
= Description =<br />
<br />
We first describe <br />
the stack entropy cost function and the B-spline based deformation model.<br />
Then we discuss implementation details. <br />
Next, we compare groupwise registration to the pairwise method and<br />
evaluate both methods using label prediction values.<br />
<br />
<br />
''Objective Function''<br />
<br />
[[Image:GroupwiseStackBiModal.PNG|thumb|350px|Figure 1: On the left is shown a stack of images <br />
and a sample pixel stack around a cortical region. On the right is shown the Gaussian (blue) fitted to <br />
a real sample from the dataset we used, along with the non-parametric density estimate (red). <br />
Note that the distribution is bi-modal because of the white matter-gray matter transition.]]<br />
<br />
In order to align all subjects in the population,<br />
we consider sum of pixelwise entropies as a joint alignment criterion.<br />
The justification for this approach is that if the images are aligned properly, <br />
intensity values at corresponding coordinate locations from all the images <br />
will form a low entropy distribution.<br />
This approach does not require the use of a reference subject; all<br />
subjects are simultaneously driven to the common tendency of the population.<br />
<br />
We employ a kernel based density estimation scheme to estimate univariate entropies.<br />
Using the entropy measure we obtain a better treatment of transitions between different<br />
tissue types, such as gray matter-white matter transitions in the cortical regions<br />
where intensity distributions can be bi-modal as shown in Figure 1.<br />
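A minimal sketch of this stack-entropy objective, assuming toy 1-D "images", a Gaussian Parzen window, and an arbitrarily chosen bandwidth (this is our illustration, not the ITK implementation):<br />

```python
import math

def parzen_entropy(samples, sigma=0.1):
    # Leave-one-out Parzen (Gaussian kernel) estimate of the entropy of one
    # pixel stack: H is approximately -(1/N) * sum_i log p(x_i).
    n = len(samples)
    h = 0.0
    for i, xi in enumerate(samples):
        p = sum(math.exp(-(xi - xj) ** 2 / (2.0 * sigma ** 2))
                for j, xj in enumerate(samples) if j != i)
        p /= (n - 1) * math.sqrt(2.0 * math.pi) * sigma
        h -= math.log(max(p, 1e-300)) / n
    return h

def stack_entropy(images, coords):
    # Joint alignment criterion: sum of per-voxel stack entropies.
    return sum(parzen_entropy([img[c] for img in images]) for c in coords)

# Toy 2-voxel "images": aligned stacks are tight, misaligned ones are spread.
aligned = [[0.20, 0.80], [0.20, 0.80], [0.21, 0.79], [0.19, 0.81]]
misaligned = [[0.20, 0.80], [0.80, 0.20], [0.50, 0.50], [0.35, 0.65]]
```

Well-aligned images produce tight, low-entropy pixel stacks, so minimizing this sum drives all subjects toward alignment without a reference subject; the kernel estimate also handles bi-modal stacks such as the gray/white transition in Figure 1.<br />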
<br />
<br />
<br />
''Deformation Model''<br />
<br />
[[Image:GroupwiseBspline.png|thumb|350px|Figure 2: An example deformation field. The local neighborhood affecting the deformation is overlayed on the image.]]<br />
<br />
For the nonrigid deformation model,<br />
we define a combined transformation consisting of <br />
a global and a local component<br />
<br />
:<math><br />
T(\mathbf{x}) = T_{local}({T_{global}(\mathbf{x})})<br />
</math><br />
<br />
where <math>T_{global}</math> is a twelve-parameter affine transform and <br />
<math>T_{local}</math> is a deformation model based on B-splines.<br />
<br />
The free form deformation can be written as the 3-D tensor product<br />
of 1-D cubic B-splines.<br />
<br />
:<math><br />
T_{local}(\mathbf{x}) = \mathbf{x} + \sum_{l=0}^3\sum_{m=0}^3\sum_{n=0}^3 B_l(u)B_m(v)B_n(w) \Phi_{i+l,j+m,k+n}<br />
</math><br />
<br />
<br />
<br />
where <math>B_l</math> is the <math>l</math>'th cubic B-spline basis function, and <math>(u,v,w)</math> is the distance <br />
to <math>(x,y,z)</math> from the control point <math>\Phi_{i,j,k}</math> as shown in Figure 2.<br />
<br />
The deformation of a given point can be found using only the control points in the neighborhood of the given point. Therefore,<br />
optimization of the objective function can be implemented efficiently.<br />
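A 1-D analogue of the deformation model above (the real model is the 3-D tensor product); the boundary clamping is our simplification:<br />

```python
def bspline_basis(u):
    # Cubic B-spline basis functions B_0..B_3; they sum to one on [0, 1).
    return [(1 - u) ** 3 / 6.0,
            (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
            (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
            u ** 3 / 6.0]

def t_local(x, phi, spacing):
    # Displacement at x depends only on the four surrounding control points
    # (local support), which is what makes the optimization efficient.
    i = int(x / spacing) - 1
    u = x / spacing - int(x / spacing)
    disp = 0.0
    for l, b in enumerate(bspline_basis(u)):
        idx = min(max(i + l, 0), len(phi) - 1)   # clamp at the boundary
        disp += b * phi[idx]
    return x + disp

phi = [2.0] * 10            # uniform control-point displacements
y = t_local(5.3, phi, 1.0)  # every point shifts by 2 (partition of unity)
```

Because of the local support, perturbing a distant control point leaves the transform at x unchanged, so each gradient step of the registration touches only a handful of parameters.<br />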
<br />
<br />
''Implementation''<br />
<br />
[[Image:GroupwiseIncreasingScale.PNG|thumb|350px|Figure 3: A registration schedule using gradually increasing deformation field complexity. From left to right deformation fields for increasing deformation field complexity. ]]<br />
<br />
We provide an efficient optimization scheme by using line search with the gradient descent algorithm.<br />
For computational efficiency, we employ a stochastic subsampling procedure. <br />
In each iteration of the algorithm, <br />
a random subset is drawn from all samples and the objective function is evaluated<br />
only on this sample set. <br />
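The stochastic subsampling idea can be sketched on a toy problem: estimating a 1-D translation by gradient descent, where each iteration evaluates the SSD gradient only on a random subset of coordinates. All parameter values here are illustrative, not those of the actual implementation:<br />

```python
import math
import random

def sgd_translate(fixed, moving, n_samples=8, iters=300, lr=0.5, seed=1):
    # Estimate a translation t minimising sum_x (moving(x+t) - fixed(x))^2,
    # using a fresh random subset of coordinates at every iteration.
    rng = random.Random(seed)
    n = len(fixed)
    t = 0.0
    for _ in range(iters):
        coords = rng.sample(range(n), n_samples)
        g = 0.0
        for x in coords:
            pos = min(max(x + t, 0.0), n - 1.0)   # clamp at the borders
            i = int(pos)
            f = pos - i
            j = min(i + 1, n - 1)
            val = (1 - f) * moving[i] + f * moving[j]
            slope = moving[j] - moving[i]          # d(val)/dt on this segment
            g += 2.0 * (val - fixed[x]) * slope
        t -= lr * g / n_samples
    return t

# Toy data: the moving bump must shift by -2 to match the fixed one.
fixed = [math.exp(-((i - 12) ** 2) / 8.0) for i in range(32)]
moving = [math.exp(-((i - 10) ** 2) / 8.0) for i in range(32)]
t_hat = sgd_translate(fixed, moving)
```

Each step uses a cheap, noisy gradient estimate; averaged over iterations the estimates are unbiased, which is why subsampling a small fraction of the voxels suffices in practice.<br />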
<br />
To obtain a dense deformation field capturing anatomical variations at different scales,<br />
we gradually increase the complexity of the deformation field by refining the grid of B-spline control points.<br />
<br />
<br />
[[Image:GroupwiseMultiResolution.PNG|thumb|350px|Figure 4: An example showing the multi-resolution scheme. The registration is first performed at a coarse scale by downsampling the input.<br />
Results from coarser scales are used to initialize<br />
optimization at finer scales. Also note that the objective function is only evaluated on a small subset of input points. ]]<br />
<br />
As in every iterative search algorithm, local minima pose a significant problem. <br />
To avoid local minima we use a multi-resolution optimization scheme for each resolution level of the deformation field.<br />
<br />
We implemented our groupwise registration method in a multi-threaded fashion using Insight Toolkit(ITK)<br />
and made the implementation publicly available [http://www.na-mic.org/svn/NAMICSandBox/trunk/MultiImageRegistration/ (code)].<br />
<br />
''Results''<br />
<br />
[[Image:GroupwiseMeanImages.png|thumb|350px|Figure 5: Central slices of 3D volumes for groupwise registration. Rows show mean and standard deviation images followed by label overlap images for GM, WM and CSF labels. Columns display the results for affine and B-splines with grid spacing 32, 16 and 8 voxels, respectively. ]]<br />
[[Image:GroupwiseBarsWMGM.png|thumb|350px|Figure 6: GM, WM DICE measures computed for different deformation field resolution levels. Blue bars show the results for groupwise registration and the red bars show the results for registration to the mean setting.]]<br />
[[Image:GroupwiseBarsManual.png|thumb|350px|Figure 7: DICE measures for manually segmented labels. Bars correspond to the same setting as in figure 6. ]]<br />
<br />
<br />
We tested the groupwise registration algorithm on a MR brain dataset. <br />
The dataset consists of 50 MR brain images from three subgroups:<br />
schizophrenia, affective disorder, and normal control subjects. <br />
MR images are T1 scans with 256x256x128 voxels <br />
and 0.9375x0.9375x1.5 mm<sup>3</sup> spacing. <br />
For each image in the dataset, an automatic tissue classification<br />
was performed, yielding gray matter (GM), white matter (WM) and cerebro-spinal<br />
fluid (CSF) labels. In addition, manual segmentations of four subcortical regions <br />
(left and right hippocampus and amygdala) and four cortical regions (left and right superior temporal <br />
gyrus and para-hippocampus) were available for each MR image.<br />
<br />
Increasing the complexity of the deformation model improves the<br />
accuracy of prediction. An interesting open problem is automatically<br />
identifying the appropriate deformation complexity before the<br />
registration overfits and the accuracy of prediction goes down. We<br />
also note that the alignment of the subcortical structures is much<br />
better than that of the cortical regions. It is not surprising as the<br />
registration algorithm does not use the information about geometry of the cortex<br />
to optimize the alignment of the cortex. In addition, it has<br />
been often observed that the cortical structures exhibit higher<br />
variability across subjects when considered in the 3D volume rather<br />
than modelled on the surface.<br />
<br />
Our experiments highlight the need for further research in developing<br />
evaluation criteria for image alignment. We used the standard Dice<br />
measure, but it is not clear that this measurement captures all the<br />
nuances of the resulting alignment.<br />
<br />
Comparing the groupwise registration to the pairwise approach, we<br />
observe that the sharpness of the mean images and the tissue overlaps<br />
in Figure 5 look visually similar. From Figures 6 and 7, we note that<br />
groupwise registration performs slightly better than the pairwise<br />
setting in most of the cases, especially as we increase the complexity<br />
of the warp. This suggests that considering the population as a whole<br />
and registering subjects jointly brings the population into better<br />
alignment than matching each subject to a mean template<br />
image. However, the advantage shown here is only slight; more<br />
comparative studies are needed of the two approaches.<br />
<br />
We compare our groupwise algorithm to a pairwise method in which each subject is registered<br />
to the mean intensity image using the sum of squared differences.<br />
During each iteration, the mean image is taken as the reference and<br />
every subject is registered to it;<br />
the mean image is then updated and the pairwise registrations are repeated until convergence.<br />
<br />
The images in Figure 5 show central slices of 3D images after registration. <br />
Visually, mean images get sharper and variance images becomes darker, especially around central ventricles and cortical regions. <br />
We can observe that anatomical variability at cortical regions causes significant blur for <br />
GM, WM and CSF structures using affine registration. <br />
Finer scales of B-spline deformation fields capture a significant part of this anatomical variability and <br />
the tissue label overlap images get sharper.<br />
<br />
=Asymmetric Image-Template Registration=<br />
<br />
A natural requirement in pairwise image registration is that the resulting deformation is independent of the order of the images. This constraint is typically achieved via a symmetric cost function and has been shown to reduce the effects of local optima. Consequently, symmetric registration has been successfully applied to pairwise image registration as well as the spatial alignment of individual images with a template. However, recent work has shown that the relationship between <br />
an image and a template is fundamentally asymmetric. In this work, we develop a method that reconciles the practical advantages of symmetric registration with the asymmetric nature of image-template registration by adding a simple correction factor to the symmetric cost function. We instantiate our model within a log-domain diffeomorphic registration <br />
framework. Our experiments show that exploiting the asymmetry in image-template registration improves alignment in the image coordinates.<br />
<br />
= Key Investigators =<br />
<br />
* MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Serdar K. Balci, Polina Golland.<br />
* Harvard: Sylvain Bouix, Martha E. Shenton, Bruce Fischl, W.M. (Sandy) Wells.<br />
* Kitware: Brad Davis, Louis Ibanez.<br />
<br />
= Publications =<br />
<br />
<br />
[http://www.na-mic.org/publications/pages/display?search=Projects%3AGroupwiseRegistration&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Groupwise Registration]<br />
<br />
[[Category: Registration]] [[Category:Segmentation]] [[Category:MRI]] [[Category:Schizophrenia]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:MultimodalAtlas&diff=52218Projects:MultimodalAtlas2010-05-11T20:05:58Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]], [[DBP2:Harvard|Harvard DBP2]]<br />
__NOTOC__<br />
= Multi-Modal Atlas =<br />
<br />
Today, computational anatomy studies are mainly hypothesis-driven, aiming to identify and characterize structural or functional differences between, for instance, a group of patients with a specific disorder and control subjects. This type of approach has two premises: clinical classification of the subjects and spatial correspondence across the images. In practice, achieving either can be challenging. First, the complex spectrum of symptoms of neuro-degenerative disorders like schizophrenia and overlapping symptoms across different types of dementia like Alzheimer's disease, delirium and depression make a diagnosis based on standardized clinical tests like the mental status examination difficult. Second, across-subject correspondence in the images is a particularly hard problem that requires different approaches in various contexts. A popular technique is to normalize all subjects into a standard space, such as the Talairach space, by registering each image with a single, universal template image that represents an average brain. However, the quality of such an approach is limited by the accuracy with which the universal template represents the population in the study.<br />
<br />
With the increasing availability of medical images, data-driven algorithms offer the ability to probe a population and potentially discover sub-groups that may differ in unexpected ways. In this paper, we propose and demonstrate an efficient probabilistic clustering algorithm, called '''iCluster''', that:<br />
<br />
* computes a small number of templates that summarize a given population of images,<br />
* simultaneously co-registers all the images using a nonlinear transformation model,<br />
* assigns each input image to a template.<br />
<br />
The templates are guaranteed to live in an affine-normalized space, i.e., they are spatially aligned with respect to an affine transformation model.<br />
<br />
= Description =<br />
<br />
[[Image:GenerativeModel.png|center|400px|Generative Model used in iCluster.]]<br />
<br />
'''iCluster''' is derived from a simple generative model. We assume that there are a fixed and known number of template images. Then the process that generates an observed image is as follows: a template is randomly drawn – note that the probability that governs this process doesn’t have to be uniform. Next, the chosen template is warped with a random transformation and i.i.d Gaussian noise is added to this warped image to generate an observed image. This process is repeated multiple times to generate a collection of images.<br />
<br />
We formulate the problem as maximum likelihood estimation and employ a Generalized Expectation-Maximization (GEM) algorithm to solve it. The GEM algorithm is derived using Jensen's inequality and has three steps: <br />
*E-step: Given the estimates for the template images, template prior probabilities and noise variance image estimates from the previous iteration, the algorithm updates the memberships of each image as the posterior probability of an image being generated from a particular template.<br />
*T-step: Given the membership estimates from the previous E-step, the algorithm updates the template image, template prior and noise variance estimates using closed-form expressions.<br />
*R-step: Given the membership, template, template prior and noise variance estimates from the previous iterations, the algorithm updates the warps for each image. This step is a collection of pairwise registration instances, where each image is aligned with an effective template image. The effective template image is a weighted average of the current individual templates, where the weights are the current memberships.<br />
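With the warps held fixed (i.e., ignoring the R-step), the E- and T-steps reduce to standard Gaussian mixture updates over whole images. A minimal toy sketch, with illustrative data rather than real MRI:<br />

```python
import numpy as np

def e_step(images, templates, prior, var):
    # Posterior probability (membership) of each image under each template.
    d2 = ((images[:, None, :] - templates[None, :, :]) ** 2).sum(-1)
    log_p = np.log(prior)[None, :] - d2 / (2 * var)
    log_p -= log_p.max(1, keepdims=True)      # log-domain for numerical stability
    w = np.exp(log_p)
    return w / w.sum(1, keepdims=True)

def t_step(images, w):
    # Closed-form updates for the templates, template priors and noise variance.
    templates = (w.T @ images) / w.sum(0)[:, None]
    prior = w.mean(0)
    resid = ((images[:, None, :] - templates[None, :, :]) ** 2).sum(-1)
    var = (w * resid).sum() / (w.sum() * images.shape[1])
    return templates, prior, var

# Toy data: two groups of flat "images" around intensities 0 and 3.
rng = np.random.default_rng(1)
images = np.vstack([rng.normal(0, 0.1, (5, 20)), rng.normal(3, 0.1, (5, 20))])
templates, prior, var = images[[0, 5]].copy(), np.ones(2) / 2, 1.0
for _ in range(10):
    w = e_step(images, templates, prior, var)
    templates, prior, var = t_step(images, w)
```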
<br />
The resulting algorithm is fast and efficient: each iteration's time and memory requirements are linear in the number of voxels, input images and templates.<br />
We employ a stochastic subsampling strategy in each of the E, T and R steps: a random subsample of voxels (typically less than 1% of the total) is used for the computations. <br />
In the R-step, we employ a B-spline nonlinear transformation model and the optimization is done using gradient descent. During this optimization, the gradients are normalized so that each cluster (i.e., the images assigned to the same template image) is subject to an average of zero deformation, which is achieved by subtracting the average gradient from the individual gradients. This is an extension of the "anchoring" strategy used in groupwise registration algorithms.<br />
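For hard cluster assignments, the anchoring step amounts to subtracting each cluster's mean gradient; a simplified sketch (the actual algorithm uses soft memberships and B-spline transformation parameters, and all names here are illustrative):<br />

```python
import numpy as np

def anchor_gradients(grads, assignments, n_clusters):
    """Subtract each cluster's average gradient so that, per cluster,
    the images undergo zero average deformation (anchoring sketch).
    grads: (n_images, n_params); assignments: (n_images,) cluster indices."""
    grads = grads.copy()
    for k in range(n_clusters):
        idx = assignments == k
        if idx.any():
            grads[idx] -= grads[idx].mean(axis=0)  # cluster mean becomes zero
    return grads
```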
<br />
== Results ==<br />
<br />
We present two experiments. The first one demonstrates the use of iCluster for building a multi-template atlas in a segmentation application. In the second experiment, we employ iCluster to compute multiple templates of a large data set that contains 416 brain MR images. Our results show that these templates correspond to different age groups. We find the correlation between the image-based clustering and demographic and clinical characteristics particularly intriguing, given that iCluster did not employ the latter information.<br />
<br />
'''Experiment 1: Segmentation Label Alignment'''<br />
<br />
In this experiment, we used a data set of 50 whole-brain MR images (of size 256x256x124 and voxel dimensions 0.9375x0.9375x1.5 mm) that<br />
contained 16 patients with first-episode schizophrenia (SZ), 17 patients with first-episode affective disorder (AFF) and 17 healthy subjects (CON). First-episode patients are relatively free of chronicity-related confounds such as the long-term effects of medication; thus any structural differences between the three groups are subtle, local and difficult to identify in individual scans.<br />
<br />
The 50 MR images also contained manual labels of certain medial temporal lobe structures: the superior temporal gyrus (STG), hippocampus (HIPP), amygdala (AMY) and parahippocampal gyrus (PHG). We used these manual labels to explore label alignment across subjects under different groupings: on the whole data set, on random partitionings of the data set into two subsets of equal size, on the clinical grouping, and on the image-based clustering as determined by iCluster.<br />
<br />
[[Image:Two_templates_shenton50.png|center|600px|Two templates in a 50 subject MRI.]]<br />
<br />
We spatially normalized all the subjects into ''a standard space'' using the iCluster algorithm with one template and a 32x32x32 B-spline transformation model, and explored the alignment of the manual labels for clinical and image-based groupings. For each region of interest, such as the amygdala, we computed the modified Hausdorff distance (MHD) in the standard space. MHD is a non-symmetric distance measure between the boundaries of two labels and is zero for perfect alignment. The MHD values for each region of interest were then summed to obtain a total label distance for each ordered subject pair.<br />
The following figure shows the total label distance for all subject pairings under the different groupings. We note that the image-based clustering of iCluster (with both two and three templates)<br />
groups subjects that have better label alignment, whereas the clinical grouping demonstrates no such coherence.<br />
<br />
[[Image:LabelAlignmentMatrixShenton50.png|center|600px|Label Alignment Matrices for the three groupings in the Shenton50 data set.]]<br />
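The directed modified Hausdorff distance used above can be computed from the two label boundaries' point sets; a minimal numpy sketch:<br />

```python
import numpy as np

def modified_hausdorff(a, b):
    """Directed modified Hausdorff distance: the mean, over the boundary
    points of a, of the Euclidean distance to the nearest boundary point
    of b. Non-symmetric, and zero when the boundaries align perfectly.
    a, b: (n, d) arrays of boundary-point coordinates."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1)).mean()
```

The total label distance for an ordered subject pair is then the sum of this quantity over the regions of interest.<br />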
<br />
<br />
'''Experiment 2: Age groups in the OASIS data set'''<br />
<br />
In this experiment, we used the OASIS data set [http://www.oasis-brains.org] which consists of 416 pre-processed (skull stripped and gain-field corrected) brain MR images of subjects aged 18-96 years, including individuals with early-stage Alzheimer's disease (AD). We ran iCluster on the whole data set while varying the number of templates from 2 to 6. Each run took approximately 4-8<br />
hours on a 16-processor PC with 128GB RAM. For two and three templates, the algorithm computed unique and structurally different templates. We observed that these templates were robust: they were the same for random subsets of the data set of as few as 60 subjects. For larger numbers of templates, however, we observed that the computed templates were not all unique, corresponded to single outlier subjects, or were not robust to random sub-sampling of the data set.<br />
<br />
The following figure shows the three robust templates computed by iCluster.<br />
<br />
[[Image:Three_templates_oasis.png|center|600px|Three templates of the OASIS data.]]<br />
<br />
The following figure shows the difference images between the three templates shown above.<br />
<br />
<br />
[[Image:Difference_templates_oasis.png|center|600px|Difference between the three templates of the OASIS data]]<br />
<br />
The following figure shows the age distributions, estimated for each cluster identified by the algorithm using Parzen windowing with a Gaussian kernel with a standard deviation of 4 years.<br />
<br />
[[Image:Age_distributions_oasis.png|center|600px|Age groups in the OASIS data]]<br />
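The Parzen-window estimate above is simply an average of Gaussian kernels centred at the subjects' ages; a short sketch with the 4-year kernel standard deviation used in the figure:<br />

```python
import numpy as np

def parzen_density(ages, grid, sigma=4.0):
    """Parzen-window density estimate: the average of Gaussian kernels of
    standard deviation sigma (in years) centred at each subject's age,
    evaluated at the points in grid."""
    diffs = grid[:, None] - np.asarray(ages, dtype=float)[None, :]
    k = np.exp(-0.5 * (diffs / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return k.mean(axis=1)
```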
<br />
<br />
'''Software'''<br />
<br />
The algorithm is currently implemented in the Insight ToolKit (ITK) and will be made publicly available. We also plan to integrate it into Slicer.<br />
<br />
= Key Investigators =<br />
<br />
*MIT: Mert R. Sabuncu, Serdar K. Balci and Polina Golland<br />
*Harvard: M.E. Shenton, M. Kubicki and S. Bouix<br />
<br />
= Publications =<br />
<br />
[http://www.na-mic.org/publications/pages/display?search=Projects%3AMultimodalAtlas&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Multimodal Atlas]<br />
<br />
<br />
[[Category: Registration]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:BayesianMRSegmentation&diff=52217Projects:BayesianMRSegmentation2010-05-11T20:05:39Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]]<br />
__NOTOC__<br />
= Bayesian Segmentation of MRI Images =<br />
<br />
= Description =<br />
<br />
The aim of this project is to develop, implement, and validate a generic method for segmenting MRI images that automatically adapts to different acquisition sequences. Towards this end, we design parametric computational models of how MRI images are generated, and then use these models to obtain automated segmentations in a Bayesian framework.<br />
<br />
The model we have developed incorporates a ''prior'' distribution that makes predictions about where neuroanatomical labels typically occur throughout an image, and is based on a generalization of probabilistic atlases that uses a deformable, compact tetrahedral mesh representation. The model also includes a ''likelihood'' distribution that predicts how a label image, where each voxel is assigned a unique neuroanatomical label, translates into an MRI image, where each voxel has an intensity.<br />
<br />
Given an image to be segmented, we first estimate the parameters of the model that are most probable in light of the data. The parameter estimation involves finding the deformation that optimally warps the mesh-based probabilistic atlas onto the image under study, estimating MRI intensity inhomogeneities corrupting the image, as well as finding the mean intensity and the intensity variance for each of the structures to be segmented. Once these parameters are estimated, the most probable image segmentation is obtained.<br />
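As an illustration, once the atlas deformation, bias field, and class intensity parameters are fixed, the final labeling reduces to a voxel-wise MAP decision. A simplified single-channel sketch (all names and the flattened-array layout are illustrative, not the actual implementation):<br />

```python
import numpy as np

def map_segmentation(intensities, atlas_prior, means, variances):
    """MAP labeling: for each voxel, pick the label maximizing the product of
    the warped atlas prior and the Gaussian intensity likelihood.
    intensities: (n_voxels,); atlas_prior: (n_voxels, n_labels);
    means, variances: (n_labels,) per-structure Gaussian parameters."""
    d = intensities[:, None] - means[None, :]
    log_lik = -0.5 * (d ** 2 / variances + np.log(2 * np.pi * variances))
    log_post = np.log(atlas_prior + 1e-12) + log_lik   # work in log domain
    return log_post.argmax(axis=1)
```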
<br />
== Application to Hippocampal Subfield Segmentation ==<br />
<br />
We have used our technique to automatically segment several subfields of the hippocampus directly from ultra-high resolution ''in vivo'' MRI data. Recent developments in MR data acquisition technology have started to yield images that show anatomical features of the hippocampal formation at an unprecedented level of detail, providing the basis for hippocampal subfield measurement. Because of the role of the hippocampus in human memory and its implication in a variety of disorders and conditions, the ability to reliably and efficiently quantify its subfields through ''in vivo'' neuroimaging is of great interest to both basic neuroscience and clinical research.<br />
<br />
We have validated our technique by comparing our automated segmentation results with corresponding manual delineations in ultra-high resolution MRI scans (voxel size 0.38x0.38x0.80mm^3) of 10 individuals. For each of seven structures of interest (fimbria, CA1, CA2/3, CA4/DG, presubiculum, subiculum, and hippocampal fissure), we calculated the Dice overlap coefficient, defined as the volume of overlap between the automated and manual segmentation divided by their mean volume. We used a leave-one-out cross-validation strategy, in which we built an atlas mesh from the delineations in nine subjects, and used this to segment the image of the remaining subject. We repeated this process for each of the 10 subjects, and compared the automated segmentation results with the corresponding manual delineations. <br />
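The Dice coefficient as defined here (volume of overlap divided by the mean volume, equivalent to the usual 2|A∩B|/(|A|+|B|)) can be computed as:<br />

```python
import numpy as np

def dice(auto_mask, manual_mask):
    """Dice overlap: volume of the intersection of the automated and manual
    segmentations, divided by the mean of the two volumes."""
    a = np.asarray(auto_mask, dtype=bool)
    m = np.asarray(manual_mask, dtype=bool)
    inter = np.logical_and(a, m).sum()
    return inter / (0.5 * (a.sum() + m.sum()))
```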
<br />
Figure 1 compares the manual and automated segmentation results qualitatively on a set of cross-sectional slices. The upper half of figure 2 shows the average Dice overlap measure for each of the structures of interest, along with error bars that indicate the standard errors around the mean. The lower half of the figure shows, for each structure, the volume differences between the automated and manual segmentations relative to their mean volumes. An example of our mesh-based probabilistic atlas, derived from nine manually labeled hippocampi, is shown in figure 3.<br />
<br />
[[Image:MITHippocampalSubfieldSegmentationQualitative.png|thumb|center|600px|Fig 1. From left to right: ultra-high resolution MRI data, manual delineations, and corresponding automated segmentations.]]<br />
<br />
<br />
<table align="center"><br />
<tr><br />
<td><br />
[[Image:MITHippocampalSubfieldSegmentationQuantitative.png|thumb|center|300px|Fig 2. Dice overlap measures (top) and relative volume differences (bottom) between automated and manual segmentations. The colors are as in figure 1.]]<br />
</td><br />
<td><br />
[[Image:MITHippocampalSubfieldSegmentationAtlas.png|thumb|center|300px|Fig 3. Mesh-based probabilistic atlas, derived from manual delineations in nine subjects, warped onto the 10th subject shown in figure 1. The colors are as in figure 1.]]<br />
</td><br />
</tr><br />
</table><br />
<br />
= Integration into Slicer =<br />
<br />
We are currently working on integrating our method into 3D Slicer. Our aim is to provide an implementation that is fast and intuitive enough to be useful in hospital environments.<br />
<br />
= Key Investigators =<br />
<br />
* MIT: Koen Van Leemput, Sylvain Jaume, Polina Golland<br />
* Harvard: Steve Pieper, Ron Kikinis<br />
<br />
= Publications =<br />
<br />
[http://www.na-mic.org/publications/pages/display?search=Projects%3ABayesianMRSegmentation+OR+Projects%3AHippocampalSubfieldSegmentation&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&searchbytag=checked&sponsors=checked| NA-MIC Publications Database on Bayesian Segmentation of MRI Images]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:fMRIClustering&diff=52216Projects:fMRIClustering2010-05-11T20:05:20Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:fMRIAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]]<br />
__NOTOC__<br />
= fMRI Clustering =<br />
<br />
One of the major goals in the analysis of fMRI data is the detection of networks in the brain with similar functional behavior. A wide variety of methods, including hypothesis-driven statistical tests, unsupervised learning methods such as PCA and ICA, and different clustering algorithms, have been employed to find these networks. This project studies the application of model-based clustering algorithms to the identification of functional connectivity in the brain. <br />
<br />
= Clustering for the Exploration of Functional Connectivity =<br />
<br />
'''''Generative Model for Functional Connectivity'''''<br />
<br />
In the classical functional connectivity analysis, networks of interest are<br />
defined based on correlation with the mean time course of a user-selected<br />
`seed' region. Further, the user has to also specify a subject-specific threshold at which correlation<br />
values are deemed significant. In this project, we simultaneously estimate the optimal<br />
representative time courses that summarize the fMRI data well and<br />
the partition of the volume into a set of disjoint regions that are best<br />
explained by these representative time courses. This approach to functional connectivity analysis offers two<br />
advantages. First, it removes the sensitivity of the analysis to the details<br />
of the seed selection. Second, it substantially simplifies group analysis<br />
by eliminating the need for the subject-specific threshold. Our experimental results indicate that<br />
the functional segmentation provides a robust, anatomically meaningful<br />
and consistent model for functional connectivity in fMRI.<br />
<br />
We formulate the problem of characterizing connectivity as a partition of voxels into subsets that are well characterized by a certain number of representative hypotheses, or time courses, based on the similarity of their time courses to each hypothesis. We model the fMRI signal at each voxel as generated by a mixture of Gaussian distributions whose centers are the desired representative time courses. Using the EM algorithm to solve the corresponding model-fitting problem, we alternately estimate the representative time courses and the cluster assignments, iteratively refining a random initialization. <br />
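A hard-assignment counterpart of this alternating procedure is Lloyd's k-means on the voxel time courses. A minimal numpy sketch (with a deterministic initialization for illustration; the model above uses soft EM updates and random restarts):<br />

```python
import numpy as np

def kmeans_timecourses(X, k, n_iter=50):
    """Alternate between assigning each voxel's time course (a row of X) to
    the nearest representative time course and re-averaging each cluster.
    Initialization: k rows spread evenly through X (deterministic sketch)."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        centers = np.stack([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return centers, labels
```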
<br />
''' ''Experimental Results'' '''<br />
<br />
We used data from 7 subjects with a diverse set of visual experiments including localizer, morphing, rest, internal tasks, and movie. The functional scans were pre-processed for motion artifacts, manually aligned into the Talairach coordinate system, detrended (removing linear trends in the<br />
baseline activation) and smoothed (8mm kernel).<br />
<br />
Fig. 1 shows the 2-system partition extracted in each subject independently<br />
of all others. It also displays the boundaries of the intrinsic system determined<br />
through the traditional seed selection, showing good agreement between the two<br />
partitions. Fig. 2 presents the results of further clustering the stimulus-driven cluster into two clusters independently for each subject. <br />
<br />
<table><br />
<tr> <th> '''Fig 1. 2-System Parcellation. Results for all 7 subjects.''' <th> '''Fig 2. 3-System Parcellation. Results for all 7 subjects.''' <br />
<tr> <td align="center"> <br />
[[Image:mit_fmri_clustering_parcellation2_shb1_4.png |400px]]<br />
[[Image:mit_fmri_clustering_parcellation2_shb5_6.png |400px]]<br />
[[Image:mit_fmri_clustering_parcellation2_shb7.png |400px]]<br />
<td align="center"><br />
[[Image:mit_fmri_clustering_parcellation3_shb1_3.png |400px]]<br />
[[Image:mit_fmri_clustering_parcellation3_shb4_5.png |400px]]<br />
[[Image:mit_fmri_clustering_parcellation3_shb6.png |400px]]<br />
[[Image:mit_fmri_clustering_parcellation3_shb7.png |400px]]<br />
</table><br />
<br />
Fig. 3 presents the group average of the subject-specific 2-system maps. Color shading shows the proportion of subjects whose clustering agreed with the majority label. Fig. 4 shows the group average of a further parcellation of the intrinsic system, i.e., one of two clusters associated with the non-stimulus-driven regions. To validate the method, we compare these results with the conventional scheme for detection of visually responsive areas. In Fig. 5, color shows the statistical parametric map while solid lines indicate the boundaries of the visual system obtained through clustering. The results illustrate the agreement between the two methods.<br />
<br />
<table><br />
<tr><th> '''Fig 3. 2-System Parcellation. Group-wise result.''' <th> '''Fig 4. Validation: Parcellation of the intrinsic system.'''<br />
<tr> <td align="center"><br />
[[Image:mit_fmri_clustering_parcellation2_xsub.png |thumb|570px]]<br />
<td align="center"><br />
[[Image:mit_fmri_clustering_intrinsicsystem.png |thumb|500px]]<br />
</table><br />
<br />
{|<br />
|+ '''Fig 5. Validation: Visual system.'''<br />
|valign="top"|[[Image:mit_fmri_clustering_validation.png |thumb|1150px]]<br />
|}<br />
<br />
'''''Comparison between Different Clustering Schemes'''''<br />
<br />
As a continuation to the above experiments, we apply two distinct clustering algorithms to functional connectivity analysis: K-Means clustering and Spectral Clustering. The K-Means algorithm assumes that each voxel time course is drawn independently from one of <em>k</em> multivariate Gaussian distributions with unique means and spherical covariances. In contrast, Spectral Clustering does not presume any parametric form for the data. Rather it captures the underlying signal geometry by inducing a low-dimensional representation based on a pairwise affinity matrix constructed from the data. Without placing any <em>a priori</em> constraints, both clustering methods yield partitions that are associated with brain systems traditionally identified via seed-based correlation analysis. Our empirical results suggest that clustering provides a valuable tool for functional connectivity analysis.<br />
<br />
One downside of Spectral Clustering is that it relies on the eigen-decomposition of an <em>NxN</em> affinity matrix, where <em>N</em> is the number of voxels in the whole brain. Since <em>N</em> is on the order of ~200,000 voxels, it is infeasible to compute the full eigen-decomposition given realistic memory and time constraints. To solve this problem, we approximate the leading eigenvalues and eigenvectors of the affinity matrix via the Nystrom Method. This is done by selecting a random subset of "Nystrom Samples" from the data. The affinity matrix and spectral decomposition is computed only for this subset, and the results are projected onto the remaining data points.<br />
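The Nystrom extension can be sketched as follows: compute the affinities only for the random sample, eigendecompose that small block, and project the remaining points through the cross-affinities. This is a simplified version (Gaussian affinity, no graph normalization; parameter names are illustrative):<br />

```python
import numpy as np

def nystrom_eigenvectors(X, n_samples, n_components, gamma=1.0, seed=0):
    """Approximate the leading eigenvectors of the full N x N Gaussian
    affinity matrix by eigendecomposing the small sample-sample block and
    extending to the remaining points via the cross-affinities."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), n_samples, replace=False)
    rest = np.setdiff1d(np.arange(len(X)), idx)
    def affinity(A, B):
        return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    W = affinity(X[idx], X[idx])            # (m, m) sample block, m << N
    C = affinity(X[rest], X[idx])           # (N - m, m) cross block
    vals, vecs = np.linalg.eigh(W)          # eigenvalues in ascending order
    vals, vecs = vals[-n_components:], vecs[:, -n_components:]
    U = np.empty((len(X), n_components))
    U[idx] = vecs
    U[rest] = C @ vecs / vals               # Nystrom extension to the rest
    return U
```

The approximate eigenvectors `U` then replace the exact spectral embedding in the clustering step.<br />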
<br />
'''''Experimental Results'''''<br />
<br />
We validate these algorithms on resting state data collected from 45 healthy young adults (mean age 21.5, 26 female). Four 2mm isotropic functional runs were acquired from each subject. Each scan lasted for 6m20s with TR = 5s. The first 4 time points in each run were discarded, yielding 72 time samples per run. We perform standard preprocessing on each of the four runs, including motion correction by rigid body alignment of the volumes, slice timing correction and registration to the MNI atlas space. The data is spatially smoothed with a 6mm 3D Gaussian filter, temporally low-pass filtered using a 0.08Hz cutoff, and motion corrected via linear regression. Next, we estimate and remove contributions from the white matter, ventricle and whole brain regions (assuming a linear signal model). We mask the data to include only brain voxels and normalize the time courses to have zero mean and unit variance. Finally, we concatenate the four runs into a single time course for analysis and partition the entire brain volume into an increasing number of clusters.<br />
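The nuisance-removal step (white matter, ventricle and whole-brain regressors, under the linear signal model) amounts to a least-squares regression whose residuals are kept; a short sketch:<br />

```python
import numpy as np

def regress_out(data, nuisance):
    """Remove the linear contribution of nuisance time courses from the data.
    data: (n_timepoints, n_voxels); nuisance: (n_timepoints, n_regressors).
    Fits data ~ [nuisance, 1] @ beta by least squares, returns the residuals."""
    X = np.column_stack([nuisance, np.ones(len(nuisance))])  # add an intercept
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    return data - X @ beta
```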
<br />
We first study the robustness of Spectral Clustering to the number of random samples. In this experiment, we start with a 4,000-sample Nystrom set, which is the computational limit of our machine. We then iteratively remove 500 samples and examine the effect on clustering performance. After matching the resulting clusters to those estimated with 4,000 samples, we compute the percentage of mismatched voxels between each trial and the 4,000-sample template. This procedure is repeated twice for each participant.<br />
<br />
<table><br />
<tr> <th> '''Fig 6. Varying the number of Nystrom Samples''' <th> '''Fig 7. Nystrom consistency for 2,000 random samples''' <br />
<tr> <br />
<td align="center"> <br />
[[Image:mit_fmri_clustering_Samples_LogMedian2.jpeg |400px]]<br />
<td align="center"><br />
[[Image:mit_fmri_clustering_Consistency_Box.jpeg |400px]]<br />
</table><br />
<br />
Fig. 6 depicts the median clustering difference when varying the number of Nystrom samples. Values represent the percentage of mismatched voxels w.r.t. the 4,000-sample template. Error bars delineate the <em>10th</em>-<em>90th</em> percentile region. The median clustering difference is less than 1% for 1,000 or more Nystrom samples, and the <em>90th</em> percentile difference is less than 1% for 1,500 or more samples. This experiment suggests that Nystrom-based SC converges to a stable clustering pattern as the number of samples increases. Based on these results, we chose to use 2,000 Nystrom samples for the remainder of this work. At this sample size, less than 5% of the runs for 2, 4 and 5 clusters and approximately 8% of the runs for 3 clusters differed by more than 5% from the 4,000-sample template.<br />
<br />
The box plot in Fig 7. summarizes the consistency of Nystrom-based Spectral Clustering across<br />
different random samplings. The red lines indicate median values, the box corresponds to the upper and lower quartiles, and error bars denote the <em>10th</em> and <em>90th</em> percentiles. Here, we perform SC 10 times on each participant using 2,000 Nystrom samples. We then align the cluster maps and compute the percentage of mismatched voxels between each unique pair of runs. This yields a total of 45 comparisons per participant. In all cases, the median clustering difference is less than 1%, and the <em>90th</em> percentile value is less than 2.1%. Empirically, we find that Nystrom SC predictably converges to a second or third cluster pattern in only a handful of participants. This experiment suggests that we can obtain consistent clusterings with only 2,000 Nystrom samples.<br />
<br />
<table><br />
<tr> ''' Fig 8. Clustering results across participants. The brain is partitioned into 5 clusters using Spectral Clustering/K-Means, and various seeds are selected for Seed-Based Analysis. The color indicates the proportion of participants for whom the voxel was included in the detected system.'''<br />
<tr> <th> '''Spectral Clustering''' <th> '''K-Means''' <th> '''Seed-Based'''<br />
<tr> <br />
<td align="center"> <br />
[[Image:mit_fmri_clustering_SC_5Clust_2.jpeg |275px]]<br />
<td align="center"><br />
[[Image:mit_fmri_clustering_KM_5Clust_1.jpeg |275px]]<br />
<td align="center"><br />
[[Image:mit_fmri_clustering_Seed_PCC.jpeg |275px]]<br />
<tr> <th> '''Cluster 1, Slice 37''' <th> '''Cluster 1, Slice 37''' <th> '''PCC, Slice 37'''<br />
<tr><br />
<td align="center"> <br />
[[Image:mit_fmri_clustering_SC_5Double.jpeg |275px]]<br />
<td align="center"> <br />
[[Image:mit_fmri_clustering_KM_5Double.jpeg |275px]]<br />
<td align="center"> <br />
[[Image:mit_fmri_clustering_Seed_vACC.jpeg |275px]]<br />
<tr> <th> '''Cluster 1, Slice 55''' <th> '''Cluster 1, Slice 55''' <th> '''vACC, Slice 55'''<br />
<tr><br />
<td align="center"> <br />
[[Image:mit_fmri_clustering_SC_5Clust_3.jpeg |275px]]<br />
<td align="center"> <br />
[[Image:mit_fmri_clustering_KM_5Clust_4.jpeg |275px]]<br />
<td align="center"> <br />
[[Image:mit_fmri_clustering_Seed_V1.jpeg |275px]]<br />
<tr> <th> '''Cluster 2, Slice 55''' <th> '''Cluster 2, Slice 55''' <th> '''Visual, Slice 55'''<br />
<tr><br />
<td align="center"> <br />
[[Image:mit_fmri_clustering_SC_5Clust_4.jpeg |275px]]<br />
<td align="center"> <br />
[[Image:mit_fmri_clustering_KM_5Clust_3.jpeg |275px]]<br />
<td align="center"> <br />
[[Image:mit_fmri_clustering_Seed_M1.jpeg |275px]]<br />
<tr> <th> '''Cluster 3, Slice 31''' <th> '''Cluster 3, Slice 31''' <th> '''Motor, Slice 31'''<br />
<tr><br />
<td align="center"> <br />
[[Image:mit_fmri_clustering_SC_5Clust_1.jpeg |275px]]<br />
<td align="center"> <br />
[[Image:mit_fmri_clustering_KM_5Clust_2.jpeg |275px]]<br />
<td align="center"> <br />
[[Image:mit_fmri_clustering_Seed_IPS.jpeg |275px]]<br />
<tr> <th> '''Cluster 4, Slice 31''' <th> '''Cluster 4, Slice 31''' <th> '''IPS, Slice 31'''<br />
<tr><br />
<td align="center"> <br />
[[Image:mit_fmri_clustering_SC_5Clust_5.jpeg |275px]]<br />
<td align="center"> <br />
[[Image:mit_fmri_clustering_KM_5Clust_5.jpeg |275px]]<br />
<td align="center"> <br />
[[Image:mit_fmri_clustering_Colorbar2.jpeg |64px]]<br />
<tr> <th> '''Cluster 5, Slice 37''' <th> '''Cluster 5, Slice 37''' <th><br />
</table><br />
<br />
Fig 8. shows clearly that both Spectral Clustering and K-Means can identify well-known structures such as the default network, the visual cortex, the motor cortex, and the dorsal attention system. Spectral Clustering and K-Means also identified white matter. In general, one would not attempt to delineate this region using seed-based correlation analysis because we regress out the white matter signal during the preprocessing. In our experiments Spectral Clustering and K-Means achieve similar clustering results across participants. Furthermore, both methods identify the same functional systems as seed-based analysis without requiring <em>a priori</em> knowledge about the brain and without significant computation. Thus, clustering algorithms offer a viable alternative to standard functional connectivity analysis techniques.<br />
<br />
<br />
''' ''Comparison of ICA and Clustering for the Identification of Functional Connectivity in fMRI'' '''<br />
<br />
Although ICA and clustering rely on very different assumptions on the underlying distributions, they produce surprisingly similar results for signals with large variation. Our main goal is to evaluate and compare the performance of ICA and clustering based on Gaussian mixture model (GMM) for identification of functional connectivity. Using the synthetic data with artificial activations and artifacts under various levels of length of the time course and signal-to-noise ratio of the data, we compare both spatial maps and their associated time courses estimated by ICA and GMM to each other and to the ground truth. We choose the number of sources via the model selection scheme, and compare all of the resulting components of GMM and ICA, not just the task-related components, after we match them component-wise using the Hungarian algorithm. This comparison scheme is verified in a high level visual cortex fMRI study. We find that ICA requires a smaller number of total components to extract the task-related components, but also needs a large number of total components to describe the entire data. We are currently applying ICA and clustering methods to connectivity analysis of schizophrenia patients.<br />
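The component-wise matching of GMM and ICA results via the Hungarian algorithm can be sketched with `scipy.optimize.linear_sum_assignment`; here the cost is the negative absolute correlation between spatial maps (an illustrative choice, names hypothetical):<br />

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_components(maps_a, maps_b):
    """Match two sets of spatial maps component-wise with the Hungarian
    algorithm. maps_a, maps_b: (n_components, n_voxels). The cost is the
    negative absolute correlation, so the matching maximizes total |corr|.
    Returns col such that maps_a[i] is paired with maps_b[col[i]]."""
    a = (maps_a - maps_a.mean(1, keepdims=True)) / maps_a.std(1, keepdims=True)
    b = (maps_b - maps_b.mean(1, keepdims=True)) / maps_b.std(1, keepdims=True)
    corr = a @ b.T / maps_a.shape[1]
    _, col = linear_sum_assignment(-np.abs(corr))
    return col
```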
<br />
= Clustering for Discovering Structure in the Space of Functional Selectivity = <br />
<br />
''' ''Clustering Study of Domain Specificity in High Level Visual Cortex'' '''<br />
<br />
As a more specific application of model-based clustering algorithms, we are devising clustering algorithms for discovering structure in the functional organization of the high-level visual cortex. It is suggested that there are regions in the visual cortex with high selectivity to certain categories of visual stimuli. Currently, the conventional method for detecting these regions is based on statistical tests comparing the response of each voxel in the brain to different visual categories to see if it shows considerably higher activation to one category. For example, the well-known FFA (Fusiform Face Area) is the set of voxels which show high activation to face images. We use a model-based clustering approach to the analysis of this type of data as a means to make this analysis automatic and further discover new structures in the high-level visual cortex.<br />
<br />
Introducing the notion of space of activation<br />
profiles, we construct a representation of the data which explicitly<br />
parametrizes all interesting patterns of activation. Mapping the data into<br />
this space, we formulate a model-based clustering algorithm that simultaneously<br />
finds a set of activation profiles and their spatial maps. We validate<br />
our method on the data from studies of category selectivity in visual<br />
cortex, demonstrating good agreement with the findings based on prior<br />
hypothesis-driven methods. This model enables functional group analysis<br />
independent of spatial correspondence among subjects. We are currently working on a co-clustering extension of this<br />
algorithm which can simultaneously find a set of clusters of voxels and meta-categories<br />
of stimuli in experiments with diverse sets of stimulus categories.<br />
<br />
Fig. 9 compares the map of voxels assigned to a face-selective profile by our algorithm with the t-test's map of voxels with statistically significant (p<0.0001) response to faces when compared with object stimuli. Note that in contrast with the hypothesis testing method, we don't specify the existence of a face-selective region in our algorithm and the algorithm automatically discovers such a profile of activation in the data.<br />
<br />
{|<br />
|+ '''Fig 9. Spatial maps of the face selective regions found by the statistical test (red) and our mixture model (dark blue). Maps are presented in alternating rows for comparison. Visually responsive mask of voxels used in our experiment is illustrated in yellow and light blue.'''<br />
|align="center"|[[Image:mit_fmri_clustering_mapffacompare.PNG |thumb|800px]]<br />
|}<br />
<br />
'''''Hierarchical Model for Exploratory fMRI Analysis without Spatial Normalization'''''<br />
<br />
Building on the work on the clustering model for the domain specificity, we develop a hierarchical exploratory method for simultaneous parcellation of multi-subject fMRI data into functionally coherent areas. The method is based on a solely functional representation of the fMRI data and a hierarchical probabilistic model that accounts for both inter-subject and intra-subject forms of variability in fMRI response. We employ a Variational Bayes approximation to fit the model to the data. The resulting algorithm finds a functional parcellation of the individual brains along with a set of population-level clusters, establishing correspondence between these two levels. The model eliminates the need for spatial normalization while still enabling us to fuse data from several subjects. We demonstrate the application of our method on the same visual fMRI study as before. Fig. 10 shows the scene-selective parcel in 2 different subjects. Parcel-level spatial correspondence between the subjects is evident in the figure.<br />
<br />
<table><br />
<tr> <th colspan="2"> '''Fig 10. The map of the scene selective parcels in two different subjects. The rough location of the scene-selective areas PPA and TOS, identified by the expert, are shown on the maps by yellow and green circles, respectively.''' </th> </tr><br />
<tr><br />
<td align="center"> [[Image:mit_fmriclustering_hierarchicalppamapsubject1.jpg |650px]] </td><br />
<td align="center"> [[Image:mit_fmriclustering_hierarchicalppamapsubject2.jpg |650px]] </td><br />
</tr><br />
</table><br />
<br />
<br />
= Key Investigators =<br />
<br />
* MIT: Danial Lashkari, Archana Venkataraman, Ed Vul, Nancy Kanwisher, Polina Golland.<br />
* Harvard: J. Oh, Marek Kubicki, Carl-Fredrik Westin.<br />
<br />
= Publications =<br />
<br />
[http://www.na-mic.org/publications/pages/display?search=Projects%3AfMRIClustering&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on fMRI clustering]<br />
<br />
Project Week Results: [[2008_Summer_Project_Week:fMRIconnectivity|June 2008]]<br />
<br />
[[Category:fMRI]]</div>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:fMRIAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]]<br />
__NOTOC__<br />
= Fieldmap-Free EPI Distortion Correction = <br />
<br />
Echo-planar imaging (EPI) is one of the most widely used pulse sequences in<br />
functional magnetic resonance imaging (fMRI) due to its high temporal resolution. <br />
The ability to acquire an entire volume within seconds makes it a valuable,<br />
non-invasive tool for probing dynamic physiological processes such as the blood-oxygenation-level-dependent (BOLD) response. <br />
A significant limitation of EPI is its sensitivity to magnetic field inhomogeneity. <br />
Perturbations in the field result in signal loss and geometric distortion<br />
in EPI data. Previous studies have shown that correcting geometric distortion<br />
in functional images increases the accuracy of co-registration to structural MR. <br />
Precise anatomical localization of functional activation is especially important in single-subject studies (i.e., pre-surgical evaluation) and in cases where the<br />
structural MR is used as a reference to sample the functional data<br />
(i.e., cortical-surface-based analysis). Therefore, field inhomogeneity and the resulting distortion pose a<br />
significant problem in fMRI. In this work, we describe a method for correcting the distortions present<br />
in echo planar images (EPI) and registering the EPI to structural MRI.<br />
A fieldmap is predicted from a tissue/air segmentation of the MRI<br />
using a perturbation method and subsequently used to unwarp the EPI<br />
data. Shim and other missing parameters are estimated by registration.<br />
We obtain results that are similar to those obtained using fieldmaps,<br />
however neither fieldmaps, nor knowledge of shim parameters is required.<br />
<br />
== Registration without Correction ==<br />
<br />
Localization of functional information relies on accurate registration of EPI and structural MR, which can be difficult due to EPI distortion caused by B0 field inhomogeneity. Correcting distortion using acquired fieldmaps has been shown to improve registration [1], but fieldmaps may not be available or may not be applicable if significant motion is present in the EPI, resulting in sub-optimal registration.<br />
<br />
{|<br />
|[[Image:Igt poster fig1.jpg|600px|]]<br />
|}<br />
<br />
== Problem 1: Segmentation ==<br />
<br />
Magnetic field models exist to compute a Fieldmap from a Tissue/Air segmentation [2,3], but segmenting structural MR is difficult due to the similar intensities of bone and air. In the Fieldmap-Free method [4], T1 MRI was segmented using a trained classifier that computes the probability of tissue given MR intensity.<br />
CT data was used for training and validation only, allowing the trained classifier to be applied to data sets without CT.<br />
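The classifier described above can be sketched as follows. This is an assumption-laden toy, not the released tool: a class-conditional Gaussian model with a class prior is one simple way to realize "probability of tissue given MR intensity", and the intensity distributions and voxel counts are invented. Training labels are taken to come from thresholded, co-registered CT, as the text describes.<br />

```python
import numpy as np

def fit_intensity_classifier(intensities, labels):
    """Class-conditional Gaussians + class prior -> posterior P(tissue | intensity)."""
    params = {}
    for c in (0, 1):  # 0 = air, 1 = tissue
        x = intensities[labels == c]
        params[c] = (x.mean(), x.std() + 1e-6, x.size / intensities.size)
    def posterior(y):
        lik = {c: w * np.exp(-0.5 * ((y - m) / s) ** 2) / s
               for c, (m, s, w) in params.items()}
        return lik[1] / (lik[0] + lik[1])
    return posterior

rng = np.random.default_rng(3)
# Training voxels: labels derived from thresholded CT (simulated here)
train_int = np.concatenate([rng.normal(5, 3, 4000),     # air: low MR signal
                            rng.normal(60, 15, 6000)])  # tissue: higher signal
train_lab = np.concatenate([np.zeros(4000), np.ones(6000)])
p_tissue = fit_intensity_classifier(train_int, train_lab)

test_mr = np.array([2.0, 55.0, 90.0])   # new MR intensities, no CT available
post = p_tissue(test_mr)
print(np.round(post, 3))
```

Once trained, the classifier needs only MR intensities, which is what allows it to be applied to data sets without CT.<br />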
<br />
{|<br />
|[[Image:Igt poster fig2.jpg|600px|]]<br />
|}<br />
<br />
== Problem 2: Shim Estimation ==<br />
<br />
Existing magnetic field models do not account for the shim fields that reduce B0 field inhomogeneity prior to acquisition. Without this information, accurate unwarping is not possible. In this method, a fieldmap (without shim) was computed from the segmented MR using the field model in [2].<br />
The missing shim fields were modeled by spherical harmonic basis functions.<br />
Registration was used to search over shim parameters until optimal agreement between the EPI and structural MR was obtained.<br />
<br />
{|<br />
|[[Image:Igt poster fig3.jpg|600px|]]<br />
|}<br />
<br />
<br />
== Atlas-Based Improved Prediction of Magnetic Field Inhomogeneity for Distortion Correction of EPI Data Motivation ==<br />
<br />
In the Fieldmap-Free method described above, it was shown that tissue/air susceptibility models could be derived from structural MRI by using<br />
an intensity-based classifier trained with CT [4]. It was also shown that registration of the EPI and structural MR could be used to search over the unknown shim<br />
parameters allowing distortion correction of the EPI that agrees well with results obtained using acquired fieldmaps.<br />
<br />
Variability in structural MR acquisitions, however, may limit the efficacy of an intensity-based classifier in cases where the MR<br />
intensity properties differ significantly from those of the training data. In [4], CT data sets with MR acquired on the same scanner as the subjects of interest could be used to train the classifier, but this may not be possible in many cases. Limited anatomical information below the brain may also prevent accurate estimation of the perturbing field. Therefore, obtaining more reliable susceptibility models from structural MR is critical for retrospective unwarping of EPI data sets<br />
that lack acquired fieldmaps. While previous results predicting fieldmaps from structural MR have shown good agreement with acquired<br />
fieldmaps, we hypothesized that improved segmentation methods would result in even greater accuracy.<br />
<br />
= Description =<br />
<br />
In this work, anatomical information from a set of 22 whole-head CT data sets is used to achieve improved,<br />
subject-specific segmentation of structural MR. A tissue/air atlas is constructed from the CT data to obtain priors on<br />
the probability of tissue or air at each location in the anatomy. The corresponding structural MR is used to train a classifier that<br />
segments the MR of the subject of interest and this is used as input to a first order perturbation field model to compute a<br />
subject-specific fieldmap. The method is evaluated by comparison of predicted fieldmaps and acquired fieldmaps. In addition,<br />
the MR classifier can be used to obtain probabilistic bone segmentations from structural MR that show promising agreement with segmented CT.<br />
<br />
== Results ==<br />
<br />
=== Atlas Construction ===<br />
<br />
We obtained 22 datasets consisting of CT and MRI from 3 sources: the publicly available Retrospective Image Registration<br />
Evaluation (RIRE) database (17 neurosurgery patients), the Radiology department at Brigham and Women's Hospital (BWH) (4 neurosurgery patients) and the Zubal<br />
head phantom (1 subject) [5]. For each subject, the CT data was registered to its corresponding MR. The MR was registered<br />
to standard space and these transformations were then applied to the co-registered CT. Tissue/air labels were obtained by thresholding the CT data in<br />
standard space. Probabilistic tissue/air and air/bone atlases were then constructed [6] and are shown in Fig. 4 below.<br />
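The atlas-construction step above can be sketched on toy data. This is a hedged illustration with invented numbers: 1-D arrays stand in for registered CT volumes in standard space, and the -500 HU tissue/air threshold is an assumption for demonstration, not the paper's value.<br />

```python
import numpy as np

rng = np.random.default_rng(7)
n_subjects, n_voxels = 22, 100

# Toy registered "CT volumes" in Hounsfield units: tissue probability rises
# smoothly across the (1-D) anatomy; air ~ -1000 HU, soft tissue ~ 40 HU
p_tissue_true = np.linspace(0.05, 0.95, n_voxels)
is_tissue = rng.random((n_subjects, n_voxels)) < p_tissue_true
ct = np.where(is_tissue, rng.normal(40, 30, is_tissue.shape),
              rng.normal(-1000, 50, is_tissue.shape))

labels = ct > -500            # per-subject tissue/air label by CT thresholding
atlas = labels.mean(axis=0)   # probabilistic tissue/air atlas in standard space
print(atlas.min(), atlas.max())
```

The resulting per-voxel frequencies are exactly the spatial priors used by the atlas-based classifier in the next section.<br />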
<br />
{|<br />
|[[Image:Tissue air atlas.jpg|600px|thumb|Fig. 4. Results of Atlas Construction. Tissue / Air (left and center) and Bone atlases (right) constructed from 22 CT data sets.]]<br />
|}<br />
<br />
=== Atlas-based Segmentation ===<br />
<br />
Structural MR was segmented using an MR classifier that incorporates spatially<br />
dependent prior information from the probabilistic atlas and MR intensity information (from the subject of interest) to obtain a subject-specific susceptibility<br />
model. The classifier was trained using the CT/MR training data described above, but applied to segment MR data acquired at a separate site. The accuracy of the segmentations was evaluated by comparing fieldmaps predicted<br />
from the atlas-based segmenter to acquired fieldmaps. The fieldmaps were also<br />
compared to those predicted using intensity information alone (i.e., a spatially<br />
constant prior).<br />
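The combination of a spatial atlas prior with an intensity likelihood reduces, per voxel, to Bayes' rule. The sketch below is illustrative only: the Gaussian class parameters and the intensity value are invented, and the real classifier is trained rather than hand-set. It shows how the same ambiguous intensity is labeled differently depending on the atlas prior at that location, which is exactly the behavior contrasted against the spatially constant prior.<br />

```python
import numpy as np

def atlas_posterior(intensity, prior_tissue, mu, sigma):
    """P(tissue | y, location) ∝ atlas_prior(location) * N(y; mu_c, sigma_c)."""
    lik = {c: np.exp(-0.5 * ((intensity - mu[c]) / sigma[c]) ** 2) / sigma[c]
           for c in ('air', 'tissue')}
    num = prior_tissue * lik['tissue']
    return num / (num + (1 - prior_tissue) * lik['air'])

mu = {'air': 5.0, 'tissue': 60.0}       # illustrative class means
sigma = {'air': 8.0, 'tissue': 15.0}    # illustrative class spreads
y = 30.0                                # ambiguous MR intensity

p_hi = atlas_posterior(y, 0.9, mu, sigma)  # atlas says tissue is likely here
p_lo = atlas_posterior(y, 0.1, mu, sigma)  # atlas says air is likely here
print(round(p_hi, 3), round(p_lo, 3))
```

With an intensity-only classifier the two cases would receive identical posteriors; the spatial prior is what disambiguates them.<br />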
<br />
<br />
Results of atlas-based segmentation of structural MR are shown in Fig. 5 below.<br />
T1 of the sinus region is shown on the left. The limitations of using the intensity<br />
classifier to segment the MR are shown in the middle image. While the classifier produces reasonable results for many of the voxels in the sinuses,<br />
voxels outside this region which are clearly soft tissue or<br />
bone are mislabeled with values close to zero. In contrast, the atlas-based<br />
segmenter (right image) achieves similar results for the highly variable<br />
subject-specific anatomy within the sinus region, while producing fewer errors<br />
in the surrounding area.<br />
<br />
<br />
Results of the segmentation of bone from structural MR for a representative subject are shown in Fig. 6. The<br />
CT shown on the left can be easily thresholded to segment bone from air and<br />
soft tissue. The results of using the intensity and atlas-based classifiers are shown in the middle and right images, respectively. While the intensity classifier has some success in<br />
segmenting MR into tissue/air classes, it is much less effective in segmenting<br />
bone. Inspection of the atlas-based segmentation, however, shows good general agreement with the CT, with a Dice score of 0.780 for<br />
this subject.<br />
<br />
{|<br />
|[[Image:Tissue air seg.jpg|600px|thumb|Fig. 5. Tissue / Air Segmentation. Tissue probability maps computed from the structural MR show improved segmentation when the tissue/air atlas is incorporated into the MR classifier.]]<br />
|}<br />
<br />
<br />
<br />
{|<br />
|[[Image:Bone air seg.jpg|600px|thumb|Fig. 6. Bone Segmentation. Segmentation of bone from MR using intensity information alone is ineffective (middle), while atlas-based segmentation shows good overall agreement with CT (right).]]<br />
|}<br />
<br />
=== Fieldmap Estimation ===<br />
<br />
Fieldmaps are predicted from the atlas and intensity-based segmentations using<br />
the perturbation field model described in [2]. In this model, a first order perturbation solution of Maxwell’s equations is calculated from a tissue/air susceptibility<br />
model, where each pixel takes continuous values between 0 (air) and 1 (tissue).<br />
Current field modeling techniques, including the one described in [2], do not<br />
account for the shim fields that reduce the B0 inhomogeneity prior to fieldmap<br />
acquisition. Therefore, in order to compare an estimated fieldmap to an acquired<br />
one, the shim fields must be added to the predicted fieldmaps. This is done by modeling the shim fields using the set of first and second order spherical harmonic<br />
basis functions. In addition, a global scaling of the predicted fieldmap must be<br />
estimated since the model assumes the magnetic susceptibility throughout the<br />
brain is constant, but this may not be accurate near<br />
bone interfaces where both partial volume effects and mis-estimation of segmentation values are most likely to occur. Furthermore, the perturbing fieldmaps are<br />
calculated assuming a perfectly homogeneous B0 field, which cannot be achieved<br />
in practice due to constraints on the hardware. The fieldmap scaling and shim<br />
parameters can be obtained by least squares fitting to the acquired fieldmap.<br />
Once these coefficients are known, the predicted fieldmap<br />
with shim can be compared to the acquired fieldmap.<br />
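The fitting step described above is a single linear least-squares solve: the acquired fieldmap is modeled as scale × predicted map + shim, where the shim is a combination of first- and second-order spherical-harmonic terms. The sketch below uses synthetic data and the real polynomial forms of those harmonics; the particular predicted map and coefficient values are invented for illustration.<br />

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x, y, z = rng.uniform(-1, 1, (3, n))            # voxel coordinates
predicted = np.sin(3 * x) * np.exp(-z ** 2)     # stand-in for the model fieldmap (Hz)

# Real polynomial forms of the 1st/2nd-order spherical harmonics (shim terms)
shim_basis = np.stack([x, y, z, x * y, x * z, y * z,
                       x ** 2 - y ** 2, 2 * z ** 2 - x ** 2 - y ** 2], axis=1)

# Synthesize an "acquired" fieldmap with a known scale, shim, and noise
true_scale, true_shim = 0.8, rng.normal(0, 5, 8)
acquired = true_scale * predicted + shim_basis @ true_shim + rng.normal(0, 0.1, n)

# One least-squares solve recovers the unknowns: [scale, shim coefficients]
A = np.column_stack([predicted, shim_basis])
coef, *_ = np.linalg.lstsq(A, acquired, rcond=None)
print("estimated scale:", round(coef[0], 3))
```

Once `coef` is known, `A @ coef` is the predicted fieldmap with shim, which can be compared voxelwise to the acquired one.<br />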
<br />
<br />
The results of the fieldmap estimation are shown in Fig 7.<br />
The first column of Fig. 7 shows fieldmaps computed from the intensity-based segmentations, which<br />
show significant differences relative to the acquired fieldmaps shown for each<br />
subject in column 3. These are especially noticeable in areas that have lower signal in MR, such as in the ventricles and major sulci. Fieldmap results from the<br />
atlas-based segmentations are shown in the second column of Fig. 7 and show<br />
improved agreement with acquired fieldmaps. Quantitative analysis of the absolute error in the B0 field between these images is given in the table in Fig. 7.<br />
Since the bandwidth/pixel for the EPI data acquired in this study is 22.3 Hz,<br />
90% of the voxels in the atlas-based fieldmaps show subvoxel error. The mean<br />
of these statistics across all five subjects is also shown for both the intensity<br />
and atlas-based classifiers. The intensity classifier shows a slight improvement<br />
over the results reported by Koch et al [3] for a single subject. The atlas-based<br />
classifier outperforms both the intensity and Koch methods. Paired t-tests comparing the means of the intensity and atlas-based results show this improvement<br />
is statistically significant (all p-values < 0.05).<br />
{|<br />
|[[Image:Fieldmaps.jpg|600px|thumb|Fig. 7. Fieldmap Estimation. Fieldmaps predicted from intensity-based segmentations show significant differences relative to acquired fieldmaps while those computed from atlas-based segmentations show improved agreement. The mean absolute difference in field between computed and acquired fieldmaps for all 5 subjects is given in the table above, as well as results reported by Koch [3] for a single subject. In the atlas-based results, 90 % of voxels show sub-voxel error (field differences < 22.3 Hz, the bandwidth/pixel).]]<br />
|}<br />
<br />
<br />
<br />
<br />
[1] Cusack, et al. NeuroImage 18:127-142. 2003. <br />
<br />
[2] Jenkinson, et al. Magn Reson Med 52:471-477. 2004. <br />
<br />
[3] Koch, et al. Phys Med Biol. 51(24):6381-402. 2006. <br />
<br />
[4] Poynton, et al. MICCAI. 271-279, 2008.<br />
<br />
[5] Zubal IG, et al. Med. Phys. 21, 299--302. 1994.<br />
<br />
[6] Poynton, et al. MICCAI. (in press). 2009.<br />
<br />
= Key Investigators =<br />
<br />
* MIT: Clare Poynton, Polina Golland<br />
* BWH/Harvard:Alex Golby, William (Sandy) Wells<br />
* Oxford: Mark Jenkinson<br />
<br />
= Publications =<br />
<br />
[http://www.na-mic.org/publications/pages/display?search=Projects%3AFieldmapFreeDistortionCorrection&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Fieldmap-Free EPI Distortion Correction]</div>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]]<br />
__NOTOC__<br />
= Spherical Demons =<br />
<br />
We present the fast Spherical Demons algorithm for registering two spherical images. By exploiting spherical vector spline<br />
interpolation theory, we show that a large class of regularizers for the modified demons objective function can be efficiently<br />
approximated on the sphere using convolution. Based on the one parameter subgroups of diffeomorphisms, the resulting registration is<br />
diffeomorphic and fast -- registration of two cortical mesh models with more than 100k nodes takes less than 5 minutes, comparable to the fastest surface registration algorithms. Moreover, the accuracy of our method compares favorably to the popular FreeSurfer<br />
registration algorithm. We validate the technique in two different settings: (1) parcellation in a set of in-vivo cortical surfaces and (2) Brodmann area localization in ex-vivo cortical surfaces.<br />
<br />
= Description =<br />
Motivated by the spherical representation of the cerebral cortex, this work deals with the problem of registering spherical images. Cortical folding patterns are correlated with both cytoarchitectural and functional regions. In group studies of cortical structure and function, determining corresponding folds across subjects is therefore important.<br />
<br />
Unfortunately, many spherical warping algorithms are computationally expensive. One reason is the need for invertible deformations that preserve the topology of structural or functional regions across subjects. In this work, we take the approach, previously demonstrated in the Euclidean space [3], of restricting the deformation space to be a composition of diffeomorphisms, each of which is parameterized by a stationary velocity field. In each iteration, the algorithm greedily seeks the best diffeomorphism to be composed with the current transformation, resulting in much faster updates. <br />
<br />
Another challenge in registration is the tradeoff between the image similarity measure and the regularization in the objective function. Since most regularizations favor smooth deformations, the gradient computation is complicated by the need to take into account the deformation in neighboring regions. For Euclidean images, the demons objective function facilitates a fast two-step optimization where the second step handles the warp regularization via a single convolution with a smoothing filter [2,3]. Based on spherical vector spline interpolation theory and other differential geometric tools, we show that the two-stage optimization procedure of the demons algorithm can be efficiently applied on the sphere.<br />
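The two-step demons optimization described above can be illustrated in its Euclidean form, which is the setting the paper carries onto the sphere. The sketch below registers two 1-D toy signals: step 1 computes a greedy update from the intensity mismatch, step 2 regularizes by a single convolution with a Gaussian kernel. Signal shapes, step counts, and the smoothing width are all illustrative assumptions.<br />

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, map_coordinates

x = np.linspace(0, 1, 200)
fixed = np.exp(-((x - 0.5) / 0.08) ** 2)
moving = np.exp(-((x - 0.53) / 0.08) ** 2)   # shifted copy to be registered
init_err = np.abs(moving - fixed).mean()

disp = np.zeros_like(x)                      # displacement field (in pixels)
for _ in range(100):
    coords = np.arange(len(x)) + disp
    warped = map_coordinates(moving, [coords], order=1)
    diff = warped - fixed
    grad = np.gradient(warped)
    # Step 1: greedy demons force from the intensity mismatch
    update = -diff * grad / (grad ** 2 + diff ** 2 + 1e-9)
    # Step 2: regularization handled by a single smoothing convolution
    disp = gaussian_filter1d(disp + update, sigma=2.0)

final_err = np.abs(map_coordinates(moving, [np.arange(len(x)) + disp], order=1) - fixed).mean()
print(round(init_err, 4), '->', round(final_err, 4))
```

The efficiency of the scheme comes from step 2 being a convolution rather than a coupled optimization over neighboring deformations; the paper's contribution is showing the analogous convolution is valid on the sphere.<br />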
<br />
[[Image:CoordinateChart.png | center | 400px]]<br />
<br />
== Experimental Results ==<br />
We use two sets of experiments to compare the accuracy of Spherical Demons and FreeSurfer [1]. The FreeSurfer registration algorithm uses the same similarity measure as Spherical Demons, but penalizes metric and areal distortion. Its runtime is more than an hour, while our runtime is less than 5 minutes.<br />
<br />
== (1) Parcellation of In-Vivo Cortical Surfaces ==<br />
<br />
We consider a set of 39 left and right cortical surface models extracted from in-vivo MRI. Each surface is spherically parameterized and represented as a spherical image with geometric features at each vertex (e.g., sulcal depth and curvature). Both hemispheres are manually parcellated by a neuroanatomist into 35 major sulci and gyri. We validate our algorithm in the context of automatic cortical parcellation.<br />
<br />
We co-register all 39 spherical images of cortical geometry with Spherical Demons by iteratively building an atlas and registering<br />
the surfaces to the atlas. The atlas consists of the mean and variance of cortical geometry. We then perform cross-validation parcellation 4 times, by leaving out subjects 1 to 10, training a classifier using the remaining subjects, and using it to classify subjects 1 to 10. We repeat with subjects 11-20, 21-30 and 31-39. We also perform registration and cross-validation with the FreeSurfer algorithm [1] using the same features and parcellation algorithm. Once again, the atlas consists of the mean and variance of cortical geometry.<br />
<br />
<br />
The average Dice measure (defined as the ratio of cortical surface area with correct labels to the total surface area, averaged over the test set) on the left hemisphere is 88.9 for FreeSurfer and 89.6 for Spherical Demons. While the improvement is modest, the<br />
difference is statistically significant for a one-sided t-test with the Dice measure of each subject treated as an independent sample (p = 2e-6). On the right hemisphere, FreeSurfer obtains a Dice of 88.8 and Spherical Demons achieves 89.1. Here, the improvement is smaller, but still statistically significant (p = 0.01).<br />
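The evaluation pattern above (per-subject Dice, then a one-sided paired test across subjects) can be sketched as follows. The label maps, flip rates, and method names are synthetic stand-ins, and the generic set-overlap Dice 2|A∩B|/(|A|+|B|) is used here in place of the surface-area-weighted version defined in the text.<br />

```python
import numpy as np
from scipy import stats

def dice(a, b):
    """2|A∩B| / (|A| + |B|) for binary label maps."""
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

rng = np.random.default_rng(5)
manual = rng.random((39, 500)) < 0.3          # 39 subjects' manual labels (toy)
# Two hypothetical methods: method B corrupts fewer labels than method A
flips_a = rng.random(manual.shape) < 0.10
flips_b = rng.random(manual.shape) < 0.07
dice_a = np.array([dice(m ^ fa, m) for m, fa in zip(manual, flips_a)])
dice_b = np.array([dice(m ^ fb, m) for m, fb in zip(manual, flips_b)])

# One-sided paired t-test: is method B better than method A across subjects?
t, p = stats.ttest_rel(dice_b, dice_a, alternative='greater')
print(f"mean Dice A={dice_a.mean():.3f}, B={dice_b.mean():.3f}, one-sided p={p:.2g}")
```

Pairing by subject is what lets a small mean improvement reach significance, since per-subject difficulty is factored out of the comparison.<br />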
<br />
<center><br />
<table><br />
<tr><br />
<td><br />
[[Image:SD.Lh.PercentImprove.lat.png|thumb|center|150px|Left Medial]]<br />
</td><br />
<td><br />
[[Image:SD.Lh.PercentImprove.med.png|thumb|center|150px|Left Lateral]]<br />
</td><br />
<td><br />
[[Image:SD.Rh.PercentImprove.lat.png|thumb|center|150px|Right Lateral]]<br />
</td><br />
<td><br />
[[Image:SD.Rh.PercentImprove.med.png|thumb|center|142px|Right Medial]]<br />
</td><br />
</tr><br />
</table><br />
</center><br />
<br />
Because the average Dice can be deceiving by suppressing small structures, we analyze the segmentation accuracy per structure. On<br />
the left (right) hemisphere, the segmentations of 16 (8) structures are statistically significantly improved by Spherical Demons with respect to FreeSurfer, while no structure got worse (FDR = 0.05). The above figure shows the percentage improvement<br />
of individual structures. Parcellation results suggest that our registration is at least as accurate as FreeSurfer.<br />
<br />
== (2) Brodmann Areas Localization on Ex-vivo Cortical Surfaces ==<br />
<br />
In this experiment, we evaluate the registration accuracy on ten human brains analyzed histologically postmortem. The histological sections were aligned to postmortem MR with nonlinear warps to build a 3D volume. Eight manually labeled Brodmann areas from histology were sampled onto each hemispheric surface model and sampling errors were manually corrected. Brodmann areas are cyto-architectonically defined regions closely related to cortical function.<br />
<br />
It has been shown that nonlinear surface registration of cortical folds can significantly improve Brodmann area overlap across<br />
different subjects. Registering the ex-vivo surfaces is more difficult than registering in-vivo surfaces because the reconstructed volumes are extremely noisy, resulting in noisy geometric features.<br />
<br />
We co-register the ten surfaces to each other by iteratively building an atlas and registering the surfaces to the atlas. We<br />
compute the average distance between the boundaries of the Brodmann areas for each pair of registered subjects. We perform a permutation test to test for statistical significance. Spherical Demons improves the alignment of 5 (2) Brodmann areas on the left (right) hemisphere (FDR = 0.05) compared with FreeSurfer and no structure gets worse. These results suggest that the Spherical Demons algorithm is at least as accurate as FreeSurfer in aligning Brodmann areas. <br />
<br />
= Code = <br />
<br />
Matlab code is currently available at http://yeoyeo02.googlepages.com/sphericaldemonsrelease<br />
<br />
We are currently working in collaboration with Kitware to make the code available in ITK.<br />
<br />
<br />
<br />
[1] B. Fischl, M. Sereno, R. Tootell, and A. Dale. High-resolution Intersubject Averaging and a Coordinate System for the <br />
Cortical Surface. Human Brain Mapping, 8(4):272–284, 1999.<br />
<br />
[2] J. Thirion. Image Matching as a Diffusion Process: an Analogy with Maxwell’s Demons. Medical Image Analysis, <br />
2(3):243–260, 1998. <br />
<br />
[3] T. Vercauteren, X. Pennec, A. Perchant, and N. Ayache. Non-parametric Diffeomorphic Image Registration with the <br />
Demons Registration. In Proceedings of the International Conference on Medical Image Computing and Computer Assisted <br />
Intervention (MICCAI), volume 4792 of LNCS, pages 319–326, 2007.<br />
<br />
= Key Investigators =<br />
<br />
* MIT: [http://people.csail.mit.edu/ythomas/ B.T. Thomas Yeo], Mert Sabuncu, Polina Golland.<br />
* Harvard: Bruce Fischl. <br />
* INRIA: Nicholas Ayache. <br />
* Mauna Kea Technologies: Tom Vercauteren.<br />
* Kitware: Luis Ibanez, Michel Audette.<br />
<br />
= Publications =<br />
<br />
* [http://www.na-mic.org/publications/pages/display?search=Projects:SphericalDemons&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Spherical Demons: Fast Surface Registration]<br />
<br />
<br />
[[Category: Registration]] [[Category:Segmentation]]</div>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]]<br />
__NOTOC__<br />
= Modeling tumor growth in patients with glioma =<br />
We are interested in developing computational methods for the assimilation of magnetic resonance image data into physiological models of glioma - the most frequent primary brain tumor - for a patient-adaptive modeling of tumor growth.<br />
<br />
<br />
This aims at two directions: First, it aims at making complex information from longitudinal multimodal data sets accessible for diagnostic radiology through physiological models. This will allow us to estimate features such as degree of infiltration, speed of growth, or mass effect in a quantitative fashion; for therapy, it will allow us to identify regions at risk for progression. Second, it aims at providing the means to test different macroscopic tumor models from theoretical biology on real clinical data.<br />
<br />
<br />
To realize these aims, the project comprises a number of aspects: automated segmentation of tumors in large multimodal image data sets; making information from different MR image modalities accessible to the tumor model, with a focus on the processing of magnetic resonance spectroscopic images (MRSI); and the development of methods for the image-based estimation of parameters in reaction-diffusion type models of tumor growth.<br />
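A common member of the reaction-diffusion family referred to above is the Fisher–Kolmogorov model, du/dt = div(D grad u) + rho·u·(1 − u), with tumor cell density u, infiltration coefficient D, and proliferation rate rho. The 1-D finite-difference sketch below uses illustrative parameter values (D, rho, grid, and time step are assumptions, not fitted to any patient data) to show the characteristic behavior: the seed lesion saturates in its core while an infiltration front spreads outward.<br />

```python
import numpy as np

def grow(u, D, rho, dx, dt, steps):
    """Explicit finite-difference integration of du/dt = D*u_xx + rho*u*(1-u)."""
    for _ in range(steps):
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2  # periodic Laplacian
        u = u + dt * (D * lap + rho * u * (1 - u))
    return u

x = np.linspace(0, 10, 200)
dx = x[1] - x[0]
u0 = np.exp(-((x - 5) / 0.3) ** 2)          # small seed lesion
u = grow(u0, D=0.05, rho=0.5, dx=dx, dt=0.01, steps=1000)

init_extent = (u0 > 0.1).sum() * dx         # visible extent before growth
extent = (u > 0.1).sum() * dx               # visible extent after growth
print(round(init_extent, 2), '->', round(extent, 2))
```

Image-based parameter estimation in this setting amounts to choosing D and rho (possibly spatially varying, e.g., along white-matter tracts from DTI) so that the simulated front matches the lesion boundaries observed in the longitudinal scans.<br />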
<br />
<br />
[[Image:Multimodal_glioma.png|thumb|center|600px| Figure 1: Multi-modal image data from a patient with low-grade glioma. A large number of different modalities and derived parameter volumes are acquired during the monitoring of tumor growth.]]<br />
<br />
== Segmenting tumors in large multimodal data sets ==<br />
To segment all MR image volumes available for a patient we developed an approach for learning patient-specific lesion atlases (Figure 2) with limited user interaction. Figure 2 shows the manual segmentation of the tumor from different raters (red, green, blue) and the automatic segmentation using the patient-specific lesion atlas (black) in T1-MRI, T1-MRI and the fractional anisotropy map from DTI. <br />
<br />
<br />
[[Image:Tumor_segmentation_lesion_atlas.png|thumb|center|600px| Figure 2: Tumor segmentation - by human rater (red, green, blue) and our methods (black). The right image shows the lesion atlas.]]<br />
<br />
== Processing magnetic resonance spectroscopic images ==<br />
To make the metabolic information of magnetic resonance spectroscopic images available for modeling the evolution of glioma growth we are implementing an [http://wiki.na-mic.org/Wiki/index.php/2009_Summer_Project_Week_MRSI-Module MRSI processing module] for Slicer.<br />
<br />
<br />
= Key Investigators =<br />
* MIT: [http://people.csail.mit.edu/menze Bjoern Menze], [http://people.csail.mit.edu/tammy Tammy Riklin Raviv], [http://people.csail.mit.edu/koen Koen Van Leemput], [http://people.csail.mit.edu/polina Polina Golland]<br />
* INRIA Sophia-Antipolis, France: Ezequiel Geremia, Olivier Clatz, Nicholas Ayache<br />
* DKFZ Heidelberg, Germany: Bram Stieltjes, Marc-Andre Weber<br />
<br />
= Publications =<br />
*[http://www.na-mic.org/publications/pages/display?search=Projects%3ATumorModeling&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Modeling the growth of brain tumors]</div>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],<br />
__NOTOC__<br />
= Joint Segmentation of Image Ensembles via Latent Atlases =<br />
<br />
<br />
Spatial priors, such as probabilistic atlases, play an important role<br />
in MRI segmentation. The atlases are typically generated by<br />
averaging manual labels of aligned brain regions across different<br />
subjects. However, the availability of comprehensive, reliable and suitable<br />
manual segmentations is limited. We therefore propose a joint segmentation of<br />
corresponding, aligned structures in the entire population<br />
that does not require a probabilistic atlas.<br />
Instead, a latent atlas, initialized by a single manual segmentation, is inferred from the evolving segmentations of the ensemble.<br />
The proposed method is based on probabilistic principles but is solved using partial differential equations (PDEs)<br />
and energy minimization criteria.<br />
We evaluate the method by segmenting 50<br />
brain MR volumes. Segmentation accuracy for cortical and subcortical<br />
structures approaches the quality of state-of-the-art atlas-based segmentation results,<br />
suggesting that the ''latent atlas'' method is a reasonable alternative when<br />
existing atlases are not compatible with the data to be processed.<br />
<br />
<br />
= Description =<br />
Here we propose and demonstrate a method that does not use a set of<br />
training images or probabilistic atlases as priors. Instead we extract an ensemble of corresponding structures<br />
simultaneously. The evolving segmentation of the entire image set<br />
supports each of the individual segmentations. In practice, a subset<br />
of the model parameters, called the spatial parameters, is inferred<br />
as part of the joint segmentation processes. These latent spatial<br />
parameters, which can be viewed as a `dynamic atlas', are estimated exclusively<br />
from the data at hand and a single manual segmentation.<br />
The updated estimates of the latent atlas are used iteratively as Markov Random Field (MRF) priors on the tissue labels. The single-node potential term of the MRF model is formulated as a spatial constraint in a level-set functional for segmentation.<br />
The main novelty of the suggested method with respect to other group-wise segmentation methods is its consistent, statistically driven variational framework for MR ensemble segmentation via estimation of a latent atlas.<br />
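The alternation at the heart of the method (segment each aligned image using the current latent atlas as a spatial prior, then re-estimate the atlas from the evolving segmentations) can be illustrated with a deliberately simplified toy. Everything here is an assumption for demonstration: a Gaussian intensity model with hard thresholding replaces the paper's level-set/MRF formulation, and a flat atlas replaces the single manual initialization.<br />

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_vox = 50, 300
truth = np.zeros(n_vox)
truth[100:200] = 1                                    # shared structure
images = truth + rng.normal(0, 0.8, (n_subj, n_vox))  # noisy aligned ensemble

sigma = 0.8
lik_in = np.exp(-0.5 * ((images - 1) / sigma) ** 2)   # P(y | inside structure)
lik_out = np.exp(-0.5 * (images / sigma) ** 2)        # P(y | outside structure)

atlas = np.full(n_vox, 0.5)                           # flat initial atlas (toy)
for _ in range(10):
    # Segment each subject: per-voxel posterior under the current atlas prior
    post = atlas * lik_in / (atlas * lik_in + (1 - atlas) * lik_out)
    seg = post > 0.5
    # Re-estimate the latent atlas as the mean of the evolving segmentations
    atlas = np.clip(seg.mean(axis=0), 0.01, 0.99)

err = np.mean((atlas > 0.5) != truth.astype(bool))
print("voxel error of final atlas:", err)
```

Even from a flat prior, the ensemble pulls the atlas toward the shared structure because each subject's segmentation is supported by all the others; the paper's single manual segmentation plays the role of a better-informed starting point.<br />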
<br />
<br />
<br />
== Results ==<br />
We test the proposed approach on 50 MR<br />
brain scans. Some of the subjects in this set were diagnosed with first-episode schizophrenia or affective disorder.<br />
The MR images (T1, 256x256x128 volume, 0.9375 x 0.9375 x 1.5 mm<br />
voxel size) were acquired with a 1.5-T General Electric scanner.<br />
In addition to the MR volumes, manual<br />
segmentations of three structures (superior temporal<br />
gyrus, amygdala, and hippocampus) in each hemisphere were provided<br />
for each of the 50 individuals and used to evaluate<br />
the quality of the automatic segmentation results. MR images are preprocessed by skull stripping.<br />
The volumes were aligned using B-spline registration.<br />
<br />
[[Image:LatentAtlasSeg.jpg | center | 700px]]<br />
<br />
''Three cross-sections of 3D segmentation of Hippocampus, Amygdala and Superior Temporal Gyrus in the left and the right hemispheres. Automatic segmentation is shown in red. Manual segmentation is shown in blue. Fourth column: Coronal views of the resulting atlases for each pair of structures.''<br />
<br />
== Brain Tumor Modeling ==<br />
We have applied the proposed algorithm to longitudinal multi-modal patient-specific brain scans for brain tumor segmentation and modeling. In this particular application the inferred spatial parameters estimate the patient's latent anatomy. No prior information is assumed beyond a couple of mouse clicks that define a sphere initializing the segmentation at the first time point. To learn more please refer to the [[Projects:TumorModeling| brain tumor modeling]] page.<br />
<br />
= Key Investigators =<br />
* MIT: [http://people.csail.mit.edu/tammy/ Tammy Riklin Raviv], Polina Golland, Koen Van Leemput<br />
* Harvard: William M. Wells, Ron Kikinis, Martha Shenton, Sylvain Bouix<br />
<br />
= Publications =<br />
<br />
*[http://www.na-mic.org/publications/pages/display?search=Projects%3ALatentAtlasSegmentation&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Joint Segmentation of Image Ensembles via Latent Atlases]</div>

https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&diff=52211
Projects:NonparametricSegmentation (2010-05-11T20:02:53Z)
<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],<br />
__NOTOC__<br />
= Nonparametric Segmentation =<br />
<br />
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use<br />
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms are interpreted as special cases of our framework.<br />
<br />
= Description =<br />
We instantiate our model in the context of brain MRI segmentation, where whole brain MRI volumes are automatically parcellated into the following anatomical Regions of Interest (ROI): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), pallidum (PA). <br />
The proposed non-parametric model yields four types of label fusion algorithms: <br />
<br />
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].<br />
<br />
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.<br />
<br />
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.<br />
<br />
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.<br />
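A hypothetical numpy sketch of the first two fusion rules (not the project's implementation; the Gaussian intensity weighting and its parameter ''sigma'' are assumptions made for illustration):<br />

```python
import numpy as np

def majority_voting(propagated_labels):
    # propagated_labels: (n_training, n_voxels) labels already warped
    # into the test image's coordinate frame.
    labels = np.asarray(propagated_labels)
    fused = np.empty(labels.shape[1], dtype=labels.dtype)
    for v in range(labels.shape[1]):
        vals, counts = np.unique(labels[:, v], return_counts=True)
        fused[v] = vals[np.argmax(counts)]  # most frequent label wins
    return fused

def local_weighted_voting(propagated_labels, propagated_intensities,
                          test_intensities, sigma=10.0):
    # Each training subject's vote at a voxel is weighted by a Gaussian
    # of its (warped) intensity difference with the test image.
    labels = np.asarray(propagated_labels)
    diffs = np.asarray(propagated_intensities) - np.asarray(test_intensities)
    w = np.exp(-diffs ** 2 / (2 * sigma ** 2))      # (n_training, n_voxels)
    fused = np.empty(labels.shape[1], dtype=labels.dtype)
    for v in range(labels.shape[1]):
        classes = np.unique(labels[:, v])
        scores = [w[labels[:, v] == c, v].sum() for c in classes]
        fused[v] = classes[int(np.argmax(scores))]
    return fused
```

Semi-local and global fusion replace this per-voxel weighting with spatially regularized or subject-level weights, respectively.<br />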
<br />
The following figure shows an example segmentation obtained via Local Label Fusion.<br />
<br />
[[File:Segmentation_example2.png]]<br />
<br />
== Experiments ==<br />
We conduct two sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles and subcortical structures – to compare different label fusion algorithms and the widely-used Freesurfer whole-brain segmentation tool [4]. <br />
<br />
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), Semi-local Weighted Fusion (purple).<br />
<br />
[[File:DiceScoresPerROI.png]]<br />
<br />
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. <br />
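The Dice score used in these comparisons measures the volume overlap between an automatic segmentation and the manual ground truth (1 = perfect overlap, 0 = no overlap); a minimal sketch:<br />

```python
def dice_score(seg_a, seg_b):
    # Dice = 2|A ∩ B| / (|A| + |B|), where A and B are the sets of
    # voxels labeled as the structure in each segmentation.
    a = {i for i, v in enumerate(seg_a) if v}
    b = {i for i, v in enumerate(seg_b) if v}
    if not a and not b:
        return 1.0  # both empty: define as perfect agreement
    return 2.0 * len(a & b) / (len(a) + len(b))
```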
<br />
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.<br />
The following figure plots the average hippocampal volumes for five groups (young: under 30; middle-aged: 30 to 60; old: over 60; patients with MCI; and patients with AD) in the 304 subjects of Experiment 2. Error bars indicate standard error.<br />
<br />
[[File:HippocampalVolume.png]]<br />
<br />
<br />
== Conclusion ==<br />
In this work, we investigate a generative model that leads to label fusion style image segmentation methods. Within the proposed framework, we derived various algorithms that combine transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding label maps obtained from an expert, we experimentally compared these segmentation algorithms with Freesurfer's widely-used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect hippocampal atrophy that precedes the probable onset of Alzheimer's Disease.<br />
<br />
== Literature ==<br />
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation:<br />
Application to brain MR data. IEEE Tran. Med. Imaging, 28(8):1266 – 1277, 2009.<br />
<br />
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation<br />
combining label propagation and decision fusion. Neuroimage, 33(1):115–126, 2006.<br />
<br />
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image<br />
segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.<br />
<br />
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.<br />
<br />
= Key Investigators =<br />
<br />
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland<br />
*Harvard: Koen Van Leemput and Bruce Fischl<br />
<br />
<br />
= Publications =<br />
<br />
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]</div>

https://www.na-mic.org/w/index.php?title=Projects:AutomaticFullBrainSegmentation&diff=52210
Projects:AutomaticFullBrainSegmentation (2010-05-11T20:02:07Z)
<p>Melonakos: </p>
<hr />
<div> Back to [[Algorithm:MGH|MGH Algorithms]]<br />
__NOTOC__<br />
= Atlas Renormalization for Improved Brain MR Image Segmentation across Scanner Platforms = <br />
<br />
Atlas-based approaches have demonstrated the ability to automatically identify detailed brain structures from 3-D magnetic resonance (MR) brain images. Unfortunately, the accuracy of this type of method often degrades when processing data acquired on a different scanner platform or pulse sequence than the data used for the atlas training. In this paper, we improve the performance of an atlas-based whole brain segmentation method by introducing an intensity renormalization procedure that automatically adjusts the prior atlas intensity model to new input data. Validation using manually labeled test datasets has shown that the new procedure improves the segmentation accuracy (as measured by the Dice coefficient) by 10% or more for several structures including hippocampus, amygdala, caudate, and pallidum. The results verify that this new procedure reduces the sensitivity of the whole brain segmentation method to changes in scanner platforms and improves its accuracy and robustness, which can thus facilitate multicenter or multisite neuroanatomical imaging studies.<br />
<br />
= Description =<br />
<br />
''Status''<br />
<br />
Prototype<br />
<br />
= Key Investigators = <br />
<br />
* MGH Algorithms: Xiao Han, Bruce Fischl<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display?search=AutomaticFullBrainSegmentation&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Automatic Full Brain Segmentation]<br />
<br />
<br />
[[Category: Segmentation]] [[Category: MRI]]</div>

https://www.na-mic.org/w/index.php?title=Projects:TopologyCorrectionNonSeparatingLoops&diff=52209
Projects:TopologyCorrectionNonSeparatingLoops (2010-05-11T20:00:57Z)
<p>Melonakos: </p>
<hr />
<div> Back to [[Algorithm:MGH|MGH Algorithms]]<br />
__NOTOC__<br />
= Geometrically-Accurate Topology-Correction of Cortical Surfaces using Non-Separating Loops =<br />
<br />
= Description =<br />
<br />
''Status'': Prototype<br />
<br />
''Submitted'': IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 26, NO. 4, APRIL 2007<br />
<br />
''Abstract'': In this paper, we focus on the retrospective topology correction of surfaces. We propose a technique to accurately correct the spherical topology of cortical surfaces. Specifically, we construct a mapping from the original surface onto the sphere to detect topological defects as minimal nonhomeomorphic regions. The topology of each defect is then corrected by opening and sealing the surface along a set of nonseparating loops that are selected in a Bayesian framework. The proposed method is a wholly self-contained topology correction algorithm, which determines geometrically accurate, topologically correct solutions based on the magnetic resonance imaging (MRI) intensity profile and the expected local curvature. Applied to real data, our method provides topological corrections similar to those made by a trained operator.<br />
<br />
= Key Investigators =<br />
<br />
* MGH Algorithms: Florent Ségonne, Jenni Pacheco, and Bruce Fischl<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
<br />
* [http://www.na-mic.org/publications/pages/display?search=TopologyCorrectionNonSeparatingLoops&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&searchbytag=checked&sponsors=checked| NA-MIC Publications Database on Topology Correction]<br />
<br />
[[Category: MRI]]</div>

https://www.na-mic.org/w/index.php?title=Projects:CorticalSurfaceShapeAnalysisUsingSphericalWavelets&diff=52208
Projects:CorticalSurfaceShapeAnalysisUsingSphericalWavelets (2010-05-11T20:00:37Z)
<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[Algorithm:MGH|MGH Algorithms]]<br />
__NOTOC__<br />
= Cortical Surface Shape Analysis Based on Spherical Wavelets =<br />
<br />
Cortical folding patterns vary both in terms of relative spatial location as well as in spatial frequency content. Wavelets are thus a natural tool for the analysis of folding patterns.<br />
<br />
= Description =<br />
<br />
''Status'' Prototype<br />
<br />
''Submitted'' IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 26, NO. 4, APRIL 2007<br />
<br />
''Abstract'' In vivo quantification of neuroanatomical shape variations is possible due to recent advances in medical imaging and has proven useful in the study of neuropathology and neurodevelopment. In this paper, we apply a spherical wavelet transformation to extract shape features of cortical surfaces reconstructed from<br />
magnetic resonance images (MRIs) of a set of subjects. The spherical wavelet transformation can characterize the underlying functions in a local fashion in both space and frequency, in contrast to spherical harmonics that have a global basis set. We perform principal component analysis (PCA) on these wavelet shape features to study patterns of shape variation within normal population from coarse to fine resolution. In addition, we study the development of cortical folding in newborns using the Gompertz model in the wavelet domain, which allows us to characterize the order of development of large-scale and finer folding patterns independently. Given a limited amount of training data, we use a regularization framework to estimate the parameters of the Gompertz model to improve the prediction performance on new data. We develop an efficient method to estimate this regularized Gompertz model based on the Broyden–Fletcher–Goldfarb–Shannon (BFGS) approximation. Promising results are presented using both PCA and the folding development model in the wavelet domain. The cortical folding development model provides quantitative anatomic information regarding macroscopic cortical folding development and may be of potential use as a biomarker for early diagnosis of neurologic deficits in newborns.<br />
<br />
= Key Investigators =<br />
<br />
* MGH Algorithms: Peng Yu, P. Ellen Grant, Yuan Qi, Xiao Han, Florent Ségonne, Rudolph Pienaar, Evelina Busa, Jenni Pacheco, Nikos Makris, Randy L. Buckner, Polina Golland, and Bruce Fischl<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display?search=CorticalSurfaceShapeAnalysisUsingSphericalWavelets&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&searchbytag=checked&sponsors=checked| NA-MIC Publications Database on Spherical Wavelets]<br />
<br />
[[Category: Shape Analysis]] [[Category:MRI]]</div>

https://www.na-mic.org/w/index.php?title=Projects:LearningRegistrationCostFunctions&diff=52207
Projects:LearningRegistrationCostFunctions (2010-05-11T19:59:42Z)
<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]], [[Algorithm:MGH|MGH Algorithms]]<br />
__NOTOC__<br />
= Learning Task-Optimal Registration Cost Functions =<br />
<br />
We present a framework for learning the parameters of registration cost functions. The parameters of the registration cost function -- for example, the tradeoff between the image similarity and regularization terms -- are typically determined manually through inspection of the image alignment and then fixed for all applications. We propose a principled approach to learn these parameters with respect to particular applications.<br />
<br />
Image registration is ambiguous. For example, the figures below show two subjects with Brodmann areas overlaid on their cortical folding patterns. Brodmann areas are a parcellation of the cortex based on its cellular architecture. Here, we see that perfectly aligning the inferior frontal sulcus will misalign the superior end of BA44 (Broca's language area). If our goal is to segment sulci and gyri, perfect alignment of the cortical folding pattern is ideal. But it is unclear whether perfectly aligning cortical folds is optimal for localizing Brodmann areas. Here, we show that by taking into account the end-goal of registration, we not only improve the application performance but also potentially eliminate ambiguities in image registration.<br />
<br />
<br />
<center><br />
<table><br />
<tr><br />
<td><br />
[[Image:to.rh.pm1696.BAsALL.lat.cropped.caption.png|thumb|center|300px]]<br />
</td><br />
<td><br />
[[Image:to.rh.pm20784.BAsALL.lat.cropped.caption.png|thumb|center|300px]]<br />
</td><br />
</tr><br />
</table><br />
</center><br />
<br />
= Description =<br />
<br />
The key idea is to introduce a second layer of optimization over and above the usual registration. This second layer of optimization traverses the space of local minima, selecting registration parameters that result in good registration local minima as measured by the performance of the specific application in a training data set. The training data provides additional information not present in a test image, allowing the task-specific cost function to be evaluated during training. For example, if the task is segmentation, we assume the existence of a training data set with ground truth segmentation and a smooth cost function that evaluates segmentation accuracy. This segmentation accuracy is used as a proxy to evaluate registration accuracy.<br />
<br />
If the registration cost function employs a single parameter, then the optimal parameter value can be found by exhaustive search. With multiple parameters, exhaustive search is not possible. Here, we demonstrate the optimization of thousands of parameters by gradient descent on the space of local minima, selecting registration parameters that result in good registration local minima as measured by the task-specific cost function in the training data set. <br />
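The single-parameter case can be sketched as follows (a hypothetical illustration; ''register'' and ''task_accuracy'' are placeholder callables standing in for the actual registration and task-specific evaluation):<br />

```python
# Hypothetical sketch of the single-parameter case: with
#   cost = image_similarity + lam * regularization,
# the task-optimal tradeoff lam is found by exhaustive search, scoring
# each candidate by a task-specific proxy such as training-set
# segmentation accuracy.
def exhaustive_search(candidate_lams, register, task_accuracy, training_pairs):
    best_lam, best_acc = None, float("-inf")
    for lam in candidate_lams:
        # register every training pair with this tradeoff value...
        warps = [register(pair, lam) for pair in training_pairs]
        # ...and measure how well the resulting warps serve the end task
        acc = sum(task_accuracy(w) for w in warps) / len(warps)
        if acc > best_acc:
            best_lam, best_acc = lam, acc
    return best_lam
```

With thousands of parameters this loop is infeasible, which is what motivates the gradient descent over the space of local minima described above.<br />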
<br />
Our formulation is related to the use of continuation methods in computing the entire path of solutions of learning problems (e.g., SVM or Lasso) as a function of a single regularization parameter. Because we deal with multiple (thousands of) parameters, it is impossible for us to compute a solution manifold. Instead, we trace a path within the solution manifold that improves the task-specific cost function. <br />
<br />
Another advantage of our approach is that we do not require ground truth deformations. As suggested in the example above, the concept of “ground truth deformations” may not always be well-defined, since the optimal registration may depend on the application at hand. In contrast, our approach avoids the need for ground truth deformations by focusing on the application performance, where ground truth (e.g., via segmentation labels) is better defined.<br />
<br />
== Experimental Results ==<br />
<br />
We instantiate the framework for the alignment of hidden labels whose extents are not necessarily well-predicted by local image features. We consider the generic weighted Sum of Squared Differences (wSSD) cost function and estimate either (1) the optimal weights or (2) cortical folding template for localizing cytoarchitectural and functional regions based only on macroanatomical cortical folding information. We demonstrate state-of-the-art localization results in both histological and fMRI data sets.<br />
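The wSSD image term is a per-voxel weighted squared intensity difference between the registered image and the template; a minimal sketch (the full cost function also includes a regularization term, omitted here, and the exact parameterization used in the project may differ):<br />

```python
import numpy as np

def weighted_ssd(template, image, weights):
    # wSSD(I, T) = sum_v w_v * (I(v) - T(v))^2, where the per-voxel
    # weights w_v (and/or the template T) are the learned parameters.
    t, i, w = (np.asarray(x, dtype=float) for x in (template, image, weights))
    return float(np.sum(w * (i - t) ** 2))
```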
<br />
== (1) Localizing Brodmann Areas Using Cortical Folding ==<br />
In this experiment, we estimate the optimal template in the wSSD cost function for localizing Brodmann areas in 10 histologically-analyzed subjects. We compare 3 algorithms: task-optimal template (red), FreeSurfer (green) [1] and optimal uniform weights of the wSSD (black). The optimal uniform weights of the wSSD are found by setting all the weights to a single value and performing an exhaustive search over that value. We see in the figure below that the task-optimal framework achieves the lowest localization errors.<br />
<br />
<center><br />
<table><br />
<tr><br />
<td><br />
[[Image:to.Presentation.mean2.V1V2BA2.png|thumb|center|300px]]<br />
</td><br />
<td><br />
[[Image:to.Presentation.mean2.BA44BA45MT.png|thumb|center|300px]]<br />
</td><br />
</tr><br />
</table><br />
</center><br />
<br />
== (1b) Interpreting Task-Optimal Template Estimation ==<br />
Fig(a) shows the initial cortical geometry of a template subject with its corresponding BA2 in black outline. In this particular subject, the postcentral sulcus is more prominent than the central sulcus. Fig(b) shows the cortical geometry of a test subject together with its BA2. In this subject, the central sulcus is more prominent than the postcentral sulcus. Consequently, in the uniform-weights method, the central sulcus of the test subject is wrongly mapped to the postcentral sulcus of the template, so that BA2 is misregistered, as shown by the green outline in Fig(a). During task-optimal training, our method interrupts the geometry of the postcentral sulcus in the template because the uninterrupted postcentral sulcus in the template is inconsistent with localizing BA2 in the training subjects. The final template is shown in Fig(c). We see that the BA2 of the subject (green) and the task-optimal template (black) are well-aligned, although there still exists localization error in the superior end of BA2.<br />
<br />
<center><br />
<table><br />
<tr><br />
<td><br />
[[Image:BA2_template.png|thumb|center|250px|(a) Initial Template]]<br />
</td><br />
<td><br />
[[Image:BA2_subject.png|thumb|center|250px|(b) Test Subject]]<br />
</td><br />
<td><br />
[[Image:BA2_final_template.png|thumb|center|250px|(c) Final Template]]<br />
</td><br />
</tr><br />
</table><br />
</center><br />
<br />
The video below visualizes the template at each iteration of the optimization.<br />
[[Image:lh.pm14686.BA2.gif|thumb|center|300px]]<br />
<br />
== (2) Localizing fMRI-defined MT+ Using Cortical Folding ==<br />
In this experiment, we consider 42 subjects with fMRI-defined MT+. MT+ is thought to include the cytoarchitectonically-defined MT, as well as a small part of the medial superior temporal region. We use the ex-vivo MT template to predict MT+ in the 42 in-vivo subjects. We see that, once again, the task-optimal template achieves better localization results than FreeSurfer.<br />
<br />
[[Image:exvivoMTPredictsInvivoMT.png|thumb|center|300px]]<br />
<br />
== (2b) Cross-validating In-vivo Subjects ==<br />
We now perform cross-validation within the in-vivo data set. We consider 9, 19 or 29 training subjects for the task-optimal template. For FreeSurfer, there is no training, so there is only 1 data point. As before, the task-optimal template achieves lower localization errors.<br />
<br />
<center><br />
<table><br />
<tr><br />
<td><br />
[[Image:To.LeftInvivoCrossValidation.png|thumb|center|250px|(a) Initial Template]]<br />
</td><br />
<td><br />
[[Image:To.RightInvivoCrossValidation.png|thumb|center|250px|(b) Test Subject]]<br />
</td><br />
</tr><br />
</table><br />
</center><br />
<br />
<br />
<br />
<br />
[1] B. Fischl, M. Sereno, R. Tootell, and A. Dale. High-resolution Intersubject Averaging and a Coordinate System for the <br />
Cortical Surface. Human Brain Mapping, 8(4):272–284, 1999.<br />
<br />
= Key Investigators =<br />
<br />
* MIT: [http://people.csail.mit.edu/ythomas/ B.T. Thomas Yeo], Mert Sabuncu, Polina Golland<br />
* MGH: Bruce Fischl, Daphne Holt<br />
* INRIA: Tom Vercauteren<br />
* Aachen University: Katrin Amunts<br />
* Research Center Juelich: Karl Zilles<br />
<br />
= Publications =<br />
<br />
<font color="red">'''New: '''</font> <br />
B.T. Thomas Yeo, Mert R. Sabuncu, Tom Vercauteren, Daphne Holt, Katrin Amunts, Karl Zilles, Polina Golland, and Bruce Fischl.<br />
Learning Task-Optimal Registration Cost Functions for Localizing Cytoarchitecture and Function in the Cerebral Cortex.<br />
IEEE Transactions on Medical Imaging, in press, 2010.<br />
<br />
[http://www.na-mic.org/publications/pages/display?search=Task-optimal&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Learning Task-Optimal Registration Cost Functions]<br />
<br />
[[Category: Registration]] [[Category:Segmentation]]</div>

https://www.na-mic.org/w/index.php?title=Projects:ShapeBasedLevelSetSegmentation&diff=52206
Projects:ShapeBasedLevelSetSegmentation (2010-05-11T19:59:20Z)
<p>Melonakos: </p>
<hr />
<div> Back to [[Algorithm:MIT|MIT Algorithms]]<br />
__NOTOC__<br />
= Shape Based Level Set Segmentation =<br />
This class of algorithms explicitly manipulates the representation of the object boundary to fit the strong gradients in the image, indicative of the object outline. Bias in the boundary evolution towards the likely shapes improves the robustness of the segmentation results when the intensity information alone is insufficient for boundary detection.<br />
<br />
This algorithm is already available in ITK.<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
<br />
* [http://www.na-mic.org/publications/pages/display?search=Projects%3AShapeBasedLevelSetSegmentation&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Shape Based Level Segmentation]<br />
<br />
[[Category: Segmentation]]</div>

https://www.na-mic.org/w/index.php?title=Projects:StatisticalSegmentationSlicer2&diff=52205
Projects:StatisticalSegmentationSlicer2 (2010-05-11T19:55:00Z)
<p>Melonakos: </p>
<hr />
<div> Back to [[Algorithm:GATech|Georgia Tech Algorithms]]<br />
__NOTOC__<br />
= Statistical Segmentation Slicer 2 =<br />
<br />
Our objective is to add various statistical measures into our PDE flows for medical imaging. This will allow the incorporation of global image information into the locally defined PDE framework.<br />
<br />
= Description =<br />
<br />
We developed flows that can separate the intensity distributions inside and outside the evolving contour, and we have also incorporated shape information into the flows.<br />
<br />
''Completed''<br />
<br />
* A statistically based flow for image segmentation, using Fast Marching<br />
<br />
[[Image:Gatech_SlicerModel2.jpg|thumb|right|180px|Figure 1:Screenshot from the Slicer Fast Marching module]]<br />
<br />
* The code has been integrated into Slicer<br />
* A user-oriented tutorial for the Fast Marching algorithm is available at [http://www.bme.gatech.edu/groups/minerva/publications/papers/pichon.slicer.fastMarching/index.html Slicer Module Tutorial]<br />
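As background, the front propagation underlying Fast Marching can be approximated on a grid with a Dijkstra-style sweep (an illustrative sketch, not the Slicer module's implementation; true fast marching solves the Eikonal equation with an upwind finite-difference update rather than this graph approximation):<br />

```python
import heapq

def front_arrival_times(speed, seeds):
    # The front expands from the seed points with local speed F(x),
    # accumulating an arrival time T such that (roughly) |grad T| = 1/F.
    rows, cols = len(speed), len(speed[0])
    T = [[float("inf")] * cols for _ in range(rows)]
    heap = []
    for r, c in seeds:
        T[r][c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        t, r, c = heapq.heappop(heap)
        if t > T[r][c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nt = t + 1.0 / speed[nr][nc]  # slow regions cost more time
                if nt < T[nr][nc]:
                    T[nr][nc] = nt
                    heapq.heappush(heap, (nt, nr, nc))
    return T
```

For segmentation, the speed would be derived from image statistics so the front moves quickly through homogeneous tissue and stalls at boundaries.<br />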
<br />
= Improvements =<br />
<br />
Improvements over the original method are [[RobustStatisticsSegmentation|here.]]<br />
<br />
= Key Investigators =<br />
<br />
* Georgia Tech Algorithms: Delphine Nain, Eric Pichon, Oleg Michailovich, Yogesh Rathi, James Malcolm, Allen Tannenbaum<br />
<br />
= Publications =<br />
<br />
''In print''<br />
* [http://www.na-mic.org/publications/pages/display?search=StatisticalSegmentationSlicer2&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&searchbytag=checked&sponsors=checked| NA-MIC Publications Database on Statistical/PDE Methods using Fast Marching for Segmentation]<br />
<br />
Note: Best student presentation in image segmentation award [http://www.bme.gatech.edu/groups/minerva/publications/papers/pichon-media2004-segmentation.pdf [1]]<br />
<br />
[[Category: Slicer]] [[Category: Segmentation]] [[Category:Statistics]]</div>

https://www.na-mic.org/w/index.php?title=Projects:KPCA_LLE_KLLE_ShapeAnalysis&diff=52204
Projects:KPCA LLE KLLE ShapeAnalysis (2010-05-11T19:54:34Z)
<p>Melonakos: </p>
<hr />
<div> Back to [[Algorithm:GATech|Georgia Tech Algorithms]]<br />
__NOTOC__<br />
= KPCA LLE KLLE Shape Analysis =<br />
<br />
Our objective is to compare various shape representation techniques: linear PCA (LPCA), kernel PCA (KPCA), locally linear embedding (LLE), and<br />
kernel locally linear embedding (KLLE).<br />
<br />
= Description =<br />
<br />
The surfaces are represented as the zero level set of a signed distance function, and shape learning is performed on the embeddings of these shapes. We carried out experiments to see how well each of these methods can represent a shape, given the training set. We tested the performance of these methods on shapes of the left caudate nucleus and left hippocampus. The training set for the left caudate nucleus consisted of 26 data sets and the test set contained 3 volumes. The error between a particular shape representation and<br />
the ground truth was calculated as the number of mislabeled voxels. Figure 1 gives the error<br />
for each of the methods. Similar tests were done on a training set of 20 hippocampus data sets with 3 test volumes. Figure 2 gives the error table for each of the methods [1].<br />
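A small illustrative sketch of the signed distance embedding and the mislabeled-voxel error metric (a brute-force stand-in for exposition; actual experiments would use a fast distance transform):<br />

```python
import numpy as np

def signed_distance(mask):
    # Signed distance embedding of a binary shape: negative inside,
    # positive outside, zero level set on the boundary. Brute-force
    # O(n^2); real pipelines use a fast distance transform.
    mask = np.asarray(mask, dtype=bool)
    pts = np.argwhere(mask)       # voxels inside the shape
    bg = np.argwhere(~mask)       # voxels outside the shape
    phi = np.empty(mask.shape)
    for idx in np.ndindex(mask.shape):
        other = bg if mask[idx] else pts
        d = np.sqrt(((other - np.array(idx)) ** 2).sum(axis=1)).min()
        phi[idx] = -d if mask[idx] else d
    return phi

def mislabeled_voxels(reconstruction_phi, ground_truth_mask):
    # Error metric used above: count voxels whose sign (inside/outside)
    # disagrees with the ground-truth labeling.
    inside = np.asarray(reconstruction_phi) < 0
    return int(np.sum(inside != np.asarray(ground_truth_mask, dtype=bool)))
```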
<br />
[[Image:Table1.png|thumb|600px|Figure 1: Number of mislabeled voxels for each method for the left caudate nucleus]]<br />
[[Image:Table2.png|thumb|600px|Figure 2: Number of mislabeled voxels for each method for the left hippocampus]]<br />
<br />
= Key Investigators =<br />
<br />
* Georgia Tech Algorithms: Yogesh Rathi, Samuel Dambreville, Allen Tannenbaum<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display?search=KPCA+LLE+KLLE+ShapeAnalysis&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&searchbytag=checked&sponsors=checked| NA-MIC Publications Database on KPCA, LLE, KLLE Shape Analysis]<br />
<br />
[[Category: Shape Analysis]]</div>

https://www.na-mic.org/w/index.php?title=Projects:KnowledgeBasedBayesianSegmentation&diff=52203
Projects:KnowledgeBasedBayesianSegmentation (2010-05-11T19:53:51Z)
<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:GATech|Georgia Tech Algorithms]], [[Engineering:Kitware|Kitware Engineering]]<br />
__NOTOC__<br />
= Knowledge Based Bayesian Segmentation =<br />
<br />
This ITK filter implements a segmentation algorithm that utilizes Bayes' rule along with an affine-invariant anisotropic smoothing filter.<br />
<br />
= Description =<br />
<br />
''Use Case''<br />
<br />
I'd like to segment a volume or sub-volume into 'N' classes in a very general manner. I will provide the data and the number of classes that I expect, and the algorithm will output a label map with 'N' classes.<br />
<br />
''Data''<br />
<br />
We have applied this algorithm to 20 normal brain MRI data-sets. We used publicly available data-sets from<br />
the Internet Brain Segmentation Repository (IBSR) offered by the Massachusetts General Hospital, Center for<br />
Morphometric Analysis. The IBSR data-sets are T1-weighted, 3D coronal brain scans after having been<br />
positionally normalized. Manual expert segmentations for these data-sets are publicly available and represent<br />
the ground truth used in this work.<br />
<br />
''Algorithm''<br />
<br />
This algorithm can be cast in either a static or dynamic framework. In the static framework, the following is the algorithm:<br />
<br />
# The user sets the number of distinct classes for segmentation: 'N'<br />
# Generate 'N' prior images (default, 'N' uniform prior images) <br /><br />
# Generate 'N' statistical distributions (default, 'N' normal distributions) <br /><br />
# Generate 'N' membership images by applying the statistical distributions to the raw data <br /><br />
# Generate 'N' posterior images by applying Bayes' rule to the prior and membership images <br /><br />
# Smooth the posterior images for 'm' iterations using an affine-invariant anisotropic smoothing filter and renormalize after each iteration (default, m = 5) <br /><br />
# Apply maximum a posteriori rule to apply labeling and finalize segmentation<br />
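The static pipeline above can be sketched as follows (an illustrative 1-D numpy sketch with Gaussian class distributions; the affine-invariant anisotropic smoothing of step 6 is replaced here by a simple neighbor-averaging placeholder, so this is not the ITK filter itself):<br />

```python
import numpy as np

def bayesian_segmentation(image, means, sigmas, priors=None, m=5):
    image = np.asarray(image, dtype=float)
    n = len(means)
    if priors is None:                      # step 2: uniform priors
        priors = np.full(n, 1.0 / n)
    # steps 3-4: Gaussian membership (likelihood) images, one per class
    memb = np.stack([np.exp(-(image - mu) ** 2 / (2 * s ** 2)) / s
                     for mu, s in zip(means, sigmas)])
    post = priors[:, None] * memb           # step 5: Bayes' rule
    post /= post.sum(axis=0, keepdims=True)
    for _ in range(m):                      # step 6: smooth + renormalize
        smoothed = (post + np.roll(post, 1, axis=1)
                    + np.roll(post, -1, axis=1)) / 3.0
        post = smoothed / smoothed.sum(axis=0, keepdims=True)
    return post.argmax(axis=0)              # step 7: MAP labeling
```

Smoothing the posteriors before the MAP step is what removes isolated misclassified voxels in noisy data.<br />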
<br />
In the dynamic framework, the following image depicts the adaptation of the static framework to the dynamic formulation:<br />
<br />
[[Image:Flowchart-classification.png| Dynamic Tissue Tracking Algorithm | center]]<br />
<br />
<br/><br />
<br />
''The ITK filter design''<br />
<br />
<br/><br />
<br />
[[Image:Flowchart.png| Flowchart]]<br />
<br />
''Some Results''<br />
<br />
* [[Image:Plot white.png | White Matter Performance on the 20 IBSR datasets | 600px]] WM Algorithm Comparisons<br />
* [[Image:Plot gray.png | Gray Matter Performance on the 20 IBSR datasets | 600px]] GM Algorithm Comparisons<br />
* [[Image:Fig67.png | Visual Results | 600px]] Visual Results on IBSR data<br />
<br />
''Project Status''<br />
<br />
* Fully incorporated into itkBayesianClassificationImageFilter and itkBayesianClassificationInitializationImageFilter in ITK CVS. <br /><br />
* Fully wrapped in VTK for use in Slicer. <br /><br />
* The working ITK code has been committed to the [http://www.na-mic.org:8000/svn/NAMICSandBox/BayesianSegmentationModule/ SandBox]<br />
<br />
= Key Investigators =<br />
<br />
* Georgia Tech Algorithms: John Melonakos, Yi Gao, Allen Tannenbaum<br />
* Kitware Engineering: Luis Ibanez, Karthik Krishnan<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display?search=KnowledgeBasedBayesianSegmentation&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&searchbytag=checked&sponsors=checked| NA-MIC Publications Database on Knowledge-Based Bayesian Segmentation]<br />
<br />
<br />
[[Category: Segmentation]] [[Category:MRI]] [[Category:Slicer]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:BloodVesselSegmentation&diff=52202Projects:BloodVesselSegmentation2010-05-11T19:50:40Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[Algorithm:GATech|Georgia Tech Algorithms]]<br />
__NOTOC__<br />
= Blood Vessel Segmentation =<br />
<br />
Atherosclerosis is a systemic disease of the vessel wall that occurs in the aorta, carotid, coronary, and peripheral arteries. Atherosclerotic plaques in coronary arteries may cause stenosis (narrowing) or complete occlusion of the arteries and lead to serious consequences such as heart attacks. Imaging techniques have greatly assisted the diagnosis and treatment of atherosclerosis. Three-dimensional imaging such as CTA of the coronary arteries is a relatively new approach but has great potential for detecting and evaluating coronary calcification and stenosis. Fig. 1 (b) shows an example of the 3D reconstruction of the coronary arteries and the aorta.<br />
<br />
= Description =<br />
<br />
[[Image:Fig1yan.PNG | Figure 1]]<br />
<br />
A novel image segmentation approach is proposed, combining a Bayesian pixel classification method with an active surface model in a level set formulation to extract the coronary arteries from CT angiography images. Fig. (2) shows the reconstructed coronary arteries from three different patients, and Fig. (3) shows sample slices with the original images and the delineated vessels as cross-sections.<br />
<br />
[[Image:Fig2yan.PNG | Figure 2]]<br />
<br />
Once the surface of the coronaries is reconstructed, further shape analysis and measurements can be conducted on it. Fig. (4) shows the results of performing centerline extraction using a harmonic skeletonization technique [3]. The skeletons then serve as a guide for finding the planes perpendicular to the arteries, and these planes are intersected with the vessel to measure the local cross-sectional areas, as shown in Fig. (5).<br />
<br />
[[Image:Fig3yan.PNG | Figure 3]]<br />
<br />
[[Image:Fig4yan.PNG | Figure 4]]<br />
<br />
[[Image:Fig5yan.PNG | Figure 5]]<br />
<br />
''Soft Plaque Detection and Segmentation''<br />
<br />
Recent studies have shown that soft plaque is more vulnerable to rupture than hard plaque. Hence it becomes necessary to develop methods to detect and segment soft plaque automatically. Soft plaque has an intensity that lies between the intensities of the blood lumen and the cardiac muscle, making it difficult to detect using a globally computed energy.<br />
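As a toy illustration of this intensity relationship, a simple global band threshold can flag candidate soft-plaque voxels; as noted above, such a global criterion alone is unreliable, which is what motivates localized energies. The intensity bounds below are hypothetical, not values from this work:<br />

```python
import numpy as np

def soft_plaque_mask(image, lumen_lo, muscle_hi):
    """Flag voxels whose intensity lies strictly between the cardiac-muscle
    range (below muscle_hi) and the contrast-enhanced lumen range (above
    lumen_lo). Both bounds are assumed inputs, not published values."""
    image = np.asarray(image, dtype=float)
    return (image > muscle_hi) & (image < lumen_lo)

# Hypothetical intensity values: muscle below ~80, enhanced lumen above ~200.
mask = soft_plaque_mask([50, 120, 300, 90], lumen_lo=200, muscle_hi=80)
```

Any voxel of similar intensity anywhere in the image passes this test, so a practical detector must additionally restrict attention to a neighborhood of the vessel wall.<br />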
<br />
''Vessel segmentation using Tubular Surface Extraction framework''<br />
<br />
In this work, we are extending the Tubular Surface Extraction framework of Mohan et al. towards segmenting vessel structures. The blood vessel is modeled as a tube with a centerline and a radius function associated with each point. Further, the work is being extended to accommodate the evolution of end points. This allows a segmentation framework in which a portion of the main branch of the vessel tree can be selected as input, and the framework evolves it to capture the entire vessel tree. Fig. 10 shows the results from the application of the fixed-end-points version of this framework.<br />
<br />
[[Image:Mohan_Tubular_Vessels_1.PNG | Figure 10]]<br />
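The tube model itself is easy to make concrete: given a centerline and a per-point radius function, the surface is swept out by circles lying in planes normal to the centerline. The sketch below is only an illustrative sampler of that parameterization, not the energy-minimizing evolution of the actual framework:<br />

```python
import numpy as np

def tube_points(centerline, radii, n_theta=16):
    """Sample a toy tubular surface: a circle of the given radius in the
    plane normal to the finite-difference tangent at each centerline point."""
    centerline = np.asarray(centerline, dtype=float)
    pts = []
    for i, (c, r) in enumerate(zip(centerline, radii)):
        j = min(i + 1, len(centerline) - 1)
        k = max(i - 1, 0)
        t = centerline[j] - centerline[k]          # tangent by central difference
        t /= np.linalg.norm(t)
        # build an orthonormal frame (t, u, v) around the tangent
        a = np.array([1.0, 0.0, 0.0]) if abs(t[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        u = np.cross(t, a); u /= np.linalg.norm(u)
        v = np.cross(t, u)
        for theta in np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False):
            pts.append(c + r * (np.cos(theta) * u + np.sin(theta) * v))
    return np.array(pts)

surface = tube_points([[0, 0, 0], [0, 0, 1], [0, 0, 2]], radii=[1.0, 1.0, 1.0])
```

In the actual framework the centerline and radius function are the unknowns of a variational problem, and the end points themselves may evolve to grow the tube along the vessel tree.<br />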
<br />
<br />
<br />
<br />
<br />
<br />
= Key Investigators =<br />
<br />
* Georgia Tech Algorithms:Vandana Mohan, Shawn Lankton, Yan Yang, Ponnappan Arumuganainar, Allen Tannenbaum <br />
<br />
= Publications =<br />
<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display?search=BloodVesselSegmentation&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&searchbytag=checked&sponsors=checked| NA-MIC Publications Database on Blood Vessel Segmentation]<br />
<br />
[[Category: Segmentation]] [[Category:CT]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:ConformalFlatteningRegistration&diff=52201Projects:ConformalFlatteningRegistration2010-05-11T19:50:21Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:fMRIAnalysis|NA-MIC_Collaborations]], [[Algorithm:GATech|Georgia Tech Algorithms]]<br />
__NOTOC__<br />
= Conformal Flattening Registration =<br />
<br />
The goal of this project is better visualization and computation of neural activity from fMRI brain imagery. With this technique, shapes can also be mapped to spheres for shape analysis, registration, or other purposes. Our technique is based on conformal mappings, which map a genus-zero surface (in the fMRI case, the cortical or another brain surface) onto a sphere in an angle-preserving manner.<br />
<br />
The explicit transform is obtained by solving a partial differential equation. This transform maps the original surface to a plane (flattening), and the classic stereographic projection then maps the plane to a sphere.<br />
<br />
= Description =<br />
<br />
The process of the algorithm is briefly given below:<br />
<br />
# The conformal mapping <span class="texhtml">''f''</span> is defined on the original surface <span class="texhtml">Σ</span> as <math>\triangle f = (\frac{\partial}{\partial u} - i\frac{\partial}{\partial v})\delta_p</math>. Here, <span class="texhtml">''u''</span> and <span class="texhtml">''v''</span> are the conformal coordinates defined on the surface, and <span class="texhtml">δ<sub>''p''</sub></span> is a Dirac function whose value is non-zero only at the point <span class="texhtml">''p''</span>. By solving this partial differential equation, the mapping <span class="texhtml">''f''</span> is obtained.<br />
# To solve this equation on the discrete mesh representation of the surface, the finite element method (FEM) is used. The problem reduces to solving a linear system <span class="texhtml">''D''.''x'' = ''b''</span>. Since ''b'' is a complex vector, the real and imaginary parts of the mapping <span class="texhtml">''f''</span> can be calculated separately via two linear systems.<br />
# Having the mapping <span class="texhtml">''f''</span>, the original surface can be mapped to a plane.<br />
# Further, the plane can be mapped to a sphere by the stereographic projection.<br />
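Step 4 has a closed form: the inverse stereographic projection sends a point (u, v) of the flattened plane to the unit sphere. A minimal sketch, projecting from the north pole so that the plane's origin lands on the south pole:<br />

```python
import numpy as np

def stereographic_to_sphere(u, v):
    """Inverse stereographic projection (from the north pole): map a point
    (u, v) of the flattened plane to the unit sphere."""
    d = 1.0 + u * u + v * v
    return np.array([2 * u / d, 2 * v / d, (u * u + v * v - 1.0) / d])

p = stereographic_to_sphere(0.0, 0.0)  # the plane's origin
```

Points far from the origin of the plane accumulate near the north pole, which is why the choice of the puncture point ''p'' matters for the distribution of the mesh on the sphere.<br />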
<br />
Also, in the work of [[Algorithm:GATech:Multiscale_Shape_Segmentation|Multiscale Shape Segmentation]], conformal flattening is used as the first step for remeshing the surface.<br />
<br />
= Progress =<br />
The road to ITK and Slicer3:<br />
<br />
== ITK ==<br />
The conformal flattening algorithm is now in ITK/Code/Review/itkConformalFlatteningMeshFilter.h/txx. The following figures show how the surfaces are mapped conformally to a sphere.<br />
<br />
[[Image:Nice.PNG| | 300px ]] ===> [[Image:Nice-flat.PNG| 300px ]]<br />
<br />
[[Image:Brain.PNG | 300px ]] ===> [[Image:Brain-flat.PNG | 300px ]]<br />
<br />
== Slicer3 module ==<br />
<br />
During the 2008 Project Week, it was further integrated into Slicer3 as a command-line module. Below we show two screenshots of its use.<br />
<br />
[[Image:SlicerConformalFlatten.png| The mesh to be mapped to sphere. | 500px]]<br />
[[Image:SlicerConformalFlattenResult.png| The spherical image of the mesh. | 500px]]<br />
<br />
= Acceleration =<br />
<br />
The major computation lies in solving the linear system with the conjugate gradient method. This can be accelerated using preconditioning techniques.<br />
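A minimal sketch of the idea, using a Jacobi (diagonal) preconditioner inside the conjugate gradient iteration; the specific preconditioner used for the flattening system is not detailed here, so this is only a generic illustration for a small symmetric positive-definite system:<br />

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=200):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner."""
    M_inv = 1.0 / np.diag(A)            # Jacobi preconditioner: inverse diagonal
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
x = pcg(A, np.array([1.0, 2.0]))
```

The preconditioner clusters the eigenvalues of the iteration matrix, reducing the number of conjugate gradient steps needed; for large FEM systems an incomplete factorization typically performs better than the simple diagonal scaling shown here.<br />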
<br />
= Key Investigators =<br />
<br />
*Georgia Tech Algorithms: Yi Gao, John Melonakos, Allen Tannenbaum<br />
<br />
= Publications =<br />
<br />
* [http://www.na-mic.org/publications/pages/display?search=ConformalFlatteningRegistration&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&searchbytag=checked&sponsors=checked| NA-MIC Publications Database on Conformal Flattening (inactive)]<br />
<br />
[[Category:fMRI]] [[Category:Registration]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:RuleBasedStriatumSegmentation&diff=52199Projects:RuleBasedStriatumSegmentation2010-05-11T19:49:53Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:GATech|Georgia Tech Algorithms]], [[DBP1:Harvard|Harvard DBP1]], [[Engineering:Kitware|Kitware Engineering]], [[Engineering:Isomics|Isomics Engineering]], [[DBP1:Irvine|Irvine DBP1]]<br />
__NOTOC__<br />
= Rule Based Striatum Segmentation = <br />
<br />
In this work, we provide software to semi-automate the implementation of segmentation procedures based on expert neuroanatomist rules. We have implemented our code in Slicer 2. We currently provide modules for the semi-automatic segmentation of the DLPFC and the Striatum.<br />
<br />
= Description =<br />
<br />
We have developed an algorithm for semi-automatic segmentation of the DLPFC based on the rules of Core 3 collaborator, Dr. James Fallon. This algorithm was tested last year in Matlab with successful results. This year, we implemented the algorithm as a 3D Slicer module which works with the current Editor Tab. A screenshot of the module is shown below. The ITK Bayesian Segmentation Filter is currently being incorporated into the module. This is important, since we use Bayesian classifiers to enhance the Fallon method. The motivation of the DLPFC semi-automatic segmenter was to minimize segmentation time of the DLPFC by incorporating the rules of Dr. Fallon into an algorithm, while still giving the user control of the segmentation process. The time to segment the DLPFC was reduced from over 30 minutes to approximately 5 minutes. The algorithm is based on the average proportional distances of the posterior boundary from the temporal lobe tip and the anterior boundary from the frontal pole. Each hemisphere must be done separately. The average shape is a parallelogram arising from the movement of the middle frontal gyrus dorsally as one moves posteriorly through coronal slices. Dr. James Fallon visited Georgia Tech in December 2005 to train our local researchers in his heuristic rules. He will be visiting again on May 17-18, 2007 for further testing and algorithmic development as well as clinical applications.<br />
<br />
''Striatum Progress''<br />
<br />
We have developed an algorithm for delineation of the striatum into 5 physiological subregions (pre/post caudate, pre/post putamen, and nucleus accumbens) while requiring only minimal user input. We have implemented this algorithm, based on the geometric rules for delineating the striatum defined by our Core 3 collaborator, Dr. James Levitt of the PNL, as a 3D Slicer module. The current run time for the algorithm is ~20 seconds after the initial user input. The user inputs a label map of the full striatum, the most superior/dorsal voxel of the putamen on each slice, and the anterior commissure voxel (see figure below). From these, the labelmap is delineated into the aforementioned subregions. The figure below shows a 3D model of the left and right striatum delineated into the five subregions.<br />
<br />
''Striatum Representative Image and Descriptive Caption''<br />
<br />
[[Image:Striatum1.png|[[Image:Striatum1.png|Image:Striatum1.png]]]] [[Image:Striatum2.png|[[Image:Striatum2.png|Image:Striatum2.png]]]]<br />
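The pre-/post-commissural part of the delineation can be sketched as a split of the label map at the anterior commissure slice. This is a simplified illustration only: the actual module also uses the dorsal putamen landmarks and extracts the nucleus accumbens, and the label values here are hypothetical:<br />

```python
import numpy as np

# Hypothetical label values for this sketch.
CAUDATE, PUTAMEN = 1, 2
PRE_CAU, POST_CAU, PRE_PUT, POST_PUT = 11, 12, 21, 22

def split_pre_post(labelmap, ac_slice, axis=0):
    """Toy pre-/post-commissural split: voxels anterior to the anterior
    commissure slice become 'pre', the rest 'post'."""
    out = np.zeros_like(labelmap)
    idx = np.arange(labelmap.shape[axis])
    # boolean mask of 'anterior' slices, broadcast along the chosen axis
    pre = idx.reshape([-1 if a == axis else 1 for a in range(labelmap.ndim)]) < ac_slice
    out[(labelmap == CAUDATE) & pre] = PRE_CAU
    out[(labelmap == CAUDATE) & ~pre] = POST_CAU
    out[(labelmap == PUTAMEN) & pre] = PRE_PUT
    out[(labelmap == PUTAMEN) & ~pre] = POST_PUT
    return out

subregions = split_pre_post(np.array([[1, 2], [1, 2], [1, 2]]), ac_slice=1)
```

Because the split is a single plane through the commissure, the whole operation is a vectorized relabeling, which is consistent with the ~20 second run time reported above.<br />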
<br />
= Key Investigators =<br />
<br />
* Georgia Tech Algorithms: Ramsey Al-Hakim, Delphine Nain, Allen Tannenbaum, John Melonakos<br />
* Harvard DBP1: Sylvain Bouix, James Levitt, Marc Niethammer, Martha Shenton.<br />
* Kitware Engineering: Luis Ibanez<br />
* Isomics Engineering: Steve Pieper<br />
* Irvine DBP1: James Fallon<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display?search=RuleBasedStriatumSegmentation&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&sponsors=checked&searchbytag=checked| NA-MIC Publications Database on Rule-Based Striatum Segmentation]<br />
<br />
[[Category: Segmentation]] [[Category:MRI]] [[Category:Schizophrenia]] [[Category:Slicer]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:RuleBasedDLPFCSegmentation&diff=52198Projects:RuleBasedDLPFCSegmentation2010-05-11T19:49:33Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:GATech|Georgia Tech Algorithms]], [[DBP1:Irvine|Irvine DBP 1]], [[DBP2:Harvard|Harvard DBP 2]], [[Engineering:Kitware|Kitware Engineering]], [[Engineering:Isomics|Isomics Engineering]]<br />
__NOTOC__<br />
= Rule Based DLPFC Segmentation = <br />
<br />
In this work, we provide software to semi-automate the implementation of segmentation procedures based on expert neuroanatomist rules. We have implemented our code in Slicer 2. We currently provide modules for the semi-automatic segmentation of the DLPFC and the Striatum.<br />
<br />
= Description =<br />
<br />
We have developed an algorithm for Semi-Automatic Segmentation of the DLPFC based on the rules of Core 3 collaborator, Dr. James Fallon. This algorithm was tested last year in Matlab with successful results. This year, we implemented the algorithm into a 3D SLICER module which works with the current Editor Tab. A screenshot of the module is shown below. The ITK Bayesian Segmentation Filter is currently being incorporated into the module.<br />
This is important, since we use Bayesian classifiers in order to enhance the Fallon method.<br />
The motivation of the DLPFC semi-automatic segmentor was to minimize segmentation time of the DLPFC by incorporating the rules of Dr. Fallon into an algorithm, while still giving the user control of the segmentation process. The time to segment the DLPFC was reduced from over 30 minutes to approximately 5 minutes. The algoirthm is based on the average proportional distances of the posterior boundary from the temporal lobe tip and the anterior boundary from the frontal pole. Each hemisphere must be done separately. The average shape is a parallelogram from the movement of the middle frontal gyrus dorsally as moving posteriorly through coronal slices Dr. James Fallon has visited Georgia Tech in December 2005 to train our local reseachers about his heuristic rules. He will be visting again on May 17-18, 2007 for further testing and algorithmic development as well as clinical applications.<br />
<br />
''DLPFC Progress''<br />
<br />
We have developed an algorithm for delineation of the striatum into 5 physiological subregions (pre/post caudate, pre/post putamen, and nucleus accumbens) while requiring only minimal user input. We have implemented this algorithm, based on the geometric rules for delineating the striatum defined by our Core 3 collaborator, Dr. James Levitt of the PNL, as a 3D Slicer module. The current run time for the algorithm is ~20 seconds after the initial user input. The user inputs a label map of the full striatum, the most superior/dorsal voxel of the putamen on each slice, and the anterior commissure voxel (see figure below). From these, the labelmap is delineated into the aforementioned subregions. The figure below shows a 3D model of the left and right striatum delineated into the five subregions.<br />
<br />
[[Image:Dlpfc1.jpg]]<br />
<br />
= Key Investigators =<br />
<br />
* Georgia Tech Algorithms: Ramsey Al-Hakim, John Melonakos, Allen Tannenbaum<br />
* Irvine DBP 1: James Fallon<br />
* Harvard DBP 2: Marek Kubicki, Marc Niethammer, Sylvain Bouix<br />
* Kitware Engineering: Luis Ibanez<br />
* Isomics Engineering: Steve Pieper<br />
<br />
= Publications =<br />
<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display?search=RuleBasedDLPFCSegmentation&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&searchbytag=checked&sponsors=checked| NA-MIC Publications Database on Rule-Based DLPFC Segmentation]<br />
<br />
Project Week Results: [[ProjectWeek200706:vtkITKWrapperForRuleBasedSegmentation|Jun 2007]]<br />
<br />
[[Category:Segmentation]] [[Category:MRI]] [[Category:Schizophrenia]] [[Category:Slicer]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:MultiscaleShapeAnalysis&diff=52197Projects:MultiscaleShapeAnalysis2010-05-11T19:49:13Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:GATech|Georgia Tech Algorithms]], [[Algorithm:UNC|UNC Algorithms]], [[DBP1:Harvard|Harvard DBP1]]<br />
<br />
__NOTOC__<br />
= Multiscale Shape Analysis applied to Caudate and Hippocampus =<br />
<br />
We present a novel method of statistical surface-based morphometry based on the use of non-parametric permutation tests and a spherical wavelet (SWC) shape representation. As an application, we analyze two brain structures, the caudate nucleus and the hippocampus. We show that the results nicely complement the results obtained with shape analysis using a sampled point representation (SPHARM-PDM).<br />
<br />
= Description =<br />
<br />
''Pre-processing''<br />
<br />
[[Image:UNCShape_OverviewAnalysis_MICCAI06.gif|thumb|right|300px|]]<br />
<br />
We use the [[Projects:ShapeAnalysisFrameworkUsingSPHARMPDM|UNC Pipeline]] to pre-process the shapes. The input is a set of binary segmentation of a single brain structure. We use the following steps from the pipeline:<br />
<br />
# Morphological operations: fills any interior holes and applies a minimal smoothing operation<br />
# Surface conversion: converts the processed binary segmentations to surface meshes<br />
# Spherical Parameterization: computes a spherical parameterization for the surface meshes using an area-preserving, distortion-minimizing spherical mapping<br />
# Alignment: computes a spherical harmonic (SPHARM) description of the parameterized surface meshes and aligns the spherical parameterizations using the first-order ellipsoid from the spherical harmonic coefficients.<br />
# Retriangulation: samples the parameterized and aligned surface meshes into triangulated surfaces (SPHARM-PDM) via icosahedron subdivision of the spherical parametrization. These SPHARM-PDM surfaces are all spatially aligned using rigid Procrustes alignment.<br />
<br />
''Spherical Wavelet Features''<br />
<br />
[[Image:Basis_membership.png|thumb|right|200px|Figure 2: Visualization of spherical wavelet basis functions and associated regions at three levels (columns). Top row : Values of single spherical Wavelet Basis Function shown on the sphere at scales 1 through 3. Middle and Bottom row: Regions of influence of the spherical wavelet basis functions shown on the sphere and on the original surface, each basis function region has a random color.]]<br />
<br />
For each triangulated SPHARM-PDM surface (and its corresponding spherical parameterization), a spherical wavelet description is computed. As a result, each shape is represented by a series of 3D spherical wavelet coefficients (SWC). Each 3D coefficient is associated with a basis function that describes a region of the surface. The size of that region depends on the scale of the coefficient. Therefore each 3D coefficient describes the shape at a specific scale and spatial region (See Figure 2).<br />
<br />
''Statistics''<br />
<br />
We use the UNC [[Projects:LocalStatisticalAnalysisViaPermutationTests |statistical test toolbox]], which analyzes differences between two groups of surfaces described by a set of features. The group differences are computed locally for every feature using the standard robust Hotelling T^2 two-sample metric. The toolbox outputs a global average P-value over all features, as well as a raw and a multiple-comparison-corrected P-value for each feature. We use the 3D spherical wavelet coefficients as features.<br />
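For a single 3D coefficient observed across two groups of subjects, the Hotelling T^2 statistic reduces to a few lines. This is a plain, non-robust version for illustration; the toolbox's robust variant and its permutation-based p-values are not reproduced here:<br />

```python
import numpy as np

def hotelling_t2(x, y):
    """Two-sample Hotelling T^2 statistic for one 3-D feature (e.g. a single
    spherical wavelet coefficient observed across two groups of subjects).

    x, y: (n_subjects, 3) arrays, one row per subject.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    d = x.mean(axis=0) - y.mean(axis=0)          # difference of group means
    # pooled sample covariance of the two groups
    S = ((nx - 1) * np.cov(x, rowvar=False)
         + (ny - 1) * np.cov(y, rowvar=False)) / (nx + ny - 2)
    return (nx * ny) / (nx + ny) * d @ np.linalg.solve(S, d)
```

In a permutation test, the group labels are repeatedly shuffled and the statistic recomputed, so no parametric distributional assumption on T^2 is required for the p-values.<br />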
<br />
''Significance Maps''<br />
<br />
If a feature (3D spherical wavelet coefficient) is found significant (i.e., its P-value is less than a pre-determined significance level, such as 0.05), we color all points in the support of its basis function at that scale with the corresponding P-value. This allows us to visualize both the raw and FDR-corrected P-values as significance color maps on the surface of the mean shape of the structure under study.<br />
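One standard way to obtain such corrected P-values is the Benjamini-Hochberg false discovery rate procedure; the sketch below is a generic implementation and may differ in detail from the toolbox's correction:<br />

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: boolean mask of features whose
    raw p-values survive FDR control at level q."""
    p = np.asarray(pvals, float)
    order = np.argsort(p)
    m = len(p)
    thresh = q * (np.arange(1, m + 1) / m)       # q*i/m for the i-th smallest p
    below = p[order] <= thresh
    mask = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])          # largest i with p_(i) <= q*i/m
        mask[order[:k + 1]] = True                # declare the k+1 smallest significant
    return mask

significant = fdr_bh([0.001, 0.008, 0.039, 0.041, 0.5], q=0.05)
```

FDR control is less conservative than a family-wise (Bonferroni-type) correction, which matters here because the number of wavelet coefficients tested per shape is large.<br />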
<br />
''Progress''<br />
<br />
We conducted statistical shape analysis of two brain structures, the caudate nucleus and hippocampus, using spherical wavelet coefficients (SWC) as features and compare the results obtained to shape analysis using a SPHARM-PDM representation.<br />
<br />
= Key Investigators =<br />
<br />
* Georgia Tech Algorithms: Delphine Nain, Yi Gao, Xavier Le Faucheur, Allen Tannenbaum<br />
* UNC Algorithms: Martin Styner<br />
* Harvard DBP1: James Levitt, Marc Niethammer, Sylvain Bouix, Martha Shenton<br />
<br />
= Publications =<br />
<br />
''In print''<br />
<br />
* [http://www.na-mic.org/publications/pages/display?search=MultiscaleShapeAnalysis&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&searchbytag=checked&sponsors=checked| NA-MIC Publications Database on Multiscale Shape Analysis]<br />
<br />
[[Category:Shape Analysis]] [[Category:Schizophrenia]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:WaveletShrinkage&diff=52196Projects:WaveletShrinkage2010-05-11T19:48:31Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:GATech|Georgia Tech Algorithms]]<br />
__NOTOC__<br />
<br />
= Wavelet Shrinkage for Shape Analysis =<br />
<br />
= Description =<br />
<br />
Shape analysis has become a topic of interest in medical imaging since local variations of a shape could carry relevant information about a disease that may affect only a portion of an organ. We developed a novel wavelet-based denoising and compression statistical model for 3D shapes.<br />
<br />
''Method''<br />
<br />
Shapes are encoded using spherical wavelets that allow for a multiscale shape representation by decomposing the data both in scale and space using a multiresolution mesh (See Nain ''et al.'' MICCAI 2005). This representation also allows for efficient compression by discarding wavelet coefficients with low values that correspond to irrelevant shape information and high frequency coefficients that represent noisy artifacts. This process is called hard wavelet shrinkage and has been widely researched for traditional types of wavelets, but not much explored for second generation wavelets. <br />
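Hard wavelet shrinkage itself is a one-liner, which is what makes the choice of threshold the interesting part; the fixed-threshold baseline looks like this (the adaptive Bayesian rule developed below replaces the constant threshold):<br />

```python
import numpy as np

def hard_shrink(coeffs, threshold):
    """Classical hard wavelet shrinkage: zero out coefficients whose
    magnitude falls below the threshold, keep the rest untouched."""
    coeffs = np.asarray(coeffs, float)
    return np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

denoised = hard_shrink([0.9, -0.05, 0.02, -1.3, 0.08], threshold=0.1)
```

Reconstructing the shape from the surviving coefficients discards the low-valued detail and high-frequency noise terms, giving compression and denoising simultaneously.<br />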
<br />
In the wavelet domain, we model a shape as a vector of coefficients, consisting of the sum of signal coefficients and coefficients regarded as noise:<br />
[[Image:Equation_Shape_Representation.jpg| Equation| center]]<br />
<br />
We develop a non-linear statistical wavelet shrinkage model based on a data-driven framework that adaptively thresholds wavelet coefficients in order to accurately estimate the noiseless part of the shape signal. The proposed selection model in the wavelet domain is based on Bayesian hypothesis testing: a given wavelet coefficient will either be kept (null hypothesis rejected) or shrunk to zero (null hypothesis not rejected). Our threshold rule locally takes into account shape curvature and interscale dependencies between neighboring wavelet coefficients. Interscale dependencies let us exploit the correlation between levels of decomposition by looking at a coefficient's parents, while curvature terms adjust the strength of shrinkage according to the local coarse shape variations. A coefficient is very likely to be shrunk if the local curvature is low and its parents are low-valued. <br />
<br />
Our Bayesian framework incorporates that information as follows:<br />
<br />
[[Image:Bayesian_Framework.jpg| Bayesian Framework| center]]<br />
<br />
''Validation''<br />
<br />
Our validation shows how this new wavelet shrinkage framework outperforms classical compression and denoising methods for shape representation. We apply our method to the denoising of the left hippocampus and caudate nucleus from MRI brain data. In the following figures, we compare our denoising results to those obtained with universal thresholding (Donoho, 1995) and to the method that we previously developed (SPIE Optics East, 2007), which was based on inter- and intra-scale dependent shrinkage.<br />
<br />
[[Image:Left_Hippocampus_Denoising.jpg| Left Hippocampus Denoising| center]]<br />
''Shrinkage is applied to left hippocampus shapes for denoising: (a) original shape, (b) noisy shape, (c) results with traditional thresholding, (d) inter-/intra-scale and (e) proposed Bayesian method (in (c),(d),(e) color is normalized reconstruction error at each vertex (in % of the shape bounding box) from blue (lowest) to red)''<br />
<br />
We are able to remove more than 90% of the coefficients from the fine levels while recovering an accurate estimate of the original shape. Our data-driven Bayesian framework yields a spatially consistent model for wavelet shrinkage, as it preserves intrinsic features of the shape and keeps the smoothing process under control.<br />
<br />
= Key Investigators =<br />
<br />
* Georgia Tech: Xavier Le Faucheur, Allen Tannenbaum, Delphine Nain<br />
<br />
= Publications =<br />
<br />
''In print''<br />
<br />
[http://www.na-mic.org/publications/pages/display?search=WaveletShrinkage&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&searchbytag=checked&sponsors=checked| NA-MIC Publications Database on Wavelet Shrinkage for Shape Analysis]<br />
<br />
<br />
[[Category:Shape Analysis]]</div>Melonakoshttps://www.na-mic.org/w/index.php?title=Projects:MultiscaleShapeSegmentation&diff=52195Projects:MultiscaleShapeSegmentation2010-05-11T19:48:10Z<p>Melonakos: /* Publications */</p>
<hr />
<div> Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:GATech|Georgia Tech Algorithms]], [[Algorithm:UNC|UNC Algorithms]], [[Engineering:GE|GE Engineering]], [[Engineering:Kitware|Kitware Engineering]], [[DBP1:Harvard|Harvard DBP 1]]<br />
__NOTOC__<br />
= Multiscale Shape Segmentation =<br />
<br />
To represent multiscale variations in a shape population in order to drive the segmentation of deep brain structures, such as the caudate nucleus or the hippocampus.<br />
<br />
= Description =<br />
<br />
== Shape Representation and Prior ==<br />
<br />
The overview of our shape representation is given in Figure 1. Our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population (Figure 2). We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations (Figure 4) [1].<br />
<br />
[[Image:Gatech_SW_representation.png|thumb|200px|Figure 1: Steps of the Shape Representation using Spherical Wavelets]]<br />
[[Image:Gatech_SW_mscale_shape.png|thumb|200px|Figure 2: A shape is represented using spherical wavelet coefficients]]<br />
<br />
== Segmentation ==<br />
<br />
Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure, naturally incorporating the prior into the segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner.<br />
<br />
== Results ==<br />
<br />
We applied our algorithm to the caudate nucleus, a brain structure of interest in the study of schizophrenia [2]. Our validation shows that our algorithm is computationally efficient and outperforms the Active Shape Model (ASM) algorithm by capturing finer shape details.<br />
<br />
= Key Investigators =<br />
<br />
* Georgia Tech Algorithms: Delphine Nain, Aaron Bobick, Allen Tannenbaum<br />
* UNC Algorithms: Martin Styner<br />
* GE Engineering: Jim Miller<br />
* Kitware Engineering: Luis Ibanez<br />
* Harvard DBP 1: Steven Haker, James Levitt, Marc Niethammer, Sylvain Bouix, Martha Shenton<br />
<br />
= Publications = <br />
<br />
''In Print''<br />
* [http://www.na-mic.org/publications/pages/display?search=MultiscaleShapeSegmentation&submit=Search&words=all&title=checked&keywords=checked&authors=checked&abstract=checked&searchbytag=checked&sponsors=checked| NA-MIC Publications Database on Multiscale Shape Segmentation Techniques]<br />
<br />
[[Category:Shape Analysis]] [[Category:Segmentation]] [[Category:MRI]] [[Category:Schizophrenia]]</div>Melonakos