<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.na-mic.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Wachinge</id>
	<title>NAMIC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.na-mic.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Wachinge"/>
	<link rel="alternate" type="text/html" href="https://www.na-mic.org/wiki/Special:Contributions/Wachinge"/>
	<updated>2026-04-05T20:52:38Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.33.0</generator>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Analysis_of_different_atlas-based_segmentation_techniques_for_parotid_glands&amp;diff=81998</id>
		<title>Analysis of different atlas-based segmentation techniques for parotid glands</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Analysis_of_different_atlas-based_segmentation_techniques_for_parotid_glands&amp;diff=81998"/>
		<updated>2013-06-17T17:33:23Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Image:PW-MIT2013.png|[[2013_Summer_Project_Week#Projects|Projects List]]&lt;br /&gt;
Image: NAMIC_HeadNeck_segmentation.png|Parotid gland + brainstem&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Key Investigators==&lt;br /&gt;
* MIT: Christian Wachinger, Matthew Brennan&lt;br /&gt;
* MGH: Karl Fritscher, Greg Sharp&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin: 20px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Objective&amp;lt;/h3&amp;gt;&lt;br /&gt;
Our goal is to investigate various segmentation approaches for identifying parotid glands on head and neck CT images. The focus will be on atlas-based methods, which exploit information from a number of previously labeled images. Several different strategies exist for employing this prior information to achieve the segmentation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;Plan&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We will try to work out the differences in parameterization and regularization among various atlas-based methods, and to characterize the properties of such methods for the segmentation of parotid glands, which show high structural variability. Finally, we would like to investigate which combinations of methods may be promising.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 40%; float: left;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Progress&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Three different segmentation approaches have been tested for their suitability to segment the parotid gland on a dataset of 18 CT images: one based on multiple atlases [1], one based on statistical appearance models [2], and one using image patches in combination with Gaussian processes. Different ways of combining the atlas- and model-based approaches [1,2] are currently under development.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* [1] Peroni M, Methods and Algorithms for Image Guided Adaptive Radio- and Hadron Therapy.  PhD Thesis, Politecnico di Milano, 2011&lt;br /&gt;
* [2] Fritscher KD, Gruenerbl A, Schubert R, 3D image segmentation using combined shape-intensity prior models. Journal of Computer Assisted Radiology and Surgery, 2007;1:341–350&lt;br /&gt;
* [3] Wachinger C, Sharp G, Golland P, Contour-Driven Regression for Label Inference in Atlas-Based Segmentation, MICCAI, 2013.&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2013_Summer_Project_Week&amp;diff=81375</id>
		<title>2013 Summer Project Week</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2013_Summer_Project_Week&amp;diff=81375"/>
		<updated>2013-06-04T23:44:33Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Radiation Therapy */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[Events]]&lt;br /&gt;
[[image:PW-MIT2013.png|300px]]&lt;br /&gt;
&lt;br /&gt;
Dates: June 17-21, 2013.&lt;br /&gt;
&lt;br /&gt;
Location: MIT, Cambridge, MA.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Agenda==&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-style=&amp;quot;background:#b0d5e6;color:#02186f&amp;quot; &lt;br /&gt;
!style=&amp;quot;width:10%&amp;quot; |Time&lt;br /&gt;
!style=&amp;quot;width:18%&amp;quot; |Monday, June 17&lt;br /&gt;
!style=&amp;quot;width:18%&amp;quot; |Tuesday, June 18&lt;br /&gt;
!style=&amp;quot;width:18%&amp;quot; |Wednesday, June 19&lt;br /&gt;
!style=&amp;quot;width:18%&amp;quot; |Thursday, June 20&lt;br /&gt;
!style=&amp;quot;width:18%&amp;quot; |Friday, June 21&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|bgcolor=&amp;quot;#dbdbdb&amp;quot;|'''Project Presentations'''&lt;br /&gt;
|bgcolor=&amp;quot;#6494ec&amp;quot;|'''NA-MIC Update Day'''&lt;br /&gt;
|&lt;br /&gt;
|bgcolor=&amp;quot;#88aaae&amp;quot;|'''IGT and RT Day'''&lt;br /&gt;
|bgcolor=&amp;quot;#faedb6&amp;quot;|'''Reporting Day'''&lt;br /&gt;
|-&lt;br /&gt;
|bgcolor=&amp;quot;#ffffdd&amp;quot;|'''8:30am'''&lt;br /&gt;
|&lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Breakfast&lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Breakfast&lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Breakfast&lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Breakfast&lt;br /&gt;
|-&lt;br /&gt;
|bgcolor=&amp;quot;#ffffdd&amp;quot;|'''9am-12pm'''&lt;br /&gt;
|&lt;br /&gt;
|'''10-11am''' [[2013 Project Week Breakout Session:Slicer4Python|Slicer4 Python Modules, Testing, Q&amp;amp;A]] &amp;lt;br&amp;gt;&lt;br /&gt;
[[MIT_Project_Week_Rooms|Grier Room (Left)]] &lt;br /&gt;
|'''9:30-11am: &amp;lt;font color=&amp;quot;#4020ff&amp;quot;&amp;gt;Breakout Session:'''&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt; [[2013 Project Week Breakout Session: SimpleITK|Slicer and SimpleITK]] (Hans)&lt;br /&gt;
[[MIT_Project_Week_Rooms#32-D507|32-D507]]&lt;br /&gt;
|'''10am-12pm: &amp;lt;font color=&amp;quot;#4020ff&amp;quot;&amp;gt;Breakout Session:'''&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;[[2013 Project Week Breakout Session: IGT|Image-Guided Therapy]] (Tina)&lt;br /&gt;
[[MIT_Project_Week_Rooms#32-D407|32-D407]]&lt;br /&gt;
|'''10am-12pm:''' [[#Projects|Project Progress Updates]]&lt;br /&gt;
[[MIT_Project_Week_Rooms#Grier_34-401_AB|Grier Rooms]]&lt;br /&gt;
|-&lt;br /&gt;
|bgcolor=&amp;quot;#ffffdd&amp;quot;|'''12pm-1pm'''&lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Lunch&lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Lunch&lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Lunch&lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Lunch&lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Lunch boxes; Adjourn by 1:30pm&lt;br /&gt;
|-&lt;br /&gt;
|bgcolor=&amp;quot;#ffffdd&amp;quot;|'''1pm-5:30pm'''&lt;br /&gt;
|'''1-1:05pm: &amp;lt;font color=&amp;quot;#503020&amp;quot;&amp;gt;Ron Kikinis: Welcome&amp;lt;/font&amp;gt;'''&lt;br /&gt;
[[MIT_Project_Week_Rooms#Grier_34-401_AB|Grier Rooms]]&lt;br /&gt;
&amp;lt;br&amp;gt;----------------------------------------&amp;lt;br&amp;gt;&lt;br /&gt;
'''1:05-3:30pm:''' [[#Projects|Project Introductions]] (all Project Leads)&lt;br /&gt;
[[MIT_Project_Week_Rooms#Grier_34-401_AB|Grier Rooms]]&lt;br /&gt;
&amp;lt;br&amp;gt;----------------------------------------&amp;lt;br&amp;gt;&lt;br /&gt;
'''3:30-4:30pm''' [[2013 Summer Project Week Breakout Session:SlicerExtensions|Slicer4 Extensions]] (Jean-Christophe Fillion-Robin)  &amp;lt;br&amp;gt;&lt;br /&gt;
[[MIT_Project_Week_Rooms#Grier_34-401_AB|Grier Room (Left)]]&lt;br /&gt;
|'''1-3pm:''' [[Renewal-06-2013|NA-MIC Renewal]] &amp;lt;br&amp;gt;PIs &amp;lt;br&amp;gt;Closed Door Session with Ron&lt;br /&gt;
[[MIT_Project_Week_Rooms#32-D407|32-D407]] &lt;br /&gt;
&amp;lt;br&amp;gt;----------------------------------------&amp;lt;br&amp;gt;&lt;br /&gt;
'''3-4pm:''' [[2013_Tutorial_Contest|Tutorial Contest Presentations]] &amp;lt;br&amp;gt;&lt;br /&gt;
[[MIT_Project_Week_Rooms#Grier_34-401_AB|Grier Rooms]]&lt;br /&gt;
|'''12:45-1pm:''' [[Events:TutorialContestJune2013|Tutorial Contest Winner Announcement]]&lt;br /&gt;
[[MIT_Project_Week_Rooms#Grier_34-401_AB|Grier Rooms]]&lt;br /&gt;
|'''3-5:30pm: &amp;lt;font color=&amp;quot;#4020ff&amp;quot;&amp;gt;Breakout Session:'''&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt; [[2013 Summer Project Week Breakout Session:RT|Radiation Therapy]] (Greg, Csaba)&lt;br /&gt;
[[MIT_Project_Week_Rooms#32-D407|32-D407]]&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bgcolor=&amp;quot;#ffffdd&amp;quot;|'''5:30pm'''&lt;br /&gt;
|bgcolor=&amp;quot;#f0e68b&amp;quot;|Adjourn for the day&lt;br /&gt;
|bgcolor=&amp;quot;#f0e68b&amp;quot;|Adjourn for the day&lt;br /&gt;
|bgcolor=&amp;quot;#f0e68b&amp;quot;|Adjourn for the day&lt;br /&gt;
|bgcolor=&amp;quot;#f0e68b&amp;quot;|Adjourn for the day&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== '''Projects''' ==&lt;br /&gt;
&lt;br /&gt;
Please use [http://wiki.na-mic.org/Wiki/index.php/Project_Week/Template this template] to create wiki pages for your project. Then link the page here with a list of key personnel. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Huntington's Disease===&lt;br /&gt;
* [[Dynamically Configurable Quality Assurance Module for Large Huntington's Disease Database Frontend]] (Dave)&lt;br /&gt;
* [[DWIConvert]] (Kent)&lt;br /&gt;
* [[Learn and Apply FiberBundleLabelSelect for Huntington's Disease Data]] (Hans, Demian)&lt;br /&gt;
* [[Investigate Potential Tensor Computation Improvement via Positive Semi-Definite (PSD) Tensor Estimation]] (Hans)&lt;br /&gt;
* [[Enhance and update SPL atlas]] (Dave, Hans)&lt;br /&gt;
&lt;br /&gt;
===Traumatic Brain Injury===&lt;br /&gt;
* Validation and testing of 3D Slicer modules implementing the Utah segmentation algorithm for traumatic brain injury (Andrei Irimia, Micah Chambers, Bo Wang, Marcel Prastawa, Guido Gerig, Jack van Horn)&lt;br /&gt;
* Visualization and quantification of peri-contusional white matter bundles in traumatic brain injury using diffusion tensor imaging (Andrei Irimia, Micah Chambers, Ron Kikinis, Jack van Horn)&lt;br /&gt;
* Clinically oriented assessment of local changes in the properties of white matter affected by intra-cranial hemorrhage (Andrei Irimia, Micah Chambers, Ron Kikinis, Jack van Horn)&lt;br /&gt;
* Investigation of the peri-lesional penumbra in traumatic brain injury using diffusion tensor imaging to isolate longitudinal changes in white matter integrity (Andrei Irimia, Micah Chambers, Ron Kikinis, Jack van Horn)&lt;br /&gt;
* Reconstruction and visualization of the corticospinal tract in traumatic brain injury in the presence of severe hematoma and CSF-perfused edematous tissue using diffusion tensor imaging (Andrei Irimia, Micah Chambers, Ron Kikinis, Jack van Horn)&lt;br /&gt;
&lt;br /&gt;
===Atrial Fibrillation===&lt;br /&gt;
* [[2013_Summer_Project_Week:CARMA_workflow_wizard|Cardiac MRI Toolkit LA segmentation and enhancement quantification workflow wizard]] (Salma Bengali, Alan Morris, Brian Zenger, Josh Cates, Rob MacLeod)&lt;br /&gt;
* [[2013_Summer_Project_Week:CARMA_Documentataion|Cardiac MRI Toolkit Documentation Project]] (Salma Bengali, Alan Morris, Brian Zenger, Josh Cates, Rob MacLeod)&lt;br /&gt;
* [[2013_Summer_Project_Week:CARMA_Visualization|LA model visualization]] (Salma Bengali, Alan Morris, Josh Cates, Rob MacLeod)&lt;br /&gt;
* [[2013_Summer_Project_Week:CARMA_AutoLASeg|Cardiac MRI Toolkit: Automatic LA Segmentation with Graph Cuts Module]] (Salma Bengali, Alan Morris, Josh Cates, Gopal, Ross Whitaker, Rob MacLeod)&lt;br /&gt;
* [[2013_Summer_Project_Week:Sobolev_Segmenter|Medical Volume Segmentation Using Sobolev Active Contours]] (Arie Nakhmani, Yi Gao, LiangJia Zhu, Rob MacLeod, Josh Cates, Ron Kikinis, Allen Tannenbaum)&lt;br /&gt;
* [[2013_Summer_Project_Week:Fibrosis_analysis|Fibrosis distribution analysis]] (Yi Gao, LiangJia Zhu, Rob MacLeod, Josh Cates, Ron Kikinis, Allen Tannenbaum)&lt;br /&gt;
&lt;br /&gt;
===Radiation Therapy===&lt;br /&gt;
* Landmark Registration (Steve, Nadya, Greg, Paolo, Erol)&lt;br /&gt;
* [[Slicer RT: DICOM-RT Export]] (Greg Sharp, Kevin Wang, Csaba Pinter)&lt;br /&gt;
* [[2013_Summer_Project_Week:Proton_dose_calculation | Proton dose calculation]]  (Greg Sharp, Kevin Wang, Maxime Desplanques)&lt;br /&gt;
* [[2013_Summer_Project_Week:Deformable_registration_validation_toolkit | Deformable registration validation toolkit]] (Greg Sharp, anyone else?)&lt;br /&gt;
* [[Analysis of different atlas-based segmentation techniques for parotid glands]] (Christian Wachinger, Karl Fritscher, Greg Sharp, Matthew Brennan)&lt;br /&gt;
&lt;br /&gt;
===Device Integration with Slicer===&lt;br /&gt;
* Open-source electromagnetic trackers using OpenIGTLink (Peter Traneus Anderson, Tina Kapur, Sonia Pujol)&lt;br /&gt;
&lt;br /&gt;
===IGT===&lt;br /&gt;
* [[2013_Summer_Project_Week:SlicerIGT_Extension| SlicerIGT extension]] (Tamas, Junichi, Laurent)&lt;br /&gt;
* [[2013_Summer_Project_Week:Ultrasound_Calibration| Ultrasound Calibration]] (Matthew Toews, Daniel Kostro, William Wells, Steven Aylward, Tamas Ungi)&lt;br /&gt;
* Application of Statistical Shape Modeling to Robot Assisted Spine Surgery (Marine Clogenson)&lt;br /&gt;
* [[2013_Summer_Project_Week:Epilepsy_Surgery|Identification of MRI Blurring in Temporal Lobe Epilepsy Surgery]] (Luiz Murta)&lt;br /&gt;
* Is Neurosurgical Rigid Registration really rigid? (Athena)&lt;br /&gt;
* [[2013_Summer_Project_Week:Liver_Trajectory_Management| Liver Trajectory Management]] (Laurent, Junichi)&lt;br /&gt;
* [[2013_Summer_Project_Week:4DUltrasound| 4D Ultrasound]] (Laurent, Junichi)&lt;br /&gt;
* [[2013_Summer_Project_Week: Individualized Neuroimaging Content Analysis using 3D Slicer in Alzheimer's Disease| Individualized Neuroimaging Content Analysis using 3D Slicer]] (Sidong Liu, Weidong Cai, Sonia Pujol, Ron Kikinis)&lt;br /&gt;
* [[2013_Summer_Project_Week: Computer Assisted Surgery| Computer Assisted Reconstruction of Complex Bone Fractures]] (Karl Fritscher, Peter Karasev, Ron Kikinis)&lt;br /&gt;
* [[2013_Summer_Project_Week:PerkTutorExtension| Perk Tutor Extension]] (Matthew Holden, Tamas Ungi)&lt;br /&gt;
&lt;br /&gt;
=== '''Informatics'''===&lt;br /&gt;
* [[2013_Summer_Project_Week:Biomedical_Image_Computing_Teaching_Modules|3D Slicer based Biomedical image computing teaching modules]]   (A.Vilchis, J-C. Avila-Vilchis, S.Pujol)&lt;br /&gt;
* [[2013_Summer_Project_Week:Robot_Control| Robot Control]] (A.Vilchis, J-C. Avila-Vilchis, S.Pujol)&lt;br /&gt;
&lt;br /&gt;
==='''Infrastructure'''===&lt;br /&gt;
* [[2013_Summer_Project_Week:MarkupsModuleSummer2013| Markups/Annotations rewrite]] (Nicole Aucoin)&lt;br /&gt;
* Brain atlas optimisations demo (Marianna) &lt;br /&gt;
* Provenance&lt;br /&gt;
* [[Patient hierarchy]] (Csaba Pinter)&lt;br /&gt;
* Sample data (Steve Pieper, Jim Miller)&lt;br /&gt;
** content addressable data, in external data processing in Slicer, cmake file for external data, when write test can decorate the data file name with macro keywords saying it's external&lt;br /&gt;
* Plastimatch in NiPype (Paolo, Dave, Hans)&lt;br /&gt;
** look for commonalities/reuse of CompareVolumes&lt;br /&gt;
* iPython in Slicer (Hans, Jc, Dave)&lt;br /&gt;
* Optimizing start time of slicer (Jc)&lt;br /&gt;
* [[Common resampling and conversion utility functions in Slicer]] (Steve Pieper, Hans, Kevin Wang, Csaba Pinter)&lt;br /&gt;
* [[2013_Summer_Project_Week:CLI_modules_in_MeVisLab| Integrating CTK CLI modules into MeVisLab]] (Hans Meine, Steve, Jc)&lt;br /&gt;
&lt;br /&gt;
==='''Brain Segmentation'''===&lt;br /&gt;
* Multi-Atlas-Based Multi-Image Segmentation for Brain MR Images (Minjeong Kim, Xiaofeng Liu, Jim Miller, Dinggang Shen)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Background''' ==&lt;br /&gt;
&lt;br /&gt;
We are pleased to announce the 17th PROJECT WEEK of hands-on research and development activity for applications in Neuroscience, Image-Guided Therapy and several additional areas of biomedical research that enable personalized medicine. Participants will engage in open source programming using the [[NA-MIC-Kit|NA-MIC Kit]], algorithm design, medical imaging sequence development, tracking experiments, and clinical application. The main goal of this event is to move forward the translational research deliverables of the sponsoring centers and their collaborators. Active and potential collaborators are encouraged and welcome to attend this event. This event will be set up to maximize informal interaction between participants.  If you would like to learn more about this event, please [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week click here to join our mailing list].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Active preparation begins on Thursday, April 25th at 3pm ET, with a kick-off teleconference.  Invitations to this call will be sent to members of the sponsoring communities, their collaborators, past attendees of the event, as well as any parties who have expressed an interest in working with these centers. The main goal of the kick-off call is to get an idea of which groups/projects will be active at the upcoming event, and to ensure that there is sufficient coverage for all. Subsequent teleconferences will allow for more focused discussions on individual projects and allow the hosts to finalize the project teams, consolidate any common components, and identify topics that should be discussed in breakout sessions. In the final days leading up to the meeting, all project teams will be asked to fill in a template page on this wiki that describes the objectives and plan of their projects.  &lt;br /&gt;
&lt;br /&gt;
The event itself will start off with a short presentation by each project team, driven by their previously created descriptions, and will help all participants get acquainted with others who are doing similar work. In the rest of the week, about half the time will be spent in breakout discussions on topics of common interest to subsets of the attendees, and the other half will be spent in project teams, doing hands-on project work.  The hands-on activities will be done in 40-50 small teams of size 2-4, each with a mix of multi-disciplinary expertise.  To facilitate this work, a large room at MIT will be set up with several tables with internet and power access, and each software development team will gather at a table with their individual laptops, connect to the internet to download their software and data, and work on their projects.  Teams working on projects that require the use of medical devices will proceed to Brigham and Women's Hospital and carry out their experiments there. On the last day of the event, a closing presentation session will be held in which each project team will present a summary of what they accomplished during the week.&lt;br /&gt;
&lt;br /&gt;
This event is part of the translational research efforts of [http://www.na-mic.org NA-MIC], [http://www.ncigt.org NCIGT], [http://nac.spl.harvard.edu/ NAC], [http://catalyst.harvard.edu/home.html Harvard Catalyst],  [http://www.cimit.org CIMIT], and OCAIRO.  It is an expansion of the NA-MIC Summer Project Week that has been held annually since 2005. It will be held every summer at MIT and Brigham and Women's Hospital in Boston, typically during the last full week of June, and in Salt Lake City in the winter, typically during the second week of January.  &lt;br /&gt;
&lt;br /&gt;
A summary of all past NA-MIC Project Events is available [[Project_Events#Past|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Logistics''' ==&lt;br /&gt;
&lt;br /&gt;
*'''Dates:''' June 17-21, 2013.&lt;br /&gt;
*'''Location:''' MIT. &lt;br /&gt;
*'''REGISTRATION:'''  http://www.regonline.com/namic2013summerprojweek. Please note that  as you proceed to the checkout portion of the registration process, RegOnline will offer you a chance to opt into a free trial of ACTIVEAdvantage -- click on &amp;quot;No thanks&amp;quot; in order to finish your Project Week registration.&lt;br /&gt;
*'''Registration Fee:''' $300.&lt;br /&gt;
*'''Hotel:''' Similar to previous years, no rooms have been blocked in a particular hotel.&lt;br /&gt;
*'''Room sharing''': If interested, add your name to the list before May 27th. See [[2013_Summer_Project_Week/RoomSharing|here]]&lt;br /&gt;
&lt;br /&gt;
== '''Preparation''' ==&lt;br /&gt;
&lt;br /&gt;
# Please make sure that you are on the http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week mailing list&lt;br /&gt;
# The NA-MIC engineering team will be discussing projects in their [http://wiki.na-mic.org/Wiki/index.php/Engineering:TCON_2013 weekly teleconferences]. Participants from the above mailing list will be invited to join to discuss their projects, so please make sure you are on it!&lt;br /&gt;
# By 3pm ET on Thursday May 8, all participants to add a one-line title of their project to #Projects&lt;br /&gt;
# By 3pm ET on Thursday June 6, all project leads to [[Project_Week/Template|complete a templated wiki page for your project]]. Please do not edit the template page itself, but create a new page for your project and cut-and-paste the text from this template page.  If you have questions, please send an email to tkapur at bwh.harvard.edu.&lt;br /&gt;
# By 3pm on June 13: Create a directory for each project on the [[Engineering:SandBox|NAMIC Sandbox]] (Matt)&lt;br /&gt;
## Commit on each sandbox directory the code examples/snippets that represent our first guesses of appropriate methods. (Luis and Steve will help with this, as needed)&lt;br /&gt;
## Gather test images in any of the data sharing resources we have (e.g. XNAT/MIDAS). These don't have to be many: at least three different cases, so we can get an idea of the modality-specific characteristics of these images. Put the IDs of these data sets on the wiki page. (The participants must do this.)&lt;br /&gt;
## Where possible, setup nightly tests on a separate Dashboard, where we will run the methods that we are experimenting with. The test should post result images and computation time. (Matt)&lt;br /&gt;
# Please note that by the time we get to the project event, we should be trying to close off a project milestone rather than starting to work on one...&lt;br /&gt;
# People doing Slicer related projects should come to project week with slicer built on your laptop.&lt;br /&gt;
## See the [http://www.slicer.org/slicerWiki/index.php/Documentation/4.0/Developers Developer Section of slicer.org] for information.&lt;br /&gt;
## Projects that develop extension modules should be built against the latest Slicer4 trunk.&lt;br /&gt;
&lt;br /&gt;
== '''Registrants''' ==&lt;br /&gt;
&lt;br /&gt;
Do not add your name to this list - it is maintained by the organizers based on your paid registration.  ([http://www.regonline.com/Register/Checkin.aspx?EventID=1233699  Please click here to register.])&lt;br /&gt;
&lt;br /&gt;
#Peter Anderson, retired, traneus@verizon.net&lt;br /&gt;
#Nicole Aucoin, BWH, nicole@bwh.harvard.edu&lt;br /&gt;
#Juan Carlos Avila Vilchis, Univ del Estado de Mexico, jc.avila.vilchis@hotmail.com&lt;br /&gt;
#Salma Bengali, Univ UT, salma.bengali@carma.utah.edu&lt;br /&gt;
#Anthony Blumfield, Radnostics, Anthony.Blumfield@Radnostics.com&lt;br /&gt;
#Vinicius Boen, Univ Michigan, vboen@umich.edu&lt;br /&gt;
#Francois Budin, NIRAL-UNC, fbudin@unc.edu&lt;br /&gt;
#Josh Cates, Univ UT, cates@sci.utah.edu&lt;br /&gt;
#Micah Chambers, UCLA, micahcc@ucla.edu&lt;br /&gt;
#Marine Clogenson, Ecole Polytechnique Federale de Lausanne (Switzerland), marine.clogenson@epfl.ch&lt;br /&gt;
#Manasi Datar, Univ UT-SCI Institute, datar@sci.utah.edu&lt;br /&gt;
#Andriy Fedorov, BWH, fedorov@bwh.harvard.edu&lt;br /&gt;
#Jean-Christophe Fillion-Robin, Kitware, jchris.fillionr@kitware.com&lt;br /&gt;
#Karl Fritscher, MGH, kfritscher@gmail.com&lt;br /&gt;
#Yi Gao, Univ AL Birmingham, gaoyi.cn@gmail.com&lt;br /&gt;
#Rola Harmouche, BWH, rharmo@bwh.harvard.edu&lt;br /&gt;
#Matthew Holden, Queen's Univ (Canada), mholden8@cs.queensu.ca&lt;br /&gt;
#Hans Johnson, Univ Iowa, hans-johnson@uiowa.edu&lt;br /&gt;
#Tina Kapur, BWH/HMS, tkapur@bwh.harvard.edu&lt;br /&gt;
#Ron Kikinis, HMS, kikinis@bwh.harvard.edu&lt;br /&gt;
#Daniel Kostro, BWH, dkostro@bwh.harvard.edu&lt;br /&gt;
#Andras Lasso, Queen's Univ (Canada), lasso@cs.queensu.ca&lt;br /&gt;
#Rui Li, GE Global Research, li.rui@ge.com&lt;br /&gt;
#Sidong Liu, Univ Sydney (Australia), sliu7418@uni.sydney.edu.au&lt;br /&gt;
#William Lorensen, Bill's Basement, bill.lorensen@gmail.com &lt;br /&gt;
#Bradley Lowekamp, Medical Science &amp;amp; Computing Inc, bradley.lowekamp@nih.gov&lt;br /&gt;
#Athena Lyons, Univ Western Australia, 20359511@student.uwa.edu.au&lt;br /&gt;
#Hans Meine, Fraunhofer MEVIS (Germany), hans.meine@mevis.fraunhofer.de&lt;br /&gt;
#Jim Miller, GE Global Research, millerjv@ge.com&lt;br /&gt;
#Luis Murta, Univ Sao Paulo (Brazil), lomurta@gmail.com&lt;br /&gt;
#Arie Nakhmani, Univ AL Birmingham, anry@uab.edu&lt;br /&gt;
#Isaiah Norton, BWH, inorton@bwh.harvard.edu&lt;br /&gt;
#Dirk Padfield, GE Global Research, padfield@research.ge.com&lt;br /&gt;
#Steve Pieper, Isomics Inc, pieper@isomics.com&lt;br /&gt;
#Csaba Pinter, Queen's Univ (Canada), pinter@cs.queensu.ca&lt;br /&gt;
#Sonia Pujol, HMS, spujol@bwh.harvard.edu&lt;br /&gt;
#Adam Rankin, Queen's Univ (Canada), rankin@cs.queensu.ca&lt;br /&gt;
#Nathaniel Reynolds, MGH, reynolds@nmr.mgh.harvard.edu&lt;br /&gt;
#Raul San Jose, BWH, rjosest@bwh.harvard.edu&lt;br /&gt;
#Greg Sharp, MGH, gcsharp@partners.org&lt;br /&gt;
#Nadya Shusharina, MGH, nshusharina@partners.org&lt;br /&gt;
#Matthew Toews, BWH/HMS, mt@bwh.harvard.edu&lt;br /&gt;
#Tamas Ungi, Queen's Univ (Canada), ungi@cs.queensu.ca&lt;br /&gt;
#Adriana Vilchis González, Univ del Estado de Mexico, hvigady@hotmail.com&lt;br /&gt;
#Demian Wassermann, BWH, demian@bwh.harvard.edu&lt;br /&gt;
#David Welch, Univ Iowa, david-welch@uiowa.edu&lt;br /&gt;
#Phillip White, BWH/HMS, white@bwh.harvard.edu&lt;br /&gt;
#Paolo Zaffino, Univ Magna Graecia of Catanzaro (Italy), p.zaffino@unicz.it&lt;br /&gt;
#Lilla Zollei, MGH, lzollei@nmr.mgh.harvard.edu&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Analysis_of_different_atlas-based_segmentation_techniques_for_parotid_glands&amp;diff=81374</id>
		<title>Analysis of different atlas-based segmentation techniques for parotid glands</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Analysis_of_different_atlas-based_segmentation_techniques_for_parotid_glands&amp;diff=81374"/>
		<updated>2013-06-04T23:43:50Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Key Investigators */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Image:PW-MIT2013.png|[[2013_Summer_Project_Week#Projects|Projects List]]&lt;br /&gt;
Image: NAMIC_HeadNeck_segmentation.png|Parotid gland + brainstem&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Key Investigators==&lt;br /&gt;
* MIT: Christian Wachinger, Matthew Brennan&lt;br /&gt;
* MGH: Karl Fritscher, Greg Sharp&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin: 20px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Objective&amp;lt;/h3&amp;gt;&lt;br /&gt;
Our goal is to investigate various segmentation approaches for identifying parotid glands on head and neck CT images. The focus will be on atlas-based methods, which exploit information from a number of previously labeled images. Several different strategies exist for employing this prior information to achieve the segmentation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;Plan&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We will try to work out the differences in parameterization and regularization among various atlas-based methods, and to characterize the properties of such methods for the segmentation of parotid glands, which show high structural variability. Finally, we would like to investigate which combinations of methods may be promising.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 40%; float: left;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Progress&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Three different segmentation approaches have been tested for their suitability to segment the parotid gland on a dataset of 18 CT images: one based on multiple atlases [1], one based on statistical appearance models [2], and one using image patches in combination with Gaussian processes. Different ways of combining the atlas- and model-based approaches [1,2] are currently under development.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* [1] Peroni M, Methods and Algorithms for Image Guided Adaptive Radio- and Hadron Therapy.  PhD Thesis, Politecnico di Milano, 2011&lt;br /&gt;
* [2] Fritscher KD, Gruenerbl A, Schubert R, 3D image segmentation using combined shape-intensity prior models. Journal of Computer Assisted Radiology and Surgery, 2007;1:341–350&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:CardiacAblation&amp;diff=78445</id>
		<title>Projects:CardiacAblation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:CardiacAblation&amp;diff=78445"/>
		<updated>2012-11-28T20:28:24Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Left atrium segmentation =&lt;br /&gt;
&lt;br /&gt;
Automatic segmentation of the heart’s left atrium offers great benefits for planning and outcome evaluation of atrial ablation procedures. The high anatomical variability of the left atrium makes its segmentation a particularly difficult problem. Specifically, the shape of the left atrium cavity, as well as the number and locations of the pulmonary veins connecting to it, vary substantially across subjects. We propose and demonstrate a robust atlas-based method for automatic segmentation of the left atrium in contrast-enhanced magnetic resonance angiography (MRA) images.&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
&lt;br /&gt;
We perform the segmentation via a label fusion algorithm [1] that uses a training set of MRA images of different patients with corresponding manual segmentations. We first align the training images to the test subject image to be segmented and apply the resulting deformations to the corresponding manual segmentation label maps to yield a set of left atrium segmentations in the coordinate space of the test subject. These form a non-parametric subject-specific statistical atlas. We then use a weighted voting algorithm to assign every voxel to the left atrium or to the background. The weighted label fusion scheme assigns higher weights to voxels in training segmentations that are located deeper within the structure of interest and that have similar intensities in training and test images. We also handle varying intensity distributions between images by incorporating iterative intensity equalization in a variant of the demons registration algorithm [2] used for the registration of the training images to the novel test image. &lt;br /&gt;
We also applied our new spectral label fusion algorithm for the segmentation of the cardiac data, with the description of the method and the results presented at&lt;br /&gt;
[http://www.na-mic.org/Wiki/index.php/Projects:NonparametricSegmentation  Nonparametric Segmentation].&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
We show in Figure 1 below a qualitative comparison between expert manual left atrium segmentations and automatic segmentations produced by our approach.&lt;br /&gt;
&lt;br /&gt;
[[File:Mdepa_MRA_seg_3D_comparison.png|500px|thumb|center|Figure 1: Qualitative evaluation of left atrium segmentations in three different subjects. First row shows expert manual segmentations. The corresponding automatic segmentations produced by our method are in the second row.]]&lt;br /&gt;
&lt;br /&gt;
We compare our method of weighted voting (WV) label fusion to three alternative automatic atlas-based approaches: majority voting (MV) label fusion, parametric atlas thresholding (AT) and atlas-based EM-segmentation (EM). The majority voting label fusion is similar to weighted voting, except it assigns each voxel to the label that occurs most frequently in the registered training set at that voxel. We also construct a parametric atlas that summarizes all 16 subjects in a single template image and a probabilistic label map by performing groupwise registration to an average space. After registering this new atlas to the test subject, we segment the left atrium using two different approaches. In atlas thresholding, we simply threshold the warped probabilistic label map at 0.5 to obtain the segmentation. This baseline method is analogous to majority voting in the parametric atlas setting. We also use the parametric atlas as a spatial prior in a traditional model-based EM-segmentation. Note that this construction favors the baseline algorithms as it includes the test image in the registration of all subjects into the common coordinate frame.&lt;br /&gt;
&lt;br /&gt;
In Figure 2 below, we show example segmentations comparing the automatic segmentations produced by these methods and expert manual segmentations.&lt;br /&gt;
&lt;br /&gt;
[[File:Mdepa_MRA_seg_2D_comparison.png|600px|thumb|center|Figure 2: Example segmentations of four different subjects: (a) expert manual segmentation (MS), (b) weighted voting label fusion (WV), (c) majority voting label fusion (MV), (d) parametric atlas thresholding (AT) and (e) EM-segmentation using the parametric atlas as a spatial prior (EM).]]&lt;br /&gt;
&lt;br /&gt;
Figure 3 reports the segmentation accuracy for each method, as measured by volume overlap Dice scores. We also report the differences in segmentation accuracy between our method and the benchmark algorithms. To compute the difference between two methods, we subtract the Dice score of the second method from the score of the first for each subject. Our approach clearly outperforms the other algorithms (WV vs. MV: p &amp;lt; 10^−9, WV vs. AT: p &amp;lt; 0.002, WV vs. EM: p &amp;lt; 0.003; single-sided paired t-test). To focus the evaluation on the critical part of the structure, we manually isolate the pulmonary veins in each of the manual and automatic segmentations, and compare the Dice scores for these limited label maps. Again, we observe consistent improvements offered by our approach (WV vs. MV: p &amp;lt; 10^−7, WV vs. AT: p &amp;lt; 10^−7, WV vs. EM: p &amp;lt; 0.03; single-sided paired t-test). Since atlas-based EM-segmentation is an intensity-based method, it performs relatively well in segmenting pulmonary veins, but suffers from numerous false positives in other areas, which lower its overall Dice scores.&lt;br /&gt;
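The volume overlap Dice score used throughout this evaluation can be computed directly from two binary segmentations; a minimal sketch (the function name is illustrative):&lt;br /&gt;
```python
import numpy as np

def dice_score(seg_a, seg_b):
    """Volume overlap Dice score: 2 |A ∩ B| / (|A| + |B|)."""
    seg_a = np.asarray(seg_a, dtype=bool)
    seg_b = np.asarray(seg_b, dtype=bool)
    intersection = np.logical_and(seg_a, seg_b).sum()
    return 2.0 * intersection / (seg_a.sum() + seg_b.sum())
```
The per-subject score differences between two methods can then feed a one-sided paired t-test (e.g. scipy.stats.ttest_rel with a one-sided alternative).&lt;br /&gt;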
&lt;br /&gt;
[[File:Mdepa_MRA_seg_Dices.png|500px|thumb|center|Figure 3: Dice scores of results for weighted voting label fusion (WV), majority voting label fusion (MV), parametric atlas thresholding (AT) and atlas-based EM-segmentation (EM). For each box plot, the central red line indicates the median, the boxes extend to the 25th and 75th percentiles, and the whiskers extend to the most extreme values not considered outliers, which are plotted as red crosses. Stars indicate that the weighted label fusion method achieves significantly more accurate segmentation than the baseline method (single-sided paired t-test, ∗: p &amp;lt; 0.05, ∗∗: p &amp;lt; 0.01).]]&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
&lt;br /&gt;
Experimental results illustrate the capacity of our method to handle high anatomical variability, yielding accurate segmentation and detecting all pulmonary veins in all subjects. By explicitly modeling the anatomical variability represented in the label maps and the corresponding training images, the proposed method outperforms traditional atlas-based segmentation algorithms and a simple label fusion benchmark.&lt;br /&gt;
&lt;br /&gt;
= Cardiac ablation scar visualization =&lt;br /&gt;
&lt;br /&gt;
Atrial fibrillation is one of the most common heart conditions and can have very serious consequences such as stroke and heart failure. A technique called catheter radio-frequency (RF) ablation has recently emerged as a treatment. It involves burning the cardiac tissue that is responsible for the fibrillation. Even though this technique has been shown to work fairly well on atrial fibrillation patients, repeat procedures are often needed to fully correct the condition because surgeons lack the necessary tools to quickly evaluate the success of the procedure.&lt;br /&gt;
&lt;br /&gt;
We propose a method to automatically visualize the scar created by RF ablation in delayed enhancement MR images acquired after the procedure. This will provide surgeons with a way to evaluate the outcome of cardiac ablation procedures.&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
&lt;br /&gt;
The visualization of cardiac scars resulting from ablation procedures in delayed enhancement magnetic resonance images (DE-MRI) is a very challenging problem because of the intersubject anatomical variability of the left atrium body and the pulmonary veins, the variation in the shape and location of the scars, and tissue that appears enhanced in DE-MRI images even though it is not ablation scar. Visualization is further complicated by the fact that even the most advanced acquisition techniques yield DE-MRI images with relatively poor contrast.&lt;br /&gt;
&lt;br /&gt;
With all of these difficulties, performing this segmentation without exploiting some prior knowledge or significant feedback from the user is extremely challenging. Most previous attempts to segment scar in DE-MRI images relied heavily on input from the user. In contrast, we avoid this by automatically segmenting the left atrium in the DE-MRI images of the patients. The atrium segmentation provides us with prior information about the location and shape of the left atrium, which in turn helps counter some of the challenges that were previously solved by requiring significant amounts of user interaction. We obtain this segmentation by first segmenting the left atrium in the MRA image of the patient’s heart using the method presented above. We then align the MRA image to the corresponding DE-MRI image of the same subject. With these two images aligned, we transfer the left atrium segmentation from the MRA to the DE-MRI image by applying the transformation computed in the registration.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
After obtaining the segmentation of the left atrium in the DE-MRI image, we produce a visualization of the ablation scar by simply projecting the DE-MRI data onto the left atrium surface. We restrict the projection to image voxels within an empirically determined distance of 7 mm on either side of the left atrium surface. Figure 4 below illustrates the maximum intensity projection results for one subject. In addition, we automatically threshold these projection values at their 75th percentile and show the resulting visualization as well. For comparison, we also project the expert manual scar segmentation onto the same left atrium surface.&lt;br /&gt;
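A simplified sketch of the banding and percentile-thresholding steps, assuming a precomputed per-voxel distance map to the atrium surface (the true method projects intensities onto the surface itself, which this sketch approximates by masking the volume; all names are illustrative):&lt;br /&gt;
```python
import numpy as np

def scar_projection(demri, dist_mm, band_mm=7.0, pct=75):
    """Band DE-MRI intensities around the left atrium surface and
    threshold them at a percentile to highlight likely scar.

    demri:   DE-MRI intensity volume.
    dist_mm: per-voxel unsigned distance (mm) to the atrium surface
             (assumed precomputed, e.g. from the surface mesh).
    Returns the banded intensity values and a binary scar mask.
    """
    band = np.less_equal(dist_mm, band_mm)  # voxels within 7 mm of the surface
    proj = np.where(band, demri, 0.0)       # keep only in-band intensities
    thr = np.percentile(demri[band], pct)   # automatic 75th-percentile threshold
    scar = np.logical_and(band, proj > thr)
    return proj, scar
```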
&lt;br /&gt;
[[File:Mdepa_scar_visualization.png|400px|thumb|center|Figure 4: Comparison of projections of DE-MRI data and manual scar segmentation onto the left atrium surface. The circled area indicates an acquisition artifact that causes non-scar tissue to appear enhanced in the DE-MRI image.]]&lt;br /&gt;
&lt;br /&gt;
We confirm visually that the thresholded projection values correlate well with the manual scar segmentations. Nevertheless, there is one area, which we circled in the figure, where these two differ considerably. This discrepancy is due to an imaging artifact caused by the acquisition protocol and is likely to cause false positives in any intensity-based algorithm.&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
&lt;br /&gt;
We visualize the ablation scars by performing a maximum intensity projection of the DE-MRI image onto the automatically generated surface of the left atrium. The visualization is further improved by thresholding the projection. We showed visually that both visualizations correlate well with the expert manual segmentation of the ablation scars.&lt;br /&gt;
&lt;br /&gt;
= Literature =&lt;br /&gt;
[1] Nonparametric Mixture Models for Supervised Image Parcellation, M.R. Sabuncu, B.T.T. Yeo, K. Van Leemput, B. Fischl, and P. Golland. PMMIA Workshop at MICCAI 2009.&lt;br /&gt;
&lt;br /&gt;
[2] Diffeomorphic demons: Efficient non-parametric image registration. Vercauteren, T., Pennec, X., Perchant, A., Ayache, N. NeuroImage 45(1), S61–S72 (2009)&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: [http://people.csail.mit.edu/mdepa/ Michal Depa] and Polina Golland&lt;br /&gt;
&lt;br /&gt;
*BWH: Ehud Schmidt and Ron Kikinis&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ACardiacAblation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked NA-MIC Publications Database on Segmentation and Visualization for Cardiac Ablation Procedures]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:CardiacAblation&amp;diff=78444</id>
		<title>Projects:CardiacAblation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:CardiacAblation&amp;diff=78444"/>
		<updated>2012-11-28T20:27:38Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Left atrium segmentation =&lt;br /&gt;
&lt;br /&gt;
Automatic segmentation of the heart’s left atrium offers great benefits for planning and outcome evaluation of atrial ablation procedures. The high anatomical variability of the left atrium makes its segmentation a particularly difficult problem. Specifically, the shape of the left atrium cavity, as well as the number and locations of the pulmonary veins connecting to it, vary substantially across subjects. We propose and demonstrate a robust atlas-based method for automatic segmentation of the left atrium in contrast-enhanced magnetic resonance angiography (MRA) images.&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
&lt;br /&gt;
We perform the segmentation via a label fusion algorithm [1] that uses a training set of MRA images of different patients with corresponding manual segmentations. We first align the training images to the test subject image to be segmented and apply the resulting deformations to the corresponding manual segmentation label maps to yield a set of left atrium segmentations in the coordinate space of the test subject. These form a non-parametric subject-specific statistical atlas. We then use a weighted voting algorithm to assign every voxel to the left atrium or to the background. The weighted label fusion scheme assigns higher weights to voxels in training segmentations that are located deeper within the structure of interest and that have similar intensities in training and test images. We also handle varying intensity distributions between images by incorporating iterative intensity equalization in a variant of the demons registration algorithm [2] used for the registration of the training images to the novel test image. &lt;br /&gt;
We also applied our new spectral label fusion algorithm for the segmentation of the cardiac data, with the description of the method and the results presented at&lt;br /&gt;
[http://www.na-mic.org/Wiki/index.php/Projects:NonparametricSegmentation  spectral label fusion].&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
We show in Figure 1 below a qualitative comparison between expert manual left atrium segmentations and automatic segmentations produced by our approach.&lt;br /&gt;
&lt;br /&gt;
[[File:Mdepa_MRA_seg_3D_comparison.png|500px|thumb|center|Figure 1: Qualitative evaluation of left atrium segmentations in three different subjects. First row shows expert manual segmentations. The corresponding automatic segmentations produced by our method are in the second row.]]&lt;br /&gt;
&lt;br /&gt;
We compare our method of weighted voting (WV) label fusion to three alternative automatic atlas-based approaches: majority voting (MV) label fusion, parametric atlas thresholding (AT) and atlas-based EM-segmentation (EM). The majority voting label fusion is similar to weighted voting, except it assigns each voxel to the label that occurs most frequently in the registered training set at that voxel. We also construct a parametric atlas that summarizes all 16 subjects in a single template image and a probabilistic label map by performing groupwise registration to an average space. After registering this new atlas to the test subject, we segment the left atrium using two different approaches. In atlas thresholding, we simply threshold the warped probabilistic label map at 0.5 to obtain the segmentation. This baseline method is analogous to majority voting in the parametric atlas setting. We also use the parametric atlas as a spatial prior in a traditional model-based EM-segmentation. Note that this construction favors the baseline algorithms as it includes the test image in the registration of all subjects into the common coordinate frame.&lt;br /&gt;
&lt;br /&gt;
In Figure 2 below, we show example segmentations comparing the automatic segmentations produced by these methods and expert manual segmentations.&lt;br /&gt;
&lt;br /&gt;
[[File:Mdepa_MRA_seg_2D_comparison.png|600px|thumb|center|Figure 2: Example segmentations of four different subjects: (a) expert manual segmentation (MS), (b) weighted voting label fusion (WV), (c) majority voting label fusion (MV), (d) parametric atlas thresholding (AT) and (e) EM-segmentation using the parametric atlas as a spatial prior (EM).]]&lt;br /&gt;
&lt;br /&gt;
Figure 3 reports the segmentation accuracy for each method, as measured by volume overlap Dice scores. We also report the differences in segmentation accuracy between our method and the benchmark algorithms. To compute the difference between two methods, we subtract the Dice score of the second method from the score of the first for each subject. Our approach clearly outperforms the other algorithms (WV vs. MV: p &amp;lt; 10^−9, WV vs. AT: p &amp;lt; 0.002, WV vs. EM: p &amp;lt; 0.003; single-sided paired t-test). To focus the evaluation on the critical part of the structure, we manually isolate the pulmonary veins in each of the manual and automatic segmentations, and compare the Dice scores for these limited label maps. Again, we observe consistent improvements offered by our approach (WV vs. MV: p &amp;lt; 10^−7, WV vs. AT: p &amp;lt; 10^−7, WV vs. EM: p &amp;lt; 0.03; single-sided paired t-test). Since atlas-based EM-segmentation is an intensity-based method, it performs relatively well in segmenting pulmonary veins, but suffers from numerous false positives in other areas, which lower its overall Dice scores.&lt;br /&gt;
&lt;br /&gt;
[[File:Mdepa_MRA_seg_Dices.png|500px|thumb|center|Figure 3: Dice scores of results for weighted voting label fusion (WV), majority voting label fusion (MV), parametric atlas thresholding (AT) and atlas-based EM-segmentation (EM). For each box plot, the central red line indicates the median, the boxes extend to the 25th and 75th percentiles, and the whiskers extend to the most extreme values not considered outliers, which are plotted as red crosses. Stars indicate that the weighted label fusion method achieves significantly more accurate segmentation than the baseline method (single-sided paired t-test, ∗: p &amp;lt; 0.05, ∗∗: p &amp;lt; 0.01).]]&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
&lt;br /&gt;
Experimental results illustrate the capacity of our method to handle high anatomical variability, yielding accurate segmentation and detecting all pulmonary veins in all subjects. By explicitly modeling the anatomical variability represented in the label maps and the corresponding training images, the proposed method outperforms traditional atlas-based segmentation algorithms and a simple label fusion benchmark.&lt;br /&gt;
&lt;br /&gt;
= Cardiac ablation scar visualization =&lt;br /&gt;
&lt;br /&gt;
Atrial fibrillation is one of the most common heart conditions and can have very serious consequences such as stroke and heart failure. A technique called catheter radio-frequency (RF) ablation has recently emerged as a treatment. It involves burning the cardiac tissue that is responsible for the fibrillation. Even though this technique has been shown to work fairly well on atrial fibrillation patients, repeat procedures are often needed to fully correct the condition because surgeons lack the necessary tools to quickly evaluate the success of the procedure.&lt;br /&gt;
&lt;br /&gt;
We propose a method to automatically visualize the scar created by RF ablation in delayed enhancement MR images acquired after the procedure. This will provide surgeons with a way to evaluate the outcome of cardiac ablation procedures.&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
&lt;br /&gt;
The visualization of cardiac scars resulting from ablation procedures in delayed enhancement magnetic resonance images (DE-MRI) is a very challenging problem because of the intersubject anatomical variability of the left atrium body and the pulmonary veins, the variation in the shape and location of the scars, and tissue that appears enhanced in DE-MRI images even though it is not ablation scar. Visualization is further complicated by the fact that even the most advanced acquisition techniques yield DE-MRI images with relatively poor contrast.&lt;br /&gt;
&lt;br /&gt;
With all of these difficulties, performing this segmentation without exploiting some prior knowledge or significant feedback from the user is extremely challenging. Most previous attempts to segment scar in DE-MRI images relied heavily on input from the user. In contrast, we avoid this by automatically segmenting the left atrium in the DE-MRI images of the patients. The atrium segmentation provides us with prior information about the location and shape of the left atrium, which in turn helps counter some of the challenges that were previously solved by requiring significant amounts of user interaction. We obtain this segmentation by first segmenting the left atrium in the MRA image of the patient’s heart using the method presented above. We then align the MRA image to the corresponding DE-MRI image of the same subject. With these two images aligned, we transfer the left atrium segmentation from the MRA to the DE-MRI image by applying the transformation computed in the registration.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
After obtaining the segmentation of the left atrium in the DE-MRI image, we produce a visualization of the ablation scar by simply projecting the DE-MRI data onto the left atrium surface. We restrict the projection to image voxels within an empirically determined distance of 7 mm on either side of the left atrium surface. Figure 4 below illustrates the maximum intensity projection results for one subject. In addition, we automatically threshold these projection values at their 75th percentile and show the resulting visualization as well. For comparison, we also project the expert manual scar segmentation onto the same left atrium surface.&lt;br /&gt;
&lt;br /&gt;
[[File:Mdepa_scar_visualization.png|400px|thumb|center|Figure 4: Comparison of projections of DE-MRI data and manual scar segmentation onto the left atrium surface. The circled area indicates an acquisition artifact that causes non-scar tissue to appear enhanced in the DE-MRI image.]]&lt;br /&gt;
&lt;br /&gt;
We confirm visually that the thresholded projection values correlate well with the manual scar segmentations. Nevertheless, there is one area, which we circled in the figure, where these two differ considerably. This discrepancy is due to an imaging artifact caused by the acquisition protocol and is likely to cause false positives in any intensity-based algorithm.&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
&lt;br /&gt;
We visualize the ablation scars by performing a maximum intensity projection of the DE-MRI image onto the automatically generated surface of the left atrium. The visualization is further improved by thresholding the projection. We showed visually that both visualizations correlate well with the expert manual segmentation of the ablation scars.&lt;br /&gt;
&lt;br /&gt;
= Literature =&lt;br /&gt;
[1] Nonparametric Mixture Models for Supervised Image Parcellation, M.R. Sabuncu, B.T.T. Yeo, K. Van Leemput, B. Fischl, and P. Golland. PMMIA Workshop at MICCAI 2009.&lt;br /&gt;
&lt;br /&gt;
[2] Diffeomorphic demons: Efficient non-parametric image registration. Vercauteren, T., Pennec, X., Perchant, A., Ayache, N. NeuroImage 45(1), S61–S72 (2009)&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: [http://people.csail.mit.edu/mdepa/ Michal Depa] and Polina Golland&lt;br /&gt;
&lt;br /&gt;
*BWH: Ehud Schmidt and Ron Kikinis&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ACardiacAblation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked NA-MIC Publications Database on Segmentation and Visualization for Cardiac Ablation Procedures]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:CardiacAblation&amp;diff=78442</id>
		<title>Projects:CardiacAblation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:CardiacAblation&amp;diff=78442"/>
		<updated>2012-11-28T20:27:26Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Left atrium segmentation =&lt;br /&gt;
&lt;br /&gt;
Automatic segmentation of the heart’s left atrium offers great benefits for planning and outcome evaluation of atrial ablation procedures. The high anatomical variability of the left atrium makes its segmentation a particularly difficult problem. Specifically, the shape of the left atrium cavity, as well as the number and locations of the pulmonary veins connecting to it, vary substantially across subjects. We propose and demonstrate a robust atlas-based method for automatic segmentation of the left atrium in contrast-enhanced magnetic resonance angiography (MRA) images.&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
&lt;br /&gt;
We perform the segmentation via a label fusion algorithm [1] that uses a training set of MRA images of different patients with corresponding manual segmentations. We first align the training images to the test subject image to be segmented and apply the resulting deformations to the corresponding manual segmentation label maps to yield a set of left atrium segmentations in the coordinate space of the test subject. These form a non-parametric subject-specific statistical atlas. We then use a weighted voting algorithm to assign every voxel to the left atrium or to the background. The weighted label fusion scheme assigns higher weights to voxels in training segmentations that are located deeper within the structure of interest and that have similar intensities in training and test images. We also handle varying intensity distributions between images by incorporating iterative intensity equalization in a variant of the demons registration algorithm [2] used for the registration of the training images to the novel test image. &lt;br /&gt;
We also applied our new spectral label fusion algorithm for the segmentation of the cardiac data, with the description of the method and the results at&lt;br /&gt;
[http://www.na-mic.org/Wiki/index.php/Projects:NonparametricSegmentation spectral label fusion].&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
We show in Figure 1 below a qualitative comparison between expert manual left atrium segmentations and automatic segmentations produced by our approach.&lt;br /&gt;
&lt;br /&gt;
[[File:Mdepa_MRA_seg_3D_comparison.png|500px|thumb|center|Figure 1: Qualitative evaluation of left atrium segmentations in three different subjects. First row shows expert manual segmentations. The corresponding automatic segmentations produced by our method are in the second row.]]&lt;br /&gt;
&lt;br /&gt;
We compare our method of weighted voting (WV) label fusion to three alternative automatic atlas-based approaches: majority voting (MV) label fusion, parametric atlas thresholding (AT) and atlas-based EM-segmentation (EM). The majority voting label fusion is similar to weighted voting, except it assigns each voxel to the label that occurs most frequently in the registered training set at that voxel. We also construct a parametric atlas that summarizes all 16 subjects in a single template image and a probabilistic label map by performing groupwise registration to an average space. After registering this new atlas to the test subject, we segment the left atrium using two different approaches. In atlas thresholding, we simply threshold the warped probabilistic label map at 0.5 to obtain the segmentation. This baseline method is analogous to majority voting in the parametric atlas setting. We also use the parametric atlas as a spatial prior in a traditional model-based EM-segmentation. Note that this construction favors the baseline algorithms as it includes the test image in the registration of all subjects into the common coordinate frame.&lt;br /&gt;
&lt;br /&gt;
In Figure 2 below, we show example segmentations comparing the automatic segmentations produced by these methods and expert manual segmentations.&lt;br /&gt;
&lt;br /&gt;
[[File:Mdepa_MRA_seg_2D_comparison.png|600px|thumb|center|Figure 2: Example segmentations of four different subjects: (a) expert manual segmentation (MS), (b) weighted voting label fusion (WV), (c) majority voting label fusion (MV), (d) parametric atlas thresholding (AT) and (e) EM-segmentation using the parametric atlas as a spatial prior (EM).]]&lt;br /&gt;
&lt;br /&gt;
Figure 3 reports the segmentation accuracy for each method, as measured by volume overlap Dice scores. We also report the differences in segmentation accuracy between our method and the benchmark algorithms. To compute the difference between two methods, we subtract the Dice score of the second method from the score of the first for each subject. Our approach clearly outperforms the other algorithms (WV vs. MV: p &amp;lt; 10^−9, WV vs. AT: p &amp;lt; 0.002, WV vs. EM: p &amp;lt; 0.003; single-sided paired t-test). To focus the evaluation on the critical part of the structure, we manually isolate the pulmonary veins in each of the manual and automatic segmentations, and compare the Dice scores for these limited label maps. Again, we observe consistent improvements offered by our approach (WV vs. MV: p &amp;lt; 10^−7, WV vs. AT: p &amp;lt; 10^−7, WV vs. EM: p &amp;lt; 0.03; single-sided paired t-test). Since atlas-based EM-segmentation is an intensity-based method, it performs relatively well in segmenting pulmonary veins, but suffers from numerous false positives in other areas, which lower its overall Dice scores.&lt;br /&gt;
&lt;br /&gt;
[[File:Mdepa_MRA_seg_Dices.png|500px|thumb|center|Figure 3: Dice scores of results for weighted voting label fusion (WV), majority voting label fusion (MV), parametric atlas thresholding (AT) and atlas-based EM-segmentation (EM). For each box plot, the central red line indicates the median, the boxes extend to the 25th and 75th percentiles, and the whiskers extend to the most extreme values not considered outliers, which are plotted as red crosses. Stars indicate that the weighted label fusion method achieves significantly more accurate segmentation than the baseline method (single-sided paired t-test, ∗: p &amp;lt; 0.05, ∗∗: p &amp;lt; 0.01).]]&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
&lt;br /&gt;
Experimental results illustrate the capacity of our method to handle high anatomical variability, yielding accurate segmentation and detecting all pulmonary veins in all subjects. By explicitly modeling the anatomical variability represented in the label maps and the corresponding training images, the proposed method outperforms traditional atlas-based segmentation algorithms and a simple label fusion benchmark.&lt;br /&gt;
&lt;br /&gt;
= Cardiac ablation scar visualization =&lt;br /&gt;
&lt;br /&gt;
Atrial fibrillation is one of the most common heart conditions and can have very serious consequences such as stroke and heart failure. A technique called catheter radio-frequency (RF) ablation has recently emerged as a treatment. It involves burning the cardiac tissue that is responsible for the fibrillation. Even though this technique has been shown to work fairly well on atrial fibrillation patients, repeat procedures are often needed to fully correct the condition because surgeons lack the necessary tools to quickly evaluate the success of the procedure.&lt;br /&gt;
&lt;br /&gt;
We propose a method to automatically visualize the scar created by RF ablation in delayed enhancement MR images acquired after the procedure. This will provide surgeons with a way to evaluate the outcome of cardiac ablation procedures.&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
&lt;br /&gt;
The visualization of cardiac scars resulting from ablation procedures in delayed enhancement magnetic resonance images (DE-MRI) is a very challenging problem because of the intersubject anatomical variability of the left atrium body and the pulmonary veins, the variation in the shape and location of the scars, and tissue that appears enhanced in DE-MRI images even though it is not ablation scar. Visualization is further complicated by the fact that even the most advanced acquisition techniques yield DE-MRI images with relatively poor contrast.&lt;br /&gt;
&lt;br /&gt;
With all of these difficulties, performing this segmentation without exploiting some prior knowledge or significant feedback from the user is extremely challenging. Most previous attempts to segment scar in DE-MRI images relied heavily on input from the user. In contrast, we avoid this by automatically segmenting the left atrium in the DE-MRI images of the patients. The atrium segmentation provides us with prior&lt;br /&gt;
information about the location and shape of the left atrium, which in turn helps counter some of the challenges that were previously solved by requiring significant amounts of user interaction. We obtain this segmentation by first segmenting the left atrium in the MRA image of the patient’s heart using the method presented above. We then align the MRA image to the corresponding DE-MRI image of the same subject. With these two images aligned, we transfer the left atrium segmentation from the MRA to the DE-MRI image by applying the transformation computed in the registration.&lt;br /&gt;
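The label transfer described above can be sketched with toy data. Purely for illustration, assume the MRA-to-DE-MRI registration reduces to an integer voxel translation; the real pipeline uses deformable transforms.

```python
# Toy sketch of transferring a segmentation between aligned images.
# Assumption (hypothetical): the MRA -> DE-MRI registration is a pure
# integer translation `offset`; real pipelines apply a deformation field.

def warp_mask(mask, offset):
    """Apply an integer translation to a 2D binary mask (nested lists)."""
    rows, cols = len(mask), len(mask[0])
    dy, dx = offset
    out = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            sy, sx = y - dy, x - dx  # pull from the source image
            if 0 <= sy < rows and 0 <= sx < cols:
                out[y][x] = mask[sy][sx]
    return out

# Left atrium mask segmented in the MRA image ...
mra_mask = [[0, 0, 0, 0],
            [0, 1, 1, 0],
            [0, 1, 1, 0],
            [0, 0, 0, 0]]
# ... moved into DE-MRI coordinates by the registration offset.
de_mri_mask = warp_mask(mra_mask, (1, 1))
```

The same pull-back indexing generalizes to a per-voxel displacement field in place of the constant offset.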
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
After obtaining the segmentation of the left atrium in the DE-MRI image, we produce a visualization of the ablation scar by projecting the DE-MRI data onto the left atrium surface. We restrict the projection to use only image voxels within an empirically determined distance of 7 mm on either side of the left atrium surface. Figure 4 below illustrates the maximum intensity projection results for one subject. In addition, we automatically threshold these projection values at the 75th percentile and show the resulting visualization as well. For comparison, we also project the expert manual scar segmentation onto the same left atrium surface.&lt;br /&gt;
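The projection and thresholding can be sketched as follows; the per-vertex intensity profiles are hypothetical stand-ins for the DE-MRI samples taken within the 7 mm band around each surface vertex.

```python
# Sketch of the scar visualization: per surface vertex, take the maximum
# DE-MRI intensity sampled within +/-7 mm of the surface, then keep only
# projections above the 75th percentile. Profiles below are hypothetical.

def max_intensity_projection(profiles):
    """One value per vertex: the max of the intensities sampled near it."""
    return [max(samples) for samples in profiles]

def percentile(values, q):
    """Simple nearest-rank percentile (no interpolation)."""
    ordered = sorted(values)
    rank = min(len(ordered) - 1, int(round(q / 100.0 * (len(ordered) - 1))))
    return ordered[rank]

def threshold_projection(projection, q=75):
    cutoff = percentile(projection, q)
    return [v if v >= cutoff else 0 for v in projection]

# Hypothetical intensity samples within the 7 mm band around 4 vertices.
profiles = [[10, 40, 12], [5, 8, 6], [30, 90, 25], [60, 55, 50]]
mip = max_intensity_projection(profiles)   # one projection value per vertex
scar = threshold_projection(mip)           # only the brightest vertices kept
```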
&lt;br /&gt;
[[File:Mdepa_scar_visualization.png|400px|thumb|center|Figure 4: Comparison of projections of DE-MRI data and manual scar segmentation onto the left atrium surface. The circled area indicates an acquisition artifact that causes non-scar tissue to appear enhanced in the DE-MRI image.]]&lt;br /&gt;
&lt;br /&gt;
We confirm visually that the thresholded projection values correlate well with the manual scar segmentations. Nevertheless, there is one area, which we circled in the figure, where these two differ considerably. This discrepancy is due to an imaging artifact caused by the acquisition protocol and is likely to cause false positives in any intensity-based algorithm.&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
&lt;br /&gt;
We visualize the ablation scars by performing a maximum intensity projection of the DE-MRI image onto the automatically generated surface of the left atrium. The visualization is further improved by thresholding the projection. We showed visually that both visualizations correlate well with the expert manual segmentation of the ablation scars.&lt;br /&gt;
&lt;br /&gt;
= Literature =&lt;br /&gt;
[1] M.R. Sabuncu, B.T.T. Yeo, K. Van Leemput, B. Fischl, and P. Golland. Nonparametric Mixture Models for Supervised Image Parcellation. PMMIA Workshop at MICCAI 2009.&lt;br /&gt;
&lt;br /&gt;
[2] T. Vercauteren, X. Pennec, A. Perchant, and N. Ayache. Diffeomorphic demons: Efficient non-parametric image registration. NeuroImage 45(1), S61–S72, 2009.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: [http://people.csail.mit.edu/mdepa/ Michal Depa] and Polina Golland&lt;br /&gt;
&lt;br /&gt;
*BWH: Ehud Schmidt and Ron Kikinis&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ACardiacAblation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked NA-MIC Publications Database on Segmentation and Visualization for Cardiac Ablation Procedures]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:CardiacAblation&amp;diff=78440</id>
		<title>Projects:CardiacAblation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:CardiacAblation&amp;diff=78440"/>
		<updated>2012-11-28T20:26:22Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Left atrium segmentation =&lt;br /&gt;
&lt;br /&gt;
Automatic segmentation of the heart’s left atrium offers great benefits for planning and outcome evaluation of atrial ablation procedures. The high anatomical variability of the left atrium makes its segmentation a particularly difficult problem. Specifically, the shape of the left atrium cavity, as well as the number and locations of the pulmonary veins connecting to it, vary substantially across subjects. We propose and demonstrate a robust atlas-based method for automatic segmentation of the left atrium in contrast-enhanced magnetic resonance angiography (MRA) images.&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
&lt;br /&gt;
We perform the segmentation via a label fusion algorithm [1] that uses a training set of MRA images of different patients with corresponding manual segmentations. We first align the training images to the test subject image to be segmented and apply the resulting deformations to the corresponding manual segmentation label maps to yield a set of left atrium segmentations in the coordinate space of the test subject. These form a non-parametric subject-specific statistical atlas. We then use a weighted voting algorithm to assign every voxel to the left atrium or to the background. The weighted label fusion scheme assigns higher weights to voxels in training segmentations that are located deeper within the structure of interest and that have similar intensities in training and test images. We also handle varying intensity distributions between images by incorporating iterative intensity equalization in a variant of the demons registration algorithm [2] used for the registration of the training images to the novel test image. &lt;br /&gt;
We also applied our new spectral label fusion algorithm to the segmentation of the cardiac data; the method and results are described at&lt;br /&gt;
[http://www.na-mic.org/Wiki/index.php/Projects:NonparametricSegmentation spectral label fusion].&lt;br /&gt;
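The weighted voting step described above can be sketched on flattened toy images. The Gaussian intensity weight and the depth weights below are illustrative choices, not the exact model of [1]; `sigma` and all data are hypothetical.

```python
# Sketch of weighted voting label fusion: each warped training label map
# votes per voxel, weighted by intensity similarity to the test image and
# by how deep the voxel lies inside the training structure.
import math

def weighted_vote(test_img, train_imgs, train_labels, depths, sigma=10.0):
    fused = []
    for v in range(len(test_img)):
        score = 0.0  # signed vote: >0 means left atrium, <=0 background
        for img, lab, dep in zip(train_imgs, train_labels, depths):
            w = math.exp(-((img[v] - test_img[v]) ** 2) / (2 * sigma ** 2))
            w *= dep[v]  # deeper voxels in the training structure count more
            score += w if lab[v] == 1 else -w
        fused.append(1 if score > 0 else 0)
    return fused

# Three flattened training images/labels and a test image (hypothetical).
test_img     = [100, 100, 20]
train_imgs   = [[98, 100, 22], [105, 40, 25], [99, 102, 90]]
train_labels = [[1, 1, 0], [1, 0, 0], [1, 1, 1]]
depths       = [[1.0, 1.0, 0.5], [1.0, 0.8, 0.5], [1.0, 1.0, 0.2]]
fused = weighted_vote(test_img, train_imgs, train_labels, depths)
```

Training votes with dissimilar intensities or shallow depth contribute little, so a single misregistered atlas is largely ignored at that voxel.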
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
We show in Figure 1 below a qualitative comparison between expert manual left atrium segmentations and automatic segmentations produced by our approach.&lt;br /&gt;
&lt;br /&gt;
[[File:Mdepa_MRA_seg_3D_comparison.png|500px|thumb|center|Figure 1: Qualitative evaluation of left atrium segmentations in three different subjects. First row shows expert manual segmentations. The corresponding automatic segmentations produced by our method are in the second row.]]&lt;br /&gt;
&lt;br /&gt;
We compare our method of weighted voting (WV) label fusion to three alternative automatic atlas-based approaches: majority voting (MV) label fusion, parametric atlas thresholding (AT) and atlas-based EM-segmentation (EM). The majority voting label fusion is similar to weighted voting, except it assigns each voxel to the label that occurs most frequently in the registered training set at that voxel. We also construct a parametric atlas that summarizes all 16 subjects in a single template image and a probabilistic label map by performing groupwise registration to an average space. After registering this new atlas to the test subject, we segment the left atrium using two different approaches. In atlas thresholding, we simply threshold the warped probabilistic label map at 0.5 to obtain the segmentation. This baseline method is analogous to majority voting in the parametric atlas setting. We also use the parametric atlas as a spatial prior in a traditional model-based EM-segmentation. Note that this construction favors the baseline algorithms as it includes the test image in the registration of all subjects into the common coordinate frame.&lt;br /&gt;
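The two simplest baselines can be sketched side by side; for binary labels and an odd number of atlases, thresholding the mean label map at 0.5 gives the same result as a per-voxel majority vote, which is why atlas thresholding is the parametric analogue of MV. The data below are hypothetical.

```python
# Sketch contrasting two baselines: majority voting over warped binary
# label maps, and thresholding the mean probabilistic label map at 0.5.

def majority_vote(label_maps):
    """Per voxel, pick the label occurring in more than half the atlases."""
    n = len(label_maps)
    return [1 if sum(m[v] for m in label_maps) * 2 > n else 0
            for v in range(len(label_maps[0]))]

def atlas_threshold(label_maps, t=0.5):
    """Average the binary maps into a probability map, threshold at t."""
    n = len(label_maps)
    return [1 if sum(m[v] for m in label_maps) / n > t else 0
            for v in range(len(label_maps[0]))]

# Three warped training label maps (hypothetical, flattened).
warped = [[1, 1, 0, 0], [1, 0, 1, 0], [1, 1, 0, 0]]
mv = majority_vote(warped)
at = atlas_threshold(warped)  # identical to mv for binary labels, odd n
```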
&lt;br /&gt;
In Figure 2 below, we show example segmentations comparing the automatic segmentations produced by these methods and expert manual segmentations.&lt;br /&gt;
&lt;br /&gt;
[[File:Mdepa_MRA_seg_2D_comparison.png|600px|thumb|center|Figure 2: Example segmentations of four different subjects: (a) expert manual segmentation (MS), (b) weighted voting label fusion (WV), (c) majority voting label fusion (MV), (d) parametric atlas thresholding (AT) and (e) EM-segmentation using the parametric atlas as a spatial prior (EM).]]&lt;br /&gt;
&lt;br /&gt;
Figure 3 reports the segmentation accuracy for each method, as measured by the volume overlap Dice scores. We also report the differences in segmentation accuracy between our method and the benchmark algorithms. To compute the difference between two methods, we subtract the Dice score of the second method from the score of the first for each subject. Our approach clearly outperforms the other algorithms (WV vs. MV: p &amp;lt; 10^−9, WV vs. AT: p &amp;lt; 0.002, WV vs. EM: p &amp;lt; 0.003; single-sided paired t-test). To focus the evaluation on the critical part of the structure, we manually isolate the pulmonary veins in each of the manual and automatic segmentations and compare the Dice scores for these limited label maps. Again, we observe consistent improvements offered by our approach (WV vs. MV: p &amp;lt; 10^−7, WV vs. AT: p &amp;lt; 10^−7, WV vs. EM: p &amp;lt; 0.03; single-sided paired t-test). Since atlas-based EM-segmentation is an intensity-based method, it performs relatively well in segmenting pulmonary veins, but suffers from numerous false positives in other areas, which lower its overall Dice scores.&lt;br /&gt;
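The Dice score used throughout this evaluation is 2|A∩B| / (|A|+|B|); a minimal sketch on flattened binary masks:

```python
# Dice volume overlap between two binary masks: 2|A∩B| / (|A|+|B|).
# 1.0 means perfect agreement, 0.0 means no overlap.

def dice(a, b):
    inter = sum(x and y for x, y in zip(a, b))
    size = sum(a) + sum(b)
    return 2.0 * inter / size if size else 1.0

# Hypothetical automatic and manual segmentations (flattened).
auto   = [1, 1, 1, 0, 0]
manual = [0, 1, 1, 1, 0]
score = dice(auto, manual)  # 2*2 / (3+3) = 0.666...
```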
&lt;br /&gt;
[[File:Mdepa_MRA_seg_Dices.png|500px|thumb|center|Figure 3: Dice scores of results for weighted voting label fusion (WV), majority voting label fusion (MV), parametric atlas thresholding (AT) and atlas-based EM-segmentation (EM). For each box plot, the central red line indicates the median, the boxes extend to the 25th and 75th percentiles, and the whiskers extend to the most extreme values not considered outliers, which are plotted as red crosses. Stars indicate that the weighted label fusion method achieves significantly more accurate segmentation than the baseline method (single-sided paired t-test, ∗: p &amp;lt; 0.05, ∗∗: p &amp;lt; 0.01).]]&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
&lt;br /&gt;
Experimental results illustrate the capacity of our method to handle high anatomical variability, yielding accurate segmentation and detecting all pulmonary veins in all subjects. By explicitly modeling the anatomical variability represented in the label maps and the corresponding training images, the proposed method outperforms traditional atlas-based segmentation algorithms and a simple label fusion benchmark.&lt;br /&gt;
&lt;br /&gt;
= Cardiac ablation scar visualization =&lt;br /&gt;
&lt;br /&gt;
Atrial fibrillation is one of the most common heart conditions and can have very serious consequences such as stroke and heart failure. A technique called catheter radio-frequency (RF) ablation has recently emerged as a treatment. It involves burning the cardiac tissue that is responsible for the fibrillation. Even though this technique has been shown to work fairly well on atrial fibrillation patients, repeat procedures are often needed to fully correct the condition because surgeons lack the necessary tools to quickly evaluate the success of the procedure.&lt;br /&gt;
&lt;br /&gt;
We propose a method to automatically visualize the scar created by RF ablation in delayed enhancement MR images acquired after the procedure. This will provide surgeons with a way to evaluate the outcome of cardiac ablation procedures.&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
&lt;br /&gt;
The visualization of cardiac scars resulting from ablation procedures in delayed enhancement magnetic resonance images (DE-MRI) is a very challenging problem because of the intersubject anatomical variability of the left atrium body and the pulmonary veins, the variation in the shape and location of the scars, and tissue that appears enhanced in DE-MRI even though it is not ablation scar. Visualization is further complicated because even the most advanced acquisition techniques yield DE-MRI images with relatively poor contrast.&lt;br /&gt;
&lt;br /&gt;
With all of these difficulties, performing this segmentation without exploiting some prior knowledge or significant feedback from the user is extremely challenging. Most previous attempts to segment scar in DE-MRI images relied heavily on input from the user. In contrast, we avoid this by automatically segmenting the left atrium in the DE-MRI images of the patients. The atrium segmentation provides us with prior&lt;br /&gt;
information about the location and shape of the left atrium, which in turn helps counter some of the challenges that were previously solved by requiring significant amounts of user interaction. We obtain this segmentation by first segmenting the left atrium in the MRA image of the patient’s heart using the method presented above. We then align the MRA image to the corresponding DE-MRI image of the same subject. With these two images aligned, we transfer the left atrium segmentation from the MRA to the DE-MRI image by applying the transformation computed in the registration.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
After obtaining the segmentation of the left atrium in the DE-MRI image, we produce a visualization of the ablation scar by projecting the DE-MRI data onto the left atrium surface. We restrict the projection to use only image voxels within an empirically determined distance of 7 mm on either side of the left atrium surface. Figure 4 below illustrates the maximum intensity projection results for one subject. In addition, we automatically threshold these projection values at the 75th percentile and show the resulting visualization as well. For comparison, we also project the expert manual scar segmentation onto the same left atrium surface.&lt;br /&gt;
&lt;br /&gt;
[[File:Mdepa_scar_visualization.png|400px|thumb|center|Figure 4: Comparison of projections of DE-MRI data and manual scar segmentation onto the left atrium surface. The circled area indicates an acquisition artifact that causes non-scar tissue to appear enhanced in the DE-MRI image.]]&lt;br /&gt;
&lt;br /&gt;
We confirm visually that the thresholded projection values correlate well with the manual scar segmentations. Nevertheless, there is one area, which we circled in the figure, where these two differ considerably. This discrepancy is due to an imaging artifact caused by the acquisition protocol and is likely to cause false positives in any intensity-based algorithm.&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
&lt;br /&gt;
We visualize the ablation scars by performing a maximum intensity projection of the DE-MRI image onto the automatically generated surface of the left atrium. The visualization is further improved by thresholding the projection. We showed visually that both visualizations correlate well with the expert manual segmentation of the ablation scars.&lt;br /&gt;
&lt;br /&gt;
= Literature =&lt;br /&gt;
[1] M.R. Sabuncu, B.T.T. Yeo, K. Van Leemput, B. Fischl, and P. Golland. Nonparametric Mixture Models for Supervised Image Parcellation. PMMIA Workshop at MICCAI 2009.&lt;br /&gt;
&lt;br /&gt;
[2] T. Vercauteren, X. Pennec, A. Perchant, and N. Ayache. Diffeomorphic demons: Efficient non-parametric image registration. NeuroImage 45(1), S61–S72, 2009.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: [http://people.csail.mit.edu/mdepa/ Michal Depa] and Polina Golland&lt;br /&gt;
&lt;br /&gt;
*BWH: Ehud Schmidt and Ron Kikinis&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ACardiacAblation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked NA-MIC Publications Database on Segmentation and Visualization for Cardiac Ablation Procedures]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:CardiacAblation&amp;diff=78439</id>
		<title>Projects:CardiacAblation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:CardiacAblation&amp;diff=78439"/>
		<updated>2012-11-28T20:21:35Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Left atrium segmentation =&lt;br /&gt;
&lt;br /&gt;
Automatic segmentation of the heart’s left atrium offers great benefits for planning and outcome evaluation of atrial ablation procedures. The high anatomical variability of the left atrium makes its segmentation a particularly difficult problem. Specifically, the shape of the left atrium cavity, as well as the number and locations of the pulmonary veins connecting to it, vary substantially across subjects. We propose and demonstrate a robust atlas-based method for automatic segmentation of the left atrium in contrast-enhanced magnetic resonance angiography (MRA) images.&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
&lt;br /&gt;
We perform the segmentation via a label fusion algorithm [1] that uses a training set of MRA images of different patients with corresponding manual segmentations. We first align the training images to the test subject image to be segmented and apply the resulting deformations to the corresponding manual segmentation label maps to yield a set of left atrium segmentations in the coordinate space of the test subject. These form a non-parametric subject-specific statistical atlas. We then use a weighted voting algorithm to assign every voxel to the left atrium or to the background. The weighted label fusion scheme assigns higher weights to voxels in training segmentations that are located deeper within the structure of interest and that have similar intensities in training and test images. We also handle varying intensity distributions between images by incorporating iterative intensity equalization in a variant of the demons registration algorithm [2] used for the registration of the training images to the novel test image. &lt;br /&gt;
[http://www.na-mic.org/Wiki/index.php/Projects:NonparametricSegmentation Projects:NonparametricSegmentation]&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
We show in Figure 1 below a qualitative comparison between expert manual left atrium segmentations and automatic segmentations produced by our approach.&lt;br /&gt;
&lt;br /&gt;
[[File:Mdepa_MRA_seg_3D_comparison.png|500px|thumb|center|Figure 1: Qualitative evaluation of left atrium segmentations in three different subjects. First row shows expert manual segmentations. The corresponding automatic segmentations produced by our method are in the second row.]]&lt;br /&gt;
&lt;br /&gt;
We compare our method of weighted voting (WV) label fusion to three alternative automatic atlas-based approaches: majority voting (MV) label fusion, parametric atlas thresholding (AT) and atlas-based EM-segmentation (EM). The majority voting label fusion is similar to weighted voting, except it assigns each voxel to the label that occurs most frequently in the registered training set at that voxel. We also construct a parametric atlas that summarizes all 16 subjects in a single template image and a probabilistic label map by performing groupwise registration to an average space. After registering this new atlas to the test subject, we segment the left atrium using two different approaches. In atlas thresholding, we simply threshold the warped probabilistic label map at 0.5 to obtain the segmentation. This baseline method is analogous to majority voting in the parametric atlas setting. We also use the parametric atlas as a spatial prior in a traditional model-based EM-segmentation. Note that this construction favors the baseline algorithms as it includes the test image in the registration of all subjects into the common coordinate frame.&lt;br /&gt;
&lt;br /&gt;
In Figure 2 below, we show example segmentations comparing the automatic segmentations produced by these methods and expert manual segmentations.&lt;br /&gt;
&lt;br /&gt;
[[File:Mdepa_MRA_seg_2D_comparison.png|600px|thumb|center|Figure 2: Example segmentations of four different subjects: (a) expert manual segmentation (MS), (b) weighted voting label fusion (WV), (c) majority voting label fusion (MV), (d) parametric atlas thresholding (AT) and (e) EM-segmentation using the parametric atlas as a spatial prior (EM).]]&lt;br /&gt;
&lt;br /&gt;
Figure 3 reports the segmentation accuracy for each method, as measured by the volume overlap Dice scores. We also report the differences in segmentation accuracy between our method and the benchmark algorithms. To compute the difference between two methods, we subtract the Dice score of the second method from the score of the first for each subject. Our approach clearly outperforms the other algorithms (WV vs. MV: p &amp;lt; 10^−9, WV vs. AT: p &amp;lt; 0.002, WV vs. EM: p &amp;lt; 0.003; single-sided paired t-test). To focus the evaluation on the critical part of the structure, we manually isolate the pulmonary veins in each of the manual and automatic segmentations and compare the Dice scores for these limited label maps. Again, we observe consistent improvements offered by our approach (WV vs. MV: p &amp;lt; 10^−7, WV vs. AT: p &amp;lt; 10^−7, WV vs. EM: p &amp;lt; 0.03; single-sided paired t-test). Since atlas-based EM-segmentation is an intensity-based method, it performs relatively well in segmenting pulmonary veins, but suffers from numerous false positives in other areas, which lower its overall Dice scores.&lt;br /&gt;
&lt;br /&gt;
[[File:Mdepa_MRA_seg_Dices.png|500px|thumb|center|Figure 3: Dice scores of results for weighted voting label fusion (WV), majority voting label fusion (MV), parametric atlas thresholding (AT) and atlas-based EM-segmentation (EM). For each box plot, the central red line indicates the median, the boxes extend to the 25th and 75th percentiles, and the whiskers extend to the most extreme values not considered outliers, which are plotted as red crosses. Stars indicate that the weighted label fusion method achieves significantly more accurate segmentation than the baseline method (single-sided paired t-test, ∗: p &amp;lt; 0.05, ∗∗: p &amp;lt; 0.01).]]&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
&lt;br /&gt;
Experimental results illustrate the capacity of our method to handle high anatomical variability, yielding accurate segmentation and detecting all pulmonary veins in all subjects. By explicitly modeling the anatomical variability represented in the label maps and the corresponding training images, the proposed method outperforms traditional atlas-based segmentation algorithms and a simple label fusion benchmark.&lt;br /&gt;
&lt;br /&gt;
= Cardiac ablation scar visualization =&lt;br /&gt;
&lt;br /&gt;
Atrial fibrillation is one of the most common heart conditions and can have very serious consequences such as stroke and heart failure. A technique called catheter radio-frequency (RF) ablation has recently emerged as a treatment. It involves burning the cardiac tissue that is responsible for the fibrillation. Even though this technique has been shown to work fairly well on atrial fibrillation patients, repeat procedures are often needed to fully correct the condition because surgeons lack the necessary tools to quickly evaluate the success of the procedure.&lt;br /&gt;
&lt;br /&gt;
We propose a method to automatically visualize the scar created by RF ablation in delayed enhancement MR images acquired after the procedure. This will provide surgeons with a way to evaluate the outcome of cardiac ablation procedures.&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
&lt;br /&gt;
The visualization of cardiac scars resulting from ablation procedures in delayed enhancement magnetic resonance images (DE-MRI) is a very challenging problem because of the intersubject anatomical variability of the left atrium body and the pulmonary veins, the variation in the shape and location of the scars, and tissue that appears enhanced in DE-MRI even though it is not ablation scar. Visualization is further complicated because even the most advanced acquisition techniques yield DE-MRI images with relatively poor contrast.&lt;br /&gt;
&lt;br /&gt;
With all of these difficulties, performing this segmentation without exploiting some prior knowledge or significant feedback from the user is extremely challenging. Most previous attempts to segment scar in DE-MRI images relied heavily on input from the user. In contrast, we avoid this by automatically segmenting the left atrium in the DE-MRI images of the patients. The atrium segmentation provides us with prior&lt;br /&gt;
information about the location and shape of the left atrium, which in turn helps counter some of the challenges that were previously solved by requiring significant amounts of user interaction. We obtain this segmentation by first segmenting the left atrium in the MRA image of the patient’s heart using the method presented above. We then align the MRA image to the corresponding DE-MRI image of the same subject. With these two images aligned, we transfer the left atrium segmentation from the MRA to the DE-MRI image by applying the transformation computed in the registration.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
After obtaining the segmentation of the left atrium in the DE-MRI image, we produce a visualization of the ablation scar by projecting the DE-MRI data onto the left atrium surface. We restrict the projection to use only image voxels within an empirically determined distance of 7 mm on either side of the left atrium surface. Figure 4 below illustrates the maximum intensity projection results for one subject. In addition, we automatically threshold these projection values at the 75th percentile and show the resulting visualization as well. For comparison, we also project the expert manual scar segmentation onto the same left atrium surface.&lt;br /&gt;
&lt;br /&gt;
[[File:Mdepa_scar_visualization.png|400px|thumb|center|Figure 4: Comparison of projections of DE-MRI data and manual scar segmentation onto the left atrium surface. The circled area indicates an acquisition artifact that causes non-scar tissue to appear enhanced in the DE-MRI image.]]&lt;br /&gt;
&lt;br /&gt;
We confirm visually that the thresholded projection values correlate well with the manual scar segmentations. Nevertheless, there is one area, which we circled in the figure, where these two differ considerably. This discrepancy is due to an imaging artifact caused by the acquisition protocol and is likely to cause false positives in any intensity-based algorithm.&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
&lt;br /&gt;
We visualize the ablation scars by performing a maximum intensity projection of the DE-MRI image onto the automatically generated surface of the left atrium. The visualization is further improved by thresholding the projection. We showed visually that both visualizations correlate well with the expert manual segmentation of the ablation scars.&lt;br /&gt;
&lt;br /&gt;
= Literature =&lt;br /&gt;
[1] M.R. Sabuncu, B.T.T. Yeo, K. Van Leemput, B. Fischl, and P. Golland. Nonparametric Mixture Models for Supervised Image Parcellation. PMMIA Workshop at MICCAI 2009.&lt;br /&gt;
&lt;br /&gt;
[2] T. Vercauteren, X. Pennec, A. Perchant, and N. Ayache. Diffeomorphic demons: Efficient non-parametric image registration. NeuroImage 45(1), S61–S72, 2009.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: [http://people.csail.mit.edu/mdepa/ Michal Depa] and Polina Golland&lt;br /&gt;
&lt;br /&gt;
*BWH: Ehud Schmidt and Ron Kikinis&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ACardiacAblation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked NA-MIC Publications Database on Segmentation and Visualization for Cardiac Ablation Procedures]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78437</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78437"/>
		<updated>2012-11-28T20:19:22Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use&lt;br /&gt;
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms are interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
In addition to the non-parametric model, we present a new technique that exploits boundary information in the intensity image, referred to as &amp;quot;Spectral Label Fusion&amp;quot;. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on image data as well as atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by region-wise, instead of voxel-wise, voting. To derive the region-based voting, we modify the non-parametric, probabilistic model described above.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole brain MRI volumes are automatically parcellated into the following anatomical Regions of Interest (ROI): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
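To make the simplest two of these variants concrete, here is a minimal numpy sketch of Majority Voting and Local Label Fusion. The array shapes, the Gaussian form of the intensity weighting, and the parameter `sigma` are illustrative assumptions, not the exact model of the paper.

```python
import numpy as np

def majority_voting(propagated_labels):
    """Majority Voting: pick the most frequent propagated label at each voxel.

    propagated_labels: (n_atlases, n_voxels) int array of labels
    transferred from each registered training image.
    """
    n_labels = propagated_labels.max() + 1
    # count, per label, how many atlases vote for it at each voxel
    votes = np.stack([(propagated_labels == l).sum(axis=0)
                      for l in range(n_labels)])       # (n_labels, n_voxels)
    return votes.argmax(axis=0)

def local_weighted_fusion(propagated_labels, atlas_intensities,
                          test_intensities, sigma=10.0):
    """Local Label Fusion sketch: weight each atlas vote at each voxel by a
    Gaussian of the intensity difference between the warped training image
    and the test image (sigma is an assumed smoothing parameter)."""
    diff = atlas_intensities - test_intensities        # (n_atlases, n_voxels)
    w = np.exp(-diff ** 2 / (2.0 * sigma ** 2))
    n_labels = propagated_labels.max() + 1
    # accumulate the weights instead of raw counts
    votes = np.stack([np.where(propagated_labels == l, w, 0.0).sum(axis=0)
                      for l in range(n_labels)])
    return votes.argmax(axis=0)
```

The semi-local and global variants differ only in how the weights are estimated (mean field, respectively EM), not in the weighted-voting step itself.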
&lt;br /&gt;
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in the figure below. &lt;br /&gt;
The inputs to the algorithm are the new image to be segmented and the probabilistic label map.&lt;br /&gt;
&lt;br /&gt;
'''(1):''' The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. For the boundary extraction, we employ concepts based on spectral clustering, as presented in [5].&lt;br /&gt;
&lt;br /&gt;
'''(2):''' In the second step, the extracted contours give rise to regions, partitioning the image. We obtain the parcellation of the image with the watershed algorithm. &lt;br /&gt;
&lt;br /&gt;
'''(3):''' In the third step, we assign a label to each region based on the input label map, producing the final segmentation. For the region-based voting, we replace the assumption of independent voxel samples from the probabilistic framework with the Markov property. The selection of image-specific neighborhoods that capture the relevant information, as done in the second step, is crucial.&lt;br /&gt;
&lt;br /&gt;
[[File:scheme5.png|800px]]&lt;br /&gt;
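The region-wise voting of step (3) can be sketched as follows. This minimal example assumes the contour extraction and watershed of steps (1)–(2) have already produced an integer region map; each region then takes the label with the largest summed atlas probability, replacing voxel-wise with region-wise voting.

```python
import numpy as np

def region_voting(regions, label_probs):
    """Assign one label per region: every region receives the label with
    the highest total probability over its voxels.

    regions:     (H, W) int array of region ids from the watershed (step 2)
    label_probs: (n_labels, H, W) probabilistic label map from the atlases
    """
    seg = np.zeros(regions.shape, dtype=int)
    for r in np.unique(regions):
        mask = regions == r
        # sum the atlas-based label probabilities over the whole region
        votes = label_probs[:, mask].sum(axis=1)
        seg[mask] = votes.argmax()
    return seg
```

Because every voxel in a region receives the same label, a locally misregistered atlas cannot flip isolated voxels, which is the robustness argument made above.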
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct three sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles, and subcortical structures – to compare the different label fusion algorithms and the widely used Freesurfer whole-brain segmentation tool [4].&lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), and Semi-Local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots the average hippocampal volumes for five groups in the 304 subjects of Experiment 2 (young: under 30; middle-aged: 30 to 60; old: over 60; patients with mild cognitive impairment (MCI); and patients with Alzheimer’s Disease (AD)). Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
In a third experiment, we evaluated spectral label fusion. We automatically segment the left atrium of the heart in a set of 16 ECG-gated, Gadolinium-DTPA contrast-enhanced (0.2 mmol/kg) cardiac MRA images (CIDA sequence, TR = 4.3 ms, TE = 2.0 ms, θ = 40°, in-plane resolution varying from 0.51 mm to 0.68 mm, slice thickness varying from 1.2 mm to 1.7 mm, 512 × 512 × 96, ±80 kHz bandwidth, atrial diastolic ECG timing to counteract the considerable volume changes of the left atrium). The left atrium was manually segmented in each image by an expert. We set the UCM threshold (see scheme) to ρ = 0.2 for the 2D and ρ = 0 for the 3D watershed. We perform leave-one-out experiments by treating one subject as the test image and the remaining 15 subjects as the training set. We use the Dice score and the modified (average) Hausdorff distance between the automatic and expert segmentations as quantitative measures of segmentation quality. We compare our method to majority voting (MV) and intensity-weighted label fusion (IW).&lt;br /&gt;
In the figures below, we present the Dice volume overlap (left) and the modified Hausdorff distance (right) for each algorithm. The improvement in segmentation accuracy of the proposed method over IW is statistically significant (p &amp;lt; 10&amp;lt;sup&amp;gt;−5&amp;lt;/sup&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
[[File:boxplotDice.png|400px]] [[File:boxplotHausdorff.png|400px]]&lt;br /&gt;
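The two evaluation measures can be computed directly from the binary masks and their surface points. This is a small sketch; `modified_hausdorff` follows the Dubuisson–Jain average variant, which is an assumption about the exact definition used in the experiments.

```python
import numpy as np

def dice(a, b):
    """Dice volume overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def modified_hausdorff(pts_a, pts_b):
    """Modified (average) Hausdorff distance between two point sets:
    the larger of the two directed mean closest-point distances."""
    # pairwise Euclidean distances, shape (len(pts_a), len(pts_b))
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```

Dice rewards volume overlap, while the averaged Hausdorff distance penalizes boundary deviations, so the two measures are complementary.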
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derived various algorithms that combine the transferred training labels into a single segmentation estimate. Using a dataset of 39 brain MRI scans and corresponding expert label maps, we experimentally compared these segmentation algorithms with Freesurfer’s widely-used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect the hippocampal atrophy that precedes the probable onset of Alzheimer’s Disease.&lt;br /&gt;
Additionally, we presented spectral label fusion, a new approach for multi-atlas image segmentation. It combines the strengths of label fusion with advanced spectral segmentation. The integration of label cues into the spectral framework results in improved segmentation performance for the left atrium of the heart. The extracted image regions form a nested collection of segmentations and support a region-based voting scheme. The resulting method is more robust to registration errors than a voxel-wise approach.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation: Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation combining label propagation and decision fusion. NeuroImage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
[5] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, 2011.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa, Christian Wachinger and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78436</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78436"/>
		<updated>2012-11-28T20:16:43Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Key Investigators */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use&lt;br /&gt;
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project presents the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms both theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms can be interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
In addition to the non-parametric model, we present a new technique that exploits boundary information in the intensity image, referred to as &amp;quot;Spectral Label Fusion&amp;quot;. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on the image data as well as on atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by region-wise, instead of voxel-wise, voting. To derive the region-based voting, we modify the previous non-parametric, probabilistic model.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), and pallidum (PA).&lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated label at each voxel independently. This segmentation algorithm is commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in the figure below. &lt;br /&gt;
The inputs to the algorithm are the new image to be segmented and the probabilistic label map.&lt;br /&gt;
&lt;br /&gt;
'''(1):''' The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. For the boundary extraction, we employ concepts based on spectral clustering, as presented in [5].&lt;br /&gt;
&lt;br /&gt;
'''(2):''' In the second step, the extracted contours give rise to regions, partitioning the image. We obtain the parcellation of the image with the watershed algorithm. &lt;br /&gt;
&lt;br /&gt;
'''(3):''' In the third step, we assign a label to each region based on the input label map, producing the final segmentation. For the region-based voting, we replace the assumption of independent voxel samples from the probabilistic framework with the Markov property. The selection of image-specific neighborhoods that capture the relevant information, as done in the second step, is crucial.&lt;br /&gt;
&lt;br /&gt;
[[File:scheme5.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct three sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles, and subcortical structures – to compare the different label fusion algorithms and the widely used Freesurfer whole-brain segmentation tool [4].&lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), and Semi-Local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots the average hippocampal volumes for five groups in the 304 subjects of Experiment 2 (young: under 30; middle-aged: 30 to 60; old: over 60; patients with mild cognitive impairment (MCI); and patients with Alzheimer’s Disease (AD)). Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
In a third experiment, we evaluated spectral label fusion. We automatically segment the left atrium of the heart in a set of 16 ECG-gated, Gadolinium-DTPA contrast-enhanced (0.2 mmol/kg) cardiac MRA images (CIDA sequence, TR = 4.3 ms, TE = 2.0 ms, θ = 40°, in-plane resolution varying from 0.51 mm to 0.68 mm, slice thickness varying from 1.2 mm to 1.7 mm, 512 × 512 × 96, ±80 kHz bandwidth, atrial diastolic ECG timing to counteract the considerable volume changes of the left atrium). The left atrium was manually segmented in each image by an expert. We set the UCM threshold (see scheme) to ρ = 0.2 for the 2D and ρ = 0 for the 3D watershed. We perform leave-one-out experiments by treating one subject as the test image and the remaining 15 subjects as the training set. We use the Dice score and the modified (average) Hausdorff distance between the automatic and expert segmentations as quantitative measures of segmentation quality. We compare our method to majority voting (MV) and intensity-weighted label fusion (IW).&lt;br /&gt;
In the figures below, we present the Dice volume overlap (left) and the modified Hausdorff distance (right) for each algorithm. The improvement in segmentation accuracy of the proposed method over IW is statistically significant (p &amp;lt; 10&amp;lt;sup&amp;gt;−5&amp;lt;/sup&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
[[File:boxplotDice.png|400px]] [[File:boxplotHausdorff.png|400px]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derived various algorithms that combine the transferred training labels into a single segmentation estimate. Using a dataset of 39 brain MRI scans and corresponding expert label maps, we experimentally compared these segmentation algorithms with Freesurfer’s widely-used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect the hippocampal atrophy that precedes the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation: Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation combining label propagation and decision fusion. NeuroImage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
[5] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, 2011.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa, Christian Wachinger and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78434</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78434"/>
		<updated>2012-11-28T20:12:11Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Experiments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use&lt;br /&gt;
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project presents the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms both theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms can be interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
In addition to the non-parametric model, we present a new technique that exploits boundary information in the intensity image, referred to as &amp;quot;Spectral Label Fusion&amp;quot;. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on the image data as well as on atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by region-wise, instead of voxel-wise, voting. To derive the region-based voting, we modify the previous non-parametric, probabilistic model.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), and pallidum (PA).&lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated label at each voxel independently. This segmentation algorithm is commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in the figure below. &lt;br /&gt;
The inputs to the algorithm are the new image to be segmented and the probabilistic label map.&lt;br /&gt;
&lt;br /&gt;
'''(1):''' The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. For the boundary extraction, we employ concepts based on spectral clustering, as presented in [5].&lt;br /&gt;
&lt;br /&gt;
'''(2):''' In the second step, the extracted contours give rise to regions, partitioning the image. We obtain the parcellation of the image with the watershed algorithm. &lt;br /&gt;
&lt;br /&gt;
'''(3):''' In the third step, we assign a label to each region based on the input label map, producing the final segmentation. For the region-based voting, we replace the assumption of independent voxel samples from the probabilistic framework with the Markov property. The selection of image-specific neighborhoods that capture the relevant information, as done in the second step, is crucial.&lt;br /&gt;
&lt;br /&gt;
[[File:scheme5.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct three sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles, and subcortical structures – to compare the different label fusion algorithms and the widely used Freesurfer whole-brain segmentation tool [4].&lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), and Semi-Local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots the average hippocampal volumes for five groups in the 304 subjects of Experiment 2 (young: under 30; middle-aged: 30 to 60; old: over 60; patients with mild cognitive impairment (MCI); and patients with Alzheimer’s Disease (AD)). Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
In a third experiment, we evaluated spectral label fusion. We automatically segment the left atrium of the heart in a set of 16 ECG-gated, Gadolinium-DTPA contrast-enhanced (0.2 mmol/kg) cardiac MRA images (CIDA sequence, TR = 4.3 ms, TE = 2.0 ms, θ = 40°, in-plane resolution varying from 0.51 mm to 0.68 mm, slice thickness varying from 1.2 mm to 1.7 mm, 512 × 512 × 96, ±80 kHz bandwidth, atrial diastolic ECG timing to counteract the considerable volume changes of the left atrium). The left atrium was manually segmented in each image by an expert. We set the UCM threshold (see scheme) to ρ = 0.2 for the 2D and ρ = 0 for the 3D watershed. We perform leave-one-out experiments by treating one subject as the test image and the remaining 15 subjects as the training set. We use the Dice score and the modified (average) Hausdorff distance between the automatic and expert segmentations as quantitative measures of segmentation quality. We compare our method to majority voting (MV) and intensity-weighted label fusion (IW).&lt;br /&gt;
In the figures below, we present the Dice volume overlap (left) and the modified Hausdorff distance (right) for each algorithm. The improvement in segmentation accuracy of the proposed method over IW is statistically significant (p &amp;lt; 10&amp;lt;sup&amp;gt;−5&amp;lt;/sup&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
[[File:boxplotDice.png|400px]] [[File:boxplotHausdorff.png|400px]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derived various algorithms that combine the transferred training labels into a single segmentation estimate. Using a dataset of 39 brain MRI scans and corresponding expert label maps, we experimentally compared these segmentation algorithms with Freesurfer’s widely-used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect the hippocampal atrophy that precedes the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation: Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation combining label propagation and decision fusion. NeuroImage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
[5] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, 2011.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78432</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78432"/>
		<updated>2012-11-28T20:11:42Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Experiments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use&lt;br /&gt;
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project presents the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms both theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms can be interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
In addition to the non-parametric model, we present a new technique that exploits boundary information in the intensity image, referred to as &amp;quot;Spectral Label Fusion&amp;quot;. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on image data as well as atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by region-wise, instead of voxel-wise, voting. To derive the region-based voting, we modify the previous non-parametric, probabilistic model.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), and pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
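As a rough illustration of fusion rules (1) and (2), the following numpy sketch implements per-voxel majority voting and an intensity-weighted local fusion. The Gaussian weighting with parameter `sigma` is an assumption for illustration; the weight model in the actual framework may differ.

```python
import numpy as np

def majority_voting(propagated_labels):
    """Fuse propagated label maps by an independent per-voxel majority vote.

    propagated_labels: (n_subjects, ...) integer array of labels
    transferred from each registered training subject.
    """
    n_labels = propagated_labels.max() + 1
    # Count the votes for each label at every voxel.
    votes = np.stack([(propagated_labels == l).sum(axis=0)
                      for l in range(n_labels)])
    return votes.argmax(axis=0)

def local_label_fusion(propagated_labels, warped_intensities,
                       test_image, sigma=10.0):
    """Per-voxel weighted vote; each subject's weight decays with the
    intensity difference between its warped image and the test image."""
    diff = warped_intensities - test_image[None]
    weights = np.exp(-diff**2 / (2 * sigma**2))   # (n_subjects, ...)
    n_labels = propagated_labels.max() + 1
    # Accumulate weighted votes per label and pick the strongest.
    scores = np.stack([(weights * (propagated_labels == l)).sum(axis=0)
                       for l in range(n_labels)])
    return scores.argmax(axis=0)
```

The semi-local and global variants differ only in how the weights are coupled: shared across local patches (mean field) or across the whole image per training subject (EM).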
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in the figure below. &lt;br /&gt;
The inputs to the algorithm are the new image to be segmented and the probabilistic label map. &lt;br /&gt;
&lt;br /&gt;
'''(1):''' The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. For the boundary extraction, we employ concepts based on spectral clustering, as presented in [5]. &lt;br /&gt;
&lt;br /&gt;
'''(2):''' In the second step, the extracted contours give rise to regions, partitioning the image. We obtain the parcellation of the image with the watershed algorithm. &lt;br /&gt;
&lt;br /&gt;
'''(3):''' In the third step, we assign a label to each region based on the input label map, producing the final segmentation. For the region-based voting, we replace the assumption of independent voxel samples from the probabilistic framework with the Markov property. The selection of image-specific neighborhoods that capture the relevant information, as done in the second step, is crucial. &lt;br /&gt;
&lt;br /&gt;
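The region-wise voting of step (3) can be sketched as follows. This is a simplified numpy illustration that accumulates atlas label probabilities over each region of the watershed parcellation; it omits the Markov model, and the function name and inputs are illustrative.

```python
import numpy as np

def region_voting(regions, label_probs):
    """Assign one label per region by accumulating voxel-wise label
    probabilities inside each region.

    regions: integer region map, e.g. from the watershed parcellation.
    label_probs: (n_labels, ...) probabilistic label map (atlas prior).
    """
    segmentation = np.zeros_like(regions)
    for r in np.unique(regions):
        mask = regions == r
        # Sum the evidence for each label over the region's voxels
        # and give the strongest label to the whole region.
        votes = label_probs[:, mask].sum(axis=1)
        segmentation[mask] = votes.argmax()
    return segmentation
```

Voting per region rather than per voxel is what lets the image-derived boundaries, instead of the atlas alone, decide where label changes occur.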
[[File:scheme5.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct three sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles and subcortical structures – to compare different label fusion algorithms with the widely used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots the average hippocampal volumes for five groups (young: younger than 30; middle-aged: between 30 and 60; old: older than 60; patients with MCI; and AD patients) in the 304 subjects of Experiment 2. Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
In a third experiment, we evaluate spectral label fusion. We automatically segment the left atrium of the heart in a set of 16 electrocardiogram-gated (0.2 mmol/kg) Gadolinium-DTPA contrast-enhanced cardiac MRA images (CIDA sequence, TR=4.3ms, TE=2.0ms, θ = 40°, in-plane resolution varying from 0.51mm to 0.68mm, slice thickness varying from 1.2mm to 1.7mm, 512 × 512 × 96, -80 kHz bandwidth, atrial diastolic ECG timing to counteract considerable volume changes of the left atrium). The left atrium was manually segmented in each image by an expert. We set the UCM threshold (see scheme) to ρ = 0.2 for the 2D and ρ = 0 for the 3D watershed. We perform leave-one-out experiments by treating one subject as the test image and the remaining 15 subjects as the training set. We use the Dice score and the modified (average) Hausdorff distance between the automatic and expert segmentations as quantitative measures of segmentation quality. We compare our method to majority voting (MV) and intensity-weighted label fusion (IW).&lt;br /&gt;
In the figures below, we present Dice volume overlap and modified Hausdorff distance for each algorithm. The improvements in segmentation accuracy between the proposed method and IW are statistically significant (p &amp;lt; 10&lt;sup&gt;−5&lt;/sup&gt;).&lt;br /&gt;
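The two evaluation measures can be sketched in numpy. This is a minimal illustration; extraction of surface points and handling of voxel spacing are omitted.

```python
import numpy as np

def dice_score(a, b):
    """Dice volume overlap between two binary segmentations."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

def modified_hausdorff(points_a, points_b):
    """Modified (average) Hausdorff distance between two point sets,
    e.g. surface voxel coordinates of two segmentations."""
    # Pairwise Euclidean distances between the two point sets.
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    # Average closest-point distance in each direction; take the larger.
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```

Averaging the closest-point distances makes the measure less sensitive to single outlier voxels than the classical (maximum) Hausdorff distance.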
&lt;br /&gt;
[[File:boxplotDice.png|400px]] [[File:boxplotHausdorff.png|400px]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derived various algorithms that combine transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding label maps obtained from an expert, we experimentally compared these segmentation algorithms with Freesurfer’s widely used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect hippocampal atrophy that precedes the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation:&lt;br /&gt;
Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation&lt;br /&gt;
combining label propagation and decision fusion. Neuroimage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image&lt;br /&gt;
segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
[5] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, 2011.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78431</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78431"/>
		<updated>2012-11-28T20:11:05Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Experiments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use&lt;br /&gt;
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms are interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
In addition to the non-parametric model, we present a new technique that exploits boundary information in the intensity image, referred to as &amp;quot;Spectral Label Fusion&amp;quot;. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on image data as well as atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by region-wise, instead of voxel-wise, voting. To derive the region-based voting, we modify the previous non-parametric, probabilistic model.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), and pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in the figure below. &lt;br /&gt;
The inputs to the algorithm are the new image to be segmented and the probabilistic label map. &lt;br /&gt;
&lt;br /&gt;
'''(1):''' The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. For the boundary extraction, we employ concepts based on spectral clustering, as presented in [5]. &lt;br /&gt;
&lt;br /&gt;
'''(2):''' In the second step, the extracted contours give rise to regions, partitioning the image. We obtain the parcellation of the image with the watershed algorithm. &lt;br /&gt;
&lt;br /&gt;
'''(3):''' In the third step, we assign a label to each region based on the input label map, producing the final segmentation. For the region-based voting, we replace the assumption of independent voxel samples from the probabilistic framework with the Markov property. The selection of image-specific neighborhoods that capture the relevant information, as done in the second step, is crucial. &lt;br /&gt;
&lt;br /&gt;
[[File:scheme5.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct three sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles and subcortical structures – to compare different label fusion algorithms with the widely used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots the average hippocampal volumes for five groups (young: younger than 30; middle-aged: between 30 and 60; old: older than 60; patients with MCI; and AD patients) in the 304 subjects of Experiment 2. Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
In a third experiment, we evaluate spectral label fusion. We automatically segment the left atrium of the heart in a set of 16 electrocardiogram-gated (0.2 mmol/kg) Gadolinium-DTPA contrast-enhanced cardiac MRA images (CIDA sequence, TR=4.3ms, TE=2.0ms, θ = 40°, in-plane resolution varying from 0.51mm to 0.68mm, slice thickness varying from 1.2mm to 1.7mm, 512 × 512 × 96, -80 kHz bandwidth, atrial diastolic ECG timing to counteract considerable volume changes of the left atrium). The left atrium was manually segmented in each image by an expert. We set the UCM threshold (see scheme) to ρ = 0.2 for the 2D and ρ = 0 for the 3D watershed. We perform leave-one-out experiments by treating one subject as the test image and the remaining 15 subjects as the training set. We use the Dice score and the modified (average) Hausdorff distance between the automatic and expert segmentations as quantitative measures of segmentation quality. We compare our method to majority voting (MV) and intensity-weighted label fusion (IW).&lt;br /&gt;
In the figures below, we present Dice volume overlap and modified Hausdorff distance for each algorithm. The improvements in segmentation accuracy between the proposed method and IW are statistically significant (p &amp;lt; 10&lt;sup&gt;−5&lt;/sup&gt;).&lt;br /&gt;
&lt;br /&gt;
[[File:boxplotDice.png|400px]] [[File:boxplotHausdorff.png|400px]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derived various algorithms that combine transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding label maps obtained from an expert, we experimentally compared these segmentation algorithms with Freesurfer’s widely used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect hippocampal atrophy that precedes the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation:&lt;br /&gt;
Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation&lt;br /&gt;
combining label propagation and decision fusion. Neuroimage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image&lt;br /&gt;
segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
[5] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, 2011.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78430</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78430"/>
		<updated>2012-11-28T20:10:50Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Experiments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use&lt;br /&gt;
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms are interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
In addition to the non-parametric model, we present a new technique that exploits boundary information in the intensity image, referred to as &amp;quot;Spectral Label Fusion&amp;quot;. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on image data as well as atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by region-wise, instead of voxel-wise, voting. To derive the region-based voting, we modify the previous non-parametric, probabilistic model.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), and pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in the figure below. &lt;br /&gt;
The inputs to the algorithm are the new image to be segmented and the probabilistic label map. &lt;br /&gt;
&lt;br /&gt;
'''(1):''' The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. For the boundary extraction, we employ concepts based on spectral clustering, as presented in [5]. &lt;br /&gt;
&lt;br /&gt;
'''(2):''' In the second step, the extracted contours give rise to regions, partitioning the image. We obtain the parcellation of the image with the watershed algorithm. &lt;br /&gt;
&lt;br /&gt;
'''(3):''' In the third step, we assign a label to each region based on the input label map, producing the final segmentation. For the region-based voting, we replace the assumption of independent voxel samples from the probabilistic framework with the Markov property. The selection of image-specific neighborhoods that capture the relevant information, as done in the second step, is crucial. &lt;br /&gt;
&lt;br /&gt;
[[File:scheme5.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct three sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles and subcortical structures – to compare different label fusion algorithms with the widely used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots the average hippocampal volumes for five groups (young: younger than 30; middle-aged: between 30 and 60; old: older than 60; patients with MCI; and AD patients) in the 304 subjects of Experiment 2. Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
In a third experiment, we evaluate spectral label fusion. We automatically segment the left atrium of the heart in a set of 16 electrocardiogram-gated (0.2 mmol/kg) Gadolinium-DTPA contrast-enhanced cardiac MRA images (CIDA sequence, TR=4.3ms, TE=2.0ms, θ = 40°, in-plane resolution varying from 0.51mm to 0.68mm, slice thickness varying from 1.2mm to 1.7mm, 512 × 512 × 96, -80 kHz bandwidth, atrial diastolic ECG timing to counteract considerable volume changes of the left atrium). The left atrium was manually segmented in each image by an expert. We set the UCM threshold (see scheme) to ρ = 0.2 for the 2D and ρ = 0 for the 3D watershed. We perform leave-one-out experiments by treating one subject as the test image and the remaining 15 subjects as the training set. We use the Dice score and the modified (average) Hausdorff distance between the automatic and expert segmentations as quantitative measures of segmentation quality. We compare our method to majority voting (MV) and intensity-weighted label fusion (IW).&lt;br /&gt;
In the figures below, we present Dice volume overlap and modified Hausdorff distance for each algorithm. The improvements in segmentation accuracy between the proposed method and IW are statistically significant (p &amp;lt; 10&lt;sup&gt;−5&lt;/sup&gt;).&lt;br /&gt;
&lt;br /&gt;
[[File:boxplotDice.png|200px]] [[File:boxplotHausdorff.png|200px]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derived various algorithms that combine transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding label maps obtained from an expert, we experimentally compared these segmentation algorithms with Freesurfer’s widely used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect hippocampal atrophy that precedes the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation:&lt;br /&gt;
Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation&lt;br /&gt;
combining label propagation and decision fusion. Neuroimage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image&lt;br /&gt;
segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
[5] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, 2011.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=File:BoxplotHausdorff.png&amp;diff=78429</id>
		<title>File:BoxplotHausdorff.png</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=File:BoxplotHausdorff.png&amp;diff=78429"/>
		<updated>2012-11-28T20:10:31Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: Boxplot of Hausdorff&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Boxplot of Hausdorff&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78428</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78428"/>
		<updated>2012-11-28T20:10:17Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Experiments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use&lt;br /&gt;
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms are interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
In addition to the non-parametric model, we present a new technique that exploits boundary information in the intensity image, referred to as &amp;quot;Spectral Label Fusion&amp;quot;. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on the image data as well as on atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by region-wise, instead of voxel-wise, voting. To derive the region-based voting, we modify the non-parametric, probabilistic model described above.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), and pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
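As an illustrative sketch, fusion rules (1) and (2) can be expressed in a few lines of NumPy. The Gaussian intensity weighting and its `sigma` parameter below are hypothetical simplifications of the model's image likelihood term, not the project's actual implementation.

```python
import numpy as np

def majority_voting(propagated_labels):
    """Rule (1): pick the most frequent propagated label at each voxel.

    propagated_labels: (n_atlases, n_voxels) int array of labels
    transferred from the registered training images.
    """
    n_labels = propagated_labels.max() + 1
    # Count votes for each label at every voxel, then pick the winner.
    votes = np.stack([(propagated_labels == l).sum(axis=0)
                      for l in range(n_labels)])
    return votes.argmax(axis=0)

def local_weighted_fusion(propagated_labels, atlas_intensities,
                          test_intensities, sigma=1.0):
    """Rule (2): weight each atlas's vote at a voxel by a Gaussian of its
    intensity difference to the test image (hypothetical weighting)."""
    diff = atlas_intensities - test_intensities[None, :]
    w = np.exp(-0.5 * (diff / sigma) ** 2)          # (n_atlases, n_voxels)
    n_labels = propagated_labels.max() + 1
    scores = np.stack([((propagated_labels == l) * w).sum(axis=0)
                       for l in range(n_labels)])
    return scores.argmax(axis=0)
```

Semi-local and global fusion replace these per-voxel weights with weights shared across a local patch or across the whole image, estimated by variational mean field or EM, respectively.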
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in the figure below. &lt;br /&gt;
The inputs to the algorithm are the new image to be segmented and the probabilistic label map. &lt;br /&gt;
&lt;br /&gt;
'''(1):''' The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. For the boundary extraction, we employ concepts based on spectral clustering, as presented in [5]. &lt;br /&gt;
&lt;br /&gt;
'''(2):''' In the second step, the extracted contours give rise to regions, partitioning the image. We obtain the parcellation of the image with the watershed algorithm. &lt;br /&gt;
&lt;br /&gt;
'''(3):''' In the third step, we assign a label to each region based on the input label map, producing the final segmentation. For the region-based voting, we replace the probabilistic framework's assumption of independent voxel samples with the Markov property. The selection of image-specific neighborhoods that capture the relevant information, as done in the second step, is crucial. &lt;br /&gt;
&lt;br /&gt;
[[File:scheme5.png|800px]]&lt;br /&gt;
&lt;br /&gt;
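In its simplest form, the region-wise voting of step (3) sums the atlas-based label probabilities over each watershed region and assigns the winning label to the whole region. The sketch below, on flattened arrays with a hypothetical interface, is meant only to illustrate this idea.

```python
import numpy as np

def region_wise_voting(regions, label_probs):
    """Assign one label per region by summed label probabilities.

    regions:     (n_voxels,) int region id per voxel, e.g. from the
                 watershed of the spectral contour map
    label_probs: (n_voxels, n_labels) probabilistic label map
    """
    seg = np.empty(regions.shape, dtype=int)
    for r in np.unique(regions):
        mask = regions == r
        # Every voxel of the region receives the region's winning label.
        seg[mask] = label_probs[mask].sum(axis=0).argmax()
    return seg
```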
== Experiments ==&lt;br /&gt;
We conduct three sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles, and subcortical structures – to compare different label fusion algorithms with the widely-used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), and Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots the average hippocampal volumes for five groups (young: younger than 30; middle-aged: between 30 and 60; old: older than 60; MCI patients; and AD patients) among the 304 subjects of Experiment 2. Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
In a third experiment, we evaluate spectral label fusion. We automatically segment the left atrium of the heart in a set of 16 electrocardiogram-gated, Gadolinium-DTPA (0.2 mmol/kg) contrast-enhanced cardiac MRA images (CIDA sequence, TR = 4.3 ms, TE = 2.0 ms, θ = 40º, in-plane resolution varying from 0.51 mm to 0.68 mm, slice thickness varying from 1.2 mm to 1.7 mm, 512 × 512 × 96 voxels, -80 kHz bandwidth, atrial diastolic ECG timing to counteract considerable volume changes of the left atrium). The left atrium was manually segmented in each image by an expert. We set the UCM threshold (see scheme) to ρ = 0.2 for the 2D and ρ = 0 for the 3D watershed. We perform leave-one-out experiments by treating one subject as the test image and the remaining 15 subjects as the training set. We use the Dice score and the modified (average) Hausdorff distance between the automatic and expert segmentations as quantitative measures of segmentation quality. We compare our method to majority voting (MV) and intensity-weighted label fusion (IW).&lt;br /&gt;
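The two quality measures used here are standard and easy to state precisely: Dice volume overlap between the binary masks, and the modified (average) Hausdorff distance between boundary point sets. The brute-force pairwise-distance sketch below assumes small point sets and is not the project's code.

```python
import numpy as np

def dice(a, b):
    """Dice overlap 2|A∩B| / (|A| + |B|) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def modified_hausdorff(pts_a, pts_b):
    """Modified (average) Hausdorff distance: the larger of the two mean
    directed nearest-neighbour distances between the point sets."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```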
Fig. 3 presents the Dice volume overlap and the modified Hausdorff distance for each algorithm. The improvement in segmentation accuracy of the proposed method over IW is statistically significant (p &amp;lt; 10&amp;lt;sup&amp;gt;−5&amp;lt;/sup&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
[[File:boxplotDice.png|100px]] [[File:boxplotHausdorff.png|100px]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derive various algorithms that combine transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding label maps obtained from an expert, we experimentally compared these segmentation algorithms with Freesurfer’s widely-used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect hippocampal atrophy that precedes the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation:&lt;br /&gt;
Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation&lt;br /&gt;
combining label propagation and decision fusion. Neuroimage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image&lt;br /&gt;
segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
[5] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, 2011.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=File:BoxplotDice.png&amp;diff=78427</id>
		<title>File:BoxplotDice.png</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=File:BoxplotDice.png&amp;diff=78427"/>
		<updated>2012-11-28T20:09:25Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: Boxplot of Dice&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Boxplot of Dice&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78426</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78426"/>
		<updated>2012-11-28T20:09:08Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Experiments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use&lt;br /&gt;
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms are interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
In addition to the non-parametric model, we present a new technique that exploits boundary information in the intensity image, referred to as &amp;quot;Spectral Label Fusion&amp;quot;. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on the image data as well as on atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by region-wise, instead of voxel-wise, voting. To derive the region-based voting, we modify the non-parametric, probabilistic model described above.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), and pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in the figure below. &lt;br /&gt;
The inputs to the algorithm are the new image to be segmented and the probabilistic label map. &lt;br /&gt;
&lt;br /&gt;
'''(1):''' The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. For the boundary extraction, we employ concepts based on spectral clustering, as presented in [5]. &lt;br /&gt;
&lt;br /&gt;
'''(2):''' In the second step, the extracted contours give rise to regions, partitioning the image. We obtain the parcellation of the image with the watershed algorithm. &lt;br /&gt;
&lt;br /&gt;
'''(3):''' In the third step, we assign a label to each region based on the input label map, producing the final segmentation. For the region-based voting, we replace the probabilistic framework's assumption of independent voxel samples with the Markov property. The selection of image-specific neighborhoods that capture the relevant information, as done in the second step, is crucial. &lt;br /&gt;
&lt;br /&gt;
[[File:scheme5.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct three sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles, and subcortical structures – to compare different label fusion algorithms with the widely-used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), and Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots the average hippocampal volumes for five groups (young: younger than 30; middle-aged: between 30 and 60; old: older than 60; MCI patients; and AD patients) among the 304 subjects of Experiment 2. Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
In a third experiment, we evaluate spectral label fusion. We automatically segment the left atrium of the heart in a set of 16 electrocardiogram-gated, Gadolinium-DTPA (0.2 mmol/kg) contrast-enhanced cardiac MRA images (CIDA sequence, TR = 4.3 ms, TE = 2.0 ms, θ = 40º, in-plane resolution varying from 0.51 mm to 0.68 mm, slice thickness varying from 1.2 mm to 1.7 mm, 512 × 512 × 96 voxels, -80 kHz bandwidth, atrial diastolic ECG timing to counteract considerable volume changes of the left atrium). The left atrium was manually segmented in each image by an expert. We set the UCM threshold (see scheme) to ρ = 0.2 for the 2D and ρ = 0 for the 3D watershed. We perform leave-one-out experiments by treating one subject as the test image and the remaining 15 subjects as the training set. We use the Dice score and the modified (average) Hausdorff distance between the automatic and expert segmentations as quantitative measures of segmentation quality. We compare our method to majority voting (MV) and intensity-weighted label fusion (IW).&lt;br /&gt;
Fig. 3 presents the Dice volume overlap and the modified Hausdorff distance for each algorithm. The improvement in segmentation accuracy of the proposed method over IW is statistically significant (p &amp;lt; 10&amp;lt;sup&amp;gt;−5&amp;lt;/sup&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
[[File:boxplotDice.png]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derive various algorithms that combine transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding label maps obtained from an expert, we experimentally compared these segmentation algorithms with Freesurfer’s widely-used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect hippocampal atrophy that precedes the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation:&lt;br /&gt;
Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation&lt;br /&gt;
combining label propagation and decision fusion. Neuroimage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image&lt;br /&gt;
segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
[5] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, 2011.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78423</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78423"/>
		<updated>2012-11-28T20:04:26Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Experiments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use&lt;br /&gt;
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms are interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
In addition to the non-parametric model, we present a new technique that exploits boundary information in the intensity image, referred to as &amp;quot;Spectral Label Fusion&amp;quot;. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on the image data as well as on atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by region-wise, instead of voxel-wise, voting. To derive the region-based voting, we modify the non-parametric, probabilistic model described above.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), and pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in the figure below. &lt;br /&gt;
The inputs to the algorithm are the new image to be segmented and the probabilistic label map. &lt;br /&gt;
&lt;br /&gt;
'''(1):''' The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. For the boundary extraction, we employ concepts based on spectral clustering, as presented in [5]. &lt;br /&gt;
&lt;br /&gt;
'''(2):''' In the second step, the extracted contours give rise to regions, partitioning the image. We obtain the parcellation of the image with the watershed algorithm. &lt;br /&gt;
&lt;br /&gt;
'''(3):''' In the third step, we assign a label to each region based on the input label map, producing the final segmentation. For the region-based voting, we replace the probabilistic framework's assumption of independent voxel samples with the Markov property. The selection of image-specific neighborhoods that capture the relevant information, as done in the second step, is crucial. &lt;br /&gt;
&lt;br /&gt;
[[File:scheme5.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct three sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles, and subcortical structures – to compare different label fusion algorithms with the widely-used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), and Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots the average hippocampal volumes for five groups (young: younger than 30; middle-aged: between 30 and 60; old: older than 60; MCI patients; and AD patients) among the 304 subjects of Experiment 2. Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
In a third experiment, we evaluate spectral label fusion. We automatically segment the left atrium of the heart in a set of 16 electrocardiogram-gated, Gadolinium-DTPA (0.2 mmol/kg) contrast-enhanced cardiac MRA images (CIDA sequence, TR = 4.3 ms, TE = 2.0 ms, θ = 40º, in-plane resolution varying from 0.51 mm to 0.68 mm, slice thickness varying from 1.2 mm to 1.7 mm, 512 × 512 × 96 voxels, -80 kHz bandwidth, atrial diastolic ECG timing to counteract considerable volume changes of the left atrium). The left atrium was manually segmented in each image by an expert. For all the experiments we set γ = 2.5, giving higher weight to the spectral component. We set ρ = 0.2 for the 2D and ρ = 0 for the 3D watershed after inspecting the UCM. We perform leave-one-out experiments by treating one subject as the test image and the remaining 15 subjects as the training set. We use the Dice score and the modified (average) Hausdorff distance between the automatic and expert segmentations as quantitative measures of segmentation quality. We compare our method to majority voting (MV) and intensity-weighted label fusion (IW).&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derive various algorithms that combine transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding label maps obtained from an expert, we experimentally compared these segmentation algorithms with Freesurfer’s widely-used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect hippocampal atrophy that precedes the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation:&lt;br /&gt;
Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation&lt;br /&gt;
combining label propagation and decision fusion. Neuroimage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image&lt;br /&gt;
segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
[5] Arbelaez, P., Maire, M., Fowlkes, C., Malik, J.: Contour detection and hierarchical image segmentation. IEEE Trans. on Pat. Anal. Mach. Intel. 33(5), 898–916 (2011)&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78420</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78420"/>
		<updated>2012-11-28T19:57:16Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Spectral Label Fusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentations, since the use of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms can be interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
In addition to the non-parametric model, we present a new technique that exploits boundary information in the intensity image, referred to as &amp;quot;Spectral Label Fusion&amp;quot;. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on the image data, as well as on atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by region-wise, instead of voxel-wise, voting. To derive the region-based voting, we modify the previous non-parametric, probabilistic model.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
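As a concrete illustration of variants (1) and (2), the sketch below fuses the labels propagated to a single voxel. The Gaussian intensity weighting and the value of sigma are illustrative assumptions for the sketch, not the exact likelihood model derived in the framework.

```python
import math
from collections import Counter

def majority_vote(labels):
    # (1) Majority voting: most frequent propagated label at this voxel.
    return Counter(labels).most_common(1)[0][0]

def local_fusion(labels, train_intensities, test_intensity, sigma=10.0):
    # (2) Local label fusion: weight each propagated label by a Gaussian
    # of the intensity difference between the registered training image
    # and the test image at this voxel, then take the best-scoring label.
    score = {}
    for lab, t in zip(labels, train_intensities):
        w = math.exp(-((t - test_intensity) ** 2) / (2.0 * sigma ** 2))
        score[lab] = score.get(lab, 0.0) + w
    return max(score, key=score.get)

# Three registered training subjects vote at one voxel.
labels = ["HP", "AM", "HP"]
print(majority_vote(labels))                   # HP
print(local_fusion(labels, [60, 95, 58], 94))  # AM: its intensity matches best
```

The example shows how the two variants can disagree: majority voting counts heads, while local fusion lets the single training subject whose warped intensity matches the test voxel dominate.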
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in the figure below. &lt;br /&gt;
The inputs to the algorithm are the new image to be segmented and the probabilistic label map. &lt;br /&gt;
&lt;br /&gt;
'''(1):''' The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. For the boundary extraction, we employ concepts based on spectral clustering, as presented in [5]. &lt;br /&gt;
&lt;br /&gt;
'''(2):''' In the second step, the extracted contours give rise to regions, partitioning the image. We obtain the parcellation of the image with the watershed algorithm. &lt;br /&gt;
&lt;br /&gt;
'''(3):''' In the third step, we assign a label to each region based on the input label map, producing the final segmentation. For the region-based voting, we replace the assumption of independent voxel samples in the probabilistic framework with the Markov property. The selection of image-specific neighborhoods that capture the relevant information, performed in the second step, is crucial. &lt;br /&gt;
&lt;br /&gt;
[[File:scheme5.png|800px]]&lt;br /&gt;
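The region-wise voting of step (3) can be sketched as follows. In the pipeline the region partition would come from the watershed of step (2); here it is hard-coded, and the per-voxel label priors are made-up toy numbers.

```python
def region_vote(regions, label_prior):
    # Region-wise voting: each region takes the single label whose prior
    # probability, summed over the region's voxels, is largest -- in
    # contrast to independent voxel-wise voting.
    seg = {}
    for voxels in regions.values():
        totals = {}
        for v in voxels:
            for lab, p in label_prior[v].items():
                totals[lab] = totals.get(lab, 0.0) + p
        winner = max(totals, key=totals.get)
        for v in voxels:
            seg[v] = winner
    return seg

# Two watershed regions and toy atlas-based label priors per voxel.
regions = {0: [(0, 0), (0, 1)], 1: [(1, 0), (1, 1)]}
prior = {(0, 0): {"LV": 0.9, "WM": 0.1},
         (0, 1): {"LV": 0.4, "WM": 0.6},  # voxel-wise voting would say WM here
         (1, 0): {"WM": 0.8, "LV": 0.2},
         (1, 1): {"WM": 0.7, "LV": 0.3}}
print(region_vote(regions, prior)[(0, 1)])  # LV: region total 1.3 vs 0.7
```

Voxel (0, 1) illustrates the point of voting per region: taken alone it would be labeled WM, but the image-derived region it belongs to is dominated by LV, so the boundary follows the image evidence rather than the voxel-wise prior.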
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct two sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles and subcortical structures – to compare different label fusion algorithms and the widely-used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), and Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots the average hippocampal volumes for five groups (young: under 30; middle-aged: 30 to 60; old: over 60; MCI patients; and AD patients) in the 304 subjects of Experiment 2. Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derived various algorithms that combine the transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding label maps obtained from an expert, we experimentally compared these segmentation algorithms with Freesurfer’s widely-used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect hippocampal atrophy that precedes the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation: Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation combining label propagation and decision fusion. NeuroImage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
[5] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, 2011.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78419</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78419"/>
		<updated>2012-11-28T19:57:03Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Spectral Label Fusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentations, since the use of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms can be interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
In addition to the non-parametric model, we present a new technique that exploits boundary information in the intensity image, referred to as &amp;quot;Spectral Label Fusion&amp;quot;. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on the image data, as well as on atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by region-wise, instead of voxel-wise, voting. To derive the region-based voting, we modify the previous non-parametric, probabilistic model.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in the figure below. &lt;br /&gt;
The inputs to the algorithm are the new image to be segmented and the probabilistic label map. &lt;br /&gt;
&lt;br /&gt;
'''(1):''' The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. For the boundary extraction, we employ concepts based on spectral clustering, as presented in [5]. &lt;br /&gt;
&lt;br /&gt;
'''(2):''' In the second step, the extracted contours give rise to regions, partitioning the image. We obtain the parcellation of the image with the watershed algorithm. &lt;br /&gt;
&lt;br /&gt;
'''(3):''' In the third step, we assign a label to each region based on the input label map, producing the final segmentation. For the region-based voting, we replace the assumption of independent voxel samples in the probabilistic framework with the Markov property. The selection of image-specific neighborhoods that capture the relevant information, performed in the second step, is crucial. &lt;br /&gt;
&lt;br /&gt;
[[File:scheme5.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct two sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles and subcortical structures – to compare different label fusion algorithms and the widely-used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), and Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots the average hippocampal volumes for five groups (young: under 30; middle-aged: 30 to 60; old: over 60; MCI patients; and AD patients) in the 304 subjects of Experiment 2. Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derived various algorithms that combine the transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding label maps obtained from an expert, we experimentally compared these segmentation algorithms with Freesurfer’s widely-used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect hippocampal atrophy that precedes the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation: Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation combining label propagation and decision fusion. NeuroImage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
[5] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, 2011.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78418</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78418"/>
		<updated>2012-11-28T19:55:51Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Spectral Label Fusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentations, since the use of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms can be interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
In addition to the non-parametric model, we present a new technique that exploits boundary information in the intensity image, referred to as &amp;quot;Spectral Label Fusion&amp;quot;. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on the image data, as well as on atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by region-wise, instead of voxel-wise, voting. To derive the region-based voting, we modify the previous non-parametric, probabilistic model.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in the figure below. &lt;br /&gt;
The inputs to the algorithm are the new image to be segmented and the probabilistic label map. &lt;br /&gt;
&lt;br /&gt;
'''(1):''' The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. For the boundary extraction, we employ concepts based on spectral clustering, as presented in [5]. &lt;br /&gt;
&lt;br /&gt;
'''(2):''' In the second step, the extracted contours give rise to regions, partitioning the image. We obtain the parcellation of the image with the watershed algorithm. '''(3):''' In the third step, we assign a label to each region based on the input label map, producing the final segmentation. For the region-based voting, we replace the assumption of independent voxel samples in the probabilistic framework with the Markov property. The selection of image-specific neighborhoods that capture the relevant information, performed in the second step, is crucial. &lt;br /&gt;
&lt;br /&gt;
[[File:scheme5.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct two sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles and subcortical structures – to compare different label fusion algorithms and the widely-used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), and Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots the average hippocampal volumes for five groups (young: under 30; middle-aged: 30 to 60; old: over 60; MCI patients; and AD patients) in the 304 subjects of Experiment 2. Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derived various algorithms that combine the transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding label maps obtained from an expert, we experimentally compared these segmentation algorithms with Freesurfer’s widely-used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect hippocampal atrophy that precedes the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation: Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation combining label propagation and decision fusion. NeuroImage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
[5] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, 2011.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78416</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78416"/>
		<updated>2012-11-28T19:55:02Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Spectral Label Fusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentations, since the use of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms can be interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
In addition to the non-parametric model, we present a new technique, referred to as &amp;quot;Spectral Label Fusion&amp;quot;, that exploits boundary information in the intensity image. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on the image data as well as on atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy according to the strength of the separating boundary. Finally, we construct the segmentation by region-wise, instead of voxel-wise, voting. To derive the region-based voting, we modify the non-parametric, probabilistic model described above.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), and pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
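The first two voting schemes above can be sketched in a few lines of numpy. This is a minimal, illustrative sketch only: the array layout and the Gaussian intensity weighting are assumptions made for the example, not the exact model used in the project.

```python
import numpy as np

def majority_vote(propagated_labels):
    """Scheme (1): voxel-wise majority vote over propagated training labels.

    propagated_labels: int array of shape (n_atlases, n_voxels)
    returns: int array of shape (n_voxels,) with the most frequent label per voxel
    """
    n_labels = int(propagated_labels.max()) + 1
    # count, for every voxel, how many atlases vote for each label
    counts = np.stack([(propagated_labels == l).sum(axis=0)
                       for l in range(n_labels)])
    return counts.argmax(axis=0)

def local_weighted_vote(propagated_labels, warped_intensities, test_image, sigma=10.0):
    """Scheme (2): independent weighted averaging of propagated labels,
    with per-voxel weights decaying in the squared intensity difference
    between each warped training image and the test image (assumed Gaussian)."""
    diff = warped_intensities - test_image[None, :]
    w = np.exp(-(diff ** 2) / (2.0 * sigma ** 2))   # (n_atlases, n_voxels)
    n_labels = int(propagated_labels.max()) + 1
    scores = np.stack([((propagated_labels == l) * w).sum(axis=0)
                       for l in range(n_labels)])
    return scores.argmax(axis=0)
```

The semi-local and global variants differ only in how these weights are coupled: across local patches (mean field) or per training subject (EM), respectively.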
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in the figure below. &lt;br /&gt;
The inputs to the algorithm are the new image to be segmented and the probabilistic label map. &lt;br /&gt;
&lt;br /&gt;
The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. For the boundary extraction, we employ concepts based on spectral clustering, as presented in [5]. In the second step, the extracted contours give rise to regions that partition the image; we obtain this parcellation with the watershed algorithm. In the third step, we assign a label to each region based on the input label map, producing the final segmentation. For the region-based voting, we replace the assumption of independent voxel samples in the probabilistic framework with the Markov property. The selection of image-specific neighborhoods that capture the relevant information, as done in the second step, is crucial. &lt;br /&gt;
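The region-wise voting of the third step can be sketched as follows. This is a hypothetical minimal example: `region_map` stands in for the watershed parcellation and `label_posteriors` for the fused atlas votes; neither name comes from the project itself.

```python
import numpy as np

def region_vote(region_map, label_posteriors):
    """Assign one label per region by accumulating voxel-wise label
    posteriors inside each region and taking the strongest label.

    region_map: int array (n_voxels,), region index per voxel (e.g. watershed)
    label_posteriors: float array (n_labels, n_voxels), fused atlas votes
    returns: int array (n_voxels,), the final region-wise segmentation
    """
    seg = np.zeros(region_map.shape, dtype=int)
    for r in np.unique(region_map):
        mask = region_map == r
        # sum the posterior mass over the whole region, then pick one label
        seg[mask] = label_posteriors[:, mask].sum(axis=1).argmax()
    return seg
```

Because every voxel in a region receives the same label, a single misregistered atlas can no longer flip isolated voxels, which is the robustness benefit claimed for region-wise over voxel-wise voting.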
&lt;br /&gt;
[[File:scheme5.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct two sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles and subcortical structures – to compare different label fusion algorithms and the widely-used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), and Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots average hippocampal volumes for five groups (young: younger than 30; middle-aged: between 30 and 60; old: older than 60; patients with MCI; and patients with AD) in the 304 subjects of Experiment 2. Error bars indicate standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derived various algorithms that combine transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding expert label maps, we experimentally compared these segmentation algorithms with Freesurfer’s widely-used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect the hippocampal atrophy that precedes the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation:&lt;br /&gt;
Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation&lt;br /&gt;
combining label propagation and decision fusion. Neuroimage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image&lt;br /&gt;
segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
[5] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, 2011.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78415</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78415"/>
		<updated>2012-11-28T19:54:09Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Spectral Label Fusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use&lt;br /&gt;
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms are interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
In addition to the non-parametric model, we present a new technique, referred to as &amp;quot;Spectral Label Fusion&amp;quot;, that exploits boundary information in the intensity image. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on the image data as well as on atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy according to the strength of the separating boundary. Finally, we construct the segmentation by region-wise, instead of voxel-wise, voting. To derive the region-based voting, we modify the non-parametric, probabilistic model described above.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), and pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in the figure below. &lt;br /&gt;
The new image to be segmented and the probabilistic label map are the inputs to the algorithm.&lt;br /&gt;
The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. For the boundary extraction, we employ concepts based on spectral clustering, as presented in [5]. In the second step, the extracted contours give rise to regions that partition the image; we obtain this parcellation with the watershed algorithm. In the third step, we assign a label to each region based on the input label map, producing the final segmentation. For the region-based voting, we replace the assumption of independent voxel samples in the probabilistic framework with the Markov property. The selection of image-specific neighborhoods that capture the relevant information, as done in the second step, is crucial. &lt;br /&gt;
&lt;br /&gt;
[[File:scheme5.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct two sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles and subcortical structures – to compare different label fusion algorithms and the widely-used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), and Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots average hippocampal volumes for five groups (young: younger than 30; middle-aged: between 30 and 60; old: older than 60; patients with MCI; and patients with AD) in the 304 subjects of Experiment 2. Error bars indicate standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derived various algorithms that combine transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding expert label maps, we experimentally compared these segmentation algorithms with Freesurfer’s widely-used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect the hippocampal atrophy that precedes the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation:&lt;br /&gt;
Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation&lt;br /&gt;
combining label propagation and decision fusion. Neuroimage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image&lt;br /&gt;
segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
[5] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, 2011.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78408</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78408"/>
		<updated>2012-11-28T19:48:09Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Literature */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use&lt;br /&gt;
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms are interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
In addition to the non-parametric model, we present a new technique, referred to as &amp;quot;Spectral Label Fusion&amp;quot;, that exploits boundary information in the intensity image. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on the image data as well as on atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy according to the strength of the separating boundary. Finally, we construct the segmentation by region-wise, instead of voxel-wise, voting. To derive the region-based voting, we modify the non-parametric, probabilistic model described above.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), and pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in the figure below. &lt;br /&gt;
The new image to be segmented and the probabilistic label map are the inputs to the algorithm.&lt;br /&gt;
The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. In the second step, these contours give rise to regions, partitioning the image. In the third step, we assign a label to each region based on the input label map, producing the final segmentation. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:scheme5.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct two sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles and subcortical structures – to compare different label fusion algorithms and the widely-used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), and Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots average hippocampal volumes for five groups (young: younger than 30; middle-aged: between 30 and 60; old: older than 60; patients with MCI; and patients with AD) in the 304 subjects of Experiment 2. Error bars indicate standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derived various algorithms that combine transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding expert label maps, we experimentally compared these segmentation algorithms with Freesurfer’s widely-used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect the hippocampal atrophy that precedes the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation:&lt;br /&gt;
Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation&lt;br /&gt;
combining label propagation and decision fusion. Neuroimage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image&lt;br /&gt;
segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
[5] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, 2011.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78406</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78406"/>
		<updated>2012-11-28T19:47:44Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Spectral Label Fusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use&lt;br /&gt;
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms are interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
In addition to the non-parametric model, we present a new technique, referred to as &amp;quot;Spectral Label Fusion&amp;quot;, that exploits boundary information in the intensity image. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on the image data as well as on atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy according to the strength of the separating boundary. Finally, we construct the segmentation by region-wise, instead of voxel-wise, voting. To derive the region-based voting, we modify the non-parametric, probabilistic model described above.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), and pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in the figure below. &lt;br /&gt;
The new image to be segmented and the probabilistic label map are the inputs to the algorithm.&lt;br /&gt;
The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. In the second step, these contours give rise to regions, partitioning the image. In the third step, we assign a label to each region based on the input label map, producing the final segmentation. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:scheme5.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct two sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles and subcortical structures – to compare different label fusion algorithms and the widely-used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), and Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
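For reference, the Dice score plotted above measures the volume overlap between an automatic segmentation A and the manual segmentation B as 2|A∩B|/(|A|+|B|), ranging from 0 (no overlap) to 1 (perfect agreement). A minimal implementation:

```python
import numpy as np

def dice(seg_a, seg_b, label=1):
    """Dice overlap 2|A∩B| / (|A| + |B|) for one label; returns 1.0 when
    the label is absent from both segmentations."""
    a = (np.asarray(seg_a) == label)
    b = (np.asarray(seg_b) == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```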
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots average hippocampal volumes for five groups (young: under 30; middle-aged: 30 to 60; old: over 60; MCI patients; and AD patients) among the 304 subjects of Experiment 2. Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derive various algorithms that combine transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding expert label maps, we experimentally compare these segmentation algorithms with Freesurfer’s widely used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employ one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect hippocampal atrophy preceding the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation:&lt;br /&gt;
Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation&lt;br /&gt;
combining label propagation and decision fusion. Neuroimage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image&lt;br /&gt;
segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78404</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78404"/>
		<updated>2012-11-28T19:46:01Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Spectral Label Fusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use&lt;br /&gt;
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms are interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
Next to the non-parametric model, we presented a new technique that exploits boundary information in the intensity image, referred to as &amp;quot;Spectral Label Fusion&amp;quot;. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on image data, as well as atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by a region-wise, instead of voxel-wise voting. To derive the region-based voting, we modify the previous non-parametric, probabilistic model.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in the figure below. The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. In the second step, these contours give rise to regions, partitioning the image. In the third step, we assign a label to each region based on the input label map, producing the final segmentation. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:scheme5.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct two sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles, and subcortical structures – to compare different label fusion algorithms with the widely used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots average hippocampal volumes for five groups (young: under 30; middle-aged: 30 to 60; old: over 60; MCI patients; and AD patients) among the 304 subjects of Experiment 2. Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derive various algorithms that combine transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding expert label maps, we experimentally compare these segmentation algorithms with Freesurfer’s widely used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employ one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect hippocampal atrophy preceding the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation:&lt;br /&gt;
Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation&lt;br /&gt;
combining label propagation and decision fusion. Neuroimage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image&lt;br /&gt;
segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78403</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78403"/>
		<updated>2012-11-28T19:44:59Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Nonparametric Segmentation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use&lt;br /&gt;
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms are interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
Next to the non-parametric model, we presented a new technique that exploits boundary information in the intensity image, referred to as &amp;quot;Spectral Label Fusion&amp;quot;. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on image data, as well as atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by a region-wise, instead of voxel-wise voting. To derive the region-based voting, we modify the previous non-parametric, probabilistic model.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in Fig. 2. The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. In the second step, these contours give rise to regions, partitioning the image. In the third step, we assign a label to each region based on the input label map, producing the final segmentation. We formulate segmentation as a binary labeling problem; for a multi-label problem the same procedure is repeated for each label.&lt;br /&gt;
[[File:scheme5.png|800px]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct two sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles, and subcortical structures – to compare different label fusion algorithms with the widely used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots average hippocampal volumes for five groups (young: under 30; middle-aged: 30 to 60; old: over 60; MCI patients; and AD patients) among the 304 subjects of Experiment 2. Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derive various algorithms that combine transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding expert label maps, we experimentally compare these segmentation algorithms with Freesurfer’s widely used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employ one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect hippocampal atrophy preceding the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation:&lt;br /&gt;
Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation&lt;br /&gt;
combining label propagation and decision fusion. Neuroimage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image&lt;br /&gt;
segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78401</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78401"/>
		<updated>2012-11-28T19:44:10Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use&lt;br /&gt;
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms are interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
Next to the non-parametric model, we presented a new technique that exploits boundary information in the intensity image. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on image data, as well as atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by a region-wise, instead of voxel-wise voting. To derive the region-based voting, we modify the previous non-parametric, probabilistic model.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole-brain MRI volumes are automatically parcellated into the following anatomical regions of interest (ROIs): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
Spectral label fusion consists of three steps, as illustrated in Fig. 2. The first step extracts the boundaries from the image and label map, joins them in the spectral framework, and produces weighted contours. In the second step, these contours give rise to regions, partitioning the image. In the third step, we assign a label to each region based on the input label map, producing the final segmentation. We formulate segmentation as a binary labeling problem; for a multi-label problem the same procedure is repeated for each label.&lt;br /&gt;
[[File:scheme5.png|800px]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct two sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles, and subcortical structures – to compare different label fusion algorithms with the widely used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots average hippocampal volumes for five groups (young: under 30; middle-aged: 30 to 60; old: over 60; MCI patients; and AD patients) among the 304 subjects of Experiment 2. Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derive various algorithms that combine transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding expert label maps, we experimentally compare these segmentation algorithms with Freesurfer’s widely used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employ one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect hippocampal atrophy preceding the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation:&lt;br /&gt;
Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation&lt;br /&gt;
combining label propagation and decision fusion. Neuroimage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image&lt;br /&gt;
segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78395</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78395"/>
		<updated>2012-11-28T19:41:06Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use&lt;br /&gt;
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms are interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
Next to the non-parametric model, we presented a new technique that exploits boundary information in the intensity image. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on image data, as well as atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by a region-wise, instead of voxel-wise voting. To derive the region-based voting, we modify the previous non-parametric, probabilistic model.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
== Probabilistic Model ==&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole brain MRI volumes are automatically parcellated into the following anatomical Regions of Interest (ROI): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
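Two of the fusion rules listed above can be sketched for a single voxel. This is a toy illustration under assumed values, not the authors' implementation: the intensity-difference weighting for Local Label Fusion uses a hypothetical Gaussian kernel width `sigma2`.

```python
import numpy as np

# propagated labels and registered training intensities at one voxel,
# one entry per training subject
prop_labels     = np.array([1, 1, 2, 1, 2])
train_intensity = np.array([0.50, 0.55, 0.90, 0.52, 0.40])
test_intensity  = 0.53

# (1) Majority Voting: most frequent propagated label
counts = np.bincount(prop_labels)
mv_label = int(np.argmax(counts))

# (2) Local Label Fusion: weight each subject by intensity agreement
sigma2  = 0.01                         # assumed kernel width
weights = np.exp(-(train_intensity - test_intensity) ** 2 / sigma2)
scores  = np.bincount(prop_labels, weights=weights)
local_label = int(np.argmax(scores))
```

Here both rules agree, but the local weights down-weight the subject whose intensity (0.90) disagrees with the test voxel.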
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Spectral Label Fusion ==&lt;br /&gt;
[[File:scheme5.png|800px]] &lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct two sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles and subcortical structures – to compare different label fusion algorithms with the widely-used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
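The accuracy comparison above relies on the Dice score. A minimal sketch of how it is computed for one structure from two binary label masks (the masks below are illustrative):

```python
import numpy as np

def dice(a, b):
    # Dice overlap 2|A intersect B| / (|A| + |B|) between two binary masks
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto_seg   = np.array([[0, 1, 1], [0, 1, 0]])   # automatic segmentation
manual_seg = np.array([[0, 1, 0], [0, 1, 1]])   # manual reference
score = dice(auto_seg, manual_seg)              # 2*2 / (3+3)
```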
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots the average hippocampal volumes for five groups (young: under 30; middle-aged: 30 to 60; old: over 60; patients with MCI; and patients with AD) in the 304 subjects of Experiment 2. Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
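A structure volume such as the hippocampal volumes above can be read off a label map by counting voxels of the structure and scaling by the voxel volume. A sketch with assumed values: the label id and voxel spacing below are hypothetical, not taken from the study.

```python
import numpy as np

HIPPOCAMPUS_LABEL = 17                        # hypothetical label id
voxel_size_mm = (1.0, 1.0, 1.0)               # assumed 1 mm isotropic spacing

label_map = np.zeros((4, 4, 4), dtype=int)
label_map[1:3, 1:3, 1:3] = HIPPOCAMPUS_LABEL  # toy 2x2x2 "hippocampus"

n_voxels   = int((label_map == HIPPOCAMPUS_LABEL).sum())
volume_mm3 = n_voxels * float(np.prod(voxel_size_mm))
```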
&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derived various algorithms that combine transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding label maps obtained from an expert, we experimentally compared these segmentation algorithms with Freesurfer’s widely-used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect hippocampal atrophy that precedes the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation:&lt;br /&gt;
Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation&lt;br /&gt;
combining label propagation and decision fusion. Neuroimage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image&lt;br /&gt;
segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=File:Scheme5.png&amp;diff=78393</id>
		<title>File:Scheme5.png</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=File:Scheme5.png&amp;diff=78393"/>
		<updated>2012-11-28T19:39:23Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: Scheme of Spectral Label Fusion&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Scheme of Spectral Label Fusion&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78388</id>
		<title>Projects:NonparametricSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:NonparametricSegmentation&amp;diff=78388"/>
		<updated>2012-11-28T19:34:04Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Nonparametric Segmentation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations:StructuralImageAnalysis|NA-MIC Collaborations]], [[Algorithm:MIT|MIT Algorithms]],&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Nonparametric Segmentation =&lt;br /&gt;
&lt;br /&gt;
We propose a non-parametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms we develop rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute a final segmentation of the test subject. Label fusion methods have been shown to yield accurate segmentation, since the use&lt;br /&gt;
of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures, cf. [1,2,3]. To the best of our knowledge, this project investigates the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multi-atlas segmentation algorithms are interpreted as special cases of our framework.&lt;br /&gt;
&lt;br /&gt;
In addition to the non-parametric model, we present a new technique that exploits boundary information in the intensity image. Contour and texture cues are extracted from the image and combined with the label map in a spectral clustering framework. This approach offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on image data, as well as atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by region-wise, instead of voxel-wise, voting. To derive the region-based voting, we modify the previous non-parametric, probabilistic model.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
We instantiate our model in the context of brain MRI segmentation, where whole brain MRI volumes are automatically parcellated into the following anatomical Regions of Interest (ROI): white matter (WM), cerebral cortex (CT), lateral ventricle (LV), hippocampus (HP), amygdala (AM), thalamus (TH), caudate (CA), putamen (PU), pallidum (PA). &lt;br /&gt;
The proposed non-parametric model yields four types of label fusion algorithms: &lt;br /&gt;
&lt;br /&gt;
'''(1) Majority Voting:''' The algorithm computes the most frequent propagated labels at each voxel independently. This is a segmentation algorithm commonly used in practice, e.g. [2,3].&lt;br /&gt;
&lt;br /&gt;
'''(2) Local Label Fusion:''' An independent weighted averaging of propagated labels, where the weights vary at each voxel and are a function of the intensity difference between the training image and test image.&lt;br /&gt;
&lt;br /&gt;
'''(3) Semi-Local Label Fusion:''' Propagated labels are fused in a weighted fashion using a variational mean field algorithm. The weights are encouraged to be similar in local patches.&lt;br /&gt;
&lt;br /&gt;
'''(4) Global Label Fusion:''' Propagated labels are fused in a weighted fashion using an Expectation Maximization algorithm. The weights are global, i.e., there is a single weight for each training subject.&lt;br /&gt;
&lt;br /&gt;
The following figure shows an example segmentation obtained via Local Label Fusion.&lt;br /&gt;
&lt;br /&gt;
[[File:Segmentation_example2.png]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
We conduct two sets of experiments to validate our framework. In the first set of experiments, we use 39 brain MRI scans – with manually segmented white matter, cerebral cortex, ventricles and subcortical structures – to compare different label fusion algorithms with the widely-used Freesurfer whole-brain segmentation tool [4]. &lt;br /&gt;
&lt;br /&gt;
The following figure shows boxplots of Dice scores for all methods: Freesurfer (red), Majority Voting (yellow), Global Weighted Fusion (green), Local Weighted Voting (blue), Semi-local Weighted Fusion (purple).&lt;br /&gt;
&lt;br /&gt;
[[File:DiceScoresPerROI.png]]&lt;br /&gt;
&lt;br /&gt;
Our results indicate that the proposed framework yields more accurate segmentation than Freesurfer and Majority Voting. &lt;br /&gt;
&lt;br /&gt;
In a second experiment, we use brain MRI scans of 304 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal atrophy that foreshadows the onset of Alzheimer’s Disease.&lt;br /&gt;
The following figure plots the average hippocampal volumes for five groups (young: under 30; middle-aged: 30 to 60; old: over 60; patients with MCI; and patients with AD) in the 304 subjects of Experiment 2. Error bars indicate the standard error.&lt;br /&gt;
&lt;br /&gt;
[[File:HippocampalVolume.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In this work, we investigate a generative model that leads to label-fusion-style image segmentation methods. Within the proposed framework, we derived various algorithms that combine transferred training labels into a single segmentation estimate. With a dataset of 39 brain MRI scans and corresponding label maps obtained from an expert, we experimentally compared these segmentation algorithms with Freesurfer’s widely-used atlas-based segmentation tool [4]. Our results suggest that the proposed framework yields accurate and robust segmentation tools that can be employed on large multi-subject datasets. In a second experiment, we employed one of the developed segmentation algorithms to compute hippocampal volumes in MRI scans of 304 subjects. A comparison of these measurements across clinical and age groups indicates that the proposed algorithms are sufficiently sensitive to detect hippocampal atrophy that precedes the probable onset of Alzheimer’s Disease.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
[1] X. Artaechevarria, A. Munoz-Barrutia, and C. Ortiz de Solorzano. Combination strategies in multi-atlas image segmentation:&lt;br /&gt;
Application to brain MR data. IEEE Trans. Med. Imaging, 28(8):1266–1277, 2009.&lt;br /&gt;
&lt;br /&gt;
[2] R.A. Heckemann, J.V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation&lt;br /&gt;
combining label propagation and decision fusion. Neuroimage, 33(1):115–126, 2006.&lt;br /&gt;
&lt;br /&gt;
[3] T. Rohlfing, R. Brandt, R. Menzel, and C.R. Maurer. Evaluation of atlas selection strategies for atlas-based image&lt;br /&gt;
segmentation with application to confocal microscopy images of bee brains. NeuroImage, 21(4):1428–1442, 2004.&lt;br /&gt;
&lt;br /&gt;
[4] Freesurfer Wiki. http://surfer.nmr.mgh.harvard.edu.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
*MIT: Mert R. Sabuncu, B.T. Thomas Yeo, Koen Van Leemput, Michal Depa and Polina Golland&lt;br /&gt;
*Harvard: Koen Van Leemput and Bruce Fischl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/publications/pages/display?search=Projects%3ANonparametricSegmentation&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked NA-MIC Publications Database on Nonparametric Models for Supervised Image Segmentation]&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week:PatchBased&amp;diff=73472</id>
		<title>2012 Winter Project Week:PatchBased</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week:PatchBased&amp;diff=73472"/>
		<updated>2012-01-13T00:17:54Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Investigators */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Project Title: '''A patch-based approach to the segmentation of organs at risk'''&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Image:PW-SLC2012.png|[[2012_Winter_Project_Week#Projects|Projects List]]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Investigators ==&lt;br /&gt;
* Christian Wachinger&lt;br /&gt;
* Polina Golland&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin: 20px;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;Objective&amp;lt;/h3&amp;gt;&lt;br /&gt;
We will investigate the applicability of a patch-based approach in the context of label fusion. &lt;br /&gt;
This is interesting in scenarios where the subjects show high variability, which makes the calculation of the deformation fields between them challenging. &lt;br /&gt;
One application of this method could be the automatic segmentation of organs at risk in head and neck data. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;Approach, Plan&amp;lt;/h3&amp;gt;&lt;br /&gt;
In order to evaluate the discriminative power of patches, we perform manifold learning to project points from high-dimensional patch space to lower dimensions. Having the low-dimensional embedding, we can assess if patches that correspond to the same label arrange in clusters. &lt;br /&gt;
One challenge is the very large number of patches in an image or a group of images, which poses problems for standard spectral methods for dimensionality reduction. Consequently, we investigate approaches that can handle a large number of samples. &lt;br /&gt;
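The plan above can be sketched on synthetic data: embed image patches in a low-dimensional space and check whether patches with the same label cluster together. In this toy sketch PCA stands in for the spectral embedding, and the two "tissues" are hypothetical Gaussian patch populations.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic 3x3 patches (flattened to 9-D) from two "tissues"
patches_a = rng.normal(0.2, 0.05, size=(50, 9))   # tissue A
patches_b = rng.normal(0.8, 0.05, size=(50, 9))   # tissue B
X = np.vstack([patches_a, patches_b])
lab = np.array([0] * 50 + [1] * 50)

# 2-D linear embedding via PCA on the centered patch matrix
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
embed = Xc @ Vt[:2].T

# separation of the two label groups along the first component
sep = embed[lab == 0, 0].mean() - embed[lab == 1, 0].mean()
```

Patches with the same label form well-separated clusters along the first component, which is the property the low-dimensional embedding is meant to reveal.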
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 40%; float: left;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Progress&amp;lt;/h3&amp;gt;&lt;br /&gt;
* Discussed data and motivation with fellow project week participants&lt;br /&gt;
* Discussed registration issues with data&lt;br /&gt;
* Continued to apply method to synthetic data&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 97%; float: left;&amp;quot;&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week:PatchBased&amp;diff=73470</id>
		<title>2012 Winter Project Week:PatchBased</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week:PatchBased&amp;diff=73470"/>
		<updated>2012-01-13T00:16:22Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Project Title: '''A patch-based approach to the segmentation of organs at risk'''&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Image:PW-SLC2012.png|[[2012_Winter_Project_Week#Projects|Projects List]]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Investigators ==&lt;br /&gt;
* Christian Wachinger&lt;br /&gt;
* Polina Golland&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin: 20px;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;Objective&amp;lt;/h3&amp;gt;&lt;br /&gt;
We will investigate the applicability of a patch-based approach in the context of label fusion. &lt;br /&gt;
This is interesting in scenarios where the subjects show high variability, which makes the calculation of the deformation fields between them challenging. &lt;br /&gt;
One application of this method could be the automatic segmentation of organs at risk in head and neck data. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;Approach, Plan&amp;lt;/h3&amp;gt;&lt;br /&gt;
In order to evaluate the discriminative power of patches, we perform manifold learning to project points from high-dimensional patch space to lower dimensions. Having the low-dimensional embedding, we can assess if patches that correspond to the same label arrange in clusters. &lt;br /&gt;
One challenge is the very large number of patches in an image or a group of images, which poses problems for standard spectral methods for dimensionality reduction. Consequently, we investigate approaches that can handle a large number of samples, such as the Nyström method. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 40%; float: left;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Progress&amp;lt;/h3&amp;gt;&lt;br /&gt;
* Discussed data and motivation with fellow project week participants&lt;br /&gt;
* Discussed registration issues with data&lt;br /&gt;
* Continued to apply method to synthetic data&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 97%; float: left;&amp;quot;&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week:PatchBased&amp;diff=72854</id>
		<title>2012 Winter Project Week:PatchBased</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week:PatchBased&amp;diff=72854"/>
		<updated>2012-01-05T09:21:59Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Project Title: '''A patch-based approach to the segmentation of organs at risk'''&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Image:PW-SLC2012.png|[[2012_Winter_Project_Week#Projects|Projects List]]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Investigators ==&lt;br /&gt;
* Christian Wachinger&lt;br /&gt;
* Polina Golland&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin: 20px;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;Objective&amp;lt;/h3&amp;gt;&lt;br /&gt;
We will investigate the applicability of a patch-based approach in the context of label fusion. &lt;br /&gt;
This is interesting in scenarios where the subjects show high variability, which makes the calculation of the deformation fields between them challenging. &lt;br /&gt;
One application of this method could be the automatic segmentation of organs at risk in head and neck data. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;Approach, Plan&amp;lt;/h3&amp;gt;&lt;br /&gt;
In order to evaluate the discriminative power of patches, we perform manifold learning to project points from high-dimensional patch space to lower dimensions. Having the low-dimensional embedding, we can assess if patches that correspond to the same label arrange in clusters. &lt;br /&gt;
One challenge is the very large number of patches in an image or a group of images, which poses problems for standard spectral methods for dimensionality reduction. Consequently, we investigate approaches that can handle a large number of samples, such as the Nyström method. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 40%; float: left;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Progress&amp;lt;/h3&amp;gt;&lt;br /&gt;
*&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 97%; float: left;&amp;quot;&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week&amp;diff=72613</id>
		<title>2012 Winter Project Week</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week&amp;diff=72613"/>
		<updated>2011-12-20T22:27:37Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[Project Events]], [[AHM_2012]], [[Events]]&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
[[image:PW-SLC2012.png|300px]]&lt;br /&gt;
&lt;br /&gt;
== Dates.Venue.Registration ==&lt;br /&gt;
&lt;br /&gt;
Please [[AHM_2012#Dates_Venue_Registration|click here for Dates, Venue, and Registration]] for this event.&lt;br /&gt;
&lt;br /&gt;
== Agenda and Project List==&lt;br /&gt;
&lt;br /&gt;
Please:&lt;br /&gt;
*  [[AHM_2012#Agenda|'''Click here for the agenda for AHM 2012 and Project Week''']].&lt;br /&gt;
*  [[#Projects|'''Click here to jump to Project list''']]&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
From January 9-13, 2012, the 14th project week for hands-on research and development activity in Neuroscience and Image-Guided Therapy applications will be hosted in Salt Lake City, Utah. Participants engage in open source programming using the [[NA-MIC-Kit|NA-MIC Kit]], algorithms, medical imaging sequence development, tracking experiments, and clinical applications. The main goal of this event is to further the translational research deliverables of the sponsoring centers ([http://www.na-mic.org NA-MIC], [http://www.ncigt.org NCIGT], [http://nac.spl.harvard.edu NAC], [http://catalyst.harvard.edu/home.html Harvard Catalyst], and [http://www.cimit.org CIMIT]) and their collaborators by identifying and solving programming problems during planned and ad hoc break-out sessions.  &lt;br /&gt;
&lt;br /&gt;
Active preparation for this conference begins with a kick-off teleconference. Invitations to this call are sent to members of the sponsoring communities, their collaborators, past attendees of the event, as well as any parties expressing an interest in working with these centers. The main goal of the initial teleconference is to gather information about which groups/projects will be active at the upcoming event, to ensure that there are sufficient resources available to meet everyone's needs. Focused discussions about individual projects are conducted during several subsequent teleconferences and permit the hosts to finalize the project teams, consolidate any common components, and identify topics that should be discussed in break-out sessions. In the final days leading up to the meeting, all project teams are asked to complete a template page on the wiki describing the objectives and research plan for each project.  &lt;br /&gt;
&lt;br /&gt;
On the first day of the conference, each project team leader delivers a short presentation to introduce their topic and the individual members of their team. These brief presentations serve both to familiarize other teams doing similar work with common problems or practical solutions, and to identify potential subsets of individuals who might benefit from collaborative work. For the remainder of the conference, about 50% of the time is devoted to break-out discussions on topics of common interest to particular subsets and 50% to hands-on project work. For hands-on project work, attendees are organized into 30-50 small teams of 2-4 individuals with a mix of multi-disciplinary expertise. To facilitate this work, a large room is set up with ample work tables, internet connection, and power access. This enables each software development team to gather at a table with their individual laptops, connect to the internet, download their software and data, and work on specific projects. On the final day of the event, each project team summarizes their accomplishments in a closing presentation.&lt;br /&gt;
&lt;br /&gt;
A summary of all past NA-MIC Project Events is available [[Project_Events#Past|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Projects==&lt;br /&gt;
&lt;br /&gt;
===Traumatic Brain Injury ===&lt;br /&gt;
&lt;br /&gt;
* [[2012_Winter_Project_Week:TBIClinicalAnalysis|Segmentation of Serial MRI of TBI patients using Personalized Atlas Construction]] (Bo Wang, Marcel Prastawa, Andrei Irimia, Micah Chambers, Jack van Horn, Guido Gerig, Danielle Pace, Stephen Aylward)&lt;br /&gt;
* [[2012_Winter_Project_Week:TBIDTIAnalysis|Registration and analysis of white matter tract changes in TBI]] (Clement Vachet, Anuja Sharma, Marcel Prastawa, Andrei Irimia, Jack van Horn, Guido Gerig, Martin Styner, Danielle Pace, Stephen Aylward)&lt;br /&gt;
* [[2012_Winter_Project_Week:TBIValidation|Validation, visualization and analysis of segmentation for TBI]] (Bo Wang, Marcel Prastawa, Andrei Irimia, Micah Chambers, Jack van Horn, Guido Gerig, Danielle Pace, Stephen Aylward)&lt;br /&gt;
*Geometric Metamorphosis for TBI (Danielle Pace, Marc Niethammer, Marcel Prastawa, Andrei Irimia, Jack van Horn, Stephen Aylward)&lt;br /&gt;
* [[2012_Winter_Project_Week:TBIRegistration|Multimodal Deformable Registration of Traumatic Brain Injury MR Volumes using Graphics Processing Units]] (Yifei Lou, Andrei Irimia, Patricio Vela, Allen Tannenbaum, Micah C. Chambers, Jack Van Horn and Paul M. Vespa, Danielle Pace, Stephen Aylward)&lt;br /&gt;
* [[2012_Winter_Project_Week:TBIRegistration|Integration of unscented Kalman filter (UKF) based multi-tensor tractography in Slicer]] (Christian Baumgartner, Yogesh Rathi, Carl-Fredrik Westin)&lt;br /&gt;
&lt;br /&gt;
===Predict Huntington's Disease===&lt;br /&gt;
* [[2012_Winter_Project_Week:SPIEWorkshop|SPIE DTI Workshop Preparation: Perform DTI Quality Control]] (Jean-Baptiste Berger, Sonia Pujol, Guido Gerig, Clement Vachet, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:DWIPhantom|DTI tractography phantom: a software for evaluating tractography algorithms]] (Gwendoline Roger,Yundi Shi, Clement Vachet, Martin Styner, Sylvain Gouttard)&lt;br /&gt;
* [[2012_Winter_Project_Week:FVLight|FiberViewerLight: a fiber bundle visualization and clustering tool]] (Jean-Baptiste Berger, Clement Vachet, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:DTIAFA|DTIAtlasFiberAnalyzer]] (Jean-Baptiste Berger, Yundi Shi, Clement Vachet, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:PairWiseDTIRegistration|Pairwise DTI registration: DTI-Reg]] (Clement Vachet, Hans Johnson, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:ShapeAnalysisSubcorticalStructuresHD|Morphometric analysis in subcortical structures in HD]] (Beatriz Paniagua, Clement Vachet, Hans Johnson, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:DTI pipeline|Applying our DTI pipeline to analyse HD data]] (Gopalkrishna Veni, Hans Johnson, Martin Styner, Ross Whitaker)&lt;br /&gt;
* [[2012_Winter_Project_Week: DTI Change Modeling | Longitudinal change modeling of fiber tracts in serial HD DTI data]] (Anuja Sharma, Hans Johnson, Guido Gerig)&lt;br /&gt;
* [[2012_Winter_Project_Week: Continuous 4D shapes | Continuous 4d shape models from time-discrete data: Subcortical structures in HD]] (James Fishbaugh, Hans Johnson, Guido Gerig)&lt;br /&gt;
&lt;br /&gt;
===Atrial fibrillation ===&lt;br /&gt;
* [[2012_Winter_Project_Week:EndoSeg|Endocardial Segmentation in DE-MRI for AFib]] (Yi Gao, Liang-Jia Zhu, Josh Cates, Greg Gardner, Alan Morris, Danny Perry, Rob MacLeod, Sylvain Bouix, Allen Tannenbaum)&lt;br /&gt;
* [[2012_Winter_Project_Week:LAWallRegistration|Longitudinal Alignment and Visualization of Left-Atrial Wall from DEMRI and MRA]] (Josh Cates, Yi Gao, Liang-Jia Zhu, Greg Gardner, Alan Morris, Danny Perry, Rob MacLeod, Sylvain Bouix, Allen Tannenbaum)&lt;br /&gt;
* [[2012_Winter_Project_Week:PVRegistration|Longitudinal Alignment and Visualization of Pulmonary Veins from DEMRI and MRA]] (Josh Cates, Yi Gao, Liang-Jia Zhu, Greg Gardner, Alan Morris, Danny Perry, Rob MacLeod, Sylvain Bouix, Allen Tannenbaum)&lt;br /&gt;
* [[2012_Winter_Project_Week:RealTime|OpenIGT for realtime MRI-guided RF ablation]] (Gene Payne, Rob MacLeod, and Junichi Tokuda)&lt;br /&gt;
&lt;br /&gt;
===Head and Neck Cancer ===&lt;br /&gt;
* [[2012_Winter_Project_Week:PatchBased|A patch-based approach to the segmentation of organs at risk]]  (Christian Wachinger, Polina Golland)&lt;br /&gt;
* [[2012_Winter_Project_Week:PairwiseLF|Label fusion with pairwise interactions]]  (Ramesh Sridharan, Christian Wachinger, Polina Golland)&lt;br /&gt;
* RT dose comparison tool for Slicer (Nadya Shusharina, Greg Sharp)&lt;br /&gt;
* [[2012_Winter_Project_Week:InteractiveSegmentation|Interactive editing tools for segmentation]] (Greg Sharp, Steve Pieper)&lt;br /&gt;
* [[2012_Winter_Project_Week:UserInTheLoop_InteractiveSegmn|Interactive 3D Level-Set Segmentation]] (Peter Karasev, Karl Fritscher, Ivan Kolesov, Allen Tannenbaum)&lt;br /&gt;
&lt;br /&gt;
===IGT for Surgery and Radiation Treatments===&lt;br /&gt;
*[[2012_Winter_Project_Week:PelvicRegistration|Deformable prostate registration: 3D ultrasound to MRI]] (Mehdi Moradi, Jan Egger, Andrey Fedorov)&lt;br /&gt;
*[[2012_Winter_Project_Week:iGyne | iGyne: A Software Prototype to support Gynecologic Radiation Treatment in AMIGO]] (Jan Egger, Xiaojun Chen, Radhika Tibrewal, Mehdi Moradi)&lt;br /&gt;
*[[2012_Winter_Project_Week:OpenIGTLink_Interface_for_Slicer4| OpenIGTLink interface for Slicer4]] (Junichi Tokuda, Clif Burdette/Jack Blevins, Tamas Ungi, Andras Lasso)&lt;br /&gt;
*[[2012_Winter_Project_Week:Needle Detection in MR Images for Brachytherapy in AMIGO|Needle Detection in MR Images for Brachytherapy in AMIGO]] (Radhika Tibrewal, Jan Egger, Xiaojun Chen, Stephen Aylward)&lt;br /&gt;
*[[2012_Winter_Project_Week:LiveUltrasound|Live ultrasound in Slicer4 using Plus and OpenIGTLink]] (Tamas Ungi, Elvis Chen)&lt;br /&gt;
*[[2012_Winter_Project_Week:4DUltrasound|4D Ultrasound Storage and Volume Rendering on Slicer 3.6]] (Laurent, Noby)&lt;br /&gt;
*[[2012_Winter_Project_Week:BKPLUSSlicer|Integration of BK Ultrasound into PLUS and Slicer]] (Mehdi Moradi, Isaiah Norton, PLUS developers?)&lt;br /&gt;
*[[2012_Winter_Project_Week:hybridMRS | Generation of a hybrid MR-Spectroscopic (MRS) dataset under 3DSlicer]] (Jan Egger, Isaiah Norton, Christopher Nimsky, Tina Kapur)&lt;br /&gt;
*[[2012_Winter_Project_Week:RTTools|RT tools for Slicer4]] (Csaba Pinter, Kevin Wang, Andras Lasso, Greg Sharp)&lt;br /&gt;
*[[2012_Winter_Project_Week:RTSS|RT structure set data representation]] (Greg Sharp, Andras Lasso, Steve Pieper, etc.)&lt;br /&gt;
&lt;br /&gt;
===Musculoskeletal System===&lt;br /&gt;
* [[2012_Winter_Project_Week:Radnostics|Spine Segmentation &amp;amp; Osteoporosis Screening In CT Imaging Studies]] (Anthony Blumfield)&lt;br /&gt;
&lt;br /&gt;
===Registration===&lt;br /&gt;
* [[2012_Winter_Project_Week:CMFreg|Framework for Cranio-Maxillo Facial registration in Slicer3]] (Beatriz Paniagua, Lucia Cevidanes, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:SlidingOrgans|Registration in the presence of sliding between organs (Danielle Pace, Marc Niethammer, Stephen Aylward)]]&lt;br /&gt;
* [[2012_Winter_Project_Week:GeometricMetamorphosis|Estimating the infiltration / recession of pathologies independent of background deformations (Danielle Pace, Stephen Aylward, Marc Niethammer)]]&lt;br /&gt;
&lt;br /&gt;
===Shape Analysis===&lt;br /&gt;
* [[2012_Winter_Project_Week:PNSnormals|Principal Nested Spheres Normal Consistency in ShapeWorks]] (Beatriz Paniagua, Josh Cates, Manasi Datar, Ross Whitaker, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:GeomIndicesSlicer4|Porting of White Matter Geometric Indices Module to Slicer4]] (Peter Savadjiev)&lt;br /&gt;
* [[2012_Winter_Project_Week:ParticleWrapper|Slicer end-to-end particle correspondence wrapper module]] (Ipek Oguz, Beatriz Paniagua, Josh Cates, Manasi Datar, Ross Whitaker, Martin Styner)&lt;br /&gt;
&lt;br /&gt;
===NA-MIC Kit Internals===&lt;br /&gt;
*Slicer4 release (Jean-Christophe Fillion-Robin (JC), and Julien Finet (J2))&lt;br /&gt;
*Slicer4 extensions (JC)&lt;br /&gt;
*Slicer4 documentation (JC)&lt;br /&gt;
*Slicer4 GUI Testing (Benjamin Long, J2, JC)&lt;br /&gt;
*Slicer4 data on MIDAS (Josh Cates, Patrick Reynolds)&lt;br /&gt;
*[[2012_Project_Week:SceneViews|Slicer4 Scene Views Module]] (Nicole Aucoin, Ron Kikinis, Julien Finet)&lt;br /&gt;
*[[2012_Project_Week:AnnotationsFileFormatRefactor|Annotations Module File Format Refactor]] (Nicole Aucoin)&lt;br /&gt;
*[[2012_Project_Week:QT3DTextRendering|QT 3D Text rendering proof of concept]] (Julien Finet, Steve Pieper, Nicole Aucoin)&lt;br /&gt;
*[[2012_Project_Week:DICOM|DICOM Networking, Database, and Slicer Integration]] (Steve, Andrey, Andras)&lt;br /&gt;
*[[2012_Project_Week:EditorExtensions|Editor Extension Examples and Debugging]] (Steve, Andrey, Jc, Hans, Satra)&lt;br /&gt;
*[[2012_Project_Week:ViewerControls|Redesign of the slice viewer control panels]] (Julien Finet, Ron Kikinis, Hans Johnson, Greg Sharp)&lt;br /&gt;
* [[2012_Project_Week:AutomatedTesting |Automated Testing (Sonia Pujol, Steve Pieper, Jc, Benjamin)]]&lt;br /&gt;
* Remove legacy code from slicer4 (itk, modules, build scripts) (Hans, Jim, Steve, J2, JC)&lt;br /&gt;
*[[2012_Project_Week:BatchProcessing|Batch Processing with Slicer Modules]] (Steve, Andrey, JC, Hans, Satra)&lt;br /&gt;
*[[2012_Project_Week:4DImageSlicer4|Support for 4D Images in Slicer4]] (Andrey, Steve, Junichi, Alex)&lt;br /&gt;
* AIM, DICOM SR and Slicer annotations (Andrey, Steve, Nicole, Jayashree)&lt;br /&gt;
&lt;br /&gt;
=== Preparation ===&lt;br /&gt;
&lt;br /&gt;
#Please make sure that you are on the [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week na-mic-project-week mailing list] &lt;br /&gt;
#Starting Thursday, October 27th, part of the weekly Thursday 3pm NA-MIC Engineering TCON will be used to prepare for this meeting.  The schedule for these preparatory calls is as follows:&lt;br /&gt;
#*October 27: MGH DBP&lt;br /&gt;
#*November 3: Iowa Huntington's DBP, Engineering Infrastructure Topics&lt;br /&gt;
#*November 10:  Utah Atrial Fibrillation DBP&lt;br /&gt;
#*November 17: UCLA TBI DBP&lt;br /&gt;
#*November 24: No call. Thanksgiving.&lt;br /&gt;
#*December 1: &lt;br /&gt;
#*December 8: &lt;br /&gt;
#*December 15: Finalize Projects&lt;br /&gt;
#*January 5: Loose Ends&lt;br /&gt;
#By December 15: [[Project_Week/Template|Complete a templated wiki page for your project]]. Please do not edit the template page itself, but create a new page for your project and cut-and-paste the text from this template page.  If you have questions, please send an email to tkapur at bwh.harvard.edu.&lt;br /&gt;
#By December 15: Create a directory for each project on the [[Engineering:SandBox|NAMIC Sandbox]] (Zack)&lt;br /&gt;
##[https://www.kitware.com/Admin/SendPassword.cgi Ask Zack for a Sandbox account]&lt;br /&gt;
##Commit on each sandbox directory the code examples/snippets that represent our first guesses of appropriate methods. (Luis and Steve will help with this, as needed)&lt;br /&gt;
##Gather test images in any of the data-sharing resources we have (e.g. MIDAS, XNAT). They need not be numerous; at least three different cases will give an idea of the modality-specific characteristics of these images. Put the IDs of these data sets on the wiki page. (The participants must do this.)&lt;br /&gt;
##Set up nightly tests on a separate Dashboard, where we will run the methods that we are experimenting with. The tests should post result images and computation time. (Zack)&lt;br /&gt;
#Please note that by the time we get to the project event, we should be trying to close off a project milestone rather than starting to work on one...&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week:PairwiseLF&amp;diff=72612</id>
		<title>2012 Winter Project Week:PairwiseLF</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week:PairwiseLF&amp;diff=72612"/>
		<updated>2011-12-20T22:26:00Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: Created page with '__NOTOC__ &amp;lt;gallery&amp;gt; Image:PW-SLC2012.png|Projects List &amp;lt;/gallery&amp;gt;  ==Investigators == * Ramesh Sridharan * Christian Wachinger * Polina Goll…'&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Image:PW-SLC2012.png|[[2012_Winter_Project_Week#Projects|Projects List]]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Investigators ==&lt;br /&gt;
* Ramesh Sridharan&lt;br /&gt;
* Christian Wachinger&lt;br /&gt;
* Polina Golland&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin: 20px;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;Objective&amp;lt;/h3&amp;gt;&lt;br /&gt;
We will evaluate the potential of label fusion with pairwise interactions. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;Approach, Plan&amp;lt;/h3&amp;gt;&lt;br /&gt;
*&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 40%; float: left;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Progress&amp;lt;/h3&amp;gt;&lt;br /&gt;
*&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 97%; float: left;&amp;quot;&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week&amp;diff=72611</id>
		<title>2012 Winter Project Week</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week&amp;diff=72611"/>
		<updated>2011-12-20T22:20:42Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[Project Events]], [[Events]]&lt;br /&gt;
 Back to [[Project Events]], [[AHM_2012]], [[Events]]&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
[[image:PW-SLC2012.png|300px]]&lt;br /&gt;
&lt;br /&gt;
== Dates.Venue.Registration ==&lt;br /&gt;
&lt;br /&gt;
Please [[AHM_2012#Dates_Venue_Registration|click here for Dates, Venue, and Registration]] for this event.&lt;br /&gt;
&lt;br /&gt;
== Agenda and Project List==&lt;br /&gt;
&lt;br /&gt;
Please:&lt;br /&gt;
*  [[AHM_2012#Agenda|'''Click here for the agenda for AHM 2012 and Project Week''']].&lt;br /&gt;
*  [[#Projects|'''Click here to jump to Project list''']]&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
From January 9-13, 2012, the 14th project week for hands-on research and development activity in Neuroscience and Image-Guided Therapy applications will be hosted in Salt Lake City, Utah. Participants engage in open-source programming using the [[NA-MIC-Kit|NA-MIC Kit]], algorithm development, medical imaging sequence development, tracking experiments, and clinical applications. The main goal of this event is to further the translational research deliverables of the sponsoring centers ([http://www.na-mic.org NA-MIC], [http://www.ncigt.org NCIGT], [http://nac.spl.harvard.edu NAC], [http://catalyst.harvard.edu/home.html Harvard Catalyst], and [http://www.cimit.org CIMIT]) and their collaborators by identifying and solving programming problems during planned and ad hoc break-out sessions.  &lt;br /&gt;
&lt;br /&gt;
Active preparation for this conference begins with a kick-off teleconference. Invitations to this call are sent to members of the sponsoring communities, their collaborators, past attendees of the event, and any parties expressing an interest in working with these centers. The main goal of the initial teleconference is to gather information about which groups and projects will be active at the upcoming event, to ensure that sufficient resources are available to meet everyone's needs. Focused discussions about individual projects are conducted during several subsequent teleconferences, permitting the hosts to finalize the project teams, consolidate any common components, and identify topics that should be discussed in break-out sessions. In the final days leading up to the meeting, all project teams are asked to complete a template page on the wiki describing the objectives and research plan for each project.  &lt;br /&gt;
&lt;br /&gt;
On the first day of the conference, each project team leader delivers a short presentation to introduce their topic and the individual members of their team. These brief presentations serve both to familiarize other teams doing similar work with common problems or practical solutions, and to identify potential subsets of individuals who might benefit from collaborative work.  For the remainder of the conference, about 50% of the time is devoted to break-out discussions on topics of common interest to particular subsets and 50% to hands-on project work.  For hands-on project work, attendees are organized into 30-50 small teams of 2-4 individuals with a mix of multi-disciplinary expertise.  To facilitate this work, a large room is set up with ample work tables, internet connectivity, and power access. This enables each software development team to gather at a table with their laptops, connect to the internet, download their software and data, and work on specific projects.  On the final day of the event, each project team summarizes its accomplishments in a closing presentation.&lt;br /&gt;
&lt;br /&gt;
A summary of all past NA-MIC Project Events is available [[Project_Events#Past|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Projects==&lt;br /&gt;
&lt;br /&gt;
===Traumatic Brain Injury ===&lt;br /&gt;
&lt;br /&gt;
* [[2012_Winter_Project_Week:TBIClinicalAnalysis|Segmentation of Serial MRI of TBI patients using Personalized Atlas Construction]] (Bo Wang, Marcel Prastawa, Andrei Irimia, Micah Chambers, Jack van Horn, Guido Gerig, Danielle Pace, Stephen Aylward)&lt;br /&gt;
* [[2012_Winter_Project_Week:TBIDTIAnalysis|Registration and analysis of white matter tract changes in TBI]] (Clement Vachet, Anuja Sharma, Marcel Prastawa, Andrei Irimia, Jack van Horn, Guido Gerig, Martin Styner, Danielle Pace, Stephen Aylward)&lt;br /&gt;
* [[2012_Winter_Project_Week:TBIValidation|Validation, visualization and analysis of segmentation for TBI]] (Bo Wang, Marcel Prastawa, Andrei Irimia, Micah Chambers, Jack van Horn, Guido Gerig, Danielle Pace, Stephen Aylward)&lt;br /&gt;
*Geometric Metamorphosis for TBI (Danielle Pace, Marc Niethammer, Marcel Prastawa, Andrei Irimia, Jack van Horn, Stephen Aylward)&lt;br /&gt;
* [[2012_Winter_Project_Week:TBIRegistration|Multimodal Deformable Registration of Traumatic Brain Injury MR Volumes using Graphics Processing Units]] (Yifei Lou, Andrei Irimia, Patricio Vela, Allen Tannenbaum, Micah C. Chambers, Jack Van Horn and Paul M. Vespa, Danielle Pace, Stephen Aylward)&lt;br /&gt;
* [[2012_Winter_Project_Week:TBIRegistration|Integration of unscented Kalman filter (UKF) based multi-tensor tractography in Slicer]] (Christian Baumgartner, Yogesh Rathi, Carl-Fredrik Westin)&lt;br /&gt;
&lt;br /&gt;
===Predict Huntington's Disease===&lt;br /&gt;
* [[2012_Winter_Project_Week:SPIEWorkshop|SPIE DTI Workshop Preparation: Perform DTI Quality Control]] (Jean-Baptiste Berger, Sonia Pujol, Guido Gerig, Clement Vachet, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:DWIPhantom|DTI tractography phantom: a software for evaluating tractography algorithms]] (Gwendoline Roger,Yundi Shi, Clement Vachet, Martin Styner, Sylvain Gouttard)&lt;br /&gt;
* [[2012_Winter_Project_Week:FVLight|FiberViewerLight: a fiber bundle visualization and clustering tool]] (Jean-Baptiste Berger, Clement Vachet, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:DTIAFA|DTIAtlasFiberAnalyzer]] (Jean-Baptiste Berger, Yundi Shi, Clement Vachet, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:PairWiseDTIRegistration|Pairwise DTI registration: DTI-Reg]] (Clement Vachet, Hans Johnson, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:ShapeAnalysisSubcorticalStructuresHD|Morphometric analysis in subcortical structures in HD]] (Beatriz Paniagua, Clement Vachet, Hans Johnson, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:DTI pipeline|Applying our DTI pipeline to analyse HD data]] (Gopalkrishna Veni, Hans Johnson, Martin Styner, Ross Whitaker)&lt;br /&gt;
* [[2012_Winter_Project_Week: DTI Change Modeling | Longitudinal change modeling of fiber tracts in serial HD DTI data]] (Anuja Sharma, Hans Johnson, Guido Gerig)&lt;br /&gt;
* [[2012_Winter_Project_Week: Continuous 4D shapes | Continuous 4d shape models from time-discrete data: Subcortical structures in HD]] (James Fishbaugh, Hans Johnson, Guido Gerig)&lt;br /&gt;
&lt;br /&gt;
===Atrial fibrillation ===&lt;br /&gt;
* [[2012_Winter_Project_Week:EndoSeg|Endocardial Segmentation in DE-MRI for AFib]] (Yi Gao, Liang-Jia Zhu, Josh Cates, Greg Gardner, Alan Morris, Danny Perry, Rob MacLeod, Sylvain Bouix, Allen Tannenbaum)&lt;br /&gt;
* [[2012_Winter_Project_Week:LAWallRegistration|Longitudinal Alignment and Visualization of Left-Atrial Wall from DEMRI and MRA]] (Josh Cates, Yi Gao, Liang-Jia Zhu, Greg Gardner, Alan Morris, Danny Perry, Rob MacLeod, Sylvain Bouix, Allen Tannenbaum)&lt;br /&gt;
* [[2012_Winter_Project_Week:PVRegistration|Longitudinal Alignment and Visualization of Pulmonary Veins from DEMRI and MRA]] (Josh Cates, Yi Gao, Liang-Jia Zhu, Greg Gardner, Alan Morris, Danny Perry, Rob MacLeod, Sylvain Bouix, Allen Tannenbaum)&lt;br /&gt;
* [[2012_Winter_Project_Week:RealTime|OpenIGT for realtime MRI-guided RF ablation]] (Gene Payne, Rob MacLeod, and Junichi Tokuda)&lt;br /&gt;
&lt;br /&gt;
===Head and Neck Cancer ===&lt;br /&gt;
* [[2012_Winter_Project_Week:PatchBased|A patch-based approach to the segmentation of organs at risk]]  (Christian Wachinger, Polina Golland)&lt;br /&gt;
* RT dose comparison tool for Slicer (Nadya Shusharina, Greg Sharp)&lt;br /&gt;
* [[2012_Winter_Project_Week:InteractiveSegmentation|Interactive editing tools for segmentation]] (Greg Sharp, Steve Pieper)&lt;br /&gt;
* [[2012_Winter_Project_Week:UserInTheLoop_InteractiveSegmn|Interactive 3D Level-Set Segmentation]] (Peter Karasev, Karl Fritscher, Ivan Kolesov, Allen Tannenbaum)&lt;br /&gt;
&lt;br /&gt;
===IGT for Surgery and Radiation Treatments===&lt;br /&gt;
*[[2012_Winter_Project_Week:PelvicRegistration|Deformable prostate registration: 3D ultrasound to MRI]] (Mehdi Moradi, Jan Egger, Andrey Fedorov)&lt;br /&gt;
*[[2012_Winter_Project_Week:iGyne | iGyne: A Software Prototype to support Gynecologic Radiation Treatment in AMIGO]] (Jan Egger, Xiaojun Chen, Radhika Tibrewal, Mehdi Moradi)&lt;br /&gt;
*[[2012_Winter_Project_Week:OpenIGTLink_Interface_for_Slicer4| OpenIGTLink interface for Slicer4]] (Junichi Tokuda, Clif Burdette/Jack Blevins, Tamas Ungi, Andras Lasso)&lt;br /&gt;
*[[2012_Winter_Project_Week:Needle Detection in MR Images for Brachytherapy in AMIGO|Needle Detection in MR Images for Brachytherapy in AMIGO]] (Radhika Tibrewal, Jan Egger, Xiaojun Chen, Stephen Aylward)&lt;br /&gt;
*[[2012_Winter_Project_Week:LiveUltrasound|Live ultrasound in Slicer4 using Plus and OpenIGTLink]] (Tamas Ungi, Elvis Chen)&lt;br /&gt;
*[[2012_Winter_Project_Week:4DUltrasound|4D Ultrasound Storage and Volume Rendering on Slicer 3.6]] (Laurent, Noby)&lt;br /&gt;
*[[2012_Winter_Project_Week:BKPLUSSlicer|Integration of BK Ultrasound into PLUS and Slicer]] (Mehdi Moradi, Isaiah Norton, PLUS developers?)&lt;br /&gt;
*[[2012_Winter_Project_Week:hybridMRS | Generation of a hybrid MR-Spectroscopic (MRS) dataset under 3DSlicer]] (Jan Egger, Isaiah Norton, Christopher Nimsky, Tina Kapur)&lt;br /&gt;
*[[2012_Winter_Project_Week:RTTools|RT tools for Slicer4]] (Csaba Pinter, Kevin Wang, Andras Lasso, Greg Sharp)&lt;br /&gt;
*[[2012_Winter_Project_Week:RTSS|RT structure set data representation]] (Greg Sharp, Andras Lasso, Steve Pieper, etc.)&lt;br /&gt;
&lt;br /&gt;
===Musculoskeletal System===&lt;br /&gt;
* [[2012_Winter_Project_Week:Radnostics|Spine Segmentation &amp;amp; Osteoporosis Screening In CT Imaging Studies]] (Anthony Blumfield)&lt;br /&gt;
&lt;br /&gt;
===Registration===&lt;br /&gt;
* [[2012_Winter_Project_Week:CMFreg|Framework for Cranio-Maxillo Facial registration in Slicer3]] (Beatriz Paniagua, Lucia Cevidanes, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:SlidingOrgans|Registration in the presence of sliding between organs (Danielle Pace, Marc Niethammer, Stephen Aylward)]]&lt;br /&gt;
* [[2012_Winter_Project_Week:GeometricMetamorphosis|Estimating the infiltration / recession of pathologies independent of background deformations (Danielle Pace, Stephen Aylward, Marc Niethammer)]]&lt;br /&gt;
&lt;br /&gt;
===Shape Analysis===&lt;br /&gt;
* [[2012_Winter_Project_Week:PNSnormals|Principal Nested Spheres Normal Consistency in ShapeWorks]] (Beatriz Paniagua, Josh Cates, Manasi Datar, Ross Whitaker, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:GeomIndicesSlicer4|Porting of White Matter Geometric Indices Module to Slicer4]] (Peter Savadjiev)&lt;br /&gt;
* [[2012_Winter_Project_Week:ParticleWrapper|Slicer end-to-end particle correspondence wrapper module]] (Ipek Oguz, Beatriz Paniagua, Josh Cates, Manasi Datar, Ross Whitaker, Martin Styner)&lt;br /&gt;
&lt;br /&gt;
===NA-MIC Kit Internals===&lt;br /&gt;
*Slicer4 release (Jean-Christophe Fillion-Robin (JC), and Julien Finet (J2))&lt;br /&gt;
*Slicer4 extensions (JC)&lt;br /&gt;
*Slicer4 documentation (JC)&lt;br /&gt;
*Slicer4 GUI Testing (Benjamin Long, J2, JC)&lt;br /&gt;
*Slicer4 data on MIDAS (Josh Cates, Patrick Reynolds)&lt;br /&gt;
*[[2012_Project_Week:SceneViews|Slicer4 Scene Views Module]] (Nicole Aucoin, Ron Kikinis, Julien Finet)&lt;br /&gt;
*[[2012_Project_Week:AnnotationsFileFormatRefactor|Annotations Module File Format Refactor]] (Nicole Aucoin)&lt;br /&gt;
*[[2012_Project_Week:QT3DTextRendering|QT 3D Text rendering proof of concept]] (Julien Finet, Steve Pieper, Nicole Aucoin)&lt;br /&gt;
*[[2012_Project_Week:DICOM|DICOM Networking, Database, and Slicer Integration]] (Steve, Andrey, Andras)&lt;br /&gt;
*[[2012_Project_Week:EditorExtensions|Editor Extension Examples and Debugging]] (Steve, Andrey, Jc, Hans, Satra)&lt;br /&gt;
*[[2012_Project_Week:ViewerControls|Redesign of the slice viewer control panels]] (Julien Finet, Ron Kikinis, Hans Johnson, Greg Sharp)&lt;br /&gt;
* [[2012_Project_Week:AutomatedTesting |Automated Testing (Sonia Pujol, Steve Pieper, Jc, Benjamin)]]&lt;br /&gt;
* Remove legacy code from slicer4 (itk, modules, build scripts) (Hans, Jim, Steve, J2, JC)&lt;br /&gt;
*[[2012_Project_Week:BatchProcessing|Batch Processing with Slicer Modules]] (Steve, Andrey, JC, Hans, Satra)&lt;br /&gt;
*[[2012_Project_Week:4DImageSlicer4|Support for 4D Images in Slicer4]] (Andrey, Steve, Junichi, Alex)&lt;br /&gt;
* AIM, DICOM SR and Slicer annotations (Andrey, Steve, Nicole, Jayashree)&lt;br /&gt;
&lt;br /&gt;
=== Preparation ===&lt;br /&gt;
&lt;br /&gt;
#Please make sure that you are on the [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week na-mic-project-week mailing list] &lt;br /&gt;
#Starting Thursday, October 27th, part of the weekly Thursday 3pm NA-MIC Engineering TCON will be used to prepare for this meeting.  The schedule for these preparatory calls is as follows:&lt;br /&gt;
#*October 27: MGH DBP&lt;br /&gt;
#*November 3: Iowa Huntington's DBP, Engineering Infrastructure Topics&lt;br /&gt;
#*November 10:  Utah Atrial Fibrillation DBP&lt;br /&gt;
#*November 17: UCLA TBI DBP&lt;br /&gt;
#*November 24: No call. Thanksgiving.&lt;br /&gt;
#*December 1: &lt;br /&gt;
#*December 8: &lt;br /&gt;
#*December 15: Finalize Projects&lt;br /&gt;
#*January 5: Loose Ends&lt;br /&gt;
#By December 15: [[Project_Week/Template|Complete a templated wiki page for your project]]. Please do not edit the template page itself, but create a new page for your project and cut-and-paste the text from this template page.  If you have questions, please send an email to tkapur at bwh.harvard.edu.&lt;br /&gt;
#By December 15: Create a directory for each project on the [[Engineering:SandBox|NAMIC Sandbox]] (Zack)&lt;br /&gt;
##[https://www.kitware.com/Admin/SendPassword.cgi Ask Zack for a Sandbox account]&lt;br /&gt;
##Commit on each sandbox directory the code examples/snippets that represent our first guesses of appropriate methods. (Luis and Steve will help with this, as needed)&lt;br /&gt;
##Gather test images in any of the data-sharing resources we have (e.g. MIDAS, XNAT). They need not be numerous; at least three different cases will give an idea of the modality-specific characteristics of these images. Put the IDs of these data sets on the wiki page. (The participants must do this.)&lt;br /&gt;
##Set up nightly tests on a separate Dashboard, where we will run the methods that we are experimenting with. The tests should post result images and computation time. (Zack)&lt;br /&gt;
#Please note that by the time we get to the project event, we should be trying to close off a project milestone rather than starting to work on one...&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week:PatchBased&amp;diff=72610</id>
		<title>2012 Winter Project Week:PatchBased</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week:PatchBased&amp;diff=72610"/>
		<updated>2011-12-20T22:19:48Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: Created page with '__NOTOC__ &amp;lt;gallery&amp;gt; Image:PW-SLC2012.png|Projects List &amp;lt;/gallery&amp;gt;  ==Investigators == * Christian Wachinger * Polina Golland  &amp;lt;div style=&amp;quot;ma…'&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Image:PW-SLC2012.png|[[2012_Winter_Project_Week#Projects|Projects List]]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Investigators ==&lt;br /&gt;
* Christian Wachinger&lt;br /&gt;
* Polina Golland&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin: 20px;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;Objective&amp;lt;/h3&amp;gt;&lt;br /&gt;
We will investigate the applicability of a patch-based approach in the context of label fusion. &lt;br /&gt;
This could, for instance, be applied to the automatic segmentation of organs at risk in the head and neck data. &lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;Approach, Plan&amp;lt;/h3&amp;gt;&lt;br /&gt;
*&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 40%; float: left;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Progress&amp;lt;/h3&amp;gt;&lt;br /&gt;
*&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 97%; float: left;&amp;quot;&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week&amp;diff=71914</id>
		<title>2012 Winter Project Week</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week&amp;diff=71914"/>
		<updated>2011-11-11T23:54:59Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Projects */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[Project Events]], [[Events]]&lt;br /&gt;
 Back to [[Project Events]], [[AHM_2012]], [[Events]]&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
[[image:PW-SLC2012.png|300px]]&lt;br /&gt;
&lt;br /&gt;
== Dates.Venue.Registration ==&lt;br /&gt;
&lt;br /&gt;
Please [[AHM_2012#Dates_Venue_Registration|click here for Dates, Venue, and Registration]] for this event.&lt;br /&gt;
&lt;br /&gt;
== Agenda==&lt;br /&gt;
&lt;br /&gt;
Please [[AHM_2012#Agenda|click here for the agenda for AHM 2012 and Project Week]].&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
From January 9-13, 2012, the 14th project week for hands-on research and development activity in Neuroscience and Image-Guided Therapy applications will be hosted in Salt Lake City, Utah. Participants engage in open-source programming using the [[NA-MIC-Kit|NA-MIC Kit]], algorithm development, medical imaging sequence development, tracking experiments, and clinical applications. The main goal of this event is to further the translational research deliverables of the sponsoring centers ([http://www.na-mic.org NA-MIC], [http://www.ncigt.org NCIGT], [http://nac.spl.harvard.edu NAC], [http://catalyst.harvard.edu/home.html Harvard Catalyst], and [http://www.cimit.org CIMIT]) and their collaborators by identifying and solving programming problems during planned and ad hoc break-out sessions.  &lt;br /&gt;
&lt;br /&gt;
Active preparation for this conference begins with a kick-off teleconference. Invitations to this call are sent to members of the sponsoring communities, their collaborators, past attendees of the event, and any parties expressing an interest in working with these centers. The main goal of the initial teleconference is to gather information about which groups and projects will be active at the upcoming event, to ensure that sufficient resources are available to meet everyone's needs. Focused discussions about individual projects are conducted during several subsequent teleconferences, permitting the hosts to finalize the project teams, consolidate any common components, and identify topics that should be discussed in break-out sessions. In the final days leading up to the meeting, all project teams are asked to complete a template page on the wiki describing the objectives and research plan for each project.  &lt;br /&gt;
&lt;br /&gt;
On the first day of the conference, each project team leader delivers a short presentation to introduce their topic and the individual members of their team. These brief presentations serve both to familiarize other teams doing similar work with common problems or practical solutions, and to identify potential subsets of individuals who might benefit from collaborative work.  For the remainder of the conference, about 50% of the time is devoted to break-out discussions on topics of common interest to particular subsets and 50% to hands-on project work.  For hands-on project work, attendees are organized into 30-50 small teams of 2-4 individuals with a mix of multi-disciplinary expertise.  To facilitate this work, a large room is set up with ample work tables, internet connectivity, and power access. This enables each software development team to gather at a table with their laptops, connect to the internet, download their software and data, and work on specific projects.  On the final day of the event, each project team summarizes its accomplishments in a closing presentation.&lt;br /&gt;
&lt;br /&gt;
A summary of all past NA-MIC Project Events is available [[Project_Events#Past|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Projects==&lt;br /&gt;
&lt;br /&gt;
===IGT===&lt;br /&gt;
*MR guided laser ablation for neurosurgery (Dan Orringer, MD BWH, Jason Stafford, MD Anderson, Isaiah Norton BWH)&lt;br /&gt;
*Pelvic Registration (Sandy Wells, Firdaus Janoos, Mehdi Moradi UBC/BWH, Jan Egger, Andrey Fedorov)&lt;br /&gt;
*OpenIGTLink interface for Slicer4 (Junichi, Clif Burdette/Jack Blevins, Tamas, Andras)&lt;br /&gt;
*Needle tracking (Atsushi Yamada, Radhika Tibrewal, a needle navigation person)&lt;br /&gt;
*?MR susceptibility (Clare Poynton, MR physics person?)&lt;br /&gt;
&lt;br /&gt;
===Predict Huntington's Disease DBP===&lt;br /&gt;
* [[2012_Winter_Project_Week:FVLight|FiberViewerLight: a fiber bundle visualization and clustering tool]] (Jean-Baptiste Berger, Clement Vachet, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:DTIAFA|DTIAtlasFiberAnalyzer]] (Jean-Baptiste Berger, Yundi Shi, Clement Vachet, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:PairWiseDTIRegistration|Pairwise DTI registration: DTI-Reg]] (Clement Vachet, Hans Johnson, Martin Styner)&lt;br /&gt;
&lt;br /&gt;
===Atrial fibrillation DBP===&lt;br /&gt;
* [[2012_Winter_Project_Week:EndoSeg|Endocardial Segmentation in DE-MRI for AFib]] (Yi Gao, Liang-Jia Zhu, Josh Cates, Greg Gardner, Alan Morris, Danny Perry, Rob MacLeod, Sylvain Bouix, Allen Tannenbaum)&lt;br /&gt;
* [[2012_Winter_Project_Week:LAWallRegistration|Longitudinal Alignment and Visualization of Left-Atrial Wall from DEMRI and MRA]] (Josh Cates, Yi Gao, Liang-Jia Zhu, Greg Gardner, Alan Morris, Danny Perry, Rob MacLeod, Sylvain Bouix, Allen Tannenbaum)&lt;br /&gt;
* [[2012_Winter_Project_Week:PVRegistration|Longitudinal Alignment and Visualization of Pulmonary Veins from DEMRI and MRA]] (Josh Cates, Yi Gao, Liang-Jia Zhu, Greg Gardner, Alan Morris, Danny Perry, Rob MacLeod, Sylvain Bouix, Allen Tannenbaum)&lt;br /&gt;
* [[2012_Winter_Project_Week:RealTime|OpenIGT for real-time MRI-guided RF ablation]] (Gene Payne, Rob MacLeod, and Junichi Tokuda)&lt;br /&gt;
&lt;br /&gt;
===Head and Neck Cancer DBP===&lt;br /&gt;
* A patch-based approach to the segmentation of organs of risk (Christian Wachinger, Polina Golland)&lt;br /&gt;
&lt;br /&gt;
===NA-MIC Kit Internals===&lt;br /&gt;
*Slicer4 Scene Views Module (Nicole Aucoin)&lt;br /&gt;
*Slicer4 Annotations Module&lt;br /&gt;
** File format refactor (Nicole Aucoin)&lt;br /&gt;
** QT 3D Text rendering proof of concept (Julien Finet, Steve Pieper, Nicole Aucoin)&lt;br /&gt;
* Editor Extension Examples and Debugging (Steve Pieper)&lt;br /&gt;
&lt;br /&gt;
=== Preparation ===&lt;br /&gt;
&lt;br /&gt;
#Please make sure that you are on the [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week na-mic-project-week mailing list] &lt;br /&gt;
#Starting Thursday, October 27th, part of the weekly Thursday 3pm NA-MIC Engineering TCON will be used to prepare for this meeting.  The schedule for these preparatory calls is as follows:&lt;br /&gt;
#*October 27: MGH DBP&lt;br /&gt;
#*November 3: Iowa DBP Huntingtons, Engineering Infrastructure Topics&lt;br /&gt;
#*November 10:  Utah Atrial Fibrillation DBP&lt;br /&gt;
#*November 17: UCLA TBI DBP&lt;br /&gt;
#*November 24: No call (Thanksgiving)&lt;br /&gt;
#*December 1: &lt;br /&gt;
#*December 8: &lt;br /&gt;
#*December 15: Finalize Projects&lt;br /&gt;
#*January 5: Loose Ends&lt;br /&gt;
#By December 15: [[Project_Week/Template|Complete a templated wiki page for your project]]. Please do not edit the template page itself, but create a new page for your project and cut-and-paste the text from this template page.  If you have questions, please send an email to tkapur at bwh.harvard.edu.&lt;br /&gt;
#By December 15: Create a directory for each project on the [[Engineering:SandBox|NAMIC Sandbox]] (Zack)&lt;br /&gt;
##[https://www.kitware.com/Admin/SendPassword.cgi Ask Zack for a Sandbox account]&lt;br /&gt;
##Commit to each sandbox directory the code examples/snippets that represent our first guesses at appropriate methods. (Luis and Steve will help with this, as needed)&lt;br /&gt;
##Gather test images in any of the data sharing resources we have (e.g. MIDAS, XNAT). These need not be many: at least three different cases, so we can get an idea of the modality-specific characteristics of these images. Put the IDs of these data sets on the wiki page. (The participants must do this.)&lt;br /&gt;
##Set up nightly tests on a separate Dashboard, where we will run the methods that we are experimenting with. The tests should post result images and computation times. (Zack)&lt;br /&gt;
#Please note that by the time we get to the project event, we should be trying to close off a project milestone rather than starting to work on one...&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week&amp;diff=71912</id>
		<title>2012 Winter Project Week</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2012_Winter_Project_Week&amp;diff=71912"/>
		<updated>2011-11-11T23:54:43Z</updated>

		<summary type="html">&lt;p&gt;Wachinge: /* Projects */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[Project Events]], [[AHM_2012]], [[Events]]&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
[[image:PW-SLC2012.png|300px]]&lt;br /&gt;
&lt;br /&gt;
== Dates, Venue, Registration ==&lt;br /&gt;
&lt;br /&gt;
Please [[AHM_2012#Dates_Venue_Registration|click here for Dates, Venue, and Registration]] for this event.&lt;br /&gt;
&lt;br /&gt;
== Agenda==&lt;br /&gt;
&lt;br /&gt;
Please [[AHM_2012#Agenda|click here for the agenda for AHM 2012 and Project Week]].&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
From January 9-13, 2012, the 14th project week for hands-on research and development activity in Neuroscience and Image-Guided Therapy applications will be hosted in Salt Lake City, Utah. Participants engage in open source programming using the [[NA-MIC-Kit|NA-MIC Kit]], algorithms, medical imaging sequence development, tracking experiments, and clinical applications. The main goal of this event is to further the translational research deliverables of the sponsoring centers ([http://www.na-mic.org NA-MIC], [http://www.ncigt.org NCIGT], [http://nac.spl.harvard.edu NAC], [http://catalyst.harvard.edu/home.html Harvard Catalyst], and [http://www.cimit.org CIMIT]) and their collaborators by identifying and solving programming problems during planned and ad hoc break-out sessions.  &lt;br /&gt;
&lt;br /&gt;
Active preparation for this conference begins with a kick-off teleconference. Invitations to this call are sent to members of the sponsoring communities, their collaborators, past attendees of the event, and any parties expressing an interest in working with these centers. The main goal of the initial teleconference is to gather information about which groups/projects will be active at the upcoming event, to ensure that sufficient resources are available to meet everyone's needs. Focused discussions about individual projects are conducted during several subsequent teleconferences, which permit the hosts to finalize the project teams, consolidate any common components, and identify topics that should be discussed in break-out sessions. In the final days leading up to the meeting, all project teams are asked to complete a template page on the wiki describing the objectives and research plan for each project.  &lt;br /&gt;
&lt;br /&gt;
On the first day of the conference, each project team leader delivers a short presentation to introduce their topic and the individual members of their team. These brief presentations serve both to familiarize other teams doing similar work with common problems and practical solutions, and to identify subsets of individuals who might benefit from collaborative work.  For the remainder of the conference, about 50% of the time is devoted to break-out discussions on topics of common interest to particular subsets and 50% to hands-on project work.  For hands-on project work, attendees are organized into 30-50 small teams of 2-4 individuals with a mix of multi-disciplinary expertise.  To facilitate this work, a large room is set up with ample work tables, internet connectivity, and power access. This enables each software development team to gather at a table with their laptops, connect to the internet, download their software and data, and work on specific projects.  On the final day of the event, each project team summarizes its accomplishments in a closing presentation.&lt;br /&gt;
&lt;br /&gt;
A summary of all past NA-MIC Project Events is available [[Project_Events#Past|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Projects==&lt;br /&gt;
&lt;br /&gt;
===IGT===&lt;br /&gt;
*MR guided laser ablation for neurosurgery (Dan Orringer, MD BWH, Jason Stafford, MD Anderson, Isaiah Norton BWH)&lt;br /&gt;
*Pelvic Registration (Sandy Wells, Firdaus Janoos, Mehdi Moradi UBC/BWH, Jan Egger, Andrey Fedorov)&lt;br /&gt;
*OpenIGTLink interface for Slicer4 (Junichi, Clif Burdette/Jack Blevins, Tamas, Andras)&lt;br /&gt;
*Needle tracking (Atsushi Yamada, Radhika Tibrewal, a needle navigation person)&lt;br /&gt;
*?MR susceptibility (Clare Poynton, MR physics person?)&lt;br /&gt;
&lt;br /&gt;
===Predict Huntington's Disease DBP===&lt;br /&gt;
* [[2012_Winter_Project_Week:FVLight|FiberViewerLight: a fiber bundle visualization and clustering tool]] (Jean-Baptiste Berger, Clement Vachet, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:DTIAFA|DTIAtlasFiberAnalyzer]] (Jean-Baptiste Berger, Yundi Shi, Clement Vachet, Martin Styner)&lt;br /&gt;
* [[2012_Winter_Project_Week:PairWiseDTIRegistration|Pairwise DTI registration: DTI-Reg]] (Clement Vachet, Hans Johnson, Martin Styner)&lt;br /&gt;
&lt;br /&gt;
===Atrial fibrillation DBP===&lt;br /&gt;
* [[2012_Winter_Project_Week:EndoSeg|Endocardial Segmentation in DE-MRI for AFib]] (Yi Gao, Liang-Jia Zhu, Josh Cates, Greg Gardner, Alan Morris, Danny Perry, Rob MacLeod, Sylvain Bouix, Allen Tannenbaum)&lt;br /&gt;
* [[2012_Winter_Project_Week:LAWallRegistration|Longitudinal Alignment and Visualization of Left-Atrial Wall from DEMRI and MRA]] (Josh Cates, Yi Gao, Liang-Jia Zhu, Greg Gardner, Alan Morris, Danny Perry, Rob MacLeod, Sylvain Bouix, Allen Tannenbaum)&lt;br /&gt;
* [[2012_Winter_Project_Week:PVRegistration|Longitudinal Alignment and Visualization of Pulmonary Veins from DEMRI and MRA]] (Josh Cates, Yi Gao, Liang-Jia Zhu, Greg Gardner, Alan Morris, Danny Perry, Rob MacLeod, Sylvain Bouix, Allen Tannenbaum)&lt;br /&gt;
* [[2012_Winter_Project_Week:RealTime|OpenIGT for real-time MRI-guided RF ablation]] (Gene Payne, Rob MacLeod, and Junichi Tokuda)&lt;br /&gt;
&lt;br /&gt;
===Head and Neck Cancer DBP===&lt;br /&gt;
* A patch-based approach to the segmentation of organs of risk (Christian Wachinger, Polina Golland)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===NA-MIC Kit Internals===&lt;br /&gt;
*Slicer4 Scene Views Module (Nicole Aucoin)&lt;br /&gt;
*Slicer4 Annotations Module&lt;br /&gt;
** File format refactor (Nicole Aucoin)&lt;br /&gt;
** QT 3D Text rendering proof of concept (Julien Finet, Steve Pieper, Nicole Aucoin)&lt;br /&gt;
* Editor Extension Examples and Debugging (Steve Pieper)&lt;br /&gt;
&lt;br /&gt;
=== Preparation ===&lt;br /&gt;
&lt;br /&gt;
#Please make sure that you are on the [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week na-mic-project-week mailing list] &lt;br /&gt;
#Starting Thursday, October 27th, part of the weekly Thursday 3pm NA-MIC Engineering TCON will be used to prepare for this meeting.  The schedule for these preparatory calls is as follows:&lt;br /&gt;
#*October 27: MGH DBP&lt;br /&gt;
#*November 3: Iowa DBP Huntingtons, Engineering Infrastructure Topics&lt;br /&gt;
#*November 10:  Utah Atrial Fibrillation DBP&lt;br /&gt;
#*November 17: UCLA TBI DBP&lt;br /&gt;
#*November 24: No call (Thanksgiving)&lt;br /&gt;
#*December 1: &lt;br /&gt;
#*December 8: &lt;br /&gt;
#*December 15: Finalize Projects&lt;br /&gt;
#*January 5: Loose Ends&lt;br /&gt;
#By December 15: [[Project_Week/Template|Complete a templated wiki page for your project]]. Please do not edit the template page itself, but create a new page for your project and cut-and-paste the text from this template page.  If you have questions, please send an email to tkapur at bwh.harvard.edu.&lt;br /&gt;
#By December 15: Create a directory for each project on the [[Engineering:SandBox|NAMIC Sandbox]] (Zack)&lt;br /&gt;
##[https://www.kitware.com/Admin/SendPassword.cgi Ask Zack for a Sandbox account]&lt;br /&gt;
##Commit to each sandbox directory the code examples/snippets that represent our first guesses at appropriate methods. (Luis and Steve will help with this, as needed)&lt;br /&gt;
##Gather test images in any of the data sharing resources we have (e.g. MIDAS, XNAT). These need not be many: at least three different cases, so we can get an idea of the modality-specific characteristics of these images. Put the IDs of these data sets on the wiki page. (The participants must do this.)&lt;br /&gt;
##Set up nightly tests on a separate Dashboard, where we will run the methods that we are experimenting with. The tests should post result images and computation times. (Zack)&lt;br /&gt;
#Please note that by the time we get to the project event, we should be trying to close off a project milestone rather than starting to work on one...&lt;/div&gt;</summary>
		<author><name>Wachinge</name></author>
		
	</entry>
</feed>