<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.na-mic.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ygao</id>
	<title>NAMIC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.na-mic.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ygao"/>
	<link rel="alternate" type="text/html" href="https://www.na-mic.org/wiki/Special:Contributions/Ygao"/>
	<updated>2026-04-22T08:55:26Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.33.0</generator>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DBP2:Queens:Roadmap&amp;diff=93714</id>
		<title>DBP2:Queens:Roadmap</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DBP2:Queens:Roadmap&amp;diff=93714"/>
		<updated>2016-11-13T22:50:51Z</updated>

		<summary type="html">&lt;p&gt;Ygao: /* Publications */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Internal_Collaborations|NA-MIC Internal Collaborations]], [[DBP2:Queens|Queens DBP 2]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Queens Roadmap Project (Transrectal MRI-guided robotic prostate biopsy) =&lt;br /&gt;
&lt;br /&gt;
[[Image:TRProstateBiopsyRobot.jpg|thumb|320px|The transrectal prostate robot visualization inside SLICER.]]&lt;br /&gt;
== Objective ==&lt;br /&gt;
&lt;br /&gt;
We would like to create an end-to-end application within the NA-MIC Kit to enable an existing transrectal prostate biopsy device to perform multi-parametric MRI guided prostate biopsy in closed-bore high-field MRI magnets.&lt;br /&gt;
&lt;br /&gt;
This page describes the technology roadmap for robotic prostate biopsy in the NA-MIC Kit. The basic components necessary for this application are:&lt;br /&gt;
&lt;br /&gt;
*'''Tissue segmentation''': Should be multi-modality, correct for intensity inhomogeneity, and work for both supine and prone patients, all imaged with an endorectal coil (ERC).&lt;br /&gt;
*'''Registration''': co-registration of MRI datasets taken at different times, in different body positions, and under different imaging parameters&lt;br /&gt;
*'''Prostate Measurement''': Measure volume of all segmented structures&lt;br /&gt;
*'''Biopsy Device Parameters''': Geometry, kinematics, and calibration/registration of the robot system must be available in some form.  This capability is not currently part of the NA-MIC kit.  The application will be modular, to enable use of different devices.&lt;br /&gt;
*'''Tutorial''': Documentation will be written for a tutorial and sample data sets will be provided to perform simulated biopsies.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Roadmap==&lt;br /&gt;
&lt;br /&gt;
The primary goal for the roadmap is to develop an interventional module for Slicer3 for MRI-guided prostate biopsies.  This module and the accompanying tutorial will serve as a template for interventional applications with Slicer3. The module will provide the necessary functionality for calibrating the robot to the MR scanner, planning biopsies, computing the necessary robot trajectory to perform each biopsy, and verification via post-biopsy images. We will obtain a biopsy plan from multi-parametric endorectal image volumes, executable with an existing prostate biopsy device. The system will be implemented under Slicer3 as an interactive application.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{| &lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Menard.jpg|thumb|350px|Prostate intervention (biopsy) in closed MR scanner.]]&lt;br /&gt;
|[[Image:Robot.jpg|thumb|400px|Transrectal prostate intervention robot assembled.]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Schedule==&lt;br /&gt;
&lt;br /&gt;
System Implementation: Apart from the one research element (segmentation), the rest of the project is a massive software engineering effort and will follow this schedule of major milestones:&lt;br /&gt;
*2007-10: Application Workflow Development (Define the workflow for the application; create GUI templates for Calibration step - David, Csaba, Gabor)&lt;br /&gt;
*2007-12: Device Modeling &amp;amp; Data Display (Conversion of engineering data into VTK-viewable objects, provide display logic for targets and prostate outlines - David)&lt;br /&gt;
*2008-07: Measurement Tools (Semi-automatic identification of fiducials via thresholding &amp;amp; centroids, logic for robot registration with fiducials, calculations for robot trajectory based on target position - Csaba)&lt;br /&gt;
*2008-11: Robot positioning &amp;amp; Application Workflow Development (GUI targeting readouts for optical encoders; Wizard GUI for Targeting and Verification step - David, Siddharth)&lt;br /&gt;
*2009-03: Measurement Tools &amp;amp; Biopsy Planning/Targeting (Integration of prostate segmentation developed at Georgia Tech by Yi, Tannenbaum during the [[prostateSegmentationAHM2009|NAMIC project week in Utah]]; integration of semi-automatic identification of fiducials in the SLICER module; finding targeting parameters of robot for particular target; Targets' list implemented as fiducials lists for visualization of targets in 2D and 3D viewers - Siddharth, Yi)&lt;br /&gt;
*2009-05: Application Workflow Development (On selecting target from the list, bring target to view in all viewers, Needle trajectory visualization; Verification step GUI and functionality, Save experiment functionality; planning window display (on secondary monitor), 3D display of robot manipulator and segmented calibration markers (using VR) - Siddharth, Andras)&lt;br /&gt;
*2009-10: Prostate segmentation module development &amp;amp; Coverage area display &amp;amp; Multiple devices support (create a separate, standalone module for prostate segmentation (ProstateSeg) from the existing algorithm code, design proposal for supporting multiple robotic devices - transrectal and transperineal; Implement robot coverage area display - Andras, Yi)&lt;br /&gt;
*2010-03: Multiple devices support &amp;amp; fixes (implementation of multiple device support for transrectal (APT-MRI) and transperineal (BRP) device and transrectal template; Test and fix robot calibration method; multiple fixes and enhancements in the 3D Slicer core - Andras, Junichi)&lt;br /&gt;
*2010-06: Fixes of ProstateNav module; Packaging of ProstateSeg module (fix usability and other potential problems in ProstateNav in the Slicer3-3.6 branch; package the prostate segmentation module as an extension module for Slicer3-3.6)&lt;br /&gt;
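The "thresholding &amp;amp; centroids" fiducial step named in the 2008-07 milestone can be sketched roughly as follows; the function name, threshold value, and toy volume are illustrative assumptions, not the actual ProstateNav code:

```python
import numpy as np
from scipy import ndimage

def find_fiducial_centroids(volume, threshold):
    """Label connected bright blobs above threshold; return their centroids in voxel coordinates."""
    mask = volume > threshold                      # binary threshold
    labels, n = ndimage.label(mask)                # connected-component labeling
    # intensity-weighted centroid of each labeled blob
    return ndimage.center_of_mass(volume, labels, range(1, n + 1))

# toy volume with two bright markers standing in for MR-visible fiducials
vol = np.zeros((20, 20, 20))
vol[5, 5, 5] = 100.0
vol[15, 10, 10] = 100.0
centroids = find_fiducial_centroids(vol, 50.0)
```

The recovered centroids would then feed the robot-to-scanner registration, which this sketch does not cover.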
&lt;br /&gt;
== '''Software''' ==&lt;br /&gt;
* Prostate biopsy module with multiple device support:&lt;br /&gt;
** Latest stable version is available in [http://www.slicer.org/pages/Special:SlicerDownloads Slicer3-3.6]&lt;br /&gt;
** Latest development version source code is available in [http://svn.na-mic.org/NAMICSandBox/trunk/IGTLoadableModules/ProstateNav/ SVN]&lt;br /&gt;
* ProstateSeg module source code is available in [http://svn.na-mic.org/NAMICSandBox/trunk/Queens/ProstateSeg/ SVN]&lt;br /&gt;
&lt;br /&gt;
==Data==&lt;br /&gt;
* The MRI-guided needle insertion prostate MRI data sets are available at: http://hdl.handle.net/1926/1558.&lt;br /&gt;
&lt;br /&gt;
==Tutorial==&lt;br /&gt;
* [[Media:DBP2JohnsHopkinsTransRectalProstateBiopsy.pdf|Prostate biopsy module tutorial]]&lt;br /&gt;
* [[Media:TransRectalProstateBiopsyTutorialDataset.zip‎|Tutorial data set]]&lt;br /&gt;
&lt;br /&gt;
==Screenshots of the Slicer-based software==&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{| &lt;br /&gt;
|valign=&amp;quot;center&amp;quot;|[[Image:TRPBTarget3DView3.JPG|thumb|350px|Targets, targeting parameters and robot coverage area display]]&lt;br /&gt;
|[[Image:Calib3DViewOnly.JPG|3D view and targeting parameters display on a secondary (procedure room) monitor|thumb|300px]]&lt;br /&gt;
|}&lt;br /&gt;
{| &lt;br /&gt;
|valign=&amp;quot;center&amp;quot;|[[Image:TRPBTarget3DView5.JPG|thumb|350px|Target verification, slicer reformatted to be aligned with the planned needle trajectory]]&lt;br /&gt;
|[[Image:TRPBTarget3DView7--TrajectoryCloseUp3D.JPG|Visualization of patient motion between targeting and verification image|thumb|350px]]&lt;br /&gt;
|}&lt;br /&gt;
{| &lt;br /&gt;
|valign=&amp;quot;center&amp;quot;|[[Image:Calib3DView3.JPG|thumb|350px|Calibrate and register robot markers]]&lt;br /&gt;
|[[Image:TRPB_ProstateSegmentation.JPG|center|thumb|350px|Prostate segmentation]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Team and Institutes==&lt;br /&gt;
&lt;br /&gt;
*PI: Gabor Fichtinger, Queen’s University (gabor at cs.queensu.ca)&lt;br /&gt;
*Co-I: Purang Abolmaesumi, Queen’s University (purang at cs.queensu.ca)&lt;br /&gt;
*Software Engineer: Andras Lasso (lasso at cs.queensu.ca), Siddharth Vikal, David Gobbi, Queen’s University; Junichi Tokuda, Brigham and Women's Hospital; Csaba Csoma, Johns Hopkins University&lt;br /&gt;
*NA-MIC Engineering Contact: Katie Hayes, MSc, Brigham and Women's Hospital (hayes at bwh.harvard.edu)&lt;br /&gt;
*NA-MIC Algorithms Contact: Allen Tannenbaum, PhD, GeorgiaTech (tannenba at ece.gatech.edu)&lt;br /&gt;
*Host Institutes: Queen's University &amp;amp; Johns Hopkins University&lt;br /&gt;
&lt;br /&gt;
==Publications==&lt;br /&gt;
*Gao, Y., Sandhu, R., Fichtinger, G., Tannenbaum, A. “A coupled global registration and segmentation framework with application to magnetic resonance prostate imagery,” IEEE Transactions on Medical Imaging, vol. 29, no. 10, pp. 1781–1795, 2010.&lt;br /&gt;
*Lasso, A., J. Tokuda, S. Vikal, C. M. Tempany, N. Hata, and G. Fichtinger, A generic computer assisted intervention plug-in module for 3D Slicer with multiple device support. Medical Image Computing and Computer-Assisted Intervention (MICCAI), London, UK, 2009&lt;br /&gt;
*Vikal, S., S. Haker, C. Tempany, and G. Fichtinger, &amp;quot;Prostate contouring in MRI guided biopsy&amp;quot;, SPIE Medical Imaging, vol. 7259, 2009.&lt;br /&gt;
*Boisvert, J., D. Gobbi, S. Vikal, R. Rohling, G. Fichtinger, and P. Abolmaesumi, An open-source solution for interactive acquisition, processing and transfer of interventional ultrasound images. Workshop on Systems and Architectures for Computer Assisted Interventions, held in conjunction with the 11th International Conference on Medical Image Computing and Computer Assisted Intervention, 2008.&lt;br /&gt;
*Fischer G.S., Krieger A., Iordachita I., Csoma C., Whitcomb L., Fichtinger G. MRI Compatibility of Robot Actuation Techniques - A Comparative Study. Int Conf Med Image Comput Comput Assist Interv. 2008;11(Pt 2):509-517.&lt;br /&gt;
*Gill S., Abolmaesumi P., Vikal S., Mousavi P., Fichtinger G. Intraoperative Prostate Tracking with Slice-to-Volume Registration in MRI. Proceedings of the 20th International Conference of the Society for Medical Innovation and Technology 2008; 154-158.&lt;br /&gt;
*Krieger, A., P. Guion, C. Csoma, I. Iordachita, A. Singh, A. Kaushal, C. Menard, G. Fichtinger, and L. Whitcomb. Design and Preliminary Clinical Studies of an MRI-Guided Transrectal Prostate Intervention System. International Society of Magnetic Resonance in Medicine (ISMRM), 2008.&lt;br /&gt;
*Mewes P., Tokuda J., DiMaio S.P., Fischer G., Csoma C., Gobbi D., Tempany C.M., Fichtinger G., Hata N. Integrated System for Robot-Assisted Prostate Biopsy in Closed MRI Scanner. Proceedings of the IEEE International Conference on Robotics and Automation 2008; 2959-2962.&lt;br /&gt;
*Tokuda, J., S. DiMaio, G. Fischer, C. Csoma, D. Gobbi, G. Fichtinger, N. Hata, and C. Tempany, &amp;quot;Real-time MR Imaging Controlled by Transperineal Needle Placement Device for MRI-guided Prostate Biopsy&amp;quot;, 16th Scientific Meeting and Exhibition of International Society of Magnetic Resonance in Medicine, 2008.&lt;br /&gt;
*Tokuda J., Fischer G.S., Csoma C., DiMaio S.P., Gobbi D.G., Fichtinger G., Tempany C.M., Hata N. Software Strategy for Robotic Transperineal Prostate Therapy in Closed-Bore MRI. Int Conf Med Image Comput Comput Assist Interv. 2008;11(Pt 2):701-709.&lt;br /&gt;
*Vikal, S., S. Haker, C. Tempany, and G. Fichtinger, &amp;quot;Prostate contouring in MRI guided biopsy&amp;quot;, Workshop on Prostate image analysis and computer-assisted intervention, International Conference on Medical Image Computing and Computer Assisted Intervention, 2008.&lt;br /&gt;
*Susil R., Menard C., Krieger A., Coleman J., Camphausen K., Choyke P., Fichtinger G., Whitcomb L., Coleman N., Atalar E., Transrectal prostate biopsy and fiducial marker placement in a standard 1.5T magnetic resonance imaging scanner. J Urol. 2006 Jan;175(1):113-20.&lt;br /&gt;
*Krieger, A., R. Susil, C. Menard, J. Coleman, G. Fichtinger, E. Atalar, and L. Whitcomb, Design of a novel MRI compatible manipulator for image guided prostate interventions. IEEE Transactions on Biomedical Engineering 2005;52(2):306–313.&lt;br /&gt;
&lt;br /&gt;
[[Category: Segmentation]] [[Category: Registration]] [[Category: Slicer]] [[Category: Prostate]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=StonyBrook_Workshop&amp;diff=92390</id>
		<title>StonyBrook Workshop</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=StonyBrook_Workshop&amp;diff=92390"/>
		<updated>2016-04-13T17:58:21Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|style=&amp;quot;width:25%&amp;quot; |[[Image:NAMIC.jpg‎]]&lt;br /&gt;
|style=&amp;quot;width:25%&amp;quot; |[[Image:NCIGTlogo.gif]]&lt;br /&gt;
|style=&amp;quot;width:25%&amp;quot; |[[Image:Logo_nac.gif‎]]&lt;br /&gt;
|style=&amp;quot;width:25%&amp;quot; |[[Image:StonyBrook.png]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
The 3D Slicer Stony Brook workshop is a full-day course that will combine a series of lectures and hands-on sessions using the 3D Slicer software. The morning session will provide an introduction to the basics of data loading, 3D visualization and image registration. The afternoon session will focus on fundamental and practical aspects of Diffusion MRI analysis.&lt;br /&gt;
&lt;br /&gt;
=Workshop Organizers=&lt;br /&gt;
* [https://www.spl.harvard.edu/pages/People/spujol Sonia Pujol, Ph.D., Brigham and Women's Hospital, Harvard Medical School ]&lt;br /&gt;
&lt;br /&gt;
=Local Host=&lt;br /&gt;
* [https://www.cs.stonybrook.edu/people/faculty/AllenTannenbaum Allen Tannenbaum, Ph.D., Departments of Computer Science and Applied Mathematics, Stony Brook University]&lt;br /&gt;
* [https://medicine.stonybrookmedicine.edu/radiology/mary-saltz Mary Saltz, M.D., Department of Radiology, Stony Brook School of Medicine]&lt;br /&gt;
&lt;br /&gt;
=Tentative Agenda=&lt;br /&gt;
*9:30-9:40 am Welcome and Goals of the Workshop (Mary Saltz)&lt;br /&gt;
*9:40-10:00 am Introduction to the 3D Slicer software and community (Sonia Pujol)&lt;br /&gt;
*10:00-11:15 am  Hands-on session 1: 3D Data Loading and Visualization (Sonia Pujol)&lt;br /&gt;
*11:15-11:30 am Coffee Break&lt;br /&gt;
*11:30-12:30 pm Hands-on session 2: Basics of Image Registration (Sonia Pujol)&lt;br /&gt;
*12:30-1:30 pm Lunch&lt;br /&gt;
*1:30 pm -2:45 pm  An Introduction to Diffusion Tensor Imaging  (Sonia Pujol)&lt;br /&gt;
*2:45 pm -3:00 pm Coffee Break&lt;br /&gt;
*3:00-4:00 pm  Brain Mapping for Neurosurgery (Sonia Pujol)&lt;br /&gt;
*4:00-4:30 pm Questions from the audience and concluding remarks&lt;br /&gt;
&lt;br /&gt;
=Logistics=&lt;br /&gt;
* Date: Thursday April 14, 2016&lt;br /&gt;
* Location: Department of Biomedical Informatics, Stony Brook University&lt;br /&gt;
* Workshop Materials: In preparation for the hands-on sessions, please download the following software and datasets:&lt;br /&gt;
*Software: 3D Slicer version 4.5: &lt;br /&gt;
**[https://www.dropbox.com/s/ayb7mtqb21w9zgz/Slicer-4.5.0-2016-04-11-linux-amd64.tar.gz?dl=0 Slicer4.5 (Linux)]&lt;br /&gt;
**[https://www.dropbox.com/s/bt3lnns2gbq2l16/Slicer-4.5.0-2016-04-11-macosx-amd64.dmg?dl=0 Slicer4.5 (MacOSX)]&lt;br /&gt;
**[https://www.dropbox.com/s/ygm8xuybqlu7am9/Slicer-4.5.0-2016-04-11-win-amd64.exe?dl=0 Slicer4.5 (Windows)]&lt;br /&gt;
*Datasets:&lt;br /&gt;
**[[Media:3DVisualizationData.zip | 3D Visualization Dataset]]&lt;br /&gt;
**[[Media:Dti_tutorial_data.zip‎ |Slicer  DTI tutorial Dataset]]&lt;br /&gt;
**[[Media:WhiteMatterExplorationData.zip|White Matter Exploration dataset]]&lt;br /&gt;
**[[Media:RegistrationData.zip|Registration dataset]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=StonyBrook_Workshop&amp;diff=92389</id>
		<title>StonyBrook Workshop</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=StonyBrook_Workshop&amp;diff=92389"/>
		<updated>2016-04-13T17:57:49Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|style=&amp;quot;width:25%&amp;quot; |[[Image:NAMIC.jpg‎]]&lt;br /&gt;
|style=&amp;quot;width:25%&amp;quot; |[[Image:NCIGTlogo.gif]]&lt;br /&gt;
|style=&amp;quot;width:25%&amp;quot; |[[Image:Logo_nac.gif‎]]&lt;br /&gt;
|style=&amp;quot;width:25%&amp;quot; |[[Image:StonyBrook.png]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
The 3D Slicer Stony Brook workshop is a full-day course that will combine a series of lectures and hands-on sessions using the 3D Slicer software. The morning session will provide an introduction to the basics of data loading, 3D visualization and image registration. The afternoon session will focus on fundamental and practical aspects of Diffusion MRI analysis.&lt;br /&gt;
&lt;br /&gt;
=Workshop Organizers=&lt;br /&gt;
* [https://www.spl.harvard.edu/pages/People/spujol Sonia Pujol, Ph.D., Brigham and Women's Hospital, Harvard Medical School ]&lt;br /&gt;
&lt;br /&gt;
=Local Host=&lt;br /&gt;
* [http://www.iacs.stonybrook.edu/people/affiliates/allen-tannenbaum Allen Tannenbaum, Ph.D., Departments of Computer Science and Applied Mathematics, Stony Brook University]&lt;br /&gt;
* [https://medicine.stonybrookmedicine.edu/radiology/mary-saltz Mary Saltz, M.D., Department of Radiology, Stony Brook School of Medicine]&lt;br /&gt;
&lt;br /&gt;
=Tentative Agenda=&lt;br /&gt;
*9:30-9:40 am Welcome and Goals of the Workshop (Mary Saltz)&lt;br /&gt;
*9:40-10:00 am Introduction to the 3D Slicer software and community (Sonia Pujol)&lt;br /&gt;
*10:00-11:15 am  Hands-on session 1: 3D Data Loading and Visualization (Sonia Pujol)&lt;br /&gt;
*11:15-11:30 am Coffee Break&lt;br /&gt;
*11:30-12:30 pm Hands-on session 2: Basics of Image Registration (Sonia Pujol)&lt;br /&gt;
*12:30-1:30 pm Lunch&lt;br /&gt;
*1:30 pm -2:45 pm  An Introduction to Diffusion Tensor Imaging  (Sonia Pujol)&lt;br /&gt;
*2:45 pm -3:00 pm Coffee Break&lt;br /&gt;
*3:00-4:00 pm  Brain Mapping for Neurosurgery (Sonia Pujol)&lt;br /&gt;
*4:00-4:30 pm Questions from the audience and concluding remarks&lt;br /&gt;
&lt;br /&gt;
=Logistics=&lt;br /&gt;
* Date: Thursday April 14, 2016&lt;br /&gt;
* Location: Department of Biomedical Informatics, Stony Brook University&lt;br /&gt;
* Workshop Materials: In preparation for the hands-on sessions, please download the following software and datasets:&lt;br /&gt;
*Software: 3D Slicer version 4.5: &lt;br /&gt;
**[https://www.dropbox.com/s/ayb7mtqb21w9zgz/Slicer-4.5.0-2016-04-11-linux-amd64.tar.gz?dl=0 Slicer4.5 (Linux)]&lt;br /&gt;
**[https://www.dropbox.com/s/bt3lnns2gbq2l16/Slicer-4.5.0-2016-04-11-macosx-amd64.dmg?dl=0 Slicer4.5 (MacOSX)]&lt;br /&gt;
**[https://www.dropbox.com/s/ygm8xuybqlu7am9/Slicer-4.5.0-2016-04-11-win-amd64.exe?dl=0 Slicer4.5 (Windows)]&lt;br /&gt;
*Datasets:&lt;br /&gt;
**[[Media:3DVisualizationData.zip | 3D Visualization Dataset]]&lt;br /&gt;
**[[Media:Dti_tutorial_data.zip‎ |Slicer  DTI tutorial Dataset]]&lt;br /&gt;
**[[Media:WhiteMatterExplorationData.zip|White Matter Exploration dataset]]&lt;br /&gt;
**[[Media:RegistrationData.zip|Registration dataset]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2016_Winter_Project_Week/Projects/SphericalWaveletShapeAnalysis&amp;diff=91964</id>
		<title>2016 Winter Project Week/Projects/SphericalWaveletShapeAnalysis</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2016_Winter_Project_Week/Projects/SphericalWaveletShapeAnalysis&amp;diff=91964"/>
		<updated>2016-01-08T14:07:44Z</updated>

		<summary type="html">&lt;p&gt;Ygao: goals and actions&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Image:PW-MIT2016.png|link=2016_Winter_Project_Week#Projects|[[2016_Winter_Project_Week#Projects|Projects List]]&lt;br /&gt;
&amp;lt;!-- Use the &amp;quot;Upload file&amp;quot; link on the left and then add a line to this list like &amp;quot;File:MyAlgorithmScreenshot.png&amp;quot; --&amp;gt;&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Key Investigators==&lt;br /&gt;
&amp;lt;!-- Add a bulleted list of investigators and their institutions here --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Yi Gao - [https://bmi.stonybrookmedicine.edu/ Stony Brook University]&lt;br /&gt;
* Erich Bremer - [https://bmi.stonybrookmedicine.edu/ Stony Brook University]&lt;br /&gt;
* Allen Tannenbaum - [https://bmi.stonybrookmedicine.edu/ Stony Brook University]&lt;br /&gt;
* Ron Kikinis - [http://www.brighamandwomens.org/ Brigham and Women's Hospital]&lt;br /&gt;
&lt;br /&gt;
==Project Description==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align: left; width:27%&amp;quot; |   Objective&lt;br /&gt;
! style=&amp;quot;text-align: left; width:27%&amp;quot; |   Approach and Plan&lt;br /&gt;
! style=&amp;quot;text-align: left; width:27%&amp;quot; |   Progress and Next Steps&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
&amp;lt;!-- Objective bullet points --&amp;gt;&lt;br /&gt;
*&lt;br /&gt;
|&lt;br /&gt;
&amp;lt;!-- Approach and Plan bullet points --&amp;gt;&lt;br /&gt;
* We have identified the goal of writing an ITK filter for the Spherical wavelet forward and backward transformation.&lt;br /&gt;
* We are cleaning up the previous spherical wavelet code.&lt;br /&gt;
|&lt;br /&gt;
&amp;lt;!-- Progress and Next steps bullet points (fill out at the end of project week --&amp;gt;&lt;br /&gt;
* The Spherical wavelet transformation has been implemented as an ITK filter in our previous work. We are aiming to improve upon that project and explore new directions and applications.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Background and References==&lt;br /&gt;
&amp;lt;!-- Use this space for information that may help people better understand your project, like links to papers, source code, or data --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://www.insight-journal.org/browse/publication/155 Spherical Wavelet ITK Filter - Insight Journal]&lt;br /&gt;
* [http://www.na-mic.org/Wiki/index.php/Projects:SphericalWaveletsInITK Previous NA-MIC project on Spherical wavelet transformation]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2016_Winter_Project_Week/Projects/SphericalWaveletShapeAnalysis&amp;diff=91408</id>
		<title>2016 Winter Project Week/Projects/SphericalWaveletShapeAnalysis</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2016_Winter_Project_Week/Projects/SphericalWaveletShapeAnalysis&amp;diff=91408"/>
		<updated>2015-12-29T16:42:49Z</updated>

		<summary type="html">&lt;p&gt;Ygao: Created page with &amp;quot;__NOTOC__ &amp;lt;gallery&amp;gt; Image:PW-MIT2016.png|link=2016_Winter_Project_Week#Projects|Projects List &amp;lt;!-- Use the &amp;quot;Upload file&amp;quot; link on the left...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Image:PW-MIT2016.png|link=2016_Winter_Project_Week#Projects|[[2016_Winter_Project_Week#Projects|Projects List]]&lt;br /&gt;
&amp;lt;!-- Use the &amp;quot;Upload file&amp;quot; link on the left and then add a line to this list like &amp;quot;File:MyAlgorithmScreenshot.png&amp;quot; --&amp;gt;&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Key Investigators==&lt;br /&gt;
&amp;lt;!-- Add a bulleted list of investigators and their institutions here --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Yi Gao - [https://bmi.stonybrookmedicine.edu/ Stony Brook University]&lt;br /&gt;
* Erich Bremer - [https://bmi.stonybrookmedicine.edu/ Stony Brook University]&lt;br /&gt;
* Allen Tannenbaum - [https://bmi.stonybrookmedicine.edu/ Stony Brook University]&lt;br /&gt;
* Ron Kikinis - [http://www.brighamandwomens.org/ Brigham and Women's Hospital]&lt;br /&gt;
&lt;br /&gt;
==Project Description==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align: left; width:27%&amp;quot; |   Objective&lt;br /&gt;
! style=&amp;quot;text-align: left; width:27%&amp;quot; |   Approach and Plan&lt;br /&gt;
! style=&amp;quot;text-align: left; width:27%&amp;quot; |   Progress and Next Steps&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
&amp;lt;!-- Objective bullet points --&amp;gt;&lt;br /&gt;
*&lt;br /&gt;
|&lt;br /&gt;
&amp;lt;!-- Approach and Plan bullet points --&amp;gt;&lt;br /&gt;
* &lt;br /&gt;
|&lt;br /&gt;
&amp;lt;!-- Progress and Next steps bullet points (fill out at the end of project week --&amp;gt;&lt;br /&gt;
* The Spherical wavelet transformation has been implemented as an ITK filter in our previous work. We are aiming to improve upon that project and explore new directions and applications.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Background and References==&lt;br /&gt;
&amp;lt;!-- Use this space for information that may help people better understand your project, like links to papers, source code, or data --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://www.insight-journal.org/browse/publication/155 Spherical Wavelet ITK Filter - Insight Journal]&lt;br /&gt;
* [http://www.na-mic.org/Wiki/index.php/Projects:SphericalWaveletsInITK Previous NA-MIC project on Spherical wavelet transformation]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2016_Winter_Project_Week&amp;diff=91407</id>
		<title>2016 Winter Project Week</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2016_Winter_Project_Week&amp;diff=91407"/>
		<updated>2015-12-29T16:38:19Z</updated>

		<summary type="html">&lt;p&gt;Ygao: add project Spherical wavelet shape analysis&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
[[image:PW-MIT2016.png|300px]]&lt;br /&gt;
&lt;br /&gt;
'''Dates:''' January 4-8, 2016&lt;br /&gt;
&lt;br /&gt;
'''Location:''' [https://www.google.com/maps/place/MIT:+Computer+Science+and+Artificial+Intelligence+Laboratory/@42.361864,-71.090563,16z/data=!4m2!3m1!1s0x0:0x303ada1e9664dfed?hl=en MIT CSAIL], Cambridge, MA. (Rooms: [[MIT_Project_Week_Rooms#Kiva|Kiva]], R&amp;amp;D)&lt;br /&gt;
&lt;br /&gt;
'''REGISTRATION:''' Register [https://www.regonline.com/namic16 here].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Founded in 2005, the National Alliance for Medical Image Computing (NAMIC) was chartered with building a computational infrastructure to support biomedical research as part of the NIH-funded [http://www.ncbcs.org/ NCBC] program. The work of this alliance has resulted in important progress in algorithmic research, an open-source medical image computing platform [http://www.slicer.org 3D Slicer], built using [http://www.vtk.org VTK], [http://www.itk.org ITK], [http://www.cmake.org CMake], and [http://www.cdash.org CDash], and the creation of a community of algorithm researchers, biomedical scientists and software engineers who are committed to open science. This community meets twice a year in an event called Project Week. &lt;br /&gt;
&lt;br /&gt;
[[Engineering:Programming_Events|Project Week]] is a semi-annual event which draws 80-120 researchers. As of August 2014, it is a [http://www.miccai.org/organization MICCAI] endorsed event. The participants work collaboratively on open-science solutions for problems that lie at the interfaces of computer science, mechanical engineering, biomedical engineering, and medicine. In contrast to conventional conferences and workshops, the primary focus of the Project Weeks is to make progress in projects (as opposed to reporting about progress). The objective of the Project Weeks is to provide a venue for this community of medical open source software creators. Project Weeks are open to all, are publicly advertised, and are funded through fees paid by the attendees. Participants are encouraged to stay for the entire event. &lt;br /&gt;
&lt;br /&gt;
Project Week activities: Everyone shows up with a project. Some people are working on the platform. Some people are developing algorithms. Some people are applying the tools to their research problems. We begin the week by introducing projects and connecting teams. We end the week by reporting progress. In addition to the ongoing working sessions, breakout sessions are organized ad-hoc on a variety of special topics. These topics include: discussions of software architecture, presentations of new features and approaches and topics such as Image-Guided Therapy.&lt;br /&gt;
&lt;br /&gt;
Several funded projects use the Project Week as a place to convene and collaborate. These include [http://nac.spl.harvard.edu/ NAC], [http://www.ncigt.org/ NCIGT], [http://qiicr.org/ QIICR], and [http://ocairo.technainstitute.com/open-source-software-platforms-and-databases-for-the-adaptive-process/ OCAIRO]. &lt;br /&gt;
&lt;br /&gt;
A summary of all previous Project Events is available [[Project_Events#Past|here]].&lt;br /&gt;
&lt;br /&gt;
This project week is an event [[Post-NCBC-2014|endorsed]] by the MICCAI society.&lt;br /&gt;
&lt;br /&gt;
Please make sure that you are on the [http://public.kitware.com/mailman/listinfo/na-mic-project-week na-mic-project-week mailing list].&lt;br /&gt;
&lt;br /&gt;
==Agenda==&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-style=&amp;quot;background:#b0d5e6;color:#02186f&amp;quot; &lt;br /&gt;
!style=&amp;quot;width:10%&amp;quot; |Time&lt;br /&gt;
!style=&amp;quot;width:18%&amp;quot; |Monday, January 4&lt;br /&gt;
!style=&amp;quot;width:18%&amp;quot; |Tuesday,  January 5&lt;br /&gt;
!style=&amp;quot;width:18%&amp;quot; |Wednesday, January 6&lt;br /&gt;
!style=&amp;quot;width:18%&amp;quot; |Thursday, January 7&lt;br /&gt;
!style=&amp;quot;width:18%&amp;quot; |Friday, January 8&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|bgcolor=&amp;quot;#dbdbdb&amp;quot;|'''Project Presentations''' &lt;br /&gt;
|bgcolor=&amp;quot;#6494ec&amp;quot;|&lt;br /&gt;
|&lt;br /&gt;
|bgcolor=&amp;quot;#88aaae&amp;quot;|'''IGT Day'''&lt;br /&gt;
|bgcolor=&amp;quot;#faedb6&amp;quot;|'''Reporting Day'''&lt;br /&gt;
|-&lt;br /&gt;
|bgcolor=&amp;quot;#ffffdd&amp;quot;|'''8:30am'''&lt;br /&gt;
|&lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Breakfast&lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Breakfast&lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Breakfast&lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Breakfast &lt;br /&gt;
|-&lt;br /&gt;
|bgcolor=&amp;quot;#ffffdd&amp;quot;|'''9:00am-12:00pm'''&lt;br /&gt;
|'''10:30am-12pm:''' [Tutorial] Diffeomorphic registration and geodesic shooting methods (I). (Sarang Joshi)&amp;lt;br&amp;gt; Room: [http://www.csail.mit.edu/resources/maps/5D/D507.gif 32-D507].&lt;br /&gt;
|'''10:00-11:30am:''' Breakout Session:[[2016_Winter_Project_Week/Breakout_Sessions/NewSlicerExtensions | Slicer Extensions Birds of a Feather]]&lt;br /&gt;
|&lt;br /&gt;
'''10:00-11:30am:''' Breakout Session: [[2016_Winter_Project_Week/Projects/SlicerROSIntegration| Slicer for Medical Robotics Research]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
|'''8:30-9:30am''' TBD &amp;lt;br&amp;gt;&lt;br /&gt;
'''9:30-10:30am''' [[2016_Winter_Project_Week/Breakout_Sessions/IGT#Image-guided Neurosurgery| Clinical perspective on Image Guided Neurosurgery]]  (Alexandra Golby) &amp;lt;br&amp;gt;&lt;br /&gt;
'''10:30-11:30am''' [[2016_Winter_Project_Week/Breakout_Sessions/IGT#Multiparametric MRI| Clinical perspective on Multiparametric MRI]] (Fiona Fennessy)&amp;lt;br&amp;gt;&lt;br /&gt;
'''11:30am-12:30pm''' TBD &amp;lt;br&amp;gt;&lt;br /&gt;
|'''10:00am-12:00pm:''' [[#Projects|Project Progress Updates]]&amp;lt;br&amp;gt; &lt;br /&gt;
[[MIT_Project_Week_Rooms#Kiva|Kiva]]&lt;br /&gt;
&amp;lt;br&amp;gt;-----------------&amp;lt;br&amp;gt;&lt;br /&gt;
'''12pm:''' [[Events:TutorialContestJanuary2016|Tutorial Contest Winner Announcement]]&amp;lt;br&amp;gt; &lt;br /&gt;
[[MIT_Project_Week_Rooms#Kiva|Kiva]]&lt;br /&gt;
|-&lt;br /&gt;
|bgcolor=&amp;quot;#ffffdd&amp;quot;|'''12:00pm-1:00pm'''&lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Lunch &lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Lunch&lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Lunch&lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Lunch&lt;br /&gt;
|bgcolor=&amp;quot;#ffffaa&amp;quot;|Lunch boxes; Adjourn by 1:30pm&lt;br /&gt;
|-&lt;br /&gt;
|bgcolor=&amp;quot;#ffffdd&amp;quot;|'''1:00-5:30pm'''&lt;br /&gt;
|'''1:00pm-1:05pm: &amp;lt;font color=&amp;quot;#503020&amp;quot;&amp;gt;Welcome&amp;lt;/font&amp;gt;'''&amp;lt;br&amp;gt; &lt;br /&gt;
[[MIT_Project_Week_Rooms#Kiva|Kiva]]&lt;br /&gt;
&amp;lt;br&amp;gt;-----------------&amp;lt;br&amp;gt;&lt;br /&gt;
'''1:05-2:30pm:''' [[#Projects|Project Introductions]] (all Project Leads)&amp;lt;br&amp;gt;&lt;br /&gt;
[[MIT_Project_Week_Rooms#Kiva|Kiva]]&lt;br /&gt;
&amp;lt;br&amp;gt;-----------------&amp;lt;br&amp;gt;&lt;br /&gt;
'''2:45-4:00pm:''' Breakout Session: [[2016_Winter_Project_Week/Breakout_Sessions/Ultrasound| Ultrasound]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[MIT_Project_Week_Rooms#Kiva|Kiva]]&lt;br /&gt;
&amp;lt;br&amp;gt;-----------------&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''4:00-5:30pm:''' [Tutorial] Diffeomorphic registration geodesic shooting methods (II). (Sarang Joshi) &amp;lt;br&amp;gt; Room: [http://www.csail.mit.edu/resources/maps/5D/D507.gif 32-D507].&lt;br /&gt;
|&lt;br /&gt;
|'''1:00-2:30pm:''' Breakout Session: [[2016_Winter_Project_Week/Breakout_Sessions/DiffusionMRI| Diffusion MRI]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[MIT_Project_Week_Rooms#Kiva|Kiva]] &amp;lt;br&amp;gt;&lt;br /&gt;
'''3:00-4:30pm:''' Breakout Session: [[2016_Winter_Project_Week/Breakout_Sessions/QIICRTools| QIICR Tools]]&lt;br /&gt;
|'''1:00-3:00pm:''' Breakout Session: [[2016_Winter_Project_Week/Breakout_Session/What's_Planned_for_Slicer_Core|What's Planned for Slicer Core]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[MIT_Project_Week_Rooms#Kiva|Kiva]] &lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|bgcolor=&amp;quot;#ffffdd&amp;quot;|'''5:30pm'''&lt;br /&gt;
|bgcolor=&amp;quot;#f0e68b&amp;quot;|Adjourn for the day&lt;br /&gt;
|bgcolor=&amp;quot;#f0e68b&amp;quot;|Adjourn for the day&lt;br /&gt;
|bgcolor=&amp;quot;#f0e68b&amp;quot;|Adjourn for the day&lt;br /&gt;
|bgcolor=&amp;quot;#f0e68b&amp;quot;|Adjourn for the day&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Calendar==&lt;br /&gt;
{{#widget:Google Calendar&lt;br /&gt;
|id=kitware.com_sb07i171olac9aavh46ir495c4@group.calendar.google.com&lt;br /&gt;
|timezone=America/New_York&lt;br /&gt;
|title=NAMIC Winter Project Week&lt;br /&gt;
|view=WEEK&lt;br /&gt;
|dates=20160103/20160110&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
iCal (.ics) link: https://calendar.google.com/calendar/ical/kitware.com_sb07i171olac9aavh46ir495c4%40group.calendar.google.com/public/basic.ics&lt;br /&gt;
&lt;br /&gt;
='''Projects'''=&lt;br /&gt;
*Use this [[2016_Project_Week_Template | Updated Template for project pages]]&lt;br /&gt;
&lt;br /&gt;
== Tractography==&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/Tractography_format_interoperability | Tractography Format Interoperability]] (Isaiah Norton, Michael Onken, Lauren O'Donnell, others)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/SlicerDMRI_documentation | Slicer Diffusion MR / tractography workflow documentation]] (Pegah Kahaliardabili, Fan Zhang, Isaiah Norton, Lauren O'Donnell, others)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/TractographyModuleDevelop&amp;amp;Test | Tractography Analysis Module Development and Testing]] (Fan Zhang, Pegah Kahaliardabili, Isaiah Norton, Lauren O'Donnell, others)&lt;br /&gt;
&lt;br /&gt;
== IGT ==&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/TrackedUltrasoundStandardization | Tracked Ultrasound Standardization]] (Andras Lasso, Christian Askeland, Simon Drouin, Junichi Tokuda, Steve Pieper, Adam Rankin)&lt;br /&gt;
*[[2016_Winter_Project_Week/Projects/IntegrationCustusX|Integration of CustusX with PLUS on BK System]] (Christian A, Andras Lasso, Adam Rankin)&lt;br /&gt;
*[[2016_Winter_Project_Week/Projects/MITK_Plus_Integration | Integration of Plus and MITK]] (Thomas Kirchner, Janek Groehl)&lt;br /&gt;
*[[2016_Winter_Project_Week/Projects/IntegrationImFusion| Integration of ImFusion MR-US Registration with BWH AMIGO Neurosurgery Setup]] (Sarah Frisken, Tina Kapur, Steve Pieper, Sandy Wells, Andras Lasso, Christian Askeland)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/SlicerROSIntegration | 3D Slicer + ROS Integration]] (Junichi Tokuda, Axel Krieger, Simon Leonard, Jayender Jagadeesan)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/CryoPlanningSlicerModule | CryoPlanning Module in Slicer]] (Jayender Jagadeesan, Steve Pieper, Sandy Wells)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/External_beam_planning | External Beam Radiotherapy Planning]] (Greg Sharp, others)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/EVD |Measuring Anatomic Factors for Extraventricular Drain Placement]] (Kirby Vosburgh, P. Jason White)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/PLUS | Inter-device messaging for robust support of depth switching]] (Adam Rankin)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/PLUSOCR | Exploration of open-source OCR libraries for device meta-data capture without research interface ]] (Adam Rankin)&lt;br /&gt;
&lt;br /&gt;
==Image Analysis==&lt;br /&gt;
*[[2016_Winter_Project_Week/Projects/ChestImagingPlatform|Chest Imaging Platform: COPD and Other Pulmonary Diseases]] (Raúl San José, Jorge Onieva)&lt;br /&gt;
* [[2016 Winter Project Week/Projects/Cluster-Driven Lung Segmentation | Cluster-Driven Segmentation of Lung Nodules]] (Vivek Narayan, Raúl San José, Daniel Blezek, Steve Pieper, Chintan Parmar)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/BatchImageAnalysis  | Batch Clinical Image Analysis]] (Kalli Retzepi, Yangming Ou, Matt Toews, Steve Pieper, Sandy Wells, Randy Gollub)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/ImageRestoration | Image Restoration via Patch GMMs]] (Adrian Dalca, Katie Bouman, Polina Golland)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/PatchRegistration | Patch Based Discrete Registration for Difficult Images]] (Adrian Dalca, Andreea Bobu, Polina Golland)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/DigitalPathologyNuclearSegmentation|Digital Pathology Nuclear Segmentation]] (Erich Bremer, Yi Gao, Nicole Aucoin, Andrey Fedorov)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/SphericalWaveletShapeAnalysis|Spherical Wavelet Shape Analysis]] (Yi Gao, Erich Bremer, Allen Tannenbaum, Ron Kikinis)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/Interactive4DSegmentation | Interactive 4D Segmentation Module]] (Ethan Ulrich)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/SlicerCMFNextSteps | Moving beyond SlicerCMF and Future Projects]] (Beatriz Paniagua, Lucia Cevidanes, Steve Pieper, Juan Prieto)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/SlicerOpenCVExtension | Slicer OpenCV Extension]] (Nicole Aucoin, Erich Bremer, Andrey Fedorov)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/ShapeAnalysis | Low-dimensional Principal Geodesic Analysis On the Manifold of Diffeomorphisms]] (Miaomiao Zhang, Polina Golland)&lt;br /&gt;
&lt;br /&gt;
==Infrastructure==&lt;br /&gt;
*[[2016_Winter_Project_Week/Projects/UpgradeNAMICSlicerWiki|Upgrade the NAMIC (and Slicer?) Wiki]] (Mike Halle, JC)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/CommonDataStructure | Common Data Structure for CMF modules in Slicer]] (Jean-Baptiste Vimort, François Budin, Lucia Cevidanes, Beatriz Paniagua, Steve Pieper, Juan Prieto)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/StatisticalShapeModeling | Statistical Shape Modeling in Slicer: OA Index]] (Laura Pascal, Beatriz Paniagua, François Budin, Lucia Cevidanes, Steve Pieper, Juan Prieto)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/CommonGL  | CommonGL]] (Steve Pieper, Jim Miller)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/CLIModules Backgrounding in MeVisLab | Running CLI Modules in MeVisLab Asynchronously]] (Hans Meine)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/BRAINSFit_in_MeVisLab | Interoperability Tests with BRAINSFit (or other interesting CLIs) in MeVisLab]] (Hans Meine, Steve Pieper)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/CLI_Dashboard | Kibana Dashboard for Browsing All Available CLI Modules]] (Hans Meine, JC?)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/SegmentationEditorWidget | Editor Widget using Segmentations]] (Csaba Pinter, Andras Lasso, Andrey Fedorov, Steve Pieper?)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/SlicerTerminologyEditor | Terminology Editor]] (Csaba Pinter, Nicole Aucoin, Andrey Fedorov)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/DICOMSegObjIntegration | Integration of DICOM Segmentation Image Storage with Segmentations Module]] (Kyle Sunderland, Csaba Pinter, Andras Lasso, Andrey Fedorov, Steve Pieper?)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/CondaSlicer | Integration of Anaconda Python in Slicer]] (JC, Raúl San José, Jorge Onieva, Slicer Community?)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/Data Persisting | Mechanism to Persist Clinical User Data from Different Modules Based on SQLite and/or other Database Engines ]] (Raúl San José, Jorge Onieva)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/Workflows | Workflow Module that Enables the Navigation and Data Sharing between Different Modules in a Clinical Workflow ]] (Raúl San José, Jorge Onieva)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/AIMInteroperability | AIM for Interoperability]] (Hans Meine, Andrey Fedorov, ??)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/3DNrrdSequences | Sequences extension support for 3D+t NRRD]] (Adam Rankin)&lt;br /&gt;
* [[2016_Winter_Project_Week/Projects/SlicerEnhancedMR | Developing support for Enhanced MR in Slicer]] (Michael Onken, Andrey Fedorov)&lt;br /&gt;
&lt;br /&gt;
= '''Logistics''' =&lt;br /&gt;
&lt;br /&gt;
*'''Dates:''' January 4-8, 2016&lt;br /&gt;
*'''Location:''' MIT, Kiva Conference room; 4th floor of Building 32.&lt;br /&gt;
*'''REGISTRATION:''' Register [https://www.regonline.com/namic16 here]. Registration Fee: $300.&lt;br /&gt;
*'''Hotel:''' Similar to previous years, no rooms have been blocked in a particular hotel.&lt;br /&gt;
*'''Room sharing''': If interested, add your name to the list  [[2016_Winter_Project_Week/RoomSharing|here]]&lt;br /&gt;
&lt;br /&gt;
= '''Registrants''' =&lt;br /&gt;
&lt;br /&gt;
Do not add your name to this list - it is maintained by the organizers based on your paid registration.  To register, visit this [https://www.regonline.com/namic16  registration site].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#Polina Golland, MIT&lt;br /&gt;
#Ron Kikinis, BWH&lt;br /&gt;
#Nicole Aucoin, BWH/SPL&lt;br /&gt;
#Peter Anderson&lt;br /&gt;
#Daniel Blezek, Isomics, Inc.&lt;br /&gt;
#Lucia Cevidanes, University of Michigan&lt;br /&gt;
#Adrian Dalca, MIT&lt;br /&gt;
#Simon Drouin, Montreal Neurological Institute&lt;br /&gt;
#Janek Groehl, German Cancer Research Center&lt;br /&gt;
#Tina Kapur, BWH/HMS&lt;br /&gt;
#Thomas Kirchner, German Cancer Research Center&lt;br /&gt;
#Hans Meine, University of Bremen/MEVIS&lt;br /&gt;
#Vivek Narayan, Dana Farber Cancer Institute&lt;br /&gt;
#Danielle Pace, MIT&lt;br /&gt;
#Laura Pascal, University of Michigan&lt;br /&gt;
#Steve Pieper, Isomics, Inc.&lt;br /&gt;
#Csaba Pinter, Queen's University&lt;br /&gt;
#Gregory Sharp, MGH&lt;br /&gt;
#James Miller, GE Research&lt;br /&gt;
#Kyle Sunderland, Queen's University&lt;br /&gt;
#Ethan Ulrich, University of Iowa&lt;br /&gt;
#Jean-Baptiste Vimort, University of Michigan&lt;br /&gt;
#Miaomiao Zhang, MIT&lt;br /&gt;
#Beatriz Paniagua, University of North Carolina at Chapel Hill&lt;br /&gt;
#Sonia Pujol, BWH&lt;br /&gt;
#Junichi Tokuda, BWH&lt;br /&gt;
#Katie Mastrogiacomo, BWH&lt;br /&gt;
#Niravkumar Patel, Worcester Polytechnic Institute &lt;br /&gt;
#Michael Onken, Open Connections (Germany)&lt;br /&gt;
#Erich Bremer, Stony Brook University&lt;br /&gt;
#Xiao Da, MGH&lt;br /&gt;
#Tobias Frank, Leibniz Universität Hannover&lt;br /&gt;
#Kirby Vosburgh, BWH&lt;br /&gt;
#P. Jason White, BWH&lt;br /&gt;
#Lauren O'Donnell, BWH&lt;br /&gt;
#Pegah Kahali, BWH&lt;br /&gt;
#Fan Zhang, BWH&lt;br /&gt;
#Adam Rankin, Robarts Research Institute &lt;br /&gt;
#Simon Leonard, Johns Hopkins University&lt;br /&gt;
#David Gering, HealthMyne&lt;br /&gt;
#Johan Andruejol, Kitware&lt;br /&gt;
#Jean-Christophe Fillion-Robin, Kitware&lt;br /&gt;
#Kelly Xu, MIT&lt;br /&gt;
#Christian Askeland, SINTEF&lt;br /&gt;
#Katharine Carter, BWH&lt;br /&gt;
#Nick Todd, BWH&lt;br /&gt;
#Ye Cheng, BWH&lt;br /&gt;
#Andriy Fedorov, BWH/HMS&lt;br /&gt;
#Sudhanshu Semwal, UCCS Professor&lt;br /&gt;
#Michael Halle, BWH&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2014_Project_Week:AblationSuccessRatePredictionUsingJointImageAndShapeAnalysis&amp;diff=85018</id>
		<title>2014 Project Week:AblationSuccessRatePredictionUsingJointImageAndShapeAnalysis</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2014_Project_Week:AblationSuccessRatePredictionUsingJointImageAndShapeAnalysis&amp;diff=85018"/>
		<updated>2014-01-10T16:18:00Z</updated>

		<summary type="html">&lt;p&gt;Ygao: /* Key Investigators */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Image:PW-SLC2014.png|[[2014_Winter_Project_Week#Projects|Projects List]]&lt;br /&gt;
Image:FibrosisPval20130526.png|p-values of DCE-MRI differences between cured patients and AFib-recurrent patients.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Key Investigators==&lt;br /&gt;
* Yi Gao, LiangJia Zhu, Josh Cates, Rob MacLeod, Sylvain Bouix, Ron Kikinis, Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin: 20px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Objective&amp;lt;/h3&amp;gt;&lt;br /&gt;
Among AFib patients who underwent RF ablation, the relatively high recurrence rate is a concern. We combine image and shape information to predict the RF ablation success rate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Approach, Plan&amp;lt;/h3&amp;gt;&lt;br /&gt;
The fibrosis distribution on the left atrial wall is imaged using dynamic contrast-enhanced MRI. Distributed over different anatomical structures, the fibrosis patterns are treated as &amp;quot;mass&amp;quot; defined on different domains. Under the framework of optimal mass transport (OMT), the masses are transported to a common domain, where statistical analysis can then be applied. Significantly different regions are then characterized by low p-values.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 40%; float: left;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Progress&amp;lt;/h3&amp;gt;&lt;br /&gt;
* C++ code ready&lt;br /&gt;
* Six months&lt;br /&gt;
** Discuss with DBP collaborators and start writing the manuscript&lt;br /&gt;
** Started constructing the CLI extension&lt;br /&gt;
** Documentation&lt;br /&gt;
* Beyond&lt;br /&gt;
** Continue work on the image/shape analysis and success rate prediction&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Delivery Mechanism==&lt;br /&gt;
&lt;br /&gt;
This work will be delivered to the NA-MIC Kit as a command-line extension.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* [http://www.na-mic.org/Wiki/index.php/DBP3:Utah Utah DBP]&lt;br /&gt;
* Y. Gao, A. Tannenbaum, S. Bouix; &amp;quot;A Framework for Joint Image-and-Shape Analysis&amp;quot;; SPIE Medical Imaging, 2014&lt;br /&gt;
* Y. Gao, Y. Rathi, S. Bouix, A. Tannenbaum; ''Filtering in the Diffeomorphism Group and the Registration of Point Sets''; IEEE Transactions on Image Processing 21 (10), 4383--4396&lt;br /&gt;
* Y. Gao and S. Bouix, ''Synthesis of realistic subcortical anatomy with known surface deformations''; in MICCAI Workshop on Mesh Processing in Medical Image Analysis, 2012, pp. 80–88.&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2014_Project_Week:AblationSuccessRatePredictionUsingJointImageAndShapeAnalysis&amp;diff=84945</id>
		<title>2014 Project Week:AblationSuccessRatePredictionUsingJointImageAndShapeAnalysis</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2014_Project_Week:AblationSuccessRatePredictionUsingJointImageAndShapeAnalysis&amp;diff=84945"/>
		<updated>2014-01-10T15:11:05Z</updated>

		<summary type="html">&lt;p&gt;Ygao: /* Key Investigators */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Image:PW-SLC2014.png|[[2014_Winter_Project_Week#Projects|Projects List]]&lt;br /&gt;
Image:FibrosisPval20130526.png|p-values of DCE-MRI differences between cured patients and AFib-recurrent patients.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Key Investigators==&lt;br /&gt;
* Yi Gao, LiangJia Zhu, Josh Cates, Rob MacLeod, Sylvain Bouix, Ron Kikinis, Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin: 20px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Objective&amp;lt;/h3&amp;gt;&lt;br /&gt;
Among AFib patients who underwent RF ablation, the relatively high recurrence rate is a concern. We combine image and shape information to predict the RF ablation success rate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Approach, Plan&amp;lt;/h3&amp;gt;&lt;br /&gt;
The fibrosis distribution on the left atrial wall is imaged using dynamic contrast-enhanced MRI. Distributed over different anatomical structures, the fibrosis patterns are treated as &amp;quot;mass&amp;quot; defined on different domains. Under the framework of optimal mass transport (OMT), the masses are transported to a common domain, where statistical analysis can then be applied. Significantly different regions are then characterized by low p-values.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 40%; float: left;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Progress&amp;lt;/h3&amp;gt;&lt;br /&gt;
* C++ code ready&lt;br /&gt;
* Discuss with DBP collaborators and start writing the manuscript&lt;br /&gt;
* Started constructing the CLI extension&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Delivery Mechanism==&lt;br /&gt;
&lt;br /&gt;
This work will be delivered to the NA-MIC Kit as a command-line extension.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* [http://www.na-mic.org/Wiki/index.php/DBP3:Utah Utah DBP]&lt;br /&gt;
* Y. Gao, A. Tannenbaum, S. Bouix; &amp;quot;A Framework for Joint Image-and-Shape Analysis&amp;quot;; SPIE Medical Imaging, 2014&lt;br /&gt;
* Y. Gao, Y. Rathi, S. Bouix, A. Tannenbaum; ''Filtering in the Diffeomorphism Group and the Registration of Point Sets''; IEEE Transactions on Image Processing 21 (10), 4383--4396&lt;br /&gt;
* Y. Gao and S. Bouix, ''Synthesis of realistic subcortical anatomy with known surface deformations''; in MICCAI Workshop on Mesh Processing in Medical Image Analysis, 2012, pp. 80–88.&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2014_Project_Week:AblationSuccessRatePredictionUsingJointImageAndShapeAnalysis&amp;diff=84501</id>
		<title>2014 Project Week:AblationSuccessRatePredictionUsingJointImageAndShapeAnalysis</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2014_Project_Week:AblationSuccessRatePredictionUsingJointImageAndShapeAnalysis&amp;diff=84501"/>
		<updated>2014-01-06T20:17:28Z</updated>

		<summary type="html">&lt;p&gt;Ygao: /* Key Investigators */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Image:PW-SLC2014.png|[[2014_Winter_Project_Week#Projects|Projects List]]&lt;br /&gt;
Image:FibrosisPval20130526.png|p-values of DCE-MRI differences between cured patients and AFib-recurrent patients.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Key Investigators==&lt;br /&gt;
* Yi Gao, LiangJia Zhu, Josh Cates, Rob MacLeod, Sylvain Bouix, Ron Kikinis, Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin: 20px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Objective&amp;lt;/h3&amp;gt;&lt;br /&gt;
Among AFib patients who underwent RF ablation, the relatively high recurrence rate is a concern. We combine image and shape information to predict the RF ablation success rate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Approach, Plan&amp;lt;/h3&amp;gt;&lt;br /&gt;
The fibrosis distribution on the left atrial wall is imaged using dynamic contrast-enhanced MRI. Distributed over different anatomical structures, the fibrosis patterns are treated as &amp;quot;mass&amp;quot; defined on different domains. Under the framework of optimal mass transport (OMT), the masses are transported to a common domain, where statistical analysis can then be applied. Significantly different regions are then characterized by low p-values.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 40%; float: left;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Progress&amp;lt;/h3&amp;gt;&lt;br /&gt;
* C++ code ready&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Delivery Mechanism==&lt;br /&gt;
&lt;br /&gt;
This work will be delivered to the NA-MIC Kit as a command-line extension.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* [http://www.na-mic.org/Wiki/index.php/DBP3:Utah Utah DBP]&lt;br /&gt;
* Y. Gao, A. Tannenbaum, S. Bouix; &amp;quot;A Framework for Joint Image-and-Shape Analysis&amp;quot;; SPIE Medical Imaging, 2014&lt;br /&gt;
* Y. Gao, Y. Rathi, S. Bouix, A. Tannenbaum; ''Filtering in the Diffeomorphism Group and the Registration of Point Sets''; IEEE Transactions on Image Processing 21 (10), 4383--4396&lt;br /&gt;
* Y. Gao and S. Bouix, ''Synthesis of realistic subcortical anatomy with known surface deformations''; in MICCAI Workshop on Mesh Processing in Medical Image Analysis, 2012, pp. 80–88.&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2014_Project_Week:AblationSuccessRatePredictionUsingJointImageAndShapeAnalysis&amp;diff=84464</id>
		<title>2014 Project Week:AblationSuccessRatePredictionUsingJointImageAndShapeAnalysis</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2014_Project_Week:AblationSuccessRatePredictionUsingJointImageAndShapeAnalysis&amp;diff=84464"/>
		<updated>2014-01-06T17:40:27Z</updated>

		<summary type="html">&lt;p&gt;Ygao: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Image:PW-SLC2014.png|[[2014_Winter_Project_Week#Projects|Projects List]]&lt;br /&gt;
Image:FibrosisPval20130526.png|p-values of DCE-MRI differences between cured patients and AFib-recurrent patients.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Key Investigators==&lt;br /&gt;
* Yi Gao, LiangJia Zhu, Josh Cates, Rob MacLeod, Sylvain Bouix, Ron Kikinis, Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin: 20px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Objective&amp;lt;/h3&amp;gt;&lt;br /&gt;
Among AFib patients who underwent RF ablation, the relatively high recurrence rate is a concern. We combine image and shape information to predict the RF ablation success rate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Approach, Plan&amp;lt;/h3&amp;gt;&lt;br /&gt;
The fibrosis distribution on the left atrial wall is imaged using dynamic contrast-enhanced MRI. Distributed over different anatomical structures, the fibrosis patterns are treated as &amp;quot;mass&amp;quot; defined on different domains. Under the framework of optimal mass transport (OMT), the masses are transported to a common domain, where statistical analysis can then be applied. Significantly different regions are then characterized by low p-values.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 40%; float: left;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Progress&amp;lt;/h3&amp;gt;&lt;br /&gt;
* Discussion with Josh about similar work: CARMA has also worked on this using particle-based shape analysis on the surface; this module uses volumetric OMT for the wall volume.&lt;br /&gt;
* Next&lt;br /&gt;
** Validation of particle- and OMT-based methods&lt;br /&gt;
** Test extension, then nightly build&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Delivery Mechanism==&lt;br /&gt;
&lt;br /&gt;
This work will be delivered to the NA-MIC Kit as a command-line extension.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* [http://www.na-mic.org/Wiki/index.php/DBP3:Utah Utah DBP]&lt;br /&gt;
* Y. Gao, A. Tannenbaum, S. Bouix; &amp;quot;A Framework for Joint Image-and-Shape Analysis&amp;quot;; SPIE Medical Imaging, 2014&lt;br /&gt;
* Y. Gao, Y. Rathi, S. Bouix, A. Tannenbaum; ''Filtering in the Diffeomorphism Group and the Registration of Point Sets''; IEEE Transactions on Image Processing 21 (10), 4383--4396&lt;br /&gt;
* Y. Gao and S. Bouix, ''Synthesis of realistic subcortical anatomy with known surface deformations''; in MICCAI Workshop on Mesh Processing in Medical Image Analysis, 2012, pp. 80–88.&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2014_Project_Week:AblationSuccessRatePredictionUsingJointImageAndShapeAnalysis&amp;diff=84462</id>
		<title>2014 Project Week:AblationSuccessRatePredictionUsingJointImageAndShapeAnalysis</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2014_Project_Week:AblationSuccessRatePredictionUsingJointImageAndShapeAnalysis&amp;diff=84462"/>
		<updated>2014-01-06T17:38:45Z</updated>

		<summary type="html">&lt;p&gt;Ygao: Created page with ' &amp;lt;gallery&amp;gt; Image:PW-SLC2014.png|Projects List Image:FibrosisPval20130526.png|p-value between DCE-MRI between cured patiens and AFib recurren…'&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Image:PW-SLC2014.png|[[2014_Winter_Project_Week#Projects|Projects List]]&lt;br /&gt;
Image:FibrosisPval20130526.png|p-values of DCE-MRI differences between cured patients and AFib-recurrent patients.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Key Investigators==&lt;br /&gt;
* Yi Gao, LiangJia Zhu, Josh Cates, Rob MacLeod, Sylvain Bouix, Ron Kikinis, Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin: 20px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Objective&amp;lt;/h3&amp;gt;&lt;br /&gt;
Among AFib patients who underwent RF ablation, the relatively high recurrence rate is a concern. We combine image and shape information to predict the RF ablation success rate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Approach, Plan&amp;lt;/h3&amp;gt;&lt;br /&gt;
The fibrosis distribution on the left atrial wall is imaged using dynamic contrast-enhanced MRI. Distributed over different anatomical structures, the fibrosis patterns are treated as &amp;quot;mass&amp;quot; defined on different domains. Under the framework of optimal mass transport (OMT), the masses are transported to a common domain, where statistical analysis can then be applied. Significantly different regions are then characterized by low p-values.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 40%; float: left;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Progress&amp;lt;/h3&amp;gt;&lt;br /&gt;
* Discussion with Josh about similar work: CARMA has also worked on this using particle-based shape analysis on the surface; this module uses volumetric OMT for the wall volume.&lt;br /&gt;
* Next&lt;br /&gt;
** Validation of particle- and OMT-based methods&lt;br /&gt;
** Test extension, then nightly build&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Delivery Mechanism==&lt;br /&gt;
&lt;br /&gt;
This work will be delivered to the NA-MIC Kit as a command-line extension.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* [http://www.na-mic.org/Wiki/index.php/DBP3:Utah Utah DBP]&lt;br /&gt;
* Y. Gao, Y. Rathi, S. Bouix, A. Tannenbaum; ''Filtering in the Diffeomorphism Group and the Registration of Point Sets''; IEEE Transactions on Image Processing 21 (10), 4383--4396&lt;br /&gt;
* Y. Gao and S. Bouix, ''Synthesis of realistic subcortical anatomy with known surface deformations''; in MICCAI Workshop on Mesh Processing in Medical Image Analysis, 2012, pp. 80–88.&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2014_Winter_Project_Week&amp;diff=84461</id>
		<title>2014 Winter Project Week</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2014_Winter_Project_Week&amp;diff=84461"/>
		<updated>2014-01-06T17:37:51Z</updated>

		<summary type="html">&lt;p&gt;Ygao: /* Atrial Fibrillation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[Project Events]], [[AHM_2014]], [[Events]]&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
[[image:PW-SLC2014.png|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Project Week is a hands-on activity -- programming using the open source [[NA-MIC-Kit|NA-MIC Kit]], algorithm design, and clinical application -- that has become one of the major events in the NA-MIC, NCIGT, and NAC calendars. It is held in the summer at MIT, typically the last week of June, and a shorter version is held in Salt Lake City in the winter, typically the second week of January.&lt;br /&gt;
&lt;br /&gt;
Active preparation begins 6-8 weeks prior to the meeting, when a kick-off teleconference is hosted by the NA-MIC Engineering, Dissemination, and Leadership teams, the primary hosts of this event.  Invitations to this call are sent to all NA-MIC members, past attendees of the event, as well as any parties who have expressed an interest in working with NA-MIC. The main goal of the kick-off call is to get an idea of which groups/projects will be active at the upcoming event, and to ensure that there is sufficient NA-MIC coverage for all. Subsequent teleconferences allow the hosts to finalize the project teams, consolidate any common components, and identify topics that should be discussed in breakout sessions. In the final days leading up to the meeting, all project teams are asked to fill in a template page on this wiki that describes the objectives and plan of their projects.&lt;br /&gt;
&lt;br /&gt;
The event itself starts off with a short presentation by each project team, driven using their previously created description, and allows all participants to be acquainted with others who are doing similar work. In the rest of the week, about half the time is spent in breakout discussions on topics of common interest to subsets of the attendees, and the other half is spent in project teams, doing hands-on programming, algorithm design, or clinical application of NA-MIC Kit tools.  The hands-on activities are done in 10-20 small teams of size 3-5, each with a mix of experts in NA-MIC Kit software, algorithms, and clinical application.  To facilitate this work, a large room is set up with several tables, with internet and power access; each team gathers at a table with their individual laptops, connects to the internet to download their software and data, and is able to work on their projects.  On the last day of the event, a closing presentation session is held in which each project team presents a summary of what they accomplished during the week.&lt;br /&gt;
&lt;br /&gt;
A summary of all past NA-MIC Project Events is available [[Project_Events#Past|here]].&lt;br /&gt;
= Dates.Venue.Registration =&lt;br /&gt;
&lt;br /&gt;
Please [[AHM_2014#Dates_Venue_Registration|click here for Dates, Venue, and Registration]] for this event.&lt;br /&gt;
&lt;br /&gt;
= [[AHM_2014#Agenda|'''AGENDA''']] and Project List=&lt;br /&gt;
&lt;br /&gt;
Please:&lt;br /&gt;
*  [[AHM_2014#Agenda|'''Click here for the agenda for AHM 2014 and Project Week''']].&lt;br /&gt;
*  [[#Projects|'''Click here to jump to Project list''']]&lt;br /&gt;
&lt;br /&gt;
=Background and Preparation=&lt;br /&gt;
&lt;br /&gt;
A summary of all past NA-MIC Project Events is available [[Project_Events#Past|here]].&lt;br /&gt;
&lt;br /&gt;
Please make sure that you are on the [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week na-mic-project-week mailing list]&lt;br /&gt;
&lt;br /&gt;
=Projects=&lt;br /&gt;
* [[2014_Project_Week_Template | Template for project pages]]&lt;br /&gt;
&lt;br /&gt;
==TBI==&lt;br /&gt;
*[[2014_Project_Week:TBIatrophy|Multimodal neuroimaging for the quantification of brain atrophy at six months following severe traumatic brain injury]] (Andrei Irimia, SY Matthew Goh, Carinna M. Torgerson, John D. Van Horn)&lt;br /&gt;
*[[2014_Project_Week:TBIdemyelination|Systematic evaluation of axonal demyelination subsequent to traumatic brain injury using structural T1- and T2-weighted magnetic resonance imaging]] (Andrei Irimia, SY Matthew Goh, Carinna M. Torgerson, John D. Van Horn)&lt;br /&gt;
*[[2014_Project_Week:BrainAging|Mapping the effect of traumatic brain injury upon white matter connections in the human brain using 3D Slicer]] (Andrei Irimia, John D. Van Horn)&lt;br /&gt;
*[[2014_Project_Week:LongitudinalDTI|Patient-specific longitudinal DTI analysis in traumatic brain injury]] (Anuja Sharma, Andrei Irimia, Bo Wang, John D. Van Horn, Martin Styner, Guido Gerig)&lt;br /&gt;
*[[2014_Project_Week:TBISegmentation|Testing the interactive segmentation algorithm for traumatic brain injury]] (Bo Wang, Marcel Prastawa, Andrei Irimia, John D. Van Horn, Guido Gerig)&lt;br /&gt;
&lt;br /&gt;
==Atrial Fibrillation==&lt;br /&gt;
*[[2014_Project_Week:MRAFusionRegistration|DEMRI LA Segmentation via Image Fusion (MRA)]] (Josh, Salma, Alan)&lt;br /&gt;
*[[2014_Project_Week:LAFibrosisVisualizationModule|LA Fibrosis / Scar Visualization]] (Josh, Salma, Alan)&lt;br /&gt;
*[[2014_Project_Week:CARMADocumentation|CARMA Extension Documentation Project]] (Josh, Salma)&lt;br /&gt;
*[[2014_Project_Week:GraphCutsLASegmentationModule|LA Segmentation module using multi-column Graph Cuts]] (Gopal, Salma, Josh, Rob, Ross)&lt;br /&gt;
*[[2014_Project_Week:AblationSuccessRatePredictionUsingJointImageAndShapeAnalysis|Ablation Success Rate Prediction Using Joint Image And Shape Analysis]] (Yi Gao, LiangJia Zhu, Josh Cates, Rob MacLeod, Sylvain Bouix, Ron Kikinis, Allen Tannenbaum)&lt;br /&gt;
*[[2014_Project_Week:GrowCutLevelSetLA|Grow cut, level set integration for interactive LA segmentation]] ( Liangjia Zhu, Ivan Kolesov, Yi Gao, Allen Tannenbaum)&lt;br /&gt;
&lt;br /&gt;
==Huntington's Disease==&lt;br /&gt;
*[[2014_Project_Week:DWIDispersion|DWI Dispersion &amp;amp; Compressed Sensing Conversions]] (Hans, CF, Peter Savadjiev, Kent, David)&lt;br /&gt;
*[[2014_Project_Week:Modules scripting|Slicer module scripting?]] (David)&lt;br /&gt;
*[[2014_Project_Week:DWIConverter|DWIConverter?]] (Hans, Kent)&lt;br /&gt;
*[[2014_Project_Week:Slicer_Based_Surface_Template_Estimation|Slicer Based Surface Template Estimation]] (Saurabh Jain, Steve Pieper, Hans Johnson, Josh Cates)&lt;br /&gt;
*[[2014_Project_Week:HD_4DShapes|4D shape analysis: application to HD ]] (James Fishbaugh,Hans Johnson, Guido Gerig)&lt;br /&gt;
*[[2014_Project_Week:Shape_Registration_and_Regression|Shape registration and regression in Slicer4 ]] (James Fishbaugh,Hans Johnson, Guido Gerig)&lt;br /&gt;
&lt;br /&gt;
==Head and Neck Cancer==&lt;br /&gt;
*[[2014_Project_Week:DIR_validation|DIR Validation]] (Nadya and Greg)&lt;br /&gt;
*[[2014_Project_Week:Hybrid_bspline|Hybrid B Spline]] (Nadya, Greg, Steve)&lt;br /&gt;
*[[2014_Project_Week:CarreraSlice|Interactive Segmentation]] (Ivan, LiangJia, Nadya, Yi, Greg, Allen)&lt;br /&gt;
&lt;br /&gt;
==Slicer4 Extensions==&lt;br /&gt;
*[[2014_Project_Week:ShapePopulationViewer|Surface Visualization - ShapePopulationViewer]] (Alexis Girault, Francois Budin, Beatriz Paniagua, Martin Styner)&lt;br /&gt;
*[[2014_Project_Week:DTIAnalysisPipeline|DTI Analysis Pipeline as Slicer4 Extensions]] (Francois Budin, Martin Styner)&lt;br /&gt;
&lt;br /&gt;
==Cardiac==&lt;br /&gt;
*[[2014_Project_Week:CardiacStemCellMonitoring|Monitoring engrafted stem cells in cardiac tissue with time series manganese enhanced MRI]] (Karl Diedrich)&lt;br /&gt;
*[[2014_Project_Week:CardiacCongenitalSegmentation|Whole-heart segmentation of cardiac MR images in congenital heart defect cases]] (Danielle Pace, Polina Golland)&lt;br /&gt;
&lt;br /&gt;
==Stroke==&lt;br /&gt;
&lt;br /&gt;
*[[2014_Project_Week:Multi-Tissue_Stroke_Segmentation|Multi-Tissue Stroke Segmentation]] (Ramesh, Polina B., Polina G.)&lt;br /&gt;
&lt;br /&gt;
==Brain Segmentation==&lt;br /&gt;
*[[2014_Project_Week:MultiAtlas_MultiImage_Segmentation|Multi-Atlas based Multi-Image Segmentation]] (Minjeong Kim, Xiaofeng Liu, Jim Miller, Dinggang Shen)&lt;br /&gt;
&lt;br /&gt;
==Image-Guided Interventions==&lt;br /&gt;
*[[2014_Project_Week:OpenIGTLink| OpenIGTLink Interface: New data types and structures]] (Junichi Tokuda, Andras Lasso, Steve Pieper, ???)&lt;br /&gt;
*[[2014_Project_Week:Ultrasound Visualization and Navigation in Neurosurgery|Ultrasound Visualization and Navigation in Neurosurgery]] (Matthew Toews, Alireza Mehrtash, Csaba Pinter, Andras Lasso, Steve Pieper, William M. Wells III)&lt;br /&gt;
*[[2014_Project_Week:PercutaneousApproachAnalysis| Percutaneous Approach Analysis]] (Atsushi Yamada, Junichi Tokuda, Koichiro Murakami, ??)&lt;br /&gt;
*[[2014_Project_Week:EndoscopeConsole| Endoscope Console]] (Atsushi Yamada, Junichi Tokuda, ??)&lt;br /&gt;
*[[2014_Project_Week:Statistical Shape Model for robotic spine surgery| Statistical Shape Model for robotic spine surgery]] (Marine Clogenson, ???)&lt;br /&gt;
*[[2014_Project_Week:ImmersiveVR| Immersive VR devices]] (Franklin King, Andras Lasso)&lt;br /&gt;
&lt;br /&gt;
==Radiation Therapy==&lt;br /&gt;
*[[2014_Project_Week:DICOM_RT|DICOM RT Export]] (Greg Sharp, Kevin Wang, others??)&lt;br /&gt;
*[[2014_Project_Week:DICOM_SRO|DICOM Spatial Registration Export]] (Greg Sharp, Kevin Wang, others??)&lt;br /&gt;
*[[2014_Project_Week:Registration_Evaluation|Interactive Registration and Evaluation]] (Kevin Wang, Steve Pieper, Greg Sharp)&lt;br /&gt;
*[[2014_Project_Week:External_Beam_Planning|External Beam Planning Visualization]] (Kevin Wang, Greg Sharp, Csaba Pinter)&lt;br /&gt;
&lt;br /&gt;
==TMJ-OA==&lt;br /&gt;
* [[2014_Winter_Project_Week:Constrain Fiducial along Suface|Constrain Fiducial along Surface]] (Vinicius Boen, Nicole Aucoin, Beatriz Paniagua)&lt;br /&gt;
* [[2014_Winter_Project_Week:Cropping Multiple Surfaces|Cropping multiple surfaces simultaneously]] (Alexander, Jc, Steve, Vinicius, Beatriz Paniagua)&lt;br /&gt;
* [[2014_Winter_Project_Week:Color Code Tables|Color Coded Tables]] (Vinicius Boen, Beatriz Paniagua, Nicole Aucoin, Steve Pieper, Francois Budin)&lt;br /&gt;
* [[2014_Winter_Project_Week:4DShape Analysis of mandibular changes|4DShape Analysis of mandibular changes]] (Vinicius Boen, James Fishbaugh, Guido Gerig)&lt;br /&gt;
&lt;br /&gt;
==Chronic Obstructive Pulmonary Disease ==&lt;br /&gt;
* [[2014_Winter_Project_Week:CIP Core|Chest Imaging Platform (CIP) - Core Infrastructure]] (Raul San Jose, Rola Harmouche, Pietro Nardelli, James Ross)&lt;br /&gt;
* [[2014_Winter_Project_Week:CIP Infrastructure Testing and SuperBuild|CIP Testing and SuperBuild]] (James Ross, Raul San Jose)&lt;br /&gt;
* [[2014_Winter_Project_Week:Slicer CIP Slicer MRML| Slicer CIP- MRML consolidation]] (Pietro Nardelli, Rola Harmouche,  James Ross, Raul San Jose)&lt;br /&gt;
* [[2014_Winter_Project_Week:Slicer CIP  Modules| Slicer CIP- Modules]] (Rola Harmouche, Pietro Nardelli, James Ross, Raul San Jose)&lt;br /&gt;
&lt;br /&gt;
==[http://qiicr.org QIICR]==&lt;br /&gt;
&lt;br /&gt;
*[[2014_Project_Week:4D_NIfTI_Multivolume|4D NIfTI Multivolume Support]] (Jayashree, Andrey, Jim, John)&lt;br /&gt;
*[[2014_Project_Week:RT_FormatConversions|RT and ITK Format Conversions]] (Jayashree, Andras, Csaba, John)&lt;br /&gt;
*[[2014_Project_Week:BatchConvertDICOM|Python Scripting Slicer DICOM read/write to convert segmentation objects]] (Jayashree, Andrey, Alireza, Steve, Jc, Hans, John)&lt;br /&gt;
*[[2014_Project_Week:PkModeling_user_tool|User module for DCE modeling]] (Andrey, Jayashree, Jim, Alireza, Steve, Ron)&lt;br /&gt;
*[[2014_Project_Week:DICOM_enhanced_multiframe|DICOM enhanced multiframe object support]] (Andrey, Alireza, David Clunie, Jayashree, Steve, Reinhard, Jim)&lt;br /&gt;
*[[2014_Project_Week:Quantitative_Index_Computation|Quantitative Index Computation]] (Ethan Ulrich, Reinhard Beichel, Nicole, Andrey, Jim)&lt;br /&gt;
*[[2014_Project_Week:TCIA Browser Extension in Slicer|TCIA Browser Extension in Slicer]] (Alireza, Andrey, Steve, Ron)&lt;br /&gt;
&lt;br /&gt;
==Infrastructure==&lt;br /&gt;
*[[2014_Project_Week:MRMLSceneSpeedUp|MRML Scene speed up]] (Jc, Andras Lasso)&lt;br /&gt;
*[[2014_Project_Week:MultidimensionalDataSupport|Multidimensional data support]] (Andras Lasso, Andriy Fedorov, Steve Pieper, JC, Kevin Wang)&lt;br /&gt;
*[[2014_Project_Week:MarkupsModule|Markups Module]] (Nicole Aucoin)&lt;br /&gt;
* [[2014_Winter_Project_Week:Logging|Logging (standardization, logging to file)]] (Nicole Aucoin, Steve Pieper, Jc, Andras Lasso, Csaba Pinter, ???)&lt;br /&gt;
*[[2014_Project_Week:CLI|CLI]] (Jim Miller)&lt;br /&gt;
* [[2014_Winter_Project_Week:Steered Registration|Steered Registration (LandmarkRegistration module)]] (Steve, Greg, Kevin, Vinicius, Marcel)&lt;br /&gt;
* [[2014_Winter_Project_Week:MRB Extension Dependencies|MRB Extension Dependencies]] (Steve, Jc, Jim, Nicole, Alex)&lt;br /&gt;
* [[2014_Winter_Project_Week:SubjectHierarchy|Subject hierarchy]] (Csaba Pinter, Andras Lasso, Steve Pieper, Jc, Jayashree, John, Alireza, Andrey)&lt;br /&gt;
* [[2014_Winter_Project_Week:IntegrationOfContourObject|Integration of Contour object]] (Csaba Pinter, Andras Lasso, Steve Pieper, ???)&lt;br /&gt;
* [[2014_Winter_Project_Week:NonlinearTransforms|Integration nonlinear transforms]] (Alex Yarmarkovich, Csaba Pinter, Andras Lasso, Steve Pieper, ???)&lt;br /&gt;
* [[2014_Winter_Project_Week:ParameterSerialization | JSON Parameter Serialization]] (Matt McCormick, Steve Pieper, Jim Miller)&lt;br /&gt;
* [[2014_Winter_Project_Week:XNATSlicerLink| 3DSlicer annotations in XNAT]] (Erwin Vast, Nicole Aucoin, Andrey Fedorov)&lt;br /&gt;
* [[2014_Winter_Project_Week:PlanarImage|Planar Images]] (Franklin King, Csaba Pinter, Andras Lasso)&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2014_Project_Week:JointImageAndShapeAnalysisForFibrosisDistribution&amp;diff=84309</id>
		<title>2014 Project Week:JointImageAndShapeAnalysisForFibrosisDistribution</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2014_Project_Week:JointImageAndShapeAnalysisForFibrosisDistribution&amp;diff=84309"/>
		<updated>2014-01-04T04:25:01Z</updated>

		<summary type="html">&lt;p&gt;Ygao: Created page with '__NOTOC__ &amp;lt;gallery&amp;gt; Image:PW-SLC2014.png|Projects List Image:FibrosisPval20130526.png|p-value between DCE-MRI between cured patiens and AFib…'&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Image:PW-SLC2014.png|[[2014_Winter_Project_Week#Projects|Projects List]]&lt;br /&gt;
Image:FibrosisPval20130526.png|p-values of DCE-MRI between cured patients and AFib-recurrent patients.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Key Investigators==&lt;br /&gt;
* Yi Gao, LiangJia Zhu, Josh Cates, Rob MacLeod, Sylvain Bouix, Ron Kikinis, Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin: 20px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Objective&amp;lt;/h3&amp;gt;&lt;br /&gt;
Among AFib patients who underwent RF ablation, the relatively high AFib recurrence rate is a concern. Correlating the cure/recurrence outcome with the distribution of fibrosis would provide insight into disease assessment and treatment planning.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Approach, Plan&amp;lt;/h3&amp;gt;&lt;br /&gt;
The fibrosis distribution on the left atrium wall is imaged using dynamic contrast-enhanced MRI. Because the distributions lie on different anatomical structures, they are considered as &amp;quot;masses&amp;quot; defined on different domains. Under the framework of optimal mass transport (OMT), the masses are transported to a common domain where statistical analysis can then be applied. Significantly different regions are then characterized by low p-values.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 40%; float: left;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Progress&amp;lt;/h3&amp;gt;&lt;br /&gt;
* Discussed similar work with Josh. CARMA has also worked on this using particle-based shape analysis on the surface. This module uses volumetric OMT for the wall volume.&lt;br /&gt;
* Next&lt;br /&gt;
** Validation of particle-based and OMT-based methods&lt;br /&gt;
** Test the extension, then add it to the nightly build&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Delivery Mechanism==&lt;br /&gt;
&lt;br /&gt;
This work will be delivered to the NA-MIC Kit as a command-line extension.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* [http://www.na-mic.org/Wiki/index.php/DBP3:Utah Utah DBP]&lt;br /&gt;
* Y Gao, Y Rathi, S Bouix, A Tannenbaum; ''Filtering in the Diffeomorphism Group and the Registration of Point Sets''; IEEE Transactions on Image Processing 21 (10), 4383--4396&lt;br /&gt;
* Y. Gao and S. Bouix, ''Synthesis of realistic subcortical anatomy with known surface deformations''; in MICCAI Workshop on Mesh Processing in Medical Image Analysis, 2012, pp. 80–88.&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2014_Winter_Project_Week&amp;diff=84308</id>
		<title>2014 Winter Project Week</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2014_Winter_Project_Week&amp;diff=84308"/>
		<updated>2014-01-04T04:22:23Z</updated>

		<summary type="html">&lt;p&gt;Ygao: /* Atrial Fibrillation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[Project Events]], [[AHM_2014]], [[Events]]&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
[[image:PW-SLC2014.png|300px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Project Week is a hands-on activity -- programming using the open source [[NA-MIC-Kit|NA-MIC Kit]], algorithm design, and clinical application -- that has become one of the major events in the NA-MIC, NCIGT, and NAC calendars. It is held in the summer at MIT, typically the last week of June, and a shorter version is held in Salt Lake City in the winter, typically the second week of January.&lt;br /&gt;
&lt;br /&gt;
Active preparation begins 6-8 weeks prior to the meeting, when a kick-off teleconference is hosted by the NA-MIC Engineering, Dissemination, and Leadership teams, the primary hosts of this event.  Invitations to this call are sent to all NA-MIC members, past attendees of the event, as well as any parties who have expressed an interest in working with NA-MIC. The main goal of the kick-off call is to get an idea of which groups/projects will be active at the upcoming event, and to ensure that there is sufficient NA-MIC coverage for all. Subsequent teleconferences allow the hosts to finalize the project teams, consolidate any common components, and identify topics that should be discussed in breakout sessions. In the final days leading up to the meeting, all project teams are asked to fill in a template page on this wiki that describes the objectives and plan of their projects.&lt;br /&gt;
&lt;br /&gt;
The event itself starts off with a short presentation by each project team, driven using their previously created description, and allows all participants to be acquainted with others who are doing similar work. In the rest of the week, about half the time is spent in breakout discussions on topics of common interest to subsets of the attendees, and the other half is spent in project teams, doing hands-on programming, algorithm design, or clinical application of NA-MIC Kit tools.  The hands-on activities are done in 10-20 small teams of size 3-5, each with a mix of experts in NA-MIC Kit software, algorithms, and clinical application.  To facilitate this work, a large room is set up with several tables, with internet and power access; each team gathers at a table with their individual laptops, connects to the internet to download their software and data, and is able to work on their projects.  On the last day of the event, a closing presentation session is held in which each project team presents a summary of what they accomplished during the week.&lt;br /&gt;
&lt;br /&gt;
A summary of all past NA-MIC Project Events is available [[Project_Events#Past|here]].&lt;br /&gt;
= Dates.Venue.Registration =&lt;br /&gt;
&lt;br /&gt;
Please [[AHM_2014#Dates_Venue_Registration|click here for Dates, Venue, and Registration]] for this event.&lt;br /&gt;
&lt;br /&gt;
= [[AHM_2014#Agenda|'''AGENDA''']] and Project List=&lt;br /&gt;
&lt;br /&gt;
Please:&lt;br /&gt;
*  [[AHM_2014#Agenda|'''Click here for the agenda for AHM 2014 and Project Week''']].&lt;br /&gt;
*  [[#Projects|'''Click here to jump to Project list''']]&lt;br /&gt;
&lt;br /&gt;
=Background and Preparation=&lt;br /&gt;
&lt;br /&gt;
A summary of all past NA-MIC Project Events is available [[Project_Events#Past|here]].&lt;br /&gt;
&lt;br /&gt;
Please make sure that you are on the [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week na-mic-project-week mailing list]&lt;br /&gt;
&lt;br /&gt;
=Projects=&lt;br /&gt;
* [[2014_Project_Week_Template | Template for project pages]]&lt;br /&gt;
&lt;br /&gt;
==TBI==&lt;br /&gt;
*[[2014_Project_Week:TBIatrophy|Multimodal neuroimaging for the quantification of brain atrophy at six months following severe traumatic brain injury]] (Andrei Irimia, SY Matthew Goh, Carinna M. Torgerson, John D. Van Horn)&lt;br /&gt;
*[[2014_Project_Week:TBIdemyelination|Systematic evaluation of axonal demyelination subsequent to traumatic brain injury using structural T1- and T2-weighted magnetic resonance imaging]] (Andrei Irimia, SY Matthew Goh, Carinna M. Torgerson, John D. Van Horn)&lt;br /&gt;
*[[2014_Project_Week:BrainAging|Mapping the effect of traumatic brain injury upon white matter connections in the human brain using 3D Slicer]] (Andrei Irimia, John D. Van Horn)&lt;br /&gt;
*[[2014_Project_Week:LongitudinalDTI|Patient-specific longitudinal DTI analysis in traumatic brain injury]] (Anuja Sharma, Andrei Irimia, Bo Wang, John D. Van Horn, Martin Styner, Guido Gerig)&lt;br /&gt;
*[[2014_Project_Week:TBISegmentation|Testing the interactive segmentation algorithm for traumatic brain injury]] (Bo Wang, Marcel Prastawa, Andrei Irimia, John D. Van Horn, Guido Gerig)&lt;br /&gt;
&lt;br /&gt;
==Atrial Fibrillation==&lt;br /&gt;
*[[2014_Project_Week:MRAFusionRegistration|DEMRI LA Segmentation via Image Fusion (MRA)]] (Josh, Salma, Alan)&lt;br /&gt;
*[[2014_Project_Week:LAFibrosisVisualizationModule|LA Fibrosis / Scar Visualization]] (Josh, Salma, Alan)&lt;br /&gt;
*[[2014_Project_Week:CARMADocumentation|CARMA Extension Documentation Project]] (Josh, Salma)&lt;br /&gt;
*[[2014_Project_Week:GraphCutsLASegmentationModule|LA Segmentation module using multi-column Graph Cuts]] (Gopal, Salma, Josh, Rob, Ross)&lt;br /&gt;
*[[2014_Project_Week:JointImageAndShapeAnalysisForFibrosisDistribution|Joint Image and Shape Analysis for Fibrosis Distribution]] (Yi Gao, LiangJia Zhu, Josh Cates, Rob MacLeod, Sylvain Bouix, Ron Kikinis, Allen Tannenbaum)&lt;br /&gt;
&lt;br /&gt;
==Cardiac==&lt;br /&gt;
*[[2014_Project_Week:CardiacStemCellMonitoring|Monitoring engrafted stem cells in cardiac tissue with time series manganese enhanced MRI]] (Karl Diedrich)&lt;br /&gt;
&lt;br /&gt;
==Slicer4 Extensions==&lt;br /&gt;
*[[2014_Project_Week:ShapePopulationViewer|Surface Visualization - ShapePopulationViewer]] (Alexis Girault, Francois Budin, Beatriz Paniagua, Martin Styner)&lt;br /&gt;
&lt;br /&gt;
==Huntington's Disease==&lt;br /&gt;
*[[2014_Project_Week:DWIDispersion|DWI Dispersion]] (Hans, CF, Peter Savadjiev)&lt;br /&gt;
*[[2014_Project_Week:DTIAnalysis|DTI Compressed Sensing?]] (Hans, CF)&lt;br /&gt;
*[[2014_Project_Week:Modules scripting|Slicer module scripting?]] (Dave)&lt;br /&gt;
*[[2014_Project_Week:DWIConverter|DWIConverter?]] (Hans, Kent)&lt;br /&gt;
*[[2014_Project_Week:Slicer_Based_Surface_Template_Estimation|Slicer Based Surface Template Estimation]] (Saurabh JHU, Steve Pieper, Hans Johnson, Josh Cates)&lt;br /&gt;
*[[2014_Project_Week:HD_4DShapes|4D shape analysis: application to HD ]] (James Fishbaugh,Hans Johnson, Guido Gerig)&lt;br /&gt;
*[[2014_Project_Week:Shape_Registration_and_Regression|Shape registration and regression in Slicer4 ]] (James Fishbaugh,Hans Johnson, Guido Gerig)&lt;br /&gt;
&lt;br /&gt;
==Head and Neck Cancer==&lt;br /&gt;
*[[2014_Project_Week:DIR_validation|DIR Validation]] (Nadya and Greg)&lt;br /&gt;
*[[2014_Project_Week:Hybrid_bspline|Hybrid B Spline]] (Nadya, Greg, Steve)&lt;br /&gt;
*[[2014_Project_Week:CarreraSlice|Interactive Segmentation]] (Ivan, LiangJia, Nadya, Yi, Greg, Allen)&lt;br /&gt;
&lt;br /&gt;
==Stroke==&lt;br /&gt;
&lt;br /&gt;
*[[2014_Project_Week:Multi-Tissue_Stroke_Segmentation|Multi-Tissue Stroke Segmentation]] (Ramesh, Polina B., Polina G.)&lt;br /&gt;
&lt;br /&gt;
==Brain Segmentation==&lt;br /&gt;
*[[2014_Project_Week:MultiAtlas_MultiImage_Segmentation|Multi-Atlas based Multi-Image Segmentation]] (Minjeong Kim, Xiaofeng Liu, Jim Miller, Dinggang Shen)&lt;br /&gt;
&lt;br /&gt;
==Image-Guided Interventions==&lt;br /&gt;
*[[2014_Project_Week:Ultrasound Visualization and Navigation in Neurosurgery|Ultrasound Visualization and Navigation in Neurosurgery]] (Matthew Toews, Alireza Mehrtash, Csaba Pinter, Andras Lasso, Steve Pieper, William M. Wells III)&lt;br /&gt;
*[[2014_Project_Week:OpenIGTLink| OpenIGTLink Interface: New data types and structures]] (Junichi Tokuda, Andras Lasso, Steve Pieper, ???)&lt;br /&gt;
*[[2014_Project_Week:Statistical Shape Model for robotic spine surgery| Statistical Shape Model for robotic spine surgery]] (Marine Clogenson, ???)&lt;br /&gt;
&lt;br /&gt;
==Radiation Therapy==&lt;br /&gt;
*[[2014_Project_Week:DICOM_RT|DICOM RT Export]] (Greg Sharp, Kevin Wang, others??)&lt;br /&gt;
*[[2014_Project_Week:DICOM_SRO|DICOM Spatial Registration Export]] (Greg Sharp, Kevin Wang, others??)&lt;br /&gt;
*[[2014_Project_Week:Registration_Evaluation|Interactive Registration and Evaluation]] (Kevin Wang, Greg Sharp, others??)&lt;br /&gt;
*[[2014_Project_Week:External_Beam_Planning|External Beam Planning Visualization]] (Kevin Wang, Greg Sharp, Csaba Pinter)&lt;br /&gt;
&lt;br /&gt;
==Medical Robotics==&lt;br /&gt;
==[http://qiicr.org QIICR]==&lt;br /&gt;
&lt;br /&gt;
*[[2014_Project_Week:4D_NIfTI_Multivolume|4D NIfTI Multivolume Support]] (Jayashree, Andrey, Jim, John)&lt;br /&gt;
*[[2014_Project_Week:RT_FormatConversions|RT and ITK Format Conversions]] (Jayashree, Andras, Csaba, John)&lt;br /&gt;
*[[2014_Project_Week:BatchConvertDICOM|Python Scripting Slicer DICOM read/write to convert segmentation objects]] (Jayashree, Andrey, Alireza, Steve, Jc, Hans, John)&lt;br /&gt;
*[[2014_Project_Week:PkModeling_user_tool|User module for DCE modeling]] (Andrey, Jayashree, Jim, Alireza, Steve, Ron)&lt;br /&gt;
*[[2014_Project_Week:DICOM_enhanced_multiframe|DICOM enhanced multiframe object support]] (Andrey, Alireza, David Clunie, Jayashree, Steve, Reinhard, Jim)&lt;br /&gt;
*[[2014_Project_Week:Quantitative_Index_Computation|Quantitative Index Computation]] (Ethan Ulrich, Reinhard Beichel, Nicole, Andrey, Jim)&lt;br /&gt;
*[[2014_Project_Week:TCIA Browser Extension in Slicer|TCIA Browser Extension in Slicer]] (Alireza, Andrey, Steve, Ron)&lt;br /&gt;
&lt;br /&gt;
==TMJ-OA==&lt;br /&gt;
* [[2014_Winter_Project_Week:Constrain Fiducial along Suface|Constrain Fiducial along Surface]] (Vinicius Boen, Nicole Aucoin, Beatriz Paniagua)&lt;br /&gt;
* [[2014_Winter_Project_Week:Cropping Multiple Surfaces|Cropping multiple surfaces simultaneously]] (Alexander, Jc, Steve, Vinicius, Beatriz Paniagua)&lt;br /&gt;
* [[2014_Winter_Project_Week:Color Code Tables|Color Coded Tables]] (Beatriz Paniagua, Vinicius Boen, Nicole Aucoin, Steve Pieper, Francois Budin)&lt;br /&gt;
* [[2014_Winter_Project_Week:4DShape Analysis of mandibular changes|4DShape Analysis of mandibular changes]] (James Fishbaugh, Guido Gerig, Vinicius Boen)&lt;br /&gt;
&lt;br /&gt;
==Chronic Obstructive Pulmonary Disease ==&lt;br /&gt;
* [[2014_Winter_Project_Week:CIP Core|Chest Imaging Platform (CIP) - Core Infrastructure]] (Raul San Jose, Rola Harmouche, Pietro Nardelli, James Ross)&lt;br /&gt;
* [[2014_Winter_Project_Week:CIP Infrastructure Testing and SuperBuild|CIP Testing and SuperBuild]] (James Ross, Raul San Jose)&lt;br /&gt;
* [[2014_Winter_Project_Week:Slicer CIP Slicer MRML| Slicer CIP- MRML consolidation]] (Pietro Nardelli, Rola Harmouche,  James Ross, Raul San Jose)&lt;br /&gt;
* [[2014_Winter_Project_Week:Slicer CIP  Modules| Slicer CIP- Modules]] (Rola Harmouche, Pietro Nardelli, James Ross, Raul San Jose)&lt;br /&gt;
&lt;br /&gt;
==Infrastructure==&lt;br /&gt;
*[[2014_Project_Week:MRMLSceneSpeedUp|MRML Scene speed up]] (Jc, Andras Lasso)&lt;br /&gt;
*[[2014_Project_Week:MultidimensionalDataSupport|Multidimensional data support]] (Andras Lasso, Andriy Fedorov, Steve Pieper, JC, Kevin Wang)&lt;br /&gt;
*CLI - Resources? Conditionals? Autonaming? Provenance? CTK unification? (Jim Miller)&lt;br /&gt;
*[[2014_Project_Week:MarkupsModule|Markups Module]] (Nicole Aucoin)&lt;br /&gt;
* [[2014_Winter_Project_Week:Steered Registration|Steered Registration]] (Steve, Greg, Kevin, Vinicius, Marcel)&lt;br /&gt;
* [[2014_Winter_Project_Week:MRB Extension Dependencies|MRB Extension Dependencies]] (Steve, Jc, Jim, Nicole, Alex)&lt;br /&gt;
* [[2014_Winter_Project_Week:SubjectHierarchy|Subject hierarchy]] (Csaba Pinter, Andras Lasso, Steve Pieper, Jc, Jayashree, John, Alireza, Andrey)&lt;br /&gt;
* [[2014_Winter_Project_Week:IntegrationOfContourObject|Integration of Contour object]] (Csaba Pinter, Andras Lasso, Steve Pieper, ???)&lt;br /&gt;
* [[2014_Winter_Project_Week:NonlinearTransforms|Integration nonlinear transforms]] (Alex Yarmarkovich, Csaba Pinter, Andras Lasso, Steve Pieper, ???)&lt;br /&gt;
* [[2014_Winter_Project_Week:Logging|Logging (standardization, logging to file)]] (Nicole Aucoin, Steve Pieper, Jc, Andras Lasso, Csaba Pinter, ???)&lt;br /&gt;
* [[2014_Winter_Project_Week:XNATSlicerLink| 3DSlicer annotations in XNAT]] (Erwin Vast, Nicole Aucoin, Andrey Fedorov)&lt;br /&gt;
* [[2014_Winter_Project_Week:ParameterSerialization | JSON Parameter Serialization]] (Matt McCormick, ???)&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DBP_Utah_Atrial_Fibrillation_2014&amp;diff=83992</id>
		<title>DBP Utah Atrial Fibrillation 2014</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DBP_Utah_Atrial_Fibrillation_2014&amp;diff=83992"/>
		<updated>2013-12-13T04:36:13Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; [[AHM_2014#Agenda|Back to AHM_2014 Agenda]]&lt;br /&gt;
&lt;br /&gt;
*Time: 2-3pm&lt;br /&gt;
*Goal: Get together with your partners from algorithm and engineering to make plans for the upcoming year.&lt;br /&gt;
**Create Slicer modules&lt;br /&gt;
**Create Slicer extensions&lt;br /&gt;
**Create Slicer workflows&lt;br /&gt;
*DBP PI: Rob MacLeod&lt;br /&gt;
*Algorithms as extensions in Slicer: Ross Whitaker, Allen Tannenbaum&lt;br /&gt;
**Left atrial scar segmenter (Liangjia Zhu,  Yi Gao, Josh Cates, Alan Morris, Danny Perry, Greg Gardner, Rob MacLeod, Allen Tannenbaum)&lt;br /&gt;
**Left atrium segmenter (Liangjia Zhu,  Yi Gao, Josh Cates, Alan Morris, Danny Perry, Greg Gardner, Rob MacLeod, Allen Tannenbaum)&lt;br /&gt;
**Fibrosis distribution analysis (Yi Gao, Liangjia Zhu, Josh Cates, Alan Morris, Danny Perry, Greg Gardner, Rob MacLeod, Sylvain Bouix, Allen Tannenbaum)&lt;br /&gt;
*Engineering: workflows in collaboration with DBP: Jim Miller&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:GliomaSubtypeClassification&amp;diff=83774</id>
		<title>Projects:GliomaSubtypeClassification</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:GliomaSubtypeClassification&amp;diff=83774"/>
		<updated>2013-11-16T01:15:56Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[Algorithm:Stony Brook|Stony Brook Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Glioma Subtype Classification from Mass Spectrometry Data =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
Glioma histology is the primary factor in prognostic estimates and is used to determine the proper course of treatment. Furthermore, due to the sensitivity of the cranial environment, real-time tumor cell classification and boundary detection can aid the precision and completeness of tumor resection. Building on recent improvements in mass spectrometry that allow data collection in ambient environments without sample preparation, the goal is to provide surgeons with histopathological information for a resected sample.&lt;br /&gt;
&lt;br /&gt;
== Result ==&lt;br /&gt;
&lt;br /&gt;
== Key Investigators ==&lt;br /&gt;
Georgia Tech: Jacob Huang, Behnood Gholami, and Allen Tannenbaum&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:ProstateRegistration&amp;diff=83773</id>
		<title>Projects:ProstateRegistration</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:ProstateRegistration&amp;diff=83773"/>
		<updated>2013-11-16T01:15:46Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[Algorithm:Stony Brook|Stony Brook Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Particle Filter Registration of Medical Imagery =&lt;br /&gt;
&lt;br /&gt;
We transform the general problem of image registration into a sparse task of aligning point clouds. This is accomplished by considering the image as a probability density function, from which a point cloud is formed by randomly drawing samples. This allows us to apply a particle filtering technique for the registration. While point set registration and image registration are generally studied separately, this approach attempts to bridge the two. Thus, our contribution is threefold. Firstly, our method can handle affine transformations. Secondly, registration of partial images is more natural. Lastly, the point cloud representation is much sparser than the usual image representation as a discrete function, which allows us to drastically reduce the computational cost of 2D and 3D registration tasks. &lt;br /&gt;
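The pipeline described above can be sketched in a few lines. The following is a minimal illustration, not the published implementation: it samples a point cloud from an image treated as a density and runs a bootstrap particle filter over a 2-D translation only (the actual method handles full affine transformations); all function names are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_point_cloud(image, n):
    """Draw n sample points from a 2-D image treated as a probability density
    (the point-cloud formation step described above)."""
    p = image.ravel() / image.sum()
    idx = rng.choice(image.size, size=n, p=p)
    ys, xs = np.unravel_index(idx, image.shape)
    return np.stack([xs, ys], axis=1).astype(float)

def cloud_mismatch(moving, fixed):
    """Mean nearest-neighbor distance from moving points to fixed points
    (brute force; fine for small clouds)."""
    d = np.linalg.norm(moving[:, None, :] - fixed[None, :, :], axis=2)
    return d.min(axis=1).mean()

def particle_filter_translation(moving, fixed, n_particles=200, n_iters=40):
    """Bootstrap particle filter over a 2-D translation aligning
    moving -> fixed. Translation-only keeps the sketch short."""
    particles = rng.uniform(-10.0, 10.0, size=(n_particles, 2))
    spread = 2.0
    for _ in range(n_iters):
        costs = np.array([cloud_mismatch(moving + t, fixed) for t in particles])
        w = np.exp(-4.0 * (costs - costs.min()))        # importance weights
        w /= w.sum()
        keep = rng.choice(n_particles, size=n_particles, p=w)  # resample
        particles = particles[keep] + rng.normal(0.0, spread, (n_particles, 2))
        spread *= 0.9                                    # anneal the jitter
    costs = np.array([cloud_mismatch(moving + t, fixed) for t in particles])
    return particles[np.argmin(costs)]
```

Because only the sampled points are compared, the per-iteration cost depends on the cloud sizes rather than the full image resolution, which is the sparsity advantage claimed above.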
&lt;br /&gt;
== Prostate registration ==&lt;br /&gt;
Experimental results demonstrate the fast and robust convergence of the proposed algorithm. See Figures below:&lt;br /&gt;
&lt;br /&gt;
On the right side:&lt;br /&gt;
    Blue: prostate in prone position;&lt;br /&gt;
    Yellow: prostate in supine position.&lt;br /&gt;
&lt;br /&gt;
On the left:&lt;br /&gt;
    Blue: The same prostate in prone position;&lt;br /&gt;
    Pink: Result of registering the Yellow (supine) towards the Blue (prone).&lt;br /&gt;
&lt;br /&gt;
[[Image:ProstateRegSupineToProneInParaview.png|500px|]]&lt;br /&gt;
&lt;br /&gt;
== DWI registration  ==&lt;br /&gt;
When the method is applied to register two point sets generated from DWI images, the local minima of the registration energy are successfully avoided, as the figures below show:&lt;br /&gt;
&lt;br /&gt;
[[Image:BrainBefore.png|500px|]]&lt;br /&gt;
&lt;br /&gt;
The figure above shows the original DWI point set in blue and the affine-transformed one in red. The objective is to register the red to the blue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:BrainAfter.png|500px|]]&lt;br /&gt;
&lt;br /&gt;
The result of using the particle filter image registration.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
* Georgia Tech Algorithms: Yi Gao, Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Segmentation]] [[Category: Prostate]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:MATLABSlicerExampleModule&amp;diff=83772</id>
		<title>Projects:MATLABSlicerExampleModule</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:MATLABSlicerExampleModule&amp;diff=83772"/>
		<updated>2013-11-16T01:15:32Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;'''See the [http://www.slicer.org/slicerWiki/index.php/Documentation/Nightly/Extensions/MatlabBridge MatlabBridge] extension for a comprehensive solution for calling Matlab functions from Slicer'''&amp;lt;/font&amp;gt;&lt;br /&gt;
 Back to [[Algorithm:Stony Brook|Stony Brook Algorithms]], [[Engineering:Isomics|Isomics Engineering]]&lt;br /&gt;
= MATLAB Slicer Example Module =&lt;br /&gt;
&lt;br /&gt;
The aim of this project is to provide a simple command-line module which enables MATLAB code to run within Slicer3.  In other words, Slicer3 is used for data I/O, parameter setting, and visualization.  Then, behind the scenes, MATLAB code is called to perform image analysis computations.&lt;br /&gt;
&lt;br /&gt;
[http://teem.sourceforge.net/ TEEM] is used to pass data back and forth between MATLAB and Slicer3, using the TEEM-based nrrdLoadWithMetadata() and nrrdSaveWithMetadata() functions.&lt;br /&gt;
&lt;br /&gt;
= Description of the Slicer3 Module =&lt;br /&gt;
&lt;br /&gt;
== Download ==&lt;br /&gt;
&lt;br /&gt;
Download the latest (last updated: 07 Sep 2008):  [[Media:MATLABSlicerExampleModule.zip|MATLABSlicerExampleModule.zip]]&lt;br /&gt;
&lt;br /&gt;
Files included in the Download:&lt;br /&gt;
* CLI-shell.sh - The command-line interface TCL shell script. This currently works only in Linux. Two lines towards the bottom must be edited to point to the proper places, see the comments at the end of the file.&lt;br /&gt;
* MATLAB/&lt;br /&gt;
** compilethis.m - this MATLAB script contains all of the information to be able to compile the nrrdLoadWithMetadata() and nrrdSaveWithMetadata() functions using Slicer3's TEEM libraries. Open it up and follow the instructions.&lt;br /&gt;
** nrrdLoadWithMetadata.c - this c function is used to load nrrd data into MATLAB with the header metadata information (currently supports all NRRD files that do not use key/value information and also has limited support for DWMRI key/value information).&lt;br /&gt;
** nrrdLoadWithMetadata.m - a MATLAB help file, which can be used within MATLAB by typing &amp;quot;help nrrdLoadWithMetadata&amp;quot;&lt;br /&gt;
** nrrdLoadWithMetadata.mexglx/.mexw32 - precompiled 32-bit functions (may not work for you until you recompile with the compilethis.m script).&lt;br /&gt;
** nrrdSaveWithMetadata.c - this c function is used to save nrrd data from MATLAB with the header metadata information (currently supports all NRRD files that do not use key/value information but does not support any key/value information).&lt;br /&gt;
** nrrdSaveWithMetadata.m - a MATLAB help file, which can be used within MATLAB by typing &amp;quot;help nrrdSaveWithMetadata&amp;quot;&lt;br /&gt;
** nrrdSaveWithMetadata.mexglx/.mexw32 - precompiled 32-bit functions (may not work for you until you recompile with the compilethis.m script).&lt;br /&gt;
** MATLABSlicerExampleModule.m - This is the MATLAB script which actually processes the data.  This is the part that can be replaced with your MATLAB script.&lt;br /&gt;
&lt;br /&gt;
== Install and Run the Example Module ==&lt;br /&gt;
&lt;br /&gt;
To try this out, DO THE FOLLOWING (Linux only):&lt;br /&gt;
# Make sure you have tclsh installed&lt;br /&gt;
# Download the zip file above&lt;br /&gt;
# Edit compilethis.m (follow the instructions inside the file) and run it within MATLAB to compile the nrrdLoadWithMetadata and nrrdSaveWithMetadata functions.&lt;br /&gt;
# Edit CLI-shell.sh (follow the instructions inside the file)&lt;br /&gt;
# Add the path in Slicer3 to point to this module:  View-&amp;gt;Application Settings-&amp;gt;Module Settings-&amp;gt;Add a preset&lt;br /&gt;
# Close Slicer3 and Reopen Slicer3&lt;br /&gt;
# Slicer3 will autodetect the example MATLAB module&lt;br /&gt;
# Load in some data&lt;br /&gt;
# Select the example module, set the two inputs and the two outputs, and press &amp;quot;Apply&amp;quot;&lt;br /&gt;
# If there are errors, they are most likely caused by incorrect paths in your CLI-shell.sh script.&lt;br /&gt;
&lt;br /&gt;
== Description of nrrdLoadWithMetadata() ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% This function loads a nrrd volume into MATLAB with the associated&lt;br /&gt;
% metadata. Input is a string with the nrrd volume filename. Output&lt;br /&gt;
% is a struct containing the data and the metadata.&lt;br /&gt;
%&lt;br /&gt;
% The struct obeys the following conventions:&lt;br /&gt;
% - The fields in the struct are ordered as follows:&lt;br /&gt;
% -- 00 = data              (void *) nrrd-&amp;gt;data&lt;br /&gt;
% -- 01 = space             (int) nrrd-&amp;gt;space [enum]&lt;br /&gt;
% -- 02 = spacedirections   (double matrix) nrrd-&amp;gt;axis[NRRD_DIM_MAX].spaceDirection[NRRD_SPACE_DIM_MAX]&lt;br /&gt;
% -- 03 = centerings        (int array) nrrd-&amp;gt;axis[NRRD_DIM_MAX].center [enum]&lt;br /&gt;
% -- 04 = kinds             (int array) nrrd-&amp;gt;axis[NRRD_DIM_MAX].kind [enum]&lt;br /&gt;
% -- 05 = spaceunits        (char * array) nrrd-&amp;gt;spaceUnits[NRRD_SPACE_DIM_MAX]&lt;br /&gt;
% -- 06 = spaceorigin       (double array) nrrd-&amp;gt;spaceOrigin[NRRD_SPACE_DIM_MAX]&lt;br /&gt;
% -- 07 = measurementframe  (double matrix) nrrd-&amp;gt;measurementFrame[NRRD_SPACE_DIM_MAX][NRRD_SPACE_DIM_MAX]&lt;br /&gt;
% -- OPTIONAL:&lt;br /&gt;
% -- 08 = modality          (string) nrrd-&amp;gt;kvp[1]&lt;br /&gt;
% -- 09 = bvalue            (double) bKVP&lt;br /&gt;
% -- 10 = gradientdirections(double matrix) info[dwiIdx]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
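For readers who want the same metadata outside of MATLAB, a minimal illustration of parsing a detached NRRD header into a similar structure might look like the following. This is a hypothetical helper, not part of the module; the real functions use the TEEM C library and handle far more of the format.

```python
import numpy as np

def parse_nhdr(text):
    """Parse a minimal NRRD header into a dict loosely mirroring the
    nrrdLoadWithMetadata struct fields (space, spacedirections,
    spaceorigin). Illustrative sketch only."""
    meta = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip the magic line, comments, and blanks.
        if not line or line.startswith("#") or line.startswith("NRRD"):
            continue
        if ":" not in line:
            continue
        key, _, val = line.partition(":")
        # Handles both "field: value" and "key:=value" lines.
        key, val = key.strip(), val.strip().lstrip("=").strip()
        meta[key] = val
    out = {}
    out["space"] = meta.get("space")
    if "space directions" in meta:
        rows = meta["space directions"].split(") (")
        out["spacedirections"] = np.array(
            [[float(x) for x in r.strip("()").split(",")] for r in rows])
    if "space origin" in meta:
        out["spaceorigin"] = np.array(
            [float(x) for x in meta["space origin"].strip("()").split(",")])
    return out
```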
&lt;br /&gt;
== Description of nrrdSaveWithMetadata() ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% This function saves a nrrd volume from MATLAB with the associated&lt;br /&gt;
% metadata. First input is a string with the nrrd volume&lt;br /&gt;
% filename. Second input is a MATLAB struct containing the data and&lt;br /&gt;
% the metadata according to the following conventions.&lt;br /&gt;
%&lt;br /&gt;
% The struct obeys the following conventions:&lt;br /&gt;
% - The fields in the struct are ordered as follows:&lt;br /&gt;
% -- 00 = data              (void *) nrrd-&amp;gt;data&lt;br /&gt;
% -- 01 = space             (int) nrrd-&amp;gt;space [enum]&lt;br /&gt;
% -- 02 = spacedirections   (double matrix) nrrd-&amp;gt;axis[NRRD_DIM_MAX].spaceDirection[NRRD_SPACE_DIM_MAX]&lt;br /&gt;
% -- 03 = centerings        (int array) nrrd-&amp;gt;axis[NRRD_DIM_MAX].center [enum]&lt;br /&gt;
% -- 04 = kinds             (int array) nrrd-&amp;gt;axis[NRRD_DIM_MAX].kind [enum]&lt;br /&gt;
% -- 05 = spaceunits        (char * array) nrrd-&amp;gt;spaceUnits[NRRD_SPACE_DIM_MAX]&lt;br /&gt;
% -- 06 = spaceorigin       (double array) nrrd-&amp;gt;spaceOrigin[NRRD_SPACE_DIM_MAX]&lt;br /&gt;
% -- 07 = measurementframe  (double matrix) nrrd-&amp;gt;measurementFrame[NRRD_SPACE_DIM_MAX][NRRD_SPACE_DIM_MAX]&lt;br /&gt;
% -- NOT YET IMPLEMENTED:&lt;br /&gt;
% -- 08 = modality          (string) nrrd-&amp;gt;kvp[1]&lt;br /&gt;
% -- 09 = bvalue            (double) bKVP&lt;br /&gt;
% -- 10 = gradientdirections(double matrix) info[dwiIdx]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation for the enums used in TEEM-1.9.0 ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%&lt;br /&gt;
% The following is an expansion of the enums for TEEM-1.9.0&lt;br /&gt;
%&lt;br /&gt;
% -- 01 = space:&lt;br /&gt;
%   nrrdSpaceUnknown,&lt;br /&gt;
%   nrrdSpaceRightAnteriorSuperior,     /*  1: NIFTI-1 (right-handed) */&lt;br /&gt;
%   nrrdSpaceLeftAnteriorSuperior,      /*  2: standard Analyze (left-handed) */&lt;br /&gt;
%   nrrdSpaceLeftPosteriorSuperior,     /*  3: DICOM 3.0 (right-handed) */&lt;br /&gt;
%   nrrdSpaceRightAnteriorSuperiorTime, /*  4: */&lt;br /&gt;
%   nrrdSpaceLeftAnteriorSuperiorTime,  /*  5: */&lt;br /&gt;
%   nrrdSpaceLeftPosteriorSuperiorTime, /*  6: */&lt;br /&gt;
%   nrrdSpaceScannerXYZ,                /*  7: ACR/NEMA 2.0 (pre-DICOM 3.0) */&lt;br /&gt;
%   nrrdSpaceScannerXYZTime,            /*  8: */&lt;br /&gt;
%   nrrdSpace3DRightHanded,             /*  9: */&lt;br /&gt;
%   nrrdSpace3DLeftHanded,              /* 10: */&lt;br /&gt;
%   nrrdSpace3DRightHandedTime,         /* 11: */&lt;br /&gt;
%   nrrdSpace3DLeftHandedTime,          /* 12: */&lt;br /&gt;
%   nrrdSpaceLast&lt;br /&gt;
%&lt;br /&gt;
% -- 03 = centerings:&lt;br /&gt;
%   nrrdCenterUnknown,         /* 0: no centering known for this axis */&lt;br /&gt;
%   nrrdCenterNode,            /* 1: samples at corners of things&lt;br /&gt;
%                                 (how &amp;quot;voxels&amp;quot; are usually imagined)&lt;br /&gt;
%                                 |\______/|\______/|\______/|&lt;br /&gt;
%                                 X        X        X        X   */&lt;br /&gt;
%   nrrdCenterCell,            /* 2: samples at middles of things&lt;br /&gt;
%                                 (characteristic of histogram bins)&lt;br /&gt;
%                                  \___|___/\___|___/\___|___/&lt;br /&gt;
%                                      X        X        X       */&lt;br /&gt;
%   nrrdCenterLast&lt;br /&gt;
%&lt;br /&gt;
% -- 04 = kinds:&lt;br /&gt;
%   nrrdKindUnknown,&lt;br /&gt;
%   nrrdKindDomain,            /*  1: any image domain */&lt;br /&gt;
%   nrrdKindSpace,             /*  2: a spatial domain */&lt;br /&gt;
%   nrrdKindTime,              /*  3: a temporal domain */&lt;br /&gt;
%   /* -------------------------- end domain kinds */&lt;br /&gt;
%   /* -------------------------- begin range kinds */&lt;br /&gt;
%   nrrdKindList,              /*  4: any list of values, non-resample-able */&lt;br /&gt;
%   nrrdKindPoint,             /*  5: coords of a point */&lt;br /&gt;
%   nrrdKindVector,            /*  6: coeffs of (contravariant) vector */&lt;br /&gt;
%   nrrdKindCovariantVector,   /*  7: coeffs of covariant vector (eg gradient) */&lt;br /&gt;
%   nrrdKindNormal,            /*  8: coeffs of unit-length covariant vector */&lt;br /&gt;
%   /* -------------------------- end arbitrary size kinds */&lt;br /&gt;
%   /* -------------------------- begin size-specific kinds */&lt;br /&gt;
%   nrrdKindStub,              /*  9: axis with one sample (a placeholder) */&lt;br /&gt;
%   nrrdKindScalar,            /* 10: effectively, same as a stub */&lt;br /&gt;
%   nrrdKindComplex,           /* 11: real and imaginary components */&lt;br /&gt;
%   nrrdKind2Vector,           /* 12: 2 component vector */&lt;br /&gt;
%   nrrdKind3Color,            /* 13: ANY 3-component color value */&lt;br /&gt;
%   nrrdKindRGBColor,          /* 14: RGB, no colorimetry */&lt;br /&gt;
%   nrrdKindHSVColor,          /* 15: HSV, no colorimetry */&lt;br /&gt;
%   nrrdKindXYZColor,          /* 16: perceptual primary colors */&lt;br /&gt;
%   nrrdKind4Color,            /* 17: ANY 4-component color value */&lt;br /&gt;
%   nrrdKindRGBAColor,         /* 18: RGBA, no colorimetry */&lt;br /&gt;
%   nrrdKind3Vector,           /* 19: 3-component vector */&lt;br /&gt;
%   nrrdKind3Gradient,         /* 20: 3-component covariant vector */&lt;br /&gt;
%   nrrdKind3Normal,           /* 21: 3-component covector, assumed normalized */&lt;br /&gt;
%   nrrdKind4Vector,           /* 22: 4-component vector */&lt;br /&gt;
%   nrrdKindQuaternion,        /* 23: (w,x,y,z), not necessarily normalized */&lt;br /&gt;
%   nrrdKind2DSymMatrix,       /* 24: Mxx Mxy Myy */&lt;br /&gt;
%   nrrdKind2DMaskedSymMatrix, /* 25: mask Mxx Mxy Myy */&lt;br /&gt;
%   nrrdKind2DMatrix,          /* 26: Mxx Mxy Myx Myy */&lt;br /&gt;
%   nrrdKind2DMaskedMatrix,    /* 27: mask Mxx Mxy Myx Myy */&lt;br /&gt;
%   nrrdKind3DSymMatrix,       /* 28: Mxx Mxy Mxz Myy Myz Mzz */&lt;br /&gt;
%   nrrdKind3DMaskedSymMatrix, /* 29: mask Mxx Mxy Mxz Myy Myz Mzz */&lt;br /&gt;
%   nrrdKind3DMatrix,          /* 30: Mxx Mxy Mxz Myx Myy Myz Mzx Mzy Mzz */&lt;br /&gt;
%   nrrdKind3DMaskedMatrix,    /* 31: mask Mxx Mxy Mxz Myx Myy Myz Mzx Mzy Mzz */&lt;br /&gt;
%   nrrdKindLast&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Support ==&lt;br /&gt;
&lt;br /&gt;
Contact John Melonakos (jmelonak &amp;lt;at&amp;gt; ece.gatech.edu) if you have questions.  This project is currently in beta phase and is seeking input from the broader community.&lt;br /&gt;
&lt;br /&gt;
=2011 Updates=&lt;br /&gt;
&lt;br /&gt;
The published code relies on an earlier release of Slicer, which used teem 1.9.0. You can download that version of teem here: http://sourceforge.net/projects/teem/files/teem/1.9.0/ and compile it separately. While configuring, set BUILD_SHARED_LIBS to ON and TEEM_ZLIB to ON. Add the directory ${TEEM_BIN_HOME}/bin to your $LD_LIBRARY_PATH.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
* Georgia Tech: John Melonakos (jmelonak &amp;lt;at&amp;gt; ece.gatech.edu), Yi Gao, Allen Tannenbaum&lt;br /&gt;
* Isomics: Steve Pieper&lt;br /&gt;
* BWH: C-F Westin, Gordon Kindlmann&lt;br /&gt;
&lt;br /&gt;
[[Category:Slicer]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Algorithm:GATech:DWMRI_Geodesic_Active_Contour&amp;diff=83771</id>
		<title>Algorithm:GATech:DWMRI Geodesic Active Contour</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Algorithm:GATech:DWMRI_Geodesic_Active_Contour&amp;diff=83771"/>
		<updated>2013-11-16T01:15:12Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Collaborations|NA-MIC_Collaborations]]&lt;br /&gt;
&lt;br /&gt;
'''Objective:'''&lt;br /&gt;
&lt;br /&gt;
We want to extract the white matter tracts and volumetric fiber bundles from Diffusion-Weighted MRI scans. The idea is to use directional information in a new anisotropic energy functional based on Finsler geometry to extract optimal tracts and then to use region-based active contours to segment the full fiber bundle.&lt;br /&gt;
&lt;br /&gt;
'''Progress:'''&lt;br /&gt;
&lt;br /&gt;
''Fiber Tractography''&lt;br /&gt;
&lt;br /&gt;
We have implemented the algorithm in Matlab/C using the Fast Sweeping algorithm. We are in the process of porting the code to ITK.&lt;br /&gt;
&lt;br /&gt;
We are continuing to work on our new framework for white matter tractography in high angular resolution diffusion data. We base our work on concepts from Finsler geometry. Namely, a direction-dependent local cost is defined based on the diffusion data for every direction on the unit sphere. Minimum cost curves are determined by solving the Hamilton-Jacobi-Bellman equation using the Fast Sweeping algorithm. Classical costs based on the diffusion tensor field can be seen as a special case. While the minimum cost (or equivalently the travel time of a particle moving along the curve) and the anisotropic front propagation frameworks are related, front speed is related to particle speed through a Legendre transformation, which can severely impact anisotropy information for front propagation techniques. Implementation details and results on high angular diffusion data show that this method can successfully take advantage of the increased angular resolution in high b-value diffusion-weighted data despite the lower signal-to-noise ratio.&lt;br /&gt;
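As a concrete illustration of the sweeping solver, here is a minimal isotropic eikonal version (|grad u| * speed = 1 on a regular grid). The Finsler method described above uses a direction-dependent cost on the unit sphere, which this sketch does not attempt; it only shows the Gauss-Seidel sweep orderings and the Godunov update.

```python
import numpy as np

def fast_sweep_eikonal(speed, src, n_passes=4):
    """Solve |grad u| * speed = 1 on a unit-spaced 2-D grid with a point
    source, by Gauss-Seidel fast sweeping over the four sweep orderings.
    Isotropic sketch only, first-order accurate."""
    n, m = speed.shape
    u = np.full((n, m), np.inf)
    u[src] = 0.0
    h = 1.0
    orders = [(range(n), range(m)),
              (range(n), range(m - 1, -1, -1)),
              (range(n - 1, -1, -1), range(m)),
              (range(n - 1, -1, -1), range(m - 1, -1, -1))]
    for _ in range(n_passes):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    if (i, j) == src:
                        continue
                    # Smallest upwind neighbor along each axis.
                    a = min(u[i - 1, j] if i > 0 else np.inf,
                            u[i + 1, j] if i < n - 1 else np.inf)
                    b = min(u[i, j - 1] if j > 0 else np.inf,
                            u[i, j + 1] if j < m - 1 else np.inf)
                    f = h / speed[i, j]
                    # Godunov update for the local quadratic.
                    if abs(a - b) >= f:
                        cand = min(a, b) + f
                    else:
                        cand = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                    u[i, j] = min(u[i, j], cand)
    return u
```

With uniform speed the result approximates Euclidean distance from the source (exact along grid axes, with the usual first-order overestimate near the diagonals).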
&lt;br /&gt;
''Fiber Bundle Segmentation''&lt;br /&gt;
&lt;br /&gt;
We have developed a locally-constrained region-based approach which, when initialized on the fiber tracts, is able to provide volumetric fiber segmentations of the full diffusion bundles.&lt;br /&gt;
&lt;br /&gt;
''Data''&lt;br /&gt;
&lt;br /&gt;
We are using Harvard's high angular resolution datasets which currently consist of a population of 12 schizophrenics and 12 normal controls.&lt;br /&gt;
&lt;br /&gt;
''Visual Results''&lt;br /&gt;
&lt;br /&gt;
Recently, we have applied this method to the cingulum bundle, as shown in the following images:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|+ '''Fig 1. Results on Cingulum Bundle'''&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Case24-coronal-tensors-edit.png |thumb|250px|Detailed View of the Cingulum Bundle Anchor Tract]]&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Case25-sagstream-tensors-edit.png|thumb|250px|Streamline Comparison]]&lt;br /&gt;
|-&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Case26-anterior.png |thumb|250px|Anterior View of the Cingulum Bundle Anchor Tract]]&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Case26-posterior.png|thumb|250px|Posterior View of the Cingulum Bundle Anchor Tract]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Previously, this method was applied to full brain fiber tractography, as shown in the following images:&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|+ '''Fig 2. Results on full brain fiber tractography'''&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Tracts1.png |thumb|250px|Fiber tracking from high resolution data set.]]&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Tracts2.png|thumb|250px|Comparison of technique with streamline based on tensor field.]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This method may also be used in pattern detection applications, such as vessel segmentation:&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|+ '''Fig 3. Results on Vessel Segmentation'''&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Vessels1.png |thumb|250px|Vessel Segmentation]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
''Statistical Results''&lt;br /&gt;
&lt;br /&gt;
We are currently investigating Cingulum Bundle fractional anisotropy (FA) differences between a population of 12 schizophrenics and 12 normal controls.  We find the anchor tracts as described above and then compute statistics for FA inside a tube of radius 1-3 mm centered on the anchor tract.  So far, using this method we have been unable to find a statistical difference between the normal controls and the schizophrenics.  Therefore, we are investigating a more precise extraction of the cingulum bundle using Finsler level sets, rather than the primitive cylinder currently used.&lt;br /&gt;
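The tube statistic can be sketched as follows: a brute-force, voxel-space illustration with hypothetical function names, not the code used for the study (which works in millimeter space with the full tract geometry).

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Distance from point p to the segment from a to b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def mean_fa_in_tube(fa, tract, radius):
    """Mean FA over voxels whose centers lie within `radius` of the anchor
    tract polyline. Voxel units, no interpolation; illustrative only."""
    vals = []
    for idx in np.ndindex(fa.shape):
        p = np.array(idx, dtype=float)
        # Distance from this voxel center to the nearest tract segment.
        d = min(point_segment_dist(p,
                                   np.asarray(tract[k], dtype=float),
                                   np.asarray(tract[k + 1], dtype=float))
                for k in range(len(tract) - 1))
        if d <= radius:
            vals.append(fa[idx])
    return float(np.mean(vals))
```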
&lt;br /&gt;
Download the current statistical results [[Media:ResultsAnchorTube.txt|here]]. (last updated 18/Apr/2007)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Project Status''&lt;br /&gt;
*Working 3D implementation in Matlab using the C-based Mex functions.&lt;br /&gt;
*Currently porting to ITK.&lt;br /&gt;
&lt;br /&gt;
''References:''&lt;br /&gt;
* V. Mohan, J. Melonakos, M. Niethammer, M. Kubicki, and A. Tannenbaum. Finsler Level Set Segmentation for Imagery in Oriented Domains. BMVC 2007. Under review.&lt;br /&gt;
* J. Melonakos, V. Mohan, M. Niethammer, K. Smith, M. Kubicki, and A. Tannenbaum. Finsler Tractography for White Matter Connectivity Analysis of the Cingulum Bundle. MICCAI 2007. Under review.&lt;br /&gt;
* J. Melonakos, E. Pichon, S. Angenent, and A. Tannenbaum. Finsler Active Contours. IEEE Transactions on Pattern Analysis and Machine Intelligence, to appear in 2007.&lt;br /&gt;
* E. Pichon and A. Tannenbaum. Curve segmentation using directional information, relation to pattern detection. In IEEE International Conference on Image Processing (ICIP), volume 2, pages 794-797, 2005.&lt;br /&gt;
* E. Pichon, C-F Westin, and A. Tannenbaum. A Hamilton-Jacobi-Bellman approach to high angular resolution diffusion tractography. In International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), pages 180-187, 2005.&lt;br /&gt;
&lt;br /&gt;
'''Key Investigators:'''&lt;br /&gt;
&lt;br /&gt;
* Georgia Tech: John Melonakos, Vandana Mohan, Sam Dambreville, Allen Tannenbaum&lt;br /&gt;
* Harvard/BWH: Marek Kubicki, Marc Niethammer, Kate Smith, C-F Westin, Martha Shenton&lt;br /&gt;
&lt;br /&gt;
'''Links:'''&lt;br /&gt;
&lt;br /&gt;
* [[Algorithm:Stony Brook|Stony Brook Algorithms]]&lt;br /&gt;
* [[NA-MIC_Collaborations|NA-MIC Collaborations]]&lt;br /&gt;
* [[Media:2007_Project_Half_Week_FinslerTractography.ppt| 4-block PPT Jan 2007]]&lt;br /&gt;
* [[Projects/Diffusion/2007_Project_Week_Geodesic_Tractography| June 2007 Project Week]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Algorithm:GATech:Geodesic_Active_Contour_DWMRI&amp;diff=83770</id>
		<title>Algorithm:GATech:Geodesic Active Contour DWMRI</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Algorithm:GATech:Geodesic_Active_Contour_DWMRI&amp;diff=83770"/>
		<updated>2013-11-16T01:15:01Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Collaborations|NA-MIC_Collaborations]]&lt;br /&gt;
&lt;br /&gt;
'''Objective:'''&lt;br /&gt;
&lt;br /&gt;
We want to extract the white matter tracts and volumetric fiber bundles from Diffusion-Weighted MRI scans. The idea is to use directional information in a new anisotropic energy functional based on Finsler geometry to extract optimal tracts and then to use region-based active contours to segment the full fiber bundle.&lt;br /&gt;
&lt;br /&gt;
'''Progress:'''&lt;br /&gt;
&lt;br /&gt;
''Fiber Tractography''&lt;br /&gt;
We have implemented the algorithm in Matlab/C using the Fast Sweeping algorithm. We are in the process of porting the code to ITK.&lt;br /&gt;
&lt;br /&gt;
We are continuing to work on our new framework for white matter tractography in high angular resolution diffusion data. We base our work on concepts from Finsler geometry. Namely, a direction-dependent local cost is defined based on the diffusion data for every direction on the unit sphere. Minimum cost curves are determined by solving the Hamilton-Jacobi-Bellman equation using the Fast Sweeping algorithm. Classical costs based on the diffusion tensor field can be seen as a special case. While the minimum cost (or equivalently the travel time of a particle moving along the curve) and the anisotropic front propagation frameworks are related, front speed is related to particle speed through a Legendre transformation, which can severely impact anisotropy information for front propagation techniques. Implementation details and results on high angular diffusion data show that this method can successfully take advantage of the increased angular resolution in high b-value diffusion-weighted data despite the lower signal-to-noise ratio.&lt;br /&gt;
&lt;br /&gt;
''Fiber Bundle Segmentation''&lt;br /&gt;
We have developed a locally-constrained region-based approach which, when initialized on the fiber tracts, is able to provide volumetric fiber segmentations of the full diffusion bundles.&lt;br /&gt;
&lt;br /&gt;
''Data''&lt;br /&gt;
&lt;br /&gt;
We are using Harvard's high angular resolution datasets which currently consist of a population of 12 schizophrenics and 12 normal controls.&lt;br /&gt;
&lt;br /&gt;
''Visual Results''&lt;br /&gt;
&lt;br /&gt;
Recently, we have applied this method to the cingulum bundle, as shown in the following images:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|+ '''Fig 1. Results on Cingulum Bundle'''&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Case24-coronal-tensors-edit.png |thumb|250px|Detailed View of the Cingulum Bundle Anchor Tract]]&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Case25-sagstream-tensors-edit.png|thumb|250px|Streamline Comparison]]&lt;br /&gt;
|-&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Case26-anterior.png |thumb|250px|Anterior View of the Cingulum Bundle Anchor Tract]]&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Case26-posterior.png|thumb|250px|Posterior View of the Cingulum Bundle Anchor Tract]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Previously, this method was applied to full brain fiber tractography, as shown in the following images:&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|+ '''Fig 2. Results on full brain fiber tractography'''&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Tracts1.png |thumb|250px|Fiber tracking from high resolution data set.]]&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Tracts2.png|thumb|250px|Comparison of technique with streamline based on tensor field.]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This method may also be used in pattern detection applications, such as vessel segmentation:&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|+ '''Fig 3. Results on Vessel Segmentation'''&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Vessels1.png |thumb|250px|Vessel Segmentation]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
''Statistical Results''&lt;br /&gt;
&lt;br /&gt;
We are currently investigating Cingulum Bundle fractional anisotropy (FA) differences between a population of 12 schizophrenics and 12 normal controls.  We find the anchor tracts as described above and then compute statistics for FA inside a tube of radius 1-3 mm centered on the anchor tract.  So far, using this method we have been unable to find a statistical difference between the normal controls and the schizophrenics.  Therefore, we are investigating a more precise extraction of the cingulum bundle using Finsler level sets, rather than the primitive cylinder currently used.&lt;br /&gt;
&lt;br /&gt;
Download the current statistical results [[Media:ResultsAnchorTube.txt|here]]. (last updated 18/Apr/2007)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Project Status''&lt;br /&gt;
*Working 3D implementation in Matlab using the C-based Mex functions.&lt;br /&gt;
*Currently porting to ITK.&lt;br /&gt;
&lt;br /&gt;
''References:''&lt;br /&gt;
* V. Mohan, J. Melonakos, M. Niethammer, M. Kubicki, and A. Tannenbaum. Finsler Level Set Segmentation for Imagery in Oriented Domains. BMVC 2007. Under review.&lt;br /&gt;
* J. Melonakos, V. Mohan, M. Niethammer, K. Smith, M. Kubicki, and A. Tannenbaum. Finsler Tractography for White Matter Connectivity Analysis of the Cingulum Bundle. MICCAI 2007. Under review.&lt;br /&gt;
* J. Melonakos, E. Pichon, S. Angenent, and A. Tannenbaum. Finsler Active Contours. IEEE Transactions on Pattern Analysis and Machine Intelligence, to appear in 2007.&lt;br /&gt;
* E. Pichon and A. Tannenbaum. Curve segmentation using directional information, relation to pattern detection. In IEEE International Conference on Image Processing (ICIP), volume 2, pages 794-797, 2005.&lt;br /&gt;
* E. Pichon, C-F Westin, and A. Tannenbaum. A Hamilton-Jacobi-Bellman approach to high angular resolution diffusion tractography. In International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), pages 180-187, 2005.&lt;br /&gt;
&lt;br /&gt;
'''Key Investigators:'''&lt;br /&gt;
&lt;br /&gt;
* Georgia Tech: John Melonakos, Vandana Mohan, Sam Dambreville, Allen Tannenbaum&lt;br /&gt;
* Harvard/BWH: Marek Kubicki, Marc Niethammer, Kate Smith, C-F Westin, Martha Shenton&lt;br /&gt;
&lt;br /&gt;
'''Links:'''&lt;br /&gt;
&lt;br /&gt;
* [[Algorithm:Stony Brook|Stony Brook Algorithms]]&lt;br /&gt;
* [[NA-MIC_Collaborations|NA-MIC Collaborations]]&lt;br /&gt;
* [[Media:2007_Project_Half_Week_FinslerTractography.ppt| 4-block PPT Jan 2007]]&lt;br /&gt;
* [[Projects/Diffusion/2007_Project_Week_Geodesic_Tractography| June 2007 Project Week]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Algorithm:GATech:Fast_Marching_Slicer_2&amp;diff=83769</id>
		<title>Algorithm:GATech:Fast Marching Slicer 2</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Algorithm:GATech:Fast_Marching_Slicer_2&amp;diff=83769"/>
		<updated>2013-11-16T01:14:45Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Collaborations|NA-MIC_Collaborations]]&lt;br /&gt;
&lt;br /&gt;
'''Objective:'''&lt;br /&gt;
&lt;br /&gt;
We added the Fast Marching algorithm to Slicer 2.&lt;br /&gt;
&lt;br /&gt;
'''Progress:'''&lt;br /&gt;
&lt;br /&gt;
Completed.&lt;br /&gt;
&lt;br /&gt;
'''Key Investigators:'''&lt;br /&gt;
&lt;br /&gt;
Eric Pichon, Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
'''Links:'''&lt;br /&gt;
&lt;br /&gt;
* [[Algorithm:Stony Brook|Stony Brook Algorithms]]&lt;br /&gt;
* [[NA-MIC_Collaborations|NA-MIC Collaborations]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Algorithm:GATech:Centerline_Generation_for_Vessels&amp;diff=83768</id>
		<title>Algorithm:GATech:Centerline Generation for Vessels</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Algorithm:GATech:Centerline_Generation_for_Vessels&amp;diff=83768"/>
		<updated>2013-11-16T01:14:35Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Collaborations|NA-MIC_Collaborations]]&lt;br /&gt;
&lt;br /&gt;
'''Objective:'''&lt;br /&gt;
&lt;br /&gt;
The goal of this work is to generate centerlines from segmented 3D surfaces of blood '''vessels''' using a harmonic skeletonization technique. The generated centerlines are used as a guide to visualize and evaluate stenoses in human coronary arteries. A harmonic skeleton is the centerline of a multi-branched tubular surface, extracted from a harmonic function, i.e. a solution of the Laplace equation. This skeletonization method guarantees smoothness and connectivity, and it provides a fast and straightforward way to calculate local cross-sectional areas of the arteries. It therefore makes it possible to localize and evaluate coronary artery stenosis, a common pathology in coronary artery disease.&lt;br /&gt;
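A minimal 2D sketch of the harmonic-skeleton idea, assuming a binary vessel mask with flow endpoints on its leftmost and rightmost columns; the Jacobi solver and `harmonic_centerline` are illustrative stand-ins for the actual 3D surface computation:

```python
import numpy as np

def harmonic_centerline(mask, iters=3000, nbands=10):
    """Toy 2D harmonic skeleton: solve Laplace's equation on a
    binary vessel mask with u=0 on the leftmost vessel column and
    u=1 on the rightmost (insulating walls), then take the centroid
    of each level band of u as a centerline point."""
    mask = np.asarray(mask, dtype=float)
    h, w = mask.shape
    cols = np.where(mask.any(axis=0))[0]
    left, right = cols[0], cols[-1]
    u = np.zeros((h, w))
    u[:, right] = 1.0
    for _ in range(iters):
        nb_sum = np.zeros((h, w))
        nb_cnt = np.zeros((h, w))
        # Average over in-mask 4-neighbors (Jacobi iteration).
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            m_sh = np.roll(mask, (dy, dx), axis=(0, 1))
            nb_sum += np.roll(u, (dy, dx), axis=(0, 1)) * m_sh
            nb_cnt += m_sh
        new = np.where(nb_cnt > 0, nb_sum / np.maximum(nb_cnt, 1.0), u)
        new[:, left] = 0.0
        new[:, right] = 1.0
        u = np.where(mask > 0, new, 0.0)
    pts = []
    for lo in np.linspace(0.0, 0.9, nbands):
        # Each level band of the harmonic function gives one
        # cross-section; its centroid is a centerline point.
        band = (mask > 0) * (u >= lo) * (lo + 0.1 > u)
        ys, xs = np.nonzero(band)
        if ys.size > 0:
            pts.append((ys.mean(), xs.mean()))
    return u, pts
```

Counting the pixels in each level band (instead of taking the centroid) gives the local cross-sectional area described below.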
&lt;br /&gt;
'''Progress:'''&lt;br /&gt;
&lt;br /&gt;
[[Image:Skeletons.TIF|[[Image:874px-Skeletons.TIF.png]]]]&lt;br /&gt;
&lt;br /&gt;
Figure 1 shows the '''centerline''' extraction results on two coronary artery trees. Left: Coronaries and the skeleton of a healthy volunteer. Middle: Coronaries and the skeleton of a patient with plaques in the LAD. Right: The skeleton of the middle coronaries.&lt;br /&gt;
&lt;br /&gt;
[[Image:Areas.TIF|[[Image:397px-Areas.TIF.png]]]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Stenosis_quan.TIF|[[Image:579px-Stenosis_quan.TIF.png]]]]&lt;br /&gt;
&lt;br /&gt;
[[Image:LAD_stenosis.JPG|[[Image:LAD_stenosis.JPG]]]]&lt;br /&gt;
&lt;br /&gt;
Cross-sectional areas of '''vessels''' can be calculated using the centerlines as a guide. Figure 3 shows the areas at different locations along the '''vessels''', obtained by specifying a ''u'' value of the harmonic function (the solution to the Laplace equation). We can also show the continuous variation of the cross-sectional area along a particular vessel: in Figure 4, the areas along the left anterior descending (LAD) coronary artery of the second case in Figure 1 are measured. Figure 5 shows a stenosis detected in this vessel.&lt;br /&gt;
&lt;br /&gt;
''Ongoing Work''&lt;br /&gt;
&lt;br /&gt;
* This method is being tested and its accuracy evaluated on additional clinical data sets.&lt;br /&gt;
* The algorithms used in this work are currently implemented in MATLAB and C.&lt;br /&gt;
&lt;br /&gt;
''References:''&lt;br /&gt;
&lt;br /&gt;
[1] Y. Yang, L. Zhu, S. Haker, A. Tannenbaum, and D. Giddens. [/Wiki/images/8/83/YangMICCAI2005.pdf  Harmonic Skeleton Guided Evaluation of Stenoses in Human Coronary Arteries]. In Proc. MICCAI 2005: International Conference Medical Image Computing and Computer-Assisted Intervention, Lecture Notes in Computer Science (3749). Springer-Verlag, pp.490-497, 2005&lt;br /&gt;
&lt;br /&gt;
'''Links:'''&lt;br /&gt;
&lt;br /&gt;
* [[Algorithm:Stony Brook|Stony Brook Algorithms]]&lt;br /&gt;
* [[NA-MIC_Collaborations|NA-MIC Collaborations]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/Structural/Segmentation/Statistical_PDE_methods_for_Segmentation&amp;diff=83767</id>
		<title>NA-MIC/Projects/Structural/Segmentation/Statistical PDE methods for Segmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/Structural/Segmentation/Statistical_PDE_methods_for_Segmentation&amp;diff=83767"/>
		<updated>2013-11-16T01:14:23Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Objective:''' We want to add various statistical measures into our PDE flows for medical imaging. This will allow the incorporation of global image information into the locally defined PDE framework.&lt;br /&gt;
&lt;br /&gt;
'''Progress:''' We developed flows that separate the intensity distributions inside and outside the evolving contour, and we have also been incorporating shape information into the flows.&lt;br /&gt;
&lt;br /&gt;
'''Completed:'''&lt;br /&gt;
&lt;br /&gt;
* A statistically based flow for image segmentation, using Fast Marching&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;thumb tright&amp;quot;&amp;gt;&amp;lt;div style=&amp;quot;width: 182px&amp;quot;&amp;gt;[[Image:Gatech_SlicerModel2.jpg|[[Image:180px-Gatech_SlicerModel2.jpg|Figure 1:Screenshot from the Slicer Fast Marching module]]]]&amp;lt;div class=&amp;quot;thumbcaption&amp;quot;&amp;gt;&amp;lt;div class=&amp;quot;magnify&amp;quot; style=&amp;quot;float: right&amp;quot;&amp;gt;[[Image:Gatech_SlicerModel2.jpg|[[Image:magnify-clip.png|Enlarge]]]]&amp;lt;/div&amp;gt;Figure 1:Screenshot from the Slicer Fast Marching module&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
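The Fast Marching machinery behind this flow can be sketched as a first-order Eikonal solver (arrival time T with gradient magnitude 1/F) on a 2D grid; `fast_marching` below is a generic textbook version, not the Slicer module's code:

```python
import heapq
import numpy as np

def fast_marching(shape, seeds, speed=None):
    """First-order Fast Marching on a 2D grid: propagates a front
    outward from the seed points with local speed F, freezing grid
    points in order of increasing arrival time T."""
    if speed is None:
        speed = np.ones(shape)
    T = np.full(shape, np.inf)
    frozen = np.zeros(shape, dtype=bool)
    heap = []
    for s in seeds:
        T[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if frozen[i, j]:
            continue
        frozen[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            inside = ni >= 0 and nj >= 0 and shape[0] > ni and shape[1] > nj
            if not inside or frozen[ni, nj]:
                continue
            # Upwind neighbors in each axis (clamped at the border).
            a = min(T[max(ni - 1, 0), nj], T[min(ni + 1, shape[0] - 1), nj])
            b = min(T[ni, max(nj - 1, 0)], T[ni, min(nj + 1, shape[1] - 1)])
            f = 1.0 / speed[ni, nj]
            if abs(a - b) >= f:
                t_new = min(a, b) + f
            else:
                # Two-sided quadratic update of the Eikonal equation.
                t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
            if T[ni, nj] > t_new:
                T[ni, nj] = t_new
                heapq.heappush(heap, (t_new, (ni, nj)))
    return T
```

In the statistically based flow, the speed F would be derived from the image statistics rather than set to a constant.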
&lt;br /&gt;
''Code''&lt;br /&gt;
&lt;br /&gt;
* The code has been integrated into Slicer&lt;br /&gt;
* A user-oriented tutorial for the Fast Marching algorithm is available at:[http://www.bme.gatech.edu/groups/minerva/publications/papers/pichon.slicer.fastMarching/index.html Slicer Module Tutorial]&lt;br /&gt;
&lt;br /&gt;
''References:''&lt;br /&gt;
&lt;br /&gt;
* Eric Pichon, Allen Tannenbaum, and Ron Kikinis. A statistically based surface evolution method for medical image segmentation: presentation and validation. In International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), volume 2, pages 711-720, 2003. Note: Best student presentation in image segmentation award[http://www.bme.gatech.edu/groups/minerva/publications/papers/pichon-media2004-segmentation.pdf [1]]&lt;br /&gt;
&lt;br /&gt;
'''Key Investigators:''' Delphine Nain, Eric Pichon, Oleg Michailovich, Yogesh Rathi, James Malcolm, Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
'''Links:'''&lt;br /&gt;
&lt;br /&gt;
* [[Algorithm:Stony Brook#The_Fast_Marching_algorithm_has_been_integrated_into_the_Slicer.:Stony Brook|Stony Brook Summary Page]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/Structural/General_purpose_Filtering_and_Classification&amp;diff=83766</id>
		<title>NA-MIC/Projects/Structural/General purpose Filtering and Classification</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/Structural/General_purpose_Filtering_and_Classification&amp;diff=83766"/>
		<updated>2013-11-16T01:14:08Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Collaborations|NA-MIC_Collaborations]]&lt;br /&gt;
&lt;br /&gt;
'''Objective:'''&lt;br /&gt;
&lt;br /&gt;
To make new filters more accessible to the medical imaging community. These filters include geometric nonlinear smoothing filters for enhancement and a modified Bayesian classifier for segmentation.&lt;br /&gt;
&lt;br /&gt;
'''Progress:'''&lt;br /&gt;
&lt;br /&gt;
* Various anisotropic enhancement and Bayesian filters have been incorporated into ITK.&lt;br /&gt;
** [http://www.itk.org/Insight/Doxygen/html/classitk_1_1BayesianClassifierImageFilter.html Bayesian Classifier Image Filter]&lt;br /&gt;
** [http://www.itk.org/Insight/Doxygen/html/classitk_1_1BayesianClassifierInitializationImageFilter.html Bayesian Classifier Initialization Image Filter]&lt;br /&gt;
* A paper was published in the (Open Access) Insight Journal&lt;br /&gt;
** [https://caddlab.rad.unc.edu/MIDAS/handle/1926/44 Knowledge-Based Segmentation of Brain MRI Scans Using the Insight Toolkit]&lt;br /&gt;
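The classification step can be illustrated by a toy voxelwise Bayesian rule (Gaussian class likelihoods times class priors, label = argmax posterior); `bayesian_classify` is a simplified sketch of the idea behind the ITK filters above, not their API:

```python
import numpy as np

def bayesian_classify(img, means, variances, priors):
    """Simplified Bayesian voxel classifier: for each class, form a
    Gaussian likelihood image, weight it by the class prior, and
    label each voxel with the class of maximum posterior."""
    img = np.asarray(img, dtype=float)
    post = []
    for m, v, p in zip(means, variances, priors):
        like = np.exp(-(img - m) ** 2 / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)
        post.append(p * like)
    # Label map: argmax over the stacked per-class posteriors.
    return np.argmax(np.stack(post, axis=0), axis=0)
```

The ITK pipeline additionally supports smoothing the membership images before the argmax, which this sketch omits.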
&lt;br /&gt;
'''Key Investigators:'''&lt;br /&gt;
&lt;br /&gt;
* Georgia Tech: John Melonakos, Allen Tannenbaum&lt;br /&gt;
* Kitware: Karthik Krishnan, Luis Ibanez&lt;br /&gt;
&lt;br /&gt;
'''Links:'''&lt;br /&gt;
&lt;br /&gt;
* [[Algorithm:Stony Brook#ITK_Bayesian_Classifier_Image_Filter|Stony Brook Summary Page]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:SphericalWaveletsInITK&amp;diff=83765</id>
		<title>Projects:SphericalWaveletsInITK</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:SphericalWaveletsInITK&amp;diff=83765"/>
		<updated>2013-11-16T01:13:57Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Collaborations|NA-MIC_Collaborations]], [[Algorithm:Stony Brook|Stony Brook Algorithms]], [[Algorithm:UNC|UNC Algorithms]]&lt;br /&gt;
&lt;br /&gt;
= ITK Spherical Wavelet Transform Filter =&lt;br /&gt;
&lt;br /&gt;
For a general description of how we are using spherical wavelets for shape analysis, see [[Algorithm:GATech:Multiscale_Shape_Analysis|Multiscale Shape Analysis]] &lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
This page outlines the steps we will take to code the Spherical Wavelet transform in ITK.&lt;br /&gt;
&lt;br /&gt;
* The best place to learn about the spherical wavelet transform that we will implement is the following paper, which also includes pseudo-code.&lt;br /&gt;
&lt;br /&gt;
: Spherical Wavelets: Texture Processing (1995) Peter Schröder, Wim Sweldens&lt;br /&gt;
: http://citeseer.ist.psu.edu/oder95spherical.html&lt;br /&gt;
&lt;br /&gt;
This paper shows how to decompose a scalar signal defined on a spherical mesh into spherical wavelet coefficients (the analysis step, also called the forward transform), and vice versa (the synthesis step, also called the inverse transform).&lt;br /&gt;
&lt;br /&gt;
* To analyze a scalar function defined on a surface of spherical topology, the first step is a spherical parametrization; the function can then be treated as defined on a sphere. (If the function is vector-valued, the analysis can be applied component by component, so in what follows we consider a scalar function for simplicity.)&lt;br /&gt;
&lt;br /&gt;
* For our specific case of SHAPE representation, the functions we analyze are the x, y and z coordinates of the original triangulated surface. &lt;br /&gt;
&lt;br /&gt;
* The numerical computation of the spherical wavelet analysis requires the scalar function to be defined on a special triangulated sphere, generated by a recursive subdivision process. For better performance, the base shape of the triangulation is chosen to be an icosahedron. The scalar function defined on the original spherical mesh is then interpolated onto the recursively subdivided mesh using bi-linear interpolation.&lt;br /&gt;
&lt;br /&gt;
* The forward wavelet transform is carried out on the configuration from the previous step, yielding a set of wavelet coefficients called Gamma coefficients. The backward transform recovers an approximation of the original scalar function from the Gamma coefficients.&lt;br /&gt;
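The recursive icosahedral subdivision can be sketched as midpoint (1-to-4) triangle splitting with new vertices projected back onto the unit sphere; `icosahedron` and `subdivide` below are an illustrative construction, not the ITK code:

```python
import numpy as np

def icosahedron():
    """Unit-sphere icosahedron: 12 vertices, 20 triangular faces."""
    t = (1.0 + np.sqrt(5.0)) / 2.0
    raw = [(-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
           (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
           (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]
    verts = [np.array(v, float) for v in raw]
    verts = [v / np.linalg.norm(v) for v in verts]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    return verts, faces

def subdivide(verts, faces):
    """One level of midpoint subdivision: each triangle splits into
    four, and each new edge midpoint is projected onto the sphere."""
    verts = list(verts)
    cache = {}
    def midpoint(i, j):
        key = (min(i, j), max(i, j))   # each edge creates one vertex
        if key not in cache:
            m = 0.5 * (verts[i] + verts[j])
            verts.append(m / np.linalg.norm(m))
            cache[key] = len(verts) - 1
        return cache[key]
    out = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return verts, out
```

Each level multiplies the face count by 4 (20, 80, 320, ...) and grows the vertex count as 12, 42, 162, ..., which is why the subdivision level can be chosen to just exceed the vertex count of the original mesh.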
&lt;br /&gt;
''Progress''&lt;br /&gt;
&lt;br /&gt;
* Before the spherical wavelet transform, the scalar function is interpolated onto a sphere generated by recursively subdividing an icosahedron, using the smallest subdivision level whose number of vertices exceeds that of the original mesh.&lt;br /&gt;
&lt;br /&gt;
* The whole process is done in one ITK filter, itkSWaveletFilter, which inherits from the itkMeshSource class.&lt;br /&gt;
&lt;br /&gt;
* The processing steps within the class are:&lt;br /&gt;
# The hierarchy of the recursive subdivision is built from the icosahedron. The parameters needed for the wavelet decomposition are computed and stored internally. The level of subdivision is chosen so that the finest sphere in the subdivision coincides with the one used in the interpolation step above.&lt;br /&gt;
# The scalar function is set using the API SetScalarFunction().&lt;br /&gt;
# The wavelet coefficients are computed from the finest level downward. This is an implementation of the lifting scheme of the second-generation wavelet transform.&lt;br /&gt;
# The coefficients of all levels are stored in a single vector structure and can be exported to a file if needed.&lt;br /&gt;
# If needed, the function can be reconstructed from the coefficients. The difference between the reconstructed and the original function is on the order of 10^{-16}.&lt;br /&gt;
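The lifting scheme in step 3 can be illustrated in 1D with the Haar wavelet: a predict step followed by an update step, with an exact inverse (reconstruction error at machine precision, matching the 10^{-16} figure above). The spherical version replaces the even/odd split with mesh neighborhoods; this sketch is not the itkSWaveletFilter code:

```python
import numpy as np

def haar_lift_forward(x):
    """One lifting step of the (unnormalized) Haar wavelet:
    predict the odd samples from the even ones, then update the
    even samples.  The same predict/update pattern underlies
    second-generation spherical wavelets, with mesh-based stencils."""
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = odd - even          # predict: detail coefficients
    s = even + 0.5 * d      # update: smooth coefficients (pair means)
    return s, d

def haar_lift_inverse(s, d):
    """Undo the lifting steps in reverse order: exact reconstruction."""
    even = s - 0.5 * d
    odd = d + even
    x = np.empty(even.size + d.size)
    x[0::2], x[1::2] = even, odd
    return x
```

Cascading the forward step on the smooth coefficients gives the multi-level transform, from the finest level downward as in the filter.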
&lt;br /&gt;
For a preliminary API, see [[NA-MIC/Projects/Structural/Shape_Analysis/ITK_Spherical_Wavelets_API|ITK Spherical Wavelets API]]&lt;br /&gt;
&lt;br /&gt;
''Current Status''&lt;br /&gt;
&lt;br /&gt;
* The implementation is essentially complete; everything we need is stored internally.&lt;br /&gt;
* What remains is to choose which data to output, and in what form, so that the filter fits into the overall shape analysis pipeline.&lt;br /&gt;
&lt;br /&gt;
''Next Steps''&lt;br /&gt;
&lt;br /&gt;
* We will discuss the filter interfaces to fit this into the whole procedure of shape analysis.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
* Georgia Tech: Yi Gao, Delphine Nain, Xavier LaFaucheur, John Melonakos&lt;br /&gt;
* GE: Jim Miller&lt;br /&gt;
* Kitware: Luis Ibanez&lt;br /&gt;
* UNC: Martin Styner&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
&lt;br /&gt;
* [[NA-MIC/Projects/Structural/Shape_Analysis/3D_Shape_Analysis_Using_Spherical_Wavelets|3D Shape Analysis Using Spherical Wavelets]]&lt;br /&gt;
* [[NA-MIC/Projects/Structural/Shape_Analysis/ITK_Spherical_Wavelets_API|ITK Spherical Wavelets API]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/Structural/Segmentation/Stochastic_Methods_for_Segmentation&amp;diff=83764</id>
		<title>NA-MIC/Projects/Structural/Segmentation/Stochastic Methods for Segmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/Structural/Segmentation/Stochastic_Methods_for_Segmentation&amp;diff=83764"/>
		<updated>2013-11-16T01:13:46Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Collaborations|NA-MIC_Collaborations]]&lt;br /&gt;
&lt;br /&gt;
'''Objective:''' To develop new stochastic methods for implementing curvature driven flows for various medical tasks such as segmentation. This methodology will be used as an alternative to level set methods and has certain advantages including the ability to explicitly take into account noise models.&lt;br /&gt;
&lt;br /&gt;
'''Introduction:''' We have continued our work on snakes using our stochastic framework. We can now generalize our results to an arbitrary Riemannian surface, which includes the geodesic active contours as a special case. We are also implementing the directional flows based on the anisotropic conformal factor described above using this stochastic methodology. Our stochastic snake models are based on the theory of interacting particle systems. This brings together the theories of curve evolution and hydrodynamic limits, and as such it supports our growing use of joint methods from probability and partial differential equations in image processing and computer vision. An example of a stochastic snake is illustrated in Figure 1 below.&lt;br /&gt;
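A toy caricature of the idea (not the interacting-particle construction of the papers): evolve a closed polygon by a discrete Laplacian step, whose expected motion is a smoothing, curvature-type flow, plus optional zero-mean noise; `stochastic_curve_step` is purely illustrative:

```python
import numpy as np

def stochastic_curve_step(pts, dt=0.1, sigma=0.0, rng=None):
    """Toy stochastic smoothing flow on a closed polygon: each
    vertex moves toward the average of its two neighbors (a
    discrete Laplacian, the curvature-type term) plus optional
    zero-mean noise, so the expected motion is a smoothing flow.
    This is only a caricature of interacting particle systems."""
    if rng is None:
        rng = np.random.default_rng(0)
    lap = 0.5 * (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0)) - pts
    noise = sigma * rng.standard_normal(pts.shape)
    return pts + dt * lap + noise
```

With sigma = 0 a circle shrinks smoothly, as under curvature flow; with small noise it shrinks in expectation, which is the flavor of the hydrodynamic-limit argument.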
&lt;br /&gt;
&amp;lt;div class=&amp;quot;thumb tright&amp;quot;&amp;gt;&amp;lt;div style=&amp;quot;width: 182px&amp;quot;&amp;gt;[[Image:Stochastic-snake.png|[[Image:180px-Stochastic-snake.png|Figure 1: Stochastic Snake Evolution Example.]]]]&amp;lt;div class=&amp;quot;thumbcaption&amp;quot;&amp;gt;&amp;lt;div class=&amp;quot;magnify&amp;quot; style=&amp;quot;float: right&amp;quot;&amp;gt;[[Image:Stochastic-snake.png|[[Image:magnify-clip.png|Enlarge]]]]&amp;lt;/div&amp;gt;Figure 1: Stochastic Snake Evolution Example.&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Progress:''' We now have working code written in C++ for the two dimensional case. We have worked out the stochastic version of the general geodesic active contour model.&lt;br /&gt;
&lt;br /&gt;
'''Key Investigators:''' Delphine Nain, Samuel Dambreville, Tony Yezzi, Gozde Unal, Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
'''Links:'''&lt;br /&gt;
&lt;br /&gt;
* [[Algorithm:Stony Brook#Stochastic_Methods_for_Segmentation|Stony Brook Summary Page]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/Structural/Optimal_transport_for_Registration&amp;diff=83763</id>
		<title>NA-MIC/Projects/Structural/Optimal transport for Registration</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/Structural/Optimal_transport_for_Registration&amp;diff=83763"/>
		<updated>2013-11-16T01:13:34Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Objective:'''&lt;br /&gt;
&lt;br /&gt;
We want to develop new elastic registration methods for brain imaging. The idea is to use optimal transport as the similarity metric underlying the registration procedure. To apply this technique to 3D manifolds, the surfaces must first be warped to the 2D plane using conformal mapping techniques. Additionally, we would like to allow users to specify anatomical landmarks in the form of artificial slits placed in the segmented brain surfaces. During the registration these slits would be forced to register to one another, guiding the process and producing more accurate results.&lt;br /&gt;
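In 1D the optimal (mass-preserving) transport map has a closed form via the cumulative distribution functions, T = G^{-1}(F(x)), and its L2 cost can serve as the similarity metric; this sketch only illustrates the metric, not the 2D conformal-flattening pipeline of the work above:

```python
import numpy as np

def transport_map_1d(mu, nu, x):
    """1D optimal (mass-preserving) transport map between two
    densities mu and nu sampled on the grid x: T = G^{-1}(F(x)),
    where F and G are the CDFs of mu and nu.  Also returns the
    squared Wasserstein-2 transport cost."""
    F = np.cumsum(mu)
    F = F / F[-1]
    G = np.cumsum(nu)
    G = G / G[-1]
    # Invert G by interpolation: map each F(x) level to nu's grid.
    T = np.interp(F, G, x)
    cost = np.sum((mu / mu.sum()) * (T - x) ** 2)
    return T, cost
```

For two unit-width Gaussians separated by a shift s, the map is approximately T(x) = x + s and the cost approximately s squared, which is the kind of discrepancy the registration energy penalizes.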
&lt;br /&gt;
'''Progress:'''&lt;br /&gt;
&lt;br /&gt;
* This technique has been implemented for standard 2D images and simple surfaces in [1] and [2], along with applications to medical images.&lt;br /&gt;
* Code has been developed to accomplish the warping of the brain from a complex 3D surface containing several holes onto the plane.&lt;br /&gt;
* Optimal transport has been used to register heart [3] and vessel imagery [4]. These applications required surfaces to be more complex in that they contained holes and other topological challenges such as branches. As a result, they needed more complex flattening procedures.&lt;br /&gt;
&lt;br /&gt;
'''Ongoing:'''&lt;br /&gt;
&lt;br /&gt;
* Further development of algorithms to enable highly complex surfaces with embedded landmark information to be registered using optimal transport.&lt;br /&gt;
&lt;br /&gt;
'''Key Investigators:'''&lt;br /&gt;
&lt;br /&gt;
* Steve Haker - Harvard&lt;br /&gt;
* Shawn Lankton - Georgia Tech&lt;br /&gt;
* Lei Zhu - Georgia Tech&lt;br /&gt;
* Allen Tannenbaum - Georgia Tech&lt;br /&gt;
* Ron Kikinis - Harvard&lt;br /&gt;
&lt;br /&gt;
'''References:'''&lt;br /&gt;
&lt;br /&gt;
* [1] Haker S, Tannenbaum A, Kikinis R. Mass Preserving Mappings and Image Registration. Proc MICCAI 2001, LNCS 2208; p 120-127&lt;br /&gt;
* [2] Haker S, Zhu L, Tannenbaum A, Angenent S. Optimal Mass Transport for Registration and Warping. IJCV, 60(3): 225-240, 2004&lt;br /&gt;
* [3] Zhu L, Haker S, Tannenbaum A. Mass Preserving Registration for Heart MR Images. Proc MICCAI 2005, LNCS 3750; p 147-154&lt;br /&gt;
* [4] Zhu L, Haker S, Tannenbaum A. Area-Preserving Mappings for the Visualization of Medical Structures. Proc MICCAI 2003, LNCS 2879; p 277-284&lt;br /&gt;
&lt;br /&gt;
'''Links:'''&lt;br /&gt;
&lt;br /&gt;
* [[Algorithm:Stony Brook|Stony Brook Algorithms]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:ShapeAnalysisOfCaudateAndCorpusCallosum&amp;diff=83762</id>
		<title>Projects:ShapeAnalysisOfCaudateAndCorpusCallosum</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:ShapeAnalysisOfCaudateAndCorpusCallosum&amp;diff=83762"/>
		<updated>2013-11-16T01:13:17Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Collaborations|NA-MIC_Collaborations]], [[Algorithm:Stony Brook|Stony Brook Algorithms]], [[Algorithm:UNC|UNC Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Shape Analysis for the caudate and corpus callosum data =&lt;br /&gt;
&lt;br /&gt;
Our objective is to improve shape analysis of the caudate and to parcellate the corpus callosum based on function.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
* UNC applied its shape analysis pipeline to a dataset of manually segmented caudates of males with and without schizotypal personality disorder. Differences were found mainly in the right caudate. No differences were found for regional parcellations of the corpus callosum.&lt;br /&gt;
* The open source shape analysis pipeline was transferred and installed at Harvard PNL.&lt;br /&gt;
* Harvard applied the UNC shape analysis software to a new dataset of manually segmented caudates of females with and without schizotypal personality disorder. The results confirmed previous findings in the male dataset. Statistically significant shape differences for the right caudate were found (see Fig 1).&lt;br /&gt;
* A less stringent method to correct for multiple comparisons (false discovery rate; FDR) was implemented (UNC) and applied to the male and female caudate datasets (see Fig 2).&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
* BWH: Sylvain Bouix, Marek Kubicki, James Levitt, Marc Niethammer, Martha Shenton&lt;br /&gt;
* UNC: Martin Styner, Isabelle Corouge, Guido Gerig&lt;br /&gt;
* Georgia Tech: Delphine Nain, Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|+ '''Fig 1. Right female caudate results'''&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Rawpval_rc_back_min.png|thumb|252px|Medial]]&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Rawpval_rc_front_min.png|thumb|277px|Lateral]]&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Rawpval_rc_colorbar_min.png|thumb|52px|p value]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|+ '''Fig 2. Right female caudate results, corrected for multiple comparisons (FDR)'''&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Female_right_medial_ICV_caudate_FDR_pvalue_cropped.png|thumb|252px|Medial]]&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Female right lateral ICV caudate FDR pvalue cropped.png|thumb|277px|Lateral]]&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:FemaleFDRcolorbar.png|thumb|52px|p value]]&lt;br /&gt;
|}&lt;br /&gt;
= Links =&lt;br /&gt;
&lt;br /&gt;
* [[Progress_Report:Shape_Analysis|Shape Analysis Progress Report]]&lt;br /&gt;
* [[DBP:Harvard:Collaboration:UNC|DBP:Harvard:Collaboration:UNC]]&lt;br /&gt;
* [[DataRepository#Brockton_VA.2FHarvard_Structural_and_DTI_Images|Male caudate data]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/Structural/Segmentation/Rule_based_segmentation:Striatum&amp;diff=83761</id>
		<title>NA-MIC/Projects/Structural/Segmentation/Rule based segmentation:Striatum</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/Structural/Segmentation/Rule_based_segmentation:Striatum&amp;diff=83761"/>
		<updated>2013-11-16T01:10:49Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Collaborations|NA-MIC_Collaborations]]&lt;br /&gt;
&lt;br /&gt;
'''Objective:''' Semi-automatic segmentation and parcellation of the basal ganglia.&lt;br /&gt;
&lt;br /&gt;
'''Progress:''' We have developed an algorithm for delineating the striatum into 5 physiological subregions (pre/post caudate, pre/post putamen, and nucleus accumbens) while requiring only minimal user input. We have implemented this algorithm, based on the geometric rules for delineating the striatum defined by our Core 3 collaborator, Dr. James Levitt of the PNL, as a 3D Slicer module. The current run time for the algorithm is ~20 seconds after the initial user input. The user inputs a label map of the full striatum, the most superior/dorsal voxel of the putamen on each slice, and the anterior commissure voxel (see figure below). From these, the labelmap is delineated into the aforementioned subregions. The figure below shows a 3D model of the left and right striatum delineated into the five subregions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;thumb tright&amp;quot;&amp;gt;&amp;lt;div style=&amp;quot;width: 182px&amp;quot;&amp;gt;[[Image:Striatum1.png|[[Image:180px-Striatum1.png]]]]&amp;lt;div class=&amp;quot;thumbcaption&amp;quot;&amp;gt;&amp;lt;div class=&amp;quot;magnify&amp;quot; style=&amp;quot;float: right&amp;quot;&amp;gt;[[Image:Striatum1.png|[[Image:magnify-clip.png|Enlarge]]]]&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div class=&amp;quot;thumb tright&amp;quot;&amp;gt;&amp;lt;div style=&amp;quot;width: 182px&amp;quot;&amp;gt;[[Image:Striatum2.png|[[Image:180px-Striatum2.png]]]]&amp;lt;div class=&amp;quot;thumbcaption&amp;quot;&amp;gt;&amp;lt;div class=&amp;quot;magnify&amp;quot; style=&amp;quot;float: right&amp;quot;&amp;gt;[[Image:Striatum2.png|[[Image:magnify-clip.png|Enlarge]]]]&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Key Investigators:'''&lt;br /&gt;
&lt;br /&gt;
* GTech: Ramsey Al-Hakim, Delphine Nain, Allen Tannenbaum.&lt;br /&gt;
* PNL: Sylvain Bouix, James Levitt, Marc Niethammer, Martha Shenton.&lt;br /&gt;
* Kitware: Luis Ibanez&lt;br /&gt;
* Isomics: Steve Pieper&lt;br /&gt;
&lt;br /&gt;
'''Links:'''&lt;br /&gt;
&lt;br /&gt;
* [[DBP:Harvard:Collaboration:GTech|Harvard - Rule-based Segmentation]]&lt;br /&gt;
* [[Algorithm:Stony Brook#Rule_Based_Segmentation_Slicer_Modules|Stony Brook Summary Page]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/Structural/Shape_Analysis/3D_Shape_Analysis_Using_Spherical_Wavelets&amp;diff=83760</id>
		<title>NA-MIC/Projects/Structural/Shape Analysis/3D Shape Analysis Using Spherical Wavelets</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/Structural/Shape_Analysis/3D_Shape_Analysis_Using_Spherical_Wavelets&amp;diff=83760"/>
		<updated>2013-11-16T01:10:38Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Collaborations|NA-MIC_Collaborations]]&lt;br /&gt;
&lt;br /&gt;
(Updated 09/12/2006)&lt;br /&gt;
&lt;br /&gt;
'''Objective:'''&lt;br /&gt;
&lt;br /&gt;
We have developed a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. Our work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, non-global, non-uniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset.&lt;br /&gt;
&lt;br /&gt;
'''Progress:'''&lt;br /&gt;
&lt;br /&gt;
* We developed a multiscale representation of 3D surfaces using conformal mappings and spherical wavelets. We then learned a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. This novel multiscale shape prior was shown to encode more descriptive and localized shape variations than the Active Shape Models (ASM) prior for a given training set size. The results were published in [1] on a prostate dataset.&lt;br /&gt;
&lt;br /&gt;
* We have replicated our results from [1] on the left caudate nucleus dataset from the Brockton dataset (Harvard, Core 3). Additionally, one of the nice applications of our technique is the automatic discovery of uncorrelated shape variations in a dataset, at various scales. The visualization of the resulting bands on the mean shape can in itself be interesting for shape analysis (see Figure) by indicating which surface patches co-vary across the training set. For example at scale 1, bands 1 and 2 indicate two uncorrelated shape processes in the caudate data that make sense anatomically: the variation of the head and of the body. It is also interesting that the bands have compact spatial support, though this is not a constraint of our technique.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;thumb tright&amp;quot;&amp;gt;&amp;lt;div style=&amp;quot;width: 182px&amp;quot;&amp;gt;[[Image:Gatech_caudateBands.PNG|[[Image:180px-Gatech_caudateBands.PNG|Figure 1: Two example bands discovered by the prior, color-coded on the mean shape of the 29 left caudates from the Harvard Brockton dataset. The color shows the cumulative value of the wavelet bases that belong to that band. Higher-value (light-blue to red) areas represent surface locations with correlated variations across shapes]]]]&amp;lt;div class=&amp;quot;thumbcaption&amp;quot;&amp;gt;&amp;lt;div class=&amp;quot;magnify&amp;quot; style=&amp;quot;float: right&amp;quot;&amp;gt;[[Image:Gatech_caudateBands.PNG|[[Image:magnify-clip.png|Enlarge]]]]&amp;lt;/div&amp;gt;Figure 1: Two example bands discovered by the prior, color-coded on the mean shape of the 29 left caudates from the Harvard Brockton dataset. The color shows the cumulative value of the wavelet bases that belong to that band. Higher-value (light-blue to red) areas represent surface locations with correlated variations across shapes&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* ''' Segmentation'''&amp;lt;nowiki&amp;gt;: Based on our representation, we derived a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. In [2] we report results on the caudate nucleus in the Brockton dataset (Harvard, Core 3). Our validation shows that our algorithm is computationally efficient and outperforms the Active Shape Model (ASM) algorithm, by capturing finer shape details. &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Ongoing:'''&lt;br /&gt;
&lt;br /&gt;
* ''' Classification'''&amp;lt;nowiki&amp;gt;: We are collaborating with Martin Styner (UNC, Core 1) to include our shape features in the UNC shape analysis pipeline. We obtained interesting preliminary results that have been verified by Jim Levitt (Harvard PNL, Core 3). See &amp;lt;/nowiki&amp;gt;[[NA-MIC/Projects/Structural/Shape_Analysis/Caudate_and_Corpus_Callosum|Caudate and Corpus Callosum Analysis]]. We have also discussed our results with Martin Styner during a UNC site visit (see [[Georgia_Tech_visit_to_UNC%2C_June_8-9|June 8-9, 2006, Georgia Tech visit to UNC: Shape Analysis Discussion]]). A scientific paper is being prepared with our results.&lt;br /&gt;
* ''' ITK Filter'''&amp;lt;nowiki&amp;gt;: We are developing an ITK Filter for the Spherical wavelet transform (See &amp;lt;/nowiki&amp;gt;[[NA-MIC/Projects/Structural/Shape_Analysis/Spherical_Wavelets_in_ITK|ITK Spherical Wavelet Transform Filter]])&lt;br /&gt;
&lt;br /&gt;
'''References:'''&lt;br /&gt;
&lt;br /&gt;
* [1] Nain D, Haker S, Bobick A, Tannenbaum A. Multiscale 3D Shape Analysis using Spherical Wavelets. Proc MICCAI, Oct 26-29 2005; p 459-467 [http://www.bme.gatech.edu/groups/minerva/publications/papers/nain.miccai2005.pdf [1]]&lt;br /&gt;
&lt;br /&gt;
* [2] Nain D, Haker S, Bobick A, Tannenbaum A. Shape-driven 3D Segmentation using Spherical Wavelets. Proc MICCAI, Oct 2-5, 2006. To Appear.&lt;br /&gt;
&lt;br /&gt;
'''Key Investigators:'''&lt;br /&gt;
&lt;br /&gt;
* Georgia Tech: Delphine Nain, Aaron Bobick, Allen Tannenbaum, Yi Gao and Xavier Le Faucheur&lt;br /&gt;
* Harvard SPL: Steve Haker&lt;br /&gt;
&lt;br /&gt;
'''Collaborators'''&lt;br /&gt;
&lt;br /&gt;
* Core 1: Martin Styner (UNC)&lt;br /&gt;
* Core 2: Jim Miller (GE), Luis Ibanez (Kitware)&lt;br /&gt;
* Core 3: James Levitt, Marc Niethammer, Sylvain Bouix, Martha Shenton (Harvard PNL)&lt;br /&gt;
&lt;br /&gt;
'''Links:'''&lt;br /&gt;
&lt;br /&gt;
* [[NA-MIC/Projects/Structural/Shape_Analysis/Caudate_and_Corpus_Callosum|Caudate and Corpus Callosum Analysis]].&lt;br /&gt;
* [[Georgia_Tech_visit_to_UNC%2C_June_8-9|June 8-9, 2006, Georgia Tech visit to UNC: Shape Analysis Discussion]]&lt;br /&gt;
* [[NA-MIC/Projects/Structural/Shape_Analysis/Spherical_Wavelets_in_ITK|ITK Spherical Wavelet Transform Filter]]&lt;br /&gt;
* [[Algorithm:Stony Brook#Multiscale_Shape_Analysis|Stony Brook Summary Page]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/Structural/Segmentation/Rule_based_segmentation:DLPFC&amp;diff=83759</id>
		<title>NA-MIC/Projects/Structural/Segmentation/Rule based segmentation:DLPFC</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/Structural/Segmentation/Rule_based_segmentation:DLPFC&amp;diff=83759"/>
		<updated>2013-11-16T01:10:18Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Collaborations|NA-MIC_Collaborations]]&lt;br /&gt;
&lt;br /&gt;
'''Objective:''' Semi-automatic segmentation of the DLPFC.&lt;br /&gt;
&lt;br /&gt;
'''Progress:''' We have developed an algorithm for semi-automatic segmentation of the DLPFC based on the rules of Core 3 collaborator, Dr. James Fallon. This algorithm was tested last year in Matlab with successful results. This year, we implemented the algorithm as a 3D SLICER module that works with the current Editor Tab. A screenshot of the module is shown below. The ITK Bayesian Segmentation Filter is currently being incorporated into the module.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;thumb tright&amp;quot;&amp;gt;&amp;lt;div style=&amp;quot;width: 182px&amp;quot;&amp;gt;[[Image:Dlpfc-slicer.png|[[Image:180px-Dlpfc-slicer.png|DLPFC Slicer Module]]]]&amp;lt;div class=&amp;quot;thumbcaption&amp;quot;&amp;gt;&amp;lt;div class=&amp;quot;magnify&amp;quot; style=&amp;quot;float: right&amp;quot;&amp;gt;[[Image:Dlpfc-slicer.png|[[Image:magnify-clip.png|Enlarge]]]]&amp;lt;/div&amp;gt;DLPFC Slicer Module&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Publication Reference:''' Ramsey Al-Hakim, James Fallon, Delphine Nain, John Melonakos, and Allen Tannenbaum. “A Dorsolateral Prefrontal Cortex Semi-Automatic Segmenter.” Proc SPIE Medical Imaging, 2006.&lt;br /&gt;
&lt;br /&gt;
'''Key Investigators:'''&lt;br /&gt;
&lt;br /&gt;
* Georgia Tech: Ramsey Al-Hakim, John Melonakos, Delphine Nain, Allen Tannenbaum.&lt;br /&gt;
* UCI: James Fallon&lt;br /&gt;
* Kitware: Luis Ibanez&lt;br /&gt;
* Isomics: Steve Pieper&lt;br /&gt;
&lt;br /&gt;
'''Links:'''&lt;br /&gt;
&lt;br /&gt;
* [[Algorithm:Stony Brook#Rule_Based_Segmentation_Slicer_Modules|Stony Brook Summary Page]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/fMRI_Analysis/Conformal_Flattening_for_fMRI_Visualization&amp;diff=83758</id>
		<title>NA-MIC/Projects/fMRI Analysis/Conformal Flattening for fMRI Visualization</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/fMRI_Analysis/Conformal_Flattening_for_fMRI_Visualization&amp;diff=83758"/>
		<updated>2013-11-16T01:10:03Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Collaborations|NA-MIC_Collaborations]]&lt;br /&gt;
&lt;br /&gt;
'''Objective:''' We want to develop new flattening methods for better visualizing neural activity from fMRI brain imagery. Our technique is based on conformal mappings which map the cortical surface onto a sphere in an angle preserving manner.&lt;br /&gt;
&lt;br /&gt;
'''Introduction:''' We have worked on several techniques for flattening brain surfaces for visualization. Brain flattening has a number of uses, including in functional magnetic resonance imaging, where it helps to visualize neural activity within the three-dimensional folds of the cortex.&lt;br /&gt;
&lt;br /&gt;
The basic idea is that the grey matter cortical surface has the topology of a crumpled sheet, and in particular does not have any holes or self-intersections. Our approach to flattening such a surface is based on a well-known technique in the theory of Riemann surfaces from complex analysis and geometry, namely, that a surface of genus zero (no handles) without any holes or self-intersections can be mapped conformally onto the sphere, and any local portion thereof onto a disc. In this way, the brain surface may be flattened. The mapping is conformal in the sense that angles are preserved. Moreover, one can explicitly write down how the metric is transformed, and thus the geodesics as well. Hence the flattening mapping can be used to obtain an atlas of the brain surface in a straightforward, canonical manner.&lt;br /&gt;
&lt;br /&gt;
The key observation is that the flattening function may be obtained as the solution of a second-order elliptic equation on the surface to be flattened. For triangulated surfaces, there exist powerful, reliable finite element procedures which can be employed to numerically approximate the flattening function. Some examples of flattened surfaces are given below in Figures 1 and 2.&lt;br /&gt;
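As an illustration of this observation, the discrete version of the problem is a sparse linear solve: interior vertices satisfy a discrete Laplace equation while boundary vertices are pinned. The sketch below uses uniform edge weights on a toy 5-vertex mesh in place of the finite-element (cotangent) weights; all names are illustrative.&lt;br /&gt;

```python
import numpy as np

def harmonic_map(n_vertices, edges, boundary):
    """Solve a discrete Laplace equation L x = 0 with Dirichlet data.

    This is the graph-Laplacian analogue of the second-order elliptic
    flattening equation; uniform edge weights stand in for the
    finite-element (cotangent) weights. `boundary` maps a vertex index
    to its fixed 2D target position.
    """
    L = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    free = [v for v in range(n_vertices) if v not in boundary]
    fixed = sorted(boundary)
    X = np.zeros((n_vertices, 2))
    for v in fixed:
        X[v] = boundary[v]
    # Interior positions solve L_ff x_f = -L_fb x_b (discrete harmonicity).
    A = L[np.ix_(free, free)]
    b = -L[np.ix_(free, fixed)] @ X[fixed]
    X[free] = np.linalg.solve(A, b)
    return X

# A 5-vertex mesh: square boundary (vertices 0..3), one interior vertex (4).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]
bnd = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
X = harmonic_map(5, edges, bnd)
print(X[4])   # the interior vertex lands at the centroid (0.5, 0.5)
```
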
&lt;br /&gt;
'''Progress:''' We have developed code for conformal flattening which has been incorporated into Slicer.&lt;br /&gt;
&lt;br /&gt;
Surface flattening is a technique that associates a coordinate in the image with each location on the surface of a geometric object. This technology can be used on a vessel surface, brain surface, or colon surface, just to list a few medical examples.&lt;br /&gt;
&lt;br /&gt;
[[Image:Flat1.png|thumb|180px|right|Figure 1: Flattening of the Cortical Surface]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Flat2.png|thumb|180px|right|Figure 2: Flattening of White/Gray Matter Surface]]&lt;br /&gt;
&lt;br /&gt;
(From S. Angenent, S. Haker, A. Tannenbaum, and R. Kikinis, “On the Laplace-Beltrami operator and brain surface flattening,” IEEE Trans. on Medical Imaging, Vol. 18, pp. 700-711, 1999.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Key Investigators:'''&lt;br /&gt;
&lt;br /&gt;
* GATech: Yi Gao, John Melonakos, Shawn Lankton, Allen Tannenbaum&lt;br /&gt;
* Harvard: Steven Haker, Ron Kikinis&lt;br /&gt;
&lt;br /&gt;
'''Links:'''&lt;br /&gt;
&lt;br /&gt;
* [[Algorithm:Stony Brook#Conformal_Flattening|Stony Brook Page]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/Diffusion_Image_Analysis/Anisotropic_Conformal_Metrics_for_DTI_Tractography&amp;diff=83757</id>
		<title>NA-MIC/Projects/Diffusion Image Analysis/Anisotropic Conformal Metrics for DTI Tractography</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=NA-MIC/Projects/Diffusion_Image_Analysis/Anisotropic_Conformal_Metrics_for_DTI_Tractography&amp;diff=83757"/>
		<updated>2013-11-16T01:09:39Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Collaborations|NA-MIC_Collaborations]]&lt;br /&gt;
&lt;br /&gt;
'''Objective:''' We want to extract the white matter tracts from Diffusion Tensor MR data. The idea is to use directional information in a new anisotropic energy functional based on Finsler geometry.&lt;br /&gt;
&lt;br /&gt;
'''Progress:''' We have implemented the algorithm in C++ using the fast-sweeping algorithm. We are in the process of porting the code to ITK.&lt;br /&gt;
&lt;br /&gt;
We are continuing to work on our new framework for white matter tractography in high angular resolution diffusion data. We base our work on concepts from Finsler geometry. Namely, a direction-dependent local cost is defined based on the diffusion data for every direction on the unit sphere. Minimum cost curves are determined by solving the Hamilton-Jacobi-Bellman equation using the fast-sweeping algorithm. Classical costs based on the diffusion tensor field can be seen as a special case. While the minimum cost (or equivalently the travel time of a particle moving along the curve) and the anisotropic front propagation frameworks are related, front speed is related to particle speed through a Legendre transformation which can severely impact anisotropy information for front propagation techniques. Implementation details and results on high angular diffusion data show that this method can successfully take advantage of the increased angular resolution in high b-value diffusion weighted data despite lower signal to noise ratio. (See Figures 1 and 2 at the end of this page for examples. This method also works nicely for the segmentation of blood vessels as is indicated in Figure 3.)&lt;br /&gt;
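For a direction-independent local cost (the isotropic special case), the minimum-cost formulation reduces to the eikonal equation, and the fast-sweeping idea can be sketched in a few lines. This is an illustrative 2D isotropic solver, not the Finsler/HJB implementation; function and variable names are ours.&lt;br /&gt;

```python
import numpy as np

def fast_sweep_eikonal(speed, seed, h=1.0, n_sweeps=4):
    """Isotropic fast sweeping for the travel time T, |grad T| = 1/speed.

    Gauss-Seidel updates are applied in the four alternating grid
    orderings; a few passes suffice for convergence on small grids.
    """
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    T[seed] = 0.0
    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for ys, xs in orders:
            for i in ys:
                for j in xs:
                    if (i, j) == seed:
                        continue
                    a = min(T[max(i - 1, 0), j], T[min(i + 1, ny - 1), j])
                    b = min(T[i, max(j - 1, 0)], T[i, min(j + 1, nx - 1)])
                    if np.isinf(min(a, b)):
                        continue   # no finite neighbour information yet
                    f = h / speed[i, j]
                    if abs(a - b) >= f:
                        t = min(a, b) + f        # one-sided update
                    else:
                        t = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t)
    return T

T = fast_sweep_eikonal(np.ones((5, 5)), seed=(0, 0))
print(T[0, 4])   # travel time along the top row of a unit-speed grid: 4.0
```
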
&lt;br /&gt;
''References:''&lt;br /&gt;
&lt;br /&gt;
# Eric Pichon and Allen Tannenbaum. Curve segmentation using directional information, relation to pattern detection. In IEEE International Conference on Image Processing (ICIP), volume 2, pages 794-797, 2005.&lt;br /&gt;
# Eric Pichon, Carl-Fredrik Westin, and Allen Tannenbaum. A Hamilton-Jacobi-Bellman approach to high angular resolution diffusion tractography. In International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), pages 180-187, 2005&lt;br /&gt;
&lt;br /&gt;
'''Key Investigators:'''&lt;br /&gt;
&lt;br /&gt;
* Georgia Tech: Eric Pichon, [[User:Melonakos|John Melonakos]], Xavier Le Faucheur, Allen Tannenbaum&lt;br /&gt;
* Harvard/BWH: C-F Westin&lt;br /&gt;
&lt;br /&gt;
'''Links:'''&lt;br /&gt;
&lt;br /&gt;
* Details of the project are available here: [[Algorithm:Stony Brook#White_Matter_Tractography|Algorithm:Stony Brook#White_Matter_Tractography]]&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; |&lt;br /&gt;
&amp;lt;div class=&amp;quot;thumb tleft&amp;quot;&amp;gt;&amp;lt;div style=&amp;quot;width: 182px&amp;quot;&amp;gt;[[Image:Tracts1.png|[[Image:180px-Tracts1.png|Figure 1: Fiber tracking from high resolution data set.]]]]&amp;lt;div class=&amp;quot;thumbcaption&amp;quot;&amp;gt;&amp;lt;div class=&amp;quot;magnify&amp;quot; style=&amp;quot;float: right&amp;quot;&amp;gt;[[Image:Tracts1.png|[[Image:magnify-clip.png|Enlarge]]]]&amp;lt;/div&amp;gt;Figure 1: Fiber tracking from high resolution data set.&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; |&lt;br /&gt;
&amp;lt;div class=&amp;quot;thumb tleft&amp;quot;&amp;gt;&amp;lt;div style=&amp;quot;width: 182px&amp;quot;&amp;gt;[[Image:Tracts2.png|[[Image:180px-Tracts2.png|Figure 2: Comparison of technique with streamline based on tensor field.]]]]&amp;lt;div class=&amp;quot;thumbcaption&amp;quot;&amp;gt;&amp;lt;div class=&amp;quot;magnify&amp;quot; style=&amp;quot;float: right&amp;quot;&amp;gt;[[Image:Tracts2.png|[[Image:magnify-clip.png|Enlarge]]]]&amp;lt;/div&amp;gt;Figure 2: Comparison of technique with streamline based on tensor field.&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; |&lt;br /&gt;
&amp;lt;div class=&amp;quot;thumb tleft&amp;quot;&amp;gt;&amp;lt;div style=&amp;quot;width: 182px&amp;quot;&amp;gt;[[Image:Vessels1.png|[[Image:180px-Vessels1.png|Figure 3: Vessel Segmentation]]]]&amp;lt;div class=&amp;quot;thumbcaption&amp;quot;&amp;gt;&amp;lt;div class=&amp;quot;magnify&amp;quot; style=&amp;quot;float: right&amp;quot;&amp;gt;[[Image:Vessels1.png|[[Image:magnify-clip.png|Enlarge]]]]&amp;lt;/div&amp;gt;Figure 3: Vessel Segmentation&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Algorithm:GATech:Finsler_Active_Contour_DWI&amp;diff=83756</id>
		<title>Algorithm:GATech:Finsler Active Contour DWI</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Algorithm:GATech:Finsler_Active_Contour_DWI&amp;diff=83756"/>
		<updated>2013-11-16T01:08:41Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Collaborations|NA-MIC_Collaborations]]&lt;br /&gt;
&lt;br /&gt;
'''Objective:'''&lt;br /&gt;
&lt;br /&gt;
We want to extract the white matter tracts from Diffusion Weighted MRI scans. The idea is to use directional information in a new anisotropic energy functional based on Finsler geometry.&lt;br /&gt;
&lt;br /&gt;
'''Progress:'''&lt;br /&gt;
&lt;br /&gt;
We have implemented the algorithm in Matlab/C using the Fast Sweeping algorithm. We are in the process of porting the code to ITK.&lt;br /&gt;
&lt;br /&gt;
We are continuing to work on our new framework for white matter tractography in high angular resolution diffusion data. We base our work on concepts from Finsler geometry. Namely, a direction-dependent local cost is defined based on the diffusion data for every direction on the unit sphere. Minimum cost curves are determined by solving the Hamilton-Jacobi-Bellman equation using the Fast Sweeping algorithm. Classical costs based on the diffusion tensor field can be seen as a special case. While the minimum cost (or equivalently the travel time of a particle moving along the curve) and the anisotropic front propagation frameworks are related, front speed is related to particle speed through a Legendre transformation which can severely impact anisotropy information for front propagation techniques. Implementation details and results on high angular diffusion data show that this method can successfully take advantage of the increased angular resolution in high b-value diffusion weighted data despite lower signal to noise ratio.&lt;br /&gt;
&lt;br /&gt;
''Data''&lt;br /&gt;
&lt;br /&gt;
We are using Harvard's high angular resolution datasets which currently consist of a population of 12 schizophrenics and 12 normal controls.&lt;br /&gt;
&lt;br /&gt;
''Visual Results''&lt;br /&gt;
&lt;br /&gt;
Recently, we have applied this method to the cingulum bundle, as shown in the following images:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|+ '''Fig 1. Results on Cingulum Bundle'''&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Case24-coronal-tensors-edit.png |thumb|250px|Detailed View of the Cingulum Bundle Anchor Tract]]&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Case25-sagstream-tensors-edit.png|thumb|250px|Streamline Comparison]]&lt;br /&gt;
|-&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Case26-anterior.png |thumb|250px|Anterior View of the Cingulum Bundle Anchor Tract]]&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Case26-posterior.png|thumb|250px|Posterior View of the Cingulum Bundle Anchor Tract]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Previously, this method was applied to full brain fiber tractography, as shown in the following images:&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|+ '''Fig 2. Results on full brain fiber tractography'''&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Tracts1.png |thumb|250px|Fiber tracking from high resolution data set.]]&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Tracts2.png|thumb|250px|Comparison of technique with streamline based on tensor field.]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This method may also be used in pattern detection applications, such as vessel segmentation:&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|+ '''Fig 3. Results on Vessel Segmentation'''&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Vessels1.png |thumb|250px|Vessel Segmentation]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
''Statistical Results''&lt;br /&gt;
&lt;br /&gt;
We are currently investigating Cingulum Bundle fractional anisotropy (FA) differences between a population of 12 schizophrenics and 12 normal controls. We find the anchor tracts as described above and then compute statistics for FA inside a tube of radius 1-3 mm centered on the anchor tract. So far, using this method, we have not found a statistically significant difference between the normal controls and the schizophrenics. Therefore, we are investigating a more precise extraction of the cingulum bundle using Finsler level sets, rather than the primitive cylinder currently used.&lt;br /&gt;
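The tube statistic can be sketched as follows: voxels within a given distance of the anchor tract form the tube, FA is averaged inside it per subject, and the two groups are compared. The tube construction and Welch t-statistic below are a numpy-only illustration (exact p-values would come from the t distribution); all names are ours.&lt;br /&gt;

```python
import numpy as np

def mean_fa_in_tube(fa, tract_pts, radius, spacing=1.0):
    """Mean FA inside a tube of the given radius around an anchor tract.

    fa: 3D FA volume; tract_pts: points along the tract in voxel
    coordinates. Illustrative names; uniform voxel spacing assumed.
    """
    idx = np.indices(fa.shape).reshape(3, -1).T * spacing
    pts = np.asarray(tract_pts, dtype=float) * spacing
    # Distance from every voxel centre to its nearest tract point.
    d = np.sqrt(((idx[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(1)
    mask = radius >= d.reshape(fa.shape)
    return fa[mask].mean()

def welch_t(a, b):
    """Welch two-sample t-statistic for unequal-variance groups."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

# Toy volume: FA is 0.7 in a 2x2 column around the tract, 0 elsewhere.
fa = np.zeros((4, 4, 4))
fa[1:3, 1:3, :] = 0.7
tract = [(1.5, 1.5, z) for z in range(4)]
print(mean_fa_in_tube(fa, tract, radius=1.0))    # mean FA inside the tube

controls = [0.52, 0.55, 0.50, 0.53]              # made-up per-subject means
patients = [0.47, 0.49, 0.46, 0.48]
print(welch_t(controls, patients))               # positive: controls higher
```
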
&lt;br /&gt;
Download the current statistical results [[Media:ResultsAnchorTube.txt|here]] (last updated 18/Apr/2007).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Project Status''&lt;br /&gt;
*Working 3D implementation in Matlab using the C-based Mex functions.&lt;br /&gt;
*Currently porting to ITK.&lt;br /&gt;
&lt;br /&gt;
''References:''&lt;br /&gt;
* V. Mohan, J. Melonakos, M. Niethammer, M. Kubicki, and A. Tannenbaum. Finsler Level Set Segmentation for Imagery in Oriented Domains. BMVC 2007. Under review.&lt;br /&gt;
* J. Melonakos, V. Mohan, M. Niethammer, K. Smith, M. Kubicki, and A. Tannenbaum. Finsler Tractography for White Matter Connectivity Analysis of the Cingulum Bundle. MICCAI 2007. Under review.&lt;br /&gt;
* J. Melonakos, E. Pichon, S. Angenent, and A. Tannenbaum. Finsler Active Contours. IEEE Transactions on Pattern Analysis and Machine Intelligence, to appear in 2007.&lt;br /&gt;
* E. Pichon and A. Tannenbaum. Curve segmentation using directional information, relation to pattern detection. In IEEE International Conference on Image Processing (ICIP), volume 2, pages 794-797, 2005.&lt;br /&gt;
* E. Pichon, C-F Westin, and A. Tannenbaum. A Hamilton-Jacobi-Bellman approach to high angular resolution diffusion tractography. In International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), pages 180-187, 2005.&lt;br /&gt;
&lt;br /&gt;
'''Key Investigators:'''&lt;br /&gt;
&lt;br /&gt;
* Georgia Tech: John Melonakos, Vandana Mohan, Allen Tannenbaum&lt;br /&gt;
* Harvard/BWH: Marek Kubicki, Marc Niethammer, Kate Smith, C-F Westin, Martha Shenton&lt;br /&gt;
&lt;br /&gt;
'''Links:'''&lt;br /&gt;
&lt;br /&gt;
* [[Algorithm:Stony Brook|Stony Brook University Algorithms]]&lt;br /&gt;
* [[NA-MIC_Collaborations|NA-MIC Collaborations]]&lt;br /&gt;
* [[Media:2007_Project_Half_Week_FinslerTractography.ppt| 4-block PPT Jan 2007]]&lt;br /&gt;
* [[Projects/Diffusion/2007_Project_Week_Geodesic_Tractography| June 2007 Project Week]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:ImageSmoothSlicer2&amp;diff=83755</id>
		<title>Projects:ImageSmoothSlicer2</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:ImageSmoothSlicer2&amp;diff=83755"/>
		<updated>2013-11-16T01:08:23Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Collaborations|NA-MIC_Collaborations]]&lt;br /&gt;
&lt;br /&gt;
'''Objective:'''&lt;br /&gt;
&lt;br /&gt;
This algorithm smooths an image only along its level lines, not across them, which preserves edges while removing noise. It works on the principle of &amp;lt;span class=&amp;quot;texhtml&amp;quot;&amp;gt;κ&amp;lt;sup&amp;gt;(1 / 3)&amp;lt;/sup&amp;gt;&amp;lt;/span&amp;gt; and &amp;lt;span class=&amp;quot;texhtml&amp;quot;&amp;gt;κ&amp;lt;sup&amp;gt;(1 / 4)&amp;lt;/sup&amp;gt;&amp;lt;/span&amp;gt; smoothing of the level lines of the image: &amp;lt;span class=&amp;quot;texhtml&amp;quot;&amp;gt;κ&amp;lt;sup&amp;gt;(1 / 3)&amp;lt;/sup&amp;gt;&amp;lt;/span&amp;gt; smooths each slice in the 2D plane, while &amp;lt;span class=&amp;quot;texhtml&amp;quot;&amp;gt;κ&amp;lt;sup&amp;gt;(1 / 4)&amp;lt;/sup&amp;gt;&amp;lt;/span&amp;gt; performs volumetric smoothing in 3D. To smooth an entire volume, use the &amp;lt;span class=&amp;quot;texhtml&amp;quot;&amp;gt;κ&amp;lt;sup&amp;gt;(1 / 4)&amp;lt;/sup&amp;gt;&amp;lt;/span&amp;gt; filter; to smooth only a single slice, use the &amp;lt;span class=&amp;quot;texhtml&amp;quot;&amp;gt;κ&amp;lt;sup&amp;gt;(1 / 3)&amp;lt;/sup&amp;gt;&amp;lt;/span&amp;gt; filter.&lt;br /&gt;
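The slice-wise smoothing can be sketched as an explicit curvature-flow step: compute the curvature of the level lines as the divergence of the unit gradient, then move each level line by a power of it. This is a minimal 2D finite-difference illustration, not the Slicer module's implementation; all names are ours.&lt;br /&gt;

```python
import numpy as np

def curvature_flow_step(u, dt=0.1, power=1.0 / 3.0, eps=1e-8):
    """One explicit Euler step of u_t = |grad u| sign(k) |k|^power.

    power = 1/3 gives the (affine-invariant) slice-wise smoothing of the
    level lines; a central-difference sketch, illustrative only.
    """
    uy, ux = np.gradient(u)
    mag = np.sqrt(ux ** 2 + uy ** 2) + eps
    ny, nx = uy / mag, ux / mag            # unit normal to the level lines
    # Curvature k = div(grad u / |grad u|).
    k = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)
    return u + dt * (mag - eps) * np.sign(k) * np.abs(k) ** power

# Denoise a noisy step image by iterating a few flow steps; the straight
# edge has near-zero curvature, so it is preserved while noise shrinks.
rng = np.random.default_rng(0)
u = np.zeros((32, 32))
u[:, 16:] = 1.0
noisy = u + 0.1 * rng.normal(size=u.shape)
smoothed = noisy.copy()
for _ in range(10):
    smoothed = curvature_flow_step(smoothed)
print(np.abs(smoothed - u).mean())
```
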
&lt;br /&gt;
'''Links:'''&lt;br /&gt;
&lt;br /&gt;
* [[Algorithm:Stony Brook|Stony Brook University Algorithms]]&lt;br /&gt;
* [[NA-MIC_Collaborations|NA-MIC Collaborations]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:RuleBasedSegmentation&amp;diff=83754</id>
		<title>Projects:RuleBasedSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:RuleBasedSegmentation&amp;diff=83754"/>
		<updated>2013-11-16T01:08:12Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[NA-MIC_Collaborations|NA-MIC_Collaborations]], [[Algorithm:Stony Brook|Stony Brook University Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Rule Based Segmentation = &lt;br /&gt;
&lt;br /&gt;
In this work, we provide software to semi-automate the implementation of segmentation procedures based on expert neuroanatomist rules. We have implemented our code in Slicer 2. We currently provide modules for the semi-automatic segmentation of the DLPFC and the Striatum.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
We have developed an algorithm for semi-automatic segmentation of the DLPFC based on the rules of Core 3 collaborator, Dr. James Fallon. This algorithm was tested last year in Matlab with successful results. This year, we implemented the algorithm as a 3D SLICER module that works with the current Editor Tab. A screenshot of the module is shown below. The ITK Bayesian Segmentation Filter is currently being incorporated into the module. This is important, since we use Bayesian classifiers in order to enhance the Fallon method. The motivation of the DLPFC semi-automatic segmenter was to minimize the segmentation time of the DLPFC by incorporating the rules of Dr. Fallon into an algorithm, while still giving the user control of the segmentation process. The time to segment the DLPFC was reduced from over 30 minutes to approximately 5 minutes. The algorithm is based on the average proportional distances of the posterior boundary from the temporal lobe tip and the anterior boundary from the frontal pole. Each hemisphere must be done separately. The average shape is a parallelogram, arising from the dorsal movement of the middle frontal gyrus as one moves posteriorly through the coronal slices. Dr. James Fallon visited Georgia Tech in December 2005 to train our local researchers in his heuristic rules. He will visit again on May 17-18, 2007 for further testing, algorithmic development, and clinical applications.&lt;br /&gt;
&lt;br /&gt;
''Striatum Progress''&lt;br /&gt;
&lt;br /&gt;
We have developed an algorithm for delineation of the striatum into 5 physiological subregions (pre/post caudate, pre/post putamen, and nucleus accumbens) while requiring only minimal user input. We have implemented this algorithm from the geometric rules for delineating the striatum as defined by our Core 3 collaborator, Dr. James Levitt of the PNL, into a 3D SLICER module. The current run time for the algorithm is ~20 seconds after the initial user input. The user inputs a label map of the full striatum, the most superior/dorsal voxel of the putamen on each slice, and the anterior commissure voxel (see figure below). From these, the label map is delineated into the aforementioned subregions. The figure below shows a 3D model of the left and right striatum delineated into the five subregions.&lt;br /&gt;
&lt;br /&gt;
''Striatum Representative Image and Descriptive Caption''&lt;br /&gt;
&lt;br /&gt;
[[Image:Striatum1.png|[[Image:Striatum1.png|Image:Striatum1.png]]]] [[Image:Striatum2.png|[[Image:Striatum2.png|Image:Striatum2.png]]]]&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
* Georgia Tech: Ramsey Al-Hakim, Delphine Nain, Allen Tannenbaum, John Melonakos&lt;br /&gt;
* PNL: Sylvain Bouix, James Levitt, Marc Niethammer, Martha Shenton.&lt;br /&gt;
* Kitware: Luis Ibanez&lt;br /&gt;
* Isomics: Steve Pieper&lt;br /&gt;
* UCI: James Fallon&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
[http://www.na-mic.org/Special:Publications?text=Projects%3ADLPFC&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked NA-MIC Publications Database]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:AFibLongitudinalAnalysis&amp;diff=83753</id>
		<title>Projects:AFibLongitudinalAnalysis</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:AFibLongitudinalAnalysis&amp;diff=83753"/>
		<updated>2013-11-16T01:04:21Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[Algorithm:Stony Brook|Stony Brook University Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
= Description = &lt;br /&gt;
The shape evolution of the left atrium in atrial fibrillation patients is studied longitudinally to reveal the differences between the cured group (CG) and the AFib recurrence group (RG).&lt;br /&gt;
&lt;br /&gt;
= Method = &lt;br /&gt;
Four sets of shapes are input to the algorithm. The first set contains the shapes of the CG group taken at time 0, and the second set contains the shapes of the same group taken at time 1. The third set contains the shapes of the RG group at time 0, and the fourth set contains the RG shapes at time 1.&lt;br /&gt;
&lt;br /&gt;
Subsequently, the shape evolution profiles between the two time points are constructed for both groups. This yields a continuous evolution path for each shape group.&lt;br /&gt;
&lt;br /&gt;
The evolution paths are then traced, and statistical tests are performed to compute p-value maps indicating the differences between the two groups.&lt;br /&gt;
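Concretely, a pointwise p-value map can be computed by a two-sample test at each corresponding surface point. The sketch below uses a Welch t-statistic with a normal approximation to the two-sided p-value, an illustrative stand-in for the exact test used in the study; group sizes and data are synthetic.&lt;br /&gt;

```python
import math
import numpy as np

def pvalue_map(group_a, group_b):
    """Pointwise two-sample test across corresponding surface points.

    group_a, group_b: arrays of shape (subjects, points) holding, e.g.,
    a scalar shape measure at each corresponding vertex. Uses a Welch
    t-statistic with a normal approximation to the two-sided p-value.
    """
    a = np.asarray(group_a, float)
    b = np.asarray(group_b, float)
    se = np.sqrt(a.var(0, ddof=1) / a.shape[0] + b.var(0, ddof=1) / b.shape[0])
    t = (a.mean(0) - b.mean(0)) / se
    return np.array([math.erfc(abs(x) / math.sqrt(2.0)) for x in t])

rng = np.random.default_rng(0)
cg = rng.normal(0.0, 1.0, size=(12, 5))       # cured group, 5 vertices
rg = rng.normal(0.0, 1.0, size=(12, 5))
rg[:, 2] += 3.0                               # one vertex truly differs
p = pvalue_map(cg, rg)
print(p.argmin())   # index of the vertex with the smallest p-value
```
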
&lt;br /&gt;
= Results =&lt;br /&gt;
[[File:LongitudinalAFib.png|800px]]&lt;br /&gt;
&lt;br /&gt;
Shape difference. From left to right: mean shapes with p-value map at time 0, 0.1, ..., 1.0.&lt;br /&gt;
&lt;br /&gt;
From the evolution of the p-value maps from time 0 to time 1, we can observe the longitudinal shape differences between the cured group and the AFib recurrent group.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
*BWH: Yi Gao and Sylvain Bouix&lt;br /&gt;
*Georgia Tech: Liangjia Zhu&lt;br /&gt;
*Boston University: Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:ScarIdentification&amp;diff=83752</id>
		<title>Projects:ScarIdentification</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:ScarIdentification&amp;diff=83752"/>
		<updated>2013-11-16T01:04:13Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[Algorithm:Stony Brook|Stony Brook University Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
= Description = &lt;br /&gt;
To segment the scar tissue in DE-MRI images, we proposed an effective method that utilizes the geometric and intensity information of the left atrial wall. The key observation is that the scar tissue lies within a thin layer just outside the LA chamber and has high intensity values. To identify the scar in the image, we first extract the wall region of the LA, and then restrict the extraction of the scar to that wall region.&lt;br /&gt;
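A minimal sketch of this two-stage idea: dilate the chamber mask to obtain a wall band, then keep the high-intensity voxels inside the band. The dilation width and threshold are illustrative parameters, not the paper's values, and all names are ours.&lt;br /&gt;

```python
import numpy as np

def dilate(mask):
    """One-voxel 6-neighbourhood binary dilation (numpy-only; note that
    np.roll wraps at the boundary, which is fine away from the edges)."""
    out = mask.copy()
    for axis in range(mask.ndim):
        for shift in (-1, 1):
            out = np.logical_or(out, np.roll(mask, shift, axis=axis))
    return out

def segment_scar(image, chamber, wall_voxels=2, threshold=0.6):
    """Wall band first, then intensity: an illustrative sketch.

    chamber: binary LA segmentation. The wall is approximated by dilating
    the chamber wall_voxels times and removing the chamber itself; scar
    is the high-intensity part of that band.
    """
    wall = chamber.copy()
    for _ in range(wall_voxels):
        wall = dilate(wall)
    wall = np.logical_and(wall, np.logical_not(chamber))
    return np.logical_and(wall, image > threshold)

# Toy volume: a small cubic "chamber" with one bright voxel in its wall.
image = np.zeros((8, 8, 8))
chamber = np.zeros((8, 8, 8), dtype=bool)
chamber[3:5, 3:5, 3:5] = True
image[2, 4, 4] = 0.9
scar = segment_scar(image, chamber)
print(scar[2, 4, 4], scar[3, 4, 4])   # True False: wall voxel only
```
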
&lt;br /&gt;
*[[File:ScarIntensities.png|800px]]&lt;br /&gt;
Scars in a DE-MRI image. The LA is highlighted with a red contour. From left to right: scars in axial, sagittal, and coronal views, respectively.&lt;br /&gt;
&lt;br /&gt;
= Results =&lt;br /&gt;
*[[File:LAWall.png|800px]]&lt;br /&gt;
Extracted LA wall. From left to right: LA wall in axial, sagittal, and coronal views, respectively.&lt;br /&gt;
&lt;br /&gt;
*[[File:LAScar.png|800px]]&lt;br /&gt;
From left to right: scars in axial, sagittal, and coronal views, respectively.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
*Georgia Tech: Liangjia Zhu and Anthony Yezzi&lt;br /&gt;
*BWH: Yi Gao and Sylvain Bouix&lt;br /&gt;
*Boston University: Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
Y. Gao, L. Zhu, A. Yezzi, S. Bouix, A. Tannenbaum. Scar Segmentation in DE-MRI, IEEE International Symposium on Biomedical Imaging (ISBI), 2012.&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:RiskMassEstimation&amp;diff=83751</id>
		<title>Projects:RiskMassEstimation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:RiskMassEstimation&amp;diff=83751"/>
		<updated>2013-11-16T01:04:00Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[Algorithm:Stony Brook|Stony Brook University Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
The proposed method has three major components: First, extract the coronary arteries from the CTA image and identify stenoses with severe occlusions. Second, segment the left ventricular surfaces from the CTA image. Third, estimate the affected mass based on the segmentations of the arteries and the left ventricle. These steps are illustrated in the figure below.&lt;br /&gt;
&lt;br /&gt;
*[[File:RiskMassSteps.png|600px]]&lt;br /&gt;
Volume-at-risk estimation process. Left: extract the area at risk (purple) on the epicardial surface given the stenosis location (green dot) on the coronary arteries (red lines). Middle: trace out the risk contour (yellow) on the endocardial surface. Right: construct the volume at risk (purple).&lt;br /&gt;
&lt;br /&gt;
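Once the risk region and the left ventricle have been labeled, the percentage of LV mass at risk reduces to a ratio of voxel counts (voxel volume and tissue density cancel in the percentage). A minimal sketch in Python; the label values and the function name are illustrative, not taken from the paper's implementation:&lt;br /&gt;

```python
def percent_lv_at_risk(label_volume, risk_label=2, lv_labels=(1, 2)):
    """Estimate %LV at risk from a labeled voxel volume.

    label_volume: nested lists of voxel labels, where voxels inside the
    LV myocardium carry a label in lv_labels and the subset distal to
    the stenosis carries risk_label.
    """
    flat = [v for plane in label_volume for row in plane for v in row]
    lv_voxels = sum(1 for v in flat if v in lv_labels)
    risk_voxels = sum(1 for v in flat if v == risk_label)
    if lv_voxels == 0:
        raise ValueError("no LV voxels found")
    return 100.0 * risk_voxels / lv_voxels

# toy 2x2x2 volume: 4 LV voxels, 1 of them at risk
vol = [[[0, 1], [1, 2]], [[0, 0], [1, 0]]]
print(percent_lv_at_risk(vol))  # 25.0
```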
= Results =&lt;br /&gt;
In our experiments, 11 human cardiac CTA images were used to validate the proposed computational framework. In addition, the percentage of myocardial mass at risk estimated from the CTA images was compared with the corresponding values determined from SPECT MPI.&lt;br /&gt;
&lt;br /&gt;
*[[File:RiskMassPlots.png|600px]]&lt;br /&gt;
Left: Correlation analysis. Right: Bland-Altman plot for %LV of CTA vs. SPECT&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
*Georgia Tech: Liangjia Zhu and Anthony Yezzi&lt;br /&gt;
*BWH: Yi Gao&lt;br /&gt;
*Boston University: Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
L. Zhu, Y. Gao, A. Yezzi, C. Arepalli, A. Stillman, and A. Tannenbaum. A Computational Framework for Estimating the Mass at Risk Caused by Stenoses using CT Angiography, International Journal of Cardiac Imaging (IJCI), in preparation.&lt;br /&gt;
&lt;br /&gt;
L. Zhu, Y. Gao, V. Mohan, A. Stillman, T. Faber, A. Tannenbaum. Estimation of myocardial volume at risk from CT angiography, Proceedings of SPIE, pp. 79632-38A, 2011.&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:VentricleSegmentation&amp;diff=83750</id>
		<title>Projects:VentricleSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:VentricleSegmentation&amp;diff=83750"/>
		<updated>2013-11-16T01:03:51Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[Algorithm:Stony Brook|Stony Brook University Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
= Ventricle Segmentation =&lt;br /&gt;
Extracting the myocardial wall of the left (LV) and right (RV) ventricles is an important step in the diagnosis of cardiac diseases. In this work, we propose a method for automatically extracting the ventricles from cardiac CT images that integrates region growing with shape segmentation in a natural way: the shape segmentation provides seed regions for region growing, while the latter reconstructs a heart surface for shape decomposition.&lt;br /&gt;
 &lt;br /&gt;
= Description =&lt;br /&gt;
In the method, the left and right ventricles are located sequentially: each ventricle is detected by first identifying the endocardial surface and then segmenting the epicardial surface. To this end, the endocardial surfaces are localized using geometric features obtained on-line from the CT image. A variational region-growing model is then employed to extract the epicardial surfaces of the ventricles. In particular, the location of the endocardial surface of the left ventricle is determined using an active contour model on the blood-pool surface constructed via thresholding. To localize the right ventricle, the active contour model is applied to a heart surface extracted based on the left ventricle segmentation result.&lt;br /&gt;
&lt;br /&gt;
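The sequential scheme above (blood pool located by thresholding, then growing outward from it) can be illustrated with a plain intensity-tolerance region grow from a seed; note that the variational model used in this work replaces the fixed tolerance below with an evolving energy-based criterion. A toy 2D sketch:&lt;br /&gt;

```python
from collections import deque

def region_grow(image, seed, tol):
    """Flood-fill region growing: accept 4-connected pixels whose
    intensity stays within tol of the seed intensity."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if nr in range(rows) and nc in range(cols):
                if (nr, nc) not in region and tol >= abs(image[nr][nc] - seed_val):
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region

# bright blood pool (200) surrounded by darker myocardium (80)
img = [[80, 80, 80, 80],
       [80, 200, 200, 80],
       [80, 200, 200, 80],
       [80, 80, 80, 80]]
print(len(region_grow(img, (1, 1), 30)))  # 4
```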
*[[File:FlowChartLRV.png|600px]]&lt;br /&gt;
Flowchart of the ventricle segmentation framework.&lt;br /&gt;
&lt;br /&gt;
= Results = &lt;br /&gt;
The proposed method has been tested using 30 human and 12 pig cardiac CT images. Examples of segmentation for human and pig data are shown below.&lt;br /&gt;
&lt;br /&gt;
*[[File:LRVWallShapeVar.png|600px]]&lt;br /&gt;
Myocardium segmentation results of human data with significantly different heart shapes.&lt;br /&gt;
&lt;br /&gt;
*[[File:LRVWallVolVar.png|600px]]&lt;br /&gt;
Myocardium segmentation results of pig data with different volume coverages.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
*Georgia Tech: Liangjia Zhu and Anthony Yezzi&lt;br /&gt;
*BWH: Yi Gao&lt;br /&gt;
*Boston University: Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Publication =&lt;br /&gt;
L. Zhu, Y. Gao, V. Appia, A. Yezzi, C. Arepalli, A. Stillman, and A. Tannenbaum. Automatic Extraction of the Myocardial Wall from CT Images using Shape Segmentation and Variational Region Growing, IEEE Transactions on Biomedical Engineering (TBME), submitted.&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:LeftAtriumSegmentation&amp;diff=83749</id>
		<title>Projects:LeftAtriumSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:LeftAtriumSegmentation&amp;diff=83749"/>
		<updated>2013-11-16T01:03:43Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[Algorithm:Stony Brook|Stony Brook University Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Automatic Segmentation of Left Atrium via Variational Region Growing=&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
Automatic segmentation of the left atrium (LA) from MRI data is a challenging but important task in medical image analysis. A major application is the treatment of atrial fibrillation [1], a cardiac arrhythmia characterized by unsynchronized electrical activity in the atrial chambers of the heart. One treatment for this arrhythmia is catheter ablation, which targets specific parts of the LA for radio-frequency ablation using an intracardiac catheter [2]. Application of radio-frequency energy to the cardiac tissue causes thermal injury, which in turn results in scar tissue. Successful ablation can eliminate, or isolate, the problematic sources of electrical activity and effectively cure atrial fibrillation. To perform such ablation, the LA must be extracted from the late gadolinium enhancement MR (LGE-MR) images; this is often done manually, which is very time-consuming. Automatic LA segmentation, however, is challenging for three reasons: 1) the LA is relatively small compared to the left ventricle (LV) or lungs in cardiac MR images; 2) boundaries are not clearly defined where the blood pool extends from the LA into the pulmonary veins; 3) the shape variability of the LA across subjects is large.&lt;br /&gt;
&lt;br /&gt;
We propose an automatic approach for segmenting the left atrium from magnetic resonance imagery (MRI). Segmentation is formulated as a variational region-growing problem. In particular, the method starts locally by searching for a seed region of the left atrium in a given MR slice. A global constraint is imposed by applying a shape prior, represented by Zernike moments, to the left atrium. The overall growing process is guided by the robust statistics of intensities from the seed region, along with the shape prior, to capture the whole atrial region.&lt;br /&gt;
&lt;br /&gt;
The proposed method consists of two key steps: &lt;br /&gt;
*(1) search for a seed region of the LA from an image slice in the axial view. &lt;br /&gt;
&lt;br /&gt;
*(2) explore the LA region using a variational region-growing process, with a shape prior employed to drive the growing process towards atrium-like shapes.&lt;br /&gt;
Given a properly set seed region, a growing process driven by the robust statistics of the seed region explores the entire LA region. However, leakage is almost inevitable because the computed statistics do not provide a global shape constraint on the evolving contours. Hence, a shape prior is applied to attract the growing process towards the expected shape.&lt;br /&gt;
&lt;br /&gt;
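The robust statistics of the seed region can be taken, for example, as the median and the median absolute deviation (MAD) of the seed intensities, both of which are insensitive to outlier voxels; a candidate voxel is then accepted while its deviation from the median stays within a few MADs. A hypothetical sketch (the factor k and the acceptance rule are illustrative, not this work's exact formulation):&lt;br /&gt;

```python
def median(values):
    """Median of a list of numbers."""
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def accept_voxel(intensity, seed_intensities, k=2.5):
    """Accept a candidate voxel if it lies within k MADs of the seed
    region's median intensity (robust to outlier voxels in the seed)."""
    med = median(seed_intensities)
    mad = median([abs(v - med) for v in seed_intensities])
    return k * max(mad, 1e-6) >= abs(intensity - med)

seeds = [100, 102, 98, 101, 99, 250]   # one outlier voxel
print(accept_voxel(103, seeds))  # True
print(accept_voxel(250, seeds))  # False
```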
= Results =&lt;br /&gt;
*[[File:Image-LASegWithMomentsPrior.png|800px]]&lt;br /&gt;
Region-growing process driven by robust statistics and Zernike moments shape prior.&lt;br /&gt;
&lt;br /&gt;
*[[File:LASegRG2Atlas.png|800px]]&lt;br /&gt;
Comparison of the worst results obtained using the proposed method (first column) and the atlas-based method (second column). From top to bottom: the LA returned by the proposed method (red) and the atlas-based method (green) in axial, coronal, and sagittal views, respectively.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
*Georgia Tech: Liangjia Zhu and Anthony Yezzi&lt;br /&gt;
*BWH: Yi Gao&lt;br /&gt;
*Boston University: Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
L. Zhu, Y. Gao, A. Yezzi, R. MacLeod, J. Cates, A. Tannenbaum. Automatic Segmentation of the Left Atrium from MRI Images Using Salient Feature and Contour Evolution, IEEE Engineering in Medicine and Biology Conference (EMBC), 2012.&lt;br /&gt;
&lt;br /&gt;
= References = &lt;br /&gt;
1. C. J. McGann, E. G. Kholmovski, R. S. Oakes, J. E. Blauer, M. Daccarett, N. M. Segerson, K. J. Airey, N. Akoum, E. N. Fish, T. J. Badger, E. V. R. DiBella, D. Parker, R. S. MacLeod, and N. F. Marrouche. New magnetic resonance imaging based method to define extent of left atrial wall injury after the ablation of atrial fibrillation. Journal of the American College of Cardiology, 2008.&lt;br /&gt;
&lt;br /&gt;
2. P. Jais, R. Weerasooriya, D.C. Shah, M. Hocini, L. Macle, K.J. Choi, C. Scavee, M. Haïssaguerre, and J. Clementy. Ablation therapy for atrial fibrillation (AF). Cardiovascular Research, 54(2):337–346, 2002.&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:MGH-HeadAndNeck-PtSetReg&amp;diff=83748</id>
		<title>Projects:MGH-HeadAndNeck-PtSetReg</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:MGH-HeadAndNeck-PtSetReg&amp;diff=83748"/>
		<updated>2013-11-16T01:03:34Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[Algorithm:Stony Brook|Stony Brook University Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Semi-Automatic Image Registration =&lt;br /&gt;
&lt;br /&gt;
We recognize that the difference between failure of an automatic image registration approach and success of a semi-automatic method can be a small amount of user input. The goal of this work is to register two CT volumes of different patients that are related by a large misalignment. The user sets two thresholds for each image: one for the bone mask and another for the flesh tissue. This operation is not time-consuming, but it simplifies the registration task dramatically for the automatic algorithm.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
In this example, large misalignment is present between the two patients.&lt;br /&gt;
&lt;br /&gt;
* [[Image:PreRegFleshSkeleton.png | PreRegFleshSkeleton| 400px]]&lt;br /&gt;
Original misalignment of the volumes.&lt;br /&gt;
&lt;br /&gt;
Point clouds are generated from label maps of bone. The computed registration field, which is guaranteed to be injective, is applied to the original CT volumes.&lt;br /&gt;
&lt;br /&gt;
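Generating a point cloud from a label map amounts to collecting, and optionally subsampling, the coordinates of voxels carrying the chosen label. A minimal sketch, assuming a nested-list label volume (the function name and stride parameter are illustrative):&lt;br /&gt;

```python
def label_map_to_points(labels, target, stride=1):
    """Collect voxel coordinates carrying a given label, keeping every
    stride-th point to thin the cloud before registration."""
    pts = [(z, y, x)
           for z, plane in enumerate(labels)
           for y, row in enumerate(plane)
           for x, v in enumerate(row)
           if v == target]
    return pts[::stride]

# toy 2x2x2 label map: label 1 marks bone voxels
bone = [[[0, 1], [1, 0]], [[1, 1], [0, 0]]]
print(label_map_to_points(bone, 1))  # [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 0, 1)]
```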
* [[Image:SkeletonMisalignedView1.png | SkeletonMisalignedView1| 400px]]  [[Image:SkeletonAlignedView1.png | SkeletonMisalignedView1| 400px]]&lt;br /&gt;
Point clouds representing bone tissue of the patients (before and after registration).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another set of point clouds is generated by sampling from label maps of flesh. To avoid undoing the previous registration, regions belonging to the registered bone tissue from above are constrained not to move. Again, an injective deformation field is computed. &lt;br /&gt;
&lt;br /&gt;
* [[Image:FleshMis.png | FleshMis| 400px]]  [[Image:FleshAlignedView1.png | FleshAlignedView1| 400px]]&lt;br /&gt;
Point clouds representing flesh tissue of the patients (before and after registration). This step is constrained.&lt;br /&gt;
&lt;br /&gt;
The result of applying the two deformations computed by the proposed process is shown below.&lt;br /&gt;
&lt;br /&gt;
* [[Image:PostRegFleshSkeleton.png | PostRegFleshSkeleton| 400px]]&lt;br /&gt;
Aligned images using the two step registration process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Current State of Work ==&lt;br /&gt;
A pipeline composed of MATLAB and MEX-compiled C++ code has been implemented.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Georgia Tech: Ivan Kolesov, Patricio Vela&lt;br /&gt;
* Boston University: Jehoon Lee, Allen Tannenbaum&lt;br /&gt;
* MGH: Gregory Sharp&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
''In Press''&lt;br /&gt;
&lt;br /&gt;
I. Kolesov, J. Lee, P. Vela, G. Sharp and A. Tannenbaum. Diffeomorphic Point Set Registration with Landmark Constraints. In preparation for PAMI.&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:InteractiveSegmentation&amp;diff=83747</id>
		<title>Projects:InteractiveSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:InteractiveSegmentation&amp;diff=83747"/>
		<updated>2013-11-16T01:03:26Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[Algorithm:Stony Brook|Stony Brook University Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Interactive Image Segmentation With Active Contours =&lt;br /&gt;
&lt;br /&gt;
A driving clinical study for the present work is a population study of skeletal development in youth. Bone grows from the physis (growth plate), located in the middle of a long bone between the epiphysis and metaphysis. Full adult growth is reached when the physis disappears completely. Precise understanding of how the growth plates in the femur and tibia change from childhood to adulthood enables improved surgical planning (e.g., determining tunnel placement for anterior cruciate ligament (ACL) reconstruction) that avoids stunting the patient's growth or compromising the stability of the knee. Currently, growth potential is measured by a physician using an x-ray scan of multiple bones, so patients receive repeated doses of radiation.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
In this work, interactive segmentation is integrated with an active contour model and segmentation is posed as a human-supervisory control problem. User input is tightly coupled with an automatic segmentation algorithm leveraging the user's high-level anatomical knowledge and the automated method's speed. Real-time visualization enables the user to quickly identify and correct the result in a sub-domain where the variational model's statistical assumptions do not agree with his expert knowledge. Methods developed in this work are applied to magnetic resonance imaging (MRI) volumes as part of a population study of human skeletal development. Segmentation time is reduced by approximately five times over similarly accurate manual segmentation of large bone structures.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Image:KSliceFlowChart.png | Rel Pred| 800px]]&lt;br /&gt;
Flowchart for the interactive segmentation approach. Notice the user's pivotal role in the process.&lt;br /&gt;
* [[Image:KSliceInptTimeChart.png | Eye Seg| 800px]]&lt;br /&gt;
Time-line of user input into the system. Note that user input is sparse, has local effect only, and decreases in frequency and magnitude over time.&lt;br /&gt;
* [[Image:KVoutSegTightMod.png | Eye Seg| 300px]] &lt;br /&gt;
Result of the segmentation.&lt;br /&gt;
&lt;br /&gt;
== Current State of Work ==&lt;br /&gt;
The described algorithm is implemented in C++ and has been delivered to physicians. We have begun to analyze the data they created by segmenting the knee with our tool. Future work will incorporate a shape prior into the segmentation and improve user interaction (according to the feedback physicians provide us).&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Georgia Tech: Ivan Kolesov, Peter Karasev, and Karol Chudy&lt;br /&gt;
* Boston University: Allen Tannenbaum&lt;br /&gt;
* Emory University: Grant Muller and John Xerogeanes&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
''In Press''&lt;br /&gt;
&lt;br /&gt;
I. Kolesov, P. Karasev, G. Muller, K. Chudy, J. Xerogeanes, and A. Tannenbaum. Human Supervisory Control Framework for Interactive Medical Image Segmentation. MICCAI Workshop on Computational Biomechanics for Medicine 2011.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
P. Karasev, I. Kolesov, K. Chudy, G. Muller, J. Xerogeanes, and A. Tannenbaum. Interactive MRI Segmentation with Controlled Active Vision. IEEE CDC-ECC 2011.&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:RegistrationTBI&amp;diff=83746</id>
		<title>Projects:RegistrationTBI</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:RegistrationTBI&amp;diff=83746"/>
		<updated>2013-11-16T01:03:17Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[Algorithm:Stony Brook|Stony Brook University Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Multimodal Deformable Registration of Traumatic Brain Injury MR Volumes using Graphics Processing Units =&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
An estimated 1.7 million Americans sustain traumatic brain injuries (TBIs) every year. The large number of recent TBI cases in soldiers returning from military conflicts has highlighted the critical need for improved TBI care and treatment, and has drawn sustained attention to the need for improved methodologies of TBI neuroimaging data analysis. Neuroimaging of TBI is vital for surgical planning, providing important information for anatomic localization and surgical navigation, as well as for monitoring patient case evolution over time. Approximately 2 days after the acute injury, magnetic resonance imaging (MRI) becomes preferable to computed tomography (CT) for the purpose of lesion characterization, and the use of various MR sequences tailored to capture distinct aspects of TBI pathology provides clinicians with essential complementary information for the assessment of TBI-related anatomical insults and pathophysiology.&lt;br /&gt;
&lt;br /&gt;
Image registration plays an essential role in a wide variety of TBI data analysis workflows. It aims to find a transformation between two image sets such that the transformed image becomes similar to the target image according to some chosen metric or criterion. Typically, a similarity measure is first established to quantify how "close" two image volumes are to each other. Next, the transformation that maximizes this similarity is computed through an optimization process that constrains the transformation to a predetermined class, such as rigid, affine or deformable. Numerous challenges arise in TBI volume co-registration when data acquisition is multimodal, and additional complexities stem from the large degree of algorithmic robustness required to properly handle pathology-related deformations. Many conventional methods use the sum of squared intensity differences between two image sets as a similarity measure, which can perform poorly or even fail for TBI volume registration. Moreover, because the deformation of patient anatomy and soft tissue cannot typically be represented by rigid transforms, the task often requires deformable image registration (DIR), i.e., nonparametric, infinite-dimensional transformations.&lt;br /&gt;
&lt;br /&gt;
This work proposes to replace the mutual information (MI) criterion for registration with the Bhattacharyya distance (BD) [1] within a multimodal DIR framework [3]. The advantage of BD over MI is the superior behavior of the square root function compared to that of the logarithm at zero, which yields a more stable algorithm.&lt;br /&gt;
The framework we describe takes into account physical models of tissue motion to regularize the deformation fields and also supports free-form deformation. On the other hand, the DIR algorithm is computationally expensive when implemented on conventional central processing units, which can be detrimental particularly when three-dimensional (3D) volumes, rather than 2D images, need to be co-registered. In clinical settings that involve acute TBI care, the time required to process neuroimaging data from patients in critical condition should be minimized. To meet this clinical requirement, we have implemented our algorithm on a graphics processing unit (GPU) platform [2].&lt;br /&gt;
&lt;br /&gt;
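For two normalized histograms, the Bhattacharyya coefficient is the sum of the square roots of bin-wise products, and the distance is its negative logarithm; because the square root remains bounded near zero while the logarithm diverges, the resulting functional is numerically better behaved than MI. A small sketch on discrete intensity histograms:&lt;br /&gt;

```python
import math

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two discrete distributions,
    e.g. normalized intensity histograms of two image volumes."""
    zp, zq = sum(p), sum(q)
    # coefficient: sum of square roots of bin-wise products
    coeff = sum(math.sqrt((a / zp) * (b / zq)) for a, b in zip(p, q))
    return -math.log(max(coeff, 1e-12))

# identical histograms give distance 0; disjoint-ish ones give more
print(bhattacharyya_distance([40, 10], [10, 40]))  # about 0.223
```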
== Result ==&lt;br /&gt;
&lt;br /&gt;
MR volumes were acquired at 3 T using a Siemens Trio TIM scanner (Siemens AG, Erlangen, Germany). Because assessing the evolution of TBI from the acute to the chronic stage is of great clinical interest, scanning sessions were held both several days (acute baseline) and 6 months (chronic follow-up) after the traumatic injury event. To eliminate the effect of different scanner parameters across sessions, every subject was scanned on the same scanner at both the acute and chronic time points. The MP-RAGE sequence (Mugler and Brookeman, 1990) was used to acquire T1-weighted images. In addition, MR data were acquired using fluid-attenuated inversion recovery (FLAIR; De Coene et al., 1992), gradient-recalled echo (GRE) T2-weighted imaging, diffusion weighted imaging (DWI), and perfusion imaging.&lt;br /&gt;
&lt;br /&gt;
Before applying our deformable registration algorithm, all image volumes were co-registered by rigid-body transformation to the pre-contrast T1-weighted volume acquired during the acute baseline scanning session. This helps to correct for head tilt and reduces error in computing the local deformation fields. Skull stripping was also performed before registration, which was useful in our case because images acquired at the acute stage exhibit appreciably more extracranial swelling than images acquired chronically. Since all modalities are co-registered to T1, skull stripping needs to be performed only once, i.e. on the T1 volume. Skull stripping is necessary because, without it, the DIR algorithm would deform the interior of the brain to match the outer boundary; such a deformation is mathematically valid but not anatomically plausible. Two possible solutions are adding prior knowledge on the boundary or applying skull stripping; we opt for the latter due to its common usage in image processing. We use the BrainSuite software [http://users.loni.ucla.edu/~shattuck/brainsuite/corticalsurface/bse/] for skull stripping. &lt;br /&gt;
&lt;br /&gt;
[[Image:ToT1e1.png|800px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Excluding preprocessing steps, the registration of two volumes of size 256x256x60 requires 6 seconds on the GPU. Registration results for the acute stage are illustrated for a 2D slice in the figure above. The norm of the deformation field and its 2D motion grid are also included. For the T2, FLAIR and GRE volumes, the largest amount of deformation is observed bilaterally in the deep periventricular white matter, possibly as a consequence of hemorrhage and/or CSF infiltration into edemic regions, which can alter voxel intensities in GRE and FLAIR imaging, respectively. In the case of DWI, notable deformation is observed frontally and frontolaterally; in the former case, this may be the result of warping artifacts due to the large change in the physical properties of tissue at the interfaces between brain, bone and air. In the latter case, the deformation is possibly due to the presence of TBI-related edema, which can substantially alter local diffusivity values. Similar effects are observed with DWI and with perfusion imaging in both acute and chronic scans.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Future Work ==&lt;br /&gt;
&lt;br /&gt;
Future work will focus on registration of TBI volumes across time, i.e., registering acute to chronic volumes or vice versa. A large number of registration algorithms assume smoothness of the vector flow, i.e., that the deformation is diffeomorphic. However, when registering TBI across time, the deformation is not well-defined, let alone diffeomorphic, in regions where bleeding or lesions occur. It is both challenging and important to design a registration algorithm that can handle such topological changes in TBI patients. One possible approach is to use a combination of locally rigid and non-rigid transforms based on visual features such as the MIND descriptor [4]. Some regions with new lesions cannot be explained by minor elastic deformation, so simultaneous registration and segmentation of lesions is needed. We are working on improved volume matching with boundary conditions based on this segmentation.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
Georgia Tech: Yifei Lou and Patricio Vela&lt;br /&gt;
&lt;br /&gt;
UAB: Arie Nakhmani and Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
UCLA: Andrei Irimia, Micah C. Chambers, Jack Van Horn and Paul M. Vespa&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
1. Yifei Lou, Andrei Irimia, Patricio Vela, Micah C. Chambers, Jack Van Horn, Paul M. Vespa and Allen Tannenbaum. Multimodal Deformable Registration of Traumatic Brain Injury MR Volumes via the Bhattacharyya Distance. Submitted to IEEE Transactions on Bioengineering, 2012.&lt;br /&gt;
&lt;br /&gt;
2. Yifei Lou, Xun Jia, Xuejun Gu and Allen Tannenbaum. A GPU-based Implementation of Multimodal Deformable Image Registration Based on Mutual Information or Bhattacharyya Distance. Insight Journal, 2011. [http://www.midasjournal.org/browse/publication/803]&lt;br /&gt;
&lt;br /&gt;
3. E. D’Agostino, F. Maes, D. Vandermeulen, and P. Suetens. A viscous fluid model for multimodal non-rigid image registration using mutual information. MICCAI, 2002, pp. 541–548.&lt;br /&gt;
&lt;br /&gt;
4. M.P. Heinrich, M. Jenkinson, M. Bhushan, T. Matin, F. Gleeson, M. Brady, J.A. Schnabel. MIND: Modality independent neighbourhood descriptor for multi-modal deformable registration. Medical Image Analysis. Vol. 16(7) Oct. 2012, pp. 1423–1435, Special Issue on MICCAI 2011&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:MultiScaleShapeSegmentation&amp;diff=83745</id>
		<title>Projects:MultiScaleShapeSegmentation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:MultiScaleShapeSegmentation&amp;diff=83745"/>
		<updated>2013-11-16T01:03:09Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[Algorithm:Stony Brook|Stony Brook University Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Multi-scale Shape Representation and Segmentation With Applications to Radiotherapy =&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
Brain metastases constitute the most common manifestation of cancer involving the central nervous system. Indeed, it has been estimated that over 170,000 cases of brain metastases are diagnosed annually in the United States alone [1]. During the past half-century, the cornerstone of treatment for this oncologic phenomenon has been whole brain irradiation (WBI) [2]. WBI has multiple salutary effects including rapid relief of neurological signs and symptoms as well as enhanced local control. Moreover, WBI represents an attractive clinical alternative because it can potentially suppress micrometastases that are undetectable with the current degree of resolution by MR imaging. Unfortunately, WBI may also engender side effects including memory deficits and decrements in quality of life [3].&lt;br /&gt;
&lt;br /&gt;
Stereotactic irradiation is an option that has gained popularity in the management of brain metastases. Stereotactic irradiation is appealing because it is of short duration, uses multiple intersecting beams to augment the dose within the tumor volume and provides a rapid dose fall-off thereby optimizing the dosimetric gradient outside the tumor. This rationale allowed Chang et al. [4] to mount a trial comparing WBI to stereotactic techniques for patients suffering from brain metastases. The results of the trial indicated less decline in learning and memory function within the stereotactic arm. Yet, others have questioned whether an unacceptable subset of patients (among those treated focally) failed intracranially, albeit outside the radiosurgical treatment volumes.&lt;br /&gt;
&lt;br /&gt;
In order for those receiving WBI to obtain the optimal intracranial control of disease while simultaneously preserving neurocognitive function, it is of strategic importance to recognize that the primary neurocognitive impairments are in memory. Since memory control is thought to be mediated by the hippocampus, attention has been turned to whole brain radiotherapeutic techniques that allow the sparing of the hippocampus. In order to be able to minimize dose deposition within the hippocampus, clinicians must be able to confidently identify that structure. The accuracy and consistency of segmentation can be improved by automating the process and including shape prior knowledge. Also, segmentation is a necessary step prior to registration; segmented structures provide landmarks and can be used to limit the number of free variables in deformable registration, which, in turn, leads to more accurate results.&lt;br /&gt;
&lt;br /&gt;
To automatically extract the hippocampus from MR images, we present in this work a multiscale representation for shapes of arbitrary topology, together with a method to segment the target organ/tissue from medical images having very low contrast with respect to surrounding regions, using multiscale shape information and local image features. In many previous works, shape knowledge was incorporated by first constructing a shape space from training cases and then constraining the segmentation process to lie within the learned shape space. However, such an approach is limited by the number of variations captured in the learned shape space. Moreover, small-scale shape variations are usually overwhelmed by large-scale ones, so local shape information is lost. In this work, we address this problem by providing a multiscale shape representation based on the wavelet transform. Consequently, the shape variations captured by the statistical learning step are represented at various scales: not only is the set of shape variations greatly enriched, but small-scale changes are also captured. Furthermore, to make full use of the training information, not only the shapes but also the grayscale training images are utilized in a multi-atlas initialization procedure.&lt;br /&gt;
&lt;br /&gt;
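As an illustration of the multiscale idea (not the actual shape parameterization used in this work), a one-dimensional Haar transform splits a signal into coarse averages and per-scale detail coefficients, so that shape statistics can be learned separately at each scale:&lt;br /&gt;

```python
import math

def haar_step(signal):
    """One Haar analysis step: coarse averages and detail coefficients."""
    s = 1.0 / math.sqrt(2.0)
    coarse = [s * (signal[2 * i] + signal[2 * i + 1]) for i in range(len(signal) // 2)]
    detail = [s * (signal[2 * i] - signal[2 * i + 1]) for i in range(len(signal) // 2)]
    return coarse, detail

def haar_decompose(signal):
    """Full multiscale decomposition (length a power of two); returns
    the final coarse approximation plus detail coefficients per scale."""
    details = []
    coarse = list(signal)
    while len(coarse) > 1:
        coarse, d = haar_step(coarse)
        details.append(d)
    return coarse, details

coarse, details = haar_decompose([4.0, 4.0, 2.0, 2.0])
print(len(details))  # 2 scales of detail coefficients
```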
== Result ==&lt;br /&gt;
&lt;br /&gt;
The segmentation method was applied to the hippocampus. In the figure below, we show the segmentation results. In the first, third, and fifth rows, the yellow shapes are the results output by the method. In the second, fourth, and sixth rows, the colors on the shapes indicate the difference from the manual segmentation: for each point on the automatically segmented shape, we compute the closest point on the manually segmented surface and record the distance to it. These distances are encoded by the colors shown in those rows.&lt;br /&gt;
&lt;br /&gt;
[[Image:MultiScaleHippoSegmentationHausdorf.png|800px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The method is also applied to the caudate. As in the hippocampus cases, the figure below shows the segmentation results. In the first and third rows, the yellow shapes are the segmentations output by the method. In the second and fourth rows, the colors on the shapes indicate the difference from the manual segmentations: for each point on the shape produced by the algorithm, we compute the closest point on the manually segmented surface and record the distance to that point. These distances are encoded by the colors shown in those rows.&lt;br /&gt;
&lt;br /&gt;
[[Image:MultiScaleCaudateSegmentationHausdorf.png|800px]]&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
Georgia Tech: Yi Gao and Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
# Gao, Y. and Corn, B. and Schifter, D. and Tannenbaum, A. ''Multiscale 3D Shape Representation and Segmentation with Applications to Hippocampal/Caudate Extraction from Brain MRI'', Medical Image Analysis, 2011 (in press)&lt;br /&gt;
# Patchell, R.: The management of brain metastases. Cancer Treat Rev. 29(6), 533–540 (2003)&lt;br /&gt;
# Knisely, J.: Focused attention on brain metastases. Lancet Oncol. 10, 1037–1044 (2009)&lt;br /&gt;
# Aoyama, H., Tago, M., Kato, N., et al.: Neurocognitive function of patients with brain metastasis who received either whole brain radiotherapy plus stereotactic radiosurgery or radiosurgery alone. Int. J. Radiat. Oncol. 68(5), 1388–1395 (2007)&lt;br /&gt;
# Chang, E., Wefel, J., Hess, K., et al.: Neurocognition in patients with brain metastases treated with radiosurgery or radiosurgery plus whole-brain irradiation: a randomised controlled trial. Lancet Oncology 10(11), 1037–1044 (2009)&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:SegmentationEndocardialWall&amp;diff=83744</id>
		<title>Projects:SegmentationEndocardialWall</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:SegmentationEndocardialWall&amp;diff=83744"/>
		<updated>2013-11-16T01:03:00Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[Algorithm:Stony Brook|Stony Brook University Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Image:3D_Segmentation_LA.png | 3D View of the Segmentation of Endocardial Wall&lt;br /&gt;
Image:2d_axial_LA.png | 2D View&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
Atrial fibrillation, a cardiac arrhythmia characterized by unsynchronized electrical activity in the atrial chambers of the heart, is a rapidly growing problem in modern societies. Electrical cardioversion and antiarrhythmic drugs are used to manage this condition, but suffer from low success rates and involve major side effects. In an alternative treatment, known as ''catheter ablation'', specific parts of the left atrium are targeted for radio frequency ablation using an intracardiac catheter. Application of radio frequency energy to the cardiac tissue causes thermal injury (lesions), which in turn results in scar tissue. Successful ablation can eliminate, or isolate, the problematic sources of electrical activity and effectively cure atrial fibrillation.&lt;br /&gt;
&lt;br /&gt;
Magnetic resonance imaging (MRI) has been used for both pre- and post-ablation assessment of the atrial wall. MRI can aid in selecting the right candidates for the ablation procedure and in assessing post-ablation scar formation. Image processing techniques can be used for automatic segmentation of the atrial wall, which facilitates an accurate statistical assessment of the region. As a first step toward a general solution for computer-assisted segmentation of the left atrial wall, in this research we propose a shape-based image segmentation framework to segment the endocardial wall of the left atrium.&lt;br /&gt;
&lt;br /&gt;
== Our Approach ==&lt;br /&gt;
&lt;br /&gt;
A powerful approach in medical image segmentation is active contour modeling, wherein the boundaries of an object of interest are captured by minimizing an energy functional. The segmentation of the endocardial wall of the left atrium in delayed-enhancement magnetic resonance images (DE-MRI) using active contours is a challenging problem, mainly due to the absence of clear boundaries. This usually leads either to contour ''leaks'', where the contour expands beyond the desired boundary, or to partial segmentation, where the contour captures only part of the desired area. A shape-based segmentation approach can overcome this problem by using prior shape knowledge in the segmentation process. In this research, we use shape learning and shape-based image segmentation to identify the endocardial wall of the left atrium in DE-MRI.&lt;br /&gt;
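As a minimal sketch of the region-based machinery such an active contour builds on, a piecewise-constant (Chan-Vese style) data term alternates between estimating the two region means and reassigning pixels to the closer mean. The shape prior and contour regularization discussed above are omitted, and the function name is ours.&lt;br /&gt;

```python
import numpy as np

def piecewise_constant_segment(image, iters=20):
    """Two-phase, piecewise-constant segmentation (data term only):
    alternate between estimating the mean intensity of each region and
    reassigning every pixel to the closer mean.  A full model would add
    a contour-length penalty and the learned shape prior."""
    image = np.asarray(image, dtype=float)
    mask = image > image.mean()                  # crude initialization
    for _ in range(iters):
        if mask.all() or not mask.any():         # degenerate partition
            break
        inside, outside = image[mask].mean(), image[~mask].mean()
        new_mask = (image - inside) ** 2 < (image - outside) ** 2
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return mask
```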
&lt;br /&gt;
== Slicer Module for Endocardium Segmentation ==&lt;br /&gt;
We extract the left atrial endocardium region (blood pool) from the DE-MRI image using a multi-atlas based scheme. (TODO: add more detail and a tutorial for using the module.)&lt;br /&gt;
&lt;br /&gt;
== Slicer Module for Wall Segmentation ==&lt;br /&gt;
&lt;br /&gt;
With the endocardial region segmented, we further extract the LA muscle wall using a coupled surface evolution technique. This algorithm is now implemented as a Slicer module.&lt;br /&gt;
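A hedged, morphological stand-in for the idea of a wall region bounded by an inner and an outer surface (the module itself uses coupled surface evolution; the dilation-based construction and all names here are ours):&lt;br /&gt;

```python
import numpy as np

def dilate(mask):
    """4-neighborhood binary dilation via array shifts (2D, no SciPy)."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def wall_band(endo_mask, thickness=2):
    """Shell between the blood-pool surface and a surface grown outward
    from it; a crude stand-in for the coupled-surface wall extraction."""
    outer = endo_mask
    for _ in range(thickness):
        outer = dilate(outer)
    return outer & ~endo_mask
```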
&lt;br /&gt;
The module can be installed from the extension manager:&lt;br /&gt;
&lt;br /&gt;
[[Image:WallSegmenterInstall.png|800px]]&lt;br /&gt;
&lt;br /&gt;
After that, the module can be found in the Segmentation module category. Its usage is shown below:&lt;br /&gt;
&lt;br /&gt;
[[Image:WallSegmenterUse.png|800px]]&lt;br /&gt;
&lt;br /&gt;
One of the results is shown below:&lt;br /&gt;
&lt;br /&gt;
[[Image:LAwallSegmenterResult.png|800px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Future Work ==&lt;br /&gt;
&lt;br /&gt;
The two teams from Georgia Tech and the University of Utah held a teleconference on November 10, 2010. The Georgia Tech team requested 3-month pre-ablation, immediately-after-ablation, and 3-month post-ablation DE-MRI, the corresponding hand segmentations of the left atrial wall, and the blood pool MRA, as well as the outcome of the ablation procedure for the DE-MRIs provided. The specific information of interest regarding the procedure outcome involves recurrence of the atrial fibrillation (yes/no), the time of recurrence after the procedure (x number of months), and any other relevant clinical data whose prediction could assist the physician before the ablation procedure (e.g., the severity of recurrence). The Utah team agreed to provide the data in a week. The two teams agreed to exchange their latest work/publications regarding the atrial fibrillation project.&lt;br /&gt;
&lt;br /&gt;
The Georgia Tech team proposed using machine learning techniques to predict the success of the ablation procedure and the time of recurrence (among other information of interest to the physician) based on enhancements in the left atrial wall in pre-ablation and immediately-after-procedure DE-MRIs. The feasibility of using pre-ablation DE-MRI to predict procedure success has already been published by the Utah team.&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
* Gao Y., Gholami B., MacLeod R.S., Blauer J., Haddad W.M., Tannenbaum A. [http://www.na-mic.org/publications/item/view/1844 Segmentation of the Endocardial Wall of the Left Atrium using Local Region-Based Active Contours and Statistical Shape Learning.] Proceedings of SPIE Medical Imaging 2010.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
Georgia Tech: Behnood Gholami, Yi Gao, Wassim Haddad, and Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
University of Utah: Rob MacLeod, Josh Blauer, and Josh Cates&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:MGH-HeadAndNeck-RT&amp;diff=83743</id>
		<title>Projects:MGH-HeadAndNeck-RT</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:MGH-HeadAndNeck-RT&amp;diff=83743"/>
		<updated>2013-11-16T01:02:52Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[Algorithm:Stony Brook|Stony Brook University Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Adaptive Radiotherapy for head, neck and thorax =&lt;br /&gt;
&lt;br /&gt;
Proton therapy is used to deliver accurate doses of radiation to people undergoing cancer treatment. At the beginning of treatment, a personalized plan describing the amount of radiation and the location to which it must be delivered is created for the patient. However, over the course of the treatment, which lasts weeks, a person's anatomy is likely to change. Adaptive radiotherapy aims to improve fractionated radiotherapy by re-optimizing the radiation treatment plan for each session. To update the plan, the CT images acquired during a treatment session are registered to the treatment plan, and the doses of radiation to be delivered are re-calculated accordingly. Registering a patient scan to a model (e.g., an atlas) provides important prior information for a segmentation algorithm; in the other direction, having a segmentation of a structure can be used for better registration. Hence, segmentation and registration must be done concurrently.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
Initial experiments show that bony structures such as the mandible can be segmented accurately with a variational active contour. However, for soft tissue such as the brain stem, the intensity profile does not contain sufficient information for reasonably accurate segmentation. To deal with &amp;quot;soft boundaries&amp;quot;, infinite-dimensional active contours must be constrained using shape priors and/or interactive user input. One way to constrain a segmentation is shown in our MTNS work, where known spatial relationships between structures are exploited. First, a structure we are confident in is segmented. Using probabilistic PCA, a metric describes how likely the obtained segmentation is; this metric is essentially a measure of our confidence that the segmentation is correct. The location of this structure then serves as prior information (it becomes a landmark) for segmenting a more difficult structure. Iteratively, the nth structure to be segmented has n-1 priors to draw information from, each with its own confidence metric. The likely location of the nth structure, calculated as described above, serves as an input constraining an active contour algorithm. Additionally, we have had excellent results when constraining structures of the eye to simple geometric shapes, such as ellipses and tubes, to limit the number of free parameters. A sample segmentation of the eyeball is shown below. &lt;br /&gt;
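The PCA-based confidence metric can be sketched as follows; this is our own simplification (a Mahalanobis score in the top-k PCA subspace), not the exact probabilistic-PCA likelihood used in the work.&lt;br /&gt;

```python
import numpy as np

def ppca_confidence(training_shapes, query_shape, k=2):
    """Confidence that a segmented shape is typical of the learned class:
    negative squared Mahalanobis distance in the top-k PCA subspace
    (higher means more typical).  k must not exceed the rank of the
    centered training matrix."""
    X = np.asarray(training_shapes, dtype=float)
    mu = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    var = (s ** 2) / (len(X) - 1)                # per-component variance
    coords = (np.asarray(query_shape, dtype=float) - mu) @ Vt[:k].T
    return -float(np.sum(coords ** 2 / var[:k]))
```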
&lt;br /&gt;
* [[Image:Model3D_upTrans.png | Rel Pred| 250px]] Prediction of the mandible location (red) given a brainstem (yellow) and larynx (blue) segmentation; the ground-truth segmentation is green.&lt;br /&gt;
* [[Image:3D_eye.png | Eye Seg| 250px]] Segmentation of the eyeball.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Current State of Work ==&lt;br /&gt;
We have gathered a sample data set; currently, we are registering the scans to create a unified patient model. This model will localize structures and constrain shape. Additionally, we are investigating interaction approaches for registration and segmentation that will &amp;quot;put the user in the loop&amp;quot; but greatly reduce the work compared to manual approaches.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
* Georgia Tech: Ivan Kolesov, Vandana Mohan, and Allen Tannenbaum&lt;br /&gt;
* Massachusetts General Hospital: Gregory Sharp&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
''In Press''&lt;br /&gt;
&lt;br /&gt;
I. Kolesov, V. Mohan, G. Sharp and A. Tannenbaum. Coupled Segmentation for Anatomical Structures by Combining Shape and Relational Spatial Information. MTNS 2010.&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:PainAssessment&amp;diff=83742</id>
		<title>Projects:PainAssessment</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:PainAssessment&amp;diff=83742"/>
		<updated>2013-11-16T01:02:44Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[Algorithm:Stony Brook|Stony Brook University Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Agitation and Pain Assessment Using Digital Imaging =&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
Pain assessment in patients who are unable to verbally communicate with the medical staff is a challenging problem in critical care. This problem is most prominently encountered in sedated patients in the intensive care unit (ICU) recovering from trauma and major surgery, as well as in infant patients and patients with brain injuries. Current practice in the ICU requires the nursing staff to assess the pain and agitation experienced by the patient and to take appropriate action to ameliorate the patient’s anxiety and discomfort.&lt;br /&gt;
&lt;br /&gt;
The fundamental limitations in sedation and pain assessment in the ICU stem from subjective assessment criteria, rather than quantifiable, measurable data for ICU sedation. This often results in poor quality and inconsistent treatment of patient agitation from nurse to nurse. Recent advances in computer vision techniques can assist the medical staff in assessing sedation and pain by constantly monitoring the patient and providing the clinician with quantifiable data for ICU sedation. An automatic pain assessment system can be used within a decision support framework which can also provide automated sedation and analgesia in the ICU. In order to achieve closed-loop sedation control in the ICU, a quantifiable feedback signal is required that reflects some measure of the patient’s agitation. A non-subjective agitation assessment algorithm can be a key component in developing closed-loop sedation control algorithms for ICU sedation.&lt;br /&gt;
&lt;br /&gt;
Individuals in pain manifest their condition through &amp;quot;pain behavior&amp;quot;, which includes facial expressions. Clinicians regard the patient’s facial expression as a valid indicator of pain and pain intensity. Hence, correct interpretation of the patient’s facial expressions and their correlation with pain is a fundamental step in designing an automated pain assessment system. Of course, other pain behaviors, including head movement and the movement of other body parts, along with physiological indicators of pain such as heart rate, blood pressure, and respiratory rate responses, should also be included in such a system.&lt;br /&gt;
&lt;br /&gt;
Computer vision techniques can be used to quantify agitation in sedated ICU patients. In particular, such techniques can be used to develop objective agitation measurements from patient motion. In the case of paraplegic patients, whole-body movement is not available, and hence monitoring whole-body motion is not a viable solution. In this case, measuring head motion and facial grimacing to quantify patient agitation in critical care can be a useful alternative.&lt;br /&gt;
&lt;br /&gt;
== Pain Recognition using Sparse Kernel Machines ==&lt;br /&gt;
&lt;br /&gt;
Support vector machines (SVMs) and relevance vector machines (RVMs) were used to identify the facial expressions corresponding to pain. A total of 21 subjects from the infant COPE database were selected such that for each subject at least one photograph corresponded to pain and one to non-pain. The number of photographs available per subject ranged from 5 to 12, with a total of 181 photographs considered. We applied the leave-one-out method for validation.&lt;br /&gt;
&lt;br /&gt;
The classification accuracy for the SVM algorithm with a linear kernel was 90%. Applying the RVM algorithm with a linear kernel to the same data set resulted in an almost identical classification accuracy, namely, 91%.&lt;br /&gt;
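The leave-one-out protocol above can be sketched as follows; a nearest-centroid classifier stands in here for the SVM/RVM of the study, and all names are ours.&lt;br /&gt;

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out validation: hold each sample out, fit on the rest,
    and test on the held-out sample.  A nearest-centroid classifier
    stands in for the SVM/RVM of the study."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    correct = 0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        Xtr, ytr = X[keep], y[keep]
        centroids = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += int(pred == y[i])
    return correct / len(y)
```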
&lt;br /&gt;
== Pain Intensity Assessment ==&lt;br /&gt;
&lt;br /&gt;
In addition to classification, the RVM algorithm provides the posterior probability of a test image’s membership in a class. As discussed earlier, using a Bayesian interpretation of probability, the probability of an event can be interpreted as the degree of uncertainty associated with that event. This uncertainty can be used to estimate pain intensity.&lt;br /&gt;
&lt;br /&gt;
In particular, if a classifier is trained with a series of facial images corresponding to pain and non-pain, then there is some uncertainty in associating the facial image of a person experiencing moderate pain with the pain class. The efficacy of this interpretation of the posterior probability was validated by comparing the algorithm’s pain assessment with that of several experts (intensivists) and non-experts.&lt;br /&gt;
&lt;br /&gt;
In order to compare the pain intensity assessment given by the RVM algorithm with human assessment, we compared the subjective measurement of pain intensity assessed by expert and non-expert examiners with the uncertainty in the pain class membership (posterior probability) given by the RVM algorithm. We chose 5 random infants from the COPE database, and for each subject two photographs of the face, corresponding to the non-pain and pain conditions, were selected. In the selection process, photographs were selected where the infant’s facial expression truly reflected the pain condition (calm for non-pain, distressed for pain), and scores of 0 and 100, respectively, were assigned to these photographs to give the human examiners fair prior knowledge for assessing pain intensity.&lt;br /&gt;
&lt;br /&gt;
Ten examiners were asked to provide a score from 0 to 100 for each new photograph of the same subject, using multiples of 10. Five examiners with no medical expertise and five with medical expertise were selected for this assessment. The medical experts were members of the clinical staff at the intensive care unit of the Northeast Georgia Medical Center, Gainesville, GA, consisting of one medical doctor, one nurse practitioner, and three nurses. They were asked to assess pain for a series of random photographs of the same subject, with the criterion that a score above 50 corresponds to pain, with a higher score corresponding to a higher pain intensity. Analogously, a score below 50 corresponds to non-pain, with a higher score corresponding to a higher level of discomfort. The posterior probability given by the RVM algorithm with a linear kernel for each corresponding photograph was scaled to 0 to 100 and rounded to the nearest multiple of 10.&lt;br /&gt;
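The mapping of the RVM posterior onto the examiners' 0 to 100 scale amounts to the following (the helper name is ours, and it assumes the posterior is first scaled to 0 to 100):&lt;br /&gt;

```python
def posterior_to_score(posterior):
    """Map an RVM posterior pain probability in [0, 1] onto the 0-100
    examiner scale, rounded to the nearest multiple of 10 (exact ties
    resolved by Python's round())."""
    return int(round(posterior * 10.0)) * 10
```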
&lt;br /&gt;
[[Image:Pain1.JPG|500px]]&lt;br /&gt;
[[Image:pain2.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
B. Gholami, W. M. Haddad, and A. Tannenbaum, “Relevance Vector Machine Learning for Neonate Pain Intensity Assessment Using Digital Imaging,” IEEE Trans. Biomed. Eng., vol. 57, pp. 1457-1466, 2010.&lt;br /&gt;
&lt;br /&gt;
B. Gholami, W. M. Haddad, and A. R. Tannenbaum, “Agitation and Pain Assessment Using Digital Imaging,” Proc. IEEE Eng. Med. Biolog. Conf., Minneapolis, MN, pp. 2176-2179, 2009 (Awarded National Institute of Biomedical Imaging and Bioengineering/National Institutes of Health Student Travel Fellowship).&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
Georgia Tech: Behnood Gholami, Wassim M. Haddad, and Allen Tannenbaum&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:TubularSurfaceSegmentationPopStudy&amp;diff=83741</id>
		<title>Projects:TubularSurfaceSegmentationPopStudy</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:TubularSurfaceSegmentationPopStudy&amp;diff=83741"/>
		<updated>2013-11-16T01:02:35Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[Algorithm:Stony Brook|Stony Brook University Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Group study using the Tubular Surface model =&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
We have proposed a new framework for performing group studies on DW-MRI data using the Tubular Surface Model to study white-matter properties. We show that the model facilitates population studies through the natural registration that occurs when WM properties are sampled along the fiber bundles' center-lines. Further, by allowing us to characterize the discrimination ability of local regions of the fiber bundles, the framework lets us identify the regions &amp;quot;affected&amp;quot; by the disorder under study. In our experiments, we have applied the framework to study the Cingulum Bundle toward discriminating schizophrenia.&lt;br /&gt;
&lt;br /&gt;
== Some Results ==&lt;br /&gt;
The results below show the visualization of t-statistics with respect to the extracted features, with discrimination ability increasing from green to red. This demonstrates the ability of the framework to visualize the role of different regions of the fiber bundle in Schizophrenia.&lt;br /&gt;
* [[Image:GT-PopStudyVis_OnCBs_Case19-View1.jpg | Visualization of T-statistics on Cingulum Bundle surface (View 1)| 300px]] Visualization of T-statistics on Cingulum Bundle surface (View 1)&lt;br /&gt;
* [[Image:GT-PopStudyVis_OnCBs_Case19-View2.jpg | Visualization of T-statistics on Cingulum Bundle surface (View 2)| 300px]] Visualization of T-statistics on Cingulum Bundle surface (View 2)&lt;br /&gt;
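The t-statistics visualized above can be computed pointwise along the registered center-line samples; this is a minimal Welch-style sketch under our own naming, assuming subjects-by-points feature matrices for the two groups.&lt;br /&gt;

```python
import numpy as np

def pointwise_t(group_a, group_b):
    """Welch t-statistic at each center-line sample point.  Rows are
    subjects; columns are corresponding points along the bundle, made
    comparable by the center-line sampling described above."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    va = a.var(axis=0, ddof=1) / len(a)
    vb = b.var(axis=0, ddof=1) / len(b)
    return (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(va + vb)
```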
&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
Georgia Tech: Allen Tannenbaum, Vandana Mohan&lt;br /&gt;
&lt;br /&gt;
BWH: Marek Kubicki&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
''In Print''&lt;br /&gt;
* [http://www.na-mic.org/publications/pages/display?search=Projects:TubularSurfaceSegmentationPopStudy&amp;amp;submit=Search&amp;amp;words=all&amp;amp;title=checked&amp;amp;keywords=checked&amp;amp;authors=checked&amp;amp;abstract=checked&amp;amp;sponsors=checked&amp;amp;searchbytag=checked| NA-MIC Publications Database on Group Study on DW-MRI using the Tubular Surface Model]&lt;br /&gt;
&lt;br /&gt;
''In Press''&lt;br /&gt;
&lt;br /&gt;
V. Mohan, G. Sundaramoorthi, M. Kubicki and A. Tannenbaum. Population Analysis of neural fiber bundles towards schizophrenia detection and characterization, using the Tubular Surface model. Neuroimage (in submission) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: MRI]]&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Projects:SoftPlaqueDetection&amp;diff=83740</id>
		<title>Projects:SoftPlaqueDetection</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Projects:SoftPlaqueDetection&amp;diff=83740"/>
		<updated>2013-11-16T01:02:26Z</updated>

		<summary type="html">&lt;p&gt;Ygao: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[Algorithm:Stony Brook|Stony Brook University Algorithms]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Soft Plaque Detection =&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
The ability to detect and measure non-calcified plaques (also known as soft plaques) may improve physicians’ ability to predict cardiac events. This is a particularly challenging problem in computed tomography angiography (CTA) imagery because plaques may have similar appearance to nearby blood and muscle tissue. This work presents an effective technique for automatically detecting soft plaques in CTA imagery using active contours driven by spatially localized probabilistic models. The proposed method identifies plaques that exist within the vessel wall by simultaneously segmenting the vessel from the inside-out and the outside-in using carefully chosen localized energies that allow the complex appearances of plaques and vessels to be modeled with simple statistics. This method is shown to be an effective way to detect the minute variations that distinguish plaques from healthy tissue. Experiments demonstrating the effectiveness of the algorithm are performed on eight datasets, and results are compared with detections provided by an expert cardiologist.&lt;br /&gt;
&lt;br /&gt;
This technique allows region-based segmentation energies to be spatially localized such that statistical models of the foreground and background adapt to image information as it changes over the domain of the image. This allows for improved modeling accuracy with simplified statistical models. Furthermore, it is particularly powerful for segmenting vessels, which often exhibit changing image intensities over their length, and for the identification of non-calcified plaques, which typically have only slight intensity differences from surrounding structures.&lt;br /&gt;
&lt;br /&gt;
First, vessels are segmented using a localized energy that is ideal for vessel segmentation, because it allows the surface to expand into areas of similar local intensity as long as a larger difference exists between local interiors and exteriors. This allows rapid segmentation of vessels despite changing intensities along the length of the vessel.&lt;br /&gt;
&lt;br /&gt;
Next, two initial surfaces are created that lie just inside and just outside the initial segmented surface. A separate localized energy is used to move the two surfaces toward each other.  This energy is chosen to emphasize local differences so that expansion into nearby regions that have slightly different intensities is discouraged, even if the local means are similar. This more stringent constraint is quite valuable when attempting to differentiate between vascular plaques and surrounding tissue.&lt;br /&gt;
&lt;br /&gt;
Areas where the two surfaces do not find the same boundary are identified as plaques.&lt;br /&gt;
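On binary masks, this final disagreement test reduces to a simple set difference; a minimal sketch with our own naming:&lt;br /&gt;

```python
import numpy as np

def plaque_candidates(inner_mask, outer_mask):
    """Voxels the outside-in surface still encloses but the inside-out
    surface does not: where the two evolutions disagree, a plaque is
    suspected."""
    return outer_mask & ~inner_mask
```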
&lt;br /&gt;
== Ongoing Work ==&lt;br /&gt;
&lt;br /&gt;
Future work on this method will include coupling the evolution of the interior and exterior surfaces so that information about local intensities and geometries can be shared in order to detect plaques more robustly. Furthermore, a larger study is planned in which a larger number of datasets will be analyzed, a quantitative analysis will be performed, and the method will be compared with intravascular ultrasound imagery to confirm the presence and composition of detected plaques. We believe this work has the potential to be an important step forward in automatically detecting non-calcified plaques, which have been clearly linked with the occurrence of heart attacks and stroke.&lt;br /&gt;
&lt;br /&gt;
= Key Investigators =&lt;br /&gt;
&lt;br /&gt;
Georgia Tech: Shawn Lankton, Jacob Huang, Vandana Mohan, and Allen Tannenbaum&lt;br /&gt;
&lt;br /&gt;
= Publications =&lt;br /&gt;
&lt;br /&gt;
''In Press''&lt;br /&gt;
&lt;br /&gt;
*Soft Plaque Detection and Automatic Vessel Segmentation. PMMIA Workshop in MICCAI, Sep. 2009.&lt;/div&gt;</summary>
		<author><name>Ygao</name></author>
		
	</entry>
</feed>