<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.na-mic.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Gabor</id>
	<title>NAMIC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.na-mic.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Gabor"/>
	<link rel="alternate" type="text/html" href="https://www.na-mic.org/wiki/Special:Contributions/Gabor"/>
	<updated>2026-04-16T22:50:00Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.33.0</generator>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2013_Winter_Project_Week&amp;diff=78551</id>
		<title>2013 Winter Project Week</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2013_Winter_Project_Week&amp;diff=78551"/>
		<updated>2012-12-07T00:39:55Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[Project Events]], [[Events]]&lt;br /&gt;
 Back to [[Project Events]], [[AHM_2013]], [[Events]]&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
[[image:PW-SLC2013.png|300px]]&lt;br /&gt;
&lt;br /&gt;
= Dates.Venue.Registration =&lt;br /&gt;
&lt;br /&gt;
Please [[AHM_2013#Dates_Venue_Registration|click here for Dates, Venue, and Registration]] for this event.&lt;br /&gt;
&lt;br /&gt;
= [[AHM_2013#Agenda|'''AGENDA''']] and Project List=&lt;br /&gt;
&lt;br /&gt;
Please:&lt;br /&gt;
*  [[AHM_2013#Agenda|'''Click here for the agenda for AHM 2013 and Project Week''']].&lt;br /&gt;
*  [[#Projects|'''Click here to jump to Project list''']]&lt;br /&gt;
&lt;br /&gt;
=Background and Preparation=&lt;br /&gt;
&lt;br /&gt;
A summary of all past NA-MIC Project Events is available [[Project_Events#Past|here]].&lt;br /&gt;
&lt;br /&gt;
Please make sure that you are on the [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week na-mic-project-week mailing list]&lt;br /&gt;
&lt;br /&gt;
=Projects=&lt;br /&gt;
&lt;br /&gt;
==TBI==&lt;br /&gt;
&lt;br /&gt;
==Atrial Fibrillation==&lt;br /&gt;
* Scar Identification (LiangJia Zhu, Yi Gao, Josh Cates, Rob MacLeod, Allen Tannenbaum)&lt;br /&gt;
&lt;br /&gt;
==Huntington's Disease==&lt;br /&gt;
* [[2013_Project_Week:QualityAssuranceModule|Quality assurance module enhancements]] (Dave Welch, Hans Johnson)&lt;br /&gt;
* [[2013_Project_Week:PythonModules|Slicer interface to add Python modules to Slicer environment]] (Dave Welch)&lt;br /&gt;
* [[2013_Project_Week:FastFiducialRegistrationModule|Fast fiducial registration module]] (Dave Welch)&lt;br /&gt;
&lt;br /&gt;
==Head and Neck Cancer==&lt;br /&gt;
&lt;br /&gt;
==Prostate Interventions==&lt;br /&gt;
* BRAINSFit in ITK4: extra functionality and testing for prostate MR registration (Andrey, Hans)&lt;br /&gt;
* PkModeling for prostate DCE MRI (Jim, Andrey)&lt;br /&gt;
&lt;br /&gt;
==Neurosurgery==&lt;br /&gt;
&lt;br /&gt;
==General Image Guided Therapy==&lt;br /&gt;
* United SlicerIGT extension, repository, website (Tamas Ungi, Junichi Tokuda, Adam Rankin)&lt;br /&gt;
* Mobile image overlay augmented reality needle guidance (Adam Rankin, Tamas Ungi)&lt;br /&gt;
* SlicerRT: Radiotherapy extension (Csaba Pinter, Kevin Wang)&lt;br /&gt;
&lt;br /&gt;
==General Image Segmentation==&lt;br /&gt;
&lt;br /&gt;
==General Image Registration==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==NA-MIC Kit Internals==&lt;br /&gt;
*[[2013_Project_Week:MarkupsModule|Slicer4 Markups Module]] (Nicole Aucoin)&lt;br /&gt;
*[[2013_Project_Week:ColorHierarchies|Slicer4 Color Hierarchies]] (Nicole Aucoin)&lt;br /&gt;
*[[2013_Project_Week:PETStandardUptakeValueComputation| PET/CT SUV Module for Clinicians]] (Sonia Pujol, Markus Van Tol, Nicole Aucoin)&lt;br /&gt;
*[[2013_Project_Week:SelfTests|Slicer4 Self Test  and [http://slicer-devel.65872.n3.nabble.com/ANN-Oldies-But-Goodies-Volume-Data-tt4026821.html Sample Data Refactor]]] (Steve Pieper, Jim Miller, Jc, Sankhesh Jhaveri)&lt;br /&gt;
*[[2013_Project_Week:SimplifyMRMLReference|Simplify MRML References]] - Issue [http://www.na-mic.org/Bug/view.php?id=2727 #2727] (Alex Yarmarkovich, Andras Lasso?, Steve Pieper, Nicole Aucoin?, Julien Finet ?, Sankhesh Jhaveri ?, Jc ?)&lt;br /&gt;
*[[2013_Project_Week:SlicerIPythonIntegration|Integration of IPython]] (Jc, Hans Johnson, Dave Welch, Steve Pieper)&lt;br /&gt;
*[[2013_Project_Week:SlicerDebianPackage|Slicer Debian package]] (Jc, Dominique Belhachemi ?)&lt;br /&gt;
*[[2013_Project_Week:SimplifyRendererMouseInteraction|Simplify renderer window mouse interaction]] - Mailing list [http://slicer-devel.65872.n3.nabble.com/Left-mouse-button-changes-window-level-Is-it-good-tt4026815.html thread] (Csaba ?, Greg?, Andriy?, Steve, Jc)&lt;br /&gt;
*[[2013_Project_Week:PotentialSolutionForDefiningRoleAttributesForVolumes|Potential solutions for defining roles and/or attributes for volumes that are preserved when the volume is processed.]] - Mailing list [http://slicer-devel.65872.n3.nabble.com/Volume-node-subclass-tt4026807.html thread] (Andras?, Greg?, Andriy?, Steve, Jc)&lt;br /&gt;
*[[2013_Project_Week:SteeredRegistration|Interactive Registration for Image Guided Therapy]] (Jim Miller, Steve Pieper, Kunlin Cao)&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2009_Annual_Scientific_Report&amp;diff=37533</id>
		<title>2009 Annual Scientific Report</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2009_Annual_Scientific_Report&amp;diff=37533"/>
		<updated>2009-05-15T22:25:27Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Clinical Component (Fichtinger) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[2009_Progress_Report]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Guidelines for preparation=&lt;br /&gt;
&lt;br /&gt;
*[[2009_Progress_Report#Scientific Report Timeline]] - Main point is that May 15 is the date by which all sections below need to be completed.  No extensions are possible.&lt;br /&gt;
*DBPs - If there is work outside of the roadmap projects that you would like to report, you are welcome to create a separate section for it under &amp;quot;Other&amp;quot;.  &lt;br /&gt;
*The outline for this report is similar to the 2008 and 2007 reports, which are provided here for reference: [[2008_Annual_Scientific_Report]], [[2007_Annual_Scientific_Report]].&lt;br /&gt;
*In preparing summaries for each of the 8 topics in this report, please leverage the detailed pages for projects provided here: [[NA-MIC_Internal_Collaborations]].&lt;br /&gt;
*Publications will be mined from the SPL publications database. All core PIs need to ensure that all NA-MIC publications are in the publications database by May 15.&lt;br /&gt;
&lt;br /&gt;
=Introduction (Tannenbaum)=&lt;br /&gt;
&lt;br /&gt;
The National Alliance for Medical Imaging Computing (NA-MIC) is now completing its fifth year. The Center comprises a multi-institutional, interdisciplinary team of computer scientists, software engineers, and medical investigators who have come together to develop and apply computational tools for the analysis and visualization of medical imaging data. A further purpose of the Center is to provide infrastructure and environmental support for the development of computational algorithms and open source technologies, as well as to oversee the training and dissemination of these tools to the medical research community. We are currently in year two of our second set of Driving Biological Projects (DBP), three of which involve diseases of the brain: (a) brain lesion analysis in neuropsychiatric systemic lupus erythematosus; (b) a study of cortical thickness for autism; and (c) stochastic tractography for velocardiofacial syndrome (VCFS). The fourth DBP takes the Center in a very new direction, (d) the prostate: brachytherapy needle positioning robot integration.&lt;br /&gt;
&lt;br /&gt;
Over the past five  years, NA-MIC has made substantial progress toward the attainment of its major objectives. In year one, the Center focused on forging alliances amongst its various cores and constituent groups to assure that the efforts of the cores were well integrated toward the attainment of common and specific goals.  To that end a great deal of effort went into defining the kinds of tools that would be needed for specific imaging applications. The second year emphasized the identification of key research thrusts that cut across all cores and were driven by the needs and requirements of the DBPs. This led to the formulation of the Center's four main technical themes: Diffusion Tensor Analysis, Structural Analysis, Functional MRI Analysis, and the integration of newly developed tools into the NA-MIC Tool Kit. The third year of Center activity was devoted to the continuation of collaborative work to develop solutions for the various brain-oriented DBPs. The fourth year was focused on translating  collaborative knowledge and work to a new set of DBPs. In the current fifth year, a number of projects have made sufficient progress to warrant introduction as modules in Slicer, thereby making the Core 1 algorithms available to the general medical imaging community. Some of these algorithms are quite general and can be used for purposes far broader than the original DBPs. For example, a new point cloud registration algorithm developed for the prostate brachytherapy needle positioning project also can be used for DWI registration. Likewise, work on DTI/DWI tractography has been applied to the segmentation of blood vessels and soft plaque detection in the coronary arteries.&lt;br /&gt;
&lt;br /&gt;
Year five progress with respect to the current DBPs is relevant to the scope of this Annual Progress Report. As mentioned, we currently have three projects in the area of neuropsychiatric disorders: Systemic Lupus Erythematosus (MIND Institute, University of New Mexico), Velocardiofacial Syndrome (Harvard), and Autism (University of North Carolina, Chapel Hill). A fourth project from Johns Hopkins and Queens Universities involves the application of core technologies to imaging/robotics-guided treatments in prostate cancer. A number of papers have been published that specifically acknowledge NA-MIC, and significant software development is continuing as well.&lt;br /&gt;
&lt;br /&gt;
Section 3 outlines specific aims fulfilled this year by the four roadmap projects: Section 3.1 describes the Stochastic Tractography Approach for Velocardiofacial Syndrome; Section 3.2 details the application of our work to Brachytherapy Needle Positioning for the Prostate; Section 3.3 outlines the Brain Lesion Analysis in Neuropsychiatric Systemic Lupus Erythematosus project; and Section 3.4 documents the Cortical Thickness for Autism project. For all of these projects, a synergism of effort has produced working computer modules that are user friendly and accessible to both medical researchers and clinicians. &lt;br /&gt;
&lt;br /&gt;
Section 4 describes year five work on the four infrastructure topics. These include: Diffusion Image Analysis (Section 4.1), Structural Analysis (Section 4.2), Functional MRI Analysis (Section 4.3), and the NA-MIC Toolkit (Section 4.4). Many of the algorithms produced by Cores 1-3 have been integrated into ITK and Slicer, including those concerning shape analysis (e.g., spherical wavelets), new segmentation algorithms (for DTI/DWI tractography and the segmentation of the prostate), and new approaches to registration (e.g., based on particle filtering).&lt;br /&gt;
&lt;br /&gt;
Finally, the last three sections of this Annual Progress Report highlight some of the work that the Scientific Leadership believes is particularly significant to the overall goals of the Center. Section 5 summarizes the benefits of several advanced algorithms, gives a description of the growing NA-MIC Toolkit, and documents the scope of our efforts in technology transfer and outreach. It is essential to emphasize that although the algorithms emanating from this Center were developed to solve specific clinical problems raised by the DBPs, in application, most of these algorithms have far more general utility and far greater potential impact on the medical imaging technical base. To this end, Section 6 draws attention to the impact and value of our work on biocomputing imaging at three different levels: within the Center, within the NIH-funded research community, and externally to the national and international community. To further illustrate the impact of our work, Section 7 provides some updated timelines with specific milestones achieved by the various NA-MIC cores. Section 8 lists publications pertinent to the current reporting period that acknowledge NA-MIC support, and Section 9 provides the External Advisory Report and our considered response.&lt;br /&gt;
&lt;br /&gt;
=Clinical Roadmap Projects=&lt;br /&gt;
==Roadmap Project: Stochastic Tractography for VCFS (Kubicki)==&lt;br /&gt;
===Overview (Kubicki)===&lt;br /&gt;
The goal of this project is to create an end-to-end application that is useful in evaluating anatomical connectivity between segmented cortical regions of the brain. The ultimate goal of our program is to understand similarities and differences in anatomical connectivity between genetically related schizophrenia and velocardio-facial syndrome. Thus we plan to use the &amp;quot;stochastic tractography&amp;quot; tool for the analysis of abnormalities in integrity or connectivity in the arcuate fasciculus fiber bundle that is involved in language processing in schizophrenia and VCFS.&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Golland)===&lt;br /&gt;
The core science involved in this project is the Stochastic Tractography algorithm. This algorithm was developed and implemented collaboratively by MIT and BWH. Stochastic Tractography is a Bayesian approach to estimating nerve fiber tracts from images created by diffusion tensor imaging (DTI).&lt;br /&gt;
&lt;br /&gt;
In this approach, the diffusion tensor is used at each voxel in the volume to construct a local probability distribution for the fiber direction around the principal direction of diffusion. The tracts are then sampled between two user-selected regions of interest (ROIs) by simulating a random walk between the regions, based on the local transition probabilities inferred from the DTI image.&lt;br /&gt;
&lt;br /&gt;
The resulting collection of fibers and the associated FA values provide useful statistics on the properties of connections between the two regions. To constrain the sampling process to the relevant white matter region, atlas-based segmentation is used to label ventricles and gray matter and to exclude them from the search space. This latter step relies heavily on the registration and segmentation functionality of Slicer.&lt;br /&gt;
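The random-walk sampling described above can be sketched in a few lines. This is a minimal toy illustration on a synthetic 2D principal-direction field; every name and parameter here is an invented assumption, and none of it reflects the actual NA-MIC module code.

```python
# Toy sketch of random-walk ("stochastic") tractography: at each step,
# draw a direction near the local principal diffusion direction and
# advance the walker; connection probability is the fraction of walks
# that reach the target region. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2D field: principal diffusion direction points along +x everywhere.
shape = (20, 20)
principal = np.zeros(shape + (2,))
principal[..., 0] = 1.0

def sample_direction(mean_dir, concentration=8.0):
    """Draw a unit direction near mean_dir; higher concentration, tighter spread."""
    angle = np.arctan2(mean_dir[1], mean_dir[0])
    angle += rng.normal(0.0, 1.0 / concentration)
    return np.array([np.cos(angle), np.sin(angle)])

def walk(seed, steps=40):
    """Simulate one random walk from the seed, stopping at the volume boundary."""
    pos = np.array(seed, dtype=float)
    path = [pos.copy()]
    for _ in range(steps):
        ij = tuple(np.clip(pos.astype(int), 0, np.array(shape) - 1))
        pos = pos + sample_direction(principal[ij])
        inside = (pos >= 0).all() and ((np.array(shape) - 1 - pos) >= 0).all()
        if not inside:
            break
        path.append(pos.copy())
    return np.array(path)

# Fraction of walks from the seed ROI that reach the target ROI (x at 15 or more).
target_x = 15.0
hits = sum(1 for _ in range(200) if walk((2.0, 10.0))[:, 0].max() >= target_x)
print(hits / 200.0)
```

In the real algorithm the per-voxel direction distribution is derived from the diffusion tensor rather than fixed, and the walk is constrained by the atlas-based white matter mask described above.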
&lt;br /&gt;
Over the last year, we have been working on applying several pre- and postprocessing steps to the algorithm pipeline. These steps include &amp;quot;eddy current&amp;quot; and &amp;quot;geometric distortion correction,&amp;quot; which were made available to us by the Utah group, as well as &amp;quot;DTI filtering&amp;quot; (BWH). White matter masks now also can be created based on T2 thresholding within the Slicer Stochastic Tractography module. These masks are more precise, as they do not rely on MRI-to-DTI co-registration. &lt;br /&gt;
&lt;br /&gt;
We also have been working on the datasets that apply to situations where fMRI activations as well as gray matter segmentations need to be registered to DTI data to permit seeding within predefined gray matter regions. Significant progress has been made in modality registration, and additional improvement is expected when &amp;quot;geometric distortion correction&amp;quot; becomes part of the analysis pipeline. &lt;br /&gt;
&lt;br /&gt;
Finally, we have been working on improved ways to visualize and quantify Stochastic Tractography output, not only by parametrizing fiber tracts, but also by creating connection probability distribution maps.&lt;br /&gt;
&lt;br /&gt;
===Engineering Component (Davis)===&lt;br /&gt;
This year, the Stochastic Tractography Slicer module was rewritten in Python. The new module was released in December 2008 and presented at the All Hands Meeting in Salt Lake City. The module is now a functional component of Slicer3. Documentation for operating the module also has been created to facilitate user training. Current engineering efforts are focused on maintaining the module, optimizing it for use with other data formats, and adding new functionality, such as better registration, distortion correction, and ways of extracting and measuring fractional anisotropy (FA) along nerve fiber tracts. &lt;br /&gt;
&lt;br /&gt;
The datasets used with the Stochastic Tractography module are computationally demanding: they involve higher spatial resolutions and many more diffusion directions than the White Matter Tractography data used previously. In addition, the cortical ROIs tend to be much larger than white matter ROIs. Hence, there is a pressing need for performance improvement. This need can be appreciated by comparing Stochastic Tractography, where literally hundreds of tracts are generated from a single seed, with Deterministic Tractography, where only a single tract is generated. Thus, some effort has been made to economize through multi-threading and parallel processing. A version of the Stochastic Tractography algorithm that uses large computer clusters also has been developed and can be downloaded and installed by individual users with minimal knowledge of parallel computing.&lt;br /&gt;
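Because each sampled tract is independent, the "hundreds of walks per seed" workload parallelizes naturally. The following is a minimal sketch of that idea with a worker pool; the function name and workload are stand-ins, not the module's real code, and the cluster version mentioned above uses different infrastructure.

```python
# Sketch of parallelizing independent tract samples with a thread pool.
# sample_tract is a stand-in for one stochastic walk.
from concurrent.futures import ThreadPoolExecutor
import random

def sample_tract(seed):
    """Stand-in for one stochastic walk; returns a fake tract statistic."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(1000))

# Run 200 independent walks across 4 workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    lengths = list(pool.map(sample_tract, range(200)))

print(len(lengths), round(sum(lengths) / len(lengths), 1))
```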
&lt;br /&gt;
===Clinical Component (Kubicki)===&lt;br /&gt;
In this reporting period, we have designed, implemented, or completed several studies to test the Stochastic Tractography algorithm on the newly released 3T NA-MIC data. These datasets consist of high resolution DTI, structural MR data, and automatic anatomical segmentations. Since these data already have been co-registered, cortical ROIs can be used as seeding points for Stochastic Tractography. &lt;br /&gt;
&lt;br /&gt;
The first of these clinical studies was designed to analyze the connections between the inferior frontal and superior temporal lobes, which represent important sites of the language network. The connections between these two regions were measured via Stochastic Tractography in a group of 20 chronic schizophrenia patients and 20 controls and then subjected to comparative analysis. We also examined gray matter volume in destination regions and attempted to estimate the relationship between gray and white matter abnormalities in schizophrenia. The results of this study were presented at the World Psychiatry Congress in Florence, Italy in April 2009, and later that same month at the Harvard Psychiatry MYSELL conference.  &lt;br /&gt;
&lt;br /&gt;
Another current endeavor is the application of Stochastic Tractography to define the connections involved in emotional processing. For this study, we are using cortical segmentations of the anterior cingulate gyrus, orbitofrontal gyrus, and amygdala, and we trace and quantify connections between these regions in healthy controls versus schizophrenia patients. The results of this preliminary study were presented at MYSELL in April 2009. Another presentation will be made at the Biological Psychiatry conference later this year. &lt;br /&gt;
&lt;br /&gt;
We have also been involved in two collaborative efforts. The first involves the use of DTI data acquired at UCI.  In this study, we have used the stochastic method to segment and measure the arcuate fasciculus in subjects with schizophrenia and language impairment, as evidenced in ERP data. In another collaboration, we are combining resting state fMRI data with DTI to measure connectivity between regions that form a functional network. Both of these projects are currently under way. &lt;br /&gt;
&lt;br /&gt;
Finally, a paper that discusses the qualitative use of Stochastic Tractography has been accepted for publication in Human Brain Mapping and is currently in press. Here, when we combined fMRI with DTI whole brain data analysis, we identified certain regions that expressed abnormal functional connectivity in schizophrenia. These regions were then assigned to certain anatomical structures (white matter tracts) based on their location and relationship to Stochastic Tractography output.&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:Harvard:Brain_Segmentation_Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: MR-guided Prostate Biopsy Needle Positioning Robot Integration (Fichtinger)==&lt;br /&gt;
===Overview (Fichtinger)===&lt;br /&gt;
Numerous studies have demonstrated the efficacy of image-guided needle-based therapy and biopsy in the management of prostate cancer. However, the accuracy of traditional prostate interventions that rely on transrectal ultrasound (TRUS) is limited by image fidelity, needle template guides, needle deflection, and tissue deformation. Magnetic Resonance Imaging (MRI) is an ideal modality for guiding and monitoring such interventions because it provides excellent visualization of the prostate, its sub-structure, and surrounding tissues. &lt;br /&gt;
&lt;br /&gt;
We have designed a comprehensive robotic assistant system that permits prostate biopsy and brachytherapy procedures to be performed entirely inside a 3T closed MRI scanner. The current system applies the transrectal approach to the prostate. With this approach, an endorectal coil and a steerable needle guide, both tuned to 3T magnets independently of any particular scanner, are integrated into the MRI-compatible manipulator.&lt;br /&gt;
&lt;br /&gt;
Under the NA-MIC initiative, the interface between image computing, visualization, intervention planning, and kinematic planning is being managed by an open-source system built on the NA-MIC toolkit and its components, that is, Slicer3 and ITK.  These tools are complemented by a collection of unsupervised prostate segmentation and registration methods that are important to the clinical performance of the interventional system as a whole.&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Tannenbaum)===&lt;br /&gt;
&lt;br /&gt;
Our group (GaTech) has worked on both the segmentation and registration of the prostate from MRI and ultrasound data. This process is described below.&lt;br /&gt;
&lt;br /&gt;
'''Prostate Segmentation'''&lt;br /&gt;
&lt;br /&gt;
The first step is to &amp;quot;extract&amp;quot; the prostate. We have provided two methods: a shape-based method and a semi-automatic method. More details are given below and images and further details may be found [http://www.na-mic.org/Wiki/index.php/Projects:ProstateSegmentation here]&lt;br /&gt;
&lt;br /&gt;
# ''A shape-based algorithm''. This process begins by learning a group of shapes obtained by manually segmenting a set of 3D prostate images. With the shapes represented as the hyperbolic tangent of their signed distance functions, principal component analysis is employed to learn the shape space. Given a new prostate image, we then search the learned shape space for the shape that best segments the image. The fitness of a shape for segmenting the image is evaluated by an energy functional measuring the discrepancy between the statistical characteristics inside and outside the current segmentation boundary. This method is robust to noise in the images. Moreover, the whole algorithm pipeline has been integrated into Slicer3 as a command line module.&lt;br /&gt;
# ''Semi-automatic method''. This method is based on a random walk segmentation algorithm. With user-provided initial seed regions inside and outside the object (prostate), the algorithm computes a probability distribution over the image domain by solving a boundary value partial differential equation in which the values at seed regions are fixed at 1.0 or 0.0, depending on whether they are object or background seeds. The resulting distribution indicates the probability of each voxel belonging to the object, and thresholding at 0.5 gives the segmentation. If the result is not satisfactory, the user can edit the seed regions, and a new result is computed based on the previous one. This algorithm has been integrated into the transrectal prostate MRI module of Slicer3.&lt;br /&gt;
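The seed-based probability computation in the semi-automatic method can be sketched on a toy 2D image: the free voxels solve a Laplace-type equation with the seed values clamped, and the 0.5 threshold yields the object mask. This is a simplified stand-in (Jacobi iteration on a regular grid), not the Slicer3 module's actual solver.

```python
# Toy sketch of seed-based random-walk segmentation: clamp object seeds
# to 1.0 and background seeds to 0.0, relax the free voxels toward the
# mean of their neighbors, then threshold the probability field at 0.5.
import numpy as np

n = 32
prob = np.full((n, n), 0.5)

# Seeds: the image border is background (0.0); a small central block is object (1.0).
background = np.zeros((n, n), dtype=bool)
background[0, :] = background[-1, :] = background[:, 0] = background[:, -1] = True
obj = np.zeros((n, n), dtype=bool)
obj[14:18, 14:18] = True

prob[background] = 0.0
prob[obj] = 1.0

# Jacobi iterations: each free voxel becomes the mean of its 4 neighbors,
# while seed voxels stay clamped at their fixed values.
free = ~(background | obj)
for _ in range(500):
    avg = 0.25 * (np.roll(prob, 1, 0) + np.roll(prob, -1, 0)
                  + np.roll(prob, 1, 1) + np.roll(prob, -1, 1))
    prob[free] = avg[free]

segmentation = prob > 0.5   # object mask by the 0.5 threshold
print(int(segmentation.sum()))
```

Editing the seeds and re-running corresponds to the interactive refinement loop described above.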
&lt;br /&gt;
'''Prostate Registration'''&lt;br /&gt;
&lt;br /&gt;
We developed an affine prostate registration method that treats prostate images as point sets. An improved iterative closest point (ICP) algorithm is then used to register the point sets generated from the two images. The proposed method is robust to large translations and to partial image structure. Moreover, this representation is much sparser than sampling the image on a uniform grid, so the registration is very fast compared with full 3D volumetric image registration.&lt;br /&gt;
&lt;br /&gt;
Furthermore, the registration is viewed as a posterior estimation problem, in which the distributions of the affine and translation parameters are to be estimated. These distributions can naturally be estimated using a particle filter framework, which allows the method to handle otherwise difficult cases, such as when one prostate is imaged supine and the other prone.&lt;br /&gt;
&lt;br /&gt;
More details are given [[Projects:pfPtSetImgReg|here...]]&lt;br /&gt;
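The point-set registration idea can be sketched with the basic ICP loop, restricted here to pure translation for brevity. All data and names are synthetic assumptions; the project's actual method adds affine terms and the particle filter over the parameters.

```python
# Toy sketch of iterative closest point (ICP) for point-set registration:
# alternate between nearest-neighbor correspondence and a least-squares
# translation update. Translation-only for brevity.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "prostate surface" point set and a translated copy of it.
fixed = rng.normal(size=(200, 3))
true_shift = np.array([0.5, -0.3, 0.2])
moving = fixed + true_shift

shift = np.zeros(3)   # current translation estimate
for _ in range(20):
    moved = moving - shift
    # Correspondence step: for each moved point, find the nearest fixed point.
    d2 = ((moved[:, None, :] - fixed[None, :, :]) ** 2).sum(-1)
    nearest = fixed[d2.argmin(axis=1)]
    # Update step: for pure translation the least-squares fit is the mean residual.
    shift = shift + (moved - nearest).mean(axis=0)

print(np.round(shift, 2))
```

Treating the two images as sparse point sets, as described above, is what keeps each correspondence step cheap compared with dense volumetric matching.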
&lt;br /&gt;
===Engineering Component (Vikal, Hayes)===&lt;br /&gt;
&lt;br /&gt;
An end-to-end Slicer loadable module that interfaces with an MRI-compatible robotic device to perform MRI-guided prostate biopsy has been developed. Complete end-user tutorial documentation, together with a sample tutorial dataset (phantom), has been uploaded to the project wiki page to facilitate user training. This is one of the first modules that uses Slicer as an interventional tool, in contrast to its traditional usage as a post-processing tool.&lt;br /&gt;
&lt;br /&gt;
This has been a year of moving from ideas to implementation: the design that was hatched was realized during this year. Specifically, the following functionality was implemented.&lt;br /&gt;
&lt;br /&gt;
An intuitive workflow-based GUI that clearly identifies the four phases of the intervention (device calibration/registration, prostate segmentation, targeting, and verification) guides the user through the process. &lt;br /&gt;
&lt;br /&gt;
The robotic device's calibration/registration to scanner coordinates is achieved by segmenting fiducial markers in images. The registration parameters are used in the targeting step to calculate the targeting parameters and needle trajectory. The robotic device's optical encoders are also interfaced: these sensors continuously sense the device rotation and needle angle, the readings are sent over a USB interface, and our module reads them on a 500 msec timer event. &lt;br /&gt;
&lt;br /&gt;
The prostate segmentation algorithm developed by our algorithm core collaborators at Georgia Tech was integrated during the NA-MIC programming event at Utah this year. Details are provided in the previous subsection.&lt;br /&gt;
&lt;br /&gt;
The targeting step enables the user to pick anatomic locations of interest by simply clicking in any of the three slice views. 3D Slicer's arbitrary reformat plane widget is an attractive feature that enables the user/clinician to visualize and pick a target in any desired plane. When a target is picked, the targeting parameters (device rotation and needle angle) required for the device to hit the intended target are calculated and added to the list of targets. Multiple targets can be picked and associated with a particular type of needle (e.g. biopsy or seed). Once a particular target is selected from the list, it is brought into view in all three slice views and highlighted in the 3D view, and information about the target and the targeting parameters is displayed. Further, the robot's needle trajectory to the target is also visualized in 3D; this is crucial feedback for the clinician. &lt;br /&gt;
&lt;br /&gt;
The biopsy is performed, and a validation volume is acquired while the needle is still in the prostate; this validation volume is then used to perform validation analysis, to determine how accurately the device hit the target.&lt;br /&gt;
&lt;br /&gt;
We've sought and received timely help from the engineering core. A couple of times, functionality in the Slicer 'core' was implemented for our specific requirements. The current engineering efforts are focused on testing the module at various levels and on detecting and fixing bugs. We are in the process of designing a test protocol for functional and clinical evaluation of the software. Efforts are also under way to add more functionality: an additional dedicated MR-room display window that shows the chosen 2D image view for a particular target, the required robot targeting parameters, and the sensed robot parameters; loading of previously saved experiments for post-operative analysis; and visualization of the robot's anatomical coverage at the calibration step itself, which can be used to reposition the device if necessary.&lt;br /&gt;
&lt;br /&gt;
===Clinical Component (Fichtinger)===&lt;br /&gt;
&lt;br /&gt;
Since last year, the robotic hardware has undergone major re-engineering and design changes. We have completed detailed hardware safety tests and introduced the device into clinical use, treating the first batch of patients just recently. For the sake of clinical safety, we opted not to upgrade the interface software at the same time. In the meantime, all new image processing and visualization functions have been implemented in the 3D Slicer interface alone; we no longer retrofit the older existing software with major new features. The 3D Slicer-based target planning and device control interface will be introduced gradually during the project year.&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:JHU:Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Brain Lesion Analysis in Neuropsychiatric Systemic Lupus Erythematosus (Bockholt)==&lt;br /&gt;
===Overview (Bockholt)===&lt;br /&gt;
The primary goal of the MIND DBP is to examine changes in white matter lesions in adults with Neuropsychiatric Systemic Lupus Erythematosus (SLE). We want to be able to characterize lesion location, size, and intensity, and we would also like to examine longitudinal changes of lesions in an SLE cohort. To accomplish this goal, we will create an end-to-end application entirely within the NA-MIC Kit allowing individual analysis of white matter lesions. This workflow will then be applied to a clinical sample that is currently being collected.&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component ===&lt;br /&gt;
The basic steps necessary for the white matter lesion analysis application entail, first, registration of the T1, T2, and FLAIR images; second, tissue classification into gray matter, white matter, CSF, or lesion; third, clustering of lesions for anatomical localization; and finally, summarization of lesion size and image intensity parameters within each unique lesion. &lt;br /&gt;
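The final two steps of this pipeline (clustering lesions and summarizing them) can be sketched on a synthetic volume. The data, names, and use of connected-component labeling here are illustrative assumptions, not the application's actual implementation.

```python
# Toy sketch of lesion clustering and summarization: given a binary lesion
# mask from tissue classification, label connected components (one label
# per lesion) and report voxel count and mean intensity per lesion.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)

# Synthetic FLAIR-like volume and a lesion mask with two separate blobs.
flair = rng.normal(100.0, 10.0, size=(32, 32, 32))
mask = np.zeros(flair.shape, dtype=bool)
mask[5:8, 5:8, 5:8] = True        # lesion 1: 27 voxels
mask[20:24, 20:24, 20:22] = True  # lesion 2: 32 voxels
flair[mask] += 50.0               # lesions appear hyperintense

# Cluster the mask into connected components.
labels, n_lesions = ndimage.label(mask)

# Summarize each unique lesion: voxel count and mean intensity.
for lesion_id in range(1, n_lesions + 1):
    voxels = labels == lesion_id
    print(lesion_id, int(voxels.sum()), round(float(flair[voxels].mean()), 1))
```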
&lt;br /&gt;
&amp;lt;Note Progress in the last year&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Engineering Component (Pieper)===&lt;br /&gt;
&lt;br /&gt;
At the [http://www.na-mic.org/Wiki/index.php/2009_Winter_Project_Week January 2009 NA-MIC project week] a first pass of a lesion segmentation tutorial was provided to the community.  This was the first end-to-end workflow for this project and represented a significant step in the project.  Based on feedback from the community and the target clinical users of these tools, we identified additional steps to improve the system.&lt;br /&gt;
&lt;br /&gt;
The primary engineering effort has been directed to the following projects:&lt;br /&gt;
&lt;br /&gt;
'''Interface Improvements'''&lt;br /&gt;
&lt;br /&gt;
We have begun to look at the creation of a [[2009_Winter_Project_Week:HighLevelWizard| high level wizard]] as a front end to the processing task.  This interface would allow users to step through the workflow without directly navigating the Slicer modules, and could also provide state management to simplify the visualization efforts.&lt;br /&gt;
&lt;br /&gt;
'''Modularity and Deployment'''&lt;br /&gt;
&lt;br /&gt;
The current tutorial has been difficult for some users to implement due to the requirement that the lesion detection module be compiled locally on the user's machine.  Non-developers understandably find this to be a difficult requirement, so we are integrating the lesion segmentation code into the [http://www.slicer.org/slicerWiki/index.php/Slicer3:Loadable_Modules:Status Slicer3 loadable module project] so that pre-compiled versions of the module are available for users.  To implement this, we are following the templates provided by the [http://www.nitrc.org/projects/slicer3examples/ Slicer example modules] on the nitrc.org website.  This infrastructure was created via a supplement to NA-MIC provided by the NITRC project.  The [http://www.nitrc.org/projects/lupuslesion/ project page on nitrc.org] is being updated as new features are added to the modules.&lt;br /&gt;
&lt;br /&gt;
'''Core Implementation Support'''&lt;br /&gt;
&lt;br /&gt;
During this period we have also worked on [[2009_Winter_Project_Week:LesionSegmentationEfficiency|optimizing the implementation]] of the core ITK code.  This effort has primarily been accomplished by the MIND group, with interactions as needed with the rest of the NA-MIC community.&lt;br /&gt;
&lt;br /&gt;
In addition, ongoing discussions with the rest of the NA-MIC community are encouraging code sharing among projects through modularization of common processing tasks and development of 'best of breed' routines for lesion detection and quantification.  These tools are then embodied as slicer modules for use in other applications such as brain tumor change tracking.&lt;br /&gt;
&lt;br /&gt;
===Clinical Component (Bockholt)===&lt;br /&gt;
&amp;lt;Note Progress in the last year&amp;gt;&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:MIND:Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Cortical Thickness for Autism (Hazlett)== &lt;br /&gt;
===Overview (Hazlett)===&lt;br /&gt;
&lt;br /&gt;
A primary goal of the UNC DPB is to examine changes in cortical thickness in children with autism compared to typical controls. We want to examine group differences in both local and regional cortical thickness, and would also like to examine longitudinal changes in the cortex from ages 2-4 years. To accomplish this goal, this project will create an end-to-end application within Slicer3 allowing individual and group analysis of regional and local cortical thickness. Such a workflow will then be applied to our study data (already collected).&lt;br /&gt;
&lt;br /&gt;
We developed a specific project for our NA-MIC DBP focused on the goal of obtaining regional and local cortical thickness measurements on our pediatric dataset.   A secondary goal is to incorporate this measurement module into the NA-MIC toolkit application, Slicer3.  Lastly, the module would then be compared to other existing cortical thickness methods (e.g., FreeSurfer).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Algorithm and Engineering===&lt;br /&gt;
&lt;br /&gt;
The basic steps necessary for the cortical thickness application entail, first, tissue segmentation to separate white and gray matter regions; second, cortical thickness measurement; third, cortical correspondence to compare measurements across subjects; and finally, a statistical analysis to locally compute group differences. As part of this project, we will create end-to-end applications allowing individual and group analysis of regional and local cortical thickness. The regional and local cortical thickness analyses are based on separate pipelines, and work in these areas is described below.&lt;br /&gt;
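As a toy illustration of the thickness-measurement step only (a distance-transform sketch on an invented slab, not the method developed in this project):&lt;br /&gt;

```python
# Hedged toy example: for a synthetic gray-matter slab of known width,
# twice the peak distance to the slab boundary recovers its thickness.
import numpy as np
from scipy import ndimage

gray = np.zeros((40, 40, 40), dtype=bool)
gray[:, :, 10:16] = True  # slab 6 voxels thick along z

# Euclidean distance from each gray voxel to the nearest non-gray voxel.
dist = ndimage.distance_transform_edt(gray)
thickness_estimate = 2.0 * dist.max()
print(thickness_estimate)
```

Real cortical thickness tools work on curved gray/white boundaries rather than an axis-aligned slab, but the core idea of measuring distance between tissue boundaries is the same.&lt;br /&gt;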
&lt;br /&gt;
REGIONAL:  A Slicer3 high-level module performing individual regional cortical thickness analysis was completed this past year: ARCTIC (Automatic Regional Cortical ThICkness). The default steps are, first, probabilistic atlas-based automatic tissue segmentation; second, deformable registration of an atlas parcellation; and third, asymmetric cortical thickness measurement. Users can skip some of these steps if the related images, such as tissue segmentation label maps or parcellation maps, are provided. The application provides not only lobar cortical thickness measurements but also tissue segmentation volume information stored in spreadsheets. Moreover, quick quality control can be performed for each step within Slicer3 using a MRML scene displaying output volumes and surfaces.   ARCTIC's first release is publicly available on NITRC (http://www.nitrc.org/projects/arctic/).   Documentation, including two tutorials, has been created for the tool on the NA-MIC wiki pages; the tutorials won first prize at the NA-MIC 2009 annual meeting tutorial contest. The pediatric and adult brain atlases used by ARCTIC are also available on MIDAS (http://www.insight-journal.org/midas/collection/view/34). ARCTIC is still in development to improve its integration within Slicer3, but by the end of this project year we expect ARCTIC to be cross-platform, with Windows and Mac executables available on NITRC. Moreover, ARCTIC's source code will soon be available to the community via an SVN repository.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
LOCAL:  Regarding local cortical thickness analysis, improvement has been made at the pipeline level, as this mesh-based method requires more steps than the regional one.  Main components of this pipeline include (1) tissue segmentation, (2) atlas-based ROI segmentation, (3) white matter map creation and post-processing, (4) genus-zero white matter map image and surface creation, (5) cortical thickness computation, (6) white matter mesh inflation, (7) sulcal depth computation, and (8) cortical correspondence on inflated meshes using a particle system. While several modules already exist within Slicer3 to perform the first two steps, many applications have been developed and integrated within Slicer3 for the intermediate steps. The last step, the cortical correspondence module, is currently being tested.  We expect the whole mesh-based local cortical thickness analysis pipeline to be fully working by the end of the current project year.  Work will then focus on integrating this high-level module within Slicer3.&lt;br /&gt;
&lt;br /&gt;
===Clinical Component (Hazlett)===&lt;br /&gt;
Regarding clinical application, ARCTIC has been tested on a pediatric dataset, and we plan to compare it with state-of-the-art applications (e.g., FreeSurfer).   Results are available on ([[DBP2:UNC |our project DBP page]]).  A statistical study based on Pearson's correlation is currently in progress using 40+ cases from FreeSurfer's publicly available tutorial dataset.&lt;br /&gt;
&lt;br /&gt;
Once we have demonstrated adequate validity of the ARCTIC tool and have completed work on the local cortical thickness pipeline (described above), we plan to conduct group-based comparisons (autism vs. typical) examining regional and local cortical thickness differences in our pediatric sample.&lt;br /&gt;
&lt;br /&gt;
During the past year we have prepared a paper (in print) and presented our methods work.  See below.&lt;br /&gt;
&lt;br /&gt;
Papers and Presentations:&lt;br /&gt;
&lt;br /&gt;
Oguz I., Niethammer M., Cates J., Whitaker R., Fletcher T., Vachet C., and Styner M., Cortical Correspondence with Probabilistic Fiber Connectivity, Information Processing in Medical Imaging, IPMI 2009, LNCS, in print.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
“Use of the Slicer3 Toolkit to Produce Regional Cortical Thickness Measurement of Pediatric MRI Data.”  H.C. Hazlett, C. Vachet, C. Mathieu, M. Styner, J. Piven presented at the 8th Annual International Meeting for Autism Research (IMFAR) Chicago, IL 2009.&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:UNC:Cortical_Thickness_Roadmap here on the NA-MIC wiki].&lt;br /&gt;
&lt;br /&gt;
=Four Infrastructure Topics=&lt;br /&gt;
==Diffusion Image Analysis (Gerig)==&lt;br /&gt;
===Progress===&lt;br /&gt;
&lt;br /&gt;
The 2008/2009 year showed significant progress towards refining the DWI tools and applying existing implementations to clinical studies. This progress is best documented by a total of 17 publications since last year's report, with 9 publications in high-impact journals (Neuroimage (3), IEEE TMI (2), MEDIA (1), Schizophr Res (4)), 4 publications in peer-reviewed conference proceedings (MICCAI (3), ISBI (1)), and 3 others in scientific workshops (MMBIA, MICCAI). These publications are excellent indicators that NA-MIC tools and methodologies are competitive and get published in medical image analysis journals, and that their application to clinical studies, including validation and testing, also gets published in clinically oriented journals. &lt;br /&gt;
Methods can be characterized as either individual-subject processing of DWI, to extract fiber bundles of interest in particular patients, or large-scale population-based analysis for group comparison and hypothesis testing. Significant progress in both can be reported for this period. Core 1 partners contributed further developments of methods for image preprocessing such as filtering and artifact removal, improved tractography algorithms, methods for clustering of streamlines into meaningful tracts, group-wise analysis via computational anatomy tools, and methods for quantitative analysis of tracts to provide parameters for statistical analysis. Core 2 contributed significantly by providing the computational environment for user-guided, interactive DTI analysis, which requires a complex user interface and sophisticated visualization, and by developing plug-in capabilities for more automated processing modules. Core 3 made increasing use of these tools for data from clinical studies, with close handshaking between engineers of the Core 3 partners, methods developers of Core 1, and engineers of Core 2. Core 5 organized several training courses covering DTI analysis, where participants could learn about the underlying imaging and image analysis concepts and the use of the Slicer software environment. &lt;br /&gt;
&lt;br /&gt;
The following list summarizes the major new contributions to Diffusion Image Analysis during the reporting period. &lt;br /&gt;
&lt;br /&gt;
* Fiber Tract Modeling, Clustering and Quantitative Analysis (MIT): Development on population-based analysis of DTI via clustering of fiber tracts for automatic labeling has been continued and resulted in a recent journal publication. As a new research direction, the group approaches the challenging problem of joint registration and segmentation of DWI fiber tractography, where tract labels are assigned in an iterative framework using registration of bundles to an atlas. This results in nonlinear joint registration of sets of DWI data into a common coordinate space, and at the same time automatic labeling of joint tracts. Quantitative analysis in population studies is based upon correspondence obtained via the clustering and labeling. The group applied this technique to various clinical datasets and reported results at 3 conferences.&lt;br /&gt;
&lt;br /&gt;
* Stochastic Tractography (MIT, DBP 2): Stochastic tractography was a major research effort of this group during the reporting period. Initial prototype software was integrated into Slicer 3, which brought significant challenges w.r.t. user interaction, visualization, and definition of data structures for subsequent statistical analysis. Advantages of stochastic tractography are clearly shown in areas of crossing fibers, uncertainty, and considerable noise – all situations where conventional deterministic tractography methods would fail. This project is a joint collaboration between Core 1, Core 2, and Core 3, nicely demonstrating the close interaction between method development, engineering, and testing and validation in a clinical environment. &lt;br /&gt;
&lt;br /&gt;
* Geodesic Tractography Segmentation (Georgia Tech): As an alternative to streamline tractography, this project develops a technique for extracting a minimum-cost curve through the tensor field, resulting in an anchor curve between source and target regions specified by the user (journal publication). As an extension, volumetric fiber segmentation based on active contours, using the anchor curves as initialization, has been developed. This led to a framework for tubular surface segmentation that was presented at two conference workshops and also resulted in a journal publication.&lt;br /&gt;
&lt;br /&gt;
* DTI processing and statistical tools (Utah 1): This research addresses the important problem of correcting artifacts in DWI. Image distortions due to eddy currents in gradient directions and to susceptibility artifacts of the EPI acquisition are corrected via a combined scheme that aligns individual gradient images and calculates a nonlinear transformation between DWIs and a geometrically correct T2-weighted image. The whole pipeline is written in ITK and has been tested on a large number of datasets. The methodology is in print and will be presented at a peer-reviewed conference (IPMI 2009). This group also continued development of the volumetric white matter connectivity tool, i.e., a method dual to tractography that optimizes a shortest path through the tensor field.&lt;br /&gt;
&lt;br /&gt;
* Population-based analysis of white matter tracts (Utah 2): This is a whole analysis system that starts with DWI from a large set of subjects and results in a statistical analysis of selected fiber tracts [http://www.na-mic.org/Wiki/index.php/Projects:DTIPopulationAnalysis]. Steps include calculation of image features, linear and nonlinear registration into a common, unbiased coordinate system, user-guided selection of tracts of interest in atlas space, mapping tract geometry back into individual images to collect subject-specific diffusion information, and finally statistical group analysis of along-tract diffusion information. New in this period are the use of a Core 1 methodology for group-wise registration of a population of images (collaboration with the MIT partner [http://www.na-mic.org/Wiki/index.php/Projects:GroupwiseRegistration]) and the development of a statistical framework for tract analysis based on functional data analysis (FDA). The new methods are described in a conference and a journal publication. The whole system was applied to large studies of our Core 3 partner (PNL Harvard) and to pediatric studies of our affiliated clinical partners at UNC. &lt;br /&gt;
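The along-tract group-analysis step described above might be sketched as follows. This is a hedged toy stand-in (pointwise t-test on invented FA values rather than the full FDA framework):&lt;br /&gt;

```python
# Toy stand-in for along-tract group analysis: FA sampled at corresponding
# arc-length positions is compared pointwise between two groups.
# All data shapes and values here are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_pos = 50                                   # samples along the tract
controls = rng.normal(0.5, 0.03, size=(20, n_pos))
patients = rng.normal(0.5, 0.03, size=(20, n_pos))
patients[:, 20:30] -= 0.1                    # simulated FA drop mid-tract

# Pointwise two-sample t-test at each position along the tract.
t, p = stats.ttest_ind(controls, patients, axis=0)
print(np.where(p < 1e-3)[0])                 # positions flagged as different
```

The actual framework replaces the independent pointwise tests with functional data analysis, which models the whole along-tract profile jointly.&lt;br /&gt;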
&lt;br /&gt;
To summarize, these activities cover the whole processing pipeline: data input via the NRRD format, preprocessing and correction for artifacts and distortions, several choices of tractography tailored to different needs, and output of results for statistical analysis. The most recent progress of DTI tool development, from the point of view of the DBP 2 partner (Harvard), is summarized at [http://www.na-mic.org/Wiki/index.php/DBP2:Harvard], with links to all currently active projects.&lt;br /&gt;
&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
&lt;br /&gt;
* BWH: Marek Kubicki, Martha Shenton, Sylvain Bouix, Julien von Siebenthal, Thomas Whitford, Jennifer Fitzsimmons, Doug Terry, Jorge Alverado, Eric Melonakos, Alexandra Golby, Monica Lemmond, Carl-Fredrik Westin.&lt;br /&gt;
* MIT: Lauren O'Donnell, Polina Golland, Tri Ngo&lt;br /&gt;
* Utah I: Tom Fletcher, Ross Whitaker, Ran Tao, Yongsheng Pan&lt;br /&gt;
* Utah II: Casey Goodlett, Sylvain Gouttard, Guido Gerig&lt;br /&gt;
* GA Tech: John Melonakos, Vandana Mohan, Shawn Lankton, Allen Tannenbaum&lt;br /&gt;
* GE: Xiaodong Tao, Jim Miller, Mahnaz Maddah&lt;br /&gt;
* Isomics: Steve Pieper&lt;br /&gt;
* Kitware: Luis Ibanez, Brad Davis&lt;br /&gt;
* UNC: Zhexing Liu, Martin Styner&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic summarizing internal collaborations is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:DiffusionImageAnalysis here on the NA-MIC wiki]. Details on methods and algorithms for DWI analysis can be found in the algorithm sections of the respective Core-1 partners [http://www.na-mic.org/Wiki/index.php/Algorithm:Main], the DBP-2 project descriptions [http://www.na-mic.org/Wiki/index.php/DBP2:Main], and the external and internal NAMIC collaboration pages [http://www.na-mic.org/Wiki/index.php/NA-MIC_Collaborations:New].&lt;br /&gt;
&lt;br /&gt;
==Structural Analysis (Tannenbaum)==&lt;br /&gt;
===Progress===&lt;br /&gt;
Under Structural Analysis, the main topics of research for NA-MIC are structural segmentation, registration techniques, and shape analysis. These topics are correlated, and hence research in one often finds application in another. For example, shape analysis can yield useful priors for segmentation, or segmentation and registration can provide structural correspondences for use in shape analysis, and so on. &lt;br /&gt;
&lt;br /&gt;
An overview of selected progress highlights under these broad topics follows:&lt;br /&gt;
&lt;br /&gt;
Segmentation&lt;br /&gt;
&lt;br /&gt;
* Geodesic Tractography Segmentation: We proposed an image segmentation technique based on augmenting the conformal (or geodesic) active contour framework with directional information.  This has been applied successfully to the segmentation of neural fiber bundles such as the Cingulum Bundle. This framework has now been integrated into Slicer and is being tested on a population of brain data sets.&lt;br /&gt;
&lt;br /&gt;
* Tubular Surface Segmentation: We have proposed a new model for tubular surfaces that transforms the problem of detecting a surface in 3D space into detecting a curve in 4D space. Besides allowing us to impose a &amp;quot;soft&amp;quot; tubular shape prior, this also leads to computational efficiency over conventional surface segmentation approaches. We have also developed the moving-end-points implementation of this framework, wherein the required input is only a few points in the interior of the structure of interest. This yields the additional advantage that the framework simultaneously returns both the 3D segmentation and the 3D skeleton of the structure, eliminating the need for a priori knowledge of end points and an expensive skeletonization step. The framework is applicable to different tubular anatomical structures in the body. We have so far applied it successfully to the Cingulum Bundle and blood vessels. &lt;br /&gt;
&lt;br /&gt;
* Local-global Segmentation: We have proposed a novel segmentation approach that combines the advantages of local and global approaches to segmentation by using statistics over regions that are local to each point on the evolving contour. This makes it well suited to applications with contrast differences within the structure of interest, such as blood vessel segmentation, as well as applications like neural fiber bundles, where the diffusion profiles of voxels within the structure are locally similar but vary along the length of the fiber bundle itself.&lt;br /&gt;
&lt;br /&gt;
* Shape-based segmentation: Standard image-based segmentation approaches perform poorly when there is little or no contrast along boundaries of different regions. In such cases, segmentation is mostly performed manually, using prior knowledge of the shape and relative location of the underlying structures combined with partially discernible boundaries. We have presented an automated approach guided by covariant shape deformations of neighboring structures, which is an additional source of prior knowledge. Captured by a shape atlas, these deformations are transformed into a statistical model using the logistic function. The mapping between atlas and image space, structure boundaries, anatomical labels, and image inhomogeneities are estimated simultaneously within an Expectation-Maximization formulation of the maximum a posteriori (MAP) estimation problem. These results are then fed into an Active Mean Field approach, which views them as priors to a Mean Field approximation with a curve length prior. We have applied the algorithm successfully to real MRI images, and we have also implemented it in 3D Slicer.&lt;br /&gt;
&lt;br /&gt;
* Re-Orientation Approach for Segmentation of DW-MRI: This work proposes a methodology to segment tubular fiber bundles from diffusion weighted magnetic resonance images (DW-MRI). Segmentation is simplified by locally reorienting diffusion information based on large-scale fiber bundle geometry. Segmentation is achieved through simple global statistical modeling of diffusion orientation which allows for a convex optimization formulation of the segmentation problem, combining orientation statistics and spatial regularization. The approach compares very favorably with segmentation by full-brain streamline tractography. &lt;br /&gt;
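The local region statistics underlying the local-global segmentation approach above can be illustrated with a small sketch. This is a hedged toy example (invented image, arbitrary window size), not the project's level-set implementation; it only shows why per-point local means separate an object whose contrast drifts along its length:&lt;br /&gt;

```python
# Per-pixel local interior/exterior means computed in a sliding window,
# so the statistics can vary along the structure of interest.
import numpy as np
from scipy import ndimage

# Synthetic image: a bright bar whose intensity drifts along its length,
# so a single global mean would describe it poorly.
img = np.zeros((64, 64))
img[28:36, :] = np.linspace(1.0, 3.0, 64)

inside = np.zeros_like(img, dtype=bool)
inside[28:36, :] = True

def local_mean(image, mask, size=15):
    """Mean of `image` over `mask` pixels within a sliding window."""
    num = ndimage.uniform_filter(image * mask, size=size)
    den = ndimage.uniform_filter(mask.astype(float), size=size)
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

mean_in = local_mean(img, inside)
mean_out = local_mean(img, ~inside)
# Interior/exterior separation holds locally at both ends of the bar.
print(mean_in[31, 5] - mean_out[31, 5], mean_in[31, 60] - mean_out[31, 60])
```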
&lt;br /&gt;
&lt;br /&gt;
Registration&lt;br /&gt;
&lt;br /&gt;
* Optimal Mass Transport based Registration: We have provided a computationally efficient non-rigid/elastic image registration algorithm based on Optimal Mass Transport theory. We use the Monge-Kantorovich formulation of the Optimal Mass Transport problem and implement the solution proposed by Haker et al., using multi-resolution and multigrid techniques to speed up convergence. We also leverage the computational power of general-purpose graphics processing units available on standard desktop computing machines to exploit the inherent parallelism in our algorithm. We extend the work of Haker et al., who compute the optimal warp from a first-order partial differential equation, an improvement over earlier higher-order methods and those based on linear programming, and further implement the algorithm using a coarse-to-fine strategy, resulting in phenomenal improvement in convergence. We have applied it successfully to the registration of 3D brain MRI datasets (preoperative and intra-operative), and are currently extending it to the non-rigid registration of baseline DWI to brain MRI data.&lt;br /&gt;
&lt;br /&gt;
* Atlas Regularization for Image Segmentation: Atlas-based approaches have demonstrated the ability to automatically identify detailed brain structures from 3-D magnetic resonance (MR) brain images. Unfortunately, the accuracy of this type of method often degrades when processing data acquired on a different scanner platform or pulse sequence than the data used for the atlas training. In this work, we improve the performance of an atlas-based whole brain segmentation method by introducing an intensity renormalization procedure that automatically adjusts the prior atlas intensity model to new input data. Validation using manually labeled test datasets has shown that the new procedure improves the segmentation accuracy (as measured by the Dice coefficient) by 10% or more for several structures, including hippocampus, amygdala, caudate, and pallidum. The results verify that this new procedure reduces the sensitivity of the whole brain segmentation method to changes in scanner platforms and improves its accuracy and robustness, which can thus facilitate multicenter or multisite neuroanatomical imaging studies. &lt;br /&gt;
&lt;br /&gt;
* Point-set Rigid Registration: We have proposed a particle filtering scheme for the registration of 2D and 3D point sets undergoing a rigid body transformation. Moreover, we incorporate stochastic dynamics to model the uncertainty of the registration process. We treat motion as a local variation in the pose parameters obtained from running a few iterations of the standard Iterative Closest Point (ICP) algorithm. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence as well as to provide a dynamical model of uncertainty. In contrast with other techniques, our approach requires no annealing schedule, which reduces computational complexity and maintains the temporal coherency of the state (no loss of information). Also, unlike most alternative approaches to point set registration, we make no geometric assumptions about the two data sets. We applied the algorithm to different alignments of point clouds, and it successfully found the correct optimal transformation aligning two given point clouds despite differing geometry around the local neighborhoods of points within their respective sets. &lt;br /&gt;
&lt;br /&gt;
* Regularization for Optimal Mass Transport: To extend the flexibility of the existing OMT algorithm, we added a regularization term to the functional being minimized. This term controls the tradeoff between how well two images match after registration versus how warped the transformation map can become. A weighted sum of squared differences is used to penalize having to move mass over long distances; this addition also helps to keep the transformation physically accurate by reducing the likelihood that the transformation grid will fold over itself and keeping the grid smooth.&lt;br /&gt;
&lt;br /&gt;
* Registration of DW-MRI to structural MRI: Optimal Mass Transport was applied to the problem of correcting EPI distortion in DW-MRI. A mask for white matter in DW-MRI was registered to the white matter mask extracted from the structural MRI of the same patient. Prior to registration, it is important to normalize intensities in the two masks; this was done by dividing the images into regions and uniformly normalizing over each region to ensure the sums of the intensities are equal. Then, once a transformation between the white matter masks was calculated, this transformation was applied to the original DW-MRI image. &lt;br /&gt;
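The rigid-alignment core that each ICP iteration in the point-set registration work above relies on can be sketched as a closed-form Procrustes (Kabsch) solve. This is a hedged illustration assuming known correspondences; it is not the particle-filtering method itself:&lt;br /&gt;

```python
# Least-squares rigid alignment of corresponding point sets (Kabsch).
import numpy as np

def rigid_align(src, dst):
    """Rotation R and translation t minimizing sum ||R @ src_i + t - dst_i||^2."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Invented test data: a point cloud moved by a known rotation and translation.
rng = np.random.default_rng(2)
points = rng.normal(size=(30, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
moved = points @ R_true.T + np.array([1.0, -2.0, 0.5])

R, t = rigid_align(points, moved)
print(np.allclose(R, R_true), np.allclose(t, [1.0, -2.0, 0.5]))
```

In ICP this solve alternates with nearest-neighbor correspondence search; the particle-filter scheme above additionally models pose uncertainty around it.&lt;br /&gt;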
&lt;br /&gt;
Shape Analysis&lt;br /&gt;
&lt;br /&gt;
* Shape Analysis Framework using SPHARM-PDM: We have provided an analysis framework for objects with spherical topology, described by sampled spherical harmonics (SPHARM-PDM). The input is a set of binary segmentations of a single brain structure, such as the hippocampus or caudate. These segmentations are first processed to fill any interior holes. The processed binary segmentations are converted to surface meshes, and a spherical parametrization is computed for the surface meshes using an area-preserving, distortion-minimizing spherical mapping. The SPHARM description is computed from the mesh and its spherical parametrization. Using the first-order ellipsoid from the spherical harmonic coefficients, the spherical parametrizations are aligned to establish correspondence across all surfaces. The SPHARM description is then sampled into triangulated surfaces (SPHARM-PDM) via icosahedron subdivision of the spherical parametrization. These SPHARM-PDM surfaces are all spatially aligned using rigid Procrustes alignment. Group differences between groups of surfaces are computed for simple group-wise comparison using the standard robust Hotelling T^2 two-sample metric. The tool further provides a new statistical method that allows testing of group differences while controlling for subject covariates, via permutation testing of GLM-based MANCOVA metrics. Statistical p-values, both raw and corrected for multiple comparisons, result in significance maps. We provide additional visualization of the group tests via mean difference magnitude and vector maps, maps of the group covariance information, local correlation, and z-scores. We have a stable implementation, and current development focuses on integrating the command line tools into Slicer via the Slicer execution model and XNAT integration. A first Slicer module prototype has been developed without XNAT integration.&lt;br /&gt;
&lt;br /&gt;
* Population studies using Tubular Surface Model: We have proposed a tubular shape model for the Cingulum Bundle that models a tubular surface as a center-line coupled with a radius function at every point along the center-line. This model shows potential for population studies of the Cingulum Bundle, which is believed to be involved in schizophrenia, since it provides a natural way of sampling the structure to build a feature representation. We are currently segmenting the Cingulum Bundle from a population of brain data sets, towards performing this population analysis using the Potts model.&lt;br /&gt;
&lt;br /&gt;
* Automatic Outlining of Sulci on a Brain Surface: We present a method to automatically extract certain key features on a surface. We apply this technique to outline sulci on the cortical surface of a brain, where the data is taken to be a 3D triangulated mesh formed from the segmentation of MR image slices. The problem is posed as an energy minimization penalizing the arc-length of the segmenting curve with a conformal factor involving the mean curvature of the underlying surface. The computation is made practical for dense meshes via the use of a sparse-field method to track the level set interfaces and regularized least-squares estimation of geometric quantities.&lt;br /&gt;
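The pointwise Hotelling T^2 test used in the SPHARM-PDM framework above can be illustrated on synthetic data. This is a hedged sketch of the standard two-sample statistic at a single corresponding surface point, not the NA-MIC implementation:&lt;br /&gt;

```python
# Two-sample Hotelling T^2 at one 3-D surface point (synthetic groups).
import numpy as np

def hotelling_t2(a, b):
    """Two-sample Hotelling T^2 for samples a (n_a x d) and b (n_b x d)."""
    n_a, n_b = len(a), len(b)
    diff = a.mean(axis=0) - b.mean(axis=0)
    # Pooled covariance of the two groups.
    S = ((n_a - 1) * np.cov(a, rowvar=False) +
         (n_b - 1) * np.cov(b, rowvar=False)) / (n_a + n_b - 2)
    return (n_a * n_b) / (n_a + n_b) * diff @ np.linalg.inv(S) @ diff

# Invented point locations: group 2 is shifted 2 units along x.
rng = np.random.default_rng(3)
group1 = rng.normal(0.0, 1.0, size=(25, 3))
group2 = rng.normal(0.0, 1.0, size=(25, 3)) + np.array([2.0, 0.0, 0.0])

t2 = hotelling_t2(group1, group2)
print(t2)
```

A surface-wide analysis evaluates this statistic at every corresponding point and assesses significance by permutation, as described above.&lt;br /&gt;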
&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
&lt;br /&gt;
Needs to be updated:&lt;br /&gt;
&lt;br /&gt;
* MIT: Polina Golland, Kilian Pohl, Sandy Wells, Eric Grimson, Mert R. Sabuncu&lt;br /&gt;
* UNC: Martin Styner, Ipek Oguz, Nicolas Augier, Marc Niethammer, Beatriz Paniagua&lt;br /&gt;
* Utah: Ross Whitaker, Guido Gerig, Suyash Awate, Tolga Tasdizen, Tom Fletcher, Joshua Cates, Miriah Meyer &lt;br /&gt;
* GaTech: Allen Tannenbaum, John Melonakos, Vandana Mohan, Tauseef ur Rehman, Shawn Lankton, Samuel Dambreville, Yi Gao, Romeil Sandhu, Xavier Le Faucheur, James Malcolm, Ivan Kolosev&lt;br /&gt;
* Isomics: Steve Pieper &lt;br /&gt;
* GE: Bill Lorensen, Jim Miller &lt;br /&gt;
* Kitware: Luis Ibanez, Karthik Krishnan&lt;br /&gt;
* UCLA: Arthur Toga, Michael J. Pan, Jagadeeswaran Rajendiran &lt;br /&gt;
* BWH: Sylvain Bouix, Motoaki Nakamura, Min-Seong Koo, Martha Shenton, Marc Niethammer, Jim Levitt, Yogesh Rathi, Marek Kubicki, Steven Haker&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:StructuralImageAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==fMRI Analysis (Golland)==&lt;br /&gt;
===Progress===&lt;br /&gt;
&lt;br /&gt;
* Connectivity Analysis: &lt;br /&gt;
&lt;br /&gt;
One of the major goals in analysis of fMRI data is the detection of functionally homogeneous networks in the brain. We developed a new method for characterizing functional connectivity patterns from fMRI. In contrast to the seed-based analysis typically employed to identify networks of co-activation, we propose to use clustering to simultaneously estimate the networks and their representative time courses, which effectively replace user-specified seeds. &lt;br /&gt;
During this year, we validated this method for characterizing functional connectivity patterns from fMRI. To investigate the sensitivity of the analysis to the generative model of the signal, we implemented and compared two distinct algorithms, the mixture-model clustering and the spectral clustering, in application to this problem. We validated our approach in a rest state fMRI scans of 45 healthy subjects. Our results demonstrate that the detected networks are stable across subjects and across methods.&lt;br /&gt;
At the same time, we worked with the Harvard DBP to identify relevant clinical data sets in which our approach promises to identify effects of a disorder. We have started a collaboration to apply the method to a group of schizophrenia patients and normal controls.&lt;br /&gt;
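&lt;br /&gt;
As an illustration of the clustering idea described above, the following sketch partitions synthetic voxel time courses into two networks using a minimal spectral method (the sign of the Fiedler vector of a correlation-graph Laplacian). This is a toy stand-in only: the actual analysis applies full mixture-model and k-way spectral clustering to real fMRI data, and all names and data below are illustrative.&lt;br /&gt;

```python
import numpy as np

def spectral_networks(timecourses):
    """Split voxel time courses into two networks via the sign of the
    Fiedler vector of a correlation-graph Laplacian.  A toy stand-in for
    the mixture-model / k-way spectral clustering used in the analysis."""
    corr = np.corrcoef(timecourses)
    affinity = (corr + 1.0) / 2.0        # shift correlations into [0, 1]
    np.fill_diagonal(affinity, 0.0)
    laplacian = np.diag(affinity.sum(axis=1)) - affinity   # L = D - W
    # eigh returns eigenvalues in ascending order; the eigenvector of the
    # second-smallest eigenvalue (Fiedler vector) gives the best 2-way cut.
    _, vecs = np.linalg.eigh(laplacian)
    return (vecs[:, 1] > 0).astype(int)

# Synthetic "voxels": two groups driven by two distinct latent signals.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
group_a = np.sin(2 * np.pi * 0.3 * t)
group_b = np.sin(2 * np.pi * 0.7 * t + 1.0)
data = np.vstack(
    [group_a + 0.2 * rng.standard_normal(t.size) for _ in range(5)]
    + [group_b + 0.2 * rng.standard_normal(t.size) for _ in range(5)]
)
labels = spectral_networks(data)
```

The recovered labels group the voxels by their driving signal, which effectively replaces a user-specified seed for each network.&lt;br /&gt;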
&lt;br /&gt;
* Distortion Correction for EPI-Based Functional Imaging&lt;br /&gt;
&lt;br /&gt;
We developed and demonstrated a method for correcting the distortions present in echo planar images (EPI) and registering the EPI image to structural MRI scans. Our approach does not require acquiring fieldmaps, modifying EPI acquisition parameters, or having detailed knowledge of the shim system. The technique consists of two steps. First, a classifier is used to segment structural MR into an air/tissue susceptibility model. The resulting tissue map serves as input to a first order perturbation field model to compute a subject-specific fieldmap. The classifier is trained based on MR-CT image pairs, using MR intensities as features and exploiting air segmentation in the CT images to construct labels. Second, a simultaneous shim estimation and registration algorithm is employed to solve for the lower order field perturbations (shim parameters) needed to accurately unwarp and register the EPI data.&lt;br /&gt;
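&lt;br /&gt;
The unwarping step can be illustrated in one dimension: given a per-voxel displacement along the phase-encode axis (assumed known here, rather than predicted from the CT-trained classifier and perturbation field model described above), correcting the distortion amounts to resampling the signal at the displaced coordinates. A minimal sketch with toy data:&lt;br /&gt;

```python
import numpy as np

def unwarp_line(distorted, shift_vox):
    """Resample a 1-D signal to undo a known per-voxel displacement along
    the phase-encode axis: the corrected value at x is read from the
    distorted signal at x + shift(x)."""
    x = np.arange(len(distorted), dtype=float)
    return np.interp(x + shift_vox, x, distorted)

# Toy example: a Gaussian "anatomy" profile displaced by a uniform
# 3-voxel fieldmap shift, then recovered by resampling.
x = np.arange(64, dtype=float)
true = np.exp(-0.5 * ((x - 32.0) / 4.0) ** 2)
shift = np.full(64, 3.0)                   # assumed-known fieldmap shift
distorted = np.interp(x - shift, x, true)  # forward distortion model
recovered = unwarp_line(distorted, shift)
```

In the full method the shift field varies spatially and the resampling is combined with registration, but the resampling principle is the same.&lt;br /&gt;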
&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* MIT: Polina Golland, Danial Lashkari, Archana Venkataraman, Clare Poynton&lt;br /&gt;
* Harvard/BWH: Sylvain Bouix, Marek Kubicki, Carl Frederick Westin, Sandy Wells&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:fMRIAnalysis here on the NA-MIC wiki].&lt;br /&gt;
&lt;br /&gt;
==NA-MIC Kit Theme (Schroeder)==&lt;br /&gt;
===Summary of Progress===&lt;br /&gt;
The NAMIC-Kit consists of a framework of advanced computational resources, including libraries, toolkits, and applications, as well as the support infrastructure for testing, documenting, and deploying leading-edge medical imaging algorithms and software tools. The framework has been carefully constructed to provide low-level access to libraries and modules for advanced users, plus high-level application access that non-computer professionals can use to address a variety of problems in biomedical computing.&lt;br /&gt;
&lt;br /&gt;
In this fifth year of the NA-MIC project the focus has been on integration. Much of the foundational infrastructure has been established; however, to effectively transition advanced biomedical technology and improve the usability of the software, the various subsystems that compose the NAMIC-Kit have been extended to accommodate advanced algorithmic development and optimize workflow. The activities in this year's efforts can be broadly categorized as follows:&lt;br /&gt;
&lt;br /&gt;
* Slicer3 and the Software Framework:&lt;br /&gt;
* Data integration:&lt;br /&gt;
* Software process:&lt;br /&gt;
* Software releases:&lt;br /&gt;
&lt;br /&gt;
===Slicer3 and the Software Framework===&lt;br /&gt;
One of the major achievements of the past year has been the release of [http://www.slicer.org/slicerWiki/index.php/Documentation-3.4 version 3.4 of 3D Slicer] in May of 2009.  A number of important improvements have been made by the Engineering Core and significant new functionality has been added through other NA-MIC cores and collaborators since the release of version 3.2 in August of 2008.  A few notable examples include:&lt;br /&gt;
&lt;br /&gt;
* [http://www.slicer.org/slicerWiki/index.php/Modules:Saving-Documentation-3.4 An Integrated Data Save Dialog]&lt;br /&gt;
* [http://www.slicer.org/slicerWiki/index.php/Modules:Fiducials-Documentation-3.4 Significant Rework of the Fiducials Interface]&lt;br /&gt;
* [http://www.slicer.org/slicerWiki/index.php/Modules:Slices-Documentation-3.4 A Slices Module to Support Advanced Visualization Modes]&lt;br /&gt;
* [http://www.slicer.org/slicerWiki/index.php/Modules:Editor-Documentation Significant Improvements to the Interactive Label Map Editor]&lt;br /&gt;
* [http://www.slicer.org/slicerWiki/index.php/Modules:ChangeTracker-Documentation-3.4 Integration of a Brain Tumor Change Tracking Module in collaboration with the Brain Science Foundation]&lt;br /&gt;
* [http://www.slicer.org/slicerWiki/index.php/Modules:IA_FEMesh-Documentation-3.4 Integration of a Finite Element Meshing Module as a deliverable of the NA-MIC Collaboration Grant at the University of Iowa]&lt;br /&gt;
* [http://www.slicer.org/slicerWiki/index.php/Modules:FetchMI-Documentation-3.4 A Medical Informatics Interface to XNAT in collaboration with BIRN]&lt;br /&gt;
* [http://www.slicer.org/slicerWiki/index.php/Slicer3:Python The Ability to Interactively Script Slicer in Python as well as Tcl]&lt;br /&gt;
&lt;br /&gt;
In addition, there have been major extensions to the diffusion imaging tools, registration tools, filters, image guided therapy, and other core changes that enhance the utility and applicability of the software.&lt;br /&gt;
&lt;br /&gt;
===Data Integration===&lt;br /&gt;
&lt;br /&gt;
===Software Process===&lt;br /&gt;
One of the challenges facing developers has been the requirement to implement, test, and deploy software systems across multiple computing platforms. NAMIC continues to push the state of the art with further development of the CMake, CTest, and CPack tools for cross-platform development, testing, and packaging, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Software Releases===&lt;br /&gt;
The NAMIC-Kit can be represented as a pyramid of capabilities, with the base consisting of toolkits and libraries and the apex representing the Slicer3 user application. In between, Slicer modules are stand-alone executables that can be integrated directly into the Slicer3 application, including GUI integration, while workflows are groups of modules combined to implement sophisticated segmentation, registration, and biomedical computing algorithms. In a coordinated NAMIC effort, major releases of these components were realized over the past year. These include, but are not limited to:&lt;br /&gt;
*&lt;br /&gt;
*&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
The NAMIC Engineering Core has to a great extent realized its goal of engaging a wider biomedical community. This community extends worldwide and has leveraged the efforts of many developers beyond the direct influence of NAMIC, resulting in significant advances at relatively low cost. The senior members of the Core 2 team are the following personnel.&lt;br /&gt;
&lt;br /&gt;
* Kitware - Will Schroeder (Core 2 PI), Sebastien Barre, Luis Ibanez, Bill Hoffman&lt;br /&gt;
* GE - Jim Miller, Xiaodong Tao&lt;br /&gt;
* Isomics - Steve Pieper, Alex Yarmarkovich, Curt Lisle, Terry Lorber&lt;br /&gt;
* WUSTL - Dan Marcus&lt;br /&gt;
* UCSD - Jeffrey Grethe&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC-Kit here on the NA-MIC wiki].&lt;br /&gt;
&lt;br /&gt;
=Highlights (Schroeder)=&lt;br /&gt;
===Advanced Algorithms===&lt;br /&gt;
&lt;br /&gt;
===NAMIC-Kit===&lt;br /&gt;
&lt;br /&gt;
===Outreach and Technology Transfer===&lt;br /&gt;
Cores 4-5-6 continue to support, train, and disseminate to the NAMIC community and the broader biomedical computing community.&lt;br /&gt;
* The Slicer community held several workshops and tutorials. In xxx a satellite event was held for the international Organization for Human Brain Mapping at the annual meeting in xxx. The xx workshop on xx hosted xx participants representing xx countries from around the world, xx states within the US, and xx different laboratories including xx NIH institutes. In addition, &amp;lt;note how many slicer tutorials were held and where etc&amp;gt;&lt;br /&gt;
* Project Week continues to be a successful NAMIC venue. These semi-annual events are held in Boston in June and in Salt Lake City in January. These events are well attended, with approximately 100 participants, of which about a third are outside collaborators. At the last Project Week in Salt Lake City, approximately xx projects were realized.&lt;br /&gt;
* NAMIC continues to participate in conferences and other technical venues. For example, NAMIC hosted xxx&lt;br /&gt;
&lt;br /&gt;
===Tutorial for Autism DBP: Cortical Thickness Measurement=== &lt;br /&gt;
As part of the 2009 NA-MIC All-Hands-Meeting, a &amp;quot;Tutorial Contest&amp;quot; was held in which a panel of judges from across the Cores reviewed submitted entries on the basis of &amp;lt;add criteria here&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The winning entry was for cortical thickness analysis tools for the UNC Autism DBP. It shows the user how to perform analysis of regional cortical thickness.  Two tutorials were included in this entry: 1) The ARCTIC tutorial for automatic analysis, in which the user learns how to load input volumes, run the end-to-end module ARCTIC to generate cortical thickness information, and display output volumes, and 2) The Slicer3 tutorial for step-by-step analysis, in which the user learns how to run the UNC external modules individually within Slicer3 in order to perform a regional cortical thickness analysis.&lt;br /&gt;
&lt;br /&gt;
=Impact and Value to Biocomputing (Miller)=&lt;br /&gt;
NA-MIC impacts Biocomputing through a variety of mechanisms.  First,&lt;br /&gt;
NA-MIC produces scientific results, methodologies, workflows,&lt;br /&gt;
algorithms, imaging platforms, and software engineering tools and&lt;br /&gt;
paradigms in an open environment that contributes directly to the body of&lt;br /&gt;
knowledge available to the field. Second, NA-MIC science and&lt;br /&gt;
technology enables the entire medical imaging community to build on&lt;br /&gt;
NA-MIC results, methods, and techniques, to concentrate on the new&lt;br /&gt;
science instead of developing supporting infrastructure, to leverage&lt;br /&gt;
NA-MIC scientists and engineers to adapt NA-MIC technology to new&lt;br /&gt;
problem domains, and to leverage NA-MIC infrastructure to distribute&lt;br /&gt;
their own technology to a larger community.&lt;br /&gt;
&lt;br /&gt;
===Impact within the Center===&lt;br /&gt;
Within the center, NA-MIC has formed a community around its software&lt;br /&gt;
engineering tools, imaging platforms, algorithms, and clinical&lt;br /&gt;
workflows. The NA-MIC calendar includes the All Hands Meeting and&lt;br /&gt;
Winter Project Week, the Spring Algorithm Meeting, the Summer Project&lt;br /&gt;
Week, Slicer3 Mini-Retreats, Core Site Visits, and weekly telephone&lt;br /&gt;
conferences. Over the past 18 months, the engineering core has visited&lt;br /&gt;
each algorithm core site to support the specific infrastructure needs&lt;br /&gt;
of each group.&lt;br /&gt;
&lt;br /&gt;
The NA-MIC software engineering tools (CMake, CDash, CTest, CPack) have&lt;br /&gt;
enabled the development and distribution of a cross-platform, nightly&lt;br /&gt;
tested, end-user application, Slicer3, that is a complex union of&lt;br /&gt;
novel application code, visualization tools (VTK), imaging libraries&lt;br /&gt;
(ITK, TEEM), user interface libraries (Tk, KWWidgets), and scripting&lt;br /&gt;
languages (TCL, Python). The NA-MIC software engineering tools have been&lt;br /&gt;
essential in the development and distribution of the Slicer3 imaging&lt;br /&gt;
platform to the NA-MIC community.&lt;br /&gt;
&lt;br /&gt;
NA-MIC's end-user application, Slicer3, supports the research within&lt;br /&gt;
NA-MIC by providing a base application for visualization, image&lt;br /&gt;
analysis and data management. Slicer3 supports multiplanar reformat,&lt;br /&gt;
oblique reformat, surface and volume rendering, comparison viewers,&lt;br /&gt;
tracked cursors, and multiple image layer blending. Slicer3 can&lt;br /&gt;
communicate with an XNAT database to download data and upload results.&lt;br /&gt;
Slicer3 provides a multi-layer plugin mechanism that allows&lt;br /&gt;
researchers to quickly and easily integrate and distribute their&lt;br /&gt;
technology with Slicer3. Plugins can be authored as separate&lt;br /&gt;
executables, shared libraries, Python scripts, or as full first class&lt;br /&gt;
Slicer3 modules. These plugins can be distributed with Slicer3 or&lt;br /&gt;
distributed on a site maintained by the researcher (for instance on&lt;br /&gt;
the Neuroimaging Informatics Tools and Resources Clearinghouse).&lt;br /&gt;
Slicer3 is available to all center participants and the external&lt;br /&gt;
community through its source code repository, official binary&lt;br /&gt;
releases, and unofficial nightly binary snapshots. There are 15&lt;br /&gt;
training modules on the Slicer3 User Training 101 webpage to educate&lt;br /&gt;
Slicer3 Users on basic image review, using advanced modules, and&lt;br /&gt;
integrating new technology into Slicer3.&lt;br /&gt;
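&lt;br /&gt;
The &amp;quot;separate executable&amp;quot; plugin flavor mentioned above can be sketched as a small self-describing command-line program: the host application first queries the module for its interface, then invokes it with concrete parameters. Real Slicer3 command-line modules publish their interface as XML following the Execution Model; the dictionary, flag names, and toy operation below are hypothetical stand-ins.&lt;br /&gt;

```python
import argparse

# Interface metadata a host application could introspect before running the
# module.  Real Slicer3 command-line modules publish this as XML per the
# Execution Model; a plain dict stands in for it in this sketch.
MODULE_DESCRIPTION = {
    "title": "Toy Threshold",
    "parameters": [{"name": "threshold", "flag": "-t", "default": 0.5}],
}

def run(argv):
    """Entry point of a stand-alone command-line plugin: '--describe'
    reports the interface, any other invocation does the actual work
    (here, keeping input values above a threshold)."""
    if "--describe" in argv:
        return MODULE_DESCRIPTION
    parser = argparse.ArgumentParser(prog="toy-threshold")
    parser.add_argument("-t", "--threshold", type=float, default=0.5)
    parser.add_argument("values", nargs="*", type=float)
    args = parser.parse_args(argv)
    return [v for v in args.values if v > args.threshold]
```

Because the module is a plain executable with a discoverable interface, it can be distributed with Slicer3 or hosted independently (for instance on NITRC) and still be integrated into the application GUI.&lt;br /&gt;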
&lt;br /&gt;
NA-MIC drives the development of platforms and algorithms through the&lt;br /&gt;
needs and research of its DBPs. Each DBP has selected specific&lt;br /&gt;
workflows and roadmaps as focal points for development with a goal of&lt;br /&gt;
providing the community with complete end-to-end solutions using&lt;br /&gt;
NA-MIC tools. The current roadmap projects are ''Stochastic&lt;br /&gt;
Tractography for VCSF'', ''Prostate Biopsy Needle Positioning Robot&lt;br /&gt;
Integration'', ''Brain Lesion Analysis in Neuropsychiatric Systemic&lt;br /&gt;
Lupus Erythematosus'', and ''Cortical Thickness for Autism''. For each&lt;br /&gt;
roadmap project, the software tools, exemplar data, and a tutorial are&lt;br /&gt;
provided to the community to allow others to reproduce the results and&lt;br /&gt;
apply the workflows in their own research programs. Along with the&lt;br /&gt;
four roadmap tutorials, five other tutorials were presented at the&lt;br /&gt;
2009 Tutorial Contest held at the NA-MIC All Hands Meeting in January&lt;br /&gt;
2009.&lt;br /&gt;
&lt;br /&gt;
NA-MIC algorithms are designed and used to address specific needs of&lt;br /&gt;
the DBPs. Multiple solution paths are explored and compared within&lt;br /&gt;
NA-MIC, resulting in recommendations to the field. For example, in&lt;br /&gt;
2008 and 2009, eight NA-MIC tractography algorithms were evaluated. At&lt;br /&gt;
the All Hands Meeting in 2008, a distributed group of researchers&lt;br /&gt;
reported on a qualitative study on the tractography methods. At the&lt;br /&gt;
All Hands Meeting in 2009, the same group reported back on&lt;br /&gt;
quantitative measures of sensitivity and specificity. The NA-MIC&lt;br /&gt;
algorithm groups collaborate on a broad spectrum of methods for&lt;br /&gt;
structural image analysis, diffusion image analysis, and functional&lt;br /&gt;
image analysis and orchestrate the solutions to the DBP workflows and&lt;br /&gt;
roadmaps. These efforts have led to fundamental advancements in shape&lt;br /&gt;
representation, shape analysis, groupwise registration, diffusion&lt;br /&gt;
estimation, segmentation, and quantification, as well as functional&lt;br /&gt;
estimation, distortion correction, and clustering.&lt;br /&gt;
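&lt;br /&gt;
The quantitative tractography comparison reported at the 2009 All Hands Meeting rests on standard voxelwise measures. A minimal sketch of computing sensitivity and specificity for a binary tract mask against a reference mask (the masks below are toy data, not study results):&lt;br /&gt;

```python
def sensitivity_specificity(predicted, reference):
    """Voxelwise sensitivity and specificity of a binary tract mask
    against a reference mask (both given as flat 0/1 sequences)."""
    tp = sum(p == 1 and r == 1 for p, r in zip(predicted, reference))
    tn = sum(p == 0 and r == 0 for p, r in zip(predicted, reference))
    fp = sum(p == 1 and r == 0 for p, r in zip(predicted, reference))
    fn = sum(p == 0 and r == 1 for p, r in zip(predicted, reference))
    return tp / (tp + fn), tn / (tn + fp)

# Toy masks: the reference tract occupies the first three voxels.
sens, spec = sensitivity_specificity([1, 1, 0, 1, 0, 0], [1, 1, 1, 0, 0, 0])
```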
&lt;br /&gt;
===Impact within NIH Funded Research===&lt;br /&gt;
Within NIH funded research, NA-MIC is the NCBC collaborating center&lt;br /&gt;
for four R01's: ''Automated FE Mesh Development'', ''Measuring Alcohol&lt;br /&gt;
and Stress Interactions with Structural and Perfusion MRI'', ''An&lt;br /&gt;
Integrated System for Image-Guided Radiofrequency Ablation of Liver&lt;br /&gt;
Tumors'', and ''Development and Dissemination of Robust Brain MRI&lt;br /&gt;
Measurement Tools''. Several other proposals have been submitted and&lt;br /&gt;
are under evaluation for the &amp;quot;Collaborations with NCBC PAR&amp;quot; as well as&lt;br /&gt;
to other NIH calls.&lt;br /&gt;
&lt;br /&gt;
NA-MIC also collaborates on the Slicer3 platform with the NIH funded&lt;br /&gt;
Neuroimage Analysis Center and the National Center for Image-Guided&lt;br /&gt;
Therapy. The NIH funded &amp;quot;BRAINS Morphology and Image Analysis&amp;quot; project&lt;br /&gt;
is also leveraging NA-MIC and Slicer3 technology. A collaboration with&lt;br /&gt;
the Simbios NCBC is evaluating NA-MIC tools for model generation from&lt;br /&gt;
diagnostic images. NA-MIC collaborates with the NIH funded&lt;br /&gt;
Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC) on&lt;br /&gt;
distribution of Slicer3 plugin modules. A Slicer3 training session was&lt;br /&gt;
held at NCI in August of 2008. Slicer3 is listed as one of the DICOM&lt;br /&gt;
Viewers on the National Biomedical Imaging Archive at NCI.&lt;br /&gt;
&lt;br /&gt;
===National and International Impact===&lt;br /&gt;
NA-MIC events and tools garner national and international interest.&lt;br /&gt;
Over 100 researchers participated in the NA-MIC All Hands Meeting and&lt;br /&gt;
Winter Project Week in January 2009. Many of these participants were&lt;br /&gt;
from outside of NA-MIC, attending the meetings to gain access to the&lt;br /&gt;
NA-MIC tools and researchers. These external researchers are&lt;br /&gt;
contributing ideas and technology back into NA-MIC. Two of the break&lt;br /&gt;
out sessions at the Winter Project Week were organized by researchers&lt;br /&gt;
from outside of NA-MIC. The Project Week in June of 2009 is being&lt;br /&gt;
expanded to be a joint event for NA-MIC, the Neuroimage Analysis&lt;br /&gt;
Center, the National Center for Image-Guided Therapy, the Harvard&lt;br /&gt;
Catalyst, and CIMIT.&lt;br /&gt;
&lt;br /&gt;
Components of the NA-MIC kit are used globally.  The software&lt;br /&gt;
engineering tools of CMake, CDash and CTest are used by many open&lt;br /&gt;
source projects and commercial applications. For example, the K&lt;br /&gt;
Desktop Environment (KDE) for Linux and Unix workstations uses CMake&lt;br /&gt;
and CTest. KDE is one of the largest open source projects in the&lt;br /&gt;
world. Many open source projects and commercial products are&lt;br /&gt;
benefiting from the NA-MIC related contributions to ITK and&lt;br /&gt;
VTK. Slicer3 was downloaded 3300 times during the current reporting&lt;br /&gt;
period. Slicer3 is also being used as an image analysis platform in&lt;br /&gt;
several fields outside of medical image analysis, in particular,&lt;br /&gt;
biological image analysis, astronomy, and industrial inspection.&lt;br /&gt;
&lt;br /&gt;
NA-MIC science is recognized by the medical imaging community. Nearly&lt;br /&gt;
150 NA-MIC related publications are listed on PubMed. Many of these&lt;br /&gt;
publications are in the most prestigious journals and conferences in the&lt;br /&gt;
field. Overall, there are 269 publications acknowledging NA-MIC&lt;br /&gt;
support. Portions of the DBP workflows and roadmaps are already being&lt;br /&gt;
utilized by researchers in the broader community and in the&lt;br /&gt;
development of commercial products.&lt;br /&gt;
&lt;br /&gt;
NA-MIC sponsored several events to promote NA-MIC tools and&lt;br /&gt;
methodologies.  In 2008 alone, NA-MIC hosted 12 workshops and training&lt;br /&gt;
sessions at 12 venues, including training sessions at NCI, RSNA, and&lt;br /&gt;
MICCAI. These workshops and tutorials were individually targeted to&lt;br /&gt;
the specific needs and interests of clinicians, biomedical engineers,&lt;br /&gt;
or algorithm developers. Two hundred and fifty clinical, biomedical,&lt;br /&gt;
and algorithm researchers attended these events.&lt;br /&gt;
&lt;br /&gt;
= Timeline (Ross)=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;The table needs to be updated&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This section of the report gives the milestones for years 1 through 5 that are associated with the timelines in the original proposal. We have organized the milestones by core. For each milestone we have indicated the proposed year of completion and a very brief description of the current status. In some cases the milestones include ongoing work, and we have tried to indicate that in the status. We have also included tables that list any significant changes to the proposed timelines. On the wiki page, we have links to the notes from the various PIs that give more details on their progress and the status of the milestones.&lt;br /&gt;
&lt;br /&gt;
'''These tables demonstrate that the project is, on the whole, proceeding according to the originally planned schedule.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Core 1: Algorithms ==&lt;br /&gt;
&lt;br /&gt;
=== Timelines and Milestones ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
| '''Group'''&lt;br /&gt;
| '''Aim'''&lt;br /&gt;
| '''Milestone'''&lt;br /&gt;
| '''Proposed time of completion'''&lt;br /&gt;
| '''Status'''&lt;br /&gt;
|-&lt;br /&gt;
| '''MIT'''&lt;br /&gt;
| 1&lt;br /&gt;
| '''Shape-based segmentation'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''MIT'''&lt;br /&gt;
| 1.1&lt;br /&gt;
| Methods to learn shape representations&lt;br /&gt;
| Year 2&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''MIT'''&lt;br /&gt;
| 1.2&lt;br /&gt;
| Shape in atlas-driven segmentation&lt;br /&gt;
| Year 4&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''MIT'''&lt;br /&gt;
| 1.3&lt;br /&gt;
| Validate and refine approach&lt;br /&gt;
| Year 5&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''MIT'''&lt;br /&gt;
| 2&lt;br /&gt;
| '''Shape analysis'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''MIT'''&lt;br /&gt;
| 2.1&lt;br /&gt;
| Methods to compute statistics of shapes&lt;br /&gt;
| Year 4&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''MIT'''&lt;br /&gt;
| 2.3&lt;br /&gt;
| Validation of shape methods on application data&lt;br /&gt;
| Year 5&lt;br /&gt;
| Completed, refinements ongoing&lt;br /&gt;
|-&lt;br /&gt;
| '''MIT'''&lt;br /&gt;
| 3&lt;br /&gt;
| '''Analysis of DTI data'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''MIT'''&lt;br /&gt;
| 3.1&lt;br /&gt;
| Fiber geometry&lt;br /&gt;
| Year 3&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''MIT'''&lt;br /&gt;
| 3.2&lt;br /&gt;
| Fiber statistics&lt;br /&gt;
| Year 5&lt;br /&gt;
| Completed, new developments ongoing&lt;br /&gt;
|-&lt;br /&gt;
| '''MIT'''&lt;br /&gt;
| 3.3&lt;br /&gt;
| Validation on real data&lt;br /&gt;
| Year 5&lt;br /&gt;
| Completed, refinements ongoing&lt;br /&gt;
|-&lt;br /&gt;
| '''Utah'''&lt;br /&gt;
| 1&lt;br /&gt;
| '''Processing of DTI data'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''Utah'''&lt;br /&gt;
| 1.1&lt;br /&gt;
| Filtering of DTI&lt;br /&gt;
| Year 2&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''Utah'''&lt;br /&gt;
| 1.2&lt;br /&gt;
| Quantitative analysis of DTI&lt;br /&gt;
| Year 3&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''Utah'''&lt;br /&gt;
| 1.3&lt;br /&gt;
| Segmentation of cortex/WM&lt;br /&gt;
| Year 3&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''Utah'''&lt;br /&gt;
| 1.4&lt;br /&gt;
| Segmentation analysis of white matter tracts&lt;br /&gt;
| Year 3&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''Utah'''&lt;br /&gt;
| 1.5&lt;br /&gt;
| Joint analysis of DTI and functional data&lt;br /&gt;
| Year 5&lt;br /&gt;
| Ongoing&lt;br /&gt;
|-&lt;br /&gt;
| '''Utah'''&lt;br /&gt;
| 2&lt;br /&gt;
| Nonparametric Shape Analysis&lt;br /&gt;
| Year 5&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''Utah'''&lt;br /&gt;
| 2.1&lt;br /&gt;
| Framework in place&lt;br /&gt;
| Year 3&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Utah'''&lt;br /&gt;
| 2.2&lt;br /&gt;
| Demonstration on shape of neuroanatomy (from Core 3)&lt;br /&gt;
| Year 4&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Utah'''&lt;br /&gt;
| 2.3&lt;br /&gt;
| Development for multiobject complexes&lt;br /&gt;
| Year 4&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Utah'''&lt;br /&gt;
| 2.4&lt;br /&gt;
| Demonstration of NP shape representations on clinical hypotheses from Core 3&lt;br /&gt;
| Year 5&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Utah'''&lt;br /&gt;
| 2.6&lt;br /&gt;
| Integration into NAMIC-kit&lt;br /&gt;
| Year 5&lt;br /&gt;
| In progress&lt;br /&gt;
|-&lt;br /&gt;
| '''Utah'''&lt;br /&gt;
| 2.7&lt;br /&gt;
| Shape regression&lt;br /&gt;
| Year 5&lt;br /&gt;
| Complete, validation ongoing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| '''UNC'''&lt;br /&gt;
| 1&lt;br /&gt;
| '''Statistical shape analysis'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''UNC'''&lt;br /&gt;
| 1.1&lt;br /&gt;
| Comparative anal. of shape anal. schemes&lt;br /&gt;
| Year 2&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''UNC'''&lt;br /&gt;
| 1.3&lt;br /&gt;
| Statistical shape analysis incl. patient variable&lt;br /&gt;
| Year 5&lt;br /&gt;
| Complete, extensions ongoing.&lt;br /&gt;
|-&lt;br /&gt;
| '''UNC'''&lt;br /&gt;
| 2&lt;br /&gt;
| '''Structural analysis of DW-MRI'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''UNC'''&lt;br /&gt;
| 2.1&lt;br /&gt;
| DTI tractography tools&lt;br /&gt;
| Year 4&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''UNC'''&lt;br /&gt;
| 2.2&lt;br /&gt;
| Geometric characterization of fiber tracts&lt;br /&gt;
| Year 5&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''UNC'''&lt;br /&gt;
| 2.3&lt;br /&gt;
| Quant. anal. of diffusion along fiber tracts&lt;br /&gt;
| Year 5&lt;br /&gt;
| Completed.&lt;br /&gt;
|-&lt;br /&gt;
| '''GaTech'''&lt;br /&gt;
| 1.1&lt;br /&gt;
| ITK Implementation of PDEs&lt;br /&gt;
| Year 2&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''GaTech'''&lt;br /&gt;
| 1.1&lt;br /&gt;
| Applications to Core 3 data&lt;br /&gt;
| Year 4&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''GaTech'''&lt;br /&gt;
| 1.2&lt;br /&gt;
| New statistic models&lt;br /&gt;
| Year 4&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''GaTech'''&lt;br /&gt;
| 1.2&lt;br /&gt;
| Shape analysis&lt;br /&gt;
| Year 4&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''GaTech'''&lt;br /&gt;
| 2.0&lt;br /&gt;
| Integration into Slicer&lt;br /&gt;
| Year 4-5&lt;br /&gt;
| Ongoing&lt;br /&gt;
|-&lt;br /&gt;
| '''MGH'''&lt;br /&gt;
| 1&lt;br /&gt;
| '''Registration'''&lt;br /&gt;
| Modified (see AR 2008)&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''MGH'''&lt;br /&gt;
| 2&lt;br /&gt;
| '''Group DTI Statistics'''&lt;br /&gt;
| Modified (see AR 2008)&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''MGH'''&lt;br /&gt;
| 3&lt;br /&gt;
| '''Diffusion Segmentation '''&lt;br /&gt;
| Modified (see AR 2008)&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''MGH'''&lt;br /&gt;
| 4&lt;br /&gt;
| '''Group Morphometry Statistics'''&lt;br /&gt;
| Modified (see AR 2008)&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''MGH'''&lt;br /&gt;
| 5&lt;br /&gt;
| XNAT Desktop&lt;br /&gt;
| Years 4-5&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| '''MGH'''&lt;br /&gt;
| 5.1&lt;br /&gt;
| Establish requirements for desktop version of XNAT &lt;br /&gt;
| Years 4-5&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''MGH'''&lt;br /&gt;
| 5.2&lt;br /&gt;
| Develop implementation plan for prototype&lt;br /&gt;
| Years 4-5&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''MGH'''&lt;br /&gt;
| 5.3&lt;br /&gt;
| Implement prototype version &lt;br /&gt;
| Years 4-5&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''MGH'''&lt;br /&gt;
| 5.4&lt;br /&gt;
| Implement alpha version&lt;br /&gt;
| Year 5&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''MGH'''&lt;br /&gt;
| 6&lt;br /&gt;
| XNAT Central&lt;br /&gt;
| Years 4-5&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| '''MGH'''&lt;br /&gt;
| 6.1&lt;br /&gt;
| Deploy XNAT Central, a public access XNAT host &lt;br /&gt;
| Years 4-5&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''MGH'''&lt;br /&gt;
| 6.2&lt;br /&gt;
| Coordinate with NAMIC sites to upload project data&lt;br /&gt;
| Years 4-5&lt;br /&gt;
| Incomplete (ongoing)&lt;br /&gt;
|-&lt;br /&gt;
| '''MGH'''&lt;br /&gt;
| 6.3&lt;br /&gt;
| Continue developing XNAT Central based on feedback from NAMIC sites&lt;br /&gt;
| Years 4-5&lt;br /&gt;
| Complete, refinement ongoing&lt;br /&gt;
|-&lt;br /&gt;
| '''MGH'''&lt;br /&gt;
| 7&lt;br /&gt;
| NAMIC Kit integration&lt;br /&gt;
| Years 4-5&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| '''MGH'''&lt;br /&gt;
| 7.1&lt;br /&gt;
| Implement web services to exchange data with Slicer, Batchmake, and other client applications&lt;br /&gt;
| Years 4-5&lt;br /&gt;
|  Complete, testing ongoing&lt;br /&gt;
|-&lt;br /&gt;
| '''MGH'''&lt;br /&gt;
| 7.2&lt;br /&gt;
| Add XNAT Desktop to standard NAMIC kit distribution&lt;br /&gt;
| Year 5-6&lt;br /&gt;
| Incomplete.  Modified&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Timeline Modifications ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
| '''Group'''&lt;br /&gt;
| '''Aim'''&lt;br /&gt;
| '''Milestone'''&lt;br /&gt;
| '''Modification'''&lt;br /&gt;
|-&lt;br /&gt;
| '''MGH'''&lt;br /&gt;
| 7.2&lt;br /&gt;
| Add XNAT Desktop to standard NAMIC kit distribution&lt;br /&gt;
| Testing is underway and XNAT capabilities will be included in NAMIC at the end of Year 5 or early in Year 6&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== [[Core_1_Timeline_Notes|Core 1 Timeline Notes ]] ===&lt;br /&gt;
&lt;br /&gt;
== Core 2: Engineering ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Core 2 Timelines and Milestones ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
| '''Group'''&lt;br /&gt;
| '''Aim'''&lt;br /&gt;
| '''Milestone'''&lt;br /&gt;
| '''Proposed time of completion'''&lt;br /&gt;
| '''Status'''&lt;br /&gt;
|-&lt;br /&gt;
| '''GE'''&lt;br /&gt;
| 1&lt;br /&gt;
| '''Define software architecture'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''GE'''&lt;br /&gt;
| 1&lt;br /&gt;
| Object design&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''GE'''&lt;br /&gt;
| 1&lt;br /&gt;
| Identify patterns&lt;br /&gt;
| Yr 3&lt;br /&gt;
| Patterns for processing scalar and vector images, models, and fiducials complete. Patterns for diffusion-weighted imaging completed; fMRI ongoing.&lt;br /&gt;
|-&lt;br /&gt;
| '''GE'''&lt;br /&gt;
| 1&lt;br /&gt;
| Create frameworks&lt;br /&gt;
| Yr 3&lt;br /&gt;
| Frameworks for processing scalar and vector images, models, and fiducials complete. Frameworks for diffusion-weighted imaging completed; fMRI ongoing.&lt;br /&gt;
|-&lt;br /&gt;
| '''GE'''&lt;br /&gt;
| 2&lt;br /&gt;
| '''Software engineering process'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''GE'''&lt;br /&gt;
| 2&lt;br /&gt;
| Extreme programming&lt;br /&gt;
| Yr 1-6&lt;br /&gt;
| On schedule, ongoing&lt;br /&gt;
|-&lt;br /&gt;
| '''GE'''&lt;br /&gt;
| 2&lt;br /&gt;
| Process automation&lt;br /&gt;
| Yr 3&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''GE'''&lt;br /&gt;
| 2&lt;br /&gt;
| Refactoring&lt;br /&gt;
| Yr 3&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''GE'''&lt;br /&gt;
| 3&lt;br /&gt;
| '''Automated quality system'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''GE'''&lt;br /&gt;
| 3&lt;br /&gt;
| DART deployment&lt;br /&gt;
| Yr 2&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''GE'''&lt;br /&gt;
| 3&lt;br /&gt;
| Persistent testing system&lt;br /&gt;
| Yr 5&lt;br /&gt;
| Complete (ongoing support)&lt;br /&gt;
|-&lt;br /&gt;
| '''GE'''&lt;br /&gt;
| 3&lt;br /&gt;
| Automatic defect detection&lt;br /&gt;
| Yr 5&lt;br /&gt;
| Complete (ongoing support, revisions)&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 1&lt;br /&gt;
| '''Cross-platform development'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 1&lt;br /&gt;
| Deploy environment (CMake, CTest)&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 1&lt;br /&gt;
| DART Integration and testing&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 1&lt;br /&gt;
| Documentation tools&lt;br /&gt;
| Yr 2&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 2&lt;br /&gt;
| '''Integration tools'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 2&lt;br /&gt;
| File Formats/IO facilities&lt;br /&gt;
| Yr 2&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 2&lt;br /&gt;
| CableSWIG deployment&lt;br /&gt;
| Yr 3&lt;br /&gt;
| Complete (integration ongoing)&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 2&lt;br /&gt;
| Establish XML schema&lt;br /&gt;
| Yr 4&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 3&lt;br /&gt;
| '''Technology delivery'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 3&lt;br /&gt;
| Deploy applications&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete (ongoing)&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 3&lt;br /&gt;
| Establish plug-in repository&lt;br /&gt;
| Yr 2&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 3&lt;br /&gt;
| CPack&lt;br /&gt;
| Yr 4-5&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Isomics'''&lt;br /&gt;
| 1&lt;br /&gt;
| NAMIC builds of slicer&lt;br /&gt;
| Yr 2-5&lt;br /&gt;
| Complete (testing ongoing)&lt;br /&gt;
|-&lt;br /&gt;
| '''Isomics'''&lt;br /&gt;
| 1&lt;br /&gt;
| Schizophrenia and DBP interfaces&lt;br /&gt;
| Yr 3-5&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''Isomics'''&lt;br /&gt;
| 2&lt;br /&gt;
| ITK Integration tools&lt;br /&gt;
| Yr 1-3&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''Isomics'''&lt;br /&gt;
| 2&lt;br /&gt;
| Experiment Control Interfaces&lt;br /&gt;
| Yr 2-5&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''Isomics'''&lt;br /&gt;
| 2&lt;br /&gt;
| fMRI/DTI algorithm support&lt;br /&gt;
| Yr 2-5&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''Isomics'''&lt;br /&gt;
| 2&lt;br /&gt;
| New DBP algorithm support&lt;br /&gt;
| Yr 2-6&lt;br /&gt;
| Ongoing&lt;br /&gt;
|-&lt;br /&gt;
| '''Isomics'''&lt;br /&gt;
| 3&lt;br /&gt;
| Compatible build process&lt;br /&gt;
| Yr 1-3&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''Isomics'''&lt;br /&gt;
| 3&lt;br /&gt;
| DART Integration&lt;br /&gt;
| Yr 1-2&lt;br /&gt;
| Completed (maintenance ongoing)&lt;br /&gt;
|-&lt;br /&gt;
| '''Isomics'''&lt;br /&gt;
| 3&lt;br /&gt;
| Test scripts for new code&lt;br /&gt;
| Yr 2-5&lt;br /&gt;
| Ongoing&lt;br /&gt;
|-&lt;br /&gt;
| '''UCSD'''&lt;br /&gt;
| 1&lt;br /&gt;
| Grid computing --- base&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''UCSD'''&lt;br /&gt;
| 1&lt;br /&gt;
| Grid enabled algorithms&lt;br /&gt;
| Yr 3&lt;br /&gt;
| First version (GWiz alpha) available - initial integration with Slicer3 and execution model.&lt;br /&gt;
|-&lt;br /&gt;
| '''UCSD'''&lt;br /&gt;
| 1&lt;br /&gt;
| Testing infrastructure&lt;br /&gt;
| Yr 4&lt;br /&gt;
| Completed (testing ongoing)&lt;br /&gt;
|-&lt;br /&gt;
| '''UCSD'''&lt;br /&gt;
| 2&lt;br /&gt;
| Data grid --- compatibility&lt;br /&gt;
| Yr 2&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''UCSD'''&lt;br /&gt;
| 2&lt;br /&gt;
| Data grid --- Slicer access&lt;br /&gt;
| Yr 2&lt;br /&gt;
| Completed&lt;br /&gt;
|-&lt;br /&gt;
| '''UCSD'''&lt;br /&gt;
| 3&lt;br /&gt;
| Data mediation --- deploy&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Modified (see Annual Report 2008)&lt;br /&gt;
|-&lt;br /&gt;
| '''UCLA'''&lt;br /&gt;
| 1&lt;br /&gt;
| Debabeler functionality&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Modified&lt;br /&gt;
|-&lt;br /&gt;
| '''UCLA'''&lt;br /&gt;
| 2&lt;br /&gt;
| SLIPIE Interpretation (Layer 1)&lt;br /&gt;
| Yr 1-2&lt;br /&gt;
| Modified&lt;br /&gt;
|-&lt;br /&gt;
| '''UCLA'''&lt;br /&gt;
| 3&lt;br /&gt;
| SLIPIE Interpretation (Layer 2)&lt;br /&gt;
| Yr 1-2&lt;br /&gt;
| Modified&lt;br /&gt;
|-&lt;br /&gt;
| '''UCLA'''&lt;br /&gt;
| 3&lt;br /&gt;
| Developing ITK Modules&lt;br /&gt;
| Yr 2&lt;br /&gt;
| Modified&lt;br /&gt;
|-&lt;br /&gt;
| '''UCLA'''&lt;br /&gt;
| 4&lt;br /&gt;
| Integrating SRB (GSI-enabled)&lt;br /&gt;
| Yr 2&lt;br /&gt;
| Modified&lt;br /&gt;
|-&lt;br /&gt;
| '''UCLA'''&lt;br /&gt;
| 5&lt;br /&gt;
| Integrating IDA&lt;br /&gt;
| Yr 2&lt;br /&gt;
| Modified&lt;br /&gt;
|-&lt;br /&gt;
| '''UCLA'''&lt;br /&gt;
| 5&lt;br /&gt;
| Integrating External Visualization Applications&lt;br /&gt;
| Yr 2&lt;br /&gt;
| Modified&lt;br /&gt;
|-&lt;br /&gt;
| '''UCLA'''&lt;br /&gt;
| 6&lt;br /&gt;
| DTI Analysis&lt;br /&gt;
| Yr 3-6&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| '''UCLA'''&lt;br /&gt;
| 6.1&lt;br /&gt;
| Implementation of mechanically-based DTI analysis in ITK&lt;br /&gt;
| Yr 4&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''UCLA'''&lt;br /&gt;
| 6.2&lt;br /&gt;
| Integration of command-line module into Slicer&lt;br /&gt;
| Yr 5&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''UCLA'''&lt;br /&gt;
| 6.3&lt;br /&gt;
| Testing/evaluation of DTI analysis module (pilot study)&lt;br /&gt;
| Yr 5-6&lt;br /&gt;
| Ongoing&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Core 2 Timeline Modifications ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
| '''Group'''&lt;br /&gt;
| '''Aim'''&lt;br /&gt;
| '''Milestone'''&lt;br /&gt;
| '''Modification'''&lt;br /&gt;
|-&lt;br /&gt;
| '''Isomics'''&lt;br /&gt;
| 3&lt;br /&gt;
| Data mediation&lt;br /&gt;
| Delayed pending integration of databases into NAMIC infrastructure&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== [[Core_2_Timeline_Notes|Core 2 Timeline Notes ]] ===&lt;br /&gt;
&lt;br /&gt;
== Core 3: Driving Biological Problems ==&lt;br /&gt;
&lt;br /&gt;
The Core 3 projects submitted R01 style proposals, as specified in the RFA, and did not submit timelines.&lt;br /&gt;
&lt;br /&gt;
== Core 4: Service ==&lt;br /&gt;
&lt;br /&gt;
=== Core 4 Timelines and Milestones ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
| '''Group'''&lt;br /&gt;
| '''Aim'''&lt;br /&gt;
| '''Milestone'''&lt;br /&gt;
| '''Proposed time of completion'''&lt;br /&gt;
| '''Status'''&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 1&lt;br /&gt;
| '''Implement Development Farms'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 1&lt;br /&gt;
| Deploy platforms&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 1&lt;br /&gt;
| Communications&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete, ongoing&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 2&lt;br /&gt;
| '''Establish software process'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 2&lt;br /&gt;
| Secure developer database&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete, ongoing&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 2&lt;br /&gt;
| Collect guidelines&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 2&lt;br /&gt;
| Manage software submission process&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 2&lt;br /&gt;
| Configure process tools&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 2&lt;br /&gt;
| Survey community&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 3&lt;br /&gt;
| '''Deploy NAMIC Tools'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 3&lt;br /&gt;
| Toolkits&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 3&lt;br /&gt;
| Integration tools&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 3&lt;br /&gt;
| Applications&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 3&lt;br /&gt;
| Integrate new computing resources&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 4&lt;br /&gt;
| '''Provide support'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 4&lt;br /&gt;
| Establish support infrastructure&lt;br /&gt;
| Yr 1-5&lt;br /&gt;
| On schedule, ongoing&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 4&lt;br /&gt;
| NAMIC support&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Kitware'''&lt;br /&gt;
| 5&lt;br /&gt;
| Manage NAMIC Software Releases&lt;br /&gt;
| Yr 1-5&lt;br /&gt;
| On schedule, ongoing&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Core 4 Timeline Modifications ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
| '''Group'''&lt;br /&gt;
| '''Aim'''&lt;br /&gt;
| '''Milestone'''&lt;br /&gt;
| '''Modification'''&lt;br /&gt;
|-&lt;br /&gt;
| Kitware&lt;br /&gt;
| 2-5&lt;br /&gt;
| Various&lt;br /&gt;
| Refined/modified the sub-aims&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== [[Core_4_Timeline_Notes|Core 4 Timeline Notes ]] ===&lt;br /&gt;
&lt;br /&gt;
== Core 5: Training ==&lt;br /&gt;
&lt;br /&gt;
=== Core 5 Timelines and Milestones ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
| '''Group'''&lt;br /&gt;
| '''Aim'''&lt;br /&gt;
| '''Milestone'''&lt;br /&gt;
| '''Proposed time of completion'''&lt;br /&gt;
| '''Status'''&lt;br /&gt;
|-&lt;br /&gt;
| '''Harvard'''&lt;br /&gt;
| 1&lt;br /&gt;
| '''Formal Training Guidelines'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''Harvard'''&lt;br /&gt;
| 1&lt;br /&gt;
| Functional neuroanatomy&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Harvard'''&lt;br /&gt;
| 1&lt;br /&gt;
| Clinical correlations&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Harvard'''&lt;br /&gt;
| 2&lt;br /&gt;
| '''Mentoring'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''Harvard'''&lt;br /&gt;
| 2&lt;br /&gt;
| Programming workshops&lt;br /&gt;
| Yr 1-5&lt;br /&gt;
| On schedule, ongoing&lt;br /&gt;
|-&lt;br /&gt;
| '''Harvard'''&lt;br /&gt;
| 2&lt;br /&gt;
| One-on-one mentoring, Cores 1, 2, 3&lt;br /&gt;
| Yr 1-5&lt;br /&gt;
| On schedule, ongoing&lt;br /&gt;
|-&lt;br /&gt;
| '''Harvard'''&lt;br /&gt;
| 3&lt;br /&gt;
| '''Collaborative work environment'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''Harvard'''&lt;br /&gt;
| 3&lt;br /&gt;
| Wiki&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Harvard'''&lt;br /&gt;
| 3&lt;br /&gt;
| Mailing lists&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| '''Harvard'''&lt;br /&gt;
| 3&lt;br /&gt;
| Regular telephone conferences&lt;br /&gt;
| Yr 1-5&lt;br /&gt;
| On schedule, ongoing&lt;br /&gt;
|-&lt;br /&gt;
| '''Harvard'''&lt;br /&gt;
| 4&lt;br /&gt;
| '''Educational component for tools'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''Harvard'''&lt;br /&gt;
| 4&lt;br /&gt;
| Slicer training modules&lt;br /&gt;
| Yr 2-5&lt;br /&gt;
| Slicer 2.x tutorials complete; more than 10 Slicer 3 tutorials and modules.&lt;br /&gt;
|-&lt;br /&gt;
| '''Harvard'''&lt;br /&gt;
| 5&lt;br /&gt;
| '''Demonstrations and hands-on training'''&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| '''Harvard'''&lt;br /&gt;
| 5&lt;br /&gt;
| Various workshops and conferences&lt;br /&gt;
| Yr 1-5&lt;br /&gt;
| On schedule, ongoing&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Core 5 Timeline Modifications ===&lt;br /&gt;
&lt;br /&gt;
None.&lt;br /&gt;
&lt;br /&gt;
=== [[Core_5_Timeline_Notes|Core 5 Timeline Notes ]] ===&lt;br /&gt;
&lt;br /&gt;
== Core 6: Dissemination ==&lt;br /&gt;
&lt;br /&gt;
=== Core 6 Timelines and Milestones ===&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
| '''Group'''&lt;br /&gt;
| '''Aim'''&lt;br /&gt;
| '''Milestone'''&lt;br /&gt;
| '''Proposed time of completion'''&lt;br /&gt;
| '''Status'''&lt;br /&gt;
|-&lt;br /&gt;
| Isomics and BWH&lt;br /&gt;
| 1&lt;br /&gt;
| Create a collaboration methodology for NA-MIC&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| Isomics and BWH&lt;br /&gt;
| 1.1&lt;br /&gt;
| Develop a selection process&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| Isomics and BWH&lt;br /&gt;
| 1.2&lt;br /&gt;
| Guidelines to govern the collaborations&lt;br /&gt;
| Yr 1-2&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| Isomics and BWH&lt;br /&gt;
| 1.3&lt;br /&gt;
| Provide on-site training&lt;br /&gt;
| Yr 1-6&lt;br /&gt;
| Complete for current tools (ongoing for tool refinement)&lt;br /&gt;
|-&lt;br /&gt;
| Isomics and BWH&lt;br /&gt;
| 1.4&lt;br /&gt;
| Develop a web site infrastructure&lt;br /&gt;
| Yr 1&lt;br /&gt;
| Complete&lt;br /&gt;
|-&lt;br /&gt;
| Isomics and BWH&lt;br /&gt;
| 2&lt;br /&gt;
| Facilitate communication between NA-MIC developers and wider research community&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| Isomics and BWH&lt;br /&gt;
| 2.1&lt;br /&gt;
| Develop materials describing NAMIC technology&lt;br /&gt;
| Yr 1-6&lt;br /&gt;
| On Schedule&lt;br /&gt;
|-&lt;br /&gt;
| Isomics and BWH&lt;br /&gt;
| 2.2&lt;br /&gt;
| Participate in scientific meetings&lt;br /&gt;
| Yr 2-6&lt;br /&gt;
| On Schedule&lt;br /&gt;
|-&lt;br /&gt;
| Isomics and BWH&lt;br /&gt;
| 2.3&lt;br /&gt;
| Document interactions with external researchers&lt;br /&gt;
| Yr 2-6&lt;br /&gt;
| On Schedule&lt;br /&gt;
|-&lt;br /&gt;
| Isomics and BWH&lt;br /&gt;
| 2.4&lt;br /&gt;
| Coordinate publication strategies&lt;br /&gt;
| Yr 3-6&lt;br /&gt;
| On Schedule&lt;br /&gt;
|-&lt;br /&gt;
| Isomics and BWH&lt;br /&gt;
| 3&lt;br /&gt;
| Develop a publicly accessible internet resource of data, software, documentation, and publication of new discoveries&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| Isomics and BWH&lt;br /&gt;
| 3.1&lt;br /&gt;
| On-line repository of NAMIC related publications and presentations&lt;br /&gt;
| Yr 1-6&lt;br /&gt;
| On Schedule&lt;br /&gt;
|-&lt;br /&gt;
| Isomics and BWH&lt;br /&gt;
| 3.2&lt;br /&gt;
| On-line repository of NAMIC tutorial and training material&lt;br /&gt;
| Yr 1-6&lt;br /&gt;
| On Schedule&lt;br /&gt;
|-&lt;br /&gt;
| Isomics and BWH&lt;br /&gt;
| 3.3&lt;br /&gt;
| Index and a searchable database&lt;br /&gt;
| Yr 1-2&lt;br /&gt;
| Done&lt;br /&gt;
|-&lt;br /&gt;
| Isomics and BWH&lt;br /&gt;
| 3.4&lt;br /&gt;
| Automated feedback systems that track software downloads&lt;br /&gt;
| Yr 3&lt;br /&gt;
| Done&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Core 6 Timeline Modifications ===&lt;br /&gt;
Dissemination efforts that were ongoing in Year 5 will be extended into the no-cost extension Year 6. The dissemination function is shared between Isomics and BWH.&lt;br /&gt;
&lt;br /&gt;
=== [[Core_6_Timeline_Notes|Core 6 Timeline Notes ]] ===&lt;br /&gt;
&lt;br /&gt;
=Appendix A Publications (Mastrogiacomo)=&lt;br /&gt;
A list should be mined from the publications database and attached here in MS word format.&lt;br /&gt;
&lt;br /&gt;
=Appendix B EAB Report and Response (Kapur)=&lt;br /&gt;
===EAB Report===&lt;br /&gt;
&lt;br /&gt;
===Response to EAB Report===&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Resources&amp;diff=35382</id>
		<title>Resources</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Resources&amp;diff=35382"/>
		<updated>2009-02-21T16:42:40Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Active */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== [[Collaborator:Resources|Resources for Collaborators]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information for investigators who would like to collaborate with NAMIC.&lt;br /&gt;
&lt;br /&gt;
=== Data ===&lt;br /&gt;
&lt;br /&gt;
All NA-MIC Data is available at the following link:  [[Data|NA-MIC Data]]&lt;br /&gt;
&lt;br /&gt;
=== Software: NA-MIC kit ===&lt;br /&gt;
&lt;br /&gt;
The NA-MIC Kit consists of all software that is being made available under the NA-MIC project. This software follows the NIH guidelines for open software development. In this section, we provide information about the components of the NA-MIC kit as well as supporting software tools that are being used by the software developers on the project.&lt;br /&gt;
&lt;br /&gt;
* [[NA-MIC-Kit|Software Resources for NA-MIC Kit]]&lt;br /&gt;
* [[Engineering:SandBox|Development Sandbox ]]&lt;br /&gt;
&lt;br /&gt;
=== Publications Guidelines and Resources ===&lt;br /&gt;
&lt;br /&gt;
The [[Publications:Main|publications page]] contains information on publication guidelines for NAMIC, the funding acknowledgement text, and the acknowledgements/references associated with each of the data sets.&lt;br /&gt;
&lt;br /&gt;
=== Mailing Lists ===&lt;br /&gt;
&lt;br /&gt;
These are the mailing lists associated with NA-MIC. If you are a participant in the project, please make sure that you are signed up for all the mailing lists that apply to your role and interests in the projects. These lists are moderated and maintained by Kitware.&lt;br /&gt;
&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo NAMIC-Algo]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo-pi NAMIC-Algo PIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-all NAMIC-All]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio1 NAMIC-Bio1]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio2 NAMIC-Bio2]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-developers NAMIC-Developers]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dissemination NAMIC-Dissemination]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-eng NAMIC-Eng]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-leadership NAMIC-Leadership]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-mgt NAMIC-Mgt]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week na-mic-project-week]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-service NAMIC-Service]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-sitepis NAMIC-SitePIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-training NAMIC-Training]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dti NAMIC-DTI Community]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-shapeanalysis NAMIC-ShapeAnalysis Community]&lt;br /&gt;
&lt;br /&gt;
=== [[NIH-Page|NIH Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains useful information provided by our NIH officers.&lt;br /&gt;
&lt;br /&gt;
=== [[Mbirn:Main_Page|Morphometry BIRN Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information about the [http://www.nbirn.net Morphometry Biomedical Informatics Research Network] Project&lt;br /&gt;
&lt;br /&gt;
=== NA-MIC Powerpoint ===&lt;br /&gt;
&lt;br /&gt;
* [[Media:NA-MIC_Powerpoint_Template.ppt|NA-MIC Powerpoint Template]]&lt;br /&gt;
* [[Media:NAMIC-Intro-Feb-04-2005.ppt|NA-MIC introduction slides]]&lt;br /&gt;
&lt;br /&gt;
=== [[NAMIC_Logos_Templates|NAMIC Logos and Templates]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains links to files containing the NA-MIC logo and templates.&lt;br /&gt;
&lt;br /&gt;
=== Job Openings ===&lt;br /&gt;
====Active====&lt;br /&gt;
* [http://www.cs.queensu.ca/~gabor/OpenJobs/ITK-Programmer.htm Image-Guided Surgery Applications Engineer at the Perk Lab, Queen's University, Canada]&lt;br /&gt;
&lt;br /&gt;
* [[Summer Intern at GE Research]], post resume at [http://www.ge.com/careers www.ge.com/careers]  Job #1001997&lt;br /&gt;
&lt;br /&gt;
====Expired====&lt;br /&gt;
* Closed: [[SPL-Postdoc|Post-doctoral Fellow in Radiology, Surgical Planning Laboratory, Brigham and Women's Hospital and Harvard Medical School]]&lt;br /&gt;
* [[Slicer-Tester|Medical Image Computing Software Open Source Engineer]]&lt;br /&gt;
&lt;br /&gt;
=== Wikis ===&lt;br /&gt;
&lt;br /&gt;
We are often asked about mediawiki and other wikis. Here is some [[Information_on_wikis|information on wikis]].&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Resources&amp;diff=35381</id>
		<title>Resources</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Resources&amp;diff=35381"/>
		<updated>2009-02-21T16:42:17Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Expired */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== [[Collaborator:Resources|Resources for Collaborators]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information for investigators who would like to collaborate with NAMIC.&lt;br /&gt;
&lt;br /&gt;
=== Data ===&lt;br /&gt;
&lt;br /&gt;
All NA-MIC Data is available at the following link:  [[Data|NA-MIC Data]]&lt;br /&gt;
&lt;br /&gt;
=== Software: NA-MIC kit ===&lt;br /&gt;
&lt;br /&gt;
The NA-MIC Kit consists of all software that is being made available under the NA-MIC project. This software follows the NIH guidelines for open software development. In this section, we provide information about the components of the NA-MIC kit as well as supporting software tools that are being used by the software developers on the project.&lt;br /&gt;
&lt;br /&gt;
* [[NA-MIC-Kit|Software Resources for NA-MIC Kit]]&lt;br /&gt;
* [[Engineering:SandBox|Development Sandbox ]]&lt;br /&gt;
&lt;br /&gt;
=== Publications Guidelines and Resources ===&lt;br /&gt;
&lt;br /&gt;
The [[Publications:Main|publications page]] contains information on publication guidelines for NAMIC, the funding acknowledgement text, and the acknowledgements/references associated with each of the data sets.&lt;br /&gt;
&lt;br /&gt;
=== Mailing Lists ===&lt;br /&gt;
&lt;br /&gt;
These are the mailing lists associated with NA-MIC. If you are a participant in the project, please make sure that you are signed up for all the mailing lists that apply to your role and interests in the projects. These lists are moderated and maintained by Kitware.&lt;br /&gt;
&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo NAMIC-Algo]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo-pi NAMIC-Algo PIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-all NAMIC-All]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio1 NAMIC-Bio1]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio2 NAMIC-Bio2]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-developers NAMIC-Developers]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dissemination NAMIC-Dissemination]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-eng NAMIC-Eng]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-leadership NAMIC-Leadership]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-mgt NAMIC-Mgt]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week na-mic-project-week]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-service NAMIC-Service]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-sitepis NAMIC-SitePIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-training NAMIC-Training]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dti NAMIC-DTI Community]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-shapeanalysis NAMIC-ShapeAnalysis Community]&lt;br /&gt;
&lt;br /&gt;
=== [[NIH-Page|NIH Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains useful information provided by our NIH officers.&lt;br /&gt;
&lt;br /&gt;
=== [[Mbirn:Main_Page|Morphometry BIRN Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information about the [http://www.nbirn.net Morphometry Biomedical Informatics Research Network] Project&lt;br /&gt;
&lt;br /&gt;
=== NA-MIC Powerpoint ===&lt;br /&gt;
&lt;br /&gt;
* [[Media:NA-MIC_Powerpoint_Template.ppt|NA-MIC Powerpoint Template]]&lt;br /&gt;
* [[Media:NAMIC-Intro-Feb-04-2005.ppt|NA-MIC introduction slides]]&lt;br /&gt;
&lt;br /&gt;
=== [[NAMIC_Logos_Templates|NAMIC Logos and Templates]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains links to files containing the NA-MIC logo and templates.&lt;br /&gt;
&lt;br /&gt;
=== Job Openings ===&lt;br /&gt;
====Active====&lt;br /&gt;
* [[Summer Intern at GE Research]], post resume at [http://www.ge.com/careers www.ge.com/careers]  Job #1001997&lt;br /&gt;
&lt;br /&gt;
====Expired====&lt;br /&gt;
* Closed: [[SPL-Postdoc|Post-doctoral Fellow in Radiology, Surgical Planning Laboratory, Brigham and Women's Hospital and Harvard Medical School]]&lt;br /&gt;
* [[Slicer-Tester|Medical Image Computing Software Open Source Engineer]]&lt;br /&gt;
&lt;br /&gt;
=== Wikis ===&lt;br /&gt;
&lt;br /&gt;
We are often asked about mediawiki and other wikis. Here is some [[Information_on_wikis|information on wikis]].&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=AHM2009:JHU&amp;diff=34524</id>
		<title>AHM2009:JHU</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=AHM2009:JHU&amp;diff=34524"/>
		<updated>2009-01-08T04:56:20Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Detailed Information about the Pipeline */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[AHM_2009#Agenda|Back to AHM 2009 Agenda]]&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
==JHU Roadmap Project==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{| &lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Menard.jpg|thumb|350px|Prostate intervention (biopsy) in closed MR scanner.]]&lt;br /&gt;
|[[Image:Patient.jpg|Close-up of the transrectal procedure|thumb|400px]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{| &lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Robot.jpg|thumb|350px|Transrectal prostate intervention robot assembled.]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{| &lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:TRProstateCalibration.jpg|thumb|360px|Fiducial calibration of interventional robot]]&lt;br /&gt;
|[[Image:TRPB_ProstateSegmentation.JPG|semi-automatic prostate segmentation|thumb|400px]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:TRProstateBiopsyRobot.jpg|thumb|350px|MR-compatible Trans-Rectal Prostate Robot.]]&lt;br /&gt;
|[[Image:TRProstateTargeting.jpg|thumb|360px|Trajectory calculation from target locations]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
== Overview ==&lt;br /&gt;
;* Who is the targeted user?&lt;br /&gt;
The only definitive method to diagnose prostate cancer is biopsy. The current gold standard is Trans-Rectal UltraSound (TRUS) guided biopsy, but TRUS biopsies lack sensitivity and specificity. MRI has recently been investigated as an attractive alternative for imaging and localizing prostate cancer, and it is important to take advantage of multi-parametric MR imaging when performing prostate biopsy. However, MRI-guided biopsy is physically constrained by the very small working space inside the scanner. We have developed a fully MR-compatible robot that addresses this limitation. The Slicer-based module we are developing aims to provide an end-to-end interventional solution that combines the imaging functionality of the software with interfaces to our specific hardware to perform the biopsy. The targeted users are the clinicians who are currently investigating our MR-compatible robotic device.&lt;br /&gt;
&lt;br /&gt;
;* What problem does the pipeline solve?&lt;br /&gt;
There are several Slicer features that are crucial to image-guided therapy that are utilized in this module:&lt;br /&gt;
&lt;br /&gt;
'''Oriented volumes and image slice reformatting'''&lt;br /&gt;
Each volume acquired during the biopsy procedure has its own orientation, since images are acquired according to the orientation of the instrument, which is at an oblique angle to the MR scanner's coordinate axes.  What we have added is that, for each workflow step, a particular volume is specified as the &amp;quot;primary&amp;quot; and, when overlays are performed e.g. for verification, Slicer displays the primary image in its original orientation and reslices the others.  The displayed slice orientation automatically changes to match the primary whenever the workflow step changes.&lt;br /&gt;
&lt;br /&gt;
'''Multiple fiducial lists'''&lt;br /&gt;
This module maintains two Slicer fiducial lists: one for registration and one for targeting.  As with the image orientation, we have added automatic switching of the displayed fiducial list according to the workflow step.&lt;br /&gt;
&lt;br /&gt;
'''Communication with hardware devices'''&lt;br /&gt;
The module uses optical encoders that are attached to the joints of the biopsy device to verify that the position of the device matches that of the plan.&lt;br /&gt;
&lt;br /&gt;
;* How does the pipeline compare to state of the art?&lt;br /&gt;
Slicer's ability to work with oriented volumes, its slice reformatting, and its seamless integration with image analysis algorithms make it uniquely suited to our problem. One does not have to reinvent the wheel, since so much functionality is readily available as plug-and-play components. An existing pipeline/application interfaces with our specific hardware; however, it is rather difficult to extend its functionality to meet near-future requirements. We therefore believe a Slicer module is the way to go for us.&lt;br /&gt;
&lt;br /&gt;
==Detailed Information about the Pipeline==&lt;br /&gt;
We have created an interactive loadable module that provides a workflow interface for MR-guided transrectal prostate biopsy. The MR images are captured with the help of an endorectal coil which is mounted on the same shaft as the biopsy needle.&lt;br /&gt;
The steps in the workflow are as follows:&lt;br /&gt;
#'''Calibration:''' &lt;br /&gt;
#:This is the first step in the workflow. The objective is to register the image to the robot via MR fiducials. First, an MR scan (the calibration volume) is acquired to optimally image the fiducials. The volume is loaded into Slicer from the TR Prostate Biopsy module's wizard GUI. The registration method is based on first segmenting the fiducials as seen in the image. The segmentation algorithm, developed by Csaba/Axel at Johns Hopkins, primarily uses morphological operations to localize the fiducials. The segmentation parameters, including the approximate physical dimensions of the fiducials and thresholds, are available on the wizard GUI. The semi-automatic segmentation process is initiated by the user providing one click per fiducial. After the fiducials are segmented, registration is triggered automatically; it uses prior knowledge of the mechanical design of the device and of the placement of the fiducials. The registration algorithm finds two axis lines (one per pair of fiducials) and computes the angle and distance between those axes. The segmentation and registration results are displayed in the GUI, and the registration results (angle and distance between axes) are benchmarked against the mechanically measured ground truth. If the clinician is not satisfied with the results, he/she can modify the parameters, re-segment, and recalculate the registration.[[Image:TRProstateBiopsy Calibration.JPG|thumb|320px|center|Calibration step GUI.]]&lt;br /&gt;
#'''Segmentation:''' &lt;br /&gt;
#:After the robot is registered, the next step (press Next in the wizard workflow) is to acquire the prostate volume and segment the prostate (algorithm by Yi Gao/Allen Tannenbaum, Georgia Tech).[[Image:TRPB_ProstateSegmentation.JPG|thumb|320px|center|Segmentation.]]&lt;br /&gt;
#'''Registration:''' &lt;br /&gt;
#:(MR1, MR2, ... MRSI) -- work in progress&lt;br /&gt;
#'''Targeting:''' &lt;br /&gt;
#:In this step, the clinician marks biopsy targets by clicking. The robot rotation angle and needle angle are computed automatically, along with the needle trajectory and depth. Each target's RAS location and needle targeting parameters are populated in the list in the wizard GUI. Selecting a target in the list brings it into view in all three slices. Multiple targets can be marked. The clinician selects the target to biopsy, dials the rotation angle and needle angle into the device, and performs the biopsy. Sensor data from the optical encoders on the robot is continuously read, and the current depth and orientation of the needle are updated in SLICER's slice views. [[Image:TRProstateTargeting.jpg|thumb|320px|center|Trajectory calculation from target locations]]&lt;br /&gt;
#'''Verification:'''&lt;br /&gt;
#:After the needle is in, a confirmation scan is taken to verify the actual biopsy locations against the planned targets. The verification volume is loaded into SLICER from the module GUI. The user selects the target to validate from the list and then clicks the two needle ends to mark the needle. The distance and angle errors are calculated and populated in the list. [Work in progress...]&lt;br /&gt;
&lt;br /&gt;
Currently, we have implemented the GUI for all the steps in the workflow. The calibration step is functionally complete, the segmentation step is being implemented during the project week in coordination with Yi/Allen, and we are implementing the remaining steps.&lt;br /&gt;
This module provides a demonstration of how Slicer modules can be created for specific interventional devices.&lt;br /&gt;
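The core of the calibration step above, computing the angle and shortest distance between the two fiducial axis lines, can be sketched as follows. This is an illustrative sketch only, not the module's actual code; the function and variable names are our own.&lt;br /&gt;

```python
import numpy as np

def line_angle_and_distance(p1, d1, p2, d2):
    """Angle (degrees) and shortest distance between two 3D lines,
    each given by a point p and a direction vector d."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Angle between the axes; lines are undirected, so fold into [0, 90].
    cos_ang = np.clip(abs(np.dot(d1, d2)), 0.0, 1.0)
    angle = np.degrees(np.arccos(cos_ang))
    # Shortest distance: project the connecting vector onto the common normal.
    n = np.cross(d1, d2)
    if np.linalg.norm(n) < 1e-9:
        # Parallel axes: distance from one line's point to the other line.
        dist = np.linalg.norm(np.cross(p2 - p1, d1))
    else:
        dist = abs(np.dot(p2 - p1, n / np.linalg.norm(n)))
    return angle, dist
```

Comparing these two quantities against the mechanically measured ground truth is what the benchmarking in the calibration step amounts to.&lt;br /&gt;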
&lt;br /&gt;
==Software &amp;amp; documentation==&lt;br /&gt;
&lt;br /&gt;
* The TRProstateBiopsy module is in the &amp;quot;Queens&amp;quot; directory of the NAMICSandBox - access [http://svn.na-mic.org/NAMICSandBox/trunk/Queens/TRProstateBiopsy/ online]&lt;br /&gt;
* Tutorial is forthcoming&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
&lt;br /&gt;
* DBP: &lt;br /&gt;
**Gabor Fichtinger, PhD, Queens School of Computing, Queens University [[Image:QueensLogo.jpg|50px]]&lt;br /&gt;
**Gabor Fichtinger, PhD, Louis Whitcomb, PhD, LCSR, Johns Hopkins University [[Image:LCSRLogo.gif|80px]]&lt;br /&gt;
* Core 1: Allen Tannenbaum, PhD, Yi Gao, PhD student, Georgia Tech University [[Image:GTLogo.gif|80px]]&lt;br /&gt;
* Core 2: Steve Pieper, Katie Hayes[[Image:Kitware.png|60px]]&lt;br /&gt;
* Contact: Gabor Fichtinger, gabor at cs.queensu.ca&lt;br /&gt;
&lt;br /&gt;
==Outreach==&lt;br /&gt;
&lt;br /&gt;
* Publication Links to the PubDB.&lt;br /&gt;
* Planned outreach activities (including presentations, tutorials/workshops) at conferences&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=AHM2009:JHU&amp;diff=34522</id>
		<title>AHM2009:JHU</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=AHM2009:JHU&amp;diff=34522"/>
		<updated>2009-01-08T04:51:54Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Detailed Information about the Pipeline */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[AHM_2009#Agenda|Back to AHM 2009 Agenda]]&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
==JHU Roadmap Project==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{| &lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Menard.jpg|thumb|350px|Prostate intervention (biopsy) in closed MR scanner.]]&lt;br /&gt;
|[[Image:Patient.jpg|Close-up of the transrectal procedure|thumb|400px]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{| &lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Robot.jpg|thumb|350px|Transrectal prostate intervention robot assembled.]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{| &lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:TRProstateCalibration.jpg|thumb|360px|Fiducial calibration of interventional robot]]&lt;br /&gt;
|[[Image:TRPB_ProstateSegmentation.JPG|semi-automatic prostate segmentation|thumb|400px]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:TRProstateBiopsyRobot.jpg|thumb|350px|MR-compatible Trans-Rectal Prostate Robot.]]&lt;br /&gt;
|[[Image:TRProstateTargeting.jpg|thumb|360px|Trajectory calculation from target locations]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
== Overview ==&lt;br /&gt;
;* Who is the targeted user?&lt;br /&gt;
The only definitive method to diagnose prostate cancer is biopsy. The current gold standard is Trans-Rectal UltraSound (TRUS) guided biopsy, but TRUS biopsies lack sensitivity and specificity. MRI has recently been investigated as an attractive alternative for imaging and localizing prostate cancer, and it is imperative to take advantage of multi-parametric MR imaging to perform prostate biopsy. However, MRI-guided biopsy is physically limited by the very small working space inside the scanner. We have developed a completely MR-compatible robot that solves this problem. The SLICER-based module we are developing aims to provide an end-to-end interventional solution that combines software imaging functionality with interfaces to our specific hardware to perform the biopsy. The targeted users are the clinicians who are currently investigating our MRI-compatible robotic device.&lt;br /&gt;
&lt;br /&gt;
;* What problem does the pipeline solve?&lt;br /&gt;
Several Slicer features crucial to image-guided therapy are utilized in this module:&lt;br /&gt;
&lt;br /&gt;
'''Oriented volumes and image slice reformatting'''&lt;br /&gt;
Each volume acquired during the biopsy procedure has its own orientation, since images are acquired according to the orientation of the instrument, which is at an oblique angle to the MR scanner's coordinate axes. What we have added is that, for each workflow step, a particular volume is designated as the &amp;quot;primary&amp;quot;; when overlays are performed, e.g. for verification, Slicer displays the primary image in its original orientation and reslices the others. The displayed slice orientation automatically changes to match the primary whenever the workflow step changes.&lt;br /&gt;
&lt;br /&gt;
'''Multiple fiducial lists'''&lt;br /&gt;
This module maintains two Slicer fiducial lists: one for registration and one for targeting. As with the image orientation, we have added automatic switching of the displayed fiducials according to the workflow step.&lt;br /&gt;
&lt;br /&gt;
'''Communication with hardware devices'''&lt;br /&gt;
The module uses optical encoders that are attached to the joints of the biopsy device to verify that the position of the device matches that of the plan.&lt;br /&gt;
&lt;br /&gt;
;* How does the pipeline compare to state of the art?&lt;br /&gt;
SLICER's support for oriented volumes, slice reformatting, and seamless integration with image analysis algorithms makes it uniquely suited to our problem. We do not have to reinvent the wheel: a great deal of functionality is readily available as plug-and-play components. An existing pipeline/application interfaces with our specific hardware, but it is rather difficult to extend to meet near-future requirements. We therefore believe a SLICER module is the right approach for us.&lt;br /&gt;
&lt;br /&gt;
==Detailed Information about the Pipeline==&lt;br /&gt;
We have created an interactive loadable module that provides a workflow interface for MR-guided transrectal prostate biopsy. The MR images are captured with the help of an endorectal coil which is mounted on the same shaft as the biopsy needle.&lt;br /&gt;
The steps in the workflow are as follows:&lt;br /&gt;
#'''Calibration:''' &lt;br /&gt;
#:This is the first step in the workflow. The objective is to register the image to the robot via MR fiducials. First, an MR scan (the calibration volume) is acquired to image the fiducials optimally. The volume is loaded into SLICER from the TR Prostate Biopsy module's wizard GUI. The registration method first segments the fiducials, as seen in the image. The segmentation algorithm, developed by Csaba/Axel at Johns Hopkins, primarily uses morphological operations to localize the fiducials. The segmentation parameters, including the approximate physical dimensions of the fiducials and thresholds, are available on the wizard GUI. The semi-automatic segmentation is initiated by the user providing one click per fiducial. After the fiducials are segmented, registration is triggered automatically; it uses prior knowledge of the mechanical design of the device and of the placement of the fiducials. The registration algorithm finds two axis lines (one per pair of fiducials) and computes the angle and distance between those axes. The segmentation and registration results are displayed in the GUI, and the registration results (angle and distance between the axes) are benchmarked against the mechanically measured ground truth. If the clinician is not satisfied with the results, he/she can modify the parameters, re-segment, and recompute the registration.[[Image:TRProstateBiopsy Calibration.JPG|thumb|320px|center|Calibration step GUI.]]&lt;br /&gt;
#'''Segmentation:''' &lt;br /&gt;
#:After the robot is registered, the next step (press Next in the wizard workflow) is to acquire the prostate volume and segment the prostate (algorithm by Yi Gao/Allen Tannenbaum, Georgia Tech).[[Image:TRPB_ProstateSegmentation.JPG|thumb|320px|center|Segmentation.]]&lt;br /&gt;
#'''Registration''' (MR1, MR2, ... MRSI) -- forthcoming&lt;br /&gt;
#'''Targeting:''' &lt;br /&gt;
#:In this step, the clinician marks biopsy targets by clicking. The robot rotation angle and needle angle are computed automatically, along with the needle trajectory and depth. Each target's RAS location and needle targeting parameters are populated in the list in the wizard GUI. Selecting a target in the list brings it into view in all three slices. Multiple targets can be marked. The clinician selects the target to biopsy, dials the rotation angle and needle angle into the device, and performs the biopsy. Sensor data from the optical encoders on the robot is continuously read, and the current depth and orientation of the needle are updated in SLICER's slice views. [[Image:TRProstateTargeting.jpg|thumb|320px|center|Trajectory calculation from target locations]]&lt;br /&gt;
#'''Verification:'''&lt;br /&gt;
#:After the needle is in, a confirmation scan is taken to verify the actual biopsy locations against the planned targets. The verification volume is loaded into SLICER from the module GUI. The user selects the target to validate from the list and then clicks the two needle ends to mark the needle. The distance and angle errors are calculated and populated in the list. [snapshot]&lt;br /&gt;
&lt;br /&gt;
Currently, we have implemented the GUI for all the steps in the workflow. The calibration step is functionally complete, the segmentation step is being implemented during the project week in coordination with Yi/Allen, and we are implementing the remaining steps.&lt;br /&gt;
This module provides a demonstration of how Slicer modules can be created for specific interventional devices.&lt;br /&gt;
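The targeting computation described above, which turns a clicked target position into a device rotation angle, a needle angle, and an insertion depth, can be illustrated with a simplified geometric sketch. This is our own simplified frame, with the device's rotation axis assumed along the third coordinate; it is not the module's actual code.&lt;br /&gt;

```python
import numpy as np

def targeting_parameters(target, pivot):
    """Simplified targeting geometry (illustrative only): express a
    target point relative to the needle pivot as a rotation about the
    device's main axis (assumed here to be the z axis), a needle angle
    measured from that axis, and a straight-line insertion depth."""
    v = np.asarray(target, dtype=float) - np.asarray(pivot, dtype=float)
    depth = np.linalg.norm(v)                          # insertion depth
    rotation_deg = np.degrees(np.arctan2(v[1], v[0]))  # rotation about the axis
    needle_deg = np.degrees(np.arctan2(np.hypot(v[0], v[1]), v[2]))  # tilt from axis
    return rotation_deg, needle_deg, depth
```

In the module, the clinician dials the computed rotation and needle angles into the device by hand, while the encoders report the actual pose back for display.&lt;br /&gt;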
&lt;br /&gt;
==Software &amp;amp; documentation==&lt;br /&gt;
&lt;br /&gt;
* The TRProstateBiopsy module is in the &amp;quot;Queens&amp;quot; directory of the NAMICSandBox - access [http://svn.na-mic.org/NAMICSandBox/trunk/Queens/TRProstateBiopsy/ online]&lt;br /&gt;
* Tutorial is forthcoming&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
&lt;br /&gt;
* DBP: &lt;br /&gt;
**Gabor Fichtinger, PhD, Queens School of Computing, Queens University [[Image:QueensLogo.jpg|50px]]&lt;br /&gt;
**Gabor Fichtinger, PhD, Louis Whitcomb, PhD, LCSR, Johns Hopkins University [[Image:LCSRLogo.gif|80px]]&lt;br /&gt;
* Core 1: Allen Tannenbaum, PhD, Yi Gao, PhD student, Georgia Tech University [[Image:GTLogo.gif|80px]]&lt;br /&gt;
* Core 2: Steve Pieper, Katie Hayes[[Image:Kitware.png|60px]]&lt;br /&gt;
* Contact: Gabor Fichtinger, gabor at cs.queensu.ca&lt;br /&gt;
&lt;br /&gt;
==Outreach==&lt;br /&gt;
&lt;br /&gt;
* Publication Links to the PubDB.&lt;br /&gt;
* Planned outreach activities (including presentations, tutorials/workshops) at conferences&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=AHM2009:JHU&amp;diff=34520</id>
		<title>AHM2009:JHU</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=AHM2009:JHU&amp;diff=34520"/>
		<updated>2009-01-08T04:47:15Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[AHM_2009#Agenda|Back to AHM 2009 Agenda]]&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
==JHU Roadmap Project==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{| &lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Menard.jpg|thumb|350px|Prostate intervention (biopsy) in closed MR scanner.]]&lt;br /&gt;
|[[Image:Patient.jpg|Close-up of the transrectal procedure|thumb|400px]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{| &lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Robot.jpg|thumb|350px|Transrectal prostate intervention robot assembled.]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{| &lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:TRProstateCalibration.jpg|thumb|360px|Fiducial calibration of interventional robot]]&lt;br /&gt;
|[[Image:TRPB_ProstateSegmentation.JPG|semi-automatic prostate segmentation|thumb|400px]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:TRProstateBiopsyRobot.jpg|thumb|350px|MR-compatible Trans-Rectal Prostate Robot.]]&lt;br /&gt;
|[[Image:TRProstateTargeting.jpg|thumb|360px|Trajectory calculation from target locations]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
== Overview ==&lt;br /&gt;
;* Who is the targeted user?&lt;br /&gt;
The only definitive method to diagnose prostate cancer is biopsy. The current gold standard is Trans-Rectal UltraSound (TRUS) guided biopsy, but TRUS biopsies lack sensitivity and specificity. MRI has recently been investigated as an attractive alternative for imaging and localizing prostate cancer, and it is imperative to take advantage of multi-parametric MR imaging to perform prostate biopsy. However, MRI-guided biopsy is physically limited by the very small working space inside the scanner. We have developed a completely MR-compatible robot that solves this problem. The SLICER-based module we are developing aims to provide an end-to-end interventional solution that combines software imaging functionality with interfaces to our specific hardware to perform the biopsy. The targeted users are the clinicians who are currently investigating our MRI-compatible robotic device.&lt;br /&gt;
&lt;br /&gt;
;* What problem does the pipeline solve?&lt;br /&gt;
Several Slicer features crucial to image-guided therapy are utilized in this module:&lt;br /&gt;
&lt;br /&gt;
'''Oriented volumes and image slice reformatting'''&lt;br /&gt;
Each volume acquired during the biopsy procedure has its own orientation, since images are acquired according to the orientation of the instrument, which is at an oblique angle to the MR scanner's coordinate axes. What we have added is that, for each workflow step, a particular volume is designated as the &amp;quot;primary&amp;quot;; when overlays are performed, e.g. for verification, Slicer displays the primary image in its original orientation and reslices the others. The displayed slice orientation automatically changes to match the primary whenever the workflow step changes.&lt;br /&gt;
&lt;br /&gt;
'''Multiple fiducial lists'''&lt;br /&gt;
This module maintains two Slicer fiducial lists: one for registration and one for targeting. As with the image orientation, we have added automatic switching of the displayed fiducials according to the workflow step.&lt;br /&gt;
&lt;br /&gt;
'''Communication with hardware devices'''&lt;br /&gt;
The module uses optical encoders that are attached to the joints of the biopsy device to verify that the position of the device matches that of the plan.&lt;br /&gt;
&lt;br /&gt;
;* How does the pipeline compare to state of the art?&lt;br /&gt;
SLICER's support for oriented volumes, slice reformatting, and seamless integration with image analysis algorithms makes it uniquely suited to our problem. We do not have to reinvent the wheel: a great deal of functionality is readily available as plug-and-play components. An existing pipeline/application interfaces with our specific hardware, but it is rather difficult to extend to meet near-future requirements. We therefore believe a SLICER module is the right approach for us.&lt;br /&gt;
&lt;br /&gt;
==Detailed Information about the Pipeline==&lt;br /&gt;
We have created an interactive loadable module that provides a workflow interface for MR-guided transrectal prostate biopsy. The MR images are captured with the help of an endorectal coil which is mounted on the same shaft as the biopsy needle.&lt;br /&gt;
The steps in the workflow are as follows:&lt;br /&gt;
#'''Calibration:''' &lt;br /&gt;
#:This is the first step in the workflow. The objective is to register the image to the robot via MR fiducials. First, an MR scan (the calibration volume) is acquired to image the fiducials optimally. The volume is loaded into SLICER from the TR Prostate Biopsy module's wizard GUI. The registration method first segments the fiducials, as seen in the image. The segmentation algorithm, developed by Csaba/Axel at Johns Hopkins, primarily uses morphological operations to localize the fiducials. The segmentation parameters, including the approximate physical dimensions of the fiducials and thresholds, are available on the wizard GUI. The semi-automatic segmentation is initiated by the user providing one click per fiducial. After the fiducials are segmented, registration is triggered automatically; it uses prior knowledge of the mechanical design of the device and of the placement of the fiducials. The registration algorithm finds two axis lines (one per pair of fiducials) and computes the angle and distance between those axes. The segmentation and registration results are displayed in the GUI, and the registration results (angle and distance between the axes) are benchmarked against the mechanically measured ground truth. If the clinician is not satisfied with the results, he/she can modify the parameters, re-segment, and recompute the registration.[[Image:TRProstateBiopsy Calibration.JPG|thumb|320px|center|Calibration step GUI.]]&lt;br /&gt;
#'''Segmentation:''' &lt;br /&gt;
#:After the robot is registered, the next step (press Next in the wizard workflow) is to acquire the prostate volume and segment the prostate (algorithm by Yi Gao/Allen Tannenbaum, Georgia Tech).[[Image:TRPB_ProstateSegmentation.JPG|thumb|320px|center|Segmentation.]]&lt;br /&gt;
#'''Targeting:''' &lt;br /&gt;
#:In this step, the clinician marks biopsy targets by clicking. The robot rotation angle and needle angle are computed automatically, along with the needle trajectory and depth. Each target's RAS location and needle targeting parameters are populated in the list in the wizard GUI. Selecting a target in the list brings it into view in all three slices. Multiple targets can be marked. The clinician selects the target to biopsy, dials the rotation angle and needle angle into the device, and performs the biopsy. Sensor data from the optical encoders on the robot is continuously read, and the current depth and orientation of the needle are updated in SLICER's slice views. [[Image:TRProstateTargeting.jpg|thumb|320px|center|Trajectory calculation from target locations]]&lt;br /&gt;
#'''Verification:'''&lt;br /&gt;
#:After the needle is in, a confirmation scan is taken to verify the actual biopsy locations against the planned targets. The verification volume is loaded into SLICER from the module GUI. The user selects the target to validate from the list and then clicks the two needle ends to mark the needle. The distance and angle errors are calculated and populated in the list. [snapshot]&lt;br /&gt;
&lt;br /&gt;
Currently, we have implemented the GUI for all the steps in the workflow. The calibration step is functionally complete, the segmentation step is being implemented during the project week in coordination with Yi/Allen, and we are implementing the remaining steps.&lt;br /&gt;
This module provides a demonstration of how Slicer modules can be created for specific interventional devices.&lt;br /&gt;
==Software &amp;amp; documentation==&lt;br /&gt;
&lt;br /&gt;
* The TRProstateBiopsy module is in the &amp;quot;Queens&amp;quot; directory of the NAMICSandBox - access [http://svn.na-mic.org/NAMICSandBox/trunk/Queens/TRProstateBiopsy/ online]&lt;br /&gt;
* Tutorial is forthcoming&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
&lt;br /&gt;
* DBP: &lt;br /&gt;
**Gabor Fichtinger, PhD, Queens School of Computing, Queens University [[Image:QueensLogo.jpg|50px]]&lt;br /&gt;
**Gabor Fichtinger, PhD, Louis Whitcomb, PhD, LCSR, Johns Hopkins University [[Image:LCSRLogo.gif|80px]]&lt;br /&gt;
* Core 1: Allen Tannenbaum, PhD, Yi Gao, PhD student, Georgia Tech University [[Image:GTLogo.gif|80px]]&lt;br /&gt;
* Core 2: Steve Pieper, Katie Hayes[[Image:Kitware.png|60px]]&lt;br /&gt;
* Contact: Gabor Fichtinger, gabor at cs.queensu.ca&lt;br /&gt;
&lt;br /&gt;
==Outreach==&lt;br /&gt;
&lt;br /&gt;
* Publication Links to the PubDB.&lt;br /&gt;
* Planned outreach activities (including presentations, tutorials/workshops) at conferences&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=AHM2009:JHU&amp;diff=34517</id>
		<title>AHM2009:JHU</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=AHM2009:JHU&amp;diff=34517"/>
		<updated>2009-01-08T04:32:05Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[AHM_2009#Agenda|Back to AHM 2009 Agenda]]&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
==JHU Roadmap Project==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{| &lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Menard.jpg|thumb|350px|Prostate intervention (biopsy) in closed MR scanner.]]&lt;br /&gt;
|[[Image:Patient.jpg|Close-up of the transrectal procedure|thumb|400px]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{| &lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:Robot.jpg|thumb|350px|Transrectal prostate intervention robot assembled.]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{| &lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:TRProstateBiopsyRobot.jpg|thumb|350px|MR-compatible Trans-Rectal Prostate Robot.]]&lt;br /&gt;
|[[Image:TRPB_ProstateSegmentation.JPG|semi-automatic prostate segmentation|thumb|400px]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|[[Image:TRProstateCalibration.jpg|thumb|360px|Fiducial calibration of interventional robot]]&lt;br /&gt;
|valign=&amp;quot;top&amp;quot;|[[Image:TRProstateTargeting.jpg|thumb|360px|Trajectory calculation from target locations]]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
== Overview ==&lt;br /&gt;
;* Who is the targeted user?&lt;br /&gt;
The only definitive method to diagnose prostate cancer is biopsy. The current gold standard is Trans-Rectal UltraSound (TRUS) guided biopsy, but TRUS biopsies lack sensitivity and specificity. MRI has recently been investigated as an attractive alternative for imaging and localizing prostate cancer, and it is imperative to take advantage of multi-parametric MR imaging to perform prostate biopsy. However, MRI-guided biopsy is physically limited by the very small working space inside the scanner. We have developed a completely MR-compatible robot that solves this problem. The SLICER-based module we are developing aims to provide an end-to-end interventional solution that combines software imaging functionality with interfaces to our specific hardware to perform the biopsy. The targeted users are the clinicians who are currently investigating our MRI-compatible robotic device.&lt;br /&gt;
&lt;br /&gt;
;* What problem does the pipeline solve?&lt;br /&gt;
Several Slicer features crucial to image-guided therapy are utilized in this module:&lt;br /&gt;
&lt;br /&gt;
'''Oriented volumes and image slice reformatting'''&lt;br /&gt;
Each volume acquired during the biopsy procedure has its own orientation, since images are acquired according to the orientation of the instrument, which is at an oblique angle to the MR scanner's coordinate axes. What we have added is that, for each workflow step, a particular volume is designated as the &amp;quot;primary&amp;quot;; when overlays are performed, e.g. for verification, Slicer displays the primary image in its original orientation and reslices the others. The displayed slice orientation automatically changes to match the primary whenever the workflow step changes.&lt;br /&gt;
&lt;br /&gt;
'''Multiple fiducial lists'''&lt;br /&gt;
This module maintains two Slicer fiducial lists: one for registration and one for targeting. As with the image orientation, we have added automatic switching of the displayed fiducials according to the workflow step.&lt;br /&gt;
&lt;br /&gt;
'''Communication with hardware devices'''&lt;br /&gt;
The module uses optical encoders that are attached to the joints of the biopsy device to verify that the position of the device matches that of the plan.&lt;br /&gt;
&lt;br /&gt;
;* How does the pipeline compare to state of the art?&lt;br /&gt;
SLICER's support for oriented volumes, slice reformatting, and seamless integration with image analysis algorithms makes it uniquely suited to our problem. We do not have to reinvent the wheel: a great deal of functionality is readily available as plug-and-play components. An existing pipeline/application interfaces with our specific hardware, but it is rather difficult to extend to meet near-future requirements. We therefore believe a SLICER module is the right approach for us.&lt;br /&gt;
&lt;br /&gt;
==Detailed Information about the Pipeline==&lt;br /&gt;
We have created an interactive loadable module that provides a workflow interface for MR-guided transrectal prostate biopsy. The MR images are captured with the help of an endorectal coil which is mounted on the same shaft as the biopsy needle.&lt;br /&gt;
The steps in the workflow are as follows:&lt;br /&gt;
#'''Calibration:''' &lt;br /&gt;
#:This is the first step in the workflow. The objective is to register the image to the robot via MR fiducials. First, an MR scan (the calibration volume) is acquired to image the fiducials optimally. The volume is loaded into SLICER from the TR Prostate Biopsy module's wizard GUI. The registration method first segments the fiducials, as seen in the image. The segmentation algorithm, developed by Csaba/Axel at Johns Hopkins, primarily uses morphological operations to localize the fiducials. The segmentation parameters, including the approximate physical dimensions of the fiducials and thresholds, are available on the wizard GUI. The semi-automatic segmentation is initiated by the user providing one click per fiducial. After the fiducials are segmented, registration is triggered automatically; it uses prior knowledge of the mechanical design of the device and of the placement of the fiducials. The registration algorithm finds two axis lines (one per pair of fiducials) and computes the angle and distance between those axes. The segmentation and registration results are displayed in the GUI, and the registration results (angle and distance between the axes) are benchmarked against the mechanically measured ground truth. If the clinician is not satisfied with the results, he/she can modify the parameters, re-segment, and recompute the registration.[[Image:TRProstateBiopsy Calibration.JPG|thumb|320px|center|Calibration step GUI.]]&lt;br /&gt;
#'''Segmentation:''' &lt;br /&gt;
#:After the robot is registered, the next step (press Next in the wizard workflow) is to acquire the prostate volume and segment the prostate (algorithm by Yi Gao/Allen Tannenbaum, Georgia Tech).[[Image:TRPB_ProstateSegmentation.JPG|thumb|320px|center|Segmentation.]]&lt;br /&gt;
#'''Targeting:''' &lt;br /&gt;
#:In this step, the clinician marks biopsy targets by clicking. The robot rotation angle and needle angle are computed automatically, along with the needle trajectory and depth. Each target's RAS location and needle targeting parameters are populated in the list in the wizard GUI. Selecting a target in the list brings it into view in all three slices. Multiple targets can be marked. The clinician selects the target to biopsy, dials the rotation angle and needle angle into the device, and performs the biopsy. Sensor data from the optical encoders on the robot is continuously read, and the current depth and orientation of the needle are updated in SLICER's slice views. [[Image:TRProstateTargeting.jpg|thumb|320px|center|Trajectory calculation from target locations]]&lt;br /&gt;
#'''Verification:'''&lt;br /&gt;
#:After the needle is in, a confirmation scan is taken to verify the actual biopsy locations against the planned targets. The verification volume is loaded into SLICER from the module GUI. The user selects the target to validate from the list and then clicks the two needle ends to mark the needle. The distance and angle errors are calculated and populated in the list. [snapshot]&lt;br /&gt;
&lt;br /&gt;
Currently, we have implemented the GUI for all the steps in the workflow. The calibration step is functionally complete, the segmentation step is being implemented during the project week in coordination with Yi/Allen, and we are implementing the remaining steps.&lt;br /&gt;
This module provides a demonstration of how Slicer modules can be created for specific interventional devices.&lt;br /&gt;
==Software &amp;amp; documentation==&lt;br /&gt;
&lt;br /&gt;
* The TRProstateBiopsy module is in the &amp;quot;Queens&amp;quot; directory of the NAMICSandBox - access [http://svn.na-mic.org/NAMICSandBox/trunk/Queens/TRProstateBiopsy/ online]&lt;br /&gt;
* A tutorial is forthcoming&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
&lt;br /&gt;
* DBP: &lt;br /&gt;
**Gabor Fichtinger, PhD, School of Computing, Queen's University [[Image:QueensLogo.jpg|50px]]&lt;br /&gt;
**Gabor Fichtinger, PhD, Louis Whitcomb, PhD, LCSR, Johns Hopkins University [[Image:LCSRLogo.gif|80px]]&lt;br /&gt;
* Core 1: Allen Tannenbaum, PhD, Yi Gao, PhD student, Georgia Tech [[Image:GTLogo.gif|80px]]&lt;br /&gt;
* Core 2: Steve Pieper, Katie Hayes[[Image:Kitware.png|60px]]&lt;br /&gt;
* Contact: Gabor Fichtinger, gabor at cs.queensu.ca&lt;br /&gt;
&lt;br /&gt;
==Outreach==&lt;br /&gt;
&lt;br /&gt;
* Publication links in the PubDB.&lt;br /&gt;
* Planned outreach activities (including presentations, tutorials/workshops) at conferences&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=File:Robot.jpg&amp;diff=34516</id>
		<title>File:Robot.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=File:Robot.jpg&amp;diff=34516"/>
		<updated>2009-01-08T04:27:43Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=File:Patient.jpg&amp;diff=34515</id>
		<title>File:Patient.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=File:Patient.jpg&amp;diff=34515"/>
		<updated>2009-01-08T04:27:24Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=File:Menard.jpg&amp;diff=34514</id>
		<title>File:Menard.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=File:Menard.jpg&amp;diff=34514"/>
		<updated>2009-01-08T04:26:51Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=File:Dani-PerkStation.JPG&amp;diff=32521</id>
		<title>File:Dani-PerkStation.JPG</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=File:Dani-PerkStation.JPG&amp;diff=32521"/>
		<updated>2008-11-29T14:44:08Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DBP2:Queens:PerkStation&amp;diff=32520</id>
		<title>DBP2:Queens:PerkStation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DBP2:Queens:PerkStation&amp;diff=32520"/>
		<updated>2008-11-29T14:43:44Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:CT-Image-Overlay-Zinreich.JPG|600px|thumb|Image Overlay in clinical setup]]&lt;br /&gt;
&lt;br /&gt;
Back to [[DBP2:JHU|JHU DBP 2]]&lt;br /&gt;
&lt;br /&gt;
=PERK Station (Image overlay to perform/train image-guided needle interventions)=&lt;br /&gt;
&lt;br /&gt;
==Objective:==&lt;br /&gt;
&lt;br /&gt;
The objective of this project (PERK Station) is to develop an end-to-end solution, implemented as a Slicer 3 module, to assist in performing and training for image-guided percutaneous needle interventions. The software, together with its hardware, overlays the acquired image (CT/MR) on the patient/phantom. The physician/trainee looks at the patient/phantom through a mirror showing the image overlay; the CT/MR image appears to float inside the body at the correct size and position, as if the physician/trainee had 2D ‘X-ray vision’. &lt;br /&gt;
&lt;br /&gt;
==Description:==&lt;br /&gt;
&lt;br /&gt;
The PERK Station combines image overlay, laser overlay, and standard tracked freehand navigation in a single suite. The software module, together with its hardware, operates in two modes: a) clinical and b) training. &lt;br /&gt;
&lt;br /&gt;
[[Image:Dani-PerkStation.JPG|thumb|320px|center|Perk Station in training / demo mode.]]&lt;br /&gt;
&lt;br /&gt;
* '''Clinical mode''': In clinical mode, the system is used to perform image-guided percutaneous needle biopsies. The clinical workflow consists of four steps:&lt;br /&gt;
#Calibration: &lt;br /&gt;
#:The objective of this step is to register the image overlay device to the patient/phantom lying on the scanner table. The software sends the image to the secondary monitor at its correct physical dimensions. The secondary monitor is mounted with a semi-transparent mirror at a 45-degree angle, so the image displayed on the monitor is reflected in the mirror and, viewed through the mirror, appears to float on the patient/phantom. Depending on how the monitor is mounted with respect to the mirror, a horizontal or vertical flip may be required. Once the correct flip is chosen, the image in SLICER's slice viewer should correspond to what is seen through the mirror. The user (typically the physician) then translates/rotates the image as seen through the mirror until it aligns with the fiducials mounted or strapped on the patient/phantom; this fiducial alignment achieves in-plane registration. For registration along z, the image projection plane should coincide with the laser-guide plane, which is also the plane of acquisition. Note that this registration takes place only on the secondary monitor: even though the image is scaled, moved, rotated, and flipped there, the image displayed in SLICER's slice viewer is undisturbed, so the physician can zoom in and out of the image for planning without affecting the calibration. The calibration can be saved to an XML file for later reuse. [[Image:PerkStationClinical_Calibrate.JPG|thumb|320px|center|Calibration step GUI.]]&lt;br /&gt;
#Planning: &lt;br /&gt;
#:Once the system is calibrated and registered to the patient, the software moves to the next step. The entry and target points are given by mouse clicks, and the software calculates the insertion angle with respect to vertical and the insertion depth. The software also overlays the needle guide on the secondary monitor/mirror to assist the physician in performing the intervention. The plan can be reset if the physician wishes to perform another needle intervention on the same image. [[Image:PerkStationClinical_Plan.JPG|thumb|320px|center|Plan step GUI.]]&lt;br /&gt;
#Insertion: &lt;br /&gt;
#:After planning, depth perception lines appear in 10 mm gradations to help the physician insert the needle to the correct depth. [[Image:PerkStationClinical_Insert.JPG|thumb|320px|center|Insert step GUI.]]&lt;br /&gt;
#Validation:&lt;br /&gt;
#:After the needle insertion is complete, the physician acquires a validation image/volume with the needle inside the patient/phantom. The validation volume is added to the scene, and the physician marks the actual needle entry and end points to obtain error calculations. [[Image:PerkStationClinical_Validation.JPG|thumb|320px|center|Validate step GUI.]]&lt;br /&gt;
* '''Training mode''': In training mode, the system provides feedback to trainees performing image-guided percutaneous needle interventions in a controlled environment. The workflow adds an 'Evaluation' step to the four steps described above; only the differences are highlighted below:&lt;br /&gt;
#Calibration: &lt;br /&gt;
#:A different wizard GUI is loaded: the software does not automatically display the image at its correct dimensions but relies on the trainee's input. Calculating the translation and rotation required to align the system is also left to the trainee.&lt;br /&gt;
#Planning: &lt;br /&gt;
#:Here too, the trainee must calculate and enter the insertion depth and insertion angle.&lt;br /&gt;
#Insertion:&lt;br /&gt;
#:Essentially the same as in clinical mode.&lt;br /&gt;
#Validation:&lt;br /&gt;
#:Essentially the same as in clinical mode.&lt;br /&gt;
#Evaluation:&lt;br /&gt;
#:The errors in the trainee's calculations are displayed so that performance in the intervention can be assessed objectively.&lt;br /&gt;
&lt;br /&gt;
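The planning computation described above (insertion depth, and insertion angle with respect to vertical, from the entry and target points) can be sketched as follows. This is an illustrative sketch, not the module's actual code, and assumes in-plane (x, y) coordinates in millimeters with y pointing up.&lt;br /&gt;

```python
import math

def plan_insertion(entry, target):
    # In-plane displacement from the entry point to the target.
    dx = target[0] - entry[0]
    dy = target[1] - entry[1]
    # Insertion depth is the straight-line distance along the path.
    depth_mm = math.hypot(dx, dy)
    # Insertion angle is measured from the vertical axis.
    angle_deg = math.degrees(math.atan2(abs(dx), abs(dy)))
    return depth_mm, angle_deg
```

For example, an entry point at (0, 0) and a target at (30, -40) give a 50 mm depth and an angle of about 36.9 degrees from vertical.&lt;br /&gt;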
==Progress:==&lt;br /&gt;
The software is nearly feature-complete. It is a dynamically loadable module, so none of the SLICER code needs to be modified to integrate it. For compliance with SLICER's interactive module architecture, the code still needs to be reviewed by an engineering core member.&lt;br /&gt;
&lt;br /&gt;
==Current deployment/usage:==&lt;br /&gt;
* Clinical mode: The software has been delivered to the team at Johns Hopkins University, Baltimore, and is currently being used in phantom and cadaver trials.&lt;br /&gt;
* Training mode: The integrated software and hardware system (hardware designed and developed by Paweena U-Thainual and Iulian Iordachita) debuted as part of a fall undergraduate course taught by Dr. Gabor Fichtinger in the School of Computing at Queen's University.&lt;br /&gt;
&lt;br /&gt;
==Software source code:==&lt;br /&gt;
Available on the NA-MIC Sandbox - access [http://svn.na-mic.org/NAMICSandBox/trunk/Queens/PerkStationModule/ PerkStationModule]&lt;br /&gt;
==Software installation instructions:==&lt;br /&gt;
* Installing Slicer: go to the [http://wiki.na-mic.org/Wiki/index.php/IGT:ToolKit/Install-Slicer3 Slicer3 Install] site.&lt;br /&gt;
* Build the module from source, then copy the generated PerkStationModule.dll to ''SlicerInstallationDir''/lib/Slicer3/Modules&lt;br /&gt;
&lt;br /&gt;
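The copy step above can be scripted. The sketch below is illustrative only, with placeholder paths: it copies a built PerkStationModule.dll into the lib/Slicer3/Modules directory of a Slicer 3 installation.&lt;br /&gt;

```python
import shutil
from pathlib import Path

def install_module(build_dir, slicer_install_dir):
    # Locate the built module library (placeholder build layout).
    src = Path(build_dir) / "PerkStationModule.dll"
    # Slicer 3 loads dynamically loadable modules from lib/Slicer3/Modules.
    dst_dir = Path(slicer_install_dir) / "lib" / "Slicer3" / "Modules"
    dst_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst_dir / src.name)
    return dst_dir / src.name
```

After running this against your own build tree and install location, restart Slicer so the module is picked up.&lt;br /&gt;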
==Tutorial (end-to-end):==&lt;br /&gt;
* Perk Station 'Clinical' mode: [[Media:PERK_STATION_WorkflowTutorialClinical.pdf‎|PERK_Station_ClinicalMode_Workflow]]&lt;br /&gt;
&lt;br /&gt;
==Publications==&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
*PI: Gabor Fichtinger, Queen’s University (gabor at cs.queensu.ca)&lt;br /&gt;
*Hardware: Paweena U-Thainual, Queen's University(paweena@cs.queensu.ca), Iulian Iordachita, Johns Hopkins University (iordachita@jhu.edu)&lt;br /&gt;
*Software Engineer: Siddharth Vikal, Queen’s University (vikal at cs.queensu.ca)&lt;br /&gt;
*JHU Software Engineer Support: Csaba Csoma, Johns Hopkins University, csoma at jhu.edu&lt;br /&gt;
*NA-MIC Engineering Contact: Katie Hayes, MSc, Brigham and Women's Hospital, hayes at bwh.harvard.edu&lt;br /&gt;
*Host Institutes: Queen's University &amp;amp; Johns Hopkins University&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
*[[DBP2:JHU|JHU DBP 2]]&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DBP2:Queens:PerkStation&amp;diff=32519</id>
		<title>DBP2:Queens:PerkStation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DBP2:Queens:PerkStation&amp;diff=32519"/>
		<updated>2008-11-29T14:42:34Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:CT-Image-Overlay-Zinreich.JPG|600px|thumb|Image Overlay in clinical setup]]&lt;br /&gt;
&lt;br /&gt;
Back to [[DBP2:JHU|JHU DBP 2]]&lt;br /&gt;
&lt;br /&gt;
=PERK Station (Image overlay to perform/train image-guided needle interventions)=&lt;br /&gt;
&lt;br /&gt;
==Objective:==&lt;br /&gt;
&lt;br /&gt;
The objective of this project (PERK Station) is to develop an end-to-end solution, implemented as a Slicer 3 module, to assist in performing and training for image-guided percutaneous needle interventions. The software, together with its hardware, overlays the acquired image (CT/MR) on the patient/phantom. The physician/trainee looks at the patient/phantom through a mirror showing the image overlay; the CT/MR image appears to float inside the body at the correct size and position, as if the physician/trainee had 2D ‘X-ray vision’. &lt;br /&gt;
&lt;br /&gt;
==Description:==&lt;br /&gt;
&lt;br /&gt;
The PERK Station combines image overlay, laser overlay, and standard tracked freehand navigation in a single suite. The software module, together with its hardware, operates in two modes: a) clinical and b) training. &lt;br /&gt;
&lt;br /&gt;
[[Image:Dani-PerkStation.JPG|thumb|320px|center|Perk Station in training / demo mode.]]&lt;br /&gt;
&lt;br /&gt;
* '''Clinical mode''': In clinical mode, the system is used to perform image-guided percutaneous needle biopsies. The clinical workflow consists of four steps:&lt;br /&gt;
#Calibration: &lt;br /&gt;
#:The objective of this step is to register the image overlay device to the patient/phantom lying on the scanner table. The software sends the image to the secondary monitor at its correct physical dimensions. The secondary monitor is mounted with a semi-transparent mirror at a 45-degree angle, so the image displayed on the monitor is reflected in the mirror and, viewed through the mirror, appears to float on the patient/phantom. Depending on how the monitor is mounted with respect to the mirror, a horizontal or vertical flip may be required. Once the correct flip is chosen, the image in SLICER's slice viewer should correspond to what is seen through the mirror. The user (typically the physician) then translates/rotates the image as seen through the mirror until it aligns with the fiducials mounted or strapped on the patient/phantom; this fiducial alignment achieves in-plane registration. For registration along z, the image projection plane should coincide with the laser-guide plane, which is also the plane of acquisition. Note that this registration takes place only on the secondary monitor: even though the image is scaled, moved, rotated, and flipped there, the image displayed in SLICER's slice viewer is undisturbed, so the physician can zoom in and out of the image for planning without affecting the calibration. The calibration can be saved to an XML file for later reuse. [[Image:PerkStationClinical_Calibrate.JPG|thumb|320px|center|Calibration step GUI.]]&lt;br /&gt;
#Planning: &lt;br /&gt;
#:Once the system is calibrated and registered to the patient, the software moves to the next step. The entry and target points are given by mouse clicks, and the software calculates the insertion angle with respect to vertical and the insertion depth. The software also overlays the needle guide on the secondary monitor/mirror to assist the physician in performing the intervention. The plan can be reset if the physician wishes to perform another needle intervention on the same image. [[Image:PerkStationClinical_Plan.JPG|thumb|320px|center|Plan step GUI.]]&lt;br /&gt;
#Insertion: &lt;br /&gt;
#:After planning, depth perception lines appear in 10 mm gradations to help the physician insert the needle to the correct depth. [[Image:PerkStationClinical_Insert.JPG|thumb|320px|center|Insert step GUI.]]&lt;br /&gt;
#Validation:&lt;br /&gt;
#:After the needle insertion is complete, the physician acquires a validation image/volume with the needle inside the patient/phantom. The validation volume is added to the scene, and the physician marks the actual needle entry and end points to obtain error calculations. [[Image:PerkStationClinical_Validation.JPG|thumb|320px|center|Validate step GUI.]]&lt;br /&gt;
* '''Training mode''': In training mode, the system provides feedback to trainees performing image-guided percutaneous needle interventions in a controlled environment. The workflow adds an 'Evaluation' step to the four steps described above; only the differences are highlighted below:&lt;br /&gt;
#Calibration: &lt;br /&gt;
#:A different wizard GUI is loaded: the software does not automatically display the image at its correct dimensions but relies on the trainee's input. Calculating the translation and rotation required to align the system is also left to the trainee.&lt;br /&gt;
#Planning: &lt;br /&gt;
#:Here too, the trainee must calculate and enter the insertion depth and insertion angle.&lt;br /&gt;
#Insertion:&lt;br /&gt;
#:Essentially the same as in clinical mode.&lt;br /&gt;
#Validation:&lt;br /&gt;
#:Essentially the same as in clinical mode.&lt;br /&gt;
#Evaluation:&lt;br /&gt;
#:The errors in the trainee's calculations are displayed so that performance in the intervention can be assessed objectively.&lt;br /&gt;
&lt;br /&gt;
==Progress:==&lt;br /&gt;
The software is nearly feature-complete. It is a dynamically loadable module, so none of the SLICER code needs to be modified to integrate it. For compliance with SLICER's interactive module architecture, the code still needs to be reviewed by an engineering core member.&lt;br /&gt;
&lt;br /&gt;
==Current deployment/usage:==&lt;br /&gt;
* Clinical mode: The software has been delivered to the team at Johns Hopkins University, Baltimore, and is currently being used in phantom and cadaver trials.&lt;br /&gt;
* Training mode: The integrated software and hardware system (hardware designed and developed by Paweena U-Thainual and Iulian Iordachita) debuted as part of a fall undergraduate course taught by Dr. Gabor Fichtinger in the School of Computing at Queen's University.&lt;br /&gt;
&lt;br /&gt;
==Software source code:==&lt;br /&gt;
Available on the NA-MIC Sandbox - access [http://svn.na-mic.org/NAMICSandBox/trunk/Queens/PerkStationModule/ PerkStationModule]&lt;br /&gt;
==Software installation instructions:==&lt;br /&gt;
* Installing Slicer: go to the [http://wiki.na-mic.org/Wiki/index.php/IGT:ToolKit/Install-Slicer3 Slicer3 Install] site.&lt;br /&gt;
* Build the module from source, then copy the generated PerkStationModule.dll to ''SlicerInstallationDir''/lib/Slicer3/Modules&lt;br /&gt;
&lt;br /&gt;
==Tutorial (end-to-end):==&lt;br /&gt;
* Perk Station 'Clinical' mode: [[Media:PERK_STATION_WorkflowTutorialClinical.pdf‎|PERK_Station_ClinicalMode_Workflow]]&lt;br /&gt;
&lt;br /&gt;
==Publications==&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
*PI: Gabor Fichtinger, Queen’s University (gabor at cs.queensu.ca)&lt;br /&gt;
*Hardware: Paweena U-Thainual, Queen's University(paweena@cs.queensu.ca), Iulian Iordachita, Johns Hopkins University (iordachita@jhu.edu)&lt;br /&gt;
*Software Engineer: Siddharth Vikal, Queen’s University (vikal at cs.queensu.ca)&lt;br /&gt;
*JHU Software Engineer Support: Csaba Csoma, Johns Hopkins University, csoma at jhu.edu&lt;br /&gt;
*NA-MIC Engineering Contact: Katie Hayes, MSc, Brigham and Women's Hospital, hayes at bwh.harvard.edu&lt;br /&gt;
*Host Institutes: Queen's University &amp;amp; Johns Hopkins University&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
*[[DBP2:JHU|JHU DBP 2]]&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DBP2:Queens:PerkStation&amp;diff=32518</id>
		<title>DBP2:Queens:PerkStation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DBP2:Queens:PerkStation&amp;diff=32518"/>
		<updated>2008-11-29T14:41:28Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:CT-Image-Overlay-Zinreich.JPG|600px|thumb|Image Overlay in clinical setup]]&lt;br /&gt;
&lt;br /&gt;
Back to [[DBP2:JHU|JHU DBP 2]]&lt;br /&gt;
&lt;br /&gt;
=PERK Station (Image overlay to perform/train image-guided needle interventions)=&lt;br /&gt;
&lt;br /&gt;
==Objective:==&lt;br /&gt;
&lt;br /&gt;
The objective of this project (PERK Station) is to develop an end-to-end solution, implemented as a Slicer 3 module, to assist in performing and training for image-guided percutaneous needle interventions. The software, together with its hardware, overlays the acquired image (CT/MR) on the patient/phantom. The physician/trainee looks at the patient/phantom through a mirror showing the image overlay; the CT/MR image appears to float inside the body at the correct size and position, as if the physician/trainee had 2D ‘X-ray vision’. &lt;br /&gt;
&lt;br /&gt;
==Description:==&lt;br /&gt;
&lt;br /&gt;
The PERK Station combines image overlay, laser overlay, and standard tracked freehand navigation in a single suite. The software module, together with its hardware, operates in two modes: a) clinical and b) training. &lt;br /&gt;
&lt;br /&gt;
[[Image:Dani-PerkStation.JPG|thumb|320px|center|Perk Station in training / demo mode.]]&lt;br /&gt;
&lt;br /&gt;
* '''Clinical mode''': In clinical mode, the system is used to perform image-guided percutaneous needle biopsies. The clinical workflow consists of four steps:&lt;br /&gt;
#Calibration: &lt;br /&gt;
#:The objective of this step is to register the image overlay device to the patient/phantom lying on the scanner table. The software sends the image to the secondary monitor at its correct physical dimensions. The secondary monitor is mounted with a semi-transparent mirror at a 45-degree angle, so the image displayed on the monitor is reflected in the mirror and, viewed through the mirror, appears to float on the patient/phantom. Depending on how the monitor is mounted with respect to the mirror, a horizontal or vertical flip may be required. Once the correct flip is chosen, the image in SLICER's slice viewer should correspond to what is seen through the mirror. The user (typically the physician) then translates/rotates the image as seen through the mirror until it aligns with the fiducials mounted or strapped on the patient/phantom; this fiducial alignment achieves in-plane registration. For registration along z, the image projection plane should coincide with the laser-guide plane, which is also the plane of acquisition. Note that this registration takes place only on the secondary monitor: even though the image is scaled, moved, rotated, and flipped there, the image displayed in SLICER's slice viewer is undisturbed, so the physician can zoom in and out of the image for planning without affecting the calibration. The calibration can be saved to an XML file for later reuse. [[Image:PerkStationClinical_Calibrate.JPG|thumb|320px|center|Calibration step GUI.]]&lt;br /&gt;
#Planning: &lt;br /&gt;
#:Once the system is calibrated and registered to the patient, the software moves to the next step. The entry and target points are given by mouse clicks, and the software calculates the insertion angle with respect to vertical and the insertion depth. The software also overlays the needle guide on the secondary monitor/mirror to assist the physician in performing the intervention. The plan can be reset if the physician wishes to perform another needle intervention on the same image. [[Image:PerkStationClinical_Plan.JPG|thumb|320px|center|Plan step GUI.]]&lt;br /&gt;
#Insertion: &lt;br /&gt;
#:After planning, depth perception lines appear in 10 mm gradations to help the physician insert the needle to the correct depth. [[Image:PerkStationClinical_Insert.JPG|thumb|320px|center|Insert step GUI.]]&lt;br /&gt;
#Validation:&lt;br /&gt;
#:After the needle insertion is complete, the physician acquires a validation image/volume with the needle inside the patient/phantom. The validation volume is added to the scene, and the physician marks the actual needle entry and end points to obtain error calculations. [[Image:PerkStationClinical_Validation.JPG|thumb|320px|center|Validate step GUI.]]&lt;br /&gt;
* '''Training mode''': In training mode, the system provides feedback to trainees performing image-guided percutaneous needle interventions in a controlled environment. The workflow adds an 'Evaluation' step to the four steps described above; only the differences are highlighted below:&lt;br /&gt;
#Calibration: &lt;br /&gt;
#:A different wizard GUI is loaded: the software does not automatically display the image at its correct dimensions but relies on the trainee's input. Calculating the translation and rotation required to align the system is also left to the trainee.&lt;br /&gt;
#Planning: &lt;br /&gt;
#:Here too, the trainee must calculate and enter the insertion depth and insertion angle.&lt;br /&gt;
#Insertion:&lt;br /&gt;
#:Essentially the same as in clinical mode.&lt;br /&gt;
#Validation:&lt;br /&gt;
#:Essentially the same as in clinical mode.&lt;br /&gt;
#Evaluation:&lt;br /&gt;
#:The errors in the trainee's calculations are displayed so that performance in the intervention can be assessed objectively.&lt;br /&gt;
&lt;br /&gt;
==Progress:==&lt;br /&gt;
The software is nearly feature-complete. It is a dynamically loadable module, so none of the SLICER code needs to be modified to integrate it. For compliance with SLICER's interactive module architecture, the code still needs to be reviewed by an engineering core member.&lt;br /&gt;
&lt;br /&gt;
==Current deployment/usage:==&lt;br /&gt;
* Clinical mode: The software has been delivered to the team at Johns Hopkins University, Baltimore, and is currently being used in phantom and cadaver trials.&lt;br /&gt;
* Training mode: The integrated software and hardware system (hardware designed and developed by Paweena U-Thainual and Iulian Iordachita) debuted as part of a fall undergraduate course taught by Dr. Gabor Fichtinger in the School of Computing at Queen's University.&lt;br /&gt;
&lt;br /&gt;
==Software source code:==&lt;br /&gt;
Available on the NA-MIC Sandbox - access [http://svn.na-mic.org/NAMICSandBox/trunk/Queens/PerkStationModule/ PerkStationModule]&lt;br /&gt;
==Software installation instructions:==&lt;br /&gt;
* Installing Slicer: go to the [http://wiki.na-mic.org/Wiki/index.php/IGT:ToolKit/Install-Slicer3 Slicer3 Install] site.&lt;br /&gt;
* Build the module from source, then copy the generated PerkStationModule.dll to ''SlicerInstallationDir''/lib/Slicer3/Modules&lt;br /&gt;
&lt;br /&gt;
==Tutorial (end-to-end):==&lt;br /&gt;
* Perk Station 'Clinical' mode: [[Media:PERK_STATION_WorkflowTutorialClinical.pdf‎|PERK_Station_ClinicalMode_Workflow]]&lt;br /&gt;
&lt;br /&gt;
==Publications==&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
*PI: Gabor Fichtinger, Queen’s University (gabor at cs.queensu.ca)&lt;br /&gt;
*Hardware: Paweena U-Thainual, Queen's University(paweena@cs.queensu.ca), Iulian Iordachita, Johns Hopkins University (iordachita@jhu.edu)&lt;br /&gt;
*Software Engineer: Siddharth Vikal, Queen’s University (vikal at cs.queensu.ca)&lt;br /&gt;
*JHU Software Engineer Support: Csaba Csoma, Johns Hopkins University, csoma at jhu.edu&lt;br /&gt;
*NA-MIC Engineering Contact: Katie Hayes, MSc, Brigham and Women's Hospital, hayes at bwh.harvard.edu&lt;br /&gt;
*Host Institutes: Queen's University &amp;amp; Johns Hopkins University&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
*[[DBP2:JHU|JHU DBP 2]]&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DBP2:Queens:PerkStation&amp;diff=32517</id>
		<title>DBP2:Queens:PerkStation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DBP2:Queens:PerkStation&amp;diff=32517"/>
		<updated>2008-11-29T14:37:50Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:CT-Image-Overlay-Zinreich.JPG|600px|thumb|Image Overlay in clinical setup]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Dani-PerkStation.JPG|600px|thumb|Perk Station in demonstration / training mode]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Back to [[DBP2:JHU|JHU DBP 2]]&lt;br /&gt;
&lt;br /&gt;
=PERK Station (Image overlay to perform/train image-guided needle interventions)=&lt;br /&gt;
&lt;br /&gt;
==Objective:==&lt;br /&gt;
&lt;br /&gt;
The objective of this project (PERK Station) is to develop an end-to-end solution, implemented as a Slicer 3 module, to assist in performing and training for image-guided percutaneous needle interventions. The software, together with its hardware, overlays the acquired image (CT/MR) on the patient/phantom. The physician/trainee looks at the patient/phantom through a mirror showing the image overlay; the CT/MR image appears to float inside the body at the correct size and position, as if the physician/trainee had 2D ‘X-ray vision’. &lt;br /&gt;
&lt;br /&gt;
==Description:==&lt;br /&gt;
&lt;br /&gt;
The PERK Station combines image overlay, laser overlay, and standard tracked freehand navigation in a single suite. The software module, together with its hardware, operates in two modes: a) clinical and b) training. &lt;br /&gt;
&lt;br /&gt;
* '''Clinical mode''': In clinical mode, it enables to perform an image-guided percutaneous needle biopsies. The workflow in clinical mode consists of four steps:&lt;br /&gt;
#Calibration: &lt;br /&gt;
#:The objective of this step is to register the image overlay device with the patient/phantom lying on the scanner table. In this stage, the software sends the image to the secondary monitor at its correct physical dimensions. The secondary monitor is mounted with a semi-transparent mirror at a 45-degree angle, so the image displayed on the monitor is projected onto the mirror and, when seen through the mirror, appears to float on the patient/phantom. Depending on how the secondary monitor is mounted with respect to the mirror, a horizontal or vertical flip may be required. Once the correct flip arrangement is chosen, the image in Slicer's slice viewer should correspond to what is seen through the mirror. The software then lets the user (typically the physician) translate/rotate the image as seen through the mirror so that it aligns with the fiducials mounted/strapped on the patient/phantom; this fiducial alignment achieves in-plane registration. For registration along the z-axis, the image projection plane should coincide with the laser-guide plane, which is also the plane of acquisition. Note that image registration takes place only on the secondary monitor: although the image is scaled, moved, rotated, and flipped there, the image displayed in Slicer's slice viewer remains undisturbed. This allows the physician/user to zoom in and out of the image in the slice viewer for planning without affecting the calibration. The calibration can also be saved to an XML file for later reuse. [[Image:PerkStationClinical_Calibrate.JPG|thumb|320px|center|Calibration step GUI.]]&lt;br /&gt;
#Planning: &lt;br /&gt;
#:Once the system is calibrated and registered with the patient, the software moves to the next step, in which the entry and target points are specified by mouse clicks. The software calculates the insertion angle with respect to the vertical and the insertion depth, and overlays the needle guide on the secondary monitor/mirror to assist the physician/user in performing the intervention. The plan can be reset in case the physician wishes to perform another needle intervention with the same image. [[Image:PerkStationClinical_Plan.JPG|thumb|320px|center|Plan step GUI.]]&lt;br /&gt;
#Insertion: &lt;br /&gt;
#:After planning, in the insertion step, additional depth-perception lines appear in 10 mm gradations to help the physician insert the needle to the correct depth. [[Image:PerkStationClinical_Insert.JPG|thumb|320px|center|Insert step GUI.]]&lt;br /&gt;
#Validation:&lt;br /&gt;
#:After the needle insertion is complete, the physician/user acquires a validation image/volume with the needle inside the patient/phantom; this volume/image is added to the scene. The physician/user can then mark the actual needle entry and end points to obtain error calculations. [[Image:PerkStationClinical_Validation.JPG|thumb|320px|center|Validate step GUI.]]&lt;br /&gt;
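The planning-step geometry described above can be sketched as follows (a minimal illustration assuming 2D in-plane coordinates in mm; ''insertion_plan'' is a hypothetical helper, not the module's actual code):&lt;br /&gt;

```python
import math

def insertion_plan(entry, target):
    """Sketch of the planning-step geometry (hypothetical helper).

    entry, target: (x, y) points in mm in the image plane, with y
    pointing straight down. Returns the insertion angle with respect
    to the vertical (degrees) and the insertion depth (mm).
    """
    dx = target[0] - entry[0]
    dy = target[1] - entry[1]
    depth = math.hypot(dx, dy)                # needle path length in mm
    angle = math.degrees(math.atan2(dx, dy))  # 0 deg = straight down
    return angle, depth
```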
* '''Training mode''': Training mode provides feedback to trainees performing image-guided percutaneous needle interventions in a controlled environment. The workflow adds an 'Evaluation' step to the four steps described above. In the following description, only the differences are highlighted:&lt;br /&gt;
#Calibration: &lt;br /&gt;
#:In this step, a different wizard GUI is loaded; the software does not automatically display the image at its correct dimensions but instead relies on the student/user's input. Calculating the translation and rotation required to align the system is also left to the user.&lt;br /&gt;
#Planning: &lt;br /&gt;
#:In this step too, the insertion depth and insertion angle must be calculated and entered by the user.&lt;br /&gt;
#Insertion:&lt;br /&gt;
#:This step is essentially the same as in clinical mode.&lt;br /&gt;
#Validation:&lt;br /&gt;
#:This step is essentially the same as in clinical mode.&lt;br /&gt;
#Evaluation:&lt;br /&gt;
#:In this step, the errors made in the calculations are displayed to the student/user to objectively assess their performance in the intervention.&lt;br /&gt;
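The evaluation-step error computation can be sketched similarly (a hypothetical example; the error metrics actually reported by the module may differ):&lt;br /&gt;

```python
import math

def targeting_errors(planned_entry, planned_target, actual_entry, actual_target):
    """In-plane Euclidean errors between planned and actual needle points (mm).

    Hypothetical sketch of the evaluation-step arithmetic; points are
    (x, y) tuples in mm in the image plane.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return {"entry_error_mm": dist(planned_entry, actual_entry),
            "target_error_mm": dist(planned_target, actual_target)}
```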
&lt;br /&gt;
==Progress:==&lt;br /&gt;
The software is almost complete in its functionality. It is a dynamically loadable module, which means none of the Slicer code needs to be modified to integrate it. For compliance with Slicer's interactive module architecture, the code still needs to be reviewed by one of the engineering core members.&lt;br /&gt;
&lt;br /&gt;
==Current deployment/usage:==&lt;br /&gt;
* Clinical mode: The software has been delivered to the team at Johns Hopkins University, Baltimore, and is currently being used in phantom and cadaver trials.&lt;br /&gt;
* Training mode: The integrated software and hardware system (hardware designed and developed by Paweena U-Thainual and Iulian Iordachita) debuted as part of a Fall undergraduate course in the School of Computing at Queen's University, taught by Dr. Gabor Fichtinger.&lt;br /&gt;
&lt;br /&gt;
==Software source code:==&lt;br /&gt;
Available on the NA-MIC Sandbox: [http://svn.na-mic.org/NAMICSandBox/trunk/Queens/PerkStationModule/ PerkStationModule]&lt;br /&gt;
==Software installation instructions:==&lt;br /&gt;
* Installing Slicer: go to the [http://wiki.na-mic.org/Wiki/index.php/IGT:ToolKit/Install-Slicer3 Slicer3 Install] site.&lt;br /&gt;
* Build the module from source, then copy the generated PerkStationModule.dll to ''SlicerInstallationDir''/lib/Slicer3/Modules&lt;br /&gt;
&lt;br /&gt;
==Tutorial (end-to-end):==&lt;br /&gt;
* Perk Station 'Clinical' mode: [[Media:PERK_STATION_WorkflowTutorialClinical.pdf‎|PERK_Station_ClinicalMode_Workflow]]&lt;br /&gt;
&lt;br /&gt;
==Publications==&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
*PI: Gabor Fichtinger, Queen’s University (gabor at cs.queensu.ca)&lt;br /&gt;
*Hardware: Paweena U-Thainual, Queen's University (paweena@cs.queensu.ca), Iulian Iordachita, Johns Hopkins University (iordachita@jhu.edu)&lt;br /&gt;
*Software Engineer: Siddharth Vikal, Queen’s University (vikal at cs.queensu.ca)&lt;br /&gt;
*JHU Software Engineer Support: Csaba Csoma, Johns Hopkins University, csoma at jhu.edu&lt;br /&gt;
*NA-MIC Engineering Contact: Katie Hayes, MSc, Brigham and Women's Hospital, hayes at bwh.harvard.edu&lt;br /&gt;
*Host Institutes: Queen's University &amp;amp; Johns Hopkins University&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
*[[DBP2:JHU|JHU DBP 2]]&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DBP2:Queens:PerkStation&amp;diff=32516</id>
		<title>DBP2:Queens:PerkStation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DBP2:Queens:PerkStation&amp;diff=32516"/>
		<updated>2008-11-29T14:35:40Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:CT-Image-Overlay-Zinreich.JPG|600px|thumb|Image Overlay in clinical setup]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Perk-Station-Demo.JPG|600px|thumb|Perk Station in demonstration / training mode]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Back to [[DBP2:JHU|JHU DBP 2]]&lt;br /&gt;
&lt;br /&gt;
=PERK Station (Image overlay to perform/train image-guided needle interventions)=&lt;br /&gt;
&lt;br /&gt;
==Objective:==&lt;br /&gt;
&lt;br /&gt;
The objective of this project (PERK Station) is to develop an end-to-end solution, implemented as a Slicer 3 module, to assist in performing and training for image-guided percutaneous needle interventions. The software, along with its hardware, overlays the acquired CT/MR image on the patient/phantom. The physician/trainee looks at the patient/phantom through the mirror showing the image overlay; the CT/MR image appears to float inside the body with the correct size and position, as if the physician/trainee had 2D ‘X-ray vision’. &lt;br /&gt;
&lt;br /&gt;
==Description:==&lt;br /&gt;
&lt;br /&gt;
The PERK Station comprises image overlay, laser overlay, and standard tracked freehand navigation in a single suite. The end-to-end software module, along with its hardware, operates in two modes: a) Clinical, b) Training. &lt;br /&gt;
&lt;br /&gt;
* '''Clinical mode''': Clinical mode enables the physician to perform image-guided percutaneous needle biopsies. The workflow consists of four steps:&lt;br /&gt;
#Calibration: &lt;br /&gt;
#:The objective of this step is to register the image overlay device with the patient/phantom lying on the scanner table. In this stage, the software sends the image to the secondary monitor at its correct physical dimensions. The secondary monitor is mounted with a semi-transparent mirror at a 45-degree angle, so the image displayed on the monitor is projected onto the mirror and, when seen through the mirror, appears to float on the patient/phantom. Depending on how the secondary monitor is mounted with respect to the mirror, a horizontal or vertical flip may be required. Once the correct flip arrangement is chosen, the image in Slicer's slice viewer should correspond to what is seen through the mirror. The software then lets the user (typically the physician) translate/rotate the image as seen through the mirror so that it aligns with the fiducials mounted/strapped on the patient/phantom; this fiducial alignment achieves in-plane registration. For registration along the z-axis, the image projection plane should coincide with the laser-guide plane, which is also the plane of acquisition. Note that image registration takes place only on the secondary monitor: although the image is scaled, moved, rotated, and flipped there, the image displayed in Slicer's slice viewer remains undisturbed. This allows the physician/user to zoom in and out of the image in the slice viewer for planning without affecting the calibration. The calibration can also be saved to an XML file for later reuse. [[Image:PerkStationClinical_Calibrate.JPG|thumb|320px|center|Calibration step GUI.]]&lt;br /&gt;
#Planning: &lt;br /&gt;
#:Once the system is calibrated and registered with the patient, the software moves to the next step, in which the entry and target points are specified by mouse clicks. The software calculates the insertion angle with respect to the vertical and the insertion depth, and overlays the needle guide on the secondary monitor/mirror to assist the physician/user in performing the intervention. The plan can be reset in case the physician wishes to perform another needle intervention with the same image. [[Image:PerkStationClinical_Plan.JPG|thumb|320px|center|Plan step GUI.]]&lt;br /&gt;
#Insertion: &lt;br /&gt;
#:After planning, in the insertion step, additional depth-perception lines appear in 10 mm gradations to help the physician insert the needle to the correct depth. [[Image:PerkStationClinical_Insert.JPG|thumb|320px|center|Insert step GUI.]]&lt;br /&gt;
#Validation:&lt;br /&gt;
#:After the needle insertion is complete, the physician/user acquires a validation image/volume with the needle inside the patient/phantom; this volume/image is added to the scene. The physician/user can then mark the actual needle entry and end points to obtain error calculations. [[Image:PerkStationClinical_Validation.JPG|thumb|320px|center|Validate step GUI.]]&lt;br /&gt;
* '''Training mode''': Training mode provides feedback to trainees performing image-guided percutaneous needle interventions in a controlled environment. The workflow adds an 'Evaluation' step to the four steps described above. In the following description, only the differences are highlighted:&lt;br /&gt;
#Calibration: &lt;br /&gt;
#:In this step, a different wizard GUI is loaded; the software does not automatically display the image at its correct dimensions but instead relies on the student/user's input. Calculating the translation and rotation required to align the system is also left to the user.&lt;br /&gt;
#Planning: &lt;br /&gt;
#:In this step too, the insertion depth and insertion angle must be calculated and entered by the user.&lt;br /&gt;
#Insertion:&lt;br /&gt;
#:This step is essentially the same as in clinical mode.&lt;br /&gt;
#Validation:&lt;br /&gt;
#:This step is essentially the same as in clinical mode.&lt;br /&gt;
#Evaluation:&lt;br /&gt;
#:In this step, the errors made in the calculations are displayed to the student/user to objectively assess their performance in the intervention.&lt;br /&gt;
&lt;br /&gt;
==Progress:==&lt;br /&gt;
The software is almost complete in its functionality. It is a dynamically loadable module, which means none of the Slicer code needs to be modified to integrate it. For compliance with Slicer's interactive module architecture, the code still needs to be reviewed by one of the engineering core members.&lt;br /&gt;
&lt;br /&gt;
==Current deployment/usage:==&lt;br /&gt;
* Clinical mode: The software has been delivered to the team at Johns Hopkins University, Baltimore, and is currently being used in phantom and cadaver trials.&lt;br /&gt;
* Training mode: The integrated software and hardware system (hardware designed and developed by Paweena U-Thainual and Iulian Iordachita) debuted as part of a Fall undergraduate course in the School of Computing at Queen's University, taught by Dr. Gabor Fichtinger.&lt;br /&gt;
&lt;br /&gt;
==Software source code:==&lt;br /&gt;
Available on the NA-MIC Sandbox: [http://svn.na-mic.org/NAMICSandBox/trunk/Queens/PerkStationModule/ PerkStationModule]&lt;br /&gt;
==Software installation instructions:==&lt;br /&gt;
* Installing Slicer: go to the [http://wiki.na-mic.org/Wiki/index.php/IGT:ToolKit/Install-Slicer3 Slicer3 Install] site.&lt;br /&gt;
* Build the module from source, then copy the generated PerkStationModule.dll to ''SlicerInstallationDir''/lib/Slicer3/Modules&lt;br /&gt;
&lt;br /&gt;
==Tutorial (end-to-end):==&lt;br /&gt;
* Perk Station 'Clinical' mode: [[Media:PERK_STATION_WorkflowTutorialClinical.pdf‎|PERK_Station_ClinicalMode_Workflow]]&lt;br /&gt;
&lt;br /&gt;
==Publications==&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
*PI: Gabor Fichtinger, Queen’s University (gabor at cs.queensu.ca)&lt;br /&gt;
*Hardware: Paweena U-Thainual, Queen's University (paweena@cs.queensu.ca), Iulian Iordachita, Johns Hopkins University (iordachita@jhu.edu)&lt;br /&gt;
*Software Engineer: Siddharth Vikal, Queen’s University (vikal at cs.queensu.ca)&lt;br /&gt;
*JHU Software Engineer Support: Csaba Csoma, Johns Hopkins University, csoma at jhu.edu&lt;br /&gt;
*NA-MIC Engineering Contact: Katie Hayes, MSc, Brigham and Women's Hospital, hayes at bwh.harvard.edu&lt;br /&gt;
*Host Institutes: Queen's University &amp;amp; Johns Hopkins University&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
*[[DBP2:JHU|JHU DBP 2]]&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=File:Dani-PerkStation.jpg&amp;diff=32515</id>
		<title>File:Dani-PerkStation.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=File:Dani-PerkStation.jpg&amp;diff=32515"/>
		<updated>2008-11-29T14:35:03Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DBP2:Queens:PerkStation&amp;diff=32514</id>
		<title>DBP2:Queens:PerkStation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DBP2:Queens:PerkStation&amp;diff=32514"/>
		<updated>2008-11-29T14:32:32Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:CT-Image-Overlay-Zinreich.JPG|600px|thumb|Image Overlay in clinical setup]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Perk-Station|600px|thumb|Perk Station in demonstration / training mode]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Back to [[DBP2:JHU|JHU DBP 2]]&lt;br /&gt;
&lt;br /&gt;
=PERK Station (Image overlay to perform/train image-guided needle interventions)=&lt;br /&gt;
&lt;br /&gt;
==Objective:==&lt;br /&gt;
&lt;br /&gt;
The objective of this project (PERK Station) is to develop an end-to-end solution, implemented as a Slicer 3 module, to assist in performing and training for image-guided percutaneous needle interventions. The software, along with its hardware, overlays the acquired CT/MR image on the patient/phantom. The physician/trainee looks at the patient/phantom through the mirror showing the image overlay; the CT/MR image appears to float inside the body with the correct size and position, as if the physician/trainee had 2D ‘X-ray vision’. &lt;br /&gt;
&lt;br /&gt;
==Description:==&lt;br /&gt;
&lt;br /&gt;
The PERK Station comprises image overlay, laser overlay, and standard tracked freehand navigation in a single suite. The end-to-end software module, along with its hardware, operates in two modes: a) Clinical, b) Training. &lt;br /&gt;
&lt;br /&gt;
* '''Clinical mode''': Clinical mode enables the physician to perform image-guided percutaneous needle biopsies. The workflow consists of four steps:&lt;br /&gt;
#Calibration: &lt;br /&gt;
#:The objective of this step is to register the image overlay device with the patient/phantom lying on the scanner table. In this stage, the software sends the image to the secondary monitor at its correct physical dimensions. The secondary monitor is mounted with a semi-transparent mirror at a 45-degree angle, so the image displayed on the monitor is projected onto the mirror and, when seen through the mirror, appears to float on the patient/phantom. Depending on how the secondary monitor is mounted with respect to the mirror, a horizontal or vertical flip may be required. Once the correct flip arrangement is chosen, the image in Slicer's slice viewer should correspond to what is seen through the mirror. The software then lets the user (typically the physician) translate/rotate the image as seen through the mirror so that it aligns with the fiducials mounted/strapped on the patient/phantom; this fiducial alignment achieves in-plane registration. For registration along the z-axis, the image projection plane should coincide with the laser-guide plane, which is also the plane of acquisition. Note that image registration takes place only on the secondary monitor: although the image is scaled, moved, rotated, and flipped there, the image displayed in Slicer's slice viewer remains undisturbed. This allows the physician/user to zoom in and out of the image in the slice viewer for planning without affecting the calibration. The calibration can also be saved to an XML file for later reuse. [[Image:PerkStationClinical_Calibrate.JPG|thumb|320px|center|Calibration step GUI.]]&lt;br /&gt;
#Planning: &lt;br /&gt;
#:Once the system is calibrated and registered with the patient, the software moves to the next step, in which the entry and target points are specified by mouse clicks. The software calculates the insertion angle with respect to the vertical and the insertion depth, and overlays the needle guide on the secondary monitor/mirror to assist the physician/user in performing the intervention. The plan can be reset in case the physician wishes to perform another needle intervention with the same image. [[Image:PerkStationClinical_Plan.JPG|thumb|320px|center|Plan step GUI.]]&lt;br /&gt;
#Insertion: &lt;br /&gt;
#:After planning, in the insertion step, additional depth-perception lines appear in 10 mm gradations to help the physician insert the needle to the correct depth. [[Image:PerkStationClinical_Insert.JPG|thumb|320px|center|Insert step GUI.]]&lt;br /&gt;
#Validation:&lt;br /&gt;
#:After the needle insertion is complete, the physician/user acquires a validation image/volume with the needle inside the patient/phantom; this volume/image is added to the scene. The physician/user can then mark the actual needle entry and end points to obtain error calculations. [[Image:PerkStationClinical_Validation.JPG|thumb|320px|center|Validate step GUI.]]&lt;br /&gt;
* '''Training mode''': Training mode provides feedback to trainees performing image-guided percutaneous needle interventions in a controlled environment. The workflow adds an 'Evaluation' step to the four steps described above. In the following description, only the differences are highlighted:&lt;br /&gt;
#Calibration: &lt;br /&gt;
#:In this step, a different wizard GUI is loaded; the software does not automatically display the image at its correct dimensions but instead relies on the student/user's input. Calculating the translation and rotation required to align the system is also left to the user.&lt;br /&gt;
#Planning: &lt;br /&gt;
#:In this step too, the insertion depth and insertion angle must be calculated and entered by the user.&lt;br /&gt;
#Insertion:&lt;br /&gt;
#:This step is essentially the same as in clinical mode.&lt;br /&gt;
#Validation:&lt;br /&gt;
#:This step is essentially the same as in clinical mode.&lt;br /&gt;
#Evaluation:&lt;br /&gt;
#:In this step, the errors made in the calculations are displayed to the student/user to objectively assess their performance in the intervention.&lt;br /&gt;
&lt;br /&gt;
==Progress:==&lt;br /&gt;
The software is almost complete in its functionality. It is a dynamically loadable module, which means none of the Slicer code needs to be modified to integrate it. For compliance with Slicer's interactive module architecture, the code still needs to be reviewed by one of the engineering core members.&lt;br /&gt;
&lt;br /&gt;
==Current deployment/usage:==&lt;br /&gt;
* Clinical mode: The software has been delivered to the team at Johns Hopkins University, Baltimore, and is currently being used in phantom and cadaver trials.&lt;br /&gt;
* Training mode: The integrated software and hardware system (hardware designed and developed by Paweena U-Thainual and Iulian Iordachita) debuted as part of a Fall undergraduate course in the School of Computing at Queen's University, taught by Dr. Gabor Fichtinger.&lt;br /&gt;
&lt;br /&gt;
==Software source code:==&lt;br /&gt;
Available on the NA-MIC Sandbox: [http://svn.na-mic.org/NAMICSandBox/trunk/Queens/PerkStationModule/ PerkStationModule]&lt;br /&gt;
==Software installation instructions:==&lt;br /&gt;
* Installing Slicer: go to the [http://wiki.na-mic.org/Wiki/index.php/IGT:ToolKit/Install-Slicer3 Slicer3 Install] site.&lt;br /&gt;
* Build the module from source, then copy the generated PerkStationModule.dll to ''SlicerInstallationDir''/lib/Slicer3/Modules&lt;br /&gt;
&lt;br /&gt;
==Tutorial (end-to-end):==&lt;br /&gt;
* Perk Station 'Clinical' mode: [[Media:PERK_STATION_WorkflowTutorialClinical.pdf‎|PERK_Station_ClinicalMode_Workflow]]&lt;br /&gt;
&lt;br /&gt;
==Publications==&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
*PI: Gabor Fichtinger, Queen’s University (gabor at cs.queensu.ca)&lt;br /&gt;
*Hardware: Paweena U-Thainual, Queen's University (paweena@cs.queensu.ca), Iulian Iordachita, Johns Hopkins University (iordachita@jhu.edu)&lt;br /&gt;
*Software Engineer: Siddharth Vikal, Queen’s University (vikal at cs.queensu.ca)&lt;br /&gt;
*JHU Software Engineer Support: Csaba Csoma, Johns Hopkins University, csoma at jhu.edu&lt;br /&gt;
*NA-MIC Engineering Contact: Katie Hayes, MSc, Brigham and Women's Hospital, hayes at bwh.harvard.edu&lt;br /&gt;
*Host Institutes: Queen's University &amp;amp; Johns Hopkins University&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
*[[DBP2:JHU|JHU DBP 2]]&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DBP2:Queens:PerkStation&amp;diff=32513</id>
		<title>DBP2:Queens:PerkStation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DBP2:Queens:PerkStation&amp;diff=32513"/>
		<updated>2008-11-29T14:28:54Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:CT-Image-Overlay-Zinreich.JPG|600px|thumb|Schematics of image overlay]]&lt;br /&gt;
&lt;br /&gt;
Back to [[DBP2:JHU|JHU DBP 2]]&lt;br /&gt;
&lt;br /&gt;
=PERK Station (Image overlay to perform/train image-guided needle interventions)=&lt;br /&gt;
&lt;br /&gt;
==Objective:==&lt;br /&gt;
&lt;br /&gt;
The objective of this project (PERK Station) is to develop an end-to-end solution, implemented as a Slicer 3 module, to assist in performing and training for image-guided percutaneous needle interventions. The software, along with its hardware, overlays the acquired CT/MR image on the patient/phantom. The physician/trainee looks at the patient/phantom through the mirror showing the image overlay; the CT/MR image appears to float inside the body with the correct size and position, as if the physician/trainee had 2D ‘X-ray vision’. &lt;br /&gt;
&lt;br /&gt;
==Description:==&lt;br /&gt;
&lt;br /&gt;
The PERK Station comprises image overlay, laser overlay, and standard tracked freehand navigation in a single suite. The end-to-end software module, along with its hardware, operates in two modes: a) Clinical, b) Training. &lt;br /&gt;
&lt;br /&gt;
* '''Clinical mode''': Clinical mode enables the physician to perform image-guided percutaneous needle biopsies. The workflow consists of four steps:&lt;br /&gt;
#Calibration: &lt;br /&gt;
#:The objective of this step is to register the image overlay device with the patient/phantom lying on the scanner table. In this stage, the software sends the image to the secondary monitor at its correct physical dimensions. The secondary monitor is mounted with a semi-transparent mirror at a 45-degree angle, so the image displayed on the monitor is projected onto the mirror and, when seen through the mirror, appears to float on the patient/phantom. Depending on how the secondary monitor is mounted with respect to the mirror, a horizontal or vertical flip may be required. Once the correct flip arrangement is chosen, the image in Slicer's slice viewer should correspond to what is seen through the mirror. The software then lets the user (typically the physician) translate/rotate the image as seen through the mirror so that it aligns with the fiducials mounted/strapped on the patient/phantom; this fiducial alignment achieves in-plane registration. For registration along the z-axis, the image projection plane should coincide with the laser-guide plane, which is also the plane of acquisition. Note that image registration takes place only on the secondary monitor: although the image is scaled, moved, rotated, and flipped there, the image displayed in Slicer's slice viewer remains undisturbed. This allows the physician/user to zoom in and out of the image in the slice viewer for planning without affecting the calibration. The calibration can also be saved to an XML file for later reuse. [[Image:PerkStationClinical_Calibrate.JPG|thumb|320px|center|Calibration step GUI.]]&lt;br /&gt;
#Planning: &lt;br /&gt;
#:Once the system is calibrated and registered with the patient, the software moves to the next step, in which the entry and target points are specified by mouse clicks. The software calculates the insertion angle with respect to the vertical and the insertion depth, and overlays the needle guide on the secondary monitor/mirror to assist the physician/user in performing the intervention. The plan can be reset in case the physician wishes to perform another needle intervention with the same image. [[Image:PerkStationClinical_Plan.JPG|thumb|320px|center|Plan step GUI.]]&lt;br /&gt;
#Insertion: &lt;br /&gt;
#:After planning, in the insertion step, additional depth-perception lines appear in 10 mm gradations to help the physician insert the needle to the correct depth. [[Image:PerkStationClinical_Insert.JPG|thumb|320px|center|Insert step GUI.]]&lt;br /&gt;
#Validation:&lt;br /&gt;
#:After the needle insertion is complete, the physician/user acquires a validation image/volume with the needle inside the patient/phantom; this volume/image is added to the scene. The physician/user can then mark the actual needle entry and end points to obtain error calculations. [[Image:PerkStationClinical_Validation.JPG|thumb|320px|center|Validate step GUI.]]&lt;br /&gt;
* '''Training mode''': Training mode provides feedback to trainees performing image-guided percutaneous needle interventions in a controlled environment. The workflow adds an 'Evaluation' step to the four steps described above. In the following description, only the differences are highlighted:&lt;br /&gt;
#Calibration: &lt;br /&gt;
#:In this step, a different wizard GUI is loaded; the software does not automatically display the image at its correct dimensions but instead relies on the student/user's input. Calculating the translation and rotation required to align the system is also left to the user.&lt;br /&gt;
#Planning: &lt;br /&gt;
#:In this step too, the insertion depth and insertion angle must be calculated and entered by the user.&lt;br /&gt;
#Insertion:&lt;br /&gt;
#:This step is essentially the same as in clinical mode.&lt;br /&gt;
#Validation:&lt;br /&gt;
#:This step is essentially the same as in clinical mode.&lt;br /&gt;
#Evaluation:&lt;br /&gt;
#:In this step, the errors made in the calculations are displayed to the student/user to objectively assess their performance in the intervention.&lt;br /&gt;
&lt;br /&gt;
==Progress:==&lt;br /&gt;
The software is almost complete in its functionality. It is a dynamically loadable module, which means none of the Slicer code needs to be modified to integrate it. For compliance with Slicer's interactive module architecture, the code still needs to be reviewed by one of the engineering core members.&lt;br /&gt;
&lt;br /&gt;
==Current deployment/usage:==&lt;br /&gt;
* Clinical mode: The software has been delivered to the team at Johns Hopkins University, Baltimore, and is currently being used in phantom and cadaver trials.&lt;br /&gt;
* Training mode: The integrated software and hardware system (hardware designed and developed by Paweena U-Thainual and Iulian Iordachita) debuted as part of a Fall undergraduate course in the School of Computing at Queen's University, taught by Dr. Gabor Fichtinger.&lt;br /&gt;
&lt;br /&gt;
==Software source code:==&lt;br /&gt;
Available on the NA-MIC Sandbox: [http://svn.na-mic.org/NAMICSandBox/trunk/Queens/PerkStationModule/ PerkStationModule]&lt;br /&gt;
==Software installation instructions:==&lt;br /&gt;
* Installing Slicer: go to the [http://wiki.na-mic.org/Wiki/index.php/IGT:ToolKit/Install-Slicer3 Slicer3 Install] site.&lt;br /&gt;
* Build the module from source, then copy the generated PerkStationModule.dll to ''SlicerInstallationDir''/lib/Slicer3/Modules&lt;br /&gt;
&lt;br /&gt;
==Tutorial (end-to-end):==&lt;br /&gt;
* Perk Station 'Clinical' mode: [[Media:PERK_STATION_WorkflowTutorialClinical.pdf‎|PERK_Station_ClinicalMode_Workflow]]&lt;br /&gt;
&lt;br /&gt;
==Publications==&lt;br /&gt;
&lt;br /&gt;
==Team==&lt;br /&gt;
*PI: Gabor Fichtinger, Queen’s University (gabor at cs.queensu.ca)&lt;br /&gt;
*Hardware: Paweena U-Thainual, Queen's University (paweena@cs.queensu.ca), Iulian Iordachita, Johns Hopkins University (iordachita@jhu.edu)&lt;br /&gt;
*Software Engineer: Siddharth Vikal, Queen’s University (vikal at cs.queensu.ca)&lt;br /&gt;
*JHU Software Engineer Support: Csaba Csoma, Johns Hopkins University, csoma at jhu.edu&lt;br /&gt;
*NA-MIC Engineering Contact: Katie Hayes, MSc, Brigham and Women's Hospital, hayes at bwh.harvard.edu&lt;br /&gt;
*Host Institutes: Queen's University &amp;amp; Johns Hopkins University&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
*[[DBP2:JHU|JHU DBP 2]]&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=File:CT-Image-Overlay-Zinreich.JPG&amp;diff=32512</id>
		<title>File:CT-Image-Overlay-Zinreich.JPG</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=File:CT-Image-Overlay-Zinreich.JPG&amp;diff=32512"/>
		<updated>2008-11-29T14:28:19Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Resources&amp;diff=30015</id>
		<title>Resources</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Resources&amp;diff=30015"/>
		<updated>2008-09-11T09:44:57Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Job Openings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== [[Collaborator:Resources|Resources for Collaborators]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information for investigators who would like to collaborate with NAMIC.&lt;br /&gt;
&lt;br /&gt;
=== Data ===&lt;br /&gt;
&lt;br /&gt;
All NA-MIC Data is available at the following link:  [[Data|NA-MIC Data]]&lt;br /&gt;
&lt;br /&gt;
=== Software: NA-MIC kit ===&lt;br /&gt;
&lt;br /&gt;
The NA-MIC Kit consists of all software that is being made available under the NA-MIC project. This software follows the NIH guidelines for open software development. In this section, we provide information about the components of the NA-MIC kit as well as supporting software tools that are being used by the software developers on the project.&lt;br /&gt;
&lt;br /&gt;
* [[NA-MIC-Kit|Software Resources for NA-MIC Kit]]&lt;br /&gt;
* [[Engineering:SandBox|Development Sandbox ]]&lt;br /&gt;
&lt;br /&gt;
=== Publications Guidelines and Resources ===&lt;br /&gt;
&lt;br /&gt;
The [[Publications:Main|publications page]] contains information on publications guidelines for NAMIC, the funding acknowledement text, as well as the acknowledgements/references associated with each of the data sets.&lt;br /&gt;
&lt;br /&gt;
=== Mailing Lists ===&lt;br /&gt;
&lt;br /&gt;
These are the mailing lists associated with NA-MIC. If you are a participant in the project, please make sure that you are signed up for all the mailing lists that apply to your role and interests in the projects. These lists are moderated and maintained by Kitware.&lt;br /&gt;
&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo NAMIC-Algo]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo-pi NAMIC-Algo PIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-all NAMIC-All]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio1 NAMIC-Bio1]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio2 NAMIC-Bio2]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-developers NAMIC-Developers]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dissemination NAMIC-Dissemination]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-eng NAMIC-Eng]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-leadership NAMIC-Leadership]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-mgt NAMIC-Mgt]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week na-mic-project-week]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-service NAMIC-Service]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-sitepis NAMIC-SitePIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-training NAMIC-Training]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dti NAMIC-DTI Community]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-shapeanalysis NAMIC-ShapeAnalysis Community]&lt;br /&gt;
&lt;br /&gt;
=== [[NIH-Page|NIH Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains useful information provided by our NIH officers.&lt;br /&gt;
&lt;br /&gt;
=== [[Mbirn:Main_Page|Morphometry BIRN Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information about the [http://www.nbirn.net Morphometry Biomedical Informatics Research Network] Project&lt;br /&gt;
&lt;br /&gt;
=== NA-MIC Powerpoint ===&lt;br /&gt;
&lt;br /&gt;
* [[Media:NA-MIC_Powerpoint_Template.ppt|NA-MIC Powerpoint Template]]&lt;br /&gt;
* [[Media:NAMIC-Intro-Feb-04-2005.ppt|NA-MIC introduction slides]]&lt;br /&gt;
&lt;br /&gt;
=== [[NAMIC_Logos_Templates|NAMIC Logos and Templates]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains links to files containing the NA-MIC logo and templates.&lt;br /&gt;
&lt;br /&gt;
=== Job Openings ===&lt;br /&gt;
&lt;br /&gt;
* [[Job_Opening:2008_02_19_HMS|2008-02-19 Image Computing Applications Engineer, Psychiatry Neuroimaging Laboratory, Harvard Medical School]]&lt;br /&gt;
* [http://www.cs.queensu.ca/~gabor/OpenJobs/ITK-Programmer.htm Image-Guided Surgery Applications Engineer at the Perk Lab, Queen's University, Canada]&lt;br /&gt;
&lt;br /&gt;
=== Wikis ===&lt;br /&gt;
&lt;br /&gt;
We are often asked about mediawiki and other wikis. Here is some [[Information_on_wikis|information on wikis]].&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Resources&amp;diff=30014</id>
		<title>Resources</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Resources&amp;diff=30014"/>
		<updated>2008-09-11T09:41:58Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Job Openings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== [[Collaborator:Resources|Resources for Collaborators]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information for investigators who would like to collaborate with NAMIC.&lt;br /&gt;
&lt;br /&gt;
=== Data ===&lt;br /&gt;
&lt;br /&gt;
All NA-MIC Data is available at the following link:  [[Data|NA-MIC Data]]&lt;br /&gt;
&lt;br /&gt;
=== Software: NA-MIC kit ===&lt;br /&gt;
&lt;br /&gt;
The NA-MIC Kit consists of all software that is being made available under the NA-MIC project. This software follows the NIH guidelines for open software development. In this section, we provide information about the components of the NA-MIC kit as well as supporting software tools that are being used by the software developers on the project.&lt;br /&gt;
&lt;br /&gt;
* [[NA-MIC-Kit|Software Resources for NA-MIC Kit]]&lt;br /&gt;
* [[Engineering:SandBox|Development Sandbox ]]&lt;br /&gt;
&lt;br /&gt;
=== Publications Guidelines and Resources ===&lt;br /&gt;
&lt;br /&gt;
The [[Publications:Main|publications page]] contains information on publications guidelines for NAMIC, the funding acknowledgement text, as well as the acknowledgements/references associated with each of the data sets.&lt;br /&gt;
&lt;br /&gt;
=== Mailing Lists ===&lt;br /&gt;
&lt;br /&gt;
These are the mailing lists associated with NA-MIC. If you are a participant in the project, please make sure that you are signed up for all the mailing lists that apply to your role and interests in the projects. These lists are moderated and maintained by Kitware.&lt;br /&gt;
&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo NAMIC-Algo]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo-pi NAMIC-Algo PIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-all NAMIC-All]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio1 NAMIC-Bio1]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio2 NAMIC-Bio2]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-developers NAMIC-Developers]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dissemination NAMIC-Dissemination]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-eng NAMIC-Eng]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-leadership NAMIC-Leadership]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-mgt NAMIC-Mgt]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week na-mic-project-week]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-service NAMIC-Service]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-sitepis NAMIC-SitePIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-training NAMIC-Training]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dti NAMIC-DTI Community]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-shapeanalysis NAMIC-ShapeAnalysis Community]&lt;br /&gt;
&lt;br /&gt;
=== [[NIH-Page|NIH Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains useful information provided by our NIH officers.&lt;br /&gt;
&lt;br /&gt;
=== [[Mbirn:Main_Page|Morphometry BIRN Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information about the [http://www.nbirn.net Morphometry Biomedical Informatics Research Network] Project&lt;br /&gt;
&lt;br /&gt;
=== NA-MIC Powerpoint ===&lt;br /&gt;
&lt;br /&gt;
* [[Media:NA-MIC_Powerpoint_Template.ppt|NA-MIC Powerpoint Template]]&lt;br /&gt;
* [[Media:NAMIC-Intro-Feb-04-2005.ppt|NA-MIC introduction slides]]&lt;br /&gt;
&lt;br /&gt;
=== [[NAMIC_Logos_Templates|NAMIC Logos and Templates]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains links to files containing the NA-MIC logo and templates.&lt;br /&gt;
&lt;br /&gt;
=== Job Openings ===&lt;br /&gt;
&lt;br /&gt;
* [[Job_Opening:2008_02_19_HMS|2008-02-19 Image Computing Applications Engineer, Psychiatry Neuroimaging Laboratory, Harvard Medical School]]&lt;br /&gt;
* http://www.cs.queensu.ca/~gabor/OpenJobs/ITK-Programmer.htm|2008-09-11 Image-Guided Surgery Applications Engineer at the Perk Lab, Queen's University, Canada&lt;br /&gt;
&lt;br /&gt;
=== Wikis ===&lt;br /&gt;
&lt;br /&gt;
We are often asked about mediawiki and other wikis. Here is some [[Information_on_wikis|information on wikis]].&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Resources&amp;diff=30013</id>
		<title>Resources</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Resources&amp;diff=30013"/>
		<updated>2008-09-11T09:40:49Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Job Openings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== [[Collaborator:Resources|Resources for Collaborators]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information for investigators who would like to collaborate with NAMIC.&lt;br /&gt;
&lt;br /&gt;
=== Data ===&lt;br /&gt;
&lt;br /&gt;
All NA-MIC Data is available at the following link:  [[Data|NA-MIC Data]]&lt;br /&gt;
&lt;br /&gt;
=== Software: NA-MIC kit ===&lt;br /&gt;
&lt;br /&gt;
The NA-MIC Kit consists of all software that is being made available under the NA-MIC project. This software follows the NIH guidelines for open software development. In this section, we provide information about the components of the NA-MIC kit as well as supporting software tools that are being used by the software developers on the project.&lt;br /&gt;
&lt;br /&gt;
* [[NA-MIC-Kit|Software Resources for NA-MIC Kit]]&lt;br /&gt;
* [[Engineering:SandBox|Development Sandbox ]]&lt;br /&gt;
&lt;br /&gt;
=== Publications Guidelines and Resources ===&lt;br /&gt;
&lt;br /&gt;
The [[Publications:Main|publications page]] contains information on publications guidelines for NAMIC, the funding acknowledgement text, as well as the acknowledgements/references associated with each of the data sets.&lt;br /&gt;
&lt;br /&gt;
=== Mailing Lists ===&lt;br /&gt;
&lt;br /&gt;
These are the mailing lists associated with NA-MIC. If you are a participant in the project, please make sure that you are signed up for all the mailing lists that apply to your role and interests in the projects. These lists are moderated and maintained by Kitware.&lt;br /&gt;
&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo NAMIC-Algo]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo-pi NAMIC-Algo PIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-all NAMIC-All]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio1 NAMIC-Bio1]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio2 NAMIC-Bio2]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-developers NAMIC-Developers]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dissemination NAMIC-Dissemination]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-eng NAMIC-Eng]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-leadership NAMIC-Leadership]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-mgt NAMIC-Mgt]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week na-mic-project-week]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-service NAMIC-Service]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-sitepis NAMIC-SitePIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-training NAMIC-Training]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dti NAMIC-DTI Community]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-shapeanalysis NAMIC-ShapeAnalysis Community]&lt;br /&gt;
&lt;br /&gt;
=== [[NIH-Page|NIH Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains useful information provided by our NIH officers.&lt;br /&gt;
&lt;br /&gt;
=== [[Mbirn:Main_Page|Morphometry BIRN Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information about the [http://www.nbirn.net Morphometry Biomedical Informatics Research Network] Project&lt;br /&gt;
&lt;br /&gt;
=== NA-MIC Powerpoint ===&lt;br /&gt;
&lt;br /&gt;
* [[Media:NA-MIC_Powerpoint_Template.ppt|NA-MIC Powerpoint Template]]&lt;br /&gt;
* [[Media:NAMIC-Intro-Feb-04-2005.ppt|NA-MIC introduction slides]]&lt;br /&gt;
&lt;br /&gt;
=== [[NAMIC_Logos_Templates|NAMIC Logos and Templates]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains links to files containing the NA-MIC logo and templates.&lt;br /&gt;
&lt;br /&gt;
=== Job Openings ===&lt;br /&gt;
&lt;br /&gt;
* [[Job_Opening:2008_02_19_HMS|2008-02-19 Image Computing Applications Engineer, Psychiatry Neuroimaging Laboratory, Harvard Medical School]]&lt;br /&gt;
* [[http://www.cs.queensu.ca/~gabor/OpenJobs/ITK-Programmer.htm|2008-09-11 Image-Guided Surgery Applications Engineer at the Perk Lab, Queen's University, Canada]]&lt;br /&gt;
&lt;br /&gt;
=== Wikis ===&lt;br /&gt;
&lt;br /&gt;
We are often asked about mediawiki and other wikis. Here is some [[Information_on_wikis|information on wikis]].&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Resources&amp;diff=30012</id>
		<title>Resources</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Resources&amp;diff=30012"/>
		<updated>2008-09-11T09:38:53Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Job Openings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== [[Collaborator:Resources|Resources for Collaborators]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information for investigators who would like to collaborate with NAMIC.&lt;br /&gt;
&lt;br /&gt;
=== Data ===&lt;br /&gt;
&lt;br /&gt;
All NA-MIC Data is available at the following link:  [[Data|NA-MIC Data]]&lt;br /&gt;
&lt;br /&gt;
=== Software: NA-MIC kit ===&lt;br /&gt;
&lt;br /&gt;
The NA-MIC Kit consists of all software that is being made available under the NA-MIC project. This software follows the NIH guidelines for open software development. In this section, we provide information about the components of the NA-MIC kit as well as supporting software tools that are being used by the software developers on the project.&lt;br /&gt;
&lt;br /&gt;
* [[NA-MIC-Kit|Software Resources for NA-MIC Kit]]&lt;br /&gt;
* [[Engineering:SandBox|Development Sandbox ]]&lt;br /&gt;
&lt;br /&gt;
=== Publications Guidelines and Resources ===&lt;br /&gt;
&lt;br /&gt;
The [[Publications:Main|publications page]] contains information on publications guidelines for NAMIC, the funding acknowledgement text, as well as the acknowledgements/references associated with each of the data sets.&lt;br /&gt;
&lt;br /&gt;
=== Mailing Lists ===&lt;br /&gt;
&lt;br /&gt;
These are the mailing lists associated with NA-MIC. If you are a participant in the project, please make sure that you are signed up for all the mailing lists that apply to your role and interests in the projects. These lists are moderated and maintained by Kitware.&lt;br /&gt;
&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo NAMIC-Algo]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo-pi NAMIC-Algo PIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-all NAMIC-All]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio1 NAMIC-Bio1]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio2 NAMIC-Bio2]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-developers NAMIC-Developers]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dissemination NAMIC-Dissemination]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-eng NAMIC-Eng]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-leadership NAMIC-Leadership]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-mgt NAMIC-Mgt]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week na-mic-project-week]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-service NAMIC-Service]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-sitepis NAMIC-SitePIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-training NAMIC-Training]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dti NAMIC-DTI Community]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-shapeanalysis NAMIC-ShapeAnalysis Community]&lt;br /&gt;
&lt;br /&gt;
=== [[NIH-Page|NIH Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains useful information provided by our NIH officers.&lt;br /&gt;
&lt;br /&gt;
=== [[Mbirn:Main_Page|Morphometry BIRN Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information about the [http://www.nbirn.net Morphometry Biomedical Informatics Research Network] Project&lt;br /&gt;
&lt;br /&gt;
=== NA-MIC Powerpoint ===&lt;br /&gt;
&lt;br /&gt;
* [[Media:NA-MIC_Powerpoint_Template.ppt|NA-MIC Powerpoint Template]]&lt;br /&gt;
* [[Media:NAMIC-Intro-Feb-04-2005.ppt|NA-MIC introduction slides]]&lt;br /&gt;
&lt;br /&gt;
=== [[NAMIC_Logos_Templates|NAMIC Logos and Templates]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains links to files containing the NA-MIC logo and templates.&lt;br /&gt;
&lt;br /&gt;
=== Job Openings ===&lt;br /&gt;
&lt;br /&gt;
* [[Job_Opening:2008_02_19_HMS|2008-02-19 Image Computing Applications Engineer, Psychiatry Neuroimaging Laboratory, Harvard Medical School]]&lt;br /&gt;
* [[www.cs.queensu.ca/~gabor/OpenJobs/ITK-Programmer.htm|2008-09-11 Image-Guided Surgery Applications Engineer at the Perk Lab, Queen's University, Canada]]&lt;br /&gt;
&lt;br /&gt;
=== Wikis ===&lt;br /&gt;
&lt;br /&gt;
We are often asked about mediawiki and other wikis. Here is some [[Information_on_wikis|information on wikis]].&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Resources&amp;diff=30011</id>
		<title>Resources</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Resources&amp;diff=30011"/>
		<updated>2008-09-11T09:31:40Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Job Openings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== [[Collaborator:Resources|Resources for Collaborators]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information for investigators who would like to collaborate with NAMIC.&lt;br /&gt;
&lt;br /&gt;
=== Data ===&lt;br /&gt;
&lt;br /&gt;
All NA-MIC Data is available at the following link:  [[Data|NA-MIC Data]]&lt;br /&gt;
&lt;br /&gt;
=== Software: NA-MIC kit ===&lt;br /&gt;
&lt;br /&gt;
The NA-MIC Kit consists of all software that is being made available under the NA-MIC project. This software follows the NIH guidelines for open software development. In this section, we provide information about the components of the NA-MIC kit as well as supporting software tools that are being used by the software developers on the project.&lt;br /&gt;
&lt;br /&gt;
* [[NA-MIC-Kit|Software Resources for NA-MIC Kit]]&lt;br /&gt;
* [[Engineering:SandBox|Development Sandbox ]]&lt;br /&gt;
&lt;br /&gt;
=== Publications Guidelines and Resources ===&lt;br /&gt;
&lt;br /&gt;
The [[Publications:Main|publications page]] contains information on publications guidelines for NAMIC, the funding acknowledgement text, as well as the acknowledgements/references associated with each of the data sets.&lt;br /&gt;
&lt;br /&gt;
=== Mailing Lists ===&lt;br /&gt;
&lt;br /&gt;
These are the mailing lists associated with NA-MIC. If you are a participant in the project, please make sure that you are signed up for all the mailing lists that apply to your role and interests in the projects. These lists are moderated and maintained by Kitware.&lt;br /&gt;
&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo NAMIC-Algo]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo-pi NAMIC-Algo PIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-all NAMIC-All]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio1 NAMIC-Bio1]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio2 NAMIC-Bio2]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-developers NAMIC-Developers]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dissemination NAMIC-Dissemination]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-eng NAMIC-Eng]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-leadership NAMIC-Leadership]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-mgt NAMIC-Mgt]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week na-mic-project-week]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-service NAMIC-Service]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-sitepis NAMIC-SitePIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-training NAMIC-Training]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dti NAMIC-DTI Community]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-shapeanalysis NAMIC-ShapeAnalysis Community]&lt;br /&gt;
&lt;br /&gt;
=== [[NIH-Page|NIH Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains useful information provided by our NIH officers.&lt;br /&gt;
&lt;br /&gt;
=== [[Mbirn:Main_Page|Morphometry BIRN Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information about the [http://www.nbirn.net Morphometry Biomedical Informatics Research Network] Project&lt;br /&gt;
&lt;br /&gt;
=== NA-MIC Powerpoint ===&lt;br /&gt;
&lt;br /&gt;
* [[Media:NA-MIC_Powerpoint_Template.ppt|NA-MIC Powerpoint Template]]&lt;br /&gt;
* [[Media:NAMIC-Intro-Feb-04-2005.ppt|NA-MIC introduction slides]]&lt;br /&gt;
&lt;br /&gt;
=== [[NAMIC_Logos_Templates|NAMIC Logos and Templates]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains links to files containing the NA-MIC logo and templates.&lt;br /&gt;
&lt;br /&gt;
=== Job Openings ===&lt;br /&gt;
&lt;br /&gt;
* [[Job_Opening:2008_02_19_HMS|2008-02-19 Image Computing Applications Engineer, Psychiatry Neuroimaging Laboratory, Harvard Medical School]]&lt;br /&gt;
* [[Job_Opening:Queens|2008-09-11 Image-Guided Surgery Applications Engineer at the Perk Lab, Queen's University, Canada]]&lt;br /&gt;
&lt;br /&gt;
=== Wikis ===&lt;br /&gt;
&lt;br /&gt;
We are often asked about mediawiki and other wikis. Here is some [[Information_on_wikis|information on wikis]].&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Resources&amp;diff=30010</id>
		<title>Resources</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Resources&amp;diff=30010"/>
		<updated>2008-09-11T09:30:30Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Job Openings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== [[Collaborator:Resources|Resources for Collaborators]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information for investigators who would like to collaborate with NAMIC.&lt;br /&gt;
&lt;br /&gt;
=== Data ===&lt;br /&gt;
&lt;br /&gt;
All NA-MIC Data is available at the following link:  [[Data|NA-MIC Data]]&lt;br /&gt;
&lt;br /&gt;
=== Software: NA-MIC kit ===&lt;br /&gt;
&lt;br /&gt;
The NA-MIC Kit consists of all software that is being made available under the NA-MIC project. This software follows the NIH guidelines for open software development. In this section, we provide information about the components of the NA-MIC kit as well as supporting software tools that are being used by the software developers on the project.&lt;br /&gt;
&lt;br /&gt;
* [[NA-MIC-Kit|Software Resources for NA-MIC Kit]]&lt;br /&gt;
* [[Engineering:SandBox|Development Sandbox ]]&lt;br /&gt;
&lt;br /&gt;
=== Publications Guidelines and Resources ===&lt;br /&gt;
&lt;br /&gt;
The [[Publications:Main|publications page]] contains information on publications guidelines for NAMIC, the funding acknowledgement text, as well as the acknowledgements/references associated with each of the data sets.&lt;br /&gt;
&lt;br /&gt;
=== Mailing Lists ===&lt;br /&gt;
&lt;br /&gt;
These are the mailing lists associated with NA-MIC. If you are a participant in the project, please make sure that you are signed up for all the mailing lists that apply to your role and interests in the projects. These lists are moderated and maintained by Kitware.&lt;br /&gt;
&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo NAMIC-Algo]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo-pi NAMIC-Algo PIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-all NAMIC-All]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio1 NAMIC-Bio1]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio2 NAMIC-Bio2]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-developers NAMIC-Developers]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dissemination NAMIC-Dissemination]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-eng NAMIC-Eng]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-leadership NAMIC-Leadership]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-mgt NAMIC-Mgt]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week na-mic-project-week]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-service NAMIC-Service]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-sitepis NAMIC-SitePIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-training NAMIC-Training]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dti NAMIC-DTI Community]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-shapeanalysis NAMIC-ShapeAnalysis Community]&lt;br /&gt;
&lt;br /&gt;
=== [[NIH-Page|NIH Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains useful information provided by our NIH officers.&lt;br /&gt;
&lt;br /&gt;
=== [[Mbirn:Main_Page|Morphometry BIRN Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information about the [http://www.nbirn.net Morphometry Biomedical Informatics Research Network] Project&lt;br /&gt;
&lt;br /&gt;
=== NA-MIC Powerpoint ===&lt;br /&gt;
&lt;br /&gt;
* [[Media:NA-MIC_Powerpoint_Template.ppt|NA-MIC Powerpoint Template]]&lt;br /&gt;
* [[Media:NAMIC-Intro-Feb-04-2005.ppt|NA-MIC introduction slides]]&lt;br /&gt;
&lt;br /&gt;
=== [[NAMIC_Logos_Templates|NAMIC Logos and Templates]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains links to files containing the NA-MIC logo and templates.&lt;br /&gt;
&lt;br /&gt;
=== Job Openings ===&lt;br /&gt;
&lt;br /&gt;
* [[Job_Opening:2008_02_19_HMS|2008-02-19 Image Computing Applications Engineer, Psychiatry Neuroimaging Laboratory, Harvard Medical School]]&lt;br /&gt;
* [[Job_Opening:Queens|2008-09-11 Image Computing Applications Engineer -- Image Guided Surgery, Perk Lab, Queen's University, Canada]]&lt;br /&gt;
&lt;br /&gt;
=== Wikis ===&lt;br /&gt;
&lt;br /&gt;
We are often asked about mediawiki and other wikis. Here is some [[Information_on_wikis|information on wikis]].&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Resources&amp;diff=30009</id>
		<title>Resources</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Resources&amp;diff=30009"/>
		<updated>2008-09-11T09:29:24Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Job Openings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== [[Collaborator:Resources|Resources for Collaborators]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information for investigators who would like to collaborate with NAMIC.&lt;br /&gt;
&lt;br /&gt;
=== Data ===&lt;br /&gt;
&lt;br /&gt;
All NA-MIC Data is available at the following link:  [[Data|NA-MIC Data]]&lt;br /&gt;
&lt;br /&gt;
=== Software: NA-MIC kit ===&lt;br /&gt;
&lt;br /&gt;
The NA-MIC Kit consists of all software that is being made available under the NA-MIC project. This software follows the NIH guidelines for open software development. In this section, we provide information about the components of the NA-MIC kit as well as supporting software tools that are being used by the software developers on the project.&lt;br /&gt;
&lt;br /&gt;
* [[NA-MIC-Kit|Software Resources for NA-MIC Kit]]&lt;br /&gt;
* [[Engineering:SandBox|Development Sandbox ]]&lt;br /&gt;
&lt;br /&gt;
=== Publications Guidelines and Resources ===&lt;br /&gt;
&lt;br /&gt;
The [[Publications:Main|publications page]] contains information on publications guidelines for NAMIC, the funding acknowledgement text, as well as the acknowledgements/references associated with each of the data sets.&lt;br /&gt;
&lt;br /&gt;
=== Mailing Lists ===&lt;br /&gt;
&lt;br /&gt;
These are the mailing lists associated with NA-MIC. If you are a participant in the project, please make sure that you are signed up for all the mailing lists that apply to your role and interests in the projects. These lists are moderated and maintained by Kitware.&lt;br /&gt;
&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo NAMIC-Algo]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-algo-pi NAMIC-Algo PIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-all NAMIC-All]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio1 NAMIC-Bio1]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-bio2 NAMIC-Bio2]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-developers NAMIC-Developers]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dissemination NAMIC-Dissemination]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-eng NAMIC-Eng]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-leadership NAMIC-Leadership]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-mgt NAMIC-Mgt]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week na-mic-project-week]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-service NAMIC-Service]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-sitepis NAMIC-SitePIs]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-training NAMIC-Training]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-dti NAMIC-DTI Community]&lt;br /&gt;
* [http://public.kitware.com/cgi-bin/mailman/listinfo/namic-shapeanalysis NAMIC-ShapeAnalysis Community]&lt;br /&gt;
&lt;br /&gt;
=== [[NIH-Page|NIH Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains useful information provided by our NIH officers.&lt;br /&gt;
&lt;br /&gt;
=== [[Mbirn:Main_Page|Morphometry BIRN Page]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains information about the [http://www.nbirn.net Morphometry Biomedical Informatics Research Network] Project&lt;br /&gt;
&lt;br /&gt;
=== NA-MIC Powerpoint ===&lt;br /&gt;
&lt;br /&gt;
* [[Media:NA-MIC_Powerpoint_Template.ppt|NA-MIC Powerpoint Template]]&lt;br /&gt;
* [[Media:NAMIC-Intro-Feb-04-2005.ppt|NA-MIC introduction slides]]&lt;br /&gt;
&lt;br /&gt;
=== [[NAMIC_Logos_Templates|NAMIC Logos and Templates]] ===&lt;br /&gt;
&lt;br /&gt;
* This page contains links to files containing the NA-MIC logo and templates.&lt;br /&gt;
&lt;br /&gt;
=== Job Openings ===&lt;br /&gt;
&lt;br /&gt;
* [[Job_Opening:2008_02_19_HMS|2008-02-19 Image Computing Applications Engineer, Psychiatry Neuroimaging Laboratory, Harvard Medical School]]&lt;br /&gt;
* [[Job_Opening:Queens|Image Computing Applications Engineer -- Image Guided Surgery, Perk Lab, Queen's University, Canada]]&lt;br /&gt;
&lt;br /&gt;
=== Wikis ===&lt;br /&gt;
&lt;br /&gt;
We are often asked about mediawiki and other wikis. Here is some [[Information_on_wikis|information on wikis]].&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Miccai_2008_Prostate_Workshop&amp;diff=28501</id>
		<title>Miccai 2008 Prostate Workshop</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Miccai_2008_Prostate_Workshop&amp;diff=28501"/>
		<updated>2008-07-25T20:30:53Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Confirmed attendees */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Workshop title=&lt;br /&gt;
Prostate image analysis and computer-assisted intervention&lt;br /&gt;
&lt;br /&gt;
=Place, Time=&lt;br /&gt;
*MICCAI 2008, New York University, New York, NY&lt;br /&gt;
*Room # to be announced&lt;br /&gt;
*8am-12pm, September 10, 2008&lt;br /&gt;
&lt;br /&gt;
=Registration=&lt;br /&gt;
&lt;br /&gt;
Fee: $100-150 (see registration site for details)&lt;br /&gt;
&lt;br /&gt;
Please visit the registration site for the main MICCAI conference.&lt;br /&gt;
https://www.registrationassistant.com/p/rg.asp?Event=234DD6A3C6421F5F40C1B&lt;br /&gt;
&lt;br /&gt;
Those who are not attending the main conference can also register for the workshop only at the registration site.&lt;br /&gt;
&lt;br /&gt;
=Keywords=&lt;br /&gt;
Prostate, image-guided therapy, medical image analysis, focused ultrasound, novel image acquisition, robotics, devices&lt;br /&gt;
&lt;br /&gt;
=Sponsor=&lt;br /&gt;
National Center for Image Guided Therapy, Brigham and Women’s Hospital and Harvard Medical School&lt;br /&gt;
&lt;br /&gt;
=General Organizers=&lt;br /&gt;
&lt;br /&gt;
Clare Tempany, MD&lt;br /&gt;
&lt;br /&gt;
Professor of Radiology&lt;br /&gt;
&lt;br /&gt;
Vice Chairman of Radiology Research&lt;br /&gt;
&lt;br /&gt;
Brigham &amp;amp; Women's Hospital&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nobuhiko Hata, PhD&lt;br /&gt;
&lt;br /&gt;
Technical Director, Image guided Therapy Program&lt;br /&gt;
&lt;br /&gt;
Assistant Professor of Radiology&lt;br /&gt;
&lt;br /&gt;
Brigham and Women’s Hospital and Harvard Medical School&lt;br /&gt;
&lt;br /&gt;
=Workshop secretary=&lt;br /&gt;
Nobuhiko Hata, hata@bwh.harvard.edu&lt;br /&gt;
617-732-5809&lt;br /&gt;
Department of Radiology, Brigham and Women’s Hospital&lt;br /&gt;
75 Francis St, Boston, MA&lt;br /&gt;
www.nicgt.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Objectives=&lt;br /&gt;
The objective of this workshop is to discuss the state of the art in medical image processing and computer-assisted intervention for the prostate. In this half-day workshop, we will share our recent advances and identify translational technologies useful in the clinical application of prostate diagnosis and therapy. We will also identify possible synergies among the participants to inform future strategy. The agenda will include plenary talks reviewing the clinical and technical aspects of prostate imaging and therapy, as well as 10+ abstract presentations. One of the organizers of the workshop has been committed to MICCAI as a clinician.&lt;br /&gt;
&lt;br /&gt;
=Program=&lt;br /&gt;
&lt;br /&gt;
*8:00am: Welcome remarks by Nobuhiko Hata&lt;br /&gt;
*8:10 - 8:30am:  Talk, Prostate image analysis and intervention - Clinician's perspective, Clare Tempany, Brigham and Women's Hospital&lt;br /&gt;
*8:30 - 9:30am: abstract talks&lt;br /&gt;
**15 min + 5 min QA x 3&lt;br /&gt;
==Intervention==&lt;br /&gt;
#Podder et al, Ultrasound Image-guided Robotic Brachytherapy Systems for Prostate Seed Implant &lt;br /&gt;
#Tse et al, Haptic Device for MR-guided Transrectal Prostate Biopsy&lt;br /&gt;
#Tokuda, et al, Software / Hardware Integration for MRI-guided Robotic Prostate Intervention using Open IGT Link&lt;br /&gt;
&lt;br /&gt;
*9:30am-9:40am&lt;br /&gt;
**Coffee Break&lt;br /&gt;
&lt;br /&gt;
*9:40am-11:20am&lt;br /&gt;
==Image analysis for intervention==&lt;br /&gt;
#Ou et al, Optimized Biopsy Procedures for Estimating Gleason Score and Prostate Cancer Volume&lt;br /&gt;
#Vikal, et al, Prostate contouring in MRI guided biopsy&lt;br /&gt;
#Mamou et al, Ultrasonic detection and imaging of two common types of prostate-brachytherapy seeds using singular spectrum analysis&lt;br /&gt;
==Imaging==&lt;br /&gt;
#Wen et al, Imaging of the prostate with vibro-elastography: preliminary patient results &lt;br /&gt;
#Madabhushi et al, Multi-protocol Prostate MR Image Analysis: Image Segmentation, Registration, and Computer-aided Diagnosis&lt;br /&gt;
&lt;br /&gt;
*11:20am-11:30am&lt;br /&gt;
**Coffee Break&lt;br /&gt;
&lt;br /&gt;
*11:30am-12:30pm&lt;br /&gt;
==Image analysis for diagnosis==&lt;br /&gt;
#Dowling et al, Fast automatic correction of non-rigid motion artefacts in MRI of the abdomen&lt;br /&gt;
#Aboofazeli et al, Automated detection of prostate cancer using wavelet transform features of ultrasound RF time series &lt;br /&gt;
#Mahdavi et al, 3D Prostate Segmentation in Ultrasound Images using Image Deformation and Shape Fitting&lt;br /&gt;
&lt;br /&gt;
*12:30-12:40pm&lt;br /&gt;
#Concluding remarks, Clare Tempany&lt;br /&gt;
&lt;br /&gt;
=Accepted Oral Papers=&lt;br /&gt;
--[[User:Noby|Noby]] 21:42, 14 July 2008 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Intervention==&lt;br /&gt;
#Tse et al, Haptic Device for MR-guided Transrectal Prostate Biopsy&lt;br /&gt;
#Podder et al, Ultrasound Image-guided Robotic Brachytherapy Systems for Prostate Seed Implant &lt;br /&gt;
#Tokuda, et al, Software / Hardware Integration for MRI-guided Robotic Prostate Intervention using Open IGT Link&lt;br /&gt;
&lt;br /&gt;
==Image analysis for intervention==&lt;br /&gt;
#Ou et al, Optimized Biopsy Procedures for Estimating Gleason Score and Prostate Cancer Volume&lt;br /&gt;
#Vikal, et al, Prostate contouring in MRI guided biopsy&lt;br /&gt;
#Mamou et al, Ultrasonic detection and imaging of two common types of prostate-brachytherapy seeds using singular spectrum analysis&lt;br /&gt;
&lt;br /&gt;
==Image analysis for diagnosis==&lt;br /&gt;
#Dowling et al, Fast automatic correction of non-rigid motion artefacts in MRI of the abdomen&lt;br /&gt;
#Aboofazeli et al, Automated detection of prostate cancer using wavelet transform features of ultrasound RF time series &lt;br /&gt;
#Mahdavi et al, 3D Prostate Segmentation in Ultrasound Images using Image Deformation and Shape Fitting&lt;br /&gt;
&lt;br /&gt;
==Imaging==&lt;br /&gt;
#Wen et al, Imaging of the prostate with vibro-elastography: preliminary patient results &lt;br /&gt;
#Madabhushi et al, Multi-protocol Prostate MR Image Analysis: Image Segmentation, Registration, and Computer-aided Diagnosis&lt;br /&gt;
&lt;br /&gt;
=Confirmed attendees=&lt;br /&gt;
*Nobuhiko Hata, Brigham and Women's Hospital&lt;br /&gt;
*Clare Tempany, Brigham and Women's Hospital&lt;br /&gt;
*Junichi Tokuda, Brigham and Women's Hospital&lt;br /&gt;
*Gabor Fichtinger, Queen's University&lt;br /&gt;
&lt;br /&gt;
=Call For Abstracts (closed)=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Abstracts will be peer-reviewed by a scientific program committee. &lt;br /&gt;
&lt;br /&gt;
*Abstracts should be at least one page and at most two pages. The abstracts will be published as part of the conference proceedings.&lt;br /&gt;
&lt;br /&gt;
*Authors should submit a PDF file that will print on a PostScript printer. Electronic submission is required.&lt;br /&gt;
&lt;br /&gt;
*The abstracts should be submitted to the conference secretary, Nobuhiko Hata, by email (hata@bwh.harvard.edu).&lt;br /&gt;
&lt;br /&gt;
*Important Dates&lt;br /&gt;
&lt;br /&gt;
    * (Extended) abstracts due: July 20, 2008&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2008_Progress_Report_NIH_QnA&amp;diff=28478</id>
		<title>2008 Progress Report NIH QnA</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2008_Progress_Report_NIH_QnA&amp;diff=28478"/>
		<updated>2008-07-24T23:24:25Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Queens Response (Gabor Fichtinger) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Please address promptly (by July 30th, 5pm)=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We received [[Media:5_U54_EB005149-05_NIH_Response.pdf|this letter]] from our NIH program officers.  Below are excerpted specific questions from this letter, with names of NA-MIC team members responsible for the answers.&lt;br /&gt;
&lt;br /&gt;
=MGH=&lt;br /&gt;
*What progress has been made with the MGH subcontract in the past year?&lt;br /&gt;
**There's no progress report summary following the Statement of Intent for MGH (page 42). We recognize that MGH's budget was significantly reduced this past year, as a good portion was re-budgeted to Washington University beginning in December 2007. Nevertheless, we expect some progress to have been made.&lt;br /&gt;
**The scientific progress report lists MGH as participating in some efforts (Shape Based Segmentation and Registration; Spherical Wavelets; and Shape Analysis with Overcomplete Wavelets), but there are no MGH personnel listed as key investigators for that section.&lt;br /&gt;
*With regard to future work (Dr. Fischl's consultation), the statement of work refers to &amp;quot;integration of FreeSurfer with ITK and 3D Slicer.&amp;quot; What does that mean? Wasn't that effort abandoned?&lt;br /&gt;
*The progress report is somewhat confusing as to whether FreeSurfer is being used to study cortical correspondence. On page 250 (Section 3.2) a collaboration between MGH and MIT is mentioned with respect to cortical correspondence, while the UNC progress report states that MOL is being used to explore cortical correspondence. Please clarify.&lt;br /&gt;
==MGH Response ('''Bruce Fischl''')==&lt;br /&gt;
&lt;br /&gt;
The overcomplete wavelet project has resulted in a conference publication, and we are actively working on its completion. We are currently working on incorporating geometric invariants to replace the coordinate functions, and also on augmenting our existing neonate datasets with one or two more manually labeled cases so that we can assess how well our model fits the data.&lt;br /&gt;
&lt;br /&gt;
Going forward we intend to improve the FreeSurfer/Slicer/ITK interoperability. Towards this end we have worked with Core 2 to incorporate support for our internal formats in Slicer, and intend to include support for ITK formats such as NRRD directly into FreeSurfer. This will significantly increase the ease of development of algorithms that take advantage of the two platforms.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We also have an ongoing collaboration with Drs. Polina Golland, Mert Sabuncu and Thomas Yeo at MIT regarding improving cortical correspondence. Specifically, the existing FreeSurfer algorithms are being used to generate a baseline state-of-the-art accuracy measure that will be used to assess the performance of novel registration algorithms. Here we are working on fast diffeomorphic registration, and have a recent conference paper describing the &amp;quot;spherical demons&amp;quot; algorithm that improves execution time significantly without sacrificing performance. In addition, as part of Mr. Yeo's Ph.D. research we are developing techniques for the optimal alignment of either architectonically or functionally defined cortical regions, defined using whole-brain histology and functional MRI respectively.&lt;br /&gt;
&lt;br /&gt;
=UCLA=&lt;br /&gt;
&lt;br /&gt;
*What progress has been made with the UCLA subcontract in the past year?&lt;br /&gt;
**There are no publications listed in progress report. A search of NA-MIC publications database shows only one paper with Art Toga or Nathan Hageman's name on it, and that's the iTools paper (from the Software and Data Integration Working Group).&lt;br /&gt;
**Timeline information is out of date. It has not been updated to reflect the changes in the statement of work that was agreed upon in August 2007.&lt;br /&gt;
==UCLA Response ('''Arthur Toga''')==&lt;br /&gt;
&lt;br /&gt;
=Queens=&lt;br /&gt;
*Regarding the Brachytherapy Needle Positioning Robot Integration DBP:&lt;br /&gt;
**Which grant is funding the patient data collection? If it is supported by NA-MIC, please let me know because NIBIB will have to approve your Data Safety Monitoring Plan.&lt;br /&gt;
**There seems to be some scientific overlap between this project and other NIH-funded grants, R01 EB002963 (PI: Whitcomb [previously Fichtinger]) and R01 CA111288 (PI:Tempany). The statement of work for the NA-MIC subcontract states, &amp;quot;The deliverables of the contract is professional-grade clinical software engineering of the above modules based on the NA-MIC toolkit (to the extent reasonable and possible) and to develop end applications based on Slicer, for clinical trials in image-guided prostate biopsy.&amp;quot; However, this goal also falls within the System Integration aim (Aim 3) of the NCI grant and the System Integration aim (Aim 3) of the NIBIB grant. Please provide us with additional clarification to distinguish these projects in terms of their aims. Also, please confirm that Dr. Gobbi and Mr. Vikal are being supported at no more than 3 and 6 months, respectively, by other grants (as they are listed for 9 and 6 months on the NAMIC subcontract).&lt;br /&gt;
==Queens Response ('''Gabor Fichtinger''')==&lt;br /&gt;
Q1: Which grant is funding the patient data collection?  -- The Queen's NAMIC team does not conduct clinical trials. For development and testing purposes, we use previously acquired anonymous image data provided by our clinical collaborators. The data we use does not contain patient identification information. Under the NAMIC grant, we shall not perform clinical trials. We will hand over the developed system to clinical collaborators who are supported by NAMIC.&lt;br /&gt;
&lt;br /&gt;
Q2: There seems to be some scientific overlap -- There are indeed close synergies but no overlap. First: R01 EB002963 (PI: Whitcomb) concentrated on developing a transrectal prostate biopsy robot and its rapid clinical testing. For this purpose, a straightforward and minimalistic system was built to operate the robot. This system is not based on Slicer. The objective of our RoadMap project is to empower this robot with the superior capabilities of Slicer and the NAMIC toolkit. The grants therefore are ideally synergistic, without any overlap. Also importantly, R01 EB002963 (PI: Whitcomb) will end on July 31, 2008, before the anniversary date of the NAMIC grant. Second, R01 CA111288 (PI:Tempany) is a Biomedical Research Partnership (BRP) that includes Acoustic Medsystems, Inc., a company whose responsibility is system integration within the company's quality control environment. Dr. Fichtinger's NAMIC team at Queen's is not involved in system development for R01 CA111288 (PI:Tempany).&lt;br /&gt;
&lt;br /&gt;
Q3: We confirm that Dr. Gobbi and Mr. Vikal are being supported at no more than 3 and 6 months, respectively by other grants (as they are listed for 9 and 6 months on the NAMIC subcontract).&lt;br /&gt;
&lt;br /&gt;
=Kitware=&lt;br /&gt;
*Kitware is working on a text, &amp;quot;Practical Software Process&amp;quot;, to document the NA-MIC software process. How will that be distributed? Will NA-MIC funds cover all the costs so that the text can be distributed free of charge?&lt;br /&gt;
==Kitware Response ('''Will Schroeder''')==&lt;br /&gt;
&lt;br /&gt;
=Dissemination Core=&lt;br /&gt;
*What does it mean when you say that NA-MIC hosted the Workshop on Open Source and Open Data at MICCAI 2007? Did NA-MIC provide financial support? Set the agenda? Invite participants? Please clarify.&lt;br /&gt;
==Dissemination Core Response ('''Tina Kapur''')==&lt;br /&gt;
NA-MIC personnel contributed their time to conduct typical tasks involved in chairing a workshop such as soliciting manuscripts and open reviews, setting the agenda, and inviting speakers for keynote presentations.  No financial support was needed or provided by NA-MIC other than travel of NA-MIC presenters to the conference.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Questions for non-urgent consideration:=&lt;br /&gt;
==MIND==&lt;br /&gt;
*Systemic Lupus Erythematosus project&lt;br /&gt;
**Has this project driven any new algorithm development?&lt;br /&gt;
**Have the new tools been tested on other lesions? What other sorts of lesions are likely to benefit from them?&lt;br /&gt;
**Has manual segmentation (which is serving as the gold standard for this project) been shown to have low inter-observer variability?&lt;br /&gt;
===Mind Response ('''Jeremy Bockholt''')===&lt;br /&gt;
&lt;br /&gt;
==Structural Image Analysis==&lt;br /&gt;
*Is NA-MIC supporting the UNC-led (Martin Styner) 3D Segmentation in the Clinic Workshop at MICCAI 2008? If so, in what ways?&lt;br /&gt;
*Does NA-MIC have any process planned for eliminating an algorithm from its toolkit if a competing algorithm outperforms it?&lt;br /&gt;
*Are there any plans to integrate results from the segmentation workshop into documentation for Slicer in such a way that they are readily accessible to users choosing between Slicer modules?&lt;br /&gt;
===Structural Image Analysis Response ('''Martin Styner with input from other Core 1 and 2 people''')===&lt;br /&gt;
&lt;br /&gt;
'''Response (to #2 above):''' NA-MIC is the provider of a research software platform.  We do not believe that eliminating algorithms from the NA-MIC kit is in the best interest of the research community because even if a particular algorithm is not actively used as part of an end-to-end clinical solution today, or does not perform as well as another one in the context of a particular task, we want to remain open to the possibility that it could be a key enabler of solutions in the future.  However, for algorithms that are part of end-to-end solutions to problems that we are actively working on, we are making a concerted effort to provide publicly accessible tutorial materials to explain and encourage their adoption.&lt;br /&gt;
&lt;br /&gt;
==Miscellaneous==&lt;br /&gt;
===Kitware ('''Will Schroeder''')===&lt;br /&gt;
Some concerns were raised regarding the blurring of the distinction between NA-MIC, ITK and VTK. We recognize the contributions that NA-MIC-funded programmers have made to both ITK and VTK and we recognize that the relationship between NA-MIC and the other toolkits is beneficial to a broad development and user community. However, in some cases there seems to be insufficient acknowledgement in the report of the tools that predated NAMIC and helped lay the groundwork for it; for example CMake and DART.&lt;br /&gt;
===St. Louis ('''Dan Marcus''')===&lt;br /&gt;
XNAT is open-source using the XNAT License. What does that mean? Is it different from the NA-MIC license?&lt;br /&gt;
===Isomics ('''Steve Pieper''')===&lt;br /&gt;
Isomics' statement of work states: Significant effort will be devoted to re-architecting core components of 3D Slicer to make them better interoperate with other NA-MIC tools. Wasn't Slicer developed in parallel with NA-MIC tools? How have incompatibilities arisen?&lt;br /&gt;
&lt;br /&gt;
Response: Version 2.x of 3D Slicer existed for a number of years prior to NA-MIC and is still used by NA-MIC participants for specific tasks.  Development of version 3.x is now about 2 years old and has been developed with NA-MIC tools as the foundation.  A number of functional blocks continue to be ported and re-architected to leverage the new environment.  Examples include the interactive diffusion imaging modules, the Label Map Editor, and the Image Guided Therapy interfaces.  In addition, as new functionality such as XNAT and Grid Wizard are added to the NA-MIC Kit, new interfaces are required.&lt;br /&gt;
&lt;br /&gt;
===UCSD ('''Jeff Grethe''')===&lt;br /&gt;
The report includes no specific information on progress at UCSD in developing and supporting grid computing for NA-MIC.&lt;br /&gt;
===Training Core ('''Randy Gollub''')===&lt;br /&gt;
Please tell us a bit more about the training core's program to provide one-on-one mentoring (it's mentioned in the timeline, but there's no information on the Wiki). How is it structured? Who can be mentored? How does one arrange for mentorship?&lt;br /&gt;
===Some updates to the timeline are needed:===&lt;br /&gt;
====Bruce Fischl====&lt;br /&gt;
The timeline for MGH indicates that many of the tasks have been modified, but these modifications aren't listed in the table of timeline modifications.&lt;br /&gt;
====Dan Marcus====&lt;br /&gt;
Why not list Wash U in the timeline for the appropriate tasks?&lt;br /&gt;
====Ross Whitaker====&lt;br /&gt;
Certain tasks have been removed from the Utah aims because they have been &amp;quot;subsumed by Core 1-2 partners&amp;quot;, presumably MGH. Now that the plans have changed so that MGH no longer plans to do this work, this needs to be updated.&lt;br /&gt;
====Will Schroeder====&lt;br /&gt;
*Has there been progress in the migration from LONI to batchmake? It is not yet listed as complete in the Isomics timeline.&lt;br /&gt;
*Based on the timeline, Kitware has completed its tasks. Some new tasks are listed in the statement of work and should be entered into the timeline.&lt;br /&gt;
====Tina Kapur====&lt;br /&gt;
For the future (assuming and hoping there is a future!), it would be nice to have publications listed at the end of each relevant section (in addition to the Additional Information links currently provided). Several members of the Center team commented on the lack of details in the progress report and wanted to follow up by reading the relevant publications.&lt;br /&gt;
&lt;br /&gt;
'''Response:''' Thanks for the suggestion.  We will follow it in the future.&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2008_Progress_Report_NIH_QnA&amp;diff=28477</id>
		<title>2008 Progress Report NIH QnA</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2008_Progress_Report_NIH_QnA&amp;diff=28477"/>
		<updated>2008-07-24T23:20:11Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Queens Response (Gabor Fichtinger) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Please address promptly (by July 30th, 5pm)=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We received [[Media:5_U54_EB005149-05_NIH_Response.pdf|this letter]] from our NIH program officers.  Below are excerpted specific questions from this letter, with names of NA-MIC team members responsible for the answers.&lt;br /&gt;
&lt;br /&gt;
=MGH=&lt;br /&gt;
*What progress has been made with the MGH subcontract in the past year?&lt;br /&gt;
**There's no progress report summary following the Statement of Intent for MGH (page 42). We recognize that MGH's budget was significantly reduced this past year, as a good portion was re-budgeted to Washington University beginning in December 2007. Nevertheless, we expect some progress to have been made.&lt;br /&gt;
**The scientific progress report lists MGH as participating in some efforts (Shape Based Segmentation and Registration; Spherical Wavelets; and Shape Analysis with Overcomplete Wavelets), but there are no MGH personnel listed as key investigators for that section.&lt;br /&gt;
*With regard to future work (Dr. Fischl's consultation), the statement of work refers to &amp;quot;integration of FreeSurfer with ITK and 3D Slicer.&amp;quot; What does that mean? Wasn't that effort abandoned?&lt;br /&gt;
*The progress report is somewhat confusing as to whether FreeSurfer is being used to study cortical correspondence. On page 250 (Section 3.2) a collaboration between MGH and MIT is mentioned with respect to cortical correspondence, while the UNC progress report states that MOL is being used to explore cortical correspondence. Please clarify.&lt;br /&gt;
==MGH Response ('''Bruce Fischl''')==&lt;br /&gt;
&lt;br /&gt;
The overcomplete wavelet project has resulted in a conference publication, and we are actively working on its completion. We are currently working on incorporating geometric invariants to replace the coordinate functions, and also on augmenting our existing neonate datasets with one or two more manually labeled cases so that we can assess how well our model fits the data.&lt;br /&gt;
&lt;br /&gt;
Going forward we intend to improve the FreeSurfer/Slicer/ITK interoperability. Towards this end we have worked with Core 2 to incorporate support for our internal formats in Slicer, and intend to include support for ITK formats such as NRRD directly into FreeSurfer. This will significantly increase the ease of development of algorithms that take advantage of the two platforms.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We also have an ongoing collaboration with Drs. Polina Golland, Mert Sabuncu and Thomas Yeo at MIT regarding improving cortical correspondence. Specifically, the existing FreeSurfer algorithms are being used to generate a baseline state-of-the-art accuracy measure that will be used to assess the performance of novel registration algorithms. Here we are working on fast diffeomorphic registration, and have a recent conference paper describing the &amp;quot;spherical demons&amp;quot; algorithm that improves execution time significantly without sacrificing performance. In addition, as part of Mr. Yeo's Ph.D. research we are developing techniques for the optimal alignment of either architectonically or functionally defined cortical regions, defined using whole-brain histology and functional MRI respectively.&lt;br /&gt;
&lt;br /&gt;
=UCLA=&lt;br /&gt;
&lt;br /&gt;
*What progress has been made with the UCLA subcontract in the past year?&lt;br /&gt;
**There are no publications listed in progress report. A search of NA-MIC publications database shows only one paper with Art Toga or Nathan Hageman's name on it, and that's the iTools paper (from the Software and Data Integration Working Group).&lt;br /&gt;
**Timeline information is out of date. It has not been updated to reflect the changes in the statement of work that was agreed upon in August 2007.&lt;br /&gt;
==UCLA Response ('''Arthur Toga''')==&lt;br /&gt;
&lt;br /&gt;
=Queens=&lt;br /&gt;
*Regarding the Brachytherapy Needle Positioning Robot Integration DBP:&lt;br /&gt;
**Which grant is funding the patient data collection? If it is supported by NA-MIC, please let me know because NIBIB will have to approve your Data Safety Monitoring Plan.&lt;br /&gt;
**There seems to be some scientific overlap between this project and other NIH-funded grants, R01 EB002963 (PI: Whitcomb [previously Fichtinger]) and R01 CA111288 (PI:Tempany). The statement of work for the NA-MIC subcontract states, &amp;quot;The deliverables of the contract is professional-grade clinical software engineering of the above modules based on the NA-MIC toolkit (to the extent reasonable and possible) and to develop end applications based on Slicer, for clinical trials in image-guided prostate biopsy.&amp;quot; However, this goal also falls within the System Integration aim (Aim 3) of the NCI grant and the System Integration aim (Aim 3) of the NIBIB grant. Please provide us with additional clarification to distinguish these projects in terms of their aims. Also, please confirm that Dr. Gobbi and Mr. Vikal are being supported at no more than 3 and 6 months, respectively, by other grants (as they are listed for 9 and 6 months on the NAMIC subcontract).&lt;br /&gt;
==Queens Response ('''Gabor Fichtinger''')==&lt;br /&gt;
Q1: Which grant is funding the patient data collection?  -- The Queen's NAMIC team does not collect clinical data. For development and testing purposes, we use previously acquired anonymous image data provided by our clinical collaborators. Our data does not contain any patient identification information. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Q2: There seems to be some scientific overlap -- There are indeed close synergies but no overlap. First: R01 EB002963 (PI: Whitcomb) concentrated on developing a transrectal prostate biopsy robot and its rapid clinical testing. For this purpose, a straightforward and minimalistic system was built to operate the robot. This system is not based on Slicer. The objective of our RoadMap project is to empower this robot with the superior capabilities of Slicer and the NAMIC toolkit. The grants therefore are ideally synergistic, without any overlap. Also importantly, R01 EB002963 (PI: Whitcomb) will end on July 31, 2008, before the anniversary date of the NAMIC grant. Second, R01 CA111288 (PI:Tempany) is a Biomedical Research Partnership (BRP) that includes Acoustic Medsystems, Inc., a company whose responsibility is system integration within the company's quality control environment. Dr. Fichtinger's NAMIC team at Queen's is not involved in system development for R01 CA111288 (PI:Tempany).&lt;br /&gt;
&lt;br /&gt;
Q3: We confirm that Dr. Gobbi and Mr. Vikal are being supported at no more than 3 and 6 months, respectively by other grants (as they are listed for 9 and 6 months on the NAMIC subcontract).&lt;br /&gt;
&lt;br /&gt;
=Kitware=&lt;br /&gt;
*Kitware is working on a text, &amp;quot;Practical Software Process&amp;quot;, to document the NA-MIC software process. How will that be distributed? Will NA-MIC funds cover all the costs so that the text can be distributed free of charge?&lt;br /&gt;
==Kitware Response ('''Will Schroeder''')==&lt;br /&gt;
&lt;br /&gt;
=Dissemination Core=&lt;br /&gt;
*What does it mean when you say that NA-MIC hosted the Workshop on Open Source and Open Data at MICCAI 2007? Did NA-MIC provide financial support? Set the agenda? Invite participants? Please clarify.&lt;br /&gt;
==Dissemination Core Response ('''Tina Kapur''')==&lt;br /&gt;
NA-MIC personnel contributed their time to conduct typical tasks involved in chairing a workshop such as soliciting manuscripts and open reviews, setting the agenda, and inviting speakers for keynote presentations.  No financial support was needed or provided by NA-MIC other than travel of NA-MIC presenters to the conference.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Questions for non-urgent consideration:=&lt;br /&gt;
==MIND==&lt;br /&gt;
*Systemic Lupus Erythematosus project&lt;br /&gt;
**Has this project driven any new algorithm development?&lt;br /&gt;
**Have the new tools been tested on other lesions? What other sorts of lesions are likely to benefit from them?&lt;br /&gt;
**Has manual segmentation (which is serving as the gold standard for this project) been shown to have low inter-observer variability?&lt;br /&gt;
===Mind Response ('''Jeremy Bockholt''')===&lt;br /&gt;
&lt;br /&gt;
==Structural Image Analysis==&lt;br /&gt;
*Is NA-MIC supporting the UNC-led (Martin Styner) 3D Segmentation in the Clinic Workshop at MICCAI 2008? If so, in what ways?&lt;br /&gt;
*Does NA-MIC have any process planned for eliminating an algorithm from its toolkit if a competing algorithm outperforms it?&lt;br /&gt;
*Are there any plans to integrate results from the segmentation workshop into documentation for Slicer in such a way that they are readily accessible to users choosing between Slicer modules?&lt;br /&gt;
===Structural Image Analysis Response ('''Martin Styner with input from other Core 1 and 2 people''')===&lt;br /&gt;
&lt;br /&gt;
'''Response (to #2 above):''' NA-MIC is the provider of a research software platform.  We do not believe that eliminating algorithms from the NA-MIC kit is in the best interest of the research community because even if a particular algorithm is not actively used as part of an end-to-end clinical solution today, or does not perform as well as another one in the context of a particular task, we want to remain open to the possibility that it could be a key enabler of solutions in the future.  However, for algorithms that are part of end-to-end solutions to problems that we are actively working on, we are making a concerted effort to provide publicly accessible tutorial materials to explain and encourage their adoption.&lt;br /&gt;
&lt;br /&gt;
==Miscellaneous==&lt;br /&gt;
===Kitware ('''Will Schroeder''')===&lt;br /&gt;
Some concerns were raised regarding the blurring of the distinction between NA-MIC, ITK, and VTK. We recognize the contributions that NA-MIC-funded programmers have made to both ITK and VTK, and we recognize that the relationship between NA-MIC and the other toolkits is beneficial to a broad development and user community. However, in some cases there seems to be insufficient acknowledgement in the report of the tools that predated NA-MIC and helped lay the groundwork for it, for example CMake and DART.&lt;br /&gt;
===St. Louis ('''Dan Marcus''')===&lt;br /&gt;
XNAT is open source under the XNAT License. What does that mean? Is it different from the NA-MIC license?&lt;br /&gt;
===Isomics ('''Steve Pieper''')===&lt;br /&gt;
Isomics' statement of work states: &amp;quot;Significant effort will be devoted to re-architecting core components of 3D Slicer to make them better interoperate with other NA-MIC tools.&amp;quot; Wasn't Slicer developed in parallel with NA-MIC tools? How have incompatibilities arisen?&lt;br /&gt;
&lt;br /&gt;
Response: Version 2.x of 3D Slicer existed for a number of years prior to NA-MIC and is still used by NA-MIC participants for specific tasks.  Development of version 3.x is now about 2 years old and has been developed with NA-MIC tools as the foundation.  A number of functional blocks continue to be ported and re-architected to leverage the new environment.  Examples include the interactive diffusion imaging modules, the Label Map Editor, and the Image Guided Therapy interfaces.  In addition, as new functionality such as XNAT and Grid Wizard are added to the NA-MIC Kit, new interfaces are required.&lt;br /&gt;
&lt;br /&gt;
===UCSD ('''Jeff Grethe''')===&lt;br /&gt;
The report includes no specific information on progress at UCSD in developing and supporting grid computing for NA-MIC.&lt;br /&gt;
===Training Core ('''Randy Gollub''')===&lt;br /&gt;
Please tell us a bit more about the training core's program to provide one-on-one mentoring (it's mentioned in the timeline, but there's no information on the Wiki). How is it structured? Who can be mentored? How does one arrange for mentorship?&lt;br /&gt;
===Some updates to the timeline are needed:===&lt;br /&gt;
====Bruce Fischl====&lt;br /&gt;
The timeline for MGH indicates that many of the tasks have been modified, but these modifications aren't listed in the table of timeline modifications.&lt;br /&gt;
====Dan Marcus====&lt;br /&gt;
Why not list Wash U in the timeline for the appropriate tasks?&lt;br /&gt;
====Ross Whitaker====&lt;br /&gt;
Certain tasks have been removed from the Utah aims because they have been &amp;quot;subsumed by Core 1-2 partners&amp;quot;, presumably MGH. Now that the plans have changed so that MGH no longer plans to do this work, this needs to be updated.&lt;br /&gt;
====Will Schroeder====&lt;br /&gt;
*Has there been progress in the migration from LONI to BatchMake? It is not yet listed as complete in the Isomics timeline.&lt;br /&gt;
*Based on the timeline, Kitware has completed its tasks. Some new tasks are listed in the statement of work and should be entered into the timeline.&lt;br /&gt;
====Tina Kapur====&lt;br /&gt;
For the future (assuming and hoping there is a future!), it would be nice to have publications listed at the end of each relevant section (in addition to the Additional Information links currently provided). Several members of the Center team commented on the lack of detail in the progress report and wanted to follow up by reading the relevant publications.&lt;br /&gt;
&lt;br /&gt;
'''Response:''' Thanks for the suggestion.  We will follow it in the future.&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2008_Progress_Report_NIH_QnA&amp;diff=28476</id>
		<title>2008 Progress Report NIH QnA</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2008_Progress_Report_NIH_QnA&amp;diff=28476"/>
		<updated>2008-07-24T22:47:53Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Queens Response (Gabor Fichtinger) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Please address promptly (by July 30th, 5pm)=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We received [[Media:5_U54_EB005149-05_NIH_Response.pdf|this letter]] from our NIH program officers.  Below are excerpted specific questions from this letter, with names of NA-MIC team members responsible for the answers.&lt;br /&gt;
&lt;br /&gt;
=MGH=&lt;br /&gt;
*What progress has been made with the MGH subcontract in the past year?&lt;br /&gt;
**There's no progress report summary following the Statement of Intent for MGH (page 42). We recognize that MGH's budget was significantly reduced this past year, as a good portion was re-budgeted to Washington University beginning in December 2007. Nevertheless, we expect some progress to have been made.&lt;br /&gt;
**The scientific progress report lists MGH as participating in some efforts (Shape Based Segmentation and Registration; Spherical Wavelets; and Shape Analysis with Overcomplete Wavelets), but there are no MGH personnel listed as key investigators for that section.&lt;br /&gt;
*With regard to future work (Dr. Fischl's consultation), the statement of work refers to &amp;quot;integration of FreeSurfer with ITK and 3D Slicer.&amp;quot; What does that mean? Wasn't that effort abandoned?&lt;br /&gt;
*The progress report is somewhat confusing as to whether FreeSurfer is being used to study cortical correspondence. On page 250 (Section 3.2) a collaboration between MGH and MIT is mentioned with respect to cortical correspondence, while the UNC progress report states that MOL is being used to explore cortical correspondence. Please clarify.&lt;br /&gt;
==MGH Response ('''Bruce Fischl''')==&lt;br /&gt;
&lt;br /&gt;
The overcomplete wavelet project has resulted in a conference publication, and we are actively working on its completion. We are currently working on incorporating geometric invariants to replace the coordinate functions, and also on augmenting our existing neonate datasets with one or two more manually labeled cases so that we can assess how well our model fits the data.&lt;br /&gt;
&lt;br /&gt;
Going forward we intend to improve the FreeSurfer/Slicer/ITK interoperability. Towards this end we have worked with Core 2 to incorporate support for our internal formats in Slicer, and intend to include support for ITK formats such as NRRD directly into FreeSurfer. This will significantly increase the ease of development of algorithms that take advantage of the two platforms.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We also have an ongoing collaboration with Drs. Polina Golland, Mert Sabuncu and Thomas Yeo at MIT regarding improving cortical correspondence. Specifically, the existing FreeSurfer algorithms are being used to generate a baseline state-of-the-art accuracy measure that will be used to assess the performance of novel registration algorithms. Here we are working on fast diffeomorphic registration, and have a recent conference paper describing the &amp;quot;spherical demons&amp;quot; algorithm, which improves execution time significantly without sacrificing performance. In addition, as part of Mr. Yeo's Ph.D. research, we are developing techniques for the optimal alignment of either architectonically or functionally defined cortical regions, defined using whole-brain histology and functional MRI respectively.&lt;br /&gt;
&lt;br /&gt;
=UCLA=&lt;br /&gt;
&lt;br /&gt;
*What progress has been made with the UCLA subcontract in the past year?&lt;br /&gt;
**There are no publications listed in the progress report. A search of the NA-MIC publications database shows only one paper with Art Toga's or Nathan Hageman's name on it, and that is the iTools paper (from the Software and Data Integration Working Group).&lt;br /&gt;
**Timeline information is out of date. It has not been updated to reflect the changes in the statement of work that was agreed upon in August 2007.&lt;br /&gt;
==UCLA Response ('''Arthur Toga''')==&lt;br /&gt;
&lt;br /&gt;
=Queens=&lt;br /&gt;
*Regarding the Brachytherapy Needle Positioning Robot Integration DBP:&lt;br /&gt;
**Which grant is funding the patient data collection? If it is supported by NA-MIC, please let me know because NIBIB will have to approve your Data Safety Monitoring Plan.&lt;br /&gt;
**There seems to be some scientific overlap between this project and other NIH-funded grants, R01 EB002963 (PI: Whitcomb [previously Fichtinger]) and R01 CA111288 (PI: Tempany). The statement of work for the NA-MIC subcontract states, &amp;quot;The deliverables of the contract is professional-grade clinical software engineering of the above modules based on the NA-MIC toolkit (to the extent reasonable and possible) and to develop end applications based on Slicer, for clinical trials in image-guided prostate biopsy.&amp;quot; However, this goal also falls within the System Integration aim (Aim 3) of the NCI grant and the System Integration aim (Aim 3) of the NIBIB grant. Please provide us with additional clarification to distinguish these projects in terms of their aims. Also, please confirm that Dr. Gobbi and Mr. Vikal are being supported at no more than 3 and 6 months, respectively, by other grants (as they are listed for 9 and 6 months on the NA-MIC subcontract).&lt;br /&gt;
==Queens Response ('''Gabor Fichtinger''')==&lt;br /&gt;
Which grant is funding the patient data collection? -- The Queen's NA-MIC team does not collect clinical data under this grant. For testing and development purposes, we use previously acquired anonymous image data provided by our clinical collaborators, who do not receive funding from this NA-MIC grant. The data does not contain any patient identification information.&lt;br /&gt;
&lt;br /&gt;
=Kitware=&lt;br /&gt;
*Kitware is working on a text, &amp;quot;Practical Software Process&amp;quot;, to document the NA-MIC software process. How will that be distributed? Will NA-MIC funds cover all the costs so that the text can be distributed free of charge?&lt;br /&gt;
==Kitware Response ('''Will Schroeder''')==&lt;br /&gt;
&lt;br /&gt;
=Dissemination Core=&lt;br /&gt;
*What does it mean when you say that NA-MIC hosted the Workshop on Open Source and Open Data at MICCAI 2007? Did NA-MIC provide financial support? Set the agenda? Invite participants? Please clarify.&lt;br /&gt;
==Dissemination Core Response ('''Tina Kapur''')==&lt;br /&gt;
NA-MIC personnel contributed their time to conduct typical tasks involved in chairing a workshop such as soliciting manuscripts and open reviews, setting the agenda, and inviting speakers for keynote presentations.  No financial support was needed or provided by NA-MIC other than travel of NA-MIC presenters to the conference.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Questions for non-urgent consideration:=&lt;br /&gt;
==MIND==&lt;br /&gt;
*Systemic Lupus Erythematosus project&lt;br /&gt;
**Has this project driven any new algorithm development?&lt;br /&gt;
**Have the new tools been tested on other lesions? What other sorts of lesions are likely to benefit from them?&lt;br /&gt;
**Has manual segmentation (which is serving as the gold standard for this project) been shown to have low inter-observer variability?&lt;br /&gt;
===MIND Response ('''Jeremy Bockholt''')===&lt;br /&gt;
&lt;br /&gt;
==Structural Image Analysis==&lt;br /&gt;
*Is NA-MIC supporting the UNC-led (Martin Styner) 3D Segmentation in the Clinic Workshop at MICCAI 2008? If so, in what ways?&lt;br /&gt;
*Does NA-MIC have any process planned for eliminating an algorithm from its toolkit if a competing algorithm outperforms it?&lt;br /&gt;
*Are there any plans to integrate results from the segmentation workshop into documentation for Slicer in such a way that they are readily accessible to users choosing between Slicer modules?&lt;br /&gt;
===Structural Image Analysis Response ('''Martin Styner with input from other Core 1 and 2 people''')===&lt;br /&gt;
&lt;br /&gt;
'''Response (to #2 above):''' NA-MIC is the provider of a research software platform.  We do not believe that eliminating algorithms from the NA-MIC kit is in the best interest of the research community because even if a particular algorithm is not actively used as part of an end-to-end clinical solution today, or does not perform as well as another one in the context of a particular task, we want to remain open to the possibility that it could be a key enabler of solutions in the future.  However, for algorithms that are part of end-to-end solutions to problems that we are actively working on, we are making a concerted effort to provide publicly accessible tutorial materials to explain and encourage their adoption.&lt;br /&gt;
&lt;br /&gt;
==Miscellaneous==&lt;br /&gt;
===Kitware ('''Will Schroeder''')===&lt;br /&gt;
Some concerns were raised regarding the blurring of the distinction between NA-MIC, ITK, and VTK. We recognize the contributions that NA-MIC-funded programmers have made to both ITK and VTK, and we recognize that the relationship between NA-MIC and the other toolkits is beneficial to a broad development and user community. However, in some cases there seems to be insufficient acknowledgement in the report of the tools that predated NA-MIC and helped lay the groundwork for it, for example CMake and DART.&lt;br /&gt;
===St. Louis ('''Dan Marcus''')===&lt;br /&gt;
XNAT is open source under the XNAT License. What does that mean? Is it different from the NA-MIC license?&lt;br /&gt;
===Isomics ('''Steve Pieper''')===&lt;br /&gt;
Isomics' statement of work states: &amp;quot;Significant effort will be devoted to re-architecting core components of 3D Slicer to make them better interoperate with other NA-MIC tools.&amp;quot; Wasn't Slicer developed in parallel with NA-MIC tools? How have incompatibilities arisen?&lt;br /&gt;
&lt;br /&gt;
Response: Version 2.x of 3D Slicer existed for a number of years prior to NA-MIC and is still used by NA-MIC participants for specific tasks.  Development of version 3.x is now about 2 years old and has been developed with NA-MIC tools as the foundation.  A number of functional blocks continue to be ported and re-architected to leverage the new environment.  Examples include the interactive diffusion imaging modules, the Label Map Editor, and the Image Guided Therapy interfaces.  In addition, as new functionality such as XNAT and Grid Wizard are added to the NA-MIC Kit, new interfaces are required.&lt;br /&gt;
&lt;br /&gt;
===UCSD ('''Jeff Grethe''')===&lt;br /&gt;
The report includes no specific information on progress at UCSD in developing and supporting grid computing for NA-MIC.&lt;br /&gt;
===Training Core ('''Randy Gollub''')===&lt;br /&gt;
Please tell us a bit more about the training core's program to provide one-on-one mentoring (it's mentioned in the timeline, but there's no information on the Wiki). How is it structured? Who can be mentored? How does one arrange for mentorship?&lt;br /&gt;
===Some updates to the timeline are needed:===&lt;br /&gt;
====Bruce Fischl====&lt;br /&gt;
The timeline for MGH indicates that many of the tasks have been modified, but these modifications aren't listed in the table of timeline modifications.&lt;br /&gt;
====Dan Marcus====&lt;br /&gt;
Why not list Wash U in the timeline for the appropriate tasks?&lt;br /&gt;
====Ross Whitaker====&lt;br /&gt;
Certain tasks have been removed from the Utah aims because they have been &amp;quot;subsumed by Core 1-2 partners&amp;quot;, presumably MGH. Now that the plans have changed so that MGH no longer plans to do this work, this needs to be updated.&lt;br /&gt;
====Will Schroeder====&lt;br /&gt;
*Has there been progress in the migration from LONI to BatchMake? It is not yet listed as complete in the Isomics timeline.&lt;br /&gt;
*Based on the timeline, Kitware has completed its tasks. Some new tasks are listed in the statement of work and should be entered into the timeline.&lt;br /&gt;
====Tina Kapur====&lt;br /&gt;
For the future (assuming and hoping there is a future!), it would be nice to have publications listed at the end of each relevant section (in addition to the Additional Information links currently provided). Several members of the Center team commented on the lack of detail in the progress report and wanted to follow up by reading the relevant publications.&lt;br /&gt;
&lt;br /&gt;
'''Response:''' Thanks for the suggestion.  We will follow it in the future.&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2008_Progress_Report_NIH_QnA&amp;diff=28475</id>
		<title>2008 Progress Report NIH QnA</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2008_Progress_Report_NIH_QnA&amp;diff=28475"/>
		<updated>2008-07-24T22:45:36Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Queens Response (Gabor Fichtinger) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Please address promptly (by July 30th, 5pm)=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We received [[Media:5_U54_EB005149-05_NIH_Response.pdf|this letter]] from our NIH program officers.  Below are excerpted specific questions from this letter, with names of NA-MIC team members responsible for the answers.&lt;br /&gt;
&lt;br /&gt;
=MGH=&lt;br /&gt;
*What progress has been made with the MGH subcontract in the past year?&lt;br /&gt;
**There's no progress report summary following the Statement of Intent for MGH (page 42). We recognize that MGH's budget was significantly reduced this past year, as a good portion was re-budgeted to Washington University beginning in December 2007. Nevertheless, we expect some progress to have been made.&lt;br /&gt;
**The scientific progress report lists MGH as participating in some efforts (Shape Based Segmentation and Registration; Spherical Wavelets; and Shape Analysis with Overcomplete Wavelets), but there are no MGH personnel listed as key investigators for that section.&lt;br /&gt;
*With regard to future work (Dr. Fischl's consultation), the statement of work refers to &amp;quot;integration of FreeSurfer with ITK and 3D Slicer.&amp;quot; What does that mean? Wasn't that effort abandoned?&lt;br /&gt;
*The progress report is somewhat confusing as to whether FreeSurfer is being used to study cortical correspondence. On page 250 (Section 3.2) a collaboration between MGH and MIT is mentioned with respect to cortical correspondence, while the UNC progress report states that MOL is being used to explore cortical correspondence. Please clarify.&lt;br /&gt;
==MGH Response ('''Bruce Fischl''')==&lt;br /&gt;
&lt;br /&gt;
The overcomplete wavelet project has resulted in a conference publication, and we are actively working on its completion. We are currently working on incorporating geometric invariants to replace the coordinate functions, and also on augmenting our existing neonate datasets with one or two more manually labeled cases so that we can assess how well our model fits the data.&lt;br /&gt;
&lt;br /&gt;
Going forward we intend to improve the FreeSurfer/Slicer/ITK interoperability. Towards this end we have worked with Core 2 to incorporate support for our internal formats in Slicer, and intend to include support for ITK formats such as NRRD directly into FreeSurfer. This will significantly increase the ease of development of algorithms that take advantage of the two platforms.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We also have an ongoing collaboration with Drs. Polina Golland, Mert Sabuncu and Thomas Yeo at MIT regarding improving cortical correspondence. Specifically, the existing FreeSurfer algorithms are being used to generate a baseline state-of-the-art accuracy measure that will be used to assess the performance of novel registration algorithms. Here we are working on fast diffeomorphic registration, and have a recent conference paper describing the &amp;quot;spherical demons&amp;quot; algorithm, which improves execution time significantly without sacrificing performance. In addition, as part of Mr. Yeo's Ph.D. research, we are developing techniques for the optimal alignment of either architectonically or functionally defined cortical regions, defined using whole-brain histology and functional MRI respectively.&lt;br /&gt;
&lt;br /&gt;
=UCLA=&lt;br /&gt;
&lt;br /&gt;
*What progress has been made with the UCLA subcontract in the past year?&lt;br /&gt;
**There are no publications listed in the progress report. A search of the NA-MIC publications database shows only one paper with Art Toga's or Nathan Hageman's name on it, and that is the iTools paper (from the Software and Data Integration Working Group).&lt;br /&gt;
**Timeline information is out of date. It has not been updated to reflect the changes in the statement of work that was agreed upon in August 2007.&lt;br /&gt;
==UCLA Response ('''Arthur Toga''')==&lt;br /&gt;
&lt;br /&gt;
=Queens=&lt;br /&gt;
*Regarding the Brachytherapy Needle Positioning Robot Integration DBP:&lt;br /&gt;
**Which grant is funding the patient data collection? If it is supported by NA-MIC, please let me know because NIBIB will have to approve your Data Safety Monitoring Plan.&lt;br /&gt;
**There seems to be some scientific overlap between this project and other NIH-funded grants, R01 EB002963 (PI: Whitcomb [previously Fichtinger]) and R01 CA111288 (PI: Tempany). The statement of work for the NA-MIC subcontract states, &amp;quot;The deliverables of the contract is professional-grade clinical software engineering of the above modules based on the NA-MIC toolkit (to the extent reasonable and possible) and to develop end applications based on Slicer, for clinical trials in image-guided prostate biopsy.&amp;quot; However, this goal also falls within the System Integration aim (Aim 3) of the NCI grant and the System Integration aim (Aim 3) of the NIBIB grant. Please provide us with additional clarification to distinguish these projects in terms of their aims. Also, please confirm that Dr. Gobbi and Mr. Vikal are being supported at no more than 3 and 6 months, respectively, by other grants (as they are listed for 9 and 6 months on the NA-MIC subcontract).&lt;br /&gt;
==Queens Response ('''Gabor Fichtinger''')==&lt;br /&gt;
Q1 for Queen's: Our team does not collect clinical data under the NA-MIC grant. We use previously acquired anonymous image data provided by our clinical collaborators, who do not receive funding from this NA-MIC grant. The data we use does not contain any patient identification information.&lt;br /&gt;
&lt;br /&gt;
=Kitware=&lt;br /&gt;
*Kitware is working on a text, &amp;quot;Practical Software Process&amp;quot;, to document the NA-MIC software process. How will that be distributed? Will NA-MIC funds cover all the costs so that the text can be distributed free of charge?&lt;br /&gt;
==Kitware Response ('''Will Schroeder''')==&lt;br /&gt;
&lt;br /&gt;
=Dissemination Core=&lt;br /&gt;
*What does it mean when you say that NA-MIC hosted the Workshop on Open Source and Open Data at MICCAI 2007? Did NA-MIC provide financial support? Set the agenda? Invite participants? Please clarify.&lt;br /&gt;
==Dissemination Core Response ('''Tina Kapur''')==&lt;br /&gt;
NA-MIC personnel contributed their time to conduct typical tasks involved in chairing a workshop such as soliciting manuscripts and open reviews, setting the agenda, and inviting speakers for keynote presentations.  No financial support was needed or provided by NA-MIC other than travel of NA-MIC presenters to the conference.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Questions for non-urgent consideration:=&lt;br /&gt;
==MIND==&lt;br /&gt;
*Systemic Lupus Erythematosus project&lt;br /&gt;
**Has this project driven any new algorithm development?&lt;br /&gt;
**Have the new tools been tested on other lesions? What other sorts of lesions are likely to benefit from them?&lt;br /&gt;
**Has manual segmentation (which is serving as the gold standard for this project) been shown to have low inter-observer variability?&lt;br /&gt;
===MIND Response ('''Jeremy Bockholt''')===&lt;br /&gt;
&lt;br /&gt;
==Structural Image Analysis==&lt;br /&gt;
*Is NA-MIC supporting the UNC-led (Martin Styner) 3D Segmentation in the Clinic Workshop at MICCAI 2008? If so, in what ways?&lt;br /&gt;
*Does NA-MIC have any process planned for eliminating an algorithm from its toolkit if a competing algorithm outperforms it?&lt;br /&gt;
*Are there any plans to integrate results from the segmentation workshop into documentation for Slicer in such a way that they are readily accessible to users choosing between Slicer modules?&lt;br /&gt;
===Structural Image Analysis Response ('''Martin Styner with input from other Core 1 and 2 people''')===&lt;br /&gt;
&lt;br /&gt;
'''Response (to #2 above):''' NA-MIC is the provider of a research software platform.  We do not believe that eliminating algorithms from the NA-MIC kit is in the best interest of the research community because even if a particular algorithm is not actively used as part of an end-to-end clinical solution today, or does not perform as well as another one in the context of a particular task, we want to remain open to the possibility that it could be a key enabler of solutions in the future.  However, for algorithms that are part of end-to-end solutions to problems that we are actively working on, we are making a concerted effort to provide publicly accessible tutorial materials to explain and encourage their adoption.&lt;br /&gt;
&lt;br /&gt;
==Miscellaneous==&lt;br /&gt;
===Kitware ('''Will Schroeder''')===&lt;br /&gt;
Some concerns were raised regarding the blurring of the distinction between NA-MIC, ITK, and VTK. We recognize the contributions that NA-MIC-funded programmers have made to both ITK and VTK, and we recognize that the relationship between NA-MIC and the other toolkits is beneficial to a broad development and user community. However, in some cases there seems to be insufficient acknowledgement in the report of the tools that predated NA-MIC and helped lay the groundwork for it, for example CMake and DART.&lt;br /&gt;
===St. Louis ('''Dan Marcus''')===&lt;br /&gt;
XNAT is open source under the XNAT License. What does that mean? Is it different from the NA-MIC license?&lt;br /&gt;
===Isomics ('''Steve Pieper''')===&lt;br /&gt;
Isomics' statement of work states: &amp;quot;Significant effort will be devoted to re-architecting core components of 3D Slicer to make them better interoperate with other NA-MIC tools.&amp;quot; Wasn't Slicer developed in parallel with NA-MIC tools? How have incompatibilities arisen?&lt;br /&gt;
&lt;br /&gt;
Response: Version 2.x of 3D Slicer existed for a number of years prior to NA-MIC and is still used by NA-MIC participants for specific tasks.  Development of version 3.x is now about 2 years old and has been developed with NA-MIC tools as the foundation.  A number of functional blocks continue to be ported and re-architected to leverage the new environment.  Examples include the interactive diffusion imaging modules, the Label Map Editor, and the Image Guided Therapy interfaces.  In addition, as new functionality such as XNAT and Grid Wizard are added to the NA-MIC Kit, new interfaces are required.&lt;br /&gt;
&lt;br /&gt;
===UCSD ('''Jeff Grethe''')===&lt;br /&gt;
The report includes no specific information on progress at UCSD in developing and supporting grid computing for NA-MIC.&lt;br /&gt;
===Training Core ('''Randy Gollub''')===&lt;br /&gt;
Please tell us a bit more about the training core's program to provide one-on-one mentoring (it's mentioned in the timeline, but there's no information on the Wiki). How is it structured? Who can be mentored? How does one arrange for mentorship?&lt;br /&gt;
===Some updates to the timeline are needed:===&lt;br /&gt;
====Bruce Fischl====&lt;br /&gt;
The timeline for MGH indicates that many of the tasks have been modified, but these modifications aren't listed in the table of timeline modifications.&lt;br /&gt;
====Dan Marcus====&lt;br /&gt;
Why not list Wash U in the timeline for the appropriate tasks?&lt;br /&gt;
====Ross Whitaker====&lt;br /&gt;
Certain tasks have been removed from the Utah aims because they have been &amp;quot;subsumed by Core 1-2 partners&amp;quot;, presumably MGH. Now that the plans have changed so that MGH no longer plans to do this work, this needs to be updated.&lt;br /&gt;
====Will Schroeder====&lt;br /&gt;
*Has there been progress in the migration from LONI to BatchMake? It is not yet listed as complete in the Isomics timeline.&lt;br /&gt;
*Based on the timeline, Kitware has completed its tasks. Some new tasks are listed in the statement of work and should be entered into the timeline.&lt;br /&gt;
====Tina Kapur====&lt;br /&gt;
For the future (assuming and hoping there is a future!), it would be nice to have publications listed at the end of each relevant section (in addition to the Additional Information links currently provided). Several members of the Center team commented on the lack of detail in the progress report and wanted to follow up by reading the relevant publications.&lt;br /&gt;
&lt;br /&gt;
'''Response:''' Thanks for the suggestion.  We will follow it in the future.&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DBP2:Queens:Introduction&amp;diff=27442</id>
		<title>DBP2:Queens:Introduction</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DBP2:Queens:Introduction&amp;diff=27442"/>
		<updated>2008-06-24T15:49:52Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Team and Institutes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[DBP2:JHU|JHU DBP 2]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
=Segmentation and Registration Tools for Robotic Prostate Interventions=&lt;br /&gt;
&lt;br /&gt;
===Team and Institutes===&lt;br /&gt;
&lt;br /&gt;
*'''PI:''' Gabor Fichtinger, Queen’s University (gabor at cs.queensu.ca)&lt;br /&gt;
*'''Co-I:''' Purang Abolmaesumi, Queen’s University (purang at cs.queensu.ca)&lt;br /&gt;
*'''Software Engineer Lead:''' David Gobbi, Queen’s University (dgobbi at cs.queensu.ca)&lt;br /&gt;
*'''Software Engineer Support:''' Siddharth Vikal, Queen’s University (siddharthvikal at cs.queensu.ca)&lt;br /&gt;
*'''NA-MIC Engineering Contact:''' Katie Hayes, MSc, Brigham and Women's Hospital (hayes at bwh.harvard.edu)&lt;br /&gt;
*'''NA-MIC Algorithms Contact:''' Allen Tannenbaum, PhD, GeorgiaTech (tannenba at ece.gatech.edu)&lt;br /&gt;
*'''Host Institutes:''' Queen's University &amp;amp; Johns Hopkins University&lt;br /&gt;
&lt;br /&gt;
===Research Goals===&lt;br /&gt;
The Queen’s &amp;amp; Hopkins teams are developing novel systems and procedures for prostate cancer interventions, such as biopsy and needle-based local therapies. &lt;br /&gt;
&lt;br /&gt;
Prostate cancer is the most common non-cutaneous cancer in American men. In 2007, there were an estimated 220,000 new cases of prostate cancer and 28,000 deaths caused by prostate cancer in the United States alone. [1]&lt;br /&gt;
&lt;br /&gt;
The current standard of care for verifying the existence of prostate cancer is transrectal ultrasound (TRUS) guided biopsy. TRUS provides limited diagnostic accuracy and image resolution. In one study [2], the authors concluded that TRUS is not accurate for tumor localization, and therefore the precise identification and sampling of individual cancerous tumor sites is limited. As a result, the sensitivity of TRUS biopsy is only between 60% and 85%. [3, 4]&lt;br /&gt;
&lt;br /&gt;
Targeted biopsies of suspicious areas identified by MRI could potentially increase the sensitivity of prostate biopsies. To address this problem, the investigators have several active research projects in prostate biopsy and therapies under direct MRI guidance inside the bore. We have developed and clinically tried a semi-robotic device and system for planning and execution of prostate biopsy under MRI guidance [5]. We have conducted several clinical trials [5] and more are to follow. The generic workflow is as follows:&lt;br /&gt;
 &lt;br /&gt;
# '''Pre-Op:''' segment the prostate, identify suspicious areas, plan targets&lt;br /&gt;
# '''Intra-Op:''' import the plan, update the plan, execute the biopsy/therapy&lt;br /&gt;
# '''Post-Op:''' compare post-op data with the plan, evaluate technical variables&lt;br /&gt;
&lt;br /&gt;
Currently, these functions are achieved by fragmented in-house code, some of it based on VTK/ITK.&lt;br /&gt;
&lt;br /&gt;
The objective of this DBP will be professional-grade clinical software engineering of existing and upcoming functionality in Slicer. This will allow the team to focus on project-specific tasks and to benefit from Slicer's advances in IGT capability.&lt;br /&gt;
&lt;br /&gt;
===Experimental Data===&lt;br /&gt;
The system for MRI-guided transrectal prostate interventions was tested in patients [5], and a new embodiment has recently been tested in phantom experiments at NIH (Bethesda, MD) on a 3T Philips Intera MRI scanner (Philips Medical Systems, Best, NL) using standard MR-compatible biopsy needles and non-artifact-producing glass needles [7]. The system has also been used in humans at NIH. Replication of the system for multiple collaborating clinical sites (Princess Margaret Hospital in Toronto, BWH in Boston, Johns Hopkins in Baltimore) is in progress.&lt;br /&gt;
&lt;br /&gt;
====Patient Data:====&lt;br /&gt;
Three 3D axial MRI prostate datasets (patient positions mixed between prone and supine), acquired using different endorectal coils (coil diameter = 13 mm for two datasets, 26 mm for the third), were used for algorithm evaluation. The scans were performed on a Philips Intera 3T MRI system; T2-weighted images were acquired using a Spin Echo (SE) sequence with the following parameters: SENSE protocol with an acceleration factor of 1; TE/TR = 180/7155 ms for some datasets and TE/TR = 120/7155 ms for the others; matrix 256 x 256; field of view 140 x 140 mm; voxel size 0.55 x 0.55 mm; slice thickness 3 mm.&lt;br /&gt;
&lt;br /&gt;
====Phantom Data:====&lt;br /&gt;
'''Biopsy Needle Accuracies:''' The manipulator was placed in a prostate phantom and its initial position was registered. Twelve targets were selected within all areas of the prostate on T2-weighted axial TSE images: targets one through four in the base of the prostate, targets five through eight in the mid-gland, and targets nine through twelve in the apex. For each target, the targeting program calculated the targeting parameters necessary for the needle placement.&lt;br /&gt;
&lt;br /&gt;
The phantom was pulled out of the MRI scanner on the scanner table; the physician rotated the manipulator, adjusted the needle angle, and inserted the biopsy needle according to the displayed parameters.&lt;br /&gt;
&lt;br /&gt;
The phantom was rolled back into the scanner to confirm the location of the needle on axial TSE proton-density images, which show the void created by the biopsy needle tip close to the target point. The in-plane error for each of the twelve biopsies, defined as the distance from the target to the biopsy needle line, was subsequently calculated to assess the accuracy of the system.&lt;br /&gt;
&lt;br /&gt;
The needle line was defined by finding the first and the last slice of the acquired confirmation volume in which the needle void is clearly visible; the center of the void on the first slice and the center of the void on the last slice define the needle line. The out-of-plane error is not critical in biopsy procedures due to the length of the biopsy core, and was not calculated. Hence, for the purposes of accuracy, there is no need for a more precise motorized needle insertion. The average in-plane error for the biopsy needles was 2.1 mm, with a maximum error of 2.9 mm.&lt;br /&gt;
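The in-plane error described above is a point-to-line distance: the needle line passes through the void centers on the first and last confirmation slices, and the error is the perpendicular distance from the planned target to that line. A minimal sketch in plain Python (the helper name and coordinate convention are hypothetical; coordinates are assumed to be scanner millimeters):

```python
import math

def in_plane_error(target, void_first, void_last):
    """Perpendicular distance (mm) from a target point to the needle line
    defined by the void centers on the first and last confirmation slices.
    Illustrative only; names and conventions are assumptions."""
    # unit direction of the needle line
    d = [b - a for a, b in zip(void_first, void_last)]
    norm = math.sqrt(sum(c * c for c in d))
    d = [c / norm for c in d]
    # vector from the line's anchor point to the target
    v = [t - a for a, t in zip(void_first, target)]
    # remove the component along the line; what remains is perpendicular
    proj = sum(vi * di for vi, di in zip(v, d))
    perp = [vi - proj * di for vi, di in zip(v, d)]
    return math.sqrt(sum(c * c for c in perp))
```

For example, a target at (3, 4, 5) mm against a needle line running along the z-axis yields an in-plane error of 5 mm, regardless of insertion depth.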
&lt;br /&gt;
'''Glass Needle Accuracies:''' The void created by the biopsy needle is mostly due to the susceptibility artifact caused by the metallic needle. The void is not concentric around the biopsy needle and depends on the orientation of the needle relative to the direction of the main magnetic field in the scanner (B0) and to the direction of the spatially encoding magnetic field gradients [6]. Consequently, the centers of needle voids do not necessarily correspond to the actual needle centers.&lt;br /&gt;
&lt;br /&gt;
Since the same imaging sequence and a similar needle orientation are used for all targets in a procedure, a systematic shift between the needle void and the actual needle might occur, which would introduce a bias into the accuracy calculations. To explore this theory, every biopsy needle placement in the prostate phantom was followed by the placement of a glass needle to the same depth. The void created by the glass needle is caused purely by the lack of protons in the glass compared to the surrounding tissue, and is thus artifact-free and concentric with the needle. The location of the glass needle was again confirmed by acquiring axial TSE proton-density images. The average in-plane error for the glass needles was 1.3 mm, with a maximum error of 1.7 mm.&lt;br /&gt;
&lt;br /&gt;
The procedure time for six needle biopsies, not including the glass needle insertions, was measured at 45 minutes.&lt;br /&gt;
&lt;br /&gt;
===Current Image Software===&lt;br /&gt;
The targeting program runs on a laptop computer located in the control room. The only data transfer between the laptop and the scanner computer is DICOM image transfer. The fiber-optic encoders of the robot interface with the laptop computer via a USB counter (USDigital, Vancouver, Washington).&lt;br /&gt;
&lt;br /&gt;
The targeting software displays the acquired MR images, provides the automatic segmentation for the initial registration of the manipulator, allows the physician to select targets for needle placements, provides targeting parameters for the placement of the needle, and tracks the rotation and needle-angle changes reported by the encoders while the manipulator is moved on target.&lt;br /&gt;
&lt;br /&gt;
After targeting, the software overlays the target and the projected needle path on the confirmation volume scan. This allows the physician to quickly assess the success of the intervention.&lt;br /&gt;
&lt;br /&gt;
===Image Processing Needs===&lt;br /&gt;
&lt;br /&gt;
Although the current software covers the intervention needs of the first project, additional functions are necessary to allow easy and quick access to the data before, during, and after the procedure, and to accommodate the needs of the other two projects.&lt;br /&gt;
&lt;br /&gt;
Segmentation and deformable registration functions, 3D visualization instead of the current 2D view, and the extensive data analysis technology of Slicer are all on the projects' software specification list.&lt;br /&gt;
&lt;br /&gt;
Basic requirements are good memory management and software stability. Since the program runs on a laptop, only very limited resources (CPU and memory) are available.&lt;br /&gt;
&lt;br /&gt;
The automatic segmentation algorithms are not always accurate, so the software should provide interactive correction capabilities, such as moving a slider to change the threshold followed by re-segmentation. Another interactive component is modification of the proposed needle path within the robot's constraints.&lt;br /&gt;
&lt;br /&gt;
LPS coordinate system: During the procedure, targets are selected on the 2D projection image obtained from the scanner, so the target coordinates are in the DICOM image coordinate system. This coordinate system is also used to display the parameters for the manual prescription and for the real-time tracking.&lt;br /&gt;
&lt;br /&gt;
Each landmarking defines an independent coordinate system, and this Frame of Reference (FoR) correspondence between volumes is essential for patient safety. The operating personnel must not be able to use registration data from one volume to target another volume if there is no transformation between the two coordinate systems.&lt;br /&gt;
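Two details of the paragraphs above lend themselves to a small illustration: DICOM target coordinates are expressed in LPS while Slicer works internally in RAS (the conversion negates the first two axes), and targeting should be refused unless the planning and targeting volumes share a Frame of Reference or an explicit registration between their FoRs exists. A sketch under those assumptions (all function and parameter names are hypothetical):

```python
def lps_to_ras(point):
    """DICOM patient coordinates are LPS (left-posterior-superior);
    Slicer uses RAS internally. The conversion negates the first two axes."""
    x, y, z = point
    return (-x, -y, z)

def targeting_allowed(plan_for_uid, volume_for_uid, registered_for_pairs):
    """Safety gate (illustrative): permit targeting only when both volumes
    share a DICOM Frame of Reference UID, or an explicit registration
    (transform) between the two FoRs has been recorded."""
    if plan_for_uid == volume_for_uid:
        return True
    return (plan_for_uid, volume_for_uid) in registered_for_pairs
```

A volume pair with different FoR UIDs and no recorded transform would then be rejected before any targeting parameters are displayed.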
&lt;br /&gt;
With OpenTracker-capable MRI scanners, we would like to use real-time needle tracking.&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
&lt;br /&gt;
Manifold benefits exist for both NA-MIC and the Brigham-Hopkins joint program in MRI-guided prostate interventions, owing to existing collaborations, the cross-compatibility of the research (MR-guided prostate interventions), and the shared Slicer/VTK/ITK-based software platform.&lt;br /&gt;
&lt;br /&gt;
The project's clinical partners are based in the intramural research program of the National Cancer Institute. The proposed NA-MIC DBP will thus tie a significant segment of extramural cancer research into a prominent intramural effort, leading to better understanding, coherency, and active collaboration between these otherwise disjoint efforts. For NA-MIC the benefits are also tangible: the functions will be developed in a controlled and professional environment in the CISST ERC, which has been in close collaboration with NA-MIC/Brigham. The development environments of the two groups are similar, in that both base their image processing tools on VTK, ITK, and Slicer and use many of the same development tools, including CVS, CMake, Doxygen, and Dart. In short, the proposed work will be conducted on a shared platform (VTK, ITK, and Slicer) with a compatible development process, and thus the results will be directly absorbable by NA-MIC.&lt;br /&gt;
&lt;br /&gt;
===Projects===&lt;br /&gt;
&lt;br /&gt;
*[[Collaboration/JHU/Brachytherapy needle positioning robot integration|Brachytherapy needle positioning robot integration]]&lt;br /&gt;
*[[DBP2:JHU:Roadmap|MRI-compatible transrectal prostate biopsy device]]&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
# Jemal, A., Siegel, R., Ward, E., Murray, T., Xu, J., Thun, M.J., &amp;quot;Cancer statistics, 2007&amp;quot;, ''CA Cancer J Clin'' 57(1) (2007) 43–66&lt;br /&gt;
# Yu, K.K., Hricak, H., &amp;quot;Imaging prostate cancer&amp;quot;, ''Radiol Clin North Am'' 38(1) (2000) 59–85, viii&lt;br /&gt;
# Norberg, M., Egevad, L., Holmberg, L., Sparén, P., Norlén, B.J., Busch, C., &amp;quot;The sextant protocol for ultrasound-guided core biopsies of the prostate underestimates the presence of cancer&amp;quot;, ''Urology'' 50(4) (1997) 562–566&lt;br /&gt;
# Terris, M.K., &amp;quot;Sensitivity and specificity of sextant biopsies in the detection of prostate cancer: preliminary report&amp;quot;, ''Urology'' 54(3) (1999) 486–489&lt;br /&gt;
# Krieger A, Csoma C, Guion P, Iordachita I, Metzger G, Qian D, Singh A, Whitcomb L, Fichtinger G, &amp;quot;Design and Preliminary Accuracy Studies of an MRI-Guided Transrectal Prostate Intervention System&amp;quot;, MICCAI 2007&lt;br /&gt;
# DiMaio, S.P., Kacher, D.F., Ellis, R.E., Fichtinger, G., Hata, N., Zientara, G.P., Panych, L.P., Kikinis, R., Jolesz, F.A., &amp;quot;Needle artifact localization in 3T MR images&amp;quot;, ''Stud Health Technol Inform'' 119 (2006) 120–125&lt;br /&gt;
# Krieger A, Csoma C, Iordachita I, Guion P, Fichtinger G, Whitcomb LL, &amp;quot;Design and Preliminary Accuracy Studies of an MRI-Guided Transrectal Prostate Intervention System&amp;quot;, MICCAI 2007&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DBP2:Queens&amp;diff=27440</id>
		<title>DBP2:Queens</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DBP2:Queens&amp;diff=27440"/>
		<updated>2008-06-24T15:46:44Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Segmentation and Registration Tools for Robotic Prostate Interventions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[DBP2:Main|NA-MIC DBP 2]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
= Overview of JHU DBP 2 =&lt;br /&gt;
== Image-Guided Percutaneous Surgery ==&lt;br /&gt;
&lt;br /&gt;
The Queen’s &amp;amp; Hopkins teams are developing novel systems and procedures for image-guided percutaneous needle-based surgical procedures. We develop a closely related family of systems guided by magnetic resonance imaging (MRI) for the diagnosis and treatment of prostate cancer and musculoskeletal conditions. The overall objective of this DBP is to create working clinical applications based on the NAMIC toolkit, namely Slicer and ITK/VTK tools. [[DBP2:JHU:Introduction|More...]]&lt;br /&gt;
&lt;br /&gt;
Data is provided at the following link: '''[[Data:DBP2:JHU|JHU Data]]'''.&lt;br /&gt;
&lt;br /&gt;
= JHU Roadmap Project =&lt;br /&gt;
&lt;br /&gt;
{| cellpadding=&amp;quot;10&amp;quot; border=&amp;quot;1&amp;quot; style=&amp;quot;background:lightblue&amp;quot;&lt;br /&gt;
&lt;br /&gt;
| style=&amp;quot;width:15%&amp;quot; | [[Image:TRProstateBiopsyModule.png|200px]]&lt;br /&gt;
| style=&amp;quot;width:85%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== [[DBP2:JHU:Roadmap|Prostate Biopsy Needle Positioning Robot Integration]] ==&lt;br /&gt;
&lt;br /&gt;
The Queen’s/Hopkins team is developing novel devices and procedures for cancer interventions, including biopsy and therapies.  Our roadmap project involves the development of an application for image-guided trans-rectal biopsy, utilizing a needle positioning robot with an integrated MRI RF coil.  [[DBP2:JHU:Roadmap|More...]]&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DBP2:Main&amp;diff=27439</id>
		<title>DBP2:Main</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DBP2:Main&amp;diff=27439"/>
		<updated>2008-06-24T15:40:19Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[Cores|NA-MIC Cores]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
{| style=&amp;quot;background: #FFFFFF&amp;quot; cellspacing=&amp;quot;10&amp;quot; align=&amp;quot;right&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
{| style=&amp;quot;background: #cccccc&amp;quot; border=&amp;quot;00&amp;quot; cellspacing=&amp;quot;3&amp;quot; cellpadding=&amp;quot;3&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
{|&lt;br /&gt;
|+ &amp;lt;b&amp;gt;Featured Article&amp;lt;/b&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
{|&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; width=&amp;quot;100px&amp;quot; align=&amp;quot;left&amp;quot; valign=&amp;quot;top&amp;quot; | [[Image:ProstateDiagram.png|200px|]]&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;250pt&amp;quot; valign=&amp;quot;top&amp;quot; | Gabor Fichtinger, Jonathan Fiene, Christopher W. Kennedy, Gernot Kronreif, Iulian I. Iordachita, Danny Y. Song, E. Clif Burdette, and Peter Kazanzides.&lt;br /&gt;
&lt;br /&gt;
[[Media:Gabor-robotic07.pdf|Robotic Assistance for Ultrasound Guided Prostate Brachytherapy]].&lt;br /&gt;
&lt;br /&gt;
Medical Image Analysis, 2008 (in press)&lt;br /&gt;
&lt;br /&gt;
[[DPB2:Past_Featured_Articles|Abstract]]&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
[[DPB2:Past_Featured_Articles|Past featured articles]]&lt;br /&gt;
|}&lt;br /&gt;
|}&lt;br /&gt;
|}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= DBP 2 Overview =&lt;br /&gt;
&lt;br /&gt;
The NA-MIC Driving Biological Problems (DBP) Core 3.2 identifies key biological problems to drive NA-MIC projects from 2007 to 2010. From the beginning of the NCBC project, NIH planned for a three-year cycle for the DBPs. In accordance with this policy, in 2007 the DBPs were shifted from schizophrenia to lupus, autism, velocardiofacial syndrome (VCFS), and prostate cancer. The process of selecting these DBPs is detailed [[DBP2:Selection|here]].&lt;br /&gt;
&lt;br /&gt;
= DBP 2 Projects - Organized by Site =&lt;br /&gt;
&lt;br /&gt;
* [[DBP2:Harvard|Harvard: Velocardiofacial Syndrome (VCFS) as a Genetic Model for Schizophrenia]]&lt;br /&gt;
* [[DBP2:JHU|JHU/Queen's: Image-Guided Percutaneous Surgery]]&lt;br /&gt;
* [[DBP2:MIND|MIND: The Analysis of Brain Lesions in Neuropsychiatric Systemic Lupus Erythematosus]]&lt;br /&gt;
* [[DBP2:UNC|UNC: Longitudinal MRI study of early brain development in neuropsychiatric disorder: UNC Autism Study]]&lt;br /&gt;
&lt;br /&gt;
= DBP 2 Projects - Organized by Subject Matter =&lt;br /&gt;
&lt;br /&gt;
* [[NA-MIC_Collaborations:New|NA-MIC Collaborations]]&lt;br /&gt;
&lt;br /&gt;
= Meetings and Events =&lt;br /&gt;
&lt;br /&gt;
Cross-links to meetings and events specifically related to these DBPs.&lt;br /&gt;
&lt;br /&gt;
*[[2007_December_Slicer_IGT_Programming|December 12-14, 2007, Slicer IGT Programming Event]]&lt;br /&gt;
* [[Oct-26-2006-MIND-visit_Visiting_the_MIND_institute|October 26, 2006: New DBP MIND Institute Visit]]&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DPB2:Past_Featured_Articles&amp;diff=27438</id>
		<title>DPB2:Past Featured Articles</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DPB2:Past_Featured_Articles&amp;diff=27438"/>
		<updated>2008-06-24T15:36:57Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[DBP2:Main|NA-MIC DBP 2]]&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;div id=&amp;quot;fichtinger_miccai_07&amp;quot;&amp;gt;'''G. Fichtinger et al. [[Media:Gabor-robotic07.pdf| Robotic Assistance for Ultrasound Guided Prostate Brachytherapy]]'''&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; width=&amp;quot;100px&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; |&lt;br /&gt;
[[Image:ProstateDiagram.png|thumb|Robotic Assistance for Ultrasound Guided Prostate Brachytherapy]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; | We present a robotically assisted prostate brachytherapy system and test results in training phantoms and Phase-I clinical trials. The system consists of a transrectal ultrasound (TRUS) and a spatially co-registered robot, fully integrated with an FDA-approved commercial treatment planning system. The salient feature of the system is a small parallel robot affixed to the mounting posts of the template. The robot replaces the template interchangeably, using the same coordinate system. Established clinical hardware, workflow and calibration remain intact. In all phantom experiments, we recorded the first insertion attempt without adjustment. All clinically relevant locations in the prostate were reached. Non-parallel needle trajectories were achieved. The pre-insertion transverse and rotational errors (measured with a Polaris optical tracker relative to the template's coordinate frame) were 0.25 mm (STD=0.17 mm) and 0.75 deg (STD=0.37 deg). In phantoms, needle tip placement errors measured in TRUS were 1.04 mm (STD=0.50 mm). A Phase-I clinical feasibility and safety trial has been successfully completed with the system. We encountered needle tip positioning errors of a magnitude greater than 4 mm in only 2 out of 179 robotically guided needles, in contrast to manual template guidance where errors of this magnitude are much more common. Further clinical trials are necessary to determine whether apparent benefits of the robotic assistant will lead to improvements in clinical efficacy and outcomes.&lt;br /&gt;
|&lt;br /&gt;
Gabor Fichtinger, Jonathan Fiene, Christopher W. Kennedy, Gernot Kronreif, Iulian I. Iordachita, Danny Y. Song, E. Clif Burdette, and Peter Kazanzides, [[Media:Gabor-robotic07.pdf| Robotic Assistance for Ultrasound Guided Prostate Brachytherapy]]. Medical Image Analysis, 2008 (in press)&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DPB2:Past_Featured_Articles&amp;diff=27437</id>
		<title>DPB2:Past Featured Articles</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DPB2:Past_Featured_Articles&amp;diff=27437"/>
		<updated>2008-06-24T15:35:39Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[DBP2:Main|NA-MIC DBP 2]]&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;div id=&amp;quot;fichtinger_miccai_07&amp;quot;&amp;gt;'''G. Fichtinger et al. [[Media:Gabor-robotic07.pdf| Robotic Assistance for Ultrasound Guided Prostate Brachytherapy]]'''&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; width=&amp;quot;100px&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; |&lt;br /&gt;
[[Image:ProstateDiagram.png|thumb|Robotic Assistance for Ultrasound Guided Prostate Brachytherapy]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; | We present a robotically assisted prostate brachytherapy system and test results in training phantoms and Phase-I clinical trials. The system consists of a transrectal ultrasound (TRUS) and a spatially co-registered robot, fully integrated with an FDA-approved commercial treatment planning system. The salient feature of the system is a small parallel robot affixed to the mounting posts of the template. The robot replaces the template interchangeably, using the same coordinate system. Established clinical hardware, workflow and calibration remain intact. In all phantom experiments, we recorded the first insertion attempt without adjustment. All clinically relevant locations in the prostate were reached. Non-parallel needle trajectories were achieved. The pre-insertion transverse and rotational errors (measured with a Polaris optical tracker relative to the template's coordinate frame) were 0.25 mm (STD=0.17 mm) and 0.75 deg (STD=0.37 deg). In phantoms, needle tip placement errors measured in TRUS were 1.04 mm (STD=0.50 mm). A Phase-I clinical feasibility and safety trial has been successfully completed with the system. We encountered needle tip positioning errors of a magnitude greater than 4 mm in only 2 out of 179 robotically guided needles, in contrast to manual template guidance where errors of this magnitude are much more common. Further clinical trials are necessary to determine whether apparent benefits of the robotic assistant will lead to improvements in clinical efficacy and outcomes.|-&lt;br /&gt;
|&lt;br /&gt;
Gabor Fichtinger, Jonathan Fiene, Christopher W. Kennedy, Gernot Kronreif, Iulian I. Iordachita, Danny Y. Song, E. Clif Burdette, and Peter Kazanzides, [[Media:Gabor-robotic07.pdf| Robotic Assistance for Ultrasound Guided Prostate Brachytherapy]]. Medical Image Analysis, 2008 (in press)&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DPB2:Past_Featured_Articles&amp;diff=27436</id>
		<title>DPB2:Past Featured Articles</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DPB2:Past_Featured_Articles&amp;diff=27436"/>
		<updated>2008-06-24T15:34:53Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[DBP2:Main|NA-MIC DBP 2]]&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;div id=&amp;quot;fichtinger_miccai_07&amp;quot;&amp;gt;'''G. Fichtinger et al. [[Media:Gabor-robotic07.pdf| Robotic Assistance for Ultrasound Guided Prostate Brachytherapy]]'''&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; width=&amp;quot;100px&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; |&lt;br /&gt;
[[Image:ProstateDiagram.png|thumb|Robotic Assistance for Ultrasound Guided Prostate Brachytherapy]]&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; | We present a robotically assisted prostate brachytherapy system and test results in training phantoms and Phase-I clinical trials. The system consists of a transrectal ultrasound (TRUS) and a spatially co-registered robot, fully integrated with an FDA-approved commercial treatment planning system. The salient feature of the system is a small parallel robot affixed to the mounting posts of the template. The robot replaces the template interchangeably, using the same coordinate system. Established clinical hardware, workflow and calibration remain intact. In all phantom experiments, we recorded the first insertion attempt without adjustment. All clinically relevant locations in the prostate were reached. Non-parallel needle trajectories were achieved. The pre-insertion transverse and rotational errors (measured with a Polaris optical tracker relative to the template's coordinate frame) were 0.25 mm (STD=0.17 mm) and 0.75 deg (STD=0.37 deg). In phantoms, needle tip placement errors measured in TRUS were 1.04 mm (STD=0.50 mm). A Phase-I clinical feasibility and safety trial has been successfully completed with the system. We encountered needle tip positioning errors of a magnitude greater than 4 mm in only 2 out of 179 robotically guided needles, in contrast to manual template guidance where errors of this magnitude are much more common. Further clinical trials are necessary to determine whether apparent benefits of the robotic assistant will lead to improvements in clinical efficacy and outcomes.|-&lt;br /&gt;
|&lt;br /&gt;
Gabor Fichtinger, Jonathan Fiene, Christopher W. Kennedy, Gernot Kronreif, Iulian I. Iordachita, Danny Y. Song, E. Clif Burdette, and Peter Kazanzides, [[Media:Gabor-robotic07.pdf| Robotic Assistance for Ultrasound Guided Prostate Brachytherapy]]. Proceedings of the 10th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2007, LNCS 4791, pp. 119–127, 2007.&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DBP2:Main&amp;diff=27435</id>
		<title>DBP2:Main</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DBP2:Main&amp;diff=27435"/>
		<updated>2008-06-24T15:32:34Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; Back to [[Cores|NA-MIC Cores]]&lt;br /&gt;
__NOTOC__&lt;br /&gt;
{| style=&amp;quot;background: #FFFFFF&amp;quot; cellspacing=&amp;quot;10&amp;quot; align=&amp;quot;right&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
{| style=&amp;quot;background: #cccccc&amp;quot; border=&amp;quot;00&amp;quot; cellspacing=&amp;quot;3&amp;quot; cellpadding=&amp;quot;3&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
{|&lt;br /&gt;
|+ &amp;lt;b&amp;gt;Featured Article&amp;lt;/b&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
{|&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; width=&amp;quot;100px&amp;quot; align=&amp;quot;left&amp;quot; valign=&amp;quot;top&amp;quot; | [[Image:ProstateDiagram.png|200px|]]&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;250pt&amp;quot; valign=&amp;quot;top&amp;quot; | Gabor Fichtinger, Jonathan Fiene, Christopher W. Kennedy, Gernot Kronreif, Iulian I. Iordachita, Danny Y. Song, E. Clif Burdette, and Peter Kazanzides.&lt;br /&gt;
&lt;br /&gt;
[[Media:Gabor-robotic07.pdf|Robotic Assistance for Ultrasound Guided Prostate Brachytherapy]].&lt;br /&gt;
&lt;br /&gt;
Medical Image Analysis, 2008 (in press)&lt;br /&gt;
&lt;br /&gt;
[[DPB2:Past_Featured_Articles|Abstract]]&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
[[DPB2:Past_Featured_Articles|Past featured articles]]&lt;br /&gt;
|}&lt;br /&gt;
|}&lt;br /&gt;
|}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= DBP 2 Overview =&lt;br /&gt;
&lt;br /&gt;
The NA-MIC Driving Biological Problems (DBP) Core 3.2 identifies key biological problems to drive NA-MIC projects from 2007 to 2010. From the beginning of the NCBC project, NIH planned for a three-year cycle for the DBPs. In accordance with this policy, in 2007 the DBPs were shifted from schizophrenia to lupus, autism, velocardiofacial syndrome (VCFS), and prostate cancer. The process of selecting these DBPs is detailed [[DBP2:Selection|here]].&lt;br /&gt;
&lt;br /&gt;
= DBP 2 Projects - Organized by Site =&lt;br /&gt;
&lt;br /&gt;
* [[DBP2:Harvard|Harvard: Velocardiofacial Syndrome (VCFS) as a Genetic Model for Schizophrenia]]&lt;br /&gt;
* [[DBP2:JHU|JHU: Segmentation and Registration Tools for Robotic Prostate Interventions]]&lt;br /&gt;
* [[DBP2:MIND|MIND: The Analysis of Brain Lesions in Neuropsychiatric Systemic Lupus Erythematosus]]&lt;br /&gt;
* [[DBP2:UNC|UNC: Longitudinal MRI study of early brain development in neuropsychiatric disorder: UNC Autism Study]]&lt;br /&gt;
&lt;br /&gt;
= DBP 2 Projects - Organized by Subject Matter =&lt;br /&gt;
&lt;br /&gt;
* [[NA-MIC_Collaborations:New|NA-MIC Collaborations]]&lt;br /&gt;
&lt;br /&gt;
= Meetings and Events =&lt;br /&gt;
&lt;br /&gt;
Cross-links to meetings and events specifically related to these DBPs.&lt;br /&gt;
&lt;br /&gt;
*[[2007_December_Slicer_IGT_Programming|December 12-14, 2007, Slicer IGT Programming Event]]&lt;br /&gt;
* [[Oct-26-2006-MIND-visit_Visiting_the_MIND_institute|October 26, 2006: New DBP MIND Institute Visit]]&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2008_Summer_Project_Week&amp;diff=25376</id>
		<title>2008 Summer Project Week</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2008_Summer_Project_Week&amp;diff=25376"/>
		<updated>2008-05-21T01:22:04Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Attendee List */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[Engineering:Programming_Events|Programming/Project Events]]&lt;br /&gt;
&lt;br /&gt;
[[Image:ProjectWeek-2008.png|thumb|220px|right|Summer 2008]]&lt;br /&gt;
&lt;br /&gt;
== Logistics ==&lt;br /&gt;
&lt;br /&gt;
'''Dates:''' June 23-27, 2008&lt;br /&gt;
&lt;br /&gt;
'''Location:''' MIT. [[Meeting_Locations:MIT_Grier_A_%26B|Grier Rooms A &amp;amp; B: 34-401A &amp;amp; 34-401B]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Registration Fee:''' $260 (this will cover the cost of breakfast, lunch and coffee breaks for the week). Due by Friday, June 13th, 2008. Please make checks out to &amp;quot;Massachusetts Institute of Technology&amp;quot; and mail to: Donna Kaufman, MIT, 77 Massachusetts Ave., 38-409a, Cambridge, MA 02139&lt;br /&gt;
&lt;br /&gt;
If you are attending for one day only, the registration fee is not required.&lt;br /&gt;
&lt;br /&gt;
'''Hotel:''' We have a group rate of $239/night (plus tax) for a room with either 1 king or 2 queen beds at the [http://www.hotelatmit.com Hotel at MIT (now called Le Meridien)]. [http://www.starwoodmeeting.com/StarGroupsWeb/booking/reservation?id=0805167317&amp;amp;key=4FD1B Please click here to reserve.] This rate is good only through June 1.&lt;br /&gt;
&lt;br /&gt;
Here is some information about several other Boston area hotels that are convenient to NA-MIC events: [[Boston_Hotels|Boston_Hotels]]. Summer is tourist season in Boston, so please book your rooms early.&lt;br /&gt;
&lt;br /&gt;
([[Project Week Logistics Checklist|This is a checklist for the onsite planning items]])&lt;br /&gt;
&lt;br /&gt;
==Introduction to NA-MIC Project Week==&lt;br /&gt;
&lt;br /&gt;
NA-MIC Project Week is a hands on activity -- programming using the [[NA-MIC-Kit|NA-MIC Kit]], algorithm design, and clinical application -- that has become one of the major events in the [[NA-MIC-Kit|NA-MIC Kit]] calendar. This event is the seventh of the [[Engineering:Programming_Events|'''series''']]. It is held in the summer at MIT (typically the last week of June), and a shorter version is held in Salt Lake City in the winter (typically the second week of January).  &lt;br /&gt;
The main goal of these events is to move forward the deliverables of NA-MIC. NA-MIC participants and their collaborators are welcome to attend.&lt;br /&gt;
&lt;br /&gt;
* NA-MIC Members: Participation in this event is voluntary -- if you don't think this will help you move forward in your work, there is no obligation to attend.&lt;br /&gt;
* Ideal candidates are those who want to contribute to the [[NA-MIC-Kit|NA-MIC Kit]], and those who can help make it happen.&lt;br /&gt;
* This is not an introduction to the components of the [[NA-MIC-Kit|NA-MIC Kit]].&lt;br /&gt;
* NA-MIC Core 1 (Algorithms) - bring your algorithms and code to work on in the company of Core 2 engineers and Core 3 scientists.&lt;br /&gt;
* NA-MIC Core 2 (Engineering) - bring your code for infrastructure and applications to extend the [[NA-MIC-Kit|NA-MIC Kit]] capabilities, integrate Core 1 algorithms, and refine workflows for Core 3.&lt;br /&gt;
* NA-MIC Core 3 (DBP) - bring your data to work on with the [[NA-MIC-Kit|NA-MIC Kit]] and get assistance and provide feedback to Core 1 scientists and Core 2 engineers.&lt;br /&gt;
* External Collaborators - if you are working on a project that uses the [[NA-MIC-Kit|NA-MIC kit]], and want to participate to get help from NA-MIC Engineering, please send an email to Tina Kapur (tkapur at bwh.harvard.edu).  Please note that the event is open to people outside NA-MIC, subject to availability.&lt;br /&gt;
* Everyone should '''bring a laptop'''. We will have four projectors.&lt;br /&gt;
* About half the time will be spent working on projects and the other half in project related discussions.&lt;br /&gt;
* You '''do''' need to be actively working on a NA-MIC related project in order to make this investment worthwhile for everyone.&lt;br /&gt;
&lt;br /&gt;
== Agenda==&lt;br /&gt;
* Monday &lt;br /&gt;
** noon-1pm lunch &lt;br /&gt;
**1pm: Welcome (Ron Kikinis)&lt;br /&gt;
** 1:05-3:30pm Introduce [[#Projects|Projects]] using templated wiki pages (all Project Leads) ([[NA-MIC/Projects/Theme/Template|Wiki Template]]) &lt;br /&gt;
** 3:30-5:30pm Start project work&lt;br /&gt;
* Tuesday &lt;br /&gt;
** 8:30am breakfast&lt;br /&gt;
** 9:00-9:45am: NA-MIC Software Process &lt;br /&gt;
** 10-10:30am [[Project Week 2008 Slicer 3.0 Update|Slicer 3.0 Update]] (Jim Miller, Steve Pieper)&lt;br /&gt;
** noon lunch&lt;br /&gt;
** 2:30-3:30pm: [[Project Week 2008 Special topic breakout: Non-Linear Registration]] &lt;br /&gt;
** 5:30pm adjourn for day&lt;br /&gt;
* Wednesday &lt;br /&gt;
** 8:30am breakfast&lt;br /&gt;
** 9:00-12pm [[Project Week 2008 Special topic breakout: ITK]] (Luis Ibanez)&lt;br /&gt;
** noon lunch&lt;br /&gt;
** 2:30-3:30pm: [[Project Week 2008 Special topic breakout: XNAT Database]] (Daniel Marcus)&lt;br /&gt;
** 5:30pm adjourn for day&lt;br /&gt;
* Thursday&lt;br /&gt;
** 8:30am breakfast&lt;br /&gt;
** noon lunch&lt;br /&gt;
**2:30-3:30pm [[Project Week 2008 Special topic breakout: GWE]] (Marco Ruiz)&lt;br /&gt;
** 5:30pm adjourn for day&lt;br /&gt;
* Friday &lt;br /&gt;
** 8:30am breakfast&lt;br /&gt;
** 10am-noon: Project Progress reports using updated [[#Projects|Project Wiki pages]]&lt;br /&gt;
** noon lunch boxes and adjourn.  (Next one [[AHM_2009| in Utah the week of Jan 5, 2009]])&lt;br /&gt;
&lt;br /&gt;
== Preparation ==&lt;br /&gt;
&lt;br /&gt;
# Please make sure that you are on the [http://public.kitware.com/cgi-bin/mailman/listinfo/na-mic-project-week na-mic-project-week mailing list]&lt;br /&gt;
&lt;br /&gt;
# [[Engineering:TCON_2008|May 08 and May 15 TCON DBPs ONLY]] at 3pm ET to discuss NA-MIC DBP Projects ONLY. &lt;br /&gt;
# [[Engineering:TCON_2008|May 22 TCON#1]] at 3pm ET to discuss NA-MIC Engr Core Projects and Assign/Verify Teams&lt;br /&gt;
# [[Engineering:TCON_2008|May 29 TCON#2]] at 3pm ET to discuss NA-MIC ALGORITHMS Core Lead Projects.  Project leads should sign up for a slot [[Engineering:TCON_2008|here]]. Projects will be discussed in order of the signups. &lt;br /&gt;
# [[Engineering:TCON_2008|June 5 TCON#3]] at 3pm ET to discuss NA-MIC EXTERNAL Collaborations.  All NIH funded &amp;quot;collaborations with NCBC&amp;quot; leads should call. Project leads should sign up for a slot [[Engineering:TCON_2008|here]].  Projects will be discussed in order of the signups. &lt;br /&gt;
# [[Engineering:TCON_2008|June 12 TCON#4]] at 3pm ET to discuss NA-MIC EXTERNAL Collaborations.  All other collaboration leads should call. Project leads should sign up for a slot [[Engineering:TCON_2008|here]].  Projects will be discussed in order of the signups. &lt;br /&gt;
# [[Engineering:TCON_2008|June 19 TCON#5]] at 3pm ET to tie loose ends.  Anyone with un-addressed questions should call.&lt;br /&gt;
# By 3pm ET on June 12, 2008: [[NA-MIC/Projects/Theme/Template|Complete a templated wiki page for your project]]. Please do not edit the template page itself, but create a new page for your project and cut-and-paste the text from this template page.  If you have questions, please send an email to tkapur at bwh.harvard.edu.&lt;br /&gt;
# By 3pm on June 19, 2008: Create a directory for each project on the [[Engineering:SandBox|NAMIC Sandbox]] (Zack)&lt;br /&gt;
## Commit on each sandbox directory the code examples/snippets that represent our first guesses of appropriate methods. (Luis and Steve will help with this, as needed)&lt;br /&gt;
## Gather test images in any of the data sharing resources we have (e.g. the BIRN). These need not be many: at least three different cases, so we can get an idea of the modality-specific characteristics of these images. Put the IDs of these data sets on the wiki page. (The participants must do this.)&lt;br /&gt;
## Setup nightly tests on a separate Dashboard, where we will run the methods that we are experimenting with. The test should post result images and computation time. (Zack)&lt;br /&gt;
# Please note that by the time we get to the project event, we should be trying to close off a project milestone rather than starting to work on one...&lt;br /&gt;
&lt;br /&gt;
== A History in Wiki Links ==&lt;br /&gt;
&lt;br /&gt;
A history of all the programming/project events in NA-MIC is available by following [[Engineering:Programming_Events|this link]].&lt;br /&gt;
&lt;br /&gt;
== Projects ==&lt;br /&gt;
&lt;br /&gt;
===DBP II===&lt;br /&gt;
These are projects by the new set of DBPs:&lt;br /&gt;
#[[DBP2:Harvard|Velocardio Facial Syndrome (VCFS) as a Genetic Model for Schizophrenia]] (Harvard: Marek Kubicki, PI)&lt;br /&gt;
##Add Projects for this DBP here...&lt;br /&gt;
#[[DBP2:UNC|Longitudinal MRI Study of Early Brain Development in Autism]] (UNC: Heather Hazlett, Joseph Piven, PI)&lt;br /&gt;
##Add Projects for this DBP here...&lt;br /&gt;
#[[DBP2:MIND|Analysis of Brain Lesions in Lupus]] (MIND/UNM: Jeremy Bockholt, Charles Gasparovic PI)&lt;br /&gt;
##Add Projects for this DBP here...&lt;br /&gt;
#[[DBP2:JHU|Segmentation and Registration Tools for Robotic Prostate Intervention]] (Queens/JHU: Gabor Fichtinger, PI)&lt;br /&gt;
##Add Projects for this DBP here...&lt;br /&gt;
&lt;br /&gt;
===Structural Analysis===&lt;br /&gt;
&lt;br /&gt;
===Diffusion Image Analysis===&lt;br /&gt;
&lt;br /&gt;
===Calibration/Validation===&lt;br /&gt;
&lt;br /&gt;
===NA-MIC Kit - Slicer 3===&lt;br /&gt;
&lt;br /&gt;
===External Collaborations===&lt;br /&gt;
#[[NA-MIC/Projects/Collaboration/UWA-Perth]] (Adam Wittek)&lt;br /&gt;
#[[NA-MIC/Projects/Collaboration/MRSI Module for Slicer]] (Bjoern Menze)&lt;br /&gt;
&lt;br /&gt;
===Non-Medical Collaborations===&lt;br /&gt;
&lt;br /&gt;
==Attendee List==&lt;br /&gt;
# Ron Kikinis, BWH&lt;br /&gt;
# Gary Christensen, The University of Iowa&lt;br /&gt;
# Jeffrey Hawley, Gary Christensen's student&lt;br /&gt;
# Kate Raising, Gary Christensen's student&lt;br /&gt;
# Nathan Fritze, Gary Christensen's student&lt;br /&gt;
# Paul Song, Gary Christensen's student&lt;br /&gt;
# Cheng Zhang, Gary Christensen's student&lt;br /&gt;
# Ying Wei, Gary Christensen's student&lt;br /&gt;
# Nathan Burnette, The University of Iowa&lt;br /&gt;
# Steve Pieper, Isomics, Core 2/6&lt;br /&gt;
# Dana C. Peters, BIDMC Harvard Medical&lt;br /&gt;
# Jason Taclas, BIDMC Harvard Medical&lt;br /&gt;
# Nicole Aucoin, BWH, Core 2&lt;br /&gt;
# Will Schroeder, Kitware, Cores 2/4&lt;br /&gt;
# Sebastien Barre, Kitware, Core 2&lt;br /&gt;
# Julien Jomier, Kitware, Core 2&lt;br /&gt;
# Luis Ibanez, Kitware, Core 2&lt;br /&gt;
# Curtis Lisle, KnowledgeVis, Core 2&lt;br /&gt;
# Katie Hayes, BWH, Core 2&lt;br /&gt;
# Randy Gollub, MGH, Core 5&lt;br /&gt;
# Clement Vachet, UNC, Core 3&lt;br /&gt;
# Casey Goodlett, Utah, Core 1&lt;br /&gt;
# Jeffrey Grethe, UCSD, Core 2&lt;br /&gt;
# Marco Ruiz, UCSD, Core 2&lt;br /&gt;
# Zhen Qian, Rutgers University&lt;br /&gt;
# Jinghao Zhou, Rutgers University&lt;br /&gt;
# Luca Antiga, Mario Negri Institute&lt;br /&gt;
# Adam Wittek, The University of Western Australia&lt;br /&gt;
# Grand Joldes, The University of Western Australia&lt;br /&gt;
# Jamie Berger, The University of Western Australia&lt;br /&gt;
# Serdar Balci, MIT, Core 1&lt;br /&gt;
# Bryce Kim, MIT, Core1&lt;br /&gt;
# Vincent Magnotta, The University of Iowa&lt;br /&gt;
# Tina Kapur, BWH, Core 6&lt;br /&gt;
# Carling Cheung, Robarts Research Institute / The University of Western Ontario&lt;br /&gt;
# Danielle Pace, Robarts Research Institute / The University of Western Ontario&lt;br /&gt;
# Sean Megason, Dept of Systems Biology, Harvard Medical School&lt;br /&gt;
# Alex Gouaillard, Dept of Systems Biology, Harvard Medical School&lt;br /&gt;
# Kishore Mosaliganti, Dept of Systems Biology, Harvard Medical School&lt;br /&gt;
# Arnaud Gelas, Dept of Systems Biology, Harvard Medical School&lt;br /&gt;
# Sonia Pujol, Surgical Planning Laboratory, BWH&lt;br /&gt;
# Bjoern Menze, (then) Surgical Planning Laboratory, BWH&lt;br /&gt;
# Alex Yarmarkovich, Isomics, Core 2&lt;br /&gt;
# Sylvain Bouix, BWH, Core 3&lt;br /&gt;
# Chris Churas, UCSD, Core 2&lt;br /&gt;
# John Melonakos, Georgia Tech, Core 1&lt;br /&gt;
# Yi Gao, Georgia Tech, Core 1&lt;br /&gt;
# Tauseef Rehman, Georgia Tech, Core 1&lt;br /&gt;
# Clare Poynton, MIT, Core 1&lt;br /&gt;
# H. Jeremy Bockholt, MRN Lupus DBP Core 3&lt;br /&gt;
# Mark Scully, MRN Lupus DBP Core 3&lt;br /&gt;
# Gabor Fichtinger, Queen's, Core 2&lt;br /&gt;
# David Gobbi, Queen's, Core 2&lt;br /&gt;
# Purang Abolmaesumi, Queen's, Core 2&lt;br /&gt;
# Siddharth Vikal, Queen's, Core 2&lt;br /&gt;
&lt;br /&gt;
==Pictures==&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2008_Annual_Scientific_Report&amp;diff=24630</id>
		<title>2008 Annual Scientific Report</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2008_Annual_Scientific_Report&amp;diff=24630"/>
		<updated>2008-05-15T15:34:52Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Overview (Fichtinger) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[2008_Progress_Report]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Guidelines for preparation=&lt;br /&gt;
&lt;br /&gt;
*[[2008_Progress_Report#Scientific Report Timeline]] - Main point is that May 15 is the date by which all sections below need to be completed.  No extensions are possible.&lt;br /&gt;
*DBPs - If there is work outside of the roadmap projects that you would like to report, you are welcome to create a separate section for it under &amp;quot;Other&amp;quot;.  &lt;br /&gt;
*The outline for this report is similar to the 2007 report, which is provided here for reference: [[2007_Annual_Scientific_Report]].&lt;br /&gt;
*In preparing summaries for each of the 8 topics in this report, please leverage the detailed pages for projects provided here: [[NA-MIC_Internal_Collaborations]].&lt;br /&gt;
*Publications will be mined from the SPL publications database. All core PIs need to ensure that all NA-MIC publications are in the publications database by May 15.&lt;br /&gt;
&lt;br /&gt;
=Introduction (Tannenbaum)=&lt;br /&gt;
&lt;br /&gt;
The National Alliance for Medical Imaging Computing (NA-MIC) is now in its fourth year. This Center comprises a multi-institutional, interdisciplinary team of computer scientists, software engineers, and medical investigators who have come together to develop and apply computational tools for the analysis and visualization of medical imaging data. A further purpose of the Center is to provide infrastructure and environmental support for the development of computational algorithms and open source technologies, and to oversee the training and dissemination of these tools to the medical research community. The driving biological projects (DBPs) for the Center's first three years were inspired by schizophrenia research. In the fourth year, new DBPs have been added. Three are centered around diseases of the brain: (a) brain lesion analysis in neuropsychiatric systemic lupus erythematosus; (b) a study of cortical thickness for autism; and (c) stochastic tractography for VCFS. In a new direction, we have added a DBP on the prostate: brachytherapy needle positioning robot integration.&lt;br /&gt;
&lt;br /&gt;
We briefly summarize the work of NA-MIC during the four years of its existence. In year one of the Center, alliances were forged among the cores and constituent groups in order to integrate the efforts of the cores and to define the kinds of tools needed for specific imaging applications. The second year emphasized the identification of the key research thrusts that cut across cores and were driven by the needs and requirements of the DBPs. This led to the formulation of the Center's four main themes: Diffusion Tensor Analysis, Structural Analysis, Functional MRI Analysis, and the integration of newly developed tools into the NA-MIC Tool Kit. The third year of Center activity was devoted to continuing these collaborative efforts in order to deliver solutions to the various brain-oriented DBPs.&lt;br /&gt;
&lt;br /&gt;
Year four has seen progress with the work of our new DBPs. As alluded to above, these include work on neuropsychiatric disorders such as Systemic Lupus Erythematosus (MIND Institute, University of New Mexico), Velocardiofacial Syndrome (Harvard), and Autism (University of North Carolina, Chapel Hill), as well as the prostate interventional work (Johns Hopkins and Queen's Universities). We already have a number of publications, as indicated on our publications page, and software development is continuing as well.&lt;br /&gt;
&lt;br /&gt;
In the next section (Section 3), we summarize this year’s progress on the four roadmap projects listed above: Section 3.1, stochastic tractography for Velocardiofacial Syndrome; Section 3.2, brachytherapy needle positioning for the prostate; Section 3.3, brain lesion analysis in neuropsychiatric systemic lupus erythematosus; and Section 3.4, cortical thickness for autism. Next, in Section 4, we describe recent work on the four infrastructure topics. These include: Diffusion Image analysis (Section 4.1), Structural analysis (Section 4.2), Functional MRI analysis (Section 4.3), and the NA-MIC Toolkit (Section 4.4).  In Section 4.5, we outline some of the other key projects, in Section 4.6 some key highlights including the integration of the EM Segmentor into Slicer, and in Section 4.7 the impact of biocomputing at three different levels: within the center, within the NIH-funded research community, and externally to a national and international community. The final section of this report, Section 4.8, provides a timeline of Center activities.&lt;br /&gt;
&lt;br /&gt;
=Clinical Roadmap Projects=&lt;br /&gt;
==Roadmap Project: Stochastic Tractography for VCFS (Kubicki)==&lt;br /&gt;
===Overview (Kubicki)===&lt;br /&gt;
The goal of this project is to create an end-to-end application that would be useful in evaluating anatomical connectivity between segmented cortical regions of the brain. The ultimate goal of our program is to understand anatomical connectivity similarities and differences between the genetically related schizophrenia and velocardiofacial syndrome. Thus we plan to use the &amp;quot;stochastic tractography&amp;quot; tool for the analysis of abnormalities in the integrity, or connectivity, provided by the arcuate fasciculus, a fiber bundle involved in language processing, in schizophrenia and VCFS.&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Golland)===&lt;br /&gt;
At the core of this project is the stochastic tractography algorithm&lt;br /&gt;
developed and implemented in collaboration between MIT and&lt;br /&gt;
BWH. Stochastic Tractography is a Bayesian approach to estimating&lt;br /&gt;
nerve fiber tracts from DTI images.&lt;br /&gt;
&lt;br /&gt;
We first use the diffusion tensor at each voxel in the volume to&lt;br /&gt;
construct a local probability distribution for the fiber direction&lt;br /&gt;
around the principal direction of diffusion. We then sample the tracts&lt;br /&gt;
between two user-selected ROIs by simulating a random walk between&lt;br /&gt;
the regions, based on the local transition probabilities inferred from&lt;br /&gt;
the DTI image.&lt;br /&gt;
&lt;br /&gt;
The resulting collection of fibers and the associated FA values&lt;br /&gt;
provide useful statistics on the properties of connections between the&lt;br /&gt;
two regions. To constrain the sampling process to the relevant white&lt;br /&gt;
matter region, we use atlas-based segmentation to label ventricles and&lt;br /&gt;
gray matter and to exclude them from the search space. As such, this&lt;br /&gt;
step relies heavily on the registration and segmentation functionality&lt;br /&gt;
in Slicer.&lt;br /&gt;
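The sampling scheme above can be sketched in a few lines. This is a minimal toy, assuming a simple Gaussian perturbation of the principal direction in place of the actual Bayesian posterior; the function names and the constant one-direction tensor field are purely illustrative, not NA-MIC code.&lt;br /&gt;

```python
# Toy sketch of random-walk tract sampling: at each voxel a direction is
# drawn from a local distribution concentrated around the principal
# diffusion direction, and the walk steps along it.
import numpy as np

rng = np.random.default_rng(0)

def sample_direction(principal, kappa=20.0):
    """Draw a unit vector near `principal` (stand-in for the local
    fiber-direction posterior)."""
    noise = rng.normal(scale=1.0 / np.sqrt(kappa), size=3)
    d = principal + noise
    return d / np.linalg.norm(d)

def walk(seed, principal_field, n_steps=50, step=1.0):
    """Simulate one random walk (one sampled tract) from `seed`."""
    pts = [np.asarray(seed, float)]
    for _ in range(n_steps):
        # look up the principal direction at the nearest grid voxel
        ijk = tuple(np.clip(np.round(pts[-1]).astype(int), 0, 9))
        d = sample_direction(principal_field[ijk])
        pts.append(pts[-1] + step * d)
    return np.array(pts)

# Toy field: fibers everywhere aligned with the x axis on a 10^3 grid.
field = np.zeros((10, 10, 10, 3))
field[..., 0] = 1.0
tract = walk(seed=(0, 5, 5), principal_field=field)
# tract is a (n_steps + 1, 3) array of points drifting along +x
```

In the real pipeline, many such walks between the two ROIs are collected and FA statistics are computed over them.&lt;br /&gt;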
&lt;br /&gt;
Over the last year, we first tested the algorithm on the schizophrenia&lt;br /&gt;
dataset already available to NA-MIC, acquired at 1.5T. This step&lt;br /&gt;
allowed us to optimize the algorithm for our dataset, as well as to&lt;br /&gt;
develop a pipeline for data analysis that could then be easily&lt;br /&gt;
transferred to other image sets and structures.&lt;br /&gt;
&lt;br /&gt;
The next step, also accomplished this last year, was to apply the&lt;br /&gt;
algorithm to the new, higher-resolution NA-MIC dataset, and to study&lt;br /&gt;
smaller white matter connections including the cingulum bundle, arcuate&lt;br /&gt;
fasciculus, uncinate fasciculus, and internal capsule. This step was&lt;br /&gt;
accomplished and the data presented at the Santa Fe meeting in October&lt;br /&gt;
2007.&lt;br /&gt;
&lt;br /&gt;
Upon completion of the testing phase, we started analysis of the arcuate&lt;br /&gt;
fasciculus, a language-related fiber bundle, in the new 3T high-resolution&lt;br /&gt;
dataset. Our current work focuses on improving the parameterization&lt;br /&gt;
of the tracts, in order to obtain FA measurements along the tracts.&lt;br /&gt;
&lt;br /&gt;
===Engineering Component (Davis)===&lt;br /&gt;
The Stochastic Tractography Slicer module has been finished and presented&lt;br /&gt;
at the AHM in SLC. It is now part of Slicer 2.8 and Slicer 3. Module&lt;br /&gt;
documentation has also been created. Current engineering efforts are&lt;br /&gt;
concentrated on maintaining the module, optimizing it to work with&lt;br /&gt;
other data formats, and adding new functionality, such as better&lt;br /&gt;
registration, distortion correction, and ways of extracting and&lt;br /&gt;
measuring FA along the tracts.&lt;br /&gt;
&lt;br /&gt;
===Clinical Component (Kubicki)===&lt;br /&gt;
Over the last year, we tested the algorithm on the already available&lt;br /&gt;
NA-MIC dataset of schizophrenia subjects acquired at 1.5T. The anterior&lt;br /&gt;
limb of the internal capsule, a large structure connecting the thalamus&lt;br /&gt;
with the frontal lobe, was extracted and analyzed in a group of 20&lt;br /&gt;
schizophrenics and 20 control subjects. We presented the results&lt;br /&gt;
showing group differences in FA values at the ACNP symposium in&lt;br /&gt;
December 2007. Next, stochastic tractography was tested and optimized&lt;br /&gt;
for the new, high-resolution DTI dataset acquired on a 3T GE magnet.&lt;br /&gt;
&lt;br /&gt;
Upon completion of the testing phase, we started analysis of the&lt;br /&gt;
arcuate fasciculus, a language-related fiber bundle, in 20 controls and&lt;br /&gt;
20 chronic schizophrenics. For each subject, we performed white&lt;br /&gt;
matter segmentation and extracted the regions interconnected by the&lt;br /&gt;
arcuate fasciculus (the inferior frontal and superior temporal gyri),&lt;br /&gt;
as well as another ROI to guide the tract (a &amp;quot;waypoint&amp;quot; ROI). We presented&lt;br /&gt;
the preliminary results of the probabilistic tractography and the&lt;br /&gt;
statistics of FA extracted for each tract for a small set of 7&lt;br /&gt;
patients and 12 controls at the AHM in January 2008. The full study is&lt;br /&gt;
currently underway.&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:Harvard:Brain_Segmentation_Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Brachytherapy Needle Positioning Robot Integration (Fichtinger)==&lt;br /&gt;
===Overview (Fichtinger)===&lt;br /&gt;
Numerous studies have demonstrated the efficacy of image-guided&lt;br /&gt;
needle-based therapy and biopsy in the management of prostate&lt;br /&gt;
cancer. The accuracy of traditional prostate interventions performed using&lt;br /&gt;
transrectal ultrasound (TRUS) is limited by image fidelity, needle&lt;br /&gt;
template guides, needle deflection and tissue deformation. Magnetic Resonance&lt;br /&gt;
Imaging (MRI) is an ideal modality for guiding and monitoring&lt;br /&gt;
such interventions due to its excellent visualization of the prostate, its&lt;br /&gt;
sub-structure and surrounding tissues. &lt;br /&gt;
&lt;br /&gt;
We have designed a comprehensive robotic assistant system that allows prostate biopsy and brachytherapy&lt;br /&gt;
procedures to be performed entirely inside a 3T closed MRI scanner. The current system uses a transrectal approach to the prostate. An MRI-compatible manipulator is equipped with a steerable needle&lt;br /&gt;
guide and an endorectal imaging coil, both tuned to 3T magnets and not tied to any particular scanner.&lt;br /&gt;
&lt;br /&gt;
Under the NA-MIC initiative, the image computing, visualization, intervention planning, and kinematic planning interface is being accomplished with an open-source system built on the NA-MIC toolkit and its components, such as ITK. &lt;br /&gt;
&lt;br /&gt;
Of particular clinical importance is the incorporation of unsupervised prostate segmentation and registration methods, also developed under the NA-MIC umbrella.&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Tannenbaum)===&lt;br /&gt;
We have worked on both the segmentation and the registration of the prostate from MRI and ultrasound data. We explain each of the steps now.&lt;br /&gt;
&lt;br /&gt;
====Prostate Segmentation====&lt;br /&gt;
&lt;br /&gt;
We first must extract the prostate. We have considered three possible methods: a combination of Cellular Automata (CA, also known as Grow Cut) with Geometric Active Contour (GAC) methods; employing an ellipsoid to match the prostate in the 3D image; and a shape-based approach using spherical wavelets. More details are given below, and images and further details may be found at [[Projects:ProstateSegmentation|GaTech Algorithm Prostate Segmentation]].&lt;br /&gt;
&lt;br /&gt;
1. A cellular automata algorithm is used to give an initial segmentation. It begins with a rough manual initialization and then iteratively classifies all pixels into object and background until convergence. It effectively overcomes the problems of weak boundaries and inhomogeneity within the object or background. This in turn is fed into a Geometric Active Contour for finer tuning. We are initially using the edge-based minimal surface approach (the generalization of the standard Geodesic Active Contour model), which seems to give very reasonable results. Both steps of the algorithm are implemented in 3D. An ITK Cellular Automata filter, dealing with N-D data, has already been completed and submitted to the NA-MIC SandBox.&lt;br /&gt;
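The Grow Cut step above can be sketched as a label-spreading cellular automaton: a seeded label conquers a neighbor whenever its strength, attenuated by image dissimilarity, exceeds the neighbor's current strength. The toy below (the strength rule, the wrap-around neighborhood via np.roll, and all names are illustrative assumptions, not the NA-MIC implementation) shows the core idea on a two-region image.&lt;br /&gt;

```python
# Toy Grow Cut: labels spread from seeds; attack strength drops across
# intensity edges, so labels stop at the object/background boundary.
import numpy as np

def grow_cut(image, labels, n_iter=30):
    lab = labels.copy()
    s = (labels != 0).astype(float)       # seed cells start at full strength
    span = np.ptp(image) + 1e-9
    for _ in range(n_iter):
        for axis in (0, 1):
            for shift in (1, -1):
                lab_n = np.roll(lab, shift, axis)
                s_n = np.roll(s, shift, axis)
                img_n = np.roll(image, shift, axis)
                # similarity-weighted attack from each neighbor
                g = 1.0 - np.abs(image - img_n) / span
                attack = g * s_n
                win = attack > s          # neighbor conquers this cell
                lab = np.where(win, lab_n, lab)
                s = np.where(win, attack, s)
    return lab

# Two-region toy image with one seed per region.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
seeds = np.zeros((16, 16), int)
seeds[8, 2] = 1    # seed in the dark (left) region
seeds[8, 13] = 2   # seed in the bright (right) region
out = grow_cut(img, seeds)
# out labels the left half 1 and the right half 2
```

In the actual pipeline this rough labeling is then refined by the geometric active contour step.&lt;br /&gt;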
&lt;br /&gt;
2. Spherical wavelets have proven to be a very natural way of representing 3D shapes which are compact and simply connected (topological spheres). We developed a segmentation framework using this 3D wavelet representation and a multiscale prior. The parameters of our model are the learned shape parameters based on the spherical wavelet coefficients, as well as pose parameters that accommodate shape variability due to a similarity transformation (rotation, scale, translation), which is not explicitly modeled with the shape parameters; the surface is transformed based on the pose parameters. We used a region-based energy to drive the evolution of the parametric deformable surface for segmentation. Our segmentation algorithm deforms an initial surface according to the gradient flow that minimizes the energy functional in terms of the pose and shape parameters. Additionally, the optimization method can be applied in a coarse-to-fine manner. Spherical wavelets and conformal mappings are&lt;br /&gt;
already part of the NA-MIC SandBox.&lt;br /&gt;
&lt;br /&gt;
3. The third method is very closely related to the second. It is based on the observation that the prostate may be roughly modelled as an ellipsoid. One can then employ this ellipsoid model, coupled with a local/global segmentation energy approach which we have developed this year, as the basis of a segmentation procedure. Because of the local/global nature of the functional and the implicit introduction of scale, this methodology may be very useful for MRI prostate data.&lt;br /&gt;
&lt;br /&gt;
====Prostate Registration====&lt;br /&gt;
&lt;br /&gt;
The registration and segmentation elements of our algorithm are difficult to separate. Thus for the 3D shape-driven segmentation part, the shapes must first be aligned through a conformal and area-correction alignment process. The prostate presents a number of difficulties for traditional approaches since there are no easily discernible landmarks. On the other hand, we observed that the surface of the prostate is almost half convex and half concave. The concave region may be captured and used to register the shapes; thus we register the whole shape by registering a certain region on it. This concave region is characterized by its negative mean curvature. We treat the mean curvature as a scalar field defined on the surface, and we have extended the Chan-Vese method (in which one wants to separate the means with respect to the regions defined by the interior and exterior of the evolving active contour) to the case at hand on the prostate surface. The method is implemented in C++ and it successfully extracts the concave surface region. This method could also be used to extract regions on a surface according to any feature characterized by a scalar field defined on the surface.&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the extracted region as landmarks into the registration process, instead of matching two binary images directly, we transform the binary images into a form that highlights the boundary region. This is done by applying a Gaussian function to the (narrow band of the) signed distance function of the binary image. The transformed image enjoys the advantages of both the parametric and implicit representations of shapes: it has a compact description, as the parametric representation does, and, as in the implicit representation, it avoids the correspondence problem. Moreover, we incorporate the extracted concave regions into such images for registration, which leads to a better result.&lt;br /&gt;
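The boundary-highlighting transform described above can be sketched in a few lines. This toy uses an analytic circle in place of a real binary mask so the signed distance is exact, and the function name and parameters are illustrative assumptions, not NA-MIC code.&lt;br /&gt;

```python
# Gaussian of the signed distance: the response peaks (at 1.0) exactly on
# the shape boundary and decays smoothly on both sides of it.
import numpy as np

def boundary_image(shape, center, radius, sigma=2.0):
    """Boundary-highlighting image for a circle (analytic toy shape)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    # signed distance: negative inside the circle, zero on it, positive outside
    signed = np.hypot(yy - center[0], xx - center[1]) - radius
    return np.exp(-(signed ** 2) / (2.0 * sigma ** 2))

img = boundary_image((64, 64), (32, 32), 20.0)
# img[32, 52] lies exactly on the circle, so its value is 1.0;
# the center is 20 pixels from the boundary, so its value is near 0.
```

For a real binary mask, the signed distance would come from a distance transform restricted to a narrow band around the boundary, as the text describes.&lt;br /&gt;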
&lt;br /&gt;
Finally, in the past year we have developed a particle filtering approach for the general problem of registering two point sets that differ by a rigid body transformation, which may be very useful for this project. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. We treat motion as a local variation in pose parameters obtained from running several iterations of the standard Iterative Closest Point (ICP) algorithm. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence often found in the local optimizer functions used to tackle the registration task. In contrast with other techniques, this approach requires no annealing schedule, which results in a reduction in computational complexity and maintains the temporal coherency of the state (no loss of information). Also, unlike most alternative approaches for point set registration, we make no geometric assumptions on the two data sets.&lt;br /&gt;
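As a rough illustration of the ICP inner loop that the particle filter perturbs, here is a minimal sketch; the SVD-based rigid fit, the toy data, and all names are our own assumptions for illustration, not the NA-MIC code.&lt;br /&gt;

```python
# Minimal point-to-point ICP: alternate nearest-neighbor correspondence
# with a least-squares rigid refit until the clouds align.
import numpy as np

def best_rigid(A, B):
    """Least-squares rotation R and translation t mapping A onto B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(A.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def icp(src, dst, n_iter=20):
    cur = src.copy()
    for _ in range(n_iter):
        # correspondence: each source point pairs with its nearest target
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur

# Toy check: recover a small known rotation of a 3-D point cloud.
rng = np.random.default_rng(1)
dst = rng.normal(size=(40, 3))
a = 0.1
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T
aligned = icp(src, dst)
```

The particle-filter idea in the text treats perturbed versions of this pose as particles, widening the basin of convergence beyond what a single ICP run provides.&lt;br /&gt;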
&lt;br /&gt;
===Engineering Component (Hayes)===&lt;br /&gt;
===Clinical Component (Fichtinger)===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The current robotic prostate biopsy and implant system has been applied to over 50 patients. The system is being replicated for multicenter trials at Johns Hopkins (Baltimore), NIH (Bethesda), Brigham and Women's Hospital (Boston), and Princess Margaret Hospital (Toronto). Of these, NIH and Princess Margaret have completed the ethics board approval and will commence trials in May 2008. Others will follow suit shortly. In the meantime, components of the underlying intervention planning and monitoring system are being replaced with NA-MIC modules. The current vtk-based overall interface is being replaced with Slicer, which will accommodate existing and forthcoming vtk/itk modules. Ongoing clinical trials will seamlessly absorb the NA-MIC system interface, based on detailed functional equivalency tests to be conducted. (Note that most IRBs do not require resubmission of the protocol when the interface software is updated, as long as the system's functionality is guaranteed to be intact.)&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:JHU:Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Brain Lesion Analysis in Neuropsychiatric Systemic Lupus Erythematosus (Bockholt)==&lt;br /&gt;
===Overview (Bockholt)===&lt;br /&gt;
===Algorithm Component (Whitaker)===&lt;br /&gt;
===Engineering Component (Pieper)===&lt;br /&gt;
===Clinical Component (Bockholt)===&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:MIND:Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Cortical Thickness for Autism(Hazlett)== &lt;br /&gt;
===Overview (Hazlett)===&lt;br /&gt;
&lt;br /&gt;
A primary goal of the UNC DBP is to examine changes in cortical thickness in children with autism compared to typical controls.  We want to examine group differences in both local and regional cortical thickness, and would also like to examine longitudinal changes in the cortex from ages 2-4 years.  To accomplish this goal, this project will create an end-to-end application within Slicer3 allowing individual and group analysis of regional and local cortical thickness. This workflow will then be applied to our study data (already collected).&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Styner)===&lt;br /&gt;
&lt;br /&gt;
The basic steps necessary for the cortical thickness application entail first tissue segmentation in order to separate white and gray matter regions, second cortical thickness measurement, third cortical correspondence to compare measurements across subjects, and finally a statistical analysis to locally compute group differences.&lt;br /&gt;
Tissue segmentation: We have successfully adapted the UNC segmentation tool called itkEMS to Slicer, which we use for segmentations of the young brain. We also created a young brain atlas for the current Slicer3 EM Segment module. Tests have been successful, and a comparative study with itkEMS has shown that further parameter optimization is needed to reach the same quality. &lt;br /&gt;
&lt;br /&gt;
====Cortical thickness measurement====&lt;br /&gt;
The UNC algorithm for the measurement of local cortical thickness given a labeling of white matter and gray matter has been developed into a Slicer3 external module. This module lends itself well for regional analysis of cortical thickness, but less so for local analysis due to its non-symmetric and sparse measurements. Ongoing development is focusing on a symmetric, Laplacian based cortical thickness suitable for local analysis.&lt;br /&gt;
&lt;br /&gt;
====Cortical correspondence (regional)====&lt;br /&gt;
&lt;br /&gt;
For regional correspondence, an existing lobar parcellation atlas is deformably registered using a B-spline registration tool. First tests have been very promising; the corresponding Slicer 3 registration module is scheduled for release within the next month, at which point the regional analysis workflow will be available.&lt;br /&gt;
&lt;br /&gt;
====Cortical correspondence (local)====&lt;br /&gt;
Local cortical correspondence requires a two-step process of white/gray surface inflation followed by group-wise correspondence computation. White matter surface extraction and inflation are currently achieved with an external tool; developing a Slicer 3 based solution is a goal for the next year. The group-wise correspondence step has been fully solved, and a Slicer 3 module is already available. Evaluation on real data has shown that our method outperforms the widely used FreeSurfer framework. &lt;br /&gt;
&lt;br /&gt;
====Statistical analysis/Hypothesis testing====&lt;br /&gt;
Regional analysis can be done with standard statistical tools such as MANOVA, as there is a relatively small number of regions. Local analysis, on the other hand, needs local non-parametric testing, multiple-comparison correction, and correlative analysis that is not routinely available. We are currently extending the Slicer 3 module designed for statistical shape analysis for this purpose, incorporating a locally applied General Linear Model and a MANCOVA-based testing framework.&lt;br /&gt;
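To illustrate the kind of local non-parametric testing with multiple-comparison correction described above, here is a generic max-statistic permutation test; it is not the Slicer module's MANCOVA framework, and the data and names are made up.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(1)

def perm_test_maxstat(A, B, n_perm=999):
    """Vertex-wise two-sample mean-difference statistic with max-statistic
    permutation correction for multiple comparisons (family-wise error)."""
    obs = np.abs(A.mean(0) - B.mean(0))
    pooled = np.vstack([A, B])
    n = len(A)
    count = np.zeros_like(obs)
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        diff = np.abs(pooled[idx[:n]].mean(0) - pooled[idx[n:]].mean(0))
        count += diff.max() >= obs      # max over vertices corrects for FWE
    return (count + 1.0) / (n_perm + 1.0)

# Synthetic "thickness" data: 20 subjects per group, 5 vertices; group B is
# shifted by 4 standard deviations at vertex 0 only.
A = rng.standard_normal((20, 5))
B = rng.standard_normal((20, 5))
B[:, 0] += 4.0
p = perm_test_maxstat(A, B)
```

Only the vertex carrying a true group difference should survive the corrected threshold.&lt;br /&gt;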
&lt;br /&gt;
===Engineering Component (Miller, Vachet)===&lt;br /&gt;
&lt;br /&gt;
Several of the algorithms for this Clinical Roadmap project were already implemented in software tools built on ITK.  These tools have been refactored to be NA-MIC compatible and repackaged as Slicer3 plugins. Slicer3 has been extended to support this Clinical Roadmap by adding transforms as a parameter type that can be passed to and returned by plugins. Slicer3 registration and resampling modules have been refactored to produce and accept transforms as parameters. Slicer3 has also been extended to support nonlinear transformation types (B-spline and deformation fields) in its data model.&lt;br /&gt;
&lt;br /&gt;
===Clinical Component (Hazlett)===&lt;br /&gt;
So far, the clinical component of this project has involved interfacing with the algorithm and engineering teams to provide the project specifications, feedback, and data (needed for testing).  During this past year, development and programming work has proceeded satisfactorily, and we anticipate being able to test our project hypotheses about cortical thickness in autism by the end of our project period.  The primary accomplishment of this first year has thus been the development and testing of the methods necessary for this cortical thickness pipeline.&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:UNC:Cortical_Thickness_Roadmap here on the NA-MIC wiki].&lt;br /&gt;
&lt;br /&gt;
=Four Infrastructure Topics=&lt;br /&gt;
==Diffusion Image Analysis (Gerig)==&lt;br /&gt;
===Progress===&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:DiffusionImageAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==Structural Analysis (Tannenbaum)==&lt;br /&gt;
===Progress===&lt;br /&gt;
Under Structural Analysis, the main topics of research for NAMIC are structural segmentation, registration techniques and shape analysis. These topics are correlated and research in one often finds application in another. For example, shape analysis can yield useful priors for segmentation, or segmentation and registration can provide structural correspondences for use in shape analysis and so on. &lt;br /&gt;
&lt;br /&gt;
An overview of selected progress highlights under these broad topics follows.&lt;br /&gt;
&lt;br /&gt;
Structural Segmentation&lt;br /&gt;
&lt;br /&gt;
* Directional Based Segmentation&lt;br /&gt;
We have proposed a directional segmentation framework for direction-weighted magnetic resonance imagery by augmenting the Geodesic Active Contour framework with directional information. The classical scalar conformal factor is replaced by a factor that incorporates directionality. We showed mathematically that the optimization problem is well-defined when the factor is a Finsler metric. The calculus of variations or dynamic programming may be used to find the optimal curves. This past year we have applied this methodology to extracting the anchor tract (or centerline) of neural fiber bundles. Further, we have applied it, in conjunction with Bayes’ rule, to volumetric segmentation for extracting entire fiber bundles. We have also proposed a novel shape prior in the volumetric segmentation to extract tubular fiber bundles.&lt;br /&gt;
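As a toy analogue of extracting an optimal curve by dynamic programming, the sketch below finds a minimal-cost path through a synthetic 2D cost image with Dijkstra's algorithm; the cost map and 4-connected grid are illustrative only, not the Finsler-metric machinery of the actual method.&lt;br /&gt;

```python
import heapq
import numpy as np

def min_cost_path(cost, start, goal):
    """Dijkstra's algorithm on a 2D cost image; cost is paid on entering a pixel."""
    H, W = cost.shape
    dist = np.full((H, W), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue                    # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[nr, nc] = (r, c)
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Uniform cost with an expensive vertical wall; the optimal curve detours.
cost = np.ones((5, 5))
cost[1:4, 2] = 100.0
path, total = min_cost_path(cost, (2, 0), (2, 4))
```

The recovered path routes around the high-cost wall, just as an anchor tract follows low-cost (high-likelihood) voxels.&lt;br /&gt;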
&lt;br /&gt;
* Stochastic Segmentation&lt;br /&gt;
&lt;br /&gt;
We have continued work this year on developing new stochastic methods for implementing curvature-driven flows for medical tasks such as segmentation. We can now generalize our results to an arbitrary Riemannian surface, which includes the geodesic active contours as a special case. We are also implementing the directional flows based on the anisotropic conformal factor described above using this stochastic methodology. Our stochastic snake models are based on the theory of interacting particle systems. This brings together the theories of curve evolution and hydrodynamic limits, and as such advances our growing use of joint methods from probability and partial differential equations in image processing and computer vision. We now have working C++ code for the two-dimensional case and have worked out the stochastic model of the general geodesic active contour model.&lt;br /&gt;
&lt;br /&gt;
* Statistical PDE Methods for Segmentation&lt;br /&gt;
&lt;br /&gt;
Our objective is to add various statistical measures to our PDE flows for medical imaging. This allows the incorporation of global image information into the locally defined PDE framework. This year, we developed flows that separate the distributions inside and outside the evolving contour, and we have also been including shape information in the flows. We have completed a statistically based flow for segmentation using fast marching, and the code has been integrated into Slicer. &lt;br /&gt;
&lt;br /&gt;
* Atlas Renormalization for Improved Brain MR Image Segmentation&lt;br /&gt;
&lt;br /&gt;
Atlas-based approaches can automatically identify detailed brain structures from 3-D magnetic resonance (MR) brain images. However, the accuracy often degrades when processing data acquired on a different scanner platform or pulse sequence than the data used for the atlas training. In this project, we work to improve the performance of an atlas-based whole brain segmentation method by introducing an intensity renormalization procedure that automatically adjusts the prior atlas intensity model to new input data. Validation using manually labeled test datasets shows that the new procedure improves segmentation accuracy (as measured by the Dice coefficient) by 10% or more for several structures including hippocampus, amygdala, caudate, and pallidum. The results verify that this new procedure reduces the sensitivity of the whole brain segmentation method to changes in scanner platforms and improves its accuracy and robustness, which can thus facilitate multicenter or multisite neuroanatomical imaging studies.&lt;br /&gt;
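For reference, the Dice coefficient used above to measure segmentation accuracy compares two label volumes as twice the overlap divided by the total labeled volume; a minimal sketch with made-up labels:&lt;br /&gt;

```python
import numpy as np

def dice(a, b, label):
    """Dice overlap of one label between two label volumes (1.0 = identical)."""
    x, y = (a == label), (b == label)
    denom = x.sum() + y.sum()
    return 2.0 * np.logical_and(x, y).sum() / denom if denom else 1.0

# Two tiny illustrative label maps: 3 voxels labeled 1 in each, 2 in common.
a = np.array([[1, 1, 0],
              [0, 1, 0]])
b = np.array([[1, 0, 0],
              [0, 1, 1]])
d = dice(a, b, 1)   # 2 * 2 / (3 + 3) = 2/3
```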
&lt;br /&gt;
* Multiscale Shape Segmentation Techniques&lt;br /&gt;
&lt;br /&gt;
The goal of this project is to represent multiscale variations in a shape population in order to drive the segmentation of deep brain structures, such as the caudate nucleus or the hippocampus. Our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We derived a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We applied our algorithm to the caudate nucleus, a brain structure of interest in the study of schizophrenia. Our validation shows that our algorithm is computationally efficient and outperforms the Active Shape Model (ASM) algorithm, by capturing finer shape details.&lt;br /&gt;
&lt;br /&gt;
Registration&lt;br /&gt;
&lt;br /&gt;
* Optimal Mass Transport Registration&lt;br /&gt;
The aim of this project is to provide a computationally efficient non-rigid/elastic image registration algorithm based on Optimal Mass Transport theory. We use the Monge-Kantorovich formulation of the Optimal Mass Transport problem and implement the gradient flow PDE approach using multi-resolution and multi-grid techniques to speed up convergence. We also leverage the computational power of general purpose graphics processing units available on standard desktop machines to exploit the inherent parallelism in our algorithm. We have implemented 2D and 3D multi-resolution registration using Optimal Mass Transport and are currently applying it to the registration of 3D datasets. &lt;br /&gt;
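In one dimension the Monge-Kantorovich problem has a closed-form solution, the monotone rearrangement of cumulative distributions, which conveys the flavor of mass-transport registration; this toy 1D sketch is not the multi-grid 2D/3D implementation described above.&lt;br /&gt;

```python
import numpy as np

def omt_map_1d(f, g, x):
    """1D Monge-Kantorovich: the optimal map is the monotone rearrangement
    matching cumulative distributions, u(x) = G^{-1}(F(x))."""
    F = np.cumsum(f); F = F / F[-1]
    G = np.cumsum(g); G = G / G[-1]
    return np.interp(F, G, x)

# Source and target densities: the same Gaussian bump, shifted by 3 units,
# so the optimal map should be close to u(x) = x + 3 on the source's support.
x = np.linspace(0.0, 10.0, 201)
f = np.exp(-0.5 * (x - 3.0) ** 2)   # source density (mass to move)
g = np.exp(-0.5 * (x - 6.0) ** 2)   # target density
u = omt_map_1d(f, g, x)
```

In higher dimensions no such closed form exists, which is why the gradient flow PDE and multi-grid machinery above are needed.&lt;br /&gt;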
&lt;br /&gt;
* Diffusion Tensor Image Processing Tools&lt;br /&gt;
&lt;br /&gt;
We aim to provide methods for computing geodesics and distances between diffusion tensors. One goal is to provide hypothesis testing for differences between groups. This will involve interpolation techniques for diffusion tensors as weighted averages in the metric framework. We will also provide filtering and eddy current correction. This year, we developed a Slicer module for DT-MRI Rician noise removal, developed prototypes of DTI geometry and statistical packages, and began work on a general method for hypothesis testing between diffusion tensor groups. &lt;br /&gt;
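As one concrete instance of interpolating diffusion tensors as weighted averages in a metric framework, the sketch below computes a Log-Euclidean weighted mean of symmetric positive-definite tensors; the actual toolkit may use a different metric (e.g. affine-invariant), and all names here are illustrative.&lt;br /&gt;

```python
import numpy as np

def sym_log(T):
    """Matrix logarithm of a symmetric positive-definite tensor via eigh."""
    w, V = np.linalg.eigh(T)
    return V @ np.diag(np.log(w)) @ V.T

def sym_exp(S):
    """Matrix exponential of a symmetric matrix via eigh."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.exp(w)) @ V.T

def log_euclidean_mean(tensors, weights):
    """Weighted average of SPD tensors in the Log-Euclidean framework:
    exp(sum_i w_i * log(T_i)). Guarantees an SPD result."""
    S = sum(w * sym_log(T) for w, T in zip(weights, tensors))
    return sym_exp(S)

# Midpoint between the identity tensor and one stretched along x:
T1 = np.eye(3)
T2 = np.diag([np.e ** 2, 1.0, 1.0])
M = log_euclidean_mean([T1, T2], [0.5, 0.5])   # -> diag(e, 1, 1)
```

Unlike a plain linear average, this geodesic interpolation avoids the tensor "swelling" effect and always yields a valid diffusion tensor.&lt;br /&gt;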
&lt;br /&gt;
* Point Set Rigid Registration&lt;br /&gt;
&lt;br /&gt;
We propose a particle filtering scheme for the registration of 2D and 3D point sets undergoing a rigid body transformation, in which we incorporate stochastic dynamics to model the uncertainty of the registration process. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. In this work, we treat motion as a local variation in the pose parameters obtained from running a few iterations of the standard Iterative Closest Point (ICP) algorithm. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence as well as to provide a dynamical model of uncertainty. In contrast with other techniques, our approach requires no annealing schedule, which reduces computational complexity and maintains the temporal coherency of the state (no loss of information). Also, unlike most alternative approaches for point set registration, we make no geometric assumptions on the two data sets.&lt;br /&gt;
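The standard ICP iterations that the particle filter perturbs can be sketched as follows: a plain brute-force ICP with a closed-form Kabsch/SVD update, without the stochastic dynamics; the names and the toy test are made up.&lt;br /&gt;

```python
import numpy as np

def icp_rigid(src, dst, n_iter=10):
    """Brute-force ICP: match each point to its nearest neighbour, then solve
    the best rigid transform in closed form (Kabsch/SVD), and repeat."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(n_iter):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        match = dst[d.argmin(axis=1)]          # current correspondences
        mu_s, mu_m = cur.mean(0), match.mean(0)
        H = (cur - mu_s).T @ (match - mu_m)    # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:              # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_m - Ri @ mu_s
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti             # accumulate the transform
    return R, t

# Toy test: a 4x4 grid rotated slightly about its centroid and translated.
xs, ys = np.meshgrid(np.arange(4) * 3.0, np.arange(4) * 3.0)
src = np.column_stack([xs.ravel(), ys.ravel()])
th = 0.05
R0 = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
c = src.mean(0)
dst = (src - c) @ R0.T + c + np.array([0.5, -0.3])
R, t = icp_rigid(src, dst)
```

When the initial pose error is much larger than this, nearest-neighbour matches become wrong and ICP falls into local minima, which is precisely the narrow band of convergence the stochastic dynamics are meant to widen.&lt;br /&gt;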
&lt;br /&gt;
* Cortical Correspondence using Particle System&lt;br /&gt;
&lt;br /&gt;
In this project, we want to compute cortical correspondence on populations, using various features such as cortical structure, DTI connectivity, vascular structure, and functional data (fMRI). This presents a challenge because of the highly convoluted surface of the cortex, as well as because of the different properties of the data features we want to incorporate together. We would like to use a particle-based entropy-minimizing system for the correspondence computation, in a population-based manner. This is advantageous because it does not require a spherical parameterization of the surface, nor does it require the surface to be of spherical topology. It would also eventually enable correspondence computation on subcortical structures and on the cortical surface within the same framework. To circumvent the disadvantage that particles are assumed to lie on local tangent planes, we plan to first ‘inflate’ the cortex surface. Currently, we are at the testing stage using structural data, namely point locations and sulcal depth (as computed by FreeSurfer).&lt;br /&gt;
&lt;br /&gt;
* Multimodal Atlas &lt;br /&gt;
&lt;br /&gt;
In this work, we propose and investigate an algorithm that jointly co-registers a collection of images while computing multiple templates. The algorithm, called iCluster for Image Clustering, is based on the following idea: given the templates, the co-registration problem becomes simple, reducing to a number of pairwise registration instances. On the other hand, given a collection of images that have been co-registered, an off-the-shelf clustering or averaging algorithm can be used to compute the templates. The algorithm assumes a fixed and known number of template images. We formulate the problem as a maximum likelihood estimation problem and employ a Generalized Maximum Likelihood algorithm to solve it. In the E-step, we compute membership probabilities. In the M-step, we update the template images as weighted averages of the images, where the weights are the memberships; the template priors are updated, and then a collection of independent pairwise registration instances is performed. The algorithm is currently implemented in the Insight Toolkit (ITK), and we next plan to integrate it into Slicer.&lt;br /&gt;
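The E-step/M-step alternation can be sketched on toy 1D "images" as below: an isotropic-Gaussian, fixed-variance skeleton in which the pairwise registration step of the real algorithm is omitted, and all names are hypothetical.&lt;br /&gt;

```python
import numpy as np

def icluster_em(X, k, n_iter=50, sigma=1.0):
    """EM skeleton of template clustering: the E-step computes membership
    probabilities, the M-step re-estimates templates as membership-weighted
    averages and updates the template priors. (The per-image registration
    of the real iCluster algorithm is omitted in this sketch.)"""
    templates = X[:k].copy()          # assume the first k images seed the templates
    priors = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: memberships under an isotropic Gaussian noise model.
        d2 = ((X[:, None, :] - templates[None]) ** 2).sum(-1)
        logw = np.log(priors) - d2 / (2.0 * sigma ** 2)
        w = np.exp(logw - logw.max(1, keepdims=True))
        w /= w.sum(1, keepdims=True)
        # M-step: weighted-average templates and updated priors.
        templates = (w.T @ X) / w.sum(0)[:, None]
        priors = w.mean(0)
    return w, templates

# Six toy "images" drawn alternately from two underlying templates.
rng = np.random.default_rng(3)
A, B = np.zeros(8), np.full(8, 5.0)
X = np.vstack([base + 0.1 * rng.standard_normal(8)
               for base in [A, B, A, B, A, B]])
w, templates = icluster_em(X, k=2)
la = w.argmax(1)                      # hard template assignments
```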
&lt;br /&gt;
* Groupwise Registration&lt;br /&gt;
&lt;br /&gt;
We aim to provide efficient groupwise registration algorithms for population analysis of anatomical structures. Here we extend a previously demonstrated entropy-based groupwise registration method to include a free-form deformation model based on B-splines. We provide an efficient implementation using stochastic gradient descent in a multi-resolution setting. We demonstrate the method on a set of 50 MRI brain scans and compare the results to a pairwise approach using segmentation labels to evaluate the quality of alignment. Our results indicate that increasing the complexity of the deformation model improves registration accuracy significantly, especially in cortical regions.&lt;br /&gt;
&lt;br /&gt;
Shape Analysis&lt;br /&gt;
&lt;br /&gt;
* Shape Analysis Framework Using SPHARM-PDM&lt;br /&gt;
&lt;br /&gt;
The UNC shape analysis is based on an analysis framework for objects with spherical topology, described by sampled spherical harmonics (SPHARM-PDM). The input of the proposed shape analysis is a set of binary segmentations of a single brain structure, such as the hippocampus or caudate. Group tests can be visualized by p-values and by mean difference magnitude and vector maps, as well as by maps of the group covariance information. The implementation has reached a stable framework and has been disseminated to several collaborating labs within NAMIC (BWH, Georgia Tech, Utah). Current development focuses on integrating the command line tools into Slicer (v3) via the Slicer execution model. The whole shape analysis pipeline is encapsulated and accessible to the trained clinical collaborator. The current toolset distribution (via NeuroLib) now also contains open data for other researchers to evaluate their shape analysis enhancements.&lt;br /&gt;
&lt;br /&gt;
* Multiscale Shape Analysis&lt;br /&gt;
&lt;br /&gt;
We present a novel method of statistical surface-based morphometry based on non-parametric permutation tests and a spherical wavelet (SWC) shape representation. As an application, we analyze two brain structures, the caudate nucleus and the hippocampus. We show that the results nicely complement those obtained with shape analysis using a sampled point representation (SPHARM-PDM). We used the UNC pipeline to pre-process the images, and for each triangulated SPHARM-PDM surface a spherical wavelet description is computed. We then use the UNC statistical toolbox to analyze differences between two groups of surfaces described by the features of choice, i.e., the 3D spherical wavelet coefficients. This year, we conducted statistical shape analysis of the two brain structures and compared the results to those of shape analysis using a SPHARM-PDM representation.&lt;br /&gt;
&lt;br /&gt;
* Population Analysis of Anatomical Variability&lt;br /&gt;
&lt;br /&gt;
In contrast to shape-based segmentation that utilizes a statistical model of the shape variability in one population (typically based on Principal Component Analysis), we are interested in identifying and characterizing differences between two sets of shape examples. We use the discriminative framework to characterize the differences in shape by training a classifier function and studying its sensitivity to small perturbations in the input data. An additional benefit is that the resulting classifier function can be used to label new examples into one of the two populations, e.g., for early detection in population screening or prediction in longitudinal studies. We have implemented stand-alone code for training a classifier, jackknifing, and permutation testing, and are currently porting the software into ITK. We have also started exploring alternative surface-based descriptors that are promising for improving our ability to detect and characterize subtle differences in the shape of anatomical structures due to diseases such as schizophrenia.&lt;br /&gt;
&lt;br /&gt;
* Shape Analysis with Overcomplete Wavelets&lt;br /&gt;
&lt;br /&gt;
In this work, we extend Euclidean wavelets to the sphere. The resulting over-complete spherical wavelets are invariant to rotation of the spherical image parameterization. We apply the over-complete spherical wavelets to cortical folding development and show significantly more consistent results as well as improved sensitivity compared with the previously used bi-orthogonal spherical wavelet. In particular, we are able to detect developmental asymmetry between the left and right hemispheres.&lt;br /&gt;
&lt;br /&gt;
* Shape-based Segmentation and Registration&lt;br /&gt;
&lt;br /&gt;
When there is little or no contrast along the boundaries of different regions, standard image segmentation algorithms perform poorly, and segmentation is done manually using prior knowledge of the shape and relative location of the underlying structures. We have proposed an automated approach guided by covariant shape deformations of neighboring structures, which serve as an additional source of prior knowledge. Captured by a shape atlas, these deformations are transformed into a statistical model using the logistic function. The mapping between atlas and image space, structure boundaries, anatomical labels, and image inhomogeneities are estimated simultaneously within an expectation-maximization formulation of the maximum a posteriori (MAP) estimation problem. These results are then fed into an Active Mean Field approach, which treats them as priors for a Mean Field approximation with a curve length prior. Compared to thresholding of the initial likelihoods, our method filters out noise, and it captures multiple structures, as in the brain (where both major brain compartments and subcortical structures are obtained), because it naturally evolves families of curves. The algorithm is currently implemented in 3D Slicer Version 2.6, and a beta version is available in 3D Slicer Version 3.&lt;br /&gt;
&lt;br /&gt;
* Spherical Wavelets&lt;br /&gt;
&lt;br /&gt;
In this project, we apply a spherical wavelet transformation to extract shape features of cortical surfaces reconstructed from magnetic resonance images (MRI) of a set of subjects. The spherical wavelet transformation can characterize the underlying functions locally in both space and frequency, in contrast to spherical harmonics, which have a global basis set. We perform principal component analysis (PCA) on these wavelet shape features to study patterns of shape variation within a normal population from coarse to fine resolution. In addition, we study the development of cortical folding in newborns using the Gompertz model in the wavelet domain, allowing us to characterize the order of development of large-scale and finer folding patterns independently. We develop an efficient method to estimate the regularized Gompertz model based on the Broyden–Fletcher–Goldfarb–Shanno (BFGS) approximation. Promising results are presented using both PCA and the folding development model in the wavelet domain. The cortical folding development model provides quantitative anatomical information regarding macroscopic cortical folding development and may be of potential use as a biomarker for early diagnosis of neurological deficits in newborns.&lt;br /&gt;
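To give the flavor of fitting the Gompertz growth model to folding measurements, the sketch below recovers the model's parameters from noiseless synthetic data with a damped Gauss-Newton least-squares loop; this is a simple stand-in for the BFGS-based estimation described above, and the data and names are made up.&lt;br /&gt;

```python
import numpy as np

def gompertz(p, t):
    """Gompertz growth curve: a * exp(-b * exp(-c * t))."""
    a, b, c = p
    return a * np.exp(-b * np.exp(-c * t))

def fit_gompertz(t, y, p0, n_iter=60):
    """Damped Gauss-Newton least-squares fit of the Gompertz model."""
    p = np.array(p0, float)
    sse = lambda q: ((gompertz(q, t) - y) ** 2).sum()
    for _ in range(n_iter):
        a, b, c = p
        e = np.exp(-c * t)
        f = np.exp(-b * e)
        # Analytic Jacobian of the model w.r.t. (a, b, c).
        J = np.stack([f, -a * e * f, a * b * t * e * f], axis=1)
        r = gompertz(p, t) - y
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        lam = 1.0                      # backtracking keeps the error decreasing
        while sse(p - lam * step) > sse(p) and lam > 1e-8:
            lam *= 0.5
        p = p - lam * step
    return p

t = np.linspace(0.0, 6.0, 30)
truth = np.array([2.0, 3.0, 1.0])      # asymptote, displacement, rate
y = gompertz(truth, t)                 # noiseless synthetic "folding" data
p_hat = fit_gompertz(t, y, p0=(1.5, 2.0, 0.7))
```

The rate parameter is the one that distinguishes early-developing large-scale folds from later fine-scale folding in the wavelet-domain analysis.&lt;br /&gt;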
&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
* MIT: Polina Golland, Kilian Pohl, Sandy Wells, Eric Grimson, Mert R. Sabuncu&lt;br /&gt;
* UNC: Martin Styner, Ipek Oguz, Xavier Barbero &lt;br /&gt;
* Utah: Ross Whitaker, Guido Gerig, Suyash Awate, Tolga Tasdizen, Tom Fletcher, Joshua Cates, Miriah Meyer &lt;br /&gt;
* GaTech: Allen Tannenbaum, John Melonakos, Vandana Mohan, Tauseef ur Rehman, Shawn Lankton, Samuel Dambreville, Yi Gao, Romeil Sandhu, Xavier Le Faucheur, James Malcolm &lt;br /&gt;
* Isomics: Steve Pieper &lt;br /&gt;
* GE: Bill Lorensen, Jim Miller &lt;br /&gt;
* Kitware: Luis Ibanez, Karthik Krishnan&lt;br /&gt;
* UCLA: Arthur Toga, Michael J. Pan, Jagadeeswaran Rajendiran &lt;br /&gt;
* BWH: Sylvain Bouix, Motoaki Nakamura, Min-Seong Koo, Martha Shenton, Marc Niethammer, Jim Levitt, Yogesh Rathi, Marek Kubicki, Steven Haker&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:StructuralImageAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==fMRI Analysis (Golland)==&lt;br /&gt;
===Progress===&lt;br /&gt;
One of the major goals in analysis of fMRI data is the detection of&lt;br /&gt;
functionally homogeneous networks in the brain. Over the past year, we&lt;br /&gt;
demonstrated a method for identifying large-scale networks in brain&lt;br /&gt;
activation that simultaneously estimates the optimal representative&lt;br /&gt;
time courses that summarize the fMRI data well and the partition of&lt;br /&gt;
the volume into a set of disjoint regions that are best explained by&lt;br /&gt;
these representative time courses. &lt;br /&gt;
&lt;br /&gt;
In the classical functional connectivity analysis, networks of&lt;br /&gt;
interest are defined based on correlation with the mean time course of&lt;br /&gt;
a user-selected `seed' region. Further, the user has to also specify a&lt;br /&gt;
subject-specific threshold at which correlation values are deemed&lt;br /&gt;
significant. In this project, we simultaneously estimate the optimal&lt;br /&gt;
representative time courses that summarize the fMRI data well and the&lt;br /&gt;
partition of the volume into a set of disjoint regions that are best&lt;br /&gt;
explained by these representative time courses. This approach to&lt;br /&gt;
functional connectivity analysis offers two advantages. First, it&lt;br /&gt;
removes the sensitivity of the analysis to the details of the seed&lt;br /&gt;
selection. Second, it substantially simplifies group analysis by&lt;br /&gt;
eliminating the need for the subject-specific threshold. Our&lt;br /&gt;
experimental results indicate that the functional segmentation&lt;br /&gt;
provides a robust, anatomically meaningful and consistent model for&lt;br /&gt;
functional connectivity in fMRI.&lt;br /&gt;
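The joint estimation of representative time courses and a disjoint partition can be illustrated with Lloyd's algorithm on synthetic time courses; this is a simplified stand-in for the actual method, and the data and initialization are made up.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

def partition_timecourses(X, centers, n_iter=25):
    """Alternate between assigning each voxel's time course to the nearest
    representative and re-estimating each representative as the mean of its
    region (Lloyd's algorithm, a simple analogue of the joint estimation)."""
    centers = centers.copy()
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)                       # disjoint partition
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)  # representative time course
    return labels, centers

# Synthetic "fMRI": 20 voxels, 30 time points, two underlying networks.
t = np.linspace(0, 2 * np.pi, 30)
net = np.vstack([np.sin(t), np.cos(t)])
X = np.vstack([net[0] + 0.05 * rng.standard_normal(30) for _ in range(10)] +
              [net[1] + 0.05 * rng.standard_normal(30) for _ in range(10)])
labels, reps = partition_timecourses(X, X[[0, 10]])
```

Note that no seed region or correlation threshold appears anywhere in the loop, which is exactly the advantage over seed-based connectivity analysis described above.&lt;br /&gt;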
&lt;br /&gt;
We are currently exploring the applications of this methodology to&lt;br /&gt;
characterizing connectivity in the rest-state data in clinical&lt;br /&gt;
populations. We are also comparing the empirical findings with the&lt;br /&gt;
results of ICA decomposition, which is commonly used for data-driven&lt;br /&gt;
fMRI analysis. Our goal in this study is to identify differences in&lt;br /&gt;
connectivity between the patient populations and normal controls.&lt;br /&gt;
&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
# MIT: Polina Golland, Danial Lashkari, Bryce Kim &lt;br /&gt;
# Harvard/BWH: Sylvain Bouix, Martha Shenton, Marek Kubicki&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:fMRIAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==NA-MIC Kit Theme (Schroeder)==&lt;br /&gt;
===Progress===&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
* Kitware - Will Schroeder (Core 2 PI), Sebastien Barre, Luis Ibanez, Bill Hoffman&lt;br /&gt;
* GE - Jim Miller, Xiaodong Tao&lt;br /&gt;
* Isomics - Steve Pieper&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC-Kit here on the NA-MIC wiki].&lt;br /&gt;
==Other Projects==&lt;br /&gt;
Any Project(s) not covered by the 8 sections above&lt;br /&gt;
&lt;br /&gt;
==Highlights (Schroeder)==&lt;br /&gt;
===EM Segmenter or TBD===&lt;br /&gt;
===DTI progress or TBD===&lt;br /&gt;
===Outreach (Gollub)===&lt;br /&gt;
&lt;br /&gt;
NAMIC outreach is a joint effort of Cores 4, 5 and 6.  The various mechanisms by which we ensure that the tools developed by NAMIC are rapidly and successfully deployed to the widest possible extent within the scientific community are closely integrated.  This begins with the immediate posting of all software tools, interim updates, and associated documentation via the NAMIC and Slicer wiki pages (links).  The concerted effort to provide a harmonious visualization and analysis platform (Slicer 3) that enables the integration of the software algorithms of all Core 1 laboratories drives the sequence of development of training materials.  With the January 2008 release of Slicer 3 in beta format, we prepared the first of the Slicer 3 based PowerPoint tutorials that guide new users through the process of loading, interacting with, and saving data in Slicer 3.  Given the intense and successful effort at engineering this platform to facilitate the process of integrating new command-line modules of image analysis software into the platform, our second tutorial targeted software developers.  The &amp;quot;Hello World&amp;quot; tutorial guides a programmer, step by step, through the process of integrating a command line tool into Slicer 3.  Both these tutorials are available via the web (link).   These tutorials have been thoroughly tested by using them in large workshops (see next) to ensure that they are robust across platforms (Linux, Mac, PC) and can be used successfully by users across a wide range of training backgrounds.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In June 2007, as a satellite event to the international Organization for Human Brain Mapping annual meeting in Chicago, IL, we ran an 8-hour workshop on analysis of diffusion imaging data (link); it was our final Slicer workshop based on the Slicer 2.7 release.  The workshop filled rapidly after posting; the 50 participants represented 9 countries, 14 US states, and 40 different laboratories, including 2 NIH institutes.  The single &amp;quot;no-show&amp;quot; was due to a European flight cancellation.  The attendees, with backgrounds in basic or clinical neurosciences, physics, image processing, or computer science, and ranging from full professors to new graduate students, were very comfortable learning together.  The feedback from the workshop attendees was uniformly positive, with 100% reporting that they would recommend the workshop to others and 50% planning to apply the tools and information they learned to their own work.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In January 2008 we debuted the &amp;quot;Hello World&amp;quot; tutorial at the NAMIC AHM in Salt Lake City to an audience of our project members and collaborators.  Feedback from this session was used to make significant improvements in the presentation and delivery of the material.  In February 2008 we debuted the users tutorial at a workshop hosted by the Surgical Planning Laboratory at BWH; again, the feedback was used to improve the presentation and delivery of the material.  In April 2008 we ran an all-day workshop, hosted by UNC (get details right), for users and developers that incorporated both tutorials.  It was attended by approximately 20 individuals from a wide range of backgrounds.  Time was taken to ensure that all participants gained sufficient understanding of the new software to ensure their successful use of it following the workshop.  &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
This year saw the publication of a peer-reviewed manuscript that describes the NAMIC approach to outreach, including our multi-disciplinary approach, our integration of theory into practice as driven by a clinical goal, and the translation of concepts into skills through interactive instructor-led training sessions (Pujol S, Kikinis R, Gollub R: Lowering the barriers inherent in translating advances in neuroimage analysis to clinical research applications, Academic Radiology 15: 114-118, 2008, add link to Publication DB).&lt;br /&gt;
* Text here about Project Events 5 &amp;amp; 6 from Tina if not already included elsewhere.&lt;br /&gt;
* Text here about the MICCAI Open Source Workshop if not already included elsewhere (Steve?)&lt;br /&gt;
* Slicer IGT event December 2007 (tina?)&lt;br /&gt;
* Wiki to web&lt;br /&gt;
* Impact as measured by number of downloads of tutorial materials (help someone)&lt;br /&gt;
* Should the DTI tractography validation project be written up somewhere, if so where?  I will do it if it isn't already assigned.&lt;br /&gt;
&lt;br /&gt;
==Impact and Value to Biocomputing (Miller)==&lt;br /&gt;
NA-MIC impacts Biocomputing through a variety of mechanisms.  First,&lt;br /&gt;
NA-MIC produces scientific results, methodologies, workflows,&lt;br /&gt;
algorithms, imaging platforms, and software engineering tools and&lt;br /&gt;
paradigms in an open environment that contributes directly to the body of&lt;br /&gt;
knowledge available to the field. Second, NA-MIC science and&lt;br /&gt;
technology enables the entire medical imaging community to build on&lt;br /&gt;
NA-MIC results, methods, and techniques, to concentrate on the new&lt;br /&gt;
science instead of developing supporting infrastructure, to leverage&lt;br /&gt;
NA-MIC scientists and engineers to adapt NA-MIC technology to new&lt;br /&gt;
problem domains, and to leverage NA-MIC infrastructure to distribute&lt;br /&gt;
their own technology to a larger community.&lt;br /&gt;
&lt;br /&gt;
===Impact within the Center===&lt;br /&gt;
Within the center, NA-MIC has formed a community around its software&lt;br /&gt;
engineering tools, imaging platforms, algorithms, and clinical&lt;br /&gt;
workflows. The NA-MIC calendar includes the All Hands Meeting and&lt;br /&gt;
Winter Project Week, the Spring Algorithm Meeting, the Summer Project&lt;br /&gt;
Week, Slicer3 Mini-Retreats, Core Site Visits, Training Workshops, and weekly telephone&lt;br /&gt;
conferences.&lt;br /&gt;
&lt;br /&gt;
The NA-MIC software engineering tools (CMake, Dart, CTest, CPack) have&lt;br /&gt;
enabled the development and distribution of a cross-platform, nightly&lt;br /&gt;
tested, end-user application, Slicer3, that is a complex union of&lt;br /&gt;
novel application code, visualization tools (VTK), imaging libraries&lt;br /&gt;
(ITK, TEEM), user interface libraries (Tk, KWWidgets), and scripting&lt;br /&gt;
languages (TCL, Python). The NA-MIC software engineering tools have been&lt;br /&gt;
essential in the development and distribution of the Slicer3 imaging&lt;br /&gt;
platform to the NA-MIC community.&lt;br /&gt;
&lt;br /&gt;
NA-MIC's end-user application, Slicer3, supports the research within&lt;br /&gt;
NA-MIC by providing a base application for visualization and data&lt;br /&gt;
management. Slicer3 also supports the research within NA-MIC by&lt;br /&gt;
providing plugin mechanisms which allow researchers to quickly and&lt;br /&gt;
easily integrate and distribute their technology with Slicer3. Slicer3&lt;br /&gt;
is available to all center participants and the external community&lt;br /&gt;
through its source code repository, official binary releases, and&lt;br /&gt;
unofficial nightly binary snapshots.&lt;br /&gt;
&lt;br /&gt;
NA-MIC drives the development of platforms and algorithms through the&lt;br /&gt;
needs and research of its DBPs. Each DBP has selected specific&lt;br /&gt;
workflows and roadmaps as focal points for development with a goal of&lt;br /&gt;
providing the community with complete end-to-end solutions using&lt;br /&gt;
NA-MIC tools. The community will be able to reproduce these workflows&lt;br /&gt;
and roadmaps in their own research programs.&lt;br /&gt;
&lt;br /&gt;
NA-MIC algorithms are designed and used to address specific needs of&lt;br /&gt;
the DBPs. Multiple solution paths are explored and compared within&lt;br /&gt;
NA-MIC, resulting in recommendations to the field. The NA-MIC&lt;br /&gt;
algorithm groups collaborate and orchestrate the solutions to the&lt;br /&gt;
DBP workflows and roadmaps.&lt;br /&gt;
&lt;br /&gt;
===Impact within NIH Funded Research===&lt;br /&gt;
Within NIH funded research, NA-MIC is the NCBC collaborating center for three R01's: &amp;quot;Automated FE Mesh Development&amp;quot;, &amp;quot;Measuring Alcohol and Stress Interactions with Structural and Perfusion MRI&amp;quot;, and &amp;quot;An Integrated System for Image-Guided Radiofrequency Ablation of Liver Tumors&amp;quot;. Several other proposals have been submitted and are under&lt;br /&gt;
evaluation for the &amp;quot;Collaborations with NCBC PAR&amp;quot;. NA-MIC also&lt;br /&gt;
collaborates on the Slicer3 platform with the NIH funded Neuroimage&lt;br /&gt;
Analysis Center and the National Center for Image-Guided Therapy. The&lt;br /&gt;
NIH funded &amp;quot;BRAINS Morphology and Image Analysis&amp;quot; project is also&lt;br /&gt;
leveraging NA-MIC and Slicer3 technology. NA-MIC collaborates with the&lt;br /&gt;
NIH funded Neuroimaging Informatics Tools and Resources Clearinghouse&lt;br /&gt;
on distribution of Slicer3 plugin modules.&lt;br /&gt;
&lt;br /&gt;
===National and International Impact===&lt;br /&gt;
NA-MIC events and tools garner national and international interest.&lt;br /&gt;
Over 100 researchers participated in the NA-MIC All Hands Meeting and&lt;br /&gt;
Winter Project Week in January 2008. Many of these participants were&lt;br /&gt;
from outside of NA-MIC, attending the meetings to gain access to the&lt;br /&gt;
NA-MIC tools and researchers. These external researchers are&lt;br /&gt;
contributing ideas and technology back into NA-MIC. In fact, a&lt;br /&gt;
breakout session at the Winter Project Week on &amp;quot;Geometry and Topology&lt;br /&gt;
Processing of Meshes&amp;quot; was organized by four researchers from outside&lt;br /&gt;
of NA-MIC.&lt;br /&gt;
&lt;br /&gt;
Components of the NA-MIC kit are used globally.  The software&lt;br /&gt;
engineering tools of CMake, Dart 2 and CTest are used by many open&lt;br /&gt;
source projects and commercial applications. For example, the K&lt;br /&gt;
Desktop Environment (KDE) for Linux and Unix workstations uses CMake&lt;br /&gt;
and Dart. KDE is one of the largest open source projects in the&lt;br /&gt;
world. Many open source projects and commercial products are&lt;br /&gt;
benefiting from the NA-MIC related contributions to ITK and&lt;br /&gt;
VTK. Finally, Slicer 3 is being used as an image analysis&lt;br /&gt;
platform in several fields outside of medical image analysis, in&lt;br /&gt;
particular, biological image analysis, astronomy, and industrial&lt;br /&gt;
inspection.&lt;br /&gt;
&lt;br /&gt;
NA-MIC science is recognized by the medical imaging community. Over&lt;br /&gt;
100 NA-MIC related publications are listed on PubMed. Many of these&lt;br /&gt;
publications are in the most prestigious journals and conferences in the&lt;br /&gt;
field. Portions of the DBP workflows and roadmaps are already being&lt;br /&gt;
utilized by researchers in the broader community and in the&lt;br /&gt;
development of commercial products.&lt;br /&gt;
&lt;br /&gt;
NA-MIC sponsored several events to promote NA-MIC tools and&lt;br /&gt;
methodologies.  NA-MIC co-sponsored the &amp;quot;Third Annual Open Source&lt;br /&gt;
Workshop&amp;quot; at the Medical Image Computing and Computer-Assisted&lt;br /&gt;
Intervention (MICCAI) 2007 conference.  The proceedings of the&lt;br /&gt;
workshop are published on the electronic Insight Journal, another&lt;br /&gt;
NIH-funded activity. NA-MIC sponsored three training workshops on&lt;br /&gt;
NA-MIC tools for the Biocomputing community in this fiscal year and&lt;br /&gt;
plans to hold sessions at upcoming MICCAI and RSNA conferences.&lt;br /&gt;
&lt;br /&gt;
==NA-MIC Timeline (Whitaker)==&lt;br /&gt;
&lt;br /&gt;
==Appendix A Publications (Kapur)==&lt;br /&gt;
These will be mined from the SPL publications database.  All core PIs need to ensure that all NA-MIC publications are in the publications database by May 15.&lt;br /&gt;
&lt;br /&gt;
==Appendix B EAB Report and Response (Kapur)==&lt;br /&gt;
===EAB Report===&lt;br /&gt;
===Response to EAB Report===&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2008_Annual_Scientific_Report&amp;diff=24629</id>
		<title>2008 Annual Scientific Report</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2008_Annual_Scientific_Report&amp;diff=24629"/>
		<updated>2008-05-15T15:34:26Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Overview (Fichtinger) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[2008_Progress_Report]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Guidelines for preparation=&lt;br /&gt;
&lt;br /&gt;
*[[2008_Progress_Report#Scientific Report Timeline]] - Main point is that May 15 is the date by which all sections below need to be completed.  No extensions are possible.&lt;br /&gt;
*DBPs - If there is work outside of the roadmap projects that you would like to report, you are welcome to create a separate section for it under &amp;quot;Other&amp;quot;.  &lt;br /&gt;
*The outline for this report is similar to the 2007 report, which is provided here for reference: [[2007_Annual_Scientific_Report]].&lt;br /&gt;
*In preparing summaries for each of the 8 topics in this report, please leverage the detailed pages for projects provided here: [[NA-MIC_Internal_Collaborations]].&lt;br /&gt;
*Publications will be mined from the SPL publications database. All core PIs need to ensure that all NA-MIC publications are in the publications database by May 15.&lt;br /&gt;
&lt;br /&gt;
=Introduction (Tannenbaum)=&lt;br /&gt;
&lt;br /&gt;
The National Alliance for Medical Imaging Computing (NA-MIC) is now in its fourth year. This Center is composed of a multi-institutional, interdisciplinary team of computer scientists, software engineers, and medical investigators who have come together to develop and apply computational tools for the analysis and visualization of medical imaging data. A further purpose of the Center is to provide infrastructure and environmental support for the development of computational algorithms and open source technologies, and to oversee the training and dissemination of these tools to the medical research community. The driving biological projects (DBPs) for the Center's first three years were inspired by schizophrenia research. In the fourth year new DBPs have been added. Three are centered around diseases of the brain: (a) brain lesion analysis in neuropsychiatric systemic lupus erythematosus; (b) a study of cortical thickness for autism; and (c) stochastic tractography for VCFS. In a new direction, we have also added a DBP on the prostate: brachytherapy needle positioning robot integration.&lt;br /&gt;
&lt;br /&gt;
We briefly summarize the work of NAMIC during the four years of its existence. In year one of the Center, alliances were forged among the cores and constituent groups in order to integrate the efforts of the cores and to define the kinds of tools needed for specific imaging applications. The second year emphasized the identification of the key research thrusts that cut across cores and were driven by the needs and requirements of the DBPs. This led to the formulation of the Center's four main themes: Diffusion Tensor Analysis, Structural Analysis, Functional MRI Analysis, and the integration of newly developed tools into the NA-MIC Tool Kit. The third year of Center activity was devoted to continuing these collaborative efforts in order to deliver solutions for the various brain-oriented DBPs.&lt;br /&gt;
&lt;br /&gt;
Year four has seen progress with the work of our new DBPs. As alluded to above, these include work on neuropsychiatric disorders such as Systemic Lupus Erythematosus (MIND Institute, University of New Mexico), Velocardiofacial Syndrome (Harvard), and Autism (University of North Carolina, Chapel Hill), as well as the prostate interventional work (Johns Hopkins and Queen's Universities). We already have a number of publications, as indicated on our publications page, and software development is continuing as well.&lt;br /&gt;
&lt;br /&gt;
In the next section (Section 3), we summarize this year’s progress on the four roadmap projects listed above: Section 3.1 stochastic tractography for Velocardiofacial Syndrome, Section 3.2 brachytherapy needle positioning for the prostate, Section 3.3 brain lesion analysis in neuropsychiatric systemic lupus erythematosus, and Section 3.4 cortical thickness for autism.   Next, in Section 4, we describe recent work on the four infrastructure topics. These include: Diffusion Image analysis (Section 4.1), Structural analysis (Section 4.2), Functional MRI analysis (Section 4.3), and the NA-MIC Toolkit (Section 4.4).  In Section 4.5, we outline some of the other key projects, in Section 4.6 some key highlights including the integration of the EM Segmenter into Slicer, and in Section 4.7 the impact of biocomputing at three different levels: within the center, within the NIH-funded research community, and externally to a national and international community. The final section of this report, Section 4.8, provides a timeline of Center activities.&lt;br /&gt;
&lt;br /&gt;
=Clinical Roadmap Projects=&lt;br /&gt;
==Roadmap Project: Stochastic Tractography for VCFS (Kubicki)==&lt;br /&gt;
===Overview (Kubicki)===&lt;br /&gt;
The goal of this project is to create an end-to-end application useful for evaluating anatomical connectivity between segmented cortical regions of the brain. The ultimate goal of our program is to understand anatomical connectivity similarities and differences between the genetically related schizophrenia and velocardiofacial syndromes. Thus we plan to use the &amp;quot;stochastic tractography&amp;quot; tool to analyze abnormalities in the integrity, or connectivity, of the arcuate fasciculus, a fiber bundle involved in language processing, in schizophrenia and VCFS.&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Golland)===&lt;br /&gt;
At the core of this project is the stochastic tractography algorithm&lt;br /&gt;
developed and implemented in collaboration between MIT and&lt;br /&gt;
BWH. Stochastic Tractography is a Bayesian approach to estimating&lt;br /&gt;
nerve fiber tracts from DTI images.&lt;br /&gt;
&lt;br /&gt;
We first use the diffusion tensor at each voxel in the volume to&lt;br /&gt;
construct a local probability distribution for the fiber direction&lt;br /&gt;
around the principal direction of diffusion. We then sample the tracts&lt;br /&gt;
between two user-selected ROIs, by simulating a random walk between&lt;br /&gt;
the regions, based on the local transition probabilities inferred from&lt;br /&gt;
the DTI image.&lt;br /&gt;
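The sampling step above can be illustrated with a toy random walk (a minimal Python sketch; the 1-D volume, transition table, and function names are illustrative assumptions, not the NA-MIC implementation):&lt;br /&gt;

```python
import random

def sample_tract(transition, seed, target, max_steps=1000, rng=None):
    """One tract sample: a random walk from a seed voxel toward a target ROI.

    transition[v] is a list of (neighbour, probability) pairs, standing in
    for the local fiber-direction distribution inferred at voxel v.
    """
    rng = rng or random.Random(0)
    path = [seed]
    for _ in range(max_steps):
        v = path[-1]
        if v in target:
            return path            # the walk reached the target ROI
        nbrs, probs = zip(*transition[v])
        path.append(rng.choices(nbrs, weights=probs)[0])
    return None                    # the walk failed to connect

# Toy 1-D "volume" of 5 voxels whose tensors favour the rightward direction.
transition = {v: [(v + 1, 0.8), (max(v - 1, 0), 0.2)] for v in range(4)}
tract = sample_tract(transition, seed=0, target={4})
```

Repeating this sampling many times yields the collection of fibers over which connection statistics (such as FA) are computed.&lt;br /&gt;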
&lt;br /&gt;
The resulting collection of fibers and the associated FA values&lt;br /&gt;
provide useful statistics on the properties of connections between the&lt;br /&gt;
two regions. To constrain the sampling process to the relevant white&lt;br /&gt;
matter region, we use atlas-based segmentation to label ventricles and&lt;br /&gt;
gray matter and to exclude them from the search space. As such, this&lt;br /&gt;
step relies heavily on the registration and segmentation functionality&lt;br /&gt;
in Slicer.&lt;br /&gt;
&lt;br /&gt;
Over the last year, we first tested the algorithm on the schizophrenia&lt;br /&gt;
dataset already available to NA-MIC, acquired at 1.5T. This step&lt;br /&gt;
allowed us to optimize the algorithm for our dataset, as well as to&lt;br /&gt;
develop a data analysis pipeline that would then be easily&lt;br /&gt;
transferable to other image sets and structures.&lt;br /&gt;
&lt;br /&gt;
The next step, also accomplished this last year, was to apply the&lt;br /&gt;
algorithm to the new, higher-resolution NA-MIC dataset, and to study&lt;br /&gt;
smaller white matter connections including the cingulum bundle,&lt;br /&gt;
arcuate fasciculus, uncinate fasciculus, and internal capsule. This&lt;br /&gt;
step was accomplished and the data presented at the Santa Fe meeting&lt;br /&gt;
in October 2007.&lt;br /&gt;
&lt;br /&gt;
Upon completion of the testing phase, we started analysis of the&lt;br /&gt;
arcuate fasciculus, a language-related fiber bundle, in the new&lt;br /&gt;
high-resolution 3T dataset. Our current work focuses on improving the&lt;br /&gt;
parameterization of the tracts in order to obtain FA measurements&lt;br /&gt;
along them.&lt;br /&gt;
&lt;br /&gt;
===Engineering Component (Davis)===&lt;br /&gt;
The Stochastic Tractography Slicer module has been finished and was&lt;br /&gt;
presented at the AHM in Salt Lake City. It is now part of both&lt;br /&gt;
Slicer 2.8 and Slicer3, and module documentation has also been&lt;br /&gt;
created. Current engineering efforts are concentrated on maintaining&lt;br /&gt;
the module, optimizing it to work with other data formats, and adding&lt;br /&gt;
new functionality such as better registration, distortion correction,&lt;br /&gt;
and ways of extracting and measuring FA along the tracts.&lt;br /&gt;
&lt;br /&gt;
===Clinical Component (Kubicki)===&lt;br /&gt;
Over the last year, we tested the algorithm on the schizophrenia&lt;br /&gt;
dataset already available to NA-MIC, acquired at 1.5T. The anterior&lt;br /&gt;
limb of the internal capsule, a large structure connecting the&lt;br /&gt;
thalamus with the frontal lobe, was extracted and analyzed in a group&lt;br /&gt;
of 20 schizophrenics and 20 control subjects. We presented results&lt;br /&gt;
showing group differences in FA values at the ACNP symposium in&lt;br /&gt;
December 2007. Next, stochastic tractography was tested and optimized&lt;br /&gt;
for the new high-resolution DTI dataset acquired on a 3T GE magnet.&lt;br /&gt;
&lt;br /&gt;
Upon completion of the testing phase, we started analysis of the&lt;br /&gt;
arcuate fasciculus, a language-related fiber bundle, in 20 controls&lt;br /&gt;
and 20 chronic schizophrenics. For each subject, we performed the&lt;br /&gt;
white matter segmentation and extracted the regions interconnected by&lt;br /&gt;
the arcuate fasciculus (the inferior frontal and superior temporal&lt;br /&gt;
gyri), as well as another ROI to guide the tract (a &amp;quot;waypoint&amp;quot; ROI).&lt;br /&gt;
We presented the preliminary results of the probabilistic tractography&lt;br /&gt;
and the statistics of FA extracted for each tract for a small set of 7&lt;br /&gt;
patients and 12 controls at the AHM in January 2008. The full study is&lt;br /&gt;
currently underway.&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:Harvard:Brain_Segmentation_Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Brachytherapy Needle Positioning Robot Integration (Fichtinger)==&lt;br /&gt;
===Overview (Fichtinger)===&lt;br /&gt;
Numerous studies have demonstrated the efficacy of image-guided&lt;br /&gt;
needle-based therapy and biopsy in the management of prostate&lt;br /&gt;
cancer. The accuracy of traditional prostate interventions performed using&lt;br /&gt;
transrectal ultrasound (TRUS) is limited by image fidelity, needle&lt;br /&gt;
template guides, needle deflection and tissue deformation. Magnetic Resonance&lt;br /&gt;
Imaging (MRI) is an ideal modality for guiding and monitoring&lt;br /&gt;
such interventions due to its excellent visualization of the prostate, its&lt;br /&gt;
sub-structure and surrounding tissues. &lt;br /&gt;
&lt;br /&gt;
We have designed a comprehensive robotic assistant system that allows prostate biopsy and brachytherapy&lt;br /&gt;
procedures to be performed entirely inside a 3T closed MRI scanner. The current system uses a transrectal approach to the prostate. An MRI-compatible manipulator is equipped with a steerable needle&lt;br /&gt;
guide and an endorectal imaging coil, both tuned for 3T magnets and independent of any particular scanner.&lt;br /&gt;
&lt;br /&gt;
Under the NAMIC initiative, the image computing, visualization, intervention planning, and kinematic planning interface is being implemented as an open source system built on the NAMIC toolkit and its components, such as ITK.&lt;br /&gt;
&lt;br /&gt;
Of particular clinical importance is the addition of unsupervised prostate segmentation and registration methods, also developed under the NAMIC umbrella.&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Tannenbaum)===&lt;br /&gt;
We have worked on both the segmentation and the registration of the prostate from MRI and ultrasound data. We explain each of the steps now.&lt;br /&gt;
&lt;br /&gt;
====Prostate Segmentation====&lt;br /&gt;
&lt;br /&gt;
We first must extract the prostate. We have considered three possible methods: a combination of Cellular Automata (CA, also known as Grow Cut) with Geometric Active Contour (GAC) methods; employing an ellipsoid to match the prostate in the 3D image; and a shape-based approach using spherical wavelets. More details are given below, and images and further details may be found at [[Projects:ProstateSegmentation|GaTech Algorithm Prostate Segmentation]].&lt;br /&gt;
&lt;br /&gt;
1. A cellular automata algorithm is used to give an initial segmentation. It begins with a rough manual initialization and then iteratively classifies all pixels into object and background until convergence. It effectively overcomes the problems of weak boundaries and inhomogeneity within the object or background. This result in turn is fed into a Geometric Active Contour for finer tuning. We are initially using the edge-based minimal surface approach (the generalization of the standard Geodesic Active Contour model), which seems to give very reasonable results. Both steps of the algorithm are implemented in 3D. An ITK Cellular Automata filter, handling N-D data, has already been completed and submitted to the NA-MIC SandBox.&lt;br /&gt;
&lt;br /&gt;
2. Spherical wavelets have proven to be a very natural way of representing 3D shapes which are compact and simply connected (topological spheres). We developed a segmentation framework using this 3D wavelet representation and a multiscale prior. The parameters of our model are the learned shape parameters, based on the spherical wavelet coefficients, as well as pose parameters that accommodate shape variability due to a similarity transformation (rotation, scale, translation), which is not explicitly modeled with the shape parameters; the surface is transformed according to the pose parameters. We used a region-based energy to drive the evolution of the parametric deformable surface for segmentation. Our segmentation algorithm deforms an initial surface according to the gradient flow that minimizes the energy functional in terms of the pose and shape parameters. Additionally, the optimization method can be applied in a coarse-to-fine manner. Spherical wavelets and conformal mappings are&lt;br /&gt;
already part of the NA-MIC SandBox.&lt;br /&gt;
&lt;br /&gt;
3. The third method is very closely related to the second. It is based on the observation that the prostate may be roughly modelled as an ellipsoid. One can then employ this ellipsoid model, coupled with a local/global segmentation energy approach which we developed this year, as the basis of a segmentation procedure. Because of the local/global nature of the functional and the implicit introduction of scale, this methodology may be very useful for MRI prostate data.&lt;br /&gt;
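The cell-automaton update at the heart of the first method can be sketched in a few lines (a hypothetical 1-D Python sketch; the actual NA-MIC filter is an N-D ITK implementation, and the images and seeds here are invented):&lt;br /&gt;

```python
def grow_cut(intensity, labels, max_intensity=255.0, iters=50):
    """1-D Grow Cut sketch: labeled cells 'attack' their neighbours;
    attack force decays with the intensity difference, so growth
    stops at strong edges."""
    n = len(intensity)
    strength = [1.0 if lab else 0.0 for lab in labels]
    labels = list(labels)
    for _ in range(iters):
        changed = False
        for p in range(n):
            for q in (p - 1, p + 1):
                if 0 <= q < n:
                    g = 1.0 - abs(intensity[p] - intensity[q]) / max_intensity
                    if strength[q] * g > strength[p]:
                        labels[p] = labels[q]       # q conquers p
                        strength[p] = strength[q] * g
                        changed = True
        if not changed:
            break                                   # converged
    return labels

img = [10, 10, 200, 200, 200, 10, 10]   # bright object on dark background
seeds = [1, 0, 0, 2, 0, 0, 1]           # 1 = background, 2 = object, 0 = unlabeled
result = grow_cut(img, seeds)
```

From two rough seed strokes the automaton floods both regions until they meet at the intensity edge, which is what makes it a convenient initializer for the finer active-contour step.&lt;br /&gt;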
&lt;br /&gt;
====Prostate Registration====&lt;br /&gt;
&lt;br /&gt;
The registration and segmentation elements of our algorithm are difficult to separate. Thus, for the 3D shape-driven segmentation part, the shapes must first be aligned through a conformal and area-correcting alignment process. The prostate presents a number of difficulties for traditional approaches since there are no easily discernible landmarks. On the other hand, we observed that the surface of the prostate is almost half convex and half concave. The concave region may be captured and used to register the shapes; thus we register the whole shape by registering a certain region on it. This concave region is characterized by its negative mean curvature. We treat the mean curvature as a scalar field defined on the surface, and we have extended the Chan-Vese method (in which one wants to separate the means with respect to the regions defined by the interior and exterior of the evolving active contour) to the case at hand on the prostate surface. The method is implemented in C++ and successfully extracts the concave surface region. It could also be used to extract regions on a surface according to any feature characterized by a scalar field defined on the surface.&lt;br /&gt;
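The mean-separation criterion behind this Chan-Vese extension can be illustrated on a toy 1-D scalar field (illustrative only; the actual method evolves a contour on the triangulated prostate surface, and the curvature samples below are invented):&lt;br /&gt;

```python
def two_means(values, iters=20):
    """Find the threshold that best separates a scalar field into two
    regions whose means are maximally distinct (the 1-D analogue of
    the Chan-Vese mean-separation criterion)."""
    t = sum(values) / len(values)          # start from the global mean
    for _ in range(iters):
        lo = [v for v in values if v < t]
        hi = [v for v in values if v >= t]
        t_new = 0.5 * (sum(lo) / len(lo) + sum(hi) / len(hi))
        if abs(t_new - t) < 1e-12:
            break                           # converged
        t = t_new
    return t

# Toy mean-curvature samples: concave (negative) vs convex (positive)
curvature = [-0.9, -0.8, -0.85, 0.4, 0.5, 0.45]
t = two_means(curvature)
concave = [v < t for v in curvature]        # the extracted concave region
```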
&lt;br /&gt;
In order to incorporate the extracted region as a landmark in the registration process, instead of matching two binary images directly, we transform the binary images into a form that highlights the boundary region. This is done by applying a Gaussian function on the narrow band of the signed distance function of the binary image. The transformed image enjoys the advantages of both the parametric and implicit representations of shapes: it has a compact description, as the parametric representation does, and, as with the implicit representation, it avoids the correspondence problem. Moreover, we incorporate the extracted concave regions into such images for registration, which leads to a better result.&lt;br /&gt;
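In 1-D the transformation amounts to the following (a sketch; the sigma value and the distance profile are illustrative assumptions, and the real method works on 3-D label maps):&lt;br /&gt;

```python
import math

def shape_image(signed_distance, sigma=2.0):
    """Gaussian of the signed distance: peaks on the boundary and decays
    with distance on either side, giving a smooth narrow-band highlight
    of the shape boundary."""
    return [math.exp(-(d * d) / (2.0 * sigma * sigma)) for d in signed_distance]

# 1-D signed distance to a boundary: negative inside, positive outside
d = [-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
profile = shape_image(d)
```

Because the result is a smooth image rather than a point set, two such shape images can be matched with standard intensity-based registration, with no explicit point correspondence needed.&lt;br /&gt;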
&lt;br /&gt;
Finally, in the past year we have developed a particle filtering approach for the general problem of registering two point sets that differ by a rigid-body transformation, which may be very useful for this project. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. We treat motion as a local variation in pose parameters obtained from running several iterations of the standard Iterative Closest Point (ICP) algorithm.  Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence often found in local optimizer functions used to tackle the registration task. In contrast with other techniques, this approach requires no annealing schedule, which results in a reduction in computational complexity and maintains the temporal coherency of the state (no loss of information).  Also, unlike most alternative approaches for point set registration, we make no geometric assumptions on the two data sets.&lt;br /&gt;
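The sample/weight/resample loop can be illustrated on a toy problem with a single translation parameter (an assumption-laden sketch: the real method estimates a full rigid-body pose from 3-D point sets and interleaves ICP iterations; the point sets, cost, and jitter scale here are invented):&lt;br /&gt;

```python
import math
import random

def pf_register_1d(fixed, moving, n_particles=200, iters=30, rng=None):
    """Estimate the translation aligning 'moving' to 'fixed' by filtering
    a population of candidate translations."""
    rng = rng or random.Random(0)

    def cost(t):
        # sum of closest-point distances after translating 'moving' by t
        return sum(min(abs(m + t - f) for f in fixed) for m in moving)

    particles = [rng.uniform(-10.0, 10.0) for _ in range(n_particles)]
    for _ in range(iters):
        weights = [math.exp(-cost(t)) for t in particles]
        # resample proportionally to weight, then jitter each particle:
        # the stochastic motion dynamics that widen the convergence band
        particles = [rng.choices(particles, weights=weights)[0]
                     + rng.gauss(0.0, 0.1) for _ in range(n_particles)]
    return min(particles, key=cost)

fixed = [0.0, 1.0, 2.5, 4.0]
moving = [x - 3.0 for x in fixed]      # true translation is +3
t_hat = pf_register_1d(fixed, moving)
```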
&lt;br /&gt;
===Engineering Component (Hayes)===&lt;br /&gt;
===Clinical Component (Fichtinger)===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The current robotic prostate biopsy and implant system has been applied on over 50 patients. The system is being replicated for multicenter trials at Johns Hopkins (Baltimore), NIH (Bethesda), Brigham and Women's Hospital (Boston), and Princess Margaret Hospital (Toronto). Of these, NIH and Princess Margaret have completed the ethics board approval and will commence trials in May 2008. Others will follow suit shortly. In the meantime, components of the underlying intervention planning and monitoring system are being replaced with NAMIC modules. The current vtk-based overall interface is being replaced with Slicer, which will accommodate existing and forthcoming vtk/itk modules. Ongoing clinical trials will seamlessly absorb the NAMIC system interface, based on detailed functional equivalency tests to be conducted. (Note that most IRBs do not require resubmission of the protocol when the interface software is updated, as long as the system's functionality is guaranteed to be intact.)&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:JHU:Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Brain Lesion Analysis in Neuropsychiatric Systemic Lupus Erythematosus (Bockholt)==&lt;br /&gt;
===Overview (Bockholt)===&lt;br /&gt;
===Algorithm Component (Whitaker)===&lt;br /&gt;
===Engineering Component (Pieper)===&lt;br /&gt;
===Clinical Component (Bockholt)===&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:MIND:Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Cortical Thickness for Autism (Hazlett)==&lt;br /&gt;
===Overview (Hazlett)===&lt;br /&gt;
&lt;br /&gt;
A primary goal of the UNC DBP is to examine changes in cortical thickness in children with autism compared to typical controls.  We want to examine group differences in both local and regional cortical thickness, and would also like to examine longitudinal changes in the cortex from ages 2-4 years.  To accomplish this goal, this project will create an end-to-end application within Slicer3 allowing individual and group analysis of regional and local cortical thickness. This workflow will then be applied to our study data (already collected).&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Styner)===&lt;br /&gt;
&lt;br /&gt;
The basic steps necessary for the cortical thickness application entail, first, tissue segmentation in order to separate white and gray matter regions; second, cortical thickness measurement; third, cortical correspondence to compare measurements across subjects; and finally a statistical analysis to locally compute group differences.&lt;br /&gt;
====Tissue segmentation====&lt;br /&gt;
We have successfully adapted the UNC segmentation tool itkEMS, which we use for segmentation of the young brain, to Slicer. We also created a young brain atlas for the current Slicer3 EM Segment module. Tests have been successful, and a comparative study against itkEMS has shown that further parameter optimization is needed to reach the same quality. &lt;br /&gt;
&lt;br /&gt;
====Cortical thickness measurement====&lt;br /&gt;
The UNC algorithm for the measurement of local cortical thickness, given a labeling of white and gray matter, has been developed into a Slicer3 external module. This module lends itself well to regional analysis of cortical thickness, but less so to local analysis due to its non-symmetric and sparse measurements. Ongoing development is focusing on a symmetric, Laplacian-based cortical thickness measure suitable for local analysis.&lt;br /&gt;
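The Laplacian idea can be sketched on a toy 2-D strip where the answer is known in advance (a hypothetical Python sketch, not the UNC module: the white and pial boundaries are the flat rows y=0 and y=ny-1, so every streamline should have length ny-1 grid units):&lt;br /&gt;

```python
import math

def laplace_thickness(nx=8, ny=6, iters=500, h=0.1):
    # potential: 0 on the white-matter boundary (y=0), 1 on the pial
    # boundary (y=ny-1), relaxed to a harmonic function in between
    phi = [[0.0 if y == 0 else 1.0 if y == ny - 1 else 0.5
            for x in range(nx)] for y in range(ny)]
    for _ in range(iters):                      # Jacobi relaxation
        new = [row[:] for row in phi]
        for y in range(1, ny - 1):
            for x in range(nx):
                xl, xr = max(x - 1, 0), min(x + 1, nx - 1)
                new[y][x] = 0.25 * (phi[y - 1][x] + phi[y + 1][x]
                                    + phi[y][xl] + phi[y][xr])
        phi = new

    def streamline_length(x0):
        # march from the white boundary along the gradient of phi,
        # accumulating arclength until the pial boundary is reached;
        # the accumulated length is the thickness at x0
        px, py, length = float(x0), 0.0, 0.0
        while py < ny - 1 - 1e-6:
            ix = min(int(round(px)), nx - 1)
            iy = min(int(round(py)), ny - 2)
            gx = 0.0                            # field varies only in y here
            gy = phi[iy + 1][ix] - phi[iy][ix]
            norm = math.hypot(gx, gy)
            px += h * gx / norm
            py += h * gy / norm
            length += h
        return length

    return [streamline_length(x) for x in range(nx)]

thickness = laplace_thickness()
```

The symmetry the text refers to comes from this construction: the same streamline family connects the two surfaces in both directions, unlike closest-point distances.&lt;br /&gt;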
&lt;br /&gt;
====Cortical correspondence (regional)====&lt;br /&gt;
&lt;br /&gt;
For regional correspondence, an existing lobar parcellation atlas is deformably registered using a b-spline registration tool. First tests have been very promising, and the release of the corresponding Slicer3 registration module is scheduled to be finished within the next month; the regional analysis workflow will be available at that time.&lt;br /&gt;
&lt;br /&gt;
====Cortical correspondence (local)====&lt;br /&gt;
Local cortical correspondence requires a two-step process of white/gray surface inflation followed by group-wise correspondence computation. White matter surface extraction and inflation are currently achieved with an external tool, and developing a Slicer3-based solution is a goal for the next year. The group-wise correspondence step has been fully solved, and a Slicer3 module is already available. Evaluation on real data has shown that our method outperforms the currently widely employed FreeSurfer framework. &lt;br /&gt;
&lt;br /&gt;
====Statistical analysis/Hypothesis testing====&lt;br /&gt;
Regional analysis can be done with standard statistical tools such as MANOVA, as there is a limited, relatively small number of regions. Local analysis, on the other hand, needs local non-parametric testing, multiple-comparison correction, and correlative analysis that is not routinely available. We are currently extending the Slicer3 module designed for statistical shape analysis to this purpose, incorporating a locally applied General Linear Model and a MANCOVA-based testing framework.&lt;br /&gt;
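One concrete form such local non-parametric testing can take is a per-vertex permutation test on a group-difference statistic (a hedged sketch: the data and function names are invented, a simple mean difference stands in for the GLM/MANCOVA statistic, and multiple-comparison correction across vertices is omitted):&lt;br /&gt;

```python
import random

def permutation_p(group_a, group_b, n_perm=2000, rng=None):
    """Two-sided permutation p-value for the difference of group means
    at a single location (e.g. one cortical surface vertex)."""
    rng = rng or random.Random(0)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                 # relabel subjects at random
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)        # add-one smoothing keeps p > 0

# Toy cortical-thickness samples (mm) at a single vertex
autism  = [2.1, 2.3, 2.2, 2.4, 2.2]
control = [2.6, 2.8, 2.7, 2.9, 2.7]
p = permutation_p(autism, control)
```

Running such a test at every vertex yields a p-value map, which is then the input to the multiple-comparison correction step mentioned above.&lt;br /&gt;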
&lt;br /&gt;
===Engineering Component (Miller, Vachet)===&lt;br /&gt;
&lt;br /&gt;
Several of the algorithms for this Clinical Roadmap project were already in software tools utilizing ITK.  These tools have been refactored to be NA-MIC compatible and repackaged as Slicer3 plugins. Slicer3 has been extended to support this Clinical Roadmap by adding transforms as a parameter type that can be passed to and returned by plugins. Slicer3 registration and resampling modules have been refactored to produce and accept transforms as parameters. Slicer3 has also been extended to support nonlinear transformation types (B-Spline and deformation fields) in its data model.&lt;br /&gt;
&lt;br /&gt;
===Clinical Component (Hazlett)===&lt;br /&gt;
So far, the clinical component of this project has involved interfacing with the algorithms and engineering teams to provide the project specifications, feedback, and data (needed for testing).  During this past year, development and programming work has proceeded satisfactorily, and we anticipate being able to test our project hypotheses about cortical thickness in autism by the end of our project period.  The primary accomplishment of this first year has therefore been the development and testing of the methods necessary for the cortical thickness pipeline.&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:UNC:Cortical_Thickness_Roadmap here on the NA-MIC wiki].&lt;br /&gt;
&lt;br /&gt;
=Four Infrastructure Topics=&lt;br /&gt;
==Diffusion Image Analysis (Gerig)==&lt;br /&gt;
===Progress===&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:DiffusionImageAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==Structural Analysis(Tannenbaum)==&lt;br /&gt;
===Progress===&lt;br /&gt;
Under Structural Analysis, the main NAMIC research topics are structural segmentation, registration techniques, and shape analysis. These topics are interrelated, and research in one often finds application in another: shape analysis can yield useful priors for segmentation, while segmentation and registration can provide structural correspondences for use in shape analysis. &lt;br /&gt;
&lt;br /&gt;
An overview of selected progress highlights under these broad topics follows.&lt;br /&gt;
&lt;br /&gt;
Structural Segmentation&lt;br /&gt;
&lt;br /&gt;
* Directional Based Segmentation&lt;br /&gt;
We have proposed a directional segmentation framework for direction-weighted magnetic resonance imagery that augments the Geodesic Active Contour framework with directional information. The classical scalar conformal factor is replaced by a factor that incorporates directionality, and we showed mathematically that the optimization problem is well-defined when this factor is a Finsler metric. The calculus of variations or dynamic programming may be used to find the optimal curves. This past year we applied this methodology to extracting the anchor tract (or centerline) of neural fiber bundles. Further, we combined it with Bayes’ rule in a volumetric segmentation for extracting entire fiber bundles, and we proposed a novel shape prior in the volumetric segmentation to extract tubular fiber bundles.&lt;br /&gt;
&lt;br /&gt;
* Stochastic Segmentation&lt;br /&gt;
&lt;br /&gt;
We have continued work this year on developing new stochastic methods for implementing curvature-driven flows for medical tasks such as segmentation. We can now generalize our results to an arbitrary Riemannian surface, which includes the geodesic active contours as a special case, and we are implementing the directional flows based on the anisotropic conformal factor described above using this stochastic methodology. Our stochastic snake models are based on the theory of interacting particle systems. This brings together the theories of curve evolution and hydrodynamic limits, and as such advances our growing use of joint methods from probability and partial differential equations in image processing and computer vision. We now have working C++ code for the two-dimensional case and have worked out the stochastic model of the general geodesic active contour model.&lt;br /&gt;
&lt;br /&gt;
* Statistical PDE Methods for Segmentation&lt;br /&gt;
&lt;br /&gt;
Our objective is to add various statistical measures to our PDE flows for medical imaging, allowing global image information to be incorporated into the locally defined PDE framework. This year, we developed flows that separate the distributions inside and outside the evolving contour, and we have also been including shape information in the flows. We have completed a statistically based flow for segmentation using fast marching, and the code has been integrated into Slicer. &lt;br /&gt;
&lt;br /&gt;
* Atlas Renormalization for Improved Brain MR Image Segmentation&lt;br /&gt;
&lt;br /&gt;
Atlas-based approaches can automatically identify detailed brain structures from 3-D magnetic resonance (MR) brain images. However, the accuracy often degrades when processing data acquired on a different scanner platform or pulse sequence than the data used for the atlas training. In this project, we work to improve the performance of an atlas-based whole brain segmentation method by introducing an intensity renormalization procedure that automatically adjusts the prior atlas intensity model to new input data. Validation using manually labeled test datasets shows that the new procedure improves segmentation accuracy (as measured by the Dice coefficient) by 10% or more for several structures including hippocampus, amygdala, caudate, and pallidum. The results verify that this new procedure reduces the sensitivity of the whole brain segmentation method to changes in scanner platforms and improves its accuracy and robustness, which can thus facilitate multicenter or multisite neuroanatomical imaging studies.&lt;br /&gt;
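The Dice coefficient used here as the accuracy measure compares two binary label maps; a minimal reference implementation:&lt;br /&gt;

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary label masks: 2|A∩B| / (|A| + |B|).
    Returns 1.0 for two empty masks by convention."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

Identical masks score 1.0 and disjoint masks score 0.0, so a 10% improvement in Dice is a substantial gain in voxel overlap.&lt;br /&gt;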
&lt;br /&gt;
* Multiscale Shape Segmentation Techniques&lt;br /&gt;
&lt;br /&gt;
The goal of this project is to represent multiscale variations in a shape population in order to drive the segmentation of deep brain structures, such as the caudate nucleus or the hippocampus. Our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We derived a parametric active surface evolution using the multiscale prior coefficients as parameters of our optimization procedure, so that the prior is naturally included in the segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We applied our algorithm to the caudate nucleus, a brain structure of interest in the study of schizophrenia. Our validation shows that the algorithm is computationally efficient and outperforms the Active Shape Model (ASM) algorithm by capturing finer shape details.&lt;br /&gt;
&lt;br /&gt;
Registration&lt;br /&gt;
&lt;br /&gt;
* Optimal Mass Transport Registration&lt;br /&gt;
The aim of this project is to provide a computationally efficient non-rigid/elastic image registration algorithm based on Optimal Mass Transport theory. We use the Monge-Kantorovich formulation of the Optimal Mass Transport problem and implement the gradient-flow PDE approach, using multi-resolution and multi-grid techniques to speed up convergence. We also leverage the computational power of general-purpose graphics processing units available on standard desktop machines to exploit the inherent parallelism in our algorithm. We have implemented 2D and 3D multi-resolution registration using Optimal Mass Transport and are currently applying it to the registration of 3D datasets. &lt;br /&gt;
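The full gradient-flow solver is beyond a short example, but in one dimension the Monge-Kantorovich problem has a closed-form solution obtained by matching cumulative distributions, which illustrates the mass-preserving map the registration computes (a 1D sketch only; the project solves the 2D/3D problem by PDE gradient flow):&lt;br /&gt;

```python
import numpy as np

def transport_map_1d(mu, nu):
    """Closed-form 1D optimal mass transport: the monotone map sending
    density mu to density nu is T = F_nu^{-1} o F_mu, computed here on a
    shared uniform grid by matching cumulative distributions."""
    f_mu = np.cumsum(mu) / mu.sum()   # CDF of the source density
    f_nu = np.cumsum(nu) / nu.sum()   # CDF of the target density
    grid = np.arange(len(mu), dtype=float)
    # invert F_nu by interpolation: for each source bin, find where the
    # target CDF reaches the same cumulative mass
    return np.interp(f_mu, f_nu, grid)
```

The resulting map is monotone, which is exactly the non-folding property that makes mass-transport maps attractive as elastic deformations.&lt;br /&gt;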
&lt;br /&gt;
* Diffusion Tensor Image Processing Tools&lt;br /&gt;
&lt;br /&gt;
We aim to provide methods for computing geodesics and distances between diffusion tensors. One goal is to provide hypothesis testing for differences between groups, which will involve interpolation of diffusion tensors as weighted averages in the metric framework. We will also provide filtering and eddy-current correction. This year, we developed a Slicer module for DT-MRI Rician noise removal, developed prototypes of DTI geometry and statistics packages, and began work on a general method for hypothesis testing between groups of diffusion tensors. &lt;br /&gt;
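One common choice for such metric-framework weighted averages is the Log-Euclidean metric; the sketch below interpolates symmetric positive-definite tensors under that metric (the choice of metric here is an illustrative assumption, not necessarily the one used in the project):&lt;br /&gt;

```python
import numpy as np

def _logm_spd(t):
    # matrix logarithm of a symmetric positive-definite tensor
    w, v = np.linalg.eigh(t)
    return v @ np.diag(np.log(w)) @ v.T

def _expm_sym(t):
    # matrix exponential of a symmetric tensor
    w, v = np.linalg.eigh(t)
    return v @ np.diag(np.exp(w)) @ v.T

def interpolate_tensors(tensors, weights):
    """Weighted average of SPD diffusion tensors under the Log-Euclidean
    metric: exp(sum_i w_i log(D_i)). The result always stays inside the
    SPD cone, unlike a plain Euclidean average of tensor entries."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    log_mean = sum(w * _logm_spd(t) for w, t in zip(weights, tensors))
    return _expm_sym(log_mean)
```

With two tensors and equal weights this reduces to the matrix geometric mean, which avoids the eigenvalue swelling of linear tensor averaging.&lt;br /&gt;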
&lt;br /&gt;
* Point Set Rigid Registration&lt;br /&gt;
&lt;br /&gt;
We propose a particle filtering scheme for the registration of 2D and 3D point sets undergoing a rigid-body transformation, incorporating stochastic dynamics to model the uncertainty of the registration process. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. In this work, we treat motion as a local variation in the pose parameters obtained from running a few iterations of the standard Iterative Closest Point (ICP) algorithm. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence and to provide a dynamical model of uncertainty. In contrast with other techniques, our approach requires no annealing schedule, which reduces computational complexity and maintains the temporal coherency of the state (no loss of information). Also, unlike most alternative approaches to point set registration, we make no geometric assumptions about the two data sets.&lt;br /&gt;
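The ICP iterations that supply the pose estimates can be sketched as follows; this is the standard closest-point matching plus closed-form (Kabsch/SVD) rigid fit, without the particle filter layered on top:&lt;br /&gt;

```python
import numpy as np

def icp_rigid(source, target, n_iter=20):
    """Basic Iterative Closest Point for 2D/3D point sets: alternate
    nearest-point correspondence with the closed-form least-squares
    rigid fit. Returns the accumulated rotation and translation."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    dim = src.shape[1]
    r_total, t_total = np.eye(dim), np.zeros(dim)
    for _ in range(n_iter):
        # correspondence: nearest target point for every source point
        d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(axis=2)
        matched = tgt[d2.argmin(axis=1)]
        # closed-form rigid fit (Kabsch/Procrustes) to the matched pairs
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        u, _, vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
        corr = np.eye(dim)
        corr[-1, -1] = np.sign(np.linalg.det(vt.T @ u.T))  # no reflections
        r = vt.T @ corr @ u.T
        t = mu_m - r @ mu_s
        src = src @ r.T + t
        r_total, t_total = r @ r_total, r @ t_total + t
    return r_total, t_total
```

The narrow band of convergence mentioned above is visible here: ICP succeeds only when nearest-neighbor matching is mostly correct at the starting pose, which is what the stochastic dynamics are designed to widen.&lt;br /&gt;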
&lt;br /&gt;
* Cortical Correspondence using Particle System&lt;br /&gt;
&lt;br /&gt;
In this project, we want to compute cortical correspondence across populations using various features, such as cortical structure, DTI connectivity, vascular structure, and functional data (fMRI). This is challenging because of the highly convoluted surface of the cortex, as well as the differing properties of the data features we want to incorporate. We plan to use a particle-based entropy-minimizing system to compute the correspondence in a population-based manner. This is advantageous because it requires neither a spherical parameterization of the surface nor a surface of spherical topology. It would also eventually enable correspondence computation on subcortical structures and on the cortical surface within the same framework. To circumvent the disadvantage that particles are assumed to lie on local tangent planes, we plan to first ‘inflate’ the cortex surface. Currently, we are at the testing stage using structural data, namely point locations and sulcal depth (as computed by FreeSurfer).&lt;br /&gt;
&lt;br /&gt;
* Multimodal Atlas &lt;br /&gt;
&lt;br /&gt;
In this work, we propose and investigate an algorithm, called iCluster, that jointly co-registers a collection of images while computing multiple templates. It is based on the following idea: given the templates, the co-registration problem becomes simple, reducing to a number of pairwise registration instances; conversely, given a collection of co-registered images, an off-the-shelf clustering or averaging algorithm can be used to compute the templates. The algorithm assumes a fixed and known number of template images. We formulate the problem as maximum likelihood estimation and solve it with a generalized maximum likelihood algorithm. In the E-step, we compute membership probabilities. In the M-step, we update the template images as weighted averages of the images, where the weights are the memberships, update the template priors, and then perform a collection of independent pairwise registration instances. The algorithm is currently implemented in the Insight Toolkit (ITK), and we next plan to integrate it into Slicer.&lt;br /&gt;
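The E- and M-steps can be sketched as below for a simplified case in which the registration step is omitted and an isotropic Gaussian image likelihood is assumed (both simplifications are ours, for illustration):&lt;br /&gt;

```python
import numpy as np

def icluster_em(images, n_templates=2, n_iter=50, sigma=1.0, seed=0):
    """Simplified iCluster-style EM (registration step omitted). E-step:
    soft memberships of each image to each template under an isotropic
    Gaussian model. M-step: templates as membership-weighted averages
    and updated template priors. images: (n_images, n_pixels)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(images, dtype=float)
    templates = x[rng.choice(len(x), n_templates, replace=False)].copy()
    priors = np.full(n_templates, 1.0 / n_templates)
    for _ in range(n_iter):
        # E-step: membership probabilities from Gaussian log-likelihoods
        d2 = ((x[:, None, :] - templates[None, :, :]) ** 2).sum(axis=2)
        logp = np.log(priors) - d2 / (2.0 * sigma ** 2)
        logp -= logp.max(axis=1, keepdims=True)  # numerical stability
        memberships = np.exp(logp)
        memberships /= memberships.sum(axis=1, keepdims=True)
        # M-step: weighted-average templates and mean-membership priors
        weights = memberships / memberships.sum(axis=0)
        templates = weights.T @ x
        priors = memberships.mean(axis=0)
    return templates, memberships
```

In the full algorithm, each EM sweep additionally re-runs the pairwise registrations of every image to its (soft-assigned) templates.&lt;br /&gt;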
&lt;br /&gt;
* Groupwise Registration&lt;br /&gt;
&lt;br /&gt;
We aim to provide efficient groupwise registration algorithms for population analysis of anatomical structures. Here we extend a previously demonstrated entropy-based groupwise registration method to include a free-form deformation model based on B-splines, and we provide an efficient implementation using stochastic gradient descent in a multi-resolution setting. We demonstrate the method on a set of 50 MRI brain scans and compare the results to a pairwise approach, using segmentation labels to evaluate the quality of alignment. Our results indicate that increasing the complexity of the deformation model improves registration accuracy significantly, especially in cortical regions.&lt;br /&gt;
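The free-form deformation model referred to above represents the displacement field as a sum of control-point displacements weighted by cubic B-spline basis functions; a 1D evaluation sketch (the project works in 3D):&lt;br /&gt;

```python
import numpy as np

def bspline_ffd_1d(x, control_points, spacing):
    """Evaluate a 1D cubic B-spline free-form deformation: the
    displacement at each x is a weighted sum of the 4 nearest
    control-point displacements, using cubic B-spline basis weights."""
    def basis(u):
        # the four cubic B-spline basis functions; they sum to 1
        return np.array([
            (1 - u) ** 3 / 6.0,
            (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
            (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
            u ** 3 / 6.0,
        ])

    disp = np.zeros(len(x), dtype=float)
    for j, xi in enumerate(x):
        i = int(np.floor(xi / spacing))  # control-point cell index
        u = xi / spacing - i             # fractional position in the cell
        w = basis(u)
        for k in range(4):
            idx = i + k - 1
            if 0 <= idx < len(control_points):
                disp[j] += w[k] * control_points[idx]
    return disp
```

Because each point depends on only 4 control points per dimension, a stochastic gradient step touches a small, local set of parameters, which is what makes the optimization efficient.&lt;br /&gt;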
&lt;br /&gt;
Shape Analysis&lt;br /&gt;
&lt;br /&gt;
* Shape Analysis Framework Using SPHARM-PDM&lt;br /&gt;
&lt;br /&gt;
The UNC shape analysis is based on an analysis framework for objects with spherical topology, described by sampled spherical harmonics (SPHARM-PDM). The input to the proposed shape analysis is a set of binary segmentations of a single brain structure, such as the hippocampus or caudate. Group tests can be visualized by P-values and by mean-difference magnitude and vector maps, as well as maps of the group covariance information. The implementation has reached a stable state and has been disseminated to several collaborating labs within NAMIC (BWH, Georgia Tech, Utah). Current development focuses on integrating the command-line tools into Slicer (v3) via the Slicer execution model. The whole shape analysis pipeline is encapsulated and accessible to the trained clinical collaborator. The current toolset distribution (via NeuroLib) now also contains open data for other researchers to evaluate their shape analysis enhancements.&lt;br /&gt;
&lt;br /&gt;
* Multiscale Shape Analysis&lt;br /&gt;
&lt;br /&gt;
We present a novel method for statistical surface-based morphometry based on non-parametric permutation tests and a spherical wavelet coefficient (SWC) shape representation. As an application, we analyze two brain structures, the caudate nucleus and the hippocampus, and show that the results nicely complement those obtained with shape analysis using a sampled point representation (SPHARM-PDM). We used the UNC pipeline to pre-process the images, and for each triangulated SPHARM-PDM surface a spherical wavelet description is computed. We then use the UNC statistical toolbox to analyze differences between two groups of surfaces described by the features of choice, namely the 3D spherical wavelet coefficients. This year, we conducted statistical shape analysis of the two brain structures and compared the results to shape analysis using a SPHARM-PDM representation.&lt;br /&gt;
&lt;br /&gt;
* Population Analysis of Anatomical Variability&lt;br /&gt;
&lt;br /&gt;
In contrast to shape-based segmentation, which utilizes a statistical model of the shape variability in one population (typically based on Principal Component Analysis), we are interested in identifying and characterizing differences between two sets of shape examples. We use a discriminative framework to characterize the differences in shape by training a classifier function and studying its sensitivity to small perturbations in the input data. An additional benefit is that the resulting classifier function can be used to label new examples into one of the two populations, e.g., for early detection in population screening or prediction in longitudinal studies. We have implemented stand-alone code for training a classifier, jackknifing, and permutation testing, and are currently porting the software to ITK. We have also started exploring alternative, surface-based descriptors that promise to improve our ability to detect and characterize subtle differences in the shape of anatomical structures due to diseases such as schizophrenia.&lt;br /&gt;
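The permutation-testing step can be sketched as follows; the nearest-group-mean classifier here is a deliberately simple stand-in for the trained classifier function (an assumption for illustration):&lt;br /&gt;

```python
import numpy as np

def classifier_permutation_test(x, y, n_perm=500, seed=0):
    """Permutation test for a discriminative group-difference analysis:
    train a simple linear classifier (nearest group mean), then compare
    its accuracy against accuracies obtained after shuffling the group
    labels. x: (n_samples, n_features), y: binary labels in {0, 1}."""
    rng = np.random.default_rng(seed)

    def accuracy(labels):
        mu0 = x[labels == 0].mean(axis=0)
        mu1 = x[labels == 1].mean(axis=0)
        d0 = ((x - mu0) ** 2).sum(axis=1)
        d1 = ((x - mu1) ** 2).sum(axis=1)
        return ((d1 < d0).astype(int) == labels).mean()

    observed = accuracy(y)
    null = np.array([accuracy(rng.permutation(y)) for _ in range(n_perm)])
    p_value = (1 + (null >= observed).sum()) / (n_perm + 1)
    return observed, p_value
```

A small p-value indicates the classifier separates the two shape populations better than chance, which is the evidence of group difference the analysis seeks.&lt;br /&gt;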
&lt;br /&gt;
* Shape Analysis with Overcomplete Wavelets&lt;br /&gt;
&lt;br /&gt;
In this work, we extend Euclidean wavelets to the sphere. The resulting over-complete spherical wavelets are invariant to rotation of the spherical image parameterization. We apply the over-complete spherical wavelet to cortical folding development and obtain significantly more consistent results, as well as improved sensitivity, compared with the previously used bi-orthogonal spherical wavelet. In particular, we are able to detect developmental asymmetry between the left and right hemispheres.&lt;br /&gt;
&lt;br /&gt;
* Shape-based Segmentation and Registration&lt;br /&gt;
&lt;br /&gt;
When there is little or no contrast along the boundaries between regions, standard image segmentation algorithms perform poorly, and segmentation is done manually using prior knowledge of the shape and relative location of the underlying structures. We have proposed an automated approach guided by covariant shape deformations of neighboring structures, which serve as an additional source of prior knowledge. Captured by a shape atlas, these deformations are transformed into a statistical model using the logistic function. The mapping between atlas and image space, the structure boundaries, anatomical labels, and image inhomogeneities are estimated simultaneously within an expectation-maximization formulation of the maximum a posteriori (MAP) estimation problem. These results are then fed into an Active Mean Field approach, which treats them as priors for a Mean Field approximation with a curve-length prior. Compared to thresholding the initial likelihoods, our method filters out noise, and because it naturally evolves families of curves it captures multiple structures, as in the brain, where both the major brain compartments and subcortical structures are obtained. The algorithm is implemented in 3D Slicer Version 2.6, and a beta version is available in 3D Slicer Version 3.&lt;br /&gt;
&lt;br /&gt;
* Spherical Wavelets&lt;br /&gt;
&lt;br /&gt;
In this project, we apply a spherical wavelet transformation to extract shape features of cortical surfaces reconstructed from magnetic resonance images (MRI) of a set of subjects. The spherical wavelet transformation can characterize the underlying functions locally in both space and frequency, in contrast to spherical harmonics, which have a global basis set. We perform principal component analysis (PCA) on these wavelet shape features to study patterns of shape variation within a normal population from coarse to fine resolution. In addition, we study the development of cortical folding in newborns using the Gompertz model in the wavelet domain, allowing us to characterize the order of development of large-scale and finer folding patterns independently. We develop an efficient method to estimate the regularized Gompertz model based on the Broyden–Fletcher–Goldfarb–Shanno (BFGS) approximation. Promising results are presented using both PCA and the folding development model in the wavelet domain. The cortical folding development model provides quantitative anatomical information regarding macroscopic cortical folding development and may be of potential use as a biomarker for early diagnosis of neurological deficits in newborns.&lt;br /&gt;
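As an illustration of the Gompertz growth model f(t) = a·exp(-b·exp(-c·t)) used for folding development, the sketch below fits b and c by linearization, assuming the asymptote a has been estimated separately (e.g., from the plateau); the project instead fits the regularized model with a BFGS approximation:&lt;br /&gt;

```python
import numpy as np

def gompertz(t, a, b, c):
    """Gompertz growth curve: approaches asymptote a as t grows."""
    return a * np.exp(-b * np.exp(-c * t))

def fit_gompertz_linearized(t, y, a):
    """Fit b and c of the Gompertz model given the asymptote a, by
    linearizing log(-log(y / a)) = log(b) - c * t and solving an
    ordinary least-squares line fit."""
    z = np.log(-np.log(y / a))
    slope, intercept = np.polyfit(t, z, 1)
    return np.exp(intercept), -slope  # b, c
```

The linearization is exact for noiseless data, which makes it a convenient initializer for the full nonlinear (BFGS-style) fit of all three parameters.&lt;br /&gt;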
&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
* MIT: Polina Golland, Kilian Pohl, Sandy Wells, Eric Grimson, Mert R. Sabuncu&lt;br /&gt;
* UNC: Martin Styner, Ipek Oguz, Xavier Barbero &lt;br /&gt;
* Utah: Ross Whitaker, Guido Gerig, Suyash Awate, Tolga Tasdizen, Tom Fletcher, Joshua Cates, Miriah Meyer &lt;br /&gt;
* GaTech: Allen Tannenbaum, John Melonakos, Vandana Mohan, Tauseef ur Rehman, Shawn Lankton, Samuel Dambreville, Yi Gao, Romeil Sandhu, Xavier Le Faucheur, James Malcolm &lt;br /&gt;
* Isomics: Steve Pieper &lt;br /&gt;
* GE: Bill Lorensen, Jim Miller &lt;br /&gt;
* Kitware: Luis Ibanez, Karthik Krishnan&lt;br /&gt;
* UCLA: Arthur Toga, Michael J. Pan, Jagadeeswaran Rajendiran &lt;br /&gt;
* BWH: Sylvain Bouix, Motoaki Nakamura, Min-Seong Koo, Martha Shenton, Marc Niethammer, Jim Levitt, Yogesh Rathi, Marek Kubicki, Steven Haker&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:StructuralImageAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==fMRI Analysis (Golland)==&lt;br /&gt;
===Progress===&lt;br /&gt;
One of the major goals in analysis of fMRI data is the detection of&lt;br /&gt;
functionally homogeneous networks in the brain. Over the past year, we&lt;br /&gt;
demonstrated a method for identifying large-scale networks in brain&lt;br /&gt;
activation that simultaneously estimates the optimal representative&lt;br /&gt;
time courses that summarize the fMRI data well and the partition of&lt;br /&gt;
the volume into a set of disjoint regions that are best explained by&lt;br /&gt;
these representative time courses. &lt;br /&gt;
&lt;br /&gt;
In the classical functional connectivity analysis, networks of&lt;br /&gt;
interest are defined based on correlation with the mean time course of&lt;br /&gt;
a user-selected `seed' region. Further, the user has to also specify a&lt;br /&gt;
subject-specific threshold at which correlation values are deemed&lt;br /&gt;
significant. In this project, we simultaneously estimate the optimal&lt;br /&gt;
representative time courses that summarize the fMRI data well and the&lt;br /&gt;
partition of the volume into a set of disjoint regions that are best&lt;br /&gt;
explained by these representative time courses. This approach to&lt;br /&gt;
functional connectivity analysis offers two advantages. First, it&lt;br /&gt;
removes the sensitivity of the analysis to the details of the seed&lt;br /&gt;
selection. Second, it substantially simplifies group analysis by&lt;br /&gt;
eliminating the need for the subject-specific threshold. Our&lt;br /&gt;
experimental results indicate that the functional segmentation&lt;br /&gt;
provides a robust, anatomically meaningful and consistent model for&lt;br /&gt;
functional connectivity in fMRI.&lt;br /&gt;
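The alternating estimation described above can be sketched as a k-means-style loop over representative time courses and voxel assignments (a simplification of the actual probabilistic model):&lt;br /&gt;

```python
import numpy as np

def segment_time_courses(data, n_networks=3, n_iter=30):
    """Alternating estimation of representative fMRI time courses and a
    disjoint partition of voxels. data: (n_voxels, n_timepoints), each
    row a voxel's time course. Uses a simple deterministic init (evenly
    spaced rows) so results are reproducible."""
    init_idx = np.linspace(0, len(data) - 1, n_networks).astype(int)
    reps = data[init_idx].copy()
    labels = np.zeros(len(data), dtype=int)
    for _ in range(n_iter):
        # assign each voxel to the best-matching representative course
        d2 = ((data[:, None, :] - reps[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # re-estimate each representative as the mean of its voxels
        for k in range(n_networks):
            if np.any(labels == k):
                reps[k] = data[labels == k].mean(axis=0)
    return labels, reps
```

Unlike seed-based analysis, no seed region or per-subject correlation threshold appears anywhere in the loop; the partition and the representative time courses are estimated jointly.&lt;br /&gt;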
&lt;br /&gt;
We are currently exploring the applications of this methodology to&lt;br /&gt;
characterizing connectivity in the rest-state data in clinical&lt;br /&gt;
populations. We are also comparing the empirical findings with the&lt;br /&gt;
results of ICA decomposition, which is commonly used for data-driven&lt;br /&gt;
fMRI analysis. Our goal in this study is to identify differences in&lt;br /&gt;
connectivity between the patient populations and normal controls.&lt;br /&gt;
&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
* MIT: Polina Golland, Danial Lashkari, Bryce Kim &lt;br /&gt;
* Harvard/BWH: Sylvain Bouix, Martha Shenton, Marek Kubicki&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:fMRIAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==NA-MIC Kit Theme (Schroeder)==&lt;br /&gt;
===Progress===&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
* Kitware - Will Schroeder (Core 2 PI), Sebastien Barre, Luis Ibanez, Bill Hoffman&lt;br /&gt;
* GE - Jim Miller, Xiaodong Tao&lt;br /&gt;
* Isomics - Steve Pieper&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC-Kit here on the NA-MIC wiki].&lt;br /&gt;
==Other Projects==&lt;br /&gt;
Any Project(s) not covered by the 8 sections above&lt;br /&gt;
&lt;br /&gt;
==Highlights (Schroeder)==&lt;br /&gt;
===EM Segmenter or TBD===&lt;br /&gt;
===DTI progress or TBD===&lt;br /&gt;
===Outreach (Gollub)===&lt;br /&gt;
&lt;br /&gt;
NAMIC outreach is a joint effort of Cores 4, 5 and 6. The various mechanisms by which we ensure that the tools developed by NAMIC are rapidly and successfully deployed to the widest possible extent within the scientific community are closely integrated. This begins with the immediate posting of all software tools, interim updates, and associated documentation via the NAMIC and Slicer wiki pages (links). The concerted effort to provide a harmonious visualization and analysis platform (Slicer 3) that enables the integration of the software algorithms of all Core 1 laboratories drives the sequence of development of training materials. With the January 2008 beta release of Slicer 3, we prepared the first of the Slicer 3 based PowerPoint tutorials, which guide new users through the process of loading, interacting with, and saving data in Slicer 3. Given the intense and successful effort at engineering this platform to facilitate the integration of new command-line image analysis modules, our second tutorial targeted software developers. The &amp;quot;Hello World&amp;quot; tutorial guides a programmer step-by-step through the process of integrating a command-line tool into Slicer 3. Both tutorials are available via the web (link). They have been thoroughly tested by using them in large workshops (see next) to ensure that they are robust across platforms (Linux, Mac, PC) and can be used successfully by users with a wide range of training backgrounds.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In June 2007, as a satellite event to the international Organization for Human Brain Mapping annual meeting in Chicago, IL, we ran an 8-hour workshop on the analysis of Diffusion Imaging Data (link); it was our final Slicer workshop based on the Slicer 2.7 release. The workshop filled rapidly after posting; the 50 participants represented 9 countries from around the world, 14 states within the US, and 40 different laboratories, including 2 NIH institutes. The single &amp;quot;no-show&amp;quot; was due to a European flight cancellation. The attendees, with backgrounds in basic or clinical neurosciences, physics, image processing, or computer science, and ranging from full professors to new graduate students, were very comfortable learning together. The feedback from the workshop attendees was uniformly positive, with 100% reporting that they would recommend the workshop to others and 50% planning to apply the tools and information they learned to their own work.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In January 2008 we debuted the &amp;quot;Hello World&amp;quot; tutorial at the NAMIC AHM in Salt Lake City to an audience of our project members and collaborators. Feedback from this very constructive presentation was used to make significant improvements in the presentation and delivery of the material. In February 2008 we debuted the user tutorial at a workshop hosted by the Surgical Planning Laboratory at BWH; again, this presentation was used to make significant improvements in the presentation and delivery of the material. In April 2008 we ran an all-day workshop, hosted by UNC (get details right), for users and developers that incorporated both tutorials. It was attended by approximately 20 individuals from a wide range of backgrounds. Time was taken to ensure that all participants gained sufficient understanding of the new software to ensure their successful use of it following the workshop.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
This year saw the publication of a peer-reviewed manuscript that describes the NAMIC approach to outreach, including our multi-disciplinary approach, our integration of theory into practice as driven by a clinical goal, and the translation of concepts into skills through interactive instructor-led training sessions (Pujol S, Kikinis R, Gollub R: Lowering the barriers inherent in translating advances in neuroimage analysis to clinical research applications, Academic Radiology 15: 114-118, 2008, add link to Publication DB).&lt;br /&gt;
* Text here about Project Events 5 &amp;amp; 6 from Tina if not already included elsewhere.&lt;br /&gt;
* Text here about the MICCAI Open Source Workshop if not already included elsewhere (Steve?)&lt;br /&gt;
* Slicer IGT event December 2007 (tina?)&lt;br /&gt;
* Wiki to web&lt;br /&gt;
* Impact as measured by number of downloads of tutorial materials (help someone)&lt;br /&gt;
* Should the DTI tractography validation project be written up somewhere, if so where?  I will do it if it isn't already assigned.&lt;br /&gt;
&lt;br /&gt;
==Impact and Value to Biocomputing (Miller)==&lt;br /&gt;
NA-MIC impacts Biocomputing through a variety of mechanisms.  First,&lt;br /&gt;
NA-MIC produces scientific results, methodologies, workflows,&lt;br /&gt;
algorithms, imaging platforms, and software engineering tools and&lt;br /&gt;
paradigms in an open environment that contributes directly to the body of&lt;br /&gt;
knowledge available to the field. Second, NA-MIC science and&lt;br /&gt;
technology enables the entire medical imaging community to build on&lt;br /&gt;
NA-MIC results, methods, and techniques, to concentrate on the new&lt;br /&gt;
science instead of developing supporting infrastructure, to leverage&lt;br /&gt;
NA-MIC scientists and engineers to adapt NA-MIC technology to new&lt;br /&gt;
problem domains, and to leverage NA-MIC infrastructure to distribute&lt;br /&gt;
their own technology to a larger community.&lt;br /&gt;
&lt;br /&gt;
===Impact within the Center===&lt;br /&gt;
Within the center, NA-MIC has formed a community around its software&lt;br /&gt;
engineering tools, imaging platforms, algorithms, and clinical&lt;br /&gt;
workflows. The NA-MIC calendar includes the All Hands Meeting and&lt;br /&gt;
Winter Project Week, the Spring Algorithm Meeting, the Summer Project&lt;br /&gt;
Week, Slicer3 Mini-Retreats, Core Site Visits, Training Workshops, and weekly telephone&lt;br /&gt;
conferences.&lt;br /&gt;
&lt;br /&gt;
The NA-MIC software engineering tools (CMake, Dart, CTest, CPack) have&lt;br /&gt;
enabled the development and distribution of a cross-platform, nightly&lt;br /&gt;
tested, end-user application, Slicer3, that is a complex union of&lt;br /&gt;
novel application code, visualization tools (VTK), imaging libraries&lt;br /&gt;
(ITK, TEEM), user interface libraries (Tk, KWWidgets), and scripting&lt;br /&gt;
languages (TCL, Python). The NA-MIC software engineering tools have been&lt;br /&gt;
essential in the development and distribution of the Slicer3 imaging&lt;br /&gt;
platform to the NA-MIC community.&lt;br /&gt;
&lt;br /&gt;
NA-MIC's end-user application, Slicer3, supports the research within&lt;br /&gt;
NA-MIC by providing a base application for visualization and data&lt;br /&gt;
management. Slicer3 also supports the research within NA-MIC by&lt;br /&gt;
providing plugin mechanisms which allow researchers to quickly and&lt;br /&gt;
easily integrate and distribute their technology with Slicer3. Slicer3&lt;br /&gt;
is available to all center participants and the external community&lt;br /&gt;
through its source code repository, official binary releases, and&lt;br /&gt;
unofficial nightly binary snapshots.&lt;br /&gt;
&lt;br /&gt;
NA-MIC drives the development of platforms and algorithms through the&lt;br /&gt;
needs and research of its DBPs. Each DBP has selected specific&lt;br /&gt;
workflows and roadmaps as focal points for development with a goal of&lt;br /&gt;
providing the community with complete end-to-end solutions using&lt;br /&gt;
NA-MIC tools. The community will be able to reproduce these workflows&lt;br /&gt;
and roadmaps in their own research programs.&lt;br /&gt;
&lt;br /&gt;
NA-MIC algorithms are designed and used to address specific needs of&lt;br /&gt;
the DBPs. Multiple solution paths are explored and compared within&lt;br /&gt;
NA-MIC, resulting in recommendations to the field. The NA-MIC&lt;br /&gt;
algorithm groups collaborate and orchestrate the solutions to the&lt;br /&gt;
DBP workflows and roadmaps.&lt;br /&gt;
&lt;br /&gt;
===Impact within NIH Funded Research===&lt;br /&gt;
Within NIH funded research, NA-MIC is the NCBC collaborating center for three R01's: &amp;quot;Automated FE Mesh Development&amp;quot;, &amp;quot;Measuring Alcohol and Stress Interactions with Structural and Perfusion MRI&amp;quot;, and &amp;quot;An Integrated System for Image-Guided Radiofrequency Ablation of Liver Tumors&amp;quot;. Several other proposals have been submitted and are under&lt;br /&gt;
evaluation for the &amp;quot;Collaborations with NCBC PAR&amp;quot;. NA-MIC also&lt;br /&gt;
collaborates on the Slicer3 platform with the NIH funded Neuroimage&lt;br /&gt;
Analysis Center and the National Center for Image-Guided Therapy. The&lt;br /&gt;
NIH funded &amp;quot;BRAINS Morphology and Image Analysis&amp;quot; project is also&lt;br /&gt;
leveraging NA-MIC and Slicer3 technology. NA-MIC collaborates with the&lt;br /&gt;
NIH funded Neuroimaging Informatics Tools and Resources Clearinghouse&lt;br /&gt;
on distribution of Slicer3 plugin modules.&lt;br /&gt;
&lt;br /&gt;
===National and International Impact===&lt;br /&gt;
NA-MIC events and tools garner national and international interest.&lt;br /&gt;
Over 100 researchers participated in the NA-MIC All Hands Meeting and&lt;br /&gt;
Winter Project Week in January 2008. Many of these participants were&lt;br /&gt;
from outside of NA-MIC, attending the meetings to gain access to the&lt;br /&gt;
NA-MIC tools and researchers. These external researchers are&lt;br /&gt;
contributing ideas and technology back into NA-MIC. In fact, a&lt;br /&gt;
breakout session at the Winter Project Week on &amp;quot;Geometry and Topology&lt;br /&gt;
Processing of Meshes&amp;quot; was organized by four researchers from outside&lt;br /&gt;
of NA-MIC.&lt;br /&gt;
&lt;br /&gt;
Components of the NA-MIC kit are used globally.  The software&lt;br /&gt;
engineering tools of CMake, Dart 2 and CTest are used by many open&lt;br /&gt;
source projects and commercial applications. For example, the K&lt;br /&gt;
Desktop Environment (KDE) for Linux and Unix workstations uses CMake&lt;br /&gt;
and Dart. KDE is one of the largest open source projects in the&lt;br /&gt;
world. Many open source projects and commercial products are&lt;br /&gt;
benefiting from the NA-MIC related contributions to ITK and&lt;br /&gt;
VTK. Finally, Slicer 3 is being used as an image analysis&lt;br /&gt;
platform in several fields outside of medical image analysis, in&lt;br /&gt;
particular, biological image analysis, astronomy, and industrial&lt;br /&gt;
inspection.&lt;br /&gt;
&lt;br /&gt;
NA-MIC science is recognized by the medical imaging community. Over&lt;br /&gt;
100 NA-MIC related publications are listed on PubMed. Many of these&lt;br /&gt;
publications are in the most prestigious journals and conferences in the&lt;br /&gt;
field. Portions of the DBP workflows and roadmaps are already being&lt;br /&gt;
utilized by researchers in the broader community and in the&lt;br /&gt;
development of commercial products.&lt;br /&gt;
&lt;br /&gt;
NA-MIC sponsored several events to promote NA-MIC tools and&lt;br /&gt;
methodologies.  NA-MIC co-sponsored the &amp;quot;Third Annual Open Source&lt;br /&gt;
Workshop&amp;quot; at the Medical Image Computing and Computer-Assisted&lt;br /&gt;
Intervention (MICCAI) 2007 conference.  The proceedings of the&lt;br /&gt;
workshop are published on the electronic Insight Journal, another&lt;br /&gt;
NIH-funded activity. NA-MIC sponsored three training workshops on&lt;br /&gt;
NA-MIC tools for the Biocomputing community in this fiscal year and&lt;br /&gt;
plans to hold sessions at upcoming MICCAI and RSNA conferences.&lt;br /&gt;
&lt;br /&gt;
==NA-MIC Timeline (Whitaker)==&lt;br /&gt;
&lt;br /&gt;
==Appendix A Publications (Kapur)==&lt;br /&gt;
These will be mined from the SPL publications database.  All core PIs need to ensure that all NA-MIC publications are in the publications database by May 15.&lt;br /&gt;
&lt;br /&gt;
==Appendix B EAB Report and Response (Kapur)==&lt;br /&gt;
===EAB Report===&lt;br /&gt;
===Response to EAB Report===&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2008_Annual_Scientific_Report&amp;diff=24628</id>
		<title>2008 Annual Scientific Report</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2008_Annual_Scientific_Report&amp;diff=24628"/>
		<updated>2008-05-15T15:32:21Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Clinical Component (Fichtinger) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[2008_Progress_Report]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Guidelines for preparation=&lt;br /&gt;
&lt;br /&gt;
*[[2008_Progress_Report#Scientific Report Timeline]] - Main point is that May 15 is the date by which all sections below need to be completed.  No extensions are possible.&lt;br /&gt;
*DBPs - If there is work outside of the roadmap projects that you would like to report, you are welcome to create a separate section for it under &amp;quot;Other&amp;quot;.  &lt;br /&gt;
*The outline for this report is similar to the 2007 report, which is provided here for reference: [[2007_Annual_Scientific_Report]].&lt;br /&gt;
*In preparing summaries for each of the 8 topics in this report, please leverage the detailed pages for projects provided here: [[NA-MIC_Internal_Collaborations]].&lt;br /&gt;
*Publications will be mined from the SPL publications database. All core PIs need to ensure that all NA-MIC publications are in the publications database by May 15.&lt;br /&gt;
&lt;br /&gt;
=Introduction (Tannenbaum)=&lt;br /&gt;
&lt;br /&gt;
The National Alliance for Medical Imaging Computing (NA-MIC) is now in its fourth year. This Center comprises a multi-institutional, interdisciplinary team of computer scientists, software engineers, and medical investigators who have come together to develop and apply computational tools for the analysis and visualization of medical imaging data. A further purpose of the Center is to provide infrastructure and environmental support for the development of computational algorithms and open source technologies, and to oversee the training and dissemination of these tools to the medical research community. The driving biological projects (DBPs) for the Center's first three years were inspired by schizophrenia research. In the fourth year, new DBPs have been added. Three are centered around diseases of the brain: (a) brain lesion analysis in neuropsychiatric systemic lupus erythematosus; (b) a study of cortical thickness for autism; and (c) stochastic tractography for VCFS. In a new direction, we have added a DBP on the prostate: brachytherapy needle positioning robot integration.&lt;br /&gt;
&lt;br /&gt;
We briefly summarize the work of NAMIC during the four years of its existence. In year one of the Center, alliances were forged among the cores and constituent groups in order to integrate the efforts of the cores and to define the kinds of tools needed for specific imaging applications. The second year emphasized the identification of the key research thrusts that cut across cores and were driven by the needs and requirements of the DBPs. This led to the formulation of the Center's four main themes: Diffusion Tensor Analysis, Structural Analysis, Functional MRI Analysis, and the integration of newly developed tools into the NA-MIC Tool Kit. The third year of Center activity was devoted to continuing these collaborative efforts in order to deliver solutions for the various brain-oriented DBPs.&lt;br /&gt;
&lt;br /&gt;
Year four has seen progress with the work of our new DBPs. As alluded to above, these include work on neuropsychiatric disorders such as Systemic Lupus Erythematosus (MIND Institute, University of New Mexico), Velocardiofacial Syndrome (Harvard), and Autism (University of North Carolina, Chapel Hill), as well as the prostate interventional work (Johns Hopkins and Queen's Universities). We already have a number of publications, as indicated on our publications page, and software development is continuing as well.&lt;br /&gt;
&lt;br /&gt;
In the next section (Section 3), we summarize this year's progress on the four roadmap projects listed above: Section 3.1 stochastic tractography for Velocardiofacial Syndrome, Section 3.2 brachytherapy needle positioning for the prostate, Section 3.3 brain lesion analysis in neuropsychiatric systemic lupus erythematosus, and Section 3.4 cortical thickness for autism.  Next, in Section 4, we describe recent work on the four infrastructure topics. These include: Diffusion Image Analysis (Section 4.1), Structural Analysis (Section 4.2), Functional MRI Analysis (Section 4.3), and the NA-MIC Toolkit (Section 4.4).  In Section 4.5, we outline some of the other key projects; in Section 4.6, some key highlights, including the integration of the EM Segmenter into Slicer; and in Section 4.7, the impact of biocomputing at three different levels: within the Center, within the NIH-funded research community, and externally to a national and international community. The final section of this report, Section 4.8, provides a timeline of Center activities.&lt;br /&gt;
&lt;br /&gt;
=Clinical Roadmap Projects=&lt;br /&gt;
==Roadmap Project: Stochastic Tractography for VCFS (Kubicki)==&lt;br /&gt;
===Overview (Kubicki)===&lt;br /&gt;
The goal of this project is to create an end-to-end application that is useful for evaluating anatomical connectivity between segmented cortical regions of the brain. The ultimate goal of our program is to understand anatomical connectivity similarities and differences between the genetically related schizophrenia and velocardiofacial syndrome (VCFS). Thus we plan to use the &amp;quot;stochastic tractography&amp;quot; tool for the analysis of abnormalities in the integrity, or connectivity, provided by the arcuate fasciculus, a fiber bundle involved in language processing, in schizophrenia and VCFS.&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Golland)===&lt;br /&gt;
At the core of this project is the stochastic tractography algorithm&lt;br /&gt;
developed and implemented in collaboration between MIT and&lt;br /&gt;
BWH. Stochastic Tractography is a Bayesian approach to estimating&lt;br /&gt;
nerve fiber tracts from DTI images.&lt;br /&gt;
&lt;br /&gt;
We first use the diffusion tensor at each voxel in the volume to&lt;br /&gt;
construct a local probability distribution for the fiber direction&lt;br /&gt;
around the principal direction of diffusion. We then sample the tracts&lt;br /&gt;
between two user-selected ROIs, by simulating a random walk between&lt;br /&gt;
the regions, based on the local transition probabilities inferred from&lt;br /&gt;
the DTI image.&lt;br /&gt;
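In outline, the sampling step described above can be sketched in a few lines of Python; the constant direction field, function names, and parameters below are illustrative toys, not the MIT/BWH implementation:&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tract(start, principal_dir, n_steps=50, step=1.0, kappa=0.2):
    """One random-walk sample: at each position, draw a direction from a
    distribution centered on the local principal diffusion direction,
    then advance by one step."""
    pos = np.asarray(start, dtype=float)
    path = [pos.copy()]
    for _ in range(n_steps):
        d = principal_dir(pos)                  # local principal direction
        d = d + kappa * rng.standard_normal(3)  # dispersion around it
        d = d / np.linalg.norm(d)
        pos = pos + step * d
        path.append(pos.copy())
    return np.array(path)

# Toy field: fibers run along x everywhere; a real run would read the
# principal eigenvector of the diffusion tensor at each voxel.
tract = sample_tract([0.0, 0.0, 0.0], lambda p: np.array([1.0, 0.0, 0.0]))
```

Repeating the sampling many times yields the collection of fibers over which FA statistics are computed.&lt;br /&gt;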
&lt;br /&gt;
The resulting collection of fibers and the associated FA values&lt;br /&gt;
provide useful statistics on the properties of connections between the&lt;br /&gt;
two regions. To constrain the sampling process to the relevant white&lt;br /&gt;
matter region, we use atlas-based segmentation to label ventricles and&lt;br /&gt;
gray matter and to exclude them from the search space. As such, this&lt;br /&gt;
step relies heavily on the registration and segmentation functionality&lt;br /&gt;
in Slicer.&lt;br /&gt;
&lt;br /&gt;
Over the last year, we first tested the algorithm on the dataset of&lt;br /&gt;
schizophrenia subjects, acquired at 1.5T, that was already available&lt;br /&gt;
to NAMIC. This step allowed us to optimize the algorithm for our&lt;br /&gt;
dataset, as well as to develop a data analysis pipeline that would&lt;br /&gt;
then be easily transferable to other image sets and structures.&lt;br /&gt;
&lt;br /&gt;
The next step, also accomplished this past year, was to apply the&lt;br /&gt;
algorithm to the new, higher-resolution NAMIC dataset and to study&lt;br /&gt;
smaller white matter connections, including the cingulum bundle,&lt;br /&gt;
arcuate fasciculus, uncinate fasciculus, and internal capsule. This&lt;br /&gt;
step was completed and the data presented at the Santa Fe meeting in&lt;br /&gt;
October 2007.&lt;br /&gt;
&lt;br /&gt;
Upon completion of the testing phase, we started analysis of the&lt;br /&gt;
arcuate fasciculus, a language-related fiber bundle, in the new 3T&lt;br /&gt;
high-resolution dataset. Our current work focuses on improving the&lt;br /&gt;
parameterization of the tracts in order to obtain FA measurements&lt;br /&gt;
along them.&lt;br /&gt;
&lt;br /&gt;
===Engineering Component (Davis)===&lt;br /&gt;
The Stochastic Tractography Slicer module has been finished and was&lt;br /&gt;
presented at the AHM in Salt Lake City. It is now part of Slicer 2.8&lt;br /&gt;
and Slicer 3, and module documentation has also been created. Current&lt;br /&gt;
engineering efforts concentrate on maintaining the module, optimizing&lt;br /&gt;
it to work with other data formats, and adding new functionality such&lt;br /&gt;
as better registration, distortion correction, and ways of extracting&lt;br /&gt;
and measuring FA along the tracts.&lt;br /&gt;
&lt;br /&gt;
===Clinical Component (Kubicki)===&lt;br /&gt;
Over the last year, we tested the algorithm on the already available&lt;br /&gt;
NAMIC dataset of schizophrenia subjects acquired at 1.5T. The anterior&lt;br /&gt;
limb of the internal capsule, a large structure connecting the&lt;br /&gt;
thalamus with the frontal lobe, was extracted and analyzed in a group&lt;br /&gt;
of 20 schizophrenics and 20 control subjects. We presented results&lt;br /&gt;
showing group differences in FA values at the ACNP symposium in&lt;br /&gt;
December 2007. Next, stochastic tractography was tested and optimized&lt;br /&gt;
for the new high-resolution DTI dataset acquired on a 3T GE magnet.&lt;br /&gt;
&lt;br /&gt;
Upon completion of the testing phase, we started analysis of the&lt;br /&gt;
arcuate fasciculus, a language-related fiber bundle, in 20 controls&lt;br /&gt;
and 20 chronic schizophrenics. For each subject, we performed the&lt;br /&gt;
white matter segmentation and extracted the regions interconnected by&lt;br /&gt;
the arcuate fasciculus (the inferior frontal and superior temporal&lt;br /&gt;
gyri), as well as another ROI to guide the tract (a &amp;quot;waypoint&amp;quot; ROI). We&lt;br /&gt;
presented the preliminary results of the probabilistic tractography&lt;br /&gt;
and the statistics of FA extracted for each tract for a small set of 7&lt;br /&gt;
patients and 12 controls at the AHM in January 2008. The full study is&lt;br /&gt;
currently underway.&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:Harvard:Brain_Segmentation_Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Brachytherapy Needle Positioning Robot Integration (Fichtinger)==&lt;br /&gt;
===Overview (Fichtinger)===&lt;br /&gt;
Numerous studies have demonstrated the efficacy of image-guided&lt;br /&gt;
needle-based therapy and biopsy in the management of prostate&lt;br /&gt;
cancer. The accuracy of traditional prostate interventions performed using&lt;br /&gt;
transrectal ultrasound (TRUS) is limited by image fidelity, needle&lt;br /&gt;
template guides, needle deflection and tissue deformation. Magnetic Resonance&lt;br /&gt;
Imaging (MRI) is an ideal modality for guiding and monitoring&lt;br /&gt;
such interventions due to its excellent visualization of the prostate, its&lt;br /&gt;
sub-structure and surrounding tissues. &lt;br /&gt;
&lt;br /&gt;
We have designed a comprehensive robotic assistant system that allows prostate biopsy and brachytherapy&lt;br /&gt;
procedures to be performed entirely inside a 3T closed MRI scanner. The current system applies a transrectal approach to the prostate. An MRI-compatible manipulator is equipped with a steerable needle&lt;br /&gt;
guide and an endorectal imaging coil, both tuned to 3T magnets and independent of any particular scanner.&lt;br /&gt;
&lt;br /&gt;
Under the NAMIC initiative, the image computing, visualization, intervention planning, and kinematic planning interface is being implemented with an open source system built on the NAMIC toolkit and its components, such as ITK.&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Tannenbaum)===&lt;br /&gt;
We have worked on both the segmentation and the registration of the prostate from MRI and ultrasound data. We explain each of the steps now.&lt;br /&gt;
&lt;br /&gt;
====Prostate Segmentation====&lt;br /&gt;
&lt;br /&gt;
We first must extract the prostate. We have considered three possible methods: a combination of Cellular Automata (CA, also known as Grow Cut) with Geometric Active Contour (GAC) methods; employing an ellipsoid to match the prostate in the 3D image; and a shape-based approach using spherical wavelets. More details are given below, and images and further details may be found at [[Projects:ProstateSegmentation|GaTech Algorithm Prostate Segmentation]].&lt;br /&gt;
&lt;br /&gt;
1. A cellular automata algorithm is used to give an initial segmentation. It begins with a rough manual initialization and then iteratively classifies all pixels into object and background until convergence. It effectively overcomes the problems of weak boundaries and inhomogeneity within the object or background. This result in turn is fed into a Geometric Active Contour for finer tuning. We are initially using the edge-based minimal surface approach (the generalization of the standard Geodesic Active Contour model), which seems to give very reasonable results. Both steps of the algorithm are implemented in 3D. An ITK Cellular Automata filter, handling N-D data, has already been completed and submitted to the NA-MIC SandBox.&lt;br /&gt;
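To illustrate the first stage, here is a minimal 2-D sketch of the Grow Cut cellular automaton on a toy two-region image; the seeds, image, and strength rule below are illustrative only (the actual ITK filter is N-D and more elaborate):&lt;br /&gt;

```python
import numpy as np

def grow_cut(image, labels, strength, n_iter=60):
    """Toy 2-D Grow Cut sketch: seeded pixels iteratively attack their
    4-neighbors; an attack succeeds when the attacker's strength,
    attenuated by the intensity difference, exceeds the defender's.
    np.roll wraps at the image border, which is acceptable for a toy."""
    img = image.astype(float)
    span = img.max() - img.min() + 1e-9
    lab = labels.copy()
    s = strength.astype(float).copy()
    for _ in range(n_iter):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ql = np.roll(lab, (dy, dx), axis=(0, 1))   # attacker labels
            qs = np.roll(s, (dy, dx), axis=(0, 1))     # attacker strengths
            qi = np.roll(img, (dy, dx), axis=(0, 1))   # attacker intensities
            attack = (1.0 - np.abs(img - qi) / span) * qs
            win = attack > s
            lab[win] = ql[win]
            s[win] = attack[win]
    return lab

# Two-region toy image with one seed per region.
image = np.zeros((6, 6)); image[:, 3:] = 100.0
labels = np.zeros((6, 6), dtype=int)
strength = np.zeros((6, 6))
labels[0, 0], strength[0, 0] = 1, 1.0   # seed in the dark region
labels[0, 5], strength[0, 5] = 2, 1.0   # seed in the bright region
seg = grow_cut(image, labels, strength)
```

Each label floods its own intensity region and is stopped where the intensity difference attenuates the attack, which is the behavior exploited for the rough initial prostate segmentation.&lt;br /&gt;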
&lt;br /&gt;
2. Spherical wavelets have proven to be a very natural way of representing 3D shapes which are compact and simply connected (topological spheres). We developed a segmentation framework using this 3D wavelet representation and a multiscale prior. The parameters of our model are the learned shape parameters based on the spherical wavelet coefficients, as well as pose parameters that accommodate shape variability due to a similarity transformation (rotation, scale, translation), which is not explicitly modeled by the shape parameters. We used a region-based energy to drive the evolution of the parametric deformable surface for segmentation. Our segmentation algorithm deforms an initial surface according to the gradient flow that minimizes the energy functional in terms of the pose and shape parameters. Additionally, the optimization method can be applied in a coarse-to-fine manner. Spherical wavelets and conformal mappings are&lt;br /&gt;
already part of the NA-MIC SandBox.&lt;br /&gt;
&lt;br /&gt;
3. The third method is very closely related to the second. It is based on the observation that the prostate may be roughly modelled as an ellipsoid. One can then employ this ellipsoid model, coupled with a local/global segmentation energy approach that we developed this year, as the basis of a segmentation procedure. Because of the local/global nature of the functional and the implicit introduction of scale, this methodology may be very useful for MRI prostate data.&lt;br /&gt;
&lt;br /&gt;
====Prostate Registration====&lt;br /&gt;
&lt;br /&gt;
The registration and segmentation elements of our algorithm are difficult to separate. Thus, for the 3D shape-driven segmentation part, the shapes must first be aligned through a conformal and area-correcting alignment process. The prostate presents a number of difficulties for traditional approaches since there are no easily discernible landmarks. On the other hand, we observed that the surface of the prostate is almost half convex and half concave. The concave region may be captured and used to register the shapes; thus we register the whole shape by registering a certain region on it. This concave region is characterized by its negative mean curvature. We treat the mean curvature as a scalar field defined on the surface, and we have extended the Chan-Vese method (in which one wants to separate the means with respect to the regions defined by the interior and exterior of the evolving active contour) to the case at hand on the prostate surface. The method is implemented in C++ and successfully extracts the concave surface region. This method could also be used to extract regions on a surface according to any feature characterized by a scalar field defined on the surface.&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the extracted region as landmarks into the registration process, instead of matching two binary images directly, we transform the binary images into a form that highlights the boundary region. This is done by applying a Gaussian function to the (narrow-band) signed distance function of the binary image. The transformed image enjoys the advantages of both the parametric and implicit representations of shapes. Namely, it has a compact description, as the parametric representation does, and, as in the implicit representation, it avoids the correspondence problem. Moreover, we incorporate the extracted concave regions into such images for registration, which leads to a better result.&lt;br /&gt;
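A small sketch of this boundary-highlighting transform on a toy binary mask follows; the brute-force distance computation and the disk shape are illustrative only (a real implementation would restrict to a narrow band):&lt;br /&gt;

```python
import numpy as np

def boundary_highlight(mask, sigma=2.0):
    """Approximate signed distance to the object boundary (brute force,
    fine for tiny grids), then a Gaussian of it: the result peaks near
    the boundary and decays away from it on both sides."""
    inside = np.argwhere(mask)
    outside = np.argwhere(~mask)
    h, w = mask.shape
    sd = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            p = np.array([y, x], dtype=float)
            if mask[y, x]:   # inside: negative distance to nearest outside pixel
                sd[y, x] = -np.min(np.linalg.norm(outside - p, axis=1))
            else:            # outside: positive distance to nearest inside pixel
                sd[y, x] = np.min(np.linalg.norm(inside - p, axis=1))
    return np.exp(-sd ** 2 / (2.0 * sigma ** 2))

yy, xx = np.mgrid[0:15, 0:15]
disk = 25 >= (yy - 7) ** 2 + (xx - 7) ** 2   # toy binary shape: a disk
hmap = boundary_highlight(disk)
```

Matching two such images aligns boundaries directly, without requiring explicit point correspondences.&lt;br /&gt;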
&lt;br /&gt;
Finally, in the past year we have developed a particle filtering approach for the general problem of registering two point sets that differ by a rigid body transformation which may be very useful for this project. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. We treat motion as a local variation in pose parameters obtained from running several iterations of the standard Iterative Closest Point (ICP) algorithm.  Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence often found in local optimizer functions used to tackle the registration task. In contrast with other techniques, this approach requires no annealing schedule, which results in a reduction in computational complexity as well as maintains the temporal coherency of the state (no loss of information).  Also, unlike most alternative approaches for point set registration, we make no geometric assumptions on the two data sets.&lt;br /&gt;
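To make the role of the inner optimizer concrete, here is a minimal 2-D sketch of the deterministic ICP core whose pose estimates the particle filter perturbs; the point sets and parameters are toy values, and the particle filter itself is not shown:&lt;br /&gt;

```python
import numpy as np

def best_rigid(P, Q):
    """Least-squares rotation and translation taking point set P onto Q
    (Kabsch/SVD solution, assuming known correspondences)."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    Vt[-1, :] *= np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    R = Vt.T @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

def icp(P, Q, n_iter=20):
    """Deterministic ICP core: alternate nearest-neighbor matching with
    the closed-form rigid fit, accumulating the composite transform."""
    R_tot, t_tot = np.eye(P.shape[1]), np.zeros(P.shape[1])
    for _ in range(n_iter):
        dists = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
        match = Q[dists.argmin(axis=1)]       # current correspondences
        R, t = best_rigid(P, match)
        P = P @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# Toy example: recover a known small rotation and translation.
theta = 0.1
R0 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
t0 = np.array([0.2, 0.1])
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [2.0, 1.0], [1.0, 3.0]])
Q = P @ R0.T + t0
R_est, t_est = icp(P, Q)
```

The particle filter treats small perturbations of such pose estimates as state samples, which widens the narrow band of convergence of this purely local iteration.&lt;br /&gt;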
&lt;br /&gt;
===Engineering Component (Hayes)===&lt;br /&gt;
===Clinical Component (Fichtinger)===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The current robotic prostate biopsy and implant system has been applied to over 50 patients. The system is being replicated for multicenter trials at Johns Hopkins (Baltimore), NIH (Bethesda), Brigham and Women's Hospital (Boston), and Princess Margaret Hospital (Toronto). Of these, NIH and Princess Margaret have completed ethics board approval and will commence trials in May 2008. The others will follow suit shortly. In the meantime, components of the underlying intervention planning and monitoring system are being replaced with NAMIC modules. The current vtk-based overall interface is being replaced with Slicer, which will accommodate existing and forthcoming vtk/itk modules. Ongoing clinical trials will seamlessly absorb the NAMIC system interface, based on detailed functional equivalency tests to be conducted. (Note that most IRBs do not require resubmission of the protocol when the interface software is updated, as long as the system's functionality is guaranteed to be intact.)&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:JHU:Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Brain Lesion Analysis in Neuropsychiatric Systemic Lupus Erythematosus (Bockholt)==&lt;br /&gt;
===Overview (Bockholt)===&lt;br /&gt;
===Algorithm Component (Whitaker)===&lt;br /&gt;
===Engineering Component (Pieper)===&lt;br /&gt;
===Clinical Component (Bockholt)===&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:MIND:Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Cortical Thickness for Autism (Hazlett)== &lt;br /&gt;
===Overview (Hazlett)===&lt;br /&gt;
&lt;br /&gt;
A primary goal of the UNC DBP is to examine changes in cortical thickness in children with autism compared to typical controls.  We want to examine group differences in both local and regional cortical thickness, and we would also like to examine longitudinal changes in the cortex from ages 2-4 years.  To accomplish this goal, this project will create an end-to-end application within Slicer3 allowing individual and group analysis of regional and local cortical thickness. This workflow will then be applied to our study data (already collected).&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Styner)===&lt;br /&gt;
&lt;br /&gt;
The basic steps necessary for the cortical thickness application entail, first, tissue segmentation to separate white and gray matter regions; second, cortical thickness measurement; third, cortical correspondence to compare measurements across subjects; and finally, statistical analysis to locally compute group differences.&lt;br /&gt;
====Tissue segmentation====&lt;br /&gt;
We have successfully adapted the UNC segmentation tool called itkEMS to Slicer, which we use for segmentations of the young brain. We also created a young brain atlas for the current Slicer3 EM Segment module. Tests have been successful, and a comparative study against itkEMS has shown that further parameter optimization is needed to reach the same quality. &lt;br /&gt;
&lt;br /&gt;
====Cortical thickness measurement====&lt;br /&gt;
The UNC algorithm for the measurement of local cortical thickness, given a labeling of white matter and gray matter, has been developed into a Slicer3 external module. This module lends itself well to regional analysis of cortical thickness, but less so to local analysis due to its non-symmetric and sparse measurements. Ongoing development is focusing on a symmetric, Laplacian-based cortical thickness measure suitable for local analysis.&lt;br /&gt;
&lt;br /&gt;
====Cortical correspondence (regional)====&lt;br /&gt;
&lt;br /&gt;
For regional correspondence, an existing lobar parcellation atlas is deformably registered using a B-spline registration tool. First tests have been very promising, and the release of the corresponding Slicer 3 registration module is scheduled to be finished within the next month; the regional analysis workflow will be available at that time.&lt;br /&gt;
&lt;br /&gt;
====Cortical correspondence (local)====&lt;br /&gt;
Local cortical correspondence requires a two-step process of white/gray surface inflation followed by group-wise correspondence computation. White matter surface extraction and inflation are currently achieved with an external tool; developing a Slicer 3 based solution is a goal for the next year. The group-wise correspondence step has been fully solved, and a Slicer 3 module is already available. Evaluation on real data has shown that our method outperforms the currently widely employed FreeSurfer framework. &lt;br /&gt;
&lt;br /&gt;
====Statistical analysis/Hypothesis testing====&lt;br /&gt;
Regional analysis can be done with standard statistical tools such as MANOVA, as there is a limited, relatively small number of regions. Local analysis, on the other hand, needs local non-parametric testing, multiple-comparison correction, and correlative analysis that is not routinely available. We are currently extending the current Slicer 3 module designed for statistical shape analysis to be used for this purpose, incorporating a locally applied General Linear Model and a MANCOVA-based testing framework.&lt;br /&gt;
&lt;br /&gt;
===Engineering Component (Miller, Vachet)===&lt;br /&gt;
&lt;br /&gt;
Several of the algorithms for this Clinical Roadmap project already existed in software tools utilizing ITK.  These tools have been refactored to be NA-MIC compatible and repackaged as Slicer3 plugins. Slicer3 has been extended to support this Clinical Roadmap by adding transforms as a parameter type that can be passed to and returned by plugins. Slicer3 registration and resampling modules have been refactored to produce and accept transforms as parameters. Slicer3 has also been extended to support nonlinear transformation types (B-spline and deformation fields) in its data model.&lt;br /&gt;
&lt;br /&gt;
===Clinical Component (Hazlett)===&lt;br /&gt;
So far, the clinical component of this project has involved interfacing with the algorithms and engineering teams to provide the project specifications, feedback, and data (needed for testing).  During this past year, development and programming work has proceeded satisfactorily, and we anticipate being able to test our project hypotheses about cortical thickness in autism by the end of our project period.  Therefore, the primary accomplishment of this first year has been the development and testing of methods that are necessary for this cortical thickness work pipeline.&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:UNC:Cortical_Thickness_Roadmap here on the NA-MIC wiki].&lt;br /&gt;
&lt;br /&gt;
=Four Infrastructure Topics=&lt;br /&gt;
==Diffusion Image Analysis (Gerig)==&lt;br /&gt;
===Progress===&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:DiffusionImageAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==Structural Analysis (Tannenbaum)==&lt;br /&gt;
===Progress===&lt;br /&gt;
Under Structural Analysis, the main topics of research for NAMIC are structural segmentation, registration techniques and shape analysis. These topics are correlated and research in one often finds application in another. For example, shape analysis can yield useful priors for segmentation, or segmentation and registration can provide structural correspondences for use in shape analysis and so on. &lt;br /&gt;
&lt;br /&gt;
An overview of selected progress highlights under these broad topics follows.&lt;br /&gt;
&lt;br /&gt;
Structural Segmentation&lt;br /&gt;
&lt;br /&gt;
* Directional Based Segmentation&lt;br /&gt;
We have proposed a directional segmentation framework for direction-weighted magnetic resonance imagery by augmenting the Geodesic Active Contour framework with directional information. The classical scalar conformal factor is replaced by a factor that incorporates directionality. We showed mathematically that the optimization problem is well-defined when the factor is a Finsler metric. The calculus of variations or dynamic programming may be used to find the optimal curves. This past year we have applied this methodology to extracting the anchor tract (or centerline) of neural fiber bundles. Further, we have applied it in conjunction with Bayes' rule in volumetric segmentation for extracting entire fiber bundles. We have also proposed a novel shape prior in the volumetric segmentation to extract tubular fiber bundles.&lt;br /&gt;
&lt;br /&gt;
* Stochastic Segmentation&lt;br /&gt;
&lt;br /&gt;
We have continued work this year on developing new stochastic methods for implementing curvature-driven flows for medical tasks such as segmentation. We can now generalize our results to an arbitrary Riemannian surface, which includes the geodesic active contours as a special case. We are also implementing the directional flows based on the anisotropic conformal factor described above using this stochastic methodology. Our stochastic snake models are based on the theory of interacting particle systems. This brings together the theories of curve evolution and hydrodynamic limits, and as such advances our growing use of joint methods from probability and partial differential equations in image processing and computer vision. We now have working code written in C++ for the two-dimensional case and have worked out the stochastic model of the general geodesic active contour model.&lt;br /&gt;
&lt;br /&gt;
* Statistical PDE Methods for Segmentation&lt;br /&gt;
&lt;br /&gt;
Our objective is to add various statistical measures into our PDE flows for medical imaging. This will allow the incorporation of global image information into the locally defined PDE framework. This year, we developed flows which can separate the distributions inside and outside the evolving contour, and we have also been including shape information in the flows. We have completed a statistically based flow for segmentation using fast marching, and the code has been integrated into Slicer. &lt;br /&gt;
&lt;br /&gt;
* Atlas Renormalization for Improved Brain MR Image Segmentation&lt;br /&gt;
&lt;br /&gt;
Atlas-based approaches can automatically identify detailed brain structures from 3-D magnetic resonance (MR) brain images. However, the accuracy often degrades when processing data acquired on a different scanner platform or pulse sequence than the data used for the atlas training. In this project, we work to improve the performance of an atlas-based whole brain segmentation method by introducing an intensity renormalization procedure that automatically adjusts the prior atlas intensity model to new input data. Validation using manually labeled test datasets shows that the new procedure improves segmentation accuracy (as measured by the Dice coefficient) by 10% or more for several structures including hippocampus, amygdala, caudate, and pallidum. The results verify that this new procedure reduces the sensitivity of the whole brain segmentation method to changes in scanner platforms and improves its accuracy and robustness, which can thus facilitate multicenter or multisite neuroanatomical imaging studies.&lt;br /&gt;
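For reference, the Dice coefficient used in the validation above measures the overlap between an automatic and a manual labeling; a minimal sketch on toy masks (the variable names are illustrative, not NA-MIC data):&lt;br /&gt;

```python
import numpy as np

def dice(a, b):
    """Dice coefficient of two binary masks: 2 * |A and B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

auto = np.zeros((4, 4), dtype=bool);   auto[1:3, 1:3] = True    # 4 voxels
manual = np.zeros((4, 4), dtype=bool); manual[1:3, 1:4] = True  # 6 voxels, 4 shared
overlap = dice(auto, manual)   # 2 * 4 / (4 + 6) = 0.8
```

A value of 1 indicates perfect overlap, so the reported 10% gains correspond to substantially closer agreement with the manual labels.&lt;br /&gt;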
&lt;br /&gt;
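The Dice coefficient used in this validation measures the volume overlap between two binary segmentations; a minimal sketch of the computation (the function name is our own, not from the project's code):&lt;br /&gt;

```python
import numpy as np

def dice(a, b):
    """Dice overlap 2|A inter B| / (|A| + |B|) between two binary label masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: count as perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A value of 1.0 indicates identical masks; the 10% improvements above refer to increases in this score.&lt;br /&gt;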
*Multiscale Shape Segmentation Techniques&lt;br /&gt;
&lt;br /&gt;
The goal of this project is to represent multiscale variations in a shape population in order to drive the segmentation of deep brain structures, such as the caudate nucleus or the hippocampus. Our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We derived a parametric active surface evolution that uses the multiscale prior coefficients as parameters of the optimization procedure, naturally incorporating the prior into segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We applied our algorithm to the caudate nucleus, a brain structure of interest in the study of schizophrenia. Our validation shows that our algorithm is computationally efficient and outperforms the Active Shape Model (ASM) algorithm by capturing finer shape details.&lt;br /&gt;
&lt;br /&gt;
Registration&lt;br /&gt;
&lt;br /&gt;
* Optimal Mass Transport Registration&lt;br /&gt;
The aim of this project is to provide a computationally efficient non-rigid/elastic image registration algorithm based on the Optimal Mass Transport theory. We use the Monge-Kantorovich formulation of the Optimal Mass Transport problem and implement the gradient flow PDE approach using multi-resolution and multi-grid techniques to speed up the convergence. We also leverage the computational power of general purpose graphics processing units available on standard desktop computing machines to exploit the inherent parallelism in our algorithm. We have implemented 2D and 3D multi-resolution registration using Optimal Mass Transport and are currently working on the registration of 3D datasets. &lt;br /&gt;
&lt;br /&gt;
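For intuition, in one dimension the Monge-Kantorovich problem has a closed-form solution: the optimal map is the monotone rearrangement obtained by matching cumulative distribution functions. A toy sketch of that special case (illustrative only, not the multi-resolution gradient-flow PDE implementation described above):&lt;br /&gt;

```python
import numpy as np

def omt_map_1d(mu, nu, x):
    """Monotone (optimal) transport map between two 1-D densities mu and nu
    sampled on the grid x, computed by matching cumulative distributions."""
    cmu = np.cumsum(np.asarray(mu, float)); cmu /= cmu[-1]
    cnu = np.cumsum(np.asarray(nu, float)); cnu /= cnu[-1]
    # For each grid point, find where the target CDF reaches the same mass.
    return np.interp(cmu, cnu, x)
```

The resulting map is monotone by construction, which is the defining property of the 1-D optimal transport solution.&lt;br /&gt;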
* Diffusion Tensor Image Processing Tools&lt;br /&gt;
	&lt;br /&gt;
We aim to provide methods for computing geodesics and distances between diffusion tensors. One goal is to provide hypothesis testing for differences between groups. This will involve interpolation techniques for diffusion tensors as weighted averages in the metric framework. We will also provide filtering and eddy current correction. This year, we developed a Slicer module for DT-MRI Rician noise removal, developed prototypes of DTI geometry and statistical packages, and began work on a general method for hypothesis testing between diffusion tensor groups. &lt;br /&gt;
&lt;br /&gt;
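One commonly used metric for distances between diffusion tensors is the Log-Euclidean distance, the Frobenius norm between matrix logarithms; a small sketch (one possible metric, not necessarily the one implemented in the Slicer module):&lt;br /&gt;

```python
import numpy as np

def log_euclidean_distance(a, b):
    """Log-Euclidean distance between two symmetric positive-definite
    diffusion tensors: ||logm(a) - logm(b)||_F."""
    def logm_spd(m):
        w, v = np.linalg.eigh(m)      # SPD: real eigendecomposition suffices
        return (v * np.log(w)) @ v.T  # log applied to the eigenvalues
    return np.linalg.norm(logm_spd(a) - logm_spd(b))
```

Interpolation as weighted averages in this framework amounts to averaging the matrix logarithms and exponentiating the result.&lt;br /&gt;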
* Point Set Rigid Registration&lt;br /&gt;
&lt;br /&gt;
We propose a particle filtering scheme for the registration of 2D and 3D point sets undergoing a rigid body transformation, incorporating stochastic dynamics to model the uncertainty of the registration process. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. In this work, we treat motion as a local variation in the pose parameters obtained from running a few iterations of the standard Iterative Closest Point (ICP) algorithm. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence as well as provide a dynamical model of uncertainty. In contrast with other techniques, our approach requires no annealing schedule, which reduces computational complexity and maintains the temporal coherency of the state (no loss of information). Also, unlike most alternative approaches for point set registration, we make no geometric assumptions on the two data sets.&lt;br /&gt;
&lt;br /&gt;
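The deterministic core that the particle filter perturbs is the closed-form least-squares rigid alignment of corresponded points, i.e. the inner step of ICP. A sketch using the standard SVD-based (Kabsch) solution (function and variable names are our own):&lt;br /&gt;

```python
import numpy as np

def rigid_align(p, q):
    """Least-squares rotation R and translation t with R @ p_i + t ~ q_i
    for corresponded point sets (the closed-form step inside ICP)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    pc, qc = p.mean(0), q.mean(0)
    h = (p - pc).T @ (q - qc)                # cross-covariance of centered sets
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    r = vt.T @ np.diag([1.0] * (p.shape[1] - 1) + [d]) @ u.T
    t = qc - r @ pc
    return r, t
```

In the particle filtering scheme above, each particle would carry a pose hypothesis around such an estimate rather than committing to the single closed-form answer.&lt;br /&gt;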
* Cortical Correspondence using Particle System&lt;br /&gt;
&lt;br /&gt;
In this project, we want to compute cortical correspondence on populations, using various features such as cortical structure, DTI connectivity, vascular structure, and functional data (fMRI). This presents a challenge because of the highly convoluted surface of the cortex, as well as the different properties of the data features we want to incorporate together. We would like to use a particle-based entropy minimizing system for the correspondence computation, in a population-based manner. This is advantageous because it does not require a spherical parameterization of the surface, and does not require the surface to be of spherical topology. It would also eventually enable correspondence computation on the subcortical structures and on the cortical surface using the same framework. To circumvent the disadvantage that particles are assumed to lie on local tangent planes, we plan to first ‘inflate’ the cortex surface. Currently, we are at the testing stage using structural data, namely point locations and sulcal depth (as computed by FreeSurfer).&lt;br /&gt;
&lt;br /&gt;
* Multimodal Atlas &lt;br /&gt;
&lt;br /&gt;
In this work, we propose and investigate an algorithm that jointly co-registers a collection of images while computing multiple templates. The algorithm, called iCluster for Image Clustering, is based on the following idea: given the templates, the co-registration problem becomes simple, reducing to a number of pairwise registration instances. On the other hand, given a collection of images that have been co-registered, an off-the-shelf clustering or averaging algorithm can be used to compute the templates. The algorithm assumes a fixed and known number of template images. We formulate the problem as a maximum likelihood estimation problem and employ a Generalized Maximum Likelihood algorithm to solve it. In the E-step, we compute membership probabilities. In the M-step, we update the template images as weighted averages of the images, where the weights are the memberships; we also update the template priors, and then perform a collection of independent pairwise registration instances. The algorithm is currently implemented in the Insight Toolkit (ITK), and we next plan to integrate it into Slicer.&lt;br /&gt;
&lt;br /&gt;
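To make the E/M alternation concrete, here is a toy iteration of the template-estimation part of such a model, with images as flat intensity vectors and the registration step omitted (the Gaussian noise model, the fixed sigma, and all names are illustrative assumptions, not the iCluster implementation):&lt;br /&gt;

```python
import numpy as np

def icluster_em_step(images, templates, sigma=1.0):
    """One EM iteration of template estimation: the E-step computes soft
    memberships of images to templates under an isotropic Gaussian model;
    the M-step re-estimates templates as membership-weighted averages."""
    x = np.asarray(images, float)      # (n_images, n_voxels)
    t = np.asarray(templates, float)   # (n_templates, n_voxels)
    # E-step: responsibilities from squared distances (log-domain for stability).
    d2 = ((x[:, None, :] - t[None, :, :]) ** 2).sum(-1)
    logw = -d2 / (2.0 * sigma ** 2)
    w = np.exp(logw - logw.max(1, keepdims=True))
    w /= w.sum(1, keepdims=True)
    # M-step: templates as membership-weighted averages of the images.
    new_t = (w.T @ x) / w.sum(0)[:, None]
    return new_t, w
```

Iterating this step (interleaved with pairwise registrations, as in the text) alternates between sharpening the memberships and refining the templates.&lt;br /&gt;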
* Groupwise Registration&lt;br /&gt;
&lt;br /&gt;
We aim to provide efficient groupwise registration algorithms for population analysis of anatomical structures. Here we extend a previously demonstrated entropy-based groupwise registration method to include a free-form deformation model based on B-splines. We provide an efficient implementation using stochastic gradient descent in a multi-resolution setting. We demonstrate the method on a set of 50 MRI brain scans and compare the results to a pairwise approach using segmentation labels to evaluate the quality of alignment. Our results indicate that increasing the complexity of the deformation model improves registration accuracy significantly, especially in cortical regions.&lt;br /&gt;
&lt;br /&gt;
Shape Analysis&lt;br /&gt;
&lt;br /&gt;
* Shape Analysis Framework Using SPHARM-PDM&lt;br /&gt;
&lt;br /&gt;
The UNC shape analysis is based on an analysis framework for objects with spherical topology, described by sampled spherical harmonics (SPHARM-PDM). The input of the proposed shape analysis is a set of binary segmentations of a single brain structure, such as the hippocampus or caudate. Group tests can be visualized by P-values and by mean difference magnitude and vector maps, as well as maps of the group covariance information. The implementation has reached a stable framework and has been disseminated to several collaborating labs within NAMIC (BWH, Georgia Tech, Utah). Current development focuses on integrating the command line tools into Slicer (v3) via the Slicer execution model. The whole shape analysis pipeline is encapsulated and accessible to the trained clinical collaborator. The current toolset distribution (via NeuroLib) now also contains open data for other researchers to evaluate their shape analysis enhancements.&lt;br /&gt;
&lt;br /&gt;
* Multiscale Shape Analysis&lt;br /&gt;
&lt;br /&gt;
We present a novel method of statistical surface-based morphometry based on the use of non-parametric permutation tests and a spherical wavelet (SWC) shape representation. As an application, we analyze two brain structures, the caudate nucleus and the hippocampus. We show that the results nicely complement those obtained with shape analysis using a sampled point representation (SPHARM-PDM). We used the UNC pipeline to pre-process the images, and for each triangulated SPHARM-PDM surface, a spherical wavelet description is computed. We then use the UNC statistical toolbox to analyze differences between two groups of surfaces described by the features of choice, namely the 3D spherical wavelet coefficients. This year, we conducted statistical shape analysis of the two brain structures and compared the results to those obtained using a SPHARM-PDM representation.&lt;br /&gt;
&lt;br /&gt;
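A minimal version of such a non-parametric permutation test on a single scalar feature (e.g. one wavelet coefficient per subject) might look like this (a sketch under a difference-of-means statistic, not the UNC statistical toolbox implementation):&lt;br /&gt;

```python
import numpy as np

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """Non-parametric permutation test: p-value for the absolute difference
    of group means under random relabeling of the pooled subjects."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(pooled[: len(a)].mean() - pooled[len(a):].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)  # small-sample-safe p-value
```

In practice one such test is run per surface location or coefficient, followed by a correction for multiple comparisons.&lt;br /&gt;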
* Population Analysis of Anatomical Variability&lt;br /&gt;
&lt;br /&gt;
In contrast to shape-based segmentation that utilizes a statistical model of the shape variability in one population (typically based on Principal Component Analysis), we are interested in identifying and characterizing differences between two sets of shape examples. We use the discriminative framework to characterize the differences in shape by training a classifier function and studying its sensitivity to small perturbations in the input data. An additional benefit is that the resulting classifier function can be used to label new examples into one of the two populations, e.g., for early detection in population screening or prediction in longitudinal studies. We have implemented standalone code for training a classifier, jackknifing, and permutation testing, and are currently porting the software to ITK. We have also started exploring alternative surface-based descriptors that promise to improve our ability to detect and characterize subtle differences in the shape of anatomical structures due to diseases such as schizophrenia.&lt;br /&gt;
&lt;br /&gt;
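One simple way to read off which features drive the discrimination is to train a Fisher linear discriminant and inspect the magnitudes of its weight vector; a sketch of this sensitivity idea (the regularization constant and names are our own choices, not the project's classifier):&lt;br /&gt;

```python
import numpy as np

def lda_direction(xa, xb):
    """Fisher linear discriminant direction between two feature groups.
    Large-magnitude components indicate features that most separate the
    groups, i.e. where the classifier is most sensitive to perturbation."""
    xa, xb = np.asarray(xa, float), np.asarray(xb, float)
    sw = np.cov(xa, rowvar=False) + np.cov(xb, rowvar=False)  # within-class scatter
    w = np.linalg.solve(sw + 1e-6 * np.eye(sw.shape[0]),      # mild regularization
                        xa.mean(0) - xb.mean(0))
    return w / np.linalg.norm(w)
```

New examples can then be labeled by projecting onto this direction and thresholding, as in the population-screening use case mentioned above.&lt;br /&gt;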
* Shape Analysis with Overcomplete Wavelets&lt;br /&gt;
&lt;br /&gt;
In this work, we extend Euclidean wavelets to the sphere. The resulting over-complete spherical wavelets are invariant to the rotation of the spherical image parameterization. We apply the over-complete spherical wavelet to cortical folding development and show significantly more consistent results as well as improved sensitivity compared with the previously used bi-orthogonal spherical wavelet. In particular, we are able to detect developmental asymmetry between the left and right hemispheres.&lt;br /&gt;
&lt;br /&gt;
*Shape based Segmentation and Registration&lt;br /&gt;
&lt;br /&gt;
When there is little or no contrast along boundaries of different regions, standard image segmentation algorithms perform poorly and segmentation is done manually using prior knowledge of shape and relative location of underlying structures. We have proposed an automated approach guided by covariant shape deformations of neighboring structures, which serve as an additional source of prior knowledge. Captured by a shape atlas, these deformations are transformed into a statistical model using the logistic function. The mapping between atlas and image space, structure boundaries, anatomical labels, and image inhomogeneities are estimated simultaneously within an expectation-maximization formulation of the maximum a posteriori (MAP) estimation problem. These results are then fed into an Active Mean Field approach, which treats them as priors for a Mean Field approximation with a curve length prior. Compared to thresholding the initial likelihoods, our method filters out noise, and it captures multiple structures, as in the brain (where both major brain compartments and subcortical structures are obtained), because it naturally evolves families of curves. The algorithm is currently implemented in 3D Slicer Version 2.6, and a beta version is available in 3D Slicer Version 3.&lt;br /&gt;
&lt;br /&gt;
*Spherical Wavelets&lt;br /&gt;
&lt;br /&gt;
In this project, we apply a spherical wavelet transformation to extract shape features of cortical surfaces reconstructed from magnetic resonance images (MRI) of a set of subjects. The spherical wavelet transformation can characterize the underlying functions in a local fashion in both space and frequency, in contrast to spherical harmonics that have a global basis set. We perform principal component analysis (PCA) on these wavelet shape features to study patterns of shape variation within a normal population from coarse to fine resolution. In addition, we study the development of cortical folding in newborns using the Gompertz model in the wavelet domain, allowing us to characterize the order of development of large-scale and finer folding patterns independently. We develop an efficient method to estimate the regularized Gompertz model based on the Broyden–Fletcher–Goldfarb–Shanno (BFGS) approximation. Promising results are presented using both PCA and the folding development model in the wavelet domain. The cortical folding development model provides quantitative anatomical information regarding macroscopic cortical folding development and may be of potential use as a biomarker for early diagnosis of neurological deficits in newborns.&lt;br /&gt;
&lt;br /&gt;
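As an illustration of the model-fitting step, a least-squares Gompertz fit using the BFGS quasi-Newton method via SciPy (the three-parameter form, the initial guess, and the unregularized loss are assumptions for the sketch, not the regularized estimator described above):&lt;br /&gt;

```python
import numpy as np
from scipy.optimize import minimize

def gompertz(t, a, b, c):
    """Gompertz growth curve y = a * exp(-b * exp(-c * t))."""
    return a * np.exp(-b * np.exp(-c * t))

def fit_gompertz(t, y):
    """Least-squares Gompertz fit using the BFGS quasi-Newton method."""
    loss = lambda p: ((gompertz(t, *p) - y) ** 2).sum()
    x0 = np.array([y.max(), 1.0, 1.0])  # crude but serviceable initialization
    return minimize(loss, x0, method="BFGS").x
```

Here each wavelet coefficient's trajectory over age would be fit separately, so the time-to-peak-growth can be compared across scales.&lt;br /&gt;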
===Key Investigators===&lt;br /&gt;
* MIT: Polina Golland, Kilian Pohl, Sandy Wells, Eric Grimson, Mert R. Sabuncu&lt;br /&gt;
* UNC: Martin Styner, Ipek Oguz, Xavier Barbero &lt;br /&gt;
* Utah: Ross Whitaker, Guido Gerig, Suyash Awate, Tolga Tasdizen, Tom Fletcher, Joshua Cates, Miriah Meyer &lt;br /&gt;
* GaTech: Allen Tannenbaum, John Melonakos, Vandana Mohan, Tauseef ur Rehman, Shawn Lankton, Samuel Dambreville, Yi Gao, Romeil Sandhu, Xavier Le Faucheur, James Malcolm &lt;br /&gt;
* Isomics: Steve Pieper &lt;br /&gt;
* GE: Bill Lorensen, Jim Miller &lt;br /&gt;
* Kitware: Luis Ibanez, Karthik Krishnan&lt;br /&gt;
* UCLA: Arthur Toga, Michael J. Pan, Jagadeeswaran Rajendiran &lt;br /&gt;
* BWH: Sylvain Bouix, Motoaki Nakamura, Min-Seong Koo, Martha Shenton, Marc Niethammer, Jim Levitt, Yogesh Rathi, Marek Kubicki, Steven Haker&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:StructuralImageAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==fMRI Analysis (Golland)==&lt;br /&gt;
===Progress===&lt;br /&gt;
One of the major goals in analysis of fMRI data is the detection of&lt;br /&gt;
functionally homogeneous networks in the brain. Over the past year, we&lt;br /&gt;
demonstrated a method for identifying large-scale networks in brain&lt;br /&gt;
activation that simultaneously estimates the optimal representative&lt;br /&gt;
time courses that summarize the fMRI data well and the partition of&lt;br /&gt;
the volume into a set of disjoint regions that are best explained by&lt;br /&gt;
these representative time courses. &lt;br /&gt;
&lt;br /&gt;
In the classical functional connectivity analysis, networks of&lt;br /&gt;
interest are defined based on correlation with the mean time course of&lt;br /&gt;
a user-selected 'seed' region. Further, the user also has to specify a&lt;br /&gt;
subject-specific threshold at which correlation values are deemed&lt;br /&gt;
significant. In this project, we simultaneously estimate the optimal&lt;br /&gt;
representative time courses that summarize the fMRI data well and the&lt;br /&gt;
partition of the volume into a set of disjoint regions that are best&lt;br /&gt;
explained by these representative time courses. This approach to&lt;br /&gt;
functional connectivity analysis offers two advantages. First, it&lt;br /&gt;
removes the sensitivity of the analysis to the details of the seed&lt;br /&gt;
selection. Second, it substantially simplifies group analysis by&lt;br /&gt;
eliminating the need for the subject-specific threshold. Our&lt;br /&gt;
experimental results indicate that the functional segmentation&lt;br /&gt;
provides a robust, anatomically meaningful and consistent model for&lt;br /&gt;
functional connectivity in fMRI.&lt;br /&gt;
&lt;br /&gt;
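A toy version of this joint estimation, using k-means on normalized time courses, alternates the two steps described above: assign each voxel to its best representative time course, then re-estimate the representatives from their assigned voxels (a sketch only; the actual method uses a probabilistic model rather than hard assignments):&lt;br /&gt;

```python
import numpy as np

def cluster_time_courses(data, k, n_iter=50):
    """Jointly estimate representative time courses (centroids) and a
    disjoint partition of voxels via k-means on normalized fMRI signals."""
    x = np.asarray(data, float)                         # (n_voxels, n_timepoints)
    x = x / np.linalg.norm(x, axis=1, keepdims=True)    # remove amplitude effects
    # Deterministic farthest-point initialization of the k centroids.
    centers = [x[0]]
    for _ in range(1, k):
        d2 = ((x[:, None, :] - np.array(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(x[d2.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        # Partition step: each voxel goes to its nearest representative.
        labels = ((x[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        # Representative step: centroids as means of their assigned voxels.
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(0)
    return labels, centers
```

Note that no seed region or correlation threshold appears anywhere in this procedure, which is the point of the approach.&lt;br /&gt;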
We are currently exploring the applications of this methodology to&lt;br /&gt;
characterizing connectivity in the rest-state data in clinical&lt;br /&gt;
populations. We are also comparing the empirical findings with the&lt;br /&gt;
results of ICA decomposition, which is commonly used for data-driven&lt;br /&gt;
fMRI analysis. Our goal in this study is to identify differences in&lt;br /&gt;
connectivity between the patient populations and normal controls.&lt;br /&gt;
&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
* MIT: Polina Golland, Danial Lashkari, Bryce Kim &lt;br /&gt;
* Harvard/BWH: Sylvain Bouix, Martha Shenton, Marek Kubicki&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:fMRIAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==NA-MIC Kit Theme (Schroeder)==&lt;br /&gt;
===Progress===&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
* Kitware - Will Schroeder (Core 2 PI), Sebastien Barre, Luis Ibanez, Bill Hoffman&lt;br /&gt;
* GE - Jim Miller, Xiaodong Tao&lt;br /&gt;
* Isomics - Steve Pieper&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC-Kit here on the NA-MIC wiki].&lt;br /&gt;
==Other Projects==&lt;br /&gt;
Any Project(s) not covered by the 8 sections above&lt;br /&gt;
&lt;br /&gt;
==Highlights(Schroeder)==&lt;br /&gt;
===EM Segmenter or TBD===&lt;br /&gt;
===DTI progress or TBD===&lt;br /&gt;
===Outreach (Gollub)===&lt;br /&gt;
&lt;br /&gt;
NAMIC outreach is a joint effort of Cores 4, 5 and 6.  The various mechanisms by which we ensure that the tools developed by NAMIC are rapidly and successfully deployed to the widest possible extent within the scientific community are closely integrated.  This begins with the immediate posting of all software tools, interim updates and associated documentation via the NAMIC and Slicer wiki pages (links).  The concerted effort to provide a unified visualization and analysis platform (Slicer 3) that enables the integration of the software algorithms of all Core 1 laboratories drives the sequence of development of training materials.  With the January 2008 release of Slicer 3 in beta format, we prepared the first of the Slicer 3 based PowerPoint tutorials that guide new users through the process of loading, interacting with, and saving data in Slicer 3.  Given the intense and successful effort at engineering this platform to facilitate the process of integrating new command-line modules of image analysis software into the platform, our second tutorial targeted software developers.  The &amp;quot;Hello World&amp;quot; tutorial guides a programmer step-by-step through the process of integrating a command line tool into Slicer 3.  Both these tutorials are available via the web (link).  These tutorials have been thoroughly tested by using them in large workshops (see next) to ensure that they are robust across platforms (Linux, Mac, PC) and can be used successfully by users across a wide range of training backgrounds.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In June 2007, as a satellite event to the international Organization for Human Brain Mapping annual meeting in Chicago, IL, we ran an 8-hour workshop on analysis of Diffusion Imaging Data (link); it was our final Slicer workshop based on the Slicer 2.7 release.  The workshop rapidly filled after posting; the 50 participants represented 9 countries from around the world, 14 states within the US, and 40 different laboratories, including 2 NIH institutes.  The single &amp;quot;no-show&amp;quot; was due to a European flight cancellation.  The attendees, with backgrounds in basic or clinical neurosciences, physics, image processing or computer science, and ranging from full professors to new graduate students, were very comfortable learning together.  The feedback from the workshop attendees was uniformly positive, with 100% reporting that they would recommend the workshop to others and 50% planning to apply the tools and information they learned to their own work.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In January 2008 we debuted the &amp;quot;Hello World&amp;quot; tutorial at the NAMIC AHM in Salt Lake City to an audience of our project members and collaborators.  Feedback from this very constructive session was used to make significant improvements in the presentation and delivery of this material.  In February 2008 we debuted the users tutorial at a workshop hosted by the Surgical Planning Laboratory at BWH.  Again, the feedback was used to make significant improvements in the presentation and delivery of the material.  In April of 2008 we ran an all-day workshop, hosted by UNC (get details right), for users and developers that incorporated both tutorials.  This was attended by approximately 20 individuals coming from a wide range of backgrounds.  Time was taken to ensure that all participants gained significant understanding of the new software, sufficient to ensure their successful use of it following the workshop.  &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
This year saw the publication of a peer-reviewed manuscript that describes the NAMIC approach to outreach including our multi-disciplinary approach, our integration of theory  into practice as driven by a clinical goal, and the translation of concepts into skills through interactive instructor led training sessions (Pujol S, Kikinis R, Gollub R: Lowering the barriers inherent in translating advances in neuroimage analysis to clinical research applications, Academic Radiology 15: 114-118, 2008, add link to Publication DB).&lt;br /&gt;
* Text here about Project Events 5 &amp;amp; 6 from Tina if not already included elsewhere.&lt;br /&gt;
* Text here about the MICCAI Open Source Workshop if not already included elsewhere (Steve?)&lt;br /&gt;
* Slicer IGT event December 2007 (tina?)&lt;br /&gt;
* Wiki to web&lt;br /&gt;
* Impact as measured by number of downloads of tutorial materials (help someone)&lt;br /&gt;
* Should the DTI tractography validation project be written up somewhere, if so where?  I will do it if it isn't already assigned.&lt;br /&gt;
&lt;br /&gt;
==Impact and Value to Biocomputing (Miller)==&lt;br /&gt;
NA-MIC impacts Biocomputing through a variety of mechanisms.  First,&lt;br /&gt;
NA-MIC produces scientific results, methodologies, workflows,&lt;br /&gt;
algorithms, imaging platforms, and software engineering tools and&lt;br /&gt;
paradigms in an open environment that contributes directly to the body of&lt;br /&gt;
knowledge available to the field. Second, NA-MIC science and&lt;br /&gt;
technology enables the entire medical imaging community to build on&lt;br /&gt;
NA-MIC results, methods, and techniques, to concentrate on the new&lt;br /&gt;
science instead of developing supporting infrastructure, to leverage&lt;br /&gt;
NA-MIC scientists and engineers to adapt NA-MIC technology to new&lt;br /&gt;
problem domains, and to leverage NA-MIC infrastructure to distribute&lt;br /&gt;
their own technology to a larger community.&lt;br /&gt;
&lt;br /&gt;
===Impact within the Center===&lt;br /&gt;
Within the center, NA-MIC has formed a community around its software&lt;br /&gt;
engineering tools, imaging platforms, algorithms, and clinical&lt;br /&gt;
workflows. The NA-MIC calendar includes the All Hands Meeting and&lt;br /&gt;
Winter Project Week, the Spring Algorithm Meeting, the Summer Project&lt;br /&gt;
Week, Slicer3 Mini-Retreats, Core Site Visits, Training Workshops, and weekly telephone&lt;br /&gt;
conferences.&lt;br /&gt;
&lt;br /&gt;
The NA-MIC software engineering tools (CMake, Dart, CTest, CPack) have&lt;br /&gt;
enabled the development and distribution of a cross-platform, nightly&lt;br /&gt;
tested, end-user application, Slicer3, that is a complex union of&lt;br /&gt;
novel application code, visualization tools (VTK), imaging libraries&lt;br /&gt;
(ITK, TEEM), user interface libraries (Tk, KWWidgets), and scripting&lt;br /&gt;
languages (TCL, Python). The NA-MIC software engineering tools have been&lt;br /&gt;
essential in the development and distribution of the Slicer3 imaging&lt;br /&gt;
platform to the NA-MIC community.&lt;br /&gt;
&lt;br /&gt;
NA-MIC's end-user application, Slicer3, supports the research within&lt;br /&gt;
NA-MIC by providing a base application for visualization and data&lt;br /&gt;
management. Slicer3 also supports the research within NA-MIC by&lt;br /&gt;
providing plugin mechanisms which allow researchers to quickly and&lt;br /&gt;
easily integrate and distribute their technology with Slicer3. Slicer3&lt;br /&gt;
is available to all center participants and the external community&lt;br /&gt;
through its source code repository, official binary releases, and&lt;br /&gt;
unofficial nightly binary snapshots.&lt;br /&gt;
&lt;br /&gt;
NA-MIC drives the development of platforms and algorithms through the&lt;br /&gt;
needs and research of its DBPs. Each DBP has selected specific&lt;br /&gt;
workflows and roadmaps as focal points for development with a goal of&lt;br /&gt;
providing the community with complete end-to-end solutions using&lt;br /&gt;
NA-MIC tools. The community will be able to reproduce these workflows&lt;br /&gt;
and roadmaps in their own research programs.&lt;br /&gt;
&lt;br /&gt;
NA-MIC algorithms are designed and used to address specific needs of&lt;br /&gt;
the DBPs. Multiple solution paths are explored and compared within&lt;br /&gt;
NA-MIC, resulting in recommendations to the field. The NA-MIC&lt;br /&gt;
algorithm groups collaborate and orchestrate the solutions to the&lt;br /&gt;
DBP workflows and roadmaps.&lt;br /&gt;
&lt;br /&gt;
===Impact within NIH Funded Research===&lt;br /&gt;
Within NIH funded research, NA-MIC is the NCBC collaborating center for three R01's: &amp;quot;Automated FE Mesh Development&amp;quot;, &amp;quot;Measuring Alcohol and Stress Interactions with Structural and Perfusion MRI&amp;quot;, and &amp;quot;An Integrated System for Image-Guided Radiofrequency Ablation of Liver Tumors&amp;quot;. Several other proposals have been submitted and are under&lt;br /&gt;
evaluation for the &amp;quot;Collaborations with NCBC PAR&amp;quot;. NA-MIC also&lt;br /&gt;
collaborates on the Slicer3 platform with the NIH funded Neuroimage&lt;br /&gt;
Analysis Center and the National Center for Image-Guided Therapy. The&lt;br /&gt;
NIH funded &amp;quot;BRAINS Morphology and Image Analysis&amp;quot; project is also&lt;br /&gt;
leveraging NA-MIC and Slicer3 technology. NA-MIC collaborates with the&lt;br /&gt;
NIH funded Neuroimaging Informatics Tools and Resources Clearinghouse&lt;br /&gt;
on distribution of Slicer3 plugin modules.&lt;br /&gt;
&lt;br /&gt;
===National and International Impact===&lt;br /&gt;
NA-MIC events and tools garner national and international interest.&lt;br /&gt;
Over 100 researchers participated in the NA-MIC All Hands Meeting and&lt;br /&gt;
Winter Project Week in January 2008. Many of these participants were&lt;br /&gt;
from outside of NA-MIC, attending the meetings to gain access to the&lt;br /&gt;
NA-MIC tools and researchers. These external researchers are&lt;br /&gt;
contributing ideas and technology back into NA-MIC. In fact, a&lt;br /&gt;
breakout session at the Winter Project Week on &amp;quot;Geometry and Topology&lt;br /&gt;
Processing of Meshes&amp;quot; was organized by four researchers from outside&lt;br /&gt;
of NA-MIC.&lt;br /&gt;
&lt;br /&gt;
Components of the NA-MIC kit are used globally.  The software&lt;br /&gt;
engineering tools of CMake, Dart 2 and CTest are used by many open&lt;br /&gt;
source projects and commercial applications. For example, the K&lt;br /&gt;
Desktop Environment (KDE) for Linux and Unix workstations uses CMake&lt;br /&gt;
and Dart. KDE is one of the largest open source projects in the&lt;br /&gt;
world. Many open source projects and commercial products are&lt;br /&gt;
benefiting from the NA-MIC related contributions to ITK and&lt;br /&gt;
VTK. Finally, Slicer 3 is being used as an image analysis&lt;br /&gt;
platform in several fields outside of medical image analysis, in&lt;br /&gt;
particular, biological image analysis, astronomy, and industrial&lt;br /&gt;
inspection.&lt;br /&gt;
&lt;br /&gt;
NA-MIC science is recognized by the medical imaging community. Over&lt;br /&gt;
100 NA-MIC related publications are listed on PubMed. Many of these&lt;br /&gt;
publications are in the most prestigious journals and conferences in the&lt;br /&gt;
field. Portions of the DBP workflows and roadmaps are already being&lt;br /&gt;
utilized by researchers in the broader community and in the&lt;br /&gt;
development of commercial products.&lt;br /&gt;
&lt;br /&gt;
NA-MIC sponsored several events to promote NA-MIC tools and&lt;br /&gt;
methodologies.  NA-MIC co-sponsored the &amp;quot;Third Annual Open Source&lt;br /&gt;
Workshop&amp;quot; at the Medical Image Computing and Computer-Assisted&lt;br /&gt;
Intervention (MICCAI) 2007 conference.  The proceedings of the&lt;br /&gt;
workshop are published on the electronic Insight Journal, another&lt;br /&gt;
NIH-funded activity. NA-MIC sponsored three training workshops on&lt;br /&gt;
NA-MIC tools for the Biocomputing community in this fiscal year and&lt;br /&gt;
plans to hold sessions at upcoming MICCAI and RSNA conferences.&lt;br /&gt;
&lt;br /&gt;
==NA-MIC Timeline (Whitaker)==&lt;br /&gt;
&lt;br /&gt;
==Appendix A Publications (Kapur)==&lt;br /&gt;
These will be mined from the SPL publications database.  All core PIs need to ensure that all NA-MIC publications are in the publications database by May 15.&lt;br /&gt;
&lt;br /&gt;
==Appendix B EAB Report and Response (Kapur)==&lt;br /&gt;
===EAB Report===&lt;br /&gt;
===Response to EAB Report===&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2008_Annual_Scientific_Report&amp;diff=24627</id>
		<title>2008 Annual Scientific Report</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2008_Annual_Scientific_Report&amp;diff=24627"/>
		<updated>2008-05-15T15:17:13Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Overview (Fichtinger) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[2008_Progress_Report]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Guidelines for preparation=&lt;br /&gt;
&lt;br /&gt;
*[[2008_Progress_Report#Scientific Report Timeline]] - Main point is that May 15 is the date by which all sections below need to be completed.  No extensions are possible.&lt;br /&gt;
*DBPs - If there is work outside of the roadmap projects that you would like to report, you are welcome to create a separate section for it under &amp;quot;Other&amp;quot;.  &lt;br /&gt;
*The outline for this report is similar to the 2007 report, which is provided here for reference: [[2007_Annual_Scientific_Report]].&lt;br /&gt;
*In preparing summaries for each of the 8 topics in this report, please leverage the detailed pages for projects provided here: [[NA-MIC_Internal_Collaborations]].&lt;br /&gt;
*Publications will be mined from the SPL publications database. All core PIs need to ensure that all NA-MIC publications are in the publications database by May 15.&lt;br /&gt;
&lt;br /&gt;
=Introduction (Tannenbaum)=&lt;br /&gt;
&lt;br /&gt;
The National Alliance for Medical Imaging Computing (NA-MIC) is now in its fourth year. This Center is comprised of a multi-institutional, interdisciplinary team of computer scientists, software engineers, and medical investigators who have come together to develop and apply computational tools for the analysis and visualization of medical imaging data. A further purpose of the Center is to provide infrastructure and environmental support for the development of computational algorithms and open source technologies, and to oversee the training and dissemination of these tools to the medical research community. The driving biological projects (DBPs) for the Center's first three years were inspired by schizophrenia research. In the fourth year, new DBPs have been added. Three are centered around diseases of the brain: (a) brain lesion analysis in neuropsychiatric systemic lupus erythematosus; (b) a study of cortical thickness for autism; and (c) stochastic tractography for VCFS. In a new direction, we have added a DBP on the prostate: brachytherapy needle positioning robot integration.&lt;br /&gt;
&lt;br /&gt;
We briefly summarize the work of NAMIC during the four years of its existence. In year one of the Center, alliances were forged amongst the cores and constituent groups in order to integrate the efforts of the cores and to define the kinds of tools needed for specific imaging applications. The second year emphasized the identification of the key research thrusts that cut across cores and were driven by the needs and requirements of the DBPs. This led to the formulation of the Center's four main themes: Diffusion Tensor Analysis, Structural Analysis, Functional MRI Analysis, and the integration of newly developed tools into the NA-MIC Tool Kit. The third year of Center activity was devoted to continuing these collaborative efforts in order to deliver solutions for the various brain-oriented DBPs.&lt;br /&gt;
&lt;br /&gt;
Year four has seen progress with the work of our new DBPs. As alluded to above, these include work on neuropsychiatric disorders such as Systemic Lupus Erythematosus (MIND Institute, University of New Mexico), Velocardiofacial Syndrome (Harvard), and Autism (University of North Carolina, Chapel Hill), as well as the prostate interventional work (Johns Hopkins and Queen's Universities). We already have a number of publications, as indicated on our publications page, and software development is continuing as well.&lt;br /&gt;
&lt;br /&gt;
In the next section (Section 3), we summarize this year’s progress on the four roadmap projects listed above: Section 3.1, stochastic tractography for Velocardiofacial Syndrome; Section 3.2, brachytherapy needle positioning for the prostate; Section 3.3, brain lesion analysis in neuropsychiatric systemic lupus erythematosus; and Section 3.4, cortical thickness for autism. Next, in Section 4, we describe recent work on the four infrastructure topics: Diffusion Image Analysis (Section 4.1), Structural Analysis (Section 4.2), Functional MRI Analysis (Section 4.3), and the NA-MIC Toolkit (Section 4.4). In Section 4.5, we outline some of the other key projects; in Section 4.6, some key highlights, including the integration of the EM Segmenter into Slicer; and in Section 4.7, the impact of biocomputing at three different levels: within the Center, within the NIH-funded research community, and externally to a national and international community. The final section of this report, Section 4.8, provides a timeline of Center activities.&lt;br /&gt;
&lt;br /&gt;
=Clinical Roadmap Projects=&lt;br /&gt;
==Roadmap Project: Stochastic Tractography for VCFS (Kubicki)==&lt;br /&gt;
===Overview (Kubicki)===&lt;br /&gt;
The goal of this project is to create an end-to-end application that is useful for evaluating anatomical connectivity between segmented cortical regions of the brain. The ultimate goal of our program is to understand anatomical connectivity similarities and differences between the genetically related schizophrenia and velocardiofacial syndrome (VCFS). Thus we plan to use the &amp;quot;stochastic tractography&amp;quot; tool for the analysis of abnormalities in the integrity, or connectivity, provided by the arcuate fasciculus, a fiber bundle involved in language processing, in schizophrenia and VCFS.&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Golland)===&lt;br /&gt;
At the core of this project is the stochastic tractography algorithm&lt;br /&gt;
developed and implemented in collaboration between MIT and&lt;br /&gt;
BWH. Stochastic Tractography is a Bayesian approach to estimating&lt;br /&gt;
nerve fiber tracts from DTI images.&lt;br /&gt;
&lt;br /&gt;
We first use the diffusion tensor at each voxel in the volume to&lt;br /&gt;
construct a local probability distribution for the fiber direction&lt;br /&gt;
around the principal direction of diffusion. We then sample the tracts&lt;br /&gt;
between two user-selected ROIs, by simulating a random walk between&lt;br /&gt;
the regions, based on the local transition probabilities inferred from&lt;br /&gt;
the DTI image.&lt;br /&gt;
&lt;br /&gt;
The resulting collection of fibers and the associated FA values&lt;br /&gt;
provide useful statistics on the properties of connections between the&lt;br /&gt;
two regions. To constrain the sampling process to the relevant white&lt;br /&gt;
matter region, we use atlas-based segmentation to label ventricles and&lt;br /&gt;
gray matter and to exclude them from the search space. As such, this&lt;br /&gt;
step relies heavily on the registration and segmentation functionality&lt;br /&gt;
in Slicer.&lt;br /&gt;
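To make the random-walk sampling concrete, the following toy Python sketch (not the NA-MIC implementation; the grid, direction field, and ROIs are invented for illustration) draws walks through a synthetic field and estimates how often they connect two regions:&lt;br /&gt;

```python
import math
import random

# Toy 2-D direction field: every voxel's principal diffusion direction points
# along +x, mimicking a horizontal fiber bundle (a stand-in for the DTI field).
principal = {(x, y): 0.0 for x in range(20) for y in range(5)}  # angle (rad)

def sample_step(angle, kappa=8.0):
    """Sample a grid step concentrated around `angle` -- a crude stand-in for
    the local fiber-direction distribution built from the diffusion tensor."""
    theta = random.gauss(angle, 1.0 / math.sqrt(kappa))
    return (round(math.cos(theta)), round(math.sin(theta)))

def random_walk(seed, target_x, max_steps=60):
    """Simulate one walk from the seed ROI; report success if it reaches the
    target ROI (x >= target_x) without leaving the white-matter mask."""
    pos = seed
    for _ in range(max_steps):
        if pos not in principal:          # left the mask (e.g. into gray matter)
            return False
        if pos[0] >= target_x:            # reached the second ROI
            return True
        dx, dy = sample_step(principal[pos])
        pos = (pos[0] + dx, pos[1] + dy)
    return False

random.seed(0)
n_samples = 200
hits = sum(random_walk((0, 2), 19) for _ in range(n_samples))
connection_prob = hits / n_samples
```

In the actual algorithm, each voxel carries a full fiber-direction distribution derived from its diffusion tensor rather than a single preferred angle, and the excluded region comes from atlas-based segmentation rather than a rectangular mask.&lt;br /&gt;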
&lt;br /&gt;
Over the last year, we first tested the algorithm on the dataset of&lt;br /&gt;
schizophrenia subjects already available to NAMIC, acquired at&lt;br /&gt;
1.5T. This step allowed us to optimize the algorithm for our dataset, as&lt;br /&gt;
well as to develop a pipeline for data analysis that would then be&lt;br /&gt;
easily transferable to other image sets and structures.&lt;br /&gt;
&lt;br /&gt;
The next step, also accomplished this past year, was to apply the&lt;br /&gt;
algorithm to the new, higher-resolution NAMIC dataset, and to study&lt;br /&gt;
smaller white matter connections, including the cingulum bundle, arcuate&lt;br /&gt;
fasciculus, uncinate fasciculus, and internal capsule. This step was&lt;br /&gt;
accomplished and the data presented at the Santa Fe meeting in October&lt;br /&gt;
2007.&lt;br /&gt;
&lt;br /&gt;
Upon completion of the testing phase, we began analysis of the arcuate&lt;br /&gt;
fasciculus, a language-related fiber bundle, in the new 3T, high-resolution&lt;br /&gt;
dataset. Our current work focuses on improving the parameterization&lt;br /&gt;
of the tracts, in order to obtain FA measurements along the tracts.&lt;br /&gt;
&lt;br /&gt;
===Engineering Component (Davis)===&lt;br /&gt;
The Stochastic Tractography Slicer module has been finished and was presented&lt;br /&gt;
at the AHM in SLC. It is now part of both Slicer 2.8 and Slicer 3. Module&lt;br /&gt;
documentation has also been created. Current engineering efforts are&lt;br /&gt;
concentrated on maintaining the module, optimizing it to work with&lt;br /&gt;
other data formats, and adding new functionality, such as better&lt;br /&gt;
registration, distortion correction, and ways of extracting and&lt;br /&gt;
measuring FA along the tracts.&lt;br /&gt;
&lt;br /&gt;
===Clinical Component (Kubicki)===&lt;br /&gt;
Over the last year, we tested the algorithm on the already available&lt;br /&gt;
NAMIC dataset of schizophrenia subjects acquired at 1.5T. The anterior&lt;br /&gt;
limb of the internal capsule, a large structure connecting the thalamus with&lt;br /&gt;
the frontal lobe, was extracted and analyzed in a group of 20&lt;br /&gt;
schizophrenics and 20 control subjects. We presented the results&lt;br /&gt;
showing group differences in FA values at the ACNP symposium in&lt;br /&gt;
December 2007. Next, stochastic tractography was tested and optimized&lt;br /&gt;
for the new, high-resolution DTI dataset acquired on a 3T GE magnet.&lt;br /&gt;
&lt;br /&gt;
Upon completion of the testing phase, we began analysis of the&lt;br /&gt;
arcuate fasciculus, a language-related fiber bundle, in 20 controls and&lt;br /&gt;
20 chronic schizophrenics. For each subject, we performed white&lt;br /&gt;
matter segmentation and extracted the regions interconnected by the arcuate&lt;br /&gt;
fasciculus (inferior frontal and superior temporal gyri), as well as&lt;br /&gt;
another ROI to guide the tract (a &amp;quot;waypoint&amp;quot; ROI). We presented&lt;br /&gt;
the preliminary results of the probabilistic tractography and the&lt;br /&gt;
statistics of FA extracted for each tract for a small set of 7&lt;br /&gt;
patients and 12 controls at the AHM in January 2008. The full study is&lt;br /&gt;
currently underway.&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:Harvard:Brain_Segmentation_Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Brachytherapy Needle Positioning Robot Integration (Fichtinger)==&lt;br /&gt;
===Overview (Fichtinger)===&lt;br /&gt;
Numerous studies have demonstrated the efficacy of image-guided&lt;br /&gt;
needle-based therapy and biopsy in the management of prostate&lt;br /&gt;
cancer. The accuracy of traditional prostate interventions performed using&lt;br /&gt;
transrectal ultrasound (TRUS) is limited by image fidelity, needle&lt;br /&gt;
template guides, needle deflection and tissue deformation. Magnetic Resonance&lt;br /&gt;
Imaging (MRI) is an ideal modality for guiding and monitoring&lt;br /&gt;
such interventions due to its excellent visualization of the prostate, its&lt;br /&gt;
sub-structure and surrounding tissues. &lt;br /&gt;
&lt;br /&gt;
We have designed a comprehensive robotic assistant system that allows prostate biopsy and brachytherapy&lt;br /&gt;
procedures to be performed entirely inside a 3T closed MRI scanner. The current system applies a transrectal approach to the prostate. An MRI-compatible manipulator is equipped with a steerable needle&lt;br /&gt;
guide and an endorectal imaging coil, both tuned to 3T magnets and not tied to any particular scanner. &lt;br /&gt;
&lt;br /&gt;
Under the NAMIC initiative, the image computing, visualization, intervention planning, and kinematic planning interface is being accomplished with an open source system built on the NAMIC toolkit and its components, such as ITK.&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Tannenbaum)===&lt;br /&gt;
We have worked on both the segmentation and the registration of the prostate from MRI and ultrasound data. We explain each of the steps now.&lt;br /&gt;
&lt;br /&gt;
====Prostate Segmentation====&lt;br /&gt;
&lt;br /&gt;
We first must extract the prostate. We have considered three possible methods: a combination of Cellular Automata (CA, also known as Grow Cut) with Geometric Active Contour (GAC) methods; employing an ellipsoid to match the prostate in the 3D image; and a shape-based approach using spherical wavelets. More details are given below, and images and further details may be found at [[Projects:ProstateSegmentation|GaTech Algorithm Prostate Segmentation]].&lt;br /&gt;
&lt;br /&gt;
1. A cellular automata algorithm is used to give an initial segmentation. It begins with a rough manual initialization and then iteratively classifies all pixels into object and background until convergence. It effectively overcomes the problems of weak boundaries and inhomogeneity within the object or background. This result in turn is fed into a Geometric Active Contour for finer tuning. We are initially using the edge-based minimal surface approach (the generalization of the standard Geodesic Active Contour model), which seems to give very reasonable results. Both steps of the algorithm are implemented in 3D. An ITK Cellular Automata filter, handling N-D data, has already been completed and submitted to the NA-MIC SandBox.&lt;br /&gt;
&lt;br /&gt;
2. Spherical wavelets have proven to be a very natural way of representing 3D shapes which are compact and simply connected (topological spheres). We developed a segmentation framework using this 3D wavelet representation and a multiscale prior. The parameters of our model are the learned shape parameters based on the spherical wavelet coefficients, as well as pose parameters that accommodate shape variability due to a similarity transformation (rotation, scale, translation), which is not explicitly modeled with the shape parameters; the surface is transformed according to the pose parameters. We used a region-based energy to drive the evolution of the parametric deformable surface for segmentation. Our segmentation algorithm deforms an initial surface according to the gradient flow that minimizes the energy functional in terms of the pose and shape parameters. Additionally, the optimization method can be applied in a coarse-to-fine manner. Spherical wavelets and conformal mappings are&lt;br /&gt;
already part of the NA-MIC SandBox.&lt;br /&gt;
&lt;br /&gt;
3. The third method is very closely related to the second. It is based on the observation that the prostate may be roughly modeled as an ellipsoid. One can then employ this ellipsoid model, coupled with a local/global segmentation energy approach which we have developed this year, as the basis of a segmentation procedure. Because of the local/global nature of the functional and the implicit introduction of scale, this methodology may be very useful for MRI prostate data.&lt;br /&gt;
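The cellular automata (Grow Cut) competition at the heart of method 1 can be illustrated with a minimal 1-D Python sketch (invented toy data, not the ITK filter): seeded labels spread to neighbors whenever an attacking neighbor's intensity-discounted strength exceeds the cell's own.&lt;br /&gt;

```python
# Toy 1-D "image" with two intensity regions and one seed for each label.
image = [0.10, 0.10, 0.15, 0.20, 0.80, 0.85, 0.90, 0.90]
label = [1, 0, 0, 0, 0, 0, 0, 2]          # 1 = object seed, 2 = background seed
strength = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]

def g(d):
    """Attack-force attenuation: similar intensities transmit more strength."""
    return 1.0 - min(d, 1.0)

changed = True
while changed:                             # iterate the automaton to convergence
    changed = False
    for p in range(len(image)):
        for q in (p - 1, p + 1):           # 2-neighbourhood in 1-D
            if 0 <= q < len(image):
                force = g(abs(image[p] - image[q])) * strength[q]
                if force > strength[p]:    # neighbour q captures cell p
                    label[p], strength[p] = label[q], force
                    changed = True
```

The labels converge with the boundary at the intensity jump, because the attenuation g sharply weakens attacks across dissimilar intensities; the 3-D filter applies the same rule over a voxel neighborhood.&lt;br /&gt;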
&lt;br /&gt;
====Prostate Registration====&lt;br /&gt;
&lt;br /&gt;
The registration and segmentation elements of our algorithm are difficult to separate. Thus, for the 3D shape-driven segmentation part, the shapes must first be aligned through a conformal and area-correcting alignment process. The prostate presents a number of difficulties for traditional approaches since there are no easily discernible landmarks. On the other hand, we observed that the surface of the prostate is almost half convex and half concave. The concave region may be captured and used to register the shapes; thus we register the whole shape by registering a certain region on it. This concave region is characterized by its negative mean curvature. We treat the mean curvature as a scalar field defined on the surface, and we have extended the Chan-Vese method (in which one wants to separate the means with respect to the regions defined by the interior and exterior of the evolving active contour) to the case at hand on the prostate surface. The method is implemented in C++ and successfully extracts the concave surface region. This method could also be used to extract regions on a surface according to any feature characterized by a scalar field defined on the surface.&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the extracted region as a landmark in the registration process, instead of matching two binary images directly, we transform the binary images into a form that highlights the boundary region. This is done by applying a Gaussian function on the narrow band of the signed distance function of the binary image. The transformed image enjoys the advantages of both the parametric and implicit representations of shapes: it has a compact description, as the parametric representation does, and, as with the implicit representation, it avoids the correspondence problem. Moreover, we incorporate the extracted concave regions into such images for registration, which leads to a better result.&lt;br /&gt;
&lt;br /&gt;
Finally, in the past year we have developed a particle filtering approach for the general problem of registering two point sets that differ by a rigid body transformation which may be very useful for this project. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. We treat motion as a local variation in pose parameters obtained from running several iterations of the standard Iterative Closest Point (ICP) algorithm.  Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence often found in local optimizer functions used to tackle the registration task. In contrast with other techniques, this approach requires no annealing schedule, which results in a reduction in computational complexity as well as maintains the temporal coherency of the state (no loss of information).  Also, unlike most alternative approaches for point set registration, we make no geometric assumptions on the two data sets.&lt;br /&gt;
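A minimal 2-D Python sketch of the particle filtering idea follows (toy shapes and parameters, all invented; for brevity the inner ICP iterations are replaced by random jitter of the pose hypotheses, which plays the role of the stochastic motion dynamics): pose particles are weighted by the closest-point cost, resampled, and perturbed, with no annealing schedule.&lt;br /&gt;

```python
import math
import random

def transform(points, theta, tx, ty):
    """Apply a 2-D rigid-body transform (rotation theta, translation tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def icp_cost(moving, fixed):
    """Sum of closest-point distances: the metric ICP would minimize."""
    return sum(min(math.dist(p, q) for q in fixed) for p in moving)

# An L-shaped model and a target that is a rotated/translated copy of it.
model = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0), (2.0, 2.0)]
target = transform(model, 0.4, 0.5, -0.3)

random.seed(1)
N = 300
# Particles are pose hypotheses (theta, tx, ty) drawn over a broad prior.
particles = [tuple(random.uniform(-1.0, 1.0) for _ in range(3)) for _ in range(N)]
for _ in range(40):
    # Weight each hypothesis by its registration error, resample, then jitter.
    weights = [math.exp(-4.0 * icp_cost(transform(model, *p), target))
               for p in particles]
    particles = random.choices(particles, weights=weights, k=N)
    particles = [(t + random.gauss(0, 0.03), x + random.gauss(0, 0.03),
                  y + random.gauss(0, 0.03)) for t, x, y in particles]

best = min(particles, key=lambda p: icp_cost(transform(model, *p), target))
```

The surviving particle cloud approximates the posterior over poses, so its spread doubles as an uncertainty estimate for the recovered registration.&lt;br /&gt;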
&lt;br /&gt;
===Engineering Component (Hayes)===&lt;br /&gt;
===Clinical Component (Fichtinger)===&lt;br /&gt;
The current system applies transrectal approach to the prostate. An MRI compatible manipulator is equipped with steerable needle   &lt;br /&gt;
guide and endorectal imaging coil, both tuned to 3T magnets, invariable to any particular scanner.&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:JHU:Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Brain Lesion Analysis in Neuropsychiatric Systemic Lupus Erythematosus (Bockholt)==&lt;br /&gt;
===Overview (Bockholt)===&lt;br /&gt;
===Algorithm Component (Whitaker)===&lt;br /&gt;
===Engineering Component (Pieper)===&lt;br /&gt;
===Clinical Component (Bockholt)===&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:MIND:Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Cortical Thickness for Autism (Hazlett)== &lt;br /&gt;
===Overview (Hazlett)===&lt;br /&gt;
&lt;br /&gt;
A primary goal of the UNC DBP is to examine changes in cortical thickness in children with autism compared to typical controls. We want to examine group differences in both local and regional cortical thickness, and would also like to examine longitudinal changes in the cortex from ages 2-4 years. To accomplish this goal, this project will create an end-to-end application within Slicer3 allowing individual and group analysis of regional and local cortical thickness. Such a workflow will then be applied to our study data (already collected).&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Styner)===&lt;br /&gt;
&lt;br /&gt;
The basic steps necessary for the cortical thickness application are: first, tissue segmentation to separate white and gray matter regions; second, cortical thickness measurement; third, cortical correspondence to compare measurements across subjects; and finally, statistical analysis to locally compute group differences.&lt;br /&gt;
&lt;br /&gt;
====Tissue segmentation====&lt;br /&gt;
We have successfully adapted the UNC segmentation tool itkEMS, which we use for segmentation of the young brain, to Slicer. We also created a young-brain atlas for the current Slicer3 EM Segment module. Tests have been successful, and a comparative study against itkEMS has shown that further parameter optimization is needed to reach the same quality. &lt;br /&gt;
&lt;br /&gt;
====Cortical thickness measurement====&lt;br /&gt;
The UNC algorithm for the measurement of local cortical thickness, given a labeling of white matter and gray matter, has been developed into a Slicer3 external module. This module lends itself well to regional analysis of cortical thickness, but less so to local analysis, due to its non-symmetric and sparse measurements. Ongoing development is focusing on a symmetric, Laplacian-based cortical thickness measure suitable for local analysis.&lt;br /&gt;
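A toy Python illustration of the Laplacian idea (an invented rectangular slab geometry, not the UNC module): solve Laplace's equation between the white-matter and pial boundaries, then measure thickness as the length of a streamline traced toward increasing potential.&lt;br /&gt;

```python
import math

# Rectangular gray-matter "slab": row 0 is the white-matter boundary (phi = 0)
# and row H-1 is the pial boundary (phi = 1).
W, H = 12, 7
phi = [[0.0] * W for _ in range(H)]
for x in range(W):
    phi[H - 1][x] = 1.0

# Jacobi relaxation of Laplace's equation on the interior nodes.
for _ in range(500):
    new = [row[:] for row in phi]
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            new[y][x] = 0.25 * (phi[y - 1][x] + phi[y + 1][x]
                                + phi[y][x - 1] + phi[y][x + 1])
    phi = new

def thickness_from(x):
    """Trace a node-to-node streamline toward increasing potential and
    return its arc length: the (symmetric) Laplacian thickness."""
    y, length = 0, 0.0
    while y < H - 1:
        nbrs = [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
        y2, x2 = max((p for p in nbrs if 0 <= p[0] < H and 0 <= p[1] < W),
                     key=lambda p: phi[p[0]][p[1]])
        length += math.hypot(y2 - y, x2 - x)
        y, x = y2, x2
    return length

thickness = thickness_from(W // 2)   # H - 1 grid units for a flat slab
```

Because every interior point lies on exactly one streamline of the potential, the same thickness value is obtained whether the streamline is traced from the inner or the outer boundary, which is the symmetry property the sparse measure lacks.&lt;br /&gt;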
&lt;br /&gt;
====Cortical correspondence (regional)====&lt;br /&gt;
&lt;br /&gt;
For regional correspondence, an existing lobar parcellation atlas is deformably registered using a B-spline registration tool. First tests have been very promising, and the release of the corresponding Slicer 3 registration module is scheduled to be finished within the next month; the regional analysis workflow will thus be available at that time.&lt;br /&gt;
&lt;br /&gt;
====Cortical correspondence (local)====&lt;br /&gt;
Local cortical correspondence requires a two-step process of white/gray surface inflation followed by group-wise correspondence computation. White matter surface extraction and inflation is currently achieved with an external tool and developing a Slicer 3 based solution is a goal in the next year. The group-wise correspondence step has been fully solved, and a Slicer 3 module is already available. Evaluation on real data has shown that our method outperforms the currently widely employed Freesurfer framework. &lt;br /&gt;
&lt;br /&gt;
====Statistical analysis/Hypothesis testing====&lt;br /&gt;
Regional analysis can be done with standard statistical tools such as MANOVA, as there is a limited, relatively small number of regions. Local analysis, on the other hand, needs local non-parametric testing, multiple-comparison correction, and correlative analysis that is not routinely available. We are currently extending the current Slicer 3 module designed for statistical shape analysis to be used for this purpose, incorporating a locally applied General Linear Model and a MANCOVA-based testing framework.&lt;br /&gt;
&lt;br /&gt;
===Engineering Component (Miller, Vachet)===&lt;br /&gt;
&lt;br /&gt;
Several of the algorithms for this Clinical Roadmap project were already in software tools utilizing ITK.  These tools have been refactored to be NA-MIC compatible and repackaged as Slicer3 plugins. Slicer3 has been extended to support this Clinical Roadmap by adding transforms as a parameter type that can be passed to and returned by plugins. Slicer3 registration and resampling modules have been refactored to produce and accept transforms as parameters. Slicer3 has also been extended to support nonlinear transformation types (B-Spline and deformation fields) in its data model.&lt;br /&gt;
&lt;br /&gt;
===Clinical Component (Hazlett)===&lt;br /&gt;
So far, the clinical component of this project has involved interfacing with the algorithms and engineering teams to provide the project specifications, feedback, and data (needed for testing).  During this past year, development and programming work has proceeded satisfactorily, and we anticipate being able to test our project hypotheses about cortical thickness in autism by the end of our project period.  Therefore, the primary accomplishment of this first year has been the development and testing of methods that are necessary for this cortical thickness work pipeline.&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:UNC:Cortical_Thickness_Roadmap here on the NA-MIC wiki].&lt;br /&gt;
&lt;br /&gt;
=Four Infrastructure Topics=&lt;br /&gt;
==Diffusion Image Analysis (Gerig)==&lt;br /&gt;
===Progress===&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:DiffusionImageAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==Structural Analysis (Tannenbaum)==&lt;br /&gt;
===Progress===&lt;br /&gt;
Under Structural Analysis, the main topics of research for NAMIC are structural segmentation, registration techniques and shape analysis. These topics are correlated and research in one often finds application in another. For example, shape analysis can yield useful priors for segmentation, or segmentation and registration can provide structural correspondences for use in shape analysis and so on. &lt;br /&gt;
&lt;br /&gt;
An overview of selected progress highlights under these broad topics follows.&lt;br /&gt;
&lt;br /&gt;
Structural Segmentation&lt;br /&gt;
&lt;br /&gt;
* Directional Based Segmentation&lt;br /&gt;
We have proposed a directional segmentation framework for direction-weighted magnetic resonance imagery by augmenting the Geodesic Active Contour framework with directional information. The classical scalar conformal factor is replaced by a factor that incorporates directionality. We showed mathematically that the optimization problem is well-defined when the factor is a Finsler metric. The calculus of variations or dynamic programming may be used to find the optimal curves. This past year we have applied this methodology to extracting the anchor tract (or centerline) of neural fiber bundles. Further, we have applied it, in conjunction with Bayes’ rule, to volumetric segmentation for extracting entire fiber bundles. We have also proposed a novel shape prior in the volumetric segmentation to extract tubular fiber bundles.&lt;br /&gt;
&lt;br /&gt;
* Stochastic Segmentation&lt;br /&gt;
&lt;br /&gt;
We have continued work this year on developing new stochastic methods for implementing curvature-driven flows for medical tasks like segmentation. We can now generalize our results to an arbitrary Riemannian surface, which includes the geodesic active contours as a special case. We are also implementing the directional flows, based on the anisotropic conformal factor described above, using this stochastic methodology. Our stochastic snakes’ models are based on the theory of interacting particle systems. This brings together the theories of curve evolution and hydrodynamic limits, and as such impacts our growing use of joint methods from probability and partial differential equations in image processing and computer vision. We now have working code written in C++ for the two-dimensional case and have worked out the stochastic model of the general geodesic active contour model.&lt;br /&gt;
&lt;br /&gt;
* Statistical PDE Methods for Segmentation&lt;br /&gt;
&lt;br /&gt;
Our objective is to add various statistical measures into our PDE flows for medical imaging. This will allow the incorporation of global image information into the locally defined PDE framework. This year, we developed flows which can separate the distributions inside and outside the evolving contour, and we have also been including shape information in the flows. We have completed a statistically based flow for segmentation using fast marching, and the code has been integrated into Slicer. &lt;br /&gt;
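The distribution-separating idea can be caricatured in one dimension (toy signal, all values invented; the real flows evolve contours under a PDE, not a scalar threshold): a threshold is repeatedly moved to the midpoint of the two region means, the fixed point of a Chan-Vese-style energy.&lt;br /&gt;

```python
# Toy 1-D signal with a bright object on a dark background.
signal = [0.10, 0.20, 0.15, 0.10, 0.80, 0.90, 0.85, 0.95, 0.20, 0.10]

t = 0.5                                   # initial separating threshold
for _ in range(50):
    inside = [v for v in signal if v >= t]
    outside = [v for v in signal if v < t]
    c1 = sum(inside) / len(inside)        # mean of the "inside" region
    c2 = sum(outside) / len(outside)      # mean of the "outside" region
    t = 0.5 * (c1 + c2)                   # fixed point of the two-mean energy

segmented = [int(v >= t) for v in signal]
```

The update uses global statistics (the two region means) to drive a local decision, which is exactly the benefit of folding statistical measures into the PDE framework.&lt;br /&gt;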
&lt;br /&gt;
* Atlas Renormalization for Improved Brain MR Image Segmentation&lt;br /&gt;
&lt;br /&gt;
Atlas-based approaches can automatically identify detailed brain structures from 3-D magnetic resonance (MR) brain images. However, the accuracy often degrades when processing data acquired on a different scanner platform or pulse sequence than the data used for the atlas training. In this project, we work to improve the performance of an atlas-based whole brain segmentation method by introducing an intensity renormalization procedure that automatically adjusts the prior atlas intensity model to new input data. Validation using manually labeled test datasets shows that the new procedure improves segmentation accuracy (as measured by the Dice coefficient) by 10% or more for several structures including hippocampus, amygdala, caudate, and pallidum. The results verify that this new procedure reduces the sensitivity of the whole brain segmentation method to changes in scanner platforms and improves its accuracy and robustness, which can thus facilitate multicenter or multisite neuroanatomical imaging studies.&lt;br /&gt;
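For reference, the Dice coefficient used in this validation compares two label maps A and B as 2|A∩B| / (|A| + |B|); a minimal Python sketch on invented masks:&lt;br /&gt;

```python
def dice(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two voxel-index sets."""
    return 2.0 * len(a & b) / (len(a) + len(b))

# Invented example: an automated label vs. a manual label shifted by one column.
auto = {(x, y) for x in range(10) for y in range(10)}
manual = {(x, y) for x in range(1, 11) for y in range(10)}
score = dice(auto, manual)   # 2 * 90 / (100 + 100) = 0.9
```

A score of 1.0 means perfect overlap with the manual labeling, so a 10% improvement in Dice for structures like the hippocampus is a substantial gain.&lt;br /&gt;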
&lt;br /&gt;
* Multiscale Shape Segmentation Techniques&lt;br /&gt;
&lt;br /&gt;
The goal of this project is to represent multiscale variations in a shape population in order to drive the segmentation of deep brain structures, such as the caudate nucleus or the hippocampus. Our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We derived a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We applied our algorithm to the caudate nucleus, a brain structure of interest in the study of schizophrenia. Our validation shows that our algorithm is computationally efficient and outperforms the Active Shape Model (ASM) algorithm, by capturing finer shape details.&lt;br /&gt;
&lt;br /&gt;
Registration&lt;br /&gt;
&lt;br /&gt;
* Optimal Mass Transport Registration&lt;br /&gt;
The aim of this project is to provide a computationally efficient non-rigid/elastic image registration algorithm based on the Optimal Mass Transport theory. We use the Monge-Kantorovich formulation of the Optimal Mass Transport problem and implement the gradient flow PDE approach using multi-resolution and multi-grid techniques to speed up the convergence. We also leverage the computational power of general purpose graphics processing units available on standard desktop computing machines to exploit the inherent parallelism in our algorithm. We have implemented 2D and 3D multi-resolution registration using Optimal Mass Transport and are currently working on the registration of 3D datasets. &lt;br /&gt;
&lt;br /&gt;
* Diffusion Tensor Image Processing Tools&lt;br /&gt;
&lt;br /&gt;
We aim to provide methods for computing geodesics and distances between diffusion tensors. One goal is to provide hypothesis testing for differences between groups. This will involve interpolation techniques for diffusion tensors as weighted averages in the metric framework. We will also provide filtering and eddy current correction. This year, we developed a Slicer module for DT-MRI Rician noise removal, developed prototypes of DTI geometry and statistical packages, and began work on a general method for hypothesis testing between diffusion tensor groups. &lt;br /&gt;
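As an illustration of interpolating tensors as weighted averages in a metric framework (a Log-Euclidean-style sketch on invented 2-D tensors; the report does not specify which metric the package uses), averaging in the matrix-log domain keeps the interpolant positive-definite, unlike plain linear averaging:&lt;br /&gt;

```python
import math

def eig_sym2(a, b, c):
    """Eigen-decomposition of the symmetric 2x2 tensor [[a, b], [b, c]]:
    returns (l1, l2, theta), theta being the eigenbasis rotation angle."""
    theta = 0.5 * math.atan2(2.0 * b, a - c)
    ct, st = math.cos(theta), math.sin(theta)
    l1 = a * ct * ct + 2.0 * b * ct * st + c * st * st
    l2 = a * st * st - 2.0 * b * ct * st + c * ct * ct
    return l1, l2, theta

def recompose(l1, l2, theta):
    """Rebuild (a, b, c) from eigenvalues and eigenbasis angle."""
    ct, st = math.cos(theta), math.sin(theta)
    return (l1 * ct * ct + l2 * st * st,
            (l1 - l2) * ct * st,
            l1 * st * st + l2 * ct * ct)

def tensor_log(T):
    l1, l2, th = eig_sym2(*T)
    return recompose(math.log(l1), math.log(l2), th)

def tensor_exp(S):
    l1, l2, th = eig_sym2(*S)
    return recompose(math.exp(l1), math.exp(l2), th)

def interpolate(T1, T2, t):
    """Weighted average in the log domain, mapped back to tensor space."""
    L1, L2 = tensor_log(T1), tensor_log(T2)
    return tensor_exp(tuple((1 - t) * u + t * v for u, v in zip(L1, L2)))

# Two anisotropic tensors with perpendicular principal directions.
T1 = (3.0, 0.0, 1.0)      # diag(3, 1)
T2 = (1.0, 0.0, 3.0)      # diag(1, 3)
mid = interpolate(T1, T2, 0.5)
```

Halfway between these two tensors the log-domain average is isotropic with eigenvalues √3, whereas the linear average diag(2, 2) would overestimate the diffusivity; this determinant-preserving behavior is one reason log-domain metrics are preferred for tensor statistics.&lt;br /&gt;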
&lt;br /&gt;
* Point Set Rigid Registration&lt;br /&gt;
&lt;br /&gt;
We propose a particle filtering scheme for the registration of 2D and 3D point sets undergoing a rigid body transformation, where we incorporate stochastic dynamics to model the uncertainty of the registration process. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. In this work, we treat motion as a local variation in the pose parameters obtained from running a few iterations of the standard Iterative Closest Point (ICP) algorithm. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence as well as provide a dynamical model of uncertainty. In contrast with other techniques, our approach requires no annealing schedule, which results in a reduction in computational complexity as well as maintains the temporal coherency of the state (no loss of information). Also, unlike most alternative approaches for point set registration, we make no geometric assumptions on the two data sets.&lt;br /&gt;
&lt;br /&gt;
* Cortical Correspondence using Particle System&lt;br /&gt;
&lt;br /&gt;
In this project, we want to compute cortical correspondence on populations, using various features such as cortical structure, DTI connectivity, vascular structure, and functional data (fMRI). This presents a challenge because of the highly convoluted surface of the cortex, as well as because of the different properties of the data features we want to incorporate together. We would like to use a particle-based, entropy-minimizing system for the correspondence computation, in a population-based manner. This is advantageous because it does not require a spherical parameterization of the surface, nor does it require the surface to be of spherical topology. It would also eventually enable correspondence computation on the subcortical structures and on the cortical surface using the same framework. To circumvent the disadvantage that particles are assumed to lie on local tangent planes, we plan to first ‘inflate’ the cortex surface. Currently, we are at the testing stage, using structural data, namely point locations and sulcal depth (as computed by FreeSurfer).&lt;br /&gt;
&lt;br /&gt;
* Multimodal Atlas &lt;br /&gt;
&lt;br /&gt;
In this work, we propose and investigate an algorithm that jointly co-registers a collection of images while computing multiple templates. The algorithm, called iCluster for Image Clustering, is based on the following idea: given the templates, the co-registration problem becomes simple, reducing to a number of pairwise registration instances. On the other hand, given a collection of images that have been co-registered, an off-the-shelf clustering or averaging algorithm can be used to compute the templates. The algorithm assumes a fixed and known number of template images. We formulate the problem as maximum likelihood estimation and employ a generalized expectation-maximization (EM) algorithm to solve it. In the E-step, we compute membership probabilities. In the M-step, we update the template images as weighted averages of the images, where the weights are the memberships; we update the template priors; and we then perform a collection of independent pairwise registration instances. The algorithm is currently implemented in the Insight Toolkit (ITK), and we next plan to integrate it into Slicer.&lt;br /&gt;
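The E/M alternation can be sketched in a few lines of Python (toy 3-pixel &amp;quot;images&amp;quot;, registration and template priors omitted, all data invented): memberships are computed from distances to the templates, and each template becomes a membership-weighted average of the images.&lt;br /&gt;

```python
import math

# Four toy 3-pixel "images" drawn from two underlying shapes.
images = [[1.0, 1.0, 0.0], [0.9, 1.1, 0.1], [0.0, 1.0, 1.0], [0.1, 0.9, 1.1]]
templates = [[0.5, 1.0, 0.2], [0.2, 1.0, 0.5]]   # rough initial templates

def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

for _ in range(20):
    # E-step: membership probability of each image under each template.
    memberships = []
    for img in images:
        w = [math.exp(-sqdist(img, t) / 0.1) for t in templates]
        s = sum(w)
        memberships.append([v / s for v in w])
    # M-step: each template becomes the membership-weighted image average.
    for k in range(len(templates)):
        tot = sum(m[k] for m in memberships)
        templates[k] = [
            sum(m[k] * img[d] for m, img in zip(memberships, images)) / tot
            for d in range(3)
        ]
```

In the full algorithm each M-step also re-registers every image to its templates, so clustering and spatial alignment improve together.&lt;br /&gt;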
&lt;br /&gt;
* Groupwise Registration&lt;br /&gt;
&lt;br /&gt;
We aim to provide efficient groupwise registration algorithms for population analysis of anatomical structures. Here we extend a previously demonstrated entropy-based groupwise registration method to include a free-form deformation model based on B-splines. We provide an efficient implementation using stochastic gradient descent in a multi-resolution setting. We demonstrate the method on a set of 50 MRI brain scans and compare the results to a pairwise approach, using segmentation labels to evaluate the quality of alignment. Our results indicate that increasing the complexity of the deformation model improves registration accuracy significantly, especially in cortical regions.&lt;br /&gt;
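The alternation of cost evaluation and gradient steps can be sketched for the simplest possible case: translation-only alignment of 1-D signals, with the sum of pointwise variances standing in for the stack entropy. All constants below are illustrative assumptions, not the B-spline implementation described above:&lt;br /&gt;

```python
import math

def gaussian_signal(center):
    """A 1-D 'image': a unit-width Gaussian bump at the given position."""
    return lambda x: math.exp(-(x - center) ** 2)

def stack_cost(signals, shifts, xs):
    """Sum of pointwise variances across the shifted stack (entropy surrogate)."""
    cost = 0.0
    for x in xs:
        vals = [s(x - t) for s, t in zip(signals, shifts)]
        mean = sum(vals) / len(vals)
        cost += sum((v - mean) ** 2 for v in vals)
    return cost

def groupwise_register(signals, steps=300, lr=0.05, eps=1e-4):
    """Gradient descent on the stack cost via central finite differences."""
    xs = [0.1 * i for i in range(-60, 61)]
    shifts = [0.0] * len(signals)
    for _ in range(steps):
        grad = []
        for i in range(len(shifts)):
            up = shifts[:]; up[i] += eps
            dn = shifts[:]; dn[i] -= eps
            grad.append((stack_cost(signals, up, xs) -
                         stack_cost(signals, dn, xs)) / (2 * eps))
        shifts = [t - lr * g for t, g in zip(shifts, grad)]
        mean = sum(shifts) / len(shifts)
        shifts = [t - mean for t in shifts]    # anchor the group's mean shift
    return shifts

# Three bumps at -0.4, 0.0, 0.4 should receive shifts of about +0.4, 0.0, -0.4.
signals = [gaussian_signal(c) for c in (-0.4, 0.0, 0.4)]
shifts = groupwise_register(signals)
```

The zero-mean constraint on the shifts plays the usual role of pinning down the group frame, so the images align to their common mean rather than drifting together.&lt;br /&gt;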
&lt;br /&gt;
Shape Analysis&lt;br /&gt;
&lt;br /&gt;
* Shape Analysis Framework Using SPHARM-PDM&lt;br /&gt;
&lt;br /&gt;
The UNC shape analysis is based on a framework for objects with spherical topology, described by sampled spherical harmonics (SPHARM-PDM). The input of the proposed shape analysis is a set of binary segmentations of a single brain structure, such as the hippocampus or caudate. Group tests can be visualized via P-values, mean-difference magnitude and vector maps, and maps of the group covariance information. The implementation has reached a stable state and has been disseminated to several collaborating labs within NAMIC (BWH, Georgia Tech, Utah). Current development focuses on integrating the command line tools into Slicer (v3) via the Slicer execution model, so that the whole shape analysis pipeline is encapsulated and accessible to the trained clinical collaborator. The current toolset distribution (via NeuroLib) now also contains open data for other researchers to evaluate their shape analysis enhancements.&lt;br /&gt;
&lt;br /&gt;
* Multiscale Shape Analysis&lt;br /&gt;
&lt;br /&gt;
We present a novel method of statistical surface-based morphometry based on non-parametric permutation tests and a spherical wavelet coefficient (SWC) shape representation. As an application, we analyze two brain structures, the caudate nucleus and the hippocampus, and show that the results nicely complement those obtained with shape analysis using a sampled point representation (SPHARM-PDM). We used the UNC pipeline to pre-process the images, and for each triangulated SPHARM-PDM surface, a spherical wavelet description is computed. We then use the UNC statistical toolbox to analyze differences between two groups of surfaces described by the features of choice, i.e., the 3D spherical wavelet coefficients. This year, we conducted statistical shape analysis of the two brain structures and compared the results to those obtained with the SPHARM-PDM representation.&lt;br /&gt;
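The non-parametric permutation test at the core of this analysis can be sketched on a single feature (say, one wavelet coefficient per subject). The data and the mean-difference statistic below are illustrative assumptions, not the UNC statistical toolbox:&lt;br /&gt;

```python
import random

def perm_test(group_a, group_b, n_perm=2000, seed=0):
    """Permutation test on the absolute difference of group means."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # relabel subjects at random
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)            # add-one correction avoids p = 0

# One well-separated feature and one overlapping feature (made-up values).
p_sep = perm_test([0.10, 0.20, 0.15, 0.12, 0.18], [0.90, 1.00, 0.95, 1.10, 0.85])
p_same = perm_test([0.10, 0.20, 0.15, 0.12, 0.18], [0.14, 0.19, 0.11, 0.16, 0.17])
```

In practice one such p-value is computed per surface location and then corrected for multiple comparisons before being displayed as a P-value map.&lt;br /&gt;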
&lt;br /&gt;
* Population Analysis of Anatomical Variability&lt;br /&gt;
&lt;br /&gt;
In contrast to shape-based segmentation, which utilizes a statistical model of the shape variability in one population (typically based on Principal Component Analysis), we are interested in identifying and characterizing differences between two sets of shape examples. We use a discriminative framework to characterize the differences in shape by training a classifier function and studying its sensitivity to small perturbations in the input data. An additional benefit is that the resulting classifier function can be used to label new examples as belonging to one of the two populations, e.g., for early detection in population screening or prediction in longitudinal studies. We have implemented stand-alone code for training a classifier, jackknifing, and permutation testing, and are currently porting the software to ITK. We have also started exploring alternative, surface-based descriptors that promise to improve our ability to detect and characterize subtle differences in the shape of anatomical structures due to diseases such as schizophrenia.&lt;br /&gt;
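The jackknife evaluation mentioned above can be sketched with a deliberately simple stand-in classifier (nearest group mean) on made-up shape descriptors; the actual system trains a more powerful classifier function:&lt;br /&gt;

```python
def nearest_mean_predict(train, labels, x):
    """Classify x by the closer of the group mean descriptors."""
    means = {}
    for lab in set(labels):
        pts = [t for t, l in zip(train, labels) if l == lab]
        means[lab] = [sum(coord) / len(pts) for coord in zip(*pts)]
    return min(means, key=lambda lab: sum((a - b) ** 2
                                          for a, b in zip(means[lab], x)))

def jackknife_accuracy(data, labels):
    """Leave-one-out: retrain without each example, then try to label it."""
    hits = 0
    for i in range(len(data)):
        train = data[:i] + data[i + 1:]
        labs = labels[:i] + labels[i + 1:]
        hits += nearest_mean_predict(train, labs, data[i]) == labels[i]
    return hits / len(data)

# Two well-separated toy "shape descriptor" groups.
data = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.3], [2.0, 2.0], [2.1, 1.9], [1.8, 2.2]]
labels = [0, 0, 0, 1, 1, 1]
acc = jackknife_accuracy(data, labels)
```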
&lt;br /&gt;
* Shape Analysis with Overcomplete Wavelets&lt;br /&gt;
&lt;br /&gt;
In this work, we extend Euclidean wavelets to the sphere. The resulting over-complete spherical wavelets are invariant to rotation of the spherical image parameterization. We apply the over-complete spherical wavelets to cortical folding development and show significantly more consistent results, as well as improved sensitivity, compared with the previously used bi-orthogonal spherical wavelet. In particular, we are able to detect developmental asymmetry between the left and right hemispheres.&lt;br /&gt;
&lt;br /&gt;
* Shape-based Segmentation and Registration&lt;br /&gt;
&lt;br /&gt;
When there is little or no contrast along the boundaries of different regions, standard image segmentation algorithms perform poorly, and segmentation is done manually using prior knowledge of the shape and relative location of the underlying structures. We have proposed an automated approach guided by the covariant shape deformations of neighboring structures, which serve as an additional source of prior knowledge. Captured by a shape atlas, these deformations are transformed into a statistical model using the logistic function. The mapping between atlas and image space, structure boundaries, anatomical labels, and image inhomogeneities are estimated simultaneously within an expectation-maximization formulation of the maximum a posteriori (MAP) estimation problem. These results are then fed into an Active Mean Field approach, which treats them as priors for a Mean Field approximation with a curve-length prior. Compared to thresholding the initial likelihoods, our method filters out noise, and it captures multiple structures at once (in the brain, both the major brain compartments and subcortical structures are obtained) because it naturally evolves families of curves. The algorithm is currently implemented in 3D Slicer Version 2.6, and a beta version is available in 3D Slicer Version 3.&lt;br /&gt;
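The logistic mapping mentioned above can be illustrated by converting a signed distance to a structure boundary into a spatial prior probability; the slope value below is a made-up parameter for illustration, not the one used in the algorithm:&lt;br /&gt;

```python
import math

def boundary_prior(signed_distance, slope=1.5):
    """P(voxel inside the structure); negative distance means inside."""
    return 1.0 / (1.0 + math.exp(slope * signed_distance))

# Deep inside the structure the prior approaches 1, on the boundary it is
# exactly 0.5, and far outside it approaches 0.
inside = boundary_prior(-3.0)
on_boundary = boundary_prior(0.0)
outside = boundary_prior(3.0)
```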
&lt;br /&gt;
* Spherical Wavelets&lt;br /&gt;
&lt;br /&gt;
In this project, we apply a spherical wavelet transformation to extract shape features of cortical surfaces reconstructed from magnetic resonance images (MRI) of a set of subjects. The spherical wavelet transformation can characterize the underlying functions locally in both space and frequency, in contrast to spherical harmonics, which have a global basis set. We perform principal component analysis (PCA) on these wavelet shape features to study patterns of shape variation within a normal population from coarse to fine resolution. In addition, we study the development of cortical folding in newborns using the Gompertz model in the wavelet domain, allowing us to characterize the order of development of large-scale and finer folding patterns independently. We develop an efficient method to estimate the regularized Gompertz model based on the Broyden&#8211;Fletcher&#8211;Goldfarb&#8211;Shanno (BFGS) approximation. Promising results are presented using both PCA and the folding development model in the wavelet domain. The cortical folding development model provides quantitative anatomical information regarding macroscopic cortical folding development and may be of potential use as a biomarker for early diagnosis of neurological deficits in newborns.&lt;br /&gt;
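The Gompertz model and its least-squares estimation can be sketched on synthetic data. For brevity, the sketch below uses plain finite-difference gradient descent rather than BFGS and omits the regularization term; the true parameters and sampling grid are made up for the illustration:&lt;br /&gt;

```python
import math

def gompertz(t, A, b, c):
    """Gompertz growth curve y(t) = A * exp(-b * exp(-c * t))."""
    return A * math.exp(-b * math.exp(-c * t))

def sq_loss(params, ts, ys):
    return sum((gompertz(t, *params) - y) ** 2 for t, y in zip(ts, ys))

def fit_gompertz(ts, ys, init=(1.0, 1.0, 1.0), steps=6000, lr=0.01, eps=1e-5):
    """Least-squares fit by central-difference gradient descent."""
    params = list(init)
    for _ in range(steps):
        grad = []
        for i in range(3):
            up = params[:]; up[i] += eps
            dn = params[:]; dn[i] -= eps
            grad.append((sq_loss(up, ts, ys) - sq_loss(dn, ts, ys)) / (2 * eps))
        params = [p - lr * g for p, g in zip(params, grad)]
    return params

ts = [0.5 * i for i in range(21)]                  # "age" samples
ys = [gompertz(t, 1.5, 2.0, 1.0) for t in ts]      # noiseless synthetic data
fitted = fit_gompertz(ts, ys)
```

Fitting one such curve per wavelet coefficient is what allows the coarse-scale and fine-scale folding patterns to be given independent developmental timetables.&lt;br /&gt;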
&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
* MIT: Polina Golland, Kilian Pohl, Sandy Wells, Eric Grimson, Mert R. Sabuncu&lt;br /&gt;
* UNC: Martin Styner, Ipek Oguz, Xavier Barbero &lt;br /&gt;
* Utah: Ross Whitaker, Guido Gerig, Suyash Awate, Tolga Tasdizen, Tom Fletcher, Joshua Cates, Miriah Meyer &lt;br /&gt;
* GaTech: Allen Tannenbaum, John Melonakos, Vandana Mohan, Tauseef ur Rehman, Shawn Lankton, Samuel Dambreville, Yi Gao, Romeil Sandhu, Xavier Le Faucheur, James Malcolm &lt;br /&gt;
* Isomics: Steve Pieper &lt;br /&gt;
* GE: Bill Lorensen, Jim Miller &lt;br /&gt;
* Kitware: Luis Ibanez, Karthik Krishnan&lt;br /&gt;
* UCLA: Arthur Toga, Michael J. Pan, Jagadeeswaran Rajendiran &lt;br /&gt;
* BWH: Sylvain Bouix, Motoaki Nakamura, Min-Seong Koo, Martha Shenton, Marc Niethammer, Jim Levitt, Yogesh Rathi, Marek Kubicki, Steven Haker&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:StructuralImageAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==fMRI Analysis (Golland)==&lt;br /&gt;
===Progress===&lt;br /&gt;
One of the major goals in analysis of fMRI data is the detection of&lt;br /&gt;
functionally homogeneous networks in the brain. Over the past year, we&lt;br /&gt;
demonstrated a method for identifying large-scale networks in brain&lt;br /&gt;
activation that simultaneously estimates the optimal representative&lt;br /&gt;
time courses that summarize the fMRI data well and the partition of&lt;br /&gt;
the volume into a set of disjoint regions that are best explained by&lt;br /&gt;
these representative time courses. &lt;br /&gt;
&lt;br /&gt;
In the classical functional connectivity analysis, networks of&lt;br /&gt;
interest are defined based on correlation with the mean time course of&lt;br /&gt;
a user-selected `seed' region. Further, the user has to also specify a&lt;br /&gt;
subject-specific threshold at which correlation values are deemed&lt;br /&gt;
significant. In this project, we instead estimate the representative&lt;br /&gt;
time courses and the corresponding partition of the volume jointly.&lt;br /&gt;
This approach to functional connectivity analysis offers two&lt;br /&gt;
advantages. First, it&lt;br /&gt;
removes the sensitivity of the analysis to the details of the seed&lt;br /&gt;
selection. Second, it substantially simplifies group analysis by&lt;br /&gt;
eliminating the need for the subject-specific threshold. Our&lt;br /&gt;
experimental results indicate that the functional segmentation&lt;br /&gt;
provides a robust, anatomically meaningful and consistent model for&lt;br /&gt;
functional connectivity in fMRI.&lt;br /&gt;
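The joint estimation described above can be sketched as a k-means-style alternation on toy voxel time courses; this illustrates the idea of simultaneously estimating representatives and the partition, and is not the published method:&lt;br /&gt;

```python
def cluster_time_courses(courses, reps, iters=10):
    """Alternate voxel-to-representative assignment and representative updates."""
    labels = []
    for _ in range(iters):
        # Assignment: each time course goes to its best-matching representative.
        labels = []
        for tc in courses:
            dists = [sum((a - b) ** 2 for a, b in zip(tc, r)) for r in reps]
            labels.append(dists.index(min(dists)))
        # Update: each representative becomes the mean course of its region.
        for k in range(len(reps)):
            members = [tc for tc, lab in zip(courses, labels) if lab == k]
            if members:
                reps[k] = [sum(v) / len(members) for v in zip(*members)]
    return labels, reps

# Four toy voxel time courses drawn from two underlying activation patterns.
courses = [[1.0, 0.0, 1.0, 0.0], [0.9, 0.1, 1.1, 0.0],
           [0.0, 1.0, 0.0, 1.0], [0.1, 0.9, 0.0, 1.1]]
labels, reps = cluster_time_courses(courses, [[1.0, 0.0, 0.0, 0.0],
                                              [0.0, 1.0, 0.0, 0.0]])
```

No seed region or correlation threshold appears anywhere in the loop, which is exactly the point made above.&lt;br /&gt;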
&lt;br /&gt;
We are currently exploring the applications of this methodology to&lt;br /&gt;
characterizing connectivity in the rest-state data in clinical&lt;br /&gt;
populations. We are also comparing the empirical findings with the&lt;br /&gt;
results of ICA decomposition, which is commonly used for data-driven&lt;br /&gt;
fMRI analysis. Our goal in this study is to identify differences in&lt;br /&gt;
connectivity between the patient populations and normal controls.&lt;br /&gt;
&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
* MIT: Polina Golland, Danial Lashkari, Bryce Kim&lt;br /&gt;
* Harvard/BWH: Sylvain Bouix, Martha Shenton, Marek Kubicki&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:fMRIAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==NA-MIC Kit Theme (Schroeder)==&lt;br /&gt;
===Progress===&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
* Kitware - Will Schroeder (Core 2 PI), Sebastien Barre, Luis Ibanez, Bill Hoffman&lt;br /&gt;
* GE - Jim Miller, Xiaodong Tao&lt;br /&gt;
* Isomics - Steve Pieper&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC-Kit here on the NA-MIC wiki].&lt;br /&gt;
==Other Projects==&lt;br /&gt;
Any Project(s) not covered by the 8 sections above&lt;br /&gt;
&lt;br /&gt;
==Highlights(Schroeder)==&lt;br /&gt;
===EM Segmenter or TBD===&lt;br /&gt;
===DTI progress or TBD===&lt;br /&gt;
===Outreach (Gollub)===&lt;br /&gt;
&lt;br /&gt;
NAMIC outreach is a joint effort of Cores 4, 5 and 6.  The various mechanisms by which we ensure that the tools developed by NAMIC are rapidly and successfully deployed to the widest possible extent within the scientific community are closely integrated.  This begins with the immediate posting of all software tools, interim updates and associated documentation via the NAMIC and Slicer wiki pages (links).  The concerted effort to provide a harmonious visualization and analysis platform (Slicer 3) that enables the integration of the software algorithms of all Core 1 laboratories drives the sequence of development of training materials.  With the January 2008 release of Slicer 3 in beta format, we prepared the first of the Slicer 3 based PowerPoint tutorials, which guide new users through the process of loading, interacting with, and saving data in Slicer 3.  Given the intense and successful effort at engineering this platform to facilitate the integration of new command-line image analysis modules, our second tutorial targeted software developers.  The &amp;quot;Hello World&amp;quot; tutorial guides a programmer step-by-step through the process of integrating a command line tool into Slicer 3.  Both tutorials are available via the web (link).  These tutorials have been thoroughly tested by using them in large workshops (see next) to ensure that they are robust across platforms (Linux, Mac, PC) and can be used successfully by users from a wide range of training backgrounds.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In June of 2007, as a satellite event to the international Organization for Human Brain Mapping annual meeting in Chicago, IL, we ran an 8-hour workshop on analysis of diffusion imaging data (link); it was our final Slicer workshop based on the Slicer 2.7 release.  The workshop filled rapidly after posting; the 50 participants represented 9 countries, 14 states within the US, and 40 different laboratories, including 2 NIH institutes.  The single &amp;quot;no-show&amp;quot; was due to a European flight cancellation.  The attendees, whose backgrounds spanned basic and clinical neuroscience, physics, image processing, and computer science, and who ranged from full professors to new graduate students, were very comfortable learning together.  The feedback from the workshop attendees was uniformly positive, with 100% reporting that they would recommend the workshop to others and 50% planning to apply the tools and information they learned to their own work.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In January 2008 we debuted the &amp;quot;Hello World&amp;quot; tutorial at the NAMIC AHM in Salt Lake City to an audience of our project members and collaborators, and the constructive feedback was used to make significant improvements in the presentation and delivery of this material.  In February 2008 we debuted the user tutorial at a workshop hosted by the Surgical Planning Laboratory at BWH; again, this presentation was used to make significant improvements in the presentation and delivery of the material.  In April of 2008 we ran an all-day workshop, hosted by UNC (get details right), for users and developers that incorporated both tutorials.  It was attended by approximately 20 individuals from a wide range of backgrounds.  Time was taken to ensure that all participants gained sufficient understanding of the new software to use it successfully following the workshop.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
This year saw the publication of a peer-reviewed manuscript that describes the NAMIC approach to outreach, including our multi-disciplinary approach, our integration of theory into practice as driven by a clinical goal, and the translation of concepts into skills through interactive instructor-led training sessions (Pujol S, Kikinis R, Gollub R: Lowering the barriers inherent in translating advances in neuroimage analysis to clinical research applications, Academic Radiology 15: 114-118, 2008, add link to Publication DB).&lt;br /&gt;
* Text here about Project Events 5 &amp;amp; 6 from Tina if not already included elsewhere.&lt;br /&gt;
* Text here about the MICCAI Open Source Workshop if not already included elsewhere (Steve?)&lt;br /&gt;
* Slicer IGT event December 2007 (tina?)&lt;br /&gt;
* Wiki to web&lt;br /&gt;
* Impact as measured by number of downloads of tutorial materials (help someone)&lt;br /&gt;
* Should the DTI tractography validation project be written up somewhere, if so where?  I will do it if it isn't already assigned.&lt;br /&gt;
&lt;br /&gt;
==Impact and Value to Biocomputing (Miller)==&lt;br /&gt;
NA-MIC impacts Biocomputing through a variety of mechanisms.  First,&lt;br /&gt;
NA-MIC produces scientific results, methodologies, workflows,&lt;br /&gt;
algorithms, imaging platforms, and software engineering tools and&lt;br /&gt;
paradigms in an open environment that contributes directly to the body of&lt;br /&gt;
knowledge available to the field. Second, NA-MIC science and&lt;br /&gt;
technology enables the entire medical imaging community to build on&lt;br /&gt;
NA-MIC results, methods, and techniques, to concentrate on the new&lt;br /&gt;
science instead of developing supporting infrastructure, to leverage&lt;br /&gt;
NA-MIC scientists and engineers to adapt NA-MIC technology to new&lt;br /&gt;
problem domains, and to leverage NA-MIC infrastructure to distribute&lt;br /&gt;
their own technology to a larger community.&lt;br /&gt;
&lt;br /&gt;
===Impact within the Center===&lt;br /&gt;
Within the center, NA-MIC has formed a community around its software&lt;br /&gt;
engineering tools, imaging platforms, algorithms, and clinical&lt;br /&gt;
workflows. The NA-MIC calendar includes the All Hands Meeting and&lt;br /&gt;
Winter Project Week, the Spring Algorithm Meeting, the Summer Project&lt;br /&gt;
Week, Slicer3 Mini-Retreats, Core Site Visits, Training Workshops, and weekly telephone&lt;br /&gt;
conferences.&lt;br /&gt;
&lt;br /&gt;
The NA-MIC software engineering tools (CMake, Dart, CTest, CPack) have&lt;br /&gt;
enabled the development and distribution of a cross-platform, nightly&lt;br /&gt;
tested, end-user application, Slicer3, that is a complex union of&lt;br /&gt;
novel application code, visualization tools (VTK), imaging libraries&lt;br /&gt;
(ITK, TEEM), user interface libraries (Tk, KWWidgets), and scripting&lt;br /&gt;
languages (TCL, Python). The NA-MIC software engineering tools have been&lt;br /&gt;
essential in the development and distribution of the Slicer3 imaging&lt;br /&gt;
platform to the NA-MIC community.&lt;br /&gt;
&lt;br /&gt;
NA-MIC's end-user application, Slicer3, supports the research within&lt;br /&gt;
NA-MIC by providing a base application for visualization and data&lt;br /&gt;
management. Slicer3 also supports the research within NA-MIC by&lt;br /&gt;
providing plugin mechanisms which allow researchers to quickly and&lt;br /&gt;
easily integrate and distribute their technology with Slicer3. Slicer3&lt;br /&gt;
is available to all center participants and the external community&lt;br /&gt;
through its source code repository, official binary releases, and&lt;br /&gt;
unofficial nightly binary snapshots.&lt;br /&gt;
&lt;br /&gt;
NA-MIC drives the development of platforms and algorithms through the&lt;br /&gt;
needs and research of its DBPs. Each DBP has selected specific&lt;br /&gt;
workflows and roadmaps as focal points for development with a goal of&lt;br /&gt;
providing the community with complete end-to-end solutions using&lt;br /&gt;
NA-MIC tools. The community will be able to reproduce these workflows&lt;br /&gt;
and roadmaps in their own research programs.&lt;br /&gt;
&lt;br /&gt;
NA-MIC algorithms are designed and used to address specific needs of&lt;br /&gt;
the DBPs. Multiple solution paths are explored and compared within&lt;br /&gt;
NA-MIC, resulting in recommendations to the field. The NA-MIC&lt;br /&gt;
algorithm groups collaborate and orchestrate the solutions to the&lt;br /&gt;
DBP workflows and roadmaps.&lt;br /&gt;
&lt;br /&gt;
===Impact within NIH Funded Research===&lt;br /&gt;
Within NIH funded research, NA-MIC is the NCBC collaborating center for three R01's: &amp;quot;Automated FE Mesh Development&amp;quot;, &amp;quot;Measuring Alcohol and Stress Interactions with Structural and Perfusion MRI&amp;quot;, and &amp;quot;An Integrated System for Image-Guided Radiofrequency Ablation of Liver Tumors&amp;quot;. Several other proposals have been submitted and are under&lt;br /&gt;
evaluation for the &amp;quot;Collaborations with NCBC PAR&amp;quot;. NA-MIC also&lt;br /&gt;
collaborates on the Slicer3 platform with the NIH funded Neuroimage&lt;br /&gt;
Analysis Center and the National Center for Image-Guided Therapy. The&lt;br /&gt;
NIH funded &amp;quot;BRAINS Morphology and Image Analysis&amp;quot; project is also&lt;br /&gt;
leveraging NA-MIC and Slicer3 technology. NA-MIC collaborates with the&lt;br /&gt;
NIH funded Neuroimaging Informatics Tools and Resources Clearinghouse&lt;br /&gt;
on distribution of Slicer3 plugin modules.&lt;br /&gt;
&lt;br /&gt;
===National and International Impact===&lt;br /&gt;
NA-MIC events and tools garner national and international interest.&lt;br /&gt;
Over 100 researchers participated in the NA-MIC All Hands Meeting and&lt;br /&gt;
Winter Project Week in January 2008. Many of these participants were&lt;br /&gt;
from outside of NA-MIC, attending the meetings to gain access to the&lt;br /&gt;
NA-MIC tools and researchers. These external researchers are&lt;br /&gt;
contributing ideas and technology back into NA-MIC. In fact, a&lt;br /&gt;
breakout session at the Winter Project Week on &amp;quot;Geometry and Topology&lt;br /&gt;
Processing of Meshes&amp;quot; was organized by four researchers from outside&lt;br /&gt;
of NA-MIC.&lt;br /&gt;
&lt;br /&gt;
Components of the NA-MIC kit are used globally.  The software&lt;br /&gt;
engineering tools of CMake, Dart 2 and CTest are used by many open&lt;br /&gt;
source projects and commercial applications. For example, the K&lt;br /&gt;
Desktop Environment (KDE) for Linux and Unix workstations uses CMake&lt;br /&gt;
and Dart. KDE is one of the largest open source projects in the&lt;br /&gt;
world. Many open source projects and commercial products are&lt;br /&gt;
benefiting from the NA-MIC related contributions to ITK and&lt;br /&gt;
VTK. Finally, Slicer 3 is being used as an image analysis&lt;br /&gt;
platform in several fields outside of medical image analysis, in&lt;br /&gt;
particular, biological image analysis, astronomy, and industrial&lt;br /&gt;
inspection.&lt;br /&gt;
&lt;br /&gt;
NA-MIC science is recognized by the medical imaging community. Over&lt;br /&gt;
100 NA-MIC related publications are listed on PubMed. Many of these&lt;br /&gt;
publications are in the most prestigious journals and conferences in the&lt;br /&gt;
field. Portions of the DBP workflows and roadmaps are already being&lt;br /&gt;
utilized by researchers in the broader community and in the&lt;br /&gt;
development of commercial products.&lt;br /&gt;
&lt;br /&gt;
NA-MIC sponsored several events to promote NA-MIC tools and&lt;br /&gt;
methodologies.  NA-MIC co-sponsored the &amp;quot;Third Annual Open Source&lt;br /&gt;
Workshop&amp;quot; at the Medical Image Computing and Computer-Assisted&lt;br /&gt;
Intervention (MICCAI) 2007 conference.  The proceedings of the&lt;br /&gt;
workshop are published on the electronic Insight Journal, another&lt;br /&gt;
NIH-funded activity. NA-MIC sponsored three training workshops on&lt;br /&gt;
NA-MIC tools for the Biocomputing community in this fiscal year and&lt;br /&gt;
plans to hold sessions at upcoming MICCAI and RSNA conferences.&lt;br /&gt;
&lt;br /&gt;
==NA-MIC Timeline (Whitaker)==&lt;br /&gt;
&lt;br /&gt;
==Appendix A Publications (Kapur)==&lt;br /&gt;
These will be mined from the SPL publications database.  All core PIs need to ensure that all NA-MIC publications are in the publications database by May 15.&lt;br /&gt;
&lt;br /&gt;
==Appendix B EAB Report and Response (Kapur)==&lt;br /&gt;
===EAB Report===&lt;br /&gt;
===Response to EAB Report===&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2008_Annual_Scientific_Report&amp;diff=24626</id>
		<title>2008 Annual Scientific Report</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2008_Annual_Scientific_Report&amp;diff=24626"/>
		<updated>2008-05-15T15:16:21Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Clinical Component (Fichtinger) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[2008_Progress_Report]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Guidelines for preparation=&lt;br /&gt;
&lt;br /&gt;
*[[2008_Progress_Report#Scientific Report Timeline]] - Main point is that May 15 is the date by which all sections below need to be completed.  No extensions are possible.&lt;br /&gt;
*DBPs - If there is work outside of the roadmap projects that you would like to report, you are welcome to create a separate section for it under &amp;quot;Other&amp;quot;.  &lt;br /&gt;
*The outline for this report is similar to the 2007 report, which is provided here for reference: [[2007_Annual_Scientific_Report]].&lt;br /&gt;
*In preparing summaries for each of the 8 topics in this report, please leverage the detailed pages for projects provided here: [[NA-MIC_Internal_Collaborations]].&lt;br /&gt;
*Publications will be mined from the SPL publications database. All core PIs need to ensure that all NA-MIC publications are in the publications database by May 15.&lt;br /&gt;
&lt;br /&gt;
=Introduction (Tannenbaum)=&lt;br /&gt;
&lt;br /&gt;
The National Alliance for Medical Imaging Computing (NA-MIC) is now in its fourth year. The Center comprises a multi-institutional, interdisciplinary team of computer scientists, software engineers, and medical investigators who have come together to develop and apply computational tools for the analysis and visualization of medical imaging data. A further purpose of the Center is to provide infrastructure and environmental support for the development of computational algorithms and open source technologies, and to oversee the training and dissemination of these tools to the medical research community. The driving biological projects (DBPs) of the Center's first three years were inspired by schizophrenia research. In the fourth year, new DBPs have been added. Three are centered around diseases of the brain: (a) brain lesion analysis in neuropsychiatric systemic lupus erythematosus; (b) a study of cortical thickness for autism; and (c) stochastic tractography for VCFS. In a new direction, we have added a DBP on the prostate: brachytherapy needle positioning robot integration.&lt;br /&gt;
&lt;br /&gt;
We briefly summarize the work of NAMIC during the four years of its existence. In year one of the Center, alliances were forged among the cores and constituent groups in order to integrate the efforts of the cores and to define the kinds of tools needed for specific imaging applications. The second year emphasized the identification of the key research thrusts that cut across cores and were driven by the needs and requirements of the DBPs. This led to the formulation of the Center's four main themes: Diffusion Tensor Analysis, Structural Analysis, Functional MRI Analysis, and the integration of newly developed tools into the NA-MIC Tool Kit. The third year of Center activity was devoted to continuing these collaborative efforts in order to deliver solutions to the various brain-oriented DBPs.&lt;br /&gt;
&lt;br /&gt;
Year four has seen progress in the work of our new DBPs. As alluded to above, these include work on neuropsychiatric disorders such as Systemic Lupus Erythematosus (MIND Institute, University of New Mexico), Velocardiofacial Syndrome (Harvard), and Autism (University of North Carolina, Chapel Hill), as well as the prostate interventional work (Johns Hopkins and Queens Universities). We already have a number of publications, as indicated on our publications page, and software development is continuing as well.&lt;br /&gt;
&lt;br /&gt;
In the next section (Section 3), we summarize this year&#8217;s progress on the four roadmap projects listed above: Section 3.1, stochastic tractography for Velocardiofacial Syndrome; Section 3.2, brachytherapy needle positioning for the prostate; Section 3.3, brain lesion analysis in neuropsychiatric systemic lupus erythematosus; and Section 3.4, cortical thickness for autism.  Next, in Section 4, we describe recent work on the four infrastructure topics: diffusion image analysis (Section 4.1), structural analysis (Section 4.2), functional MRI analysis (Section 4.3), and the NA-MIC Toolkit (Section 4.4).  In Section 4.5, we outline some of the other key projects; in Section 4.6, some key highlights, including the integration of the EM Segmenter into Slicer; and in Section 4.7, the impact of biocomputing at three different levels: within the Center, within the NIH-funded research community, and externally, to a national and international community. The final section of this report, Section 4.8, provides a timeline of Center activities.&lt;br /&gt;
&lt;br /&gt;
=Clinical Roadmap Projects=&lt;br /&gt;
==Roadmap Project: Stochastic Tractography for VCFS (Kubicki)==&lt;br /&gt;
===Overview (Kubicki)===&lt;br /&gt;
The goal of this project is to create an end-to-end application that would be useful in evaluating anatomical connectivity between segmented cortical regions of the brain. The ultimate goal of our program is to understand anatomical connectivity similarities and differences between the genetically related schizophrenia and velocardiofacial syndrome. Thus we plan to use the &amp;quot;stochastic tractography&amp;quot; tool to analyze abnormalities in the integrity, or connectivity, provided by the arcuate fasciculus, a fiber bundle involved in language processing, in schizophrenia and VCFS.&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Golland)===&lt;br /&gt;
At the core of this project is the stochastic tractography algorithm&lt;br /&gt;
developed and implemented in collaboration between MIT and&lt;br /&gt;
BWH. Stochastic Tractography is a Bayesian approach to estimating&lt;br /&gt;
nerve fiber tracts from DTI images.&lt;br /&gt;
&lt;br /&gt;
We first use the diffusion tensor at each voxel in the volume to&lt;br /&gt;
construct a local probability distribution for the fiber direction&lt;br /&gt;
around the principal direction of diffusion. We then sample the tracts&lt;br /&gt;
between two user-selected ROIs, by simulating a random walk between&lt;br /&gt;
the regions, based on the local transition probabilities inferred from&lt;br /&gt;
the DTI image.&lt;br /&gt;
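The random-walk sampling described above can be sketched in 2-D with a toy tensor field whose principal direction points along +x everywhere; the ROI geometry, step count, and angular noise below are made-up parameters for illustration, not those of the Slicer module:&lt;br /&gt;

```python
import math, random

def principal_direction(x, y):
    """Toy tensor field: the principal diffusion direction is +x everywhere."""
    return 0.0

def run_walk(rng, start=(0.0, 0.0), steps=30, sigma=0.3):
    """One sampled tract: unit steps drawn around the local fiber direction."""
    x, y = start
    for _ in range(steps):
        theta = principal_direction(x, y) + rng.gauss(0.0, sigma)
        x += math.cos(theta)
        y += math.sin(theta)
        if x >= 20.0 and abs(y) <= 5.0:          # reached the target ROI
            return True
    return False

# The fraction of walks from the seed that reach the target ROI serves as a
# score for the connection between the two regions.
rng = random.Random(1)
n_walks = 500
connection_score = sum(run_walk(rng) for _ in range(n_walks)) / n_walks
```

Excluding ventricles and gray matter, as described above, amounts to terminating any walk that steps into a masked-out voxel.&lt;br /&gt;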
&lt;br /&gt;
The resulting collection of fibers and the associated FA values&lt;br /&gt;
provide useful statistics on the properties of connections between the&lt;br /&gt;
two regions. To constrain the sampling process to the relevant white&lt;br /&gt;
matter region, we use atlas-based segmentation to label ventricles and&lt;br /&gt;
gray matter and to exclude them from the search space. As such, this&lt;br /&gt;
step relies heavily on the registration and segmentation functionality&lt;br /&gt;
in Slicer.&lt;br /&gt;
&lt;br /&gt;
Over the last year, we first tested the algorithm on the dataset of&lt;br /&gt;
schizophrenia subjects already available to NAMIC, acquired at&lt;br /&gt;
1.5T. This step allowed us to optimize the algorithm for our dataset, as&lt;br /&gt;
well as to develop a data analysis pipeline that would then be&lt;br /&gt;
easily transferable to other image sets and structures.&lt;br /&gt;
&lt;br /&gt;
The next step, also accomplished this past year, was to apply the&lt;br /&gt;
algorithm to the new, higher-resolution NAMIC dataset, and to study&lt;br /&gt;
smaller white matter connections, including the cingulum bundle, arcuate&lt;br /&gt;
fasciculus, uncinate fasciculus, and internal capsule. This step was&lt;br /&gt;
accomplished and the data presented at the Santa Fe meeting in October&lt;br /&gt;
2007.&lt;br /&gt;
&lt;br /&gt;
Upon completion of the testing phase, we began analysis of the arcuate fasciculus, a language-related fiber bundle, in the new high-resolution 3T dataset. Our current work focuses on improving the parameterization of the tracts in order to obtain FA measurements along the tracts.&lt;br /&gt;
&lt;br /&gt;
===Engineering Component (Davis)===&lt;br /&gt;
The Stochastic Tractography Slicer module has been finished and was presented at the AHM in SLC. It is now part of Slicer 2.8 and Slicer 3, and module documentation has also been created. Current engineering efforts concentrate on maintaining the module, optimizing it to work with other data formats, and adding new functionality, such as better registration, distortion correction, and ways of extracting and measuring FA along the tracts.&lt;br /&gt;
&lt;br /&gt;
===Clinical Component (Kubicki)===&lt;br /&gt;
Over the last year, we tested the algorithm on the already available NAMIC dataset of schizophrenia subjects acquired at 1.5T. The anterior limb of the internal capsule, a large structure connecting the thalamus with the frontal lobe, was extracted and analyzed in a group of 20 schizophrenics and 20 control subjects. We presented results showing group differences in FA values at the ACNP symposium in December 2007. Next, stochastic tractography was tested and optimized for the new high-resolution DTI dataset acquired on a 3T GE magnet.&lt;br /&gt;
&lt;br /&gt;
Upon completion of the testing phase, we began analysis of the arcuate fasciculus, a language-related fiber bundle, in 20 controls and 20 chronic schizophrenics. For each subject, we performed white matter segmentation and extracted the regions interconnected by the arcuate fasciculus (the inferior frontal and superior temporal gyri), as well as another ROI to guide the tract (a &amp;quot;waypoint&amp;quot; ROI). We presented the preliminary results of the probabilistic tractography and the statistics of FA extracted for each tract for a small set of 7 patients and 12 controls at the AHM in January 2008. The full study is currently underway.&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:Harvard:Brain_Segmentation_Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Brachytherapy Needle Positioning Robot Integration (Fichtinger)==&lt;br /&gt;
===Overview (Fichtinger)===&lt;br /&gt;
Numerous studies have demonstrated the efficacy of image-guided&lt;br /&gt;
needle-based therapy and biopsy in the management of prostate&lt;br /&gt;
cancer. The accuracy of traditional prostate interventions performed using&lt;br /&gt;
transrectal ultrasound (TRUS) is limited by image fidelity, needle&lt;br /&gt;
template guides, needle deflection and tissue deformation. Magnetic Resonance&lt;br /&gt;
Imaging (MRI) is an ideal modality for guiding and monitoring&lt;br /&gt;
such interventions due to its excellent visualization of the prostate, its&lt;br /&gt;
sub-structure and surrounding tissues. &lt;br /&gt;
&lt;br /&gt;
We have designed a comprehensive robotic assistant system that allows prostate biopsy and brachytherapy&lt;br /&gt;
procedures to be performed entirely inside a 3T closed MRI scanner. Under the NAMIC initiative, the image computing, visualization, intervention planning, and kinematic planning interface is being accomplished with an open source system built on the NAMIC toolkit and its components, such as ITK.&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Tannenbaum)===&lt;br /&gt;
We have worked on both the segmentation and the registration of the prostate from MRI and ultrasound data. We explain each of the steps now.&lt;br /&gt;
&lt;br /&gt;
====Prostate Segmentation====&lt;br /&gt;
&lt;br /&gt;
We first must extract the prostate. We have considered three possible methods: a combination of Cellular Automata (CA, also known as Grow Cut) with Geometric Active Contour (GAC) methods; employing an ellipsoid to match the prostate in the 3D image; and a shape-based approach using spherical wavelets. More details are given below, and images and further details may be found at [[Projects:ProstateSegmentation|GaTech Algorithm Prostate Segmentation]].&lt;br /&gt;
&lt;br /&gt;
1. A cellular automata algorithm is used to give an initial segmentation. It begins with a rough manual initialization and then iteratively classifies all pixels into object and background until convergence. It effectively overcomes the problems of weak boundaries and inhomogeneity within the object or background. This result is in turn fed into a Geometric Active Contour for finer tuning. We are initially using the edge-based minimal surface approach (the generalization of the standard Geodesic Active Contour model), which seems to give very reasonable results. Both steps of the algorithm are implemented in 3D. An ITK Cellular Automata filter, handling N-D data, has already been completed and submitted to the NA-MIC SandBox.&lt;br /&gt;
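A minimal 1-D sketch of the Grow Cut idea, assuming a simple attenuation function g (illustrative, not the ITK filter's actual update rule): seeded cells repeatedly 'attack' their neighbors, with strength reduced across large intensity differences.&lt;br /&gt;

```python
def grow_cut_1d(intensity, seeds, n_iter=50):
    """Toy 1-D Grow Cut: seeds is {index: label}; label 0 means unlabeled.
    Each cell keeps (label, strength); a neighbor q captures cell p when
    g(|I_p - I_q|) * strength_q exceeds p's current strength."""
    n = len(intensity)
    labels = [seeds.get(i, 0) for i in range(n)]
    strength = [1.0 if i in seeds else 0.0 for i in range(n)]
    max_d = (max(intensity) - min(intensity)) or 1.0
    g = lambda d: 1.0 - d / max_d            # attenuation in [0, 1] (assumed form)
    for _ in range(n_iter):
        new_l, new_s = labels[:], strength[:]
        for p in range(n):
            for q in (p - 1, p + 1):
                if 0 <= q < n:
                    attack = g(abs(intensity[p] - intensity[q])) * strength[q]
                    if attack > new_s[p]:
                        new_l[p], new_s[p] = labels[q], attack
        labels, strength = new_l, new_s
    return labels

# Two seeds: object (label 1) in the bright left half, background (2) in the dark right half.
labels = grow_cut_1d([9, 9, 8, 9, 1, 0, 1, 0], {0: 1, 7: 2})
```

The label frontier settles at the large intensity jump, since attacks across it are strongly attenuated.&lt;br /&gt;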
&lt;br /&gt;
2. Spherical wavelets have proven to be a very natural way of representing 3D shapes that are compact and simply connected (topological spheres). We developed a segmentation framework using this 3D wavelet representation and a multiscale prior. The parameters of our model are the learned shape parameters based on the spherical wavelet coefficients, as well as pose parameters that accommodate shape variability due to a similarity transformation (rotation, scale, translation), which is not explicitly modeled with the shape parameters. We used a region-based energy to drive the evolution of the parametric deformable surface for segmentation. Our segmentation algorithm deforms an initial surface according to the gradient flow that minimizes the energy functional in terms of the pose and shape parameters. Additionally, the optimization method can be applied in a coarse-to-fine manner. Spherical wavelets and conformal mappings are already part of the NA-MIC SandBox.&lt;br /&gt;
&lt;br /&gt;
3. The third method is closely related to the second. It is based on the observation that the prostate may be roughly modeled as an ellipsoid. One can then employ this ellipsoid model, coupled with a local/global segmentation energy approach that we developed this year, as the basis of a segmentation procedure. Because of the local/global nature of the functional and the implicit introduction of scale, this methodology may be very useful for MRI prostate data.&lt;br /&gt;
&lt;br /&gt;
====Prostate Registration====&lt;br /&gt;
&lt;br /&gt;
The registration and segmentation elements of our algorithm are difficult to separate. Thus, for the 3D shape-driven segmentation part, the shapes must first be aligned through a conformal and area-correcting alignment process. The prostate presents a number of difficulties for traditional approaches, since there are no easily discernible landmarks. On the other hand, we observed that the surface of the prostate is roughly half convex and half concave. The concave region may be captured and used to register the shapes; thus we register the whole shape by registering a certain region on it. This concave region is characterized by its negative mean curvature. We treat the mean curvature as a scalar field defined on the surface, and we have extended the Chan-Vese method (in which one wants to separate the means with respect to the regions defined by the interior and exterior of the evolving active contour) to the case at hand on the prostate surface. The method is implemented in C++ and successfully extracts the concave surface region. It could also be used to extract regions on a surface according to any feature characterized by a scalar field defined on the surface.&lt;br /&gt;
&lt;br /&gt;
In order to incorporate the extracted region as landmarks into the registration process, instead of matching two binary images directly, we transform the binary images into a form that highlights the boundary region. This is done by applying a Gaussian function to the (narrow band of the) signed distance function of the binary image. The transformed image enjoys the advantages of both the parametric and implicit representations of shapes: it has a compact description, as the parametric representation does, and, as in the implicit representation, it avoids the correspondence problem. Moreover, we incorporate the extracted concave regions into such images for registration, which leads to a better result.&lt;br /&gt;
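In 1-D the transform just described can be sketched as follows (the sigma value and the discrete distance are illustrative choices, not the values used in the project):&lt;br /&gt;

```python
import math

def boundary_highlight(binary, sigma=1.5):
    """Signed distance to the shape (negative inside, positive outside),
    then a Gaussian of it: the result is an implicit image concentrated
    on a band around the boundary."""
    inside = [i for i, b in enumerate(binary) if b]
    outside = [i for i, b in enumerate(binary) if not b]
    def sdist(i):
        if binary[i]:
            return -min(abs(i - j) for j in outside)
        return min(abs(i - j) for j in inside)
    return [math.exp(-sdist(i) ** 2 / (2 * sigma ** 2)) for i in range(len(binary))]

# A 1-D "binary image" with an object in the middle.
band = boundary_highlight([0, 0, 0, 1, 1, 1, 1, 0, 0, 0])
```

The band peaks next to the object boundary and decays away from it, so two such images can be compared directly without establishing point correspondences.&lt;br /&gt;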
&lt;br /&gt;
Finally, in the past year we have developed a particle filtering approach for the general problem of registering two point sets that differ by a rigid body transformation, which may be very useful for this project. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. We treat motion as a local variation in pose parameters obtained from running several iterations of the standard Iterative Closest Point (ICP) algorithm. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence often found in the local optimizers used to tackle the registration task. In contrast with other techniques, this approach requires no annealing schedule, which reduces computational complexity while maintaining the temporal coherency of the state (no loss of information). Also, unlike most alternative approaches to point set registration, we make no geometric assumptions about the two data sets.&lt;br /&gt;
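The ICP building block referred to above can be sketched in 2-D; with correct closest-point matches, the closed-form rigid update is exact (this toy omits the particle filter's stochastic pose dynamics):&lt;br /&gt;

```python
import math

def icp_step(src, dst):
    """One ICP iteration for 2-D point sets: closest-point correspondences,
    then the closed-form optimal rotation and translation (2-D Procrustes)."""
    pairs = [(p, min(dst, key=lambda q: (p[0]-q[0])**2 + (p[1]-q[1])**2)) for p in src]
    n = len(pairs)
    cx = sum(p[0] for p, _ in pairs) / n; cy = sum(p[1] for p, _ in pairs) / n
    dx = sum(q[0] for _, q in pairs) / n; dy = sum(q[1] for _, q in pairs) / n
    # optimal rotation angle from centered cross/dot sums
    s_cross = sum((p[0]-cx)*(q[1]-dy) - (p[1]-cy)*(q[0]-dx) for p, q in pairs)
    s_dot = sum((p[0]-cx)*(q[0]-dx) + (p[1]-cy)*(q[1]-dy) for p, q in pairs)
    th = math.atan2(s_cross, s_dot)
    c, s = math.cos(th), math.sin(th)
    # rotate about the source centroid, then move it onto the target centroid
    return [(c*(p[0]-cx) - s*(p[1]-cy) + dx, s*(p[0]-cx) + c*(p[1]-cy) + dy) for p in src]

# Target: the source rotated by 10 degrees and shifted; one step recovers it here.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
a = math.radians(10)
dst = [(math.cos(a)*x - math.sin(a)*y + 0.3, math.sin(a)*x + math.cos(a)*y - 0.2)
       for x, y in src]
pts = icp_step(src, dst)
```

In the particle filtering approach, many such locally updated poses are maintained as weighted hypotheses instead of a single estimate.&lt;br /&gt;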
&lt;br /&gt;
===Engineering Component (Hayes)===&lt;br /&gt;
===Clinical Component (Fichtinger)===&lt;br /&gt;
The current system uses a transrectal approach to the prostate. An MRI-compatible manipulator is equipped with a steerable needle guide and an endorectal imaging coil, both tuned for 3T magnets and independent of any particular scanner.&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:JHU:Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Brain Lesion Analysis in Neuropsychiatric Systemic Lupus Erythematosus (Bockholt)==&lt;br /&gt;
===Overview (Bockholt)===&lt;br /&gt;
===Algorithm Component (Whitaker)===&lt;br /&gt;
===Engineering Component (Pieper)===&lt;br /&gt;
===Clinical Component (Bockholt)===&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:MIND:Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Cortical Thickness for Autism (Hazlett)==&lt;br /&gt;
===Overview (Hazlett)===&lt;br /&gt;
&lt;br /&gt;
A primary goal of the UNC DBP is to examine changes in cortical thickness in children with autism compared to typical controls. We want to examine group differences in both local and regional cortical thickness, and we would also like to examine longitudinal changes in the cortex from ages 2-4 years. To accomplish this goal, this project will create an end-to-end application within Slicer3 allowing individual and group analysis of regional and local cortical thickness. This workflow will then be applied to our study data (already collected).&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Styner)===&lt;br /&gt;
&lt;br /&gt;
The basic steps necessary for the cortical thickness application entail, first, tissue segmentation to separate white and gray matter regions; second, cortical thickness measurement; third, cortical correspondence to compare measurements across subjects; and finally, statistical analysis to locally compute group differences.&lt;br /&gt;
&lt;br /&gt;
====Tissue segmentation====&lt;br /&gt;
We have successfully adapted the UNC segmentation tool itkEMS to Slicer, which we use for segmentations of the young brain. We also created a young brain atlas for the current Slicer3 EM Segment module. Tests have been successful, and a comparative study with itkEMS has shown that further parameter optimization is needed to reach the same quality. &lt;br /&gt;
&lt;br /&gt;
====Cortical thickness measurement====&lt;br /&gt;
The UNC algorithm for measuring local cortical thickness, given a labeling of white and gray matter, has been developed into a Slicer3 external module. This module lends itself well to regional analysis of cortical thickness, but less so to local analysis, due to its non-symmetric and sparse measurements. Ongoing development is focusing on a symmetric, Laplacian-based cortical thickness measure suitable for local analysis.&lt;br /&gt;
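The Laplacian-based idea can be sketched on a toy grid: fix the potential to 0 on the white matter boundary and 1 on the pial boundary, relax to a harmonic function, and measure thickness along the potential's streamlines (the strip geometry and iteration count here are illustrative, not the UNC implementation):&lt;br /&gt;

```python
def laplace_strip(rows, cols, n_iter=2000):
    """Jacobi relaxation for the Laplace equation on a strip of 'gray matter':
    potential 0 on the inner boundary (top row), 1 on the outer boundary
    (bottom row); streamlines of the result define a symmetric thickness."""
    u = [[0.0] * cols for _ in range(rows)]
    for c in range(cols):
        u[rows - 1][c] = 1.0                 # outer (pial) boundary fixed at 1
    for _ in range(n_iter):
        new = [row[:] for row in u]
        for r in range(1, rows - 1):         # boundary rows stay fixed
            for c in range(cols):
                left = u[r][c - 1] if c > 0 else u[r][c]
                right = u[r][c + 1] if c + 1 < cols else u[r][c]
                new[r][c] = 0.25 * (u[r - 1][c] + u[r + 1][c] + left + right)
        u = new
    return u

u = laplace_strip(5, 5)
```

On this uniform strip the potential becomes linear between the boundaries, so every streamline has the same length, i.e., a constant thickness.&lt;br /&gt;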
&lt;br /&gt;
====Cortical correspondence (regional)====&lt;br /&gt;
&lt;br /&gt;
For regional correspondence, an existing lobar parcellation atlas is deformably registered using a b-spline registration tool. First tests have been very promising, and the release of the corresponding Slicer 3 registration module is scheduled to be finished within the next month; the regional analysis workflow will be available at that time.&lt;br /&gt;
&lt;br /&gt;
====Cortical correspondence (local)====&lt;br /&gt;
Local cortical correspondence requires a two-step process of white/gray surface inflation followed by group-wise correspondence computation. White matter surface extraction and inflation are currently achieved with an external tool, and developing a Slicer 3 based solution is a goal for the next year. The group-wise correspondence step has been fully solved, and a Slicer 3 module is already available. Evaluation on real data has shown that our method outperforms the widely used FreeSurfer framework. &lt;br /&gt;
&lt;br /&gt;
====Statistical analysis/Hypothesis testing====&lt;br /&gt;
Regional analysis can be done with standard statistical tools such as MANOVA, as there is a limited, relatively small number of regions. Local analysis, on the other hand, needs local non-parametric testing, multiple-comparison correction, and correlative analysis that is not routinely available. We are currently extending the Slicer 3 module designed for statistical shape analysis to this purpose, incorporating a locally applied General Linear Model and a MANCOVA-based testing framework.&lt;br /&gt;
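At a single cortical location, the non-parametric testing mentioned above reduces to a permutation test on a group-difference statistic; a minimal sketch with hypothetical thickness values (multiple-comparison correction omitted):&lt;br /&gt;

```python
import random

def permutation_test(group_a, group_b, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the difference of group means."""
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = group_a + group_b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                    # random relabeling of subjects
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(mean(a) - mean(b)) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)          # add-one to avoid a zero p-value

# Hypothetical local thickness values (mm) for two groups of four subjects.
p = permutation_test([2.1, 2.3, 2.2, 2.4], [3.0, 3.2, 3.1, 2.9])
```

Run at every surface location, this yields the local p-value maps that then require multiple-comparison correction.&lt;br /&gt;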
&lt;br /&gt;
===Engineering Component (Miller, Vachet)===&lt;br /&gt;
&lt;br /&gt;
Several of the algorithms for this Clinical Roadmap project were already in software tools utilizing ITK.  These tools have been refactored to be NA-MIC compatible and repackaged as Slicer3 plugins. Slicer3 has been extended to support this Clinical Roadmap by adding transforms as a parameter type that can be passed to and returned by plugins. Slicer3 registration and resampling modules have been refactored to produce and accept transforms as parameters. Slicer3 has also been extended to support nonlinear transformation types (B-Spline and deformation fields) in its data model.&lt;br /&gt;
&lt;br /&gt;
===Clinical Component (Hazlett)===&lt;br /&gt;
So far, the clinical component of this project has involved interfacing with the algorithms and engineering teams to provide the project specifications, feedback, and data (needed for testing).  During this past year, development and programming work has proceeded satisfactorily, and we anticipate being able to test our project hypotheses about cortical thickness in autism by the end of our project period.  Therefore, the primary accomplishment of this first year has been the development and testing of methods that are necessary for this cortical thickness work pipeline.&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:UNC:Cortical_Thickness_Roadmap here on the NA-MIC wiki].&lt;br /&gt;
&lt;br /&gt;
=Four Infrastructure Topics=&lt;br /&gt;
==Diffusion Image Analysis (Gerig)==&lt;br /&gt;
===Progress===&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:DiffusionImageAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==Structural Analysis (Tannenbaum)==&lt;br /&gt;
===Progress===&lt;br /&gt;
Under Structural Analysis, the main topics of research for NAMIC are structural segmentation, registration techniques and shape analysis. These topics are correlated and research in one often finds application in another. For example, shape analysis can yield useful priors for segmentation, or segmentation and registration can provide structural correspondences for use in shape analysis and so on. &lt;br /&gt;
&lt;br /&gt;
An overview of selected progress highlights under these broad topics follows.&lt;br /&gt;
&lt;br /&gt;
====Structural Segmentation====&lt;br /&gt;
&lt;br /&gt;
* Directional Based Segmentation&lt;br /&gt;
We have proposed a directional segmentation framework for direction-weighted magnetic resonance imagery by augmenting the Geodesic Active Contour framework with directional information. The classical scalar conformal factor is replaced by a factor that incorporates directionality. We showed mathematically that the optimization problem is well-defined when the factor is a Finsler metric. The calculus of variations or dynamic programming may be used to find the optimal curves. This past year, we applied this methodology to extracting the anchor tract (or centerline) of neural fiber bundles. Furthermore, we have applied it, in conjunction with Bayes' rule, to volumetric segmentation for extracting entire fiber bundles. We have also proposed a novel shape prior in the volumetric segmentation to extract tubular fiber bundles.&lt;br /&gt;
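The dynamic-programming option for finding the optimal curve can be sketched with a shortest-path search on an isotropic cost map; the Finsler case would additionally make each edge weight direction-dependent (toy grid, hypothetical costs):&lt;br /&gt;

```python
import heapq

def anchor_path(cost, start, goal):
    """Dijkstra search for the minimal-cost path through a 2-D cost map,
    a direction-blind stand-in for the anchor-tract extraction."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue                              # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# A low-cost "bundle" along the middle row; the optimal path follows it.
cost = [[9, 9, 9, 9],
        [1, 1, 1, 1],
        [9, 9, 9, 9]]
path = anchor_path(cost, (1, 0), (1, 3))
```

Here the minimal-cost path follows the low-cost middle row, playing the role of the anchor tract.&lt;br /&gt;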
&lt;br /&gt;
* Stochastic Segmentation&lt;br /&gt;
&lt;br /&gt;
We have continued work this year on developing new stochastic methods for implementing curvature-driven flows for medical tasks such as segmentation. We can now generalize our results to an arbitrary Riemannian surface, which includes the geodesic active contours as a special case. We are also implementing the directional flows based on the anisotropic conformal factor described above using this stochastic methodology. Our stochastic snake models are based on the theory of interacting particle systems. This brings together the theories of curve evolution and hydrodynamic limits, and as such impacts our growing use of joint methods from probability and partial differential equations in image processing and computer vision. We now have working code written in C++ for the two-dimensional case and have worked out the stochastic model of the general geodesic active contour model.&lt;br /&gt;
&lt;br /&gt;
* Statistical PDE Methods for Segmentation&lt;br /&gt;
&lt;br /&gt;
Our objective is to add various statistical measures into our PDE flows for medical imaging. This will allow the incorporation of global image information into the locally defined PDE framework. This year, we developed flows which can separate the distributions inside and outside the evolving contour, and we have also been including shape information in the flows. We have completed a statistically based flow for segmentation using fast marching, and the code has been integrated into Slicer. &lt;br /&gt;
&lt;br /&gt;
* Atlas Renormalization for Improved Brain MR Image Segmentation&lt;br /&gt;
&lt;br /&gt;
Atlas-based approaches can automatically identify detailed brain structures from 3-D magnetic resonance (MR) brain images. However, the accuracy often degrades when processing data acquired on a different scanner platform or pulse sequence than the data used for the atlas training. In this project, we work to improve the performance of an atlas-based whole brain segmentation method by introducing an intensity renormalization procedure that automatically adjusts the prior atlas intensity model to new input data. Validation using manually labeled test datasets shows that the new procedure improves segmentation accuracy (as measured by the Dice coefficient) by 10% or more for several structures including hippocampus, amygdala, caudate, and pallidum. The results verify that this new procedure reduces the sensitivity of the whole brain segmentation method to changes in scanner platforms and improves its accuracy and robustness, which can thus facilitate multicenter or multisite neuroanatomical imaging studies.&lt;br /&gt;
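The Dice coefficient used for this validation compares two label maps as 2·|A∩B| / (|A| + |B|); a minimal sketch over voxel index sets:&lt;br /&gt;

```python
def dice(a, b):
    """Dice overlap between two segmentations given as sets of voxel indices:
    1.0 for identical label maps, 0.0 for disjoint ones."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

score = dice({1, 2, 3, 4}, {3, 4, 5, 6})   # 2 shared voxels out of 8 labeled
```

Improvements reported in this measure reflect a larger voxel-wise overlap between the automatic segmentation and the manual labels.&lt;br /&gt;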
&lt;br /&gt;
* Multiscale Shape Segmentation Techniques&lt;br /&gt;
&lt;br /&gt;
The goal of this project is to represent multiscale variations in a shape population in order to drive the segmentation of deep brain structures, such as the caudate nucleus or the hippocampus. Our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We derived a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We applied our algorithm to the caudate nucleus, a brain structure of interest in the study of schizophrenia. Our validation shows that our algorithm is computationally efficient and outperforms the Active Shape Model (ASM) algorithm, by capturing finer shape details.&lt;br /&gt;
&lt;br /&gt;
====Registration====&lt;br /&gt;
&lt;br /&gt;
* Optimal Mass Transport Registration&lt;br /&gt;
The aim of this project is to provide a computationally efficient non-rigid/elastic image registration algorithm based on the Optimal Mass Transport theory. We use the Monge-Kantorovich formulation of the Optimal Mass Transport problem and implement the gradient flow PDE approach using multi-resolution and multi-grid techniques to speed up the convergence. We also leverage the computational power of general purpose graphics processing units available on standard desktop computing machines to exploit the inherent parallelism in our algorithm. We have implemented 2D and 3D multi-resolution registration using Optimal Mass Transport and are currently working on the registration of 3D datasets. &lt;br /&gt;
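A small 1-D intuition for the Monge-Kantorovich problem (not the multi-resolution PDE solver itself): for equal-weight point masses on a line, the optimal transport map is the monotone rearrangement, i.e., sorted-order matching:&lt;br /&gt;

```python
def omt_pairs_1d(src, dst):
    """1-D optimal mass transport between equal-weight point sets:
    sorting both sets and pairing them in order minimizes the total
    squared-distance transport cost (quantile matching)."""
    return list(zip(sorted(src), sorted(dst)))

# Any crossing of transport paths on a line can be uncrossed at lower cost,
# so the monotone pairing below is optimal.
pairs = omt_pairs_1d([3.0, 1.0, 2.0], [10.0, 30.0, 20.0])
```

In higher dimensions no such closed form exists, which is why the project solves the gradient-flow PDE numerically.&lt;br /&gt;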
&lt;br /&gt;
* Diffusion Tensor Image Processing Tools&lt;br /&gt;
	&lt;br /&gt;
We aim to provide methods for computing geodesics and distances between diffusion tensors. One goal is to provide hypothesis testing for differences between groups. This will involve interpolation techniques for diffusion tensors as weighted averages in the metric framework. We will also provide filtering and eddy current correction. This year, we developed a Slicer module for DT-MRI Rician noise removal, developed prototypes of DTI geometry and statistical packages, and began work on a general method for hypothesis testing between diffusion tensor groups. &lt;br /&gt;
&lt;br /&gt;
* Point Set Rigid Registration&lt;br /&gt;
&lt;br /&gt;
We propose a particle filtering scheme for the registration of 2D and 3D point sets undergoing a rigid body transformation, in which we incorporate stochastic dynamics to model the uncertainty of the registration process. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. In this work, we treat motion as a local variation in the pose parameters obtained from running a few iterations of the standard Iterative Closest Point (ICP) algorithm. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence as well as to provide a dynamical model of uncertainty. In contrast with other techniques, our approach requires no annealing schedule, which reduces computational complexity while maintaining the temporal coherency of the state (no loss of information). Also, unlike most alternative approaches to point set registration, we make no geometric assumptions about the two data sets.&lt;br /&gt;
&lt;br /&gt;
* Cortical Correspondence using Particle System&lt;br /&gt;
&lt;br /&gt;
In this project, we want to compute cortical correspondence across populations, using various features such as cortical structure, DTI connectivity, vascular structure, and functional data (fMRI). This is challenging because of the highly convoluted surface of the cortex, as well as the differing properties of the data features we want to incorporate. We use a particle-based entropy-minimizing system for the correspondence computation, in a population-based manner. This is advantageous because it requires neither a spherical parameterization of the surface nor a surface of spherical topology. It would also eventually enable correspondence computation on subcortical structures and on the cortical surface within the same framework. To circumvent the disadvantage that particles are assumed to lie on local tangent planes, we plan to first ‘inflate’ the cortex surface. Currently, we are at the testing stage using structural data, namely point locations and sulcal depth (as computed by FreeSurfer).&lt;br /&gt;
&lt;br /&gt;
* Multimodal Atlas &lt;br /&gt;
&lt;br /&gt;
In this work, we propose and investigate an algorithm that jointly co-registers a collection of images while computing multiple templates. The algorithm, called iCluster for Image Clustering, is based on the following idea: given the templates, the co-registration problem becomes simple, reducing to a number of pairwise registration instances. On the other hand, given a collection of co-registered images, an off-the-shelf clustering or averaging algorithm can be used to compute the templates. The algorithm assumes a fixed and known number of template images. We formulate the problem as a maximum likelihood estimation problem and employ a generalized expectation-maximization algorithm to solve it. In the E-step, we compute membership probabilities. In the M-step, we update the template images as weighted averages of the images, where the weights are the memberships, update the template priors, and then perform a collection of independent pairwise registration instances. The algorithm is currently implemented in the Insight Toolkit (ITK), and we next plan to integrate it into Slicer.&lt;br /&gt;
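One E/M round of the scheme above, reduced to scalar "images" and with the registration step omitted, can be sketched as follows (Gaussian memberships are an illustrative choice, not the iCluster likelihood model):&lt;br /&gt;

```python
import math

def icluster_step(images, templates):
    """E-step: soft memberships from each image's distance to each template.
    M-step: templates recomputed as membership-weighted averages."""
    memberships = []
    for x in images:
        w = [math.exp(-(x - t) ** 2) for t in templates]
        s = sum(w)
        memberships.append([v / s for v in w])
    return [sum(m[k] * x for m, x in zip(memberships, images)) /
            sum(m[k] for m in memberships)
            for k in range(len(templates))]

# Two clusters of scalar "images"; the templates move to the cluster centers.
t = [0.0, 5.0]
for _ in range(20):
    t = icluster_step([0.9, 1.1, 1.0, 3.9, 4.1, 4.0], t)
```

In iCluster proper, the images are volumes, memberships come from the likelihood model, and each M-step also re-runs the pairwise registrations.&lt;br /&gt;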
&lt;br /&gt;
* Groupwise Registration&lt;br /&gt;
&lt;br /&gt;
We aim to provide efficient groupwise registration algorithms for population analysis of anatomical structures. Here we extend a previously demonstrated entropy-based groupwise registration method to include a free-form deformation model based on B-splines. We provide an efficient implementation using stochastic gradient descent in a multi-resolution setting. We demonstrate the method on a set of 50 MRI brain scans and compare the results to a pairwise approach, using segmentation labels to evaluate the quality of alignment. Our results indicate that increasing the complexity of the deformation model improves registration accuracy significantly, especially in cortical regions.&lt;br /&gt;
&lt;br /&gt;
====Shape Analysis====&lt;br /&gt;
&lt;br /&gt;
* Shape Analysis Framework Using SPHARM-PDM&lt;br /&gt;
&lt;br /&gt;
The UNC shape analysis is based on an analysis framework for objects with spherical topology, described by sampled spherical harmonics (SPHARM-PDM). The input to the proposed shape analysis is a set of binary segmentations of a single brain structure, such as the hippocampus or caudate. Group tests can be visualized via P-values and via mean difference magnitude and vector maps, as well as maps of the group covariance information. The implementation has reached a stable framework and has been disseminated to several collaborating labs within NAMIC (BWH, Georgia Tech, Utah). Current development focuses on integrating the command line tools into Slicer (v3) via the Slicer execution model. The whole shape analysis pipeline is encapsulated and accessible to the trained clinical collaborator. The current toolset distribution (via NeuroLib) now also contains open data for other researchers to evaluate their shape analysis enhancements.&lt;br /&gt;
&lt;br /&gt;
* Multiscale Shape Analysis&lt;br /&gt;
&lt;br /&gt;
We present a novel method of statistical surface-based morphometry based on non-parametric permutation tests and a spherical wavelet coefficient (SWC) shape representation. As an application, we analyze two brain structures, the caudate nucleus and the hippocampus. We show that the results nicely complement those obtained with shape analysis using a sampled point representation (SPHARM-PDM). We used the UNC pipeline to pre-process the images, and for each triangulated SPHARM-PDM surface a spherical wavelet description is computed. We then use the UNC statistical toolbox to analyze differences between two groups of surfaces described by the features of choice, namely the 3D spherical wavelet coefficients. This year, we conducted statistical shape analysis of the two brain structures and compared the results to shape analysis using a SPHARM-PDM representation.&lt;br /&gt;
&lt;br /&gt;
* Population Analysis of Anatomical Variability&lt;br /&gt;
&lt;br /&gt;
In contrast to shape-based segmentation, which utilizes a statistical model of the shape variability in one population (typically based on Principal Component Analysis), we are interested in identifying and characterizing differences between two sets of shape examples. We use the discriminative framework to characterize the differences in shape by training a classifier function and studying its sensitivity to small perturbations in the input data. An additional benefit is that the resulting classifier function can be used to label new examples into one of the two populations, e.g., for early detection in population screening or prediction in longitudinal studies. We have implemented stand-alone code for training a classifier, jackknifing, and permutation testing, and we are currently porting the software into ITK. We have also started exploring alternative surface-based descriptors, which are promising for improving our ability to detect and characterize subtle differences in the shape of anatomical structures due to diseases such as schizophrenia.&lt;br /&gt;
&lt;br /&gt;
* Shape Analysis with Overcomplete Wavelets&lt;br /&gt;
&lt;br /&gt;
In this work, we extend the Euclidean wavelets to the sphere. The resulting over-complete spherical wavelets are invariant to the rotation of the spherical image parameterization. We apply the over-complete spherical wavelet to cortical folding development and show significantly consistent results as well as improved sensitivity compared with the previously used bi-orthogonal spherical wavelet. In particular, we are able to detect developmental asymmetry in the left and right hemispheres.&lt;br /&gt;
&lt;br /&gt;
* Shape-based Segmentation and Registration&lt;br /&gt;
&lt;br /&gt;
When there is little or no contrast along the boundaries of different regions, standard image segmentation algorithms perform poorly, and segmentation is done manually using prior knowledge of the shape and relative location of the underlying structures. We have proposed an automated approach guided by covariant shape deformations of neighboring structures, which is an additional source of prior knowledge. Captured by a shape atlas, these deformations are transformed into a statistical model using the logistic function. The mapping between atlas and image space, structure boundaries, anatomical labels, and image inhomogeneities are estimated simultaneously within an expectation-maximization formulation of the maximum a posteriori probability (MAP) estimation problem. These results are then fed into an Active Mean Field approach, which views them as priors to a Mean Field approximation with a curve length prior. Our method filters out noise better than thresholding of the initial likelihoods, and it captures multiple structures, as in the brain (where both major brain compartments and subcortical structures are obtained), because it naturally evolves families of curves. The algorithm is currently implemented in 3D Slicer Version 2.6, and a beta version is available in 3D Slicer Version 3.&lt;br /&gt;
&lt;br /&gt;
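The expectation-maximization machinery underlying this formulation can be illustrated on a toy problem. The sketch below is an illustrative simplification with hypothetical names (the actual method additionally estimates the atlas mapping, anatomical labels, and image inhomogeneities); it alternates posterior computation and parameter updates for a two-class Gaussian intensity mixture:&lt;br /&gt;

```python
import math

def em_two_gaussians(data, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture, initialized at the
    data extremes with equal priors."""
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k]) *
                 math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and variances.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return mu, var, pi
```
&lt;br /&gt;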
*Spherical Wavelets&lt;br /&gt;
&lt;br /&gt;
In this project, we apply a spherical wavelet transformation to extract shape features of cortical surfaces reconstructed from magnetic resonance images (MRI) of a set of subjects. The spherical wavelet transformation can characterize the underlying functions in a local fashion in both space and frequency, in contrast to spherical harmonics that have a global basis set. We perform principal component analysis (PCA) on these wavelet shape features to study patterns of shape variation within a normal population from coarse to fine resolution. In addition, we study the development of cortical folding in newborns using the Gompertz model in the wavelet domain, allowing us to characterize the order of development of large-scale and finer folding patterns independently. We develop an efficient method to estimate the regularized Gompertz model based on the Broyden–Fletcher–Goldfarb–Shanno (BFGS) approximation. Promising results are presented using both PCA and the folding development model in the wavelet domain. The cortical folding development model provides quantitative anatomical information regarding macroscopic cortical folding development and may be of potential use as a biomarker for early diagnosis of neurological deficits in newborns.&lt;br /&gt;
&lt;br /&gt;
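To make the Gompertz model concrete: with the asymptote A assumed known, y(t) = A exp(-b exp(-c t)) linearizes as log(-log(y/A)) = log b - c t, so b and c follow from ordinary least squares. The sketch below is an illustrative example with hypothetical names (the published work instead fits a regularized model with BFGS on wavelet coefficients):&lt;br /&gt;

```python
import math

def fit_gompertz(times, values, asymptote):
    """Recover (b, c) of y(t) = A * exp(-b * exp(-c * t)) given known A,
    via the log-log linearization z = log(-log(y/A)) = log(b) - c*t."""
    z = [math.log(-math.log(y / asymptote)) for y in values]
    n = len(times)
    mt = sum(times) / n
    mz = sum(z) / n
    slope = (sum((t - mt) * (v - mz) for t, v in zip(times, z)) /
             sum((t - mt) ** 2 for t in times))
    intercept = mz - slope * mt
    return math.exp(intercept), -slope  # (b, c)
```
&lt;br /&gt;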
===Key Investigators===&lt;br /&gt;
* MIT: Polina Golland, Kilian Pohl, Sandy Wells, Eric Grimson, Mert R. Sabuncu&lt;br /&gt;
* UNC: Martin Styner, Ipek Oguz, Xavier Barbero &lt;br /&gt;
* Utah: Ross Whitaker, Guido Gerig, Suyash Awate, Tolga Tasdizen, Tom Fletcher, Joshua Cates, Miriah Meyer &lt;br /&gt;
* GaTech: Allen Tannenbaum, John Melonakos, Vandana Mohan, Tauseef ur Rehman, Shawn Lankton, Samuel Dambreville, Yi Gao, Romeil Sandhu, Xavier Le Faucheur, James Malcolm &lt;br /&gt;
* Isomics: Steve Pieper &lt;br /&gt;
* GE: Bill Lorensen, Jim Miller &lt;br /&gt;
* Kitware: Luis Ibanez, Karthik Krishnan&lt;br /&gt;
* UCLA: Arthur Toga, Michael J. Pan, Jagadeeswaran Rajendiran &lt;br /&gt;
* BWH: Sylvain Bouix, Motoaki Nakamura, Min-Seong Koo, Martha Shenton, Marc Niethammer, Jim Levitt, Yogesh Rathi, Marek Kubicki, Steven Haker&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:StructuralImageAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==fMRI Analysis (Golland)==&lt;br /&gt;
===Progress===&lt;br /&gt;
One of the major goals in analysis of fMRI data is the detection of&lt;br /&gt;
functionally homogeneous networks in the brain. Over the past year, we&lt;br /&gt;
demonstrated a method for identifying large-scale networks in brain&lt;br /&gt;
activation that simultaneously estimates the optimal representative&lt;br /&gt;
time courses that summarize the fMRI data well and the partition of&lt;br /&gt;
the volume into a set of disjoint regions that are best explained by&lt;br /&gt;
these representative time courses. &lt;br /&gt;
&lt;br /&gt;
In the classical functional connectivity analysis, networks of&lt;br /&gt;
interest are defined based on correlation with the mean time course of&lt;br /&gt;
a user-selected &amp;quot;seed&amp;quot; region. In addition, the user has to specify a&lt;br /&gt;
subject-specific threshold at which correlation values are deemed&lt;br /&gt;
significant. In this project, we simultaneously estimate the optimal&lt;br /&gt;
representative time courses that summarize the fMRI data well and the&lt;br /&gt;
partition of the volume into a set of disjoint regions that are best&lt;br /&gt;
explained by these representative time courses. This approach to&lt;br /&gt;
functional connectivity analysis offers two advantages. First, it&lt;br /&gt;
removes the sensitivity of the analysis to the details of the seed&lt;br /&gt;
selection. Second, it substantially simplifies group analysis by&lt;br /&gt;
eliminating the need for the subject-specific threshold. Our&lt;br /&gt;
experimental results indicate that the functional segmentation&lt;br /&gt;
provides a robust, anatomically meaningful and consistent model for&lt;br /&gt;
functional connectivity in fMRI.&lt;br /&gt;
&lt;br /&gt;
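The joint estimation described above alternates between two steps, much like k-means on time courses. The sketch below is a toy pure-Python illustration with hypothetical names (the published method also handles group analysis and model selection): each voxel's time course is assigned to the closest representative, and each representative is re-estimated as the mean of its region.&lt;br /&gt;

```python
def cluster_time_courses(voxels, k=2, n_iter=20):
    """Alternate (1) assigning each voxel's time course to the closest
    representative and (2) re-estimating each representative as its region's
    mean.  Representatives are naively initialized from the first k voxels."""
    reps = [list(voxels[i]) for i in range(k)]
    labels = [0] * len(voxels)
    for _ in range(n_iter):
        # Assignment step: nearest representative in squared Euclidean distance.
        for i, v in enumerate(voxels):
            dists = [sum((a - b) ** 2 for a, b in zip(v, r)) for r in reps]
            labels[i] = dists.index(min(dists))
        # Update step: each representative becomes its region's mean time course.
        for j in range(k):
            members = [v for v, l in zip(voxels, labels) if l == j]
            if members:
                reps[j] = [sum(col) / len(members) for col in zip(*members)]
    return labels, reps
```
&lt;br /&gt;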
We are currently exploring the applications of this methodology to&lt;br /&gt;
characterizing connectivity in the rest-state data in clinical&lt;br /&gt;
populations. We are also comparing the empirical findings with the&lt;br /&gt;
results of ICA decomposition, which is commonly used for data-driven&lt;br /&gt;
fMRI analysis. Our goal in this study is to identify differences in&lt;br /&gt;
connectivity between the patient populations and normal controls.&lt;br /&gt;
&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
# MIT: Polina Golland, Danial Lashkari, Bryce Kim &lt;br /&gt;
# Harvard/BWH: Sylvain Bouix, Martha Shenton, Marek Kubicki&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:fMRIAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==NA-MIC Kit Theme (Schroeder)==&lt;br /&gt;
===Progress===&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
* Kitware - Will Schroeder (Core 2 PI), Sebastien Barre, Luis Ibanez, Bill Hoffman&lt;br /&gt;
* GE - Jim Miller, Xiaodong Tao&lt;br /&gt;
* Isomics - Steve Pieper&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC-Kit here on the NA-MIC wiki].&lt;br /&gt;
==Other Projects==&lt;br /&gt;
Any Project(s) not covered by the 8 sections above&lt;br /&gt;
&lt;br /&gt;
==Highlights(Schroeder)==&lt;br /&gt;
===EM Segmenter or TBD===&lt;br /&gt;
===DTI progress or TBD===&lt;br /&gt;
===Outreach (Gollub)===&lt;br /&gt;
&lt;br /&gt;
NAMIC outreach is a joint effort of Cores 4, 5 and 6. The various mechanisms by which we ensure that the tools developed by NAMIC are rapidly and successfully deployed to the widest possible extent within the scientific community are closely integrated. This begins with the immediate posting of all software tools, interim updates and associated documentation via the NAMIC and Slicer wiki pages (links). The concerted effort to provide a harmonious visualization and analysis platform (Slicer 3) that enables the integration of the software algorithms of all Core 1 laboratories drives the sequence of development of training materials. With the January 2008 release of Slicer 3 in beta format, we prepared the first of the Slicer 3 based PowerPoint tutorials that guide new users through the process of loading, interacting with and saving data in Slicer 3. Given the intense and successful effort at engineering this platform to facilitate the process of integrating new command-line modules of image analysis software into the platform, our second tutorial targeted software developers. The &amp;quot;Hello World&amp;quot; tutorial guides a programmer step-by-step through the process of integrating a command line tool into Slicer 3. Both these tutorials are available via the web (link). These tutorials have been thoroughly tested by using them in large workshops (see next) to ensure that they are robust across platforms (Linux, Mac, PC) and can be used successfully by users across a wide range of training backgrounds.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In June of 2007, as a satellite event to the international Organization for Human Brain Mapping annual meeting in Chicago, IL, we ran an 8 hour workshop on analysis of Diffusion Imaging Data (link); it was our final Slicer workshop based on the Slicer 2.7 release. The workshop rapidly filled after posting; the 50 participants represented 9 countries from around the world, 14 states within the US, and 40 different laboratories including 2 NIH institutes. The single &amp;quot;no-show&amp;quot; was due to a European flight cancellation. The attendees, with backgrounds in basic or clinical neurosciences, physics, image processing or computer science, and ranging from full professors to new graduate students, were very comfortable learning together. The feedback from the workshop attendees was uniformly positive, with 100% reporting that they would recommend the workshop to others and 50% planning to apply the tools and information they learned to their own work.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In January 2008 we debuted the &amp;quot;Hello World&amp;quot; tutorial at the NAMIC AHM in Salt Lake City to an audience of our project members and collaborators. This very constructive session was used to make significant improvements in the presentation and delivery of the material. In February 2008 we debuted the user tutorial at a workshop hosted by the Surgical Planning Laboratory at BWH. Again, this session was used to make significant improvements in the presentation and delivery of the material. In April of 2008 we ran an all day workshop, hosted by UNC (get details right), for users and developers that incorporated both tutorials. This was attended by approximately 20 individuals coming from a wide range of backgrounds. Time was taken to ensure that all participants gained significant understanding of the new software, sufficient to ensure their successful use of it following the workshop.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
This year saw the publication of a peer-reviewed manuscript that describes the NAMIC approach to outreach including our multi-disciplinary approach, our integration of theory  into practice as driven by a clinical goal, and the translation of concepts into skills through interactive instructor led training sessions (Pujol S, Kikinis R, Gollub R: Lowering the barriers inherent in translating advances in neuroimage analysis to clinical research applications, Academic Radiology 15: 114-118, 2008, add link to Publication DB).&lt;br /&gt;
* Text here about Project Events 5 &amp;amp; 6 from Tina if not already included elsewhere.&lt;br /&gt;
* Text here about the MICCAI Open Source Workshop if not already included elsewhere (Steve?)&lt;br /&gt;
* Slicer IGT event December 2007 (tina?)&lt;br /&gt;
* Wiki to web&lt;br /&gt;
* Impact as measured by number of downloads of tutorial materials (help someone)&lt;br /&gt;
* Should the DTI tractography validation project be written up somewhere, if so where?  I will do it if it isn't already assigned.&lt;br /&gt;
&lt;br /&gt;
==Impact and Value to Biocomputing (Miller)==&lt;br /&gt;
NA-MIC impacts Biocomputing through a variety of mechanisms.  First,&lt;br /&gt;
NA-MIC produces scientific results, methodologies, workflows,&lt;br /&gt;
algorithms, imaging platforms, and software engineering tools and&lt;br /&gt;
paradigms in an open environment that contributes directly to the body of&lt;br /&gt;
knowledge available to the field. Second, NA-MIC science and&lt;br /&gt;
technology enables the entire medical imaging community to build on&lt;br /&gt;
NA-MIC results, methods, and techniques, to concentrate on the new&lt;br /&gt;
science instead of developing supporting infrastructure, to leverage&lt;br /&gt;
NA-MIC scientists and engineers to adapt NA-MIC technology to new&lt;br /&gt;
problem domains, and to leverage NA-MIC infrastructure to distribute&lt;br /&gt;
their own technology to a larger community.&lt;br /&gt;
&lt;br /&gt;
===Impact within the Center===&lt;br /&gt;
Within the center, NA-MIC has formed a community around its software&lt;br /&gt;
engineering tools, imaging platforms, algorithms, and clinical&lt;br /&gt;
workflows. The NA-MIC calendar includes the All Hands Meeting and&lt;br /&gt;
Winter Project Week, the Spring Algorithm Meeting, the Summer Project&lt;br /&gt;
Week, Slicer3 Mini-Retreats, Core Site Visits, Training Workshops, and weekly telephone&lt;br /&gt;
conferences.&lt;br /&gt;
&lt;br /&gt;
The NA-MIC software engineering tools (CMake, Dart, CTest, CPack) have&lt;br /&gt;
enabled the development and distribution of a cross-platform, nightly&lt;br /&gt;
tested, end-user application, Slicer3, that is a complex union of&lt;br /&gt;
novel application code, visualization tools (VTK), imaging libraries&lt;br /&gt;
(ITK, TEEM), user interface libraries (Tk, KWWidgets), and scripting&lt;br /&gt;
languages (TCL, Python). The NA-MIC software engineering tools have been&lt;br /&gt;
essential in the development and distribution of the Slicer3 imaging&lt;br /&gt;
platform to the NA-MIC community.&lt;br /&gt;
&lt;br /&gt;
NA-MIC's end-user application, Slicer3, supports the research within&lt;br /&gt;
NA-MIC by providing a base application for visualization and data&lt;br /&gt;
management. Slicer3 also supports the research within NA-MIC by&lt;br /&gt;
providing plugin mechanisms which allow researchers to quickly and&lt;br /&gt;
easily integrate and distribute their technology with Slicer3. Slicer3&lt;br /&gt;
is available to all center participants and the external community&lt;br /&gt;
through its source code repository, official binary releases, and&lt;br /&gt;
unofficial nightly binary snapshots.&lt;br /&gt;
&lt;br /&gt;
NA-MIC drives the development of platforms and algorithms through the&lt;br /&gt;
needs and research of its DBPs. Each DBP has selected specific&lt;br /&gt;
workflows and roadmaps as focal points for development with a goal of&lt;br /&gt;
providing the community with complete end-to-end solutions using&lt;br /&gt;
NA-MIC tools. The community will be able to reproduce these workflows&lt;br /&gt;
and roadmaps in their own research programs.&lt;br /&gt;
&lt;br /&gt;
NA-MIC algorithms are designed and used to address specific needs of&lt;br /&gt;
the DBPs. Multiple solution paths are explored and compared within&lt;br /&gt;
NA-MIC, resulting in recommendations to the field. The NA-MIC&lt;br /&gt;
algorithm groups collaborate and orchestrate the solutions to the&lt;br /&gt;
DBP workflows and roadmaps.&lt;br /&gt;
&lt;br /&gt;
===Impact within NIH Funded Research===&lt;br /&gt;
Within NIH funded research, NA-MIC is the NCBC collaborating center for three R01's: &amp;quot;Automated FE Mesh Development&amp;quot;, &amp;quot;Measuring Alcohol and Stress Interactions with Structural and Perfusion MRI&amp;quot;, and &amp;quot;An Integrated System for Image-Guided Radiofrequency Ablation of Liver Tumors&amp;quot;. Several other proposals have been submitted and are under&lt;br /&gt;
evaluation for the &amp;quot;Collaborations with NCBC PAR&amp;quot;. NA-MIC also&lt;br /&gt;
collaborates on the Slicer3 platform with the NIH funded Neuroimage&lt;br /&gt;
Analysis Center and the National Center for Image-Guided Therapy. The&lt;br /&gt;
NIH funded &amp;quot;BRAINS Morphology and Image Analysis&amp;quot; project is also&lt;br /&gt;
leveraging NA-MIC and Slicer3 technology. NA-MIC collaborates with the&lt;br /&gt;
NIH funded Neuroimaging Informatics Tools and Resources Clearinghouse&lt;br /&gt;
on distribution of Slicer3 plugin modules.&lt;br /&gt;
&lt;br /&gt;
===National and International Impact===&lt;br /&gt;
NA-MIC events and tools garner national and international interest.&lt;br /&gt;
Over 100 researchers participated in the NA-MIC All Hands Meeting and&lt;br /&gt;
Winter Project Week in January 2008. Many of these participants were&lt;br /&gt;
from outside of NA-MIC, attending the meetings to gain access to the&lt;br /&gt;
NA-MIC tools and researchers. These external researchers are&lt;br /&gt;
contributing ideas and technology back into NA-MIC. In fact, a&lt;br /&gt;
breakout session at the Winter Project Week on &amp;quot;Geometry and Topology&lt;br /&gt;
Processing of Meshes&amp;quot; was organized by four researchers from outside&lt;br /&gt;
of NA-MIC.&lt;br /&gt;
&lt;br /&gt;
Components of the NA-MIC kit are used globally.  The software&lt;br /&gt;
engineering tools of CMake, Dart 2 and CTest are used by many open&lt;br /&gt;
source projects and commercial applications. For example, the K&lt;br /&gt;
Desktop Environment (KDE) for Linux and Unix workstations uses CMake&lt;br /&gt;
and Dart. KDE is one of the largest open source projects in the&lt;br /&gt;
world. Many open source projects and commercial products are&lt;br /&gt;
benefiting from the NA-MIC related contributions to ITK and&lt;br /&gt;
VTK. Finally, Slicer 3 is being used as an image analysis&lt;br /&gt;
platform in several fields outside of medical image analysis, in&lt;br /&gt;
particular, biological image analysis, astronomy, and industrial&lt;br /&gt;
inspection.&lt;br /&gt;
&lt;br /&gt;
NA-MIC science is recognized by the medical imaging community. Over&lt;br /&gt;
100 NA-MIC related publications are listed on PubMed. Many of these&lt;br /&gt;
publications are in the most prestigious journals and conferences in the&lt;br /&gt;
field. Portions of the DBP workflows and roadmaps are already being&lt;br /&gt;
utilized by researchers in the broader community and in the&lt;br /&gt;
development of commercial products.&lt;br /&gt;
&lt;br /&gt;
NA-MIC sponsored several events to promote NA-MIC tools and&lt;br /&gt;
methodologies.  NA-MIC co-sponsored the &amp;quot;Third Annual Open Source&lt;br /&gt;
Workshop&amp;quot; at the Medical Image Computing and Computer-Assisted&lt;br /&gt;
Intervention (MICCAI) 2007 conference.  The proceedings of the&lt;br /&gt;
workshop are published on the electronic Insight Journal, another&lt;br /&gt;
NIH-funded activity. NA-MIC sponsored three training workshops on&lt;br /&gt;
NA-MIC tools for the Biocomputing community in this fiscal year and&lt;br /&gt;
plans to hold sessions at upcoming MICCAI and RSNA conferences.&lt;br /&gt;
&lt;br /&gt;
==NA-MIC Timeline (Whitaker)==&lt;br /&gt;
&lt;br /&gt;
==Appendix A Publications (Kapur)==&lt;br /&gt;
These will be mined from the SPL publications database.  All core PIs need to ensure that all NA-MIC publications are in the publications database by May 15.&lt;br /&gt;
&lt;br /&gt;
==Appendix B EAB Report and Response (Kapur)==&lt;br /&gt;
===EAB Report===&lt;br /&gt;
===Response to EAB Report===&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2008_Annual_Scientific_Report&amp;diff=24625</id>
		<title>2008 Annual Scientific Report</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2008_Annual_Scientific_Report&amp;diff=24625"/>
		<updated>2008-05-15T15:13:16Z</updated>

		<summary type="html">&lt;p&gt;Gabor: /* Overview (Fichtinger) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[2008_Progress_Report]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Guidelines for preparation=&lt;br /&gt;
&lt;br /&gt;
*[[2008_Progress_Report#Scientific Report Timeline]] - Main point is that May 15 is the date by which all sections below need to be completed.  No extensions are possible.&lt;br /&gt;
*DBPs - If there is work outside of the roadmap projects that you would like to report, you are welcome to create a separate section for it under &amp;quot;Other&amp;quot;.  &lt;br /&gt;
*The outline for this report is similar to the 2007 report, which is provided here for reference: [[2007_Annual_Scientific_Report]].&lt;br /&gt;
*In preparing summaries for each of the 8 topics in this report, please leverage the detailed pages for projects provided here: [[NA-MIC_Internal_Collaborations]].&lt;br /&gt;
*Publications will be mined from the SPL publications database. All core PIs need to ensure that all NA-MIC publications are in the publications database by May 15.&lt;br /&gt;
&lt;br /&gt;
=Introduction (Tannenbaum)=&lt;br /&gt;
&lt;br /&gt;
The National Alliance for Medical Imaging Computing (NA-MIC) is now in its fourth year. This Center is comprised of a multi-institutional, interdisciplinary team of computer scientists, software engineers, and medical investigators who have come together to develop and apply computational tools for the analysis and visualization of medical imaging data. A further purpose of the Center is to provide infrastructure and environmental support for the development of computational algorithms and open source technologies, and to oversee the training and dissemination of these tools to the medical research community. The driving biological projects (DBPs) of the Center's first three years were inspired by schizophrenia research. In the fourth year, new DBPs have been added. Three are centered around diseases of the brain: (a) brain lesion analysis in neuropsychiatric systemic lupus erythematosus; (b) a study of cortical thickness for autism; and (c) stochastic tractography for VCFS. In a new direction, we have added a DBP on the prostate: brachytherapy needle positioning robot integration.&lt;br /&gt;
&lt;br /&gt;
We briefly summarize the work of NAMIC during the four years of its existence. In the first year of the Center, alliances were forged amongst the cores and constituent groups in order to integrate the efforts of the cores and to define the kinds of tools needed for specific imaging applications. The second year emphasized the identification of the key research thrusts that cut across cores and were driven by the needs and requirements of the DBPs. This led to the formulation of the Center's four main themes: Diffusion Tensor Analysis, Structural Analysis, Functional MRI Analysis, and the integration of newly developed tools into the NA-MIC Tool Kit. The third year of Center activity was devoted to continuing these collaborative efforts in order to deliver solutions to the various brain-oriented DBPs.&lt;br /&gt;
&lt;br /&gt;
Year four has seen progress with the work of our new DBPs. As alluded to above, these include work on neuropsychiatric disorders such as Systemic Lupus Erythematosus (MIND Institute, University of New Mexico), Velocardiofacial Syndrome (Harvard), and Autism (University of North Carolina, Chapel Hill), as well as the prostate interventional work (Johns Hopkins and Queen's Universities). We already have a number of publications, as indicated on our publications page, and software development is continuing as well.&lt;br /&gt;
&lt;br /&gt;
In the next section (Section 3), we summarize this year’s progress on the four roadmap projects listed above: Section 3.1, stochastic tractography for Velocardiofacial Syndrome; Section 3.2, brachytherapy needle positioning for the prostate; Section 3.3, brain lesion analysis in neuropsychiatric systemic lupus erythematosus; and Section 3.4, cortical thickness for autism. Next, in Section 4, we describe recent work on the four infrastructure topics. These include: Diffusion Image Analysis (Section 4.1), Structural Analysis (Section 4.2), Functional MRI Analysis (Section 4.3), and the NA-MIC Toolkit (Section 4.4). In Section 4.5, we outline some of the other key projects; in Section 4.6, some key highlights, including the integration of the EM Segmenter into Slicer; and in Section 4.7, the impact of biocomputing at three different levels: within the center, within the NIH-funded research community, and externally to a national and international community. The final section of this report, Section 4.8, provides a timeline of Center activities.&lt;br /&gt;
&lt;br /&gt;
=Clinical Roadmap Projects=&lt;br /&gt;
==Roadmap Project: Stochastic Tractography for VCFS (Kubicki)==&lt;br /&gt;
===Overview (Kubicki)===&lt;br /&gt;
The goal of this project is to create an end-to-end application that would be useful in evaluating anatomical connectivity between segmented cortical regions of the brain. The ultimate goal of our program is to understand anatomical connectivity similarities and differences between the genetically related schizophrenia and velocardiofacial syndrome. Thus we plan to use the &amp;quot;stochastic tractography&amp;quot; tool for the analysis of abnormalities in the integrity, or connectivity, provided by the arcuate fasciculus, a fiber bundle involved in language processing, in schizophrenia and VCFS.&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Golland)===&lt;br /&gt;
At the core of this project is the stochastic tractography algorithm&lt;br /&gt;
developed and implemented in collaboration between MIT and&lt;br /&gt;
BWH. Stochastic Tractography is a Bayesian approach to estimating&lt;br /&gt;
nerve fiber tracts from DTI images.&lt;br /&gt;
&lt;br /&gt;
We first use the diffusion tensor at each voxel in the volume to&lt;br /&gt;
construct a local probability distribution for the fiber direction&lt;br /&gt;
around the principal direction of diffusion. We then sample the tracts&lt;br /&gt;
between two user-selected ROIs, by simulating a random walk between&lt;br /&gt;
the regions, based on the local transition probabilities inferred from&lt;br /&gt;
the DTI image.&lt;br /&gt;
&lt;br /&gt;
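The random-walk sampling can be sketched in a few lines. The example below is a toy 2-D illustration with hypothetical names (the actual module works on 3-D DTI volumes, with a von Mises-Fisher-style distribution around the principal direction and atlas-based exclusion masks): each step is drawn with probability biased toward the local principal direction, and excluded voxels are never entered.&lt;br /&gt;

```python
import math
import random

def sample_tract(start, principal, mask, target,
                 kappa=2.0, max_steps=200, rng=None):
    """Simulate one random walk on a voxel grid.  Step probabilities favour
    the local principal diffusion direction; voxels where mask is False
    (e.g. ventricles, gray matter) are never entered.  Returns the path if
    it reaches the target ROI, otherwise None."""
    rng = rng or random.Random(0)
    pos, path = start, [start]
    for _ in range(max_steps):
        moves, weights = [], []
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dx, pos[1] + dy)
            if mask.get(nxt, False):
                px, py = principal[pos]
                moves.append(nxt)
                # Weight each candidate step by alignment with the
                # principal direction at the current voxel.
                weights.append(math.exp(kappa * (dx * px + dy * py)))
        if not moves:
            return None
        pos = rng.choices(moves, weights=weights)[0]
        path.append(pos)
        if pos in target:
            return path
    return None
```

Repeating the walk many times yields a fiber collection from which FA statistics can be gathered along the sampled tracts.&lt;br /&gt;
&lt;br /&gt;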
The resulting collection of fibers and the associated FA values&lt;br /&gt;
provide useful statistics on the properties of connections between the&lt;br /&gt;
two regions. To constrain the sampling process to the relevant white&lt;br /&gt;
matter region, we use atlas-based segmentation to label ventricles and&lt;br /&gt;
gray matter and to exclude them from the search space. As such, this&lt;br /&gt;
step relies heavily on the registration and segmentation functionality&lt;br /&gt;
in Slicer.&lt;br /&gt;
&lt;br /&gt;
Over the last year, we first tested the algorithm on the dataset of&lt;br /&gt;
schizophrenia subjects already available to NAMIC, acquired at&lt;br /&gt;
1.5T. This step allowed us to optimize the algorithm for our dataset, as&lt;br /&gt;
well as to develop a pipeline for data analysis that would then be&lt;br /&gt;
easily transferable to other image sets and structures.&lt;br /&gt;
&lt;br /&gt;
The next step, also accomplished this past year, was to apply the&lt;br /&gt;
algorithm to the new, higher resolution NAMIC dataset, and to study&lt;br /&gt;
smaller white matter connections including the cingulum bundle, arcuate&lt;br /&gt;
fasciculus, uncinate fasciculus and internal capsule. This step was&lt;br /&gt;
accomplished and the data presented at the Santa Fe meeting in October&lt;br /&gt;
2007.&lt;br /&gt;
&lt;br /&gt;
Upon completion of the testing phase, we started analysis of the arcuate&lt;br /&gt;
fasciculus, a language-related fiber bundle, in the new 3T, high resolution&lt;br /&gt;
dataset. Our current work focuses on improving the parameterization&lt;br /&gt;
of the tracts, in order to obtain FA measurements along the tracts.&lt;br /&gt;
&lt;br /&gt;
===Engineering Component (Davis)===&lt;br /&gt;
The Stochastic Tractography Slicer module has been finished and presented&lt;br /&gt;
at the AHM in SLC. It is now part of Slicer 2.8 and Slicer 3. Module&lt;br /&gt;
documentation has also been created. Current engineering efforts are&lt;br /&gt;
concentrated on maintaining the module, optimizing it to work with&lt;br /&gt;
other data formats, and adding new functionality, such as better&lt;br /&gt;
registration, distortion correction, and ways of extracting and&lt;br /&gt;
measuring FA along the tracts.&lt;br /&gt;
&lt;br /&gt;
===Clinical Component (Kubicki)===&lt;br /&gt;
Over the last year, we tested the algorithm on the already available&lt;br /&gt;
NAMIC dataset of schizophrenia subjects acquired at 1.5T. The anterior&lt;br /&gt;
limb of the internal capsule, a large structure connecting the thalamus&lt;br /&gt;
with the frontal lobe, was extracted and analyzed in a group of 20&lt;br /&gt;
schizophrenics and 20 control subjects. We presented the results&lt;br /&gt;
showing group differences in FA values at the ACNP symposium in&lt;br /&gt;
December 2007. Next, stochastic tractography was tested and optimized&lt;br /&gt;
for the new, high resolution DTI dataset acquired on a 3T GE magnet.&lt;br /&gt;
&lt;br /&gt;
Upon completion of the testing phase, we started analysis of the&lt;br /&gt;
arcuate fasciculus, a language-related fiber bundle, in 20 controls and&lt;br /&gt;
20 chronic schizophrenics. For each subject, we performed the white&lt;br /&gt;
matter segmentation and extracted the regions interconnected by the&lt;br /&gt;
arcuate fasciculus (inferior frontal and superior temporal gyri), as well as&lt;br /&gt;
another ROI that would guide the tract (a &amp;quot;waypoint&amp;quot; ROI). We presented&lt;br /&gt;
the preliminary results of the probabilistic tractography and the&lt;br /&gt;
statistics of FA extracted for each tract for a small set of 7&lt;br /&gt;
patients and 12 controls at the AHM in January 2008. The full study is&lt;br /&gt;
currently underway.&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:Harvard:Brain_Segmentation_Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Brachytherapy Needle Positioning Robot Integration (Fichtinger)==&lt;br /&gt;
===Overview (Fichtinger)===&lt;br /&gt;
Numerous studies have demonstrated the efficacy of image-guided&lt;br /&gt;
needle-based therapy and biopsy in the management of prostate&lt;br /&gt;
cancer. The accuracy of traditional prostate interventions performed using&lt;br /&gt;
transrectal ultrasound (TRUS) is limited by image fidelity, needle&lt;br /&gt;
template guides, needle deflection and tissue deformation. Magnetic Resonance&lt;br /&gt;
Imaging (MRI) is an ideal modality for guiding and monitoring&lt;br /&gt;
such interventions due to its excellent visualization of the prostate, its&lt;br /&gt;
sub-structure and surrounding tissues. &lt;br /&gt;
&lt;br /&gt;
We have designed a comprehensive robotic assistant system that allows prostate biopsy and brachytherapy&lt;br /&gt;
procedures to be performed entirely inside a 3T closed MRI scanner. Under the NAMIC initiative, the image computing, visualization, intervention planning, and kinematic planning interface is being accomplished with an open source system built on the NAMIC toolkit and its components, such as ITK.&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Tannenbaum)===&lt;br /&gt;
We have worked on both the segmentation and the registration of the prostate from MRI and ultrasound data. We explain each of the steps now.&lt;br /&gt;
&lt;br /&gt;
====Prostate Segmentation====&lt;br /&gt;
&lt;br /&gt;
We first must extract the prostate. We have considered three possible methods: a combination of Cellular Automata (CA, also known as Grow Cut) with Geometric Active Contour (GAC) methods; employing an ellipsoid to match the prostate in the 3D image; and a shape-based approach using spherical wavelets. More details are given below, and images and further details may be found at [[Projects:ProstateSegmentation|GaTech Algorithm Prostate Segmentation]].&lt;br /&gt;
&lt;br /&gt;
1. A cellular automata algorithm is used to give an initial segmentation. It begins with a rough manual initialization and then iteratively classifies all pixels into object and background until convergence. It effectively overcomes the problems of weak boundaries and inhomogeneity within the object or background. This in turn is fed into a Geometric Active Contour for finer tuning. We are initially using the edge-based minimal surface approach (the generalization of the standard Geodesic Active Contour model), which seems to give very reasonable results. Both steps of the algorithm are implemented in 3D. An ITK Cellular Automata filter, dealing with N-D data, has already been completed and submitted to the NA-MIC SandBox.&lt;br /&gt;
&lt;br /&gt;
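The cellular automata (Grow Cut) iteration can be sketched as follows. This is a toy pure-Python example with hypothetical names (the actual filter is an N-D ITK implementation): seeded cells iteratively conquer their neighbors, with attack strength attenuated by the local intensity difference, so propagation stalls at strong boundaries.&lt;br /&gt;

```python
def grow_cut(image, seeds, n_iter=50):
    """Minimal 2-D GrowCut: each cell holds (label, strength); a neighbour q
    conquers p when strength_q * g(|I_p - I_q|) exceeds p's strength."""
    h, w = len(image), len(image[0])
    label = [[0] * w for _ in range(h)]       # 0 = unlabeled
    strength = [[0.0] * w for _ in range(h)]
    for (y, x), l in seeds.items():
        label[y][x] = l
        strength[y][x] = 1.0
    max_diff = (max(max(r) for r in image) - min(min(r) for r in image)) or 1.0
    g = lambda d: 1.0 - d / max_diff          # decreasing attack weight
    for _ in range(n_iter):
        new_label = [row[:] for row in label]
        new_strength = [row[:] for row in strength]
        for y in range(h):
            for x in range(w):
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and label[ny][nx]:
                        attack = strength[ny][nx] * g(abs(image[y][x] -
                                                          image[ny][nx]))
                        if attack > new_strength[y][x]:
                            new_strength[y][x] = attack
                            new_label[y][x] = label[ny][nx]
        label, strength = new_label, new_strength
    return label
```
&lt;br /&gt;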
2. Spherical wavelets have proven to be a very natural way of representing 3D shapes which are compact and simply connected (topological spheres). We developed a segmentation framework using this 3D wavelet representation and a multiscale prior. The parameters of our model are the learned shape parameters, based on the spherical wavelet coefficients, as well as pose parameters that accommodate shape variability due to a similarity transformation (rotation, scale, translation), which is not explicitly modeled with the shape parameters; the surface is transformed according to the pose parameters. We used a region-based energy to drive the evolution of the parametric deformable surface for segmentation. Our segmentation algorithm deforms an initial surface according to the gradient flow that minimizes the energy functional in terms of the pose and shape parameters. Additionally, the optimization method can be applied in a coarse to fine manner. Spherical wavelets and conformal mappings are&lt;br /&gt;
already part of the NA-MIC SandBox.&lt;br /&gt;
&lt;br /&gt;
3. The third method is closely related to the second. It is based on the observation that the prostate may be roughly modeled as an ellipsoid. One can then employ this ellipsoid model, coupled with the local/global segmentation energy approach which we have developed this year, as the basis of a segmentation procedure. Because of the local/global nature of the functional and the implicit introduction of scale, this methodology may be very useful for MRI prostate data.&lt;br /&gt;
&lt;br /&gt;
====Prostate Registration====&lt;br /&gt;
&lt;br /&gt;
The registration and segmentation elements of our algorithm are difficult to separate. Thus, for the 3D shape-driven segmentation part, the shapes must first be aligned through a conformal and area-correction alignment process. The prostate presents a number of difficulties for traditional approaches since there are no easily discernible landmarks. On the other hand, we observed that the surface of the prostate is almost half convex and half concave. The concave region may be captured and used to register the shapes; thus we register the whole shape by registering a certain region on it. This concave region is characterized by its negative mean curvature. We treat the mean curvature as a scalar field defined on the surface, and we have extended the Chan-Vese method (in which one wants to separate the means with respect to the regions defined by the interior and exterior of the evolving active contour) to the case at hand on the prostate surface. The method is implemented in C++ and successfully extracts the concave surface region. It could also be used to extract regions on a surface according to any feature characterized by a scalar field defined on the surface.&lt;br /&gt;
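As a toy analogue of this mean-separation idea (our own simplification, ignoring the surface geometry and contour evolution entirely), one can alternately assign each curvature sample to the closer of two region means and re-estimate the means, i.e. two-means clustering of the scalar field; concave vertices are those with negative mean curvature:&lt;br /&gt;

```python
def separate_means(values, iters=50):
    """Partition scalar samples into two regions with well-separated means."""
    c_in, c_out = min(values), max(values)     # initial region means
    for _ in range(iters):
        inside = [v for v in values if abs(v - c_in) <= abs(v - c_out)]
        outside = [v for v in values if abs(v - c_in) > abs(v - c_out)]
        new_in = sum(inside) / len(inside)
        new_out = sum(outside) / len(outside)
        if (new_in, new_out) == (c_in, c_out):
            break                              # assignments stable: converged
        c_in, c_out = new_in, new_out
    return [1 if abs(v - c_in) <= abs(v - c_out) else 0 for v in values]

# Mean-curvature samples: negative values mark the concave region to extract.
curv = [-0.8, -0.7, -0.75, 0.3, 0.4, 0.35, -0.9, 0.5]
print(separate_means(curv))   # -> [1, 1, 1, 0, 0, 0, 1, 0]
```

The real method plays the same separation game, but with a contour evolving on the prostate surface rather than a hard per-sample assignment.&lt;br /&gt;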
&lt;br /&gt;
In order to incorporate the extracted region as landmarks into the registration process, instead of matching two binary images directly, we transform the binary images into a form that highlights the boundary region. This is done by applying a Gaussian function on the narrow band of the signed distance function of the binary image. The transformed image enjoys the advantages of both the parametric and implicit representations of shapes: it has a compact description, as the parametric representation does, and, as in the implicit representation, it avoids the correspondence problem. Moreover, we incorporate the extracted concave regions into such images for registration, which leads to a better result.&lt;br /&gt;
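A minimal 1D sketch of this boundary-highlighting transform (hypothetical helper names; the real method operates on 2D/3D binary images): compute the signed distance to the mask boundary and apply a Gaussian to it, so the transformed image peaks on the boundary and decays across a narrow band:&lt;br /&gt;

```python
import math

def signed_distance(mask):
    """1D signed distance: negative inside the mask, positive outside."""
    n = len(mask)
    boundary = [i + 0.5 for i in range(n - 1) if mask[i] != mask[i + 1]]
    sdf = []
    for i in range(n):
        d = min(abs(i - b) for b in boundary)
        sdf.append(-d if mask[i] else d)
    return sdf

def boundary_image(mask, sigma=1.0):
    # Gaussian of the signed distance: 1 on the boundary, small far from it
    return [math.exp(-(d / sigma) ** 2 / 2.0) for d in signed_distance(mask)]

mask = [0, 0, 1, 1, 1, 0, 0]
img = boundary_image(mask)
print([round(v, 3) for v in img])
# -> [0.325, 0.882, 0.882, 0.325, 0.882, 0.882, 0.325]
```

The response is largest next to the two boundaries of the mask and falls off symmetrically on either side, which is exactly the narrow-band emphasis the registration uses.&lt;br /&gt;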
&lt;br /&gt;
Finally, in the past year we have developed a particle filtering approach for the general problem of registering two point sets that differ by a rigid body transformation, which may be very useful for this project. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. We treat motion as a local variation in pose parameters obtained from running several iterations of the standard Iterative Closest Point (ICP) algorithm. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence often found in local optimizer functions used to tackle the registration task. In contrast with other techniques, this approach requires no annealing schedule, which reduces computational complexity and maintains the temporal coherency of the state (no loss of information). Also, unlike most alternative approaches for point set registration, we make no geometric assumptions on the two data sets.&lt;br /&gt;
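The particle-filter idea can be illustrated with a small self-contained 2D sketch (all names, the cost, and the perturbation scheme are our own assumptions, not the actual implementation): each particle is a candidate pose (theta, tx, ty), weights come from a nearest-point cost, and resampling plus random perturbation stands in for the stochastic motion dynamics:&lt;br /&gt;

```python
import math, random

def transform(pts, theta, tx, ty):
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in pts]

def cost(moving, fixed):
    # sum of squared nearest-point distances (a crude ICP-style metric)
    return sum(min((mx - fx) ** 2 + (my - fy) ** 2 for fx, fy in fixed)
               for mx, my in moving)

def pf_register(moving, fixed, n_particles=200, iters=30, seed=0):
    rng = random.Random(seed)
    particles = [(rng.uniform(-1, 1), rng.uniform(-2, 2), rng.uniform(-2, 2))
                 for _ in range(n_particles)]
    best = particles[0]
    for it in range(iters):
        weights = [math.exp(-cost(transform(moving, *p), fixed)) for p in particles]
        best = particles[weights.index(max(weights))]
        # resample by weight, then perturb: the stochastic motion step
        picks = rng.choices(particles, weights=weights, k=n_particles)
        sigma = 0.2 / (1 + it)
        particles = [(t + rng.gauss(0, sigma), x + rng.gauss(0, sigma),
                      y + rng.gauss(0, sigma)) for t, x, y in picks]
    return best

tri = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]
target = transform(tri, 0.5, 1.0, -0.5)
theta, tx, ty = pf_register(tri, target)
print(round(theta, 2), round(tx, 2), round(ty, 2))
```

Here the perturbation width simply shrinks with iteration; the real method derives its motion model from a few ICP iterations instead, which is what removes the need for an annealing schedule.&lt;br /&gt;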
&lt;br /&gt;
===Engineering Component (Hayes)===&lt;br /&gt;
===Clinical Component (Fichtinger)===&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:JHU:Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Brain Lesion Analysis in Neuropsychiatric Systemic Lupus Erythematosus (Bockholt)==&lt;br /&gt;
===Overview (Bockholt)===&lt;br /&gt;
===Algorithm Component (Whitaker)===&lt;br /&gt;
===Engineering Component (Pieper)===&lt;br /&gt;
===Clinical Component (Bockholt)===&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:MIND:Roadmap here on the NA-MIC wiki].&lt;br /&gt;
==Roadmap Project: Cortical Thickness for Autism (Hazlett)==&lt;br /&gt;
===Overview (Hazlett)===&lt;br /&gt;
&lt;br /&gt;
A primary goal of the UNC DBP is to examine changes in cortical thickness in children with autism compared to typical controls.  We want to examine group differences in both local and regional cortical thickness, and would also like to examine longitudinal changes in the cortex from ages 2-4 years.  To accomplish this goal, this project will create an end-to-end application within Slicer3 allowing individual and group analysis of regional and local cortical thickness. This workflow will then be applied to our study data (already collected).&lt;br /&gt;
&lt;br /&gt;
===Algorithm Component (Styner)===&lt;br /&gt;
&lt;br /&gt;
The basic steps necessary for the cortical thickness application are: first, tissue segmentation to separate white and gray matter regions; second, cortical thickness measurement; third, cortical correspondence to compare measurements across subjects; and finally, statistical analysis to locally compute group differences.&lt;br /&gt;
Tissue segmentation: We have successfully adapted the UNC segmentation tool itkEMS to Slicer, which we use for segmentations of the young brain. We also created a young brain atlas for the current Slicer3 EM Segment module. Tests have been successful, and a comparative study against itkEMS has shown that further parameter optimization is needed to reach the same quality.&lt;br /&gt;
&lt;br /&gt;
====Cortical thickness measurement====&lt;br /&gt;
The UNC algorithm for the measurement of local cortical thickness given a labeling of white matter and gray matter has been developed into a Slicer3 external module. This module lends itself well for regional analysis of cortical thickness, but less so for local analysis due to its non-symmetric and sparse measurements. Ongoing development is focusing on a symmetric, Laplacian based cortical thickness suitable for local analysis.&lt;br /&gt;
&lt;br /&gt;
====Cortical correspondence (regional)====&lt;br /&gt;
&lt;br /&gt;
For regional correspondence, an existing lobar parcellation atlas is deformably registered using a B-spline registration tool. First tests have been very promising; the release of the corresponding Slicer3 registration module is scheduled to be finished within the next month, and the regional analysis workflow will be available at that time.&lt;br /&gt;
&lt;br /&gt;
====Cortical correspondence (local)====&lt;br /&gt;
Local cortical correspondence requires a two-step process of white/gray surface inflation followed by group-wise correspondence computation. White matter surface extraction and inflation is currently achieved with an external tool and developing a Slicer 3 based solution is a goal in the next year. The group-wise correspondence step has been fully solved, and a Slicer 3 module is already available. Evaluation on real data has shown that our method outperforms the currently widely employed Freesurfer framework. &lt;br /&gt;
&lt;br /&gt;
====Statistical analysis/Hypothesis testing====&lt;br /&gt;
Regional analysis can be done with standard statistical tools such as MANOVA, as there is a limited, relatively small number of regions. Local analysis, on the other hand, needs local non-parametric testing, multiple-comparison correction, and correlative analysis that is not routinely available. We are currently extending the current Slicer3 module designed for statistical shape analysis to be used for this purpose, incorporating a locally applied General Linear Model and a MANCOVA-based testing framework.&lt;br /&gt;
&lt;br /&gt;
===Engineering Component (Miller, Vachet)===&lt;br /&gt;
&lt;br /&gt;
Several of the algorithms for this Clinical Roadmap project were already in software tools utilizing ITK.  These tools have been refactored to be NA-MIC compatible and repackaged as Slicer3 plugins. Slicer3 has been extended to support this Clinical Roadmap by adding transforms as a parameter type that can be passed to and returned by plugins. Slicer3 registration and resampling modules have been refactored to produce and accept transforms as parameters. Slicer3 has also been extended to support nonlinear transformation types (B-Spline and deformation fields) in its data model.&lt;br /&gt;
&lt;br /&gt;
===Clinical Component (Hazlett)===&lt;br /&gt;
So far, the clinical component of this project has involved interfacing with the algorithms and engineering teams to provide the project specifications, feedback, and data (needed for testing).  During this past year, development and programming work has proceeded satisfactorily, and we anticipate being able to test our project hypotheses about cortical thickness in autism by the end of our project period.  Therefore, the primary accomplishment of this first year has been the development and testing of methods that are necessary for this cortical thickness work pipeline.&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this project is available [http://wiki.na-mic.org/Wiki/index.php/DBP2:UNC:Cortical_Thickness_Roadmap here on the NA-MIC wiki].&lt;br /&gt;
&lt;br /&gt;
=Four Infrastructure Topics=&lt;br /&gt;
==Diffusion Image Analysis (Gerig)==&lt;br /&gt;
===Progress===&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:DiffusionImageAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==Structural Analysis (Tannenbaum)==&lt;br /&gt;
===Progress===&lt;br /&gt;
Under Structural Analysis, the main topics of research for NAMIC are structural segmentation, registration techniques and shape analysis. These topics are correlated and research in one often finds application in another. For example, shape analysis can yield useful priors for segmentation, or segmentation and registration can provide structural correspondences for use in shape analysis and so on. &lt;br /&gt;
&lt;br /&gt;
An overview of selected progress highlights under these broad topics follows.&lt;br /&gt;
&lt;br /&gt;
Structural Segmentation&lt;br /&gt;
&lt;br /&gt;
* Directional Based Segmentation&lt;br /&gt;
We have proposed a directional segmentation framework for direction-weighted magnetic resonance imagery by augmenting the Geodesic Active Contour framework with directional information. The classical scalar conformal factor is replaced by a factor that incorporates directionality. We showed mathematically that the optimization problem is well-defined when the factor is a Finsler metric. The calculus of variations or dynamic programming may be used to find the optimal curves. This past year we have applied this methodology to extracting the anchor tract (or centerline) of neural fiber bundles. Further, we have applied it in conjunction with Bayes’ rule to volumetric segmentation for extracting entire fiber bundles. We have also proposed a novel shape prior in the volumetric segmentation to extract tubular fiber bundles.&lt;br /&gt;
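The role of dynamic programming here can be illustrated with a toy direction-weighted shortest-path problem (an illustrative stand-in with invented names, not our Finsler implementation): on a grid, a step costs less when it aligns with a preferred direction field, so the optimal path, found by Dijkstra, follows the field much as the anchor tract follows the fiber directions:&lt;br /&gt;

```python
import heapq

def anchor_path(shape, direction, start, goal):
    """direction[(x, y)] = preferred unit step; misaligned steps cost more."""
    w, h = shape
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float('inf')):
            continue                              # stale heap entry
        x, y = node
        for step in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + step[0], y + step[1]
            if not (0 <= nx < w and 0 <= ny < h):
                continue
            px, py = direction.get(node, (0, 0))
            align = step[0] * px + step[1] * py   # in [-1, 1]
            edge = 1.0 + 2.0 * (1.0 - align)      # aligned step costs 1, reversed 5
            nd = d + edge
            if nd < dist.get((nx, ny), float('inf')):
                dist[(nx, ny)] = nd
                prev[(nx, ny)] = node
                heapq.heappush(pq, (nd, (nx, ny)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# A field pointing right along row y=1 pulls the optimal path through that row.
field = {(x, 1): (1, 0) for x in range(5)}
print(anchor_path((5, 3), field, (0, 1), (4, 1)))
# -> [(0, 1), (1, 1), (2, 1), (3, 1), (4, 1)]
```

The direction-dependent edge cost is the discrete analogue of replacing the scalar conformal factor with a Finsler metric.&lt;br /&gt;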
&lt;br /&gt;
* Stochastic Segmentation&lt;br /&gt;
&lt;br /&gt;
We have continued work this year on developing new stochastic methods for implementing curvature-driven flows for medical tasks such as segmentation. We can now generalize our results to an arbitrary Riemannian surface, which includes the geodesic active contours as a special case. We are also implementing the directional flows based on the anisotropic conformal factor described above using this stochastic methodology. Our stochastic snakes’ models are based on the theory of interacting particle systems. This brings together the theories of curve evolution and hydrodynamic limits, and as such impacts our growing use of joint methods from probability and partial differential equations in image processing and computer vision. We now have working code written in C++ for the two-dimensional case and have worked out the stochastic model of the general geodesic active contour model.&lt;br /&gt;
&lt;br /&gt;
* Statistical PDE Methods for Segmentation&lt;br /&gt;
&lt;br /&gt;
Our objective is to add various statistical measures into our PDE flows for medical imaging. This will allow the incorporation of global image information into the locally defined PDE framework. This year, we developed flows which can separate the distributions inside and outside the evolving contour, and we have also been including shape information in the flows. We have completed a statistically based flow for segmentation using fast marching, and the code has been integrated into Slicer. &lt;br /&gt;
&lt;br /&gt;
* Atlas Renormalization for Improved Brain MR Image Segmentation&lt;br /&gt;
&lt;br /&gt;
Atlas-based approaches can automatically identify detailed brain structures from 3-D magnetic resonance (MR) brain images. However, the accuracy often degrades when processing data acquired on a different scanner platform or pulse sequence than the data used for the atlas training. In this project, we work to improve the performance of an atlas-based whole brain segmentation method by introducing an intensity renormalization procedure that automatically adjusts the prior atlas intensity model to new input data. Validation using manually labeled test datasets shows that the new procedure improves segmentation accuracy (as measured by the Dice coefficient) by 10% or more for several structures including hippocampus, amygdala, caudate, and pallidum. The results verify that this new procedure reduces the sensitivity of the whole brain segmentation method to changes in scanner platforms and improves its accuracy and robustness, which can thus facilitate multicenter or multisite neuroanatomical imaging studies.&lt;br /&gt;
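As an illustration of the renormalization idea (a deliberate simplification with invented names: the scanner change is modeled as a single global linear intensity map, and classification is hard rather than probabilistic), one can alternate between classifying intensities with the current mapped class means and refitting the map by least squares:&lt;br /&gt;

```python
def renormalize(atlas_means, voxels, iters=10):
    a, b = 1.0, 0.0                              # intensity map: new = a*atlas + b
    for _ in range(iters):
        mapped = [a * m + b for m in atlas_means]
        # hard classification of voxels by nearest mapped class mean
        groups = {k: [] for k in range(len(mapped))}
        for v in voxels:
            k = min(range(len(mapped)), key=lambda i: abs(v - mapped[i]))
            groups[k].append(v)
        xs = [atlas_means[k] for k in groups if groups[k]]
        ys = [sum(groups[k]) / len(groups[k]) for k in groups if groups[k]]
        # least-squares line through (atlas mean, observed mean) pairs
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        var = sum((x - mx) ** 2 for x in xs)
        a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
        b = my - a * mx
    return a, b

# Atlas trained at class means 30/60/90; the new scanner scales and shifts them.
atlas = [30.0, 60.0, 90.0]
scan = [36.0, 38.0, 69.0, 71.0, 102.0, 104.0]
a, b = renormalize(atlas, scan)
print(round(a, 2), round(b, 2))   # -> 1.1 4.0
```

The recovered map (scale 1.1, offset 4) moves the atlas means onto the new scanner's intensity scale, after which the usual atlas-based segmentation can proceed.&lt;br /&gt;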
&lt;br /&gt;
* Multiscale Shape Segmentation Techniques&lt;br /&gt;
&lt;br /&gt;
The goal of this project is to represent multiscale variations in a shape population in order to drive the segmentation of deep brain structures, such as the caudate nucleus or the hippocampus. Our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We derived a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We applied our algorithm to the caudate nucleus, a brain structure of interest in the study of schizophrenia. Our validation shows that our algorithm is computationally efficient and outperforms the Active Shape Model (ASM) algorithm by capturing finer shape details.&lt;br /&gt;
&lt;br /&gt;
Registration&lt;br /&gt;
&lt;br /&gt;
* Optimal Mass Transport Registration&lt;br /&gt;
The aim of this project is to provide a computationally efficient non-rigid/elastic image registration algorithm based on the Optimal Mass Transport theory. We use the Monge-Kantorovich formulation of the Optimal Mass Transport problem and implement the gradient flow PDE approach using multi-resolution and multi-grid techniques to speed up the convergence. We also leverage the computational power of general purpose graphics processing units available on standard desktop computing machines to exploit the inherent parallelism in our algorithm. We have implemented 2D and 3D multi-resolution registration using Optimal Mass Transport and are currently working on the registration of 3D datasets. &lt;br /&gt;
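In 1D the Monge-Kantorovich problem has a closed-form solution that makes the underlying idea concrete: the optimal mass-preserving map sends each point to the target location with the same cumulative mass, T = F1^-1(F0). A small sketch with discrete densities (illustrative only; the actual registration solves the 2D/3D gradient-flow PDE):&lt;br /&gt;

```python
def cdf(density):
    """Normalized cumulative mass of a discrete density."""
    total, c, acc = sum(density), [], 0.0
    for m in density:
        acc += m
        c.append(acc / total)
    return c

def transport_map(src, dst):
    """For each source bin, the destination bin holding the same cumulative mass."""
    f0, f1 = cdf(src), cdf(dst)
    t = []
    for x in range(len(src)):
        y = next(i for i, v in enumerate(f1) if v >= f0[x] - 1e-12)
        t.append(y)
    return t

# All mass sits in the two left bins; the target has it in the two right bins.
src = [1.0, 1.0, 0.0, 0.0]
dst = [0.0, 0.0, 1.0, 1.0]
print(transport_map(src, dst))   # -> [2, 3, 3, 3]
```

The mass-carrying bins 0 and 1 map to bins 2 and 3, i.e. the map shifts the mass two bins to the right, exactly the monotone rearrangement that Optimal Mass Transport produces in 1D.&lt;br /&gt;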
&lt;br /&gt;
* Diffusion Tensor Image Processing Tools&lt;br /&gt;
&lt;br /&gt;
We aim to provide methods for computing geodesics and distances between diffusion tensors. One goal is to provide hypothesis testing for differences between groups. This will involve interpolation techniques for diffusion tensors as weighted averages in the metric framework. We will also provide filtering and eddy current correction. This year, we developed a Slicer module for DT-MRI Rician noise removal, developed prototypes of DTI geometry and statistical packages, and began work on a general method for hypothesis testing between diffusion tensor groups. &lt;br /&gt;
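A small sketch of what tensor interpolation in a metric framework looks like (using the log-Euclidean recipe exp((1-t) log A + t log B) on 2x2 tensors as an illustrative stand-in for the framework described above; all function names are ours):&lt;br /&gt;

```python
import math

def eig_sym2(m):
    """Eigenvalues/vectors of the symmetric matrix [[a, b], [b, c]]."""
    a, b, c = m[0][0], m[0][1], m[1][1]
    tr, det = a + c, a * c - b * b
    gap = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + gap, tr / 2 - gap
    if abs(b) < 1e-15:
        v1 = (1.0, 0.0) if a >= c else (0.0, 1.0)
    else:
        norm = math.hypot(l1 - c, b)
        v1 = ((l1 - c) / norm, b / norm)
    v2 = (-v1[1], v1[0])
    return (l1, l2), (v1, v2)

def apply_fn(m, fn):
    """Apply a scalar function to an SPD matrix through its eigenvalues."""
    (l1, l2), (v1, v2) = eig_sym2(m)
    out = [[0.0, 0.0], [0.0, 0.0]]
    for f, v in ((fn(l1), v1), (fn(l2), v2)):
        for i in range(2):
            for j in range(2):
                out[i][j] += f * v[i] * v[j]
    return out

def interpolate(a, b, t):
    la, lb = apply_fn(a, math.log), apply_fn(b, math.log)
    mix = [[(1 - t) * la[i][j] + t * lb[i][j] for j in range(2)] for i in range(2)]
    return apply_fn(mix, math.exp)

A = [[1.0, 0.0], [0.0, 1.0]]
B = [[4.0, 0.0], [0.0, 4.0]]
mid = interpolate(A, B, 0.5)
print([[round(v, 3) for v in row] for row in mid])   # -> [[2.0, 0.0], [0.0, 2.0]]
```

At t = 0.5 the result is the geometric rather than arithmetic mean of the tensors, which is the kind of behavior a metric framework provides and linear averaging does not.&lt;br /&gt;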
&lt;br /&gt;
* Point Set Rigid Registration&lt;br /&gt;
&lt;br /&gt;
We propose a particle filtering scheme for the registration of 2D and 3D point sets undergoing a rigid body transformation, where we incorporate stochastic dynamics to model the uncertainty of the registration process. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. In this work, we treat motion as a local variation in the pose parameters obtained from running a few iterations of the standard Iterative Closest Point (ICP) algorithm. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence as well as provide a dynamical model of uncertainty. In contrast with other techniques, our approach requires no annealing schedule, which reduces computational complexity and maintains the temporal coherency of the state (no loss of information). Also, unlike most alternative approaches for point set registration, we make no geometric assumptions on the two data sets.&lt;br /&gt;
&lt;br /&gt;
* Cortical Correspondence using Particle System&lt;br /&gt;
&lt;br /&gt;
In this project, we want to compute cortical correspondence across populations, using various features such as cortical structure, DTI connectivity, vascular structure, and functional data (fMRI). This presents a challenge because of the highly convoluted surface of the cortex, as well as the different properties of the data features we want to incorporate together. We would like to use a particle-based entropy-minimizing system for the correspondence computation, in a population-based manner. This is advantageous because it does not require a spherical parameterization of the surface, nor does it require the surface to be of spherical topology. It would also eventually enable correspondence computation on the subcortical structures and on the cortical surface using the same framework. To circumvent the disadvantage that particles are assumed to lie on local tangent planes, we plan to first ‘inflate’ the cortex surface. Currently, we are at the testing stage using structural data, namely point locations and sulcal depth (as computed by FreeSurfer).&lt;br /&gt;
&lt;br /&gt;
* Multimodal Atlas &lt;br /&gt;
&lt;br /&gt;
In this work, we propose and investigate an algorithm that jointly co-registers a collection of images while computing multiple templates. The algorithm, called iCluster for Image Clustering, is based on the following idea: given the templates, the co-registration problem becomes simple, reducing to a number of pairwise registration instances. On the other hand, given a collection of images that have been co-registered, an off-the-shelf clustering or averaging algorithm can be used to compute the templates. The algorithm assumes a fixed and known number of template images. We formulate the problem as a maximum likelihood estimation and employ a Generalized Expectation-Maximization algorithm to solve it. In the E-step, we compute membership probabilities. In the M-step, we update the template images as weighted averages of the images, where the weights are the memberships; the template priors are updated, and then a collection of independent pairwise registration instances is performed. The algorithm is currently implemented in the Insight Toolkit (ITK), and we next plan to integrate it into Slicer.&lt;br /&gt;
&lt;br /&gt;
* Groupwise Registration&lt;br /&gt;
&lt;br /&gt;
We aim to provide efficient groupwise registration algorithms for population analysis of anatomical structures. Here we extend a previously demonstrated entropy-based groupwise registration method to include a free-form deformation model based on B-splines. We provide an efficient implementation using stochastic gradient descent in a multi-resolution setting. We demonstrate the method on a set of 50 MRI brain scans and compare the results to a pairwise approach using segmentation labels to evaluate the quality of alignment. Our results indicate that increasing the complexity of the deformation model improves registration accuracy significantly, especially in cortical regions.&lt;br /&gt;
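A toy congealing-style analogue of the entropy criterion (our own simplification: binary 1D images, integer circular shifts in place of B-spline deformations, and greedy coordinate descent in place of stochastic gradient descent): each image picks the shift that minimizes the summed pixel-stack entropy:&lt;br /&gt;

```python
import math

def stack_entropy(images):
    """Sum over pixel columns of the binary entropy of the on-fraction."""
    total = 0.0
    for column in zip(*images):
        p = sum(column) / len(column)
        for q in (p, 1 - p):
            if q > 0:
                total -= q * math.log(q)
    return total

def shift(img, s):
    n = len(img)
    return [img[(i - s) % n] for i in range(n)]

def congeal(images, max_shift=3, sweeps=5):
    shifts = [0] * len(images)
    for _ in range(sweeps):
        for i in range(len(images)):
            # greedily pick the shift for image i that minimizes stack entropy
            shifts[i] = min(range(-max_shift, max_shift + 1),
                            key=lambda s: stack_entropy(
                                [shift(img, shifts[j] if j != i else s)
                                 for j, img in enumerate(images)]))
    return shifts

# Three copies of the same bar, drawn at offsets 0, 1 and 2.
bar = [0, 0, 1, 1, 1, 0, 0, 0]
group = [bar, shift(bar, 1), shift(bar, 2)]
print(congeal(group))
```

When the images are aligned, every pixel column is unanimous and the stack entropy drops to zero, which is the same criterion the full method optimizes over B-spline parameters.&lt;br /&gt;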
&lt;br /&gt;
Shape Analysis&lt;br /&gt;
&lt;br /&gt;
* Shape Analysis Framework Using SPHARM-PDM&lt;br /&gt;
&lt;br /&gt;
The UNC shape analysis is based on an analysis framework for objects with spherical topology, described by sampled spherical harmonics (SPHARM-PDM). The input of the proposed shape analysis is a set of binary segmentations of a single brain structure, such as the hippocampus or caudate. Group tests can be visualized by P-values and by mean difference magnitude and vector maps, as well as maps of the group covariance information. The implementation has reached a stable framework and has been disseminated to several collaborating labs within NAMIC (BWH, Georgia Tech, Utah). Current development focuses on integrating the command line tools into Slicer (v3) via the Slicer execution model. The whole shape analysis pipeline is encapsulated and accessible to the trained clinical collaborator. The current toolset distribution (via NeuroLib) now also contains open data for other researchers to evaluate their shape analysis enhancements.&lt;br /&gt;
&lt;br /&gt;
* Multiscale Shape Analysis&lt;br /&gt;
&lt;br /&gt;
We present a novel method of statistical surface-based morphometry based on the use of non-parametric permutation tests and a spherical wavelet coefficient (SWC) shape representation. As an application, we analyze two brain structures, the caudate nucleus and the hippocampus. We show that the results nicely complement those obtained with shape analysis using a sampled point representation (SPHARM-PDM). We used the UNC pipeline to pre-process the images, and for each triangulated SPHARM-PDM surface, a spherical wavelet description is computed. We then use the UNC statistical toolbox to analyze differences between two groups of surfaces described by the features of choice, namely the 3D spherical wavelet coefficients. This year, we conducted statistical shape analysis of the two brain structures and compared the results to shape analysis using a SPHARM-PDM representation.&lt;br /&gt;
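The permutation-testing machinery is simple to sketch (an illustrative toy on a single scalar feature, e.g. one wavelet coefficient; names and data are ours): the observed difference of group means is ranked against its distribution under random relabelings of the subjects:&lt;br /&gt;

```python
import random

def permutation_test(group_a, group_b, n_perm=10000, seed=1):
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # random relabeling of subjects
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        diff = abs(sum(a) / len(a) - sum(b) / len(b))
        if diff >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)            # add-one avoids a p-value of zero

controls = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3]
patients = [1.6, 1.8, 1.7, 1.5, 1.9, 1.6]
print(permutation_test(controls, patients))      # small p: groups clearly differ
```

No distributional assumption is made; in the surface-based analysis this test is run per feature, followed by multiple-comparison correction across the surface.&lt;br /&gt;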
&lt;br /&gt;
* Population Analysis of Anatomical Variability&lt;br /&gt;
&lt;br /&gt;
In contrast to shape-based segmentation that utilizes a statistical model of the shape variability in one population (typically based on Principal Component Analysis), we are interested in identifying and characterizing differences between two sets of shape examples. We use the discriminative framework to characterize the differences in shape by training a classifier function and studying its sensitivity to small perturbations in the input data. An additional benefit is that the resulting classifier function can be used to label new examples into one of the two populations, e.g., for early detection in population screening or prediction in longitudinal studies. We have implemented stand-alone code for training a classifier, jackknifing, and permutation testing, and are currently porting the software into ITK. We have also started exploring alternative, surface-based descriptors which are promising in improving our ability to detect and characterize subtle differences in the shape of anatomical structures due to diseases such as schizophrenia.&lt;br /&gt;
&lt;br /&gt;
* Shape Analysis with Overcomplete Wavelets&lt;br /&gt;
&lt;br /&gt;
In this work, we extend the Euclidean wavelets to the sphere. The resulting over-complete spherical wavelets are invariant to the rotation of the spherical image parameterization. We apply the over-complete spherical wavelet to cortical folding development and show significantly consistent results as well as improved sensitivity compared with the previously used bi-orthogonal spherical wavelet. In particular, we are able to detect developmental asymmetry in the left and right hemispheres.&lt;br /&gt;
&lt;br /&gt;
* Shape-based Segmentation and Registration&lt;br /&gt;
&lt;br /&gt;
When there is little or no contrast along boundaries of different regions, standard image segmentation algorithms perform poorly and segmentation is done manually using prior knowledge of the shape and relative location of underlying structures. We have proposed an automated approach guided by covariant shape deformations of neighboring structures, which is an additional source of prior knowledge. Captured by a shape atlas, these deformations are transformed into a statistical model using the logistic function. The mapping between atlas and image space, structure boundaries, anatomical labels, and image inhomogeneities are estimated simultaneously within an expectation-maximization formulation of the maximum a posteriori (MAP) estimation problem. These results are then fed into an Active Mean Field approach, which views them as priors to a Mean Field approximation with a curve length prior. Our method filters out noise as compared to thresholding using initial likelihoods, and it captures multiple structures as in the brain (where both major brain compartments and subcortical structures are obtained) because it naturally evolves families of curves. The algorithm is currently implemented in 3D Slicer Version 2.6, and a beta version is available in 3D Slicer Version 3.&lt;br /&gt;
&lt;br /&gt;
* Spherical Wavelets&lt;br /&gt;
&lt;br /&gt;
In this project, we apply a spherical wavelet transformation to extract shape features of cortical surfaces reconstructed from magnetic resonance images (MRI) of a set of subjects. The spherical wavelet transformation can characterize the underlying functions in a local fashion in both space and frequency, in contrast to spherical harmonics that have a global basis set. We perform principal component analysis (PCA) on these wavelet shape features to study patterns of shape variation within a normal population from coarse to fine resolution. In addition, we study the development of cortical folding in newborns using the Gompertz model in the wavelet domain, allowing us to characterize the order of development of large-scale and finer folding patterns independently. We develop an efficient method to estimate the regularized Gompertz model based on the Broyden–Fletcher–Goldfarb–Shanno (BFGS) approximation. Promising results are presented using both PCA and the folding development model in the wavelet domain. The cortical folding development model provides quantitative anatomical information regarding macroscopic cortical folding development and may be of potential use as a biomarker for early diagnosis of neurological deficits in newborns.&lt;br /&gt;
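The Gompertz model referred to above is f(t) = A exp(-b exp(-c t)). As a hedged toy fit (our own names; a coarse grid search over the shape parameters with a closed-form least-squares scale, standing in for the BFGS-based estimation):&lt;br /&gt;

```python
import math

def gompertz(t, A, b, c):
    return A * math.exp(-b * math.exp(-c * t))

def fit(ts, ys):
    """Grid-search b, c; for each pair the scale A is solved in closed form."""
    best = None
    for b in [x / 10 for x in range(1, 51)]:
        for c in [x / 10 for x in range(1, 31)]:
            g = [math.exp(-b * math.exp(-c * t)) for t in ts]
            # least squares over A alone: A = sum(g*y) / sum(g*g)
            A = sum(gi * yi for gi, yi in zip(g, ys)) / sum(gi * gi for gi in g)
            err = sum((A * gi - yi) ** 2 for gi, yi in zip(g, ys))
            if best is None or err < best[0]:
                best = (err, A, b, c)
    return best[1:]

# Synthetic growth data generated from known parameters, then recovered.
ts = list(range(10))
ys = [gompertz(t, 2.0, 3.0, 0.8) for t in ts]
A, b, c = fit(ts, ys)
print(round(A, 2), round(b, 2), round(c, 2))   # -> 2.0 3.0 0.8
```

In the folding analysis the fit is done per wavelet scale, so large-scale and fine folding patterns each get their own growth curve.&lt;br /&gt;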
&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
* MIT: Polina Golland, Kilian Pohl, Sandy Wells, Eric Grimson, Mert R. Sabuncu&lt;br /&gt;
* UNC: Martin Styner, Ipek Oguz, Xavier Barbero &lt;br /&gt;
* Utah: Ross Whitaker, Guido Gerig, Suyash Awate, Tolga Tasdizen, Tom Fletcher, Joshua Cates, Miriah Meyer &lt;br /&gt;
* GaTech: Allen Tannenbaum, John Melonakos, Vandana Mohan, Tauseef ur Rehman, Shawn Lankton, Samuel Dambreville, Yi Gao, Romeil Sandhu, Xavier Le Faucheur, James Malcolm &lt;br /&gt;
* Isomics: Steve Pieper &lt;br /&gt;
* GE: Bill Lorensen, Jim Miller &lt;br /&gt;
* Kitware: Luis Ibanez, Karthik Krishnan&lt;br /&gt;
* UCLA: Arthur Toga, Michael J. Pan, Jagadeeswaran Rajendiran &lt;br /&gt;
* BWH: Sylvain Bouix, Motoaki Nakamura, Min-Seong Koo, Martha Shenton, Marc Niethammer, Jim Levitt, Yogesh Rathi, Marek Kubicki, Steven Haker&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:StructuralImageAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==fMRI Analysis (Golland)==&lt;br /&gt;
===Progress===&lt;br /&gt;
One of the major goals in analysis of fMRI data is the detection of functionally homogeneous networks in the brain. Over the past year, we demonstrated a method for identifying large-scale networks in brain activation that simultaneously estimates the optimal representative time courses that summarize the fMRI data well and the partition of the volume into a set of disjoint regions that are best explained by these representative time courses.&lt;br /&gt;
&lt;br /&gt;
In the classical functional connectivity analysis, networks of interest are defined based on correlation with the mean time course of a user-selected `seed' region. Further, the user also has to specify a subject-specific threshold at which correlation values are deemed significant. In this project, we simultaneously estimate the optimal representative time courses that summarize the fMRI data well and the partition of the volume into a set of disjoint regions that are best explained by these representative time courses. This approach to functional connectivity analysis offers two advantages. First, it removes the sensitivity of the analysis to the details of the seed selection. Second, it substantially simplifies group analysis by eliminating the need for the subject-specific threshold. Our experimental results indicate that the functional segmentation provides a robust, anatomically meaningful and consistent model for functional connectivity in fMRI.&lt;br /&gt;
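The alternation described above is, in its simplest form, k-means over voxel time courses: assign each voxel to the representative time course it correlates with best, then recompute each representative as the mean of its voxels. A toy version (illustrative names and data; the actual method is a probabilistic model of the whole volume):&lt;br /&gt;

```python
def correlate(a, b):
    """Pearson correlation of two time courses."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def functional_segmentation(timecourses, reps, iters=20):
    labels = [0] * len(timecourses)
    for _ in range(iters):
        # assignment step: best-correlated representative per voxel
        labels = [max(range(len(reps)), key=lambda k: correlate(tc, reps[k]))
                  for tc in timecourses]
        # update step: each representative becomes the mean of its voxels
        for k in range(len(reps)):
            members = [tc for tc, l in zip(timecourses, labels) if l == k]
            if members:
                reps[k] = [sum(v) / len(members) for v in zip(*members)]
    return labels

# Two "networks": voxels 0-2 follow an up-down pattern, voxels 3-5 the reverse.
voxels = [[1, 0, 1, 0], [1, 0.1, 0.9, 0], [0.9, 0, 1, 0.1],
          [0, 1, 0, 1], [0.1, 0.9, 0, 1], [0, 1, 0.1, 0.9]]
print(functional_segmentation(voxels, reps=[[1, 0, 1, 0], [0, 1, 0, 1]]))
# -> [0, 0, 0, 1, 1, 1]
```

Because the representatives and the partition are estimated together, no seed region or per-subject correlation threshold is needed.&lt;br /&gt;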
&lt;br /&gt;
We are currently exploring applications of this methodology to characterizing connectivity in resting-state data in clinical populations. We are also comparing the empirical findings with the results of ICA decomposition, which is commonly used for data-driven fMRI analysis. Our goal in this study is to identify differences in connectivity between the patient populations and normal controls.&lt;br /&gt;
&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
* MIT: Polina Golland, Danial Lashkari, Bryce Kim&lt;br /&gt;
* Harvard/BWH: Sylvain Bouix, Martha Shenton, Marek Kubicki&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC_Internal_Collaborations:fMRIAnalysis here on the NA-MIC wiki].&lt;br /&gt;
==NA-MIC Kit Theme (Schroeder)==&lt;br /&gt;
===Progress===&lt;br /&gt;
===Key Investigators===&lt;br /&gt;
* Kitware - Will Schroeder (Core 2 PI), Sebastien Barre, Luis Ibanez, Bill Hoffman&lt;br /&gt;
* GE - Jim Miller, Xiaodong Tao&lt;br /&gt;
* Isomics - Steve Pieper&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
Additional Information for this topic is available [http://wiki.na-mic.org/Wiki/index.php/NA-MIC-Kit here on the NA-MIC wiki].&lt;br /&gt;
==Other Projects==&lt;br /&gt;
Any Project(s) not covered by the 8 sections above&lt;br /&gt;
&lt;br /&gt;
==Highlights (Schroeder)==&lt;br /&gt;
===EM Segmenter or TBD===&lt;br /&gt;
===DTI progress or TBD===&lt;br /&gt;
===Outreach (Gollub)===&lt;br /&gt;
&lt;br /&gt;
NAMIC outreach is a joint effort of Cores 4, 5 and 6.  The various mechanisms by which we ensure that the tools developed by NAMIC are rapidly and successfully deployed to the widest possible extent within the scientific community are closely integrated.  This begins with the immediate posting of all software tools, interim updates and associated documentation via the NAMIC and Slicer wiki pages (links).  The concerted effort to provide a harmonious visualization and analysis platform (Slicer 3) that enables the integration of the software algorithms of all Core 1 laboratories drives the sequence of development of training materials.  With the January 2008 release of Slicer 3 in beta format, we prepared the first of the Slicer 3 based PowerPoint tutorials, which guide new users through the process of loading, interacting with, and saving data in Slicer 3.  Given the intense and successful effort at engineering this platform to facilitate the process of integrating new command-line modules of image analysis software into the platform, our second tutorial targeted software developers.  The &amp;quot;Hello World&amp;quot; tutorial guides a programmer step by step through the process of integrating a command-line tool into Slicer 3.  Both these tutorials are available via the web (link).  These tutorials have been thoroughly tested by using them in large workshops (see next) to ensure that they are robust across platforms (Linux, Mac, PC) and can be used successfully by users across a wide range of training backgrounds.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In June 2007, as a satellite event to the international Organization for Human Brain Mapping annual meeting in Chicago, IL, we ran an 8-hour workshop on analysis of Diffusion Imaging Data (link); it was our final Slicer workshop based on the Slicer 2.7 release.  The workshop rapidly filled after posting; the 50 participants represented 9 countries from around the world, 14 states within the US, and 40 different laboratories, including 2 NIH institutes.  The single &amp;quot;no-show&amp;quot; was due to a European flight cancellation.  The attendees, with backgrounds in basic or clinical neurosciences, physics, image processing, or computer science, and ranging from full professors to new graduate students, were very comfortable learning together.  The feedback from the workshop attendees was uniformly positive, with 100% reporting that they would recommend the workshop to others and 50% planning to apply the tools and information they learned to their own work.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In January 2008 we debuted the &amp;quot;Hello World&amp;quot; tutorial at the NAMIC AHM in Salt Lake City to an audience of our project members and collaborators.  Feedback from this very constructive session was used to make significant improvements in the presentation and delivery of the material.  In February 2008 we debuted the users tutorial at a workshop hosted by the Surgical Planning Laboratory at BWH; again, feedback was used to make significant improvements in the presentation and delivery of the material.  In April of 2008 we ran an all-day workshop, hosted by UNC (get details right), for users and developers that incorporated both tutorials.  It was attended by approximately 20 individuals from a wide range of backgrounds.  Time was taken to ensure that all participants gained a significant understanding of the new software, sufficient to ensure their successful use of it following the workshop.  &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
This year saw the publication of a peer-reviewed manuscript that describes the NAMIC approach to outreach including our multi-disciplinary approach, our integration of theory  into practice as driven by a clinical goal, and the translation of concepts into skills through interactive instructor led training sessions (Pujol S, Kikinis R, Gollub R: Lowering the barriers inherent in translating advances in neuroimage analysis to clinical research applications, Academic Radiology 15: 114-118, 2008, add link to Publication DB).&lt;br /&gt;
* Text here about Project Events 5 &amp;amp; 6 from Tina if not already included elsewhere.&lt;br /&gt;
* Text here about the MICCAI Open Source Workshop if not already included elsewhere (Steve?)&lt;br /&gt;
* Slicer IGT event December 2007 (tina?)&lt;br /&gt;
* Wiki to web&lt;br /&gt;
* Impact as measured by number of downloads of tutorial materials (help someone)&lt;br /&gt;
* Should the DTI tractography validation project be written up somewhere, if so where?  I will do it if it isn't already assigned.&lt;br /&gt;
&lt;br /&gt;
==Impact and Value to Biocomputing (Miller)==&lt;br /&gt;
NA-MIC impacts Biocomputing through a variety of mechanisms.  First,&lt;br /&gt;
NA-MIC produces scientific results, methodologies, workflows,&lt;br /&gt;
algorithms, imaging platforms, and software engineering tools and&lt;br /&gt;
paradigms in an open environment that contributes directly to the body of&lt;br /&gt;
knowledge available to the field. Second, NA-MIC science and&lt;br /&gt;
technology enables the entire medical imaging community to build on&lt;br /&gt;
NA-MIC results, methods, and techniques, to concentrate on the new&lt;br /&gt;
science instead of developing supporting infrastructure, to leverage&lt;br /&gt;
NA-MIC scientists and engineers to adapt NA-MIC technology to new&lt;br /&gt;
problem domains, and to leverage NA-MIC infrastructure to distribute&lt;br /&gt;
their own technology to a larger community.&lt;br /&gt;
&lt;br /&gt;
===Impact within the Center===&lt;br /&gt;
Within the center, NA-MIC has formed a community around its software&lt;br /&gt;
engineering tools, imaging platforms, algorithms, and clinical&lt;br /&gt;
workflows. The NA-MIC calendar includes the All Hands Meeting and&lt;br /&gt;
Winter Project Week, the Spring Algorithm Meeting, the Summer Project&lt;br /&gt;
Week, Slicer3 Mini-Retreats, Core Site Visits, Training Workshops, and weekly telephone&lt;br /&gt;
conferences.&lt;br /&gt;
&lt;br /&gt;
The NA-MIC software engineering tools (CMake, Dart, CTest, CPack) have&lt;br /&gt;
enabled the development and distribution of a cross-platform, nightly&lt;br /&gt;
tested, end-user application, Slicer3, that is a complex union of&lt;br /&gt;
novel application code, visualization tools (VTK), imaging libraries&lt;br /&gt;
(ITK, TEEM), user interface libraries (Tk, KWWidgets), and scripting&lt;br /&gt;
languages (TCL, Python). The NA-MIC software engineering tools have been&lt;br /&gt;
essential in the development and distribution of the Slicer3 imaging&lt;br /&gt;
platform to the NA-MIC community.&lt;br /&gt;
&lt;br /&gt;
NA-MIC's end-user application, Slicer3, supports the research within&lt;br /&gt;
NA-MIC by providing a base application for visualization and data&lt;br /&gt;
management. Slicer3 also supports the research within NA-MIC by&lt;br /&gt;
providing plugin mechanisms which allow researchers to quickly and&lt;br /&gt;
easily integrate and distribute their technology with Slicer3. Slicer3&lt;br /&gt;
is available to all center participants and the external community&lt;br /&gt;
through its source code repository, official binary releases, and&lt;br /&gt;
unofficial nightly binary snapshots.&lt;br /&gt;
&lt;br /&gt;
NA-MIC drives the development of platforms and algorithms through the&lt;br /&gt;
needs and research of its DBPs. Each DBP has selected specific&lt;br /&gt;
workflows and roadmaps as focal points for development with a goal of&lt;br /&gt;
providing the community with complete end-to-end solutions using&lt;br /&gt;
NA-MIC tools. The community will be able to reproduce these workflows&lt;br /&gt;
and roadmaps in their own research programs.&lt;br /&gt;
&lt;br /&gt;
NA-MIC algorithms are designed and used to address specific needs of&lt;br /&gt;
the DBPs. Multiple solution paths are explored and compared within&lt;br /&gt;
NA-MIC, resulting in recommendations to the field. The NA-MIC&lt;br /&gt;
algorithm groups collaborate and orchestrate the solutions to the&lt;br /&gt;
DBP workflows and roadmaps.&lt;br /&gt;
&lt;br /&gt;
===Impact within NIH Funded Research===&lt;br /&gt;
Within NIH funded research, NA-MIC is the NCBC collaborating center for three R01's: &amp;quot;Automated FE Mesh Development&amp;quot;, &amp;quot;Measuring Alcohol and Stress Interactions with Structural and Perfusion MRI&amp;quot;, and &amp;quot;An Integrated System for Image-Guided Radiofrequency Ablation of Liver Tumors&amp;quot;. Several other proposals have been submitted and are under&lt;br /&gt;
evaluation for the &amp;quot;Collaborations with NCBC PAR&amp;quot;. NA-MIC also&lt;br /&gt;
collaborates on the Slicer3 platform with the NIH funded Neuroimage&lt;br /&gt;
Analysis Center and the National Center for Image-Guided Therapy. The&lt;br /&gt;
NIH funded &amp;quot;BRAINS Morphology and Image Analysis&amp;quot; project is also&lt;br /&gt;
leveraging NA-MIC and Slicer3 technology. NA-MIC collaborates with the&lt;br /&gt;
NIH funded Neuroimaging Informatics Tools and Resources Clearinghouse&lt;br /&gt;
on distribution of Slicer3 plugin modules.&lt;br /&gt;
&lt;br /&gt;
===National and International Impact===&lt;br /&gt;
NA-MIC events and tools garner national and international interest.&lt;br /&gt;
Over 100 researchers participated in the NA-MIC All Hands Meeting and&lt;br /&gt;
Winter Project Week in January 2008. Many of these participants were&lt;br /&gt;
from outside of NA-MIC, attending the meetings to gain access to the&lt;br /&gt;
NA-MIC tools and researchers. These external researchers are&lt;br /&gt;
contributing ideas and technology back into NA-MIC. In fact, a&lt;br /&gt;
breakout session at the Winter Project Week on &amp;quot;Geometry and Topology&lt;br /&gt;
Processing of Meshes&amp;quot; was organized by four researchers from outside&lt;br /&gt;
of NA-MIC.&lt;br /&gt;
&lt;br /&gt;
Components of the NA-MIC kit are used globally.  The software&lt;br /&gt;
engineering tools of CMake, Dart 2 and CTest are used by many open&lt;br /&gt;
source projects and commercial applications. For example, the K&lt;br /&gt;
Desktop Environment (KDE) for Linux and Unix workstations uses CMake&lt;br /&gt;
and Dart. KDE is one of the largest open source projects in the&lt;br /&gt;
world. Many open source projects and commercial products are&lt;br /&gt;
benefiting from the NA-MIC related contributions to ITK and&lt;br /&gt;
VTK. Finally, Slicer 3 is being used as an image analysis&lt;br /&gt;
platform in several fields outside of medical image analysis, in&lt;br /&gt;
particular, biological image analysis, astronomy, and industrial&lt;br /&gt;
inspection.&lt;br /&gt;
&lt;br /&gt;
NA-MIC science is recognized by the medical imaging community. Over&lt;br /&gt;
100 NA-MIC related publications are listed on PubMed. Many of these&lt;br /&gt;
publications are in the most prestigious journals and conferences in the&lt;br /&gt;
field. Portions of the DBP workflows and roadmaps are already being&lt;br /&gt;
utilized by researchers in the broader community and in the&lt;br /&gt;
development of commercial products.&lt;br /&gt;
&lt;br /&gt;
NA-MIC sponsored several events to promote NA-MIC tools and&lt;br /&gt;
methodologies.  NA-MIC co-sponsored the &amp;quot;Third Annual Open Source&lt;br /&gt;
Workshop&amp;quot; at the Medical Image Computing and Computer-Assisted&lt;br /&gt;
Intervention (MICCAI) 2007 conference.  The proceedings of the&lt;br /&gt;
workshop are published on the electronic Insight Journal, another&lt;br /&gt;
NIH-funded activity. NA-MIC sponsored three training workshops on&lt;br /&gt;
NA-MIC tools for the Biocomputing community in this fiscal year and&lt;br /&gt;
plans to hold sessions at upcoming MICCAI and RSNA conferences.&lt;br /&gt;
&lt;br /&gt;
==NA-MIC Timeline (Whitaker)==&lt;br /&gt;
&lt;br /&gt;
==Appendix A Publications (Kapur)==&lt;br /&gt;
These will be mined from the SPL publications database.  All core PIs need to ensure that all NA-MIC publications are in the publications database by May 15.&lt;br /&gt;
&lt;br /&gt;
==Appendix B EAB Report and Response (Kapur)==&lt;br /&gt;
===EAB Report===&lt;br /&gt;
===Response to EAB Report===&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DBP2:Queens&amp;diff=14813</id>
		<title>DBP2:Queens</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DBP2:Queens&amp;diff=14813"/>
		<updated>2007-08-25T11:20:13Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;background: #f0f0f0;font: smaller;color: black;font-style: italic;&amp;quot;&amp;gt;[[DBP2]] &amp;amp;gt; DBP2:JHU&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font size=&amp;quot;+2&amp;quot;&amp;gt;Segmentation and Registration Tools for Robotic Prostate Interventions&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Team and Institute==&lt;br /&gt;
* PI: Gabor Fichtinger, PhD: gabor at cs.queensu.ca&lt;br /&gt;
* co-PI: Purang Abolmaesumi, PhD: purang at cs.queensu.ca&lt;br /&gt;
* Queen's Engineering Contact: David Gobbi, PhD: dgobbi at cs.queensu.ca&lt;br /&gt;
* Hopkins Engineering Contact: Csaba Csoma, BSc, Johns Hopkins University, csoma at jhu.edu&lt;br /&gt;
* NA-MIC Engineering Contact: Katie Hayes, MSc, Brigham and Women's Hospital, hayes at bwh.harvard.edu&lt;br /&gt;
* NA-MIC Algorithms Contact: Allen Tannenbaum, PhD, GeorgiaTech, tannenba at ece.gatech.edu&lt;br /&gt;
&lt;br /&gt;
* '''Affiliation/Institution:''' Queen's University &amp;amp; Johns Hopkins University&lt;br /&gt;
&lt;br /&gt;
==Research Goals==&lt;br /&gt;
The Queen’s &amp;amp; Hopkins teams are developing novel systems and procedures for prostate cancer interventions, such as biopsy and needle-based local therapies. &lt;br /&gt;
&lt;br /&gt;
Prostate cancer is the most common non-cutaneous cancer in American men. In 2007 there will be an estimated 220,000 new cases of prostate cancer and 28,000 deaths caused by prostate cancer in the United States alone. [1]&lt;br /&gt;
&lt;br /&gt;
The current standard of care for verifying the existence of prostate cancer is transrectal ultrasound (TRUS) guided biopsy. TRUS provides limited diagnostic accuracy and image resolution. In a study [2] the authors conclude that TRUS is not accurate for tumor localization and therefore the precise identification and sampling of individual cancerous tumor sites is limited. As a result, the sensitivity of TRUS biopsy is only between 60% and 85%. [3, 4]&lt;br /&gt;
&lt;br /&gt;
Targeted biopsies of suspicious areas identified by MRI could potentially increase the sensitivity of prostate biopsies. To address this problem, the investigators have several active research projects in prostate biopsy and therapy under direct MRI guidance inside the bore. We have developed and clinically tested a semi-robotic device and system for the planning and execution of prostate biopsy under MRI guidance [5]. We have conducted several clinical trials [5], and more are to follow. The generic workflow is as follows:&lt;br /&gt;
 &lt;br /&gt;
# '''Pre-Op:''' segment the prostate, identify suspicious areas, plan targets;&lt;br /&gt;
# '''Intra-Op:''' import plan, update plan, execute the biopsy/therapy&lt;br /&gt;
# '''Post-Op:''' compare post-op  data with plan, evaluate technical variables&lt;br /&gt;
&lt;br /&gt;
Currently, these functions are achieved by fragmented in-house code, some of it based on VTK/ITK.&lt;br /&gt;
&lt;br /&gt;
The objective of this DBP will be professional-grade clinical software engineering of existing and upcoming functions in Slicer. This will allow the team to focus on project-specific tasks and to benefit from the advanced IGT capabilities of Slicer.&lt;br /&gt;
&lt;br /&gt;
==Experimental Data==&lt;br /&gt;
The system for MRI-guided transrectal prostate interventions was tested in patients [5], and a new embodiment has recently been tested in phantom experiments at NIH (Bethesda, MD) on a 3T Philips Intera MRI scanner (Philips Medical Systems, Best, NL) using standard MR-compatible biopsy needles and non-artifact-producing glass needles [7]. The system has been tried on humans at NIH. Replication of the system for multiple collaborating clinical sites (Princess Margaret Hospital in Toronto, BWH in Boston, Johns Hopkins in Baltimore) is in progress.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Patient Data:===&lt;br /&gt;
Typically, 3D axial MRI prostate datasets (patient positions mixed between prone and supine) acquired using different endorectal coils (coil diameter = 13 mm for two datasets, coil diameter = 26 mm for the third) were used for algorithm evaluation.  The scans were performed on a Philips Intera 3T MRI system; T2-weighted images were acquired using a Spin Echo (SE) sequence with the following parameters: SENSE protocol with an acceleration factor of 1; TE/TR = 180 / 7155 ms for some datasets, TE/TR = 120 / 7155 ms for others; matrix 256 x 256; field of view 140 x 140 mm; voxel size 0.55 x 0.55 mm; slice thickness 3 mm.&lt;br /&gt;
&lt;br /&gt;
===Phantom Data:===&lt;br /&gt;
'''Biopsy Needle Accuracies:''' The manipulator was placed in a prostate phantom and its initial position was registered. Twelve targets were selected within all areas of the prostate on T2 weighted axial TSE images. Targets one through four were selected in the base of the prostate, targets five through eight in the mid gland, and targets nine through twelve in the apex of the prostate. For each target, the targeting program calculated the necessary targeting parameters for the needle placement.&lt;br /&gt;
&lt;br /&gt;
The phantom was pulled out of the MRI scanner on the scanner table, the physician rotated the manipulator, adjusted the needle angle and inserted the biopsy needle according to the displayed parameters.&lt;br /&gt;
&lt;br /&gt;
The phantom was rolled back into the scanner to confirm the location of the needle on axial TSE proton density images, which show the void created by the biopsy needle tip close to the target point. The in-plane error for each of the twelve biopsies, defined as the distance of the target to the biopsy needle line, was subsequently calculated to assess the accuracy of the system.&lt;br /&gt;
&lt;br /&gt;
The needle line was defined by finding the first and the last slice of the acquired confirmation volume where the needle void is clearly visible. The center of the needle void on the first slice and the center of the void on the last slice define the needle line. The out-of-plane error is not critical in biopsy procedures, due to the length of the biopsy core, and was not calculated. Hence, for the purposes of accuracy, there is no need for a more precise motorized needle insertion. The average in-plane error for the biopsy needles was 2.1 mm, with a maximum error of 2.9 mm.&lt;br /&gt;
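The in-plane error described above, the distance from a target point to the needle line through the two void centers, is a standard point-to-line computation. A minimal sketch, with hypothetical coordinates in mm:&lt;br /&gt;

```python
import math

def in_plane_error(target, void_first, void_last):
    """Distance (mm) from a target point to the needle line defined by the
    needle-void centers on the first and last confirmation slices."""
    ax, ay, az = void_first
    dx, dy, dz = (void_last[0] - ax, void_last[1] - ay, void_last[2] - az)
    wx, wy, wz = (target[0] - ax, target[1] - ay, target[2] - az)
    # projection of the target offset onto the needle direction
    t = (wx * dx + wy * dy + wz * dz) / (dx * dx + dy * dy + dz * dz)
    # perpendicular offset of the target from the needle line
    ex, ey, ez = (wx - t * dx, wy - t * dy, wz - t * dz)
    return math.sqrt(ex * ex + ey * ey + ez * ez)

# Hypothetical coordinates (mm): needle roughly along the z axis
print(in_plane_error((10.0, 5.0, 30.0), (0.0, 0.0, 0.0), (0.0, 0.0, 60.0)))
```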
&lt;br /&gt;
'''Glass Needle Accuracies:''' The void created by the biopsy needle is mostly due to susceptibility artifact caused by the metallic needle. The void is not concentric around the biopsy needle and depends on the orientation of the needle to the direction of the main magnetic field in the scanner (B0), and the direction of the spatially encoding magnetic field gradients [6]. Consequently, center of needle voids do not necessarily correspond to actual needle centers.&lt;br /&gt;
&lt;br /&gt;
Since the same imaging sequence and a similar needle orientation are used for all targets in a procedure, a systematic shift between the needle void and the actual needle might occur, which introduces a bias into the accuracy calculations. To explore this theory, every biopsy needle placement in the prostate phantom was followed by the placement of a glass needle to the same depth. The void created by the glass needle is caused purely by the lack of protons in the glass compared to the surrounding tissue, and is thus artifact-free and concentric to the needle. The location of the glass needle was again confirmed by acquiring axial TSE proton density images. The average in-plane error for the glass needles was 1.3 mm, with a maximum error of 1.7 mm.&lt;br /&gt;
&lt;br /&gt;
The procedure time for six needle biopsies, not including the glass needle insertions, was measured at 45 minutes.&lt;br /&gt;
&lt;br /&gt;
==Current Image Software==&lt;br /&gt;
The targeting program runs on a laptop computer located in the control room. The only data transfers between the laptop and the scanner computer are DICOM image transfers. The fiber-optic encoders of the robot interface with the laptop computer via a USB counter (USDigital, Vancouver, Washington).&lt;br /&gt;
&lt;br /&gt;
The targeting software displays the acquired MR images, provides the automatic segmentation for the initial registration of the manipulator, allows the physician to select targets for needle placements, provides targeting parameters for the placement of the needle, and tracks the rotation and needle-angle changes reported by the encoders while the manipulator is moved on target.&lt;br /&gt;
&lt;br /&gt;
After targeting, the software overlays the target and the projected needle path on the confirmation volume scan. This allows the physician to quickly assess the success of the intervention.&lt;br /&gt;
&lt;br /&gt;
==Image Processing Needs==&lt;br /&gt;
Although the current software covers the intervention needs of the first project, additional functions are necessary to allow easy and quick access to the data before, during, and after the procedure, and to accommodate the needs of the other two projects.&lt;br /&gt;
&lt;br /&gt;
Segmentation and deformable registration functions, 3D visualization instead of the current 2D view, and the extensive data analysis technology of Slicer are all on the project's software specification list.&lt;br /&gt;
&lt;br /&gt;
Basic requirements are good memory management and stable software: since the program runs on a laptop, only very limited resources (CPU and memory) are available.&lt;br /&gt;
&lt;br /&gt;
The automatic segmentation algorithms are not always accurate, so the software should provide interactive correction capabilities, such as moving a slider to change the threshold followed by re-segmentation. Another interactive function is modification of the proposed needle path within the robot's constraints.&lt;br /&gt;
&lt;br /&gt;
LPS coordinate system: During the procedure, targets are selected using the 2D projection image obtained from the scanner, and the target coordinates are in the DICOM image coordinate system. This coordinate system is also used to display parameters for the manual prescription and for the real-time tracking.&lt;br /&gt;
&lt;br /&gt;
Each landmarking defines an independent coordinate system, so Frame of Reference (FoR) correspondence between volumes is essential for patient safety. The operating personnel should not be able to use registration data from one volume to target another volume if there is no transformation between the two coordinate systems.&lt;br /&gt;
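This safety rule can be sketched as a guard that refuses to apply targeting data across volumes. FrameOfReferenceUID is the standard DICOM attribute for this; the dictionary-based volume metadata and function name here are illustrative assumptions, not the actual system's API.&lt;br /&gt;

```python
def assert_same_frame_of_reference(plan_vol, target_vol):
    """Guard: registration/targets defined on one volume may only be applied
    to another volume if both share a DICOM Frame of Reference UID.
    Volumes are represented here as plain metadata dicts (a simplification)."""
    uid_a = plan_vol.get("FrameOfReferenceUID")
    uid_b = target_vol.get("FrameOfReferenceUID")
    if not uid_a or not uid_b or uid_a != uid_b:
        raise ValueError(
            "Frames of reference differ or are unknown; an explicit "
            "transformation between the volumes is required before targeting."
        )

# Hypothetical usage: planning and confirmation scans from the same session
planning = {"FrameOfReferenceUID": "1.2.840.99.1"}
confirmation = {"FrameOfReferenceUID": "1.2.840.99.1"}
assert_same_frame_of_reference(planning, confirmation)  # passes silently
```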
&lt;br /&gt;
With OpenTracker-capable MRI scanners, we would like to use real-time needle tracking.&lt;br /&gt;
&lt;br /&gt;
==Summary==&lt;br /&gt;
Manifold benefits exist for both NA-MIC and the Brigham-Hopkins joint program in MRI-guided prostate interventions, owing to existing loops of collaborations, cross-compatibility of research (MR guided prostate interventions), and shared Slicer/VTK/ITK based software platforms.&lt;br /&gt;
&lt;br /&gt;
The project's clinical partners are based in the intramural research program of the National Cancer Institute. Thus the proposed NA-MIC DBP will tie a significant segment of extramural cancer research into a prominent intramural effort, thereby leading to better understanding, coherence, and active collaboration between these otherwise disjoint efforts. For NA-MIC the benefits are also tangible: the functions will be developed in a controlled and professional environment in the CISST ERC, which has been in close collaboration with NA-MIC/Brigham. The development environments used in both groups are similar, in that both base their image processing tools on VTK, ITK, and Slicer and use many of the same development tools, including CVS, CMake, Doxygen, and Dart. In short, the proposed work will be conducted on a shared platform (VTK, ITK, and Slicer) with a compatible development process, and thus the results will be directly absorbable by NA-MIC.&lt;br /&gt;
&lt;br /&gt;
==Projects==&lt;br /&gt;
&lt;br /&gt;
*[[Collaboration/JHU/Brachytherapy needle positioning robot integration|Brachytherapy needle positioning robot integration]] &lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
# Jemal, A., Siegel, R., Ward, E., Murray, T., Xu, J., Thun, M.J.: Cancer statistics, 2007. CA Cancer J Clin 57(1) (2007) 43–66&lt;br /&gt;
# Yu, K.K., Hricak, H.: Imaging prostate cancer. Radiol Clin North Am 38(1) (2000) 59–85, viii&lt;br /&gt;
# Norberg, M., Egevad, L., Holmberg, L., Sparén, P., Norlén, B.J., Busch, C.: The sextant protocol for ultrasound-guided core biopsies of the prostate underestimates the presence of cancer. Urology 50(4) (1997) 562–566&lt;br /&gt;
# Terris, M.K.: Sensitivity and specificity of sextant biopsies in the detection of prostate cancer: preliminary report. Urology 54(3) (1999) 486–489&lt;br /&gt;
# Krieger A, Csoma C, Guion P, Iordachita I, Metzger G, Qian D, Singh A, Whitcomb L, Fichtinger G: Design and Preliminary Accuracy Studies of an MRI-Guided Transrectal Prostate Intervention System. MICCAI 2007&lt;br /&gt;
# DiMaio, S.P., Kacher, D.F., Ellis, R.E., Fichtinger, G., Hata, N., Zientara, G.P., Panych, L.P., Kikinis, R., Jolesz, F.A.: Needle artifact localization in 3T MR images. Stud Health Technol Inform 119 (2006) 120–125&lt;br /&gt;
# Krieger A, Csoma C, Iordachita I, Guion P, Fichtinger G, Whitcomb LL: Design and Preliminary Accuracy Studies of an MRI-Guided Transrectal Prostate Intervention System. MICCAI 2007 (accepted)&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DBP2:Queens&amp;diff=14812</id>
		<title>DBP2:Queens</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DBP2:Queens&amp;diff=14812"/>
		<updated>2007-08-25T11:19:09Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;background: #f0f0f0;font: smaller;color: black;font-style: italic;&amp;quot;&amp;gt;[[DBP2]] &amp;amp;gt; DBP2:JHU&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font size=&amp;quot;+2&amp;quot;&amp;gt;Segmentation and Registration Tools for Robotic Prostate Interventions&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Team and Institute==&lt;br /&gt;
* PI: Gabor Fichtinger, PhD: gabor at cs.queensu.ca&lt;br /&gt;
* co-PI: Purang Abolmaesumi, PhD: purang at cs.queensu.ca&lt;br /&gt;
* Queen's Engineering Contact: David Gobbi, PhD: dgobbi at cs.queensu.ca&lt;br /&gt;
* Hopkins Engineering Contact: Csaba Csoma, BSc, Johns Hopkins University, csoma at jhu.edu&lt;br /&gt;
* NA-MIC Engineering Contact: Katie Hayes, MSc, Brigham and Women's Hospital, hayes at bwh.harvard.edu&lt;br /&gt;
* NA-MIC Algorithms Contact: Allen Tannenbaum, PhD, GeorgiaTech, tannenba at ece.gatech.edu&lt;br /&gt;
&lt;br /&gt;
* '''Affiliation/Institution:''' Queen's University &amp;amp; Johns Hopkins University&lt;br /&gt;
&lt;br /&gt;
==Research Goals==&lt;br /&gt;
The Queen’s &amp;amp; Hopkins teams are developing novel systems and procedures for prostate cancer interventions, such as biopsy and needle-based local therapies. &lt;br /&gt;
&lt;br /&gt;
Prostate cancer is the most common non-cutaneous cancer in American men. In 2007 there will be an estimated 220,000 new cases of prostate cancer and 28,000 deaths caused by prostate cancer in the United States alone. [1]&lt;br /&gt;
&lt;br /&gt;
The current standard of care for verifying the existence of prostate cancer is transrectal ultrasound (TRUS) guided biopsy. TRUS provides limited diagnostic accuracy and image resolution. In a study [2] the authors conclude that TRUS is not accurate for tumor localization and therefore the precise identification and sampling of individual cancerous tumor sites is limited. As a result, the sensitivity of TRUS biopsy is only between 60% and 85%. [3, 4]&lt;br /&gt;
&lt;br /&gt;
Targeted biopsies of suspicious areas identified by MRI could potentially increase the sensitivity of prostate biopsies. To address this problem, the investigators have several active research projects in prostate biopsy and therapy under direct MRI guidance inside the bore. We have developed and clinically tested a semi-robotic device and system for the planning and execution of prostate biopsy under MRI guidance [5]. We have conducted several clinical trials [5], and more are to follow. The generic workflow is as follows:&lt;br /&gt;
 &lt;br /&gt;
# '''Pre-Op:''' segment the prostate, identify suspicious areas, plan targets;&lt;br /&gt;
# '''Intra-Op:''' import plan, update plan, execute the biopsy/therapy&lt;br /&gt;
# '''Post-Op:''' compare post-op  data with plan, evaluate technical variables&lt;br /&gt;
&lt;br /&gt;
Currently, these functions are achieved by fragmented in-house code, some of it based on VTK/ITK.&lt;br /&gt;
&lt;br /&gt;
The objective of this DBP will be professional-grade clinical software engineering of existing and upcoming functions in Slicer. This will allow the team to focus on project-specific tasks and to benefit from the advanced IGT capabilities of Slicer.&lt;br /&gt;
&lt;br /&gt;
==Experimental Data==&lt;br /&gt;
The system for MRI-guided transrectal prostate interventions was tested in patients [5], and a new embodiment has recently been tested in phantom experiments at NIH (Bethesda, MD) on a 3T Philips Intera MRI scanner (Philips Medical Systems, Best, NL) using standard MR-compatible biopsy needles and non-artifact-producing glass needles [7]. The system has been tried on humans at NIH. Replication of the system for multiple collaborating clinical sites (Princess Margaret Hospital in Toronto, BWH in Boston, Johns Hopkins in Baltimore) is in progress.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Patient Data:===&lt;br /&gt;
Typically, 3D axial MRI prostate datasets (patient positions mixed between prone and supine) acquired using different endorectal coils (coil diameter = 13 mm for two datasets, coil diameter = 26 mm for the third) were used for algorithm evaluation.  The scans were performed on a Philips Intera 3T MRI system; T2-weighted images were acquired using a Spin Echo (SE) sequence with the following parameters: SENSE protocol with an acceleration factor of 1; TE/TR = 180 / 7155 ms for some datasets, TE/TR = 120 / 7155 ms for others; matrix 256 x 256; field of view 140 x 140 mm; voxel size 0.55 x 0.55 mm; slice thickness 3 mm.&lt;br /&gt;
&lt;br /&gt;
===Phantom Data:===&lt;br /&gt;
'''Biopsy Needle Accuracies:''' The manipulator was placed in a prostate phantom and its initial position was registered. Twelve targets were selected within all areas of the prostate on T2 weighted axial TSE images. Targets one through four were selected in the base of the prostate, targets five through eight in the mid gland, and targets nine through twelve in the apex of the prostate. For each target, the targeting program calculated the necessary targeting parameters for the needle placement.&lt;br /&gt;
&lt;br /&gt;
The phantom was pulled out of the MRI scanner on the scanner table; the physician then rotated the manipulator, adjusted the needle angle, and inserted the biopsy needle according to the displayed parameters.&lt;br /&gt;
&lt;br /&gt;
The phantom was rolled back into the scanner to confirm the location of the needle on axial TSE proton density images, which show the void created by the biopsy needle tip close to the target point. The in-plane error for each of the twelve biopsies, defined as the distance of the target to the biopsy needle line, was subsequently calculated to assess the accuracy of the system.&lt;br /&gt;
&lt;br /&gt;
The needle line was defined by finding the first and the last slice of the acquired confirmation volume where the needle void is clearly visible. The center of the needle void on the first slice and the center of the void on the last slice define the needle line. The out-of-plane error is not critical in biopsy procedures, due to the length of the biopsy core, and was not calculated. Hence, for the purpose of accuracy, there is no need for more precise motorized needle insertion. The average in-plane error for the biopsy needles was 2.1 mm, with a maximum error of 2.9 mm.&lt;br /&gt;
&lt;br /&gt;
'''Glass Needle Accuracies:''' The void created by the biopsy needle is mostly due to the susceptibility artifact caused by the metallic needle. The void is not concentric around the biopsy needle and depends on the orientation of the needle relative to the direction of the main magnetic field in the scanner (B0) and the direction of the spatially encoding magnetic field gradients [6]. Consequently, the centers of needle voids do not necessarily correspond to actual needle centers.&lt;br /&gt;
&lt;br /&gt;
Since the same imaging sequence and a similar needle orientation are used for all targets in a procedure, a systematic shift between the needle void and the actual needle might occur, which introduces a bias into the accuracy calculations. To explore this theory, every biopsy needle placement in the prostate phantom was followed by the placement of a glass needle to the same depth. The void created by the glass needle is caused purely by the lack of protons in the glass compared to the surrounding tissue, and is thus artifact-free and concentric to the needle. The location of the glass needle was again confirmed by acquiring axial TSE proton density images. The average in-plane error for the glass needles was 1.3 mm, with a maximum error of 1.7 mm.&lt;br /&gt;
&lt;br /&gt;
The procedure time for six needle biopsies, not including the glass needle insertions, was measured at 45 minutes.&lt;br /&gt;
&lt;br /&gt;
==Current Image Software==&lt;br /&gt;
The targeting program runs on a laptop computer located in the control room. The only data transfers between the laptop and the scanner computer are DICOM image transfers. The fiber optic encoders of the robot interface with the laptop computer via a USB counter (USDigital, Vancouver, Washington).&lt;br /&gt;
&lt;br /&gt;
The targeting software displays the acquired MR images, performs the automatic segmentation for the initial registration of the manipulator, allows the physician to select targets for needle placements, provides targeting parameters for the placement of the needle, and tracks the rotation and needle angle changes reported by the encoders while the manipulator is moved on target.&lt;br /&gt;
&lt;br /&gt;
After targeting, the software overlays the target and the projected needle path on the confirmation volume scan. This allows the physician to quickly assess the success of the intervention.&lt;br /&gt;
&lt;br /&gt;
==Image Processing Needs==&lt;br /&gt;
Although the current software covers the intervention needs of the first project, additional functions are necessary to allow easy and quick access to the data before, during, and after the procedure, and to accommodate the needs of the other two projects.&lt;br /&gt;
&lt;br /&gt;
Segmentation and deformable registration functions, 3D visualization instead of the current 2D view, and the extensive data analysis technology of Slicer are all on the project's software specification list.&lt;br /&gt;
&lt;br /&gt;
Basic requirements are good memory management and software stability. Since the program runs on a laptop, only very limited resources (CPU and memory) are available.&lt;br /&gt;
&lt;br /&gt;
The automatic segmentation algorithms are not always accurate, so the software should provide interactive correction capabilities, such as moving a slider to change the threshold, followed by re-segmentation. Another interactive feature is modification of the proposed needle path within the robot's constraints.&lt;br /&gt;
&lt;br /&gt;
LPS coordinate system: During the procedure, targets are selected using the 2D projection image obtained from the scanner, and the target coordinates are in the DICOM image coordinate system. This system is also used to display the parameters for the manual prescription and for the real-time tracking.&lt;br /&gt;
&lt;br /&gt;
Each landmarking defines an independent coordinate system, and this Frame of Reference (FoR) correspondence between volumes is essential for patient safety. The operating personnel should not be able to use registration data from one volume to target another volume if there is no transformation between the two coordinate systems.&lt;br /&gt;
&lt;br /&gt;
With OpenTracker-capable MRI scanners we would like to use real-time needle tracking.&lt;br /&gt;
&lt;br /&gt;
==Summary==&lt;br /&gt;
Manifold benefits exist for both NA-MIC and the Brigham-Hopkins joint program in MRI-guided prostate interventions, owing to existing collaborations, cross-compatibility of research (MR-guided prostate interventions), and a shared Slicer/VTK/ITK-based software platform.&lt;br /&gt;
&lt;br /&gt;
The project's clinical partners are based in the intramural research program of the National Cancer Institute. Thus the proposed NA-MIC DBP will tie a significant segment of extramural cancer research into a prominent intramural effort, leading to better understanding, coherency, and active collaboration between these otherwise disjoint efforts. For NA-MIC the benefits are also tangible: the functions will be developed in a controlled and professional environment in the CISST ERC, which has been in close collaboration with NA-MIC/Brigham. The development environments used in the two groups are similar, in that both base their image processing tools on VTK, ITK, and Slicer and use many of the same development tools, including CVS, CMake, Doxygen, and Dart. In short, the proposed work will be conducted on a shared platform (VTK, ITK, and Slicer) with a compatible development process, and thus the results will be directly absorbable by NA-MIC.&lt;br /&gt;
&lt;br /&gt;
==Projects==&lt;br /&gt;
&lt;br /&gt;
*[[Collaboration/JHU/Brachytherapy needle positioning robot integration|Brachytherapy needle positioning robot integration]] &lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
# Jemal, A., Siegel, R., Ward, E., Murray, T., Xu, J., Thun, M.J.: Cancer statistics, 2007. CA Cancer J Clin 57(1) (2007) 43–66&lt;br /&gt;
# Yu, K.K., Hricak, H.: Imaging prostate cancer. Radiol Clin North Am 38(1) (2000) 59–85, viii&lt;br /&gt;
# Norberg, M., Egevad, L., Holmberg, L., Sparén, P., Norlén, B.J., Busch, C.: The sextant protocol for ultrasound-guided core biopsies of the prostate underestimates the presence of cancer. Urology 50(4) (1997) 562–566&lt;br /&gt;
# Terris, M.K.: Sensitivity and specificity of sextant biopsies in the detection of prostate cancer: preliminary report. Urology 54(3) (1999) 486–489&lt;br /&gt;
# Krieger, A., Csoma, C., Guion, P., Iordachita, I., Metzger, G., Qian, D., Singh, A., Whitcomb, L., Fichtinger, G.: Design and Preliminary Accuracy Studies of an MRI-Guided Transrectal Prostate Intervention System. MICCAI 2007&lt;br /&gt;
# DiMaio, S.P., Kacher, D.F., Ellis, R.E., Fichtinger, G., Hata, N., Zientara, G.P., Panych, L.P., Kikinis, R., Jolesz, F.A.: Needle artifact localization in 3T MR images. Stud Health Technol Inform 119 (2006) 120–125&lt;br /&gt;
# Krieger, A., Csoma, C., Iordachita, I., Guion, P., Fichtinger, G., Whitcomb, L.L.: Design and Preliminary Accuracy Studies of an MRI-Guided Transrectal Prostate Intervention System. MICCAI 2007 (submitted)&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=DBP2:Queens&amp;diff=14811</id>
		<title>DBP2:Queens</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=DBP2:Queens&amp;diff=14811"/>
		<updated>2007-08-25T11:16:33Z</updated>

		<summary type="html">&lt;p&gt;Gabor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;background: #f0f0f0;font: smaller;color: black;font-style: italic;&amp;quot;&amp;gt;[[DBP2]] &amp;amp;gt; DBP2:JHU&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font size=&amp;quot;+2&amp;quot;&amp;gt;Segmentation and Registration Tools for Robotic Prostate Interventions&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Team and Institute==&lt;br /&gt;
* PI: Gabor Fichtinger, PhD: gabor at cs.queensu.ca&lt;br /&gt;
* co-PI: Purang Abolmaesumi, PhD: purang at cs.queensu.ca&lt;br /&gt;
* Queen's Engineering Contact: David Gobbi&lt;br /&gt;
* Hopkins Engineering Contact: Csaba Csoma, Johns Hopkins University, csoma at jhu.edu&lt;br /&gt;
* NA-MIC Engineering Contact: Katie Hayes, Brigham and Women's Hospital, hayes at bwh.harvard.edu&lt;br /&gt;
* NA-MIC Algorithms Contact: Allen Tannenbaum, GeorgiaTech&lt;br /&gt;
&lt;br /&gt;
* '''Affiliation/Institution:''' Queen's University &amp;amp; Johns Hopkins University&lt;br /&gt;
&lt;br /&gt;
==Research Goals==&lt;br /&gt;
The Queen’s &amp;amp; Hopkins teams are developing novel systems and procedures for prostate cancer interventions, such as biopsy and needle-based local therapies. &lt;br /&gt;
&lt;br /&gt;
Prostate cancer is the most common non-cutaneous cancer in American men. In 2007 there will be an estimated 220,000 new cases of prostate cancer and 28,000 deaths caused by prostate cancer in the United States alone. [1]&lt;br /&gt;
&lt;br /&gt;
The current standard of care for verifying the existence of prostate cancer is transrectal ultrasound (TRUS) guided biopsy. TRUS provides limited diagnostic accuracy and image resolution. In a study [2], the authors conclude that TRUS is not accurate for tumor localization, and therefore the precise identification and sampling of individual cancerous tumor sites is limited. As a result, the sensitivity of TRUS biopsy is only between 60% and 85%. [3, 4]&lt;br /&gt;
&lt;br /&gt;
Targeted biopsies of suspicious areas identified by MRI could potentially increase the sensitivity of prostate biopsies. To address this problem, the investigators have several active research projects in prostate biopsy and therapy under direct MRI guidance inside the bore. We have developed and clinically tried a semi-robotic device and system for planning and executing prostate biopsy under MRI guidance [5]. We have conducted several clinical trials [5], and more are to follow. The generic workflow is as follows:&lt;br /&gt;
 &lt;br /&gt;
# '''Pre-Op:''' segment the prostate, identify suspicious areas, plan targets;&lt;br /&gt;
# '''Intra-Op:''' import plan, update plan, execute the biopsy/therapy&lt;br /&gt;
# '''Post-Op:''' compare post-op  data with plan, evaluate technical variables&lt;br /&gt;
&lt;br /&gt;
Currently, these functions are achieved by fragmented in-house code, some of it based on VTK/ITK.&lt;br /&gt;
&lt;br /&gt;
The objective of this DBP will be professional-grade clinical software engineering of existing and upcoming functions in Slicer. This will allow the team to focus on project-specific tasks and benefit from the advanced IGT capabilities of Slicer.&lt;br /&gt;
&lt;br /&gt;
==Experimental Data==&lt;br /&gt;
The system for MRI-guided transrectal prostate interventions was tested in patients [5], and a new embodiment has recently been tested in phantom experiments at NIH (Bethesda, MD) on a 3T Philips Intera MRI scanner (Philips Medical Systems, Best, NL) using standard MR-compatible biopsy needles and non-artifact-producing glass needles [7]. The system has been tried on humans at NIH. Replication of the system for multiple collaborating clinical sites (Princess Margaret Hospital in Toronto, BWH in Boston, Johns Hopkins in Baltimore) is in progress.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Patient Data:===&lt;br /&gt;
Three 3D axial MRI prostate datasets (patient positions mixed between prone and supine), acquired using different endorectal coils (coil diameter = 13 mm for two datasets, 26 mm for the third), were used for algorithm evaluation. The scans were performed on a Philips Intera 3T MRI system; T2-weighted images were acquired using a Spin Echo (SE) sequence with the following parameters: SENSE protocol with an acceleration factor of 1; TE/TR = 180/7155 ms for some datasets, TE/TR = 120/7155 ms for the others; matrix 256 x 256; field of view 140 x 140 mm; voxel size 0.55 x 0.55 mm; slice thickness 3 mm.&lt;br /&gt;
&lt;br /&gt;
===Phantom Data:===&lt;br /&gt;
'''Biopsy Needle Accuracies:''' The manipulator was placed in a prostate phantom and its initial position was registered. Twelve targets were selected within all areas of the prostate on T2 weighted axial TSE images. Targets one through four were selected in the base of the prostate, targets five through eight in the mid gland, and targets nine through twelve in the apex of the prostate. For each target, the targeting program calculated the necessary targeting parameters for the needle placement.&lt;br /&gt;
&lt;br /&gt;
The phantom was pulled out of the MRI scanner on the scanner table; the physician then rotated the manipulator, adjusted the needle angle, and inserted the biopsy needle according to the displayed parameters.&lt;br /&gt;
&lt;br /&gt;
The phantom was rolled back into the scanner to confirm the location of the needle on axial TSE proton density images, which show the void created by the biopsy needle tip close to the target point. The in-plane error for each of the twelve biopsies, defined as the distance of the target to the biopsy needle line, was subsequently calculated to assess the accuracy of the system.&lt;br /&gt;
&lt;br /&gt;
The needle line was defined by finding the first and the last slice of the acquired confirmation volume where the needle void is clearly visible. The center of the needle void on the first slice and the center of the void on the last slice define the needle line. The out-of-plane error is not critical in biopsy procedures, due to the length of the biopsy core, and was not calculated. Hence, for the purpose of accuracy, there is no need for more precise motorized needle insertion. The average in-plane error for the biopsy needles was 2.1 mm, with a maximum error of 2.9 mm.&lt;br /&gt;
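The in-plane error defined above reduces to a point-to-line distance, with the line given by the two void centers. A minimal sketch of that computation (the function name and the sample coordinates are illustrative, not taken from the actual targeting software):&lt;br /&gt;

```python
import math

def in_plane_error(target, void_first, void_last):
    """Distance (mm) from a target point to the needle line defined by the
    void centers on the first and last confirmation slices."""
    # Direction vector of the needle line
    d = [b - a for a, b in zip(void_first, void_last)]
    # Vector from the first void center to the target
    v = [t - a for a, t in zip(void_first, target)]
    # |v x d| / |d| gives the perpendicular distance to the line
    cx = v[1] * d[2] - v[2] * d[1]
    cy = v[2] * d[0] - v[0] * d[2]
    cz = v[0] * d[1] - v[1] * d[0]
    return math.sqrt(cx**2 + cy**2 + cz**2) / math.sqrt(sum(c**2 for c in d))

# Needle along the z axis through the origin; target offset 2 mm in x
print(in_plane_error((2.0, 0.0, 10.0), (0.0, 0.0, 0.0), (0.0, 0.0, 30.0)))  # 2.0
```

Because the distance is taken to the whole line rather than to the needle tip, the depth of insertion (the out-of-plane component) does not affect this error measure, consistent with the discussion above.&lt;br /&gt;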
&lt;br /&gt;
'''Glass Needle Accuracies:''' The void created by the biopsy needle is mostly due to the susceptibility artifact caused by the metallic needle. The void is not concentric around the biopsy needle and depends on the orientation of the needle relative to the direction of the main magnetic field in the scanner (B0) and the direction of the spatially encoding magnetic field gradients [6]. Consequently, the centers of needle voids do not necessarily correspond to actual needle centers.&lt;br /&gt;
&lt;br /&gt;
Since the same imaging sequence and a similar needle orientation are used for all targets in a procedure, a systematic shift between the needle void and the actual needle might occur, which introduces a bias into the accuracy calculations. To explore this theory, every biopsy needle placement in the prostate phantom was followed by the placement of a glass needle to the same depth. The void created by the glass needle is caused purely by the lack of protons in the glass compared to the surrounding tissue, and is thus artifact-free and concentric to the needle. The location of the glass needle was again confirmed by acquiring axial TSE proton density images. The average in-plane error for the glass needles was 1.3 mm, with a maximum error of 1.7 mm.&lt;br /&gt;
&lt;br /&gt;
The procedure time for six needle biopsies, not including the glass needle insertions, was measured at 45 minutes.&lt;br /&gt;
&lt;br /&gt;
==Current Image Software==&lt;br /&gt;
The targeting program runs on a laptop computer located in the control room. The only data transfers between the laptop and the scanner computer are DICOM image transfers. The fiber optic encoders of the robot interface with the laptop computer via a USB counter (USDigital, Vancouver, Washington).&lt;br /&gt;
&lt;br /&gt;
The targeting software displays the acquired MR images, performs the automatic segmentation for the initial registration of the manipulator, allows the physician to select targets for needle placements, provides targeting parameters for the placement of the needle, and tracks the rotation and needle angle changes reported by the encoders while the manipulator is moved on target.&lt;br /&gt;
&lt;br /&gt;
After targeting, the software overlays the target and the projected needle path on the confirmation volume scan. This allows the physician to quickly assess the success of the intervention.&lt;br /&gt;
&lt;br /&gt;
==Image Processing Needs==&lt;br /&gt;
Although the current software covers the intervention needs of the first project, additional functions are necessary to allow easy and quick access to the data before, during, and after the procedure, and to accommodate the needs of the other two projects.&lt;br /&gt;
&lt;br /&gt;
Segmentation and deformable registration functions, 3D visualization instead of the current 2D view, and the extensive data analysis technology of Slicer are all on the project's software specification list.&lt;br /&gt;
&lt;br /&gt;
Basic requirements are good memory management and software stability. Since the program runs on a laptop, only very limited resources (CPU and memory) are available.&lt;br /&gt;
&lt;br /&gt;
The automatic segmentation algorithms are not always accurate, so the software should provide interactive correction capabilities, such as moving a slider to change the threshold, followed by re-segmentation. Another interactive feature is modification of the proposed needle path within the robot's constraints.&lt;br /&gt;
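The slider-driven correction loop described here amounts to re-running a threshold segmentation whenever the slider value changes. A minimal sketch, with a nested list standing in for the image volume (names and data are illustrative only):&lt;br /&gt;

```python
def segment_by_threshold(image, threshold):
    """Binary segmentation: label 1 where voxel intensity >= threshold.
    `image` is a nested list of intensities, a stand-in for a real volume."""
    return [[1 if v >= threshold else 0 for v in row] for row in image]

# A slider callback would simply call this again with the new threshold,
# replacing the previous label map.
image = [[10, 50], [200, 120]]
print(segment_by_threshold(image, 100))  # [[0, 0], [1, 1]]
print(segment_by_threshold(image, 40))   # [[0, 1], [1, 1]]
```

In Slicer itself this re-segmentation would operate on the volume node rather than on plain lists, but the interaction pattern is the same: the slider only changes a parameter, and the segmentation is recomputed from the unchanged source image.&lt;br /&gt;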
&lt;br /&gt;
LPS coordinate system: During the procedure, targets are selected using the 2D projection image obtained from the scanner, and the target coordinates are in the DICOM image coordinate system. This system is also used to display the parameters for the manual prescription and for the real-time tracking.&lt;br /&gt;
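Since DICOM coordinates are LPS while Slicer works internally in RAS, target points coming from the scanner must be converted before display; the conversion only negates the first two axes. A minimal sketch:&lt;br /&gt;

```python
def lps_to_ras(p):
    """Convert a point from DICOM's LPS coordinate system to RAS
    (the convention used internally by Slicer) by negating the L and P axes."""
    x, y, z = p
    return (-x, -y, z)

# A target 10 mm left, 5 mm anterior, 20 mm superior in LPS:
print(lps_to_ras((10.0, -5.0, 20.0)))  # (-10.0, 5.0, 20.0)
```

The same negation applied twice returns the original point, so the inverse conversion (RAS back to LPS, e.g. for reporting targeting parameters in scanner coordinates) is the identical function.&lt;br /&gt;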
&lt;br /&gt;
Each landmarking defines an independent coordinate system, and this Frame of Reference (FoR) correspondence between volumes is essential for patient safety. The operating personnel should not be able to use registration data from one volume to target another volume if there is no transformation between the two coordinate systems.&lt;br /&gt;
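One way to enforce this safety rule is a guard that compares DICOM Frame of Reference UIDs before any targeting is allowed. A hypothetical sketch (the function name and the `known_transforms` set of UID pairs are assumptions for illustration, not part of the actual software):&lt;br /&gt;

```python
def can_target(registration_for_uid, target_volume_for_uid, known_transforms):
    """Allow targeting only when the registration and the target volume share
    a DICOM Frame of Reference UID, or an explicit transform links them.
    `known_transforms` is a hypothetical set of (from_uid, to_uid) pairs."""
    if registration_for_uid == target_volume_for_uid:
        return True
    return (registration_for_uid, target_volume_for_uid) in known_transforms

print(can_target("1.2.3", "1.2.3", set()))                  # True: same FoR
print(can_target("1.2.3", "4.5.6", set()))                  # False: no link
print(can_target("1.2.3", "4.5.6", {("1.2.3", "4.5.6")}))   # True: transform known
```

The user interface would then refuse to compute targeting parameters whenever this check fails, rather than silently applying registration data across unrelated coordinate systems.&lt;br /&gt;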
&lt;br /&gt;
With OpenTracker-capable MRI scanners we would like to use real-time needle tracking.&lt;br /&gt;
&lt;br /&gt;
==Summary==&lt;br /&gt;
Manifold benefits exist for both NA-MIC and the Brigham-Hopkins joint program in MRI-guided prostate interventions, owing to existing collaborations, cross-compatibility of research (MR-guided prostate interventions), and a shared Slicer/VTK/ITK-based software platform.&lt;br /&gt;
&lt;br /&gt;
The project's clinical partners are based in the intramural research program of the National Cancer Institute. Thus the proposed NA-MIC DBP will tie a significant segment of extramural cancer research into a prominent intramural effort, leading to better understanding, coherency, and active collaboration between these otherwise disjoint efforts. For NA-MIC the benefits are also tangible: the functions will be developed in a controlled and professional environment in the CISST ERC, which has been in close collaboration with NA-MIC/Brigham. The development environments used in the two groups are similar, in that both base their image processing tools on VTK, ITK, and Slicer and use many of the same development tools, including CVS, CMake, Doxygen, and Dart. In short, the proposed work will be conducted on a shared platform (VTK, ITK, and Slicer) with a compatible development process, and thus the results will be directly absorbable by NA-MIC.&lt;br /&gt;
&lt;br /&gt;
==Projects==&lt;br /&gt;
&lt;br /&gt;
*[[Collaboration/JHU/Brachytherapy needle positioning robot integration|Brachytherapy needle positioning robot integration]] &lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
# Jemal, A., Siegel, R., Ward, E., Murray, T., Xu, J., Thun, M.J.: Cancer statistics, 2007. CA Cancer J Clin 57(1) (2007) 43–66&lt;br /&gt;
# Yu, K.K., Hricak, H.: Imaging prostate cancer. Radiol Clin North Am 38(1) (2000) 59–85, viii&lt;br /&gt;
# Norberg, M., Egevad, L., Holmberg, L., Sparén, P., Norlén, B.J., Busch, C.: The sextant protocol for ultrasound-guided core biopsies of the prostate underestimates the presence of cancer. Urology 50(4) (1997) 562–566&lt;br /&gt;
# Terris, M.K.: Sensitivity and specificity of sextant biopsies in the detection of prostate cancer: preliminary report. Urology 54(3) (1999) 486–489&lt;br /&gt;
# Krieger, A., Csoma, C., Guion, P., Iordachita, I., Metzger, G., Qian, D., Singh, A., Whitcomb, L., Fichtinger, G.: Design and Preliminary Accuracy Studies of an MRI-Guided Transrectal Prostate Intervention System. MICCAI 2007&lt;br /&gt;
# DiMaio, S.P., Kacher, D.F., Ellis, R.E., Fichtinger, G., Hata, N., Zientara, G.P., Panych, L.P., Kikinis, R., Jolesz, F.A.: Needle artifact localization in 3T MR images. Stud Health Technol Inform 119 (2006) 120–125&lt;br /&gt;
# Krieger, A., Csoma, C., Iordachita, I., Guion, P., Fichtinger, G., Whitcomb, L.L.: Design and Preliminary Accuracy Studies of an MRI-Guided Transrectal Prostate Intervention System. MICCAI 2007 (submitted)&lt;/div&gt;</summary>
		<author><name>Gabor</name></author>
		
	</entry>
</feed>