2012 Progress Report HIGHLIGHTS
The scope of NA-MIC activities includes advanced medical image analysis research combined with leading-edge software processes and computational platforms. To reflect these activities, the NA-MIC Computer Science Core efforts are organized around two teams: Algorithms and Engineering. Their joint output is the NA-MIC Kit, which embodies a comprehensive set of analysis techniques in a well-architected, documented, and widely used platform, as described in the following paragraphs.
Algorithms. The NA-MIC Computer Science Algorithm effort responds to the challenges of the DBPs to expand the horizons of medical image analysis. As a result, the Algorithm activities are typically highly experimental, creating new approaches that are rapidly prototyped, tested, and improved.
Engineering. The NA-MIC Computer Science Engineering effort supports the needs of the Algorithms effort by creating integrated software platforms supporting research and eventual deployment of advanced technology. The Engineering team also develops and maintains processes used to build and sustain a large research community.
NA-MIC Kit. The NA-MIC Kit consists of an integrated set of interoperable free open source software (FOSS) packages, developed, supported, and deployed using a collaborative, agile, high-quality software process. The NA-MIC Kit has been constructed as a layered architecture to provide a spectrum of capabilities, ranging from compute-intensive algorithms to easy-to-use applications. Hence users and developers can choose to engage the NA-MIC Kit at a variety of levels, including developing extensions which can be readily deployed to the broader biomedical imaging community.
In the following subsections we highlight the accomplishments from this reporting period for algorithms, engineering, and the NA-MIC Kit.
The Algorithms team develops computational methods supporting patient-specific analysis of medical images. This requires analysis of images that vary significantly from one patient to another, or from one time point to another, which presents distinct challenges to existing state-of-the-art medical image analysis algorithms. These technical challenges were addressed using four computational approaches: (1) Statistical models of anatomy and pathology; (2) Geometric correspondence; (3) User interactive tools for segmentation; and (4) Longitudinal and time-series analysis. Highlights of these efforts are described in the following.
Statistical models of anatomy and pathology. A great deal of progress has been made by using modeling approaches that systematically capture the statistics of a problem domain from a collection of examples and then use these statistics to interpret novel images. Some of the approaches include:
- Non-Parametric Priors for Segmentation are based on nonparametric, probabilistic models for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute the final segmentation of the test subject.
- Fast Nearest-Neighbor Lookup in Large Image Databases has been found to improve segmentation quality. Multi-atlas and nonparametric atlas-based techniques for image segmentation require registration of a test image with a small set of very similar images from a database.
- Atlases and Registration for DTI Processing are novel methods that enhance the co-registration of DTI data, either to a prior image of the same subject or to an existing atlas with predefined fiber tracts or regional white matter parcellation, and are applied in cases of large brain pathology (e.g., TBI).
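The atlas-based pipeline described in the first two bullets (select the most similar atlases, then transfer and fuse their labels) can be sketched as follows. This is a minimal illustration, not the actual NA-MIC implementation: the registration step is omitted, atlases are assumed pre-aligned, and similarity is measured by a simple sum of squared differences.

```python
import numpy as np

def select_atlases(test_img, atlas_imgs, k=3):
    """Rank atlas images by sum-of-squared-differences to the test image,
    a simple stand-in for the fast nearest-neighbor lookup described above."""
    dists = [float(np.sum((test_img - a) ** 2)) for a in atlas_imgs]
    return list(np.argsort(dists)[:k])

def fuse_labels(atlas_labels, selected):
    """Majority-vote label fusion over the selected (pre-registered) atlases."""
    votes = np.stack([atlas_labels[i] for i in selected])   # (k, H, W)
    labels = np.unique(votes)
    # Count, per voxel, how many atlases vote for each label; keep the winner.
    counts = np.stack([(votes == lab).sum(axis=0) for lab in labels])
    return labels[np.argmax(counts, axis=0)]
```

In practice each atlas label map is first warped to the test image by pairwise registration, and weighted fusion schemes can replace the plain majority vote.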
Geometric correspondence. Establishing anatomical correspondences between pairs of patients, groups of patients, patients and templates, and individual patients over time is important for automatic and user-assisted image analysis. The ability to establish geometric correspondences, with and without expert guidance, in challenging clinical circumstances is essential for the DBPs. Progress in two areas was realized this year.
- Stochastic Point Set Registration provides non-rigid point set registration algorithms that seek an optimal set of radial basis functions to describe the registration. Preliminary results on 2D and 3D data demonstrate the algorithm’s robustness to data sets with noise and with missing information.
- Automatic Correspondences for Shape Ensembles has seen improvements in the robustness of our entropy-based correspondence system. For example, we have developed a method for particles to interact on surfaces using geodesic distances, improving the behavior of the system on sharp features or convoluted shapes.
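To make the radial-basis-function idea concrete, the sketch below fits a Gaussian RBF displacement field that carries a set of source points onto target points. This is illustrative only: the method above searches for an optimal (sparse) set of basis functions stochastically, whereas here one basis function is simply centered at every source point.

```python
import numpy as np

def fit_rbf_warp(src, dst, sigma=1.0, reg=1e-8):
    """Fit a Gaussian RBF displacement field mapping src points to dst points.
    src, dst: (n, d) arrays of corresponding points."""
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    # Solve for the displacement-field weights, lightly regularized for stability.
    W = np.linalg.solve(K + reg * np.eye(len(src)), dst - src)
    def warp(pts):
        d2p = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        return pts + np.exp(-d2p / (2 * sigma ** 2)) @ W
    return warp
```

The returned `warp` function interpolates smoothly, so it can also be applied to points that were not part of the correspondence set.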
User interactive tools for segmentation. The work performed in the past year addresses important aspects of user-interactive segmentation. The patient-specific analysis required by the DBPs has presented images of patients with pathologies and/or injuries that sometimes defy automated approaches. We have focused our research on three principal areas.
- Control-Based Interactive Segmentation is a novel contribution based on a modeling formulation that represents interactive segmentation as a feedback system, enabling a principled merging of automated methods and user input.
- Globally Optimal Segmentation is a set of methods that rely on global optimization of energy functions via graph cuts. Results on delayed contrast MRI from the atrial fibrillation project are quite promising, and this work is currently under review for publication.
- Patient-Specific Segmentation Framework for Longitudinal MR Images of Traumatic Brain Injury addresses the need for robust, reproducible segmentation of MR images of TBI, which is crucial for quantitative analysis of recovery and treatment efficacy. Validation of the new automatic segmentation against expert segmentations of acute and chronic images was performed on 3 longitudinal TBI datasets, demonstrating that joint segmentation of 4D multi-time-point data is superior to individual segmentations.
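The energy minimized by the graph-cut methods above combines a per-voxel data term with a pairwise smoothness term. The sketch below minimizes exactly this kind of two-label energy on a 1D signal, where dynamic programming finds the global optimum; graph cuts compute the analogous global optimum on 2D/3D image grids. The means and smoothness weight are illustrative parameters, not values from the actual work.

```python
import numpy as np

def segment_1d(signal, means=(0.0, 1.0), smooth=0.5):
    """Globally minimize a two-label energy on a 1D signal:
       E(x) = sum_i (signal[i] - means[x_i])**2 + smooth * sum_i [x_i != x_{i+1}]"""
    s = np.asarray(signal, float)
    cost = (s[:, None] - np.asarray(means)) ** 2        # unary data terms, (n, 2)
    back = np.zeros((len(s), 2), dtype=int)
    for i in range(1, len(s)):
        for lab in (0, 1):
            # Cheapest way to arrive at label `lab`, paying for label changes.
            trans = cost[i - 1] + smooth * (np.arange(2) != lab)
            back[i, lab] = int(np.argmin(trans))
            cost[i, lab] += trans[back[i, lab]]
    labels = np.zeros(len(s), dtype=int)
    labels[-1] = int(np.argmin(cost[-1]))
    for i in range(len(s) - 1, 0, -1):                  # backtrack optimal labels
        labels[i - 1] = back[i, labels[i]]
    return labels
```

Raising `smooth` trades fidelity to the data for fewer label changes, the same trade-off the graph-cut energies make in higher dimensions.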
Longitudinal and time-series analysis. An important component of patient-specific data analysis is the ability to analyze multiple images from the same patient over time, as a disease or injury progresses or responds to treatment, or to assess neurodevelopment or neurodegeneration. Longitudinal image analysis is important for all four DBPs in this project; we have focused in the past year on the areas described below.
- Connectivity Changes in Disease demonstrated a novel probabilistic framework to merge information from diffusion weighted imaging tractography and resting-state functional magnetic resonance imaging correlations to identify connectivity patterns in the brain. The method simultaneously infers the templates of latent connectivity for each population and the differences in connectivity between the groups.
- Modeling Pathology Evolution is used in brain tumor patients to monitor the state of disease and evaluate therapeutic options. This work investigated a joint generative model of tumor growth and image observation that naturally handles multimodal and longitudinal data, which is important for TBI.
- Longitudinal Analysis of DTI Change Trajectories develops models that represent the growth trajectories of individual subjects to study and understand white matter changes in neurodevelopment, neurodegeneration, and disease progression. This methodology has been applied to study early brain development in a longitudinal neuroimaging study, including validation of reproducibility.
- Analysis of Longitudinal Shape Variability via Subject-Specific Growth Modeling provides statistical analyses of longitudinal imaging data, which are crucial for understanding normal anatomical development as well as disease progression. We have developed a new type of growth model parameterized by acceleration, whereas standard methods typically control the velocity. This mimics the behavior of biological tissue; cross-validation experiments show that our method is robust to missing observations, less sensitive to noise, and therefore more likely to capture the underlying biological growth.
- Longitudinal and Time Series Analysis are novel methods for longitudinal registration and time series regression. These methods enable compact approximation of an image time-series through an initial image and an initial momentum, resulting in dramatically simplified computations.
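The distinction between acceleration-parameterized and velocity-parameterized growth models above can be illustrated with a toy smoother: fitting a per-subject measurement series while penalizing acceleration (discrete second differences). This is a simple analogue under stated assumptions, not the actual shape-growth model, which operates on anatomical shapes rather than scalar series.

```python
import numpy as np

def fit_growth(y, lam=10.0):
    """Fit x to measurements y by minimizing ||x - y||^2 + lam * ||D2 x||^2,
    where D2 is the discrete second-derivative (acceleration) operator.
    A velocity-controlled model would penalize first differences instead."""
    y = np.asarray(y, float)
    n = len(y)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]   # discrete second derivative stencil
    # Closed-form solution of the quadratic problem.
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)
```

Because only acceleration is penalized, linear growth passes through unchanged, while measurement noise, which has large second differences, is suppressed.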
The Engineering Team builds bridges between the various NA-MIC cores and, ultimately, to the wider biomedical computing community. Working with the Algorithms Team, it deploys leading-edge biomedical computing tools back to the DBPs, which are then used to perform impactful health research. In addition, the tools developed by the Engineering Team are used to train and disseminate technologies across the research community. The Team places particular focus on developing sustainable communities through the creation of open platforms, quality-inducing software processes, and integration with a broad variety of computational tools and databases. The following describes some of the highlights of the past year's work.
- Slicer 4.0 Release: a modern, stable platform.
- Extending Slicer: Python has been adopted as the preferred scripting language, and the Slicer Extension Manager is now the "Slicer Catalog".
- New Features: multivolume analysis and interactive methods.
- Modern Cross-Platform Design Patterns.
- Efficiency and Robustness: the Expectation Maximization (EM) Segmenter.
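The core idea behind the EM Segmenter can be sketched as a minimal expectation-maximization fit of a 1D Gaussian mixture to voxel intensities. This is a didactic sketch only; the actual module additionally incorporates atlas priors and spatial regularization.

```python
import numpy as np

def em_segment(intensities, k=2, iters=50):
    """Classify intensities into k tissue classes via Gaussian-mixture EM.
    Returns (per-voxel class labels, estimated class means)."""
    x = np.asarray(intensities, float)
    mu = np.linspace(x.min(), x.max(), k)       # spread initial means over range
    var = np.full(k, x.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each class for each voxel.
        lik = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update class weights, means, and variances.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return r.argmax(axis=1), mu
```

Each voxel is assigned to the class with the highest posterior responsibility after the mixture parameters converge.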
2.3 NA-MIC Kit
The NA-MIC Kit is designed to accelerate the pace of research and facilitate clinical evaluation. Along these lines, the past year saw significant milestones toward the creation of a stable research platform that supports the ability to easily extend and disseminate novel additions, all in the context of a broad, worldwide research community. Beyond the major highlights of the Slicer 4.0 application platform described in the previous section, the following are a few of the highlights of the past year.
- CMake and its associated software process tools (CTest, CDash, and CPack) are used to build, test, and deploy software in a cross-platform manner. CMake continues to be one of the best-known pieces of the NA-MIC Kit, with more than 2000 known downloads per day (and is included in various Linux distributions). CMake 2.8.7 was released with NA-MIC support.
- CDash Package Manager (CDash 2.0.2) was released with support from NA-MIC. One of the most significant NA-MIC contributions to CDash was the package upload process, which enables the many Slicer testing machines to upload the executables and packages created during testing to the main CDash server. This, in turn, allows users to download those testing packages and run additional tests or use them in their research. This complete automation of the test-release cycle is a massive time-saver for the Service core and has greatly reduced the time to discover and resolve bugs, improving the stability of Slicer.
- Significant data integration efforts were completed over the past year. XNAT was greatly improved in its usability and interfaces. DICOM support was substantially enhanced, including the ability to embed Slicer MRML scene files within DICOM objects, so that Slicer data exchange via the DICOM standard is now possible. In addition, DCMTK was integrated into the NA-MIC Kit, further extending DICOM support and functionality.
- Community (CTK, BRAINSFit)
- Plans: Slicer 4.1, including charting and the Slicer Catalog.
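The CMake/CTest/CPack workflow described above can be illustrated with a minimal, hypothetical CMakeLists.txt; the project name, source file, and test command below are invented for illustration and are not part of the NA-MIC Kit itself.

```cmake
# Minimal project using the NA-MIC Kit build tools:
# CMake configures the build, CTest runs the tests, CPack builds installers.
cmake_minimum_required(VERSION 2.8)
project(ExampleModule)           # hypothetical project name

add_executable(example main.cpp) # hypothetical source file

enable_testing()
add_test(NAME smoke COMMAND example --self-test)  # hypothetical test flag

include(CPack)   # enables "make package" for cross-platform installers
```

Test results produced by CTest can then be submitted to a CDash dashboard, which is how the Slicer testing machines report their builds.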