2007 APR NIH Questions and Answers


In a letter from Grace Peng dated July 31, 2007, the center team asks the following questions:




The weakest and probably most difficult parts of the NA-MIC effort are validation and comparison across algorithms. The validation that is being performed needs to be more systematic and coordinated like the tractography validation effort. Perhaps methods that engage the user community could be tried. (Ross Whitaker)


The question of validation has been an important and challenging issue within the field of medical image analysis, in general, and the image analysis work in NA-MIC is no exception.

There has been comparison and validation work in several areas of NA-MIC. For instance, the tractography work and associated publications [???] have included comparisons of conventional tractography with stochastic methods [???] and Hamilton-Jacobi formulations [???]. Corouge et al. examine different methods of finding fiber correspondences [??] in order to compute statistics between fibers. Indeed, such comparisons are a necessary requirement for publishing new methodologies. Likewise, in the shape analysis work, Styner et al. [???] have compared alternative methods of parameterizing groups of shapes, Cates et al. [???] have included comparisons of parametric and nonparametric shape correspondences, and researchers at Georgia Tech [???] have systematically quantified improvements in shape parameterization and analysis with spherical wavelets.

In addition to algorithm validation, there is software validation, which is being accomplished in several ways. The first is a systematic engineering process that focuses on continuous and/or nightly builds, careful tracking of contributions and changes to code, and regular regression testing; our software development environment and process is world class. The second is interaction with biological/medical collaborators within and outside NA-MIC.
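To illustrate the kind of regression test a nightly build runs, the following minimal sketch (in Python, with hypothetical names such as segment_volume and baseline_labels.npy; the actual NA-MIC Kit tests are driven by its build and dashboard infrastructure rather than this exact harness) compares an algorithm's output against a baseline stored from a trusted earlier run and fails if the behavior has changed.

 # Minimal regression-test sketch; segment_volume and baseline_labels.npy are
 # hypothetical stand-ins for a real algorithm and its stored baseline output.
 import os
 import unittest
 import numpy as np
 
 def segment_volume(volume):
     """Stand-in image-analysis routine under test."""
     return (volume > volume.mean()).astype(np.uint8)
 
 class RegressionTest(unittest.TestCase):
     def test_segmentation_matches_baseline(self):
         rng = np.random.RandomState(0)          # fixed seed for a reproducible input
         volume = rng.rand(16, 16, 16)
         result = segment_volume(volume)
         baseline_path = "baseline_labels.npy"
         if not os.path.exists(baseline_path):
             np.save(baseline_path, result)      # first trusted run establishes the baseline
         baseline = np.load(baseline_path)
         # A nightly dashboard would flag this test if the output ever diverges.
         self.assertTrue(np.array_equal(result, baseline))
 
 if __name__ == "__main__":
     unittest.main()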

We believe we can do better, and the comments from NIH on our 2006 report in this regard have provided us with some good incentives and directions. One strategy is to move this validation to a level that is more systematic and benefits from greater exposure. This is the case with our project weeks, where we have encouraged collaborators to bring their data and experiment with algorithms in direct contact with the algorithm developers. This increased level of attention is also evident in the upcoming "Tractography Measures Conference" ( http://www.na-mic.org/Wiki/index.php/SanteFe.Tractography.Conference ). At this workshop we will have our biological collaborators, with data already preprocessed and available on the Wiki, a set of Core 1 scientists with algorithms and results, and an agreement on a set of measures for comparing different methods for the analysis of DTI. In a recent planning meeting (phone conference) we agreed to invite a selection of prominent non-NA-MIC participants who we think can come to the table with very useful and competitive DTI analysis methods. We believe this workshop, and the philosophy behind it, sets a new standard for comparing and validating algorithms.
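As a purely illustrative example of the kind of agreed-upon measure such a comparison might use (not one of the measures actually adopted for the conference), the sketch below computes the Dice overlap between two fiber bundles produced by different tractography methods, assuming each bundle has already been rasterized into a binary voxel mask on a common grid; the file names are hypothetical.

 # Sketch of a bundle-overlap measure for comparing tractography methods.
 # mask_method_a.npy / mask_method_b.npy are hypothetical binary voxel masks
 # of the same fiber bundle as reconstructed by two different methods.
 import numpy as np
 
 def dice_overlap(mask_a, mask_b):
     """Dice coefficient 2|A intersect B| / (|A| + |B|) between two binary masks."""
     a = mask_a.astype(bool)
     b = mask_b.astype(bool)
     denom = a.sum() + b.sum()
     if denom == 0:
         return 1.0  # both masks empty: treat as perfect agreement
     return 2.0 * np.logical_and(a, b).sum() / denom
 
 if __name__ == "__main__":
     mask_a = np.load("mask_method_a.npy")
     mask_b = np.load("mask_method_b.npy")
     print("Dice overlap between methods: %.3f" % dice_overlap(mask_a, mask_b))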

These efforts are informed by a great deal of experience of NA-MIC Core 1 in image processing and computer vision, and this experience suggests that we should be cautious. For some years, for instance, image denoising algorithms, and in particular wavelet-shrinkage-based algorithms, have been compared on a standard set of photographs, and the literature shows a progression of wavelet-shrinkage strategies that produce progressively better RMS errors [????]. While this progress has been impressive, it has produced a series of algorithms and parameter settings that are particularly well suited to those data sets and that do not necessarily generalize to images that are not well represented in that data [awate2005]. Thus, close attention to comparison and validation against particular metrics and data sets tends to skew algorithms toward those measures. Another example is the computer vision problem of object recognition. Databases of test images containing classes of common objects of different types, such as toys and automobiles, in somewhat natural surroundings with various degrees of occlusion and lighting, are available [???] for training and testing object recognition algorithms. As with image denoising, the field has benefited significantly, but some have argued that these databases and the metrics used to analyze them have focused researchers on a particular class of algorithms, typically statistical classifiers based on large numbers of low-level features. Because the field generally requires new algorithms to be compared against these databases, it can be difficult for researchers to move beyond the family of techniques that have proven effective in this regard. Thus, insistence on specific databases of images and imaging problems can, in some cases, stifle creativity and discourage researchers from looking at problems from different points of view.
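To make the denoising benchmark concrete, here is a minimal sketch (using the PyWavelets package, with a synthetic test image; the wavelet, decomposition level, and threshold are illustrative choices rather than tuned values from the literature) of soft-threshold wavelet shrinkage evaluated by RMS error against the known clean image.

 # Sketch of wavelet-shrinkage denoising evaluated with RMS error.
 import numpy as np
 import pywt
 
 def soft_shrink_denoise(noisy, wavelet="db2", level=3, threshold=0.1):
     coeffs = pywt.wavedec2(noisy, wavelet, level=level)
     # Keep the coarse approximation; soft-threshold every detail subband.
     shrunk = [coeffs[0]]
     for detail_level in coeffs[1:]:
         shrunk.append(tuple(pywt.threshold(d, threshold, mode="soft")
                             for d in detail_level))
     rec = pywt.waverec2(shrunk, wavelet)
     return rec[:noisy.shape[0], :noisy.shape[1]]
 
 def rms_error(a, b):
     return float(np.sqrt(np.mean((a - b) ** 2)))
 
 if __name__ == "__main__":
     rng = np.random.RandomState(0)
     clean = np.zeros((128, 128))
     clean[32:96, 32:96] = 1.0                      # simple piecewise-constant test image
     noisy = clean + 0.2 * rng.randn(*clean.shape)  # additive Gaussian noise
     denoised = soft_shrink_denoise(noisy)
     print("RMSE noisy:    %.4f" % rms_error(noisy, clean))
     print("RMSE denoised: %.4f" % rms_error(denoised, clean))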

For these reasons, we believe comparisons of different algorithms cannot rely solely on databases of standard images; algorithms must also be evaluated by their direct impact on clinical problems. For example, the upcoming "Tractography Measures Conference" will include extensive participation from our collaborators at Brigham and Women's Hospital (a DBP) who are applying DTI analysis to clinical problems. In general, the DBPs and the associated R01s, along with the extensive list of collaborators at each site who are not formally part of NA-MIC, will provide a richer context for validating and comparing algorithms.

To summarize, NA-MIC researchers have a track record of validating and comparing algorithms. However, we are extending our methods and our goals for validating and comparing algorithms to take advantage of the unique set of partners, clinical problems, and software infrastructure available in NA-MIC. At the same time, we are careful to avoid rigid, prescriptive metrics for comparison, and we maintain that the best evaluation of image analysis algorithms lies in their ability to be used by a wide range of researchers to produce new clinical or scientific results.

To what problems is DTI best applicable? Is it applicable across age ranges? (Ross Whitaker)

NA-MIC, as a "National Center for Biomedical Computing", is focused on algorithms and software for medical image analysis. Our work is driven by medical and biological problems defined by our collaborators (e.g., the DBPs) both within and outside of the project, and it is their interests and hypotheses regarding DTI that drive the software development. Thus, the question of the appropriateness of clinical problems, in the short term and long term, is not central to the charter of the center. We are providing our clinical collaborators, and the field as a whole, with a set of software tools that will help determine how and where DTI is most useful.

That said, the project is informed about the role of DTI in scientific and clinical practice, and this guides our decisions about the allocation of resources in DTI. The conventional wisdom is that DTI is most applicable to problems that entail the evaluation of white matter fiber tracts. Disorders such as multiple sclerosis are particularly appropriate for using DTI to evaluate white matter lesions, which are the sine qua non of that disorder. Other disorders, such as stroke, Alzheimer's disease, and schizophrenia, also demonstrate abnormalities in white matter, and thus DTI may help to further characterize white matter pathology in these diseases. With respect to whether or not DTI is applicable across age ranges, we now know that white matter changes with development and is not constant across age, and thus a tool that makes possible the evaluation of white matter changes associated with normal development, across the life span, is important. Until recently, it was generally thought that DTI would be ineffective for studying children, because of their incomplete myelination and the prevailing hypothesis that DTI relies critically on myelin structure. However, recent work by NA-MIC researchers [???UNC-guido] and others [???] suggests that water diffusion in children as young as ??? years may be sufficiently anisotropic to discern white matter tracts. This exciting work is ongoing and we are pleased to report that NA-MIC researchers are at the forefront.
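To make the notion of anisotropy concrete, the following minimal sketch computes fractional anisotropy (FA), the standard scalar measure of diffusion anisotropy, from the eigenvalues of a 3x3 diffusion tensor; the tensor values are illustrative only and do not come from any NA-MIC data set.

 # Minimal sketch: fractional anisotropy (FA) from a 3x3 diffusion tensor.
 import numpy as np
 
 def fractional_anisotropy(tensor):
     """FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||."""
     eigvals = np.linalg.eigvalsh(tensor)       # symmetric tensor -> real eigenvalues
     diffs = eigvals - eigvals.mean()
     denom = np.sqrt((eigvals ** 2).sum())
     if denom == 0:
         return 0.0
     return float(np.sqrt(1.5) * np.sqrt((diffs ** 2).sum()) / denom)
 
 if __name__ == "__main__":
     # An illustrative, strongly anisotropic tensor (diffusion mostly along one
     # axis), such as might be found in a coherent white matter tract.
     D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])      # units of mm^2/s are typical for DTI
     print("FA = %.3f" % fractional_anisotropy(D))   # high (about 0.8) for this tensor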

There is no gold standard for evaluating white matter using DTI post-processing tools, and thus some trial and error will be needed to determine which tools are optimal at which ages. We believe that the tools we develop are sufficiently robust to be used to evaluate white matter changes across a wide range of ages. Furthermore, the underlying infrastructure of NA-MIC goes beyond DTI, and the algorithms we are developing (e.g., for manipulating tensors, solving shortest paths, and comparing large sets of geometric structures) will have applications beyond DTI.

Although the NA-MIC Wiki contains information on who is using the NA-MIC Kit and what they are using it for, the next annual report should either summarize this information or provide a link to the information. (Tina Kapur)


The next annual report should include a link and reference to the User Manual for the NA-MIC Kit. (Will Schroeder)



What is the rationale for choosing a particular method (tool) for solving a particular problem (DBP)? Why was a particular method (tool) chosen for development? Is there a listing of which tool might be helpful for which family of problems? Please provide more specific details in response to these questions, as they have been asked previously by the Center Team. (Ross Whitaker)


A clinical project between Toronto and BWH is still in the recruitment phase of planning a DTI and genetic study of psychosis. What would be the genetic component? (Martha Shenton)

Drs. Martha Shenton (BWH) and James Kennedy (University of Toronto) are beginning a collaboration based on mutual interests, although the specific goals have yet to be worked out. More specifically, Dr. Shenton is very much interested in developing further expertise in her laboratory in the area of genetics, particularly white matter genes and their association with white matter fiber tract abnormalities evaluated using DTI in schizophrenia. Dr. Shenton has an instructor in Psychiatry in her laboratory who will be visiting Dr. Kennedy’s laboratory for a one-week period in August of 2007, to be followed by several later visits, in order to learn state-of-the-art techniques used for evaluating white matter genes and their role in schizophrenia.

In parallel, Dr. Kennedy is very much interested in developing further expertise in his laboratory in the area of neuroimaging, particularly MR morphometry and DTI measures of white matter in schizophrenia, which he would like to correlate with genetic data involving white matter genes. Following up on this interest, Dr. Kennedy has a 4th-year resident in psychiatry at the University of Toronto School of Medicine who works in his laboratory and who is visiting Dr. Shenton’s laboratory from July 1, 2007 to December 31, 2007, in order to learn state-of-the-art neuroimaging techniques, including DTI and its application to understanding white matter pathology in schizophrenia.

The common thread with respect to the genetic component is thus a focus on white matter genes that are relevant to schizophrenia. At this point it is too early to determine where this collaborative effort will go, although it is clear that there is a tremendous amount of interest on both Dr. Shenton’s and Dr. Kennedy’s part, and the hope is that these early efforts will come to fruition in a more extensive collaboration as well as grant funding that supports this collaborative endeavor.


The visualization tool allows the overlay of spherical, vector, and ellipsoid data onto surfaces via versatile color maps. Is this extensible to other data, such as genetic or molecular data? (Steve Pieper)

The NA-MIC Kit is a set of compatible tools including utilities, libraries, and applications. At the application level, there are many promising areas of genetic or molecular research to which 3D Slicer has not been applied. 3D Slicer is extensible, though, with current active projects and pending collaboration grant proposals to adapt and enhance the application to process microscopy data. For example, Drs. Bryan Smith and Mark Ellisman of UCSD are working on this topic through a supplement via the NCBC program. In addition, Drs. Machiraju, Pieper, Aylward, and Davis of Ohio State, Isomics, and Kitware have jointly applied for an NA-MIC collaboration grant (currently in review) with the goal of implementing advanced image analysis algorithms that are well adapted to detecting cellular structures. Dr. Gouaillard of CalTech is also collaborating with NA-MIC to adapt tools from the Center of Excellence in Genomic Science (CEGS) to work with its studies of zebrafish embryogenesis. Beyond these specific examples, a wide range of research applications, from surgery planning to astronomy, have been enabled by the software. As the Slicer3 platform matures, an even larger range of applications is anticipated. At the library and utility levels, an even greater diversity of applications is possible, as demonstrated by the range of applications built on VTK and ITK.
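As a small illustration of how this color-mapping mechanism can carry data of other types, the sketch below uses VTK from Python to attach an arbitrary per-vertex scalar array (standing in for, say, a gene expression score) to a surface and render it through a color lookup table; the surface, array name, and values are purely illustrative and this is not an existing NA-MIC module.

 # Sketch: map an arbitrary per-vertex scalar onto a surface via a color lookup table.
 import vtk
 
 # Build an example surface; in practice this would be an anatomical or molecular model.
 sphere = vtk.vtkSphereSource()
 sphere.SetThetaResolution(32)
 sphere.SetPhiResolution(32)
 sphere.Update()
 surface = sphere.GetOutput()
 
 # Attach a synthetic per-vertex scalar ("expression_score" is a made-up name).
 scores = vtk.vtkFloatArray()
 scores.SetName("expression_score")
 for i in range(surface.GetNumberOfPoints()):
     x, y, z = surface.GetPoint(i)
     scores.InsertNextValue(z)              # synthetic value derived from geometry
 surface.GetPointData().SetScalars(scores)
 
 # Map the scalars through a blue-to-red lookup table.
 lut = vtk.vtkLookupTable()
 lut.SetHueRange(0.667, 0.0)
 lut.Build()
 mapper = vtk.vtkPolyDataMapper()
 mapper.SetInputData(surface)               # VTK 6+ API; older VTK versions used SetInput
 mapper.SetLookupTable(lut)
 rmin, rmax = scores.GetRange()
 mapper.SetScalarRange(rmin, rmax)
 
 actor = vtk.vtkActor()
 actor.SetMapper(mapper)
 renderer = vtk.vtkRenderer()
 renderer.AddActor(actor)
 window = vtk.vtkRenderWindow()
 window.AddRenderer(renderer)
 interactor = vtk.vtkRenderWindowInteractor()
 interactor.SetRenderWindow(window)
 window.Render()
 interactor.Start()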

Our approach to extending our software into new fields, such as the wider ranges of genetic or molecular images mentioned in the question, is to identify collaborators who need new image computing solutions of the type NA-MIC is providing. These collaborations often start through technical points of contact; programmers research open source tools and begin 'tinkering' to see what can be re-used in a new application. If there is sufficient interest, these experiments can grow into collaborations in new fields. For example, the collaboration with the University of Iowa on finite element meshing applies the software in a direction that other NA-MIC developers had not been exploring.