Training:Tractography.Validation

A new initiative has begun in response to a shared vision among Cores 1, 3 and 5 that the field of medical image analysis would be well served by work in the area of validation, calibration and assessment of reliability. This vision was also articulated by our External Advisory Committee (add link), which recommended that this work be added to the NAMIC mission. A thorough and lively discussion of this topic was held during the 2006 All Hands Meeting (add link). Discussions have continued among our participants since then, and as a result a plan for the initial work on this front has been articulated. This page will serve as the coordinating site for this effort, which we expect to evolve over time.

There are many outstanding questions in this domain that we agree are interesting and worth considering, such as:

  1. What benchmarks should be used to assess performance of a NAMIC Toolkit algorithm?
  2. How can we assess the performance of an algorithm if we have no access to the ground truth of what it is measuring (e.g. the white matter of the brain with tractography)?
  3. What statistical methods are most appropriate for quantifying and testing the significance of these assessments?


The answers to these questions will vary depending on the specific algorithm and its application. The group agreed that the best way to proceed was to choose one very specific example that is highly relevant to the NAMIC work to date and to focus efforts on that. The methods that arise from this can then be applied to additional areas.

We agreed to begin by studying the results obtained by applying each of the tractography tools to a single dataset and then gathering to present the results to one another and discuss how best to quantify the similarities and differences.
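
One candidate way to quantify spatial similarity between two tools' outputs, offered here only as a suggestion for that discussion, is the Dice overlap of binary tract masks rasterized into a common space. Below is a minimal sketch, assuming each tool's tract has already been converted to a boolean voxel mask on a shared grid (all names here are illustrative, not part of any tool's API):

  import numpy as np

  def dice_overlap(mask_a, mask_b):
      """Dice coefficient between two boolean voxel masks of the same shape.

      Returns 1.0 for identical tracts and 0.0 when no voxels are shared.
      """
      a = np.asarray(mask_a, dtype=bool)
      b = np.asarray(mask_b, dtype=bool)
      denom = a.sum() + b.sum()
      if denom == 0:
          return 1.0  # both masks empty; treat as perfect agreement
      return 2.0 * np.logical_and(a, b).sum() / denom

  # Hypothetical usage: two tools' cingulum-bundle results for the same
  # subject, each rasterized onto the same reference voxel grid beforehand.
  # print(dice_overlap(tool1_mask, tool2_mask))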

Details:

  • Data sets to be provided by Marek Kubicki (put link to descriptor page and download instructions here). N = 5, 10, or 15 (TBD) each of schizophrenic and healthy subjects, de-identified and not marked as to diagnosis. Each subject will have a 3T high-resolution DTI scan, an mMRI scan, and expert-generated Regions of Interest (ROIs) needed for tract definition (put link to acquisition parameter details here). Hopefully this will be done within the next week or two.
  • The tracts to be studied are the cingulum bundle, the uncinate fasciculus, and the arcuate fasciculus on the left and right sides. (Link to the definition of the tracts to be put here- Randy or Marek). This list of tracts is open for further discussion but needs to be completed soon, hopefully within the next week or two.
  • Each tool developer is responsible for downloading and analyzing the data, optimizing their own algorithm as needed. Keep careful notes on your final processing methods, as you will need to teach them to Sonia Pujol, who will repeat the analysis independently using all the tools herself so the data can be included in a summary manuscript.
  • Metrics to be collected need to be finalized by this group, but suggestions include measures of FA along the tract, size and/or volume of the tract, spatial localization of the tract, and measures of connectivity (a sketch of two of these appears after this list). Perhaps also some way to look at group results, towards the goal of being able to make statements about differences in health and disease? Please write in more suggestions and detailed methods here.
  • Participating algorithms (and testers) include:
  1. fiber tracking the UNC way (Guido Gerig or his designee)
  2. Slicer tract tool (Ron Kikinis to designate)
  3. POI tool (Bruce Fischl/Dennis Jen)
  4. Volumetric connectivity (Ross Whitaker/Tom Fletcher)
  5. Finsler (Allen Tannenbaum/John Melonakos)
  6. MedINRIA (P.F. Fillard)
  7. GTRACT (Vince Magnotta)
  8. Cluster tool (? Lauren O'Donnell) may not be appropriate for this
  • We will then hold a 2-day workshop/retreat in Santa Fe, NM. Candidate dates are 2 days during the week of Oct 1-5. That window avoids MICCAI, BIRN, SFN, New Mexico's Balloon Fiesta, and the Jewish High Holidays. Please let us know immediately if that week is a NO GO for your group.

  • The workshop/retreat will include a presentation from each group of their results AND a recommendation for how to statistically compare and quantify similarities and differences within and across tools.
  • A suggestion was made to have an outside statistics expert present to facilitate and inform the discussion. Suggestions for who might be appropriate are welcome.
  • The outcome of the workshop/retreat will be a set of final agreed-upon measurement metrics and methods for comparing and quantifying similarities and differences within and across tools, as well as the training of Sonia Pujol to run each of the analysis tools.
  • We will hold a follow-up workshop as part of the NAMIC 2008 AHM at which time Sonia will present her results to the broader audience and we can refine our methods for comparing and quantifying similarities and differences within and across tools.
  • Final goals are:
  1. To write up the results for publication with all of us as contributing authors.
  2. To make the dataset and our analyzed results available to the scientific community.
  3. To use our findings to begin to establish benchmarking methods for the NAMIC toolkit.
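
As a concrete starting point for the metrics item above, here is a minimal sketch of two of the suggested measures, mean FA along a tract and tract volume. It assumes fiber points have already been transformed into the voxel space of the FA volume and that the voxel dimensions are known; the function and variable names are illustrative only, not any tool's actual API:

  import numpy as np

  def mean_fa_along_tract(fa_volume, fiber_points):
      """Average FA over all points of a tract.

      fa_volume    -- 3D array of FA values.
      fiber_points -- (N, 3) array of point coordinates, already expressed
                      in the voxel space of fa_volume and assumed in-bounds.
      """
      idx = np.round(np.asarray(fiber_points)).astype(int)  # nearest voxel
      samples = fa_volume[idx[:, 0], idx[:, 1], idx[:, 2]]
      return float(samples.mean())

  def tract_volume_mm3(tract_mask, voxel_size_mm):
      """Tract volume from a boolean voxel mask and voxel dimensions in mm."""
      n_voxels = int(np.asarray(tract_mask, dtype=bool).sum())
      return n_voxels * float(np.prod(voxel_size_mm))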


Please make comments and corrections to this page.

Contributors include:

  • Collaboration between Cores 1, 3 and 5.
  • Leadership by Randy Gollub, Ross Whitaker and Guido Gerig
  • Powered by Sonia Pujol
  • Made relevant by Marek Kubicki