Training:Tractography.Validation

A new initiative has begun in response to a shared vision among Cores 1, 3 and 5 that the field of medical image analysis would be well served by work in the area of validation, calibration and assessment of reliability. This vision was also articulated by our External Advisory Committee (add link), which recommended that this work be added to the NAMIC mission. A thorough and lively discussion of this topic was held during the 2006 All Hands Meeting (add link). Discussions have continued among our participants since then, and as a result a plan for the initial work on this front has been articulated. This page will serve as the coordinating site for this effort, which we expect to evolve with time.

There are many outstanding questions in this domain that we agree are interesting and worth considering:

  1. What benchmarks should be used to assess the performance of an algorithm?
  2. How do you assess the performance of an algorithm if you have no access to the ground truth of what it is measuring? (One common proxy, sketched below, is test-retest reproducibility.)
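
One widely used proxy for the second question, when ground truth is unavailable, is test-retest reproducibility: run the same algorithm on repeated scans of the same subject and quantify the agreement between the outputs. The following is a minimal, hypothetical Python sketch that scores agreement between two binary tract masks with the Dice coefficient; the array shapes, threshold and random masks are illustrative placeholders, not part of any NAMIC pipeline.

 import numpy as np
 
 def dice_coefficient(mask_a, mask_b):
     """Dice overlap between two binary tract masks (1.0 = identical, 0.0 = disjoint)."""
     a = np.asarray(mask_a, dtype=bool)
     b = np.asarray(mask_b, dtype=bool)
     total = a.sum() + b.sum()
     if total == 0:
         return 1.0  # both masks empty: treat as perfect agreement
     return 2.0 * np.logical_and(a, b).sum() / total
 
 # Hypothetical test-retest check: masks from the same subject, scanned twice.
 # Random arrays stand in for real tractography output.
 rng = np.random.default_rng(0)
 scan1 = rng.random((64, 64, 64)) > 0.9
 scan2 = rng.random((64, 64, 64)) > 0.9
 print("Test-retest Dice overlap: %.3f" % dice_coefficient(scan1, scan2))

Note that high overlap across repeated scans establishes consistency, not accuracy, which is why the benchmark question above still needs its own answer.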



  • Collaboration between Cores 1, 3 and 5.
  • Leadership by Randy Gollub, Ross Whitaker and Guido Gerig
  • Powered by Sonia Pujol
  • Made relevant by Marek Kubicki

Contributors include