
 Back to Registration Brainstorming 2011


Tuesday registration topics

1 Grand challenge in registration

An example from the vision community is the face recognition grand challenge (http://www.computer.org/portal/web/csdl/doi/10.1109/CVPR.2005.268).


  • Grand challenge: pick a problem that current technology fails on. This will have a greater impact than a normal MICCAI contest, where current methods are merely tweaked.
  • Define a general data set that is complex enough to force new technology.
  • Example data set: full-body registration, for example in mice.
  • How do we define good registration?
  • Vanderbilt data set: blind evaluation and a “you cheat, you lose” approach.
  • Look at the taxonomy and see what the challenge checks off: whether speed is important, whether ...
  • Use a clinical outcome as the measure of result quality? Use a secondary system that relies on the registration to make its decision.
  • Subjective clinical decisions are often not reliable (example: size of ventricles, rated as normal, enlarged, or hugely enlarged).
  • Several grand challenges are possible, for example estimating the uncertainty of the registration.
  • What can today’s methods do well? That is a good starting point for finding a grand challenge.
  • Pig with 1000 lead balls: CT the pig, move it, CT again. Do radiation therapy, shrink tumors, etc.
  • Need a grant to get such a project going.
  • The balls migrate over time; can we use anatomical landmarks instead? Can we use features in the data as landmarks that will also be used to drive the algorithm?
  • Error bars on positions of landmarks (see the sketch after this list).
  • Finding landmarks is easier in bone, vasculature, and gyration patterns, and more difficult in the breast and in white matter.
  • Using anatomical features for registration is often robust (vasculature, ...).
  • Define validation strategies that most people agree on, but that are strongly tied to the applications.
  • What is the aspect of interest: robustness, accuracy, speed? A challenge needs to be specific.
  • Two types of registration: with visible landmarks, or without. Even with no visible features, models of stiffness and physical properties can meaningfully predict movement.
  • Point landmarks, synthetic data: what is the taxonomy for metrics?
  • Other user criteria (metrics?) are: is it too slow, is it useful? Amount of user interaction, etc.
  • Marketing: a grand challenge should capture the imagination and should not be technology oriented. A vision that can capture attention, and funding.
  • Come up with a medically relevant topic.
  • Ask clinicians if the result is good enough, to get at the relevance. Then ask the practical questions: is it fast enough, robust enough, ...
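
As a concrete illustration of the landmark-based evaluation and the error bars mentioned in the list above, here is a minimal Python sketch. It assumes corresponding landmark pairs have already been identified in the fixed and moving images; the coordinates, noise level, recovered transform, and helper names (landmark_errors, error_bars) are hypothetical stand-ins, not anything specified at the retreat.

<pre>
import numpy as np

def landmark_errors(fixed_pts, moving_pts, transform):
    """Target registration error: distance between each fixed landmark
    and its moving counterpart mapped through the recovered transform."""
    mapped = np.array([transform(p) for p in moving_pts])
    return np.linalg.norm(np.asarray(fixed_pts) - mapped, axis=1)

def error_bars(errors, n_boot=1000, seed=0):
    """Mean error plus a bootstrap 95% confidence interval (the error bars)."""
    rng = np.random.default_rng(seed)
    means = [rng.choice(errors, size=errors.size, replace=True).mean()
             for _ in range(n_boot)]
    return errors.mean(), np.percentile(means, [2.5, 97.5])

# Toy data: hypothetical landmark positions (in mm) with simulated
# localization noise, and a stand-in transform "recovered" by registration.
rng = np.random.default_rng(1)
fixed = rng.uniform(0.0, 50.0, size=(6, 3))
true_shift = np.array([1.2, -0.8, 0.5])
moving = fixed + true_shift + rng.normal(0.0, 0.3, size=fixed.shape)

def recovered(p):
    # Pretend this transform came out of a registration algorithm.
    return p - true_shift

err = landmark_errors(fixed, moving, recovered)
mean_tre, ci = error_bars(err)
print(f"mean TRE = {mean_tre:.2f} mm, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}] mm")
</pre>

Reporting a confidence interval alongside the mean target registration error makes it easier to judge whether one method really beats another, since the landmarks themselves carry localization error.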

2 What works using current technology

3 White paper outline