Difference between revisions of "Event:2011-Registration-Retreat-Tuesday"

From NAMIC Wiki

Revision as of 17:52, 22 February 2011

 Back to Registration Brainstorming 2011


Tuesday registration topics

1 Grand challenge in registration

The motivation for having a grand challenge in registration is to define a problem where current technology fails and that is interesting for the community to work on. This will likely lead to novel and relevant solutions. A grand challenge will likely have a larger impact than more traditional short-term contests, which aim at finding the best method available with current technology and careful tuning of algorithmic parameters. The task will be to register data sets that are complex enough to force new technology to be developed. Examples of grand challenges from other communities, such as the vision community, are the DARPA Grand Challenges and the Face Recognition Grand Challenge.

  • An example of such data is full-body registration, for example with mouse CT data.
  • One issue that comes up in a grand challenge is how to define goodness/success. How do we define what a good registration is?
  • Vanderbilt data set. Blind evaluation and a “you cheat, you lose” approach.
  • Look at the taxonomy and see what checks off: whether speed is important, whether ...
  • Use a clinical outcome for the quality of the result? Use a secondary system that relies on the registration to make its decision.
  • Subjective clinical decisions are often not reliable (example: size of ventricles: normal, enlarged, hugely enlarged).
  • Several grand challenges are possible, for example estimating the uncertainty of the registration.
  • What can today’s methods do well? A good starting point for finding a grand challenge.
  • Pig with 1000 lead balls: CT the pig, move it, CT again. Do radiation therapy, shrink tumors, etc.
  • Need a grant to get such a project going.
  • The balls migrate over time; can we use anatomical landmarks instead? Can we use features in the data as landmarks that are also used for driving the algorithm?
  • Error bars on positions of landmarks.
  • Finding landmarks is easier in bone, vasculature, and gyration patterns, and more difficult in the breast and in white matter.
  • Using anatomical features for registration is often robust (vasculature, ...).
  • Define validation strategies that most people agree on but that are strongly tied to the application.
  • What is the key aspect: robustness, accuracy, speed? A challenge needs to be specific.
  • Two types of registration: with visible landmarks or without. Even with no visible features, models of stiffness and physical properties can meaningfully predict movement.
  • Point landmarks, synthetic data: what is the taxonomy of metrics?
  • Other user criteria (metrics?) are: is it too slow? Is it useful? Amount of user interaction, etc.
  • Marketing: a grand challenge should capture the imagination and should not be technology oriented. A vision that can capture attention, and funding.
  • Come up with a medically relevant topic.
  • Ask clinicians whether the result is good enough to establish clinical relevance. Then ask the practical questions: is it fast enough, robust enough, ...

Possible Challenges

Challenge should be motivated by clinical problem and solution should use standard clinical imaging data, not specialized acquisitions.

To prevent "cheating" (more likely: over-training):

  • Keep test data secret.
  • Run software entries on organizer system using secret data.

Consider multi-phase challenges:

  1. Phase I: run on any hardware; no resource constraints.
  2. Phase II: run on commodity hardware, run time and memory constraints.

Longitudinal Change Detection

Provide baseline and follow-up images (e.g., non-contrast abdominal CT) for a set of cases.

Task: For each case, detect and name differences between the anatomies using a pre-defined vocabulary. For each case, the answer is zero (for control cases) or more statements "Structure+Change" where

  1. "Structure" is one of the following (could use a subset of RadLex):
    1. Heart
    2. Lung left
    3. Lung right
    4. Spleen
    5. etc.
  2. "Change" is one of the following:
    1. Missing
    2. Appeared
    3. VolumeIncreased
    4. VolumeDecreased
    5. LesionAppeared
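The proposed answer format can be made concrete with a short sketch. All names and the set-based F1 scoring rule below are illustrative assumptions, not part of any agreed challenge specification; they only show how "Structure+Change" statements could be validated and compared against a reference standard.

```python
# Hypothetical encoding of a case answer as Structure+Change statements.
# Vocabularies and scoring rule are assumptions for illustration only.

STRUCTURES = {"Heart", "LungLeft", "LungRight", "Spleen"}
CHANGES = {"Missing", "Appeared", "VolumeIncreased",
           "VolumeDecreased", "LesionAppeared"}

def validate(statements):
    """Check that every (structure, change) pair uses the fixed vocabulary."""
    return all(s in STRUCTURES and c in CHANGES for s, c in statements)

def score(predicted, truth):
    """Set-based F1 over Structure+Change statements.

    A control case (empty truth) scores 1.0 only for an empty answer.
    """
    predicted, truth = set(predicted), set(truth)
    if not truth:
        return 1.0 if not predicted else 0.0
    tp = len(predicted & truth)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(truth)
    return 2 * precision * recall / (precision + recall)

answer = [("Spleen", "VolumeIncreased"), ("LungLeft", "LesionAppeared")]
truth = [("Spleen", "VolumeIncreased")]
print(validate(answer), round(score(answer, truth), 3))
```

A set-based score like this is one way to make the validation standard cheap to produce, since the reference is a handful of statements per case rather than dense landmarks.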

Pros:

  1. Easy and efficient to produce validation standard (does not require identification of large numbers of landmarks)
  2. Marketable as an important, "grand" challenge

Cons:

  1. Solution may not actually involve registration

Multi-modality Diagnosis

For a number of cases, provide a collection of images of various modalities, e.g., CT with and without CE, PET, MRI, US.

Task: Produce diagnosis for each case.

Preferred are conditions that clinicians are currently unable to diagnose from imaging, but for which invasive diagnostic procedures are available that can generate a ground truth (e.g., diffuse heart conditions).

Pros:

  1. Marketable as important, "grand" challenge

Cons:

  1. Solution may not actually involve registration.

Intra-subject Registration

Provide baseline and follow-up data in a given modality (e.g., CT). Hard-to-register data, e.g., whole-body or abdominal.

Task: compute a dense deformation between the baseline and follow-up images.
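The expected challenge output would be a dense deformation field, i.e., one displacement vector per voxel. The following is a minimal sketch, using SciPy, of what applying such a field to an image looks like; the image and the constant displacement field are synthetic placeholders, not challenge data.

```python
# Sketch: apply a dense displacement field to an image with SciPy.
# Image and field are synthetic placeholders (assumption, not challenge data).
import numpy as np
from scipy.ndimage import map_coordinates

baseline = np.random.rand(64, 64).astype(np.float32)

# Dense displacement field: one (dy, dx) vector per voxel.
dy = np.full((64, 64), 1.5)   # shift sampling 1.5 voxels in y
dx = np.full((64, 64), -0.5)  # and -0.5 voxels in x

yy, xx = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
coords = np.stack([yy + dy, xx + dx])  # sampling positions in the baseline

# Warp the baseline into follow-up space with linear interpolation.
warped = map_coordinates(baseline, coords, order=1, mode="nearest")
print(warped.shape)  # (64, 64)
```

For real 3D CT the same pattern applies with a third coordinate; the challenge's hard part is estimating the field, not applying it.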

Pros:

  1. Most directly related to registration.

Cons:

  1. Validation is not straightforward. It could involve prospective or intrinsic landmarks, or multi-channel acquisition with landmarks visible only in a "secret" image channel.

Gold Standard needed for comparing challenge entries:

  • Prospective using artificial markers: markers should be airbrushed out of the images provided to participants, similar to the Vanderbilt data set.
  • Anatomical landmarks: set of landmarks should be large/dense and be kept secret from participants.
  • Extracted features (e.g., white matter sheets, skeletons): need to be careful, because while these may be invariant in one subject over time, they may not be comparable across subjects. Consequently, more landmarks should be available for longitudinal than for inter-subject registration, because invariant features exist (e.g., in breast, prostate) that do not exist across subjects.
  • Contrast-enhanced images could be used to obtain (e.g., vascular) landmarks for validation, with only the non-contrast images provided to challenge participants.
  • Different types of landmarks include:
    1. bones, bone features, tips
    2. vascular structures, e.g., branching points
    3. organs and features on their surface

Interpolation vs. Extrapolation:

  • At visible landmarks, their alignment measures registration accuracy directly (modeling, interpolation).
  • At invisible landmarks, alignment measures performance of the registration priors (prediction, extrapolation).
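The interpolation-vs-extrapolation distinction above can be sketched as a target registration error (TRE) computed separately at landmarks visible to the algorithm and at held-out "secret" landmarks. All coordinates and noise levels below are synthetic assumptions used only to illustrate the two evaluations.

```python
# Sketch: TRE at visible vs. held-out landmarks (synthetic assumption data).
import numpy as np

def tre(mapped, truth):
    """Mean Euclidean distance between mapped and true landmark positions."""
    return float(np.mean(np.linalg.norm(mapped - truth, axis=1)))

rng = np.random.default_rng(0)
truth_visible = rng.uniform(0, 100, size=(20, 3))  # landmarks the method saw
truth_secret = rng.uniform(0, 100, size=(20, 3))   # landmarks kept secret

# A hypothetical registration result: near-perfect where landmarks were
# visible, noisier where only the method's priors drive the prediction.
mapped_visible = truth_visible + rng.normal(0, 0.5, size=(20, 3))
mapped_secret = truth_secret + rng.normal(0, 2.0, size=(20, 3))

print(f"TRE at visible landmarks: {tre(mapped_visible, truth_visible):.2f}")
print(f"TRE at secret landmarks:  {tre(mapped_secret, truth_secret):.2f}")
```

Reporting the two numbers separately keeps measured accuracy (interpolation) distinct from the quality of the registration priors (extrapolation).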


2 What works using current technology

3 White paper outline