Latest revision as of 18:30, 22 February 2011

Back to Registration Brainstorming 2011
Tuesday registration topics
1 Grand challenge in registration
The motivation for a grand challenge in registration is to define a problem where current technology fails and that the community finds interesting to work on. This is likely to lead to novel and relevant solutions. A grand challenge will likely have larger impact than more traditional short-term contests, which aim at finding the best method available with current technology and careful tuning of algorithmic parameters. The task will be to register data sets that are complex enough to force new technology to be developed. Examples of grand challenges from other communities, such as the vision community, are the DARPA Grand Challenges and the Face Recognition Grand Challenge.
Possible Challenges
The challenge should be motivated by a clinical problem, and the solution should use standard clinical imaging data, not specialized acquisitions.
To prevent "cheating" (more likely: over-training):
- Keep test data secret.
- Run software entries on organizer system using secret data.
Consider multi-phase challenges:
- Phase I: run on any hardware; no resource constraints.
- Phase II: run on commodity hardware, run time and memory constraints.
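The two-phase setup above could be sketched as a harness that the organizers run on their own system with the secret data; the entry command line and the specific limits below are hypothetical, and the address-space limit is a Unix-only mechanism:

```python
import resource
import subprocess
import time

def run_entry(cmd, time_limit_s, mem_limit_mb):
    """Run one challenge entry under Phase II constraints:
    a wall-clock time limit and an address-space (memory) limit.
    Unix-only; relies on preexec_fn running in the child process."""
    def set_limits():
        limit = mem_limit_mb * 1024 * 1024
        resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

    start = time.monotonic()
    try:
        proc = subprocess.run(cmd, preexec_fn=set_limits,
                              timeout=time_limit_s)
        ok = proc.returncode == 0
    except subprocess.TimeoutExpired:
        ok = False  # entry exceeded the Phase II run-time budget
    return ok, time.monotonic() - start
```

Phase I would call the same harness with generous or no limits (or simply accept results computed on the participants' own hardware).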
Longitudinal Change Detection
Provide baseline and follow-up images (e.g., non-contrast abdominal CT) for a set of cases.
Task: For each case, detect and name the differences between the anatomies using a pre-defined vocabulary. The answer for each case is zero (for control cases) or more "Structure+Change" statements, where
- "Structure" is one of the following (could use a subset of RadLex):
- Heart
- Lung left
- Lung right
- Spleen
- etc.
- "Change" is one of the following:
- Missing
- Appeared
- VolumeIncreased
- VolumeDecreased
- Lesion appeared
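The "Structure+Change" answer format lends itself to a machine-readable encoding. A minimal sketch follows; the enum values and the per-case F1-style score are illustrative assumptions, not part of the proposal, and a real challenge would likely draw the structure terms from RadLex:

```python
from dataclasses import dataclass
from enum import Enum

class Structure(Enum):
    # Illustrative subset; a real vocabulary could come from RadLex.
    HEART = "Heart"
    LUNG_LEFT = "Lung left"
    LUNG_RIGHT = "Lung right"
    SPLEEN = "Spleen"

class Change(Enum):
    MISSING = "Missing"
    APPEARED = "Appeared"
    VOLUME_INCREASED = "VolumeIncreased"
    VOLUME_DECREASED = "VolumeDecreased"
    LESION_APPEARED = "Lesion appeared"

@dataclass(frozen=True)
class Finding:
    structure: Structure
    change: Change

def score_case(predicted: set, truth: set) -> float:
    """Per-case F1 of predicted Finding sets against the reference.
    Two empty sets (a correctly identified control case) score 1.0."""
    if not predicted and not truth:
        return 1.0
    tp = len(predicted & truth)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(truth)
    return 2 * precision * recall / (precision + recall)
```

An empty set of findings encodes a control case, which makes the validation standard cheap to produce, as noted below.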
Pros:
- Easy and efficient to produce a validation standard (does not require identification of large numbers of landmarks)
- Marketable as an important, "grand" challenge
Cons:
- Solution may not actually involve registration
Multi-modality Diagnosis
For a number of cases, provide a collection of images of various modalities, e.g., CT with and without CE, PET, MRI, US.
Task: Produce diagnosis for each case.
Preferred are conditions that clinicians are currently unable to diagnose from imaging, but for which invasive diagnostic procedures are available that can generate a ground truth (e.g., diffuse heart conditions).
Pros:
- Marketable as important, "grand" challenge
Cons:
- Solution may not actually involve registration.
Intra-subject Registration
Provide baseline and follow-up data in a given modality (e.g., CT). Hard-to-register data, e.g., whole-body or abdominal.
Task: compute a dense deformation between the baseline and follow-up images.
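As a minimal sketch of what the task's output could look like, a dense deformation can be represented as a per-voxel displacement field; the 2-D nearest-neighbour resampler below is illustrative only (a real entry would use proper interpolation on 3-D volumes):

```python
import numpy as np

def warp_nearest(follow_up, displacement):
    """Resample the follow-up image into baseline space using a dense
    displacement field (nearest-neighbour, for illustration only).
    displacement has shape (2, H, W): for each baseline pixel (y, x),
    the corresponding follow-up position is (y + dy, x + dx)."""
    h, w = follow_up.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + displacement[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + displacement[1]).astype(int), 0, w - 1)
    return follow_up[src_y, src_x]
```

With an all-zero displacement field the warp is the identity, which gives a simple sanity check for any evaluation pipeline.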
Pros:
- Most directly related to registration.
Cons:
- Validation is not straightforward. It could involve prospective or intrinsic landmarks, or a multi-channel acquisition with landmarks visible in a "secret" image channel.
Gold Standard needed for comparing challenge entries:
- Prospective using artificial markers: markers should be airbrushed out of the images provided to participants, similar to the Vanderbilt data set.
- Anatomical landmarks: set of landmarks should be large/dense and be kept secret from participants.
- Extracted features (e.g., white matter sheets, skeletons): need to be careful, because while these may be invariant in one subject over time, they may not be comparable across subjects. Consequently, more landmarks should be available for longitudinal than inter-subject registration, because invariant features exist (e.g., in breast, prostate) that do not exist across subjects.
- Contrast-enhanced images could be used to obtain (e.g., vascular) landmarks for validation, but non-contrast images provided to challenge participants.
- Different types of landmarks include:
- bones, bone features, tips
- vascular structures, e.g., branching points
- organs and features on their surface
Interpolation vs. Extrapolation:
- At visible landmarks, their alignment measures registration accuracy directly (modeling, interpolation).
- At invisible landmarks, alignment measures performance of the registration priors (prediction, extrapolation).
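The interpolation/extrapolation distinction can be made concrete by scoring the same transform on two landmark sets; the `evaluate` helper and its split into visible and secret sets are hypothetical:

```python
import numpy as np

def target_registration_error(transform, fixed_pts, moving_pts):
    """Mean Euclidean distance between transformed fixed landmarks
    and their corresponding moving-image landmarks."""
    mapped = np.array([transform(p) for p in fixed_pts])
    return float(np.mean(np.linalg.norm(mapped - np.asarray(moving_pts),
                                        axis=1)))

def evaluate(transform, visible, secret):
    """Score a transform on (fixed, moving) landmark pairs:
    visible landmarks measure fitting accuracy (interpolation),
    secret landmarks measure the predictive power of the
    registration priors (extrapolation)."""
    vis_fixed, vis_moving = visible
    sec_fixed, sec_moving = secret
    return {
        "interpolation_TRE": target_registration_error(
            transform, vis_fixed, vis_moving),
        "extrapolation_TRE": target_registration_error(
            transform, sec_fixed, sec_moving),
    }
```

Reporting both numbers separates how well an entry fits the data it can see from how well its model predicts motion where it cannot.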
Unsorted Notes
- An example of such data is full-body registration, for example with mouse CT data.
- One issue that comes up in a grand challenge is how to define goodness/success. How do we define what is a good registration?
- Vanderbilt data set. Blind evaluation and “you cheat you lose” approach.
- Look at the taxonomy and see what checks off: whether speed is important, whether ...
- Use a clinical outcome for the quality of the result? Use a secondary system that relies on the registration to make its decision.
- Subjective clinical decisions are often not reliable (e.g., ventricle size: normal, enlarged, hugely enlarged).
- Several grand challenges are possible, for example estimating the uncertainty of the registration.
- What can today's methods do well? A good starting point for finding a grand challenge.
- Pig with 1000 lead balls: CT the pig, move it, CT again. Do radiation therapy, shrink tumors, etc.
- Need a grant to get such a project going.
- The balls migrate over time; can we use anatomical landmarks instead? Can we use features in the data as landmarks that will also be used for driving the algorithm?
- Error bars on positions of landmarks.
- Finding landmarks is easier in bone, vasculature, and gyration patterns, and more difficult in the breast and in white matter.
- Using anatomical features for registration is often robust (vasculature, ...).
- Define validation strategies that most people agree on but that are strongly related to the applications.
- Which aspect matters: robustness, accuracy, speed? A challenge needs to be specific.
- Two types of registration: with visible landmarks or without. Even with no visible features, models of stiffness and physical properties can meaningfully predict movement.
- Point landmarks, synthetic data: what is the taxonomy for metrics?
- Other user criteria (metrics?) are: is it too slow, is it useful? Amount of user interaction, etc.
- Marketing: a grand challenge should capture the imagination and should not be technology oriented. A vision that can capture attention, and funding.
- Come up with a medically relevant topic.
- Ask clinicians if this is good enough, to get at the relevance. Then ask the practical questions: is it fast enough, robust enough, ...
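The notes on error bars and registration uncertainty could be made operational with a simple bootstrap over per-landmark errors; this is an illustrative sketch, not a proposed protocol:

```python
import numpy as np

def tre_with_error_bars(errors, n_boot=1000, seed=0):
    """Bootstrap the mean target registration error with a 95%
    confidence interval from per-landmark errors (illustrative)."""
    rng = np.random.default_rng(seed)
    errors = np.asarray(errors, dtype=float)
    boot_means = np.array([
        rng.choice(errors, size=errors.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    return float(errors.mean()), (float(lo), float(hi))
```

Reporting an interval rather than a single number also exposes entries whose accuracy varies strongly across landmarks.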
2 What works using current technology
Draft Word document: What works

3 White paper outline