Algorithm:GATech:DWMRI Musings

From NAMIC Wiki
Revision as of 02:39, 26 July 2007 by Melonakos (talk | contribs)

DW-MRI Musings

Back to Georgia Tech DW-MRI Geodesic Active Contours.

This page contains a hodgepodge of thoughts on Diffusion-Weighted MRI (DW-MRI) processing. From a bird's-eye view, the most general question for a DW-MRI data processor is: what clinically relevant information can be extracted from DW-MRI data and put to use in scientific and clinical studies? Furthermore, what algorithms, programming constructs, clinical guidance, and training are needed to make this happen? The following quick thoughts, in no particular order, may be worth considering when attempting to answer these questions:

DW-MRI vs DT-MRI

In answering the questions posed above, it is common for people to assume from the outset that we are using terminology related to tensors (i.e., DT-MRI, DTI). The raw data coming out of the scanner, however, is referred to as DW-MRI or DWI. The difference between the "T" and the "W" is that the "T" data has undergone a preprocessing step which computes a tensor at each voxel as a function of the original "W" data; the data resulting from this preprocessing step is referred to as DT-MRI. There are advantages and disadvantages to this tensor-construction step. The point here is that when we think about trying to find ways to go from scanner data to clinical answers, we are talking about going from DW-MRI data to clinical answers. In other words, it is okay to think outside the box and to consider ways in which the ellipsoidal constraints arising from the tensor model may be meaningfully relaxed.
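As a concrete sketch of what the "W"-to-"T" preprocessing step involves, the following hypothetical Python/NumPy snippet fits a single voxel's diffusion tensor from DWI signals by linear least squares on the log-signal (the standard Stejskal-Tanner relation, S_i = S0 exp(-b g_i^T D g_i)). The function name `fit_tensor` and the data layout are illustrative assumptions, not part of any particular toolkit:

```python
import numpy as np

def fit_tensor(signals, s0, bvals, gradients):
    """Least-squares fit of the 6 unique tensor components at one voxel.

    Illustrative sketch: log(S_i / S0) = -b_i * g_i^T D g_i is linear in
    the 6 unique entries of the symmetric tensor D.
    """
    g = np.asarray(gradients, dtype=float)
    # Design matrix rows: [gx^2, gy^2, gz^2, 2*gx*gy, 2*gx*gz, 2*gy*gz],
    # each scaled by the b-value of that measurement.
    B = np.column_stack([
        g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
        2 * g[:, 0] * g[:, 1],
        2 * g[:, 0] * g[:, 2],
        2 * g[:, 1] * g[:, 2],
    ]) * np.asarray(bvals, dtype=float)[:, None]
    # Small signal values correspond to strong diffusion, hence the minus sign.
    y = -np.log(np.asarray(signals, dtype=float) / s0)
    d, *_ = np.linalg.lstsq(B, y, rcond=None)
    # Reassemble the symmetric 3x3 tensor.
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])
```

On noise-free simulated signals this fit recovers the generating tensor exactly, which makes clear that the tensor is a derived, model-constrained summary of the richer "W" data rather than new information.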

Synchronizing Terminology

An important step in approaching the questions above is the development of a set of common terms with which to describe our work. The following is a list of commonly used and confused terms in this line of research, with corresponding definitions. Note that some quantities (such as fractional anisotropy (FA), b-values, etc.) have straightforward definitions and need no further solidifying. These definitions are certainly not perfect and could benefit greatly from community input:

  • DW-MRI, DWI - Diffusion-Weighted Magnetic Resonance Imaging - the raw data coming out of the scanner - small signal values correspond to strong diffusion.
  • DT-MRI, DTI - Diffusion-Tensor Magnetic Resonance Imaging - the 6-component data resulting from the construction of tensors from DW-MRI data - large eigenvalues correspond to strong diffusion.
  • fibers, tracts, streamlines - open curves representing possible diffusion paths through the brain - while the curves themselves certainly do not correspond to any micro-architecture, they provide possible paths along which particles may diffuse through the volume.
  • fiber bundles
  • fiber clusters
  • connectivity maps
  • fractional anisotropy, FA
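To make the list's tensor-related terms concrete, here is a hypothetical sketch of how fractional anisotropy follows from the eigenvalues of a DT-MRI tensor: FA is 0 for isotropic diffusion and approaches 1 as diffusion concentrates along a single eigenvector (the function name `fractional_anisotropy` is illustrative, not from any particular toolkit):

```python
import numpy as np

def fractional_anisotropy(D):
    """FA of a symmetric 3x3 diffusion tensor, from its eigenvalues.

    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||,
    where lambda are the tensor's eigenvalues; large eigenvalues
    correspond to strong diffusion along the matching eigenvectors.
    """
    lam = np.linalg.eigvalsh(D)          # eigenvalues of the symmetric tensor
    num = np.sqrt(((lam - lam.mean()) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0
```

For example, a perfectly isotropic tensor (all eigenvalues equal) yields FA = 0, while a tensor with a single nonzero eigenvalue yields FA = 1.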

Streamline Ups and Downs

coming soon

Validation Soap Box

When we consider assessing the utility of a particular tool in serving the needs of our clinical customers, we should stop and consider the factors upon which our assessment is based and biased. This is particularly important when attempting to validate results in an environment absent of ground truth. In the absence of ground truth, tools are judged according to: visual appeal, reproducibility, user-friendliness, and (if you're shooting for the stars and want something for which there is ground truth) accuracy in diagnosis.

Broadly speaking, each tool consists of four components which contribute toward its utility: the algorithm, the engineering, the incorporation of clinical knowledge, and the training. It is common to interchange the terms tool and algorithm when assessing the utility of a tool. However, especially in an environment where results are primarily judged by visual appeal and user-friendliness, this is misleading. The fear here is that streamline-based algorithms, which have been around the longest and have enjoyed a great deal of engineering input, clinical guidance, and community training, will be unduly favored because the judgment criteria used for these tools rest on these non-algorithmic components. It is useful to ask the question, "If algorithm A had received as much engineering, clinical input, and training as algorithm B, how would this change things?"

Algorithm Categorization

coming soon

Beware of the Segmentation Discrimination Paradox

coming soon

Beware of Mr. Legendre

coming soon