EMSegmenter

From NAMIC Wiki
Revision as of 18:19, 12 April 2007


History

Tracing the history of the EMSegmenter...

1995

  • Image:
  • Strengths:
  • Weaknesses:
  • Publications:

1999

  • Image:
  • Strengths:
  • Weaknesses:
  • Publications:


EM with Priors

  • File:Ron-ISBI-07.zip contains the research-related slides; there are sometimes a couple of slides on a topic, so you can choose.
  • The Feedback slides, featuring the transition from Slicer 2 to Slicer 3, were generated by Brad and me.

sw

The EM segmenter grew out of a collaboration with Shenton's group in about 1993. The goal was to get good automatic segmentations of white matter and gray matter from T1-weighted MRI. The biggest difficulty was the intensity inhomogeneity, or "shading", artifact in the images. The effect of the artifact was that a single threshold could not be used to separate white matter and gray matter. At that time, the MRI scanner used for research at BWH had an annoyingly large shading artifact.

Various approaches to the problem were tried, some giving good results, but there were remaining imperfections in the results. Eventually, we decided to construct an explicit representation of the intensity artifact, and attempt to recover the artifact and the segmentation simultaneously.

We chose the Expectation Maximization (EM) algorithm, a statistical estimation method that is used when some data is considered to be "missing".

The result was an iterative algorithm that alternates between two steps.

In the "E" step, the probability of the tissue label at each voxel is estimated, given the image data and the current estimate of the intensity artifact.

In the "M" step, the intensity artifact is re-estimated, given the image data and current estimate of the tissue label probabilities.
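The alternation between the two steps can be sketched on a toy 1-D signal. This is a minimal illustration under assumed Gaussian class models and a simple smoothing filter standing in for the smoothness prior on the bias field; it is not the actual EMSegmenter implementation (the real method works on log-intensities of 3-D MRI, and all parameters below are made up for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "scan": two tissue classes with true means 1.0 and 3.0,
# corrupted by a slowly varying additive shading artifact and noise.
n = 400
labels_true = rng.integers(0, 2, n)
means_true = np.array([1.0, 3.0])
bias_true = 0.8 * np.sin(np.linspace(0.0, np.pi, n))  # smooth artifact
image = means_true[labels_true] + bias_true + 0.1 * rng.standard_normal(n)

mu = np.array([0.5, 3.5])   # assumed (imperfect) class means
sigma = 0.3                 # assumed class standard deviation
bias = np.zeros(n)          # initial estimate: no artifact

def smooth(x, width=50):
    """Moving average standing in for the smoothness prior on the bias."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

for _ in range(20):
    # "E" step: probability of each tissue label at each voxel, given
    # the image data and the current estimate of the intensity artifact.
    corrected = image - bias
    lik = np.exp(-0.5 * ((corrected[:, None] - mu) ** 2) / sigma**2)
    resp = lik / lik.sum(axis=1, keepdims=True)

    # "M" step: re-estimate the intensity artifact, given the image data
    # and the current tissue label probabilities: smooth the residual
    # between the image and its expected tissue intensity.
    expected = resp @ mu
    bias = smooth(image - expected)

labels_est = resp.argmax(axis=1)
accuracy = (labels_est == labels_true).mean()
```

With the bias removed at each iteration, a per-class Gaussian model separates the two tissues even though no single threshold on the raw image would; that is the essence of recovering the artifact and the segmentation simultaneously.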

The EM segmenter proved to be very robust to shading artifacts, but in addition, it was also robust to "inter-scan inhomogeneities". With previous classification approaches to segmentation, "training" was needed on a per-scan basis, because of intensity changes from scan to scan.

The EM segmenter was the first algorithm that could produce high quality segmentations of white matter and gray matter from MRI, with no manual intervention needed on a per-case basis. This proved to be very valuable in a large longitudinal study of MS in the period 1994 - 1995.


Subsequent developments:

Tina Kapur:

* Added MRF models and a Mean Field solver

Kilian Pohl:

* Added the use of anatomical atlases of specific brain parts, e.g., the hippocampus: the method started to become a brain "parcellator"

* Added simultaneous registration of atlas and subject

* Developed a hierarchical method for parcellation and validated it on schizophrenia data

* Developed a mean-field level-set post-processor that is effective for reducing the effects of noise

---

images:

  1. slice of T1-weighted MR (right temporal lobe has bad "shading")
  2. thresholding result
  3. EM result
  4. 3D view of segmented white matter surface from thresholding
  5. 3D view of segmented white matter surface from EM
  6. longitudinal MS, one subject, segmentation result without EM
  7. longitudinal MS, one subject, segmentation result with EM
  8. surface coil image
  9. surface coil image corrected by EM algorithm