EMSegmenter

History

Tracing the history of the EMSegmenter...

1995

  • Image:
  • Strengths:
  • Weaknesses:
  • Publications:

1999

  • Image:
  • Strengths:
  • Weaknesses:
  • Publications:


EM with Priors

  • File:Ron-ISBI-07.zip contains the research-related slides - I sometimes have a couple of slides on a topic, so you can choose.
  • The Feedback slides featuring the transition from Slicer 2 to Slicer 3 were generated by Brad and me.

sw

One.jpg

The EM segmenter grew out of a collaboration with Shenton's group in about 1993. The goal was to get good automatic segmentations of white matter and gray matter from T1-weighted MRI. The biggest difficulty was the intensity inhomogeneity, or "shading", artifact in the images. Because of this artifact, a single threshold could not be used to separate white matter from gray matter. At that time, the MRI scanner used for research at BWH had an annoyingly large shading artifact.

Various approaches to the problem were tried, some giving good results, but imperfections remained. Eventually, we decided to construct an explicit representation of the intensity artifact and attempt to recover the artifact and the segmentation simultaneously.

We chose the Expectation Maximization (EM) algorithm, a statistical estimation method that is used when some data is considered to be "missing".

The result was an iterative algorithm that alternates between two steps.

In the "E" step, the probability of the tissue label at each voxel is estimated, given the image data and the current estimate of the intensity artifact.

In the "M" step, the intensity artifact is re-estimated, given the image data and current estimate of the tissue label probabilities.

The EM segmenter proved to be very robust to shading artifacts, and it also turned out to be robust to "inter-scan inhomogeneities". With previous classification approaches to segmentation, "training" was needed on a per-scan basis because of intensity changes from scan to scan.

The EM segmenter was the first algorithm that could produce high-quality segmentations of white matter and gray matter from MRI with no manual intervention needed on a per-case basis. This proved to be very valuable in a large longitudinal study of MS in the period 1994-1995.


Subsequent developments:

Tina Kapur: added MRF models and a Mean Field solver (see the sketch after this list).

Kilian Pohl:

  • Added the use of anatomical atlases of specific brain parts, e.g., the hippocampus: the segmenter started to become a brain "parcellator"
  • Added simultaneous registration of atlas and subject
  • Developed a hierarchical method for parcellation and validated it on schizophrenia data
  • Developed a mean-field level-set post-processor that is effective for reducing the effects of noise
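
For the MRF / Mean Field addition mentioned above, a mean-field update on the class beliefs might look roughly like the sketch below; `like` holds per-voxel class likelihoods (e.g., the Gaussian terms from the "E" step) and `beta` controls how strongly neighboring voxels are encouraged to agree. The update rule and all names here are illustrative assumptions, not the code that was actually added.

  import numpy as np

  def mean_field(like, beta=1.0, n_iter=10):
      # like: array of shape (rows, cols, classes) of per-voxel likelihoods.
      q = like / like.sum(axis=-1, keepdims=True)      # initial beliefs
      for _ in range(n_iter):
          # Sum the current class beliefs of the 4-connected neighbors
          # (np.roll wraps at the border; good enough for a sketch).
          nbr = (np.roll(q, 1, axis=0) + np.roll(q, -1, axis=0) +
                 np.roll(q, 1, axis=1) + np.roll(q, -1, axis=1))
          # Mean-field update: data term times exponentiated neighbor support.
          q = like * np.exp(beta * nbr)
          q /= q.sum(axis=-1, keepdims=True)
      return q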

---

images:

1) slice of T1-weighted MR (right temporal lobe has bad "shading")
2) thresholding result
3) EM result
4) 3D view of segmented white matter surface from thresholding
5) 3D view of segmented white matter surface from EM
7) longitudinal MS, one subject, segmentation result without EM
8) longitudinal MS, one subject, segmentation result with EM
9) surface coil image
10) surface coil image corrected by EM algorithm