Multimodality Image Registration for TBI


Multimodality Image Registration for Traumatic Brain Injury (TBI)

Key Investigators

  • Georgia Tech: Yifei Lou and Allen Tannenbaum
  • UCLA: Micah Chambers, Andrei Irimia


Objective

  • Understand brain injury using multimodal deformable image registration
  • Robust registration in spite of topological changes (possibly by enforcing zero flow)
  • The algorithm is based on a viscous fluid model, which can handle larger deformations than B-spline-based methods (a minimal sketch appears after this list)
  • CUDA-based implementation that registers a 256×256×60 volume in about one minute
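To make the pipeline concrete, the following is a minimal, CPU-only sketch of one way such a fluid-driven registration loop can be organized in Python/NumPy. It is illustrative only: the force term here is a simple intensity-difference (demons-like) force rather than the Bhattacharyya-distance force of the referenced papers, the function and parameter names (warp, fluid_register, sigma_fluid, step) are made up for the example, and practical details such as regridding and stopping criteria are omitted.

```python
# Simplified fluid-registration sketch; NOT the project's actual implementation.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def warp(moving, disp):
    """Resample `moving` at x + u(x); `disp` has shape (3, *moving.shape)."""
    coords = np.indices(moving.shape, dtype=np.float64) + disp
    return map_coordinates(moving, coords, order=1, mode='nearest')

def fluid_register(fixed, moving, n_iter=50, sigma_fluid=2.0, step=0.5):
    """Iteratively estimate a displacement field u mapping `moving` onto `fixed`."""
    disp = np.zeros((3,) + fixed.shape)               # displacement field u(x)
    for _ in range(n_iter):
        warped = warp(moving, disp)
        residual = fixed - warped                     # similarity residual
        grad = np.array(np.gradient(warped))          # spatial gradient, shape (3, ...)
        force = residual * grad                       # voxelwise body force f(x)
        # Fluid step: approximate the inverse of the viscous-fluid operator by
        # Gaussian smoothing of the force to obtain a velocity field.
        velocity = np.stack([gaussian_filter(f, sigma_fluid) for f in force])
        disp += step * velocity                       # Eulerian update (regridding omitted)
    return disp
```

The warping, gradient, and smoothing steps in this loop are independent per voxel, which is the kind of work that parallelizes well on a GPU and makes the reported runtime of about one minute for a 256×256×60 volume plausible for a CUDA implementation.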


Approach, Plan

  • Integrate the algorithm into a Slicer3 module
  • Learn more about TBI and our data set from Micah (UCLA NA-MIC TBI DBP team member)
  • Validate the algorithm on additional TBI datasets from UCLA

Progress

  • Learned more about TBI and ITK/Slicer
  • Demonstrated the efficiency of my algorithm on TBI data
  • Its failure in one registration case suggests dividing the 12 modalities into 2 subgroups and co-registering within each group (see the sketch after this list)
  • Plan to write a paper and integrate my algorithm into ITK/Slicer
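
A minimal sketch of how the two-subgroup strategy might be driven is shown below. The subgroup names and modality labels are hypothetical placeholders (this page does not list the 12 modalities or their actual split), and register_pair stands for any pairwise deformable registration routine, e.g. the fluid sketch above.

```python
# Hypothetical split of the 12 acquired modalities into two subgroups; the
# labels below are placeholders, not the project's actual modality list.
SUBGROUPS = {
    "group_A": ["modality_01", "modality_02", "modality_03"],
    "group_B": ["modality_04", "modality_05", "modality_06"],
}

def coregister_within_groups(volumes, register_pair):
    """Register each volume in a subgroup to that subgroup's first (reference) volume.

    `volumes` maps modality name -> image array; `register_pair(fixed, moving)`
    is any pairwise deformable registration (e.g. `fluid_register` above).
    Returns a nested dict of displacement fields, keyed by group and modality.
    """
    fields = {}
    for group, names in SUBGROUPS.items():
        reference = names[0]
        fields[group] = {
            name: register_pair(volumes[reference], volumes[name])
            for name in names[1:]
        }
    return fields
```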



References

1. Yifei Lou and Allen Tannenbaum. Multimodal Deformable Image Registration via the Bhattacharyya Distance. Submitted to IEEE Trans. Image Process., 2011.

2. Yifei Lou, Xun Jia, Xuejun Gu, and Allen Tannenbaum. A GPU-based Implementation of Multimodal Deformable Image Registration Based on Mutual Information or Bhattacharyya Distance. Insight Journal, 2011.

Delivery Mechanism

This work will be delivered to the NAMIC Kit as a:

  1. NITRC distribution
  2. Slicer Module
    1. Built-in: NO
    2. Extension -- commandline: NO
    3. Extension -- loadable: NO