ITK Registration Optimization

Goals

There are two components to this research:

  1. Identify registration algorithms that are suitable for non-rigid registration problems that are endemic to NA-MIC
  2. Develop implementations of those algorithms that take advantage of multi-core and multi-processor hardware.

Algorithmic Requirements and Use Cases

  • Requirements
    1. relatively robust, with few parameters to tweak
    2. runs on grayscale images
    3. has already been published
    4. relatively fast (ideally a few minutes for volume-to-volume registration)
    5. not patented
    6. can be implemented in ITK and parallelized (a pipeline sketch follows the use-cases below)
  • Use-cases
    1. DTI: components of the diffusion tensor (DTI-non-rigid) (Sylvain)
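
One algorithm that appears to meet these requirements is Thirion's demons method: published, unpatented, grayscale-only, few parameters, and already multi-threaded in ITK. The sketch below is illustrative rather than a committed choice for this project; it assumes the ITK 3.x-era API, and the file arguments, iteration count, and smoothing value are placeholder assumptions.

#include "itkImage.h"
#include "itkVector.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkHistogramMatchingImageFilter.h"
#include "itkDemonsRegistrationFilter.h"
#include "itkWarpImageFilter.h"
#include <iostream>

int main(int argc, char * argv[])
{
  if (argc < 4)
    {
    std::cerr << "Usage: " << argv[0] << " fixedImage movingImage outputImage" << std::endl;
    return 1;
    }

  typedef itk::Image<float, 3>      ImageType;
  typedef itk::Vector<float, 3>     VectorType;
  typedef itk::Image<VectorType, 3> FieldType;

  // Read the fixed and moving grayscale volumes.
  typedef itk::ImageFileReader<ImageType> ReaderType;
  ReaderType::Pointer fixedReader  = ReaderType::New();
  ReaderType::Pointer movingReader = ReaderType::New();
  fixedReader->SetFileName(argv[1]);
  movingReader->SetFileName(argv[2]);

  // Demons assumes comparable intensities; histogram matching is the usual preprocessing step.
  typedef itk::HistogramMatchingImageFilter<ImageType, ImageType> MatchType;
  MatchType::Pointer matcher = MatchType::New();
  matcher->SetInput(movingReader->GetOutput());
  matcher->SetReferenceImage(fixedReader->GetOutput());
  matcher->SetNumberOfHistogramLevels(1024);
  matcher->SetNumberOfMatchPoints(7);
  matcher->ThresholdAtMeanIntensityOn();

  // Demons non-rigid registration: only two parameters are tuned here
  // (iteration count and deformation-field smoothing), both placeholders.
  typedef itk::DemonsRegistrationFilter<ImageType, ImageType, FieldType> DemonsType;
  DemonsType::Pointer demons = DemonsType::New();
  demons->SetFixedImage(fixedReader->GetOutput());
  demons->SetMovingImage(matcher->GetOutput());
  demons->SetNumberOfIterations(50);
  demons->SetStandardDeviations(1.0);
  demons->Update();

  // Warp the moving image onto the fixed grid with the resulting deformation field.
  typedef itk::WarpImageFilter<ImageType, ImageType, FieldType> WarpType;
  WarpType::Pointer warper = WarpType::New();
  warper->SetInput(movingReader->GetOutput());
  warper->SetDeformationField(demons->GetOutput());
  warper->SetOutputSpacing(fixedReader->GetOutput()->GetSpacing());
  warper->SetOutputOrigin(fixedReader->GetOutput()->GetOrigin());

  typedef itk::ImageFileWriter<ImageType> WriterType;
  WriterType::Pointer writer = WriterType::New();
  writer->SetInput(warper->GetOutput());
  writer->SetFileName(argv[3]);
  writer->Update();
  return 0;
}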

Hardware Platform Requirements and Use Cases

  • Requirements
    1. Shared memory (see the threading sketch after the use-cases below)
    2. Single and multi-core machines
    3. Single and multi-processor machines
    4. AMD and Intel - Windows, Linux, and SunOS
  • Use-cases
    1. Intel Core2Duo
    2. Intel quad-core Xeon processors (?)
    3. 6 CPU Sun, Solaris 8 (SPL: vision)
    4. 12 CPU Sun, Solaris 8 (SPL: forest and ocean)
    5. 16-core Opteron (SPL: john, ringo, paul, george)
    6. 16-core Sun Fire, AMD Opteron (UNC: Styner)
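
All of the target platforms are shared-memory machines, so the parallel implementations can build on ITK's existing threading layer rather than a distributed-memory model. A minimal sketch of controlling the thread count, assuming the ITK 3.x-era itk::MultiThreader API; the thread counts shown are placeholders and would be set to 2, 6, 12, or 16 to match the machines above.

#include "itkMultiThreader.h"
#include "itkImage.h"
#include "itkDiscreteGaussianImageFilter.h"
#include <iostream>

int main()
{
  // Cap the number of worker threads used by every multi-threaded ITK filter.
  itk::MultiThreader::SetGlobalDefaultNumberOfThreads(4);

  typedef itk::Image<float, 3> ImageType;
  typedef itk::DiscreteGaussianImageFilter<ImageType, ImageType> FilterType;
  FilterType::Pointer smoother = FilterType::New();

  // Individual filters can also be limited independently of the global default.
  smoother->SetNumberOfThreads(2);

  std::cout << "Global default threads: "
            << itk::MultiThreader::GetGlobalDefaultNumberOfThreads() << std::endl;
  return 0;
}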

Data

Workplan

  1. Quantify current performance and bottlenecks
    1. Identify timing tools (cross-platform, multi-threaded; see the timing sketch after this workplan)
    2. For each use-case
      1. Centralize data and provide easy access
      2. Identify relevant registration algorithm(s)
      3. Develop traditional ITK-style implementations
      4. Develop timing tests using implementations and data
    3. Across use-cases
      1. Identify ITK classes/functions common to implementations (e.g., interpolation/resampling)
      2. Develop timing tests specific to these common sub-classes
    4. Compute performance on multiple platforms
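
For workplan items 1.1 and 1.3.2, one possible timing harness is sketched below: itk::TimeProbe wrapped around itk::ResampleImageFilter with linear interpolation, one of the components shared by most registration implementations. The image size, repetition count, and translation offset are arbitrary placeholders, and GetMeanTime() is assumed to be the ITK 3.x-era accessor (later ITK versions renamed it GetMean()).

#include "itkImage.h"
#include "itkResampleImageFilter.h"
#include "itkLinearInterpolateImageFunction.h"
#include "itkTranslationTransform.h"
#include "itkTimeProbe.h"
#include <iostream>

int main()
{
  typedef itk::Image<float, 3> ImageType;

  // Synthetic 128^3 input volume so the test has no file dependencies.
  ImageType::SizeType size;
  size.Fill(128);
  ImageType::IndexType start;
  start.Fill(0);
  ImageType::RegionType region;
  region.SetSize(size);
  region.SetIndex(start);
  ImageType::Pointer input = ImageType::New();
  input->SetRegions(region);
  input->Allocate();
  input->FillBuffer(100.0f);

  // Small translation so the resampling actually exercises the interpolator.
  typedef itk::TranslationTransform<double, 3> TransformType;
  TransformType::Pointer transform = TransformType::New();
  TransformType::OutputVectorType offset;
  offset.Fill(0.5);
  transform->SetOffset(offset);

  typedef itk::LinearInterpolateImageFunction<ImageType, double> InterpolatorType;
  typedef itk::ResampleImageFilter<ImageType, ImageType> ResampleType;
  ResampleType::Pointer resampler = ResampleType::New();
  resampler->SetInput(input);
  resampler->SetTransform(transform);
  resampler->SetInterpolator(InterpolatorType::New());
  resampler->SetSize(size);
  resampler->SetOutputSpacing(input->GetSpacing());
  resampler->SetOutputOrigin(input->GetOrigin());

  // Wall-clock timing over repeated runs; Modified() forces re-execution of the filter.
  itk::TimeProbe probe;
  for (unsigned int i = 0; i < 5; ++i)
    {
    resampler->Modified();
    probe.Start();
    resampler->Update();
    probe.Stop();
    }
  std::cout << "Mean resample time: " << probe.GetMeanTime() << " s" << std::endl;
  return 0;
}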

Progress Highlights

  1. Quantify current performance and bottlenecks

Related Pages

Performance Measurement