ITK Registration Optimization

Goals

There are two components to this research:

  1. Identify registration algorithms that are suitable for non-rigid registration problems that are endemic to NA-MIC.
  2. Develop implementations of those algorithms that take advantage of multi-core and multi-processor hardware, as sketched below.
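
ITK already exposes filter-level parallelism through a global thread setting, so that is the natural control point for the multi-core implementations developed here. The short sketch below uses the ITK 3.x itk::MultiThreader API; the thread count of 4 is only a placeholder.

#include "itkMultiThreader.h"
#include <iostream>

int main()
{
  // ITK detects the number of hardware threads at start-up; report it.
  std::cout << "Default number of threads: "
            << itk::MultiThreader::GetGlobalDefaultNumberOfThreads()
            << std::endl;

  // Filters derived from itk::ImageSource split their requested region
  // across this many threads in their ThreadedGenerateData() methods.
  itk::MultiThreader::SetGlobalDefaultNumberOfThreads(4);  // placeholder: one per core
  return 0;
}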

Algorithmic Requirements and Use Cases

  • Requirements
    1. relatively robust, with few parameters to tweak
    2. runs on grey-scale images
    3. has already been published
    4. relatively fast (ideally a few minutes for volume-to-volume registration)
    5. not patented
    6. can be implemented in ITK and parallelized (one candidate is sketched below)
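
One published, unpatented algorithm that appears to satisfy these constraints is Thirion's demons, already available in ITK as itk::DemonsRegistrationFilter. The sketch below is a minimal illustration of that candidate, not a project decision; the file names, iteration count, and smoothing value are placeholders.

#include "itkImage.h"
#include "itkVector.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkHistogramMatchingImageFilter.h"
#include "itkDemonsRegistrationFilter.h"

int main()
{
  typedef itk::Image<float, 3> ImageType;
  typedef itk::Image<itk::Vector<float, 3>, 3> FieldType;

  // Placeholder file names; any pair of grey-scale volumes would do.
  typedef itk::ImageFileReader<ImageType> ReaderType;
  ReaderType::Pointer fixedReader = ReaderType::New();
  ReaderType::Pointer movingReader = ReaderType::New();
  fixedReader->SetFileName("fixedImage.mha");
  movingReader->SetFileName("movingImage.mha");
  fixedReader->Update();
  movingReader->Update();

  // Demons assumes comparable grey-level ranges, so match histograms first.
  typedef itk::HistogramMatchingImageFilter<ImageType, ImageType> MatcherType;
  MatcherType::Pointer matcher = MatcherType::New();
  matcher->SetInput(movingReader->GetOutput());
  matcher->SetReferenceImage(fixedReader->GetOutput());
  matcher->SetNumberOfHistogramLevels(1024);
  matcher->SetNumberOfMatchPoints(7);
  matcher->ThresholdAtMeanIntensityOn();

  // The registration itself; iterations and smoothing are placeholder values.
  typedef itk::DemonsRegistrationFilter<ImageType, ImageType, FieldType> DemonsType;
  DemonsType::Pointer demons = DemonsType::New();
  demons->SetFixedImage(fixedReader->GetOutput());
  demons->SetMovingImage(matcher->GetOutput());
  demons->SetNumberOfIterations(50);
  demons->SetStandardDeviations(1.0);
  demons->Update();

  // Save the displacement field for later use.
  typedef itk::ImageFileWriter<FieldType> WriterType;
  WriterType::Pointer writer = WriterType::New();
  writer->SetInput(demons->GetOutput());
  writer->SetFileName("displacementField.mha");
  writer->Update();
  return 0;
}

In a full pipeline, the resulting field would then be applied with itk::WarpImageFilter to resample the moving image onto the fixed grid.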

Hardware Platform Requirements and Use Cases

  • Requirements
    1. Shared memory
    2. Single and multi-core machines
    3. Single and multi-processor machines
    4. AMD and Intel - Windows, Linux, and SunOS
  • Use-cases
    1. Intel Core 2 Duo
    2. Intel quad-core Xeon processors (?)
    3. 6-CPU Sun, Solaris 8 (SPL: vision)
    4. 12-CPU Sun, Solaris 8 (SPL: forest and ocean)
    5. 16-core Opteron (SPL: john, ringo, paul, george)
    6. 16-core Sun Fire, AMD Opteron (UNC: Styner)

Data

Workplan

  1. Quantify current performance and bottlenecks
    1. Identify timing tools (cross-platform, multi-thread aware); a minimal timing sketch follows this list
    2. For each use-case
      1. Centralize the data and provide easy access
      2. Identify relevant registration algorithm(s)
      3. Develop traditional ITK-style implementations
      4. Develop timing tests using the implementations and data
    3. Across use-cases
      1. Identify ITK classes/functions common to the implementations (e.g., interpolation/resampling)
      2. Develop timing tests specific to these common classes and functions
    4. Compute performance on multiple platforms
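
As a starting point for the timing tests, itk::TimeProbe provides a cross-platform clock that can be wrapped around any filter. The sketch below times repeated runs of a resampling step, since interpolation/resampling is one of the shared components named above; the blank test volume, its size, and the repeat count are placeholders.

#include "itkImage.h"
#include "itkResampleImageFilter.h"
#include "itkLinearInterpolateImageFunction.h"
#include "itkAffineTransform.h"
#include "itkTimeProbe.h"
#include <iostream>

int main()
{
  typedef itk::Image<float, 3> ImageType;

  // Build a blank placeholder volume; a real test would load use-case data.
  ImageType::Pointer input = ImageType::New();
  ImageType::SizeType size;
  size.Fill(128);                                    // placeholder volume size
  ImageType::RegionType region;
  region.SetSize(size);
  input->SetRegions(region);
  input->Allocate();
  input->FillBuffer(0.0f);

  // Resample through an identity affine transform with linear interpolation.
  typedef itk::ResampleImageFilter<ImageType, ImageType> ResampleType;
  ResampleType::Pointer resampler = ResampleType::New();
  resampler->SetTransform(itk::AffineTransform<double, 3>::New());
  resampler->SetInterpolator(
    itk::LinearInterpolateImageFunction<ImageType, double>::New());
  resampler->SetInput(input);
  resampler->SetSize(size);
  resampler->SetOutputOrigin(input->GetOrigin());
  resampler->SetOutputSpacing(input->GetSpacing());

  // Time several runs to average out scheduling noise.
  itk::TimeProbe probe;
  for (unsigned int run = 0; run < 5; ++run)
  {
    resampler->Modified();                           // force re-execution
    probe.Start();
    resampler->Update();
    probe.Stop();
  }
  std::cout << "Mean resample time: " << probe.GetMean() << " s" << std::endl;
  return 0;
}

The same probe pattern would wrap the full registration implementations in step 2.4, and per-class variants of it would cover the shared components in step 3.2.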

Progress Highlights

  1. Quantify current performance and bottlenecks

Related Pages

  • Non Rigid Registration
  • Slicer3:Performance_Analysis
  • User:Barre/ITK Registration Optimization (http://www.na-mic.org/Wiki/index.php/User:Barre/ITK_Registration_Optimization)

Performance Measurement