ITK Registration Optimization/2007-04-06-tcon

Revision as of 15:46, 6 April 2007


Agenda

Tests

  • Two types of tests
    • Baseline: LinearInterp is useful for profiling
      • Profile reports in Reports subdir
    • Optimization: OptMattesMI is a self-contained test
      • Submit to dashboard (see the timing sketch below)
        • test name / method
        • non-optimized speed
        • optimized speed
        • optimized error (difference from non-optimized results)
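
The following is a minimal sketch of what such an optimization test could look like: it times a baseline and an "optimized" implementation of the same computation and prints the four dashboard fields listed above. The function names and the stand-in computation (a simple sum of squares) are hypothetical, not the actual test sources in CVS.

#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <ctime>
#include <vector>

// Baseline: straightforward indexed loop over the image buffer.
double SumOfSquaresBaseline(const std::vector<float>& image)
{
  double sum = 0.0;
  for (std::size_t i = 0; i < image.size(); ++i)
    {
    sum += static_cast<double>(image[i]) * image[i];
    }
  return sum;
}

// "Optimized": same computation via raw pointers, standing in for the
// real optimized code path being tested.
double SumOfSquaresOptimized(const std::vector<float>& image)
{
  double sum = 0.0;
  const float* p = &image[0];
  const float* end = p + image.size();
  for (; p != end; ++p)
    {
    sum += static_cast<double>(*p) * (*p);
    }
  return sum;
}

int main()
{
  std::vector<float> image(1 << 22);              // ~4M voxels of test data
  for (std::size_t i = 0; i < image.size(); ++i)
    {
    image[i] = static_cast<float>(std::rand()) / RAND_MAX;
    }

  std::clock_t t0 = std::clock();
  const double baseline = SumOfSquaresBaseline(image);
  std::clock_t t1 = std::clock();
  const double optimized = SumOfSquaresOptimized(image);
  std::clock_t t2 = std::clock();

  // The four fields to submit to the dashboard.
  std::printf("test name / method : SumOfSquares\n");
  std::printf("non-optimized (s)  : %g\n", double(t1 - t0) / CLOCKS_PER_SEC);
  std::printf("optimized (s)      : %g\n", double(t2 - t1) / CLOCKS_PER_SEC);
  std::printf("optimized error    : %g\n", std::fabs(optimized - baseline));
  return 0;
}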

Timing

  • Priority
  • Thread affinity (see the sketch below)
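
A minimal sketch of the priority and thread-affinity idea, assuming a Linux/glibc system: raise the process priority and pin the process to one CPU before running the timing loop, so repeated runs are less noisy. These are standard POSIX/Linux calls; whether and how the project's timing code will do this is still to be decided.

// Compile with g++ on Linux (CPU_ZERO/CPU_SET need _GNU_SOURCE, which
// g++ defines by default).
#include <cstdio>
#include <sched.h>          // sched_setaffinity, cpu_set_t
#include <sys/resource.h>   // setpriority

int main()
{
  // Raise the priority of the current process (lower nice value);
  // this needs sufficient privileges and is skipped otherwise.
  if (setpriority(PRIO_PROCESS, 0, -10) != 0)
    {
    std::perror("setpriority (continuing at normal priority)");
    }

  // Pin the process to CPU 0 so the benchmark does not migrate between cores.
  cpu_set_t mask;
  CPU_ZERO(&mask);
  CPU_SET(0, &mask);
  if (sched_setaffinity(0, sizeof(mask), &mask) != 0)
    {
    std::perror("sched_setaffinity");
    }

  // ... run the timing loop here ...
  return 0;
}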

Performance Dashboard

  • Ranking computers?
    • CPU, memory, etc. in the dashboard
    • MFlops as measured by Whetstone?
  • Public submission of performance
    • Logins and passwords configured in CMake
    • Encryption in CMake?
  • Organization of Experiments/Dashboards
    • When new experiment?
  • Appropriate summary statistics
    • Per machine: batch vs. speed/error
    • Per test: MFlops vs. speed/error
    • Across all machines: batch vs. % change in performance (see the sketch below)
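
For the last statistic, a short sketch of the arithmetic, assuming "performance" is measured as a test's run time in one batch versus the previous batch (the sign convention here is only an assumption):

#include <cstdio>

// Percent change in run time relative to the previous batch;
// negative values mean the current batch is faster.
double PercentChange(double previousSeconds, double currentSeconds)
{
  return 100.0 * (currentSeconds - previousSeconds) / previousSeconds;
}

int main()
{
  // Example: a test that took 2.4 s in the previous batch and 1.8 s now.
  std::printf("%% change = %.1f%%\n", PercentChange(2.4, 1.8));   // -25.0%
  return 0;
}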

Review of OptMattesMI

  • Lessons learned
    • Mutex locking in the threaded inner loop is costly
    • Spending extra memory (e.g., per-thread storage) to avoid locking pays off (see the sketch below)
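
A sketch of the general pattern behind those two lessons: instead of having every thread lock a shared accumulator, each thread writes into its own slot and the partial results are combined after the threads join. This shows the generic idea only, not the actual OptMattesMI code.

// Compile with: g++ -std=c++11 -pthread
#include <cstdio>
#include <thread>
#include <vector>

int main()
{
  const unsigned numThreads =
    std::thread::hardware_concurrency() ? std::thread::hardware_concurrency() : 2;
  const std::size_t n = 1 << 24;
  std::vector<float> data(n, 0.5f);

  // One accumulator slot per thread: no mutex in the inner loop.
  std::vector<double> partial(numThreads, 0.0);
  std::vector<std::thread> threads;
  for (unsigned t = 0; t < numThreads; ++t)
    {
    threads.push_back(std::thread([&, t]()
      {
      const std::size_t begin = t * n / numThreads;
      const std::size_t end   = (t + 1) * n / numThreads;
      double sum = 0.0;                  // thread-local scratch
      for (std::size_t i = begin; i < end; ++i)
        {
        sum += data[i] * data[i];
        }
      partial[t] = sum;                  // each thread writes only its own slot
      }));
    }
  for (unsigned t = 0; t < numThreads; ++t)
    {
    threads[t].join();
    }

  double total = 0.0;
  for (unsigned t = 0; t < numThreads; ++t)
    {
    total += partial[t];                 // cheap single-threaded reduction
    }
  std::printf("sum of squares = %g\n", total);
  return 0;
}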

CTest suite

  • Ideal set of tests is machine specific
    • e.g., number of threads and image size (see the sweep sketch below)
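
A sketch of what a machine-specific sweep could look like: run the same (hypothetical) registration test over several thread counts and image sizes and emit one result per combination. RunRegistrationTest and the parameter values here are placeholders.

#include <cstdio>
#include <ctime>

// Hypothetical stand-in for one registration test run.
double RunRegistrationTest(int threads, int imageSize)
{
  std::clock_t start = std::clock();
  // ... configure the metric with `threads` threads on an imageSize^3
  //     volume and evaluate it here; a dummy loop stands in for the work ...
  volatile double work = 0.0;
  for (long i = 0; i < 1000L * threads * imageSize; ++i)
    {
    work += 1.0;
    }
  return double(std::clock() - start) / CLOCKS_PER_SEC;
}

int main()
{
  const int threadCounts[] = { 1, 2, 4, 8 };
  const int imageSizes[]   = { 64, 128, 256 };
  for (int t = 0; t < 4; ++t)
    {
    for (int s = 0; s < 3; ++s)
      {
      const double seconds = RunRegistrationTest(threadCounts[t], imageSizes[s]);
      std::printf("threads=%d size=%d^3 time=%g s\n",
                  threadCounts[t], imageSizes[s], seconds);
      }
    }
  return 0;
}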

Tasks

Julien

  1. Work with Seb to get reports from Amber2
    • Result
      • Amber2 is allocated to another project; work will therefore transition to machines at the SPL
  2. Define role of experiments and batches
    • Work with Seb to integrate with the CMake dashboard
    • New experiment = new CVS tag
    • New batch = nightly run (possibly only if CVS has changed)
  3. Make CMake aware of the number of CPUs and CPU cores
  4. Make CMake aware of the available memory (see the runtime sketch after this list)
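
Items 3 and 4 are about CMake detecting these values at configure time. As a reference point only, here is a sketch of the equivalent runtime queries on a Linux/glibc system (the _SC_NPROCESSORS_ONLN and _SC_PHYS_PAGES keys are glibc extensions and may not exist elsewhere):

#include <cstdio>
#include <unistd.h>   // sysconf

int main()
{
  const long cpus     = sysconf(_SC_NPROCESSORS_ONLN);  // online CPUs/cores
  const long pageSize = sysconf(_SC_PAGESIZE);          // bytes per page
  const long numPages = sysconf(_SC_PHYS_PAGES);        // physical memory pages

  const double megabytes = double(numPages) * double(pageSize) / (1024.0 * 1024.0);
  std::printf("CPUs/cores  : %ld\n", cpus);
  std::printf("memory (MB) : %.0f\n", megabytes);
  return 0;
}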

Brad

  1. Continue to develop registration pipelines
    • Commit into CVS
    • Implement as ctests
  2. Optimize the mean squared difference image-to-image metric (see the sketch after this list)
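
For reference, the computation behind item 2 is the mean of squared intensity differences over corresponding pixels of the fixed and moving images. A scalar sketch of that formula (not the ITK class implementation):

#include <cstdio>
#include <vector>

// MSD = (1/N) * sum over i of (F_i - M_i)^2
double MeanSquaredDifference(const std::vector<float>& fixedImage,
                             const std::vector<float>& movingImage)
{
  double sum = 0.0;
  for (std::size_t i = 0; i < fixedImage.size(); ++i)
    {
    const double d = static_cast<double>(fixedImage[i]) - movingImage[i];
    sum += d * d;
    }
  return sum / fixedImage.size();
}

int main()
{
  std::vector<float> fixedImage(3), movingImage(3);
  fixedImage[0] = 1;  fixedImage[1] = 2;  fixedImage[2] = 3;
  movingImage[0] = 1; movingImage[1] = 4; movingImage[2] = 3;
  // (0 + 4 + 0) / 3 = 1.33...
  std::printf("MSD = %g\n", MeanSquaredDifference(fixedImage, movingImage));
  return 0;
}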

Seb

  1. Set up the CMake dashboard
  2. Add an MD5 hashing function to CMake for BatchMake passwords
  3. Work with Julien on BatchMake Dashboard designs
  4. Investigate other opportunities for optimization

Stephen

  1. Get Seb/Brad access to SPL machines
  2. Continue to optimize MattesMIMetric
  3. Determine BMDashboard table structure
  4. Have the test programs switch among baseline-only, optimized-only, and combined testing/reporting (see the sketch below)
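
A sketch of what item 4 could look like in a test driver: one command-line argument selects the baseline path, the optimized path, or both (so the two results can be compared and reported). The flag spellings and function names are hypothetical.

#include <cstdio>
#include <cstring>

void RunBaseline()  { std::printf("running baseline path\n");  }
void RunOptimized() { std::printf("running optimized path\n"); }

int main(int argc, char* argv[])
{
  const char* mode = (argc > 1) ? argv[1] : "both";

  if (std::strcmp(mode, "baseline") == 0)
    {
    RunBaseline();
    }
  else if (std::strcmp(mode, "optimized") == 0)
    {
    RunOptimized();
    }
  else if (std::strcmp(mode, "both") == 0)
    {
    RunBaseline();
    RunOptimized();   // results from the two paths can then be compared
    }
  else
    {
    std::fprintf(stderr, "usage: %s [baseline|optimized|both]\n", argv[0]);
    return 1;
    }
  return 0;
}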