User:Barre/ITK Registration Optimization

ITK Registration Optimization (BW NAC) Project

My (Sebastien Barre) notes so far. Once the dust settles, the relevant sections will be moved to the project pages listed below.

Project

The ultimate goal is B-Spline registration optimization for Linux and Windows on multi-core and multi-processor, shared-memory machines. [...] Also, set up tools and a reporting mechanism so that ITK speed can be monitored and reported by us and others. BWH is the driving force behind this work.

Contacts

Quick Links

Source Code

 cvs -d :pserver:<login>@public.kitware.com:/cvsroot/BWHITKOptimization login

Enter your VTK <login> and password, then:

 cvs -d :pserver:<login>@public.kitware.com:/cvsroot/BWHITKOptimization co BWHITKOptimization

to check out the code.

You can browse the repository online using ViewCVS as well.

Testing Data


Potential Issues with Timing

  • Repository was updated so that it can compile on Unix.

__rdtsc()

CallMonWin includes <intrin.h> to call __rdtsc(); that header does not exist in Microsoft compilers prior to Visual Studio 8/2005. It seems, however, that one can invoke the instruction directly from inline assembly:
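For instance, here is a minimal sketch of such an inline-assembly fallback, assuming a 32-bit MSVC build (the __asm block syntax below is not available in 64-bit MSVC compilers, which ship <intrin.h> anyway):

 // Read the CPU time-stamp counter via inline assembly; 32-bit MSVC only.
 unsigned __int64 GetTSC()
 {
   unsigned int lo, hi;
   __asm {
     rdtsc          ; the result is returned in EDX:EAX
     mov lo, eax
     mov hi, edx
   }
   return ((unsigned __int64)hi << 32) | lo;
 }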

A few articles advise against the use of __rdtsc(), especially in a multicore/multithreaded context:

The suggested alternative is to use Performance Counters. Hardware counters are actually not an OS feature per se, but a CPU feature that has been around for some time. They provide high-resolution timers that can be used to monitor a wide range of resources:

The issue remains on how to access those counters in a cross-platform way:

  • PAPI: "The Performance API (PAPI) project specifies a standard API for accessing hardware performance counters".
    Stephen/Christian reported that dual-core CPUs were not supported, but it seems from the release notes for PAPI 3.5 (2006-11-09) that both the Intel Core 2 Duo and the Pentium D (i.e. dual-core) are indeed supported. See the timing sketch below.
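For reference, a minimal wall-clock timing sketch using the PAPI C API (the timed section is left as a placeholder):

 #include <papi.h>
 #include <cstdio>

 int main()
 {
   // Initialize the PAPI library; the return value must match the header version.
   if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT)
   {
     return 1;
   }
   long long start = PAPI_get_real_usec(); // wall-clock time in microseconds
   // ... section to be timed ...
   long long stop = PAPI_get_real_usec();
   printf("Elapsed: %lld usec\n", stop - start);
   return 0;
 }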

Process Priority

Whatever our choices, several articles also suggest bumping the application's priority to real-time before running the tests, to make sure the wall-clock results are as realistic as possible. It is, however, very important to set it back to normal afterwards.

  • Windows:

See the last paragraph. Use GetPriorityClass, SetPriorityClass, GetThreadPriority, SetThreadPriority. After experimenting with that API, it seems that users with Administrator privileges are able to access REALTIME_PRIORITY_CLASS, whereas users with only User privileges are limited to HIGH_PRIORITY_CLASS. Note that the code below will not fail for normal users; HIGH will simply be picked instead of REALTIME. In any case, I quote: "Use extreme care when using the high-priority class, because a high-priority class application can use nearly all available CPU time."; indeed, mouse interaction becomes pretty much impossible, and some applications such as instant messengers will disconnect after losing their socket connection. Let's stick to HIGH.

	// Save the current process and thread priorities.
	DWORD dwPriorityClass = GetPriorityClass(GetCurrentProcess());
	int nPriority = GetThreadPriority(GetCurrentThread());
	// Raise them as high as the user's privileges allow.
	SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
	SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
[...]
	// Restore the original priorities.
	SetThreadPriority(GetCurrentThread(), nPriority);
	SetPriorityClass(GetCurrentProcess(), dwPriorityClass);
  • Unix:

The getpriority(), setpriority(), and nice() functions can be used to change the priority of processes. The getpriority() call returns the current nice value for a process, process group, or user. The returned nice value is in the range [-NZERO, NZERO-1]; NZERO is defined in /usr/include/limits.h. The default process priority always has the value 0 on UNIX. The setpriority() call sets the current nice value for a process, process group, or user to value + NZERO. It is important to note that setting a higher priority is only allowed if you are root or if the program has its suid bit set, in order to prevent a rogue program or virus from claiming system resources. In practice, this is likely to prevent us from increasing the priority on Unix.
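For illustration, a minimal sketch using getpriority()/setpriority(); as noted above, the setpriority() call will fail with EPERM/EACCES for non-root users:

 #include <sys/time.h>
 #include <sys/resource.h>
 #include <cerrno>

 void RunTimedSection()
 {
   errno = 0;
   // getpriority() can legitimately return -1, so errno must be checked.
   int old_nice = getpriority(PRIO_PROCESS, 0);
   if (setpriority(PRIO_PROCESS, 0, -20) != 0)
   {
     // EPERM/EACCES: not root and no suid bit; keep the default priority.
   }
   // ... section to be timed ...
   setpriority(PRIO_PROCESS, 0, old_nice); // restore the original nice value
 }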

Update: itkHighPriorityRealTimeClock, a subclass of itkRealTimeClock, has been created. Since we wanted to remain compatible with the itkRealTimeClock API, no Start() and Stop() methods were created to increase (respectively restore) the process/thread priority; this is done automatically from the class constructor (respectively destructor) instead. The drawback to this approach is that a class using an itkHighPriorityRealTimeClock as a member variable would have its priority bumped as soon as it is created. This does not apply to us per se, as we favor allocating clock objects right before the section that needs to be timed. As noted above, this is not likely to help us on Unix.
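As a hypothetical usage sketch (the class and header names below are assumptions based on the description above; GetTimeStamp() is the standard itkRealTimeClock method):

 #include "itkHighPriorityRealTimeClock.h"

 void TimedSection()
 {
   // The constructor bumps the process/thread priority.
   itk::HighPriorityRealTimeClock::Pointer clock =
     itk::HighPriorityRealTimeClock::New();

   itk::RealTimeClock::TimeStampType start = clock->GetTimeStamp();
   // ... section to be timed ...
   itk::RealTimeClock::TimeStampType elapsed = clock->GetTimeStamp() - start;
   (void)elapsed;
 } // going out of scope: the destructor restores the original priority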

Thread Affinity

We should consider setting the thread affinity to make sure that the starting time is recorded on the same CPU as the ending time. Whether that would constrain the rest of the program to run on a single CPU is a very good question.

  • Windows:

Using SetThreadAffinityMask. Also check Sleep(0), reported in a few discussions, including this long one.
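A minimal sketch of that approach, binding the calling thread to the first CPU and restoring the previous mask afterwards:

 #include <windows.h>

 void TimedSection()
 {
   // Bind the calling thread to CPU 0; the previous mask is returned (0 on failure).
   DWORD_PTR old_mask = SetThreadAffinityMask(GetCurrentThread(), 1);
   // ... section to be timed ...
   if (old_mask)
   {
     SetThreadAffinityMask(GetCurrentThread(), old_mask); // restore
   }
 }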

  • Unix:

Using sched_setaffinity, which seems to be Linux-only (not POSIX).
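For reference, a minimal Linux-only sketch using the glibc three-argument prototype (older kernels/vendors expose a different signature, which is part of the portability problem discussed below):

 #define _GNU_SOURCE
 #include <sched.h>

 void BindToFirstCPU()
 {
   cpu_set_t mask;
   CPU_ZERO(&mask);
   CPU_SET(0, &mask);
   // pid 0 means the calling process; may fail on kernels with the older prototype.
   if (sched_setaffinity(0, sizeof(mask), &mask) != 0)
   {
     // handle failure (e.g. fall back to running without affinity)
   }
 }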

I read a few articles (here, here, here) and came to the following conclusions:

  • the sched_setaffinity API is not reliable across Linux vendors/kernels (see the Portable Linux Processor Affinity (PLPA) library, though),
  • there is too much risk of influencing the algorithms being timed,
  • binding CPU affinity for timing purposes only makes sense for real-time applications, where the events to be timed are very short in duration (in such cases, calling a timer's start() could happen on one CPU/thread and calling stop() on another, potentially resulting in negative timings). This does not apply to us, as we are trying to measure performance over a reasonable amount of time (minutes to hours).

However, once our optimizations have been tested, it might be interesting to see if CPU affinity can be used to improve cache performance: "[...] But the real problem comes into play when processes bounce between processors: they constantly cause cache invalidations, and the data they want is never in the cache when they need it. Thus, cache miss rates grow very large. CPU affinity protects against this and improves cache performance. A second benefit of CPU affinity is a corollary to the first. If multiple threads are accessing the same data, it might make sense to bind them all to the same processor. Doing so guarantees that the threads do not contend over data and cause cache misses. [...]".

Test Platforms

The primary target platforms are the 8-, 16-, and 32-processor machines at BWH. However, preliminary tests have been performed on KHQ computers.

KHQ

A full software stack was compiled on several machines at Kitware. Each component was built in two flavors, shared/debug and static/release:

  • Tcl/Tk 8.4
  • VTK (cvs)
  • ITK (cvs)
  • ITK Applications (cvs)
  • FLTK (1.1 svn)
  • BWHItkOptimization (cvs)

All platforms are so far described in the BWHItkOptimization/Results directory:

 Host      #CPU  CPU                         Freq      RAM   Arch     OS                                 Login
 amber2    2     Pentium Xeon                2.8 GHz   4 GB  64 bits  Linux 2.6 (Red Hat Enterprise 4)   kitware (ssh, vnc; cd ~/barre)
 fury      1     Pentium 4 (hyperthread)     2.8 GHz   1 GB  32 bits  Linux 2.6 (Fedora Core 4)          barre, jjomier, aylward
 mcpluto   1     Pentium D                   3.0 GHz   4 GB  64 bits  Linux 2.6 (Debian Etch)            davisb
 panzer    1     Intel Core Duo (dual core)  1.66 GHz  1 GB  32 bits  Mac OS X 10.4.8                    barre, jjomier, aylward
 sanakhan  1     Pentium M                   1.8 GHz   1 GB  32 bits  Windows XP SP2                     barre
 tetsuo    1     Pentium D (dual core)       3.2 GHz   2 GB  32 bits  Windows XP SP2                     barre

Tests

  • To run tests, use the ITK test driver. From the bin directory, run:
 ./OptimizationTests

and select the test you want to run.

  • LinearInterp: (to describe)

BWH

Systems to be described. We will set up our performance framework so that it can be run in an automated fashion, using CTest and a dashboard. Running from the spl machine should at that point only require one of us to check out the code, build it, and run ctest every night.

Status

  • kcachegrind and timing are being performed on amber2. Stay tuned.
  • valgrind is not supported on the x86_64 architecture :( Now using fury instead of amber2.
  • RegTests/RunLinearInterpTest.sh.in is configured automatically to run and time LinearInterp with various combinations of the threads, size, and factor parameters.
    • It was run on fury (release static): Results/fury.kitware.timings-rel.txt
    • It was run on fury (debug): Results/fury.kitware.timings-dbg.txt
    • It was run on amber2 (release static): Results/amber2.kitware.timings-rel.txt

Action Items