Problems and Solutions on SPL Machine Blog

 
Current Problems on Debugging for SCIRun on SPL Machines

----
  
With the installation of Fedora 7 we have been having some problems, namely with OpenGL driver recognition by SCIRun.

Jan 08: There are currently no SPL-specific problems with SCIRun on SPL machines. There is a more general bug in SCIRun related to questionably thread-safe code, specifically the dlopen calls that sit primarily in the dynamic compilation portion of SCIRun. These show up often on the fat nodes. Jeroen is working to eliminate dynamic compilation, and with it these bugs, which manifest more frequently and randomly with large networks and on multicore machines that "stress" the thread safety of the code. If you are using SCIRun and run into these bugs, please let him know.
 
 
For example, when we use the tools on the previous page, SCIRun gets pointed at the correct drivers, as shown here:
 
 
 
spl_tm64_1:/workspace/mjolley/Modeling/trunk/SCIRun/bin% ldd scirun | grep GL
 
        libGL.so.1 => /usr/lib64/nvidia/libGL.so.1 (0x0000003f8e200000)
 
        libGLU.so.1 => /usr/lib64/libGLU.so.1 (0x000000360c200000)
 
        libGLcore.so.1 => /usr/lib64/nvidia/libGLcore.so.1 (0x0000003f80e00000)
 
spl_tm64_1:/workspace/mjolley/Modeling/trunk/SCIRun/bin%
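The environment change behind this presumably amounts to prepending the NVIDIA driver directory to the runtime loader search path. A minimal tcsh sketch of that idea follows; the directory /usr/lib64/nvidia is taken from the ldd output above, but the actual contents of Dav's script are not shown on this page, so treat this only as an illustration:

    # Prepend the NVIDIA OpenGL libraries (path taken from the ldd output
    # above) to the runtime linker search path, then launch SCIRun.
    if ($?LD_LIBRARY_PATH) then
        setenv LD_LIBRARY_PATH /usr/lib64/nvidia:${LD_LIBRARY_PATH}
    else
        setenv LD_LIBRARY_PATH /usr/lib64/nvidia
    endif
    ./scirun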
 
 
 
When these drivers are recognized, SCIRun runs as expected. However, each time SCIRun is run it reverts to the wrong drivers, as evidenced here:
 
 
 
spl_tm64_1:/workspace/mjolley/Modeling/trunk/SCIRun/bin% ldd scirun | grep GL
 
        libGL.so.1 => /usr/lib64/libGL.so.1 (0x0000003f84800000)
 
        libGLU.so.1 => /usr/lib64/libGLU.so.1 (0x000000360c200000)
 
 
 
After this, any OpenGL-dependent modules crash upon opening. If you repeat the "unsetenv" step and run Dav's script again, you get back to:
 
 
 
 
 
spl_tm64_1:/workspace/mjolley/Modeling/trunk/SCIRun/bin% ldd scirun | grep GL
 
        libGL.so.1 => /usr/lib64/nvidia/libGL.so.1 (0x0000003f8e200000)
 
        libGLU.so.1 => /usr/lib64/libGLU.so.1 (0x000000360c200000)
 
        libGLcore.so.1 => /usr/lib64/nvidia/libGLcore.so.1 (0x0000003f80e00000)
 
spl_tm64_1:/workspace/mjolley/Modeling/trunk/SCIRun/bin%
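Spelled out in tcsh, the recovery sequence described above is roughly the following; the variable being cleared and the name of Dav's script are not given on this page, so both are assumptions/placeholders:

    unsetenv LD_LIBRARY_PATH              # assumed variable; clear the stale loader path
    source /path/to/davs_gl_setup.csh     # placeholder name for Dav's script from the previous page
    ldd scirun | grep GL                  # verify /usr/lib64/nvidia/libGL.so.1 is picked up again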
 
 
 
 
 
This did not occur previously with Fedora 5, and it is unclear why it is happening now. Any ideas? A new instance of SCIRun is opened every time a script is run, so this effectively prohibits the use of scripts on SPL machines.
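Until the cause is found, one possible stopgap for scripted runs is to perform the same reset inside the script itself before every SCIRun launch. A hypothetical tcsh fragment, with the script and network file names as placeholders:

    # Hypothetical wrapper: reset the loader path before each SCIRun invocation
    # so a scripted run does not inherit the wrong OpenGL drivers.
    unsetenv LD_LIBRARY_PATH
    source /path/to/davs_gl_setup.csh     # placeholder for Dav's script
    ./scirun my_network.srn               # placeholder network file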
 
