Difference between revisions of "Problems and Solutions on SPL Machine Blog"

From NAMIC Wiki
Revision as of 17:38, 20 August 2007


Current Problems on Debugging for SCIRun on SPL Machines


Since the install of Fedora 7 we have been having some problems, namely with OpenGL driver recognition by SCIRun.

For example, when we utilize the tools on the previous page, SCIRun gets pointed at the correct drivers, as shown here:

spl_tm64_1:/workspace/mjolley/Modeling/trunk/SCIRun/bin% ldd scirun | grep GL

       libGL.so.1 => /usr/lib64/nvidia/libGL.so.1 (0x0000003f8e200000)
       libGLU.so.1 => /usr/lib64/libGLU.so.1 (0x000000360c200000)
       libGLcore.so.1 => /usr/lib64/nvidia/libGLcore.so.1 (0x0000003f80e00000)


When these drivers are recognized, SCIRun runs as expected. However, each time SCIRun is run it reverts to the wrong drivers, as evidenced here:

spl_tm64_1:/workspace/mjolley/Modeling/trunk/SCIRun/bin% ldd scirun | grep GL

       libGL.so.1 => /usr/lib64/libGL.so.1 (0x0000003f84800000)
       libGLU.so.1 => /usr/lib64/libGLU.so.1 (0x000000360c200000)

After this, any OpenGL-dependent modules crash upon opening. If you repeat the "unsetenv" step and run Dav's script again, you get back to:


spl_tm64_1:/workspace/mjolley/Modeling/trunk/SCIRun/bin% ldd scirun | grep GL

       libGL.so.1 => /usr/lib64/nvidia/libGL.so.1 (0x0000003f8e200000)
       libGLU.so.1 => /usr/lib64/libGLU.so.1 (0x000000360c200000)
       libGLcore.so.1 => /usr/lib64/nvidia/libGLcore.so.1 (0x0000003f80e00000)
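A guess at what that recovery step amounts to (the contents of Dav's script are not shown on this page, so the variable and path below are assumptions): clear the stale setting, then point the dynamic linker at the NVIDIA directory so it wins over the Mesa libGL in /usr/lib64. Shown in sh syntax; the prompts above suggest tcsh on the SPL machines, where the equivalents are unsetenv and setenv:

```shell
#!/bin/sh
# Assumed workaround, not the actual contents of Dav's script.
# The loader searches directories in LD_LIBRARY_PATH before the default
# /usr/lib64, so listing the nvidia directory first should make ldd
# resolve libGL.so.1 to /usr/lib64/nvidia/libGL.so.1.
unset LD_LIBRARY_PATH                # tcsh: unsetenv LD_LIBRARY_PATH
LD_LIBRARY_PATH=/usr/lib64/nvidia    # tcsh: setenv LD_LIBRARY_PATH /usr/lib64/nvidia
export LD_LIBRARY_PATH

# Confirm before launching SCIRun:
#   ldd scirun | grep GL
echo "$LD_LIBRARY_PATH"
```

If this is roughly what the script does, the ldd listing above should again show the nvidia paths afterward.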


This did not occur previously with Fedora 5, and it is unclear why it is happening now. Any ideas? A new instance of SCIRun is opened every time a new bundle is run in our script, so this prohibits the use of scripts on SPL machines. Is this a consequence of dynamic compilation, i.e., you have to run the script again every time you do a new cmake and build, and it is also rebuilding every time it runs? Does the dynamic compilation essentially reset things every time by rebuilding components of SCIRun? Clearly this is over my head and input is appreciated.--Mjolley 13:38, 20 August 2007 (EDT)
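One possible angle on the scripting problem, if it is the per-process environment that gets reset between bundles: pin LD_LIBRARY_PATH on each SCIRun invocation rather than once per session. This is a sketch under that assumption; the scirun path and bundle names are hypothetical, and "echo" stands in for the real binary here:

```shell
#!/bin/sh
# Hypothetical batch loop: the VAR=value cmd form scopes the setting to
# that single invocation, so nothing done between bundles can revert it.
# Replace "echo ./scirun" with the real ./scirun on an SPL machine.
SCIRUN="echo ./scirun"

for bundle in bundle1.srn bundle2.srn; do
    LD_LIBRARY_PATH=/usr/lib64/nvidia $SCIRUN "$bundle"
done
```

This does not explain why the environment reverts, but it may sidestep the reversion long enough to keep batch runs working.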