Slicer Grid Computing

Items discussed

  • Stephen Aylward (Kitware) presented the BatchMake interface to Condor
  • Neil Jones (BIRN) presented GridWizard
  • The group discussed different data storage technologies: MIDAS, BIRN Federated Data/XNAT/HID, etc.

Where we are

Both BatchMake and GridWizard have (partially working) demos of launching multiple processes on remote machines. However, it is still unclear how these systems will interface with the Slicer environment to meet community needs. To that end, it is worth answering a few questions:

  • What is the main community we should target? Will there be different modes of interaction for algorithm developers and domain scientists?
  • What are two specific use cases of a Slicer user interacting with batch processing facilities through Slicer? Where does the data come from, and where does it need to end up? What data needs to get passed back to Slicer itself?
  • What are two specific use cases of a NAMIC user interacting with batch processing facilities, perhaps outside of Slicer?
  • Stephen (Aylward) presented a soup-to-nuts backend that manages "experiments" and data and displays metrics computed from images in graphical form. Neil (Jones) discussed the infrastructure for running large numbers of jobs and the portal dashboard used to track them.
    • What facilities within Slicer or within other environments (e.g. a portal or stand-alone applications) are necessary to track and manage these jobs?
  • What look-and-feel of modules in Slicer is required? Do we need special "grid"-type parameters in the execution model, or should we just adhere to the "min parameter", "max parameter", "step" notion (see the sketch after this list)?
    • Can we handle configuration of either tool out of band from Slicer?
    • Can we handle monitoring out of band from Slicer?
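
A minimal sketch of the "min parameter" / "max parameter" / "step" question above, to make it concrete. It assumes nothing about the BatchMake or GridWizard interfaces; the module name and parameter below are hypothetical, and in practice each job would also need a distinct output filename. The point is only that such a specification can be expanded into one command line per job, which either system could then queue on remote machines:

    # Hypothetical sketch: expand "min / max / step" parameter specifications
    # into one concrete command line per job for a batch system to queue.
    import itertools

    def expand_sweep(executable, fixed_args, sweep_params):
        """sweep_params maps a command-line flag to (minimum, maximum, step)."""
        axes = []
        for flag, (lo, hi, step) in sweep_params.items():
            values, v = [], lo
            while v <= hi + 1e-9:  # tolerate floating-point round-off at the top end
                values.append((flag, v))
                v += step
            axes.append(values)
        # Cartesian product of all swept parameters -> one command line per job
        for combo in itertools.product(*axes):
            cmd = [executable] + list(fixed_args)
            for flag, value in combo:
                cmd += [flag, str(value)]
            yield cmd

    # Example: sweep a smoothing parameter of a (hypothetical) Slicer CLI module
    for cmd in expand_sweep("GaussianBlurImageFilter",
                            ["input.nrrd", "output.nrrd"],
                            {"--sigma": (0.5, 2.0, 0.5)}):
        print(" ".join(cmd))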

Until now, we have focused on infrastructure development: building tools to perform generic batch processing. The tools are now at a stage where progress can be made by applying them to specific research needs, rather than by merely improving the tools without deploying them.