UCSD

[Image: GWE logo]

Overview

A core activity of UCSD is the development of infrastructure and support for the utilization of distributed computation resources, i.e. Grid Wizard Enterprise (GWE). This infrastructure allows, for example, Slicer3 to execute work in a distributed grid environment and enables NA-MIC algorithms to be tested in such an environment. This allows much quicker validation of algorithms developed in Core 1 and also makes it possible to test the effects of parameter settings through large-scale parameter searches. Work described in the previous progress report led to a prototype grid interface (aka Grid Wizard or gwiz). The overall purpose of this work in NA-MIC is to facilitate a "run-everywhere" philosophy for algorithm developers. By adopting a standard for algorithm "self-description" that is followed when command-line executables are written, both Slicer and distributed computational environments should be able to use those executables directly.
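
As a minimal sketch of the "self-description" idea, assuming a Python command-line tool and an --xml flag similar in spirit to the Slicer3 execution model (the actual schema and flag handling are defined by the Slicer project and are not reproduced here), the following shows how an executable might publish a machine-readable description of its own parameters so that Slicer or a grid launcher can discover how to invoke it:

  #!/usr/bin/env python
  # Hypothetical self-describing command-line tool (illustration only; the real
  # Slicer3 execution model defines its own XML schema).
  import argparse
  import sys

  # Parameter metadata kept in one place so that both the XML description and
  # the argument parser are generated from the same source.
  PARAMETERS = [
      {"name": "inputVolume",    "type": "string", "help": "Path to the input image"},
      {"name": "outputVolume",   "type": "string", "help": "Path to the output image"},
      {"name": "smoothingSigma", "type": "float",  "help": "Gaussian smoothing width", "default": 1.0},
  ]

  def print_xml_description():
      """Emit a simple XML self-description of this executable's parameters."""
      print("<executable>")
      print("  <title>Example Smoothing Filter</title>")
      print("  <parameters>")
      for p in PARAMETERS:
          print('    <parameter name="%s" type="%s">%s</parameter>'
                % (p["name"], p["type"], p["help"]))
      print("  </parameters>")
      print("</executable>")

  def main(argv):
      if "--xml" in argv:
          print_xml_description()
          return 0
      parser = argparse.ArgumentParser(description="Example self-describing filter")
      for p in PARAMETERS:
          kind = float if p["type"] == "float" else str
          parser.add_argument("--" + p["name"], type=kind, default=p.get("default"))
      args = parser.parse_args(argv)
      # The actual image processing would go here.
      print("Would smooth %s into %s with sigma=%s"
            % (args.inputVolume, args.outputVolume, args.smoothingSigma))
      return 0

  if __name__ == "__main__":
      sys.exit(main(sys.argv[1:]))

A host application that understands such a convention can call the executable once with --xml to learn its parameters and then build the concrete command line (or a grid job description) from that metadata.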

A basic requirement for the GWE environment is that a researcher expert in a particular scientific discipline should not also need to become an expert in grid computing in order to produce an application that uses grid technology. It is important to note that GWE is not meant to be another grid middleware package; rather, it is meant to be a large-scale job launching and management tool that bridges the gulf between biomedical researchers and current grid middleware by:

  • Providing the researcher with the ability to easily configure the heterogeneous cluster/grid resources that they have access to.
  • Allowing a researcher to easily specify large parametric computational jobs using the same general syntax as is used in the command-line invocation of the analysis algorithms (e.g. through the P2EL language; see the sketch after this list) or through integration with community-developed biomedical applications (e.g. Slicer3).
  • Managing the most common housekeeping tasks required to ensure end-to-end success of a computation, thereby relieving the researcher of this burden.
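
As a rough illustration of what specifying a large parametric job from command-line syntax means in practice (this sketch does not reproduce P2EL, whose actual syntax is defined by the GWE project), the Python fragment below expands a hypothetical command template over the cross-product of parameter values, producing one independent command line per combination:

  # Hypothetical parameter-sweep expansion (illustration only; this is not P2EL).
  from itertools import product

  def expand(template, **param_values):
      """Yield one concrete command line per combination of parameter values."""
      names = sorted(param_values)
      for combo in product(*(param_values[n] for n in names)):
          yield template.format(**dict(zip(names, combo)))

  # Example: sweep smoothing sigma and threshold for a hypothetical filter over
  # five input cases, producing 3 x 2 x 5 = 30 independent jobs.
  commands = list(expand(
      "ExampleFilter --sigma {sigma} --threshold {threshold} "
      "--input case{case:03d}.nrrd --output case{case:03d}_out.nrrd",
      sigma=[0.5, 1.0, 2.0],
      threshold=[10, 20],
      case=range(1, 6),
  ))
  print(len(commands), "independent jobs")
  print(commands[0])

Each generated command is independent of the others, which is exactly the property that lets a tool like GWE distribute them across a grid.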

Grid Wizard Enterprise (GWE) Background

The field of high performance computing (HPC) has provided a wide array of strategies for supplying additional computing power toward the goal of reducing the total “clock time” required to complete large-scale analyses. These strategies range from the development of higher-performance hardware to the assembly of large networks of commodity computers. However, for the non-computational scientist wishing to utilize these services, usable software remains elusive. Here we present the software design and implementation of a tool, Grid Wizard Enterprise (GWE; http://www.gridwizardenterprise.org/), aimed at the particular problem of the adoption of advanced grid technologies by biomedical researchers. GWE provides an intuitive environment and tools that bridge this gulf between the researcher and current grid technologies, allowing researchers to run mutually independent computational processes faster by brokering their execution across a virtual grid of computational resources with a minimum of user intervention. The GWE architecture has been designed in close collaboration with NA-MIC researchers and supports the majority of everyday tasks performed by computational scientists in the fields of computational biology and medical image analysis.
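
To make the brokering idea concrete in its simplest possible form (a conceptual sketch only, not GWE's actual architecture, which also takes care of the housekeeping tasks described above), the fragment below dispatches a set of mutually independent command lines to a small pool of hosts over plain ssh and gathers the exit codes as the jobs finish; the host names are placeholders:

  # Conceptual sketch of brokering independent jobs across a few hosts via ssh.
  # Host names are placeholders; passwordless ssh access is assumed.
  import subprocess
  from concurrent.futures import ThreadPoolExecutor
  from itertools import cycle

  HOSTS = ["cluster1.example.org", "cluster2.example.org"]

  def run_on(host, command):
      """Run one command on a remote host over ssh and return its exit code."""
      proc = subprocess.run(["ssh", host, command], capture_output=True, text=True)
      return host, command, proc.returncode

  def broker(commands, hosts=HOSTS, workers=4):
      """Round-robin the independent commands over the hosts, a few at a time."""
      with ThreadPoolExecutor(max_workers=workers) as pool:
          futures = [pool.submit(run_on, host, cmd)
                     for host, cmd in zip(cycle(hosts), commands)]
          return [f.result() for f in futures]

  if __name__ == "__main__":
      jobs = ["echo job-%d" % i for i in range(8)]
      for host, cmd, code in broker(jobs):
          print(host, cmd, "exit", code)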

GWE Information

Monthly Progress (since July 2007)

July - September 2007

  • Inception, analysis, architecture, design and implementation of Grid Wizard Enterprise (GWE) based on prior Grid Wizard prototype developed as part of NA-MIC.

October 2007

  • Presentation and live demo of first GWE prototype at BIRN AHM. (PowerPoint / PDF)

December 2007

January 2008

  • Unit tests of GWE's first release candidate components.
  • Internal pre-release of first version of GWE.
  • Release of GWE guides, technical details and collaboration tools in the GWE project site.

February 2008

  • First release of GWE (version 0.6.alpha). See its 'features' page for details.

March 2008

April 2008

  • GWE version 0.6.2.alpha released. See its release notes for details.
  • Wrote paper "Simplifying the Utilization of Grid Computation using Grid Wizard Enterprise" to be submitted to the 'MICCAI Grid Workshop'.

May 2008

June 2008

July 2008

August 2008

  • GWE version 0.6.4.alpha released. See its release notes for details.

September 2008

October 2008

  • Collected user requests at the BIRN AHM.

November 2008

December 2008

January 2009

  • NAMIC Winter Project Week and AHM 2009.
  • "GWE integration with catalog files" project at the NAMIC Winter Project Week 2009.

Miscellaneous

  • Weekly NAMIC engineering teleconferences.
  • GWE testbed and user support.

Dissemination Activities Prior to Monthly Progress Outlined Above

  • Introductory meeting and demonstration with Tina Kapur & BIRN-CC.
  • Hosted NAMIC dissemination event (February 17-18).
  • Taught Data Grid course at UCSD dissemination event.
  • Attended NAMIC dissemination event.
  • Attended SLC AHM.
  • Instructions available for deploying a "tunneled" SRB server.

Infrastructure Prior to Monthly Progress Outlined Above

  • Researched, tested and deployed a newly configured SRB server for NAMIC that allows for the tunneling of all SRB commands via SSH (see the sketch after this list). This tunneling has been tested with the command line (SCommands), Java (JARGON) and Windows (InQ) clients. The current staging server is running at UCSD and is available for testing.
  • A server co-located at BWH will be discussed at the AHM.
  • Leveraged BWH BIRN Rack to provide gigabit connection for na-mic.org.
  • Researching and developing a system for backend parallel processing of Slicer3 algorithms.
  • Discussed the use of BatchMake for submitting grid-like jobs to a Condor pool.
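
As a hedged sketch of what the SSH tunneling mentioned above can look like (the host name and ports below are placeholders, not the NAMIC SRB server's actual configuration), the fragment opens an SSH local port forward so that an SRB client pointed at localhost reaches the remote server through the encrypted tunnel:

  # Hypothetical SSH local port forward for an SRB client; host and ports are
  # placeholders, not the real NAMIC configuration.
  import subprocess

  SSH_HOST = "srb-gateway.example.org"   # assumed gateway reachable via SSH
  LOCAL_PORT = 5544                      # port the SRB client is pointed at (assumed)
  REMOTE_PORT = 5544                     # port the SRB server listens on (assumed)

  # "ssh -N -L local:localhost:remote host" keeps the tunnel open without a shell.
  tunnel = subprocess.Popen([
      "ssh", "-N",
      "-L", "%d:localhost:%d" % (LOCAL_PORT, REMOTE_PORT),
      SSH_HOST,
  ])
  print("Tunnel up; point the SRB client (e.g. the SCommands) at localhost:%d" % LOCAL_PORT)
  # ... run SRB commands through the tunnel here ...
  # tunnel.terminate() when finished.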

Data Sharing Prior to Monthly Progress Outlined Above

  • Hosting NAMIC data on a data grid accessible to all NAMIC participants.
  • Providing data grid and Portal support to all NAMIC participants.
  • Provided custom project space for NAMIC in BIRN Portal.
  • Provided account generation for first batch of NAMIC users. New users can now utilize the new account request feature in the Portal.
  • Working with Isomics to assist Core 3 sites in their data uploads.
  • Provided template data hierarchy constructs and integration of hierarchy in data grid.
  • Provided statistics to Tina Kapur on account creation, number of uploaded data sets, and audit information.