NAMIC Wiki:Community Pipelines
Pipelines in NAMIC can be divided into two main categories:
- Dataflow Pipelines at the C++ Class level (as done in ITK/VTK)
- Workflow Pipelines at the level of a clinical study or experiment.
The first type is fairly well established and standardized; the second is evolving. This page is meant to explore the implementation of Workflow Pipelines for NAMIC.
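To make the first category concrete, the following is a minimal dataflow sketch using VTK's Python wrapping; the particular source and filter classes are illustrative choices only, not part of any NAMIC-specified pipeline. Filter outputs are connected directly to filter inputs at the class level, and data flows through when the downstream filter updates.

 # Minimal dataflow-pipeline sketch using VTK's Python wrapping.
 # The specific classes are illustrative, not a NAMIC-prescribed pipeline.
 import vtk

 # A synthetic image source feeds a Gaussian smoothing filter; the
 # connection is made object-to-object, not through files on disk.
 source = vtk.vtkRTAnalyticSource()
 smoother = vtk.vtkImageGaussianSmooth()
 smoother.SetStandardDeviation(2.0)
 smoother.SetInputConnection(source.GetOutputPort())

 # Updating the downstream filter pulls data through the pipeline.
 smoother.Update()
 print(smoother.GetOutput().GetDimensions())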
Workflow Pipeline Tools
The LONI Pipeline is being developed at UCLA for NAMIC and other projects to provide a high-level user interface to processing tools. Each processing step runs as a Unix process, and the LONI Pipeline interfaces with Sun Grid Engine to distribute these processes across multiple machines. See LONIPipelineSummary for a more detailed overview.
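Because each processing step is an ordinary Unix process, a step can be handed to Sun Grid Engine by wrapping its command line in a qsub submission. The sketch below only illustrates that mechanism; the command and job options shown are assumptions, and the LONI Pipeline generates and tracks such submissions internally rather than exposing a function like this.

 # Sketch: hand one processing step to Sun Grid Engine via qsub.
 # The example command is hypothetical; the LONI Pipeline manages
 # submissions itself -- this only shows the underlying mechanism.
 import subprocess

 def submit_step(command, job_name="namic_step"):
     """Submit a command (given as a list of strings) as an SGE batch job."""
     qsub = ["qsub", "-N", job_name, "-cwd", "-b", "y"] + command
     result = subprocess.run(qsub, capture_output=True, text=True, check=True)
     return result.stdout.strip()   # qsub reports the assigned job id here

 if __name__ == "__main__":
     # e.g. run a (hypothetical) smoothing executable on one subject's image
     print(submit_step(["SmoothingExample", "input.mha", "output.mha"]))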
Workflows Needed in NAMIC
Each DBP has provided multi-subject research data sets for testing. The Workflow Pipeline should support the following basic steps (a minimal scripted sketch follows the list):
- Allow selection of a dataset (with a browse tool)
- Iterate through the subjects:
  - download the data from the DataRepository to the processing machine
  - invoke the processing tool
  - upload the results back to the DataRepository
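A minimal scripted sketch of these steps follows. The repository URL, subject identifiers, and processing command are placeholders; a real workflow would obtain them from the selected dataset and from the DataRepository's actual access interface.

 # Per-subject workflow sketch: download, process, upload.
 # Repository URL, subject IDs, and the processing command are placeholders.
 import subprocess
 import urllib.request

 REPO = "http://example.org/DataRepository"   # placeholder repository URL
 SUBJECTS = ["case01", "case02", "case03"]    # placeholder subject IDs

 for subject in SUBJECTS:
     local_in = f"{subject}_input.mha"
     local_out = f"{subject}_smoothed.mha"

     # 1. Download the subject's data to the processing machine.
     urllib.request.urlretrieve(f"{REPO}/{subject}/input.mha", local_in)

     # 2. Invoke the processing tool as an ordinary Unix process.
     subprocess.run(["SmoothingExample", local_in, local_out], check=True)

     # 3. Upload the result back (an HTTP PUT is one possible mechanism).
     with open(local_out, "rb") as f:
         req = urllib.request.Request(f"{REPO}/{subject}/smoothed.mha",
                                      data=f.read(), method="PUT")
         urllib.request.urlopen(req)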
The best initial example would be to use one of the example ITK executables presented in the Dissemination Workshops, such as a Gaussian smoothing filter as described in these PowerPoint slides.
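As a concrete stand-in for the "SmoothingExample" command used in the sketches above, such a step could call the executable built from ITK's discrete Gaussian image filter example. The executable name and argument order below are assumptions about a typical ITK example build, not a documented interface.

 # Sketch: invoke a Gaussian smoothing ITK example executable.
 # Executable name and argument order (input, output, variance,
 # maximum kernel width) are assumptions about a typical example build.
 import subprocess

 def smooth(input_path, output_path, variance=2.0, max_kernel_width=5):
     cmd = ["DiscreteGaussianImageFilter", input_path, output_path,
            str(variance), str(max_kernel_width)]
     subprocess.run(cmd, check=True)

 smooth("case01_input.mha", "case01_smoothed.mha")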
Other needed workflows include:
- Processing DTI data across subject populations
- Group statistical analysis with respect to genetic and clinical data
- other...