FBIRN:TuesPMRoadmapMarch2006.2


To do:

  1. B0 unwarping included in FIPS
  2. BH program included in FIPS
  3. QA metrics determined and included as queryable in the HID--this includes using Syam's QA tools as an initial modularization effort
  4. FIPS/HID interaction proof of concept



Discussion

Lee--2 ideas:

  1. include effect size as part of the level 1 analysis input to level 2
    1. the beta weight is only one of the things to come out of a GLM; we can also get percent signal change, Cohen's d, PVAF (percent variance accounted for), etc.
    2. we will also want ROI volume, the number of activated voxels, or similar measures
  2. base the image format on the NIfTI standard
    1. put all our XML header info into the NIfTI header
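A minimal sketch of idea 1, deriving extra level-1 effect-size measures from a GLM fit. All array names, values, and the threshold below are hypothetical illustrations, not actual FIPS code:

```python
import numpy as np

# Hypothetical per-voxel GLM outputs (illustrative values, not real data):
rng = np.random.default_rng(0)
beta = rng.normal(1.0, 0.5, size=(8, 8, 8))   # contrast estimates (beta weights)
resid_sd = np.full((8, 8, 8), 2.0)            # residual standard deviation
baseline = np.full((8, 8, 8), 100.0)          # mean (baseline) signal level

# Effect-size maps beyond the raw beta weight:
cohens_d = beta / resid_sd                    # standardized effect size
pct_change = 100.0 * beta / baseline          # percent signal change

# ROI-style summary measures of the kind mentioned above:
d_threshold = 0.6                             # arbitrary cutoff for illustration
n_active = int((cohens_d > d_threshold).sum())  # number of "activated" voxels
```

Any of these maps or summaries could then be passed alongside the beta weights as level-2 inputs.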

Doug--the issues as identified by the Sunday night group or elsewhere

  1. connecting FIPS to the HID
    1. Action item: HID/FIPS proof of concept: FIPS to make a call to the HID and extract some information from it, e.g. diagnosis/group label
    2. Jeremy and Jeff to create a "webservice" to query the HID and extract information
    3. what about using FIPS outside the HID environment? That's the nice thing about using the webservice--it can talk to any database, and we'd have the option of not talking to any database at all.
  2. QA and repair
    1. looking for spikes and motion artifacts--raw data level
      1. Greg M and Greg B are both interested in this, to FIPS-ify the kinds of things Chris was reporting on. See QA metrics below.
    2. masking issues
    3. registration checks
  3. Modularization: being able to insert other pieces into the pipeline
    1. we're close to having modularization on the pre-processing side of things. The Duke pre-processing pipeline should be FIPS-enabled. That will lead to a more generalized method for developing and plugging in other modules (rules to be published/disseminated).
    2. Changing the order of the steps is trickier. E.g., right now the normalization is done on the analyzed maps, and doing it before the GLM stage brings some issues with it. Lee is very interested in incorporating Worsley's Brainstat.
      1. defining the design matrix in an interoperable way will be tricky (i.e., interpreting an FSL design matrix so another program can understand it)
    3. how hard is it to include B0 unwarping in FIPS? Answer: it could be done in the next month.
  4. Centralized configurations--e.g., distribute flacs with FIPS; when you run fips-fla and specify a flac, it could look on the SRB first, but then if the SRB is down, your analyses stop.
    1. is a standardized analysis going to be applied to the pre-processed data so that it's available on the SRB? Greg B. doing the pre-processing on all sites.
      1. user feedback: a number of users want just the rough-and-ready beta maps with some log of the problematic data. UCSD is ready to provide that.
      2. Action item: FLACS--we need a meeting of the minds to hammer out the parameters for the centralized flacs in the next two weeks.
      3. Action item: QA metrics--Randy: UCSD has downloaded the data and is ready to pre-process it, but the QA data is a different issue. With Syam's help, and with Brendan and Dave, take the QA programs and rip through all the data. Can we do that while running locally on the SRB? We can run on one of the machines in the physical rack against the local data, but then FIPS has to be on each rack (which will happen in the April release of the racks).
        1. is the "right" version of FIPS included in the ROCKS release?
  5. Timing, and computational power/cluster
    1. FIPS is not currently amenable to parallel analysis; it works on a local filesystem
    2. Action item: Cutting out the upload/download middleman. But that's lower priority. Explore the option of writing wrappers around the current scripts rather than rewriting them. Also explore the options with LONI. Good topic for the FBIRN programming week!!
  6. When are we going to bring the inter-site calibration into it?
    1. Rough and ready, pre-processed data up in the very short term
      1. including basic first level analyses without intersite correction
    2. Then what?
      1. We want an uncorrected dataset and a corrected dataset to show that the intersite corrections work on Phase II data
      2. Intensity normalization and smooth-to are already included in FIPS (data have to be re-run to include smooth-to); Doug is working with Gary to include BH in FIPS by October (hopefully earlier). Greg B. offers to help with the programming. SFNR is farther off.
      3. Coregistration is still a question: register to the T1? The T2? An atlas? The simplest thing we can publish is to coregister directly to MNI-152, so that's what we're going to do.
      4. what to do with outliers/artifacts? Steve S. uses interpolation within AFNI to remove problematic images.
        1. Action item: Automation of the artifact removal. This is a lower priority than getting QA metrics into the queryable database.
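The FIPS/HID proof of concept above (FIPS calling a HID webservice to extract, e.g., a diagnosis/group label) could be sketched roughly as follows. The response schema and tag names are invented for illustration and are not the real HID webservice API:

```python
import xml.etree.ElementTree as ET

def parse_group_label(xml_response: str) -> str:
    """Extract a diagnosis/group label from a (hypothetical) HID XML response."""
    root = ET.fromstring(xml_response)
    return root.findtext("subject/groupLabel")

# Example of the kind of XML a webservice might return (invented schema):
sample = (
    "<hidResponse>"
    "<subject><id>000123</id><groupLabel>control</groupLabel></subject>"
    "</hidResponse>"
)
print(parse_group_label(sample))  # -> control
```

Because the interface is just XML over a webservice call, the same client code could talk to any database backend, or to none at all.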

Greg B: There's a push to provide two kinds of datasets to the BIRN community. The current plan is to provide in the short-term a pre-processed dataset with a QA log but with no repair done. It's too early to pick a repair algorithm.

Greg M: Syam's tools do that and dump an XML file, which is queryable. Chris and Syam and Dave K have talked about this and think of this as an initial version of the QA log.

  • Some artifacts affect the pre-processing and some don't. Those that do should be removed automatically.
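A toy sketch of the kind of raw-data spike check and queryable XML QA log discussed above. The detection rule, thresholds, and tag names here are illustrative assumptions, not Syam's actual tools:

```python
import xml.etree.ElementTree as ET
import numpy as np

def detect_spikes(volume_means, z_thresh=3.0):
    """Flag time points whose global mean intensity deviates strongly."""
    m = np.asarray(volume_means, dtype=float)
    z = (m - m.mean()) / m.std()
    return [i for i, zi in enumerate(z) if abs(zi) > z_thresh]

def qa_log_xml(spike_indices):
    """Dump flagged time points as a small XML log (invented tag names)."""
    root = ET.Element("qaLog")
    for i in spike_indices:
        ET.SubElement(root, "spike", attrib={"timepoint": str(i)})
    return ET.tostring(root, encoding="unicode")

means = [100.0] * 20
means[7] = 140.0           # simulated intensity spike at timepoint 7
spikes = detect_spikes(means)
print(qa_log_xml(spikes))  # flags timepoint 7
```

An XML log like this could be loaded into the HID so QA metrics become queryable alongside the imaging data.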