Algorithm:Future Plans


NAMIC Core 1 Plans March 2005

This document is intended to serve as an outline of plans for NAMIC Core 1. This includes:

  • Goals of Core 1
  • Interactions with Core 3
  • Plans for integration of tools into the toolkit (including interactions directly with Core 2)
  • Scientific agendas as executed at Core 1 sites

Goals:

  • What have we learned about the pathophysiology of schizophrenia?
    • How did or will NAMIC tools provide valuable leverage in asking and answering questions about schizophrenia?
  • What is the vision of the NAMIC software solution?
    • How do the needs of the end-users get factored into the design of the software toolkit?
    • How do the interests of the algorithm developers get matched to those needs?


Timeline:

To focus the development of relevant tools for the Core 3 collaborators, it is critical that essential information be gathered and coordinated, and that teams of investigators be organized for focused interaction. The list below is a proposed set of needs and an associated timetable for addressing them.


Requirements dashboard

The goal of this stage is to assemble a coherent collection of requirements for software in the toolkit, especially as it supports Core 3 activities. Note that there will probably need to be a decision-making process to cull Core 3 requests that are not feasible or reasonable, either for technical reasons or because of time constraints.

  • April 1: Management Core convenes requirements dashboard working group.
  • April 15: Specifications for requirements dashboard due.
  • May 1: Phase 1 dashboard release.
  • May 15: Requirements dashboard Core feedback due.
  • June 1: Phase 2 dashboard release.
  • June 1: Requirements entered.
  • June 30: Quantitative process review.
  • July 10: Lessons learned; process improvement recommendations.

Requirements elicitation/extraction

A more detailed cut at this plan, focused on gathering requirements, is listed below:

  • April 15: Management Core defines small cross-functional teams for requirements extraction.
  • May 1: Cross-functional teams hold teleconferences with Core 3 team members for software profiling and requirements elicitation/extraction.
  • June 1: Requirements dashboard entries due.
  • June 1: Cores 1 and 3 feedback on functional requirements specification.
  • June 30: Tool integration Round 2 (see Toolkit integration below).
  • June 30: Quantitative process review.
  • June 30: Requirements satisfaction read-out.
  • July 10: Lessons learned; process improvement.


Requirements elicitation/gathering:

  • Cross-functional teams (Cores 1, 2, 4, 6, …) meet with Core 3 investigators to profile current tools and elicit functional requirements for NAMIC solutions.
  • Profile existing software solutions at Core 3 sites.
  1. What software tools do you use and why?
  2. What is your vision of the ideal software solution?
  3. Do you use FSL, SPM2, SPM99, AFNI, BrainVoyager, VoxBo, MedX, …?
  4. What do you like and dislike about these software tools?
  5. What are their strengths and weaknesses?
  6. On a scale of 1-10, how would you rate these tools?
  • Elicit functional requirements.
  1. What do you need? Why is that important for your research?
  2. Using a scale of 1-10, can you prioritize these in terms of anticipated difficulty and scientific impact?
  3. What questions will you be able to ask with these tools that you can’t ask now?
  4. If you had to identify 3 scientific concepts that all team members should be familiar with, what would these be, e.g., statistical mapping, the localization problem, … ?
  5. On a scale of 1-10, how well is the current software process serving your needs? How can we improve the process?
  6. Do you think we ‘get it’?

Requirements management:

  • Define requirements management framework
  1. Define quantitative metrics for functional requirements satisfaction.
  2. Define channels for continual input from Core 3.
  3. Functional requirements entered into the requirements management dashboard (a hypothetical sketch of such an entry follows this list).
  4. Specific individuals from Cores 1, 2, 4, 6, … designated as requirements champions to advocate for requirements at design meetings.
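
As one concrete illustration of items 1 and 3 above, a dashboard entry could be as small as the record sketched below. The struct, its field names, and the rating scales are hypothetical assumptions for illustration only, not an agreed NAMIC schema.

  // Hypothetical requirements-dashboard entry; every field name and scale here
  // is an illustrative assumption, not an agreed NAMIC schema.
  #include <string>

  struct RequirementEntry
  {
    std::string id;           // e.g. a hypothetical identifier such as "REQ-017"
    std::string description;  // functional requirement as stated by Core 3
    std::string source;       // requesting Core 3 site / investigator
    std::string champion;     // designated requirements champion (Cores 1, 2, 4, 6, ...)
    int         impact;       // 1-10 anticipated scientific impact, as rated by Core 3
    int         difficulty;   // 1-10 anticipated implementation difficulty
    double      satisfaction; // quantitative requirements-satisfaction metric, 0.0-1.0
  };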

Lessons learned:

  • What lessons can we learn from the mixed successes of other large collaborative scientific software projects?
  1. “Get it”: Embrace software processes and a philosophy that will allow the Core 3 investigators to say that we understand their needs and challenges.
  2. Get religion about requirements.
  3. Maintain the distinction between functional and non-functional requirements.
  4. Align personal goals/wants early.
  5. Monitor the ground.
  6. Develop regular use cases.
  7. Build investigator focus into software processes.
  8. Put priority on scientific/clinical output.


Software definition:

We need to develop a coherent vision of what the NAMIC software system will be and of how to deliver it effectively to our current Core 3 collaborators, as well as to future ones.

  • Define NAMIC solution
  1. What are the software needs?
  2. What is the NAMIC solution?
  3. What are the strengths and weaknesses of the NAMIC solution relative to existing solutions?
  4. Is there a framework for interoperability with existing software solutions at Core 1 and 3 sites?
  5. Integration of new methods into the toolkit (Slicer, ITK); a minimal sketch of how such a method might slot into an ITK pipeline follows this list.
  6. Build adoption for NAMIC solutions at Core 3 sites. Profile and address barriers to adoption.
  7. Collaboration with Core 3 to use new tools to investigate clinical questions.
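
The sketch below is a reference point for item 5: under the assumption that a new NAMIC method is packaged as a standard ITK image-to-image filter, it drops into the usual reader/filter/writer pipeline, and its output can be loaded into Slicer alongside the original data. The anisotropic diffusion filter here is only a stand-in for a hypothetical NAMIC method, and the command-line file arguments are placeholders; this is not a statement of the eventual NAMIC architecture.

  #include "itkImage.h"
  #include "itkImageFileReader.h"
  #include "itkImageFileWriter.h"
  #include "itkCurvatureAnisotropicDiffusionImageFilter.h"
  #include <iostream>

  int main(int argc, char *argv[])
  {
    if (argc < 3)
    {
      std::cerr << "Usage: NewMethodExample inputImage outputImage" << std::endl;
      return 1;
    }

    typedef itk::Image<float, 3>            ImageType;
    typedef itk::ImageFileReader<ImageType> ReaderType;
    typedef itk::ImageFileWriter<ImageType> WriterType;
    typedef itk::CurvatureAnisotropicDiffusionImageFilter<ImageType, ImageType> FilterType;

    // Read the input volume (any format ITK supports, e.g. NRRD or Analyze).
    ReaderType::Pointer reader = ReaderType::New();
    reader->SetFileName(argv[1]);

    // Stand-in for a new NAMIC method exposed as an itk::ImageToImageFilter.
    FilterType::Pointer filter = FilterType::New();
    filter->SetInput(reader->GetOutput());
    filter->SetNumberOfIterations(5);
    filter->SetTimeStep(0.0625);       // stable explicit time step for 3D
    filter->SetConductanceParameter(2.0);

    // Write the result so Slicer (or any downstream tool) can load it.
    WriterType::Pointer writer = WriterType::New();
    writer->SetInput(filter->GetOutput());
    writer->SetFileName(argv[2]);
    writer->Update();

    return 0;
  }

The point of the sketch is the interface: a method that behaves like any other ITK filter is much easier to schedule for toolkit integration than one with a bespoke I/O path.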

Toolkit integration:

We need to assemble a rough schedule of expected tools. Initially this comes from Core 1 interests; clearly, as we work through requirements from the Core 3 side, we will need to modify it. However, it provides a starting point for capturing current algorithm development plans.

Expected Tools and Working Groups

  • MIT
  1. Shape-guided level sets for segmentation (already exists in ITK)
  2. DTI analysis tools (already exists in Slicer)
  3. Shape-based MRF segmentation (expected by July 1, Kilian Pohl is already working on incorporating existing code base into Slicer)
  4. Population shape analysis – joint collaboration with UNC (Martin Styner). (Prototype pipeline expected by August 1; integration with Slicer remains an open issue due to code complexities.)
  • MGH
  1. QBALL (done)
  2. Tensor-based statistical group comparison (June 30)
  • UTAH
  1. Tensor statistics and representations (April 30; an illustrative ITK sketch appears at the end of this section)
  2. DTI interpolation (June 30)
  3. DTI filtering (June 30)
  4. Hypothesis testing for tensors (expected by September, but the work is still at a more speculative stage and will require feedback from users before a firmer timetable is possible)
  • UNC
  1. Tensor statistics and representations
  2. Clustering tools
  3. Prototype platform for analysis
  4. Shape representations and analysis tools
  • Georgia Tech
  1. Rule based segmentation methods (June 30)
  2. Statistically based segmentation methods (implemented in ITK)
  3. DTI analysis methods (June 30)
  4. Shape analysis tools (August 31)

Expected Core 3 Collaborations

  • MIT
  1. Population analysis of shape (with Martha Shenton)
  2. DTI analysis (with Andy Saykin)
  3. fMRI analysis (with Andy Saykin)
  • MGH
  1. DTI group comparison of schizophrenia SNP subtypes (MGH-Irvine)
  • UTAH
  1. DTI statistics and processing (with Martha Shenton)
  2. Tensor hypothesis testing (with Martha Shenton)
  • UNC
  1. DTI properties – with BWH
  2. Shape analysis – with BWH
  • Georgia Tech
  1. Segmentation of structures – with Irvine, BWH
  2. Level set analysis tools – with Utah
  3. Shape analysis – with UNC
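
As a concrete reference for the DTI-related items above (tensor statistics, DTI analysis, group comparison), the following is a minimal sketch, not NAMIC code, of computing a per-voxel fractional anisotropy (FA) map with ITK's diffusion tensor pixel type; an FA volume of this kind is the sort of scalar map a group comparison or a Slicer visualization would consume. The command-line file names and the assumption of an ITK-readable tensor volume (e.g. NRRD) are illustrative.

  #include "itkDiffusionTensor3D.h"
  #include "itkImage.h"
  #include "itkImageFileReader.h"
  #include "itkImageFileWriter.h"
  #include "itkImageRegionConstIterator.h"
  #include "itkImageRegionIterator.h"
  #include <iostream>

  int main(int argc, char *argv[])
  {
    if (argc < 3)
    {
      std::cerr << "Usage: TensorFAExample inputTensors outputFA" << std::endl;
      return 1;
    }

    typedef itk::DiffusionTensor3D<float> TensorType;
    typedef itk::Image<TensorType, 3>     TensorImageType;
    typedef itk::Image<float, 3>          ScalarImageType;

    // Read the tensor volume (e.g. a NRRD produced by DTI reconstruction).
    typedef itk::ImageFileReader<TensorImageType> ReaderType;
    ReaderType::Pointer reader = ReaderType::New();
    reader->SetFileName(argv[1]);
    reader->Update();
    TensorImageType::Pointer tensors = reader->GetOutput();

    // Allocate an FA image on the same grid as the tensors.
    ScalarImageType::Pointer fa = ScalarImageType::New();
    fa->CopyInformation(tensors);
    fa->SetRegions(tensors->GetLargestPossibleRegion());
    fa->Allocate();

    // FA is provided directly by the itk::DiffusionTensor3D pixel type.
    itk::ImageRegionConstIterator<TensorImageType> tIt(tensors, tensors->GetLargestPossibleRegion());
    itk::ImageRegionIterator<ScalarImageType>      fIt(fa, fa->GetLargestPossibleRegion());
    for (; !tIt.IsAtEnd(); ++tIt, ++fIt)
    {
      fIt.Set(tIt.Get().GetFractionalAnisotropy());
    }

    // Write the FA map for group comparison or visualization in Slicer.
    typedef itk::ImageFileWriter<ScalarImageType> WriterType;
    WriterType::Pointer writer = WriterType::New();
    writer->SetInput(fa);
    writer->SetFileName(argv[2]);
    writer->Update();

    return 0;
  }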

Suggested organizational structure:

  • Set up a timeline for integration of tools
  • Set up a specific plan for meeting with Core 3 partners
  • Set up specific sets of data in coordination with Core 2 and Core 3