TractographyWorkshop Core1 ActionPlan

This page refers to an active research project within NA-MIC. If you are interested in participating, we welcome your input. Please contact Randy Gollub.

== At the Workshop we agreed to complete the following: ==
 
1)  Define a NA-MIC endorsed DWI pre- and post-processing pipeline that uses NA-MIC toolkit software when available, and other freely available software if unanimously agreed upon by the group (e.g. some FSL tools are in widespread use, but for academic use only), for this project.  Any tools not compatible with NA-MIC licensing that are essential to the pipeline will be put on a short list for future NA-MIC development. <br />
 
2)  Curate and post in the NA-MIC Publication database one or more sets of DWI data to be used within NA-MIC for analytic tool development, testing and calibration.<br />
 
3)  Complete a rigorous analysis of the properties of the tractography approaches in use or under development within NA-MIC Core 1 teams on these data sets, including test-retest reliability.<br />
 
4)  Prepare and submit for publication a scholarly report of this work, with Sonia Pujol taking the lead under the mentorship of CF Westin, Ross, Guido and Randy.  All participants in the work will share authorship.
 
  
 
== Brief summary of the presentations, comparative analysis of tractography methods/approaches: ==
 
There were both common and disparate results across tractography approaches.  All methods showed a high degree of intersubject variability, which drove the decision to find a dataset that included at least one within-subject replication.  Finer-grained differences in how the algorithms report results were also noted, e.g. weighting the number of streamlines per voxel and how that affects the voxelwise statistical calculations (hence the decision below to use volume rather than number of streamlines in the next iteration).  The same data sets were found to be outliers and/or posed greater challenges in several of the algorithms.  Many of the algorithms could not use the posted ROIs because they needed volume ROIs rather than slice-plane ROIs.  The different laboratories used fairly similar methods to generate volume-based ROIs, so coming to consensus on how to do this for the next round was easy.  This was one of the most time-consuming steps, so getting it correct on the next round will be a big advantage.  The presentations also highlighted exactly how different preprocessing steps and the choice of ROIs affected the outcome of the different algorithms.  This led to a unanimous agreement to choose only one pre-processing pipeline for the next phase of our work (it needs to include a white matter mask) and greatly simplified the decision-making for the next phase.  We were all struck by the vast range of results of the different tractography algorithms even after controlling for many of the preprocessing steps.  Notably, not every site completed and/or presented analysis of the full cohort/ROI set.  We agreed to wait until the next round, when improved consistency in data processing and complete results for each algorithm are available, before making any cross-algorithm comparisons.  There was unanimous agreement that this effort is timely for the field of DTI.
 
 
  
 
== Methods for Phase 2 NA-MIC DWI tractography analysis ==
 
1)  All participants agreed to continue, so the list of algorithms will be the same as presented in Santa Fe, with potential addition of others if needed.  That will be decided at the January AHM.
2)  Agreed to change datasets in favor of a different dataset with more directions and two identical sessions (test-retest), so that within-subject reliability can be assessed for each algorithm.  The candidate dataset under consideration is the 10-subject MIND Reliability data from MGH.  Sylvain B (BWH) volunteered to make nrrd headers for the 10 MIND subjects' test/retest data from MGH, with help from Jeremy, Vince, and Randy as needed (a header sketch appears after this list).  Sylvain B (BWH) has the initial dataset and will report back on Friday any problems before preparing the rest.  He is posting this initial dataset on the portal (see download instructions) with a corrected nrrd header and the initial preprocessing steps completed.  It is ready for all sites to test out.  Still missing is the distortion correction using the field maps, which is being worked on at MGH/BWH.
3)  We will use the same 5 tracts used for the Santa Fe Workshop plus the Corpus Callosum (CC).  These ROIs do a good job of spanning the range of tractography challenges (e.g. large to small, various amounts of crossing fibers, various degrees of curvature).  The ROIs need to be redone as volumes rather than planes.  Agreed to use the same definitions for locating the centroid of each ROI, then expand it to make a volume ROI.  Sonia and Randy will make a first pass in the same initial subject, validate with Marek's lab, and then send the ROIs around to be sure they work with all of the algorithms.
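For reference, a minimal DWI NRRD header (.nhdr) of the kind mentioned in item 2 might look like the sketch below.  This is an illustration only: the sizes, spacing, b-value, gradient directions, and data file name are placeholders, and the headers Sylvain posts on the portal are the authoritative versions.

<pre>
NRRD0005
# Illustrative DWI header skeleton -- all values below are placeholders, not the MIND data.
type: short
dimension: 4
space: right-anterior-superior
sizes: 128 128 60 66
space directions: (2,0,0) (0,2,0) (0,0,2) none
kinds: space space space list
endian: little
encoding: raw
space origin: (0,0,0)
measurement frame: (1,0,0) (0,1,0) (0,0,1)
data file: M02100024_visit1_dwi.raw
modality:=DWMRI
DWMRI_b-value:=700
DWMRI_gradient_0000:=0 0 0
DWMRI_gradient_0001:=0.7071 0.7071 0
DWMRI_gradient_0002:=0 0.7071 0.7071
</pre>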
  
 
== Downloading the data: ==
 
1) If you do not already have an account with BIRN/SRB, request one [https://portal.nbirn.net here].  Send [mailto:jbockholt@mrn.org jeremy] an e-mail message so that he can remind BIRN to expedite the account request for the NA-MIC project.
2) If/When you have a BIRN/SRB account, send [mailto:jbockholt@mrn.org jeremy] an e-mail message so that he can invite you to the NA-MIC DTI validation project.  You will not be able to download the data until you are invited to this data sharing project.
3) Use the SRB SCommands to get the data:
 
<code>Scd /home/Projects/NAMIC_DTI_VALIDITY__0074/Data</code>
 
==== List of all of the data-sets ====

For each visit of the 10 subjects:
* the tar.gz archive contains the initial raw data,
* the tgz archive contains the pre-processed data (eddy current and EPI correction).

*'''Contents of the initial raw datasets'''
**dti/
***.dcm contains the raw DTI dicom images
***.nrrd contains the corresponding NRRD headers for the dicom
**t1/
***.dcm contains the raw T1-weighted dicom images
**t2/
***.dcm contains the raw T2-weighted dicom images
**fieldmag/
***.dcm contains the magnitude image of field map
**fieldphase/
***.dcm contains the phase image of field map

*'''Contents of the pre-processed datasets'''
{| border="1"
|+ Example contents of the M02100024_visit1_nhdr.tgz file
|-
! FileName
! Description
|-
| M02100024_visit1/M02100024_visit1_dti-Ed-Epi.nhdr
| Eddy current corrected, EPI corrected and weighted least square estimation diffusion tensor
|-
| M02100024_visit1/M02100024_visit1_dwi-Ed-Epi.nhdr
| Eddy current corrected and EPI corrected dwi
|-
| M02100024_visit1/M02100024_visit1_dti-Ed.nhdr
| Eddy current corrected and weighted least square estimation diffusion tensor
|-
| M02100024_visit1/M02100024_visit1_dwi-Ed.nhdr
| Eddy current corrected dwi
|-
| M02100024_visit1/M02100024_visit1_dwi.nhdr
| Raw dwi
|-
| M02100024_visit1/M02100024_visit1_fieldmag1.nhdr
| Field Map used for EPI correction.
|-
| M02100024_visit1/M02100024_visit1_fieldmag2.nhdr
|
|-
| M02100024_visit1/M02100024_visit1_fieldphase1.nhdr
|
|-
| M02100024_visit1/M02100024_visit1_fieldphase2.nhdr
|
|-
| M02100024_visit1/M02100024_visit1_t1.nhdr
| t1 weighted scan
|-
| M02100024_visit1/M02100024_visit1_t2.nhdr
| t2 weighted scan
|}

You can grab one dataset at a time by following the examples below; a loop that fetches everything appears after the list of datasets.

<code>Sget M02100024_visit1.tar.gz</code>

<code>Sget M02100024_visit1_nhdr.tgz</code>
 
* '''List of datasets'''

case M02100023

<code>M02100023_visit1.tar.gz</code><br>
<code>M02100023_visit2.tar.gz</code><br>
n.b., M02100023 does not have EPI correction because the field map does not work.

case M02100024

'''<code>M02100024_roi.tar.gz</code><br>'''
<code>M02100024_visit1.tar.gz</code><br>
<code>M02100024_visit2.tar.gz</code><br>
'''<code>M02100024_visit1_nhdr.tgz</code><br>'''
'''<code>M02100024_visit2_nhdr.tgz</code><br>'''

case M52200010

'''<code>M52200010_roi.tar.gz</code><br>'''
<code>M52200010_visit1.tar.gz</code><br>
<code>M52200010_visit2.tar.gz</code><br>
'''<code>M52200010_visit2_nhdr.tgz</code><br>'''

case M52200011

'''<code>M52200011_roi.tar.gz</code><br>'''
<code>M52200011_visit1.tar.gz</code><br>
<code>M52200011_visit2.tar.gz</code><br>
'''<code>M52200011_visit1_nhdr.tgz</code><br>'''
'''<code>M52200011_visit2_nhdr.tgz</code><br>'''

case M52200012

'''<code>M52200012_roi.tar.gz</code><br>'''
<code>M52200012_visit1.tar.gz</code><br>
<code>M52200012_visit2.tar.gz</code><br>
'''<code>M52200012_visit1_nhdr.tgz</code><br>'''
'''<code>M52200012_visit2_nhdr.tgz</code><br>'''

case M87101083

'''<code>M87101083_roi.tar.gz</code><br>'''
<code>M87101083_visit1.tar.gz</code><br>
<code>M87101083_visit2.tar.gz</code><br>
'''<code>M87101083_visit1_nhdr.tgz</code><br>'''
'''<code>M87101083_visit2_nhdr.tgz</code><br>'''

case M87101118

'''<code>M87101118_roi.tar.gz</code><br>'''
<code>M87101118_visit1.tar.gz</code><br>
<code>M87101118_visit2.tar.gz</code><br>
'''<code>M87101118_visit1_nhdr.tgz</code><br>'''
'''<code>M87101118_visit2_nhdr.tgz</code><br>'''

case M87102103

'''<code>M87102103_roi.tar.gz</code><br>'''
<code>M87102103_visit1.tar.gz</code><br>
<code>M87102103_visit2.tar.gz</code><br>
'''<code>M87102103_visit1_nhdr.tgz</code><br>'''
'''<code>M87102103_visit2_nhdr.tgz</code><br>'''

case M87102104

'''<code>M87102104_roi.tar.gz</code><br>'''
<code>M87102104_visit1.tar.gz</code><br>
<code>M87102104_visit2.tar.gz</code><br>
'''<code>M87102104_visit1_nhdr.tgz</code><br>'''
'''<code>M87102104_visit2_nhdr.tgz</code><br>'''

case M87102113

'''<code>M87102113_roi.tar.gz</code><br>'''
<code>M87102113_visit1.tar.gz</code><br>
<code>M87102113_visit2.tar.gz</code><br>
'''<code>M87102113_visit2_nhdr.tgz</code><br>'''
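
To fetch and unpack everything in one pass, the per-file Sget shown above can be wrapped in a small shell loop over the case IDs listed here.  This is only a sketch: it assumes the SRB Scommands are installed and configured for a member of the project, and since not every case has every archive (some _nhdr.tgz files are absent), Sget may simply report the files it cannot find.

<pre>
#!/bin/sh
# Sketch: download and unpack the archives for each case and visit.
# M02100023 is left out because its field map does not work (see the note above).
Scd /home/Projects/NAMIC_DTI_VALIDITY__0074/Data

for subj in M02100024 M52200010 M52200011 M52200012 \
            M87101083 M87101118 M87102103 M87102104 M87102113; do
    Sget ${subj}_roi.tar.gz                      # ROI archive
    for visit in visit1 visit2; do
        Sget ${subj}_${visit}.tar.gz             # raw data
        Sget ${subj}_${visit}_nhdr.tgz           # pre-processed data, where available
    done
done

# Unpack whatever was actually downloaded.
for f in *.tar.gz *.tgz; do
    [ -f "$f" ] && tar xzf "$f"
done
</pre>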
==== Regions of Interest: ====

The ROIs are located in /home/Projects/NAMIC_DTI_VALIDITY__0074/Data

* Cingulum, Cingulum Hippocampal, Internal Capsule, Uncinate Fasciculus:

left side: source: label #2; sink: label #7

right side: source: label #8; sink: label #9

* Fornix:

left side: source: label #2; sink: label #7

right side: source: label #2; sink: label #9

* Corpus Callosum Forceps Major, Corpus Callosum Forceps Minor:

source: label #2; sink: label #7
 
  
 
== Preprocessing stream: ==
 
1)  Start with DWI data and NIfTI header + gradient directions (UPLOAD - raw)<br />
 
2)  Field maps are available, and Sylvain has verified that using them to correct the distortions would be desirable.  What remains is to get help in using them to perform the correction.  Randy will work with MGH collaborators to get this working and will update the group on Friday.  (UPLOAD) <br />
 
3)  Eddy current correction (affine registration) (to be done at BWH by Sylvain/Sonia) (UPLOAD)<br />

4)  Put into nrrd format (to be done at BWH by Sylvain/Sonia)<br />

5)  Use weighted least squares tensor estimation using the TEEM library (to be done at BWH by Sylvain/Sonia) (UPLOAD); a command sketch appears after this list<br />

6)  T1 white matter mask co-registered to eddy current corrected DWI data (NA-MIC affine registration tool) (Freesurfer white matter + ? vs. EMSegmentation - Sonia/Sylvain to determine based on what works best) (UPLOAD)<br />

7)  ROIs will be drawn in DWI/DTI space (to be done at BWH by Sylvain/Sonia) (UPLOAD)<br />

8)  Affine registration transformation to bring retest into test space (use this for mapping ROIs and outcome label maps only from test to retest for each subject) (UPLOAD)<br />
 
9)  Each group will be responsible for implementing their own algorithm, starting at whatever point in this stream is appropriate for their software.  All agreed NOT to use alternate methods to accomplish any of the steps listed above.<br />
 
<br />
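As a concrete illustration of the weighted least squares estimation in step 5, the command below sketches a tensor fit with the TEEM <code>tend</code> tool, using the file names from the table above.  The flag names are written from memory and may differ across Teem versions, and the correct b-value/B0 handling depends on how the gradients are stored in the header, so treat this as an assumption to verify against the <code>tend estim</code> usage message rather than as the agreed pipeline command (the actual estimation is being run at BWH by Sylvain/Sonia).

<pre>
# Sketch only: weighted least squares (WLS) tensor estimation with Teem's tend.
# Flag names and values are assumptions to check against your Teem build.
tend estim -est wls \
     -B kvp \
     -knownB0 false \
     -i M02100024_visit1_dwi-Ed-Epi.nhdr \
     -o M02100024_visit1_dti-Ed-Epi.nhdr
</pre>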
 
=== Challenges in data processing ===
Please list below any challenges that you encounter in processing the data.

# Automatic mask generation: threshold detection failed for the provided tensors.  Workaround: re-estimate the tensors with a fixed threshold.
# Image frame and measurement frame are not equivalent for the tensor data.
# Several datasets were corrupted:
## M87102113_visit1 - fixed (Sylvain)
## M02100023_visit1 and visit2 - should not be used in the study (Sylvain)
## M52200010_visit1 - fixed (Sylvain)
# ROI origin is not the same as in the tensor data.  Workaround: set the origin to (0,0,0) in the ROI images.
  
 
== Outcome metrics: ==
 
 
 
This is still under discussion, but for the next round of presentations at the January All Hands meeting we agreed to do the following:<br />
 
1) Everyone will email Sonia Pujol their slides from the Santa Fe meeting and she and I will use them to compile an Excel worksheet for each laboratory to fill in as they process the new dataset.  This will include information such as: <br />
 
a)  Space carved (Casey's [http://www.sci.utah.edu/~gcasey/research/code DTIprocess] tool that generates a volume label map from the tracelines).  This will give volumes, overlap, and the mean and standard deviation of FA, trace, and mode.  We will use these for test-retest metrics.  Each group will pass these label maps to Sonia and she will generate these measures for the AHM presentation.  <br />
 
b) User interface, hardware/software (processor speed, platform, RAM), operator time<br />
 
c) Parameter settings for each algorithm<br />
 
2)  We will try to use Casey's FiberCompare multiple-traceline visualization tool to compare results<br />

3)  Sonia will also use the label map results to explore ways to analyze them, e.g. the STAPLE algorithm to find common agreement (specificity and sensitivity)<br />
 
Further discussion of how best to parameterize tracts will be a key point for the January AHM<br />
 
== Uploading your results ==

The following directory has been created on the SRB for uploading your results:<br />

/home/Projects/NAMIC_DTI_VALIDITY__0074/Results<br />

The example below assumes that you have the SRB Scommands installed and configured for a user that is a member of the NAMIC_DTI_VALIDITY__0074 group.<br />

<code>Scd /home/Projects/NAMIC_DTI_VALIDITY__0074/Results</code><br />
<code>Smkdir MyTestResultsDir</code><br />
<code>Schmod a NAMIC_DTI_VALIDITY__0074 groups MyTestResultsDir</code><br />
<code>Scd MyTestResultsDir</code><br />
<code>Sput MyTestFile .</code><br />
<code>Schmod a NAMIC_DTI_VALIDITY__0074 groups MyTestFile</code><br />
 
<br />
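If you have several result files to share (for example, the label maps requested under Outcome metrics), the commands above can be wrapped in a small loop.  This is a sketch that reuses only the Scommands already shown; "MyLabResults" and the local "results" directory are placeholders for your own names.

<pre>
#!/bin/sh
# Sketch: upload every file in a local results directory and open group access.
Scd /home/Projects/NAMIC_DTI_VALIDITY__0074/Results
Smkdir MyLabResults
Schmod a NAMIC_DTI_VALIDITY__0074 groups MyLabResults
Scd MyLabResults
for f in results/*; do
    [ -e "$f" ] || continue
    Sput "$f" .
    Schmod a NAMIC_DTI_VALIDITY__0074 groups "$(basename "$f")"
done
</pre>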
 
== Next steps: ==

1) [http://www.na-mic.org/Wiki/index.php/Tractography_Study_Telephone_Conference_Nov_16_2007 Telephone Conference Call from November 16, 2007 Meeting Notes].  Agenda items include feedback on the sample data set & ROIs.  Call-in information: 1-800-861-4084, ID 1040119 #<br />
 
 
2)  The next face-to-face gathering will be at the AHM; Randy has scheduled time on the Wednesday agenda to continue this project. <br />
 
3)  Proper implementation of the DTI gradient orientation system in ITK, nrrd, TEEM, etc. (Casey/Tom to file a bug report, bring it up in an upcoming Engineering T-con, and plan for work on it at the next Project Week)<br />
 
 
== Miscellaneous Notes ==
 
The group explored potential data sets (UNC: n=1, 10 acquisitions with 6 directions; MIND: n=10, 8 acquisitions, 2x at each of 3 sites with 6 directions and 2x at 1 site with 60 directions) that are available as needed.
 
 
<br />
 
Return to [[Projects/Diffusion/2007_Project_Week_Contrasting_Tractography_Measures | Contrasting Tractography Project Page]]
 