July31T-con/Synopsis

== The Problem ==

Each of the algorithms has an optimal voxel dimension, and they are not the same. We must decide what format of the data will be used as the common starting point.

I have three goals here: 1) to summarize the new information gained during the past nine days since we had our T-con; 2) to capture the essence of the discussions that took place in pursuit of that information; and 3) to propose a final solution regarding the format of the starting data to be processed by all the groups for the Santa Fe meeting. That solution should be a reasonable compromise between the in-principle optimal but completely impossible approach of all groups using the "same data" and the "every group for themselves" approach, which would require more work in Santa Fe to sort out our results.


From Guido on August 6: "Randy and all,

I did not update the Wiki, and this is indeed a very interesting discussion. Although C-F proposes that we should not force anyone to accept a specific data interpolation, I fear that any comparison at the forthcoming meeting becomes very difficult. I understand the concerns of C-F and Marc that cutting the high frequencies indeed seems to cut some information. If some labs used up-interpolated, non-interpolated, or down-interpolated data with different interpolation schemes, everyone would work with different input data, which by its nature would already appear as systematic differences. The telephone conference clearly showed that some groups really can't use the raw data but would down-sample them to an isotropic grid. Forcing all the participating groups to use this non-isotropic data might "cut" some labs out of this comparison. Maybe we really have to make a compromise just for this comparison: provide a down-interpolated, close-to-isotropic dataset for everyone and leave the more sophisticated analysis of optimally "redoing the GE upsampling", or the use of the raw GE data, as additional research to be discussed at the meeting?

Randy and all, could we propose the following: a) we decide to use the non-EPI-corrected data for the time being (given the short time left before the meeting; not because EPI correction might not be useful, but because we first need to quantitatively show its advantage).

b) we provide downsampled images, resampling the 256x256 matrix back to a 144x144 grid with the standard procedure used by Marc, and make this close-to-isotropic dataset available to everyone. This will ensure that every group has access to a standard, close-to-isotropic dataset (1.67x1.67x1.7 mm).

c) we leave it to every group to also use the original matrix, or more sophisticated down- or upsampling, to compare, and to discuss the implications/differences as part of the report.

Please understand that I don't want to stop the current, exciting discussion about the optimal reformatting of the data or redoing the GE upsampling, but I would like to see this as a component of the workshop itself. I think that all of these questions are a very important part of the NAMIC DTI toolkit and of the recommendations for users, and of course of providing the tools to perform correct/optimal data preprocessing."
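For illustration, the sketch below shows the kind of in-plane downsampling Guido describes in point (b): resampling the 256x256 grid back to 144x144 with linear interpolation for the DWI intensities, and with nearest-neighbour interpolation for ROI label maps (which, as Sylvain notes below, have to be downsampled as well). The array shapes, slice count, and use of NumPy/SciPy are assumptions made for this example; this is not the exact procedure used by Marc or the PNL.

<pre>
# Minimal sketch (Python/NumPy/SciPy) -- not the actual PNL pipeline.
# Resample the in-plane axes of a DWI volume from 256x256 back to 144x144
# (about 1.67x1.67 mm in-plane, 1.7 mm slices after resampling), and resample
# an ROI label map the same way with nearest-neighbour interpolation so that
# the label values are preserved.
import numpy as np
from scipy import ndimage

def downsample_inplane(volume, out_shape=(144, 144), order=1):
    """Resample the first two (in-plane) axes of a 3-D volume to out_shape.

    order=1 -> linear interpolation (DWI intensities)
    order=0 -> nearest neighbour (ROI label maps)
    """
    zoom = (out_shape[0] / volume.shape[0],
            out_shape[1] / volume.shape[1],
            1.0)  # slice axis is left unchanged
    return ndimage.zoom(volume, zoom, order=order)

# Synthetic stand-ins for one diffusion-weighted volume and one ROI label map
# (the slice count of 85 is an arbitrary assumption for the example).
dwi = np.random.rand(256, 256, 85).astype(np.float32)
roi = (np.random.rand(256, 256, 85) > 0.99).astype(np.uint8)

dwi_144 = downsample_inplane(dwi, order=1)  # intensities: linear
roi_144 = downsample_inplane(roi, order=0)  # labels: nearest neighbour
print(dwi_144.shape, roi_144.shape)         # (144, 144, 85) (144, 144, 85)
</pre>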

Later that day Sylvain expressed the following concerns: 1) All ROIs were drawn on the original data, and they will have to be downsampled too. 2) "With respect to Guido's point c) (leave it to every group to use the original matrix or more sophisticated down- or upsampling, and to compare and discuss implications/differences as part of the report): I am somewhat less convinced about this. I say if we want to be consistent, then only one single dataset should be provided so that all methods start with the same data.

Unless there is *strong* opposition against this, the PNL will prepare for each case:

# A downsampled 1.67x1.67x1.7 mm data set of DWIs in NRRD format.
# Its corresponding plain-vanilla linear least-squares tensor fit in NRRD format.
# The associated ROIs, downsampled to the same resolution, in NRRD format.

All other data sets (original and EPI-corrected) should be removed from this project.

For consistency, if a candidate tracking method uses DTI as input, then the provided tensor data *must* be used. Differences in tensor estimation techniques should not justify poor tracking, or at least should be observable across all of the tracking techniques. Similarly, if a candidate tracking method works directly on the DWIs, then the provided DWI data *must* be used.

There is no perfect solution to finding the ideal data set. This is the lowest common denominator for all techniques within the NAMIC community and beyond. Given the large variability in processing techniques, the least we should do is use the exact same input.

Best, -Sylvain"
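As a concrete illustration of item 2 in Sylvain's list, the sketch below shows what a "plain vanilla" linear least-squares tensor fit looks like for a single voxel under the standard mono-exponential model S_i = S_0 exp(-b_i g_i^T D g_i): take the log of the attenuations, build a 6-column design matrix from the gradient directions, and solve for the six unique tensor components. The b-value, gradient directions, and synthetic signals are assumptions made for the example, not the acquisition used here, and this is not the PNL's fitting code.

<pre>
# Minimal sketch (Python/NumPy) of a log-linear least-squares DTI fit for one
# voxel. Model: S_i = S0 * exp(-b_i * g_i^T D g_i), so
#   log(S_i / S0) = -b_i * [gx^2, 2gxgy, 2gxgz, gy^2, 2gygz, gz^2] . d
# with d = [Dxx, Dxy, Dxz, Dyy, Dyz, Dzz].
import numpy as np

def fit_tensor_lls(signals, bvals, bvecs):
    """Return the 3x3 diffusion tensor estimated by linear least squares.

    signals : (N,) measured signals; the first entry is taken as the b=0 image
    bvals   : (N,) b-values in s/mm^2
    bvecs   : (N, 3) unit gradient directions (ignored for the b=0 entry)
    """
    s0 = signals[0]
    y = np.log(signals[1:] / s0)            # log-attenuation per DWI direction
    g = bvecs[1:]
    b = bvals[1:]
    A = -b[:, None] * np.column_stack([     # design matrix, one row per DWI
        g[:, 0]**2, 2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2],
        g[:, 1]**2, 2 * g[:, 1] * g[:, 2], g[:, 2]**2])
    d, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.array([[d[0], d[1], d[2]],
                     [d[1], d[3], d[4]],
                     [d[2], d[4], d[5]]])

# Self-check with synthetic, noise-free signals from a known tensor
# (fibre along x, diffusivities in mm^2/s; 12 random directions at b=1000).
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
rng = np.random.default_rng(0)
dirs = rng.normal(size=(12, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
bvecs = np.vstack([[0.0, 0.0, 0.0], dirs])
bvals = np.array([0.0] + [1000.0] * 12)
signals = 100.0 * np.exp(-bvals * np.einsum('ij,jk,ik->i', bvecs, D_true, bvecs))
print(np.allclose(fit_tensor_lls(signals, bvals, bvecs), D_true))  # True
</pre>

In practice such a fit would be applied voxel by voxel (possibly in a weighted variant), with the six tensor components written out as the NRRD tensor volume that accompanies the downsampled DWIs.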


Return to July31T-con Page