Mbirn: DTI Acquisition Protocol


Notes from the first MBIRN DTI Acquisition Working Group T-Con, April 26, 2006. Participants: S. Mori, J. Farrell (JHU), A. Song (Duke), K. Helmer (MGH), and guest C. Pierpaoli (NIH).


The parameters used by C. Pierpaoli for an NIH-funded MRI study of normal brain development were used as a starting point for discussion.


Those parameters are:

  • six diffusion gradient directions + one b = 0 image
  • b = 0, 1000 s/mm2
  • TR = 6 sec
  • TE = min for full acquisition
  • Number of slices = minimum needed to cover entire brain including the cerebellum
  • Orientation = axial
  • Slice thickness = 3.0 mm
  • Slice gap = 0 mm
  • Want 3x3x3 mm isotropic voxels, so either FOV = 192 mm with a 64x64 matrix or FOV = 384 mm with a 128x128 matrix (see the sketch after this list)
  • 4 data sets per subject
  • No zero filling or interpolation
  • Acquisition time limited to 5-7 min
  • This study used only GE and Siemens scanners, all at 1.5T.
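
As a quick check on the geometry above, both FOV/matrix options give the 3 mm in-plane resolution needed for isotropic voxels. A minimal sketch in Python (the parameter names are illustrative, not taken from any scanner interface):

 # Illustrative encoding of the protocol above; key names are made up.
 protocol = {
     "b_values_s_per_mm2": [0, 1000],
     "n_directions": 6,
     "TR_s": 6.0,
     "slice_thickness_mm": 3.0,
     "slice_gap_mm": 0.0,
     "fov_mm": 192,        # or 384
     "matrix": (64, 64),   # or (128, 128)
     "datasets_per_subject": 4,
 }

 # In-plane resolution must equal the slice thickness for
 # 3 x 3 x 3 mm isotropic voxels: 192/64 = 384/128 = 3.0 mm.
 in_plane_mm = protocol["fov_mm"] / protocol["matrix"][0]
 assert in_plane_mm == protocol["slice_thickness_mm"]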


  • b = 0 Issue

At that time, they were limited to collecting b = 0 rather than b = low, since the former was all that was available on the GE scanners. It was noted that b = 0 is not truly b = 0, but rather more like b = 30-100 s/mm2, so the issues of flow attenuation and the crusher behavior of the diffusion gradients are taken care of. However, it is common to treat the b = low data as if it were truly b = 0 for data-processing purposes.

CP currently uses an exact-solution method, integrating the gradient waveforms to calculate the b-values. On GE scanners, the b-values are written to a separate file for later use. Note that the high b-value is usually calculated ignoring the imaging gradients. GE does not allow you to select multiple non-zero b-values. CP also thought that the perception that b-values from GE are unreliable likely comes from the fact that, if you use multiple non-zero b-values, you have to make sure the software does not rescale the sequence and calculate a different TE; that would obviously compromise the image intensity of those images.
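
For reference, the exact-integral calculation CP described reduces to the familiar Stejskal-Tanner expression for an idealized pulsed-gradient pair. A minimal numerical sketch (the waveform amplitude and timings are made-up illustrative values, not anyone's actual sequence):

 import numpy as np

 GAMMA = 2.6752218744e8  # proton gyromagnetic ratio, rad s^-1 T^-1

 def b_value(g_eff, dt):
     # Exact b-value from the effective gradient waveform:
     # b = gamma^2 * integral_0^T [ integral_0^t G(t') dt' ]^2 dt.
     # g_eff is G(t) in T/m sampled every dt seconds, with the sign
     # flipped after the refocusing pulse.
     q = GAMMA * np.cumsum(g_eff) * dt   # q(t) in rad/m
     return np.sum(q ** 2) * dt          # in s/m^2

 # Idealized Stejskal-Tanner pair: amplitude G, lobe duration delta,
 # leading-edge separation Delta (SI units).
 G, delta, Delta, dt = 0.04, 0.02, 0.04, 1e-6
 g = np.zeros(int(round((Delta + delta) / dt)))
 g[: int(delta / dt)] = G            # first lobe
 g[int(Delta / dt):] = -G            # second lobe, effective sign
 b_numeric = b_value(g, dt) / 1e6    # convert s/m^2 -> s/mm^2
 b_analytic = GAMMA**2 * G**2 * delta**2 * (Delta - delta / 3) / 1e6
 # Both come out near 1500 s/mm^2, agreeing to within discretization error.

Including the imaging gradients in the effective waveform is what distinguishes the exact calculation from the nominal b-value noted above.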


  • DTI, SNR, and Averaging

The question of whether or not to average data was discussed. It was noted that some sites use repeat scans to replace bad data (usually due to movement) in the original data. It was thought that the presence of physiological noise makes it better not to average the data together, but rather to treat all the data as separate measurements in the tensor fitting. AS said that at Duke they do a 15-direction scan and repeat it as many times as time allows. The best way to do this, JF noted, depends on what you are trying to measure: if you are interested in high-anisotropy regions, you can get away with lower-SNR data, but the SNR requirements are stricter for low-anisotropy regions, since you are measuring a smaller deviation. It also depends on which measure you intend to use in your analysis. It is generally better to acquire more directions than more averages, given the physiological noise and the need to replace data sets corrupted by motion. It was noted that a 5 min scan has about an 80% success rate for producing a good data set, but the success rate for two consecutive good scans is much lower.
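
One concrete reading of "treat all the data as separate measurements" is to stack every acquired volume, repeats included, into a single log-linear least-squares tensor fit rather than averaging repeats first. A minimal sketch (ordinary least squares; a production fit would typically weight the log-transformed data or reject motion-corrupted outliers):

 import numpy as np

 def fit_tensor(signals, bvals, bvecs):
     # signals: (N,) intensities from all volumes, repeats included
     # bvals:   (N,) b-values in s/mm^2; bvecs: (N, 3) unit vectors
     g = np.asarray(bvecs, float)
     b = np.asarray(bvals, float)
     # Design matrix for [ln S0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz],
     # from ln S = ln S0 - b g^T D g.
     A = np.column_stack([
         np.ones_like(b),
         -b * g[:, 0] ** 2,
         -b * g[:, 1] ** 2,
         -b * g[:, 2] ** 2,
         -2 * b * g[:, 0] * g[:, 1],
         -2 * b * g[:, 0] * g[:, 2],
         -2 * b * g[:, 1] * g[:, 2],
     ])
     x, *_ = np.linalg.lstsq(A, np.log(signals), rcond=None)
     s0, (dxx, dyy, dzz, dxy, dxz, dyz) = np.exp(x[0]), x[1:]
     D = np.array([[dxx, dxy, dxz],
                   [dxy, dyy, dyz],
                   [dxz, dyz, dzz]])
     return s0, D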


  • Distortion Corrections

AS noted that, in general, it is good to know your scanner's eddy current situation; at Duke they check this monthly. If it is good, you do not have to do much (or any) correction, but if it is bad, you need to adjust for it or at the very least know about it. CP noted that they do eddy current distortion correction post-acquisition on the subject data themselves, and their method works well for b-values up to about 1200 s/mm2. There is also the problem of extra signal attenuation due to eddy currents, so it is good to address this issue in hardware as well. Also, you have to know the exact position of the subject if you are going to correct for this in software, and the software has to carry this information through the calculation. It is also possible to collect data with (sequentially) positive and negative signed gradients to assess the system.
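
The positive/negative polarity check can be illustrated simply: acquire the same diffusion weighting with the gradient sign flipped, and the eddy-induced misregistration flips sign with it. A toy sketch assuming a pure translation along the phase-encode axis (a hypothetical helper, not any package's API; real eddy distortions also include scaling and shear):

 import numpy as np

 def eddy_shift_from_polarity_pair(img_pos, img_neg, pe_axis=0):
     # Collapse each image to a 1-D profile along the phase-encode axis.
     other = tuple(i for i in range(img_pos.ndim) if i != pe_axis)
     p = img_pos.mean(axis=other)
     q = img_neg.mean(axis=other)
     # The cross-correlation peak gives the apparent shift between the
     # pair; the distortion reverses sign with gradient polarity, so
     # the per-image eddy-induced shift is half of that.
     xc = np.correlate(p - p.mean(), q - q.mean(), mode="full")
     apparent = np.argmax(xc) - (len(p) - 1)   # integer-pixel estimate
     return apparent / 2.0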


  • Spatial Normalization

CP noted that every time you rotate the images you affect the data quality. Their tack is to do all corrections in native space and then apply a single rotation into a common stereotaxic space (no scaling, no shearing). The tensor, along with any scalar measures, is then calculated. Those maps can then be deformed so that subject data can be averaged. SM noted that this part is probably not an appropriate project for BIRN, and in any case it is not clear what the "correct" method is, since there are a number of issues with these transformations. It is probably the case that you just have to pick a reasonable method and apply it consistently.
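
For context, the reorientation that has to accompany any rotation of the data looks like this; a minimal sketch assuming a rigid rotation with no scaling or shearing, matching the approach above:

 import numpy as np

 def rotate_tensor(D, R):
     # Reorient a diffusion tensor under a rigid rotation R: D' = R D R^T.
     # Rotating the image volume without reorienting the tensor (or,
     # equivalently, the b-vectors before fitting) leaves the fitted
     # fiber directions pointing the wrong way.
     return R @ D @ R.T

 # 10-degree rotation about z applied to a prolate tensor (mm^2/s).
 theta = np.deg2rad(10.0)
 R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
 D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
 D_rot = rotate_tensor(D, R)
 # Scalar invariants (trace, FA) are unchanged by a pure rotation;
 # it is the interpolation of the image data that degrades quality.
 assert np.allclose(np.linalg.eigvalsh(D), np.linalg.eigvalsh(D_rot))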


  • BIRN Study

SM and JF thought it would be useful for BIRN to have each site do a self-calibration, since there is no good DTI phantom that could be shipped to each site. The protocol would be 6 directions, 5 repetitions of the data set, b = 0, 1000 s/mm2. The idea would be to keep scanning the same person for about an hour, acquiring a lot of data, so that FA could be calculated from data of different SNR (constructed from subsets of the full session). The question of how to compare this data across sites was raised, given that each site would be scanning a different person. With this study, you could determine the number of repetitions needed to meet the SNR requirements for a given structure and measure. CP raised the issue of physiological noise, and noted that hardware variability would also be convolved into this; but since there is no DTI phantom, you cannot get at each separately. A water phantom is probably not good enough.
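
For reference, the FA that would be compared across SNR levels is a rotationally invariant function of the tensor eigenvalues; a minimal sketch with illustrative eigenvalues:

 import numpy as np

 def fractional_anisotropy(D):
     # FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||
     lam = np.linalg.eigvalsh(D)
     dev = lam - lam.mean()
     return np.sqrt(1.5) * np.linalg.norm(dev) / np.linalg.norm(lam)

 # A prolate, white-matter-like tensor vs. a nearly isotropic one (mm^2/s):
 print(fractional_anisotropy(np.diag([1.7e-3, 0.3e-3, 0.3e-3])))  # ~0.80
 print(fractional_anisotropy(np.diag([1.0e-3, 0.9e-3, 0.9e-3])))  # ~0.06

Fitting tensors from subsets of the repetitions (1, 2, ... 5 repeats) would yield the FA-versus-SNR behavior the study is after, per structure and per measure.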


  • Summary and Action Items

KH will post these notes and a preliminary set of acquisition parameters on a wiki page for review and suggestions from the other participants. The group will then meet again by t-con to discuss moving forward at each site with the SNR study proposed by SM and JF.