From NAMIC Wiki
 
Back to ARRA main page <br>
Back to Registration main page <br>
[[Projects:RegistrationDocumentation:UseCaseInventory|Back to Registration Use-case Inventory]] <br>
= <small>v3.6.1</small> [[Image:Slicer3-6Announcement-v1.png‎|150px]]  Slicer Registration Library Case #08: Intra-subject whole-body PET-CT =

=== Input ===
{| style="color:#bbbbbb; " cellpadding="10" cellspacing="0" border="0"
|[[Image:RegLib_C08_CT1_thumbnail.png|150px|left|this is the fixed CT image. All images are aligned into this space]]
|[[Image:RegLib_C08_PET1_thumbnail.png|150px|left|this is the fixed PET image. All images are aligned into this space]]
|[[Image:RegArrow_NonRigid.png|100px|left]]
|[[Image:RegLib_C08_CT2_thumbnail.png|150px|left|this is the moving image. The transform is calculated by matching this to the reference image]]
|[[Image:RegLib_C08_PET2_thumbnail.png|150px|left|this is the moving image. The transform is calculated by matching this to the reference image]]
 
|-
|fixed image/target
|fixed image/target
|
|moving image
|moving image
 
|}
=== Modules ===
*'''Slicer 3.6.1''' recommended modules: [https://www.slicer.org/wiki/Modules:RegisterImages-Documentation-3.6 '''Expert Automated Registration'''] + [https://www.slicer.org/wiki/Modules:DeformableB-SplineRegistration-Documentation-3.6 '''Fast Nonrigid BSpline'''] or [https://www.slicer.org/wiki/Modules:BRAINSFit '''BrainsFit''']
 
===Objective / Background ===
Change assessment: the follow-up whole-body PET-CT is aligned to the baseline study of the same subject so that differences between the two timepoints can be measured.
 
=== Keywords ===
PET-CT, whole-body, change assessment
  
 
===Input Data===
*reference/fixed: baseline '''CT''': 0.97 x 0.97 x 3.27 mm, 512 x 512 x 267; '''PET''': 4.7 x 4.7 x 3.3 mm, 128 x 128 x 267
*moving: follow-up '''CT''': 0.98 x 0.98 x 5 mm, 512 x 512 x 195; '''PET''': 4.1 x 4.1 x 5 mm, 168 x 168 x 195
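As a sanity check before registering, the spacings and dimensions above give the physical coverage of each series. A small plain-Python sketch (the helper function is ours, not part of Slicer):

```python
# Hypothetical helper (not a Slicer function): the physical field of view
# covered by each scan is voxel spacing times grid size, per axis.
def physical_extent_mm(spacing, dims):
    return tuple(s * d for s, d in zip(spacing, dims))

fixed_ct = physical_extent_mm((0.97, 0.97, 3.27), (512, 512, 267))
moving_ct = physical_extent_mm((0.98, 0.98, 5.0), (512, 512, 195))
# fixed covers ~497 x 497 x 873 mm; moving ~502 x 502 x 975 mm, so the
# follow-up series extends further superior-inferior than the baseline.
```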
  
 
=== Registration Results===
<gallery widths="550px" heights="250px" perrow="2">
Image:RegLib_C08_unregistered.png|Original unregistered CT images.
Image:RegLib_C08_PET-CT_AnimGif.gif|Automated affine alignment (RegisterImages) removes global differences. Differences due to posture and breathing remain.
Image:RegLib_C08_PET-CT_AnimGif_BSpl1_Result.gif‎|BSpline registration of the full volumes achieves better alignment in the abdominal and thoracic region of interest, but the solution is suboptimal because the algorithm is distracted by the strong differences in head position. Rather than trying to address this with more DOF, we calculate a BSpline transform based on cropped images that include only the main region of interest.
Image:RegLib_C08_PET-CT_AnimGif_BSpl2_ResCrop.gif‎|After BSpline registration of cropped volumes: registering only the thoracic and abdominal regions produces good results. We nevertheless apply this transform to the full (uncropped) image.
Image:RegLib_C08_AGif_BFitreg.gif‎‎|Affine + BSpline registration obtained from the BRAINSFit module.
Image:RegLib C08 PETaligned AnimGif.gif|PET image of the second timepoint aligned with the first, obtained by resampling the original PET with the BSpline transform.
</gallery>
  
 
===Download ===
*Data
**[[Media:RegLib_C08_WholeBody_PET-CT.zip‎‎|'''Raw''' (uncropped) data set only (use to try out the cropping procedure) <small> (NRRD volumes, zip file 135 MB) </small>]]
**[[Media:RegLib_C08_Data.zip‎ |'''Cropped''' data set, incl. solution transforms & presets <small> (zip file, .nrrd files, 93 MB) </small>]]
*Presets
**[[Media:RegLib_C08_Presets_BRAINSFit.mrml‎ |'''Registration Presets''' <small> (.mrml file, 9 kB) </small>]]
**[[Projects:RegistrationDocumentation:ParameterPresetsTutorial|Link to User Guide: How to Load/Save Registration Parameter Presets]]
*Documentation
**[[Media:RegLib_C08_Tutorial.ppt|step-by-step tutorial (PowerPoint) <small> (.ppt 2 MB) </small>]]
**[[Media:RegLib_C08_Tutorial.pdf|step-by-step tutorial (PDF) <small> (.pdf 4 MB) </small>]]
=== Procedure ===
*'''Phase I: Preprocessing: center'''
#The two volume sets have different origins specified in their header files. We reset both to obtain a rough alignment:
##Go to the ''Volumes'' module and select the ''Info'' tab
##From the ''Active Volume'' menu, select s1_CT; then click the ''Center Volume'' button
##Repeat for s1_PET, s2_CT, etc.
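Conceptually, centering picks a new origin so that the middle of the voxel grid lands at physical coordinate (0, 0, 0) for every volume, which is what gives the rough initial alignment. A plain-Python sketch (the exact Slicer convention may differ in details):

```python
# Sketch of what "Center Volume" does conceptually (the exact Slicer
# convention may differ): choose a new origin so that the geometric
# center of the voxel grid lands at physical coordinate (0, 0, 0).
def centered_origin(spacing, dims):
    # grid center in voxel units is (dims - 1) / 2; place it at zero
    return tuple(-s * (d - 1) / 2.0 for s, d in zip(spacing, dims))

sp, dims = (0.97, 0.97, 3.27), (512, 512, 267)   # baseline CT (s1_CT)
origin = centered_origin(sp, dims)
# verify: origin + spacing * (dims - 1) / 2 is back at (0, 0, 0)
center = tuple(o + s * (d - 1) / 2.0 for o, s, d in zip(origin, sp, dims))
```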
*'''Phase II: Preprocessing: crop'''
#Crop to the region of interest: to avoid bias from the strong differences in head position, we reduce the effective FOV to the abdominal region of interest
##Go to the “Extract Subvolume ROI” module
##Click in the respective slice views (axial, sagittal, coronal) to set the boundaries; when done, apply the selection
##We clip both s1_CT and s2_CT between the 5th lumbar and the 5th thoracic vertebrae. For a separate tutorial on how to use the Subvolume module, see the Slicer training compendium
##To skip this step, load the ready-made cropped volumes from the example dataset: s1_CT_crop, s2_CT_crop
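In array terms, the cropping step amounts to taking a subvolume by index ranges; a toy plain-Python sketch (nested lists stand in for the CT volume, and Slicer's module additionally updates the image origin, which is omitted here):

```python
# Cropping to a region of interest, illustrated on a plain 3D array.
def crop(volume, zmin, zmax, ymin, ymax, xmin, xmax):
    """Return the subvolume volume[zmin:zmax, ymin:ymax, xmin:xmax]."""
    return [[row[xmin:xmax] for row in sl[ymin:ymax]]
            for sl in volume[zmin:zmax]]

# Toy 4x4x4 volume whose voxel values encode the slice (z) index.
vol = [[[z for _ in range(4)] for _ in range(4)] for z in range(4)]
sub = crop(vol, 1, 3, 0, 4, 0, 4)   # keep slices 1..2, like the clipping step
# sub now holds 2 slices; the first kept slice carries the value 1
```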
*'''Phase III: Affine Registration (Expert Automated Module)'''
#Open the ''Expert Automated Registration'' module
##Fixed Image: s1_CT
##Moving Image: s2_CT
##Save Transform: create new, rename to "Xf1_s2-s1_Affine"
##If running on uncropped/uncentered data: check ''Initialization: Centers of Mass''
##Registration: Pipeline Affine; Metric: MattesMI
##Expected offset magnitude: 50
##Expected rotation, scale, skew magnitude: leave at default
##“Advanced Affine Registration Parameters” tab: Affine Max Iterations: 10, Affine sampling ratio: 0.02
##Click: ''Apply''
##Go to the ''Data'' module and drag the moving volume inside the newly created transform to see the result
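The MattesMI metric selected above drives the optimization by maximizing mutual information (MI) between fixed and moving intensities. The following is an illustrative plain-Python sketch of MI from a simple joint histogram; the real Mattes implementation uses Parzen windowing and is not reproduced here:

```python
import math
from collections import Counter

# Illustrative sketch of mutual information, the quantity behind the
# MattesMI metric (simple joint-histogram version, not Slicer's code).
def mutual_information(fixed, moving, bins=8):
    def binned(vals):
        lo, hi = min(vals), max(vals)
        w = (hi - lo) / bins or 1.0      # avoid zero width for flat images
        return [min(int((v - lo) / w), bins - 1) for v in vals]
    f, m = binned(fixed), binned(moving)
    n = float(len(f))
    pj, pf, pm = Counter(zip(f, m)), Counter(f), Counter(m)
    return sum((c / n) * math.log((c / n) / ((pf[a] / n) * (pm[b] / n)))
               for (a, b), c in pj.items())

# Well-aligned identical images share much information; a constant
# (uninformative) image shares none. Registration maximizes MI.
a = [0, 0, 1, 1, 2, 2, 3, 3] * 8
mi_aligned = mutual_information(a, a)
mi_flat = mutual_information(a, [1] * len(a))
```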
*'''Phase IV: Non-rigid Registration (Fast Nonrigid BSpline Module)'''
#Open the ''Fast Nonrigid BSpline'' module
##Fixed Image: s1_CT_crop
##Moving Image: s2_CT_crop
##Initial Transform: Xform_Aff0_Init
##Output transform: create new, rename to “Xform_BSpl2”
##Output Volume: create new, rename to “s2_CT_BSpl2”
##Iterations: 120; Grid Size: 9; Histogram Bins: 50
##Spatial Samples: 150000
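To get a feel for the settings above: the grid size fixes the control-point spacing of the BSpline, and the spatial-sample count is a small fraction of the cropped volume. A back-of-envelope sketch in plain Python (the cropped extent and slice count used below are made-up placeholders, since the true values depend on where the crop boundaries were set):

```python
# "Grid Size: 9" means on the order of 9 control-point intervals per
# axis, so control-point spacing follows from the cropped extent.
def control_point_spacing(extent_mm, grid_size):
    return tuple(e / grid_size for e in extent_mm)

# Hypothetical cropped thoraco-abdominal FOV, for illustration only.
spacing = control_point_spacing((497.0, 497.0, 400.0), 9)

# "Spatial Samples: 150000" relative to a hypothetical cropped voxel
# count of 512 x 512 x 122 slices:
fraction = 150000 / (512 * 512 * 122)   # well under 1% of all voxels
```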
----
*'''Phase III-IV alternative: BRAINSFit''': Affine + BSpline
#Go to the ''BRAINSFit'' registration module
##Select parameter presets from the pulldown menu: '''Xf2_S2-s1_cropped''', or set the parameters below:
##Fixed Image: s1_CT_crop
##Moving Image: s2_CT_crop
##Check: include Rigid, include ScaleVersor3D, include Affine, include BSpline
##Output: Slicer BSpline Transform: create new, rename to "Xf2_s2c-s1c_BFit"
##Output Image Volume: create new, rename to "S2_CT_crop_Xf2"
##Output Image Pixel Type: "short"
##Registration Parameters: Number of grid subdivisions: 3,3,3; leave the rest at default settings
##Click: Apply
For more details, see the tutorial under Downloads.
 
=== Discussion: Registration Challenges ===
*accuracy is the critical criterion here. We need the registration error (residual misalignment) to be smaller than the change we want to measure/detect. Agreement on what constitutes good alignment can therefore vary greatly.
*the two series have different voxel sizes
*because of the large FOV we have strong non-rigid deformations from differences in patient/limb positions etc.
*the images are large volumes (>100 MB total)
*image content reaches the border of the image on two sides
*two image pairs have to be aligned, i.e. the calculated transform must also be applied to the second (PET) image.
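The last point works because each PET shares its CT's physical coordinate frame, so the transform estimated on the CT pair can simply be reused to resample the PET. A toy plain-Python sketch (the affine values are made up for illustration):

```python
# One affine transform, y = A*x + b, applied to physical points that may
# come from either the CT or the co-registered PET of the same session.
def apply_affine(matrix, offset, point):
    """Apply a 3x3 matrix (nested lists) plus a 3-vector offset."""
    return tuple(sum(matrix[i][j] * point[j] for j in range(3)) + offset[i]
                 for i in range(3))

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
shift = (0.0, 0.0, -12.5)           # e.g. a table-position difference
ct_point = (10.0, 20.0, 30.0)
pet_point = (10.0, 20.0, 30.0)      # same physical location in the PET
# both points map identically under the CT-derived transform
```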
 
=== Discussion: Key Strategies ===
*to calculate the transform, we use the images with the most accurate geometric representation and the smallest expected change, i.e. we align the follow-up CT to the baseline CT and then apply the transform to the PET image.
*because of the non-rigid differences due to posture and breathing, we need a 2-step registration: an affine alignment followed by a BSpline.
*the strong differences in head position are likely to distract the registration and lead to suboptimal results. Hence we produce a cropped version of the two CT images to calculate the BSpline transform.
*the two images are far apart initially, so we need some form of initialization. We try an automated alignment first; if this fails, we do a 2-step process with manual initial alignment followed by automated affine registration.
*because accuracy is more important than speed here, we increase the iterations and sampling rates. Note, however, the large image size, which makes a comparable sampling percentage still large compared to other datasets.
*the two images have identical contrast, hence we could consider "sharper" cost functions, such as NormCorr or MeanSqrd; however, these are not (yet) available for the BSpline registration.
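The NormCorr and MeanSqrd cost functions mentioned above can be sketched in a few lines of plain Python (illustrative formulas on 1-D intensity lists, not Slicer's implementations):

```python
import math

# Mean squared difference: 0 for a perfect same-intensity match.
def mean_squared(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Normalized correlation: 1 for a perfect match, tolerant of global
# linear intensity scaling (hence suited to same-contrast image pairs).
def normalized_correlation(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

img = [10.0, 40.0, 80.0, 120.0, 60.0]
same = list(img)
# mean_squared(img, same) is 0.0; normalized_correlation(img, same) is 1.0
```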
 

Latest revision as of 17:36, 10 July 2017
