Execution Model Reference Systems

The issue

In Slicer3, every geometric entity is associated with a transform (via the MRML tree hierarchy): every image, model or fiducial (vtkImageData, vtkPolyData, ...) is associated with a vtkMatrix4x4 defining its orientation in space. Therefore, the actual point coordinates stored in a vtkPolyData may be expressed in an arbitrary coordinate system, yet the object will still be properly located in the MRML scene, in the RAS coordinate system. Similarly, vtkImageData objects, which VTK defines as structured grids aligned with the x, y, z axes, can have arbitrary orientations in space.

While this is a powerful mechanism, maintaining consistency within the Execution Model is challenging.

When calling a CLI plugin, Slicer writes temporary data files that are fed to the CLI:

  • the vtkPolyData point coordinates are first transformed to RAS (i.e. the points are transformed with the proper vtkMatrix4x4) and the file is written in VTK format. When the CLI program reads the VTK file, it implicitly has both the data and the transform (which has become the identity matrix, since the transformation was applied before the call to the CLI program).
  • fiducials are passed along as a comma-separated list of float arguments, expressed in RAS (for now, although an extension to LPS and IJK has been planned - note that if one specifies LPS, one should also specify LPS of which image, otherwise one fiducial in LPS corresponds to many possible fiducials in RAS)
  • vtkImageData is stored in a nrrd file, with the information from the vtkMatrix4x4 stored in the header as the space origin and space directions (which also encode the spacing; see the sketch after this list); when the Archetype reader in the CLI reads the file back in, the two objects are recreated: in the current implementation the vtkImageData carries the origin and spacing information, while the vtkMatrix4x4 expresses the RAS to IJK transform.

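To make the last point concrete: the nrrd space directions combine orientation and spacing, and the spacing along each axis is simply the norm of the corresponding direction column. A minimal sketch of this factoring in plain VTK (the function name is illustrative, not part of the Archetype reader API):

  #include <vtkMatrix4x4.h>
  #include <cmath>

  // Sketch: given a 4x4 matrix whose upper-left 3x3 block holds the nrrd
  // space directions as columns, recover the per-axis spacing (the column
  // norms) and normalize the columns to leave a pure orientation matrix.
  void FactorSpacingFromDirections(vtkMatrix4x4* directions, double spacing[3])
  {
    for (int col = 0; col < 3; ++col)
    {
      double norm = 0.0;
      for (int row = 0; row < 3; ++row)
      {
        double e = directions->GetElement(row, col);
        norm += e * e;
      }
      norm = std::sqrt(norm);
      spacing[col] = norm;
      if (norm == 0.0)
      {
        continue; // degenerate column, leave as-is
      }
      for (int row = 0; row < 3; ++row)
      {
        directions->SetElement(row, col, directions->GetElement(row, col) / norm);
      }
    }
  }
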
The problem is that vtkImageData as a stand-alone object lives in LPS, while fiducials and geometry live in RAS. Therefore, if a CLI program has to locate a point of a surface on top of an image, the plugin coder has to transform the point and only then probe the image. This adds an extra layer to what VTK usually requires, and potentially means a major code revision for VTK-based projects that have to be integrated with Slicer.
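
For example, probing an image at a surface point currently requires an explicit conversion of this kind. This is a minimal sketch, assuming the RAS to IJK matrix returned by the Archetype reader maps world RAS coordinates directly to voxel indices (function and variable names are illustrative):

  #include <vtkImageData.h>
  #include <vtkMatrix4x4.h>

  // Sketch: look up the scalar value under a surface point given in RAS,
  // using the RAS to IJK matrix provided alongside the vtkImageData.
  double ProbeImageAtRASPoint(vtkImageData* image, vtkMatrix4x4* rasToIJK,
                              const double rasPoint[3])
  {
    double ras[4] = { rasPoint[0], rasPoint[1], rasPoint[2], 1.0 };
    double ijk[4];
    rasToIJK->MultiplyPoint(ras, ijk); // the extra step plain VTK code would not need

    // Round to the nearest voxel index (no bounds checking in this sketch).
    int i = static_cast<int>(ijk[0] + 0.5);
    int j = static_cast<int>(ijk[1] + 0.5);
    int k = static_cast<int>(ijk[2] + 0.5);
    return image->GetScalarComponentAsDouble(i, j, k, 0);
  }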

A potential solution for this problem would be the following:

Extend the Archetype reader to output the RAS to LPS matrix (i.e. origin and spacing would be factored out from the vtkMatrix4x4). This way, assuming all images in the CLI have the same orientation matrix (which is reasonable since it's a constraint imposed by VTK), all vtkPolyData and fiducials can be transformed before the actual processing without losing their shape, and point location or extraction could be left unmodified in the CLI algorithm code. The plugin coder would have to worry about transforms only at the beginning and at the end of the CLI program, which would ease the effort of interfacing existing projects with Slicer. [1]
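
Under this proposal a CLI would only touch coordinate systems at its entry and exit points. A rough sketch of the pattern, assuming rasToLPS is the matrix produced by the extended Archetype reader (which does not exist yet):

  #include <vtkSmartPointer.h>
  #include <vtkMatrix4x4.h>
  #include <vtkTransform.h>
  #include <vtkTransformPolyDataFilter.h>
  #include <vtkPolyData.h>

  // Sketch: transform a polydata with a 4x4 matrix, returning a deep copy.
  vtkSmartPointer<vtkPolyData> TransformPolyData(vtkPolyData* input, vtkMatrix4x4* matrix)
  {
    vtkSmartPointer<vtkTransform> transform = vtkSmartPointer<vtkTransform>::New();
    transform->SetMatrix(matrix);
    vtkSmartPointer<vtkTransformPolyDataFilter> filter =
      vtkSmartPointer<vtkTransformPolyDataFilter>::New();
    filter->SetInputData(input); // SetInput() in older VTK versions
    filter->SetTransform(transform);
    filter->Update();
    vtkSmartPointer<vtkPolyData> output = vtkSmartPointer<vtkPolyData>::New();
    output->DeepCopy(filter->GetOutput());
    return output;
  }

  // At the beginning of the CLI:
  //   surfaceLPS = TransformPolyData(surfaceRAS, rasToLPS);
  //   ... the existing VTK algorithm runs unchanged in the image frame ...
  // At the end, the inverse is applied before writing the result:
  //   vtkMatrix4x4::Invert(rasToLPS, lpsToRAS);
  //   resultRAS = TransformPolyData(resultLPS, lpsToRAS);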

Optionally, the RAS to LPS conversion could be implemented in Slicer itself. The XML description would indicate which image entity provides the RAS to LPS transform, and all geometry and fiducials would be transformed into this coordinate system before the call to the CLI. After the CLI writes out data in the reference image LPS, Slicer would take care of applying the inverse (LPS to RAS) transform to whatever it loads up.

The benefit of this approach is nullified if two images with different orientations are passed to the CLI, but i) the RAS to LPS conversion could be optional, and ii) it wouldn't hurt an orientation-aware CLI anyway. The disadvantage is that potentially large vtkPolyData objects have to be duplicated (but this is common to all solutions except the first).

  1. This is similar to what happens in the ModelMaker CLI module, except that in ModelMaker the origin and spacing are set to 0,0,0 and 1,1,1 respectively and the geometry is generated in IJK coordinates. This is not viable as a general solution because i) most image processing algorithms are aware of voxel spacing, and ii) surface curvature, etc. change in the case of anisotropic spacing.