<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.na-mic.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mathieu</id>
	<title>NAMIC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.na-mic.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mathieu"/>
	<link rel="alternate" type="text/html" href="https://www.na-mic.org/wiki/Special:Contributions/Mathieu"/>
	<updated>2026-04-11T00:35:34Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.33.0</generator>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=2010_Project_Week_DICOM_supplement_145&amp;diff=55366</id>
		<title>2010 Project Week DICOM supplement 145</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=2010_Project_Week_DICOM_supplement_145&amp;diff=55366"/>
		<updated>2010-06-25T14:06:51Z</updated>

		<summary type="html">&lt;p&gt;Mathieu: /* Standard support for discrete 2 and 3 manifold storage in ITK */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Image:PW-MIT2010.png|[[2010_Summer_Project_Week#Projects|Projects List]]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Key Investigators==&lt;br /&gt;
* Mathieu Malaterre: CoSMo Software&lt;br /&gt;
* Alex. Gouaillard: CoSMo Software, A*STAR&lt;br /&gt;
* Luis Ibanez: Kitware Inc.&lt;br /&gt;
&lt;br /&gt;
==Project==&lt;br /&gt;
DICOM Supplement 145 provides a way to go beyond the 32-bit limits and allows large images to be stored. We propose to implement this specification.&lt;br /&gt;
In addition, we would implement the DICOM specification for JPEG 2000 Part 2 Multi-component Image Compression. This portion of the standard provides higher&lt;br /&gt;
compression ratios for storing multicomponent images. &lt;br /&gt;
Finally, by implementing DICOM Supplement 132, we would provide support for storing surfaces and 3D volumes, in addition to the 2D RTSTRUCT.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Standard support for Large multicomponent images in ITK ===&lt;br /&gt;
DICOM currently defines Image IODs by storing the rows and columns as unsigned short&lt;br /&gt;
integers. This means that an image can be at most 2^16 * 2^16 pixels in size.&lt;br /&gt;
This is a limitation for microscopy images: for example, a typical Whole Slide Image (WSI) can be 60,000 * 80,000 pixels. Since images are generally stored with 24-bit color&lt;br /&gt;
pixels, this means a WSI can reach about 15 GB. Confocal microscopy images add one additional&lt;br /&gt;
dimension, and today's instruments are already capable of acquiring 24 channels. They are reported to need up&lt;br /&gt;
to petabytes.&lt;br /&gt;
For this reason DICOM Supplement 145 defines a way to store an image across multiple&lt;br /&gt;
DICOM files, providing a means to work around this 32-bit limitation of DICOM.&lt;br /&gt;
By implementing this Supplement (which is still in ballot), we would provide the&lt;br /&gt;
ITK community with a proof of concept and allow people to start saving large images using DICOM.&lt;br /&gt;
This would allow existing technologies to be reused and prevent people from re-inventing the wheel&lt;br /&gt;
by creating yet another file format to exchange medical images (second-system effect). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Standard support for multicomponent images compression in ITK ===&lt;br /&gt;
Microscopy images, on top of sometimes being several orders of magnitude larger than medical images, are also multi-component. Even though ITK handles multicomponent images per se, by defining the right pixel type, nothing is available today for the storage and compression of images that have more than 3 channels (RGB). Since 2001, the DICOM standard has allowed JPEG 2000 compression. GDCM 2.x supports a portion of the standard by providing an API for transfer syntaxes such as JPEG 2000 Image Compression (1.2.840.10008.1.2.4.90 and 1.2.840.10008.1.2.4.91). However, the standard also includes JPEG 2000 Part 2 Multi-component Image Compression (1.2.840.10008.1.2.4.92 and 1.2.840.10008.1.2.4.93). The latter has never made it into GDCM / ITK, or any other open-source DICOM toolkit, since, as quoted from the presentation “Image Compression Refresher – JPEG 2000 and 3D” by David Clunie, the compression gain was modest (using lossless compression). We now see an opportunity for this compression to make it into GDCM / ITK, since microscopy images fit perfectly into the original design of the compression (ISO/IEC 15444-2:2003 Annex J). This would ease the dissemination of large datasets by reusing standard compression techniques, since it would greatly reduce the size of those datasets. This will be particularly useful for microscopy images. At the time of writing, no other open-source DICOM toolkit offers this compression algorithm. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Standard support for discrete 2 and 3 manifold storage in ITK ===&lt;br /&gt;
From its early design, ITK has offered n-dimensional n-manifold (polygonal mesh) support through the itk::Mesh class. However, at the time of writing of this proposal, there is still no official way to read or write those meshes from and to a filesystem in ITK. Only a hybrid solution is available in InsightApplications, and it implies a dependency on the entire VTK library, which is overkill most of the time.&lt;br /&gt;
Thanks to the work on itk::QuadEdgeMesh, some progress has been made toward that goal. The Review directory currently holds a very simple VTK PolyData reader and writer. However, it only supports legacy VTK files using ASCII encoding. In its defense, this implementation was only made for regression testing and for illustrating filter usage.&lt;br /&gt;
We propose here to fill this gap in the ITK toolkit and implement DICOM Supplement 132, part of the standard since 2008, which would add storage of surface and volume meshes (2- and 3-manifolds in n-dimensional space; see the supplemental material annex) to GDCM / ITK.&lt;br /&gt;
We suggest 2 different options. The first option would allow storing an itk::QuadEdgeMesh as a DICOM file; the second would allow storing an itk::Mesh as a DICOM file. Of course, reading the corresponding structures from a DICOM file would also be provided.&lt;br /&gt;
This makes a clear distinction between what itk::QuadEdgeMesh handles and what itk::Mesh can handle; both 2- and 3-manifold objects are handled in the DICOM standard.&lt;br /&gt;
We anticipate a large portion of this task to be validation testing. Since there is no other serialization mechanism available in ITK, we will need to set up a hybrid system with VTK to perform validation on the written datasets (a 3D VTK mesh will be used as input for the tests). For this we would reuse code from InsightApplications/Auxiliary/vtk/vtk2itk.cxx.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin: 20px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Objective&amp;lt;/h3&amp;gt;&lt;br /&gt;
The objective of the project week is to share the proposal with the community and to make a list of interested people and corresponding efforts before we start implementing. Typically, at the end of the week we would like to have identified existing parts and ongoing efforts, and to have a roadmap available, possibly with other groups joining in depending on needs, manpower and expertise.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 27%; float: left; padding-right: 3%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Approach, Plan&amp;lt;/h3&amp;gt;&lt;br /&gt;
'''1.''' Compare existing technologies: HDF5, DICOM, JPEG 2000, TIFF (XMP)&lt;br /&gt;
&lt;br /&gt;
'''2.''' Identify technologies in ITK: OpenJPEG v1, which does not support tiles (required for streaming)&lt;br /&gt;
&lt;br /&gt;
'''3.''' Identify groups which would need this technology: maybe CTK?&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 40%; float: left;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;Progress&amp;lt;/h3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''*''' Worked with Kishore on the ITK/JPEG 2000 reader: code review; made it compile&lt;br /&gt;
&lt;br /&gt;
'''*''' Synced with the upstream OpenJPEG team: there are 400 regression tests now!&lt;br /&gt;
&lt;br /&gt;
'''*''' Need to synchronize efforts with the GDCM 2.x and OpenJPEG v2 updates. Prepare a patch for GDCM 2.x to use OpenJPEG v2.&lt;br /&gt;
&lt;br /&gt;
'''*''' Worked with the GoFigure2 team to identify their needs:&lt;br /&gt;
** passing algorithms rather than just passing data&lt;br /&gt;
** image data&lt;br /&gt;
** very large 4D images&lt;br /&gt;
** mesh-based segmentation (2D / 3D meshes)&lt;br /&gt;
** annotations (labels)&lt;br /&gt;
&lt;br /&gt;
'''*''' XMP technology is based on XML, which implies ASCII serialization (poor performance, very verbose).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 97%; float: left;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Delivery Mechanism==&lt;br /&gt;
&lt;br /&gt;
This work will be delivered to the NA-MIC Kit as a:&lt;br /&gt;
&lt;br /&gt;
* ITK Module&lt;br /&gt;
* Other: GDCM extension (GDCM is included in ITK)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* The official DICOM Standard Proposal is available [ftp://medical.nema.org/medical/dicom/supps/sup145_09.pdf here]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Mathieu</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=User:Mathieu&amp;diff=51409</id>
		<title>User:Mathieu</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=User:Mathieu&amp;diff=51409"/>
		<updated>2010-04-13T11:44:21Z</updated>

		<summary type="html">&lt;p&gt;Mathieu: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[mailto:mathieu.malaterre@gmail.com Mathieu Malaterre]&lt;br /&gt;
&lt;br /&gt;
[http://www.mathieumalaterre.com Home Page]&lt;/div&gt;</summary>
		<author><name>Mathieu</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Microscopy_Image_Analysis&amp;diff=51408</id>
		<title>Microscopy Image Analysis</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Microscopy_Image_Analysis&amp;diff=51408"/>
		<updated>2010-04-13T10:34:03Z</updated>

		<summary type="html">&lt;p&gt;Mathieu: /* Participants */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Open Workshop on Microscopy Image Analysis in ITK and VTK =&lt;br /&gt;
The goal of this workshop is to foster the growth of a community of scientists interested in microscopy image analysis for biology using ITK and VTK.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
Optical microscopy is by far the most common form of imaging in biomedical research due to its high spatial resolution (subcellular), high specificity (molecular in the case of fluorescence), and suitability for use in living specimens. A Google Scholar search for &amp;quot;fluorescence microscopy&amp;quot;, only one of several types of optical microscopy, returns 1.7 million articles compared with &amp;lt; 1 million for &amp;quot;MRI&amp;quot;. Traditionally, the vast majority of microscopy users have performed qualitative analysis on a small number of images, but this is quickly changing. There is an increasing need to perform quantitative analysis on microscopy images and to perform this analysis on large image sets (&amp;gt;100,000 images). In addition to higher throughput, recent advances in microscopy have made higher-dimensional imaging commonplace. Researchers now routinely capture microscopy images over the dimensions of space (x,y,z), time (t), and multiple channels of color (lambda). Due to the large datasets, high dimensionality, and complexity of analysis, current approaches to microscopy image analysis relying on Java, Matlab, and “home brew” applications are reaching their limits. We believe that a community-based effort centered on developing microscopy-specific algorithms and applications built on the C++ class libraries of VTK and ITK represents the best path forward.&lt;br /&gt;
&lt;br /&gt;
== Focus ==&lt;br /&gt;
The focus of this workshop will be on segmentation and tracking of cells in optical microscopy images. Segmentation and tracking of cells represent a very common problem in microscopy image analysis. Although there is a common pipeline for many users (e.g. image preprocessing to remove noise, detection of seeds, detection of cells at single timepoints, tracking movements over time, data analysis), the algorithm parameters and the algorithms themselves often depend on the specifics of the experimental setup. There is thus a strong need to develop a framework that allows users to choose algorithms and tune parameters to achieve, first and foremost, robust segmentation and, secondarily, minimal computational cost.&lt;br /&gt;
&lt;br /&gt;
==Format==&lt;br /&gt;
The format for this meeting will be a “track” within the NA-MIC Project Week 2010 meeting at MIT in Boston, MA on June 21-25. Participants in this workshop should all have specific coding projects relating to cell segmentation and tracking that they wish to complete within the week. Ideally these projects should be collaborative. At the beginning of the meeting on Monday, workshop participants will present a one-slide summary of their project goals as part of the overall meeting. For the rest of the week, workshop participants will sit in a common area and code on their projects. We will also have a microscopy breakout session on Wednesday. These project weeks tend to be quite productive because of the concentration of available expertise at the meeting. During the week we will also break from coding for a more formal discussion of our current individual efforts, the needs of the microscopy community, the technical issues of combining and exchanging code, and how we should move forward.&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
* Monday afternoon - one-slide lightning talk of the project planned for the week&lt;br /&gt;
* Wednesday afternoon&lt;br /&gt;
** Current efforts (15 minute talks per lab)&lt;br /&gt;
** Roundtable discussion of standards/interfaces&lt;br /&gt;
*** Image file types&lt;br /&gt;
*** Input-output interface for segmentation and tracking filters &lt;br /&gt;
*** Format for outputted data (e.g. automatic annotations of cell size, intensity, cell type) &lt;br /&gt;
*** Greatest common denominator of code: ITK classes, compound filters in ITK, plugins?&lt;br /&gt;
*** Common human tasks&lt;br /&gt;
**** Manual segmentation and editing of results&lt;br /&gt;
**** Visualization of results&lt;br /&gt;
** Future directions&lt;br /&gt;
* Friday - one-slide summary of the results for the week&lt;br /&gt;
* The rest of the time will be spent coding on projects&lt;br /&gt;
&lt;br /&gt;
== Projects ==&lt;br /&gt;
The meat of this workshop is project work. This work should be collaborative, to take full advantage of everyone being together at the conference, to learn other people's approaches, and to flesh out the important needs of microscopy image analysis. If you need help formulating a project, please contact Arnaud Gelas (arnaud_gelas@hms.harvard.edu), who can act as a matchmaker. Please list your projects below.&lt;br /&gt;
* DICOM supplement [ftp://medical.nema.org/medical/dicom/supps/sup145_09.pdf 145]: Microscopy Images in the DICOM Standard ( Malaterre, Gouaillard )&lt;br /&gt;
* Microscopy pre-processing extension of ITK: convolution, deconvolution, wavelets and more ( Lehmann, Gouaillard )&lt;br /&gt;
* Flow Cytometry ( Gouaillard )&lt;br /&gt;
* --&lt;br /&gt;
&lt;br /&gt;
== Participants ==&lt;br /&gt;
Please add your name to the list if you are interested in participating in this workshop.&lt;br /&gt;
# Raghu Machiraju, Ohio State University&lt;br /&gt;
# Kannappan Palaniappan, University of Missouri&lt;br /&gt;
# Badri Roysam, Rensselaer Polytechnic Institute&lt;br /&gt;
# Arnaud Gelas, Harvard Medical School&lt;br /&gt;
# Kishore Mosaliganti, Harvard Medical School&lt;br /&gt;
# Nicolas Rannou, Harvard Medical School&lt;br /&gt;
# Antonin Perrot-Audet, Harvard Medical School&lt;br /&gt;
# Lydie Souhait, Harvard Medical School&lt;br /&gt;
# Sean Megason, Harvard Medical School&lt;br /&gt;
# Luis Ibanez, Kitware&lt;br /&gt;
# Gaetan Lehmann, INRA, Platform of Microscopy and Imaging of Micro-Organisms, Animals and Food&lt;br /&gt;
# Mathieu Malaterre, CoSMo&lt;br /&gt;
# Alex Gouaillard, A*STAR / CoSMo&lt;/div&gt;</summary>
		<author><name>Mathieu</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=NAMIC_Wiki:DTI:DICOM_for_DWI_and_DTI&amp;diff=29240</id>
		<title>NAMIC Wiki:DTI:DICOM for DWI and DTI</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=NAMIC_Wiki:DTI:DICOM_for_DWI_and_DTI&amp;diff=29240"/>
		<updated>2008-08-05T09:57:55Z</updated>

		<summary type="html">&lt;p&gt;Mathieu: /* Private vendor: Siemens */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page should serve as a place where information about DICOM and DWI/DTI data can be maintained. With time, this information could be used as part of automated solutions for learning all the necessary DWI-related information from a DICOM series. A collection of tools for DICOM is [[DICOM|here]].&lt;br /&gt;
&lt;br /&gt;
As long as DICOM support for DWI information remains vendor-specific and/or non-conformant with the information here, the [[NAMIC_Wiki:DTI:Nrrd_format|Nrrd format ]] provides a means of recording the DWI-specific information once it is known.&lt;br /&gt;
&lt;br /&gt;
== DICOM for DWI ==&lt;br /&gt;
&lt;br /&gt;
The recommended tags to use in DICOM are as follows:&lt;br /&gt;
&lt;br /&gt;
 0018 9075 CS 1 Diffusion Directionality&lt;br /&gt;
 0018 9076 SQ 1 Diffusion Gradient Direction Sequence&lt;br /&gt;
 0018 9087 FD 1 Diffusion b-value&lt;br /&gt;
 0018 9089 FD 3 Diffusion Gradient Orientation&lt;br /&gt;
 0018 9117 SQ 1 MR Diffusion Sequence&lt;br /&gt;
 0018 9147 CS 1 Diffusion Anisotropy Type&lt;br /&gt;
&lt;br /&gt;
These are defined in [ftp://medical.nema.org/medical/dicom/final/sup49_ft.pdf Supplement 49]. In particular see section C.8.12.5.9 &amp;quot;MR Diffusion Macro&amp;quot; on pages 94 and 95.&lt;br /&gt;
&lt;br /&gt;
The tags are also referenced in http://medical.nema.org/dicom/2004/04_06PU.PDF (see pages 28-29) as well as in some [http://medical.nema.org/Dicom/minutes/WG-07/WG-07_2005/Minutes-2005-10-20-21-Denver.doc Working Group Minutes] (see pages 155-156).&lt;br /&gt;
&lt;br /&gt;
Two points of interest relative to the NRRD format:&lt;br /&gt;
&lt;br /&gt;
* The definition of &amp;quot;Diffusion Gradient Orientation&amp;quot; implies that the measurement frame is exactly the identity transform.&lt;br /&gt;
* There appears to be no means of recording the full B-matrix when it is known. This is not an issue for any NAMIC datasets, but can arise in small-bore imaging.&lt;br /&gt;
&lt;br /&gt;
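To make the second bullet concrete: for a single diffusion gradient the full B-matrix is just the b-value times the outer product of the unit gradient direction with itself, so recording it would require six unique elements. A minimal sketch (the function name and values are illustrative, not taken from any standard or toolkit):&lt;br /&gt;

```python
# Sketch: the full B-matrix is b * g g^T for a unit gradient direction g.
def b_matrix(b_value, g):
    """Return the six unique elements (bxx, bxy, bxz, byy, byz, bzz)."""
    gx, gy, gz = g
    return (b_value * gx * gx, b_value * gx * gy, b_value * gx * gz,
            b_value * gy * gy, b_value * gy * gz, b_value * gz * gz)
```
&lt;br /&gt;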
== Private vendor: GE ==&lt;br /&gt;
&lt;br /&gt;
For GE scanners, Signa Excite 12.0 and later, the following tags are reserved for diffusion weighted images:&lt;br /&gt;
&lt;br /&gt;
* (0019,10e0) : # DTI diffusion directions (release 10.0 &amp;amp; above)&lt;br /&gt;
* (0019,10df) : # DTI diffusion directions (release 9.0 &amp;amp; below)&lt;br /&gt;
* (0019,10d9) : Concatenated SAT {# DTI Diffusion Dir., release 9.0 &amp;amp; below}&lt;br /&gt;
* (0021,105A) : diffusion direction&lt;br /&gt;
* (0043,1039) : Slop_int_6... slop_int_9: (in the GEMS_PARM_01 block)&lt;br /&gt;
** 6: b_value&lt;br /&gt;
** 7: private imaging options 2&lt;br /&gt;
** 8: ihtagging&lt;br /&gt;
** 9: ihtagspc&lt;br /&gt;
&lt;br /&gt;
This information can be found in http://www.gehealthcare.com/usen/interoperability/dicom/docs/5162373r1.pdf&lt;br /&gt;
&lt;br /&gt;
Unfortunately the [[DataRepository|Dartmouth DWI data ]] (from a GE Signa scanner) does not conform to this (nor do they use the nominally standard 0x0018 tags), as can be seen by running:&lt;br /&gt;
&lt;br /&gt;
 dcdump S4.100 | &amp;amp; grep \(0x0019,0x10&lt;br /&gt;
&lt;br /&gt;
which includes:&lt;br /&gt;
&lt;br /&gt;
 (0x0019,0x10d9) DS Concatenated SAT      VR=&amp;lt;DS&amp;gt;   VL=&amp;lt;0x0008&amp;gt;  &amp;lt;0.000000&amp;gt;&lt;br /&gt;
 (0x0019,0x10df) DS User Data     VR=&amp;lt;DS&amp;gt;   VL=&amp;lt;0x0008&amp;gt;  &amp;lt;0.000000&amp;gt;&lt;br /&gt;
 (0x0019,0x10e0) DS User Data     VR=&amp;lt;DS&amp;gt;   VL=&amp;lt;0x0008&amp;gt;  &amp;lt;0.000000&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so all of the tags that are supposed to store the number of gradient directions store the value 0! In addition, there is:&lt;br /&gt;
&lt;br /&gt;
 (0x0021,0x105a) SL Integer Slop          VR=&amp;lt;SL&amp;gt;   VL=&amp;lt;0x0004&amp;gt;  [0x00000000]&lt;br /&gt;
&lt;br /&gt;
so the supposed representation of diffusion-direction is also empty. The Dartmouth data has the following tags describing the scanner and software version:&lt;br /&gt;
&lt;br /&gt;
 (0008,1090) LO [GENESIS_SIGNA]&lt;br /&gt;
 (0018,1020) LO [09]&lt;br /&gt;
&lt;br /&gt;
In GE DWI images (software version 12.0):&lt;br /&gt;
&lt;br /&gt;
 (0008,1090) LO [SIGNA EXCITE]&lt;br /&gt;
 (0018,1020) LO [12\LX\MR Software release:12.0_M4_0520.a]&lt;br /&gt;
&lt;br /&gt;
The diffusion directions are stored under the following tags:&lt;br /&gt;
&lt;br /&gt;
 (0019,10bb) DS [0.430617]&lt;br /&gt;
 (0019,10bc) DS [-0.804161]&lt;br /&gt;
 (0019,10bd) DS [-0.420008]&lt;br /&gt;
&lt;br /&gt;
== Private vendor: Siemens ==&lt;br /&gt;
&lt;br /&gt;
A Siemens DICOM Conformance Statement is available at&lt;br /&gt;
&lt;br /&gt;
 http://www.medical.siemens.com/siemens/en_INT/rg_marcom_FBAs/files/brochures/DICOM/mr/dcs_trio.pdf&lt;br /&gt;
&lt;br /&gt;
No diffusion-related tags are specified.&lt;br /&gt;
&lt;br /&gt;
David Tuch has stated (in email from December 21, 2005):&lt;br /&gt;
&lt;br /&gt;
 The diffusion gradient information and coordinate frame are not provided&lt;br /&gt;
 in the DICOM hdr for the MGH diffusion sequences.&lt;br /&gt;
&lt;br /&gt;
'''Tag (0029,1010)''' may include all the necessary information.&lt;br /&gt;
&lt;br /&gt;
If you have SPM (and Matlab) installed, the following SPM snippet extracts the gradient info:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
 P= spm_get(Inf,'*','Select some files')&lt;br /&gt;
 hdr=spm_dicom_headers(P)&lt;br /&gt;
 hdr{1}.CSAImageHeaderInfo(22).item(1).val&lt;br /&gt;
 hdr{1}.CSAImageHeaderInfo(22).item(2).val&lt;br /&gt;
 hdr{1}.CSAImageHeaderInfo(22).item(3).val&lt;br /&gt;
&lt;br /&gt;
Look for &amp;quot;spm_dicom_headers.m&amp;quot; (via Google or on your hard disk); this SPM file shows how to decode the tag data.&lt;br /&gt;
&lt;br /&gt;
With spm_dicom_headers.m as a reference, it is easy to write a C or C++ program that does the same.&lt;br /&gt;
&lt;br /&gt;
Credits: Jan Klein &amp;lt;klein AT mevis DOT de&amp;gt;&lt;br /&gt;
&lt;br /&gt;
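As an alternative sketch in Python: the CSA block layout is not part of the DICOM standard, so the parser below follows the community reverse-engineered &amp;quot;CSA2&amp;quot; layout (the one implemented by tools such as nibabel and GDCM) and should be treated as an illustration rather than a reference implementation.&lt;br /&gt;

```python
import struct

def parse_csa2(blob):
    """Parse a Siemens CSA2 private block (tag (0029,1010) or (0029,1020))
    into {tag_name: [item bytes, ...]}.  Layout per community reverse
    engineering of the unofficial format, not any published specification."""
    assert blob[:4] == b"SV10"                       # CSA2 magic
    n_tags, check = struct.unpack_from("<II", blob, 8)
    assert check == 77                               # fixed sanity value
    pos, out = 16, {}
    for _ in range(n_tags):
        name = blob[pos:pos + 64].split(b"\x00", 1)[0].decode("ascii")
        vm, vr, syngodt, n_items, xx = struct.unpack_from("<i4siii", blob, pos + 64)
        pos += 84
        items = []
        for _ in range(n_items):
            x0, item_len, x2, x3 = struct.unpack_from("<4i", blob, pos)
            pos += 16
            items.append(blob[pos:pos + item_len])
            pos += (item_len + 3) // 4 * 4           # item data padded to 4 bytes
        out[name] = items
    return out
```

With this, the b-value and gradient direction come out as the items of the 'B_value' and 'DiffusionGradientDirection' tags, matching the gdcmdump listing above.&lt;br /&gt;
&lt;br /&gt;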
Another solution is simply to download the GDCM 2.x application (gdcmdump) and run:&lt;br /&gt;
&lt;br /&gt;
 $ gdcmdump --csa input.dcm&lt;br /&gt;
&lt;br /&gt;
Output should look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(0029,0010)siemens csa header&lt;br /&gt;
Image shadow data (0029,xx10)&lt;br /&gt;
&lt;br /&gt;
0 - 'EchoLinePosition' VM 1, VR IS, SyngoDT 6, NoOfItems 6, Data '192     '&lt;br /&gt;
1 - 'EchoColumnPosition' VM 1, VR IS, SyngoDT 6, NoOfItems 6, Data '128     '&lt;br /&gt;
2 - 'EchoPartitionPosition' VM 1, VR IS, SyngoDT 6, NoOfItems 6, Data '16      '&lt;br /&gt;
3 - 'UsedChannelMask' VM 1, VR UL, SyngoDT 9, NoOfItems 6, Data '255     '&lt;br /&gt;
4 - 'Actual3DImaPartNumber' VM 1, VR IS, SyngoDT 6, NoOfItems 0, Data&lt;br /&gt;
5 - 'ICE_Dims' VM 1, VR LO, SyngoDT 19, NoOfItems 6, Data 'X_1_1_1_1_1_1_1_1_1_1_1_136'&lt;br /&gt;
6 - 'B_value' VM 1, VR IS, SyngoDT 6, NoOfItems 0, Data&lt;br /&gt;
7 - 'Filter1' VM 1, VR IS, SyngoDT 6, NoOfItems 0, Data&lt;br /&gt;
8 - 'Filter2' VM 1, VR IS, SyngoDT 6, NoOfItems 0, Data&lt;br /&gt;
9 - 'ProtocolSliceNumber' VM 1, VR IS, SyngoDT 6, NoOfItems 6, Data '0       '&lt;br /&gt;
10 - 'RealDwellTime' VM 1, VR IS, SyngoDT 6, NoOfItems 6, Data '7500    '&lt;br /&gt;
11 - 'PixelFile' VM 1, VR UN, SyngoDT 0, NoOfItems 0, Data&lt;br /&gt;
12 - 'PixelFileName' VM 1, VR UN, SyngoDT 0, NoOfItems 0, Data&lt;br /&gt;
13 - 'SliceMeasurementDuration' VM 1, VR DS, SyngoDT 3, NoOfItems 6, Data '73375.00000000'&lt;br /&gt;
14 - 'SequenceMask' VM 1, VR UL, SyngoDT 9, NoOfItems 6, Data '0       '&lt;br /&gt;
15 - 'AcquisitionMatrixText' VM 1, VR SH, SyngoDT 22, NoOfItems 6, Data '256*256'&lt;br /&gt;
16 - 'MeasuredFourierLines' VM 1, VR IS, SyngoDT 6, NoOfItems 6, Data '0       '&lt;br /&gt;
17 - 'FlowEncodingDirection' VM 1, VR IS, SyngoDT 6, NoOfItems 0, Data&lt;br /&gt;
18 - 'FlowVenc' VM 1, VR FD, SyngoDT 4, NoOfItems 0, Data&lt;br /&gt;
19 - 'PhaseEncodingDirectionPositive' VM 1, VR IS, SyngoDT 6, NoOfItems 6, Data '1       '&lt;br /&gt;
20 - 'NumberOfImagesInMosaic' VM 1, VR US, SyngoDT 10, NoOfItems 0, Data&lt;br /&gt;
21 - 'DiffusionGradientDirection' VM 3, VR FD, SyngoDT 4, NoOfItems 0, Data&lt;br /&gt;
22 - 'ImageGroup' VM 1, VR US, SyngoDT 10, NoOfItems 0, Data&lt;br /&gt;
23 - 'SliceNormalVector' VM 3, VR FD, SyngoDT 4, NoOfItems 0, Data&lt;br /&gt;
24 - 'DiffusionDirectionality' VM 1, VR CS, SyngoDT 16, NoOfItems 0, Data&lt;br /&gt;
25 - 'TimeAfterStart' VM 1, VR DS, SyngoDT 3, NoOfItems 6, Data '0.00000000'&lt;br /&gt;
26 - 'FlipAngle' VM 1, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
27 - 'SequenceName' VM 1, VR SH, SyngoDT 22, NoOfItems 0, Data&lt;br /&gt;
28 - 'RepetitionTime' VM 1, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
29 - 'EchoTime' VM 1, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
30 - 'NumberOfAverages' VM 1, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
31 - 'VoxelThickness' VM 1, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
32 - 'VoxelPhaseFOV' VM 1, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
33 - 'VoxelReadoutFOV' VM 1, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
34 - 'VoxelPositionSag' VM 1, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
35 - 'VoxelPositionCor' VM 1, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
36 - 'VoxelPositionTra' VM 1, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
37 - 'VoxelNormalSag' VM 1, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
38 - 'VoxelNormalCor' VM 1, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
39 - 'VoxelNormalTra' VM 1, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
40 - 'VoxelInPlaneRot' VM 1, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
41 - 'ImagePositionPatient' VM 3, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
42 - 'ImageOrientationPatient' VM 6, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
43 - 'PixelSpacing' VM 2, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
44 - 'SliceLocation' VM 1, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
45 - 'SliceThickness' VM 1, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
46 - 'SpectrumTextRegionLabel' VM 1, VR SH, SyngoDT 22, NoOfItems 0, Data&lt;br /&gt;
47 - 'Comp_Algorithm' VM 1, VR IS, SyngoDT 6, NoOfItems 0, Data&lt;br /&gt;
48 - 'Comp_Blended' VM 1, VR IS, SyngoDT 6, NoOfItems 0, Data&lt;br /&gt;
49 - 'Comp_ManualAdjusted' VM 1, VR IS, SyngoDT 6, NoOfItems 0, Data&lt;br /&gt;
50 - 'Comp_AutoParam' VM 1, VR LT, SyngoDT 20, NoOfItems 0, Data&lt;br /&gt;
51 - 'Comp_AdjustedParam' VM 1, VR LT, SyngoDT 20, NoOfItems 0, Data&lt;br /&gt;
52 - 'Comp_JobID' VM 1, VR LT, SyngoDT 20, NoOfItems 0, Data&lt;br /&gt;
53 - 'FMRIStimulInfo' VM 1, VR IS, SyngoDT 6, NoOfItems 0, Data&lt;br /&gt;
54 - 'FlowEncodingDirectionString' VM 1, VR SH, SyngoDT 22, NoOfItems 0, Data&lt;br /&gt;
55 - 'RepetitionTimeEffective' VM 1, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
56 - 'CsiImagePositionPatient' VM 3, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
57 - 'CsiImageOrientationPatient' VM 6, VR DS, SyngoDT 3, NoOfItems 0, Data&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ref:&lt;br /&gt;
* http://gdcm.sourceforge.net/wiki/index.php/GDCM_Release_2.0&lt;br /&gt;
&lt;br /&gt;
=== Update ===&lt;br /&gt;
&lt;br /&gt;
For the latest MR scanner software version (2006), the following diffusion attributes are accessible:&lt;br /&gt;
&lt;br /&gt;
 0019;000A;SIEMENS MR HEADER  ;NumberOfImagesInMosaic          ;1;US;1&lt;br /&gt;
 0019;000B;SIEMENS MR HEADER  ;SliceMeasurementDuration        ;1;DS;1&lt;br /&gt;
 0019;000C;SIEMENS MR HEADER  ;B_value                         ;1;IS;1&lt;br /&gt;
 0019;000D;SIEMENS MR HEADER  ;DiffusionDirectionality         ;1;CS;1&lt;br /&gt;
 0019;000E;SIEMENS MR HEADER  ;DiffusionGradientDirection      ;1;FD;3&lt;br /&gt;
 0019;000F;SIEMENS MR HEADER  ;GradientMode                    ;1;SH;1&lt;br /&gt;
 0019;0027;SIEMENS MR HEADER  ;B_matrix                        ;1;FD;6&lt;br /&gt;
 0019;0028;SIEMENS MR HEADER  ;BandwidthPerPixelPhaseEncode    ;1;FD;1&lt;br /&gt;
&lt;br /&gt;
That does not solve the problem for older datasets; unfortunately, there is no easy way to access diffusion information there, as it is stored only in the Siemens shadow (CSA) part.&lt;br /&gt;
&lt;br /&gt;
Credits: Stefan Huwer&lt;br /&gt;
&lt;br /&gt;
=== User Note ===&lt;br /&gt;
&lt;br /&gt;
Do not use the gradient directions from the DICOM header on VB13 systems; they commonly contain errors. Instead, read the b-matrix and use it to compute the gradient directions. In fact, this is also what Siemens does, but they compute the gradient vectors in an error-prone way (in Matlab syntax):&lt;br /&gt;
&lt;br /&gt;
 % BMtx = [bxx bxy bxz byy byz bzz]&lt;br /&gt;
 GradVec = BMtx([1:3])/sqrt(BVal*BMtx([1]));&lt;br /&gt;
&lt;br /&gt;
This gives errors for bxx values of zero or close to one (bxx is an integer); NB: this *does* happen! So instead, read the b-matrix and do:&lt;br /&gt;
&lt;br /&gt;
 BSign   = sign(sign(BMtx([1:3])) + 0.01);   % Adding 0.01 avoids getting zeros here&lt;br /&gt;
 GradVec = BSign .* sqrt(BMtx([1 4 6])/BVal);&lt;br /&gt;
&lt;br /&gt;
In this way, the vectors may point exactly in the opposite direction (sign flip), but this is usually of no importance.&lt;br /&gt;
&lt;br /&gt;
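The same sign-safe recovery, transcribed into Python so it can be checked numerically (a sketch; names mirror the Matlab snippet above):&lt;br /&gt;

```python
from math import copysign, sqrt

def grad_from_bmatrix(bmtx, b_value):
    """Recover a gradient direction (up to overall sign) from the six
    B-matrix elements (bxx, bxy, bxz, byy, byz, bzz), as in the note above."""
    bxx, bxy, bxz, byy, byz, bzz = bmtx
    # Equivalent of the Matlab sign(sign(x) + 0.01) trick: map 0 to +1, not 0
    signs = [copysign(1.0, v) if v != 0 else 1.0 for v in (bxx, bxy, bxz)]
    # Magnitudes come from the diagonal elements bxx, byy, bzz
    mags = [sqrt(max(v, 0.0) / b_value) for v in (bxx, byy, bzz)]
    return [s * m for s, m in zip(signs, mags)]
```
&lt;br /&gt;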
Furthermore, the stored values may differ from the official values because:&lt;br /&gt;
# the directions are calculated from the b-matrix and not directly written to the header. Rounding errors, as well as sign flips may occur in this case.&lt;br /&gt;
# the b-matrix also takes the diffusion effect of imaging and spoiling gradients into account. This leads to deviations from the originally specified orientations.&lt;br /&gt;
# the vectors are expressed in the patient coordinate system (Sag/Cor/Tra), not in Slice/Read/Phase, following the DICOM convention.&lt;br /&gt;
&lt;br /&gt;
Credits: Marcel Zwiers&lt;br /&gt;
&lt;br /&gt;
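Point 3 above implies that a measurement-frame rotation may still be needed: projecting the patient-frame (Sag/Cor/Tra) gradient onto the image row, column, and slice-normal directions derived from ImageOrientationPatient. A plain-Python sketch under that assumption (the vectors in the test values are illustrative):&lt;br /&gt;

```python
def to_image_frame(g_patient, iop):
    """Express a patient-frame (Sag/Cor/Tra) vector in image row/col/normal
    coordinates.  iop is the 6-value ImageOrientationPatient (row then col
    direction cosines); the slice normal is their cross product."""
    row, col = iop[:3], iop[3:]
    normal = (row[1] * col[2] - row[2] * col[1],   # cross product row x col
              row[2] * col[0] - row[0] * col[2],
              row[0] * col[1] - row[1] * col[0])
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return [dot(g_patient, row), dot(g_patient, col), dot(g_patient, normal)]
```
&lt;br /&gt;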
=== Reference ===&lt;br /&gt;
&lt;br /&gt;
* http://www.mmrrcc.upenn.edu/CAMRIS/cfn/&lt;br /&gt;
&lt;br /&gt;
in particular:&lt;br /&gt;
&lt;br /&gt;
* http://www.mmrrcc.upenn.edu/CAMRIS/cfn/dicomhdr.html&lt;br /&gt;
&lt;br /&gt;
== Private vendor: Philips ==&lt;br /&gt;
&lt;br /&gt;
Philips uses the following tags for diffusion-weighted images:&lt;br /&gt;
&lt;br /&gt;
* (2001,1003) : B_value;&lt;br /&gt;
* (2001,1004) : Diffusion direction.&lt;br /&gt;
&lt;br /&gt;
Complete DICOM conformance statements for&lt;br /&gt;
&lt;br /&gt;
* Intera&lt;br /&gt;
* Achieva&lt;br /&gt;
* Panorama&lt;br /&gt;
* Gyroscan&lt;br /&gt;
* Infinion / Eclipse&lt;br /&gt;
&lt;br /&gt;
are available at http://www.medical.philips.com/main/company/connectivity/mri/&lt;br /&gt;
&lt;br /&gt;
=== DTI Table Gradient ===&lt;br /&gt;
&lt;br /&gt;
 OVERVIEW: &lt;br /&gt;
 &lt;br /&gt;
 This applet computes the gradient table&lt;br /&gt;
 for DTI data acquired on Philips MRI&lt;br /&gt;
 scanners with low, medium, and high &lt;br /&gt;
 directional resolution.&lt;br /&gt;
 &lt;br /&gt;
 The imaging parameters and DTI options&lt;br /&gt;
 listed can affect the gradient table.&lt;br /&gt;
 e.g.directions from a gradient overplus&lt;br /&gt;
 = yes table need to be adjusted for the&lt;br /&gt;
 slice angulation (i.e. oblique slices)&lt;br /&gt;
 specific to each DTI study &lt;br /&gt;
 &lt;br /&gt;
 The output is appropriate for DTIstudio.&lt;br /&gt;
 I hope this applet is useful, but please&lt;br /&gt;
 use it at your own risk. Matlab code with&lt;br /&gt;
 more functionality is available on my &lt;br /&gt;
 webpage&lt;br /&gt;
&lt;br /&gt;
* http://godzilla.kennedykrieger.org/%7Ejfarrell/OTHERphilips/GUI.html&lt;br /&gt;
&lt;br /&gt;
== DICOM for estimated diffusion tensors ==&lt;br /&gt;
&lt;br /&gt;
There is currently no specification in DICOM for storing tensors, only a supplement:&lt;br /&gt;
&lt;br /&gt;
[ftp://medical.nema.org/medical/dicom/supps/sup63_pc.pdf Supp 63 Parts 3,4,5,6,16,17 Multi-dimensional Interchange Object ]&lt;br /&gt;
&lt;br /&gt;
The discussion would then be as follows (quoting D. Clunie):&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
 Indeed, even if one were to try to standardize in DICOM the encoding&lt;br /&gt;
 of the entire diffusion tensor, there would no doubt be considerable&lt;br /&gt;
 debate as to whether to do that as 6 (or 9) planes of an &amp;quot;image&amp;quot;, since&lt;br /&gt;
 there is such a matrix at each spatial location (&amp;quot;pixel&amp;quot;), or as&lt;br /&gt;
 a special case of the proposed Sup 63 object; the former would keep&lt;br /&gt;
 image-oriented tools and software happier, the latter would require&lt;br /&gt;
 implementing a new mechanism and navigating through a more general&lt;br /&gt;
 structure.&lt;br /&gt;
&lt;br /&gt;
== Discussion on DICOM newsgroup ==&lt;br /&gt;
&lt;br /&gt;
For cross-reference, the thread on the DICOM newsgroup: http://groups.google.com/group/comp.protocols.dicom/browse_frm/thread/3d292d9c506b1cbf&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
 Hi Mathieu&lt;br /&gt;
 &lt;br /&gt;
 Thanks for the very interesting links.&lt;br /&gt;
 &lt;br /&gt;
 The proposed standard way to address this problem is with&lt;br /&gt;
 the Enhanced MR IOD, which explicitly addresses the&lt;br /&gt;
 attributes for encoding diffusion directionality and B-Value.&lt;br /&gt;
 &lt;br /&gt;
 There will likely be no extensions of the existing MR IOD to&lt;br /&gt;
 address this concern, since by policy, we obviously want&lt;br /&gt;
 folks to use the new IOD.&lt;br /&gt;
 &lt;br /&gt;
 It would not surprise me if in the interim, vendors started&lt;br /&gt;
 to send some of the new Sup 49 attributes in old IOD&lt;br /&gt;
 instances, but there will almost certainly be no official move&lt;br /&gt;
 to standardize this, and we would certainly not consider&lt;br /&gt;
 adding a non-Sup 49 based mechanism.&lt;br /&gt;
 &lt;br /&gt;
 If the Sup 49 mechanisms are not sufficient (e.g. for&lt;br /&gt;
 tensors), then we need to discuss what the gaps are&lt;br /&gt;
 and how to fill them.&lt;br /&gt;
 &lt;br /&gt;
 We would not, by the way, ever &amp;quot;encapsulate&amp;quot; the NRRD&lt;br /&gt;
 header, but it would nice if there was a clear mapping between&lt;br /&gt;
 the DICOM Sup 49 attributes and the relevant NRRD&lt;br /&gt;
 attributes, though these are probably obvious.&lt;br /&gt;
 &lt;br /&gt;
 David&lt;br /&gt;
 &lt;br /&gt;
 Mathieu Malaterre wrote:&lt;br /&gt;
 &amp;gt; Hello again,&lt;br /&gt;
 &amp;gt;&lt;br /&gt;
 &amp;gt;    I am currently involved in a group which is working extensively with&lt;br /&gt;
 &amp;gt; DWI data (diffusion-weighted image). Their acquisition data is coming&lt;br /&gt;
 &amp;gt; in the form of DICOM files. Unfortunately they are currently facing two&lt;br /&gt;
 &amp;gt; problems:&lt;br /&gt;
 &amp;gt; 1. Each vendors stores the gradient directions differently (if at all!)&lt;br /&gt;
 &amp;gt; 2. Even if you have the directions, you don't have the measurement&lt;br /&gt;
 &amp;gt; frame(*). As far as I understand the DICOM specification, there is&lt;br /&gt;
 &amp;gt; currently no way to store this type of information.&lt;br /&gt;
 &amp;gt;&lt;br /&gt;
 &amp;gt;    For issue #1 I am currently gathering information about the major&lt;br /&gt;
 &amp;gt; vendors so that I can add extraction code into gdcm. This would provide&lt;br /&gt;
 &amp;gt; a function to extract the gradient directions and make invisible to&lt;br /&gt;
 &amp;gt; user the different way vendors use to store this information. For more&lt;br /&gt;
 &amp;gt; information see the Wiki at:&lt;br /&gt;
 &amp;gt; http://www.na-mic.org/Wiki/index.php/NAMIC_Wiki:DTI:DICOM_for_DWI_and_DTI&lt;br /&gt;
 &amp;gt;&lt;br /&gt;
 &amp;gt;    For issue #2, -again correct me if I am wrong- but I could not find&lt;br /&gt;
 &amp;gt; anything that could represent this measurement frame. Therefore the&lt;br /&gt;
 &amp;gt; solution we are using is to use an intermediate file format called&lt;br /&gt;
 &amp;gt; NRRD, see for instance:&lt;br /&gt;
 &amp;gt; http://wiki.na-mic.org/Wiki/index.php/NAMIC_Wiki:DTI:Nrrd_format&lt;br /&gt;
 &amp;gt;    Ideally this information should be accessible from a particular&lt;br /&gt;
 &amp;gt; DICOM tag. Can we request an extension of the DICOM standard to allow&lt;br /&gt;
 &amp;gt; us to store this information directly in the DICOM file, which would&lt;br /&gt;
 &amp;gt; greatly simplify the process (avoiding the intermediate NRRD file&lt;br /&gt;
 &amp;gt; format step).&lt;br /&gt;
 &amp;gt;&lt;br /&gt;
 &amp;gt; Regards,&lt;br /&gt;
 &amp;gt; Mathieu&lt;br /&gt;
 &amp;gt; (*)&lt;br /&gt;
 &amp;gt; &amp;quot;measurement frame&amp;quot;: relationship between the coordinate frame in which&lt;br /&gt;
 &amp;gt; the gradient coefficients are expressed, and the physical coordinate&lt;br /&gt;
 &amp;gt; frame in which image orientation is defined&lt;/div&gt;</summary>
		<author><name>Mathieu</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=NAMIC_Wiki:DTI:DICOM_for_DWI_and_DTI&amp;diff=15362</id>
		<title>NAMIC Wiki:DTI:DICOM for DWI and DTI</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=NAMIC_Wiki:DTI:DICOM_for_DWI_and_DTI&amp;diff=15362"/>
		<updated>2007-09-06T08:49:03Z</updated>

		<summary type="html">&lt;p&gt;Mathieu: /* Private vendor: Philips */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page should serve as a place where information about DICOM and DWI/DTI data can be maintained. With time, this information could be used as part of automated solutions for learning all the necessary DWI-related information from a DICOM series. A collection of tools for DICOM is [[DICOM|here]].&lt;br /&gt;
&lt;br /&gt;
As long as DICOM support for DWI information remains vendor-specific and/or non-conformant with the information here, the [[NAMIC_Wiki:DTI:Nrrd_format|Nrrd format ]] provides a means of recording the DWI-specific information once it is known.&lt;br /&gt;
&lt;br /&gt;
== DICOM for DWI ==&lt;br /&gt;
&lt;br /&gt;
The recommended tags to use in DICOM are as follows:&lt;br /&gt;
&lt;br /&gt;
 0018 9075 CS 1 Diffusion Directionality&lt;br /&gt;
 0018 9076 SQ 1 Diffusion Gradient Direction Sequence&lt;br /&gt;
 0018 9087 FD 1 Diffusion b-value&lt;br /&gt;
 0018 9089 FD 3 Diffusion Gradient Orientation&lt;br /&gt;
 0018 9117 SQ 1 MR Diffusion Sequence&lt;br /&gt;
 0018 9147 CS 1 Diffusion Anisotropy Type&lt;br /&gt;
&lt;br /&gt;
These are defined in [ftp://medical.nema.org/medical/dicom/final/sup49_ft.pdf Supplement 49]. In particular see section C.8.12.5.9 &amp;quot;MR Diffusion Macro&amp;quot; on pages 94 and 95.&lt;br /&gt;
&lt;br /&gt;
The tags are also referenced in http://medical.nema.org/dicom/2004/04_06PU.PDF (see pages 28-29) as well as in some [http://medical.nema.org/Dicom/minutes/WG-07/WG-07_2005/Minutes-2005-10-20-21-Denver.doc Working Group Minutes] (see pages 155-156).&lt;br /&gt;
&lt;br /&gt;
Two points of interest relative to the NRRD format:&lt;br /&gt;
&lt;br /&gt;
* The definition of &amp;quot;Diffusion Gradient Orientation&amp;quot; implies that the measurement frame is exactly the identity transform.&lt;br /&gt;
* There appears to be no means of recording the full B-matrix when it is known. This is not an issue for any NAMIC datasets, but can arise in small-bore imaging.&lt;br /&gt;
&lt;br /&gt;
=== Private vendor: GE ===&lt;br /&gt;
&lt;br /&gt;
For GE scanners, Signa Excite 12.0 and later, the following tags are reserved for diffusion weighted images:&lt;br /&gt;
&lt;br /&gt;
* (0019,10e0) : # DTI diffusion directions (release 10.0 &amp;amp; above)&lt;br /&gt;
* (0019,10df) : # DTI diffusion directions (release 9.0 &amp;amp; below)&lt;br /&gt;
* (0019,10d9) : Concatenated SAT {# DTI Diffusion Dir., release 9.0 &amp;amp; below}&lt;br /&gt;
* (0021,105A) : diffusion direction&lt;br /&gt;
* (0043,1039) : Slop_int_6... slop_int_9: (in the GEMS_PARM_01 block)&lt;br /&gt;
** 6: b_value&lt;br /&gt;
** 7: private imaging options 2&lt;br /&gt;
** 8: ihtagging&lt;br /&gt;
** 9: ihtagspc&lt;br /&gt;
&lt;br /&gt;
This information can be found in http://www.gehealthcare.com/usen/interoperability/dicom/docs/5162373r1.pdf&lt;br /&gt;
&lt;br /&gt;
Unfortunately the [[DataRepository|Dartmouth DWI data ]] (from a GE Signa scanner) does not conform to this (nor do they use the nominally standard 0x0018 tags), as can be seen by running:&lt;br /&gt;
&lt;br /&gt;
 dcdump S4.100 | &amp;amp; grep \(0x0019,0x10&lt;br /&gt;
&lt;br /&gt;
which includes:&lt;br /&gt;
&lt;br /&gt;
 (0x0019,0x10d9) DS Concatenated SAT      VR=&amp;lt;DS&amp;gt;   VL=&amp;lt;0x0008&amp;gt;  &amp;lt;0.000000&amp;gt;&lt;br /&gt;
 (0x0019,0x10df) DS User Data     VR=&amp;lt;DS&amp;gt;   VL=&amp;lt;0x0008&amp;gt;  &amp;lt;0.000000&amp;gt;&lt;br /&gt;
 (0x0019,0x10e0) DS User Data     VR=&amp;lt;DS&amp;gt;   VL=&amp;lt;0x0008&amp;gt;  &amp;lt;0.000000&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so all of the tags that are supposed to store the number of gradient directions store the value 0! In addition, there is:&lt;br /&gt;
&lt;br /&gt;
 (0x0021,0x105a) SL Integer Slop          VR=&amp;lt;SL&amp;gt;   VL=&amp;lt;0x0004&amp;gt;  [0x00000000]&lt;br /&gt;
&lt;br /&gt;
so the supposed representation of diffusion-direction is also empty. The Dartmouth data has the following tags describing the scanner and software version:&lt;br /&gt;
&lt;br /&gt;
 (0008,1090) LO [GENESIS_SIGNA]&lt;br /&gt;
 (0018,1020) LO [09]&lt;br /&gt;
&lt;br /&gt;
In GE DWI images (software version 12.0):&lt;br /&gt;
&lt;br /&gt;
 (0008,1090) LO [SIGNA EXCITE]&lt;br /&gt;
 (0018,1020) LO [12\LX\MR Software release:12.0_M4_0520.a]&lt;br /&gt;
&lt;br /&gt;
The diffusion directions are stored under the following tags:&lt;br /&gt;
&lt;br /&gt;
 (0019,10bb) DS [0.430617]&lt;br /&gt;
 (0019,10bc) DS [-0.804161]&lt;br /&gt;
 (0019,10bd) DS [-0.420008]&lt;br /&gt;
&lt;br /&gt;
=== Private vendor: Siemens ===&lt;br /&gt;
&lt;br /&gt;
A Siemens DICOM Conformance Statement is available at&lt;br /&gt;
&lt;br /&gt;
 http://www.medical.siemens.com/siemens/en_INT/rg_marcom_FBAs/files/brochures/DICOM/mr/dcs_trio.pdf&lt;br /&gt;
&lt;br /&gt;
No diffusion-related tags are specified.&lt;br /&gt;
&lt;br /&gt;
David Tuch has stated (in email from December 21, 2005):&lt;br /&gt;
&lt;br /&gt;
 The diffusion gradient information and coordinate frame are not provided&lt;br /&gt;
 in the DICOM hdr for the MGH diffusion sequences.&lt;br /&gt;
&lt;br /&gt;
'''Tag (0029,1010)''' may include all the necessary information.&lt;br /&gt;
&lt;br /&gt;
If you have SPM (and Matlab) installed, the following SPM snippet extracts the gradient info:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
 P= spm_get(Inf,'*','Select some files')&lt;br /&gt;
 hdr=spm_dicom_headers(P)&lt;br /&gt;
 hdr{1}.CSAImageHeaderInfo(22).item(1).val&lt;br /&gt;
 hdr{1}.CSAImageHeaderInfo(22).item(2).val&lt;br /&gt;
 hdr{1}.CSAImageHeaderInfo(22).item(3).val&lt;br /&gt;
&lt;br /&gt;
Look for &amp;quot;spm_dicom_headers.m&amp;quot; (via Google or on your hard disk); this SPM file shows how to decode the tag data.&lt;br /&gt;
&lt;br /&gt;
With spm_dicom_headers.m as a reference, it is easy to write a C or C++ program that does the same.&lt;br /&gt;
&lt;br /&gt;
Credits: Jan Klein &amp;lt;klein AT mevis DOT de&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Update ====&lt;br /&gt;
&lt;br /&gt;
For the latest MR scanner software version (2006), the following diffusion attributes are accessible:&lt;br /&gt;
&lt;br /&gt;
 0019;000A;SIEMENS MR HEADER  ;NumberOfImagesInMosaic          ;1;US;1&lt;br /&gt;
 0019;000B;SIEMENS MR HEADER  ;SliceMeasurementDuration        ;1;DS;1&lt;br /&gt;
 0019;000C;SIEMENS MR HEADER  ;B_value                         ;1;IS;1&lt;br /&gt;
 0019;000D;SIEMENS MR HEADER  ;DiffusionDirectionality         ;1;CS;1&lt;br /&gt;
 0019;000E;SIEMENS MR HEADER  ;DiffusionGradientDirection      ;1;FD;3&lt;br /&gt;
 0019;000F;SIEMENS MR HEADER  ;GradientMode                    ;1;SH;1&lt;br /&gt;
 0019;0027;SIEMENS MR HEADER  ;B_matrix                        ;1;FD;6&lt;br /&gt;
 0019;0028;SIEMENS MR HEADER  ;BandwidthPerPixelPhaseEncode    ;1;FD;1&lt;br /&gt;
&lt;br /&gt;
That does not solve the problem for older datasets; unfortunately, there is no easy way to access diffusion information there, as it is stored only in the Siemens shadow (CSA) part.&lt;br /&gt;
&lt;br /&gt;
Credits: Stefan Huwer&lt;br /&gt;
&lt;br /&gt;
==== User Note ====&lt;br /&gt;
&lt;br /&gt;
Do not use the gradient directions from the DICOM header on VB13 systems; they commonly contain errors. Instead, read the b-matrix and use it to compute the gradient directions. In fact, this is also what Siemens does, but they compute the gradient vectors in an error-prone way (in Matlab syntax):&lt;br /&gt;
&lt;br /&gt;
 % BMtx = [bxx bxy bxz byy byz bzz]&lt;br /&gt;
 GradVec = BMtx([1:3])/sqrt(BVal*BMtx([1]));&lt;br /&gt;
&lt;br /&gt;
This gives errors for bxx values of zero or close to one (bxx is an integer); NB: this *does* happen! So instead, read the b-matrix and do:&lt;br /&gt;
&lt;br /&gt;
 BSign   = sign(sign(BMtx([1:3])) + 0.01);   % Adding 0.01 avoids getting zeros here&lt;br /&gt;
 GradVec = BSign .* sqrt(BMtx([1 4 6])/BVal);&lt;br /&gt;
&lt;br /&gt;
In this way, the vectors may point exactly in the opposite direction (sign flip), but this is usually of no importance.&lt;br /&gt;
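&lt;br /&gt;
The b-matrix recipe above can be sketched in Python (a hedged stand-in for the Matlab snippet; the function name and list layout are assumptions, and the BMtx ordering follows the comment above):&lt;br /&gt;

```python
import math

def gradient_from_bmatrix(bmtx, bval):
    """Recover a gradient direction from a Siemens-style b-matrix.

    bmtx = [bxx, bxy, bxz, byy, byz, bzz]; bval is the nominal b-value.
    Mirrors BSign = sign(sign(BMtx(1:3)) + 0.01) from the note above.
    """
    def sgn(x):
        s = math.copysign(1.0, x) if x != 0 else 0.0
        # Adding 0.01 before taking the sign avoids getting a zero sign here.
        return math.copysign(1.0, s + 0.01)
    bxx, bxy, bxz, byy, byz, bzz = bmtx
    # Component signs come from bxx, bxy, bxz; magnitudes from the diagonal.
    signs = [sgn(bxx), sgn(bxy), sgn(bxz)]
    mags = [math.sqrt(max(v, 0.0) / bval) for v in (bxx, byy, bzz)]
    return [s * m for s, m in zip(signs, mags)]
```

As with the Matlab version, the recovered vector may be sign-flipped as a whole, which is usually of no importance.&lt;br /&gt;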
&lt;br /&gt;
Furthermore, the stored values may differ from the official values due to the fact that&lt;br /&gt;
# the directions are calculated from the b-matrix and not directly written to the header. Rounding errors, as well as sign flips may occur in this case.&lt;br /&gt;
# the b-matrix takes also the diffusion effect of imaging and spoiling gradients into account. This leads to deviations from the originally specified orientations.&lt;br /&gt;
# the vectors are expressed in the patient coordinate system (Sag/Cor/Tra), not in Slice/Read/Phase, following the DICOM convention.&lt;br /&gt;
&lt;br /&gt;
Credits: Marcel Zwiers&lt;br /&gt;
&lt;br /&gt;
==== Reference ====&lt;br /&gt;
&lt;br /&gt;
* http://www.mmrrcc.upenn.edu/CAMRIS/cfn/&lt;br /&gt;
&lt;br /&gt;
in particular:&lt;br /&gt;
&lt;br /&gt;
* http://www.mmrrcc.upenn.edu/CAMRIS/cfn/dicomhdr.html&lt;br /&gt;
&lt;br /&gt;
=== Private vendor: Philips ===&lt;br /&gt;
&lt;br /&gt;
Philips uses the following tags for diffusion-weighted images:&lt;br /&gt;
&lt;br /&gt;
* (2001,1003) : B_value;&lt;br /&gt;
* (2001,1004) : Diffusion direction.&lt;br /&gt;
&lt;br /&gt;
Complete DICOM conformance statements for&lt;br /&gt;
&lt;br /&gt;
* Intera&lt;br /&gt;
* Achieva&lt;br /&gt;
* Panorama&lt;br /&gt;
* Gyroscan&lt;br /&gt;
* Infinion / Eclipse&lt;br /&gt;
&lt;br /&gt;
are available at http://www.medical.philips.com/main/company/connectivity/mri/&lt;br /&gt;
&lt;br /&gt;
==== DTI Table Gradient ====&lt;br /&gt;
&lt;br /&gt;
 OVERVIEW:&lt;br /&gt;
 &lt;br /&gt;
 This applet computes the gradient table for DTI data acquired on&lt;br /&gt;
 Philips MRI scanners with low, medium, and high directional resolution.&lt;br /&gt;
 &lt;br /&gt;
 The imaging parameters and DTI options listed can affect the gradient&lt;br /&gt;
 table. For example, directions from a "gradient overplus = yes" table&lt;br /&gt;
 need to be adjusted for the slice angulation (i.e. oblique slices)&lt;br /&gt;
 specific to each DTI study.&lt;br /&gt;
 &lt;br /&gt;
 The output is appropriate for DTIstudio. I hope this applet is useful,&lt;br /&gt;
 but please use it at your own risk. Matlab code with more functionality&lt;br /&gt;
 is available on my webpage:&lt;br /&gt;
&lt;br /&gt;
* http://godzilla.kennedykrieger.org/%7Ejfarrell/OTHERphilips/GUI.html&lt;br /&gt;
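&lt;br /&gt;
The slice-angulation adjustment mentioned in the overview amounts to rotating each gradient vector into the slice frame. A minimal sketch (a hypothetical helper; the 3x3 matrix r is assumed to hold the image-orientation direction cosines):&lt;br /&gt;

```python
def rotate_gradients(gradients, r):
    # Apply a 3x3 rotation matrix r (a list of three row lists) to each
    # 3-component gradient vector, mapping the table into the obliquely
    # angulated slice frame.
    out = []
    for g in gradients:
        out.append([sum(r[i][j] * g[j] for j in range(3)) for i in range(3)])
    return out
```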
&lt;br /&gt;
== DICOM for estimated diffusion tensors ==&lt;br /&gt;
&lt;br /&gt;
There is currently no DICOM specification for storing tensors, only a supplement:&lt;br /&gt;
&lt;br /&gt;
[ftp://medical.nema.org/medical/dicom/supps/sup63_pc.pdf Supp 63 Parts 3,4,5,6,16,17 Multi-dimensional Interchange Object ]&lt;br /&gt;
&lt;br /&gt;
The discussion would then be (D. Clunie quote):&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
 Indeed, even if one were to try to standardize in DICOM the encoding&lt;br /&gt;
 of the entire diffusion tensor, there would no doubt be considerable&lt;br /&gt;
 debate as to whether to do that as 6 (or 9) planes of an &amp;quot;image&amp;quot;, since&lt;br /&gt;
 there is such a matrix at each spatial location (&amp;quot;pixel&amp;quot;), or as&lt;br /&gt;
 a special case of the proposed Sup 63 object; the former would keep&lt;br /&gt;
 image-oriented tools and software happier, the latter would require&lt;br /&gt;
 implementing a new mechanism and navigating through a more general&lt;br /&gt;
 structure.&lt;br /&gt;
&lt;br /&gt;
== Discussion on DICOM newsgroup ==&lt;br /&gt;
&lt;br /&gt;
For cross reference: http://groups.google.com/group/comp.protocols.dicom/browse_frm/thread/3d292d9c506b1cbf&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
 Hi Mathieu&lt;br /&gt;
 &lt;br /&gt;
 Thanks for the very interesting links.&lt;br /&gt;
 &lt;br /&gt;
 The proposed standard way to address this problem is with&lt;br /&gt;
 the Enhanced MR IOD, which explicitly addresses the&lt;br /&gt;
 attributes for encoding diffusion directionality and B-Value.&lt;br /&gt;
 &lt;br /&gt;
 There will likely be no extensions of the existing MR IOD to&lt;br /&gt;
 address this concern, since by policy, we obviously want&lt;br /&gt;
 folks to use the new IOD.&lt;br /&gt;
 &lt;br /&gt;
 It would not surprise me if in the interim, vendors started&lt;br /&gt;
 to send some of the new Sup 49 attributes in old IOD&lt;br /&gt;
 instances, but there will almost certainly be no official move&lt;br /&gt;
 to standardize this, and we would certainly not consider&lt;br /&gt;
 adding a non-Sup 49 based mechanism.&lt;br /&gt;
 &lt;br /&gt;
 If the Sup 49 mechanisms are not sufficient (e.g. for&lt;br /&gt;
 tensors), then we need to discuss what the gaps are&lt;br /&gt;
 and how to fill them.&lt;br /&gt;
 &lt;br /&gt;
 We would not, by the way, ever &amp;quot;encapsulate&amp;quot; the NRRD&lt;br /&gt;
 header, but it would nice if there was a clear mapping between&lt;br /&gt;
 the DICOM Sup 49 attributes and the relevant NRRD&lt;br /&gt;
 attributes, though these are probably obvious.&lt;br /&gt;
 &lt;br /&gt;
 David&lt;br /&gt;
 &lt;br /&gt;
 Mathieu Malaterre wrote:&lt;br /&gt;
 &amp;gt; Hello again,&lt;br /&gt;
 &amp;gt;&lt;br /&gt;
 &amp;gt;    I am currently involved in a group which is working extensively with&lt;br /&gt;
 &amp;gt; DWI data (diffusion-weighted image). Their acquisition data is coming&lt;br /&gt;
 &amp;gt; in the form of DICOM files. Unfortunately they are currently facing two&lt;br /&gt;
 &amp;gt; problems:&lt;br /&gt;
 &amp;gt; 1. Each vendors stores the gradient directions differently (if at all!)&lt;br /&gt;
 &amp;gt; 2. Even if you have the directions, you don't have the measurement&lt;br /&gt;
 &amp;gt; frame(*). As far as I understand the DICOM specification, there is&lt;br /&gt;
 &amp;gt; currently no way to store this type of information.&lt;br /&gt;
 &amp;gt;&lt;br /&gt;
 &amp;gt;    For issue #1 I am currently gathering information about the major&lt;br /&gt;
 &amp;gt; vendors so that I can add extraction code into gdcm. This would provide&lt;br /&gt;
 &amp;gt; a function to extract the gradient directions and make invisible to&lt;br /&gt;
 &amp;gt; user the different way vendors use to store this information. For more&lt;br /&gt;
 &amp;gt; information see the Wiki at:&lt;br /&gt;
 &amp;gt; http://www.na-mic.org/Wiki/index.php/NAMIC_Wiki:DTI:DICOM_for_DWI_and_DTI&lt;br /&gt;
 &amp;gt;&lt;br /&gt;
 &amp;gt;    For issue #2, -again correct me if I am wrong- but I could not find&lt;br /&gt;
 &amp;gt; anything that could represent this measurement frame. Therefore the&lt;br /&gt;
 &amp;gt; solution we are using is to use an intermediate file format called&lt;br /&gt;
 &amp;gt; NRRD, see for instance:&lt;br /&gt;
 &amp;gt; http://wiki.na-mic.org/Wiki/index.php/NAMIC_Wiki:DTI:Nrrd_format&lt;br /&gt;
 &amp;gt;    Ideally this information should be accessible from a particular&lt;br /&gt;
 &amp;gt; DICOM tag. Can we request an extension of the DICOM standard to allow&lt;br /&gt;
 &amp;gt; us to store this information directly in the DICOM file, which would&lt;br /&gt;
 &amp;gt; greatly simplify the process (avoiding the intermediate NRRD file&lt;br /&gt;
 &amp;gt; format step).&lt;br /&gt;
 &amp;gt;&lt;br /&gt;
 &amp;gt; Regards,&lt;br /&gt;
 &amp;gt; Mathieu&lt;br /&gt;
 &amp;gt; (*)&lt;br /&gt;
 &amp;gt; &amp;quot;measurement frame&amp;quot;: relationship between the coordinate frame in which&lt;br /&gt;
 &amp;gt; the gradient coefficients are expressed, and the physical coordinate&lt;br /&gt;
 &amp;gt; frame in which image orientation is defined&lt;/div&gt;</summary>
		<author><name>Mathieu</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=NAMIC_Wiki:DTI:DICOM_for_DWI_and_DTI&amp;diff=11211</id>
		<title>NAMIC Wiki:DTI:DICOM for DWI and DTI</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=NAMIC_Wiki:DTI:DICOM_for_DWI_and_DTI&amp;diff=11211"/>
		<updated>2007-06-06T08:36:32Z</updated>

		<summary type="html">&lt;p&gt;Mathieu: /* Update */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page should serve as a place where information about DICOM and DWI/DTI data can be maintained. With time, this information could be used as part of automated solutions for learning all the necessary DWI-related information from a DICOM series. A collection of tools for DICOM is [[DICOM|here]].&lt;br /&gt;
&lt;br /&gt;
As long as DICOM support for DWI information is vendor-specific and/or non-conformant with the information here, the [[NAMIC_Wiki:DTI:Nrrd_format|Nrrd format ]] provides a means of recording the DWI-specific information once it is known.&lt;br /&gt;
&lt;br /&gt;
== DICOM for DWI ==&lt;br /&gt;
&lt;br /&gt;
The recommended tags to use in DICOM are as follows:&lt;br /&gt;
&lt;br /&gt;
 0018 9075 CS 1 Diffusion Directionality&lt;br /&gt;
 0018 9076 SQ 1 Diffusion Gradient Direction Sequence&lt;br /&gt;
 0018 9087 FD 1 Diffusion b-value&lt;br /&gt;
 0018 9089 FD 3 Diffusion Gradient Orientation&lt;br /&gt;
 0018 9117 SQ 1 MR Diffusion Sequence&lt;br /&gt;
 0018 9147 CS 1 Diffusion Anisotropy Type&lt;br /&gt;
&lt;br /&gt;
These are defined in [ftp://medical.nema.org/medical/dicom/final/sup49_ft.pdf Supplement 49]. In particular see section C.8.12.5.9 &amp;quot;MR Diffusion Macro&amp;quot; on pages 94 and 95.&lt;br /&gt;
&lt;br /&gt;
The tags are also referenced in http://medical.nema.org/dicom/2004/04_06PU.PDF (see pages 28-29) as well as in some [http://medical.nema.org/Dicom/minutes/WG-07/WG-07_2005/Minutes-2005-10-20-21-Denver.doc Working Group Minutes] (see pages 155-156).&lt;br /&gt;
&lt;br /&gt;
Two points of interest relative to the NRRD format:&lt;br /&gt;
&lt;br /&gt;
* The definition of &amp;quot;Diffusion Gradient Orientation&amp;quot; implies that the measurement frame is exactly the identity transform.&lt;br /&gt;
* There appears to be no means of recording the full B-matrix when it is known. This is not an issue for any NAMIC datasets, but can arise in small-bore imaging.&lt;br /&gt;
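&lt;br /&gt;
For illustration, the Supplement 49 tags above can be kept in a small lookup table (a sketch; the dictionary name and output format are made up here, while the tag numbers, VRs, and multiplicities are copied from the table above):&lt;br /&gt;

```python
# (group, element) mapped to (VR, value multiplicity, name), copied from
# the Supplement 49 table above.
SUP49_DIFFUSION_TAGS = {
    (0x0018, 0x9075): ("CS", 1, "Diffusion Directionality"),
    (0x0018, 0x9076): ("SQ", 1, "Diffusion Gradient Direction Sequence"),
    (0x0018, 0x9087): ("FD", 1, "Diffusion b-value"),
    (0x0018, 0x9089): ("FD", 3, "Diffusion Gradient Orientation"),
    (0x0018, 0x9117): ("SQ", 1, "MR Diffusion Sequence"),
    (0x0018, 0x9147): ("CS", 1, "Diffusion Anisotropy Type"),
}

def describe(group, element):
    # Format one tag the way DICOM dumps usually print it.
    vr, vm, name = SUP49_DIFFUSION_TAGS[(group, element)]
    return "(%04x,%04x) %s VM=%d %s" % (group, element, vr, vm, name)
```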
&lt;br /&gt;
=== Private vendor: GE ===&lt;br /&gt;
&lt;br /&gt;
For GE scanners, Signa Excite 12.0 and later, the following tags are reserved for diffusion weighted images:&lt;br /&gt;
&lt;br /&gt;
* (0019,10e0) : # DTI diffusion directions (release 10.0 &amp;amp; above)&lt;br /&gt;
* (0019,10df) : # DTI diffusion directions (release 9.0 &amp;amp; below)&lt;br /&gt;
* (0019,10d9) : Concatenated SAT {# DTI Diffusion Dir., release 9.0 &amp;amp; below}&lt;br /&gt;
* (0021,105A) : diffusion direction&lt;br /&gt;
* (0043,1039) : Slop_int_6... slop_int_9: (in the GEMS_PARM_01 block)&lt;br /&gt;
** 6: b_value&lt;br /&gt;
** 7: private imaging options 2&lt;br /&gt;
** 8: ihtagging&lt;br /&gt;
** 9: ihtagspc&lt;br /&gt;
&lt;br /&gt;
This information can be found in http://www.gehealthcare.com/usen/interoperability/dicom/docs/5162373r1.pdf&lt;br /&gt;
&lt;br /&gt;
Unfortunately the [[DataRepository|Dartmouth DWI data ]] (from a GE Signa scanner) does not conform to this (nor do they use the nominally standard 0x0018 tags), as can be seen by running:&lt;br /&gt;
&lt;br /&gt;
 dcdump S4.100 | &amp;amp; grep \(0x0019,0x10&lt;br /&gt;
&lt;br /&gt;
which includes:&lt;br /&gt;
&lt;br /&gt;
 (0x0019,0x10d9) DS Concatenated SAT      VR=&amp;lt;DS&amp;gt;   VL=&amp;lt;0x0008&amp;gt;  &amp;lt;0.000000&amp;gt;&lt;br /&gt;
 (0x0019,0x10df) DS User Data     VR=&amp;lt;DS&amp;gt;   VL=&amp;lt;0x0008&amp;gt;  &amp;lt;0.000000&amp;gt;&lt;br /&gt;
 (0x0019,0x10e0) DS User Data     VR=&amp;lt;DS&amp;gt;   VL=&amp;lt;0x0008&amp;gt;  &amp;lt;0.000000&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so all of the tags that are supposed to store the number of gradient directions store the value 0! In addition, there is:&lt;br /&gt;
&lt;br /&gt;
 (0x0021,0x105a) SL Integer Slop          VR=&amp;lt;SL&amp;gt;   VL=&amp;lt;0x0004&amp;gt;  [0x00000000]&lt;br /&gt;
&lt;br /&gt;
so the supposed representation of diffusion-direction is also empty. The Dartmouth data has the following tags describing the scanner and software version:&lt;br /&gt;
&lt;br /&gt;
 (0008,1090) LO [GENESIS_SIGNA]&lt;br /&gt;
 (0018,1020) LO [09]&lt;br /&gt;
&lt;br /&gt;
In GE DWI images (software version 12.0), identified by:&lt;br /&gt;
&lt;br /&gt;
 (0008,1090) LO [SIGNA EXCITE]&lt;br /&gt;
 (0018,1020) LO [12\LX\MR Software release:12.0_M4_0520.a]&lt;br /&gt;
&lt;br /&gt;
the diffusion directions are stored under the following tags:&lt;br /&gt;
&lt;br /&gt;
 (0019,10bb) DS [0.430617]&lt;br /&gt;
 (0019,10bc) DS [-0.804161]&lt;br /&gt;
 (0019,10bd) DS [-0.420008]&lt;br /&gt;
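&lt;br /&gt;
A dump like the one above can be reduced to a direction vector with a small parser (a hypothetical helper; it only assumes the dcdump-style "(0019,10bb) DS [value]" lines shown above):&lt;br /&gt;

```python
import re

# Matches dcdump-style lines such as "(0019,10bb) DS [0.430617]".
LINE = re.compile(r"\((\w{4}),(\w{4})\)\s+DS\s+\[([-0-9.]+)\]")

def ge_direction(dump_lines):
    comps = {}
    for line in dump_lines:
        m = LINE.search(line)
        if m:
            comps[m.group(2).lower()] = float(m.group(3))
    # Assemble (0019,10bb), (0019,10bc), (0019,10bd) into x, y, z.
    return [comps[e] for e in ("10bb", "10bc", "10bd")]
```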
&lt;br /&gt;
=== Private vendor: Siemens ===&lt;br /&gt;
&lt;br /&gt;
A Siemens DICOM Conformance Statement is available at&lt;br /&gt;
&lt;br /&gt;
 http://www.medical.siemens.com/siemens/en_INT/rg_marcom_FBAs/files/brochures/DICOM/mr/dcs_trio.pdf&lt;br /&gt;
&lt;br /&gt;
No diffusion-related tags are specified.&lt;br /&gt;
&lt;br /&gt;
David Tuch has stated (in email from December 21, 2005):&lt;br /&gt;
&lt;br /&gt;
 The diffusion gradient information and coordinate frame are not provided&lt;br /&gt;
 in the DICOM hdr for the MGH diffusion sequences.&lt;br /&gt;
&lt;br /&gt;
'''Tag (0029,1010)''' may include all the necessary information.&lt;br /&gt;
&lt;br /&gt;
If you have SPM (and Matlab) installed, the following SPM snippet extracts the gradient info:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
 P = spm_get(Inf,'*','Select some files')&lt;br /&gt;
 hdr=spm_dicom_headers(P)&lt;br /&gt;
 hdr{1}.CSAImageHeaderInfo(22).item(1).val&lt;br /&gt;
 hdr{1}.CSAImageHeaderInfo(22).item(2).val&lt;br /&gt;
 hdr{1}.CSAImageHeaderInfo(22).item(3).val&lt;br /&gt;
&lt;br /&gt;
Look for &amp;quot;spm_dicom_headers.m&amp;quot; (via Google, or on your hard disk); this SPM file shows you how to decode the tag data.&lt;br /&gt;
&lt;br /&gt;
It is easy to write a C or C++ program that does the same, using spm_dicom_headers.m as a reference.&lt;br /&gt;
&lt;br /&gt;
Credits: Jan Klein &amp;lt;klein AT mevis DOT de&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Update ====&lt;br /&gt;
&lt;br /&gt;
As far as the latest MR scanner software version (2006) is concerned, access to the following diffusion attributes was provided:&lt;br /&gt;
&lt;br /&gt;
 0019;000A;SIEMENS MR HEADER  ;NumberOfImagesInMosaic          ;1;US;1&lt;br /&gt;
 0019;000B;SIEMENS MR HEADER  ;SliceMeasurementDuration        ;1;DS;1&lt;br /&gt;
 0019;000C;SIEMENS MR HEADER  ;B_value                         ;1;IS;1&lt;br /&gt;
 0019;000D;SIEMENS MR HEADER  ;DiffusionDirectionality         ;1;CS;1&lt;br /&gt;
 0019;000E;SIEMENS MR HEADER  ;DiffusionGradientDirection      ;1;FD;3&lt;br /&gt;
 0019;000F;SIEMENS MR HEADER  ;GradientMode                    ;1;SH;1&lt;br /&gt;
 0019;0027;SIEMENS MR HEADER  ;B_matrix                        ;1;FD;6&lt;br /&gt;
 0019;0028;SIEMENS MR HEADER  ;BandwidthPerPixelPhaseEncode    ;1;FD;1&lt;br /&gt;
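&lt;br /&gt;
The private-dictionary rows above are plain semicolon-separated records, so they can be parsed with a few lines (a sketch; the function name and tuple layout are assumptions, and the meaning of the fifth field is not documented here):&lt;br /&gt;

```python
def parse_private_dict(lines):
    # Each row looks like:
    #   0019;000E;SIEMENS MR HEADER  ;DiffusionGradientDirection  ;1;FD;3
    # i.e. group;element;private creator;name;?;VR;value multiplicity.
    entries = []
    for line in lines:
        group, elem, creator, name, _, vr, vm = [f.strip() for f in line.split(";")]
        entries.append((int(group, 16), int(elem, 16), creator, name, vr, int(vm)))
    return entries
```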
&lt;br /&gt;
That does not solve the problem with older datasets; unfortunately, there is no easy way to access diffusion information there, as it is stored only in the Siemens shadow part.&lt;br /&gt;
&lt;br /&gt;
Credits: Stefan Huwer&lt;br /&gt;
&lt;br /&gt;
==== Reference ====&lt;br /&gt;
&lt;br /&gt;
* http://www.mmrrcc.upenn.edu/CAMRIS/cfn/&lt;br /&gt;
&lt;br /&gt;
in particular:&lt;br /&gt;
&lt;br /&gt;
* http://www.mmrrcc.upenn.edu/CAMRIS/cfn/dicomhdr.html&lt;br /&gt;
&lt;br /&gt;
=== Private vendor: Philips ===&lt;br /&gt;
&lt;br /&gt;
Philips uses the following tags for diffusion-weighted images:&lt;br /&gt;
&lt;br /&gt;
* (2001,1003) : B_value;&lt;br /&gt;
* (2001,1004) : Diffusion direction.&lt;br /&gt;
&lt;br /&gt;
Complete DICOM conformance statements for&lt;br /&gt;
&lt;br /&gt;
* Intera&lt;br /&gt;
* Achieva&lt;br /&gt;
* Panorama&lt;br /&gt;
* Gyroscan&lt;br /&gt;
* Infinion / Eclipse&lt;br /&gt;
&lt;br /&gt;
are available at http://www.medical.philips.com/main/company/connectivity/mri/&lt;br /&gt;
&lt;br /&gt;
== DICOM for estimated diffusion tensors ==&lt;br /&gt;
&lt;br /&gt;
There is currently no DICOM specification for storing tensors, only a supplement:&lt;br /&gt;
&lt;br /&gt;
[ftp://medical.nema.org/medical/dicom/supps/sup63_pc.pdf Supp 63 Parts 3,4,5,6,16,17 Multi-dimensional Interchange Object ]&lt;br /&gt;
&lt;br /&gt;
The discussion would then be (D. Clunie quote):&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
 Indeed, even if one were to try to standardize in DICOM the encoding&lt;br /&gt;
 of the entire diffusion tensor, there would no doubt be considerable&lt;br /&gt;
 debate as to whether to do that as 6 (or 9) planes of an &amp;quot;image&amp;quot;, since&lt;br /&gt;
 there is such a matrix at each spatial location (&amp;quot;pixel&amp;quot;), or as&lt;br /&gt;
 a special case of the proposed Sup 63 object; the former would keep&lt;br /&gt;
 image-oriented tools and software happier, the latter would require&lt;br /&gt;
 implementing a new mechanism and navigating through a more general&lt;br /&gt;
 structure.&lt;br /&gt;
&lt;br /&gt;
== Discussion on DICOM newsgroup ==&lt;br /&gt;
&lt;br /&gt;
For cross reference: http://groups.google.com/group/comp.protocols.dicom/browse_frm/thread/3d292d9c506b1cbf&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
 Hi Mathieu&lt;br /&gt;
 &lt;br /&gt;
 Thanks for the very interesting links.&lt;br /&gt;
 &lt;br /&gt;
 The proposed standard way to address this problem is with&lt;br /&gt;
 the Enhanced MR IOD, which explicitly addresses the&lt;br /&gt;
 attributes for encoding diffusion directionality and B-Value.&lt;br /&gt;
 &lt;br /&gt;
 There will likely be no extensions of the existing MR IOD to&lt;br /&gt;
 address this concern, since by policy, we obviously want&lt;br /&gt;
 folks to use the new IOD.&lt;br /&gt;
 &lt;br /&gt;
 It would not surprise me if in the interim, vendors started&lt;br /&gt;
 to send some of the new Sup 49 attributes in old IOD&lt;br /&gt;
 instances, but there will almost certainly be no official move&lt;br /&gt;
 to standardize this, and we would certainly not consider&lt;br /&gt;
 adding a non-Sup 49 based mechanism.&lt;br /&gt;
 &lt;br /&gt;
 If the Sup 49 mechanisms are not sufficient (e.g. for&lt;br /&gt;
 tensors), then we need to discuss what the gaps are&lt;br /&gt;
 and how to fill them.&lt;br /&gt;
 &lt;br /&gt;
 We would not, by the way, ever &amp;quot;encapsulate&amp;quot; the NRRD&lt;br /&gt;
 header, but it would nice if there was a clear mapping between&lt;br /&gt;
 the DICOM Sup 49 attributes and the relevant NRRD&lt;br /&gt;
 attributes, though these are probably obvious.&lt;br /&gt;
 &lt;br /&gt;
 David&lt;br /&gt;
 &lt;br /&gt;
 Mathieu Malaterre wrote:&lt;br /&gt;
 &amp;gt; Hello again,&lt;br /&gt;
 &amp;gt;&lt;br /&gt;
 &amp;gt;    I am currently involved in a group which is working extensively with&lt;br /&gt;
 &amp;gt; DWI data (diffusion-weighted image). Their acquisition data is coming&lt;br /&gt;
 &amp;gt; in the form of DICOM files. Unfortunately they are currently facing two&lt;br /&gt;
 &amp;gt; problems:&lt;br /&gt;
 &amp;gt; 1. Each vendors stores the gradient directions differently (if at all!)&lt;br /&gt;
 &amp;gt; 2. Even if you have the directions, you don't have the measurement&lt;br /&gt;
 &amp;gt; frame(*). As far as I understand the DICOM specification, there is&lt;br /&gt;
 &amp;gt; currently no way to store this type of information.&lt;br /&gt;
 &amp;gt;&lt;br /&gt;
 &amp;gt;    For issue #1 I am currently gathering information about the major&lt;br /&gt;
 &amp;gt; vendors so that I can add extraction code into gdcm. This would provide&lt;br /&gt;
 &amp;gt; a function to extract the gradient directions and make invisible to&lt;br /&gt;
 &amp;gt; user the different way vendors use to store this information. For more&lt;br /&gt;
 &amp;gt; information see the Wiki at:&lt;br /&gt;
 &amp;gt; http://www.na-mic.org/Wiki/index.php/NAMIC_Wiki:DTI:DICOM_for_DWI_and_DTI&lt;br /&gt;
 &amp;gt;&lt;br /&gt;
 &amp;gt;    For issue #2, -again correct me if I am wrong- but I could not find&lt;br /&gt;
 &amp;gt; anything that could represent this measurement frame. Therefore the&lt;br /&gt;
 &amp;gt; solution we are using is to use an intermediate file format called&lt;br /&gt;
 &amp;gt; NRRD, see for instance:&lt;br /&gt;
 &amp;gt; http://wiki.na-mic.org/Wiki/index.php/NAMIC_Wiki:DTI:Nrrd_format&lt;br /&gt;
 &amp;gt;    Ideally this information should be accessible from a particular&lt;br /&gt;
 &amp;gt; DICOM tag. Can we request an extension of the DICOM standard to allow&lt;br /&gt;
 &amp;gt; us to store this information directly in the DICOM file, which would&lt;br /&gt;
 &amp;gt; greatly simplify the process (avoiding the intermediate NRRD file&lt;br /&gt;
 &amp;gt; format step).&lt;br /&gt;
 &amp;gt;&lt;br /&gt;
 &amp;gt; Regards,&lt;br /&gt;
 &amp;gt; Mathieu&lt;br /&gt;
 &amp;gt; (*)&lt;br /&gt;
 &amp;gt; &amp;quot;measurement frame&amp;quot;: relationship between the coordinate frame in which&lt;br /&gt;
 &amp;gt; the gradient coefficients are expressed, and the physical coordinate&lt;br /&gt;
 &amp;gt; frame in which image orientation is defined&lt;/div&gt;</summary>
		<author><name>Mathieu</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Slicer3:Data_Model&amp;diff=10739</id>
		<title>Slicer3:Data Model</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Slicer3:Data_Model&amp;diff=10739"/>
		<updated>2007-05-24T13:02:45Z</updated>

		<summary type="html">&lt;p&gt;Mathieu: /* MRML API Documentation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Slicer3 MRML Overview =&lt;br /&gt;
&lt;br /&gt;
*The MRML library provides an API for managing medical image data types (Volumes, Models, Transforms, Fiducials, Cameras, etc.) and their visualization. &lt;br /&gt;
*Each data type is represented by a special MRML node. &lt;br /&gt;
*The MRML Scene is a collection of MRML nodes. &lt;br /&gt;
*The Slicer MRML data model is implemented independently of the visualization and algorithmic components of the system. &lt;br /&gt;
*Other Slicer components (Logic and GUI) observe changes in the MRML scene and individual nodes, and process MRML change events.&lt;br /&gt;
&lt;br /&gt;
For more details on MRML architecture see [http://www.na-mic.org/Wiki/images/e/e3/Slicer_3-alpha-2006-04-03.ppt Architecture Slides].&lt;br /&gt;
&lt;br /&gt;
= MRML Scene =&lt;br /&gt;
&lt;br /&gt;
*The MRML Scene manages MRML nodes: add, delete, find, find by type, etc.&lt;br /&gt;
*The MRML Scene provides persistence of MRML nodes (reading/writing to/from an XML file). &lt;br /&gt;
*The MRML Scene provides an Undo/Redo mechanism that restores a previous state of the scene and individual nodes.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= MRML Nodes =&lt;br /&gt;
 &lt;br /&gt;
*The MRML nodes are designed to store the state of the Slicer application, both raw data and visualization parameters.&lt;br /&gt;
&lt;br /&gt;
The following is the set of core MRML nodes that store the state of core Slicer modules:&lt;br /&gt;
&lt;br /&gt;
* vtkMRMLCameraNode&lt;br /&gt;
* vtkMRMLClipModelsNode&lt;br /&gt;
* vtkMRMLSliceCompositeNode&lt;br /&gt;
* vtkMRMLSliceNode&lt;br /&gt;
* vtkMRMLColorNode&lt;br /&gt;
* vtkMRMLTransformNode&lt;br /&gt;
* vtkMRMLLinearTransformNode&lt;br /&gt;
* vtkMRMLTransformableNode&lt;br /&gt;
* vtkMRMLFiducialListNode&lt;br /&gt;
* vtkMRMLModelNode&lt;br /&gt;
* vtkMRMLModelDisplayNode&lt;br /&gt;
* vtkMRMLStorageNode&lt;br /&gt;
* vtkMRMLModelStorageNode&lt;br /&gt;
* vtkMRMLVolumeNode&lt;br /&gt;
* vtkMRMLScalarVolumeNode&lt;br /&gt;
* vtkMRMLVectorVolumeNode&lt;br /&gt;
* vtkMRMLTensorVolumeNode&lt;br /&gt;
* vtkMRMLDiffusionTensorVolumeNode&lt;br /&gt;
* vtkMRMLDiffusionWeightedVolumeNode&lt;br /&gt;
* vtkMRMLVolumeDisplayNode&lt;br /&gt;
* vtkMRMLVectorVolumeDisplayNode&lt;br /&gt;
* vtkMRMLDiffusionTensorVolumeDisplayNode&lt;br /&gt;
* vtkMRMLDiffusionWeightedVolumeDisplayNode&lt;br /&gt;
* vtkMRMLVolumeHeaderlessStorageNode&lt;br /&gt;
* vtkMRMLVolumeArchetypeStorageNode&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt; &lt;br /&gt;
*MRML nodes are organized into C++ class hierarchies, all derived from vtkMRMLNode class. &lt;br /&gt;
*For example, vtkMRMLTransformableNode is the parent class of the Volume, Model, Fiducial, and Transformation nodes; vtkMRMLVolumeNode is a parent of vtkMRMLScalarVolumeNode and vtkMRMLVectorVolumeNode.&lt;br /&gt;
*All MRML nodes have to implement a certain standard API: ReadAttributes, WriteAttributes, Copy, etc.&lt;br /&gt;
{|&lt;br /&gt;
|[[Image: Slicer3_MRML_Node_Hier.jpg|thumb|400px]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= References to MRML Nodes =&lt;br /&gt;
&lt;br /&gt;
*Some MRML nodes have references to other nodes. &lt;br /&gt;
*A Transformable node has a reference to a Transformation node; a Transformation node has a reference to its parent Transformation node. &lt;br /&gt;
*References are stored by node ID.&lt;br /&gt;
*Use vtkSetReferenceStringMacro to set reference ID (it registers reference with the scene).&lt;br /&gt;
*Access methods should check if the referenced node is still in the MRML scene using its ID.&lt;br /&gt;
{|&lt;br /&gt;
|[[Image: Slicer3_MRML_Trans_Ref.jpg|thumb|400px]]&lt;br /&gt;
|}&lt;br /&gt;
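&lt;br /&gt;
The ID-based reference pattern described above can be illustrated with a toy sketch (plain Python, not the Slicer C++ API; class and method names are invented). The access method re-checks the scene by ID, so a reference to a deleted node simply resolves to nothing:&lt;br /&gt;

```python
# Toy illustration of storing node references by ID and resolving
# them through the scene on each access.
class Scene:
    def __init__(self):
        self.nodes = {}
    def add_node(self, node):
        self.nodes[node.node_id] = node

class TransformableNode:
    def __init__(self, node_id, transform_id=None):
        self.node_id = node_id
        self.transform_id = transform_id  # reference stored as an ID string
    def get_transform(self, scene):
        # Returns None if the referenced node is no longer in the scene.
        return scene.nodes.get(self.transform_id)
```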
&lt;br /&gt;
= MRML Events and Observers =&lt;br /&gt;
&lt;br /&gt;
*Changes in the MRML scene and individual nodes propagate to other observing nodes, GUI, and Logic objects via VTK events and the command-observer mechanism.&lt;br /&gt;
*Use the VTK AddObserver() and InvokeEvent() methods; the VTK SetMacro generates a ModifiedEvent.&lt;br /&gt;
*The command-observer mechanism for MRML is implemented using the helper vtkObserverManager class, MRML observer macros, and the ProcessMRMLEvents method.&lt;br /&gt;
*Observers should store a registered pointer to a MRML node to prevent callbacks on a deleted object. &lt;br /&gt;
{|&lt;br /&gt;
|[[Image: Slicer3_MRML_Observ.jpg|thumb|400px]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
*MRML observer macros are defined in Libs/MRML/vtkMRMLNode.h&lt;br /&gt;
*vtkSetMRMLObjectMacro - registers a MRML node with another VTK object (another MRML node, Logic, or GUI); no observers are added.&lt;br /&gt;
*vtkSetAndObserveMRMLObjectMacro - registers a MRML node and adds an observer for vtkCommand::ModifiedEvent. &lt;br /&gt;
*vtkSetAndObserveMRMLObjectEventsMacro - registers a MRML node and adds observers for a specified set of events. &lt;br /&gt;
*SetAndObserveMRMLScene[Events]() method is used in GUI and Logic to observe Modify, NewScene, NodeAdded, etc. events.&lt;br /&gt;
*ProcessMRMLEvents method should be implemented in MRML nodes, Logic, and GUI classes in order to process events from the observed nodes.&lt;br /&gt;
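&lt;br /&gt;
The AddObserver()/InvokeEvent() pattern above can be sketched in a few lines (a simplified Python stand-in, not the real VTK implementation):&lt;br /&gt;

```python
# Minimal observer sketch: observers register for named events, and
# InvokeEvent-style dispatch calls back every matching observer.
class Observable:
    def __init__(self):
        self._observers = []
    def add_observer(self, event, callback):
        self._observers.append((event, callback))
    def invoke_event(self, event, data=None):
        for ev, cb in self._observers:
            if ev == event:
                # Callbacks receive (caller, event, call data), echoing the
                # vtk command-observer signature.
                cb(self, event, data)
```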
&lt;br /&gt;
= Creating Custom MRML Node Classes=&lt;br /&gt;
&lt;br /&gt;
*Custom MRML nodes provide persistent storage for the module parameters. &lt;br /&gt;
*Custom MRML nodes should be registered with the MRML scene using RegisterNodeClass() so they can be saved and restored from a scene file. &lt;br /&gt;
*Classes should implement the following methods: &lt;br /&gt;
*CreateNodeInstance() – similar to the VTK New() method, but not static. &lt;br /&gt;
*GetNodeTagName() – return a unique XML tag for this node. &lt;br /&gt;
*ReadXMLAttributes() – reads node attributes from XML file as name-value pairs. &lt;br /&gt;
*WriteXML() – writes node attributes to output stream (as in interpolate=&amp;quot;1&amp;quot; ).&lt;br /&gt;
*Copy() – copies node attributes. &lt;br /&gt;
&lt;br /&gt;
*If the node has references to other nodes the following additional methods should be implemented: &lt;br /&gt;
 –UpdateReferenceID() - updates the stored reference to another node. &lt;br /&gt;
 –UpdateScene()- updates other nodes in the scene depending on this node or updates this node if it depends on other nodes when the scene is read in. &lt;br /&gt;
   This method is called automatically by XML parser after all nodes are created. &lt;br /&gt;
*An example of a custom MRML node implementation: vtkMRMLGradientAnisotropicDiffusionFilterNode in Modules/GradientAnisotropicDiffusionFilter directory. &lt;br /&gt;
*To add node to the MRML scene: &lt;br /&gt;
 –In the code: use standard vtk New() and add node to the scene using vtkMRMLScene::AddNode(vtkMRMLNode *)&lt;br /&gt;
 –By user request: use vtkSlicerNodeSelectorWidget that creates a new node from the module’s UI. &lt;br /&gt;
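&lt;br /&gt;
The per-node contract above (tag name, XML read/write, copy) can be mimicked in a toy Python class (a hedged sketch, not Slicer code; the attribute names and values are invented):&lt;br /&gt;

```python
# Toy stand-in for a custom MRML node: each method mirrors one of the
# required C++ methods listed above.
class GradientFilterNode:
    node_tag = "GradientAnisotropicDiffusionFilter"  # GetNodeTagName()
    def __init__(self):
        self.attributes = {"Conductance": "1.0", "Iterations": "5"}
    def write_xml(self):
        # WriteXML(): emit attributes as name="value" pairs.
        return " ".join('%s="%s"' % kv for kv in sorted(self.attributes.items()))
    def read_xml_attributes(self, pairs):
        # ReadXMLAttributes(): consume name-value pairs from the parser.
        self.attributes.update(pairs)
    def copy(self, other):
        # Copy(): copy node attributes from another node.
        self.attributes = dict(other.attributes)
```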
&lt;br /&gt;
&lt;br /&gt;
= Undo/Redo Mechanism =&lt;br /&gt;
&lt;br /&gt;
*Undo/Redo is based on saving and restoring the state of MRML nodes in the Scene. &lt;br /&gt;
*The MRML scene can save a snapshot of all nodes onto special Undo and Redo stacks. &lt;br /&gt;
*The Undo and Redo stacks store copies of nodes that have changed from the previous snapshot; nodes that have not changed are stored by reference (pointer). &lt;br /&gt;
*When Undo is called on the scene, the state on top of the Undo stack is copied into the current scene, and the prior state is pushed onto the Redo stack. &lt;br /&gt;
*All undoable operations must store their data as MRML nodes.&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|[[Image: Slicer3_MRML_Undo.jpg|thumb|400px]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
*The developer controls at what point the snapshot is saved by calling the SaveStateForUndo method on the MRML scene: &lt;br /&gt;
–SaveStateForUndo() - saves the state of all nodes in the scene. &lt;br /&gt;
–SaveStateForUndo(vtkMRMLNode *) - saves the state of the specified node. &lt;br /&gt;
–SaveStateForUndo(vtkCollection *) - saves the state of the specified collection of nodes. &lt;br /&gt;
*SaveStateForUndo() should be called in GUI/Logic classes before changing the state of MRML nodes. Usually done in the ProcessGUIEvents method that processes events from the user interactions with GUI widgets. &lt;br /&gt;
*SaveStateForUndo() should not be called while processing transient events such as continuous events sent by KW UI while dragging a slider (for example vtkKWScale::ScaleValueStartChangingEvent). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following methods on the MRML scene are used to manage Undo/Redo stacks:&lt;br /&gt;
&lt;br /&gt;
* vtkMRMLScene::Undo() – restore the previously saved state of the MRML scene.&lt;br /&gt;
* vtkMRMLScene::Redo() – restore the previously undone state of the MRML scene.&lt;br /&gt;
* vtkMRMLScene::SetUndoOff() – ignore following SaveStateForUndo calls (useful when making multiple changes to the scene/nodes that do not need to be undone). &lt;br /&gt;
* vtkMRMLScene::SetUndoOn() – enable following SaveStateForUndo calls.&lt;br /&gt;
* vtkMRMLScene::ClearUndoStack() – clears the undo history.&lt;br /&gt;
* vtkMRMLScene::ClearRedoStack() – clears the redo history.&lt;br /&gt;
&lt;br /&gt;
Slicer module developers should call the vtkMRMLScene::SaveStateForUndo() method in their modules before changing the state of MRML nodes. This is usually done in the ProcessGUIEvents method that processes events from user interactions with GUI widgets. Note that vtkMRMLScene::SaveStateForUndo() should not be called while processing transient events such as continuous events sent by the UI while dragging a slider (for example vtkKWScale::ScaleValueStartChangingEvent).&lt;br /&gt;
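&lt;br /&gt;
The snapshot-based Undo/Redo mechanism can be sketched as two stacks (a plain Python illustration, not the vtkMRMLScene implementation; the scene state is simplified to one dictionary):&lt;br /&gt;

```python
# Toy undo/redo: SaveStateForUndo pushes a snapshot, Undo/Redo swap
# the current state with the top of the corresponding stack.
class UndoableScene:
    def __init__(self, state):
        self.state = dict(state)
        self.undo_stack, self.redo_stack = [], []
    def save_state_for_undo(self):
        self.undo_stack.append(dict(self.state))
        self.redo_stack.clear()  # new edits invalidate the redo history
    def undo(self):
        if self.undo_stack:
            self.redo_stack.append(dict(self.state))
            self.state = self.undo_stack.pop()
    def redo(self):
        if self.redo_stack:
            self.undo_stack.append(dict(self.state))
            self.state = self.redo_stack.pop()
```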
&lt;br /&gt;
= Other Useful References =&lt;br /&gt;
&lt;br /&gt;
== MRML API Documentation ==&lt;br /&gt;
&lt;br /&gt;
The detailed documentation of the MRML API can be found in the [http://www.na-mic.org/Slicer/Documentation/Slicer3/html/classes.html Slicer3 Doxygen pages].&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
See Data Model notes in [[AHM_2006:ProjectsSlicerDataModel|AHM 2006 Programming week project]].&lt;br /&gt;
&lt;br /&gt;
== Path-based MRML proposal ==&lt;br /&gt;
&lt;br /&gt;
Mike's proposal for a [[Slicer3:MRML3_Path|path-based MRML3 representation]], based on extending the Coordinate Space Manager ideas to the entire MRML3 tree&lt;br /&gt;
&lt;br /&gt;
== Slicer Daemon ==&lt;br /&gt;
&lt;br /&gt;
The goal of the [[Slicer3:Slicer_Daemon|Slicer Daemon]] project is to allow remote editing of the MRML data model by external programs over a socket.&lt;br /&gt;
&lt;br /&gt;
= Slicer 2.6 MRML =&lt;br /&gt;
&lt;br /&gt;
== Data Represented in MRML in Slicer 2.6 ==&lt;br /&gt;
&lt;br /&gt;
* Volumes&lt;br /&gt;
** IJK-&amp;gt;RAS (VTK-&amp;gt;RAS)&lt;br /&gt;
** Scalar Types&lt;br /&gt;
** Multicomponent (RGB, Displacement Vector)&lt;br /&gt;
** Tensor Volumes&lt;br /&gt;
** Label Maps&lt;br /&gt;
** Reference to Lookup Table&lt;br /&gt;
&lt;br /&gt;
* Models&lt;br /&gt;
** vtkPolyData&lt;br /&gt;
*** Named Field Data (scalars, vectors, labels) at points and cells (FreeSurferReaders)&lt;br /&gt;
*** Polylines with tensor point data (DTMRI Module)&lt;br /&gt;
** Color, Clipping State, Visibility, Scalar Visibility, LookupTable&lt;br /&gt;
&lt;br /&gt;
* Transforms&lt;br /&gt;
** Matrix4x4&lt;br /&gt;
&lt;br /&gt;
* Lookup Tables&lt;br /&gt;
** vtkLookupTable info&lt;br /&gt;
&lt;br /&gt;
* Fiducials&lt;br /&gt;
** Position, Quaternion&lt;br /&gt;
** Name, Selection State, Type (endoscopic, normal)&lt;br /&gt;
** Glyph Size, Text Size&lt;br /&gt;
&lt;br /&gt;
* Fiducial Lists&lt;br /&gt;
** Name, Size, Color, Selection State&lt;br /&gt;
&lt;br /&gt;
* Colors&lt;br /&gt;
** Name, Label#, Diffuse/Ambient/Specular&lt;br /&gt;
&lt;br /&gt;
* Model Groups&lt;br /&gt;
&lt;br /&gt;
* Application State (not to be carried to Slicer3 MRML)&lt;br /&gt;
* Locator (not to be carried to Slicer3 MRML)&lt;br /&gt;
* Module Specific Parameters (not to be carried to Slicer3 MRML)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Operations On MRML Scene ==&lt;br /&gt;
&lt;br /&gt;
* Load from File&lt;br /&gt;
* Save to File&lt;br /&gt;
* Traverse Nodes in Tree&lt;br /&gt;
* Insert Node&lt;br /&gt;
* Delete Node&lt;br /&gt;
* Register Tree Observer&lt;br /&gt;
* Update MRML&lt;br /&gt;
* Get Transformations (e.g. IJK to World through transform tree)&lt;br /&gt;
&lt;br /&gt;
* Data Type Specific Operations&lt;br /&gt;
** Get/Set Node MetaData&lt;br /&gt;
** Get/Set Data (e.g. as vtkImageData)&lt;br /&gt;
&lt;br /&gt;
== General References on XML ==&lt;br /&gt;
&lt;br /&gt;
A wikibook on XML: http://en.wikibooks.org/wiki/XML:_Managing_Data_Exchange&lt;br /&gt;
&lt;br /&gt;
The section on ID/IDREF implementations which are similar to what we use in MRML: http://en.wikibooks.org/wiki/XML:_Managing_Data_Exchange/The_many-to-many_relationship#ID.2FIDRE&lt;/div&gt;</summary>
		<author><name>Mathieu</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Slicer3:Data_Model&amp;diff=10738</id>
		<title>Slicer3:Data Model</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Slicer3:Data_Model&amp;diff=10738"/>
		<updated>2007-05-24T13:02:36Z</updated>

		<summary type="html">&lt;p&gt;Mathieu: /* MRML API Documentation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Slicer3 MRML Overview =&lt;br /&gt;
&lt;br /&gt;
*MRML Library provides an API for managing medical image data types (Volumes, Models, Transforms, Fiducials, Cameras, etc.) and their visualization. &lt;br /&gt;
*Each data type is represented by a special MRML node. &lt;br /&gt;
*MRML Scene is a collection of MRML nodes. &lt;br /&gt;
*The Slicer MRML data model is implemented independently of the visualization and algorithmic components of the system. &lt;br /&gt;
*Other Slicer components (Logic and GUI) observe changes in the MRML scene and individual nodes and process the resulting MRML events.&lt;br /&gt;
&lt;br /&gt;
For more details on MRML architecture see [http://www.na-mic.org/Wiki/images/e/e3/Slicer_3-alpha-2006-04-03.ppt Architecture Slides].&lt;br /&gt;
&lt;br /&gt;
= MRML Scene =&lt;br /&gt;
&lt;br /&gt;
*MRML Scene manages MRML nodes: add, delete, find, find by type, etc.&lt;br /&gt;
*MRML Scene provides persistence of MRML nodes (reading/writing to/from XML file). &lt;br /&gt;
*MRML Scene provides an Undo/Redo mechanism that restores a previous state of the scene and individual nodes.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= MRML Nodes =&lt;br /&gt;
 &lt;br /&gt;
*The MRML nodes are designed to store the state of the Slicer application, both raw data and visualization parameters.&lt;br /&gt;
&lt;br /&gt;
The following is a set of core MRML nodes that store the state of core Slicer modules:&lt;br /&gt;
&lt;br /&gt;
* vtkMRMLCameraNode&lt;br /&gt;
* vtkMRMLClipModelsNode&lt;br /&gt;
* vtkMRMLSliceCompositeNode&lt;br /&gt;
* vtkMRMLSliceNode&lt;br /&gt;
* vtkMRMLColorNode&lt;br /&gt;
* vtkMRMLTransformNode&lt;br /&gt;
* vtkMRMLLinearTransformNode&lt;br /&gt;
* vtkMRMLTransformableNode&lt;br /&gt;
* vtkMRMLFiducialListNode&lt;br /&gt;
* vtkMRMLModelNode&lt;br /&gt;
* vtkMRMLModelDisplayNode&lt;br /&gt;
* vtkMRMLStorageNode&lt;br /&gt;
* vtkMRMLModelStorageNode&lt;br /&gt;
* vtkMRMLVolumeNode&lt;br /&gt;
* vtkMRMLScalarVolumeNode&lt;br /&gt;
* vtkMRMLVectorVolumeNode&lt;br /&gt;
* vtkMRMLTensorVolumeNode&lt;br /&gt;
* vtkMRMLDiffusionTensorVolumeNode&lt;br /&gt;
* vtkMRMLDiffusionWeightedVolumeNode&lt;br /&gt;
* vtkMRMLVolumeDisplayNode&lt;br /&gt;
* vtkMRMLVectorVolumeDisplayNode&lt;br /&gt;
* vtkMRMLDiffusionTensorVolumeDisplayNode&lt;br /&gt;
* vtkMRMLDiffusionWeightedVolumeDisplayNode&lt;br /&gt;
* vtkMRMLVolumeHeaderlessStorageNode&lt;br /&gt;
* vtkMRMLVolumeArchetypeStorageNode&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt; &lt;br /&gt;
*MRML nodes are organized into C++ class hierarchies, all derived from vtkMRMLNode class. &lt;br /&gt;
*For example, vtkMRMLTransformableNode is the parent class of Volume, Model, Fiducial, and Transformation nodes; vtkMRMLVolumeNode is the parent of vtkMRMLScalarVolumeNode and vtkMRMLVectorVolumeNode.&lt;br /&gt;
*All MRML nodes have to implement certain standard API: ReadAttributes, WriteAttributes, Copy, etc.&lt;br /&gt;
{|&lt;br /&gt;
|[[Image: Slicer3_MRML_Node_Hier.jpg|thumb|400px]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= References to MRML Nodes =&lt;br /&gt;
&lt;br /&gt;
*Some MRML nodes have references to other nodes. &lt;br /&gt;
*Transformable Node has a reference to a Transformation node. Transformation node has a reference to its parent Transformation node. &lt;br /&gt;
*References are stored by node ID.&lt;br /&gt;
*Use vtkSetReferenceStringMacro to set reference ID (it registers reference with the scene).&lt;br /&gt;
*Access methods should check if the referenced node is still in the MRML scene using its ID.&lt;br /&gt;
{|&lt;br /&gt;
|[[Image: Slicer3_MRML_Trans_Ref.jpg|thumb|400px]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= MRML Events and Observers =&lt;br /&gt;
&lt;br /&gt;
*Changes in MRML scene and individual nodes propagate to other observing nodes, GUI and Logic objects via vtk events and command-observer mechanism.&lt;br /&gt;
*Use vtk AddObserver() and InvokeEvent() methods. vtk SetMacro generates ModifiedEvent.&lt;br /&gt;
*The command-observer mechanism for MRML is implemented using the helper vtkObserverManager class, MRML observer macros, and the ProcessMRMLEvents method.&lt;br /&gt;
*Observers should store a registered pointer to a MRML node to prevent callbacks on a deleted object. &lt;br /&gt;
{|&lt;br /&gt;
|[[Image: Slicer3_MRML_Observ.jpg|thumb|400px]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
*MRML observer macros are defined in Libs/MRML/vtkMRMLNode.h&lt;br /&gt;
*vtkSetMRMLObjectMacro - registers a MRML node with another vtk object (another MRML node, Logic or GUI). No observers added.&lt;br /&gt;
*vtkSetAndObserveMRMLObjectMacro - registers a MRML node and adds an observer for vtkCommand::ModifiedEvent. &lt;br /&gt;
*vtkSetAndObserveMRMLObjectEventsMacro - registers a MRML node and adds an observer for a specified set of events. &lt;br /&gt;
*SetAndObserveMRMLScene[Events]() method is used in GUI and Logic to observe Modify, NewScene, NodeAdded, etc. events.&lt;br /&gt;
*ProcessMRMLEvents method should be implemented in MRML nodes, Logic, and GUI classes in order to process events from the observed nodes.&lt;br /&gt;
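&lt;br /&gt;
As a minimal sketch of the macros above (the logic class name, the VolumeNode member, and the reaction code are illustrative assumptions, not from a specific module):&lt;br /&gt;
 // Observe ModifiedEvent on a volume node (this-&amp;gt;VolumeNode is a hypothetical member):&lt;br /&gt;
 vtkIntArray *events = vtkIntArray::New();&lt;br /&gt;
 events-&amp;gt;InsertNextValue(vtkCommand::ModifiedEvent);&lt;br /&gt;
 vtkSetAndObserveMRMLObjectEventsMacro(this-&amp;gt;VolumeNode, volumeNode, events);&lt;br /&gt;
 events-&amp;gt;Delete();&lt;br /&gt;
 &lt;br /&gt;
 // The observed events are delivered to ProcessMRMLEvents:&lt;br /&gt;
 void vtkMyModuleLogic::ProcessMRMLEvents(vtkObject *caller, unsigned long event, void *callData)&lt;br /&gt;
 {&lt;br /&gt;
   if (caller == this-&amp;gt;VolumeNode &amp;amp;&amp;amp; event == vtkCommand::ModifiedEvent)&lt;br /&gt;
     {&lt;br /&gt;
     // react to the node change here&lt;br /&gt;
     }&lt;br /&gt;
 }&lt;br /&gt;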
&lt;br /&gt;
= Creating Custom MRML Node Classes=&lt;br /&gt;
&lt;br /&gt;
*Custom MRML nodes provide persistent storage for the module parameters. &lt;br /&gt;
*Custom MRML nodes should be registered with the MRML scene using RegisterNodeClass() so they can be saved and restored from a scene file. &lt;br /&gt;
*Classes should implement the following methods: &lt;br /&gt;
*CreateNodeInstance() – similar to the VTK New() method, only not static. &lt;br /&gt;
*GetNodeTagName() – return a unique XML tag for this node. &lt;br /&gt;
*ReadXMLAttributes() – reads node attributes from XML file as name-value pairs. &lt;br /&gt;
*WriteXML() – writes node attributes to output stream (as in interpolate=&amp;quot;1&amp;quot; ).&lt;br /&gt;
*Copy() – copies node attributes. &lt;br /&gt;
&lt;br /&gt;
*If the node has references to other nodes the following additional methods should be implemented: &lt;br /&gt;
 –UpdateReferenceID() - updates the stored reference to another node. &lt;br /&gt;
 –UpdateScene()- updates other nodes in the scene depending on this node or updates this node if it depends on other nodes when the scene is read in. &lt;br /&gt;
   This method is called automatically by XML parser after all nodes are created. &lt;br /&gt;
*An example of a custom MRML node implementation: vtkMRMLGradientAnisotropicDiffusionFilterNode in Modules/GradientAnisotropicDiffusionFilter directory. &lt;br /&gt;
*To add node to the MRML scene: &lt;br /&gt;
 –In the code: use standard vtk New() and add node to the scene using vtkMRMLScene::AddNode(vtkMRMLNode *)&lt;br /&gt;
 –By user request: use vtkSlicerNodeSelectorWidget that creates a new node from the module’s UI. &lt;br /&gt;
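&lt;br /&gt;
The in-code path above can be sketched as follows (the node class is the one from the example module; the variable names are illustrative):&lt;br /&gt;
 // Create the node with the standard VTK factory method ...&lt;br /&gt;
 vtkMRMLGradientAnisotropicDiffusionFilterNode *node = vtkMRMLGradientAnisotropicDiffusionFilterNode::New();&lt;br /&gt;
 // ... and hand it to the scene, which assigns it a unique ID.&lt;br /&gt;
 this-&amp;gt;GetMRMLScene()-&amp;gt;AddNode(node);&lt;br /&gt;
 node-&amp;gt;Delete(); // the scene now holds its own reference&lt;br /&gt;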
&lt;br /&gt;
&lt;br /&gt;
= Undo/Redo Mechanism =&lt;br /&gt;
&lt;br /&gt;
*Undo/Redo is based on saving and restoring the state of MRML nodes in the Scene. &lt;br /&gt;
*The MRML scene can save a snapshot of all nodes into special Undo and Redo stacks. &lt;br /&gt;
*The Undo and Redo stacks store copies of nodes that have changed since the previous snapshot. Nodes that have not changed are stored by reference (pointer). &lt;br /&gt;
*When Undo is called on the scene, the state on top of the Undo stack is copied into the current scene and also into the Redo stack. &lt;br /&gt;
*All undoable operations must store their data as MRML nodes.&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|[[Image: Slicer3_MRML_Undo.jpg|thumb|400px]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
*The developer controls at which point the snapshot is saved by calling the SaveStateForUndo method on the MRML scene. &lt;br /&gt;
 –SaveStateForUndo() - saves the state of all nodes in the scene. &lt;br /&gt;
 –SaveStateForUndo(vtkMRMLNode *) - saves the state of the specified node. &lt;br /&gt;
 –SaveStateForUndo(vtkCollection *) - saves the state of the specified collection of nodes. &lt;br /&gt;
*SaveStateForUndo() should be called in GUI/Logic classes before changing the state of MRML nodes. Usually done in the ProcessGUIEvents method that processes events from the user interactions with GUI widgets. &lt;br /&gt;
*SaveStateForUndo() should not be called while processing transient events such as continuous events sent by KW UI while dragging a slider (for example vtkKWScale::ScaleValueStartChangingEvent). &lt;br /&gt;
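&lt;br /&gt;
For example (a sketch; the parameter node, widget member, and setter are hypothetical):&lt;br /&gt;
 // In ProcessGUIEvents, once the slider reports a final value:&lt;br /&gt;
 if (event == vtkKWScale::ScaleValueChangedEvent) // not the transient ScaleValueStartChangingEvent&lt;br /&gt;
   {&lt;br /&gt;
   // snapshot the node before modifying it, so the change can be undone&lt;br /&gt;
   this-&amp;gt;GetMRMLScene()-&amp;gt;SaveStateForUndo(this-&amp;gt;ParameterNode);&lt;br /&gt;
   this-&amp;gt;ParameterNode-&amp;gt;SetConductance(this-&amp;gt;ConductanceScale-&amp;gt;GetValue());&lt;br /&gt;
   }&lt;br /&gt;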
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following methods on the MRML scene are used to manage Undo/Redo stacks:&lt;br /&gt;
&lt;br /&gt;
* vtkMRMLScene::Undo() – restore the previously saved state of the MRML scene.&lt;br /&gt;
* vtkMRMLScene::Redo() – restore the previously undone state of the MRML scene.&lt;br /&gt;
* vtkMRMLScene::SetUndoOff() – ignore following SaveStateForUndo calls (useful when making multiple changes to the scene/nodes that do not need to be undone). &lt;br /&gt;
* vtkMRMLScene::SetUndoOn() – enable following SaveStateForUndo calls.&lt;br /&gt;
* vtkMRMLScene::ClearUndoStack() – clears the undo history.&lt;br /&gt;
* vtkMRMLScene::ClearRedoStack() – clears the redo history.&lt;br /&gt;
&lt;br /&gt;
Slicer Module developers should call the vtkMRMLScene::SaveStateForUndo() method in their modules before changing the state of MRML nodes. This is usually done in the ProcessGUIEvents method that processes events from the user's interactions with GUI widgets. Note that vtkMRMLScene::SaveStateForUndo() should not be called while processing transient events, such as the continuous events sent by the UI while dragging a slider (for example vtkKWScale::ScaleValueStartChangingEvent).&lt;br /&gt;
&lt;br /&gt;
= Other Useful References =&lt;br /&gt;
&lt;br /&gt;
== MRML API Documentation ==&lt;br /&gt;
&lt;br /&gt;
The detailed documentation of MRML API can be found in [http://www.na-mic.org/Slicer/Documentation/Slicer3/html/classes.html &amp;lt;nowiki&amp;gt; Slicer3 Doxygen pages&amp;lt;/nowiki&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
See Data Model notes in [[AHM_2006:ProjectsSlicerDataModel|AHM 2006 Programming week project]].&lt;br /&gt;
&lt;br /&gt;
== Path-based MRML proposal ==&lt;br /&gt;
&lt;br /&gt;
Mike's proposal for a [[Slicer3:MRML3_Path|path-based MRML3 representation]], based on extending the Coordinate Space Manager ideas to the entire MRML3 tree&lt;br /&gt;
&lt;br /&gt;
== Slicer Daemon ==&lt;br /&gt;
&lt;br /&gt;
The goal of the [[Slicer3:Slicer_Daemon|Slicer Daemon]] project is to allow remote editing of the MRML data model by external programs over a socket.&lt;br /&gt;
&lt;br /&gt;
= Slicer 2.6 MRML =&lt;br /&gt;
&lt;br /&gt;
== Data Represented in MRML in Slicer 2.6 ==&lt;br /&gt;
&lt;br /&gt;
* Volumes&lt;br /&gt;
** IJK-&amp;gt;RAS (VTK-&amp;gt;RAS)&lt;br /&gt;
** Scalar Types&lt;br /&gt;
** Multicomponent (RGB, Displacement Vector)&lt;br /&gt;
** Tensor Volumes&lt;br /&gt;
** Label Maps&lt;br /&gt;
** Reference to Lookup Table&lt;br /&gt;
&lt;br /&gt;
* Models&lt;br /&gt;
** vtkPolyData&lt;br /&gt;
*** Named Field Data (scalars, vectors, labels) at points and cells (FreeSurferReaders)&lt;br /&gt;
*** Polylines with tensor point data (DTMRI Module)&lt;br /&gt;
** Color, Clipping State, Visibility, Scalar Visibility, LookupTable&lt;br /&gt;
&lt;br /&gt;
* Transforms&lt;br /&gt;
** Matrix4x4&lt;br /&gt;
&lt;br /&gt;
* Lookup Tables&lt;br /&gt;
** vtkLookupTable info&lt;br /&gt;
&lt;br /&gt;
* Fiducials&lt;br /&gt;
** Position, Quaternion&lt;br /&gt;
** Name, Selection State, Type (endoscopic, normal)&lt;br /&gt;
** Glyph Size, Text Size&lt;br /&gt;
&lt;br /&gt;
* Fiducial Lists&lt;br /&gt;
** Name, Size, Color, Selection State&lt;br /&gt;
&lt;br /&gt;
* Colors&lt;br /&gt;
** Name, Label#, Diffuse/Ambient/Specular&lt;br /&gt;
&lt;br /&gt;
* Model Groups&lt;br /&gt;
&lt;br /&gt;
* Application State (not to be carried to Slicer3 MRML)&lt;br /&gt;
* Locator (not to be carried to Slicer3 MRML)&lt;br /&gt;
* Module Specific Parameters (not to be carried to Slicer3 MRML)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Operations On MRML Scene ==&lt;br /&gt;
&lt;br /&gt;
* Load from File&lt;br /&gt;
* Save to File&lt;br /&gt;
* Traverse Nodes in Tree&lt;br /&gt;
* Insert Node&lt;br /&gt;
* Delete Node&lt;br /&gt;
* Register Tree Observer&lt;br /&gt;
* Update MRML&lt;br /&gt;
* Get Transformations (e.g. IJK to World through transform tree)&lt;br /&gt;
&lt;br /&gt;
* Data Type Specific Operations&lt;br /&gt;
** Get/Set Node MetaData&lt;br /&gt;
** Get/Set Data (e.g. as vtkImageData)&lt;br /&gt;
&lt;br /&gt;
== General References on XML ==&lt;br /&gt;
&lt;br /&gt;
A wikibook on XML: http://en.wikibooks.org/wiki/XML:_Managing_Data_Exchange&lt;br /&gt;
&lt;br /&gt;
The section on ID/IDREF implementations which are similar to what we use in MRML: http://en.wikibooks.org/wiki/XML:_Managing_Data_Exchange/The_many-to-many_relationship#ID.2FIDRE&lt;/div&gt;</summary>
		<author><name>Mathieu</name></author>
		
	</entry>
</feed>