<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.na-mic.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mark</id>
	<title>NAMIC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.na-mic.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mark"/>
	<link rel="alternate" type="text/html" href="https://www.na-mic.org/wiki/Special:Contributions/Mark"/>
	<updated>2026-05-13T03:49:18Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.33.0</generator>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Test-mark&amp;diff=51202</id>
		<title>Test-mark</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Test-mark&amp;diff=51202"/>
		<updated>2010-04-07T22:16:54Z</updated>

		<summary type="html">&lt;p&gt;Mark: Created page with 'hello'&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;hello&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44858</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44858"/>
		<updated>2009-11-10T23:30:07Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create Project in web GUI with ProjectID IGT_GLIOMA and create the custom variables that additionally describe the data. In this project, we&lt;br /&gt;
added variables for tumor size, location, description, and grade, plus patient sex and age. &lt;br /&gt;
* The values of these variables are manually entered &lt;br /&gt;
and displayed when a subject is selected. Custom variables cannot currently be used as search criteria to select a subset of the project.&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u manderson -p my-passwd -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -remote /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
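The loop can be sketched in shell; the dry run below only prints the commands (the subject numbering case1..case5 is illustrative), so you can review them before piping the output to sh:

```shell
# Dry-run sketch only: print one XNATRestClient PUT per subject.
# Assumes $XNE_Svr and a valid session id, as in the commands above.
session=94202E5B23C1672FDF1B2D1A40173F21
out=$(for n in 1 2 3 4 5; do
  echo "XNATRestClient -host \$XNE_Svr -user_session $session -m PUT -remote /REST/projects/IGT_GLIOMA/subjects/case$n"
done)
printf '%s\n' "$out"
```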
* Anonymize all patient data. I tried the DicomRemapper:&lt;br /&gt;
 DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap /projects/igtcases/neuro/glioma_mrt/for_hussein/1802 -o /d/bigdaily/&lt;br /&gt;
but this fails if non-dicom files are found amongst the DICOM data:&lt;br /&gt;
 /projects/igtcases/neuro/glioma_mrt/for_hussein/1802/tumor.xml: not DICOM data&lt;br /&gt;
this will often be the case for us at BWH, so this too is not currently viable. I used the interactive DicomBrowser tool instead. This requires&lt;br /&gt;
editing a DICOM descriptor .das file for each subject, and 20 mouse-clicks to specify parameters and do the anonymizing.&lt;br /&gt;
* Upload the anonymized data. This requires making a compressed tar file of the anonymized data and running the upload process from &lt;br /&gt;
xnat Central &lt;br /&gt;
* The entire process of manually uploading and anonymizing a case takes between six and ten minutes.&lt;br /&gt;
* Several types of errors were encountered:&lt;br /&gt;
** Data entry at scan time. The age for one subject was listed as 100 years, though the subject was born in 1984.&lt;br /&gt;
** Spreadsheet errors. All subject ages were incorrect; I used the age at the date of the scan. One subject was listed as a 20-year-old male, but the data is for a 34-year-old female.&lt;br /&gt;
** It is certainly possible that I made errors transcribing values from the spreadsheet.&lt;br /&gt;
&lt;br /&gt;
==Notes on derived data upload and FMRI data upload.==&lt;br /&gt;
&lt;br /&gt;
* Case 1 - Older non-dicom Genesis data upload. This is done using Dave Clunie's useful dicom3tools kit. Genesis data is converted to DICOM&lt;br /&gt;
from a directory containing only pre-DICOM, Genesis-format images (named I.001 - I.xxx) with the command:&lt;br /&gt;
 ls -1 I.* | awk '{printf(&amp;quot;gentodc %s %d.dcm\n&amp;quot;,$1,NR)}' | sh&lt;br /&gt;
This will create a series of dicom images. However, gentodc creates a unique SeriesInstanceUID for each image:&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15832.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15833.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15834.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15835.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
&lt;br /&gt;
When the resulting data is &lt;br /&gt;
uploaded to xnat, each image is treated as a series. I uploaded 300 images (all part of a single study) before I realized this problem.&lt;br /&gt;
This fills the prearchive and takes a long time to clean up. As a solution, a second step is run to modify the dicom images. This is done&lt;br /&gt;
by applying the xnat remapper to the newly formed dicom data:&lt;br /&gt;
 /projects/mark/xnat/DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap -d ../tmp/sp.das -o /d/bigdaily/mark/tmp/ .&lt;br /&gt;
&lt;br /&gt;
Where file sp.das contains:&lt;br /&gt;
 (0020,000e) := &amp;quot;0.0.0.0.3.1779.1.1257878261.15832.2200373056&amp;quot;&lt;br /&gt;
which sets all images to the same SeriesInstanceUID. This remapping currently needs to be performed on a series-by-series basis. Other &lt;br /&gt;
anonymizing is performed during this step as well.&lt;br /&gt;
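A small wrapper can drive the per-series remapping. The dry run below only prints the commands, assuming a hypothetical layout of one directory per series with a matching .das file that pins (0020,000e) for that series:

```shell
# Dry-run sketch only: one DicomRemap invocation per series directory.
# The series1..series3 directories and matching .das names are hypothetical.
out=$(for d in series1 series2 series3; do
  echo "DicomRemap -d $d.das -o /d/bigdaily/mark/tmp/$d $d"
done)
printf '%s\n' "$out"
```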
&lt;br /&gt;
* Case 2 - Upload of segmentations derived from DICOM data. This can be  even more problematic, depending on the situation. I am currently &lt;br /&gt;
uploading prostate segmentations combined with the original DICOM data. Each subject has several MRI series and an expert labelmap&lt;br /&gt;
segmentation that need to be uploaded as a single subject. The researcher's approach was to create a new series number for the segmentation,&lt;br /&gt;
using the series header information from the original data. When this is uploaded, the segmentation is treated as duplicate data by xnat.&lt;br /&gt;
Xnat thinks the data already exists, based on the header information. The solution here is potentially more complicated. Not only does a&lt;br /&gt;
new SeriesInstanceUID need to be created, but also it appears that xnat verifies that the SOPInstanceUID i.e. the unique image identifier&lt;br /&gt;
is in fact unique. Apparently, xnat only checks the first image of a series for SOPInstanceUID uniqueness, as I was&lt;br /&gt;
able to apply the remapper to a series with a single .das file that set the same SOPInstanceUID on every image in the series. &lt;br /&gt;
Below are the SOPInstanceUID and SeriesInstanceUID that were applied to the segmented data:&lt;br /&gt;
 (0008,0018) := &amp;quot;1.2.840.113619.2.207.3596.11861984.22869.1219405353.999&amp;quot;&lt;br /&gt;
 (0020,000e) := &amp;quot;1.2.840.113619.2.207.3596.11861984.25740.1219404288.477&amp;quot;&lt;br /&gt;
Here is the segmented data in xnat:&lt;br /&gt;
&lt;br /&gt;
[[File:seg.png]] [ Labelled colon segmentation on xnat central]&lt;br /&gt;
&lt;br /&gt;
To solve this derived data case, 1) apply a separate .das file that contains a new series description&lt;br /&gt;
  (0008,103e) := &amp;quot;Segmented series with label values 14 and 19&amp;quot;&lt;br /&gt;
&lt;br /&gt;
and new series number&lt;br /&gt;
  (0020,0011) := &amp;quot;10&amp;quot;&lt;br /&gt;
&lt;br /&gt;
 2) create a new SOPInstanceUID for each image in the derived series with the following script, which uses the dcmtk tool dcmodify:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/tcsh&lt;br /&gt;
 set flist = `ls -1 I.*`&lt;br /&gt;
 set sopidlist = `ls -1 I.* | awk '{printf(&amp;quot;/projects/mark/dcmtk/bin/dcmdump %s\n&amp;quot;,$1)}' | sh | \&lt;br /&gt;
 grep 0008,0018 | awk '{printf(&amp;quot;%s\n&amp;quot;,$3)}' | sed s/\\[// | sed s/\\]//`&lt;br /&gt;
 set i = 1&lt;br /&gt;
 while ($i &amp;lt;= $#flist)&lt;br /&gt;
 echo &amp;quot;$i $flist[$i] $sopidlist[$i]&amp;quot;&lt;br /&gt;
 @ i++&lt;br /&gt;
 end&lt;br /&gt;
 set i = 1&lt;br /&gt;
 while ($i &amp;lt;= $#flist)&lt;br /&gt;
 set newsop = `printf %s.%d $sopidlist[$i] $i`&lt;br /&gt;
 echo &amp;quot;$newsop&amp;quot;&lt;br /&gt;
 /projects/mark/dcmtk/bin/dcmodify -ma &amp;quot;(0008,0018)=$newsop&amp;quot; $flist[$i]&lt;br /&gt;
 @ i++&lt;br /&gt;
 end&lt;br /&gt;
and 3) byte-swap the segmented data using dcmodify, as its byte order is the opposite of the other dicom series:&lt;br /&gt;
 ls -1 | awk '{printf(&amp;quot;/projects/mark/dcmtk/bin/dcmodify +tb %s\n&amp;quot;,$1)}' | sh&lt;br /&gt;
&lt;br /&gt;
* Case 3 - FMRI data upload - Below is the essence of an email that I sent to the xnat discussion group but never&lt;br /&gt;
received a reply to:&lt;br /&gt;
&lt;br /&gt;
I am uploading some large FMRI datasets to xnat central. Each exam has about 10k images. I&lt;br /&gt;
have been anonymizing the data with DicomRemap and modifying the series descriptions to describe&lt;br /&gt;
the fmri task being performed for each series via the .das file.&lt;br /&gt;
During the upload process, I get a  proxy error:&lt;br /&gt;
&lt;br /&gt;
[[File:xnat-proxy-error.jpg]]&lt;br /&gt;
&lt;br /&gt;
But sometime later my data shows&lt;br /&gt;
up in the prearchive and appears  complete and intact and I can move it to the project. However, when&lt;br /&gt;
I then select the subject from the project,  I see a processing exception at the top of the page:&lt;br /&gt;
&lt;br /&gt;
[[File:process-error.jpg]]&lt;br /&gt;
&lt;br /&gt;
I am concerned that when we go to share the data after uploading everything, if someone sees this processing&lt;br /&gt;
exception they will assume the data is corrupt, even though it is complete, and won't use it. So I am curious:&lt;br /&gt;
1) is there a way to suppress this error after upload?&lt;br /&gt;
2) is it even feasible to upload such large datasets? The compressed size of 300M is not large, but the image count is.&lt;br /&gt;
3) are there alternative, better methods to upload?&lt;br /&gt;
&lt;br /&gt;
I looked at http://nrg.wikispaces.com/XNAT+Data+Management&lt;br /&gt;
&lt;br /&gt;
which describes an FTP upload process, but I don't see how to do this with XNAT central&lt;br /&gt;
&lt;br /&gt;
I also looked into using http://nrg.wikispaces.com/StoreXAR&lt;br /&gt;
and tried to organize my data and SESSION.xml file as described here, but this generated a connection error:&lt;br /&gt;
&lt;br /&gt;
Thu Oct 22 18:13:57 EDT 2009 manderson@http://central.xnat.org:8104/:java.net.ConnectException: Connection timed out&lt;br /&gt;
&lt;br /&gt;
and likely I didn't specify the XML file correctly. Is there a way to generate the xml automatically by scanning the organized data?&lt;br /&gt;
If not, it seems easier to just upload the compressed tar file, do something else for a while, and then move the data to the&lt;br /&gt;
project from the prearchive. This is relatively non-time-consuming, as xnat does the work and displays the series descriptions that&lt;br /&gt;
I want; I don't need any custom variables.&lt;br /&gt;
&lt;br /&gt;
Thanks for any help/pointers. &lt;br /&gt;
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the GUI top &lt;br /&gt;
* Select the project from the project list. &lt;br /&gt;
* From within the Project view, Click &amp;quot;Access&amp;quot; tab and set permissions to be appropriate&lt;br /&gt;
* From within the Project view, Select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config xml file to describe the spreadsheet to be built from the DICOM data. Root element is &amp;quot;Columns&amp;quot; and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped dicom tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of directories containing the source DICOM data (separated by spaces?)&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the study and series InstanceUID tags are needed  (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve pt age&lt;br /&gt;
 // - (0010,1040) preserve pt sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 - (0010,0040)&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values. In some cases, cells in the spreadsheet can be left blank: for a Patient-level remap, exactly one value must be specified for each patient, so if the spreadsheet contains multiple rows per patient, the column needs to be filled in only one row for each patient. Similarly, for a Study-level remap, the value need only be filled in once per study. The remapper will complain if you leave a required cell empty, and likewise if you give, for example, a Patient-level remap column multiple values for a single patient.&lt;br /&gt;
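As an illustration of the blank-cell rules above, a hypothetical remap.csv fragment might look like the following (the column layout is illustrative, not the tool's exact output):

```
PatientID,Anon Patient Name,Anon PatientID,StudyID,Series
1802,case143,case143,MR1,10
1802,,,MR1,11
```

Here the second row leaves the Patient-level remap cells blank because those values were already supplied in the first row for that patient.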
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* the remap config XML should be the same file used in 2.a, &lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d, and &lt;br /&gt;
* the list of directories is the same list of source directories used in 2.b.&lt;br /&gt;
* add an anonymization script to be applied at this stage by using the -d option.&lt;br /&gt;
* first time you use a script to generate new UIDs, you'll need a new UID root; &lt;br /&gt;
** do this by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line. &lt;br /&gt;
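Putting the bullets above together, a first full invocation might look like this (the script name anon-script.das and the paths are hypothetical; the command is printed here rather than executed):

```shell
# Dry-run sketch: full DicomRemapper call with an anonymization script (-d)
# and, for first-time UID generation, the UID root (-s). Paths hypothetical.
cmd="DicomRemapper -c remap-config-file.xml -d anon-script.das -s http://nrg.wustl.edu/UIDGen -o /path/to/output -v remap.csv dir1 dir2"
printf '%s\n' "$cmd"
```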
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database -- it can send to a DICOM server. Below we use webservices to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data go from here into the database -- via an &amp;quot;admin&amp;quot; move to the db? How labor intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For next steps using web services, use curl or XNATRestClient ('''download XNATRestClient''' in xnat_tools.zip from http://nrg.wikispaces.com/XNAT+REST+API+Usage)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with server and create new session; use the response as a sessionID ($JSessionID) to use in subsequent queries&lt;br /&gt;
 curl -X POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password &lt;br /&gt;
 or, use the XNATRestClient&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
  XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT /REST/projects/$ProjectID/subjects/s0001 &lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
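Such a script can be sketched as a dry run (the subject ids s0001..s0003 are illustrative; $XNE_Svr, $JSessionID, and $ProjectID are assumed to be set as in step 3; pipe the output to sh to execute):

```shell
# Dry-run sketch only: print one subject-creation PUT per subject id.
out=$(for s in s0001 s0002 s0003; do
  echo "XNATRestClient -host \$XNE_Svr -user_session \$JSessionID -m PUT -remote /REST/projects/\$ProjectID/subjects/$s"
done)
printf '%s\n' "$out"
```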
&lt;br /&gt;
'''4a. Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' No demographics are applied to each subject by default. To edit the demographics (like gender or handedness) of a subject '''already created''', use XML Path shortcuts:&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (Append XML path shortcuts and separate each by an ''&amp;amp;''. Note that querystring parameters must be separated from the actual URI by a ?):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML Path shortcuts that can be specified on commandline for projects, subject, experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an xml file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./${ProjectID}_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create URIs for scans and reconstructions, and upload them.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nfti&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0344/0344.nfti&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
[[Link title]]&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44855</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44855"/>
		<updated>2009-11-10T23:03:54Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create Project in web GUI with ProjectID IGT_GLIOMA and create the custom variables that additionally describe the data. In this project, we&lt;br /&gt;
added variables for tumor size, location, description, and grade, plus patient sex and age. &lt;br /&gt;
* The values of these variables are manually entered &lt;br /&gt;
and displayed when a subject is selected. Custom variables cannot currently be used as search criteria to select a subset of the project.&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u manderson -p my-passwd -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -remote /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
* Anonymize all patient data. I tried the DicomRemapper:&lt;br /&gt;
 DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap /projects/igtcases/neuro/glioma_mrt/for_hussein/1802 -o /d/bigdaily/&lt;br /&gt;
but this fails if non-dicom files are found amongst the DICOM data:&lt;br /&gt;
 /projects/igtcases/neuro/glioma_mrt/for_hussein/1802/tumor.xml: not DICOM data&lt;br /&gt;
this will often be the case for us at BWH, so this too is not currently viable. I used the interactive DicomBrowser tool instead. This requires&lt;br /&gt;
editing a DICOM descriptor .das file for each subject, and 20 mouse-clicks to specify parameters and do the anonymizing.&lt;br /&gt;
* Upload the anonymized data. This requires making a compressed tar file of the anonymized data and running the upload process from &lt;br /&gt;
xnat Central &lt;br /&gt;
* The entire process of manually uploading and anonymizing a case takes between six and ten minutes.&lt;br /&gt;
* Several types of errors were encountered:&lt;br /&gt;
** Data entry at scan time. The age for one subject was listed as 100 years, though the subject was born in 1984.&lt;br /&gt;
** Spreadsheet errors. All subject ages were incorrect; I used the age at the date of the scan. One subject was listed as a 20-year-old male, but the data is for a 34-year-old female.&lt;br /&gt;
** It is certainly possible that I made errors transcribing values from the spreadsheet.&lt;br /&gt;
&lt;br /&gt;
==Notes on derived data upload and FMRI data upload.==&lt;br /&gt;
&lt;br /&gt;
* Case 1 - Older non-dicom Genesis data upload. This is done using Dave Clunie's useful dicom3tools kit. Genesis data is converted to DICOM&lt;br /&gt;
from a directory containing only pre-DICOM, Genesis-format images (named I.001 - I.xxx) with the command:&lt;br /&gt;
 ls -1 I.* | awk '{printf(&amp;quot;gentodc %s %d.dcm\n&amp;quot;,$1,NR)}' | sh&lt;br /&gt;
This will create a series of dicom images. However, gentodc creates a unique SeriesInstanceUID for each image:&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15832.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15833.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15834.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15835.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
&lt;br /&gt;
When the resulting data is&lt;br /&gt;
uploaded to XNAT, each image is treated as a separate series. I uploaded 300 images (all part of a single study) before I realized this problem.&lt;br /&gt;
This fills the prearchive and takes a long time to clean up. As a solution, a second step is run to modify the DICOM images. This is done&lt;br /&gt;
by applying the XNAT remapper to the newly formed DICOM data:&lt;br /&gt;
 /projects/mark/xnat/DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap -d ../tmp/sp.das -o /d/bigdaily/mark/tmp/ .&lt;br /&gt;
&lt;br /&gt;
Where file sp.das contains:&lt;br /&gt;
 (0020,000e) := &amp;quot;0.0.0.0.3.1779.1.1257878261.15832.2200373056&amp;quot;&lt;br /&gt;
which sets all images to the same SeriesInstanceUID. Currently, this remapping must be performed on a series-by-series basis. Other&lt;br /&gt;
anonymization is performed during this step as well.&lt;br /&gt;
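As a sanity check before re-uploading (my own suggestion, not part of the original workflow), the dcmdump text for a directory of files can be scanned to confirm how many distinct SeriesInstanceUID values remain; after the remap there should be exactly one per series:&lt;br /&gt;

```python
import re

# Matches dcmdump output lines such as:
#   (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15832.2200373056] #  44, 1 SeriesInstanceUID
SERIES_UID = re.compile(r"\(0020,000e\)\s+UI\s+\[([^\]]+)\]")

def series_uids(dcmdump_output):
    """Return the set of distinct SeriesInstanceUIDs found in dcmdump output."""
    return {m.group(1) for m in SERIES_UID.finditer(dcmdump_output)}
```

Run dcmdump over every file, concatenate the output, and pass the text to this function; a result larger than one UID per intended series means the remap step was incomplete.&lt;br /&gt;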
&lt;br /&gt;
* Case 2 - Upload of segmentations derived from DICOM data. This can be even more problematic, depending on the situation. I am currently&lt;br /&gt;
uploading prostate segmentations combined with the original DICOM data. Each subject has several MRI series and an expert labelmap&lt;br /&gt;
segmentation that need to be uploaded as a single subject. The researcher's approach was to create a new series number for the segmentation,&lt;br /&gt;
reusing the series header information from the original data. When this is uploaded, the segmentation is treated as duplicate data by XNAT:&lt;br /&gt;
based on the header information, XNAT thinks the data already exists. The solution here is potentially more complicated. Not only does a&lt;br /&gt;
new SeriesInstanceUID need to be created, but it also appears that XNAT verifies that the SOPInstanceUID (the unique image identifier)&lt;br /&gt;
is in fact unique. Apparently, XNAT checks only the first image of a series for SOPInstanceUID uniqueness, since I was&lt;br /&gt;
able to apply the remapper to a series with a single .das file that set the same SOPInstanceUID on every image in the series.&lt;br /&gt;
Below are the SOPInstanceUID and SeriesInstanceUID that were applied to the segmented data:&lt;br /&gt;
 (0008,0018) := &amp;quot;1.2.840.113619.2.207.3596.11861984.22869.1219405353.999&amp;quot;&lt;br /&gt;
 (0020,000e) := &amp;quot;1.2.840.113619.2.207.3596.11861984.25740.1219404288.477&amp;quot;&lt;br /&gt;
Here is the segmented data in xnat:&lt;br /&gt;
&lt;br /&gt;
[[File:seg.png]] [ Labelled colon segmentation on xnat central]&lt;br /&gt;
&lt;br /&gt;
To solve this derived data case: 1) apply a separate .das file that contains a new series description&lt;br /&gt;
  (0008,103e) := &amp;quot;Segmented series with label values 14 and 19&amp;quot;&lt;br /&gt;
&lt;br /&gt;
and new series number&lt;br /&gt;
  (0020,0011) := &amp;quot;10&amp;quot;&lt;br /&gt;
&lt;br /&gt;
and 2) create a new SOPInstanceUID for each image in the derived series with the following script, which uses the dcmtk tool dcmodify:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/tcsh&lt;br /&gt;
 # For each image file I.*, extract its current SOPInstanceUID (0008,0018),&lt;br /&gt;
 # then rewrite it with a per-image numeric suffix so every file is unique.&lt;br /&gt;
 set flist = `ls -1 I.*`&lt;br /&gt;
 set sopidlist = `ls -1 I.* | awk '{printf(&amp;quot;/projects/mark/dcmtk/bin/dcmdump %s\n&amp;quot;,$1)}' | sh | grep 0008,0018 | awk '{printf(&amp;quot;%s\n&amp;quot;,$3)}' | sed s/\\[// | sed s/\\]//`&lt;br /&gt;
 # First pass: report each file with its existing SOPInstanceUID.&lt;br /&gt;
 set i = 1&lt;br /&gt;
 while ($i &amp;lt;= $#flist)&lt;br /&gt;
   echo &amp;quot;$i $flist[$i] $sopidlist[$i]&amp;quot;&lt;br /&gt;
   @ i++&lt;br /&gt;
 end&lt;br /&gt;
 # Second pass: append .index to each original UID and write it back with dcmodify.&lt;br /&gt;
 set i = 1&lt;br /&gt;
 while ($i &amp;lt;= $#flist)&lt;br /&gt;
   set newsop = `printf %s.%d $sopidlist[$i] $i`&lt;br /&gt;
   echo &amp;quot;$newsop&amp;quot;&lt;br /&gt;
   /projects/mark/dcmtk/bin/dcmodify -ma &amp;quot;(0008,0018)=$newsop&amp;quot; $flist[$i]&lt;br /&gt;
   @ i++&lt;br /&gt;
 end&lt;br /&gt;
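One caveat with the suffixing approach above (my observation, not from the original page): DICOM UIDs are limited to 64 characters, so appending .N to an already long SOPInstanceUID can silently produce an invalid value. A minimal guard, sketched in Python:&lt;br /&gt;

```python
def suffixed_sop_uid(base_uid, index):
    """Append '.index' to a SOPInstanceUID, as the tcsh script above does,
    but refuse to build a UID longer than the 64-character DICOM limit."""
    uid = "%s.%d" % (base_uid, index)
    if len(uid) > 64:
        raise ValueError("UID exceeds the 64-character DICOM limit: " + uid)
    return uid
```

If the check trips, a shorter replacement UID root must be generated instead of suffixing the original.&lt;br /&gt;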
&lt;br /&gt;
* Case 3 - FMRI data upload. Below is the essence of an email that I sent to the XNAT discussion group, to which I never&lt;br /&gt;
received a reply:&lt;br /&gt;
&lt;br /&gt;
I am uploading some large fMRI datasets to XNAT Central. Each exam has about 10k images. I&lt;br /&gt;
have been anonymizing the data with DicomRemap and modifying the series descriptions via the .das file to describe&lt;br /&gt;
the fMRI task being performed for each series.&lt;br /&gt;
During the upload process, I get a proxy error:&lt;br /&gt;
&lt;br /&gt;
[[File:xnat-proxy-error.jpg]]&lt;br /&gt;
&lt;br /&gt;
But some time later my data shows&lt;br /&gt;
up in the prearchive, appears complete and intact, and I can move it to the project. However, when&lt;br /&gt;
I then select the subject from the project, I see a processing exception at the top of the page:&lt;br /&gt;
&lt;br /&gt;
[[File:process-error.jpg]]&lt;br /&gt;
&lt;br /&gt;
I am concerned that when we go to share the data after uploading everything, anyone who sees this processing&lt;br /&gt;
exception will assume the data is corrupt (even though it is complete) and won't use it. So I am curious:&lt;br /&gt;
1) Is there a way to suppress this error after upload?&lt;br /&gt;
2) Is it even feasible to upload such large datasets? The relative size, 300M compressed, is not large, but the image count is.&lt;br /&gt;
3) Are there alternative, better methods to upload?&lt;br /&gt;
&lt;br /&gt;
I looked at http://nrg.wikispaces.com/XNAT+Data+Management&lt;br /&gt;
&lt;br /&gt;
which describes an FTP upload process, but I don't see how to do this with XNAT Central.&lt;br /&gt;
&lt;br /&gt;
I also looked into using http://nrg.wikispaces.com/StoreXAR&lt;br /&gt;
and tried to organize my data and SESSION.xml file as described there, but this generated a connection error:&lt;br /&gt;
&lt;br /&gt;
Thu Oct 22 18:13:57 EDT 2009 manderson@http://central.xnat.org:8104/:java.net.ConnectException: Connection timed out&lt;br /&gt;
&lt;br /&gt;
though likely I didn't specify the XML file correctly. Is there a way to generate the XML automatically by scanning the organized data?&lt;br /&gt;
If not, it seems easier to just upload the compressed tar file, do something else for a while, and then move the data to the&lt;br /&gt;
project from the prearchive. This is relatively non-time-consuming, as XNAT does the work and displays the series descriptions that&lt;br /&gt;
I want; I don't need any custom variables.&lt;br /&gt;
&lt;br /&gt;
Thanks for any help/pointers. &lt;br /&gt;
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the top of the GUI.&lt;br /&gt;
* Select the project from the project list.&lt;br /&gt;
* From within the Project view, click the &amp;quot;Access&amp;quot; tab and set appropriate permissions.&lt;br /&gt;
* From within the Project view, select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config XML file to describe the spreadsheet to be built from the DICOM data. The root element is &amp;quot;Columns&amp;quot;, and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped DICOM tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of directories containing the source DICOM data (separated by spaces).&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the Study and Series InstanceUID tags are needed: (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve patient age&lt;br /&gt;
 // - (0010,0040) preserve patient sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 //- (0010,0040) preserve patient sex&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values. In some cases, some cells in the spreadsheet can be left blank: for a Patient-level remap, one value must be specified for each patient; if the spreadsheet contains multiple rows per patient, the column need only be filled in one row for each patient. Similarly, for a Study-level remap, the value need only be filled in once per study. If you don't fill in a required cell, the remapper will complain; likewise, if you give a Patient-level remap column multiple conflicting values for a single patient, the remapper will complain.&lt;br /&gt;
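These fill rules can be checked mechanically before running the remapper. The sketch below is illustrative only (the key and column names are hypothetical; adapt them to your remap-config): it verifies that each patient-level column has exactly one non-blank value per patient.&lt;br /&gt;

```python
import csv
from collections import defaultdict

def check_patient_columns(csv_path, patient_key, patient_columns):
    """Verify each patient-level remap column has exactly one distinct
    non-blank value per patient. Returns a list of problem descriptions."""
    # column -> patient -> set of non-blank values seen
    values = defaultdict(lambda: defaultdict(set))
    patients = set()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            patient = row[patient_key]
            patients.add(patient)
            for col in patient_columns:
                cell = (row.get(col) or "").strip()
                if cell:
                    values[col][patient].add(cell)
    problems = []
    for col in patient_columns:
        for patient, vals in sorted(values[col].items()):
            if len(vals) > 1:
                problems.append(f"{col}: multiple values for {patient}: {sorted(vals)}")
        # A patient with no value at all is also an error.
        for patient in sorted(patients - set(values[col])):
            problems.append(f"{col}: no value for {patient}")
    return problems
```

An empty result means the spreadsheet satisfies the patient-level rules; each entry in a non-empty result names the offending column and patient.&lt;br /&gt;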
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* the remap config XML should be the same file used in 2.a,&lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d, and&lt;br /&gt;
* the list of directories is the same list of source directories used in 2.b.&lt;br /&gt;
* add an anonymization script (from 2.c) to be applied at this stage by using the -d option.&lt;br /&gt;
* the first time you use a script to generate new UIDs, you'll need a new UID root;&lt;br /&gt;
** do this by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database -- it can send to a DICOM server. Below we use webservices to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data go from here into the database -- via an &amp;quot;admin&amp;quot; move to the db? How labor intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For next steps using web services, use curl or XNATRestClient ('''download XNATRestClient''' in xnat_tools.zip from http://nrg.wikispaces.com/XNAT+REST+API+Usage)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with the server and create a new session; use the response as a session ID ($JSessionID) in subsequent queries:&lt;br /&gt;
 curl -X POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password&lt;br /&gt;
or, use the XNATRestClient:&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote /REST/projects/$ProjectID/subjects/s0001&lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
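The subject-creation loop mentioned above can be sketched as follows. This is illustrative only: the command template mirrors the single-subject example in step 4, and the shell variables are left unexpanded so the generated commands can be reviewed before being run.&lt;br /&gt;

```python
def subject_creation_commands(project_id, count):
    """Return one XNATRestClient PUT command per subject ID s0001..s{count}.

    The $XNE_Svr and $JSessionID placeholders are left for the shell to
    expand when the commands are actually executed."""
    template = ("XNATRestClient -host $XNE_Svr -user_session $JSessionID "
                "-m PUT -remote /REST/projects/{project}/subjects/s{n:04d}")
    return [template.format(project=project_id, n=i)
            for i in range(1, count + 1)]

for cmd in subject_creation_commands("IGT_GLIOMA", 5):
    print(cmd)
```

Piping the printed commands to a shell (with $XNE_Svr and $JSessionID set) creates all the subjects in one pass.&lt;br /&gt;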
&lt;br /&gt;
'''4a. Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' No demographics are applied to each subject by default. To edit the demographics (like gender or handedness) of a subject '''already created''', use XML path shortcuts:&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (Append XML path shortcuts and separate each by an ''&amp;amp;''. Note that querystring parameters must be separated from the actual URI by a ?):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML Path shortcuts that can be specified on commandline for projects, subject, experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an XML file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./${ProjectID}_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create URIs for scans and reconstructions, and upload them.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nfti&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T2_0344/0344.nfti&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
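The per-file upload commands above can be generated for an entire scan directory rather than typed one at a time. This is a sketch of my own (the directory layout and variable names match the examples above; append the content parameter to the URL as shown there if you need it):&lt;br /&gt;

```python
import os

def scan_upload_commands(scan, scan_dir):
    """Return one XNATRestClient upload command per .dcm file in scan_dir.

    Shell variables ($XNE_Svr, $JSessionID, $ProjectID, $SubjectID,
    $ExperimentID) are left unexpanded for the executing shell."""
    template = ("XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT "
                "-remote \"/REST/projects/$ProjectID/subjects/$SubjectID/"
                "experiments/$ExperimentID/scans/{scan}/files/{name}?format=DICOM\" "
                "-local {path}")
    return [template.format(scan=scan, name=name,
                            path=os.path.join(scan_dir, name))
            for name in sorted(os.listdir(scan_dir))
            if name.endswith(".dcm")]
```

Printing the result and piping it to a shell uploads every file of the scan in sequence.&lt;br /&gt;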
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44841</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44841"/>
		<updated>2009-11-10T21:02:05Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create Project in web GUI with ProjectID IGT_GLIOMA and create the custom variables that additionally describe the data. In this project, we&lt;br /&gt;
added variables for tumor size, location, description, grade, and patient sex and age.&lt;br /&gt;
* The value of these variables is manually entered &lt;br /&gt;
and displayed when a subject is selected. Custom variables cannot currently be used as search criteria to select a subset of the project.&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u manderson -p my-passwd -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -remote /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
* Anonymize all patient data. I tried the DicomRemapper:&lt;br /&gt;
 DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap /projects/igtcases/neuro/glioma_mrt/for_hussein/1802 -o /d/bigdaily/&lt;br /&gt;
but this fails if non-DICOM files are found among the DICOM data:&lt;br /&gt;
 /projects/igtcases/neuro/glioma_mrt/for_hussein/1802/tumor.xml: not DICOM data&lt;br /&gt;
This will often be the case for us at BWH, so this tool is not currently viable either. Instead, I used the interactive DicomBrowser tool, which requires&lt;br /&gt;
editing a DICOM descriptor .das file for each subject, and 20 mouse-clicks to specify parameters and perform the anonymization.&lt;br /&gt;
* Upload the anonymized data. This requires making a compressed tar file of the anonymized data and running the upload process from&lt;br /&gt;
XNAT Central.&lt;br /&gt;
* The entire process of manually uploading and anonymizing a case takes between six and ten minutes.&lt;br /&gt;
* Several types of errors were encountered:&lt;br /&gt;
** Data entry at scan time. One subject's age was listed as 100 years old, though the subject was born in 1984.&lt;br /&gt;
** Spreadsheet errors. All subject ages were incorrect, so I used the age at the date of the scan. One subject was listed as a 20-year-old male, but the data is for a 34-year-old female.&lt;br /&gt;
** It is certainly possible that I made errors transcribing values from the spreadsheet.&lt;br /&gt;
&lt;br /&gt;
==Notes on derived data upload and FMRI data upload.==&lt;br /&gt;
&lt;br /&gt;
* Case 1 - Older non-DICOM Genesis data upload. This is done using Dave Clunie's useful dicom3tools kit. Genesis data gets converted to DICOM&lt;br /&gt;
from a directory containing only pre-DICOM, Genesis-format images (named I.001 - I.xxx) with the command:&lt;br /&gt;
 ls -1 I.* | awk '{printf(&amp;quot;gentodc %s %d.dcm\n&amp;quot;,$1,NR)}' | sh&lt;br /&gt;
This creates a series of DICOM images. However, gentodc creates a unique SeriesInstanceUID for each image:&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15832.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15833.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15834.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15835.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
&lt;br /&gt;
When the resulting data is&lt;br /&gt;
uploaded to XNAT, each image is treated as a separate series. I uploaded 300 images (all part of a single study) before I realized this problem.&lt;br /&gt;
This fills the prearchive and takes a long time to clean up. As a solution, a second step is run to modify the DICOM images. This is done&lt;br /&gt;
by applying the XNAT remapper to the newly formed DICOM data:&lt;br /&gt;
 /projects/mark/xnat/DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap -d ../tmp/sp.das -o /d/bigdaily/mark/tmp/ .&lt;br /&gt;
&lt;br /&gt;
Where file sp.das contains:&lt;br /&gt;
 (0020,000e) := &amp;quot;0.0.0.0.3.1779.1.1257878261.15832.2200373056&amp;quot;&lt;br /&gt;
which sets all images to the same SeriesInstanceUID. Currently, this remapping must be performed on a series-by-series basis. Other&lt;br /&gt;
anonymization is performed during this step as well.&lt;br /&gt;
&lt;br /&gt;
* Case 2 - Upload of segmentations derived from DICOM data. This can be even more problematic, depending on the situation. I am currently&lt;br /&gt;
uploading prostate segmentations combined with the original DICOM data. Each subject has several MRI series and an expert labelmap&lt;br /&gt;
segmentation that need to be uploaded as a single subject. The researcher's approach was to create a new series number for the segmentation,&lt;br /&gt;
reusing the series header information from the original data. When this is uploaded, the segmentation is treated as duplicate data by XNAT:&lt;br /&gt;
based on the header information, XNAT thinks the data already exists. The solution here is potentially more complicated. Not only does a&lt;br /&gt;
new SeriesInstanceUID need to be created, but it also appears that XNAT verifies that the SOPInstanceUID (the unique image identifier)&lt;br /&gt;
is in fact unique. Apparently, XNAT checks only the first image of a series for SOPInstanceUID uniqueness, since I was&lt;br /&gt;
able to apply the remapper to a series with a single .das file that set the same SOPInstanceUID on every image in the series.&lt;br /&gt;
Below are the SOPInstanceUID and SeriesInstanceUID that were applied to the segmented data:&lt;br /&gt;
 (0008,0018) := &amp;quot;1.2.840.113619.2.207.3596.11861984.22869.1219405353.999&amp;quot;&lt;br /&gt;
 (0020,000e) := &amp;quot;1.2.840.113619.2.207.3596.11861984.25740.1219404288.477&amp;quot;&lt;br /&gt;
Here is the segmented data in xnat:&lt;br /&gt;
&lt;br /&gt;
[[File:seg.png]] [ Labelled colon segmentation on xnat central]&lt;br /&gt;
&lt;br /&gt;
Obviously, we will need a more elegant solution to upload data derived in this fashion in the future.&lt;br /&gt;
* Case 3 - FMRI data upload. Below is the essence of an email that I sent to the XNAT discussion group, to which I never&lt;br /&gt;
received a reply:&lt;br /&gt;
&lt;br /&gt;
I am uploading some large fMRI datasets to XNAT Central. Each exam has about 10k images. I&lt;br /&gt;
have been anonymizing the data with DicomRemap and modifying the series descriptions via the .das file to describe&lt;br /&gt;
the fMRI task being performed for each series.&lt;br /&gt;
During the upload process, I get a proxy error:&lt;br /&gt;
&lt;br /&gt;
[[File:xnat-proxy-error.jpg]]&lt;br /&gt;
&lt;br /&gt;
But some time later my data shows&lt;br /&gt;
up in the prearchive, appears complete and intact, and I can move it to the project. However, when&lt;br /&gt;
I then select the subject from the project, I see a processing exception at the top of the page:&lt;br /&gt;
&lt;br /&gt;
[[File:process-error.jpg]]&lt;br /&gt;
&lt;br /&gt;
I am concerned that when we go to share the data after uploading everything, anyone who sees this processing&lt;br /&gt;
exception will assume the data is corrupt (even though it is complete) and won't use it. So I am curious:&lt;br /&gt;
1) Is there a way to suppress this error after upload?&lt;br /&gt;
2) Is it even feasible to upload such large datasets? The relative size, 300M compressed, is not large, but the image count is.&lt;br /&gt;
3) Are there alternative, better methods to upload?&lt;br /&gt;
&lt;br /&gt;
I looked at http://nrg.wikispaces.com/XNAT+Data+Management&lt;br /&gt;
&lt;br /&gt;
which describes an FTP upload process, but I don't see how to do this with XNAT Central.&lt;br /&gt;
&lt;br /&gt;
I also looked into using http://nrg.wikispaces.com/StoreXAR&lt;br /&gt;
and tried to organize my data and SESSION.xml file as described there, but this generated a connection error:&lt;br /&gt;
&lt;br /&gt;
Thu Oct 22 18:13:57 EDT 2009 manderson@http://central.xnat.org:8104/:java.net.ConnectException: Connection timed out&lt;br /&gt;
&lt;br /&gt;
and likely I didn't specify the XML file correctly. Is there a way to generate the XML automatically by scanning the organized data?&lt;br /&gt;
If not, it seems easier to just upload the compressed tar file, do something else for a while, and then move the data to the&lt;br /&gt;
project from the prearchive. This is relatively non-time-consuming, as XNAT does the work and displays the series descriptions that&lt;br /&gt;
I want; I don't need any custom variables.&lt;br /&gt;
&lt;br /&gt;
Thanks for any help/pointers. &lt;br /&gt;
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the GUI top &lt;br /&gt;
* Select the project from the project list. &lt;br /&gt;
* From within the Project view, click the &amp;quot;Access&amp;quot; tab and set appropriate permissions&lt;br /&gt;
* From within the Project view, Select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config XML file describing the spreadsheet to be built from the DICOM data. The root element is &amp;quot;Columns&amp;quot; and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped DICOM tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of directories containing the source DICOM data, separated by spaces.&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the study and series InstanceUID tags are needed: (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve pt age&lt;br /&gt;
 // - (0010,0040) preserve pt sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 - (0010,0040)&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values. In some cases, some cells in the spreadsheet can be left blank: for a Patient-level remap, one value must be specified for each patient; if the spreadsheet contains multiple rows for each patient, the column needs only be filled in one row for each patient. Similarly, for a Study-level remap, the value need only be filled once. If you don't fill in a required cell, the remapper will complain. If you give, for example, a Patient-level remap column multiple values for a single patient, the remapper will complain.&lt;br /&gt;
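&lt;br /&gt;
For illustration, a tiny remap.csv might look like the sketch below (the exact identifier columns DicomSummarize emits may differ, and the values are invented). Both rows belong to the same patient, so the Patient-level remap columns need to be filled in only one of them:&lt;br /&gt;
 PatientID,StudyID,SeriesNumber,Anon Patient Name,Anon PatientID&lt;br /&gt;
 1802,MR1,2,case143,case143&lt;br /&gt;
 1802,MR1,3,,&lt;br /&gt;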
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* The remap config XML should be the same file used in 2.a. &lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d. &lt;br /&gt;
* The list of directories is the same list of source directories used in 2.b.&lt;br /&gt;
* Add an anonymization script to be applied at this stage by using the -d option.&lt;br /&gt;
* The first time you use a script to generate new UIDs, you'll need a new UID root; &lt;br /&gt;
** do this by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line. &lt;br /&gt;
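Combining these options, a complete invocation could look like the following dry-run sketch (anon.das, the output path, and the source directories are placeholders, not files from this project):&lt;br /&gt;

```shell
# Dry-run sketch combining the DicomRemapper options listed above.
# "anon.das", "/tmp/remapped" and the source directories are placeholders.
cmd='DicomRemapper -c remap-config-file.xml -d anon.das -s http://nrg.wustl.edu/UIDGen -o /tmp/remapped -v remap.csv /data/src1 /data/src2'
echo "$cmd"    # inspect the command; run it directly once the inputs exist
```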
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database -- it can only send to a DICOM server. Below we use web services to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data go from here into the database -- via an &amp;quot;admin&amp;quot; move to the db? How labor-intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the next steps using web services, use curl or XNATRestClient ('''download XNATRestClient''' in xnat_tools.zip from http://nrg.wikispaces.com/XNAT+REST+API+Usage)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with the server and create a new session; use the response as a session ID ($JSessionID) in subsequent queries&lt;br /&gt;
 curl -d POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password &lt;br /&gt;
 or, use the XNATRestClient&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
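To reuse the returned session ID in later steps, capture it into $JSessionID. A sketch, with a stub standing in for a live server (the ID shown is an example value, not a real session):&lt;br /&gt;

```shell
# A stub stands in for XNATRestClient so the capture pattern can be shown
# without a live server; the ID below is an example value only.
XNATRestClient() { echo 94202E5B23C1672FDF1B2D1A40173F21; }
XNE_Svr=https://central.xnat.org; XNE_UserName=user; XNE_Password=pass
JSessionID=$(XNATRestClient -host "$XNE_Svr" -u "$XNE_UserName" -p "$XNE_Password" -m POST -remote /REST/JSESSION)
echo "JSessionID=$JSessionID"
```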
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
  XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote /REST/projects/$ProjectID/subjects/s0001 &lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
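A minimal sketch of such a script, as a dry run that only prints the commands (the server, session ID, and subject count are placeholders):&lt;br /&gt;

```shell
# Dry run: build one subject-creation command per subject and print them.
# XNE_Svr, JSessionID, ProjectID and the subject count are placeholders.
XNE_Svr=https://central.xnat.org
JSessionID=EXAMPLE_SESSION_ID
ProjectID=IGT_GLIOMA
cmds=""
for i in 1 2 3; do
    subject=$(printf 's%04d' "$i")   # s0001, s0002, s0003
    cmds="${cmds}XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote /REST/projects/$ProjectID/subjects/$subject
"
done
printf '%s' "$cmds"
```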
&lt;br /&gt;
'''4a. Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' No demographics are applied to each subject by default. To edit the demographics (like gender or handedness) of a subject '''already created''', use XML path shortcuts:&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (append the XML path shortcuts to the URI and separate each with an ''&amp;amp;''. Note that querystring parameters must be separated from the actual URI by a ?):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML Path shortcuts that can be specified on commandline for projects, subject, experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an XML file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./$ProjectID_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create uris for scans, reconstructions and upload them.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc.) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nfti&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0344/0344.nfti&lt;br /&gt;
 &lt;br /&gt;
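When a scan has many files, the per-file PUT can be looped. This dry-run sketch builds the commands from a throwaway sample directory (all names and IDs are invented; a content=... parameter can be appended to the query string as in the examples above):&lt;br /&gt;

```shell
# Dry run: emit one upload command per DICOM file in a scan directory.
# The sample directory, file names and IDs are invented for illustration.
XNE_Svr=https://central.xnat.org; JSessionID=EXAMPLE_SESSION_ID
ProjectID=IGT_GLIOMA; SubjectID=s0001; ExperimentID=MRExperiment
ScanDir=$(mktemp -d)   # stands in for /data/subject1/session1/RAW/SCAN1
touch "$ScanDir/1232132.dcm" "$ScanDir/1232133.dcm"
out=""
for f in "$ScanDir"/*.dcm; do
    name=$(basename "$f")
    remote="/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/$name?format=DICOM"
    out="${out}XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote \"$remote\" -local $f
"
done
printf '%s' "$out"
rm -rf "$ScanDir"
```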
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44840</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44840"/>
		<updated>2009-11-10T20:39:09Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create Project in web GUI with ProjectID IGT_GLIOMA and create the custom variables that additionally describe the data. In this project, we&lt;br /&gt;
added variables for tumor size, location, description, and grade, and patient sex and age. &lt;br /&gt;
* The value of these variables is manually entered &lt;br /&gt;
and displayed when a subject is selected. Custom variables cannot currently be used as search criteria to select a subset of the project.&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u manderson -p my-passwd -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -dest /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
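For example, a dry-run loop of this shape (the case list is invented and $XNE_Svr / $JSessionID are left as literal placeholders; the commands are printed rather than executed):&lt;br /&gt;

```shell
# Dry run: print one subject-creation command per case; $XNE_Svr and
# $JSessionID are left literal, and the case list is invented.
out=""
for c in case3 case4 case5; do
    out="${out}XNATRestClient -host \$XNE_Svr -user_session \$JSessionID -m PUT -dest /REST/projects/IGT_GLIOMA/subjects/$c
"
done
printf '%s' "$out"
```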
* Anonymize all patient data. I tried the DicomRemapper:&lt;br /&gt;
 DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap /projects/igtcases/neuro/glioma_mrt/for_hussein/1802 -o /d/bigdaily/&lt;br /&gt;
but this fails if non-DICOM files are found amongst the DICOM data:&lt;br /&gt;
 /projects/igtcases/neuro/glioma_mrt/for_hussein/1802/tumor.xml: not DICOM data&lt;br /&gt;
This will often be the case for us at BWH, so this too is not currently viable. Instead, I used the interactive DicomBrowser tool. This requires&lt;br /&gt;
editing a DICOM descriptor .das file for each subject, and 20 mouse-clicks to specify parameters and do the anonymizing.&lt;br /&gt;
* Upload the anonymized data. This requires making a compressed tar file of the anonymized data and running the upload process from &lt;br /&gt;
XNAT Central. &lt;br /&gt;
* The entire process of manually uploading and anonymizing a case takes between six and ten minutes.&lt;br /&gt;
* Several types of errors were encountered:&lt;br /&gt;
** Data entry at scan time. Subject age for one subject was listed as 100 years old, though the subject was born in 1984.&lt;br /&gt;
** Spreadsheet errors. All subject ages were incorrect; I used the age at the date of the scan. One subject was listed as a 20-year-old male, but the data is for a 34-year-old female.&lt;br /&gt;
** It is certainly possible that I made errors transcribing values from the spreadsheet.&lt;br /&gt;
&lt;br /&gt;
==Notes on derived data upload and FMRI data upload.==&lt;br /&gt;
&lt;br /&gt;
* Case 1 - Older non-DICOM Genesis data upload. This is done using Dave Clunie's useful dicom3tools kit. Genesis data gets converted to DICOM&lt;br /&gt;
from a directory containing only pre-DICOM, Genesis-format images (named I.001 - I.xxx) with the command:&lt;br /&gt;
 ls -1 I.* | awk '{printf(&amp;quot;gentodc %s %d.dcm\n&amp;quot;,$1,NR)}' | sh&lt;br /&gt;
This will create a series of DICOM images. However, gentodc creates a unique SeriesInstanceUID for each image:&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15832.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15833.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15834.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15835.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
&lt;br /&gt;
When the resulting data is &lt;br /&gt;
uploaded to XNAT, each image is treated as a separate series. I uploaded 300 images (all part of a single study) before I realized this problem.&lt;br /&gt;
This fills the prearchive and takes a long time to clean up. As a solution, a second step is run to modify the DICOM images. This is done&lt;br /&gt;
by applying the XNAT remapper to the newly formed DICOM data:&lt;br /&gt;
 /projects/mark/xnat/DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap -d ../tmp/sp.das -o /d/bigdaily/mark/tmp/ .&lt;br /&gt;
&lt;br /&gt;
Where file sp.das contains:&lt;br /&gt;
 (0020,000e) := &amp;quot;0.0.0.0.3.1779.1.1257878261.15832.2200373056&amp;quot;&lt;br /&gt;
which sets all images to the same SeriesInstanceUID. This remapping currently needs to be performed on a series-by-series basis. Other &lt;br /&gt;
anonymizing is performed during this step as well.&lt;br /&gt;
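A dry-run sketch of scripting that per-series step: write a one-line .das per series directory and print the corresponding DicomRemap command (the directory names and the UID suffix scheme below are invented for illustration):&lt;br /&gt;

```shell
# Dry run: for each converted series directory, write a one-line .das that
# pins SeriesInstanceUID, then print the DicomRemap command. The directory
# names and the UID suffix scheme are invented.
work=$(mktemp -d)
mkdir -p "$work/series_15832" "$work/series_15833"
out=""
for series in "$work"/series_*/; do
    n=$(basename "$series" | sed 's/^series_//')
    das="$work/sp_$n.das"
    printf '(0020,000e) := "0.0.0.0.3.1779.1.1257878261.%s.2200373056"\n' "$n" > "$das"
    out="${out}DicomRemap -d $das -o /d/bigdaily/mark/tmp/ $series
"
done
printf '%s' "$out"
rm -rf "$work"
```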
&lt;br /&gt;
* Case 2 - Upload of segmentations derived from DICOM data. This is even more problematic, depending on the situation. I am currently &lt;br /&gt;
uploading prostate segmentations combined with the original DICOM data. Each subject has several MRI series and an expert labelmap&lt;br /&gt;
segmentation that need to be uploaded as a single subject. The researcher's approach was to create a new series number for the segmentation,&lt;br /&gt;
using the series header information from the original data. When this is uploaded, the segmentation is treated as duplicate data by XNAT.&lt;br /&gt;
XNAT thinks the data already exists, based on the header information. The solution here is potentially more complicated. Not only does a&lt;br /&gt;
new SeriesInstanceUID need to be created, but it also appears that XNAT verifies that the SOPInstanceUID (i.e., the unique image identifier)&lt;br /&gt;
is in fact unique. Apparently, XNAT only checks the first image of a series for SOPInstanceUID uniqueness, as I was&lt;br /&gt;
able to apply the remapper to a series with a single .das file that set the same SOPInstanceUID on every image in the series. &lt;br /&gt;
Below are the SOPInstanceUID and SeriesInstanceUID that were applied to the segmented data:&lt;br /&gt;
 (0008,0018) := &amp;quot;1.2.840.113619.2.207.3596.11861984.22869.1219405353.999&amp;quot;&lt;br /&gt;
 (0020,000e) := &amp;quot;1.2.840.113619.2.207.3596.11861984.25740.1219404288.477&amp;quot;&lt;br /&gt;
Here is the segmented data in xnat:&lt;br /&gt;
&lt;br /&gt;
[[File:seg.png]] [ Labelled colon segmentation]&lt;br /&gt;
&lt;br /&gt;
Obviously, we will need a more elegant solution for uploading data derived in this fashion.&lt;br /&gt;
* Case 3 - fMRI data upload - Below is the essence of an email that I sent to the XNAT discussion group, to which I never&lt;br /&gt;
received a reply:&lt;br /&gt;
&lt;br /&gt;
I am uploading some large fMRI datasets to XNAT Central. Each exam has about 10,000 images. I&lt;br /&gt;
have been anonymizing the data with DicomRemap and modifying the series descriptions via the .das file to describe&lt;br /&gt;
the fMRI task being performed for each series.&lt;br /&gt;
During the upload process, I get a proxy error:&lt;br /&gt;
&lt;br /&gt;
[[File:xnat-proxy-error.jpg]]&lt;br /&gt;
&lt;br /&gt;
But some time later my data shows&lt;br /&gt;
up in the prearchive, appears complete and intact, and I can move it to the project. However, when&lt;br /&gt;
I then select the subject from the project, I see a processing exception at the top of the page:&lt;br /&gt;
&lt;br /&gt;
[[File:process-error.jpg]]&lt;br /&gt;
&lt;br /&gt;
I am concerned that when we go to share the data after uploading everything, anyone who sees this processing&lt;br /&gt;
exception will assume the data is corrupt (when it is in fact complete) and won't use it. So I am curious:&lt;br /&gt;
1) Is there a way to suppress this error after upload?&lt;br /&gt;
2) Is it even feasible to upload such large datasets? The size (about 300 MB compressed) is not large, but the image count is.&lt;br /&gt;
3) Are there better alternative methods to upload?&lt;br /&gt;
&lt;br /&gt;
I looked at http://nrg.wikispaces.com/XNAT+Data+Management&lt;br /&gt;
&lt;br /&gt;
which describes an FTP upload process, but I don't see how to do this to XNAT Central.&lt;br /&gt;
&lt;br /&gt;
I also looked into using http://nrg.wikispaces.com/StoreXAR&lt;br /&gt;
and tried to organize my data and SESSION.xml file as described there, but this generated a connection error:&lt;br /&gt;
&lt;br /&gt;
Thu Oct 22 18:13:57 EDT 2009 manderson@http://central.xnat.org:8104/:java.net.ConnectException: Connection timed out&lt;br /&gt;
&lt;br /&gt;
and likely I didn't specify the XML file correctly. Is there a way to generate the XML automatically by scanning the organized data?&lt;br /&gt;
If not, it seems easier to just upload the compressed tar file, do something else for a while, and then move the data to the&lt;br /&gt;
project from the prearchive. This is relatively non-time-consuming, as XNAT does the work and displays the series descriptions that&lt;br /&gt;
I want; I don't need any custom variables.&lt;br /&gt;
&lt;br /&gt;
Thanks for any help/pointers. &lt;br /&gt;
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the GUI top &lt;br /&gt;
* Select the project from the project list. &lt;br /&gt;
* From within the Project view, click the &amp;quot;Access&amp;quot; tab and set appropriate permissions&lt;br /&gt;
* From within the Project view, Select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config XML file describing the spreadsheet to be built from the DICOM data. The root element is &amp;quot;Columns&amp;quot; and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped DICOM tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of directories containing the source DICOM data, separated by spaces.&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the study and series InstanceUID tags are needed: (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve pt age&lt;br /&gt;
 // - (0010,0040) preserve pt sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 - (0010,0040)&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values, though some cells may be left blank: for a Patient-level remap, exactly one value must be specified per patient, so if the spreadsheet contains multiple rows for a patient, the column need only be filled in one of those rows. Similarly, a Study-level remap value need only be filled in once per study. If a required cell is left empty, or if a Patient-level remap column is given multiple distinct values for a single patient, the remapper will report an error.&lt;br /&gt;
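To illustrate the fill rules above, here is a hypothetical remap.csv (the column names are assumptions modeled on the remap columns defined in 2.a; DicomSummarize generates the real header). The Patient-level values appear on only one row per patient:

```shell
# Write a hypothetical remap.csv illustrating the fill rules above.
# Column names are assumptions; DicomSummarize generates the real header.
printf '%s\n' \
  'Patient ID,Study,Series,Anon Patient Name,Anon PatientID' \
  'pt1,st1,1,case143,case143' \
  'pt1,st1,2,,' \
  'pt1,st2,1,,' \
  > remap.csv
# The Patient-level columns are filled only on the first row for pt1;
# the remapper applies that value to every row for the same patient.
cat remap.csv
```

The blank cells on the later pt1 rows are legal because the remapper resolves a Patient-level value once per patient, not once per row.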
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* The remap config XML should be the same file used in 2.a. &lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d. &lt;br /&gt;
* The list of directories is the same list of source directories given in 2.b. &lt;br /&gt;
* An anonymization script (2.c) can be applied at this stage with the -d option.&lt;br /&gt;
* The first time you use a script to generate new UIDs, you'll need a new UID root; &lt;br /&gt;
** do this by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line. &lt;br /&gt;
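Taken together, steps 2.b-2.e look roughly like the sketch below. This is a dry run that only prints the command lines, since the DicomBrowser CLI tools may not be installed; the directory names and anon.das are placeholders:

```shell
# Dry-run sketch of the 2.b-2.e pipeline: prints the commands instead of
# running them. CONFIG/CSV names follow the text; SRC_DIRS, OUT and
# anon.das are placeholders.
CONFIG=remap-config-file.xml
CSV=remap.csv
OUT=/tmp/remapped
SRC_DIRS="dir1 dir2"

# 2.b: summarize the source DICOM data into a spreadsheet
echo "DicomSummarize -c $CONFIG -v $CSV $SRC_DIRS"
# 2.d: edit $CSV by hand at this point
# 2.e: remap, applying the anonymization script from 2.c with -d
echo "DicomRemapper -c $CONFIG -o $OUT -v $CSV -d anon.das $SRC_DIRS"
```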
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database; it can only send to a DICOM server. Below we use web services to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data go from here into the database -- via an &amp;quot;admin&amp;quot; move from the prearchive to the db? How labor-intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the next steps using web services, use curl or XNATRestClient ('''download XNATRestClient''' in xnat_tools.zip from http://nrg.wikispaces.com/XNAT+REST+API+Usage)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with the server to create a new session; use the response body as a session ID ($JSessionID) in subsequent queries&lt;br /&gt;
 curl -X POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password &lt;br /&gt;
or, use the XNATRestClient:&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
  XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote /REST/projects/$ProjectID/subjects/s0001 &lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
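As a sketch of such a script, the loop below builds the creation command for a handful of subjects. It only echoes the XNATRestClient invocations; a live server, a real $XNE_Svr, and a real $JSessionID would be needed to actually run them:

```shell
# Dry-run sketch: build the subject-creation commands for s0001..s0005.
# Echoes rather than executes, since a live XNAT server is required.
XNE_Svr="http://example.org"   # placeholder server
ProjectID="IGT_GLIOMA"
for i in $(seq -w 1 5); do
  echo "XNATRestClient -host $XNE_Svr -user_session \$JSessionID -m PUT" \
       "-remote /REST/projects/$ProjectID/subjects/s000$i"
done
```

Replacing the echo with a direct invocation (and extending the sequence) turns this into the bulk-creation script the text describes.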
&lt;br /&gt;
'''4a. Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' No demographics are applied to a subject by default. To edit the demographics (like gender or handedness) of a subject '''already created''', use XML Path shortcuts:&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (append the XML path shortcuts to the URI, separating the querystring from the URI with a ? and the individual shortcuts from each other with an ''&amp;amp;''):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML Path shortcuts that can be specified on the command line for projects, subjects, and experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an XML file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./${ProjectID}_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
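Since the three calls above follow a single pattern, they can be generated in a loop. The sketch below only echoes the commands; the experiment-name-to-datatype pairs are taken from the examples above, and everything else is a placeholder:

```shell
# Dry-run sketch: generate the three experiment-creation commands above
# from one pattern. Nothing is sent; values are placeholders.
XNE_Svr="http://example.org"
base="/REST/projects/\$ProjectID/subjects/\$SubjectID/experiments"
for pair in "MRExperiment:mrSessionData" "CTExperiment1:ctSessionData" \
            "PETExperiment1:petSessionData"; do
  exp=${pair%%:*}   # experiment name before the colon
  dt=${pair##*:}    # xnat datatype after the colon
  echo "XNATRestClient -host $XNE_Svr -user_session \$JSessionID -m PUT" \
       "-remote \"$base/$exp?xnat:$dt/date=01/02/09\""
done
```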
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create uris for scans, reconstructions and upload them.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nfti&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T2_0344/0344.nfti&lt;br /&gt;
 &lt;br /&gt;
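Uploading a scan file by file, as above, is easy to script as a loop over a local directory. The sketch below only echoes the upload commands; all paths and identifiers are placeholders (a content=T1_RAW parameter would be appended to the querystring with an ampersand, as in the examples above):

```shell
# Dry-run sketch: emit one XNATRestClient upload command per DICOM file in
# a local scan directory, mirroring the per-file PUTs above. All values
# are placeholders; nothing is actually sent.
XNE_Svr="http://example.org"
URI="/REST/projects/PROJ/subjects/SUBJ/experiments/EXP/scans/SCAN1"
SCAN_DIR=$(mktemp -d)
touch "$SCAN_DIR/1232132.dcm" "$SCAN_DIR/1232133.dcm"
for f in "$SCAN_DIR"/*.dcm; do
  name=$(basename "$f")
  echo "XNATRestClient -host $XNE_Svr -user_session \$JSessionID -m PUT" \
       "-remote \"$URI/files/$name?format=DICOM\" -local $f"
done
```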
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=File:Xnat-proxy-error.jpg&amp;diff=44839</id>
		<title>File:Xnat-proxy-error.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=File:Xnat-proxy-error.jpg&amp;diff=44839"/>
		<updated>2009-11-10T20:32:35Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44838</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44838"/>
		<updated>2009-11-10T20:32:21Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create a Project in the web GUI with ProjectID IGT_GLIOMA and create the custom variables that additionally describe the data. In this project, we&lt;br /&gt;
added variables for tumor size, location, description, and grade, and patient sex and age. &lt;br /&gt;
* The value of these variables is manually entered &lt;br /&gt;
and displayed when a subject is selected. Custom variables cannot currently be used as search criteria to select a subset of the project.&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u manderson -p my-passwd -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -remote /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
* Anonymize all patient data. I tried the DicomRemapper:&lt;br /&gt;
 DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap /projects/igtcases/neuro/glioma_mrt/for_hussein/1802 -o /d/bigdaily/&lt;br /&gt;
but this fails if non-dicom files are found amongst the DICOM data:&lt;br /&gt;
 /projects/igtcases/neuro/glioma_mrt/for_hussein/1802/tumor.xml: not DICOM data&lt;br /&gt;
this will often be the case for us at BWH, so this too is not currently viable. Instead, I used the interactive DicomBrowser tool. This requires&lt;br /&gt;
editing a DICOM descriptor .das file for each subject, and 20 mouse clicks to specify parameters and do the anonymizing.&lt;br /&gt;
* Upload the anonymized data. This requires making a compressed tar file of the anonymized data and running the upload process from &lt;br /&gt;
XNAT Central. &lt;br /&gt;
* The entire process of manually uploading and anonymizing a case takes between six and ten minutes.&lt;br /&gt;
* Several types of errors were encountered:&lt;br /&gt;
** Data entry at scan time. The age of one subject was listed as 100 years old, though the subject was born in 1984.&lt;br /&gt;
** Spreadsheet errors. All subject ages were incorrect; I used the age at the date of the scan. One subject was listed as a 20-year-old male, but the data is for a 34-year-old female.&lt;br /&gt;
** It is certainly possible that I made errors transcribing values from the spreadsheet.&lt;br /&gt;
&lt;br /&gt;
==Notes on derived data upload and FMRI data upload.==&lt;br /&gt;
&lt;br /&gt;
* Case 1 - Older non-DICOM Genesis data upload. This is done using Dave Clunie's useful dicom3tools kit. Genesis data gets converted to DICOM&lt;br /&gt;
from a directory containing only pre-DICOM, Genesis-format images (named I.001 - I.xxx) with the command:&lt;br /&gt;
 ls -1 I.* | awk '{printf(&amp;quot;gentodc %s %d.dcm\n&amp;quot;,$1,NR)}' | sh&lt;br /&gt;
This will create a series of DICOM images. However, gentodc creates a unique SeriesInstanceUID for each image:&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15832.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15833.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15834.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15835.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
&lt;br /&gt;
When the resulting data is &lt;br /&gt;
uploaded to xnat, each image is treated as a series. I uploaded 300 images (all part of a single study) before I realized this problem.&lt;br /&gt;
This fills the prearchive and takes a long time to clean up. As a solution, a second step is run to modify the dicom images. This is done&lt;br /&gt;
by applying the xnat remapper to the newly formed dicom data:&lt;br /&gt;
 /projects/mark/xnat/DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap -d ../tmp/sp.das -o /d/bigdaily/mark/tmp/ .&lt;br /&gt;
&lt;br /&gt;
Where file sp.das contains:&lt;br /&gt;
 (0020,000e) := &amp;quot;0.0.0.0.3.1779.1.1257878261.15832.2200373056&amp;quot;&lt;br /&gt;
which sets all images to the same SeriesInstanceUID. Currently, this remapping needs to be performed on a series-by-series basis. Other &lt;br /&gt;
anonymizing is performed during this step as well.&lt;br /&gt;
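The ls | awk pipeline shown earlier simply turns each Genesis filename into a gentodc invocation; dropping the final | sh lets you preview the generated commands safely. The I.* files below are empty placeholders standing in for real Genesis images:

```shell
# Preview the gentodc command lines generated by the awk pipeline, without
# piping them to sh. The I.* files here are empty placeholders.
d=$(mktemp -d); cd "$d"
touch I.001 I.002 I.003
ls -1 I.* | awk '{printf("gentodc %s %d.dcm\n",$1,NR)}'
```

Each output line pairs one source file with a sequentially numbered .dcm target, which is exactly what the original pipeline executes.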
&lt;br /&gt;
* Case 2 - Upload of segmentations derived from DICOM data. This is even more problematic, depending on the situation. I am currently &lt;br /&gt;
uploading prostate segmentations combined with the original DICOM data. Each subject has several MRI series and an expert labelmap&lt;br /&gt;
segmentation that need to be uploaded as a single subject. The researcher's approach was to create a new series number for the segmentation,&lt;br /&gt;
using the series header information from the original data. When this is uploaded, the segmentation is treated as duplicate data by xnat.&lt;br /&gt;
Xnat thinks the data already exists, based on the header information. The solution here is potentially more complicated. Not only does a&lt;br /&gt;
new SeriesInstanceUID need to be created, but it also appears that xnat verifies that the SOPInstanceUID (i.e. the unique image identifier)&lt;br /&gt;
is in fact unique. Apparently, xnat only checks the first image of a series for SOPInstanceUID uniqueness, since I was&lt;br /&gt;
able to apply the remapper to a series of data with a single .das file that set the same SOPInstanceUID on every image in the series. &lt;br /&gt;
Below are the SOPInstanceUID and SeriesInstanceUID that were applied to the segmented data:&lt;br /&gt;
 (0008,0018) := &amp;quot;1.2.840.113619.2.207.3596.11861984.22869.1219405353.999&amp;quot;&lt;br /&gt;
 (0020,000e) := &amp;quot;1.2.840.113619.2.207.3596.11861984.25740.1219404288.477&amp;quot;&lt;br /&gt;
Here is the segmented data in xnat:&lt;br /&gt;
&lt;br /&gt;
[[File:seg.png]] [ Labelled colon segmentation]&lt;br /&gt;
&lt;br /&gt;
Obviously, we will need a more elegant solution to upload data derived in this fashion in the future.&lt;br /&gt;
* Case 3 - FMRI data upload - Below is the essence of an email that I sent to the xnat discussion group and never&lt;br /&gt;
heard back about:&lt;br /&gt;
&lt;br /&gt;
I am uploading some large FMRI datasets to xnat central. Each exam has about 10k images. I&lt;br /&gt;
have been anonymizing the data with DicomRemap and modifying the series descriptions to describe&lt;br /&gt;
the fmri task being performed for each series via the .das file.&lt;br /&gt;
During the upload process, I get a proxy error:&lt;br /&gt;
[[File:xnat-proxy-error.jpg]]&lt;br /&gt;
But sometime later my data shows&lt;br /&gt;
up in the prearchive, appears complete and intact, and I can move it to the project. However, when&lt;br /&gt;
I then select the subject from the project, I see a processing exception at the top of the page:&lt;br /&gt;
[[File:process-error.jpg]]&lt;br /&gt;
I am concerned that when we go to share the data after uploading everything, someone who sees this processing&lt;br /&gt;
exception will figure the data is corrupt (whereas it is in fact complete) and won't use it. So I am curious:&lt;br /&gt;
1) is there a way to suppress this error after upload?&lt;br /&gt;
2) is it even feasible to upload such large datasets? The relative size, 300M compressed, is not large, but the image count is.&lt;br /&gt;
3) are there alternative, better methods to upload?&lt;br /&gt;
&lt;br /&gt;
I looked at http://nrg.wikispaces.com/XNAT+Data+Management&lt;br /&gt;
&lt;br /&gt;
which describes an FTP upload process, but I don't see how to do this to XNAT central&lt;br /&gt;
&lt;br /&gt;
I also looked into using http://nrg.wikispaces.com/StoreXAR&lt;br /&gt;
and tried to organize my data and SESSION.xml file as described there, but this generated a connection error:&lt;br /&gt;
&lt;br /&gt;
Thu Oct 22 18:13:57 EDT 2009 manderson@http://central.xnat.org:8104/:java.net.ConnectException: Connection timed out&lt;br /&gt;
&lt;br /&gt;
and likely I didn't specify the XML file correctly. Is there a way to generate the XML automatically by scanning the organized data?&lt;br /&gt;
If not, it seems easier to just upload the compressed tar file, do something else for a while, and then move the data to the&lt;br /&gt;
project from the prearchive. This is relatively non-time-consuming, as xnat does the work and displays the series descriptions that&lt;br /&gt;
I want; I don't need any custom variables.&lt;br /&gt;
&lt;br /&gt;
Thanks for any help/pointers. &lt;br /&gt;
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the GUI top &lt;br /&gt;
* Select the project from the project list. &lt;br /&gt;
* From within the Project view, Click &amp;quot;Access&amp;quot; tab and set permissions to be appropriate&lt;br /&gt;
* From within the Project view, Select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config xml file to describe the spreadsheet to be built from the DICOM data. Root element is &amp;quot;Columns&amp;quot; and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped DICOM tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of directories containing the source DICOM data, separated by spaces.&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the study and series InstanceUID tags are needed  (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve pt age&lt;br /&gt;
 // - (0010,0040) preserve pt sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 - (0010,0040)&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values, though some cells may be left blank: for a Patient-level remap, exactly one value must be specified per patient, so if the spreadsheet contains multiple rows for a patient, the column need only be filled in one of those rows. Similarly, a Study-level remap value need only be filled in once per study. If a required cell is left empty, or if a Patient-level remap column is given multiple distinct values for a single patient, the remapper will report an error.&lt;br /&gt;
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* The remap config XML should be the same file used in 2.a. &lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d. &lt;br /&gt;
* The list of directories is the same list of source directories given in 2.b. &lt;br /&gt;
* An anonymization script (2.c) can be applied at this stage with the -d option.&lt;br /&gt;
* The first time you use a script to generate new UIDs, you'll need a new UID root; &lt;br /&gt;
** do this by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database; it can only send to a DICOM server. Below we use web services to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data go from here into the database -- via an &amp;quot;admin&amp;quot; move from the prearchive to the db? How labor-intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the next steps using web services, use curl or XNATRestClient ('''download XNATRestClient''' in xnat_tools.zip from http://nrg.wikispaces.com/XNAT+REST+API+Usage)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with the server to create a new session; use the response body as a session ID ($JSessionID) in subsequent queries&lt;br /&gt;
 curl -X POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password &lt;br /&gt;
or, use the XNATRestClient:&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
  XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote /REST/projects/$ProjectID/subjects/s0001 &lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
&lt;br /&gt;
'''4a. Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' No demographics are applied to a subject by default. To edit the demographics (like gender or handedness) of a subject '''already created''', use XML Path shortcuts:&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (append the XML path shortcuts to the URI, separating the querystring from the URI with a ? and the individual shortcuts from each other with an ''&amp;amp;''):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML Path shortcuts that can be specified on the command line for projects, subjects, and experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an xml file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./${ProjectID}_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
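A file like this can be stamped out per subject with a heredoc; the sketch below uses only a subset of the demographic fields, and all values are placeholders:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: write a per-subject XML file like the one above.
# ProjectID and all demographic values are placeholders.
ProjectID=${ProjectID:-IGT_GLIOMA}
subj=s0002
xmlfile="./${ProjectID}_${subj}.xml"
cat > "$xmlfile" <<EOF
<xnat:Subject ID="$subj" project="$ProjectID" group="control" label="1" src="12"
    xmlns:xnat="http://nrg.wustl.edu/xnat"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <xnat:demographics xsi:type="xnat:demographicData">
    <xnat:dob>1990-09-08</xnat:dob>
    <xnat:gender>female</xnat:gender>
    <xnat:handedness>right</xnat:handedness>
  </xnat:demographics>
</xnat:Subject>
EOF
echo "wrote $xmlfile"
```

Note the braces in ${ProjectID}_s0002.xml: without them the shell would look for a variable named ProjectID_s0002.&lt;br /&gt;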
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create uris for scans, reconstructions and upload them.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nfti&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T2_0344/0344.nfti&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
[[Link title]]&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44837</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44837"/>
		<updated>2009-11-10T20:31:49Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create Project in web GUI with ProjectID IGT_GLIOMA and create the custom variables that additionally describe the data. In this project, we&lt;br /&gt;
added variables for tumor size, location, description, and grade, and for patient sex and age.&lt;br /&gt;
* The value of these variables is manually entered &lt;br /&gt;
and displayed when a subject is selected. Custom variables cannot currently be used as search criteria to select a subset of the project.&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u manderson -p my-passwd -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -remote /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
* Anonymize all patient data. I tried the DicomRemapper:&lt;br /&gt;
 DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap /projects/igtcases/neuro/glioma_mrt/for_hussein/1802 -o /d/bigdaily/&lt;br /&gt;
but this fails if non-DICOM files are found amongst the DICOM data:&lt;br /&gt;
 /projects/igtcases/neuro/glioma_mrt/for_hussein/1802/tumor.xml: not DICOM data&lt;br /&gt;
This will often be the case for us at BWH, so this tool is not currently viable. Instead, I used the interactive DicomBrowser tool. This requires&lt;br /&gt;
editing a DICOM descriptor .das file for each subject, and 20 mouse-clicks to specify parameters and do the anonymizing.&lt;br /&gt;
* Upload the anonymized data. This requires making a compressed tar file of the anonymized data and running the upload process from &lt;br /&gt;
xnat Central &lt;br /&gt;
* The entire process of manually uploading and anonymizing a case takes between six and ten minutes.&lt;br /&gt;
* Several types of errors were encountered:&lt;br /&gt;
** Data entry at scan time. Subject age for one subject was listed as 100 years old, though the subject was born in 1984.&lt;br /&gt;
** Spreadsheet errors. All subject ages were incorrect; I used the age at the date of scan. One subject was listed as a 20-year-old male, but the data is for a 34-year-old female.&lt;br /&gt;
** It is certainly possible that I made errors transcribing values from the spreadsheet.&lt;br /&gt;
&lt;br /&gt;
==Notes on derived data upload and FMRI data upload.==&lt;br /&gt;
&lt;br /&gt;
* Case 1 - Older non-DICOM Genesis data upload. This is done using Dave Clunie's useful dicom3tools kit. Genesis data is converted to DICOM&lt;br /&gt;
from a directory containing only pre-DICOM, Genesis-format images (named I.001 - I.xxx) with the command:&lt;br /&gt;
 ls -1 I.* | awk '{printf(&amp;quot;gentodc %s %d.dcm\n&amp;quot;,$1,NR)}' | sh&lt;br /&gt;
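The pipeline can be sanity-checked without running gentodc by printing the commands it would produce (mock filenames below; gentodc itself ships with dicom3tools):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: reproduce the ls | awk pattern on mock Genesis filenames,
# printing the gentodc commands instead of piping them to sh.
files="I.001
I.002
I.003"
cmds=$(echo "$files" | awk '{printf("gentodc %s %d.dcm\n", $1, NR)}')
echo "$cmds"
```
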
This creates a series of DICOM images. However, gentodc creates a unique SeriesInstanceUID for each image:&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15832.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15833.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15834.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15835.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
&lt;br /&gt;
When the resulting data is &lt;br /&gt;
uploaded to xnat, each image is treated as a series. I uploaded 300 images (all part of a single study) before I realized this problem.&lt;br /&gt;
This fills the prearchive and takes a long time to clean up. As a solution, a second step is run to modify the dicom images. This is done&lt;br /&gt;
by applying the xnat remapper to the newly formed dicom data:&lt;br /&gt;
 /projects/mark/xnat/DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap -d ../tmp/sp.das -o /d/bigdaily/mark/tmp/ .&lt;br /&gt;
&lt;br /&gt;
Where file sp.das contains:&lt;br /&gt;
 (0020,000e) := &amp;quot;0.0.0.0.3.1779.1.1257878261.15832.2200373056&amp;quot;&lt;br /&gt;
which sets all images to the same SeriesInstanceUID - This remapping needs to be performed on a series-by-series basis currently. Other &lt;br /&gt;
anonymizing is performed during this step as well.&lt;br /&gt;
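Since the remapping is applied series-by-series, the .das file itself can be generated per series; a sketch, using the example UID from the text:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: write a minimal .das file that forces one SeriesInstanceUID,
# as described above. The UID is the example value from the text.
uid="0.0.0.0.3.1779.1.1257878261.15832.2200373056"
dasfile=sp.das
printf '(0020,000e) := "%s"\n' "$uid" > "$dasfile"
cat "$dasfile"
```
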
&lt;br /&gt;
* Case 2 - Upload of segmentations derived from DICOM data. This is even more problematic, depending on the situation. I am currently &lt;br /&gt;
uploading prostate segmentations combined with the original DICOM data. Each subject has several MRI series and an expert labelmap&lt;br /&gt;
segmentation that need to be uploaded as a single subject. The researcher's approach was to create a new series number for the segmentation,&lt;br /&gt;
using the series header information from the original data. When this is uploaded, the segmentation is treated as duplicate data by xnat.&lt;br /&gt;
Xnat thinks the data already exists, based on the header information. The solution here is potentially more complicated. Not only does a&lt;br /&gt;
new SeriesInstanceUID need to be created, but it also appears that xnat verifies that the SOPInstanceUID (i.e., the unique image identifier)&lt;br /&gt;
is in fact unique. Apparently, xnat only checks the first image of a series for SOPInstanceUID uniqueness, since I was&lt;br /&gt;
able to apply the remapper to a series of data with a single .das file that set the same SOPInstanceUID for every image in the series.&lt;br /&gt;
Below are the SOPInstanceUID and SeriesInstanceUID that were applied to the segmented data:&lt;br /&gt;
 (0008,0018) := &amp;quot;1.2.840.113619.2.207.3596.11861984.22869.1219405353.999&amp;quot;&lt;br /&gt;
 (0020,000e) := &amp;quot;1.2.840.113619.2.207.3596.11861984.25740.1219404288.477&amp;quot;&lt;br /&gt;
Here is the segmented data in xnat:&lt;br /&gt;
&lt;br /&gt;
[[File:seg.png]] [ Labelled prostate segmentation]&lt;br /&gt;
&lt;br /&gt;
Obviously, we will need a more elegant solution to upload data derived in this fashion in the future.&lt;br /&gt;
* Case 3 - FMRI data upload - Below is the essence of an email that I sent to the xnat discussion group, which I never&lt;br /&gt;
heard back about:&lt;br /&gt;
&lt;br /&gt;
I am uploading some large FMRI datasets to xnat central. Each exam has about 10k images. I&lt;br /&gt;
have been anonymizing the data with DicomRemap and modifying the series descriptions to describe&lt;br /&gt;
the fmri task being performed for each series via the .das file.&lt;br /&gt;
During the upload process, I get a  proxy error:&lt;br /&gt;
[[File:proxy-error.jpg]]&lt;br /&gt;
 But sometime later my data shows&lt;br /&gt;
up in the prearchive and appears  complete and intact and I can move it to the project. However, when&lt;br /&gt;
I then select the subject from the project,  I see a processing exception at the top of the page:&lt;br /&gt;
[[File:process-error.jpg]]&lt;br /&gt;
I am concerned that when we go to share the data after uploading everything, if someone sees this processing&lt;br /&gt;
exception they will assume the data is corrupt and won't use it, even though it is complete. So I am curious:&lt;br /&gt;
1) is there a way to suppress this error after upload?&lt;br /&gt;
2) is it even feasible to upload such large datasets? The relative size, 300M compressed, is not large, but the image count is.&lt;br /&gt;
3) are there alternative, better methods to upload?&lt;br /&gt;
&lt;br /&gt;
I looked at http://nrg.wikispaces.com/XNAT+Data+Management&lt;br /&gt;
&lt;br /&gt;
which describes an FTP upload process, but I don't see how to do this with XNAT Central.&lt;br /&gt;
&lt;br /&gt;
I also looked into using http://nrg.wikispaces.com/StoreXAR&lt;br /&gt;
and tried to organize my data and SESSION.xml file as described there, but this generated a connection error:&lt;br /&gt;
&lt;br /&gt;
Thu Oct 22 18:13:57 EDT 2009 manderson@http://central.xnat.org:8104/:java.net.ConnectException: Connection timed out&lt;br /&gt;
&lt;br /&gt;
and likely I didn't specify the XML file correctly. Is there a way to generate the XML automatically by scanning the organized data?&lt;br /&gt;
If not, it seems easier to just upload the compressed tar file, do something else for a while, and then move the data to the&lt;br /&gt;
project from the prearchive. This is relatively non-time-consuming, as xnat does the work and displays the series descriptions that&lt;br /&gt;
I want; I don't need any custom variables.&lt;br /&gt;
&lt;br /&gt;
Thanks for any help/pointers. &lt;br /&gt;
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the GUI top &lt;br /&gt;
* Select the project from the project list. &lt;br /&gt;
* From within the Project view, Click &amp;quot;Access&amp;quot; tab and set permissions to be appropriate&lt;br /&gt;
* From within the Project view, Select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config xml file to describe the spreadsheet to be built from the DICOM data. Root element is &amp;quot;Columns&amp;quot; and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped DICOM tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of directories containing the source DICOM data, separated by spaces.&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the study and series InstanceUID tags are needed: (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve pt age&lt;br /&gt;
 // - (0010,1040) preserve pt sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 - (0010,0040)&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values. In some cases, some cells in the spreadsheet can be left blank: for a Patient-level remap, one value must be specified for each patient; if the spreadsheet contains multiple rows for each patient, the column needs only be filled in one row for each patient. Similarly, for a Study-level remap, the value need only be filled once. If you don't fill in a required cell, the remapper will complain. If you give, for example, a Patient-level remap column multiple values for a single patient, the remapper will complain.&lt;br /&gt;
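These fill-in rules can be checked mechanically before running the remapper; a sketch with a hypothetical two-column layout (column 1 = patient ID, column 2 = a Patient-level remap value):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: verify a Patient-level remap column has exactly one non-empty
# value per patient. The CSV layout here is hypothetical.
csv="p1,case001
p1,
p2,case002
p2,"
errors=$(echo "$csv" | awk -F, '$2 != "" { seen[$1]++ }
  END { for (p in seen) if (seen[p] != 1) print p }')
if [ -z "$errors" ]; then echo "remap column OK"; else echo "bad: $errors"; fi
```
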
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* the remap config XML should be the same file used in 2.a,&lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d,&lt;br /&gt;
* the list of directories is the same list of source directories given in 2.b,&lt;br /&gt;
* an anonymization script (such as the one from 2.c) can be applied at this stage by using the -d option,&lt;br /&gt;
* the first time you use a script to generate new UIDs, you'll need a new UID root;&lt;br /&gt;
** do this by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database -- it can send to a DICOM server. Below we use webservices to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data go from here into database -- via &amp;quot;admin&amp;quot; move from to db? How labor intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the next steps using web services, use curl or XNATRestClient ('''download XNATRestClient''' in xnat_tools.zip from http://nrg.wikispaces.com/XNAT+REST+API+Usage).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with server and create new session; use the response as a sessionID ($JSessionID) to use in subsequent queries&lt;br /&gt;
 curl -X POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password&lt;br /&gt;
 or, use the XNATRestClient&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote /REST/projects/$ProjectID/subjects/s0001&lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID.)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
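Steps 4 and 4.a.1 can also be combined, creating each subject and its demographics in a single PUT; a sketch that only echoes the commands (placeholder server, session, project, and gender values):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: one PUT per subject with a demographics query string (per 4.a.1).
# Server/session/project values are placeholders; commands are echoed.
XNE_Svr=${XNE_Svr:-https://xnat.example.org}
JSessionID=${JSessionID:-DUMMYSESSION}
ProjectID=${ProjectID:-IGT_GLIOMA}
genders="male female male"
n=0
for g in $genders; do
  n=$((n+1))
  subj=$(printf 's%04d' "$n")
  uri="/REST/projects/$ProjectID/subjects/$subj?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=$g"
  echo XNATRestClient -host "$XNE_Svr" -user_session "$JSessionID" \
    -m PUT -remote "$uri"
done
```
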
&lt;br /&gt;
'''4a. Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' No demographics are applied to a subject by default. To edit the demographics (such as gender or handedness) of a subject '''already created''', use XML Path shortcuts:&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (Append XML path shortcuts and separate each by an ''&amp;amp;''. Note that querystring parameters must be separated from the actual URI by a ?):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML Path shortcuts that can be specified on the command line for projects, subjects, and experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an XML file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./${ProjectID}_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
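The three calls above follow one pattern (an experiment name plus the matching session-data type), so they can be generated from a list; a sketch that only builds the URIs (placeholder project, subject, and date values):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: build the experiment-creation URIs from a modality list.
# The name:type pairs follow the examples above; values are placeholders.
ProjectID=${ProjectID:-IGT_GLIOMA}
SubjectID=${SubjectID:-s0001}
scandate="01/02/09"
uris=""
for pair in MRExperiment:mrSessionData CTExperiment1:ctSessionData PETExperiment1:petSessionData; do
  exp=${pair%%:*}
  type=${pair##*:}
  uris="$uris/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$exp?xnat:$type/date=$scandate
"
done
printf '%s' "$uris"
```
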
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create uris for scans, reconstructions and upload them.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nfti&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T2_0344/0344.nfti&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
[[Link title]]&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=File:Process-error.jpg&amp;diff=44836</id>
		<title>File:Process-error.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=File:Process-error.jpg&amp;diff=44836"/>
		<updated>2009-11-10T20:29:36Z</updated>

		<summary type="html">&lt;p&gt;Mark: uploaded a new version of &amp;quot;File:Process-error.jpg&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=File:Process-error.jpg&amp;diff=44835</id>
		<title>File:Process-error.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=File:Process-error.jpg&amp;diff=44835"/>
		<updated>2009-11-10T20:28:52Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44834</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44834"/>
		<updated>2009-11-10T20:19:28Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create Project in web GUI with ProjectID IGT_GLIOMA and create the custom variables that additionally describe the data. In this project, we&lt;br /&gt;
added variables for tumor size, location, description, and grade, and for patient sex and age. &lt;br /&gt;
* The value of these variables is manually entered &lt;br /&gt;
and displayed when a subject is selected. Custom variables cannot currently be used as search criteria to select a subset of the project.&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u manderson -p my-passwd -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -dest /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
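As a hedged sketch of that automation (not run against a real server; build_subject_path and the case IDs are hypothetical, and the placeholders are left as literals), the loop below prints one XNATRestClient invocation per case so the commands can be reviewed before execution:&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical helper: build the REST path for one subject in a project.
build_subject_path() {
  # $1 = project ID, $2 = subject ID
  printf '/REST/projects/%s/subjects/%s' "$1" "$2"
}

# Dry run: print each XNATRestClient command instead of executing it.
# The server and session-ID placeholders are left for later substitution.
for case_id in case1 case2 case3; do
  echo XNATRestClient -host '$XNE_Svr' -user_session '$JSessionID' \
    -m PUT -dest "$(build_subject_path IGT_GLIOMA "$case_id")"
done
```

Piping the printed commands to sh would execute them once a real server and session ID are substituted.&lt;br /&gt;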
* Anonymize all patient data. I tried the DicomRemapper:&lt;br /&gt;
 DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap /projects/igtcases/neuro/glioma_mrt/for_hussein/1802 -o /d/bigdaily/&lt;br /&gt;
but this fails if non-dicom files are found amongst the DICOM data:&lt;br /&gt;
 /projects/igtcases/neuro/glioma_mrt/for_hussein/1802/tumor.xml: not DICOM data&lt;br /&gt;
This will often be the case for us at BWH, so this too is not currently viable. I used the interactive DicomBrowser tool. This requires&lt;br /&gt;
editing a DICOM descriptor .das file for each subject, and 20 mouse-clicks to specify parameters and do the anonymizing.&lt;br /&gt;
* Upload the anonymized data. This requires making a compressed tar file of the anonymized data and running the upload process from &lt;br /&gt;
xnat Central &lt;br /&gt;
* The entire process of manually uploading and anonymizing a case takes between six and ten minutes.&lt;br /&gt;
* Several types of errors were encountered:&lt;br /&gt;
** Data entry at scan time. One subject's age was listed as 100 years old, though the subject was born in 1984&lt;br /&gt;
** Spreadsheet errors. All subject ages were incorrect; I used the age at the date of scan. One subject was listed as a 20-year-old male, but the data is for a 34-year-old female&lt;br /&gt;
** It is certainly possible that I made errors transcribing values from the spreadsheet.&lt;br /&gt;
&lt;br /&gt;
==Notes on derived data upload and FMRI data upload.==&lt;br /&gt;
&lt;br /&gt;
* Case 1 - Older non-DICOM Genesis data upload. This is done using Dave Clunie's useful dicom3tools kit. Genesis data is converted to DICOM&lt;br /&gt;
from a directory containing only pre-DICOM, Genesis-format images (named I.001 - I.xxx) with the command:&lt;br /&gt;
 ls -1 I.* | awk '{printf(&amp;quot;gentodc %s %d.dcm\n&amp;quot;,$1,NR)}' | sh&lt;br /&gt;
This creates a series of DICOM images. However, gentodc creates a unique SeriesInstanceUID for each image:&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15832.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15833.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15834.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15835.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
&lt;br /&gt;
When the resulting data is &lt;br /&gt;
uploaded to XNAT, each image is treated as a separate series. I uploaded 300 images (all part of a single study) before I realized this problem.&lt;br /&gt;
This fills the prearchive and takes a long time to clean up. As a solution, a second step is run to modify the DICOM images&lt;br /&gt;
by applying the XNAT remapper to the newly created DICOM data:&lt;br /&gt;
 /projects/mark/xnat/DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap -d ../tmp/sp.das -o /d/bigdaily/mark/tmp/ .&lt;br /&gt;
&lt;br /&gt;
Where file sp.das contains:&lt;br /&gt;
 (0020,000e) := &amp;quot;0.0.0.0.3.1779.1.1257878261.15832.2200373056&amp;quot;&lt;br /&gt;
which sets all images to the same SeriesInstanceUID. Currently this remapping must be performed on a series-by-series basis. Other &lt;br /&gt;
anonymizing is performed during this step as well.&lt;br /&gt;
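The Case 1 numbering step can be exercised on synthetic files, without gentodc installed, by printing the command lines the pipeline would run (the scratch directory and the three I.xxx filenames are illustrative only):&lt;br /&gt;

```shell
#!/bin/sh
# Create dummy Genesis-style filenames in a scratch directory (the I.xxx
# names mirror the convention described above; nothing is converted here).
workdir=$(mktemp -d)
cd "$workdir" || exit 1
touch I.001 I.002 I.003

# Same pipeline as above, minus the final "| sh", so the gentodc
# commands are printed rather than executed.
cmds=$(ls -1 I.* | awk '{printf("gentodc %s %d.dcm\n",$1,NR)}')
echo "$cmds"
```

Appending "| sh" back onto the pipeline runs the printed conversions in place.&lt;br /&gt;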
&lt;br /&gt;
* Case 2 - Upload of segmentations derived from DICOM data. This is even more problematic, depending on the situation. I am currently &lt;br /&gt;
uploading prostate segmentations combined with the original DICOM data. Each subject has several MRI series and an expert labelmap&lt;br /&gt;
segmentation that need to be uploaded as a single subject. The researcher's approach was to create a new series number for the segmentation,&lt;br /&gt;
using the series header information from the original data. When this is uploaded, the segmentation is treated as duplicate data by xnat.&lt;br /&gt;
XNAT thinks the data already exists, based on the header information. The solution here is potentially more complicated. Not only does a&lt;br /&gt;
new SeriesInstanceUID need to be created, but XNAT also appears to verify that the SOPInstanceUID, i.e. the unique image identifier,&lt;br /&gt;
is in fact unique. Apparently, XNAT checks only the first image of a series for SOPInstanceUID uniqueness, as I was&lt;br /&gt;
able to apply the remapper to a series of data with a single .das file that set the same SOPInstanceUID on every image in the series. &lt;br /&gt;
Below are the SOPInstanceUID and SeriesInstanceUID that were applied to the segmented data:&lt;br /&gt;
 (0008,0018) := &amp;quot;1.2.840.113619.2.207.3596.11861984.22869.1219405353.999&amp;quot;&lt;br /&gt;
 (0020,000e) := &amp;quot;1.2.840.113619.2.207.3596.11861984.25740.1219404288.477&amp;quot;&lt;br /&gt;
Here is the segmented data in xnat:&lt;br /&gt;
&lt;br /&gt;
[[File:seg.png]] [ Labelled colon segmentation]&lt;br /&gt;
&lt;br /&gt;
Obviously, we will need a more elegant solution to upload data derived in this fashion in the future.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the top of the GUI &lt;br /&gt;
* Select the project from the project list. &lt;br /&gt;
* From within the Project view, click the &amp;quot;Access&amp;quot; tab and set appropriate permissions&lt;br /&gt;
* From within the Project view, select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config xml file to describe the spreadsheet to be built from the DICOM data. Root element is &amp;quot;Columns&amp;quot; and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped DICOM tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of directories containing the source DICOM data, separated by spaces.&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the study and series InstanceUID tags are needed  (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve pt age&lt;br /&gt;
 // - (0010,1040) preserve pt sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 - (0010,0040)&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values, though some cells may be left blank: for a Patient-level remap, exactly one value must be specified per patient, so if the spreadsheet contains multiple rows for a patient, the column need only be filled in one of those rows. Similarly, a Study-level value need only be filled in once per study. The remapper will complain if a required cell is empty, or if, for example, a Patient-level remap column is given multiple values for a single patient.&lt;br /&gt;
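Those fill rules can be sanity-checked before running the remapper. This is only a sketch under assumptions: the remap.csv here is a tiny illustrative file with the patient identifier in column 1 and a single Patient-level remap column in column 2; adjust the column positions to match your generated spreadsheet.&lt;br /&gt;

```shell
#!/bin/sh
# Illustrative spreadsheet: patient p1 spans two rows but supplies its
# Patient-level value only once, which is what the remapper expects.
printf 'patient,anon_name\np1,case001\np1,\np2,case002\n' > remap.csv

# Count non-empty Patient-level values per patient; exit nonzero if any
# patient has a count other than exactly one.
if awk -F, 'NR != 1 { if ($2 != "") seen[$1]++; pats[$1] = 1 }
            END { bad = 0
                  for (p in pats) if (seen[p] != 1) bad = 1
                  exit bad }' remap.csv
then check=OK
else check=FILL-ERROR
fi
echo "$check"   # prints OK
```

Running the same check on a real remap.csv (with the column numbers adjusted) catches blank or duplicated cells before the remapper complains about them.&lt;br /&gt;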
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* the remapping config XML should be the same file used in 2.a, &lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d, and &lt;br /&gt;
* the list of directories is the same list of source directories used in 2.b.&lt;br /&gt;
* An anonymization script (from 2.c) can be applied at this stage by using the -d option.&lt;br /&gt;
* The first time you use a script to generate new UIDs, you'll need a new UID root; &lt;br /&gt;
** do this by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database -- it can send to a DICOM server. Below we use webservices to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data get from here into the database -- via an &amp;quot;admin&amp;quot; move to the db? How labor-intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For next steps using web services, use curl or XNATRestClient ('''download XNATRestClient''' in xnat_tools.zip from http://nrg.wikispaces.com/XNAT+REST+API+Usage)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with server and create new session; use the response as a sessionID ($JSessionID) to use in subsequent queries&lt;br /&gt;
 curl -d POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password &lt;br /&gt;
 or, use the XNATRestClient&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
  XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote /REST/projects/$ProjectID/subjects/s0001 &lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
&lt;br /&gt;
'''4a. Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' By default, no demographics are applied to a subject. To edit the demographics (like gender or handedness) of a subject '''already created''', use XML path shortcuts.&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (append the XML path shortcuts, separating each from the next with an ''&amp;amp;''. Note that querystring parameters must be separated from the actual URI by a ?):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML path shortcuts that can be specified on the command line for projects, subjects, and experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an xml file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./${ProjectID}_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create URIs for scans and reconstructions, and upload them.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nfti&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0344/0344.nfti&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
[[Link title]]&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44833</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44833"/>
		<updated>2009-11-10T20:16:06Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create Project in web GUI with ProjectID IGT_GLIOMA and create the custom variables that additionally describe the data. In this project, we&lt;br /&gt;
added variables for tumor size, location, description, and grade, and for patient sex and age. &lt;br /&gt;
* The value of these variables is manually entered &lt;br /&gt;
and displayed when a subject is selected. Custom variables cannot currently be used as search criteria to select a subset of the project.&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u manderson -p my-passwd -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -dest /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
* Anonymize all patient data. I tried the DicomRemapper:&lt;br /&gt;
 DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap /projects/igtcases/neuro/glioma_mrt/for_hussein/1802 -o /d/bigdaily/&lt;br /&gt;
but this fails if non-dicom files are found amongst the DICOM data:&lt;br /&gt;
 /projects/igtcases/neuro/glioma_mrt/for_hussein/1802/tumor.xml: not DICOM data&lt;br /&gt;
This will often be the case for us at BWH, so this too is not currently viable. I used the interactive DicomBrowser tool. This requires&lt;br /&gt;
editing a DICOM descriptor .das file for each subject, and 20 mouse-clicks to specify parameters and do the anonymizing.&lt;br /&gt;
* Upload the anonymized data. This requires making a compressed tar file of the anonymized data and running the upload process from &lt;br /&gt;
xnat Central &lt;br /&gt;
* The entire process of manually uploading and anonymizing a case takes between six and ten minutes.&lt;br /&gt;
* Several types of errors were encountered:&lt;br /&gt;
** Data entry at scan time. One subject's age was listed as 100 years old, though the subject was born in 1984&lt;br /&gt;
** Spreadsheet errors. All subject ages were incorrect; I used the age at the date of scan. One subject was listed as a 20-year-old male, but the data is for a 34-year-old female&lt;br /&gt;
** It is certainly possible that I made errors transcribing values from the spreadsheet.&lt;br /&gt;
&lt;br /&gt;
==Notes on derived data upload and FMRI data upload.==&lt;br /&gt;
&lt;br /&gt;
* Case 1 - Older non-DICOM Genesis data upload. This is done using Dave Clunie's useful dicom3tools kit. Genesis data is converted to DICOM&lt;br /&gt;
from a directory containing only pre-DICOM, Genesis-format images (named I.001 - I.xxx) with the command:&lt;br /&gt;
 ls -1 I.* | awk '{printf(&amp;quot;gentodc %s %d.dcm\n&amp;quot;,$1,NR)}' | sh&lt;br /&gt;
This creates a series of DICOM images. However, gentodc creates a unique SeriesInstanceUID for each image:&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15832.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15833.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15834.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15835.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
&lt;br /&gt;
When the resulting data is &lt;br /&gt;
uploaded to XNAT, each image is treated as a separate series. I uploaded 300 images (all part of a single study) before I realized this problem.&lt;br /&gt;
This fills the prearchive and takes a long time to clean up. As a solution, a second step is run to modify the DICOM images&lt;br /&gt;
by applying the XNAT remapper to the newly created DICOM data:&lt;br /&gt;
 /projects/mark/xnat/DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap -d ../tmp/sp.das -o /d/bigdaily/mark/tmp/ .&lt;br /&gt;
&lt;br /&gt;
Where file sp.das contains:&lt;br /&gt;
 (0020,000e) := &amp;quot;0.0.0.0.3.1779.1.1257878261.15832.2200373056&amp;quot;&lt;br /&gt;
which sets all images to the same SeriesInstanceUID. Currently this remapping must be performed on a series-by-series basis. Other &lt;br /&gt;
anonymizing is performed during this step as well.&lt;br /&gt;
&lt;br /&gt;
* Case 2 - Upload of segmentations derived from DICOM data. This is even more problematic, depending on the situation. I am currently &lt;br /&gt;
uploading prostate segmentations combined with the original DICOM data. Each subject has several MRI series and an expert labelmap&lt;br /&gt;
segmentation that need to be uploaded as a single subject. The researcher's approach was to create a new series number for the segmentation,&lt;br /&gt;
using the series header information from the original data. When this is uploaded, the segmentation is treated as duplicate data by xnat.&lt;br /&gt;
XNAT thinks the data already exists, based on the header information. The solution here is potentially more complicated. Not only does a&lt;br /&gt;
new SeriesInstanceUID need to be created, but XNAT also appears to verify that the SOPInstanceUID, i.e. the unique image identifier,&lt;br /&gt;
is in fact unique. Apparently, XNAT checks only the first image of a series for SOPInstanceUID uniqueness, as I was&lt;br /&gt;
able to apply the remapper to a series of data with a single .das file that set the same SOPInstanceUID on every image in the series. &lt;br /&gt;
Below are the SOPInstanceUID and SeriesInstanceUID that were applied to the segmented data:&lt;br /&gt;
 (0008,0018) := &amp;quot;1.2.840.113619.2.207.3596.11861984.22869.1219405353.999&amp;quot;&lt;br /&gt;
 (0020,000e) := &amp;quot;1.2.840.113619.2.207.3596.11861984.25740.1219404288.477&amp;quot;&lt;br /&gt;
Here is the segmented data in xnat:&lt;br /&gt;
&lt;br /&gt;
[[File:seg.png]] [ Labelled colon segmentation]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the GUI top &lt;br /&gt;
* Select the project from the project list. &lt;br /&gt;
* From within the Project view, Click &amp;quot;Access&amp;quot; tab and set permissions to be appropriate&lt;br /&gt;
* From within the Project view, Select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config xml file to describe the spreadsheet to be built from the DICOM data. Root element is &amp;quot;Columns&amp;quot; and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped DICOM tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of directories containing the source DICOM data, separated by spaces (e.g. ''/data/case1 /data/case2'').&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the study and series InstanceUID tags are needed  (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve pt age&lt;br /&gt;
 // - (0010,1040) preserve pt sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 - (0010,0040)&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values. Some cells may be left blank: for a Patient-level remap, exactly one value must be specified per patient, so if the spreadsheet contains multiple rows for a patient, the column need only be filled in one of them. Similarly, a Study-level remap value need only be filled in once per study. If you leave a required cell empty, or give a Patient-level remap column multiple different values for a single patient, the remapper will complain.&lt;br /&gt;
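As a hypothetical illustration (the column names follow the example remap config above; the additional identifying columns DicomSummarize actually emits may differ), a filled-in remap.csv for two patients might look like:

```csv
Patient,Anon Patient Name,Anon PatientID,Study,Series
PAT001,case101,case101,MR1,1
PAT001,,,MR1,2
PAT002,case102,case102,MR1,1
```

The Patient-level values for PAT001 are filled in only the first of its two rows; the blank cells on the second row are allowed because the remapper takes the single value given per patient.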
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* the remap config XML should be the same file used in 2.a, &lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d, and &lt;br /&gt;
* the list of directories is the same list of source directories used in 2.b.&lt;br /&gt;
* add an anonymization script to be applied at this stage by using the -d option.&lt;br /&gt;
* first time you use a script to generate new UIDs, you'll need a new UID root; &lt;br /&gt;
** do this by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database -- it can only send to a DICOM server. Below we use web services to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data go from here into the database -- via an &amp;quot;admin&amp;quot; move from the pre-archive to the db? How labor intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For next steps using web services, use curl or XNATRestClient ('''download XNATRestClient''' in xnat_tools.zip from http://nrg.wikispaces.com/XNAT+REST+API+Usage)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with the server and create a new session; the response is a session ID ($JSessionID) for use in subsequent queries&lt;br /&gt;
 curl -X POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password &lt;br /&gt;
 or, use the XNATRestClient&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
  XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT /REST/projects/$ProjectID/subjects/s0001 &lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
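Such a script might be sketched as the following dry-run shell loop; the server, session, and project values are placeholders (not real endpoints), and `echo` is used so the commands are printed rather than executed:

```shell
#!/bin/sh
# Dry-run sketch: print one XNATRestClient PUT per subject.
# XNE_Svr, JSessionID and ProjectID are placeholder values here;
# in real use they come from steps 1-3 above.
XNE_Svr="https://xnat.example.org"
JSessionID="0123456789ABCDEF"
ProjectID="IGT_GLIOMA"
i=1
while [ "$i" -le 10 ]; do
  # Build zero-padded subject IDs s0001..s0010
  subj=$(printf 's%04d' "$i")
  # Drop the leading 'echo' to actually issue the calls.
  echo "XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote /REST/projects/$ProjectID/subjects/$subj"
  i=$((i + 1))
done
```

Removing the `echo` turns the dry run into the real batch of subject-creation calls.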
&lt;br /&gt;
'''4a. Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' No demographics are applied to a subject by default. To edit the demographics (like gender or handedness) of a subject '''already created''', use XML Path shortcuts.&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (Append XML path shortcuts and separate each by an ''&amp;amp;''. Note that querystring parameters must be separated from the actual URI by a ?):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML Path shortcuts that can be specified on commandline for projects, subject, experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an xml file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./$ProjectID_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create URIs for scans and reconstructions, and upload them.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nfti&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0344/0344.nfti&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=File:File.jpg&amp;diff=44832</id>
		<title>File:File.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=File:File.jpg&amp;diff=44832"/>
		<updated>2009-11-10T20:15:05Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44831</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44831"/>
		<updated>2009-11-10T20:12:21Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create Project in web GUI with ProjectID IGT_GLIOMA and create the custom variables that additionally describe the data. In this project, we&lt;br /&gt;
added variables for tumor size, location, description and grade and patient sex and age. &lt;br /&gt;
* The value of these variables is manually entered &lt;br /&gt;
and displayed when a subject is selected. Custom variables cannot currently be used as search criteria to select a subset of the project.&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u manderson -p my-passwd -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -dest /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
* Anonymize all patient data. I tried the DicomRemapper:&lt;br /&gt;
 DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap /projects/igtcases/neuro/glioma_mrt/for_hussein/1802 -o /d/bigdaily/&lt;br /&gt;
but this fails if non-DICOM files are found amongst the DICOM data:&lt;br /&gt;
 /projects/igtcases/neuro/glioma_mrt/for_hussein/1802/tumor.xml: not DICOM data&lt;br /&gt;
This will often be the case for us at BWH, so this too is not currently viable. I used the interactive DicomBrowser tool instead. This requires&lt;br /&gt;
editing a DICOM descriptor .das file for each subject, and 20 mouse-clicks to specify parameters and do the anonymizing.&lt;br /&gt;
* Upload the anonymized data. This requires making a compressed tar file of the anonymized data and running the upload process from &lt;br /&gt;
xnat Central &lt;br /&gt;
* The entire process of manually uploading and anonymizing a case takes between six and ten minutes.&lt;br /&gt;
* Several types of errors were encountered:&lt;br /&gt;
** Data entry at scan time. Subject age for one subject listed as 100 years old; subject born in 1984&lt;br /&gt;
** Spreadsheet errors. All subject ages were incorrect; I used the age at date of scan. One subject was listed as a 20-year-old male, but the data is for a 34-year-old female&lt;br /&gt;
** It is certainly possible that I made errors transcribing values from the spreadsheet.&lt;br /&gt;
&lt;br /&gt;
==Notes on derived data upload and FMRI data upload.==&lt;br /&gt;
&lt;br /&gt;
* Case 1 - Older non-DICOM genesis data upload. This is done using Dave Clunie's useful dicom3tools kit. Genesis data gets converted to DICOM&lt;br /&gt;
from a directory containing only pre-DICOM, genesis-format images (named I.001 - I.xxx) with the command:&lt;br /&gt;
 ls -1 I.* | awk '{printf(&amp;quot;gentodc %s %d.dcm\n&amp;quot;,$1,NR)}' | sh&lt;br /&gt;
This will create a series of DICOM images. However, gentodc creates a unique SeriesInstanceUID for each image:&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15832.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15833.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15834.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15835.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
&lt;br /&gt;
When the resulting data is &lt;br /&gt;
uploaded to xnat, each image is treated as a series. I uploaded 300 images (all part of a single study) before I realized this problem.&lt;br /&gt;
This fills the prearchive and takes a long time to clean up. As a solution, a second step is run to modify the dicom images. This is done&lt;br /&gt;
by applying the xnat remapper to the newly formed dicom data:&lt;br /&gt;
 /projects/mark/xnat/DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap -d ../tmp/sp.das -o /d/bigdaily/mark/tmp/ .&lt;br /&gt;
&lt;br /&gt;
Where file sp.das contains:&lt;br /&gt;
 (0020,000e) := &amp;quot;0.0.0.0.3.1779.1.1257878261.15832.2200373056&amp;quot;&lt;br /&gt;
which sets all images to the same SeriesInstanceUID. This remapping currently needs to be performed on a series-by-series basis. Other &lt;br /&gt;
anonymizing is performed during this step as well.&lt;br /&gt;
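One way to script the series-by-series step is to generate a .das file per series and then apply each with DicomRemap. In this sketch the UID values reuse the example UID root above, and the sp_NNN.das naming is invented for illustration:

```shell
#!/bin/sh
# Sketch: write one .das file per series, each forcing every image in
# that series onto a single SeriesInstanceUID. The per-series suffixes
# below (15832..15835) are the example values shown above.
# Apply each with e.g.: DicomRemap -d sp_15832.das -o <outdir> <series-dir>
for n in 15832 15833 15834 15835; do
  printf '(0020,000e) := "0.0.0.0.3.1779.1.1257878261.%s.2200373056"\n' "$n" \
    > "sp_$n.das"
done
```

Each generated file plays the role of sp.das above, one per series.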
&lt;br /&gt;
* Case 2 - Upload of segmentations derived from DICOM data. This is even more problematic, depending on the situation. I am currently &lt;br /&gt;
uploading prostate segmentations combined with the original DICOM data. Each subject has several MRI series and an expert labelmap&lt;br /&gt;
segmentation that need to be uploaded as a single subject. The researcher's approach was to create a new series number for the segmentation,&lt;br /&gt;
using the series header information from the original data. When this is uploaded, the segmentation is treated as duplicate data by xnat.&lt;br /&gt;
Xnat thinks the data already exists, based on the header information. The solution here is potentially more complicated. Not only does a&lt;br /&gt;
new SeriesInstanceUID needs to be created, but it also appears that xnat verifies that the SOPInstanceUID (i.e. the unique image identifier)&lt;br /&gt;
is in fact unique. Apparently, xnat checks only the first image of a series for SOPInstanceUID uniqueness, as I was&lt;br /&gt;
able to apply the remapper to a series with a single .das file that set the same SOPInstanceUID on every image in the series. &lt;br /&gt;
Below are the SOPInstanceUID and SeriesInstanceUID that were applied to the segmented data:&lt;br /&gt;
 (0008,0018) := &amp;quot;1.2.840.113619.2.207.3596.11861984.22869.1219405353.999&amp;quot;&lt;br /&gt;
 (0020,000e) := &amp;quot;1.2.840.113619.2.207.3596.11861984.25740.1219404288.477&amp;quot;&lt;br /&gt;
Here is the segmented data in xnat:&lt;br /&gt;
[[File:File.jpg|Labelled colon segmentation]]&lt;br /&gt;
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the GUI top &lt;br /&gt;
* Select the project from the project list. &lt;br /&gt;
* From within the Project view, Click &amp;quot;Access&amp;quot; tab and set permissions to be appropriate&lt;br /&gt;
* From within the Project view, Select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config xml file to describe the spreadsheet to be built from the DICOM data. Root element is &amp;quot;Columns&amp;quot; and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped DICOM tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of directories containing the source DICOM data, separated by spaces (e.g. ''/data/case1 /data/case2'').&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the study and series InstanceUID tags are needed  (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve pt age&lt;br /&gt;
 // - (0010,1040) preserve pt sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 - (0010,0040)&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values. Some cells may be left blank: for a Patient-level remap, exactly one value must be specified per patient, so if the spreadsheet contains multiple rows for a patient, the column need only be filled in one of them. Similarly, a Study-level remap value need only be filled in once per study. If you leave a required cell empty, or give a Patient-level remap column multiple different values for a single patient, the remapper will complain.&lt;br /&gt;
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* the remap config XML should be the same file used in 2.a, &lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d, and &lt;br /&gt;
* the list of directories is the same list of source directories used in 2.b.&lt;br /&gt;
* add an anonymization script to be applied at this stage by using the -d option.&lt;br /&gt;
* first time you use a script to generate new UIDs, you'll need a new UID root; &lt;br /&gt;
** do this by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database -- it can only send to a DICOM server. Below we use web services to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data go from here into the database -- via an &amp;quot;admin&amp;quot; move from the pre-archive to the db? How labor intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For next steps using web services, use curl or XNATRestClient ('''download XNATRestClient''' in xnat_tools.zip from http://nrg.wikispaces.com/XNAT+REST+API+Usage)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with the server and create a new session; the response is a session ID ($JSessionID) for use in subsequent queries&lt;br /&gt;
 curl -X POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password &lt;br /&gt;
 or, use the XNATRestClient&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
  XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT /REST/projects/$ProjectID/subjects/s0001 &lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
&lt;br /&gt;
'''4.a Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' No demographics are applied to a subject by default. To edit the demographics (like gender or handedness) of a subject '''already created''', use XML Path shortcuts:&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (append each XML path shortcut to the URI, separating them with an ''&amp;amp;''; note that querystring parameters must be separated from the URI itself by a ?):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML Path shortcuts that can be specified on commandline for projects, subject, experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an xml file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./${ProjectID}_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create URIs for scans and reconstructions, and upload the files.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nfti&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T2_0344/0344.nfti&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
[[Link title]]&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=File:Seg.png&amp;diff=44830</id>
		<title>File:Seg.png</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=File:Seg.png&amp;diff=44830"/>
		<updated>2009-11-10T20:10:38Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44829</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44829"/>
		<updated>2009-11-10T20:03:25Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create Project in web GUI with ProjectID IGT_GLIOMA and create the custom variables that additionally describe the data. In this project, we&lt;br /&gt;
added variables for tumor size, location, description and grade and patient sex and age. &lt;br /&gt;
* The value of these variables is manually entered &lt;br /&gt;
and displayed when a subject is selected. Custom variables cannot currently be used as search criteria to select a subset of the project.&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u manderson -p my-passwd -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -remote /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
* Anonymize all patient data. I tried the DicomRemapper:&lt;br /&gt;
 DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap /projects/igtcases/neuro/glioma_mrt/for_hussein/1802 -o /d/bigdaily/&lt;br /&gt;
but this fails if non-DICOM files are found amongst the DICOM data:&lt;br /&gt;
 /projects/igtcases/neuro/glioma_mrt/for_hussein/1802/tumor.xml: not DICOM data&lt;br /&gt;
This will often be the case for us at BWH, so this too is not currently viable. I used the interactive DicomBrowser tool instead. This requires&lt;br /&gt;
editing a DICOM descriptor .das file for each subject, and 20 mouse-clicks to specify parameters and do the anonymizing.&lt;br /&gt;
* Upload the anonymized data. This requires making a compressed tar file of the anonymized data and running the upload process from &lt;br /&gt;
xnat Central &lt;br /&gt;
* The entire process of manually uploading and anonymizing a case takes between six and ten minutes.&lt;br /&gt;
* Several types of errors were encountered:&lt;br /&gt;
** Data entry at scan time. Subject age for one subject was listed as 100 years old, though the subject was born in 1984.&lt;br /&gt;
** Spreadsheet errors. All subject ages were incorrect. I used the age at date of scan. One subject was listed as a 20-year-old male, but the data is for a 34-year-old female.&lt;br /&gt;
** It is certainly possible that I made errors transcribing values from the spreadsheet.&lt;br /&gt;
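The "compressed tar file" step above can be scripted. A minimal sketch; the directory layout and file names are illustrative stand-ins, not from a real case:

```shell
# Pack one anonymized case directory into a compressed tar for upload.
# The case directory and .dcm file names below are illustrative only.
work=$(mktemp -d)
mkdir -p "$work/case3"
touch "$work/case3/0001.dcm" "$work/case3/0002.dcm"   # stand-ins for anonymized DICOM files
tar czf /tmp/case3.tgz -C "$work" case3               # -C keeps archive paths relative
```

The resulting /tmp/case3.tgz is what the xnat Central upload process would consume.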
&lt;br /&gt;
==Notes on derived data upload and FMRI data upload.==&lt;br /&gt;
&lt;br /&gt;
* Case 1 - Older non-DICOM Genesis data upload. This is done using Dave Clunie's useful dicom3tools kit. Genesis data gets converted to DICOM&lt;br /&gt;
from a directory containing only pre-DICOM, Genesis-format images (named I.001 - I.xxx) with the command:&lt;br /&gt;
 ls -1 I.* | awk '{printf(&amp;quot;gentodc %s %d.dcm\n&amp;quot;,$1,NR)}' | sh&lt;br /&gt;
This creates a series of DICOM images. However, gentodc creates a unique SeriesInstanceUID for each image:&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15832.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15833.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15834.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15835.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
&lt;br /&gt;
When the resulting data is&lt;br /&gt;
uploaded to xnat, each image is treated as a separate series. I uploaded 300 images (all part of a single study) before I realized this problem.&lt;br /&gt;
This fills the prearchive and takes a long time to clean up. As a solution, a second step is run to modify the dicom images. This is done&lt;br /&gt;
by applying the xnat remapper to the newly formed dicom data:&lt;br /&gt;
 /projects/mark/xnat/DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap -d ../tmp/sp.das -o /d/bigdaily/mark/tmp/ .&lt;br /&gt;
&lt;br /&gt;
Where file sp.das contains:&lt;br /&gt;
 (0020,000e) := &amp;quot;0.0.0.0.3.1779.1.1257878261.15832.2200373056&amp;quot;&lt;br /&gt;
which sets all images to the same SeriesInstanceUID. This remapping currently needs to be performed on a series-by-series basis. Other &lt;br /&gt;
anonymizing is performed during this step as well.&lt;br /&gt;
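The second step can itself be scripted: write the one-line .das file, then run the remapper. A sketch using the SeriesInstanceUID shown above; the remapper call is commented out since it requires DicomBrowser to be installed:

```shell
# Write a one-line .das descriptor that pins every image in a series
# to a single SeriesInstanceUID (the UID would be copied from the
# first converted image), then remap.
fix_series_uid() {
  uid="$1"        # SeriesInstanceUID to apply to the whole series
  das="$2"        # output .das descriptor path
  printf '(0020,000e) := "%s"\n' "$uid" > "$das"
}

fix_series_uid 0.0.0.0.3.1779.1.1257878261.15832.2200373056 /tmp/sp.das
# DicomRemap -d /tmp/sp.das -o /d/bigdaily/mark/tmp/ .
```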
&lt;br /&gt;
* Case 2 - Upload of segmentations derived from DICOM data. This is even more problematic, depending on the situation. I am currently &lt;br /&gt;
uploading prostate segmentations combined with the original DICOM data. Each subject has several MRI series and an expert labelmap&lt;br /&gt;
segmentation that need to be uploaded as a single subject. The researcher's approach was to create a new series number for the segmentation,&lt;br /&gt;
using the series header information from the original data. When this is uploaded, the segmentation is treated as duplicate data by xnat.&lt;br /&gt;
Xnat thinks the data already exists, based on the header information. The solution here is potentially more complicated: not only does a&lt;br /&gt;
new SeriesInstanceUID need to be created, but xnat also appears to verify that the SOPInstanceUID (the unique image identifier)&lt;br /&gt;
is in fact unique. Apparently, xnat checks only the first image of a series for SOPInstanceUID uniqueness, as I was&lt;br /&gt;
able to apply the remapper to a series of data with a single .das file that set the same SOPInstanceUID on every image in the series. &lt;br /&gt;
Below are the SOPInstanceUID and SeriesInstanceUID that were applied to the segmented data:&lt;br /&gt;
 (0008,0018) := &amp;quot;1.2.840.113619.2.207.3596.11861984.22869.1219405353.999&amp;quot;&lt;br /&gt;
 (0020,000e) := &amp;quot;1.2.840.113619.2.207.3596.11861984.25740.1219404288.477&amp;quot;&lt;br /&gt;
Here is the segmented data in xnat:&lt;br /&gt;
[[Image:seg.png|frame| Labelled colon segmentation]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the GUI top &lt;br /&gt;
* Select the project from the project list. &lt;br /&gt;
* From within the Project view, Click &amp;quot;Access&amp;quot; tab and set permissions to be appropriate&lt;br /&gt;
* From within the Project view, Select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config xml file to describe the spreadsheet to be built from the DICOM data. Root element is &amp;quot;Columns&amp;quot; and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped DICOM tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of directories containing the source DICOM data, separated by spaces.&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the study and series InstanceUID tags are needed  (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve pt age&lt;br /&gt;
 // - (0010,1040) preserve pt sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 - (0010,0040)&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values. Some cells may be left blank: for a Patient-level remap, exactly one value must be specified per patient, so if the spreadsheet contains multiple rows for a patient, the column need only be filled in on one of them. Similarly, a Study-level remap value need only be filled in once per study. The remapper will complain if a required cell is missing, or if (for example) a Patient-level remap column is given multiple values for a single patient.&lt;br /&gt;
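A quick sanity check of the edited spreadsheet can catch these complaints before the remapper runs. A sketch, assuming a simplified two-column remap.csv whose first field is the patient key and second field is a Patient-level remap value (adjust the separator and field numbers to your actual columns):

```shell
# Flag any patient whose Patient-level remap column has more (or fewer)
# than one non-empty value. The two-column CSV layout is an assumption.
cat > /tmp/remap.csv <<'EOF'
patA,anon01
patA,
patB,anon02
EOF
awk -F, '$2 != "" { seen[$1]++ }
  END { for (p in seen) if (seen[p] != 1) print "check patient:", p }' \
  /tmp/remap.csv > /tmp/remap_check.txt
# (a patient with no value at all would need a separate pass)
```

An empty /tmp/remap_check.txt means every listed patient has exactly one value.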
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* the remap config XML should be the same file used in 2.a, &lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d, and &lt;br /&gt;
* the list of directories is the same list of source directories used in 2.b.&lt;br /&gt;
* add an anonymization script to be applied at this stage with the -d option.&lt;br /&gt;
* the first time you use a script to generate new UIDs, you'll need a new UID root; &lt;br /&gt;
** obtain one by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database -- it can send to a DICOM server. Below we use web services to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data go from here into the database -- via an &amp;quot;admin&amp;quot; move to the db? How labor-intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For next steps using web services, use curl or XNATRestClient ('''download XNATRestClient''' in xnat_tools.zip from http://nrg.wikispaces.com/XNAT+REST+API+Usage)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with the server and create a new session; use the response as a session ID ($JSessionID) in subsequent queries&lt;br /&gt;
 curl -X POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password &lt;br /&gt;
 or, use the XNATRestClient&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote /REST/projects/$ProjectID/subjects/s0001 &lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
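For example, such a script might loop over subject numbers and emit one creation command each. A dry-run sketch: the commands are printed rather than executed, and the server and session values are placeholders:

```shell
# Generate the subject-creation commands for s0001..s0005 (dry run).
XNE_Svr=https://xnat.example.org   # placeholder server
JSessionID=PLACEHOLDERSESSION      # session ID from step 3
ProjectID=IGT_GLIOMA
for i in $(seq 1 5); do
  printf 'XNATRestClient -host %s -user_session %s -m PUT -remote /REST/projects/%s/subjects/s%04d\n' \
    "$XNE_Svr" "$JSessionID" "$ProjectID" "$i"
done > /tmp/create_subjects.sh
# Review /tmp/create_subjects.sh, then run it with: sh /tmp/create_subjects.sh
```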
&lt;br /&gt;
'''4.a Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' No demographics are applied to a subject by default. To edit the demographics (like gender or handedness) of a subject '''already created''', use XML Path shortcuts:&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (append each XML path shortcut to the URI, separating them with an ''&amp;amp;''; note that querystring parameters must be separated from the URI itself by a ?):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML Path shortcuts that can be specified on commandline for projects, subject, experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an xml file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./${ProjectID}_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create URIs for scans and reconstructions, and upload the files.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nfti&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T2_0344/0344.nfti&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
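The per-file upload commands in step 5 can likewise be looped over every file in a scan directory. A dry-run sketch: commands are printed rather than executed, and the server, session, and project/subject/experiment IDs are placeholders:

```shell
# Print one upload command per .dcm file in a scan directory (dry run).
XNE_Svr=https://xnat.example.org   # placeholder server
JSessionID=PLACEHOLDERSESSION      # session ID from step 3
base=/REST/projects/P1/subjects/S1/experiments/E1/scans/SCAN1/files
scan_dir=$(mktemp -d)
touch "$scan_dir/1232132.dcm" "$scan_dir/1232133.dcm"   # stand-in files
for f in "$scan_dir"/*.dcm; do
  printf 'XNATRestClient -host %s -user_session %s -m PUT -remote "%s/%s?format=DICOM&content=T1_RAW" -local %s\n' \
    "$XNE_Svr" "$JSessionID" "$base" "$(basename "$f")" "$f"
done > /tmp/upload_cmds.txt
# Review /tmp/upload_cmds.txt, then run it with: sh /tmp/upload_cmds.txt
```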
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44828</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44828"/>
		<updated>2009-11-10T19:55:14Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create Project in web GUI with ProjectID IGT_GLIOMA and create the custom variables that additionally describe the data. In this project, we&lt;br /&gt;
added variables for tumor size, location, description and grade and patient sex and age. &lt;br /&gt;
* The value of these variables is manually entered &lt;br /&gt;
and displayed when a subject is selected. Custom variables cannot currently be used as search criteria to select a subset of the project.&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u manderson -p my-passwd -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -dest /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
* Anonymize all patient data. I tried the DicomRemapper:&lt;br /&gt;
 DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap /projects/igtcases/neuro/glioma_mrt/for_hussein/1802 -o /d/bigdaily/&lt;br /&gt;
but this fails if non-dicom files are found amongst the DICOM data:&lt;br /&gt;
 /projects/igtcases/neuro/glioma_mrt/for_hussein/1802/tumor.xml: not DICOM data&lt;br /&gt;
this will often be the case for us at BWH, so this too is not currently viable. I used the interactive DicomBrowser tool instead. This requires&lt;br /&gt;
editing a DICOM descriptor .das file for each subject, and 20 mouse-clicks to specify parameters and do the anonymizing.&lt;br /&gt;
* Upload the anonymized data. This requires making a compressed tar file of the anonymized data and running the upload process from &lt;br /&gt;
xnat Central &lt;br /&gt;
* The entire process of manually uploading and anonymizing a case takes between six and ten minutes.&lt;br /&gt;
* Several types of errors were encountered:&lt;br /&gt;
** Data entry at scan time. Subject age for one subject listed as 100 years old, though the subject was born in 1984&lt;br /&gt;
** Spreadsheet errors. All subject ages were incorrect. I used the age at date of scan. One subject listed as a 20 year-old male, but data is for a 34-year old female&lt;br /&gt;
** It is certainly possible that I made errors transcribing values from the spreadsheet.&lt;br /&gt;
&lt;br /&gt;
==Notes on derived data upload and FMRI data upload.==&lt;br /&gt;
&lt;br /&gt;
* Case 1 - Older non-DICOM Genesis data upload. This is done using Dave Clunie's useful dicom3tools kit. Genesis data is converted to DICOM&lt;br /&gt;
from a directory containing only pre-DICOM, Genesis-format images (named I.001 - I.xxx) with the command:&lt;br /&gt;
 ls -1 I.* | awk '{printf(&amp;quot;gentodc %s %d.dcm\n&amp;quot;,$1,NR)}' | sh&lt;br /&gt;
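The awk pipeline above simply turns each I.* filename into a gentodc command (NR supplies a running output number). A dry-run equivalent, with a fixed file list in place of ls, shows the commands that would be generated:

```shell
# Dry run of the conversion pipeline: feed three Genesis filenames through
# the same awk program, but echo each gentodc command instead of running it.
printf 'I.001\nI.002\nI.003\n' | awk '{printf("echo gentodc %s %d.dcm\n",$1,NR)}' | sh
```

Each output line is the gentodc invocation that the real pipeline would execute.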
This will create a series of DICOM images. However, gentodc creates a unique SeriesInstanceUID for each image:&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15832.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15833.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15834.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15835.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
&lt;br /&gt;
When the resulting data is &lt;br /&gt;
uploaded to xnat, each image is treated as a series. I uploaded 300 images (all part of a single study) before I realized this problem.&lt;br /&gt;
This fills the prearchive and takes a long time to clean up. As a solution, a second step is run to modify the dicom images. This is done&lt;br /&gt;
by applying the xnat remapper to the newly formed dicom data:&lt;br /&gt;
 /projects/mark/xnat/DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap -d ../tmp/sp.das -o /d/bigdaily/mark/tmp/ .&lt;br /&gt;
&lt;br /&gt;
Where file sp.das contains:&lt;br /&gt;
 (0020,000e) := &amp;quot;0.0.0.0.3.1779.1.1257878261.15832.2200373056&amp;quot;&lt;br /&gt;
which sets all images to the same SeriesInstanceUID. This remapping currently needs to be performed on a series-by-series basis. Other &lt;br /&gt;
anonymizing is performed during this step as well.&lt;br /&gt;
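Since the remapping is applied series by series, a wrapper loop can generate a one-line .das per series and print the corresponding DicomRemap call. This is a dry-run sketch only; the series directory names and UIDs are illustrative, not taken from real data:

```shell
# For each (hypothetical) series directory, write a .das pinning (0020,000e)
# to one UID and print the DicomRemap command that would apply it.
for series in 15832 15833; do
  das="/tmp/series_$series.das"
  printf '(0020,000e) := "0.0.0.0.3.1779.1.1257878261.%s.2200373056"\n' "$series" > "$das"
  echo DicomRemap -d "$das" -o /tmp/out "/data/series_$series"
done
```

Dropping the leading echo would run the real remap against each series directory.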
&lt;br /&gt;
* Case 2 - Upload of segmentations derived from DICOM data. This is even more problematic, depending on the situation. I am currently &lt;br /&gt;
uploading prostate segmentations combined with the original DICOM data. Each subject has several MRI series and an expert labelmap&lt;br /&gt;
segmentation that need to be uploaded as a single subject. The researcher's approach was to create a new series number for the segmentation,&lt;br /&gt;
using the series header information from the original data. When this is uploaded, the segmentation is treated as duplicate data by xnat.&lt;br /&gt;
Xnat thinks the data already exists, based on the header information. The solution here is potentially more complicated. Not only does a&lt;br /&gt;
new SeriesInstanceUID need to be created, but it also appears that xnat verifies that the SOPInstanceUID, i.e. the unique image identifier,&lt;br /&gt;
is in fact unique. Apparently, xnat only checks the first image of a series for this, as I was&lt;br /&gt;
able to apply the remapper to a series of data with a single .das file that set the same SOPInstanceUID on every image in the series. &lt;br /&gt;
Below are the SOPInstanceUID and SeriesInstanceUID that were applied.&lt;br /&gt;
 (0008,0018) := &amp;quot;1.2.840.113619.2.207.3596.11861984.22869.1219405353.999&amp;quot;&lt;br /&gt;
 (0020,000e) := &amp;quot;1.2.840.113619.2.207.3596.11861984.25740.1219404288.477&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the GUI top &lt;br /&gt;
* Select the project from the project list. &lt;br /&gt;
* From within the Project view, Click &amp;quot;Access&amp;quot; tab and set permissions to be appropriate&lt;br /&gt;
* From within the Project view, Select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config xml file to describe the spreadsheet to be built from the DICOM data. Root element is &amp;quot;Columns&amp;quot; and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped DICOM tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of directories containing the source DICOM data (separated by spaces?)&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the study and series InstanceUID tags are needed  (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve pt age&lt;br /&gt;
 // - (0010,1040) preserve pt sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 - (0010,0040)&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values. In some cases, some cells in the spreadsheet can be left blank: for a Patient-level remap, one value must be specified for each patient; if the spreadsheet contains multiple rows for each patient, the column needs only be filled in one row for each patient. Similarly, for a Study-level remap, the value need only be filled once. If you don't fill in a required cell, the remapper will complain. If you give, for example, a Patient-level remap column multiple values for a single patient, the remapper will complain.&lt;br /&gt;
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* the remap config XML should be the same file used in 2.a, &lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d, and &lt;br /&gt;
* the list of directories is the same list of source directories from 2.b.&lt;br /&gt;
* add an anonymization script to be applied at this stage by using the -d option.&lt;br /&gt;
* first time you use a script to generate new UIDs, you'll need a new UID root; &lt;br /&gt;
** do this by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line. &lt;br /&gt;
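Putting the options from this step together, a complete invocation might look like the following. This is a dry run (the command is echoed, not executed); the file names are the ones used earlier in this section, and anon-script.das stands in for whatever anonymization script you wrote in 2.c:

```shell
# Dry run: print the combined DicomRemapper command with config (-c),
# anonymization script (-d), UID root request (-s), output dir (-o),
# and edited spreadsheet (-v), followed by the source directories.
echo DicomRemapper -c remap-config-file.xml \
  -d anon-script.das \
  -s http://nrg.wustl.edu/UIDGen \
  -o /tmp/remapped \
  -v remap.csv /data/subject1 /data/subject2
```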
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database -- it can send to a DICOM server. Below we use webservices to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data go from here into database -- via an &amp;quot;admin&amp;quot; move to the db? How labor intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For next steps using web services, use curl or XNATRestClient (See here to '''download XNATRestClient''' in xnat_tools.zip from here: http://nrg.wikispaces.com/XNAT+REST+API+Usage)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with server and create new session; use the response as a sessionID ($JSessionID) to use in subsequent queries&lt;br /&gt;
 curl -d POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password &lt;br /&gt;
or, use the XNATRestClient:&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
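For scripting the later steps it helps to capture the session token in a variable. A sketch, with the authentication call stubbed out (replace the stub body with the XNATRestClient or curl command above, which prints the token on stdout):

```shell
# Stubbed authentication: in practice get_session would run
#   XNATRestClient -host "$XNE_Svr" -u "$XNE_UserName" -p "$XNE_Password" -m POST -remote /REST/JSESSION
get_session() {
  echo 94202E5B23C1672FDF1B2D1A40173F21   # placeholder token from earlier on this page
}
JSessionID=$(get_session)
echo "session: $JSessionID"
```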
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
  XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT /REST/projects/$ProjectID/subjects/s0001 &lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
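Such a script can be a simple loop. A dry-run sketch (it echoes each command instead of executing it; $XNE_Svr, $JSessionID, and $ProjectID are the placeholders used throughout this section):

```shell
# Print the XNATRestClient call that would create subjects s0001..s0005.
# Remove the leading 'echo' to actually issue the requests.
XNE_Svr=${XNE_Svr:-https://xnat.example.org}    # hypothetical server
JSessionID=${JSessionID:-PLACEHOLDERSESSION}
ProjectID=${ProjectID:-IGT_GLIOMA}
for n in 1 2 3 4 5; do
  echo XNATRestClient -host "$XNE_Svr" -user_session "$JSessionID" \
    -m PUT -remote "/REST/projects/$ProjectID/subjects/s000$n"
done
```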
&lt;br /&gt;
'''4a. Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' No demographics are applied to each subject by default. To edit the demographics (such as gender or handedness) of a subject '''already created''', use XML Path shortcuts.&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (Append XML path shortcuts and separate each by an ''&amp;amp;''. Note that querystring parameters must be separated from the actual URI by a ?):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML Path shortcuts that can be specified on commandline for projects, subject, experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an xml file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./${ProjectID}_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create uris for scans, reconstructions and upload them.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nfti&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0344/0344.nfti&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44820</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44820"/>
		<updated>2009-11-10T19:31:34Z</updated>

		<summary type="html">&lt;p&gt;Mark: /* Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) CURRENTLY BEING DEVELOPED */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create Project in web GUI with ProjectID IGT_GLIOMA and create the custom variables that additionally describe the data. In this project, we&lt;br /&gt;
added variables for tumor size, location, description, and grade, and for patient sex and age. &lt;br /&gt;
* The value of these variables is manually entered &lt;br /&gt;
and displayed when a subject is selected. Custom variables cannot currently be used as search criteria to select a subset of the project.&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u manderson -p my-passwd -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -dest /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
* Anonymize all patient data. I tried the DicomRemapper:&lt;br /&gt;
 DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap /projects/igtcases/neuro/glioma_mrt/for_hussein/1802 -o /d/bigdaily/&lt;br /&gt;
but this fails if non-dicom files are found amongst the DICOM data:&lt;br /&gt;
 /projects/igtcases/neuro/glioma_mrt/for_hussein/1802/tumor.xml: not DICOM data&lt;br /&gt;
this will often be the case for us at BWH, so this too is not currently viable. I used the interactive DicomBrowser tool instead. This requires&lt;br /&gt;
editing a DICOM descriptor .das file for each subject, and 20 mouse-clicks to specify parameters and do the anonymizing.&lt;br /&gt;
* Upload the anonymized data. This requires making a compressed tar file of the anonymized data and running the upload process from &lt;br /&gt;
xnat Central &lt;br /&gt;
* The entire process of manually uploading and anonymizing a case takes between six and ten minutes.&lt;br /&gt;
* Several types of errors were encountered:&lt;br /&gt;
** Data entry at scan time. Subject age for one subject listed as 100 years old, though the subject was born in 1984&lt;br /&gt;
** Spreadsheet errors. All subject ages were incorrect. I used the age at date of scan. One subject listed as a 20 year-old male, but data is for a 34-year old female&lt;br /&gt;
** It is certainly possible that I made errors transcribing values from the spreadsheet.&lt;br /&gt;
&lt;br /&gt;
==Notes on derived data upload and FMRI data upload.==&lt;br /&gt;
&lt;br /&gt;
* Older non-DICOM Genesis data upload. This is done using Dave Clunie's useful dicom3tools kit. Genesis data is converted to DICOM&lt;br /&gt;
from a directory containing only pre-DICOM, Genesis-format images (named I.001 - I.xxx) with the command:&lt;br /&gt;
 ls -1 I.* | awk '{printf(&amp;quot;gentodc %s %d.dcm\n&amp;quot;,$1,NR)}' | sh&lt;br /&gt;
This will create a series of DICOM images. However, gentodc creates a unique SeriesInstanceUID for each image:&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15832.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15833.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15834.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
 (0020,000e) UI [0.0.0.0.3.1779.1.1257878261.15835.2200373056] #  44, 1 SeriesInstanceUID&lt;br /&gt;
&lt;br /&gt;
When the resulting data is &lt;br /&gt;
uploaded to xnat, each image is treated as a series. I uploaded 300 images (all part of a single study) before I realized this problem.&lt;br /&gt;
This fills the prearchive and takes a long time to clean up. As a solution, a second step is run to modify the dicom images. This is done&lt;br /&gt;
by applying the xnat remapper to the newly formed dicom data:&lt;br /&gt;
 /projects/mark/xnat/DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap -d ../tmp/sp.das -o /d/bigdaily/mark/tmp/ .&lt;br /&gt;
&lt;br /&gt;
Where file sp.das contains:&lt;br /&gt;
 (0020,000e) := &amp;quot;0.0.0.0.3.1779.1.1257878261.15832.2200373056&amp;quot;&lt;br /&gt;
which sets all images to the same SeriesInstanceUID. This remapping currently needs to be performed on a series-by-series basis.&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the GUI top &lt;br /&gt;
* Select the project from the project list. &lt;br /&gt;
* From within the Project view, Click &amp;quot;Access&amp;quot; tab and set permissions to be appropriate&lt;br /&gt;
* From within the Project view, Select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config xml file to describe the spreadsheet to be built from the DICOM data. Root element is &amp;quot;Columns&amp;quot; and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped DICOM tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of directories containing the source DICOM data (separated by spaces?)&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the study and series InstanceUID tags are needed  (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve pt age&lt;br /&gt;
 // - (0010,0040) preserve pt sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 //- (0010,0040)&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values. In some cases, some cells in the spreadsheet can be left blank: for a Patient-level remap, one value must be specified for each patient; if the spreadsheet contains multiple rows for each patient, the column needs only be filled in one row for each patient. Similarly, for a Study-level remap, the value need only be filled once. If you don't fill in a required cell, the remapper will complain. If you give, for example, a Patient-level remap column multiple values for a single patient, the remapper will complain.&lt;br /&gt;
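As an illustration -- with entirely hypothetical values, and a simplified column layout, since the exact identifying columns DicomSummarize emits may differ -- a remap.csv covering one patient with two series might be filled in like this, with the Patient-level remap cells supplied only on the first row for that patient:&lt;br /&gt;

```csv
Patient Name,PatientID,Anon Patient Name,Anon PatientID,Study,Series
Doe^Jane,12345,case143,case143,MR1,2
Doe^Jane,12345,,,MR1,5
```

The blank cells on the second row inherit the Patient-level values from the first row for that patient.&lt;br /&gt;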
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* the remap config XML should be the same file used in 2.a,&lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d, and&lt;br /&gt;
* the list of directories is the same list of source directories passed in 2.b.&lt;br /&gt;
* add an anonymization script to be applied at this stage by using the -d option.&lt;br /&gt;
* the first time you use a script to generate new UIDs, you'll need a new UID root;&lt;br /&gt;
** do this by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line.&lt;br /&gt;
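Putting those options together, a dry-run sketch of the full command line (the anonymization-script name anon.das and all paths here are hypothetical placeholders; the helper only prints the command rather than executing it):&lt;br /&gt;

```shell
# Assemble the full DicomRemapper invocation described above (dry run).
# anon.das is a hypothetical anonymization script from step 2.c.
build_remap_cmd() {
  # $1 = output directory, $2 = remap spreadsheet; remaining args = source dirs
  out=$1; csv=$2; shift 2
  printf 'DicomRemapper -c remap-config-file.xml -d anon.das -s http://nrg.wustl.edu/UIDGen -o %s -v %s %s\n' \
    "$out" "$csv" "$*"
}
build_remap_cmd /tmp/anon-out remap.csv /data/dicom/dir1 /data/dicom/dir2
```

Remove the helper and run the printed command directly once the paths are real.&lt;br /&gt;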
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database -- it can send to a DICOM server. Below we use web services to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data go from here into the database -- via an &amp;quot;admin&amp;quot; move to the db? How labor-intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the next steps using web services, use curl or XNATRestClient ('''download XNATRestClient''' in xnat_tools.zip from http://nrg.wikispaces.com/XNAT+REST+API+Usage)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with the server and create a new session; use the response as the session ID ($JSessionID) in subsequent queries&lt;br /&gt;
 curl -X POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password &lt;br /&gt;
 or, use the XNATRestClient&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
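To capture the returned session ID in a shell variable for the later commands, a sketch (assuming, as the step above implies, that the server returns the bare session token as the response body):&lt;br /&gt;

```shell
# get_session_id: hypothetical helper that POSTs to /REST/JSESSION and
# prints the session token returned in the response body.
get_session_id() {
  # $1 = server URL, $2 = user name, $3 = password
  curl -s -X POST "$1/REST/JSESSION" -u "$2:$3"
}
# Usage (commented out -- requires a live server):
# JSessionID=$(get_session_id "$XNE_Svr" "$XNE_UserName" "$XNE_Password")
```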
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote /REST/projects/$ProjectID/subjects/s0001&lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
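A sketch of such a script (dry run: the commands are printed rather than executed, and the server and session values are the placeholders used above; remove the leading echo to actually execute each call):&lt;br /&gt;

```shell
# Print one subject-creation command per subject (s0001..s0005).
make_subject_uri() {
  # $1 = project ID, $2 = subject number
  printf '/REST/projects/%s/subjects/s%04d' "$1" "$2"
}
ProjectID=IGT_GLIOMA
for n in 1 2 3 4 5; do
  echo XNATRestClient -host "$XNE_Svr" -user_session "$JSessionID" \
       -m PUT -remote "$(make_subject_uri "$ProjectID" "$n")"
done
```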
&lt;br /&gt;
'''4.a. Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' No demographics are applied to a subject by default. To edit the demographics (like gender or handedness) of a subject '''already created''', use XML Path shortcuts:&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (append the XML path shortcuts, separating each with an ''&amp;amp;''; note that querystring parameters must be separated from the actual URI by a ?):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML Path shortcuts that can be specified on commandline for projects, subject, experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an XML file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./${ProjectID}_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create URIs for scans and reconstructions, and upload them.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nfti&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T2_0344/0344.nfti&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
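Uploading a scan's DICOM slices one command at a time is tedious; a loop over the files can generate the calls. A dry-run sketch (the helper is hypothetical, the paths mirror the examples above, and the content querystring parameter is omitted here for brevity):&lt;br /&gt;

```shell
# Print one upload command per .dcm file in a scan directory (dry run).
build_upload_cmd() {
  # $1 = scan ID, $2 = local DICOM file; server/session/project/subject/
  # experiment variables are the placeholders used in the steps above.
  f=$(basename "$2")
  printf 'XNATRestClient -host %s -user_session %s -m PUT -remote "/REST/projects/%s/subjects/%s/experiments/%s/scans/%s/files/%s?format=DICOM" -local %s\n' \
    "$XNE_Svr" "$JSessionID" "$ProjectID" "$SubjectID" "$ExperimentID" "$1" "$f" "$2"
}
for f in /data/subject1/session1/RAW/SCAN1/*.dcm; do
  build_upload_cmd SCAN1 "$f"
done
```

Pipe the output to sh once the placeholder variables are set and the commands look right.&lt;br /&gt;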
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44819</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=44819"/>
		<updated>2009-11-10T18:33:31Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create Project in web GUI with ProjectID IGT_GLIOMA and create the custom variables that additionally describe the data. In this project, we&lt;br /&gt;
added variables for tumor size, location, description and grade and patient sex and age. &lt;br /&gt;
* The value of these variables is manually entered &lt;br /&gt;
and displayed when a subject is selected. Custom variables cannot currently be used as search criteria to select a subset of the project.&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u manderson -p my-passwd -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -dest /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
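A sketch of that automation (dry run: the commands are printed, not executed; the case IDs and session ID are the example values above):&lt;br /&gt;

```shell
# Print one subject-creation command per case (dry run).
subject_dest() {
  # $1 = case ID; build the REST destination path for the IGT_GLIOMA project
  printf '/REST/projects/IGT_GLIOMA/subjects/%s' "$1"
}
for c in case1 case2 case3; do
  echo XNATRestClient -host "$XNE_Svr" -user_session 94202E5B23C1672FDF1B2D1A40173F21 \
       -m PUT -dest "$(subject_dest "$c")"
done
```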
* Anonymize all patient data. I tried the DicomRemapper:&lt;br /&gt;
 DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap /projects/igtcases/neuro/glioma_mrt/for_hussein/1802 -o /d/bigdaily/&lt;br /&gt;
but this fails if non-DICOM files are found amongst the DICOM data:&lt;br /&gt;
 /projects/igtcases/neuro/glioma_mrt/for_hussein/1802/tumor.xml: not DICOM data&lt;br /&gt;
this will often be the case for us at BWH, so this too is not currently viable. I used the interactive DicomBrowser tool. This requires&lt;br /&gt;
editing a DICOM descriptor .das file for each subject, and 20 mouse-clicks to specify parameters and do the anonymizing.&lt;br /&gt;
* Upload the anonymized data. This requires making a compressed tar file of the anonymized data and running the upload process from &lt;br /&gt;
XNAT Central &lt;br /&gt;
* The entire process of manually uploading and anonymizing a case takes between six and ten minutes.&lt;br /&gt;
* Several types of errors were encountered:&lt;br /&gt;
** Data entry errors at scan time: one subject's age was listed as 100 years old, though the subject was born in 1984.&lt;br /&gt;
** Spreadsheet errors. All subject ages were incorrect; I used the age at the date of scan. One subject was listed as a 20-year-old male, but the data is for a 34-year-old female.&lt;br /&gt;
** It is certainly possible that I made errors transcribing values from the spreadsheet.&lt;br /&gt;
&lt;br /&gt;
== Notes on derived data upload and FMRI data upload ==&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the GUI top &lt;br /&gt;
* Select the project from the project list. &lt;br /&gt;
* From within the Project view, Click &amp;quot;Access&amp;quot; tab and set permissions to be appropriate&lt;br /&gt;
* From within the Project view, Select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config XML file to describe the spreadsheet to be built from the DICOM data. The root element is &amp;quot;Columns&amp;quot; and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped DICOM tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of one or more directories containing the source DICOM data, separated by spaces.&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the Study and Series InstanceUID tags are needed  (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve pt age&lt;br /&gt;
 // - (0010,0040) preserve pt sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 //- (0010,0040)&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values. In some cases, some cells in the spreadsheet can be left blank: for a Patient-level remap, one value must be specified for each patient; if the spreadsheet contains multiple rows for each patient, the column needs only be filled in one row for each patient. Similarly, for a Study-level remap, the value need only be filled once. If you don't fill in a required cell, the remapper will complain. If you give, for example, a Patient-level remap column multiple values for a single patient, the remapper will complain.&lt;br /&gt;
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* the remap config XML should be the same file used in 2.a,&lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d, and&lt;br /&gt;
* the list of directories is the same list of source directories passed in 2.b.&lt;br /&gt;
* add an anonymization script to be applied at this stage by using the -d option.&lt;br /&gt;
* the first time you use a script to generate new UIDs, you'll need a new UID root;&lt;br /&gt;
** do this by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database -- it can send to a DICOM server. Below we use web services to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data go from here into the database -- via an &amp;quot;admin&amp;quot; move to the db? How labor-intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the next steps using web services, use curl or XNATRestClient ('''download XNATRestClient''' in xnat_tools.zip from http://nrg.wikispaces.com/XNAT+REST+API+Usage)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with the server and create a new session; use the response as the session ID ($JSessionID) in subsequent queries&lt;br /&gt;
 curl -X POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password &lt;br /&gt;
 or, use the XNATRestClient&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote /REST/projects/$ProjectID/subjects/s0001&lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
&lt;br /&gt;
'''4.a. Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' No demographics are applied to a subject by default. To edit the demographics (like gender or handedness) of a subject '''already created''', use XML Path shortcuts:&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (append the XML path shortcuts, separating each with an ''&amp;amp;''; note that querystring parameters must be separated from the actual URI by a ?):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML Path shortcuts that can be specified on commandline for projects, subject, experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an XML file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./${ProjectID}_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create URIs for scans and reconstructions, and upload them.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nfti&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T2_0344/0344.nfti&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=42677</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=42677"/>
		<updated>2009-09-14T21:34:07Z</updated>

		<summary type="html">&lt;p&gt;Mark: /* Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) CURRENTLY BEING DEVELOPED */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create a project in the web GUI with ProjectID IGT_GLIOMA and create the custom variables that additionally describe the data. In this project, we&lt;br /&gt;
added variables for tumor size, location, description, and grade, and for patient sex and age.&lt;br /&gt;
* The value of these variables is manually entered &lt;br /&gt;
and displayed when a subject is selected. Custom variables cannot currently be used as search criteria to select a subset of the project.&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u manderson -p my-passwd -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -dest /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
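For example, the per-subject REST paths can be generated in a small shell loop, with each path passed to XNATRestClient as in the command above (a sketch; the case numbering is hypothetical):

```shell
# Print the PUT path for each subject; in practice, pass each path to
# XNATRestClient -m PUT with the session ID, as in the command above.
for i in 1 2 3; do
  printf '/REST/projects/IGT_GLIOMA/subjects/case%d\n' "$i"
done
```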
* Anonymize all patient data. I tried the DicomRemapper:&lt;br /&gt;
 DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap /projects/igtcases/neuro/glioma_mrt/for_hussein/1802 -o /d/bigdaily/&lt;br /&gt;
but this fails if non-DICOM files are found amongst the DICOM data:&lt;br /&gt;
 /projects/igtcases/neuro/glioma_mrt/for_hussein/1802/tumor.xml: not DICOM data&lt;br /&gt;
This will often be the case for us at BWH, so this too is not currently viable. I used the interactive DicomBrowser tool instead. This requires&lt;br /&gt;
editing a DICOM descriptor .das file for each subject, and 20 mouse-clicks to specify parameters and do the anonymizing.&lt;br /&gt;
* Upload the anonymized data. This requires making a compressed tar file of the anonymized data and running the upload process from &lt;br /&gt;
XNAT Central.&lt;br /&gt;
* The entire process of manually uploading and anonymizing a case takes between six and ten minutes.&lt;br /&gt;
* Several types of errors were encountered:&lt;br /&gt;
** Data entry at scan time. Subject age for one subject was listed as 100 years old, though the subject was born in 1984.&lt;br /&gt;
** Spreadsheet errors. All subject ages were incorrect; I used the age at the date of scan. One subject was listed as a 20-year-old male, but the data is for a 34-year-old female.&lt;br /&gt;
** It is certainly possible that I made errors transcribing values from the spreadsheet.&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the GUI top &lt;br /&gt;
* Select the project from the project list. &lt;br /&gt;
* From within the Project view, Click &amp;quot;Access&amp;quot; tab and set permissions to be appropriate&lt;br /&gt;
* From within the Project view, Select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config XML file to describe the spreadsheet to be built from the DICOM data. The root element is &amp;quot;Columns&amp;quot; and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped DICOM tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of directories containing the source DICOM data, separated by spaces.&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the study and series InstanceUID tags are needed  (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve pt age&lt;br /&gt;
 // - (0010,1040) preserve pt sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 - (0010,0040)&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values. Some cells can be left blank: for a Patient-level remap, one value must be specified for each patient; if the spreadsheet contains multiple rows for a patient, the column need only be filled in one of those rows. Similarly, for a Study-level remap, the value need only be filled in once per study. The remapper will complain if a required cell is empty, or if, for example, a Patient-level remap column is given multiple values for a single patient.&lt;br /&gt;
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* the remap config XML should be the same file used in 2.a,&lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d, and&lt;br /&gt;
* the list of directories is the same list of source directories from 2.b.&lt;br /&gt;
* An anonymization script can be applied at this stage with the -d option.&lt;br /&gt;
* The first time you use a script to generate new UIDs, you'll need a new UID root;&lt;br /&gt;
** do this by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line.&lt;br /&gt;
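Putting those options together, a full invocation might look like the following (a sketch; anon-script.das and the directory paths are placeholder names):

```shell
# Echo the assembled DicomRemapper command so it can be inspected before
# running it against real data; remove the echo to actually execute it.
echo DicomRemapper -c remap-config-file.xml -d anon-script.das \
  -s http://nrg.wustl.edu/UIDGen -o /path/to/output -v remap.csv /path/to/dicom-dir
```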
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database -- it can send to a DICOM server. Below we use web services to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data go from here into the database -- via an &amp;quot;admin&amp;quot; move to the db? How labor-intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For next steps using web services, use curl or XNATRestClient ('''download XNATRestClient''' in xnat_tools.zip from http://nrg.wikispaces.com/XNAT+REST+API+Usage)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with server and create new session; use the response as a sessionID ($JSessionID) to use in subsequent queries&lt;br /&gt;
 curl -X POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password&lt;br /&gt;
 or use the XNATRestClient:&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
  XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote /REST/projects/$ProjectID/subjects/s0001&lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
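A minimal sketch of such a script, assuming one subject ID per line in a file (subjects.txt is a hypothetical name):

```shell
# Print the PUT path for each subject listed in subjects.txt; in practice,
# each path is passed to XNATRestClient -m PUT -remote as shown above.
ProjectID=${ProjectID:-IGT_GLIOMA}   # fallback project ID for illustration
printf 's0001\ns0002\ns0003\n' > subjects.txt
cat subjects.txt | while read -r sid; do
  printf '/REST/projects/%s/subjects/%s\n' "$ProjectID" "$sid"
done
```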
&lt;br /&gt;
'''4a. Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' No demographics are applied to a subject by default. To edit the demographics (like gender or handedness) of a subject '''already created''', use XML Path shortcuts:&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (Append XML path shortcuts and separate each by an ''&amp;amp;''. Note that querystring parameters must be separated from the actual URI by a ?):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML Path shortcuts that can be specified on commandline for projects, subject, experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an XML file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./${ProjectID}_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create URIs for scans and reconstructions, and upload the files.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc.) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nfti&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0344/0344.nfti&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
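The one-file-at-a-time uploads above generalize to a loop over a scan directory (a sketch: it only prints each remote URI and local path pair, which would then be passed to XNATRestClient -m PUT -remote ... -local; the variables are assumed to be set as above):

```shell
# For each DICOM file in the scan directory, print the remote REST URI and the
# matching local file; feed each pair to XNATRestClient -m PUT -remote ... -local.
# (Append the content=... querystring parameter as in the commands above.)
dir=/data/subject1/session1/RAW/SCAN1   # example path from the steps above
for f in "$dir"/*.dcm; do
  name=$(basename "$f")
  printf '%s %s\n' "/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/$name?format=DICOM" "$f"
done
```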
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=42670</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=42670"/>
		<updated>2009-09-11T23:09:17Z</updated>

		<summary type="html">&lt;p&gt;Mark: /* Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) CURRENTLY BEING DEVELOPED */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create a project in the web GUI with ProjectID IGT_GLIOMA and create the custom variables that additionally describe the data. In this project, we&lt;br /&gt;
added variables for tumor size, location, description, and grade, and for patient sex and age.&lt;br /&gt;
* The value of these variables is manually entered &lt;br /&gt;
and displayed when a subject is selected. Custom variables cannot currently be used as search criteria to select a subset of the project.&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u manderson -p my-passwd -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -dest /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
* Anonymize all patient data. I tried the DicomRemapper:&lt;br /&gt;
 DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap /projects/igtcases/neuro/glioma_mrt/for_hussein/1802 -o /d/bigdaily/&lt;br /&gt;
but this fails if non-DICOM files are found amongst the DICOM data:&lt;br /&gt;
 /projects/igtcases/neuro/glioma_mrt/for_hussein/1802/tumor.xml: not DICOM data&lt;br /&gt;
This will often be the case for us at BWH, so this too is not currently viable. I used the interactive DicomBrowser tool instead. This requires&lt;br /&gt;
editing a DICOM descriptor .das file for each subject, and 20 mouse-clicks to specify parameters and do the anonymizing.&lt;br /&gt;
* Upload the anonymized data. This requires making a compressed tar file of the anonymized data and running the upload process from &lt;br /&gt;
XNAT Central.&lt;br /&gt;
* The entire process of manually uploading and anonymizing a case takes between six and ten minutes.&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the GUI top &lt;br /&gt;
* Select the project from the project list. &lt;br /&gt;
* From within the Project view, Click &amp;quot;Access&amp;quot; tab and set permissions to be appropriate&lt;br /&gt;
* From within the Project view, Select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config XML file to describe the spreadsheet to be built from the DICOM data. The root element is &amp;quot;Columns&amp;quot; and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped DICOM tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of directories containing the source DICOM data, separated by spaces.&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the study and series InstanceUID tags are needed  (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve pt age&lt;br /&gt;
 // - (0010,1040) preserve pt sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 - (0010,0040)&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values. Some cells can be left blank: for a Patient-level remap, one value must be specified for each patient; if the spreadsheet contains multiple rows for a patient, the column need only be filled in one of those rows. Similarly, for a Study-level remap, the value need only be filled in once per study. The remapper will complain if a required cell is empty, or if, for example, a Patient-level remap column is given multiple values for a single patient.&lt;br /&gt;
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* the remap config XML should be the same file used in 2.a,&lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d, and&lt;br /&gt;
* the list of directories is the same list of source directories from 2.b.&lt;br /&gt;
* An anonymization script can be applied at this stage with the -d option.&lt;br /&gt;
* The first time you use a script to generate new UIDs, you'll need a new UID root;&lt;br /&gt;
** do this by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database -- it can send to a DICOM server. Below we use web services to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data go from here into the database -- via an &amp;quot;admin&amp;quot; move to the db? How labor-intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For next steps using web services, use curl or XNATRestClient ('''download XNATRestClient''' in xnat_tools.zip from http://nrg.wikispaces.com/XNAT+REST+API+Usage)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with server and create new session; use the response as a sessionID ($JSessionID) to use in subsequent queries&lt;br /&gt;
 curl -X POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password&lt;br /&gt;
 or use the XNATRestClient:&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
  XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote /REST/projects/$ProjectID/subjects/s0001&lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
&lt;br /&gt;
'''4a. Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' No demographics are applied to each subject by default. To edit the demographics (like gender or handedness) of a subject '''already created''', use XML Path shortcuts.&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (Append XML path shortcuts and separate each by an ''&amp;amp;''. Note that querystring parameters must be separated from the actual URI by a ?):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML Path shortcuts that can be specified on commandline for projects, subject, experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
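The joining rule above can be sketched in shell: one ? before the first shortcut, then &amp; between each subsequent one ($ProjectID is assumed to be set):

```shell
# Build the PUT target: base resource URI + '?' + shortcuts joined by '&'.
BASE="/REST/projects/$ProjectID/subjects/s0001"
GENDER='xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male'
HAND='xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left'
URI="$BASE?$GENDER&$HAND"
echo "$URI"
```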
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an xml file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./$ProjectID_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
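A sketch for generating such a file from shell variables (the ProjectID default and all demographic values here are illustrative placeholders; only dob and gender are shown):

```shell
# Write a minimal subject XML from variables, ready to PUT with -local.
# ProjectID falls back to a placeholder if not already set.
ProjectID=${ProjectID:-DEMO}
SUBJ=s0002; DOB=1990-09-08; GENDER=female
cat > "./${ProjectID}_${SUBJ}.xml" <<EOF
<xnat:Subject ID="$SUBJ" project="$ProjectID" group="control"
    xmlns:xnat="http://nrg.wustl.edu/xnat"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <xnat:demographics xsi:type="xnat:demographicData">
        <xnat:dob>$DOB</xnat:dob>
        <xnat:gender>$GENDER</xnat:gender>
    </xnat:demographics>
</xnat:Subject>
EOF
```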
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create URIs for scans and reconstructions, and upload them.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc.) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nfti&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T2_0344/0344.nfti&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
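The per-file PUTs in step 5 can be scripted; a sketch that walks one scan directory (the directory path and content label are placeholders; remove the echo to execute):

```shell
# Emit one upload command per .dcm file in a scan directory.
SCAN_DIR=${SCAN_DIR:-/data/subject1/session1/RAW/SCAN1}   # placeholder path
for f in "$SCAN_DIR"/*.dcm; do
  [ -e "$f" ] || continue               # skip if the glob matched nothing
  name=$(basename "$f")
  echo XNATRestClient -host "$XNE_Svr" -user_session "$JSessionID" -m PUT \
    -remote "/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/$name?format=DICOM&content=T1_RAW" \
    -local "$f"
done
```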
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=42641</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=42641"/>
		<updated>2009-09-10T22:25:00Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create Project in web GUI with ProjectID IGT_GLIOMA&lt;br /&gt;
* Create Custom Variables for each subject that is to be part of project IGT_GLIOMA - The value of these variables is manually entered &lt;br /&gt;
and displayed when a subject is selected. Custom variables cannot currently be used as search criteria to select a subset of the project.&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u manderson -p my-passwd -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -remote /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
* Anonymize all patient data. I tried the DicomRemapper:&lt;br /&gt;
 DicomBrowser-1.5-SNAPSHOT/bin/DicomRemap /projects/igtcases/neuro/glioma_mrt/for_hussein/1802 -o /d/bigdaily/&lt;br /&gt;
but this fails if non-dicom files are found amongst the DICOM data:&lt;br /&gt;
 /projects/igtcases/neuro/glioma_mrt/for_hussein/1802/tumor.xml: not DICOM data&lt;br /&gt;
This will often be the case for us at BWH, so this too is not currently viable.&lt;br /&gt;
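One workaround sketch (not a documented option of the tools): stage only files carrying the DICOM part-10 magic bytes ("DICM" at byte offset 128) into a clean directory first, then point DicomRemap at that. The paths here are placeholders:

```shell
# Copy only DICOM part-10 files into $DST; tumor.xml and friends are skipped.
SRC=${SRC:-/tmp/dicom-src}
DST=${DST:-/tmp/dicom-only}
mkdir -p "$DST"
for f in "$SRC"/*; do
  [ -f "$f" ] || continue
  # Part-10 DICOM files have the 4-byte magic "DICM" at byte offset 128.
  if [ "$(dd if="$f" bs=1 skip=128 count=4 2>/dev/null)" = "DICM" ]; then
    cp "$f" "$DST"/
  fi
done
```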
&lt;br /&gt;
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the GUI top &lt;br /&gt;
* Select the project from the project list. &lt;br /&gt;
* From within the Project view, Click &amp;quot;Access&amp;quot; tab and set permissions to be appropriate&lt;br /&gt;
* From within the Project view, Select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config XML file describing the spreadsheet to be built from the DICOM data. The root element is &amp;quot;Columns&amp;quot; and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped DICOM tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of directories containing the source DICOM data (separated by spaces?)&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the study and series InstanceUID tags are needed  (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve pt age&lt;br /&gt;
 // - (0010,1040) preserve pt sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 - (0010,0040)&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values. In some cases, some cells in the spreadsheet can be left blank: for a Patient-level remap, one value must be specified for each patient; if the spreadsheet contains multiple rows for each patient, the column needs only be filled in one row for each patient. Similarly, for a Study-level remap, the value need only be filled once. If you don't fill in a required cell, the remapper will complain. If you give, for example, a Patient-level remap column multiple values for a single patient, the remapper will complain.&lt;br /&gt;
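A hypothetical excerpt illustrating the fill rule (remap column names follow the 2.a example; the exact identifier columns DicomSummarize actually emits may differ). The Patient-level &amp;quot;Anon PatientID&amp;quot; need only be filled once per patient, so the second row for pt1 can stay blank:

```
(0010,0020),(0020,000D),(0020,0011),Anon Patient Name,Anon PatientID
pt1,study1,1,case101,case101
pt1,study2,2,,
pt2,study1,1,case102,case102
```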
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* the remap config XML should be the same file used in 2.a, &lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d, and &lt;br /&gt;
* list of directories is the same list of source directories used in 2.b.&lt;br /&gt;
* add an anonymization script to be applied at this stage by using the -d option.&lt;br /&gt;
* first time you use a script to generate new UIDs, you'll need a new UID root; &lt;br /&gt;
** do this by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database -- it can send to a DICOM server. Below we use web services to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data go from here into the database -- via an &amp;quot;admin&amp;quot; move into the db? How labor intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For next steps using web services, use curl or XNATRestClient ('''download XNATRestClient''' in xnat_tools.zip from http://nrg.wikispaces.com/XNAT+REST+API+Usage)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with server and create new session; use the response as a sessionID ($JSessionID) to use in subsequent queries&lt;br /&gt;
 curl -X POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password &lt;br /&gt;
 or, use the XNATRestClient&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote /REST/projects/$ProjectID/subjects/s0001 &lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
&lt;br /&gt;
'''4a. Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' No demographics are applied to each subject by default. To edit the demographics (like gender or handedness) of a subject '''already created''', use XML Path shortcuts.&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (Append XML path shortcuts and separate each by an ''&amp;amp;''. Note that querystring parameters must be separated from the actual URI by a ?):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML Path shortcuts that can be specified on commandline for projects, subject, experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an xml file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./$ProjectID_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create URIs for scans and reconstructions, and upload them.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc.) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nfti&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nfti?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T2_0344/0344.nfti&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=42426</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=42426"/>
		<updated>2009-09-10T18:25:35Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create Project in web GUI with ProjectID IGT_GLIOMA&lt;br /&gt;
* Create Custom Variables for each subject that is to be part of project IGT_GLIOMA - The value of these variables is manually entered &lt;br /&gt;
and displayed when a subject is selected. Custom variables cannot currently be used as search criteria to select a subset of the project.&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u manderson -p my-passwd -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -remote /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the GUI top &lt;br /&gt;
* Select the project from the project list. &lt;br /&gt;
* From within the Project view, Click &amp;quot;Access&amp;quot; tab and set permissions to be appropriate&lt;br /&gt;
* From within the Project view, Select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config XML file describing the spreadsheet to be built from the DICOM data. The root element is &amp;quot;Columns&amp;quot; and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped DICOM tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of directories containing the source DICOM data (separated by spaces?)&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the study and series InstanceUID tags are needed  (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve pt age&lt;br /&gt;
 // - (0010,1040) preserve pt sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 - (0010,0040)&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values, though some cells can be left blank: for a Patient-level remap, one value must be specified for each patient, but if the spreadsheet contains multiple rows for a patient, the column need only be filled in one of that patient's rows. Similarly, for a Study-level remap, the value need only be filled in once per study. The remapper will complain if you leave a required cell empty, or if you give a Patient-level remap column multiple values for a single patient.&lt;br /&gt;
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* the remap config XML should be the same file used in 2.a, &lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d, and &lt;br /&gt;
* the list of directories is the same list of source directories from 2.b.&lt;br /&gt;
* add an anonymization script to be applied at this stage by using the -d option.&lt;br /&gt;
* the first time you use a script that generates new UIDs, you'll need a new UID root; &lt;br /&gt;
** do this by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line. &lt;br /&gt;
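Putting those options together, a full invocation might look like the sketch below. The anonymization script name (anon-script.das) and all paths are hypothetical placeholders; the command is built into a string and echoed as a dry run.&lt;br /&gt;

```shell
# Dry run: assemble the full DicomRemapper command from the bullets above.
# anon-script.das, /data/anon-output, and the source directories are
# hypothetical placeholders, not paths from this workflow.
cmd="DicomRemapper -c remap-config-file.xml -o /data/anon-output -v remap.csv -d anon-script.das -s http://nrg.wustl.edu/UIDGen /data/src/dicom-1 /data/src/dicom-2"
echo "$cmd"
```

Drop the quoting and run the assembled command directly once the paths point at real data.&lt;br /&gt;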
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database -- it can send to a DICOM server. Below we use web services to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data go from here into the database -- via an &amp;quot;admin&amp;quot; move to the db? How labor intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the next steps using web services, use curl or XNATRestClient ('''download XNATRestClient''' in xnat_tools.zip from http://nrg.wikispaces.com/XNAT+REST+API+Usage)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with the server and create a new session; use the response as the session ID ($JSessionID) in subsequent queries&lt;br /&gt;
 curl -X POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password &lt;br /&gt;
 or, use the XNATRestClient&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
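A minimal shell sketch of this step (server and credentials are hypothetical placeholders; the live curl call is left commented out so the snippet runs without a server):&lt;br /&gt;

```shell
# Capture the JSESSION token returned in the response body, then pass it
# to later calls as $JSessionID. Host and credentials are placeholders.
XNE_Svr="https://xnat.example.org"
XNE_UserName="alice"
XNE_Password="secret"
auth_cmd="curl -s -X POST -u $XNE_UserName:$XNE_Password $XNE_Svr/REST/JSESSION"
# JSessionID=$($auth_cmd)   # uncomment against a live server
echo "$auth_cmd"
```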
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
  XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote /REST/projects/$ProjectID/subjects/s0001 &lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
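For example, a loop like the following sketch prints one PUT per subject ID (the host is a hypothetical placeholder; pipe the output to sh to execute it against a live server):&lt;br /&gt;

```shell
# Dry run: print a subject-creation command for s0001..s0005.
XNE_Svr="https://xnat.example.org"   # hypothetical host
ProjectID="IGT_GLIOMA"
ids=""
n=1
while [ "$n" -le 5 ]; do
  id=$(printf 's%04d' "$n")    # zero-padded ID, e.g. s0001
  ids="$ids$id "
  echo "XNATRestClient -host $XNE_Svr -user_session \$JSessionID -m PUT -remote /REST/projects/$ProjectID/subjects/$id"
  n=$((n + 1))
done
```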
&lt;br /&gt;
'''4a. Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' No demographics are applied to each subject by default. To edit the demographics (like gender or handedness) of a subject '''already created''', use XML Path shortcuts.&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (Append XML path shortcuts and separate each by an ''&amp;amp;''. Note that querystring parameters must be separated from the actual URI by a ?):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML Path shortcuts that can be specified on commandline for projects, subject, experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an xml file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./${ProjectID}_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create URIs for scans and reconstructions, and upload them.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc.) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nii?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nii&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nii?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T2_0344/0344.nii&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
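The per-file uploads above can also be scripted. The sketch below prints one upload command per .dcm file in a scan directory (the directory and files are invented for the example, and commands are echoed rather than executed; append the content query parameter as in the examples above when running for real):&lt;br /&gt;

```shell
# Dry run: emit one upload command per DICOM file in a scan directory.
src=$(mktemp -d)   # stand-in for /data/subject1/session1/RAW/SCAN1
touch "$src/001.dcm" "$src/002.dcm"
count=0
for f in "$src"/*.dcm; do
  name=$(basename "$f")
  echo "XNATRestClient -host \$XNE_Svr -user_session \$JSessionID -m PUT -remote \"/REST/projects/\$ProjectID/subjects/\$SubjectID/experiments/\$ExperimentID/scans/SCAN1/files/$name?format=DICOM\" -local $f"
  count=$((count + 1))
done
```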
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=42407</id>
		<title>CTSC DataManagementWorkflow</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_DataManagementWorkflow&amp;diff=42407"/>
		<updated>2009-09-10T17:07:24Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ CTSC_Imaging_Informatics_Initiative#Current_Status | &amp;lt;&amp;lt; back to CTSC Imaging Informatics Initiative ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option A. interactive upload using various tools and web gui (Mark Anderson) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create Project in web GUI with ProjectID IGT_GLIOMA&lt;br /&gt;
* Get User session id with: &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
* Use session ID to create all subjects, e.g. &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -remote /REST/projects/IGT_GLIOMA/subjects/case3&lt;br /&gt;
This can be automated for lots of subjects.&lt;br /&gt;
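A sketch of that automation (dry run: commands are printed, not executed; the host is a hypothetical placeholder and the session ID is the example value above):&lt;br /&gt;

```shell
# Dry run: print one subject-creation command per case.
XNE_Svr="https://xnat.example.org"   # hypothetical host
created=""
for n in 1 2 3; do
  echo "XNATRestClient -host $XNE_Svr -user_session 94202E5B23C1672FDF1B2D1A40173F21 -m PUT -remote /REST/projects/IGT_GLIOMA/subjects/case$n"
  created="$created case$n"
done
```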
==Target Data Management Process Option B. batch scripted upload via DICOM Server (Yong Gao) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
* Create new project using web GUI&lt;br /&gt;
* Manage project using web GUI: Configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
* Create a subject template (download from web GUI)&lt;br /&gt;
* Create a spreadsheet conforming to subject template&lt;br /&gt;
* Upload spreadsheet using web GUI to create subjects&lt;br /&gt;
&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
* Run CLI Tool for batch anonymization (See here for HowTo:  http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)&lt;br /&gt;
* Need pointer for script to do batch upload &amp;amp; apply DICOM metadata.&lt;br /&gt;
* Confirm data is uploaded &amp;amp; represented properly with web GUI&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process Option C. batch scripted upload via web services  (Wendy Plesniak) '''CURRENTLY BEING DEVELOPED'''==&lt;br /&gt;
&lt;br /&gt;
'''1. Create new project on XNAT instance using web GUI'''&lt;br /&gt;
&lt;br /&gt;
* Create a new project by selecting the New button at the GUI top &lt;br /&gt;
* Select the project from the project list. &lt;br /&gt;
* From within the Project view, Click &amp;quot;Access&amp;quot; tab and set permissions to be appropriate&lt;br /&gt;
* From within the Project view, Select the &amp;quot;Manage&amp;quot; tab and configure settings to automatically place data into the archive (no pre-archive)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Content for Batch anonymize &amp;amp; upload script(s) ===&lt;br /&gt;
'''2. Batch Anonymize your local data (STILL TESTING)'''&lt;br /&gt;
* The approach to writing anonymization scripts is here: http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp&lt;br /&gt;
* See description for batch anonymization here: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html&lt;br /&gt;
* Download and install commandline tools: http://nrg.wustl.edu/projects/DICOM/DicomBrowser-cli.html &lt;br /&gt;
&lt;br /&gt;
'''2.a''' Create a remapping config xml file to describe the spreadsheet to be built from the DICOM data. Root element is &amp;quot;Columns&amp;quot; and each subelement describes a column in the spreadsheet:&lt;br /&gt;
&lt;br /&gt;
* tag = DICOM tag&lt;br /&gt;
* level = (global, patient, study, series) describes the level at which the remapping is applied&lt;br /&gt;
&lt;br /&gt;
An example is:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Columns&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Fixed Institution Name&amp;quot;&amp;gt;(0008,0080)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Global remap=&amp;quot;Anon Requesting Physician&amp;quot;&amp;gt;(0032,1032)&amp;lt;/Global&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Name&amp;quot;&amp;gt;(0010,0010)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon PatientID&amp;quot;&amp;gt;(0010,0020)&amp;lt;/Patient&amp;gt; &lt;br /&gt;
  &amp;lt;Patient remap=&amp;quot;Anon Patient Address&amp;quot;&amp;gt;(0010,1040)&amp;lt;/Patient&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0020,0010)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Study&amp;gt;(0008,0020)&amp;lt;/Study&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0020,0011)&amp;lt;/Series&amp;gt;&lt;br /&gt;
  &amp;lt;Series&amp;gt;(0008,0031)&amp;lt;/Series&amp;gt;&lt;br /&gt;
 &amp;lt;/Columns&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''2.b''' Generate a spreadsheet from the data that includes the remapped DICOM tags:&lt;br /&gt;
   DicomSummarize -c remap-config-file.xml -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
The arguments in brackets are a '''list''' of directories containing the source DICOM data, separated by spaces.&lt;br /&gt;
&lt;br /&gt;
'''2.c''' Write an anonymization script for any simple changes, such as deleting an attribute, or setting an attribute value to either a fixed value or a simple function of other attribute values in the same file. Here, make sure to remove patient address and requesting physician as noted, plus whatever else you'd like (recommendations?)&lt;br /&gt;
&lt;br /&gt;
See http://nrg.wustl.edu/projects/DICOM/AnonScript.jsp for detailed information about writing anonymization scripts. Here's a script written by Mark during his testing.&lt;br /&gt;
 // removes all attributes specified in the&lt;br /&gt;
 //  DICOM Basic Application Level Confidentiality Profile&lt;br /&gt;
 // mark@bwh.harvard.edu added the following tags:&lt;br /&gt;
 // (0010,1040) PatientsAddress&lt;br /&gt;
 // (0032,1032) RequestingPhysician&lt;br /&gt;
 // it seems the study and series InstanceUID tags are needed  (0020,000D) (0020,000E)&lt;br /&gt;
 //- (0020,000D)&lt;br /&gt;
 //- (0020,000E)&lt;br /&gt;
 // - (0010,1010) preserve pt age&lt;br /&gt;
 // - (0010,1040) preserve pt sex&lt;br /&gt;
 - (0008,0014) &lt;br /&gt;
 - (0008,0050)&lt;br /&gt;
 - (0008,0080)&lt;br /&gt;
 - (0008,0081)&lt;br /&gt;
 - (0008,0090)&lt;br /&gt;
 - (0008,0092)&lt;br /&gt;
 - (0008,0094)&lt;br /&gt;
 - (0008,1010)&lt;br /&gt;
  (0008,1030) := &amp;quot;SPL_IGT&amp;quot;&lt;br /&gt;
 - (0008,1040)&lt;br /&gt;
 - (0008,1048)&lt;br /&gt;
 - (0008,1050)&lt;br /&gt;
 - (0008,1060)&lt;br /&gt;
 - (0008,1070)&lt;br /&gt;
 - (0008,1080)&lt;br /&gt;
 - (0008,2111)&lt;br /&gt;
  (0010,0010) := &amp;quot;case143&amp;quot;&lt;br /&gt;
  (0010,0020) := &amp;quot;case143&amp;quot;&lt;br /&gt;
 - (0010,0030)&lt;br /&gt;
 - (0010,0032)&lt;br /&gt;
 - (0010,1040)&lt;br /&gt;
 - (0010,0040)&lt;br /&gt;
 - (0010,1000)&lt;br /&gt;
 - (0010,1001)&lt;br /&gt;
 - (0010,1020)&lt;br /&gt;
 - (0010,1030)&lt;br /&gt;
 - (0010,1090)&lt;br /&gt;
 - (0010,2160)&lt;br /&gt;
 - (0010,2180)&lt;br /&gt;
 - (0010,21B0)&lt;br /&gt;
 - (0010,4000)&lt;br /&gt;
 - (0018,1000)&lt;br /&gt;
 - (0018,1030)&lt;br /&gt;
  (0020,0010) := &amp;quot;MR1&amp;quot;&lt;br /&gt;
 - (0020,0052)&lt;br /&gt;
 - (0020,0200)&lt;br /&gt;
 - (0020,4000)&lt;br /&gt;
 - (0032,1032)&lt;br /&gt;
 - (0040,0275)&lt;br /&gt;
 - (0040,A124)&lt;br /&gt;
 - (0040,A730)&lt;br /&gt;
 - (0088,0140)&lt;br /&gt;
 - (3006,0024)&lt;br /&gt;
 - (3006,00C2)&lt;br /&gt;
&lt;br /&gt;
'''2.d''' Edit the spreadsheet (remap.csv file) that is generated as output.&lt;br /&gt;
&lt;br /&gt;
This spreadsheet will contain all the columns you defined, plus some additional columns needed to uniquely identify each patient, study, and series.&lt;br /&gt;
&lt;br /&gt;
Each new (remap) column should be filled with values, though some cells can be left blank: for a Patient-level remap, one value must be specified for each patient, but if the spreadsheet contains multiple rows for a patient, the column need only be filled in one of that patient's rows. Similarly, for a Study-level remap, the value need only be filled in once per study. The remapper will complain if you leave a required cell empty, or if you give a Patient-level remap column multiple values for a single patient.&lt;br /&gt;
&lt;br /&gt;
'''2.e''' Run the remapper:&lt;br /&gt;
&lt;br /&gt;
 DicomRemapper -c remap-config-file.xml -o &amp;lt;path-to-output-directory&amp;gt; -v remap.csv [directory-1 ...]&lt;br /&gt;
&lt;br /&gt;
* the remap config XML should be the same file used in 2.a, &lt;br /&gt;
* remap.csv is the spreadsheet generated in 2.b and edited in 2.d, and &lt;br /&gt;
* the list of directories is the same list of source directories from 2.b.&lt;br /&gt;
* add an anonymization script to be applied at this stage by using the -d option.&lt;br /&gt;
* the first time you use a script that generates new UIDs, you'll need a new UID root; &lt;br /&gt;
** do this by adding -s http://nrg.wustl.edu/UIDGen to the DicomRemapper command line. &lt;br /&gt;
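Combining those options, the full command might look like this sketch (anon-script.das and the paths are hypothetical placeholders; the command is echoed as a dry run):&lt;br /&gt;

```shell
# Dry run: the complete DicomRemapper command with the optional -d and -s
# flags. Script name and paths are placeholders for illustration only.
cmd="DicomRemapper -c remap-config-file.xml -o /data/anon-output -v remap.csv -d anon-script.das -s http://nrg.wustl.edu/UIDGen /data/src/dicom-1 /data/src/dicom-2"
echo "$cmd"
```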
&lt;br /&gt;
&lt;br /&gt;
'''NOTE''' DicomBrowser doesn't write directly into the database -- it can send to a DICOM server. Below we use web services to write directly to the database. Does this violate best practices?&lt;br /&gt;
&lt;br /&gt;
'''QUESTION''' How does data go from here into the database -- via an &amp;quot;admin&amp;quot; move to the db? How labor intensive is this?&lt;br /&gt;
&lt;br /&gt;
'''TODO''' Yong: how-to on CHB instance. URI?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the next steps using web services, use curl or XNATRestClient ('''download XNATRestClient''' in xnat_tools.zip from http://nrg.wikispaces.com/XNAT+REST+API+Usage)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Authenticate''' with the server and create a new session; use the response as the session ID ($JSessionID) in subsequent queries&lt;br /&gt;
 curl -X POST $XNE_Svr/REST/JSESSION -u $XNE_UserName:$XNE_Password &lt;br /&gt;
 or, use the XNATRestClient&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -u $XNE_UserName -p $XNE_Password -m POST -remote /REST/JSESSION&lt;br /&gt;
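In shell, this step can be wrapped so the returned token is kept for later calls (host and credentials below are hypothetical placeholders; the live call is commented out so the sketch runs offline):&lt;br /&gt;

```shell
# Build the authentication request; the response body is the session token.
# All values here are placeholders, not real credentials.
XNE_Svr="https://xnat.example.org"
XNE_UserName="alice"
XNE_Password="secret"
auth_cmd="curl -s -X POST -u $XNE_UserName:$XNE_Password $XNE_Svr/REST/JSESSION"
# JSessionID=$($auth_cmd)   # uncomment against a live server
echo "$auth_cmd"
```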
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''4. Create subjects on XNAT'''&lt;br /&gt;
  XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote /REST/projects/$ProjectID/subjects/s0001 &lt;br /&gt;
 (This will create a subject called 's0001' within the project $ProjectID)&lt;br /&gt;
&lt;br /&gt;
A script can be written to automatically create all subjects for the project. &lt;br /&gt;
&lt;br /&gt;
'''4a. Specify the demographics of a subject already created, or create with demographic specification'''&lt;br /&gt;
&lt;br /&gt;
'''4.a.1''' No demographics are applied to each subject by default. To edit the demographics (like gender or handedness) of a subject '''already created''', use XML Path shortcuts.&lt;br /&gt;
&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender = male&lt;br /&gt;
 xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness = left&lt;br /&gt;
&lt;br /&gt;
The entire command looks like this (Append XML path shortcuts and separate each by an ''&amp;amp;''. Note that querystring parameters must be separated from the actual URI by a ?):&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0001?xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/gender=male&amp;amp;xnat:subjectData/demographics[@xsi:type=xnat:demographicData]/handedness=left&amp;quot;&lt;br /&gt;
&lt;br /&gt;
All XML Path shortcuts that can be specified on commandline for projects, subject, experiments are listed here: http://nrg.wikispaces.com/XNAT+REST+XML+Path+Shortcuts&lt;br /&gt;
&lt;br /&gt;
'''4.a.2''' Alternatively, specify the demographics '''during subject creation''' by generating and uploading an xml file with the subject:&lt;br /&gt;
 &lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/s0002&amp;quot; -local ./${ProjectID}_s0002.xml&lt;br /&gt;
&lt;br /&gt;
The XML file you create and post looks like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xnat:Subject ID=&amp;quot;s0002&amp;quot; project=&amp;quot;$ProjectID&amp;quot; group=&amp;quot;control&amp;quot; label=&amp;quot;1&amp;quot; src=&amp;quot;12&amp;quot;  xmlns:xnat=&amp;quot;http://nrg.wustl.edu/xnat&amp;quot; xmlns:xsi=&amp;quot;http://www.w3.org/2001/XMLSchema-instance&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;xnat:demographics xsi:type=&amp;quot;xnat:demographicData&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:dob&amp;gt;1990-09-08&amp;lt;/xnat:dob&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:gender&amp;gt;female&amp;lt;/xnat:gender&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:handedness&amp;gt;right&amp;lt;/xnat:handedness&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:education&amp;gt;12&amp;lt;/xnat:education&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:race&amp;gt;12&amp;lt;/xnat:race&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:ethnicity&amp;gt;12&amp;lt;/xnat:ethnicity&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:weight&amp;gt;12.0&amp;lt;/xnat:weight&amp;gt;&lt;br /&gt;
        &amp;lt;xnat:height&amp;gt;12.0&amp;lt;/xnat:height&amp;gt;&lt;br /&gt;
    &amp;lt;/xnat:demographics&amp;gt;&lt;br /&gt;
 &amp;lt;/xnat:Subject&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''4.b (optional check) Query the server to see what subjects have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects&lt;br /&gt;
&lt;br /&gt;
'''4.c Create experiments (collections of image data) you'd like to have for each subject'''&lt;br /&gt;
&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/MRExperiment?xnat:mrSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/CTExperiment1?xnat:ctSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/PETExperiment1?xnat:petSessionData/date=01/02/09&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''4.d (optional check) Query the server to see what experiments have been created:'''&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m GET -remote /REST/projects/$ProjectID/subjects/s0001/experiments?format=xml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''5. Create URIs for scans and reconstructions, and upload them.'''&lt;br /&gt;
&lt;br /&gt;
Note: when uploading images, it is good form to define the format of the images (DICOM, ANALYZE, etc.) and the content type of the&lt;br /&gt;
data. '''This will not translate any information in the DICOM header into metadata on the scan.'''&lt;br /&gt;
&lt;br /&gt;
 //create SCAN1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1?xnat:mrScanData/type=T1&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN1/files/1232132.dcm?format=DICOM&amp;amp;content=T1_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN1/1232132.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create SCAN2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2?xnat:mrScanData/type=T2&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload SCAN2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/scans/SCAN2/files/1232133.dcm?format=DICOM&amp;amp;content=T2_RAW&amp;quot; -local /data/subject1/session1/RAW/SCAN2/1232133.dcm&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 1&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343?xnat:reconstructedImageData/type=T1_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 1 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0343/files/0343.nii?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T1_0343/0343.nii&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 //create reconstruction 2&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344?xnat:reconstructedImageData/type=T2_RECON&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 //upload reconstruction 2 files...&lt;br /&gt;
 XNATRestClient -host $XNE_Svr -user_session $JSessionID -m PUT -remote &amp;quot;/REST/projects/$ProjectID/subjects/$SubjectID/experiments/$ExperimentID/reconstructions/session1_recon_0344/files/0344.nii?format=NIFTI&amp;quot; -local /data/subject1/session1/RECON/T2_0344/0344.nii&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
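These per-file uploads lend themselves to a loop; the sketch below prints one upload command per .dcm file (directory layout is invented for illustration, and commands are echoed rather than sent; add the content query parameter as shown above when executing):&lt;br /&gt;

```shell
# Dry run: generate an upload command for each DICOM file in a directory.
src=$(mktemp -d)   # stand-in for a real RAW/SCAN1 directory
touch "$src/001.dcm" "$src/002.dcm"
count=0
for f in "$src"/*.dcm; do
  name=$(basename "$f")
  echo "XNATRestClient -host \$XNE_Svr -user_session \$JSessionID -m PUT -remote \"/REST/projects/\$ProjectID/subjects/\$SubjectID/experiments/\$ExperimentID/scans/SCAN1/files/$name?format=DICOM\" -local $f"
  count=$((count + 1))
done
```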
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''6. Confirm''' data is uploaded &amp;amp; represented properly with web GUI&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=CTSC_IGT,_BWH&amp;diff=41715</id>
		<title>CTSC IGT, BWH</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=CTSC_IGT,_BWH&amp;diff=41715"/>
		<updated>2009-08-19T20:26:56Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[CTSC Imaging Informatics Initiative|CTSC Imaging Informatics Initiative]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Mission=&lt;br /&gt;
Mark Anderson at the Surgical Planning and Channing labs currently manages data for many investigators, pulling data from PACS into the research environment. There is interest in setting up a parallel channel by which the data are also enrolled into an XNAT database and accessed from a client, and comparing its ease of use with the existing infrastructure. To explore XNAT as a possible long-term informatics solution for the NCIGT project, Mark will be uploading retrospective data for a number of NCIGT efforts (and PIs):&lt;br /&gt;
* NCIGT_Brain_Function (SS/AG)&lt;br /&gt;
** Key Investigators:&lt;br /&gt;
** Brief Description:&lt;br /&gt;
** Use: What kinds of queries will be important?&lt;br /&gt;
* NCIGT_Tumor_Resection (HK/AG)&lt;br /&gt;
** Key Investigators:&lt;br /&gt;
** Brief Description:&lt;br /&gt;
** Use: What kinds of queries will be important?&lt;br /&gt;
* NCIGT_Glioma_Resection (HK)&lt;br /&gt;
** Key Investigators:&lt;br /&gt;
** Brief Description: Intraoperative MRI, brain only&lt;br /&gt;
** Use: What kinds of queries will be important? sex, age, tumor size, tumor grade, tumor location (lobe)&lt;br /&gt;
* NCIGT_Prostate (HE/CT)&lt;br /&gt;
** Key Investigators:&lt;br /&gt;
** Brief Description:&lt;br /&gt;
** Use: What kinds of queries will be important?&lt;br /&gt;
* NCIGT_Prostate_Fully_Segmented (HE/CT)&lt;br /&gt;
** Key Investigators:&lt;br /&gt;
** Brief Description:&lt;br /&gt;
** Use: What kinds of queries will be important?&lt;br /&gt;
* NCIGT_Brain_Biopsy (FT)&lt;br /&gt;
** Key Investigators:&lt;br /&gt;
** Brief Description:&lt;br /&gt;
** Use: What kinds of queries will be important?&lt;br /&gt;
&lt;br /&gt;
=Use-Case Goals=&lt;br /&gt;
&lt;br /&gt;
'''Step 1. Data Management'''&lt;br /&gt;
* Anonymize, apply DICOM metadata and upload retrospective datasets; confirm appropriate organization and naming scheme via web GUI.&lt;br /&gt;
&lt;br /&gt;
'''Step 2. Query &amp;amp; Retrieval'''&lt;br /&gt;
* Make specific queries using XNAT web services.&lt;br /&gt;
* Download data conforming to a specific naming convention and directory structure, using XNAT web services.&lt;br /&gt;
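The query step can be exercised with the same XNATRestClient flags used for upload elsewhere on this page. A sketch, assuming those flags; the REST resource paths and csv output format are illustrative, and the commands are printed rather than run:

```shell
# Sketch: emit XNATRestClient GET commands for Step 2 style queries.
# The REST paths and format=csv are assumptions; the flag names
# (-host, -user_session, -m, -remote) mirror the upload commands
# documented on this wiki.
gen_query_cmds() {
  project=$1
  # list experiments in a project
  echo XNATRestClient -host "$XNE_Svr" -user_session "$JSessionID" \
    -m GET -remote "/REST/projects/$project/experiments?format=csv"
  # list subjects in a project
  echo XNATRestClient -host "$XNE_Svr" -user_session "$JSessionID" \
    -m GET -remote "/REST/projects/$project/subjects?format=csv"
}

gen_query_cmds NCIGT_Glioma_Resection
```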
&lt;br /&gt;
Each effort listed above will have different requirements for querying, retrieving and using data collections. How retrospective data will be used within the NCIGT is briefly described below:&lt;br /&gt;
* NCIGT_Brain_Function:&lt;br /&gt;
* NCIGT_Tumor_Resection:&lt;br /&gt;
* NCIGT_Prostate:&lt;br /&gt;
* NCIGT_Prostate_Fully_Segmented:&lt;br /&gt;
* NCIGT_Brain_Biopsy:&lt;br /&gt;
&lt;br /&gt;
'''Step 3. Disseminating &amp;amp; Sharing'''&lt;br /&gt;
* In addition to the NCIGT mandate to share data, each effort listed above will have different requirements for making data available to collaborating and other interested groups.&lt;br /&gt;
&lt;br /&gt;
'''Step 4. Moving data from central.xnat.org to BWH instance of XNAT'''&lt;br /&gt;
&lt;br /&gt;
=Outcome Metrics=&lt;br /&gt;
&lt;br /&gt;
'''Step 1. Data Management'''&lt;br /&gt;
&lt;br /&gt;
'''Step 2. Query &amp;amp; Retrieval'''&lt;br /&gt;
&lt;br /&gt;
'''Step 3. Dissemination &amp;amp; Sharing'''&lt;br /&gt;
&lt;br /&gt;
=Fundamental Requirements=&lt;br /&gt;
&lt;br /&gt;
=Participants=&lt;br /&gt;
&lt;br /&gt;
* Mark Anderson&lt;br /&gt;
* Tina Kapur&lt;br /&gt;
&lt;br /&gt;
= Data =&lt;br /&gt;
&lt;br /&gt;
=Workflows=&lt;br /&gt;
&lt;br /&gt;
==Current Data Management Process==&lt;br /&gt;
Data on local disk.&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process (Step 1.) Option A. interactive upload using various tools and web gui ==&lt;br /&gt;
See [[ CTSC_DataManagementWorkflow | here ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process (Step 1.) Option B. batch scripted upload via DICOM Server ==&lt;br /&gt;
See [[ CTSC_DataManagementWorkflow | here ]]&lt;br /&gt;
&lt;br /&gt;
==Target Data Management Process (Step 1.) Option C. batch scripted upload via web services==&lt;br /&gt;
See [[ CTSC_DataManagementWorkflow | here ]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Target Query Formulation (Step 2.)==&lt;br /&gt;
&lt;br /&gt;
==Target Processing Workflow (Step 3.)==&lt;br /&gt;
&lt;br /&gt;
=Other Information=&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=36009</id>
		<title>IGT:Image Transfer</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=36009"/>
		<updated>2009-04-01T21:42:44Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Contact:''' Mark Anderson (mark at bwh.harvard.edu)&lt;br /&gt;
&lt;br /&gt;
==Sending images from BWH scanners to SPL (DICOM)==&lt;br /&gt;
This is the method for researchers and clinicians to get their data to the SPL after a scan. Most of the scanners have been configured to allow images to be sent to the SPL. The user name of the sender of the images is recorded to maintain HIPAA compliance. At the scanner, select either LISA or SPL as the DICOM destination and transfer the images. Data will be sent to the directory /spl/tmp/incoming/processed. If non-clinical scans are not transferred to the SPL or stored on MOD, the data will likely be deleted unless a prominent note is left for the technicians reminding them not to delete it. Clinical cases are archived and can be restored via the following method.&lt;br /&gt;
&lt;br /&gt;
==Restoring images from PACS from 2002 to present (DICOM)==&lt;br /&gt;
This is the best way to get all CT, MR, and PET or PET/CT images from approximately the beginning of 2002 to present that are not on the scanners. &lt;br /&gt;
Data will be sent to the SPL directory  /spl/tmp/incoming/processed via a web-based IMPAX service tool. &lt;br /&gt;
Send email to Mark Anderson (mark at bwh.harvard.edu) to get cases transferred via this method. Include the Accession number of&lt;br /&gt;
the scan or, if the Accession number is not known, the MRN and date of the scan.&lt;br /&gt;
&lt;br /&gt;
==Finding DICOM data in /spl/tmp/incoming/processed after it has been transferred==&lt;br /&gt;
A file called &lt;br /&gt;
 /spl/tmp/incoming/processed/studylist &lt;br /&gt;
is updated every 10 minutes and lists all data that has been completely transferred to the SPL and processed into&lt;br /&gt;
a hierarchy of /study/series/images for research data or /accession_number/series/images for clinical data. Below is an example of one entry&lt;br /&gt;
in the file /spl/tmp/incoming/processed/studylist for a research study of a phantom scan with a study number of 1234:&lt;br /&gt;
&lt;br /&gt;
  data for subject PHANTOM with patient id PHANTOM is in directory /spl/tmp/incoming/processed/1234&lt;br /&gt;
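Since studylist is a plain-text file in the format shown above, a subject's data directory can be located by searching it. A minimal sketch; the studylist path and entry format come from this page, while the function name and subject are examples:

```shell
# Sketch: find the processed-data directory for a subject by searching
# the studylist file. Entries end with the directory path, so the last
# awk field is the answer.
find_subject_dir() {
  studylist=$1
  subject=$2
  grep "subject $subject " "$studylist" | awk '{ print $NF }'
}

find_subject_dir /spl/tmp/incoming/processed/studylist PHANTOM
```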
&lt;br /&gt;
The file system /spl/tmp/incoming/processed has fifty gigabytes of space. Data persists in /spl/tmp/incoming/processed for one week from the time it is acquired, after which it is automatically deleted.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=36007</id>
		<title>IGT:Image Transfer</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=36007"/>
		<updated>2009-04-01T21:22:42Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Contact:''' Mark Anderson (mark at bwh.harvard.edu)&lt;br /&gt;
&lt;br /&gt;
==Sending images from BWH scanners to SPL (DICOM)==&lt;br /&gt;
This is the method for researchers and clinicians to get their data to the SPL after a scan. Most of the scanners have been configured to allow images to be sent to the SPL. The user name of the sender of the images is recorded to maintain HIPAA compliance. At the scanner, select either LISA or SPL as the DICOM destination and transfer the images. Data will be sent to the directory /spl/tmp/incoming/processed. If non-clinical scans are not transferred to the SPL or stored on MOD, the data will likely be deleted unless a prominent note is left for the technicians reminding them not to delete it. Clinical cases are archived and can be restored via the following method.&lt;br /&gt;
&lt;br /&gt;
==Restoring images from PACS from 2002 to present (DICOM)==&lt;br /&gt;
This is the best way to get all CT, MR, and PET or PET/CT images from approximately the beginning of 2002 to present that are not on the scanners. &lt;br /&gt;
Data will be sent to the SPL directory  /spl/tmp/incoming/processed via a web-based IMPAX service tool. &lt;br /&gt;
Send email to Mark Anderson (mark at bwh.harvard.edu) to get cases transferred via this method. Include the Accession number of&lt;br /&gt;
the scan or, if the Accession number is not known, the MRN and date of the scan.&lt;br /&gt;
&lt;br /&gt;
==Finding DICOM data in /spl/tmp/incoming/processed after it has been transferred==&lt;br /&gt;
A file called /spl/tmp/incoming/processed/studylist is updated every 10 minutes and lists all data that has been completely transferred to the SPL and processed into&lt;br /&gt;
a hierarchy of /study/series/images for research data or /accession_number/series/image for clinical data. Below is an example of one entry&lt;br /&gt;
in the file /spl/tmp/incoming/processed/studylist for a research study of a phantom scan with a study number of 1234:&lt;br /&gt;
&lt;br /&gt;
data for subject PHANTOM with patient id PHANTOM is in directory /spl/tmp/incoming/processed/1234&lt;br /&gt;
&lt;br /&gt;
The file system /spl/tmp/incoming/processed has fifty gigabytes of space. Data persists in /spl/tmp/incoming/processed for one week from the time it is acquired, after which it is automatically deleted.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Anonymization&amp;diff=17017</id>
		<title>IGT:Image Anonymization</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Anonymization&amp;diff=17017"/>
		<updated>2007-10-24T21:30:49Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
'''Procedures to  ''anonymize'' images with DICOM, SIGNA, and GENESIS headers'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Currently these procedures are to be used from within SPL&lt;br /&gt;
on a ''solaris'' machine.&lt;br /&gt;
In all cases, first login to a SOLARIS machine (e.g. ocean) and &lt;br /&gt;
set the following environment variable or add it to your .tcshrc file:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''setenv ANON_DIR /home/mark/anon'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* ''DICOM data anonymization''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To see the DICOM anonymization options type:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''wish $ANON_DIR/dcanon.tcl'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Typically for most of our needs when processing a single DICOM series, something like the following is sufficient:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''wish $ANON_DIR/dcanon.tcl -force -nostrip /d/bigweekly/example/000001.SER/ /d/bigweekly/example/anon1'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
This will make a copy of  all of the DICOM images in directory /d/bigweekly/example/000001.SER/ and store the anonymized &lt;br /&gt;
copy of the images to directory /d/bigweekly/example/anon1. The ''-force'' option will force the removal of the output directory /d/bigweekly/example/anon1 prior to anonymizing if the directory exists. A list of the DICOM tags that are anonymized by this procedure can &lt;br /&gt;
be found here: http://www.na-mic.org/Wiki/index.php/MBIRN:BIRNDUP:Removed_DICOM_Fields&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Some datasets contain many series with many images in each series. Using host ''ocean'' can save a lot of&lt;br /&gt;
time, and it is possible to process an entire study simultaneously. This is particularly useful when processing ''fmri''&lt;br /&gt;
datasets that can contain more than forty thousand images. The following example (run on ocean from the directory containing all of the DICOM series directories but no other directories or subdirectories) will process an entire&lt;br /&gt;
DICOM study and additionally will substitute the series number for the patient name. This is particularly useful when&lt;br /&gt;
loading multiple series into ''3dslicer'', as the slicer assigns the ''volume name'' to be the patient name that has been&lt;br /&gt;
extracted from the header. Using the command below will give each series a unique patient name, which is just the series number.&lt;br /&gt;
The output directory, in this case /d/bigweekly/anon/ must exist prior to executing the command. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
 '''ls -1 | awk '{printf(&amp;quot;wish $ANON_DIR/dcanon.tcl -patname %s -force -nostrip /d/bigweekly/example/%s /d/bigweekly/anon/%s &amp;amp;\n&amp;quot;,$0,$0,$0)}' | sed s/.SER// | sh'''&lt;br /&gt;
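The awk pipeline above can also be written as a plain loop, which may be easier to adapt. A sketch that prints the dcanon.tcl commands instead of launching them (same options and example paths as above; the function name is illustrative):

```shell
# Sketch: loop equivalent of the awk | sed | sh pipeline above, echoing
# the dcanon.tcl command for each *.SER series directory rather than
# running it in the background. Run from the directory holding only the
# series directories.
gen_anon_cmds() {
  for d in *.SER; do
    [ -d "$d" ] || continue
    s=${d%.SER}    # series name without the .SER suffix, used as patname
    echo wish "$ANON_DIR/dcanon.tcl" -patname "$s" -force -nostrip \
      "/d/bigweekly/example/$d" "/d/bigweekly/anon/$d"
  done
}

gen_anon_cmds
```

Reviewing the echoed commands before piping them to sh avoids clobbering output directories by accident.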
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* ''SIGNA data anonymization''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If you wanted to anonymize signa image I.001 and call it anon.001, from the directory with your non-anonymized signa image type:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''$ANON_DIR/siganon I.001 anon.001'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
This can be done for an entire series. For example, if you had a directory full of signa images numbered I.001 to&lt;br /&gt;
I.124, you could anonymize all of them (from within the directory containing the images) with the command: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''ls -1 I.*  | awk '{printf(&amp;quot;$ANON_DIR/siganon I.%03d anon.%03d\n&amp;quot;,NR,NR)}' | sh'''&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
This will create an anonymized series of 124 images  called anon.001 - anon.124&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* ''GENESIS data anonymization ''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To anonymize image I.001 and call the resulting image anon.001, from the directory with your non-anonymized genesis images type:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''$ANON_DIR/genanon I.001 anon.001'''&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be done for an entire series. For example, if you had a directory full of genesis images numbered I.001 to&lt;br /&gt;
I.124, you could anonymize all of them (from within the directory containing the images) with the command: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''ls -1 I.* | awk '{printf(&amp;quot;$ANON_DIR/genanon I.%03d anon.%03d\n&amp;quot;,NR,NR)}' | sh'''&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;This will create an anonymized series of 124 images  called anon.001 - anon.124&amp;lt;br&amp;gt;&lt;br /&gt;
Note that the input image and the output image can be the ''same image'' if you are using the genanon program.&lt;br /&gt;
&lt;br /&gt;
If you have any questions or suggestions, please email [mailto:mark@bwh.harvard.edu Mark Anderson]&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Anonymization&amp;diff=13514</id>
		<title>IGT:Image Anonymization</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Anonymization&amp;diff=13514"/>
		<updated>2007-07-13T23:15:36Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
'''Procedures to  ''anonymize'' images with DICOM, SIGNA, and GENESIS headers'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Currently these procedures are to be used from within SPL&lt;br /&gt;
on a ''solaris'' machine.&lt;br /&gt;
In all cases, first login to a SOLARIS machine (e.g. ocean) and &lt;br /&gt;
set the following environment variable or add it to your .tcshrc file:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''setenv ANON_DIR /home/mark/anon'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* ''DICOM data anonymization''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To see the DICOM anonymization options type:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''wish $ANON_DIR/dcanon.tcl'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Typically for most of our needs when processing a single DICOM series, something like the following is sufficient:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''wish $ANON_DIR/dcanon.tcl -force -nostrip /d/bigweekly/example/000001.SER/ /d/bigweekly/example/anon1'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
This will make a copy of  all of the DICOM images in directory /d/bigweekly/example/000001.SER/ and store the anonymized &lt;br /&gt;
copy of the images to directory /d/bigweekly/example/anon1. The ''-force'' option will force the removal of the output directory /d/bigweekly/example/anon1 prior to anonymizing if the directory exists. A list of the DICOM tags that are anonymized by this procedure can &lt;br /&gt;
be found here: http://www.na-mic.org/Wiki/index.php/MBIRN:BIRNDUP:Removed_DICOM_Fields&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Some datasets contain many series with many images in each series. Using host ''ocean'' can save a lot of&lt;br /&gt;
time, and it is possible to process an entire study simultaneously. This is particularly useful when processing ''fmri''&lt;br /&gt;
datasets that can contain more than forty thousand images. The following example (run on ocean from the directory containing all of the DICOM series directories but no other directories or subdirectories) will process an entire&lt;br /&gt;
DICOM study and additionally will substitute the series number for the patient name. This is particularly useful when&lt;br /&gt;
loading multiple series into ''3dslicer'', as the slicer assigns the ''volume name'' to be the patient name that has been&lt;br /&gt;
extracted from the header. Using the command below will give each series a unique patient name, which is just the series number.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
 '''ls -1 | awk '{printf(&amp;quot;wish $ANON_DIR/dcanon.tcl -patname %s -force -nostrip /d/bigweekly/example/%s /d/bigweekly/anon/%s &amp;amp;\n&amp;quot;,$0,$0,$0)}' | sed s/.SER// | sh'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* ''SIGNA data anonymization''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If you wanted to anonymize signa image I.001 and call it anon.001, from the directory with your non-anonymized signa image type:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''$ANON_DIR/siganon I.001 anon.001'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
This can be done for an entire series. For example, if you had a directory full of signa images numbered I.001 to&lt;br /&gt;
I.124, you could anonymize all of them (from within the directory containing the images) with the command: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''ls -1 I.*  | awk '{printf(&amp;quot;$ANON_DIR/siganon I.%03d anon.%03d\n&amp;quot;,NR,NR)}' | sh'''&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
This will create an anonymized series of 124 images  called anon.001 - anon.124&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* ''GENESIS data anonymization ''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To anonymize image I.001 and call the resulting image anon.001, from the directory with your non-anonymized genesis images type:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''$ANON_DIR/genanon I.001 anon.001'''&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be done for an entire series. For example, if you had a directory full of genesis images numbered I.001 to&lt;br /&gt;
I.124, you could anonymize all of them (from within the directory containing the images) with the command: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''ls -1 I.* | awk '{printf(&amp;quot;$ANON_DIR/genanon I.%03d anon.%03d\n&amp;quot;,NR,NR)}' | sh'''&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;This will create an anonymized series of 124 images  called anon.001 - anon.124&amp;lt;br&amp;gt;&lt;br /&gt;
Note that the input image and the output image can be the ''same image'' if you are using the genanon program.&lt;br /&gt;
&lt;br /&gt;
If you have any questions or suggestions, please email [mailto:mark@bwh.harvard.edu Mark Anderson]&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Anonymization&amp;diff=13512</id>
		<title>IGT:Image Anonymization</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Anonymization&amp;diff=13512"/>
		<updated>2007-07-13T22:34:25Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
'''Process to anonymize DICOM, SIGNA, and GENESIS headers'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Currently these procedures are to be used from within SPL&lt;br /&gt;
on a ''solaris'' machine.&lt;br /&gt;
In all cases, first login to a SOLARIS machine (e.g. ocean) and &lt;br /&gt;
set the following environment variable:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''setenv ANON_DIR /home/mark/anon'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* ''DICOM data anonymization''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To see the DICOM anonymization options type:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''wish $ANON_DIR/dcanon.tcl'''&lt;br /&gt;
Typically for our needs, something like the following is sufficient:&lt;br /&gt;
&lt;br /&gt;
'''wish $ANON_DIR/dcanon.tcl -force -nostrip /d/bigweekly/example/000001.SER/ /d/bigweekly/example/anon1'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* ''SIGNA data anonymization''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
From the directory with your non-anonymized signa images type:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''$ANON_DIR/siganon I.001 anon.001'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
This can be done for an entire series. For example, if you had a directory full of signa images numbered I.001 to&lt;br /&gt;
I.124, you could anonymize all of them (from within the directory containing the images) with the command: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''ls -1 | awk '{printf(&amp;quot;$ANON_DIR/siganon I.%03d anon.%03d\n&amp;quot;,NR,NR)}' | sh'''&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
This will create an anonymized series of 124 images  called anon.001 - anon.124&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* ''GENESIS data anonymization ''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From the directory with your non-anonymized genesis images type:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''$ANON_DIR/genanon I.001 anon.001'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;This will create an anonymized series of 124 images  called anon.001 - anon.124&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be done for an entire series. For example, if you had a directory full of genesis images numbered I.001 to&lt;br /&gt;
I.124, you could anonymize all of them (from within the directory containing the images) with the command: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''ls -1 | awk '{printf(&amp;quot;$ANON_DIR/genanon I.%03d anon.%03d\n&amp;quot;,NR,NR)}' | sh'''&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;This will create an anonymized series of 124 images  called anon.001 - anon.124&amp;lt;br&amp;gt;&lt;br /&gt;
Note that the input image and the output image can be the same image.&lt;br /&gt;
&lt;br /&gt;
If you have any questions or suggestions, please email [mailto:mark@bwh.harvard.edu Mark Anderson]&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Anonymization&amp;diff=13509</id>
		<title>IGT:Image Anonymization</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Anonymization&amp;diff=13509"/>
		<updated>2007-07-13T21:05:46Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
Process to anonymize SIGNA, GENESIS, and DICOM headers&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Currently these procedures are to be used from within SPL&lt;br /&gt;
on a solaris machine.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Anonymization&amp;diff=8006</id>
		<title>IGT:Image Anonymization</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Anonymization&amp;diff=8006"/>
		<updated>2007-02-27T23:47:07Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
Process to anonymize DICOM headers&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
We use several tcl programs which wrap Dave Clunie's DICOM copy program&lt;br /&gt;
called dccp which is part of his Dicom3tools program set.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Anonymization&amp;diff=8005</id>
		<title>IGT:Image Anonymization</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Anonymization&amp;diff=8005"/>
		<updated>2007-02-27T23:45:47Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
Process to anonymize DICOM headers&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
We use several tcl programs which wrap Dave Clunie's DICOM copy program&lt;br /&gt;
called dccp which is part of&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Anonymization&amp;diff=8004</id>
		<title>IGT:Image Anonymization</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Anonymization&amp;diff=8004"/>
		<updated>2007-02-27T23:42:09Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;center&amp;gt;&lt;br /&gt;
Process to anonymize DICOM headers&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Anonymization&amp;diff=8003</id>
		<title>IGT:Image Anonymization</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Anonymization&amp;diff=8003"/>
		<updated>2007-02-27T23:39:55Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;save a placeholder&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7955</id>
		<title>IGT:Image Transfer</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7955"/>
		<updated>2007-02-21T23:23:58Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Contact: Mark Anderson&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Pushing images from scanners to SPL (DICOM)'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is a good way for researchers to get their data to the SPL after a scan. Most of the scanners have been configured to allow images to be sent to the SPL. The user name of the sender of the images is recorded to maintain HIPAA compliance. At the scanner, select either LISA or SPL as the DICOM destination and transfer the images. Data will be sent to the directory /spl/tmp/incoming. If there is a lot of data in /spl/tmp/incoming it may take a while to locate your particular case. One method is to find timestamps on files in /spl/tmp/incoming that match the time the images were transferred from the scanners, or see the section below &amp;quot;Finding DICOM data in /spl/tmp/incoming after it has been transferred&amp;quot;. If non-clinical scans are not transferred to the SPL or stored on MOD, the data will likely be deleted unless a prominent note is left for the technicians reminding them not to delete it. Clinical cases are archived and can be restored via the following method.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Restoring images from PACS from 09/1998 to present (DICOM)'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is the best way to get all CT and MR images from September 1998 to present that are not on the scanners. Data can be sent directly to the SPL directory /spl/tmp/incoming from any of the IMPAX systems in the reading rooms or from a web-based IMPAX service tool. Send email to [mailto:mark@bwh.harvard.edu mark@bwh.harvard.edu]&lt;br /&gt;
or [mailto:marianna@bwh.harvard.edu marianna@bwh.harvard.edu]&lt;br /&gt;
to get cases transferred via this method. If you plan to use this method frequently, you may want to get your own BICS account with transfer privileges by calling the help desk at x2-5927.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Finding DICOM data in /spl/tmp/incoming after it has been transferred'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The easiest way to find your DICOM data is to point your browser to /spl/tmp/incoming/ and look for a file with a name beginning with&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of&lt;br /&gt;
&lt;br /&gt;
The file name will contain a time stamp and look similar to the filename below.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
DICOM_files_as_of:12_30_04_at_14:10:28.html&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This file is updated every hour on the hour with a list of the current contents of /spl/tmp/incoming/, and the time that the file was updated is part of the file name. The above file shows the contents of /spl/tmp/incoming as of Dec 30 2004 at 2:10 in the afternoon. After clicking on the DICOM html file you will see information similar to the following, which should help you to locate your data. One way to use a page similar to the one below is to do the following:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* open a command window and change to the Root Directory in the window:&lt;br /&gt;
cd /spl/tmp/incoming&lt;br /&gt;
&lt;br /&gt;
* load the DICOM_files_as_of:**********.html page into your browser&lt;br /&gt;
* use the Edit-&amp;gt;Find in This Page tab of the browser and type in the Patient ID or Patient Name. If you get the alert message &amp;quot;The text you entered was not found.&amp;quot;, then either you have entered incorrect information or you will need to wait until the page is updated on the next hour.&lt;br /&gt;
* If your Patient Name or Patient ID is found in the DICOM html page, cut and paste the path shown above the Patient Name or Patient ID (which points to where your data is located) and change to that directory (e.g. cd MR0000.MOD/LMRC3T.STA/20050623.DAY/1.2.840.113619.2.136.1762888421.2231.1119526620.0.UID/)&lt;br /&gt;
* from this directory, copy your data to a safe place as it will be deleted 1 week after it is last accessed &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:image_info.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Below is an example of how the directories might be organized in /spl/tmp/incoming:&lt;br /&gt;
&lt;br /&gt;
*./MR0000.MOD/BWOW1.STA&lt;br /&gt;
*./MR0000.MOD/BWOW1.STA/20040609.DAY&lt;br /&gt;
*./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID&lt;br /&gt;
*./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000001.SER&lt;br /&gt;
*./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000002.SER&lt;br /&gt;
*./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000003.SER&lt;br /&gt;
*./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000005.SER&lt;br /&gt;
*./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000006.SER&lt;br /&gt;
*./CT0000.MOD&lt;br /&gt;
*./CT0000.MOD/BWCTPIKE.STA&lt;br /&gt;
*./CT0000.MOD/BWCTPIKE.STA/20040610.DAY&lt;br /&gt;
*./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID&lt;br /&gt;
*./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID/000002.SER&lt;br /&gt;
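A listing like the one above can be regenerated on the command line with find. The sketch below is self-contained: it builds a throwaway tree with the same MODALITY/STATION/DAY/UID/SERIES nesting (shortened, made-up UIDs) and prints it, assuming a POSIX shell:

```shell
# Build a miniature copy of the incoming layout (made-up UIDs)
tmp=$(mktemp -d)
mkdir -p "$tmp/MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.0.UID/000001.SER"
mkdir -p "$tmp/CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.0.UID/000002.SER"
# Print every directory once, mirroring the ./MODALITY/STATION/DAY/UID/SERIES nesting
(cd "$tmp" && find . -mindepth 1 -type d | sort)
rm -rf "$tmp"
```

Against the real /spl/tmp/incoming, narrowing the search with something like find . -type d -name '*.UID' would list just the study directories.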
&lt;br /&gt;
&lt;br /&gt;
The following describes the hierarchy of the directories, using the CT0000.MOD directory above as an example:&lt;br /&gt;
&lt;br /&gt;
* CT0000.MOD: the modality, usually either CT or MR for most SPL data&lt;br /&gt;
* BWCTPIKE.STA: the StationName that the images were transferred from.&lt;br /&gt;
* 20040610.DAY: the date of the scan. This scan was done on June 10, 2004.&lt;br /&gt;
* 1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID: this annoyingly long string is the StudyInstanceUID, which uniquely identifies this study. &lt;br /&gt;
&lt;br /&gt;
This string does not provide much useful information to most users. The beginning part of the string (1.3.12.2.1107.5.1.4) is the Implementation Class UID and identifies the images in this directory as coming from a Siemens scanner.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7954</id>
		<title>IGT:Image Transfer</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7954"/>
		<updated>2007-02-21T23:21:27Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Contact: Mark Anderson&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Pushing images from scanners to SPL (DICOM)'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is a good way for researchers to get their data to the SPL after a scan has been done. Most of the scanners have been configured to allow images to be sent to the SPL. The user name of the sender of the images is recorded to maintain HIPAA compliance. At the scanner select either LISA or SPL as the DICOM destination and transfer the images. Data will be sent to the directory /spl/tmp/incoming. If there is a lot of data in /spl/tmp/incoming it may take a while to locate your particular case. One method is to find timestamps on files in /spl/tmp/incoming that match the time that the images were transferred from the scanners or see the section below &amp;quot;Finding DICOM data in /spl/tmp/incoming after it has been transferred&amp;quot;. If non-clinical scans are not transferred to the SPL or stored on MOD, there is a good chance the data will be deleted unless a prominent note is left for the techs and they are reminded not to delete the data. Clinical cases are archived and can be restored via the following method.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Restoring images from PACS from 09/1998-present (DICOM)'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is the best way to get all CT and MR images from September 1998 to present that are not on the scanners. Data can be sent to SPL directly to /spl/tmp/incoming from any of the IMPAX systems in the reading rooms or from a web-based IMPAX service tool. Send email to  [mailto:mark@bwh.harvard.edu mark@bwh.harvard.edu]&lt;br /&gt;
 or  [mailto:marianna@bwh.harvard.edu marianna@bwh.harvard.edu]&lt;br /&gt;
 to get cases transferred via this method. If you plan to use this method frequently, you may want to get your own BICS account with transfer privileges by calling the help desk at x2-5927.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Finding DICOM data in /spl/tmp/incoming after it has been transferred'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The easiest way to find your DICOM data is to point your browser to /spl/tmp/incoming/ and look for a file with a name beginning with&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of&lt;br /&gt;
&lt;br /&gt;
The file name will contain a time stamp and look similar to the filename below.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
DICOM_files_as_of:12_30_04_at_14:10:28.html&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This file is updated every hour on the hour with a list of the current contents of /spl/tmp/incoming/; the time of the update is part of the file name. The file above shows the contents of /spl/tmp/incoming as of Dec 30, 2004 at 2:10 in the afternoon. After clicking on the DICOM html file you will see information similar to the following, which should help you locate your data. One way to use a page like the one below is to do the following:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* open a command window and change to the Root Directory in the window:&lt;br /&gt;
cd /spl/tmp/incoming&lt;br /&gt;
&lt;br /&gt;
* load the DICOM_files_as_of:**********.html page into your browser&lt;br /&gt;
* use the browser's Edit-&amp;gt;Find in This Page function and type in the Patient ID or Patient Name. If you get the alert message &amp;quot;The text you entered was not found.&amp;quot;, then either you entered incorrect information or you need to wait until the page is updated at the top of the next hour.&lt;br /&gt;
* If your Patient Name or Patient ID is found in the DICOM html page, cut and paste from the page the path above the Patient Name or Patient ID (it points to where your data is located) and change to that directory (e.g. cd MR0000.MOD/LMRC3T.STA/20050623.DAY/1.2.840.113619.2.136.1762888421.2231.1119526620.0.UID/)&lt;br /&gt;
* from this directory, copy your data to a safe place as it will be deleted 1 week after it is last accessed &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:image_info.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Below is an example of how the directories might be organized in /spl/tmp/incoming:&lt;br /&gt;
&lt;br /&gt;
./MR0000.MOD/BWOW1.STA&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000001.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000002.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000003.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000005.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000006.SER&lt;br /&gt;
./CT0000.MOD&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID/000002.SER&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following describes the hierarchy of the directories, using the CT0000.MOD directory above as an example:&lt;br /&gt;
&lt;br /&gt;
* CT0000.MOD: the modality, usually either CT or MR for most SPL data&lt;br /&gt;
* BWCTPIKE.STA: the StationName that the images were transferred from.&lt;br /&gt;
* 20040610.DAY: the date of the scan. This scan was done on June 10, 2004.&lt;br /&gt;
* 1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID: this annoyingly long string is the StudyInstanceUID, which uniquely identifies this study. &lt;br /&gt;
&lt;br /&gt;
This string does not provide much useful information to most users. The beginning part of the string (1.3.12.2.1107.5.1.4) is the Implementation Class UID and identifies the images in this directory as coming from a Siemens scanner.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7953</id>
		<title>IGT:Image Transfer</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7953"/>
		<updated>2007-02-21T23:15:40Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Contact: Mark Anderson&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Pushing images from scanners to SPL (DICOM)'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is a good way for researchers to get their data to the SPL after a scan has been done. Most of the scanners have been configured to allow images to be sent to the SPL. The user name of the sender of the images is recorded to maintain HIPAA compliance. At the scanner select either LISA or SPL as the DICOM destination and transfer the images. Data will be sent to the directory /spl/tmp/incoming. If there is a lot of data in /spl/tmp/incoming it may take a while to locate your particular case. One method is to find timestamps on files in /spl/tmp/incoming that match the time that the images were transferred from the scanners or see the section below &amp;quot;Finding DICOM data in /spl/tmp/incoming after it has been transferred&amp;quot;. If non-clinical scans are not transferred to the SPL or stored on MOD, there is a good chance the data will be deleted unless a prominent note is left for the techs and they are reminded not to delete the data. Clinical cases are archived and can be restored via the following method.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Restoring images from PACS from 09/1998-present (DICOM)'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is the best way to get all CT and MR images from September 1998 to present that are not on the scanners. Data can be sent to SPL directly to /spl/tmp/incoming from any of the IMPAX systems in the reading rooms or from a web-based IMPAX service tool. Send email to  [mailto:mark@bwh.harvard.edu mark@bwh.harvard.edu]&lt;br /&gt;
 or  [mailto:marianna@bwh.harvard.edu marianna@bwh.harvard.edu]&lt;br /&gt;
 to get cases transferred via this method. If you plan to use this method frequently, you may want to get your own BICS account with transfer privileges by calling the help desk at x2-5927.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Finding DICOM data in /spl/tmp/incoming after it has been transferred'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The easiest way to find your DICOM data is to point your browser to /spl/tmp/incoming/ and look for a file with a name beginning with&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of&lt;br /&gt;
&lt;br /&gt;
The file name will contain a time stamp and look similar to the filename below.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
DICOM_files_as_of:12_30_04_at_14:10:28.html&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This file is updated every hour on the hour with a list of the current contents of /spl/tmp/incoming/; the time of the update is part of the file name. The file above shows the contents of /spl/tmp/incoming as of Dec 30, 2004 at 2:10 in the afternoon. After clicking on the DICOM html file you will see information similar to the following, which should help you locate your data. One way to use a page like the one below is to do the following:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* open a command window and change to the Root Directory in the window:&lt;br /&gt;
cd /spl/tmp/incoming&lt;br /&gt;
&lt;br /&gt;
* load the DICOM_files_as_of:**********.html page into your browser&lt;br /&gt;
* use the browser's Edit-&amp;gt;Find in This Page function and type in the Patient ID or Patient Name. If you get the alert message &amp;quot;The text you entered was not found.&amp;quot;, then either you entered incorrect information or you need to wait until the page is updated at the top of the next hour.&lt;br /&gt;
* If your Patient Name or Patient ID is found in the DICOM html page, cut and paste from the page the path above the Patient Name or Patient ID (it points to where your data is located) and change to that directory (e.g. cd MR0000.MOD/LMRC3T.STA/20050623.DAY/1.2.840.113619.2.136.1762888421.2231.1119526620.0.UID/)&lt;br /&gt;
* from this directory, copy your data to a safe place as it will be deleted 1 week after it is last accessed &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:image_info.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Below is an example of how the directories might be organized in /spl/tmp/incoming:&lt;br /&gt;
&lt;br /&gt;
./MR0000.MOD/BWOW1.STA&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000001.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000002.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000003.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000005.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000006.SER&lt;br /&gt;
./CT0000.MOD&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID/000002.SER&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following describes the hierarchy of the directories, using the CT0000.MOD directory above as an example:&lt;br /&gt;
&lt;br /&gt;
* CT0000.MOD: the modality, usually either CT or MR for most SPL data&lt;br /&gt;
* BWCTPIKE.STA: the StationName that the images were transferred from.&lt;br /&gt;
* 20040610.DAY: the date of the scan. This scan was done on June 10, 2004.&lt;br /&gt;
* 1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID: this annoyingly long string is the StudyInstanceUID, which uniquely identifies this study. &lt;br /&gt;
&lt;br /&gt;
This string does not provide much useful information to most users. The beginning part of the string (1.3.12.2.1107.5.1.4) is the Implementation Class UID and identifies the images in this directory as coming from a Siemens scanner. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Software Available for Sharing&lt;br /&gt;
&lt;br /&gt;
The software used here comprises three main parts:&lt;br /&gt;
&lt;br /&gt;
1) A proprietary DICOM listener that receives the data. We have a license for this from the Dejarnette Corporation; it is not sharable.&lt;br /&gt;
&lt;br /&gt;
2) A print_header program that extracts information from DICOM image headers. This is our software; it is available for sharing and runs on Solaris and Linux.&lt;br /&gt;
&lt;br /&gt;
3) A series of shell and python scripts that collect the image information and generate a web page describing the data. This software is very specific to our site and likely not useful elsewhere.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7952</id>
		<title>IGT:Image Transfer</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7952"/>
		<updated>2007-02-21T23:07:50Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Contact: Mark Anderson&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Pushing images from scanners to SPL (DICOM)'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is a good way for researchers to get their data to the SPL after a scan has been done. Most of the scanners have been configured to allow images to be sent to the SPL. The user name of the sender of the images is recorded to maintain HIPAA compliance. At the scanner select either LISA or SPL as the DICOM destination and transfer the images. Data will be sent to the directory /spl/tmp/incoming. If there is a lot of data in /spl/tmp/incoming it may take a while to locate your particular case. One method is to find timestamps on files in /spl/tmp/incoming that match the time that the images were transferred from the scanners or see the section below &amp;quot;Finding DICOM data in /spl/tmp/incoming after it has been transferred&amp;quot;. If non-clinical scans are not transferred to the SPL or stored on MOD, there is a good chance the data will be deleted unless a prominent note is left for the techs and they are reminded not to delete the data. Clinical cases are archived and can be restored via the following method.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Restoring images from PACS from 09/1998-present (DICOM)'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is the best way to get all CT and MR images from September 1998 to present that are not on the scanners. Data can be sent to SPL directly to /spl/tmp/incoming from any of the IMPAX systems in the reading rooms or from a web-based IMPAX service tool. Send email to  [mailto:mark@bwh.harvard.edu mark@bwh.harvard.edu]&lt;br /&gt;
 or  [mailto:marianna@bwh.harvard.edu marianna@bwh.harvard.edu]&lt;br /&gt;
 to get cases transferred via this method. If you plan to use this method frequently, you may want to get your own BICS account with transfer privileges by calling the help desk at x2-5927.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Finding DICOM data in /spl/tmp/incoming after it has been transferred'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The easiest way to find your DICOM data is to point your browser to /spl/tmp/incoming/ and look for a file with a name beginning with&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of&lt;br /&gt;
&lt;br /&gt;
The file name will contain a time stamp and look similar to the filename below.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
DICOM_files_as_of:12_30_04_at_14:10:28.html&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This file is updated every hour on the hour with a list of the current contents of /spl/tmp/incoming/; the time of the update is part of the file name. The file above shows the contents of /spl/tmp/incoming as of Dec 30, 2004 at 2:10 in the afternoon. After clicking on the DICOM html file you will see information similar to the following, which should help you locate your data. One way to use a page like the one below is to do the following:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* open a command window and change to the Root Directory in the window:&lt;br /&gt;
cd /spl/tmp/incoming&lt;br /&gt;
&lt;br /&gt;
* load the DICOM_files_as_of:**********.html page into your browser&lt;br /&gt;
* use the browser's Edit-&amp;gt;Find in This Page function and type in the Patient ID or Patient Name. If you get the alert message &amp;quot;The text you entered was not found.&amp;quot;, then either you entered incorrect information or you need to wait until the page is updated at the top of the next hour.&lt;br /&gt;
* If your Patient Name or Patient ID is found in the DICOM html page, cut and paste from the page the path above the Patient Name or Patient ID (it points to where your data is located) and change to that directory (e.g. cd MR0000.MOD/LMRC3T.STA/20050623.DAY/1.2.840.113619.2.136.1762888421.2231.1119526620.0.UID/)&lt;br /&gt;
* from this directory, copy your data to a safe place as it will be deleted 1 week after it is last accessed &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:image_info.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Below is an example of how the directories might be organized in /spl/tmp/incoming:&lt;br /&gt;
&lt;br /&gt;
./MR0000.MOD/BWOW1.STA&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000001.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000002.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000003.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000005.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000006.SER&lt;br /&gt;
./CT0000.MOD&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID/000002.SER&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following describes the hierarchy of the directories, using the CT0000.MOD directory above as an example:&lt;br /&gt;
&lt;br /&gt;
* CT0000.MOD: the modality, usually either CT or MR for most SPL data&lt;br /&gt;
* BWCTPIKE.STA: the StationName that the images were transferred from.&lt;br /&gt;
* 20040610.DAY: the date of the scan. This scan was done on June 10, 2004.&lt;br /&gt;
* 1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID: this annoyingly long string is the StudyInstanceUID, which uniquely identifies this study. It does not provide much useful information to most users; the beginning part (1.3.12.2.1107.5.1.4) is the Implementation Class UID and identifies the images in this directory as coming from a Siemens scanner. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Software Available for Sharing&lt;br /&gt;
&lt;br /&gt;
The software used here comprises three main parts:&lt;br /&gt;
&lt;br /&gt;
1) A proprietary DICOM listener that receives the data. We have a license for this from the Dejarnette Corporation; it is not sharable.&lt;br /&gt;
&lt;br /&gt;
2) A print_header program that extracts information from DICOM image headers. This is our software; it is available for sharing and runs on Solaris and Linux.&lt;br /&gt;
&lt;br /&gt;
3) A series of shell and python scripts that collect the image information and generate a web page describing the data. This software is very specific to our site and likely not useful elsewhere.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7951</id>
		<title>IGT:Image Transfer</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7951"/>
		<updated>2007-02-21T23:02:44Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Contact: Mark Anderson&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Pushing images from scanners to SPL (DICOM)'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is a good way for researchers to get their data to the SPL after a scan has been done. Most of the scanners have been configured to allow images to be sent to the SPL. The user name of the sender of the images is recorded to maintain HIPAA compliance. At the scanner select either LISA or SPL as the DICOM destination and transfer the images. Data will be sent to the directory /spl/tmp/incoming. If there is a lot of data in /spl/tmp/incoming it may take a while to locate your particular case. One method is to find timestamps on files in /spl/tmp/incoming that match the time that the images were transferred from the scanners or see the section below &amp;quot;Finding DICOM data in /spl/tmp/incoming after it has been transferred&amp;quot;. If non-clinical scans are not transferred to the SPL or stored on MOD, there is a good chance the data will be deleted unless a prominent note is left for the techs and they are reminded not to delete the data. Clinical cases are archived and can be restored via the following method.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Restoring images from PACS from 09/1998-present (DICOM)'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is the best way to get all CT and MR images from September 1998 to present that are not on the scanners. Data can be sent to SPL directly to /spl/tmp/incoming from any of the IMPAX systems in the reading rooms or from a web-based IMPAX service tool. Send email to  [mailto:mark@bwh.harvard.edu mark@bwh.harvard.edu]&lt;br /&gt;
 or  [mailto:marianna@bwh.harvard.edu marianna@bwh.harvard.edu]&lt;br /&gt;
 to get cases transferred via this method. If you plan to use this method frequently, you may want to get your own BICS account with transfer privileges by calling the help desk at x2-5927.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Finding DICOM data in /spl/tmp/incoming after it has been transferred'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The easiest way to find your DICOM data is to point your browser to /spl/tmp/incoming/ and look for a file with a name beginning with&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of&lt;br /&gt;
&lt;br /&gt;
The file name will contain a time stamp and look similar to the filename below.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
DICOM_files_as_of:12_30_04_at_14:10:28.html&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This file is updated every hour on the hour with a list of the current contents of /spl/tmp/incoming/; the time of the update is part of the file name. The file above shows the contents of /spl/tmp/incoming as of Dec 30, 2004 at 2:10 in the afternoon. After clicking on the DICOM html file you will see information similar to the following, which should help you locate your data. One way to use a page like the one below is to do the following:&lt;br /&gt;
&lt;br /&gt;
* open a command window and change to the root directory in the window (e.g. cd /spl/tmp/incoming)&lt;br /&gt;
* load the DICOM_files_as_of:**********.html page into your browser&lt;br /&gt;
* use the browser's Edit-&amp;gt;Find in This Page function and type in the Patient ID or Patient Name. If you get the alert message &amp;quot;The text you entered was not found.&amp;quot;, then either you entered incorrect information or you need to wait until the page is updated at the top of the next hour.&lt;br /&gt;
* If your Patient Name or Patient ID is found in the DICOM html page, cut and paste from the page the path above the Patient Name or Patient ID (it points to where your data is located) and change to that directory (e.g. cd MR0000.MOD/LMRC3T.STA/20050623.DAY/1.2.840.113619.2.136.1762888421.2231.1119526620.0.UID/)&lt;br /&gt;
* from this directory, copy your data to a safe place as it will be deleted 1 week after it is last accessed &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:image_info.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Below is an example of how the directories might be organized in /spl/tmp/incoming:&lt;br /&gt;
&lt;br /&gt;
./MR0000.MOD/BWOW1.STA&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000001.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000002.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000003.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000005.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000006.SER&lt;br /&gt;
./CT0000.MOD&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID/000002.SER&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following describes the hierarchy of the directories, using the CT0000.MOD directory above as an example:&lt;br /&gt;
&lt;br /&gt;
* CT0000.MOD: the modality, usually either CT or MR for most SPL data&lt;br /&gt;
* BWCTPIKE.STA: the StationName that the images were transferred from.&lt;br /&gt;
* 20040610.DAY: the date of the scan. This scan was done on June 10, 2004.&lt;br /&gt;
* 1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID: this annoyingly long string is the StudyInstanceUID, which uniquely identifies this study. It does not provide much useful information to most users; the beginning part (1.3.12.2.1107.5.1.4) is the Implementation Class UID and identifies the images in this directory as coming from a Siemens scanner. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Software Available for Sharing&lt;br /&gt;
&lt;br /&gt;
The software used here comprises three main parts:&lt;br /&gt;
&lt;br /&gt;
1) A proprietary DICOM listener that receives the data. We have a license for this from the Dejarnette Corporation; it is not sharable.&lt;br /&gt;
&lt;br /&gt;
2) A print_header program that extracts information from DICOM image headers. This is our software; it is available for sharing and runs on Solaris and Linux.&lt;br /&gt;
&lt;br /&gt;
3) A series of shell and python scripts that collect the image information and generate a web page describing the data. This software is very specific to our site and likely not useful elsewhere.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7950</id>
		<title>IGT:Image Transfer</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7950"/>
		<updated>2007-02-21T23:02:09Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Contact: Mark Anderson&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Pushing images from scanners to SPL (DICOM)'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is a good way for researchers to get their data to the SPL after a scan has been done. Most of the scanners have been configured to allow images to be sent to the SPL. The user name of the sender of the images is recorded to maintain HIPAA compliance. At the scanner select either LISA or SPL as the DICOM destination and transfer the images. Data will be sent to the directory /spl/tmp/incoming. If there is a lot of data in /spl/tmp/incoming it may take a while to locate your particular case. One method is to find timestamps on files in /spl/tmp/incoming that match the time that the images were transferred from the scanners or see the section below &amp;quot;Finding DICOM data in /spl/tmp/incoming after it has been transferred&amp;quot;. If non-clinical scans are not transferred to the SPL or stored on MOD, there is a good chance the data will be deleted unless a prominent note is left for the techs and they are reminded not to delete the data. Clinical cases are archived and can be restored via the following method.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Restoring images from PACS from 09/1998-present (DICOM)'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is the best way to get all CT and MR images from September 1998 to the present that are not on the scanners. Data can be sent directly to /spl/tmp/incoming at the SPL from any of the IMPAX systems in the reading rooms or from a web-based IMPAX service tool. Send email to [mailto:mark@bwh.harvard.edu mark@bwh.harvard.edu] or [mailto:marianna@bwh.harvard.edu marianna@bwh.harvard.edu] to get cases transferred via this method. If you plan to use this method frequently, you may want to get your own BICS account with transfer privileges by calling the help desk at x2-5927.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Finding DICOM data in /spl/tmp/incoming after it has been transferred'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The easiest way to find your DICOM data is to point your browser to /spl/tmp/incoming/ and look for a file with a name beginning with&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of&lt;br /&gt;
&lt;br /&gt;
The file name will contain a time stamp and look similar to the filename below.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
DICOM_files_as_of:12_30_04_at_14:10:28.html&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
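As a rough sketch (not part of the SPL software), the update time embedded in such a filename can be recovered with Python's strptime; the helper name below is hypothetical:

```python
from datetime import datetime

# Hypothetical helper: recover the update time from a listing filename
# such as "DICOM_files_as_of:12_30_04_at_14:10:28.html".
def listing_timestamp(filename):
    stamp = filename[len("DICOM_files_as_of:"):-len(".html")]
    return datetime.strptime(stamp, "%m_%d_%y_at_%H:%M:%S")

print(listing_timestamp("DICOM_files_as_of:12_30_04_at_14:10:28.html"))
```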
This file is updated every hour on the hour with a list of the current contents of /spl/tmp/incoming/, and the time of the update is part of the file name. The file above shows the contents of /spl/tmp/incoming as of Dec 30, 2004 at 2:10 in the afternoon. After clicking on the DICOM html file you will see information similar to the following, which should help you locate your data. One way to use a page like the one below is to do the following:&lt;br /&gt;
&lt;br /&gt;
* open a command window and change to the Root Directory (e.g. cd /spl/tmp/incoming)&lt;br /&gt;
* load the DICOM_files_as_of:**********.html page into your browser&lt;br /&gt;
* use the browser's Edit-&amp;gt;Find in This Page function and type in the Patient ID or Patient Name. If you get the alert message &amp;quot;The text you entered was not found.&amp;quot;, then either you entered incorrect information or you need to wait until the page is updated at the next hour.&lt;br /&gt;
* If your Patient Name or Patient ID is found in the DICOM html page, copy the path shown above the Patient Name or Patient ID (it points to where your data is located) and change to that directory (e.g. cd MR0000.MOD/LMRC3T.STA/20050623.DAY/1.2.840.113619.2.136.1762888421.2231.1119526620.0.UID/)&lt;br /&gt;
* from this directory, copy your data to a safe place, as it will be deleted 1 week after it is last accessed&lt;br /&gt;
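The manual search above could also be scripted. The sketch below assumes a hypothetical plain-text dump of the hourly listing in which each study path is followed by a line of patient details; that layout is an assumption for illustration, not the actual page format:

```python
# Hypothetical sketch: scan listing lines for a patient and report the
# study paths that precede the matching patient-detail lines.
def find_case(listing_lines, patient):
    hits = []
    last_path = None
    for raw in listing_lines:
        line = raw.strip()
        if line.endswith(".UID") or line.endswith(".SER"):
            last_path = line          # remember the most recent study path
        elif patient.lower() in line.lower() and last_path is not None:
            hits.append(last_path)    # patient details refer to that path
    return hits
```

With data like the listing shown below this returns the directory to `cd` into.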
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:image_info.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Below is an example of how the directories might be organized in /spl/tmp/incoming:&lt;br /&gt;
&lt;br /&gt;
./MR0000.MOD/BWOW1.STA&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000001.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000002.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000003.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000005.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000006.SER&lt;br /&gt;
./CT0000.MOD&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID/000002.SER&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following describes the hierarchy of the directories (we will use the CT0000.MOD directory above as an example)&lt;br /&gt;
&lt;br /&gt;
* CT0000.MOD: the modality; usually either CT or MR for most SPL data&lt;br /&gt;
* BWCTPIKE.STA: the StationName the image was transferred from&lt;br /&gt;
* 20040610.DAY: the date of the scan; this scan was done on June 10, 2004&lt;br /&gt;
* 1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID: this annoyingly long string is the StudyInstanceUID, which uniquely identifies the study. It does not provide much useful information to most users, but the beginning of the string (1.3.12.2.1107.5.1.4) is the Implementation Class UID and identifies the images in this directory as coming from a Siemens scanner.&lt;br /&gt;
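The naming scheme above is regular enough to split mechanically. This is a sketch, not SPL software; the Siemens UID-root mapping is taken from the example in the text, and a real deployment would need a fuller vendor table:

```python
from datetime import datetime

# UID root observed in the example above; other vendors would need entries too.
KNOWN_UID_ROOTS = {"1.3.12.2.1107": "Siemens"}

def parse_incoming(path):
    """Split an incoming path into modality, station, date, UID, vendor guess."""
    parts = path.strip("./").split("/")
    modality = parts[0].removesuffix(".MOD")
    station = parts[1].removesuffix(".STA")
    day = datetime.strptime(parts[2].removesuffix(".DAY"), "%Y%m%d").date()
    uid = parts[3].removesuffix(".UID")
    vendor = "unknown"
    for root, name in KNOWN_UID_ROOTS.items():
        if uid.startswith(root):
            vendor = name
    return modality, station, day, uid, vendor
```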
&lt;br /&gt;
&lt;br /&gt;
'''Software Available for Sharing'''&lt;br /&gt;
&lt;br /&gt;
The software used here comprises three main parts:&lt;br /&gt;
&lt;br /&gt;
1) A proprietary DICOM listener that receives the data. We have a license for this from the Dejarnette Corporation, so it is not sharable.&lt;br /&gt;
&lt;br /&gt;
2) A print_header program that gleans information from DICOM image headers. This is our software; it is available for sharing and runs on Solaris and Linux.&lt;br /&gt;
&lt;br /&gt;
3) A series of shell and python scripts that collect the image information and generate a web page describing the data. This software is very specific to our site and likely not useful elsewhere.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7949</id>
		<title>IGT:Image Transfer</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7949"/>
		<updated>2007-02-21T23:01:18Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Contact: Mark Anderson&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Pushing images from scanners to SPL (DICOM)'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is a good way for researchers to get their data to the SPL after a scan has been done. Most of the scanners have been configured to allow images to be sent to the SPL. The user name of the sender of the images is recorded to maintain HIPAA compliance. At the scanner select either LISA or SPL as the DICOM destination and transfer the images. Data will be sent to the directory /spl/tmp/incoming. If there is a lot of data in /spl/tmp/incoming it may take a while to locate your particular case. One method is to find timestamps on files in /spl/tmp/incoming that match the time that the images were transferred from the scanners or see the section below &amp;quot;Finding DICOM data in /spl/tmp/incoming after it has been transferred&amp;quot;. If non-clinical scans are not transferred to the SPL or stored on MOD, there is a good chance the data will be deleted unless a prominent note is left for the techs and they are reminded not to delete the data. Clinical cases are archived and can be restored via the following method.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Restoring images from PACS from 09/1998-present (DICOM)'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is the best way to get all CT and MR images from September 1998 to the present that are not on the scanners. Data can be sent directly to /spl/tmp/incoming at the SPL from any of the IMPAX systems in the reading rooms or from a web-based IMPAX service tool. Send email to [mailto:mark@bwh.harvard.edu mark@bwh.harvard.edu] or [mailto:marianna@bwh.harvard.edu marianna@bwh.harvard.edu] to get cases transferred via this method. If you plan to use this method frequently, you may want to get your own BICS account with transfer privileges by calling the help desk at x2-5927.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Finding DICOM data in /spl/tmp/incoming after it has been transferred'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The easiest way to find your DICOM data is to point your browser to /spl/tmp/incoming/ and look for a file with a name beginning with&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of&lt;br /&gt;
&lt;br /&gt;
The file name will contain a time stamp and look similar to the filename below.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
DICOM_files_as_of:12_30_04_at_14:10:28.html&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This file is updated every hour on the hour with a list of the current contents of /spl/tmp/incoming/, and the time of the update is part of the file name. The file above shows the contents of /spl/tmp/incoming as of Dec 30, 2004 at 2:10 in the afternoon. After clicking on the DICOM html file you will see information similar to the following, which should help you locate your data. One way to use a page like the one below is to do the following:&lt;br /&gt;
&lt;br /&gt;
* open a command window and change to the Root Directory (e.g. cd /spl/tmp/incoming)&lt;br /&gt;
* load the DICOM_files_as_of:**********.html page into your browser&lt;br /&gt;
* use the browser's Edit-&amp;gt;Find in This Page function and type in the Patient ID or Patient Name. If you get the alert message &amp;quot;The text you entered was not found.&amp;quot;, then either you entered incorrect information or you need to wait until the page is updated at the next hour.&lt;br /&gt;
* If your Patient Name or Patient ID is found in the DICOM html page, copy the path shown above the Patient Name or Patient ID (it points to where your data is located) and change to that directory (e.g. cd MR0000.MOD/LMRC3T.STA/20050623.DAY/1.2.840.113619.2.136.1762888421.2231.1119526620.0.UID/)&lt;br /&gt;
* from this directory, copy your data to a safe place, as it will be deleted 1 week after it is last accessed&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:image_info.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Below is an example of how the directories might be organized in /spl/tmp/incoming:&lt;br /&gt;
&lt;br /&gt;
./MR0000.MOD/BWOW1.STA&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000001.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000002.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000003.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000005.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000006.SER&lt;br /&gt;
./CT0000.MOD&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID/000002.SER&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following describes the hierarchy of the directories (we will use the CT0000.MOD directory above as an example)&lt;br /&gt;
&lt;br /&gt;
* CT0000.MOD: the modality; usually either CT or MR for most SPL data&lt;br /&gt;
* BWCTPIKE.STA: the StationName the image was transferred from&lt;br /&gt;
* 20040610.DAY: the date of the scan; this scan was done on June 10, 2004&lt;br /&gt;
* 1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID: this annoyingly long string is the StudyInstanceUID, which uniquely identifies the study. It does not provide much useful information to most users, but the beginning of the string (1.3.12.2.1107.5.1.4) is the Implementation Class UID and identifies the images in this directory as coming from a Siemens scanner.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Software Available for Sharing'''&lt;br /&gt;
&lt;br /&gt;
The software used here comprises three main parts:&lt;br /&gt;
&lt;br /&gt;
1) A proprietary DICOM listener that receives the data. We have a license for this from the Dejarnette Corporation, so it is not sharable.&lt;br /&gt;
&lt;br /&gt;
2) A print_header program that gleans information from DICOM image headers. This is our software; it is available for sharing and runs on Solaris and Linux.&lt;br /&gt;
&lt;br /&gt;
3) A series of shell and python scripts that collect the image information and generate a web page describing the data. This software is very specific to our site and likely not useful elsewhere.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7948</id>
		<title>IGT:Image Transfer</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7948"/>
		<updated>2007-02-21T22:59:39Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Contact: Mark Anderson&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Pushing images from scanners to SPL (DICOM)'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is a good way for researchers to get their data to the SPL after a scan has been done. Most of the scanners have been configured to allow images to be sent to the SPL. The user name of the sender of the images is recorded to maintain HIPAA compliance. At the scanner select either LISA or SPL as the DICOM destination and transfer the images. Data will be sent to the directory /spl/tmp/incoming. If there is a lot of data in /spl/tmp/incoming it may take a while to locate your particular case. One method is to find timestamps on files in /spl/tmp/incoming that match the time that the images were transferred from the scanners or see the section below &amp;quot;Finding DICOM data in /spl/tmp/incoming after it has been transferred&amp;quot;. If non-clinical scans are not transferred to the SPL or stored on MOD, there is a good chance the data will be deleted unless a prominent note is left for the techs and they are reminded not to delete the data. Clinical cases are archived and can be restored via the following method.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Restoring images from PACS from 09/1998-present (DICOM)'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is the best way to get all CT and MR images from September 1998 to the present that are not on the scanners. Data can be sent directly to /spl/tmp/incoming at the SPL from any of the IMPAX systems in the reading rooms or from a web-based IMPAX service tool. Send email to [mailto:mark@bwh.harvard.edu mark@bwh.harvard.edu] or [mailto:marianna@bwh.harvard.edu marianna@bwh.harvard.edu] to get cases transferred via this method. If you plan to use this method frequently, you may want to get your own BICS account with transfer privileges by calling the help desk at x2-5927.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Finding DICOM data in /spl/tmp/incoming after it has been transferred'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The easiest way to find your DICOM data is to point your browser to /spl/tmp/incoming/ and look for a file with a name beginning with&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of&lt;br /&gt;
&lt;br /&gt;
The file name will contain a time stamp and look similar to the filename below.&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of:12_30_04_at_14:10:28.html&lt;br /&gt;
&lt;br /&gt;
This file is updated every hour on the hour with a list of the current contents of /spl/tmp/incoming/, and the time of the update is part of the file name. The file above shows the contents of /spl/tmp/incoming as of Dec 30, 2004 at 2:10 in the afternoon. After clicking on the DICOM html file you will see information similar to the following, which should help you locate your data. One way to use a page like the one below is to do the following:&lt;br /&gt;
&lt;br /&gt;
* open a command window and change to the Root Directory (e.g. cd /spl/tmp/incoming)&lt;br /&gt;
* load the DICOM_files_as_of:**********.html page into your browser&lt;br /&gt;
* use the browser's Edit-&amp;gt;Find in This Page function and type in the Patient ID or Patient Name. If you get the alert message &amp;quot;The text you entered was not found.&amp;quot;, then either you entered incorrect information or you need to wait until the page is updated at the next hour.&lt;br /&gt;
* If your Patient Name or Patient ID is found in the DICOM html page, copy the path shown above the Patient Name or Patient ID (it points to where your data is located) and change to that directory (e.g. cd MR0000.MOD/LMRC3T.STA/20050623.DAY/1.2.840.113619.2.136.1762888421.2231.1119526620.0.UID/)&lt;br /&gt;
* from this directory, copy your data to a safe place, as it will be deleted 1 week after it is last accessed&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:image_info.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Below is an example of how the directories might be organized in /spl/tmp/incoming:&lt;br /&gt;
&lt;br /&gt;
./MR0000.MOD/BWOW1.STA&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000001.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000002.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000003.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000005.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000006.SER&lt;br /&gt;
./CT0000.MOD&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID/000002.SER&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following describes the hierarchy of the directories (we will use the CT0000.MOD directory above as an example)&lt;br /&gt;
&lt;br /&gt;
* CT0000.MOD: the modality; usually either CT or MR for most SPL data&lt;br /&gt;
* BWCTPIKE.STA: the StationName the image was transferred from&lt;br /&gt;
* 20040610.DAY: the date of the scan; this scan was done on June 10, 2004&lt;br /&gt;
* 1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID: this annoyingly long string is the StudyInstanceUID, which uniquely identifies the study. It does not provide much useful information to most users, but the beginning of the string (1.3.12.2.1107.5.1.4) is the Implementation Class UID and identifies the images in this directory as coming from a Siemens scanner.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Software Available for Sharing'''&lt;br /&gt;
&lt;br /&gt;
The software used here comprises three main parts:&lt;br /&gt;
&lt;br /&gt;
1) A proprietary DICOM listener that receives the data. We have a license for this from the Dejarnette Corporation, so it is not sharable.&lt;br /&gt;
&lt;br /&gt;
2) A print_header program that gleans information from DICOM image headers. This is our software; it is available for sharing and runs on Solaris and Linux.&lt;br /&gt;
&lt;br /&gt;
3) A series of shell and python scripts that collect the image information and generate a web page describing the data. This software is very specific to our site and likely not useful elsewhere.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7947</id>
		<title>IGT:Image Transfer</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7947"/>
		<updated>2007-02-21T22:52:58Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Contact: Mark Anderson&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Pushing images from scanners to SPL (DICOM)'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is a good way for researchers to get their data to the SPL after a scan has been done. Most of the scanners have been configured to allow images to be sent to the SPL. The user name of the sender of the images is recorded to maintain HIPAA compliance. At the scanner select either LISA or SPL as the DICOM destination and transfer the images. Data will be sent to the directory /spl/tmp/incoming. If there is a lot of data in /spl/tmp/incoming it may take a while to locate your particular case. One method is to find timestamps on files in /spl/tmp/incoming that match the time that the images were transferred from the scanners or see the section below &amp;quot;Finding DICOM data in /spl/tmp/incoming after it has been transferred&amp;quot;. If non-clinical scans are not transferred to the SPL or stored on MOD, there is a good chance the data will be deleted unless a prominent note is left for the techs and they are reminded not to delete the data. Clinical cases are archived and can be restored via the following method.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Restoring images from PACS from 09/1998-present (DICOM)'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is the best way to get all CT and MR images from September 1998 to the present that are not on the scanners. Data can be sent directly to /spl/tmp/incoming at the SPL from any of the IMPAX systems in the reading rooms or from a web-based IMPAX service tool. Send email to [mailto:mark@bwh.harvard.edu mark@bwh.harvard.edu] or [mailto:marianna@bwh.harvard.edu marianna@bwh.harvard.edu] to get cases transferred via this method. If you plan to use this method frequently, you may want to get your own BICS account with transfer privileges by calling the help desk at x2-5927.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Finding DICOM data in /spl/tmp/incoming after it has been transferred'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The easiest way to find your DICOM data is to point your browser to /spl/tmp/incoming/ and look for a file with a name beginning with&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of&lt;br /&gt;
&lt;br /&gt;
The file name will contain a time stamp and look similar to the filename below.&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of:12_30_04_at_14:10:28.html&lt;br /&gt;
&lt;br /&gt;
This file is updated every hour on the hour with a list of the current contents of /spl/tmp/incoming/, and the time of the update is part of the file name. The file above shows the contents of /spl/tmp/incoming as of Dec 30, 2004 at 2:10 in the afternoon. After clicking on the DICOM html file you will see information similar to the following, which should help you locate your data. One way to use a page like the one below is to do the following:&lt;br /&gt;
&lt;br /&gt;
* open a command window and change to the Root Directory (e.g. cd /spl/tmp/incoming)&lt;br /&gt;
* load the DICOM_files_as_of:**********.html page into your browser&lt;br /&gt;
* use the browser's Edit-&amp;gt;Find in This Page function and type in the Patient ID or Patient Name. If you get the alert message &amp;quot;The text you entered was not found.&amp;quot;, then either you entered incorrect information or you need to wait until the page is updated at the next hour.&lt;br /&gt;
* If your Patient Name or Patient ID is found in the DICOM html page, copy the path shown above the Patient Name or Patient ID (it points to where your data is located) and change to that directory (e.g. cd MR0000.MOD/LMRC3T.STA/20050623.DAY/1.2.840.113619.2.136.1762888421.2231.1119526620.0.UID/)&lt;br /&gt;
* from this directory, copy your data to a safe place, as it will be deleted 1 week after it is last accessed&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:image_info.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Below is an example of how the directories might be organized in /spl/tmp/incoming:&lt;br /&gt;
&lt;br /&gt;
./MR0000.MOD/BWOW1.STA&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000001.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000002.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000003.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000005.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000006.SER&lt;br /&gt;
./CT0000.MOD&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID/000002.SER&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following describes the hierarchy of the directories (we will use the CT0000.MOD directory above as an example)&lt;br /&gt;
&lt;br /&gt;
* CT0000.MOD: the modality; usually either CT or MR for most SPL data&lt;br /&gt;
* BWCTPIKE.STA: the StationName the image was transferred from&lt;br /&gt;
* 20040610.DAY: the date of the scan; this scan was done on June 10, 2004&lt;br /&gt;
* 1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID: this annoyingly long string is the StudyInstanceUID, which uniquely identifies the study. It does not provide much useful information to most users, but the beginning of the string (1.3.12.2.1107.5.1.4) is the Implementation Class UID and identifies the images in this directory as coming from a Siemens scanner.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Software Available for Sharing'''&lt;br /&gt;
&lt;br /&gt;
The software used here comprises three main parts:&lt;br /&gt;
&lt;br /&gt;
1) A proprietary DICOM listener that receives the data. We have a license for this from the Dejarnette Corporation, so it is not sharable.&lt;br /&gt;
&lt;br /&gt;
2) A print_header program that gleans information from DICOM image headers. This is our software; it is available for sharing and runs on Solaris and Linux.&lt;br /&gt;
&lt;br /&gt;
3) A series of shell and python scripts that collect the image information and generate a web page describing the data. This software is very specific to our site and likely not useful elsewhere.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7946</id>
		<title>IGT:Image Transfer</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7946"/>
		<updated>2007-02-21T22:50:10Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Contact: Mark Anderson&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Pushing images from scanners to SPL (DICOM)'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is a good way for researchers to get their data to the SPL after a scan has been done. Most of the scanners have been configured to allow images to be sent to the SPL. The user name of the sender of the images is recorded to maintain HIPAA compliance. At the scanner select either LISA or SPL as the DICOM destination and transfer the images. Data will be sent to the directory /spl/tmp/incoming. If there is a lot of data in /spl/tmp/incoming it may take a while to locate your particular case. One method is to find timestamps on files in /spl/tmp/incoming that match the time that the images were transferred from the scanners, or see the section below &amp;quot;Finding DICOM data in /spl/tmp/incoming after it has been transferred&amp;quot;. If non-clinical scans are not transferred to the SPL or stored on MOD, there is a good chance the data will be deleted unless a prominent note is left for the techs and they are reminded not to delete the data. Clinical cases are archived and can be restored via the following method.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Restoring images from PACS from 09/1998-present (DICOM)'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is the best way to get all CT and MR images from September 1998 to the present that are not on the scanners. Data can be sent directly to /spl/tmp/incoming at the SPL from any of the IMPAX systems in the reading rooms or from a web-based IMPAX service tool. Send email to [mailto:mark@bwh.harvard.edu mark@bwh.harvard.edu] or [mailto:marianna@bwh.harvard.edu marianna@bwh.harvard.edu] to get cases transferred via this method. If you plan to use this method frequently, you may want to get your own BICS account with transfer privileges by calling the help desk at x2-5927.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Finding DICOM data in /spl/tmp/incoming after it has been transferred'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The easiest way to find your DICOM data is to point your browser to /spl/tmp/incoming/ and look for a file with a name beginning with&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of&lt;br /&gt;
&lt;br /&gt;
The file name will contain a time stamp and look similar to the filename below.&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of:12_30_04_at_14:10:28.html&lt;br /&gt;
&lt;br /&gt;
This file is updated every hour on the hour with a list of the current contents of /spl/tmp/incoming/, and the time of the update is part of the file name. The file above shows the contents of /spl/tmp/incoming as of Dec 30, 2004 at 2:10 in the afternoon. After clicking on the DICOM html file you will see information similar to the following, which should help you locate your data. One way to use a page like the one below is to do the following:&lt;br /&gt;
&lt;br /&gt;
    * open a command window and change to the root directory (e.g., cd /spl/tmp/incoming)&lt;br /&gt;
    * load the DICOM_files_as_of:**********.html page into your browser&lt;br /&gt;
    * use the browser's edit-&amp;gt;Find in This Page function and type in the Patient ID or Patient Name. If you get the alert message &amp;quot;The text you entered was not found.&amp;quot;, then either you have entered incorrect information or you will need to wait until the page is updated on the next hour.&lt;br /&gt;
    * if your Patient Name or Patient ID is found in the DICOM HTML page, cut and paste the path shown above the Patient Name or Patient ID (it points to where your data is located) and change to that directory (e.g., cd MR0000.MOD/LMRC3T.STA/20050623.DAY/1.2.840.113619.2.136.1762888421.2231.1119526620.0.UID/)&lt;br /&gt;
    * from this directory, copy your data to a safe place, as it will be deleted 1 week after it is last accessed&lt;br /&gt;
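If several hourly listings have accumulated, picking the newest one by eye is error-prone. The Python sketch below (illustrative only; it assumes the file-name format shown above and is not part of the SPL scripts) parses the timestamp out of a listing name and selects the newest listing:&lt;br /&gt;

```python
import re
from datetime import datetime

def listing_timestamp(filename):
    """Parse the timestamp out of an hourly listing name such as
    'DICOM_files_as_of:12_30_04_at_14:10:28.html'.
    Assumed format: month_day_2-digit-year at hour:minute:second."""
    m = re.match(
        r"DICOM_files_as_of:(\d+)_(\d+)_(\d+)_at_(\d+):(\d+):(\d+)\.html$",
        filename)
    if m is None:
        raise ValueError("not a DICOM listing file: %s" % filename)
    month, day, yy, hh, mm, ss = (int(g) for g in m.groups())
    return datetime(2000 + yy, month, day, hh, mm, ss)

def newest_listing(filenames):
    """Return the most recently generated listing among the given names."""
    return max(filenames, key=listing_timestamp)
```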
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:image_info.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Below is an example of how the directories might be organized in /spl/tmp/incoming:&lt;br /&gt;
&lt;br /&gt;
./MR0000.MOD/BWOW1.STA&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000001.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000002.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000003.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000005.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000006.SER&lt;br /&gt;
./CT0000.MOD&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID/000002.SER&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following describes the hierarchy of the directories (we will use the CT0000.MOD directory above as an example):&lt;br /&gt;
&lt;br /&gt;
    * CT0000.MOD: the modality, usually either CT or MR for most SPL data&lt;br /&gt;
    * BWCTPIKE.STA: the StationName of the scanner the images were transferred from&lt;br /&gt;
    * 20040610.DAY: the date of the scan. This scan was done on June 10, 2004.&lt;br /&gt;
    * 1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID: this annoyingly long string is the StudyInstanceUID, which uniquely identifies the study. It does not provide much useful information to most users, but the beginning of the string (1.3.12.2.1107.5.1.4) is the Implementation Class UID and identifies the images in this directory as coming from a Siemens scanner.&lt;br /&gt;
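Because each directory level encodes its meaning in its suffix (.MOD, .STA, .DAY, .UID, .SER), such a path can be decomposed mechanically. A minimal Python sketch of that decomposition (the function name is hypothetical; this is not part of the site's scripts):&lt;br /&gt;

```python
def parse_incoming_path(path):
    """Split an /spl/tmp/incoming path into labeled fields based on the
    suffix carried by each directory level (.MOD, .STA, .DAY, .UID, .SER).
    Illustrative only; assumes the layout described above."""
    labels = {"MOD": "modality", "STA": "station", "DAY": "scan_date",
              "UID": "study_instance_uid", "SER": "series"}
    fields = {}
    for part in path.lstrip("./").split("/"):
        stem, _, suffix = part.rpartition(".")
        if suffix in labels:
            fields[labels[suffix]] = stem
    return fields
```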
&lt;br /&gt;
&lt;br /&gt;
Software Available for Sharing&lt;br /&gt;
&lt;br /&gt;
The software used here comprises three main parts:&lt;br /&gt;
&lt;br /&gt;
1) A proprietary DICOM listener that receives the data. We have a license for this from the Dejarnette Corporation; it is not sharable.&lt;br /&gt;
&lt;br /&gt;
2) The print_header program, which gleans information from DICOM image headers. This is our software; it is available for sharing and runs on Solaris and Linux.&lt;br /&gt;
&lt;br /&gt;
3) A series of shell and Python scripts that collect the image information and generate a web page describing the data. This software is very specific to our site and likely not useful elsewhere.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7937</id>
		<title>IGT:Image Transfer</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7937"/>
		<updated>2007-02-21T00:30:32Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Contact: Mark Anderson&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Pushing images from scanners to SPL (DICOM)'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is a good way for researchers to get their data to the SPL after a scan has been done. Most of the scanners have been configured to allow images to be sent to the SPL. The user name of the sender of the images is recorded to maintain HIPAA compliance. At the scanner, select either LISA or SPL as the DICOM destination and transfer the images. Data will be sent to the directory /spl/tmp/incoming. If there is a lot of data in /spl/tmp/incoming it may take a while to locate your particular case. One method is to find timestamps on files in /spl/tmp/incoming that match the time that the images were transferred from the scanners, or see &amp;quot;Finding DICOM data in /spl/tmp/incoming after it has been transferred&amp;quot; below. If non-clinical scans are not transferred to the SPL or stored on MOD, there is a good chance the data will be deleted unless a prominent note is left for the techs reminding them not to delete the data. Clinical cases are archived and can be restored via the following method.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Restoring images from PACS from 09/1998-present (DICOM)'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
This is the best way to get all CT and MR images from September 1998 to the present that are not on the scanners. Data can be sent directly to /spl/tmp/incoming at the SPL from any of the IMPAX systems in the reading rooms or from a web-based IMPAX service tool. Send email to Mark Anderson (mark@bwh.harvard.edu) or Marianna Jakab (marianna@bwh.harvard.edu) to get cases transferred via this method. If you plan to use this method frequently, you may want to get your own BICS account with transfer privileges by calling the help desk at x2-5927.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
'''Finding DICOM data in /spl/tmp/incoming after it has been transferred'''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The easiest way to find your DICOM data is to point your browser to /spl/tmp/incoming/ and look for a file with a name beginning with&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of&lt;br /&gt;
&lt;br /&gt;
The file name will contain a time stamp and look similar to the filename below.&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of:12_30_04_at_14:10:28.html&lt;br /&gt;
&lt;br /&gt;
This file is updated every hour on the hour with a list of the current contents of /spl/tmp/incoming/, and the time that the file was updated is part of the file name. The file above shows the contents of /spl/tmp/incoming as of Dec 30, 2004 at 2:10 in the afternoon. After clicking on the DICOM HTML file you will see information similar to the following, which should help you locate your data. One way to use a page like the one below is to do the following:&lt;br /&gt;
&lt;br /&gt;
    * open a command window and change to the root directory (e.g., cd /spl/tmp/incoming)&lt;br /&gt;
    * load the DICOM_files_as_of:**********.html page into your browser&lt;br /&gt;
    * use the browser's edit-&amp;gt;Find in This Page function and type in the Patient ID or Patient Name. If you get the alert message &amp;quot;The text you entered was not found.&amp;quot;, then either you have entered incorrect information or you will need to wait until the page is updated on the next hour.&lt;br /&gt;
    * if your Patient Name or Patient ID is found in the DICOM HTML page, cut and paste the path shown above the Patient Name or Patient ID (it points to where your data is located) and change to that directory (e.g., cd MR0000.MOD/LMRC3T.STA/20050623.DAY/1.2.840.113619.2.136.1762888421.2231.1119526620.0.UID/)&lt;br /&gt;
    * from this directory, copy your data to a safe place, as it will be deleted 1 week after it is last accessed&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:image_info.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Below is an example of how the directories might be organized in /spl/tmp/incoming:&lt;br /&gt;
&lt;br /&gt;
./MR0000.MOD/BWOW1.STA&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000001.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000002.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000003.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000005.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000006.SER&lt;br /&gt;
./CT0000.MOD&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID/000002.SER&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following describes the hierarchy of the directories (we will use the CT0000.MOD directory above as an example):&lt;br /&gt;
&lt;br /&gt;
    * CT0000.MOD: the modality, usually either CT or MR for most SPL data&lt;br /&gt;
    * BWCTPIKE.STA: the StationName of the scanner the images were transferred from&lt;br /&gt;
    * 20040610.DAY: the date of the scan. This scan was done on June 10, 2004.&lt;br /&gt;
    * 1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID: this annoyingly long string is the StudyInstanceUID, which uniquely identifies the study. It does not provide much useful information to most users, but the beginning of the string (1.3.12.2.1107.5.1.4) is the Implementation Class UID and identifies the images in this directory as coming from a Siemens scanner.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Software Available for Sharing&lt;br /&gt;
&lt;br /&gt;
The software used here comprises three main parts:&lt;br /&gt;
&lt;br /&gt;
1) A proprietary DICOM listener that receives the data. We have a license for this from the Dejarnette Corporation; it is not sharable.&lt;br /&gt;
&lt;br /&gt;
2) The print_header program, which gleans information from DICOM image headers. This is our software; it is available for sharing and runs on Solaris and Linux.&lt;br /&gt;
&lt;br /&gt;
3) A series of shell and Python scripts that collect the image information and generate a web page describing the data. This software is very specific to our site and likely not useful elsewhere.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7936</id>
		<title>IGT:Image Transfer</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7936"/>
		<updated>2007-02-21T00:21:25Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Contact: Mark Anderson&lt;br /&gt;
Transfer Mechanism Currently Used&lt;br /&gt;
&lt;br /&gt;
Pushing images from scanners to SPL (DICOM)&lt;br /&gt;
This is a good way for researchers to get their data to the SPL after a scan has been done. Most of the scanners have been configured to allow images to be sent to the SPL. The user name of the sender of the images is recorded to maintain HIPAA compliance. At the scanner, select either LISA or SPL as the DICOM destination and transfer the images. Data will be sent to the directory /spl/tmp/incoming. If there is a lot of data in /spl/tmp/incoming it may take a while to locate your particular case. One method is to find timestamps on files in /spl/tmp/incoming that match the time that the images were transferred from the scanners, or see &amp;quot;Finding DICOM data in /spl/tmp/incoming after it has been transferred&amp;quot; below. If non-clinical scans are not transferred to the SPL or stored on MOD, there is a good chance the data will be deleted unless a prominent note is left for the techs reminding them not to delete the data. Clinical cases are archived and can be restored via the following method.&lt;br /&gt;
&lt;br /&gt;
Restoring images from PACS from 09/1998-present (DICOM)&lt;br /&gt;
This is the best way to get all CT and MR images from September 1998 to the present that are not on the scanners. Data can be sent directly to /spl/tmp/incoming at the SPL from any of the IMPAX systems in the reading rooms or from a web-based IMPAX service tool. Send email to Mark Anderson (mark@bwh.harvard.edu) or Marianna Jakab (marianna@bwh.harvard.edu) to get cases transferred via this method. If you plan to use this method frequently, you may want to get your own BICS account with transfer privileges by calling the help desk at x2-5927.&lt;br /&gt;
&lt;br /&gt;
Finding DICOM data in /spl/tmp/incoming after it has been transferred&lt;br /&gt;
&lt;br /&gt;
The easiest way to find your DICOM data is to point your browser to /spl/tmp/incoming/ and look for a file with a name beginning with&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of&lt;br /&gt;
&lt;br /&gt;
The file name will contain a time stamp and look similar to the filename below.&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of:12_30_04_at_14:10:28.html&lt;br /&gt;
&lt;br /&gt;
This file is updated every hour on the hour with a list of the current contents of /spl/tmp/incoming/, and the time that the file was updated is part of the file name. The file above shows the contents of /spl/tmp/incoming as of Dec 30, 2004 at 2:10 in the afternoon. After clicking on the DICOM HTML file you will see information similar to the following, which should help you locate your data. One way to use a page like the one below is to do the following:&lt;br /&gt;
&lt;br /&gt;
    * open a command window and change to the root directory (e.g., cd /spl/tmp/incoming)&lt;br /&gt;
    * load the DICOM_files_as_of:**********.html page into your browser&lt;br /&gt;
    * use the browser's edit-&amp;gt;Find in This Page function and type in the Patient ID or Patient Name. If you get the alert message &amp;quot;The text you entered was not found.&amp;quot;, then either you have entered incorrect information or you will need to wait until the page is updated on the next hour.&lt;br /&gt;
    * if your Patient Name or Patient ID is found in the DICOM HTML page, cut and paste the path shown above the Patient Name or Patient ID (it points to where your data is located) and change to that directory (e.g., cd MR0000.MOD/LMRC3T.STA/20050623.DAY/1.2.840.113619.2.136.1762888421.2231.1119526620.0.UID/)&lt;br /&gt;
    * from this directory, copy your data to a safe place, as it will be deleted 1 week after it is last accessed&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:image_info.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
Below is an example of how the directories might be organized in /spl/tmp/incoming:&lt;br /&gt;
&lt;br /&gt;
./MR0000.MOD/BWOW1.STA&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000001.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000002.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000003.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000005.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000006.SER&lt;br /&gt;
./CT0000.MOD&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID/000002.SER&lt;br /&gt;
&lt;br /&gt;
The following describes the hierarchy of the directories (we will use the CT0000.MOD directory above as an example):&lt;br /&gt;
&lt;br /&gt;
    * CT0000.MOD: the modality, usually either CT or MR for most SPL data&lt;br /&gt;
    * BWCTPIKE.STA: the StationName of the scanner the images were transferred from&lt;br /&gt;
    * 20040610.DAY: the date of the scan. This scan was done on June 10, 2004.&lt;br /&gt;
    * 1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID: this annoyingly long string is the StudyInstanceUID, which uniquely identifies the study. It does not provide much useful information to most users, but the beginning of the string (1.3.12.2.1107.5.1.4) is the Implementation Class UID and identifies the images in this directory as coming from a Siemens scanner.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Software Available for Sharing&lt;br /&gt;
&lt;br /&gt;
The software used here comprises three main parts:&lt;br /&gt;
&lt;br /&gt;
1) A proprietary DICOM listener that receives the data. We have a license for this from the Dejarnette Corporation; it is not sharable.&lt;br /&gt;
&lt;br /&gt;
2) The print_header program, which gleans information from DICOM image headers. This is our software; it is available for sharing and runs on Solaris and Linux.&lt;br /&gt;
&lt;br /&gt;
3) A series of shell and Python scripts that collect the image information and generate a web page describing the data. This software is very specific to our site and likely not useful elsewhere.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=File:Image_info.jpg&amp;diff=7935</id>
		<title>File:Image info.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=File:Image_info.jpg&amp;diff=7935"/>
		<updated>2007-02-21T00:15:42Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7934</id>
		<title>IGT:Image Transfer</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7934"/>
		<updated>2007-02-21T00:14:08Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Contact: Mark Anderson&lt;br /&gt;
Transfer Mechanism Currently Used&lt;br /&gt;
&lt;br /&gt;
Pushing images from scanners to SPL (DICOM)&lt;br /&gt;
This is a good way for researchers to get their data to the SPL after a scan has been done. Most of the scanners have been configured to allow images to be sent to the SPL. The user name of the sender of the images is recorded to maintain HIPAA compliance. At the scanner, select either LISA or SPL as the DICOM destination and transfer the images. Data will be sent to the directory /spl/tmp/incoming. If there is a lot of data in /spl/tmp/incoming it may take a while to locate your particular case. One method is to find timestamps on files in /spl/tmp/incoming that match the time that the images were transferred from the scanners, or see &amp;quot;Finding DICOM data in /spl/tmp/incoming after it has been transferred&amp;quot; below. If non-clinical scans are not transferred to the SPL or stored on MOD, there is a good chance the data will be deleted unless a prominent note is left for the techs reminding them not to delete the data. Clinical cases are archived and can be restored via the following method.&lt;br /&gt;
&lt;br /&gt;
Restoring images from PACS from 09/1998-present (DICOM)&lt;br /&gt;
This is the best way to get all CT and MR images from September 1998 to the present that are not on the scanners. Data can be sent directly to /spl/tmp/incoming at the SPL from any of the IMPAX systems in the reading rooms or from a web-based IMPAX service tool. Send email to Mark Anderson (mark@bwh.harvard.edu) or Marianna Jakab (marianna@bwh.harvard.edu) to get cases transferred via this method. If you plan to use this method frequently, you may want to get your own BICS account with transfer privileges by calling the help desk at x2-5927.&lt;br /&gt;
&lt;br /&gt;
Finding DICOM data in /spl/tmp/incoming after it has been transferred&lt;br /&gt;
&lt;br /&gt;
The easiest way to find your DICOM data is to point your browser to /spl/tmp/incoming/ and look for a file with a name beginning with&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of&lt;br /&gt;
&lt;br /&gt;
The file name will contain a time stamp and look similar to the filename below.&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of:12_30_04_at_14:10:28.html&lt;br /&gt;
&lt;br /&gt;
This file is updated every hour on the hour with a list of the current contents of /spl/tmp/incoming/, and the time that the file was updated is part of the file name. The file above shows the contents of /spl/tmp/incoming as of Dec 30, 2004 at 2:10 in the afternoon. After clicking on the DICOM HTML file you will see information similar to the following, which should help you locate your data. One way to use a page like the one below is to do the following:&lt;br /&gt;
&lt;br /&gt;
    * open a command window and change to the root directory (e.g., cd /spl/tmp/incoming)&lt;br /&gt;
    * load the DICOM_files_as_of:**********.html page into your browser&lt;br /&gt;
    * use the browser's edit-&amp;gt;Find in This Page function and type in the Patient ID or Patient Name. If you get the alert message &amp;quot;The text you entered was not found.&amp;quot;, then either you have entered incorrect information or you will need to wait until the page is updated on the next hour.&lt;br /&gt;
    * if your Patient Name or Patient ID is found in the DICOM HTML page, cut and paste the path shown above the Patient Name or Patient ID (it points to where your data is located) and change to that directory (e.g., cd MR0000.MOD/LMRC3T.STA/20050623.DAY/1.2.840.113619.2.136.1762888421.2231.1119526620.0.UID/)&lt;br /&gt;
    * from this directory, copy your data to a safe place, as it will be deleted 1 week after it is last accessed&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:image_info.jpg]]&lt;br /&gt;
Below is an example of how the directories might be organized in /spl/tmp/incoming:&lt;br /&gt;
&lt;br /&gt;
./MR0000.MOD/BWOW1.STA&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000001.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000002.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000003.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000005.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000006.SER&lt;br /&gt;
./CT0000.MOD&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID/000002.SER&lt;br /&gt;
&lt;br /&gt;
The following describes the hierarchy of the directories (we will use the CT0000.MOD directory above as an example):&lt;br /&gt;
&lt;br /&gt;
    * CT0000.MOD: the modality, usually either CT or MR for most SPL data&lt;br /&gt;
    * BWCTPIKE.STA: the StationName of the scanner the images were transferred from&lt;br /&gt;
    * 20040610.DAY: the date of the scan. This scan was done on June 10, 2004.&lt;br /&gt;
    * 1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID: this annoyingly long string is the StudyInstanceUID, which uniquely identifies the study. It does not provide much useful information to most users, but the beginning of the string (1.3.12.2.1107.5.1.4) is the Implementation Class UID and identifies the images in this directory as coming from a Siemens scanner.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Software Available for Sharing&lt;br /&gt;
&lt;br /&gt;
The software used here comprises three main parts:&lt;br /&gt;
&lt;br /&gt;
1) A proprietary DICOM listener that receives the data. We have a license for this from the Dejarnette Corporation; it is not sharable.&lt;br /&gt;
&lt;br /&gt;
2) The print_header program, which gleans information from DICOM image headers. This is our software; it is available for sharing and runs on Solaris and Linux.&lt;br /&gt;
&lt;br /&gt;
3) A series of shell and Python scripts that collect the image information and generate a web page describing the data. This software is very specific to our site and likely not useful elsewhere.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7933</id>
		<title>IGT:Image Transfer</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=IGT:Image_Transfer&amp;diff=7933"/>
		<updated>2007-02-21T00:09:08Z</updated>

		<summary type="html">&lt;p&gt;Mark: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Contact: Mark Anderson&lt;br /&gt;
Transfer Mechanism Currently Used&lt;br /&gt;
&lt;br /&gt;
Pushing images from scanners to SPL (DICOM)&lt;br /&gt;
This is a good way for researchers to get their data to the SPL after a scan has been done. Most of the scanners have been configured to allow images to be sent to the SPL. The user name of the sender of the images is recorded to maintain HIPAA compliance. At the scanner, select either LISA or SPL as the DICOM destination and transfer the images. Data will be sent to the directory /spl/tmp/incoming. If there is a lot of data in /spl/tmp/incoming it may take a while to locate your particular case. One method is to find timestamps on files in /spl/tmp/incoming that match the time that the images were transferred from the scanners, or see &amp;quot;Finding DICOM data in /spl/tmp/incoming after it has been transferred&amp;quot; below. If non-clinical scans are not transferred to the SPL or stored on MOD, there is a good chance the data will be deleted unless a prominent note is left for the techs reminding them not to delete the data. Clinical cases are archived and can be restored via the following method.&lt;br /&gt;
&lt;br /&gt;
Restoring images from PACS from 09/1998-present (DICOM)&lt;br /&gt;
This is the best way to get all CT and MR images from September 1998 to the present that are not on the scanners. Data can be sent directly to /spl/tmp/incoming at the SPL from any of the IMPAX systems in the reading rooms or from a web-based IMPAX service tool. Send email to Mark Anderson (mark@bwh.harvard.edu) or Marianna Jakab (marianna@bwh.harvard.edu) to get cases transferred via this method. If you plan to use this method frequently, you may want to get your own BICS account with transfer privileges by calling the help desk at x2-5927.&lt;br /&gt;
&lt;br /&gt;
Finding DICOM data in /spl/tmp/incoming after it has been transferred&lt;br /&gt;
&lt;br /&gt;
The easiest way to find your DICOM data is to point your browser to /spl/tmp/incoming/ and look for a file with a name beginning with&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of&lt;br /&gt;
&lt;br /&gt;
The file name will contain a time stamp and look similar to the filename below.&lt;br /&gt;
&lt;br /&gt;
DICOM_files_as_of:12_30_04_at_14:10:28.html&lt;br /&gt;
&lt;br /&gt;
This file is updated every hour on the hour with a list of the current contents of /spl/tmp/incoming/, and the time that the file was updated is part of the file name. The file above shows the contents of /spl/tmp/incoming as of Dec 30, 2004 at 2:10 in the afternoon. After clicking on the DICOM HTML file you will see information similar to the following, which should help you locate your data. One way to use a page like the one below is to do the following:&lt;br /&gt;
&lt;br /&gt;
    * open a command window and change to the root directory (e.g., cd /spl/tmp/incoming)&lt;br /&gt;
    * load the DICOM_files_as_of:**********.html page into your browser&lt;br /&gt;
    * use the browser's edit-&amp;gt;Find in This Page function and type in the Patient ID or Patient Name. If you get the alert message &amp;quot;The text you entered was not found.&amp;quot;, then either you have entered incorrect information or you will need to wait until the page is updated on the next hour.&lt;br /&gt;
    * if your Patient Name or Patient ID is found in the DICOM HTML page, cut and paste the path shown above the Patient Name or Patient ID (it points to where your data is located) and change to that directory (e.g., cd MR0000.MOD/LMRC3T.STA/20050623.DAY/1.2.840.113619.2.136.1762888421.2231.1119526620.0.UID/)&lt;br /&gt;
    * from this directory, copy your data to a safe place, as it will be deleted 1 week after it is last accessed&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:image_info.jpg]]&lt;br /&gt;
Below is an example of how the directories might be organized in /spl/tmp/incoming:&lt;br /&gt;
&lt;br /&gt;
./MR0000.MOD/BWOW1.STA&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000001.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000002.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000003.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000005.SER&lt;br /&gt;
./MR0000.MOD/BWOW1.STA/20040609.DAY/1.2.840.113619.2.5.1762874864.1706.1086778760.584.UID/000006.SER&lt;br /&gt;
./CT0000.MOD&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID&lt;br /&gt;
./CT0000.MOD/BWCTPIKE.STA/20040610.DAY/1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID/000002.SER&lt;br /&gt;
&lt;br /&gt;
The following describes the hierarchy of the directories (we will use the CT0000.MOD directory above as an example):&lt;br /&gt;
&lt;br /&gt;
    * CT0000.MOD: the modality, usually either CT or MR for most SPL data&lt;br /&gt;
    * BWCTPIKE.STA: the StationName of the scanner the images were transferred from&lt;br /&gt;
    * 20040610.DAY: the date of the scan. This scan was done on June 10, 2004.&lt;br /&gt;
    * 1.3.12.2.1107.5.1.4.28154.4.0.4973204933331248.UID: this annoyingly long string is the StudyInstanceUID, which uniquely identifies the study. It does not provide much useful information to most users, but the beginning of the string (1.3.12.2.1107.5.1.4) is the Implementation Class UID and identifies the images in this directory as coming from a Siemens scanner.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Software Available for Sharing&lt;br /&gt;
&lt;br /&gt;
The software used here comprises three main parts:&lt;br /&gt;
&lt;br /&gt;
1) A proprietary DICOM listener that receives the data. We have a license for this from the Dejarnette Corporation; it is not sharable.&lt;br /&gt;
&lt;br /&gt;
2) The print_header program, which gleans information from DICOM image headers. This is our software; it is available for sharing and runs on Solaris and Linux.&lt;br /&gt;
&lt;br /&gt;
3) A series of shell and Python scripts that collect the image information and generate a web page describing the data. This software is very specific to our site and likely not useful elsewhere.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
		
	</entry>
</feed>