
Back to CTSC:ARRA supplement

Harvard Catalyst Medical Informatics group Meeting Minutes, March 29, 2011

In attendance:

  • Bill Wang
  • Valerie Humblet
  • Yong Gao
  • Chris Herrick
  • Shawn Murphy
  • Steve Pieper
  • Bill Tellier
  • Darren Sack
  • Bill Hanlon
  • Alex Zeitsev
  • Ron Kikinis


mi2b2 software update

  • Bill recapped what was said last week about MGH: as the team worked on the multi-node, multi-threaded workflow programming for the MGH deployment, they began running into an issue where studies were found on more than one node. It turns out that there are multiple gateway nodes through which the various imaging data sets enter the disk storage systems, but once the data has been entered into the permanent archival system, queries to a single node should reveal all studies.
  • At MGH, C-FIND is still not able to move more than 2000 images or series; this is a problem with the configuration of the PACS (see the query sketch after this list).
  • BWH: Kathie prefers to put the machine at the Crosstown facility because that is where all of the research servers are located. They are currently going through some major updates, so mi2b2 is not a priority for them right now. Shawn would prefer to have it at Needham because that will be more convenient for maintenance in the long run.
  • Embedded study reports are not being downloaded; the DICOM structured documents are treated as a separate series.
  • Configuration of the infrastructure for the multi-node, multi-institution deployment is in progress.
  • Last week a concern was raised about how to control which machine is used to run the software and thus receives the downloaded data. Partners does not require such a control once people have their IRB approval, so we should not spend a lot of time building something that is not needed. Simply allowing people to specify the location to which they will move the data is fine (for example, an encrypted laptop or a computer behind the firewall). We can allow people to keep limited data sets on their encrypted laptops for display. If performance is a concern, an alternative is to first put the images on the server and then send them to the client, so the client does not need to cache the images. Maybe we can give clients permission to use the mi2b2 cache. Part of the issue is that for now we zip all the files; we should think about another process (see the streaming sketch below).
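
The C-FIND discussion above can be pictured with a small query sketch. This is a minimal, hypothetical example only, assuming the open-source pydicom and pynetdicom libraries; the host name, port, AE titles, and patient ID are placeholders and none of this reflects the actual mi2b2 code or the MGH PACS configuration.

  # Hypothetical sketch: a study-level DICOM C-FIND against a single archive node.
  # Assumes pydicom/pynetdicom; all connection settings below are placeholders.
  from pydicom.dataset import Dataset
  from pynetdicom import AE
  from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

  ARCHIVE_HOST = "pacs-archive.example.org"   # placeholder, not a real node
  ARCHIVE_PORT = 104
  ARCHIVE_AE_TITLE = "ARCHIVE"

  ae = AE(ae_title="MI2B2_SCU")
  ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

  # Once studies reach the permanent archive, one node should return all of them.
  query = Dataset()
  query.QueryRetrieveLevel = "STUDY"
  query.PatientID = "12345"        # placeholder identifier
  query.StudyInstanceUID = ""      # empty = ask the archive to return this attribute
  query.StudyDate = ""

  assoc = ae.associate(ARCHIVE_HOST, ARCHIVE_PORT, ae_title=ARCHIVE_AE_TITLE)
  if assoc.is_established:
      found = []
      responses = assoc.send_c_find(query, StudyRootQueryRetrieveInformationModelFind)
      for status, identifier in responses:
          # 0xFF00/0xFF01 are "pending" statuses, each carrying one matching study.
          # A PACS configured with a hard match limit (e.g. 2000) stops early here.
          if status and status.Status in (0xFF00, 0xFF01):
              found.append(identifier.StudyInstanceUID)
      assoc.release()
      print("Archive node returned %d studies" % len(found))

If the archive does enforce a hard cap on matches, one possible workaround is to narrow the query (for example, by StudyDate ranges) and issue several smaller C-FINDs, though the cleaner fix remains the PACS configuration itself.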
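
As a follow-up to the last bullet, here is a minimal, hypothetical sketch of "another process" besides building one large zip file: staging a study on the server and streaming each file to the client in chunks, so the client can display images without keeping a local cache. The directory layout, chunk size, and function names are illustrative assumptions, not part of mi2b2.

  # Hypothetical sketch: stream staged DICOM files to a client in chunks
  # instead of zipping the whole study first. All names here are made up.
  import os
  from typing import Iterator

  CHUNK_SIZE = 1024 * 1024  # 1 MiB per chunk (illustrative)

  def iter_study_files(study_dir: str) -> Iterator[str]:
      # Yield the staged files of one study in a stable order.
      for name in sorted(os.listdir(study_dir)):
          path = os.path.join(study_dir, name)
          if os.path.isfile(path):
              yield path

  def stream_file(path: str) -> Iterator[bytes]:
      # Yield one file as a sequence of chunks rather than one zipped blob.
      with open(path, "rb") as handle:
          while True:
              chunk = handle.read(CHUNK_SIZE)
              if not chunk:
                  break
              yield chunk

  # A server-side handler could loop over iter_study_files() and write each
  # chunk straight to the client connection, so neither side has to hold a
  # full zip archive in memory or on disk.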