2017 Winter Project Week/DeepInfer

From NAMIC Wiki
[Figure: Deepinfer arch.png, the initial prototype of DeepInfer with OpenIGTLink.]

Latest revision as of 14:56, 13 January 2017



Key Investigators

  • Alireza Mehrtash (BWH, UBC)
  • Mehran Pesteie (UBC)
  • Yang (Silvia) Yixin (Tianjin University)
  • Tina Kapur (BWH)
  • Sandy Wells (BWH)
  • Purang Abolmaesumi (UBC)
  • Andriy Fedorov (BWH)

Background and References

Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. However, utilizing deep models during image-guided therapy procedures requires the integration of several software components, which is often a tedious task for clinical researchers. Hence, there is a gap between state-of-the-art machine learning research and its application in the clinical setup.

DeepInfer enables 3D Slicer to connect to a powerful processing back-end, either on the local machine or on a remote processing server. Utilizing a repository of pre-trained, task-specific models, DeepInfer allows clinical researchers and biomedical engineers to choose and deploy a model on new data without the need for software development or configuration.

Project Description

Objective

  • Redesign the architecture of the toolkit with Docker as the deep learning model deployment engine.
  • Discuss the implementation details of the Slicer side.
  • Plan the structure of the cloud model repository.
Approach and Plan

  • Study and evaluate different approaches, including the CLI mechanism, for passing input images and prediction results between Slicer and Docker.
  • Decide on the necessary fields in the metadata of the stored models.
  • Implement the Slicer side to talk to Docker.
Progress and Next Steps

  • Held helpful discussions about the design of the Slicer part.
  • We will use GitHub as the repository for storing metadata about the different trained models. The repository will work the same way as the Slicer extensions index repository: model maintainers will open pull requests to add their model metadata to the repository.
  • The metadata file fields: author, institution, organ, task, modality, accuracy, training methodology, license, data source, network details, Docker image location, version.
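The metadata fields above could be captured in a simple machine-readable record. A minimal sketch in Python follows; the exact file format, field names, and example values are assumptions for illustration, not a finalized schema:

```python
# Hypothetical metadata record for one trained model in the repository.
# Field names mirror the bullet above; the schema itself is an assumption.
import json

REQUIRED_FIELDS = [
    "author", "institution", "organ", "task", "modality", "accuracy",
    "training_methodology", "license", "data_source", "network_details",
    "docker_image_location", "version",
]

def validate_metadata(record):
    """Return the list of required fields missing from a metadata record."""
    return [field for field in REQUIRED_FIELDS if field not in record]

# Entirely made-up example entry:
example = {
    "author": "Jane Doe",
    "institution": "Example Hospital",
    "organ": "prostate",
    "task": "segmentation",
    "modality": "MRI",
    "accuracy": "Dice 0.87 on a held-out set",
    "training_methodology": "supervised, 3D U-Net",
    "license": "BSD-3-Clause",
    "data_source": "institutional dataset",
    "network_details": "3D U-Net, 4 resolution levels",
    "docker_image_location": "example.org/deepinfer/prostate-segmenter:1.0",
    "version": "1.0",
}

print(json.dumps(example, indent=2))
print("missing fields:", validate_metadata(example))
```

A pull request adding a model would then just contribute one such file, which the client could validate before offering the model in the UI.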
  • Slicer-side scenario for using a deep model:
    1. Docker generates the CLI XML description.
    2. Slicer reads the XML (using QSlicerCLIModule) and generates the GUI required for the task.
    3. The user selects the input and output.
    4. The input images are saved to a temporary directory.
    5. The Docker container is run with the parameters on the mounted temporary directory.
    6. Slicer waits for the task to complete.
    7. Docker saves the processing results to the temporary directory.
    8. Slicer loads the results.
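The scenario above can be sketched end-to-end. The following Python illustration is a sketch only: the image name, file names, and CLI arguments are hypothetical, and the `runner` parameter exists solely so the command construction can be exercised without Docker installed:

```python
# Sketch of the Slicer-side workflow: write inputs to a temporary directory,
# run the model's Docker image with that directory mounted, then pick up
# the results from the same directory. Names here are hypothetical.
import subprocess
import tempfile
from pathlib import Path

def run_dockerized_model(image, input_files, runner=subprocess.run):
    """Save inputs to a temp dir, run `image` with it mounted, return (cmd, dir)."""
    tmp = Path(tempfile.mkdtemp(prefix="deepinfer-"))
    for name, data in input_files.items():
        (tmp / name).write_bytes(data)          # Slicer would save volumes here
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{tmp}:/data",                   # mount the shared temp directory
        image,
        "--input", "/data/input.nrrd",          # hypothetical CLI arguments
        "--output", "/data/output.nrrd",
    ]
    runner(cmd, check=True)                     # Slicer blocks until this returns
    return cmd, tmp                             # results would be loaded from tmp

# Exercise the command construction with a stub runner (no Docker needed):
recorded = []
cmd, tmp = run_dockerized_model(
    "example/prostate-segmenter:latest",
    {"input.nrrd": b"fake image bytes"},
    runner=lambda c, check: recorded.append(c),
)
print(cmd)
```

In the real module the blocking call would move off the main thread so the Slicer GUI stays responsive while the container runs.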
  • Next steps:
 * Implement instantiation of the CLI UI from the XML description in Python.
 * Test the Slicer/Docker communication on our prostate-segmenter model.
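Instantiating a CLI UI starts with parsing the CLI XML description the container emits. A minimal sketch of that parsing step in Python, using a made-up XML snippet shaped like the Slicer Execution Model format (the element names here are illustrative, not the full schema):

```python
# Sketch: parse a Slicer-style CLI XML description and extract the
# parameters a UI generator would need. The XML snippet is a made-up
# example in the spirit of the Slicer Execution Model format.
import xml.etree.ElementTree as ET

CLI_XML = """\
<executable>
  <title>Prostate Segmenter</title>
  <parameters>
    <label>IO</label>
    <image>
      <name>inputVolume</name>
      <label>Input Volume</label>
      <channel>input</channel>
    </image>
    <image>
      <name>outputSegmentation</name>
      <label>Output Segmentation</label>
      <channel>output</channel>
    </image>
  </parameters>
</executable>
"""

def parse_cli_parameters(xml_text):
    """Return (name, type, channel) for each parameter in the description."""
    root = ET.fromstring(xml_text)
    params = []
    for group in root.findall("parameters"):
        for elem in group:
            if elem.tag == "label":            # group label, not a parameter
                continue
            params.append((
                elem.findtext("name"),
                elem.tag,                      # the element tag encodes the type
                elem.findtext("channel"),
            ))
    return params

for name, ptype, channel in parse_cli_parameters(CLI_XML):
    print(name, ptype, channel)
```

Each extracted (name, type, channel) triple maps naturally to a Qt widget (e.g. a volume selector for an `image` parameter), which is what QSlicerCLIModule does internally.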