2017 Winter Project Week/Evaluate Deep Learning for binary cancer lesion classification

From NAMIC Wiki
<gallery>
Image:Cancer_roi_Img_00001.png|Example Cancer ROI
Image:Cancer-roi-digits-training-0109.png|Initial LeNet training with DIGITS/Caffe
Image:KVis-trained-LeNet-data-augmentation.png|Training performance after data augmentation
<!-- Use the "Upload file" link on the left and then add a line to this list like "File:MyAlgorithmScreenshot.png" -->
</gallery>

Revision as of 15:03, 13 January 2017


Key Investigators

  • Curt Lisle, KnowledgeVis, LLC
  • Yanling Liu, FNLCR

Project Description

Objective

  • Train a neural network as a binary classifier for the detection of cancer lesions in T2 ROI images.

Approach and Plan

  • Start with a dataset, prepared at the Frederick National Laboratory for Cancer Research, to train a classifier.
  • The dataset consists of a series of 52x52 Region Of Interest (ROI) T2 MR images containing cancer lesions and two T2 image series which do not contain lesions.
  • Collect advice from others at Project Week, select a deep learning framework, and build a classifier using this training data.

Progress and Next Steps

  • Created a DIGITS Amazon instance from NVIDIA's marketplace image before Project Week.
  • Prepared the dataset in the style of the MNIST example.
  • Trained LeNet and AlexNet CNNs using the DIGITS interface and the Caffe learning framework.
  • Data augmentation was crucial, improving detection accuracy to 83% for the 2D case.
  • 3D data, presented without augmentation, yielded better results than 2D data alone.
  • We expect results to improve further when better data augmentation and 3D slice data are used simultaneously.
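The MNIST-style dataset preparation and data augmentation steps above can be sketched as follows. This is a minimal illustration, not the project's actual pipeline: it assumes the 52x52 ROI images are stored as PNGs in one folder per class (the lesion/no_lesion folder names and the augment_and_stage helper are hypothetical), and it uses mirrors and rotations as one common augmentation choice, since the page does not specify which augmentations were applied. DIGITS can build a classification dataset directly from a folder tree with one subfolder per label.

```python
from pathlib import Path

import numpy as np
from PIL import Image


def augment_and_stage(src_dir, dest_dir):
    """Stage ROI PNGs into DIGITS-style class folders, adding
    flipped/rotated variants of each image.

    Assumes src_dir contains 'lesion/' and 'no_lesion/' subfolders
    of PNG files (illustrative names, not from the project page).
    """
    for label in ("lesion", "no_lesion"):
        out = Path(dest_dir) / label
        out.mkdir(parents=True, exist_ok=True)
        for png in sorted(Path(src_dir, label).glob("*.png")):
            img = np.asarray(Image.open(png))
            # Square 52x52 ROIs stay 52x52 under these transforms.
            variants = {
                "orig": img,
                "fliplr": np.fliplr(img),    # horizontal mirror
                "rot90": np.rot90(img),      # 90-degree rotation
                "rot270": np.rot90(img, 3),  # 270-degree rotation
            }
            for tag, arr in variants.items():
                Image.fromarray(np.ascontiguousarray(arr)).save(
                    out / f"{png.stem}_{tag}.png")
```

Pointing DIGITS at the staged destination folder then yields a 4x larger training set; this kind of expansion is consistent with the accuracy improvement the augmentation step produced here.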

Background and References