Event:Journal Club at BWH

From NAMIC Wiki

Back to Events

General information

Unless otherwise indicated, all SPL Journal Club presentations will take place in the Hollenberg Conference Room, on the 3rd floor of the Thorn Research Building, between 1pm and 2pm, on Wednesdays.

We are always looking for speakers. Please email Andriy Fedorov (fedorov at bwh dot harvard dot edu) if you would like to schedule a talk.

Suggested topics include, but are not limited to:

  • Accomplished work
  • Work-in-progress
  • Brainstorming ideas
  • Review of literature
  • Update of research activities


April 25

Stephen M. Pizer, PhD, Kenan Professor of Computer Science, Radiation Oncology, Radiology, and Biomedical Engineering, University of North Carolina

  • Title: Improved Shape-Based Methods for Classifying Brain Structures as Diseased or Not, with Application to the Hippocampus in Schizophrenia
  • Location: 1249 Boylston St, demo room 2nd floor
  • Time: 11am


Classification of brain structures into the classes of diseased or normal has most commonly been done using standard classification techniques, either on volume as a single feature or on positions on the structure’s boundary as the features. I will describe some alternative methods that use shape properties richly, recognizing that shape is explicitly non-Euclidean and thus standard statistical approaches do not strictly apply. After a review of the standard techniques, those techniques that do reflect shape in its non-Euclidean form will be described. All these methods, standard and new, have been applied to a set of hippocampi from first-episode schizophrenics and from controls. I will show that in this challenging task of classifying a hippocampus as schizophrenic or normal, the shape-based methods at least double the improvement in classification rate over pure guessing when compared to 1) classification via volume, 2) classification via boundary positions alone, and 3) classification via Euclidean analysis of the non-Euclidean shape feature set. An animation of the deformation of a typical control hippocampus into a typical schizophrenic hippocampus will be presented. This animation is quite informative because the shape properties involved in the shape-based classification reflect not only hippocampal locations but also hippocampal width at various positions and the directions perpendicular to the boundary at various positions, thus capturing twisting and bending. Pending application to classification tasks on other structures and/or other diseases, we can tentatively conclude that the shape-based methods I describe are noticeably superior to those previously available.
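The abstract's point that shape features are non-Euclidean can be illustrated with the simplest such feature, a direction (an angle on the circle): the naive Euclidean average of directional data can point the wrong way entirely, while the intrinsic (circular) mean behaves correctly. A minimal sketch, not tied to any of the specific methods in the talk:

```python
import math

def euclidean_mean(angles):
    # Naive arithmetic mean: treats angles as if they lived in a flat space.
    return sum(angles) / len(angles)

def circular_mean(angles):
    # Intrinsic mean on the circle: average the unit vectors, then take the angle.
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    return math.atan2(s, c)

# Two boundary-normal directions that are close to each other across
# the 0 / 2*pi wrap-around point.
angles = [0.1, 2 * math.pi - 0.1]

naive = euclidean_mean(angles)     # close to pi: points the *opposite* way
intrinsic = circular_mean(angles)  # close to 0: the geometrically correct mean
```

The same failure mode, in higher dimensions, is why statistics on shape representations that include directions and widths require non-Euclidean treatment.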




May 22

Joyoni Dey, PhD, U.Mass Medical School

  • Title: Motion Correction for Cardiac SPECT
  • Location: Thorn, Hollenberg conference room, 3rd floor
  • Time: 1:30pm


Cardiac SPECT is a functional imaging modality used to non-invasively image myocardial perfusion in some 9-10 million patients every year in the USA, and double that number worldwide. Standard clinical systems have long acquisition times (10-16 minutes), with patient motion and breathing affecting the diagnostic accuracy of reconstructed SPECT images. The first talk will describe a method to accurately estimate and correct for motion in the presence of irregular breathing. The second talk will describe a method to align the CT transmission map with the emission data in the presence of non-rigid motion, using scatter-window data acquired during emission.

If time permits, other topics will be touched upon: mathematical tumor modeling for tumor quantification; system design for a novel high-performance gamma camera for cardiac SPECT; and penalized maximum likelihood reconstruction for limited-angle tomography for a lens-less optical microscope.


Joyoni Dey is an Assistant Professor of Radiology at the University of Massachusetts Medical School. Her research interests include medical imaging and image processing, registration, segmentation, reconstruction, and system design and optimization. She has been successful in procuring NIH funding as Principal Investigator, has 19 peer-reviewed journal and conference papers and 30 conference abstracts, and has a couple of patents pending.


November 6

Paul Mercea, MS Student, University of Heidelberg

  • Title: Quantification of longitudinal tumor changes using PET imaging in 3D Slicer
  • Location: 1249 Boylston, 2nd floor demo room
  • Time: 3:30pm


The increasing use of positron emission tomography (PET) in oncology has changed the way cancer is managed in routine clinical practice. The most exploited clinical application of PET is the assessment of cancer treatment response. In this talk, I will present the development process of an open-source software module for the longitudinal analysis of changes in tumors, using PET/CT fusion imaging, volume rendering, advanced segmentation functionality and Standardized Uptake Value (SUV) computation in 3D Slicer 4.
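For context, the body-weight SUV mentioned above is a standard, simple normalization of PET tracer uptake. The sketch below shows only the textbook formula; it is not the actual 3D Slicer module code, and the function name and example values are illustrative:

```python
def suv_bw(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight Standardized Uptake Value.

    SUVbw = tissue activity concentration / (injected dose / body weight),
    with the dose assumed to be decay-corrected to scan time.
    """
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

# Illustrative example: 5 kBq/mL in tissue, 370 MBq injected, 70 kg patient.
suv = suv_bw(5_000.0, 370e6, 70_000.0)
```

A tumor SUV tracked across baseline and follow-up scans is the basic quantity behind the longitudinal change analysis described in the talk.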


I graduated from the University of Heidelberg (Germany) in 2010 with a degree in Medical Informatics. After a 6-month internship in the Division of Medical and Biological Informatics at the German Cancer Research Center (DKFZ) in Heidelberg, I attended the master’s program in Medical Informatics (University of Heidelberg) while working part-time as a software developer at DKFZ. For my master’s thesis I came to work at the Surgical Planning Laboratory (SPL), where I am developing the Longitudinal PET/CT Analysis extension for 3D Slicer 4.

October 31

Alex Olwal, PhD, Postdoctoral Research Fellow, Media Lab, MIT

  • Title: Augmented Reality and Medical User Interfaces
  • Location: Hollenberg Conference Room, Thorn building
  • Time: 1pm


Augmented Reality is the concept of merging virtual information with a real environment using novel display systems, sensing technologies, interaction techniques and user interfaces.

The first part of the talk will provide an overview of our research in these areas, which includes projects in touch-screen interaction, spatially aware mobile devices, direct projection systems, augmented reality and immaterial displays.

The second part will focus on a number of projects where we apply these techniques to the medical domain. We have, for example, been exploring concepts that improve visual analysis in medical imaging, enhance communication and collaboration in medical team meetings, simplify access to remote expertise, and improve efficiency and safety in image-guided surgery.


Alex Olwal (Ph.D.) is a post-doctoral fellow at the MIT Media Lab. He has previously worked with the development of new technologies for Human-Computer Interaction at the Royal Institute of Technology (Stockholm), Columbia University (NY), UC Santa Barbara (CA) and Microsoft Research (WA). Alex's research (www.olwal.com) focuses on interaction techniques and technology, including augmented reality, spatially aware mobile devices, medical user interfaces, ubiquitous computing, touch-screens, as well as novel interaction devices and displays.

September 26

Mauricio Reyes, PhD, Head of Medical Image Analysis group at the Institute for Surgical Technology and Biomechanics at the University of Bern, Switzerland

  • Title: Medical Image Analysis for Cranio-maxillofacial Surgical Planning and Orthopaedic Implant Design
  • Location: Abrams conference room, ASBI L1
  • Time: 9am


In this talk I will present our recent contributions to image-guided cranio-maxillofacial (CMF) surgery, where we have developed clinically relevant solutions to predict the outcome of soft tissue deformations after CMF surgery. Advances in accuracy, speed and compliance with the clinical scenario will be highlighted. In the second part, I will introduce recent developments in computational anatomy, featuring a novel registration framework able to cope with the complex anatomical variability of the human mandible at different scales while reducing the number of parameters needed to describe the underlying transformations, thereby enabling statistics across scales. I will then connect these ideas to the problem of population-based implant design of mandible plates, describing our recent progress in minimizing intra-operative deformations and in incorporating bone mineral density into the implant design objective function. Finally, I will describe our applied research activities in facial plastic surgery, where I will present a web-based solution for 3D plastic surgery simulation from patient photographs.


Mauricio Reyes is the head of the Medical Image Analysis group at the Institute for Surgical Technology and Biomechanics at the University of Bern, Switzerland. He received his bachelor's degree in electrical engineering from the University of Santiago de Chile in 2001; his thesis, Three-dimensional Reconstruction of a Human Embryo Hand Using Artificial Vision Techniques, was awarded best Electrical Engineering bachelor thesis. From 2002 to 2004 he conducted studies toward his Ph.D. degree at the University of Nice, France, on the topic of lung cancer imaging and breathing compensation in emission tomography. In 2006 he joined the Medical Image Analysis group at the MEM Research Center as a postdoctoral fellow, focusing on topics related to medical image analysis and statistical shape models for orthopaedic research. In 2007, he took over the Medical Image Analysis group at the Institute for Surgical Technology and Biomechanics, University of Bern, Switzerland.

Additional info: http://www.istb.unibe.ch/


September 28

Seyed-Ahmad Ahmadi, PhD candidate at the Chair for Computer Aided Medical Procedures (CAMP) at the Technische Universität München (TUM)

  • Title: Midbrain Segmentation in Transcranial 3D Ultrasound for Parkinson Diagnosis
  • Location: Hollenberg Conference Room, Thorn Building, 3rd Floor
  • Time: Noon


Part I (5min)

Personal introduction and general overview of activities at the group for Computer Aided Medical Procedures (CAMP) at Technische Universität München.

Part II (15min)

Summary of MICCAI 2011 publication (see title): Ultrasound examination of the human brain through the temporal bone window, also called transcranial ultrasound (TC-US), is a completely non-invasive and cost-efficient technique which has established itself for differential diagnosis of Parkinson's Disease (PD) over the past decade. The method requires spatial analysis of ultrasound hyperechogenicities produced by pathological changes within the Substantia Nigra (SN), which belongs to the basal ganglia within the midbrain. Related work on computer-aided PD diagnosis shows the urgent need for an accurate and robust segmentation of the midbrain from 3D TC-US, which is an extremely difficult task due to the poor image quality of TC-US. In contrast to the 2D segmentations of earlier approaches, we develop the first method for semi-automatic midbrain segmentation from 3D TC-US and demonstrate its potential benefit on a database of 11 diagnosed Parkinson patients and 11 healthy controls.

Part III (5min)

Brief overview of further ultrasound-related research at CAMP.


Since 2008, Seyed-Ahmad Ahmadi has been a PhD candidate at the Chair for Computer Aided Medical Procedures (CAMP) at the Technische Universität München (TUM), under the supervision of Prof. Nassir Navab. His research focuses on using 3D ultrasound imaging for diagnosis and surgical planning in neurology and neurosurgery, with emphasis on (i) volumetric segmentation, (ii) advanced compounding techniques for multi-view 3D freehand ultrasound, and (iii) computer-aided early diagnosis of Parkinson's Disease. In 2008, he received a German Diploma and an M.Sc. degree in Electrical and Computer Engineering from a double-master's program between TUM and the Georgia Institute of Technology.

May 19

Emmanuel Tannenbaum, PhD, Georgia Institute of Technology and Ben-Gurion University of the Negev, Beer-Sheva, Israel

  • Title: Diploidy and the Selective Advantage for Sexual Reproduction in Unicellular Organisms
  • Location: Hollenberg Conference Room, Thorn Building, 3rd Floor
  • Time: Noon


In this talk, I will give a brief overview of evolutionary dynamics, a subfield of mathematical biology and biophysics that deals with developing mathematical models describing evolutionary processes in biological systems. I will then discuss some semi-recent work of mine on the subject, dealing with the selective advantage for sexual reproduction versus asexual reproduction in unicellular organisms. In particular, I develop models describing the evolutionary dynamics of a unicellular population for the asexual and sexual reproduction pathways. These models are based on the corresponding asexual and sexual pathways in Baker's yeast, a model organism that is used to study many basic biological structures and behaviors. Our models find a selective advantage for sex under far less restrictive and more realistic conditions than other theories for the existence of sex. We also find that sexual reproduction is only advantageous with recombination, which suggests an evolutionary basis for the existence of meiotic recombination during the process of gamete formation. Our results also suggest an explanation for the existence of sex as a stress response in unicellular organisms.
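To give a flavor of the kind of model evolutionary dynamics deals with, here is a textbook single-locus mutation-selection balance recursion; this is a standard illustrative model, not the speaker's actual models of sexual versus asexual reproduction. The deleterious-mutant frequency converges to the classical equilibrium u/s:

```python
def mutation_selection_equilibrium(u, s, generations=10_000):
    """Deterministic single-locus mutation-selection balance.

    The wild type has fitness 1, the deleterious mutant fitness 1 - s;
    each generation a fraction u of wild-type offspring mutates.  The
    mutant frequency settles at the classical balance u / s.
    """
    q = 0.0  # mutant frequency
    for _ in range(generations):
        w_bar = (1 - q) + q * (1 - s)             # mean population fitness
        q = (q * (1 - s) + (1 - q) * u) / w_bar   # selection, then mutation
    return q

q_eq = mutation_selection_equilibrium(u=1e-4, s=0.05)  # converges to 0.002
```

The models in the talk extend this style of bookkeeping to the asexual and sexual reproduction pathways of a unicellular population.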


I graduated from the University of Minnesota in 1998 with dual degrees in Mathematics and Chemical Engineering. I then attended graduate school at Harvard University, where I graduated in 2002 with a Ph.D. in Chemical Physics. After a brief stint in the Israeli army, I returned to Harvard in March, 2003 to begin an NIH-sponsored postdoctoral fellowship in biophysics, with a focus on research in theoretical evolutionary dynamics. In the summer of 2005, I completed my postdoc and began a faculty position in the Department of Chemistry at Ben-Gurion University of the Negev in Be'er-Sheva, Israel. I was also a tenure-track assistant professor in the School of Biology at the Georgia Institute of Technology from August, 2006 until July, 2007. During this time I was on an unpaid leave from Ben-Gurion University of the Negev. For the 2010-2011 academic year, I have been a visiting faculty member at the Georgia Institute of Technology, working with Prof. Loren D. Williams of the Department of Chemistry and Biochemistry on problems related to ribosomal structure and evolution and the origin of life.

April 13

Luciano da F. Costa, PhD, University of São Paulo

  • Title: Morphological Neuronal Networks
  • Location: Hollenberg Conference Room, Thorn Building, 3rd Floor
  • Time: 1pm


The nervous system developed as a means to model the surrounding environment, including other creatures. At the same time, it is inherently underlain by neuronal connectivity in 3D space, which imposes constraints and is closely related to brain functionality and behavior. In this talk I will discuss how neuronal connectivity arises as a consequence of the spatial distribution of neurons as well as their respective shapes. Also addressed are the representation of such systems in terms of complex networks and the investigation of the relationship between shape and function. The case examples presented include disease transmission in networks of morphological hosts and the characterization of the neuromorphological space (using the NeuroMorpho database).
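The idea that connectivity arises from the spatial distribution of neurons can be sketched with a toy random geometric graph, where two cells are linked whenever they fall within a connection radius. This is a drastic simplification of the morphological networks discussed in the talk; all names and parameter values below are illustrative:

```python
import math
import random

def morphological_network(n=60, radius=0.3, seed=1):
    """Toy spatial network: neurons at random 3D positions in the unit
    cube, connected whenever their somata lie within `radius` of each
    other -- a minimal stand-in for connectivity driven purely by the
    spatial embedding of the cells."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random(), rng.random()) for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pos[i], pos[j]) <= radius:
                edges.add((i, j))
    return pos, edges

pos, edges = morphological_network()
mean_degree = 2 * len(edges) / len(pos)  # a basic complex-network statistic
```

Replacing point somata with extended neuronal shapes (overlap of dendritic arbors rather than a fixed radius) is the step that makes such models "morphological".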


Luciano da F. Costa is a full professor at the Institute of Physics at the University of São Paulo, Brazil. He holds a PhD from King's College, University of London, and has been a Visiting Scholar at the University of Cambridge. He is the author of about 180 indexed articles, which have been cited over 2000 times. He has also authored the book Shape Analysis and Classification, CRC Press, 2009 (2nd edition). His areas of interest include image analysis, complex networks, and pattern recognition.

April 12

Simon DiMaio, PhD, Intuitive Surgical, Inc., Sunnyvale, CA.

  • Title: DaVinci and Beyond
  • Location: Abrams Conference Room, L1 Radiology
  • Time: Noon


One-by-one, medical robots are crossing the chasm that lies between laboratory bench-top prototypes and commercial products. Despite significant regulatory and adoption barriers, highly complex robotic technologies such as the Accuray CyberKnife®, the Mako Rio®, the Intuitive Surgical da Vinci®, and others are beginning to make a clinical impact. Next-generation surgical systems are now on the horizon, and the research community is already contemplating fantastic new technologies that will extend computer-assisted surgery even further. In this talk, we will take a step back together to look at the origins of the da Vinci telerobotic system, its present capabilities and limitations, new technologies on the horizon, and how we might see such platforms evolve in the future. I will talk from the perspective of a researcher embedded in industry and will share some of the challenges that we have experienced in taking research concepts and prototypes toward product.


Simon DiMaio received his PhD from the University of British Columbia, was a postdoctoral fellow at Harvard Medical School and Brigham and Women's Hospital, and is currently the Manager for Applied Research at Intuitive Surgical, Sunnyvale, CA.

March 30

Misha Pivovarov, Center for Systems Biology, MGH

  • Title: MIPortal® - a system capable of efficiently archiving and distributing experimental data to individual investigators
  • Location: Hollenberg Conference Room, Thorn Building, 3rd Floor
  • Time: 1pm
  • Slides: Media:MishaPivovarov_30March2011_JC.pdf


MIPortal® is a web-based system developed by the CSB Bioinformatics Platform. It provides flexible and secure access to complex data generated by more than 20 different imaging modalities, including raw and post-processed data as well as analysis results. All data are organized into projects and experiments, which can be accessed securely based on user privileges. PIs can create and own projects, and are able to grant access to other users. Built with state-of-the-art open-source technologies, MIPortal® is a modular system and can be adapted to specific requirements. There are currently over 6 million online images available to more than 250 active CSB users. MIPortal® has also been made available as an image management system to other users, such as the NCI Small Animal Imaging Program.


Misha Pivovarov, MS, is Director of Information Technology at the Center for Systems Biology at Massachusetts General Hospital. He holds a master's degree in Computer Science and has extensive experience in designing and implementing medical imaging systems in both clinical and research environments. Mr. Pivovarov played an instrumental role in developing the first MGH clinical PACS as well as the Radiology Information System. Later he led development of the Telemedicine Information System; the introduction of such innovative approaches as on-demand networking with Internet protocols and advanced image compression enabled the successful deployment of the Global Telemedicine Network, the first commercial telemedicine system. Mr. Pivovarov established the Bioinformatics Platform at the CSB, which provides data management and information analysis to over 200 researchers, and directed a multi-organizational team to develop MIPortal®, a web-based IT platform for archiving and analysis of imaging and non-imaging medical data.


September 21

Chris Gorgolewski, PhD candidate, University of Edinburgh

  • Title: Thresholding statistical maps in context of presurgical planning
  • Location: Hollenberg Conference Room, Thorn Building, 3rd Floor
  • Time: 1pm


It is well-established practice that, to justify findings in human cognitive neuroscience using fMRI, one has to use appropriate statistics to threshold the SPM-T maps produced by fitting a GLM. In presurgical mapping applications, however, SPM-T maps have usually been thresholded manually, without taking statistics or the spatial properties of the maps into consideration. The selection of a particular threshold influences the apparent distance from the tumor to the border of the activation, and can thus influence surgeons' decisions. Since fMRI gained popularity two decades ago, many approaches to delineating the borders of active areas have been developed. In a quest to standardize and improve how SPM-T maps are thresholded in the context of presurgical applications, I have evaluated commonly used methods in terms of voxelwise and clusterwise false negatives as well as clusterwise false positives. A new method was also devised and evaluated on simulations and real fMRI data.
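For readers unfamiliar with the terminology: cluster-extent thresholding, one of the commonly used families of methods referred to above, keeps a supra-threshold voxel only if it belongs to a sufficiently large connected cluster. A minimal 2D sketch of the idea (illustrative only, not any of the evaluated implementations):

```python
def threshold_tmap(tmap, t_thresh, min_cluster):
    """Voxelwise threshold followed by a cluster-extent filter.

    `tmap` is a 2D list of t-values.  Voxels with t > t_thresh are kept
    only if they belong to a 4-connected cluster of at least
    `min_cluster` voxels.
    """
    rows, cols = len(tmap), len(tmap[0])
    above = {(r, c) for r in range(rows) for c in range(cols)
             if tmap[r][c] > t_thresh}
    kept, seen = set(), set()
    for start in above:
        if start in seen:
            continue
        stack, cluster = [start], set()   # flood-fill one connected cluster
        while stack:
            r, c = stack.pop()
            if (r, c) in cluster:
                continue
            cluster.add((r, c))
            for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if nb in above and nb not in cluster:
                    stack.append(nb)
        seen |= cluster
        if len(cluster) >= min_cluster:
            kept |= cluster
    return kept

tmap = [[0.0, 3.5, 3.6, 0.0],
        [0.0, 3.7, 0.0, 0.0],
        [0.0, 0.0, 0.0, 4.1]]
kept = threshold_tmap(tmap, t_thresh=3.0, min_cluster=2)
# The 3-voxel cluster survives; the isolated voxel at (2, 3) does not.
```

Note how the choice of both `t_thresh` and `min_cluster` moves the apparent border of the activation, which is exactly the sensitivity at issue in presurgical planning.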


Chris Gorgolewski completed his undergraduate studies in Cognitive Science at Adam Mickiewicz University and in Computer Science at Poznan University of Technology. After moving to Edinburgh he worked on functional diffusion tensor imaging, the topic of his Master's thesis. He is currently doing his PhD at the University of Edinburgh. Chris is interested in building a probabilistic model of grey and white matter activation using fMRI and DTI that would be useful in presurgical planning for tumour resection. He is also working on single-subject fMRI, in particular methods for thresholding SPM-T maps within ROIs. Aside from his primary PhD research, Chris is one of the lead developers of Nipype, a Python project that provides a uniform interface to existing neuroimaging software and facilitates interaction between these packages within a single workflow.

March 31

John Kaufhold, SAIC

  • Title: Automated neuroanatomical microvessel segmentation, vectorization, nuclei labeling, and geometric computations with 3D two photon laser scanning microscopy
  • Location: Hollenberg Conference Room, Thorn Building, 3rd Floor
  • Time: 1pm

The work presented in the talk is summarized in the following paper:


Joint work with P.S.Tsai, P.Blinder and D.Kleinfeld

In the past two decades, functional magnetic resonance imaging (fMRI) has matured into an indispensable tool for the study of the human brain by exploiting neurovascular coupling, the interaction of neural activity and cerebral blood flow. Though neurovascular coupling is widely exploited, its origin remains a matter of debate. In collaboration with neuroscientists at the University of California San Diego (UCSD), SAIC is actively engaged in a conjunction of discovery-directed studies and technological development in support of basic scientific studies aimed at understanding and quantifying the connection between vascular topology, flow dynamics, and neuronal control. Practically, supporting these studies translates into developing custom image processing and computer vision algorithms to analyze the imaged anatomy. Ultimately, our analysis will give us a complete “plumbing diagram” of cortex (or angiotome), which is essential for an understanding of neurovascular coupling, normal flow, recovery from stroke, and organization of the cortical microvasculature.

All-optical histology (AOH) uses femtosecond-pulse, plasma-mediated laser ablation in conjunction with two-photon laser scanning microscopy (TPLSM) to produce large anatomical volumes at micrometer-scale resolution. Specifically, we use AOH to produce ~1 mm³ datasets of cerebral microvasculature. Generating a binary mask of the cerebral vasculature is a first step towards modeling its structural and physiological relationship to neuronal cells, and many methods have been proposed to segment such 3D structures. However, simply segmenting the microvasculature is insufficient: many analyses of the tubular vascular network (e.g., average vessel segment length, radii, point-to-point resistance, cycle statistics, and spatial relationships to other structures) are more efficiently computed on a vectorized representation of the data, i.e., a graph of connected centerline points. Generating such a graph requires sophisticated upstream algorithms for both segmentation and vectorization. We present here methods to segment and vectorize the microvasculature, connect any vectorization gaps via local threshold relaxation, segment and classify neurons (vs. non-neurons), and compute statistics on spatial relationships between cell nuclei and vessels. We show results using our methods on real 3D microvasculature data from the rodent brain.
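To make the "vectorized representation" concrete: once a centerline graph exists, statistics such as average segment length and branch-point locations reduce to simple graph computations. A toy sketch on a hand-made graph (coordinates and structure are purely illustrative, not data from the study):

```python
import math

# A tiny hand-made centerline graph: node id -> (x, y, z) in micrometers,
# plus edges linking connected centerline points.  Node 1 is a bifurcation.
nodes = {0: (0, 0, 0), 1: (10, 0, 0), 2: (10, 10, 0), 3: (10, 0, 15)}
edges = [(0, 1), (1, 2), (1, 3)]

def edge_length(e):
    return math.dist(nodes[e[0]], nodes[e[1]])

lengths = [edge_length(e) for e in edges]
mean_length = sum(lengths) / len(lengths)

# Branch points are nodes of degree >= 3 in the vectorized representation.
degree = {n: 0 for n in nodes}
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
branch_points = [n for n, d in degree.items() if d >= 3]
```

The real pipeline's quantities (point-to-point resistance, cycle statistics) are likewise graph algorithms run over exactly this kind of structure, just at the scale of an entire angiotome.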

Sponsors: NIH NIBIB (EB003832), NIMH (MH72570), NCRR (RR021907), and NINDS (NS043300)


Dr. Kaufhold’s education and experience are broadly geared toward information extraction from multidimensional signals (e.g., images) embedded in uncertainty (noise) in the form of both structure and sensor imperfections. In 1993, as an undergraduate, Dr. Kaufhold investigated signal processing algorithms for recognition of telephone speech with Mari Ostendorf. From 1995 to 2000, Dr. Kaufhold collaborated with: the Surgical Planning Lab (SPL) at Brigham and Women’s Hospital (BWH) on high-temperature superconducting magnets for MR; the Neurology Department at Massachusetts General Hospital (MGH) on MR brain segmentation and RF coil correction; the Boston Heart Foundation on joint segmentation and motion estimation of long/short axis ultrasound blood vessel imagery; Boston University’s Hearing Research Laboratory on 3D depth recovery from optically sectioned light microscopy images of cochlear neurons; and MIT’s Stochastic Signals Group (LIDS) on estimation-theoretic approaches to image segmentation. In 2000, Dr. Kaufhold joined GE's Global Research Center, where he focused on mammography (dynamic range management, breast density estimation, scatter correction, and image formation, as well as dual-energy mammography) and x-ray imaging for interventional cardiac procedures (especially stent and guidewire enhancement). Also at the research center, Dr. Kaufhold collaborated with Lockheed Martin on machine learning approaches for estimating context (roads, grass, trees, vehicles, buildings, and shadows) for aerial video analysis, especially as that context estimation enables moving object detection and tracking. In March 2005, Dr. Kaufhold joined the Advanced Concepts Business Unit (now Technology and Advanced Systems Business Unit) at SAIC. Dr. Kaufhold has over 40 peer-reviewed publications, and is named as inventor or co-inventor on 17 patents at various stages in the US Patent and Trademark Office. Dr. Kaufhold is currently a visiting scientist at MIT collaborating with David Kleinfeld (UCSD) and Sebastian Seung (MIT). He is also an SAIC Technical Fellow and was formerly a Whitaker Fellow.

March 24

Leopold Grinberg, Division of Applied Mathematics, Brown University

  • Title: High Performance Scientific Computing - a new physician assistant
  • Location: Hollenberg Conference Room, Thorn Building, 3rd Floor
  • Time: 1pm
  • Abstract
  • Slides


Modern computing systems have the potential to significantly advance biomedical research in many areas, e.g., processing of data, simulating the functionality of different organs or even whole systems, and assisting in surgical planning. However, the success of high performance computing (HPC) in biomedicine depends critically on a) the simultaneous development of mathematical models and computational algorithms; and b) collaboration between clinicians, applied mathematicians and computer science specialists.

This talk will focus on simulations of 3D unsteady blood flow in large arterial networks. I will present several numerical methods and computational approaches developed by our research group at Brown University. I will also present results of numerical simulations and hypothesize on the potential impact of the developed methods.
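A lumped-parameter (1D) flavor of such arterial-network models can be sketched with Poiseuille resistances at a single bifurcation, where flow divides in inverse proportion to resistance. This is a textbook illustration, not the group's 3D unsteady solver; all parameter values are illustrative:

```python
import math

def poiseuille_resistance(length, radius, viscosity=3.5e-3):
    """Hydraulic resistance of a cylindrical vessel segment,
    R = 8 * mu * L / (pi * r**4): the building block of lumped-parameter
    (1D) arterial network models.  SI units throughout."""
    return 8 * viscosity * length / (math.pi * radius**4)

# A parent vessel feeding two daughter branches that share the same
# outlet pressure: flow divides in inverse proportion to resistance.
r1 = poiseuille_resistance(length=0.02, radius=1.0e-3)
r2 = poiseuille_resistance(length=0.02, radius=0.8e-3)  # narrower branch

q_total = 1.0e-6  # m^3/s entering the bifurcation
q1 = q_total * (1 / r1) / (1 / r1 + 1 / r2)  # wider branch takes more flow
q2 = q_total - q1                             # mass conservation
```

The strong r⁴ dependence is why small radius changes dominate flow distribution; full 3D unsteady simulations refine this picture with geometry-resolved, time-dependent flow fields.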


Leopold Grinberg obtained his PhD from Brown University in 2009. He is currently a Senior Research Associate in the Division of Applied Mathematics at Brown University and a Lecturer at Brown and Tufts Universities. Dr. Grinberg also holds a master's degree in Mechanical Engineering.

His research interests encompass diverse topics in computational science, specifically High Performance Scientific Computing with applications in biomedical research.

A main current thrust of his research is the modeling of vascular systems, and particular aspects include:

  • integration of available patient-specific data into numerical simulations
  • one- and three-dimensional modeling of blood flow in large arterial networks
  • developing scalable algorithms for solutions of tightly and loosely coupled systems
  • visualization of CFD data

More information


December 18

Satrajit Ghosh, RLE MIT and Harvard HST

  • Title: Nipype - A Python framework for neuroimaging
  • Location: 2nd Floor Conference Room, 1249 Boylston St, Boston.
  • Time: 2:00 PM - 3:00 PM
  • Slides


Nipype is a project under the umbrella of Nipy, an effort to develop open-source, community-developed neuroimaging tools in Python. The goals of Nipype are two-fold: 1) to provide a uniform interface to existing neuroimaging software packages; and 2) to provide a pipelined environment for efficient batch-processing that can tie together different neuroimaging data analysis algorithms.

The interface component of nipype provides access to command-line, MATLAB-mediated, and pure-Python-based algorithms from packages such as FSL, SPM, AFNI and FreeSurfer, along with a growing number of algorithms being developed in Python. The uniform calling convention of the nipype interface across all these packages reduces the learning curve associated with understanding the algorithms, the API and the user interface of each separate package.

The interface component extends easily to a rich pipeline environment, able to interchange processing steps between different packages and iterate over a set of parameters, along with providing automated provenance tracking. The structure of the pipeline allows the user to easily add data and change parameters, and the pipeline will run only the steps necessary to update the new data or analysis parameters. Because it is written in Python, the pipeline can also take advantage of standard Python packages for future integration with a variety of database systems for storing processed data and metadata.

By exposing a consistent interface to the external packages, researchers are able to explore a wide range of imaging algorithms and configure their own analysis pipeline which best fits their data and research objectives, and perform their analysis in a highly structured environment. The nipype framework is accessible to the wide range of programming expertise often found in neuroimaging, allowing for both easy-to-use high-level scripting and low-level algorithm development for unlimited customization. We will explain the software architecture and challenges in interfacing the external packages, and demonstrate the flexibility of nipype in performing an analysis. This work is partially supported by NIH grant R03 EB008673 (NIBIB; PIs: Ghosh, Whitfield-Gabrieli).
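The "run only the steps necessary" behavior described above can be illustrated with a toy hash-based caching node. This is a sketch of the idea only, not the actual nipype API; the class and function names below are invented:

```python
import hashlib

class Node:
    """Minimal illustration of hash-based re-execution: a node reruns
    only when its inputs change, which is the kind of behaviour a
    pipelined batch-processing environment provides."""
    def __init__(self, name, func):
        self.name, self.func = name, func
        self._hash, self._result = None, None
        self.runs = 0  # count of actual executions, for demonstration

    def run(self, *inputs):
        key = hashlib.sha256(repr(inputs).encode()).hexdigest()
        if key != self._hash:            # inputs changed -> recompute
            self._result = self.func(*inputs)
            self._hash = key
            self.runs += 1
        return self._result              # otherwise reuse the cached result

# A two-stage toy "analysis": smooth the data, then compute a statistic.
smooth = Node("smooth", lambda xs: [round(x, 1) for x in xs])
stats = Node("stats", lambda xs: sum(xs) / len(xs))

data = [1.02, 2.04, 2.96]
out1 = stats.run(smooth.run(data))   # first pass: both nodes execute
out2 = stats.run(smooth.run(data))   # same inputs: nothing re-executes
```

Real workflow engines add provenance tracking, parameter iteration and distributed execution on top of exactly this change-detection core.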


Satrajit Ghosh is a Research Scientist in the Research Laboratory of Electronics at MIT. Dr. Ghosh received his undergraduate degree in Computer Science specializing in artificial intelligence and his graduate degree in Computational and Cognitive Neuroscience, specializing in functional imaging and computational modeling of speech motor control. His current research focuses on developing software related to nipype, relating macro-neuroanatomy and function and understanding mechanisms of speech production and perception.

December 9

Pádraig Cantillon-Murphy, MIT and BWH

  • Title: Magnetic Self Assembly in Minimally-Invasive Gastrointestinal Surgery


The development of Single-Incision Laparoscopic Surgery (SILS) and, more recently, Natural Orifice Transluminal Endoscopic Surgery (NOTES) as minimally-invasive alternatives to traditional laparoscopy has generated enormous interest in the surgical community in recent years. However, the creation of secure transluminal ports for access to target organs (e.g., accessing the small bowel or gallbladder through the gastric wall) is still a major challenge to the widespread adoption of these advanced techniques. I have recently designed and tested a self-assembling magnetic microsystem which solves this shortcoming. The microsystem, currently undergoing animal trials, will serve as the platform for the transformation of some of the most common laparoscopic procedures (e.g., gallbladder removal, gastrojejunal fistula formation) into minimally-invasive endoscopic procedures. Hypothesized results include shorter hospitalization, reduced anesthesia and significantly decreased healthcare costs.


Pádraig Cantillon-Murphy is a post-doctoral research fellow at the Laboratory for Electromagnetic and Electronic Systems (LEES) at the Massachusetts Institute of Technology and a research fellow of Harvard Medical School at Brigham and Women's Hospital, Boston. He graduated with a bachelor's degree in Electrical and Electronic Engineering from University College Cork, Ireland, in 2003 before joining the Master's program in the Department of Electrical Engineering and Computer Science at MIT. He graduated with a Master of Science in Electrical Engineering in 2005 and joined the MRI group in the Research Laboratory of Electronics, also at MIT. He graduated with a Ph.D. in Electrical Engineering from MIT in June 2008; his doctoral thesis examined the dynamic behavior of magnetic nanoparticles in MRI. His current work investigates the role that magnetic self-assembly and smart systems can play in minimally-invasive surgery, with an emphasis on common gastrointestinal procedures.

October 9

Christopher Wyatt, Virginia Tech

  • Title: Neuroimage Analysis in Nonhuman Primate Models of Alcoholism and Drug Abuse


Neuroimaging studies have demonstrated morphological and functional consequences of alcoholism and drug abuse. Studying the time course of such changes in humans is complicated by numerous factors such as age, comorbid disease and polydrug abuse, as well as the fact that traditional experimental approaches are not available for ethical reasons. Thus, nonhuman primate models are a critical link in the translation of research on the effects of alcohol and drug exposure. This talk describes the experimental models in two ongoing studies of alcohol and drug exposure, the neuroimaging acquisition protocols, and the analysis pipeline.


Chris Wyatt is an Assistant Professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech. His research interests center on biomedical image analysis, with a focus on structural and functional neuroimaging.

June 10

Ronilda Lacson, TBA

April 15

Arkadiusz Sitek, Brigham and Women's Hospital

  • Title: Image-Guided Spinal Interventions Using Virtual Fluoroscopy and Tomography


The goal of this work is to create a virtual radiology workstation (VRW) for use in fluoroscopy-guided spinal interventions. The VRW creates a virtual reality environment in which the surgeon can visualize the locations of critical spine structures with respect to surgical tools (such as needles used for spinal injections). Registration between pre-operative imaging data (CT, MRI) and intra-operative fluoroscopy is performed first, based on novel algorithms for photon propagation. Using stereoscopic visualization and a head-tracking device, we create a virtual reality system in which spine structures and surgical tools are accurately displayed to the interventional radiologist. The virtual reality environment showing the spine and surgical tools is built from preoperative volumetric data coregistered with intra-operative fluoroscopy. We plan to extend the VRW to other areas of spinal surgery. The VRW should increase the accuracy of these procedures, minimizing patient risk, increase surgeon confidence during interventions, and reduce patient and surgeon radiation doses.

March 25

Yaroslav Tencer, PhD Candidate, Mechatronics in Medicine Lab, Imperial College London, UK

  • Title: Improving the Realism of Haptic Perceptions in Virtual Arthroscopy Training


Human tactile sensing is essential for perceiving the environment, which is why many virtual reality simulators offer force feedback (haptics) alongside realistic visual rendering. Haptic feedback is especially important in applications where forces carry useful information. I will elaborate on four subtopics. First, the OrthoForce, an arthroscopy training simulator built around a custom haptic device with 4 haptic degrees of freedom, which appears to possess the highest “stiffness to size” ratio among devices of its kind to date. Second, the “Haptic Noise” method, a novel approach to the quantitative evaluation of a haptic system, which we developed to help overcome the inherent problems of evaluating and comparing haptic devices. Third, vibrotactile feedback, in which measured high-frequency vibrations are reproduced on the OrthoForce handle to improve haptic realism; vibrotactile feedback has been shown to improve haptic perception in general, although its application to surgical training simulators had not previously been demonstrated. Last, a novel programmable brake, developed to improve the stability and stiffness of haptic devices at minimum cost.


December 10

James Balter, Department of Radiation Oncology, University of Michigan


While dramatic improvements in imaging access and speed have occurred in the image-guided (radiation) therapy arena, the cost (in terms of time and possibly radiation dose) of acquiring very high fidelity volume information is prohibitive for real-time monitoring for (re-)adjustment of therapy to account for patient changes between modeled and treated states. To overcome such limitations, surrogates of various forms are introduced to the process. Surrogates provide a reduced representation of the patient and typically present information in a format that is conveniently analyzed. Furthermore, extraction of surrogate information is typically faster than the initial targeting process used in planning therapy. Difficulties exist in selecting appropriate surrogates and optimizing their relationship to the actual patient state. This talk explores these issues, including typical sources of uncertainty, the error budget involved in an image guidance process (for Radiation Oncology), and the potential impact of advanced models and use of prior information to maximize the value of surrogate measurements.

December 17

Andriy Fedorov

July 23

Matt Toews

  • Title: Modeling Appearance via the Object Class Invariant


As humans, we are able to identify, localize, describe and classify a wide range of object classes, such as faces, cars or the human brain, by their appearance in images. Designing a general computational model of appearance with similar capabilities remains a long-standing goal in the research community. A major challenge is effectively coping with the many sources of variability operative in determining image appearance: illumination, noise, unrelated clutter, occlusion, sensor geometry, natural intra-class variation and abnormal variation due to pathology, to name a few. Explicitly modeling sources of variability can be computationally expensive, can lead to domain-specific solutions and may be unnecessary, as they may be ultimately unrelated to the computational tasks at hand. In this talk, I will show how appearance can instead be modeled in a manner invariant to nuisance variations, or sources of variability unrelated to the tasks at hand. This is done by relating spatially localized image features to an object class invariant (OCI), a reference frame which remains geometrically consistent with the underlying object class despite nuisance variations. The resulting OCI model is a probabilistic collage of local image patterns that can be automatically learned from sets of images and robustly fit to new images, with little or no manual supervision. Due to its general nature, the OCI model can be used to address a variety of difficult, open problems in the contexts of computer vision and medical image analysis. I will show how the model can be used both as a viewpoint-invariant model of 3D object classes in photographic imagery and as a robust anatomical atlas of the brain in magnetic resonance imagery.

July 23

Xenios Papademetris, Departments of Diagnostic Radiology and Biomedical Engineering, Yale University

  • Title: Development of a Research Interface for Image Guided Navigation
  • Location: 2nd Floor Conference Room, 1249 Boylston St, Boston.


In this talk, I will describe the development and application of techniques to integrate research image analysis methods and software with a commercial image-guided surgery navigation system (the BrainLAB VectorVision Cranial System). The integration was achieved using a custom-designed client/server architecture termed VectorVision Link (VVLink), which extends functionality from the Visualization Toolkit (VTK). VVLink enables bi-directional transfer of data such as images, visualizations and tool positions in real time. I will describe both the design and the application programming interface of VVLink, and show the function of an example VVLink client. The resulting interface provides a practical and versatile link for bringing image analysis research techniques into the operating room (OR). I will present examples from the successful use of this research interface in both phantom experiments and in real neurosurgeries.
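The bi-directional client/server idea behind such a link can be sketched generically with standard sockets. This is a minimal illustration of length-prefixed message exchange only, not the VVLink (or OpenIGTLink) protocol; the message contents and helper names are hypothetical.

```python
# Generic sketch of a bi-directional navigation link (NOT the VVLink API):
# length-prefixed messages travel both ways over one connection, so either
# side can push data such as images or tracked tool positions.
import socket
import struct

def send_msg(sock, payload: bytes):
    # 4-byte big-endian length prefix, then the payload
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_msg(sock) -> bytes:
    (length,) = struct.unpack(">I", sock.recv(4))
    data = b""
    while len(data) < length:          # keep reading until the full payload arrives
        data += sock.recv(length - len(data))
    return data

# socketpair stands in for a real network connection between the navigation
# system ("server") and a research client
server, client = socket.socketpair()
send_msg(client, b"GET tool_position")        # client requests tracking data
request = recv_msg(server)
send_msg(server, b"tool: 12.5 40.2 -3.1")     # server pushes the tool pose back
reply = recv_msg(client).decode()
print(reply)  # tool: 12.5 40.2 -3.1
```

The length prefix lets each side frame messages of arbitrary size on a byte stream, which is the usual design choice when mixing small control messages with large image payloads on one connection.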

I will also briefly present some more recent work, begun at the NAMIC all-hands meeting last month, to establish a communication link between Slicer and the VVCranial System using a combination of OpenIGTLink and VVLink.

May 14

Anna Custo, MIT

  • Title: Purely Optical Tomography: Atlas-Based Reconstruction of Brain Activation


Diffuse Optical Tomography (DOT) is a relatively new method used to image blood volume and oxygen saturation in vivo. Because of its relatively poor spatial resolution (typically no better than 1-2 cm), DOT is increasingly combined with other imaging techniques, such as MRI, fMRI and CT, which provide high-resolution structural information to guide the characterization of the unique physiological information offered by DOT. This work aims at improving DOT by offering new strategies for a more accurate, efficient, and faster image processor. Specifically, after investigating the influence of cerebrospinal fluid (CSF) properties on the optical measurements, we propose using a realistic segmented head model that includes a novel CSF segmentation approach for a more accurate solution of the DOT forward problem. Moreover, we outline the benefits and applicability of a faster forward-model solver based on the diffusion approximation, such as the one proposed by Barnett. We also describe a new registration algorithm based on superficial landmarks, an essential tool for the purely optical tomographic imaging process proposed here. A purely optical tomography of the brain during neural activity would greatly enhance DOT's applicability, since DOT's low cost, portability and non-invasive nature could be fully exploited without the compromises imposed by MRI's role in the DOT forward imaging process. We achieve a purely optical tomography by using a generalized head model (or atlas) in place of the subject-specific anatomical MRI. We validate the proposed imaging protocol by comparing measurements derived from the DOT forward solution obtained using the subject-specific anatomical model against those acquired using the atlas registered to the subject, and we show the results computed over a database of 31 healthy human subjects, focusing on a set of 12 functional regions of interest per hemisphere.
We conclude our study by presenting three experimental subjects with acquired measurements of the absorption-coefficient perturbation during motor cortex activation. We apply our purely optical tomography protocol to the three subjects and analyze the observations derived from both the DOT forward and inverse solutions. The experimental results demonstrate that it is possible to guide the DOT forward problem with a general anatomical model in place of the subject's specific head geometry and to localize neural activity to macroanatomical structures.

February 27

Haytham Elhawary, Mechatronics in Medicine Laboratory, Imperial College London, UK

Hosted by Noby Hata.


Mechatronic systems compatible with magnetic resonance imaging (MRI) promise interventional robots guided by real-time MRI as well as efficient tools for clinical diagnosis in internal medicine. We have designed two MR-compatible systems for use inside high-field closed-bore scanners. The first is a device designed to perform trans-rectal prostate biopsy under real-time image guidance. The 5-DOF robot is actuated by piezoceramic motors located very close to the isocentre, and its position is registered both by MR-compatible optical encoders and by passive fiducial markers embedded in an endorectal probe. By tracking the markers in the probe, the scan-plane orientation can be updated to always include the needle axis. A force sensor along the needle-driving DOF allows the generation of a force profile as the needle is inserted, quantifying tissue rigidity. Future work includes adding haptic feedback to the system. The second system provides limb-positioning capabilities to exploit the so-called “magic angle” effect, the increase in signal shown in tendinous tissue when oriented at 55 degrees to B0. The system uses specifically developed pneumatic rotary actuators to provide the large torques required, and compatible encoders for position feedback. With 3 DOF, a limb can be positioned at the required angle at a minimum distance from the isocentre while assuring patient comfort. Once initial registration is complete, the system can provide the location of the scan planes as the limb is oriented at a specified angle. Preliminary trials imaging the Achilles tendon of healthy volunteers demonstrate the functionality of the device.

Akihito Sano, Department of Engineering Physics, Electronics and Mechanics, Nagoya Institute of Technology, Japan

Hosted by Noby Hata

  • Title: Toward intuitive teleoperation and touch enhancing


Master-slave systems take advantage of human cognitive and sensorimotor skills. In such applications, intuitive teleoperation in a natural, instinctive manner is strongly desired so that the operator can draw on his full set of everyday experience. With medical applications in mind, we have developed a master console with a compact spherical stereoscopic display, named Micro Dome. Accurate visual/haptic registration has been realized within a mixed-reality framework. We have also recently developed a multi-finger telemanipulation system. One of its key devices is a biomimetic soft finger with a tactile sensor that can instantaneously detect whether an object is rough or slippery. Another is a tactile display, based on the squeeze effect of an ultrasonic vibrator, that can control the frictional coefficient. Further, we show a touch-enhancing tool that operates not only as a disturbance filter but also, potentially, as a magnifier of surface undulation.

Dr. Sano is a Professor at the Nagoya Institute of Technology, Japan. He received his M.S. degree in Precision Engineering from Gifu University in 1987 and his Ph.D. from Nagoya University in 1992. He received the JSME Award for Young Engineers in 1992, the JSME Robotics and Mechatronics Achievement Award in 1996, the ASME-ISCIE Japan-USA Symposium on Flexible Automation Best Paper Award (Ford Motor Company) in 2000, the 1st IEEE Technical Exhibition Based Conference on Robotics and Automation Best Technical Exhibition Award in 2004, and the SICE Transaction Best Paper Award in 2005. He is a fellow of the Japan Society of Mechanical Engineers. The main research interests in his laboratory are human-centered robotics, haptics, and passive walking. He was a Visiting Scholar in Mechanical Engineering at Stanford University.

February 6

Ender Konukoglu, INRIA

Hosted by Kilian Pohl

  • Title: Patient Specific Tumor Growth Modeling


Patient-specific tumor growth models combine mathematical descriptions of tumor growth dynamics with clinical data. Mathematically general models are adapted to each patient to provide the clinician with information regarding the growth of the tumor and its state. In this talk we will present our work on tumor growth modeling, on adapting these models to specific patient cases using MR images, and on applications of these models to radiotherapy planning.

January 16

Marco Riboldi, MGH

  • Title: 4D Targeting Error Analysis in Image-Guided Radiotherapy


Image-guided therapy (IGT) involves the acquisition and processing of biomedical images to actively guide medical interventions. This field has been rapidly evolving in recent years, resulting in commercially available IGT solutions. The proliferation of IGT technologies has been particularly significant in image-guided external beam irradiation (image-guided radiotherapy, IGRT), as a way to increase targeting accuracy. When IGRT is applied to the treatment of moving tumors, such as lung or liver lesions, image guidance becomes challenging, as intra-fraction motion leads to increased uncertainty in tumor localization. Different strategies, such as respiratory gating or tumor tracking, may be applied to mitigate the effects of motion. Each technique entails a different technological effort, as well as a different level of complexity in treatment planning and delivery. We postulate that the advantages of IGRT should be formalized by a mathematical description, in order to objectively compare different motion mitigation strategies. This will allow the comparison of different approaches to the compensation of inter- and intra-fractional motion in terms of the residual uncertainties in tumor targeting, to be detected by IGRT technologies. Quantifying targeting error requires extending it to a 4D space, where the 3D tumor trajectory as a function of time is taken explicitly into account. This extension makes possible the evaluation of the accuracy of the delivered treatment (4D targeting error analysis, 4DTEA). Accurate 4DTEA can be represented by a motion probability density function, describing the statistical fluctuations of tumor position over time. We illustrate the application of 4DTEA through examples, including: daily variations in tumor trajectory as detected by 4DCT, respiratory-gated irradiation via external surrogates, and real-time tumor tracking by means of stereoscopic X-ray imaging.


December 5

Arkadiusz Sitek, Physicist, Nuclear Medicine, Brigham and Women's Hospital, Assistant Professor Harvard Medical School

  • Title: Programming and Medical Applications Using Graphics Hardware

This will be an introduction to programming and applications of the graphics processing unit (GPU) in medical imaging. Driven by the computer game industry, graphics hardware has experienced tremendous growth in recent years. Thanks to its parallel computational architecture, as well as hardware-implemented geometric functions frequently used in data analysis and reconstruction, the GPU offers a readily available, fast computational resource for medical imaging applications. The GPU programming model is substantially different from the standard von Neumann model used to program CPUs. I will introduce the computational model of the GPU in the context of basic computer graphics and general-purpose computing. Examples of GPU implementations in the area of medical data visualization and reconstruction for Nuclear Medicine, X-ray CT, and MRI data will be given.
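The data-parallel model the abstract alludes to can be sketched in plain Python. This is an illustration of the programming style only, executed sequentially here; the window/level kernel and its parameter values are hypothetical examples.

```python
# Sketch of the GPU data-parallel model (illustrative, runs on the CPU here):
# one small "kernel" executes independently for every output element, which is
# what lets a GPU map thousands of pixels onto parallel processing units.

def window_level_kernel(pixel, window=400.0, level=40.0):
    """Per-pixel kernel: CT-style window/level mapping of an intensity to [0, 1]."""
    low, high = level - window / 2, level + window / 2
    if pixel <= low:
        return 0.0
    if pixel >= high:
        return 1.0
    return (pixel - low) / window

# On a CPU this is a sequential loop; on a GPU each kernel invocation would
# execute concurrently, since no output pixel depends on any other.
image = [-500.0, -120.0, 40.0, 200.0, 800.0]
displayed = [window_level_kernel(p) for p in image]
print(displayed)  # [0.0, 0.1, 0.5, 0.9, 1.0]
```

The key property is the absence of dependencies between output elements: any per-pixel operation of this form maps directly onto the GPU's fragment-processing pipeline, whereas computations with cross-element dependencies require more careful restructuring.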

May 9

Eigil Samset

  • Title: Image-guided navigation software and novel applications. Eigil's Talk

May 16

Mert Rory Sabuncu

May 23

Martin Reuter

  • Title: Laplace-Beltrami Spectra for Global Shape Analysis of 3D Medical Data. Martin's Talk

May 30

Padma Sundaram

  • Title: A Geometry Processing Approach to Colon Polyp Detection.