2006 IGT Workshop Tracking Breakout


Disclaimer: This is a very early draft... will be refined with input from the breakout session participants.


Back to workshop agenda

Questions to be answered in the report-out session

  • Identify 3 main challenges in this area
  • Identify 3 specific problems that can be solved by a collaborative effort between academic and industry partners
  • Identify a problem that the NCIGT can help address in the next year


Attendees

  • Sven Flossmann, BrainLAB AG, sven.flossmann at brainlab.com
  • Ziv Yaniv, Georgetown University, zivy at isis.georgetown.edu
  • Chuck Stevens, Ascension Technology Corp., cstevens at ascension-tech.com
  • Steve Hartmann, Medtronic Navigation, steve.hartmann at medtronic.com
  • Eric Shen, Philips Research, e dot shen at philips.com
  • Antony Hodgson, University of British Columbia, ahodgson at mech.ubc.ca
  • Haiying Liu, Brigham and Women's Hospital, hliu at bwh.harvard.edu
  • Joel Zuhars, GE Healthcare, joel.zuhars at med.ge.com
  • David Gobbi, Atamai Inc., dgobbi at atamai.com
  • Jeff Biegus, NDI, jbiegus at ndigital.com
  • Jeff Stanley, NDI, jstanley at ndigital.com
  • Terry Peters, Robarts Institute, tpeters at imaging.robarts.ca
  • Chris Nafis, GE Global Research, nafis at crd.ge.com
  • Eigil Samset, Brigham and Women's Hospital, samset at bwh.harvard.edu
  • Dan Groszmann, GE Healthcare, daniel.groszmann at med.ge.com
  • Simon DiMaio, Brigham and Women's Hospital, simond at bwh.harvard.edu


Agenda

  • Define scope of discussion:
    • methods for tracking anatomy and instruments in the image space as well as in the patient space
    • connecting pre-operative/intra-operative imaging and planning with devices and navigation
    • standards for devices that are used for navigation
  • State of the art in 6-dof tracking: EM/Optical/ultrasonic/active/passive
    • key specifications: static/dynamic accuracy, temporal/spatial resolution
    • hazards associated with each system, either with respect to accuracy or OR safety
    • particular strengths of each system with respect to ergonomics, reliability, accuracy, special applications
    • emerging technologies for tracking and the roles these technologies will play
    • connectivity standards
    • clinical applications to date
    • typical research applications to date
    • future research needs and applications
    • shortcomings of current technologies in the clinical context
  • Localizing and tracking soft tissue targets, particularly those strongly affected by respiration or heart motion
    • intra-operative imaging and tracking
    • methods for direct tracking of non-rigid targets (e.g. laser scanning)
    • methods for indirect tracking of non-rigid targets (e.g. modelling of deformation)
  • Abstraction layers for tracker / device integration (OpenTracker?)
    • standard interface/API for programming trackers
      • consider realtime requirements
    • hardware interfaces (USB, serial ports, Bluetooth, etc)
  • Unified interfaces for FDA-approved tracking systems (StealthLink, vectorlink, SPLOT)
  • Multi-modal tracking / sensor fusion
  • Latency, synchronization and frame rate tradeoffs
    • pushing the limits of latency (making tracking vendors address this issue)
  • Cooperative tracking applications with different tracking vendors / technologies and different imaging modalities: interference, global coordinate system, different working volumes, registration ...
  • Assessing accuracy in clinical environments (e.g., EM systems with surgery tables, metal tools, etc.). For example, use marketing data to establish the OR tables most likely to be encountered, and define a test protocol.
  • Related Technologies to tracking
    • haptics: haptic systems are essentially tracking systems that add tactile feedback that can be controlled via computer
    • virtual reality: tracking allows a surgeon to augment his or her direct view of the patient with a virtual representation of organs etc. that are registered to the patient


To address the questions posed at the top of the page, I suggest the following four-step approach:

0. Define the abstract object tracker

An apparatus for localizing points in three-dimensional space relative to a fixed coordinate system.

This definition separates the notion of tracking from that of transformation. Currently, most tracking systems are used to report the rigid transformation of a local coordinate system based on tracked points. In the future, the transformation type may become configurable, so that a tracker could also report affine or deformable (curved) transformations.
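
As a rough illustration only (the class and method names below are hypothetical and do not correspond to any vendor API or existing standard), such an abstract tracker might be sketched in Python as an interface that reports marker positions in the fixed tracker frame, plus an optional helper that derives a rigid tool transform from tracked points:

 # Minimal sketch of an "abstract object tracker" (hypothetical names, not a vendor API).
 from abc import ABC, abstractmethod
 from typing import Dict, Tuple

 import numpy as np

 Point3D = Tuple[float, float, float]

 class AbstractTracker(ABC):
     """Localizes points in 3D space relative to a fixed (tracker) coordinate system."""

     @abstractmethod
     def get_marker_positions(self) -> Dict[str, Point3D]:
         """Return the current position of each tracked marker, keyed by marker id."""
         ...

     def get_rigid_transform(self, measured, reference):
         """Least-squares rigid transform (4x4 numpy array) mapping the reference marker
         geometry (N x 3) onto the measured positions (N x 3)."""
         ref_c, mea_c = reference.mean(axis=0), measured.mean(axis=0)
         H = (reference - ref_c).T @ (measured - mea_c)
         U, _, Vt = np.linalg.svd(H)
         D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
         R = Vt.T @ D @ U.T
         T = np.eye(4)
         T[:3, :3] = R
         T[:3, 3] = mea_c - R @ ref_c
         return T

The point-registration helper is a standard least-squares (Horn/Kabsch-style) fit; a tracker with a configurable transformation type would swap in an affine or deformable model here.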

1. Define what we want: the ideal system, à la Welch and Foxlin (2002)

  • Small: an unobtrusive system.
  • Complete: estimates all degrees of freedom of the specific transformation.
  • Accurate: resolution better than 1 mm in position and 0.1 degree in orientation.
  • Fast: refresh rate of 1,000 Hz with a latency of less than 1 ms, regardless of the number of deployed sensors.
  • Concurrent: tracks up to 100 objects concurrently.
  • Robust: not affected by the environment (line of sight, lighting, electromagnetic interference)
  • Working volume: has an effective working volume of 5 m cubed (room sized).
  • Wireless: sensors are wireless and can function for several hours.
  • Inexpensive: free? (not impossible; ARTrack was used in Nicolau et al., MICCAI'05).
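
To make the wish list concrete, it could be captured as a structured spec against which candidate systems are compared field by field. The sketch below is illustrative only; the class and field names are not a standard, and the values simply mirror the bullets above.

 # The "ideal tracker" wish list as a structured spec (illustrative names, not a standard).
 from dataclasses import dataclass

 @dataclass
 class TrackerSpec:
     positional_resolution_mm: float
     angular_resolution_deg: float
     refresh_rate_hz: float
     latency_ms: float
     max_tracked_objects: int
     working_volume_m: float      # edge length of a cubic working volume (assumption)
     wireless: bool
     line_of_sight_required: bool

 IDEAL = TrackerSpec(
     positional_resolution_mm=1.0,
     angular_resolution_deg=0.1,
     refresh_rate_hz=1000.0,
     latency_ms=1.0,
     max_tracked_objects=100,
     working_volume_m=5.0,
     wireless=True,
     line_of_sight_required=False,
 )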

2. State of the art

What types of tracking systems are available, and what are their characteristics (i.e., how do they relate to the ideal system)?

  • Optical - stereo, homography-based single camera, camera ego-motion
  • Electromagnetic
  • Fiber optic - ShapeTape
  • Mechanical (FaroArm, all robotic systems)
  • Ultrasonic systems

3. Suggestions on how to get there

  • Meta trackers (OpenTracker, InterSense) that seamlessly integrate multiple tracking systems.
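
A rough sketch of the meta-tracker idea (illustrative names only; this is not the OpenTracker or InterSense API): several trackers are wrapped behind a single interface, and each source is registered into a common world frame by a fixed 4x4 transform. It assumes each wrapped tracker exposes a get_marker_positions() call like the abstract interface sketched in step 0, and that marker ids are unique across sources.

 # Rough sketch of a "meta tracker" (illustrative only; not the OpenTracker API).
 from typing import Dict, List, Tuple

 import numpy as np

 class MetaTracker:
     def __init__(self):
         # (tracker, world_from_tracker) pairs; world_from_tracker is a 4x4 registration transform
         self._sources: List[Tuple[object, np.ndarray]] = []

     def add_source(self, tracker, world_from_tracker: np.ndarray) -> None:
         """Register a tracker together with the transform from its frame to the world frame."""
         self._sources.append((tracker, world_from_tracker))

     def get_positions_world(self) -> Dict[str, np.ndarray]:
         """Collect all marker positions and express them in the common world frame.
         Assumes marker ids are unique across the wrapped trackers."""
         merged: Dict[str, np.ndarray] = {}
         for tracker, world_from_tracker in self._sources:
             for marker_id, (x, y, z) in tracker.get_marker_positions().items():
                 p = world_from_tracker @ np.array([x, y, z, 1.0])  # homogeneous point
                 merged[marker_id] = p[:3]
         return merged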


Minutes (rough)

  • What is tracking?
    • how broad a scope should we consider for this breakout?
    • tracking of instruments versus tracking of anatomy (may involve imaging).
    • perhaps define a hierarchy of "virtual classes", or an abstraction of tracking (e.g., OpenTracker) to decouple different modes of tracking.
    • let's explore high level issues in tracking, largely abstracted from specific devices...


  • Assessment and Validation
    • Nafis - review of testing methods (link SPIE paper here)
      • detection of accuracy should also include "self assessment" by device, i.e., devices need to be able to determine when tracking is inaccurate
    • testing methods are very much application dependent
    • how do we compare systems?
      • stability of accuracy in different modes of use
        • use an equivalent test (best representation of use)
        • capabilities to detect inaccuracy and degradation
    • need a consensus on the set of tests to be used
    • need a consensus on metrics and measures to use (e.g., average error, RMS error, confidence intervals, etc.); a small computation sketch follows this list
      • reporting depends upon procedure type/protocols
    • perhaps we can track surgeon performance during interventions (logging) to determine base accuracy requirements
    • consensus on tracking requirements
      • vendors are not willing to define this, due to liability issues
      • need a standards body? who governs this?
      • role of scientific community?
    • interference between devices (and other equipment)
      • again, need standards on the ability to detect degradation
    • another important measure is LATENCY
      • digital filtering is the main culprit (as sensors get smaller, SNR decreases, more filtering is needed, and latency increases)
      • difference between latency before time stamp versus latency due to transfer to host
      • effect of latency on accuracy versus perception
    • Confidence measures
      • no standard for specifying a confidence measure (software libraries like OpenTracker provide a mechanism for specifying confidence, but not what it means)
      • difficult to quantify
      • qualitative measures (e.g. 0-1 confidence) are often not useful
      • cumulative error due to orientation, location, field of view, etc.
        • attach as much error bounding as possible
        • performance error
        • should vendors be providing more detailed measures to decouple the contributors to error?
          • bandwidth issue? RS232? transformations?
          • may not be necessary to get this information at full frame rate; could be requested if confidence is low?
    • absolute errors versus relative errors
    • different needs between clinical developers and scientists
    • how to drive requirements?
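
As referenced in the metrics item above, here is a minimal computation sketch (illustrative function and variable names, not an agreed standard) of the kind of shared measures discussed: mean error, RMS error, and a 95% confidence interval from repeated measurements of a point with a known ground-truth position.

 # Shared error measures from repeated measurements of a known point (illustrative names).
 import numpy as np

 def error_statistics(measured, ground_truth):
     """measured: (N, 3) repeated position measurements; ground_truth: (3,) reference position."""
     errors = np.linalg.norm(measured - ground_truth, axis=1)  # per-sample distance error
     mean = errors.mean()
     rms = np.sqrt(np.mean(errors ** 2))
     # Normal-approximation 95% CI on the mean error (assumes a reasonable sample size).
     half_width = 1.96 * errors.std(ddof=1) / np.sqrt(len(errors))
     return {
         "mean_error": mean,
         "rms_error": rms,
         "ci95_mean_error": (mean - half_width, mean + half_width),
     }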


  • APIs and Open Systems
    • open interfaces - how to choose functionality - too large?
      • use basic requirements common to most users.
    • regulatory
      • how to validate open systems?
      • depends on flow of data (in versus out) - for tracking systems flow is mostly out = low risk for tracking system
    • what is the need?
    • each tracking system is different, need common API
      • focus on commonalities
    • Is OpenTracker good enough?
      • some responsibility of vendors to monitor development
      • we need to build support for an open-source API, so that there is continuity
      • vendors cannot support all APIs - need a standard?
    • Open Source model feasible for clinical use in general?
      • existing use of open-source components by vendors
        • frozen open-source validated internally?
      • what is the deployment route through the open-source community?
      • consider shift of the responsibility/liability to the surgeon?
        • FDA in the middle
        • how is surgeon qualified?
        • role of the scientific community?
    • NEED METHODS for tracking data to BE SYNCHRONIZED (a rough resampling sketch follows this list)
      • due to fusion of multiple systems
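
A minimal sketch of what timestamp-based synchronization could look like when fusing two tracking streams (illustrative names and data only): each stream is a sequence of timestamped positions, and both are resampled onto a common timeline by linear interpolation. Real systems would also have to account for clock offset and for latency before the timestamp, as noted in the latency discussion above.

 # Resampling two timestamped tracking streams onto a common timeline (illustrative data).
 import numpy as np

 def resample_stream(timestamps, positions, common_times):
     """Linearly interpolate an (N, 3) position stream onto the common timeline."""
     return np.column_stack(
         [np.interp(common_times, timestamps, positions[:, axis]) for axis in range(3)]
     )

 # Usage: restrict to the overlapping time window, then compare / fuse sample by sample.
 t_a = np.array([0.00, 0.02, 0.04, 0.06])   # ~50 Hz stream
 p_a = np.random.rand(4, 3)
 t_b = np.array([0.00, 0.033, 0.066])       # ~30 Hz stream
 p_b = np.random.rand(3, 3)

 common = np.arange(max(t_a[0], t_b[0]), min(t_a[-1], t_b[-1]), 0.01)
 a_on_common = resample_stream(t_a, p_a, common)
 b_on_common = resample_stream(t_b, p_b, common)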


  • See the PPT report below for a condensed summary of challenges and problems.



Summary Report

Breakout report presentation (Simon DiMaio)