2016 Winter Project Week:TrackedUltrasoundStandardization/PreparationMeetings

From NAMIC Wiki

Latest revision as of 05:42, 14 December 2015

Christian's list of potential topics of collaboration

Enhance connection of CustusX client to PLUS server

Background: CustusX has used PLUS to connect to an Ultrasonix scanner with an attached tracking device from Ascension (Sonix GPS), as part of the EU RASimAs project. CustusX also has tracking support from the IGSTK toolkit, and its own direct support for a number of video/US devices. We would like to add PLUS to this family, possibly replacing some of our older components.

Topics:

  • Tracking of BrainLab devices.
  • Connection to BK medical scanners.
  • Enhance transfer of US probe metadata. Might include extending the OpenIGTLink standard.
  • Use of PLUS US reconstruction algorithms in CustusX as an alternative to CX internal algorithms, and enable comparison of algorithms.

Standardization of IGT software

Background:

  • From email discussions, the major players are converging towards OpenIGTLink. Several groups are using PLUS and/or Slicer, but are also open-sourcing their own software and want to be compatible with each other.
  • Data should be usable by several systems in order to ease comparison in e.g. studies.
  • Our group needs to communicate models and volumes with Slicer, and would like to use PLUS.

There are several areas where standardization would be beneficial:

  • messaging between applications.
  • file formats.
  • algorithms/shared libraries.

It is important to build as much as possible on existing material and de facto standards.

Topics:

  • Messaging: OpenIGTLink is the de facto standard for messaging. Is the standard complete? What do we need? US-probe data, polydata colors, volume transfer functions /LUTs.
  • File Formats: Models, Volumes, Video streams, US metadata, scenegraph. Some of these objects are already somewhat standardized, but not all.
  • Algorithms: A library of IGT algorithms would be welcome. It might be included into an existing library (VTK, CTK, …) or emerge as a new one.

Meeting 2015-11-23

Participants: Andras Lasso, Tamas Ungi, Steve Pieper, Junichi Tokuda, Adam Rankin, Simon Drouin, Ole Vegard Solberg, Ingerid Reinertsen, Christian Askeland

Notes:

  • Introduction of all present. Everyone seems to have a strong interest in IGT and systems integration.
  • Junichi: Funded OpenIGTLink grant scope is well aligned with this standardization effort.
  • Christian: project manager for CustusX, Plus is used for data acquisition, 3D Slicer used for pre/post operative processing
  • Ingerid: research scientist, IGT and CustusX user, uses Slicer for pre-processing, segmentation, registration, and post-processing; clinicians use Slicer for tumor segmentation, which is becoming routine
  • Simon: IBIS has been developed for 10 years, next step is integration with Plus, plan is to move features to 3D Slicer/SlicerIGT/Plus and use that as a platform (grant application submitted for getting funding)
  • Walkthrough of (some of) Christian’s topics
    • CustusX - BrainLab connection through PLUS: BrainLab already supports OpenIGTLink, connection either through Plus or directly to the application should be straightforward.
    • CustusX - BK connection through PLUS: Partially implemented in PLUS. NOTE: Must buy additional license from BK to access continuous image streaming (GRAB ON/OFF command). Without that the maximum acquisition rate is limited to a few FPS. GRAB ON/OFF interface implementation is partially complete in Plus. If a BK machine is available that supports this command then implementation in Plus requires a couple of days.
    • 4D (3D+t) US acquisition support: Robarts has access to a private Philips API, SINTEF has access to a private GE API. Both have converter to OpenIGTLink (Philips converter is implemented in Plus, GE is external).
    • Ultrasound scanner two-way communication support in OpenIGTLink (data query, device control - depth, imaging mode): several of those present (Adam, Christian, ..) and others need this, and several slightly different implementations exist (Plus, CustusX, MUSiiC). Junichi agrees that this should be standardized into OpenIGTLink. We need a backward and forward compatible solution, possibly XML in STRING OpenIGTLink message. Needs further discussion
    • Shared file formats: All IGT data (pose tracking, 2D+t, 3D, 3D+t images) should be stored in a common format. Needs further discussion
      • Steve suggests DICOM. Some work already in progress on using DICOM as interchange format between BrainLab and Slicer (BrainLab already uses DICOM as its internal data representation). Most of the required structures are present, possibly excluding tracking positions. This can be (with time and work) added to the standard. General agreement on DICOM, but more discussion needed. Real-time streaming to DICOM file may be problematic (DICOM header has to be prepended to image data; if image data is several GB then it may take long time, not convenient for intra-operative use).
      • Can also discuss with Michael Onken/DCMTK when in Boston.
  • General agreement that standardization and cooperation is needed, and that more discussions are needed before Boston.
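The two-way ultrasound communication point above suggests carrying commands as XML inside an OpenIGTLink STRING message. A minimal sketch of that pattern (Python; the `Command`/`Name` element and attribute names are illustrative assumptions, not part of any standard):

```python
import xml.etree.ElementTree as ET

def build_command(name, **attributes):
    """Serialize a control command as an XML string suitable for
    embedding in the body of an OpenIGTLink STRING message.
    Element/attribute names here are illustrative, not standardized."""
    element = ET.Element("Command", Name=name, **attributes)
    return ET.tostring(element, encoding="unicode")

def parse_command(xml_text):
    """Recover the command name and parameters on the receiving side."""
    element = ET.fromstring(xml_text)
    params = dict(element.attrib)
    return params.pop("Name"), params

msg = build_command("SetDepth", DepthMm="60")
name, params = parse_command(msg)
```

Because unknown attributes are simply carried through as strings, a receiver that does not understand a parameter can ignore it, which is one way to get the backward/forward compatibility discussed above.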

TODO:

  • Contact others if they were interested to join these efforts (MUSiiC, MITK-IGT, Nifty-IGT) - Andras
  • Ask Tina about possibility for Amigo visit (BK ultrasound, BrainLab integration) - Andras
  • Talk to BrainLab engineers about tracked ultrasound (and general IGT) information storage in DICOM format - Steve

Next meeting: December 7, 09:00 EST (15:00 Trondheim time)

Meeting 2015-12-07

Date: December 7, 09:00 EST (15:00 Trondheim/Heidelberg time)

Participants: Andras Lasso, Tamas Ungi, Steve Pieper, Junichi Tokuda, Adam Rankin, Simon Drouin, Ole Vegard Solberg, Christian Askeland, Alfred Franz, Thomas Kirchner, Janek Groehl

Topics of interests

  • Tracking and imaging data stream (image includes distance maps, various color images, etc):
    • Storage file format
      • MetaImage (mha/mhd), NRRD with custom fields
        • Very simple, flexible, human-readable file format
        • Lossless compression, special image types (4D, color, etc.) are supported
        • Basic support is available in all research software, C++ and Matlab libraries are available for reading/writing
        • Reading/writing is fully supported by Plus, 3DSlicer/SlicerIGT/Sequences
        • Limitations: only one image stream per file, no per-frame fields (simulated by adding per-file custom fields; standard libraries cannot associate them with frames); numbers stored as text, some precision is lost
      • DICOM: Ultrasound image IOD, Ultrasound multi-frame image IOD, Enhanced ultrasound volume IOD
        • use cases, overview
        • Standard exists for tracked 2D, 3D ultrasound sequences (frame of reference relative to probe, patient, other)
        • Physiological waveforms (cardiac, respiratory)
        • Physical units, real world values (speed, etc), color calibration
        • Custom per-frame fields can be specified (could be added to standard)
        • Multiple image regions (biplane, etc)
        • Limitation: no standard for storing navigation data (only probe and image position) - to be confirmed with David Clunie
    • Storage in memory for visualization and processing
      • Plus: rotating buffer for image frames (vtkImageData + custom fields), tool tracking data (pose, status); TrackedFrameList (list of image data with string field list)
      • 3D Slicer: sequences of data nodes (vtkMRMLScalarVolume, vtkTransform)
    • Real-time transfer: colors, volume transfer functions / LUTs?
  • Ultrasound metadata: imaging parameters (depth, dynamic range, etc.), image geometry (clipping rectangle, clipping fan, image spacing, axis directions, transducer center position), freeze state, probe button press
    • File storage: currently in XML in Plus
    • In memory storage: XML data elements
    • Real-time transfer: MUSiiC custom OpenIGTLink messages have some info
  • Surgical microscope metadata: focus, zoom
  • Calibration info: from/to coordinate system name, linear transform
    • File storage: In Plus: XML, CoordinateDefinitions element
    • In memory: vtkMatrix4x4, TransformRepository
    • Real-time transfer: application receives as TRANSFORM message, set by OpenIGTLink command
  • Meshes, point cloud
    • File storage: vtkPolyData, VTK file formats are OK but there is no coordinate system information; DICOM?
    • Memory: VTK class
    • Real-time transfer: is anything missing in OpenIGTLink?
  • Dependency, data provenance
    • Data representation for segmentation, keeping relationship to source structures in DICOM
    • Make sure there is a way to store DICOM UIDs, use definitions that are already specified in DICOM
  • Scene:
    • File storage: Slicer uses MRML - alternatives?
    • Memory: MRML library
    • Real-time transfer?
  • Common algorithms:
    • Pivot calibration (with robust point matching, error rejection, error metrics, etc.)
    • Landmark registration (with automatic landmark detection, etc.)
    • Volume reconstruction
    • Scan conversion, brightness conversion
    • Bone surface detection
    • Ultrasound simulation
    • US/MRI registration
    • Custom reader/writer classes for tracked image data (supporting compressed streaming, compressed seeking, all necessary metadata, etc.)
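Several of the algorithms listed above are small enough to serve as shared reference implementations. As an illustration, pivot calibration (recovering the stylus tip offset and the fixed pivot point from tracked poses while pivoting the stylus) reduces to a linear least-squares problem; a minimal numpy sketch, without the robust point matching and outlier rejection mentioned above:

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """Estimate the stylus tip offset (in tool coordinates) and the
    fixed pivot point (in tracker coordinates) from tracked poses.

    For each pose i:  R_i @ t_tip + p_i = p_pivot
    which stacks into the linear system  [R_i  -I] @ [t_tip; p_pivot] = -p_i.
    Plain least squares only; no outlier rejection or error metrics.
    """
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, p) in enumerate(zip(rotations, translations)):
        A[3*i:3*i+3, 0:3] = R
        A[3*i:3*i+3, 3:6] = -np.eye(3)
        b[3*i:3*i+3] = -np.asarray(p)
    x, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]   # tip offset, pivot point
```

The residual of the solved system directly gives an error metric, which is one of the quantities a standardized implementation would be expected to report.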

Current implementations

Tracked ultrasound storage in file

  • Plus: sequence metafile
  • CustusX:
    • Saving during recording: a new folder is created, each image frame is written to a separate file (optionally compressed), separate file stores tracking data and metadata
    • Sequence metafile (MetaImage format with Plus custom fields) is also used
    • Application state is saved to an XML file (similar to MRML scene) that links to other files (sequence metafile, VTK polydata, etc)
    • The entire procedure is also recorded to a separate, simple, compressed file
  • IBIS:
    • Sequence of MINC files: each frame is a separate MINC file, contains both tracking transform and calibration transform that was current when the frame was acquired. All provenance information is kept in the header of each file (up to the original data source, DICOM, etc.)
  • MITK-IGT:
    • There is not really a standard, images can be saved as a 3D NRRD image using ITK, tracking data can be saved as xml or csv
    • Sometimes sequence metafile is used
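The Plus sequence metafile mentioned above is a MetaImage header in which per-frame data is simulated with custom fields named `Seq_Frame<NNNN>_<FieldName>`. A small sketch of recovering those fields (Python; the concrete field names in the sample header follow common Plus conventions but vary between configurations):

```python
import re
from collections import defaultdict

FRAME_FIELD = re.compile(r"^Seq_Frame(\d{4})_(\S+)\s*=\s*(.*)$")

def parse_sequence_header(header_text):
    """Split a sequence metafile header into global fields and a
    per-frame field dictionary keyed by frame index."""
    global_fields, frames = {}, defaultdict(dict)
    for line in header_text.splitlines():
        m = FRAME_FIELD.match(line.strip())
        if m:
            frames[int(m.group(1))][m.group(2)] = m.group(3).strip()
        elif "=" in line:
            key, _, value = line.partition("=")
            global_fields[key.strip()] = value.strip()
    return global_fields, dict(frames)

header = """\
ObjectType = Image
NDims = 3
UltrasoundImageOrientation = MF
Seq_Frame0000_Timestamp = 103.4
Seq_Frame0000_ProbeToTrackerTransform = 1 0 0 5 0 1 0 6 0 0 1 7 0 0 0 1
Seq_Frame0001_Timestamp = 103.5
ElementDataFile = LOCAL
"""
globals_, frames = parse_sequence_header(header)
```

This also makes the limitation noted above concrete: standard MetaImage readers see the `Seq_Frame...` entries only as flat custom fields and cannot associate them with frames.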

Tracked ultrasound storage in memory

  • Plus: TrackedFrameList class, list of PlusVideoFrame, PlusVideoFrame is a VTK class with vtkImageData (with zero origin and unit spacing), timestamp, and custom string fields (with convenience function to store/retrieve transforms, numerical values, etc.)
  • CustusX:
    • Uses VTK as main backend (with ITK converters for algorithms)
    • Custom image class that contains vtkImageData (with zero origin and unit spacing) plus additional orientation information; the position stored in the vtkImageData itself is always 0
    • All transforms are stored as Eigen matrices
    • Sequences of video and ultrasound frames are stored in a dedicated class
  • IBIS:
    • vtkImageData data and separate transform vtkMatrix4x4 for individual frame and separate for registration
    • Everything is kept in memory, no streaming writing to disk
  • MITK-IGT:
    • OpenCV mat image for image acquisition and pre-processing
    • MITK image sequence could be used as well
    • Calibration is a property of the MITK image
    • Tracking data: tracking data set is a separate structure
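The in-memory representations above share a common shape: a pixel buffer plus a timestamp plus named string fields with transform accessors. A toolkit-neutral sketch of that pattern (Python; this mirrors the TrackedFrameList description above, it is not any specific toolkit's API):

```python
from dataclasses import dataclass, field

import numpy as np

@dataclass
class TrackedFrame:
    """One acquired frame: pixel data, acquisition time, and custom
    string fields (transforms, status flags, numeric values)."""
    image: np.ndarray
    timestamp: float
    fields: dict = field(default_factory=dict)

    def set_transform(self, name, matrix):
        # Store a 4x4 matrix as a 16-number row-major string field.
        self.fields[name + "Transform"] = " ".join(
            str(v) for v in np.asarray(matrix).ravel())

    def get_transform(self, name):
        values = [float(v) for v in self.fields[name + "Transform"].split()]
        return np.array(values).reshape(4, 4)

frame = TrackedFrame(np.zeros((480, 640), dtype=np.uint8), timestamp=12.5)
frame.set_transform("ProbeToTracker", np.eye(4))
```

Keeping transforms as string fields (as Plus does) trades some numeric precision for a representation that round-trips unchanged through file headers and text-based messages.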

Tracked ultrasound streaming

  • Plus:
    • Transforms: receive TRANSFORM or TDATA, send TRANSFORM (TDATA implementation is in progress)
    • Images: IMAGE, USMESSAGE (MUSiiC ultrasound message), TRACKEDFRAME (Plus tracked frame: image data + string fields)
  • CustusX:
    • OpenIGTLink: has been used for 4 years, Plus for about half a year
    • Image: both 3D and 4D US, using own image message type, non-standard message
    • Tracking: TRANSFORM
  • IBIS: everything is in the same process, no IPC
  • MITK: TDATA or TRANSFORM messages, depending on the application
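All of the streaming variants above ride on the same fixed 58-byte OpenIGTLink v1 header (version, 12-byte type name, 20-byte device name, timestamp, body size, CRC), which is also the source of the 20-character device-name limit raised for transform names below. A minimal pack/unpack sketch (Python; the CRC64 computation is stubbed to zero here):

```python
import struct

# OpenIGTLink v1 header: version (uint16), type (char[12]),
# device name (char[20]), timestamp (uint64), body size (uint64),
# CRC64 (uint64) -- all big-endian, 58 bytes total.
HEADER_FORMAT = ">H12s20sQQQ"
HEADER_SIZE = struct.calcsize(HEADER_FORMAT)  # 58

def pack_header(msg_type, device_name, body_size, timestamp=0, crc=0):
    if len(device_name) > 20:
        raise ValueError("device name limited to 20 characters")
    return struct.pack(HEADER_FORMAT, 1,
                       msg_type.encode().ljust(12, b"\0"),
                       device_name.encode().ljust(20, b"\0"),
                       timestamp, body_size, crc)

def unpack_header(data):
    version, msg_type, device, timestamp, body_size, _crc = \
        struct.unpack(HEADER_FORMAT, data[:HEADER_SIZE])
    return (version, msg_type.rstrip(b"\0").decode(),
            device.rstrip(b"\0").decode(), timestamp, body_size)
```

A name such as StylusTipToTransducerTransform (30 characters) does not fit the device-name field, which is exactly the limitation noted in the PerkLab section.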

Plans

  • We focus on real-time data transfer standardization first, then file storage, in-memory representation, reader/writer and processing software library

Next meeting: December 14, 09:00 EST (15:00 Trondheim/Heidelberg time)

Homework before next meeting: Add to this wiki page the description of all data to be transferred in real-time using OpenIGTLink

Meeting 2015-12-14

Link to join the meeting: https://plus.google.com/u/0/events/cna4d11el56n6lhkaqrupqlu8tk?authkey=CLnh6Zza0dTHFw

Data necessary to be transferred in real time

MITK-IGT group

  • For image data:
    • Option to add metadata for different modalities (e.g. ultrasound and photoacoustic imaging)
    • Additional metadata for PAI:
      • Wavelength (for multispectral image shots)
      • Laser pulse energy (for normalisation of signals)
      • Adding specific message types based on IMGMETA could be of use [optional/custom fields?]
  • For tracking data:
    • Consistent format
    • TDATA message suits our needs

IBIS group

  • Image data:
    • RGB + Depth images
    • synchronized stereo images
    • Meta-data for augmented reality: zoom, focus setting of microscope/endoscope/camera
    • Meta-data for US: depth, mode (b-mode, doppler, power doppler), image mask parameters
  • Tracking data:
    • TDATA + Tracker tool state
    • Meta-data for augmented reality: calibration parameters:
      • Camera model type: Zhang, Heikkila, Tsai.
      • focal length
      • Image center
      • distortion coefficients (depends on model type)
    • Meta-data for US: calibration matrix (tool to image transform)
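The augmented-reality calibration parameters listed above (focal length, image center, distortion coefficients) fully specify a camera projection. A minimal sketch of applying them, assuming a Zhang-style pinhole model with two radial distortion coefficients (the exact distortion terms depend on the camera model type named in the metadata):

```python
import numpy as np

def project(points_cam, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Project 3D points (in camera coordinates, z > 0) to pixel
    coordinates using a pinhole model with radial distortion.
    Distortion: x' = x * (1 + k1*r^2 + k2*r^4), Zhang-style."""
    points_cam = np.asarray(points_cam, dtype=float)
    x = points_cam[:, 0] / points_cam[:, 2]
    y = points_cam[:, 1] / points_cam[:, 2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2
    u = fx * x * d + cx
    v = fy * y * d + cy
    return np.stack([u, v], axis=1)

pixels = project([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]],
                 fx=800, fy=800, cx=320, cy=240)
```

For the other model types listed (Heikkila, Tsai), only the distortion expression changes; the focal length and image center parameters are shared, which argues for transmitting the model type alongside the coefficients.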

SINTEF group

The work is based on experience working mainly with Ultrasonix and GE scanners. Another igtl message used by CustusX can be found here. It contains the minimal information required to work with 2D probes, with no control (only status).

Complete description of all properties for an ultrasound server

The following are properties for a server residing on an ultrasound scanner. The structure is given in pseudocode, but the intended format is XML.

Parameters are either required, optional or extension:

  • required: Defined by standard. Must be present in order to comply with standard.
  • optional: Defined by standard. Might be present, if available from scanner and implemented.
  • extension: not part of the standard, but added by vendor.

Properties:

  • Available_applications { list of names } (list of applications/presets available on the scanner)
  • Application_name { name }
  • Available_Probes { device_id list } ( List of all probes currently attached to the scanner.)
  • Available_Streams { device_id list } (these are an exhaustive list of all stream types that can be of interest from a scanner. Only those supported by the scanner are given. The scalar variants contain raw data, while the RGB/Gray versions are suitable for direct screen display. LUTs can be available when using the raw variants.)
    • Display, (RGB)
    • US_B-Mode (Gray)
    • US_B-Mode-Raw (Scalar)
    • US_Color-Doppler (RGB)
    • US_Color-Doppler-Velocity (Scalar)
    • US_Power-Doppler (RGB)
    • US_Elastography (RGB)
    • US_Elastography (Scalar)
    • US_R0 (Scalar)
    • US_R1 (Scalar)
    • US_RF (Scalar)
    • US_IQ (Complex)
  • Probe: {device_id_0} (one probe entry for each probe)
    • probe parameters: Name, Center freq, focus pt, pitch, geometry model..?,
  • Stream: {device_id} (one stream entry for each stream)
    • Image Format: {RGB, Gray, Complex, Scalar}. If Scalar, a color table might be available (in a separate field)
    • Probe sector: (A description handling 2D/3D sector/curvilinear/linear probes. Handles most cases, mask handles the rest. TODO: add figure. )
      • origin {x,y} (position of sector arc center in image space)
      • depth start (start offset from origin)
      • depth stop (end offset from origin)
      • azimuth (shape in the azimuth plane)
        • type{linear, sector}
        • width{angle/width}
        • tilt {angle, optional}
      • elevation (shape in the elevation plane, optional, NA for 2D probes)
        • width{angle/width}
        • type{linear, sector}
        • tilt {angle, optional}
      • exact: bool. If false, must use Mask stream to get exact geometry
      • mask {device_id, optional}: a device_id that can be used to GET an IMAGE containing a mask describing the sector.
    • Color table {device_id, optional} If given, points to a device_id that can be used to GET a COLORT, can be applied on scalar image values.
    • optional parameters (name-value pairs, specified by description and value def (unit etc) examples given)
      • frequency
      • harmonic {y/n}
      • focus points {array}
      • pitch
      • nyquist velocity
      • packet size
      • averaging window
    • extension parameters (same as optional, but user-defined, nonstandard)
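Since the intended serialization is XML, the stream entry sketched above might be rendered as follows. This is a hand-written illustration only; all element names are placeholders, and the actual schema is listed as a TODO below:

```xml
<!-- Illustrative sketch only: element names are placeholders;
     defining the real XML schema is listed as a TODO below. -->
<Stream device_id="US_B-Mode-Raw">
  <ImageFormat>Scalar</ImageFormat>
  <ProbeSector exact="true">
    <Origin x="256" y="0"/>
    <DepthStart>10</DepthStart>
    <DepthStop>120</DepthStop>
    <Azimuth type="sector" width="75deg" tilt="0deg"/>
  </ProbeSector>
  <ColorTable device_id="US_B-Mode-Raw_LUT"/>
  <Optional>
    <Frequency unit="MHz">7.5</Frequency>
    <Harmonic>false</Harmonic>
  </Optional>
  <Extension>
    <!-- reserved element for vendor-defined, nonstandard properties -->
  </Extension>
</Stream>
```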

Setting the scanner state

The scanner can be controlled by sending a state from client to server in a format similar to the one given in the properties section. This requires that the client know the valid values, e.g. the available probes, in order to set a probe.

The following must be controllable:

  • current probe
  • current application
  • stream on/off for each stream
  • probe sector shape for each stream.

Status message format, suggested implementation

The status can be organized as an XML document, and can thus be embedded into any text field in any message. XML also provides an easy system for optional and extension parameters. Each DOM element can contain a reserved extension element, where custom properties can be added without interfering with the required/optional properties.

The existing STATUS message can be reused for this purpose, or a new similar message can be defined. Status messages are defined for the scanner as a whole, and for each specific stream. The status is sent per stream in order to reduce the amount of information in each message. Note: For each stream there are now two messages: STATUS and IMAGE.

The syntax for the message descriptions below is:

MESSAGE_TYPE {device_id} [direction CLIENT<->SERVER]: Description of message.

Scanner messages
  • GET_STATUS {no device_id} [CLI->SVR]: Request scanner to return STATUS{scanner_id}.
  • STATUS{scanner_id} [SVR->CLI]: A status for the entire scanner, with a XML doc containing all scanner properties except the details of each stream.
  • STT/STP_STATUS{scanner_id} [CLI->SVR]: start/stop streaming of scanner status. Status is only sent when changed.
  • RTS_STATUS{scanner_id} [SVR->CLI]: reply when start/stop called
Stream status messages
  • GET_STATUS {no stream_id} [CLI->SVR]: Request scanner to return STATUS{stream_id}.
  • STATUS{stream_id} [SVR->CLI]: A status for the stream, with a XML doc containing properties for this stream only.
  • STT/STP_STATUS{stream_id} [CLI->SVR]: start/stop streaming of stream status. Status is only sent when changed.
  • RTS_STATUS{stream_id} [SVR->CLI]: reply when start/stop called
Additional data related to stream
  • GET_IMAGE{mask_id} [CLI->SVR]: get mask
  • IMAGE{mask_id} [SVR->CLI]: a mask image for the stream associated with mask_id.
  • GET_COLORT{colort_id} [CLI->SVR]: get color table
  • COLORT{colort_id} [SVR->CLI]: a color table for the stream associated with colort_id.
Stream image messages
  • IMAGE{stream_id} [SVR->CLI]: An image from the given stream, corresponding to the status given in STATUS{stream_id}.
  • STT/STP_IMAGE{stream_id} [CLI->SVR]: start/stop image streaming.
  • RTS_IMAGE{stream_id} [SVR->CLI]: reply when start/stop called
Setting scanner state

This might be implemented by sending a STATUS containing the modified state from client to server. TBD.

Choices

  • By reusing existing messages, we only need to add STT/STP/RTS_STATUS for this to work. Alternative is to be more concise by adding new messages.
  • Not added: Available range for all settable properties. Not easy to query on scanners, overkill.
  • Not added: General catch-all geometry description for sector. The proposal catches all normal cases, while the Mask Stream provides an exact description of other cases.

Todo

  • Consider tracking data vs. coordinate spaces.
  • define xml schema
  • define all properties exactly.
  • figure describing probe sector.
  • add gain property?

PerkLab

  • Tracking data:
    • Transform matrix
    • Transform name: standard <from>To<to>Transform naming; currently it is limited by the device name length (20 characters), which is often not enough (e.g. StylusTipToTransducerTransform)
    • Transform status: quality value for electromagnetic sensors; status value for any tracker (out of view, damaged marker warning); currently it is limited to received/not received and a watchdog determines tool status in the application based on that (if no update is received for 0.5 or so seconds then the transform is considered out of date)
    • Update transform (already implemented, using OpenIGTLink commands)
    • We may want to get (interpolated) transforms at the image frame's timestamp and/or all transforms at full acquisition rate
  • Image data:
    • Types:
      • 2D grayscale (8-bit, 1-component), RF-data (16-bit, 1-component), color (8-bit, 3/4-component RGB without/with alpha channel)
      • 3D grayscale (8-bit, 1/2-component without/with alpha channel) for reconstructed volumes
    • Meta-data that could be useful to send with each image (in addition to data already contained in OpenIGTLink IMAGE messages):
      • Clipping: rectangle (origin, size - in pixels), fan (start angle, stop angle, start radius, end radius)
      • Transducer to Image transform (where the transducer's position is within the image, what is the image spacing and axis directions)
      • Transducer to Probe transform (where the probe-attached marker or sensor is located in the Transducer coordinate system)
  • Ultrasound device control/status:
    • Image freeze state (get/set)
    • Depth (get/set)
    • Dynamic range (get/set)
    • Gain (get/set)
    • Selected probe (get/set)
    • Probe button (get): indicates if the user pressed "the button" on the ultrasound probe (for example on BK and Interson scanners)
  • Other device control/status:
  • Configuration:
    • Save configuration: write the current configuration to local disk (already implemented, using OpenIGTLink commands)
    • Send/receive configuration (send/receive entire device set configuration XML file)
    • Start/stop server (a background process would listen and would start a server instance with the received configuration file; this way the remote application could send the configuration file and the user would not be required to copy the configuration file to the acquisition computer and start a server instance using that)
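The clipping fan description above (origin, start/stop angle, start/stop radius) is sufficient to generate a binary mask, which is what a receiver typically needs before volume reconstruction. A numpy sketch, with angles measured from the +y (downward) image axis as an assumed convention:

```python
import numpy as np

def fan_mask(shape, origin, angle_start, angle_stop, r_start, r_stop):
    """Binary mask of an ultrasound fan region.

    shape: (rows, cols) of the image; origin: (x, y) fan apex in pixels;
    angles in radians, measured from the +y (downward) image axis;
    radii in pixels. The angle convention is an assumption for
    illustration; a standard would have to fix it explicitly.
    """
    rows, cols = shape
    yy, xx = np.mgrid[0:rows, 0:cols]
    dx = xx - origin[0]
    dy = yy - origin[1]
    r = np.hypot(dx, dy)
    theta = np.arctan2(dx, dy)  # 0 along +y, positive toward +x
    return ((r >= r_start) & (r <= r_stop) &
            (theta >= angle_start) & (theta <= angle_stop))
```

Because a handful of scalars reproduces the whole mask, sending the fan parameters per frame is far cheaper than sending a binary mask image, while an explicit mask stream remains the fallback for irregular geometries.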

TODO/Discuss

  • To be standardized (check how it's done in DICOM):
    • Coordinate system names (Image, Transducer, Probe)
    • Description of clipping rectangle/fan/binary mask
  • Interactive ultrasound and tracker simulator: to help development and testing (ultrasound simulator could use the one in Plus, tracker could use mouse and could support gestures, such as move pointer, pivot pointer, etc.)
  • Ultrasound control widget (display/set freeze, depth, etc.): to allow basic operation of an ultrasound scanner without showing its software's screen
  • Can we do demos of current implementations at the project week?

Agenda

  • Intro of anyone who joined the tcon for the first time
  • Presentation of each group's real-time data transfer needs (understand it better, discuss open points), take note of important tasks
  • Select tasks that we plan to work on during the project week