Project Week 25: Next Generation GPU Volume Rendering


Back to Projects List

Key Investigators

Project Description

Objective

Develop a specification for the next generation of GPU volume processing and rendering in VTK and Slicer

The specification should support:

  • Multiple input volumes for ray-casting and slice-based volume rendering
  • GPU-based volume slicing and compositing
  • Programmable volume rendering shader
  • Smart management of volumes in GPU memory
  • Possibility to run simple processing operations on the volume (e.g. non-linear transforms)
  • Consider possible complications
    • GPU features not available / software-only fallback
    • Volumes too big for GPU memory

Approach and Plan

Compare the architectures of different existing projects where parts of the required functionality have been implemented:

Determine a sensible way to integrate all those contributions in VTK.

Progress and Next Steps

  • Minutes of the meeting held during project week are at the bottom of this page
  • In summary, for the first iteration of the new GPU raycasting:
    • The latest VTK may provide sufficient hooks to modify the GLSL of the GPU raycast volume shader (see the sketch after this list).
    • The group will provide simple examples that should be easy to implement with the new functionality and verify that this is indeed the case. If not, we will create a pull request adding the hooks needed to inject GLSL code into the GPU raycast shader.
    • Creating the shader code may still be complicated. Kitware will propose a high-level GLSL API to facilitate the process and will submit it to this group for feedback.
    • Multiple volume support is in development at Kitware.
    • Kitware will consider enabling context sharing across mappers and widgets. SINTEF will create a pull request in VTK with their implementation of the concept.
  • New VTK-m may be a solution for GPU volume processing in the long term.
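
As a concrete illustration of the hook being discussed, the sketch below injects one line of GLSL into the GPU raycast fragment shader through the vtkShaderProperty mechanism that newer VTK releases expose (it was not available in VTK 8.0 at the time of the meeting, which is exactly what the group set out to verify). The shader tag //VTK::Shading::Impl, the g_fragColor variable, and the SetShaderProperty call are assumptions taken from memory of VTK's raycasterfs.glsl template and should be checked against the installed version.

  import vtk

  # Standard GPU raycast setup (data source and transfer functions omitted).
  mapper = vtk.vtkGPUVolumeRayCastMapper()
  volume = vtk.vtkVolume()
  volume.SetMapper(mapper)

  # vtkShaderProperty lets us splice GLSL around named tags in the template
  # raycast shader without patching VTK itself.
  shader_property = vtk.vtkShaderProperty()
  shader_property.AddFragmentShaderReplacement(
      "//VTK::Shading::Impl",                  # tag to hook (assumed name)
      True,                                    # replace the first occurrence
      "//VTK::Shading::Impl\n"                 # keep the original code...
      "  g_fragColor.rgb *= vec3(0.9, 0.95, 1.0);\n",  # ...then tint it
      False)                                   # do not replace all occurrences
  volume.SetShaderProperty(shader_property)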

Illustrations

https://www.dropbox.com/s/bqg1fyd42z6lawn/prism-demo-simon-drouin-imic-2016.mp4?dl=0

3D Image Filters in WebGL2

Multivolume rendering and nonlinear transforms in WebGL2

Background and References

Some notes about sharing GLSL code between desktop OpenGL and WebGL
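
One common way to share GLSL between the two targets, sketched below as an assumption rather than a summary of the notes linked above, is to keep a single shader body and prepend a per-target preamble: desktop GL 3.3 takes "#version 330 core", while WebGL2 (GLSL ES 3.00) takes "#version 300 es" plus explicit default precisions.

  # One shared fragment-shader body, two preambles.
  SHARED_BODY = """
  in vec3 vTexCoord;
  uniform sampler3D uVolume;
  out vec4 fragColor;

  void main()
  {
      float intensity = texture(uVolume, vTexCoord).r;
      fragColor = vec4(vec3(intensity), 1.0);
  }
  """

  DESKTOP_PREAMBLE = "#version 330 core\n"
  WEBGL2_PREAMBLE = ("#version 300 es\n"
                     "precision highp float;\n"
                     "precision highp sampler3D;\n")

  def build_fragment_shader(target):
      """Return the full fragment shader source for 'desktop' or 'webgl2'."""
      preamble = DESKTOP_PREAMBLE if target == "desktop" else WEBGL2_PREAMBLE
      return preamble + SHARED_BODY

  print(build_fragment_shader("webgl2"))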

Schott M, Pascal Grosset AV, Martin T, Pegoraro V, Smith ST, Hansen CD. Depth of Field Effects for Interactive Direct Volume Rendering. Comput Graph Forum. 2011 Jun;30(3):941-50.

Volumetric shadows and light scattering:

Kniss J, Premoze S, Hansen C, Shirley P, McPherson A. A model for volume lighting and modeling. IEEE Trans Vis Comput Graph. 2003;9(2):150–62.

Ambient occlusion (this one should be possible to do with ray casting, but not as efficiently):

Hernell F, Ljung P, Ynnerman A. Local Ambient Occlusion in Direct Volume Rendering. IEEE Trans Vis Comput Graph. 2010 Jul;16(4):548–59.

Planning hangout June 13

(Simon, Jc, Alvaro, Sankhesh, Steve, Hina)

Recent features in VTK Volume Rendering (in master, blog post coming)

  • Volume peeling - translucent geometry with volumes in the GPU ray cast mapper (see the sketch after this list)
  • Render to texture
  • 2D lookup tables (value and gradient magnitude)
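
To make the volume peeling item concrete, a minimal sketch of the renderer settings involved follows; the volume, its transfer functions, and the translucent geometry are omitted, and the availability of SetUseDepthPeelingForVolumes should be checked against the VTK version in use.

  import vtk

  render_window = vtk.vtkRenderWindow()
  renderer = vtk.vtkRenderer()
  render_window.AddRenderer(renderer)

  # Depth peeling prerequisites: alpha bit planes on, multisampling off.
  render_window.SetAlphaBitPlanes(True)
  render_window.SetMultiSamples(0)

  # Peel translucent geometry, and also peel volumes so they composite
  # correctly with that geometry in the GPU ray cast mapper.
  renderer.SetUseDepthPeeling(True)
  renderer.SetUseDepthPeelingForVolumes(True)
  renderer.SetMaximumNumberOfPeels(100)
  renderer.SetOcclusionRatio(0.0)

  # ...add a vtkVolume (GPU ray cast mapper) and a translucent vtkActor here...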

Work in progress

  • Overlapping volumes - multiple inputs to mapper
  • VTK charts to work with 2D transfer functions

Slicer to migrate to the latest version once the CMake hierarchy is sorted out (Jc)

Questions / discussion points:

  • Ray cast vs view aligned plane-based algorithms
    • depth of focus, shadows, diffuse lighting...
    • How to integrate multiple features
  • Nonlinear transformation
  • Custom shaders
  • Dynamic shader generation in python
  • Multiple components
    • RGBA
    • Independent components?
  • 2D lookup tables
  • Volumes that live on the GPU
    • sharing across contexts
    • use as input textures and render targets
    • tiling?
    • streaming?
  • Large volumes?
  • Mesa backend?
    • Offscreen support for OpenGL2 backend should work in VTK now
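
Regarding the last point, offscreen rendering with the OpenGL2 backend can be exercised with the sketch below; with a Mesa/llvmpipe build of VTK the same code runs without a display (assuming the build was configured with offscreen support).

  import vtk

  render_window = vtk.vtkRenderWindow()
  render_window.SetOffScreenRendering(1)   # no on-screen window needed
  renderer = vtk.vtkRenderer()
  render_window.AddRenderer(renderer)

  # Any prop will do; a sphere keeps the example self-contained.
  source = vtk.vtkSphereSource()
  mapper = vtk.vtkPolyDataMapper()
  mapper.SetInputConnection(source.GetOutputPort())
  actor = vtk.vtkActor()
  actor.SetMapper(mapper)
  renderer.AddActor(actor)
  render_window.Render()

  # Grab the framebuffer and write it to disk.
  window_to_image = vtk.vtkWindowToImageFilter()
  window_to_image.SetInput(render_window)
  writer = vtk.vtkPNGWriter()
  writer.SetFileName("offscreen.png")
  writer.SetInputConnection(window_to_image.GetOutputPort())
  writer.Write()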

Hangout at Project Week

  • Attendees: Alvaro, Sankhesh, Aashish, Andras, Steve, Jc, Simon, Ole
  • Open Questions:
    • What is best to accomplish at Project Week?
    • What architecture is best for the future and how can we get there?
  • Alvaro announced that volume peeling supports the OpenGL render pass, which could support custom shaders
    • two calls: Prereplace and Postreplace (as in vtkPolyDataMapper; a minimal example of this pattern follows this list)
    • the newer OpenGL2 backend is much more flexible than the older OpenGL1 version
    • allows the user to replace parts of the rendering process
  • Need to be able to change
    • per-sample ray accumulating value
    • compositing function
    • initialization of ray path parameters
    • control over when ray terminates
    • uniforms
    • textures
  • RenderPath vs RenderTags example?
    • the pattern is used in the VTK volume rendering mapper
    • the existing implementation serves as a working example for customization
    • may need to define new spots to inject code (TBD)
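
A minimal example of the Prereplace/Postreplace pattern on vtkOpenGLPolyDataMapper, as referenced above: the replacement echoes the //VTK::Color::Impl tag back so the original code is kept, then appends one line that overrides the diffuseColor variable defined by the template fragment shader.

  import vtk

  mapper = vtk.vtkOpenGLPolyDataMapper()
  mapper.SetInputConnection(vtk.vtkSphereSource().GetOutputPort())

  # Hook the fragment shader at the color-computation tag.
  mapper.AddShaderReplacement(
      vtk.vtkShader.Fragment,
      "//VTK::Color::Impl",            # tag in the template shader
      True,                            # replace the first occurrence
      "//VTK::Color::Impl\n"           # keep the original code...
      "  diffuseColor = vec3(0.4, 0.7, 1.0);\n",  # ...then override the color
      False)                           # leave later occurrences alone

  actor = vtk.vtkActor()
  actor.SetMapper(mapper)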

Debugging

    • Pattern to add unique identifiers to shader variables
    • There are VTK hooks to help debug
    • Access to compiler errors and the full shader source
    • Can use the apitrace program (Linux and Mac) to track OpenGL state

How to make it easier to customize?

    • e.g. coloring samples by depth for 4D US
    • high-level API? (a hypothetical sketch follows this list)
    • set of worked-out examples for common use cases
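
As input to the high-level API question, a purely hypothetical wrapper is sketched below; nothing like it exists in VTK, and it merely bundles the vtkShaderProperty calls shown earlier on this page so that a customization such as tinting samples needs a single call.

  import vtk

  class VolumeShaderTweaks:
      """Hypothetical convenience layer; not part of VTK."""

      def __init__(self, volume):
          self._shader_property = vtk.vtkShaderProperty()
          volume.SetShaderProperty(self._shader_property)

      def append_fragment_code(self, tag, glsl):
          # Keep the original code at the tag, then append the custom GLSL.
          self._shader_property.AddFragmentShaderReplacement(
              tag, True, tag + "\n" + glsl, False)

  # Usage sketch (constant tint only; a real depth tint for 4D US would use
  # the mapper's internal ray-parameter variables, whose names vary between
  # VTK versions):
  # tweaks = VolumeShaderTweaks(volume)
  # tweaks.append_fragment_code("//VTK::Shading::Impl",
  #                             "  g_fragColor.rgb *= vec3(1.0, 0.9, 0.9);")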

Next steps

  • At project week:
    • collect examples of things that should be simple
    • provide feedback and suggestions to the VTK developers
  • At Kitware:
    • propose a high level API for feedback
    • multiple overlapping volumes
    • look at sharing volumes across mappers (class to represent a volume in memory)
    • shared GL context (see CustusX for a worked-out example of a "hacked in" way to support this)

Long-Term

  • Be able to create volume processing pipelines that connect CPU and GPU processing seamlessly
  • Investigate the capabilities of VTK-m
Addendum

  • VTK 8.0.0 release notes: https://blog.kitware.com/vtk-8-0-0/
  • VTK-m: http://m.vtk.org/