<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.na-mic.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Julia-Rackerseder</id>
	<title>NAMIC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.na-mic.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Julia-Rackerseder"/>
	<link rel="alternate" type="text/html" href="https://www.na-mic.org/wiki/Special:Contributions/Julia-Rackerseder"/>
	<updated>2026-04-10T13:04:44Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.33.0</generator>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Project_Week_25/Segmentation_for_improving_image_registration_of_preoperative_MRI_with_intraoperative_ultrasound_images_for_neuro-navigation&amp;diff=96752</id>
		<title>Project Week 25/Segmentation for improving image registration of preoperative MRI with intraoperative ultrasound images for neuro-navigation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Project_Week_25/Segmentation_for_improving_image_registration_of_preoperative_MRI_with_intraoperative_ultrasound_images_for_neuro-navigation&amp;diff=96752"/>
		<updated>2017-06-28T10:16:29Z</updated>

		<summary type="html">&lt;p&gt;Julia-Rackerseder: /* Project Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Back to [[Project_Week_25#Projects|Projects List]]&lt;br /&gt;
&lt;br /&gt;
==Key Investigators==&lt;br /&gt;
*[http://www.mic.uni-bremen.de/jennifer-nitsch/ Jennifer Nitsch] (University of Bremen, Germany)&lt;br /&gt;
*[http://www.mic.uni-bremen.de/cmt-management-team/scheherazade-kras/ Scheherazade Kraß] (University of Bremen, Germany)&lt;br /&gt;
*[http://campar.in.tum.de/Main/JuliaRackerseder Julia Rackerseder] (Technical University of Munich, Germany)&lt;br /&gt;
&lt;br /&gt;
==Project Description==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align: left; width:27%&amp;quot; |   Objective&lt;br /&gt;
! style=&amp;quot;text-align: left; width:27%&amp;quot; |   Approach and Plan&lt;br /&gt;
! style=&amp;quot;text-align: left; width:27%&amp;quot; |   Progress and Next Steps&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
&amp;lt;!-- Objective bullet points --&amp;gt;&lt;br /&gt;
*Segment multiple anatomical structures/landmarks in both MRI and ultrasound (US) images using machine learning algorithms (the applicability of deep learning algorithms is currently being tested, e.g., DL for US images, data augmentation, ...).&lt;br /&gt;
&lt;br /&gt;
*The next step would be to analyze the improvement of registration quality with different segmentations/generated landmarks in order to adapt the segmentation algorithms.&lt;br /&gt;
&lt;br /&gt;
*Using advanced segmentation and registration features for iterative robot control. The respective status of the Kuka LWR iiwa (position, configuration) is simulated and visualized as a 3-D model in MeVisLab. Medical data sets, including the target pose (position and path towards it), can be sent and received via the OpenIGTLink network protocol as well.&lt;br /&gt;
&lt;br /&gt;
|&lt;br /&gt;
&amp;lt;!-- Approach and Plan bullet points --&amp;gt;&lt;br /&gt;
*Starting from multi-modal image segmentation in preoperative MRI and intraoperative ultrasound images,&lt;br /&gt;
the plan is to discuss in which &amp;quot;form&amp;quot; one or multiple segmented structures should influence the registration result.&lt;br /&gt;
|&lt;br /&gt;
&amp;lt;!-- Progress and Next steps (fill out at the end of project week), Please start each sentence in a new line. --&amp;gt;&lt;br /&gt;
*Working with the public RESECT dataset [http://sintef.no/projectweb/usigt-en/data/]&lt;br /&gt;
*Using segmentations to register MRI with US&lt;br /&gt;
*Locally refine with image-based registration&lt;br /&gt;
*The Plastimatch B-spline deformable registration module works well for image-based registration [https://www.slicer.org/wiki/Documentation/4.6/Modules/PlmBSplineDeformableRegistration]&lt;br /&gt;
&lt;br /&gt;
TODO:&lt;br /&gt;
*Implement the LC2 metric to improve image-based registration between US and MRI (Wein, Wolfgang, et al. &amp;quot;Global registration of ultrasound to MRI using the LC2 metric for enabling neurosurgical guidance.&amp;quot; International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Berlin, Heidelberg, 2013.)&lt;br /&gt;
*Create module to load RESECT dataset including landmarks for ground truth&lt;br /&gt;
|}&lt;br /&gt;
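The Plastimatch B-spline registration listed under progress can also be driven outside Slicer via a `plastimatch register` command file. A minimal sketch is shown below; the file names are placeholders and the stage parameters (metric, iterations, grid spacing) are illustrative defaults, not the settings actually used in this project:

```ini
; run as: plastimatch register parms.txt
[GLOBAL]
fixed=preop_mri.nrrd
moving=intraop_us.nrrd
img_out=us_warped_to_mri.nrrd
xform_out=bspline_coeff.txt

[STAGE]
xform=bspline
impl=plastimatch
metric=mse
max_its=100
grid_spac=30 30 30
res=2 2 2
```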
&lt;br /&gt;
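The LC2 metric in the TODO list can be sketched as follows. This is a minimal 2-D NumPy illustration of the idea from Wein et al. (2013): within each local patch, the ultrasound intensities are fitted as a linear combination of MRI intensity and MRI gradient magnitude, and the metric is the variance-weighted fraction of ultrasound variance explained by that fit. The function names, the non-overlapping square patches, and the plain least-squares formulation are our simplifications; the original method works in 3-D with overlapping, weighted patches.

```python
import numpy as np

def lc2_patch(us, mri, mri_grad):
    """LC2 for one patch: fraction of US variance explained by a linear
    fit of MRI intensity and MRI gradient magnitude (cf. Wein et al. 2013)."""
    u = us.ravel()
    var_u = u.var()
    if var_u < 1e-12:
        return 0.0, 0.0  # flat US patch carries no information
    # Design matrix: MRI intensity, MRI gradient magnitude, constant offset.
    A = np.stack([mri.ravel(), mri_grad.ravel(), np.ones(u.size)], axis=1)
    coef, *_ = np.linalg.lstsq(A, u, rcond=None)
    resid = u - A @ coef
    # With an intercept in the model this is the R^2 of the fit, in [0, 1].
    lc2 = 1.0 - resid.var() / var_u
    return lc2, var_u  # patch variance doubles as the patch weight

def lc2_similarity(us_img, mri_img, patch=9):
    """Variance-weighted average of patchwise LC2 over an image pair."""
    gy, gx = np.gradient(mri_img.astype(float))
    grad = np.hypot(gx, gy)
    vals, weights = [], []
    for y in range(0, us_img.shape[0] - patch + 1, patch):
        for x in range(0, us_img.shape[1] - patch + 1, patch):
            s = (slice(y, y + patch), slice(x, x + patch))
            v, w = lc2_patch(us_img[s], mri_img[s], grad[s])
            vals.append(v)
            weights.append(w)
    weights = np.asarray(weights)
    if weights.sum() == 0:
        return 0.0
    return float(np.average(vals, weights=weights))
```

A registration would maximize this similarity over transform parameters; here it only serves to show how segmentation-independent intensity information links US and MRI.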
==Illustrations==&lt;br /&gt;
[[image:RegistrationInNeuroNavigationSystem.png|500px]]&lt;br /&gt;
&lt;br /&gt;
Using segmented structures as guiding frame for multi-modal image registration:&lt;br /&gt;
&lt;br /&gt;
[[image:MultimodalImageSegmentation3.png|500px]]&lt;br /&gt;
&lt;br /&gt;
LWR Robot simulation in MeVisLab:&lt;br /&gt;
&lt;br /&gt;
[[image:Picture 2016-12-19 13 55 31.png|500px]]  &lt;br /&gt;
[[image:Picture 2017-06-26 11 04 59.png|500px]] &lt;br /&gt;
&lt;br /&gt;
==Background and References==&lt;br /&gt;
&amp;lt;!-- Use this space for information that may help people better understand your project, like links to papers, source code, or data --&amp;gt;&lt;br /&gt;
In glioma surgery, neuronavigation systems assist in determining the tumor's location and estimating its extent. However, the intraoperative situation diverges considerably from the preoperative situation in the MRI scan&lt;br /&gt;
displayed on the navigation system. The movement of brain tissue during surgery, e.g., caused by brain shift&lt;br /&gt;
and tissue removal, must be compensated for mentally by the surgeon, a task that becomes more challenging in later&lt;br /&gt;
phases of the tumor resection.&lt;br /&gt;
&lt;br /&gt;
Moreover, this is mentally exhausting, and the shift of cerebral structures must be&lt;br /&gt;
expected to be non-uniform, implying a deformation of the image data. This makes it especially hard&lt;br /&gt;
to mentally predict and model.&lt;br /&gt;
&lt;br /&gt;
Thus, intraoperative imaging modalities are used to visualize the current intraoperative situation. Intraoperative ultrasound (iUS),&lt;br /&gt;
for instance, is easy to use intraoperatively, offers real-time information, is widely available at low cost, and&lt;br /&gt;
involves no ionizing radiation. These are important advantages of iUS compared with iCT or iMRI. However, in&lt;br /&gt;
image-guided surgery, precise image registration of iUS and preoperative MRI, and the image fusion based on it, is still&lt;br /&gt;
an unsolved problem. The different representations of cerebral structures in the two modalities, as well as&lt;br /&gt;
artifacts within the iUS, hinder direct fusion of both modalities.&lt;/div&gt;</summary>
		<author><name>Julia-Rackerseder</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Project_Week_25/Segmentation_for_improving_image_registration_of_preoperative_MRI_with_intraoperative_ultrasound_images_for_neuro-navigation&amp;diff=96751</id>
		<title>Project Week 25/Segmentation for improving image registration of preoperative MRI with intraoperative ultrasound images for neuro-navigation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Project_Week_25/Segmentation_for_improving_image_registration_of_preoperative_MRI_with_intraoperative_ultrasound_images_for_neuro-navigation&amp;diff=96751"/>
		<updated>2017-06-28T10:00:44Z</updated>

		<summary type="html">&lt;p&gt;Julia-Rackerseder: /* Project Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Back to [[Project_Week_25#Projects|Projects List]]&lt;br /&gt;
&lt;br /&gt;
==Key Investigators==&lt;br /&gt;
*[http://www.mic.uni-bremen.de/jennifer-nitsch/ Jennifer Nitsch] (University of Bremen, Germany)&lt;br /&gt;
*[http://www.mic.uni-bremen.de/cmt-management-team/scheherazade-kras/ Scheherazade Kraß] (University of Bremen, Germany)&lt;br /&gt;
*[http://campar.in.tum.de/Main/JuliaRackerseder Julia Rackerseder] (Technical University of Munich, Germany)&lt;br /&gt;
&lt;br /&gt;
==Project Description==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align: left; width:27%&amp;quot; |   Objective&lt;br /&gt;
! style=&amp;quot;text-align: left; width:27%&amp;quot; |   Approach and Plan&lt;br /&gt;
! style=&amp;quot;text-align: left; width:27%&amp;quot; |   Progress and Next Steps&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
&amp;lt;!-- Objective bullet points --&amp;gt;&lt;br /&gt;
*Segment multiple anatomical structures/landmarks in both MRI and ultrasound (US) images using machine learning algorithms (the applicability of deep learning algorithms is currently being tested, e.g., DL for US images, data augmentation, ...).&lt;br /&gt;
&lt;br /&gt;
*The next step would be to analyze the improvement of registration quality with different segmentations/generated landmarks in order to adapt the segmentation algorithms.&lt;br /&gt;
&lt;br /&gt;
*Using advanced segmentation and registration features for iterative robot control. The respective status of the Kuka LWR iiwa (position, configuration) is simulated and visualized as a 3-D model in MeVisLab. Medical data sets, including the target pose (position and path towards it), can be sent and received via the OpenIGTLink network protocol as well.&lt;br /&gt;
&lt;br /&gt;
|&lt;br /&gt;
&amp;lt;!-- Approach and Plan bullet points --&amp;gt;&lt;br /&gt;
*Starting from multi-modal image segmentation in preoperative MRI and intraoperative ultrasound images,&lt;br /&gt;
the plan is to discuss in which &amp;quot;form&amp;quot; one or multiple segmented structures should influence the registration result.&lt;br /&gt;
|&lt;br /&gt;
&amp;lt;!-- Progress and Next steps (fill out at the end of project week), Please start each sentence in a new line. --&amp;gt;&lt;br /&gt;
*Working with the public RESECT dataset [http://sintef.no/projectweb/usigt-en/data/]&lt;br /&gt;
*Using segmentations to register MRI with US&lt;br /&gt;
*Locally refine with image-based registration&lt;br /&gt;
&lt;br /&gt;
TODO:&lt;br /&gt;
*Implement the LC2 metric to improve image-based registration between US and MRI (Wein, Wolfgang, et al. &amp;quot;Global registration of ultrasound to MRI using the LC2 metric for enabling neurosurgical guidance.&amp;quot; International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Berlin, Heidelberg, 2013.)&lt;br /&gt;
*Create module to load RESECT dataset including landmarks for ground truth&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Illustrations==&lt;br /&gt;
[[image:RegistrationInNeuroNavigationSystem.png|500px]]&lt;br /&gt;
&lt;br /&gt;
Using segmented structures as guiding frame for multi-modal image registration:&lt;br /&gt;
&lt;br /&gt;
[[image:MultimodalImageSegmentation3.png|500px]]&lt;br /&gt;
&lt;br /&gt;
LWR Robot simulation in MeVisLab:&lt;br /&gt;
&lt;br /&gt;
[[image:Picture 2016-12-19 13 55 31.png|500px]]  &lt;br /&gt;
[[image:Picture 2017-06-26 11 04 59.png|500px]] &lt;br /&gt;
&lt;br /&gt;
==Background and References==&lt;br /&gt;
&amp;lt;!-- Use this space for information that may help people better understand your project, like links to papers, source code, or data --&amp;gt;&lt;br /&gt;
In glioma surgery, neuronavigation systems assist in determining the tumor's location and estimating its extent. However, the intraoperative situation diverges considerably from the preoperative situation in the MRI scan&lt;br /&gt;
displayed on the navigation system. The movement of brain tissue during surgery, e.g., caused by brain shift&lt;br /&gt;
and tissue removal, must be compensated for mentally by the surgeon, a task that becomes more challenging in later&lt;br /&gt;
phases of the tumor resection.&lt;br /&gt;
&lt;br /&gt;
Moreover, this is mentally exhausting, and the shift of cerebral structures must be&lt;br /&gt;
expected to be non-uniform, implying a deformation of the image data. This makes it especially hard&lt;br /&gt;
to mentally predict and model.&lt;br /&gt;
&lt;br /&gt;
Thus, intraoperative imaging modalities are used to visualize the current intraoperative situation. Intraoperative ultrasound (iUS),&lt;br /&gt;
for instance, is easy to use intraoperatively, offers real-time information, is widely available at low cost, and&lt;br /&gt;
involves no ionizing radiation. These are important advantages of iUS compared with iCT or iMRI. However, in&lt;br /&gt;
image-guided surgery, precise image registration of iUS and preoperative MRI, and the image fusion based on it, is still&lt;br /&gt;
an unsolved problem. The different representations of cerebral structures in the two modalities, as well as&lt;br /&gt;
artifacts within the iUS, hinder direct fusion of both modalities.&lt;/div&gt;</summary>
		<author><name>Julia-Rackerseder</name></author>
		
	</entry>
	<entry>
		<id>https://www.na-mic.org/w/index.php?title=Project_Week_25/Segmentation_for_improving_image_registration_of_preoperative_MRI_with_intraoperative_ultrasound_images_for_neuro-navigation&amp;diff=96711</id>
		<title>Project Week 25/Segmentation for improving image registration of preoperative MRI with intraoperative ultrasound images for neuro-navigation</title>
		<link rel="alternate" type="text/html" href="https://www.na-mic.org/w/index.php?title=Project_Week_25/Segmentation_for_improving_image_registration_of_preoperative_MRI_with_intraoperative_ultrasound_images_for_neuro-navigation&amp;diff=96711"/>
		<updated>2017-06-26T13:24:30Z</updated>

		<summary type="html">&lt;p&gt;Julia-Rackerseder: /* Key Investigators */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Back to [[Project_Week_25#Projects|Projects List]]&lt;br /&gt;
&lt;br /&gt;
==Key Investigators==&lt;br /&gt;
*[http://www.mic.uni-bremen.de/jennifer-nitsch/ Jennifer Nitsch] (University of Bremen, Germany)&lt;br /&gt;
*[http://www.mic.uni-bremen.de/cmt-management-team/scheherazade-kras/ Scheherazade Kraß] (University of Bremen, Germany)&lt;br /&gt;
*[http://campar.in.tum.de/Main/JuliaRackerseder Julia Rackerseder] (Technical University of Munich, Germany)&lt;br /&gt;
&lt;br /&gt;
==Project Description==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align: left; width:27%&amp;quot; |   Objective&lt;br /&gt;
! style=&amp;quot;text-align: left; width:27%&amp;quot; |   Approach and Plan&lt;br /&gt;
! style=&amp;quot;text-align: left; width:27%&amp;quot; |   Progress and Next Steps&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
&amp;lt;!-- Objective bullet points --&amp;gt;&lt;br /&gt;
*Segment multiple anatomical structures/landmarks in both MRI and ultrasound (US) images using machine learning algorithms (the applicability of deep learning algorithms is currently being tested, e.g., DL for US images, data augmentation, ...).&lt;br /&gt;
&lt;br /&gt;
*The next step would be to analyze the improvement of registration quality with different segmentations/generated landmarks in order to adapt the segmentation algorithms.&lt;br /&gt;
&lt;br /&gt;
*Using advanced segmentation and registration features for iterative robot control. The respective status of the Kuka LWR iiwa (position, configuration) is simulated and visualized as a 3-D model in MeVisLab. Medical data sets, including the target pose (position and path towards it), can be sent and received via the OpenIGTLink network protocol as well.&lt;br /&gt;
&lt;br /&gt;
|&lt;br /&gt;
&amp;lt;!-- Approach and Plan bullet points --&amp;gt;&lt;br /&gt;
*Starting from multi-modal image segmentation in preoperative MRI and intraoperative ultrasound images,&lt;br /&gt;
the plan is to discuss in which &amp;quot;form&amp;quot; one or multiple segmented structures should influence the registration result.&lt;br /&gt;
|&lt;br /&gt;
&amp;lt;!-- Progress and Next steps (fill out at the end of project week), Please start each sentence in a new line. --&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Illustrations==&lt;br /&gt;
[[image:RegistrationInNeuroNavigationSystem.png|500px]]&lt;br /&gt;
&lt;br /&gt;
Using segmented structures as guiding frame for multi-modal image registration:&lt;br /&gt;
&lt;br /&gt;
[[image:MultimodalImageSegmentation3.png|500px]]&lt;br /&gt;
&lt;br /&gt;
LWR Robot simulation in MeVisLab:&lt;br /&gt;
&lt;br /&gt;
[[image:Picture 2016-12-19 13 55 31.png|500px]]  &lt;br /&gt;
[[image:Picture 2017-06-26 11 04 59.png|500px]] &lt;br /&gt;
&lt;br /&gt;
==Background and References==&lt;br /&gt;
&amp;lt;!-- Use this space for information that may help people better understand your project, like links to papers, source code, or data --&amp;gt;&lt;br /&gt;
In glioma surgery, neuronavigation systems assist in determining the tumor's location and estimating its extent. However, the intraoperative situation diverges considerably from the preoperative situation in the MRI scan&lt;br /&gt;
displayed on the navigation system. The movement of brain tissue during surgery, e.g., caused by brain shift&lt;br /&gt;
and tissue removal, must be compensated for mentally by the surgeon, a task that becomes more challenging in later&lt;br /&gt;
phases of the tumor resection.&lt;br /&gt;
&lt;br /&gt;
Moreover, this is mentally exhausting, and the shift of cerebral structures must be&lt;br /&gt;
expected to be non-uniform, implying a deformation of the image data. This makes it especially hard&lt;br /&gt;
to mentally predict and model.&lt;br /&gt;
&lt;br /&gt;
Thus, intraoperative imaging modalities are used to visualize the current intraoperative situation. Intraoperative ultrasound (iUS),&lt;br /&gt;
for instance, is easy to use intraoperatively, offers real-time information, is widely available at low cost, and&lt;br /&gt;
involves no ionizing radiation. These are important advantages of iUS compared with iCT or iMRI. However, in&lt;br /&gt;
image-guided surgery, precise image registration of iUS and preoperative MRI, and the image fusion based on it, is still&lt;br /&gt;
an unsolved problem. The different representations of cerebral structures in the two modalities, as well as&lt;br /&gt;
artifacts within the iUS, hinder direct fusion of both modalities.&lt;/div&gt;</summary>
		<author><name>Julia-Rackerseder</name></author>
		
	</entry>
</feed>