interpolate stack of images

I have a stack of images where the thickness of each slice is known, and there is a gap of known distance between adjacent slices.
I have used, and seen, image resampling used to create isotropic volumes from slices that are thicker but have no gap between one slice and the next.

Does anyone have a suggestion for an alternative to resampling, or does resampling solve the same problem whether or not there is a known gap?

Actually, this is a very simple task. You can create a grid transform that maps the 4 corners of each slice in the IJK coordinate system to the 4 corners in the LPS coordinate system, then resample the image using this transform. This is implemented in the AcquisitionModeling class in 3D Slicer’s DICOM importer to create a regular Cartesian volume from variable-spacing and/or tilted-gantry acquisitions.
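For illustration, here is a minimal sketch of the control points such a grid transform is built from, assuming axis-aligned slices with uniform thickness and gap. This is not Slicer’s actual AcquisitionModeling code; all names and values are made up.

```python
import numpy as np

# Illustrative geometry: axis-aligned slices with known thickness and gap.
slice_thickness, slice_gap = 2.0, 1.0   # mm
in_plane_spacing = 0.5                  # mm
cols, rows, num_slices = 256, 256, 40
origin = np.array([0.0, 0.0, 0.0])      # LPS position of voxel (0, 0, 0)

# (IJK corner, LPS corner) pairs: the control points of the grid transform.
corner_pairs = []
for k in range(num_slices):
    for j in (0, rows - 1):
        for i in (0, cols - 1):
            # In-plane, the usual pixel spacing applies; along the slice axis
            # the true center-to-center step is thickness + gap.
            lps = origin + [i * in_plane_spacing,
                            j * in_plane_spacing,
                            k * (slice_thickness + slice_gap)]
            corner_pairs.append(((i, j, k), tuple(lps)))
```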

For more general 3D image reconstruction from thick, intersecting, and/or overlapping slices or subvolumes, you need somewhat more complex methods. For example, you can use the Plus toolkit, which was developed for freehand 3D ultrasound reconstruction.

It looks like the grid transform is a VTK thing and not an ITK thing, so would I need to make my own ITK-like transform or use some kind of bridge functionality?

“Grid” transform in VTK is the same as “displacement field” transform in ITK.

Thank you. I attempted to modify the class to use sitk. Getting the displacement grid makes sense, but how to use it properly with a displacement field transform is confusing, and I can’t find a decent example. I will post some code tomorrow.

Hello @kayarre,

Example code for resampling with SimpleITK can be found in this Jupyter notebook. As a DisplacementFieldTransform is just a SimpleITK transform, the resampling functions will all accept it.
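As a rough, self-contained sketch of the call pattern (placeholder images and values, not the notebook’s code), resampling with a DisplacementFieldTransform could look like this:

```python
import SimpleITK as sitk

# Toy input: a 16x16x8 volume standing in for the real slice stack.
stack = sitk.Image([16, 16, 8], sitk.sitkFloat32)
stack.SetSpacing([1.0, 1.0, 2.0])

# Displacement field: vector pixels (sitkVectorFloat64 is required),
# one component per image dimension; all zeros here as a placeholder.
disp_field = sitk.Image([16, 16, 8], sitk.sitkVectorFloat64, 3)
disp_field.CopyInformation(stack)

# Reference image defining the output grid (size, spacing, origin, direction).
reference = sitk.Image(stack.GetSize(), stack.GetPixelID())
reference.CopyInformation(stack)

# The transform passed to Resample maps points of the output image into the
# input image's physical space. Note that DisplacementFieldTransform takes
# ownership of disp_field (the image is empty after construction).
transform = sitk.DisplacementFieldTransform(disp_field)
resampled = sitk.Resample(stack, reference, transform,
                          sitk.sitkLinear, 0.0, stack.GetPixelID())
```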

Thank you @zivy, I will try that. Maybe I don’t understand how to get the displacement field mapped correctly. I have a [2,2,slices] vectorFloat64 image with 3 components per pixel. If the input IJK image has the regular direction [1 0 0 0 1 0 0 0 1], is a displacement of only [0 0 z] required for each slice? Does it need to generate a dense deformation field from that? Are 4 corners all one needs?

When I resample, should the reference image be representative, in size and type, of the correct spacing I want for the output?

I was able to build an image volume with the proper spacing, but it has blank slices in the gaps instead of interpolated images in between.

I will give it another whirl, thank you so much @lassoan as well.

The thing about the Slicer example is that it’s using the Slicer API, and it registers the transform so things are updated once the filter is called, I think.

[2,2,slices] displacement vectors are all you need. ITK interpolates between these vectors automatically.

yes

Make sure the position vectors in your [2,2,slices] vector volume correspond to the corners of your image slices. Use the inverse of this transform to resample your image volume.
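To make this concrete, here is a hedged SimpleITK sketch for the uniform thickness + gap case (illustrative names and values; it assumes the stack was loaded with its z-spacing set to the slice thickness). The coarse 2x2xN field encodes the inverse mapping directly, so the displacement at slice k is simply (0, 0, -k*gap):

```python
import numpy as np
import SimpleITK as sitk

slice_thickness = 2.0  # mm
slice_gap = 1.0        # mm

# Toy stack whose metadata pretends the slices are contiguous
# (z-spacing = slice thickness only).
stack = sitk.Image([64, 64, 10], sitk.sitkFloat32)
stack.SetSpacing([1.0, 1.0, slice_thickness])
num_slices = stack.GetSize()[2]

# Output (reference) grid: same in-plane geometry, slice centers placed at
# their true physical positions, thickness + gap apart, finer z sampling.
out_z_spacing = 1.0
out_z_size = int(round((num_slices - 1) * (slice_thickness + slice_gap) / out_z_spacing)) + 1
reference = sitk.Image([64, 64, out_z_size], sitk.sitkFloat32)
reference.SetSpacing([1.0, 1.0, out_z_spacing])

# Coarse 2x2xN displacement field: one vector per slice corner is enough,
# ITK interpolates in between. The resampling transform maps output points
# back into the input stack, so at true slice position k*(thickness+gap)
# the displacement along z is -k*gap (the inverse of the IJK->LPS mapping).
field = np.zeros((num_slices, 2, 2, 3), dtype=np.float64)
for k in range(num_slices):
    field[k, :, :, 2] = -k * slice_gap
disp = sitk.GetImageFromArray(field, isVector=True)
disp.SetOrigin([-0.5, -0.5, 0.0])                             # cover the in-plane extent
disp.SetSpacing([64.0, 64.0, slice_thickness + slice_gap])    # grid nodes at slice centers

transform = sitk.DisplacementFieldTransform(disp)  # takes ownership of disp
resampled = sitk.Resample(stack, reference, transform, sitk.sitkLinear)
```

With linear interpolation, voxels that fall in the gaps are then blended from the two neighboring slices instead of being left blank.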

Is there any particular issue that prevents you from using this feature in Slicer?

The class designed for this is itk::SliceSeriesSpecialCoordinatesImage provided by the ITKUltrasound module. An example of how it is used to resample into an itk::Image uniform grid can be found in SlicerITKUltrasound.

@lassoan
This is really helpful information. I think I was missing the inverse transform part.

I wasn’t using Slicer, as it’s one more package in the workflow that I will struggle to manage, but I suppose I can go for it. Ideally I would like to stay in Python script land. The last time I was using Slicer, I found I needed to manage different Python environments, with different scripts for those environments. Is Slicer’s API at a point where one could do something like import slicer and not have to do environment setup through an initialization script?

P.S. It’s hard to express how grateful I am for everyone’s input, opinions, insight, and help with all my questions.

Thanks for the feedback. Being able to just import slicer would certainly be nice. However, it would require developing all logic classes in a way that they work correctly both in Slicer and in any Python executable. Making everything work in an uncontrolled third-party executable would be complicated, as we would not have control over the event loop, standard in/out, environment, threads, OpenGL contexts, etc. That is probably why other applications that embed Python (Blender, FreeCAD, Cinema4D, etc.) allow importing only a small feature set as a Python module, and even that is mostly experimental, not officially supported.

We may make part of the Slicer core available as a Python module in the future, but for now we would rather focus on providing third-party integration in other ways: making it easy to pip install any package (so that you can run anything inside Slicer), running external scripts (which allows reusing the same Python scripts inside and outside Slicer), using Slicer as a Jupyter kernel (which allows running Jupyter notebooks in Slicer and interacting with the data in the application’s interactive GUI, all in a single process), using Slicer from a web browser, running it in a Docker container, etc.

I agree with these goals. I also think something that is being done is pushing some functionality down to other libraries, e.g., oriented images in VTK. Not that VTK/ITK should get everything, as they’re already “huge”, but maybe import slicer.core would be cool.

I am curious about ways to manage complexity and simplicity in software. Conceptually it seems simple, but not in practice. I think it likely comes down to implementing what aligns with the project goals. “It’s all hard; some paths are less hard and many are equally hard.”

@matt.mccormick
I am not sure the first link to the ITK class is correct; it links to itkJensenHavrdaCharvatTsallisPointSetToPointSetMetricv4.h.

I found it here: https://github.com/KitwareMedical/ITKUltrasound/blob/master/include/itkSliceSeriesSpecialCoordinatesImage.hxx

@kayarre thanks, I updated the link.