When I render my demons displacement field as polygonal arrows, they point backwards. They also point backwards if I manually inspect the values of the image. So I've started to wonder whether demons even produces a forward displacement, as the documentation led me to believe. It seems as though something is fundamentally wrong with my understanding, but maybe it's just a bug.
The documentation is a bit vague, saying only that it produces a field that maps the moving image to the fixed image. That is confusing, because it depends entirely on how you apply the field, i.e. some function exists that would produce the fixed image given the moving image and the output field.
Most examples, and I myself, use WarpImageFilter. My understanding was that it computes the inverse of the input field, which is much more useful because we can warp the image by simply sampling each output pixel from where it came from. That would imply you should provide WarpImageFilter with your forward displacement. And since I have supplied WarpImageFilter with my demons displacement field (and it produced the fixed image successfully), that would imply it is a forward displacement field.
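To make the sampling convention concrete, here is a minimal 1-D NumPy sketch of backward warping in the style of WarpImageFilter, where each output (fixed-space) pixel x is sampled from the moving image at x + d(x). The function name and nearest-neighbour sampling are my own simplifications, not ITK code:

```python
import numpy as np

def warp_backward(moving, disp):
    """Backward-warp a 1-D image: for each output (fixed-space) index x,
    sample the moving image at x + disp[x]. Nearest-neighbour sampling
    keeps the sketch short; a real filter would interpolate."""
    out = np.zeros_like(moving)
    for x in range(len(moving)):
        src = int(round(x + disp[x]))  # where this output pixel comes from
        if 0 <= src < len(moving):
            out[x] = moving[src]
    return out

# The moving image is the fixed image shifted right by 2 pixels:
fixed  = np.array([0, 0, 5, 5, 0, 0, 0, 0], dtype=float)
moving = np.array([0, 0, 0, 0, 5, 5, 0, 0], dtype=float)

# A displacement of +2 everywhere says "this fixed pixel comes from x+2
# in the moving image" -- a fixed-to-moving (backward) mapping:
disp = np.full(8, 2.0)
print(warp_backward(moving, disp))  # recovers the fixed image
```

Note that the field that makes this recover the fixed image points from fixed space into moving space, which is exactly the direction the arrows would appear to point "backwards" when rendered.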
So I'm a bit confused about why the values are backwards. The only other step that could have flipped them is my conversion of the ITK vector image to a VTK image, but that just uses ITK's ITK-to-VTK filters. When I manually inspect the values, though, I use the VTK-converted image.
Any help would be appreciated.
All image transforms are done using the backward transform. This is because you want a full sampling of your output space. More specifically, when you compute your resampled and transformed image, what you want is the value of the new image (output space) at each point of that space. Typically this output space is the space of your fixed image. So for each point of your fixed image, you are searching for where in your moving image that point comes from. So you really want the backward transform that goes from your fixed image to your moving image. A good picture to illustrate this is slide 26 of this presentation.
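A tiny NumPy sketch (my own illustration, not ITK code) of why backward mapping gives a full sampling: pushing each moving pixel forward to its destination can leave holes in the output, while pulling each output pixel from its source fills every output pixel. Here the mapping is a simple 2x upscaling:

```python
import numpy as np

moving = np.array([1.0, 2.0, 3.0, 4.0])
n_out = 8  # output (fixed) space is twice as large

# Forward mapping: "push" each moving pixel to where it goes.
# Only the even output indices ever receive a value -- holes remain.
forward = np.zeros(n_out)
for x in range(len(moving)):
    forward[2 * x] = moving[x]

# Backward mapping: "pull" each output pixel from its source.
# Every output pixel is assigned exactly once -- fully sampled.
backward = np.zeros(n_out)
for x in range(n_out):
    backward[x] = moving[min(x // 2, len(moving) - 1)]

print(forward)   # holes at the odd indices
print(backward)  # fully sampled
```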
Note: If you were working with 3D points, such as a mesh, you would want the forward transform, as you are not trying to fully sample the output space.
Note 2: Sometimes you do want to be able to go in both directions. One example is when you have computed an average image for an entire population: you have the backward transforms with the average image as your fixed image and each individual image as the moving image. Now, if you create a segmentation image in your average space and want to propagate it back to the individual images, you need the forward transform. In that case, you need to either make sure you compute an invertible transform, or use an ITK filter to invert the deformation field (there are many algorithms for that, and some give better results than others depending on the case).
Ah, alright, thanks. So that means WarpImageFilter accepts a backward transform, not a forward one? I thought it accepted a forward transform and inverted it internally.
Indeed. You provide the backward transform; it is not inverted internally.