I’ve been trying to use a B-spline transformation with neural networks in PyTorch. I’ve been using the model from here. However, when I use the predicted parameters in SITK, the transformation results do not match. I wanted to know if there is a differentiable B-spline transformation that works the same way as the BSpline transformation in SITK.
I’m not sure I understand your question; possibly provide additional details or figures to clarify the issue. As far as I can tell, you have a deep learning “black box” which predicts the displacements of B-spline control points, and when you apply these using the SimpleITK BSplineTransform, they do not create the expected deformation?
Do you also have the grid structure used by the “black box”? The initial grid locations are required for working with the SimpleITK transformation. Bottom line: you need to translate between the B-spline grid structure and displacements used by the deep learning model and those used by SimpleITK.
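For concreteness, here is a minimal sketch of what the SimpleITK side expects, assuming a 3D image and cubic B-splines (the image size and mesh size are placeholders of mine, not values from your model):

```python
import SimpleITK as sitk
import numpy as np

# Placeholder 64^3 image and a 4x4x4 transform-domain mesh (my choice).
image = sitk.Image([64, 64, 64], sitk.sitkFloat32)
mesh_size = [4, 4, 4]

# The initializer places the control-point grid over the image's physical
# domain; for cubic splines the coefficient grid has mesh_size + 3 points
# per dimension.
tx = sitk.BSplineTransformInitializer(image, mesh_size)

# Parameters are control-point displacements in PHYSICAL units, flattened
# as all x components first, then all y, then all z (not interleaved per
# control point).
coeffs = np.zeros(len(tx.GetParameters()))
tx.SetParameters(coeffs.tolist())

# Warp an image with the transform.
warped = sitk.Resample(image, image, tx, sitk.sitkLinear, 0.0)
```

If the learned parameters are in voxel units, or interleaved per control point, converting them to this layout (and to physical units) is the first thing to check.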
I have been using the BSplineSTN3D model from here, which is based on the BSplineTransformation from here. The NN outputs the parameters, and those parameters are used to deform the image. I applied the same parameters in SITK’s BSplineTransform, which produced a different image. The code mentioned has methods to compute the sampling grid from the displacement field; they use some convolutional operations and ultimately arrive at the shape of the volume being used.
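In case it helps the comparison, here is a minimal sketch of how models of this kind typically turn control-point displacements into a dense field via transposed convolution, assuming cubic B-splines and a uniform control-point stride (the function names and kernel construction are mine, not necessarily identical to the repo’s):

```python
import torch
import torch.nn.functional as F

def bspline_kernel_1d(stride):
    # Cubic B-spline B3 sampled at offsets k/stride, k = -2*stride..2*stride.
    x = torch.arange(-2 * stride, 2 * stride + 1, dtype=torch.float32) / stride
    ax = x.abs()
    near = (4 - 6 * ax**2 + 3 * ax**3) / 6   # |x| < 1
    far = (2 - ax) ** 3 / 6                  # 1 <= |x| < 2
    return torch.where(ax < 1, near,
                       torch.where(ax < 2, far, torch.zeros_like(ax)))

def control_points_to_field(theta, stride, out_shape):
    """theta: (B, 3, cx, cy, cz) float tensor of control-point displacements.
    Returns a dense (B, 3, D, H, W) displacement field."""
    k1 = bspline_kernel_1d(stride)
    k3 = k1[:, None, None] * k1[None, :, None] * k1[None, None, :]
    # One identical separable kernel per displacement component (groups=3).
    weight = k3[None, None].repeat(3, 1, 1, 1, 1)
    field = F.conv_transpose3d(theta, weight, stride=stride,
                               padding=2 * stride, groups=3)
    # Crop (or pad) to the target volume shape.
    return field[..., :out_shape[0], :out_shape[1], :out_shape[2]]
```

The dense field is then typically converted into a sampling grid for torch.nn.functional.grid_sample, which expects normalized coordinates in [-1, 1] with (x, y, z) channel order; those conventions, versus SimpleITK’s physical-space points, are a common source of mismatch.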
I also wanted to visualize the deformation field, thinking that might help. Is there some tool in SITK to visualize the grid?
I had posted another question in the project repo. It turns out we cannot relate these transformations. Now I am wondering whether I am using the BSplineTransform correctly, since it does not seem to warp the image.
@dzenanz Thanks for the response. My problem is not being able to figure out why the two deformations are so different. Is there any way to visualize the deformations in SimpleITK? My other need is to find the new locations of points after deformation using the deformation field. Is this possible?
I would go with visualization using Slicer as recommended by @dzenanz.
If you are willing to use a crude way of visualizing, you can use the GridImageSource, apply the deformation field to that image, and see how the deformation varies in space. This approach is used in this Jupyter notebook, in the section titled Radial Distortion.
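Something along these lines, assuming `tx` is the transform you want to inspect (2D grid shown for simplicity; the same works in 3D):

```python
import SimpleITK as sitk

# Synthetic grid image via the procedural interface to GridImageSource.
grid = sitk.GridSource(outputPixelType=sitk.sitkUInt16,
                       size=(256, 256),
                       sigma=(0.5, 0.5),
                       gridSpacing=(8.0, 8.0))

# Resampling the grid through the transform shows how the
# deformation varies in space.
warped_grid = sitk.Resample(grid, grid, tx, sitk.sitkLinear, 0.0)
sitk.Show(warped_grid)  # requires an external viewer (e.g. Fiji) to be set up
```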
The deformation field transform, DisplacementFieldTransform, has the same interface as the rest of the transformations, so just use the TransformPoint method to map your points. Remember to use physical points and not image indexes (move between the two representations using the image’s TransformIndexToPhysicalPoint and TransformPhysicalPointToIndex methods).
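A minimal sketch, assuming `disp_image` is your displacement field stored as a sitkVectorFloat64 image and `image` is the volume it was computed on:

```python
import SimpleITK as sitk

# NOTE: the constructor takes ownership of disp_image, which is
# emptied afterwards.
disp_tx = sitk.DisplacementFieldTransform(disp_image)

# Map an index to a physical point, transform it, and map back.
index = (10, 20, 30)
point = image.TransformIndexToPhysicalPoint(index)
moved_point = disp_tx.TransformPoint(point)
moved_index = image.TransformPhysicalPointToIndex(moved_point)
```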
Yes. In Slicer, go to the Transforms module, select your transform, and then add your points to the list of transformed nodes. ITK and SimpleITK do not have visualization capabilities of their own; there is a VTKGlue module, but it has only basic capabilities.