Yes you need to cast. The fixed and moving images need to be of float type (sitkFloat32 or sitkFloat64). This is independent of the file format. If the original data in the file was in float then no need to cast, just read it. If it isn’t float, read and cast.
So in my affine registration between the same subject, the graph of iterations vs. metric value looks something like this,
where there is a spike after the 100th iteration. Is there a way to stop at the convergence value around the 100th iteration? I'm using
registration_method.SetMetricAsCorrelation()
registration_method.SetInterpolator(sitk.sitkLinear)
and
I assume that after 100 iterations the registration switches to a higher-resolution version of your image, and that causes your metric to spike. The third star is where the highest-resolution registration step starts. Overall, this registration seems fine, at least judging by that graph. Does the result look fine?
Sorry to bother you.
After my affine registration, I'm trying to do a B-spline registration where my moving and fixed images are in .mha format and my fixed mask is a .img file (a mask for the airway tree).
My question is: what kind of moving and fixed points do I need, and what is the minimum number of points?
I'm getting this error:
Exception thrown in SimpleITK ImageRegistrationMethod_Execute: d:\a\1\sitk-build\itk-prefix\include\itk-5.2\itkImageToImageMetricv4.hxx:270:
ITK ERROR: MeanSquaresImageToImageMetricv4(000001F474731D80): VirtualSampledPointSet must have 1 or more points.
If your mask is very small relative to the entire volume of the image, as is the case for airways relative to the entire chest and the surrounding air captured by a CT image, it causes sampling issues. There is some fixed number of voxels chosen to be used in the registration, and only a subset of those falls inside the fixed mask. Those voxels are then mapped to the moving image, and only the subset that falls inside the moving mask participates in the computation of the metric. In your case, all those intersections yield an empty set.
A quick and good way to proceed would be to ignore the fact that you have an airway segmentation, and just do a normal CT-to-CT registration without any masks. Initialize it with the transform coming from the registration of the lung segmentations. That is easy to do, and should produce decent results.
If you insist on somehow utilizing your airway segmentation, a relatively fast approach is to compute distance fields from those binary segmentations and register the distance fields using mean squared error as the metric.
So just to clarify: you mean doing a B-spline CT-to-CT registration (the .mha files) between the fixed and moving images, initialized with the transform from the lung-volume affine registration performed previously?
Or should I do an affine registration of the lung segmentation masks (lung airway masks) and then use that transformation to initialize the B-spline CT-to-CT (.mha) registration between the fixed and moving images?
I have all the voxel locations (x, y, z) for the connected airway points at each bifurcation. Would those be suitable to use as my moving-image and fixed-image points for a B-spline registration?
I am doing a B-spline FFD registration. I have saved all the moving and fixed points for the airway tree bifurcations as two DataFrames, which contain all the (x, y, z) points as floats.
My moving and fixed images are in .mha format and my fixed mask is a .img file (a mask for the airway tree).
When I run the registration I am getting this error.
So, is there a way to convert my DataFrame points to a PointSet that is readable by ITK?
@VeerBal, you will need to see what your pandas DataFrame contains as data, extract it appropriately, and feed it to any of the PointSet class constructors or methods as they expect. The official documentation of the PointSet class is here, as @dzenanz posted in the immediately preceding answer:
The ITK Examples also contain code that shows how to use the class. See the related examples at the bottom of that page as well.
The ITK Software Guide also contains a section about the PointSet class: Section 4.2, PointSet (p. 57).
Alternatively, you can also have a look at the tests (this one and this other one) to try to see what you need to feed to the PointSet instance and methods.
Thank you so much for your help.
I converted my DataFrame into two separate lists that contain all the coordinates as floating-point values, for both the fixed and the moving image. Right now I'm trying to load those lists into a PointSet instance, but I don't know if that is the right way to proceed.