I am performing a 3D multistage registration. The first stage is a Translation transform initialized with an identity transform. The second stage is a Rotation transform.
It pops up the following errors right after the first iteration:
ExceptionObject caught !
itk::ExceptionObject (0x563b1e8ede50)
Location: "void itk::MattesMutualInformationImageToImageMetricv4<TFixedImage, TMovingImage, TVirtualImage, TInternalComputationValueType, TMetricTraits>::ComputeResults() const [with TFixedImage = itk::Image<float, 3>; TMovingImage = itk::Image<float, 3>; TVirtualImage = itk::Image<float, 3>; TInternalComputationValueType = double; TMetricTraits = itk::DefaultImageToImageMetricTraitsv4<itk::Image<float, 3>, itk::Image<float, 3>, itk::Image<float, 3>, double>]"
File: /home/9679247/Documentos/General-Sources/InsightToolkit-5.1.2/Modules/Registration/Metricsv4/include/itkMattesMutualInformationImageToImageMetricv4.hxx
Line: 312
Description: itk::ERROR: itk::ERROR: MattesMutualInformationImageToImageMetricv4(0x563b1e6a4c60): All samples map outside moving image buffer. The images do not sufficiently overlap. They need to be initialized to have more overlap before this metric will work. For instance, you can align the image centers by translation.
Also:
ExceptionObject caught !
itk::ExceptionObject (0x55d97dcea7d0)
Location: "itk::Versor::Set( const VectorType )"
Description: Trying to initialize a Versor with a vector whose magnitude is greater than 1
I tried varying the initial transform initialization method:
First I verified the way I am passing the previous stage transform into the next stage:
registrationR->SetInitialTransform( initialTransformRotation );
registrationR->InPlaceOn();
// Connecting previous stage transform to the next stage
registrationR->SetMovingInitialTransform( initialTransformTranslation );
Then I checked some possibilities for the initial transform. I tried:
An initial transform with SetIdentity(), to make it an identity transform.
Also, I tried to use the CenteredTransformInitializer class, calling its MomentsOn() method.
None of the strategies above seemed to solve this problem.
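Roughly, those two attempts look like the sketch below (illustrative names only; I am assuming an itk::VersorRigid3DTransform<double> for the rotation stage and float 3D images):

#include "itkImage.h"
#include "itkVersorRigid3DTransform.h"
#include "itkCenteredTransformInitializer.h"

using ImageType = itk::Image<float, 3>;
using RotationTransformType = itk::VersorRigid3DTransform<double>;

auto initialTransformRotation = RotationTransformType::New();

// Attempt 1: plain identity.
initialTransformRotation->SetIdentity();

// Attempt 2: center-of-mass alignment via CenteredTransformInitializer.
using InitializerType = itk::CenteredTransformInitializer<
    RotationTransformType, ImageType, ImageType >;
auto initializer = InitializerType::New();
initializer->SetTransform( initialTransformRotation );
initializer->SetFixedImage( fixedImage );
initializer->SetMovingImage( movingImage );
initializer->MomentsOn();                 // use the centers of mass
initializer->InitializeTransform();       // sets the center and translation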
This error appears when the transform does a terrible job. There needs to be some overlap (at least 25%, I think) between the fixed and moving images for MI to compute anything. You need a better initial transform.
I kinda got it. Makes total sense, BUT I am using the result of the previous stage (translation) as the initial transform.
Previous stage transform should be a good initial guess, shouldn’t it?
Perhaps I am not setting that correctly! Can you please check that for me?
registrationRotation->SetInitialTransform( initialTransformRotation );
registrationRotation->InPlaceOn();
// Connecting previous stage transform to the next stage
registrationRotation->SetMovingInitialTransform( initialTransformTranslation );
initialTransformRotation is either the identity transform or the center-initialized initial rotation transform. initialTransformTranslation is the resulting transform of the previous stage.
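For comparison, the chaining pattern in the ITK Software Guide's multistage example accumulates the stage-1 result into a composite transform and passes that as the moving initial transform of stage 2; a rough sketch with illustrative names:

#include "itkCompositeTransform.h"

using CompositeTransformType = itk::CompositeTransform<double, 3>;
auto compositeTransform = CompositeTransformType::New();

// Stage 1 ran with InPlaceOn(), so initialTransformTranslation now holds
// the optimized translation.
compositeTransform->AddTransform( initialTransformTranslation );

// Stage 2 optimizes the rotation on top of the composite result.
registrationRotation->SetMovingInitialTransform( compositeTransform );
registrationRotation->SetInitialTransform( initialTransformRotation );
registrationRotation->InPlaceOn();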
I guess that the second call is overwriting the first transform. You could write the transform to disk at various points in the program, and then check it either in a text editor or in Slicer.
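For example, something along these lines writes it to a .tfm file that a text editor or Slicer can read (a sketch; the file name is illustrative):

#include "itkTransformFileWriter.h"

auto transformWriter = itk::TransformFileWriter::New();
transformWriter->SetInput( initialTransformRotation );
transformWriter->SetFileName( "initialTransformRotation.tfm" );
transformWriter->Update();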
I've tried some other initialization methods. Another error pops up:
> ExceptionObject caught !
> itk::ExceptionObject (0x559b56fa17d0)
> Location: "itk::Versor::Set( const VectorType )"
> Description: Trying to initialize a Versor with a vector whose magnitude is greater than 1
I tried to perform the multistage procedure according to the ITK example.
This image, obtained after the first stage (almost perfect alignment), is used as the input for the rotation stage. And the setting code is the following:
But I am still getting the same initial overlap error:
itk::ERROR: itk::ERROR: MattesMutualInformationImageToImageMetricv4(0x55d901657c60): All samples map outside
moving image buffer. The images do not sufficiently overlap. They need to be initialized to
have more overlap before this metric will work. For instance, you can align the image centers by translation.
initialTransformRotation computed by rotationInitializer->InitializeTransform(); is very soon obliterated by initialTransformRotation->SetRotation( rotation );
Can you write initialTransformRotation to disk before and after the SetRotation() call?
If you just want to update a transform so it has a certain rotation, you should also take care of the center of rotation. If the centers of the two are not the same (usually [0,0,0]), that could explain your big problem.
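As a sketch (assuming a VersorRigid3DTransform and illustrative names), the axis/angle overload of Versor::Set with a normalized axis avoids the "magnitude greater than 1" failure, and SetRotation() replaces only the versor, leaving the center untouched:

#include "itkVersorRigid3DTransform.h"

using RotationTransformType = itk::VersorRigid3DTransform<double>;

// In practice this would be the transform produced by the initializer.
auto initialTransformRotation = RotationTransformType::New();

RotationTransformType::VersorType rotation;
RotationTransformType::VersorType::VectorType axis;
axis[0] = 0.0;
axis[1] = 0.0;
axis[2] = 1.0;
axis.Normalize();                        // keep |axis| == 1 explicitly
const double angle = 0.0;                // start with no rotation
rotation.Set( axis, angle );             // axis/angle overload of Versor::Set

// Replaces only the versor; the center set earlier is preserved.
initialTransformRotation->SetRotation( rotation );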
I have often found that registration works much more robustly if the moving and fixed images are cropped to approximately the same region. Probably because this way the simple center-of-gravity initial alignment and center-of-rotation initialization work very well, and you also ensure that samples are taken only from relevant image regions. There may be other ways to ensure that initial alignment, center of rotation, and sampling are optimal, but cropping the images is a very simple method to achieve all of these.
Even if you compute the transform on a smaller region of the image, you can still apply the resulting transform to the entire image. In the case of non-linear transforms this extrapolation is not trivial, but I know that VTK supports it, and probably ITK can do it, too.
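A minimal sketch of that cropping step with itk::RegionOfInterestImageFilter (the index and size values are placeholders to adapt to your data):

#include "itkImage.h"
#include "itkRegionOfInterestImageFilter.h"

using ImageType = itk::Image<float, 3>;
using ROIFilterType = itk::RegionOfInterestImageFilter<ImageType, ImageType>;

ImageType::IndexType start;
start.Fill( 0 );                 // placeholder start index
ImageType::SizeType size;
size.Fill( 128 );                // placeholder region size

ImageType::RegionType region( start, size );

auto roiFilter = ROIFilterType::New();
roiFilter->SetInput( fixedImage );
roiFilter->SetRegionOfInterest( region );
roiFilter->Update();
ImageType::Pointer croppedFixed = roiFilter->GetOutput();
// Crop the moving image the same way, then register the cropped images.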
Well, I just tested FOUR ways of giving an initial guess to the rotation stage:
I - Creating an InitialTransformRotation and then initializing it with the CenteredTransformInitializer;
II - Setting a generic rotation axis to the newly created InitialTransformRotation, and then setting that as the initial transform.
III - Using an Identity transform;
IV - Using the itk::ImageMomentsCalculator class to calculate the center of gravity and setting that center as the FixedParameters of the InitialTransformTranslation;
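Roughly, strategy IV looks like the sketch below (illustrative names; I am assuming the center of gravity ends up as the center, i.e. the fixed parameters, of the rotation-stage transform):

#include "itkImage.h"
#include "itkImageMomentsCalculator.h"
#include "itkVersorRigid3DTransform.h"

using ImageType = itk::Image<float, 3>;
using RotationTransformType = itk::VersorRigid3DTransform<double>;

auto momentsCalculator = itk::ImageMomentsCalculator<ImageType>::New();
momentsCalculator->SetImage( fixedImage );
momentsCalculator->Compute();
const auto centerOfGravity = momentsCalculator->GetCenterOfGravity();

auto initialTransformRotation = RotationTransformType::New();
RotationTransformType::InputPointType center;
center[0] = centerOfGravity[0];
center[1] = centerOfGravity[1];
center[2] = centerOfGravity[2];
initialTransformRotation->SetCenter( center );   // stored in the fixed parameters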
I also tried all the ITK-Software-Guide methods of connecting two stages in a multi-stage registration.
What I found:
No method for connecting stages worked, so I fell back to a straightforward approach: after the first stage (translation), I RESAMPLED THE MOVING IMAGE WITH THE OBTAINED TRANSFORM and used that resampled image as the moving-image input of the next stage (see the sketch after these findings). The other approaches always stopped with transform-type-conversion errors.
Of all the methods I tried for the InitialTransformRotation guess, only strategy IV worked, and only when I used the Mattes mutual information metric. When I used the other two (similar) metric implementations, it did not. I do not know why.
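The resample-between-stages workaround looks roughly like this (a sketch with illustrative names; translationTransform stands for the first-stage result):

#include "itkImage.h"
#include "itkResampleImageFilter.h"
#include "itkLinearInterpolateImageFunction.h"

using ImageType = itk::Image<float, 3>;
using ResampleFilterType = itk::ResampleImageFilter<ImageType, ImageType>;

auto resampler = ResampleFilterType::New();
auto interpolator = itk::LinearInterpolateImageFunction<ImageType, double>::New();
resampler->SetInput( movingImage );
resampler->SetTransform( translationTransform );   // result of the translation stage
resampler->SetInterpolator( interpolator );
resampler->SetReferenceImage( fixedImage );         // match the fixed image geometry
resampler->UseReferenceImageOn();
resampler->SetDefaultPixelValue( 0 );
resampler->Update();

// The resampled image becomes the moving image of the rotation stage,
// which can then start from an identity initial transform.
registrationRotation->SetMovingImage( resampler->GetOutput() );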
I am sharing the GitHub code here.
Please check the SeparatedTransforms Branch.
It looks like you have more initial translation error than rotational error. You might be able to do an FFT-based cross-correlation to initialize the translation. Given that it's MRI and CT, you may need to do some intensity manipulation to get good results from the correlation. Here is a sample implementation which may help you get started with this approach:
You may also find this SimpleITK registration tutorial informative about some options:
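Not the linked sample, but a rough sketch of the idea, assuming itk::FFTNormalizedCorrelationImageFilter (the filter SimpleITK exposes as FFTNormalizedCorrelation) is available in your build:

#include "itkImage.h"
#include "itkFFTNormalizedCorrelationImageFilter.h"
#include "itkMinimumMaximumImageCalculator.h"

using ImageType = itk::Image<float, 3>;
using CorrelationFilterType =
    itk::FFTNormalizedCorrelationImageFilter<ImageType, ImageType>;

auto correlation = CorrelationFilterType::New();
correlation->SetFixedImage( fixedImage );
correlation->SetMovingImage( movingImage );
correlation->Update();

// The correlation peak gives the translation estimate: its offset from the
// center of the correlation image approximates the shift between the images.
auto maxCalculator = itk::MinimumMaximumImageCalculator<ImageType>::New();
maxCalculator->SetImage( correlation->GetOutput() );
maxCalculator->ComputeMaximum();
const auto peakIndex = maxCalculator->GetIndexOfMaximum();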
I find it hard to believe that there is more initial translational transform error than rotational transform error.
When I run the code, the first stage (translation) works perfectly well: the optimizer evolves and the metric and iteration outputs are reasonable.
Also, when I check the moving image resampled with the first stage transform I find an almost perfect alignment, as you can see here.
Yes, I did.
The thing is that the example uses ImageRegistrationMethod (v3), not ImageRegistrationMethodv4.
I must also state that when I used the VersorRigid3DTransform (translation and rotation at once), the pipeline worked pretty well, with none of the errors mentioned above.
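A sketch of what such a single rigid stage can look like (illustrative names, not my actual code; initialization as in the CenteredTransformInitializer snippet above):

#include "itkImage.h"
#include "itkVersorRigid3DTransform.h"
#include "itkImageRegistrationMethodv4.h"

using ImageType = itk::Image<float, 3>;
using RigidTransformType = itk::VersorRigid3DTransform<double>;

auto rigidTransform = RigidTransformType::New();
// Initialize the center/translation with CenteredTransformInitializer as above,
// then optimize rotation and translation together in a single stage.

using RigidRegistrationType =
    itk::ImageRegistrationMethodv4<ImageType, ImageType, RigidTransformType>;
auto rigidRegistration = RigidRegistrationType::New();
rigidRegistration->SetFixedImage( fixedImage );
rigidRegistration->SetMovingImage( movingImage );
rigidRegistration->SetInitialTransform( rigidTransform );
rigidRegistration->InPlaceOn();
// Metric, optimizer (with separate scales for the versor and translation
// parameters), and multi-resolution settings omitted for brevity.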