In my multilevel registration application, I use a specific initialization: I set the parameters (scale, rotation, translation) plus the center on the transform being optimized (in place) before the registration starts. This works well.
However, when I add a moving initial transform (via SetMovingInitialTransform(), or by using a composite transform directly), the centers of all the transforms get reset to [0, 0]. All other parameters keep their initialized values, so my “superb” initialization doesn’t work as expected… I only have to uncomment the SetMovingInitialTransform() line, whatever the transform is (identity or not), to observe the undesired behavior.
Does anyone have a clue why this is happening? I have started browsing the ITK source, but I haven’t found a possible explanation yet.
I’m using ITK v4, a 2D similarity transform, a mean squares difference metric, and a regular step gradient descent optimizer.
It sounds more like a transform composition problem than a problem with setting the initial transform.
Can you post a sample of your registration setup?
The difficulty with using composite transforms and transform centers is that the center of the transform lives not in image space but in the space defined by the prior transforms in the composite stack. None of the transform initializers support a prior transform stack.
Thanks both of you for your help.
I agree with you, Tim, it seems like a composition problem.
Bradley, thanks, I’m reading this!
Here is a sample of the code. I tried to make it as concise as possible and to remove anything unnecessary.
I’m not using any transform initializer, but I have written a “custom” initialization (nothing fancy, based on principal moments) which gives a center for both images, as well as the rotation, scaling, and translation. I need the composite transform to allow image flipping (mirroring) before the registration.
using InternalType = double;
using InternalPixelType = float;
static const unsigned int Dimension = 2;
using InternalImageType = itk::Image< InternalPixelType, Dimension >;
using FlipTransformType = itk::AffineTransform< InternalType, Dimension >;
using TransformType = itk::Similarity2DTransform< InternalType >;
using OptimizerType = itk::RegularStepGradientDescentOptimizerv4< InternalType >;
using MetricType = itk::MeanSquaresImageToImageMetricv4< InternalImageType, InternalImageType >;
using ScalesEstimatorType = itk::RegistrationParameterScalesFromPhysicalShift< MetricType >;
using RegistrationType = itk::ImageRegistrationMethodv4< InternalImageType, InternalImageType, TransformType >;
// I haven't put the implementation here
observer = CommandIterationObserver::New();
command = RegistrationInterfaceCommand::New();
flipTransform = FlipTransformType::New();
optimizedTransform = TransformType::New();
metric = MetricType::New();
scalesEstimator = ScalesEstimatorType::New();
scalesEstimator->SetMetric( metric );
scalesEstimator->SetTransformForward( true );
optimizer = OptimizerType::New();
optimizer->AddObserver( itk::IterationEvent(), observer );
optimizer->SetScalesEstimator( scalesEstimator );
registration = RegistrationType::New();
registration->AddObserver( itk::MultiResolutionIterationEvent(), command );
registration->SetOptimizer( optimizer );
registration->SetMetric( metric );
registration->SetInitialTransform( optimizedTransform );
command->setInitialLearningRate(... );
command->setInitialMinimumStepLength( ... );
command->setMinimumStepLengthShrinkFactor( ... );
registration->SetFixedImage( rescalerFixed->GetOutput() );
registration->SetMovingImage( rescalerMoving->GetOutput() );
movingCenter = ...; // calculated from moments
fixedCenter = ...; // calculated from moments
// affine transform from moving image space to the same moving image space
MomentsCalculatorType::VectorType centerAffine;
centerAffine = movingCenter;
flipTransform->SetCenter( centerAffine );
// remove some code here for the sake of conciseness
flipTransform->SetIdentity();
registration->SetMovingInitialTransform(flipTransform); // if I comment this line, everything is fine
optimizedTransform->SetCenter( fixedCenter );
optimizedTransform->SetScale( initScale );
optimizedTransform->SetAngle( initAngle );
optimizedTransform->SetTranslation( initTrans );
optimizer->SetNumberOfIterations( ... );
optimizer->SetRelaxationFactor( ... );
optimizer->SetDoEstimateLearningRateAtEachIteration( ... );
optimizer->SetDoEstimateLearningRateOnce( ... );
optimizer->SetMaximumStepSizeInPhysicalUnits( ... );
optimizer->SetGradientMagnitudeTolerance( ... );
registration->SetNumberOfLevels ( ... );
registration->SetShrinkFactorsPerLevel( ... );
registration->SetSmoothingSigmasPerLevel( ... );
// here the optimizedTransform has the proper center
// as soon as the registration begins (observed in the observer), the center is set to [0, 0]
registration->Update();
You had a good feeling, Bradley: when I set registration->SetInitializeCenterOfLinearOutputTransform(false);, the center keeps its value and is no longer reset. So this “solves” my problem, great!
I’m wondering whether resetting the center is actually the intended behavior. In this experiment, the centers of both the affine and the similarity transforms seemed to be reset.
If neither the automatic composition in the registration method nor manually setting a composed transform works, I would suggest embedding the transformations directly in the input images, i.e. resampling both your fixed and moving images so that their centers lie at [0, 0], using the pre-calculated transforms, and then running the registration with a composite transform whose center is [0, 0]. That should do the trick.