Computation time does not scale with SetMetricSamplingPercentage setting in ITK V4

I have been away from the ITK community for a few years, and recently I was trying to adapt my old code to the latest release. I ran into a few problems and I don’t know whether these are known issues with the new release or whether I have not done things correctly. My old code was developed with InsightToolkit-3.20.1 and compiled with VC++ 2010. I have modified the code for InsightToolkit-4.13.1 and compiled it with VC++ 2013. The code performs 3D rigid registration. The following classes are involved in my V4 version (a rough sketch of how they fit together is shown after the list):
itkImageRegistrationMethodv4
itkMattesMutualInformationImageToImageMetricv4
itkEuler3DTransform
itkRegularStepGradientDescentOptimizerv4
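
Below is a minimal, untested sketch of how these V4 classes might be wired together for 3D rigid registration. The image type, the parameter values, and the helper function name RegisterRigid3D are illustrative assumptions, not settings taken from the actual code.

```cpp
#include "itkImage.h"
#include "itkImageRegistrationMethodv4.h"
#include "itkMattesMutualInformationImageToImageMetricv4.h"
#include "itkEuler3DTransform.h"
#include "itkRegularStepGradientDescentOptimizerv4.h"

using ImageType        = itk::Image<float, 3>;
using TransformType    = itk::Euler3DTransform<double>;
using MetricType       = itk::MattesMutualInformationImageToImageMetricv4<ImageType, ImageType>;
using OptimizerType    = itk::RegularStepGradientDescentOptimizerv4<double>;
using RegistrationType = itk::ImageRegistrationMethodv4<ImageType, ImageType, TransformType>;

// Hypothetical helper: wires the four classes together and runs the registration.
TransformType::ConstPointer
RegisterRigid3D(ImageType::Pointer fixedImage, ImageType::Pointer movingImage)
{
  MetricType::Pointer metric = MetricType::New();
  metric->SetNumberOfHistogramBins(50);        // placeholder value

  OptimizerType::Pointer optimizer = OptimizerType::New();
  optimizer->SetLearningRate(1.0);             // placeholder values, tune per problem
  optimizer->SetMinimumStepLength(0.001);
  optimizer->SetNumberOfIterations(200);

  RegistrationType::Pointer registration = RegistrationType::New();
  registration->SetMetric(metric);
  registration->SetOptimizer(optimizer);
  registration->SetFixedImage(fixedImage);
  registration->SetMovingImage(movingImage);

  registration->Update();                      // runs the registration; may throw on error
  return registration->GetTransform();         // resulting rigid transform
}
```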

Similarly, the classes I used in V3 include:
itkMultiResolutionImageRegistrationMethod
itkMultiResolutionPyramidImageFilter
itkMattesMutualInformationImageToImageMetric
itkEuler3DTransform
itkRegularStepGradientDescentOptimizer

I was able to run both versions of my code and compare the performance. Here is what I found out.

  1. V4 requires more memory than V3. I was able to register two images of size 400x400x245 in V3, but the registration crashes in V4.
  2. The V4 code runs significantly slower than V3. I have not been able to give both versions of my code identical termination conditions, but I can tell the difference from each iteration.
  3. The method SetMetricSamplingPercentage of the class itkImageRegistrationMethodv4 has no effect on the computational complexity. I did not see any change in computation time when I changed the sampling percentage from 0.5 to 0.05. This is not the case in my V3 code: when the number of samples changes from 500,000 points to 50,000 points, the CPU time is reduced to about 1/9 of the original.

Are the computation efficiency and memory usage known issues for V4? I’ll appreciate it very much if anyone can answer my questions or give me any hints!

Jian

  • V4 registration should be neither significantly slower nor take significantly more memory, if the settings are equivalent.
  • SetMetricSamplingPercentage should have an effect. See a related discussion.
  • The old registration classes are still supported, so you don’t have to switch to the V4 registration framework. Unless you have a good reason to switch, I suggest you stick with the old one. The V4 framework is more flexible (and more complicated), with many differences, including different defaults for some settings. If you do switch to the V4 registration framework, you will have to re-tune your parameters, at least to some degree.

Thank you very much!
I found what caused the problem. The default setting for MetricSamplingStrategyType is “NONE”, so changing the sampling percentage has no effect. I think it would be better to make the default value for MetricSamplingStrategyType “REGULAR” and the default sampling percentage “1.0” instead. The current setting is prone to error and misunderstanding.
After adding that setting, my V4 code works very well. Thanks again!
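
For anyone hitting the same problem, the fix amounts to selecting a sampling strategy explicitly alongside the percentage. A minimal sketch against the 4.13-era API, assuming the RegistrationType alias and registration object from the setup sketch above:

```cpp
// By default ImageRegistrationMethodv4 uses the NONE strategy, i.e. every
// voxel of the virtual domain, so a sampling percentage alone is ignored.
// Selecting REGULAR (or RANDOM) makes the percentage take effect.
registration->SetMetricSamplingStrategy(RegistrationType::REGULAR);
registration->SetMetricSamplingPercentage(0.05);  // use roughly 5% of the voxels
```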

@eewujian could you please submit a patch?

I would not recommend this. The “NONE” sampling strategy and the “REGULAR” strategy are different!

The “NONE” strategy uses all the pixels at their original locations, and no point set is generated, while the “REGULAR” strategy produces a point set by iterating over the virtual domain, selecting every nth point, and then perturbing that point.

These two methods use different data structures and different spatial locations. They are very different!

For most use cases, the regular strategy uses more pixels than are required and makes the computation take too long. A different strategy (a 1.0 sampling percentage may be too aggressive) would be a better default.

Note: I assume you mean that the “NONE” strategy produces too many pixels.

Yes, practically the “NONE” sampling strategy is not used, for efficiency reasons. The other sampling strategies are optimizations used to approximate the “NONE” strategy. Should the defaults be the parameters used for some efficient solution to an “arbitrary” problem?

My general strategy for tackling new registration problems is to start with a slow and steady optimization and then try to improve the performance while keeping the solution stable.

IMHO the real bug here is that the MetricSamplingPercentagePerLevel method, and the sampling machinery in general, need better documentation.

I am OK with the approach described by Bradley that starts “with a slow and steady optimization”. But I think that whenever a user calls the method “SetMetricSamplingPercentage” of the class itk::ImageRegistrationMethodv4 to set a sampling ratio, the sampling strategy should automatically be changed from “NONE” to another strategy. Otherwise the user might think they are using a subset of the image pixels/voxels when they actually are not.
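
A minimal sketch of what such behavior could look like, written as a hypothetical helper outside ITK (the function name and the choice of the REGULAR strategy are assumptions for illustration):

```cpp
// Hypothetical convenience wrapper, not part of ITK: whenever a percentage
// below 1.0 is requested, also select a subsampling strategy so the two
// settings cannot silently disagree (REGULAR is chosen here for illustration).
template <typename TRegistration>
void
SetSamplingPercentageChecked(TRegistration * registration, double percentage)
{
  if (percentage < 1.0)
  {
    registration->SetMetricSamplingStrategy(TRegistration::REGULAR);
  }
  registration->SetMetricSamplingPercentage(percentage);
}
```

Called as SetSamplingPercentageChecked(registration.GetPointer(), 0.05), the percentage would then actually reduce the number of metric samples.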

I have another question. I’m working on a project to rigidly register two images. One image is a sparse matrix, so I created an itk::ImageMaskSpatialObject to turn on only the valid voxels. On top of that, I also want to use only a fraction of the valid voxels (e.g., 5% to 20%) for the similarity metric calculation. I’m looking for the most efficient approach because time is critical in my application. I’m wondering how ITK handles this. Does ITK first identify the valid voxels based on the mask and then sample that subset? Or does it sample the whole image first and then keep the voxels that are valid according to the user-defined mask? Which sampling strategy is most efficient in my situation? In my mask image, only a few slices are turned on and the values are zero everywhere else.

Any help and suggestions will be appreciated very much!

Jian

Hello @eewujian,

ITK uses a rejection sampling approach which implicitly assumes that you are interested in a significant portion of the image.

For additional details see this related discussion.
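
For illustration, here is a sketch of how a mask and the sampling settings might be combined. It reuses the registration and metric objects from the sketches above and assumes a binary mask volume named maskImage; the rejection behavior noted in the comments reflects the explanation above.

```cpp
#include "itkImage.h"
#include "itkImageMaskSpatialObject.h"

using MaskImageType = itk::Image<unsigned char, 3>;
using MaskType      = itk::ImageMaskSpatialObject<3>;

// maskImage is assumed to be a MaskImageType volume whose non-zero voxels
// mark the valid region (e.g. the few populated slices).
MaskType::Pointer mask = MaskType::New();
mask->SetImage(maskImage);
metric->SetFixedImageMask(mask);

// Sample positions are drawn over the whole virtual domain and rejected when
// they fall outside the mask, so with a mask covering only a few slices the
// effective number of samples can be much smaller than the nominal percentage
// would suggest.
registration->SetMetricSamplingStrategy(RegistrationType::REGULAR);
registration->SetMetricSamplingPercentage(0.20);
```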

The default approach should be the one that makes sense for the majority of cases. The NONE strategy is almost always excessively computationally expensive. A more reasonable default would be the REGULAR strategy with a conservative percentage. Improved documentation is also helpful, but documentation does not supplant sensible defaults.

As I referenced above, the memory consumption when switching to the REGULAR strategy can be significant, because the samples are converted into a point list. For a 3D uint8 image with the REGULAR sampling strategy and a 100% sampling rate, the memory allocated to the sample point list will be 24x the size of the volume: each sample in the point list stores 3 doubles, hence 24 bytes per sample. So even at a 5% sampling rate, the sampling approach with a point list requires additional memory of roughly the size of the volume itself.
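
As a quick back-of-the-envelope check, using the 400x400x245 image size mentioned earlier in the thread purely as an example:

```cpp
// ~39.2 million voxels; a uint8 volume of this size is therefore ~39.2 MB.
const double numVoxels      = 400.0 * 400.0 * 245.0;
const double volumeBytes    = numVoxels * 1.0;            // 1 byte per uint8 voxel
// Each sample stores a 3D physical point as 3 doubles = 24 bytes.
const double pointListBytes = numVoxels * 0.05 * 24.0;    // at a 5% sampling rate
// pointListBytes is roughly 47 MB, i.e. about 1.2x the volume itself,
// consistent with the figures above.
```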

Profiling and analysis are needed to determine the best trade-off between computation, memory consumption, and accuracy for the “general” case.