N4BiasFieldCorrection issues when re-run (applied through SliceBySlice filter)

Hello!

I am wondering what the current status of streaming is when using the SliceBySlice filter to apply another filter slice-wise. I found this communication, but it is quite old, and I am having trouble getting my implementation of slice-wise applying N4 to stream fully. For a 64GB input (with 2796 z-slices) it needs more than 1TB of memory, even if I specify 100 stream chunks. I would expect the number of slices to be the best choice for streaming with the SliceBySlice filter; is it necessary to determine this beforehand and pass it to the SetNumberOfStreamDivisions of the writer?
Is it necessary to set an ImageRegionSplitter for the writer corresponding to the slicing of the SliceBySlice filter, since the writer drives the streaming?
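
For reference, a minimal sketch of the kind of pipeline in question (my own reconstruction with illustrative names and settings, not the actual ITK-CLIs code):

#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkSliceBySliceImageFilter.h"
#include "itkN4BiasFieldCorrectionImageFilter.h"

using ImageType = itk::Image<float, 3>;
using SliceType = itk::Image<float, 2>;

auto reader = itk::ImageFileReader<ImageType>::New();
reader->SetFileName("input.mha");

// the 2D N4 instance that gets applied to every z-slice
auto n4 = itk::N4BiasFieldCorrectionImageFilter<SliceType>::New();

auto sbs = itk::SliceBySliceImageFilter<ImageType, ImageType>::New();
sbs->SetInput(reader->GetOutput());
sbs->SetFilter(n4);

auto writer = itk::ImageFileWriter<ImageType>::New();
writer->SetInput(sbs->GetOutput());
writer->SetFileName("output.mha");
writer->SetNumberOfStreamDivisions(100); // the writer drives the streaming
writer->Update();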

Many thanks for any help or hints.

Yes, I believe I got streaming working through the SliceBySliceImageFilter a while ago.

Yes, the number of slices generally works well, and yes, this should be done beforehand. The number of slices can easily be obtained before pipeline execution by calling UpdateOutputInformation and then getting the image regions. For this case it makes the most sense to set the number of divisions to the number of slices.
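
For example (a sketch reusing the reader/writer variables from the pipeline above):

// UpdateOutputInformation reads only the meta-data, no pixel data
reader->UpdateOutputInformation();
const auto numberOfSlices = static_cast<unsigned int>(
  reader->GetOutput()->GetLargestPossibleRegion().GetSize()[2]);
writer->SetNumberOfStreamDivisions(numberOfSlices); // one division per z-slice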

The PipelineMonitorImageFilter can be very useful for monitoring how the pipeline is streaming the execution.
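
A sketch of how it could be wired in (my own snippet, same assumed variables as above):

#include "itkPipelineMonitorImageFilter.h"
#include <iostream>

// record the regions of every streamed update between the SBS and the writer
auto monitor = itk::PipelineMonitorImageFilter<ImageType>::New();
monitor->SetInput(sbs->GetOutput());
writer->SetInput(monitor->GetOutput());
writer->Update();

std::cout << "streamed updates: " << monitor->GetNumberOfUpdates() << std::endl;
std::cout << monitor; // prints the recorded requested/buffered regions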

The ImageRegionSplitter is internal to the ImageIO, so you can’t set it. The ImageIO manages how the image region is split based on its capabilities. If an ImageIO doesn’t support streaming, the request will be for the full region. When compression is enabled, streaming can be disabled too, as zlib compression does not support the needed streaming/concatenation operations.

As an alternative to having the ImageIO drive the streaming, you could use the StreamingImageFilter to drive the pipeline.
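
That might look like this (a sketch, again reusing the variables from above):

#include "itkStreamingImageFilter.h"

// requests one sub-region after another from its input,
// independent of the streaming capabilities of any ImageIO
auto streamer = itk::StreamingImageFilter<ImageType, ImageType>::New();
streamer->SetInput(sbs->GetOutput());
streamer->SetNumberOfStreamDivisions(numberOfSlices);
streamer->Update();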

At a higher level, consider a method for your problem that exploits the coherence in the z direction of the image. Perhaps first run the BinShrinkImageFilter to reduce the size, then estimate the bias field in 3D at this lower resolution. The lower resolution field could be used as an initial estimate at the full resolution.
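
The shrinking step could be sketched as follows (the factor of 4 is an arbitrary assumption):

#include "itkBinShrinkImageFilter.h"

// bin-average the image down before estimating the bias field in 3D
auto shrinker = itk::BinShrinkImageFilter<ImageType, ImageType>::New();
shrinker->SetInput(reader->GetOutput());
shrinker->SetShrinkFactors(4); // reduce each dimension by 4x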


Many thanks @blowekamp for your quick and detailed reply. Perhaps I should use the StreamingImageFilter to ensure that the splitting fits the SliceBySlice processing, independent of the splitting of the ImageIO. So far I am using uncompressed MHAs, so streaming should work, and I was expecting that the splitting would correspond to SliceBySlice, but I should test that.

At a higher level, consider a method for your problem that exploits the coherence in the z direction of the image. Perhaps first run the BinShrinkImageFilter to reduce the size, then estimate the bias field in 3D at this lower resolution. The lower resolution field could be used as an initial estimate at the full resolution.

Generally, I fully agree, but in this case the images are acquired slice by slice and the bias field can vary from slice to slice without any correspondence along the third dimension. So, to properly account for the “jagged” bias field (if viewed along the z-direction), I would expect slice-by-slice processing to be best suited. Using the BinShrinkImageFilter before N4, even for 2D, is on the to-do list.


Oddly, my n4_SBS needs much more memory (as if leaking) when employing streaming if the NumberOfFittingLevels is larger than 1, e.g. when using 3x3 as the last parameter. All seems fine with NumberOfFittingLevels equal to 1, e.g. when using 3 or 100 as the last parameter. In addition, the SBS variant needs much more time for processing (about a factor of 100) than the non-SBS variant n4.
Could it be that for some reason N4 (and possibly other iterative filters) is not (well) compatible with SBS?

There are many possibilities for the source of the problematic execution. For one, you are adding two new things here:

  • Streaming the pipeline
  • Using N4 inside the SBS

Try to break it up to figure out where the problem is:

  • Stream through the SBS with the CastImageFilter as the internal filter (see the sketch after this list)
  • Run the SBS with N4 but no streaming.
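
For the first item, a sketch of substituting a trivial pass-through filter (my own snippet, same assumed variables as in the pipeline sketch above):

#include "itkCastImageFilter.h"

// a pass-through filter in place of N4 isolates the streaming
// behavior of the SliceBySliceImageFilter itself
auto cast = itk::CastImageFilter<SliceType, SliceType>::New();
sbs->SetFilter(cast);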

You are right that there is likely some subtle problem with the N4 filter being executed multiple times in the SBS filter.

Yes, true, and “SBS with N4 but no streaming” does not even behave as expected. So far I was only able to get an output for a 2D input image (i.e. expecting N4 line processing, just for testing) and only for NumberOfFittingLevels = 1 (SDI and noSDI do yield the same result though: https://gitlab.com/romangrothausmann/ITK-CLIs/-/jobs/505584525). The output looks like a ramp image, basically not containing any info from the input any more. So there is definitely something wrong, but so far I cannot see where the problem may lie.
Streaming N4 (without SBS) works as expected, and testing SBS with another filter and streaming is in progress.

When run with the SBS, is the first slice OK?

There may be a problem with the N4 filter being run multiple times. It may be worth testing that independently.

When run with the SBS, is the first slice OK?

Yes, the first slice seems fine, and I figured out that the ramp image dominates but the actual image data is still there when using higher bit depths, e.g. 16-bit. Adjusting the brightness/contrast just within a small region leads to this:

There may be a problem with the N4 filter being run multiple times. It may be worth testing that independently.

Is there a way to reset the filter chain applied by the SBS filter? n4->ReleaseDataFlagOn(); did not seem to have an effect.

I would try running the N4 outside of this complicated pipeline twice with different inputs to see if it behaves as expected.
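
For example (imageA/imageB are hypothetical 2D test images; n4 as in the earlier sketch):

// run the same N4 instance on two different inputs; if the second result
// differs from a fresh filter run on imageB alone, state is carried over
n4->SetInput(imageA);
n4->Update();
SliceType::Pointer firstOutput = n4->GetOutput();
firstOutput->DisconnectPipeline(); // keep the first result

n4->SetInput(imageB);
n4->Update(); // compare n4->GetOutput() against a fresh N4 run on imageB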

I would try running the N4 outside of this complicated pipeline twice with different inputs to see if it behaves as expected.

True, I will try to create a test for the ITK test suite.
It seems the number of threads matters for the result, e.g. if run on a GitLab runner with 24 CPUs the current test succeeds

but if run on a standard GitLab runner (providing only one CPU) the result differs

The other ITK-CLI tests do not depend on a specific runner. I had a similar problem with elastix, though.

This is quite a bit of a hassle, because the N4 implementation does not offer a down-sample parameter directly, so the bias field at full resolution needs to be constructed separately and applied, as in e.g.

However, the API has changed between the original implementation and the one in ITK-5, so the bias field can be generated as such

Oddly, it seems the N4 test checks only the bias field, not the final corrected image, so the test that was adjusted to the API changes does not include the application of the up-sampled bias field.

Is there a reason why the N4 implementation does not have an option to down-sample first, run N4, and apply the up-sampled bias field to the input image to produce the output?
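
For reference, a sketch of that separate construction along the lines of the classic N4 examples (inputImage stands for the full-resolution 3D image, the shrink factor is arbitrary, and this uses the pre-ITK-5.1 BSplineControlPointImageFilter route, i.e. the “old way” discussed below):

#include "itkBinShrinkImageFilter.h"
#include "itkN4BiasFieldCorrectionImageFilter.h"
#include "itkBSplineControlPointImageFilter.h"
#include "itkVectorIndexSelectionCastImageFilter.h"
#include "itkExpImageFilter.h"
#include "itkDivideImageFilter.h"

using RealImageType = itk::Image<float, 3>;
using N4Type = itk::N4BiasFieldCorrectionImageFilter<RealImageType>;

// 1) down-sample and estimate the bias field at low resolution
auto shrinker = itk::BinShrinkImageFilter<RealImageType, RealImageType>::New();
shrinker->SetInput(inputImage);
shrinker->SetShrinkFactors(4);

auto n4 = N4Type::New();
n4->SetInput(shrinker->GetOutput());
n4->Update();

// 2) reconstruct the log bias field at full resolution from the
//    B-spline control point lattice
using BSplinerType = itk::BSplineControlPointImageFilter<
  N4Type::BiasFieldControlPointLatticeType, N4Type::ScalarImageType>;
auto bspliner = BSplinerType::New();
bspliner->SetInput(n4->GetLogBiasFieldControlPointLattice());
bspliner->SetSplineOrder(n4->GetSplineOrder());
bspliner->SetSize(inputImage->GetLargestPossibleRegion().GetSize());
bspliner->SetOrigin(inputImage->GetOrigin());
bspliner->SetSpacing(inputImage->GetSpacing());
bspliner->SetDirection(inputImage->GetDirection());
bspliner->Update();

// 3) the reconstructed field has Vector<float, 1> pixels; extract the
//    scalar component, exponentiate, and divide it out of the input
auto extract = itk::VectorIndexSelectionCastImageFilter<
  N4Type::ScalarImageType, RealImageType>::New();
extract->SetInput(bspliner->GetOutput());
extract->SetIndex(0);

auto expFilter = itk::ExpImageFilter<RealImageType, RealImageType>::New();
expFilter->SetInput(extract->GetOutput());

auto divider =
  itk::DivideImageFilter<RealImageType, RealImageType, RealImageType>::New();
divider->SetInput1(inputImage);
divider->SetInput2(expFilter->GetOutput());
divider->Update(); // corrected image at full resolution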

Not sure if related, but I just noticed that streaming N4 (without SBS) does not work for ITK-4.13.1


yielding

Description: Requested region is (at least partially) outside the largest possible region.

Oddly, it seems the N4 test checks only the bias field, not the final corrected image, so the test that was adjusted to the API changes does not include the application of the up-sampled bias field.

While trying to implement up-sampling making use of the change from

I now stumbled upon the same/similar issue as with streaming N4 of ITK-4.13.1:

yielding

N4BiasFieldCorrectionImageFilter  #  #  #  #  #  #  #  #  #  #  #  #  #  #  #  #  #  #  #  #  #  #  #  #  #  #  #  #  #  #  done. (real= 2.0s, CPUs= 12.7s)
ExpImageFilter progress: 100.0% done. (real= 0.0s, CPUs= 0.1s)

itk::InvalidRequestedRegionError (0x1a2f470)
Location: "unknown" 
File: /ITK/Modules/Core/Common/src/itkDataObject.cxx
Line: 355
Description: Requested region is (at least partially) outside the largest possible region.

i.e. failing on

apparently the input and the biasField returned by ReconstructBiasField are not equal in size/origin.
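
A quick diagnostic for this (my own snippet; n4 and inputImage as in the sketch further up, with N4 having run on the shrunk image):

auto biasField = n4->ReconstructBiasField(n4->GetLogBiasFieldControlPointLattice());
std::cout << "input: " << inputImage->GetLargestPossibleRegion()
          << " origin " << inputImage->GetOrigin() << std::endl;
std::cout << "field: " << biasField->GetLargestPossibleRegion()
          << " origin " << biasField->GetOrigin() << std::endl;

If the field comes back with the geometry of the (shrunk) N4 input rather than that of the full-resolution image, that would explain the region error when dividing.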

@dzenanz Did ReconstructBiasField for up-sampling work for you when working on

Just as a note for others: The error

error: ‘itk::N4BiasFieldCorrectionImageFilter<TInputImage, TMaskImage, TOutputImage>::RealImagePointer itk::N4BiasFieldCorrectionImageFilter<TInputImage, TMaskImage, TOutputImage>::ReconstructBiasField(itk::N4BiasFieldCorrectionImageFilter<TInputImage, TMaskImage, TOutputImage>::BiasFieldControlPointLatticeType*) [with TInputImage = itk::Image<float, 3>; TMaskImage = itk::Image<unsigned char, 3>; TOutputImage = itk::Image<float, 3>; itk::N4BiasFieldCorrectionImageFilter<TInputImage, TMaskImage, TOutputImage>::RealImagePointer = itk::SmartPointer<itk::Image<float, 3> >; itk::N4BiasFieldCorrectionImageFilter<TInputImage, TMaskImage, TOutputImage>::BiasFieldControlPointLatticeType = itk::Image<itk::Vector<float, 1>, 3>]’ is private within this context

arises if using ReconstructBiasField with an ITK version before v5.1rc1, i.e. before

I was using the N4 filter in this example. It was working correctly when I committed it, and it should still be working.

Wow, that is an amazing application and a cool example, many thanks for the pointer.
However, isn’t ReconstructBiasField meant to replace


as in

It could be. I don’t remember any more. Maybe I just forgot to update the example once the ITK patch was integrated? You could try replacing it, and if it works correctly make a PR.

Tried the old way


which works (https://gitlab.com/romangrothausmann/ITK-CLIs/-/jobs/510871225), and then replaced it with the new method


which then fails (https://gitlab.com/romangrothausmann/ITK-CLIs/-/jobs/510951570) with Description: Requested region is (at least partially) outside the largest possible region.
The only difference I see so far is that the former bspliner->Update(); is now basically replaced by an update on the output of the reconstructor

Could that lead to the observed problem?

The reason was simply a case of avoiding feature creep. In the original N3 implementation, John Sled/MNI used something akin to the itkShrinkImageFilter, and that’s what we use in the ANTs interface, but there’s no reason why somebody couldn’t use the itkResampleImageFilter. With the latter, though, one then has to worry about potentially exposing all the parameters associated with that choice. To avoid that choice and the set of unnecessary parameters, and because there seems to be a clean conceptual separation between estimating a bias field and applying that correction to an image, there is no downsampling/upsampling in the filter.

Many thanks @ntustison for your explanation. I see. What made me wonder is the fact that the filter returns the corrected image as its output


not the bias field. So to me it seemed that the main idea of the filter is to yield the corrected image, in which case including the down-sampling, up-sampling, and application of the bias field would be very helpful, especially since these parts are not so straightforward and are only needed in the case of a down-sampling factor > 1.