Shadowed functions in GPUImage (or CudaImage)

(Simon Rit) #1

I am looking for explanations and advice; I hope you can help. I would like to be able to Graft an itk::Image onto an itk::CudaImage (which is similar to itk::GPUImage). I recently removed some code from itk::CudaImage (doing this) to be more consistent with itk::GPUImage (see this commit), but, as you can see, the example becomes more complex: it actually reproduces the itk::Image::Graft function by hand, see here and here. That function is inaccessible because this line is in the protected section. So I first tried to simply move it to the public section, but that does not work because SetPixelContainer is not virtual (and therefore not overridden in itk::GPUImage), so the wrong version is called by itk::Image::Graft.
So my questions are:

  • is it intentional that some functions are not virtual in itk::Image and are shadowed by itk::GPUImage?
  • what is the best solution here? To shadow Graft(itk::Image *) in itk::GPUImage?

Thanks in advance!

(Matt McCormick) #2

Hi Simon,

Since PixelContainer is typed, and typed to contain a pixel buffer stored in the CPU’s memory, we do not want / need SetPixelContainer to be virtual. The using declaration is misleading – there is a custom Graft implementation for itk::GPUImage:

and itk::CudaImage could be similar.


(Simon Rit) #3

Thanks Matt. Yes, there is already a similar custom Graft for a CudaImage, which might need to be corrected because I realize the SetImagePointer call is missing (the TimeStamp is not used by the DataManager, so I removed it):

But that means there is no way to Graft an itk::Image to an itk::GPUImage, right? Shouldn’t this be allowed? This is particularly useful in Python, because most filters do not accept a GPUImage. How do you handle conversion from CPU to GPU images in Python, e.g., to feed the CPU output of an itk::ImageFileReader to the GPU input of a GPUImageToImageFilter?

(Matt McCormick) #4

In Python, I think it should look like this (though we have not wrapped itk.GPUImage):

PixelType = itk.ctype('float')
image = itk.imread('myfile.nrrd', PixelType)
GPUImageType = itk.GPUImage[PixelType, image.GetImageDimension()]
gpu_filter = itk.GPUImageToImageFilter[type(image), GPUImageType].New()
gpu_filter.SetInput(image)
gpu_filter.Update()
gpu_image = gpu_filter.GetOutput()
# Transfer from GPU memory to CPU memory if needed
# Since itk::GPUImage inherits from itk::Image, use it in parent filters

or, in ITK 5 syntax,

image = itk.imread('myfile.nrrd')
gpu_image = itk.gpu_image_to_image_filter(image)

So, the itk::CudaImage could be wrapped in a similar way…

(Simon Rit) #5

Thanks again for suggesting a solution. I don’t see how this could work: GPUImageToImageFilter does nothing but call GPUGenerateData after allocating the outputs

which does nothing

Like ImageToImageFilter, GPUImageToImageFilter is not meant to be used directly but to be inherited from. It’s not a converter from a CPU image to a GPU image. Or am I missing something?

(Matt McCormick) #6

If the second template argument is an itk::GPUImage, it will produce a GPU image.

But GPUGenerateData / GenerateData would need to be defined to populate the pixel buffer.