I am looking for explanations and advice; I hope you can help. I would like to be able to Graft an itk::Image to an itk::CudaImage (which is similar to itk::GPUImage). I recently removed some code in itk::CudaImage (doing this) to be more consistent with itk::GPUImage, see this commit, but, as you can see, the FirstCudaReconstruction.py example becomes more complex: it actually reproduces the itk::Image::Graft function by hand, see here and here. This function is inaccessible because this line is in the protected section. So I first tried to simply move it to the public section, but that does not work because SetPixelContainer is not virtual (and therefore not overridden in itk::GPUImage), so the wrong version is called by itk::Image::Graft.
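To make the problem concrete, here is a minimal, self-contained sketch (simplified names, not the actual ITK classes) of why a non-virtual function called from a base-class method cannot be redirected by a derived-class shadow:

#include <iostream>

struct Image
{
  // Non-virtual, as itk::Image::SetPixelContainer is: resolved statically.
  void SetPixelContainer() { std::cout << "Image::SetPixelContainer\n"; }

  void Graft(Image *)
  {
    // Always binds to Image::SetPixelContainer, even when 'this'
    // is actually a GPUImage.
    this->SetPixelContainer();
  }
};

struct GPUImage : Image
{
  // Shadows (does not override) the base-class function.
  void SetPixelContainer() { std::cout << "GPUImage::SetPixelContainer\n"; }
};

int main()
{
  GPUImage gpuImage;
  Image    cpuImage;
  gpuImage.Graft(&cpuImage); // prints "Image::SetPixelContainer"
}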
So my questions are:
Is it intentional that some functions in itk::Image are not virtual and are shadowed by itk::GPUImage?
What is the best solution here? To shadow Graft(itk::Image *) in itk::GPUImage?
Since PixelContainer is typed, and typed to contain a pixel buffer stored in the CPU's memory, we do not want or need SetPixelContainer to be virtual. The using declaration is misleading; there is a custom Graft implementation for itk::GPUImage:
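The idea behind that custom Graft is roughly the following (a paraphrased sketch, not verbatim ITK source): graft the CPU side through the superclass, then graft the GPU-side data manager as well.

template <typename TPixel, unsigned int VImageDimension>
void
GPUImage<TPixel, VImageDimension>::Graft(const Self * data)
{
  // Graft the CPU pixel container, regions, and meta-data.
  Superclass::Graft(data);

  // Also graft the GPU buffer so both images share device memory.
  auto * gpuData = dynamic_cast<GPUImageDataManager<GPUImage> *>(data->GetGPUDataManager().GetPointer());
  m_DataManager->SetImagePointer(this);
  m_DataManager->Graft(gpuData);
}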
Thanks Matt. Yes, there is already a similar custom Graft for CudaImage, which might need to be corrected because I realize the SetImagePointer call is missing (the TimeStamp is not used by the DataManager, so I removed it):
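A hedged sketch of what the corrected CudaImage::Graft could look like, with the missing SetImagePointer call added (names follow the CudaImage and data manager classes discussed above, reproduced from memory rather than from the actual source):

template <typename TPixel, unsigned int VImageDimension>
void
CudaImage<TPixel, VImageDimension>::Graft(const Self * data)
{
  // Graft the CPU pixel container, regions, and meta-data.
  Superclass::Graft(data);

  // Point the data manager back at this image (the missing call),
  // then graft the CUDA-side buffer.
  m_DataManager->SetImagePointer(this);
  m_DataManager->Graft(data->GetCudaDataManager());
}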
But that means there is no way to Graft an itk::Image to an itk::GPUImage, right? Shouldn’t this be allowed? This is particularly useful in Python, because most filters do not accept a GPUImage. How do you handle conversion from CPU to GPU images in Python, e.g., to feed the CPU output of an itk::ImageFileReader to the GPU input of a GPUImageToImageFilter?
In Python, I think it should look like this (though we have not wrapped itk.GPUImage):
import itk

PixelType = itk.ctype('float')
image = itk.imread('myfile.nrrd', PixelType)
GPUImageType = itk.GPUImage[PixelType, image.GetImageDimension()]
gpu_filter = itk.GPUImageToImageFilter[type(image), GPUImageType].New()
gpu_filter.SetInput(image)
gpu_filter.Update()
gpu_image = gpu_filter.GetOutput()
# Transfer from GPU memory to CPU memory if needed
gpu_image.UpdateBuffers()
# Since itk::GPUImage inherits from itk::Image, it can be used in parent filters
Thanks again for suggesting a solution, but I don't see how this could work: GPUImageToImageFilter does nothing but call GPUGenerateData after allocating the outputs, and GPUGenerateData itself does nothing.
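To illustrate, a simplified sketch of that control flow (paraphrased, not verbatim ITK source):

class GPUImageToImageFilterSketch
{
public:
  void GenerateData()
  {
    this->AllocateOutputs();  // allocate the output images
    this->GPUGenerateData();  // delegate to the GPU implementation...
  }

protected:
  virtual void GPUGenerateData() {} // ...which by default does nothing

  void AllocateOutputs() {}
};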
Like ImageToImageFilter, GPUImageToImageFilter is not meant to be used directly but to be inherited from. It is not a converter from a CPU image to a GPU image. Or am I missing something?