I’m interested in your CPUTOGPUImageFilterType. Can you give me a pointer to the corresponding filter? I can’t find it in ITK. For CUDA, we do the conversion manually, which costs almost nothing, but the image is not on the GPU yet. I think it’s a better idea to put these operations in a filter, and I’d like to know how it’s done in the one you used.
using CPUTOGPUImageFilterType = itk::CastImageFilter<CPUImageType, GPUImageType>;
using CPUTOGPUImageFilterPointer = typename CPUTOGPUImageFilterType::Pointer;
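For reference, here is a minimal sketch of how that filter type can be used end to end. The float pixel type, the 3-D dimension, and the CastToGPUImage helper are just placeholders on my side, since the actual template arguments don’t appear above:

#include "itkImage.h"
#include "itkGPUImage.h"
#include "itkCastImageFilter.h"

using CPUImageType = itk::Image<float, 3>;   // assumed pixel type / dimension
using GPUImageType = itk::GPUImage<float, 3>;
using CPUTOGPUImageFilterType = itk::CastImageFilter<CPUImageType, GPUImageType>;

GPUImageType::Pointer
CastToGPUImage(const CPUImageType::Pointer & cpuImage)
{
  auto caster = CPUTOGPUImageFilterType::New();
  caster->SetInput(cpuImage);
  caster->Update();                 // allocates a new output buffer and copies every pixel
  GPUImageType::Pointer gpuImage = caster->GetOutput();
  gpuImage->DisconnectPipeline();   // detach the result from the mini-pipeline
  return gpuImage;
}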
However, when I convert an array/vector to a GPU image, it only costs about 0.05 s. If I convert the CPU image to an array/vector and then convert that array/vector to a GPU image, I think the total time would be under 0.1 s (I haven’t tested it).
It just seems very odd to me. Why does converting a CPU image to a GPU image cost so much time? What operation does it perform?
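If it helps to pin down where the time goes, the cast can be timed in isolation, for example with itk::TimeProbe. This is only a rough sketch; CPUTOGPUImageFilterType is the alias from above and cpuImage is assumed to be an already-loaded CPUImageType image:

#include <iostream>
#include "itkTimeProbe.h"

itk::TimeProbe probe;
probe.Start();
auto caster = CPUTOGPUImageFilterType::New();
caster->SetInput(cpuImage);
caster->Update();                   // the step being measured
probe.Stop();
std::cout << "CPU -> GPU cast: " << probe.GetTotal() << " s" << std::endl;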
What are the CPUImageType and the GPUImageType? I don’t think there is anything GPU-specific in the cast image filter, so I don’t think the cost is related to the GPUImage type. The solution I suggested above is much faster since there is no data copy, whereas the cast does a data copy.
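To make the difference concrete: the suggestion referred to above isn’t quoted here, so the following is only an illustration of one possible no-copy route, not necessarily the exact approach meant. Instead of casting, the GPU image can be pointed at the existing pixel container, so no per-pixel copy happens on the CPU side; how well this interacts with the GPU data manager would need to be checked:

// Illustrative no-copy sketch (an assumption on my part, not necessarily
// the suggestion made earlier in this thread).
GPUImageType::Pointer gpuImage = GPUImageType::New();
gpuImage->CopyInformation(cpuImage);                        // spacing, origin, direction, ...
gpuImage->SetRegions(cpuImage->GetBufferedRegion());
gpuImage->SetPixelContainer(cpuImage->GetPixelContainer()); // reuse the CPU buffer, no copy
// The actual transfer to the GPU should still happen lazily, when a GPU
// filter first needs the buffer (worth verifying on your ITK version).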