Hello. I’m currently performing a CT reconstruction using FDK. This involves setting up the projections and setting up the reconstructed data. How should the origin, spacing and size of these two parts be set? Currently I read a 512×512×200 projection image from a raw file and save and read the projections one by one.
for i in range(nproj):
    projection = itk.GetImageFromArray(projections[:, :, i], is_vector=False)
    projection.SetOrigin([0, 0])
    projection.SetSpacing([0.198 * 3, 0.198 * 3])
    # projection.SetSize([512, 512])
    projection.Allocate()
In ITK, an image is defined by its origin, spacing and size: size is the number of pixels in each direction, spacing is the distance between pixels, and origin is the physical coordinates of the first pixel in memory. In RTK, the geometry defines the relation between the coordinates of the reconstructed object and the coordinates of each projection.
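For instance, a minimal sketch along these lines (the source and detector distances, angles, and volume size are placeholder values to adapt to your setup) describes the projection stack, the reconstruction volume and the geometry that links them:

import itk
from itk import RTK as rtk

image_type = itk.Image[itk.F, 3]
nproj, size, spacing = 200, 512, 0.198 * 3

# One source/detector position per projection on a circular trajectory
# (the sid and sdd values below are made-up distances in mm).
geometry = rtk.ThreeDCircularProjectionGeometry.New()
for i in range(nproj):
    geometry.AddProjection(1000.0, 1500.0, i * 360.0 / nproj)

# Projection stack: the third dimension counts the projections; the origin
# centers the detector around the projection of the isocenter.
projections = rtk.constant_image_source(
    origin=[-(size - 1) * spacing / 2.0, -(size - 1) * spacing / 2.0, 0.0],
    size=[size, size, nproj],
    spacing=[spacing] * 3,
    ttype=image_type)

# Reconstruction volume: chosen independently of the projections, e.g. a
# centered 256^3 volume with 1 mm voxels.
volume = rtk.constant_image_source(
    origin=[-127.5] * 3,
    size=[256] * 3,
    spacing=[1.0] * 3,
    ttype=image_type)

fdk = rtk.FDKConeBeamReconstructionFilter[image_type].New()
fdk.SetInput(0, volume)        # defines origin/spacing/size of the output volume
fdk.SetInput(1, projections)   # the projection stack
fdk.SetGeometry(geometry)

In your case you would then copy each 2D projection from your raw file into this stack, or build the stack directly with itk.GetImageFromArray on the whole 3D array and set its origin and spacing afterwards.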
So when I use itk.GetImageFromArray(projections[i], is_vector=False), the origin is set to [0,0,0]. When I use projection = rtk.constant_image_source(origin=[origin,origin,0.], size=[size,size,1], spacing=[spacing]*3, ttype=image_type), the origin is set to [-(size-1)/2]*3. Is the reason that ITK and RTK use different coordinate conventions?
No. GetImageFromArray creates an ITK image from a NumPy array. NumPy arrays don’t have an origin or spacing, so it defaults to origin 0 and spacing 1. In the other code, you indicate which origin and spacing to set. Had you written rtk.constant_image_source(origin=[0.]*3, size=[size,size,1], spacing=[1.]*3, ttype=image_type), you would have obtained the same result.
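To see it on a tiny example (made-up values, just to compare the metadata of the two images):

import itk
import numpy as np
from itk import RTK as rtk

image_type = itk.Image[itk.F, 3]
size = 4

# 1) From a NumPy array: origin defaults to 0 and spacing to 1 ...
arr = np.zeros([1, size, size], dtype=np.float32)
img_a = itk.GetImageFromArray(arr, is_vector=False)
# ... unless you set them yourself afterwards:
img_a.SetOrigin([0.0, 0.0, 0.0])
img_a.SetSpacing([1.0, 1.0, 1.0])

# 2) From rtk.constant_image_source with the same explicit values:
img_b = rtk.constant_image_source(origin=[0.0] * 3, size=[size, size, 1],
                                  spacing=[1.0] * 3, ttype=image_type)

print(itk.origin(img_a), itk.origin(img_b))    # same origin
print(itk.spacing(img_a), itk.spacing(img_b))  # same spacing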
I want to change the code to the CUDA version in inline.py, but it seems to behave differently from the CPU version. My code:

extractor = itk.ExtractImageFilter[image_type, image_type].New(Input=reader.GetOutput())
cuda_input = cuda_type.New()
cuda_input.SetPixelContainer(extractor.GetOutput().GetPixelContainer())
cuda_input.CopyInformation(extractor.GetOutput())
cuda_input.SetBufferedRegion(extractor.GetOutput().GetBufferedRegion())
cuda_input.SetRequestedRegion(extractor.GetOutput().GetRequestedRegion())
parker = rtk.ParkerShortScanImageFilter[image_type].New(Input=extractor.GetOutput(), Geometry=geometry_rec)
fdk = rtk.CudaFDKConeBeamReconstructionFilter.New(Geometry=geometry_rec)
However, fdk.GetOutput().UpdateOutputInformation() raises:

ITK ERROR: ExtractImageFilter(0x64d27b9d57e0): The number of zero sized dimensions in the input image Extraction Region is not consistent with the dimensionality of the output image. Expected the extraction region size ([0, 0, 0]) to contain 0 zero sized dimensions to collapse.
I carefully compared it with the CPU version and the shape of each variable is the same, so why does the CUDA version produce this error? It is worth mentioning that I changed the code for reading and saving the projections to:

projection = itk.GetImageFromArray(projections[i], is_vector=False)
projection.SetOrigin([0, 0])
projection.SetSpacing([0.198 * 3, 0.198 * 3])
projection.Allocate()
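For reference, the CPU-to-CudaImage wrapping above boils down to a small helper like this (the name cpu_to_cuda is just for readability; it assumes cuda_type is something like itk.CudaImage[itk.F, 3] from a CUDA-enabled RTK build):

def cpu_to_cuda(cpu_image, cuda_image_type):
    # Wrap the CPU image's pixel buffer into a CudaImage without copying the data.
    cuda_image = cuda_image_type.New()
    cuda_image.SetPixelContainer(cpu_image.GetPixelContainer())
    cuda_image.CopyInformation(cpu_image)
    cuda_image.SetBufferedRegion(cpu_image.GetBufferedRegion())
    cuda_image.SetRequestedRegion(cpu_image.GetRequestedRegion())
    return cuda_image

# The CPU pipeline must have produced its data first, e.g.:
# extractor.Update()
# cuda_input = cpu_to_cuda(extractor.GetOutput(), cuda_type)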
I am currently trying to modify it myself, and it is already running. But the reconstruction results are all-black images, and I wonder at which step the data is lost. This is my code:
I actually moved the declarations of fdk and parker to after reading extracted_region, let the extractor read the data before doing the Update, and then passed it through parker and fdk, but when I printed fdk.GetOutput() it was all zeros.
I really appreciate your reply, which solved my problem. I’m sorry to bother you. Next, I will optimize the reconstruction results, and use Parker + FDK. Do you have any suggestions to improve the quality of reconstruction?
There are many things you can do to improve image quality: better geometric calibration, beam hardening correction, scatter correction, denoising (pre-, per- or post-reconstruction), etc. Maybe start a new thread with a more specific issue you want to address.
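As one small, hypothetical example of post-reconstruction denoising (a simple median filter on the reconstructed volume; the file names and the 1-voxel radius are placeholders, not a recommendation tuned to your data):

import itk

volume = itk.imread("rec.mha", itk.F)
denoised = itk.median_image_filter(volume, radius=1)  # 3x3x3 median neighborhood
itk.imwrite(denoised, "rec_median.mha")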
Oh! Thank you very much for your detailed answer. I will first do some research on my own, and if I encounter any difficulties that are too challenging to solve, I will reach out to you again.