Hi. I’m trying to port my ITK DRR renderer to RTK to leverage some of the GPU speedups, but the RTK API and output are considerably different from ITK and I’m not getting very far.
For instance, here’s a lateral DRR from ITK:
And here is what RTK outputs. Ignore the rotation and FOV difference…
The ITK workflow makes sense to me logically:
- Resample filter, 3D CT as input, 2D DRR as output.
- Use raycast interpolator, set xray source pos, attach to filter.
- Filter and interpolator get a transform (AffineTransform in my case)
- For each output pixel, interpolator casts a ray, accumulates result.
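In code, that ITK pipeline looks roughly like this (simplified sketch; `itk::RayCastInterpolateImageFunction` + `itk::ResampleImageFilter`, parameter values are placeholders):

```cpp
#include "itkImage.h"
#include "itkAffineTransform.h"
#include "itkRayCastInterpolateImageFunction.h"
#include "itkResampleImageFilter.h"

using ImageType = itk::Image<float, 3>;

ImageType::Pointer MakeItkDrr(ImageType::Pointer ct,
                              itk::AffineTransform<double, 3>::Pointer transform,
                              const ImageType::PointType &   focalPoint,
                              const ImageType::SizeType &    size,
                              const ImageType::SpacingType & spacing,
                              const ImageType::PointType &   origin)
{
  // Ray-cast interpolator: for each output pixel it integrates the volume
  // along the ray from the focal point (virtual x-ray source) to that pixel.
  auto interpolator =
    itk::RayCastInterpolateImageFunction<ImageType, double>::New();
  interpolator->SetFocalPoint(focalPoint); // virtual x-ray source position
  interpolator->SetTransform(transform);
  interpolator->SetThreshold(0.0);         // ignore voxels below this value

  // Resample filter iterates over the 2D output (a single-slice 3D image)
  // and asks the interpolator for each pixel value.
  auto resample = itk::ResampleImageFilter<ImageType, ImageType>::New();
  resample->SetInput(ct);
  resample->SetInterpolator(interpolator);
  resample->SetTransform(transform);
  resample->SetSize(size);             // e.g. {512, 512, 1}
  resample->SetOutputSpacing(spacing);
  resample->SetOutputOrigin(origin);
  resample->SetDefaultPixelValue(0);   // rays that miss the volume stay black
  resample->Update();
  return resample->GetOutput();
}
```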
I’m kinda not grokking how the overall RTK forward projection paradigm works, though.
RTK workflow as I currently have it:
- Create ConstantImageSource:
  - SetSize(DRR output size, in px)
  - SetSpacing(DRR pixel spacing, in mm)
  - SetOrigin(??) - What origin am I setting here? In what coordinate space?
  - SetConstant(0) - What is this for?
- SetInput(1, 3D CT) on the forward projection filter
- Create Reg23ProjectionGeometry
Set virtual xray source pos, virtual xray detector pos, and DRR direction vectors
Ultimately I expect to transform these params by the AffineTransform that I use for my ITK DRRs to position my src/det correctly. First things first, though…
- ForwardProjection output to 2D image
- Why does it accumulate everything outside the volume to solid white? No matter how far back I move my virtual source pos in ITK, it never accumulates empty space to white.
- What is the relationship of the ConstantImageSource to the forward projector? Is this just a hack to get some output parameters into the filter, or is it actually iterating over the ConstantImageSource space?
- Why are output parameters specified via an input to the forward projector?
- What is the relationship between the ConstantImageSource origin to the DRR? What are you actually setting when you set ConstantImageSource origin?
- What is the relationship of the ConstantImageSource constant value to anything? Seems not to have any effect.
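For reference, here's roughly what my RTK setup looks like as a simplified C++ sketch (I'm using the Joseph forward projector; exact signatures may be slightly off, and the geometry values are placeholders):

```cpp
#include "itkImage.h"
#include "rtkConstantImageSource.h"
#include "rtkReg23ProjectionGeometry.h"
#include "rtkJosephForwardProjectionImageFilter.h"

using ImageType = itk::Image<float, 3>;

ImageType::Pointer MakeRtkDrr(ImageType::Pointer ct,
                              const ImageType::SizeType &     size,
                              const ImageType::SpacingType &  spacing,
                              const ImageType::PointType &    origin,
                              const itk::Point<double, 3> &   sourcePos,
                              const itk::Point<double, 3> &   detectorPos,
                              const itk::Vector<double, 3> &  rowDir,
                              const itk::Vector<double, 3> &  colDir)
{
  // "Empty" projection stack that the forward projector accumulates into
  // (3rd dimension = number of projections, here 1).
  auto projections = rtk::ConstantImageSource<ImageType>::New();
  projections->SetSize(size);       // e.g. {512, 512, 1}
  projections->SetSpacing(spacing); // detector pixel spacing, mm
  projections->SetOrigin(origin);   // ?? -- see my questions above
  projections->SetConstant(0.);     // initial pixel value

  // One projection: source/detector positions plus the detector's in-plane
  // row and column direction vectors, all in the CT's coordinate system.
  auto geometry = rtk::Reg23ProjectionGeometry::New();
  geometry->AddReg23Projection(sourcePos, detectorPos, rowDir, colDir);

  auto forward =
    rtk::JosephForwardProjectionImageFilter<ImageType, ImageType>::New();
  forward->SetInput(0, projections->GetOutput()); // stack to accumulate into
  forward->SetInput(1, ct);                       // volume to project
  forward->SetGeometry(geometry);
  forward->Update();
  return forward->GetOutput();
}
```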
Any sort of help here would be appreciated. Thanks!