Converting 2D images with tracking information into 3D volume

Hi, this is my first time using this forum and I am a beginner with SimpleITK in Python.

I have a series of PNG ultrasound images. Each image has corresponding (location, quaternion) pose data from a tracker that indicates where it was taken. Using this pose data, I would like to embed each image at its corresponding location in a 3D volume and interpolate the gaps between slices. What is the proper way to do this? My current approach is to compute each pixel's 3D location and paint the corresponding voxel with that pixel's intensity, then run an interpolator to fill in the gaps. But I am guessing there is a better way.
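For concreteness, the paint-and-average step I describe could be sketched in plain NumPy (a minimal sketch, not a full reconstruction: the quaternion order `(w, x, y, z)`, the unit pixel spacing, the volume size, and the helper names are all my own assumptions, and gap interpolation is left out):

```python
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix from a unit quaternion, assumed ordered (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def splat_slice(volume, weight, img, R, t, spacing=1.0):
    """Paint one tracked 2D slice into the volume.

    Each pixel (u=column, v=row) lies in the slice plane z=0; the tracker
    pose (R, t) maps it to world/voxel space. Intensities and hit counts
    are accumulated so overlapping slices get averaged later.
    """
    h, w = img.shape
    vv, uu = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pix = np.stack([uu.ravel(), vv.ravel(), np.zeros(h * w)], axis=1) * spacing
    world = pix @ R.T + t
    idx = np.round(world).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=1)
    idx, vals = idx[inside], img.ravel()[inside]
    np.add.at(volume, tuple(idx.T), vals)   # accumulate intensities
    np.add.at(weight, tuple(idx.T), 1.0)    # count hits per voxel

# Toy example: one uniform 8x8 slice placed with identity rotation.
volume = np.zeros((16, 16, 16))
weight = np.zeros_like(volume)
img = np.full((8, 8), 100.0)
R = quat_to_matrix((1.0, 0.0, 0.0, 0.0))  # identity rotation
splat_slice(volume, weight, img, R, t=np.array([2.0, 2.0, 5.0]))

# Average where voxels were hit; untouched voxels stay 0 (the "gaps").
recon = np.divide(volume, weight, out=np.zeros_like(volume), where=weight > 0)
```

The averaging step matters because neighboring slices can land in the same voxel; without it, later slices would simply add on top of earlier ones.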

Thank you!

Hello @Ale_Ballester,

While this can be implemented in SimpleITK, I highly recommend taking a look at the Plus toolkit's volume reconstruction. Its code is available on GitHub if you feel you need to reimplement it.

You may not even need to write code: it can be used directly from 3D Slicer, as shown in the YouTube instructional video.