How to get an oblique 2D slice from a volume (SimpleITK)

Hello,

I’m searching for a code example of extracting an oblique 2D slice.
Asking ChatGPT and Bard, I was able to understand the underlying geometry: plane center point, plane normal vector, rotation matrix, translation matrix. They even generated some code, but it used methods that don’t exist.

From what I understand, the resample receives these parameters and rotates the entire volume so that I can extract a slice along some axis. I have already managed to extract slices along the axial, coronal and sagittal axes, and that works correctly. But now I find myself lost on how to use the resample. I read the documentation but it’s still unclear.

I would be very grateful if someone could provide a code example.

Hello @Salu_Ramos,

This isn’t trivial because of the need to specify the resampling grid. You need to decide on its origin, size and spacing so that these make sense for your use case. Resampling does not “rotate the entire volume”; it is a point-based operation: points in space are transformed and the intensity values at those spatial locations in the original image are estimated (for additional details see this notebook).

You can work on a volume to obtain an oblique version of it, i.e. multiple 2D slices in one go (this just “hides” the fact that by default we use the same origin, size and spacing as the original volume):

import SimpleITK as sitk

file_name = "path/to/volume.nrrd"  # placeholder, replace with the path to your volume

image = sitk.ReadImage(file_name)
image_center = image.TransformContinuousIndexToPhysicalPoint(
    [(i - 1) / 2.0 for i in image.GetSize()]
)
# rotate 45 degrees around the x axis; the center of rotation is the image center
transform = sitk.Euler3DTransform(image_center, 0.785398, 0, 0)

# use the original image grid for resampling
sitk.WriteImage(sitk.Resample(image, transform), "oblique.nrrd")

Hello @zivy,

Thank you very much for your reply.
I made changes to the code; for future readers, my function ended up like this:

import numpy as np
import SimpleITK as sitk
import time

def oblique_slice(self, theta_x_degrees: float, theta_y_degrees: float, theta_z_degrees: float, slice_size: list[int], slice_index: list[int]) -> np.ndarray:
    """
    Extract an oblique slice from the volume and return it as a NumPy array.
    """
    theta_x_radians = np.deg2rad(theta_x_degrees)
    theta_y_radians = np.deg2rad(theta_y_degrees)
    theta_z_radians = np.deg2rad(theta_z_degrees)
    rotation_matrix = sitk.Euler3DTransform()
    rotation_matrix.SetRotation(theta_x_radians, theta_y_radians, theta_z_radians)
    time1 = time.time()
    # resample the entire volume onto its original grid through the rotation
    rotated_volume = sitk.Resample(self.volume, rotation_matrix)
    time2 = time.time()
    print(f"resample timer = {time2 - time1}")
    # a 0 in slice_size collapses that dimension, so Extract returns a 2D image
    slice_extraction = sitk.Extract(rotated_volume, slice_size, slice_index)
    return sitk.GetArrayFromImage(slice_extraction)

With these parameters I will be able to implement an MPR function.
I am trying to optimize this code because Resample is taking too much time, approximately 0.04-0.045 seconds, which gives me only around 20-25 frames per second.
In my MainSettings I set:

sitk.ProcessObject.SetGlobalDefaultNumberOfThreads(multiprocessing.cpu_count())

And now I’m trying to use the GPU with PyTorch or TensorFlow, after reading this:

which by coincidence you answered too, lol!
I will report back if I get anywhere.

Hello @Salu_Ramos,

I am not sure of the size of the input image, but you are resampling the whole input image and then extracting a slice. Resampling would be more efficient if the output image size were specified to have a size of 1 in the z dimension.


Hello again @blowekamp, I came back to this and I would like to understand that sentence of yours.

I would like to acquire an extraction of an oblique plane, with the plane positioned anywhere along its normal axis.
For example, with ExtractImageFilter, if the extraction plane is axial I can set the size to [xSize, ySize, 0] and the index to [0, 0, n].

In this case, is it impossible to optimize the resample?

Currently in the above function you are resampling with this call:

rotated_volume = sitk.Resample(self.volume, rotation_matrix)

This signature of the Resample function takes all the output geometry information from the input image (self.volume). I am proposing that you explicitly set all the output geometry information manually, mostly copying it from the input; however, the output size is replaced by slice_size and the output index by slice_index.

I would recommend using the object-oriented interface to the ResampleImageFilter so you can be explicit about the parameters being set.

I forgot to mention that now I’m doing this in Java, like this:

    private ImageSeriesReader reader = new ImageSeriesReader();  // series reader used in start()
    private Image image;
    private double xDegrees = 0;
    private double yDegrees = 0;
    private double zDegrees = 0;
    private int layer;

    public void start () {
        String path = "dicom path here";
        VectorString dicomFilesNames = ImageSeriesReader.getGDCMSeriesFileNames(path);
        this.reader.setFileNames(dicomFilesNames);
        this.image = this.reader.execute();
        Image tempImage = rotateImageWithResample(image);
        tempImage = extractSingleFrameSlice(tempImage, new long[]{512, 512, 0}, new int[]{0, 0, this.layer});
    } 

    private Image extractSingleFrameSlice(Image pixeldata, long[] sliceSize, int[] sliceIndex) {
        VectorUInt32 vectorSize = new VectorUInt32(sliceSize);
        VectorInt32 vectorIndex = new VectorInt32(sliceIndex);
        // a size of 0 in one dimension makes Extract collapse it and return a 2D image
        ExtractImageFilter filter = new ExtractImageFilter();
        filter.setIndex(vectorIndex);
        filter.setSize(vectorSize);
        filter.setNumberOfWorkUnits(1);
        return filter.execute(pixeldata);
    }

    private Image rotateImageWithResample(Image series) {
        this.xDegrees = this.xDegrees % 360;
        this.yDegrees = this.yDegrees % 360;
        this.zDegrees = this.zDegrees % 360;
        double thetaXRadians = Math.toRadians(this.xDegrees);
        double thetaYRadians = Math.toRadians(this.yDegrees);
        double thetaZRadians = Math.toRadians(this.zDegrees);

        Euler3DTransform rotationMatrix = new Euler3DTransform();
        rotationMatrix.setRotation(thetaXRadians, thetaYRadians, thetaZRadians);

        ResampleImageFilter resampleFilter = new ResampleImageFilter();
        resampleFilter.setInterpolator(InterpolatorEnum.sitkLinear);
        resampleFilter.setTransform(rotationMatrix);
        resampleFilter.setDefaultPixelValue(255);
        // the output grid is copied from the input series, so the entire volume is resampled
        resampleFilter.setSize(series.getSize());
        resampleFilter.setOutputOrigin(series.getOrigin());
        resampleFilter.setOutputSpacing(series.getSpacing());
        resampleFilter.setOutputDirection(series.getDirection());
        resampleFilter.setNumberOfWorkUnits(1);
        return resampleFilter.execute(series);
    }

This is summarized code; I made an MRE to test the performance of the function. In my MRE I can rotate around x/y/z and change the layer using the keyboard keys.
But it takes approximately 200 ms per render.

I was able to reduce the output size by translating the transform, so that only a single slice is resampled instead of the whole volume; this sped it up from ~200 ms to ~5 ms:

private Image extractSingleFrameSliceWithRotation(Image series, long[] sliceSize, int layer, int orientationIndex) {
        this.xDegrees = this.xDegrees % 360;
        this.yDegrees = this.yDegrees % 360;
        this.zDegrees = this.zDegrees % 360;
        double thetaXRadians = Math.toRadians(this.xDegrees);
        double thetaYRadians = Math.toRadians(this.yDegrees);
        double thetaZRadians = Math.toRadians(this.zDegrees);

        Euler3DTransform transform = new Euler3DTransform();
        transform.setRotation(thetaXRadians, thetaYRadians, thetaZRadians);
        // shift the sampling grid along the chosen axis so the single resampled
        // slice lands on the requested layer
        double[] translationVector = new double[]{0, 0, 0};
        translationVector[orientationIndex] = layer * series.getSpacing().get(orientationIndex);
        VectorDouble translation = new VectorDouble(translationVector);
        transform.setTranslation(translation);

        ResampleImageFilter resampleFilter = new ResampleImageFilter();
        resampleFilter.setInterpolator(InterpolatorEnum.sitkLinear);
        resampleFilter.setTransform(transform);
        resampleFilter.setDefaultPixelValue(255);
        // resample only one slice along the chosen axis instead of the whole volume
        long[] size = Arrays.copyOfRange(sliceSize, 0, sliceSize.length);
        size[orientationIndex] = 1;
        resampleFilter.setSize(new VectorUInt32(size));
        resampleFilter.setOutputOrigin(series.getOrigin());
        resampleFilter.setOutputSpacing(series.getSpacing());
        resampleFilter.setOutputDirection(series.getDirection());
        resampleFilter.setNumberOfWorkUnits(1);

        // sliceSize is expected to contain 0 at orientationIndex so that
        // Extract collapses that dimension and returns a 2D image
        VectorUInt32 vectorSize = new VectorUInt32(sliceSize);
        ExtractImageFilter extractFilter = new ExtractImageFilter();
        extractFilter.setIndex(new VectorInt32(new int[]{0, 0, 0}));
        extractFilter.setSize(vectorSize);
        extractFilter.setNumberOfWorkUnits(1);
        return extractFilter.execute(resampleFilter.execute(series));
    }


Great! Hopefully that gives you the performance needed to meet your requirements.
