Here is my problem:
I want to obtain 2D sagittal slices from a 3D volume composed of axial (cross-sectional) images. So far so good.
My problem arises when I want to resample/reformat these sagittal images against a STIR sequence from the same patient.
I would like to work with “real-world dimension” information rather than with pixels to match these two sequences (sagittal and STIR).
I’ve tried with ResampleImageFilter in Python, but this is what I obtain:
Could you help me please?
Thank you very much in advance!
Just to clarify:
Example of Sagittal image slice
Example of STIR image from the same subject
Welcome to SimpleITK!
To extract a slice from a volume use the slicing operator:
image[int(image.GetWidth()/2), :, :]
image[:, int(image.GetHeight()/2), :]
image[:, :, int(image.GetDepth()/2)]
For display purposes you may want to make the slices isotropic if they aren’t. I recommend going over the Jupyter notebook dealing with visualization. Also, possibly the notebook describing resampling.
I highly recommend skimming the notebook repository (you’ll find reasonable visualization components there) and possibly going over the online tutorial.
Thank you very much @zivy, but I don’t think my explanation was clear enough.
What I want is, once I have the set of sagittal and STIR images, to achieve a kind of “mapping” between the two, so that they are referenced in the same “real-world” coordinate system, since the images were acquired from the same subject, on the same equipment, in the same session.
I think that with these images I can explain it a little better.
In this one, we can see the sagittal image in red and the STIR in blue.
What I want is to achieve this overlap, and for that I would have to take some transform into account.
I hope I have explained myself better.
Thank you very much!
Still not 100% sure that I understand what you want:
- Register (align) two 3D MRI images that are not aligned.
- Overlay data from one image onto the other and extract a sagittal slice of the combined image.
Do you need to do step 1 before step 2, or are your images intrinsically aligned so that we don’t need step 1 (the transformation between the two is the identity)?
It sounds to me like you want to resample one image into the grid of the other image. That can be achieved with the resample example and an identity transform, assuming your images are already aligned as in the ITK-SNAP screenshot.
Thank you very much @zivy and @dzenanz for your time and attention!
What I want is what you say, @dzenanz.
The only difference is that my sagittal slice (256x126), obtained from the 3D axial volume (MxNxn_slices = 256x256x126), isn’t already aligned with the STIR image (512x512).
My sagittal image:
I want to convert the sagittal slice to the space of the STIR image, to get the result shown in this image:
So I’m not sure what I have to do (use the pixel spacing, spacing between slices, image position, image orientation information…). I’m quite a beginner.
I appreciate all the help!
All the best.
If you have just one slice, you need 2D-3D registration. If multiple slices, then you need multi-modal 3D registration.
Thank you all! I’ve solved my problem.
All I needed was to use ‘Resample’.
Now I have my sagittal image resampled to the STIR space:
I’m trying to do the same with a sagittal mask segmentation, but my result is only black images (I’m using sitkNearestNeighbor as the interpolator).
Have you got any idea about what could be the problem?
Thank you very much
You might need to load your mask images as segmentations. If you load them as intensity images, values of 1 or 2 look black, just not the absolute black of 0. If you drag and drop the mask into ITK-SNAP, use the “Load as segmentation” button.
Yes, thank you @dzenanz, I knew that.
What I meant was with SimpleITK in Python.
Is LabelToRGBImageFilter available in SimpleITK? If yes, use it.
Yes. LabelToRGBImageFilter and LabelMapToRGBImageFilter have been in SimpleITK since before 1.0.
Thank you very much about your help!
In the end, the reason I couldn’t see my resampled segmentation was that I was doing the resampling with a NIfTI mask that didn’t have the correct header, so its spacing, origin, orientation… were not related to the STIR space and the result was wrong.
Now it’s correct, thank you one more time!
All the best!