I applied affine registration to images, and the result shows that the images are successfully registered. However, when I compare the fixed image with the registered moving image, I can see that the fixed image is blurry (in other words, the moving images that underwent interpolation have higher contrast). As a result, applying the segmentation algorithm to the fixed image yields a weaker signal than for the others. I am using linear interpolation (affine approach). Any idea why I am getting contrast-enhanced images after registration? Thanks
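For reference, the resampling step can be sketched with SciPy (an assumption; your actual registration toolkit is not named here). The key point the sketch demonstrates: linear interpolation only blends neighboring voxel values, so it cannot expand the intensity range of the moving image.

```python
import numpy as np
from scipy import ndimage

# Toy "moving" image: a bright square on a dark background.
moving = np.zeros((64, 64))
moving[20:40, 20:40] = 100.0

# Hypothetical affine: a small rotation plus a translation.
theta = np.deg2rad(5)
matrix = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
offset = np.array([2.0, -1.0])

# order=1 -> linear interpolation; each output voxel is a convex
# combination of input voxels, so edges smooth slightly but the
# intensity range cannot grow.
resampled = ndimage.affine_transform(moving, matrix, offset=offset, order=1)

print(moving.max(), resampled.max())
```

If the registered moving image appears to have *more* contrast than the fixed image, the interpolation itself is therefore an unlikely culprit; the difference was either already in the data or is introduced by how the images are displayed.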
Nothing comes right to mind. Can you share example images? Or their screenshots?
Blurriness (lack of fine detail in the image, for example because Gaussian smoothing was applied) and higher contrast (a larger range of voxel intensities) in the displayed image are two very different things.
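A minimal sketch of the distinction, using synthetic noise and SciPy (illustrative only): sharpness can be measured by local gradient strength, contrast by the intensity range, and the two are separate properties of the data.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.uniform(0, 100, size=(128, 128))
blurred = ndimage.gaussian_filter(img, sigma=3)

# "Blurriness": loss of fine detail, visible as weaker local gradients.
sharpness = np.mean(np.abs(np.gradient(img)[0]))
sharpness_blurred = np.mean(np.abs(np.gradient(blurred)[0]))

# "Contrast": the range of voxel intensities.
contrast = img.max() - img.min()
contrast_blurred = blurred.max() - blurred.min()

print(sharpness, sharpness_blurred)  # smoothing strongly reduces sharpness
print(contrast, contrast_blurred)    # and, separately, narrows the range
```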
It can be normal to have slightly different default displayed contrast/brightness after some processing. For example, the image viewer may automatically determine the display settings from the processed image content. This is just a display setting; you can adjust the brightness/contrast (window/level) in the viewer without changing the underlying image data. If you set the brightness/contrast (window/level) display settings to be the same for all the images, then their appearance should be very similar.
If the fixed image is more blurry than the transformed moving image then the only reason I can imagine is that the fixed image was already more blurry than the moving image. It may have been acquired with different settings (e.g., larger spacing along all axes or certain axes) or the patient may have moved.
To display the images, I’m using the matplotlib package. As a result, all images are displayed with identical settings.
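One caveat worth checking (a sketch, not your actual plotting code): by default, matplotlib's `imshow` normalizes each image to its *own* minimum and maximum, so two images displayed with identical code can still get different display contrast. Passing explicit `vmin`/`vmax` applies the same window/level to both.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this demo
import matplotlib.pyplot as plt
import numpy as np

# Two synthetic images with different intensity ranges.
fixed = np.random.default_rng(0).uniform(0, 80, size=(64, 64))
moving = np.random.default_rng(1).uniform(0, 120, size=(64, 64))

fig, (ax1, ax2) = plt.subplots(1, 2)

# Without vmin/vmax, each image is auto-scaled to its own range,
# so the color limits differ even though the code is identical.
im1 = ax1.imshow(fixed, cmap="gray")
im2 = ax2.imshow(moving, cmap="gray")
print(im1.get_clim(), im2.get_clim())  # different limits

# Fixing vmin/vmax applies one common window/level to both images.
lo = min(fixed.min(), moving.min())
hi = max(fixed.max(), moving.max())
ax1.imshow(fixed, cmap="gray", vmin=lo, vmax=hi)
ax2.imshow(moving, cmap="gray", vmin=lo, vmax=hi)
```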
It’s probably more accurate to say that the properties of the moving images are altered, because we use the fixed image as a reference and the transformation is applied to the moving image.
There are two layers in the image that I have (think of two 2D images with different structures stacked on top of each other). I’m attempting to work on the second layer, but due to the poor quality of the bottom layer, I’m actually applying the top layer’s transformation matrix to the bottom layer, which registers the bottom layer perfectly. Could this be a source of error (I mean, taking the transformation matrix obtained from registering the top layer and applying it to the bottom layer, even though their structures are different)?
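Reusing a transform across layers can be sketched as follows (a SciPy stand-in with hypothetical transform values, not your actual pipeline). Reapplying one transform to both layers is geometrically valid as long as the two layers were acquired in the same spatial frame; it does not by itself change intensities or contrast.

```python
import numpy as np
from scipy import ndimage

# Two "layers" of the same scene with different structures.
top = np.zeros((64, 64));    top[10:30, 10:30] = 1.0
bottom = np.zeros((64, 64)); bottom[35:55, 35:55] = 1.0

# Assume registration on the top layer produced this affine
# (hypothetical values for illustration).
matrix = np.eye(2)
offset = np.array([3.0, -2.0])

# Apply the same transform, unchanged, to both layers. Any error from
# this reuse is geometric (the layers not truly sharing one motion),
# not an intensity or contrast change.
top_reg = ndimage.affine_transform(top, matrix, offset=offset, order=1)
bottom_reg = ndimage.affine_transform(bottom, matrix, offset=offset, order=1)
```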
I’ve also noticed that this problem occurs most frequently when I have a very low-quality image.
I still don’t know what image quality difference you are talking about (blurriness, artifacts, or different intensity range). If you cannot share the images that you are currently working with then you can try to reproduce the problem with any public data sets.