Interpolate 3D image where the xy planes are 70 nm apart

I am working with electron microscopy, where we have brain slices along the z-axis spaced 70 nm apart. I would like to compute the image data that is missing between each image slice, and need help picking the right algorithm (if there is one).

The following slices were taken along the z-axis:

This is called interpolation. Take a look at ImageInterpolators, the highest quality of which is WindowedSinc.
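To make the idea concrete, here is a minimal NumPy sketch of interpolating new xy planes along z. It uses plain linear interpolation per (y, x) column as a stand-in for ITK's resampling machinery; in practice you would let ITK's resampler with a WindowedSinc interpolator do this, with higher quality. The `factor` parameter and the toy two-slice stack are illustrative assumptions, not from the thread.

```python
import numpy as np

def interpolate_z(stack, factor):
    """Linearly interpolate new xy planes along z.

    stack: 3D array indexed (z, y, x).
    factor: output planes per input spacing, e.g. going from
            70 nm to 10 nm spacing would be factor 7.
    """
    z_in = np.arange(stack.shape[0])
    z_out = np.linspace(0, stack.shape[0] - 1,
                        (stack.shape[0] - 1) * factor + 1)
    out = np.empty((z_out.size, *stack.shape[1:]), dtype=float)
    # np.interp is 1D, so interpolate each (y, x) column along z
    for y in range(stack.shape[1]):
        for x in range(stack.shape[2]):
            out[:, y, x] = np.interp(z_out, z_in, stack[:, y, x].astype(float))
    return out

stack = np.array([[[0.0, 0.0]], [[70.0, 70.0]]])  # two tiny 1x2 slices
fine = interpolate_z(stack, 7)
print(fine.shape)     # (8, 1, 2): 6 new planes between the two originals
print(fine[1, 0, 0])  # 10.0, one seventh of the way from 0 to 70
```

Linear interpolation will blur structures that move between slices, which is exactly why the higher-order WindowedSinc interpolator (or the registration-based ideas below in the thread) can be worth the extra cost.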

If this is not good enough for you, you will need to come up with your own, domain-specific interpolator which makes more assumptions about your data. That would allow you to create more meaningful interpolation. What do you intend to use the “missing data” for?


Generating ground truth for the training of neural networks.

Any assumptions that I can make for this kind of data? I’d like to be as close to the truth as possible.

Perhaps model the axons and sheets, then assume they are mostly continuous?

No quick and easy things there, as far as I can tell. Except maybe to model the noise, then remove it, then interpolate, then add the estimated noise back into the interpolated slices? I don’t know whether this would help or make the interpolation quality worse.
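One hedged sketch of that denoise-interpolate-renoise idea, in NumPy. The noise model here is an assumption on my part (residual after a small mean filter stands in for "noise"; any real denoiser could replace it), and whether re-adding synthetic noise actually helps is, as said above, unknown.

```python
import numpy as np

def split_noise(slice_, k=3):
    # Crude noise model (an assumption, not from the thread): treat the
    # residual after a k x k mean filter as noise, the rest as signal.
    pad = k // 2
    padded = np.pad(slice_.astype(float), pad, mode="edge")
    smooth = np.zeros(slice_.shape)
    for dy in range(k):
        for dx in range(k):
            smooth += padded[dy:dy + slice_.shape[0], dx:dx + slice_.shape[1]]
    smooth /= k * k
    return smooth, slice_ - smooth

rng = np.random.default_rng(0)
slice_ = rng.normal(100.0, 5.0, size=(32, 32))  # synthetic noisy EM slice
smooth, noise = split_noise(slice_)

# Interpolate the smooth part between slices as usual, then draw fresh
# noise with the estimated per-slice standard deviation for each new plane.
new_plane = smooth + rng.normal(0.0, noise.std(), size=smooth.shape)
```

The point of the last step is that interpolated planes otherwise look suspiciously smooth compared to their neighbours, which matters if they are meant to pass as training data.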

Can’t make that kind of assumption, I’m afraid. These are not axons, they are organelles; the larger structures are actually mitochondria. Yes, we are very zoomed in (the power of electrons). These structures can change shape and move between the images in the stack.

Yeah, modeling and removing noise is a good idea. What did you have in mind?

Hello @caniko,

Another keyword for use in your searches is super-resolution.

Traditional:
If the standard interpolation implemented in ITK is not good enough, possibly look into registration-based interpolation (e.g. Frakes et al.).
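To illustrate the registration-based idea: register adjacent slices, then apply half the recovered transform to synthesize the middle plane. The sketch below is a deliberately crude stand-in for the deformable registration used in that line of work; it recovers only an integer translation via FFT cross-correlation, and `np.roll` wraps at the borders where a real resampler would not.

```python
import numpy as np

def estimate_shift(a, b):
    # Integer translation taking b onto a, found as the peak of the
    # FFT cross-correlation (a toy stand-in for deformable registration).
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:  # map wrapped indices to signed shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

def middle_slice(a, b):
    # Assume roughly linear motion between slices: move each slice
    # halfway toward the other, then average.
    dy, dx = estimate_shift(a, b)
    half_b = np.roll(b, (dy // 2, dx // 2), axis=(0, 1))
    half_a = np.roll(a, (-(dy - dy // 2), -(dx - dx // 2)), axis=(0, 1))
    return 0.5 * (half_a + half_b)

rng = np.random.default_rng(1)
b = rng.random((64, 64))
a = np.roll(b, (3, 5), axis=(0, 1))  # simulate structure drifting between slices
mid = middle_slice(a, b)
```

Compared with plain intensity interpolation, this keeps a drifting structure sharp in the synthesized plane instead of producing a double-exposed blur, which is why it can suit organelles that move between slices.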

Deep learning approaches:
If you have higher-res versions of the 3D volume you could train a network to output a super-resolution version of the input (e.g. Chen et al.). The only downside to this approach is that if you have a unique structure that is very different from what was used in the training set, you likely will get a super-resolved image which appears plausible but doesn’t accurately represent that structure.


Is there any way to couple the greyscale image data with the morphological mask data that I have from image segmentation? Initially the morphological data had the same resolution as the greyscale image. It was then interpolated using ITK’s morphological interpolation, so the resulting mask data has a spacing of approximately (1, 1, 1) nm, while the greyscale data is still approximately (1, 1, 63) nm, where the axes are (x, y, z).

Super-resolution sounds very promising @zivy.

Deep learning for refining seems to be irrelevant for this task. Ground-truth data with higher resolution than electron microscopy? Let me know when it becomes available 😄

Always open to other ideas!
Thank you for the help