Cuberille Mesh

Hi,

Is there a way to convert the mesh from the cuberille image-to-mesh filter back to an image? I saw something like itk.TriangleMeshToBinaryImageFilter, but my Python brain doesn't understand how

instance = itk.TriangleMeshToBinaryImageFilter[itk.Mesh[itk.UC,3], itk.Image[itk.UC,3]].New()

works… Is it like

image_from_mesh = instance(mesh)

because then it says:

ITK ERROR: TriangleMeshToBinaryImageFilter(0000016AD471C630): Must Set Image Size

A mesh does not explicitly contain the resolution of the image from which it originated, and determining it implicitly is hard. So you need to specify the image grid parameters (size, spacing, origin, orientation) on the filter.

If you have the original image, you could take the information from it.
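
For example, a minimal sketch of that (assuming original_image is the image the mesh came from and mesh is the cuberille output; the names are illustrative):

mesh_to_image = itk.TriangleMeshToBinaryImageFilter[itk.Mesh[itk.UC, 3], itk.Image[itk.UC, 3]].New()
mesh_to_image.SetInput(mesh)
# copy the grid definition from the original image
mesh_to_image.SetSize(itk.size(original_image))
mesh_to_image.SetSpacing(original_image.GetSpacing())
mesh_to_image.SetOrigin(original_image.GetOrigin())
mesh_to_image.SetDirection(original_image.GetDirection())
mesh_to_image.Update()
image_from_mesh = mesh_to_image.GetOutput()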

I got it to work, thanks for the help!

Perhaps share your solution in case someone in the future runs into the same issue?

Sure, no problem! I was trying to transform the 3D segmentations in physical space so this was my way of doing it. Please let me know if there are other ways!

import itk

MeshType = itk.Mesh[itk.UC, 3]
ImageType = itk.Image[itk.UC, 3]
PixelType = itk.ctype("unsigned char")

# path: the label image file, invpath: the inverse transform file
segmentation = itk.imread(path, PixelType)
# transformread returns a list of transforms; take the first one
inv = itk.transformread(invpath)[0]

# grid parameters of the original image, reused for rasterization below
size = itk.size(segmentation)
origin = segmentation.GetOrigin()
spacing = segmentation.GetSpacing()
direction = segmentation.GetDirection()

# label image -> surface mesh
mesh = itk.cuberille_image_to_mesh_filter(segmentation)

# move the mesh into the target physical space
transformed_mesh = itk.transform_mesh_filter(
    mesh,
    transform=inv
)

# rasterize the transformed mesh back onto the original image grid
mesh_to_image_filter = itk.TriangleMeshToBinaryImageFilter[MeshType, ImageType].New()
mesh_to_image_filter.SetInput(transformed_mesh)
mesh_to_image_filter.SetSize(size)
mesh_to_image_filter.SetOrigin(origin)
mesh_to_image_filter.SetSpacing(spacing)
mesh_to_image_filter.SetDirection(direction)
mesh_to_image_filter.Update()
tf_image_from_mesh = mesh_to_image_filter.GetOutput()


If your transform is rigid or affine, you can do this operation much more elegantly via TransformGeometryImageFilter; there is no loss, and it is nearly instantaneous compared to the cuberille filter workaround.
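
A minimal sketch of that approach (assuming the filter is wrapped in your ITK Python build and a recent ITK where the transform is set via the transform keyword; the file names are illustrative, and rigid_transform must be rigid or affine):

import itk

segmentation = itk.imread(label_path, itk.UC)
rigid_transform = itk.transformread(transform_path)[0]

# only the image metadata (origin, spacing, direction) is updated;
# voxel values are never interpolated, so there is no resampling loss
moved = itk.transform_geometry_image_filter(segmentation, transform=rigid_transform)
itk.imwrite(moved, output_path)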

If your transform is not affine, I would still recommend the image resampling approach, without a diversion into meshes. You might want a fancy interpolator such as Label Gaussian instead of the usual nearest neighbor, which is the normal choice for label resampling.
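
For reference, the resampling approach with a nearest-neighbor interpolator might look like this (a minimal sketch, assuming moving_labels, fixed_image and registration_transform are already loaded; whether the Label Gaussian interpolator is wrapped in your Python build can vary, so nearest neighbor is shown):

interpolator = itk.NearestNeighborInterpolateImageFunction.New(moving_labels)

resampled_labels = itk.resample_image_filter(
    moving_labels,
    transform=registration_transform,  # maps points from fixed space into moving space
    interpolator=interpolator,
    use_reference_image=True,
    reference_image=fixed_image,       # defines the output grid
)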

Hi Dzenanz,

Thanks for these two links! I still have yet to understand the itk stuff, coming from sitk. As a follow-up, I get the concerns about the time efficiency of my code, but where would the loss come from (and what is this loss, exactly)?

Also, I'm not sure if this is what you mean, but I did try SITK resampling on my images; when I converted them to mesh files, they looked quite different from what I would get from, say, applying a transform to the segmentations/meshes via 3D Slicer.

Conversion between a mesh and a label image is not fully reversible in the general case, so these conversions should be avoided when possible.

I did try SITK resampling on my images; when I converted them to mesh files, they looked quite different from what I would get from, say, applying a transform to the segmentations/meshes via 3D Slicer.

If they were quite different, I suspect you were not doing it right. When resampling an image, the direction of the transform is opposite (inverted) from the one used to transform points (the mesh). Try resampling with the inverse of the transform you were originally using.
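
To illustrate the direction convention (a minimal sketch, assuming registration_transform maps points from fixed space into moving space, which is what ITK resampling expects; GetInverseTransform returns None for transforms that cannot be inverted):

# resampling the moving label image onto the fixed grid uses the transform as-is
resampled_labels = itk.resample_image_filter(
    moving_labels,
    transform=registration_transform,
    use_reference_image=True,
    reference_image=fixed_image,
)

# transforming mesh points (moving space -> fixed space) needs the inverse
inverse_transform = registration_transform.GetInverseTransform()
transformed_mesh = itk.transform_mesh_filter(mesh, transform=inverse_transform)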

Hi dzenanz,

The differences were not drastic, but they were noticeable at least (the mesh made from the resampled image looked grainier)! I used the transform produced by registering the two images for the resampling. I inverted this transform for the meshes (for 3D Slicer I did not need to do this).

It sounds like you might have chosen a lower resolution for resampling, or you like the smoothing implicit in the cuberille converter. If you resample onto an image of higher resolution, and/or use a smooth label interpolator, how do the results compare then?
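
One way to resample onto a finer grid (a minimal sketch, assuming a factor-of-2 refinement of the fixed image grid and the same illustrative variable names as above):

upsample = 2
fine_size = [int(s * upsample) for s in itk.size(fixed_image)]
fine_spacing = [s / upsample for s in itk.spacing(fixed_image)]

interpolator = itk.NearestNeighborInterpolateImageFunction.New(moving_labels)
resampled_labels = itk.resample_image_filter(
    moving_labels,
    transform=registration_transform,
    interpolator=interpolator,
    size=fine_size,
    output_spacing=fine_spacing,
    output_origin=fixed_image.GetOrigin(),       # kept as-is for simplicity
    output_direction=fixed_image.GetDirection(),
)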

Apologies, my terminology is a bit weird. When I say grainier, I mean that two labels can get interspersed with one another, whereas in the mesh they do not. This was using SITK's Resample, so I may have to go up a level and learn and use the itk stuff to achieve a better result? (like the fancy interpolator you mentioned)

If labels get interspersed, that is probably an artifact of label interpolation. Try using the nearest neighbor interpolator instead of the default linear.
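
In SimpleITK terms, that would look roughly like this (a sketch, assuming moving_labels, fixed_image and transform are SimpleITK objects; the names are illustrative):

import SimpleITK as sitk

# nearest neighbor keeps label values intact instead of blending neighboring labels;
# sitk.sitkLabelGaussian is the smoother label-aware alternative
resampled = sitk.Resample(
    moving_labels,            # image to resample
    fixed_image,              # reference image defining the output grid
    transform,                # fixed -> moving transform
    sitk.sitkNearestNeighbor,
    0,                        # default (background) pixel value
)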