I am trying to determine the correct way to interrupt processing of a segmentation pipeline, e.g. in response to a user request or application shutdown. The segmentation runs in a worker thread and the cancellation is controlled from the GUI thread. I have searched the documentation and manuals and have not found anything useful - the documentation for itk::ProcessObject::AbortGenerateDataOn, for instance, is just ‘Turn on and off the AbortGenerateData flag.’, which offers little guidance on how to actually perform and handle an abort.
From trial and error and reading examples from others, my (partial) understanding is as follows:
Setting the abort flag can only be done in the thread running the Update(), i.e. via an observer callback. Setting the flag from a separate thread (e.g. the GUI thread) causes issues.
Only the currently running filter should have its abort flag set. I have tried setting the flag on all downstream filters and on the final filter, but this either causes problems (hangs caused by filters waiting for their threads to generate data) or nothing happens. As a side note, the reason I thought that only the final filter needed the abort flag set was that I accidentally had a circular pipeline, which caused a stack overflow when setting the abort flag because the process objects propagated the flag recursively.
Once the abort has completed, the itk::ProcessAborted exception is thrown from the Update() call. This needs to be handled and dealt with.
In order to re-use the pipeline, ResetPipeline() needs to be called. I am unclear whether this should be called on the ProcessObject that was aborted, on all process objects, on the final ProcessObject, or even on the final itk::Image (with the assumption that it propagates down the pipeline). I am assuming that the handler for the exception mentioned above is the right place to do this. I seem to have the most success resetting all ProcessObjects; otherwise I get hangs.
Any guidance on how to use this functionality properly would be greatly appreciated as it is not at all clear from the documentation.
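To make the threading arrangement I am describing concrete, here is a minimal model of it in plain C++ - no ITK types at all; `FakeFilter` and `ProcessAbortedStub` are made-up stand-ins for the real filter and for itk::ProcessAborted. The GUI thread only sets a request flag; the abort flag itself is set from inside the update, via the observer-style callback:

```cpp
#include <atomic>
#include <stdexcept>
#include <thread>

// Stand-in for itk::ProcessAborted.
struct ProcessAbortedStub : std::runtime_error {
    ProcessAbortedStub() : std::runtime_error("process aborted") {}
};

// Stand-in for a filter: it periodically invokes an observer callback,
// and throws once its own abort flag has been set.
struct FakeFilter {
    std::atomic<bool> abortGenerateData{false};

    template <typename Observer>
    void Update(Observer&& observer) {
        for (int iteration = 0; iteration < 1000; ++iteration) {
            observer(*this);               // like a ProgressEvent observer
            if (abortGenerateData.load())  // filter checks its own flag
                throw ProcessAbortedStub{};
        }
    }
};

// Returns true if the update was aborted. The GUI thread only flips
// `abortRequested`; the abort flag is set from the pipeline thread,
// and the resulting exception is caught around Update().
inline bool RunWithAbort(std::atomic<bool>& abortRequested) {
    FakeFilter filter;
    bool aborted = false;
    std::thread worker([&] {
        try {
            filter.Update([&](FakeFilter& f) {
                if (abortRequested.load())
                    f.abortGenerateData.store(true);  // set in-thread
            });
        } catch (const ProcessAbortedStub&) {
            aborted = true;  // handle the abort; reset the pipeline here
        }
    });
    worker.join();
    return aborted;
}
```

With the request flag already set, RunWithAbort() reports an abort; with it clear, the update runs to completion.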
Filters occasionally check whether the abort flag is set. I don’t think it matters whether it was set from the same thread.
itk::ProcessAborted exception … needs to be handled
Of course. You should generally wrap your Update() call in a try-catch block.
I can’t say I used abort mechanism much, so I don’t know whether and how it propagates through the pipeline. Each filter only checks its own abort flag.
There are a lot of complications here. I have implemented most of this in a 3D Slicer extension called “SimpleFilters”. You can try it out in the release against ITKv4 or the nightly against ITKv5. It runs ITK in a background thread (which then spawns more threads via ITK’s multi-threading). The GUI thread sets a flag that the background thread checks on any event. The background thread queues actions for the main GUI thread in a thread-safe queue, for events such as progress status updates.
- This is correct / best practice.
- The linked extension just executes one filter, but I would expect that setting the abort flag on all filters should work OK.
- In the linked extension, the filter is just destroyed afterwards. Restarting after an abort is not tested, so the reliability of this may vary.
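The flag-plus-queue pattern described above can be sketched in standard C++. This is an illustrative model only, not the actual SimpleFilters code - `EventQueue` and `RunWorker` are made-up names:

```cpp
#include <atomic>
#include <mutex>
#include <queue>
#include <string>

// Minimal thread-safe queue: the worker pushes status events, and the
// GUI thread drains them on its own schedule.
class EventQueue {
public:
    void Push(std::string event) {
        std::lock_guard<std::mutex> lock(m_Mutex);
        m_Events.push(std::move(event));
    }
    bool TryPop(std::string& event) {
        std::lock_guard<std::mutex> lock(m_Mutex);
        if (m_Events.empty()) return false;
        event = std::move(m_Events.front());
        m_Events.pop();
        return true;
    }
private:
    std::mutex m_Mutex;
    std::queue<std::string> m_Events;
};

// Worker loop: reports progress through the queue and checks the
// cancel flag on every event, as described above. Returns the number
// of steps actually completed.
inline int RunWorker(EventQueue& queue, std::atomic<bool>& cancel, int steps) {
    int completed = 0;
    for (int i = 0; i < steps; ++i) {
        if (cancel.load()) {
            queue.Push("cancelled");
            break;
        }
        ++completed;
        queue.Push("progress " + std::to_string(completed));
    }
    return completed;
}
```

The key design point is that the GUI thread never touches the pipeline directly: it only flips the cancel flag and drains the queue.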
Another note: ITKv5 reports progress less frequently than ITKv4 did.
Thanks for the replies.
From looking at the source of itkProcessObject, you are correct, Dženan, that the setter just sets the abort flag, although it is probably safest to set it from the observer callback to avoid any race conditions. It also seems to get automatically reset by the ProcessObject before the next run, so there is no need to reset it manually.
The ResetPipeline call is more interesting. It does seem to recurse back up the pipeline towards the original input(s), starting with the output of the process object it is called on; I expect this is the reason for the stack overflow I saw with a circular pipeline. All it seems to do is reset the m_Updating flag, unless I am missing something. There is some code in the UpdateOutputData function to handle aborts and exceptions, but it only calls ResetPipeline on the current object, so anything downstream of it will still think it is updating. For this reason, it is still necessary to call ResetPipeline on the final data object to ensure the whole pipeline is reset before reuse.
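To convince myself of that propagation behaviour, I modeled it with a toy pipeline: each node has an updating flag, and reset recurses to its upstream source, so calling it once on the last node clears the whole chain. This is plain C++, not ITK’s actual implementation, and `Node` / `ResetFromFinalNode` are made-up names:

```cpp
#include <vector>

// Toy model of a linear pipeline node: ResetPipeline() clears this
// node's updating flag and then recurses towards the original input,
// mirroring how resetting the final object reaches the whole chain.
struct Node {
    Node* upstream = nullptr;
    bool updating = false;

    void ResetPipeline() {
        updating = false;
        if (upstream != nullptr)
            upstream->ResetPipeline();  // recurse back up the pipeline
    }
};

// Build a chain of n nodes all marked as updating (as after an abort),
// reset from the last node only, and count how many were cleared.
inline int ResetFromFinalNode(int n) {
    std::vector<Node> nodes(n);
    for (int i = 0; i < n; ++i) {
        nodes[i].updating = true;
        if (i > 0) nodes[i].upstream = &nodes[i - 1];
    }
    nodes[n - 1].ResetPipeline();  // one call on the final object
    int cleared = 0;
    for (const Node& node : nodes)
        if (!node.updating) ++cleared;
    return cleared;
}
```

This toy version also shows why a circular pipeline overflows the stack: the recursion has no terminating upstream, so it never stops.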
I am still seeing some deadlocks when cancelling rapidly, so I will keep digging, although I have not moved on to ITK 5 yet so this may be fixed.