LabelErodeDilate and other remote module failures

The hash specified for the LabelErodeDilate remote module does not exist in the repository;
https://open.cdash.org/viewConfigure.php?buildid=5240282

The InsightSoftwareConsortium repo is the one used by ITK; that repo is itself a fork of richardbeare's.

I am not sure what is going on here, but I think this issue is causing my remote builds to stop at this module and not build further.

I think @hjmjohnson forked a few repositories into InsightSoftwareConsortium recently. That change is a possible culprit.

MGHImageIO seems to have a similar error:
https://open.cdash.org/viewConfigure.php?buildid=5240375

Who is working on fixing the remote module URLs/hashes?

Before we start working on the fix, it would be good to have @hjmjohnson’s input since he is the one who updated the SHAs. From what I can see on the MGHImageIO module repo, it looks like there are some PRs pending, and that is most likely what the new SHA is supposed to point to. Pinging @lassoan and @jcfr on this one too, since MGHImageIO is part of Slicer. I am looking at the other one, LabelErodeDilate.

The LabelErodeDilate master branch on InsightSoftwareConsortium has diverged from the original repository.
A patch has been submitted to ITK to point to the upstream repository to download and compile the latest version of the module.
A new branch has been created in the InsightSoftwareConsortium repository to keep track of the diverged branch.
If everybody agrees, I will overwrite the InsightSoftwareConsortium master branch with the upstream master branch.
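
For context, the repository and commit that ITK pulls for a remote module are recorded in its Modules/Remote/*.remote.cmake file, so the patch boils down to updating the GIT_REPOSITORY/GIT_TAG entry there. A rough sketch of what such a change looks like (the description, URL, and commit below are placeholders, not the values from the actual patch):

# Modules/Remote/LabelErodeDilate.remote.cmake (sketch; URL and commit are placeholders)
itk_fetch_module(LabelErodeDilate
  "Erosion and dilation filters for label images."
  GIT_REPOSITORY ${git_protocol}://github.com/richardbeare/LabelErodeDilate.git
  GIT_TAG 0123456789abcdef0123456789abcdef01234567
  )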

@fbudin’s plan looks OK to me.

Although @richard.beare is an active contributor to the community, IMHO we should have some mechanism, policy, or procedure (e.g. documented in a *.md file in the ITK repo) for deciding when to fork a repo into InsightSoftwareConsortium (e.g. when the original repo has been inactive for a long time), and thus when to change the repo pointed to by *.remote.cmake, and when not to fork (e.g. if a repo were removed by its owner, do we, or does GitHub, have a way to recover its contents transparently, assuming nobody has kept a clone?).

Maybe this is tribal knowledge, but maybe when we get some time (e.g. after the current heavy workload period) we should write it down?

Now, we may have forked some of these in the last few days to avoid waiting for upstream PRs to be accepted. I guess this is relatively easy to fix for the ones that have @richard.beare or @jcfr as their owners.

@fbudin Thanks for the reminder, these will be integrated shortly.

I’ve recently accepted a PR from @hjmjohnson that updated the various override settings. Perhaps there is another change that was never propagated; if so, please send another PR. I’m happy for the InsightSoftwareConsortium versions to become the versions referenced by the build tree. I’m certainly not doing active development on this code. It would be nice if PRs were sent to keep the repos synced, but that is obviously not essential.

@richard.beare Thanks for chiming in. The divergence between the two master branches comes from the fact that, maybe a year ago, the same commit was merged independently in your repo and in the forked repo. Those two merges ended up with different SHAs (probably not the same person merging, a different time or day, …), so there is nothing we can do to synchronize these two repos in a clean way. What I will do instead is force-push your master branch into the forked repo, so that they are synchronized again. The diverged master branch in the fork was moved to master-InsightSoftwareConsortium-2016.03.13.

Thanks, I’m happy with whatever you recommend.

Thank you for updating the remote!

Looking further into why the nightly remotes are stopping. Some builds are using the following code in the CTest script driver:


foreach(itk_module ${itk_remote_modules})
  set( CTEST_BUILD_NAME "Linux-x86_64-gcc4.8-${itk_module}")
  set( CTEST_DASHBOARD_ROOT "/scratch/dashboards/Linux-x86_64-gcc4.8-remotes" )
  set( CTEST_NOTES_FILES )
  # don't do a list concatenation to form dashboard_cache,
  # just embed one string inside the other
  set(dashboard_cache "${dashboard_cache_base}
    Module_${itk_module}:BOOL=ON
    ")
  set(CTEST_TEST_ARGS INCLUDE_LABEL ${itk_module})

  include(${CTEST_SCRIPT_DIRECTORY}/../ITK-dashboard/itk_common.cmake)
endforeach()

But the itk_common.cmake driver terminates when there is a build error.

So this is why not all of the nightly remotes are being built. Likely some change in the behavior of ctest in the recent update to 3.10-ish exposed this issue.

@matt.mccormick @fbudin
Now what is the recommended way to fix this problem with the dashboard scripts?

Every CTest script build should be a separate process.

Do you know whether, if ctest_run_script([NEW_PROCESS] …) is used, the current variable context is propagated to the new process?

Depending on how it is called, that is what the documentation suggests:

If no argument is provided then the current script is run using the current settings of the variables.
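
For illustration, here is one possible direction (an untested sketch; the wrapper file name and the rv variable are made up). My understanding is that a script launched with ctest_run_script(NEW_PROCESS ...) starts fresh and does not inherit the caller's variables, so the per-module settings are written into a small generated script that is then run in its own process:

foreach(itk_module ${itk_remote_modules})
  # Variables are not inherited by a script run with NEW_PROCESS, so write
  # the per-module settings into a generated wrapper script first.
  set(wrapper "${CTEST_SCRIPT_DIRECTORY}/run_${itk_module}.cmake")
  file(WRITE "${wrapper}" "
set(CTEST_BUILD_NAME \"Linux-x86_64-gcc4.8-${itk_module}\")
set(CTEST_DASHBOARD_ROOT \"/scratch/dashboards/Linux-x86_64-gcc4.8-remotes\")
set(dashboard_cache \"${dashboard_cache_base}
  Module_${itk_module}:BOOL=ON
  \")
set(CTEST_TEST_ARGS INCLUDE_LABEL ${itk_module})
include(\"${CTEST_SCRIPT_DIRECTORY}/../ITK-dashboard/itk_common.cmake\")
")
  # Each module now builds in its own process, so a build error in one
  # module does not stop the remaining remote modules.
  ctest_run_script(NEW_PROCESS "${wrapper}" RETURN_VALUE rv)
endforeach()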

I have not been able to update the script above to preserve the CMake variables while running itk_common.cmake multiple times.

It is very convenient to run the remote builds in a for loop. Any suggestions on how to update our process for building the remotes nightly?

A Bash or Python script can also be used to run the builds in a for loop.

Yes, the limitation of using ctest to call a set of nightly ctest scripts is apparent. I originally chose ctest because these other options are not standard on Windows.

An example script is attached.
corista.sh (1.0 KB)