Organizing Your Dashboards with Subprojects: an addendum
Last July, for the Kitware Source, our quarterly developer newsletter, I wrote an article called “CDash Subprojects”: http://kitware.com/products/archive/kitware_quarterly0709.pdf (see pp. 10-13).
Recently it came to my attention that there are two omissions in that article, so I am here today to make amends and offer you this addendum.
There are two additional steps you should take to get subprojects and labels working flawlessly on your CDash dashboard. The first step is mandatory if you follow the article to the letter, and should not have been left out: as written, the sample scripts do not submit properly organized results. The second step is useful for improved warning and error reporting.
1. Use the APPEND argument in all ctest_build(…) script calls.
When a script submits to a CDash dashboard with multiple ctest_submit calls, sending individual parts each time, it is very important to pass the APPEND flag to every ctest_build call. Near the bottom of page 12, in the first column, there’s a line of code that reads:
ctest_build(BUILD "${CTEST_BINARY_DIRECTORY}")
It should instead read:
ctest_build(BUILD "${CTEST_BINARY_DIRECTORY}" APPEND)
The same goes for the subsequent sample uses of the ctest_build command in the article. When splitting the dashboard into subproject-based steps, the APPEND argument is critical.
If you do not add APPEND, strange things happen: the configure, build, and test steps that should share a single row end up spread across multiple rows, or rows simply go missing entirely. The reason comes down to CDash behaving in a backward-compatible manner, and is a topic for a whole ‘nother blog entry.
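To make the shape of such a script concrete, here is a minimal sketch of a per-subproject loop with APPEND in place. The subproject names, the matching build targets, and the use of INCLUDE_LABEL are illustrative assumptions on my part, not taken from the original article:

# Hypothetical subproject names; substitute your own.
set(subprojects Libs Apps)

ctest_start(Experimental)
ctest_configure(BUILD "${CTEST_BINARY_DIRECTORY}")
ctest_submit(PARTS Configure)

foreach(subproject ${subprojects})
  # Tell CDash which subproject the results that follow belong to.
  set_property(GLOBAL PROPERTY SubProject ${subproject})
  set_property(GLOBAL PROPERTY Label ${subproject})

  # APPEND keeps each partial submission in the same dashboard row
  # instead of starting a new one. Assumes a target named after the subproject.
  ctest_build(BUILD "${CTEST_BINARY_DIRECTORY}" TARGET ${subproject} APPEND)
  ctest_submit(PARTS Build)

  # Run only the tests labeled for this subproject.
  ctest_test(BUILD "${CTEST_BINARY_DIRECTORY}" INCLUDE_LABEL "${subproject}")
  ctest_submit(PARTS Test)
endforeach()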
2. Set CTEST_USE_LAUNCHERS to 1, either in both the driving script and the initial configuration cache, or in your CTestConfig.cmake.
The idea behind ctest launchers is that they wrap each compile or link step so the output can be saved and sent to CDash in the event of a warning or error. Rather than grepping through and analyzing the full build output after thousands of compile and link calls, ctest can simply capture the error output directly and pass it, in its entirety, to the dashboard. This helps immensely in figuring out why some errors occur, without necessarily even having access to the client machine.
Additionally, since each call to compile or link is wrapped when this setting is on, ctest can associate labels with build errors or warnings. Since it knows what source file or target caused the error, and it knows the source-file- and target-to-label mappings, it can pass that information along to CDash.
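For reference, those mappings come from LABELS properties set in the project’s own CMakeLists.txt files; a minimal sketch, with a hypothetical target and source file standing in for a real subproject:

# Hypothetical library and source file, shown only to illustrate the mappings.
add_library(mylib mylib.c)
# Associate the target and source file with a subproject label so that
# warnings and errors captured by the launchers are routed to that subproject.
set_property(TARGET mylib PROPERTY LABELS Libs)
set_property(SOURCE mylib.c PROPERTY LABELS Libs)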
I would recommend doing this in the driving script as the best practice:
set(CTEST_USE_LAUNCHERS 1)
...
ctest_configure(BUILD "${CTEST_BINARY_DIRECTORY}"
  OPTIONS "-DCTEST_USE_LAUNCHERS=${CTEST_USE_LAUNCHERS}"
  ...)
The documentation for the CTest.cmake module includes this text regarding CTEST_USE_LAUNCHERS:
While building a project for submission to CDash, CTest scans the build output for errors and warnings and reports them with surrounding context from the build log. This generic approach works for all build tools, but does not give details about the command invocation that produced a given problem. One may get more detailed reports by adding
set(CTEST_USE_LAUNCHERS 1)
to the CTestConfig.cmake file. When this option is enabled, the CTest module tells CMake’s Makefile generators to invoke every command in the generated build system through a CTest launcher program. (Currently the CTEST_USE_LAUNCHERS option is ignored on non-Makefile generators.) During a manual build each launcher transparently runs the command it wraps. During a CTest-driven build for submission to CDash each launcher reports detailed information when its command fails or warns. (Setting CTEST_USE_LAUNCHERS in CTestConfig.cmake is convenient, but also adds the launcher overhead even for manual builds. One may instead set it in a CTest dashboard script and add it to the CMake cache for the build tree.)
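If you prefer the CTestConfig.cmake route described above, the setting is a single line; the project name here is just a placeholder:

# CTestConfig.cmake at the top of the source tree (project name is a placeholder).
set(CTEST_PROJECT_NAME "MyProject")
# Enables the per-command launchers for every build of this tree,
# including manual developer builds, which adds a small overhead.
set(CTEST_USE_LAUNCHERS 1)

As the documentation points out, the driving-script approach shown earlier avoids paying that overhead during ordinary manual builds.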
Credit to Brad King for writing the launchers documentation in the first place. And to David Partyka and Marcus Hanwell for trying out the steps outlined in the article and pointing out these omissions.
Sorry to have left this information out of the original Source article. I’m just glad people found it useful enough to try it out and find the errors in the first place! Keep up the good work out there. The world needs you, good coders…