the Compartmented Robust Posix C++ Unit Test system

Chapter 3. Problems

3.1. My test is killed by crpcut, what gives?
3.2. How do I parametrize tests?
3.3. How do I pass information from the command line to a test?
3.4. Why doesn't gcov give me meaningful information from my test runs?
3.5. How do I share a test setup that is expensive to initialize? I don't want to do it over for every test.
3.6. How do I pass state from one test to another, to chain them, if they run in isolated processes, or even in parallel?
3.7. But I really really really want to chain the result of tests. How do I pass state from one test to another?
3.8. Why does my crpcut test program say there are no tests? I've written plenty!
3.9. How can I test that a function goes to sleep?
3.10. I am writing negative tests for a function that creates files, and the test fails because the files aren't erased. This is expected. What should I do?
3.11. My test fails with a non-standard exception being thrown. How can I obtain information from it?
3.12. Why is the output garbled? My strings don't look like that.
3.13. How can I differentiate between failures in work in progress, and regressions?
3.14. Why do I get link errors about undefined references to identifiers in crpcut::heap?
3.15. Why does my program die due to stack exhaustion before even getting to main()?
3.16. How can I prevent crpcut's heap instrumentation from getting in the way of my code that needs to implement its own heap?

3.1.

My test is killed by crpcut, what gives?

A message like this, eh?


     FAILED: name_of_my_test
     phase="running"  --------------------------------------------------------------
     Timed out - killed

          

By default, test functions are given 2s to complete, and fixture construction and destruction are given 1s each. Either your test function consumed at least 3s, or your fixture constructor or destructor consumed more than 1s. Most probably your code waits for something that never happens, or is stuck in a no-progress loop (infinite, that is).

If, however, your test legitimately needs more time than that, you can easily raise the limit. Just use the DEADLINE_REALTIME_MS(n) modifier to set a suitably long deadline for the test function, FIXTURE_CONSTRUCTION_DEADLINE_REALTIME_MS(ms) for a slow fixture constructor, or FIXTURE_DESTRUCTION_DEADLINE_REALTIME_MS(ms) to give the fixture destructor more time.
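
As a sketch, a test that legitimately needs up to 5 seconds could look like the following (slow_io() is just a stand-in for your own slow code):


    void slow_io();   // stand-in for your own slow code

    TEST(waits_for_slow_io, DEADLINE_REALTIME_MS(5000))
    {
      slow_io();      // now allowed up to 5000ms of real time
    }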

Note
If this occurs when running under special circumstances, for example with time consuming tools like valgrind, you may temporarily turn off all timeouts with the -t / --disable-timeouts command line parameter, or extend the timeouts using --timeout-multiplier=factor to compensate for the slower execution time.

3.2.

How do I parametrize tests?

Parametrized tests, the way they are implemented in other unit test systems, are convenient, but they also make it a bit difficult to find out what was in error and what wasn't.

The crpcut way of doing parametrized tests is to express more or less the entire test functionality in a fixture, using templates, and then add several tests, each using different parameters. Example:


    class parameter_base
    {
    protected:
      // the entire test logic, expressed once as a template
      template <typename T1, typename T2>
      void my_test(T1 t1, T2 t2) {
        ASSERT_GT(t1, t2);
      }
    };

    // each test instantiates the template with its own parameters
    TEST(gt_int_4_int_3, parameter_base)
    {
      my_test(4, 3);
    }

    TEST(gt_double_pi_int_3, parameter_base)
    {
      my_test(3.141592, 3);
    }

3.3.

How do I pass information from the command line to a test?

In your test, use any of the available forms of crpcut::get_parameter() to pick up the value of a parameter passed with -p name=value / --param=name=value on the command line. The flag can be given several times, to define several named parameters.
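
A minimal sketch, assuming the plain C-string form of crpcut::get_parameter(); the parameter name "username" is made up for the example:


    TEST(uses_command_line_parameter)
    {
      // started as e.g.:  ./testprog -p username=alice
      const char *name = crpcut::get_parameter("username");
      ASSERT_TRUE(name);                      // a null pointer means no such parameter
      ASSERT_EQ(std::string(name), "alice");  // use the value in the test
    }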

3.4.

Why doesn't gcov give me meaningful information from my test runs?

It does, but it only reports values for tests that exit normally, i.e. tests that return from (or run through) the test function body, tests that call exit() themselves, and tests that leave the test function body by throwing an exception.

Tests that fail ASSERT checks, however, or tests that themselves choose to exit abnormally, for example by calling _Exit() or abort(), will not reliably get the collected gcov data flushed, even when that is the expected behaviour of the test.

3.5.

How do I share a test setup that is expensive to initialize? I don't want to do it over for every test.

You initialize your setup once, outside the tests. It is probably a good idea to use a singleton, which you initialize from the main() function. The tests will get copies of the state when the processes are forked.
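
A minimal sketch of the idea; the expensive_setup class and its slow constructor are made up for the example:


    #include <crpcut.hpp>

    class expensive_setup
    {
    public:
      static expensive_setup& instance()
      {
        static expensive_setup obj;   // constructed once, before any test runs
        return obj;
      }
    private:
      expensive_setup() { /* slow initialization here */ }
    };

    int main(int argc, char *argv[])
    {
      expensive_setup::instance();    // pay the initialization cost once
      return crpcut::run(argc, argv); // every forked test process gets a copy
    }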

3.6.

How do I pass state from one test to another, to chain them, if they run in isolated processes, or even in parallel?

You really don't want to do that for a software unit test. If you were running a hardware production test, it might be interesting, in order to shorten the time to find defective units, but for a software unit test you want to pinpoint logical errors, and fix bugs. To do that effectively, you want to be able to run each test case individually.

3.7.

But I really really really want to chain the result of tests. How do I pass state from one test to another?

Sigh, if you insist on making life difficult for yourself, then so be it. What you do is set up a shared memory region from the main() function, and use that region to pass state. Do not forget to use the DEPENDS_ON(...) modifier to impose a strict order.
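
A sketch of the idea, using an anonymous shared mmap() region; the two tests and the shared int are made up for the example:


    #include <crpcut.hpp>
    #include <sys/mman.h>

    static int *shared_value;  // the mapping is inherited by every forked test process

    int main(int argc, char *argv[])
    {
      shared_value = static_cast<int*>(mmap(0, sizeof(int),
                                            PROT_READ | PROT_WRITE,
                                            MAP_SHARED | MAP_ANONYMOUS,
                                            -1, 0));
      *shared_value = 0;
      return crpcut::run(argc, argv);
    }

    TEST(produces_state)
    {
      *shared_value = 42;
    }

    TEST(consumes_state, DEPENDS_ON(produces_state))  // impose a strict order
    {
      ASSERT_EQ(*shared_value, 42);
    }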

3.8.

Why does my crpcut test program say there are no tests? I've written plenty!

crpcut relies on constructors for global objects having executed before crpcut::run(argc, argv) is called. There are a number of reasons why this may fail:

  • main() is not compiled by a C++ compiler.
  • The object file containing main() and the object files containing the tests are not linked with the C++ runtime.
  • Your tests are in a shared library.

All of the above, and then some, can be worked around, but it's not for the faint of heart. This may be of help.

3.9.

How can I test that a function goes to sleep?

A broad check can be made for the entire test function with DEADLINE_CPU_MS(n) and EXPECT_REALTIME_TIMEOUT_MS(ms). The former imposes a limit on the maximum allowed CPU-time for the test, so that it cannot busy-wait. The latter requires a minimum realtime duration for the test, so if the test finishes earlier, it fails.
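
As a sketch, for a hypothetical function sleep_at_least_half_a_second(), the two modifiers could be combined like this:


    void sleep_at_least_half_a_second();   // stand-in for the function under test

    TEST(really_sleeps,
         DEADLINE_CPU_MS(100),             // fails if the time is spent busy-waiting
         EXPECT_REALTIME_TIMEOUT_MS(500))  // fails if the test finishes too early
    {
      sleep_at_least_half_a_second();
    }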

It is also possible to make more fine-grained checks with the ASSERT macros ASSERT_SCOPE_MAX_CPUTIME_MS(ms), ASSERT_SCOPE_MAX_REALTIME_MS(ms), ASSERT_SCOPE_MIN_REALTIME_MS(ms), and the corresponding VERIFY macros VERIFY_SCOPE_MAX_CPUTIME_MS(ms), VERIFY_SCOPE_MAX_REALTIME_MS(ms) and VERIFY_SCOPE_MIN_REALTIME_MS(ms).
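
A sketch of such a scoped check, assuming the scope macros wrap a block inside the test body (both called functions are stand-ins):


    void fill_cache();                     // stand-in: setup that should not be timed
    void sleep_at_least_half_a_second();   // stand-in for the function under test

    TEST(only_the_sleeping_call_is_timed)
    {
      fill_cache();                        // excluded from the timing check
      ASSERT_SCOPE_MIN_REALTIME_MS(500)
      {
        sleep_at_least_half_a_second();    // only this block must last at least 500ms
      }
    }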

3.10.

I am writing negative tests for a function that creates files, and the test fails because the files aren't erased. This is expected. What should I do?

Use WIPE_WORKING_DIR as the optional action parameter to EXPECT_EXIT(num, action?) or EXPECT_SIGNAL_DEATH(signo, action?).
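
As a sketch, for a hypothetical function create_scratch_files_and_abort() that writes files and then calls abort():


    void create_scratch_files_and_abort();  // stand-in: writes files, then calls abort()

    TEST(dies_but_leaves_files_behind,
         EXPECT_SIGNAL_DEATH(SIGABRT, WIPE_WORKING_DIR))  // wipe leftovers instead of failing
    {
      create_scratch_files_and_abort();
    }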

3.11.

My test fails with a non-standard exception being thrown. How can I obtain information from it?

Write a custom exception describer for your exception type using the CRPCUT_DESCRIBE_EXCEPTION(signature) macro.
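
A sketch, assuming a hypothetical exception type my_error with a code() member, and assuming the describer body returns a std::string:


    #include <sstream>

    class my_error                 // hypothetical non-standard exception type
    {
    public:
      explicit my_error(int c) : code_(c) {}
      int code() const { return code_; }
    private:
      int code_;
    };

    CRPCUT_DESCRIBE_EXCEPTION(const my_error &e)
    {
      std::ostringstream os;
      os << "my_error with code=" << e.code();
      return os.str();  // shown in the report when a my_error escapes a test
    }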

3.12.

Why is the output garbled? My strings don't look like that.

This is probably a character set issue. By default, crpcut assumes that all output is in the UTF-8 character set. If the code you're testing uses another character set, you may want to call crpcut::set_charset() before calling crpcut::run(). If you run crpcut with text output instead of XML output, it may also be that your terminal uses a character set other than UTF-8. In that case, add the command line flag -C name / --output-charset=name when starting your test program, to define the character set used by your terminal.
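
A minimal sketch, assuming crpcut::set_charset() takes the name of the character set as a string; "iso-8859-1" is just an example:


    #include <crpcut.hpp>

    int main(int argc, char *argv[])
    {
      crpcut::set_charset("iso-8859-1");  // the charset your code under test produces
      return crpcut::run(argc, argv);
    }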

3.13.

How can I differentiate between failures in work in progress, and regressions?

Tags are your friends here. Define a tag for each new feature or user story you are working on, and use it for each new test. Then run your crpcut test program with that tag marked as non-critical. That way you can easily tell expected non-critical failures from regressions.

See DEFINE_TEST_TAG(tagname), WITH_TEST_TAG(tagname) and the command line parameter -T {select}{/non-critical} / --tags={select}{/non-critical} .
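
As a sketch, with a made-up tag name ticket_4711 for a user story in progress:


    DEFINE_TEST_TAG(ticket_4711);

    TEST(new_parser_handles_empty_input, WITH_TEST_TAG(ticket_4711))
    {
      // work in progress - may legitimately fail for a while
    }

With the command line syntax quoted above, running the test program with --tags=/ticket_4711 should then report failures in these tests as non-critical.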

3.14.

Why do I get link errors about undefined references to identifiers in crpcut::heap?

Your test uses ASSERT_SCOPE_HEAP_LEAK_FREE or VERIFY_SCOPE_HEAP_LEAK_FREE, and for those to work you need to link with -lcrpcut instead of -lcrpcut_basic.

3.15.

Why does my program die due to stack exhaustion before even getting to main()?

Most probably you have used the experimental library -lcrpcut_heap, and added it after -lcrpcut_basic on the link command line. The preferred solution is to link with -lcrpcut instead, but should you want to use -lcrpcut_heap, it is important that -lcrpcut_heap precedes -lcrpcut_basic on the link command line.

3.16.

How can I prevent crpcut's heap instrumentation from getting in the way of my code that needs to implement its own heap?

Link your test program with -lcrpcut_basic instead of -lcrpcut. You will, however, lose the crpcut heap instrumentation functionality if you do.