
To Catch a CLion

CLion is a cross-platform IDE for C++ development (which happens to have a built-in test runner).

Catch is a cross-platform test framework for C++.

I’m the original author of Catch and am now working at JetBrains so this seems to be an ideal match! As of the 2017.1 release I’m pleased to say that CLion fully supports Catch as a first class test framework. That means you can create Catch test runner configurations that let you specify test names (with wildcards) or tags, see the results of your test runs, hierarchically, in the test runner window, and can re-run failing tests, or the test where the cursor is, with just a single shortcut. You can even run all tests in a source file.

But why would you want to use Catch in the first place? In this post I’ll take you through using Catch with CLion’s new integration and give you a flavour of what to expect.

Getting and using Catch

Catch is distributed as a single header file, with no external dependencies (beyond the C++03 standard library). You can always get the latest version of this header directly at this URL.

Or you might prefer to clone the whole repository from GitHub. That will also give you the docs, tests, scripts and source files that get used to build the single include. If you do this you’ll find the single header file in the `single_include` directory.

Once you have the file you just need to drop it into your project directory – or somewhere else on your include path. Because Catch is a single header you can directly include it in your own repository.

Now, to use it, just include the header in your test files. Additionally, in exactly one source file you’ll need to precede the `#include` with either `#define CATCH_CONFIG_MAIN` or `#define CATCH_CONFIG_RUNNER`. The former instructs Catch to define `main()` for you, and will take care of parsing command line arguments and exit codes. Use the latter if you want to supply your own `main()`; there are then several levels you can hook in at. See the documentation for more on that.

#define CATCH_CONFIG_MAIN
#include "catch.hpp"
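And if you want to supply your own `main()` instead, a minimal sketch using `CATCH_CONFIG_RUNNER` looks something like this (the set-up and clean-up comments are placeholders for whatever your project needs):

```cpp
// In exactly one source file:
#define CATCH_CONFIG_RUNNER
#include "catch.hpp"

int main( int argc, char* argv[] ) {
    // ... any global set-up your tests need goes here ...

    Catch::Session session;                       // there must be exactly one instance
    int returnCode = session.applyCommandLine( argc, argv );
    if( returnCode != 0 )                         // indicates a command line error
        return returnCode;

    int numFailed = session.run();

    // ... global clean-up goes here ...
    return numFailed;                             // zero if all tests passed
}
```

Hooking in at this level lets you run your own initialisation before any tests execute, while still getting Catch’s command line handling via `applyCommandLine()`.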

To start a test case just use the `TEST_CASE` macro and give it a test name, as a freeform string. Optionally give it a second string containing one or more tags in square brackets. Assertions are usually written with `REQUIRE`, which aborts the test on failure, or `CHECK`, which reports the failure but carries on. For more details on these and other macros see the documentation.

Within a `REQUIRE` or `CHECK` just use normal C++ comparison operators. Catch uses expression templates to decompose the expression so it can report the left-hand-side and right-hand-side operands independently.

Here’s an example of everything we just discussed:

int theAnswer() { return 6*9; }

TEST_CASE( "Life, the universe and everything", "[42][theAnswer]" ) {
  REQUIRE( theAnswer() == 42 );
}

Notice that Catch test cases are modelled as free functions. There is no need to create a “test fixture” class (although that is supported too). Instead, tests can be organised by dividing top-level test cases into sections. You can even nest sections within each other, or use the BDD-style bindings that give you GIVEN-WHEN-THEN blocks. For more, see the docs on test cases and sections.

If you build and run this now it will use the default console reporter to give a textual report on the results. This already works quite nicely. But to take advantage of CLion’s Catch integration you just need to add a unit test configuration.


Go to `Run | Edit Configurations` (or from the configurations drop-down at the top-right) and click the `+` icon at the top left of the configurations dialog. Select `Catch Test` from the drop-down and give it a name (e.g. “Unit Tests”). There are some more options here but that’s all you need to start using Catch with CLion’s test runner.
Click `OK` and, now, running the tests will open the test runner tool window.

Fully Integrated

The left-hand pane shows a hierarchical representation of all the tests that have been run. You can hide or show passing or ignored tests with the green and orange icons at the top, and there are various sorting and folding options too.


When you click on a test name you’ll see the output from the test on the right. Or if you double-click you’ll also be taken to the start of the test case in the code. Where file and line numbers appear in the output you can click there to go to individual assertions in the code (especially useful if you have many assertions, or they are not directly in the body of the test case).

By default only failing tests have detailed output. But Catch has a command line option to force passing tests to show output, too. Edit the configuration again and, in `Program Arguments`, write `-s`. Run the tests again and now output is shown for all tests, pass or fail. Catch has a rich set of command line options which, again, can be found in the docs.

Being selective

The test runner gives you several options for running your tests. With the editor caret inside a test case, pressing `^⇧R` on a Mac, or `Ctrl+Shift+F10` on Windows/Linux (or using the option from the context menu), will run just that test – or press `^⇧D`/`Ctrl+Shift+D` to debug it. If the caret is outside any particular test case, but still within a test file, the same command will run all tests in the file. (Note that whenever you selectively run tests using one of these features a new, temporary, configuration will be created and selected. Remember to select your main test configuration to run all tests again.)


And if you just want to re-run all the tests that failed in the previous run, you can do that too. Just click the `Rerun Failed Tests` icon in the tool window.

If you want even more control over which tests are run go to `Edit Configurations` again. Here you can select tests by `Tags / Test` or `Pattern`.

With the first option selected the next field is `Tags:`.


Start typing tag names (in square brackets) here and you’ll get completions right in the text entry field! For our example, start typing `[t` and you’ll immediately be offered `[theAnswer]`. You can select more than one tag to narrow the set of tests further.

With one or more tags selected the `Test:` field becomes active. This is a dropdown of all tests matching the current tags. You can, optionally, choose a specific test from this list – or just leave it to run all matching tests. It can also be handy to just drop down this list to reassure you that the tags are selecting the set of tests you think they should be.

Alternatively, if you set `Test kind:` to `Pattern` and go to the `Pattern:` field, you can write a test name, or part of a test name with a `*` wildcard at the beginning and/or the end, to match one or more tests.


About time


To the right of the `Test Results` hierarchy are timings for each test, as well as groups of tests. If you’re writing simple unit tests it’s likely these timings will all be rounded to 0ms. But in the case of longer running tests it can be useful to keep an eye on these numbers. One of the sorting options is “by duration”, which can be helpful to see where the time is going if your tests are taking too long to run, for example.


Further along the toolbar are icons for exporting and importing test results. They can be exported as HTML or XML reports, saved to a file. Reports exported this way can later be imported, of course. But if you click the `Import Test Results` icon you’ll see that it also lists your recent test runs – which have automatically been saved (temporarily) for you.


This can be useful if you want to remind yourself what state you were in earlier, and fits nicely with CLion’s `Local History` feature (under `VCS | Local History`) for getting back to earlier versions of your code – independently of using a fully-fledged version control system. CLion automatically saves a snapshot of your changes every time you run and, with test integration, automatically labels each test run with pass/fail status.


Catch up

So CLion’s Catch integration gives us more convenience in how we run our tests, as well as a rich set of viewing and reporting options. Altogether this can make us more productive and offers a faster feedback loop – all important in testing practices such as TDD and BDD.

Catch itself has many more features than we have touched on here, so do be sure to read the docs, maybe starting with the tutorial.
You might also want to follow my personal blog where I often write about Catch.


23 Responses to To Catch a CLion

  1. Eduardo Borsa says:

    March 2, 2017

    That’s awesome!
    Will it be able to run test coverage inside the IDE?
    Just like it is possible in IntelliJ IDEA?


  2. Roman says:

    March 2, 2017

    Do you mean code coverage or something more like functional coverage with user defined cover groups?

  3. Jan Svoboda says:

    March 3, 2017

    Thank you so much for this!
    Switching from Google Test to Catch and FakeIt will be an even better experience now.

  4. Aleksander says:

    March 3, 2017

    Very cool! I used Catch for a small university project and it was very easy to set up. Glad to hear CLion supports it now.

  5. Novikov Egor says:

    March 30, 2017

    Is it possible to view test output in console window (not only test success/fail results, any output from test)?

    • Phil Nash says:

      March 30, 2017

      You should get all the output in the console window. What are you missing that you expect to be there?
      It will look a little different to running it from the command line because it’s actually running the XML reporter then interpreting that back into console-like output.
      You can still run the console reporter by running the Catch executable as a normal process, rather than using a Catch configuration, if you prefer that output.

  6. Josh says:

    March 30, 2017

    Thank you a lot. Recently I’ve discovered my favorite test framework Catch and now it is supported in my favorite IDE! 🙂

    • Phil Nash says:

      March 30, 2017

      Cool, thanks!

  7. Andrew Wilkie says:

    April 9, 2017

    Hi Phil. Thanks for Catch! Love BDD style. As a suggestion for a possible future post, it would be awesome to see how to replace Google Test with Catch in an Integration flow like this –

  8. Sebastian says:

    June 22, 2017

    Hello Phil,

    I was wondering if you could share a few remarks on how we can integrate Catch tests with CMake’s `enable_testing()` infrastructure, and what is a good way to set up the CMakeLists.txt.


    • Phil Nash says:

      June 22, 2017

      That’s a good question, Sebastian. I’m not an expert on CTest so I can’t give an exhaustive answer here, but I have found that there are some trade-offs in how you map them.
      In particular (and this is based on how I’ve seen others set up CTest support in the Catch SelfTest project) it seems that you need an `add_test()` entry in your CMakeLists.txt file for every test case that you want to appear as a separate CTest test. Alternatively you could have a single CTest test that runs all the Catch tests (but then you don’t get as fine-grained output) – or somewhere in-between (e.g. run groups of tests as a single CTest test). You might have to experiment to see what works for you. You could also write a script to generate the CTest calls (by calling Catch first with `--list-test-names-only`, for example) – but there is nothing built in for that right now.

      This is an area that I need to dig into more to see if there’s anything I’m missing that might make this all easier. When I do I’ll be writing it up – but for now that’s the limit of my own experience. Take a look at the end of the CMakeLists.txt file in the Catch SelfTest project for examples of how it is set up for that.

      • Sebastian says:

        June 23, 2017

        Thanks for the thorough reply. You are right that `add_test()` [1] is used in conjunction with `enable_testing()` to add individual tests; this is also where parameters for the test executables can be specified. My issue is more with setting up the test executables.

        I would prefer to have multiple CTests rather than one big one, however I am unsure how I should set up all those different executables with `add_executable`.

        Right now I have a folder structure like this:


        If I want to produce two CTests for my classes foo and bar, then my CMakeLists.txt would look like this:

        add_executable(test1 test1.cpp ../foo/foo.cpp ../foo/foo.h)
        add_executable(test2 test2.cpp ../bar/bar.cpp ../bar/bar.h)


        In addition I need to specify include directories and link libraries for each executable.

        I don’t like the relative “../” in the paths of the foo and bar files. Also, this does not look very maintainable to me, especially if I start to test classes which have several dependencies.

        I guess it’s more an issue of how to properly structure the CMake build than an issue with Catch itself. Anyway, if you find some time to explore this direction further, please let us know in another blog post.


        • Phil Nash says:

          June 23, 2017

          It depends how you want to do it, but you don’t necessarily need multiple executables. You can build one (or a small number of) executable(s), then invoke `add_test()` with the same executable multiple times, specifying one or more test cases on the command line.


          add_executable( testExe test1.cpp test2.cpp foo.cpp bar.cpp etc.cpp )

          add_test( NAME test1 COMMAND testExe "first test" )
          add_test( NAME test2 COMMAND testExe "second test" )
          # etc

          • Sebastian says:

            September 21, 2018

            Hi Phil,

            I just started a new project and was going to use Catch2 for my tests, so I came back to this issue. Since a little more than a year has passed I was wondering what has changed in the meantime. I see that in Catch2 there seems to be better CMake support now, but the documentation is still a little sparse.

            Could you share some more details on how to use CMake with Catch2? In my case I downloaded the whole release and placed it into a subdirectory (${root}/libs/Catch2-2.40, while my tests are under ${root}/tests/…).

            I managed to execute my tests from CLion, but I am uncertain where to go with CMake from here.

            Did you maybe already write a newer blog post that I have missed? What I’m missing are more instructions about what to do if I drop the Catch2 sources into a subdirectory of my project, rather than installing it on my system.

  9. Carl Eckhardt says:

    September 22, 2017

    Hello Phil,
    Welcome to JetBrains. My dream is to work there too, in the Berlin office. 🙂 But that’s another story.

    I was wondering if you could help me understand the best workflow for using Catch.
    Here’s what I’m thinking: use Catch as integrated in CLion for the purpose of getting that feel-good result that the tests pass for the module under development, etc. But then incorporate those tests, as built by CMake (using CTest), into my Bamboo (CI) flow.

    Am I being redundant, or would one method fit all scenarios (Ha!… BDD, get it?)?


    • Phil Nash says:

      September 23, 2017

      Thanks Carl (although, note, we don’t have an office in Berlin – are you thinking of Munich?),

      If I’m reading you right, that is exactly how Catch is intended to be used. You can choose which reporter to use at runtime (using the `-r` or `--reporter` command line switches), so for your Bamboo builds you’d probably use the JUnit reporter (or, perhaps, the XML reporter with some XSLT to render it?) – I’ve not used Bamboo so I don’t know the best way to integrate it there.

      Another thing you might want to do: if you have any long-running tests (e.g. integration or system tests) that you still want to write with Catch, it can be a good idea to “hide” them from the default run (usually by tagging them `[.integration]` or `[.system]` or whatever). The `.` prefix on the tag tells Catch not to include them in the default run, so you can run all your unit tests quickly and easily – but on your CI server you can run those tags too (ideally as separate steps). Of course you can also run them manually locally, just before you check in.
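      For example (a hypothetical sketch; the tag name is just a convention you choose yourself):

```cpp
#include "catch.hpp"

// Hidden from the default run by the leading '.' in the tag.
// Select it explicitly on the command line, e.g.:  ./tests "[.integration]"
TEST_CASE( "full round-trip against a real server", "[.integration]" ) {
    // ... long-running checks go here ...
}
```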

  10. Michael Hewitt says:

    January 9, 2018


    Old thread I realize. We are considering using Catch. Two questions:

    1. The latest Catch2 headers cause CLion 3.1 to hang when clicking on the tests after a run. We have fallen back to Catch v1.11 for now. Are there plans to fix this issue?
    2. Does JetBrains plan to officially support Catch, or is it just a one-off project that you will continue to support in your spare time? A related question: Is Catch2 v3 in the works, ideally officially subsidized by JetBrains?

    Michael Hewitt
    Staff Scientist
    Comtech EF Data Corporation

    • Anastasia Kazakova says:

      January 9, 2018

      By CLion hang do you mean a UI freeze? Are there any thread dumps generated in the log directory? If yes, can you share with us please?

    • Phil Nash says:

      January 10, 2018

      Hi Michael,

      1. CLion 2017.3.0 hadn’t been updated for Catch2 – but those changes did go into 2017.3.1 – and it all seems to work fine for me. So please follow up on Anastasia’s suggestion there.

      2. JetBrains does officially support Catch/ Catch2 – so we are trying to keep up with any changes. The Catch2 transition made me realise (as Catch author) how low-level the CLion integration is (and needs to be to work properly), so it was a while before I realised it was broken (after completely changing a lot of private interfaces). I’m more aware now, so hopefully I’ll either avoid breaking the integration completely or, if necessary, will be more proactive at keeping any changes in lockstep.

      2b. v3 is not specifically in the works. There are a few larger features in the pipeline, but we’re trying to be stricter at sticking to Semantic Versioning, so going to v3 would indicate a breaking API change, rather than size of features. Catch2 is not directly “subsidized” by JetBrains, although I do get to spend some of my work time on it – so indirectly, yes. In terms of support and avoiding breaking the integration what I said in the previous paragraph applies.

  11. U007D says:

    July 14, 2018

    I asked on Twitter, but I thought I’d also ask in a bit more depth here: is there any further documentation/examples of integrating Catch2 with Clion?

    Best-practices recommendation: put `#define CATCH_CONFIG_MAIN`/`#include "catch.hpp"` into its own file. Then how do you have “catch_main” used when the Catch Unit Test configuration is selected, and the conventional main when building debug or release?

    When packaging unit tests with sub-libraries, is it best practice to have “catch_main” CMake scan entire project for tests, or to have each sub-library’s tests reference the central “catch_main”?
