
Test Automation with TeamCity

TeamCity offers a number of ways to help you work with tests. First of all, it presents the test results for each build execution, making it easy to analyze what happened and why something failed. Beyond that, TeamCity includes advanced features like code coverage, flaky test detection, metadata in tests, and the ability to reorder test execution. In this blog post, we provide a brief overview of these features.

Test results

It all starts with presenting the results. On the build configuration home page, for each build, we can see the number of tests that were executed. We can also see the numbers for failed, ignored, and muted tests.


If we choose a single build and navigate to the build overview page, we will see more detail. For instance, for build number 14, we can see that 39 tests passed and 1 test (system.CrashControllerTests) was ignored.


Further on, the Tests tab displays the full list of the tests that were executed:


The controls at the top of the list let us group and filter the results. Since TeamCity preserves test history, we can also see the historical data for test duration:


The duration is associated with the build agent on which the test was executed. The graph immediately shows us whether any agents are performing poorly for the specific test. We may want to fix something in the environment for that agent, or we can also exclude the agent entirely for this build configuration by adjusting the Agent Requirements.

Flaky test detection

Sometimes, seemingly at random, the same test on the same code produces different results. We call such tests flaky. Flaky tests can be caused by various factors: the environment affecting the test or application code, concurrency issues, etc.

It’s really helpful if we can detect such unstable tests sooner and fix them faster. Manually analyzing historical test reports to find flaky tests would take ages. TeamCity detects flaky tests automatically and displays them on a dedicated tab for a given project.


For every single test, we can view its history, assign an investigation, or temporarily mute the failure. For instance, we can mute all the flaky tests while someone is fixing them. Muting the tests allows the build to pass so that the build chain can continue executing. Once the tests are fixed, we can unmute them.


Here’s an example of a flaky test. In the screenshot below, we can see that the test status changed from passed to failed and back several times, even though there were no related changes in version control. Hence, the test was marked as flaky so that we could easily find it in the test report.


The test results are associated with the build agent, which helps diagnose failures. For instance, if a test fails only on a specific build agent, the failure may be related to the environment rather than the test logic itself.

Code coverage

Code coverage is a set of metrics that measure how thoroughly your code is exercised by unit tests. To have code coverage information displayed in TeamCity for the supported tools, we first need to configure it in the dedicated section of the build runner’s settings page.

For instance, for a Maven build step, the Code Coverage section is at the bottom of the configuration page. There we can choose the code coverage runner and specify which binaries should be inspected.


After the test execution, the coverage metrics are presented on the build overview page. We can view the full report on the Code Coverage tab.
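The bundled coverage engines are configured entirely in the runner settings, but if your coverage tool isn’t among the supported ones, the numbers can still be fed to TeamCity as build statistics via service messages printed to the build log. Here is a minimal sketch; the `CodeCoverageL` and `CodeCoverageM` keys are assumed to be the line- and method-coverage statistic keys documented for your TeamCity version, so verify them before relying on this:

```python
def build_statistic(key: str, value: float) -> str:
    """Render a buildStatisticValue service message; TeamCity picks it up
    from the build's stdout."""
    return f"##teamcity[buildStatisticValue key='{key}' value='{value}']"

# Hypothetical percentages computed by a custom coverage tool.
line_coverage = 87.5
method_coverage = 92.0

# Assumed statistic keys for line and method coverage; double-check
# against the service-message docs for your TeamCity version.
print(build_statistic("CodeCoverageL", line_coverage))
print(build_statistic("CodeCoverageM", method_coverage))
```

Once reported this way, the values also become available for custom charts and for the “fail build on metric change” failure condition.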




Metadata in tests

Sometimes you need to provide more data with the test results: include a screenshot, add some numeric data, or attach a description. This is useful when analyzing why a test failed, where any additional information can save precious time.

In TeamCity, a test may be associated with extra information – the metadata. The types of data we can associate with a test are text, numeric values, images, and files that are published as build artifacts.

We can use service messages to associate the data with the test. The data is then rendered with the test results:


For more about metadata in tests, read this blog post.

Risk group tests re-ordering

A test suite with a large number of tests takes some time to execute; for large suites, the execution time can run to many hours. What if the very last test fails and we want to rerun the test suite? Would we need to wait a few more hours just to learn that the same test fails again? It would be much better to run the recently failed tests first.

TeamCity provides a feature for running the risk group tests first. The feature works with TestNG, JUnit, and NUnit test suites and is supported for Ant and IntelliJ IDEA runners.


13 Responses to Test Automation with TeamCity

  1. Tim Boehm says:

    September 11, 2019

    Is there any way to set this up to work with unit tests with Python projects?

    • Anton Arhipov says:

      September 11, 2019

      Hello Tim!

There’s a Python unit test reporting plugin for TeamCity.
However, I haven’t tried it myself yet, so I can’t say whether it covers all the features described in the blog post.

  2. pallavi says:

    November 1, 2019

How can one integrate or use a plugin of some sort to incorporate UFT QTP automation tests with TeamCity? Is that possible?

    • Anton Arhipov says:

      November 8, 2019

      I guess the question is how to display the test results from a 3rd party tool in TeamCity? It’s possible to include any external report in a custom tab as described here:

      • Sunil says:

        December 5, 2019

        Hi Anton Arhipov,

How do I trigger UFT automation scripts from TeamCity? Is there a plugin available for this?

        • Anton Arhipov says:

          December 10, 2019

          Hello Sunil.

What kind of tool are you asking about specifically? If the tool provides some sort of script or executable, then it is always possible to run it via a Command Line build step in TeamCity.

  3. Michal Tsadok says:

    November 14, 2019

    Is there a way to invoke additional automation tests after a successful build?

    • Anton Arhipov says:

      November 14, 2019

You can create one more build configuration that depends on the first configuration via a snapshot dependency and executes only if the first build was successful.

  4. Wilf says:

    January 21, 2020

I have some parameterized JUnit 5 tests that take a list of requests, and I want to report the total number of requests rather than the number of test executions. E.g. a test is run 2 times with a list of 10 requests each, so I want the test report to reflect 20 rather than 2. Is this possible?

    • Anton Arhipov says:

      January 23, 2020

      You could use TeamCity’s service messages to report the figures from the tests. However, it looks like you need some sort of accumulator to report the totals instead of the individual numbers. There’s no such functionality out-of-the-box so you’d have to invent something. Maybe it’s possible to store the counts in some environment variable and then add new figures to that variable and then report via a service message. Or maybe it is possible to somehow implement it in a custom JUnit executor, I’m not sure.

  5. kumar says:

    January 23, 2020

I have a jar which contains all the test cases that need to be run. When I try to run it through TeamCity as a command-line build step, it says the Windows process cannot be started. As per my understanding, TeamCity creates a Windows process while triggering a job on a Windows agent, and we run a batch file which contains a line to run the jar file (java -jar abc.jar). Please help with how to run this jar file.

  6. Estevão says:

    January 30, 2020

Really good to see that.
But there is an issue I would like to know how to deal with.

    I can report metadata for tests using:

    But I can’t fail based on this metadata
    Using: Failure Conditions->Fail build on metric change->[my additional test data]

    If it fails it’s in the build, but not in the test.
    Although there is the option for “tests duration (secs)”

    I would love to have this! Or at least know how to accomplish that.

    Thanks and best to you,

  7. Mohamed Amin says:

    February 5, 2020

    Hello Anton, Thanks for the useful information. Is there a plugin to connect TeamCity with Micro Focus ALM Octane? I would like to simulate the following work flow:
    1. Start a test suite consisting of automated test cases in ALM Octane.
2. Job(s) would be created in TeamCity accordingly to execute the automated test cases (using Micro Focus UFT) on the system under test (e.g. a website).
    3. TeamCity will gather the results and pass it forward to ALM Octane.

Please let me know if you also have any remarks on the above-mentioned workflow.
