Performance Profiling in Continuous Integration: dotTrace and TeamCity

Performance testing is becoming increasingly popular as one of the most effective ways to prevent application performance regressions.

Usually, by performance testing developers mean web application load testing, stress and spike testing, i.e. all types of tests that measure system behavior under particular loads. Nevertheless, performance testing is not only about loads. Tracking the execution time of critical application features (like rendering a scene in a game application, or filtering a large amount of data in an enterprise application) is also an important aspect of performance testing.

When tracking the execution time of critical features, a typical CI workflow looks as follows:

  1. Write an integration test that runs performance-critical functionality in your application.
  2. Configure your CI system to run this test during a build.
  3. Configure the CI system to check test execution time. If it exceeds some threshold (expected performance), fail the build.
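The test from step 1 can be an ordinary NUnit test. As a minimal sketch (the ReportBuilder class and its method are hypothetical), NUnit's MaxTime attribute even gives a crude in-test variant of the step 3 check:

```csharp
using NUnit.Framework;

namespace IntegrationTests
{
    // Hypothetical performance-critical feature
    public class ReportBuilder
    {
        public void BuildLargeReport() { /* performance-critical work */ }
    }

    [TestFixture]
    public class NaiveTests
    {
        // MaxTime fails the test if it runs longer than 2000 ms,
        // moving the threshold check from the CI server into the test itself
        [Test, MaxTime(2000)]
        public void TestBuildLargeReport()
        {
            new ReportBuilder().BuildLargeReport();
        }
    }
}
```

Because MaxTime works per test inside the runner, the CI server needs no special failure condition; the failure still carries no profiling data, though.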

Although such an approach looks easy to implement, it has some drawbacks. First, depending on the CI system you use, step 3 may be quite tricky: you need to add a custom failure condition for the duration of a specific test, and your CI server may not support this out of the box. Second, and more importantly, when such a build fails because of your test, you don’t have a clue as to why it happened.

To find the exact methods that have become slower due to regression, you need either to thoroughly examine your code or (and this is much more effective) profile the problematic test in a performance profiler. So why not do this right on the CI server? Why not run the test under a profiler, and if there’s a slowdown, fail the build and get a performance snapshot?

We tried to solve this problem using two of our products: the dotTrace performance profiler and the TeamCity CI server. The result of our efforts is the dotTrace Profiler plugin for TeamCity. With this plugin, the performance testing workflow becomes even simpler:

  1. Write integration tests.
  2. Add the plugin runner as a build step to your build configuration.
    In the plugin parameters, specify the methods (tests themselves or the methods they run) whose performance you want to measure. Specify the corresponding performance thresholds: absolute own time or total (own + call subtree) time, in ms. It’s also possible to compare measured values against values from previous successful builds.

Now, if at least one method exceeds its threshold, the plugin will fail the build and save the collected performance snapshot to build artifacts.

Here’s how the workflow looks in one picture:

Plugin workflow

For this plugin to run, only the free dotTrace console profiling tool is required. You will only need the full version of dotTrace if you want to analyze the collected performance snapshot.

Now let’s take a closer look at how to use the plugin.

1. Install the Plugin and dotTrace Console Profiler

  1. On your TeamCity server, copy the plugin ZIP archive to the plugins directory located in your TeamCity data directory.
  2. Restart the TeamCity Server service.
  3. As the dotTrace console profiling tool is required for the plugin, download and unzip the console profiler to any directory on a TeamCity build agent.

IMPORTANT! A unit test runner must also be installed on the same agent. In our example, we will use NUnit.

2. Write an Integration Test

Let’s suppose we have an application with a killer feature called Foo. For example, we have a Foo class whose Foo() method, in turn, uses a Goo class: it creates a Goo instance and runs its Goo() method, which calls some other methods in its turn. We assume the execution time of the Foo feature is crucial for the user experience, so we add a performance NUnit test that runs the feature:

using NUnit.Framework;

namespace IntegrationTests
{
    [TestFixture]
    public class PerformanceTests
    {
        [Test]
        public void TestFooFeature()
        {
            Foo foo = new Foo();
        }
    }
}
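For reference, here is a minimal sketch of the hypothetical Foo and Goo classes (the post does not show their bodies; in C#, Foo() and Gooo() sharing their class names means the feature work happens in the constructors, which matches the test body above — a busy loop stands in for the real work, and the MyProject namespace is assumed):

```csharp
namespace MyProject // assumed namespace
{
    public class Goo
    {
        public Goo()
        {
            // Stand-in for the real work done by the Goo feature code
            for (var i = 0; i < 1000000; i++) { }
        }
    }

    public class Foo
    {
        public Foo()
        {
            // The Foo feature creates a Goo instance, which does its own work
            var goo = new Goo();
        }
    }
}
```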

3. Create Profiling Configuration

The next step is to tell the dotTrace console tool what application it has to profile. This is done via an XML configuration file. If you have a standalone dotTrace installation, you can simplify file creation by using the Configuration2Xml tool, which is located in the dotTrace installation directory. Otherwise, create the file manually using the format described below.

  <root>
    <HostParameters type="LocalHostParameters" />
    <Argument type="StandaloneArgument">
      <FileName>C:\Program Files (x86)\NUnit 2.6.4\bin\nunit-console-x86.exe</FileName>
      <Arguments>"%CHECKOUTDIR%\MyProject\IntegrationTests\bin\Release\IntegrationTests.dll" /framework:net-4.5</Arguments>
      <WorkingDirectory>C:\Program Files (x86)\NUnit 2.6.4\bin\</WorkingDirectory>
      <ProfileChildProcesses>True</ProfileChildProcesses>
      <Scope>
        <ProcessFilters>
          <Item>
            <ProcessNameFilter>nunit-console*</ProcessNameFilter>
            <Type>Deny</Type>
          </Item>
        </ProcessFilters>
      </Scope>
    </Argument>
    <Info type="PerformanceInfo">
      <MeasureType>Sampling</MeasureType>
      <SearchPaths />
      <PatternFilters />
      <DenyAttributeFilters />
    </Info>
  </root>
  • To profile tests, we must profile the test runner that runs them (NUnit in our case). Since the runner is a standalone application, the main Argument tag has the type="StandaloneArgument" attribute.
  • The FileName tag specifies the path to the NUnit console runner installed on the agent.
  • In Arguments, we specify the path to the DLL with integration tests. As you don’t know in advance where TeamCity will put the built DLL, you cannot specify the full absolute path to it. Instead, you can use the %CHECKOUTDIR% placeholder. During the build, the plugin will replace the placeholder with the checkout directory path, so all you need is to specify the path relative to the checkout directory. In our example, it is %CHECKOUTDIR%\MyProject\IntegrationTests\bin\Release\IntegrationTests.dll.
  • The WorkingDirectory tag specifies the path to the NUnit bin directory.
  • NUnit creates a separate runner process to run tests. To profile only the runner process, we must exclude the main NUnit process from profiling as follows:
    • Set the ProfileChildProcesses tag to True.
    • In the Scope | ProcessFilters | Item | ProcessNameFilter, specify nunit-console*.
    • In the Scope | ProcessFilters | Item | Type, specify Deny.
  • In MeasureType, specify the type of profiling you want to use. This should always be the Sampling type as it gives the most realistic time values. Note that the plugin does not support Timeline profiling.
  • Leave other options as shown above and save the file. In our example, we will name it myconfig.xml and store it in the console profiler directory.

4. Create Build Configuration

IMPORTANT! To help ensure consistency of profiling results, you should assign the build configuration that uses the plugin to a specific build agent (hardware agent is strongly recommended). For further instructions please refer to TeamCity documentation.

The next step is to create a build configuration dedicated to performance testing in TeamCity. For example, we already have My Project. Let’s add to that project a configuration consisting of two steps: building the application and running performance tests. To save time, let’s proceed right to configuring the second build step.

  1. In build configuration settings, go to Build Steps and click Add build step.
    Creating build configuration
  2. In Runner type, select dotTrace Profiler.
  3. Set the following plugin options and then click Save:
    • Console profiler path – the path to the directory storing dotTrace console profiler. In our example, it’s C:\Console Profiler
    • Profiling config file path – the path to the XML configuration file we created on the previous step. In our example, it’s C:\Console Profiler\myconfig.xml
    • Temp directory path – the path for storing a temporary performance snapshot. Note that this snapshot is auto-deleted each time the build is run.
    • Publish performance snapshot to artifacts – sets the condition for publishing a snapshot to the artifacts directory: Always, Never, or On exceeding thresholds. While the latter is the most reasonable choice, keep in mind that, depending on application complexity, the snapshot may take up hundreds of MB of disk space. We recommend updating your artifact cleanup policy so that TeamCity deletes old snapshots from the artifacts folder after some time.
    • Threshold values – specify the list of methods whose performance you want to check. The pattern is Namespace.Class.Method TotalTime OwnTime, where
      • TotalTime is the method’s execution time, including its own time and the time of the method’s call subtree, in ms;
      • OwnTime is the method’s own execution time, in ms.

      (Setting a value to 0 will make the plugin ignore the threshold.)

      If we want to check the method’s time against the corresponding time in previous successful builds, we have three options: a) take the value for comparison from the first successful build, b) take it from the last successful build, or c) compare against the average value calculated over all successful builds. In that case, instead of an absolute threshold value in ms, we use one of the following prefixes:
      a) F – take the value from the first successful build,
      b) L – take the value from the last successful build, or
      c) A – use the average value over all successful builds.
      Then, set the tolerance to indicate by how much the time value may be exceeded, as a percentage.

Configuring the plugin

In our example, we want to track the total execution time of the Foo feature, so we add a threshold for the total time of the TestFooFeature() test. F15 means that the value for comparison (with up to 15% tolerance) must be taken from the first successful build. E.g., if during the first successful build run dotTrace measures 1000 ms total time for the method, the method’s threshold for all following builds will equal 1150 ms (1000 + 1000*0.15).

In our example, we also want to check the total time of the Goo() method as it strongly impacts the performance of the Foo feature. Checking the method’s execution time during the build simplifies our job: in case of performance issues with Foo, we will know right away if Goo() is the one to blame. As a threshold for the method, we also use the value from the first successful build, plus 15% tolerance.
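Putting both thresholds together, the list for our example might look like this (the namespace-qualified names are assumptions based on the test project sketched above; the pattern is Namespace.Class.Method TotalTime OwnTime):

```
IntegrationTests.PerformanceTests.TestFooFeature F15 0
MyProject.Goo.Goo F15 0
```

Each line checks the method’s total time against the first successful build with a 15% tolerance; the own-time value of 0 disables that check.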

5. Run the Build

Now it’s time to run the build! As we decided to use values from the first successful build for the thresholds, the first build won’t fail in any case – it is used only to set the baselines for the TestFooFeature() and Goo() methods. The time values from this build will become the “golden standard” for all subsequent builds.

  1. Run the build.
    First successful build
  2. After the build is finished successfully, we can take a look at the profiling results. Open the Build Log for this build and expand the performance testing step.
    Build Log for the first successful build
    TeamCity had no data on the methods’ execution times before the first build, so the expected values are equal to 0.
    If we run the build a second time, the expected values will be calculated based on the results from build #1:
    Build log for the second build

Suppose now that someone has edited the Goo() method and made it slower. (We’ll emulate this using Thread.Sleep(200).) Now, the build fails with the Performance thresholds exceeded: 2 message:
Failed build overview
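The injected slowdown might look like this (a sketch; Thread.Sleep(200) emulates 200 ms of accidental extra work in Goo, and DoRealWork() stands in for its original, unshown body):

```csharp
using System.Threading;

public class Goo
{
    public Goo()
    {
        Thread.Sleep(200); // emulated regression: 200 ms of extra work
        DoRealWork();      // hypothetical original body
    }

    private void DoRealWork() { }
}
```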

The Build Log provides some info on what happened:
Build log of the failed build

As we instructed the plugin to save a performance snapshot in case of any issues, it has stored the archive with the snapshot in the artifacts folder:
Snapshot in artifacts

Now, we can analyze the snapshot and find the exact cause of the performance flaw:
Snapshot in dotTrace

Note that all values are reported as TeamCity statistic values, so you can build trend graphs for the measured time values if you like. To do this, open the Parameters tab of any build and switch to Reported statistic values:
Statistic values

Click View trend for a particular parameter to see its diagram of changes:
Statistics trend


If you use continuous integration in your development environment and it involves performance testing, don’t hesitate to try the dotTrace plugin. We’re just getting started with it! In future releases we plan to integrate the plugin right into TeamCity’s unit test runner types.

Profile with pleasure!
