Taking Memory Snapshots by Condition in dotMemory 4.4

While dotMemory 4.4 is primarily focused on dotMemory Unit 2.0, it also brings a number of useful updates. One such improvement is the ability to take memory snapshots only when a certain condition occurs. The most common use case for this is taking a snapshot when the memory consumption in your application increases dramatically. As catching the moment manually (using the Get Snapshot button) may be quite tricky, dotMemory 4.4 allows you to automate the process.

All you need to do is set a particular condition, such as:

  • Memory consumption increase in MB
  • Memory consumption increase in %
  • Time interval in minutes (snapshots are taken periodically)

After the condition occurs, the snapshot will be collected automatically.

For example, let’s say we have a web application that allocates a huge amount of memory after some actions (but we’re not sure of their exact sequence). What is the root cause? Is it an inefficient algorithm, or just IIS reserving memory for future use? In this case, the best solution is to attach dotMemory to the app pool that runs the application and take a memory snapshot when memory consumption surges. As we don’t want to spend all day waiting for this moment, we will set a condition for taking the snapshot.

  1. Attach dotMemory to the corresponding app pool (w3wp.exe process).
  2. As soon as the session starts, click Get Snapshot By Condition.
    Profiling session
  3. In the Get Snapshot by Condition window, specify the condition for taking a snapshot. In our case, we will take a snapshot if memory consumption increases by 100 MB. The maximum number of snapshots that can be taken automatically is set by the Limit the max number of snapshots to parameter.
    Get Snapshot by Condition window
    That’s it! As soon as we click Start, dotMemory will track overall application memory usage. When usage exceeds the baseline value (taken at the time of clicking Start) by 100 MB, dotMemory will take a snapshot, all on its own!

Getting snapshots by condition is enabled

Note that, as soon as the defined condition occurs, the memory consumption at that moment becomes the new baseline.
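The baseline behavior described above can be sketched as follows. This is purely illustrative of the documented behavior, not dotMemory’s actual implementation, and all names are our own:

```csharp
// Illustrative model of "get snapshot by condition", not dotMemory's code.
class SnapshotCondition
{
    private long _baselineBytes;     // recorded when profiling starts
    private int _snapshotsTaken;
    private readonly long _thresholdBytes;
    private readonly int _maxSnapshots;

    public SnapshotCondition(long currentBytes, long thresholdBytes, int maxSnapshots)
    {
        _baselineBytes = currentBytes;
        _thresholdBytes = thresholdBytes;
        _maxSnapshots = maxSnapshots;
    }

    public bool ShouldTakeSnapshot(long currentBytes)
    {
        if (_snapshotsTaken >= _maxSnapshots)
            return false;
        if (currentBytes - _baselineBytes < _thresholdBytes)
            return false;

        _baselineBytes = currentBytes;   // the trigger point becomes the new baseline
        _snapshotsTaken++;
        return true;
    }
}
```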

You can try this feature if you’re using dotMemory 4.4 or later (as a separate product or as part of ReSharper Ultimate).

Posted in How-To's | Tagged , | Leave a comment

Code Completion Improvements in ReSharper 9

ReSharper 9.x continues to build on the radical improvements in ReSharper 8 with even more completion features and enhancements to existing ones.

Here’s what we’ve got:

  • Completion filters let you take control of completion results when the list gets too big. They let you fine-tune the set of results using a row of filter buttons shown below the completion list. These buttons can be toggled either with shortcut keys or with the mouse (left button adds an element, right button removes it):
    Code completion filters
    The behavior of completion filters is configurable in ReSharper’s options. You can choose whether to enable or disable filters, whether to show the filter bar, and whether filter state should persist between successive invocations. It is also possible to add custom filters which can take care of special circumstances, such as collisions between identically named APIs:
    Custom completion filter configuration
  • Sorting by relevance, a ReSharper 9.1 feature, ensures that the most relevant completion items are listed first. Also, the selection no longer jumps between items while the completion list is being populated.
  • Fuzzy matching ensures that, even if you make a mistake when typing an identifier, code completion will still suggest the right item if it is sufficiently similar. Here’s an example:
    Fuzzy matching in code completion
  • Combo returns are a novel feature for methods that have to do either a return or a yield return. When you use smart completion in a return statement, ReSharper filters the list to show only items that match the method’s return type:
    Combo returns
    Return completion also detects the use of an enum member that is being returned:
    Enum members in return completion
  • Parameter info now includes ReSharper’s annotations:
    Annotations in Parameter Info
    This new behavior is configurable: you can elect to show all annotations, only NotNull/CanBeNull, or hide them altogether. You can also now limit the number of signatures that are shown in the Parameter Info list:
    Parameter Info configuration
  • Double completion improvements (read this for more info on double completion) now show potentially overridable methods after override, static members in non-static context, instance members in a static context, sealed types after typing : in a class declaration, abstract types after typing new, and non-partial classes after typing partial class.
  • Smart completion for strings determines situations where a parameter that takes a string actually takes a specific range of values available elsewhere. For example, here is how ReSharper shows smart completion for a WebRequest.Method property:
    Smart completion for strings
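To make two of these scenarios concrete, here is a small sketch. The comments describe where completion would kick in; WebRequest is the standard .NET class mentioned above, while the class and method names are our own:

```csharp
using System.Collections.Generic;
using System.Net;

class CompletionExamples
{
    // Combo returns: because the return type is IEnumerable<int>,
    // smart completion in a return statement offers the "yield return" combo.
    IEnumerable<int> Numbers()
    {
        yield return 1;
    }

    // Smart completion for strings: WebRequest.Method takes a string,
    // but only a specific range of values makes sense, so completion
    // suggests "GET", "POST", "PUT", and so on.
    void SendRequest()
    {
        var request = WebRequest.Create("http://example.com/");
        request.Method = "POST";
    }
}
```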

Finally, as always, we’ve made performance improvements to code completion. The completion mechanism was partially rewritten to make lookup items lazy and to ensure that only the items in the current view are rendered, which makes completion noticeably faster.


Posted in ReSharper Tips&Tricks | Tagged , , , , | 8 Comments

Analyzing slow HTTP requests with dotTrace

Modern applications are becoming more complex and use an increasing number of subsystems such as databases, web services, third-party frameworks, and so on. When your application suddenly starts performing slower, it’s not always easy to understand which subsystem is to blame. This is when profiling can come to your rescue, and even then it may be tricky. Imagine that the calls in your application are evenly distributed among multiple subsystems. If your profiler doesn’t aggregate calls by particular subsystems, finding a performance flaw will become a headache. That’s why we always say that the more performance aspects and subsystems your profiler supports, the better.

A while ago we introduced support for analyzing SQL queries in dotTrace 6.1. Now the time has come for the next data filter: dotTrace 6.2 supports filtering by incoming HTTP requests. When analyzing a snapshot, you can get exact data on how a particular HTTP request is processed. For example, to process a request, a web application may access databases, communicate with a number of external resources via their APIs, process data, and so on. A performance flaw at any of these steps may result in a slow overall response from your application. The Incoming HTTP Requests filter will help you determine the exact cause and find the methods that run slowly.

Important notes:

  • HTTP requests can be profiled only in the Timeline profiling mode.
  • Only incoming HTTP requests are shown. More specifically, Timeline Viewer selects the time intervals between the server receiving a request and sending the response.

Let’s take a more detailed look at how this works.

Let’s say we’ve updated a web application and it now works slower than before. Our workflow is pretty straightforward: we use Timeline profiling to reproduce the slowdown and take a snapshot, and then open the collected snapshot in Timeline Viewer.

If we use dotTrace 6.2, we see that Interval Filters now has a new filter, Incoming HTTP Requests:

Interval Filters

In our example, the application spends a lengthy 31 seconds processing HTTP requests. Let’s turn on the Incoming HTTP Requests filter to see what’s happening:

Sub-filters for HTTP requests

After we turn on the Incoming HTTP Requests filter, two more sub-filters appear: URL and Method. The URL sub-filter shows the time distribution between requests to particular URLs. The Method sub-filter allows us to understand how the request times are distributed among particular HTTP methods.

If the URLs do not fit the URL filter area, consider opening them in a separate Incoming HTTP Requests window.

HTTP requests window

As we can see, our application spent most of the time processing requests to the URL /Employee/SearchByName. To find out what exact methods were called, we should apply the filter by this URL. Double-click on the URL, and now all other filters show data only for the time intervals where the request to /Employee/SearchByName was processed.

The Call Stack | Top Methods window shows that most of the processing time was spent in a single method:

Call stack

We click on this method to take a look at its source code in the Source View window. In our case, the web application spends most of its time in the ASP-generated Execute() method for the index page. This suggests that the application probably lags because of the large amount of data it has to display, so limiting the number of entries per page could help solve the problem.

Call Tree and Source View

We hope you find this new feature useful when developing and maintaining web applications. To try it on your own, download dotTrace 6.2 or later.

Posted in dotTrace Tips&Tricks, How-To's | Tagged | 3 Comments

ReSharper 10 EAP 2

We launched the Early Access Program for ReSharper 10 about a week ago with a set of fixes and improvements to JavaScript and TypeScript. We hinted that we had more to offer as soon as we merged the new material. Today is the day when we start delivering. ReSharper 10 EAP 2 adds considerable changes in ReSharper itself, as well as in two other ReSharper Ultimate tools: ReSharper C++ and dotPeek. Below is the list of highlights:


Out-of-the-box postfix templates

Do you know postfix templates? There’s a ReSharper plugin, started over two years ago, that has since garnered quite a following. The idea has even been implemented in IntelliJ IDEA and its derivative products, where it is available out of the box. In ReSharper 10, to come full circle, we’re including postfix templates in core ReSharper, and they are already available in ReSharper 10 EAP 2.

From now on, without installing any plugins, you can enjoy postfix templates in C#. For example, you can start with a boolean expression and then wrap it in an if statement that checks whether it is true.

More than that, if you’re dealing with null checks on a regular basis, use .null and .notnull postfix templates:
ReSharper's postfix template to check if something is not null (before expanding)
This will both check if the selected entity is not null and wrap it in an if statement:
ReSharper's postfix template to check if something is not null (after expanding)
Need to throw an exception if the condition is met? Enter the exception class name, then a dot, and look what we’ve got here:
ReSharper's postfix template to throw an exception (before expanding)
After selecting the .throw template, you have the complete throw statement, and the caret in place to enter arguments:
ReSharper's postfix template to throw an exception (after expanding)

Usage-aware Go to declaration

In other news, ReSharper 10 EAP 2 extends the functionality of Go to declaration (as well as Ctrl+click) so that you can also use the shortcut to look up usages.

In the easiest case where you have one declaration and one usage, you can use Go to declaration to simply switch between them: you can use a single shortcut instead of two different ones.

If you have multiple usages of a symbol, here’s what you can do:

  • Invoking Go to declaration on a usage for the first time gets you to the declaration as you would expect.
  • Invoking Go to declaration for the second time (when already on the declaration) gets you back to the usage where you came from.
  • Subsequent Go to declaration hits take you to further usages of the symbol, one usage at a time. Navigation between usages is aided by a Find Usages-like pane that lists the found usages, provides additional controls to move between them with the mouse, and lets you send all found usages to the regular Find Results window if needed.
    A pane that shows found usages for the extended Go to Declaration

Needless to say, at any point you can hit Esc to revert to the traditional, non-usage-aware Go to declaration.

Further fixes

We’ve fixed more red code in Xamarin.Forms projects and elsewhere. Other fixes are scattered across TypeScript support, Feedback Center, code completion and Code Cleanup, among other subsystems. Here’s the (hopefully) full list of fixes in ReSharper 10 EAP 2 if you’re interested in details.

ReSharper C++

The ReSharper C++ team has merged essentially everything they’ve worked on since mid-August. About 90 issues were closed during that time span, and the highlights include the following:

  • A new context action to create a derived class when the caret is on a class declaration:
    Create derived class action in ReSharper C++
  • New context actions to replace decltype with the underlying type, as well as to substitute a template type alias:
    Substitute template type alias
  • A new gutter icon (in addition to a code inspection and a quick-fix) that displays whenever ReSharper C++ detects that one local declaration hides another local declaration:
    Declaration hides previous local declaration

Other notable changes include a more elaborate way to determine that a function can be made const, support for functions marked with [[noreturn]] and __declspec(noreturn) attributes, a generate action that creates Google Mock stubs, and a way to navigate from a function declaration’s parameter to the corresponding parameter in the function’s definition.


dotPeek

The dotPeek team has implemented one of the most anticipated features: you can now navigate to IL code from any point in the C# decompiled code viewer:
Navigate to IL code viewer in dotPeek
IL code is shown in a separate tool window that is synchronized with the main code viewer:
dotPeek IL viewer tool window

IL code support is not yet implemented in ReSharper but we’ll be working on that as well.

In another considerable change, dotPeek now supports the same set of color themes as Visual Studio does: for example, you can select Blue or Dark theme, or choose to synchronize your color scheme preference with Visual Studio settings.
Dark theme in dotPeek

What’s next?

If the changes described above are relevant to you, you’re more than welcome to download ReSharper 10 EAP 2.

We have more significant changes waiting in line to be made public in further EAP builds. Stay tuned.

Posted in How-To's | Tagged , , , , | 21 Comments

Memory Testing on a CI Server. dotMemory Unit Standalone Launcher

After we announced the dotMemory Unit framework, the most frequently asked question was, “When will the standalone launcher for CI be available?” Finally, the time has come! Along with dotMemory 4.4, we have also released dotMemory Unit 2.0 which contains not only a lot of improvements but also the dotMemoryUnit.exe tool.

The tool (distributed as a zip archive*) works as a mediator: it runs a particular standalone unit test runner under a profiler and provides support for dotMemory Unit calls in the running tests.

*Important! If your domain policy is to treat files from the Internet as unsafe, don’t forget to unblock the zip file by clicking the Unblock button in file properties.

For example, this is how you can run NUnit tests from some MainTests.dll file:
dotMemoryUnit.exe -targetExecutable="C:\NUnit 2.6.4\bin\nunit-console.exe" -returnTargetExitCode -- "E:\MyProject\bin\Release\MainTests.dll"


  • -targetExecutable is the path to the unit test runner that will run tests.
  • -returnTargetExitCode makes the launcher return the unit test runner’s exit code. This is important for CI as the build step must fail if any memory tests fail (test runners return a nonzero exit code in this case).
  • The parameters passed after the double dash (--) are the unit test runner’s arguments (in our case, the path to the DLL with tests).

Now, it’s easier than ever to make memory tests a part of your continuous integration builds. Simply add the command shown above as a build step on your CI server, and it will run your tests with the dotMemory Unit support.
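For reference, a memory test that such a build step would execute might look like this. This is an NUnit-based sketch; the SessionCache and Session types are hypothetical stand-ins for your code under test, while dotMemory.Check and GetObjects belong to the dotMemory Unit API:

```csharp
using System.Collections.Generic;
using NUnit.Framework;
using JetBrains.dotMemoryUnit;

// Hypothetical types under test, included only to keep the sketch self-contained.
public class Session { }

public class SessionCache
{
    private readonly List<Session> _sessions = new List<Session>();
    public void Open() { _sessions.Add(new Session()); }
    public void Close() { _sessions.Clear(); }
}

public class MemoryTests
{
    [Test]
    public void Cache_DoesNotLeakSessions()
    {
        var cache = new SessionCache();
        cache.Open();
        cache.Close();

        // If any Session objects survive Close(), the assertion fails,
        // the test runner returns a nonzero exit code, and the CI step fails.
        dotMemory.Check(memory =>
            Assert.That(
                memory.GetObjects(where => where.Type.Is<Session>()).ObjectsCount,
                Is.EqualTo(0)));
    }
}
```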

For example, this is how the build log of a failed step looks in TeamCity:

dotMemory Unit launcher output

However, if you’re a TeamCity user, we can offer a much more convenient solution.

Integration with TeamCity

Along with dotMemory Unit 2.0, we are releasing a plugin for TeamCity that adds dotMemory Unit support to all .NET test runner types. Let’s take a more detailed look at how this works.

TIP! If you want to take a look at a working example, here is a sample build project on our public TeamCity server. Project build configuration is available at this link.

  1. On your TeamCity server, copy dotMemoryUnit.zip (get the latest version from Artifacts on JetBrains build server) to the plugins directory located in your TeamCity data directory.
  2. Restart the TeamCity Server service.
    Now, all .NET test runners in TeamCity provide support for dotMemory Unit.
  3. As the dotMemory Unit standalone launcher is required for the plugin to work, you should provide it to your build agent. There are two ways to do this:
    • Download and unzip the launcher to any directory on a TeamCity build agent. Don’t forget to Unblock the zip!
    • [Recommended] Use the launcher from the dotMemory Unit NuGet package referenced by your project.
      Note that if you omit binaries from the source control repository, you can use TeamCity’s NuGet Installer runner type. It will perform NuGet Package Restore before the build. All you need is to add the NuGet Installer “build step” and specify the path to your solution.
      NuGet Installer Step
  4. Now, update the step used to run tests in your build configuration. Open the corresponding build step in your build configuration:
    TeamCity Unit Test Build Step
  5. Note that after we installed the dotMemoryUnit plugin, this build step now additionally contains the JetBrains dotMemory Unit section. Here you should:
    • Turn on Run build step under JetBrains dotMemory Unit.
    • Specify the path to the dotMemory Unit standalone launcher directory in Path to dotMemory Unit. Note that as we decided to use the launcher from the NuGet referenced by our project (see step 3), we specify the path relative to the project checkout directory.
    • In Memory snapshots artifacts path, specify a path to the directory (relative to the build artifacts directory) where dotMemory Unit will store snapshots in case memory test(s) fail.

    dotMemory Unit support

  6. Save the configuration.

Done! Now, this build step supports tests that use dotMemory Unit.

From the end user’s point of view, nothing has changed. If you run the configuration and any of the memory tests fail, the results will be shown in the overview:
TeamCity failed build overview

The Tests tab will show you the exact tests that have failed. For example, here the reason had to do with the amount of memory traffic:
TeamCity Tests Tab

Click on a failed test to see exactly what has gone wrong:
TeamCity test result

Now, you can investigate the issue more thoroughly by analyzing a memory snapshot that is saved in build artifacts:

Build Artifacts - Memory Snapshot

Note that to open memory snapshots, you will need a standalone dotMemory (or ReSharper Ultimate) installation.

Snapshot in dotMemory

Additional Benefits

If you’ve used dotMemory Unit before, you probably know that it required the ReSharper unit test runner to work. Since v2.0, you can also run tests via dotMemory Unit’s standalone launcher, and the list of supported unit testing frameworks has expanded to include:

  • MSTest*
  • NUnit*
  • XUnit
  • MBUnit
  • csUnit

*Can also be run from Visual Studio with ReSharper

Nevertheless, if your unit testing framework is missing from this list (e.g., MSpec), you can still use it with dotMemory Unit.

All you need to do is tell dotMemory Unit where your test method starts and where it ends. This is done with two methods: DotMemoryUnitController.TestStart() and DotMemoryUnitController.TestEnd().

We recommend that you create an IDisposable class that uses these methods, and then wrap the code in your tests with the using statement that creates an instance of this class:
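A minimal sketch of such a wrapper (the class name is our own; the two controller calls are the API described above):

```csharp
using System;
using JetBrains.dotMemoryUnit;

// Marks the start and end of a dotMemory Unit test for unit testing
// frameworks that are not supported out of the box.
public sealed class DotMemoryUnitTestScope : IDisposable
{
    public DotMemoryUnitTestScope()
    {
        DotMemoryUnitController.TestStart();
    }

    public void Dispose()
    {
        DotMemoryUnitController.TestEnd();
    }
}
```

A test in any framework can then wrap its body in `using (new DotMemoryUnitTestScope()) { /* test code with dotMemory.Check(...) calls */ }`.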

That’s it! Now you can run tests wrapped in this way using the dotMemory Unit standalone launcher or Visual Studio with ReSharper (provided it has an extension for the unit testing framework you use). Simply copy and paste the code above and reuse it in your project. You can also find this code on GitHub.

With this new feature, dotMemory Unit becomes very flexible: you can use it not only to add support for a unit testing framework, but also to extend and adapt dotMemory Unit to your specific needs. For example, see how we added support for MSpec.

Thanks for reading! As always we recommend that you try dotMemory Unit on your own. If you’re already using it, then probably automating your tests on a CI server is the next logical step.

Posted in dotMemory Tips&Tricks, How-To's | Tagged , , , , | 2 Comments

Meanwhile… ReSharper 10 EAP kicks off

We have just opened Early Access Program for ReSharper 10, which means you can start using the latest development builds of ReSharper and its sibling tools: ReSharper C++, dotTrace, dotCover, dotPeek and dotMemory.

We should note that we’re not starting with a bang this time: a lot of changes, large and small, that we expect to make part of this release are not merged in yet, and we’ll be making more announcements as they are introduced in the EAP. In addition, as of the inaugural EAP build, noticeable changes have only occurred in the mainline ReSharper, and all the other products are only available for compatibility purposes.

That said, here are a couple of reasons why you might want to try ReSharper 10 EAP right now:

  • Initial support for JSX syntax is now available in .jsx files in order to streamline the use of React in ASP.NET applications. Code completion, all of ReSharper’s regular context actions for HTML and JavaScript, navigation to declarations and search for usages, as well as a couple of refactorings are already available:
    Refactoring options as part of JSX support in ReSharper
  • JavaScript regular expressions that were originally supported in ReSharper 9.2 are now covered in more detail. Specifically, regular expressions are now recognized in RegExp constructors and methods, as well as in String object methods (match(), search(), replace(), split()), in addition to regular expression literals. Also, Make regular expression and Remove regular expression context actions are now available in JavaScript.
    JavaScript regular expressions via RegExp objects
  • TypeScript 1.6 support has gotten stronger with the addition of intersection types (RSRP-445826).
  • Less red code in C#. We’ve fixed a number of scenarios where ReSharper couldn’t resolve perfectly valid code, such as extension methods (RSRP-447688), System references in Xamarin.Forms projects (RSRP-447755), and await statements in Universal Windows Platform applications (RSRP-445975). Web sites in Visual Studio 2015 have received their fair share of red code fixes as well.
  • Fixes in ReSharper Command Line Tools, mostly related to analyzing WPF applications (RSRP-447361) and making sure that InspectCode can be successfully launched on funny file paths (RSRP-447873).
  • IntelliSense fixes related to formatting on completion before attribute (RSRP-437539), smart completion formerly leading to Visual Studio errors (RSRP-445914), and performance issues when collecting extension methods to show in completion (RSRP-447262).
  • An important unit testing regression has been fixed, and parent nodes in Unit Test Sessions tool window again show exactly how many child tests have failed (RSRP-442851).

If you find the changes outlined above relevant to your work, please download ReSharper 10 EAP, and expect more changes soon: both in ReSharper itself and in other products that are part of the ReSharper Ultimate family.

Posted in How-To's | Tagged , , , , , | 16 Comments

How to instantly profile any method in your code with ReSharper and dotTrace

As you may already know, starting with ReSharper 9 all our .NET products share a common platform (including ReSharper, dotCover, dotTrace, dotMemory, and dotPeek). This allowed us to create the ReSharper Ultimate bundle which includes the entire .NET product lineup.

In our recent posts about ReSharper Ultimate, we promised that thanks to the shared platform, dotTrace (as well as other .NET tools) would become more integrated with ReSharper.

So, what does the new ReSharper 9.2 release bring us and how does dotTrace benefit from it?

Run configurations in ReSharper 9.2

Along with other important features and changes, ReSharper 9.2 introduces run configurations. You’ve probably already read about them in our recent blog post. In a nutshell, they give you the ability to pre-define run parameters for your startup project, an arbitrary executable, or even any static method. You can then quickly run or debug a particular configuration without having to set up parameters each time.

What does this mean for dotTrace 6.2?

So, how does dotTrace benefit from ReSharper’s run configurations? As both products share the same platform, dotTrace is able to profile any run configuration you create.

Important! No ReSharper installation is required for this feature to work in dotTrace 6.2. Nevertheless, without ReSharper, the only way to create and profile run configurations is the Performance Snapshot Browser window. This seriously reduces the usability of the feature. That’s why having ReSharper installed is strongly recommended.

Profiling run configurations for projects and executables is not that exciting, though it does provide tangible benefits. The main one is that you can create a number of configurations once and then quickly start profiling any of them.

Quick profiling of a run configuration

A much more interesting ability is profiling run configurations of specific static methods, and that’s what we’re going to talk about.

Profiling specific methods

With dotTrace 6.2, you can instantly profile any static method in your code. As simple as that! You don’t even need to create a special run configuration for this. Simply place the caret on the method, press Alt+Enter, and select Debug | Profile ([profiling_type])* in the action list.

* The default profiling type is specified in the Performance Snapshot Browser window available via ReSharper | Profile | Show Performance Snapshots.

Profiling of a static method

Now, just imagine what this feature can do. You can profile any part of your code in place, right after you’ve written it! No need to build your solution, run the profiler, perform a usage scenario that runs your new code, etc.

For example, suppose we’ve just written a feature called Goo in our application. There is a Goo class whose Goo() method performs some complex calculations on an int array.

Wouldn’t you like to find out instantly whether this algorithm is efficient? No problem: we simply add a static method that creates an instance of the Goo class and runs the Goo() method.
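In code, the setup might look like this. This is an illustrative sketch: C# forbids a method named after its enclosing type, so the calculation method is called Run() here rather than Goo(), and its body merely stands in for the "complex calculations":

```csharp
public class Goo
{
    // Stands in for the complex calculations on an int array.
    public long Run(int[] data)
    {
        long sum = 0;
        foreach (var x in data)
            sum += (long)x * x;
        return sum;
    }
}

public static class GooProfiling
{
    // A throwaway static entry point that dotTrace can profile in place
    // via Alt+Enter | Debug | Profile ([profiling_type]).
    public static void RunGoo()
    {
        var goo = new Goo();
        goo.Run(new[] { 1, 2, 3, 4, 5 });
    }
}
```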

Almost there. Now we just use Alt+Enter on the method and select Debug | Profile ([profiling_type])….

Static method created for profiling

In the Run Configuration Properties window that opens, among other run configuration parameters we’ll be asked whether we want to just run profiling of this configuration or run profiling and save the configuration (in case we’d like to use it again later):

Run configuration properties

After we click Execute, dotTrace runs our RunGoo() method under a special JetLauncher process, which is profiled. As soon as the method finishes, dotTrace automatically collects a snapshot and opens it in a viewer: Timeline Viewer for timeline profiling, or Performance Viewer for all other profiling types.

Call stack in TV

There we go! We’ve got all the details about how much time our feature requires, including time values for the entire call subtree.

Note that if you use the Timeline profiling type, in addition to call time values you also get a lot of other data on your code to answer questions such as:

  • How much memory does your code allocate?
    Timeline memory allocation
  • How much time does the File I/O, JIT, SQL queries processing (if there were any), etc. take?
    Timeline interval filters
  • Were there any blocking garbage collections?
    Timeline blocking GC

If, after analyzing the snapshot, we open the Performance Snapshot Browser window (by going to ReSharper | Profile | Show Performance Snapshots), we’ll see the collected snapshot. We can either delete it or keep it for future analysis.

Snapshot browser in VS

Easy, isn’t it? The suggested workflow looks similar to performance testing but is much easier: you don’t even have to add a separate test project to your solution or reference a unit testing framework. Simply create a static method right next to the code that you want to check, and profile it in a couple of clicks whenever needed.

We hope you find this new feature useful. Do try it on your own by installing the latest dotTrace 6.2 or the entire ReSharper Ultimate toolset.

Posted in dotTrace Tips&Tricks, How-To's | Tagged , , , , | 4 Comments

ReSharper 10 EAP 3: Continuous Testing and ReSharper Build

ReSharper 10 Early Access Program goes on and today we’ve prepared a new build highlighted with major features in ReSharper and dotCover.


ReSharper Build

ReSharper Build is a brand new ReSharper 10 feature that originated from an internal tool that the team used to build ReSharper itself. ReSharper Build is an out-of-process incremental build tool that can take advantage of multiple processes, visualizes different kinds of project build status, and is optimized for large solutions with lots of dependencies.
ReSharper Build & Run tool window

At this point, ReSharper Build supports most .NET project types available in Visual Studio, except for Web Sites and ASP.NET 5 applications. Unmanaged C++ projects can also be built with ReSharper Build, although not all of its features are available in this case. If you choose to use ReSharper Build and the solution you’re going to build is not supported (or contains projects that are not supported), ReSharper Build will automatically switch to Visual Studio’s native build process.

Note that ReSharper Build is off by default. In order to switch to ReSharper Build instead of Visual Studio’s native build process, go to ReSharper | Options | Tools | ReSharper Build and select ReSharper Build.

ReSharper build switch to VS Build

We’ll follow up shortly with a separate detailed blog post on the ins and outs of ReSharper Build.

Code style and configuring inspections

ReSharper 10 EAP 3 comes with a set of changes aimed to simplify configuration of and complying with code style settings.

For example, inspection severity can now be configured right from the Alt+Enter menu, without using a modal window:
Configuring inspection severity from bulb menu in ReSharper 10

The Find similar issues modal window is now used only for searching in a custom scope. All default scopes, such as solution, project, and file, can be applied right from the Alt+Enter menu.

Some code style settings can now also be configured without leaving the Alt+Enter menu:
Configuring code style from the bulb menu in ReSharper 10

Additionally, there are new code inspections with quick-fixes that detect explicit or implicit access modifiers for types and type members, let you use a pre-configured order of modifiers, and help you join or separate attributes in a section.
A quick-fix to improve the order of modifiers

Other changes

On another note, there are also changes related to TypeScript and JavaScript, such as improved TSX support (though still not final) and JSX support in plain .js files.

If you’re interested, here’s a detailed list of fixes that made it into ReSharper 10 EAP 3.


Continuous Testing

If there’s anything that this EAP build proves, it’s that you should never, ever lose hope. dotCover bumps its version to 4.0 and reveals a preview of Continuous Testing.
Starting a Continuous Testing session
Following an initial code coverage analysis of your solution, dotCover tracks your code changes, figures out which tests are affected by them, and can then re-run the affected tests as soon as you hit Save (or react to code changes using a different strategy).
Continuous Testing Session tool window
We’ll write more about the way Continuous Testing is implemented very soon. We know there are quite a lot of ways to improve the feature, but if you’re ready to try it out, please let us know about any issues you run into on your first approach.

ReSharper C++

In addition to a set of fixes, the new ReSharper C++ EAP build incorporates the following changes:

  • Usage-aware Go to declaration is now supported, similarly to mainline ReSharper.
  • New code inspections detect an uninitialized base class, accompanied by a quick-fix to initialize a member or base class.
  • Compile-time expressions of floating-point types can now be evaluated.

What’s next?

First of all, make sure you didn’t forget to actually download the new EAP build.

We are expecting more EAP builds shortly, so stay tuned for further updates.

Posted in How-To's, News and Events | 11 Comments

Run Configurations: debug any static method in Visual Studio, and more

Sometimes when you’re writing code, you want to verify an assumption quickly, check how your API works, or simply prototype and execute a small part of your code.

What common options are available to us?

  1. Create a separate small project, put the code to verify into its Main() method, and add all the necessary dependencies.
  2. Hook the code to verify to a random button and start the debugger (and hopefully remember to remove the code afterwards).
  3. Write a temporary unit test and execute it with the Unit Test Runner.

Well, with ReSharper 9.2's new run configurations there’s now a better way.

With the caret on a static method without parameters, press Alt+Enter to see a context action called Debug:

ReSharper's Debug context action

Calling this action triggers debugging, with this very static method used as an entry point. In other words, without breaking your coding flow, you can write a public static void SomeTestMethod() right where you’re coding and quickly launch the debugger.
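For instance, a throwaway entry point might look like this (the class and method names are purely illustrative):

```csharp
public static class Scratchpad
{
    // Quick assumption check: how does .NET round midpoint decimal values?
    public static void SomeTestMethod()
    {
        var result = decimal.Round(2.5m, 0, System.MidpointRounding.ToEven);
        System.Console.WriteLine(result); // banker's rounding yields 2
    }
}
```

Place the caret on SomeTestMethod, press Alt+Enter, choose Debug, and remove the method when you’re done.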

Under the hood, this context action is based on a new feature: run configurations. The feature grabs the information required for execution from the current project, creates an implicit run configuration, and executes it.

The Debug menu item can be expanded to reveal the entire set of available actions: Debug, Run and Profile (available if dotTrace is installed), each in two varieties: with or without building. Each of the expanded actions opens a dialog box where you can refine configuration parameters as necessary, and execute the configuration with or without saving it for the future.

Run or debug any static method without parameters

When you choose to save and execute, your settings are persisted so that you can reuse them later. There are three ways to access a saved configuration:

    • A pinned gutter icon next to a static method
      Gutter icon for a saved run configuration
    • The Run Configurations pop-up that is available by selecting ReSharper | Tools | Run Configurations or with a shortcut (Ctrl+Shift+Alt+R in both Visual Studio and IntelliJ keymaps)
      Run Configurations popup
    • If you have dotTrace integrated into Visual Studio, you can also access run configurations via its Performance Snapshots Browser (ReSharper | Profile | Show Performance Snapshots).
      Performance Snapshots Browser

ReSharper 9.2 provides three kinds of run configurations:

  • Executable: Allows running any executable file.
  • Static Method: As suggested above, this lets you run any static method that doesn’t have parameters.
  • Project: Allows executing a Visual Studio project (which in essence emulates setting a project as startup and launching it immediately).

You can add any of these configurations via the Add… submenu in the Run Configurations pop-up.

In addition, there is a new item in the project context menu, Create Run Configuration, which creates a new project run configuration.

The Run Configurations pop-up provides quick keyboard access to configurations. It allows you to execute any run configuration in any supported execution mode. Each configuration’s submenu provides a set of Configure commands.

A run configuration's Configure menu

One of these commands, Select, deserves a special mention. Select assigns the current configuration to launch when you invoke Visual Studio’s own Start (F5) and Start Without Debugging (Ctrl+F5) commands. In addition, two Alt-modified shortcuts are added, which affect whether or not a solution is built before launch:

  • Alt+F5: Debug without building
  • Ctrl+Alt+F5: Execute a configuration without debugging and building

A setting called Don’t build by default, which concludes the Run Configurations pop-up, swaps the default and Alt-modified actions. For example, it makes F5 invoke debugging without building.

Other Configure options include:

  • Edit, Duplicate and Delete: Fairly self-descriptive.
  • Share/Unshare: Saves a run configuration to shared solution settings (instead of the default personal solution settings), which enables sharing the run configuration among all developers working on the current solution.
Posted in How-To's | 4 Comments

Performance Profiling in Continuous Integration: dotTrace and TeamCity

Performance testing is growing more and more popular as one of the most effective methods for preventing application performance regression.

Usually, by performance testing developers mean web application load testing, stress and spike testing, i.e. all types of tests that measure system behavior under particular loads. Nevertheless, performance testing is not only about loads. Tracking the execution time of critical application features (like rendering a scene in a game application, or filtering a large amount of data in an enterprise application) is also an important aspect of performance testing.

When tracking the execution time of critical features, a typical CI workflow looks as follows:

  1. Write an integration test that runs performance-critical functionality in your application.
  2. Configure your CI system to run this test during a build.
  3. Configure the CI system to check test execution time. If it exceeds some threshold (expected performance), fail the build.
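Step 1 of this workflow could be sketched as follows; SceneRenderer.Render() is a hypothetical stand-in for your performance-critical feature, and the in-test assertion merely mirrors the threshold check the CI server performs in step 3:

```csharp
using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class PerformanceTests
{
    [Test]
    public void RenderSceneIsFastEnough()
    {
        var stopwatch = Stopwatch.StartNew();
        SceneRenderer.Render(); // hypothetical performance-critical feature
        stopwatch.Stop();

        // Redundant if the CI server already checks test duration (step 3),
        // but makes the time budget explicit in code.
        Assert.Less(stopwatch.ElapsedMilliseconds, 500);
    }
}
```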

Although this approach looks easy to implement, it has some drawbacks. First, depending on the CI system you use, step 3 may be quite tricky: you need to add a custom failure condition for the duration of a specific test, and your CI server may not support this out of the box. Second, and more importantly, when such a build fails because of your test, you don’t have a clue as to why it happened.

To find the exact methods that have become slower due to regression, you need either to thoroughly examine your code or (and this is much more effective) profile the problematic test in a performance profiler. So why not do this right on the CI server? Why not run the test under a profiler, and if there’s a slowdown, fail the build and get a performance snapshot?

We tried to solve this problem using two of our products: dotTrace performance profiler and TeamCity CI server. The brainchild of our efforts is the dotTrace Profiler plugin for TeamCity. With this plugin, the performance testing workflow becomes even simpler:

  1. Write integration tests.
  2. Add the plugin runner as a build step to your build configuration.
    In the plugin parameters, specify the methods (tests themselves or the methods they run) whose performance you want to measure. Specify the corresponding performance thresholds: absolute own time or total (own + call subtree) time, in ms. It’s also possible to compare measured values against values from previous successful builds.

Now, if at least one method exceeds its threshold, the plugin will fail the build and save the collected performance snapshot to build artifacts.

Here’s how the workflow looks in one picture:

Plugin workflow

For this plugin to run, only the free dotTrace console profiling tool is required. You will only need the full version of dotTrace if you want to analyze the collected performance snapshot.

Now let’s take a closer look at how to use the plugin.

1. Install the Plugin and dotTrace Console Profiler

  1. On your TeamCity server, copy dotTracePlugin.zip to the plugins directory located in your TeamCity data directory.
  2. Restart the TeamCity Server service.
  3. As the dotTrace console profiling tool is required for the plugin, download and unzip the console profiler to any directory on a TeamCity build agent.

IMPORTANT! A unit test runner must also be installed on the same agent. In our example, we will use NUnit.

2. Write an Integration Test

Let’s suppose we have an application with a killer feature called Foo. For example, we have a Foo class with a Foo() method which, in turn, uses a Goo class: it creates a Goo instance and runs its Goo() method, which calls a number of other methods. We assume the execution time of the Foo feature is crucial for the user experience, so we add a performance NUnit test that runs the feature:
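A minimal sketch of such a test might look like this (the namespace and fixture names are illustrative; Foo is the application class described above):

```csharp
using NUnit.Framework;

namespace IntegrationTests
{
    [TestFixture]
    public class PerformanceTests
    {
        [Test]
        public void TestFooFeature()
        {
            // Exercise the performance-critical feature; dotTrace will
            // measure the time spent in Foo() and, transitively, in Goo().
            var foo = new Foo();
            foo.Foo();
        }
    }
}
```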

3. Create Profiling Configuration

The next step is to tell the dotTrace console tool what application it has to profile. This is done via an XML configuration file. If you have a standalone dotTrace installation, you can simplify creating the file by using the Configuration2Xml tool located in the dotTrace installation directory. Otherwise, create the file manually using the format described below.
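The exact tag layout belongs to the dotTrace console profiler, so treat the following only as a sketch assembled from the options described in the bullet points below; all paths are illustrative:

```xml
<!-- Illustrative sketch: tag nesting and paths are assumptions
     based on the description below, not a verbatim schema. -->
<Argument type="StandaloneArgument">
  <Filename>C:\NUnit\bin\nunit-console.exe</Filename>
  <Arguments>%CHECKOUTDIR%\Tests\bin\Release\IntegrationTests.dll</Arguments>
  <WorkingDirectory>C:\NUnit\bin</WorkingDirectory>
  <ProfileChildProcesses>true</ProfileChildProcesses>
  <Scope>
    <ProcessFilters>
      <Item>
        <ProcessNameFilter>nunit-console*</ProcessNameFilter>
        <Type>Deny</Type>
      </Item>
    </ProcessFilters>
  </Scope>
  <MeasureType>Sampling</MeasureType>
</Argument>
```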

  • To profile tests, we must profile the test runner that runs them (NUnit in our case). Since the runner is a standalone application, the main Argument tag has the type="StandaloneArgument" attribute.
  • The Filename tag specifies the path to the NUnit console runner installed on the agent.
  • In Arguments, we specify the path to the DLL with the integration tests. As you don’t know in advance where TeamCity will put the built DLL, you cannot specify an absolute path to it. Instead, use the %CHECKOUTDIR% placeholder: during the build, the plugin replaces it with the checkout directory path, so all you need to specify is the path relative to the checkout directory.
  • Working directory specifies the path to the NUnit bin directory.
  • NUnit creates a separate runner process to run tests. To profile only the runner process, we must exclude the main NUnit process from profiling as follows:
    • Set ProfileChildProcesses to true.
    • In Scope | ProcessFilters | Item | ProcessNameFilter, specify nunit-console*.
    • In Scope | ProcessFilters | Item | Type, specify Deny.
  • In MeasureType, specify the type of profiling you want to use. This should always be Sampling, as it gives the most realistic time values. Note that the plugin does not support Timeline profiling.
  • Leave the other options as shown above and save the file. In our example, we will name it myconfig.xml and store it in the console profiler directory.

4. Create Build Configuration

IMPORTANT! To help ensure consistency of profiling results, you should assign the build configuration that uses the plugin to a specific build agent (a hardware agent is strongly recommended). For further instructions, please refer to the TeamCity documentation.

The next step is to create a build configuration dedicated to performance testing in TeamCity. For example, we already have My Project. Let’s add to that project a configuration consisting of two steps: building the application and running performance tests. To save time, let’s proceed right to configuring the second build step.

  1. In build configuration settings, go to Build Steps and click Add build step.
    Creating build configuration
  2. In Runner type, select dotTrace Profiler.
  3. Set the following plugin options and then click Save:
    • Console profiler path – the path to the directory storing dotTrace console profiler. In our example, it’s C:\Console Profiler
    • Profiling config file path – the path to the XML configuration file we created in the previous step. In our example, it’s C:\Console Profiler\myconfig.xml
    • Temp directory path – the path for storing a temporary performance snapshot. Note that this snapshot is auto-deleted each time the build is run.
    • Publish performance snapshot to artifacts – sets the condition for publishing a snapshot to the artifacts directory: Always, Never, or On exceeding thresholds. While the latter is the most reasonable choice, keep in mind that, depending on application complexity, the snapshot may take up hundreds of MB of disk space. We recommend updating your artifact cleanup policy so that TeamCity deletes dotTraceSnapshot.zip from the artifacts folders after some time.
    • Threshold values – specify the list of methods whose performance you want to check. The pattern is Namespace.Class.Method TotalTime OwnTime, where
      • TotalTime is the method’s execution time, including its own time and the time of the method’s call subtree, in ms;
      • OwnTime is the method’s own execution time, in ms.

      (Setting a value to 0 will make the plugin ignore the threshold.)

      If we want to check the method’s time against the corresponding time in previous successful builds, instead of an absolute threshold value in ms we should use one of the following prefixes:
      a) F – take the value from the first successful build,
      b) L – take the value from the last successful build, or
      c) A – take the average value calculated over all successful builds.
      Then, set the tolerance to indicate by how much the time value may be exceeded, as a percentage.
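For instance, with the illustrative names used in this walkthrough, the Threshold values field might contain lines like the following (F15 takes the baseline from the first successful build with 15% tolerance; 0 disables the corresponding check):

```
IntegrationTests.PerformanceTests.TestFooFeature F15 0
MyApp.Goo.Goo F15 0
```

The namespaces here are assumptions; use the actual Namespace.Class.Method names from your test assembly.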

Configuring the plugin

In our example, we want to track the total execution time of the Foo feature, so we add a threshold for the total time of the TestFooFeature() test. F15 means that the value for comparison (with up to 15% tolerance) is taken from the first successful build. For example, if during the first successful build dotTrace measures a total time of 1000 ms for the method, the method’s threshold for all subsequent builds will be 1150 ms (1000 + 1000 × 0.15).

In our example, we also want to check the total time of the Goo() method as it strongly impacts the performance of the Foo feature. Checking the method’s execution time during the build simplifies our job: in case of performance issues with Foo, we will know right away if Goo() is the one to blame. As a threshold for the method, we also use the value from the first successful build, plus 15% tolerance.

5. Run the Build

Now it’s time to run the build! As we decided to use values from the first successful build for the thresholds, the first build won’t fail in any case – it is used only to set the baselines for the TestFooFeature() and Goo() methods. The time values from this build will become the “golden standard” for all subsequent builds.

  1. Run the build.
    First successful build
  2. After the build is finished successfully, we can take a look at the profiling results. Open the Build Log for this build and expand the performance testing step.
    Build Log for the first successful build
    TeamCity had no data on the methods’ execution times before the first build, so the expected values are equal to 0.
    If we run the build a second time, the expected values will be calculated based on the results from build #1:
    Build log for the second build

Suppose now that someone has edited the Goo() method and made it slower. (We’ll emulate this using Thread.Sleep(200).) Now, the build fails with the Performance thresholds exceeded: 2 message:
Failed build overview

The Build Log provides some info on what happened:
Build log of the failed build

As we instructed the plugin to save a performance snapshot in case of any issues, it has stored the archive with the snapshot in the artifacts folder:
Snapshot in artifacts

Now, we can analyze the snapshot and find the exact cause of the performance flaw:
Snapshot in dotTrace

Note that all values are reported as TeamCity statistic values, so you can build trend graphs for the measured time values if you like. To do this, open the Parameters tab of any build and switch to Reported statistic values:
Statistic values

Click View trend for a particular parameter to see its diagram of changes:
Statistics trend


If you use continuous integration in your development environment and it involves performance testing, don’t hesitate to try the dotTrace plugin. We’re just getting started with it! In future releases we plan to integrate the plugin right into TeamCity’s unit test runner types.

Profile with pleasure!

Posted in dotTrace Tips&Tricks, How-To's | 6 Comments