Exploring .NET Core with ReSharper Ultimate

We recently started the EAP for ReSharper 9.1, and it might have been easy to miss that the EAP is not just for ReSharper, but for ReSharper Ultimate – that is, our entire .NET tools product range. Starting with ReSharper 9.0, we unified the way we both build and distribute our .NET tools, so that they share a common install. This has many benefits, from licensing to sharing in-memory caches between products. So not only is it the ReSharper 9.1 EAP, but it’s the EAP for dotCover 3.1, dotTrace 6.1, dotMemory 4.3 and dotPeek 1.4.

With EAP03, we’re introducing support for .NET Core to ReSharper Ultimate. This means you can get code coverage and performance and memory profiling for your .NET Core apps.

But let’s back up. What’s .NET Core?

This is Microsoft’s new open-source, cross-platform .NET stack. It is the next generation of .NET, designed to replace the need for Portable Class Libraries by providing a common, modular .NET implementation across all platforms, including ASP.NET vNext. (One slightly confusing part is that it doesn’t replace the .NET Framework, which will remain Microsoft’s .NET stack on the Windows desktop, supported both for compatibility reasons and for desktop apps such as WPF.)

It’s made up of a couple of pieces – the CoreCLR project is the runtime, containing the JIT and the Garbage Collector, and the Base Class Library is being reimplemented in the CoreFX project.

One of the nice things about a .NET Core application is that it is xcopy deployable. That is, .NET Core apps don’t require an install, or a GAC, or anything. All of the files required to run an application can live in a single folder.

Explorer showing the files in the HelloWorld sample

This test application is the HelloWorld app from the CoreFX Lab repo. As we can see, the folder contains everything we need to run the application. The BCL System assemblies are present, perhaps a few more than you’re used to on the desktop – this allows for a more modular distribution of the BCL, only distributing the assemblies you use. You can also see mscorlib.dll, which provides the primitive types, such as Object and String, and coreclr.dll, which is the actual runtime, including the JIT and the garbage collector. Most interestingly, we have three other files – HelloWorld.dll, CoreRun.exe and HelloWorld.exe.

dotPeek showing decompiled code

If we take a look at these files in dotPeek, we can see that the System assemblies and HelloWorld.dll are just normal looking assemblies, although they’re now targeting .NET Core v4.5, rather than the .NET Framework. Furthermore, we can see that HelloWorld.dll contains our Program class, with a normal static Main method – this is the entry point to the application. And coreclr is listed as “not supported”, as we know that’s a dll containing the native code for the runtime itself.

So what are HelloWorld.exe and CoreRun.exe?

These files also come up as “not supported” in dotPeek, and if we look at their file sizes, we see that they’re both very small, only about 40 KB each. These files are native stubs, used to bootstrap the .NET Core runtime and launch the .NET application. The stub loads coreclr.dll, initialises the runtime and points it to the managed code to run.

The CoreRun.exe process is a generic launcher that requires the HelloWorld.dll to be passed in as a command line parameter, and provides extra command line parsing to enable attaching a debugger, and providing verbose output. The HelloWorld.exe process is simpler, and requires no command line parameters, automatically launching HelloWorld.dll and passing any command line parameters to the .NET application.

These stubs are required for .NET Core primarily to allow for running cross platform. Traditional .NET Framework applications combine the bootstrap stub and the .NET application code and metadata (HelloWorld.dll) so the .NET application just works transparently (in fact, versions of Windows later than Windows XP recognise .NET applications at the Operating System level, and bootstrap the .NET Framework automatically). But if we want to run the HelloWorld .NET application on OS X or Linux, then we can’t just run HelloWorld.exe – other operating systems don’t know how to launch .exe files, and might not even be Intel based! So on other operating systems, the stub can be replaced with another executable, without having to replace the .NET application.

Image of HelloWorld console application running

We can dive deeper into .NET Core applications, by profiling and analysing them at runtime.

We can use dotCover to provide code coverage – note that DrawWindows is being called, but not DrawLinux or DrawMac. (While .NET Core is cross platform, ReSharper Ultimate is a set of .NET Framework applications, using WPF, and as such, tied to Windows.)

dotCover showing coverage for the HelloWorld application

Performance profiling is handled by dotTrace, which can provide a nice timeline view of the performance of a .NET Core application, with powerful filtering capabilities to filter based on CPU, CPU state, thread and so on. Hopefully you’ll be working on slightly more interesting applications!

dotTrace showing performance information about HelloWorld application

And finally, dotMemory will allow you to examine the memory allocations of .NET Core apps, too.

dotMemory analysing memory snapshot of HelloWorld application

All of this support for .NET Core is handled transparently by ReSharper Ultimate – you simply need to start profiling or coverage for a .NET application, passing in either of the stubs as the startup application, and ReSharper Ultimate will do the rest.

Also, this is initial support. There are a few things that aren’t working correctly just now, but we’re working on them, and they should be fixed in future EAP builds.

Grab yourself a copy of the latest EAP and give it a go. There’s more to come. Stay tuned!

Posted in dotCover Tips&Tricks, dotMemory Tips&Tricks, dotPeek Tips&Tricks, dotTrace Tips&Tricks, How-To's

JavaScript Support in ReSharper 9

ReSharper 9 comes with plenty of JavaScript-related changes. We’ve already mentioned some of them in our TypeScript blog post and, additionally, we have a blog post on converting JavaScript to TypeScript, but now we are going to focus on JavaScript proper. So, what does ReSharper 9 have in store?

Tools like JSHint and JSLint are increasingly popular nowadays. Their inspections are great, but sometimes you want not just to know where the error is, but also to be able to fix it, and you want to see the error as you write code, not as a post-build step. ReSharper is here to help: we have already implemented several inspections similar to JSHint, for example, the variable and function naming inspections and inspections for using undeclared global variables, and we offer quick-fixes for them. ReSharper 9 offers additional JSHint-inspired inspections and quick-fixes for them:

  • Assignment inside conditional expression, with a quick-fix to change = to ===.
    Assignment inside conditional expression
    Just in case you do want to perform assignments in conditional statements, ReSharper gives you an option to indicate such expressions by wrapping the assignee with an extra set of round braces. There’s even a fix for this:

    Wrapping the assignee with an extra set of round braces quick-fix

    The option to ignore based on redundant braces is configurable: you can find it in ReSharper | Options | Code Editing | JavaScript | Inspections.

  • Constructor call not used or possibly used for side effects — we provide a quick-fix for replacing it with a function call.
    Quick-fix for replacing constructor call with a function call
  • Use of caller/callee (warning; error in strict mode).
    Caller/callee usings
  • Coerced usage of equals (use of ==/!= instead of ===/!==) with a quick-fix to perform appropriate replacements.
    Coerced usage of equals code inspection
  • Extending/overwriting prototype of native object
    Extending/overwriting prototype of native object
  • Variable/function used before it is declared, with a quick-fix to normalize declarations.
  • Variable/function used outside of block where it is declared, again with a suitable quick-fix.
    Variable/function used outside of block where it is declared
  • Local function is redefined later
    Local function is redefined later
  • Incorrect argument comparison with typeof expression, with a quick-fix to replace with the correct value.
    Incorrect argument comparison with typeof expression
  • There is no .hasOwnProperty() check in the body of a for...in loop, which will cause an enumeration of prototype properties. ReSharper provides a quick-fix to surround the body with a .hasOwnProperty() check.
    Quick-fix to surround the body with a .hasOwnProperty() check
  • Closure on a variable modified in loop of outer scope
    Closure on a variable modified in loop of outer scope

Also, ReSharper supports JSHint’s /* global ... */ annotations for global variables:

Support for annotations for global variables

In addition to the above features, ReSharper 9.1 EAP (which you can download here) comes with initial support for ECMAScript 6. This includes support for destructuring, let and const declarations, template strings, lambdas, and a lot more. This support is early-stage, but we do plan to fully support ECMAScript 6 in due time. Check out the EAP for how these features are shaping up, but do bear in mind that VS2015 CTP will highlight correct ECMAScript 6 code as errors in many situations. We just have to give it a bit more time to catch up. 

Posted in How-To's, ReSharper Tips&Tricks

Unit Testing and Memory Profiling: Can They Be Combined?

Memory profilers can hardly be called an “everyday tool.” Typically, developers start thinking about profiling their product closer to its release. This approach may work fine until some last-minute issue like a leak or huge memory traffic crushes all your deadlines. The proactive approach would be to profile your app’s functionality on a daily basis, but who’s got the resources to do that? Well, we think there may be a solution.

If you employ unit testing in your development process, it is likely that you regularly run a number of tests on app logic. Now imagine that you can write special “memory profiling” tests, e.g. a test that identifies leaks by checking memory for objects of a particular type, or a test that tracks memory traffic and fails if the traffic exceeds some threshold. This is exactly what the dotMemory Unit framework allows you to do. The framework is distributed as a NuGet package and can be used to perform the following scenarios:

  • Checking memory for objects of a certain type.

  • Checking memory traffic.

  • Getting the difference between memory snapshots.

  • Saving memory snapshots for further investigation in dotMemory (a standalone .NET memory profiler from JetBrains).

In other words, dotMemory Unit extends your unit testing framework with the functionality of a memory profiler.

IMPORTANT: dotMemory Unit is currently in the EAP (Early Access Program) stage. Please use it for evaluation purposes only!

How It Works

  • dotMemory Unit is distributed as a NuGet package installed to your test project:
    PM> Install-Package JetBrains.DotMemoryUnit -pre

  • dotMemory Unit requires the ReSharper unit test runner. To run tests that use dotMemory Unit, you should have either dotCover 3.1 EAP or ReSharper 9.1 EAP05 installed on your system.

  • After you install the dotMemory Unit package, ReSharper’s menus for unit tests will include an additional item, Run Unit Tests under dotMemory Unit. In this mode, the test runner will execute dotMemory Unit calls as well as ordinary test logic. If you run a test the ‘normal’ way (without dotMemory Unit support), all dotMemory Unit calls will be ignored.

    Unit Tests Menu

  • dotMemory Unit works with all of the unit-testing frameworks supported by ReSharper’s unit test runner, including MSTest and NUnit.

  • A standalone launcher for integrating with CI systems like JetBrains TeamCity is planned for future releases.

Now let’s take a look at some examples to better understand what dotMemory Unit does.

Example 1: Checking for Specific Objects

Let’s start with something simple. One of the most useful cases can be finding a leak by checking memory for objects of a specific type.

GetObjects Assertion

  1. A lambda is passed to the Check method of the static dotMemory class. This method will be called only if you run the test using Run Unit Tests under dotMemory Unit.

  2. The memory object passed to the lambda contains all memory data for the current execution point.

  3. The GetObjects method returns a set of objects that match the condition passed in another lambda. This line slices the memory by leaving only objects of the Foo type. The Assert expression asserts that there should be 0 objects of the Foo type.
    Note that dotMemory Unit does not force you to use any specific Assert syntax. Simply use the syntax of the framework your test is written for. For example, the line in the example uses NUnit syntax but could be easily updated for MSTest:
    MSTest Assertion
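
Putting the pieces above together, such a test might look like the following sketch (NUnit syntax; the Foo class and test names are hypothetical stand-ins, and the exact dotMemory Unit query API may differ between EAP builds):

```csharp
using JetBrains.dotMemoryUnit;
using NUnit.Framework;

public class Foo { }   // hypothetical: the type we suspect of leaking

[TestFixture]
public class MemoryTests
{
    [Test]
    public void NoFooInstancesLeft()
    {
        // ... exercise the code that should not leave Foo objects behind ...

        // Slice memory by type and assert that no Foo instances remain
        dotMemory.Check(memory =>
            Assert.That(
                memory.GetObjects(where => where.Type.Is<Foo>()).ObjectsCount,
                Is.EqualTo(0)));
    }
}
```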

With dotMemory Unit you can select a set of objects by almost any condition, get the number of objects in this set and their size, and use these data in your assertions.
In the following example, we check for objects in the large object heap:

Checking for Specific Objects


Example 2: Checking Memory Traffic

The test for checking memory traffic is even simpler. All you need to do is mark the test with the AssertTraffic attribute. In the example below, we assert that the amount of memory allocated by all the code in TestMethod1 does not exceed 1,000 bytes.

AssertTraffic Attribute Example
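
In code, the scenario described above might be sketched like this (NUnit syntax; the test body is an illustrative placeholder):

```csharp
using JetBrains.dotMemoryUnit;
using NUnit.Framework;

[TestFixture]
public class TrafficTests
{
    [Test]
    [AssertTraffic(AllocatedSizeInBytes = 1000)]
    public void TestMethod1()
    {
        // Fails when run under dotMemory Unit if the code below
        // allocates more than 1,000 bytes in total.
        var greeting = string.Concat("Hello, ", "world");
    }
}
```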

Example 3: Complex Scenarios for Checking Memory Traffic

If you need to get more complex information about memory traffic (say, check for traffic of objects of a particular type during some specific time interval), you can use a similar approach to the one from the first example. The lambdas passed to the dotMemory.Check method slice and dice traffic data by various conditions.

Check Traffic with Traffic Type

  1. To mark time intervals where memory traffic can be analyzed, use checkpoints created by dotMemory.Check (as you probably guessed, this method simply takes a memory snapshot).

  2. The checkpoint that defines the starting point of the interval is passed to the GetTrafficFrom method.
    For example, this line asserts that the total size of objects implementing the IFoo interface created in the interval between memoryCheckPoint1 and memoryCheckPoint2 is less than 1,000 bytes.

  3. You can get traffic data for any checkpoint that was set earlier. Thus, this line gets traffic between the current dotMemory.Check call and memoryCheckPoint2.
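
The three steps above might be sketched as follows (the IFoo interface and the numeric thresholds are illustrative; exact dotMemory Unit query names may vary between EAP builds):

```csharp
[Test]
public void TrafficBetweenCheckpoints()
{
    var memoryCheckPoint1 = dotMemory.Check();        // (1) start of the interval

    // ... code whose allocations we want to measure ...

    var memoryCheckPoint2 = dotMemory.Check(memory => // (2) end of the interval
        Assert.That(
            memory.GetTrafficFrom(memoryCheckPoint1)
                  .Where(obj => obj.Interface.Is<IFoo>())
                  .AllocatedMemory.SizeInBytes,
            Is.LessThan(1000)));

    // ... more code ...

    dotMemory.Check(memory =>                         // (3) traffic since checkpoint 2
        Assert.That(
            memory.GetTrafficFrom(memoryCheckPoint2).AllocatedMemory.ObjectsCount,
            Is.LessThan(100)));
}
```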

Example 4: Comparing Snapshots

Like in the ‘standalone’ dotMemory profiler, you can use checkpoints not only to compare traffic but for all kinds of snapshot comparisons. In the example below we assert that no objects from the MyApp namespace survived garbage collection in the interval between memoryCheckPoint1 and the second dotMemory.Check call.

Compare Snapshots
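
A sketch of such a comparison, matching the scenario described above (the surrounding test body is hypothetical, and the difference-query method names may differ slightly from this EAP build):

```csharp
[Test]
public void NoMyAppObjectsSurviveCollection()
{
    var memoryCheckPoint1 = dotMemory.Check();

    // ... create and release objects from the MyApp namespace ...
    GC.Collect();

    // Compare the current snapshot with the checkpoint: no MyApp
    // objects should have survived garbage collection in between
    dotMemory.Check(memory =>
        Assert.That(
            memory.GetDifference(memoryCheckPoint1)
                  .GetSurvivedObjects()
                  .GetObjects(where => where.Namespace.Like("MyApp"))
                  .ObjectsCount,
            Is.EqualTo(0)));
}
```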

Conclusion

dotMemory Unit is very flexible and allows you to check almost any aspect of app memory usage. Use “memory” tests in the same way as unit tests on app logic:

  • After you manually find an issue (such as a leak), write a memory test that covers it.

  • Write tests for proactive testing – to ensure that new product features do not create any memory issues, like objects left in memory or large traffic.

Thanks for reading and don’t hesitate to try dotMemory Unit EAP on your own! It’s absolutely free, and the only requirement is ReSharper or dotCover installed on your machine.

Posted in dotMemory Tips&Tricks, How-To's, News and Events

String Interning: Effective Memory Management with dotMemory

Starting with version 4.1, dotMemory offers the String duplicates inspection. The idea behind it is quite simple: it automatically checks memory for string objects with the same value. After you open a memory snapshot, you will see the list of such strings:

String duplicates inspection in dotMemory

How can this help? Well, string duplicates possibly indicate ineffective memory usage. Why create a new string if it is already in memory?

Imagine, for example, that in the background your app parses some text files with repetitive content (say, some XML logs).

Code example for log file processing
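
The processing code might look roughly like this (a hypothetical sketch, not the exact code from the screenshot): every line read from the file becomes a brand-new string object, even when thousands of lines have identical content.

```csharp
using System.Collections.Generic;
using System.IO;

static class LogParser
{
    // Each ReadLines iteration allocates a new string, even for
    // lines whose content we have already seen many times.
    public static List<string> ReadLogEntries(string path)
    {
        var entries = new List<string>();
        foreach (var line in File.ReadLines(path))
            entries.Add(line);   // duplicates accumulate here
        return entries;
    }
}
```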

So, dotMemory finds a lot of strings with identical content. What can we do?

Inspection results for the example

The obvious answer is to rewrite our app so that it allocates strings with unique content just once. There are at least two ways this can be done. The first is to use the string interning mechanism provided by .NET.

CLR Intern Pool

.NET automatically performs string interning for all string literals. This is done by means of an intern pool – a special table that stores references to all unique strings. But why aren’t the strings in our example interned? The thing is that only explicitly declared string literals are interned, and this happens at compile time. Strings created at runtime are not checked against the pool. For example:

Interning example
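
A minimal sketch of this behavior: two identical literals share one interned instance, while a string with the same value built at runtime is a separate object.

```csharp
string a = "foo";
string b = "foo";
// Both literals resolve to the same interned instance at compile time.
Console.WriteLine(ReferenceEquals(a, b));   // True

// A string with the same value created at runtime is not interned.
string c = new string(new[] { 'f', 'o', 'o' });
Console.WriteLine(ReferenceEquals(a, c));   // False
```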

Of course, you can circumvent this limitation by working with the intern pool directly. For this purpose, .NET offers two methods: String.Intern and String.IsInterned. If the string value passed to String.Intern is already in the pool, the method returns the reference to the string. Otherwise, the method adds the string to the pool and returns the reference to it. If you just want to check whether a string is already interned, use the String.IsInterned method. It returns the reference to the string if its value is in the pool, or null if it isn’t.

Thus, the fix for our log parsing algorithm could look as follows:

CLR interning example
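
Applied to the hypothetical parser sketched earlier, the fix could look like this:

```csharp
using System.Collections.Generic;
using System.IO;

static class LogParser
{
    public static List<string> ReadLogEntries(string path)
    {
        var entries = new List<string>();
        foreach (var line in File.ReadLines(path))
            // String.Intern returns the pooled instance, so repeated
            // values all end up sharing a single string object.
            entries.Add(string.Intern(line));
        return entries;
    }
}
```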

Further memory profiling will show that strings are successfully interned.

Inspection after the fix

Nevertheless, such an implementation has one rather serious disadvantage – the interned strings stay in memory “forever” (or, to be more precise, for the lifetime of the AppDomain, as the intern pool will store references to the strings even if they are no longer needed).

If, for example, our app has to parse a large number of different log files, this could be a problem. In such a case, a better solution would be to create a local analogue of the intern pool.

Local Intern Pool

The simplest (though very far from optimal) implementation might look like this:

Local pool code example
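
A minimal sketch of such a pool, assuming a simple dictionary-based lookup (far from optimal, as noted above): unlike the CLR intern pool, this object can be garbage-collected as soon as it goes out of scope.

```csharp
using System.Collections.Generic;

class LocalPool
{
    private readonly Dictionary<string, string> _pool =
        new Dictionary<string, string>();

    // Return the cached instance for this value, adding it on first sight.
    public string Intern(string value)
    {
        string cached;
        if (_pool.TryGetValue(value, out cached))
            return cached;
        _pool.Add(value, value);
        return value;
    }
}
```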

The processing algorithm will change a little bit as well:

Local pool example
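
A sketch of the updated processing algorithm (LocalPool and the _entries collection are hypothetical names): the pool is scoped to a single method call.

```csharp
public void ProcessLogFile(string path)
{
    var pool = new LocalPool();           // lives only for this call
    foreach (var line in File.ReadLines(path))
        _entries.Add(pool.Intern(line));  // _entries: wherever results go
}   // pool becomes unreachable here and can be collected
```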

In this case, the pool will be removed from memory by the next garbage collection after ProcessLogFile is done working.

Thanks for reading! We hope this post was helpful. If you want to try dotMemory and the full set of its automatic inspections on your code, just download your free 5-day trial here.

Posted in dotMemory Tips&Tricks, How-To's

Ways and Advantages of Migrating JavaScript Code to TypeScript

Nowadays, more and more new languages are created with the aim of providing an alternative to JavaScript. One of these languages is TypeScript. It is compiled to readable JavaScript code; its main goals are to perform type checking at compile time and to support the object-oriented paradigm. In this post we will not discuss the pros and cons of concrete languages or type systems. Instead, we’ll describe an easy way of performing migrations to TypeScript, and demonstrate how the conversion from JavaScript to TypeScript helps discover interesting artifacts (potential errors) as well as making your code clearer.

Ways Of Migrating

In the JavaScript world, people like to use patterns to represent entities, but unfortunately these patterns are not supported at the language level with constructs such as classes with inheritance, modules and other sorts of abstraction. There are lots of ways to implement these patterns, and sometimes this abundance of mechanisms for emulating classical OOP constructs leads to confusion. TypeScript, on the other hand, explicitly supports these abstractions in its syntax, which is why the process of converting JavaScript to TypeScript makes your code much clearer. The most straightforward way is, of course, manual conversion (see, e.g., this article), but that takes a lot of time and resources. In this post we try using a semi-automatic tool and attempt to assess its advantages and disadvantages.

Using a Semi-Automatic Tool

We took some real-world industrial code: bits of JavaScript that are part of the node.js framework, about 25K lines of JavaScript code with classes and other constructs. For the migration to TypeScript, we used a new feature of ReSharper 9 that transforms the provided JavaScript code. This feature is capable of handling the following JavaScript patterns:

  • Internal modules
      • Immediately-invoked function expressions
      • Immediately-invoked function expressions with a namespace object passed as an argument
  • External modules
      • AMD format
      • CommonJS format
  • Classes and inheritance
      • Prototype.js
      • Native class implementation
  • Misc
      • Function expression to lambda
      • Member assigned by a function expression to a function member
      • Usage of a parent class via invocation of call/apply to the super keyword
      • Creating a dynamic property to an ambient declaration
  • Recovering types
      • Inferring possible overload signatures from the function body/vsDoc

In general, support for these patterns allows ReSharper to very quickly produce an OOP baseline implementation. This is the tool’s main advantage. As a result we get approximate TypeScript code, which requires some error correction before it can be put into production.

So, what errors did we get after performing the conversion? We got about 1,200 errors and, after a certain amount of classification (thanks to ReSharper’s Solution-Wide Analysis) we managed to pinpoint these particular types of errors:

  • “Cannot resolve symbol” and other related errors – 70% of cases

  • “Supplied parameters do not match any signature of call target” – 25% of cases

  • Other cases – 5%

The “lack of matched signature” errors occur because inferring overload signatures is not always possible statically without special annotation libraries. See the example below.

Original JavaScript code:

JavaScript to TypeScript converter: original JavaScript code

Generated TypeScript code:

JavaScript to TypeScript converter: generated TypeScript code

But this is not the only source of problems. The next example illustrates a more frequent situation:


You may notice that the fail function does not declare optional parameters. That is why ReSharper is unable to infer the overload signature with optional parameters. This code can be confusing, and may cause errors.

The most frequent type of error, however, is the inability to resolve a particular symbol, as well as related issues. These also appear because of the dynamic nature of the language:

In general, these sorts of errors are expected. At the moment, it is the developer’s responsibility to fix these errors but, in the future, ReSharper might be able to resolve them automatically.

Errors from the other category are more interesting. Some of them show that the lack of a common convention leads to confusion. The next example shows a typical deviation from the CommonJS module convention.

Declaration of Buffer as exported variable:

Usage of the Buffer symbol, which was not imported:

The Buffer symbol is a global type in node.js by design, but this is not obvious from the code.

The next example demonstrates multiple inheritance. If your parent classes have properties with the same name and you don’t notice it, this obviously leads to undefined behavior, and the compiler can’t help you. TypeScript doesn’t support multiple inheritance, so if you use it in your code, you will have to replace it with composition yourself.

Original JavaScript code:

multiple inheritance in JavaScript code

Generated TypeScript code:

multiple inheritance in generated TypeScript code

How to Perform the Conversion

Obsolete code can be converted with ReSharper’s highlightings and the corresponding quick-fixes. Conversion is available at different granularities:

  • Element under caret
  • In file
  • In folder
  • In project
  • In solution

Converting JavaScript to TypeScript with ReSharper

In conclusion, this tool speeds up the process of converting JavaScript to TypeScript. Give it a try!

Posted in How-To's, ReSharper Tips&Tricks

ReSharper Workshop at The JetBrains BASH – Belfast, February 16th

On the morning of February 16th, JetBrains will hold a ReSharper workshop with Matt Ellis at The MAC in Belfast.

The JetBrains BASH

ReSharper is an expert at navigating huge codebases, but are you an expert at navigating ReSharper? In this workshop, we’ll take a hands-on tour through ReSharper’s features, learning not only about basic navigation, quick-fixes and refactoring, but also more advanced features like Structural Search and Replace, Live Templates and Value Tracking.

For more information and to register for the workshop, simply email enquiries@instil.co with your name and address. To cover expenses, a £25 fee will be charged at the door; for corporate bookings, an invoice can be arranged.

Additionally, there will be an IntelliJ IDEA workshop in the afternoon and a free and open evening presentation with Hadi Hariri in the evening. Hadi will share his thoughts on developing for developers, what makes for a killer IDE and how to continuously roll out new products and the challenges around it. The evening will be very informal with plenty of time reserved for questions and conversation.

Workshop attendees are guaranteed a place at the evening event and the remaining spots are first come, first served. To register now for the evening event only, click on the link “Join and RSVP” from The BASH meetup page. Act fast or miss out!

Posted in News and Events

ReSharper 9.1 and ReSharper C++ Early Builds Available for Download

I know you are all still recovering from the New Year celebrations, but we’ve already got another Early Access Preview (EAP) version of ReSharper for you. As is customary, this ReSharper 9.1 EAP release includes new features, improvements to existing features and bug fixes, and also continues to improve the impending ReSharper C++ release.

Visual Studio 2015 Integration and C#6 support

We continue working on VS2015 and C#6 support and, for this EAP, we have implemented the following:

  • ReSharper now supports Visual Studio 2015 CTP 5.
  • Support for new C# language constructs such as the nameof() operator and string interpolation. More in this area is to come a bit later.
  • Last, but certainly not least, ReSharper provides support for .NET Framework 4.6 and ASP.NET 5 projects. We will continue working on improving ASP.NET support and more enhancements are to come with future early access builds.

User Experience Improvements

Code completion items have been reordered to provide more relevant completion items first. This feature is currently functioning in experimental mode — to turn it on, pick sorting by relevance in ReSharper | Options | IntelliSense | Completion Behavior.

To make search results for generic types more relevant, Find usages is now able to return only exact types:

Find usages of generic types in ReSharper 9.1. EAP

You can adjust the way Find Usages works to your needs via ReSharper | Options | Search & Navigation | Find generic type usages behavior.

ECMAScript and TypeScript support

This iteration of ReSharper comes with improved support for both ECMAScript and TypeScript.

  • Initial ECMAScript 6 support, including support for let/const, templates, new literals and computed property names.
  • Improvements in TypeScript 1.4 support, including union types, parenthesized types and type aliases.

Known Issues

  • At this point, ReSharper 9.1 EAP doesn’t support MSTest in Visual Studio 2015 CTP 5 due to API changes; this will be addressed in upcoming updates (see the corresponding issue in our tracker).

We’re still working on more features and improvements, which will appear as part of the 9.1 release, so stay tuned for further updates. Here is a full list of issues that were addressed in the first ReSharper 9.1 EAP build.

ReSharper C++

Finally, here are some improvements on the C++ side of things:

  • ATL, MFC and COM project support is now in place, including support for #import directives.
  • Cache loading speed has been improved by a factor of 10, which means a dramatically improved repeat start-up time.
  • Unused #include detection means R# finds unused include statements, highlights them and offers you the option to remove them.

ReSharper for C++ is getting very close to release so, with that in mind, we’d really appreciate if you post any issues you have with it (including screenshots, if possible) on our issue tracker. Thanks!

Meanwhile feel free to download ReSharper 9.1 EAP builds and let us know how this update is working out for you. Also note that compatible EAP builds for our other .NET tools, including dotTrace 6.1, dotCover 3.1, dotMemory 4.3 and dotPeek 1.4 are available for installation as well.

Posted in ReSharper Tips&Tricks

Overview of dotTrace 6 Timeline Profiling

Different profiling methods and options in dotTrace 6

In our announcement post for dotTrace 6, we introduced a new profiling method: timeline profiling. Let’s explore what it is, how it can help us and how it compares to other profiling methods like sampling, tracing and line-by-line profiling.

We’ll also have a look at the Timeline Viewer and see how we can analyze a performance snapshot that was captured using this new profiling method.

The Timeline Profiling Method

When using timeline profiling, dotTrace collects temporal data about thread states, application events, memory allocation, garbage collections, I/O operations and other multi-threading data by using Event Tracing for Windows (ETW). Having all this information available and bound to a timeline, this profiling method is ideal for analyzing UI freezes, excessive garbage collections, uneven workload distribution, insufficient I/O and so on.

The Timeline Viewer lets us view all events and zoom in on a particular timespan of our application’s runtime. We can see which calls were made by our application, how these calls were distributed over time, on which thread they ran and when I/O was involved. We can even see the source code for the selected method, giving us all the context we need to dive into the data presented.

dotTrace 6 timeline profiling using ETW analyzes threads, memory, I/O

What’s the Difference with Performance Profiling?

From a bird’s-eye perspective, dotTrace 6 now provides two methods of profiling applications: performance profiling and timeline profiling. In most cases, the two methods are interchangeable. Since they both collect call stack data, we can use either to determine performance bottlenecks in our .NET applications. There are some important differences though:

  • With performance profiling, dotTrace collects call stack data, data about call timings and data about the number of calls. This lets us analyze how much time is required to execute a particular method and how often it is called, and lets us trace the method call path.
  • With timeline profiling, dotTrace records application events and data about how application and thread state changed during the profiling session. This lets us see what calls were made by our application, but more importantly, it also shows how these calls were distributed in time. This is very helpful when analyzing behavior of multi-threaded applications where the chronological order of events typically matters a lot. For example in determining synchronization delays, UI freezes and so on.

Timeline profiling is less suitable for measuring exact call counts. As a rule of thumb, use performance profiling if the analysis leans towards finding the slowest method in an application, and use timeline profiling to identify the cause of UI freezes and the effects of garbage collection, I/O and other multithreading issues such as uneven workload distribution, lock contention and serialized execution.
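To make the distinction concrete, here is a hypothetical example (the class and method names are ours, not from any profiled application) of two tasks serialized on a shared lock. Aggregated call timings from a performance profile would show each piece of work as cheap, while a timeline snapshot makes the contention visible as long stretches of the Waiting thread state.

```csharp
// Hypothetical sketch: two tasks that should run in parallel
// but end up serialized on a shared lock. In a timeline snapshot,
// one thread sits in the "Waiting" state while the other runs.
using System;
using System.Threading;
using System.Threading.Tasks;

class LockContentionDemo
{
    static readonly object Gate = new object();

    static void DoWork()
    {
        lock (Gate)            // both tasks contend for this lock
        {
            Thread.Sleep(100); // simulate work done while holding it
        }
    }

    static void Main()
    {
        Task.WaitAll(
            Task.Run(() => { for (var i = 0; i < 10; i++) DoWork(); }),
            Task.Run(() => { for (var i = 0; i < 10; i++) DoWork(); }));
        Console.WriteLine("done");
    }
}
```

In a timeline snapshot of a program like this, the two worker threads take turns running and waiting, which is exactly the kind of pattern that call counts and timings alone would not reveal.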

More information about the differences between both profiling methods is available in the web help.

Timeline Profiling an Application

After downloading, installing and starting dotTrace 6, we have two options to start profiling an application. If integration with Visual Studio is enabled, we can use the ReSharper | Profile | Profile Startup Project (Performance) menu from there. dotTrace can also be launched standalone, after which we can use the home screen to start a profiling session. Make sure to provide the path to the executable to profile, and select Timeline as the profiling type.

After clicking Run, our application will be launched with the profiler attached. The profiling controller window also opens, which we can use to start and stop collecting snapshot data, kill the application that’s being profiled and more. We can expand the controller window to view real-time memory and CPU usage and the processes involved in the profiling session.

Profiling controller (expanded)

After reproducing an issue in our application, for example by provoking a UI freeze or following the steps someone provided in a bug report, we can collect a timeline profiling snapshot by clicking Get Snapshot’n’Wait in the controller window. This opens the Timeline Viewer, in which we will analyze the snapshot we’ve just captured.
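As an illustration of the kind of issue we might want to reproduce, here is a minimal sketch (the handler name, control name and file path are all hypothetical) of a classic UI freeze: a WPF event handler doing blocking file I/O on the UI thread.

```csharp
// Hypothetical WPF handler: synchronous I/O on the UI thread.
// While the file is being read, the window cannot pump messages,
// which shows up in a timeline snapshot as a long span on the
// main thread with the UI unresponsive.
private void OnExportClick(object sender, System.Windows.RoutedEventArgs e)
{
    // Blocking read on the UI thread: the window stops responding
    // until the whole file has been read into memory.
    var text = System.IO.File.ReadAllText(@"C:\temp\large-log.txt");
    ResultBox.Text = text;
}
```

Clicking the button, waiting out the freeze and then grabbing a snapshot gives us exactly the timespan we want to inspect.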

Analyzing a Snapshot using the Timeline Viewer

Once a snapshot is opened, either by capturing a fresh one or by opening a previously captured snapshot from the dotTrace Home, we will see the Timeline Viewer and its components. Note that, depending on the data a snapshot contains, processing it can take a while. Since the main idea behind timeline profiling is collecting temporal call stack and thread state data, the timeline diagram takes a prominent place, showing CPU utilization across the various threads over time.

Timeline Viewer showing timeline diagram, filters, call tree and source view.

Analyzing a snapshot in the Timeline Viewer is done by applying filters, and pretty much every tool window can be used to set one. The most obvious place to start is, well, the Filters tool window. Say we’re interested in seeing only File I/O: we can click that entry and, if needed, specify more fine-grained options such as a specific file or the access type (read/write).

Filters tool window

Selecting a filter in one tool window applies it in all the other tool windows. In our example, filtering File I/O on the file 3.txt shows us, in the Timeline Diagram, where in our application’s runtime the file was accessed. Our I/O seems to have happened a little more than a minute in, on the thread with ID 3488. Again, we can apply more filtering here: we could select just that thread and show only its data, or zoom in on the timeline and select just the timespan where the file was accessed, or both. Either way, we are simply applying additional filters.

Timeline diagram filtering

In the Call Stack tool window, we can find the call tree for everything that happened in our application during the selected timespan, on the thread with ID 3488, related to File I/O on the file named 3.txt. We can see that a method named ProcessFiles spent time reading and writing files. In the Source View tool window, we can even see the source code for that method, obtained by looking for PDB files or by decompiling our application’s assembly.
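For context, a method like the ProcessFiles shown in the call tree might look roughly like this. This is our own reconstruction for illustration, not the profiled application’s actual source; the directory parameters are invented.

```csharp
// Illustrative sketch of a method that both reads and writes files.
// Each ReadAllText/WriteAllText call would show up under the
// File I/O filter in the Timeline Viewer.
static void ProcessFiles(string inputDir, string outputDir)
{
    foreach (var path in System.IO.Directory.EnumerateFiles(inputDir, "*.txt"))
    {
        var contents = System.IO.File.ReadAllText(path); // read I/O
        var target = System.IO.Path.Combine(
            outputDir, System.IO.Path.GetFileName(path));
        System.IO.File.WriteAllText(target, contents);   // write I/O
    }
}
```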

Call Stack Source Code

By combining filters, we can find the information we want and investigate almost every aspect of our application.

Give dotTrace 6 a try; there’s a free trial available! We have a tutorial that helps you get started with timeline profiling in dotTrace 6, and a more advanced one in which we analyze an application issue. We’re looking forward to your feedback! Comments, issues and suggestions are welcome in our issue tracker or in the comments below.

Posted in dotTrace Tips&Tricks

Type Dependency Diagrams in ReSharper 9

In ReSharper 8, we introduced Architecture Tools as a way of visualizing the state of module dependencies in your solution. In ReSharper 9, we have augmented these tools with the ability to visualize class dependencies too.

Dependencies between types are shown on a Type Dependency Diagram. You can create one by selecting any node in the Solution Explorer (a file, folder, project, or the whole solution) and choosing Show Type Dependency Diagram. You will see something similar to the following:

Type Dependency Diagram in ReSharper 9

Sometimes you also need to investigate which external modules you depend on. In this case, you can once again open the Solution Explorer, but this time pick a couple of references, right-click, and choose Show Usages on Diagram to see how your code depends on the selected modules.

Show Usages on Diagram in ReSharper 9

This yields the following diagram:

Dependence of one type on another on a diagram

Just as with the project dependency diagram, arrows indicate the dependence of one type on another, and the width of a line indicates the relative number of usages of one type from another. The diagram can show usages, aggregation, inheritance relationships, return type usage and even the use of constructor injection.

Both the types and the lines connecting them show additional navigation options if you hover the mouse over them. In the case of the types, you get to see the references and usages, with plenty of navigation hyperlinks:

References on Type Dependency Diagram

Edge-related popups are similar, showing the usages:

Usages on Type Dependency Diagram

It is also possible to collapse the groups in which the particular types are shown. If you do, the popup for the collapsed group will show how all the types inside this group depend on some external module.

It’s important to note that the diagram is very flexible in terms of content. Some of the things it supports are:

  • Component removal: simply select a type and press Delete to get rid of it. The diagram will smoothly readjust itself.
  • Types can be added via simple drag-and-drop: grab an element in the Solution Explorer, drag it onto the diagram, and you’re done. As an added bonus, you can select a class name right in the text editor and drop it onto the diagram canvas to add the desired class!
  • Types can also be added if you simply start typing right on the diagram. This will result in a search (similar to Go to Type) and, after you pick a particular element, that element will be added to the diagram.
  • You can also filter out particular types of elements:
    Filter for types on a diagram
  • You can drag an element on the type diagram to a different folder or project. This will automatically invoke the Move refactoring.

Type Dependency Diagram controls

The right side of the diagram contains controls for zooming and panning, as well as two large buttons for choosing the layouter used for the diagram, each of which causes the graph to be laid out in a specific way.

The Inheritance Hierarchy Layouter is more suitable for one big inheritance hierarchy, whereas the Business Logic Layouter works better for a set of types that depend on one another without being related by inheritance. ReSharper generally tries to determine automatically which layout fits better, but you are always at liberty to change the setting.

We hope you enjoy this functionality. If you haven’t got yourself a copy of ReSharper 9 already, a 30-day trial awaits.


Get ReSharper

Posted in ReSharper Tips&Tricks

Navigation Improvements in ReSharper 9

As is customary, every new version of ReSharper comes with improvements across the board, so in this blog post we are going to focus on Navigation and related features.

Universal Alt+Enter

Let’s face it: Alt+Enter is the primary shortcut in ReSharper, and over time we have given it plenty of power, starting with context actions and quick-fixes and extending it to things like unit testing.

Now we’re taking this one step further by exposing all of ReSharper’s actions in a single, unified Alt+Enter menu. So now, if you press the shortcut on a piece of code that has no inspection, ReSharper will present you with a menu of possible actions you can take at that point:

This popup is also searchable, and the search happens on all of ReSharper’s actions:

Navigate to Exposing APIs

Quite often, when given a particular type (say, IFoo), you want to find out the sum total of all the ways in which you can actually get an instance of this type. There are lots of possible sources: static and ordinary fields, method return values, out parameters, and others. ReSharper’s navigation menu now includes an item called Navigate to Exposing APIs that helps you do exactly that:

Assembly Explorer Navigation

Recently, dotPeek’s Assembly Explorer migrated to ReSharper to provide a navigation tree for decompiled items. In ReSharper 9, we have added a few tweaks to make it more accessible. First of all, we’ve added an appropriate item to the Navigate To menu:

You can also navigate to the Assembly Explorer directly from VS’s own Solution Explorer:

It’s also possible to configure exactly what happens during navigation to a type whose sources are not locally available:

And a few extras…

Just a few more features to cover:

  • The Go to Type mechanism is now disambiguated for generics, meaning that navigating to a declaring type of a variable of type List<Foo> will now take you to either List<T> or Foo:
  • Find Usages now uses the Preview Tab feature in Visual Studio. This is a configurable setting that you can find in ReSharper Options:
  • The Find Usages mechanism no longer blocks the UI thread for long-running searches, so you can keep working with code while a search is in progress. That said, we did quite a lot to reduce search times in this release, so we kind of hope you don’t notice this feature at all.
  • We also made a couple of improvements to the Find Results window. ReSharper will now reuse tabs, so that multiple searches for IFoo show up in a single tab. It is also possible to ignore or hide search results: press Delete with a search item selected and it will be grayed out, and you can then show or hide the ignored items using the toolbar.


Try ReSharper

Posted in ReSharper Tips&Tricks