Unusual Ways of Boosting Up App Performance. Lambdas and LINQs

This is the third post in the series. The previous ones can be found here:

Today, we’re going to uncover common pitfalls of using lambda expressions and LINQ queries, and explain how you can avoid them in your day-to-day work.

Lambda Expressions

Lambda expressions are a very powerful .NET feature that can significantly simplify your code in particular cases. Unfortunately, convenience has its price. Wrong usage of lambdas can significantly impact app performance. Let’s look at what exactly can go wrong.

The trick is in how lambdas work. To implement a lambda (which is a sort of local function), the compiler has to create a delegate. Obviously, each time a lambda is called, a delegate is created as well. This means that if the lambda stays on a hot path (is called frequently), it will generate huge memory traffic.

Is there anything we can do? Fortunately, .NET developers have already thought about this and implemented a caching mechanism for delegates. For better understanding, consider the example below:

Caching lambdas 1
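The code from the screenshot above isn’t reproduced in the text, but a minimal sketch of the kind of closure-free lambda it illustrates could look like this (class and method names are illustrative):

    using System;

    class LambdaTest
    {
        static void Test(Func<string> produce)
        {
            Console.WriteLine(produce());
        }

        static void Main()
        {
            // No outer variables are captured, so the compiler can cache
            // the delegate in a static field and reuse it on every call.
            Test(() => "some string");
        }
    }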

Now look at this code decompiled in dotPeek:

Caching lambdas example. Decompiled code

As you can see, a delegate is made static and created only once – LambdaTest.CS$<>9__CachedAnonymousMethodDelegate1.

So, what pitfalls should we watch out for? At first glance, this behavior won’t generate any traffic. That’s true, but only as long as your lambda does not contain a closure. If you pass any context (this, an instance member, or a local variable) to a lambda, caching won’t work. This makes sense: the context may change at any time, and passing context is exactly what closures are for.

Let’s look at a more elaborate case. Suppose your app uses a Substring method to get substrings from strings:

Lambdas example 1
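The screenshot isn’t reproduced in the text; a sketch of such a method (names are illustrative) might be:

    // A sketch of the original helper: returns the substring of str
    // starting at index x.
    public string Substring(string str, int x)
    {
        return str.Substring(x);
    }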

Let’s suppose this code is called frequently and strings on input are often the same. To optimize the algorithm, you can create a cache that stores results:

Lambdas example 2

At the next step, you can optimize your algorithm so that it checks whether the substring is already in the cache:

Lambdas example 3
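A sketch of what such a cache with a GetOrCreate check might look like (the class layout is an assumption based on the surrounding text):

    using System;
    using System.Collections.Generic;

    public class Cache
    {
        private readonly Dictionary<string, string> _items =
            new Dictionary<string, string>();

        // Returns the cached value for key, or computes and stores it.
        public string GetOrCreate(string key, Func<string> create)
        {
            string value;
            if (!_items.TryGetValue(key, out value))
            {
                value = create();
                _items[key] = value;
            }
            return value;
        }
    }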

The Substring method now looks as follows:

Lambdas example 4
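Putting the pieces together, the problematic version presumably looks roughly like this:

    private readonly Cache _cache = new Cache();

    public string Substring(string str, int x)
    {
        // The lambda captures the local parameter x (and str), so the
        // compiler cannot cache the delegate: a closure object and a new
        // delegate are allocated on every call.
        return _cache.GetOrCreate(str, () => str.Substring(x));
    }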

As you pass the local variable x to the lambda, the compiler is unable to cache the created delegate. Let’s look at the decompiled code:

Lambdas example. Decompiled code with no caching

There it is. A new instance of <>c__DisplayClass1 is created each time the Substring method is called. The parameter x we pass to the lambda is implemented as a public field of this class.
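In decompiled form, the generated closure class looks roughly like this (exact names vary between compiler versions; the angle-bracket identifiers are the decompiler’s rendering of compiler-generated names and won’t compile as-is):

    [CompilerGenerated]
    private sealed class <>c__DisplayClass1
    {
        public string str;
        public int x;

        // The lambda body, rewritten as an instance method so it can
        // read the captured variables through fields.
        public string <Substring>b__0()
        {
            return this.str.Substring(this.x);
        }
    }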

How to Find

As with any other example in this series, first of all make sure that the lambda in question is really causing performance issues, i.e. generating significant traffic. This can be easily checked in dotMemory.

  1. Open a memory snapshot and select the Memory Traffic view.
  2. Find delegates that generate significant traffic. Objects of …+c__DisplayClassN are also a hint.
  3. Identify the methods responsible for this traffic.

For instance, if the Substring method from the example above is run 10,000 times, the Memory Traffic view will look as follows:

Lambdas shown in dotMemory

As you can see, the app has allocated and collected 10,000 delegates.

When working with lambdas, the Heap Allocations Viewer plugin also helps a lot, as it can proactively detect delegate allocation. In our case, the plugin’s warning will look like this:

Warning about lambdas in the HAV plug-in

But once again, data gathered by dotMemory is more reliable, because it shows you whether this lambda is a real issue (i.e. whether or not it generates lots of traffic).

How to Fix

Considering how tricky lambda expressions can be, some companies even prohibit using lambdas in their development processes. We believe that lambdas are a very powerful instrument that definitely can and should be used, as long as particular caution is exercised.

The main strategy when using lambdas is to avoid closures. In that case, the created delegate will always be cached, with no impact on traffic.

Thus, for our example, one solution is to not pass the parameter x to the lambda. The fix would look as follows:

Caching lambdas code fix
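The exact fix in the screenshot isn’t recoverable from the text, but one shape it could take is letting GetOrCreate hand the key back to the factory delegate, so the lambda needs no outer state (the hard-coded start index is purely illustrative):

    // GetOrCreate now passes the key into the factory delegate.
    public string GetOrCreate(string key, Func<string, string> create)
    {
        string value;
        if (!_items.TryGetValue(key, out value))
        {
            value = create(key);
            _items[key] = value;
        }
        return value;
    }

    public string Substring(string str)
    {
        // The lambda references only its own parameter s: no closure,
        // so its delegate is cached in a static field after the first call.
        return _cache.GetOrCreate(str, s => s.Substring(1));
    }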

The updated lambda doesn’t capture any variables; therefore, its delegate should be cached. This can be confirmed by dotMemory:

Lambdas caching after the fix shown in dotMemory

As you can see, now only one instance of Func is created.

If you need to pass some additional context to GetOrCreate, a similar approach (avoiding variable closure) should be used. For example:

Code example of passing additional context to lambdas
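One common pattern, similar in spirit to ConcurrentDictionary.GetOrAdd with a state argument, is an overload that threads the extra context through a delegate parameter; a sketch:

    public string GetOrCreate<TContext>(
        string key, TContext context, Func<string, TContext, string> create)
    {
        string value;
        if (!_items.TryGetValue(key, out value))
        {
            value = create(key, context);
            _items[key] = value;
        }
        return value;
    }

    public string Substring(string str, int x)
    {
        // x arrives in the lambda as the 'start' argument rather than a
        // captured variable, so the lambda stays closure-free and cached.
        // Note: the cache is still keyed on str alone here, mirroring the
        // post's simplified example.
        return _cache.GetOrCreate(str, x, (s, start) => s.Substring(start));
    }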

LINQ Queries

As we just saw in the previous section, a lambda expression always implies that a delegate is created. What about LINQ? The concepts of LINQ queries and lambda expressions are closely connected and have very similar implementations ‘under the hood.’ This means that all the concerns we discussed for lambdas are also true for LINQ queries.

If your LINQ query contains a closure, the compiler won’t cache the corresponding delegate. For example:

LINQ caching example
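The query in the screenshot might look something like this (a sketch with illustrative names):

    using System.Collections.Generic;
    using System.Linq;

    public IEnumerable<int> GetGreaterThan(List<int> values, int threshold)
    {
        // threshold is captured from the enclosing scope, so the compiler
        // cannot cache the predicate's delegate: it is re-created per call.
        return values.Where(value => value > threshold);
    }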

As the threshold parameter is captured by the query, its delegate will be created each time the method is called. As with lambdas, traffic from delegates can be checked in dotMemory:

LINQ caching shown in dotMemory

Unfortunately, there’s one more pitfall to avoid when using LINQ. Any LINQ query (like any other query) implies iterating over a data collection, which, in turn, implies creating an iterator. The subsequent chain of reasoning should already be familiar: if this LINQ query stays on a hot path, the constant allocation of iterators will generate significant traffic.

Consider this example:

LINQ iterator allocation example
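A sketch of what GetLongNames might look like (the length threshold is assumed; in the original screenshots the query is exercised by a method called Foo):

    public List<string> GetLongNames(List<string> inputNames)
    {
        // Where() allocates a WhereListIterator<string> on every call.
        var longNames = inputNames.Where(name => name.Length > 7);
        return longNames.ToList();
    }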

Each time GetLongNames is called, the LINQ query will create an iterator.

How to Find

With dotMemory, finding excessive iterator allocations is an easy task:

  1. Open a memory snapshot and select the Memory Traffic view.
  2. Find objects from the namespace System.Linq that contain the word “iterator”. In our example we use the Where LINQ method, so we look for System.Linq.Enumerable+WhereListIterator<string> objects.
  3. Determine the methods responsible for this traffic.

For instance, if we call the Foo method from our example 10,000 times, the Memory Traffic view will look as follows:

LINQ iterator allocation shown in dotMemory

The Heap Allocations Viewer plugin also warns us about allocations in LINQ queries, but only when LINQ methods are called explicitly. For example:

LINQ iterator allocation warning by the HAV plug-in

How to Fix

Unfortunately, the only answer here is to not use LINQ queries on hot paths. In most cases, a LINQ query can be replaced with a foreach loop. In our example, a fix could look like this:

LINQ iterator allocation fix example
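A foreach-based rewrite of the sketch above:

    public List<string> GetLongNames(List<string> inputNames)
    {
        var longNames = new List<string>();
        // List<T>'s struct enumerator is used directly here, so the loop
        // allocates nothing beyond the result list itself.
        foreach (var name in inputNames)
        {
            if (name.Length > 7)
                longNames.Add(name);
        }
        return longNames;
    }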

As no LINQ methods are used, no iterators will be created.

LINQ iterator allocation fix shown in dotMemory

We hope this series of posts has been helpful. Just in case, the previous two can be found here:

Please follow @dotmemory on Twitter or the dotMemory Google+ page to stay tuned.


Unusual Ways of Boosting Up App Performance. Strings

This is the second post in the series. The other ones can be found here:

This post will focus on the best approaches to working with strings.

Changing String Contents

String is an immutable type, meaning that the contents of a string object cannot be changed. When you change string contents, a new string object is created. This fact is the main source of performance issues caused by strings. The more you change string contents, the more memory is allocated. This, in turn, triggers garbage collections that impact app performance. A relatively simple solution is to optimize your code so as to minimize the creation of new string objects.

How to Find

Check all string instances that are created not by your code but by methods of the String class. The most obvious example is the String.Concat method, which creates a new string each time you combine strings with the + operator.

To do this in dotMemory:

  1. In the Memory Traffic view, locate and select the System.String class.

  2. Find all methods of the String class that create the selected strings.

Consider an example of a function that reverses strings:

Reversing strings example
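The screenshot likely shows a concatenation-based implementation along these lines (a sketch):

    // Builds the result with repeated concatenation: each += compiles to
    // String.Concat, allocating a new intermediate string per character.
    public string Reverse(string line)
    {
        string result = string.Empty;
        for (int i = line.Length - 1; i >= 0; i--)
        {
            result += line[i];
        }
        return result;
    }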

An app that uses this function to reverse a 1000-character string generates enormous memory traffic: more than 5 MB of allocated and collected memory. A memory snapshot taken with dotMemory reveals that most of the traffic (4 MB of allocations) comes from the String.Concat method, which, in turn, is called by the Reverse method.

Traffic from string objects shown in dotMemory

The Heap Allocations Viewer plug-in will also warn you about allocations by highlighting the corresponding line of code:

The HAV plug-in highlights string concatenation

How to Fix

In most cases, the fix is to use the StringBuilder class or to handle the string as an array of chars using array-specific methods. For the ‘reverse string’ example, the code could look as follows:

String concatenation code fix
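For instance, treating the string as a char array (a sketch; a StringBuilder-based version would work as well):

    using System;

    public string Reverse(string line)
    {
        // One char[] and one result string are allocated, regardless of
        // the input length: no intermediate strings.
        char[] chars = line.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }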

dotMemory shows that traffic dropped by over 99% after the fix:

Memory traffic in dotMemory after the fix

Improving Logging

When seeking ways to optimize your project, take a look at the logging subsystem. In complex applications, for the sake of stability and support convenience, almost all actions are logged. This results in significant memory traffic from the logging subsystem, so it is important to minimize allocations when writing messages to the log. There are multiple ways to improve logging.*

*Actually, the optimization approaches shown in this section are universal. The logging subsystem was taken as an example because it works with strings most intensively.

Empty Arrays Allocation

A typical LogMessage method looks as follows:

logging_1
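A sketch of such a method (the _loggingEnabled flag and _writer backend are assumptions):

    public void LogMessage(string message, params object[] args)
    {
        if (_loggingEnabled)
            _writer.WriteLine(message, args); // _writer is hypothetical
    }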

 

What are the pitfalls of such an implementation? The main concern is how you call this method. For example, the call

Logging call 1

will cause allocation of an empty array. In other words, this line will be equivalent to

Logging call 2
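In code, the two forms side by side (the message text is illustrative):

    LogMessage("Entering method A");                 // what you write
    LogMessage("Entering method A", new object[0]);  // what the compiler emits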

How to Find

These allocations would be difficult to detect in a memory snapshot manually, but the Heap Allocations Viewer plug-in finds them very quickly:

Empty array creation shown in the HAV plug-in

How to Fix

The best solution is to create a number of method overloads with explicitly specified arguments. For instance:

Logging fix example
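A sketch of such overloads (Write stands in for the actual logging backend):

    public void LogMessage(string message)
    {
        Write(message);
    }

    public void LogMessage(string message, object arg0)
    {
        Write(string.Format(message, arg0));
    }

    public void LogMessage(string message, object arg0, object arg1)
    {
        Write(string.Format(message, arg0, arg1));
    }

    // Fallback for the rare calls with more arguments; only these
    // allocate a params array.
    public void LogMessage(string message, params object[] args)
    {
        Write(string.Format(message, args));
    }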

Hidden Boxing

The implementation above has a small drawback. What if you pass a value type to, say, the following method?

Hidden boxing example 1

For example:

Hidden boxing example 2
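Reconstructed as a sketch, the method and the boxing call might look like this:

    public void LogMessage(string message, object arg0)
    {
        Write(string.Format(message, arg0));
    }

    public void Foo(int itemCount)
    {
        // itemCount is an int; converting it to object boxes it,
        // allocating a new object on the heap for every call.
        LogMessage("Processed {0} items", itemCount);
    }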

As the method accepts an argument of type object, which is a reference type, boxing will take place.

How to Find

As with any other boxing, the main clue is a value type on the heap. So, all you need to do is look at the memory traffic and find a value type. In our case this will look as follows:

Hidden boxing shown in dotMemory

Of course, the Heap Allocations Viewer will also warn you:

Hidden boxing warning in the HAV plug-in

How to Fix

The easiest way is to use generics—a mechanism for deferring type specification until it is declared by client code. Thus, the revised version of the LogMessage method should look as follows:

Hidden boxing code fix
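A sketch of the generic version (Write again stands in for the logging backend):

    public void LogMessage<T>(string message, T arg0)
    {
        // arg0 keeps its value type; ToString() on it is a constrained
        // call, so no boxing occurs on the way in.
        Write(string.Format(message, arg0.ToString()));
    }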

Early String Allocation

The advice to defer variable allocation as much as possible is quite obvious. Still, sometimes stating the obvious is useful.

Consider the code below. Here the logmsg string is created regardless of whether logging is turned on or off:

Deferring string allocation

A better solution would be:

Deferring string allocation fix
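Side by side, the two versions might look like this (names are illustrative):

    // Before: logmsg is built even when logging is off.
    public void ProcessItemEager(int i, int total)
    {
        string logmsg = string.Format("Processing item {0} of {1}", i, total);
        if (_loggingEnabled)
            _writer.WriteLine(logmsg);
    }

    // After: the string is allocated only when it will actually be written.
    public void ProcessItemLazy(int i, int total)
    {
        if (_loggingEnabled)
        {
            string logmsg = string.Format("Processing item {0} of {1}", i, total);
            _writer.WriteLine(logmsg);
        }
    }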

Excessive Logging

If you use logging for debugging purposes, make sure log calls never reach the release build. You can do this by using the [Conditional] attribute.

In the example below, the LogMessage method will be called only if the DEBUG conditional compilation symbol is defined.

Conditional attribute example
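A sketch of such a method:

    using System.Diagnostics;

    public class Logger
    {
        [Conditional("DEBUG")]
        public void LogMessage(string message)
        {
            // The compiler removes calls to this method entirely unless
            // the DEBUG symbol is defined, so release builds pay nothing.
            System.Console.WriteLine(message);
        }
    }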

That does it for this post. In the next one, we’ll talk about the nuances of using lambda expressions and LINQ queries. To stay tuned, please follow @dotmemory on Twitter or the dotMemory Google+ page!


dotPeek 1.2 Is Released

We have finally finished fine-tuning the new version of dotPeek, our free .NET decompiler and assembly browser. Please download dotPeek 1.2, which gains a new superpower and learns to walk in symbol server shoes.

dotPeek12 is released

Highlights of this update include:

  • dotPeek can now be used as a symbol server, providing the Visual Studio debugger with the information required to debug assembly code. Not only can dotPeek generate source files on the debugger’s request, but you can also generate PDB files manually with dotPeek and watch the progress in a dedicated tool window.
  • A Process Explorer window that lists all currently running processes and lets you explore their modules and decompile those that are .NET assemblies.
  • Quick search and node filtering in Assembly Explorer with lowerCamelHumps supported.

For the full list of fixes and enhancements in version 1.2, take a look at the release notes.


Get dotPeek


Unusual Ways of Boosting Up App Performance. Boxing and Collections

This is the first post in the series. The other ones can be found here:

Many developers today are familiar with the performance profiling workflow: you run an application under the profiler, measure the execution times of methods, identify methods with high ‘own time,’ and work on optimizing them. This scenario, however, misses one important performance aspect: the time spent in the numerous garbage collections in your app. Of course you can evaluate the total time required for GC, but where does it come from, and how can you reduce it? ‘Plain vanilla’ performance profiling won’t give you any clue about that.

Garbage collections always result from high memory traffic: the more memory is allocated, the more must be collected. As we all know, memory traffic optimization should be done with the help of a memory profiler. It allows you to determine how objects were allocated and collected and which methods are behind these allocations. This looks simple in theory, right? In practice, however, many developers end up with the words, “Okay, so some traffic in my app is generated by some system classes whose names I see for the first time in my life. I guess this could be because of some poor code design. What do I do now?”

This is what this post is about. Actually, this will be a series of posts where we share our experience of memory traffic profiling: what we consider ‘poor code design,’ how to find its traces in memory, and, of course, what we consider best practices.* Here’s a simple example: if you see objects of a value type in the heap, then surely boxing is to blame. Boxing always implies an additional memory allocation, so removing it is very likely to make your app better.

This first post in the series will focus on boxing: where to look and how to act if a ‘bad memory pattern’ is detected.

*Best practices described in this series allowed us to increase the performance of certain algorithms in our .NET products by 20%-50%.

What Tools You Will Need

Before we go any further, let’s look at the tools we’ll need. The list of tools we use here at JetBrains is pretty short:

  • dotMemory memory profiler.
    The profiling algorithm is always the same regardless of the issue you’re trying to find:

    1. Start profiling your application with memory traffic collection enabled.

    2. Collect a memory snapshot after the method or functionality you’re interested in finishes working.

    3. Open the snapshot and select the Memory Traffic view.

  • ReSharper plugin called Heap Allocations Viewer. The plugin highlights all places in your code where memory is allocated. This is not a must, but it makes coding much more convenient and in some sense ‘forces’ you to avoid excessive allocations.

Boxing

Boxing is converting a value type to the object type. For example:

Boxing example
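For instance:

    int i = 42;
    object obj = i;    // boxing: the int is copied into a new heap object
    int j = (int)obj;  // unboxing: the value is copied back out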

Why is this a problem? Value types are stored on the stack, while reference types (object) are stored in the managed heap. Therefore, to assign an integer value to an object, the CLR has to take the value from the stack and copy it to the heap. Of course, this movement impacts app performance.

How to Find

With dotMemory, finding boxing is an elementary task:

  1. Open a memory snapshot and select the Memory Traffic view.
  2. Find objects of a value type. All these objects are the result of boxing.
  3. Identify methods that allocate these objects and generate a major portion of the traffic.

Boxing shown in dotMemory

The Heap Allocations Viewer plugin also highlights allocations made because of boxing.

Boxing shown by the HAV plug-in

The main concern here is that the plugin shows you only the fact of a boxing allocation. But from the performance perspective, you’re more interested in how frequently this boxing takes place. E.g., if the code with a boxing allocation is called just once, optimizing it won’t help much. Taking this into account, dotMemory is much more reliable at detecting whether boxing causes real problems.

How to Fix

First of all: before fixing the boxing issue, make sure it really is an issue, i.e. it generates significant traffic. If it does, your task is clear-cut: rewrite your code to eliminate boxing. When you introduce a struct type, make sure that the methods that work with this struct don’t convert it to a reference type anywhere in the code. For example, one common mistake is passing variables of value types to methods that work with strings (e.g., String.Format):

Fixing boxing

A simple fix is to call the ToString() method of the appropriate value type:

Fixing boxing 2
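Both versions side by side (a sketch):

    int id = 42;

    // Boxes id: this String.Format overload takes object.
    string s1 = string.Format("Id: {0}", id);

    // No boxing: the value is converted to a string while still unboxed.
    string s2 = string.Format("Id: {0}", id.ToString());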

Resizing Collections

Dynamically sized collections such as Dictionary, List, HashSet, and StringBuilder have the following specifics: when the collection size exceeds its current bounds, .NET resizes the collection and copies its entire contents to a new location in memory. Obviously, if this happens frequently, your app’s performance will suffer.

How to Find

The insides of dynamic collections can be seen in the managed heap as arrays of a value type (e.g. Int32 in the case of Dictionary) or of the String type (in the case of List). The best way to find resized collections is to use dotMemory. For example, to find whether Dictionary or HashSet objects in your app are resized too often:

  1. Open a memory snapshot on the Memory Traffic view.
  2. Find arrays of the System.Int32 type.
  3. Find the Dictionary<>.Resize and HashSet<>.SetCapacity methods and check the traffic they generate.

Finding resized Dictionary in dotMemory

The workflow for List collections is similar. The only difference is that you should check the System.String arrays and the List<>.SetCapacity method that creates them.

Finding resized List in dotMemory

In the case of StringBuilder, look for System.Char arrays created by the StringBuilder.ExpandByABlock method.

Finding resized StringBuilder in dotMemory

How to Fix

If the traffic caused by the ‘resize’ methods is significant, the only solution is to reduce the number of resizes needed. Try to predict the required size and initialize the collection with this size or larger:

Predicting collection size
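For example (expectedCount is an assumed estimate of the final item count):

    int expectedCount = 1000; // assumed estimate of the final item count

    // Grows through several internal array reallocations as items arrive.
    var unsized = new List<int>();

    // Allocates its internal storage once up front.
    var sized = new List<int>(expectedCount);
    var lookup = new Dictionary<string, int>(expectedCount);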

In addition, keep in mind that any allocation of 85,000 bytes or more goes on the Large Object Heap (LOH). Allocating memory in the LOH has some performance penalties: as the LOH is not compacted, some additional interaction between the CLR and the free list is required at the time of allocation. Nevertheless, in some cases allocating objects in the LOH makes sense, for example, for large collections that must live for the entire lifetime of an application (e.g. a cache).

Enumerating Collections

When working with dynamic collections, pay attention to the way you enumerate them. A typical source of headaches here is enumerating a collection with foreach while knowing it only by its IEnumerable interface. Consider the following example:

Enumerating collections example
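A sketch of such code:

    using System;
    using System.Collections.Generic;

    public class EnumerationExample
    {
        public void Run()
        {
            var names = new List<string> { "Alice", "Bob" };
            Foo(names); // List<string> is implicitly cast to IEnumerable<string>
        }

        public void Foo(IEnumerable<string> values)
        {
            // foreach here calls GetEnumerator() through the interface,
            // which boxes List<string>'s struct enumerator on every call.
            foreach (var value in values)
                Console.WriteLine(value);
        }
    }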

The list in the Foo method is cast to the IEnumerable interface, which implies further boxing of the enumerator.

How to Find

As with any other boxing, the described behavior can be easily seen in dotMemory.

  1. Open a memory snapshot and select the Memory Traffic view.
  2. Find the System.Collections.Generic.List+Enumerator value type and check generated traffic.
  3. Find methods that originate those objects.

Finding enumerators using dotMemory

As you can see, a new enumerator was created each time we called the Foo method.

The same behavior applies to arrays as well. The only difference is that you should check traffic for the SZArrayHelper+SZGenericArrayEnumerator<> class.

Finding array enumerators using dotMemory

The Heap Allocations Viewer plug-in will also warn you about hidden allocations:

HAV plug-in warning about enumerator allocation

How to Fix

Avoid casting a collection to an interface. In our example above, the best solution would be to create a Foo method overload that accepts the List<string> collection.

Fixing excessive enumerator allocations
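A sketch of the overload:

    // The parameter keeps the concrete type, so foreach binds to
    // List<string>.Enumerator, a struct: nothing is allocated.
    public void Foo(List<string> values)
    {
        foreach (var value in values)
            Console.WriteLine(value);
    }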

If we profile the code after the fix, we’ll see that the Foo method doesn’t create enumerators anymore.

Memory traffic after the fix

In the next installment of this series, we’re going to take a look at the best approaches for working with strings. Stay tuned!


Get dotMemory


ReSharper for C++ EAP Goes On

As you may have already heard, the Early Access Program for ReSharper with C++ support is in progress.

We’ve prepared a new build and wanted to share a quick update on what new features and options it brings to the table:

  • Better performance: indexes are now saved locally, meaning that subsequent launches are faster than the initial launch.
  • Improved code completion including smart completion.
  • More settings to customize code formatting style.
  • New control flow analyses that detect unreachable code, uninitialized local variables, assigned values that are never used, and redundant ‘else’ keywords.
  • New quick-fixes including Create from usage for global variables, class and enum members.

To learn more about what ReSharper C++ features are already implemented and how they can help you in your everyday work, check out this video by Dmitri Nesteruk:

Please note that there are still limitations in terms of supported project size (up to 40 MB), MS C++ extensions and MS preprocessor extensions. The ReSharper C++ EAP page contains a full list of known issues and unsupported items, which you’re highly encouraged to examine before you decide to download and install an EAP build.


Webinar Recording: Merging Refactored Code – ReSharper Meets SemanticMerge

The recording of our June 17th webinar, Merging Refactored Code – ReSharper Meets SemanticMerge, is now available on the JetBrains YouTube Channel.

In this webinar, Matt Ellis (JetBrains) hosts Pablo Santos (SemanticMerge), who runs through a number of refactoring examples, from the seemingly trivial (yet essential) to complex structure modification scenarios, and demonstrates how to refactor with ReSharper and later merge the changes with SemanticMerge.

Pablo talks about the challenges of merging complex refactorings and demonstrates how SemanticMerge simplifies the task by parsing the code into a syntax tree and then reasoning about it as code rather than text (or text with heuristics), merging accordingly.

The slides used in the webinar are available on Slideshare.

About the Presenter:

Pablo Santos

Pablo Santos is the founder of Codice Software, the company behind Plastic SCM and SemanticMerge. Codice started in 2005, and since then Pablo has played different roles ranging from core engineering to marketing, business development, advertising and sales operations. Nowadays he plays a dual role as lead of software engineering and product manager for both SemanticMerge and Plastic SCM.


Profiling ASP.NET vNext using dotMemory and dotTrace

With ASP.NET vNext (working title), Microsoft is making some bold moves towards opening up development and deployment options for building applications. For starters, we no longer have to install the full .NET Framework on our servers: ASP.NET vNext lets us deploy our own version of .NET on an app-by-app basis. Next, the project model is being reduced to just a project.json file describing dependencies and some optional configuration, much like we write apps in NodeJS.

ASP.NET vNext is still an early preview, but it never hurts to explore and test-drive what’s going to be the next generation of .NET. And when writing code for it, we may want to inspect the performance and memory usage of our applications to verify we don’t have any issues there. What better tools to do that than dotMemory and dotTrace?

Setting the correct CLR type

Before we start profiling, it is important to know that profiling currently works only when the Desktop CLR is active. The new Core CLR does not (yet?) expose the same profiler hooks.

Checking if the correct CLR is active can be done from the command line using the kvm list command.

Output of the kvm list command

Make sure that the active CLR (marked with an *) shows the svr50 runtime. If it’s svrc50, that means we’re on the Core CLR and have to switch to the Desktop CLR using the kvm set <pick the correct version from the above list> command.

Attaching to an already running ASP.NET vNext process

After starting dotMemory or dotTrace, hit Attach To Process on the toolbar. This will give us a list of all processes running on our local computer. (Both profilers can also attach to processes running on remote machines.) From this list, find the klr.exe process. When running multiple ASP.NET vNext applications on our machine, it can be a bit tricky to find the correct one.

Attach profiler

We can provide additional options and even make use of the profiler API in our ASP.NET vNext applications. Hitting Run will attach the profiler, which behaves identically to profiling “classic” .NET applications. For example, in dotMemory we can see the memory usage (unmanaged and managed heaps) in real time:

dotMemory memory usage of project K ASP.NET vNext

In dotTrace we can see the subsystems used by our code and drill down to the function or thread level.

Profiling ASP.NET vNext in dotTrace

Starting an ASP.NET vNext process with profiling enabled

When attaching to an already running process, some options are unavailable. For example, dotMemory will be unable to collect memory traffic for an already running application, and dotTrace will not offer its more detailed profiling modes, such as Tracing or Line-by-Line profiling. For these options to be available, we have to launch the process with profiling enabled from the start.

Running ASP.NET vNext applications is done using the k run or k web commands. These are aliases for the active runtime, which is located under %USERPROFILE%\.kre\packages\<selected runtime>\. From dotMemory and dotTrace, we can use the Profile | Standalone Application menu and enter the startup details for our ASP.NET vNext application.

Profiler configuration

Here’s what we have to specify:

  • Application: the ASP.NET vNext runtime to use. This will depend on which runtimes we have installed on our system, but will look like %USERPROFILE%\.kre\packages\<selected runtime>\bin\K.cmd.
  • Arguments: the arguments we would typically pass to the k command. If we start our application with k web, then web would be the argument to specify here.
  • Working directory is the path where our application and its project.json file are located.

Using this approach, we can now set all additional profiler options. dotMemory allows collecting memory traffic, while dotTrace lets us perform Line-by-Line profiling and specify the measure used. Run will launch the application and let us capture a snapshot to investigate later.

Give it a try! Even with a new CLR in town we still have to care about memory and performance of our applications. Here are some resources to get started with dotMemory and dotTrace:


Webinar Recording and Q&A: .NET Code Coverage for Continuous Integration using TeamCity and dotCover

The recording of our June 10th webinar with Maarten Balliauw, .NET Code Coverage for Continuous Integration using TeamCity and dotCover, is now available on the JetBrains YouTube Channel, and the slides are on SlideShare.

In this webinar, we use dotCover to collect code coverage information while running tests in our CI process. We see how to configure code coverage and how to use the TeamCity Visual Studio plugin to download the coverage snapshot generated on the build server and inspect it with dotCover on a developer machine.

How much of our code is being covered by our unit tests? Are there areas we are not testing? By capturing code coverage data during a test run, we can analyze which areas of our applications are well-tested and which ones require additional tests to be written. And where better to capture code coverage information than on our build server? Find out in this webinar.

Below are select questions and answers from our webinar:

Q: Can we use MStest with TeamCity and also get code coverage with dotCover?

A: Yes, the test runner type and the .NET coverage tool can be chosen on the TeamCity Build Configuration Settings tab. Please check the following settings:
MSTest and dotCover with TeamCity


Q: Is it possible to use dotCover on TeamCity with xUnit.net tests?

A: Yes, it is possible. If you are using xUnit and want to capture code coverage, you can run dotCover from the command line and import the data into TeamCity by using a service message. Maarten described the process in detail during the webinar (starting from the 24th minute).

For more information on dotCover, please visit the official dotCover website. To stay tuned for further events, follow dotCover on Twitter!

Maarten Balliauw

Maarten Balliauw is a Technical Evangelist at JetBrains. His interests are all web: ASP.NET MVC, PHP and Windows Azure. He’s a Microsoft Most Valuable Professional (MVP) for Windows Azure and an ASPInsider. He has published many articles in both PHP and .NET literature such as MSDN Magazine and php|architect. Maarten is a frequent speaker at various national and international events such as MIX (Las Vegas), TechDays, DPC and others.

dotPeek 1.2 EAP: Introducing Process Explorer

Have you ever wanted to dig deeper into a process running on your machine? We have. That’s the reason why the new dotPeek 1.2 EAP build introduces Process Explorer.

The Process Explorer window provides you with a list of all currently running processes and lets you decompile those that are .NET processes. Once you locate a process to decompile, you can add it to Assembly Explorer for further investigation by clicking the “+” button. From there, you can export the decompiled code to a Visual Studio project if necessary.

Process Explorer window in dotPeek 1.2

You can see native processes in this window as well, although you naturally shouldn’t expect dotPeek to be able to decompile them. To display native processes, click Show Native Processes on the Process Explorer toolbar.

To display additional details about a process, for instance the .NET Framework version used or the MVID (which can be useful to double-check that you are going to debug the right application), press F4 to bring up a tool window displaying process properties.

dotPeek 1.2 Process Properties tool window

In case you’ve missed it, note that dotPeek 1.2 EAP can now work as a symbol server and supply the Visual Studio debugger with the information required to debug assembly code. Download dotPeek 1.2 EAP and give it a try!


Heap Allocations Viewer plugin

When we first launched ReSharper 8 back in July last year, there was a very nice little feature hidden in internal mode. This was being used by the dev team to help keep an eye on performance. It was a little checkbox that said “Show allocations (delegates, closures, hidden allocations, boxing)”.

It added highlights to your code, as you type, indicating where memory is being allocated. It’s very eye-opening to see how frequently memory is allocated, often in places you don’t expect. Take this call to a function with “params” arguments. Note the orange highlight, telling us that .NET will allocate an array for the parameters on each call:

Highlight hidden allocation of an array

But alas, features hidden in our internal mode are not very discoverable. So the developer behind the feature, Alexander Shvedov, made a few changes, fixed a few issues and pulled it out into an extension, which is now available to download from the extension manager, by searching for “heapview“. A stable version 0.9.1 is available, but you can always live on the edge by installing a prerelease version.

Extension Manager showing heapview extension

WARNING: Before we go any further, it must be said: BEWARE OF PREMATURE OPTIMISATION.

This plugin will tell you when an allocation is made, but it does not tell you how much memory is being allocated, or how frequently. It does not tell you if you have memory problems with your application. It simply shows where allocations are going to occur. You should still use a memory profiler such as dotMemory to measure your application and decide if your memory usage is causing you issues.

But of course, the plugin is still very useful. It’s always good to know what your code is doing, and having that visible can help you make more informed design decisions. It’s also very helpful when you know you need to write code that avoids excessive allocations. For example, the dev team uses this to keep an eye on memory allocations made during performance-critical code paths, such as as-you-type analysis. These code paths are called very frequently, every time you edit a document, and obviously need to be fast. Memory churn can have a huge impact on performance, especially if it causes garbage collection, so even a small number of allocations in these circumstances can have a knock-on effect on performance.

And of course, it’s simply a very interesting view into what your code is doing!

So what does it highlight?

Obviously, it handles explicit allocation, with the new keyword:

Highlight explicit allocation

And it also supports the standard ReSharper functionality of changing the severity of the highlight (so you can disable it if you’re not interested in that particular highlight) and allows you to find all similar highlights in the project or solution.

Configure severity and find all in scope

Other heap allocations:

Example heap allocation highlights

The plugin will highlight allocations made due to boxing:

Highlight boxing allocations

Delegate creation, even listing local scope variables that are captured in closures:

Highlight delegate creation

It will even show you the points where the classes used to capture closure variables are created:

Highlight closure allocation

You can use this plugin in combination with a full profiler, such as dotMemory. Here’s an example where the plugin reports a possible enumerator allocation:

Possible enumerator allocation
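A sketch of the kind of code that triggers this warning (Method1 is the name shown in the profiler screenshot below; its body here is illustrative):

    public void Method1(string[] args)
    {
        // Casting the array to IEnumerable<string> forces a class-based
        // enumerator (SZGenericArrayEnumerator) to be allocated; foreach
        // over the array itself would compile to an index-based loop.
        IEnumerable<string> sequence = args;
        foreach (var arg in sequence)
            Console.WriteLine(arg);
    }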

When profiling this code using dotMemory and capturing memory allocations, we can see that the plugin is right: generic array enumerators are being allocated (of type SZArrayHelper+SZGenericArrayEnumerator<String>) and collected all the time, potentially triggering garbage collections! The backtrace also shows us that these allocations originate from Method1:

Allocations in dotMemory

Our code would be more performant and less heavy on memory traffic if we just enumerated the args array instead of casting it to an IEnumerable<string>.
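That fix is a one-line change (a sketch):

    public void Method1(string[] args)
    {
        // foreach over the array compiles to a plain indexed loop:
        // no enumerator object, no garbage.
        foreach (var arg in args)
            Console.WriteLine(arg);
    }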

As we can see, once you’ve found your allocation hotspots with the profiler, the plugin will make it obvious where in the method the memory is being allocated, whether it be explicitly, or through hidden allocations such as boxing or capturing closure information.

And of course, it’s Open Source. If you’re interested to see how it’s written, or want to report issues, or perhaps even contribute, please check it out.
