Get dotMemory 4.1 Beta

It’s been a while since dotMemory 4.0 was released, and we’ve spent all this time baking a new batch of .NET memory profiling goodies for you. Please welcome dotMemory 4.1 Beta, which brings you a more rigorous, convenient and beautiful profiling experience.

Here are the must-try features in dotMemory 4.1 Beta:

Disable/Enable collection of allocation data on the fly. To profile certain functionality of your app without slowing down all of its other areas, try disabling allocation data collection directly from the real-time profiling control view. Feel free to comment on DMRY-2317 in the dotMemory issue tracker.
Collection allocations
Detect more common issues in a single click. To help you focus on what really needs your attention, we’ve added a new automatic inspection: String duplicates, which detects duplicate strings and shows how much memory they waste (DMRY-2232). Two more recently added automatic inspections, both related to WPF, were announced earlier in v4.0.10.
String duplicates inspection
Copy, save and share profiling results. Simply press Ctrl+C to copy the current analysis results to the clipboard. Selecting and copying multiple rows is supported for all “Type List” views, all nicely formatted and ready to be compared or shared (DMRY-454).
Formatted copy of analysis results
Get more data on GC roots. Starting with this update, dotMemory shows you the name of the field through which a static object is referenced (DMRY-774). Later we plan to publish a good read on GC roots basics, so stay tuned.
Named static reference
Enjoy restyled UI icons. Our notion of profiling is that it should be clear, productive and even beautiful. That’s why this update features new great-looking UI icons for your viewing pleasure.
Refreshed UI icons
Name your analysis. Easily order and manage multiple memory investigations by providing your analysis tabs with custom names. Never again get lost in loads of various unnamed analyses.
Custom analysis name

dotMemory 4.1 should be out in just a couple of weeks. For now, go ahead, download dotMemory 4.1 Beta and try all the new things shipped with this update. Feel free to share any feedback you may have.

We’re eager to hear from you and make dotMemory 4.1 better with your help and input.

Profile with pleasure!
The dotMemory Team


Fighting Common WPF Memory Leaks with dotMemory

When developing Windows Presentation Foundation (WPF), Silverlight or Windows Store applications, we may run into a number of common memory leaks. In this blog post, we’ll see how dotMemory can help detect these common WPF memory leaks and how we can fix them.

Here are the leaks we will discuss in this post:

  • Binding leak
  • Collection binding leak
  • Textbox undo leak
  • Event Handler leak
  • x:Name leak

Binding leak

WPF comes with a number of data binding patterns which, if broken, can cause memory leaks in our applications. Take the following class:

Person class
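A minimal sketch of such a class, assuming a plain Name property with no change notification:

    public class Person
    {
        public string Name { get; set; }
    }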

When we bind to an instance’s Name property, the binding target starts listening for property change notifications. If the property is not a DependencyProperty or an object that implements INotifyPropertyChanged, WPF will resort to subscribing to the ValueChanged event of the System.ComponentModel.PropertyDescriptor class to get notifications when the source object’s property value changes.

Why is this a problem? The runtime creates a reference to this PropertyDescriptor, which in turn references our source object. Since the runtime never knows when to deallocate that initial reference (unless explicitly told), both the PropertyDescriptor and our source object will remain in memory.

How to detect it?

dotMemory comes with a series of automatic inspections that detect common memory leaks. When profiling an application, we can instantiate the control that binds to our class and then dispose of that control. When opening a snapshot in dotMemory, the snapshot overview page will tell us about WPF binding leaks immediately.

WPF analyze binding leak

This should be all we need to know, but let’s see if we can find proof of the theory above about the PropertyDescriptor’s ValueChanged event handler keeping our objects in memory. After double-clicking the list entry, we can see the object set open. When we navigate to the Group by Similar Retention view, we get proof from the ValueChangedEventManager retaining our object:

Binding leak retention schema

How to fix it?

The simplest fix for a WPF binding leak would be making our Name property a DependencyProperty, or implementing the INotifyPropertyChanged interface correctly on our Person class and its Name property, like so:

Person class with INotifyPropertyChanged
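A sketch of the corrected class, implementing INotifyPropertyChanged (from System.ComponentModel):

    public class Person : INotifyPropertyChanged
    {
        private string _name;

        public event PropertyChangedEventHandler PropertyChanged;

        public string Name
        {
            get { return _name; }
            set
            {
                if (_name == value) return;
                _name = value;
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs("Name"));
            }
        }
    }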

If the object is of a type we cannot edit, for example because it comes from a library we depend on, we can also explicitly clear the binding by calling:

Clear binding explicitly
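The exact call is not shown here, so as a sketch (the control and property names are placeholders):

    // Clear one binding on the target control...
    BindingOperations.ClearBinding(nameTextBlock, TextBlock.TextProperty);
    // ...or clear all bindings on it at once
    BindingOperations.ClearAllBindings(nameTextBlock);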

Note that if a binding has the OneTime mode, this leak will not be present as the binding is done only once and the binding target will not listen for changes on the source object.

Collection binding leak

A similar issue to the WPF binding leak is the Collection binding leak. If there is binding to a collection that does not implement the INotifyCollectionChanged interface, WPF creates a strong reference to this collection. As a result, it stays in memory for the entire application lifetime.

How to detect it?

We will have to run our application and bind a control, a ListBox for example, to our collection. Next, we can remove the binding by destroying the control and then take a snapshot using dotMemory. We would expect our collection to be removed from memory, yet if we look at the snapshot overview we can see a WPF collection binding leak has been detected.

Detect WPF collection binding leak

If we then open the object set and look at the Group by Dominators view, we can see our collection is held in memory by the WPF DataBindEngine, an object which will be around for the lifetime of our application. And as long as our object dominator stays in memory, our collection will, too…

Analyse collection binding leak

How to fix it?

An easy way of fixing this issue is to implement the INotifyCollectionChanged interface on our custom collection type. If the collection does not need any specific implementation, we could also inherit from the ObservableCollection<T> type, as it handles the implementation for us.

ObservableCollection
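For illustration, with an invented collection type:

    // Inheriting ObservableCollection<T> gives us a working
    // INotifyCollectionChanged implementation for free
    public class PersonCollection : ObservableCollection<Person>
    {
    }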

Textbox undo leak

One of the great things about WPF is that it enables Undo on several controls, like a textbox for example. For every change we make to its contents, WPF will keep the actions in memory so we can easily undo them using Ctrl+Z. Now imagine our application has a textbox in which lots of changes are being made… By default, the WPF UndoManager will keep up to 100 of these actions, and in earlier WPF versions there was no limit at all.

While not really a memory leak as such, having a high undo limit on textboxes in our applications may cause excessive memory usage without us even knowing. And if we profile the application, we will see this as well.

How to detect it?

After running our application and making a large number of changes to a textbox’s contents, the dotMemory snapshot overview could show a large number of Char[] objects.

Largest Size diagram

If we drill deeper into this object set and look at the dominators (Group by Dominators), we can see that these object types are held in memory by several others. The first dominator here (TextTreeRootNode) is our textbox control itself; of course it needs a few Char[] arrays to hold its contents. The second one, UndoManager, is more interesting.

UndoManager

It seems the UndoManager is holding on to quite a few Char[] arrays as well. That’s logical, as WPF’s undo behavior needs this information to be able to undo/redo changes made to the textbox.

How to fix it?

First of all, this is not really a leak. It’s a feature! It is important to know it’s there, though, for two reasons. First, when profiling WPF applications, we may see a number of Char[] arrays being created; don’t get distracted by the UndoManager, and try focusing on other dominators if the allocations are too excessive. Second, when building applications where a lot of text editing is done, high memory usage can be explained by this undo behavior.

To limit the number of entries the undo and redo stacks can hold, we can set the textbox’s UndoLimit property to a lower number. WPF used to default to -1 (unlimited), but in recent versions it defaults to 100.

UndoLimit
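For example (the textbox name is a placeholder):

    // Keep at most 10 entries on the undo and redo stacks
    myTextBox.UndoLimit = 10;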

We could also turn off undo entirely by changing the IsUndoEnabled property.

Turn off undo
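Again, as a one-line sketch:

    // Disable undo/redo for this textbox entirely
    myTextBox.IsUndoEnabled = false;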

Event Handler leak

A common leak, not only in WPF but also in other frameworks, is the event handler leak. Event handler leaks are caused by developer oversight. Here’s an example. Imagine we open an AdWindow window in our application, and let it update its contents every few seconds. We could instantiate a DispatcherTimer in our constructor and subscribe to the Tick event to handle these updates.

Subscribe to Timer
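A sketch of what such a constructor might look like (names assumed):

    private DispatcherTimer adTimer;

    public AdWindow()
    {
        InitializeComponent();

        // Rotate ad contents every few seconds
        adTimer = new DispatcherTimer();
        adTimer.Interval = TimeSpan.FromSeconds(3);
        adTimer.Tick += ChangeAds;
        adTimer.Start();
    }

    private void ChangeAds(object sender, EventArgs e)
    {
        // update the displayed ad
    }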

Now what happens if we close this AdWindow? It all depends… If we do nothing, the DispatcherTimer will keep on firing Tick events, and since we’re still subscribed to it, the ChangeAds event handler will be called. And since that event handler must remain in memory to be called, our AdWindow will stay in memory too, even if we expected it to be released.

How to detect it?

There are a number of ways to detect this type of leak. The easiest is to capture a snapshot after the object was expected to be released. In the snapshot overview page, we will immediately see if the object remained in memory because of an event handler leak.

Event Handler leak

See our AdWindow there? dotMemory detected it is kept in memory because of an event handler leak. Now how to find out which event handler is keeping it in memory… If we double-click the entry, we will see the details of the instance. Under Key Retention Paths, we can easily identify which type’s event handler retains our object in memory: the DispatcherTimer.

Key Retention Paths

If we know our code base, we know where to look. But imagine this is the first time we see the code base; how do we know where we are subscribing to this event handler?

First of all, we want to make sure our snapshot was captured with creation stack traces enabled (this can be done in the profiler options). From the Key Retention Paths diagram, we can now double-click the EventHandler entry, which will open the specific event handler instance. From the Creation Stack Trace view, we can see we’re subscribing the event handler in the AdWindow constructor.

Creation Stack Trace

The Shortest Paths to Roots Tree view will tell us which event we’re subscribing to.

Shortest Paths to Roots

How to fix it?

From the investigation above, we know which event and which event handler we’ve forgotten to unsubscribe from (DispatcherTimer’s Tick event), and where we’re subscribing to it in the first place (the AdWindow constructor).

Unsubscribing from the event in the constructor is pointless in this case, as it would render our functionality of rotating content every few seconds useless. A more logical place to unsubscribe is when closing the AdWindow:

Closing AdWindow
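A sketch, reusing the assumed names from above:

    protected override void OnClosed(EventArgs e)
    {
        adTimer.Tick -= ChangeAds;
        adTimer.Stop();
        base.OnClosed(e);
    }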

Note: The DispatcherTimer example here is a special case, as the above will still not ensure our AdWindow is released from memory. If we profile the application, we’d be able to see the AdWindow instance is still there. The Key Retention Paths diagram will help discover we have to set the private variable adTimer to null as well, to remove another reference from the .NET runtime’s DispatcherTimers collection. Or how one memory leak can hide another.

x:Name leak

The beauty of building UIs in software is that we can do some nice things with them, like removing controls from the UI when a given action is performed. Depending on how we build our UI, we may be introducing a memory leak by doing this…

WPF creates a strong global reference to the UI element that is declared in XAML if it uses the x:Name directive.

x:Name

Dynamically removing an element from code will not remove the control from memory… Not even if we remove it from the parent control’s Children collection.

Dynamically remove control
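The removal code presumably looked similar to this (the names are placeholders):

    // Removing the control from the visual tree is not enough:
    // the x:Name registration still keeps it alive
    containerGrid.Children.Remove(namedControl);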

How to detect it?

After clicking the button that removes our control, we can capture a snapshot in dotMemory. The snapshot overview has an automatic inspection for x:Name leaks and will tell us our control remained in memory.

x:Name leak

We can drill down and look at the Key Retention Paths to see WPF is retaining our object in memory.

How to fix it?

To ensure the control gets removed from memory, we will have to call the UnregisterName method of the parent control. The code that removes our control from the window could be updated to look like this:

UnregisterName
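Using the same placeholder names as above:

    // Unregister the x:Name first, then remove the control
    this.UnregisterName("namedControl");
    containerGrid.Children.Remove(namedControl);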

There are many more common memory leaks and things we can do to improve our applications. Check the Unusual Ways of Boosting up App Performance series we did earlier.

New automatic inspections for WPF collection binding and WPF x:Name leaks were released with the latest v4.0.10 update on September 2, 2014. Please note: to use dotMemory 4.0.10, your subscription should be valid through September 2, 2014; otherwise, a subscription renewal is required.

If you are not using dotMemory yet, it’s a good time to start. Together with multiple enhancements, the minor update v4.0.10 introduces a new evaluation model: from now on you can use the trial version for 10 actual days of use, even if they are non-consecutive and even if your previous evaluation period has expired. Get the latest dotMemory and enjoy!


ReSharper 8.2.2 with JetBrains Account Support is Here

A new ReSharper maintenance release is now available for download. Please install this upgrade if you have experienced the following issues with ReSharper 8.2:

A numeric comparison error when getting or updating NuGet packages (when MSBuild is used to obtain project references).
An SSL certificate error when connecting to your JetBrains Account.

Speaking of JetBrains Account, this is a new way of managing your licenses for JetBrains tools. As a single interaction point with JetBrains products and services, it provides a simple and convenient way to access and manage your purchases, view your order history, distribute licenses to users, and more. You can use your JetBrains Account to get single access to a range of JetBrains services.

You can still use your license key to activate and work with our tools if that way is more comfortable for you.

Additional Tools

Please note that along with ReSharper 8.2.2 you can download and install compatible builds of dotTrace, dotCover and dotMemory with similar fixes for the SSL certificate error.

By the way, great news for dotMemory users: we’ve added two brand new automatic inspections for common WPF memory leak types. If you are not using dotMemory yet, today’s a good day to start. Together with multiple enhancements, the minor update v4.0.10 introduces a new evaluation model: from now on you can use the trial version for 10 actual days of use, even if they are non-consecutive. Enjoy!


ReSharper 9: first look at C# 6.0 support

Today we want to share some details on our progress in supporting the new language features coming in C# 6.0. While most of the changes in C# 6.0 are relatively small, they will definitely affect the existing ReSharper actions, quick-fixes, refactorings and code inspections. Let’s take a look at some of the new C# features and how ReSharper will help you work with them.

Conditional access

The conditional access expression, also known as the ‘null-propagating’ operator, is a small piece of syntactic sugar designed to help deal with null values. Despite the popular myth, it can’t magically save you from NullReferenceException once and for all. However, the conditional access operator can dramatically reduce the amount of code needed in some scenarios to check for null values. Consider this method from the ReSharper source code:

cond1
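The method itself is only referenced here, so consider a hypothetical example in the same spirit: explicit null checks at every step of a chain.

    public bool IsValidDeclaration(IDeclaration declaration)
    {
        if (declaration == null)
            return false;
        var nameIdentifier = declaration.NameIdentifier;
        if (nameIdentifier == null)
            return false;
        var name = nameIdentifier.Name;
        if (name == null)
            return false;
        return name.Length > 0;
    }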

Code like this is used extremely often in ReSharper development, since IDE tooling deals with incomplete code almost all of the time. We have to write a lot of code just to handle broken syntax and semantic trees. But most of the time you just need to return false/null to exit the method early, and here is where the ?. operator is simply invaluable:

cond2
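With conditional access, the hypothetical method above collapses to:

    public bool IsValidDeclaration(IDeclaration declaration)
    {
        var name = declaration?.NameIdentifier?.Name;
        return name != null && name.Length > 0;
    }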

As you would expect, ReSharper has deep knowledge of how the new language construct behaves, how it evaluates, what values it may produce, and whether or not parts of the expression are reachable during execution:

cond5

We also notify you if the conditional access is redundant and offer a quick-fix to replace it with normal non-conditional member access:

cond33

But the most interesting part of the ReSharper support for C# 6.0 is transformations. Since tons of C# code had been written before the ?. operator was even designed, it is natural for ReSharper to be able to recognize null-propagating code patterns and suggest using the new conditional access expression where it matters:

cond6

We reviewed a lot of various code snippets to recognize and formalize the patterns, which were then used to implement context actions and suggestions, as well as reverse transformations:

cond7

We are even considering a small refactoring action to easily upgrade usages of popular extension methods like .IfNotNull(x => …) by implementing the conditional expressions for earlier C# versions:

cond8

The conditional access operator is a great addition to the C# language: it doesn’t affect the language too much and you are not forced to use it, but when you really need null-propagating code, it provides a nice way of keeping your code clean and concise.

Expression-bodied members

We are super excited by this C# 6.0 language feature. It is designed to reduce the amount of boilerplate code required to declare trivial type members, especially the get-only properties. C# developers might be a bit tired of writing explicit accessors, multiple curly braces and return statements just to yield some values from properties. C# 6.0 solves this problem in a great way with the familiar lambda-expressions syntax:

exprs1
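For illustration (the members are invented):

    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }

        // Expression-bodied property and method
        public string FullName => FirstName + " " + LastName;
        public override string ToString() => FullName;
    }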

As you can see, ReSharper is here to help you migrate your code to idiomatic C# 6.0, and it can apply the transformations file-, project- and solution-wide. Expression bodies are also available for method, indexer and user-defined operator declarations.

Improvements on auto-properties

Besides introducing new language constructs, C# 6.0 aims at making some existing features more useful and covering more user scenarios. For example, auto-properties in C# 6.0 can have an initializer expression, just like fields and field-like events:

prop1
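A quick sketch:

    public class Person
    {
        // The initializer runs once; no constructor code
        // and no set accessor are needed
        public List<string> Nicknames { get; } = new List<string>();
    }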

ReSharper already knows about this and suggests the transformation even if the backing field is declared readonly and has an initializer expression. Also, you may notice that ReSharper creates the auto-property without a set accessor, which is also allowed in C# 6.0 to give you truly immutable auto-properties, just like readonly fields.

prop2

Primary constructors

This feature has been trending on Roslyn forums for months. Primary constructors are designed to make your initialization code more declarative and trivial, and to move away from imperative, error-prone initialization code in constructors. This feature also improves the usability of field/property initializers: they are not allowed to make use of the this reference, but constructor parameters are now in scope. So a typical data class from the example above transforms into the following form:

ctor1
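Roughly, in the preview syntax of the time (member names assumed):

    // Primary constructor: parameters x and y are in scope
    // for the property initializers
    public class Point(int x, int y)
    {
        public int X { get; } = x;
        public int Y { get; } = y;
    }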

Unfortunately, the C# design team decided not to provide any way of automatically declaring fields or properties from primary constructor parameter declarations, so you still need to repeat types and names when declaring property and field members. But there is still room for future design improvements, and we hope that the current design is just an incremental step before more significant changes are introduced.

ctor2

Currently, ReSharper recognizes the new primary constructor declaration and its body block syntactically and semantically. The familiar IDE experience, from the extend/shrink selection feature to code completion, is available for primary constructors. But of course we want to provide you with the new experience of transforming classes to use primary constructors. This is why we’ve built a small analysis engine to inspect constructor code: ReSharper collects the assignments that always happen during normal constructor execution and tracks type member reads and writes, as well as other initialization information. New ReSharper features like ‘Move assignment to initializer’ already have this new engine under the hood (ReSharper 8 has only the reverse ‘Move initialization to constructor’ action), and we are looking forward to using it in primary constructor transformations:

ctor3

We still have a lot of decisions to make regarding intersections with the existing ReSharper features, and we are still reviewing the feature set that we want to deliver with ReSharper 9. By now, it is unclear how often developers will use this feature, how they will format primary constructor parameter lists, and so on. Therefore, we would like to hear your opinion on how you expect ReSharper to behave with primary constructor declarations, and which transformations and code inspections might be useful to you.

Other changes

We have already added support for some other C# 6.0 language features, like ‘using static’ (importing static members of a type into scope) and exception filters (a condition clause for catch blocks):

exc1
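A quick illustration of both, using the keywords from the final C# 6.0 syntax (early previews spelled some of these keywords differently):

    using static System.Console;

    public class Demo
    {
        public void Run()
        {
            WriteLine("Hello");  // static import: no Console. prefix

            try
            {
                ParseConfig();
            }
            catch (Exception e) when (e.Message.Contains("config"))
            {
                // Exception filter: the handler runs only
                // when the condition holds
                WriteLine("Bad configuration: " + e.Message);
            }
        }

        private void ParseConfig() { /* ... */ }
    }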

As usual, ReSharper maintains the ‘language level’ concept, enabling different sets of features and inspections for projects targeting any C# compiler version starting from 2.0. We handle everything from the changes in overload resolution between C# versions to small language improvements, like the availability of await expressions in the catch/finally clauses of a try block in C# 6.0:

exc2
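For example (method names hypothetical):

    public async Task ProcessAsync()
    {
        try
        {
            await DoWorkAsync();
        }
        catch (Exception e)
        {
            // New in C# 6.0: await inside a catch clause
            await LogErrorAsync(e);
        }
    }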

New features like ‘declaration expressions’ (the ability to declare variables anywhere inside expressions), dictionary initializers and the nameof() operator are on their way.

Looking forward

We should mention that the openness of the Roslyn development process helps us a lot. We are now able to play with new language and compiler features as early as they are implemented, debug the compilers to understand their behavior, and even participate in language design discussions. By the way, ReSharper now successfully compiles with the Roslyn compiler (even though some VB bugs critical for us are still not fixed), and we are looking forward to the Visual Studio 14 and Roslyn releases. We expect fast adoption of C# 6.0, since the new language features do not depend on BCL support and can be used even with .NET Framework 2.0 projects.

In this post we have highlighted only the C# 6.0 bits of upcoming ReSharper 9, just to give you an overview of what we are working on. As usual, we are going to open the EAP as soon as our builds become stable enough. Stay tuned and tell us what you think!


Unusual Ways of Boosting Up App Performance. Lambdas and LINQs

This is the third post in the series. The previous ones can be found here:

  • Unusual Ways of Boosting Up App Performance. Boxing and Collections
  • Unusual Ways of Boosting Up App Performance. Strings

Today, we’re going to uncover the common pitfalls of using lambda expressions and LINQ queries, and explain how you can avoid them in your daily work.

Lambda Expressions

Lambda expressions are a very powerful .NET feature that can significantly simplify your code in particular cases. Unfortunately, convenience has its price. Wrong usage of lambdas can significantly impact app performance. Let’s look at what exactly can go wrong.

The trick is in how lambdas work. To implement a lambda (which is a sort of local function), the compiler has to create a delegate. Obviously, each time a lambda is called, a delegate is created as well. This means that if the lambda stays on a hot path (is called frequently), it will generate huge memory traffic.

Is there anything we can do? Fortunately, .NET developers have already thought about this and implemented a caching mechanism for delegates. For better understanding, consider the example below:

Caching lambdas 1
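A stand-in for the original snippet (names invented, no closure involved):

    public class LambdaTest
    {
        public List<string> GetNonEmpty(List<string> items)
        {
            // The lambda uses only its own parameter
            return items.FindAll(s => s.Length > 0);
        }
    }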

Now look at this code decompiled in dotPeek:

Caching lambdas example. Decompiled code

As you can see, the delegate is made static and created only once: LambdaTest.CS<>9__CachedAnonymousMethodDelegate1.

So, what pitfalls should we watch out for? At first glance, this behavior won’t generate any traffic. That’s true, but only as long as your lambda does not contain a closure. If you pass any context (this, an instance member, or a local variable) to a lambda, caching won’t work. It makes sense: the context may change anytime, and passing context is exactly what closures are made for.

Let’s look at a more elaborate example. For example, your app uses some Substring method to get substrings from strings:

Lambdas example 1
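Assume a deliberately simplified helper along these lines:

    public string Substring(string x)
    {
        return x.Substring(1);
    }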

Let’s suppose this code is called frequently and strings on input are often the same. To optimize the algorithm, you can create a cache that stores results:

Lambdas example 2
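Sketched as a plain dictionary:

    private readonly Dictionary<string, string> _substringCache =
        new Dictionary<string, string>();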

At the next step, you can optimize your algorithm so that it checks whether the substring is already in the cache:

Lambdas example 3
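A hypothetical GetOrCreate helper that consults the cache before evaluating:

    public string GetOrCreate(string key, Func<string> evaluator)
    {
        string result;
        if (!_substringCache.TryGetValue(key, out result))
        {
            result = evaluator();
            _substringCache[key] = result;
        }
        return result;
    }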

The Substring method now looks as follows:

Lambdas example 4
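Which might look like this, with the variable x captured by the lambda:

    public string Substring(string x)
    {
        // x is captured, so the delegate cannot be cached
        return GetOrCreate(x, () => x.Substring(1));
    }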

As you pass the local variable x to the lambda, the compiler is unable to cache a created delegate. Let’s look at the decompiled code:

Lambdas example. Decompiled code with no caching

There it is. A new instance of c__DisplayClass1 is created each time the Substring method is called. The parameter x we pass to the lambda is implemented as a public field of c__DisplayClass1.

How to Find

As with any other example in this series, first of all, make sure that a certain lambda causes you performance issues, i.e. generates huge traffic. This can be easily checked in dotMemory.

  1. Open a memory snapshot and select the Memory Traffic view.
  2. Find delegates that generate significant traffic. Objects of …+c__DisplayClassN are also a hint.
  3. Identify the methods responsible for this traffic.

For instance, if the Substring method from the example above is run 10,000 times, the Memory Traffic view will look as follows:

Lambdas shown in dotMemory

As you can see, the app has allocated and collected 10,000 delegates.

When working with lambdas, the Heap Allocation Viewer also helps a lot as it can proactively detect delegate allocation. In our case, the plugin’s warning will look like this:

Warning about lambdas in the HAV plug-in

But once again, the data gathered by dotMemory is more reliable, because it shows you whether this lambda is a real issue (i.e. whether or not it generates lots of traffic).

How to Fix

Considering how tricky lambda expressions may be, some companies even prohibit using lambdas in their development processes. We believe that lambdas are a very powerful instrument which definitely can and should be used as long as particular caution is exercised.

The main strategy when using lambdas is avoiding closures. In such a case, a created delegate will always be cached with no impact on traffic.

Thus, for our example, one solution is to not pass the parameter x to the lambda. The fix would look as follows:

Caching lambdas code fix
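A sketch of the fix: GetOrCreate is reworked to pass the key into the evaluator, so the lambda relies only on its own parameter:

    public string Substring(string x)
    {
        // No captured variables: the compiler caches
        // the delegate in a static field
        return GetOrCreate(x, s => s.Substring(1));
    }

    public string GetOrCreate(string key, Func<string, string> evaluator)
    {
        string result;
        if (!_substringCache.TryGetValue(key, out result))
        {
            result = evaluator(key);
            _substringCache[key] = result;
        }
        return result;
    }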

The updated lambda doesn’t capture any variables; therefore, its delegate should be cached. This can be confirmed by dotMemory:

Lambdas caching after the fix shown in dotMemory

As you can see, now only one instance of Func is created.

If you need to pass some additional context to GetOrCreate, a similar approach (avoiding variable closure) should be used. For example:

Code example of passing additional context to lambdas
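A sketch of the idea: the context reaches the lambda through a parameter instead of a closure (cache-key details elided):

    public string GetOrCreate<TContext>(
        string key, TContext context,
        Func<string, TContext, string> evaluator)
    {
        string result;
        if (!_substringCache.TryGetValue(key, out result))
        {
            result = evaluator(key, context);
            _substringCache[key] = result;
        }
        return result;
    }

    // startIndex flows in as an argument, not a captured variable
    public string Substring(string x, int startIndex)
    {
        return GetOrCreate(x, startIndex, (s, i) => s.Substring(i));
    }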

LINQ Queries

As we just saw in the previous section, lambda expressions always assume that a delegate is created. What about LINQ? The concepts of LINQ queries and lambda expressions are closely connected and have very similar implementation ‘under the hood.’ This means that all concerns we discussed for lambdas are also true for LINQs.

If your LINQ query contains a closure, the compiler won’t cache the corresponding delegate. For example:

LINQ caching example
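A stand-in built from the names the text mentions (assuming using System.Linq):

    public List<string> GetLongNames(List<string> inputList, int threshold)
    {
        // threshold is captured by the query, so the delegate
        // passed to Where is recreated on every call
        return inputList.Where(s => s.Length > threshold).ToList();
    }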

As the threshold parameter is captured by the query, its delegate will be created each time the method is called. As with lambdas, traffic from delegates can be checked in dotMemory:

LINQ caching shown in dotMemory

Unfortunately, there’s one more pitfall to avoid when using LINQs. Any LINQ query (as any other query) assumes iteration over some data collection, which, in turn, assumes creating an iterator. The subsequent chain of reasoning should already be familiar: if this LINQ query stays on a hot path, then constant allocation of iterators will generate significant traffic.

Consider this example:

LINQ iterator allocation example
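Reusing the sketch from above, with a hypothetical Foo method as the caller:

    public void Foo(List<string> names)
    {
        // Where allocates a new WhereListIterator on every call
        var longNames = GetLongNames(names, 10);
    }

    public List<string> GetLongNames(List<string> names, int threshold)
    {
        return names.Where(s => s.Length > threshold).ToList();
    }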

Each time GetLongNames is called, the LINQ query will create an iterator.

How to Find

With dotMemory, finding excessive iterator allocations is an easy task:

  1. Open a memory snapshot and select the Memory Traffic view.
  2. Find objects from the namespace System.Linq that contain the word “iterator”. In our example we use the Where LINQ method, so we look for System.Linq.Enumerable+WhereListIterator<string> objects.
  3. Determine the methods responsible for this traffic.

For instance, if we call the Foo method from our example 10,000 times, the Memory Traffic view will look as follows:

LINQ iterator allocation shown in dotMemory

The Heap Allocation Viewer plugin also warns us about allocations in LINQs, but only if they explicitly call LINQ methods. For example:

LINQ iterator allocation warning by the HAV plug-in

How to Fix

Unfortunately, the only answer here is to not use LINQ queries on hot paths. In most cases, a LINQ query can be replaced with foreach. In our example, a fix could look like this:

LINQ iterator allocation fix example
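The same hypothetical method, rewritten with foreach:

    public List<string> GetLongNames(List<string> names, int threshold)
    {
        // A plain loop: no LINQ iterator, no delegate allocation
        var result = new List<string>();
        foreach (var name in names)
        {
            if (name.Length > threshold)
                result.Add(name);
        }
        return result;
    }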

As no LINQs are used, no iterators will be created.

LINQ iterator allocation fix shown in dotMemory

We hope this series of posts has been helpful. Just in case, the previous two can be found here:

  • Unusual Ways of Boosting Up App Performance. Boxing and Collections
  • Unusual Ways of Boosting Up App Performance. Strings

Please follow @dotmemory on Twitter or the dotMemory Google+ page to stay tuned.


Unusual Ways of Boosting Up App Performance. Strings

This is the second post in the series. The other ones can be found here:

  • Unusual Ways of Boosting Up App Performance. Boxing and Collections
  • Unusual Ways of Boosting Up App Performance. Lambdas and LINQs

This post will focus on the best approaches to working with strings.

Changing String Contents

String is an immutable type, meaning that the contents of a string object cannot be changed. When you change string contents, a new string object is created. This fact is the main source of performance issues caused by strings. The more you change string contents, the more memory is allocated. This, in turn, triggers garbage collections that impact app performance. A relatively simple solution is to optimize your code so as to minimize the creation of new string objects.

How to Find

Check all string instances that are not created by your code, but by the methods of the String class. The most obvious example is the String.Concat method that creates a new string each time you combine strings with the + operator.

To do this in dotMemory:

  1. In the Memory Traffic view, locate and select the System.String class.

  2. Find all methods of the String class that create the selected strings.

Consider an example of a function that reverses strings:

Reversing strings example
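A naive implementation presumably concatenates in a loop, roughly like this:

    public string Reverse(string input)
    {
        string result = string.Empty;
        for (int i = input.Length - 1; i >= 0; i--)
        {
            // Each += compiles to String.Concat and
            // allocates a brand-new string
            result += input[i];
        }
        return result;
    }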

An app that uses this function to reverse a 1000-character line generates enormous memory traffic (more than 5 MB of allocated and collected memory). A memory snapshot taken with dotMemory reveals that most of the traffic (4 MB of allocations) comes from the String.Concat method, which, in turn, is called by the Reverse method.

Traffic from string objects shown in dotMemory

The Heap Allocations Viewer plug-in will also warn you about allocations by highlighting the corresponding line of code:

The HAV plug-in highlights string concatenation

How to Fix

In most cases, the fix is to use the StringBuilder class or handle a string as an array of chars using specific array methods. Considering the ‘reverse string’ example, the code could be as follows:

String concatenation code fix
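A char-array version of the fix, as a sketch:

    public string Reverse(string input)
    {
        // One allocation for the array, one for the result
        var chars = input.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }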

dotMemory shows that traffic dropped by over 99% after the fix:

Memory traffic in dotMemory after the fix

Improving Logging

When seeking ways to optimize your project, take a look at the logging subsystem. In complex applications, for the sake of stability and support convenience, almost all actions are logged. This results in significant memory traffic from the logging subsystem. That’s why it is important to minimize allocations when writing messages to log. There are multiple ways to improve logging.*

*Actually, the optimization approaches shown in this section are universal. The logging subsystem was taken as an example because it works with strings most intensively.

Empty Arrays Allocation

A typical LogMessage method looks as follows:

logging_1
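Presumably something like:

    public void LogMessage(string message, params object[] args)
    {
        Console.WriteLine(string.Format(message, args));
    }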

 

What are the pitfalls of such an implementation? The main concern here is how you call this method. For example, the call

Logging call 1
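say, a call without arguments:

    LogMessage("Simple message");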

will cause allocation of an empty array. In other words, this line will be equivalent to

Logging call 2
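the explicit form:

    LogMessage("Simple message", new object[0]);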

How to Find

These allocations would be difficult to detect in the memory snapshot manually, but you can use the Heap Allocations Viewer plug-in to find them very quickly:

Empty array creation shown in the HAV plug-in

How to Fix

The best solution is to create a number of method overloads with explicitly specified arguments. For instance:

Logging fix example
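A sketch of such overloads:

    public void LogMessage(string message)
    {
        Console.WriteLine(message);
    }

    public void LogMessage(string message, object arg0)
    {
        Console.WriteLine(string.Format(message, arg0));
    }

    public void LogMessage(string message, object arg0, object arg1)
    {
        Console.WriteLine(string.Format(message, arg0, arg1));
    }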

Hidden Boxing

The implementation above has a small drawback. What if you pass a value type to, say, the following method?

Hidden boxing example 1
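Presumably the single-argument overload:

    public void LogMessage(string message, object arg0) { /* ... */ }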

For example:

Hidden boxing example 2
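Something like:

    // The int is implicitly converted to object: boxing
    LogMessage("Result: {0}", 42);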

As the method accepts only the object argument, which is a reference type, boxing will take place.

How to Find

As with any other boxing, the main clue is a value type on the heap. So, all you need to do is look at the memory traffic and find a value type. In our case this will look as follows:

Hidden boxing shown in dotMemory

Of course, the Heap Allocations Viewer will also warn you:

Hidden boxing warning in the HAV plug-in

How to Fix

The easiest way is to use generics—a mechanism for deferring type specification until it is declared by client code. Thus, the revised version of the LogMessage method should look as follows:

Hidden boxing code fix
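A generic sketch; calling ToString() on the still-unboxed value avoids the conversion to object:

    public void LogMessage<T>(string message, T arg0)
    {
        // No boxing: ToString() is invoked on T directly
        Console.WriteLine(string.Format(message, arg0.ToString()));
    }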

Early String Allocation

The advice to defer variable allocation as much as possible is quite obvious. Still, sometimes stating the obvious is useful.

Consider the code below. Here the logmsg string is created regardless of whether logging is turned on or off:

Deferring string allocation
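A sketch of the problematic pattern (names invented):

    public void ProcessItem(int itemId)
    {
        // logmsg is built whether or not logging is enabled
        string logmsg = string.Format("Processing item {0}", itemId);
        if (_loggingEnabled)
            LogMessage(logmsg);
    }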

A better solution would be:

Deferring string allocation fix
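The deferred version of the same sketch:

    public void ProcessItem(int itemId)
    {
        if (_loggingEnabled)
        {
            // The string is allocated only when it will be used
            string logmsg = string.Format("Processing item {0}", itemId);
            LogMessage(logmsg);
        }
    }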

Excessive Logging

If you use logging for debugging purposes, make sure log calls never reach the release build. You can do this by using the [Conditional] attribute.

In the example below, the LogMessage method will be called only if the DEBUG symbol is explicitly defined.

Conditional attribute example
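A sketch, with the attribute from System.Diagnostics:

    [Conditional("DEBUG")]
    public void LogMessage(string message)
    {
        // Calls to this method are stripped from builds
        // where DEBUG is not defined
        Console.WriteLine(message);
    }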

That does it for this post. In the next one, we’ll talk about the nuances of using lambda expressions and LINQ queries. To stay tuned, please follow @dotmemory on Twitter or the dotMemory Google+ page!


dotPeek 1.2 Is Released

We have finally finished fine-tuning the new version of dotPeek, our free .NET decompiler and assembly browser. Please download dotPeek 1.2, which gains a new superpower and learns how to walk in symbol server shoes.

dotPeek 1.2 is released

Highlights of this update include:

dotPeek can be used as a symbol server and provide the Visual Studio debugger with the information required to debug assembly code. Not only can dotPeek generate source files on the debugger’s request, but you can also generate PDB files manually with dotPeek and watch the progress in a dedicated tool window.
A Process Explorer window that provides you with a list of all currently running processes and allows exploring their modules and decompiling those that are .NET assemblies.
Quick search and node filtering in Assembly Explorer, with lowerCamelHumps support.

For the full list of fixes and enhancements addressed in version 1.2, take a look at the release notes.


Get dotPeek


Unusual Ways of Boosting Up App Performance. Boxing and Collections

This is the first post in the series. The other ones can be found here:

  • Unusual Ways of Boosting Up App Performance. Strings
  • Unusual Ways of Boosting Up App Performance. Lambdas and LINQs

Many developers today are familiar with the performance profiling workflow: You run an application under the profiler, measure the execution times of methods, identify methods with high ‘own time,’ and work on optimizing them. This scenario, however, does not cover one important performance aspect: the time distributed among numerous garbage collections in your app. Of course you can evaluate the total time required for GC, but where does it come from, and how to reduce it? ‘Plain vanilla’ performance profiling won’t give you any clue about that.

Garbage collections always result from high memory traffic: the more memory is allocated, the more must be collected. As we all know, memory traffic optimization should be done with the help of a memory profiler. It allows you to determine how objects were allocated and collected and what methods stand behind these allocations. Looks simple in theory, right? However, in practice many developers end up with the words, “Okay, so some traffic in my app is generated by some system classes whose names I see for the first time in my life. I guess this could be because of some poor code design. What do I do now?”

This is what this post is about. Actually, this will be a series of posts where we share our experience of memory traffic profiling: what we consider ‘poor code design,’ how to find its traces in memory, and, of course, what we consider best practices.* Here’s a simple example: if you see objects of a value type in the heap, then surely boxing is to blame. Boxing always implies additional memory allocation, so removing it is very likely to make your app better.

The first post in the series will focus on boxing. Where to look and how to act if a ‘bad memory pattern’ is detected?

*Best practices described in this series allowed us to increase the performance of certain algorithms in our .NET products by 20%-50%.

What Tools You Will Need

Before we go any further, let’s look at the tools we’ll need. The list of tools we use here at JetBrains is pretty short:

  • dotMemory memory profiler.
    The profiling algorithm is always the same regardless of the issue you’re trying to find:

    1. Start profiling your application with memory traffic collection enabled.

    2. Collect a memory snapshot after the method or functionality you’re interested in finishes working.

    3. Open the snapshot and select the Memory Traffic view.

  • ReSharper plugin called Heap Allocations Viewer. The plugin highlights all places in your code where memory is allocated. This is not a must, but it makes coding much more convenient and in some sense ‘forces’ you to avoid excessive allocations.

Boxing

Boxing is converting a value type to the object type. For example:

Boxing example
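A minimal illustration:

    int intValue = 42;
    object objValue = intValue;  // boxing: the value is copied to the heap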

Why is this a problem? Value types are stored on the stack, while reference types (object) are stored in the managed heap. Therefore, to assign an integer value to an object, the CLR has to take the value from the stack and copy it to the heap. Of course, this movement impacts app performance.

How to Find

With dotMemory, finding boxing is an elementary task:

  1. Open a memory snapshot and select the Memory Traffic view.
  2. Find objects of a value type. All these objects are the result of boxing.
  3. Identify methods that allocate these objects and generate a major portion of the traffic.

Boxing shown in dotMemory

The Heap Allocations Viewer plugin also highlights allocations made because of boxing.

Boxing shown by the HAV plug-in

The main concern here is that the plugin shows you only the fact of a boxing allocation. But from the performance perspective, you’re more interested in how frequently this boxing takes place. E.g., if the code with a boxing allocation is called once, then optimizing it won’t help much. Taking this into account, dotMemory is much more reliable in detecting whether boxing causes real problems.

How to Fix

First of all: before fixing the boxing issue, make sure it really is an issue, i.e. it does generate significant traffic. If it does, your task is clear-cut: rewrite your code to eliminate boxing. When you introduce some struct type, make sure that methods that work with this struct don’t convert it to a reference type anywhere in the code. For example, one common mistake is passing variables of value types to methods that work with strings (e.g., String.Format):

Fixing boxing
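For instance:

    int total = 42;
    // total is passed as object and gets boxed
    var message = String.Format("Total: {0}", total);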

A simple fix is to call the ToString() method of the appropriate value type:

Fixing boxing 2
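Along the lines of:

    int total = 42;
    // ToString() is called on the int itself: no boxing
    var message = String.Format("Total: {0}", total.ToString());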

Resizing Collections

Dynamically-sized collections such as Dictionary, List, HashSet, and StringBuilder have the following specifics: when the collection size exceeds its current bounds, .NET resizes the collection by allocating new internal storage and copying the entire collection over in memory. Obviously, if this happens frequently, your app’s performance will suffer.

How to Find

The insides of dynamic collections can be seen in the managed heap as arrays of a value type (e.g. Int32 in case of Dictionary) or of the String type (in case of List). The best way to find resized collections is to use dotMemory. For example, to find whether Dictionary or HashSet objects in your app are resized too often:

  1. Open a memory snapshot on the Memory Traffic view.
  2. Find arrays of the System.Int32 type.
  3. Find the Dictionary<>.Resize and HashSet<>.SetCapacity methods and check the traffic they generate.

Finding resized Dictionary in dotMemory

The workflow for the List collections is similar. The only difference is that you should check the System.String arrays and the List<>.SetCapacity method that creates them.

Finding resized List in dotMemory

In case of StringBuilder, look for System.Char arrays created by the StringBuilder.ExpandByABlock method.

Finding resized StringBuilder in dotMemory

How to Fix

If the traffic caused by the ‘resize’ methods is significant, the only solution is to reduce the number of cases when a resize is needed. Try to predict the required size and initialize the collection with this size or larger.

Predicting collection size
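A sketch of pre-sizing collections:

    // If we expect around 100,000 items,
    // allocate the capacity up front
    var ids = new List<int>(100000);
    var index = new Dictionary<string, int>(100000);
    var builder = new StringBuilder(16384);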

In addition, keep in mind that any allocation greater than or equal to 85,000 bytes goes on the Large Object Heap (LOH). Allocating memory in LOH has some performance penalties: as LOH is not compacted, some additional interaction between CLR and the free list is required at the time of allocation. Nevertheless, in some cases allocating objects in LOH makes sense, for example, in the case of large collections that must endure the entire lifetime of an application (e.g. cache).

Enumerating Collections

When working with dynamic collections, pay attention to the way you enumerate them. The typical major headache here is enumerating a collection using foreach when all we know about the variable is that it implements the IEnumerable interface. Consider the following example:

Enumerating collections example
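A stand-in for the example:

    public void Foo(IEnumerable<string> items)
    {
        foreach (var item in items)
        {
            // ...
        }
    }

    public void Caller()
    {
        var list = new List<string> { "a", "b", "c" };
        // Enumeration goes through IEnumerable<T>, so the
        // List<string>.Enumerator struct is boxed
        Foo(list);
    }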

The list in the Foo method is cast to the IEnumerable interface, which implies further boxing of the enumerator.

How to Find

As with any other boxing, the described behavior can be easily seen in dotMemory.

  1. Open a memory snapshot and select the Memory Traffic view.
  2. Find the System.Collections.Generic.List+Enumerator value type and check generated traffic.
  3. Find methods that originate those objects.

Finding enumerators using dotMemory

As you can see, a new enumerator was created each time we called the Foo method.

The same behavior applies to arrays as well. The only difference is that you should check traffic for the SZArrayHelper+SZGenericArrayEnumerator<> class.

Finding array enumerators using dotMemory

The Heap Allocation Viewer plug-in will also warn you about hidden allocations:

HAV plug-in warning about enumerator allocation

How to Fix

Avoid casting a collection to an interface. In our example above, the best solution would be to create a Foo method overload that accepts the List<string> collection.

Fixing excessive enumerator allocations
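A sketch of the overload:

    // Accepting List<string> directly lets foreach use the
    // struct enumerator without boxing
    public void Foo(List<string> items)
    {
        foreach (var item in items)
        {
            // ...
        }
    }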

If we profile the code after the fix, we’ll see that the Foo method doesn’t create enumerators anymore.

Memory traffic after the fix

In the next installment of this series, we’re going to take a look at the best approaches for working with strings. Stay tuned!


Get dotMemory


ReSharper for C++ EAP Goes On

As you may have already heard, the Early Access Program for ReSharper with C++ support is in progress.

We’ve prepared a new build and wanted to share a quick update on what new features and options it brings to the table:

  • Better performance: indexes are now saved locally, meaning that subsequent launches are faster than the initial launch.
  • Improved code completion including smart completion.
  • More settings to customize code formatting style.
New control flow analyses that detect unreachable code, uninitialized local variables, assigned values that are never used, and redundant ‘else’ keywords.
  • New quick-fixes including Create from usage for global variables, class and enum members.

To learn more about which ReSharper C++ features are already implemented and how they can help you in your everyday work, check out this video by Dmitri Nesteruk:

Please note that there are still limitations in terms of supported project size (up to 40 MB), MS C++ extensions and MS preprocessor extensions. The ReSharper C++ EAP page contains a full list of known issues and unsupported items, which you’re highly encouraged to examine before you decide to download and install an EAP build.


Webinar Recording: Merging Refactored Code – ReSharper Meets SemanticMerge

The recording of our June 17th webinar, Merging Refactored Code – ReSharper Meets SemanticMerge, is now available on the JetBrains YouTube channel.

In this webinar, Matt Ellis (JetBrains) hosts Pablo Santos (SemanticMerge), who runs through a number of refactoring examples, from the seemingly trivial (yet essential) to complex structure modification scenarios, and demonstrates how to refactor with ReSharper and later get the changes merged with SemanticMerge.

Pablo talks about the challenges of merging complex refactorings and demonstrates how SemanticMerge simplifies them: it parses the code into a syntax tree, reasons about it as code rather than text (or text with heuristics), and merges accordingly.

The slides used in the webinar are available on Slideshare.

About the Presenter:

Pablo Santos is the founder of Codice Software, the company behind Plastic SCM and SemanticMerge. Codice started in 2005, and since then Pablo has played different roles ranging from core engineering to marketing, business development, advertising and sales operations. Nowadays he plays a dual role as lead of software engineering and product manager for both Semantic and Plastic.
