Introducing the ReSharper 9 Early Access Program

As many of you may have noted, there’s quite a lot happening in the land of .NET right now. The introduction of Roslyn and C# 6, the new ASP.NET, the forthcoming release of Visual Studio 14 and many other things are afoot, and, once again, ReSharper is unrelenting in its support of new and exciting language features and frameworks.

So today we are opening the Early Access Program (EAP) for ReSharper 9. What’s in store? Let’s take a look.

TL;DR: VS 14 and C# 6 support, regular expressions, lots of JS/TS improvements, more bulk fixes, a revised file layout editor, new navigation actions and a new shared platform for all our .NET-related tools.

C# 6 Support

ReSharper 9 supports the ?. operator in C# 6

We already did an introductory blog post on C# 6 support so, just to recap, here are some of the things that ReSharper 9 will be aware of:

  • Conditional access support — the ?. operator lets us perform chained null checks (a.k.a. the Maybe monad), so ReSharper is now not only aware of the construct, but is also capable of updating your null-checking code to the new paradigm.
  • Expression-bodied members — since the notation for simple expressions has been simplified in C# 6, ReSharper now offers the option to use expression bodies instead of curly-brace-delimited constructs. This is also a bulk action, so you can apply it across the whole project.
  • Auto-properties benefit from additional support of initializer expressions, as well as an ability to get rid of private setters and get truly immutable auto-properties.
  • Support for static usings (finally, you can type Sin instead of Math.Sin), exception filters and other new C# 6 constructs.
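Taken together, these constructs look roughly like the following sketch (a hedged illustration, not ReSharper output; the Customer and Order types are hypothetical):

```csharp
using System;
using static System.Math; // static using: Sin instead of Math.Sin

class Customer
{
    public string Name { get; } = "Unknown";   // get-only auto-property with an initializer
    public override string ToString() => Name; // expression-bodied member
}

class Order
{
    public Customer Customer { get; set; }

    // Conditional access: yields null when Customer is null,
    // replacing a chain of explicit null checks.
    public string CustomerName() => Customer?.Name;

    public double Wave(double x) => Sin(x);    // resolved via the static using

    public void Process(Action work)
    {
        try { work(); }
        catch (Exception e) when (e.Message.Contains("transient")) // exception filter
        {
            // handled only when the filter expression is true
        }
    }
}
```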

Bulk Actions and Fix in Scope

Fix in Scope: more quick-fixes that can be applied in the entire project or solution

ReSharper 9 introduces further improvements to the Fix in Scope mechanic that allows bulk non-destructive changes on large areas of code. This includes the following:

  • Bulk Import ensures that when you import a chunk of code, ReSharper now goes through the code in your solution and generates the appropriate imports for all the necessary types and members – this includes extension methods as well.
  • Redundant Code Removal now appears as a separate item, so given a single redundant this. qualifier, you can fix these qualifiers locally or globally or you can remove all redundant code in a file, project or solution.
  • Global naming fixes help you rename code according to your conventions. Again, this is a bulk action, so you can fix a single location or a single file/project/solution within a single menu.
  • Move to File/Namespace now ensures that files with multiple class declarations get treated properly.
  • Remove Regions is another bulk action that will be particularly welcomed by those of us who do not like the idea of regions in our code.

Navigation Improvements

Navigation improvements in ReSharper 9: Go to Action

Some of the navigation improvements in ReSharper 9 include:

  • Go to Action (a.k.a. Universal Alt+Enter) changes the Alt+Enter shortcut to provide top-level action suggestions as well as an ability to perform a text search for a particular action.
  • Navigate to Exposing APIs is a navigation option for finding all the methods that are capable of returning a particular type.
  • Assembly Explorer enjoys tighter integration, with dedicated Navigate To options both in the editor as well as the Solution Explorer.
  • The Go to Type mechanism has been disambiguated for generic types, so while on a variable of type List<Foo>, ReSharper will ask you whether you want to navigate to List<T> or Foo.
  • Find Usages now has a configurable setting for using the Preview Tab. It also no longer blocks the UI on long-running searches.
  • The Find Results window will no longer spawn extra tabs for the same type. And finally, it’s now possible to ignore or hide particular search results.

Code Style Settings

File Layout Editor in ReSharper 9

Style settings for C# include the following new features:

  • var Usage in Declarations has been split into three separate settings (for built-in types, simple types and elsewhere) for finer-grained control over how var-related highlights and quick-fixes function.
  • Instance member qualification determines precisely which members are qualified by the this. qualifier.
  • Static member qualification settings determine whether static members should be qualified via the current type or the type they are declared in.
  • All three aforementioned settings are also available as Code Cleanup profile elements.
  • Finally, a brand new GUI-based File Layout Editor — no more editing a cryptic XML file by hand!

Regular Expression Support

ReSharper 9 supports regular expressions

ReSharper now understands regular expressions, so you may no longer need a separate application by your side to make heads or tails of them. Here’s what it supports:

  • Syntax Highlighting for regular expressions, including piecewise highlighting of errors.
  • Code Completion helps you pick the wildcards for an expression. There is a special code completion feature for regular expression groups, too!
  • An Escaping Quick-Fix helps you quickly adjust wildcard backslashes to C# style by turning the string into a verbatim string.
  • A separate Validation Utility lets you quickly check input against a regular expression and get an immediate view of the matches and groups. As you might expect, the utility is highly configurable.
  • Options for Conversion to Precompiled Expressions let you optimize performance by turning ordinary regular expression invocations into static Regex instances.
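As a sketch of what the precompiled-expression conversion produces, an inline Regex.IsMatch call can be hoisted into a static, compiled Regex instance (the pattern below is a deliberately naive illustration, not something ReSharper generates):

```csharp
using System.Text.RegularExpressions;

static class EmailValidator
{
    // Compiled once and reused, instead of re-parsing the pattern on every call.
    private static readonly Regex EmailPattern =
        new Regex(@"^[^@\s]+@[^@\s]+\.[^@\s]+$", RegexOptions.Compiled);

    public static bool IsValid(string input) => EmailPattern.IsMatch(input);
}
```

RegexOptions.Compiled trades a one-time compilation cost for faster matching, which pays off when the same expression is evaluated many times.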

TypeScript and JavaScript Improvements

ReSharper 9 improves TypeScript and JavaScript support

Here are some of the improvements introduced for the TypeScript language:

  • New Highlighting and Quick-Fixes now cover over 270 different types of errors.
  • New Refactorings include Introduce Field, Copy Type, Move Refactorings (move type to module/file, move into matching files, move to folder), Rename File as well as the Import of Types for Pasted Code.
  • Code Generation for TypeScript now supports implementing interfaces/generating missing members as well as generating constructors and overriding members, too.
  • Live Templates were improved to support two new scopes — Module Member and Type Member. ReSharper also comes with live templates for common TypeScript entities, such as classes, enums, interfaces and modules.
  • Parameter Info now supports not only function arguments, but generic arguments as well.
  • Specialized Signatures are now fully supported.

In addition, ReSharper also offers automated JavaScript to TypeScript conversion as well as initial support for ASP.NET vNext.

Unified Installer

We plan to offer a single unified installer for all our .NET products (ReSharper, dotTrace, dotMemory, dotCover, dotPeek). When installing, you will be able to pick which products you want, and the installer will do the rest. Also, behind the scenes, all our products now operate on a shared platform, which saves resources when several products are running at the same time.

Please note that the initial EAP build we’re posting will only offer installation of ReSharper itself; it will also remove all previous installations of ReSharper and other .NET tools, and will not support VS 2005 or 2008.

And a Few More…

Here are just a couple more features that were added or improved:

  • Visual Studio 14 support is here, as you would expect.
  • Switching features on and off has never been easier with a dedicated options page that lets you control which parts of your code ReSharper actually affects. Current options include support for Unit Testing, MSBuild and NAnt, CSS, JavaScript and TypeScript, as well as Internationalization.
  • Debugging of decompiled code now works in ReSharper. Of course, you can also debug decompiled code using dotPeek.
  • ReSharper’s Architecture Tools now include an ability to generate a Type Dependency Graph. This form of visualization is great for finding out what code depends on a particular type, and what the type itself depends on. This diagram integrates with other ReSharper features; for example, navigation result sets can now be shown as a Type Dependency Graph via the Alt+ shortcut.

This sums up what we have in store for the first EAP build but we’ll be adding more changes further on. We invite you to download ReSharper 9 EAP and give it a go!

Oh, and as always, let us know what you think and please report any issues you encounter — thanks!

Posted in ReSharper Tips&Tricks | 36 Comments

ReSharper 8.2.3 Is Available

A new ReSharper maintenance release is now available. We recommend downloading and installing ReSharper 8.2.3 if you were unable to use the previous bug-fix updates (8.2.1 and 8.2.2) because your subscription had expired (after March 21), even though it was still valid for upgrading to 8.2.

In addition to solving the upgrade issue, this update contains a fix for Go to Implementation not working properly on methods and interfaces.

Stay tuned, since more good news is just around the corner!

Get ReSharper

Posted in News and Events | 15 Comments

dotMemory 4.1 is Released: 6+ Reasons to Upgrade

Today we are thrilled to introduce dotMemory 4.1 with a new batch of .NET memory profiling goodies covering automatic inspections, profiling process control, GC roots data, and more. Please welcome dotMemory 4.1, bringing you a more rigorous, convenient and beautiful profiling experience.
dotMemory 4.1 splash screen

Why upgrade to dotMemory 4.1? For at least 6 reasons:

1: Disable/Enable collection allocations on the fly.
To profile certain functionality of your app without slowing down all of its other areas, try disabling allocation collection directly from the real-time profiling control view. For your convenience, the allocation collection period is marked with a red line on the timeline.
Collection Allocations

2: Get more data on GC roots.
Starting with this update, dotMemory shows you the name of the field through which a static object is referenced. We plan to publish a good read on GC roots basics later, so stay tuned.
Named static reference

3: Detect more common issues in a single click.
To help you focus on what really needs your attention, we’ve added a new automatic inspection that finds String duplicates and shows the related memory waste. Two more recently added automatic inspections, both related to WPF, were announced earlier in v4.0.10.
String duplicates inspection

4: Copy, save and share profiling results.
Simply press Ctrl+C to copy current analysis results to the clipboard. Selection and copying of multiple rows is supported for all “Type List” views, all nicely formatted and ready to be compared or shared.
Formatted copy of analysis results

5: Enjoy new restyled UI icons. Our notion of profiling is that it should be clear, productive and even beautiful. That’s why this update features new great-looking UI icons for your viewing pleasure.
Refreshed UI icons

6: Name your analysis. Easily order and manage multiple memory investigations by providing your analysis tabs with custom names. Never again get lost in loads of various unnamed analyses.
Custom analysis name

+: Even more reasons?
To get the full list of enhancements and fixes, please see the release notes.

Discover all new features introduced in dotMemory 4.1 in this short Overview Demo.

Download dotMemory 4.1 and try out all the new things shipped with this update. Learn more about dotMemory 4.1 on the What’s New page. A free trial version is available for 10 actual days of use (even if they are non-consecutive). Note for existing customers: to use this update, you will need an active subscription.

Feel free to share any feedback you may have. Ask questions on the discussion forum, report bugs and feature requests to our issue tracker, and/or leave comments on this blog post below. Follow @dotMemory on Twitter to stay up to date on the state of the art in .NET memory profiling. And to sharpen your skills in spotting .NET memory issues with dotMemory, watch this series of video tutorials by Maarten Balliauw, JetBrains Technical Evangelist.

Profile with pleasure!
dotMemory team

Posted in dotMemory Tips&Tricks, News and Events | 3 Comments

Introducing dotMemory Video Tutorials

Great news for .NET memory investigators: with help from our technical evangelist Maarten Balliauw, we have prepared a series of short videos that cover some of the core features of dotMemory.


These video tutorials should be a good starting point for users who are new to dotMemory and just starting to explore its functionality. The series focuses on the core features available in dotMemory, explained in short videos of 2-4 minutes each. You can learn the basic concepts of .NET memory management, the profiling workflow, analysis techniques and more.

32 minutes and 11 videos to watch:

Stay tuned! Subscribe to the JetBrains YouTube channel and follow @dotMemory on Twitter to be notified when new videos arrive.

Maarten Balliauw is a Technical Evangelist at JetBrains. His interests are all web: ASP.NET MVC, PHP and Windows Azure. He’s a Microsoft Most Valuable Professional (MVP) for Windows Azure and an ASPInsider. He has published many articles in both PHP and .NET literature, such as MSDN Magazine and PHP Architect. Maarten is a frequent speaker at various national and international events such as MIX (Las Vegas), TechDays, DPC and others.

Enjoy the show!
dotMemory Team

Posted in dotMemory Tips&Tricks, How-To's | 2 Comments

Get dotMemory 4.1 Beta

It’s been a while since dotMemory 4.0 was released, and we’ve spent all this time baking a new batch of .NET memory profiling goodies for you. Please welcome dotMemory 4.1 Beta, bringing you a more rigorous, convenient and beautiful profiling experience.

Here are the must-try features in dotMemory 4.1 Beta:

Disable/Enable collection allocations on the fly. To profile certain functionality of your app without slowing down all of its other areas, try disabling allocations collection directly from the real-time profiling control view. Feel free to comment on DMRY-2317 in dotMemory issue tracker.
Collection allocations
Detect more common issues in a single click. To help you focus on what really needs your attention, we’ve added a new automatic inspection: String duplicates detects the issue and shows how much memory is wasted (DMRY-2232). Two more recently added automatic inspections, both related to WPF, were announced earlier in v4.0.10.
String duplicates inspection
Copy, save and share profiling results. Simply press Ctrl+C to copy current analysis results to the clipboard. Selection and copying of multiple rows is supported for all “Type List” views, all nicely formatted and ready to be compared or shared (DMRY-454).
Formatted copy of analysis results
Get more data on GC roots. Starting from this update, dotMemory shows you the name of the field through which a static object is referenced. (DMRY-774). Later we plan to publish a good read on GC roots basics, so stay tuned.
Named static reference
Enjoy restyled UI icons. Our notion of profiling is that it should be clear, productive and even beautiful. That’s why this update features new great-looking UI icons for your viewing pleasure.
Refreshed UI icons
Name your analysis. Easily order and manage multiple memory investigations by providing your analysis tabs with custom names. Never again get lost in loads of various unnamed analyses.
Custom analysis name

dotMemory 4.1 should be out in just a couple of weeks. For now, go ahead, download dotMemory 4.1 Beta and try out all the new things shipped with this update. Feel free to share any feedback you may have:

We’re eager to hear from you, and make dotMemory 4.1 better with your help and input.

Profile with pleasure!
The dotMemory Team

Posted in dotMemory Tips&Tricks, News and Events | 3 Comments

Fighting Common WPF Memory Leaks with dotMemory

When developing Windows Presentation Foundation (WPF), Silverlight and Windows Store applications, a number of common memory leaks may surface. In this blog post, we’ll see how dotMemory can help detect these common WPF memory leaks and how we can fix them.

Here are the leaks we will discuss in this post:

  • Binding leak
  • Collection binding leak
  • Textbox undo leak
  • Event Handler leak
  • x:Name leak

Binding leak

WPF comes with a number of data binding patterns, which, if we break them, can cause memory leaks in our applications. Take the following class:

Person class

When we bind to an instance’s Name property, the binding target starts listening for property change notifications. If the property is not a DependencyProperty or an object that implements  INotifyPropertyChanged, WPF will resort to subscribing to the ValueChanged event of the System.ComponentModel.PropertyDescriptor class to get notifications when the source object’s property value changes.

Why is this a problem? Since the runtime creates a reference to this PropertyDescriptor, which in turn references our source object, and the runtime never knows when to deallocate that initial reference (unless explicitly told), both the PropertyDescriptor and our source object will remain in memory.
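The Person class from the screenshot above is not reproduced here, but a minimal class exhibiting the problem might look like this (a hypothetical reconstruction): a plain CLR property, no DependencyProperty, no INotifyPropertyChanged.

```csharp
// Binding a WPF target to Name on this class forces WPF into the
// PropertyDescriptor.ValueChanged fallback described above.
public class Person
{
    public string Name { get; set; }
}
```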

How to detect it?

dotMemory comes with a series of automatic inspections, detecting common memory leaks. When profiling an application, we can instantiate the control which binds to our class and then dispose that control. When opening a snapshot in dotMemory, the snapshot overview page will tell us about WPF binding leaks immediately.

WPF analyze binding leak

This should be all we need to know, but let’s see if we can find proof of the theory above about the PropertyDescriptor’s ValueChanged event handler keeping our objects in memory. After double-clicking the list entry, we can see the object set open. When we navigate to the Group by Similar Retention view, we get proof that the ValueChangedEventManager is retaining our object:

Binding leak retention schema

How to fix it?

The simplest fix for a WPF binding leak would be making our Name property a DependencyProperty, or implementing the INotifyPropertyChanged interface correctly on our Person class and its Name property, like so:

Person class with INotifyPropertyChanged

If the object is of a type we cannot edit, for example because it comes from a library we depend on, we can also explicitly clear the binding by calling:

Clear binding explicitly

Note that if a binding has the OneTime mode, this leak will not be present as the binding is done only once and the binding target will not listen for changes on the source object.
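A sketch of the first fix, assuming the Person class from the screenshots: implement INotifyPropertyChanged so that WPF subscribes to PropertyChanged instead of falling back to the PropertyDescriptor.

```csharp
using System.ComponentModel;

public class Person : INotifyPropertyChanged
{
    private string _name;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            // Classic raise pattern: copy the delegate, then invoke if non-null.
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("Name"));
        }
    }
}
```

For the library-type case, the explicit clear would look something like BindingOperations.ClearBinding(nameTextBlock, TextBlock.TextProperty), where nameTextBlock is a hypothetical binding target.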

Collection binding leak

A similar issue to the WPF binding leak is the Collection binding leak. If there is binding to a collection that does not implement the INotifyCollectionChanged interface, WPF creates a strong reference to this collection. As a result, it stays in memory for the entire application lifetime.

How to detect it?

We will have to run our application and bind a control, such as a ListBox, to our collection. Next, we can remove the binding by destroying the control and then take a snapshot using dotMemory. We would expect our collection to be removed from memory, yet if we look at the snapshot overview we can see a WPF collection binding leak has been detected.

Detect WPF collection binding leak

If we then open the object set and look at the Group by Dominators view, we can see our collection is held in memory by the WPF DataBindEngine, an object which will be around for the lifetime of our application. And as long as our object dominator stays in memory, our collection will, too…

Analyse collection binding leak

How to fix it?

An easy way of fixing this issue is by implementing the INotifyCollectionChanged interface on our custom collection type. If the collection does not need any specific implementations, we could also inherit from the ObservableCollection type as it handles the implementation for us.
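A minimal sketch of the second option: inheriting from ObservableCollection&lt;T&gt;, which provides the INotifyCollectionChanged implementation for us (the NameCollection type is hypothetical).

```csharp
using System.Collections.ObjectModel;

// ObservableCollection<T> already implements INotifyCollectionChanged,
// so WPF can subscribe to change notifications instead of strongly
// rooting the collection for the lifetime of the application.
public class NameCollection : ObservableCollection<string>
{
}
```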


Textbox undo leak

One of the great things about WPF is that it enables Undo on several controls, like a textbox for example. For every change we make to its contents, WPF will keep the actions in memory so we can easily undo them using Ctrl+Z. Now imagine our application has a textbox in which lots of changes are being done… By default, the WPF UndoManager will keep up to 100 of these actions and in earlier WPF versions this limit was not there.

While not really a memory leak as such, having a high undo limit on textboxes in our applications may cause excessive memory usage without us even knowing. And if we profile the application, we will see this as well.

How to detect it?

After running our application and making a large number of changes to a textbox’s contents, the dotMemory snapshot overview could show a large number of Char[] objects.

Largest Size diagram

If we drill deeper into this object set and look at the dominators (Group by Dominators), we can see that these object types are held in memory by several others. The first dominator here (TextTreeRootNode) is our textbox control itself. Of course it needs a few Char[] arrays to hold its contents. The second one however, UndoManager, is more interesting.


It seems the UndoManager is holding on to quite a few Char[] arrays as well. This is logical, as WPF’s undo behavior needs this information to be able to undo/redo changes made to the textbox.

How to fix it?

First of all, this is not really a leak. It’s a feature! It is important to know it’s there, though, for two reasons. First, when profiling WPF applications, we may see a number of Char[] arrays being created; don’t get distracted by the UndoManager, and focus on other dominators if the allocations are too excessive. Second, when building applications where a lot of text editing is done, high memory usage can be explained by this undo behavior.

To limit the number of entries the undo and redo stacks can hold, we can set the textbox’s UndoLimit property to a lower number. WPF used to default to -1 (unlimited), but in recent versions it defaults to 100.


We could also turn off undo entirely, by changing the IsUndoEnabled property.

Turn off undo
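In code-behind, the two options above might look like this (a sketch for a hypothetical WPF TextBox named myTextBox; this requires a WPF project to compile):

```csharp
// Cap the undo/redo stacks (recent WPF versions default to 100; it used to be -1, unlimited):
myTextBox.UndoLimit = 10;

// Or switch the undo feature off entirely:
myTextBox.IsUndoEnabled = false;
```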

Event Handler leak

A common leak, not only in WPF but also in other frameworks, is the event handler leak. Event handler leaks are caused by developer oversight. Here’s an example: imagine we open an AdWindow window in our application and let it update its contents every few seconds. We could instantiate a DispatcherTimer in our constructor and subscribe to the Tick event to handle these updates.

Subscribe to Timer

Now what happens if we close this AdWindow? It all depends… If we do nothing, the DispatcherTimer will keep firing Tick events, and since we’re still subscribed to it, the ChangeAds event handler will keep being called. And since that event handler has to remain in memory to be called, our AdWindow will stay in memory too, even if we expected it to be released.

How to detect it?

There are a number of ways to detect this type of leak. The easiest is to capture a snapshot after the object was expected to be released. In the snapshot overview page, we will immediately see if the object remained in memory because of an event handler leak.

Event Handler leak

See our AdWindow there? dotMemory detected it is kept in memory because of an event handler leak. Now how to find out which event handler is keeping it in memory… If we double-click the entry, we will see the details of the instance. Under Key Retention Paths, we can easily identify which type’s event handler retains our object in memory: the DispatcherTimer.

Key Retention Paths

If we know our code base, we know where to look. But imagine this is the first time we see the code base: how do we know where we are subscribing to this event handler?

First of all, we want to make sure our snapshot was captured with creation stack traces collected (this can be enabled in the profiler options). From the Key Retention Paths diagram, we can now double-click the EventHandler entry, which opens the specific event handler instance. From the Creation Stack Trace view, we can see that we’re subscribing the event handler in the AdWindow constructor.

Creation Stack Trace

The Shortest Paths to Roots Tree view will tell us which event we’re subscribing to.

Shortest Paths to Roots

How to fix it?

From the investigation above, we know which event and which event handler we’ve forgotten to unsubscribe from (DispatcherTimer’s Tick event), and where we’re subscribing to it in the first place (the AdWindow constructor).

Unsubscribing from the event in the constructor is pointless in this case, as it would render our functionality of rotating content every few seconds useless. A more logical place to unsubscribe is when closing the AdWindow:

Closing AdWindow

Note: The DispatcherTimer example here is a special case, as the above will still not ensure our AdWindow is released from memory. If we profile the application, we’d be able to see the AdWindow instance is still there. The Key Retention Paths diagram will help discover we have to set the private variable adTimer to null as well, to remove another reference from the .NET runtime’s DispatcherTimers collection. Or how one memory leak can hide another.
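Putting the fix and the note together, a sketch of the AdWindow might look like this (names follow the post’s example; the exact original code is not shown here, and a WPF project is required to compile it):

```csharp
using System;
using System.Windows;
using System.Windows.Threading;

public partial class AdWindow : Window
{
    private DispatcherTimer adTimer;

    public AdWindow()
    {
        adTimer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(3) };
        adTimer.Tick += ChangeAds;
        adTimer.Start();
    }

    private void ChangeAds(object sender, EventArgs e)
    {
        // rotate the ad contents...
    }

    protected override void OnClosed(EventArgs e)
    {
        adTimer.Stop();
        adTimer.Tick -= ChangeAds; // stop the timer from rooting this window
        adTimer = null;            // and drop our own reference, per the note above
        base.OnClosed(e);
    }
}
```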

x:Name leak

The beauty of building UIs in software is that we can do some nice things with them, like removing controls from the UI when a given action is performed. Depending on how we build our UI, we may be introducing a memory leak by doing this…

WPF creates a strong global reference to the UI element that is declared in XAML if it uses the x:Name directive.


Dynamically removing an element from code will not remove the control from memory… Not even if we remove it from the parent control’s Children collection.

Dynamically remove control

How to detect it?

After clicking the button that removes our control, we can capture a snapshot in dotMemory. The snapshot overview has an automatic inspection for x:Name leaks and will tell us our control remained in memory.

x:Name leak

We can drill down and look at the Key Retention Paths to see WPF is retaining our object in memory.

How to fix it?

To ensure the control gets removed from memory, we will have to call the UnregisterName method of the parent control. The code that removes our control from the window could be updated to look like this:
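A sketch of that updated removal code (adContainer and adBlock are hypothetical: a parent panel and a child control declared in XAML with x:Name="adBlock"):

```csharp
private void RemoveAdBlock()
{
    adContainer.Children.Remove(adBlock);
    // The window's name scope holds a strong reference to every element
    // registered via x:Name; unregister it so the control can be collected.
    UnregisterName("adBlock");
    adBlock = null;
}
```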


There are many more common memory leaks and things we can do to improve our applications. Check the Unusual Ways of Boosting up App Performance series we did earlier.

New automatic inspections for WPF collection binding and WPF x:Name leaks were released with the latest v4.0.10 update on September 2, 2014. Please note: to use dotMemory 4.0.10, your subscription should be valid until September 2, 2014; otherwise, a subscription renewal is required.

If you are not using dotMemory yet, it’s a good time to start. Together with multiple enhancements, the minor update v4.0.10 introduces a new evaluation model: from now on, you can use the trial version for 10 actual days of use, even if they are non-consecutive and even if your previous evaluation period has expired. Get the latest dotMemory and enjoy!

Posted in dotMemory Tips&Tricks, How-To's | 1 Comment

ReSharper 8.2.2 with JetBrains Account Support is Here

A new ReSharper maintenance release is now available for download. Please install this upgrade if you have experienced the following issues with ReSharper 8.2:

  • A numeric comparison error when getting or updating nuget packages (when MSBuild is used to obtain project references).
  • SSL-Certificate error while connecting to JetBrains Account.

Speaking of JetBrains Account, this is a new way of managing your licenses for JetBrains tools. As a single interaction point with JetBrains products and services, it provides a simple and convenient way to access and manage your purchases, view your order history, distribute licenses to users, and more. You can use your JetBrains Account to have single access to the following JetBrains services:

You can still use your license key to activate and work with our tools if that way is more comfortable for you.

Additional Tools

Please note that along with ReSharper 8.2.2 you can download and install compatible builds of dotTrace, dotCover and dotMemory with similar fixes for the SSL-Certificate error.

By the way, great news for dotMemory users: we’ve added two brand new automatic inspections of common WPF memory leaks types. If you are not using dotMemory yet, today’s a good day to start. Together with multiple enhancements, the minor update v4.0.10 introduces a new evaluation model: from now on you can use the trial version for 10 actual days of use, even if they are non-consecutive. Enjoy!

Posted in News and Events | 26 Comments

ReSharper 9: first look at C# 6.0 support

Today we want to share some details on our progress in supporting the new language features coming in C# 6.0. While most of the changes in C# 6.0 are relatively small, they will definitely affect the existing ReSharper actions, quick-fixes, refactorings and code inspections. Let’s take a look at some of the new C# features and how ReSharper will help you work with them.

Conditional access

The conditional access expression, also known as the ‘null-propagating’ operator, is a small piece of syntactic sugar designed to help deal with null values. Despite the popular myth, it can’t magically save you from NullReferenceException once and for all. However, the conditional access operator can dramatically reduce the amount of code needed in some scenarios to check for null values. Consider this method from the ReSharper source code:


Code like this is used extremely often in ReSharper development, since IDE tooling deals with incomplete code almost all of the time. We have to write a lot of code just to handle broken syntax and semantic trees. But most of the time you just need to return false/null to exit the method early, and here is where the ?. operator is simply invaluable:


As you would expect, ReSharper has deep knowledge of how the new language construct behaves, how it evaluates, what values it may produce, and whether or not parts of the expression are reachable during execution:


We also notify you if the conditional access is redundant and offer a quick-fix to replace it with normal non-conditional member access:


But the most interesting part of ReSharper’s support for C# 6.0 is transformations. Since tons of C# code were written before the ?. operator was even designed, it is natural for ReSharper to recognize null-propagating code patterns and suggest the new conditional expression where it matters:


We reviewed a lot of various code snippets to recognize and formalize the patterns, which were then used to implement context actions and suggestions, as well as reverse transformations:


We are even considering a small refactoring action to easily upgrade usages of popular extension methods like .IfNotNull(x => …) by implementing the conditional expressions for earlier C# versions:


The conditional access operator is a great addition to the C# language: it doesn’t affect the language too much, you are not forced to use it, but when you really need the null-propagating code – it provides a nice way of keeping your code clean and concise.
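Since the code screenshots are not reproduced here, a hedged reconstruction of the kind of transformation ReSharper suggests (Person and Address are illustrative types, not the actual ReSharper source):

```csharp
class Address { public string City { get; set; } }
class Person  { public Address Address { get; set; } }

static class CityLookup
{
    // Before: an explicit null-check chain.
    public static string GetCity(Person person)
    {
        if (person == null) return null;
        var address = person.Address;
        if (address == null) return null;
        return address.City;
    }

    // After: the same logic using the C# 6 conditional access operator.
    public static string GetCityNew(Person person)
    {
        return person?.Address?.City;
    }
}
```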

Expression-bodied members

We are super excited about this C# 6.0 language feature. It is designed to reduce the amount of boilerplate code required to declare trivial type members, especially get-only properties. C# developers might be a bit tired of writing explicit accessors, multiple curly braces and return statements just to yield some values from properties. C# 6.0 solves this problem in a great way with the familiar lambda-expression syntax:


As you can see, ReSharper is here to help you migrate your code to idiomatic C# 6.0, and it can apply the transformation file-, project- and solution-wide. Expression bodies are also available for method, indexer and user-defined operator declarations.

Improvements to auto-properties

Besides introducing new language constructs, C# 6.0 aims at making some existing features more useful and at covering more user scenarios. For example, auto-properties in C# 6.0 can have an initializer expression, just like fields and field-like events:
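A minimal sketch of what this looks like (our own example types):

```csharp
using System;
using System.Collections.Generic;

class Order
{
    // Before C# 6: a readonly backing field plus a trivial property.
    // private readonly List<string> items = new List<string>();
    // public List<string> Items { get { return items; } }

    // C# 6: a get-only auto-property with an initializer expression.
    public List<string> Items { get; } = new List<string>();
    public DateTime Created { get; } = DateTime.UtcNow;
}
```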


ReSharper already knows about this and suggests the transformation even if the backing field is declared readonly and has an initializer expression. You may also notice that ReSharper creates the auto-property without a set accessor — this is also allowed in C# 6.0, giving you truly immutable auto-properties, just like readonly fields.


Primary constructors

This feature has been trending on the Roslyn forums for months. Primary constructors are designed to make your initialization code more declarative, moving away from imperative, error-prone initialization code in constructors. The feature also improves the usability of field/property initializers: initializers are not allowed to use the this reference, but primary constructor parameters are now in scope. So a typical data class from the example above transforms into the following form:
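As a hedged sketch, the syntax looked roughly like this in the preview builds of the time (the exact design was still in flux, so treat this as an approximation rather than the final shape):

```csharp
// The parameter list sits directly on the class declaration, and the
// parameters are in scope for property and field initializers.
public class Person(string name, int age)
{
    public string Name { get; } = name;
    public int Age { get; } = age;
}
```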


Unfortunately, the C# design team decided not to provide any way to automatically declare fields or properties from primary constructor parameter declarations, so you still need to repeat the types and names when declaring the corresponding property and field members. But there is still room for future design improvements, and we hope that the current design is just an incremental step before more significant changes are introduced.


Currently, ReSharper recognizes the new primary constructor declaration and its body block both syntactically and semantically. The familiar IDE experience, from extend/shrink selection to code completion, is available for primary constructors. But of course we also want to help you transform existing classes to primary constructors. This is why we've built a small analysis engine to inspect constructor code: ReSharper collects the assignments that always happen during normal constructor execution and tracks type member reads and writes, as well as other initialization information. New ReSharper features like 'Move assignment to initializer' already have this engine under the hood (ReSharper 8 has only the reverse 'Move initialization to constructor' action), and we are looking forward to using it in primary constructor transformations:


We still have a lot of decisions to make regarding intersections with existing ReSharper features, and we are still reviewing the feature set that we want to deliver with ReSharper 9. For now, it is unclear how often developers will use this feature, how they will format primary constructor parameter lists, and so on. We would therefore like to hear your opinion on how you expect ReSharper to behave with primary constructor declarations, and which transformations and code inspections might be useful to you.

Other changes

We already support some other C# 6.0 language features, such as 'using static' (importing the static members of a type into scope) and exception filters (a condition clause for catch blocks):
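A sketch of both features, using the syntax that eventually shipped in C# 6 (our own example code):

```csharp
using System;
using static System.Math;   // C# 6: type Sqrt instead of Math.Sqrt

static class Samples
{
    public static double Hypotenuse(double a, double b)
    {
        return Sqrt(a * a + b * b);   // no Math. prefix needed
    }

    public static bool TryRun(Action action, bool swallow)
    {
        try
        {
            action();
            return true;
        }
        catch (Exception e) when (swallow)   // C# 6 exception filter: the
        {                                    // handler runs only when the
            Console.WriteLine(e.Message);    // filter expression is true
            return false;
        }
    }
}
```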


As usual, ReSharper maintains the 'language level' concept, enabling different sets of features and inspections for projects targeting any C# compiler version starting from 2.0. We handle everything from the changes in overload resolution between C# versions to small language improvements like the availability of await expressions in the catch/finally clauses of a try block in C# 6.0:


New features like 'declaration expressions' (the ability to declare variables anywhere inside expressions), dictionary initializers and the nameof() operator are on their way.
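For reference, dictionary (index) initializers and nameof() can be sketched like this (our own example, reflecting the C# 6.0 previews):

```csharp
using System;
using System.Collections.Generic;

static class NameofDemo
{
    public static Dictionary<string, string> DefaultHeaders()
    {
        // C# 6 index initializer: each entry assigns through the indexer.
        return new Dictionary<string, string>
        {
            ["Content-Type"] = "text/html",
            ["Encoding"] = "utf-8",
        };
    }

    public static string ParamName()
    {
        var headers = DefaultHeaders();
        // nameof() yields the identifier as a compiler-checked string,
        // so renaming the variable cannot silently break the literal.
        return nameof(headers);
    }
}
```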

Looking forward

We should mention that the openness of the Roslyn development process helps us a lot. We can now play with new language and compiler features as soon as they are implemented, debug the compilers to understand their behavior, and even participate in language design discussions. By the way, ReSharper now successfully compiles with the Roslyn compiler (even though some VB bugs critical for us are not fixed yet), and we are looking forward to the Visual Studio 14 and Roslyn releases. We expect fast adoption of C# 6.0, since the new language features do not depend on BCL support and can be used even with .NET Framework 2.0 projects.

In this post we have highlighted only the C# 6.0 bits of the upcoming ReSharper 9, just to give you an overview of what we are working on. As usual, we are going to open the EAP as soon as our builds become stable enough. Stay tuned and tell us what you think!

Posted in ReSharper Tips&Tricks | 24 Comments

Unusual Ways of Boosting Up App Performance. Lambdas and LINQs

This is the third post in the series. The previous ones can be found here:

Today, we’re going to uncover the common pitfalls of using lambda expressions and LINQ queries, and explain how you can avoid them in day-to-day work.

Lambda Expressions

Lambda expressions are a very powerful .NET feature that can significantly simplify your code in particular cases. Unfortunately, convenience has its price: careless use of lambdas can significantly impact app performance. Let’s look at what exactly can go wrong.

The trick is in how lambdas work. To implement a lambda (which is a sort of local function), the compiler has to create a delegate. Whenever the code containing a lambda runs, a delegate instance may have to be created as well. This means that if the lambda sits on a hot path (is executed frequently), it can generate huge memory traffic.

Is there anything we can do? Fortunately, .NET developers have already thought about this and implemented a caching mechanism for delegates. For better understanding, consider the example below:

Caching lambdas 1
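A comparable capture-free lambda can be sketched as follows (the class and method names are ours, not the ones from the screenshot):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class LambdaTest
{
    public static List<string> ShortNames(List<string> names)
    {
        // The lambda captures no outside state, so the compiler emits its
        // delegate once, stores it in a static field and reuses it on
        // every subsequent call.
        return names.Where(n => n.Length < 10).ToList();
    }
}
```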

Now look at this code decompiled in dotPeek:

Caching lambdas example. Decompiled code

As you can see, the delegate is made static and created only once – LambdaTest.CS$<>9__CachedAnonymousMethodDelegate1.

So, what pitfalls should we watch out for? At first glance, this behavior won’t generate any traffic. That’s true, but only as long as your lambda does not contain a closure. If you pass any context (this, an instance member, or a local variable) into a lambda, caching won’t work. It makes sense: the context may change at any time, and that’s what closures are made for: passing context.

Let’s look at a more elaborate example. Suppose your app uses a Substring method to get substrings from strings:

Lambdas example 1

Let’s suppose this code is called frequently and the input strings are often the same. To optimize the algorithm, you can create a cache that stores results:

Lambdas example 2

As the next step, you can optimize your algorithm so that it first checks whether the substring is already in the cache:

Lambdas example 3

The Substring method now looks as follows:

Lambdas example 4

As you pass the local variable x to the lambda, the compiler is unable to cache the created delegate. Let’s look at the decompiled code:

Lambdas example. Decompiled code with no caching

There it is. A new instance of <>c__DisplayClass1 is created each time the Substring method is called. The parameter x we pass to the lambda is implemented as a public field of <>c__DisplayClass1.
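Putting the whole example together, here is a hedged sketch of the closure variant (the member names and exact signatures are ours, reconstructed from the description above):

```csharp
using System;
using System.Collections.Generic;

static class SubstringCache
{
    static readonly Dictionary<string, string> Cache =
        new Dictionary<string, string>();

    static string GetOrCreate(string key, Func<string> factory)
    {
        string value;
        if (!Cache.TryGetValue(key, out value))
        {
            value = factory();
            Cache[key] = value;
        }
        return value;
    }

    public static string Substring(string x)
    {
        // The lambda captures the local x, so every call allocates a
        // closure object (<>c__DisplayClass) plus a fresh delegate.
        return GetOrCreate(x, () => x.Substring(1));
    }
}
```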

How to Find

As with any other example in this series, first of all make sure that a certain lambda is really causing performance issues, i.e. generating huge traffic. This can be easily checked in dotMemory.

  1. Open a memory snapshot and select the Memory Traffic view.
  2. Find delegates that generate significant traffic. Objects of type …+<>c__DisplayClassN are also a hint.
  3. Identify the methods responsible for this traffic.

For instance, if the Substring method from the example above is run 10,000 times, the Memory Traffic view will look as follows:

Lambdas shown in dotMemory

As you can see, the app has allocated and collected 10,000 delegates.

When working with lambdas, the Heap Allocation Viewer also helps a lot as it can proactively detect delegate allocation. In our case, the plugin’s warning will look like this:

Warning about lambdas in the HAV plug-in

But once again, the data gathered by dotMemory is more reliable, because it shows you whether this lambda is a real issue (i.e. whether or not it generates lots of traffic).

How to Fix

Considering how tricky lambda expressions may be, some companies even prohibit using lambdas in their development processes. We believe that lambdas are a very powerful instrument that definitely can and should be used, as long as appropriate caution is exercised.

The main strategy when using lambdas is to avoid closures. In that case, the created delegate will always be cached, with no impact on traffic.

Thus, for our example, one solution is to not pass the parameter x to the lambda. The fix would look as follows:

Caching lambdas code fix
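Continuing the hypothetical sketch from above, the fix passes the key to the factory instead of capturing it:

```csharp
using System;
using System.Collections.Generic;

static class SubstringCacheFixed
{
    static readonly Dictionary<string, string> Cache =
        new Dictionary<string, string>();

    // The factory now receives the key as an argument...
    static string GetOrCreate(string key, Func<string, string> factory)
    {
        string value;
        if (!Cache.TryGetValue(key, out value))
        {
            value = factory(key);
            Cache[key] = value;
        }
        return value;
    }

    public static string Substring(string x)
    {
        // ...so the lambda captures nothing and its delegate is created
        // once and cached by the compiler.
        return GetOrCreate(x, key => key.Substring(1));
    }
}
```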

The updated lambda doesn’t capture any variables; therefore, its delegate should be cached. This can be confirmed by dotMemory:

Labdas caching after the fix shown in dotMemory

As you can see, now only one instance of the Func delegate is created.

If you need to pass some additional context to GetOrCreate, a similar approach (avoiding variable closure) should be used. For example:

Code example of passing additional context to lambdas
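One way to sketch this (our own hypothetical signature, not the original screenshot's code) is a typed state parameter, so the extra context travels as an argument rather than a captured variable:

```csharp
using System;
using System.Collections.Generic;

static class CacheWithState
{
    static readonly Dictionary<string, string> Cache =
        new Dictionary<string, string>();

    // Extra context travels through a typed state argument, so the
    // lambda at the call site still captures nothing.
    static string GetOrCreate<TState>(
        string key, TState state, Func<string, TState, string> factory)
    {
        string value;
        if (!Cache.TryGetValue(key, out value))
        {
            value = factory(key, state);
            Cache[key] = value;
        }
        return value;
    }

    public static string Prefix(string x, int length)
    {
        return GetOrCreate(x, length, (key, len) => key.Substring(0, len));
    }
}
```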

LINQ Queries

As we just saw in the previous section, lambda expressions always imply that a delegate is created. What about LINQ? The concepts of LINQ queries and lambda expressions are closely connected and have very similar implementations ‘under the hood.’ This means that all the concerns we discussed for lambdas also hold for LINQ queries.

If your LINQ query contains a closure, the compiler won’t cache the corresponding delegate. For example:

LINQ caching example
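A sketch of such a query with a closure (method and parameter names are ours):

```csharp
using System.Collections.Generic;
using System.Linq;

static class NameFilter
{
    // threshold is captured by the query, so the compiler cannot cache
    // the delegate: each call allocates a closure plus a new delegate.
    public static List<string> LongerThan(List<string> names, int threshold)
    {
        return names.Where(n => n.Length > threshold).ToList();
    }
}
```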

As the threshold parameter is captured by the query, its delegate will be created each time the method is called. As with lambdas, traffic from delegates can be checked in dotMemory:

LINQ caching shown in dotMemory

Unfortunately, there’s one more pitfall to avoid when using LINQ. Any LINQ query assumes iteration over some data collection, which, in turn, requires creating an iterator. The subsequent chain of reasoning should already be familiar: if the LINQ query sits on a hot path, the constant allocation of iterators will generate significant traffic.

Consider this example:

LINQ iterator allocation example
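A hedged reconstruction of such a GetLongNames method (the exact signature in the screenshot may differ):

```csharp
using System.Collections.Generic;
using System.Linq;

static class Report
{
    // Enumerating the query allocates an iterator object (for Where over
    // a list, a WhereListIterator<string>) on every call.
    public static void GetLongNames(List<string> inputList, List<string> outputList)
    {
        var longNames = inputList.Where(n => n.Length > 3);
        foreach (var name in longNames)
            outputList.Add(name);
    }
}
```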

Each time GetLongNames is called, the LINQ query will create an iterator.

How to Find

With dotMemory, finding excessive iterator allocations is an easy task:

  1. Open a memory snapshot and select the Memory Traffic view.
  2. Find objects from the namespace System.Linq that contain the word “iterator”. In our example we use the Where LINQ method, so we look for System.Linq.Enumerable+WhereListIterator<string> objects.
  3. Determine the methods responsible for this traffic.

For instance, if we call the Foo method from our example 10,000 times, the Memory Traffic view will look as follows:

LINQ iterator allocation shown in dotMemory

The Heap Allocation Viewer plugin also warns us about allocations in LINQ queries, but only when LINQ methods are called explicitly. For example:

LINQ iterator allocation warning by the HAV plug-in

How to Fix

Unfortunately, the only answer here is to avoid LINQ queries on hot paths. In most cases, a LINQ query can be replaced with a foreach loop. In our example, a fix could look like this:

LINQ iterator allocation fix example
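A sketch of such a fix, mirroring the hypothetical GetLongNames method above:

```csharp
using System.Collections.Generic;

static class ReportFixed
{
    // A plain loop does the same work with no query object and no
    // iterator allocation.
    public static void GetLongNames(List<string> inputList, List<string> outputList)
    {
        foreach (var name in inputList)
        {
            if (name.Length > 3)
                outputList.Add(name);
        }
    }
}
```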

As no LINQ is used, no iterators will be created.

LINQ iterator allocation fix shown in dotMemory

We hope this series of posts has been helpful. Just in case, the previous two can be found here:

Please follow @dotmemory on Twitter or the dotMemory Google+ page to stay tuned.

Posted in dotMemory Tips&Tricks, How-To's | 9 Comments

Unusual Ways of Boosting Up App Performance. Strings

This is the second post in the series. The other ones can be found here:

This post focuses on the best approaches to working with strings.

Changing String Contents

String is an immutable type, meaning that the contents of a string object cannot be changed. Whenever you ‘change’ string contents, a new string object is created. This fact is the main source of string-related performance issues: the more you change string contents, the more memory is allocated. This, in turn, triggers garbage collections that impact app performance. A relatively simple solution is to optimize your code so as to minimize the creation of new string objects.

How to Find

Check all string instances that are created not by your code but by methods of the String class. The most obvious example is the String.Concat method, which creates a new string each time you combine strings with the + operator.

To do this in dotMemory:

  1. In the Memory Traffic view, locate and select the System.String class.

  2. Find all methods of the String class that create the selected strings.

Consider an example of a function that reverses strings:

Reversing strings example
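A plausible sketch of such a Reverse function (reconstructed from the description; the original screenshot may differ in detail):

```csharp
static class StringDemo
{
    // Each += compiles to String.Concat, which allocates a brand-new
    // string, so reversing an N-character string creates N intermediates.
    public static string Reverse(string line)
    {
        string result = string.Empty;
        for (int i = line.Length - 1; i >= 0; i--)
            result += line[i];
        return result;
    }
}
```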

An app that uses this function to reverse a 1000-character string generates enormous memory traffic (more than 5 MB of allocated and collected memory). A memory snapshot taken with dotMemory reveals that most of the traffic (4 MB of allocations) comes from the String.Concat method, which, in turn, is called by the Reverse method.

Traffic from string objects shown in dotMemory

The Heap Allocations Viewer plug-in will also warn you about allocations by highlighting the corresponding line of code:

The HAV plug-in highlights string concatenation

How to Fix

In most cases, the fix is to use the StringBuilder class, or to handle the string as an array of chars using the dedicated array methods. For the ‘reverse string’ example, the code could look as follows:

String concatenation code fix
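Two possible shapes of such a fix (our own sketches of the approaches named above):

```csharp
using System;
using System.Text;

static class StringDemoFixed
{
    // StringBuilder grows a single buffer instead of allocating an
    // intermediate string per character.
    public static string Reverse(string line)
    {
        var sb = new StringBuilder(line.Length);
        for (int i = line.Length - 1; i >= 0; i--)
            sb.Append(line[i]);
        return sb.ToString();
    }

    // Alternative: treat the string as an array of chars.
    public static string ReverseByArray(string line)
    {
        var chars = line.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }
}
```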

dotMemory shows that traffic dropped by over 99% after the fix:

Memory traffic in dotMemory after the fix

Improving Logging

When looking for ways to optimize your project, take a look at the logging subsystem. In complex applications, for the sake of stability and easier support, almost all actions are logged. This results in significant memory traffic from the logging subsystem, so it is important to minimize allocations when writing messages to a log. There are multiple ways to improve logging.*

*Actually, the optimization approaches shown in this section are universal. The logging subsystem was taken as an example because it works with strings most intensively.

Empty Arrays Allocation

A typical LogMessage method looks as follows:
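A minimal sketch of such a method, assuming a params object[] signature (which is what makes the empty-array pitfall described below possible); real loggers would add levels, sinks and so on:

```csharp
using System;

static class Log
{
    // Hypothetical sketch of a typical format-style logging method.
    public static void LogMessage(string format, params object[] args)
    {
        Console.WriteLine(format, args);
    }
}
```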



What are the pitfalls of such an implementation? The main concern is how you call this method. For example, the call

Logging call 1

will cause allocation of an empty array. In other words, this line will be equivalent to

Logging call 2

How to Find

These allocations would be difficult to detect in a memory snapshot manually, but the Heap Allocations Viewer plug-in finds them very quickly:

Empty array creation shown in the HAV plug-in

How to Fix

The best solution is to create a number of method overloads with explicitly specified arguments. For instance:

Logging fix example
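A sketch of what such overloads can look like (our own example, same hypothetical Log class as above):

```csharp
using System;

static class LogOverloads
{
    // Dedicated overloads for the common arities: no params array is
    // allocated for calls with zero, one or two arguments.
    public static void LogMessage(string message)
    {
        Console.WriteLine(message);
    }

    public static void LogMessage(string format, object arg0)
    {
        Console.WriteLine(format, arg0);
    }

    public static void LogMessage(string format, object arg0, object arg1)
    {
        Console.WriteLine(format, arg0, arg1);
    }

    // The params overload stays as a fallback for longer argument lists.
    public static void LogMessage(string format, params object[] args)
    {
        Console.WriteLine(format, args);
    }
}
```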

Hidden Boxing

The implementation above has a small drawback. What if you pass a value type to, say, the following method?

Hidden boxing example 1

For example:

Hidden boxing example 2

As the method accepts its arguments only as object, which is a reference type, boxing will take place.

How to Find

As with any other boxing, the main clue is a value type on the heap. So, all you need to do is look at the memory traffic and find a value type. In our case this will look as follows:

Hidden boxing shown in dotMemory

Of course, the Heap Allocations Viewer will also warn you:

Hidden boxing warning in the HAV plug-in

How to Fix

The easiest way is to use generics—a mechanism for deferring type specification until it is declared by client code. Thus, the revised version of the LogMessage method should look as follows:

Hidden boxing code fix
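A sketch of the generic approach (our own code; the early-exit check is our addition to make the benefit concrete):

```csharp
using System;

static class GenericLog
{
    public static bool IsEnabled;   // assumption: a cheap on/off switch

    // The generic parameter defers the type choice to the call site, so a
    // value-type argument is not boxed just to cross the method boundary;
    // when logging is off, no boxing happens at all.
    public static void LogMessage<T>(string format, T arg0)
    {
        if (!IsEnabled)
            return;
        Console.WriteLine(format, arg0);   // formatting may still box, but
                                           // only for messages actually written
    }
}
```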

Early String Allocation

The advice to defer variable allocation as much as possible is quite obvious. Still, sometimes stating the obvious is useful.

Consider the code below. Here the logmsg string is created regardless of whether logging is turned on or off:

Deferring string allocation

A better solution would be:

Deferring string allocation fix
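The before/after shape can be sketched as follows (our own example names and message text):

```csharp
using System;

static class Logger
{
    public static bool IsEnabled;
    public static void Log(string message) { Console.WriteLine(message); }
}

static class Tracing
{
    // Before: the message string is built on every call, even when
    // logging is turned off.
    public static void EnterEager(string method)
    {
        string logMsg = string.Format("Entering {0} at {1:O}", method, DateTime.Now);
        if (Logger.IsEnabled)
            Logger.Log(logMsg);
    }

    // After: the string is allocated only when it is actually needed.
    public static void EnterLazy(string method)
    {
        if (Logger.IsEnabled)
            Logger.Log(string.Format("Entering {0} at {1:O}", method, DateTime.Now));
    }
}
```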

Excessive Logging

If you use logging for debugging purposes, make sure log calls never reach the release build. You can do this by using the [Conditional] attribute.

In the example below, the LogMessage method will only be called if the DEBUG conditional compilation symbol is explicitly defined.

Conditional attribute example
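A sketch of such a method (our own example):

```csharp
using System;
using System.Diagnostics;

static class DebugLog
{
    // The compiler removes every call to this method (arguments included)
    // unless the DEBUG symbol is defined, so the calls never reach a
    // release build.
    [Conditional("DEBUG")]
    public static void LogMessage(string message)
    {
        Console.WriteLine(message);
    }
}
```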

That does it for this post. In the next one, we’ll talk about the nuances of using lambda expressions and LINQ queries. To stay tuned, please follow @dotmemory on Twitter or the dotMemory Google+ page!

Posted in dotMemory Tips&Tricks, How-To's | 5 Comments