Live Webinar: Merging Refactored Code – ReSharper Meets SemanticMerge, June 17th

Join us Tuesday, June 17th, 16:00 – 17:00 CEST (9:00 – 10:00 AM EDT) for a free webinar, Merging Refactored Code: ReSharper Meets SemanticMerge.

In this webinar, Matt Ellis (JetBrains) will host Pablo Santos (SemanticMerge), who will run through a number of refactoring examples, from the seemingly trivial (yet essential) to complex structure-modification scenarios, and demonstrate how to perform each refactoring with ReSharper and then merge it with SemanticMerge.

Pablo will talk about the challenges of merging complex refactorings and demonstrate how SemanticMerge simplifies them: it parses the code into a syntax tree, reasons about it as code rather than text (or text with heuristics), and merges accordingly.

If you’ve ever wanted to know more about ReSharper’s many refactoring capabilities and new tools designed to make merging complex refactorings a breeze, don’t miss this webinar.

Space is limited, so please register now.

About the Presenter:

Pablo Santos is the founder of Codice Software, the company behind Plastic SCM and SemanticMerge. Codice started in 2005, and since then Pablo has played roles ranging from core engineering to marketing, business development, advertising and sales operations. Nowadays he plays a dual role as lead of software engineering and product manager for both SemanticMerge and Plastic SCM.

Posted in News and Events, ReSharper Tips&Tricks | 1 Comment

Webinar Recording and Q&A: High-Performance Computing with C++

The recording of our May 29th webinar with Dmitri Nesteruk, High-Performance Computing with C++, is now available on the JetBrains YouTube channel.

Languages such as JavaScript may receive a lot of hype nowadays, but for high-performance, close-to-the-metal computing, C++ is still king. This webinar takes you on a tour of the HPC universe, with a focus on parallelism, be it instruction-level (SIMD), data-level, task-based (multithreading, OpenMP), or cluster-based (MPI).

We also discuss how specific hardware can significantly accelerate computation by looking at two such technologies: NVIDIA CUDA and Intel Xeon Phi. (Some scarier tech such as FPGAs is also mentioned). The slides used in the webinar are available here.

We received plenty of questions during the webinar, and we’d like to use this opportunity to highlight some of them here, including those we didn’t have a chance to answer live. Please find the questions below, with answers by the presenter, Dmitri Nesteruk.

Q: Why are you not a fan of OpenCL?
A: I generally think that OpenCL is a great idea. The ability to run the same code on x86, GPU, FPGA and Xeon Phi is fantastic. However, this hinges on a very important assumption, namely that it is reasonable to run the same kind of computation on all these devices.

In practice this isn’t always the case. x86 and Xeon Phi are great for general-purpose code. GPGPU is restricted mainly to data-parallel numerics. FPGAs are an entirely different beast: they are not as good for numerics, but excellent for structured data with a high degree of intrinsic parallelism arising from the architecture being used.

On the GPGPU side, where OpenCL is the principal way to program AMD graphics cards, I’d say NVIDIA has won the war, at least for now. Whether or not they make better devices, their tool support and marketing efforts have earned them the top spot. CUDA C is more concise than OpenCL, which is great, but keep in mind that OpenCL tries to be compatible with all architectures, so you have to expect it to be more verbose. However, programming OpenCL feels like programming CUDA using the Driver API, and I’d much rather use CUDA’s user-mode APIs, together with its excellent libraries (including Thrust, where applicable).

Q: CUDA has to copy data to the device’s RAM, right? At which magnitude of data do we gain something from CUDA?
A: Yes, and there are actually five (5!) different types of memory on CUDA devices that can be used. The delay in sending data to and from the device does prevent us from using CUDA for real-time purposes, but if you’re not trying to do that, then what you’re going to worry about is saturating the device with enough data to make the computation worthwhile. This depends first of all on how well your problem parallelizes, but assuming it does, the question becomes how long a single unit of work actually takes.

The logic here is simple: if a unit of work takes less than the time to send data to/from the device, consider keeping it on the CPU and vectorizing it if possible. If, however, you’ve got processes that take up more time, then it might make sense to do the calculations on the GPU. And keep in mind that the GPU is capable of supporting a form of task-based parallelism (streams), so if you’ve got distinct computation processes, you can try pipelining them onto the device.

The best way to tell if the GPU is the right solution or not is to write your algorithm and do a performance measurement.

Q: Can you elaborate on the protocol parsing?
A: Let’s look at the problem from a higher level. Almost any specific computation problem is likely to be faster when done in hardware than in software. But in practice, you’re unlikely to be designing an ASIC for each particular problem, because it’s very expensive, and because changing an ASIC is impossible once it’s designed and produced.

On the other hand, some problems do benefit from extra computation resources. For example, parsing data from a protocol such as FIX allows you to offload some of the work for, e.g., building order books from the CPU and also avoid paying the costs of data moving from your network card to RAM and back. These might seem like trivial costs, but given the kind of technology war that trading firms are engaged in, every microsecond helps!

It also so happens that FPGAs are excellent at parsing fixed data formats. This means you can build a NIC that uses an FPGA to parse the data then and there, structure it, maybe even analyze it, and send it back upstream faster than an ordinary server could. Plenty of commercial solutions exist for this, but designing your own is also a lot of fun. FPGAs offer additional benefits that I mentioned in the webinar, such as relative scalability (you can have 20 FPGAs on a single card, if you want).

Q: Would you recommend using the Intel Xeon Phi for complex simulation (like car simulator)?

A: This depends on what you’re simulating and whether the problem is actually decomposable into independently executing units of work. If you’re after agent-based modeling, then with the Xeon Phi you’re in luck, because it supports just about every parallel paradigm under the sun. You can use MPI or Pthreads or something else, and have the parts of the system interact with one another via messages.

It really depends on the specifics of the problem you’re trying to solve.

Q: As an alternative to inline assembly and intrinsics, you can ensure vectorization and the use of SIMD instructions by giving hints to the compiler through pragmas. Why not mention that?

A: Indeed! In fact, one of the pain points of SIMD is that you have to abandon the normal ways of writing code. So having the compiler vectorize it for you is great – it leaves you free to do other things. The same goes for OpenMP, which follows exactly the same idea: mark a loop as parallel and the compiler will do its best to parallelize it.

My concern is that these processes are always a black box. The compiler can easily understand a simple for loop with no side effects, but problems aren’t always that simple, and when other variables and constructs get entangled, it is sometimes better to hand-craft the critical paths rather than trust the compiler to do a good job of rewriting them.

Q: In the domain you currently work, which HW alternative do you recommend using: FPGAs, GPGPUs, or anything else?
A: It’s ‘horses for courses.’ Quant finance falls into two categories: analysis, when you sit at your desk and try to calibrate a model; and execution, when your co-located servers trade on some market or other. The rules of the game are drastically different.

For analysis, anything goes, because you’re not time-constrained by a fast-moving market. The more computational power you have, the faster you’ll get your ideas validated. Numeric simulations (e.g., Monte Carlo) fit nicely on GPUs and are especially great when you can allow yourself to drop to single precision. Xeon Phis are general-purpose workhorses – anything can run on them – but they are a relatively new technology that I’m still getting to grips with.

When it comes to execution, it’s all about speed. An arbitrage opportunity comes in and disappears in the blink of an eye, and thus FPGA-based hardware can help you capture it before others react. Having great hardware is not enough if you’ve got a huge ping to the exchange, obviously. I haven’t tried GPUs and FPGAs on the execution side: I suspect they might be relevant, but it’s something I haven’t investigated yet.

Keep in mind that not all trading is high-frequency trading. Some firms run things on MATLAB which, while not as fast as hand-tuned C++ with SIMD instructions, provides sufficient execution speed coupled with a vast array of built-in math libraries, meaning you don’t have to roll your own. Other quant institutions, including some of the major ones, survive just fine running their trades off Excel spreadsheets, possibly the slowest computation mechanism of all.

Q: You are talking a lot about FPGAs. I don’t quite understand what FPGAs have to do with C++ programming.

A: It’s fair to say that FPGAs are typically programmed using Hardware Description Languages (HDLs) such as VHDL and Verilog. However, an interesting trend is the support for OpenCL on FPGAs. Hopefully the relationship between OpenCL and C++ is self-evident. Plus, since we’re discussing HPC technologies, FPGAs definitely deserve a mention.

Q: Does ReSharper C++ support CUDA? / Will the IDE support Embedded Systems?
A: Support for CUDA, as well as for the Intel C++ compiler, is at the top of my personal wish list, and I’ve been pestering the ReSharper developers with related issues. While I hesitate to make any promises, I don’t imagine CUDA support is that difficult, considering there’s only one language extension (the triple-angled chevrons) and the rest is fairly normal C++ that should be parseable straight away. I could be wrong, of course.

Regarding the C++ IDE and its support for embedded: if you’re after cross-compilation, it will work right now, though of course the IDE will not parse compiler output. As for supporting specific libraries – we’ll consider those after the initial release which, as you may have guessed, will target the general platform rather than any particular device.

The future direction of the C++ IDE will largely depend on user demand. At the moment, it’s difficult to predict where the product will head. So keep posting and voting for feature requests!

About the Presenter

Dmitri Nesteruk is a developer, speaker, podcaster and a technical evangelist for JetBrains. His interests lie in software development and integration practices in the areas of computation, quantitative finance and algorithmic trading. He is an instructor of an entry-level course in Quantitative Finance. His technological interests include C#, F# and C++ programming as well as high-performance computing using technologies such as CUDA. He has been a C# MVP since 2009.

Posted in How-To's, Links and Opinions | Leave a comment

ReSharper 8.2.1 RTM Is Available

A new ReSharper maintenance release is now available for download. Please feel free to upgrade if your experience with ReSharper 8.2 has been subpar due to any of the following issues:

  • False positive errors in Windows Phone projects with references to PCLs
  • Ambiguous reference errors
  • Issues with Jasmine and AngularJS tests not running

Here’s the full list of fixes and enhancements included in ReSharper 8.2.1 for your reference.

Updates to ReSharper SDK and ReSharper Command Line Tools are available too, as well as compatible builds of dotTrace, dotCover and dotMemory (the latter additionally contains a number of bug fixes).

Posted in News and Events | 2 Comments

Code Navigation: ReSharper 8 vs. Visual Studio 2013

It’s no secret that on large projects, developers spend a lot of time simply navigating through their code. The bigger the project, the more time you need to find a certain class/object/file/anything. That’s why fast and easy code navigation is an essential feature of any IDE, as it directly affects a developer’s productivity.

Visual Studio and ReSharper both offer tools that help you not get lost in your code: the Navigate to feature in Visual Studio and the Go to set of tools in ReSharper.

In this post, we compare the navigation features of Visual Studio 2013 and ReSharper 8.x. The comparison is especially interesting because both products contain a number of improvements over their previous versions.

Navigation Within a Solution

With Visual Studio 2013

Visual Studio’s Navigate to feature allows you to find and navigate to a specific element in your solution. Once you press Ctrl+, and start typing your query*, Visual Studio displays all elements matching your query, regardless of their type and location. To navigate to a certain element, either select it with the cursor keys and press Enter, or double-click it.
Visual Studio 2013. Navigating within a solution
In the example above, Visual Studio shows all elements that have an occurrence of ‘addp’: two files, one class and its constructor, and one method.

Note that the search scope of Navigate to is always the entire solution. Such an approach works well in small-to-medium projects, but what if you work on a much bigger one? In this case, the list of discovered elements would be much longer and you would have to refine your search.

*In Visual Studio 2013, the modal Navigate to dialog is replaced with a new non-modal window with search-as-you-type support.

With ReSharper 8

Compared to Visual Studio, ReSharper’s set of Go to tools is much more flexible. In addition to searching for all occurrences throughout the entire solution, it allows you to narrow down your search by specifying the kind of element you’re looking for: a certain type, file, or any symbol in a file.

Go to Everything
To make ReSharper show all elements regardless of their type, press Ctrl+T and start typing your query. To navigate to a certain element, either select it with cursor keys and press Enter or click the element:
ReSharper 8. Navigating to everything
As you can see, search results in ReSharper slightly differ from the ones in Visual Studio. There is no AddProductWindow() class constructor (it is hidden for conciseness) and there is one more element – the OpenAddProduct button, which appears here thanks to ReSharper’s wider search scope (see the Search Scope section).

Go to Type
Press Ctrl+T (twice) and navigate to any type (class, interface, and so on) within your solution:
ReSharper  8. Navigating to a type
As you can see, ReSharper narrows results by displaying the AddProductWindow class only.

Go to File
Press Ctrl+Shift+T to navigate to any file within your solution:
ReSharper 8. Navigating to a file

Go to Symbol
Go to Symbol (Shift+Alt+T) is the most far-reaching of the Go to set of commands. This powerful feature searches by name, solution-wide, for any file member:
ReSharper 8. Navigating to a symbol

Navigation Within a File

With Visual Studio 2013

The Navigate to feature of Visual Studio doesn’t support navigation between members of a file. Instead, you can use the list of members in the upper left corner of the main window. Unfortunately, there are no keyboard shortcuts for quick access to the list.
Visual Studio 2013. List of file members
Note that members from related files are also shown (they are greyed out).

With ReSharper 8

Using Go to File Member, you can quickly navigate within a source file.
Once you press Alt+\, ReSharper will show you all members (classes, constructors, properties, methods, and so on) available in the currently opened file. To find a particular file member, start typing its name.
You can also narrow your search by specifying the accessibility level of the member you’re looking for. For example, to show only private members, type ‘private’. If you want to search only for constructors, use the ‘new’ keyword.
ReSharper 8. Specifying the accessibility level of a file member
To make ReSharper show all members from related files (base types, partial classes, and so on), press Alt+N:
ReSharper 8. Showing all file members

Fast Navigation to Type Members

With Visual Studio 2013

The Navigate to feature doesn’t quite provide fast navigation to a certain type member (for example, a method of a class). In Visual Studio, this operation requires a couple of steps. For example, to get to the OnInit method of the AddProductWindow class, you must:

  1. Using Navigate to, go to a file containing the class:
    Visual Studio 2013. Navigating to file
  2. Using the list of members in the main window, navigate to the OnInit method:
    Visual Studio 2013. Navigating to type members

With ReSharper 8

To let you get to a certain type member in one step, ReSharper offers so-called “combos”: first specify the type you’re looking for, then specify a member, using whitespace* as a separator:
ReSharper 8. Navigating to type members using combos
Starting with ReSharper 8, you can also use combos to navigate to a particular line in your code:
ReSharper 8. Navigating to a line using combos

* You can also use the dot (.), plus (+), or backslash (\) characters.

Highlighting of Characters

With Visual Studio 2013

Visual Studio automatically highlights characters in search results (making them bold) as you type them.*
Visual Studio 2013. Highlighting characters during typing
Note that the character highlighting feature doesn’t work with CamelCase.

*New in Visual Studio 2013

With ReSharper 8

ReSharper highlights characters as well, and the feature has no such limitations – it also works with CamelCase.
ReSharper 8. Highlighting characters during typing

Support for CamelCase

With Visual Studio 2013

The Navigate to feature provides support for CamelCase. For example, to find the AddProductWindow elements, you can simply type ‘APW’. Note that this feature is case-sensitive. To perform the search using CamelCase, you must specify the ‘humps’ only in upper case. Another limitation of CamelCase support in Visual Studio is that you can’t use it to find files.
Visual Studio 2013. Using camel case

With ReSharper 8

To support CamelCase, ReSharper uses its advanced CamelHumps™ technology. Thanks to this, ReSharper has a number of advantages over Visual Studio:

  1. CamelHumps is a case-insensitive mechanism:
    ReSharper 8. Using an arbitrary case in CamelHumps
  2. CamelHumps lets you mix element names and ‘humps’:
    ReSharper 8. Mixing names and humps in CamelHumps
  3. CamelHumps supports underscores (_) in identifier names:
    ReSharper 8. Using underscores in CamelHumps

Support for Wildcards

With Visual Studio 2013

Although MSDN states that the Navigate to feature doesn’t support wildcards, in actual fact, at least one wildcard symbol is supported. The asterisk (*) can be used anywhere in your query as a substitute for zero or more symbols.
Visual Studio 2013. Using wildcards in Navigate to

With ReSharper 8

ReSharper also allows you to use the asterisk (*) wildcard for zero or more symbols:
ReSharper 8. Using wildcards in Go to Everything

Word Order in Search Queries

With Visual Studio 2013

Visual Studio allows you to change the order of words in queries quite freely. For example, from Visual Studio’s perspective, the queries add win and win add are treated virtually identically.
Visual Studio 2013. Mixing word order in Navigate to

With ReSharper 8

In ReSharper, you must specify words in a query in the same order as they appear in your code. This means that, for example, win add will not return AddProductWindow.

Search Scope

With Visual Studio 2013

In Visual Studio, the search scope is limited to your solution. This means that Navigate to doesn’t search in libraries referenced by your solution.
In addition, Navigate to is unable to search in the following types of files, even if they are included in your solution:

  • XAML
  • JavaScript
  • HTML
  • CSS

With ReSharper 8

ReSharper’s search scope is much wider and includes members of libraries referenced by your solution as well.
Note that, to avoid cluttering up search results, ReSharper shows library members only if there are no members matching your query in the solution itself. To force ReSharper to show library members, press Alt+N.
ReSharper 8. Navigating to library members
Unlike in Visual Studio, searching in XAML, JavaScript, HTML, and CSS files is supported:
ReSharper 8. Navigating in XAML, JavaScript, HTML, and CSS files

Navigating to a Folder

With Visual Studio 2013

When you enter a folder name into the Navigate to field, all files located in that folder are shown among the other search results. This can be inconvenient when you’re just looking for a particular folder.
Visual Studio 2013. Showing files in a folder

With ReSharper 8

In ReSharper, a folder is a separate solution member that can be navigated to. Once you select it in the search results, you will be taken to the folder in Solution Explorer.
ReSharper 8. Navigating to a folder

Saving Results

With Visual Studio 2013

Visual Studio doesn’t provide any way for you to save the results of your searches. Information about your query is lost once you close the Navigate to window.

With ReSharper 8

To save the search results after performing a query, press Num+ instead of Enter. This sends the results to a separate Find Results window, which you can keep open for quick navigation between different search results.
ReSharper 8. Viewing search results history
The results of all your search queries are retained until you close the solution.


Get ReSharper

Posted in ReSharper Tips&Tricks | 4 Comments

Live Webinar: High-Performance Computing with C++, May 29th

Join us Thursday, May 29th, 14:00 – 15:00 GMT (10:00 – 11:00 AM EDT) for our free live webinar, High-Performance Computing with C++ with Dmitri Nesteruk.

Are you a developer interested in getting the most out of your hardware? Are you looking to take the computational performance of your applications to the next level? Then this is the webinar for you! In this webinar, Dmitri Nesteruk, our ‘resident quant,’ will talk about some of the language features, libraries and expensive toys he uses to get calculations done as quickly as possible.

Languages such as JavaScript may receive a lot of hype nowadays, but for high-performance, close-to-the-metal computing, C++ is still king. This webinar will take you on a tour of the HPC universe, with a focus on parallelism, be it instruction-level (SIMD), data-level, task-based (multithreading, OpenMP), or cluster-based (MPI). We’ll also discuss how specific hardware can significantly accelerate computation by looking at two such technologies: NVIDIA CUDA and Intel Xeon Phi. (Some scarier tech such as FPGAs will also be mentioned).

Space is limited, so please register now.

About the Presenter:

Dmitri Nesteruk is a developer, speaker, podcaster and a technical evangelist for JetBrains. His interests lie in software development and integration practices in the areas of computation, quantitative finance and algorithmic trading. He is an instructor of an entry-level course in Quantitative Finance. His technological interests include C#, F# and C++ programming as well as high-performance computing using technologies such as CUDA. He has been a C# MVP since 2009.

Posted in How-To's, News and Events, ReSharper Tips&Tricks | 1 Comment

Live Webinar: .NET Code Coverage for Continuous Integration using TeamCity and dotCover, June 10th

Join us Tuesday, June 10th, 14:00 – 15:00 GMT (10:00 – 11:00 AM EDT) for our free live webinar, .NET Code Coverage for Continuous Integration using TeamCity and dotCover, with Maarten Balliauw.

How much of our code is being covered by our unit tests? Are there areas we are not testing? By capturing code coverage data during a test run, we can analyze which areas of our applications are well-tested and which ones require additional tests to be written. And where better to capture code coverage information than on our build server?

In this webinar we will use dotCover to collect code coverage information while running tests in our CI process. We will see how we can configure code coverage and how we can use the TeamCity Visual Studio plugin to download the coverage snapshot generated on the build server and inspect it using dotCover on a developer machine.

Space is limited, so please register now.

About the Presenter:

Maarten Balliauw is a Technical Evangelist at JetBrains. His interests are all web: ASP.NET MVC, PHP and Windows Azure. He’s a Microsoft Most Valuable Professional (MVP) for Windows Azure and an ASPInsider. He has published many articles in both PHP and .NET literature, such as MSDN Magazine and PHP Architect. Maarten is a frequent speaker at various national and international events such as MIX (Las Vegas), TechDays, DPC and others.

UPD: Due to an unforeseen scheduling conflict, we have had to postpone the webinar to June 10th. We apologize for any inconvenience this may cause, but we hope you will still be able to join us!

Posted in dotCover Tips&Tricks, How-To's | 6 Comments

ReSharper 8.2.1 Release Candidate Is Here

ReSharper 8.2.1 Release Candidate has arrived. If you don’t have any issues with ReSharper 8.2, feel free to skip this update. Otherwise, please download and try the RC build if the following improvements address problems you experienced with ReSharper 8.2:

  • Fix for multiple false positive errors in Windows Phone projects with referenced Portable Class Libraries.
  • Unit test runner fixes that address Jasmine and AngularJS tests not running, as well as fixed NUnit integration.
  • Fixed incorrect ambiguous references error.
  • Other bug-fixes and improvements.

If any critical issues are still bothering you, please report them so they can be fixed before the final release.

Compatible builds of dotTrace and dotCover are also available.

Posted in dotCover Tips&Tricks, dotTrace Tips&Tricks, News and Events, ReSharper Tips&Tricks | Leave a comment

dotMemory 4.0 out, dotTrace on sale

You’ve waited for this long enough! As of today, our profilers dotTrace Memory and dotTrace Performance assume new identities. Previously known as dotTrace Memory, dotMemory is the newest product to rejoin the JetBrains .NET tools family. Separating out the renewed memory profiler was a natural decision, since dotMemory has been reworked from scratch, down to its very analysis concept.

We are thrilled to announce the release of dotMemory 4.0, our smart and reliable .NET memory profiler.
dotMemory 4.0

At the same time, dotTrace Performance, now known as simply dotTrace, is on SALE. That’s right, our dotTrace .NET performance profiler can be yours for up to 58% OFF (commercial licenses) or up to 33% OFF (personal licenses). Profiling has never been this affordable. For more pricing details please check the dotTrace website.

So, back to dotMemory 4.0: it introduces a unique memory analysis concept. We believe that memory usage analysis, complicated and intellectually demanding as it is, can be a fascinating, everyday activity. With dotMemory you start from all objects in memory and narrow down to smaller object sets until the memory issue is solved. Just let dotMemory’s powerful functionality, multiple views on data and intuitive UI guide you through the journey.

Let’s have a closer look at key dotMemory features:

Powerful automatic inspections to instantly detect common types of memory leaks.

To ease your task, dotMemory automatically scans your snapshot for the most common types of memory issues. These inspections can be a great starting point in analyzing a snapshot if you’re not sure where to begin.
Automatic Inspections

Multiple views on data are great for detailed analysis of memory usage issues.

Examine objects in the heap from multiple views. Want to know how objects relate to each other? What objects do they reference and through what fields? Which calls created these objects? No problem! dotMemory has a view for nearly everything. For example, you can group instances by dominating object sets or by similar retention paths. Other innovative views include the icicle chart to visualize the call tree.
Multiple Views

Memory traffic analysis to detect what causes excessive garbage collection.

Excessive allocations and garbage collections may imply significant memory management overhead. Use the traffic view to understand what objects are created/collected most intensively in your app and what functions are causing this memory traffic.

Comparing memory snapshots to visualize improvements or regressions in memory usage.

Comparing two snapshots is the main way to find objects that are causing a memory leak. Use the comparison view to find out how many objects were created between snapshots and how many objects were collected.

Timeline view with real-time data collection

To monitor the memory consumed by your application, take snapshots and drop them onto each other to compare them in one click.
dotMemory Timeline View

Remote profiling — detect memory issues in running production environments.

Profile apps not only on your local computer but on any computer in your network or on the Internet. Remote profiling is especially helpful when you need to profile a web app on a production server.

A Profiling API lets you invoke the profiler from exact code positions in your applications.

Picking the right moment to get a snapshot is very important for memory analysis. Use the dotMemory API calls to take snapshots at exact places of your code.

Support for various .NET applications

Profile apps based on .NET Framework 2.0 to 4.5.1 including desktop, Silverlight 4, Silverlight 5, Windows Store, WCF, and IIS applications.

Visual Studio integration

dotMemory integrates with Visual Studio so you can start a memory profiling session right from the IDE, whenever you need.

To get to know dotMemory 4 better and explore getting-started hints, please visit the Docs&Demos webpage.

dotMemory is available under different license types: commercial, personal, open source, and more. A new commercial license is priced at $249, and upgrades are just $79. If you purchased a dotTrace Memory license after January 1, 2012, you are welcome to upgrade your memory profiler for free. If not, we encourage you to take advantage of our low upgrade prices. All licenses include a 1-year subscription to updates. Additional options and license details are available on the dotMemory website.

Download dotMemory 4.0 today and enjoy the new memory profiler. If you would like to first try it, a free 10-day evaluation license is available.

Profile with pleasure!
JetBrains dotMemory team

Posted in News and Events | 1 Comment

Webinar Recording: Deconstructing Clippy – Common Issues for an Uncommon Extension

The recording of our April 16th webinar with Matt Ellis, Deconstructing Clippy – Common Issues for an Uncommon Extension, is now available on JetBrains YouTube Channel.

The Clippy extension makes use of DoubleAgent, an open source library available on SourceForge. The source code used in this webinar is available on GitHub and the slides right here.

This webinar takes a deep dive into Clippy’s implementation. We don’t look at the standard extension points (it doesn’t use any!), instead we look at common platform issues faced by all plugins, such as versioning, components, lifetime management and threading. We also look at some of the more interesting issues thrown up by trying to add a cartoon character to a productivity tool.

Clippy for ReSharper is a good April Fools’ day joke made better by the fact that it’s a real, fully functioning extension – replacing the Alt+Enter menus as well as integrating with navigation and refactoring. It’s also a great demonstration of the varied and often surprising ways you can extend ReSharper.

You can learn more about ReSharper extensions in our video, Build Your Own Extension in ReSharper 8.

About the Presenter:

Matt Ellis is a Technical Evangelist at JetBrains. He has been a professional developer for nearly 20 years, shipping software for BBC Worldwide, BNP Paribas and Egg, the UK’s first Internet bank. He is an open source creator and contributor, owning projects such as the xUnit.net test runner for ReSharper. He believes in clean code, simplicity and the open/closed principle.
Posted in How-To's, ReSharper Tips&Tricks | Tagged , , , | 1 Comment

ReSharper and Roslyn: Q&A

As you probably know from last week’s Build conference news, Roslyn, the new .NET Compiler Platform with code analysis APIs for C# and VB.NET, has reached its Preview phase and been open sourced. In case the event slipped your attention, here’s a nice tour of Roslyn on the C# FAQ blog at MSDN.

We at JetBrains were immediately faced with multiple questions about the prospect of ReSharper using Roslyn for its code analysis, and about how the two tools might compete. The flow of questions wouldn’t end, to the point that we introduced a template to answer them:

ReSharper and Roslyn? I dare you!!

Seriously though, it was clear that we needed to elaborate on the ReSharper vs Roslyn issue. Hence this post.

We sat down with Sergey Shkredov (@serjic), ReSharper Project Lead and .NET Tools Department Lead at JetBrains, and Alex Shvedov (@controlflow), a Senior Developer on the ReSharper team who’s responsible for ReSharper’s Generate functionality, code annotations and support for XML-based languages. The following Q&A is a summary of the conversation that we had with them.

What’s JetBrains’ stance toward Roslyn? Do we consider the technology and its Open Source status important and valuable?

Roslyn is definitely important and a good step forward for Microsoft in that it should help Visual Studio users take advantage of more C# and VB.NET code editing and analysis features in Visual Studio out of the box.

It should also help Visual Studio extension developers write code-centric extensions against a consistent API while having the opportunity to know how it works inside, thanks to the Open Source status of the project. This is not to mention hackers who are willing to spend their time forking the compiler and tuning it to make, say, C# the ideal language they’ve always envisioned.

We also believe that Roslyn is no less important for Microsoft itself. Faced with the burden of maintaining a plethora of Visual Studio integrated tools, including code editing tools, IntelliTrace and code designers, the folks at Microsoft are interested in making these tools as flexible and easy to update as possible. Roslyn should enable updating .NET languages and experimenting with them faster than before. Apart from that, the old compiler couldn’t run compilation steps in parallel, whereas Roslyn is expected to enable that, bringing more scalability to the table.

What’s the point of making Roslyn Open Source?

As to the act of letting Roslyn go Open Source, we don’t believe that Microsoft is expecting anyone from outside the company to develop the compiler for them. Programming languages are too monolithic and complex to justify accepting external changes of any significance. Therefore, we expect Microsoft to keep the design of .NET languages entirely to itself without depending on the community.

The true value of Roslyn going Open Source lies in enabling extension developers to look into Roslyn code that is relevant to their purposes: how it’s written and whether it’s efficient. They might debug or profile it to see if it’s the culprit of unexpected behavior in their extensions or if it introduces performance issues. This is possibly the workflow whereby meaningful pull requests might start coming in to the Roslyn repository.

As to possible endeavors to fork and modify the compiler to address application- or domain-specific tasks, this scenario looks like shooting yourself in the foot. Even if the default compiler in Visual Studio can be replaced with a fork, instrumental support for the fork ends as soon as you go beyond Visual Studio. In theory we can imagine a custom INotifyPropertyChanged implementation based on a Roslyn fork that could even gain a certain popularity. However, we can barely imagine supporting it in ReSharper, as our intention is to focus on supporting the official version of Roslyn.

Will ReSharper take advantage of Roslyn?

The short answer to this tremendously popular question is, no, ReSharper will not use Roslyn. There are at least two major reasons behind this.

The first reason is the effort it would take, in terms of rewriting, testing and stabilizing. We’ve been developing and evolving ReSharper for 10 years, and we have a very successful platform for implementing our inspections and refactorings. In many ways, Roslyn is very similar to the model we already have for ReSharper: we build abstract syntax trees of the code and create a semantic model for type resolution which we use to implement the many inspections and refactorings. Replacing that much code would take an enormous amount of time, and risk destabilizing currently working code. We’d rather concentrate on the functionality we want to add or optimize, rather than spend the next release cycle reimplementing what we’ve already got working.

The second reason is architectural. Many things that ReSharper does cannot be supported with Roslyn, as they’re too dependent on concepts in our own code model. Examples of these features include Solution-Wide Error Analysis, code inspections requiring fast lookup of inheritors, and code inspections that require having the “big picture” such as finding unused public classes. In cases where Roslyn does provide suitable core APIs, they don’t provide the benefit of having years of optimization behind them: say, finding all derived types of a given type in Roslyn implies enumerating through all classes and checking whether each of them is derived. On the ReSharper side, this functionality belongs to the core and is highly optimized.

The code model underlying ReSharper features is conceptually different from Roslyn’s code model. This is highlighted by drastically different approaches to processing and updating syntax trees. In contrast to ReSharper, Roslyn syntax trees are immutable, meaning that a new tree is built for every change.
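To make the contrast concrete, here is a small sketch of that immutability using the public Roslyn preview API (the types shown live in the Microsoft.CodeAnalysis packages; this is an illustration, not ReSharper or Roslyn internals):

```csharp
using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class RoslynImmutabilityDemo
{
    static void Main()
    {
        var tree = CSharpSyntaxTree.ParseText("class C { void M() { } }");
        var root = tree.GetRoot();

        // "Renaming" the method never mutates the original tree;
        // ReplaceNode returns a completely new root instead.
        var method = root.DescendantNodes().OfType<MethodDeclarationSyntax>().First();
        var newRoot = root.ReplaceNode(
            method,
            method.WithIdentifier(SyntaxFactory.Identifier("Renamed")));

        Console.WriteLine(root.ToFullString());    // original text, unchanged
        Console.WriteLine(newRoot.ToFullString()); // text with the renamed method
    }
}
```

Every edit producing a fresh tree is what drives the memory-traffic concern discussed below: each keystroke-level change allocates new nodes rather than updating existing ones in place.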

Another core difference is that Roslyn covers exactly two languages, C# and VB.NET, whereas ReSharper architecture is multilingual, supporting cross-language references and non-trivial language mixtures such as Razor. Moreover, ReSharper provides an internal feature framework that streamlines consistent feature coverage for each new supported language. This is something that Roslyn doesn’t have by definition.

Will it be practical to use both ReSharper and Roslyn-based functionality in Visual Studio?

This is a tricky problem, as it’s still uncertain whether we’ll be able to disable Roslyn-based features (such as refactorings or error highlighting) when integrating into new releases of Visual Studio. If we’re unable to do that, performance would take a hit. Apart from ReSharper’s own inherent memory and performance impact, Roslyn’s immutable code model would increase memory traffic, which would in turn lead to more frequent garbage collection, negatively impacting performance.

We’re hopeful that this problem will be resolved in favor of letting us disable the Roslyn features that ReSharper overrides, because otherwise ReSharper would have to work in a highly resource-restricted environment. Regardless of whether this happens, we’ll keep doing what we can to minimize ReSharper’s own performance impact.

As Roslyn is now Open Source, which parts of its code are going to be of particular interest to ReSharper developers?

We’ll be sure to peek into Roslyn code and tests from time to time, to see how C# and VB.NET language features are implemented. We don’t rule out that actual code supporting them is going to emerge before formal specifications are finalized. In fact, we’ve already started.


That’s more or less the picture of living in the Roslyn world as we see it today. As time goes by, we’ll see if things turn out the way we expected them to.

Meanwhile, if you have questions that were not addressed in this post, please ask in the comments and we’ll try to come up with meaningful answers.

Posted in How-To's | Tagged , , | 59 Comments