Speeding up solution load and formatting inspections – ReSharper 2018.2
Previously in this series, we have talked about how we’ve sped up EditorConfig support and added performance improvements based on the YouTrack reports you’ve sent us. In this post, we’ll look at the rest of the optimizations, most of them related to reducing solution loading times.
In this series:
- EditorConfig support (ReSharper 2018.2)
- Performance reports (and fixes) from the ReSharper issue tracker (ReSharper 2018.2)
- Speeding up solution load and formatting inspections (ReSharper 2018.2)
So, let’s plunge into the additional performance optimizations that made it into 2018.2.
Fetching project configuration without Visual Studio API
To be ready to build the solution on demand, ReSharper Build needs to collect the Build Configuration for the solution and its projects while creating or loading caches during solution load.
In previous versions, we used the Visual Studio API to get this information. This led to two issues:
- Any communication with Visual Studio could be executed only on the UI thread.
- ReSharper requested Build Configuration data during solution load, a stage at which not all components were necessarily loaded and ready to provide the required information.
As a result, it was common for the UI thread to be blocked for a significant period of time while ReSharper waited for an API response.
In 2018.2, ReSharper no longer uses the Visual Studio API on solution load. Instead, it reads all the required information directly from the csproj and sln files on disk, which is much faster and avoids UI freezes. We still use the API to fetch Build Configuration info from time to time, but mostly once all Visual Studio components are ready to work.
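To illustrate the idea, here is a minimal sketch (in Python, with an invented csproj snippet) of pulling build configurations straight out of a project file on disk rather than through an IDE API. The file contents and function name are illustrative, not ReSharper's actual code:

```python
import re
import xml.etree.ElementTree as ET

# A hypothetical csproj fragment; conditional PropertyGroups carry the
# "Configuration|Platform" pair in their Condition attribute.
CSPROJ = """<Project>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|AnyCPU'">
    <Optimize>false</Optimize>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|AnyCPU'">
    <Optimize>true</Optimize>
  </PropertyGroup>
</Project>"""

def build_configurations(csproj_xml: str) -> set:
    """Collect configuration names without asking any IDE API."""
    root = ET.fromstring(csproj_xml)
    configs = set()
    for group in root.iter("PropertyGroup"):
        cond = group.get("Condition", "")
        m = re.search(r"==\s*'([^|']+)\|([^']+)'", cond)
        if m:
            configs.add(m.group(1))
    return configs

print(sorted(build_configurations(CSPROJ)))  # ['Debug', 'Release']
```

Because the parse happens on plain files, it can run on any background thread, which is the crux of the fix described above.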
Async updating of Project Model
Both Visual Studio and ReSharper maintain their own Project Model for every project in a solution. ReSharper subscribes to change events from the Visual Studio Project Model and synchronizes them to keep its own model up to date.
Before the latest release, on receiving an event, ReSharper occupied the UI thread, built the part of the Project Model affected by the event, compared it with the existing ReSharper Project Model, and, if anything needed updating, acquired a write lock and synchronized the changes. However, sometimes there was nothing to change in the ReSharper Project Model, so occupying the UI thread was redundant.
Now we use a different approach. When ReSharper receives an event, it acquires a read lock on a background thread, builds the part of the Project Model affected by the event, and compares the result with the current model. If there is nothing to change, it simply releases the read lock and does nothing. If the model does need synchronizing, ReSharper occupies the UI thread, acquires a write lock, and applies the changes. As a result, work on the UI thread has become rare.
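The two-phase pattern described above can be sketched as follows. This is an illustrative model, not ReSharper's actual code, and since Python's standard library has no reader/writer lock, a plain re-entrant lock stands in for both phases:

```python
import threading

# Sketch of "compare under a read lock, write only if something changed".
class ProjectModel:
    def __init__(self):
        self._lock = threading.RLock()  # stand-in for a reader/writer lock
        self._items = {}
        self.writes = 0                 # counts exclusive (write) phases

    def apply_event(self, key, value):
        # Phase 1: on a background thread, inspect under a shared lock.
        with self._lock:
            unchanged = self._items.get(key) == value
        if unchanged:
            return False  # nothing to sync; the UI thread is never touched
        # Phase 2: only now take the exclusive path (UI thread + write lock).
        with self._lock:
            self._items[key] = value
            self.writes += 1
        return True

pm = ProjectModel()
pm.apply_event("Program.cs", "v1")   # model differs -> write phase runs
pm.apply_event("Program.cs", "v1")   # identical -> read-only, no write
print(pm.writes)  # 1
```

The payoff is exactly what the post describes: redundant events are filtered out cheaply before any exclusive lock or UI-thread hop happens.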
Smarter execution of ReSharper Solution Load Tasks
ReSharper has hundreds of “solution load tasks” which must be executed before it can begin analyzing a solution. They are divided into groups according to their goals and priorities, and are sorted in a particular way.
In the past, all these groups were executed as a single chunk, occupying the UI thread until all the tasks were completed. This prevented non-ReSharper-related activities from being executed during this time, and sometimes led to UI freezes in the latest Visual Studio versions.
In the 2018.2 release cycle, we stopped loading these groups as a single chunk. Instead, we queue each group separately, so that other, non-ReSharper activities can run in between ours. As a result, there are fewer noticeable delays on solution load.
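A toy model of that scheduling change, with invented task names, might look like this: each group becomes its own queue item, leaving gaps where unrelated work can slot in instead of waiting behind one monolithic chunk:

```python
from collections import deque

# Hypothetical sketch: one queue item per load-task group, so unrelated
# work items can interleave between groups instead of waiting for all of
# them to finish as a single block.
def run_interleaved(resharper_groups, other_work):
    queue = deque()
    for group in resharper_groups:
        queue.append(group)                      # enqueue one group...
        if other_work:
            queue.append(other_work.popleft())   # ...letting other work in
    log = []
    while queue:
        task = queue.popleft()
        log.append(task())
    return log

groups = [lambda: "caches", lambda: "psi", lambda: "daemon"]
other = deque([lambda: "vs-paint", lambda: "vs-input"])
print(run_interleaved(groups, other))
# ['caches', 'vs-paint', 'psi', 'vs-input', 'daemon']
```

The total work is the same; what changes is latency for everything else, which is why the freezes became shorter rather than the load becoming faster.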
Formatting inspections improvements
Formatting inspections, introduced in ReSharper 2018.1 on top of our formatting engine, can selectively highlight formatting style violations (which in some cases point to real errors!). Unfortunately, they did not perform well: with these inspections enabled, code analysis for a file could take approximately 3 times as long.
As we investigated the issue, it turned out that the main cause was repeatedly fetching the descriptions of the formatting options that affect the code being inspected. That was easy to fix – we cached them. The inspections became twice as fast, but code analysis still took 1.5 times as long.
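The caching fix amounts to classic memoization: option descriptions do not change during a single inspection run, so computing each one once is safe. A sketch, with a hypothetical `option_description` lookup standing in for the real work:

```python
from functools import lru_cache

# Counter to make the effect of the cache observable.
CALLS = {"count": 0}

@lru_cache(maxsize=None)
def option_description(option_name: str) -> str:
    """Hypothetical expensive lookup; runs once per distinct option."""
    CALLS["count"] += 1
    return "description of " + option_name

# Inspecting many intervals asks for the same option over and over...
for _ in range(1000):
    option_description("SPACE_AROUND_BINARY_OPERATOR")

print(CALLS["count"])  # 1 -- the lookup ran only once, despite 1000 calls
```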
We investigated further and discovered that some of the formatter stages (there are 5 of them: basic spaces/line breaks, indenting, wrapping, blank lines, and aligning in columns) recalculated the applicable formatting type for each formatted interval (two adjacent code tokens with nothing but white spaces in between).
Recalculating is a time-consuming operation: we need to filter several hundred formatting rules that could apply to a particular interval. This makes sense for an ordinary formatter run, because formatter stages run sequentially and the applicable formatting type can change between them. But we don’t have to run the stages sequentially for inspections, as no changes can happen at that time. By sharing this data between formatter stages for each interval, we managed to shave another 12% off the initial running time.
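The sharing optimization can be sketched like this (all names are illustrative, and a counter stands in for the expensive rule filtering): compute each interval's formatting type once, then reuse it in every stage:

```python
# Stand-in for filtering hundreds of rules for one interval.
def classify_interval(interval, counter):
    counter["count"] += 1
    return "space" if interval % 2 == 0 else "line-break"

def inspect(intervals, stages, shared=True):
    """Run all formatter stages; optionally share per-interval results."""
    counter = {"count": 0}
    cache = {}
    for _stage in range(stages):
        for iv in intervals:
            if shared and iv in cache:
                kind = cache[iv]                           # reuse
            else:
                kind = cache[iv] = classify_interval(iv, counter)
    return counter["count"]

intervals = list(range(100))
print(inspect(intervals, stages=5, shared=False))  # 500: recomputed per stage
print(inspect(intervals, stages=5, shared=True))   # 100: computed once
```

Sharing is only valid here because inspections never mutate the tree between stages, which is exactly the observation the post makes.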
Still, calculating the applicable formatting/indenting/blank lines/aligning types is costly even if we run it just once. As we mentioned earlier, there are several hundred formatting rules with different conditions for being applied. Some only check the types of the left, right, parent, or grandparent AST nodes of the formatted interval; some have complex checks provided to the formatter engine as lambdas; and many combine multiple conditions with logical operators such as OR and AND.
We used to pre-filter rules with a one-to-many dictionary keyed on the parent node types they require, but dozens of rules still remained to iterate over for each interval. We switched to pre-filtering on both the left and right node types, and that gave us another 2%.
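A minimal sketch of such a pre-filter, with invented node kinds and rule names: indexing rules by the (left, right) pair they require leaves only a handful of candidates to test per interval:

```python
from collections import defaultdict

# Hypothetical rules: (required left kind, required right kind, rule name).
RULES = [
    ("keyword", "paren", "space-before-paren"),
    ("ident",   "paren", "no-space-in-call"),
    ("keyword", "brace", "brace-on-new-line"),
]

# One-to-many dictionary keyed on BOTH sides of the interval.
index = defaultdict(list)
for left, right, name in RULES:
    index[(left, right)].append(name)

def candidate_rules(left_kind, right_kind):
    """Only these candidates need their full conditions evaluated."""
    return index[(left_kind, right_kind)]

print(candidate_rules("keyword", "paren"))  # ['space-before-paren']
```

A coarser key (parent type only) would return more false candidates; the finer (left, right) key shrinks the set that survives to the expensive condition checks.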
An additional 6% came from sorting rule conditions combined with the AND logical operator so that cheap checks execute first, allowing us to skip running complex lambdas whenever a cheap check fails.
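The condition-ordering trick relies on short-circuit evaluation. A sketch with made-up costs and checks, none of which are ReSharper's real conditions:

```python
# Cheap structural check: cost ~1.
def check_node_type(node):
    return node["type"] == "call"

# Expensive check, e.g. a user-provided lambda: cost ~100.
def check_complex(node):
    return "args" in node and len(node["args"]) > 2

# (estimated cost, condition) pairs, combined with AND.
CONDITIONS = [(100, check_complex), (1, check_node_type)]

def rule_applies(node):
    # Evaluate in ascending-cost order; all() short-circuits on the first
    # failure, so the expensive condition often never runs at all.
    return all(cond(node) for _cost, cond in sorted(CONDITIONS))

print(rule_applies({"type": "binary"}))  # False: cheap check fails first,
                                         # expensive check is skipped
```

Since most intervals fail some cheap structural check, fronting those checks converts the common case from "run everything" to "fail fast".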
All in all, formatting inspections are now 70% quicker than before and take approximately the same time as error checking does. The next step, which we plan to implement in the 2018.3 release cycle, is to run formatter stages selectively: for example, if only indenting inspections are enabled, then only the indenting stage should run. This will also allow us to enable indenting inspections by default for some clearly erroneous cases.
Looks like we’ve reached the deepest point of our performance dive. In this series, we’ve examined almost all performance optimizations and bug-fixes implemented in the ReSharper 2018.2 release cycle.
In the upcoming 2018.3 release cycle, we are going to continue optimizing performance so that the time you spend using ReSharper in Visual Studio is more enjoyable!