Unusual Ways of Boosting Up App Performance. Lambdas and LINQs

This is the third post in the series.

Today, we’re going to uncover the common pitfalls of using lambda expressions and LINQ queries, and explain how you can avoid them in your day-to-day work.

Lambda Expressions

Lambda expressions are a very powerful .NET feature that can significantly simplify your code. Unfortunately, convenience has its price: careless use of lambdas can significantly degrade app performance. Let’s look at what exactly can go wrong.

The trick is in how lambdas work. To implement a lambda (which is a kind of local function), the compiler has to create a delegate. Naively, a new delegate would have to be created every time the code containing the lambda executes. This means that if the lambda sits on a hot path (is called frequently), it can generate huge memory traffic.

Is there anything we can do? Fortunately, the .NET developers have already thought about this and implemented a caching mechanism for such delegates. To see how it works, consider the example below:

Caching lambdas 1
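
The code itself appeared as a screenshot in the original post; here is a minimal sketch of what such an example could look like (the class and method names are illustrative, not the post’s exact code):

    using System;
    using System.Collections.Generic;

    public class LambdaTest
    {
        public List<int> GetEvens(List<int> numbers)
        {
            // The lambda uses only its own parameter and no outer state,
            // so the compiler can cache the delegate it creates.
            return numbers.FindAll(i => i % 2 == 0);
        }
    }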

Now look at this code decompiled in dotPeek:

Caching lambdas example. Decompiled code
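
Roughly, the decompiled output has the following shape (a sketch; the compiler-generated identifiers are renamed to valid C# here, while dotPeek actually shows names like CS$<>9__CachedAnonymousMethodDelegate1 and <GetEvens>b__0):

    public class LambdaTest
    {
        // The delegate is kept in a static field...
        private static Predicate<int> CachedAnonymousMethodDelegate1;

        public List<int> GetEvens(List<int> numbers)
        {
            // ...and is created only on the first call, then reused.
            if (CachedAnonymousMethodDelegate1 == null)
                CachedAnonymousMethodDelegate1 = new Predicate<int>(GetEvensBody);
            return numbers.FindAll(CachedAnonymousMethodDelegate1);
        }

        // The lambda body is compiled into a static method.
        private static bool GetEvensBody(int i)
        {
            return i % 2 == 0;
        }
    }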

As you can see, the delegate is stored in a static field and created only once – LambdaTest.CS$<>9__CachedAnonymousMethodDelegate1.

So, what pitfalls should we watch out for? At first glance, this behavior won’t generate any traffic. That’s true, but only as long as your lambda does not contain a closure. If you pass any context (this, an instance member, or a local variable) to a lambda, caching won’t work. This makes sense: the context may change at any time, and passing context is exactly what closures are for.

Let’s look at a more elaborate example. Suppose your app uses a Substring method to extract substrings from strings:

Lambdas example 1
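
A sketch of such a method (the fixed substring length MaxLength is an assumption made for illustration):

    private const int MaxLength = 10;

    public string Substring(string x)
    {
        return x.Substring(0, MaxLength);
    }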

Suppose this code is called frequently and the input strings are often the same. To optimize the algorithm, you can create a cache that stores the results:

Lambdas example 2
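
For instance, the cache could be a thin wrapper around a dictionary (again a sketch, not the post’s exact code):

    using System;
    using System.Collections.Generic;

    public class Cache<TKey, TValue>
    {
        private readonly Dictionary<TKey, TValue> _entries =
            new Dictionary<TKey, TValue>();
    }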

Next, you can optimize the algorithm so that it checks whether the substring is already in the cache:

Lambdas example 3
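
A possible GetOrCreate method for the Cache class sketched above; it invokes the supplied factory only on a cache miss:

    public TValue GetOrCreate(TKey key, Func<TValue> factory)
    {
        TValue value;
        if (!_entries.TryGetValue(key, out value))
        {
            // Cache miss: compute the value once and remember it.
            value = factory();
            _entries.Add(key, value);
        }
        return value;
    }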

The Substring method now looks as follows:

Lambdas example 4
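
Continuing the sketch, Substring now delegates the work to the cache, and the lambda closes over the parameter x:

    private readonly Cache<string, string> _cache = new Cache<string, string>();

    public string Substring(string x)
    {
        // The lambda captures x, so the compiler cannot cache its delegate.
        return _cache.GetOrCreate(x, () => x.Substring(0, MaxLength));
    }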

As you pass the parameter x to the lambda, the compiler cannot cache the created delegate. Let’s look at the decompiled code:

Lambdas example. Decompiled code with no caching
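
An approximation of what the decompiler shows (generated names simplified to valid C#; dotPeek displays the closure class as <>c__DisplayClass1):

    private sealed class DisplayClass1
    {
        // The captured parameter becomes a public field.
        public string x;

        // The lambda body becomes an instance method of the closure class.
        public string SubstringBody()
        {
            return x.Substring(0, MaxLength);
        }
    }

    public string Substring(string x)
    {
        // A closure object and a fresh delegate are allocated on every call.
        var closure = new DisplayClass1();
        closure.x = x;
        return _cache.GetOrCreate(closure.x, new Func<string>(closure.SubstringBody));
    }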

There it is. A new instance of <>c__DisplayClass1 is created each time the Substring method is called. The parameter x we pass to the lambda is implemented as a public field of <>c__DisplayClass1.

How to Find

As with any other example in this series, first of all make sure that a particular lambda actually causes performance issues, i.e. generates significant traffic. This can easily be checked in dotMemory.

  1. Open a memory snapshot and select the Memory Traffic view.
  2. Find delegates that generate significant traffic. Objects of the …+<>c__DisplayClassN type are also a hint.
  3. Identify the methods responsible for this traffic.

For instance, if the Substring method from the example above is run 10,000 times, the Memory Traffic view will look as follows:

Lambdas shown in dotMemory

As you can see, the app has allocated and collected 10,000 delegates.

When working with lambdas, the Heap Allocation Viewer also helps a lot as it can proactively detect delegate allocation. In our case, the plugin’s warning will look like this:

Warning about lambdas in the HAV plug-in

But once again, the data gathered by dotMemory is more reliable, because it shows whether this lambda is a real issue (i.e. whether or not it generates lots of traffic).

How to Fix

Considering how tricky lambda expressions can be, some companies even prohibit the use of lambdas in their development processes. We believe that lambdas are a very powerful instrument that definitely can and should be used, as long as appropriate caution is exercised.

The main strategy when using lambdas is to avoid closures. In that case, the created delegate is always cached, with no impact on traffic.

Thus, for our example, one solution is not to pass the parameter x to the lambda. The fix would look as follows:

Caching lambdas code fix
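
One possible shape of the fix (the post’s exact code was a screenshot): add a GetOrCreate overload whose factory receives the key as an argument, so the lambda no longer needs to capture x:

    // On the Cache<TKey, TValue> class sketched earlier.
    public TValue GetOrCreate(TKey key, Func<TKey, TValue> factory)
    {
        TValue value;
        if (!_entries.TryGetValue(key, out value))
        {
            value = factory(key);
            _entries.Add(key, value);
        }
        return value;
    }

    // Call site: the lambda uses only its own parameter and a const,
    // so it captures nothing and the compiler caches its delegate.
    public string Substring(string x)
    {
        return _cache.GetOrCreate(x, key => key.Substring(0, MaxLength));
    }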

The updated lambda doesn’t capture any variables; therefore, its delegate will be cached. This can be confirmed in dotMemory:

Lambdas caching after the fix shown in dotMemory

As you can see, now only one instance of Func is created.

If you need to pass some additional context to GetOrCreate, a similar approach (avoiding variable closure) should be used. For example:

Code example of passing additional context to lambdas
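
For instance, an overload with an explicit context parameter keeps the lambda closure-free (a sketch under the same assumptions as above):

    public TValue GetOrCreate<TContext>(TKey key, TContext context,
                                        Func<TKey, TContext, TValue> factory)
    {
        TValue value;
        if (!_entries.TryGetValue(key, out value))
        {
            value = factory(key, context);
            _entries.Add(key, value);
        }
        return value;
    }

    // Call site: the length travels as an explicit argument
    // instead of being captured by the lambda.
    public string Substring(string x, int length)
    {
        return _cache.GetOrCreate(x, length, (key, len) => key.Substring(0, len));
    }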

LINQ Queries

As we just saw in the previous section, a lambda expression always implies that a delegate is created. What about LINQ? The concepts of LINQ queries and lambda expressions are closely related and have very similar implementations ‘under the hood.’ This means that all the concerns we discussed for lambdas also apply to LINQ queries.

If your LINQ query contains a closure, the compiler won’t cache the corresponding delegate. For example:

LINQ caching example
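
A sketch of such a query (the method shape is inferred from the surrounding text; assumes the usual System.Linq and System.Collections.Generic imports):

    public List<string> GetLongNames(List<string> inputList, int threshold)
    {
        // threshold is captured by the where clause, so the underlying
        // delegate cannot be cached and is re-created on every call.
        var longNames =
            from s in inputList
            where s.Length > threshold
            select s;
        return longNames.ToList();
    }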

As the threshold parameter is captured by the query, a new delegate is created each time the method is called. As with lambdas, traffic from delegates can be checked in dotMemory:

LINQ caching shown in dotMemory

Unfortunately, there’s one more pitfall to avoid when using LINQ. Any LINQ query (like any other query) implies iterating over a data collection, which in turn implies creating an iterator. The subsequent chain of reasoning should already be familiar: if this LINQ query sits on a hot path, the constant allocation of iterators will generate significant traffic.

Consider this example:

LINQ iterator allocation example
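
A sketch matching the description below; note there is no closure this time, yet Where still allocates an iterator on every call (the hard-coded threshold of 3 and the Foo caller are assumptions; assumes the usual System.Linq import):

    public List<string> GetLongNames(List<string> inputList)
    {
        // Where over a List<string> allocates a WhereListIterator<string>
        // on each call, even though the lambda itself captures nothing.
        return inputList.Where(s => s.Length > 3).ToList();
    }

    public void Foo(List<string> inputList)
    {
        List<string> longNames = GetLongNames(inputList);
    }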

Each time GetLongNames is called, the LINQ query will create an iterator.

How to Find

With dotMemory, finding excessive iterator allocations is an easy task:

  1. Open a memory snapshot and select the Memory Traffic view.
  2. Find objects from the System.Linq namespace whose names contain the word “iterator”. In our example, we use the Where LINQ method, so we look for System.Linq.Enumerable+WhereListIterator<string> objects.
  3. Determine the methods responsible for this traffic.

For instance, if we call the Foo method from our example 10,000 times, the Memory Traffic view will look as follows:

LINQ iterator allocation shown in dotMemory

The Heap Allocation Viewer plugin also warns us about allocations in LINQ queries, but only when LINQ methods are called explicitly. For example:

LINQ iterator allocation warning by the HAV plug-in

How to Fix

Unfortunately, the only answer here is to avoid using LINQ queries on hot paths. In most cases, a LINQ query can be replaced with a foreach loop. In our example, the fix could look like this:

LINQ iterator allocation fix example
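
A sketch of the rewritten method. List<T>’s enumerator is a struct, so the loop itself allocates nothing on the heap:

    public List<string> GetLongNames(List<string> inputList)
    {
        var longNames = new List<string>();
        foreach (string s in inputList)
        {
            // Same filter as before, but no LINQ iterator is allocated.
            if (s.Length > 3)
                longNames.Add(s);
        }
        return longNames;
    }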

As no LINQ is used, no iterators are created.

LINQ iterator allocation fix shown in dotMemory

We hope this series of posts has been helpful.

Please follow @dotmemory on Twitter or the dotMemory Google+ page to stay tuned.


Responses to Unusual Ways of Boosting Up App Performance. Lambdas and LINQs


  1. Charlie Hayes says:

    Could you post a follow-up explaining why allocating a lot of LINQ iterators is bad, and how the proposed alternative doesn’t suffer from the same or a similar issue? My naive view is that the new method will instantiate a new temporary result list on every invocation, possibly including many backing-array reallocations and copy operations. Both the LINQ result and the ‘filtered list’ result would require iterators to iterate over; maybe the LINQ iterator isn’t as efficient? My understanding of LINQ is that the iterator won’t be created until the result is iterated over, which would happen when the ToList() method is called, and would produce a list similar to the proposed workaround.

    • Steve Ruble says:

      Charlie, I think there would be a better trade-off between iterator allocations and List allocations if the method parameter and return value were typed as IEnumerable<string> rather than List<string>. With the current signature you get the worst of both worlds, because you're allocating an iterator and then calling ToList() at the end, which causes a list to be allocated as well.


  2. Matt Warren says:

    @Charlie, it’s because you can do a regular foreach using an enumerator that is a struct, i.e. with no heap allocation. For a bit more information, see the talk here http://channel9.msdn.com/Events/TechEd/NorthAmerica/2013/DEV-B333#fbid= or slide 35 from http://video.ch9.ms/sessions/teched/na/2013/DEV-B333.pptx


  3. Patrick says:

    > In most cases, a LINQ query can be replaced with foreach.

    Indeed, and in most cases a foreach can be replaced with a

        var count = list.Count;
        for (var i = 0; i < count; i++) {
            var obj = list[i];

    which is faster than

        foreach (var obj in list) {

  4. Dave Black says:

    @Patrick,

    The performance of ‘for’ vs. ‘foreach’ depends on the type of collection being iterated over as well as the type contained within the collection. I’ve extended Vance Morrison’s (CLR Performance Architect) MeasureIt tool (google it) and come up with some numbers, which I’ll post here along with explanations. These “micro tests” in the tool account for JIT warm-up, inlining, no-op instructions, are JIT-optimized, etc., so the tool is far more advanced than the standard timing done using System.Diagnostics.Stopwatch. Here is the description/interpretation followed by the numbers:

    Below are the results of running a series of benchmarks. Use the MeasureIt /usersGuide for more details on exactly what the benchmarks do.

    The resultant numbers are the number of microseconds (µs) it takes to run the operation n times, divided by the scale, where n is the ‘count’ and the scale is listed in the test description. If the scale isn’t shown, it is implicitly 1. Scaling is useful if you want to normalize to a single iteration of an operation.

    To improve the stability of the measurements, a measurement may be cloned several times and this cloned code is then run in a loop. If the benchmark was cloned the ‘scale’ attribute represents the number of times it was cloned, and the count represents the number of times the cloned code was run in a loop before the measurement was made. The reported number divides by both of these values, so it represents a single instance of the operation being measured.

    The benchmark data can vary from run to run, so the benchmark is run several times and the statistics are displayed. If we assume a normal distribution, you can expect 68% of all measurements to fall within 1 StdDev of the Mean. You can expect over 95% of all measurements to fall within 2 StdDev of the Mean. Thus 2 StdDev is a good error bound. Keep in mind, however, that it is not uncommon for the statistics to be quite stable during a run and yet vary widely across different runs. See the users guide for more info.

    Generally, the mean is a better measurement if you use the number to compute an aggregate throughput for a large number of items. The median is a better guess if you want the best estimate of a typical sample. The median is also more stable if the sample is noisy (e.g. has outliers).

  5. Dave Black says:

    Note that the optimizations in the JIT compiler are smart enough to do loop unrolling, bounds-check hoisting, etc. It can detect that a bounds check is taking place and cache that value instead of computing it every iteration – http://blogs.msdn.com/b/clrcodegeneration/archive/2009/08/13/array-bounds-check-elimination-in-the-clr.aspx.

    The old days of trying to eke a couple of cycles out of a loop by hoisting its bounds check are over; doing it by hand can actually hurt performance.

    In other words, try to outsmart the JIT optimizer and you will lose…it’s come a long way and is smarter than most people think!
