
Profiling Go Tests

In today’s blog post, we’ll take a look at the new profiler features of GoLand. The article is split into two sections:
· General profiler usage, in which we’ll cover the features common to all of the profiling methods.
· Using the different profiling methods, in which we’ll cover how to use each individual profiling method and share code samples so you can try this on your own computer.

General profiler usage

The profiler supports capturing and displaying information for CPU, Memory, Mutex Contention, and Blocking profiling, which are covered in the sections below. However, they all share a few common operations and UI elements, so it’s best to cover those first.

The profiler works with the built-in Go tooling, namely the pprof profiling tool. It supports displaying information in several ways:

Using Flamegraphs

Go program flamegraph view

Using call trees

Go program call tree view

Using a list of elements sorted by their properties

Go program function and method list

Recent profiles list

It is also possible to see a list of previous runs and display them in the profiler view.

Go program profiling recent list

Importing profiling results from a different machine

If you run your pprof commands on a different machine, you can also import those results into the IDE. One way to obtain these profiles is to run the tests or benchmarks of your application with the profiling flags enabled.

In the following example, we are running the BenchmarkInBoundsChannels benchmark function from all the packages in our project and saving the output in a file named cpu.out.
go test -bench=^BenchmarkInBoundsChannels$ -cpuprofile=cpu.out ./...

After that, we can use the Import Profiler Results | From File… action to load cpu.out into the IDE and inspect it in the profiler view.
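
The same workflow applies to the other profiling methods: go test accepts analogous flags, and the resulting files can be imported in the same way. As a sketch (the output file names here are arbitrary):

go test -bench=^BenchmarkInBoundsChannels$ -memprofile=mem.out ./...
go test -bench=^BenchmarkInBoundsChannels$ -mutexprofile=mutex.out ./...
go test -bench=^BenchmarkInBoundsChannels$ -blockprofile=block.out ./...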


Custom profiler settings

The different profiling methods support configuration options as well. You can find these under Settings/Preferences | Build, Execution, Deployment | Go Profiler: select the profiling method you would like to configure, then adjust its options accordingly.

Go profiler configuration

You can also have multiple profiler configurations for each profiling method, which enables you to better profile each application/test/benchmark according to your needs.
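
If you profile from the command line instead, roughly the same knobs described in the sections below are exposed as go test flags. A sketch, with values chosen here purely for illustration:

go test -bench=^BenchmarkInBounds$ -memprofile=mem.out -memprofilerate=1 ./...
go test -bench=^BenchmarkInBoundsChannels$ -mutexprofile=mutex.out -mutexprofilefraction=5 ./...
go test -bench=^BenchmarkInBoundsChannels$ -blockprofile=block.out -blockprofilerate=1000 ./...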


Using the different profiling methods

Let’s use the code below and run the various profiling methods available to us against it.

package main

import (
    "sync"
    "testing"
)

var pi []int

func printlner(i ...int) {
    pi = i
}

type mySliceType struct {
    valuesGuard *sync.Mutex
    values      []int
}

func (s mySliceType) Get(idx int) int {
    s.valuesGuard.Lock()
    defer s.valuesGuard.Unlock()

    checkBuffer(s.values, idx)

    return s.values[idx]
}

func (s mySliceType) GetCh(ch chan int, idx int) {
    s.valuesGuard.Lock()
    defer s.valuesGuard.Unlock()

    checkBuffer(s.values, idx)

    ch <- s.values[idx]
}

func newMySliceType(values []int) mySliceType {
    return mySliceType{
        valuesGuard: &sync.Mutex{},
        values:      values,
    }
}

func fillBuffer(slice []int) map[int]int {
    result := map[int]int{}
    for i := 0; i < 100; i++ {
        for j := 0; j < len(slice); j++ {
            result[i*len(slice)+j] = slice[j]
        }
    }

    return result
}

func checkBuffer(slice []int, idx int) {
    buffer := make(map[int]int, len(slice) * 100)
    buffer = fillBuffer(slice)
    for i := range buffer {
        if i == idx {
            return
        }
    }
}

func slicerInBounds(slice mySliceType) {
    for i := 0; i < 8; i++ {
        a0 := slice.Get(i*8 + 0)
        a1 := slice.Get(i*8 + 1)
        a2 := slice.Get(i*8 + 2)
        a3 := slice.Get(i*8 + 3)
        a4 := slice.Get(i*8 + 4)
        a5 := slice.Get(i*8 + 5)
        a6 := slice.Get(i*8 + 6)
        a7 := slice.Get(i*8 + 7)

        printlner(a0, a1, a2, a3, a4, a5, a6, a7)
    }
}

func slicerInBoundsChannels(slice mySliceType) {
    ch := make(chan int, 8)
    for i := 0; i < 8; i++ {
        go slice.GetCh(ch, i*8+0)
        go slice.GetCh(ch, i*8+1)
        go slice.GetCh(ch, i*8+2)
        go slice.GetCh(ch, i*8+3)
        go slice.GetCh(ch, i*8+4)
        go slice.GetCh(ch, i*8+5)
        go slice.GetCh(ch, i*8+6)
        go slice.GetCh(ch, i*8+7)

        a0 := <-ch
        a1 := <-ch
        a2 := <-ch
        a3 := <-ch
        a4 := <-ch
        a5 := <-ch
        a6 := <-ch
        a7 := <-ch

        printlner(a0, a1, a2, a3, a4, a5, a6, a7)
    }
}

func BenchmarkInBounds(b *testing.B) {
    var mySlice []int
    for i := 0; i < 99; i++ {
        mySlice = append(mySlice, i)
    }
    ms := newMySliceType(mySlice)
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        slicerInBounds(ms)
    }
}

func BenchmarkInBoundsChannels(b *testing.B) {
    var mySlice []int
    for i := 0; i < 99; i++ {
        mySlice = append(mySlice, i)
    }
    ms := newMySliceType(mySlice)
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        slicerInBoundsChannels(ms)
    }
}
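
Since these benchmarks import the testing package, the snippet is assumed to be saved in a _test.go file (for example main_test.go, a name chosen only for illustration) inside your module. You can then run both benchmarks from the command line with:

go test -bench=. ./...

In the IDE, the profiling actions described below can be used to run them with a profiler attached.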

 

CPU Profiling

As the name suggests, this profiling method lets you see where your application spends its CPU time. To run it, use the Run CPU Profile action on your application. The views will then display the time the CPU spends executing each function, as well as the cumulative time spent in the functions it calls.
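
Under the hood, this is the same kind of profile you would get from the standard runtime/pprof package. A minimal, self-contained sketch of capturing a CPU profile by hand (the file name and the workload are placeholders, not part of the IDE feature):

package main

import (
    "log"
    "os"
    "runtime/pprof"
)

// doWork is a placeholder workload so the profiler has something to sample.
func doWork() {
    sum := 0
    for i := 0; i < 100000000; i++ {
        sum += i
    }
    _ = sum
}

func main() {
    // The output file name is arbitrary; the IDE produces an equivalent profile for you.
    f, err := os.Create("cpu.out")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // Start sampling CPU usage and make sure sampling stops before the program exits.
    if err := pprof.StartCPUProfile(f); err != nil {
        log.Fatal(err)
    }
    defer pprof.StopCPUProfile()

    doWork()
}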


This profiler does not currently support any special configuration options.

Memory Profiling


This profiler allows you to identify where your application is allocating memory and how much. You can see the in-use memory space, in-use objects, object allocations, and the allocated space.

You can configure the sampling rate for this profiler: by default, one allocation sample is recorded for every 512 KB of allocated memory, regardless of how many individual allocations make up that amount. Setting the rate to 1 will record every allocation individually.
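
This setting appears to correspond to Go’s runtime.MemProfileRate, which you can also set yourself when collecting a heap profile outside the IDE. A minimal sketch, with an artificial workload and an arbitrary output file name:

package main

import (
    "log"
    "os"
    "runtime"
    "runtime/pprof"
)

func main() {
    // Record every allocation instead of one sample per 512 KB (the default).
    // This must be set before the allocations you want to account for happen.
    runtime.MemProfileRate = 1

    // Artificial workload: allocate roughly 4 MB in 4 KB chunks.
    data := make([][]byte, 0, 1024)
    for i := 0; i < 1024; i++ {
        data = append(data, make([]byte, 4096))
    }
    _ = data

    f, err := os.Create("mem.out")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // Run a garbage collection first so the profile reflects up-to-date statistics.
    runtime.GC()
    if err := pprof.WriteHeapProfile(f); err != nil {
        log.Fatal(err)
    }
}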

Mutex Contention Profiling

This profiling method is useful when you want to detect whether contention occurs in your application and for how long. It helps you find bottlenecks caused by code waiting for mutexes to be acquired or released, which might not be immediately obvious from the code itself or from a CPU profile.


You can configure this profiler to record only a fraction of the mutex contention events that occur. By default, every contention event is recorded.
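
Outside the IDE, the knob this setting appears to mirror is runtime.SetMutexProfileFraction. A minimal sketch that manufactures some contention and writes the profile to a file (the file name and goroutine counts are illustrative):

package main

import (
    "log"
    "os"
    "runtime"
    "runtime/pprof"
    "sync"
)

func main() {
    // Rate 1 reports every contention event, rate N reports on average 1 in N, 0 turns it off.
    runtime.SetMutexProfileFraction(1)
    defer runtime.SetMutexProfileFraction(0)

    // Artificial contention: eight goroutines fight over the same mutex.
    var mu sync.Mutex
    var wg sync.WaitGroup
    for i := 0; i < 8; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 10000; j++ {
                mu.Lock()
                mu.Unlock()
            }
        }()
    }
    wg.Wait()

    f, err := os.Create("mutex.out")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    if err := pprof.Lookup("mutex").WriteTo(f, 0); err != nil {
        log.Fatal(err)
    }
}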

Blocking Profiling

Last but not least, the blocking profiling method lets you see where goroutines that could have run were instead blocked, waiting on other goroutines. It is similar to the mutex profiling method in that it displays the contention and the delay it caused, but it works at the goroutine level rather than at the mutex level.


You can configure this profiler to report only blocking events longer than a certain duration, specified in nanoseconds. The default reports every blocking event that lasted longer than one nanosecond.
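
The runtime knob this setting appears to mirror is runtime.SetBlockProfileRate. A minimal sketch that records a single artificial blocking event (the file name and sleep duration are illustrative):

package main

import (
    "log"
    "os"
    "runtime"
    "runtime/pprof"
    "time"
)

func main() {
    // Rate 1 samples every blocking event; larger values sample roughly one event
    // per that many nanoseconds spent blocked.
    runtime.SetBlockProfileRate(1)
    defer runtime.SetBlockProfileRate(0)

    // Artificial blocking: the receive below waits until the goroutine sends.
    ch := make(chan int)
    go func() {
        time.Sleep(100 * time.Millisecond)
        ch <- 1
    }()
    <-ch

    f, err := os.Create("block.out")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    if err := pprof.Lookup("block").WriteTo(f, 0); err != nil {
        log.Fatal(err)
    }
}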

This concludes our blog post. I hope you learned a bit about how the new profiler feature in GoLand can help you detect and fix bottlenecks in your applications using the CPU, Mutex, or Blocking profiling methods, and how to detect memory leaks and optimize memory consumption using the Memory profiling method.

As usual, please let us know your feedback in the comments section below, on Twitter, or on our issue tracker. Suggestions for any topics that you would like to see covered in future posts are also very welcome.
