
Make It Workflow — Part 13: Supporting Test Management Scenarios


Welcome back to our Make It Workflow series! We have received several requests from the community to discuss YouTrack features that can be used by QA teams. Today we would like to take a look at how to maintain test management scenarios (TMS) in YouTrack. This article demonstrates how to set up YouTrack so that you can handle test management inside YouTrack itself, without resorting to third-party solutions. This approach helps you save on license costs and makes the test management process flow more smoothly.

This article is for test managers and anyone interested in YouTrack workflows. You will find a practical example of how to build a testing process in YouTrack, with recommendations for project and issue settings, a description of how to implement test management scenarios, and more.

We have also curated a list of ready-to-use workflow code blocks that can be used to automate test management processes. These code blocks make it easier to associate test cases with a specific test run, clone test runs, display system suggestions for the next test, and more.

Settings for test management projects in YouTrack

You may want to use test-management-specific issue types for your test management projects, like Test Case, Test Suite (a set of test cases), Test Run (a set of test cases or test suites assigned to a specific test cycle), and Test Case Execution (a test case assigned to a specific test run). You should configure the issue types required for your TMS project. There are three types of issue links for establishing connections and relevance between your test management tasks:

  • The standard ‘parent-subtask’ link type maintains the relations between test runs and test case executions, as well as between test suites and test cases.
  • The custom ‘Execution’ link type maintains the relations between test cases and test case executions.
  • The custom ‘Related Bug’ link type maintains the relations between failed tests and their assigned bugs.

Testing requirements can be maintained as articles and linked to the relevant issues via a text field.
Additionally, for your test management tasks, you may want to set up custom fields such as Test mode, Category, Test flow, and Application, along with predefined values (e.g. predefined test statuses). You can also use conditional fields when you need to display information for one specific issue type.

A set of predefined fields for the ‘Test Case’ and ‘Test Run’ issue types can be adjusted to suit your business needs.
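
If you automate these settings with workflows, all of the issue types, link types, and fields above are declared in a rule’s ‘requirements’ section; you will see this pattern in every code block below. Here is a minimal, standalone sketch of such a declaration. The ‘Related Bug’ inward and outward names are our assumptions, so substitute the names you actually configure:

var entities = require('@jetbrains/youtrack-scripting-api/entities');

// A declaration-only sketch: this rule never fires, but attaching it to a
// project makes YouTrack verify the issue types, link types, and fields.
exports.rule = entities.Issue.onChange({
  title: 'TMS-project-requirements',
  guard: function(ctx) {
    return false;
  },
  action: function(ctx) {},
  requirements: {
    Type: {
      type: entities.EnumField.fieldType,
      TestCase: {
        name: 'Test Case'
      },
      TestSuite: {
        name: 'Test Suite'
      },
      TestRun: {
        name: 'Test Run'
      },
      TestExecution: {
        name: 'Test Case Execution'
      }
    },
    Execution: {
      type: entities.IssueLinkPrototype,
      name: 'Execution',
      inward: 'Execution',
      outward: 'Assigned test case or test suite'
    },
    RelatedBug: {
      type: entities.IssueLinkPrototype,
      name: 'Related Bug',
      inward: 'related bug', // assumption
      outward: 'bug for'     // assumption
    }
  }
});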

Setting up test runs and test execution

When, for example, we want to test a specific product version, we need to create a test run and assign the relevant test suites and test cases to it. Here are the steps to follow:

  • Create an issue with the ‘Test Run’ issue type.
  • Use the ‘Assigned test case or test suite’ custom link type to add links to the issue.
  • Specify the test cases you would like to include in the test run in the pop-up window.

The ‘Populate Test Run’ workflow then saves your QA team time and effort by performing the following steps:

  • Copies of all selected test suites and test cases will be created in the system. All issues will have the ‘Test Case Execution’ issue type.
  • The ‘Execution’ link type will connect test case executions to selected test cases.
  • Newly created issues will be linked to the test run as its subtasks, with the test run acting as the parent issue.

Here’s how it works:

var entities = require('@jetbrains/youtrack-scripting-api/entities');
var workflow = require('@jetbrains/youtrack-scripting-api/workflow');

exports.rule = entities.Issue.onChange({
  title: 'Populate-test-run',
  guard: function(ctx) {
    var issue = ctx.issue;
    return !issue.isChanged('project') && issue.Type && (issue.Type.name == ctx.Type.TestRun.name) && issue.links[ctx.Execution.outward].added.isNotEmpty() && issue.isReported;
  },
  action: function(ctx) {
    var issue = ctx.issue;
    var totalTestRuns = issue.links[ctx.Execution.outward].added.size;
    issue.links[ctx.Execution.outward].added.forEach(function(TestCase) {
      // Remove the direct link between the test run and the test case;
      // it will be replaced by a link to the new execution copy.
      TestCase.links[ctx.Execution.inward].delete(issue);

      // Create the test case execution as a copy of the test case.
      var TestCaseRun = TestCase.copy();
      TestCaseRun.Type = ctx.Type.TestExecution.name;
      TestCaseRun.Status = ctx.Status.NoRun.name;
      // Drop the links that were copied over from the original test case.
      Object.keys(TestCaseRun.links).forEach(function(linkType) {
        if (!TestCaseRun.links[linkType]) return;
        TestCaseRun.links[linkType].clear();
      });
      TestCaseRun.summary = "[TEST_CASE_EXECUTION] [" + TestCaseRun.summary + "]";

      // Attach the execution to the test run as a subtask, and link it
      // back to the original test case via the 'Execution' link type.
      TestCaseRun.links[ctx.Subtask.inward].add(issue);
      issue.links[ctx.Subtask.outward].add(TestCaseRun);
      TestCaseRun.links[ctx.Execution.outward].add(TestCase);
    });
    issue.fields['Total number of test cases'] = totalTestRuns;
  },
  requirements: {
    Execution: {
      type: entities.IssueLinkPrototype,
      name: 'Execution',
      inward: 'Execution',
      outward: 'Assigned test case or test suite'
    },
    Subtask: {
      type: entities.IssueLinkPrototype,
      name: 'Subtask',
      inward: 'subtask of',
      outward: 'parent for'
    },
    Type: {
      type: entities.EnumField.fieldType,
      TestExecution: {
        name: "Test Case Execution"
      },
      TestRun: {
        name: "Test Run"
      },
      TestCase: {
        name: "Test Case"
      },
      TestSuite: {
        name: "Test Suite"
      }
    },
    Total: {
      type: entities.Field.integerType,
      name: 'Total number of test cases'
    },
    TotalFailed: {
      type: entities.Field.integerType,
      name: 'Number of failed test cases'
    },
    TotalPassed: {
      type: entities.Field.integerType,
      name: 'Number of passed test cases'
    },
    Status: {
      type: entities.EnumField.fieldType,
      InProgress: {
        name: 'In Progress'
      },
      Passed: {
        name: 'Passed'
      },
      Failed: {
        name: 'Failed'
      },
      NoRun: {
        name: 'No Run'
      },
    },
  }
});

Maintaining testing activities

Once test runs are fully set up, a test engineer can start testing. By default, all issues with the ‘Test Case Execution’ and ‘Test Run’ types have the ‘No Run’ status.

How to switch between tests

When you are working with a set of tests, you have two ways of switching from one test to another:

  1. Manually change the test status on the issue list page.
  2. Open a test and switch to the next non-completed test. When you want to switch tests in this way, a helpful pop-up appears that suggests the next test to run.
    The following actions can be automated:

    • Check if there are tests that have the ‘No Run’ status and belong to the same test run.
    • Display a message with the URL of the next available test (if there is one).

    This can be implemented using the code below.

action: function(ctx) {
  var issue = ctx.issue;
  if (!issue.links['subtask of'].isEmpty()) {
    var parent = issue.links['subtask of'].first();
    var TestRunList = parent.links[ctx.Subtask.outward];
    var resultSet = null;
    var isPassing = true; // used by the full 'status-management' workflow below
    TestRunList.forEach(function(v) {
      if (v.Status.name == ctx.Status.Failed.name) {
        isPassing = false;
      } else if ((v.Status.name == ctx.Status.InProgress.name) && (v.id !== issue.id)) {
        // Note: ctx.Status.InProgress is mapped to the 'No Run' status name
        // in the requirements, so this finds the next not-yet-run test.
        resultSet = v;
      }
    });
    if (resultSet) {
      var otherIssueLink = '<a href="' + resultSet.url + '"> ' + resultSet.id + '</a>';
      var message = 'Switch to next open test in current Test Run' + otherIssueLink + '.';
      workflow.message(message);
    }
  }
}
    

Test statuses

When switching from one test to another, a test engineer can either indicate that the test was ‘Passed’ or change the test status to ‘Failed’ (in the event that a bug was identified during the execution of the test case). Based on the YouTrack settings we initially configured for this project, there are several predefined test run statuses:

  • Failing: at least one of the associated tests has the ‘No Run’ status, and at least one of the associated tests has the ‘Failed’ status.
  • Passing: at least one of the associated tests has the ‘No Run’ status, and there are no associated tests with the ‘Failed’ status.
  • Failed: no associated tests have the ‘No Run’ status, and at least one of the associated tests has the ‘Failed’ status.
  • Passed: no associated tests have the ‘No Run’ or ‘Failed’ statuses.

The statuses listed above help your QA team monitor testing progress through the complete test cycle. It is important to note that the test run status depends only on the associated tests and should not be changed manually. The logic for determining test run statuses is implemented as a state machine per issue type, and the test-switching code block shown above is incorporated into this workflow as well.
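
To make these four rules concrete, here is a small standalone sketch (a hypothetical helper, not part of the shipped workflow) that derives the test run status from the status names of its test case executions:

// Hypothetical helper illustrating the status rules above. 'statuses' is an
// array of status names collected from the executions of one test run.
function deriveTestRunStatus(statuses) {
  var hasNoRun = statuses.indexOf('No Run') !== -1;
  var hasFailed = statuses.indexOf('Failed') !== -1;
  if (hasNoRun) {
    return hasFailed ? 'Failing' : 'Passing';
  }
  return hasFailed ? 'Failed' : 'Passed';
}

// For example, one pending test plus one failed test means the run is 'Failing':
// deriveTestRunStatus(['No Run', 'Failed', 'Passed']) === 'Failing'

The full ‘status-management’ state machine is shown below.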

var entities = require('@jetbrains/youtrack-scripting-api/entities');
var workflow = require('@jetbrains/youtrack-scripting-api/workflow');

exports.rule = entities.Issue.stateMachine({
  title: 'status-management',
  stateFieldName: 'Status',
  typeFieldName: 'Type',
  defaultMachine: {
    'No Run': {
      initial: true,
      transitions: {
        'Failed': {
          targetState: 'Failed',
          action: function(ctx) {
            var issue = ctx.issue;
            if (!issue.links['subtask of'].isEmpty()) {
              var parent = issue.links['subtask of'].first();
              var TestRunList = parent.links[ctx.Subtask.outward];
              var resultSet = null;
              var isPassing = true;
              TestRunList.forEach(function(v) {
                if (v.Status.name == ctx.Status.Failed.name) {
                  isPassing = false;
                } else if ((v.Status.name == ctx.Status.InProgress.name) && (v.id !== issue.id)) {
                  resultSet = v;
                }
              });
              if (resultSet) {
                var otherIssueLink = '<a href="' + resultSet.url + '"> ' + resultSet.id + '</a>';
                var message = 'Switch to next open test in current Test Run' + otherIssueLink + '.';
                workflow.message(message);
                // Updating Test Run Status 
                parent.fields["Status"] = ctx.Status.Failing;
              } else {
                parent.fields["Status"] = ctx.Status.Failed;
              }
            }
          }
        },
        'Passed': {
          guard: function(ctx) {
            var issue = ctx.issue;
            return !issue.isChanged('project') && !issue.becomesReported && issue.isReported && (issue.Type.name == ctx.Type.TestExecution.name);
          },
          targetState: 'Passed',
          action: function(ctx) {
            var issue = ctx.issue;
            if (!issue.links['subtask of'].isEmpty()) {
              var parent = issue.links['subtask of'].first();
              var TestRunList = parent.links[ctx.Subtask.outward];
              var resultSet = null;
              var isPassing = true;
              TestRunList.forEach(function(v) {
                if (v.Status.name == ctx.Status.Failed.name) {
                  isPassing = false;
                } else if ((v.Status.name == ctx.Status.InProgress.name) && (v.id !== issue.id)) {
                  resultSet = v;
                }
              });
              if (resultSet) {
                var otherIssueLink = '<a href="' + resultSet.url + '"> ' + resultSet.id + '</a>';
                var message = 'Switch to next open test in current Test Run' + otherIssueLink + '.';
                workflow.message(message);
                parent.fields["Status"] = (isPassing) ? ctx.Status.Passing : ctx.Status.Failing;
              } else {
                parent.fields["Status"] = (isPassing) ? ctx.Status.Passed : ctx.Status.Failed;
              }
            }
          }
        }
      }
    },
    Passed: {
      transitions: {
        'Failed': {
          guard: function(ctx) {
            var issue = ctx.issue;
            return !issue.isChanged('project') && !issue.becomesReported && issue.isReported && (issue.Type.name == ctx.Type.TestExecution.name);
          },
          targetState: 'Failed',
          action: function(ctx) {
            var issue = ctx.issue;
            if (!issue.links['subtask of'].isEmpty()) {
             var parent = issue.links['subtask of'].first();
              var TestRunList = parent.links[ctx.Subtask.outward];
              var resultSet = null;
              TestRunList.forEach(function(v) {
                if ((v.Status.name == ctx.Status.InProgress.name) && (v.id !== issue.id)) {
                  resultSet = v;
                }
              });
              if (resultSet) {
                var otherIssueLink = '<a href="' + resultSet.url + '"> ' + resultSet.id + '</a>';
                var message = 'Switch to next open test in current Test Run' + otherIssueLink + '.';
                workflow.message(message);
                parent.fields["Status"] = ctx.Status.Failing;
              } else {
                parent.fields["Status"] = ctx.Status.Failed;
              }
            }
          }
        },
        'No Run': {
          guard: function(ctx) {
            var issue = ctx.issue;
            return !issue.isChanged('project') && !issue.becomesReported && issue.isReported && (issue.Type.name == ctx.Type.TestExecution.name);
          },
          targetState: 'No Run',
          action: function(ctx) {
            var issue = ctx.issue;
            if (!issue.links['subtask of'].isEmpty()) {
              var parent = issue.links['subtask of'].first();
              var TestRunList = parent.links[ctx.Subtask.outward];
              var ActiveTestRun = false;
              var isPassing = true;
              TestRunList.forEach(function(v) {
                if (v.Status.name == ctx.Status.Failed.name) {
                  isPassing = false;
                  ActiveTestRun = true;
                } else if ((v.Status.name == ctx.Status.Passed.name) && (v.id !== issue.id)) {
                  ActiveTestRun = true;
                }
              });
              if (ActiveTestRun) {
                parent.fields["Status"] = (isPassing) ? ctx.Status.Passing : ctx.Status.Failing;
              } else parent.fields["Status"] = ctx.Status.InProgress;
            }
          }
        }
      }
    },
    Failed: {
      transitions: {
        'Passed': {
          guard: function(ctx) {
            var issue = ctx.issue;
            return !issue.isChanged('project') && !issue.becomesReported && issue.isReported && (issue.Type.name == ctx.Type.TestExecution.name);
          },
          targetState: 'Passed',
          action: function(ctx) {
            var issue = ctx.issue;
            if (!issue.links['subtask of'].isEmpty()) {
              var parent = issue.links['subtask of'].first();
              var TestRunList = parent.links[ctx.Subtask.outward];
              var resultSet = null;
              var isPassing = true;
              TestRunList.forEach(function(v) {
                if ((v.Status.name == ctx.Status.Failed.name) && (v.id !== issue.id)) {
                  isPassing = false;
                } else if ((v.Status.name == ctx.Status.InProgress.name) && (v.id !== issue.id)) {
                  resultSet = v;
                }
              });
              if (resultSet) {
                var otherIssueLink = '<a href="' + resultSet.url + '"> ' + resultSet.id + '</a>';
                var message = 'Switch to next open test in current Test Run' + otherIssueLink + '.';
                workflow.message(message);

                parent.fields["Status"] = (isPassing) ? ctx.Status.Passing : ctx.Status.Failing;
              } else {
                parent.fields["Status"] = (isPassing) ? ctx.Status.Passed : ctx.Status.Failed;
              }
            }
          }
        },
        'No Run': {
          guard: function(ctx) {
            var issue = ctx.issue;
            return !issue.isChanged('project') && !issue.becomesReported && issue.isReported && (issue.Type.name == ctx.Type.TestExecution.name);
          },
          targetState: 'No Run',
          action: function(ctx) {
            var issue = ctx.issue;
            if (!issue.links['subtask of'].isEmpty()) {
              var parent = issue.links['subtask of'].first();
              var TestRunList = parent.links[ctx.Subtask.outward];
              var ActiveTestRun = false;
              var isPassing = true;
              TestRunList.forEach(function(v) {
                if ((v.Status.name == ctx.Status.Failed.name) && (v.id !== issue.id)) {
                  isPassing = false;
                  ActiveTestRun = true;
                } else if ((v.Status.name == ctx.Status.Passed.name) && (v.id !== issue.id)) {
                  ActiveTestRun = true;
                }
              });
              if (ActiveTestRun) {
                parent.fields["Status"] = (isPassing) ? ctx.Status.Passing : ctx.Status.Failing;
              } else parent.fields["Status"] = ctx.Status.InProgress;
            }
          }
        }
      }
    }
  },
  alternativeMachines: {
    'Test Run': {
      'No Run': {
        initial: true,
        transitions: {
          'Failing': {
            targetState: 'Failing',
            action: function(ctx) {
           /* Add actions. */
            }
          },
          'Failed': {
            targetState: 'Failed',
            action: function(ctx) {
         /* Add actions. */
            }
          },
          'Passing': {
            targetState: 'Passing',
            action: function(ctx) {
          /* Add actions. */
            }
          },
          'Passed': {
            targetState: 'Passed',
            action: function(ctx) {
           /* Add actions. */
            }
          }
        }
      },
      Failing: {
        transitions: {
          'Passing': {
            targetState: 'Passing',
            action: function(ctx) {
              /* Add actions. */
            }
          },
          'Passed': {
            targetState: 'Passed',
            action: function(ctx) {
          /* Add actions. */
            }
          },
          'Failed': {
            targetState: 'Failed',
            action: function(ctx) {
           /* Add actions. */
            }
          }
        }
      },
      Passing: {
        transitions: {
          'Failing': {
            targetState: 'Failing',
            action: function(ctx) {
              workflow.check(false, workflow.i18n('Test Run has a read-only status that is defined based on the statuses of the assigned tests'));
            }
          },
          'Passed': {
            targetState: 'Passed',
            action: function(ctx) {
           /* Add actions. */
            }
          },
          'Failed': {
            targetState: 'Failed',
            action: function(ctx) {
           /* Add actions. */
            }
          }
        }
      },
      Failed: {
        transitions: {
          'Passing': {
            targetState: 'Passing',
            action: function(ctx) {
           /* Add actions. */
            }
          },
          'Passed': {
            targetState: 'Passed',
            action: function(ctx) {
           /* Add actions. */
            }
          },
          'Failing': {
            targetState: 'Failing',
            action: function(ctx) {
           /* Add actions. */
            }
          }
        }
      },
      Passed: {
        transitions: {
          'Passing': {
            targetState: 'Passing',
            action: function(ctx) {
           /* Add actions. */
            }
          },
          'Failed': {
            targetState: 'Failed',
            action: function(ctx) {
          /* Add actions. */
            },
          },
          'Failing': {
            targetState: 'Failing',
            action: function(ctx) {
           /* Add actions. */
            }
          }
        }
      }
    }
  },
  requirements: {
    Assignee: {
      type: entities.User.fieldType
    },
    Status: {
      type: entities.EnumField.fieldType,
      InProgress: {
        // Intentionally mapped to 'No Run': used to find not-yet-run tests
        // and to reset a test run that has no completed executions.
        name: 'No Run'
      },
      Failing: {
        name: 'Failing'
      },
      Passing: {
        name: 'Passing'
      },
      Passed: {
        name: 'Passed'
      },
      Failed: {
        name: 'Failed'
      },
    },
    Type: {
      type: entities.EnumField.fieldType,
      TestRun: {
        name: "Test Run"
      },
      TestExecution: {
        name: "Test Case Execution"
      }
    },
    Subtask: {
      type: entities.IssueLinkPrototype,
      name: 'Subtask',
      inward: 'subtask of',
      outward: 'parent for'
    },
  }
});

Test statistics

You may also be interested in key test cycle statistics. For the purposes of this demonstration, we have incorporated the following metrics:

  • Total number of tests assigned to a specific test run.
  • Number of tests with the ‘Passed’ status.
  • Number of tests with the ‘Failed’ status.


Thanks to the code below, all of these metrics are updated whenever any of the following changes occur: a test status changes, a new test is assigned to the test run, or a test is removed from the test run.

exports.calculateStatuses = function(parent) {
  var totalTR = 0;
  var totalFailed = 0;
  var totalPassed = 0;
  if (!parent.links['parent for']) {
    return;
  }
  // Count the child test case executions and their terminal statuses.
  parent.links['parent for'].forEach(function(tr) {
    totalTR++;
    if (tr.Status.name == 'Passed') {
      totalPassed++;
    }
    if (tr.Status.name == 'Failed') {
      totalFailed++;
    }
  });
  parent.fields['Total number of test cases'] = totalTR;
  parent.fields['Number of passed test cases'] = totalPassed;
  parent.fields['Number of failed test cases'] = totalFailed;
  return true;
};

exports.resetStatuses = function(testRun, testRunCopy) {
  testRunCopy.fields['Total number of test cases'] = testRun.fields['Total number of test cases'];
  testRunCopy.fields['Number of passed test cases'] = 0;
  testRunCopy.fields['Number of failed test cases'] = 0;
  return true;
};

In our case, this code is triggered by several workflow rules, such as ‘Update stats when links are adjusted’, ‘Switch to the next test case’, and others. If you want to adopt a set of metrics that fits your specific needs, you can add the required custom fields and adjust your workflow logic accordingly.
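
As an illustration, here is a minimal sketch of what the ‘Update stats when links are adjusted’ rule might look like. The rule body is our approximation (the version in the repo may differ); it simply recalculates the statistics whenever subtasks are added to or removed from a test run:

var entities = require('@jetbrains/youtrack-scripting-api/entities');
var utils = require('../calculate-tms-stats/utils');

exports.rule = entities.Issue.onChange({
  title: 'Update stats when links are adjusted',
  guard: function(ctx) {
    var issue = ctx.issue;
    var children = issue.links[ctx.Subtask.outward];
    return issue.isReported && issue.Type && (issue.Type.name == ctx.Type.TestRun.name) &&
      (children.added.isNotEmpty() || children.removed.isNotEmpty());
  },
  action: function(ctx) {
    // Recount totals from the current set of child test case executions.
    utils.calculateStatuses(ctx.issue);
  },
  requirements: {
    Subtask: {
      type: entities.IssueLinkPrototype,
      name: 'Subtask',
      inward: 'subtask of',
      outward: 'parent for'
    },
    Type: {
      type: entities.EnumField.fieldType,
      TestRun: {
        name: 'Test Run'
      }
    },
    Total: {
      type: entities.Field.integerType,
      name: 'Total number of test cases'
    },
    TotalPassed: {
      type: entities.Field.integerType,
      name: 'Number of passed test cases'
    },
    TotalFailed: {
      type: entities.Field.integerType,
      name: 'Number of failed test cases'
    }
  }
});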

Issue tracker features

You may want to create a separate task for a bug found during a failed test and link it to the issue that represents the related test case execution. You can either link failed tests to their related bugs using a custom issue link type, or use a custom text field containing a reference to the bug. It may be easier to organize a separate YouTrack project to keep track of bugs.
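
For instance, here is a minimal sketch of an action rule that creates a bug for a failed test case execution and links it through the custom ‘Related Bug’ link type. This is our own illustration: the command name and the inward/outward names of the link type are assumptions, and the bug is created in the same project for simplicity:

var entities = require('@jetbrains/youtrack-scripting-api/entities');
var workflow = require('@jetbrains/youtrack-scripting-api/workflow');

exports.rule = entities.Issue.action({
  title: 'Report bug for failed test',
  command: 'Report Bug', // hypothetical command name
  guard: function(ctx) {
    var issue = ctx.issue;
    return issue.isReported && issue.Status && (issue.Status.name == ctx.Status.Failed.name);
  },
  action: function(ctx) {
    var issue = ctx.issue;
    // Create the bug in the same project; use a dedicated bug project here
    // if you keep bugs separately.
    var bug = new entities.Issue(ctx.currentUser, issue.project, 'Bug: ' + issue.summary);
    // 'related bug' is an assumed inward name for the custom link type.
    bug.links[ctx.RelatedBug.inward].add(issue);
    workflow.message('A new bug has been created and linked to this test.');
  },
  requirements: {
    RelatedBug: {
      type: entities.IssueLinkPrototype,
      name: 'Related Bug',
      inward: 'related bug',  // assumption
      outward: 'bug for'      // assumption
    },
    Status: {
      type: entities.EnumField.fieldType,
      Failed: {
        name: 'Failed'
      }
    }
  }
});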

Possible enhancements

You can extend YouTrack’s functionality to meet additional business requirements. Here are several options to consider.

Cloning an existing test run

Sometimes it may be very helpful to create a new test run that is similar to an existing one but includes some small changes (e.g. a different assigned version). The ‘Create Test Run copy’ menu item exists precisely for such cases. It is available only for issues with the ‘Test Run’ issue type, and it triggers the following actions:

  • A copy of the test run is created, along with copies of the test case executions assigned to it.
  • All newly created test case execution copies are assigned to the new test run using the ‘parent-subtask’ issue link type. Additionally, these copies are linked to the original test cases with the ‘Execution’ issue link type (in order to maintain traceability).
  • All statuses and statistics are reset for the newly created issues.

The logic above is implemented using the ‘Create Test Run copy’ action rule.

var workflow = require('@jetbrains/youtrack-scripting-api/workflow');
var entities = require('@jetbrains/youtrack-scripting-api/entities');
var utils = require('../calculate-tms-stats/utils');

exports.rule = entities.Issue.action({
  title: 'Create Test Run copy',
  command: 'Test Run Creation',
  guard: function(ctx) {
    return ctx.issue.isReported && (ctx.issue.Type.name == ctx.Type.TestRun.name);
  },
  action: function(ctx) {
    var issue = ctx.issue;
    var TestRunCopy = issue.copy(issue.project);
    TestRunCopy.Status = ctx.Status.InProgress; 
    var oldTestList = issue.links[ctx.Subtask.outward];
    oldTestList.forEach(function(v) {
      var newTest = v.copy(v.project);
      newTest.Status = ctx.Status.InProgress;
      // Re-parent the copied execution from the original test run to the new copy.
      newTest.links[ctx.Subtask.inward].delete(issue);
      newTest.links[ctx.Subtask.inward].add(TestRunCopy);
    });
    utils.resetStatuses(issue, TestRunCopy); 
    var newTestRunLink = '<a href="' + TestRunCopy.url + '"> ' + TestRunCopy.id + '</a>';
    var message = 'New Test Run has been created ' + newTestRunLink + '.';
    workflow.message(message);
  },
  requirements: {
    Execution: {
      type: entities.IssueLinkPrototype,
      name: 'Execution',
      inward: 'Execution',
      outward: 'Assigned test case or test suite'
    },
    Subtask: {
      type: entities.IssueLinkPrototype,
      name: 'Subtask',
      inward: 'subtask of',
      outward: 'parent for'
    },
    Type: {
      type: entities.EnumField.fieldType,
      TestRun: {
        name: "Test Run"
      },
    },
    Status: {
      type: entities.EnumField.fieldType,
      InProgress: {
        name: 'No Run'
      },
    }
  }
});

Restricting user actions

In order to avoid potential mistakes while working with testing scenarios (e.g. using the wrong issue types), you can restrict the actions available to users. For instance, you can make sure that users only link test cases and test suites to test runs. When a user links an issue to a test run, the workflow checks whether the linked issue has the ‘Test Case’ or ‘Test Suite’ type. If it has neither, a warning is displayed that prevents the user from proceeding.

The following code block can be added to the ‘Populate Test Run’ workflow.

var message = '<a href="' + TestCase.url + '"> ' + TestCase.id + '</a>';
workflow.check((TestCase.Type.name == ctx.Type.TestCase.name) || (TestCase.Type.name == ctx.Type.TestSuite.name),
  workflow.i18n('\'Test Run\' can be linked to \'Test Case\' and \'Test Suite\' only, but {0} has \'{1}\' type!', message, TestCase.Type.name));

As mentioned above, the test run status should be read-only, since it depends on the progress of the test executions. This can be achieved simply by including a code block that blocks manual status changes for issues with the ‘Test Run’ issue type.

For instance, the ‘transitions’ section for the ‘Test Run’ issue type in the ‘status-management’ workflow can be adjusted as follows:

'Failing': {
  targetState: 'Failing',
  action: function(ctx) {
    workflow.check(false, workflow.i18n('Test Run has a read-only status that is defined based on the statuses of the assigned tests'));
  }
},
...

Reporting and traceability

Reporting

At the end of the day, after the test cycle is finished, you may want to analyze the results. The YouTrack reporting feature can be used for this purpose. You can select a report type that displays the key information related to your test management procedure. For the purposes of this demonstration, we will add two reports to the dashboard:

  • Cumulative flow report, filtered by the ‘version’ field. This kind of status report refers to a specific product version. It shows you key metrics, like the number of tests with the ‘Passed’, ‘Failed’, and ‘No Run’ statuses, mapped onto a timeline. To display cross-version data, you should create a separate report for each version. It may be convenient to add all of the reports to the dashboard so you can work with all the data in one place.


  • Issue distribution report. This report is a snapshot that includes key metrics (such as the number of test runs with each status) for several product versions. Specifically for this demo, we have included report results that can help you compare versions with one another and analyze version stability.


Traceability

Since YouTrack stores key test cycle information for each issue with the ‘Test Case Execution’ type, you can always access the test history and identify any related bugs:

  • From a test case, you can refer to any test case executions that involve it (as shown in the sketch after this list).
  • From a test case execution, you can refer to its test run.
  • From a failed test case execution, you can always refer to the list of associated bugs.
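
As an illustration of how this traceability can be surfaced programmatically, here is a hedged sketch (our own example, assuming the link names declared in the workflows above) of an action rule that lists all executions of a test case:

var entities = require('@jetbrains/youtrack-scripting-api/entities');
var workflow = require('@jetbrains/youtrack-scripting-api/workflow');

exports.rule = entities.Issue.action({
  title: 'Show execution history',
  command: 'Execution History', // hypothetical command name
  guard: function(ctx) {
    return ctx.issue.isReported && ctx.issue.Type && (ctx.issue.Type.name == ctx.Type.TestCase.name);
  },
  action: function(ctx) {
    var lines = [];
    // Every test case execution cloned from this test case points back to it
    // via the custom 'Execution' link type.
    ctx.issue.links[ctx.Execution.inward].forEach(function(execution) {
      var status = execution.Status ? execution.Status.name : 'n/a';
      lines.push(execution.id + ' (' + status + ')');
    });
    workflow.message(lines.length ? 'Executions: ' + lines.join(', ') : 'No executions yet.');
  },
  requirements: {
    Execution: {
      type: entities.IssueLinkPrototype,
      name: 'Execution',
      inward: 'Execution',
      outward: 'Assigned test case or test suite'
    },
    Type: {
      type: entities.EnumField.fieldType,
      TestCase: {
        name: 'Test Case'
      }
    },
    Status: {
      type: entities.EnumField.fieldType
    }
  }
});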

You can always download the sources for all of the workflows described in this article from our repo. To discuss other workflow-related topics, feel free to join our YouTrack Workflow Community in Slack.
