{"id":681154,"date":"2026-02-25T13:02:38","date_gmt":"2026-02-25T12:02:38","guid":{"rendered":"https:\/\/blog.jetbrains.com\/?post_type=ruby&#038;p=681154"},"modified":"2026-02-27T14:41:02","modified_gmt":"2026-02-27T13:41:02","slug":"rubymine-mcp-and-the-rails-toolset","status":"publish","type":"ruby","link":"https:\/\/blog.jetbrains.com\/zh-hans\/ruby\/2026\/02\/rubymine-mcp-and-the-rails-toolset","title":{"rendered":"Building LLM-Friendly MCP Tools in RubyMine: Pagination, Filtering, and Error Design"},"content":{"rendered":"\n<p>RubyMine enhances the developer experience with <a href=\"https:\/\/www.jetbrains.com\/ruby\/features\/#navigation-and-search\" target=\"_blank\" rel=\"noopener\">context-aware search features<\/a> that make navigating a Rails application seamless, a powerful analysis engine that <a href=\"https:\/\/www.jetbrains.com\/help\/ruby\/code-inspection.html\" target=\"_blank\" rel=\"noopener\">detects problems in the source code<\/a>, and integrated support for the most popular <a href=\"https:\/\/www.jetbrains.com\/ruby\/features\/#version-control\" target=\"_blank\" rel=\"noopener\">version control systems<\/a>.<\/p>\n\n\n\n<p>With AI becoming increasingly popular among developers as a tool that helps them understand codebases or develop applications, these RubyMine features provide an extra level of value. 
Indeed, with access to the functionality of the IDE and information about a given project, AI assistants can produce higher-quality results more efficiently.<\/p>\n\n\n\n<p>To improve AI-assisted workflows, since 2025.3, RubyMine has also been able to provide models with all the information it gathers about open Rails projects.&nbsp;<\/p>\n\n\n\n<p>In this blog post, we describe how we implemented the new Rails toolset and what we\u2019ve learned about MCP tool design along the way, from a software engineering perspective.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What Is Model Context Protocol (MCP)?<\/h2>\n\n\n\n<p>MCP, or <a href=\"https:\/\/modelcontextprotocol.io\/docs\/getting-started\/intro\" target=\"_blank\" rel=\"noopener\">Model Context Protocol<\/a>, is an open-source standard that enables AI applications to communicate seamlessly with external systems. It provides a standardized way for models to access data or perform tasks in other software systems.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How MCP Servers Work in IntelliJ-Based IDEs<\/h2>\n\n\n\n<p>IDEs built on the IntelliJ Platform come with their own integrated MCP servers, making it easy for both internal and external applications, such as <a href=\"https:\/\/www.jetbrains.com\/ai-assistant\/\" target=\"_blank\" rel=\"noopener\">JetBrains AI Assistant<\/a> or <a href=\"https:\/\/code.claude.com\/docs\/en\/jetbrains\" target=\"_blank\" rel=\"noopener\">Claude Code<\/a>, to interact with them. 
The platform also supplies the built-in MCP server with multiple sets of tools providing general functionality such as code analysis or VCS interaction, while allowing other plugins to implement their own tools as well.<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-4 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:5%\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"673\" height=\"252\" src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/02\/Kepernyofoto-2026-02-14-5.20.56.png\" alt=\"Toolsets supplied by the IntelliJ Platform and RubyMine\" class=\"wp-image-681315\"\/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:5%\"><\/div>\n<\/div>\n\n\n\n<p>RubyMine 2025.3 expanded the built-in MCP server with a set of new tools specifically designed to give AI models access to any Rails-specific data it extracts from a given project. 
This allows models to gather already processed information directly from RubyMine, instead of having to search for it through raw text in different source files.<\/p>\n\n\n\n<p>However, while developing this toolset, we encountered a number of obstacles inherent to the process of working with large language models.&nbsp;<\/p>\n\n\n\n<p>Let\u2019s take a look at what these obstacles are and how we\u2019ve overcome them to ensure that models can use the new tools smoothly in an AI-assisted workflow.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Context Window Limit<\/h2>\n\n\n\n<p>Large language models operate within a fixed <a href=\"https:\/\/www.jetbrains.com\/help\/ai-assistant\/supported-llms.html\" target=\"_blank\" rel=\"noopener\">context window<\/a>, which limits how much information they can process at once. Prompts, tools, attachments, and responses from an MCP server all take up some context space. Once the limit is reached, depending on how it\u2019s implemented, the AI assistant must drop or compress some parts of the context to make room for new information.<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-8 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:26.5%\"><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"315\" height=\"317\" src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/02\/Kepernyofoto-2026-02-14-5.21.55-2.png\" alt=\"The layout of a Large Language Model Context Window.\" class=\"wp-image-681464\"\/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:26.5%\"><\/div>\n<\/div>\n\n\n\n<p>Consider a large Ruby on Rails application such as <a href=\"https:\/\/gitlab.com\/gitlab-org\/gitlab\" target=\"_blank\" 
rel=\"noopener\">GitLab<\/a>. Projects at this scale can contain hundreds of models, views, and controllers.&nbsp;<\/p>\n\n\n\n<p>The information about a single controller that the <code>get_rails_controllers<\/code> tool returns also contains every object associated with it.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"json\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">{\n  \"class\": \"Controller (\/path\/to\/controller.rb:line:col)\",\n  \"isAbstract\": false,\n  \"managedViews\": [\"\/path\/to\/view.html.erb\"],\n  \"managedPartialViews\": [\"\/path\/to\/_view.html.erb\"],\n  \"managedLayouts\":  [\"\/path\/to\/layout.html.erb\"],\n  \"correspondingModel\": \"Model (\/path\/to\/model.rb:line:col)\"\n}<\/pre>\n\n\n\n<p>One way to implement this tool would be to simply return a single list of controller descriptions. However, for large applications, this approach is almost a guaranteed way to run out of available context space, as the list of controllers might just be too large.<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-12 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:22%\"><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"390\" height=\"153\" src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/02\/Kepernyofoto-2026-02-14-5.22.54.png\" alt=\"Returned tools not fitting in the context window.\" class=\"wp-image-681337\"\/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:22%\"><\/div>\n<\/div>\n\n\n\n<p>Also, some clients, such as <a href=\"https:\/\/www.jetbrains.com\/ai-assistant\/\" 
target=\"_blank\" rel=\"noopener\">JetBrains AI Assistant<\/a>, may proactively <a href=\"https:\/\/www.jetbrains.com\/help\/ai-assistant\/ai-chat.html#set-message-trimming-threshold\" target=\"_blank\" rel=\"noopener\">trim responses<\/a> that exceed a certain portion of the context window before forwarding them to the model, resulting in even more data loss.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Pagination Strategies: Offset vs Cursor<\/h2>\n\n\n\n<p>To mitigate these issues, we allow the model to retrieve the data in arbitrarily sized chunks with pagination.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"ruby\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">get_rails_controllers(page, page_size)<\/pre>\n\n\n\n<p>With offset-based pagination, a page is defined as a number of items starting from an offset relative to the beginning of the dataset. Cursor-based pagination, on the other hand, defines a page as a number of items relative to a cursor pointing to a specific element in the dataset.&nbsp;<\/p>\n\n\n\n<p>Offset-based pagination has lower implementation costs, hence it is mostly used for static data. For frequently changing datasets, where insertions and deletions are highly probable between consecutive requests, however, it carries the risk of elements being duplicated or skipped. 
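<\/p>\n\n\n\n<p>As a quick illustration (a hypothetical four-item dataset, not actual tool output), here is what offset-based pagination with a page size of 2 returns when the dataset changes between two requests:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"ruby\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\"># dataset: [A, B, C, D], page_size = 2\npage 1 reads offsets 0..1  # => [A, B]\n\n# an item X is inserted at the front: [X, A, B, C, D]\npage 2 reads offsets 2..3  # => [B, C]  (B is returned twice)\n\n# or item A is deleted instead: [B, C, D]\npage 2 reads offsets 2..3  # => [D]     (C is never returned)<\/pre>\n\n\n\n<p>A cursor-based page 2, anchored to the last item previously seen (B), would return [C, D] in both cases.<\/p>\n\n\n\n<p>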
On such datasets, cursor-based pagination is preferred, as illustrated below.<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-16 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:24%\"><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"349\" height=\"1006\" src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/02\/Kepernyofoto-2026-02-14-5.18.51.png\" alt=\"Showcasing offset-based and cursor-based paginations.\" class=\"wp-image-681304\"\/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:24%\"><\/div>\n<\/div>\n\n\n\n<p>Notice that with offset-based pagination, item 1 is returned on both pages 1 and 2, and item 2 is skipped over, while cursor-based pagination correctly returns every item in order.<\/p>\n\n\n\n<p>RubyMine\u2019s Rails tools operate on a snapshot of the application state, where every element in the project is known at the time of the first request and is returned from RubyMine\u2019s cache, which rarely needs to be recalculated between fetching 2 pages. 
Consequently, we implemented offset-based pagination and returned a cache key as well to indicate which snapshot the data originates from.<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-20 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:25%\"><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"332\" height=\"452\" src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/02\/Kepernyofoto-2026-02-14-5.24.33.png\" alt=\"The LLM receives two pages with a different cache key.\" class=\"wp-image-681348\"\/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:25%\"><\/div>\n<\/div>\n\n\n\n<p>With caching, if a modification happens, and the cache is recalculated, data from older snapshots is considered to be invalid. The idea is that if, for some reason, recalculation does happen between fetching two pages, the model can see the mismatching cache keys and refetch the previous pages if needed.<\/p>\n\n\n\n<p>Besides the cache key, the returned data also contains the page number, the number of items on the page, the total number of pages, and the total number of items.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"json\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">{\n  \"summary\": {\n    \"page\": 1,\n    \"item_count\": 10,\n    \"total_pages\": 13,\n    \"total_items\": 125,\n    \"cache_key\": \"...\"\n  },\n  \"items\": [ ... 
]\n}<\/pre>\n\n\n\n<p>Pagination makes it possible for the model to process the data progressively and stop early once the necessary information is obtained, without enumerating the full dataset. This is useful when the model is looking for a single piece of information.<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-24 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:3%\"><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"697\" height=\"780\" src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/02\/Kepernyofoto-2026-02-14-5.26.29-1.png\" alt=\"The LLM answers a question while using the rails toolset with early stopping.\" class=\"wp-image-681370\"\/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:3%\"><\/div>\n<\/div>\n\n\n\n<p>On the other hand, it is important to note that if the model needs to consider the entire dataset but that doesn\u2019t fit in the context window, pagination alone is not sufficient. 
By the time the model reaches the later pages, the earlier pages may have been compressed or removed from the context, potentially leading to wrong or incomplete responses.<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-28 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:3%\"><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"697\" height=\"801\" src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/02\/Kepernyofoto-2026-02-14-5.27.47.png\" alt=\"Data is removed from the LLM context window due to reaching it's limits.\" class=\"wp-image-681381\"\/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:3%\"><\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Tool Call Limit<\/h2>\n\n\n\n<p>As we\u2019ve established, pagination enables the model to process search queries by iterating through pages and stopping early once the answer is found. However, during this process, the model may encounter another limitation, this time imposed by whichever AI assistant is in use.<\/p>\n\n\n\n<p>If the model makes too many consecutive tool calls, some applications may think it is stuck in an infinite tool calling loop and temporarily block the execution of further tools until the next user request. 
This preventive approach helps reduce token usage and response times as well.<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-32 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:5%\"><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"668\" height=\"167\" src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/02\/Kepernyofoto-2026-02-14-5.31.07.png\" alt=\"Tool calls beyond the allowed limit are getting ignored.\" class=\"wp-image-681392\"\/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:5%\"><\/div>\n<\/div>\n\n\n\n<p>If an agent enforces a limit of 15 tool calls, the model cannot iterate over 18 pages of data to locate the answer, as the sixteenth and later calls will be blocked.<\/p>\n\n\n\n<p>This limits scaling the toolset on 2 axes. 
Vertically, the context window limits how much information can be returned in a single call, and horizontally, the clients\u2019 tool call limits might restrict how many chunks the data can be split into.<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-36 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:9%\"><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"604\" height=\"268\" src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/02\/Kepernyofoto-2026-02-14-5.33.45.png\" alt=\"Tool call limit and context limit can be visualized on two axes.\" class=\"wp-image-681403\"\/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:9%\"><\/div>\n<\/div>\n\n\n\n<p>This means it is essential to utilize the available space as efficiently as possible. 
Therefore, RubyMine\u2019s Rails tools include flexible server-side filtering.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Designing Server-Side Filtering for LLM Efficiency<\/h2>\n\n\n\n<p>Applying filters can significantly reduce the search space the model needs to explore, which means less context space is used, and fewer tool calls are needed to retrieve it.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"ruby\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">get_rails_views(\n  page,\n  page_size,\n  partiality_filter,\n  layout_filter,\n  controller_filter,\n  included_path_filters,\n  excluded_path_filters,\n  included_controller_fqn_filters,\n  excluded_controller_fqn_filters,\n  included_controller_directory_filters,\n  excluded_controller_directory_filters\n)<\/pre>\n\n\n\n<p>The tools allow the model to apply filters to any property of the returned data, with support for positive and negative conditions where applicable. 
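<\/p>\n\n\n\n<p>For instance, a request like \u201clist the first 25 non-partial views whose controllers are not internal\u201d can be expressed as a single filtered call. The parameter values below are illustrative sketches, shown in keyword style for readability; the exact accepted formats are spelled out in each parameter\u2019s schema:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"ruby\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">get_rails_views(\n  page: 1,\n  page_size: 25,\n  partiality_filter: false,                       # skip partial views\n  excluded_controller_fqn_filters: ['Internal']   # drop controllers whose FQN contains Internal\n)<\/pre>\n\n\n\n<p>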
Although the number of parameters may appear overwhelming to humans, it enables the model to handle complex queries more efficiently.<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-40 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:3.5%\"><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"693\" height=\"484\" src=\"https:\/\/blog.jetbrains.com\/wp-content\/uploads\/2026\/02\/Kepernyofoto-2026-02-23-14.22.25.png\" alt=\"\" class=\"wp-image-683204\"\/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:3.5%\"><\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Tool Number Limit<\/h2>\n\n\n\n<p>While implementing the toolset, we also examined multiple MCP clients and found that some enforce a hard limit on the number of discoverable tools. 
For instance, <a href=\"https:\/\/code.visualstudio.com\/docs\/copilot\/chat\/chat-tools#_im-getting-an-error-that-says-cannot-have-more-than-128-tools-per-request\" target=\"_blank\" rel=\"noopener\">GitHub Copilot allows up to 128 tools<\/a>, <a href=\"https:\/\/junie.jetbrains.com\/docs\/junie-ide-plugin.html#view-available-tools\" target=\"_blank\" rel=\"noopener\">Junie sets this limit at 100<\/a>, and <a href=\"https:\/\/forum.cursor.com\/t\/increase-the-mcp-tool\/69194\" target=\"_blank\" rel=\"noopener\">in Cursor, the cap is 40<\/a>.<\/p>\n\n\n\n<p>Considering a possible tool number limit and that users may be connected to more than one MCP server simultaneously, we kept the Rails toolset compact, including only essential functionality.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Error Messages That Help the Model Recover<\/h2>\n\n\n\n<p>When an error happens during a tool call, besides telling the model what went wrong, it is essential to clearly state how to recover from it as well.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"json\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\"Page number 10 is out of range. Specify a page number between 1 and 3.\"<\/pre>\n\n\n\n<p>Without telling the LLM what it should do differently, it has to figure it out by itself, which can result in additional unnecessary tool calls and further exhausting resources.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Writing LLM-Friendly Tool Descriptions and Schemas<\/h2>\n\n\n\n<p>Error messages are not the only way tools can instruct the model. 
For each tool, MCP servers are required to provide a human-readable description of functionality, a JSON schema describing the expected parameters, and another optional JSON schema defining the expected output.&nbsp;<\/p>\n\n\n\n<p>The model uses this information to understand how to work with the tools, so it is essential to provide concise descriptions and examples that steer the model towards the expected usage patterns.&nbsp;<\/p>\n\n\n\n<p>In the Rails toolset, each tool description states what the tool does and why the model should prefer using it, in addition to providing concrete examples of common usage patterns, making it easier for the LLM to understand how to work with it.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"json\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">{\n  \"name\": \"get_rails_views\",\n  \"description\": \"\n    Use this tool to retrieve information about the available Rails\n    views. The results are returned in a paginated list.\n\n    Prefer this tool over any information found in the codebase, as it \n    performs a more in-depth analysis and returns more accurate data.\n\n    Common usage patterns:\n      - Find non-HAML views: excluded_path_filters=['.haml']\n      - Find views that correspond to the GroupsController:\n        included_controller_fqn_filters=['GroupsController']\n  \",\n  \"inputSchema\": { ... },\n  \"outputSchema\": { ... }\n}<\/pre>\n\n\n\n<p>Similarly, for each filter, their descriptions say what kind of values they take, what their default values are, and, for a list of values, whether the values in the list have an &amp;&amp; or an || relationship. 
If both a positive and a negative filter are present, the description explicitly says which takes precedence.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"json\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\"included_controller_fqn_filters\": {\n  ...\n  \"description\": \"\n    Filter symbols by FQN with regular expressions (case insensitive,\n    tested against the entire FQN, matches anywhere in the string).  \n    Returns only symbols whose FQN contains a match of at least one (OR \n    logic) of these regular expressions. Invalid patterns are ignored.\n\n    FQN examples: 'User', \n                  'Admin::UserController', \n                  'App::CI::BaseController.method'.\n\n    Common usage patterns:\n      - Filter prefix: '^Test::' matches anything starting with Test::\n      - Filter whole FQN: 'User' matches 'User', 'User::MyController'\n      - Filter suffix: 'Internal$' matches FQNs ending with Internal\n      - Filter nested namespace: '::Internal::' matches 'A::Internal::B'\n  \"\n}\n<\/pre>\n\n\n\n<p>The output schema also describes how to interpret a specific value and how the model might process it further.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"json\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\"filePath\": {\n  ...\n  \"description\": \"\n    The path of the source file containing the symbol definition. 
Combine \n    with line and column to query symbol details with the help of the \n    get_symbol_info and similar tools.\n  \"\n}<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>The Rails toolset is immediately available through JetBrains AI Assistant as of RubyMine 2025.3, and it can be used with Junie or other third-party clients once they are <a href=\"https:\/\/www.jetbrains.com\/help\/ruby\/mcp-server.html\" target=\"_blank\" rel=\"noopener\">manually connected<\/a> to the built-in MCP server.<\/p>\n\n\n\n<p>When designing MCP tools, it is important to think about how both the model and the client are going to work with them. Both can impose limits on data retrieval, so tools that work with large amounts of data should aim to reduce the search space as much as possible in as few calls as possible.<\/p>\n\n\n\n<p>Since the tools are used by the model, the goal is to make them as LLM-friendly as possible. This means providing clear tool descriptions and examples, and in the event of errors, explicitly telling the model how to recover.<\/p>\n\n\n\n<p>Some clients are known to limit the number of tools they can handle, and it\u2019s safe to assume that a client is connected to multiple MCP servers, so it\u2019s best to keep the toolset as compact as possible to not take away too much space from other tools.<\/p>\n\n\n\n<p>We invite you to try our new toolset on your own Rails project in RubyMine and let us know your thoughts.<\/p>\n\n\n\n<p>Happy developing!<\/p>\n\n\n\n<p>The RubyMine 
team<\/p>\n","protected":false},"author":1618,"featured_media":683218,"comment_status":"closed","ping_status":"closed","template":"","categories":[8899,4156],"tags":[6847,8785,217,8636],"cross-post-tag":[],"acf":[],"_links":{"self":[{"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/ruby\/681154"}],"collection":[{"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/ruby"}],"about":[{"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/types\/ruby"}],"author":[{"embeddable":true,"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/users\/1618"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/comments?post=681154"}],"version-history":[{"count":10,"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/ruby\/681154\/revisions"}],"predecessor-version":[{"id":684270,"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/ruby\/681154\/revisions\/684270"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/media\/683218"}],"wp:attachment":[{"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/media?parent=681154"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/categories?post=681154"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/tags?post=681154"},{"taxonomy":"cross-post-tag","embeddable":true,"href":"https:\/\/blog.jetbrains.com\/zh-hans\/wp-json\/wp\/v2\/cross-post-tag?post=681154"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}