
Insights | JetBrains AI

AI Tool Switching Is Stealth Friction – Beat It at the Access Layer

Has your team’s sprint velocity actually improved since you approved all those AI coding tools?

If not, recent research by JetBrains and UC Irvine shows your developers may be facing a new dimension of context switching that resists the usual fixes.  

The key finding: AI-assisted developers switched in and out of their IDEs more often, yet 74% of those surveyed didn’t notice it. When context switching doesn’t feel like context switching, behavioral policies won’t catch it.

Consolidating down to a single AI tool would eliminate the switching, but at the cost of flexibility. Model capabilities evolve constantly, and locking into one vendor limits your team’s ability to learn, experiment, and stay competitive.

The good news is that there’s a solution that sidesteps both challenges – consolidating the access layer. 

Here’s the research behind it, why it works, and how to apply it. 

Developers complain about switching, just not this kind

In general, developers are outspoken about context switching killing productivity. Atlassian’s State of Developer Experience Report 2025 found that developers cite context switching between tools as one of their biggest drags on productivity.

At the same time, developers report record productivity thanks to an ever-increasing array of AI tools. In the 2025 DORA State of AI-Assisted Software Development Report, respondents said that AI had a positive impact on delivery throughput, code quality, and almost every other key performance outcome. 

Paradoxically, DORA also found no relationship between AI adoption and reduced friction or burnout. The organizational wins weren’t translating to a lighter day-to-day experience.

This disconnect between experience and performance points to something deeper. When researchers combine self-reported perceptions with objective behavioral data, the gap becomes clear.

  • In the JetBrains/UC Irvine study mentioned above, researchers compared developers’ perceptions with telemetry on 151 million IDE window activations across 800 developers. Conducted from October 2022 to October 2024, the research spanned ChatGPT’s launch and the initial scramble to adopt AI coding tools. Over that two-year period, AI users’ monthly window switching trended upward while non-AI users’ did not. Yet 74% of surveyed AI-assisted developers didn’t notice any increase in their switching – the divergence was mostly invisible to those experiencing it.

74% said switching hadn’t gone up.

Telemetry disagreed.

  • Experienced open-source developers in a 2025 METR study believed AI tools made them 20% faster. Screen recordings showed the opposite: with AI, their tasks actually took longer.

All this research suggests that AI’s productivity benefits come with a hidden cost when distributed across different tools and interfaces. The switching feels productive and voluntary, so it is nearly impossible to manage behaviorally. When developers don’t perceive the friction, they can’t self-correct. When they don’t report it, you can’t coach around it.

The solution isn’t measuring or managing – it’s architectural. And there’s a proven pattern for architectural solutions to developer friction.

The platform-engineering lesson: Consolidation reduces cognitive load

Platform engineering is all about building internal tooling and infrastructure that lets developers self-service what they need without hitting speed bumps like tickets or approvals. The goal is to create “golden paths” that make the right ways the easy ways.

Traditionally, platform engineering has focused on the “outer loop” of everything after git push. This includes CI/CD pipelines, deployment automation, infrastructure provisioning, and security scanning.

AI tools, on the other hand, fragment the “inner loop” of everything before git push. GitLab’s 2025 Global DevSecOps Report found that 49% of development teams use more than five AI tools across use cases like code generation, testing, and documentation. 

Standardization was the top motivation for platform initiatives according to Weave Intelligence’s State of AI in Platform Engineering 2025 report, but standardizing around a single AI tool doesn’t work when different models are better at different tasks. 

Reducing developers’ cognitive load was the second-highest motivation. Apply that principle to AI tools: consolidate the access layer, not the options.

One environment, multiple AI tools

Since our study data was finalized in 2024, we’ve shipped two features that make JetBrains IDEs the consolidated access layer for your team’s AI tools of choice: 

Bring Your Own Key (BYOK) lets your team use OpenAI, Anthropic, or any OpenAI-compatible provider with existing API keys. You maintain cost visibility through provider dashboards while developers access models directly in the IDE.

No browser tabs required. LLMs work inside the IDE.
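To make “OpenAI-compatible” concrete, here is a minimal sketch of the request shape such providers share – one chat-completions payload format that different vendors accept, which is what lets a single access layer target multiple models. The base URLs, model names, and key value below are illustrative placeholders, not JetBrains configuration.

```python
import json

# OpenAI-compatible providers expose the same chat-completions request
# shape, so one client configuration can point at different vendors.
# These entries are illustrative placeholders, not real endpoints/models.
PROVIDERS = {
    "provider_a": {"base_url": "https://api.example-a.com/v1", "model": "model-a"},
    "provider_b": {"base_url": "https://api.example-b.com/v1", "model": "model-b"},
}

def build_request(provider: str, prompt: str, api_key: str) -> dict:
    """Assemble (but do not send) an OpenAI-style chat-completions request."""
    cfg = PROVIDERS[provider]
    return {
        "url": f"{cfg['base_url']}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": cfg["model"],
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# Swapping providers changes only the config entry, not the request shape.
req = build_request("provider_a", "Explain this stack trace", api_key="sk-test")
```

Because only the base URL and model name vary, switching providers is a configuration change rather than a workflow change – which is the point of consolidating the access layer.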

Agent Client Protocol (ACP) support means any ACP-compatible coding agent can work within JetBrains IDEs. ACP is an open standard we’re developing in partnership with Zed to ensure agents function across editors without vendor lock-in. The recently launched ACP Registry makes finding and configuring agents quick and easy.

All ACP-compatible agents are available in the IDE.
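Under the hood, ACP frames editor–agent communication as JSON-RPC 2.0 messages exchanged over the agent subprocess’s stdin/stdout. The sketch below shows only that generic framing; the `initialize` method name and `protocolVersion` parameter are assumptions drawn from the public spec at the time of writing, not a complete client.

```python
import json

def jsonrpc(method: str, params: dict, msg_id: int) -> str:
    """Serialize a standard JSON-RPC 2.0 request as one message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": msg_id,
        "method": method,
        "params": params,
    })

# The editor opens the handshake by advertising the protocol version it
# speaks; the agent replies with its capabilities. (Method and parameter
# names here are illustrative, per the published ACP spec.)
init = jsonrpc("initialize", {"protocolVersion": 1}, msg_id=0)
```

Because every ACP agent speaks this same framing, the IDE can host any of them behind one integration point instead of one plugin per vendor.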

Takeaway

AI-related switching doesn’t surface the same way as shifts between meetings, projects, or traditional tools. Developers notice it less, so they report it less. Behavioral policies can’t apply to what isn’t visible.

The fix is architectural, not managerial. In platform engineering, this principle applies to post-commit workflows. Apply it to pre-commit AI workflows by standardizing where developers access the tools: in the environment where they already write, test, and debug code.
