Supercharge your tools with AI-powered features inside many JetBrains products
This summer, we’re making Mellum, our code completion model, easier for developers to run locally, and we’re introducing support for more languages than ever.
JetBrains Mellum – our open, focused LLM specialized in code completion – is now available to run as a containerized microservice on NVIDIA AI Factories.
Mellum doesn’t pretend to know everything, but it does one thing exceptionally well. We call it a focal model – small, efficient, and purpose-built for a single task.
Code completion has always been a defining strength of JetBrains products. See how we trained the model behind our cloud-based completion.
The behind-the-scenes story of building Mellum.
We’ve now added Gemini 1.5 Pro and Gemini 1.5 Flash to the lineup of LLMs used by JetBrains AI Assistant. These LLMs join forces with OpenAI models and local models. What’s special about Google models? Gemini 1.5 Pro and 1.5 Flash on Google Cloud’s Vertex AI will deliver advanced reasoning an…