Working with files in DataGrip

People working with databases write SQL. A lot of SQL. In this post, we’ll go over whether it’s better to just use a temporary SQL editor in DataGrip or if it’s more convenient to work directly with SQL files stored on your computer.

Spoiler: it all depends on your needs. You might already have a working directory with tons of scripts; we will tell you how you can use DataGrip to deal with them too.


There are four types of files where you can write SQL in DataGrip:

1. A query console is an SQL editor attached to a particular data source. When you create a data source, a database console is created automatically. You can add more consoles to a data source; each of them will then create a new connection, unless, that is, you use Single Connection Mode.

2. Scratch files are like consoles, but they are not attached to a data source. Think of scratch files as temporary notes or drafts for code ideas outside of the project context. If a scratch file is an SQL file, it is also an SQL editor that can be associated with any of your connections.

3. User files are SQL scripts you store on your machine. You know where they are located, and you can put them under version control.

In DataGrip, you can run any of the files mentioned above without even opening them.

4. Object editors are internal files where you edit the DDL of an individual object: a procedure, a view, and so on. We won't be covering these in this tutorial.

Let’s have a closer look at these first three cases.

Query console

From our experience, the query console is the best place to write normal everyday SQL. Every data source has its own default console. To open it, go to the context menu and click Open Console, or use the shortcut (F4 or Cmd+Down).


You can write SQL here, then run it and get the result. It's as simple as that! In the top right-hand corner of the query console there is a switcher where you can choose the database/schema you are working with or, in the case of PostgreSQL, define a search path.
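For instance, a typical everyday query you might write and run in a console could look like this (the table and column names are hypothetical):

```sql
-- Hypothetical table; run with Ctrl/Cmd+Enter to see the result grid
SELECT customer_id, count(*) AS order_count
FROM orders
GROUP BY customer_id
ORDER BY order_count DESC
LIMIT 10;
```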


If for any reason you want to create another console for a data source, this is also possible: Context menu | New | Console


In fact, every console is a file. To see the details of a file, you can take a look at the tooltip on the console’s tab. If you want to create a separate file from the console, you can use the Save As… action for this.


You can see that all the consoles you've created are stored in the internal Database Consoles folder. To open this folder, open the Files tool window (Alt/Cmd+2) and go to Scratches and Consoles | Database Consoles.

If you want to get to the current console in the Files tool window, press Alt+F1 (Select In) and choose Files.


Here you can also rename a console file, which will make it easier to locate later.

Another way to rename a console file, without opening the Files tool window, is to open the Navigation bar (Ctrl/Cmd+Up) and then rename the console from there.

Scratch Files

Scratch files are similar to query consoles; the only difference is that they are not attached to a data source. They can be of any type: JSON, XML, or text files, but we’ll speak specifically about SQL scratch files. They are SQL editors which can be associated with any of your connections.

There are several ways to create a new scratch file:

— Go to File | New | Scratch file.
— Invoke Find Action (Ctrl/Cmd+Shift+A), then type ‘scratch’ and press Enter.


Then choose the language you want this file to be highlighted as. Let’s say it’s PostgreSQL.

In this newly created file, you can write SQL. If you want to run it, you obviously need a connection. In DataGrip, this means attaching a console to the file. Again, there are several ways to do this:

— Just click Run (Ctrl/Cmd+Enter), and the IDE will offer to do it for you.
— Use the switcher in the top right-hand corner. There you’ll see only the consoles which are associated with the same language as your file. To attach a console from the data source of another DBMS, first, change the file’s language.
— Use the context menu of your scratch file.


Once you have attached the console, the objects in your SQL will be resolved. This means that DataGrip now understands which particular objects are used there. As a result, you will be able to navigate from the SQL to the database tree.

All the scratch files you’ve created are saved in a virtual Scratches folder. Like Database Consoles, it can be found in the Files tool window in the Scratches and Consoles section.


There are also files called scratch buffers, which are almost the same, but they are created in a cyclical way: only five of them can exist at a time, and when the sixth scratch buffer is created, the first one is deleted.

User files

It's likely you'll find that the best way to work with SQL is to use files as they are: you know where they are located, they can be modified from elsewhere, and they can even be stored under a version control system. Or they can be a huge legacy folder of scripts from the year 2005!

To work with files from your machine or any other place you have access to, attach the directory you need to the Files tool window. This can be done from the context menu. Or use the button on the toolbar.


You'll then see the folder you need, and you can open and modify any file there in DataGrip. As with scratch files, every file should have its language specified so the correct highlighting and code analysis are used. The language (in our case, the SQL dialect) can be chosen from the top right-hand switcher.


Sometimes DataGrip will detect the dialect automatically.


SQL dialects can be defined for the group of files in Settings | Database | SQL dialects.


The dialect can be set for each file or for a whole script folder. If you define a Global SQL Dialect, it will be used by default to highlight all your files, which is useful if you work with just one DBMS.

As with scratches, once you've defined the dialect, it's time to attach a console to the file. Again, this can be done from several places.

If you run a statement while no console is attached, DataGrip will offer to attach one.


Or you can do it from the context menu.


The switcher in the top right-hand side also works.


Once the console is attached, you can run statements in the context of the connection associated with that console. Object resolution will work correctly as well.

There is also the possibility of resolving your scripts to CREATE and ALTER statements in other files. For this, you need to create a DDL data source from your DDL scripts.
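As a sketch, a DDL data source could be built from scripts like the following (the schema is made up for illustration):

```sql
-- customers.sql: DDL that a DDL data source can use for resolution
CREATE TABLE customers (
    id   integer PRIMARY KEY,
    name varchar(100) NOT NULL
);

-- Scripts referencing "customers" can now resolve against the definition above
CREATE VIEW active_customers AS
SELECT id, name
FROM customers;
```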

A little more about resolution: if you only browse files, you may still want correct resolution without attaching consoles to each file. In this case, use Settings | Tools | Database | SQL resolution scopes, where you can map any SQL file or folder to any scope. All unqualified objects from these locations will then be resolved to the selected data source, database, or schema.


The final thing to tell you about is that there is a file size limit which allows you to use the full set of code insight features. It is 2.56 MB by default. If the file you open exceeds this limit, you’ll get a notification. If the file is bigger than 20 MB, you will only see part of the file in the editor.


If you want to change this 2.56 MB limit (be aware that raising it can affect performance), go to Help | Edit Custom Properties and add the following line to the file that opens (whether it's empty or not): idea.max.intellisense.filesize=3000, where 3000 is the new limit in kilobytes.


If the file is too big and you just want to run the whole script, use the Run action from the context menu of the file. You can select several schemas and databases there so that your file is run against all of them.


If you want to know more about working with Version Control then check out our tutorials: How to work with Git in DataGrip and DataGrip and GitHub integration.

Thank you for reading! If you have any questions, please write them in comments or drop an email to

DataGrip team

Posted in Tutorial

DataGrip 2018.3.2


Here it is, the second minor update for 2018.3 with some added improvements. The most notable feature is:

Enum values can now be completed in the data editor. This works in both PostgreSQL and ClickHouse.
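For example, given a PostgreSQL enum like this (a made-up type for illustration), the data editor will now suggest the enum's values when you edit the column:

```sql
-- A hypothetical enum type and a table using it
CREATE TYPE order_status AS ENUM ('new', 'paid', 'shipped', 'cancelled');

CREATE TABLE orders (
    id     serial PRIMARY KEY,
    status order_status NOT NULL DEFAULT 'new'
);
```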


Other important fixes we’ve made:

DBE-7561: DataGrip doesn’t change the default value of a column in MySQL.
DBE-7572: CREATE OR REPLACE is supported for MariaDB events.
DBE-7604: Hidden schemas are no longer used in comparison.
DBE-7417: The mismatch between the expected and actual column count in the SQL Server OUTPUT clause is now reported.
DBE-7457, DBE-7458: DataGrip no longer reports columns as unused in some false-positive cases.

Thanks for reading!

The DataGrip Team

Posted in Minor updates

DataGrip 2018.3.1


Here comes the first minor update for 2018.3 with a few important improvements.

DataGrip had a usability problem where, every time you opened the source code, it showed you a version loaded during the indexing process. In fact, if someone had changed the source code from another place, DataGrip was unaware of it and thus displayed an outdated version of the DDL.

The thing is that DataGrip maintains a consistent database model at every moment to provide navigation and usage search. This means the IDE cannot update just one single DDL in the local storage as that would break the whole flow.

Now, if a source code object has been changed, you can get an alert. Go to the data source properties and turn on Notify when the outdated object is opened.


Then, when you open the source code, you’ll see a notification saying it’s been changed from elsewhere.


Another very important thing: we’ve finally fixed the problem with NULL instead of an empty date in MySQL. Well, in fact, MySQL fixed it :) Now, if you work with the new 8.0 JDBC driver, everything is OK.

Important! This new driver isn’t compatible with MySQL 5.1 and older. This means that the date problem will persist for those who use these old versions and will not be fixed until the database is updated.


— Comma position can now be detected automatically: see the video.
— We’ve fixed the issue with connecting to PostgreSQL: DBE-7447
— The highlighting for the ‘column aliases required’ inspection works better: DBE-7314
— The color in the database tree is exactly the same as the one set: DBE-6558
— Select In works in Search Everywhere: IDEA-197872
— Replace in pre-selected text works again: IDEA-200011
— We’ve fixed the bug with connecting to Redshift: DBE-7459

Thanks for reading!

The DataGrip Team

Posted in Uncategorized

DataGrip and GitHub: Step-by-step Integration

All IDEs based on the IntelliJ Platform have Version Control System integration, and DataGrip is no exception! But because database developers usually do things a little differently from other developers, and because DataGrip’s project model has its own specific workflow, the integration between the IDE and VCS support isn’t quite as straightforward as in other JetBrains IDEs.


We’ll cover two general use cases:

— Cloning an existing GitHub repository with SQL scripts and other files

— Sharing a user scripts directory on GitHub

In DataGrip, VCS support doesn’t come out of the box. To get support for a particular version control system, please install the corresponding JetBrains plugin from Settings | Plugins. Since we’re going to work with GitHub, we’ll need the Git Integration plugin and the GitHub plugin.

They’re both provided by JetBrains which is indicated by the small JB icon on them. Note that they can appear at the end of the list. This happens because we use a special plugin repository in DataGrip.

Once you’ve installed these two plugins and restarted DataGrip, please go to Settings | Version Control | GitHub. Add your GitHub account there so the IDE is able to connect and use it during interactions with GitHub.

The GitHub plugin is a must if you use two-factor authentication. If everything is OK, you’ll see that your account is linked.

Your IDE is ready to work with GitHub. Let’s go through the use cases we mentioned above.

Cloning an existing GitHub repository with SQL scripts and other files

Say you want to work with an existing GitHub repository. To clone it go to VCS | Git | Clone…
We should mention that all actions, including this one, can be performed from anywhere in the IDE via Find Action (Ctrl/Cmd+Shift+A). There is a way to do this through the menu too: see our screenshot.

Paste the link to the GitHub repository in the URL field and click Test. Hopefully, everything goes smoothly. In our example, we will clone the public DataGrip repository with various database dumps:

DataGrip will ask you a question:
— Would you like to open this directory?
— No

By clicking No, you indicate that you want to work with the repository from an existing project where you have already attached directories and have connections.

If you click Yes, it will suggest that you create a new project.

So, now you’re still in your project. The connections are in the left part, but the directory mapped to the GitHub repository isn’t attached and you don’t see it in the Files tool window. So, you need to attach it!

— Open Files tool window (Alt/Cmd+2).
— Invoke the context menu.
— Click Attach directory.


The only thing left to do is to register the root. To do that, go to Settings | Version Control, locate the folder you need, and click +.

Now you’re ready to change your files, commit, and push your changes!


Sharing the user scripts directory on GitHub

Perhaps you’ve realized that you can’t track changes to your SQL files or revert to previous versions of your scripts. In some cases, Local History is the only thing you need, but sharing your scripts on GitHub can also be a good idea.

When you work with files in DataGrip, you need to attach the directory from your computer to the Files tool window. If you have no folders attached yet, you can do it from the context menu.

If you have already attached one and worked with it, great! Say we have a MyScripts folder that we want to share on GitHub. First, we need to create a Git repository on our machine, which will then be linked with the remote one on GitHub.

Go to VCS | Import into Version Control | Create Git repository.

Choose the folder. In my case, it is the folder MyScripts.

The local repository is created. Now let’s create an empty remote repository on GitHub. Log in to your account and click + | New repository.

Define all the fields you need and click Create repository. That’s it! Done!

If you used the option Initialize this repository with a README, don’t forget to pull your repository before pushing commits.

The final thing we need to do is to add this remote repository to our local one. Go to Git | Remotes…


Choose the repository you need and click +.


Paste the repository address in the URL field and click OK.


Now you can commit and push your files.


The first push will actually create all the files that you need in the remote repository.

After performing Push, we have all the files we need on GitHub!
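For reference, the whole flow above also has a command-line equivalent. This is just a sketch: the folder name and the repository URL are placeholders, not a real project.

```shell
# Create a local Git repository in the scripts folder
mkdir -p MyScripts
git -C MyScripts init

# Link it to the (empty) remote repository created on GitHub
git -C MyScripts remote add origin git@github.com:your-user/MyScripts.git
git -C MyScripts remote -v   # verify the remote is registered

# Then commit and push (requires a real remote and credentials):
# git -C MyScripts add . && git -C MyScripts commit -m "Initial import" && git -C MyScripts push -u origin master
```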

That’s it! One last thing to mention:

You can tell DataGrip which SQL dialect is used in the work directories so that the IDE can analyze your code correctly. To define dialects for the directories, go to Settings | SQL Dialects.


If you want to learn more about how to work with Git in DataGrip, and how to commit, push, and work with branches, check out our tutorial named How to work with Git in DataGrip.

Download the latest DataGrip and try it out! We’d love to hear your thoughts and feedback.

To learn more about Git with all the JetBrains IDEs, watch the YouTube series covering Git Questions.

Please report issues and problems to our tracker. Thanks!

The DataGrip Team

Posted in Features, Tutorial

DataGrip 2018.3 is Released!

Hello everybody!

We are excited to release DataGrip 2018.3 today! As usual, we thank all our early adopters who have helped us make the IDE better during this release cycle. The most active ones have already received their free DataGrip subscriptions as a token of our appreciation.

If you wish to see the detailed overview of this update, please visit our What’s new page. Read on for a list of all the enhancements waiting for you in 2018.3.

Database objects

— Cassandra database supported
— Generate SQL files for the selected objects
— Support for extensions in PostgreSQL
— Support for stored procedures in PostgreSQL 11
— Statistics in Quick Doc
— Use drop cascade syntax option
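As a reminder of what the drop cascade option generates, here is a PostgreSQL-style sketch (the table name is made up):

```sql
-- A plain drop fails if dependent objects (e.g. views) exist
DROP TABLE customers;

-- With the option enabled, DataGrip generates the cascading form,
-- which also drops the dependent objects
DROP TABLE customers CASCADE;
```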



Code completion

— Postfix completion
— New setting to automatically add aliases when completing table names
— Not aggregated fields in GROUP BY
— All columns list in SELECT, MERGE and INSERT INTO table variable
— Named parameters of stored procedures
— Numeric fields in SUM() and AVG()
— FILTER (WHERE …) clause
— Option to Invert order of operands in auto-generated ON clause
— Window functions


Code generation

— Dialects for Live templates
— Column names hint appears automatically when using INS live template
— CREATE TABLE definition for SELECT




— Intention action to Introduce table alias
— Many fixes in Extract subquery as CTE action


Code insight

— Warnings about unsafe DELETE and UPDATE
— New inspection to detect unreachable code
— Unused subquery item inspection




Connections

— Single connection mode
— Auto-reconnect after timeouts

Search and navigation

— New Search Everywhere
— Multiline TODO comments
— Multiline search in Find/Replace in Path


User interface

— High-contrast color scheme
— Better UI for page size setting
— Color settings in the data source Properties dialog



You already know all of this, but still:
— Get your 30-day trial of DataGrip.
— Tweet @ us!
— Discuss anything in the forum.
— Report bugs to our issue tracker.

Thanks for reading this!

Your DataGrip Team
The Drive to Develop

Posted in Uncategorized

DataGrip 2018.3 Release candidate


All IntelliJ-based IDEs are nearing the 2018.3 release, and today we’ve got a DataGrip 2018.3 RC. Let’s have a look at what’s been added since the latest EAP.


Previously, each new query console meant a new connection. In the new version, it’s possible to work with just one connection for the data source and have all consoles use that same one connection. This allows you to see the temporary objects in the database tree, or use the same transaction in different consoles. This is our first step toward creating full connection management in DataGrip.

To turn it on, go to Data source properties -> Options -> Single connection mode.


We’ve also added the ability to define colors for data sources to the Properties dialog.


Another small improvement is a new inspection called Unreachable code.


That’s about it! Your feedback is welcome, as always, on Twitter, the forum, and the issue tracker. Let us know what you think about this new release!

Your DataGrip Team

Posted in Uncategorized

DataGrip 2018.3 EAP 4


The next build for DataGrip 2018.3 is here.

There is a new setting: automatically add aliases when completing table names. But, if the aliases we generated don’t really suit you, create your own for particular tables.


Here’s the example:


There are a lot of fixes around the Extract subquery as CTE action. For those of you who don’t know about it: it’s located under Refactor -> Extract -> Subquery as CTE. But we think it’s easier to use Find Action. Or assign a shortcut!

— The name suggested for the CTE doesn’t cause conflicts: DBE-6496
— The context is properly defined if a statement is wrapped in another expression (DBE-6503, DBE-6517)
— DataGrip won’t suggest column aliases for the extracted CTE in dialects that don’t support them (DBE-6490)
— This refactoring is enabled for MySQL, since CTEs are supported starting with MySQL 8
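To illustrate what the refactoring does, here is a before/after sketch with made-up table names:

```sql
-- Before: an inline subquery in FROM
SELECT t.dept, t.avg_salary
FROM (SELECT dept, avg(salary) AS avg_salary
      FROM employees
      GROUP BY dept) t;

-- After Extract subquery as CTE:
WITH t AS (SELECT dept, avg(salary) AS avg_salary
           FROM employees
           GROUP BY dept)
SELECT t.dept, t.avg_salary
FROM t;
```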


And a small fix for the INSERT live template: the column names hint now appears automatically, so there is no longer any need to press Ctrl/Cmd+P!


That’s it! Your feedback is welcome on Twitter, the forum, and the issue tracker. Let us know what you think about our new release!

The DataGrip Team

Posted in Uncategorized

DataGrip 2018.3 EAP 3


The next build for DataGrip 2018.3 is here.

We continue polishing Cassandra support. In this release, the Modify Table UI became clearer, and we’ve also fixed the bug with casing: now you can manage objects with capital letters in their names.

We would highly appreciate it if you tried out our EAP builds with Cassandra.

What else is here?

One of the most important SQL refactorings is finally here: Introduce alias. Take your stopwatch and count how much time it’ll save!


We’ve added the new setting called Invert order of operands in auto-generated ON clause:


If it’s off, the FROM table comes first in the JOIN condition.


If it’s on, vice versa:
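In terms of the generated SQL, the difference looks roughly like this (the tables are hypothetical):

```sql
-- Setting off: the FROM table's column comes first in the condition
SELECT *
FROM orders o
    JOIN customers c ON o.customer_id = c.id;

-- Setting on: the operands are inverted
SELECT *
FROM orders o
    JOIN customers c ON c.id = o.customer_id;
```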

A new inspection was added: Unused subquery item. It’ll help make your code neater.


That’s it! Your feedback is welcome on Twitter, the forum, and the issue tracker. Let us know what you think about our new release!

DataGrip team

Posted in Uncategorized

DataGrip 2018.3 EAP 2


Hello! The next build for DataGrip 2018.3 is here. Let’s see what’s inside.

Cassandra database

We’re gradually finding our feet with NoSQL. After adding ClickHouse support in 2018.2.2, we have now gone on to add support for Cassandra.


Please try it out and share your feedback with us! Now, before the release is set in stone, is the best time to let us know what you think, as improvements can still be made on the fly.


Generating files from objects

Now, if you use the SQL Generator (Ctrl/Cmd+Alt+G) to get the DDL from objects, you can also generate SQL files for these objects. To do this, press the Save button on the left-hand pane.

As you can see, we’ve created two layouts for the output. The UI is not quite finished yet, so if you have any ideas about what kind of settings you would like to see here, please share them with us.


You can also press the Edit button to the right of the Layout drop-down and edit either of them, or create your own. These layouts are just Groovy scripts.


Better foldings

Enhancements in this area:
— Single lines cannot be folded.
— Text for foldings now has more information.
— Expression/declaration lists can be folded:


Completion for window functions

Now DataGrip automatically adds OVER() and puts the caret in the appropriate place.
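For example, completing a window function in a made-up query yields something like:

```sql
-- After completing row_number, DataGrip inserts OVER () and places
-- the caret inside the parentheses, ready for the window definition
SELECT name, salary,
       row_number() OVER (PARTITION BY dept ORDER BY salary DESC) AS rank_in_dept
FROM employees;
```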

Drop cascade

We’ve added a ‘Use drop cascade syntax’ option when dropping an object from the database tree.

Page size

We’ve also reworked the UI for the page size a bit. Now it’s clear how to see all the rows from a table or a result.

That’s it! Your feedback is welcome on our Twitter, forum, and issue tracker. Let us know what you think about our new release!

DataGrip team

Posted in Uncategorized

DataGrip 2018.3 EAP


Hello! No long preambles, this is DataGrip 2018.3 EAP! Let’s see what’s inside.

Code completion

Many feature requests are implemented in this area.

When you write a GROUP BY clause, DataGrip offers you a list of non-aggregated fields.
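For instance, in a sketch like the following (hypothetical table), completion after GROUP BY would suggest dept and title, the non-aggregated columns of the select list:

```sql
SELECT dept, title, count(*) AS headcount
FROM employees
GROUP BY dept, title;  -- dept and title are offered by completion
```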


A list of all columns is available in completion after the SELECT keyword.


We also added this list to the MERGE statement, and it now works with INSERT INTO when using table variables.

Completion now works well for named parameters.


In the case of ambiguous names, DataGrip now shows you the context.


We also introduced Postfix completion, which might be familiar to those who use other IntelliJ-based IDEs.


Here is a short video of how all of them work.


Code completion also works for:

— Items in brackets. DBE-4246
— Numeric fields in SUM() and AVG() functions. DBE-2285
— FILTER (WHERE …) clause of an aggregate function. DBE-5591
— Table value types. DBE-5768
— Field types in SQLite. DBE-5660, DBE-6778

You can now choose a dialect for a live template, which means the template will only work in the specified dialect.


This ability lets you create different dialect-based implementations for the same abbreviation. Say, we want to create a template for getting the first n rows. In SQL Server and PostgreSQL, this query would have different syntax.

You can use the same abbreviation for different live templates if they are in different groups. Create two new groups: SQL Server and PostgreSQL. In each group, create a corresponding live template with the appropriate syntax.

Don’t forget to choose the right dialect for templates. Thus, they will work only in the needed context.
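For example, the two templates behind one abbreviation might expand to dialect-specific SQL like this (the table name is made up):

```sql
-- PostgreSQL group: a "first n rows" template expands to
SELECT * FROM employees LIMIT 10;

-- SQL Server group: the same abbreviation expands to
SELECT TOP 10 * FROM employees;
```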


Another small improvement: the Create table intention (did you know about it?) now respects field types and generates correct code. Press Alt+Enter on the name of the table you are going to insert data into.


The result is:


Code insight

Now DataGrip will warn you if you use a DELETE or UPDATE statement without a WHERE clause.
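A sketch of what triggers the warning (the table is hypothetical):

```sql
-- Flagged: this UPDATE affects every row because there is no WHERE clause
UPDATE employees SET salary = 0;

-- Not flagged: the statement is scoped to a single row
UPDATE employees SET salary = 0 WHERE id = 42;
```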



If you run such a statement, you will see a caution notice.


Other enhancements

We support extensions in PostgreSQL.


Quick Doc for a data source (Ctrl+Q on Windows/Linux, F1 on macOS) now contains some statistics: the number of objects of each type.


Also, the long-awaited auto-reconnect feature has been added: no more disconnections after timeouts!


Support for:
— Stored procedures in PostgreSQL 11. DBE-6863
— ROWS FROM in PostgreSQL. DBE-7098
— Vector-valued updates in SQLite. DBE-4449

Other fixes:
— If a JOIN has no condition, this error is highlighted. DBE-6759
— The ‘@’ symbol doesn’t break the spell checker, so it works in variable names. DBE-2250
— The JSON Groovy extractor works correctly with big numbers. DBE-7019
— Row count works if columns are sorted. DBE-2444
— The correct search path is set in the reopened data editor. DBE-7044
— The query plan visualiser works in Redshift. DBE-7129

That’s all for today. Your feedback is welcome on our Twitter or forum. By the way, starting from this version, there’s a handy new way to report problems: click Help -> Report problem… to create a new issue in our issue tracker.

DataGrip team.

Posted in Uncategorized