DataGrip 2018.1.5

Hello! Before we start our EAP program for 2018.2, let us bring you a new minor update for 2018.1. Here’s the list of bug-fixes:

DBE-6502: YAML plugin is included in DataGrip.
DBE-6537: Now you can drag-and-drop database elements to Favorites.
DBE-6551: Data source colors work fine after project re-opening.
DBE-6544: POJO-generator works for views as well.
IDEA-191807: Now you can paste file into Scratches folder.

That’s it! Stay tuned!

DataGrip team.


How to use Git with DataGrip

In the software world, version control has long been the standard way to track and manage changes to source code and to support collaboration. We have moved from centralized solutions like SourceSafe and TFS to distributed tools such as Mercurial and Git. While both centralized and distributed technologies have their pros and cons, we will look at why database professionals should use version control and how to do it with the popular tool Git.


With DataGrip’s inheritance from IntelliJ IDEA, we have the ability to use Git to store files, manage edits, and let a team collaborate on projects that benefit everyone. There is no reason why your database SQL scripts should not be under source control right next to your team’s application code. We will dig into VCS and show you how to keep your database scripts and files backed up and up to date.

You can download DataGrip now to see how your team’s database work can benefit.

What is Version Control and Git?

If you are new to version control (VCS), know that it is a way to track changes to one or more files, with each change recorded as a version that can be referenced later when necessary. With versioned (explicit or dated) changes we can revert to previous versions of the code if anything goes wrong. We can also use labels to mark released versions of our database, so that we can continue with future development without losing our production code.

Having backups of production data and scripts is key to a well-run database, but it’s hard to keep the database history without running into problems like forgotten files scattered across your hard drive. You probably have a practice of overwriting your database backups with the latest data. The issue you will run into is losing good data that you may need in the future, or even corrupting your database schema. The other technique database professionals use is to save a date-stamped backup in a folder, but that just leads to a mess with lots of files to manage.

A database backup is just SQL code or other files that contain the data in a format the database engine can import, so why not handle it the same way you manage the rest of your code — in a source code system like Git? Setting this up is very simple so let’s walk through it.

As with any version control system, Git was created to keep a historical record of source code, and that includes database scripts and backups. As a distributed version control system, it is aimed at speed, data integrity, and support for non-linear workflows. You might be asking: what makes it distributed? The key to Git is that it tracks your changes both locally and remotely. When you check your code into Git, it stores those snapshots of changes on your file system in a local repository. That way you can make many changes, or back out of changes, without affecting anyone else who works with or depends on the code. When you are sure you want to make your changes available to others, you can then “push” them to the remote repository. You can also “pull” others’ changes that have been pushed to the remote repository.

That is a very simple look at Git; if you want more details, please review the Git documentation and tutorials.

Installing Git

To work with Git or any other VCS in DataGrip, we need to install the corresponding plugin. Go to File | Settings…, select the Plugins page, and then click the Install JetBrains plugins… button.

DataGrip Settings

Search for git in the search box. The Git Integration plugin should appear first in the list; install it and restart DataGrip.

DataGrip install git plugin

Working with Git in DataGrip

Cloning a Project from GitHub

With the Git plugin installed, you can now clone a remote repository to your local file system. Cloning is a Git command that copies the latest file snapshots from the remote server to the local directory you specify.

In DataGrip, go to VCS | Checkout from Version Control | Git and you will be presented with the Clone Repository dialog.

DataGrip checkout from Git

For this example, we will clone the following Git Repository URL from GitHub to a local folder of your choosing (Parent Directory).

https://github.com/cwoodruff/sql-server-sakila.git

GitHub is a web-based hosting service for Git repositories. It is used for software code and other files, such as the database scripts we are working with in DataGrip. It offers all of the distributed version control and source code management functionality of Git. To learn more about GitHub and/or to sign up for a free account, go to GitHub.com.

DataGrip Git checkout confirm

After cloning, the files that were copied to your local filesystem will be shown in DataGrip’s Files tool window.

Modifying a Database Script and adding a new file to the project

To show how changes to files stored in our local Git repository are saved, and how new files are added, we will modify the schema file that was cloned from the remote repository and also add a new *.sql script for a stored procedure in the MSSQL database project.

Replace line 16 (“use sakila”) in the sql-server-sakila-schema.sql file with the following:

DataGrip SQL for creating database
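A minimal sketch of that replacement, assuming a SQL Server target where the sakila database may not exist yet (the exact script shown in the screenshot may differ), could look like this:

-- Hypothetical sketch: create the sakila database if it is missing, then switch to it
IF DB_ID(N'sakila') IS NULL
    CREATE DATABASE sakila;
GO
USE sakila;
GO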

Next, add a new file called sql-server-sakila-spInsertActor.sql to the project with the following code:
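For illustration, a minimal sketch of such a procedure, with parameter and column names assuming the standard Sakila actor table (the script in the original repository may differ), might be:

-- Hypothetical sketch: inserts a new actor; column names assume the Sakila actor table
CREATE PROCEDURE spInsertActor
    @first_name VARCHAR(45),
    @last_name  VARCHAR(45)
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO actor (first_name, last_name, last_update)
    VALUES (@first_name, @last_name, GETDATE());
END
GO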

You now have two file changes that we will commit locally and push to the remote repository.

Committing code changes to Local Git

You will now commit the two changes (along with your .idea folder and files) to your local Git repository. Go to VCS | Commit… to open the Commit Changes dialog. You will need to select the Unversioned Files as well, so that the new stored procedure and the .idea folder are committed.

DataGrip Commit Changes dialog

A short, concise commit message is required and gets logged with your commit. A good rule is one sentence that clearly states what you did and why. We will use the following commit message: “Added sproc spInsertActor, updated schema, and .idea folder and files”.

Press Commit, and your files will be saved to your local repository, ready to be pushed to the remote repository. If DataGrip finds any issues during the analysis it runs on the code before committing, you will be prompted with a message and the commit will not be performed.

Pushing Commits to Remote Git on GitHub

Now that the commit has been processed and DataGrip did not report any errors or warnings, we can look at pushing the commit to the remote Git server. Push is the term Git uses for sending local commits to a remote repository. In our example, the remote Git server resides on GitHub.

We will push our commit to GitHub by selecting VCS | Git | Push…

DataGrip push to Git

A Push dialog will appear. It shows the commits and commit details so you can review everything before pushing to the remote Git server at GitHub.

DataGrip Push commits dialog

DataGrip may prompt you at this point for your GitHub.com credentials before your commit(s) can be pushed. If you are pushing your commits to another remote Git server, you may be prompted for login information as needed.

DataGrip GitHub Login

Pulling Changes from Others with Git

Since you can push your changes to the remote Git server to store them in the remote repository, it makes sense that there should also be a way to get others’ commits from the remote repository. That is the next Git feature we will explore: the pull.

Pulling from Git is very similar to pushing, but the flow and analysis work from the remote to the local side. When the pull happens, Git, and possibly DataGrip, will analyze the code being brought back from the remote server and detect any issues before saving those files to the local filesystem. A common complaint from Git is pulling code while you have local commits that have not yet been pushed to the remote repository.

Suppose your coworker has updated one or more files in the remote GitHub repository for the project you are currently working on. You need to get those changes to keep your database environment complete and up to date, and to make sure you have the current scripts for testing. To get all updates from others, including your coworker, go in DataGrip to VCS | Git | Pull.

DataGrip pull from Git

After the pull completes and no errors have been found, you will see its results. Here is an example of what you would get if your coworker had updated the sql-server-sakila-spInsertActor.sql file and pushed it to GitHub.

DataGrip pull in Version Control pane

You can now work with the updated files and continue with your work.

Creating a Branch with Git

Now that you know how to commit, push, and pull using Git, we will look at one final feature: branching. A branch in Git is a local and remote snapshot of your code files in which you can encapsulate your work. Maybe you are creating a new feature and need to keep it isolated from other work you or others are doing. You can work and commit your updates to your branch and then push your commits to the remote repository, since your branch can exist there as well. When you are done with your branch and need to get the code into the main area of the repository, you perform a merge locally and push that merge to the remote side.

Let’s look at a diagram that shows the workflow of a Git branch and merge. You will see the point where the branch is created from the master branch (Common base). The green points along the workflow show the commits that occurred during the branch’s lifetime, alongside the blue commits made by others at the same time. In the end, we have common merged code after a git merge has been performed (New merge commit).

Branch workflow diagram

In DataGrip we can create a new branch by going to VCS | Git | Branches…

DataGrip Branches

You will then get the Git Branches dialog, which lets you create a new branch. You can also see and access other local and remote branches, but that is beyond the scope of this tutorial.

DataGrip Git Branches popup

After you create your branch, you will see its name at the bottom right of the DataGrip status bar. From here you can also create branches or see what other branches are available on your local filesystem. The great thing about Git is that you can have more than one branch locally (just make sure not to confuse them). I created a new branch called Test, as you can see below.

DataGrip Branch selection

Once you have committed and pushed your changes to the branch and verified that they meet your requirements, they need to be merged back into the master branch. To do that, go to the branch you want the Test branch merged into; in this case, it is the Master branch.

DataGrip Checkout branch

Once you check out the Master branch and make it the current branch, you need to merge the Test branch into it. The best way is to select the Master branch again and then Merge. This merges the only other branch currently in your local repository, which is the Test branch. DataGrip will ask whether you want to delete the Test branch after it has been merged into the Master branch. It is safe to let DataGrip delete the branch, since we will have no use for it after the merge.

DataGrip merge branch

At this point, you need to push the changes that are now in the Master branch back to the remote repository. Perform a push on the Master branch, and the work that started in the Test branch and was merged locally will now reside in the remote Master branch on GitHub.

DataGrip Push Commit for merge

Branching can be tricky, so please pay attention to what you are doing. It would be wise to create local and remote repositories (with GitHub hosting the remote) and experiment with Git branches. To learn more about branching, see the Git docs on Git Branching – Basic Branching and Merging.

Wrap Up

All code, whether it is C# files for an ASP.NET Core project, JavaScript files for a React/Redux application, or SQL scripts for a database, should be stored and tracked using a version control system. As you have learned in this tutorial, Git is a powerful distributed version control system that allows you, as a database professional, to organize and track your work and use branches, and it allows your entire team to do the same thanks to the remote Git repositories that go along with your local ones.

Download the latest DataGrip and try it out! We’d love to hear your thoughts and feedback!

To learn more about Git with all the JetBrains IDEs, watch the YouTube series covering Git Questions.


Using AWS Athena from IntelliJ-based IDE

If you use the AWS cloud, then you probably store data on their S3 platform, either as application data or in the form of logs from services like AWS Elastic Load Balancing. Amazon offers Athena, a service built on Presto, which allows you to query this S3 data using ANSI SQL syntax.

So how do you access this data straight from the IDE?

To do this, we’ll first need to do some configuration to connect the IDE with Athena.

Head to the Database tool window, select Data Source Properties, and click the “+” button. At the bottom of the list you’ll see an entry named Driver; click on it.

Start by changing the name of the configuration to “Athena”. You will notice that there is a section called Driver files. Here you can import custom JDBC drivers to extend the IDE’s connectivity capabilities.

AWS provides a JDBC driver for use with Java, which you can download from this page. As our IDEs run on JDK 8, download the corresponding jar file, which at the time of writing is “AthenaJDBC42-2.0.2.jar”.

Once you have downloaded the driver, go back to the IDE and, in the Additional files section, click “+” and add the file you’ve just downloaded. New options should now be available in the Class drop-down; select “com.simba.athena.jdbc42.Driver” from it. In the Dialect drop-down, select Generic SQL.

In order for the IDE to connect to Athena, we can also define a connection URL template. Click on the “+” button under the URL templates section and add a new entry named default with the value:

jdbc connection
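Based on the Simba Athena JDBC driver documentation, the template should look roughly like the line below, where {host} is replaced by the region you enter in the Host field; treat the exact form as an assumption and check the driver documentation for your version:

jdbc:awsathena://athena.{host}.amazonaws.com:443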

Now that the driver is configured, you can go to the “+” sign at the top left of the Data Sources and Drivers window and select “Athena” as a driver. In the Host field, enter the AWS region you want to run the queries from, e.g. “eu-west-1”; in the User field, add your AWS Access Key Id, e.g. “AKIA…”; and in the Password field, the AWS Secret Access Key, e.g. “Go2018…”.

You can also mark the connection as Read-only since Athena does not support modifying data on the S3 buckets it queries.

Click the Advanced tab and enter the location where the query results will be saved. By default, Athena saves them under a location similar to “s3://aws-athena-query-results-YourAWSAccountID-eu-west-1/”, but you can find yours via the Settings section in the Athena Console.

Go back to the General tab and click the Test Connection button; you should see a “Successful” message. This means you have configured everything correctly and can now connect to AWS Athena.

Close the window by pressing the OK button, and now you’ll be presented with a database console ready for use.

AWS Athena Custom Config - 1

Let’s run a few sample queries to see how everything works. For this, we’ll use the AWS Athena sample data for AWS ELB logs. If you decide to use a different data source, such as your own data in an S3 bucket your account has access to, make sure you also allow Athena to query that data, as explained in the official documentation.

First, run this query to create the table that we’ll use:
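The statement defines an external table over the public sample bucket. A heavily abbreviated sketch, assuming the eu-west-1 sample bucket and a simplified column list and regex (the full definition in the AWS documentation has many more columns), looks like this:

-- Abbreviated, hypothetical sketch; see the AWS Athena docs for the complete ELB logs table definition
CREATE EXTERNAL TABLE IF NOT EXISTS elb_logs (
  request_timestamp string,
  elb_name string,
  request_ip string,
  backend_ip string,
  elb_response_code string,
  request_url string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  'input.regex' = '([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) (.*)'
)
LOCATION 's3://athena-examples-eu-west-1/elb/plaintext/';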

Select the whole query and press Ctrl/Cmd+Enter. If you marked this connection as read-only, you’ll need to confirm that you want to execute this query.

Now, let’s pull some data from it using this query:
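For example, a simple query against the table sketched above (column names are taken from that sketch) might be:

-- Pull a small sample of rows from the elb_logs table defined above
SELECT elb_name, request_ip, request_url
FROM elb_logs
LIMIT 100;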

AWS Athena Custom Config - 2

Note: Every time you run a query against Athena which requires it to process data, you will pay for it. Make sure that you understand the data size and the query you are about to run before doing so. For more details about the pricing, please see the official AWS documentation or contact your AWS administrators.

And that’s it. Now you can use AWS Athena just like any other database and have all the IDE power for completion, data export, and visualization. Please let us know in the comments below if you have any questions or feature requests; we look forward to your feedback.


DataGrip 2018.1.4

Hello!

This update fixes just one, but important, bug: DBE-6451. User parameters work correctly again.


Copy and share data sources in IntelliJ-based IDEs


Hello! In this tutorial we’re going to look at three popular use-cases:

— Duplicate a data source within one project.
— Share data sources between projects inside one IDE.
— Copy a data source from one IDE to another, or to another computer.

Duplicate a data source within one project

A trivial problem with a trivial solution: in the context menu of the selected data source click Duplicate or press Ctrl/Cmd+D.

Duplicate

A new data source will be created with the same properties, including the password.

Share data sources between projects inside one IDE

In IntelliJ-based IDEs a data source can be global, which means it is visible in all projects you open in a particular IDE. By default, data sources are local. To make a data source global, click the corresponding button in the toolbar (or use the context menu).

MakeGlobal

If you want to make it local again, click the button again.

MoveToProject

Global data sources are visible in all projects you open from the same version of the same IDE. For example, if you use DataGrip 2018.1, global data sources will not be attached to projects you open in other versions of DataGrip or in IntelliJ IDEA.

Copy a data source from one IDE to another IDE, or to another computer

Global data sources can be copied via Export Settings. But what if you don’t want to export all settings? And what if you don’t use global data sources?

The first and most convenient way to copy data sources to another IDE: select the ones you need and, in the context menu, choose Database tools → Copy data source to clipboard.

CopyToClipboard

In fact, an XML snippet is copied to the clipboard, like in the following example:

XML

You can send this snippet to your colleague via a messenger, or just import it to another IDE on your machine. To do this, click Add data source (+) → Import from clipboard.

ImportFromClipboard

The data source is copied without a password, so you’ll need to enter it manually. DataGrip stores passwords in the KeePass storage on Windows and in native keychains on Linux and macOS. You can configure the IDE to store passwords in KeePass on any OS: to do this, go to Settings/Preferences → Appearance and Behavior → System Settings → Passwords.

Passwords

If for some reason that flow doesn’t meet your needs, here is something for advanced users.

DataGrip, like any other JetBrains IDE, works in the context of a project. The concept of a project is not something database developers are used to, which is why DataGrip creates a default project once you run the IDE. You continue using this default project unless you create a new one.

The information about all data sources you have in a project is stored in the dataSources.xml file inside that project.

Where is this default project located?

Windows
<SYSTEM DRIVE>\Users\<USER ACCOUNT NAME>\.<PRODUCT><VERSION>\config\projects\default

Linux
~/.config/projects/default
~ is an alias for the home directory, for example, /home/john.

MacOS
~/Library/Preferences/projects

If you have created projects other than the default one, we assume you know where they are located. In any case, if you go to File → Open Recent → Manage projects, you’ll see all projects with their paths, except the one that is currently open.

So, if you open the project in another IDE, it will contain all the data sources, but without passwords. They are stored in \.idea\dataSources.xml.

dataSourcesXML

If you make any changes to this file, it will affect the data sources you use. It can even be placed under VCS to synchronize the list of data sources between several users.

You might also notice the dataSources.local.xml file where DataGrip stores schema visibility settings and some other information that can vary from user to user. If a data source becomes global, the information about it is stored in the options folder which is located at the same level as the projects folder.

That’s it! Please, comment if you have questions.


DataGrip 2018.1.3

Hello! This is another minor update for 2018.1. Here is the list of important bug-fixes:

Oracle
DBE-3195 — Auto-completion for object types
DBE-5340 — Compiling PL/SQL code works OK
Support for:
DBE-2581 — Conditional compilation flags
DBE-6027 — SET statements with multiple clauses
DBE-6196 — INSERT ALL
DBE-6378 — JSON keyword

PostgreSQL
DBE-6301 — The bug with creating foreign keys between tables is fixed
DBE-6391 — Fields are resolved in LOOP cycle

Redshift
DBE-6336 — MAXFILESIZE support for UNLOAD

SQL Server
DBE-6325, DBE-6370 — Columns for OUTPUT are resolved
DBE-6354 — Support for ORDER BY in subqueries

H2
DBE-6367 — CTE support
DBE-6368 — DROP CASCADE support

DataGrip team


DataGrip 2018.1.2

Hello! This is the second minor update for 2018.1.

And here is a list of important improvements.

— Access methods support in PostgreSQL.


— Run SQL script menu option for the schema.


— In the Power Save mode DataGrip shows icons.

We encourage you to use the free Toolbox App to install updates. It makes them seamless!


DataGrip 2018.1.1 is Released!

Hello! This is the first minor update for 2018.1.

Use the Check for updates action to receive the latest update if you are working with DataGrip 2018.1.

If you don’t get a notification about new updates please check the Updates page in the Settings / Preferences dialog. Make sure that the Automatically check updates for option is enabled.

Another crucial fix: the scrolling in 2018.1 became laggy. No more!

And here is a list of the other improvements included in 2018.1.1

— A new action is introduced: Context menu of the object → Database tools → Modify Comment.
DBE-5479: Explain Plan works in Redshift.
DBE-5987: Now it’s possible to use \N as null text in data import.
IDEA-186636: The keystrokes aren’t lost during freezing anymore.
IDEA-188229: Clicking the Replace all option from the Replace in path dialog doesn’t cause the IDE to freeze.


DataGrip 2018.1

Hello,

Please give a warm welcome to our new release, DataGrip 2018.1!

Splash

We thank all of the EAPers who have helped test all the new features added in this release. Let’s take a closer look at those.

Data editor
— SQL log
IDE general enhancements
— Rename everything
— Schema switch mode
— Enhancements in Navigation
— Target is remembered when running scripts
— Quick documentation for Files
Database view
— MariaDB support
— External schemas in Redshift
— Virtual schemas in Exasol
— Events in MySQL
— Partitioned tables in PostgreSQL 10
— Data source color affects associated files
SQL editing
— Navigate to next/previous occurrence
— Move element left or right
— Completion for file references
— CTEs in MySQL
— Folded code highlighting

Data editor

SQL log

Now, while updating data you can see which particular queries DataGrip is running. We have nothing to hide!

SQLLog

IDE general enhancements

Rename everything

Starting with this version, we have challenged ourselves to be able to rename all the objects we support. Sequences in Oracle, events in MySQL, triggers in PostgreSQL, and many other objects can now be renamed by pressing Shift+F6. This works both in the database tree and in an SQL script. Remember that usages of the object in other scripts will also be renamed.

Triggers

Schema switch mode

Now you can tell DataGrip how to switch schemas.

SwitchSchema

Automatic: DataGrip can switch schemas during internal operations when defining the context. This is the default behavior for all databases except PostgreSQL.
Manual: DataGrip cannot switch schemas during internal operations when defining the context. However, you can switch them manually via the switcher in the top-right corner. This is the default behavior for PostgreSQL.
Disabled: If you work with pg_bouncer, no more wildcat switches!

Navigation
We added an option called Prefer data editor over DDL editor.

Navigation

This option affects:
— Navigation to the table/view by Ctrl+N/Cmd+O.
— Navigation to the column by Shift+Ctrl/Cmd+Alt+N.
— Double-clicking on the table/view or column in the database tree view.
‘Navigate references to DDL editor’ is now called Prefer database view instead of DDL editor. As you probably know, it defines where you’ll be taken when you click Navigate to declaration (Ctrl/Cmd+B or Ctrl+Click) from the SQL.
We’ve merged two navigation actions: Go to class and Go to table.

EnterClassName

In DataGrip, you can now navigate to classes if you use any of the other supported languages.
In other JetBrains IDEs, you can finally see database objects here like tables, stored procedures, etc. Remember that there’s a filter by language.
Running scripts

Now when you run the file against a data source, your choice is remembered for further runs.

Execute

Quick documentation for files

FileQuickDoc
We added quick docs for files. For example, now it’s much easier to know the file size. But that’s not all!

Database view

We’ve added MariaDB to the list of supported databases.

MariaDB

DataGrip now supports external schemas in Redshift and virtual schemas in Exasol.

ExternalObjects

Events in MySQL are shown in the database tree. Source code generation is also available.

Events

DataGrip now supports partitioned tables in PostgreSQL 10.

Partitions

Data source color affects console files in the Files pane.

Consoles

SQL editing

Now DataGrip lets you navigate to the next or previous identifier occurrence by using two shortcuts, Alt + Wheel down and Alt + Wheel up. (Or, enter ‘highlighted element usage’ in Find Action.) Before, you could only navigate to the next text occurrence. It should be noted that text occurrence and reference are not always the same.

GoToUsage

Another new action in DataGrip is Move element left or right. This works for different kinds of lists, including parameters inside functions, fields in an INSERT statement, and the column list. Note that it doesn’t break the commas when working with the column list, while Move Line would.

MoveElements

SQLite commands used to spoil highlighting. No more.

SQLiteCommands

Completion for file references now works for Oracle and Exasol.

FileCompletion

Common table expressions are supported in MySQL.

CTE
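For reference, here is a small, hypothetical example of the kind of CTE DataGrip now understands in the MySQL dialect (the staff table and its columns are made up for illustration):

-- Hypothetical example: the staff table and its columns are illustrative only
WITH active_staff AS (
    SELECT staff_id, first_name, last_name
    FROM staff
    WHERE active = 1
)
SELECT * FROM active_staff;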

The Duplicate entire lines action now works at the end of the file as well. If you weren’t aware of this action, it duplicates all selected lines, adding a line break before each one. In many cases, it’s much handier than just using Duplicate (Ctrl/Cmd+D).

Duel

The IDE now highlights folded code regions that contain errors or warnings, and colors such blocks according to their validation status. Folded code will also be highlighted if it contains matches for the search results.

FoldingHighlights

Other

Features
— In the Replace in Path window, the IDE now lets you preview a regex replacement.
— When you search through the current file, the IDE now highlights folded code regions if they contain the matching occurrences.
— The REST client plugin is now available.
— Code formatter doesn’t change keyword casing by default.

Fixes
DBE-397: We don’t use driver-level read-only mode anymore. This lets you run safe statements that don’t begin with SELECT.
DBE-5969: No error is raised when inserting a row with all default values.
DBE-5999: The ability to sort items in the structure pane has been added.
DBE-2102: Several fixes are delivered concerning default constraints in SQL Server. Now it’s possible to alter and delete them.
DBE-5968: Sybase shows the correct lengths for nchar, nvarchar, unichar, and univarchar types.
DBE-4962: Cascade rules of foreign keys are now correctly displayed for SQL Server.
DBE-6028: Code folding works with CTEs.
IDEA-124160: Now, the Search Everywhere pop-up receives focus.

To try these goodies out, get your 30-day trial of DataGrip today!
We’re always listening to your feedback on Twitter, in our forum, and in the issue tracker. Rant, rave, and give us your improvement suggestions!
Your DataGrip Team
JetBrains
The Drive to Develop


DataGrip 2018.1 Release candidate

Hello! Here is our Release Candidate for 2018.1!


A few more fixes in this build:

DBE-6115: Scroll from editor works if horizontal split is used.
DBE-3548: The sequence start number is correct in PostgreSQL.
DBE-6104: Drop multiple columns works in H2.
DBE-5081: The REST client plugin is now available.
DBE-3260: Default values are added in the SSH configuration for data sources.

The release is coming next week!

DataGrip team.
