DataGrip 2018.2 EAP 3

Hello! We’ve prepared a new build for our early adopters. Here’s a list of interesting things you will find inside.

Running stored procedures

A long-awaited feature! It’s now implemented for Oracle and PostgreSQL; support for SQL Server procedures is coming soon.

Choose Execute from the context menu of the procedure.

ExecuteProc

DataGrip will generate the SQL code for the call. Put in the parameter values you need and press OK.

RunOracle

As you can see, DataGrip handles the output of this Oracle procedure: the generated SQL code retrieves the result set via the JDBC driver.
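Here is a sketch of the kind of code you might see generated for an Oracle procedure with an OUT parameter; the procedure and parameter names below are invented for illustration, not taken from the post.

-- Illustrative only: a generated anonymous block for a procedure
-- with one IN and one OUT parameter.
DECLARE
    O_TOTAL NUMBER;
BEGIN
    STORE.CALC_ORDER_TOTAL(P_ORDER_ID => 1042, O_TOTAL => O_TOTAL);
    DBMS_OUTPUT.PUT_LINE('Total: ' || O_TOTAL);
END;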

In cases where you run a simple procedure with no output, DataGrip will generate SQL code that just runs the procedure. Again, you’ll need to provide values for the parameters. Here is a PostgreSQL example.

RunPostgreSQL
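For a call with no result set, the generated code can be as simple as the following; the function name and argument values here are invented for illustration.

-- PostgreSQL functions are invoked via SELECT; fill in your own values.
SELECT archive_old_orders(30, true);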

Note that you can also run a procedure by clicking the button on the toolbar when its source code is open.

Code style per data source

As you perhaps already know, you can create different Code Style schemes.

ActionsOnCodeStyle

Now any of them can be assigned to a particular data source. To do this, go to the Options tab of the data source properties.

SetCodeStyle

That’s all for today. Take DataGrip 2018.2 EAP for a spin!

Your DataGrip Team


Using BigQuery from IntelliJ-based IDE

Continuing the series of posts on how to connect DataGrip (or any other IntelliJ-based IDE) to various data sources, in this post we’ll show you how to connect to Google’s BigQuery. BigQuery is a low-cost enterprise data warehouse designed to handle data analytics at a massive scale.

Currently, DataGrip does not come with a built-in connector for BigQuery, so let’s connect the IDE to it.

First, let’s get all the details needed to connect to BigQuery from the Cloud Console. Make sure you have the right project selected, then click the Create service account button. Type a name for the account, for example “datagrip”. In the Role section, select the BigQuery | BigQuery Data Viewer, BigQuery | BigQuery Job User, and BigQuery | BigQuery User roles, then check the Furnish a new private key box. Make sure that the Key Type is set to JSON, then click the Create button. A credentials file will be downloaded, and you can then click the Close button in the dialog.

Note: Keep this file somewhere safe, as anyone with the rights to read it will be able to connect to any BigQuery dataset your service account has access to.

DataGrip - BigQuery - Google Cloud Console

Now, back in the IDE, we need to add a new data source. Go to the Database tool window, select Data Source Properties, click the “+” button, and at the bottom of the list you’ll see an entry named Driver; click it.

Start by changing the name of the configuration to “BigQuery”. You will notice that there is a section called Driver files. You can download the BigQuery JDBC driver from this page. Download the “JDBC 4.2-compatible” zip file, named “SimbaJDBCDriverforGoogleBigQuery42_1.1.6.1006.zip” at the time of writing, then unpack it into a new directory.

Once you’ve downloaded the driver, go back to the IDE, and in the Additional files section, click “+” and add all the .jar files from the newly created directory. You can now select “com.simba.googlebigquery.jdbc42.Driver” from the Class drop-down. In the Dialect drop-down, select Generic SQL.

Next, let’s define the connection URL template. Click the “+” button under the URL templates section and add a new entry named “default” with the value “jdbc:bigquery://[{host::https\://www.googleapis.com/bigquery/v2}[:{port::443}]][<;,{:identifier}={:param}>]”. Then go to the Advanced tab, and in the OAuthType field, type “0” (zero).

DataGrip - BigQuery - Driver Config

With the driver configured, go to the “+” sign at the top left of the Data Sources and Drivers window and select “BigQuery”. Leave the User and Password fields empty.
If you don’t want to modify the data in BigQuery, or your user can only view the data without modifying it, then mark the connection as Read-only.

Click on the Advanced tab and add your service account to OAuthServiceAcctEmail. This is the email address we got in the earlier step in the Google Cloud Console. In OAuthPvtKeyPath, type the path to the file downloaded earlier from the Cloud Console. Finally, type the project ID in the ProjectId field.
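(A side note, and an assumption rather than something stated in the post: Google Cloud service account emails usually take the form “datagrip@<your-project-id>.iam.gserviceaccount.com”, given the account name “datagrip” chosen earlier.)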

Go back to the General tab, click the Test Connection button, and you should see a “Successful” message. This means you’ve configured everything correctly and can now connect to BigQuery.

Close the window by pressing the OK button, and now you’ll be presented with a database console ready for use.

DataGrip - BigQuery - Connection Config

Let’s run a few sample queries to see how everything works. For this, we’ll use one of BigQuery’s public datasets and borrow a query from the official documentation:
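(The query itself is embedded in the original post. As a stand-in, here is an example over the public Shakespeare sample dataset from Google’s documentation; the table and columns exist in the bigquery-public-data.samples dataset, but the exact query used in the post may differ.)

-- Count total occurrences of a few words across all of Shakespeare.
SELECT word, SUM(word_count) AS total_count
FROM `bigquery-public-data.samples.shakespeare`
WHERE word IN ('king', 'queen', 'crown')
GROUP BY word
ORDER BY total_count DESC;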

Press CTRL+ENTER and the query will start running.

Note: Every time you run a query against BigQuery which requires it to process data, you will pay for it. Make sure that you understand the data size and the query you are about to run before doing so. For more details about the pricing, please see the official BigQuery documentation or contact your Google Cloud administrators.

DataGrip - BigQuery - Run Query

And that’s it. Now you can use BigQuery just like any other database, with all the IDE’s power for completion, data export, and visualization. Please let us know in the comments below if you have any questions or feature requests; we look forward to your feedback.


DataGrip 2018.2 EAP 2

Hello! Now is a good time to take a look at the new EAP build for 2018.2, which will be released later this month. It comes with one notable feature and several bug-fixes.

Context info

When editing a huge procedure in a package, it’s sometimes useful to remind yourself of the context: which particular procedure or package is being edited. To do this, press Alt+Q (Shift+Ctrl+Q on macOS) to see the context information:

Context

Number of selected rows

Now DataGrip will show the number of rows selected in the data editor in the status bar.

Cells

Custom data source

We’ve added the ability to create a JDBC-driver-based data source from the drop-down invoked by the “+” button on the toolbar.

Driver

By the way, how do you like our new grayscale toolbar icons? :)

Bug fixes

DBE-6681: MySQL comments are now formatted properly
DBE-5204: DDL for a table opens in the right place
DBE-1858: Deleting the criteria from the Filter field shows all rows in the data editor

That’s it! Don’t forget about Twitter, our forum, and the issue tracker.
Your DataGrip Team


DataGrip 2018.2 EAP is Open

For those eager for the newest things, we invite you to try the Early Access Program for DataGrip 2018.2. As many of you know, these builds don’t require a license, as we want to encourage everybody to try the new features out before the official release.


New SQL formatter

Thanks to everyone who shared their ideas! We hope that DataGrip can now meet the needs of many more different code styles. This is a feature we really need feedback on, so please try out the new formatter and tell us if your specific cases are not covered. We are still working on new clauses to add. The start of the EAP is the best moment to make DataGrip better!

Formatter

SQL Log

First of all, you’ll now see every query that DataGrip runs in the console output, whether it’s your own SQL or something DataGrip needs to run internally. Check the Output tab to understand what’s going on.

SQLLogConsoleOutput

Secondly, literally all queries from the IDE are now logged in a text file.

SQLLogFile

To open this file, go to Help -> Show SQL log.

SQLLogShowInFinder

Source code migrations

Now, to update the source code of an object, just double-click it and make the changes you need. The gutter (the panel to the left of the text editor) will highlight the changes you’ve made. Pay attention to the toolbar: there is now a Commit button.

MigrationButtons

Once you’ve pressed Commit, the migration dialog appears. The SQL code to update the source code is generated there.

MigrationDialog

Query plan diagram

A diagram-based view is now available for the query plan.

ExplainGraphics

To view it, press the button on the toolbar after you invoke the Explain Plan action. As a reminder, it’s available in the context menu of a query in the SQL console.

ExplainButton

Run SQL on a schema

Previously, the only way to run a script was to execute it from the context menu of the corresponding SQL file. Now there is a more intuitive way: choose the data source or database you need and invoke Run SQL script from its context menu.

RunSQLSCript

That’s all for today! The release is going to be interesting, isn’t it?
DataGrip team.


DataGrip 2018.1.5

Hello! Before we start our EAP program for 2018.2, let us bring you a new minor update for 2018.1. Here’s the list of bug-fixes:

DBE-6502: YAML plugin is included in DataGrip.
DBE-6537: Now you can drag-and-drop database elements to Favorites.
DBE-6551: Data source colors work fine after project re-opening.
DBE-6544: POJO-generator works for views as well.
IDEA-191807: Now you can paste a file into the Scratches folder.

That’s it! Stay tuned!

DataGrip team.


How to use Git with DataGrip

In the software world, using version control to track and manage source code changes and collaboration has long been a solid habit. We have moved from centralized solutions like SourceSafe and TFS to tools that distribute the management, such as Mercurial and Git. While both centralized and distributed technologies have pros and cons, we will be looking at why database professionals need to use version control and how to do it with the popular tool Git.


Because DataGrip inherits from IntelliJ IDEA, we have the ability to use Git to store code, manage edits, and let a team collaborate on projects in a way that benefits everyone. There is no reason why your database SQL scripts should not be under source control right next to your team’s application code. We will dig into VCS and give you the knowledge to keep your database scripts and files backed up and up to date.

You can download DataGrip now to see how your team’s database work can benefit.

What is Version Control and Git?

If you are new to VCS or version control, just know that it is a way to track changes to one or more files, recording those changes as versions that can be referenced later when necessary. By having versioned (explicit or dated) changes, we can revert to previous versions of the code if anything goes wrong. We can also have labels that track released versions of our database, so that we can proceed with future development without losing our production code.

Having backups of production data and scripts is key to a well-run database, but it’s hard to keep the database history without running into issues like forgotten files scattered across your hard drive. You probably have a practice of overwriting your database’s backups with the latest data. The issue you will run into is losing good data that you may need in the future, or even corrupting your database schema. The other technique database professionals use is to save a date-stamped backup in a folder, but that just leads to a mess with lots of files to manage.

A database backup is just SQL code or other files that contain the data in a format the database engine can import, so why not handle it the same way you manage the rest of your code — in a source code system like Git? Setting this up is very simple so let’s walk through it.

As with any version control system, Git was created to keep a historical view of source code, and that includes database scripts and backups. As a distributed version control system, it is aimed at speed, data integrity, and support for non-linear workflows. You might be asking: what makes it distributed? The key to Git is that it tracks your changes both locally and remotely. When you check your code into Git, it stores those snapshot changes locally on your file system, in a local repository. That way you can make many changes, or back out of changes, without affecting anyone else who works with or depends on the code. When you are sure you want to make your changes available to others, you can “push” them to the remote repository. You can also “pull” others’ changes that have been pushed to the remote repository.

That is a very simple look at Git but if you want more details please review the Git documentation and tutorials.

Installing Git

To work with Git or any other VCS in DataGrip, we need to install the corresponding plugin. To do so, go to File | Settings… and select the Plugins page. Then click the Install JetBrains plugins… button.

DataGrip Settings

Search for “git” in the search box. The Git Integration plugin should show up first in the list; install it and restart DataGrip.

DataGrip install git plugin

Working with Git in DataGrip

Cloning a Project from GitHub

With the Git plugin installed, you can now clone a remote repository to your local file system. Cloning is a Git command that copies the latest file snapshots from the remote server to the local directory you specify.

In DataGrip, go to VCS | Checkout from Version Control | Git and you will be presented with the Clone Repository dialog.

DataGrip checkout from Git

For this example, we will clone the following Git Repository URL from GitHub to a local folder of your choosing (Parent Directory).

https://github.com/cwoodruff/sql-server-sakila.git

GitHub is a web-based Git version control repository hosting service. It is used for software code and other files, such as the database scripts we are working with in DataGrip. It offers all of the distributed version control and source code management functionality of Git. To learn more about GitHub and/or to sign up for a free account, go to GitHub.com.

DataGrip Git checkout confirm

After cloning, the files that were copied to your local filesystem will be present in DataGrip’s Files tool window.

Modifying a Database Script and adding a new file to the project

To show how we save changes to files stored in our local Git repository, and how new files are added, we will modify the schema file that was cloned from the remote repository and add a new *.sql script for a stored procedure added to the MSSQL database project.

Replace line 16 (“use sakila”) in the sql-server-sakila-schema.sql file with the following:

DataGrip SQL for creating database
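(The replacement code is shown only in the screenshot above. Assuming the intent is to create the database when it is missing before switching to it, a plausible reconstruction looks like this:)

-- Create the sakila database if it doesn't exist yet, then switch to it.
IF DB_ID('sakila') IS NULL
    CREATE DATABASE sakila;
GO
USE sakila;
GO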

Next, add a new file called sql-server-sakila-spInsertActor.sql to the project with the following code:
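(The stored procedure’s code is embedded in the original post and not reproduced here. As an illustrative sketch, assuming the column names of the Sakila actor table, it could look something like this:)

-- Hypothetical contents of sql-server-sakila-spInsertActor.sql.
CREATE PROCEDURE dbo.spInsertActor
    @first_name VARCHAR(45),
    @last_name  VARCHAR(45)
AS
BEGIN
    SET NOCOUNT ON;
    -- last_update records when the row was inserted.
    INSERT INTO dbo.actor (first_name, last_name, last_update)
    VALUES (@first_name, @last_name, GETDATE());
END
GO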

You now have two file changes, which we will commit locally and push to the remote repository.

Committing code changes to Local Git

You will now commit the two changes (along with your .idea folder and files) to your local Git repository. Go to VCS | Commit… to open the Commit Changes dialog. You will need to select the Unversioned Files so that the new stored procedure and the .idea folder are committed.

DataGrip Commit Changes dialog

A short, concise commit message that gets logged with your commit is required, and making it meaningful is a good idea. A good rule is just one sentence that clearly states what you did in the commit and why. I will use the following commit message: “Added sproc spInsertActor, updated schema, and .idea folder and files”.

Press Commit, and your files will be saved to your local repository, ready to be pushed to the remote repository. If DataGrip finds any issues with the code analysis it runs before committing, you will be prompted with a message and the commit will not be performed.

Pushing Commits to Remote Git on GitHub

Now that the commit has been processed and DataGrip did not report any errors or warnings, we can look at pushing the commit to the remote Git server. Push is the term Git uses for saving all local commits to the remote. In our example, the remote Git server resides on GitHub.

We will push our commit to GitHub by selecting VCS | Git | Push…

DataGrip push to Git

A Push dialog will appear, showing the commits and their details so you can review everything before pushing to the remote Git server at GitHub.

DataGrip Push commits dialog

DataGrip may prompt you at this point for your GitHub.com credentials before allowing your commit(s) to be pushed. If you are pushing your commits to another remote Git server, you may likewise be prompted for login information.

DataGrip GitHub Login

Pulling Changes from Others with Git

Since you can push your changes to the remote Git server to store them in the remote repository, it makes sense that there should be a way to get others’ commits from the remote repositories. That is the next feature of Git we will explore: the pull.

Pulling from Git is very similar to pushing, but the flow and analysis work from the remote to the local side. When the pull happens, Git (and possibly DataGrip) will analyze the code being brought back from the remote servers and detect any issues before saving those files to the local filesystem. A common complaint from Git is about pulling code when you have local commits that have not been pushed to the remote repository.

Suppose your coworker has updated one or more files in the remote GitHub repository for the project you are currently working on. You need to get those changes to have a complete, up-to-date database environment and to make sure you have the current scripts for testing. To get all updates from others, including your coworker, go in DataGrip to VCS | Git | Pull.

DataGrip pull from Git

After the pull is performed and no errors have been found, you will see the results of the pull. Here is an example of the results you would get if your coworker had updated the sql-server-sakila-spInsertActor.sql file and pushed it to GitHub.

DataGrip pull in Version Control pane

You can now work with the updated files and continue with your work.

Creating a Branch with Git

Now that you know how to commit, push, and pull using Git, we will look at one final feature: branching. A branch in Git is a local and remote snapshot of code files in which you can encapsulate your work. Maybe you are creating a new feature and you need to keep it isolated from other work you or others are doing. You can work and commit your updates on your branch and then push your commits to the remote repository, as your branch can also exist there. When you are done with your branch and need to get the code into the main area of the repository, you perform a merge locally and push that merge to the remote side.

Let’s look at a diagram that shows the workflow of a Git branch and merge. You will see the point where the branch is created from the master branch (Common base). The green points along the workflow show the commits that occurred during the branch’s lifetime, alongside the blue commits that happened at the same time from others. In the end, we have common merged code after a git merge has been performed (New merge commit).

Branch workflow diagram

In DataGrip we can create a new branch by going to VCS | Git | Branches…

DataGrip Branches

You will then get the Git Branches popup, which allows you to create a new branch. You can also see and access other local and remote branches, but that is beyond the scope of this tutorial.

DataGrip Git Branches popup

After you create your branch, you will see its name at the bottom right of DataGrip’s status bar. This is where you can also create branches or see what other branches are available on your local filesystem. The great thing about Git is that you can have more than one branch locally (just make sure not to confuse them). I created a new branch called Test, as you can see below.

DataGrip Branch selection

After you have committed and pushed your changes to the branch and tested that they meet your requirements, the work finally needs to be merged back into the master branch. To do that, switch to the branch you want the Test branch merged into; in this case, it is the Master branch.

DataGrip Checkout branch

Once you check out the Master branch and make it the current branch, you then need to merge the Test branch into it. The best way is to select the Master branch again and then Merge. This merges the only other branch currently in your local repository, which is the Test branch. DataGrip will ask whether you want to delete the Test branch after it has been merged into the Master branch. It is safe to let DataGrip delete it, since we will have no use for it after the merge.

DataGrip merge branch

At this point, you need to push the changes that are now in the Master branch back to the remote repository. Perform a Push on the Master branch, and the work that started in the Test branch and was merged locally into Master will now reside in the remote Master branch on GitHub.

DataGrip Push Commit for merge

Branching can be tricky, so please pay attention to what you are doing. It would be wise to create local and remote repositories (with GitHub hosting the remote) and experiment with Git branches. To learn more about branching, see the Git docs on Git Branching – Basic Branching and Merging.

Wrap Up

All code, whether it is C# files for an ASP.NET Core project, JavaScript files for a React/Redux application, or SQL scripts for a database, should be stored and tracked using a version control system. As you have learned in this tutorial, Git is a powerful distributed version control system that allows not only you, as a database professional, to organize, track, and work with branches, but also lets your entire team do the same thanks to the remote Git repositories that accompany your local repositories.

Download the latest DataGrip and try it out! We’d love to hear your thoughts and feedback!

To learn more about Git in all the JetBrains IDEs, go watch the YouTube series covering Git Questions.


Using AWS Athena from IntelliJ-based IDE

If you use the AWS cloud, then you probably store data on their S3 platform, either as application data or in the form of logs from services like AWS Elastic Load Balancing. Amazon offers Athena, a service built on Presto that allows you to query this S3 data using ANSI SQL syntax.

So how do you access this data straight from the IDE?

To do this, we’ll first need to do some configuration to connect the IDE with Athena.

Head to the Database tool window, select Data Source Properties, click the “+” button, and at the bottom of the list you’ll see an entry named Driver; click it.

Start by changing the name of the configuration to “Athena”. You will notice that there is a section called Driver files. Here you can import custom JDBC drivers to expand the IDE’s connectivity capabilities.

AWS provides a JDBC driver for use with Java, which you can download from this page. As our IDEs work on JDK 8, download the corresponding jar file, which at the time of writing is “AthenaJDBC42-2.0.2.jar”.

Once you’ve downloaded the driver, go back to the IDE, and in the Additional files section, click “+” and add the file we’ve just downloaded. New options should now be available in the Class drop-down; select “com.simba.athena.jdbc42.Driver” from it. In the Dialect drop-down, select Generic SQL.

In order for the IDE to connect to Athena, we can also define a connection URL template. Click on the “+” button under the URL templates section and add a new entry named default with the value:

jdbc connection
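(The value itself is shown only in the screenshot. As an assumption based on the Simba Athena driver’s documented URL format, with the region coming from the Host field, the template presumably looks something like “jdbc:awsathena://athena.{host}.amazonaws.com:443”.)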

Now that the driver is configured, go to the “+” sign at the top left of the Data Sources and Drivers window and select “Athena” as the driver. In the Host field, enter the AWS region you want to run the queries from, e.g. “eu-west-1”; in the User field, add your AWS Access Key ID, “AKIA…”; and in the Password field, your AWS Secret Access Key, “Go2018…”.

You can also mark the connection as Read-only since Athena does not support modifying data on the S3 buckets it queries.

Click on the Advanced tab and type in the location where the query results will be saved. By default, Athena saves them under a location similar to “s3://aws-athena-query-results-YourAWSAccountID-eu-west-1/”, but you can find yours via the Settings section in the Athena Console.

Go back to the General tab, click the Test Connection button, and you should see a “Successful” message. This means you configured everything correctly and can now connect to AWS Athena.

Close the window by pressing the OK button, and now you’ll be presented with a database console ready for use.

AWS Athena Custom Config - 1

Let’s run a few sample queries to see how everything works. For this, we’ll use the AWS Athena sample data for AWS ELB logs. If you decide to use a different data source, such as your own data in an S3 bucket your account has access to, make sure you also allow Athena to query the data, as explained in the official documentation.

First, run this query to create the table that we’ll use:
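(The DDL is embedded in the original post. The sketch below is a trimmed-down reconstruction of the ELB sample table from the AWS Athena documentation; the real statement lists more columns and the full log-parsing regex, so treat the column list, regex, and bucket region here as illustrative.)

CREATE EXTERNAL TABLE IF NOT EXISTS elb_logs (
    request_timestamp string,
    elb_name string,
    request_ip string,
    request_port int,
    backend_ip string,
    backend_processing_time double,
    elb_response_code string,
    url string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
    'serialization.format' = '1',
    'input.regex' = '...' -- the full ELB log regex is in the AWS docs
)
LOCATION 's3://athena-examples-eu-west-1/elb/plaintext/'; -- adjust the region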

Select the whole query and press Ctrl/Cmd+Enter. If you marked this connection as read-only, you’ll need to confirm that you want to execute this query.

Now, let’s pull some data from it using this query:
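(The exact query is in the original post; here is a simple stand-in that works against the sketch table defined above:)

-- Which load balancers answered the most requests, and with what codes?
SELECT elb_name,
       elb_response_code,
       COUNT(*) AS request_count
FROM elb_logs
GROUP BY elb_name, elb_response_code
ORDER BY request_count DESC
LIMIT 10;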

AWS Athena Custom Config - 2

Note: Every time you run a query against Athena which requires it to process data, you will pay for it. Make sure that you understand the data size and the query you are about to run before doing so. For more details about the pricing, please see the official AWS documentation or contact your AWS administrators.

And that’s it. Now you can use AWS Athena just like any other database, with all the IDE’s power for completion, data export, and visualization. Please let us know in the comments below if you have any questions or feature requests; we look forward to your feedback.


DataGrip 2018.1.4

Hello!

This update fixes just one bug, but an important one: DBE-6451. User parameters work well again.


Copy and share data sources in IntelliJ-based IDEs


Hello! In this tutorial we’re going to look at three popular use-cases:

— Duplicate a data source within one project.
— Share data sources between projects inside one IDE.
— Copy a data source from one IDE to another IDE, to another computer.

Duplicate a data source within one project

A trivial problem with a trivial solution: in the context menu of the selected data source click Duplicate or press Ctrl/Cmd+D.

Duplicate

A new data source will be created with the same properties, including the password.

Share data sources between projects inside one IDE

In IntelliJ-based IDEs, a data source can be global, which means it becomes visible from all projects you open in a particular IDE. By default, data sources are local. To make a data source global, click the corresponding button on the toolbar (or use the context menu).

MakeGlobal

If you want to make it local again, click the button again.

MoveToProject

Global data sources are visible in all projects you open from the same version of the same IDE. Say, if you use DataGrip 2018.1, global data sources will not be attached to projects you open in other versions of DataGrip or in IntelliJ IDEA.

Copy a data source from one IDE to another IDE, or to another computer

Global data sources can be copied via Export Settings. But what if you don’t want to export all settings? And what if you don’t use global data sources?

The first and most convenient way to copy data sources to another IDE: select the ones you need and, in the context menu, choose Database tools → Copy data source to clipboard.

CopyToClipboard

In fact, an XML snippet like the following is copied to the clipboard:

XML

You can send this snippet to your colleague via a messenger, or just import it to another IDE on your machine. To do this, click Add data source (+) → Import from clipboard.

ImportFromClipboard

The data source is copied without the password, so you’ll need to enter it manually. DataGrip stores passwords in the KeePass storage on Windows and in the native keychains on Linux and macOS. You can configure the IDE to store passwords in KeePass on any OS: to do this, go to Settings/Preferences → Appearance and Behavior → System Settings → Passwords.

Passwords

If for some reason that flow doesn’t meet your needs, here is something for advanced users.

DataGrip, like any other IDE from JetBrains, works in the context of a project. The concept of a project is not something database developers are used to, which is why DataGrip creates a default project once you run the IDE. You keep using this default project unless you create a new one.

The information about all the data sources you have in a project is stored in the dataSources.xml file inside that project.

Where is this default project located?

Windows
<SYSTEM DRIVE>\Users\<USER ACCOUNT NAME>\.<PRODUCT><VERSION>\config\projects\default

Linux
~/.config/projects/default
~ is an alias for the home directory, for example, /home/john.

MacOS
~/Library/Preferences/projects

If you have created projects other than the default one, we assume you know where they are located. In any case, if you go to File → Open Recent → Manage projects, you’ll see all projects with their paths, except the one that is currently open.

So, if you open a project from another IDE, it will contain all the data sources, without passwords. They are stored in \.idea\dataSources.xml.

dataSourcesXML

If you make any changes to this file, they will affect the data sources you use. It can even be placed under VCS to synchronize the list of data sources between several users.

You might also notice the dataSources.local.xml file, where DataGrip stores schema visibility settings and other information that can vary from user to user. If a data source becomes global, the information about it is stored in the options folder, which is located at the same level as the projects folder.

That’s it! Please, comment if you have questions.


DataGrip 2018.1.3

Hello! This is another minor update for 2018.1. Here is the list of important bug-fixes:

Oracle
DBE-3195 — Auto-completion for object types
DBE-5340 — Compiling PL/SQL code works OK
Support for:
DBE-2581 — Conditional compilation flags
DBE-6027 — SET statements with multiple clauses
DBE-6196 — INSERT ALL
DBE-6378 — JSON keyword

PostgreSQL
DBE-6301 — The bug with creating foreign keys between tables is fixed
DBE-6391 — Fields are resolved in LOOP cycle

Redshift
DBE-6336 — MAXFILESIZE support for UNLOAD

SQL Server
DBE-6325, DBE-6370 — Columns for OUTPUT are resolved
DBE-6354 — Support for ORDER BY in subqueries

H2
DBE-6367 — CTE support
DBE-6368 — DROP CASCADE support

DataGrip team
