Announcing the MicroPython Plugin for PyCharm

Today we’ve released the MicroPython plugin 1.0 for PyCharm. This plugin lets you edit your MicroPython code and interact with your MicroPython-powered microcontrollers using PyCharm. It supports ESP8266, Pyboard, and BBC Micro:bit devices. The plugin is being developed as a team project by the PyCharm Community lead Andrey Vlasovskikh. The source code for the project can be found on GitHub.

MicroPython is a relatively new member of the family of Python interpreters. It’s basically a Python 3.5 implementation designed for microcontrollers — small computing devices that are used everywhere from smart watches to cars. People usually program microcontrollers in C or an assembly language due to low performance and memory limits. Thanks to clever optimization techniques implemented in MicroPython, you can now use (almost) standard Python for microcontrollers. For example, you can create your own Internet of Things device and program it in MicroPython.

The MicroPython plugin is compatible with both PyCharm Community and Professional editions. We’re going to make it available for IntelliJ IDEA soon as well. Let me walk you through the setup process and the features of the plugin using PyCharm:

We’ll be using an ESP8266-based device called WEMOS D1 mini. Basically, it’s a Wi-Fi chip with a couple of digital and analog I/O pins to connect external sensors and actuators. But for our simple demo, we won’t need anything besides the LED light that is already located on the device and is connected to the digital output pin 2.

This is our demo program which toggles the LED every second:
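A minimal sketch of such a program, assuming the `machine` and `utime` modules of MicroPython’s ESP8266 port (on the WEMOS D1 mini, digital pin 2 drives the on-board LED; the original snippet may differ):

```python
# Sketch only: this runs on the device, not on desktop Python.
from machine import Pin
import utime

led = Pin(2, Pin.OUT)  # on-board LED of the WEMOS D1 mini

while True:
    led.value(not led.value())  # toggle the LED
    utime.sleep_ms(1000)        # wait one second
```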

Let’s get started setting up our device!

First of all, make sure your OS can see your device via USB. This step is device-dependent. For WEMOS D1 mini on Windows and macOS you’ll need a serial port driver provided by the device vendor.

Next, we’ll set up PyCharm to work with your device. First, install the MicroPython plugin via “File | Settings | Plugins”. Then create a new Python project via “File | New Project…”. In PyCharm 2017.3, the new project dialog with the correct settings will look like this:


Make sure you’ve configured a Python 3.5 or newer interpreter for it (preferably a virtual environment), since the MicroPython plugin will later ask you to install a few Python packages to communicate with your device.

After that add a new file to your new project with the contents of our program above. Finally, enable MicroPython support for your project in “File | Settings | Languages & Frameworks | MicroPython” and specify your device there:

MicroPython Configurable

Now let’s see what the plugin has to offer.

Code Completion and Documentation

The MicroPython plugin provides code completion and documentation for MicroPython-specific modules:

MicroPython Code Completion

Notice that code completion is context-aware. In this screenshot, PyCharm shows you only the members of the utime module.

The quick documentation window contains the docs for the selected name. Use Ctrl+Q (F1 on macOS) to show this pop-up window. You can also dock it and enable “Auto-update from Source” to keep it permanently.

Syntax Checking and Type Checking

The plugin finds syntax errors and other problems in your code, such as potential AttributeError or ImportError, using static code analysis. It comes with Python stub files for device-specific binary modules. These stubs contain Python type hints that make it possible to check types in your MicroPython code:

MicroPython Type Checking

In the screenshot above, you can see several Python syntax errors when the user tries to write some C code in the middle of their Python file. There is also a type error in utime.sleep_ms(3.14), since this function only accepts integers.
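These checks work because the stubs are ordinary Python signatures with empty bodies. A simplified, illustrative sketch of what a `utime` stub might contain (not the plugin’s actual stub file):

```python
# Illustrative excerpt of a utime stub (e.g. utime.pyi) -- simplified sketch,
# not the file that ships with the plugin.
def sleep_ms(ms: int) -> None:
    """Suspend execution for the given number of milliseconds."""
    ...

def ticks_ms() -> int:
    """Return an increasing millisecond counter with an arbitrary reference point."""
    ...
```

With a signature like this available, a static checker can flag `utime.sleep_ms(3.14)` without ever running any code on the device.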

Flash Files to Devices

The MicroPython plugin helps you upload your files to your MicroPython device via USB. Use “MicroPython” run configurations to flash files or folders to your device via the “Run | Edit Configurations…” menu. To quickly upload a single file, you can select “Run ‘Flash <your-file-name>.py’” from the context menu of your Python file:

MicroPython Run Configuration

MicroPython REPL

Interactive experiments play an important role in Python development, but they are even more important with microcontrollers, which usually don’t have any screens to show possible errors. The MicroPython plugin allows you to quickly run an interactive Python REPL console. Use the “Tools | MicroPython | MicroPython REPL” menu to run a MicroPython shell on your device.

MicroPython REPL

I hope you enjoy this plugin. I’ll be glad to hear your feedback and how you’re using it. Tell me about your experience with it in the comments below, or on Twitter: @vlasovskikh or @pycharm. Star or fork the intellij-micropython repository on GitHub, and send us your issues and pull requests!

Posted in Cool Feature | Tagged | 11 Comments

PyCharm 2017.3.3 RC

Today we’ve published the PyCharm 2017.3.3 Release Candidate, which has a number of important fixes specific to PyCharm as well as numerous platform fixes. If you’d like to try them, you can now get the release candidate from the Confluence page.

Some of the fixes in this build:

  • Fixes for false positives in SQLAlchemy support
  • A fix for Python Console not showing a prompt after calling input()
  • A fix for freezes on closing the IDE while connecting to an SSH host
  • Various fixes for JavaScript and database support
  • And more, see the release notes for details

If you use multiple JetBrains applications, you can use JetBrains Toolbox to make sure all your JetBrains IDEs stay up to date. PyCharm is also available as a snap package. If you’re on Ubuntu 16.04 or later, you can install this PyCharm RC version by using:

sudo snap install [pycharm-professional|pycharm-community] --classic --candidate

Posted in Early Access Preview | Tagged | Leave a comment

Webinar: “MongoDB Quickstart with Python and PyCharm” with Michael Kennedy

Learn the most popular NoSQL document database with leading Python instructor and speaker Michael Kennedy. This webinar is adapted from Michael’s free online course.

  • Tuesday, January 30, 2018
  • 5:30 PM – 6:30 PM CET (11:30 AM – 12:30 PM EST)
  • Register here

mongodb_webinar_register

Python is great for web development and MongoDB is a popular datastore for JSON. How do these two work together, using PyCharm as your IDE?

Michael Kennedy brings material from his acclaimed MongoDB course, which uses PyCharm, over to a webinar format. He’ll quickly go through the basics, sharing a repository before the course, and letting you work along and ask questions. If you’re looking to get oriented with NoSQL development, this webinar is for you.

The outline includes:

  • Explore the benefits of document databases with MongoDB
  • Learn how to map Python classes to MongoDB documents
  • Use an ODM (object-document mapper) to add structure and requirements to a schemaless database
  • See something useful built with MongoDB, live

Speaking to you

Michael Kennedy is an author, instructor, and speaker. He is the host of the Talk Python To Me podcast and founder of Talk Python Training, an online training company for Python developers. Michael has taught over 100 week-long training courses to professional software development teams throughout the world and is an expert in Python, MongoDB, and .NET among other technologies.

Posted in Webinar | Leave a comment

PyCharm 2017.3.2 Out Now

To close off 2017, we’re releasing PyCharm 2017.3.2! Get it now from our website.

New in this version:

  • Type annotations ahead of time. You can’t put an inline type annotation when defining a variable in a for or a with statement, so you need to provide the annotation in a separate place; read more on the ticket. This is now supported in PyCharm.
  • ‘Show command line afterwards’ now works correctly for remote interpreters. And further fixes to this feature
  • Various SQL syntax fixes for PostgreSQL and Oracle (PyCharm Professional Edition bundles all DataGrip features)
  • React code completion improvements (PyCharm Professional Edition bundles all WebStorm features)
  • And more, see the release notes for details

If you use multiple JetBrains applications, you can use JetBrains Toolbox to make sure all your JetBrains IDEs stay up to date. PyCharm is also available as a snap package. If you’re on Ubuntu 16.04 or later, you can install the latest PyCharm version by using:

sudo snap install [pycharm-professional|pycharm-community] --classic

Posted in Release Announcements | Tagged | 3 Comments

Creating a Python Development Environment on Amazon EC2

In the last two blog posts of this series we discussed how to set up a local VM-based development environment for a cloud application, and then built a Flask-RESTful app within this environment. Today, we’ll take our app to AWS, and we’ll set up a remote development environment.

The environment we’ll describe here is configured for development, not production. If you’re interested in seeing how to prepare this application for production, let me know in the comments!

This blog post was written on Ubuntu; it should work as well on macOS. There are some difficulties on Windows with the SSH configuration for Terraform.

The Environment

We’ll create a two-tier environment on AWS: a web server and a database server. In line with best practices, we’ll open only the ports that are absolutely necessary. To that end, we’ll add a third EC2 instance to be a ‘management host’. We’ll use it as an SSH bastion to connect to the other machines, and we’ll also run Ansible configuration changes from this box.

Infrastructure Overview

As only the management and web hosts need to be exposed to the internet, we can put the database host in a private subnet. Not shown in the diagram is the NAT gateway that’s necessary so the DB host can reach the internet; without it, installing PostgreSQL on that host would be very hard.

Terraform

Now that we know the configuration we want on AWS, we need a way to make it happen. We could manually go into the AWS Console and configure everything from there. However, that would be hard to reproduce, and impossible to version control.

The next options come from AWS themselves: the CLI and CloudFormation. The CLI makes scripting easy, but it’s hard to make scripts that are idempotent and allow for easy changes. CloudFormation is a solution that allows us to describe a desired infrastructure in JSON which can then be applied to AWS.

Terraform is software from HashiCorp, the same people who make Vagrant and Consul. It allows us to write our desired state in a language that’s a lot more user-friendly than CloudFormation JSON. As it’s what a lot of the cool kids are using today, we’ll use this for the grouporder project.

Describing the Configuration

I’ve split the code into two repositories: the project itself, and the infrastructure. The infrastructure essentially consists of three files: the network setup, the instances, and some additional provisioning details. Terraform reads all files in the directory and essentially combines them into one big file. Splitting the files is just to make it easier for us to find things.

The networking setup creates the public and private subnets within our virtual private cloud (VPC). By creating a new VPC this application is fully separated from other applications in my AWS account. Furthermore, we’ll have internal IP addresses we can use to communicate between the EC2 instances for this project.

Any EC2 instance launched in the public subnet will receive both a private IP address (in the 10.0.x.x range) and a public IP address. The instances in the private subnet will only get a private IP.

Let’s have a closer look at the configuration of the database host (which is in the private subnet):

It is all pretty simple. We first look up the AMI ID of a current Ubuntu 16.04 LTS image, and then we describe the instance we want to launch. It is easy to see here how Terraform allows us to link pieces of our configuration together. In the networking file I’ve defined an aws_subnet, which I gave the name private_subnet, and we’re accessing its id attribute to instruct Terraform to launch this EC2 instance in that subnet.
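The shape of the configuration being described is roughly the following (a sketch only; the resource names and AMI filter are assumptions, not the repository’s exact contents):

```hcl
# Sketch: look up a current Ubuntu 16.04 LTS AMI...
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]  # Canonical's AWS account

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
  }
}

# ...and launch the database instance in the private subnet
resource "aws_instance" "database" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.micro"

  # Link to the subnet defined in the networking file
  subnet_id = "${aws_subnet.private_subnet.id}"
}
```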

Let’s take a look at the security group for our database server, as it’s the most interesting one:

We’re allowing SSH and SQL connections in, but only from the security groups we specify. So, for example, an SSH connection from the web host would be denied, even though the web host is in the same subnet as the management host. As we’re writing full security groups, Terraform needs us to specify explicitly that our DB host can communicate with the outside world.
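The rules described above can be sketched like this (the security group names are assumptions for illustration):

```hcl
# Sketch only -- resource names are illustrative, not the repo's exact ones
resource "aws_security_group" "db_sg" {
  vpc_id = "${aws_vpc.main.id}"

  # SSH in, but only from the management host's security group
  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = ["${aws_security_group.management_sg.id}"]
  }

  # PostgreSQL in, but only from the web host's security group
  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = ["${aws_security_group.web_sg.id}"]
  }

  # Outbound access must be stated explicitly (used via the NAT gateway)
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```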

For the sake of ‘brevity’ I won’t go through the rest of the Terraform configuration here, but it’s all in the repo, and if you have any questions, let us know in the comments!

Applying the Configuration

Please note that we’re using a NAT gateway, which is not included in AWS’s free tier, so starting this configuration will cost you money. The total cost should be under 10 cents per hour, with the NAT gateway surprisingly being over half the cost.

Now that that’s out of the way, let’s get started. To be able to do this you’ll need:

  • An AWS account
  • An IAM role with the appropriate permissions (at least EC2FullAccess and VPCFullAccess), and an access key
  • Terraform, get it here, and then place it in a directory that’s on your PATH

Terraform will use the AWS credentials you’ve configured for the AWS CLI. If you don’t have the CLI installed, you can manually create a file in ~/.aws/credentials with the following contents:
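The file follows the standard AWS shared-credentials format; replace the placeholder values with your own access key pair:

```ini
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```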

Then check out the Terraform files from the infrastructure repository, and open them in PyCharm. Be sure to get the ‘HashiCorp Terraform’ plugin, to get code completion in the Terraform files.

When starting a new project, or when you add new providers (like ‘AWS’ and ‘Templating’) in Terraform, you’ll need to run terraform init to make Terraform configure these. So open the Terminal in PyCharm (Alt+F12) and run terraform init inside the project folder.

Before applying, you need to make sure that you have a private key loaded in your SSH agent and have uploaded the public key to AWS. Without the key, you won’t be able to provision or access the EC2 machines.

After Terraform has initialized, you can run terraform apply, which will first check the current state on AWS, and then ask you whether to apply changes to make AWS look like the state described in the Terraform files. When you start the command, Terraform will ask about any variables that are required for the configuration. For the grouporder-aws configuration, two variables are required: the desired AWS region, and your AWS Key name. Please keep in mind that public keys are region specific, so you should choose the same region where you’ve uploaded your key.

If you got the PyCharm plugin, you can use a run configuration for this. Otherwise, just run terraform apply on the command line, and answer yes when it asks whether or not to apply the changes.

Infrastructure and Software

The Terraform files configure the infrastructure and then also kick off provisioning. For this setup, we’re checking out the grouporder repository on the management machine, and then using Ansible to set up the software on all three EC2 instances.

The Ansible configuration is mostly in the grouporder repository, which also contains a Vagrantfile to make it possible to have everything run on a single VM for local development. Ansible “roles” enable us to describe certain server behaviors, which we can choose to deploy to machines as described in the inventory file.

To tell Ansible about our AWS configuration, we’re using Terraform to fill out a template of an Ansible inventory file, which is then transferred to the server:
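Such a template might look roughly like the following (the group names and template variables are assumptions for illustration; Terraform substitutes the `${...}` placeholders before uploading the file):

```ini
# inventory.tmpl -- illustrative sketch of a templated Ansible inventory
[web]
${web_private_ip}

[database]
${db_private_ip}
```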

Although Ansible is used to configure the Python environment on the web server, we’re not actually checking out the code there, because we’ll soon connect to the machines with PyCharm to start developing in this environment.

Setting up PyCharm

Now that we have the environment spun up in AWS, we’d like to get started with developing on the cloud. So let’s hook up PyCharm!

Due to our network settings, we’ll need to connect through the management host to both our web and database hosts. PyCharm 2017.3 can read an SSH config file to set up this kind of connection. Let’s use a Terraform template (ssh_config.tmpl in the repository) to generate a section of an SSH config file, which we can then copy over:

After Terraform completes, we get an ssh_config.out file. Open this file and copy-paste its contents into your ~/.ssh/config file.
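The generated section looks roughly like this (the IP addresses are illustrative; the database host’s private IP is whatever Terraform assigned):

```
# Illustrative sketch -- the real section is generated by Terraform
Host management
    HostName 52.59.10.20          # public IP of the management host
    User ubuntu

Host Web
    HostName 10.0.0.25            # private IP, reached through the bastion
    User ubuntu
    ProxyCommand ssh management -W %h:%p

Host database
    HostName 10.0.1.38
    User ubuntu
    ProxyCommand ssh management -W %h:%p
```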

If you already checked out grouporder for last week’s blog post, you can open that. If not: open the grouporder repo in PyCharm (VCS | Checkout from Version Control | GitHub). Make sure to mark the grouporder subfolder as a sources folder: right-click the folder | Mark directory as | Sources root. If you don’t set this up, the PYTHONPATH will be set wrong when we execute our code later.

After opening the project, go to Tools | Deployment | Configuration, and add a new SFTP server. As we’ve defined “Web” as a Host in the SSH config file, we can just use ‘Web’ for the hostname here. Choose ‘OpenSSH Config and authentication agent’ as the authentication type, and type ‘ubuntu’ as the username. We’ll use our remote home folder as the root path: /home/ubuntu.

Deployment Settings

Then, on the mappings page, add a mapping between the project folder on your machine, and a subfolder of the root path configured on the ‘Connection’ page:

Deployment Mappings

This will place our code in /home/ubuntu/grouporder. After this configuration, make sure that the following option is checked: Tools | Deployment | Automatic upload (always). By doing this, all changes we make will automatically be uploaded to our remote machine. Finally, right-click the project root folder in the project tool window, and choose ‘Upload to web’ from the context menu to upload the initial version.

We now need to find the internal IP address of our database server; you can find it in the SSH config after HostName. For me it happens to be 10.0.1.38, which makes the database connection string: postgresql://grouporder:hunter2@10.0.1.38/grouporder.

We’ll use a run configuration to apply the migrations to the database. Choose module name pgmigrate, the migrations subfolder as working directory, and provide as script parameters: migrate -t latest --conn postgresql://grouporder:hunter2@10.0.1.38/grouporder. For details, see the previous blog post. pgmigrate will not return anything if it successfully applied the migrations.

Let’s connect to the database to make sure that we’ve correctly applied the migrations. However, we didn’t expose the database to the world, so how can we do this? The answer is SSH tunneling!

Create a new database connection: View | Tool Windows | Database, then use the green ‘+’ icon to add a PostgreSQL data source. On the ‘General’ page, we’ll connect as if we’re on the database box, so the host is localhost (the database and username are both ‘grouporder’, and the password is ‘******’). Then we’ll go over to the SSH/SSL page and use database as the hostname; this is one of the hosts defined in the Terraform ssh_config segment. Just make sure to choose ‘OpenSSH config and authentication agent’ as the Auth type:

DB Connection AWS

You may want to rename it from grouporder@localhost to Grouporder AWS or something else that makes it clear that we’re not actually connecting to localhost.

After connecting, we should see all the tables in the database tool window.

At this point we’ve fully configured our development environment on AWS, and we can create a regular run configuration for our Flask application. Don’t forget to set host to 0.0.0.0 or we won’t be able to access it. So let’s get started:

Grouporder AWS Complete

If you want to play around with the application a little, see the end of the previous blog post. Everything that worked on Vagrant should work the same on AWS now.

And that concludes the third part of our developing for the cloud blog series. If you’re interested in more, let us know in the comments! For example, if there’s enough interest, I could write a blog post about making a production-ready version of this application.

Posted in Tutorial | Tagged , , , , , , , , , | 5 Comments

PyCharm 2017.3.2 RC

The PyCharm team is getting ready for the holidays and has some early Christmas presents for you: new features! If you can’t wait, try the release candidate version now! Download the RC from Confluence.

New in this version:

  • Type annotations ahead of time. You can’t put an inline type annotation when defining a variable in a for or a with statement, so you need to provide the annotation in a separate place; read more in the ticket. This is now supported in PyCharm.
  • ‘Show command line afterwards’ now works correctly for remote interpreters. And further fixes to this feature
  • Various SQL syntax fixes for PostgreSQL and Oracle (PyCharm Professional Edition bundles all DataGrip features)
  • React code completion improvements (PyCharm Professional Edition bundles all WebStorm features)
  • And more, see the release notes for details

If you use multiple JetBrains applications, you can use JetBrains Toolbox to make sure all your JetBrains IDEs stay up to date. PyCharm is also available as a snap package. If you’re on Ubuntu 16.04 or later, you can install this PyCharm RC version by using:

sudo snap install [pycharm-professional|pycharm-community] --classic --candidate

Posted in Early Access Preview | Tagged , | Leave a comment

Building an API using Flask-RESTful, and using the PyCharm HTTP Client

In the previous blog post in this series, I created a Vagrant VM and provisioned it with Ansible for Python development. In this part, we’ll have a look at a larger application which is configured in the same way.

In this post, I’ll introduce an application that I’ll be using for more blog posts in the future, so let me know in the comments if you have any cool ideas about what can be done to make this application better.

We’ll use the following:

The Application: Grouporder

Those of you working in an office will know the challenge of wanting to order food, and wanting to share the order with some coworkers. Many restaurants have minimum order amounts, and it’s also nice to have food together. That way if it ends up badly, you have others to share the misery with.

At JetBrains we have some Slack channels that help us arrange the orders, and I’m assuming many other companies have something similar. We’re a technology company, so today we’ll look into how we can solve this with code.

Our application should expose a REST API that allows us to create a group order, and then allows others to add the food they want to this order.

The routes we’ll need:

  • /users for registering users
  • /users/login for exchanging a username and password for a token
  • /restaurants to keep track of the places we can order food from
  • /restaurants/<id>/menu for the actual food items
  • /orders to define the actual orders

See the example_requests.http file for a complete walkthrough of defining a restaurant, its menu, and then placing an order through the REST API.

I’d like to use this code base in future blog posts, as there are many possible ways to extend it. If you have any cool ideas of what to do with it, let me know in the comments below! Please keep in mind that the code is not production ready. So use it at your own peril.

The Stack

We’ll be using Flask together with Flask-RESTful to create our API. We’ll both accept and return JSON, and Flask-RESTful takes care of most of this for us. To maintain separation of concerns, we’ll separate the API layer from the data layer.

The database for this project will be PostgreSQL, which we’ll communicate with using psycopg2 and plain SQL, because we can. If enough people are interested in the topic, a future blog post could look into how to change over to SQLAlchemy instead. Let me know in the comments if you’d like to read about this.

To version our database while not using an ORM, we’ll use pgmigrate, a tool that allows us to write migration scripts in SQL, and will handle the execution of the migrations.

Finally, we’ll use Vagrant to automatically create a reproducible VM for our development. We’ll use Ansible to provision the VM, so we can easily use the same provisioning on AWS in a later blog post.

Getting Started

Open PyCharm, and check out the repository (VCS | Checkout from Version Control | GitHub). After the project has opened, run vagrant up (Tools | Vagrant | Up) to bootstrap the development VM.

At this point, Vagrant will download an Ubuntu base box, as configured in the Vagrantfile. Afterwards it will provision the VM using Ansible, with all the roles from the roles folder, as specified in setup.yml. For more details, read the previous blog post.

When Vagrant has finished, we can set up our interpreter. Go to Settings | Project Interpreter, use the gear icon to choose ‘Add Remote’, and then add a Vagrant interpreter, with /home/vagrant/venv/bin/python as the interpreter path.

Vagrant Interpreter

The grouporder Python code is in a subdirectory of the repository, so we need to tell PyCharm this so it correctly resolves the code. Right-click the ‘grouporder’ folder, choose Mark Directory as | Sources Root:

Mark as Sources Root

Before we can start the Flask application, there’s one more thing to take care of. We need to migrate the database. Let’s create a run configuration for this:

Migrate Run Config

Use the downward facing arrow to select ‘Module name’, and then type pgmigrate. The parameters are: migrate --target latest --conn postgresql://grouporder:hunter2@localhost/grouporder. Then finally, make sure that the working directory is the migrations folder. When you run this configuration (with the green play icon), it should return nothing, and just say ‘Process finished with exit code 0’.

We can make sure that the migrations have been applied by connecting PyCharm to the database. Let’s go to the Database tool window (View | Tool Windows | Database) to configure the connection. Use the green ‘+’ to add a new PostgreSQL data source. If PyCharm tells you it’s missing a driver, just click the link and it will be installed automatically.

We exposed Postgres on the host’s port 5678 in the Vagrantfile, so we can connect to it by using localhost with that port. Of course the password is the ever-secure hunter2:

Database Settings

After clicking ‘OK’ we should see the database in the tool window, and if we expand the ‘tables’ node in the ‘public’ schema, we should see our application’s tables:

Database Tables

If you don’t, make sure that you’re connecting to port 5678 and not the default 5432; otherwise you may be connecting to a Postgres server running locally on your system.

Running the Application

Now that all setup is done, let’s create a run configuration for our application, and have a look to see how it works. Go to Run | Edit Configurations, and use the green ‘+’ to add a new Python run configuration:

Flask Run Config

Again use the downward facing arrow to change from ‘Script path’ to ‘Module name’, and put flask as the module name. Then add run --host=0.0.0.0 as the parameters. All further configuration is done with environment variables. Use the ... button next to the environment variables field to add the necessary variables:

Environment Variables

The database connection is configured with a postgres URL: postgres://grouporder:hunter2@localhost/grouporder. Finally, make sure to check the ‘Single instance only’ checkbox in the top-right corner of the run configuration. We only have one port 5000, so we can’t run multiple instances of the same server.

Now we can finally use our application for its intended purpose. Let’s set up a user account in our application, and go through the process.

HTTP Requests

In PyCharm 2017.3, we’ve introduced a new HTTP client. In the new client you can simply write an HTTP request in the editor, and then run it right from there. Let’s have a look to see how this works.

To create a user account, we need to POST to the /users route. Press Ctrl+Alt+Shift+Insert (or Ctrl+N on macOS) to create a scratch file, choose ‘HTTP Request’ for the file’s type. Then we can start writing the request:

HTTP Request
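In text form, such a request might look like the following sketch (the JSON field names are assumptions about the grouporder API, not confirmed by the post):

```http
### Create a user account
POST http://localhost:5000/users
Content-Type: application/json

{
  "name": "Test User",
  "email": "test@example.com",
  "password": "hunter2"
}
```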

After using the green button to run the request, we can see the output in the run tool window. In scratch files we can also see the responses to our previous requests, by Ctrl+Clicking the line with the date and time of the request.

Now we can log in as the new user (we can write another request after the line with ‘###’), and we’ll get a token:

Login HTTP Request

During this session, we can use this token in an Authorization header to authenticate our requests. The token expires three hours after it’s created (see data/api/users.py). So let’s use our token to try to create a restaurant:

Failed HTTP Request
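In text form, an authenticated request of this kind might look roughly like the sketch below (the route body fields and the exact token format are assumptions for illustration):

```http
### Create a restaurant, using the token from the login response
POST http://localhost:5000/restaurants
Authorization: <token-from-the-login-response>
Content-Type: application/json

{
  "name": "Pizza Place"
}
```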

Unfortunately, the application doesn’t let us. Turns out we need a ‘can_manage_restaurants’ permission to make changes to restaurants or menus. And the lazy developer of this program (who is this guy anyway?) didn’t add a route to do so. So let’s manually change this in the database.

At this point we should be able to go to the users table, and check the can_manage_restaurants checkbox. Double-click the field to activate editing, and then click until it shows a checkbox. Afterwards, commit your changes to the database by clicking submit (or pressing Ctrl+Enter):

Database Meddling

When you’ve made the change, you should be able to re-run the HTTP request, and get 201 Created. To see the rest of the requests you need to make to add menu items, and to create an order, see the example_requests.http file. As long as you replace the Authorization tokens with the one you got from the login request earlier, you should be able to go through the whole process by clicking the green play icons in the gutter.

In the next post we’ll set up a development environment for this application in AWS. Continue reading the next part now.

Posted in Tutorial | Tagged , , , , , , , | Leave a comment

PyCharm Edu 2017.3: Enhanced Stepik integration, course filtering, better UX, and more

800x400_blog_PE_2017_3_

PyCharm Edu versioning and release timetables are now aligned with our other JetBrains products, to help us focus on delivering more continuous value. So we’re switching to an aligned 2017.3 version from the previous version 4.0.

Get PyCharm Edu 2017.3

Please note that all the PyCharm Edu 2017.3 features are also available in PyCharm for Python courses, as well as in IntelliJ IDEA for Java courses and in IntelliJ IDEA and Android Studio for Kotlin courses, with the help of the EduTools plugin.

PyCharm Edu 2017.3 new features

Enhanced Stepik integration for learners

New course management UI for educators

Continue reading

Posted in Education, Release Announcements, Tutorial | Tagged , | 2 Comments

PyCharm 2017.3.1

We have a couple of fixes and small improvements for you in PyCharm 2017.3.1. Update now on the website.

New in this version:

  • Several issues with running Python modules (-m) were resolved: running modules remotely, and showing the command line after running
  • Further issues with running code over SSH were resolved: you can now connect to IPv6 hosts from macOS, ProxyCommand: none is no longer misinterpreted, and the HostKeyAlgorithms option is parsed correctly (connecting to a Python interpreter over SSH is only supported in PyCharm Professional Edition)
  • Code insight for SQLAlchemy was improved, and the issue with ‘incorrect call arguments’ has been fixed
  • And more, see the release notes for details

If you use multiple JetBrains applications, you can use JetBrains Toolbox to make sure all your JetBrains IDEs stay up to date. PyCharm is also available as a snap package. If you’re on Ubuntu 16.04 or later, you can install PyCharm by using this command:

sudo snap install [pycharm-professional|pycharm-community] --classic


Developing in a VM with Vagrant and Ansible

One of the things that can make developing cloud applications hard is a mismatch between the development environment and the production environment. This is why one of the factors of the twelve-factor app methodology is maintaining dev-prod parity. Today we’ll start a blog series about developing cloud applications, and we’ll discuss how to set up a local development environment using Vagrant.

We’ll use these technologies for this application:

  • Vagrant
  • Ansible
  • Flask
  • Virtualenv
  • Ubuntu

Today we’ll just create a simple Flask application that’ll say ‘Hello world’. In the next post in this series, we’ll introduce a larger application that we’ll deploy to AWS in a future post.

If you want to follow along at home, you can find the code from today’s blog post on GitHub. See the commit history there to see the progress from the beginning to the end.

Getting Started

So let’s create a project, and get started. If you want to follow along, you’ll need to have Vagrant, Virtualbox, and PyCharm Professional Edition installed on your computer.

Open PyCharm, and create a new pure Python project.

The first step will be to set up the Vagrant VM, and configure the necessary items. In the project folder, run vagrant init -m bento/ubuntu-16.04. You can run commands within PyCharm by opening the terminal (Alt + F12).

This generates a Vagrantfile that contains only the base box that we’re using. If we ran vagrant up at this point, we’d get a plain Ubuntu server box. For our project, though, we’ll need to install some packages and expose some ports, so let’s add this to the Vagrantfile:
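The original post showed the file as a screenshot; a minimal Vagrantfile along these lines would do the job (the forwarded port 5000 matches Flask’s default development port, and the playbook filename matches the one we create below — treat both as assumptions):

```ruby
# -*- mode: ruby -*-
Vagrant.configure("2") do |config|
  # Base box, as generated by `vagrant init -m bento/ubuntu-16.04`
  config.vm.box = "bento/ubuntu-16.04"

  # Forward the Flask development port so the app is reachable
  # from a browser on the host machine
  config.vm.network "forwarded_port", guest: 5000, host: 5000

  # Install Ansible inside the guest and run the playbook there
  config.vm.provision "ansible_local" do |ansible|
    ansible.playbook = "setup.yml"
  end
end
```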

The ansible_local provisioner will install Ansible on the Ubuntu VM and then run it there, which means we don’t need to install Ansible on our host computer. Ansible lets us describe the desired state for a computer, and will then make the necessary changes to achieve that state. So let’s have a look at what’s necessary to install Python 3.6 on the VM.

Provisioning a VM with Ansible

Ansible works with Playbooks. These are YAML files that describe what state should be applied to what machines. Let’s create setup.yml, and try to install Python 3.6:
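The playbook from the original screenshot was roughly the following sketch (the task structure matches the description below; exact names are assumptions):

```yaml
# setup.yml
- hosts: all
  become: yes
  tasks:
    - name: Install Python 3.6
      apt:
        name: python3.6
        state: present
        update_cache: yes
```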

A playbook is a list of plays on the top level. We can configure per play which hosts we want to apply it to, whether we need to become another user, and a list of tasks. In our example, we apply the play to all hosts: there’s only one host in the Vagrant setup, so that’s easy enough. We also set become to yes, which has the effect of running our tasks with sudo.

The tasks are how we configure the desired state of our VM. We can name our tasks to make it easier to see what’s going on, but Ansible doesn’t technically need the names. The task we have here is just an instruction for Ansible to use the apt module, which is bundled with Ansible. We specify three options to the apt module:

  • The name of the package we’re interested in
  • The state we’d like the package to be in: present on the machine
  • Update the apt cache before installing

This last option basically means that Ansible will run apt update before running apt install, if necessary.

If you’re thinking that this is just a very elaborate way to write sudo apt update && sudo apt install python3.6, at this point you’re right. However, the value of Ansible is that you’re not describing actions, you’re describing a desired state. So the second time you run Ansible, it detects that Python 3.6 is already installed, and it won’t do anything. Idempotence is one of Ansible’s core principles. Another key benefit is that you can version control changes to your server configuration.

So let’s run vagrant up (Ctrl+Shift+A to Find action, and then type vagrant up), and we should have a VM with Python 3.6!

Trouble in Paradise

Unfortunately, Python 3.6 isn’t available from Ubuntu’s default package repositories. There are several ways to resolve this; the easiest is to find a PPA (Personal Package Archive) that provides Python 3.6.

A PPA that is mentioned in many places on the internet is Jonathon F’s PPA. So how would we go about adding this PPA using Ansible? It turns out there are two modules that can help us out here: apt_key and apt_repository. The apt_key module lets us specify the public key associated with the repository, to make sure any releases we get are really from Jonathon. And apt_repository then adds the repository to the apt configuration. So let’s add these two tasks to the playbook, before the install task (Ansible runs tasks in the order specified):
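A sketch of those two tasks (the PPA name is an assumption, and the key ID is a deliberate placeholder — look it up on the PPA’s page rather than copying it from here):

```yaml
    - name: Add the public key for the PPA
      apt_key:
        keyserver: keyserver.ubuntu.com
        id: "<PPA signing key ID>"  # placeholder: use the key published on the PPA page

    - name: Add the Python 3.6 PPA
      apt_repository:
        repo: ppa:jonathonf/python-3.6  # assumed PPA name; verify before use
```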

Now run vagrant provision (or Tools | Vagrant | Provision), to rerun the playbook. After completing, we should see the summary:

At this point, let’s create a requirements.txt with the libraries we’ll use today, in this case, just Flask:
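For today, the file contains a single line:

```
flask
```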

Most Linux distributions use the system interpreter themselves; that’s one of the reasons virtualenvs are best practice. So let’s create a virtualenv, and then install these packages into it. As the python3.6 package doesn’t include pip, we’ll first need to install pip. Then, using pip, we’ll need to install virtualenv into the system interpreter. After that we’ll be able to create a new virtualenv with the requirements we specify. To do this, specify at the end of the playbook:
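These steps map onto three tasks like the following (package names and the virtualenv location are assumptions for this sketch):

```yaml
    - name: Install pip
      apt:
        name: python3-pip
        state: present

    - name: Install virtualenv into the system interpreter
      pip:
        name: virtualenv
        executable: pip3

    - name: Create the virtualenv and install the requirements
      pip:
        requirements: /vagrant/requirements.txt
        virtualenv: /home/vagrant/venv
        virtualenv_python: python3.6
```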

First, we’re using the apt module to install pip. Then, we’re using Ansible’s pip module to install the virtualenv package. And finally we’re using the pip module again to now create the virtualenv, and then install the packages in the newly created virtualenv. Vagrant automatically mounts the project directory in the /vagrant folder in the VM, so we can refer to our requirements.txt file this way.

At this point we have our Python environment ready, and we could continue in the same way to add a database and anything else we might need. Instead, let’s have a look at how we can organize our playbook further. First, we’ve hardcoded paths containing ‘vagrant’, which prevents us from reusing the same playbook later on AWS. Let’s change this:

The first thing we can do is define variables for these paths. If the variable syntax looks familiar, that’s because it is: Ansible is written in Python, and uses jinja2 for templating.
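With variables, the play and the virtualenv task look something like this (the variable names are assumptions):

```yaml
- hosts: all
  become: yes
  vars:
    project_dir: /vagrant
    venv_dir: /home/vagrant/venv
  tasks:
    # ... earlier tasks unchanged ...
    - name: Create the virtualenv and install the requirements
      pip:
        requirements: "{{ project_dir }}/requirements.txt"
        virtualenv: "{{ venv_dir }}"
        virtualenv_python: python3.6
```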

If we were to add database plays to the same playbook, we’re mixing things that we may want to separate later. Wouldn’t it be easier to have these Python plays somewhere we can call them, and have the database plays in another place? This is possible using Ansible roles. Let’s refactor this playbook into a Python role.

Ansible roles are essentially a folder structure with YAML files that are used to specify the things necessary for the role. To refactor our plays into a Python role, we just need to create several folders: $PROJECT_HOME/roles/python/tasks, and then place a file called main.yml in that last tasks folder. Copy the list of tasks from our playbook into that file, making sure to unindent them:
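After the move, the file looks something like this — the tasks now sit at the top level of the file, while the surrounding play (hosts, become, vars) stays in the playbook:

```yaml
# roles/python/tasks/main.yml
- name: Install Python 3.6
  apt:
    name: python3.6
    state: present
    update_cache: yes
# ... the remaining tasks from the playbook, unindented the same way
```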

Afterwards, specify in the playbook which role to apply:
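The playbook then shrinks to the play definition plus a roles list, along these lines:

```yaml
# setup.yml
- hosts: all
  become: yes
  roles:
    - python
```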

That’s all there is to it! To make sure everything still runs smoothly, run vagrant provision once more to apply the refactored playbook to the VM.

Running Code from PyCharm

Now that we have a provisioned VM ready to go, let’s write some code!

First let’s set up the Python interpreter. Go to File | Settings | Project Interpreter. Then use the gear icon to select ‘Add Remote’, and choose Vagrant. PyCharm automatically detects most settings; we just need to enter the path to the Python interpreter inside the virtualenv we created:

Vagrant Interpreter


Now create a new script, let’s name it server.py and add Flask’s Hello World:
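The script from the original screenshot is, give or take, the standard Flask Hello World:

```python
# server.py -- Flask's classic Hello World
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello World!'

if __name__ == '__main__':
    # Bind to all interfaces so the app is reachable from outside the VM
    app.run(host='0.0.0.0')
```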

Make sure that you use the host='0.0.0.0' kwarg, as Flask by default only binds to localhost, and we wouldn’t be able to access our application later.

Now to create a run configuration, just navigate to the script as usual, and check the ‘Single instance only’ box:

Basic Flask Run Config

By marking the run configuration as ‘single instance only’ we make sure that we can’t accidentally start the script twice and get a ‘Port already in use’ error.

After saving the run configuration, just click the regular Run or Debug button, and the script should start.

Flask Running

In the next blog post we’ll have a look at an application where we build a REST API on top of a database. Continue reading now!
