PyCharm 2017.1 Out Now: Faster debugger, new test runners, and more

PyCharm 2017.1 is out now! Get it now for a much faster debugger, improved Python and JavaScript unit testing, and support for the six compatibility library.

  • The Python debugger is now up to forty times faster for Python 3.6 projects, and up to two times faster for older versions of Python
  • We’ve added support for the six compatibility library
  • Unit test runners for Python have been rebuilt from the ground up: you can now run any test configuration with PyCharm
  • Are you a full-stack developer? We’ve improved our JavaScript unit testing: gutter icons now indicate whether a test passed, and we’ve added support for Jest, Facebook’s JS testing framework (only available in PyCharm Professional Edition)
  • Zero-latency typing is now on by default: typing latencies for PyCharm 2017.1 are lower than those for Sublime Text and Emacs
  • Support for native Docker for Mac – no more need to use SOCAT! (only available in PyCharm Professional edition)
  • And more!

Get PyCharm 2017.1 now from our website

Please let us know what you think about PyCharm! You can reach us on Twitter, Facebook, and by leaving a comment on the blog.

-PyCharm Team
The Drive to Develop

Posted in Release Announcements | Tagged | 12 Comments

Why Postgres Should be your Document Database Webinar Recording

This Monday Jim Fulton, one of the first Python contributors, hosted a webinar about storing JSONB documents in PostgreSQL. Watch it now:

Known mostly for its mature SQL and data-at-scale infrastructure, the PostgreSQL project added a “JSONB” column type in its 9.4 release, then refined it over the next two releases. While using it is straightforward, combining it in hybrid structured/unstructured applications along with other facilities in the database can require skill.

In this webinar, Python and database consultant Jim Fulton shows us how to use JSONB and related machinery for pure and hybrid Python document-oriented applications. We also briefly discuss his long history back to the start of Python, and finish with his unique NewtDB library for native Python objects coupled to JSONB queries.

Jim uses PyCharm Professional during the webinar. PyCharm Professional bundles the database tools from JetBrains DataGrip, our database IDE. However, the webinar itself is focused on the concepts of JSONB.

You can find Jim’s code on GitHub: https://github.com/jimfulton/pycharm-170320

If you have any questions or comments about the webinar, feel free to leave them in the comments below, or reach us on Twitter. Jim is on Twitter as well: @j1mfulton.

-PyCharm Team
The Drive to Develop

Posted in Webinar | Tagged | 2 Comments

Inside the Debugger: Interview with Elizaveta Shashkova

PyCharm 2017.1 has several notable improvements, but there’s one that’s particularly fun to talk about: debugger speedups. PyCharm’s visual debugger regularly gets top billing as the feature our customers value most. Over the last year, the debugger saw a number of feature improvements and several very impressive speedups. In particular, for Python 3.6 projects, PyCharm can use a new Python API to close the gap with a non-debug run configuration.


If you’ve been to PyCon or EuroPython and come by our booth, chances are you’ve seen Elizaveta Shashkova talking about the debugger to a PyCharm user, or giving a conference talk. Let’s talk to Liza about her work on PyCharm, the debugger, and her upcoming talk at PyCon.

Can you share with us a bit of your background and what you do on the PyCharm development team?

I started my career at JetBrains as a summer intern two and a half years ago – I implemented a debugger for Jinja2 templates, and Dmitry Trofimov (the creator of PyCharm’s debugger) was my mentor. After that, I joined the PyCharm team as a junior developer and implemented the Thread Concurrency Visualizer under the supervision of Andrey Vlasovskikh; my graduation thesis was based on it.

At the moment I’m supporting the Debugger and the Console in PyCharm.

People really like PyCharm’s debugger. Can you describe how it works, behind the scenes?

The debugger consists of two main parts: the UI (written in Java and Kotlin) and the Python debugger (written in Python). The most interesting part is on the Python side – the pydevd module, which we share with PyDev (the Python plugin for Eclipse).

We don’t use pdb, the standard Python debugger; we implemented our own. At first glance it’s quite simple, because it’s based on the standard sys.settrace() function: in essence, it just handles the events the Python interpreter produces for every line of the running script.
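That mechanism can be sketched in a few lines; this is a minimal illustration of sys.settrace(), not pydevd’s actual code:

```python
import sys

def make_tracer(log):
    """Record a 'line' event for every line executed in traced frames."""
    def tracer(frame, event, arg):
        if event == "line":
            log.append((frame.f_code.co_name, frame.f_lineno))
        return tracer  # keep receiving events for this frame
    return tracer

def demo():
    x = 1
    y = x + 1
    return y

events = []
sys.settrace(make_tracer(events))  # the interpreter now reports every line
result = demo()
sys.settrace(None)                 # stop tracing

print(result)  # 2
# `events` holds one entry per executed line of demo() -- exactly the hook
# a debugger uses to ask "is there a breakpoint on this line?"
```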

Of course, there are a lot of interesting frameworks and Python modules where debugging doesn’t work by default; that’s why we add special support inside the debugger: for PyQt threads, for interactive mode in pyplot, for the creation of new processes, for debugging in Docker, and more.

Note: A year ago we did an interview with the creator of PyDev about the funded debugger speedups.

The Cython extensions gave a big speedup. Can you explain how it works and the performance benefit?

Yes, the Cython speedups were implemented by Fabio Zadrozny. Fabio found the most significant debugger bottlenecks and optimized them, which gave a 40% speed improvement for the debugger; rewriting some of them in Cython gave even more – a 140% speed improvement. Fabio has done a really great job!

On to new stuff. The upcoming PyCharm, paired with Python 3.6, gives some more debugger speedups, right?

Yes, as I’ve already mentioned, for Python interpreters before version 3.6 we used to use the standard Python tracing function. But in Python 3.6 the new frame evaluation API was introduced and it gave us a great opportunity to avoid using tracing functions and instead implement a new mechanism for debugging.

And it gave us a really significant performance improvement. For example, in some cases the debugger has become 80 times faster than it used to be, and it has become at least 10 times faster in the worst case. In some cases, it has become almost as fast as running without debugging.

We have had many user reports about the debugger’s slowness, and now I hope those users will be happy to try the new, faster version of the debugger. Unfortunately, it’s available for Python 3.6 only.

What changed in Python 3.6 to allow this?

The new frame evaluation API was introduced to CPython in PEP 523; it allows specifying a per-interpreter function pointer to handle the evaluation of frames.

In other words, it means that we can get access to the code when entering a new frame, but before the execution of that frame has started. And this means we can modify the frame’s code and introduce our breakpoints right into the bytecode: execution of the frame hasn’t started yet, so we won’t break anything.

When we used the tracing function, the idea was similar: when entering a new frame, we checked whether there were any breakpoints in the current frame and, if so, continued tracing every line in the frame. Sometimes this led to serious performance problems.
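That per-frame check can be sketched with the same tracing API; a simplified illustration, not pydevd’s real code, in which frames with no breakpoints opt out of per-line tracing by returning None from the global trace function:

```python
import sys

BREAKPOINTS = {"slow_path"}   # pretend a breakpoint is set in slow_path only
traced_lines = []

def local_trace(frame, event, arg):
    if event == "line":
        traced_lines.append(frame.f_lineno)  # per-line cost paid here
    return local_trace

def global_trace(frame, event, arg):
    if event == "call" and frame.f_code.co_name in BREAKPOINTS:
        return local_trace  # breakpoints in this frame: trace every line
    return None             # no breakpoints: this frame runs untraced

def fast_path():
    return sum(range(1000))

def slow_path():
    a = 1
    b = 2
    return a + b

sys.settrace(global_trace)
fast_path()   # runs at (nearly) full speed
slow_path()   # every line generates a trace call
sys.settrace(None)

print(len(traced_lines))  # 3: only slow_path's lines were traced
```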

But in the new frame evaluation debugger, this problem was solved: we just introduce the breakpoint into the code and the other lines in the scope don’t matter. Instead of adding an additional call to the tracing function for every line in the frame, with Python 3.6 we add just one call to “breakpoint”, and that means that the script under debugging runs almost as fast as without the debugger.

Congratulations on your Python 3.6 Debugging talk being accepted for PyCon. Who will be interested in your talk?

This talk will be interesting for people who want to learn something new about the features of Python 3.6. Also, it will be useful for people who want yet another reason to move to Python 3.6: a fast debugger, which should appear in many Python IDEs.

Moreover, after the talk people will understand how PyCharm’s debugger works, and why such fast debugging wasn’t possible in previous versions of Python.
This talk is for experienced Python developers who aren’t afraid of calling Python’s C API functions and doing bytecode modifications. :)

What is the next big thing in debugging to work on in the next year?

We have a rather old and important problem in the debugger related to evaluating and displaying large objects. This problem has existed from the beginning, but it has become really pressing during the last few years. I believe it gained visibility due to the increased number of scientists who use PyCharm and work with big data frames. At the moment we have some technical restrictions on implementing this feature, but we’re going to implement it in the near future.

Posted in Interview | Tagged | 2 Comments

PyCharm 2016.3.3 Released

Shortly after announcing the PyCharm 2017.1 Release Candidate, today we bring you some important backported fixes for the current stable version of PyCharm. Please welcome PyCharm 2016.3.3.

For the detailed list of changes please see the Release Notes.

If you’re running PyCharm 2016.3.2, use Help | Check for Updates to upgrade using a patch. Otherwise, feel free to download the update from our website, or get it via the Toolbox App.

– PyCharm team

 

Posted in Release Announcements | Tagged | 7 Comments

PyCharm 2017.1 Release Candidate

Great news: We are now approaching the final steps towards the first major release in 2017. So today we’ve published the PyCharm 2017.1 Release Candidate. Download it now from our early access page.


Since the previous early access preview build we’ve fixed a lot of bugs in various subsystems and also introduced some important improvements: details in the release notes.

In case you missed what’s new in upcoming PyCharm 2017.1, check out the early access page to learn about the features included in the new version. Some highlights:

  • The Python debugger is now up to forty times faster for Python 3.6 projects, and up to two times faster for older versions of Python
  • Improved Python and JavaScript testing
  • Support for the six library
  • Zero-latency typing
  • Support for native Docker for macOS
  • And much more

We hope that there will be no major bugs in the RC build; however, should you encounter any problems, please report them to YouTrack – we’ll still have a bit of time to fix them before the final release.

Stay tuned for a PyCharm 2017.1 release announcement and follow us on Twitter.

-PyCharm team

Posted in Early Access Preview | Tagged | 12 Comments

Interview with Jim Fulton for “Why Postgres Should Be Your Document Database” Webinar

Jim Fulton, my friend and former co-worker, was one of the original leaders in the Python community. Over the years he created important Python software (cPickle), Python’s original “killer app” (Zope, as stated in magazine articles), and a leading build system (buildout).

He’s now turning his two decades of experience with Python databases to PostgreSQL and its JSONB column type for high-performance document applications, and he’s presenting a PyCharm webinar on March 20. Jim is an independent consultant for Python and database projects and took some time out for an interview.

Quick summary…what’s the webinar about, who is it for, and why should they care?

The webinar is about using PostgreSQL as a document database and tradeoffs of using PostgreSQL versus non-transactional databases. We’ll look at examples using PyCharm to develop database-intensive web applications in Python.

Let’s rewind to two points in the past. It’s 1994, you and I meet at the first Python “conference” at NIST. What were you doing and what attracted you to Python?

I was supporting data-analysis activities at the USGS. We were using a data manipulation tool named Rand RDB. It was based on the idea of creating a 4GL by using Unix shell pipelines to string together data manipulation components. Data transformations were expressed using Perl expressions. Perl was predominantly a text-manipulation language at the time (Perl 4) and made it easy to introduce computational mistakes with numeric data.

I decided I wanted to replace the data transformation tool with a scripting language that was more careful with numbers. I asked on the Perl mailing list if Perl might grow OO features to allow me more control over how computations were performed. I was assured that Perl would never support OO. :) This was in 1994.

I went shopping for an OO scripting language and found Python. The OO features and the clean indentation-based syntax were very appealing to me. Python had a much more robust type system that already behaved the way I needed it to. It had a short tutorial that allowed me to have a working knowledge of the language in a couple of hours. (The language was much smaller then.) I started using it for other projects and found I could get to working software much quicker than with Perl, which itself was a major improvement over other languages I was using, like Fortran and Ada.

Next…it’s 1998 and the software soon to be known as Zope is being open sourced. What was Zope and why did it matter?

Zope was an application platform that people could use to build web applications completely through the web. At the time, the tools for building web applications were very primitive. There weren’t established Python web frameworks, and non-Python alternatives were typically very low-level and difficult to use. With the through-the-web model, people could install the application and get started building their applications very quickly. Not only was there no edit-compile cycle, there was no edit-process-restart cycle. Changes were visible right away.

It provided an “object file system”. Content was arranged in a hierarchy. People could assemble applications using pre-built components such as folders, files, templates, SQL “methods” and Python methods. Zope’s URLs reflected the hierarchy. Competing systems typically hid hierarchies behind big fancy numbers (http://example.com/foo/0,1245,,00.html).

Zope also implemented “environmental acquisition”, allowing objects lower in the hierarchy to access data from objects higher in the hierarchy. This was a form of delegation that was similar to the way JavaScript objects get data from their prototypes. Acquisition supported component reuse and hierarchical security.

Although I didn’t realize it at the time, Zope’s through-the-web development experience with real-time changes, backed by an object-oriented database, was a lot like traditional Smalltalk environments with persistent memory images. I’m a big fan of Smalltalk, and was very happy to learn that Adele Goldberg, one of the creators of Smalltalk, used Zope for a time.

Zope contained an object database, written in Python. Why, what was interesting about it, and was it the right decision?

The object-database, ZODB, both suggested and enabled Zope’s hierarchical through-the-web development model. It made data management as simple as saving attributes or items on Python objects. ZODB has a lot of interesting features beyond being a transactional ACID object-oriented database, including a pluggable layered storage architecture, multi-level caching, conflict resolution, transactional undo, and time travel.

Building the ZODB was the right decision, because Zope wouldn’t have existed without it.

PostgreSQL has rich JSON support. What should developers think about it?

It’s an opportunity to get some of the benefits of a document-oriented database:

  • Flexible schema
  • Composite objects avoiding joins
  • Ease of getting started

…while keeping the benefits of Postgres:

  • ACID/transactions
  • The ability to easily combine variable and hierarchical data with a relational schema.

Another way to think of it: if you have chunks of data that are always used together, and those data aren’t easily described by flat records, then it may make sense to leverage JSON data types, because you can avoid joins and ORMs.

The JSON support is well-integrated with the rest of PostgreSQL. You can do relational joins between JSON columns and other columns, as well as self-joins to query related values.

BTW, when we talk about JSON in PostgreSQL, we really mean the JSONB type, not the JSON type. While there are theoretical uses for the JSON type, it’s mostly a decoy, which many newcomers fall for. :)

How has this impacted some of the design you do for customer work?

It’s allowed me to pursue flexible OO designs while still taking advantage of PostgreSQL’s strengths.

First of all, there are some standard flexible indexes for JSONB columns that support a wide variety of common queries. For example, suppose I have documents that describe tasks, each with a title, a submitter, and a list of tags.

I can search for tasks by tag, and that search will be supported by the generic index, as will a search by submitter. The generic index only does exact matches, but we can also use expression indexes to create BTree indexes, which support range queries. Or we can create a full-text index on the title to support text searches.
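The flavor of these indexes and queries can be sketched as follows. The `tasks` table and `doc` column names are assumptions for illustration; these are not Jim’s original snippets:

```python
# Hypothetical JSONB index/query examples for a table created as:
#   CREATE TABLE tasks (id serial PRIMARY KEY, doc jsonb);

# A generic GIN index supports containment queries on any key:
GENERIC_INDEX = "CREATE INDEX tasks_doc_idx ON tasks USING GIN (doc);"

# Search by tag (or by submitter) via the containment operator @>,
# which the GIN index accelerates:
SEARCH_BY_TAG = """\
SELECT doc FROM tasks WHERE doc @> '{"tags": ["urgent"]}';"""

# An expression (BTree) index over an extracted value enables range queries:
EXPRESSION_INDEX = (
    "CREATE INDEX tasks_created_idx ON tasks ((doc ->> 'created'));"
)
RANGE_QUERY = (
    "SELECT doc FROM tasks WHERE doc ->> 'created' >= '2017-01-01';"
)

# A full-text index on the title, and a search it supports:
FULLTEXT_INDEX = """\
CREATE INDEX tasks_title_fts ON tasks
    USING GIN (to_tsvector('english', doc ->> 'title'));"""
FULLTEXT_QUERY = """\
SELECT doc FROM tasks
WHERE to_tsvector('english', doc ->> 'title') @@ to_tsquery('deploy');"""
```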

Some things that are important to note:

  • The queries above are possible without indexes
  • The last two examples indexed expressions

The ability to index expressions provides a lot of power, especially when combined with stored procedures.  What’s happening with an expression index is that the expression is evaluated at index time, not search time. (It’s a little more complicated than that…expressions may be evaluated at search time depending on memory and result sizes.)

The projects I’ve worked on often involve heterogeneous data where the way data are accessed might depend on content type.  I can index a text-extraction function that extracts text in a content-type specific way and then do text searches that span different content types. Function calls can be indexed like any other expression.

You’re a big fan of transactions. Can you give a sneak peek of what you’ll cover in the webinar about transactions?

Well, I’m a big fan of correct software and transactions make correctness a lot easier because:

  • Transactions are atomic. If there’s an error after data have been saved, the database will roll back the changes. Without atomicity, the programmer would have to make updates that cancel the previous changes, and those updates might themselves fail.
  • Transactions provide consistency not just within, but between different database records.
  • Transactions are isolated. There can be multiple hosts, processes and threads operating on the data and transactions prevent them from interfering. Transactions provide an alternative to other concurrency-control techniques that are notorious for producing errors.
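The atomicity point can be illustrated with any transactional database; here is a minimal sketch using Python’s built-in sqlite3 module as a stand-in for PostgreSQL:

```python
import sqlite3

# Toy schema: a transfer between two accounts must debit one and credit
# the other -- or do neither.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INT)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # one transaction: commits on success, rolls back on error
        conn.execute(
            "UPDATE accounts SET balance = balance - 60 WHERE name = 'alice'"
        )
        # Simulate a failure before the matching credit ever runs:
        raise RuntimeError("crashed mid-transfer")
except RuntimeError:
    pass

# The debit was rolled back automatically; no half-finished transfer remains.
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 0}
```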

I’ll talk about why transactions are important and the tradeoffs one should consider when considering non-transactional databases. I’ll touch on ways that web frameworks support transaction management and challenges in dealing with inherently non-transactional systems like email.

Your talk is about PostgreSQL as a document database. But you actually propose a further step into objects. Can you explain?

Managing data as JSON documents provides a lot of flexibility and frees developers from performing complex joins just to assemble objects. ORMs can help with this of course, but with a lot of additional complexity. In fact, with an ORM, the joins are still there, they’re just expressed a different way.

But documents are fairly dumb. Imagine if Python didn’t have classes and all you had were dictionaries, lists, strings, numbers, booleans and None.

Recently I created Newt DB to combine the ease of use of ZODB with the power of PostgreSQL. Newt DB provides a full object-oriented database on top of Postgres. It saves data using two representations: Python pickles and JSON documents. The pickle format is far more expressive than JSON, and so supports the full range of Python types, including application-defined classes. The JSON representation isn’t full fidelity, but it’s optimized to facilitate search, display, and reporting in PostgreSQL and other, non-Python clients.

Modeling objects as Python + JSON avoids joins to assemble objects. Cross-object references avoid joins in Python. (Technically, these are self-joins, but they’re automated transparently.) As objects are accessed, they’re loaded and cached transparently.
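The dual-representation idea can be sketched in a few lines; this is a toy illustration, not Newt DB’s actual API:

```python
import json
import pickle

# Keep two representations of the same object: a full-fidelity pickle
# plus a JSON projection for search and reporting.
class Task:
    def __init__(self, title, tags):
        self.title = title
        self.tags = tags

task = Task("Ship PyCharm 2017.1", ["release", "urgent"])

# The pickle round-trips the real Python object, class and all...
restored = pickle.loads(pickle.dumps(task))
print(type(restored).__name__)  # Task

# ...while the JSON projection is what a database like PostgreSQL could
# index and query, at the cost of losing the class information.
doc = json.dumps(task.__dict__)
print(doc)
```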

Posted in Interview | Tagged | Leave a comment

PyCharm 2017.1 EAP 10 (build 171.3780.17)

This week we’re releasing the last, but not the least, EAP for PyCharm 2017.1. Get the EAP now.

We’ve resolved many bugs, including:

  • Virtualenvs are now created with pip 9.0.1 and setuptools 28.8.0
  • The default API URL for Docker is now set to the unix socket, which is the default location on macOS and most Linux distributions [pro only]

If you’d like to read about all the issues we’ve resolved this past week, check out the release notes.

Any improvements marked ‘Pro only’ are only available in PyCharm Professional Edition. You can use the EAP version of PyCharm Professional Edition for free for 30 days.

We’d like to encourage you to try out this new EAP version. To keep up to date with our EAP releases, set your update channel to Early Access Program: Settings | Appearance & Behavior | System Settings | Updates, and set ‘Automatically check updates for’ to “Early Access Program”.

We do our best to find all bugs before we release, but in these preview builds there might still be some bugs in the product. If you find one, please let us know on YouTrack, or contact us on Twitter @PyCharm.

-PyCharm Team
The Drive to Develop

Posted in Early Access Preview | Tagged | 3 Comments

Webinar: “Why Postgres Should Be Your Document Database” with Jim Fulton, March 20

Python and database developers are increasingly interested in document storage, and PostgreSQL has an increasingly attractive story. Come explore Postgres as a document database in our free webinar:

  • Monday, March 20th
  • 16:00 European Time, 11AM Eastern Daylight Time
  • Register here


Did you know PostgreSQL excels at storing JSON documents? Known mostly for its mature SQL and data-at-scale infrastructure, the PostgreSQL project added a “JSONB” column type in its 9.4 release, then refined it over the next two releases. While using it is straightforward, combining it in hybrid structured/unstructured applications along with other facilities in the database can require skill.

In this webinar, Python and database consultant Jim Fulton shows us how to use JSONB and related machinery for pure and hybrid Python document-oriented applications. We’ll also briefly discuss his long history back to the start of Python, and finish with his unique NewtDB library for native Python objects coupled to JSONB queries.

Jim will use PyCharm Professional during the webinar. PyCharm Professional bundles the database tools from JetBrains DataGrip, our database IDE. However, the webinar itself is focused on the concepts of JSONB.

Speaking to you

Jim Fulton is the creator of Zope (the first open source application server) and ZODB, a pure-Python object database. Jim now works on Newt DB, which combines the ease of use of ZODB’s objects with PostgreSQL’s JSONB to provide a hybrid object-oriented and document-oriented database. He also created and maintains the Buildout development and deployment tool. Jim has been a Python contributor and leader since 1994.

As an independent consultant, Jim works on architectures for large-scale Python and database projects, bringing decades of business experience to both.

Posted in Webinar | Tagged | 2 Comments

PyCharm 2017.1 EAP 9 (build 171.3691.3)

We’re getting closer to the release of PyCharm 2017.1, and you can already get this week’s preview version from our Early Access Program (EAP) page. Get the EAP now!

This week we’ve included one small new feature:

  • You can now move and copy files on remote hosts by dragging and dropping them in the Browse Remote Host tool window (Tools | Deployment | Browse Remote Host). [Pro only]

We’ve also fixed various bugs:

  • Python namespace issues: PyCharm wouldn’t detect a folder due to case insensitivity on Windows. From now on, PyCharm always resolves paths case-sensitively.
  • Various other things have been fixed. See the release notes for details.

Any improvements marked ‘Pro only’ are only available in PyCharm Professional Edition. You can use the EAP version of PyCharm Professional Edition for free for 30 days.

We’d like to encourage you to try out this new EAP version. To keep up to date with our EAP releases, set your update channel to Early Access Program: Settings | Appearance & Behavior | System Settings | Updates, and set ‘Automatically check updates for’ to “Early Access Program”.

We do our best to find all bugs before we release, but in these preview builds there might still be some bugs in the product. If you find one, please let us know on YouTrack, or contact us on Twitter @PyCharm.

-PyCharm Team
The Drive to Develop

Posted in Early Access Preview | Tagged | 1 Comment

PyCharm 2017.1 EAP 8 (build 171.3566.25)

Our eighth Early Access Program (EAP) release for PyCharm 2017.1 is out now! Get it from our EAP page.

New Features

  • Vue.js initial support. Vue is an MVVM (Model-View-ViewModel) JavaScript framework which is becoming ever more popular. To use it, install the Vue.js plugin: go to Settings | Plugins | Install JetBrains Plugins, and then choose Vue.js. [Pro only]

Bug Fixes

  • Several issues with the new Data View were resolved. If you use numpy or pandas in one of your projects, you can try it out by debugging and clicking ‘View as numpy array’ or ‘View as pandas DataFrame’ for the appropriate variables. See our EAP 5 blog post for some more information about this feature.
  • Fixes for type annotations
  • Debugger improvements
  • Platform enhancements

See our release notes for more details about improvements in this version.

Any improvements marked ‘Pro only’ are only available in PyCharm Professional Edition. You can use the EAP version of PyCharm Professional Edition for free for 30 days.

We’d like to encourage you to try out this new EAP version. To keep up to date with our EAP releases, set your update channel to Early Access Program: Settings | Appearance & Behavior | System Settings | Updates, and set ‘Automatically check updates for’ to “Early Access Program”.

We do our best to find all bugs before we release, but in these preview builds there might still be some bugs in the product. If you find one, please let us know on YouTrack, or contact us on Twitter @PyCharm.

-PyCharm Team
The Drive to Develop

 

Posted in Early Access Preview | Tagged | 1 Comment