JetBrains Research

Research is crucial for progress and innovation, which is why at JetBrains we are passionate about both scientific and market research

JetBrains Research Digest 2023: Volume 1

JetBrains Research is a division of JetBrains dedicated to applied scientific research in innovation and education, and it collaborates with some of the world’s top scientific institutions. Unlike traditional academic institutions, JetBrains Research places no strict demands on its research teams to produce high-impact publications. This gives the researchers greater freedom to focus their efforts on the essence of their work, rather than on applying for grants.

JetBrains Research includes over 100 researchers working on projects across 11 lab groups. These groups explore a wide range of topics, engaging with both applied and purely theoretical tasks.

In this digest, we present the most recent news from JetBrains Research and bring you the results shared by four of our research teams.

Applied Program Analysis Lab

The Applied Program Analysis Lab (APAL) focuses on applying program analysis techniques to the problems that developers encounter in their day-to-day activities.

On May 14, at the 16th International Workshop on Search-Based and Fuzz Testing in Melbourne, Australia, Azat Abdullin from the APAL team presented the results of the team’s ongoing research on Java test generation based on our symbolic execution engine Kex. The team entered the Java tool competition for the third time, and the results showed gradual improvements in the performance of Kex and its ability to cover complex code. The feedback provided by the competition organizers helped identify what the team should improve next, suggesting a focus on the readability of the generated tests and the quality of the created test oracles.
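
To give a feel for how symbolic-execution-based test generation works, here is a minimal Python sketch using the z3 SMT solver. Kex itself operates on JVM bytecode and is considerably more sophisticated; this toy only shows the core idea of solving each branch’s path condition to obtain an input that exercises that branch. The `x > 10` condition is a made-up example, not taken from Kex.

```python
# Toy symbolic-execution-style test generation: for each branch of a
# hypothetical `if (x > 10)` in the code under test, ask an SMT solver
# for a concrete input that drives execution down that branch.
from z3 import Int, Not, Solver, sat

x = Int("x")
path_condition = x > 10  # hypothetical branch condition

for condition in (path_condition, Not(path_condition)):
    solver = Solver()
    solver.add(condition)
    if solver.check() == sat:
        print("generated test input: x =", solver.model()[x])
```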

Machine Learning Methods in Software Engineering Lab

The Machine Learning Methods in Software Engineering Lab (ML4SE) works on improving modern software engineering tools and discovering new ways to develop and maintain code.

This past May, the team published four articles:

Optimizing Duplicate Size Thresholds in IDEs

Konstantin Grotov, Sergey Titov, Alexandr Suhinin, Yaroslav Golubev, and Timofey Bryksin co-authored an article on clone detection.

In this article, they addressed the question of the minimum token-length threshold to be used for filtering out trivial, uninteresting code clones. IDEs have to highlight similar pieces of code in a way that does not annoy the user, and to achieve that, they need to apply a threshold to clone size. The authors explored the possibility of updating the existing threshold for related IDEs by comparing clone distributions. Specifically, they searched for a threshold for the Jupyter-focused IDEs – Datalore and DataSpell – in comparison to PyCharm. The results showed that Jupyter notebooks contain more clones of a larger size, so to detect the same percentage of them, the threshold needs to be higher. The authors hope that this simple pipeline can help other researchers compare the distributions of clones for other related languages, for example, JVM-based languages.
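
As a rough illustration of the distribution-comparison idea, here is a minimal Python sketch that picks a threshold for a target IDE by matching the fraction of clones kept by an existing threshold in a baseline IDE. The function, the sample distributions, and the baseline threshold of 45 tokens are all hypothetical, not the paper’s actual pipeline or values.

```python
import numpy as np

def matching_threshold(baseline_sizes, target_sizes, baseline_threshold):
    """Find a clone-size threshold for the target distribution that keeps
    the same fraction of clones as `baseline_threshold` keeps in the baseline."""
    baseline_sizes = np.asarray(baseline_sizes)
    target_sizes = np.asarray(target_sizes)
    # Fraction of baseline clones at or above the existing threshold.
    kept_fraction = np.mean(baseline_sizes >= baseline_threshold)
    # The quantile of the target distribution that keeps the same fraction.
    return np.quantile(target_sizes, 1.0 - kept_fraction)

# Hypothetical clone-size samples (in tokens), for illustration only.
pycharm_clones = np.random.lognormal(mean=4.0, sigma=0.5, size=10_000)
notebook_clones = np.random.lognormal(mean=4.3, sigma=0.6, size=10_000)
print(matching_threshold(pycharm_clones, notebook_clones, baseline_threshold=45))
```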

PyEvolve: Automating Frequent Code Changes in Python ML Systems

Malinda Dilhara and Danny Dig from the University of Colorado Boulder, together with Ameya Ketkar, wrote a paper on automating frequent code changes in Python.

The rapid evolution of machine learning techniques means that developers frequently need to apply repetitive code change patterns, ranging from simple API migrations to changes involving several complex control structures, such as for loops. Performing such changes manually is tedious, and the current state-of-the-art techniques for inferring such transformation rules fall short, as they are unable to handle unseen variants of complex changes. This paper presents PyEvolve – a novel tool that mines frequent change patterns, infers the transformation rules, and then transplants them automatically to new target sites. The tool features a novel transformation rule inference engine that accounts for both data flow and control flow, advancing the science behind transformation-by-example tools. Without this novel technique, 70% of the code changes made by PyEvolve would be impossible to automate.
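
To make the notion of a transformation rule concrete, here is a toy, hand-written rule built on Python’s ast module that rewrites one common loop pattern. Note that this only illustrates applying a single fixed rule; PyEvolve’s contribution is inferring such rules automatically from mined change patterns and handling unseen variants of them.

```python
import ast

class ForAppendToExtend(ast.NodeTransformer):
    """Toy rule: rewrite `for x in xs: out.append(expr)` into the
    behavior-equivalent `out.extend(expr for x in xs)`."""

    def visit_For(self, node: ast.For) -> ast.AST:
        self.generic_visit(node)
        # Match a loop whose body is a single `<list>.append(<expr>)` call.
        if (len(node.body) == 1 and not node.orelse
                and isinstance(node.body[0], ast.Expr)
                and isinstance(node.body[0].value, ast.Call)
                and isinstance(node.body[0].value.func, ast.Attribute)
                and node.body[0].value.func.attr == "append"
                and len(node.body[0].value.args) == 1):
            call = node.body[0].value
            generator = ast.GeneratorExp(
                elt=call.args[0],
                generators=[ast.comprehension(
                    target=node.target, iter=node.iter, ifs=[], is_async=0)],
            )
            return ast.Expr(value=ast.Call(
                func=ast.Attribute(value=call.func.value, attr="extend",
                                   ctx=ast.Load()),
                args=[generator], keywords=[]))
        return node

tree = ForAppendToExtend().visit(ast.parse("for x in xs:\n    out.append(x * 2)"))
print(ast.unparse(ast.fix_missing_locations(tree)))  # out.extend(x * 2 for x in xs)
```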

Judging Adam: Studying the Performance of Optimization Methods on ML4SE Tasks

Dmitry Pasechnyuk from the Mohammed bin Zayed University of Artificial Intelligence, Anton Prazdnichnykh, Mikhail Evtikhiev, and Timofey Bryksin from JetBrains Research shared their results on the impact of optimization methods on tasks in machine learning for software engineering (ML4SE).

The authors assessed 24 optimizers and found that the choice of optimizer can have a great impact on the performance of ML4SE models. They also observed that the relative performance of optimizers can strongly depend on the model architecture, yet it appeared to be independent of the dataset and the task type (method name prediction vs. documentation generation). Instead of the de facto industry standard, the Adam optimizer, the authors recommend RAdam + LookAhead as their optimizer of choice, with RAdam and DiffGrad as fallback options. Finally, the authors note that a hyperparameter grid search can further improve model performance, and since the relative performance of optimizers depends only weakly on the dataset size, they recommend conducting the grid search for the best optimizer on a small subset of the training dataset.
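
For readers who want to try the recommended setup, here is one way it could be instantiated in PyTorch, assuming the third-party torch_optimizer package is installed; the model and the hyperparameter values shown are illustrative stand-ins, not the paper’s tuned settings.

```python
import torch
import torch_optimizer as optim  # third-party `torch_optimizer` package

model = torch.nn.Linear(128, 64)  # stand-in for an actual ML4SE model

# RAdam as the inner optimizer, wrapped in Lookahead, per the recommendation.
base = optim.RAdam(model.parameters(), lr=1e-3)
optimizer = optim.Lookahead(base, k=5, alpha=0.5)

# Fallback options suggested by the authors:
# optimizer = optim.RAdam(model.parameters(), lr=1e-3)
# optimizer = optim.DiffGrad(model.parameters(), lr=1e-3)
```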

Analyzing the Quality of Submissions in Online Programming Courses

Maria Tigina, Anastasiia Birillo, Yaroslav Golubev, and Timofey Bryksin from JetBrains Research in collaboration with Hieke Keuning from Utrecht University and Nikolay Vyahhi from Hyperskill co-authored a paper analyzing the quality of code in student submissions on the Hyperskill platform.

The authors began by identifying the most prevalent code quality issues and trying to understand their causes. Then, they analyzed how students fix, or fail to fix, these issues. The paper includes several case studies of individual prominent issues, including two cases where, even after correctly passing all the tests, the students submitted more than 30 additional attempts to improve their code quality. The authors pointed out several potential problems with Massive Open Online Course (MOOC) systems and course content that may cause some of these issues, and they suggest several points of potential improvement for MOOC systems. The main practical takeaways of the study are as follows: First, the theoretical part and the pre-written code should not introduce or incentivize issues themselves. Second, it is useful to report code quality to students and reward them with grades. Finally, hints and messages should be adapted for novices.

The team also presented these results at the International Conference on Software Engineering (ICSE), the most prestigious software engineering conference in the world, as well as at several co-located events, including the Mining Software Repositories (MSR) conference. Yaroslav Golubev, Danny Dig, Mikhail Evtikhiev, Maria Tigina, and Anastasiia Birillo traveled to Melbourne, Australia, to take part in the conference, present the results of their work, attend the talks given by others, network, and find collaborators for future work.

In June, the team published one article:

Just-in-Time Code Duplicates Extraction

Eman Abdullah AlOmar from Stevens Institute of Technology; Anton Ivanov, Zarina Kurbatova, Yaroslav Golubev, and Timofey Bryksin from JetBrains Research; Ali Ouni from ÉTS Montreal, University of Quebec; and Mohamed Wiem Mkaouer, Le Nguyen, Amit Kini, and Aditya Thakur from Rochester Institute of Technology described a plugin designed to reduce unwanted code duplicates.

In this work, the authors describe the AntiCopyPaster tool, developed as a way to deal with unwanted code duplicates as soon as they are introduced into the code. The tool, implemented as a plugin, monitors the IDE and catches cases where code is pasted and remains a duplicate. At this point, the plugin uses a machine learning model to determine whether the fragment is worth deduplicating, and if so, it proactively suggests that the user extract the duplicate into a new method and replace its occurrences with calls to this new method. The paper includes a detailed comparison of the various machine learning classifiers used in the plugin and shows that random forests and convolutional neural networks perform best for this task. A user study with 50+ students showed that the majority of them found the tool helpful.
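
As a simplified sketch of the decision step, here is what a random-forest-based “worth extracting?” classifier could look like in Python. The feature names, training data, and sizes below are invented for illustration and do not reproduce the plugin’s actual feature set or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features of a pasted fragment; the plugin's real feature set
# is described in the paper.
FEATURES = ["line_count", "token_count", "keyword_count", "copies_in_file"]

def featurize(fragment_metrics: dict) -> np.ndarray:
    return np.array([[fragment_metrics[name] for name in FEATURES]])

# Train on labeled examples: 1 = "worth extracting into a method", 0 = "leave as is".
X_train = np.array([[12, 90, 7, 3], [2, 10, 1, 1], [25, 200, 15, 4], [3, 14, 2, 1]])
y_train = np.array([1, 0, 1, 0])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

pasted = {"line_count": 18, "token_count": 140, "keyword_count": 9, "copies_in_file": 2}
if clf.predict(featurize(pasted))[0] == 1:
    print("Suggest: extract this duplicate into a method and replace uses with calls.")
```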

Last but not least, the ML4SE team collaborated with the JetBrains Academy and Kotlin teams at JetBrains to release the semester-long curriculum Programming in Kotlin:

Ready-to-teach Programming in Kotlin course

The course materials include slides covering core Kotlin concepts, quizzes, tests, and homework assignments with hands-on coding exercises. The materials were designed specifically for Kotlin educators, and they are available for free. Educators can use the curriculum as is or adjust it to fit their educational needs.

In June, Anastasiia Birillo from the ML4SE team held a livestream presentation, launching the new course materials and sharing her teaching experience. Anastasiia teaches a Kotlin course at Constructor University in Bremen, Germany, and Neapolis University Pafos, Cyprus. 

Mobile Robotics Algorithms Lab

The Mobile Robotics Algorithms Lab works on developing intelligent, fully autonomous mobile robots, as well as education programs in robotics.

In July, Kirill Krinkin from the lab, together with Varsha V. and S. P. Shiva Prakash from JSS Science and Technology University, Mysuru, India, published an article titled Energy-efficient data transmissions in heterogeneous ultra-dense networks using a hybrid metaheuristic optimization. The article is not directly related to the main focus of the lab, but it is still a significant contribution, so we wanted to tell you about the result.

Ultra-dense networks (UDNs) are one of the key technologies in fifth-generation (5G) networks. Like their predecessors, 5G networks are cellular networks, in which the service area is divided into small geographical regions called cells. All 5G wireless devices in a cell are connected to the internet and the telephone network by radio waves through a local antenna in the cell. UDNs are mainly adopted to deal with the explosive growth of mobile data and the resulting energy consumption issues. Because UDNs serve mobile users, movement may disrupt transmissions, preventing the small cells (SCs) from offering seamless service. To provide an effective solution, the paper introduces an energy-efficient framework that enables effective data transmissions regardless of the users’ mobility. The architecture includes a macro base station (BS), microcells, picocells, and femtocells. The SCs are responsible for transferring the data received from mobile users to the macro BS. The proposed model uses a hybrid algorithm called firefly-oriented multiverse optimization (FF-MVO) that works iteratively to identify the optimal path to the macro BS for each transmission from a user. The model is simulated on the network simulator 3 (ns-3) platform and evaluated alongside existing models. The proposed algorithm proved better than the other models at finding the optimal path, resulting in more energy-efficient transmissions.
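
The FF-MVO update rules themselves are beyond the scope of this digest, but the following Python sketch conveys the general shape of such an iterative, fitness-driven path search: candidate paths from a user to the macro BS are generated repeatedly, and the one with the lowest energy cost is kept. The topology and link costs are invented for illustration and are not the paper’s model.

```python
import random

# Hypothetical energy costs for links between a user, small cells (SCs),
# and the macro base station (BS); illustration only.
LINKS = {
    "user": {"SC1": 4, "SC2": 6},
    "SC1": {"SC3": 5, "BS": 12},
    "SC2": {"SC3": 2, "BS": 9},
    "SC3": {"BS": 3},
}

def random_path(src="user", dst="BS"):
    """Generate one candidate path by random hops toward the macro BS."""
    path = [src]
    while path[-1] != dst:
        path.append(random.choice(list(LINKS[path[-1]])))
    return path

def energy_cost(path):
    """Fitness function: total energy cost of the path's links."""
    return sum(LINKS[a][b] for a, b in zip(path, path[1:]))

# Iteratively generate candidates and keep the cheapest path found so far.
best = min((random_path() for _ in range(200)), key=energy_cost)
print(best, energy_cost(best))
```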

Intelligent Collaboration Tools Lab

The Intelligent Collaboration Tools Lab works on exploring collaborative software engineering tools such as communication engines, issue trackers, and code review platforms, as well as devising novel approaches to tool support for collaborative work.

In June, the team’s paper Assessing the Impact of File Ordering Strategies on Code Review Process won the Best Industry Paper Award at the International Conference on Evaluation and Assessment in Software Engineering (EASE).

The paper, co-authored by Farid Bagirov, Pouria Derakhshanfar, Alexey Kalina, Elena Kartysheva, and Vladimir Kovalenko, explores the impact of file ordering on code review.

Popular modern code review tools, such as Gerrit and GitHub, sort the files in a code review alphabetically. A prior study conducted on open-source projects showed that the positions of the changed files in a code review affect the review process: files placed lower in the order have a smaller chance of receiving review attention than the other files, and hence a higher chance of containing missed defects. This paper explored the impact of file order in the code review of IntelliJ IDEA. First, the authors verified the results of the prior study on a large proprietary software project. Then, they explored an alternative to the default alphabetical order: ordering the changed files according to their code diff. The results confirmed the observations of the previous study: reviewers leave more comments on the files shown higher in the code review. Moreover, even in an environment that automatically sorts files alphabetically, ordering the modified files according to their code diff places the problematic files that require more reviewing effort higher up in the code review, allowing them to receive more attention during the review. The authors suggest that various ordering strategies for code review require further exploration.
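
As a minimal sketch of the idea, the snippet below reorders changed files by the size of their diff (lines added plus deleted), largest first, instead of alphabetically; the exact ordering criterion used in the paper may differ, and the file list is invented for illustration.

```python
# Each entry: (path, lines added, lines deleted).
changed_files = [
    ("README.md", 3, 1),
    ("core/Parser.java", 120, 45),
    ("util/Strings.java", 10, 2),
]

# Default behavior of many review tools: alphabetical order.
alphabetical = sorted(changed_files, key=lambda f: f[0])

# Alternative: largest diffs first, so effort-heavy files appear on top.
by_diff_size = sorted(changed_files, key=lambda f: f[1] + f[2], reverse=True)

for path, added, deleted in by_diff_size:
    print(f"{path}: +{added} -{deleted}")
```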


That’s it for this digest. Stay tuned for future updates from our research teams! 

If you have any questions or comments, do not hesitate to contact us at info@research.jetbrains.org.