Intelligent Code Art
A combination of qualities, such as shape, color, or form, that pleases the aesthetic senses, especially the sight.
What happens when you train a machine to create beauty? You get the next iteration of code-generated art.
At JetBrains, we use our code-generated art to create original graphics for all of our splash screens, banners, and release graphics. Each product has its own unique design to bring pleasing aesthetics to your desktop.
This version of the graphics generator uses a neural network to create both animated and static graphics.
The images that our image generator produces are essentially landscapes of feed-forward neural network mapping functions! There are a couple of tricks inside that make them more beautiful, but the results mostly depend on exactly what data we pass to our networks.
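For a flavor of how such a mapping can work, here is a minimal sketch in the spirit of a compositional pattern-producing network: a tiny feed-forward net maps each pixel's normalized (x, y) coordinate to an intensity, so the rendered grid becomes a smooth "landscape" of the function. The architecture, features, and weights below are illustrative assumptions, not JetBrains' actual model:

```python
import math
import random

def cppn_pixel(x, y, weights):
    """Map a normalized (x, y) coordinate through a tiny feed-forward
    network to a grayscale intensity in [0, 1]."""
    # Input features: the coordinates plus the distance from the center,
    # which gives the output a radial structure.
    r = math.sqrt(x * x + y * y)
    hidden = [math.tanh(w0 * x + w1 * y + w2 * r) for (w0, w1, w2) in weights]
    return (math.tanh(sum(hidden)) + 1.0) / 2.0  # squash to [0, 1]

def render(size, weights):
    """Evaluate the network over a size x size grid of coordinates in [-1, 1]."""
    return [
        [cppn_pixel(2.0 * i / size - 1.0, 2.0 * j / size - 1.0, weights)
         for j in range(size)]
        for i in range(size)
    ]

random.seed(0)
# Eight random hidden units; different weights yield different "landscapes".
weights = [tuple(random.uniform(-3, 3) for _ in range(3)) for _ in range(8)]
image = render(64, weights)
```

Re-rolling the random weights is what makes each generated image unique, which is exactly why the choice of input data and parameters matters so much.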
The key tool for generating eye-pleasing images with little effort is Mixer mode. It combines the images that you’ve liked and uses them to produce new ones that are visually similar. It’s a simple implementation of a genetic algorithm – parameters of the selected images are mixed together and have the chance to evolve into something entirely new. Using information about which images the users mixed – and therefore which of them could be considered beautiful – we can train a binary classification model that can predict which set of parameters may lead to a beautiful image.
So the use of neural networks is two-fold – they are used for the image generation process itself, and for searching for the optimal initial conditions of the aforementioned process for generating eye-catching results.
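As a sketch of that second use, here is a toy logistic-regression "beauty" classifier over parameter vectors, standing in for whatever model the real service trains; the data, features, and hyperparameters are invented for illustration:

```python
import math
import random

def train_beauty_classifier(params, labels, epochs=200, lr=0.5):
    """Logistic regression over image parameters: label 1 means the
    image was picked for mixing (a proxy for 'beautiful')."""
    w = [0.0] * len(params[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(params, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))        # predicted probability
            err = p - y                            # gradient of log loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that a parameter vector yields a 'beautiful' image."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: parameter vectors users mixed (1) or skipped (0).
data = [([0.9, 0.1], 1), ([0.8, 0.2], 1), ([0.1, 0.9], 0), ([0.2, 0.8], 0)]
w, b = train_beauty_classifier([x for x, _ in data], [y for _, y in data])
```

A model like this can score candidate parameter sets before rendering, so only promising initial conditions are turned into images.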
Now you can be your own designer and create a wallpaper for your desktop that is truly and uniquely yours. Simply visit the Desktop Art page on our website or go straight to code2art.jetbrains.com. If you want to learn how the neural network works from the inside, check out the Datalore notebook.
Here are just some examples of what you can create with the help of our generator.
How the GUI works
The front-end serves as the user interface for Mixer mode and provides the controllers for tuning the particular outcome of the neural network in Solo mode. Since the same front-end code was, and still is, used for all versions of the generator, it has improved a lot with each release. In this version, the functionality of the layers – separate components that can each be configured and generate an independent piece of static or moving art – has finally been polished. The GUI logic has also been made independent of any visual representation, so it is now pluggable, and in the future it should be possible to connect it to any visual provider of sliders, knobs, inputs, and buttons.
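A common way to decouple GUI logic from its visual representation is to hide the widgets behind a small provider interface. The sketch below uses Python purely to illustrate the pattern (the real front-end is browser code, and these class and method names are invented):

```python
from abc import ABC, abstractmethod

class ControlProvider(ABC):
    """Anything that can present a slider and report its value back."""
    @abstractmethod
    def add_slider(self, name, lo, hi, on_change):
        ...

class HeadlessProvider(ControlProvider):
    """A provider with no visuals at all, useful for tests.

    A web provider would render an actual <input type='range'>;
    the GUI logic does not care which implementation it gets."""
    def __init__(self):
        self.callbacks = {}

    def add_slider(self, name, lo, hi, on_change):
        self.callbacks[name] = on_change

    def set(self, name, value):
        self.callbacks[name](value)

# The layer's state is updated through callbacks, never through widgets directly.
state = {}
provider = HeadlessProvider()
provider.add_slider("blur", 0.0, 1.0, lambda v: state.update(blur=v))
provider.set("blur", 0.25)
```

Because the logic only talks to the `ControlProvider` interface, swapping in a different widget toolkit requires no changes to the layer code.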
How to use the tool’s new capabilities
After you have chosen a product such as IntelliJ IDEA or MPS in the dropdown, just click the Prescribed button, and it will create that product's splash screen in all its animated glory.
Interacting with Mixer mode
On the initial screen you can see nine different images, each generated for you individually by the neural network. We call it Mixer mode since this is where you can mix random ideas, select the ones you really like, and create art that reflects your inner self.
If none of the suggested images are to your taste, press Regenerate to get nine fresh images, all of which will differ in some way from the previous ones. But before pressing Regenerate, keep in mind that you can always fine-tune the ones that have already been generated for you – we’ll go deeper into this later. If, on the other hand, you already like some of the existing images, just click them one by one and then press the Cross-breed button. The images you selected will remain, and some new images will be generated that are a mix of the ones you selected. Repeat this as many times as you like until you find the perfect image.
Additionally, the neural network learns from your choices – let’s call it “crowdsourcing”. The world knows what is truly beautiful. By gathering and combining this collective knowledge, the neural network can also come to know what is beautiful. Mixer mode already comes with a neural network that has been trained on jetbrainers’ choices – you can compare them with your own or disable the pre-trained network by switching from Trained by to wild.
Double-click any of the images in Mixer mode to enter Solo mode, where you have control over all aspects of that particular image. Under the Neuro folder on the right you have different sliders and checkboxes: play about with them, and have fun experimenting.
You can double-click the image at any time to go back to Mixer mode.
Saving the scene
When you are ready to share the perfect combination of images, or if you want to return to your work later to add final touches, simply press the Get URL button. This will generate a unique URL in the address bar of your browser that you can use to share your current creation or to come back to it later.
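One plausible way such a shareable URL can work is to serialize the scene parameters into a URL-safe token and append it to the address. The actual format code2art uses is not documented here, so the scheme below is only an illustration:

```python
import base64
import json

def scene_to_url(base, params):
    """Encode scene parameters as a URL-safe fragment (illustrative scheme)."""
    payload = json.dumps(params, sort_keys=True).encode("utf-8")
    token = base64.urlsafe_b64encode(payload).decode("ascii")
    return f"{base}#{token}"

def url_to_scene(url):
    """Recover the scene parameters from a URL produced above."""
    token = url.split("#", 1)[1]
    return json.loads(base64.urlsafe_b64decode(token))

url = scene_to_url("https://code2art.jetbrains.com", {"seed": 42, "layers": 3})
restored = url_to_scene(url)
```

The appeal of this approach is that the URL itself carries the whole scene, so sharing a link is enough to reproduce the creation exactly.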
Animating the Solo image
Press the Animate button in Solo mode and wait for a bit. Maybe more than a bit. You may be placed in a queue, and we will tell you where you are in it. You can close the tab and come back later – just be sure to save the scene and press Animate again to check the status. Eventually, you will get an animated video of your image. It is infinite and looped like a Möbius strip. Press the Back to static button to go back to the static version of your image.
You can generate a URL for the animated scene and share it with the world.
If you want to have an .mp4 file instead, you can! This is a really tricky process, but it’s totally worth the effort. Just kidding, all you have to do is press the Export Video button! 😉
The same conditions apply as for animating: there’s a queue and a rendering progress bar, but you can generate a URL to save the scene during the rendering process, and use it to come back later to see if rendering is finished (you can safely close the browser tab while you’re waiting).
To try your luck, press the I feel lucky button. Non-artificial, non-intelligent randomness will suggest a combination of settings out of nowhere. Even random settings can produce a thing of beauty. They can produce ugliness too… but more often beauty.
The technical details
Technically, the server-side is split into several parts:
- The neural network-based* image generation engine built with TensorFlow.
- The video rendering engine that produces videos from the images.
- The scene storage.
- The task queue for distributing CPU- and GPU-intensive tasks between server components.
- Mixer mode – the tool for evolving the generated images via genetic algorithms.
- There’s also yet another neural network built on top of Mixer mode and our image generation engine which aggregates users’ preferences to predict which images are more likely to be beautiful.
*The neural network itself was created in Datalore by JetBrains – an online notebook for data scientists. See the network architecture details here. Create your copy of the notebook and experiment with model parameters yourself.
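The task-queue component can be pictured as workers pulling CPU- and GPU-intensive jobs from a shared queue. The sketch below uses Python's in-process `queue` and `threading` modules purely to illustrate the pattern; a production system distributing work between server components would presumably use a dedicated broker:

```python
import queue
import threading

tasks = queue.Queue()
results = {}  # job_id -> rendered output

def worker():
    """Pull jobs until a None sentinel arrives, then shut down."""
    while True:
        job = tasks.get()
        if job is None:
            tasks.task_done()
            break
        job_id, payload = job
        # Stand-in for the expensive rendering work.
        results[job_id] = f"rendered:{payload}"
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

for i in range(5):
    tasks.put((i, f"scene-{i}"))
for _ in threads:          # one sentinel per worker
    tasks.put(None)

tasks.join()               # block until every job is marked done
for t in threads:
    t.join()
```

Decoupling submission from execution this way is what lets the site tell you your position in the queue and keep rendering even after you close the tab.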
Enjoy playing with the generator and don’t forget to share your results with others on social media with #code2art and tag @JetBrains. We would love to see your art!
The Drive to Develop