Solving Advent of Code in Rust, With Just Enough AI

A year ago, I wrote a blog post about solving Advent of Code puzzles using Rust as the implementation language. I believe it’s still relevant if you plan to use Rust this year. In one section, I advised limiting the use of AI and demonstrated how to disable the relevant functionality in RustRover, either partially or completely, while solving puzzles. Just a year later, we live in a very different world when it comes to using AI in software development. Yet here’s what Eric Wastl, the creator of Advent of Code, writes about using AI:
Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve – no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
In this blog post, I want to argue with Eric. After all, when we go to a gym, isn’t it because there are specific tools there that help us get stronger? All those dumbbells, kettlebells, barbells with weight plates, pull-up bars, and various machines – we use them for a reason, right? We want to get stronger, so we go to a gym and use the tools. Why not use the tools that help us grow our coding skills?
Note the shift here. I fully agree with Eric: we shouldn’t use AI to solve the puzzles, but we can (and even should) use AI along the way. Why? Because being able to apply AI to writing code is a must-have skill in today’s world. Also, using AI isn’t a simple yes-or-no decision – it’s a spectrum. I’ll elaborate on that shortly. But first, I’d like to invite you to join the Solving Advent of Code 2025 in Rust contest and share the following message from the RustRover team.
Solve Advent of Code 2025 Puzzles in Rust and Win Prizes
Before we explain how to enter this year’s contest, we’d like to address last year’s Advent of Code in Rust. Unfortunately, we were unable to send the prizes to the winners because we overlooked an important logistical detail when launching the competition – we didn’t ask participants to ensure that their GitHub profiles included an email address or social media handle. We’re truly sorry about this.
To avoid the same issue this year, please make sure your email or social media handle is listed on your GitHub profile, so Santa can deliver your well-earned gifts. 🎁
As a gesture of appreciation, we’d also like to congratulate the three winners of the 2024 challenge, and we’re ready to send out the long-overdue prizes. Well done – great minds, great solutions!
- Duro – 4900
- Mark Janssen – 4769
- Michel Krämer – 4742
Thank you for participating and for your patience. We hope you’ll join us again this year, continue solving Advent of Code challenges in Rust, and keep contributing to the Rust community.
How to Enter the Contest
- Make sure your GitHub profile includes an email address or social media handle.
- Go to the Leaderboard section of your Advent of Code profile and enter one of the codes below:
- Leaderboard 1: 4223313-0557c16e
- Leaderboard 2: 2365659-de227312
- Complete at least three Advent of Code puzzles in Rust.
- Share your solutions on GitHub and add aoc-2025-in-rust to the Topics field of your repository. To do this, click the gear icon in the top right-hand corner of your repository page and edit the Topics list.
By competing for the top positions on our leaderboards, you can win one of the Amazon Gift Card prizes. As a small apology for last year’s issue, we’re offering five prizes instead of three:
- 1st place – USD 150
- 2nd place – USD 100
- 3rd place – USD 70
- 4th place – USD 50
- 5th place – USD 30
Plus, USD 20 gift cards for five randomly selected participants.
GitHub Template
We’ve prepared a GitHub template to help you quickly set up your Advent of Code project in Rust. You don’t have to use it for your solutions, but it streamlines the setup and lets you focus on what really matters.
To use it:
- Log in to GitHub.
- Click Use this template (please don’t fork).
- Once the setup is complete, clone the project in RustRover.
- Don’t forget to add aoc-2025-in-rust to the Topics field of your repository.
Skills we train while solving coding puzzles
Alright, if you aim to compete on the leaderboards, then no AI for you. Code completion in an IDE? Well, I don’t know – maybe ed and rustc are all you need to demonstrate your puzzle-solving power. That way, you show that it’s all about the speed of your brain, your keyboard, your CPU, your network card, and your internet provider. The rest of this post is for everyone who isn’t competing with anyone.
Advent of Code is great precisely because it exercises so many real-world engineering muscles. Some of these muscles benefit from AI spotting you; others atrophy if you let AI do the heavy lifting. Here’s a closer look at which skills belong in each category.
Structuring code for a single puzzle and for the whole competition. Advent of Code puzzles are small, but the whole event is long. Structuring your solution so it doesn’t become a pile of spaghetti by Day 7 is a real skill. Should you use AI here? Absolutely yes – as a reviewer, not as a decision-maker. Ask AI to suggest module layouts, compare different folder structures, or propose ways to reuse code across days. But don’t outsource the structural thinking itself. Knowing how to architect small but flexible solutions is one of the main professional skills AoC trains, and AI should support your design, not replace it.
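To make this concrete, here’s one possible layout (a sketch of my own, not the official template’s structure): one module per day, all exposing the same signature, dispatched from `main`. In a real project, each module would live in its own file (`src/day01.rs`, and so on).

```rust
// A sketch: one module per day with a uniform signature.
mod day01 {
    pub fn solve(input: &str) -> (u64, u64) {
        (input.lines().count() as u64, 0) // stand-in solution
    }
}

fn main() {
    // Pick the day from the first CLI argument, defaulting to 1.
    let day: u32 = std::env::args()
        .nth(1)
        .and_then(|s| s.parse().ok())
        .unwrap_or(1);
    let input = std::fs::read_to_string(format!("input/day{day:02}.txt"))
        .unwrap_or_default();
    let (part1, part2) = match day {
        1 => day01::solve(&input),
        _ => unimplemented!("day {day} not solved yet"),
    };
    println!("part 1: {part1}\npart 2: {part2}");
}
```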
Reading the problem text and coming up with an initial idea. This skill is core to the spirit of Advent of Code. Reading carefully, extracting requirements, noticing tricky edge cases, and forming an initial idea – that’s exactly what Eric wants humans to practice. And he’s right: don’t use AI here. Don’t ask for summaries, hints, or solution outlines. Let your own brain wrestle with the puzzle text. This is one of the purest forms of algorithmic problem solving, and it’s the part AI takes away if you let it.
Choosing the right library and the right level of abstraction. Rust has plenty of useful crates, but AoC often rewards sticking to the standard library. Should AI help? Sure – but in moderation. Asking, “Is there a crate for fast grid manipulation?” or “Is there a simple way to parse this with nom?” mirrors real-world development. As long as you make the final call yourself, AI here acts like a knowledgeable colleague pointing you toward options, not handing you the solution.
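As a toy illustration of that trade-off (assuming the itertools crate is in your dependencies), here’s the same small task – counting increasing adjacent pairs, a classic AoC pattern – done with the standard library alone and with itertools:

```rust
fn main() {
    let nums = vec![199, 200, 208, 210, 200, 207];

    // Standard library: windows() works on slices.
    let count_std = nums.windows(2).filter(|w| w[1] > w[0]).count();

    // itertools: tuple_windows() works on any iterator, not just slices.
    use itertools::Itertools;
    let count_it = nums.iter().tuple_windows().filter(|(a, b)| b > a).count();

    assert_eq!(count_std, count_it);
    println!("{count_std} increasing pairs");
}
```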
Choosing the right data structure. This is both an AoC skill and a general CS one. Selecting between a Vec, a HashMap, a BTreeMap, a VecDeque, or a custom struct requires understanding the trade-offs. AI can help explain those trade-offs or remind you of performance characteristics. But don’t ask AI which data structure solves the puzzle; making that choice is part of the puzzle. Use AI to deepen understanding, not to skip the thinking.
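For instance, here’s a sketch of the dense-versus-sparse grid decision that comes up in almost every map-based puzzle:

```rust
use std::collections::HashMap;

fn main() {
    // Dense grid: O(1) indexing and cache-friendly, but the bounds are
    // fixed up front and coordinates must be non-negative.
    let mut dense = vec![vec![b'.'; 100]; 100];
    dense[5][7] = b'#';

    // Sparse grid: grows in any direction and handles negative
    // coordinates, but every access pays for hashing.
    let mut sparse: HashMap<(i64, i64), u8> = HashMap::new();
    sparse.insert((-3, 42), b'#');

    println!("{} {}", dense[5][7] as char, sparse.len());
}
```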
Parsing the input into a convenient structure. AI shines here. Parsing is often tedious, repetitive, and not the focus of the puzzle. If you’d rather not spend 20 minutes writing yet another loop splitting lines on spaces, let AI write the initial parser. You’ll still check it, tweak it, and integrate it with your own logic, but AI can save your cognitive energy for the interesting bits.
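As a minimal example of the kind of boilerplate worth delegating, here’s a std-only parser for the common “lines of whitespace-separated numbers” input shape (the function names are illustrative):

```rust
// Parses input like "12 40 7\n3 3 9" into rows of numbers,
// using the standard library only.
fn parse(input: &str) -> Vec<Vec<u64>> {
    input
        .lines()
        .map(|line| {
            line.split_whitespace()
                .map(|n| n.parse().expect("not a number"))
                .collect()
        })
        .collect()
}

fn main() {
    let rows = parse("12 40 7\n3 3 9");
    assert_eq!(rows[1], vec![3, 3, 9]);
}
```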
Choosing the right algorithm. This is the heart of competitive puzzle solving. Deciding whether something requires BFS, DP, a custom state machine, or a greedy approach is a deeply human skill – and one that Advent of Code trains extremely well. This is another area where I’d say: no AI. If you rely on the model to pick the algorithm, you’ve skipped the actual puzzle. You can use AI afterward to compare approaches or learn alternatives, but not during the solving phase.
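Once you’ve made the call yourself, writing the skeleton is mechanical. For reference, here’s a minimal BFS over a grid; `passable` is a stand-in for whatever movement rule the puzzle defines:

```rust
use std::collections::{HashSet, VecDeque};

// Shortest path length from `start` to `goal` in unit steps, or None.
fn bfs(
    start: (i32, i32),
    goal: (i32, i32),
    passable: impl Fn((i32, i32)) -> bool,
) -> Option<u32> {
    let mut seen = HashSet::from([start]);
    let mut queue = VecDeque::from([(start, 0)]);
    while let Some(((x, y), dist)) = queue.pop_front() {
        if (x, y) == goal {
            return Some(dist);
        }
        for next in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)] {
            // insert() returns false if we've already visited `next`.
            if passable(next) && seen.insert(next) {
                queue.push_back((next, dist + 1));
            }
        }
    }
    None
}

fn main() {
    // Toy 10x10 open grid: the shortest path is the Manhattan distance.
    let inside = |(x, y): (i32, i32)| (0..10).contains(&x) && (0..10).contains(&y);
    assert_eq!(bfs((0, 0), (9, 9), inside), Some(18));
}
```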
Picking the right language feature for the job. Rust is full of elegant features – iterators, pattern matching, ownership tricks, generics, traits, lifetimes. Sometimes AI can remind you of a syntactic trick or propose a more idiomatic expression of your idea. That’s fine, as long as the idea itself is yours. Using AI to teach you small idioms or propose cleaner code is actually great training, but avoid asking AI to “rustify” a solution you don’t understand.
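A small example of the kind of idiom worth learning this way: expressing a direction table as a match instead of an if-else chain, then folding moves with an iterator.

```rust
// Pattern matching makes intent and exhaustiveness explicit.
fn step(dir: char) -> (i32, i32) {
    match dir {
        'U' => (0, -1),
        'D' => (0, 1),
        'L' => (-1, 0),
        'R' => (1, 0),
        other => panic!("unexpected direction {other}"),
    }
}

fn main() {
    // Walk a path and fold the moves into a final position.
    let end = "RRUUL"
        .chars()
        .map(step)
        .fold((0, 0), |(x, y), (dx, dy)| (x + dx, y + dy));
    assert_eq!(end, (1, -2));
}
```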
Adding a visualization. Visualizations aren’t typically required, but they’re fun and often deepen understanding. Here, AI is extremely useful – whether generating a quick plot with plotters, helping build a tiny TUI with ratatui, or producing a debug print layout. This is auxiliary work, not the core puzzle, so go ahead and use the tools.
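Even a plain ASCII dump is often enough. Here’s a crate-free sketch that renders a set of visited cells:

```rust
use std::collections::HashSet;

// Renders visited cells as '#' on a width x height ASCII grid.
fn render(visited: &HashSet<(i32, i32)>, width: i32, height: i32) {
    for y in 0..height {
        let row: String = (0..width)
            .map(|x| if visited.contains(&(x, y)) { '#' } else { '.' })
            .collect();
        println!("{row}");
    }
}

fn main() {
    let visited = HashSet::from([(0, 0), (1, 1), (2, 2)]);
    render(&visited, 4, 4); // prints a small diagonal
}
```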
Testing your code. Do you need tests for AoC? Strictly speaking, no. But writing a couple of targeted tests is a great habit: testing parsing, edge cases, or parts of your algorithm. AI is a good assistant here: it can generate test scaffolding, propose property tests, or create sample data variations. As long as you understand what the tests check, this is a safe area to lean on AI.
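A typical shape for such a test is to pin your solution to the worked example from the puzzle text. The `parse` and `part1` bodies below are stand-ins for your own code:

```rust
fn parse(input: &str) -> Vec<u64> {
    input.lines().map(|l| l.parse().expect("not a number")).collect()
}

fn part1(nums: &[u64]) -> u64 {
    nums.iter().sum() // stand-in for the real solution
}

#[cfg(test)]
mod tests {
    use super::*;

    // Sample input and expected answer come straight from the puzzle text.
    const SAMPLE: &str = "1\n2\n3";

    #[test]
    fn part1_matches_sample() {
        assert_eq!(part1(&parse(SAMPLE)), 6);
    }
}
```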
Benchmarking solutions. Benchmarking is a professional skill AoC can absolutely help train – especially on later days, when naive solutions melt your CPU. AI can help you set up criterion benchmarks or interpret microbenchmarking results, but you should decide what to measure and why. Benchmarking is partly technical and partly philosophical: what trade-offs matter? AI can help with the technical part.
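Here’s a minimal Criterion sketch, assuming criterion is listed under [dev-dependencies] and the bench target has harness = false in Cargo.toml; `part1` and the input path are placeholders for your own code:

```rust
// benches/day01.rs
use criterion::{criterion_group, criterion_main, Criterion};
use std::hint::black_box;

fn part1(input: &str) -> usize {
    input.lines().count() // stand-in for the real solution
}

fn bench_part1(c: &mut Criterion) {
    let input = std::fs::read_to_string("input/day01.txt").expect("missing input");
    // black_box stops the optimizer from constant-folding the benchmark away.
    c.bench_function("day01 part1", |b| b.iter(|| part1(black_box(&input))));
}

criterion_group!(benches, bench_part1);
criterion_main!(benches);
```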
Learning stuff along the way. This is the most important skill of all. Every Advent of Code teaches something: an algorithm you forgot, a data structure you never used, a Rust feature you always meant to try. Learning with AI is natural and encouraged. Just ask it questions like a mentor, not like a puzzle-solver-for-hire. Explanation, context, and examples? Great. Solutions to the actual puzzle? Skip those.
Learning how to prompt coding agents to get a fully functional solution. Remember, Eric Wastl writes, “If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.” But is that really true? In my experience, not quite. Most “AI prompting exercises” out there are either artificial or oversimplified. Advent of Code puzzles, on the other hand, are perfect precisely because they aren’t designed for AI. Many of them can’t be solved in one shot, even by a strong coding agent, such as Junie or Claude Agent. They require iterative refinement, mid-course corrections, and careful steering – exactly the techniques you need to master if you want to use AI effectively in real-world development. In other words, these puzzles become some of the best training grounds for learning how to prompt coding agents. You’ll learn how to break down problems, feed partial context, debug collaboratively with the model, and guide it away from dead ends. These are practical, valuable skills, and Advent of Code offers an endless supply of opportunities to practice them.
Implementing the personal AI strategy for AoC in RustRover
At this point, the key question becomes: which skills do you want to train this year? There’s no single correct answer. Maybe you want to sharpen pure algorithmic thinking with zero AI help. Maybe you want to practice integrating AI into your daily workflow. Maybe you want something in between.
The important part is: make this a conscious choice, not an accidental default.
Sample strategies
Here are a few possible “AI strategies” you can adopt:
- Strategy A: Pure human mode. No AI, no inline completion, no chat. You open RustRover, you read the problem, you write the code. This maximizes training of problem understanding, algorithm selection, data structures, and Rust fluency. It’s also the closest to what Eric Wastl has in mind.
- Strategy B: Assisted implementation mode. You solve the puzzle on paper (or in your head) first: understand the problem, pick the algorithm, decide on data structures. Only then do you let AI help with implementation details: parsing, boilerplate, small refactorings, docs. This is a great mode if you want to protect the “thinking” parts while still practicing how to collaborate with AI in code.
- Strategy C: Agent steering mode. You deliberately practice guiding an AI coding agent toward a fully working solution. You still read and understand the puzzle, but you treat the model as a junior pair programmer: you prompt, correct, re-prompt, adjust the approach, and iterate. This is ideal if your goal is to improve at prompting coding models, debugging their output, and managing multi-step interactions.
You can even mix and match strategies across puzzle parts and days. For example, you might tackle Part 1 in Pure human mode to fully engage with the core problem, then switch to Assisted implementation or Agent steering for Part 2, where the twist often builds on the same logic. And on particularly difficult days, you might choose a more AI-assisted strategy from the start.
Using RustRover’s AI Assistant chat
For this section, I’m talking about direct chat with a model inside RustRover’s AI Assistant, not higher-level agents like Junie. You’re essentially talking to the model about your codebase and puzzle, asking it to:
- explain parts of your solution or the standard library;
- suggest refactorings and idiomatic Rust patterns;
- help with parsing and data wrangling;
- generate tests or benchmarks.
The goal is to keep you in charge of the solution, while the model helps with “muscle work” and with explanations.
If you want to train prompting as a skill, treat each puzzle as a mini lab:
- Set constraints explicitly. “I already chose the algorithm: Dijkstra’s algorithm on a grid. Don’t change the approach, just help me implement it idiomatically in Rust.”
- Provide context. Paste the relevant part of the puzzle description and your current code, and explain what you’re stuck on: “Parsing is done, I now need to maintain a priority queue of states. Help me implement this using BinaryHeap.” (See the sketch after this list for where that request usually lands.)
- Iterate, don’t restart. Instead of “rewrite everything”, use prompts like “Here is the current solution and the bug I see. Propose a minimal fix.”
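Speaking of that BinaryHeap request: Rust’s BinaryHeap is a max-heap, so the usual Dijkstra-style pattern wraps each state in Reverse to pop the cheapest one first. A minimal sketch:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn main() {
    // States are (cost, position); Reverse turns the max-heap into a min-heap.
    let mut heap = BinaryHeap::new();
    heap.push(Reverse((0u32, (0i32, 0i32))));

    while let Some(Reverse((cost, (x, y)))) = heap.pop() {
        // In a real Dijkstra you'd skip already-settled states here, then
        // push Reverse((cost + step_cost, neighbor)) for each neighbor.
        println!("visiting ({x}, {y}) at cost {cost}");
    }
}
```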
This way, you’re not just getting answers; you’re practicing how to drive a coding model effectively.
Tweaking inline completion settings
Finally, inline completion can quietly shape the way you write code – sometimes too much. For Advent of Code, consider tuning it to match your chosen strategy:
- If you’re in Pure human mode, you might want to turn inline completion off completely, or at least make it less aggressive.
- If you’re in Assisted implementation mode, keep inline completion on, but be disciplined: accept suggestions only for clearly mechanical code (loops, parsing, simple matches), not for the core algorithm.
- If you’re in Agent steering mode, you can let inline completion be quite active, but you should review what it proposes and ask the chat assistant to explain non-obvious pieces.
The key idea: your RustRover setup should reflect your personal AI training plan for Advent of Code, not the other way around.
Conclusion
Advent of Code remains one of the best ways to sharpen your coding skills, and AI doesn’t have to diminish that experience – it can enhance it when used intentionally. We shouldn’t let AI solve the puzzles for us, but we can absolutely let it help us write better, cleaner, and faster Rust code. The real challenge is choosing which skills you want to train: from algorithms and data structures to testing, visualization, and prompting coding agents effectively. With the right strategy, AoC becomes not just a seasonal tradition but a focused workout for both your problem-solving mind and your AI collaboration skills. RustRover gives you all the knobs and switches you need to fine-tune that strategy, from chat-based assistance to inline completion settings.
Most importantly, Advent of Code is fun – and every puzzle you attempt, no matter how you solve it, makes you a better engineer. So pick your approach, open RustRover, and go solve some puzzles.
