The Evolution of Async Rust: From Tokio to High-Level Applications
Disclaimer: This article was created with the help of AI-based writing and communication assistants, which were used to distill the core topics of this rich and nuanced livestream into a compact blog post format.
In another of our JetBrains livestreams, Vitaly Bragilevsky was joined by Carl Lerche, the creator of Tokio, for an in-depth conversation about the evolution of async Rust. Tokio has become the de facto asynchronous runtime for high-performance networking in Rust, powering everything from backend services to databases. During the discussion, they explored how async Rust has matured over the years, the architectural decisions behind Tokio, common challenges developers face today, and where the ecosystem is heading next. If you missed the live session, you can watch the full recording on JetBrains TV. Below, you’ll find a structured recap of the key questions and insights from the conversation.
Q1. What is TokioConf and why did you decide to organize it?
TokioConf is the first conference dedicated to the Tokio ecosystem, taking place in Portland, Oregon. This year marks ten years since Tokio was first announced, making it a natural moment to bring the community together. Use the code jetbrains10 for 10% off the general admission ticket (excluding any add-ons).
Tokio and Rust have become foundational technologies for infrastructure-level networking software, including databases and proxies. The conference is meant to reflect that maturity and growth. While the name highlights Tokio, the scope includes broader async and networking topics in Rust.
“Tokio and Rust have become one of the default ways companies build infrastructure-level networking software these days.”
Q2. When people hear “Async Rust,” what should they picture?
Async Rust is about more than performance. While handling high concurrency is a key advantage, async programming also improves how developers structure event-driven systems.
Timeouts, cancellation, and managing multiple in-flight tasks become significantly easier in async Rust compared to traditional threaded approaches. Async in Rust leverages the ownership model and Drop, enabling safe and clean cancellation patterns.
“Async is both performance, but also a way of managing lots of in-flight threads of logic well.”
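To make this concrete, here is a minimal sketch of a timeout wrapped around an in-flight operation using Tokio's `timeout` combinator. The `fetch_user` function and its contents are placeholders for illustration, not anything from the talk.

```rust
use std::time::Duration;
use tokio::time::timeout;

async fn fetch_user(id: u64) -> Result<String, std::io::Error> {
    // Placeholder for some async I/O, e.g. a database or HTTP call.
    Ok(format!("user-{id}"))
}

#[tokio::main]
async fn main() {
    // Wrap the in-flight operation in a timeout. If the deadline passes,
    // the inner future is simply dropped, which cancels it.
    match timeout(Duration::from_secs(2), fetch_user(42)).await {
        Ok(Ok(user)) => println!("got {user}"),
        Ok(Err(e)) => eprintln!("I/O error: {e}"),
        Err(_) => eprintln!("timed out after 2s"),
    }
}
```

Because cancellation is just dropping a future, the same pattern composes with ownership and `Drop` rather than requiring a separate cancellation API.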
Q3. How did Tokio begin? Why did Rust need it?
Tokio evolved from earlier experimentation with non-blocking I/O in Rust. Initially, Rust only had blocking socket APIs, and building efficient network systems required low-level abstractions. The journey went from Mio (epoll bindings), to the Future trait, and finally to async/await, which was a major milestone in making async programming in Rust ergonomic.
“The way async/await ended up being designed is actually quite impressive.”
The language team managed to deliver memory safety and zero-cost abstractions in a way that wasn’t obvious at the time.
Q4. Could Rust have something like Java’s virtual threads?
Rust originally had green threads and coroutines before version 1.0, but they were removed to preserve zero-cost abstractions and C-level performance characteristics. The overhead and complexity of stack management for green threads conflicted with Rust’s design goals at the time.
“Rust actually started with lightweight virtual threads and coroutines.”
Whether such a feature could return is an open question, but today’s Rust async model is fundamentally different.
Q5. How does cancellation work in Async Rust?
Cancellation in Rust is implemented through Drop. When you drop a future, its cleanup logic runs automatically.
If the future directly owns a socket, it closes immediately. If the socket is owned by another task (for example in Hyper), cancellation signals cascade through channels and trigger cleanup.
However, async functions can be dropped at any .await point, and developers must write their code defensively to handle that reality correctly.
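Here is a small sketch of that Drop-based model. The `CleanupGuard` type is an illustrative stand-in for a resource such as a socket; when the losing `select!` branch is cancelled, its future is dropped and the cleanup runs automatically.

```rust
use std::time::Duration;
use tokio::time::sleep;

// A guard whose Drop impl runs when the owning future is cancelled,
// mirroring how a socket gets closed when the future that owns it is dropped.
struct CleanupGuard(&'static str);

impl Drop for CleanupGuard {
    fn drop(&mut self) {
        println!("cleanup ran for {}", self.0);
    }
}

async fn slow_work() {
    let _guard = CleanupGuard("slow_work");
    // The future can be dropped at any .await point; when that happens,
    // `_guard` is dropped and its cleanup logic runs.
    sleep(Duration::from_secs(10)).await;
    println!("slow_work finished"); // never reached if cancelled
}

#[tokio::main]
async fn main() {
    tokio::select! {
        _ = slow_work() => {}
        _ = sleep(Duration::from_millis(100)) => {
            // This branch wins, so the slow_work future is cancelled (dropped).
            println!("deadline hit, slow_work was cancelled");
        }
    }
}
```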
Q6. Why did Tokio become the dominant async runtime?
Tokio became the de facto standard largely due to ecosystem momentum. Early crates like Hyper built on Tokio, and once that foundation solidified, switching runtimes required compelling reasons.
Other runtimes exist (especially for embedded or specialized contexts) but for general server-side development, Tokio’s ecosystem depth made it the default.
“There just wasn’t a good reason to not use Tokio.”
Q7. What about io_uring? Is it the future?
io_uring can provide benefits, especially for batching filesystem operations. However, for networking workloads, real-world gains are often limited. It is more complex than epoll and has historically had more security issues. That said, Tokio allows mixing in io_uring-specific crates when you have a clear use case.
“I’ve not seen real performance benefits with swapping out io_uring for sockets under the hood in Tokio.”
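For the cases where io_uring does pay off, such as file I/O, the tokio-uring crate can be mixed into an application. The sketch below follows the style of that crate's README; the file name is a placeholder, and exact details may vary by crate version.

```rust
use tokio_uring::fs::File;

fn main() {
    // tokio-uring drives futures on an io_uring-backed runtime and can be
    // used alongside regular Tokio code when a specific workload benefits.
    tokio_uring::start(async {
        let file = File::open("data.bin").await.unwrap();

        // Buffers are passed to the kernel by ownership and handed back
        // together with the result when the operation completes.
        let buf = vec![0u8; 4096];
        let (res, _buf) = file.read_at(buf, 0).await;
        let n = res.unwrap();
        println!("read {n} bytes");
    });
}
```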
Q8. What were the most important design decisions in Tokio?
Tokio intentionally avoided reinventing scheduling patterns. Instead, it adopted proven strategies from Go and Erlang, including work-stealing schedulers.
The philosophy was to provide:
- Good defaults,
- Strong performance,
- Escape hatches for advanced tuning.
The goal was to make Tokio easy enough for most developers while still enabling performance optimization when needed.
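Those escape hatches are visible in the runtime builder. The sketch below shows a manually configured runtime; the thread count and thread name are arbitrary examples, not recommendations from the talk.

```rust
use tokio::runtime::Builder;

fn main() -> std::io::Result<()> {
    // #[tokio::main] gives you the good defaults; the Builder is the
    // escape hatch when you need to tune the runtime explicitly.
    let runtime = Builder::new_multi_thread()
        .worker_threads(4)        // arbitrary example value
        .thread_name("app-worker")
        .enable_all()             // enable the I/O and time drivers
        .build()?;

    runtime.block_on(async {
        println!("running on a manually configured runtime");
    });
    Ok(())
}
```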
Q9. What are common mistakes in Async Rust?
The biggest issue comes from cooperative scheduling. Tasks only yield back to the runtime at .await points, so long stretches of CPU-heavy work that never await can stall other tasks. Tokio provides runtime metrics to help detect such problems, and understanding how the scheduler works is crucial to avoiding tail-latency issues.
“Because async is cooperative scheduling, you have to make sure you’re yielding back to the runtime regularly enough.”
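Two common remedies are moving CPU-bound work onto the blocking thread pool or yielding explicitly inside long loops. A rough sketch, where `expensive_computation` is a placeholder:

```rust
use tokio::task;

// Placeholder for CPU-heavy work that never hits an .await.
fn expensive_computation(input: u64) -> u64 {
    (0..input).fold(0, |acc, x| acc.wrapping_add(x))
}

#[tokio::main]
async fn main() {
    // Option 1: move CPU-bound work off the async worker threads entirely.
    let result = task::spawn_blocking(|| expensive_computation(10_000_000))
        .await
        .expect("blocking task panicked");
    println!("result = {result}");

    // Option 2: if the work must stay on the runtime, yield periodically
    // so other tasks get a chance to run.
    let mut acc: u64 = 0;
    for chunk in 0..100u64 {
        acc = acc.wrapping_add(expensive_computation(100_000 + chunk));
        task::yield_now().await;
    }
    println!("acc = {acc}");
}
```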
Q10. What’s the best way to debug Async Rust?
Debugging async systems often involves:
- Tracing,
- Runtime metrics,
- Async backtraces,
- Traditional debuggers.
Stuck tasks and high tail latency remain the hardest issues to diagnose. Better static analysis and linting tools could significantly improve this area in the future.
“The biggest pitfall stems down to developers accidentally canceling something and not handling the cancellation appropriately.”
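As a starting point, a minimal tracing setup might look like the sketch below, assuming the tracing and tracing-subscriber crates are added as dependencies; `handle_request` is a placeholder. Tools such as tokio-console build on the same tracing instrumentation for deeper inspection of stuck tasks.

```rust
use std::time::Duration;
use tracing::{info, instrument};

// Instrumented async functions emit spans, which makes it much easier to see
// where a task is spending time or getting stuck.
#[instrument]
async fn handle_request(id: u64) {
    info!("started");
    tokio::time::sleep(Duration::from_millis(50)).await;
    info!("finished");
}

#[tokio::main]
async fn main() {
    // Print spans and events to stdout; in production you would typically
    // export them to a collector instead.
    tracing_subscriber::fmt::init();

    handle_request(1).await;
    handle_request(2).await;
}
```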
Q11. What is Toasty, and why are you building it?
Rust has matured as a systems and infrastructure language, but higher-level web application tooling remains underdeveloped. Toasty aims to explore that space by building a productive, ergonomic data modeling and query layer. The goal is not just performance, but developer ergonomics – while still preserving escape hatches for advanced use cases.
Q12. Can Rust move into high-level web frameworks?
Rust already has a foothold in many organizations thanks to its infrastructure strengths. As internal Rust ecosystems grow, the demand for higher-level tooling increases. The missing piece is ergonomic, opinionated frameworks that prioritize productivity. The long-term vision is not to replace existing ecosystems, but to expand Rust’s reach upward into full-stack development.
“I do think there’s a way to build productive and ergonomic libraries with Rust that focus on ease of use.”
Closing Thoughts
Rust has firmly established itself as the best choice for many infrastructure-level systems. The next frontier is higher-level application development. Tokio solved async infrastructure and now the ecosystem is evolving toward productivity and full-stack capability.
If you’re interested:
- Explore Toasty on the Tokio GitHub
- Join the Tokio Discord
- Attend TokioConf in Portland, Oregon
Watch our previous livestream with Herbert Wolverson and explore everything you wanted to ask about Rust.
