Kotlin
A concise multiplatform language developed by JetBrains
How Backend Development Teams Use Kotlin in 2025: Insights from a Certified Trainer
This is the first guest post in a two-part series from José Luis González. José Luis holds a PhD in software development and is a JetBrains-certified Kotlin Trainer who works with developers and engineering teams to deepen their Kotlin skills and apply the language effectively in real projects. At Hyperskill, he runs instructor-led Kotlin training for teams, focusing on advanced topics and practical problem-solving rather than theory.
1. What are the top 3 Kotlin anti-patterns from self-taught teams, and how do you fix each one?
This is a good question. The basic anti-pattern I see in self-taught Kotlin teams is that they still design hierarchies of abstract classes, factories, and deep inheritance trees because that’s how they learned Java. But Kotlin thrives on data-oriented, sealed, and expression-based designs. Overusing inheritance leads to fragile hierarchies, bloated APIs, and code that fights Kotlin’s strengths.
The simplest improvement is to flatten your design. Use sealed classes, data classes, and when expressions to model domain state declaratively instead of relying on polymorphism. Prefer composition and smart constructors over inheritance.
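As a quick sketch of that flattening (the `PaymentState` domain here is invented for illustration), a sealed interface plus data classes lets the compiler enforce exhaustive handling, so adding a new state breaks the build instead of silently falling through:

```kotlin
// Domain state as a closed set of cases, not an inheritance tree.
sealed interface PaymentState {
    data object Pending : PaymentState
    data class Completed(val receiptId: String) : PaymentState
    data class Failed(val reason: String) : PaymentState
}

// `when` over a sealed type is exhaustive: the compiler flags any unhandled case.
fun describe(state: PaymentState): String = when (state) {
    PaymentState.Pending -> "still processing"
    is PaymentState.Completed -> "done, receipt ${state.receiptId}"
    is PaymentState.Failed -> "failed: ${state.reason}"
}
```

No abstract base class, no overridden `describe()` scattered across subclasses: the behavior lives in one expression next to the data it interprets.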
Moving on to what has caught my attention over the last two years, I see subtler issues from teams that already “know Kotlin” but haven’t kept up with how the language has evolved.
A major issue I still see is the use of ambient singletons: things like global object repositories or custom service locators. They make code look simpler at first, but in reality they hide dependencies and make testing much harder.
Context parameters, the successor to context receivers, fix this: functions declare the services they need and callers supply them, with no globals required:
import kotlinx.datetime.Instant
import kotlinx.datetime.Clock as SystemClock

interface Clock {
    fun now(): Instant
}

// A context parameter must be named to call its members.
context(clock: Clock)
fun stamp(): Instant = clock.now()

fun main() = with(object : Clock {
    override fun now() = SystemClock.System.now()
}) {
    println(stamp()) // `with` supplies the Clock as a context argument
}
Context parameters are in Beta as of Kotlin 2.2 and replace context receivers, with IDE-assisted migration.
Another common trap is the illusion of type safety through typealias.
typealias UserId = String doesn’t create a new type as it’s only an alias, so UserId and String are fully interchangeable. In 2025, the idiomatic solution is to use value classes (inline classes), which introduce true domain types while remaining allocation-free at runtime.
@JvmInline
value class UserId(val v: String)

fun load(u: UserId) {}

// load("abc")     // does not compile: a String is not a UserId
load(UserId("abc")) // the wrapper makes the domain type explicit
A more subtle anti-pattern I still see is treating coroutines as ‘threads with nicer syntax.’ The telltale signs are GlobalScope.launch, random runBlocking calls in production, and code that ignores structured concurrency entirely. The modern fix: stop launching in GlobalScope, tie every coroutine to a real scope or lifecycle, keep runBlocking in main() or tests, and use coroutineScope/supervisorScope to compose concurrent work instead of escaping it.
suspend fun load() = supervisorScope {
    val a = async { repo.a() }
    val b = async { repo.b() }
    a.await() to b.await()
}
GlobalScope is a @DelicateCoroutinesApi escape hatch, and official guidance frames runBlocking as a bridge for tests and entry points, not production call paths.
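To make “tie every coroutine to a real scope” concrete, here is a minimal sketch (`ReportService` and its refresh logic are invented for illustration): the component owns a `SupervisorJob`-backed scope and cancels it on shutdown, instead of reaching for `GlobalScope`:

```kotlin
import java.util.concurrent.atomic.AtomicInteger
import kotlinx.coroutines.*

class ReportService {
    val refreshCount = AtomicInteger(0)

    // One scope per component: SupervisorJob so one failed refresh
    // doesn't tear down unrelated work in the same scope.
    private val scope = CoroutineScope(SupervisorJob() + Dispatchers.Default)

    fun scheduleRefresh() {
        // Tied to this service's lifecycle, not to the whole process.
        scope.launch { refresh() }
    }

    private suspend fun refresh() {
        delay(10) // stands in for fetching and caching a report
        refreshCount.incrementAndGet()
    }

    // Cancelling the scope cancels every coroutine it launched.
    fun shutdown() = scope.cancel()
}
```

The difference from `GlobalScope.launch` is that there is now a single place where all of this component’s work can be observed, supervised, and cancelled.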
(Author’s remark: since Kotlin 2.0, K2 is the default compiler across JVM/Native/JS/Wasm: lean on these features confidently in KMP codebases.)
2. When teams ask you about testing async Kotlin code, what’s the exact testing pattern you show them that works reliably in CI/CD?
I show them one reliable pattern that scales and never flakes. The key is to inject dispatchers, not hardcode Dispatchers.IO, and test with runTest plus a StandardTestDispatcher (or MainDispatcherRule on Android). This lets you control virtual time deterministically using advanceTimeBy() or advanceUntilIdle(), so no real delays or sleeps are needed. Always assert both results and cancellation: tests should prove that CancellationException propagates and does not get swallowed.
Here’s the compact pattern:
@OptIn(ExperimentalCoroutinesApi::class)
@Test
fun flow_is_deterministic() = runTest {
    val testDispatcher = StandardTestDispatcher(testScheduler)
    val repo = Repo(
        svc = FakeSvc(),
        dispatchers = object : AppDispatchers {
            override val io = testDispatcher
            override val default = testDispatcher
        }
    )
    val job = launch { assertEquals("A" to "B", repo.load()) }
    advanceUntilIdle()          // run coroutines deterministically
    assertTrue(job.isCompleted) // also test job.cancel() + advanceUntilIdle()
}
If you’re testing a Flow, use Turbine instead of assertEquals:
flow.test {
    assertEquals("A", awaitItem())
    cancelAndIgnoreRemainingEvents()
}
This pattern works consistently in CI/CD because it avoids timing flakiness, doesn’t rely on real delays, and enforces proper structured concurrency.
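For the “assert cancellation” half of the pattern, a small sketch (the `try`/`catch` stands in for real repository work) that fails the test if `CancellationException` is ever swallowed:

```kotlin
import kotlin.test.Test
import kotlin.test.assertTrue
import kotlinx.coroutines.*
import kotlinx.coroutines.test.*

@OptIn(ExperimentalCoroutinesApi::class)
@Test
fun cancellation_propagates() = runTest {
    var reachedWorker = false
    val job = launch {
        try {
            delay(10_000) // stands in for slow repo work (virtual time)
        } catch (e: CancellationException) {
            reachedWorker = true
            throw e // rethrow: never swallow cancellation
        }
    }
    advanceTimeBy(100) // let the coroutine start and suspend
    job.cancelAndJoin()
    assertTrue(reachedWorker)
}
```

A `catch (e: Exception)` in production code that forgets to rethrow would make this test fail, which is exactly the regression you want CI to catch.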
3. A team just deployed their first Kotlin microservice and it’s using 3x more memory than expected. What’s your diagnostic checklist, and what are the most likely culprits?
If a new Kotlin microservice suddenly uses three times more memory than expected, I start by checking whether it’s really a Kotlin issue or just how the JVM behaves under load. The first thing is to see what kind of memory grows.
If the heap is flat but RSS keeps rising, it’s usually thread stacks or direct buffers. Each thread costs around 1-2 MB, and Dispatchers.IO can easily spin up hundreds under blocking I/O. Limit it:
val io = Dispatchers.IO.limitedParallelism(64)
❗ And make sure you’re not using raw threads where coroutines would do. That’s a common “Java habit” that quietly kills memory efficiency.
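A hedged sketch of what that replacement looks like (`fetchBlocking` is a stand-in for real blocking I/O): instead of spawning one ~1 MB-stack thread per task, the blocking calls share a bounded slice of `Dispatchers.IO`:

```kotlin
import kotlinx.coroutines.*

// Stand-in for a real blocking call (JDBC, legacy HTTP client, ...).
fun fetchBlocking(id: Int): String {
    Thread.sleep(5)
    return "item-$id"
}

// At most 64 threads ever busy with this work, no matter how many tasks arrive.
@OptIn(ExperimentalCoroutinesApi::class)
val boundedIo = Dispatchers.IO.limitedParallelism(64)

suspend fun fetchAll(ids: List<Int>): List<String> = coroutineScope {
    ids.map { id ->
        async(boundedIo) { fetchBlocking(id) } // queued, not thread-per-task
    }.awaitAll()
}
```

Ten thousand concurrent calls to `fetchAll` queue cheaply as coroutines; the thread-per-task version would cost gigabytes of stack before doing any work.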
Next, if the heap itself grows, look for unbounded coroutines or collections: things like Channel.UNLIMITED, flow.buffer() without limits, or caches that never evict. Backpressure fixes most of these:
flow.buffer(256, onBufferOverflow = BufferOverflow.DROP_OLDEST)
Then check the HTTP layer. Frameworks like Ktor with CIO or Netty may buffer entire requests in memory. Stream instead of aggregating:
call.receiveChannel().copyAndClose(File("upload.bin").writeChannel())
On the JVM side, always set explicit container limits so it doesn’t grab all available RAM:
-XX:MaxRAMPercentage=60 -XX:MaxMetaspaceSize=128m -XX:MaxDirectMemorySize=128m
Finally, profile before guessing. I usually attach async-profiler or JDK Flight Recorder to see where allocations come from, and combine that with `jcmd <pid> VM.native_memory summary` for off-heap diagnostics.
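One wrinkle worth flagging (assuming a stock HotSpot JVM): `jcmd` can only report native memory if tracking was enabled at startup, so the flag has to be in place before the service misbehaves:

```shell
# Enable Native Memory Tracking at JVM startup (adds a small overhead):
java -XX:NativeMemoryTracking=summary -jar service.jar

# Later, against the running PID, break RSS down by category
# (thread stacks, metaspace, direct buffers, GC structures, ...):
jcmd <pid> VM.native_memory summary
```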
In short: use real coroutines, not threads; set memory caps; limit dispatcher parallelism; bound every buffer; and verify with profiling. That’s how you fix 90% of “why is our Kotlin service eating all the RAM?” cases.