Kotlin
A concise multiplatform language developed by JetBrains
KotlinConf’26 Speakers: In Conversation With Lena Reinhard
“Over the last three to five years, many of the promises that drew people to tech have been called into question.”
The tech industry has long promised opportunity, growth, and the chance to build things that reach millions of people. Today, many of those assumptions are being questioned. At KotlinConf’26, Lena Reinhard, leadership coach, former VP of Engineering, and the Day 2 keynote speaker, will explore these shifts in her talk We Were Meant to Be.
Ahead of the conference, we spoke with Lena about the uncertainty many people in tech are feeling today, the realities behind the productivity debate in the age of AI, and what leaders can do to support their teams through change.
As she prepares for KotlinConf’26, Lena is documenting the process of shaping this keynote in a public work log, sharing the ideas and resources influencing her thinking. You can follow her progress here: The Making of: A Keynote on Tech, Humanity, Crisis, and the Future.
Q: In your keynote We Were Meant to Be, you touch on uncertainty, job insecurity, and how the tech industry is changing. What questions or experiences led you to create this talk, and what do you hope the audience sits with after hearing it?
Lena Reinhard: This is probably the question where I have the longest answer, because there’s a lot of history to this. And that’s also why I’m so excited to talk about it at KotlinConf in May.
My career is over 20 years old now. I actually started in finance, and very early on, the industry went through the 2008 financial crisis. So that was a weird way to start a career.
I’ve now been in tech for 16 years, and during that time I’ve seen many shifts in how the industry works. In the early 2010s, I worked a lot in open source. That’s really how I started my tech career, working with communities like CouchDB and some in the JavaScript ecosystem. Later, I shifted more into working with companies in Silicon Valley while still staying close to open source.
Over the last few years, I’ve worked more with leaders across different companies, from startups to large corporations to NGOs all around the world. That means my lens on the industry has changed over time, depending on who I’m working with and which aspects of the ecosystem I’m seeing.

So throughout my career, I’ve spent a lot of time thinking about how technology works and what responsibility we have as people building it. In 2015, I gave the keynote A Talk About Nothing, which encapsulated a lot of my thoughts at that point in time and the question of our role as people building software and the responsibilities that come with that.
“The work we do has a lot of leverage, and the question is how we use that in a way that benefits not only us but also the people who use technology.”
Over the last four or five years, especially since generative AI really took off around 2022, I’ve noticed a lot of uncertainty among industry professionals.
People entered tech for many reasons: building products that reach millions of users, the opportunity for upward mobility, or simply the ability to experiment and create.
“Over the last three to five years, many of those things have been called into question. Software engineers, but also managers, are asking themselves, each other, and sometimes me, how career growth will work, or whether those careers will even exist in the same way. That uncertainty has only been increasing, and the way the discourse is playing out across the media, from podcasts and social media to newspapers and ‘thought leaders,’ isn’t helping.”
At this point, I think people who claim to have definitive answers about what AI will mean for the global economy or for the tech industry, let alone for individuals and our careers, simply don’t have them. Those answers don’t exist at this point.
There are many hypotheses, and it’s important to stay open to them. But it also means that many of the promises that originally motivated people to enter this field are no longer as stable as they once felt.
Even the way people tinker with technology has changed. I know many programmers who used to build countless side projects in their spare time, and even that culture has shifted.
All of those questions and that uncertainty from the past few years ultimately led to this talk.
Q: You’ve written a lot about how to understand and improve productivity in engineering teams. (For example, your article How to Understand, Measure, and Improve Productivity in Your Engineering Team.) With AI becoming more present in our daily work, how do you think our ideas of productivity are shifting, or need to shift, for individuals and teams?
Lena: It’s a great question. And I think the two are very intertwined.
“One thing I often think about is that engineering productivity, and the discourse around it, has been a hot mess for a very long time.”
It’s always been a mix of the work people are doing, how meaningful that work is, and how productive that work appears from the outside.
For example, does your executive team think that you’re actually getting stuff done? And those can be very different things that don’t necessarily overlap.
So productivity has always been difficult for teams. I also don’t know of a company that has really figured it out well. It’s always somewhat ambiguous.
Now, with generative AI and coding assistants entering the picture, the conversation has become even more complicated.
“One big issue is that a lot of the current AI discussion is surrounded by hype and marketing messages that aren’t really backed up by solid data or real-world experience.”
At the same time, executives and senior leaders are often driven by pressure from their boards and investors. At this point, many leaders feel they can’t say, “We’re not doing AI,” because their investors will worry the company is falling behind.
So there are a lot of really messy incentives around this that engineering teams get caught up in.
Navigating that debate is difficult right now. It requires open conversations internally – with managers and teammates.

My approach right now is that it’s important to talk about what productivity actually means and how it relates to the company’s goals. I recently wrote more about this in my article What AI Can (and Can’t) Do for Your Engineering Team (Beyond the Hype), where I look at some of the current limitations of AI and where it can actually be useful for teams.
“The goal can’t simply be to get as much stuff done and move as fast as possible. If what you’re working on doesn’t actually help the company achieve its goals, then being fast doesn’t get you anywhere.”
So the conversation should start with: what are our goals, how do we measure progress toward them, and how can AI actually help us get there?
For some teams, AI can be useful for experimentation. For others, it can help with debugging or act as a coding assistant in everyday workflows.
But the key is cutting through the hype and figuring out what is actually useful for your team and for the problems you’re solving for your users.
One thing that concerns me is that AI is already increasing the pressure on teams to produce more output.
I’m seeing discussions again where people think lines of code generated by AI are a useful productivity metric, which they are not. That’s a debate I thought we had already moved past about ten years ago.
At the same time, what I’m hearing from many teams is that people are simply working much more. Instead of working less, they’re working more hours because, in addition to their regular job, they’re also expected to figure out how to integrate AI into their work, and the scrutiny on “productivity” – which most commonly means output, not outcomes – is intense.
So my advice right now is to cut through the noise as much as possible. Don’t fall for the hype around just running as fast as possible. Focus on the goals: what your team is responsible for, how that connects to the company’s goals, and what meaningful progress and impact actually look like.
I talk about goals until the cows come home, because that’s what teams should ultimately be measured against.
Moving fast only matters if you’re moving in the right direction.
“One way I often describe generative AI tools is that they’re like an overly eager junior engineer who’s extremely confident.”
That kind of person can be great to work with, but they also require constant monitoring and guidance. It’s going to tell you stuff that’s just not true – not out of malice, of course; it doesn’t have a world model, and it’s important that we don’t anthropomorphize these tools. And it’s going to say it in a way that makes you think, “Oh yeah, that sounds great,” when actually it’s just nonsense. That creates a lot of overhead and context switching. The mental load for teams right now is just much higher than it used to be.
That doesn’t mean the tools are useless. But they require a lot of handholding to produce useful results. They’re currently most useful for people with significant software engineering experience – people who know what good software engineering looks like, how it works, and who can then use these tools well and productively. Where it gets tricky is that both the process of generation and the output look very good and convincing to the untrained eye. That’s where unhelpful discussions come in, like CEOs saying, “I vibe-coded this in two hours – why does our engineering team need this many people, and why are they producing so little? Also, I put my thing live just now.” That’s a tough position to be in.
Q: In your talk description, you say that many of the promises of tech careers have crumbled. From what you’re seeing and hearing, what still draws people to tech today – and how do you think that motivation might evolve?
Lena: Honestly, right now I find that question difficult to answer.
When I look at the people I talk to, and also at discussions in online forums for people who are just entering the field or participating in different communities, my impression is that many people are still drawn by the promises the industry used to offer – things like career progression and stable jobs, as well as building things, being creative, and solving problems.
Those ideas haven’t completely disappeared.
But at the same time, people are much more uncertain about how true those promises still are, and how much they can – and want to – bet their ability to make a decent living on them. There’s a lot more doubt about whether those careers will still exist in the same way, or whether people should pursue something else.
So that uncertainty that’s affecting the entire industry is visible there as well.
And the noise-to-signal ratio is incredibly high. As we briefly touched on earlier, there’s also an ongoing debate – on social media, in industry newsletters, at conferences, and elsewhere – about whether software engineering jobs will still exist in the future. Those debates don’t really help, and again, no one has the answers.
Q: You work closely with leaders and speak a lot about leadership. For example, you explored the topic in your LeadDev talk on what we really mean when we talk about leadership. In periods of change and instability like the ones many teams are facing now, what do you think leaders most often underestimate about how uncertainty affects their teams?
Lena: One big piece is that leaders often have an information advantage.
Managers – and often technical leads and very senior engineers – are typically briefed about changes long before their teams are. They’re involved in discussions about reorganizations before they happen, or in creating a new technical strategy.
So they’re often part of shaping those changes, or at least they know about them well in advance.
“When leaders announce a change to their team, they’ve often already processed it. Mentally, they’ve moved on. But for the team, it’s completely new information.”
People need time to process it. They need time to understand what it actually means for them – how it will affect their day-to-day work, their role, how they get things done, or even what success will look like going forward.
I’ve often worked with leaders who become impatient at that stage. They wonder why people can’t just get on board immediately, or why there are so many questions.
But it’s important to remember that you may be in a very different place simply because you’ve had that information for much longer.
Giving people time and actually sitting down with them, explaining things, and listening to their questions requires effort, but it’s really important. Esther Derby, who started as a programmer and has written great books about agile work and handling change, suggests calling what leaders tend to label “resistance” to change a “response” instead. I wrote about dealing with these kinds of responses here.
Another pattern I see is that some leaders feel they need to have everything completely figured out before they talk to their teams.
But especially right now, there’s so much uncertainty inside companies and across the entire industry that none of us can really control it.
Things are changing quickly: companies are redesigning career frameworks, rethinking productivity measures, and trying to figure out what the future of work even looks like.
As a leader, you don’t always need to have everything figured out.
Sometimes it’s more helpful to simply acknowledge the uncertainty – to say openly that things are chaotic or unclear right now.
That helps address the elephant in the room. It prevents people from feeling like something strange is happening behind the scenes, and it makes it easier to have open conversations.
Because the reality is that no one really has all the answers.
“Leaders often assume that their teams expect certainty from them. But in many cases, what people actually need is openness.”
Being able to say, “I don’t have all the answers, but I’m working through this with you,” is often much more useful.
And empathy matters as well.
Instead of projecting what you think people need, it’s important to sit down with them and understand what they actually need.
Because those two things can be very different.
Lena will explore these ideas in more depth in her keynote at KotlinConf’26.
Don’t miss Lena’s Day 2 keynote.
