
Kubernetes Made Simple: A Guide for JVM Developers

This article was written by an external contributor.

Michael Nyamande

A digital product manager by day, Michael is a tech enthusiast who is always tinkering with different technologies. His interests include web and mobile frameworks, NoCode development, and blockchain development.


Kubernetes is a container orchestration system for deploying, scaling, and managing containerized applications. If you build services on the Java Virtual Machine (JVM), chances are your microservices run on Kubernetes; it has become the de facto standard for running containerized microservices at scale. However, Kubernetes is famously complex, with many new concepts (Pods, Deployments, Services, etc.) to master, and thus has a steep learning curve.

This tutorial helps ease that complexity for JVM developers. It focuses on what you need to ship a Kotlin or Java Spring Boot app to a cluster, step-by-step, with simple explanations and runnable examples.

You’ll learn the basics of Kubernetes by deploying a Kotlin Spring Boot application onto a Kubernetes cluster. You’ll also learn what Deployments and Services are, how to manage configuration using ConfigMaps and Secrets, and best practices for running JVM applications on Kubernetes.


Prerequisites
Before diving in, make sure you have the following:

  • Docker: Install and run it locally. You’ll use Docker to build a container image of your app.
  • Kubernetes: Install a Kubernetes environment. This tutorial uses Minikube, a local single-node cluster, together with the kubectl CLI for interacting with the cluster. You can download Minikube from its official site, and it comes bundled with kubectl.
  • Docker registry: Create a Docker Hub account (or use another registry) to push and pull your image. You can also use Minikube’s local Docker registry.

Set Up the Sample Kotlin Spring Boot App (Optional)

You can use an existing Kotlin Spring Boot application for this tutorial, or create a new project with Spring Initializr to follow along with the exact code used here. If you’re using an existing Spring Boot application, jump directly to the next section.

Select Kotlin as your language and Java 21 as your runtime. Make sure to add Spring Web, Spring Data JPA, and H2 as dependencies. You’ll use Spring Web to create REST endpoints, Spring Data JPA to connect to a PostgreSQL database, and H2 (an in-memory database) to test the database logic locally.

After creating and downloading the project, locate your main application file. If you used Spring Initializr, the file will be named after your application with Application.kt appended (for example, a project named Demo will have a file called DemoApplication.kt). Add the following code to create a @RestController that returns Hello World, which will let you verify the deployment is working:

import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController

@SpringBootApplication
class DemoApplication

fun main(args: Array<String>) {
    runApplication<DemoApplication>(*args)
}

@RestController
class HelloController {
    @GetMapping("/hello")
    fun hello(): String = "Hello World"
}

The entire Spring Boot app and REST controller fit in just a few lines, thanks to Kotlin features like type inference and single-expression functions.

Containerize a JVM App Using Docker

To deploy your application to Kubernetes, you first need to package it into a container image using Docker. Create a Dockerfile in the project root:

FROM openjdk:21-jdk-slim

WORKDIR /app

# Copy the built JAR file
COPY build/libs/*-SNAPSHOT.jar app.jar

# Expose port 8080
EXPOSE 8080

# Run the application
ENTRYPOINT ["java", "-jar", "app.jar"]

This Dockerfile uses a lightweight Java 21 base image, copies in the built JAR file, and runs it. Kotlin and Java interoperability means the Spring Boot JAR runs just like any Java app in the container.

To build the image, first create the JAR file with gradle build or mvn clean package, depending on which build tool you’re using. If using Maven, update the Dockerfile to use target/*.jar instead of build/libs/*-SNAPSHOT.jar.

After that, build the image:

docker build -t kotlin-app:latest .

Before you can push the image to Docker Hub, you need to execute this command to log in:

docker login

Note that you may be prompted to enter your Docker Hub credentials to complete the login step.

Next, push the image to Docker Hub or another registry so your Kubernetes cluster can access it:

docker tag kotlin-app:latest YOUR_DOCKERHUB_USER/kotlin-app:latest
docker push YOUR_DOCKERHUB_USER/kotlin-app:latest

Deploy the Application to a Kubernetes Cluster

To run your application on Kubernetes, you need to tell Kubernetes how to configure and run it. You do this using manifest files, which are typically written in YAML. These files declaratively define the desired state of your application in the cluster. For a basic deployment, you need two key Kubernetes objects: a Deployment manifest and a Service manifest.

Add the Deployment Manifest

A Pod is the smallest deployable unit in Kubernetes and runs your container(s). A Deployment manages a set of replicated Pods and handles rolling updates: it ensures your specified number of Pods stays running and updates them safely without downtime.

Create a file named k8s/deployment.yaml that defines your Deployment so that Kubernetes can run the application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kotlin-app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kotlin-app
  template:
    metadata:
      labels:
        app: kotlin-app
    spec:
      containers:
      - name: kotlin-k8s-app
        image: <your-username>/kotlin-app:latest
        ports:
        - containerPort: 8080

The manifest above makes these declarations:

  • kind: Deployment specifies the type of object.
  • spec.replicas: 1 tells Kubernetes how many instances of the application you want running.
  • spec.selector.matchLabels is how the Deployment knows which Pods to manage. It looks for Pods with the label app: kotlin-app.
  • spec.template is the blueprint for the Pods. It defines the container(s) to run inside the Pod.
  • image (under spec.template.spec.containers) specifies the Docker image to pull.
  • containerPort informs Kubernetes that your application inside the container listens on port 8080.

Include the Service Manifest

While a Deployment ensures your Pods are running, those Pods are ephemeral; each time they restart, they get a new internal IP address. A Service solves this by acting as a stable entry point with a fixed IP and DNS name, automatically routing traffic to the Pods identified by its label selector. This guarantees that even if Pods restart or change IPs, traffic still reaches the intended application.

Create a file named service.yaml in the k8s folder:

apiVersion: v1
kind: Service
metadata:
  name: kotlin-app-service
spec:
  selector:
    app: kotlin-app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: NodePort

The manifest defines:

  • kind: Service specifies the object type.
  • spec.selector must match the labels of the Pods (app: kotlin-app). This is how the Service knows where to send traffic.
  • spec.ports maps the Service’s port (port: 8080) to the container’s port (targetPort: 8080).
  • spec.type: NodePort exposes the application on a static port on each node in the cluster, making it accessible for local development with Minikube. In a cloud environment, you typically use a LoadBalancer.
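By default, Kubernetes picks the node port at random from the 30000–32767 NodePort range. If you want a predictable port for local testing, you can pin it with the nodePort field; the value 30000 below is just an example:

```yaml
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30000   # must fall within the cluster's NodePort range (30000-32767 by default)
  type: NodePort
```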

Deploy to a Cluster Using Minikube

To deploy this to a cluster, run Minikube with minikube start and apply the manifests using the following commands:

kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml

After applying, you can verify that everything is running using kubectl get pods. You should then get a result like this:

NAME                                    READY   STATUS    RESTARTS      AGE
kotlin-app-deployment-744476956-bfwg4   1/1     Running   0             20s

To access your application, run minikube service kotlin-app-service. This command finds the service in Minikube and opens a URL in your host browser via port forwarding. The output shows an IP and port (e.g., http://192.168.49.2:30000). Visiting http://<minikube-ip>:30000/hello should call your Spring app and return the Hello World message.

Extend the Kotlin App with ConfigMap

Hard-coding configuration in images forces rebuilds for simple changes and risks exposing sensitive data. Kubernetes provides ConfigMaps for non-sensitive configuration and Secrets for sensitive data, like passwords.

To demonstrate ConfigMaps, replace the hard-coded greeting with one that can be set through a configuration.

To do this, update the controller to read the message from an external configuration property:

import org.springframework.beans.factory.annotation.Value

@RestController
class HelloController {
    @Value("\${greeting.message:Hello}")
    lateinit var greetingMsg: String

    @GetMapping("/hello")
    fun hello(): String = greetingMsg
}

This code snippet declares a variable greetingMsg bound to the Spring property greeting.message, falling back to the default "Hello" if the property isn’t set anywhere.
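Note that greeting.message is a Spring property name, not an environment variable. Thanks to Spring Boot’s relaxed binding, though, you could also supply it as an environment variable by uppercasing the name and replacing dots with underscores, a transformation you can sketch in the shell:

```shell
# Derive the environment-variable form of a Spring property name
# (relaxed binding: uppercase, dots become underscores)
echo "greeting.message" | tr 'a-z.' 'A-Z_'
# GREETING_MESSAGE
```

Setting GREETING_MESSAGE in the container’s env section would work, but the ConfigMap approach below keeps the configuration in a file instead.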

Now, create a configmap.yaml file in the k8s folder; this sets the greeting configuration so you can change it without rebuilding the image:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kotlin-app-config
data:
  application.properties: |
    greeting.message=Hello from a ConfigMap!

To use this ConfigMap, you need to mount it as a volume into the Pod. Mounted files keep configuration values out of the container’s environment (where they can leak via process inspection), and Kubernetes updates the files in place when the ConfigMap changes. Note that Spring Boot reads application.properties at startup, so a Pod restart is still needed to pick up new values.

Additionally, ConfigMaps can store larger configuration files and support multiple configuration formats.

Update your k8s/deployment.yaml so that it uses the new ConfigMap that you defined earlier:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kotlin-app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kotlin-app
  template:
    metadata:
      labels:
        app: kotlin-app
    spec:
      containers:
      - name: kotlin-k8s-app
        image: <your-username>/kotlin-app:v2
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: config-volume
          mountPath: /app/config
      volumes:
      - name: config-volume
        configMap:
          name: kotlin-app-config

This manifest adds a volumes section that defines a volume named config-volume, which sources its data from the kotlin-app-config ConfigMap. It also adds a volumeMounts entry to the container specification, mounting this volume at /app/config. Because the container’s working directory is /app, Spring Boot automatically detects and loads the application.properties file from its default ./config search location, making it easy to manage configuration through Kubernetes.

The manifest also points to a new image tag. Create that image by rebuilding the application, building a Docker image with an updated tag (e.g., v2), and pushing it to your registry so Kubernetes can pull the new version:

# 1) Rebuild the Kotlin app 
./gradlew clean build            # Gradle
# or
mvn clean package                # Maven

# 2) Build a new Docker image with a new tag (v2)
docker build -t <your-username>/kotlin-app:v2 .

# 3) Push the image so the cluster can pull it (skip if using Minikube's Docker daemon)
docker push <your-username>/kotlin-app:v2

After pushing the new image, apply the new configmap.yaml and the updated deployment.yaml:

kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/deployment.yaml

Connect to a Database

When your application needs to store and retrieve data, such as user accounts or business records, you need to manage persistent storage alongside your deployments. Kubernetes lets you run databases like PostgreSQL as managed Deployments, using persistent volumes for data durability and Secrets for credentials.

Let’s walk through deploying PostgreSQL and connecting it to your application.

To keep your credentials out of the container image and enable safe injection into Pods, you need to define a Secret. A Kubernetes Secret is like a ConfigMap but intended for confidential information (passwords, tokens). Create k8s/postgres-secret.yaml to store the database credentials:

apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
type: Opaque
stringData:
  POSTGRES_USER: "postgres"
  POSTGRES_PASSWORD: "mysecretpassword"
  POSTGRES_DB: "greetingsdb"

Note: stringData is used for convenience; Kubernetes Base64-encodes these values and stores them under data. Keep in mind that Base64 is an encoding, not encryption.
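You can reproduce the encoding yourself to see exactly what Kubernetes stores (and decode it back with base64 -d, or read it from the cluster with kubectl get secret postgres-secret -o yaml):

```shell
# Encode a secret value the way Kubernetes stores it under `data`
printf '%s' "mysecretpassword" | base64
# bXlzZWNyZXRwYXNzd29yZA==
```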

Create k8s/postgres.yaml to run PostgreSQL and expose it with a stable DNS name:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:15
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: POSTGRES_DB
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: POSTGRES_USER
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: POSTGRES_PASSWORD
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: postgres-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
  type: ClusterIP

This manifest does two things:

  • Creates a Service named postgres-service so your application can connect to the database using a stable DNS name.
  • Creates a Deployment that runs PostgreSQL, using the Secret for the password. It mounts /var/lib/postgresql/data to a volume (here, an emptyDir for simplicity). In production, you’d use a StatefulSet and a PersistentVolumeClaim to ensure data persists across Pod restarts and node failures.
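As a sketch of that production-grade alternative, you could replace the emptyDir with a PersistentVolumeClaim (the names here are illustrative) and reference it from the Pod spec:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# In the Deployment's volumes section, reference the claim instead of emptyDir:
# volumes:
# - name: postgres-storage
#   persistentVolumeClaim:
#     claimName: postgres-pvc
```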

Let’s also update the Kotlin app to connect to the PostgreSQL database using Spring Data JPA. In this example, the app returns a custom greeting with the user’s details pulled from the database.

To connect to a PostgreSQL database through JPA, you need the PostgreSQL Java Database Connectivity (JDBC) driver, which lets your application communicate with the database running in Kubernetes. Add this to the dependencies block in your build.gradle.kts (or the equivalent to pom.xml if you’re using Maven) so it’s available at runtime:

implementation("org.postgresql:postgresql")

For local development, the application uses H2 (which you added earlier as a dependency) as a lightweight option for testing without having to spin up a full PostgreSQL instance. The application interacts only with PostgreSQL when deployed to the Kubernetes cluster.

Create a file named User.kt to model the users table, along with a Spring Data JPA repository for database lookups:

import jakarta.persistence.*
import org.springframework.data.jpa.repository.JpaRepository
import org.springframework.stereotype.Repository

@Entity
@Table(name = "users")
class User(
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    val id: Long = 0,

    @Column(nullable = false)
    val name: String = "",

    @Column(nullable = false)
    val email: String = ""
)

@Repository
interface UserRepository : JpaRepository<User, Long> {
    fun findByName(name: String): User?
}

This snippet uses a Kotlin class to define a User entity. With Kotlin’s primary constructor syntax, you can declare the properties and initialize the object in a single definition, eliminating the boilerplate getters and setters required in Java entities. The snippet also defines a UserRepository that handles retrieving user details from the database.

Update the controller to add a GetMapping that returns dynamic greetings based on the username:

import org.springframework.web.bind.annotation.PathVariable

@RestController
class HelloController(private val userRepository: UserRepository) {

    @GetMapping("/hello")
    fun hello(): String = "Hello World"

    @GetMapping("/hello/{name}")
    fun getGreeting(@PathVariable name: String): String =
        userRepository.findByName(name)
            ?.let { "Hello ${it.name}! Your email is ${it.email}." }
            ?: "Hello $name! (User not found in database)"
}

This code injects the UserRepository into the Controller, allowing you to use it in the getGreeting method. This method returns the user’s name, along with their email, if the user exists in the database; otherwise, it outputs that the user wasn’t found. It uses Kotlin null safety features to produce a response without unsafe casts or a NullPointerException.

Next, update the src/main/resources/application.properties file with the PostgreSQL configuration:

spring.jpa.hibernate.ddl-auto=update
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect

The properties file configures Spring Data JPA. The spring.jpa.hibernate.ddl-auto=update property enables automatic schema updates based on your @Entity definitions, ensuring the users table is created at runtime if it doesn’t exist. The spring.jpa.properties.hibernate.dialect property tells Hibernate to generate PostgreSQL-specific SQL.
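For reference, the schema Hibernate generates for the User entity on PostgreSQL looks roughly like this (the exact DDL can vary by Hibernate version):

```sql
CREATE TABLE users (
    id BIGSERIAL PRIMARY KEY,   -- GenerationType.IDENTITY maps to an identity/serial column
    name VARCHAR(255) NOT NULL,
    email VARCHAR(255) NOT NULL
);
```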

To use the updated code, rebuild the application and Docker image with the changes, and update the Deployment to include the new environment variables as Secrets:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kotlin-app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kotlin-app
  template:
    metadata:
      labels:
        app: kotlin-app
    spec:
      containers:
      - name: kotlin-k8s-app
        image: <your-username>/kotlin-app:v3
        ports:
        - containerPort: 8080
        env:
        - name: SPRING_DATASOURCE_URL
          value: "jdbc:postgresql://postgres-service:5432/greetingsdb"
        - name: SPRING_DATASOURCE_USERNAME
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: POSTGRES_USER
        - name: SPRING_DATASOURCE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: POSTGRES_PASSWORD

The configuration now contains an env section that supplies the database URL, username, and password (the latter two from the postgres-secret Secret). Spring Boot maps these environment variables to its spring.datasource.* properties and uses them to connect to the database.

Apply the new manifests using this command:

kubectl apply -f k8s/postgres-secret.yaml
kubectl apply -f k8s/postgres.yaml
kubectl apply -f k8s/deployment.yaml

You can use minikube service kotlin-app-service to expose the application using an external IP address and navigate to <url>/hello/<username> to test. If the username doesn’t exist in the User table of the PostgreSQL database, you’ll get this output:

Hello <username>! (User not found in database)
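To see the success path, the users table needs at least one row. A minimal way to seed it, assuming the User and UserRepository definitions above, is a hypothetical CommandLineRunner bean added to the application:

```kotlin
import org.springframework.boot.CommandLineRunner
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

// Illustrative seeder: inserts one demo user at startup if the table is empty.
@Configuration
class DemoDataSeeder {
    @Bean
    fun seedUsers(userRepository: UserRepository) = CommandLineRunner {
        if (userRepository.count() == 0L) {
            userRepository.save(User(name = "mike", email = "mike@example.com"))
        }
    }
}
```

After rebuilding and redeploying, visiting /hello/mike should then return the greeting with the seeded email.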

Dynamic Routing Using Ingress

Sometimes you might want to roll out new features to a subset of users, for example during beta testing, to see how they behave before a full production release. To do this, you can route traffic from your Kubernetes cluster to different Services depending on certain rules. This is done via an Ingress, which sits at the edge of the cluster and routes HTTP traffic to Services based on rules like host, path, or headers.

In this example, you’ll route normal traffic to v2 of the application and route all traffic with a special header to the new v3 image. This allows you to test a new database feature on a subset of users or clients before a full, stable rollout.

To enable the NGINX Ingress controller in Minikube:

minikube addons enable ingress

Create a new file containing the Deployment and Service for the v2 version of the app, and save it as k8s/v2-app.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kotlin-app-v2-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kotlin-app-v2
  template:
    metadata:
      labels:
        app: kotlin-app-v2
    spec:
      containers:
        - name: kotlin-k8s-app-v2
          image: <your-username>/kotlin-app:v2
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: config-volume
              mountPath: /app/config
      volumes:
        - name: config-volume
          configMap:
            name: kotlin-app-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kotlin-app-config
data:
  application.properties: |
    greeting.message=Hello from a v2 stable app!
---
apiVersion: v1
kind: Service
metadata:
  name: kotlin-app-v2-service
spec:
  selector:
    app: kotlin-app-v2
  ports:
    - port: 8080
      targetPort: 8080
  type: ClusterIP

The example above is similar to the Deployment and Service you set up earlier, except the Service type is now ClusterIP instead of NodePort. ClusterIP only exposes the Service within the cluster, making it accessible to other Pods but not directly from outside the cluster. In contrast, NodePort exposes the Service on a static port on each node’s IP, allowing external access. Since the Ingress handles external traffic routing, you use ClusterIP for internal communication between the Ingress and your Services.

With Services in place, you can add the Ingress resources. Create a new ingress file to receive traffic and direct it to the v2 version of your service, and save it as k8s/ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kotlin-app
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kotlin-app-v2-service
            port:
              number: 8080

To direct traffic to the v3 version of the application, you can utilize the canary annotations of the ingress controller. Create another ingress definition file and save it to k8s/ingress-canary.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kotlin-app-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "X-Client-Version"
    nginx.ingress.kubernetes.io/canary-by-header-value: "v3"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kotlin-app-service
            port:
              number: 8080

The canary Ingress above uses NGINX’s canary annotations to implement header-based routing. When a request includes the header X-Client-Version: v3, the Ingress controller routes it to the kotlin-app-service (your v3 Pods). All other requests without this header go to kotlin-app-v2-service (your stable v2 Pods). This pattern lets you safely test new features in production with a subset of users, such as internal testers or beta users, while the majority of traffic continues to hit the stable version.

The canary: "true" annotation tells NGINX this Ingress is a canary rule, and the canary-by-header annotations define the matching logic.
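Header-based routing is just one option; the same NGINX canary mechanism also supports percentage-based splits. For example, replacing the header annotations with a weight sends roughly 10% of requests to the canary:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"   # ~10% of traffic goes to the canary Service
```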

Apply the new manifests using the following commands:

kubectl apply -f k8s/v2-app.yaml
kubectl apply -f k8s/ingress.yaml
kubectl apply -f k8s/ingress-canary.yaml

To test this out, run minikube tunnel to expose your Minikube cluster on localhost. To view the application, navigate to http://localhost/hello.

You can verify the routing behavior with curl. A request without the header goes to v2:

curl http://127.0.0.1/hello

This returns “Hello from a v2 stable app!”.

Running the same request with the X-Client-Version header returns a response from v3 of the application:

$ curl -H "X-Client-Version:v3" http://127.0.0.1/hello
Hello World

You can also run the same request against /hello/{name} to verify that it routes to v3 of the application:

curl -H "X-Client-Version:v3" http://127.0.0.1/hello/mike
Hello mike! (User not found in database)

You can find the tutorial’s full codebase on this GitHub repository. Switch between different branches to access different parts of the tutorial.

Follow These Best Practices

When deploying JVM-based microservices on Kubernetes, keep these practices in mind:

Configure Health Checks (Liveness and Readiness Probes)

Kubernetes needs to know whether your application is healthy and ready to serve traffic. Health checks let Kubernetes direct traffic to healthy Pods and restart failing ones. Spring Boot Actuator provides /actuator/health/liveness and /actuator/health/readiness endpoints. Kubernetes sends HTTP requests to these endpoints; failed liveness checks trigger container restarts, while failed readiness checks remove the Pod from the Service’s endpoints until it recovers.
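As a sketch, assuming you’ve added the spring-boot-starter-actuator dependency and enabled the probe endpoints, the container spec in your Deployment could declare the probes like this (the timing values are illustrative):

```yaml
containers:
- name: kotlin-k8s-app
  image: <your-username>/kotlin-app:v3
  ports:
  - containerPort: 8080
  livenessProbe:
    httpGet:
      path: /actuator/health/liveness
      port: 8080
    initialDelaySeconds: 30   # give the JVM and Spring context time to start
    periodSeconds: 10
  readinessProbe:
    httpGet:
      path: /actuator/health/readiness
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 5
```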

Use ConfigMap and Secret Manifests

Do not hard-code environment-specific or sensitive data into your image. As you learned in this tutorial, it’s best to store non-sensitive configs (like feature flags, greeting messages) in ConfigMaps and more confidential data (passwords, tokens) in Secrets. This makes it easy to change settings without rebuilding containers.

Set CPU/Memory Resource Limits

Kubernetes allows you to set memory and CPU requests and limits. This prevents your app from consuming unbounded resources and impacting other Pods. Without limits, a runaway JVM can starve the node or be OOM-killed unexpectedly, so proper limits ensure cluster stability and cost control.
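In the container spec, this looks like the following; the values shown are illustrative starting points for a small Spring Boot service, so tune them against your app’s actual footprint:

```yaml
resources:
  requests:        # what the scheduler reserves for the Pod
    memory: "512Mi"
    cpu: "250m"
  limits:          # hard caps enforced at runtime
    memory: "1Gi"
    cpu: "1"
```

Modern JVMs are container-aware and size their default heap from the container’s memory limit, so setting a sensible memory limit also right-sizes the JVM.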

Conclusion

This tutorial showed you how to containerize and deploy a Kotlin Spring Boot application on Kubernetes. Along the way, you learned important Kubernetes fundamentals, like Pods, Deployments, Services, ConfigMaps, and Secrets.

You also saw why Kotlin is a good choice for server-side development, especially with Spring, thanks to features like null safety, data classes, and coroutines. If you use Java, you can introduce Kotlin gradually into existing Spring projects without rewriting your stack. Explore more about Kotlin for server-side development on its official landing page.
