Spring Boot / Docker / Kubernetes / EKS

nbodev
6 min read · Jun 28, 2022


1- Introduction

2- Versions

3- Code

4- Docker with JIB or with Dockerfile

5- Kubernetes

6- EKS

7- Interesting links

Introduction

This article illustrates the dockerization (containerization) of a Spring Boot application, its deployment with Kubernetes locally, then its deployment on EKS (Amazon Elastic Kubernetes Service, managed Kubernetes on AWS).

Versions

  • Java 11
  • Spring Boot 2.7.0
  • Kubernetes 1.24.0
  • eksctl 0.101.0

Code

Clone the project then run:

cd springboot-dockerized
mvn clean install

Then run:

java -jar ./target/springboot-dockerized-0.0.1-SNAPSHOT.jar

If you want to start it from your Java IDE, you can run the DemoApplication class.

Assuming the application is running on port 9090, go to http://localhost:9090. Since we expose a REST service (DemoController), you will see some details printed in your browser, which means the application works fine.
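Port 9090 is not Spring Boot's default (8080), so it is presumably set in the application configuration; a minimal application.properties sketch, assuming that is how the repository configures it:

server.port=9090

With this in src/main/resources/application.properties, the embedded server listens on 9090 instead of 8080.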

Docker with JIB or with Dockerfile

JIB

Make sure you can connect to the Docker registry

  • Log in to the Docker registry. By default we log in to Docker Hub (registry.hub.docker.com); note that this registry is defined in the file pom.xml:
docker login

Create and push the Docker image

  • Build and push the image. Run the following in the project root; this will create and push the Docker image to the Docker repository, since we are using jib-maven-plugin (Jib, the Java image builder):
mvn package -P docker
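For reference, the docker Maven profile is defined in pom.xml; the following is a sketch of what such a jib-maven-plugin profile typically looks like (the exact plugin version and image name in the repository may differ):

<profiles>
  <profile>
    <id>docker</id>
    <build>
      <plugins>
        <plugin>
          <groupId>com.google.cloud.tools</groupId>
          <artifactId>jib-maven-plugin</artifactId>
          <version>3.2.1</version>
          <configuration>
            <to>
              <!-- registry.hub.docker.com is the default Docker Hub registry -->
              <image>registry.hub.docker.com/nbodev/springboot-dockerized</image>
            </to>
          </configuration>
          <executions>
            <execution>
              <!-- bind the jib build goal to mvn package -->
              <phase>package</phase>
              <goals>
                <goal>build</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

Note that Jib builds and pushes the image without requiring a local Docker daemon.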

Check the Docker image

  • Pull the Docker image:
docker image pull nbodev/springboot-dockerized
  • Run a Docker container for testing purposes, to validate that the image runs without errors:
docker run -p 9090:9090 --rm nbodev/springboot-dockerized
  • Go to http://localhost:9090. Since we expose a REST service (DemoController), you will see some details printed in your browser.

Dockerfile

Make sure you can connect to the Docker registry

  • Log in to the Docker registry. By default we log in to Docker Hub (registry.hub.docker.com); note that this registry is defined in the file pom.xml:
docker login

Create and push the Docker image

  • Build the image. Run this in the project root folder, where the Dockerfile is located:
docker build -t nbodev/springboot-dockerized .
  • The Dockerfile content is the following:
FROM openjdk:11
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
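As an optional variant (not the repository's Dockerfile), a multi-stage build compiles the jar inside the image, so no local Maven build is needed before docker build; the Maven base image tag below is an assumption:

# Stage 1: build the jar with Maven inside the image
FROM maven:3.8-openjdk-11 AS build
WORKDIR /build
COPY pom.xml .
COPY src ./src
RUN mvn -q package -DskipTests

# Stage 2: keep only the runtime and the jar
FROM openjdk:11
COPY --from=build /build/target/*.jar /app.jar
ENTRYPOINT ["java","-jar","/app.jar"]

The final image contains only the second stage, so the Maven cache and sources are not shipped.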
  • Push the image:
docker push nbodev/springboot-dockerized

Check the Docker image

  • Run a Docker container for testing purposes, to validate that the image runs without errors:
docker run -p 9090:9090 --rm nbodev/springboot-dockerized
  • Go to http://localhost:9090. Since we expose a REST service (DemoController), you will see some details printed in your browser.

Build a Docker image for a specific platform

  • If you build the Docker image on a macOS computer, you may face a platform mismatch when you later deploy the image to AWS.
  • In that case you will see this error in your pod logs: exec format error.
  • To avoid this, make sure you build the image for the target platform. The platform I chose here (linux/arm64) is the one of the AWS node of my future cluster.
  • Run the command docker buildx ls to see the supported platforms. Then, to build the image for the linux/arm64 platform, run:
docker buildx build --platform linux/arm64 -t nbodev/springboot-dockerized .
  • Then push your image:
docker push nbodev/springboot-dockerized
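If you build with Jib instead of a Dockerfile, the target platform can be set in the plugin configuration instead; a sketch (the base image must publish a manifest for that platform, which openjdk:11 does):

<configuration>
  <from>
    <image>openjdk:11</image>
    <platforms>
      <!-- build the image for linux/arm64 -->
      <platform>
        <architecture>arm64</architecture>
        <os>linux</os>
      </platform>
    </platforms>
  </from>
</configuration>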
[Image: the Docker image on Docker Hub]

Kubernetes

  • Now that our image works, let’s move on to a deployment with Kubernetes.

The YAML file

  • The content of the file springboot-k8s.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springboot-app-deployment
  namespace: playground
  labels:
    app: springboot-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: springboot-app
  template:
    metadata:
      labels:
        app: springboot-app
    spec:
      containers:
        - name: springboot-app
          image: nbodev/springboot-dockerized
          ports:
            - containerPort: 9090
---
apiVersion: v1
kind: Service
metadata:
  name: springboot-app-svc
  namespace: playground
spec:
  type: LoadBalancer
  selector:
    app: springboot-app
  ports:
    - protocol: TCP
      port: 8888 # service port
      targetPort: 9090 # container port
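Optionally, health probes could be added so Kubernetes restarts the app if it stops responding; a minimal sketch using a TCP check on the container port (not part of the original manifest), to be placed under the container entry in the Deployment:

# added under spec.template.spec.containers[0]
livenessProbe:
  tcpSocket:
    port: 9090
  initialDelaySeconds: 15
  periodSeconds: 20
readinessProbe:
  tcpSocket:
    port: 9090
  initialDelaySeconds: 5
  periodSeconds: 10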

Prerequisites

  • Create the namespace:
kubectl create namespace playground

Deploy the pod and the service

  • Use the YAML file for this:
kubectl apply -f <path_to>/springboot-k8s.yml

Delete everything

  • Run the following to delete the deployment and the service:
kubectl delete deploy springboot-app-deployment -n playground
kubectl delete svc springboot-app-svc -n playground

Optional

  • You can use more than one replica: change replicas: 1 and set it to 2, for instance; your load balancer is now backed by 2 instances.
  • Then you can delete a pod with kubectl delete pod <pod name here> -n playground. You will see that there is always an instance responding when you hit http://localhost:8888, and Kubernetes will start a new pod for you in order to match the expected replica count.

EKS

Prerequisites

  • Make sure you correctly set up the AWS configuration and credentials files and install eksctl; see the links below.
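For reference, the AWS CLI reads ~/.aws/credentials and ~/.aws/config; a minimal setup looks like this (placeholder values, to be replaced with your own keys):

# ~/.aws/credentials
[default]
aws_access_key_id = <your_access_key_id>
aws_secret_access_key = <your_secret_access_key>

# ~/.aws/config
[default]
region = ap-southeast-1
output = json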

Setup your cluster

  • Run the following to create a cluster named nbodev-eks-test-cluster with one node of type r6g.large in the ap-southeast-1 region. This will take a while; behind the scenes some CloudFormation tasks are running:
eksctl create cluster --name nbodev-eks-test-cluster --region ap-southeast-1 --nodegroup-name my-nodes --node-type r6g.large --nodes 1

Change the local Kubernetes context to the EKS cluster

  • Run the following to list the existing contexts:
kubectl config get-contexts
  • Then switch the Kubernetes context to the newly created one:
kubectl config use-context nbo@nbodev-eks-test-cluster.ap-southeast-1.eksctl.io

Deploy the pod and the service

  • Run:
kubectl apply -f <path_to>/springboot-k8s.yml

Test that all works fine

  • Get the deployed service on EKS:
kubectl get svc -n playground
  • The command above returns the following; note that we have an external IP:
NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP                                                           PORT(S)          AGE
springboot-app-svc   LoadBalancer   10.100.174.197   a5a4417f7748642b5bebf6846XXXXXXXXX.ap-southeast-1.elb.amazonaws.com   8888:31322/TCP   5s
  • Visit the external address on port 8888: http://a5a4417f7748642b5bebf6846XXXXXXXXX.ap-southeast-1.elb.amazonaws.com:8888; you will see the result of the REST service. Note that the EXTERNAL-IP can take a while to become reachable.

Change the nodegroup (optional)

  • You can delete the existing nodegroup without deleting the whole cluster; in short, you delete the workers and provision new ones. One worker is one EC2 instance, the server on which your pods start; when you play with Kubernetes locally you have only one node:
eksctl delete nodegroup --cluster=nbodev-eks-test-cluster --name=my-nodes
  • Then provision the new one:
eksctl create nodegroup --config-file=<path_to>/my-cluster.yaml
  • See below a sample file where we increase the number of workers to 2 and use a larger instance type:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: nbodev-eks-test-cluster
  region: ap-southeast-1
managedNodeGroups:
  - name: my-new-nodes
    labels: { role: workers }
    instanceType: r6g.2xlarge
    desiredCapacity: 2
    volumeSize: 80
    privateNetworking: true

Delete all the resources

  • Delete the cluster; this also deletes the related nodegroup:
eksctl delete cluster --name=nbodev-eks-test-cluster

Interesting links

Spring Boot

Docker and Kubernetes

Docker multi platform image build

AWS

eksctl
