It is too complex for me! I want to make it easier!
Is there a way to install pure Kubernetes but a little bit easier?
We installed Kubernetes. It took four weeks and four plays. I did them in one playbook in my environment, but you can split them into four playbooks if you wish. Still, even if it runs in less than 15 minutes and does the job well, it is too complex for me. I learned the great UNIX principle of KISS (Keep It Simple, Stupid), and I want to keep things as simple as possible. Is there a way to make it simpler?
The modern way of installing pure Kubernetes is called kubeadm. Does it work on IBM Power? Yes, it does! You can use the official documentation to install Kubernetes on IBM Power. It is simple. But I will make it a little more complicated: as usual, I will write an Ansible playbook to deploy Kubernetes on IBM Power using kubeadm.
What do we need?
It's almost the same as last time. We need several servers, but I will use my Ansible controller as a jump box for my future Kubernetes cluster. I will have one Kubernetes controller and two worker nodes. You can reuse the inventory from the last playbook.
My inventory has very few differences:
---
all:
  children:
    apiserver:
      hosts:
        server:
          ansible_host: 10.11.0.2
    workers:
      hosts:
        worker1:
          ansible_host: 10.11.0.11
        worker2:
          ansible_host: 10.11.0.12
  vars:
    ansible_user: root
    ansible_password: youKn0w1t!
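Before going further, you can quickly check that Ansible reaches all three nodes. I assume here that the inventory is saved as inventory.yml; adjust the file name if yours is different:

ansible -i inventory.yml all -m ping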
Variables
This time, I define several variables. Mostly, the version numbers of the software:
vars:
  dest_dir: /usr/local
  arch: ppc64le
  versions:
    containerd: 2.0.4
    runc: 1.2.6
    cni_plugins: 1.3.0
    crictl: 1.32.0
    kube: 1.32.3
    helm: 3.17.3
  pod_cidr: 10.200.0.0/16
  service_cidr: 10.32.0.0/24
  domain: power-devops.cloud
Life changes very quickly. The latest versions are:
containerd: 2.0.5
runc: 1.3.0
crictl: 1.33.0
kubernetes: 1.33.0
You can update the versions and set them as you wish. However, you must check their compatibility from time to time. I assume that going from the versions in my playbook to the latest ones shouldn’t cause any problems, but I haven’t tested it yet.
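If you want to try newer versions without editing the playbook, you can keep the overrides in a separate file and pass it as extra vars. The file name my_versions.yml and the playbook name below are my own placeholders; and because extra vars replace the whole versions dictionary, you must list all of its keys:

# my_versions.yml - example override file, adjust the numbers to what you need
versions:
  containerd: 2.0.5
  runc: 1.3.0
  cni_plugins: 1.3.0
  crictl: 1.33.0
  kube: 1.33.0
  helm: 3.17.3

Then run the playbook with:

ansible-playbook -i inventory.yml -e @my_versions.yml kubeadm-playbook.yml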
Download software
Yes, we must download software. It is up to you how to proceed with the download. If you are in an air-gapped environment, it can be complicated. I download to my Ansible controller node (which is my laptop) and copy the software from it to the target nodes.
- name: Download software on local box
  ansible.builtin.get_url:
    url: "{{ item }}"
    dest: "./"
  delegate_to: localhost
  run_once: true
  loop:
    - "https://github.com/containerd/containerd/releases/download/v{{ versions.containerd }}/containerd-{{ versions.containerd }}-linux-{{ arch }}.tar.gz"
    - https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
    - "https://github.com/opencontainers/runc/releases/download/v{{ versions.runc }}/runc.{{ arch }}"
    - "https://github.com/containernetworking/plugins/releases/download/v{{ versions.cni_plugins }}/cni-plugins-linux-{{ arch }}-v{{ versions.cni_plugins }}.tgz"
    - "https://github.com/kubernetes-sigs/cri-tools/releases/download/v{{ versions.crictl }}/crictl-v{{ versions.crictl }}-linux-{{ arch }}.tar.gz"
    - "https://dl.k8s.io/release/v{{ versions.kube }}/bin/linux/{{ arch }}/kubeadm"
    - "https://dl.k8s.io/release/v{{ versions.kube }}/bin/linux/{{ arch }}/kubelet"
    - "https://dl.k8s.io/release/v{{ versions.kube }}/bin/linux/{{ arch }}/kubectl"
    - "https://raw.githubusercontent.com/kubernetes/release/v0.18.0/cmd/krel/templates/latest/kubelet/kubelet.service"
    - "https://raw.githubusercontent.com/kubernetes/release/v0.18.0/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf"
    - "https://get.helm.sh/helm-v{{ versions.helm }}-linux-{{ arch }}.tar.gz"
    - https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
It is almost the same software we downloaded in the previous series, but there are some changes. Like kubeadm ;-)
Switch off swap
We start by switching off swap. We don’t need it on our nodes, and our future kubelet service will not start if we don’t disable swap.
Swap can be enabled on worker nodes if you really need it. For many years, Kubernetes refused to start if swap was configured, but newer releases (swap support became beta in 1.28 and has been refined through 1.32) can work with it. You can reconfigure the kubelet to allow swap.
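I don’t enable swap in this playbook, but for the record, here is a minimal sketch of what such a kubelet configuration could look like. The failSwapOn, NodeSwap and swapBehavior settings exist in recent kubelet versions; check the documentation of your exact release before using them:

# Fragment of the kubelet configuration (for example /var/lib/kubelet/config.yaml)
# Only for worker nodes where you really want to keep swap enabled
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
featureGates:
  NodeSwap: true
memorySwap:
  swapBehavior: LimitedSwap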
First, we switch off any active swap areas. If there aren’t any, we simply ignore the error:
- name: Switch off swap
  ansible.builtin.command:
    cmd: swapoff -a
  failed_when: false
Next, we remove the swap entries from /etc/fstab. I assume you want Kubernetes to start even after the Linux box is rebooted. If you don’t remove the entries, swap will be activated again after the reboot, and Kubernetes will fail to start.
- name: Remove swap from /etc/fstab
  ansible.builtin.mount:
    path: none
    fstype: swap
    state: absent_from_fstab
Enable forwarding
Before installing the software, we load some Linux kernel modules and enable IP forwarding.
Modules:
- name: Load Linux kernel modules
  community.general.modprobe:
    name: "{{ item }}"
    state: present
    persistent: present
  loop:
    - overlay
    - br_netfilter
IP forwarding and some other kernel variables:
- name: Enable IP forwarding
  ansible.posix.sysctl:
    name: "{{ item }}"
    value: 1
    reload: true
    state: present
  loop:
    - net.ipv4.ip_forward
    - net.ipv6.conf.all.forwarding
    - net.bridge.bridge-nf-call-iptables
    - net.bridge.bridge-nf-call-ip6tables
Copy and unpack the software
Before copying the software, we create the directories we need.
- name: Create directories
  ansible.builtin.file:
    path: "{{ item }}"
    owner: root
    group: root
    mode: "0755"
    state: directory
  loop:
    - /etc/systemd/system/kubelet.service.d
    - /opt/cni/bin
    - "{{ dest_dir }}/bin"
This time, we have fewer directories to create. Everything else is kubeadm’s problem.
Let’s install the software. Some parts of it are normal archives. We must unpack them. We can use the standard Ansible module ansible.builtin.unarchive to get the task done:
- name: Unpack containerd
  ansible.builtin.unarchive:
    src: "containerd-{{ versions.containerd }}-linux-{{ arch }}.tar.gz"
    dest: "{{ dest_dir }}"
    owner: root
    group: root
    mode: "0755"

- name: Unpack cni plugins
  ansible.builtin.unarchive:
    src: "cni-plugins-linux-{{ arch }}-v{{ versions.cni_plugins }}.tgz"
    dest: /opt/cni/bin
    owner: root
    group: root
    mode: "0755"

- name: Unpack crictl
  ansible.builtin.unarchive:
    src: "crictl-v{{ versions.crictl }}-linux-{{ arch }}.tar.gz"
    dest: "{{ dest_dir }}/bin"
    owner: root
    group: root
    mode: "0755"
The helm archive is a little bit different, and we must supply additional arguments:
- name: Unpack helm
  ansible.builtin.unarchive:
    src: "helm-v{{ versions.helm }}-linux-{{ arch }}.tar.gz"
    dest: "{{ dest_dir }}/bin"
    owner: root
    group: root
    mode: "0755"
    extra_opts:
      - "--strip-components=1"
    exclude:
      - README.md
      - LICENSE
We don’t use helm in this playbook. If you don’t need it on your servers, skip the step.
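If you want to keep the task in the playbook but make it optional, you could guard it with a variable. The install_helm flag below is my own invention, not part of the original playbook:

- name: Unpack helm
  ansible.builtin.unarchive:
    src: "helm-v{{ versions.helm }}-linux-{{ arch }}.tar.gz"
    dest: "{{ dest_dir }}/bin"
    owner: root
    group: root
    mode: "0755"
    extra_opts:
      - "--strip-components=1"
    exclude:
      - README.md
      - LICENSE
  # install_helm is a hypothetical variable: set it in vars or with -e install_helm=true
  when: install_helm | default(false) | bool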
Some of the files we’ve downloaded are “normal” binaries. We can copy them to the target servers:
- name: Copy binary files
  ansible.builtin.copy:
    src: "{{ item }}"
    dest: "/usr/local/bin/{{ item }}"
    owner: root
    group: root
    mode: "0755"
  loop:
    - kubeadm
    - kubelet
    - kubectl
    - "runc.{{ arch }}"
I don’t like that the runc binary carries the architecture in its name, so I create a hard link to it and use the “normal” name, runc, afterwards.
- name: Create runc link
  ansible.builtin.file:
    path: /usr/local/bin/runc
    src: "/usr/local/bin/runc.{{ arch }}"
    state: hard
Configure autostart
We already downloaded the service files from GitHub. Now we must copy them to the target hosts:
- name: Copy service files
  ansible.builtin.copy:
    src: "{{ item }}"
    dest: "/etc/systemd/system/{{ item }}"
    owner: root
    group: root
    mode: "0644"
  loop:
    - containerd.service
    - kubelet.service
But there is one change we should make. We must ensure the correct path to kubelet in the unit file:
- name: Change path to kubelet
  ansible.builtin.lineinfile:
    path: /etc/systemd/system/kubelet.service
    regexp: "^ExecStart="
    line: "ExecStart=/usr/local/bin/kubelet"
    state: present
Configuring kubeadm
Again, we don’t do anything special here; it is a standard procedure. We already downloaded the kubeadm configuration file, and now we copy it to the target hosts:
- name: Copy kubeadm.conf
  ansible.builtin.copy:
    src: 10-kubeadm.conf
    dest: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    owner: root
    group: root
    mode: "0644"
One more time, we must ensure that we use the correct paths:
- name: Change path to kubeadm
  ansible.builtin.lineinfile:
    path: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    regexp: "^ExecStart=/usr/bin/"
    line: "ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS"
    state: present
Start services
No, we still don’t have a Kubernetes cluster, but we can already start our services. First, we reload systemd:
- name: Reload systemd
  ansible.builtin.systemd_service:
    daemon_reload: true
Then we start and enable the services:
- name: Start services
  ansible.builtin.systemd_service:
    name: "{{ item }}"
    enabled: true
    state: started
  loop:
    - containerd
    - kubelet
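If you want to be sure that containerd really answers before kubeadm talks to it, you could add a small sanity check with crictl. This task is my own addition and assumes containerd’s default socket path:

- name: Check that containerd answers
  ansible.builtin.command:
    cmd: crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
  changed_when: false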
Create Kubernetes cluster
We start creating the Kubernetes cluster on one node. We use the first node in the apiserver group; that’s what the group is called in my inventory. If you named it differently, don’t forget to change the name in this task and in some of the following tasks.
- name: Create cluster on control plane
  ansible.builtin.command:
    cmd: /usr/local/bin/kubeadm init --pod-network-cidr="{{ pod_cidr }}" --service-cidr="{{ service_cidr }}"
  when: inventory_hostname == groups.apiserver.0
If the command didn’t fail, we have a Kubernetes cluster at this point. It consists of exactly one node. Can you call that a cluster? Well, I remember there was a “VIOS cluster” back in the days of VIOS 2.1 or 2.2 with a single node in it.
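If you want the playbook itself to confirm that the control plane answers, you could add a small verification task right after kubeadm init. This is my own addition, not part of the original flow:

- name: Verify that the control plane answers
  ansible.builtin.command:
    cmd: kubectl get nodes
  environment:
    KUBECONFIG: /etc/kubernetes/admin.conf
  when: inventory_hostname == groups.apiserver.0
  changed_when: false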
To expand our cluster, we must get a “join command” that we execute on all other nodes.
If you execute the commands manually, without Ansible, you get the join command in the output of kubeadm init. Of course, I could parse the output of the previous task and extract the join command from it. But why make it complex when I can keep it simple?
- name: Get join command from control plane
  ansible.builtin.command:
    cmd: /usr/local/bin/kubeadm token create --print-join-command
  when: inventory_hostname == groups.apiserver.0
  register: joincmd
If we don’t have the join command, we can’t proceed, so we stop here:
- name: Stop if there is no join command
  ansible.builtin.fail:
    msg: "Join command is undefined"
  when: hostvars[groups.apiserver.0].joincmd is undefined
Join nodes to the cluster
First, we join all other control plane nodes. I don’t have any, so it is an easy task for me:
- name: Join rest of control planes to the cluster
  ansible.builtin.command:
    cmd: "{{ hostvars[groups.apiserver.0].joincmd.stdout_lines.0 }}"
  when: inventory_hostname in groups['apiserver'] and inventory_hostname != groups.apiserver.0
All our control plane nodes must have the “control plane” role:
- name: Set control plane role
  ansible.builtin.command:
    cmd: "kubectl label node {{ item }}.{{ domain }} node-role.kubernetes.io/control-plane=''"
  environment:
    KUBECONFIG: /etc/kubernetes/admin.conf
  when: inventory_hostname == groups.apiserver.0
  loop: "{{ groups['apiserver'] }}"
Now we join worker nodes to the cluster:
- name: Join worker nodes to the cluster
  ansible.builtin.command:
    cmd: "{{ hostvars[groups.apiserver.0].joincmd.stdout_lines.0 }}"
  when: inventory_hostname in groups['workers']
We set the role “worker” for our worker nodes:
- name: Set worker node role
  ansible.builtin.command:
    cmd: "kubectl label node {{ item }}.{{ domain }} node-role.kubernetes.io/worker=''"
  environment:
    KUBECONFIG: /etc/kubernetes/admin.conf
  when: inventory_hostname == groups.apiserver.0
  loop: "{{ groups['workers'] }}"
Cluster Network configuration
Last time, we installed and configured the CNI plugins and didn’t do anything else with the network. This time, we install the CNI plugins too, but we also add Flannel on top of them. Flannel is a “layer 3 network fabric designed for Kubernetes.” In my opinion, it simplifies the network configuration: no black magic with CNI configs and Linux’s ip command. We copy the standard Flannel configuration file that we downloaded earlier, set the correct CIDR in it, and apply it. That’s it!
Copying the Flannel configuration file to our first control plane:
- name: Copy flannel config
  ansible.builtin.copy:
    src: kube-flannel.yml
    dest: /root/kube-flannel.yml
  when: inventory_hostname == groups.apiserver.0
Setting the CIDR in the Flannel configuration:
- name: Set POD CIDR in flannel config
  ansible.builtin.lineinfile:
    path: /root/kube-flannel.yml
    regexp: '"Network": "10'
    line: "      \"Network\": \"{{ pod_cidr }}\","
    state: present
  when: inventory_hostname == groups.apiserver.0
This is the only place where something can go wrong. The Flannel configuration file is YAML, which means that indentation is immensely important. If you put too few or too many spaces in the “line” parameter, your configuration file will not work, and you will have a lot of fun trying to find the mistake. Trust me, I have been there already ;-)
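If you want to catch such a mistake before it reaches the cluster, you could let kubectl validate the file first. This optional check is my own addition; it relies on kubectl’s client-side dry run and catches YAML that no longer parses, although not every wrong indentation:

- name: Validate flannel config before applying it
  ansible.builtin.command:
    cmd: kubectl apply --dry-run=client -f /root/kube-flannel.yml
  environment:
    KUBECONFIG: /etc/kubernetes/admin.conf
  when: inventory_hostname == groups.apiserver.0
  changed_when: false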
Now we can apply our configuration:
- name: Apply flannel configuration
  ansible.builtin.command:
    cmd: kubectl apply -f /root/kube-flannel.yml
  environment:
    KUBECONFIG: /etc/kubernetes/admin.conf
  when: inventory_hostname == groups.apiserver.0
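After Flannel starts, the nodes should move from NotReady to Ready. If you want the playbook to wait for that, here is a sketch with kubectl wait; the task and the timeout are my additions, so tune them for your environment:

- name: Wait until all nodes are Ready
  ansible.builtin.command:
    cmd: kubectl wait --for=condition=Ready nodes --all --timeout=300s
  environment:
    KUBECONFIG: /etc/kubernetes/admin.conf
  when: inventory_hostname == groups.apiserver.0
  changed_when: false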
Was it easy?
Join me at Common Europe Congress 2025!
If you are in Europe, you are welcome to join me at the Common Europe Congress 2025. Even if you are not in Europe, you are still welcome!
The Common Europe Congress 2025 will be held from June 2nd to 5th in Gothenburg, Sweden, in a very beautiful place: Gothia Towers. Sweden is always beautiful, especially its nature. I like visiting Sweden and am there at least once every year.
The agenda for the Congress is already published. In recent years, it has been the biggest IBM Power-related event in Europe. Just take a second and think about it: the biggest IBM Power event in Europe is organized by volunteers, not by IBM. That alone is a reason to be there. It is not an IBM marketing show, but real gurus and practitioners from all over Europe. You usually pay tons of money to get them to your site; the Congress costs just a fraction of their daily rate, and you can ask them your questions on all three days.
The early bird price is only 1020€ and is valid until May 1st. If you are a member of a local Common country group, your price may be even lower (or zero). You can register here.
Remember to say hello to me in Sweden! By the way, “hello” in Swedish is “Hej.”
Your Kubernetes cluster is ready to use!
Yes, in case you didn’t notice, you are already using your Kubernetes cluster. The Flannel configuration was applied to the running cluster. You can do other things in the same way.
What about deploying Kubernetes metrics service?
- name: Deploy kubernetes-metrics
  ansible.builtin.command:
    cmd: kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
  environment:
    KUBECONFIG: /etc/kubernetes/admin.conf
  when: inventory_hostname == groups.apiserver.0
Hmm… The task above will only work if your first control plane has access to the Internet. But I hope you have already got the idea, haven’t you?
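In an air-gapped environment, you can treat components.yaml exactly like kube-flannel.yml: download it on the Ansible controller, copy it to the first control plane, and apply the local file. A sketch under that assumption:

- name: Download metrics-server manifest on local box
  ansible.builtin.get_url:
    url: https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
    dest: "./"
  delegate_to: localhost
  run_once: true

- name: Copy metrics-server manifest to the control plane
  ansible.builtin.copy:
    src: components.yaml
    dest: /root/components.yaml
  when: inventory_hostname == groups.apiserver.0

- name: Deploy kubernetes-metrics from the local file
  ansible.builtin.command:
    cmd: kubectl apply -f /root/components.yaml
  environment:
    KUBECONFIG: /etc/kubernetes/admin.conf
  when: inventory_hostname == groups.apiserver.0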
What do you think about this way of installing Kubernetes? Was it easier than last time? Write your thoughts in the comments or in the chat!
Have fun with your new Kubernetes cluster!
Andrey
Hi, I am Andrey Klyachkin, IBM Champion and IBM AIX Community Advocate. This means I don’t work for IBM. Over the last twenty years, I have worked with many different IBM Power customers all over the world, both on-premise and in the cloud. I specialize in automating IBM Power infrastructures, making them even more robust and agile. I co-authored several IBM Redbooks and IBM Power certifications. I am an active Red Hat Certified Engineer and Instructor.
Follow me on LinkedIn, Twitter and YouTube.
You can meet me at events like IBM TechXchange, the Common Europe Congress, and GSE Germany’s IBM Power Working Group sessions.