If you did it, you are well on your way to a working Kubernetes on IBM Power cluster
It is time to install the most important part of Kubernetes
It was a long preparation journey. I don’t know about you, but I am tired of preparing all possible configurations. When will I have my working Kubernetes cluster? Sad but true—not today. Today, we install and configure the core Kubernetes services such as etcd and apiserver. At the end of the day, we should get a working Kubernetes control plane but without worker nodes.
Yes, it takes more time than planned. I think this is usual when you do something for the first time. But if you do the same operation a second or third time, it takes less time. This is why I automate everything from the very beginning. If I need four Fridays to deploy my first Kubernetes on Power cluster, I will need only 10 to 15 minutes to deploy the second and every following one.
All the following tasks are executed on our future control plane servers. I have only one in my configuration, and if you read my previous newsletters, it is called simply server (what a dumb name!) and defined in the group apiserver in the Ansible inventory.
Generating an encryption key
We need an encryption key to encrypt our future Kubernetes secrets. We will create it using the Ansible lookup plugin community.general.random_string. But before creating a new key, we must check if we already have one. We want to develop a good, idempotent playbook, don’t we? Ouch, I already told you this is not a playbook but a brain dump. Still, let’s do at least this step right.
- name: Check if encryption config exists
  ansible.builtin.stat:
    path: /root/encryption-config.yaml
  register: enccfg
If the file with our encryption key doesn’t exist, we can create a new key:
- name: Create encryption config
  when: not enccfg.stat.exists
  block:
    - name: Get encryption key
      ansible.builtin.set_fact:
        encryption_key: "{{ lookup('community.general.random_string', length=32, base64=true) }}"
Now, still within the same block, we create the configuration file:
    - name: Create encryption config
      ansible.builtin.template:
        dest: /root/encryption-config.yaml
        src: encryption-config.yaml.j2
The configuration file is created from a template. I owe you the template:
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: {{ encryption_key }}
      - identity: {}
Deploying etcd
etcd is a distributed key-value database where Kubernetes stores its state. The word “distributed” means we should have more than one instance of etcd. However, I have only one control plane server and will have only one etcd instance.
We start by copying etcd from the jumpbox to our server. We downloaded all the pieces we need to the jumpbox in the first newsletter.
- name: Copy etcd
  ansible.builtin.synchronize:
    src: /root/etcd-v3.4.34-linux-ppc64le.tar.gz
    dest: /root/etcd-v3.4.34-linux-ppc64le.tar.gz
  delegate_to: jumpbox
Now that we have etcd on the target server, we can unpack it into /usr/local:
- name: Unpack etcd
  ansible.builtin.unarchive:
    dest: /usr/local/
    src: /root/etcd-v3.4.34-linux-ppc64le.tar.gz
    remote_src: true
To make it more usable, we create hard links to the etcd binaries in /usr/local/bin:
- name: Create etcd links
  ansible.builtin.file:
    src: "/usr/local/etcd-v3.4.34-linux-ppc64le/{{ item }}"
    path: "/usr/local/bin/{{ item }}"
    state: hard
  loop:
    - etcd
    - etcdctl
The etcd software is in place, and now we must enable its autostart. To do that, we create a systemd unit file and inform systemd about it.
- name: Copy etcd unit file
  ansible.builtin.copy:
    src: etcd.service
    dest: /etc/systemd/system/
This is the etcd.service file:
[Unit]
Description=etcd
Documentation=https://github.com/etcd-io/etcd
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \
--name controller \
--initial-advertise-peer-urls http://127.0.0.1:2380 \
--listen-peer-urls http://127.0.0.1:2380 \
--listen-client-urls http://127.0.0.1:2379 \
--advertise-client-urls http://127.0.0.1:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster controller=http://127.0.0.1:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
The correct way to inform systemd about the new unit file would be to add a notify to the last task and reload systemd from a handler. I decided to reload it always:
- name: Reload systemd
  ansible.builtin.systemd_service:
    daemon_reload: true
I don’t think it should be a problem for systemd to be reloaded one more time even if nothing changes.
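For completeness, here is a minimal sketch of the handler-based variant, assuming the same copy task as above and a handlers section in the same play:

- name: Copy etcd unit file
  ansible.builtin.copy:
    src: etcd.service
    dest: /etc/systemd/system/
  notify: Reload systemd

handlers:
  - name: Reload systemd
    ansible.builtin.systemd_service:
      daemon_reload: true

With handlers, systemd is reloaded only when the unit file actually changes.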
Now systemd knows about the file, and we can start and enable the etcd service:
- name: Start etcd
  ansible.builtin.systemd_service:
    name: etcd
    enabled: true
    state: started
If you issue the command etcdctl member list at this point, you should see one member in the output: our single etcd instance, started and listening on its ports.
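If you prefer to keep this check inside the playbook, a small sketch like this would do (the task names and the debug output are my own additions):

- name: Check etcd membership
  ansible.builtin.command:
    cmd: /usr/local/bin/etcdctl member list
  register: etcd_members
  changed_when: false

- name: Show etcd members
  ansible.builtin.debug:
    var: etcd_members.stdout_lines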
We must take the next step.
Configuring core Kubernetes services
Our next step is to copy and configure several core Kubernetes services - kube-apiserver, kube-controller-manager, and kube-scheduler.
We copy them first from jumpbox:
- name: Copy Kubernetes binaries
  ansible.builtin.synchronize:
    src: "/root/{{ item }}"
    dest: "/usr/local/bin/{{ item }}"
  delegate_to: jumpbox
  loop:
    - kube-apiserver
    - kube-controller-manager
    - kube-scheduler
    - kubectl
Be sure that the binaries have the correct permissions:
- name: Set permissions on Kubernetes binaries
  ansible.builtin.file:
    path: "/usr/local/bin/{{ item }}"
    owner: root
    group: root
    mode: "0755"
  loop:
    - kube-apiserver
    - kube-controller-manager
    - kube-scheduler
    - kubectl
Now, we place the configuration files. We created some of them last week, and the last one, our encryption config, earlier today.
- name: Copy Kubernetes config files
  ansible.builtin.copy:
    src: "/root/{{ item }}"
    dest: "/var/lib/kubernetes/{{ item }}"
    remote_src: true
  loop:
    - encryption-config.yaml
    - kube-controller-manager.kubeconfig
    - kube-scheduler.kubeconfig
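By the way, this task and the next one assume that /var/lib/kubernetes and /etc/kubernetes/config already exist on the server. If they don’t in your setup, a task like this one (my addition, not part of the original brain dump) creates them first:

- name: Create Kubernetes config directories
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    mode: "0755"
  loop:
    - /var/lib/kubernetes
    - /etc/kubernetes/config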
There is one more configuration file that we haven’t created until now: the kube-scheduler config. We copy it:
- name: Copy Kubernetes Scheduler config
  ansible.builtin.copy:
    src: kube-scheduler.yaml
    dest: /etc/kubernetes/config/kube-scheduler.yaml
Here is the config file:
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
After we have copied all the configuration and binary files, we must create the systemd services and enable their autostart.
We copy the prepared systemd unit files:
- name: Copy systemd unit files
  ansible.builtin.copy:
    src: "{{ item }}"
    dest: "/etc/systemd/system/{{ item }}"
  loop:
    - kube-apiserver.service
    - kube-controller-manager.service
    - kube-scheduler.service
Of course, we must have these files, so I publish them here.
This is kube-apiserver.service:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--allow-privileged=true \
--apiserver-count=1 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/audit.log \
--authorization-mode=Node,RBAC \
--bind-address=0.0.0.0 \
--client-ca-file=/var/lib/kubernetes/ca.crt \
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--etcd-servers=http://127.0.0.1:2379 \
--event-ttl=1h \
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
--kubelet-certificate-authority=/var/lib/kubernetes/ca.crt \
--kubelet-client-certificate=/var/lib/kubernetes/kube-api-server.crt \
--kubelet-client-key=/var/lib/kubernetes/kube-api-server.key \
--runtime-config='api/all=true' \
--service-account-key-file=/var/lib/kubernetes/service-accounts.crt \
--service-account-signing-key-file=/var/lib/kubernetes/service-accounts.key \
--service-account-issuer=https://server.power-devops.cloud:6443 \
--service-cluster-ip-range=10.32.0.0/24 \
--service-node-port-range=30000-32767 \
--tls-cert-file=/var/lib/kubernetes/kube-api-server.crt \
--tls-private-key-file=/var/lib/kubernetes/kube-api-server.key \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
This is a rather long unit file with many options for the kube-apiserver command. I don’t explain the options here; you can read more about them in the official kube-apiserver documentation. Don’t forget to change the server name (service-account-issuer) and the service network (service-cluster-ip-range) if you use a different service network. You can even create a template instead of hard-coding the values like I did.
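As an illustration, the hard-coded values could come from variables; the variable names below are my own and are not defined anywhere in this series. The unit file would then be deployed with ansible.builtin.template instead of ansible.builtin.copy:

- name: Copy kube-apiserver unit file
  ansible.builtin.template:
    src: kube-apiserver.service.j2
    dest: /etc/systemd/system/kube-apiserver.service

and the corresponding lines in kube-apiserver.service.j2 would become:

  --service-account-issuer=https://{{ apiserver_fqdn }}:6443 \
  --service-cluster-ip-range={{ service_cluster_ip_range }} \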
This is kube-controller-manager.service:
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--bind-address=0.0.0.0 \
--cluster-cidr=10.200.0.0/16 \
--cluster-name=kop \
--cluster-signing-cert-file=/var/lib/kubernetes/ca.crt \
--cluster-signing-key-file=/var/lib/kubernetes/ca.key \
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
--root-ca-file=/var/lib/kubernetes/ca.crt \
--service-account-private-key-file=/var/lib/kubernetes/service-accounts.key \
--service-cluster-ip-range=10.32.0.0/24 \
--use-service-account-credentials=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Similar to kube-apiserver, you can find the full list of all possible options and their explanations in the official documentation. Check whether the networks suit your configuration.
This is the smallest unit file - kube-scheduler.service:
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--config=/etc/kubernetes/config/kube-scheduler.yaml \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
For reference, I put the link to the kube-scheduler official documentation.
Again, I choose the wrong way and reload systemd configuration directly, without using “notify” and handlers:
- name: Reload systemd
  ansible.builtin.systemd_service:
    daemon_reload: true
We can start the services after reload:
- name: Start Kubernetes services
  ansible.builtin.systemd_service:
    name: "{{ item }}"
    enabled: true
    state: started
  loop:
    - kube-apiserver
    - kube-controller-manager
    - kube-scheduler
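At this point, the control plane should answer requests. If you want to verify it from the playbook, a small sketch like this works, using the admin.kubeconfig we rely on later in this article (the task itself is my addition):

- name: Check that the API server responds
  ansible.builtin.command:
    cmd: /usr/local/bin/kubectl cluster-info --kubeconfig /root/admin.kubeconfig
  register: cluster_info
  changed_when: false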
Last step for today
We must deploy one more configuration to the newly created control plane: a small RBAC policy that allows the API server to communicate with the kubelets on our future worker nodes.
We copy the file to the control plane server:
- name: Copy RBAC Kubernetes config
  ansible.builtin.copy:
    src: kube-apiserver-to-kubelet.yaml
    dest: /root/kube-apiserver-to-kubelet.yaml
Using kubectl, we apply the configuration file:
- name: Apply RBAC Kubernetes config
  ansible.builtin.command:
    cmd: /usr/local/bin/kubectl apply -f /root/kube-apiserver-to-kubelet.yaml --kubeconfig /root/admin.kubeconfig
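A side note: ansible.builtin.command always reports a change, so this task is not idempotent from Ansible’s point of view, even though kubectl apply itself is. If that bothers you, a sketch like this (the changed_when condition is my own guess based on the usual kubectl apply output) keeps the reporting honest:

- name: Apply RBAC Kubernetes config
  ansible.builtin.command:
    cmd: /usr/local/bin/kubectl apply -f /root/kube-apiserver-to-kubelet.yaml --kubeconfig /root/admin.kubeconfig
  register: rbac_apply
  changed_when: "'created' in rbac_apply.stdout or 'configured' in rbac_apply.stdout"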
No, I didn’t forget to post the contents of the configuration file. Here it is:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
It is your turn now! Deploy the control plane by next Friday, and let’s configure the worker nodes next week. We all want to deploy our applications to the Kubernetes cluster!
Have fun creating the control plane for your future Kubernetes cluster!
Andrey
Hi, I am Andrey Klyachkin, IBM Champion and IBM AIX Community Advocate. This means I don’t work for IBM. Over the last twenty years, I have worked with many different IBM Power customers all over the world, both on-premise and in the cloud. I specialize in automating IBM Power infrastructures, making them even more robust and agile. I co-authored several IBM Redbooks and IBM Power certifications. I am an active Red Hat Certified Engineer and Instructor.
Follow me on LinkedIn, Twitter and YouTube.
You can meet me at events like IBM TechXchange, the Common Europe Congress, and GSE Germany’s IBM Power Working Group sessions.