IBM! I am tired of hearing the word "OpenShift"! Is there anything better?
Why should I pay so much money for open-source software?
If you talk to an IBM sales rep, you will hear only two words: AI and OpenShift. It sucks! We all know that Red Hat OpenShift is nothing but Kubernetes. Kubernetes is free. Why should I pay an enormous amount of money for OpenShift if I can take Kubernetes and install it myself? Let's do it and save some money to please our management and show our financial discipline.
First, if you don’t plan to use containers, you don’t need Kubernetes or OpenShift. Forget about it and use your time for something else.
But if you use containers or plan to use them, the only way is to use Kubernetes. Yes, you’ve heard it correctly. I said “Kubernetes,” not “OpenShift.”
If I ask you whether you use Linux, you will probably answer “yes.” But if we go deep enough, we will find that you don’t use Linux but rather Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Ubuntu, or another Linux distribution.
Because, technically speaking, Linux is a kernel, not an operating system. You use the whole operating system, not the single kernel.
Similarly, Kubernetes is software that you could use directly but probably should not. It is designed to manage containerized workloads, and it is the framework used to build a Kubernetes distribution, like Red Hat OpenShift or SUSE Rancher. Red Hat OpenShift is a distribution of Kubernetes that can run on multiple platforms, including IBM Power, and is supported by Red Hat. Red Hat packs Kubernetes with additional bells and whistles that make your life easier, development faster, and operations smoother.
The usual problem arises when you want to test Red Hat OpenShift to see if it fits into your development paradigm but are not ready to pay support costs from the very beginning.
If that is your case, you can stop reading here. Instead of reading the newsletter, go to https://developers.redhat.com, create a personal Red Hat account, and join the Red Hat Developer Program. You will get access to the Red Hat OpenShift code and can officially install it. Not for production use! Only to learn how to use it and to test whether your software works with OpenShift.
The rest of the newsletter is for Red Hat-haters, who can’t even imagine using Red Hat software :-)
Alternatives to OpenShift
I already named one of them: SUSE Rancher. Unfortunately, SUSE Rancher is not available for IBM Power, so it is not an option for me.
Another very popular Kubernetes distribution is VMware Tanzu. Have you still not heard that VMware was bought by Broadcom? If not, then you can test it. But it only works on x86.
There are several other, smaller distributions like k3s, which, as far as I know, have no support options, and I am not aware of any ports to IBM Power.
That’s why our first option, if we don’t want to pay Red Hat, is to install “pure” Kubernetes.
This is exactly what I plan to do in the next newsletters.
Pure Kubernetes installation
Can it be done? Yes, I did it.
It took me a lot of time to figure it out and automate the deployment using Ansible. The whole playbook was about 1000 lines. I optimized it a little bit and threw away a lot of duplicate code. Now, it is about 800 lines.
Will it work in your environment? I am not sure. The playbook is more like my brain dump. It's not really production-ready, it's not parametrized, and it has some ambiguities that make it not really idempotent.
How long does it take to install pure Kubernetes? Now that I have the playbook, it takes about 15 minutes to deploy everything on the cloud provider of my choice, including LPARs. But as you remember, it took me some time to get there, and the playbook has about 800 lines. It will take some time to publish and comment on these 800 lines.
My work is not original but is based on "Kubernetes The Hard Way" by Kelsey Hightower.
Infrastructure prerequisites
You need 4 servers:
jumpbox. This is our jump server with Internet access. Of course, you can download everything from the Internet manually and then upload it to the server. In that case, you don't need Internet access.
server. This is our future Kubernetes control plane.
worker1. This is our future first worker node.
worker2. This is our future second worker node.
If the words like “worker node” or “control plane” mean nothing to you, you are welcome to read some books on Kubernetes.
All servers can be very small. For the tests, I used 1 vCPU, 4GB RAM, and 20GB storage per server. Of course, you can't run any real workload with such resources. If you want to run something useful, assign more resources!
I use Ubuntu 24.04 as the operating system. You can use anything you want, but if you are still reading this, you probably want to use a “really free” operating system.
Ansible inventory
After you have prepared the infrastructure, it is time to write the Ansible inventory. In this case, I use YAML format for the inventory:
---
all:
  children:
    jump:
      hosts:
        jumpbox:
          ansible_host: 10.11.0.1
    apiserver:
      hosts:
        server:
          ansible_host: 10.11.0.2
    workers:
      hosts:
        worker1:
          ansible_host: 10.11.0.11
        worker2:
          ansible_host: 10.11.0.12
  vars:
    ansible_user: root
    ansible_password: Yes, you want to know it!
Some of the names are hard-coded in the playbook. If you have other server names, be sure to change them in the playbook too.
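Before doing anything else, it is worth checking that Ansible can actually reach every host in the inventory. A minimal sketch (assuming the inventory above is saved as inventory.yml; the file name is my choice, not from the original playbook):

```yaml
---
# ping.yml - verify SSH connectivity to all hosts in the inventory
- name: Check connectivity
  hosts: all
  gather_facts: false
  tasks:
    - name: Ping all hosts
      ansible.builtin.ping:
```

Run it with `ansible-playbook -i inventory.yml ping.yml`. If a host fails here, fix the SSH access before going further.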
Configure the jump server
- name: Install Jumpbox
  hosts: jump
  gather_facts: false
Our first task is to prepare our jump server. First, we install some software that we need to take further steps.
- name: Install packages
  ansible.builtin.apt:
    name:
      - wget
      - curl
      - openssl
      - vim
      - git
      - sshpass
Now it is time to download a lot of software!
- name: Download files
  ansible.builtin.get_url:
    url: "{{ item }}"
    dest: /root/
  loop:
    - https://dl.k8s.io/v1.31.2/bin/linux/ppc64le/kubectl
    - https://dl.k8s.io/v1.31.2/bin/linux/ppc64le/kube-apiserver
    - https://dl.k8s.io/v1.31.2/bin/linux/ppc64le/kube-controller-manager
    - https://dl.k8s.io/v1.31.2/bin/linux/ppc64le/kube-scheduler
    - https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.31.1/crictl-v1.31.1-linux-ppc64le.tar.gz
    - https://github.com/opencontainers/runc/releases/download/v1.2.1/runc.ppc64le
    - https://github.com/containernetworking/plugins/releases/download/v1.6.0/cni-plugins-linux-ppc64le-v1.6.0.tgz
    - https://github.com/containerd/containerd/releases/download/v2.0.0/containerd-2.0.0-linux-ppc64le.tar.gz
    - https://dl.k8s.io/v1.31.2/bin/linux/ppc64le/kube-proxy
    - https://dl.k8s.io/v1.31.2/bin/linux/ppc64le/kubelet
    - https://github.com/etcd-io/etcd/releases/download/v3.4.34/etcd-v3.4.34-linux-ppc64le.tar.gz
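If you want to be sure the downloads are not corrupted, ansible.builtin.get_url can also verify a checksum for you. Kubernetes publishes .sha256 files next to its binaries, so a sketch for kubectl could look like this (the other binaries follow the same pattern; this is my addition, not part of the original playbook):

```yaml
- name: Download kubectl and verify its checksum
  ansible.builtin.get_url:
    url: https://dl.k8s.io/v1.31.2/bin/linux/ppc64le/kubectl
    dest: /root/kubectl
    # get_url fetches the .sha256 file and compares it with the download
    checksum: sha256:https://dl.k8s.io/v1.31.2/bin/linux/ppc64le/kubectl.sha256
```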
Let’s create a hard link for kubectl. We will need it many times. If your /root and /usr/local/bin directories are on different filesystems, just copy the file to /usr/local/bin.
- name: Create kubectl link
  ansible.builtin.file:
    path: /usr/local/bin/kubectl
    src: /root/kubectl
    state: hard

- name: Set permissions
  ansible.builtin.file:
    path: /usr/local/bin/kubectl
    owner: root
    group: root
    mode: '0755'
We also need an SSH key, which we will use in the future:
- name: Generate SSH key on jumpbox
  community.crypto.openssh_keypair:
    path: /root/.ssh/id_ecdsa
    type: ecdsa
    size: 521
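We will later need passwordless SSH from the jumpbox to the other machines. One possible way to sketch this (assuming the password login from the inventory still works; the original playbook may solve it differently) is to read the public key and push it to the other hosts with ansible.posix.authorized_key:

```yaml
- name: Read the jumpbox public key
  ansible.builtin.slurp:
    src: /root/.ssh/id_ecdsa.pub
  register: jumpbox_pubkey

- name: Authorize the jumpbox key on all other hosts
  ansible.posix.authorized_key:
    user: root
    # slurp returns the file content base64-encoded
    key: "{{ jumpbox_pubkey.content | b64decode }}"
  delegate_to: "{{ item }}"
  loop: "{{ groups['apiserver'] + groups['workers'] }}"
```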
The very first step is done!
Certification authority
Kubernetes uses certificates almost everywhere, so we need our own certification authority. If you already have one in your company, it will most probably not work with Kubernetes. You can try, but expect many errors.
This was maybe the most frustrating part of the Kubernetes installation. You create some certificates, you install everything, and the deployment runs without any problems. But in the end, nothing works because of the wrong certificates, and you can't find any error message that leads you to the problem with the certificates.
That's why I suggest doing it first exactly as I describe. Then, try to modify something, and be prepared to revert your modifications if something goes wrong.
We need a private key for our certification authority.
- name: Generate CA key
  community.crypto.openssl_privatekey:
    path: /root/ca.key
    type: RSA
    size: 4096
I didn't use any passphrases for the keys. From the security perspective, this is a huge mistake! If you are just playing, it is OK. But if you plan to use your Kubernetes installation for longer, give security some serious thought.
Let’s generate a Certificate Signing Request (CSR) for our certification authority.
- name: Generate CA CSR
  community.crypto.openssl_csr_pipe:
    privatekey_path: /root/ca.key
    country_name: DE
    state_or_province_name: Hessen
    locality_name: Sulzbach
    organization_name: Power DevOps
    common_name: "Power DevOps KoP CA"
    basic_constraints:
      - 'CA:TRUE'
    basic_constraints_critical: true
    key_usage:
      - keyCertSign
      - cRLSign
    key_usage_critical: true
  register: ca_csr
After we have the CSR, we can sign our CA certificate.
- name: Create CA certificate
  community.crypto.x509_certificate:
    path: /root/ca.crt
    csr_content: "{{ ca_csr.csr }}"
    privatekey_path: /root/ca.key
    provider: selfsigned
Don’t forget! This is not a real production-ready CA that you can use in your environment everywhere. It is a quick and dirty CA for our future Kubernetes cluster, which is a simple playground.
Certificates!
Now, we must create many certificates for each user, node, and service in our Kubernetes cluster. We start with private keys because it is easy!
- name: Create private keys
  community.crypto.openssl_privatekey:
    path: "{{ item }}"
    type: RSA
    size: 4096
  loop:
    - /root/admin.key
    - /root/worker1.key
    - /root/worker2.key
    - /root/kube-proxy.key
    - /root/kube-scheduler.key
    - /root/kube-controller-manager.key
    - /root/kube-api-server.key
    - /root/service-accounts.key
Our next step is to prepare CSRs. Separately from all others, we create CSRs for our future admin and service-accounts users:
- name: Create admin CSR
  community.crypto.openssl_csr:
    privatekey_path: "/root/{{ item }}.key"
    path: "/root/{{ item }}.csr"
    country_name: DE
    state_or_province_name: Hessen
    locality_name: Sulzbach
    organization_name: system:masters
    common_name: "{{ item }}"
    basic_constraints:
      - 'CA:FALSE'
    extended_key_usage:
      - clientAuth
    key_usage:
      - digitalSignature
      - keyEncipherment
  loop:
    - admin
    - service-accounts
Now for the rest of the CSRs:
- name: Create all other CSRs
  community.crypto.openssl_csr:
    privatekey_path: "/root/{{ item.name }}.key"
    path: "/root/{{ item.name }}.csr"
    country_name: DE
    state_or_province_name: Hessen
    locality_name: Sulzbach
    organization_name: "{{ item.o }}"
    common_name: "{{ item.cn }}"
    basic_constraints:
      - 'CA:FALSE'
    extended_key_usage:
      - clientAuth
      - serverAuth
    key_usage:
      - digitalSignature
      - keyEncipherment
    subject_alt_name: "{{ item.san }}"
  loop:
    - { 'name': 'worker1', 'cn': 'system:node:worker1', 'o': 'system:nodes', 'san': ['IP:127.0.0.1', 'IP:10.11.0.11', 'DNS:worker1', 'DNS:worker1.power-devops.cloud'] }
    - { 'name': 'worker2', 'cn': 'system:node:worker2', 'o': 'system:nodes', 'san': ['IP:127.0.0.1', 'IP:10.11.0.12', 'DNS:worker2', 'DNS:worker2.power-devops.cloud'] }
    - { 'name': 'kube-proxy', 'cn': 'system:kube-proxy', 'o': 'system:node-proxier', 'san': ['IP:127.0.0.1', 'DNS:kube-proxy'] }
    - { 'name': 'kube-scheduler', 'cn': 'system:kube-scheduler', 'o': 'system:kube-scheduler', 'san': ['IP:127.0.0.1', 'DNS:kube-scheduler'] }
    - { 'name': 'kube-controller-manager', 'cn': 'system:kube-controller-manager', 'o': 'system:kube-controller-manager', 'san': ['IP:127.0.0.1', 'DNS:kube-controller-manager'] }
    - { 'name': 'kube-api-server', 'cn': 'kubernetes', 'o': 'Power DevOps', 'san': ['IP:127.0.0.1', 'IP:10.32.0.1', 'DNS:kubernetes', 'DNS:kubernetes.default', 'DNS:kubernetes.default.svc', 'DNS:kubernetes.default.svc.cluster', 'DNS:kubernetes.svc.cluster.local', 'DNS:server.kubernetes.local', 'DNS:api-server.kubernetes.local', 'DNS:server.power-devops.cloud', 'DNS:api-server.power-devops.cloud'] }
This is one of the places where you can see hard-coded node names, IP addresses, and host names. Of course, you must change some data, such as the IP addresses (10.11.0.11, 10.11.0.12) and domain names (.power-devops.cloud). But be careful: if you change too much, your Kubernetes cluster will not start.
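If you want to reduce the hard-coding at least for the workers, their CSR entries can be generated from the inventory itself. A hypothetical, untested sketch (assuming the groups and ansible_host variables from the inventory above, and keeping the domain name hard-coded):

```yaml
- name: Create worker CSRs from inventory data
  community.crypto.openssl_csr:
    privatekey_path: "/root/{{ item }}.key"
    path: "/root/{{ item }}.csr"
    organization_name: system:nodes
    common_name: "system:node:{{ item }}"
    subject_alt_name:
      - "IP:127.0.0.1"
      # the node IP comes from the inventory, not from a hard-coded list
      - "IP:{{ hostvars[item].ansible_host }}"
      - "DNS:{{ item }}"
      - "DNS:{{ item }}.power-devops.cloud"
  loop: "{{ groups['workers'] }}"
```

This way, adding worker3 to the inventory is enough to get its CSR generated too.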
Now we can sign our certificates:
- name: Sign certificates
  community.crypto.x509_certificate:
    csr_path: "/root/{{ item }}.csr"
    path: "/root/{{ item }}.crt"
    provider: ownca
    ownca_path: /root/ca.crt
    ownca_privatekey_path: /root/ca.key
    ownca_not_after: +3653d
    ownca_not_before: "-1d"
  loop:
    - admin
    - worker1
    - worker2
    - kube-proxy
    - kube-scheduler
    - kube-controller-manager
    - kube-api-server
    - service-accounts
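Since wrong certificates were the most frustrating part for me, it may pay off to verify the results immediately instead of hunting for the problem later. One possible check (my addition, shown for worker1 only) uses community.crypto.x509_certificate_info to read the certificate back and assert the subject and issuer:

```yaml
- name: Read the worker1 certificate details
  community.crypto.x509_certificate_info:
    path: /root/worker1.crt
  register: worker1_crt

- name: Check subject and issuer of worker1 certificate
  ansible.builtin.assert:
    that:
      - worker1_crt.subject.commonName == 'system:node:worker1'
      - worker1_crt.subject.organizationName == 'system:nodes'
      - worker1_crt.issuer.commonName == 'Power DevOps KoP CA'
```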
Finding another system administrator today is easy, but finding an engineer who can build robust and scalable automation is difficult!
Want to become the most valuable professional in your company? Learn to build robust and scalable automation! Yes, it takes time and money, but be sure—the investment pays off. Read more and sign up for the program!
That’s all for today, guys!
Yes, I am tired, too. You were reading it; I was writing it. The code will run for maybe 10 seconds in your environment, but it took days to develop and test and hours to write it up for the newsletter.
Next week, we will continue configuring our future Kubernetes servers.
Have fun installing Kubernetes!
Andrey
Hi, I am Andrey Klyachkin, IBM Champion and IBM AIX Community Advocate. This means I don’t work for IBM. Over the last twenty years, I have worked with many different IBM Power customers all over the world, both on-premise and in the cloud. I specialize in automating IBM Power infrastructures, making them even more robust and agile. I co-authored several IBM Redbooks and IBM Power certifications. I am an active Red Hat Certified Engineer and Instructor.
Follow me on LinkedIn, Twitter and YouTube.
You can meet me at events like IBM TechXchange, the Common Europe Congress, and GSE Germany’s IBM Power Working Group sessions.