I wrote about how to install Linux using IBM AIX NIM:
I wrote about how to automate these installations using their automatic installation tools:
What did I miss? I didn’t write how to automate everything above using Ansible.
Do we need Ansible here at all?
Yes, we do!
I prepared my installation resources for Red Hat Enterprise Linux 9.4 and they work well. But if you have followed Red Hat's announcements lately, you know that Red Hat Enterprise Linux 9.5 and 10.0 Beta are out in the meantime. Not only that. I was asked if Fedora, AlmaLinux or Rocky Linux work the same way. If I prepare their installation resources manually every time, my fingers will burn.
It is even worse if I prepare different kickstart (autoyast, autoinst) files and GRUB configurations manually. I want to write only my server name and get the server installed, similar to what we do with AIX NIM.
This is the reason why I wrote two playbooks for myself. I'd like to integrate them with NIM, but I don't know how to define custom NIM methods, so I took a slightly different approach. Which one? You will learn it once we have our playbooks ready.
Some common prerequisites
Of course you must have ISO images of your favorite Linux operating system. But I want to make another note: the playbooks I show here were developed to run locally on the NIM server. I didn't test them with Ansible Automation Platform, and I can tell you that they will almost certainly not run there. If you want to use them with Ansible Automation Platform or start them from other servers, you must change the playbooks.
Because the playbooks will start from the NIM server, the inventory is very simple:
localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python3.9
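The playbooks then target this localhost entry. A minimal play header might look like the following sketch (the play name is my own choice, nothing fixed):

- name: Prepare Linux installation resources
  hosts: localhost
  tasks:
    # ... the tasks described in the rest of this article ...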
Preparing Linux installation resources
Input parameters
Before we start writing tasks, we define what input data we need. There are three input parameters for the playbook that we must specify each time we run it:
path to the ISO image with Linux
type of Linux distribution
Linux distribution’s version
I write them into the variables iso_image, linux_type and linux_version.
There are also some parameters that we should know but they can be hard-coded in the playbook:
Filesystem where Linux installation resources will be created (rsrc_fs)
Size of the filesystem (rsrc_size)
Logical volume (rsrc_lv) and volume group (rsrc_vg) names where the filesystem will be created
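As a minimal sketch, these hard-coded parameters could sit in the play's vars section. The volume group, logical volume, mount point and size values below are only examples for my environment, adjust them to yours:

vars:
  rsrc_vg: rootvg          # volume group for the resources filesystem
  rsrc_lv: linuxrsrclv     # my own "pronounceable" logical volume name
  rsrc_fs: /export/linux   # mount point for the Linux installation resources
  rsrc_size: 20G           # size of the filesystem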
Creating filesystem for the Linux installation resources
We start by creating the filesystem, or rather by making sure that it exists and has the correct size. First we create the logical volume:
- name: Create logical volume
  ibm.power_aix.lvol:
    vg: "{{ rsrc_vg }}"
    lv: "{{ rsrc_lv }}"
    size: "{{ rsrc_size }}"
    state: present
The next step is to create the filesystem or to ensure that it exists.
- name: Create filesystem
  ibm.power_aix.filesystem:
    filesystem: "{{ rsrc_fs }}"
    fs_type: jfs2
    device: "{{ rsrc_lv }}"
    attributes: "size={{ rsrc_size }}"
    state: present
Of course you can do it in one step. Why do I do it in two steps, first the logical volume and then the filesystem? Because I don't like standard logical volume names like fslv01 and prefer to have my own names. In a small environment it doesn't matter, but if you have many filesystems, it is easier to work with "pronounceable" logical volume names.
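If you prefer the one-step way, a sketch could look like this; here the filesystem module creates the logical volume in the given volume group itself, and AIX picks a generic name like fslv01 for it:

- name: Create filesystem in one step
  ibm.power_aix.filesystem:
    filesystem: "{{ rsrc_fs }}"
    fs_type: jfs2
    vg: "{{ rsrc_vg }}"
    attributes: "size={{ rsrc_size }}"
    state: present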
The filesystem must be mounted of course:
- name: Mount filesystem
  ibm.power_aix.mount:
    mount_dir: "{{ rsrc_fs }}"
    state: mount
Mounting the ISO image
Before we mount the ISO image, we check that it exists. If it doesn't exist, we fail with an error message that we can interpret later.
- name: Check that ISO image exists
  ansible.builtin.stat:
    path: "{{ iso_image }}"
    get_attributes: false
    get_checksum: false
    get_mime: false
  register: isostat

- name: Fail if ISO image doesn't exist
  ansible.builtin.fail:
    msg: "ISO image {{ iso_image }} doesn't exist or can't be read"
  when: not isostat.stat.exists
I create a temporary directory to mount the ISO image. Why? Other directories like /mnt can be used by other processes. It is easier for me to create a new temporary mount point and remove it afterwards than to break some other process that might be running on the system at that time.
- name: Create temporary directory
  ansible.builtin.tempfile:
    path: /tmp
    prefix: linux.
    state: directory
  register: tmpmnt
Now let’s mount the ISO image:
- name: Mount ISO image
  ansible.builtin.command:
    cmd: "loopmount -i {{ iso_image }} -m {{ tmpmnt.path }} -o '-V cdrfs -r'"
Preparing NFS share
After the ISO image is mounted, we must copy its contents into the future NFS share. The NFS share will be used for the servers' installation. We create the directory:
- name: Create target directory for NFS
  ansible.builtin.file:
    path: "{{ rsrc_fs }}/{{ linux_type }}/{{ linux_version }}"
    owner: root
    group: system
    mode: "0755"
    state: directory
To copy the files I use the ansible.posix.synchronize module. The module uses the well-known tool rsync to copy the files. First of all, it saves a lot of time because only the files that are missing on the target are copied. Second, I don't have to think about how many and which files I should copy. I copy all of them. It means of course that rsync must already be installed on the system and Ansible must be able to find it. If rsync is not in PATH, add an environment section to the task (a sketch follows the task below).
- name: Copy files for NFS
  ansible.posix.synchronize:
    src: "{{ tmpmnt.path }}/"
    dest: "{{ rsrc_fs }}/{{ linux_type }}/{{ linux_version }}"
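If rsync lives outside the default PATH on your NIM server, for example under /opt/freeware/bin from the AIX Toolbox (an assumption about your setup), the same task can get an environment section, roughly like this:

- name: Copy files for NFS
  ansible.posix.synchronize:
    src: "{{ tmpmnt.path }}/"
    dest: "{{ rsrc_fs }}/{{ linux_type }}/{{ linux_version }}"
  environment:
    # example PATH, adjust to where rsync is installed on your system
    PATH: "/opt/freeware/bin:/usr/bin:/usr/sbin"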
Once we have copied the files, we can export the NFS share:
- name: Export NFS share
  ansible.builtin.command:
    cmd: "mknfsexp -d {{ rsrc_fs }}/{{ linux_type }}/{{ linux_version }} -B -v 3,4 -S sys -t ro -c @10.0.0.0/8"
Should you do it here and for the whole network? No. If you have strict security requirements, you can export the share exclusively for your servers during their installation in the next playbook.
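As a sketch, a more restrictive variant could export the share to a single client only; the host name below is a placeholder:

- name: Export NFS share for one client
  ansible.builtin.command:
    cmd: "mknfsexp -d {{ rsrc_fs }}/{{ linux_type }}/{{ linux_version }} -B -v 3,4 -S sys -t ro -c newserver.example.com"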
Preparing TFTP (kernel and initrd)
We must copy the kernel and initrd for each Linux distribution to the TFTP directory. But each distribution has its own names and paths for the kernel and initrd. We could solve this by creating a variable that defines all possible names, and I will do that in the next version (a sketch of such a variable follows after the copy tasks below). For now, I solved the problem by creating separate tasks for the different distributions.
First create the directory for kernel and initrd.
- name: Create target directory for TFTP
  ansible.builtin.file:
    path: "/tftpboot/{{ linux_type }}/{{ linux_version }}"
    owner: root
    group: system
    mode: "0755"
    state: directory
Let’s copy Red Hat’s specific files:
- name: Copy files for TFTP (Red Hat)
  when: linux_type == "rhel"
  block:
    - name: Copy files
      ansible.builtin.copy:
        src: "{{ tmpmnt.path }}/ppc/ppc64/{{ item }}"
        dest: "/tftpboot/{{ linux_type }}/{{ linux_version }}/{{ item }}"
        owner: root
        group: system
        mode: "0644"
      loop:
        - vmlinuz
        - initrd.img
SUSE:
- name: Copy files for TFTP (SLES)
  when: linux_type == "sles"
  block:
    - name: Copy files
      ansible.builtin.copy:
        src: "{{ tmpmnt.path }}/boot/ppc64le/{{ item }}"
        dest: "/tftpboot/{{ linux_type }}/{{ linux_version }}/{{ item }}"
        owner: root
        group: system
        mode: "0644"
      loop:
        - linux
        - initrd
Ubuntu:
- name: Copy files for TFTP (Ubuntu)
  when: linux_type == "ubuntu"
  block:
    - name: Copy files
      ansible.builtin.copy:
        src: "{{ tmpmnt.path }}/casper/{{ item }}"
        dest: "/tftpboot/{{ linux_type }}/{{ linux_version }}/{{ item }}"
        owner: root
        group: system
        mode: "0644"
      loop:
        - vmlinux
        - initrd
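As promised above, here is a possible sketch of the variable that could one day replace the three distribution-specific tasks. It is not part of the current playbook; the structure and the variable name boot_files are just my idea, and the paths come from the tasks above:

vars:
  boot_files:
    rhel:
      - ppc/ppc64/vmlinuz
      - ppc/ppc64/initrd.img
    sles:
      - boot/ppc64le/linux
      - boot/ppc64le/initrd
    ubuntu:
      - casper/vmlinux
      - casper/initrd

A single copy task could then loop over boot_files[linux_type] and build the destination file name with the basename filter.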
At this point we have copied everything we need and can unmount the ISO image.
- name: Unmount ISO image
  ansible.posix.mount:
    path: "{{ tmpmnt.path }}"
    fstab: /dev/null
    state: unmounted

- name: Remove temporary directory
  ansible.builtin.file:
    path: "{{ tmpmnt.path }}"
    state: absent
Preparing TFTP (GRUB)
If you've read my previous newsletters, you know that there is no easy way to extract GRUB modules if you don't have a Linux installation at hand. In my case, I saved my working GRUB installation and copy it to TFTP.
- name: Create TFTP boot directory
  ansible.builtin.file:
    path: /tftpboot/linux/grub2/powerpc-ieee1275
    owner: root
    group: system
    mode: "0755"
    state: directory
Copying is done with rsync (ansible.posix.synchronize).
- name: Ensure TFTP boot directory is present
  ansible.posix.synchronize:
    src: files/linux/
    dest: /tftpboot/linux/grub2/powerpc-ieee1275
Because I use SUSE's version of GRUB, I must create a symlink /boot to /tftpboot/linux and enable access to it in /etc/tftpaccess.ctl:
- name: Ensure GRUB access
  ansible.builtin.lineinfile:
    line: allow:/boot
    path: /etc/tftpaccess.ctl
    state: present
    owner: root
    group: system
    mode: "0644"

- name: Create /boot link for GRUB
  ansible.builtin.file:
    src: /tftpboot/linux
    dest: /boot
    state: link
Testing
The first playbook is done! We can test it.
# ansible-playbook prepare_linux_resources.yml -e linux_type=rhel -e linux_version=9.5 -e iso_image=/nim/iso/rhel-9.5-ppc64le-dvd.iso
Yeah! I've got my installation resources for Red Hat Enterprise Linux 9.5! The next step is to create a playbook to prepare the installation, but that's for next week.
Have fun with Linux on Power!
Andrey
Hi, I am Andrey Klyachkin, IBM Champion and IBM AIX Community Advocate. That means I don't work for IBM. Over the last 20 years, I have worked with many different IBM Power customers all over the world, both on premises and in the cloud. I specialize in automating IBM Power infrastructures, making them even more robust and agile. I co-authored several IBM Redbooks and IBM Power certifications. I am an active Red Hat Certified Engineer and Instructor.
Follow me on LinkedIn, Twitter and YouTube.
Meet me at events like IBM TechXchange, Common Europe Congress, and GSE Germany’s IBM Power Working group sessions.