Updating means: "Make the backup first!"
The most boring and time-consuming task must be automated
I have a huge problem. I want to update my Mac, but I fear doing it. Why? When I start my TimeMachine backup, it tells me it will take 4 days. AIX and VIOS backups are much easier and take less time. Thanks to Jaqui Lynch’s tips and tricks, which she publishes on TechChannel, you know what to do. Her tips blow my mind. But there are so many of them! That’s why it is time to automate them. The next several newsletters will be my homage to Jaqui.
If you manage a big environment, it can feel like you are doing updates every single day. You update firmware on the servers, microcode on the adapters, Virtual I/O Servers, AIX servers, Linux servers, and so on. I have more than 100 IBM Power servers to care about. That means more than 200 Virtual I/O Servers and more than 1,000 AIX and Linux LPARs. Divide that by the number of working days per year, and you get more than one update per day. Automated procedures help you run updates unattended, include them in bigger automated processes, or delegate them to the end users and avoid the management overhead of finding a suitable downtime window.
The first step before each update
Before we start an update, we must make a system backup. I am not talking about a full-featured backup of all the data on your server. I mean a system backup like mksysb on AIX, which saves only your operating system and its settings. The basic prerequisite, of course, is that you separate your applications and their data from the operating system. This is rule #1 for every AIX administrator:
Never install any application or save any application data into rootvg!
The same rule applies to Linux, Virtual I/O Server, or any other operating system worldwide. If you follow this rule, your operating system becomes disposable. You can always throw it away, re-install it, or update it, and it will not interfere with your applications.
It also helps you to produce leaner system backups. They take less time and space.
AIX mksysb
The usual way to do system backups on AIX is mksysb. It is a bootable system backup you can use to restore your operating system using NIM. You can create it with the mksysb command, or you can start it from the NIM server by defining a new mksysb resource. I usually don't do the latter because NIM can't parallelize very well across many servers. The standard setting on the NIM server is 20 parallel threads. You can check it with:
# lsnim -l master | grep max_nimesis_threads
You can raise it, but only up to 150. If you try a higher value, NIM rejects it:
# nim -o change -a max_nimesis_threads=1000 master
0042-001 nim: processing error encountered on "master":
0042-319 m_chmaster: The max_nimesis_threads attribute may only be assigned
a value within a range of 20 to 150.
So 20 to 150 is the range of parallel tasks the NIM server can run. If you back up 10 LPARs in parallel, it is not a big problem. If you want to back up more, or you don't know how many backups will run in parallel, you may want to choose another option.
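For completeness, if you do go the NIM route, a mksysb resource that pulls the image from a running client can be defined roughly like this (the resource name, client name, and location below are placeholders):
# nim -o define -t mksysb -a server=master \
      -a location=/export/mksysb/lpar01.mksysb \
      -a source=lpar01 -a mk_image=yes lpar01_mksysb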
That limited parallelism is the reason I prefer starting mksysb directly on the LPARs and writing it to an NFS share.
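Done by hand, that procedure is short; here is a minimal sketch, where the NFS server, mount point, and file name are placeholders:
# mkdir -p /mnt/mksysb
# mount nfsserver:/export/mksysb /mnt/mksysb
# mksysb -i -e /mnt/mksysb/lpar01_mksysb.$(date +%Y-%m-%d)
# umount /mnt/mksysb
The -i flag regenerates image.data, and -e honors /etc/exclude.rootvg (more on that in Step 4). The playbook below automates exactly this pattern.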
VIOS mksysb
Officially, there is no mksysb command on VIOS. Of course, you can run it from oem_setup_env mode. However, the official command for backups on VIOS is called backupios. Similarly, the NIM resource for a VIOS mksysb is called ios_mksysb.
Remember that backupios creates only the mksysb image but doesn't back up your VIOS configuration data. You can restore the VIOS from the mksysb, but all your mappings will be lost. This is why you should do another type of VIOS backup before the mksysb. The command for the configuration backup is called viosbr.
My usual VIOS backup procedure therefore has two steps: first a viosbr backup, then a backupios backup.
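On the padmin command line, the two steps would look roughly like this (the file names and the NFS mount point are placeholders):
$ viosbr -backup -file vios1_config
$ backupios -file /mnt/mksysb/vios1_mksysb -mksysb
With a relative file name, viosbr stores its archive under /home/padmin/cfgbackups; backupios with -mksysb writes a plain mksysb file instead of a nim_resources.tar archive.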
My special case - NIM
All of my mksysbs live on the NIM server. If I need to restore a server, its mksysb is already in place. But I can't back up my NIM server to the NIM server itself. Of course, I could, but it would be pointless: I will not restore my NIM server from the NIM server itself. It is possible, but it takes too many steps. Instead, I create backups of the NIM server on another NFS share, on a separate server that has no dependencies on my IBM Power environment. It is up to you how you back up your NIM server and where you store the files. I just describe my way of doing it ;-)
Step 1. Setting mksysb name
I don't like gather_facts at the beginning of playbooks. It takes time and gathers a lot of data I usually don't need. But I want to create mksysb names based on the current date, and I want to be sure that the date is consistent for all mksysbs made during the run of the playbook. That's why my first task gets the current date on my Ansible control node:
- name: Get current date and time
  ansible.builtin.setup:
    gather_subset: date_time
  delegate_to: localhost
  run_once: true
  become: false
Now I can use the current date in the mksysb name if it was not defined earlier.
- name: Set mksysb file name
  ansible.builtin.set_fact:
    mksysb_name: "{{ inventory_hostname }}_mksysb.{{ ansible_facts.date_time.date }}"
  when: mksysb_name is not defined
Step 2. Check if the playbook runs on VIOS
We have two different procedures, one for AIX and one for VIOS, so we must first check where we are running. The easiest way to detect VIOS is to check for the existence of the command /usr/ios/cli/ioscli. This is the main VIOS command, and it usually exists only on VIOS.
- name: Check if it is VIOS
  ansible.builtin.stat:
    path: /usr/ios/cli/ioscli
  register: ioscli
Step 3. Check if the playbook runs on NIM
Similarly, we can check if the playbook runs on the NIM server by checking for the fileset bos.sysmgt.nim.master. If it is installed, the host is a NIM server.
- name: Check if we run on the NIM server
  ansible.builtin.command:
    cmd: /usr/bin/lslpp -Lqc bos.sysmgt.nim.master
  changed_when: false
  failed_when: false
  register: nimcheck
In my case, I must change the NFS server and NFS share variables if the playbook runs on the NIM server.
- name: Set another NFS server name if we are on NIM
  ansible.builtin.set_fact:
    nfs_server: "{{ nim_nfs_server }}"
  when: nimcheck.rc == 0

- name: Set another NFS share name if we are on NIM
  ansible.builtin.set_fact:
    nfs_share: "{{ nim_nfs_share }}"
  when: nimcheck.rc == 0
Step 4. Copy exclude.rootvg
I don't like the idea of backing up /tmp and other temporary directories. We can use the file /etc/exclude.rootvg to exclude some files from the backup, like:
^./tmp/
^./var/tmp/
A note about VIOS backups: the VIOS command backupios can use /etc/exclude.rootvg if you specify the -exclude option. The Ansible module ibm.power_vios.backupios doesn't support that option yet, so copying the file to VIOS makes no sense right now. This may change in the future, especially if you create a GitHub issue to add the missing functionality.
- name: Copy exclude.rootvg
  ansible.builtin.copy:
    src: exclude.rootvg
    dest: /etc/exclude.rootvg
    owner: root
    group: system
    mode: "0644"
  when: not ioscli.stat.exists
Step 5. Mount the NFS share for backups
I prefer creating a temporary directory for the NFS share inside /tmp. The advantage is that I don't have to care whether specific directories exist or how they are used on the server. Every time the playbook runs, it creates a temporary directory and uses it. Because it is in /tmp, everyone knows it is a temporary mount. The disadvantage is that the temporary directory and the mount are left behind if the playbook fails.
To work around this disadvantage, you can use a block with an always section.
- name: Perform mksysb
  block:
    - name: Create temporary directory
      ansible.builtin.tempfile:
        path: /tmp
        state: directory
        suffix: ".tmp"
      register: __temp_dir

    - name: Mount NFS share for backups
      ibm.power_aix.mount:
        state: mount
        node: "{{ nfs_server }}"
        mount_dir: "{{ nfs_share }}"
        mount_over_dir: "{{ __temp_dir.path }}"

    # ... all other tasks ...

  always:
    - name: Unmount NFS share
      ansible.posix.mount:
        state: unmounted
        path: "{{ __temp_dir.path }}"
        fstab: /dev/null

    - name: Remove temporary directory
      ansible.builtin.file:
        path: "{{ __temp_dir.path }}"
        state: absent
        recurse: false
Step 6. Perform VIOS backup
As I already wrote, I first create a viosbr backup and then the mksysb. There are Ansible modules for both types of backup: ibm.power_vios.viosbr and ibm.power_vios.backupios. They run only on VIOS. Another note: I don't back up ISO image repositories. If you need them, change the parameter savemedialib to true.
- name: Create VIOS viosbr backup
  ibm.power_vios.viosbr:
    action: backup
    file: "/home/padmin/cfgbackups/viosbr.{{ ansible_facts.date_time.date }}"
  when: ioscli.stat.exists

- name: Create mksysb of VIOS
  ibm.power_vios.backupios:
    file: "{{ __temp_dir.path }}/{{ mksysb_name }}"
    mksysb: true
    savemedialib: false
  when: ioscli.stat.exists
Step 7. Perform AIX backup
This is even easier:
- name: Create mksysb of AIX
  ibm.power_aix.backup:
    action: create
    create_data_file: true
    type: mksysb
    location: "{{ __temp_dir.path }}/{{ mksysb_name }}"
  when: not ioscli.stat.exists
Run it!
Now you only need to add the prologue to the playbook, create an inventory with all your servers, and run it! Some time later, you will have mksysbs of all your AIX and VIO servers.
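For reference, a minimal prologue might look like the sketch below. The host group, hostnames, and paths are placeholders; only the variable names (nfs_server, nfs_share, nim_nfs_server, nim_nfs_share) match the tasks above:
---
- name: Create mksysb backups of AIX and VIOS
  hosts: all                              # your AIX/VIOS inventory group
  gather_facts: false                     # facts are gathered selectively in Step 1
  vars:
    nfs_server: nfs.example.com           # placeholder NFS server
    nfs_share: /export/mksysb             # placeholder NFS export
    nim_nfs_server: backup.example.com    # placeholder, used on the NIM server itself
    nim_nfs_share: /export/nim_mksysb     # placeholder
  tasks:
    # ... the tasks from Steps 1 to 7 go here ...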
Add the playbook to the Ansible Automation Platform and schedule it to run regularly.
Have fun creating backups!
Andrey
Hi, I am Andrey Klyachkin, IBM Champion and IBM AIX Community Advocate. This means I don’t work for IBM. Over the last twenty years, I have worked with many different IBM Power customers all over the world, both on-premise and in the cloud. I specialize in automating IBM Power infrastructures, making them even more robust and agile. I co-authored several IBM Redbooks and IBM Power certifications. I am an active Red Hat Certified Engineer and Instructor.
Follow me on LinkedIn, Twitter and YouTube.
You can meet me at events like IBM TechXchange, the Common Europe Congress, and GSE Germany’s IBM Power Working Group sessions.