I spent last week upgrading Virtual I/O Servers with Ansible.
I wanted to write about it and failed!
The story is always the same, and it sounds easy. You get a task to upgrade Virtual I/O Servers, test it, automate it, and do it. All Virtual I/O Servers already have Python and the Ansible user. What could be easier? Is there anything to write about? So I thought at the beginning of the week. As you can imagine, it was not so easy. Before you upgrade a Virtual I/O Server, there are some prerequisites you must fulfill.
I thought IBM delivered everything I need to automate the upgrade!
Each VIOS installation has a command: viosupgrade. The command has a corresponding Ansible module. You can take the module, pack it into a playbook, and you have automation!
- name: Start viosupgrade
  ibm.power_vios.viosupgrade:
    cluster: false
    post_install_binary: /tmp/postupgrade.sh
    image_file: "/mnt/{{ mksysb }}"
    mksysb_install_disks: "{{ alt_hdisk }}"
    wait_reboot: true
Oh, I see! We need more steps like mounting an NFS share and providing a separate disk for alt_rootvg.
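Mounting the NFS share can itself be a task in the playbook. Here is a minimal sketch using the ansible.posix.mount module; the server name and export path are assumptions, so adjust them to your environment:

- name: Mount the NFS share with the mksysb image
  ansible.posix.mount:
    src: nfsserver:/export/mksysb  # hypothetical NFS server and export
    path: /mnt
    fstype: nfs
    state: mounted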
In reality, we need more. Read the beautiful article by Jaqui Lynch about how she prepares VIOS upgrades and what she does after them.
In my case, the VIOS upgrade failed because of SDDPCM. This time, I will uninstall SDDPCM from the VIOSes automatically.
What is SDDPCM?
Many years (decades?) ago, every storage vendor delivered special multipathing software for AIX. If you had EMC, you had to install PowerPath. If you had Hitachi or one of its clones, you had to install HDLM. If you had IBM, you had to install SDD. The software cost money, sometimes even a lot.
At some point, IBM decided to create a “unified” experience. You probably know the command lspath on AIX. It requires special Path Control Modules (PCMs), which are ODM extensions. At that time, IBM migrated from SDD to SDDPCM on AIX. The only problem is that SDDPCM has not been supported since 2020. All of its functionality is included in the standard AIXPCM, and there is no need for an additional driver or ODM extensions.
If you have storage from other vendors, you don’t have this problem; you can still install ODM extensions for your storage. If you have IBM storage, check that you don’t have the fileset devices.sddpcm.72.rte on your systems. If you do, your automated upgrade procedure will fail.
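A quick way to check for the fileset from a playbook is a lslpp query; this is a sketch, and the variable name sddpcm_check is my own:

- name: Check for the SDDPCM fileset
  ansible.builtin.command:
    cmd: lslpp -L devices.sddpcm.72.rte
  register: sddpcm_check
  failed_when: false
  changed_when: false

- name: Report whether SDDPCM is installed
  ansible.builtin.debug:
    msg: "SDDPCM is {{ 'installed' if sddpcm_check.rc == 0 else 'not installed' }}"

lslpp -L returns a non-zero exit code if the fileset is not installed, so the return code tells us everything we need.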
Removing SDDPCM
You can’t remove SDDPCM if your system has any SDDPCM-managed disks. The first step is to move the disks under the management of AIXPCM. But even before that, we must set the reserve policy no_reserve for all our AIXPCM-managed disks. Otherwise, you will get some funny problems, such as reservation conflicts, on your client LPARs.
- name: Set no_reserve as default for mpioosdisk
  ansible.builtin.command:
    cmd: /usr/ios/utils/rules -o modify -t disk/fcp/mpioosdisk -a reserve_policy=no_reserve
Now, we can change the default drivers for IBM storage.
- name: Change default drivers
  ansible.builtin.command:
    cmd: manage_disk_drivers -d {{ item }} -o AIX_AAPCM
  loop:
    - IBMFlash
    - 2107DS8K
    - IBMSVC
Of course, you can do it in the other order: first change the default drivers, then set the no_reserve policy. These settings become effective only after a reboot, so we must reboot the VIOS:
- name: Reboot the server
  ibm.power_aix.reboot:
    reboot_timeout: 600
After the VIOS is up again, I check the policy one more time, just in case.
- name: Set reserve_policy on all disks
  ansible.builtin.shell:
    cmd: lsdev -t mpioosdisk -F name | while read A ; do chdev -l $A -a reserve_policy=no_reserve -P ; done
The last step is to remove the SDDPCM filesets:
- name: Uninstall SDDPCM filesets
  ibm.power_aix.installp:
    action: deinstall
    install_list:
      - devices.fcp.disk.ibm.mpio.rte
      - devices.sddpcm.72.rte
Finished! Almost…
It worked very well until I got a VIOS with disks in the “Defined” state. They had already been deleted from the storage but were still defined on the VIOS. Of course, they couldn’t be migrated to AIXPCM. You should simply remove them:
- name: Remove disks in Defined state
  ansible.builtin.shell:
    cmd: lsdev -Cc disk -t 2145 -F "name status" | grep -w Defined | while read HD _ ; do rmdev -dl $HD ; done
As you can see, I didn’t do anything fancy with Ansible. I used the same shell one-liners I’d use working from the command line. Why? Because I am lazy.
Want to know more about Virtual I/O Server management with Ansible?
Yes, you get the information in this newsletter for free, and I plan no changes to that! Still, you can have more. I have a special offer for you: an e-mail course called “Managing Virtual I/O Server with Ansible.” You will get more comprehensive how-tos, deep explanations, and practical exercises on managing a Virtual I/O Server with Ansible. After the course, you will be able to manage IBM Power installations of any size with the modern automation tool Ansible, make them repeatable and manageable, and save your time for a better life! Click here and subscribe to the course today! The price is only 49€ till the end of January.
One last thing
I hope you don’t run the playbook on all your Virtual I/O Servers at once. It reboots the server, and in that case, it will reboot all your Virtual I/O Servers at the same time. Either create two separate inventories for the two VIOSes of each pair, or reboot in two passes: one for the first VIOS of each pair and another for the second.
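The two-pass approach can be sketched as two plays in one playbook; the group names vios_first and vios_second are assumptions about how you split your pairs in the inventory:

- name: Reboot the first VIOS of each pair
  hosts: vios_first  # hypothetical inventory group
  tasks:
    - name: Reboot the server
      ibm.power_aix.reboot:
        reboot_timeout: 600

- name: Reboot the second VIOS of each pair
  hosts: vios_second  # hypothetical inventory group
  tasks:
    - name: Reboot the server
      ibm.power_aix.reboot:
        reboot_timeout: 600

This way, the clients never lose both paths at once, because the second VIOS of each pair stays up while its partner reboots.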
What’s next?
Yes, we will upgrade VIOS! Sometime in the future.
Have fun managing VIOS with Ansible!
Andrey
Hi, I am Andrey Klyachkin, IBM Champion and IBM AIX Community Advocate. This means I don’t work for IBM. Over the last twenty years, I have worked with many different IBM Power customers worldwide, both on-premise and in the cloud. I specialize in automating IBM Power infrastructures, making them even more robust and agile. I co-authored several IBM Redbooks and IBM Power certifications. I am an active Red Hat Certified Engineer and Instructor.
Follow me on LinkedIn, Twitter and YouTube.
You can meet me at events like IBM TechXchange, the Common Europe Congress, and GSE Germany’s IBM Power Working Group sessions.