For a client, after installing the GI 12.2 and RDBMS 12.1/12.2 software and creating several RAC databases, I needed to apply the latest PSU patch, which is “Patch 27100009 – Grid Infrastructure Jan 2018 Release Update (GI RU) 12.2.0.1.180116”. During this process, I encountered several issues and learned some tricks worth noting for future reference.
First of all, we really should download the latest released OPatch utility. Although the patch readme says “You must use the OPatch utility version 12.2.0.1.6 or later to apply this patch for all platforms”, that does not mean OPatch 12.2.0.1.6 won’t give you trouble. In fact, with OPatch 12.2.0.1.6 I got the following error and the patch failed:
OPATCHAUTO-68021: The following argument(s) are required: [-wallet]
Based on the MOS note “Creation of opatchauto wallet in 12.2 in 12.2.0.1.8 (Doc ID 2270185.1)”, it turned out that opatchauto in 12.2 requires the creation of a wallet file with a password for the owner of the grid software on all nodes. However, the latest versions of OPatch (12.2.0.1.9 or higher) do not require the wallet as a mandatory parameter for opatchauto. So, to save the trouble of creating a wallet, I went ahead and downloaded the latest OPatch, version 12.2.0.1.10.
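For reference, replacing OPatch is just a matter of unzipping the OPatch download (patch 6880880 on My Oracle Support) into each home to be patched, on every node. The home path below is the GI home from this environment, and the zip file name is a placeholder for your platform’s file:
cd /grid/app/12.2.0/grid
mv OPatch OPatch.bak
unzip -q /tmp/p6880880_122010_<platform>.zip
OPatch/opatch version
The same should be repeated for the RDBMS home.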
Secondly, it appeared that opatchauto depends on the inventory.xml file, so we had better make sure it contains only the homes we want to patch. In my case, those are the GI 12.2 home and the RDBMS 12.2 home. This file is located in the [inventory_loc]/ContentsXML directory, where [inventory_loc] can be found in the file /var/opt/oracle/oraInst.loc (Solaris) or /etc/oraInst.loc (Linux). Below are two examples of removing unwanted homes:
/grid/app/12.2.0/grid/oui/bin/runInstaller -silent -detachHome ORACLE_HOME="/opt/oracle/oraclex/product/12.2.0/db_1"
/grid/app/12.2.0/grid/oui/bin/runInstaller -silent -detachHome ORACLE_HOME="/opt/oracle/agent/agent_13.2.0.0.0"
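As a quick sanity check, the registered homes can be listed as shown below (the inventory location /grid/app/oraInventory is just an assumed example; use the inventory_loc value reported by the first command):
cat /var/opt/oracle/oraInst.loc
grep "HOME NAME" /grid/app/oraInventory/ContentsXML/inventory.xml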
It is also necessary to make sure that the node list for a home is correct in the inventory.xml file, i.e., the node list section should not be empty. The syntax to update the node list:
/grid/app/12.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/grid/app/12.2.0/grid "CLUSTER_NODES={node1,node2}"
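The node names in CLUSTER_NODES are placeholders; substitute the actual cluster node names, e.g. {rac1,rac2}. To verify the result, grep the node entries in the inventory file (inventory path assumed as in the example above):
grep -i "node name" /grid/app/oraInventory/ContentsXML/inventory.xml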
Lastly, we may have to take care of ACFS file systems, depending on the environment. In my environment, the GI and RDBMS homes are not shared, but there are ACFS file systems. Based on “Supplemental Readme – Patch Installation and Deinstallation for 12.1.0.x.x GI PSU and Database Proactive Bundle Patch (Doc ID 1591616.1)”, I had to unmount the ACFS file systems first, then start opatchauto to patch the GI and RDBMS homes together, one node at a time.
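For reference, once the ACFS file systems described below are unmounted, the apply itself is a single opatchauto call run as root on each node in turn; the patch staging directory /u01/patches is an assumption:
# export PATH=$PATH:/grid/app/12.2.0/grid/OPatch
# opatchauto apply /u01/patches/27100009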
To unmount the ACFS file systems, we can proceed with the following steps:
1. Execute the following command to find the names of the CRS-managed ACFS file system resources.
# crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
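As a cross-check, the acfsutil utility lists the ACFS file systems currently mounted on the node (the path to acfsutil may differ by platform):
# /sbin/acfsutil info fs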
2. Execute the following command to stop the CRS-managed ACFS file system resource, using the volume device name found in Step 1.
As the root user, execute:
# srvctl stop filesystem -d <volume_device_name> -n <node_name>
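Both arguments come from the volume device values returned in Step 1. A purely hypothetical example, with the device and node names made up:
# srvctl stop filesystem -d /dev/asm/datavol-123 -n racnode1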
What would happen if I had not stopped the ACFS file systems? You would get error messages like the ones below:
2018/01/31 22:39:42 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/01/31 22:40:27 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/01/31 22:40:32 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
2018/01/31 22:40:44 CLSRSC-205: Failed to uninstall ADVM/ACFS
After fixing the cause of failure Run opatchauto resume
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.
OPatchauto session completed at Wed Jan 31 22:40:44 2018
Time taken to complete the session 27 minutes, 3 seconds
opatchauto failed with error code 42
To fix this, I went through the following steps:
1. As root, disabled CRS: crsctl disable crs.
2. Rebooted the node to clear the device-busy condition.
3. Once the node came back, enabled CRS: crsctl enable crs.
4. Performed “opatchauto resume”.
5. Checked CRS and found “ora.mgmtdb” offline; ran “srvctl start mgmtdb” to start it up.
(Note to self: the Management Database is administered through the “srvctl start/stop/status mgmtdb” commands.)
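For reference, the status check and restart use exactly the commands from the note above; the start command is only needed when the status output shows the database is not running:
srvctl status mgmtdb
srvctl start mgmtdb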
It was a good journey to be exposed to the latest PSU patching!
Thanks a lot.