Saturday, November 14, 2020

Zero-Downtime patching Oracle Grid Infrastructure 19c on Linux

 In the past, when patching Grid Infrastructure (GI) in rolling mode, the database instances on the node being patched had to be shut down.

Starting with Oracle Database 19c Release Update (RU) 19.8, Oracle announced that Oracle RAC database instances can keep running and remain accessible to database users during Oracle Grid Infrastructure patching. My patching practice was done in the following environment

   * Two-node Grid Infrastructure 19.8 running on Oracle Linux 7 Update 8
   * Host names of the nodes are rac01.lab.dbaplus.ca and rac02.lab.dbaplus.ca
   * GI RU 19.9.0.0.201020 and OJVM RU 19.9.0.0.201020 are applied in out-of-place (OOP) mode
   * ACFS and AFD are being used
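
Before starting, it is worth confirming that ACFS and AFD are really in use, because that decides whether the driver-update downtime in step 4 applies to you. A hedged sketch that only prints the check commands for review (GRID_HOME is this lab's current 19.8 GI home, an assumption to adjust for your cluster):

```shell
# Print (not execute) the commands that verify AFD and ACFS usage.
# GRID_HOME is an assumption from this environment; adjust to yours.
GRID_HOME=/u01/app/19.8.0/grid
echo "$GRID_HOME/bin/asmcmd afd_state      # run as grid: is AFD loaded and filtering?"
echo "$GRID_HOME/bin/acfsutil registry     # run as root: registered ACFS file systems"
```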
   
1. Create new Oracle Grid Infrastructure (GI) 19c home and prepare Oracle GI Release Update (RU) 19.9 for patching

As the root user on all nodes, create directories for the new GI home
[root@rac01]# mkdir -p /u01/app/19.9.0/grid
[root@rac01]# chown -R grid:oinstall /u01/app/19.9.0

As the grid user on the first node, download the Oracle Grid Infrastructure 19c image file and extract it into the newly created GI home
[grid@rac01]$ cd /u01/app/19.9.0/grid
[grid@rac01]$ unzip -q /u01/media/LINUX.X64_193000_grid_home.zip

As the grid user on the first node, download and install the latest version of OPatch (12.2.0.1.21) into the new GI home
[grid@rac01]$ cd /u01/app/19.9.0/grid
[grid@rac01]$ mv OPatch OPatch.old
[grid@rac01]$ unzip -q /u01/media/p6880880_122010_Linux-x86-64_12.2.0.1.21.zip
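
Patch READMEs state a minimum required OPatch version, so it is worth comparing versions before patching. A minimal sketch of that comparison — ver_ge is my own helper, not an Oracle tool, and 12.2.0.1.17 below is only an assumed minimum; feed ver_ge the version that "opatch version" actually reports:

```shell
# ver_ge VER MIN: succeed when dot-separated version VER >= MIN
# (GNU "sort -V" orders version strings correctly).
ver_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: the OPatch just installed vs. an assumed required minimum.
if ver_ge "12.2.0.1.21" "12.2.0.1.17"; then
  echo "OPatch is new enough"
else
  echo "OPatch must be upgraded first"
fi
```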

As the grid user on the first node, download Oracle GI RU 19.9.0.0.201020 and extract the files into a staging directory
[grid@rac01]$ mkdir /u01/stage/RU
[grid@rac01]$ cd /u01/stage/RU
[grid@rac01]$ unzip -q /u01/media/p31750108_190000_Linux-x86-64.zip

As the grid user on the first node, download Oracle JavaVM Component Release Update (OJVM RU) 19.9.0.0.201020 and extract the files into a staging directory
[grid@rac01]$ mkdir /u01/stage/OJVM
[grid@rac01]$ cd /u01/stage/OJVM
[grid@rac01]$ unzip -q /u01/media/p31668882_190000_Linux-x86-64.zip

2. Software-Only installing GI with RU applying

A GI software-only installation is optional; it minimizes my waiting time during the maintenance window because the patched home is already staged.

Starting with Oracle Grid Infrastructure 18c, you can download and apply Release Updates (RUs) and one-off patches during an Oracle GI installation or upgrade with the following gridSetup.sh options

   gridSetup.sh -applyRU <patch_directory_location> -applyOneOffs <comma_separated_list_of_patch_directory_locations>

As the grid user on the first node, start the GI installation, applying the GI RU and the OJVM RU (the latter as a one-off patch)
[grid@rac01]$ cd /u01/app/19.9.0/grid
[grid@rac01]$ ./gridSetup.sh -applyRU /u01/stage/RU/31750108 -applyOneOffs /u01/stage/OJVM/31668882
Preparing the home to patch...
Applying the patch /u01/stage/RU/31750108...
Successfully applied the patch.
Applying the patch /u01/stage/OJVM/31668882...
Successfully applied the patch.
The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2020-11-04_11-33-11AM/installerPatchActions_2020-11-04_11-33-11AM.log
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/app/19.9.0/grid/install/response/grid_2020-11-04_11-33-11AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2020-11-04_11-33-11AM/gridSetupActions2020-11-04_11-33-11AM.log

After successfully applying GI RU 19.9.0.0.201020 and OJVM RU 19.9.0.0.201020, gridSetup.sh starts the graphical installation interface. Respond to the prompts as follows,

  * In the 'Select Configuration Option' screen, select the 'Set Up Software Only' option to perform a software-only installation of Oracle Grid Infrastructure. Click Next.
  * In the 'Cluster Node Information' screen, click the Add button to add the public host names of all cluster nodes (rac01.lab.dbaplus.ca & rac02.lab.dbaplus.ca). Click Next.
  * Respond to the prompts as needed to set up Oracle Grid Infrastructure
  * The Oracle Grid Infrastructure setup wizard prompts you to run the root.sh script [on each node].

Example of root.sh execution
[root@rac01]# /u01/app/19.9.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/19.9.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

To configure Grid Infrastructure for a Cluster execute the following command as grid user:
/u01/app/19.9.0/grid/gridSetup.sh
This command launches the Grid Infrastructure Setup Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.

The installer installed the new GI home on all nodes, with GI RU 19.9.0.0.201020 and OJVM RU 19.9.0.0.201020 already applied to it.
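
Before switching homes, I like to double-check the patch inventory of the new home on every node. A hedged sketch that prints the checks for review rather than executing them (node names and home path are from this lab):

```shell
# Print the per-node inventory check; "opatch lspatches -oh <home>"
# lists the patches applied to that home.
GRID_HOME=/u01/app/19.9.0/grid
for node in rac01.lab.dbaplus.ca rac02.lab.dbaplus.ca; do
  echo "ssh grid@$node $GRID_HOME/OPatch/opatch lspatches -oh $GRID_HOME"
done
```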

3. Switching the GI Home without shutting down database instances

As the grid user on the first node, execute gridSetup.sh with the following option to switch the GI home

   gridSetup.sh -switchGridHome

If you did not perform the software-only installation (step 2), run gridSetup.sh with the following options to switch the GI home

   gridSetup.sh -switchGridHome -applyRU <patch_directory_location> -applyOneOffs <comma_separated_list_of_patch_directory_locations>

Switch GI home to new patched home
[grid@rac01]$ /u01/app/19.9.0/grid/gridSetup.sh -switchGridHome
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2020-11-05_11-05-00AM.log

Follow the steps in the configuration wizard to complete the Oracle Grid Infrastructure installation.

Note: During configuration, do NOT select the option to Automatically run configuration scripts.
   
When asked to run root.sh, run it on the first node with the -transparent and -nodriverupdate flags, as shown below.

To watch the progress of the zero-downtime patch, open two new terminals before executing root.sh, as follows

  * Terminal 1 (T1): connect to the first node rac01 as the grid user. While root.sh is running on the first node, keep running the following commands to monitor the Oracle processes and database instances

      ps -ef | grep 'd.bin' | grep -v grep
      ps -ef | grep pmon | grep -v grep

    The root.sh script takes a while to complete. To keep running the above commands, I created a script, process.sh, that repeats them every 2 seconds; it can be terminated by pressing Ctrl+C, or it exits on its own after about one hour (root.sh should not take that long — it took me about 20 minutes). The source code of the script can be found at the bottom of this post.

  * Terminal 2 (T2): connect to the second node rac02 as the grid user and log into the ASM instance with SQL*Plus. While root.sh is running on the first node, keep running the following SQL statement to monitor the ASM clients connected to the ASM instance (+ASM2) running on the second node rac02,

      select instance_name,count(*) from v$asm_client group by instance_name order by 1;

    I created another script, asmclient.sh, to run the SQL every second.

3.1 Run root.sh on first node

On the first node, as the root user, run 'root.sh -transparent -nodriverupdate'
[root@rac01]# /u01/app/19.9.0/grid/root.sh -transparent -nodriverupdate
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/19.9.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
LD_LIBRARY_PATH='/u01/app/19.8.0/grid/lib:/u01/app/19.9.0/grid/lib:'
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac01/crsconfig/rootcrs_rac01_2020-11-05_11-14-18AM.log
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac01/crsconfig/rootcrs_rac01_2020-11-05_11-14-18AM.log
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac01/crsconfig/crs_prepatch_apply_oop_rac01_2020-11-05_11-14-18AM.log
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac01/crsconfig/crs_prepatch_apply_oop_rac01_2020-11-05_11-14-18AM.log
2020/11/05 11:14:26 CLSRSC-347: Successfully unlock /u01/app/19.9.0/grid
2020/11/05 11:14:27 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac01/crsconfig/crs_postpatch_apply_oop_rac01_2020-11-05_11-14-27AM.log
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [481622232].
2020/11/05 11:14:44 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd_dummy.service'
2020/11/05 11:27:19 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [481622232].
2020/11/05 11:28:46 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2020/11/05 11:28:47 CLSRSC-672: Post-patch steps for patching GI home successfully completed.

On T1, start process.sh before (or while) starting root.sh on the first node; the script keeps running for about one hour unless you press Ctrl+C. The output is as follows
================== Before root.sh started =====================
root     27131     1  0 Sep10 ?        05:56:43 /u01/app/19.8.0/grid/bin/ohasd.bin reboot _ORA_BLOCKING_STACK_LOCALE=AMERICAN_AMERICA.AL32UTF8
grid     27478     1  0 Sep10 ?        01:53:53 /u01/app/19.8.0/grid/bin/mdnsd.bin
grid     27480     1  0 Sep10 ?        04:55:13 /u01/app/19.8.0/grid/bin/evmd.bin
grid     27520     1  0 Sep10 ?        02:09:43 /u01/app/19.8.0/grid/bin/gpnpd.bin
grid     27584     1  0 Sep10 ?        05:54:32 /u01/app/19.8.0/grid/bin/gipcd.bin
root     27640     1  4 Sep10 ?        2-16:05:32 /u01/app/19.8.0/grid/bin/osysmond.bin
grid     27671     1  0 Sep10 ?        06:40:33 /u01/app/19.8.0/grid/bin/ocssd.bin
root     28227     1  0 Sep10 ?        04:58:39 /u01/app/19.8.0/grid/bin/octssd.bin reboot
root     28296     1  0 Sep10 ?        05:15:10 /u01/app/19.8.0/grid/bin/crsd.bin reboot

oracle     902     1  0 Sep25 ?        00:06:07 ora_pmon_DB19C01_2
oracle   19865     1  0 Sep17 ?        00:07:23 ora_pmon_DB19C02_2
oracle   22872     1  0 Sep10 ?        00:07:45 ora_pmon_DB12C01_2
grid     28754     1  0 Sep10 ?        00:04:27 asm_pmon_+ASM1
grid     28997     1  0 Sep10 ?        00:03:55 apx_pmon_+APX1
grid     30193     1  0 Sep10 ?        00:04:03 mdb_pmon_-MGMTDB

================== After root.sh started and ran a while =====================
root     21092     1  0 11:27 ?        00:00:00 /u01/app/grid/crsdata/rac01/csswd/oracsswd.bin
-----------------------------------------------
oracle     902     1  0 Sep25 ?        00:06:07 ora_pmon_DB19C01_2
oracle   19865     1  0 Sep17 ?        00:07:23 ora_pmon_DB19C02_2

================== After root.sh completed running =====================
root     21586     1  5 11:27 ?        00:00:03 /u01/app/19.9.0/grid/bin/ohasd.bin reboot CRS_AUX_DATA=CRS_AUXD_TGIP=yes;_ORA_BLOCKING_STACK_LOCALE=AMERICAN_AMERICA.AL32UTF8
grid     21764     1  0 11:27 ?        00:00:00 /u01/app/19.9.0/grid/bin/gpnpd.bin
grid     21766     1  0 11:27 ?        00:00:00 /u01/app/19.9.0/grid/bin/mdnsd.bin
grid     21768     1  2 11:27 ?        00:00:01 /u01/app/19.9.0/grid/bin/evmd.bin
grid     21945     1  1 11:27 ?        00:00:00 /u01/app/19.9.0/grid/bin/gipcd.bin
grid     22064     1  4 11:27 ?        00:00:02 /u01/app/19.9.0/grid/bin/ocssd.bin -P
root     22227     1  0 11:27 ?        00:00:00 /u01/app/19.9.0/grid/bin/octssd.bin reboot
root     22300     1  8 11:27 ?        00:00:03 /u01/app/19.9.0/grid/bin/crsd.bin reboot
root     22319     1  4 11:27 ?        00:00:01 /u01/app/19.9.0/grid/bin/osysmond.bin
-----------------------------------------------
oracle     902     1  0 Sep25 ?        00:06:07 ora_pmon_DB19C01_2
oracle   19865     1  0 Sep17 ?        00:07:23 ora_pmon_DB19C02_2
grid     22748     1  0 11:27 ?        00:00:00 apx_pmon_+APX1
grid     23220     1  0 11:28 ?        00:00:00 asm_pmon_+ASM1
oracle   23546     1  0 11:28 ?        00:00:00 ora_pmon_DB12C01_2

The root.sh script shuts down the old-version database instance (DB12C01 is version 12.2.0.1), the ASM instance, and the CRS stack running from the old GI home, and starts a watchdog process (oracsswd.bin) as a dummy CRS service from the crsdata directory (neither the old nor the new GI home). It then brings up the CRS stack from the new GI home and shuts down the watchdog process. Finally, the ASM instance and all database instances are brought back up.

How do the running 19c database instances access ASM storage while the ASM instance is down on the local node? The answer: the ASM instance on the remote node takes over.

On T2, start asmclient.sh before (or while) starting root.sh on the first node; the script keeps running for about half an hour unless you press Ctrl+C. The output is as follows
================== Before root.sh started on first node =====================
  +APX2         1
  +ASM2         1
  DB19C01_1     4
  DB12C01_1     4
  DB19C02_1     4
  rac02.lab.dbaplus.ca      1

================== After root.sh started and ASM instance is being shut down on first node =====================
  +APX2         1
  +ASM2         1
  DB19C01_1     4
  DB19C01_2     4
  DB12C01_1     4
  DB19C02_1     4
  DB19C02_2     4
  rac02.lab.dbaplus.ca      1

When the ASM instance (+ASM1) is being shut down on the first node (rac01), the 19c database instances running on this node (DB19C01_2, DB19C02_2) are redirected to the ASM instance (+ASM2) running on the remote node (rac02). That is why the database instances can still access ASM storage while the local ASM instance is down.

3.2 Run root.sh on the second node (and any remaining nodes in clusters with more than two nodes)

On the second node, as the root user, run 'root.sh -transparent -nodriverupdate'
[root@rac02]# /u01/app/19.9.0/grid/root.sh -transparent -nodriverupdate
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/19.9.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
LD_LIBRARY_PATH='/u01/app/19.0.0/grid/lib:/u01/app/19.9.0/grid/lib:'
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac02/crsconfig/rootcrs_rac02_2020-11-05_12-33-56AM.log
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac02/crsconfig/rootcrs_rac02_2020-11-05_12-33-56AM.log
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac02/crsconfig/crs_prepatch_apply_oop_rac02_2020-11-05_12-33-57AM.log
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac02/crsconfig/crs_prepatch_apply_oop_rac02_2020-11-05_12-33-57AM.log
2020/11/05 12:34:03 CLSRSC-347: Successfully unlock /u01/app/19.9.0/grid
2020/11/05 12:34:04 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac02/crsconfig/crs_postpatch_apply_oop_rac02_2020-11-05_12-34-04AM.log
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [481622232].
2020/11/05 12:34:17 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd_dummy.service'
2020/11/05 12:47:01 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2862754030].
SQL Patching tool version 19.9.0.0.0 Production on Thu Nov  5 12:49:37 2020
Copyright (c) 2012, 2020, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/grid/cfgtoollogs/sqlpatch/sqlpatch_21650_2020_11_05_12_49_37/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
Interim patch 31219897 (OJVM RELEASE UPDATE: 19.8.0.0.200714 (31219897)):
  Binary registry: Not installed
  PDB CDB$ROOT: Applied successfully on 10-SEP-20 07.05.22.344726 PM
  PDB GIMR_DSCREP_10: Applied successfully on 10-SEP-20 07.05.26.197751 PM
  PDB PDB$SEED: Applied successfully on 10-SEP-20 07.05.24.262950 PM
Interim patch 31668882 (OJVM RELEASE UPDATE: 19.9.0.0.201020 (31668882)):
  Binary registry: Installed
  PDB CDB$ROOT: Not installed
  PDB GIMR_DSCREP_10: Not installed
  PDB PDB$SEED: Not installed

Current state of release update SQL patches:
  Binary registry:
    19.9.0.0.0 Release_Update 200930183249: Installed
  PDB CDB$ROOT:
    Applied 19.8.0.0.0 Release_Update 200703031501 successfully on 10-SEP-20 07.05.22.342186 PM
  PDB GIMR_DSCREP_10:
    Applied 19.8.0.0.0 Release_Update 200703031501 successfully on 10-SEP-20 07.05.26.195906 PM
  PDB PDB$SEED:
    Applied 19.8.0.0.0 Release_Update 200703031501 successfully on 10-SEP-20 07.05.24.260687 PM

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED GIMR_DSCREP_10
    The following interim patches will be rolled back:
      31219897 (OJVM RELEASE UPDATE: 19.8.0.0.200714 (31219897))
    Patch 31771877 (Database Release Update : 19.9.0.0.201020 (31771877)):
      Apply from 19.8.0.0.0 Release_Update 200703031501 to 19.9.0.0.0 Release_Update 200930183249
    The following interim patches will be applied:
      31668882 (OJVM RELEASE UPDATE: 19.9.0.0.201020 (31668882))

Installing patches...
Patch installation complete.  Total patches installed: 9

Validating logfiles...done
Patch 31219897 rollback (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31219897/23619699/31219897_rollback__MGMTDB_CDBROOT_2020Nov05_12_50_01.log (no errors)
Patch 31771877 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31771877/23869227/31771877_apply__MGMTDB_CDBROOT_2020Nov05_12_50_01.log (no errors)
Patch 31668882 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31668882/23790068/31668882_apply__MGMTDB_CDBROOT_2020Nov05_12_50_01.log (no errors)
Patch 31219897 rollback (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31219897/23619699/31219897_rollback__MGMTDB_PDBSEED_2020Nov05_12_50_40.log (no errors)
Patch 31771877 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31771877/23869227/31771877_apply__MGMTDB_PDBSEED_2020Nov05_12_50_40.log (no errors)
Patch 31668882 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31668882/23790068/31668882_apply__MGMTDB_PDBSEED_2020Nov05_12_50_40.log (no errors)
Patch 31219897 rollback (pdb GIMR_DSCREP_10): SUCCESS
  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31219897/23619699/31219897_rollback__MGMTDB_GIMR_DSCREP_10_2020Nov05_12_50_39.log (no errors)
Patch 31771877 apply (pdb GIMR_DSCREP_10): SUCCESS
  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31771877/23869227/31771877_apply__MGMTDB_GIMR_DSCREP_10_2020Nov05_12_50_40.log (no errors)
Patch 31668882 apply (pdb GIMR_DSCREP_10): SUCCESS
  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31668882/23790068/31668882_apply__MGMTDB_GIMR_DSCREP_10_2020Nov05_12_50_40.log (no errors)
SQL Patching tool complete on Thu Nov  5 12:51:05 2020
2020/11/05 12:51:31 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2020/11/05 12:51:33 CLSRSC-672: Post-patch steps for patching GI home successfully completed.

After root.sh has run successfully on all nodes, all Oracle Grid Infrastructure services run from the new Grid home.

Zero-downtime GI patching would be complete at this point if no ACFS file system had been created and ASM storage were not using AFD. Unfortunately, both are used here, so the next step is required.

4. Update operating system drivers for ACFS and/or AFD

As the grid user, validate the status of the Oracle drivers (ACFS & AFD) with 'crsctl query driver'
[grid@rac02]$ /u01/app/19.9.0/grid/bin/crsctl query driver activeversion -all
Node Name : rac01
Driver Name : ACFS
BuildNumber : 200626
BuildVersion : 19.0.0.0.0 (19.8.0.0.0)

Node Name : rac01
Driver Name : AFD
BuildNumber : 200626
BuildVersion : 19.0.0.0.0 (19.8.0.0.0)

Node Name : rac02
Driver Name : ACFS
BuildNumber : 200626
BuildVersion : 19.0.0.0.0 (19.8.0.0.0)

Node Name : rac02
Driver Name : AFD
BuildNumber : 200626
BuildVersion : 19.0.0.0.0 (19.8.0.0.0)

[grid@rac02]$ /u01/app/19.9.0/grid/bin/crsctl query driver softwareversion -all
Node Name : rac01
Driver Name : ACFS
BuildNumber : 200813.1
BuildVersion : 19.0.0.0.0 (19.9.0.0.0)

Node Name : rac01
Driver Name : AFD
BuildNumber : 200813.1
BuildVersion : 19.0.0.0.0 (19.9.0.0.0)

Node Name : rac02
Driver Name : ACFS
BuildNumber : 200813.1
BuildVersion : 19.0.0.0.0 (19.9.0.0.0)

Node Name : rac02
Driver Name : AFD
BuildNumber : 200813.1
BuildVersion : 19.0.0.0.0 (19.9.0.0.0)

The ACFS & AFD drivers are installed and in use. The driver software was updated to the new version on both nodes, but the active drivers are still the old version. The new drivers cannot be activated at this point because the '-nodriverupdate' flag made root.sh skip updating the operating system drivers for ACFS & AFD.

To activate the new driver version, run 'rootcrs.sh -updateosfiles' on each cluster node and restart the node. Oops, where did zero-downtime go? At least this downtime can be postponed to a planned OS maintenance window. Or it may be better not to use ACFS & AFD at all.
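
The remaining driver update can be scripted for that maintenance window. A hedged sketch that only prints the per-node commands for review — each node needs a reboot after the update, so do one node at a time (paths and node names are from this lab; run the printed commands as root):

```shell
# Print (not execute) the driver-update sequence per node.
GRID_HOME=/u01/app/19.9.0/grid
for node in rac01 rac02; do
  echo "ssh root@$node $GRID_HOME/crs/install/rootcrs.sh -updateosfiles"
  echo "ssh root@$node shutdown -r now   # reboot before moving to the next node"
done
```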

Source code of process.sh
#!/bin/bash
# Repeat the monitoring commands every 2 seconds, up to 1800 times (~1 hour).
for i in {1..1800}
do
  echo
  echo "================== `date` ====================="
  # Clusterware daemons (ohasd.bin, crsd.bin, ocssd.bin, ...)
  ps -ef | grep 'd\.bin' | grep -v grep
  echo "-----------------------------------------------"
  # Instance background processes (one pmon per ASM/database instance)
  ps -ef | grep pmon | grep -v grep
  sleep 2
done

Source code of asmclient.sh
#!/bin/bash
# Find the local ASM instance's pmon process to derive ORACLE_SID.
F_TMP=`ps -ef | grep asm_pmon_+ASM | grep -v grep`
F_PID=`echo $F_TMP | awk -F' ' '{print $2}'`
export ORACLE_SID=`echo $F_TMP | awk -F'_' '{ print $NF }'`
# pmon's working directory is $ORACLE_HOME/dbs; strip the suffix to get the home.
export ORACLE_HOME=`pwdx $F_PID | awk -F' ' '{print $NF}' | sed 's/\/dbs\/*$//'`
for i in {1..1800}
do
echo "
  ================== `date` ====================="
  $ORACLE_HOME/bin/sqlplus -S / as sysdba <<EOF
     set head off
     set feed off
     set pagesize 999
     col instance_name for a25
     select instance_name,count(*) from v\$asm_client group by instance_name order by 1;
EOF
  sleep 1
done

Saturday, November 7, 2020

Zero-Downtime Grid Infrastructure Patching root.sh failed with CLSRSC-675: Cluster-wide Grid Plug and Play profile not found

 When running root.sh with the zero-downtime GI patching options -transparent and -nodriverupdate, I got the following errors
2020/11/05 11:55:01 CLSRSC-675: Cluster-wide Grid Plug and Play profile not found on this node.
Died at /u01/app/19.9.0/grid/crs/install/crsgpnp.pm line 3436.
The command '/u01/app/19.9.0/grid/perl/bin/perl -I/u01/app/19.9.0/grid/perl/lib -I/u01/app/19.9.0/grid/crs/install /u01/app/19.9.0/grid/srvm/admin/rhpmovegi.pl /u01/app/19.0.0/grid /u01/app/19.9.0/grid true' execution failed
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac02/crsconfig/crs_prepatch_apply_oop_rac02_2020-11-05_11-55-01AM.log
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac02/crsconfig/crs_prepatch_apply_oop_rac02_2020-11-05_11-55-01AM.log
2020/11/05 11:55:01 CLSRSC-675: Cluster-wide Grid Plug and Play profile not found on this node.
Died at /u01/app/19.9.0/grid/crs/install/crsgpnp.pm line 3436.
The command '/u01/app/19.9.0/grid/perl/bin/perl -I/u01/app/19.9.0/grid/perl/lib -I/u01/app/19.9.0/grid/crs/install /u01/app/19.9.0/grid/crs/install/rootcrs.pl  -transparent -nodriverupdate -dstcrshome /u01/app/19.9.0/grid -prepatch' execution failed

Check the crsconfig session logs for the root cause
[grid@rac02]$ tail /u01/app/grid/crsdata/rac02/crsconfig/rootcrs_rac02_2020-11-05_11-55-01AM.log
2020-11-05 11:55:01: Checking if GPnP setup exists at the source dir /u01/app/19.0.0/grid
2020-11-05 11:55:01: GPnP directories are /u01/app/19.0.0/grid/gpnp/profiles /u01/app/19.0.0/grid/gpnp/wallets /u01/app/19.0.0/grid/gpnp/profiles/peer /u01/app/19.0.0/grid/gpnp/wallets/root /u01/app/19.0.0/grid/gpnp/wallets/pa /u01/app/19.0.0/grid/gpnp/wallets/peer /u01/app/19.0.0/grid/gpnp/wallets/prdr
2020-11-05 11:55:01: GPnP files are /u01/app/19.0.0/grid/gpnp/wallets/root/ewallet.p12 /u01/app/19.0.0/grid/gpnp/wallets/peer/cwallet.sso /u01/app/19.0.0/grid/gpnp/wallets/prdr/cwallet.sso /u01/app/19.0.0/grid/gpnp/wallets/pa/cwallet.sso /u01/app/19.0.0/grid/gpnp/profiles/peer/profile.xml
2020-11-05 11:55:01: GPnP file /u01/app/19.0.0/grid/gpnp/wallets/root/ewallet.p12 is either not present or of zero size
2020-11-05 11:55:01: Executing cmd: /u01/app/19.0.0/grid/bin/clsecho -p has -f clsrsc -m 675
2020-11-05 11:55:01: Executing cmd: /u01/app/19.0.0/grid/bin/clsecho -p has -f clsrsc -m 675
2020-11-05 11:55:01: Command output:
>  CLSRSC-675: Cluster-wide Grid Plug and Play profile not found on this node. 
>End Command output
2020-11-05 11:55:01: CLSRSC-675: Cluster-wide Grid Plug and Play profile not found on this node.

It complains that the following GPnP files are missing on the current node

  /u01/app/19.0.0/grid/gpnp/wallets/root/ewallet.p12
  /u01/app/19.0.0/grid/gpnp/wallets/peer/cwallet.sso
  /u01/app/19.0.0/grid/gpnp/wallets/prdr/cwallet.sso
  /u01/app/19.0.0/grid/gpnp/wallets/pa/cwallet.sso
  /u01/app/19.0.0/grid/gpnp/profiles/peer/profile.xml

Confirm that the files are missing on the current node
[grid@rac02]$ ls -l /u01/app/19.0.0/grid/gpnp/wallets/root
total 0
[grid@rac02]$ ls -l /u01/app/19.0.0/grid/gpnp/wallets/peer
total 0
[grid@rac02]$ ls -l /u01/app/19.0.0/grid/gpnp/wallets/prdr
total 0
[grid@rac02]$ ls -l /u01/app/19.0.0/grid/gpnp/wallets/pa
total 0
[grid@rac02]$ ls -l /u01/app/19.0.0/grid/gpnp/profiles/peer
total 0

Check whether these files exist on the other node
[grid@rac01]$ ls -l /u01/app/19.0.0/grid/gpnp/wallets/root
total 4
-rwx------ 1 grid oinstall 2408 Sep 10 16:29 ewallet.p12
[grid@rac01]$ ls -l /u01/app/19.0.0/grid/gpnp/wallets/peer
total 4
-rwx------ 1 grid oinstall 2933 Sep 10 16:29 cwallet.sso
[grid@rac01]$ ls -l /u01/app/19.0.0/grid/gpnp/wallets/prdr
total 4
-rwxr-x--- 1 grid oinstall 1725 Sep 10 16:29 cwallet.sso
[grid@rac01]$ ls -l /u01/app/19.0.0/grid/gpnp/wallets/pa
total 4
-rwx------ 1 grid oinstall 2933 Sep 10 16:29 cwallet.sso
[grid@rac01]$ ls -l /u01/app/19.0.0/grid/gpnp/profiles/peer
total 12
-rw-r--r-- 1 grid oinstall 2153 Sep 10 16:29 profile.old
-rw-r--r-- 1 grid oinstall 2094 Sep 10 16:29 profile_orig.xml
-rw-r--r-- 1 grid oinstall 2153 Sep 10 16:29 profile.xml

Copy the files from the remote node rac01 to the current host rac02, and re-run root.sh. The issue is gone.
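
The copy itself can be sketched like this — printed for review rather than executed, to be run as the grid user on rac01 (the gpnp path is the old GI home from this cluster; scp -p preserves timestamps and permissions):

```shell
# Print the scp commands that restore the five missing GPnP files
# from rac01 to rac02.
OLD_HOME=/u01/app/19.0.0/grid
for f in wallets/root/ewallet.p12 wallets/peer/cwallet.sso \
         wallets/prdr/cwallet.sso wallets/pa/cwallet.sso \
         profiles/peer/profile.xml; do
  echo "scp -p $OLD_HOME/gpnp/$f rac02:$OLD_HOME/gpnp/$f"
done
```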

Thursday, November 5, 2020

Oracle AHF installation failed with "AHF-00074: Perl Modules not found: Data::Dumper"

Oracle combined Trace File Analyzer (TFA) and ORAchk/EXAchk into a single installer called Oracle Autonomous Health Framework (AHF).

When installing AHF on the Linux platform (Red Hat Enterprise Linux / Oracle Linux), I got the error "AHF-00074: Perl Modules not found :  Data::Dumper"
[root@host01]# ./ahf_setup
AHF Installer for Platform Linux Architecture x86_64
AHF Installation Log : /tmp/ahf_install_202300_4904_2020_11_05-18_04_25.log
PERL : /u01/app/19.9.0/grid/perl/bin/perl
[ERROR] : AHF-00074: Perl Modules not found :  Data::Dumper

In the log file /tmp/ahf_install_202300_4904_2020_11_05-18_04_25.log, I found
[2020-11-05 18:18:20.784 EST] System Perl : /bin/perl
[2020-11-05 18:18:20.786 EST] Checking Perl Version : /bin/perl
[2020-11-05 18:18:20.791 EST] Perl Version : 5.016003
[2020-11-05 18:18:20.793 EST] PERL_ERROR in checkPerlVersion : 
[2020-11-05 18:18:20.794 EST] Checking Perl Owner : /bin/perl
[2020-11-05 18:18:20.798 EST] Perl Owner : root
[2020-11-05 18:18:20.799 EST] PERL_ERROR in checkPerlOwner : 
[2020-11-05 18:18:20.801 EST] Perl in checkPerlModules : /bin/perl
[2020-11-05 18:18:20.804 EST] Perl Module : Data::Dumper, Version : , Exit Status : 2
[2020-11-05 18:18:20.806 EST] Perl Module not found : Data::Dumper
   ... ...
[2020-11-05 18:18:21.068 EST] Perl from CRS HOME : /u01/app/19.9.0/grid/perl/bin/perl
[2020-11-05 18:18:21.070 EST] Checking Perl Version : /u01/app/19.9.0/grid/perl/bin/perl
[2020-11-05 18:18:21.075 EST] Perl Version : 5.028001
[2020-11-05 18:18:21.077 EST] PERL_ERROR in checkPerlVersion : 
[2020-11-05 18:18:21.078 EST] Checking Perl Owner : /u01/app/19.9.0/grid/perl/bin/perl
[2020-11-05 18:18:21.082 EST] Perl Owner : root
[2020-11-05 18:18:21.083 EST] PERL_ERROR in checkPerlOwner : 
[2020-11-05 18:18:21.085 EST] Perl in checkPerlModules : /u01/app/19.9.0/grid/perl/bin/perl
[2020-11-05 18:18:21.100 EST] Perl Module : Data::Dumper, Version : 2.170, Exit Status : 0

The Perl from the CRS home (/u01/app/19.9.0/grid/perl/bin/perl) can find the Perl module "Data::Dumper", but the system Perl (/bin/perl) cannot. The module file is Dumper.pm
[root@host01]# find / -name Dumper.pm
/u01/app/oracle/product/19.9.0/dbhome_1/perl/lib/5.28.1/x86_64-linux-thread-multi/Data/Dumper.pm
/u01/app/19.9.0/grid/perl/lib/5.28.1/x86_64-linux-thread-multi/Data/Dumper.pm
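
A quick way to reproduce the check the installer does is to ask each perl binary to load the module; "perl -MData::Dumper -e1" exits 0 only when the module is found. The helper below is my own, not part of AHF:

```shell
# check_module PERL MODULE: report whether the given perl can load MODULE.
check_module() {
  if "$1" -M"$2" -e1 2>/dev/null; then
    echo "$2 found by $1"
  else
    echo "$2 missing from $1"
  fi
}
check_module /bin/perl Data::Dumper
check_module /u01/app/19.9.0/grid/perl/bin/perl Data::Dumper
```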

The file Dumper.pm is found only under the Oracle homes; on Linux, it is provided by the perl-Data-Dumper package
[root@host01]# rpm -q perl-Data-Dumper
package perl-Data-Dumper is not installed
[root@host01]#

Install the package perl-Data-Dumper:
[root@host01]# yum install perl-Data-Dumper
Loaded plugins: langpacks, ulninfo
  ... ...
 Resolving Dependencies
--> Running transaction check
---> Package perl-Data-Dumper.x86_64 0:2.145-3.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================
 Package                    Arch             Version                Repository         Size
============================================================================================
Installing:
 perl-Data-Dumper           x86_64           2.145-3.el7            ol7_latest         47 k

Transaction Summary
============================================================================================
Install  1 Package

Total download size: 47 k
Installed size: 97 k
Is this ok [y/d/N]: y
Downloading packages:

perl-Data-Dumper-2.145-3.el7.x86_64.rpm                                      |  47 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction

  Installing : perl-Data-Dumper-2.145-3.el7.x86_64 [                                    ] 1/1
      ... ...
  Verifying  : perl-Data-Dumper-2.145-3.el7.x86_64                                        1/1 

Installed:
  perl-Data-Dumper.x86_64 0:2.145-3.el7

Complete!

[root@host01]# find / -name Dumper.pm
/u01/app/oracle/product/19.9.0/dbhome_1/perl/lib/5.28.1/x86_64-linux-thread-multi/Data/Dumper.pm
/u01/app/19.9.0/grid/perl/lib/5.28.1/x86_64-linux-thread-multi/Data/Dumper.pm
/usr/lib64/perl5/vendor_perl/Data/Dumper.pm

Retry the AHF installation; the issue is fixed.
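For reference, the difference between the two Perl builds comes down to module search paths: each perl binary searches only its own library directories (@INC). A minimal sketch of that lookup, using throwaway /tmp directories instead of the real homes:

```shell
#!/bin/sh
# Mimic how a perl binary resolves a module: it searches only its own
# library directories (@INC). All paths here are throwaway stand-ins.
find_module() {                 # $1 = module file, remaining args = lib dirs
  mod=$1; shift
  for dir in "$@"; do
    if [ -f "$dir/$mod" ]; then
      echo "$dir/$mod"
      return 0
    fi
  done
  echo "not found"
}

gi_lib=$(mktemp -d)             # stands in for the GI home perl's lib
sys_lib=$(mktemp -d)            # stands in for the system perl's lib
mkdir -p "$gi_lib/Data"
: > "$gi_lib/Data/Dumper.pm"    # the module ships only with the Oracle homes

find_module Data/Dumper.pm "$sys_lib"   # prints: not found
find_module Data/Dumper.pm "$gi_lib"    # prints the path under $gi_lib
```

In the real environment, the equivalent check is simply running each perl binary with `-e 'use Data::Dumper'` and seeing which one fails.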

Friday, October 30, 2020

Pluggable database opened with RESTRICTED SESSION due to Sync PDB failed with ORA-65177 during 'alter user'

When opening the pluggable database, the following error occurred:
SQL> alter pluggable database ORCL open;

Warning: PDB altered with errors.

SQL> 
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORCL                           READ WRITE YES

The pluggable database ORCL opened with RESTRICTED SESSION. Check the view pdb_plug_in_violations for the error details:
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> col name for a10
SQL> col time for a18
SQL> col cause for a15
SQL> col message for a75
SQL> set linesize 255
SQL> alter session set nls_timestamp_format='dd-mon-yy hh24:mi:ss';

Session altered.

SQL> select time,name,type,cause,status,message from pdb_plug_in_violations where status <>'RESOLVED';

TIME               NAME       TYPE      CAUSE           STATUS    MESSAGE
------------------ ---------- --------- --------------- --------- ---------------------------------------------------------------------------
30-oct-20 12:00:52 ORCL       ERROR     Sync Failure    PENDING   Sync PDB failed with ORA-65177 during 'alter user ABCDE_APP account unlock'

When the PDB ORCL is opened, it tries to replay the statement "alter user ABCDE_APP account unlock", which was run in CDB$ROOT but has not yet been synchronized to the PDB. Check the status of the user ABCDE_APP:
SQL> col username for a10
SQL> select con_id,username,account_status,created,common from cdb_users where username='ABCDE_APP';

    CON_ID USERNAME   ACCOUNT_STATUS     CREATED   COM
---------- ---------- ------------------ --------- ---
         1 ABCDE_APP  OPEN               11-JUN-19 YES

SQL> alter session set container=ORCL;

Session altered.

SQL>
SQL> select con_id,username,account_status,created,common from cdb_users where username='ABCDE_APP';

    CON_ID USERNAME   ACCOUNT_STATUS    CREATED   COM
---------- ---------- ----------------- --------- ---
         3 ABCDE_APP  OPEN              04-AUG-20 NO


The user ABCDE_APP was created as a common user in the root container, but in the PDB a user named ABCDE_APP was created as a local user. The two accounts share the same name but are entirely different users. When the DBA runs "alter user ABCDE_APP account unlock" in the root container, the database tries to synchronize the statement to all PDBs, and the sync fails in PDB ORCL because the ABCDE_APP there is not the common user.
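To see whether other accounts are affected by the same pattern, a query against CDB_USERS can list every user name that appears with different COMMON settings across containers (common in one, local in another). This is a sketch using only the view shown above; the shell wrapper simply writes the script file for sqlplus:

```shell
# Write a collision-detection query to a script file, to be run by sqlplus
# from CDB$ROOT. The query is a sketch: it lists user names whose COMMON
# value differs across containers -- the situation behind this ORA-65177.
cat > /tmp/find_user_collisions.sql <<'EOF'
select username,
       count(distinct common) as common_settings,
       count(distinct con_id) as containers
  from cdb_users
 group by username
having count(distinct common) > 1;
EOF
```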

As a workaround, the issue can be temporarily resolved by deleting the sync-pending statement (alter user ABCDE_APP account unlock) from PDB_SYNC$ in both the CDB and the PDB.

In CDB root container,
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL> 
SQL> col name for a10
SQL> col sqlstmt for a40
SQL> 
SQL> alter session set nls_date_format='dd-mon-yy hh24:mi:ss';

Session altered.

SQL> 
SQL> select ctime,sqlstmt,name,flags,opcode,replay# from pdb_sync$ where sqlstmt like 'alter user ABCDE_APP account unlock%';

CTIME              SQLSTMT                                  NAME            FLAGS      OPCODE     REPLAY#
------------------ ---------------------------------------- ---------- ---------- ---------- ----------
30-oct-20 10:24:42 alter user ABCDE_APP account unlock      ABCDE_APP           0          5          21

SQL> delete from PDB_SYNC$ where sqlstmt like ('alter user ABCDE_APP account unlock%');

1 row deleted.

SQL> commit;

Commit complete.

In the PDB ORCL,
SQL> alter session set container=ORCL;

Session altered.

SQL> select ctime,sqlstmt,name,flags,opcode,replay# from pdb_sync$ where sqlstmt like 'alter user ABCDE_APP account unlock%';

CTIME              SQLSTMT                                  NAME            FLAGS      OPCODE     REPLAY#
------------------ ---------------------------------------- ---------- ---------- ---------- ----------
30-oct-20 10:24:42 alter user ABCDE_APP account unlock      ABCDE_APP           0          5          21

SQL> delete from PDB_SYNC$ where sqlstmt like 'alter user ABCDE_APP account unlock%';

1 row deleted.

SQL> commit;

Commit complete.

Re-open the PDB:
SQL> alter pluggable database ORCL close;

Pluggable database altered.

SQL> alter pluggable database ORCL open;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORCL                           READ WRITE NO

The PDB now opens without RESTRICTED SESSION. However, the issue can recur whenever an 'alter user ABCDE_APP' statement is run in the root container. The permanent solution is to drop the local user from the PDB and grant the common user access to the PDB.
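A sketch of that permanent fix, using this post's example names (ORCL, ABCDE_APP). Note that DROP USER ... CASCADE destroys the local schema's objects, so export or back them up first, and the exact grants the common user needs depend on the application:

```shell
# Write the fix to a script file for sqlplus. Names are this post's
# examples; DROP USER ... CASCADE is destructive -- back up the schema first.
cat > /tmp/replace_local_with_common.sql <<'EOF'
alter session set container=ORCL;
-- remove the colliding local user from the PDB
drop user ABCDE_APP cascade;
-- allow the common user to work in this PDB (adjust grants as needed)
grant create session to ABCDE_APP;
EOF
```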

Tuesday, October 27, 2020

Reduce number of privileged access audit files for ASM with rsyslog on Linux

This post shows how to configure rsyslog on Linux (Red Hat Enterprise Linux 7/8) to redirect ASM audit messages to OS log files.

If you are looking for this information for Solaris or AIX, please check out the related posts on this blog.

Tuesday, October 20, 2020

Install and Configure Kerberos Authentication on Red Hat Enterprise Linux 8

The demonstration includes three installation/configuration parts,

  Part 1. Install and configure Kerberos Key Distribution Center (KDC) Server
  
     Server name: xdc01.lab.dbaplus.ca
     OS version:  Red Hat Enterprise Linux (RHEL) 8

  Part 2. Install and configure Kerberos client on application (SSH) server
  
     Server 1: host01.lab.dbaplus.ca
     Server 2: host02.lab.dbaplus.ca
     OS version:  Red Hat Enterprise Linux (RHEL) 8
  
  Part 3. Install and configure Kerberos client on workstation to test single-sign-on (SSO) for SSH authentication with Kerberos

Monday, September 28, 2020

GI 19c opatchauto out-of-place patching failed with file permission issues

Environment,

   Grid Infrastructure Home version : 19.7.0.0.200414
   OPatch version: 12.2.0.1.21
   Patch version: P31305339 - GI Release Update 19.8.0.0.200714

On the first node of the cluster, running opatchauto with the option -outofplace or -prepare-clone produced errors:
[root@rac01]# /u01/app/19.7.0/grid_1/OPatch/opatchauto apply -phBaseDir /u01/stage/grid -outofplace -logLevel FINEST
     ... ...
Creating clone for oracle home /u01/app/19.7.0/grid_1.
Failed to perform clone operation for oracle home /u01/app/19.7.0/grid_1. 

Execution of [CloneOracleHomeAction] patch action failed, check log for more details. Failures:
Patch Target : rac01->/u01/app/19.7.0/grid_1 Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/19.7.0/grid_1, host: rac01.
Command failed:  /u01/app/19.8.0/grid_1/perl/bin/perl /u01/app/19.8.0/grid_1/clone/bin/clone.pl ORACLE_BASE=/u01/app/grid ORACLE_HOME=/u01/app/19.8.0/grid_1 INVENTORY_LOCATION=/u01/app/oraInventory -defaultHomeName 'CLUSTER_NODES={rac01,rac02}' "LOCAL_NODE=rac01" CRS=false SHOW_ROOTSH_CONFIRMATION=false oracle.install.skipRootPre=true -silent
Command failure output: 
[INFO] [INS-32183] Use of clone.pl is deprecated in this release. Clone operation is equivalent to performing a Software Only installation from the image. 
You must use /u01/app/19.8.0/grid_1/gridSetup.sh script available to perform the Software Only install. For more details on image based installation, refer to help documentation.

Starting Oracle Universal Installer...

[FATAL] [INS-32012] Unable to create directory: /u01/app/19.8.0/grid_1, on this server.
   CAUSE: Either proper permissions were not granted to create the directory or there was no space left in the volume.
   ACTION: Check your permission on the selected directory or choose another directory.
[WARNING] [INS-32029] The Installer has detected that the Oracle Base location is not empty.
   ACTION: Oracle recommends that the Oracle Base location is empty. 

After fixing the cause of failure Run opatchauto resume

]

The process failed with the error "Unable to create directory: /u01/app/19.8.0/grid_1"; the directory is the cloned (new) home, onto which the process is trying to apply RU 19.8.0.0.200714.

Check the directory permission
[root@rac01]# ls -ld /u01/app/19.8.0/grid_1       
drwxr-xr-x   94 root     oinstall       4096 Sep 20 18:27 /u01/app/19.8.0/grid_1

Change the directory owner from root to the Grid home owner, grid:
[root@rac01]# chown grid /u01/app/19.8.0/grid_1

Resume the patch process
[root@rac01]# /u01/app/19.7.0/grid_1/OPatch/opatchauto resume

This time, different errors occur:
Patch Target : rac01->/u01/app/19.7.0/grid_1 Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/19.7.0/grid_1, host: rac01.
Command failed:  /u01/app/19.8.0/grid_1/perl/bin/perl /u01/app/19.8.0/grid_1/clone/bin/clone.pl ORACLE_BASE=/u01/app/grid ORACLE_HOME=/u01/app/19.8.0/grid_1 INVENTORY_LOCATION=/u01/app/oraInventory -defaultHomeName 'CLUSTER_NODES={rac01,rac02}' "LOCAL_NODE=rac01" CRS=false SHOW_ROOTSH_CONFIRMATION=false oracle.install.skipRootPre=true -silent
Command failure output: 
[INFO] [INS-32183] Use of clone.pl is deprecated in this release. Clone operation is equivalent to performing a Software Only installation from the image. 
You must use /u01/app/19.8.0/grid_1/gridSetup.sh script available to perform the Software Only install. For more details on image based installation, refer to help documentation.

Starting Oracle Universal Installer...

[WARNING] [INS-32029] The Installer has detected that the Oracle Base location is not empty.
   ACTION: Oracle recommends that the Oracle Base location is empty.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2020-09-20_06-38-13PM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........[FATAL] Failed to restore the saved templates to the Oracle home being cloned. Aborting the clone operation

After fixing the cause of failure Run opatchauto resume

]

The root cause is that opatchauto did not successfully unlock the newly cloned home before applying patches to it. Many files and directories under the new home are still owned by root:
[root@rac01]# find /u01/app/19.8.0/grid_1 -user root
/u01/app/19.8.0/grid_1/acfs
/u01/app/19.8.0/grid_1/acfs/tunables
/u01/app/19.8.0/grid_1/auth
/u01/app/19.8.0/grid_1/auth/crs
/u01/app/19.8.0/grid_1/auth/crs/r10-sohei
   ... ...
/u01/app/19.8.0/grid_1/bin/cloudcli
/u01/app/19.8.0/grid_1/bin/clscfg
/u01/app/19.8.0/grid_1/bin/clscfg.bin
/u01/app/19.8.0/grid_1/bin/clsdiaglsnr
   ... ...
/u01/app/19.8.0/grid_1/suptools/tfa/release/tfa_home/tfa.jks
/u01/app/19.8.0/grid_1/suptools/tfa/release/tfa_home/tfa_directories.txt
/u01/app/19.8.0/grid_1/suptools/tfa/release/tfa_home/tfa_setup.txt
/u01/app/19.8.0/grid_1/xag/mesg/xagus.le

Change the owner of these files/directories to grid:
[root@rac01]# find /u01/app/19.8.0/grid_1 -user root -exec chown grid {} \;

Continue opatchauto with the resume option:
[root@rac01]# /u01/app/19.7.0/grid_1/OPatch/opatchauto resume

The home is then patched out-of-place, and the new home is /u01/app/19.8.0/grid_1.
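Before resuming opatchauto in a case like this, it is worth confirming that nothing under the cloned home is still owned by root. A small helper around the find command used above (demonstrated on a throwaway directory, since the real check needs the actual home):

```shell
#!/bin/sh
# Count how many entries under a directory are owned by a given user;
# a result of 0 for root under the cloned home means it is safe to resume.
count_owned() {                 # $1 = directory, $2 = user name
  find "$1" -user "$2" 2>/dev/null | wc -l
}

demo=$(mktemp -d)               # throwaway directory for demonstration
touch "$demo/a" "$demo/b"
count_owned "$demo" "$(id -un)" # prints 3: the directory itself + 2 files
count_owned "$demo" nosuchuser  # prints 0 (find's error is suppressed)
```

Against the real home, this would be `count_owned /u01/app/19.8.0/grid_1 root`, expecting 0 after the chown.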

Sunday, August 2, 2020

HTTP 404 when enabling Friendly URLs or referencing Static Application Files using substitution string #APP_IMAGES#

Oracle is deprecating the two web listeners Oracle HTTP Server (mod_plsql) and the Embedded PL/SQL Gateway; Oracle REST Data Services (ORDS) will be the only supported web listener for APEX.

Saturday, July 18, 2020

Oracle database 19c "srvctl status service" fails with java.lang.NullPointerException

The following errors occur while running the "srvctl status service -db <db_name>" command from a 19.7 database home on AIX and Linux x86_64:
$ srvctl status service -db orcl
Exception in thread "main" java.lang.NullPointerException
        at oracle.ops.opsctl.StatusAction.internalExecuteService(StatusAction.java:2077)
        at oracle.ops.opsctl.StatusAction.executeSIHAService(StatusAction.java:1692)
        at oracle.ops.opsctl.Action.executeSIHA(Action.java:445)
        at oracle.ops.opsctl.OPSCTLDriver.execute(OPSCTLDriver.java:537)
        at oracle.ops.opsctl.OPSCTLDriver.main(OPSCTLDriver.java:252)

Thursday, July 16, 2020

Configure SSL on WebLogic 12c with CA Issued Certificate

WebLogic 12c ships with a demo identity keystore, which makes it very easy to enable SSL communication for WebLogic managed servers in non-production environments. However, if you are deploying your application in production, especially if it is publicly accessible, a certificate issued by a Certificate Authority (CA) is absolutely necessary. To request and install a CA-issued certificate on WebLogic 12c, follow these steps:

Sunday, July 12, 2020

Install APEX with deploying ORDS in WebLogic as Web listener

The installation is completed on Oracle Linux 7.5, and the following components/applications are installed:

    JDK 1.8.0_241
    Weblogic 12.2.1.4.0
    Oracle Application Express (APEX) 19.2
    Oracle REST Data Services (ORDS) 19.4

Saturday, June 20, 2020

Agent status shows 'Diagnose for Agent Blocked (Bounce Counter Mismatch)' in OEM 13c

The agent 13.2 home was restored from a backup after a failed OS patching maintenance. On the Enterprise Manager 13.2 console, the agent status shows 'Diagnose for Agent Blocked (Bounce Counter Mismatch)', and the targets monitored by the agent show 'Diagnose for Status Pending (Post Blackout)'.

Wednesday, May 27, 2020

Database 12.2 crashed with ORA-04031 ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")

A database 12.2 instance crashed with the following messages in the alert log:
2020-05-20T22:28:52.724801-04:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_asmb_15204374.trc  (incident=320299):
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")
Incident details in: /u01/app/oracle/diag/rdbms/orcl/orcl/incident/incdir_320299/orcl_asmb_15204374_i320299.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2020-05-20T22:29:30.066864-04:00
NOTE: ASMB0 terminating
2020-05-20T22:29:30.067840-04:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_asmb_15204374.trc:
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")
2020-05-20T22:29:30.070882-04:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_asmb_15204374.trc:
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_asmb_15204374.trc  (incident=320300):
ORA-4031 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /u01/app/oracle/diag/rdbms/orcl/orcl/incident/incdir_320300/orcl_asmb_15204374_i320300.trc
2020-05-20T22:29:31.376055-04:00
USER (ospid: 15204374): terminating the instance due to error 4031

From the alert log, we can see that the ASMB process (ospid: 15204374) wrote the following errors into the trace file orcl_asmb_15204374.trc:

ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")

Then the ASMB process terminated the instance.

This issue happened after Oracle Database Release Update April 2020 (12.2.0.1.200414) was applied to the database home.

It is a bug introduced by Release Update 12.2.0.1.200414, and it can be fixed by applying patch 31341859.

Sunday, May 17, 2020

12.2.0.1 init.ohasd does not start clusterware stack after April 2020 RU applied

The init.ohasd process does not start the clusterware stack after the April 2020 Release Update is applied to the 12.2.0.1 Grid Infrastructure home. This is not a typical Grid Infrastructure startup issue; it is a human error that happened when the DBA, by accident, ran roothas.sh instead of rootcrs.sh while patching the GI cluster home. Before the human error was identified, the diagnosis proceeded as follows.

Wednesday, May 13, 2020

impdp failed at dbms_logrep_imp.instantiate_schema with ORA-01031: insufficient privileges

When executing impdp to import into a 12.2 database, ORA-01031 occurred as follows:
[oracle@host01]$ impdp system@pdb01 dumpfile=appdata.dmp schemas=appdata

Import: Release 12.2.0.1.0 - Production on Tue May 12 17:58:56 2020

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_03":  system/********@pdb01 dumpfile=appdata.dmp schemas=appdata
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
ORA-39083: Object type PROCACT_SCHEMA failed to create with error:
ORA-31625: Schema APPDATA is needed to import this object, but is unaccessible
ORA-01031: insufficient privileges

Failing sql is:
BEGIN
sys.dbms_logrep_imp.instantiate_schema(schema_name=>SYS_CONTEXT('USERENV','CURRENT_SCHEMA'), export_db_name=>'DB02', inst_scn=>'386040157882');COMMIT; END;
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-31625: Schema APPDATA is needed to import this object, but is unaccessible
ORA-01031: insufficient privileges

Failing sql is:
GRANT SELECT ON "APPDATA"."WORKLOAD_SEQ" TO "USER01"

ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-31625: Schema APPDATA is needed to import this object, but is unaccessible
ORA-01031: insufficient privileges

Failing sql is:
GRANT SELECT ON "APPDATA"."LOG_SEQ" TO "USER02"

The user SYSTEM, which impdp uses to connect to the database, does not have the BECOME USER privilege:
system@PDB01> select * from dba_sys_privs where privilege='BECOME USER';

GRANTEE    PRIVILEGE       ADMIN_OPTION COMMON INHERITED
---------- --------------- ------------ ------ ---
SYS        BECOME USER     NO           YES    YES

In a healthy database, the privilege grants look like this:
system@PDB01> select * from dba_sys_privs where privilege='BECOME USER';

GRANTEE              PRIVILEGE    ADMIN_OPTION COMMON INHERITED
-------------------- ------------ ------------ ------ ---
DBA                  BECOME USER  NO           YES    YES
IMP_FULL_DATABASE    BECOME USER  NO           YES    YES
SYS                  BECOME USER  NO           YES    YES

It can be fixed by running the grant command:
grant BECOME USER to DBA,IMP_FULL_DATABASE;

This privilege is normally granted to the roles DBA and IMP_FULL_DATABASE, but for some reason it can be revoked. In one scenario, DBUA runs the script dvu121.sql, which revokes the BECOME USER privilege from the DBA and IMP_FULL_DATABASE roles when a database is upgraded to 12.2 and the Database Vault component is VALID.

Friday, April 24, 2020

opatchauto out-of-place patching Oracle Restart and Database home

Out-of-place (OOP) patching is a feature introduced in Oracle 12.2 with the latest version of opatchauto. It is a patching method in which patching is performed by the following operations:

  * Creating a copy of current oracle home
  * Applying patches to the copied home
  * Switching services to the copied home

This patching method helps with moving an Oracle home and/or reducing the downtime of patching the Oracle home and database. opatchauto supports OOP patching for the following configurations:

  * Grid Infrastructure (GI) and Real Application Cluster (RAC)
  * Oracle Restart (Standalone Grid Infrastructure) (SIHA) and single instance database (SIDB)
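The three operations above can be illustrated with a toy walk-through. Everything below is illustrative only (opatchauto does the real work), with throwaway directories standing in for the Oracle homes:

```shell
#!/bin/sh
# Toy walk-through of the OOP flow; directories stand in for Oracle homes.
src=$(mktemp -d)                    # the current (active) home
dst="${src}.patched"                # the cloned home
echo "19.7.0.0" > "$src/version"    # pretend these are the binaries

cp -a "$src" "$dst"                 # 1. copy the current home
echo "19.8.0.0" > "$dst/version"    # 2. patch the copy; original untouched
active="$dst"                       # 3. switch services to the patched copy

cat "$active/version"               # prints 19.8.0.0
cat "$src/version"                  # prints 19.7.0.0 (kept for rollback)
```

The payoff is step 3: services are down only for the switch, not for the whole patching run, and the old home remains available for rollback.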

Tuesday, December 24, 2019

DBA_REGISTRY_SQLPATCH shows END status after datapatch ran against database

During the post-installation step of database patching, the datapatch utility shows the following messages while loading modified SQL files into the database:
[oracle@host01]$ $ORACLE_HOME/OPatch/datapatch -verbose
SQL Patching tool version 12.2.0.1.0 Production on Sun Dec 22 14:32:47 2019
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_18481316_2019_12_22_14_32_47/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Bundle series DBRU:
  ID 191015 in the binary registry and ID 190416 in the SQL registry

Adding patches to installation queue and performing prereq checks...
Installation queue:
  Nothing to roll back
  The following patches will be applied:
    30138470 (DATABASE OCT 2019 RELEASE UPDATE 12.2.0.1.191015)

Installing patches...
Patch installation complete.  Total patches installed: 3

Validating logfiles...
Patch 30138470 apply: SUCCESS
  logfile:  (no errors)
SQL Patching tool complete on Sun Dec 22 14:33:32 2019


It shows 'Patch 30138470 apply: SUCCESS', but the log file name is missing and the patch status shows END in the view DBA_REGISTRY_SQLPATCH:
SQL> select patch_id,version,action,status,description
  2   from dba_registry_sqlpatch
  3  order by action_time;

  PATCH_ID VERSION     ACTION   STATUS     DESCRIPTION
---------- ----------- -------- ---------- -------------------------------------------------
  29314339 12.2.0.1    APPLY    SUCCESS    DATABASE APR 2019 RELEASE UPDATE 12.2.0.1.190416
  30138470 12.2.0.1    APPLY    END

SQL>


Check the SQL execution logs:
[oracle@host01]$ cd /u01/app/oracle/cfgtoollogs/sqlpatch/
[oracle@host01]$ ls -l
total 104
drwxr-xr-x    3 oracle   oracle          256 Jul 21 16:28 29314339
drwxr-xr-x    3 oracle   oracle          256 Dec 22 14:33 30138470
drwxr-xr-x    2 oracle   oracle         4096 Jul 21 16:24 sqlpatch_10879194_2019_07_21_16_24_15
drwxr-xr-x    2 oracle   oracle         4096 Dec 22 14:33 sqlpatch_18481316_2019_12_22_14_32_47
-rw-r--r--    1 oracle   oracle         1373 Dec 22 14:31 sqlpatch_history.txt

[oracle@host01]$ cd 30138470
[oracle@host01]$ ls -l
total 0
drwxr-xr-x    2 oracle   oracle          256 Dec 22 15:10 23136421
[oracle@host01]$ cd 23136421
[oracle@host01]$ ls -l
total 440
-rw-r--r--    1 oracle   oracle       184086 Dec 22 14:33 30138470_apply_DB01_2019Dec22_14_33_31.log
[oracle@host01]$ pwd
/u01/app/oracle/cfgtoollogs/sqlpatch/30138470/23136421
[oracle@host01]$
[oracle@host01]$ grep -n 'ORA-' 30138470_apply_DB01_2019Dec22_14_33_31.log
32:ORA-01693: max # extents (505) reached in lob segment
34:ORA-06512: at "SYS.DBMS_SQLPATCH", line 639
35:ORA-06512: at "SYS.DBMS_SQLPATCH", line 605
36:ORA-06512: at line 2
65:ORA-20004: Uninitialized state in install_file
66:ORA-06512: at "SYS.DBMS_SQLPATCH", line 738
159:ORA-20004: Uninitialized state in install_file
       ...
7305:SQL> PROMPT IGNORABLE ERRORS: ORA-04043
7306:IGNORABLE ERRORS: ORA-04043
7313:ORA-20004: Uninitialized state in install_file
7314:ORA-06512: at "SYS.DBMS_SQLPATCH", line 738
7407:ORA-20004: Uninitialized state in install_file
7408:ORA-06512: at "SYS.DBMS_SQLPATCH", line 738


ORA-01693 is found at line 32 of the log file. List the details of the error:
[oracle@host01]$ head -37 30138470_apply_DB01_2019Dec22_14_33_31.log
SQL>
SQL> SET PAGESIZE 0
SQL> SELECT 'Starting apply for patch 30138470/23136421 on ' ||
  2         SYSTIMESTAMP FROM dual;
Starting apply for patch 30138470/23136421 on 22-DEC-19 02.33.31.103477 PM -05:00


SQL> SET PAGESIZE 10
SQL>
SQL> BEGIN
  2      dbms_sqlpatch.patch_initialize(p_patch_id      => 30138470,
  3                                     p_patch_uid     => 23136421,
  4                                     p_flags         => '&flags',
  5                                     p_description   => '&description',
  6                                     p_action        => 'APPLY',
  7                                     p_logfile       => '&full_logfile',
  8                                     p_bundle_series => '&bundle_series');
  9  END;
 10  /
old   4:                                    p_flags         => '&flags',
new   4:                                    p_flags         => 'NB',
old   5:                                    p_description   => '&description',
new   5:                                    p_description   => 'DATABASE OCT 2019 RELEASE UPDATE 12.2.0.1.191015',
old   7:                                    p_logfile       => '&full_logfile',
new   7:                                    p_logfile       => '/u01/app/oracle/cfgtoollogs/sqlpatch/30138470/23136421/30138470_apply_DB01_2019Dec22_14_33_31.log',
old   8:                                    p_bundle_series => '&bundle_series');
new   8:                                    p_bundle_series => 'DBRU');
BEGIN
*
ERROR at line 1:
ORA-01693: max # extents (505) reached in lob segment
SYS.SYS_LOB0000190607C00013$$
ORA-06512: at "SYS.DBMS_SQLPATCH", line 639
ORA-06512: at "SYS.DBMS_SQLPATCH", line 605
ORA-06512: at line 2

[oracle@host01]$


The segment SYS.SYS_LOB0000190607C00013$$ has reached its maximum extents limit (505), which caused the SQL failure. Find out what the segment is:
SQL> select owner,object_name,object_type
  2    from dba_objects
  3   where object_name='SYS_LOB0000190607C00013$$';

OWNER                OBJECT_NAME                    OBJECT_TYPE
-------------------- ------------------------------ -----------------------
SYS                  SYS_LOB0000190607C00013$$      LOB

SQL> select owner,table_name,column_name,segment_name
  2    from dba_lobs
  3   where segment_name='SYS_LOB0000190607C00013$$';

OWNER      TABLE_NAME                     COLUMN_NAME          SEGMENT_NAME
---------- ------------------------------ -------------------- -------------------------
SYS        REGISTRY$SQLPATCH              PATCH_DIRECTORY      SYS_LOB0000190607C00013$$

SQL>
SQL> desc sys.registry$sqlpatch
 Name                          Null?    Type
 ----------------------------- -------- -------------------------
 PATCH_ID                      NOT NULL NUMBER                  
 ACTION                        NOT NULL VARCHAR2(15)            
 ACTION_TIME                   NOT NULL TIMESTAMP(6)            
 DESCRIPTION                            VARCHAR2(100)           
 LOGFILE                                VARCHAR2(500)           
 STATUS                                 VARCHAR2(25)            
 VERSION                       NOT NULL VARCHAR2(20)            
 PATCH_UID                     NOT NULL NUMBER                  
 FLAGS                                  VARCHAR2(10)            
 BUNDLE_SERIES                          VARCHAR2(30)            
 BUNDLE_ID                              NUMBER                  
 INSTALL_ID                             NUMBER                  
 PATCH_DIRECTORY                        BLOB                    
 BUNDLE_DATA                            XMLTYPE                 
 PATCH_DESCRIPTOR                       XMLTYPE                 
 POST_LOGFILE                           VARCHAR2(500)           
SQL>
SQL> select owner,segment_name,max_extents
  2    from dba_segments
  3   where segment_name in ('REGISTRY$SQLPATCH','SYS_LOB0000190607C00013$$');

OWNER      SEGMENT_NAME                   MAX_EXTENTS
---------- ------------------------------ -----------
SYS        REGISTRY$SQLPATCH                      505
SYS        SYS_LOB0000190607C00013$$              505


The segment belongs to the BLOB column PATCH_DIRECTORY of the table SYS.REGISTRY$SQLPATCH; both the BLOB segment and its table have a maximum extents limit of 505. To remove the limitation:
SQL> alter table sys.registry$sqlpatch storage (maxextents unlimited);

Table altered.

SQL> alter table sys.registry$sqlpatch
  2  modify lob (patch_directory) (storage (maxextents unlimited));

Table altered.

SQL> select owner,segment_name,max_extents
  2    from dba_segments
  3   where segment_name in ('REGISTRY$SQLPATCH','SYS_LOB0000190607C00013$$');

OWNER      SEGMENT_NAME                   MAX_EXTENTS
---------- ------------------------------ -----------
SYS        REGISTRY$SQLPATCH               2147483645
SYS        SYS_LOB0000190607C00013$$       2147483645



Then re-run the datapatch utility,
[oracle@host01]$ $ORACLE_HOME/OPatch/datapatch -verbose
SQL Patching tool version 12.2.0.1.0 Production on Sun Dec 22 16:43:43 2019
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_35782878_2019_12_22_16_43_43/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Bundle series DBRU:
  ID 191015 in the binary registry and ID 190416 in the SQL registry

Adding patches to installation queue and performing prereq checks...
Installation queue:
  Nothing to roll back
  The following patches will be applied:
    30138470 (DATABASE OCT 2019 RELEASE UPDATE 12.2.0.1.191015)

Installing patches...
Patch installation complete.  Total patches installed: 1

Validating logfiles...
Patch 30138470 apply: SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/30138470/23136421/30138470_apply_DB01_2019Dec22_16_44_33.log (no errors)
SQL Patching tool complete on Sun Dec 22 16:45:22 2019


This time datapatch completes and reports the log file name. Validate the patch status,
SQL> select patch_id,version,action,status,description
  2    from dba_registry_sqlpatch order by action_time;

  PATCH_ID VERSION     ACTION   STATUS     DESCRIPTION
---------- ----------- -------- ---------- -------------------------------------------------
  29314339 12.2.0.1    APPLY    SUCCESS    DATABASE APR 2019 RELEASE UPDATE 12.2.0.1.190416
  30138470 12.2.0.1    APPLY    END
  30138470 12.2.0.1    APPLY    SUCCESS    DATABASE OCT 2019 RELEASE UPDATE 12.2.0.1.191015


The patch is now installed successfully.

Tuesday, October 22, 2019

DBUA 12.2 Failed while PDBS Recompile Invalid Objects

When upgrading a 12.1 database to 12.2 with DBUA, the following error was raised,

Server errors encountered during the execution of "PDBS Recompile Invalid Objects".

The following information is found in the DBUA log file,
SQL> EXECUTE dbms_registry_sys.validate_components;
...(20:04:27) Starting validate_apex for APEX_180200
ORA-20001: MISSING GRANT: grant execute on "MDSYS"."SDO_DIM_ARRAY" to APEX_180200
ORA-20001: MISSING GRANT: grant execute on "MDSYS"."SDO_DIM_ELEMENT" to APEX_180200
ORA-20001: MISSING GRANT: grant execute on "MDSYS"."SDO_ELEM_INFO_ARRAY" to APEX_180200
ORA-20001: MISSING GRANT: grant execute on "MDSYS"."SDO_GEOMETRY" to APEX_180200
ORA-20001: MISSING GRANT: grant execute on "MDSYS"."SDO_ORDINATE_ARRAY" to APEX_180200
ORA-20001: MISSING GRANT: grant execute on "MDSYS"."SDO_POINT_TYPE" to APEX_180200


These privileges were granted by SYS (SYS is the grantor) in the original database, and they went missing during the upgrade.

Object privileges in the original database,
SQL> select GRANTEE,GRANTOR,OWNER,TABLE_NAME,PRIVILEGE from dba_tab_privs
  2   where owner='MDSYS' and grantee='APEX_180200'
  3   order by grantor,table_name;

GRANTEE        GRANTOR    OWNER     TABLE_NAME                     PRIVILEGE
-------------- ---------- --------- ------------------------------ ----------------------
APEX_180200    MDSYS      MDSYS     MDERR                          EXECUTE
APEX_180200    MDSYS      MDSYS     SDO_GEOM_METADATA_TABLE        DELETE
APEX_180200    MDSYS      MDSYS     SDO_GEOM_METADATA_TABLE        SELECT
APEX_180200    MDSYS      MDSYS     SDO_GEOM_METADATA_TABLE        INSERT
APEX_180200    MDSYS      MDSYS     SDO_META                       EXECUTE
APEX_180200    MDSYS      MDSYS     SDO_UTIL                       EXECUTE
APEX_180200    MDSYS      MDSYS     USER_SDO_INDEX_INFO            SELECT
APEX_180200    SYS        MDSYS     SDO_DIM_ARRAY                  EXECUTE
APEX_180200    SYS        MDSYS     SDO_DIM_ELEMENT                EXECUTE
APEX_180200    SYS        MDSYS     SDO_ELEM_INFO_ARRAY            EXECUTE
APEX_180200    SYS        MDSYS     SDO_GEOMETRY                   EXECUTE
APEX_180200    SYS        MDSYS     SDO_ORDINATE_ARRAY             EXECUTE
APEX_180200    SYS        MDSYS     SDO_POINT_TYPE                 EXECUTE


Object privileges in the current (partially upgraded) database,
SQL> select GRANTEE,GRANTOR,OWNER,TABLE_NAME,PRIVILEGE from dba_tab_privs
  2   where owner='MDSYS' and grantee='APEX_180200'
  3   order by grantor,table_name;

GRANTEE        GRANTOR    OWNER      TABLE_NAME                     PRIVILEGE
-------------- ---------- ---------- ------------------------------ --------------------
APEX_180200    MDSYS      MDSYS      MDERR                          EXECUTE
APEX_180200    MDSYS      MDSYS      SDO_GEOM_METADATA_TABLE        DELETE
APEX_180200    MDSYS      MDSYS      SDO_GEOM_METADATA_TABLE        INSERT
APEX_180200    MDSYS      MDSYS      SDO_GEOM_METADATA_TABLE        SELECT
APEX_180200    MDSYS      MDSYS      SDO_META                       EXECUTE
APEX_180200    MDSYS      MDSYS      SDO_UTIL                       EXECUTE
APEX_180200    MDSYS      MDSYS      USER_SDO_INDEX_INFO            SELECT


Workaround

Manually grant the missing privileges and click Retry in DBUA.
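The ORA-20001 messages themselves spell out the exact statements to run. Connected as SYS (or another user with GRANT ANY OBJECT PRIVILEGE) in the database being upgraded, the grants can be re-issued verbatim from the log above:

```sql
-- Grants taken directly from the ORA-20001 messages in the DBUA log
grant execute on "MDSYS"."SDO_DIM_ARRAY"       to APEX_180200;
grant execute on "MDSYS"."SDO_DIM_ELEMENT"     to APEX_180200;
grant execute on "MDSYS"."SDO_ELEM_INFO_ARRAY" to APEX_180200;
grant execute on "MDSYS"."SDO_GEOMETRY"        to APEX_180200;
grant execute on "MDSYS"."SDO_ORDINATE_ARRAY"  to APEX_180200;
grant execute on "MDSYS"."SDO_POINT_TYPE"      to APEX_180200;
```

Running them as SYS also restores the original grantor, matching the dba_tab_privs output from the source database shown earlier.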