
Wednesday, September 3, 2025

19c datapatch failed with error "ORA-04088: error during execution of trigger 'WMSYS.NO_VM_DDL'"

When running datapatch against an Oracle 19c database, it fails with the error "ORA-04088: error during execution of trigger 'WMSYS.NO_VM_DDL'".

The datapatch output looks something like the following:
Patch 37960098 apply (pdb PDB01): WITH ERRORS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/37960098/27641878/37960098_apply_CDB01_PDB01_2025Aug20_15_44_17.log (errors)
  -> Error at line 140412: script rdbms/admin/owmcvws.plb
      - ORA-04088: error during execution of trigger 'WMSYS.NO_VM_DDL'
      - ORA-00604: error occurred at recursive SQL level 2
      - ORA-04061: existing state of  has been invalidated
      - ORA-04061: existing state of package body "WMSYS.LT_CTX_PKG" has been
      - invalidated
      - ORA-04065: not executed, altered or dropped package body "WMSYS.LT_CTX_PKG"
      - ORA-06508: PL/SQL: could not find program unit being called: "WMSYS.LT_CTX_PKG"
      - ORA-06512: at "WMSYS.LTADM", line 9800
      - ORA-04061: existing state of package body "WMSYS.LT_CTX_PKG" has been
      - invalidated
      - ORA-04065: not executed, altered or dropped package body "WMSYS.LT_CTX_PKG"
      - ORA-06508: PL/SQL: could not find program unit being called: "WMSYS.LT_CTX_PKG"
      - ORA-06512: at "WMSYS.LTADM", line 9532
      - ORA-06512: at "WMSYS.OWM_DYNSQL_ACCESS", line 549
      - ORA-06512: at line 17
      - ORA-06512: at line 7
      - ORA-06512: at line 7
  -> Error at line 142841: script rdbms/admin/owmcvws.plb
      - ORA-04088: error during execution of trigger 'WMSYS.NO_VM_DDL'
      - ORA-00604: error occurred at recursive SQL level 1
      - ORA-04068: existing state of packages has been discarded
      - ORA-04061: existing state of package body "WMSYS.LTUTIL" has been invalidated
      - ORA-04065: not executed, altered or dropped package body "WMSYS.LTUTIL"
      - ORA-06508: PL/SQL: could not find program unit being called: "WMSYS.LTUTIL"
      - ORA-06512: at "WMSYS.LTADM", line 9437
      - ORA-06512: at "WMSYS.OWM_DYNSQL_ACCESS", line 544
      - ORA-06512: at line 15
Solution

Disable the trigger that caused the errors, then re-run datapatch.
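
A minimal sketch of the fix, assuming the PDB name PDB01 taken from the log above (re-enable the trigger once datapatch completes):
$ sqlplus / as sysdba
SQL> alter session set container=PDB01;
SQL> alter trigger WMSYS.NO_VM_DDL disable;
SQL> exit
$ $ORACLE_HOME/OPatch/datapatch -verbose
$ sqlplus / as sysdba
SQL> alter session set container=PDB01;
SQL> alter trigger WMSYS.NO_VM_DDL enable;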

Friday, May 16, 2025

PRCH-1000 : Failed to stop resources running from Oracle home after out-of-place patching

Oracle databases were running from the old home '/u01/app/oracle/product/19.0.0/dbhome_1'; the new home '/u01/app/oracle/product/19.0.0/dbhome_2' was created by executing 'opatchauto -out-of-place' from the old home.
After patching, all databases were switched to the new home. After the old home was deinstalled successfully, 'srvctl stop home' failed with the following error:

  PRCH-1059 : Failed while checking status of Oracle home

And 'srvctl status home' also failed with the following error:

  PRCH-1059 : Failed while checking status of Oracle home
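
For reference, both commands take an Oracle home, a state file, and a node name; a hypothetical invocation (paths and node name are illustrative) looks like:
$ srvctl status home -oraclehome /u01/app/oracle/product/19.0.0/dbhome_2 -statefile /tmp/dbhome_2.state -node host01
$ srvctl stop home -oraclehome /u01/app/oracle/product/19.0.0/dbhome_2 -statefile /tmp/dbhome_2.state -node host01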

Sunday, April 7, 2024

Listener failed to start after Oracle Database out-of-place patching

After applying patches to an Oracle Database home with out-of-place patching, the listener cannot be started and fails with the error:

TNS-01201: Listener cannot find executable <ORACLE_HOME>/bin/oracle for SID <ORACLE_SID>
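
A likely first check is whether a static SID entry in listener.ora still references the removed home (listener name and paths are illustrative):
$ grep -i -A 3 SID_DESC $ORACLE_HOME/network/admin/listener.ora
Update any ORACLE_HOME value that still points to the old home, then restart the listener:
$ lsnrctl start LISTENER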

Monday, January 30, 2023

Oracle Database 12.2/18c/19c ORA-00600: [qosdExpStatRead: expcnt mismatch]

An Oracle Database 12.2, 18c, or 19c alert log reports ORA-00600: [qosdExpStatRead: expcnt mismatch]. The error message looks something like the following:
2023-01-29T15:55:03.286965-05:00
Errors in file /u01/app/oracle/diag/rdbms/cdb01/cdb01/trace/cdb01_mz00_513756.trc  (incident=97764) (PDBNAME=PDB1):
ORA-00600: internal error code, arguments: [qosdExpStatRead: expcnt mismatch], [65561], [25], [3125688], [1], [], [], [], [], [], [], []
PDB1(3):Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2023-01-29T15:55:03.291141-05:00
Errors in file /u01/app/oracle/diag/rdbms/cdb01/cdb01/trace/cdb01_mz00_513756.trc:
ORA-00600: internal error code, arguments: [qosdExpStatRead: expcnt mismatch], [65561], [25], [3125688], [1], [], [], [], [], [], [], []

Wednesday, October 12, 2022

Upgrade/patch JDK 8 used by Oracle Enterprise Manager 13.5 OMS and Agent

Starting April 2024, a Holistic patch is being introduced in OEM, consolidating all SPB, CPU, and JDK patches into a single patch. Applying this single patch upgrades JDK and OPatch to the latest versions and applies all applicable FMW product patches to the OMS home. The OMSPatcher utility is used to apply Holistic patching.
There is no longer a need to upgrade or patch the JDK separately.
This document is for upgrading the JDK used by Enterprise Manager (EM) 13.5 in both the OMS and Agent homes on Linux, Solaris, and Windows.
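
As a rough sketch, applying a patch to the OMS home with OMSPatcher looks like the following (the OMS home path and staging directory are assumptions):
$ export ORACLE_HOME=/u01/app/oracle/em135
$ cd /u01/stage/<holistic_patch_id>
$ $ORACLE_HOME/OMSPatcher/omspatcher apply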

Thursday, September 29, 2022

How to Apply Patch on Oracle Enterprise Manager 13c (13.4/13.5) Agent in online / offline mode

Oracle Enterprise Manager (EM) 13.4/13.5 Agent patches are released as one-off patches, Release Updates, or bundle patches. These patches can be applied manually or through the EM Cloud Control console.
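
In manual (offline) mode, the flow is roughly the following sketch (the agent base directory layout and staging path are assumptions):
$ /u01/app/oracle/agent/agent_inst/bin/emctl stop agent
$ export ORACLE_HOME=/u01/app/oracle/agent/agent_13.5.0.0.0
$ cd /u01/stage/<patch_id>
$ $ORACLE_HOME/OPatch/opatch apply
$ /u01/app/oracle/agent/agent_inst/bin/emctl start agent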

Wednesday, August 24, 2022

Patch Oracle WebLogic Server 12c Release 2 12.2.1.x (12.2.1.3, 12.2.1.4) including WLS, OHS, Coherence, ADR and RDA

These instructions apply to Oracle WebLogic Server (WLS) 12.2.1.x standalone (Lite, Slim, Generic), Oracle Fusion Middleware (FMW) 12.2.1.x Infrastructure, and WLS 12.2.1.x installed with Oracle Enterprise Manager (OEM) 13c.

The installation of these WLS releases includes the following components:

  OPatch
  WebLogic Server
  Coherence Server
  Oracle HTTP Server (OHS)
  Automatic Diagnostic Repository (ADR) 
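
All of these components are patched with OPatch from the same Oracle home; a minimal sketch (the middleware home path and staging directory are assumptions, and all servers running from the home should be stopped first):
$ export ORACLE_HOME=/u01/app/oracle/middleware
$ cd /u01/stage/<psu_patch_id>
$ $ORACLE_HOME/OPatch/opatch apply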

Monday, November 15, 2021

Oracle 19c root.sh not working when gridSetup.sh -switchGridHome switches to a software-only installation

When the command "griddSetup.sh -switchGridHome" is used to perform Oracle database 19c out of place (OOP) patching, it will prompt to run root.sh on each cluster node. And root.sh will switch Grid Home from old one (without new patches) to new one (applied with new patches) on each node and also patch Grid Infrastructure Management Repository (GIMR) database on last node of the cluster.

Friday, June 25, 2021

EM 12c/13c Update Oracle Home Path of Targets with SQL

Out-of-place patching can dramatically reduce database downtime, especially when multiple databases run out of the same home, because it does not require all databases to be shut down at the same time, so more and more DBAs are adopting this method. However, if you are patching a home that hosts multiple databases, updating each database's Oracle Home property in Oracle Enterprise Manager by clicking through different windows one at a time is frustrating. The obvious question is how to update all the databases at once. The answer is to run SQL (PL/SQL) against the EM repository database.
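
For example, to see which database targets still point at the old home, you can query the repository as SYSMAN (the property name and the home path in the predicate are illustrative; verify them in your own repository):
SELECT target_name, property_value AS oracle_home
  FROM sysman.mgmt$target_properties
 WHERE target_type = 'oracle_database'
   AND property_name = 'OracleHome'
   AND property_value = '/u01/app/oracle/product/19.0.0/dbhome_1';
The mass update itself is done with PL/SQL in the same repository, which the post walks through.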

Sunday, June 20, 2021

Script: Run datapatch against all running instances in parallel

This script runs datapatch against all currently running instances in parallel to apply SQL patches after binary patches have been applied to the Oracle homes. It is helpful for post-patching operations. It works for instances running out of different Oracle homes, including homes with different owners.
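
In outline, the idea is something like the following minimal sketch (it derives the instance-to-home mapping from /etc/oratab, which is an assumption; the full script also handles homes owned by different users):
#!/bin/bash
# run datapatch for every running instance, in parallel
while IFS=: read -r sid home _; do
  [[ "$sid" =~ ^#|^$ ]] && continue                  # skip comments and blank lines
  ps -ef | grep -q "[o]ra_pmon_${sid}$" || continue  # only currently running instances
  (
    export ORACLE_SID="$sid" ORACLE_HOME="$home"
    export PATH="$ORACLE_HOME/bin:$PATH"
    "$ORACLE_HOME/OPatch/datapatch" -verbose > "/tmp/datapatch_${sid}.log" 2>&1
  ) &
done < /etc/oratab
wait   # block until every background datapatch run finishes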

Wednesday, June 16, 2021

Oracle 19.11 roothas.sh failed with "Out of memory" on AIX

When applying Oracle GI Release Update 19.11.0.0.210420 on AIX, the command "roothas.sh -postpatch" failed with "Out of memory" as follows:
[root@host01]# /u01/app/oracle/product/19.0.0/grid_1/crs/install/roothas.sh -postpatch
Using configuration parameter file: /u01/app/oracle/product/19.0.0/grid_1/crs/install/crsconfig_params

Sunday, May 2, 2021

Oracle 19.11 deinstall failed with "ERROR: oracle/rat/tfa/util/ManageTfa"

After Release Update 19.11.0.0.210420 is applied to an Oracle Database 19c home, the home can no longer be deinstalled.

The deinstall utility fails with the following errors:
######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to thanos
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2021-05-02_07-34-15PM/oraInst.loc
Setting oracle.installer.local to false

ERROR: oracle/rat/tfa/util/ManageTfa
Exited from program.


############# ORACLE DEINSTALL TOOL END #############

In deinstall error log,
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at oracle.install.db.deinstall.wrapper.Deinstall.callCleanImpl(Deinstall.java:1876)
        at oracle.install.db.deinstall.wrapper.Deinstall.main(Deinstall.java:907)
Caused by: java.lang.NoClassDefFoundError: oracle/rat/tfa/util/ManageTfa
        at oracle.install.db.deinstall.core.PrepForOUIDeinstall.cleanConfig(PrepForOUIDeinstall.java:187)
        ... 6 more
Caused by: java.lang.ClassNotFoundException: oracle.rat.tfa.util.ManageTfa
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
        ... 7 more

It looks like something related to TFA (Trace File Analyzer) is missing after RU 19.11 is applied.

Workaround:

Roll back RU 19.11, then re-run the deinstall utility.
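
In commands, the workaround is roughly (the RU patch id must be taken from your own inventory; the value below is a placeholder):
$ $ORACLE_HOME/OPatch/opatch lsinventory
$ $ORACLE_HOME/OPatch/opatch rollback -id <RU_patch_id>
$ $ORACLE_HOME/deinstall/deinstall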

Oracle 19c runInstaller failed with "undefined reference to 'jox_eujs_nowait_'"

Updated on Thursday, Oct 21, 2021 with the following:

The error is also found with RU 19.13 (19.13.0.0.211019).
I believe it is the same with 19.12, though I did not test it.

When installing Oracle 19c while applying Database RU 19.11 and OJVM RU 19.11 during the install, runInstaller failed with the message:

   Error in invoking target 'irman ioracle idrdactl idrdalsnr idrdaproc' of makefile '/u01/app/oracle/product/19.11.0/dbhome_1/rdbms/lib/ins_rdbms.mk'. See /u01/app/oraInventory/logs/InstallActions2021-05-01_08-40-24PM/installActions2021-05-01_08-40-24PM.log for details.
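
When hitting this, the install actions log referenced in the message usually pinpoints the failing link step, for example:
$ grep -n 'jox_eujs_nowait_' /u01/app/oraInventory/logs/InstallActions2021-05-01_08-40-24PM/installActions2021-05-01_08-40-24PM.log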

Monday, March 29, 2021

OEM 13c: Oracle Coherence Cache Targets Showing Down after WebLogic Server PSU applied

After a WebLogic Server (WLS) Patch Set Update (PSU) is applied to the WLS home installed as part of Oracle Enterprise Manager (EM) 13c, two Oracle Coherence Cache targets in the WebLogic Domain '/EMGC_GCDomain/GCDomain' are shown as Down in the EM console.

The issue was seen, at least, after applying WLS PSU 12.2.1.3.201217 on an EM 13.4.0.9 home, and WLS PSU 12.2.1.4.210330 on an EM 13.5.0.0.0 home.

It can be fixed by removing and re-adding the targets as follows:

1.  Remove the Oracle Coherence Cache Targets showing Down from OEM

    * In the OEM console, navigate to 'Targets' > 'Middleware'
    * On the left panel, expand 'Target Type', then expand 'Coherence' and select 'Oracle Coherence Cache'. All Oracle Coherence Cache targets will be listed in the right window.
    * Right-click the name of each target showing Down, then click the pop-up menu 'Target Setup' > 'Remove Target'

    The Down targets are now deleted from the OEM console.
   
2.  Refresh the domain '/EMGC_GCDomain/GCDomain' to add removed/missing Oracle Coherence Cache objects

    * In the OEM console, navigate to 'Targets' > 'Middleware'
    * On the left panel, clear the selection you made while removing the targets. The Oracle WebLogic Domain '/EMGC_GCDomain/GCDomain' will be listed in the right window
    * Click the domain name '/EMGC_GCDomain/GCDomain'
    * Select 'Refresh WebLogic Domain' from the drop-down menu on the home page of the target '/EMGC_GCDomain/GCDomain'
    * A new screen will appear; click "Add and Update Targets..."
    * Follow the prompts to add the newly found targets
 
The removed targets will be added with new target names.

Saturday, November 14, 2020

Zero-Downtime patching Oracle Grid Infrastructure 19c on Linux

In the past, when patching Grid Infrastructure (GI) in rolling mode, the database instances had to be shut down on the node being patched.

Starting with Oracle Database 19c Release Update (RU) 19.8, Oracle announced that Oracle RAC database instances can keep running and remain accessible to database users during Oracle Grid Infrastructure patching. My patching practice was done in the following environment:

   * Two-node Grid Infrastructure 19.8 running on Oracle Linux 7 update 8
   * Host names of the nodes are rac01.lab.dbaplus.ca and rac02.lab.dbaplus.ca
   * GI RU 19.9.0.0.201020 and OJVM RU 19.9.0.0.201020 are applied in out-of-place (OOP) mode
   * ACFS and AFD are in use
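
At a high level, the OOP zero-downtime flow is (paths are illustrative; database instances keep running throughout):
$ /u01/app/19.9.0/grid/gridSetup.sh -switchGridHome
[root@rac01]# /u01/app/19.9.0/grid/root.sh -transparent -nodriverupdate
The -transparent and -nodriverupdate options are what keep the Grid Home switch transparent to the running database instances.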

Saturday, November 7, 2020

Zero-Downtime Grid Infrastructure Patching root.sh failed with CLSRSC-675: Cluster-wide Grid Plug and Play profile not found

When running root.sh with the zero-downtime GI patching options -transparent and -nodriverupdate, the following errors occurred:
2020/11/05 11:55:01 CLSRSC-675: Cluster-wide Grid Plug and Play profile not found on this node.
Died at /u01/app/19.9.0/grid/crs/install/crsgpnp.pm line 3436.
The command '/u01/app/19.9.0/grid/perl/bin/perl -I/u01/app/19.9.0/grid/perl/lib -I/u01/app/19.9.0/grid/crs/install /u01/app/19.9.0/grid/srvm/admin/rhpmovegi.pl /u01/app/19.0.0/grid /u01/app/19.9.0/grid true' execution failed
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac02/crsconfig/crs_prepatch_apply_oop_rac02_2020-11-05_11-55-01AM.log
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac02/crsconfig/crs_prepatch_apply_oop_rac02_2020-11-05_11-55-01AM.log
2020/11/05 11:55:01 CLSRSC-675: Cluster-wide Grid Plug and Play profile not found on this node.
Died at /u01/app/19.9.0/grid/crs/install/crsgpnp.pm line 3436.
The command '/u01/app/19.9.0/grid/perl/bin/perl -I/u01/app/19.9.0/grid/perl/lib -I/u01/app/19.9.0/grid/crs/install /u01/app/19.9.0/grid/crs/install/rootcrs.pl  -transparent -nodriverupdate -dstcrshome /u01/app/19.9.0/grid -prepatch' execution failed

Monday, September 28, 2020

GI 19c opatchauto out-of-place patching failed with file permission issues

Environment:

   Grid Infrastructure Home version : 19.7.0.0.200414
   OPatch version: 12.2.0.1.21
   Patch version: P31305339 - GI Release Update 19.8.0.0.200714

On the first node of the cluster, running opatchauto with the option -outofplace or -prepare-clone failed with errors:
[root@rac01]# /u01/app/19.7.0/grid_1/OPatch/opatchauto apply -phBaseDir /u01/stage/grid -outofplace -logLevel FINEST
     ... ...
Creating clone for oracle home /u01/app/19.7.0/grid_1.
Failed to perform clone operation for oracle home /u01/app/19.7.0/grid_1. 

Execution of [CloneOracleHomeAction] patch action failed, check log for more details. Failures:
Patch Target : rac01->/u01/app/19.7.0/grid_1 Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/19.7.0/grid_1, host: rac01.
Command failed:  /u01/app/19.8.0/grid_1/perl/bin/perl /u01/app/19.8.0/grid_1/clone/bin/clone.pl ORACLE_BASE=/u01/app/grid ORACLE_HOME=/u01/app/19.8.0/grid_1 INVENTORY_LOCATION=/u01/app/oraInventory -defaultHomeName 'CLUSTER_NODES={rac01,rac02}' "LOCAL_NODE=rac01" CRS=false SHOW_ROOTSH_CONFIRMATION=false oracle.install.skipRootPre=true -silent
Command failure output: 
[INFO] [INS-32183] Use of clone.pl is deprecated in this release. Clone operation is equivalent to performing a Software Only installation from the image. 
You must use /u01/app/19.8.0/grid_1/gridSetup.sh script available to perform the Software Only install. For more details on image based installation, refer to help documentation.

Starting Oracle Universal Installer...

[FATAL] [INS-32012] Unable to create directory: /u01/app/19.8.0/grid_1, on this server.
   CAUSE: Either proper permissions were not granted to create the directory or there was no space left in the volume.
   ACTION: Check your permission on the selected directory or choose another directory.
[WARNING] [INS-32029] The Installer has detected that the Oracle Base location is not empty.
   ACTION: Oracle recommends that the Oracle Base location is empty. 

After fixing the cause of failure Run opatchauto resume

]

The process failed with the error "Unable to create directory: /u01/app/19.8.0/grid_1"; this directory is the cloned (new) home, onto which the process is trying to apply RU 19.8.0.0.200714.

Check the directory permissions:
[root@rac01]# ls -ld /u01/app/19.8.0/grid_1       
drwxr-xr-x   94 root     oinstall       4096 Sep 20 18:27 /u01/app/19.8.0/grid_1

Change the directory owner from root to the Grid Home owner grid:
[root@rac01]# chown grid /u01/app/19.8.0/grid_1

Resume the patching process:
[root@rac01]# /u01/app/19.7.0/grid_1/OPatch/opatchauto resume

This time, different errors occur:
Patch Target : rac01->/u01/app/19.7.0/grid_1 Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/19.7.0/grid_1, host: rac01.
Command failed:  /u01/app/19.8.0/grid_1/perl/bin/perl /u01/app/19.8.0/grid_1/clone/bin/clone.pl ORACLE_BASE=/u01/app/grid ORACLE_HOME=/u01/app/19.8.0/grid_1 INVENTORY_LOCATION=/u01/app/oraInventory -defaultHomeName 'CLUSTER_NODES={rac01,rac02}' "LOCAL_NODE=rac01" CRS=false SHOW_ROOTSH_CONFIRMATION=false oracle.install.skipRootPre=true -silent
Command failure output: 
[INFO] [INS-32183] Use of clone.pl is deprecated in this release. Clone operation is equivalent to performing a Software Only installation from the image. 
You must use /u01/app/19.8.0/grid_1/gridSetup.sh script available to perform the Software Only install. For more details on image based installation, refer to help documentation.

Starting Oracle Universal Installer...

[WARNING] [INS-32029] The Installer has detected that the Oracle Base location is not empty.
   ACTION: Oracle recommends that the Oracle Base location is empty.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2020-09-20_06-38-13PM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........[FATAL] Failed to restore the saved templates to the Oracle home being cloned. Aborting the clone operation

After fixing the cause of failure Run opatchauto resume

]

The root cause is that opatchauto did not successfully unlock the newly cloned home before applying patches to it. Many files and directories under the new home are still owned by root:
[root@rac01]# find /u01/app/19.8.0/grid_1 -user root
/u01/app/19.8.0/grid_1/acfs
/u01/app/19.8.0/grid_1/acfs/tunables
/u01/app/19.8.0/grid_1/auth
/u01/app/19.8.0/grid_1/auth/crs
/u01/app/19.8.0/grid_1/auth/crs/r10-sohei
   ... ...
/u01/app/19.8.0/grid_1/bin/cloudcli
/u01/app/19.8.0/grid_1/bin/clscfg
/u01/app/19.8.0/grid_1/bin/clscfg.bin
/u01/app/19.8.0/grid_1/bin/clsdiaglsnr
   ... ...
/u01/app/19.8.0/grid_1/suptools/tfa/release/tfa_home/tfa.jks
/u01/app/19.8.0/grid_1/suptools/tfa/release/tfa_home/tfa_directories.txt
/u01/app/19.8.0/grid_1/suptools/tfa/release/tfa_home/tfa_setup.txt
/u01/app/19.8.0/grid_1/xag/mesg/xagus.le

Change the owner of these files/directories to grid:
[root@rac01]# find /u01/app/19.8.0/grid_1 -user root -exec chown grid {} \;
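
Note that this changes only the owning user; group ownership (oinstall) is left untouched.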

Continue opatchauto with the resume option:
[root@rac01]# /u01/app/19.7.0/grid_1/OPatch/opatchauto resume

The home is now out-of-place patched, and the new home is /u01/app/19.8.0/grid_1.

Saturday, July 18, 2020

Oracle database 19c "srvctl status service" fails with Java.lang.NullPointerException

The following errors occur when running the "srvctl status service -db <db_name>" command from a 19.7 database home on AIX and Linux x86_64:
$ srvctl status service -db orcl
Exception in thread "main" java.lang.NullPointerException
        at oracle.ops.opsctl.StatusAction.internalExecuteService(StatusAction.java:2077)
        at oracle.ops.opsctl.StatusAction.executeSIHAService(StatusAction.java:1692)
        at oracle.ops.opsctl.Action.executeSIHA(Action.java:445)
        at oracle.ops.opsctl.OPSCTLDriver.execute(OPSCTLDriver.java:537)
        at oracle.ops.opsctl.OPSCTLDriver.main(OPSCTLDriver.java:252)

Wednesday, May 27, 2020

Database 12.2 crashed with ORA-04031 ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")

A Database 12.2 instance crashed with the following messages in the alert log:
2020-05-20T22:28:52.724801-04:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_asmb_15204374.trc  (incident=320299):
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")
Incident details in: /u01/app/oracle/diag/rdbms/orcl/orcl/incident/incdir_320299/orcl_asmb_15204374_i320299.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2020-05-20T22:29:30.066864-04:00
NOTE: ASMB0 terminating
2020-05-20T22:29:30.067840-04:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_asmb_15204374.trc:
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")
2020-05-20T22:29:30.070882-04:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_asmb_15204374.trc:
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_asmb_15204374.trc  (incident=320300):
ORA-4031 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /u01/app/oracle/diag/rdbms/orcl/orcl/incident/incdir_320300/orcl_asmb_15204374_i320300.trc
2020-05-20T22:29:31.376055-04:00
USER (ospid: 15204374): terminating the instance due to error 4031

From the alert log, we can see that the ASMB process (ospid: 15204374) wrote the following error into trace file orcl_asmb_15204374.trc:

ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")

Then, the ASMB process terminated the instance.

This issue happened after Oracle Database Release Update April 2020 (12.2.0.1.200414) was applied to the database home.

It is a bug introduced by Release Update 12.2.0.1.200414 and can be fixed by applying patch 31341859.
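
In outline, the fix looks like this (the staging path is an assumption; shut down the databases running from the home before opatch apply, then restart them before datapatch):
$ cd /u01/stage/31341859
$ $ORACLE_HOME/OPatch/opatch apply
$ $ORACLE_HOME/OPatch/datapatch -verbose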

Sunday, May 17, 2020

12.2.0.1 init.ohasd does not start clusterware stack after April 2020 RU applied

The init.ohasd process does not start the clusterware stack after the April 2020 Release Update is applied to a 12.2.0.1 Grid Infrastructure home. This is not a typical Grid Infrastructure startup issue; it is a human error that occurred when the DBA, by accident, ran roothas.sh instead of rootcrs.sh while patching the GI cluster home. Before the human error was identified, the diagnosis proceeded as follows.