
Friday, June 25, 2021

EM 12c/13c Update Oracle Home Path of Targets with SQL

As out-of-place patching can dramatically reduce database downtime, especially when multiple databases run out of the same home (it does not require all of them to be shut down at the same time), more and more DBAs are adopting this method. However, if you patch a home that hosts multiple databases, it is frustrating to update each database's Oracle Home property in Oracle Enterprise Manager by clicking through the console one target at a time. The obvious question is how to update all of the databases at once. The answer is to run SQL (PL/SQL) against the EM repository database.
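
The repository SQL from the full post is not reproduced in this excerpt. As a starting point, the query below is a minimal sketch that lists where the property is stored; the SYSMAN connect string shown is illustrative, and the target type 'oracle_database' and the property name 'OracleHome' are assumptions to verify against your own repository before changing anything:

[oracle@oms01]$ sqlplus sysman@EMREP <<'EOF'
-- list the Oracle Home property recorded for database targets
SELECT target_name, property_value AS oracle_home
  FROM mgmt$target_properties
 WHERE target_type   = 'oracle_database'
   AND property_name = 'OracleHome'
 ORDER BY target_name;
EOF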

Sunday, June 20, 2021

Script: Run datapatch against all running instances in parallel

This script runs datapatch against all currently running instances in parallel to apply SQL patches after binary patches have been applied to the Oracle homes. It is helpful for post-patching operations, and it works even when the instances run out of different Oracle homes and those homes are owned by different OS users.

The script accepts one optional parameter, the log file path; if no parameter is supplied when the script is started, log files are saved under /tmp.

To run the script successfully, please be aware of the following limitations (a minimal sketch of the same idea is shown after the list):

1. The script has only been tested on Linux and AIX; it does not work on Solaris.

2. Only root or the Oracle database home owner is supposed to run this script. If the instances run out of different Oracle homes owned by different OS users, running as root is recommended; otherwise, instances running out of an Oracle home whose owner differs from the current user will be excluded.

3. The Oracle database home must be 12c or higher, which supports datapatch.

4. The script runs the following command as root to retrieve the Oracle home path:
   /bin/ls
   Therefore, if the Oracle home owner (normally oracle) runs this script, sudo has to be configured to grant that user the privilege to run '/bin/ls' as root without being prompted for a password.
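
The full script is not included in this excerpt. The outline below is a minimal sketch of the same idea for the simplest case only (Linux, a single home owner, instances registered in /etc/oratab); it is not the author's script, and the SID and home discovery is deliberately simplified:

#!/bin/sh
# Run datapatch against every running instance in parallel (simplified sketch)
LOGDIR=${1:-/tmp}

# SIDs of running instances, taken from their pmon background processes
for SID in $(ps -eo comm | sed -n 's/^ora_pmon_//p'); do
    # look up the Oracle home of this SID in /etc/oratab
    OH=$(awk -F: -v s="$SID" '$1 == s {print $2}' /etc/oratab)
    [ -z "$OH" ] && continue
    (
        export ORACLE_SID=$SID ORACLE_HOME=$OH PATH=$OH/bin:$PATH
        "$OH"/OPatch/datapatch -verbose > "$LOGDIR/datapatch_${SID}.log" 2>&1
    ) &
done
wait    # wait for all datapatch sessions to finish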

Wednesday, June 16, 2021

Oracle 19.11 roothas.sh failed with "Out of memory" on AIX

When applying Oracle GI Release Update 19.11.0.0.210420 on AIX, the command "roothas.sh -postpatch" failed with "Out of memory" as follows:
[root@host01]# /u01/app/oracle/product/19.0.0/grid_1/crs/install/roothas.sh -postpatch
Using configuration parameter file: /u01/app/oracle/product/19.0.0/grid_1/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/host01/crsconfig/hapatch_2021-06-15_01-53-27PM.log
Out of memory!
Out of memory!
Out of memory!
/u01/app/oracle/product/19.0.0/grid_1/crs/install/roothas.sh[137]: 7930494 Segmentation fault(coredump)
The command '/u01/app/oracle/product/19.0.0/grid_1/perl/bin/perl -I/u01/app/oracle/product/19.0.0/grid_1/perl/lib -I/u01/app/oracle/product/19.0.0/grid_1/crs/install -I/u01/app/oracle/product/19.0.0/grid_1/xag /u01/app/oracle/product/19.0.0/grid_1/crs/install/roothas.pl -postpatch' execution failed

The error comes from the perl process (running the script roothas.pl) that executes the root configuration scripts; the perl process does not have enough memory (especially data segments) during the script execution.

Technically, the "Out of memory" error can be seen on AIX while running any of the Oracle 19c perl scripts used for root configuration. These perl scripts are usually executed or called by the DBA through shell scripts such as root.sh, rootupgrade.sh, rootcrs.sh and roothas.sh. Although the error was rarely seen before, it has become common since Oracle GI RU 19.11 was released.

It is caused by the OS memory allocation model. On AIX, the number of data segments that a process is allowed to use limits the process memory size. The default is one data segment, the size of a data segment is 256 MB, and data segments are shared between data and stack. A process can use at most eight additional data segments (2 GB). The number of data segments a process can use is controlled by the LDR_CNTRL environment variable, which must be defined in the parent process of the process to be affected.

Therefore, the issue can be fixed by increasing the process memory size through the LDR_CNTRL environment variable. For example, the following grants eight additional data segments:

export LDR_CNTRL=MAXDATA=0x80000000
<root script>
unset LDR_CNTRL

Here,
  <root script> is the command you have to run as root that failed with "Out of memory"; in my case, it is "roothas.sh -postpatch".
  The unset command removes the LDR_CNTRL environment variable so that it does not unintentionally affect other processes.

If you feel eight additional data segments (2 GB) is too large, you can set it to four (1 GB) as follows:

export LDR_CNTRL=MAXDATA=0x40000000
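
Putting it together for the roothas.sh case shown above (the paths are those from the earlier error output; adjust them to your environment):

[root@host01]# export LDR_CNTRL=MAXDATA=0x80000000
[root@host01]# /u01/app/oracle/product/19.0.0/grid_1/crs/install/roothas.sh -postpatch
[root@host01]# unset LDR_CNTRL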

Sunday, May 2, 2021

Oracle 19.11 deinstall failed with "ERROR: oracle/rat/tfa/util/ManageTfa"

After Release Update 19.11.0.0.210420 is applied to an Oracle Database 19c home, the home can no longer be deinstalled.

The deinstall utility fails with the following errors:
######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to thanos
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2021-05-02_07-34-15PM/oraInst.loc
Setting oracle.installer.local to false

ERROR: oracle/rat/tfa/util/ManageTfa
Exited from program.


############# ORACLE DEINSTALL TOOL END #############

In deinstall error log,
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at oracle.install.db.deinstall.wrapper.Deinstall.callCleanImpl(Deinstall.java:1876)
        at oracle.install.db.deinstall.wrapper.Deinstall.main(Deinstall.java:907)
Caused by: java.lang.NoClassDefFoundError: oracle/rat/tfa/util/ManageTfa
        at oracle.install.db.deinstall.core.PrepForOUIDeinstall.cleanConfig(PrepForOUIDeinstall.java:187)
        ... 6 more
Caused by: java.lang.ClassNotFoundException: oracle.rat.tfa.util.ManageTfa
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
        ... 7 more

It looks like something related to TFA (Trace File Analyzer) is missing after RU 19.11 is applied.

Workaround:

Roll back RU 19.11, then re-run the deinstall utility.
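
A minimal sketch of the workaround; the patch ID 32545013 is the Database RU 19.11.0.0.210420 number referenced later on this page, so confirm the ID actually installed in your home with opatch lspatches first:

[oracle@host01]$ $ORACLE_HOME/OPatch/opatch lspatches
[oracle@host01]$ $ORACLE_HOME/OPatch/opatch rollback -id 32545013
[oracle@host01]$ $ORACLE_HOME/deinstall/deinstall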

Oracle 19c runInstaller failed with "undefined reference to 'jox_eujs_nowait_'"

When installing Oracle 19c and applying Database RU 19.11 and OJVM RU 19.11 during the installation, runInstaller failed with the message:

   Error in invoking target 'irman ioracle idrdactl idrdalsnr idrdaproc' of makefile '/u01/app/oracle/product/19.11.0/dbhome_1/rdbms/lib/ins_rdbms.mk'. See /u01/app/oraInventory/logs/InstallActions2021-05-01_08-40-24PM/installActions2021-05-01_08-40-24PM.log for details.

The error can be reproduced on Linux x86_64 (RU 19.11 has not been released for other platforms yet) as follows:

  1. Unzip the 19c base installation media (19.3) into a newly created home

  2. Under the new home, replace the original OPatch 12.2.0.1.17 with OPatch 12.2.0.1.24

  3. Unzip 19c Database Release Update 19.11.0.0.210420 (Patch 32545013) to a separate directory /stage/ru

  4. Unzip 19c Oracle JavaVM Component Release Update 19.11.0.0.210420 (Patch 32399816) to a separate directory /stage/ojvm

  5. Start runInstaller from the new home with the -applyRU and -applyOneOffs options

     ./runInstaller -applyRU /stage/ru/32545013 -applyOneOffs /stage/ojvm/32399816

In the installation log file (the file name is shown by runInstaller):

INFO:
 - Linking Oracle
INFO:
rm -f /u01/app/oracle/product/19.11.0/dbhome_1/rdbms/lib/oracle
INFO:
   ... ...
INFO:
/u01/app/oracle/product/19.11.0/dbhome_1/lib//libserver19.a(joxwtp.o): In function `jox_eujs_nowait':
joxwtp.c:(.text+0xf7b): undefined reference to `jox_eujs_nowait_'

INFO:
make: *** [/u01/app/oracle/product/19.11.0/dbhome_1/rdbms/lib/oracle] Error 1
INFO: End output from spawned process.
INFO: ----------------------------------
INFO: Exception thrown from action: make
Exception Name: MakefileException
Exception String: Error in invoking target 'irman ioracle idrdactl idrdalsnr idrdaproc' of makefile '/u01/app/oracle/product/19.11.0/dbhome_1/rdbms/lib/ins_rdbms.mk'. See '/u01/app/oraInventory/logs/InstallActions2021-05-01_08-40-24PM/installActions2021-05-01_08-40-24PM.log' for details.
Exception Severity: 1
INFO: Error in invoking target 'irman ioracle idrdactl idrdalsnr idrdaproc' of makefile '/u01/app/oracle/product/19.11.0/dbhome_1/rdbms/lib/ins_rdbms.mk'. See '/u01/app/oraInventory/logs/InstallActions2021-05-01_08-40-24PM/installActions2021-05-01_08-40-24PM.log' for details.

It has been reported as a bug. Until the fix is released, the workaround is to run the installer with the database RU only (without OJVM RU 19.11) and apply the OJVM RU separately after the installation.
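
A minimal sketch of the workaround using the same staging directories as the reproduction steps above (run opatch as the home owner; datapatch is only needed if databases already run out of the new home):

# install with the database RU only
./runInstaller -applyRU /stage/ru/32545013

# after the installation completes, apply OJVM RU 19.11 separately
cd /stage/ojvm/32399816
$ORACLE_HOME/OPatch/opatch apply

# if databases already run out of this home, load the SQL changes
$ORACLE_HOME/OPatch/datapatch -verbose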

Monday, March 29, 2021

OEM 13c: Oracle Coherence Cache Targets Showing Down after WebLogic Server PSU applied

After a WebLogic Server (WLS) Patch Set Update (PSU) is applied to the WLS home installed as part of Oracle Enterprise Manager (EM) 13c, two of the Oracle Coherence Cache objects in WebLogic Domain '/EMGC_GCDomain/GCDomain' are shown as Down in the EM console.

The issue was seen, at least, after applying WLS PSU 12.2.1.3.201217 on an EM 13.4.0.9 home and WLS PSU 12.2.1.4.210330 on an EM 13.5.0.0.0 home.

It can be fixed by removing and re-adding the targets as follows:

1.  Remove the Oracle Coherence Cache targets showing Down from OEM

    * In the OEM console, navigate to 'Targets' > 'Middleware'
    * On the left panel, expand 'Target Type', then expand 'Coherence' and select 'Oracle Coherence Cache'. All Oracle Coherence Cache targets will be listed in the right window.
    * Right-click the name of each target showing Down, then click the pop-up menu 'Target Setup' > 'Remove Target'

    The Down targets are deleted from the OEM console.
   
2.  Refresh the domain '/EMGC_GCDomain/GCDomain' to add the removed/missing Oracle Coherence Cache objects back

    * In the OEM console, navigate to 'Targets' > 'Middleware'
    * On the left panel, clear the selection made while removing the targets. The Oracle WebLogic Domain '/EMGC_GCDomain/GCDomain' will be listed in the right window
    * Click the domain name '/EMGC_GCDomain/GCDomain'
    * Open the drop-down menu on the home page of target '/EMGC_GCDomain/GCDomain' and select 'Refresh Weblogic Domain'
    * A new screen will appear; click "Add and Update Targets..."
    * Follow the prompts to add the newly found targets

The removed targets will be added back with new target names.

Saturday, November 14, 2020

Zero-Downtime patching Oracle Grid Infrastructure 19c on Linux

In the past, when we patched Grid Infrastructure (GI) in rolling mode, the database instances had to be shut down on the node being patched.

Starting with Oracle Database 19c Release Update (RU) 19.8, Oracle announced that Oracle RAC database instances can keep running and remain accessible to database users while Oracle Grid Infrastructure is being patched. My patching practice was done in the following environment, with a high-level command sketch after the list:

   * Two-node Grid Infrastructure 19.8 running on Oracle Linux 7 Update 8
   * Host names of the nodes are rac01.lab.dbaplus.ca and rac02.lab.dbaplus.ca
   * GI RU 19.9.0.0.201020 and OJVM RU 19.9.0.0.201020 are applied in out-of-place (OOP) mode
   * ACFS and AFD are in use
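
The detailed steps are not part of this excerpt. At a high level, the out-of-place flow looks roughly like the sketch below; this is an outline only, <19.9 GI RU> and <19.9 OJVM RU> are placeholders for the unzipped patch directories, and the gridSetup.sh options should be verified against the documentation for your exact RU level:

# as the grid owner, stage a new 19c GI home and patch it while copying
unzip -q /stage/LINUX.X64_193000_grid_home.zip -d /u01/app/19.9.0/grid
/u01/app/19.9.0/grid/gridSetup.sh -switchGridHome \
    -applyRU /stage/<19.9 GI RU> -applyOneOffs /stage/<19.9 OJVM RU>

# when prompted, run root.sh on each node with the zero-downtime options
# (the same options appear in the CLSRSC-675 post below)
/u01/app/19.9.0/grid/root.sh -transparent -nodriverupdate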

Saturday, November 7, 2020

Zero-Downtime Grid Infrastructure Patching root.sh failed with CLSRSC-675: Cluster-wide Grid Plug and Play profile not found

When running root.sh with the zero-downtime GI patching options -transparent and -nodriverupdate, the following errors occurred:
2020/11/05 11:55:01 CLSRSC-675: Cluster-wide Grid Plug and Play profile not found on this node.
Died at /u01/app/19.9.0/grid/crs/install/crsgpnp.pm line 3436.
The command '/u01/app/19.9.0/grid/perl/bin/perl -I/u01/app/19.9.0/grid/perl/lib -I/u01/app/19.9.0/grid/crs/install /u01/app/19.9.0/grid/srvm/admin/rhpmovegi.pl /u01/app/19.0.0/grid /u01/app/19.9.0/grid true' execution failed
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac02/crsconfig/crs_prepatch_apply_oop_rac02_2020-11-05_11-55-01AM.log
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac02/crsconfig/crs_prepatch_apply_oop_rac02_2020-11-05_11-55-01AM.log
2020/11/05 11:55:01 CLSRSC-675: Cluster-wide Grid Plug and Play profile not found on this node.
Died at /u01/app/19.9.0/grid/crs/install/crsgpnp.pm line 3436.
The command '/u01/app/19.9.0/grid/perl/bin/perl -I/u01/app/19.9.0/grid/perl/lib -I/u01/app/19.9.0/grid/crs/install /u01/app/19.9.0/grid/crs/install/rootcrs.pl  -transparent -nodriverupdate -dstcrshome /u01/app/19.9.0/grid -prepatch' execution failed

Monday, September 28, 2020

GI 19c opatchauto out-of-place patching failed with file permission issues

Environment,

   Grid Infrastructure Home version : 19.7.0.0.200414
   OPatch version: 12.2.0.1.21
   Patch version: P31305339 - GI Release Update 19.8.0.0.200714

On the first node of the cluster, running opatchauto with the -outofplace or -prepare-clone option failed with the following errors:
[root@rac01]# /u01/app/19.7.0/grid_1/OPatch/opatchauto apply -phBaseDir /u01/stage/grid -outofplace -logLevel FINEST
     ... ...
Creating clone for oracle home /u01/app/19.7.0/grid_1.
Failed to perform clone operation for oracle home /u01/app/19.7.0/grid_1. 

Execution of [CloneOracleHomeAction] patch action failed, check log for more details. Failures:
Patch Target : rac01->/u01/app/19.7.0/grid_1 Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/19.7.0/grid_1, host: rac01.
Command failed:  /u01/app/19.8.0/grid_1/perl/bin/perl /u01/app/19.8.0/grid_1/clone/bin/clone.pl ORACLE_BASE=/u01/app/grid ORACLE_HOME=/u01/app/19.8.0/grid_1 INVENTORY_LOCATION=/u01/app/oraInventory -defaultHomeName 'CLUSTER_NODES={rac01,rac02}' "LOCAL_NODE=rac01" CRS=false SHOW_ROOTSH_CONFIRMATION=false oracle.install.skipRootPre=true -silent
Command failure output: 
[INFO] [INS-32183] Use of clone.pl is deprecated in this release. Clone operation is equivalent to performing a Software Only installation from the image. 
You must use /u01/app/19.8.0/grid_1/gridSetup.sh script available to perform the Software Only install. For more details on image based installation, refer to help documentation.

Starting Oracle Universal Installer...

[FATAL] [INS-32012] Unable to create directory: /u01/app/19.8.0/grid_1, on this server.
   CAUSE: Either proper permissions were not granted to create the directory or there was no space left in the volume.
   ACTION: Check your permission on the selected directory or choose another directory.
[WARNING] [INS-32029] The Installer has detected that the Oracle Base location is not empty.
   ACTION: Oracle recommends that the Oracle Base location is empty. 

After fixing the cause of failure Run opatchauto resume

]

The process failed with error "Unable to create directory: /u01/app/19.8.0/grid_1"; the directory is the cloned (new) home, and the process is trying to apply RU 19.8.0.0.200714 to it.

Check the directory permission
[root@rac01]# ls -ld /u01/app/19.8.0/grid_1       
drwxr-xr-x   94 root     oinstall       4096 Sep 20 18:27 /u01/app/19.8.0/grid_1

Change the directory owner from root to the Grid home owner grid:
[root@rac01]# chown grid /u01/app/19.8.0/grid_1

Resume the patch process
[root@rac01]# /u01/app/19.7.0/grid_1/OPatch/opatchauto resume

Different errors occur
Patch Target : rac01->/u01/app/19.7.0/grid_1 Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/19.7.0/grid_1, host: rac01.
Command failed:  /u01/app/19.8.0/grid_1/perl/bin/perl /u01/app/19.8.0/grid_1/clone/bin/clone.pl ORACLE_BASE=/u01/app/grid ORACLE_HOME=/u01/app/19.8.0/grid_1 INVENTORY_LOCATION=/u01/app/oraInventory -defaultHomeName 'CLUSTER_NODES={rac01,rac02}' "LOCAL_NODE=rac01" CRS=false SHOW_ROOTSH_CONFIRMATION=false oracle.install.skipRootPre=true -silent
Command failure output: 
[INFO] [INS-32183] Use of clone.pl is deprecated in this release. Clone operation is equivalent to performing a Software Only installation from the image. 
You must use /u01/app/19.8.0/grid_1/gridSetup.sh script available to perform the Software Only install. For more details on image based installation, refer to help documentation.

Starting Oracle Universal Installer...

[WARNING] [INS-32029] The Installer has detected that the Oracle Base location is not empty.
   ACTION: Oracle recommends that the Oracle Base location is empty.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2020-09-20_06-38-13PM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........[FATAL] Failed to restore the saved templates to the Oracle home being cloned. Aborting the clone operation

After fixing the cause of failure Run opatchauto resume

]

The root cause is that opatchauto did not successfully unlock the newly cloned home before applying patches to it. Many files and directories under the new home are still owned by root:
[root@rac01]# find /u01/app/19.8.0/grid_1 -user root
/u01/app/19.8.0/grid_1/acfs
/u01/app/19.8.0/grid_1/acfs/tunables
/u01/app/19.8.0/grid_1/auth
/u01/app/19.8.0/grid_1/auth/crs
/u01/app/19.8.0/grid_1/auth/crs/r10-sohei
   ... ...
/u01/app/19.8.0/grid_1/bin/cloudcli
/u01/app/19.8.0/grid_1/bin/clscfg
/u01/app/19.8.0/grid_1/bin/clscfg.bin
/u01/app/19.8.0/grid_1/bin/clsdiaglsnr
   ... ...
/u01/app/19.8.0/grid_1/suptools/tfa/release/tfa_home/tfa.jks
/u01/app/19.8.0/grid_1/suptools/tfa/release/tfa_home/tfa_directories.txt
/u01/app/19.8.0/grid_1/suptools/tfa/release/tfa_home/tfa_setup.txt
/u01/app/19.8.0/grid_1/xag/mesg/xagus.le

Change the owner of these files/directories to grid:
[root@rac01]# find /u01/app/19.8.0/grid_1 -user root -exec chown grid {} \;

Continue opatchauto with the resume option:
[root@rac01]# /u01/app/19.7.0/grid_1/OPatch/opatchauto resume

The home is now out-of-place patched, and the new home is /u01/app/19.8.0/grid_1.
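
Once opatchauto completes, a quick sanity check (standard OPatch and crsctl commands, not part of the original post) confirms the new home carries the RU and the patch level is active:

# list the patches installed in the new home
/u01/app/19.8.0/grid_1/OPatch/opatch lspatches

# confirm the cluster active version and patch level
/u01/app/19.8.0/grid_1/bin/crsctl query crs activeversion -f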

Saturday, July 18, 2020

Oracle Database 19c "srvctl status service" fails with java.lang.NullPointerException

The following errors occur while running the "srvctl status service -db <db_name>" command from a 19.7 database home on AIX and Linux x86_64:
$ srvctl status service -db orcl
Exception in thread "main" java.lang.NullPointerException
        at oracle.ops.opsctl.StatusAction.internalExecuteService(StatusAction.java:2077)
        at oracle.ops.opsctl.StatusAction.executeSIHAService(StatusAction.java:1692)
        at oracle.ops.opsctl.Action.executeSIHA(Action.java:445)
        at oracle.ops.opsctl.OPSCTLDriver.execute(OPSCTLDriver.java:537)
        at oracle.ops.opsctl.OPSCTLDriver.main(OPSCTLDriver.java:252)

Wednesday, May 27, 2020

Database 12.2 crashed with ORA-04031 ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")

A 12.2 database instance crashed with the following messages in the alert log:
2020-05-20T22:28:52.724801-04:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_asmb_15204374.trc  (incident=320299):
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")
Incident details in: /u01/app/oracle/diag/rdbms/orcl/orcl/incident/incdir_320299/orcl_asmb_15204374_i320299.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2020-05-20T22:29:30.066864-04:00
NOTE: ASMB0 terminating
2020-05-20T22:29:30.067840-04:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_asmb_15204374.trc:
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")
2020-05-20T22:29:30.070882-04:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_asmb_15204374.trc:
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_asmb_15204374.trc  (incident=320300):
ORA-4031 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /u01/app/oracle/diag/rdbms/orcl/orcl/incident/incdir_320300/orcl_asmb_15204374_i320300.trc
2020-05-20T22:29:31.376055-04:00
USER (ospid: 15204374): terminating the instance due to error 4031

From the alert log, we can see that the asmb process (ospid: 15204374) wrote the following error into trace file orcl_asmb_15204374.trc:

ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")

Then the asmb process terminated the instance.

This issue happened after Oracle Database Release Update April 2020 (12.2.0.1.200414) was applied to the database home.

It is a bug introduced by Release Update 12.2.0.1.200414 and can be fixed by applying patch 31341859.
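
A minimal sketch of checking for and applying the fix (the staging path /stage/31341859 is illustrative; run opatch as the database home owner with the instances using this home shut down):

# check whether the fix is already present in the home
$ORACLE_HOME/OPatch/opatch lspatches | grep 31341859

# apply the one-off patch from its unzipped staging directory
cd /stage/31341859
$ORACLE_HOME/OPatch/opatch apply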

Sunday, May 17, 2020

12.2.0.1 init.ohasd does not start clusterware stack after April 2020 RU applied

The init.ohasd process does not start the clusterware stack after the April 2020 Release Update is applied to the 12.2.0.1 Grid Infrastructure home. This is not a typical Grid Infrastructure startup issue; it is a human error that happened when the DBA, by accident, ran roothas.sh instead of rootcrs.sh while patching a clustered GI home. Before the human error was identified, the diagnosis proceeded as follows.

Friday, April 24, 2020

opatchauto out-of-place patching Oracle Restart and Database home

Out-of-place (OOP) patching is a feature introduced in Oracle 12.2 together with the latest versions of opatchauto. With this method, patching is performed by the following operations:

  * Creating a copy of the current Oracle home
  * Applying patches to the copied home
  * Switching services to the copied home

This patching method helps with relocating the Oracle home and/or reducing the downtime of patching the Oracle home and databases. Opatchauto supports OOP patching for the following configurations (a command sketch for the Oracle Restart case follows the list):

  * Grid Infrastructure (GI) and Real Application Clusters (RAC)
  * Oracle Restart (standalone Grid Infrastructure, SIHA) and single-instance database (SIDB)
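
For the Oracle Restart case, the invocation looks roughly like the sketch below; the paths are illustrative, and the -phBaseDir and -outofplace options are the same ones used in the GI cluster example earlier on this page. Run opatchauto as root:

# run as root from the current (source) grid home
/u01/app/grid/product/12.2.0/grid/OPatch/opatchauto apply \
    -phBaseDir /u01/stage/patch -outofplace

# opatchauto clones the home(s), patches the clones and
# switches Oracle Restart and the databases over to them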

Tuesday, December 24, 2019

DBA_REGISTRY_SQLPATCH shows END status after datapatch ran against database

During the post-installation step of database patching, the datapatch utility shows the following messages while loading the modified SQL files into the database:
[oracle@host01]$ $ORACLE_HOME/OPatch/datapatch -verbose
SQL Patching tool version 12.2.0.1.0 Production on Sun Dec 22 14:32:47 2019
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_18481316_2019_12_22_14_32_47/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Bundle series DBRU:
  ID 191015 in the binary registry and ID 190416 in the SQL registry

Adding patches to installation queue and performing prereq checks...
Installation queue:
  Nothing to roll back
  The following patches will be applied:
    30138470 (DATABASE OCT 2019 RELEASE UPDATE 12.2.0.1.191015)

Installing patches...
Patch installation complete.  Total patches installed: 3

Validating logfiles...
Patch 30138470 apply: SUCCESS
  logfile:  (no errors)
SQL Patching tool complete on Sun Dec 22 14:33:32 2019


It shows 'Patch 30138470 apply: SUCCESS', but the log file name is missing and the patch status shows END in view DBA_REGISTRY_SQLPATCH:
SQL> select patch_id,version,action,status,description
  2   from dba_registry_sqlpatch
  3  order by action_time;

  PATCH_ID VERSION     ACTION   STATUS     DESCRIPTION
---------- ----------- -------- ---------- -------------------------------------------------
  29314339 12.2.0.1    APPLY    SUCCESS    DATABASE APR 2019 RELEASE UPDATE 12.2.0.1.190416
  30138470 12.2.0.1    APPLY    END

SQL>


Check the SQL execution log:
[oracle@host01]$ cd /u01/app/oracle/cfgtoollogs/sqlpatch/
[oracle@host01]$ ls -l
total 104
drwxr-xr-x    3 oracle   oracle          256 Jul 21 16:28 29314339
drwxr-xr-x    3 oracle   oracle          256 Dec 22 14:33 30138470
drwxr-xr-x    2 oracle   oracle         4096 Jul 21 16:24 sqlpatch_10879194_2019_07_21_16_24_15
drwxr-xr-x    2 oracle   oracle         4096 Dec 22 14:33 sqlpatch_18481316_2019_12_22_14_32_47
-rw-r--r--    1 oracle   oracle         1373 Dec 22 14:31 sqlpatch_history.txt

[oracle@host01]$ cd 30138470
[oracle@host01]$ ls -l
total 0
drwxr-xr-x    2 oracle   oracle          256 Dec 22 15:10 23136421
[oracle@host01]$ cd 23136421
[oracle@host01]$ ls -l
total 440
-rw-r--r--    1 oracle   oracle       184086 Dec 22 14:33 30138470_apply_DB01_2019Dec22_14_33_31.log
[oracle@host01]$ pwd
/u01/app/oracle/cfgtoollogs/sqlpatch/30138470/23136421
[oracle@host01]$
[oracle@host01]$ grep -n 'ORA-' 30138470_apply_DB01_2019Dec22_14_33_31.log
32:ORA-01693: max # extents (505) reached in lob segment
34:ORA-06512: at "SYS.DBMS_SQLPATCH", line 639
35:ORA-06512: at "SYS.DBMS_SQLPATCH", line 605
36:ORA-06512: at line 2
65:ORA-20004: Uninitialized state in install_file
66:ORA-06512: at "SYS.DBMS_SQLPATCH", line 738
159:ORA-20004: Uninitialized state in install_file
       ...
7305:SQL> PROMPT IGNORABLE ERRORS: ORA-04043
7306:IGNORABLE ERRORS: ORA-04043
7313:ORA-20004: Uninitialized state in install_file
7314:ORA-06512: at "SYS.DBMS_SQLPATCH", line 738
7407:ORA-20004: Uninitialized state in install_file
7408:ORA-06512: at "SYS.DBMS_SQLPATCH", line 738


ORA-01693 is found at line 32 of the log file. List the details of the error:
[oracle@host01]$ head -37 30138470_apply_DB01_2019Dec22_14_33_31.log
SQL>
SQL> SET PAGESIZE 0
SQL> SELECT 'Starting apply for patch 30138470/23136421 on ' ||
  2         SYSTIMESTAMP FROM dual;
Starting apply for patch 30138470/23136421 on 22-DEC-19 02.33.31.103477 PM -05:0
0


SQL> SET PAGESIZE 10
SQL>
SQL> BEGIN
  2      dbms_sqlpatch.patch_initialize(p_patch_id      => 30138470,
  3                                     p_patch_uid     => 23136421,
  4                                     p_flags         => '&flags',
  5                                     p_description   => '&description',
  6                                     p_action        => 'APPLY',
  7                                     p_logfile       => '&full_logfile',
  8                                     p_bundle_series => '&bundle_series');
  9  END;
 10  /
old   4:                                    p_flags         => '&flags',
new   4:                                    p_flags         => 'NB',
old   5:                                    p_description   => '&description',
new   5:                                    p_description   => 'DATABASE OCT 2019 RELEASE UPDATE 12.2.0.1.191015',
old   7:                                    p_logfile       => '&full_logfile',
new   7:                                    p_logfile       => '/u01/app/oracle/cfgtoollogs/sqlpatch/30138470/23136421/30138470_apply_DB01_2019Dec22_14_33_31.log',
old   8:                                    p_bundle_series => '&bundle_series');
new   8:                                    p_bundle_series => 'DBRU');
BEGIN
*
ERROR at line 1:
ORA-01693: max # extents (505) reached in lob segment
SYS.SYS_LOB0000190607C00013$$
ORA-06512: at "SYS.DBMS_SQLPATCH", line 639
ORA-06512: at "SYS.DBMS_SQLPATCH", line 605
ORA-06512: at line 2

[oracle@host01]$


The segment 'SYS.SYS_LOB0000190607C00013$$' has reached its maximum extent limit (505), which caused the SQL failure. Find out what the segment is:
SQL> select owner,object_name,object_type
  2    from dba_objects
  3   where object_name='SYS_LOB0000190607C00013$$';

OWNER                OBJECT_NAME                    OBJECT_TYPE
-------------------- ------------------------------ -----------------------
SYS                  SYS_LOB0000190607C00013$$      LOB

SQL> select owner,table_name,column_name,segment_name
  2    from dba_lobs
  3   where segment_name='SYS_LOB0000190607C00013$$';

OWNER      TABLE_NAME                     COLUMN_NAME          SEGMENT_NAME
---------- ------------------------------ -------------------- -------------------------
SYS        REGISTRY$SQLPATCH              PATCH_DIRECTORY      SYS_LOB0000190607C00013$$

SQL>
SQL> desc sys.registry$sqlpatch
 Name                          Null?    Type
 ----------------------------- -------- -------------------------
 PATCH_ID                      NOT NULL NUMBER                  
 ACTION                        NOT NULL VARCHAR2(15)            
 ACTION_TIME                   NOT NULL TIMESTAMP(6)            
 DESCRIPTION                            VARCHAR2(100)           
 LOGFILE                                VARCHAR2(500)           
 STATUS                                 VARCHAR2(25)            
 VERSION                       NOT NULL VARCHAR2(20)            
 PATCH_UID                     NOT NULL NUMBER                  
 FLAGS                                  VARCHAR2(10)            
 BUNDLE_SERIES                          VARCHAR2(30)            
 BUNDLE_ID                              NUMBER                  
 INSTALL_ID                             NUMBER                  
 PATCH_DIRECTORY                        BLOB                    
 BUNDLE_DATA                            XMLTYPE                 
 PATCH_DESCRIPTOR                       XMLTYPE                 
 POST_LOGFILE                           VARCHAR2(500)           
SQL>
SQL> select owner,segment_name,max_extents
  2    from dba_segments
  3   where segment_name in ('REGISTRY$SQLPATCH','SYS_LOB0000190607C00013$$');

OWNER      SEGMENT_NAME                   MAX_EXTENTS
---------- ------------------------------ -----------
SYS        REGISTRY$SQLPATCH                      505
SYS        SYS_LOB0000190607C00013$$              505


The segment is the BLOB column PATCH_DIRECTORY of table SYS.REGISTRY$SQLPATCH; both the BLOB segment and its table have a maximum extent limit of 505. To remove the limitation:
SQL> alter table sys.registry$sqlpatch storage (maxextents unlimited);

Table altered.

SQL> alter table sys.registry$sqlpatch
  2  modify lob (patch_directory) (storage (maxextents unlimited));

Table altered.

SQL> select owner,segment_name,max_extents
  2    from dba_segments
  3   where segment_name in ('REGISTRY$SQLPATCH','SYS_LOB0000190607C00013$$');

OWNER      SEGMENT_NAME                   MAX_EXTENTS
---------- ------------------------------ -----------
SYS        REGISTRY$SQLPATCH               2147483645
SYS        SYS_LOB0000190607C00013$$       2147483645



Then re-run the datapatch utility:
[oracle@host01]$ $ORACLE_HOME/OPatch/datapatch -verbose
SQL Patching tool version 12.2.0.1.0 Production on Sun Dec 22 16:43:43 2019
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_35782878_2019_12_22_16_43_43/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Bundle series DBRU:
  ID 191015 in the binary registry and ID 190416 in the SQL registry

Adding patches to installation queue and performing prereq checks...
Installation queue:
  Nothing to roll back
  The following patches will be applied:
    30138470 (DATABASE OCT 2019 RELEASE UPDATE 12.2.0.1.191015)

Installing patches...
Patch installation complete.  Total patches installed: 1

Validating logfiles...
Patch 30138470 apply: SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/30138470/23136421/30138470_apply_DB01_2019Dec22_16_44_33.log (no errors)
SQL Patching tool complete on Sun Dec 22 16:45:22 2019


Now it gives the log file name. Validate the status:
SQL> select patch_id,version,action,status,description
  2    from dba_registry_sqlpatch order by action_time;

  PATCH_ID VERSION     ACTION   STATUS     DESCRIPTION
---------- ----------- -------- ---------- -------------------------------------------------
  29314339 12.2.0.1    APPLY    SUCCESS    DATABASE APR 2019 RELEASE UPDATE 12.2.0.1.190416
  30138470 12.2.0.1    APPLY    END
  30138470 12.2.0.1    APPLY    SUCCESS    DATABASE OCT 2019 RELEASE UPDATE 12.2.0.1.191015


The patch is installed successfully.

Wednesday, May 29, 2019

AIX: Failure to Apply April 2019 Grid Infrastructure Release Update on 12.2 Oracle Restart Home

Environment

  Operating System:  IBM AIX 7.2 on POWER Systems (64-bit)
  Oracle Restart Home:  12.2.0.1 with July 2018 Grid Infrastructure Release Update (GI RU 12.2.0.1.180717)
  OPatch Version: 12.2.0.1.17

Symptoms
 
OPatch reported the following errors while applying sub-patch 29314424 (OCW Apr 2019 Release Update 12.2.0.1.190416):
UtilSession failed: ApplySession failed in system modification phase... 'ApplySession::apply failed: java.lang.UnsatisfiedLinkError: /u01/app/grid/product/12.2.0/grid/lib/libpatchgensh12.so (rtld: 0712-001 Symbol nzos_SetServerNameList was referenced
      from module /u01/app/grid/product/12.2.0/grid/lib/libhasgen12.so(), but a runtime definition
      of the symbol was not found.
rtld: 0712-001 Symbol lxkRegexpComp2 was referenced
      from module /u01/app/grid/product/12.2.0/grid/lib/libhasgen12.so(), but a runtime definition
      of the symbol was not found.
rtld: 0712-001 Symbol lxkRegexpLike2 was referenced
      from module /u01/app/grid/product/12.2.0/grid/lib/libhasgen12.so(), but a ru)

Workaround

Apply the Oct 2018 Grid Infrastructure Release Update (GI RU) first, then apply the April 2019 GI RU.
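
A minimal sketch of the workaround sequence with opatchauto (run as root; the staging directories are placeholders for the unzipped Oct 2018 and April 2019 GI RUs, and the grid home path is the one from the error output above):

# apply the Oct 2018 GI RU first ...
/u01/app/grid/product/12.2.0/grid/OPatch/opatchauto apply \
    /stage/<oct2018_gi_ru> -oh /u01/app/grid/product/12.2.0/grid

# ... then apply the April 2019 GI RU
/u01/app/grid/product/12.2.0/grid/OPatch/opatchauto apply \
    /stage/<apr2019_gi_ru> -oh /u01/app/grid/product/12.2.0/grid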

Monday, December 31, 2018

EM 13.2 AIX host shows as Linux platform after applying Agent Bundle Patch 13.2.0.0.180930

After the 13.2.0.0.180930 Agent bundle patch 28533438 is applied to an AIX agent, the host home page shows 'Operating System' as Linux instead of AIX. Some platform-specific software deployments through the EM console may also fail with messages about an incorrect platform.

The issue is caused by bug 28789751 - Applying 13.2.0.0.180930 Agent BP Causing Config Change on Non Linux hosts.

It can be fixed by rolling back the patch and waiting for the next patch release, or by using the following workaround:

1. Rename the file <AGENT_HOME>/sysman/admin/scripts/hostOSD.pm, for example:

   mv hostOSD.pm hostOSD.pm_28533438

2. Copy back the old hostOSD.pm from the backup created by opatch (a consolidated shell sketch of steps 1 and 2 appears after step 3); the backup can be found under

   <AGENT_HOME>/.patch_storage/28533438_<time-stamp>/files/oracle.sysman.top.agent/13.2.0.0.0/

Here, <time-stamp> is the time stamp of when the patch application was started. For example, I applied the patch on Dec 31, 2018, so the directory is the following:

   <AGENT_HOME>/.patch_storage/28533438_Dec_31_2018_11_21_18/files/oracle.sysman.top.agent/13.2.0.0.0/

Now <AGENT_HOME>/sysman/admin/scripts/ contains the following two hostOSD files:

  hostOSD.pm
  hostOSD.pm_28533438

3. Refresh the host configuration on the respective host target home page

Log in to the EM console, navigate to Targets -> Hosts, click the host on which you applied the patch, and do the following:

On the host home page, click 'Host' to open the drop-down menu -> 'Configuration' -> 'Latest', then click the 'Refresh' button in the new window. Wait until the refresh job has completed, then check the home page configuration.
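
For reference, steps 1 and 2 as shell commands (a minimal sketch; run as the agent owner, with <AGENT_HOME> and <time-stamp> substituted for your own agent):

cd <AGENT_HOME>/sysman/admin/scripts
mv hostOSD.pm hostOSD.pm_28533438

# locate the pre-patch copy saved by opatch and copy it back
find <AGENT_HOME>/.patch_storage/28533438_<time-stamp> -name hostOSD.pm
cp <path_found_by_find>/hostOSD.pm .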

Sunday, November 18, 2018

Patch or Upgrade Single instance Database and Oracle Restart Using OEM 13c Fleet Maintenance

Oracle Enterprise Manager (OEM) 13c Database Fleet Maintenance is a centralized mechanism for patching and upgrading Oracle homes while maintaining consistency. It allows OEM Cloud Control administrators to patch databases, Oracle Restart (standalone Grid Infrastructure) or Grid Infrastructure with minimal downtime.

Saturday, February 10, 2018

12.2 database Alert log shows ORA-20001: Latest xml inventory is not loaded into table

While a 12.2 database is being started by srvctl, the alert log shows the following messages:
Unable to obtain current patch information due to error: 20001, ORA-20001: Latest xml inventory is not loaded into table
ORA-06512: at "SYS.DBMS_QOPATCH", line 777
ORA-06512: at "SYS.DBMS_QOPATCH", line 864
ORA-06512: at "SYS.DBMS_QOPATCH", line 2222
ORA-06512: at "SYS.DBMS_QOPATCH", line 740
ORA-06512: at "SYS.DBMS_QOPATCH", line 2247
===========================================================
Dumping current patch information
===========================================================
Unable to obtain current patch information due to error: 20001
===========================================================