Saturday, November 14, 2020

Zero-Downtime patching Oracle Grid Infrastructure 19c on Linux

In the past, when we patched Grid Infrastructure (GI) in rolling mode, the database instances on the node being patched had to be shut down.

Starting with Oracle Database 19c Release Update (RU) 19.8, Oracle RAC database instances can keep running and remain accessible to database users while Oracle Grid Infrastructure is being patched. My patching exercise was done in the following environment (a command sketch of the out-of-place switch follows the list),

   * Two-node Grid Infrastructure 19.8 running on Oracle Linux 7 Update 8
   * Host names of the nodes are rac01.lab.dbaplus.ca and rac02.lab.dbaplus.ca
   * GI RU 19.9.0.0.201020 and OJVM RU 19.9.0.0.201020 are applied in out-of-place (OOP) mode
   * ACFS and AFD are being used
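
As a rough sketch of how the out-of-place switch is driven in this setup (the staging paths below are placeholders from my lab, not authoritative values; always follow the RU README), the new 19.9 home is prepared and switched with gridSetup.sh, and the root script is run with the zero-downtime options:

# /u01/app/19.9.0/grid is the unzipped 19c gold image that becomes the new (OOP) grid home
[grid@rac01]$ /u01/app/19.9.0/grid/gridSetup.sh -switchGridHome \
    -applyRU /u01/stage/<gi_ru_19.9_patch_dir> \
    -applyOneOffs /u01/stage/<ojvm_ru_19.9_patch_dir>

# when the installer prompts for the root script, run it on each node with the zero-downtime options
[root@rac01]# /u01/app/19.9.0/grid/root.sh -transparent -nodriverupdate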

Saturday, November 7, 2020

Zero-Downtime Grid Infrastructure Patching root.sh failed with CLSRSC-675: Cluster-wide Grid Plug and Play profile not found

When running root.sh with the zero-downtime GI patching options -transparent and -nodriverupdate, the following errors occurred,
2020/11/05 11:55:01 CLSRSC-675: Cluster-wide Grid Plug and Play profile not found on this node.
Died at /u01/app/19.9.0/grid/crs/install/crsgpnp.pm line 3436.
The command '/u01/app/19.9.0/grid/perl/bin/perl -I/u01/app/19.9.0/grid/perl/lib -I/u01/app/19.9.0/grid/crs/install /u01/app/19.9.0/grid/srvm/admin/rhpmovegi.pl /u01/app/19.0.0/grid /u01/app/19.9.0/grid true' execution failed
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac02/crsconfig/crs_prepatch_apply_oop_rac02_2020-11-05_11-55-01AM.log
Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac02/crsconfig/crs_prepatch_apply_oop_rac02_2020-11-05_11-55-01AM.log
2020/11/05 11:55:01 CLSRSC-675: Cluster-wide Grid Plug and Play profile not found on this node.
Died at /u01/app/19.9.0/grid/crs/install/crsgpnp.pm line 3436.
The command '/u01/app/19.9.0/grid/perl/bin/perl -I/u01/app/19.9.0/grid/perl/lib -I/u01/app/19.9.0/grid/crs/install /u01/app/19.9.0/grid/crs/install/rootcrs.pl  -transparent -nodriverupdate -dstcrshome /u01/app/19.9.0/grid -prepatch' execution failed

Thursday, November 5, 2020

Oracle AHF installation failed with "AHF-00074: Perl Modules not found: Data::Dumper"

Oracle has combined Trace File Analyzer (TFA) and ORAchk/EXAchk into a single installer called Oracle Autonomous Health Framework (AHF).

When installing AHF on the Linux platform (Red Hat Enterprise Linux / Oracle Linux), the installer failed with the error "AHF-00074: Perl Modules not found : Data::Dumper",
[root@host01]# ./ahf_setup
AHF Installer for Platform Linux Architecture x86_64
AHF Installation Log : /tmp/ahf_install_202300_4904_2020_11_05-18_04_25.log
PERL : /u01/app/19.9.0/grid/perl/bin/perl
[ERROR] : AHF-00074: Perl Modules not found :  Data::Dumper
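
Before changing anything, it is worth checking which Perl interpreters on the box can actually load Data::Dumper; the commands below are a generic sketch (perl-Data-Dumper is the RHEL/OL 7 package name), not necessarily the exact fix used later in this post:

# can the Perl picked by the installer load the module?
[root@host01]# /u01/app/19.9.0/grid/perl/bin/perl -MData::Dumper -e 'print "ok\n"'

# can the OS Perl load it? If not, the OS package can be installed
[root@host01]# /usr/bin/perl -MData::Dumper -e 'print "ok\n"'
[root@host01]# yum install -y perl-Data-Dumper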

Saturday, October 31, 2020

Friday, October 30, 2020

Pluggable database opened with RESTRICTED SESSION due to Sync PDB failed with ORA-65177 during 'alter user'

When opening the pluggable database, the following error occurred,
SQL> alter pluggable database ORCL open;

Warning: PDB altered with errors.

SQL> 
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORCL                           READ WRITE YES

The pluggable database ORCL is opened with RESTRICTED SESSION. Check the view pdb_plug_in_violations for error details,
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> col name for a10
SQL> col time for a18
SQL> col cause for a15
SQL> col message for a75
SQL> set linesize 255
SQL> alter session set nls_timestamp_format='dd-mon-yy hh24:mi:ss';

Session altered.

SQL> select time,name,type,cause,status,message from pdb_plug_in_violations where status <>'RESOLVED';

TIME               NAME       TYPE      CAUSE           STATUS    MESSAGE
------------------ ---------- --------- --------------- --------- ---------------------------------------------------------------------------
30-oct-20 12:00:52 ORCL       ERROR     Sync Failure    PENDING   Sync PDB failed with ORA-65177 during 'alter user ABCDE_APP account unlock'

When the PDB ORCL is opened, it tries to synchronize the SQL "alter user ABCDE_APP account unlock" from CDB$ROOT into the PDB, because the statement was run in CDB$ROOT but has not yet been replayed in the PDB. Check the status of user ABCDE_APP,
SQL> col username for a10
SQL> select con_id,username,account_status,created,common from cdb_users where username='ABCDE_APP';

    CON_ID USERNAME   ACCOUNT_STATUS     CREATED   COM
---------- ---------- ------------------ --------- ---
         1 ABCDE_APP  OPEN               11-JUN-19 YES

SQL> alter session set container=ORCL;

Session altered.

SQL>
SQL> select con_id,username,account_status,created,common from cdb_users where username='ABCDE_APP';

    CON_ID USERNAME   ACCOUNT_STATUS    CREATED   COM
---------- ---------- ----------------- --------- ---
         3 ABCDE_APP  OPEN              04-AUG-20 NO


The user ABCDE_APP was created as a common user in the root container, but in the PDB a local user with the same name was created. The two accounts share the same user name but are completely different users in the database. When the DBA runs 'alter user ABCDE_APP account unlock' in the root container, the database tries to sync the statement to all PDBs, but it cannot be applied in PDB ORCL because ABCDE_APP there is not the same user.

As a workaround, the issue can be temporarily solved by deleting the sync-pending statement (alter user ABCDE_APP account unlock) from PDB_SYNC$ in both the CDB and the PDB.

In CDB root container,
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL> 
SQL> col name for a10
SQL> col sqlstmt for a40
SQL> 
SQL> alter session set nls_date_format='dd-mon-yy hh24:mi:ss';

Session altered.

SQL> 
SQL> select ctime,sqlstmt,name,flags,opcode,replay# from pdb_sync$ where sqlstmt like 'alter user ABCDE_APP account unlock%';

CTIME              SQLSTMT                                  NAME            FLAGS      OPCODE     REPLAY#
------------------ ---------------------------------------- ---------- ---------- ---------- ----------
30-oct-20 10:24:42 alter user ABCDE_APP account unlock      ABCDE_APP           0          5          21

SQL> delete from PDB_SYNC$ where sqlstmt like ('alter user ABCDE_APP account unlock%');

1 row deleted.

SQL> commit;

Commit complete.

In pdb ORCL
SQL> alter session set container=ORCL;

Session altered.

SQL> select ctime,sqlstmt,name,flags,opcode,replay# from pdb_sync$ where sqlstmt like 'alter user ABCDE_APP account unlock%';

CTIME              SQLSTMT                                  NAME            FLAGS      OPCODE     REPLAY#
------------------ ---------------------------------------- ---------- ---------- ---------- ----------
30-oct-20 10:24:42 alter user ABCDE_APP account unlock      ABCDE_APP           0          5          21

SQL> delete from PDB_SYNC$ where sqlstmt like 'alter user ABCDE_APP account unlock%';

1 row deleted.

SQL> commit;

Commit complete.

Re-open the pdb
SQL> alter pluggable database ORCL close;

Pluggable database altered.

SQL> alter pluggable database ORCL open;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORCL                           READ WRITE NO

The PDB is now open without RESTRICTED SESSION. However, the issue may happen again whenever an 'alter user ABCDE_APP' statement is run in the root container. The permanent solution is to drop the local user from the PDB and grant the common user access to the PDB, as sketched below.
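
One possible shape of that permanent fix, heavily simplified (it assumes the local user's objects are no longer needed, since the CASCADE drop removes them, and that the pending DDL for the common user can then be replayed into the PDB on the next open):

SQL> alter session set container=ORCL;
SQL> drop user ABCDE_APP cascade;              -- drop the local user and its objects
SQL> alter session set container=CDB$ROOT;
SQL> alter pluggable database ORCL close immediate;
SQL> alter pluggable database ORCL open;       -- pending common-user DDL can now sync
SQL> alter session set container=ORCL;
SQL> grant create session to ABCDE_APP;        -- the name now resolves to the common user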

Tuesday, October 27, 2020

Reduce number of privileged access audit files for ASM with rsyslog on Linux

This post shows how to configure rsyslog on Linux (Red Hat Enterprise Linux 7/8) to redirect ASM audit messages to OS log files; the core pieces are sketched below.

If you are looking for the equivalent information for Solaris or AIX, please check out the following posts.
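
The Linux setup comes down to roughly the sketch below; the facility local0 and the target log file are arbitrary choices of mine, not values taken from the full post:

-- in the ASM instance (static parameter, takes effect after an ASM restart)
SQL> alter system set audit_syslog_level='local0.info' scope=spfile sid='*';

# /etc/rsyslog.conf (or a drop-in under /etc/rsyslog.d/): route that facility to a dedicated file
local0.info    /var/log/oracle_asm_audit.log

# reload rsyslog to pick up the new rule
[root@host01]# systemctl restart rsyslog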

Tuesday, October 20, 2020

Install and Configure Kerberos Authentication on Red Hat Enterprise Linux 8

The demonstration includes three installation/configuration parts (a minimal client krb5.conf sketch follows the list),

  Part 1. Install and configure Kerberos Key Distribution Center (KDC) Server
  
     Server name: xdc01.lab.dbaplus.ca
     OS version:  Red Hat Enterprise Linux (RHEL) 8

  Part 2. Install and configure Kerberos client on application (SSH) server
  
     Server 1: host01.lab.dbaplus.ca
     Server 2: host02.lab.dbaplus.ca
     OS version:  Red Hat Enterprise Linux (RHEL) 8
  
  Part 3. Install and configure Kerberos client on workstation to test single-sign-on (SSO) for SSH authentication with Kerberos
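
As a preview of Part 2, a minimal client /etc/krb5.conf might look like the sketch below; the realm name LAB.DBAPLUS.CA is my assumption, derived from the host names above:

[libdefaults]
    default_realm = LAB.DBAPLUS.CA
    dns_lookup_kdc = false

[realms]
    LAB.DBAPLUS.CA = {
        kdc = xdc01.lab.dbaplus.ca
        admin_server = xdc01.lab.dbaplus.ca
    }

[domain_realm]
    .lab.dbaplus.ca = LAB.DBAPLUS.CA
    lab.dbaplus.ca = LAB.DBAPLUS.CA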

Monday, September 28, 2020

GI 19c opatchauto out-of-place patching failed with file permission issues

Environment,

   Grid Infrastructure Home version : 19.7.0.0.200414
   OPatch version: 12.2.0.1.21
   Patch version: P31305339 - GI Release Update 19.8.0.0.200714

On the first node of the cluster, when running opatchauto with the -outofplace or -prepare-clone option, the following errors occurred,
[root@rac01]# /u01/app/19.7.0/grid_1/OPatch/opatchauto apply -phBaseDir /u01/stage/grid -outofplace -logLevel FINEST
     ... ...
Creating clone for oracle home /u01/app/19.7.0/grid_1.
Failed to perform clone operation for oracle home /u01/app/19.7.0/grid_1. 

Execution of [CloneOracleHomeAction] patch action failed, check log for more details. Failures:
Patch Target : rac01->/u01/app/19.7.0/grid_1 Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/19.7.0/grid_1, host: rac01.
Command failed:  /u01/app/19.8.0/grid_1/perl/bin/perl /u01/app/19.8.0/grid_1/clone/bin/clone.pl ORACLE_BASE=/u01/app/grid ORACLE_HOME=/u01/app/19.8.0/grid_1 INVENTORY_LOCATION=/u01/app/oraInventory -defaultHomeName 'CLUSTER_NODES={rac01,rac02}' "LOCAL_NODE=rac01" CRS=false SHOW_ROOTSH_CONFIRMATION=false oracle.install.skipRootPre=true -silent
Command failure output: 
[INFO] [INS-32183] Use of clone.pl is deprecated in this release. Clone operation is equivalent to performing a Software Only installation from the image. 
You must use /u01/app/19.8.0/grid_1/gridSetup.sh script available to perform the Software Only install. For more details on image based installation, refer to help documentation.

Starting Oracle Universal Installer...

[FATAL] [INS-32012] Unable to create directory: /u01/app/19.8.0/grid_1, on this server.
   CAUSE: Either proper permissions were not granted to create the directory or there was no space left in the volume.
   ACTION: Check your permission on the selected directory or choose another directory.
[WARNING] [INS-32029] The Installer has detected that the Oracle Base location is not empty.
   ACTION: Oracle recommends that the Oracle Base location is empty. 

After fixing the cause of failure Run opatchauto resume

]

The process failed with the error "Unable to create directory: /u01/app/19.8.0/grid_1"; this directory is the cloned (new) home, onto which the process is trying to apply RU 19.8.0.0.200714.

Check the directory permission
[root@rac01]# ls -ld /u01/app/19.8.0/grid_1       
drwxr-xr-x   94 root     oinstall       4096 Sep 20 18:27 /u01/app/19.8.0/grid_1

Change the directory owner from root to the Grid home owner grid,
[root@rac01]$ chown grid /u01/app/19.8.0/grid_1       

Resume the patch process
[root@rac01]# /u01/app/19.7.0/grid_1/OPatch/opatchauto resume

Different errors then occurred,
Patch Target : rac01->/u01/app/19.7.0/grid_1 Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/19.7.0/grid_1, host: rac01.
Command failed:  /u01/app/19.8.0/grid_1/perl/bin/perl /u01/app/19.8.0/grid_1/clone/bin/clone.pl ORACLE_BASE=/u01/app/grid ORACLE_HOME=/u01/app/19.8.0/grid_1 INVENTORY_LOCATION=/u01/app/oraInventory -defaultHomeName 'CLUSTER_NODES={rac01,rac02}' "LOCAL_NODE=rac01" CRS=false SHOW_ROOTSH_CONFIRMATION=false oracle.install.skipRootPre=true -silent
Command failure output: 
[INFO] [INS-32183] Use of clone.pl is deprecated in this release. Clone operation is equivalent to performing a Software Only installation from the image. 
You must use /u01/app/19.8.0/grid_1/gridSetup.sh script available to perform the Software Only install. For more details on image based installation, refer to help documentation.

Starting Oracle Universal Installer...

[WARNING] [INS-32029] The Installer has detected that the Oracle Base location is not empty.
   ACTION: Oracle recommends that the Oracle Base location is empty.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2020-09-20_06-38-13PM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........[FATAL] Failed to restore the saved templates to the Oracle home being cloned. Aborting the clone operation

After fixing the cause of failure Run opatchauto resume

]

The root cause is that opatchauto did not successfully unlock the newly cloned home before applying patches to it. Many files/directories under the new home are still owned by root,
[root@rac01]# find /u01/app/19.8.0/grid_1 -user root
/u01/app/19.8.0/grid_1/acfs
/u01/app/19.8.0/grid_1/acfs/tunables
/u01/app/19.8.0/grid_1/auth
/u01/app/19.8.0/grid_1/auth/crs
/u01/app/19.8.0/grid_1/auth/crs/r10-sohei
   ... ...
/u01/app/19.8.0/grid_1/bin/cloudcli
/u01/app/19.8.0/grid_1/bin/clscfg
/u01/app/19.8.0/grid_1/bin/clscfg.bin
/u01/app/19.8.0/grid_1/bin/clsdiaglsnr
   ... ...
/u01/app/19.8.0/grid_1/suptools/tfa/release/tfa_home/tfa.jks
/u01/app/19.8.0/grid_1/suptools/tfa/release/tfa_home/tfa_directories.txt
/u01/app/19.8.0/grid_1/suptools/tfa/release/tfa_home/tfa_setup.txt
/u01/app/19.8.0/grid_1/xag/mesg/xagus.le

Change the owner of these files/directories to grid,
[root@rac01]# find /u01/app/19.8.0/grid_1 -user root -exec chown grid {} \;

Continue opatchauto with the resume option,
[root@rac01]# /u01/app/19.7.0/grid_1/OPatch/opatchauto resume

The home is now out-of-place patched, and the new home is /u01/app/19.8.0/grid_1.

Sunday, August 2, 2020

HTTP 404 when enabling Friendly URLs or referencing Static Application Files using substitution string #APP_IMAGES#

As is well known, Oracle is deprecating the two web listeners Oracle HTTP Server (mod_plsql) and the Embedded PL/SQL Gateway, and Oracle REST Data Services (ORDS) will be the only supported web listener for APEX.

Tuesday, July 21, 2020

OEM 13c Configure agent to monitor WebLogic Servers secured with custom certificate

When adding an "Oracle Fusion Middleware/WebLogic Domain" target to EM 13c with the t3s (instead of t3) JMX protocol, you may get the following errors,

Failed to discover WebLogic: java.lang.RuntimeException: java.lang.RuntimeException: javax.naming.CommunicationException t3s://wls.dbaplus.ca:7002: Destination 192.15.49.108, 7002 unreachable; nested exception is: javax.net.ssl.SSLHandshakeException: Received fatal alert: protocol_version; No available router to destination. No targets discovered. Check host, port, credentials, and protocol and be sure the server is running.

Check agent trace file <AGENT_INST>/sysman/log/emagent_perl.trc

oracle_wls.pl: 2020-04-26 11:23:42,899: WARN:  Start_discovery_output---------------
oracle_wls.pl: 2020-04-26 11:23:42,899: WARN:  No logging has been configured and default agent logging support is unavailable.
 Handshake failed: TLSv1.3, error = No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
 Handshake failed: TLSv1.2, error = PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
 Handshake failed: TLSv1.1, error = PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
 Handshake failed: TLSv1, error = Received fatal alert: protocol_version
 Handshake failed: TLSv1.3, error = No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
 Handshake failed: TLSv1.2, error = PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
 Handshake failed: TLSv1.1, error = PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
 Handshake failed: TLSv1, error = Received fatal alert: protocol_version
 FATAL: Error in discovery : Exception=java.lang.RuntimeException: java.lang.RuntimeException: javax.naming.CommunicationException t3s://wls.dbaplus.ca:7002: Destination 192.15.49.108, 7002 unreachable; nested exception is:
        javax.net.ssl.SSLHandshakeException: Received fatal alert: protocol_version; No available router to destination

If the WebLogic Server (WLS) is configured with a custom certificate and the message 'unable to find valid certification path to requested target' appears as in the sample trace file above, the issue can be fixed by importing the certificate of each CA involved in issuing the custom certificate into the agent's local keystore with the following command,

  $AGENT_HOME/bin/emctl secure add_trust_cert_to_jks -trust_certs_loc <ca_certificate_file> -alias <certificate_alias> [-password <keystore_pwd>]

Here, <certificate_alias> identifies the certificate saved in the keystore and must be unique for each certificate; <keystore_pwd> is the keystore password, whose default value is welcome.

For example

emctl stop agent
emctl secure add_trust_cert_to_jks -password welcome -alias dbaplus-root -trust_certs_loc /home/oracle/Root_CA_Certificate.txt
emctl secure add_trust_cert_to_jks -password welcome -alias dbaplus-intermediate -trust_certs_loc /home/oracle/Intermediate_CA_Certificate.txt
emctl start agent

To list the certificates imported into the agent monitoring keystore,

 $AGENT_HOME/jdk/bin/keytool -list -alias <certificate_alias> -keystore   $AGENT_INSTANCE_HOME/sysman/config/montrust/AgentTrust.jks -storepass welcome -v

If needed, the certificates can be removed from the keystore as follows,

 $AGENT_HOME/jdk/bin/keytool -delete -alias <certificate_alias> -keystore   $AGENT_INSTANCE_HOME/sysman/config/montrust/AgentTrust.jks -storepass welcome -v

Instead of checking the trace file "emagent_perl.trc", more details of the target discovery errors can be retrieved as follows.

In the "Advanced" section of EM Discovery wizard, configure "External Parameters" or "Discovery Debug File Name" to assign a debug file name. Agent will write discovery message into the file with given file name under directory <AGENT_INST>/sysman/log.

The format for assigning the debug file name in "External Parameters" is "-DFMWDiscoveryAgLogFile=<log file name>", and it can be combined with other parameters here. These parameters are passed to the Java process that connects to the Administration Server; all of them must begin with -D.

Value for "Discovery Debug File Name" is pretty straight, just enter a valid OS file name. The agent side discovery messages for this session will be logged into this file. If this file already exists, it will be updated.

Saturday, July 18, 2020

Oracle database 19c "srvctl status service" fails with Java.lang.NullPointerException

The following errors occur while running the "srvctl status service -db <db_name>" command from a 19.7 database home on AIX and Linux x86_64,
$ srvctl status service -db orcl
Exception in thread "main" java.lang.NullPointerException
        at oracle.ops.opsctl.StatusAction.internalExecuteService(StatusAction.java:2077)
        at oracle.ops.opsctl.StatusAction.executeSIHAService(StatusAction.java:1692)
        at oracle.ops.opsctl.Action.executeSIHA(Action.java:445)
        at oracle.ops.opsctl.OPSCTLDriver.execute(OPSCTLDriver.java:537)
        at oracle.ops.opsctl.OPSCTLDriver.main(OPSCTLDriver.java:252)

Thursday, July 16, 2020

Configure SSL on WebLogic 12c with CA Issued Certificate

WebLogic 12c ships with a demo identity keystore, which makes it very easy to implement SSL communication for WebLogic managed servers in a non-production environment. However, if you are deploying your application in production, especially one that is publicly accessible, a certificate issued by a Certificate Authority (CA) is absolutely necessary. To request and install a CA-issued certificate on WebLogic 12c, follow these steps,
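
The request side of those steps typically revolves around the JDK keytool; the sketch below uses keystore paths, aliases and a distinguished name that are assumptions for illustration, not values from this post:

# create a custom identity keystore containing a new private key
[oracle@host01]$ keytool -genkeypair -alias wls_identity -keyalg RSA -keysize 2048 \
    -dname "CN=wls.dbaplus.ca,OU=IT,O=dbaplus,C=CA" \
    -keystore /u01/keystores/identity.jks -storepass <store_pwd> -keypass <key_pwd>

# generate the certificate signing request (CSR) to send to the CA
[oracle@host01]$ keytool -certreq -alias wls_identity -file wls_dbaplus_ca.csr \
    -keystore /u01/keystores/identity.jks -storepass <store_pwd>

# once the CA returns the signed certificate, import the CA chain and then the signed certificate
[oracle@host01]$ keytool -importcert -trustcacerts -alias dbaplus_root_ca -file Root_CA_Certificate.txt \
    -keystore /u01/keystores/trust.jks -storepass <store_pwd>
[oracle@host01]$ keytool -importcert -alias wls_identity -file wls_signed_certificate.cer \
    -keystore /u01/keystores/identity.jks -storepass <store_pwd>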

Sunday, July 12, 2020

Install APEX with deploying ORDS in WebLogic as Web listener

The installation is performed on Oracle Linux 7.5, and the following components/applications are installed,

    JDK 1.8.0_241
    Weblogic 12.2.1.4.0
    Oracle Application Express (APEX) 19.2
    Oracle REST Data Services (ORDS) 19.4

Saturday, June 20, 2020

Agent status shows 'Diagnose for Agent Blocked (Bounce Counter Mismatch)' in OEM 13c

The agent 13.2 home was restored from a backup after a failed OS patching maintenance. On the Enterprise Manager 13.2 console, the agent status shows 'Diagnose for Agent Blocked (Bounce Counter Mismatch)', and targets monitored by the agent show 'Diagnose for Status Pending (Post Blackout)'.

Wednesday, May 27, 2020

Database 12.2 crashed with ORA-04031 ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")

A database 12.2 instance crashed with the following messages in the alert log,
2020-05-20T22:28:52.724801-04:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_asmb_15204374.trc  (incident=320299):
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")
Incident details in: /u01/app/oracle/diag/rdbms/orcl/orcl/incident/incdir_320299/orcl_asmb_15204374_i320299.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2020-05-20T22:29:30.066864-04:00
NOTE: ASMB0 terminating
2020-05-20T22:29:30.067840-04:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_asmb_15204374.trc:
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")
2020-05-20T22:29:30.070882-04:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_asmb_15204374.trc:
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_asmb_15204374.trc  (incident=320300):
ORA-4031 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /u01/app/oracle/diag/rdbms/orcl/orcl/incident/incdir_320300/orcl_asmb_15204374_i320300.trc
2020-05-20T22:29:31.376055-04:00
USER (ospid: 15204374): terminating the instance due to error 4031

From the alert log, we can see that the asmb process (ospid: 15204374) wrote the following errors into the trace file orcl_asmb_15204374.trc,

ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","unknown object","init_heap_kfsg","ASM extent pointer array")

Then, the asmb process terminated the instance.

This issue happened after Oracle Database Release Update April 2020 (12.2.0.1.200414) was applied to the database home.

It is a bug introduced by Release Update 12.2.0.1.200414 and can be fixed by applying patch 31341859, as sketched below.
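
A minimal sketch of applying the one-off patch (the staging path /u01/stage/31341859 is an assumption; the patch README is the authority on pre/post steps):

# stop the databases running from the home, then apply the one-off as the home owner
[oracle@host01]$ cd /u01/stage/31341859
[oracle@host01]$ $ORACLE_HOME/OPatch/opatch apply

# restart the databases and complete any SQL portion if the README requires it
[oracle@host01]$ $ORACLE_HOME/OPatch/datapatch -verbose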

Sunday, May 17, 2020

12.2.0.1 init.ohasd does not start clusterware stack after April 2020 RU applied

The init.ohasd process does not start the clusterware stack after the April 2020 Release Update is applied to the 12.2.0.1 Grid Infrastructure home. This is not a typical Grid Infrastructure startup issue; it is a human error that happened when the DBA, by accident, ran roothas.sh instead of rootcrs.sh while patching the GI cluster home. Before the human error was identified, the diagnosis proceeded as follows.

Wednesday, May 13, 2020

impdp failed at dbms_logrep_imp.instantiate_schema with ORA-01031: insufficient privileges

When running impdp to import into a 12.2 database, ORA-01031 occurred as follows,
[oracle@host01]$ impdp system@pdb01 dumpfile=appdata.dmp schemas=appdata

Import: Release 12.2.0.1.0 - Production on Tue May 12 17:58:56 2020

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_03":  system/********@pdb01 dumpfile=appdata.dmp schemas=appdata
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
ORA-39083: Object type PROCACT_SCHEMA failed to create with error:
ORA-31625: Schema APPDATA is needed to import this object, but is unaccessible
ORA-01031: insufficient privileges

Failing sql is:
BEGIN
sys.dbms_logrep_imp.instantiate_schema(schema_name=>SYS_CONTEXT('USERENV','CURRENT_SCHEMA'), export_db_name=>'DB02', inst_scn=>'386040157882');COMMIT; END;
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-31625: Schema APPDATA is needed to import this object, but is unaccessible
ORA-01031: insufficient privileges

Failing sql is:
GRANT SELECT ON "APPDATA"."WORKLOAD_SEQ" TO "USER01"

ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-31625: Schema APPDATA is needed to import this object, but is unaccessible
ORA-01031: insufficient privileges

Failing sql is:
GRANT SELECT ON "APPDATA"."LOG_SEQ" TO "USER02"

The user SYSTEM, which is used by impdp to connect to the database, does not have the BECOME USER privilege,
system@PDB01> select * from dba_sys_privs where privilege='BECOME USER';

GRANTEE    PRIVILEGE       ADMIN_OPTION COMMON INHERITED
---------- --------------- ------------ ------ ---
SYS        BECOME USER     NO           YES    YES

In a healthy database, the privilege grants look like this,
system@PDB01> select * from dba_sys_privs where privilege='BECOME USER';

GRANTEE              PRIVILEGE    ADMIN_OPTION COMMON INHERITED
-------------------- ------------ ------------ ------ ---
DBA                  BECOME USER  NO           YES    YES
IMP_FULL_DATABASE    BECOME USER  NO           YES    YES
SYS                  BECOME USER  NO           YES    YES

It can be fixed by running the grant command,
grant BECOME USER to DBA,IMP_FULL_DATABASE;

This privilege is normally granted to the roles DBA and IMP_FULL_DATABASE, but for some reason it can end up revoked. One scenario is that DBUA runs the script dvu121.sql, which revokes the BECOME USER privilege from the DBA and IMP_FULL_DATABASE roles, when upgrading a database to 12.2 while the Database Vault component is VALID.

Friday, April 24, 2020

opatchauto out-of-place patching Oracle Restart and Database home

Out-of-place (OOP) patching is a feature introduced in Oracle 12.2 with recent versions of opatchauto. It is a patching method in which patching is performed through the following operations,

  * Creating a copy of the current Oracle home
  * Applying patches to the copied home
  * Switching services over to the copied home

This patching method helps with moving the Oracle home and/or reducing the downtime of patching the Oracle home and database. OPatchauto supports OOP patching for the following configurations (a command sketch follows the list),

  * Grid Infrastructure (GI) and Real Application Cluster (RAC)
  * Oracle Restart (Standalone Grid Infrastructure) (SIHA) and single instance database (SIDB)
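
For an Oracle Restart or RAC home, the single-command form mirrors the invocation used in the September 28, 2020 post above (the patch staging directory /u01/stage/ru is an assumption); newer opatchauto versions can also split the work into -prepare-clone and -switch-clone phases so the clone is built ahead of the maintenance window:

[root@host01]# $ORACLE_HOME/OPatch/opatchauto apply -phBaseDir /u01/stage/ru -outofplace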

Tuesday, December 24, 2019

DBA_REGISTRY_SQLPATCH shows END status after datapatch ran against database

During the post-installation step of database patching, the datapatch utility showed the following messages while loading modified SQL files into the database,
[oracle@host01]$ $ORACLE_HOME/OPatch/datapatch -verbose
SQL Patching tool version 12.2.0.1.0 Production on Sun Dec 22 14:32:47 2019
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_18481316_2019_12_22_14_32_47/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Bundle series DBRU:
  ID 191015 in the binary registry and ID 190416 in the SQL registry

Adding patches to installation queue and performing prereq checks...
Installation queue:
  Nothing to roll back
  The following patches will be applied:
    30138470 (DATABASE OCT 2019 RELEASE UPDATE 12.2.0.1.191015)

Installing patches...
Patch installation complete.  Total patches installed: 3

Validating logfiles...
Patch 30138470 apply: SUCCESS
  logfile:  (no errors)
SQL Patching tool complete on Sun Dec 22 14:33:32 2019


It shows 'Patch 30138470 apply: SUCCESS', but the log file name is missing and the patch status shows END in the view DBA_REGISTRY_SQLPATCH,
SQL> select patch_id,version,action,status,description
  2   from dba_registry_sqlpatch
  3  order by action_time;

  PATCH_ID VERSION     ACTION   STATUS     DESCRIPTION
---------- ----------- -------- ---------- -------------------------------------------------
  29314339 12.2.0.1    APPLY    SUCCESS    DATABASE APR 2019 RELEASE UPDATE 12.2.0.1.190416
  30138470 12.2.0.1    APPLY    END

SQL>


Check the sql execution log,
[oracle@host01]$ cd /u01/app/oracle/cfgtoollogs/sqlpatch/
[oracle@host01]$ ls -l
total 104
drwxr-xr-x    3 oracle   oracle          256 Jul 21 16:28 29314339
drwxr-xr-x    3 oracle   oracle          256 Dec 22 14:33 30138470
drwxr-xr-x    2 oracle   oracle         4096 Jul 21 16:24 sqlpatch_10879194_2019_07_21_16_24_15
drwxr-xr-x    2 oracle   oracle         4096 Dec 22 14:33 sqlpatch_18481316_2019_12_22_14_32_47
-rw-r--r--    1 oracle   oracle         1373 Dec 22 14:31 sqlpatch_history.txt

[oracle@host01]$ cd 30138470
[oracle@host01]$ ls -l
total 0
drwxr-xr-x    2 oracle   oracle          256 Dec 22 15:10 23136421
[oracle@host01]$ cd 23136421
[oracle@host01]$ ls -l
total 440
-rw-r--r--    1 oracle   oracle       184086 Dec 22 14:33 30138470_apply_DB01_2019Dec22_14_33_31.log
[oracle@host01]$ pwd
/u01/app/oracle/cfgtoollogs/sqlpatch/30138470/23136421
[oracle@host01]$
[oracle@host01]$ grep -n 'ORA-' 30138470_apply_DB01_2019Dec22_14_33_31.log
32:ORA-01693: max # extents (505) reached in lob segment
34:ORA-06512: at "SYS.DBMS_SQLPATCH", line 639
35:ORA-06512: at "SYS.DBMS_SQLPATCH", line 605
36:ORA-06512: at line 2
65:ORA-20004: Uninitialized state in install_file
66:ORA-06512: at "SYS.DBMS_SQLPATCH", line 738
159:ORA-20004: Uninitialized state in install_file
       ...
7305:SQL> PROMPT IGNORABLE ERRORS: ORA-04043
7306:IGNORABLE ERRORS: ORA-04043
7313:ORA-20004: Uninitialized state in install_file
7314:ORA-06512: at "SYS.DBMS_SQLPATCH", line 738
7407:ORA-20004: Uninitialized state in install_file
7408:ORA-06512: at "SYS.DBMS_SQLPATCH", line 738


ORA-01693 is found at line 32 of the log file. List the details of the errors,
[oracle@host01]$ head -37 30138470_apply_DB01_2019Dec22_14_33_31.log
SQL>
SQL> SET PAGESIZE 0
SQL> SELECT 'Starting apply for patch 30138470/23136421 on ' ||
  2         SYSTIMESTAMP FROM dual;
Starting apply for patch 30138470/23136421 on 22-DEC-19 02.33.31.103477 PM -05:0
0


SQL> SET PAGESIZE 10
SQL>
SQL> BEGIN
  2      dbms_sqlpatch.patch_initialize(p_patch_id      => 30138470,
  3                                     p_patch_uid     => 23136421,
  4                                     p_flags         => '&flags',
  5                                     p_description   => '&description',
  6                                     p_action        => 'APPLY',
  7                                     p_logfile       => '&full_logfile',
  8                                     p_bundle_series => '&bundle_series');
  9  END;
 10  /
old   4:                                    p_flags         => '&flags',
new   4:                                    p_flags         => 'NB',
old   5:                                    p_description   => '&description',
new   5:                                    p_description   => 'DATABASE OCT 2019 RELEASE UPDATE 12.2.0.1.191015',
old   7:                                    p_logfile       => '&full_logfile',
new   7:                                    p_logfile       => '/u01/app/oracle/cfgtoollogs/sqlpatch/30138470/23136421/30138470_apply_DB01_2019Dec22_14_33_31.log',
old   8:                                    p_bundle_series => '&bundle_series');
new   8:                                    p_bundle_series => 'DBRU');
BEGIN
*
ERROR at line 1:
ORA-01693: max # extents (505) reached in lob segment
SYS.SYS_LOB0000190607C00013$$
ORA-06512: at "SYS.DBMS_SQLPATCH", line 639
ORA-06512: at "SYS.DBMS_SQLPATCH", line 605
ORA-06512: at line 2

[oracle@host01]$


The segment SYS.SYS_LOB0000190607C00013$$ has reached its maximum extents limit (505), which caused the SQL failure. Find out what the segment is,
SQL> select owner,object_name,object_type
  2    from dba_objects
  3   where object_name='SYS_LOB0000190607C00013$$';

OWNER                OBJECT_NAME                    OBJECT_TYPE
-------------------- ------------------------------ -----------------------
SYS                  SYS_LOB0000190607C00013$$      LOB

SQL> select owner,table_name,column_name,segment_name
  2    from dba_lobs
  3   where segment_name='SYS_LOB0000190607C00013$$';

OWNER      TABLE_NAME                     COLUMN_NAME          SEGMENT_NAME
---------- ------------------------------ -------------------- -------------------------
SYS        REGISTRY$SQLPATCH              PATCH_DIRECTORY      SYS_LOB0000190607C00013$$

SQL>
SQL> desc sys.registry$sqlpatch
 Name                          Null?    Type
 ----------------------------- -------- -------------------------
 PATCH_ID                      NOT NULL NUMBER                  
 ACTION                        NOT NULL VARCHAR2(15)            
 ACTION_TIME                   NOT NULL TIMESTAMP(6)            
 DESCRIPTION                            VARCHAR2(100)           
 LOGFILE                                VARCHAR2(500)           
 STATUS                                 VARCHAR2(25)            
 VERSION                       NOT NULL VARCHAR2(20)            
 PATCH_UID                     NOT NULL NUMBER                  
 FLAGS                                  VARCHAR2(10)            
 BUNDLE_SERIES                          VARCHAR2(30)            
 BUNDLE_ID                              NUMBER                  
 INSTALL_ID                             NUMBER                  
 PATCH_DIRECTORY                        BLOB                    
 BUNDLE_DATA                            XMLTYPE                 
 PATCH_DESCRIPTOR                       XMLTYPE                 
 POST_LOGFILE                           VARCHAR2(500)           
SQL>
SQL> select owner,segment_name,max_extents
  2    from dba_segments
  3   where segment_name in ('REGISTRY$SQLPATCH','SYS_LOB0000190607C00013$$');

OWNER      SEGMENT_NAME                   MAX_EXTENTS
---------- ------------------------------ -----------
SYS        REGISTRY$SQLPATCH                      505
SYS        SYS_LOB0000190607C00013$$              505


The segment is the BLOB column PATCH_DIRECTORY of table SYS.REGISTRY$SQLPATCH; both the BLOB segment and its table have a maximum extents limit of 505. To remove the limitation,
SQL> alter table sys.registry$sqlpatch storage (maxextents unlimited);

Table altered.

SQL> alter table sys.registry$sqlpatch
  2  modify lob (patch_directory) (storage (maxextents unlimited));

Table altered.

SQL> select owner,segment_name,max_extents
  2    from dba_segments
  3   where segment_name in ('REGISTRY$SQLPATCH','SYS_LOB0000190607C00013$$');

OWNER      SEGMENT_NAME                   MAX_EXTENTS
---------- ------------------------------ -----------
SYS        REGISTRY$SQLPATCH               2147483645
SYS        SYS_LOB0000190607C00013$$       2147483645



Then re-run the datapatch utility,
[oracle@host01]$ $ORACLE_HOME/OPatch/datapatch -verbose
SQL Patching tool version 12.2.0.1.0 Production on Sun Dec 22 16:43:43 2019
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_35782878_2019_12_22_16_43_43/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Bundle series DBRU:
  ID 191015 in the binary registry and ID 190416 in the SQL registry

Adding patches to installation queue and performing prereq checks...
Installation queue:
  Nothing to roll back
  The following patches will be applied:
    30138470 (DATABASE OCT 2019 RELEASE UPDATE 12.2.0.1.191015)

Installing patches...
Patch installation complete.  Total patches installed: 1

Validating logfiles...
Patch 30138470 apply: SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/30138470/23136421/30138470_apply_DB01_2019Dec22_16_44_33.log (no errors)
SQL Patching tool complete on Sun Dec 22 16:45:22 2019


Now, it gives the log file name. Validate the status,
SQL> select patch_id,version,action,status,description
  2    from dba_registry_sqlpatch order by action_time;

  PATCH_ID VERSION     ACTION   STATUS     DESCRIPTION
---------- ----------- -------- ---------- -------------------------------------------------
  29314339 12.2.0.1    APPLY    SUCCESS    DATABASE APR 2019 RELEASE UPDATE 12.2.0.1.190416
  30138470 12.2.0.1    APPLY    END
  30138470 12.2.0.1    APPLY    SUCCESS    DATABASE OCT 2019 RELEASE UPDATE 12.2.0.1.191015


The patch is installed successfully.

Tuesday, October 22, 2019

DBUA 12.2 Failed while PDBS Recompile Invalid Objects

When upgrading a 12.1 database to 12.2 with DBUA, the following error occurred,

Server errors encountered during the execution of "PDBS Recompile Invalid Objects".

The following information is found in the DBUA log file,
SQL> EXECUTE dbms_registry_sys.validate_components;
...(20:04:27) Starting validate_apex for APEX_180200
ORA-20001: MISSING GRANT: grant execute on "MDSYS"."SDO_DIM_ARRAY" to APEX_180200
ORA-20001: MISSING GRANT: grant execute on "MDSYS"."SDO_DIM_ELEMENT" to APEX_180200
ORA-20001: MISSING GRANT: grant execute on "MDSYS"."SDO_ELEM_INFO_ARRAY" to APEX_180200
ORA-20001: MISSING GRANT: grant execute on "MDSYS"."SDO_GEOMETRY" to APEX_180200
ORA-20001: MISSING GRANT: grant execute on "MDSYS"."SDO_ORDINATE_ARRAY" to APEX_180200
ORA-20001: MISSING GRANT: grant execute on "MDSYS"."SDO_POINT_TYPE" to APEX_180200


These privileges were granted by SYS (SYS is the grantor) in the original database and went missing during the upgrade.

Object privileges in the original database,
SQL> select GRANTEE,GRANTOR,OWNER,TABLE_NAME,PRIVILEGE from dba_tab_privs
  2   where owner='MDSYS' and grantee='APEX_180200'
  3   order by grantor,table_name;

GRANTEE        GRANTOR    OWNER     TABLE_NAME                     PRIVILEGE
-------------- ---------- --------- ------------------------------ ----------------------
APEX_180200    MDSYS      MDSYS     MDERR                          EXECUTE
APEX_180200    MDSYS      MDSYS     SDO_GEOM_METADATA_TABLE        DELETE
APEX_180200    MDSYS      MDSYS     SDO_GEOM_METADATA_TABLE        SELECT
APEX_180200    MDSYS      MDSYS     SDO_GEOM_METADATA_TABLE        INSERT
APEX_180200    MDSYS      MDSYS     SDO_META                       EXECUTE
APEX_180200    MDSYS      MDSYS     SDO_UTIL                       EXECUTE
APEX_180200    MDSYS      MDSYS     USER_SDO_INDEX_INFO            SELECT
APEX_180200    SYS        MDSYS     SDO_DIM_ARRAY                  EXECUTE
APEX_180200    SYS        MDSYS     SDO_DIM_ELEMENT                EXECUTE
APEX_180200    SYS        MDSYS     SDO_ELEM_INFO_ARRAY            EXECUTE
APEX_180200    SYS        MDSYS     SDO_GEOMETRY                   EXECUTE
APEX_180200    SYS        MDSYS     SDO_ORDINATE_ARRAY             EXECUTE
APEX_180200    SYS        MDSYS     SDO_POINT_TYPE                 EXECUTE


Object privileges in the current (partly upgraded) database,
SQL> select GRANTEE,GRANTOR,OWNER,TABLE_NAME,PRIVILEGE from dba_tab_privs
  2   where owner='MDSYS' and grantee='APEX_180200'
  3   order by grantor,table_name;

GRANTEE        GRANTOR    OWNER      TABLE_NAME                     PRIVILEGE
-------------- ---------- ---------- ------------------------------ --------------------
APEX_180200    MDSYS      MDSYS      MDERR                          EXECUTE
APEX_180200    MDSYS      MDSYS      SDO_GEOM_METADATA_TABLE        DELETE
APEX_180200    MDSYS      MDSYS      SDO_GEOM_METADATA_TABLE        INSERT
APEX_180200    MDSYS      MDSYS      SDO_GEOM_METADATA_TABLE        SELECT
APEX_180200    MDSYS      MDSYS      SDO_META                       EXECUTE
APEX_180200    MDSYS      MDSYS      SDO_UTIL                       EXECUTE
APEX_180200    MDSYS      MDSYS      USER_SDO_INDEX_INFO            SELECT


Workaround

Manually grant the missing privileges as shown below, then click Retry in DBUA.
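
The required statements are spelled out in the ORA-20001 messages themselves; connect as SYS and run them before retrying:

SQL> grant execute on "MDSYS"."SDO_DIM_ARRAY" to APEX_180200;
SQL> grant execute on "MDSYS"."SDO_DIM_ELEMENT" to APEX_180200;
SQL> grant execute on "MDSYS"."SDO_ELEM_INFO_ARRAY" to APEX_180200;
SQL> grant execute on "MDSYS"."SDO_GEOMETRY" to APEX_180200;
SQL> grant execute on "MDSYS"."SDO_ORDINATE_ARRAY" to APEX_180200;
SQL> grant execute on "MDSYS"."SDO_POINT_TYPE" to APEX_180200;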

Wednesday, August 21, 2019

12.2: Preupgrade Utility Fails with ORA-06512: at "SYS.DBMS_PREUP"

While running the 12.2 Pre-Upgrade Utility (manually or with DBUA) on a 12.1 database, the following errors were received,
Error in preupgrade tool execution.
ERROR -
ERROR - Unable to run sqlplus due to:
ORA-06512: at "SYS.DBMS_PREUP", line 4401
ORA-06512: at "SYS.DBMS_PREUP", line 9714
ORA-06512: at line 8

declare
*
ERROR at line 1:
ORA-01426: numeric overflow
ORA-06512: at line 56

if you are executing the preupgrade tool from a standby database make sure the preupgrade tool has been executed in the primary database If you are executing the tool from Windows, please make sure you have given the appropriate permissions to the Oracle binaries owner.


This is a bug in the Pre-Upgrade Utility installed in the current 12.2 home. It can be fixed by installing a newer version of the utility as follows,

1. Download the utility ZIP file from My Oracle Support following the instructions in the document "How to Download and Run Oracle's Database Pre-Upgrade Utility (Doc ID 884522.1)"
  
2. The currently available ZIP file for 12.2 (the target upgrade version) is preupgrade_12201_cbuild_22_lf.zip; it contains the following files,
   dbms_registry_basic.sql         
   dbms_registry_extended.sql
   preupgrade.jar
   preupgrade_driver.sql
   preupgrade_messages.properties
   preupgrade_package.sql

3. Unzip the downloaded file to the directory $ORACLE_HOME/rdbms/admin, where $ORACLE_HOME is the 12.2 home (see the sketch after the note below).

   Note: Rename the existing files before unzipping; do not overwrite any files.
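
A sketch of steps 2 and 3 combined, assuming the ZIP file was downloaded to /tmp (the download location is arbitrary):

[oracle@host01]$ cd $ORACLE_HOME/rdbms/admin
[oracle@host01]$ for f in dbms_registry_basic.sql dbms_registry_extended.sql preupgrade.jar \
>                         preupgrade_driver.sql preupgrade_messages.properties preupgrade_package.sql
> do
>    [ -f "$f" ] && mv "$f" "$f.orig"     # keep the shipped versions instead of overwriting them
> done
[oracle@host01]$ unzip /tmp/preupgrade_12201_cbuild_22_lf.zip -d $ORACLE_HOME/rdbms/admin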

Monday, June 10, 2019

Reduce number of privileged access audit files for ASM with syslog on AIX

The ASM instance audits all privileged access (logins as SYSASM, SYSDBA or SYSOPER) by creating an audit file for each connection under the audit file destination directory (defined by the initialization parameter audit_file_dest). The directory can grow to contain a very large number of files if they are not regularly maintained, especially when the initialization parameter audit_sys_operations is TRUE, which is the default from 12c onward.

Wednesday, May 29, 2019

AIX: Failure to Apply April 2019 Grid Infrastructure Release Update on 12.2 Oracle Restart Home

Environment

  Operation System:  IBM AIX 7.2 on POWER Systems (64-bit)
  Oracle Restart Home:  12.2.0.1 with July 2018 Grid Infrastructure Release Update (GI RU 12.2.0.1.180717)
  OPatch Version: 12.2.0.1.17

Symptoms
 
OPatch reported the following errors while applying sub-patch 29314424 (OCW Apr 2019 Release Update 12.2.0.1.190416),
UtilSession failed: ApplySession failed in system modification phase... 'ApplySession::apply failed: java.lang.UnsatisfiedLinkError: /u01/app/grid/product/12.2.0/grid/lib/libpatchgensh12.so (rtld: 0712-001 Symbol nzos_SetServerNameList was referenced
      from module /u01/app/grid/product/12.2.0/grid/lib/libhasgen12.so(), but a runtime definition
      of the symbol was not found.
rtld: 0712-001 Symbol lxkRegexpComp2 was referenced
      from module /u01/app/grid/product/12.2.0/grid/lib/libhasgen12.so(), but a runtime definition
      of the symbol was not found.
rtld: 0712-001 Symbol lxkRegexpLike2 was referenced
      from module /u01/app/grid/product/12.2.0/grid/lib/libhasgen12.so(), but a ru)

Workaround

Apply Oct 2018 Grid Infrastructure Release Update (GI RU) first, then apply April 2019 GI RU.

Thursday, January 31, 2019

DBCA Failed with CRS-2566: User 'oracle' does not have sufficient permissions to operate on resource 'ora.driver.afd'

DBCA 12.2.0.1 failed with the following errors,

   PRCR-1006: Failed to add resource ora.corcl.db for corcl
   PRCR-1071: Failed to register or update resource ora.corcl.db
   CRS-2566 : User 'oracle' does not have sufficient permissions to operate on resource 'ora.driver.afd', which is part of the dependency specification.

The clusterware is Grid Infrastructure 18c in standalone mode (Oracle Restart), and the GI home owner is grid. Log into the system as grid and check the permissions of the CRS resource 'ora.driver.afd',
[grid@host01]$ crsctl stat res ora.driver.afd -p -attr ACL
NAME=ora.driver.afd
ACL=owner:grid:rwx,pgrp:asmadmin:r-x,other::r--,user:grid:r-x

The ACL shows that grid is the owner of the resource and has full control (rwx) over it, the OS group asmadmin has read and execute (r-x), and the database owner oracle is not explicitly granted any permission on this resource.

Check if oracle is member of group asmadmin,
[grid@host01]$ id oracle
uid=104(oracle) gid=112(oinstall) groups=112(oinstall),114(asmdba),116(dba),117(oper),118(racdba),119(backupdba),120(dgdba),121(kmdba)

User oracle is not in the group asmadmin. Therefore, DBCA is right that oracle does not have permissions on the resource. It is technically easy to fix this issue, but honestly it is not expected here and should be considered a GI 18c bug.

To fix it, just add a group of which oracle is a member (I pick oinstall) to the ACL as follows,
[grid@host01]$ crsctl modify resource ora.driver.afd -attr "ACL='owner:grid:rwx,pgrp:asmadmin:r-x,other::r--,group:oinstall:r-x,user:grid:r-x'" -init

Confirm that the ACL is changed,
[grid@host01]$ crsctl stat res ora.driver.afd -p -attr ACL
NAME=ora.driver.afd
ACL=owner:grid:rwx,pgrp:asmadmin:r-x,other::r--,group:oinstall:r-x,user:grid:r-x

Now, DBCA can create database successfully.

Tuesday, January 15, 2019

Disable Transparent Hugepages on Oracle Linux / Redhat Enterprise Linux 7.5

Transparent Hugepages (THP) are similar to standard HugePages. However, while standard HugePages allocate memory at startup, Transparent Hugepages memory uses the khugepaged thread in the kernel to allocate memory dynamically during runtime, using swappable HugePages.

They may cause delays in accessing memory that can result in node restarts in Oracle RAC environments, or performance issues and delays for single-instance Oracle databases. Oracle recommends disabling Transparent HugePages on Oracle database servers.

The following discussion is based on Oracle Linux / Red Hat Enterprise Linux 7.5; it may differ on other Linux distributions.
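
On OL/RHEL 7.5 the check and the permanent change usually come down to the following (a sketch; verify against your distribution's documentation before applying):

# check the current setting; the value in brackets is the active one
[root@host01]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never

# disable THP at boot by adding transparent_hugepage=never to the kernel command line
[root@host01]# grubby --update-kernel=ALL --args="transparent_hugepage=never"
[root@host01]# reboot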

Monday, December 31, 2018

EM 13.2 AIX host shows as Linux platform after applied Agent Bundle Patch 13.2.0.0.180930

After the 13.2.0.0.180930 agent bundle patch 28533438 is applied on an AIX agent, the 'Operating System' field on the AIX host's home page shows Linux instead of AIX. Some platform-specific software deployments through the EM console may fail with similar incorrect-platform messages.

The issue is caused by the bug 28789751 - Applying 13.2.0.0.180930 Agent BP Causing Config Change on Non Linux hosts.

It can be fixed by rolling back the patch and waiting for the next patch release, or by applying the following workaround,

1. Rename file <AGENT_HOME>/sysman/admin/scripts/hostOSD.pm, for example

   mv hostOSD.pm hostOSD.pm_28533438

2. Copy back old hostOSD.pm from backup created by opatch, the backup can be found from 
   
   <AGENT_HOME>/.patch_storage/28533438_<time-stamp>/files/oracle.sysman.top.agent/13.2.0.0.0/

Here, <time-stamp> is the time stamp of when the patch application started. For example, I applied the patch on Dec 31, 2018, so the directory is the following,

   <AGENT_HOME>/.patch_storage/28533438_Dec_31_2018_11_21_18/files/oracle.sysman.top.agent/13.2.0.0.0/

Now the directory <AGENT_HOME>/sysman/admin/scripts/ contains the following two hostOSD files,

  hostOSD.pm
  hostOSD.pm_28533438

3. Refresh host configuration on respective host target home page

Log in to the EM console, go to Targets -> Hosts, click the host on which you applied the patch, and do the following.

On the host home page, click Host to open the drop-down menu -> Configuration -> Latest, then click the Refresh button in the new window. Wait until the refresh job completes, then check the home page configuration.

Friday, December 21, 2018

Configure Python with iPython, Jupyter and MySQL on Ubuntu 18.10

If you’re thinking of learning Python, you may be struggling with where and how to start. It can be a hard question to answer before you know why you want to learn Python, because Python, as a popular programming language, is used in many fields and industries, such as

  . web development (create web applications on a server)
  . software development (work with software developing tools to create work flows)
  . mathematics (scientific computing, data analysis, data visualization)
  . system scripting
 
As I was asked this question by a university student, and Python is widely used for data analysis/visualization at universities, I am going to demonstrate how to quickly build a Python data analysis/visualization development environment on Ubuntu 18.10.
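
The core of such an environment can be pulled together with apt and pip; a minimal sketch (the package picks, e.g. mysql-connector-python, are mine and not necessarily what the full post uses):

$ sudo apt update
$ sudo apt install -y python3 python3-pip
$ pip3 install --user ipython jupyter mysql-connector-python    # ~/.local/bin may need to be on PATH
$ jupyter notebook        # serves notebooks at http://localhost:8888 by default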

Sunday, November 18, 2018

Patch or Upgrade Single instance Database and Oracle Restart Using OEM 13c Fleet Maintenance

Oracle Enterprise Manager (OEM) 13c Database Fleet Maintenance is a centralized mechanism for patching/upgrading Oracle homes and maintaining their consistency. It allows OEM Cloud Control administrators to patch databases, Oracle Restart (standalone Grid Infrastructure) and Grid Infrastructure with minimal downtime.

Friday, August 31, 2018

Oracle Database 18c Password Authentication with Microsoft Active Directory

Oracle Database 18c introduces a new feature, 'Integration of Active Directory Services with Oracle Database'. It gives us a way to authenticate and authorize users directly with Microsoft Active Directory: Oracle database users and roles can map directly to Active Directory users and groups without using Oracle Enterprise User Security (EUS) or another intermediate directory service. Here, I am going to demonstrate how to make it work step by step; a preview of the user-mapping syntax follows the environment details.
 
Environment:

  Database Server - Oracle Database 18c Enterprise Edition on Oracle Linux Server 7.5
  Microsoft Active Directory Server - Windows Server 2012 R2
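
As a preview of the end result, once the directory integration is configured, database users and roles can be mapped to Active Directory accounts or groups with syntax like the following (the AD distinguished name is invented for illustration):

SQL> create user app_ad_users identified globally as 'CN=app_users,OU=Groups,DC=lab,DC=dbaplus,DC=ca';
SQL> grant create session to app_ad_users;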

Tuesday, August 14, 2018

Restrict OEM agent 13.2 to use TLS protocol TLSv1.2

The Oracle Enterprise Manager 13.2 agent is, by default, configured to use the TLS protocols TLSv1.0, TLSv1.1 and TLSv1.2, which were all the TLS versions available when OEM 13.2 was released. To restrict the agent to use only TLSv1.2, proceed as follows,

Friday, August 3, 2018

DBUA 12.2 failed with ORA-01157 at preupgrade tool execution

DBUA got the following errors at the 'Prerequisite Checks' step (preupgrade tool execution) while upgrading an Oracle database from 11.2.0.4 to 12.2.0.1,
Error in preupgrade tool execution.
ERROR -
ERROR - Unable to run sqlplus due to:
ORA-06512: at "SYS.DBMS_PREUP", line 4386
ORA-06512: at "SYS.DBMS_PREUP", line 9108
ORA-06512: at line 8

declare
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 202 - see DBWR trace file
ORA-01110: data file 202: '+DATA'
ORA-06512: at line 56

if you are executing the preupgrade tool from a standby database make sure the preupgrade tool has been executed in the primary database If you are executing the tool from Windows, please make sure you have given the appropriate permissions to the Oracle binaries owner.

Thursday, March 8, 2018

Manually Delete Agent and Targets Using EMCLI in OEM 13c

Although it is recommended to remove targets through the OEM console interface, the command-line utility is more helpful and convenient in some scenarios.

For example, the target host has crashed and cannot be restored, and a new host with a new name is built as a replacement. The old host and all targets running on it therefore have to be removed from OEM. Depending on the number of targets, it can be a pain to remove them through the console; in that case, manually running EMCLI commands becomes the best way.

The following commands remove the agent running on server host01 and all targets monitored by this agent,
[oracle@oms1]$ $EMCLI_HOME/bin/emcli login -username=sysman
Enter password :

Login successful
[oracle@oms1]$ $EMCLI_HOME/bin/emcli delete_target -name="host01.dbaplus.ca:3872" -type="oracle_emd" -delete_monitored_targets -async
Target "host01.dbaplus.ca:3872:oracle_emd" deleted successfully
[oracle@oms1]$


We can also run an EMCLI command to remove a specific target,
$EMCLI_HOME/bin/emcli delete_target -name="<target-name>" -type="<target-type>"


To remove the database (instance) TESTDB,
$EMCLI_HOME/bin/emcli delete_target -name="TESTDB" -type="oracle_database"


To remove the ASM instance target +ASM_host01.dbaplus.ca,
$EMCLI_HOME/bin/emcli delete_target -name="+ASM_host01.dbaplus.ca" -type="osm_instance"


To remove the Oracle listener LISTENER_host01.dbaplus.ca,
$EMCLI_HOME/bin/emcli delete_target -name="LISTENER_host01.dbaplus.ca" -type="oracle_listener"

Thursday, March 1, 2018

How to delete archive log files using PL/SQL


Although RMAN is always the first choice for most DBAs when deleting archive log files, PL/SQL can be easier in this scenario,

A user wants to switch an existing database from NOARCHIVELOG mode to ARCHIVELOG mode in order to increase database availability, because the backup job has to shut the database down when it runs in NOARCHIVELOG mode.

Monday, February 19, 2018

ORA-01017 invalid username/password after 11g database upgraded to 12.2 pluggable database

An Oracle 11.2.0.3 database was upgraded to 12.2.0.1 and plugged into a CDB as a pluggable database with the 'create pluggable database ... using ...' command. The container database (CDB) was created with the 12.2 DBCA, and all built-in users' passwords (sys, system, dbsnmp, etc.) were set to the same ones as in the original 11g database.

Saturday, February 10, 2018

12.2 database Alert log shows ORA-20001: Latest xml inventory is not loaded into table

While the 12.2 database is being started by srvctl, the alert log shows the following messages,
Unable to obtain current patch information due to error: 20001, ORA-20001: Latest xml inventory is not loaded into table
ORA-06512: at "SYS.DBMS_QOPATCH", line 777
ORA-06512: at "SYS.DBMS_QOPATCH", line 864
ORA-06512: at "SYS.DBMS_QOPATCH", line 2222
ORA-06512: at "SYS.DBMS_QOPATCH", line 740
ORA-06512: at "SYS.DBMS_QOPATCH", line 2247
===========================================================
Dumping current patch information
===========================================================
Unable to obtain current patch information due to error: 20001
===========================================================

Friday, February 2, 2018

Desktop Icons and Panels Missing in VNC Viewer Connection

When VNC Viewer connects to the Linux server (Oracle Enterprise Linux 6.4), the desktop only shows the wallpaper; the desktop icons and panels are missing.

The log file /home/oracle/.vnc/host01.lab.dbaplus.ca:3.log contains the following,
Fri Feb  2 20:11:16 2018
 vncext:      VNC extension running!
 vncext:      Listening for VNC connections on all interface(s), port 5903
 vncext:      created VNC server for screen 0
gnome-session[9361]: WARNING: Unable to find provider 'gnome-panel' of required component 'panel'
gnome-session[9361]: WARNING: Unable to find provider 'nautilus' of required component 'filemanager'

Verify whether the .desktop files for the components in the error messages exist,
[oracle@host01]$ ls -l /usr/share/applications/*panel*
ls: cannot access /usr/share/applications/*panel*: No such file or directory

[oracle@host01]$ ls -l /usr/share/applications/*nautilus*
ls: cannot access /usr/share/applications/*nautilus*: No such file or directory

The files are missing. Check if the packages are installed, 
[oracle@host01]$ rpm -q gnome-nautilus
package gnome-nautilus is not installed
[oracle@host01]$  rpm -q gnome-panel
package gnome-panel is not installed

Neither package is installed, so install them,
[root@host01]# yum install gnome-panel nautilus
<<messages are truncated>>
Installed:
  gnome-panel.x86_64 0:2.30.2-14.el6                         nautilus.x86_64 0:2.28.4-19.el6

Dependency Installed:
  exempi.x86_64 0:2.1.0-5.el6                              gnome-disk-utility-libs.x86_64 0:2.30.1-2.el6
  gnome-session-xsession.x86_64 0:2.28.0-18.el6            gvfs.x86_64 0:1.4.3-15.el6
  libatasmart.x86_64 0:0.17-4.el6_2                        libcdio.x86_64 0:0.81-3.1.el6
  libcroco.x86_64 0:0.6.2-5.el6                            libexif.x86_64 0:0.6.21-5.el6_3
  libgsf.x86_64 0:1.14.15-5.el6                            librsvg2.x86_64 0:2.26.0-5.el6_1.1
  mtools.x86_64 0:4.0.12-1.el6                             nautilus-extensions.x86_64 0:2.28.4-19.el6
  smp_utils.x86_64 0:0.94-4.el6                            udisks.x86_64 0:1.0.1-4.el6
  unique.x86_64 0:1.1.4-2.el6

Complete!

[root@host01]# rpm -q nautilus
nautilus-2.28.4-19.el6.x86_64
[root@host01]# rpm -q gnome-panel
gnome-panel-2.30.2-14.el6.x86_64

[root@host01]# ls -l /usr/share/applications/*panel*
-rw-r--r-- 1 root root 14928 Apr  7  2011 /usr/share/applications/gnome-panel.desktop
[root@host01]# ls -l /usr/share/applications/*nautilus*
-rw-r--r-- 1 root root  3393 Apr 29  2012 /usr/share/applications/gnome-nautilus-autorun-software.desktop
-rw-r--r-- 1 root root 10634 Apr 29  2012 /usr/share/applications/gnome-nautilus-browser.desktop
-rw-r--r-- 1 root root 12690 Apr 29  2012 /usr/share/applications/gnome-nautilus-computer.desktop
-rw-r--r-- 1 root root  3960 Apr 29  2012 /usr/share/applications/gnome-nautilus.desktop
-rw-r--r-- 1 root root 11229 Apr 29  2012 /usr/share/applications/gnome-nautilus-file-management-properties.desktop
-rw-r--r-- 1 root root  3058 Apr 29  2012 /usr/share/applications/gnome-nautilus-folder-handler.desktop
-rw-r--r-- 1 root root  7651 Apr 29  2012 /usr/share/applications/gnome-nautilus-home.desktop
[root@host01]#
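
Because the GNOME session inside the VNC desktop was already running while these components were missing, the VNC session most likely needs to be restarted so gnome-session can load the new panel and file manager. Assuming display :3, as in the log file name above,
[oracle@host01]$ vncserver -kill :3
[oracle@host01]$ vncserver :3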

Now we can connect with VNC viewer again, and the desktop comes up with everything it needs.

Thursday, January 25, 2018

AIX Disk Only Visible to One ASM Instance of Grid Infrastructure

Oracle Grid Infrastructure 12.2.0.1
AIX 7.1


New disks are added to both nodes of the Grid Infrastructure cluster, and the disk permissions are configured properly,
[grid@host01]$ ls -l /dev/*hdisk[56]
brw-------      1 root      system         21, 26 Jan 22 10:58 /dev/hdisk5
brw-------      1 root      system         21, 25 Jan 22 10:58 /dev/hdisk6
crw-rw----    1 grid     asmadmin     21, 26 Jan 24 16:07 /dev/rhdisk5
crw-rw----    1 grid     asmadmin     21, 25 Jan 24 16:06 /dev/rhdisk6

[grid@host02]$ ls -l /dev/*hdisk[56]
brw-------      1 root      system          21, 25 Jan 22 11:08 /dev/hdisk5
brw-------      1 root      system          21, 31 Jan 22 11:08 /dev/hdisk6
crw-rw----    1 grid     asmadmin     21, 25 Jan 22 11:08 /dev/rhdisk5
crw-rw----    1 grid     asmadmin     21, 31 Jan 24 14:49 /dev/rhdisk6
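
Typical first checks (not necessarily the root cause in this case) are to compare what each ASM instance actually discovers for these devices and to look at the AIX reserve policy on the node that cannot see the disk,
SQL> select inst_id, path, header_status, mode_status
  2    from gv$asm_disk
  3   where path in ('/dev/rhdisk5', '/dev/rhdisk6')
  4   order by inst_id, path;

[grid@host02]$ lsattr -El hdisk5 -a reserve_policy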

Saturday, January 20, 2018

Upgrading Grid Infrastructure from 12.1 to 12.2 failed with [INS-20802]

A customer was upgrading a two-node Oracle Grid Infrastructure from version 12.1.0.2 to version 12.2.0.1, and gridSetup.sh returned error [INS-20802] during the 'Preparing for deploying Oracle Grid Infrastructure Management Repository' step.

Wednesday, January 17, 2018

ORA-01403: no data found on ctxposup.sql while upgrading from 11g to 12.2

From: Oracle Database - Enterprise Edition - Version 11.2.0.3 + PSU 11.2.0.3.5 (14727310)
To:   Oracle Database - Enterprise Edition - Version 12.2.0.1.0
Operating System - Solaris 10 1/13 s10s_u11wos_24a SPARC


The database upgrade from 11.2.0.3 to 12.2 failed with error 'ORA-01403: no data found', and the following message is found in the log file 'Oracle_Server.log',

REASON:
      ERRORS FOUND: During Upgrade
         FILENAME: /u01/app/oracle/cfgtoollogs/dbua/upgrade2018-01-17_11-42-08-AM/DB01/catupgrd0.log AT LINE NUMBER: 825076
------------------------------------------------------
Identifier CONTEXT 18-01-17 12:37:20
SCRIPT      = [/u01/app/oracle/product/12.2.0/dbhome_1/ctx/admin/ctxposup.sql] <===== STATEMENT 1
ERROR      = [ORA-01403: no data found ORA-06512: at line 28
ORA-06512: at line 8
]
STATEMENT = [declare
  l_owner#  number;
  l_errnum  number;
  l_count   number;

begin
  select user# into l_owner# from sys."_BASE_USER" where name='CTXSYS';
  select idx_id into l_count from dr$index where        <===== STATEMENT 2
    idx_name = 'ENT_EXT_DICT_OBJ' and          
    idx_owner# = l_owner#;                       

  /* if this object does not exist yet, create it. */  
  if (0 = l_count) then
    CTX_DDL.create_policy('CTXSYS.ENT_EXT_DICT_OBJ',    <===== STATEMENT 3
      filter        => 'CTXSYS.NULL_FILTER',
      section_group => 'CTXSYS.NULL_SECTION_GROUP',
      lexer        => 'CTXSYS.BASIC_LEXER',
      stoplist        => 'CTXSYS.EMPTY_STOPLIST',
      wordlist        => 'CTXSYS.BASIC_WORDLIST'
  );
  end if;
exception
  when others then
    l_errnum := SQLCODE;
    if (l_errnum = -20000) then
      null;
    else
      raise;
    end if;
end;]

The script ctxposup.sql (STATEMENT 1), which upgrades Oracle Text, raised ORA-01403 while running the PL/SQL block listed above. Test the individual statements in the block as follows:

sys@DB01> select user#  from sys."_BASE_USER" where name='CTXSYS';

     USER#
----------
       100

sys@DB01> ALTER SESSION SET CURRENT_SCHEMA = CTXSYS;

Session altered.

sys@DB01> select idx_id from dr$index where
  2    idx_name = 'ENT_EXT_DICT_OBJ' and
  3    idx_owner# = 100;

no rows selected

The object CTXSYS.ENT_EXT_DICT_OBJ (an Oracle Text policy) does not exist in the database, so STATEMENT 2 (select ... into ...) returned ORA-01403: a SELECT ... INTO that finds no rows raises NO_DATA_FOUND, whose SQLCODE (+100) is not the -20000 that the exception handler tolerates, so the error is re-raised. The purpose of STATEMENT 2 is only to test whether CTXSYS.ENT_EXT_DICT_OBJ exists, and counting the matching rows is a more robust existence test than selecting a column value. Let's replace STATEMENT 2 in the script ctxposup.sql with the following code

 select count(idx_id) into l_count from dr$index where
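
For reference, the edited statement then reads as below; the WHERE clause is unchanged from the original block,

  select count(idx_id) into l_count from dr$index where
    idx_name = 'ENT_EXT_DICT_OBJ' and
    idx_owner# = l_owner#;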

Start the upgrade again, it succeeded!

Monday, January 15, 2018

ORA-65500 When using DUPLICATE to Create Standby Database

Oracle Database - Enterprise Edition - Version 12.2.0.1 + RU 12.2.0.1.171017
Operating System - Oracle Solaris 11.3 SPARC

When running the RMAN DUPLICATE command to create a physical standby database as follows:

RMAN> duplicate target database 
2>    for standby 
3>    from active database 
4>    dorecover 
5>    spfile 
6>      reset control_files 
7>      set db_unique_name='DBNEW' 
8>      set fal_server='DB01' 
9>    nofilenamecheck; 

using target database control file instead of recovery catalog

Starting Duplicate Db at 15-JAN-18
current log archived
... ...

Got error ORA-65500:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 01/15/2018 11:46:19
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-03009: failure of sql command on clone_default channel at 01/15/2018 11:46:19
RMAN-11003: failure during parse/execution of SQL statement: alter system set  db_unique_name='DBNEW' comment='' scope=spfile
ORA-32017: failure in updating SPFILE
ORA-65500: could not modify DB_UNIQUE_NAME, resource exists

Oracle tracks this issue as Bug 20977794, which causes ORA-65500 to be returned on any attempt to change DB_UNIQUE_NAME for a database while a CRS resource for it exists. Because RMAN attempts to change db_unique_name, this error occurs.

The current system is configured with Oracle Restart (standalone Grid Infrastructure) 12.2.0.1. Although the bug is supposed to be fixed in the 12.2.0.1 base release, the error still happened. As a workaround, we can safely try to remove the database resource from the CRS registry,

[oracle@dbaplus]$ srvctl config database
DBNEW

The database DBNEW does exist in the CRS registry, so try to remove it,

[oracle@dbaplus]$ srvctl remove database -db DBNEW
Remove the database DBNEW? (y/[n]) y
[oracle@dbaplus]$

Then DUPLICATE can run successfully.
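
If the standby still needs to be managed by Oracle Restart, the resource can presumably be added back once the duplicate has finished; the Oracle home path below is only a placeholder,
[oracle@dbaplus]$ srvctl add database -db DBNEW -oraclehome /u01/app/oracle/product/12.2.0/dbhome_1 -role PHYSICAL_STANDBY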

Friday, December 15, 2017

ASM instance shows 'Down' in OEM 13.2 after upgraded to 12.2

Standalone Grid Infrastructure (Oracle Restart) was upgraded to 12.2 and everything works well. However, the ASM instance shows 'Down' in Enterprise Manager 13.2.

Checking the 'Monitoring Configuration' of the ASM target, all information is correct and 'Test Connection' succeeds. We can log into the ASM instance page in OEM to do management tasks (create/drop diskgroup, etc.). Everything looks good except the Target Status.

Log on to the host where ASM is running, and run the following commands from the OEM agent home:

1. Get the target name of ASM instance
$ emctl config agent listtargets | grep ASM
[+ASM_host01.dbaplus.ca, osm_instance]



2. Get the real-time result of the 'Response' metric (a test connection from the local agent to the target)
$ emctl getmetric agent +ASM_host01.dbaplus.ca,osm_instance,Response
Oracle Enterprise Manager Cloud Control 12c Release 5 
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Status,oraerr
0
,Failed to connect: java.sql.SQLException: ORA-28040: No matching authentication protocol


Wow, '0' is returned along with ORA-28040. Many DBAs have seen error 28040 when upgrading their databases to 12c: the OEM agent (12c Release 5) is using an old Oracle client (10g) to access the ASM instance, which is now running on 12.2.

Solution:
Add the following lines to $GRID_HOME/network/admin/sqlnet.ora
SQLNET.ALLOWED_LOGON_VERSION_SERVER=10
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=10
 

Test the connection again:
$emctl getmetric agent +ASM_host01.dbaplus.ca,osm_instance,Response
Oracle Enterprise Manager Cloud Control 12c Release 5 
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Status,oraerr
1
,
 


'1' is returned, which indicates that the ASM instance is up and running.

And the ASM target shows 'Up' in OEM. 

Saturday, October 28, 2017

Alert Log Filter Expression Editable when view-mode in Monitoring Template of OEM 13.2

OMS 13.2.0.0.171017
DB Plug-in 13.2.2.0.170930
EM Agent 13.2.0.0.170930

A customer asked for help applying patch 24334660 on Enterprise Manager 13.2. Before starting the patching, check the Oracle home patch level as follows,
[oracle@oms1]$ /u01/app/oracle/product/em/middleware/OPatch/opatch lsinventory
Oracle Interim Patch Installer version 13.9.1.3.0
Copyright (c) 2017, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/em/middleware
Central Inventory : /u01/app/oracle/oraInventory
   from           : /u01/app/oracle/product/em/middleware/oraInst.loc
OPatch version    : 13.9.1.3.0
OUI version       : 13.9.1.0.0
OUI location      : /u01/app/oracle/product/em/middleware/oui
Log file location : /u01/app/oracle/product/em/middleware/cfgtoollogs/opatch/opatch2017-10-28_10-43-03AM_1.log


OPatch detects the Middleware Home as "/u01/app/oracle/product/em/middleware"

<<Output truncated>>

Patch  26625183     : applied on Fri Oct 20 16:32:23 EDT 2017
Unique Patch ID:  21574333
Patch description:  "EMBP Patch Set Update 13.2.0.0.171017"
   Created on 8 Oct 2017, 20:17:42 hrs PST8PDT
   Bugs fixed:
     26782785, 25341210, 26090681, 26680573, 25110496, 24953203, 24459731
     26364865, 25861385, 26027847, 26542313, 20511097, 25967196, 25230688
     24701883, 25326446, 25217253, 25534065, 23506806, 25244726, 25179524
     25136456, 22143858, 25354849, 25615522, 25674055, 25219896, 25255429
     23251874, 24334660, 24944461, 23743109, 24426862, 25701178, 25268183
     23549064, 25506784, 25497731, 25497622, 24789016, 24742172, 25355038
     25163555, 25044101, 24608207, 23619694, 24753427, 24734911, 23705752
     24917276, 24614251, 22901546, 23196054, 21470098

<<Output truncated>>
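
To quickly confirm that bug 24334660 appears in the fixed-bug list without reading the whole report, the lsinventory output can simply be grepped, for example,
[oracle@oms1]$ /u01/app/oracle/product/em/middleware/OPatch/opatch lsinventory | grep 24334660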


Apparently, the OMS home already has PSU 13.2.0.0.171017 applied, which includes the fix for this bug. The customer performed the following steps to reproduce the issue,

In the Monitoring Templates window, click the name of the desired template to open the template details window, then click the glasses icon in the 'Edit' column beside the metric 'Generic Alert Log Error'. In the new window, the 'Alert Log Filter Expression' can be typed over, but the new value is not saved (retained) after clicking OK; it reverts to the old value the next time the window is opened. This looks like a bug or defect, but the window is actually a view-mode (read-only) window titled 'View Advanced Settings: Generic Alert Log Error'. It lets you type into the field, yet changes nothing, which is a confusing behaviour.

Clicking the template name is not the correct way to edit a metric in a monitoring template. Instead, select the template's record (row) and click the 'Edit' button. In the 'Edit Monitoring Template' window, the metric can be changed permanently.