Thursday, June 16, 2022

Oracle Data Pump Export/Import SODA Collection Data

Simple Oracle Document Access (SODA) collections can be exported and imported using Oracle Data Pump utilities starting with version 19.6.

In an Oracle database, a SODA collection is analogous to a database table: each collection is mapped to, and backed by, a table. A SODA document is analogous to, and is in fact backed by, a row of that table. The row has one column for each document component: key, content, version, and so on.

When a SODA collection is created, a database table (the mapped table) is created under the current user, and the related collection metadata is saved under the XDB schema. When a document is inserted into a collection, a row is actually inserted into the collection's mapped table.

Therefore, exporting/importing SODA collections involves two parts:

  * Export/import the mapped table with the Data Pump utility. The utility has to be version 19.6 or higher.
  * Copy the collection metadata. Currently, Data Pump has no specific export path for SODA collection metadata; that is, there is no database object type for it that can be used in the INCLUDE and EXCLUDE export/import parameters. As a workaround, the metadata can be 'copied' by generating a creation script in the source database and running the generated script in the destination database.


Step 1. Export the mapped table (contains SODA documents) with Data Pump

The SODA collection mapped tables can be exported individually, or together with other tables/objects in SCHEMA mode or FULL mode.

For example, user soda_user logs into the database and lists his SODA collections and tables as follows
soda_user@db01> select uri_name,object_type,object_schema,object_name from user_soda_collections;

URI_NAME          OBJECT_TYPE  OBJECT_SCHEMA   OBJECT_NAME
----------------- ------------ --------------- ------------------------------
SodaCollection1   TABLE        SODA_USER       SodaCollection1
SodaCollection2   TABLE        SODA_USER       SodaCollection2

soda_user@db01> select table_name from user_tables;

TABLE_NAME
------------------------------
SodaCollection1
SodaCollection2
The first SQL statement lists all SODA collections for the current user 'SODA_USER'. The two collections are named 'SodaCollection1' and 'SodaCollection2' (column URI_NAME of view USER_SODA_COLLECTIONS), and their respective mapped tables are created under the current schema 'SODA_USER' with the same names as the collections.

Note: collection names and table names are case sensitive.

The second SQL statement lists all tables owned by the current user; the output confirms that the two mapped tables listed by the first statement are indeed there.

The mapped tables can be exported with the command

   expdp username/password schemas=<schema_name> include=<table_list> dumpfile=<dump_file> logfile=<log_file>

or

   expdp username/password schemas=<schema_name> dumpfile=<dump_file> logfile=<log_file>

If possible, I prefer the second command because I do not have to list all the tables if there are many, and especially because of the headache of escaping the quote (double and single) characters in the INCLUDE parameter.

For example, export the specified tables as follows
expdp soda_user/password schemas=soda_user include=TABLE:\"in (\'SodaCollection1\',\'SodaCollection2\')\"  dumpfile=soda_collection.dmp logfile=soda_collection_exp.log
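If you do want to pass an explicit table list, the escaping can be generated rather than typed by hand. A minimal sketch (the build_include helper is my own, and the backslash escaping assumes the command is typed on a Unix shell command line, as in the example above):

```python
# Sketch: build a shell-escaped Data Pump INCLUDE clause for a list of
# case-sensitive SODA collection table names. The helper name is mine,
# not part of any Oracle tooling.
def build_include(tables):
    # Each name gets escaped single quotes; the IN-list gets escaped doubles.
    quoted = ",".join("\\'" + t + "\\'" for t in tables)
    return 'include=TABLE:\\"in (' + quoted + ')\\"'

print(build_include(["SodaCollection1", "SodaCollection2"]))
# prints: include=TABLE:\"in (\'SodaCollection1\',\'SodaCollection2\')\"
```

The printed clause matches the hand-escaped INCLUDE parameter in the expdp example above.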

Step 2. Unload SODA collection metadata by generating a collection-creating script from the source database

Log into the source database as the user who created the collections (soda_user in my example database), and run the following PL/SQL to generate the script
set serveroutput on
set feedback off
set linesize 32767
begin
  dbms_output.put_line('set serveroutput on');
  dbms_output.put_line('declare');
  dbms_output.put_line('  soda_c soda_collection_t;');
  dbms_output.put_line('begin');
  for c1 in (select uri_name, json_descriptor from user_soda_collections) loop
    dbms_output.put_line('  -- Collection '''||c1.uri_name||'''');
    dbms_output.put_line('  begin');
    dbms_output.put_line('    soda_c:=dbms_soda.create_collection(collection_Name=>'''||c1.uri_name||''', metadata=>'''||c1.json_descriptor||''');');
    dbms_output.put_line('    dbms_output.put_line(''  [INFO]  SODA Collection ['||c1.uri_name||'] created successfully'');');
    dbms_output.put_line('  exception');
    dbms_output.put_line('    when others then');
    dbms_output.put_line('      dbms_output.put_line(''  [ERROR] SODA Collection ['||c1.uri_name||'] creation failed with errors:'');');
    dbms_output.put_line('      dbms_output.put_line(SQLERRM);');
    dbms_output.put_line('  end;');
  end loop;
  dbms_output.put_line('end;');
  dbms_output.put_line('/');
end;
/

The script reads the SODA collection names (column URI_NAME of view USER_SODA_COLLECTIONS) and each collection's metadata (column JSON_DESCRIPTOR of the same view), and generates a creation statement for each collection.

Save the output as a script file. If the PL/SQL is run with SQL*Plus or SQLcl, the output can be saved with the SPOOL command.

Sample output as follows
set serveroutput on
declare
  soda_c soda_collection_t;
begin
  -- Collection 'SodaCollection1'
  begin
    soda_c:=dbms_soda.create_collection(collection_Name=>'SodaCollection1', metadata=>'{"schemaName":"SODA_USER","tableName":"SodaCollection1","keyColumn":{"name":"ID","sqlType":"VARCHAR2","maxLength":255,"assignmentMethod":"CLIENT"},"contentColumn":{"name":"DOCUMENT","sqlType":"BLOB","compress":"NONE","cache":true,"encrypt":"NONE","validation":"STANDARD"},"lastModifiedColumn":{"name":"MODIFIEDON"},"mediaTypeColumn":{"name":"MEDIATYPE"},"creationTimeColumn":{"name":"CREATEDON"},"readOnly":false}');
    dbms_output.put_line('  [INFO]  SODA Collection [SodaCollection1] created successfully');
  exception
    when others then
      dbms_output.put_line('  [ERROR] SODA Collection [SodaCollection1] creation failed with errors:');
      dbms_output.put_line(SQLERRM);
  end;
  -- Collection 'SodaCollection2'
  begin
    soda_c:=dbms_soda.create_collection(collection_Name=>'SodaCollection2', metadata=>'{"schemaName":"SODA_USER","tableName":"SodaCollection2","keyColumn":{"name":"ID","sqlType":"VARCHAR2","maxLength":255,"assignmentMethod":"CLIENT"},"contentColumn":{"name":"DOCUMENT","sqlType":"BLOB","compress":"NONE","cache":true,"encrypt":"NONE","validation":"STANDARD"},"lastModifiedColumn":{"name":"MODIFIEDON"},"versionColumn":{"name":"VERSION","method":"MD5"},"creationTimeColumn":{"name":"CREATEDON"},"readOnly":false}');
    dbms_output.put_line('  [INFO]  SODA Collection [SodaCollection2] created successfully');
  exception
    when others then
      dbms_output.put_line('  [ERROR] SODA Collection [SodaCollection2] creation failed with errors:');
      dbms_output.put_line(SQLERRM);
  end;
end;
/

In the example, the database user SODA_USER has two SODA collections, 'SodaCollection1' and 'SodaCollection2'.

Note: the value of "schemaName" in the metadata is "SODA_USER", the schema that owns the SODA data, and it is hard-coded in the script. Therefore, the generated script has to be run as the same user in the destination database.

Step 3. Import the mapped table (contains SODA documents) into destination database with Data Pump

The mapped tables can be imported into the destination database with the command

   impdp username/password schemas=<schema_name> dumpfile=<dump_file> logfile=<log_file>

Here, <dump_file> is the dump file created in step 1.

Step 4. Load SODA collection metadata into the destination database by running the collection-creating script created in step 2

Since the script hard-codes the schema name in the metadata, it has to be run as the same user in the destination database. In my example, the hard-coded schema name is SODA_USER, so the user SODA_USER has to exist in my destination database and the script must be run as SODA_USER.

If the mapped tables were imported into a different schema in step 3 (with the REMAP_SCHEMA parameter), you have to manually edit the script to replace the old value of "schemaName" (SODA_USER in my example) with the new schema name, then run it as the new user.
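Since the "schemaName" value sits inside the JSON metadata descriptor, a simple text substitution is enough for that edit. A minimal sketch, assuming the generated script is available as text (the remap_schema helper is my own, not an Oracle API):

```python
# Sketch: rewrite the hard-coded "schemaName" in the metadata descriptors of
# the generated collection-creation script after a REMAP_SCHEMA import.
# The helper name is mine, not an Oracle API.
def remap_schema(script_text, old_schema, new_schema):
    return script_text.replace('"schemaName":"' + old_schema + '"',
                               '"schemaName":"' + new_schema + '"')

descriptor = '{"schemaName":"SODA_USER","tableName":"SodaCollection1"}'
print(remap_schema(descriptor, "SODA_USER", "NEW_USER"))
# prints: {"schemaName":"NEW_USER","tableName":"SodaCollection1"}
```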

Wednesday, May 25, 2022

ORA-02291: integrity constraint (WWV_FLOW_FND_GU_INT_G_FK) violated - parent key not found - When Importing Workspace In APEX configured with ORDS & WebLogic

When creating users while importing a workspace in APEX, I got an ORA-02291 error like the following
ORA-02291: integrity constraint (APEX_200100.WWV_FLOW_FND_GU_INT_G_FK) violated - parent key not found
ORA-06512: at "APEX_200100.WWV_FLOW_FND_USER_INT", line 2067
ORA-06512: at "APEX_200100.WWV_FLOW_FND_USER_API", line 339
ORA-06512: at line 2
02291. 00000 - "integrity constraint (%s.%s) violated - parent key not found"
*Cause:    A foreign key value has no matching primary key value.
*Action:   Delete the foreign key or add a matching primary key.
This error occurs because pre-created user groups (SQL Developer, etc.) do not have the same IDs in the two APEX instances. It was reported as a bug in APEX 4.2.1 when ORDS is used and deployed with WebLogic, but it has also been reported on APEX 18.1, and I got it on my APEX 20.1.

As a workaround, it can be fixed by removing the group assignment from the "create_fnd_user" statement in the workspace import SQL script, as follows.

Original SQL
begin
wwv_flow_fnd_user_api.create_fnd_user (
  p_user_id                      => '2471724602488540',
  p_user_name                    => 'DEVELOPER01',
  p_first_name                   => 'Tom',
  p_last_name                    => 'Scott',
  p_description                  => '',
  p_email_address                => 'user01@dbaplus.ca',
  p_web_password                 => '1CE93BEEE43FF616144AC66B076A57D7729B7F53E08230A1D344DF88CF52CC2B1D2D7383BB874B794EAB3841BA9FB466B759DC33E3C2C6E8C8C2A5D7C611F5D7',
  p_web_password_format          => '5;5;10000',
  p_group_ids                    => '1794918882609493:1795050459609497:1795182004609497:',
  p_developer_privs              => 'ADMIN:CREATE:DATA_LOADER:EDIT:HELP:MONITOR:SQL',
  p_default_schema               => 'APP_SCHEMA',
  p_account_locked               => 'N',
  p_account_expiry               => to_date('202002030936','YYYYMMDDHH24MI'),
  p_failed_access_attempts       => 0,
  p_change_password_on_first_use => 'Y',
  p_first_password_use_occurred  => 'Y',
  p_allow_app_building_yn        => 'Y',
  p_allow_sql_workshop_yn        => 'Y',
  p_allow_websheet_dev_yn        => 'Y',
  p_allow_team_development_yn    => 'Y',
  p_allow_access_to_schemas      => '');
end;
/
New statement
begin
wwv_flow_fnd_user_api.create_fnd_user (
  p_user_id                      => '2471724602488540',
  p_user_name                    => 'DEVELOPER01',
  p_first_name                   => 'Tom',
  p_last_name                    => 'Scott',
  p_description                  => '',
  p_email_address                => 'user01@dbaplus.ca',
  p_web_password                 => '1CE93BEEE43FF616144AC66B076A57D7729B7F53E08230A1D344DF88CF52CC2B1D2D7383BB874B794EAB3841BA9FB466B759DC33E3C2C6E8C8C2A5D7C611F5D7',
  p_web_password_format          => '5;5;10000',
  p_group_ids                    => '',
  p_developer_privs              => 'ADMIN:CREATE:DATA_LOADER:EDIT:HELP:MONITOR:SQL',
  p_default_schema               => 'APP_SCHEMA',
  p_account_locked               => 'N',
  p_account_expiry               => to_date('202002030936','YYYYMMDDHH24MI'),
  p_failed_access_attempts       => 0,
  p_change_password_on_first_use => 'Y',
  p_first_password_use_occurred  => 'Y',
  p_allow_app_building_yn        => 'Y',
  p_allow_sql_workshop_yn        => 'Y',
  p_allow_websheet_dev_yn        => 'Y',
  p_allow_team_development_yn    => 'Y',
  p_allow_access_to_schemas      => '');
end;
/
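Editing every create_fnd_user call by hand is tedious for a workspace with many users; the p_group_ids arguments can be blanked mechanically instead. A minimal sketch (the drop_group_ids helper is my own, not part of APEX):

```python
import re

# Sketch: blank the p_group_ids argument in every create_fnd_user call of a
# workspace export script. The helper name is mine, not part of APEX.
def drop_group_ids(sql_text):
    # Keep the "p_group_ids => " prefix, replace the quoted value with ''.
    return re.sub(r"(p_group_ids\s*=>\s*)'[^']*'", r"\1''", sql_text)

line = "  p_group_ids                    => '1794918882609493:1795050459609497:',"
print(drop_group_ids(line))
# prints:   p_group_ids                    => '',
```

Run the whole import script through this before executing it in the target instance.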

Friday, April 29, 2022

EM 12c/13c Configure Enterprise Manager with custom or third-party SSL Certificates

When deciding to configure Oracle Enterprise Manager with custom or third-party SSL certificates, you usually need to complete configuration tasks for the following EM components

  *  EM Cloud Control Console
  *  Oracle Management Service (OMS) and EM Agent
  *  WebLogic Server (WLS)

The configuration can be done in the following steps.

Thursday, March 31, 2022

Script - Enable Customized Password Complexity in Oracle non-cdb or cdb environment

The script creates a password complexity verification function and enables the function through a database profile. It also sets the password policy in the profile.

Script notes:

1. The script is executed by SYS in a single-instance or RAC database.

2. The script enables password complexity and policy by modifying a database profile. By default it changes the DEFAULT profile; this can be changed through the SQL*Plus substitution variable S_PROFILE in the script.

3. The verification function name (default dbaplus_verify_function) and all password complexity and password policy options can be changed through the substitution variables (the DEFINE statements in the script).

4. The script can be executed in a non-CDB or in a pluggable database (PDB) that is open in READ WRITE mode.

5. If the script is executed in the root container (CDB$ROOT) of a container database (CDB), it applies the changes in all PDBs that are open in READ WRITE mode. If the seed PDB (PDB$SEED) is open in READ ONLY mode, it is reopened in READ WRITE mode to apply the change, then reopened in READ ONLY mode. Any PDB that is not open is skipped.
--------------------------------------------------
define S_VERIFY_FUNCTION='dbaplus_verify_function'
define S_PROFILE='DEFAULT'
define S_PASSWORD_LIFE_TIME='unlimited'
define S_PASSWORD_GRACE_TIME='7'
define S_PASSWORD_REUSE_TIME='1'
define S_PASSWORD_REUSE_MAX='10'
define S_PASSWORD_LOCK_TIME='.0104'
define S_FAILED_LOGIN_ATTEMPTS='5'
define S_NO_OF_CHAR='12'
define S_NO_OF_DIFF='1'
define S_NO_OF_UPPERCASE='1'
define S_NO_OF_LOWERCASE='1'
define S_NO_OF_DIGIT='1'
define S_NO_OF_SPECIAL='1'
--------------------------------------------------
set echo off
set verify off
set feedback off
set trimspool on
set serveroutput on
declare
  isRAC varchar2(5);
  isContainer varchar2(5);
  i_return integer;
  i_handle integer;
  s_container varchar2(255);
  s_open_mode varchar2(255);
  -- SQL to grant execute privilege on the function to public
  s_grant varchar2(100) := 'grant execute on &S_VERIFY_FUNCTION to public';
  s_close_seed varchar2(255) := 'alter pluggable database PDB$SEED close immediate';
  s_open_seed varchar2(255) := 'alter pluggable database PDB$SEED open read only';
  s_open_rw_seed varchar2(255) := 'alter pluggable database PDB$SEED open read write';
  -- SQL to alter profile
  s_modify_profile varchar2(2000):=q'[
    alter profile &S_PROFILE limit
      password_life_time &S_PASSWORD_LIFE_TIME
      password_grace_time &S_PASSWORD_GRACE_TIME
      password_reuse_time &S_PASSWORD_REUSE_TIME
      password_reuse_max &S_PASSWORD_REUSE_MAX
      password_lock_time &S_PASSWORD_LOCK_TIME
      failed_login_attempts &S_FAILED_LOGIN_ATTEMPTS
      password_verify_function &S_VERIFY_FUNCTION]';
  -- PL/SQL block to create custom password verify function
  s_create_func varchar2(2000):=q'[
    create or replace function &S_VERIFY_FUNCTION( username varchar2, password varchar2, old_password varchar2)
     return boolean is
       p_diff    integer;
       p_lang    varchar2(512);
       p_message varchar2(512);
       p_return  number;
    begin
       -- Get the context p_lang and use utl_lms for p_messages
       p_lang := sys_context('userenv','lang');
       p_lang := substr(p_lang,1,instr(p_lang,'_')-1);
       --19c ora_complexity_check(password varchar2, chars number, letter number, uppercase number, lowercase number, digit number, special number)
       --12c ora_complexity_check(password varchar2, chars number, letter number, upper number, lower number, digit number, special number)
       if not ora_complexity_check(password, &S_NO_OF_CHAR,null,&S_NO_OF_UPPERCASE,&S_NO_OF_LOWERCASE,&S_NO_OF_DIGIT,&S_NO_OF_SPECIAL) then
          return(false);
       end if;
       -- Check if the password differs from the previous password by at least S_NO_OF_DIFF characters
       if old_password is not null then
          p_diff := ora_string_distance(old_password, password);
          if p_diff < &S_NO_OF_DIFF then
             p_return := utl_lms.get_message(28211, 'RDBMS', 'ORA', p_lang, p_message);
             raise_application_error(-20000, utl_lms.format_message(p_message, '&S_NO_OF_DIFF'));
          end if;
       end if;
       return(true);
    end;]';
begin
  -- CDB
  select value into isContainer from v$parameter where name='enable_pluggable_database';
  -- RAC database
  select value into isRAC from v$parameter where name='cluster_database';
  if isRAC='TRUE' then
     s_close_seed := s_close_seed||' instances=all';
     s_open_seed := s_open_seed||' instances=all';
  end if;
  s_container := sys_context('userenv','con_name');
  if s_container<>'CDB$ROOT' then -- In Non-CDB or PDB
     execute immediate s_create_func;
     execute immediate s_grant;
     execute immediate s_modify_profile;
     dbms_output.new_line;
     if isContainer='TRUE' then
        dbms_output.put_line('Succeeded in pluggable database '''||s_container||'''');
     else
        dbms_output.put_line('Succeeded in non-cdb database '''||s_container||'''');
     end if;
  else -- In root container CDB$ROOT
     execute immediate s_create_func;
     execute immediate s_grant;
     execute immediate s_modify_profile;
     dbms_output.new_line;
     dbms_output.put_line('Succeeded in root container '''||s_container||'''.');
     begin
       i_handle := dbms_sql.open_cursor;
       --for c1 in (select name,open_mode from v$pdbs where open_mode='READ WRITE' or (name='PDB$SEED' and open_mode='READ ONLY'))
       for c1 in (select name,open_mode from v$pdbs)
       loop
         s_open_mode := c1.open_mode;
         if c1.name='PDB$SEED' and c1.open_mode='READ ONLY' then
            execute immediate s_close_seed;
            execute immediate s_open_rw_seed;
            s_open_mode := 'READ WRITE';
         end if;
         if s_open_mode='READ WRITE' then
            dbms_sql.parse(c=>i_handle, statement=>s_create_func, language_flag=>DBMS_SQL.NATIVE, container=>c1.name);
            i_return := dbms_sql.execute(i_handle);
            dbms_sql.parse(c=>i_handle, statement=>s_grant, language_flag=>DBMS_SQL.NATIVE, container=>c1.name);
            i_return := dbms_sql.execute(i_handle);
            dbms_sql.parse(c=>i_handle, statement=>s_modify_profile, language_flag=>DBMS_SQL.NATIVE, container=>c1.name);
            i_return := dbms_sql.execute(i_handle);
            dbms_output.new_line;
            dbms_output.put_line('Succeeded in pluggable database '''||c1.name||'''.');
            if c1.name='PDB$SEED' then
               execute immediate s_close_seed;
               execute immediate s_open_seed;
            end if;
         else
            dbms_output.new_line;
            dbms_output.put_line('Skipped pluggable database '''||c1.name||''' which is in '''||s_open_mode||''' mode.');
         end if;
       end loop;
       dbms_sql.close_cursor(i_handle);
     exception
       when others then
         dbms_sql.close_cursor(i_handle);
         raise;
     end;
  end if;
  dbms_output.new_line;
end;
/
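For reference, the complexity rules the function enforces through ora_complexity_check can be mirrored in plain code. A sketch using this script's default thresholds (my own code, not an Oracle API; here "special" means any non-alphanumeric character, while Oracle's exact special-character set may differ):

```python
# Sketch (plain Python, not an Oracle API): the complexity rules enforced
# above via ora_complexity_check, with this script's default thresholds:
# 12 characters, 1 uppercase, 1 lowercase, 1 digit, 1 special.
def meets_complexity(password, min_chars=12, min_upper=1, min_lower=1,
                     min_digit=1, min_special=1):
    n_upper   = sum(c.isupper() for c in password)
    n_lower   = sum(c.islower() for c in password)
    n_digit   = sum(c.isdigit() for c in password)
    n_special = sum(not c.isalnum() for c in password)  # crude "special" rule
    return (len(password) >= min_chars and n_upper >= min_upper
            and n_lower >= min_lower and n_digit >= min_digit
            and n_special >= min_special)

print(meets_complexity("Str0ng#Passwd"))  # True: 13 chars, upper, lower, digit, '#'
print(meets_complexity("weakpassword"))   # False: no upper, digit or special
```

Note the script does not mirror the old-password distance check, which relies on Oracle's ora_string_distance.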

Friday, March 11, 2022

Export/import APEX workspace and application with SQLcl

One of the reasons why I prefer SQLcl to SQL*Plus is that SQLcl has built-in "apex" commands. APEX workspaces and applications can be easily exported / imported with SQLcl.

Note: If you have multiple versions of APEX schemas created in the database, you have to log into the database as the current version's APEX schema to run the commands. If you log in as another privileged user (e.g. SYS, SYSTEM), you have to set the current schema before running the apex command with the following SQL,

  alter session set current_schema=<current-apex-schema>;

For example, if APEX 19.1 is installed
alter session set current_schema=APEX_190100;
1. Export / Import APEX workspace

A workspace can be exported with the following command

  apex -workspaceid <workspace_id> <-expWorkspace | -expMinimal | -expFiles>

The command creates a SQL script file named w<workspace_id>.sql under the current directory. It can be run with one of the following options

      -expWorkspace:  Export workspace identified by -workspaceid
      -expMinimal:    Only export workspace definition, users, and groups
      -expFiles:      Export all workspace files identified by -workspaceid

And <workspace_id> can be listed/found with the following SQL

  select workspace_id,workspace from apex_workspaces;

For example
apex_190100@ddb01> select workspace_id,workspace from apex_workspaces;

      WORKSPACE_ID WORKSPACE
  ---------------- --------------------------
                11 COM.ORACLE.APEX.REPOSITORY
                12 COM.ORACLE.CUST.REPOSITORY
                10 INTERNAL
  4482715458638925 SALES
  2104419987831886 FINANCE
  6822807081190901 HR
For example, to export workspace SALES,
apex_190100@ddb01> apex export -workspaceid 4482715458638925 -expWorkspace
Exporting Workspace 4482715458638925:'SALES'

  Completed at Thu Mar 10 19:56:28 EST 2022
It creates a SQL script file named w4482715458638925.sql; this script is the export file of workspace SALES. To import the workspace into a new APEX instance, just run this script as the APEX schema in that database.

For example, import the SALES workspace into database db02
apex_190100@db02> @w4482715458638925.sql
API Last Extended:20190331
Your Current Version:20190331
This import is compatible with version: 20190331
COMPATIBLE (You should be able to run this import without issues.)

WORKSPACE 4482715458638925
Creating workspace SALES...
Creating Groups...
Creating group grants...
Creating Users...
...workspace objects
...RESTful Services
SCHEMA SALESDATA - User Interface Defaults, Table Defaults
SCHEMA HRDATA - User Interface Defaults, Table Defaults
User Interface Defaults, Attribute Dictionary
... elapsed: 14.1 sec
...done
2. Export / import application

Export a single application with the following command

  apex export -applicationid <application_id>

Export all applications of a specified workspace with

  apex export -workspaceid <workspace_id>

Both commands create one SQL script file per application under the current directory. The script file name is f<application_id>.sql.

For example, export all applications of the workspace SALES
apex_190100@db01> apex export -workspaceid 4482715458638925
Exporting Application 108:'Promotion Channel'

  Completed at Fri Mar 11 21:23:27 EST 2022

Exporting Application 126:'Sales History'

  Completed at Fri Mar 11 21:23:27 EST 2022

Exporting Application 128:'Customers'

  Completed at Fri Mar 11 21:23:27 EST 2022
In total, three applications are exported as three SQL script files, named f108.sql, f126.sql and f128.sql.

To import an application, run the exported SQL as the APEX schema in the new database. For example, import the application "Sales History" (application ID 126) into database db02
apex_190100@db02> @f126.sql
API Last Extended:20190331
Your Current Version:20190331
This import is compatible with version: 20190331
COMPATIBLE (You should be able to run this import without issues.)

APPLICATION 126 - Sales History
--application/delete_application
--application/create_application
--application/shared_components/navigation/lists/data_load_process_train_load_customers
--application/shared_components/navigation/lists/admin_reports
--application/shared_components/navigation/lists/order_page
--application/shared_components/navigation/lists/mobile_options
--application/shared_components/navigation/lists/footer_controls
--application/shared_components/navigation/lists/charts
--application/shared_components/navigation/lists/data_load_process_train_load_customers_002
--application/shared_components/navigation/lists/reports
   ...<output truncated> ...
--application/deployment/checks
--application/deployment/buildoptions
--application/end_environment
... elapsed: 8.32 sec
...done

Monday, March 7, 2022

Database failed to start with errors ORA-01565 ORA-17503 ORA-27300 ORA-27301 ORA-27302

When starting the database, you get errors ORA-01565, ORA-17503, ORA-27300, ORA-27301 and ORA-27302. For example,
$ srvctl start database -db db01
PRCR-1079 : Failed to start resource ora.db01.db
CRS-5017: The resource action "ora.db01.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/db01/parameterfile/spfiledb01.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/db01/parameterfile/spfiledb01.ora
ORA-27300: OS system dependent operation:open failed with status: 13
ORA-27301: OS failure message: Permission denied
ORA-27302: failure occurred at: sskgmsmr_7
. For details refer to "(:CLSN00107:)" in "/u01/app/grid/diag/crs/host01/crs/trace/ohasd_oraagent_grid.trc".

CRS-2674: Start of 'ora.db01.db' on 'host01' failed
The ORA-2730x errors can be seen in many different scenarios. They are usually caused by an OS system call error or an OS configuration issue, and they take the form
ORA-27300: OS system dependent operation:%s failed with status: %s
ORA-27301: OS failure message: %s
ORA-27302: failure occurred at: %s
One of those scenarios is the previous example, which shows ORA-27300 with 'status 13' and ORA-27302 with 'sskgmsmr_7'. This issue can happen when Oracle ASM runs under an account (usually grid) different from the Oracle database home owner (usually oracle), for example in an Oracle Grid Infrastructure (GI) installation or a role-separated Oracle Restart (standalone GI) installation. In these environments, if you get the following errors
ORA-27300: OS system dependent operation:open failed with status: 13
ORA-27301: OS failure message: Permission denied
ORA-27302: failure occurred at: sskgmsmr_7
the culprit is usually the permission on the directory "<GRID_HOME>/dbs" or on the file "<GRID_HOME>/dbs/ab_+ASM.dat". The permissions in the previous example system are as follows
$ ls -ld /u01/app/grid/product/19.0.0/grid_1/dbs
drwxr-xr-x    2 grid     oinstall        256 Feb 09 14:39 /u01/app/grid/product/19.0.0/grid_1/dbs
$ cd /u01/app/grid/product/19.0.0/grid_1/dbs
$ ls -l
total 56
-rw-rw----    1 grid     oinstall       2304 Feb 09 14:39 ab_+ASM.dat
-rw-rw----    1 grid     oinstall       1544 Feb 09 15:00 hc_+ASM.dat
-rw-r-----    1 grid     oinstall         57 Sep 25 2020  init+ASM.ora
-rw-r--r--    1 grid     oinstall       3079 May 16 2015  init.ora
-rw-r-----    1 grid     oinstall       9728 Sep 25 2020  orapw+ASM
Everyone has read/execute (r-x) permission on the directory "dbs", but only the owner (grid) and group (oinstall) have read/write permission on the file "ab_+ASM.dat"; other users have no access to the file.

The issue can be fixed by granting read permission to other users, as follows
$ chmod o+r ab_+ASM.dat
$ ls -l
total 56
-rw-rw-r--    1 grid     oinstall       2304 Feb 09 14:39 ab_+ASM.dat
-rw-rw----    1 grid     oinstall       1544 Feb 09 15:09 hc_+ASM.dat
-rw-r-----    1 grid     oinstall         57 Sep 25 2020  init+ASM.ora
-rw-r--r--    1 grid     oinstall       3079 May 16 2015  init.ora
-rw-r-----    1 grid     oinstall       9728 Sep 25 2020  orapw+ASM
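A quick way to verify the fix is to check the "others" read bit directly. A sketch using GNU stat (Linux syntax; AIX/Solaris flags differ), with a temporary file standing in for ab_+ASM.dat:

```shell
# Sketch: check whether "others" can read a file, as the ab_+ASM.dat fix
# requires. "stat -c" is GNU/Linux syntax; AIX and Solaris flags differ.
# A temp file stands in for <GRID_HOME>/dbs/ab_+ASM.dat here.
f=$(mktemp)
chmod 660 "$f"                      # owner/group rw, nothing for others
case "$(stat -c '%A' "$f")" in
  ???????r*) echo "others can read $f" ;;
  *)         echo "missing o+r on $f" ;;
esac
chmod o+r "$f"                      # the fix shown above
stat -c '%A' "$f"                   # prints: -rw-rw-r--
rm -f "$f"
```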

Thursday, March 3, 2022

Oracle SQLcl installation and configuration

Oracle SQL Developer Command Line (SQLcl) is a free command-line interface for Oracle Database that combines the power of SQL*Plus and SQL Developer. It allows you to interactively or batch execute SQL and PL/SQL. SQLcl provides in-line editing, statement completion, and command recall for a feature-rich experience, as well as supporting your previously written SQL*Plus scripts.

Saturday, February 26, 2022

Script Find the Regular Source File of the Multi-level Symbolic Link on Linux / Unix

A bash script can be used to find the final regular source file or directory of a multi-level symbolic link on Linux and Unix (AIX, Solaris) platforms.

Suppose the following file and symbolic links exist on the current system
-rw-r--r--   1 user01   group01      12 Feb 26 14:08 /dir1/file1.txt
lrwxrwxrwx   1 user01   group01      17 Feb 26 14:13 /dir2/file2.lnk -> ../dir1/file1.txt
lrwxrwxrwx   1 user01   group01      30 Feb 26 14:14 /dir3/file3.lnk -> /dir2/file2.lnk
lrwxrwxrwx   2 user01   group01      17 Feb 26 14:18 /dir4/file4_1.lnk -> ../dir3/file3.lnk
lrwxrwxrwx   1 user01   group01      11 Feb 26 14:21 /dir4/file4_2.lnk -> file4_1.lnk
Running the script findSrc.sh against the symbolic link 'file4_2.lnk' should return the regular file 'file1.txt' with the proper path,
$ findSrc.sh /dir4/file4_2.lnk
/dir1/file1.txt
The script code is as follows
#!/bin/bash
srcFile="$1"
while [ -h "$srcFile" ]
do
  tmpStr=`ls -ld "$srcFile"`
  tmpSrc=`expr "$tmpStr" : '.*-> \(.*\)$'`
  if expr "$tmpSrc" : '/.*' > /dev/null; then
     srcFile="$tmpSrc"
  else
     srcFile=`dirname "$srcFile"`"/$tmpSrc"
  fi
done
if [[ $srcFile == *".."* ]] || [[ $srcFile == *"."* ]] ; then
   workDir=`pwd`
   tmpDir=`dirname "$srcFile"`
   tmpFile=`basename "$srcFile"`
   tmpDir=`cd "$tmpDir" > /dev/null && pwd`
   if [ x"$tmpDir" != x ]; then
      srcFile="$tmpDir""/$tmpFile"
   fi
   cd "$workDir"
fi
echo "$srcFile"
Line 3. The test operator '-h' returns TRUE if the file "$srcFile" is a symbolic link.

Line 5. Gets the output of the command 'ls -ld' for the symbolic link given as the script's first parameter, and assigns the output string to the variable 'tmpStr'. For example, running the command as follows
$ ls -ld /dir4/file4
lrwxrwxrwx   1 oracle   oinstall       1 Feb 26 08:16 /dir4/file4 -> file3
The value of the variable tmpStr will be that whole output line, including the SPACE characters inside the string.

Line 6. The command 'expr' is run with the string-matching operator ':'. It performs an anchored match of the regular expression against the string value of variable 'tmpStr', starting from the beginning of the string. The regular expression '.*-> \(.*\)$' consists of the following parts

  '.*' matches any number of any character and occurs twice in the expression. The first occurrence, together with the following three characters '-> ' (hyphen, greater-than and space), matches the string from the beginning letter 'l' up to and including the SPACE character before 'file3' in the previous example output of 'ls -ld'.

  '\(' and '\)': the backslash '\' is an escape character that tells 'expr' the following '(' and ')' are grouping parentheses, asking it to return the string matched by the expression inside them. Here, that expression is the second occurrence of '.*', which matches all the characters after the three-character group '-> ': 'file3' in the previous example output.

Eventually, the output of the command 'expr' is assigned to the variable 'tmpSrc'.

Line 7. Tests whether the path saved in variable 'tmpSrc' is an absolute path (starts with '/').

Line 10. If the test on line 7 is FALSE, the linked (source) file's path is relative to the symbolic link. Therefore, the full path of the source file has to be prefixed with the directory of the symbolic link.

Lines 13-22. This optional block makes the source file path friendlier. It changes paths like

    /dir1/dir2/../../dir3/file.txt
    /dir1/dir2/./../dir3/file.txt
    
to the friendlier format

    /dir3/file.txt
    /dir1/dir3/file.txt
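On Linux with GNU coreutils, readlink -f performs the same multi-level resolution, so the script above is mainly useful on platforms such as AIX or Solaris where readlink -f may not be available. A throwaway demonstration:

```shell
# Demonstration: GNU "readlink -f" resolves a multi-level symbolic link chain
# to the final regular file, like findSrc.sh above (paths are throwaway).
d=$(mktemp -d)
mkdir "$d/dir1" "$d/dir2"
echo hello > "$d/dir1/file1.txt"
ln -s ../dir1/file1.txt "$d/dir2/file2.lnk"   # relative link to the file
ln -s "$d/dir2/file2.lnk" "$d/file3.lnk"      # absolute link to the link
readlink -f "$d/file3.lnk"                    # prints the real path of file1.txt
rm -rf "$d"
```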

Monday, February 21, 2022

Oracle RAC 19c pdb open failed with ORA-12963: A read-only instance cannot be the first instance to open a database

When opening a pluggable database (pdb) on one node of a two-node 19c RAC, I got the following errors
sys@rac01> alter pluggable database pdb1 open;
alter pluggable database pdb1 open
*
ERROR at line 1:
ORA-12963: A read-only instance cannot be the first instance to open a database.
Open the pdb on the other node; there it opens, by default, in read/write mode
sys@rac02> show pdbs

    CON_ID CON_NAME           OPEN MODE  RESTRICTED
---------- ------------------ ---------- ----------
         2 PDB$SEED           READ ONLY  NO
         3 pdb1               MOUNTED
sys@rac02> alter pluggable database pdb1 open;

Pluggable database altered.

sys@rac02> show pdbs

    CON_ID CON_NAME           OPEN MODE  RESTRICTED
---------- ------------------ ---------- ----------
         2 PDB$SEED           READ ONLY  NO
         3 pdb1               READ WRITE NO
Now, trying to open the pdb on the local node again succeeds, but it opens in READ ONLY mode
sys@rac01> alter pluggable database pdb1 open;

Pluggable database altered.

sys@rac01> show pdbs

    CON_ID CON_NAME           OPEN MODE  RESTRICTED
---------- ------------------ ---------- ----------
         2 PDB$SEED           READ ONLY  NO
         3 pdb1               READ ONLY  NO
Check the instances' status with command 'srvctl status database'
[oracle@rac01]$ srvctl status database -db orcl -verbose
Instance orcl_1 is running on node rac02. Instance status: Open.
Instance orcl_2 is running on node rac01. Instance status: Open,Readonly.
The instance is started in READ ONLY mode on node rac01. This usually happens when the node is assigned to a parallel query server pool (PQPOOL). A node assigned to a PQPOOL is treated as a leaf node, and a database instance running on a leaf node is opened READ ONLY. 

Check the server pool configuration of the database
[oracle@rac01]$ srvctl config database -db orcl
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/19.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/ORCL/PARAMETERFILE/spfile.299.1096020569
Password file: +DATA/ORCL/PASSWORD/pwdorcl.289.1096019487
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: dbpool,pool2
PQ Helper Server Pools: pool2
Disk Groups: DATA,FRA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: oper
Database instances:
Configured nodes:
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is policy managed
The database is configured with two pools: dbpool and pool2, and pool2 is a PQPOOL (PQ Helper Server Pool, also called a parallel query server pool).

Check server pools' assignment with command 'srvctl status srvpool'
[oracle@rac01]$ srvctl status srvpool
Server pool name: Free
Active servers count: 0
Server pool name: Generic
Active servers count: 0
Server pool name: dbpool
Active servers count: 1
Server pool name: pool2
Active servers count: 1
The cluster is a two-node RAC; server pool dbpool and PQPOOL pool2 each have one server assigned. This means orcl has one instance, on the node in dbpool, running in normal (read/write) mode, and another instance, on the node in pool2, running in read-only mode.

Since it is a two-node RAC and both nodes are hub nodes, the PQPOOL is not necessary. It can be removed to fix the issue as follows

1. Remove PQPOOL configuration from database with command 'srvctl modify database'
[oracle@rac01]$ srvctl stop database -db orcl

[oracle@rac01]$ srvctl modify database -db orcl -pqpool

[oracle@rac01]$ srvctl config database -db orcl
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/19.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/ORCL/PARAMETERFILE/spfile.299.1096020569
Password file: +DATA/ORCL/PASSWORD/pwdorcl.289.1096019487
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: dbpool
Disk Groups: DATA,FRA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: oper
Database instances:
Configured nodes:
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is policy managed
2. Remove the PQPOOL with command 'srvctl remove srvpool'
[oracle@rac01]$ srvctl remove srvpool -serverpool pool2
[oracle@rac01]$
[oracle@rac01]$ srvctl status srvpool
Server pool name: Free
Active servers count: 0
Server pool name: Generic
Active servers count: 0
Server pool name: dbpool
Active servers count: 2
Now server pool dbpool has two servers assigned, and database orcl gets two instances running in normal (read/write) mode.

Wednesday, February 16, 2022

Oracle 12.2 EXPDP Fails Exporting Full PDB with ORA-39126 On KUPW$WORKER.FETCH_XML_OBJECTS

This issue happens in Oracle Database 12.2 and higher; the example in this post happened in 19.13.

When running Oracle Data Pump Export utility expdp to export PDB in Full Mode, the job fails with ORA-39126 and ORA-00942. Following is an example
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.FETCH_XML_OBJECTS [ORA-00942: table or view does not exist

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPW$WORKER", line 12630
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.DBMS_METADATA", line 9571
ORA-06512: at "SYS.DBMS_METADATA", line 2976
ORA-06512: at "SYS.DBMS_METADATA", line 3608
ORA-06512: at "SYS.DBMS_METADATA", line 5008
ORA-06512: at "SYS.DBMS_METADATA", line 5327
ORA-06512: at "SYS.DBMS_METADATA", line 9552
ORA-06512: at "SYS.KUPW$WORKER", line 15119

----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
900000045fb46b00     33543  package body SYS.KUPW$WORKER.WRITE_ERROR_INFORMATION
900000045fb46b00     12651  package body SYS.KUPW$WORKER.DETERMINE_FATAL_ERROR
900000045fb46b00     15452  package body SYS.KUPW$WORKER.FETCH_XML_OBJECTS
900000045fb46b00      3917  package body SYS.KUPW$WORKER.UNLOAD_METADATA
900000045fb46b00     13746  package body SYS.KUPW$WORKER.DISPATCH_WORK_ITEMS
900000045fb46b00      2439  package body SYS.KUPW$WORKER.MAIN
b00000055ff65380         2  anonymous block

DBMS_METADATA.SET_FILTER
DBMS_METADATA.SET_FILTER
DBMS_METADATA.SET_FILTER
KUPW: In FETCH_XML_OBJECTS
KUPW: End seqno is: 24
KUPF$FILE.OPEN_CONTEXT
KUPF$FILE.OPEN_CONTEXT
DBMS_METADATA.FETCH_XML_CLOB
DBMS_METADATA.FETCH_XML_CLOB
KUPW: In procedure DETERMINE_FATAL_ERROR with ORA-00942: table or view does not exist
The issue is usually seen when the components installed in the pdb are fewer than the components installed in the root container. For example, the components installed in the root container are as follows
sys@orcl> alter session set container=cdb$root;

Session altered.

sys@orcl> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 pdb01                          READ WRITE NO

sys@orcl> select comp_id,comp_name from dba_registry;

COMP_ID                        COMP_NAME
------------------------------ --------------------------------------------------
CATALOG                        Oracle Database Catalog Views
CATPROC                        Oracle Database Packages and Types
JAVAVM                         JServer JAVA Virtual Machine
XML                            Oracle XDK
CATJAVA                        Oracle Database Java Packages
APS                            OLAP Analytic Workspace
RAC                            Oracle Real Application Clusters
XDB                            Oracle XML Database
OWM                            Oracle Workspace Manager
CONTEXT                        Oracle Text
ORDIM                          Oracle Multimedia
SDO                            Spatial
XOQ                            Oracle OLAP API
OLS                            Oracle Label Security
APEX                           Oracle Application Express
DV                             Oracle Database Vault

sys@orcl> alter session set container=PDB$SEED;

Session altered.

sys@orcl> select comp_id,comp_name from dba_registry;

COMP_ID                        COMP_NAME
------------------------------ --------------------------------------------------
CATALOG                        Oracle Database Catalog Views
CATPROC                        Oracle Database Packages and Types
JAVAVM                         JServer JAVA Virtual Machine
XML                            Oracle XDK
CATJAVA                        Oracle Database Java Packages
APS                            OLAP Analytic Workspace
RAC                            Oracle Real Application Clusters
XDB                            Oracle XML Database
OWM                            Oracle Workspace Manager
CONTEXT                        Oracle Text
ORDIM                          Oracle Multimedia
SDO                            Spatial
XOQ                            Oracle OLAP API
OLS                            Oracle Label Security
APEX                           Oracle Application Express
DV                             Oracle Database Vault
Components installed in pdb
sys@orcl> alter session set container=pdb01;

Session altered.

sys@orcl> select comp_id,comp_name from dba_registry;

COMP_ID                        COMP_NAME
------------------------------ --------------------------------------------------
CATALOG                        Oracle Database Catalog Views
CATPROC                        Oracle Database Packages and Types
RAC                            Oracle Real Application Clusters
OWM                            Oracle Workspace Manager
XDB                            Oracle XML Database
Both the root container and the seed pdb have 16 components installed, but pdb01 has only 5. This usually happens when the pdb is created by remote clone. If you hit the issue in this scenario with database 12.2 or higher, it can be fixed by reloading Data Pump as follows

1. Open all pluggable databases
alter pluggable database all open;
2. Run following script as SYS
@?/rdbms/admin/dpload.sql
3. Recompile invalid objects as SYS
@?/rdbms/admin/utlrp.sql
If reloading Data Pump is not applicable for you, schema mode can be used instead of full mode.
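To see exactly which components are missing from the pdb, the two registry listings can be diffed outside the database; the following toy shell sketch (the component lists are hard-coded from the output above) uses comm on sorted lists:

```shell
#!/bin/sh
# COMP_IDs from dba_registry in cdb$root (as listed above)
root_comps='CATALOG CATPROC JAVAVM XML CATJAVA APS RAC XDB OWM CONTEXT ORDIM SDO XOQ OLS APEX DV'
# COMP_IDs from dba_registry in pdb01
pdb_comps='CATALOG CATPROC RAC OWM XDB'

# comm needs sorted input; one component per line
printf '%s\n' $root_comps | sort > /tmp/root_comps.txt
printf '%s\n' $pdb_comps  | sort > /tmp/pdb_comps.txt

# -23 suppresses lines unique to file 2 and lines common to both,
# leaving the components present in root but missing from the pdb
comm -23 /tmp/root_comps.txt /tmp/pdb_comps.txt
```

For these lists, comm -23 prints the 11 components present in cdb$root but missing from pdb01.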

Wednesday, February 9, 2022

Configure Reverse Path Filter parameter "rp_filter" on Oracle Linux and Red Hat Enterprise Linux

When installing Oracle Grid Infrastructure (GI) 12c or a higher version (18c, 19c, 21c), multiple network interfaces (NICs) can be used as the private interconnect to implement inter-node communication load balancing and failover. However, if this configuration is adopted on Oracle Linux (OL) 6 or Red Hat Enterprise Linux (RHEL) 6 or higher, the GI installation prerequisite check shows an error 

Saturday, February 5, 2022

Oracle Cloud Infrastructure Plugin xxx not present for instance ocid1.instance.oc1.*

When you enable or configure monitoring or management features on a compute instance in Oracle Cloud Infrastructure (OCI), you have to enable the relevant Oracle Cloud Agent plugin. 

For example, if you want to use Bastion to connect to your compute instance residing in a private subnet, you have to enable the Bastion plugin on the instance. When enabling the plugin, you may see the following error message,

Plugin Bastion not present for instance <ocid of the instance>

It happens because the Oracle Cloud Agent running on the instance cannot reach OCI services: the services are outside the network where the instance resides, and the instance's private subnet has no route to them. Therefore a Service Gateway or NAT Gateway and proper route table rules are needed in the private subnet.

The issue can be fixed as follows,

1. Create a Service Gateway for "All Services in Oracle Services Network" if it does not exist in the VCN in which the subnet resides. 

* In the Console, confirm you're viewing the instance on which you want to enable the plugin. 

* In the "Instance information" tab, click the name link beside "Virtual cloud network" under section "Instance details"

* On the left side of the page, click Service Gateways.

* Click Create Service Gateway.

* Enter the following values:

  Name: A descriptive name for the service gateway. 
  Create in compartment: Select same compartment where the VCN is created.
  Services: Select "All Services in Oracle Services Network". Currently, there are only two options; the other, "OCI Object Storage", is not applicable here.
  Click "Create Service Gateway".

2. Update routing for the subnet

* Go back to the instance home page and, in the "Instance information" tab, click the name link beside "Subnet" under section "Primary VNIC"

* In "Subnet Information" tab, click the name link beside "Route Table"

* Under "Route Rules", click "Add Route Rules" and enter the following values:

  Target Type: Service Gateway.
  Destination Service: Select "All Services in Oracle Services Network"
  Target Service Gateway: Select the gateway which is created earlier. If it does not show in the list, click "Change Comparment" to choose the compartment where the Service Gateway is created.
  Click "Save".

Wait a few minutes; the issue should be gone and the plugin status should show "Running".

Wednesday, February 2, 2022

Oracle 19c GI installation failed with PRVG-13605 : Query of chrony daemon for its time sources could not be performed

The issue happens on Oracle Linux / Red Hat Enterprise Linux 7/8, on which chronyd is adopted as the network time protocol daemon instead of ntpd. When installing Oracle Grid Infrastructure 19c on these platforms, the installation fails during the prerequisite check with the following messages

Verifying Network Time Protocol (NTP) ...FAILED
  Verifying chrony daemon is synchronized with at least one external time
  source ...FAILED
  rac01: PRVG-13605 : Query of chrony daemon for its time sources could not
         be performed on node "rac01".
         PRVG-2043 : Command "/usr/bin/chronyc sources " failed on node
         "rac01" and produced the following output:
         506 Cannot talk to daemon

  rac02: PRVG-13605 : Query of chrony daemon for its time sources could not
            be performed on node "rac02".
            PRVG-2043 : Command "/usr/bin/chronyc sources " failed on node
            "rac02" and produced the following output:
            506 Cannot talk to daemon

Manually test the status of service chronyd,
[root@rac01]# systemctl status chronyd
chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2022-02-02 17:10:33 EST; 22s ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 43769 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
  Process: 43764 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 43768 (chronyd)
    Tasks: 1 (limit: 3355442)
   Memory: 376.0K
   CGroup: /system.slice/chronyd.service
           └─43768 /usr/sbin/chronyd

[root@rac01 ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? ts1.lab.dbaplus.ca            2   6     3    18   -166us[ -166us] +/-   17ms
^? ts2.lab.dbaplus.ca            2   6     3    17   +427us[ +427us] +/-   25ms

[root@rac01 ~]# su - grid
Last login: Wed Feb  2 20:11:48 EST 2022 on pts/1

[grid@rac01]$ chronyc sources
506 Cannot talk to daemon
The service "chronyd" is running and user root can run the command "chronyc sources" successfully; the command lists all the time sources that chronyd is accessing, namely servers ts1.lab.dbaplus.ca and ts2.lab.dbaplus.ca.

However, the Grid Infrastructure (GI) owner grid cannot run the command; it fails with the message "506 Cannot talk to daemon".

The reason is that the system stops the chrony daemon from listening on UDP port 323 for management connections made by chronyc. This option is set by the following line in the chrony configuration file "/etc/chrony.conf"
cmdport 0
This line can be commented out (add # at the beginning of the line) to enable chronyc management connections. The commented line looks as follows
# cmdport 0
After saving the file "/etc/chrony.conf", restart chronyd service
systemctl restart chronyd
Now the normal (non-root) user grid can run "chronyc sources" successfully.
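The edit can also be scripted. Below is a sketch that comments out the cmdport line with sed; it works on a scratch copy here, whereas on a real system you would edit /etc/chrony.conf as root and then restart chronyd:

```shell
#!/bin/sh
# Work on a scratch copy instead of the real /etc/chrony.conf
conf=/tmp/chrony.conf.test
printf 'server ts1.lab.dbaplus.ca iburst\ncmdport 0\n' > "$conf"

# Comment out any 'cmdport' line so chronyd listens on UDP 323 again;
# -i.bak keeps a backup of the original file
sed -i.bak 's/^[[:space:]]*cmdport/# cmdport/' "$conf"
cat "$conf"
# On a real system, follow with: systemctl restart chronyd
```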

Friday, December 31, 2021

EM 12c/13c Send Email Notification When OMS or Repository Database Down

In an Oracle Enterprise Manager 12c/13c deployment, the OMS is normally responsible for sending all metric alert notifications, but when the OMS or repository database is down, it is not able to do this. Therefore, the Out of Band email notification is introduced as a backup mechanism for sending email notifications when the OMS or the repository database itself is down.

When that happens, the email notification is generated by the agent which monitors the OMS and Repository (oracle_emrep) target. Therefore this method is only applicable to the agent which is installed on the same machine as the OMS (also known as the chained agent). In the case of a multi-OMS setup, the agent on the primary OMS node is responsible for monitoring the oracle_emrep target.

The Out of Band email notification is implemented by configuring agent properties on the OMS host, or on the primary OMS node of a multi-OMS setup. The configuration steps are as follows,

1. Stop agent on the OMS host

  <AGENT_INST_HOME>/bin/emctl stop agent

2. Set agent properties

Add the following lines to the agent property file <AGENT_INST_HOME>/sysman/config/emd.properties
emd_email_gateway=<email_SMTP_server>
emd_email_address=<email_address_of_notification_receivers>
emd_from_email_address=<email_address_of_notification_sender>
Line 2 sets the email address(es) to which the email notification will be sent. You can specify more than one email address separated by commas. For example, in my lab system, it is configured as follows
emd_email_gateway=mail.lab.dbaplus.ca
emd_email_address=dba@lab.dbaplus.ca,sa@lab.dbaplus.ca
emd_from_email_address=oemadmin@lab.dbaplus.ca
3. Start agent

  <AGENT_INST_HOME>/bin/emctl start agent

4. Validate the properties with following commands

  <AGENT_INST_HOME>/bin/emctl getproperty agent -name emd_email_gateway
  <AGENT_INST_HOME>/bin/emctl getproperty agent -name emd_email_address
  <AGENT_INST_HOME>/bin/emctl getproperty agent -name emd_from_email_address

The commands should return the values configured in step 2.
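The property change in step 2 can be scripted as well; this sketch appends the three lines to a scratch file standing in for emd.properties (the gateway and addresses are the lab values from above; substitute your own and point at the real <AGENT_INST_HOME>/sysman/config/emd.properties):

```shell
#!/bin/sh
# Scratch file standing in for <AGENT_INST_HOME>/sysman/config/emd.properties
props=/tmp/emd.properties.test
: > "$props"

# Append the three Out of Band notification properties
cat >> "$props" <<'EOF'
emd_email_gateway=mail.lab.dbaplus.ca
emd_email_address=dba@lab.dbaplus.ca,sa@lab.dbaplus.ca
emd_from_email_address=oemadmin@lab.dbaplus.ca
EOF

# Show what was added
grep '^emd_' "$props"
```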

EM 13c Agent is unreachable as its first severity has not yet come after blackout end

The agent status shows "Agent Unreachable (Post Blackout)" in the Oracle Enterprise Manager 13c console. On the agent home page, it shows the following error message,

  Agent is unreachable as its first severity has not yet come after blackout end.

It is seen in EM 13c and can be fixed by running the following commands on the agent host,

  ${AGENT_INST_HOME}/bin/emctl stop agent
  ${AGENT_INST_HOME}/bin/emctl clearstate agent
  ${AGENT_INST_HOME}/bin/emctl start agent

Thursday, December 30, 2021

RMAN left defunct server processes and NetBackup nborautil processes

RMAN jobs can successfully back up the database to tape with NetBackup, but leave server processes (RMAN channels) and NetBackup nborautil processes running. The processes look like defunct processes.

The issue is observed in Oracle Database 19c RMAN backup jobs with NetBackup 8.1.2 on AIX 7.2; it may also happen on other platforms with NetBackup 8.1.2 or 8.2. When it happens, nborautil processes can be seen running like the following
$ ps -ef | grep nborautil
  oracle 17236328  6489154  31 12:14:13      - 129:18 -bprdtype 2 -use_stdin -client host01 -bprd -noxmloutput -ignorenamespace -jsonoutput 26 -eoichar /usr/openv/netbackup/bin/nborautil
  oracle 32506230  5440300  26 12:47:03      - 113:20 -bprdtype 2 -use_stdin -client host01 -bprd -noxmloutput -ignorenamespace -jsonoutput 26 -eoichar /usr/openv/netbackup/bin/nborautil

$ ps -ef | grep 6489154
  oracle 17236328  6489154  30 12:14:13      - 129:44 -bprdtype 2 -use_stdin -client host01 -bprd -noxmloutput -ignorenamespace -jsonoutput 26 -eoichar /usr/openv/netbackup/bin/nborautil
  oracle  6489154        1   0   Dec 27      - 11:47 oracleORCL (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
$
$ ps -ef | grep 5440300
  oracle 32506230  5440300  24 12:47:03      - 113:44 -bprdtype 2 -use_stdin -client host01 -bprd -noxmloutput -ignorenamespace -jsonoutput 26 -eoichar /usr/openv/netbackup/bin/nborautil
  oracle  5440300        1   0   Dec 28      - 10:19 oracleORCL (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
Two NetBackup nborautil processes seem defunct; their parent process ids are 6489154 and 5440300 respectively. Both parent processes are server processes of Oracle database instance ORCL. Check the session status of the server processes,
sys@ORCL> select s.sid,s.serial#,s.username,s.machine,s.program,s.event
  2  from v$session s, v$process p
  3  where s.paddr=p.addr and p.spid in (5440300,6489154);

  SID    SERIAL# USERNAME  MACHINE  PROGRAM                   EVENT
----- ---------- --------- -------- ------------------------- ------------------
  300      44503 SYS       host01   rman@host01 (TNS V1-V3)   Backup: MML shutdown
  178      17334 SYS       host01   rman@host01 (TNS V1-V3)   Backup: MML shutdown
The server processes are waiting on event "Backup: MML shutdown", that is, waiting for the NetBackup nborautil process to complete, and nborautil never exits, though Veritas Support claims it only takes longer than expected rather than hanging. In my case, every RMAN job left new nborautil processes running, and eventually the defunct processes used up all CPU resources and hung the system.

Veritas Support claims that this is a bug of NetBackup version 8.1.2/8.2. The issue occurs on all Oracle backups using RMAN scripts if the database is comprised of many datafiles.

The Oracle backups have an unusually long delay between when the last child job completes and when the parent job completes. The delay may extend to several hours, even days.

No performance problems or slow behaviour are observed prior to the completion of the data transfer. The delay occurs only during the metadata cataloging after the data transfer jobs have completed.

Smaller databases and those with fewer datafiles may not experience this delay.

So far, Veritas has not released any fix for it. If possible, upgrade NetBackup to a higher version (e.g. 9). As a workaround, reducing the number of datafiles backed up in a single job may help.

Friday, December 10, 2021

Mount DVD/CDROM with command on Linux CentOS / Red Hat / Oracle Linux / Ubuntu

Minimal install is an important security practice when deploying Linux: only essential packages are installed, so a GUI is usually not available. It is therefore helpful to know how to mount a CDROM/DVD when you need to install packages/software from DVD.

Different Linux distributions and versions may have different commands for this. Here, I demonstrate a way which works on most Linux distributions/versions. It includes three steps,

* Identify DVD/CDROM device name (drive name)
* Create mount point (directory)
* Mount the DVD/CDROM on created mount point

Step 1. Identify DVD/CDROM device name (drive name)

This is the most important step, and the only challenge in mounting the drive. Many commands/utilities can display the DVD/CDROM device name, such as blkid, lsscsi and dmesg, but they are not always available. Most of the time, I use the following command

  cat /proc/sys/dev/cdrom/info

It lists the information of the DVD/CDROM drive, and includes drive name. It proves to be working, at least, on CentOS, Red Hat Enterprise Linux, Oracle Linux, Ubuntu.

Sample output,
[root@RHEL4]# cat /proc/sys/dev/cdrom/info
CD-ROM information, Id: cdrom.c 3.20 2003/12/17

drive name:             sr1     sr0
drive speed:            1       32
drive # of slots:       1       1
Can close tray:         1       1
Can open tray:          1       1
Can lock tray:          1       1
Can change speed:       0       1
Can select disk:        0       0
Can read multisession:  1       1
Can read MCN:           1       1
Reports media changed:  1       1
Can play audio:         1       1
Can write CD-R:         0       0
Can write CD-RW:        0       0
Can read DVD:           0       1
Can write DVD-R:        0       0
Can write DVD-RAM:      0       0
Can read MRW:           0       1
Can write MRW:          0       1
Can write RAM:          0       1
In the example, the server has two DVD/CDROM drives, sr1 and sr0. Their full device paths are /dev/sr1 and /dev/sr0 respectively.
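The drive names can also be extracted programmatically; this sketch parses a hard-coded sample of the output above with awk (on a real host you would read /proc/sys/dev/cdrom/info directly):

```shell
#!/bin/sh
# Sample 'drive name' line from /proc/sys/dev/cdrom/info;
# on a real host: awk -F: '...' /proc/sys/dev/cdrom/info
info='drive name:             sr1     sr0'

# Split the part after the colon on whitespace and print full device paths
echo "$info" | awk -F: '/^drive name/ {
  n = split($2, a, " ")
  for (i = 1; i <= n; i++) print "/dev/" a[i]
}'
```

The output lists /dev/sr1 and /dev/sr0, one per line, ready to pass to the mount command in step 3.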

Step 2. Create mount point

A mount point is a directory on which you are going to mount the DVD/CDROM. You can create your preferred directory with the command

mkdir -p <directory>

For example
[root@RHEL4]# mkdir -p /media/cdrom
Step 3. Mount the DVD/CDROM on the mount point with the command,

  mount <device name> <mount point>

For example,
[root@RHEL4]# mount /dev/sr0 /media/cdrom
mount: block device /dev/sr0 is write-protected, mounting read-only

Sunday, December 5, 2021

Oracle 19c Installation on Red Hat Enterprise Linux 8 / Oracle Linux 8 – [INS-08101] Unexpected error while executing the action at state: ‘supportedOSCheck’

When installing Oracle 19c Grid Infrastructure (GI) or 19c Database (runInstaller), the following error is returned,
[INS-08101] Unexpected error while executing the action at state: 'supportedOSCheck'
Error details,
Cause - No additional information available.

Action - Contact Oracle Support Services or refer to the software manual.

Summary
 - java.lang.NullPointerException
The reason is that Oracle 19c is certified on Red Hat Enterprise Linux 8 / Oracle Linux 8 starting with 19.7; the base release 19.3 is not certified.

The error can be fixed by downloading Release Update (RU) 19.7 or higher and starting installation with applyRU option as following,
./gridSetup.sh -applyRU <GI_RU_UNZIP_DIR>    - Install Grid Infrastructure
./runInstaller -applyRU <GI_RU_UNZIP_DIR>    - Install Database
Note: Use the Grid Infrastructure RU instead of the Database RU when installing the database (runInstaller); it applies both the Database Release Update and the OCW Release Update, and the OCW RU includes the fix for the installation error.

If you cannot download the required RU, as a workaround, you can export the variable
export CV_ASSUME_DISTID=OEL8
Then restart installation gridSetup.sh (for GI) or runInstaller (for Database).

You may ask why I use OEL8 as the value of the variable CV_ASSUME_DISTID. No reason; I just picked it randomly. No matter what the value is, even if the value is null (empty), the installation is fixed.

The root cause of the installation error is that the installation utility (gridSetup.sh, runInstaller) is a Java program. When the OS is not supported by the installation, it tries to read the value of the variable CV_ASSUME_DISTID. The variable can be an OS environment variable, as set above with the export command, or a variable/value pair in the file $ORACLE_HOME/cv/admin/cvu_config. By default, the variable line is commented out in that file and the OS variable is not set, so the Java program cannot find the variable and raises the exception java.lang.NullPointerException. It should be considered a bug.
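Alternatively, the variable can be set in cvu_config itself. This sketch uncomments and sets the line with sed against a scratch copy; the commented placeholder value OEL5 is illustrative, and on a real system the file is $ORACLE_HOME/cv/admin/cvu_config:

```shell
#!/bin/sh
# Scratch file standing in for $ORACLE_HOME/cv/admin/cvu_config;
# the placeholder value is illustrative
cfg=/tmp/cvu_config.test
printf '# CV_ASSUME_DISTID=OEL5\n' > "$cfg"

# Uncomment the line and set the value (any value works, per the text above)
sed -i.bak 's/^#[[:space:]]*CV_ASSUME_DISTID=.*/CV_ASSUME_DISTID=OEL8/' "$cfg"
cat "$cfg"
```

After editing the real cvu_config, restart gridSetup.sh or runInstaller; the installer then finds the variable and the NullPointerException no longer occurs.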

Wednesday, December 1, 2021

Use oradebug To List All Internal Events Set In Oracle Database

Oracle internal events can be set at both session level and system level as follows,

A. Session level
Session level events only affect the current session. Users who have the ALTER SESSION privilege can set events at session level with the SQL command,

    alter session set events '<event context>' ;

For example
# -- Set sql_trace (event 10046) to level 12
user01@orcl> alter session set events '10046 trace name context forever, level 12';

# -- Unset/disable sql_trace (event 10046)
user01@orcl> alter session set events '10046 trace name context off';

# -- Set event for ORA-1652: unable to extend temp segment by %s in tablespace %s
user01@orcl> alter session set events '1652 trace name errorstack';

# -- Unset event for ORA-1652
user01@orcl> alter session set events '1652 trace name context off';
B. System level
Events set at system level affect all database sessions. Users who have both the ALTER SYSTEM and ALTER SESSION privileges can set events at system level with the SQL command,

    alter system set events '<event context>';

For example
# -- Set sql_trace (event 10046) to level 12
user01@orcl> alter system set events '10046 trace name context forever, level 12';

# -- Unset/disable sql_trace (event 10046)
user01@orcl> alter system set events '10046 trace name context off';

# -- Set event for ORA-1652: unable to extend temp segment by %s in tablespace %s
user01@orcl> alter system set events '1652 trace name errorstack';

# -- Unset event for ORA-1652
user01@orcl> alter system set events '1652 trace name context off';
Note: ALTER SYSTEM SET EVENTS sets the events at system level for all sessions, as well as at session level for the current session. Therefore, the user has to be granted both the ALTER SYSTEM and ALTER SESSION privileges to run this command.

In order to find out / list all events set at system level or session level, the following steps are needed

1. Run SQL*Plus and connect to database as sysdba

The oradebug command will be run in SQL*Plus.

2. Identify the session. This step is only needed if you want to list session level events set in sessions other than the current session

Find out the session's Oracle process identifier (PID), operating system process identifier (SPID) and Oracle process name (PNAME) with the following SQL,
select p.pid, p.spid, p.pname, s.username, s.machine, s.program
  from v$process p, v$session s 
 where p.addr = s.paddr;
Sample output
PID SPID   PNAME  USERNAME   MACHINE         PROGRAM
--- ------ ------ ---------- --------------- ----------------------
 45 24432         USER01     workstation01   sqlplus.exe
The sample output shows that USER01 is running sqlplus.exe on machine workstation01; the session's PID is 45 and SPID is 24432.

3. Attach to the identified session

A. If you want to list system level events or session level events in the current session, run the oradebug command,

   oradebug setmypid

For example
sys@orcl> oradebug setmypid
Statement processed.
B. If you want to list events set in a session other than the current session, run one of the following oradebug commands,

   oradebug setorapid      <PID>
   oradebug setospid       <SPID>
   oradebug setorapname    <PNAME>

For example, attach to the session found in step 2
sys@orcl> oradebug setorapid 45
sys@sys@orcl> oradebug setorapid 45
Oracle pid: 45, Unix process pid: 24432, image: oracle@dbhost01
Or
sys@orcl> oradebug setospid 24432
Oracle pid: 45, Unix process pid: 24432, image: oracle@dbhost01
Here, oradebug setorapname cannot be used to attach the session because the process of the session does not have PNAME (PNAME is null). 

4. List the events

A. List events set at system level, run command

   oradebug eventdump system   -- 10g or newer
   oradebug dump events 4      -- 8i/9i

Sample output
sys@orcl> oradebug eventdump system
1652 trace name errorstack
The trace is enabled for ORA-1652 at system level.

B. List events set at session/process level, run command

   oradebug eventdump session   -- 10g or newer
   oradebug dump events 1       -- 8i/9i for session
   oradebug eventdump process   -- 10g or newer
   oradebug dump events 2       -- 8i/9i for process

Sample output
sys@orcl> oradebug eventdump process
1652 trace name errorstack
sys@orcl> oradebug eventdump session
sql_trace level=12
1652 trace name errorstack
The trace is set for ORA-1652 at both process and session level, and sql_trace is set at session level with trace level 12.

Tuesday, November 30, 2021

Identify SQL statements consuming temporary tablespace

You may have to find out which SQL statements are using the temporary tablespace when you get storage pressure in the temporary tablespace.

Monday, November 15, 2021

Oracle 19c root.sh not working while gridSetup.sh switchGridHome to software-only installation

When the command "gridSetup.sh -switchGridHome" is used to perform Oracle 19c Grid Infrastructure out-of-place (OOP) patching, it prompts to run root.sh on each cluster node. root.sh switches the Grid Home from the old one (without the new patches) to the new one (with the new patches applied) on each node, and also patches the Grid Infrastructure Management Repository (GIMR) database on the last node of the cluster.

Unfortunately, root.sh only works on the first node, and not on the others, if you deploy the software/patches in the following steps,

Step 1. Install new GI home with software only option with command,

   ./gridSetup.sh -applyRU <RU_Patch_Location> -applyOneOffs <comma_separated_one-off_patches_location>

At the end of the installation, you will be asked to run root.sh on each node. This software-only version of root.sh does not touch the old GI home or any running cluster processes (listener, ASM, etc.). The new home is patched with the new patches (specified by the command options -applyRU and/or -applyOneOffs).

Step 2. Switch GI from old home to new patched home with command,

   ./gridSetup.sh -switchGridHome

This installation/configuration utility creates a new version of rootconfig.sh under the directory "$ORACLE_HOME/crs/config" with the home-switching and patching options enabled on the local node (known as the first node); the affected options are as follows,
   Software-only version            New version
   ---------------------------      --------------------------
   PATCH_HOME=false                 PATCH_HOME=true
   MOVE_HOME=false                  MOVE_HOME=true
   TRANSPARENT_MOVE_HOME=false      TRANSPARENT_MOVE_HOME=true
   SWONLY_MULTINODE=true            SWONLY_MULTINODE=false
The rootconfig.sh is called by root.sh. Technically, the utility gridSetup.sh should copy this new version of rootconfig.sh to all the other nodes. However, that does not always happen, and you will see nothing changed by root.sh on nodes other than the first node, where the utility is running. This issue was experienced when the new home was patched with Oracle Release Update 19.13.0.0.211019.

If root.sh does not work on the other nodes, you can fix it by manually copying the new version of rootconfig.sh from the first node to the other nodes.

Note: Always make a backup of the existing rootconfig.sh before overriding it with the new rootconfig.sh copied from the first node. It is also good practice to compare the two versions of rootconfig.sh and verify what the differences are.

Saturday, November 6, 2021

EM 12c/13c Agent Unreachable (Agent is running but is currently not ready to accept client requests)

EM 12c/13c creates Agent Unreachable alerts as follows:
Host=host01.dbaplus.ca 
Target type=Agent 
Target name=host01.dbaplus.ca:3872 
Categories=Availability 
Message=Agent Unreachable (REASON = The agent is running but is currently not ready to accept client requests). Host is reachable. 
Severity=Critical 
Event reported time=Nov 6, 2021 12:36:48 PM EDT 
Operating System=SunOS
Platform=sparc
Event Type=Target Availability 
Event name=Status 
Availability status=Agent Unreachable
Update Details:
Agent Unreachable (REASON = The agent is running but is currently not ready to accept client requests). Host is reachable.
It usually happens when the agent is monitoring many targets or is heavily loaded. The agent cannot respond to the OMS heartbeat, which causes OMS to generate Agent Unreachable alerts.

Check the value of the agent property MaxInComingConnections with the following command
${AGENT_INSTANCE_HOME}/bin/emctl getproperty agent -name MaxInComingConnections
Sample output if the property is configured,
$ emctl getproperty agent -name MaxInComingConnections
Oracle Enterprise Manager Cloud Control 13c Release 5  
Copyright (c) 1996, 2021 Oracle Corporation.  All rights reserved.
MaxInComingConnections=50
If the property is not configured, the output looks like following,
$ emctl getproperty agent -name MaxInComingConnections
Oracle Enterprise Manager Cloud Control 13c Release 5  
Copyright (c) 1996, 2021 Oracle Corporation.  All rights reserved.
MaxInComingConnections is not a valid configuration property
The property MaxInComingConnections can be configured by adding following line in agent configuration file "${AGENT_INSTANCE_HOME}/sysman/config/emd.properties",
MaxInComingConnections=200
If the property already exists, you can find this line (the value may be different depending on your system) in the file; there is no need to add a new line, just update the value to a bigger number (e.g. 200).

Note: it is best practice to make a backup before editing the configuration file.

After saving the change, restart the agent.
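The edit can be scripted. Below is a sketch run against a sample file; in practice the file is ${AGENT_INSTANCE_HOME}/sysman/config/emd.properties, and the agent must be restarted afterwards:

```shell
EMD=emd.properties                                   # stand-in for the real emd.properties path
printf 'agentTZRegion=America/Toronto\nMaxInComingConnections=50\n' > "$EMD"  # sample content
cp "$EMD" "$EMD.bak"                                 # back up before editing
if grep -q '^MaxInComingConnections=' "$EMD"; then
    # property exists: update the value in place
    sed -i 's/^MaxInComingConnections=.*/MaxInComingConnections=200/' "$EMD"
else
    # property missing: append it
    echo 'MaxInComingConnections=200' >> "$EMD"
fi
grep '^MaxInComingConnections=' "$EMD"
# after the change, restart the agent:
#   ${AGENT_INSTANCE_HOME}/bin/emctl stop agent && ${AGENT_INSTANCE_HOME}/bin/emctl start agent
```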

Friday, November 5, 2021

Javascript/CSS: Add Copy-to-Clipboard Button in HTML Code Example

Tired of selecting the text before copying it, I decided to add a COPY button to every code example block in my blog. The button copies the whole example block to the clipboard. I achieved it in 4 steps.

Wednesday, November 3, 2021

Javascript/CSS: Add Line Number to HTML Source Code Block

When I shared source code in my blog before, I used to reference it by highlighting the code lines with HTML tags. That was not an issue until I had many references to different lines of a long code block. It could take too much of my time to highlight or mark the references when posting an article. Therefore, I added line numbers to the source code in my posts. A line-numbered code block looks like,

Sunday, October 24, 2021

EM 12c/13c The listener is down: Failed to run lsnrctl

The listener is up and running. However, the listener target shows as Down in the Oracle Enterprise Manager Cloud Control Console with the incident message as follows,
The listener is down: Failed to run lsnrctl.

Friday, October 1, 2021

Change SYS password in Oracle Data Guard

In an Oracle Data Guard configuration, redo transport uses Oracle Net sessions to transport redo data. These redo transport sessions are authenticated using either the Secure Sockets Layer (SSL) protocol or a remote login password file. Most of the time, a remote login password file is adopted because of the difficulty of meeting the SSL authentication requirements.
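A minimal sketch of the password-file case (names and paths are examples): change the password on the primary, then make sure the standby's password file matches. From 12.2 onward, a password file change on the primary is automatically propagated to the standby through redo; on earlier releases the file must be copied manually.

```sql
-- On the primary:
alter user sys identified by "NewPassword#1";

-- Pre-12.2 only: copy the primary's password file to each standby, e.g.
-- $ scp $ORACLE_HOME/dbs/orapwORCL standby01:$ORACLE_HOME/dbs/orapwORCL
```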

Monday, August 30, 2021

Database startup failed with ORA-600 [dbgripmg_2: infinite init action] [ADR_CONTROL_AUX]

Oracle database startup failed with ORA-00600 as following
SQL> startup
ORACLE instance started.

Total System Global Area 8589878792 bytes
Fixed Size                 12854792 bytes
Variable Size            5402263552 bytes
Database Buffers         3154116608 bytes
Redo Buffers               20643840 bytes
ORA-00600: internal error code, arguments: [dbgripmg_2: infinite init action],
[11], [ADR_CONTROL_AUX], [], [], [], [], [], [], [], [], []

Errors ORA-00700 [dbgrmblcp_corrupt_page] / ORA-00600 [dbgrmblgp_get_page_1] can be found in the alert log as follows,
Dumping diagnostic data in directory=[cdmp_20210830103319], requested by (instance=1, osid=85656173 (M000)), summary=[incident=186167].
2021-08-30 10:33:19.773000 -04:00
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Errors in file /u01/app/oracle/diag/rdbms/db01/DB01/trace/DB01_m000_11797188.trc  (incident=186169) (PDBNAME=CDB$ROOT):
ORA-00700: soft internal error, arguments: [dbgrmblcp_corrupt_page], [/u01/app/oracle/diag/rdbms/db01/DB01/metadata/INCIDENT.ams], [11], [], [], [], [], [], [], [], [], []
Incident details in: /u01/app/oracle/diag/rdbms/db01/DB01/incident/incdir_186169/DB01_m000_11797188_i186169.trc
Errors in file /u01/app/oracle/diag/rdbms/db01/DB01/trace/DB01_ora_59376292.trc  (incident=186150) (PDBNAME=CDB$ROOT):
ORA-00700: soft internal error, arguments: [dbgrmblcp_corrupt_page], [/u01/app/oracle/diag/rdbms/db01/DB01/metadata/ADR_CONTROL.ams], [11], [], [], [], [], [], [], [], [], []
ORA-00700: soft internal error, arguments: [dbgrmblcp_corrupt_page], [/u01/app/oracle/diag/rdbms/db01/DB01/metadata/ADR_CONTROL.ams], [11], [], [], [], [], [], [], [], [], []
Incident details in: /u01/app/oracle/diag/rdbms/db01/DB01/incident/incdir_186150/DB01_ora_59376292_i186150.trc
Dumping diagnostic data in directory=[cdmp_20210830103320], requested by (instance=1, osid=85656173 (M000)), summary=[incident=186168].
2021-08-30 10:33:21.211000 -04:00
Errors in file /u01/app/oracle/diag/rdbms/db01/DB01/trace/DB01_m000_11797188.trc  (incident=186170) (PDBNAME=CDB$ROOT):
ORA-00700: soft internal error, arguments: [dbgrmblcp_corrupt_page], [/u01/app/oracle/diag/rdbms/db01/DB01/metadata/INCIDENT.ams], [11], [], [], [], [], [], [], [], [], []
ORA-00700: soft internal error, arguments: [dbgrmblcp_corrupt_page], [/u01/app/oracle/diag/rdbms/db01/DB01/metadata/INCIDENT.ams], [11], [], [], [], [], [], [], [], [], []
Errors in file /u01/app/oracle/diag/rdbms/db01/DB01/trace/DB01_m000_11797188.trc  (incident=186171) (PDBNAME=CDB$ROOT):
ORA-00600: internal error code, arguments: [dbgrmblgp_get_page_1], [11], [0], [0], [], [], [], [], [], [], [], []
ORA-00700: soft internal error, arguments: [dbgrmblcp_corrupt_page], [/u01/app/oracle/diag/rdbms/db01/DB01/metadata/INCIDENT.ams], [11], [], [], [], [], [], [], [], [], []
Incident details in: /u01/app/oracle/diag/rdbms/db01/DB01/incident/incdir_186171/DB01_m000_11797188_i186171.trc
Errors in file /u01/app/oracle/diag/rdbms/db01/DB01/trace/DB01_ora_59376292.trc  (incident=186151) (PDBNAME=CDB$ROOT):
ORA-00600: internal error code, arguments: [dbgrmblgp_get_page_1], [11], [0], [0], [], [], [], [], [], [], [], []
ORA-00700: soft internal error, arguments: [dbgrmblcp_corrupt_page], [/u01/app/oracle/diag/rdbms/db01/DB01/metadata/ADR_CONTROL.ams], [11], [], [], [], [], [], [], [], [], []
Incident details in: /u01/app/oracle/diag/rdbms/db01/DB01/incident/incdir_186151/DB01_ora_59376292_i186151.trc

ADRCI is accessing .ams files which are corrupted. In the example alert log, the corrupted files are INCIDENT.ams and ADR_CONTROL.ams under the $ADR_HOME metadata directory. The corrupted .ams files may be different in different scenarios.

The solution is to delete the corrupted .ams files and restart the database.
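A sketch of the cleanup, demonstrated here against dummy files; the real files sit under the $ADR_HOME metadata directory (e.g. /u01/app/oracle/diag/rdbms/db01/DB01/metadata), and moving them aside is safer than deleting them outright:

```shell
META=./metadata                                      # stand-in for $ADR_HOME/metadata
mkdir -p "$META"
touch "$META/INCIDENT.ams" "$META/ADR_CONTROL.ams"   # dummy corrupted files for illustration
# Move the corrupted .ams files aside, then restart the database;
# the files are re-created as needed.
for f in INCIDENT.ams ADR_CONTROL.ams; do
    mv "$META/$f" "$META/$f.corrupt"
done
ls "$META"
```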

Wednesday, July 7, 2021

AQ Queue Monitor Does Not Change Message MSG_STATE from WAIT to READY on RAC

In a RAC database, the Advanced Queuing (AQ) message MSG_STATE (a column of view AQ$<QUEUE_TABLE>) stays at the value 'WAIT' and is never changed to 'READY'. It usually happens after the database was shut down with the immediate/abort option, followed by database patching/upgrading.

Friday, June 25, 2021

EM 12c/13c Update Oracle Home Path of Targets with SQL

As out-of-place patching can dramatically minimize database downtime, especially when multiple databases run out of the same home (it does not require all databases to be shut down at the same time), more and more DBAs adopt this method. However, if you are patching a home which hosts multiple databases, you may find it frustrating to update each database's Oracle Home property in Oracle Enterprise Manager by clicking through different windows and changing them one by one. The first question you will ask is how to update all the databases at one time. The answer is to run SQL (PL/SQL) in the EM repository database.
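As a starting point, the targets still pointing at the old home can be listed from the repository. A sketch, run as SYSMAN; the property name 'OracleHome' and the path are assumptions to verify in your repository:

```sql
select target_name, target_type, property_value as oracle_home
from   mgmt$target_properties
where  property_name  = 'OracleHome'
and    property_value = '/u01/app/oracle/product/19.0.0/dbhome_old';
```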

Wednesday, June 23, 2021

EM 12c/13c Change Lifecycle Status with SQL

The Lifecycle Status property of EM targets is often used to prioritize the notifications of incident rules. For example, the DBA will be paged when a database whose Lifecycle Status is 'Production' crashes, and only emailed for a 'Test' database. This post shows how to use SQL scripts to find targets with different Lifecycle Status values and how to change the Lifecycle Status.
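For example, the current values can be listed from the repository. A sketch, run as SYSMAN; the property name 'orcl_gtp_lifecycle_status' is an assumption to verify against your repository version:

```sql
select target_name, target_type, property_value as lifecycle_status
from   mgmt$target_properties
where  property_name = 'orcl_gtp_lifecycle_status'
order  by property_value, target_name;
```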

Tuesday, June 22, 2021

EM 12c/13c How to find newly discovered targets with SQL

When an agent is deployed to a new host, or new targets are installed or created on existing hosts, Oracle Enterprise Manager (EM) Cloud Control can discover the targets automatically. However, EM does not promote the newly discovered targets automatically.
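As a sketch of the idea, unpromoted targets can be queried from the repository. The table, column names and status codes below are assumptions to verify against your repository version before use:

```sql
select entity_name, entity_type, host_name, manage_status, promote_status
from   sysman.em_manageable_entities
where  manage_status <> 2     -- assumed: 2 = fully managed (promoted)
order  by entity_type, entity_name;
```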

Sunday, June 20, 2021

Script: Run datapatch against all running instances in parallel

This script can be used to run datapatch against all currently running instances in parallel to apply SQL patches after the patches have been applied to the Oracle homes. It is helpful for post-patching operations. It works for instances running out of different Oracle homes, and for homes with different owners.

The script accepts one optional parameter, the log file path; it saves log files under /tmp if no parameter is supplied when the script is started.

In order for the script to succeed, please be aware of the following limitations,

1. The script has only been tested on Linux and AIX; it does not work on Solaris.

2. Only root or the Oracle database home owner is supposed to run this script. If the instances are running out of different Oracle homes owned by different OS users, the root user is recommended. Otherwise, the instances running out of an Oracle home whose owner is different from the current user will be excluded.

3. The Oracle database home is 12c or higher, which supports datapatch.

4. The script runs the following command as root to retrieve the Oracle home path
   /bin/ls
   Therefore, if the Oracle home owner (normally oracle) runs this script, sudo has to be configured to grant the user the privilege to run '/bin/ls' as root without being asked for a password.

Friday, June 18, 2021

Install PostgreSQL on RHEL/OL/CentOS 6/7/8

There are many ways to install PostgreSQL on a Linux box. Here, I am going to use yum to install different versions of PostgreSQL on Red Hat Enterprise Linux (RHEL) 7/8; it also works for Oracle Linux (OL) / CentOS 7/8.
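For example, on RHEL/OL 8, PostgreSQL ships as a yum module. A sketch; the version stream 13 is just an example, so list the available streams first:

```shell
sudo yum module list postgresql                # show available version streams
sudo yum -y module enable postgresql:13        # pick a stream
sudo yum -y install postgresql-server
sudo postgresql-setup --initdb                 # initialize the data directory
sudo systemctl enable --now postgresql         # start and enable the service
```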

Thursday, June 17, 2021

Java Stored Procedure failed with java.lang.OutOfMemoryError

An Oracle database Java stored procedure failed with "java.lang.OutOfMemoryError"; the error stack looks like
ORA-29532: Java call terminated by uncaught Java exception: java.lang.OutOfMemoryError
ORA-06512: at "USER01.MYJAVAPROC", line 28
ORA-06512: at line 7

This error is thrown when there is insufficient space to allocate an object in the Java heap. In this case, the Java garbage collector cannot make space available to accommodate a new object, and the heap cannot be expanded further.
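One commonly suggested workaround (verify against your release before relying on it) is to raise the per-session Java heap ceiling through oracle.aurora.vm.OracleRuntime.setMaxMemorySize. A sketch, where the wrapper function name is hypothetical:

```sql
create or replace function set_java_max_memory (n number) return number
as language java
name 'oracle.aurora.vm.OracleRuntime.setMaxMemorySize(long) return long';
/
-- e.g. raise the ceiling for the current session to 512 MB
select set_java_max_memory(512*1024*1024) from dual;
```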

Wednesday, June 16, 2021

Oracle 19.11 roothas.sh failed with "Out of memory" on AIX

When applying Oracle GI Release Update 19.11.0.0.210420 on AIX, command "roothas.sh -postpatch" failed with "Out of memory" as following
[root@host01]# /u01/app/oracle/product/19.0.0/grid_1/crs/install/roothas.sh -postpatch
Using configuration parameter file: /u01/app/oracle/product/19.0.0/grid_1/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/host01/crsconfig/hapatch_2021-06-15_01-53-27PM.log
Out of memory!
Out of memory!
Out of memory!
/u01/app/oracle/product/19.0.0/grid_1/crs/install/roothas.sh[137]: 7930494 Segmentation fault(coredump)
The command '/u01/app/oracle/product/19.0.0/grid_1/perl/bin/perl -I/u01/app/oracle/product/19.0.0/grid_1/perl/lib -I/u01/app/oracle/product/19.0.0/grid_1/crs/install -I/u01/app/oracle/product/19.0.0/grid_1/xag /u01/app/oracle/product/19.0.0/grid_1/crs/install/roothas.pl -postpatch' execution failed

Tuesday, June 15, 2021

Configure yum with proxy server on RHEL/OL/Fedora

When using yum to maintain packages on hosts running Red Hat Enterprise Linux (RHEL), Oracle Linux (OL) or Fedora, you may have to configure the proxy option if the host is behind a firewall.

In order to do that, add following lines to yum configuration file /etc/yum.conf

proxy=http://<proxy server>:<port>
proxy_username=<user name>
proxy_password=<password>

Here, 

  <proxy server> is the host name or IP address of the proxy server
  <user name>    is the user name, if the proxy server requires authentication
  <password>     is the password, if the proxy server requires authentication

File /etc/yum.conf  example
gpgcheck=1
installonly_limit=3
clean_requirements_on_remove=True
best=True
skip_if_unavailable=False
proxy=http://proxy.dbaplus.ca:8080
proxy_username=user01
proxy_password=user01pwd

Thursday, June 3, 2021

How to Permanently Change PowerShell Prompt

 The PowerShell command prompt indicates that PowerShell is ready to run a command,
PS C:\>

It is determined by the built-in Prompt function and can be customized by running the following command to re-define the Prompt function,

  function prompt {"<Script block>"}

Here, <Script block> is a script block which determines/generates the prompt value (a character string). For example, the following makes the prompt include the current date and time
PS C:\>
PS C:\> function Prompt {"$(Get-Date)> "}
06/03/2021 11:39:23>
06/03/2021 11:39:25>

The change is only valid for the current session; the prompt in a new session is still the default value. In order to keep the new prompt for all sessions, you have to create your own Prompt function and save it in your PowerShell profile as follows,

1. Find out your PowerShell profile file with "$profile"
PS C:\> $profile
C:\Users\admin\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
PS C:\>

2. Add customized Prompt function to profile

If the file or its directory does not exist, you can create them manually, then add the customized Prompt function to the file. For example, add the following line
function Prompt { "PS [" + ${ENV:USERNAME} + "@" + ${ENV:COMPUTERNAME} + "]> "}

The prompt will be in the format "PS [user-name@computer-name]> " in all new PowerShell sessions.
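The two steps can also be done from the prompt itself; a sketch using standard cmdlets:

```powershell
# Create the profile file (and its directory) if it does not exist
if (-not (Test-Path $profile)) { New-Item -Path $profile -ItemType File -Force }
# Append the customized Prompt function
Add-Content -Path $profile -Value 'function Prompt { "PS [" + ${ENV:USERNAME} + "@" + ${ENV:COMPUTERNAME} + "]> "}'
```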

Sunday, May 16, 2021

Grid Infrastructure 12.2 restore OCR, Voting File and mgmtdb when disk group corrupted

For some reason, Oracle desupported the placement of OCR and voting files directly on a shared file system starting with Grid Infrastructure 12.2, until rescinding the desupport for Standalone Clusters in 19.3. Therefore, when GI 12.2 is installed, the OCR, voting files, and OCR backup location are, by default, configured in an ASM disk group, and most likely in the same disk group. When that disk group cannot be mounted because of any issue, the cluster cannot be brought up anymore.

Technically, GI would have to be re-configured as if it were a new installation, which could be a big job. However, it may not be that bad. Although 12.2 does not allow the OCR backup location to be outside an ASM disk group, a DBA can still copy the OCR backup file from the ASM disk group to a file system with the command 'asmcmd cp'. I am going to demonstrate how to maximize the chance of bringing the cluster back up without rebuilding/reinstalling it.

Sunday, May 9, 2021

Windows How to log all output on the console to text file

The Windows Command Prompt console does not have a built-in facility to log console output to a file. In order to implement logging, PowerShell has to be used instead of the normal command console.

To enable logging, run the following command at the PowerShell prompt,

  Start-Transcript [-Path] "<file-name>" [-Append]

Here, 

  -Path parameter is used to specify the file "<file-name>", where "<file-name>" is the full path of the log file in which output messages will be saved

  -Append indicates that the logging text will be appended to the end of an existing file instead of overwriting it.

To stop logging, run the command

  Stop-Transcript

Note: PowerShell can be started on Windows by clicking the "Windows PowerShell" application or by running the command "PowerShell" in a normal "Command Prompt" console, but DO NOT start PowerShell in "Windows Terminal". If PowerShell is started from "Windows Terminal", Start-Transcript may not be able to log everything. For example, the output of commands that are not Windows built-ins (e.g. sqlplus) cannot be completely logged.
  
Example

PS> Start-Transcript -Path "C:\temp\test.log"
Transcript started, output file is C:\temp\test.log
PS>
PS> sqlplus system/oracle@orcl

SQL*Plus: Release 12.2.0.1.0 Production on Sun May 9 20:30:15 2021

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Tue Apr 13 2021 16:46:29 -04:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

system@orcl> select host_name from v$instance;

HOST_NAME
----------------------------------------------------------------
host01

system@orcl> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
PS>
PS> Stop-Transcript
Transcript stopped, output file is C:\temp\test.log

The content of file C:\temp\test.log

**********************
Windows PowerShell transcript start
Start time: 20210509202857
Username: dbaplus
RunAs User: dbaplus
Configuration Name: 
Machine: wkstn01 (Microsoft Windows NT 10.0.19042.0)
Host Application: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Process ID: 15732
PSVersion: 5.1.19041.906
PSEdition: Desktop
PSCompatibleVersions: 1.0, 2.0, 3.0, 4.0, 5.0, 5.1.19041.906
BuildVersion: 10.0.19041.906
CLRVersion: 4.0.30319.42000
WSManStackVersion: 3.0
PSRemotingProtocolVersion: 2.3
SerializationVersion: 1.1.0.1
**********************
Transcript started, output file is C:\temp\test.log
PS> sqlplus system/oracle@orcl

SQL*Plus: Release 12.2.0.1.0 Production on Sun May 9 20:30:15 2021

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Tue Apr 13 2021 16:46:29 -04:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

system@orcl> select host_name from v$instance;

HOST_NAME
----------------------------------------------------------------
host01

system@orcl> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
PS> Stop-Transcript
**********************
Windows PowerShell transcript end
End time: 20210509203043
**********************

Wednesday, May 5, 2021

OEM 12c/13c Database Target Discovered and Named with host name as suffix

When EM discovers database targets, it generates a default target name for the database in one of the following formats,

  db_unique_name.db_domain   if both parameters db_unique_name and db_domain are set
  db_unique_name             if parameter db_unique_name is set, but db_domain is not set
  db_name.db_domain          if parameter db_unique_name is not set, but db_domain is set
  db_name                    if neither db_unique_name nor db_domain is set

The EM discovery script retrieves these parameter values from the parameter file (pfile or spfile) of the database, not from the running instance. If the discovery process has an issue locating or processing the parameter file, EM will name the database target in the format,

  <sid>_<hostname>
  
Therefore, when you find a newly discovered database target named in this format, it means the EM agent discovery script has an issue with the parameter file. The details can be found in the agent trace file "<AGENT_INST_HOME>/sysman/log/emagent_perl.trc". The most common error messages look like

  ERROR:  initParameterFileUtl::convertSPFileToPFile: Failed to convert spfile
  
or

  ERROR:  initParameterFileUtl::getParameterFile: Cannot find any init parameter file for instance <instancename> in oracle home  <oracle_home>

For example, suppose Oracle Restart (standalone Grid Infrastructure) is installed and ASM storage is configured on server host1.dbaplus.ca, and DBCA creates a database orcl (sid & db_name) in an ASM diskgroup. The spfile will also be saved in the ASM diskgroup, and no parameter file (pfile/spfile) will be created under the directory <ORACLE_HOME>/dbs. The database will be discovered by EM with the default name orcl_host1.dbaplus.ca, and you will see errors in the agent trace file "emagent_perl.trc",

oracledb.pl: 2021-05-04 04:03:08,923: INFO:  DB_LISTENER_DISCOVERY:  processing sid="orcl"
oracledb.pl: 2021-05-04 04:03:08,926: ERROR:  initParameterFileUtl::getParameterFile: Cannot find any init parameter file for instance orcl in oracle home /u01/app/oracle/product/19.9.0/dbhome_1/dbs
oracledb.pl: 2021-05-04 04:03:08,930: ERROR:  initParameterFileUtl::getParameterFile: Cannot find any init parameter file for instance orcl in oracle home /u01/app/oracle/product/19.9.0/dbhome_1/dbs
oracledb.pl: 2021-05-04 04:03:09,065: ERROR:  initParameterFileUtl::getParameterFile: Cannot find any init parameter file for instance orcl in oracle home /u01/app/oracle/product/19.9.0/dbhome_1/dbs
oracledb.pl: 2021-05-04 04:03:09,069: ERROR:  initParameterFileUtl::getParameterFile: Cannot find any init parameter file for instance orcl in oracle home /u01/app/oracle/product/19.9.0/dbhome_1/dbs

The reason is that the discovery script cannot find a parameter file under <ORACLE_HOME>/dbs. To fix the problem, create a pfile 'initorcl.ora' under <ORACLE_HOME>/dbs with the following content,

   spfile='<full path of spfile saved in diskgroup>'

Note: DO NOT leave any SPACE character at the beginning of the line (before the word "spfile"). If you do, you will not see any errors in the trace file, but the target will still be named <sid>_<hostname>.

Sample init file

$ srvctl config database -db orcl | grep spfile
Spfile: +DATA/orcl/PARAMETERFILE/spfile.919.1071658047
$
$ echo "spfile='+DATA/orcl/PARAMETERFILE/spfile.919.1071658047'" > $ORACLE_HOME/dbs/initorcl.ora
$
$ cat $ORACLE_HOME/dbs/initorcl.ora
spfile='+DATA/orcl/PARAMETERFILE/spfile.919.1071658047'