Hemant K Chitale

I am an Oracle Database Specialist in Singapore.

Do DataFiles get updated when in BACKUP mode ?

Sun, 2023-10-15 05:33

 The Oracle feature that allows the DBA to put a database in "BACKUP" mode is useful for environments where non-RMAN methods, such as storage snapshots, are used to make database backups.

However, there seems to be a misconception that datafiles are not updated while the database is in BACKUP mode.

Here is a quick demonstration to disprove it.  I first place the database in BACKUP mode and then, from other sessions, start transactions, checkpoints and log file switches.  I then check whether the datafile headers are updated and whether the datafiles themselves are updated.
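The get_dbf_chk_scns script itself is not shown in this post; a minimal sketch that would produce the output below reads V$DATAFILE_HEADER, which is populated by reading the header of each datafile (so it reflects what is physically on disk, not what the controlfile records):

```sql
-- get_dbf_chk_scns.sql -- hypothetical reconstruction, the original script is not shown
col name format a56
select file#, name, checkpoint_change#,
       to_char(checkpoint_time,'DD-MON-RR HH24:MI:SS') as checkpoint_time
from   v$datafile_header
order  by file#;
```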



17:55:28 SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    1240322

17:55:34 SQL> alter system checkpoint;

System altered.

17:55:42 SQL> @get_dbf_chk_scns

     FILE# NAME                                                     CHECKPOINT_CHANGE# CHECKPOINT_TIME
---------- -------------------------------------------------------- ------------------ ------------------
         1 /opt/oracle/oradata/HEMANT/system.dbf                               1240326 15-OCT-23 17:55:42
         2 /opt/oracle/oradata/HEMANT/sysaux.dbf                               1240326 15-OCT-23 17:55:42
         3 /opt/oracle/oradata/HEMANT/undotbs.dbf                              1240326 15-OCT-23 17:55:42
         4 /opt/oracle/oradata/HEMANT/TDE_TARGET_TBS_encrypted.dbf             1240326 15-OCT-23 17:55:42
         5 /opt/oracle/oradata/HEMANT/indx01.dbf                               1240326 15-OCT-23 17:55:42
         6 /opt/oracle/oradata/HEMANT/HR_DATA.dbf                              1240326 15-OCT-23 17:55:42
        10 /opt/oracle/oradata/HEMANT/indx02.dbf                               1240326 15-OCT-23 17:55:42
        11 /opt/oracle/oradata/HEMANT/indx03.dbf                               1240326 15-OCT-23 17:55:42

8 rows selected.

17:56:01 SQL> alter database begin backup;

Database altered.

17:56:10 SQL> @get_dbf_chk_scns

     FILE# NAME                                                     CHECKPOINT_CHANGE# CHECKPOINT_TIME
---------- -------------------------------------------------------- ------------------ ------------------
         1 /opt/oracle/oradata/HEMANT/system.dbf                               1240339 15-OCT-23 17:56:10
         2 /opt/oracle/oradata/HEMANT/sysaux.dbf                               1240339 15-OCT-23 17:56:10
         3 /opt/oracle/oradata/HEMANT/undotbs.dbf                              1240339 15-OCT-23 17:56:10
         4 /opt/oracle/oradata/HEMANT/TDE_TARGET_TBS_encrypted.dbf             1240339 15-OCT-23 17:56:10
         5 /opt/oracle/oradata/HEMANT/indx01.dbf                               1240339 15-OCT-23 17:56:10
         6 /opt/oracle/oradata/HEMANT/HR_DATA.dbf                              1240339 15-OCT-23 17:56:10
        10 /opt/oracle/oradata/HEMANT/indx02.dbf                               1240339 15-OCT-23 17:56:10
        11 /opt/oracle/oradata/HEMANT/indx03.dbf                               1240339 15-OCT-23 17:56:10

8 rows selected.

17:56:18 SQL>


The BEGIN BACKUP causes a Checkpoint.  Now I start transactions, log file switches and checkpoints from other sessions and monitor the state here.
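The workload running in the other sessions is not shown in the post; something as simple as this (the table name t_workload is hypothetical) is enough to drive datafile writes, log switches and checkpoints:

```sql
-- Hypothetical workload from a second session while the database is in BACKUP mode
create table t_workload (id number, pad varchar2(100));
insert into t_workload
  select level, rpad('x',100,'x') from dual connect by level <= 10000;
commit;
alter system switch logfile;   -- force a log switch
alter system checkpoint;       -- force a checkpoint
```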



17:59:15 SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /opt/oracle/archivelog/HEMANT
Oldest online log sequence     118
Next log sequence to archive   120
Current log sequence           120
17:59:18 SQL> @get_dbf_chk_scns

     FILE# NAME                                                     CHECKPOINT_CHANGE# CHECKPOINT_TIME
---------- -------------------------------------------------------- ------------------ ------------------
         1 /opt/oracle/oradata/HEMANT/system.dbf                               1240339 15-OCT-23 17:56:10
         2 /opt/oracle/oradata/HEMANT/sysaux.dbf                               1240339 15-OCT-23 17:56:10
         3 /opt/oracle/oradata/HEMANT/undotbs.dbf                              1240339 15-OCT-23 17:56:10
         4 /opt/oracle/oradata/HEMANT/TDE_TARGET_TBS_encrypted.dbf             1240339 15-OCT-23 17:56:10
         5 /opt/oracle/oradata/HEMANT/indx01.dbf                               1240339 15-OCT-23 17:56:10
         6 /opt/oracle/oradata/HEMANT/HR_DATA.dbf                              1240339 15-OCT-23 17:56:10
        10 /opt/oracle/oradata/HEMANT/indx02.dbf                               1240339 15-OCT-23 17:56:10
        11 /opt/oracle/oradata/HEMANT/indx03.dbf                               1240339 15-OCT-23 17:56:10

8 rows selected.

17:59:25 SQL> !ls -l /opt/oracle/oradata/HEMANT/*dbf
-rw-r-----. 1 oracle oinstall   5251072 Oct 15 17:56 /opt/oracle/oradata/HEMANT/HR_DATA.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 17:56 /opt/oracle/oradata/HEMANT/indx01.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 17:56 /opt/oracle/oradata/HEMANT/indx02.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 17:56 /opt/oracle/oradata/HEMANT/indx03.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 17:52 /opt/oracle/oradata/HEMANT/redo01.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 17:58 /opt/oracle/oradata/HEMANT/redo02.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 17:58 /opt/oracle/oradata/HEMANT/redo03.dbf
-rw-r-----. 1 oracle oinstall 419438592 Oct 15 17:56 /opt/oracle/oradata/HEMANT/sysaux.dbf
-rw-r-----. 1 oracle oinstall 566239232 Oct 15 17:56 /opt/oracle/oradata/HEMANT/system.dbf
-rw-r-----. 1 oracle oinstall 104865792 Oct 15 17:56 /opt/oracle/oradata/HEMANT/TDE_TARGET_TBS_encrypted.dbf
-rw-r-----. 1 oracle oinstall 209723392 Oct 15 17:56 /opt/oracle/oradata/HEMANT/undotbs.dbf

17:59:38 SQL>
18:01:04 SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /opt/oracle/archivelog/HEMANT
Oldest online log sequence     119
Next log sequence to archive   121
Current log sequence           121
18:01:17 SQL> @get_dbf_chk_scns

     FILE# NAME                                                     CHECKPOINT_CHANGE# CHECKPOINT_TIME
---------- -------------------------------------------------------- ------------------ ------------------
         1 /opt/oracle/oradata/HEMANT/system.dbf                               1240339 15-OCT-23 17:56:10
         2 /opt/oracle/oradata/HEMANT/sysaux.dbf                               1240339 15-OCT-23 17:56:10
         3 /opt/oracle/oradata/HEMANT/undotbs.dbf                              1240339 15-OCT-23 17:56:10
         4 /opt/oracle/oradata/HEMANT/TDE_TARGET_TBS_encrypted.dbf             1240339 15-OCT-23 17:56:10
         5 /opt/oracle/oradata/HEMANT/indx01.dbf                               1240339 15-OCT-23 17:56:10
         6 /opt/oracle/oradata/HEMANT/HR_DATA.dbf                              1240339 15-OCT-23 17:56:10
        10 /opt/oracle/oradata/HEMANT/indx02.dbf                               1240339 15-OCT-23 17:56:10
        11 /opt/oracle/oradata/HEMANT/indx03.dbf                               1240339 15-OCT-23 17:56:10

8 rows selected.

18:01:25 SQL> !ls -l /opt/oracle/oradata/HEMANT/*dbf
-rw-r-----. 1 oracle oinstall   5251072 Oct 15 17:56 /opt/oracle/oradata/HEMANT/HR_DATA.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 17:56 /opt/oracle/oradata/HEMANT/indx01.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 17:56 /opt/oracle/oradata/HEMANT/indx02.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 17:56 /opt/oracle/oradata/HEMANT/indx03.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 18:01 /opt/oracle/oradata/HEMANT/redo01.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 17:58 /opt/oracle/oradata/HEMANT/redo02.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 18:00 /opt/oracle/oradata/HEMANT/redo03.dbf
-rw-r-----. 1 oracle oinstall 419438592 Oct 15 17:56 /opt/oracle/oradata/HEMANT/sysaux.dbf
-rw-r-----. 1 oracle oinstall 566239232 Oct 15 18:00 /opt/oracle/oradata/HEMANT/system.dbf
-rw-r-----. 1 oracle oinstall 104865792 Oct 15 17:56 /opt/oracle/oradata/HEMANT/TDE_TARGET_TBS_encrypted.dbf
-rw-r-----. 1 oracle oinstall 209723392 Oct 15 18:01 /opt/oracle/oradata/HEMANT/undotbs.dbf

18:01:34 SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    1240837

18:01:44 SQL>
18:01:44 SQL> !sleep 120

18:04:00 SQL>
18:04:04 SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /opt/oracle/archivelog/HEMANT
Oldest online log sequence     122
Next log sequence to archive   124
Current log sequence           124
18:04:10 SQL> @get_dbf_chk_scns

     FILE# NAME                                                     CHECKPOINT_CHANGE# CHECKPOINT_TIME
---------- -------------------------------------------------------- ------------------ ------------------
         1 /opt/oracle/oradata/HEMANT/system.dbf                               1240339 15-OCT-23 17:56:10
         2 /opt/oracle/oradata/HEMANT/sysaux.dbf                               1240339 15-OCT-23 17:56:10
         3 /opt/oracle/oradata/HEMANT/undotbs.dbf                              1240339 15-OCT-23 17:56:10
         4 /opt/oracle/oradata/HEMANT/TDE_TARGET_TBS_encrypted.dbf             1240339 15-OCT-23 17:56:10
         5 /opt/oracle/oradata/HEMANT/indx01.dbf                               1240339 15-OCT-23 17:56:10
         6 /opt/oracle/oradata/HEMANT/HR_DATA.dbf                              1240339 15-OCT-23 17:56:10
        10 /opt/oracle/oradata/HEMANT/indx02.dbf                               1240339 15-OCT-23 17:56:10
        11 /opt/oracle/oradata/HEMANT/indx03.dbf                               1240339 15-OCT-23 17:56:10

8 rows selected.

18:04:16 SQL> !ls -l /opt/oracle/oradata/HEMANT/*dbf
-rw-r-----. 1 oracle oinstall   5251072 Oct 15 18:03 /opt/oracle/oradata/HEMANT/HR_DATA.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 18:03 /opt/oracle/oradata/HEMANT/indx01.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 18:03 /opt/oracle/oradata/HEMANT/indx02.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 18:03 /opt/oracle/oradata/HEMANT/indx03.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 18:03 /opt/oracle/oradata/HEMANT/redo01.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 18:03 /opt/oracle/oradata/HEMANT/redo02.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 18:03 /opt/oracle/oradata/HEMANT/redo03.dbf
-rw-r-----. 1 oracle oinstall 419438592 Oct 15 18:03 /opt/oracle/oradata/HEMANT/sysaux.dbf
-rw-r-----. 1 oracle oinstall 566239232 Oct 15 18:03 /opt/oracle/oradata/HEMANT/system.dbf
-rw-r-----. 1 oracle oinstall 104865792 Oct 15 18:03 /opt/oracle/oradata/HEMANT/TDE_TARGET_TBS_encrypted.dbf
-rw-r-----. 1 oracle oinstall 209723392 Oct 15 18:03 /opt/oracle/oradata/HEMANT/undotbs.dbf

18:04:24 SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    1242436

18:04:34 SQL>
18:05:13 SQL> !sleep 120

18:07:15 SQL>
18:07:17 SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /opt/oracle/archivelog/HEMANT
Oldest online log sequence     124
Next log sequence to archive   126
Current log sequence           126
18:07:20 SQL> alter system archive log current;

System altered.

18:07:30 SQL> @get_dbf_chk_scns

     FILE# NAME                                                     CHECKPOINT_CHANGE# CHECKPOINT_TIME
---------- -------------------------------------------------------- ------------------ ------------------
         1 /opt/oracle/oradata/HEMANT/system.dbf                               1240339 15-OCT-23 17:56:10
         2 /opt/oracle/oradata/HEMANT/sysaux.dbf                               1240339 15-OCT-23 17:56:10
         3 /opt/oracle/oradata/HEMANT/undotbs.dbf                              1240339 15-OCT-23 17:56:10
         4 /opt/oracle/oradata/HEMANT/TDE_TARGET_TBS_encrypted.dbf             1240339 15-OCT-23 17:56:10
         5 /opt/oracle/oradata/HEMANT/indx01.dbf                               1240339 15-OCT-23 17:56:10
         6 /opt/oracle/oradata/HEMANT/HR_DATA.dbf                              1240339 15-OCT-23 17:56:10
        10 /opt/oracle/oradata/HEMANT/indx02.dbf                               1240339 15-OCT-23 17:56:10
        11 /opt/oracle/oradata/HEMANT/indx03.dbf                               1240339 15-OCT-23 17:56:10

8 rows selected.

18:07:36 SQL> !ls -l /opt/oracle/oradata/HEMANT/*dbf
-rw-r-----. 1 oracle oinstall   5251072 Oct 15 18:07 /opt/oracle/oradata/HEMANT/HR_DATA.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 18:07 /opt/oracle/oradata/HEMANT/indx01.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 18:07 /opt/oracle/oradata/HEMANT/indx02.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 18:07 /opt/oracle/oradata/HEMANT/indx03.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 18:07 /opt/oracle/oradata/HEMANT/redo01.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 18:06 /opt/oracle/oradata/HEMANT/redo02.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 18:07 /opt/oracle/oradata/HEMANT/redo03.dbf
-rw-r-----. 1 oracle oinstall 419438592 Oct 15 18:07 /opt/oracle/oradata/HEMANT/sysaux.dbf
-rw-r-----. 1 oracle oinstall 566239232 Oct 15 18:07 /opt/oracle/oradata/HEMANT/system.dbf
-rw-r-----. 1 oracle oinstall 104865792 Oct 15 18:07 /opt/oracle/oradata/HEMANT/TDE_TARGET_TBS_encrypted.dbf
-rw-r-----. 1 oracle oinstall 209723392 Oct 15 18:07 /opt/oracle/oradata/HEMANT/undotbs.dbf

18:07:43 SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    1242872

18:07:50 SQL>
18:07:50 SQL> alter system checkpoint;

System altered.

18:08:23 SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    1243044

18:08:29 SQL> @get_dbf_chk_scns

     FILE# NAME                                                     CHECKPOINT_CHANGE# CHECKPOINT_TIME
---------- -------------------------------------------------------- ------------------ ------------------
         1 /opt/oracle/oradata/HEMANT/system.dbf                               1240339 15-OCT-23 17:56:10
         2 /opt/oracle/oradata/HEMANT/sysaux.dbf                               1240339 15-OCT-23 17:56:10
         3 /opt/oracle/oradata/HEMANT/undotbs.dbf                              1240339 15-OCT-23 17:56:10
         4 /opt/oracle/oradata/HEMANT/TDE_TARGET_TBS_encrypted.dbf             1240339 15-OCT-23 17:56:10
         5 /opt/oracle/oradata/HEMANT/indx01.dbf                               1240339 15-OCT-23 17:56:10
         6 /opt/oracle/oradata/HEMANT/HR_DATA.dbf                              1240339 15-OCT-23 17:56:10
        10 /opt/oracle/oradata/HEMANT/indx02.dbf                               1240339 15-OCT-23 17:56:10
        11 /opt/oracle/oradata/HEMANT/indx03.dbf                               1240339 15-OCT-23 17:56:10

8 rows selected.

18:08:35 SQL> !ls -l /opt/oracle/oradata/HEMANT/*dbf
-rw-r-----. 1 oracle oinstall   5251072 Oct 15 18:08 /opt/oracle/oradata/HEMANT/HR_DATA.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 18:08 /opt/oracle/oradata/HEMANT/indx01.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 18:08 /opt/oracle/oradata/HEMANT/indx02.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 18:08 /opt/oracle/oradata/HEMANT/indx03.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 18:08 /opt/oracle/oradata/HEMANT/redo01.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 18:08 /opt/oracle/oradata/HEMANT/redo02.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 18:07 /opt/oracle/oradata/HEMANT/redo03.dbf
-rw-r-----. 1 oracle oinstall 419438592 Oct 15 18:08 /opt/oracle/oradata/HEMANT/sysaux.dbf
-rw-r-----. 1 oracle oinstall 566239232 Oct 15 18:08 /opt/oracle/oradata/HEMANT/system.dbf
-rw-r-----. 1 oracle oinstall 104865792 Oct 15 18:08 /opt/oracle/oradata/HEMANT/TDE_TARGET_TBS_encrypted.dbf
-rw-r-----. 1 oracle oinstall 209723392 Oct 15 18:08 /opt/oracle/oradata/HEMANT/undotbs.dbf

18:08:41 SQL>


I now, finally, issue an END BACKUP and check the status.


18:09:45 SQL> alter database end backup;

Database altered.

18:09:51 SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    1244942

18:09:58 SQL> @get_dbf_chk_scns

     FILE# NAME                                                     CHECKPOINT_CHANGE# CHECKPOINT_TIME
---------- -------------------------------------------------------- ------------------ ------------------
         1 /opt/oracle/oradata/HEMANT/system.dbf                               1243041 15-OCT-23 18:08:23
         2 /opt/oracle/oradata/HEMANT/sysaux.dbf                               1243041 15-OCT-23 18:08:23
         3 /opt/oracle/oradata/HEMANT/undotbs.dbf                              1243041 15-OCT-23 18:08:23
         4 /opt/oracle/oradata/HEMANT/TDE_TARGET_TBS_encrypted.dbf             1243041 15-OCT-23 18:08:23
         5 /opt/oracle/oradata/HEMANT/indx01.dbf                               1243041 15-OCT-23 18:08:23
         6 /opt/oracle/oradata/HEMANT/HR_DATA.dbf                              1243041 15-OCT-23 18:08:23
        10 /opt/oracle/oradata/HEMANT/indx02.dbf                               1243041 15-OCT-23 18:08:23
        11 /opt/oracle/oradata/HEMANT/indx03.dbf                               1243041 15-OCT-23 18:08:23

8 rows selected.

18:10:05 SQL> !ls -l /opt/oracle/oradata/HEMANT/*dbf
-rw-r-----. 1 oracle oinstall   5251072 Oct 15 18:09 /opt/oracle/oradata/HEMANT/HR_DATA.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 18:09 /opt/oracle/oradata/HEMANT/indx01.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 18:09 /opt/oracle/oradata/HEMANT/indx02.dbf
-rw-r-----. 1 oracle oinstall  10493952 Oct 15 18:09 /opt/oracle/oradata/HEMANT/indx03.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 18:08 /opt/oracle/oradata/HEMANT/redo01.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 18:09 /opt/oracle/oradata/HEMANT/redo02.dbf
-rw-r-----. 1 oracle oinstall  10486272 Oct 15 18:09 /opt/oracle/oradata/HEMANT/redo03.dbf
-rw-r-----. 1 oracle oinstall 419438592 Oct 15 18:09 /opt/oracle/oradata/HEMANT/sysaux.dbf
-rw-r-----. 1 oracle oinstall 566239232 Oct 15 18:09 /opt/oracle/oradata/HEMANT/system.dbf
-rw-r-----. 1 oracle oinstall 104865792 Oct 15 18:09 /opt/oracle/oradata/HEMANT/TDE_TARGET_TBS_encrypted.dbf
-rw-r-----. 1 oracle oinstall 209723392 Oct 15 18:09 /opt/oracle/oradata/HEMANT/undotbs.dbf

18:10:10 SQL>


So, while the database was in BACKUP mode (from 17:56:10 to 18:09:45), the datafiles were being updated continuously, Log Switches were happening and ArchiveLogs were being generated.  I also had ALTER SYSTEM CHECKPOINT commands being issued.

The datafile headers, however, stayed frozen at the Checkpoint SCN recorded at BEGIN BACKUP (1240339).  Only when the END BACKUP was issued were the headers advanced to reflect the last Checkpoint (1243041, at 18:08:23) -- obviously, all the previous Checkpoints had also completed by then.

So, a BEGIN BACKUP:
a.  issues a Checkpoint
b.  does *not* stop updates to the datafiles
The datafile header Checkpoint SCN is advanced again only when an END BACKUP is issued -- but that does not mean that the datafiles were not being updated in the meantime.
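When testing this yourself, V$BACKUP shows which files are currently in backup mode along with the SCN frozen into their headers:

```sql
-- Files currently in BACKUP mode; CHANGE# is the SCN captured at BEGIN BACKUP
select b.file#, d.name, b.status, b.change#, b.time
from   v$backup b
join   v$datafile d on d.file# = b.file#
where  b.status = 'ACTIVE';
```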


Restoring a PDB from another CDB can increase your SCN

Sat, 2023-09-23 22:08

 I start with a 19c CDB called "ORCLCDB" and PDB "ORCLPDB1" that is present in it.


SQL> col name format a16
SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
2778483057 ORCLCDB             20906515

SQL> select dbid, guid, name, open_mode from v$pdbs;

      DBID GUID                             NAME             OPEN_MODE
---------- -------------------------------- ---------------- ----------
2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED         READ ONLY
4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1         READ WRITE
 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT          MOUNTED

SQL>
SQL> alter pluggable database orclpdb1 close;

Pluggable database altered.

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Sun Sep 24 09:39:55 2023
Version 19.12.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCLCDB (DBID=2778483057)

RMAN> backup pluggable database orclpdb1 tag for_migration;

Starting backup at 24-SEP-23
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00034 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_tpcctab_jyl0mzsp_.dbf
input datafile file number=00009 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf
input datafile file number=00026 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf
input datafile file number=00032 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf
channel ORA_DISK_1: starting piece 1 at 24-SEP-23
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00011 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf
input datafile file number=00010 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf
input datafile file number=00012 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf
input datafile file number=00031 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf
channel ORA_DISK_2: starting piece 1 at 24-SEP-23
channel ORA_DISK_1: finished piece 1 at 24-SEP-23
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2023_09_24/o1_mf_nnndf_FOR_MIGRATION_ljz5825q_.bkp tag=FOR_MIGRATION comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_2: finished piece 1 at 24-SEP-23
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2023_09_24/o1_mf_nnndf_FOR_MIGRATION_ljz5828m_.bkp tag=FOR_MIGRATION comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03
Finished backup at 24-SEP-23

Starting Control File and SPFILE Autobackup at 24-SEP-23
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2023_09_24/o1_mf_s_1148377605_ljz585bw_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 24-SEP-23

RMAN>
RMAN> quit


Recovery Manager complete.
oracle19c>sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Sep 24 09:48:04 2023
Version 19.12.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0

SQL> alter pluggable database orclpdb1 unplug into '/opt/oracle/product/19c/dbhome_1/orclpdb1.xml';

Pluggable database altered.

SQL> !ls -l /opt/oracle/product/19c/dbhome_1/orclpdb1.xml
-rw-r--r--. 1 oracle oinstall 12583 Sep 24 09:50 /opt/oracle/product/19c/dbhome_1/orclpdb1.xml

SQL>
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>


Now I go to the target CDB, CDB2, and verify that there is no conflicting PDB.  I also find that it has a lower SCN.
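A targeted check for a name clash on the target (a sketch, using the names from this example) could be:

```sql
-- On the target CDB: confirm the names we intend to use are free
select name, open_mode from v$pdbs
where  name in ('ORCLPDB1','ORCLPDB1_NEW');
```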

SQL> col name format a16
SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
 837065713 CDB2                20664227

SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
 837065713 CDB2                20664231

SQL>
SQL> select * from v$pdbs
  2  /

    CON_ID       DBID    CON_UID GUID
---------- ---------- ---------- --------------------------------
NAME
--------------------------------------------------------------------------------------------------------------------------------
OPEN_MODE  RES OPEN_TIME                                                                   CREATE_SCN TOTAL_SIZE BLOCK_SIZE RECOVERY
---------- --- --------------------------------------------------------------------------- ---------- ---------- ---------- --------
SNAPSHOT_PARENT_CON_ID APP APP APP APPLICATION_ROOT_CON_ID APP PRO LOCAL_UNDO   UNDO_SCN UNDO_TIME CREATION_ DIAGNOSTICS_SIZE
---------------------- --- --- --- ----------------------- --- --- ---------- ---------- --------- --------- ----------------
 PDB_COUNT AUDIT_FILES_SIZE   MAX_SIZE MAX_DIAGNOSTICS_SIZE MAX_AUDIT_SIZE LAST_CHANGE TEM
---------- ---------------- ---------- -------------------- -------------- ----------- ---
TENANT_ID
------------------------------------------------------------------------------------------------------------------------------------
UPGRADE_LEVEL GUID_BASE64
------------- ------------------------------
         2 2054948555 2054948555 88129263B99F4BBDE0530100007F7BDF
PDB$SEED
READ ONLY  NO  24-SEP-23 09.26.31.678 AM +08:00                                               1997190  957349888       8192 ENABLED
                       NO  NO  NO                          NO  NO           1        280           04-MAY-19                0
         0                0          0                    0              0 COMMON USER NO

            1 iBKSY7mfS73gUwEAAH973wA=

         3 2205532678 2205532678 E6BD9C73839C10BEE0550A00275FC834
PDBTMP
MOUNTED                                                                                      17541716  999292928       8192 ENABLED
                       NO  NO  NO                          NO  NO           1        280           21-AUG-22                0
         0                0          0                    0              0 COMMON USER NO

            1 5r2cc4OcEL7gVQoAJ1/INAA=


I then have the backups and the "closed" datafiles from the source CDB copied over to the target.  (The CREATE PLUGGABLE DATABASE command will not use the RMAN Backups, but the PDB Describe XML file and the "closed" datafiles.)

oracle19c>pwd
/tmp/From_Source
oracle19c>ls -l
total 1882384
-rw-r-----. 1 oracle oinstall 1146109952 Sep 24 09:56 o1_mf_nnndf_FOR_MIGRATION_ljz5825q_.bkp
-rw-r-----. 1 oracle oinstall  758202368 Sep 24 09:56 o1_mf_nnndf_FOR_MIGRATION_ljz5828m_.bkp
-rw-r-----. 1 oracle oinstall   23232512 Sep 24 09:55 o1_mf_s_1148377605_ljz585bw_.bkp
-rw-r--r--. 1 oracle oinstall      12583 Sep 24 09:59 orclpdb1.xml
oracle19c>


I now prepare to plug the PDB into the target CDB2.  First I check for compatibility, and then I plug in the PDB with a new name, "ORCLPDB1_NEW".

SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
 837065713 CDB2                20664231
SQL> set serveroutput on
SQL> declare
  2   compatible constant varchar2(3) :=
  3     case dbms_pdb.check_plug_compatibility(
  4       pdb_descr_file=>'/tmp/From_Source/orclpdb1.xml',
  5       pdb_name=>'ORCLPDB1')
  6     when true then 'YES'
  7     else 'NO'
  8  END;
  9  begin
 10    dbms_output.put_line(compatible);
 11  end;
 12  /
YES

PL/SQL procedure successfully completed.

SQL>
SQL> show parameter db_create

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      /opt/oracle/oradata

SQL>
SQL> create pluggable database orclpdb1_new
  2  using '/tmp/From_Source/orclpdb1.xml'
  3  copy;  --- copy will copy the datafiles to the location based on db_create_file_dest and CDB database name

Pluggable database created.

SQL>
SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
 837065713 CDB2                20910195

SQL>
SQL> select dbid, guid, name, open_mode from v$pdbs;

      DBID GUID                             NAME             OPEN_MODE
---------- -------------------------------- ---------------- ----------
2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED         READ ONLY
2205532678 E6BD9C73839C10BEE0550A00275FC834 PDBTMP           MOUNTED
4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1_NEW     READ WRITE

SQL>


The SCN of the target CDB2 has now been advanced to the much higher 20910195 (it was 20664231 before I "plugged in" ORCLPDB1 as ORCLPDB1_NEW).  This is because it "read" the SCN from the headers of the datafiles that were plugged in.


I can go back to the source and drop the PDB.

SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
2778483057 ORCLCDB             20910076

SQL> select dbid, guid, name, open_mode from v$pdbs;

      DBID GUID                             NAME             OPEN_MODE
---------- -------------------------------- ---------------- ----------
2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED         READ ONLY
4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1         MOUNTED
 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT          MOUNTED

SQL> 
SQL> drop pluggable database orclpdb1 including datafiles;

Pluggable database dropped.

SQL> 
SQL> select dbid, guid, name, open_mode from v$pdbs;

      DBID GUID                             NAME             OPEN_MODE
---------- -------------------------------- ---------------- ----------
2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED         READ ONLY
 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT          MOUNTED

SQL>
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>


So, I have "migrated" ORCLPDB1 from ORCLCDB to CDB2 and, in the process, have found that the SCN in CDB2 was advanced to a much higher value, because it "inherited" the higher Checkpoint SCN that was in the datafiles of ORCLPDB1.

I can verify that all the datafiles (that are OPEN) in CDB2 get stamped with the new SCN at the next checkpoint.

SQL> select con_id, file#, plugin_change#, checkpoint_change#
  2  from v$datafile
  3  order by 1,2
  4  /

    CON_ID      FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE#
---------- ---------- -------------- ------------------
         1          1              0           20658011
         1          3              0           20658011
         1          4              0           20658011
         1          7              0           20658011
         2          5              0           13768423
         2          6              0           13768423
         2          8              0           13768423
         3         58              0           20657657
         3         59              0           20657657
         3         60              0           20657657
         4         69              0           20908595
         4         70              0           20908595
         4         71              0           20908595
         4         72              0           20908595
         4         73              0           20908595
         4         74              0           20908595
         4         75              0           20908595
         4         76              0           20908595

18 rows selected.

SQL> alter system checkpoint;

System altered.

SQL> select con_id, file#, plugin_change#, checkpoint_change#
  2  from v$datafile
  3  order by 1,2
  4  /

    CON_ID      FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE#
---------- ---------- -------------- ------------------
         1          1              0           20912767
         1          3              0           20912767
         1          4              0           20912767
         1          7              0           20912767
         2          5              0           13768423
         2          6              0           13768423
         2          8              0           13768423
         3         58              0           20657657
         3         59              0           20657657
         3         60              0           20657657
         4         69              0           20912767
         4         70              0           20912767
         4         71              0           20912767
         4         72              0           20912767
         4         73              0           20912767
         4         74              0           20912767
         4         75              0           20912767
         4         76              0           20912767

18 rows selected.

SQL> col name format a16
SQL> select con_id, name, open_mode from v$pdbs;

    CON_ID NAME             OPEN_MODE
---------- ---------------- ----------
         2 PDB$SEED         READ ONLY
         3 PDBTMP           MOUNTED
         4 ORCLPDB1_NEW     READ WRITE

SQL> alter pluggable database pdbtmp open;

Pluggable database altered.

SQL> select con_id, file#, plugin_change#, checkpoint_change#
  2  from v$datafile
  3  order by 1,2
  4  /

    CON_ID      FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE#
---------- ---------- -------------- ------------------
         1          1              0           20912767
         1          3              0           20912767
         1          4              0           20912767
         1          7              0           20912767
         2          5              0           13768423
         2          6              0           13768423
         2          8              0           13768423
         3         58              0           20912888
         3         59              0           20912888
         3         60              0           20912888
         4         69              0           20912767
         4         70              0           20912767
         4         71              0           20912767
         4         72              0           20912767
         4         73              0           20912767
         4         74              0           20912767
         4         75              0           20912767
         4         76              0           20912767

18 rows selected.

SQL> alter system checkpoint;

System altered.

SQL> select con_id, file#, plugin_change#, checkpoint_change#
  2  from v$datafile
  3  order by 1,2
  4  /

    CON_ID      FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE#
---------- ---------- -------------- ------------------
         1          1              0           20912967
         1          3              0           20912967
         1          4              0           20912967
         1          7              0           20912967
         2          5              0           13768423
         2          6              0           13768423
         2          8              0           13768423
         3         58              0           20912967
         3         59              0           20912967
         3         60              0           20912967
         4         69              0           20912967
         4         70              0           20912967
         4         71              0           20912967
         4         72              0           20912967
         4         73              0           20912967
         4         74              0           20912967
         4         75              0           20912967
         4         76              0           20912967

18 rows selected.

SQL>
SQL> alter session set container=ORCLPDB1_NEW;

Session altered.
SQL> col name format a54
SQL> select name from v$datafile;

NAME
------------------------------------------------------
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA
424/datafile/o1_mf_system_ljz7d02h_.dbf

/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA
424/datafile/o1_mf_sysaux_ljz7d02l_.dbf

/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA
424/datafile/o1_mf_undotbs1_ljz7d02m_.dbf

/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA
424/datafile/o1_mf_users_ljz7d02o_.dbf

/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA
424/datafile/o1_mf_data_min_ljz7d02p_.dbf

/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA
424/datafile/o1_mf_hemant_d_ljz7d02r_.dbf

/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA
424/datafile/o1_mf_hemant_d_ljz7d02s_.dbf

/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA
424/datafile/o1_mf_tpcctab_ljz7d02t_.dbf


8 rows selected.

SQL>



CON_ID=1 is the Root, and its datafiles get the new SCN after a Checkpoint.  Note that these datafiles still had the "older" SCN 20658011 before I issued the Checkpoint.
CON_ID=2 is the SEED PDB, so it does not get updated.
CON_ID=3 is PDBTMP, at 20657657, and is stamped to 20912888 after I OPEN it.
CON_ID=4 is ORCLPDB1_NEW, which I plugged in today.
All the non-SEED datafiles are stamped again (to 20912967) after another Checkpoint.
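The Checkpoint SCNs shown above come from the controlfile (V$DATAFILE).  As a quick cross-check sketch, the SCNs physically recorded in the datafile headers can be read from the standard V$DATAFILE_HEADER view and compared against the controlfile's view :

```sql
-- Sketch only : compare the controlfile's Checkpoint SCN (v$datafile)
-- with what is actually stamped in the datafile headers (v$datafile_header)
select h.con_id, h.file#, h.checkpoint_change#, h.checkpoint_time,
       d.checkpoint_change# as ctlfile_checkpoint_change#
from   v$datafile_header h, v$datafile d
where  h.file# = d.file#
order  by h.con_id, h.file#
/
```

For OPEN datafiles the two values should agree after a Checkpoint; datafiles of a PDB that is only MOUNTED will keep their older header SCN, as the demonstration above shows.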

Categories: DBA Blogs

The FILTER operation for an EXISTS query

Wed, 2023-08-09 04:26

 Let's assume a business case where we want to track customers who have ever placed orders with us.

(The business case could be extended to identify customers who registered with us at least 3 years ago but have not placed any order in the last 12 months --- they could be targets for an advertising or promotions/discount campaign, or customers to be retired, depending on the nature of our business and products.  But I'll keep the query simple here).
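For reference, a sketch of that extended query -- assuming only the CUSTOMERS and SALES_ORDERS tables described below, with their CUST_REGN_DATE and SALE_DATE columns -- might look like this :

```sql
-- Sketch only : customers registered at least 3 years ago
-- with no sales order in the last 12 months
select c.cust_id, c.cust_name
from   customers c
where  c.cust_regn_date <= add_months(sysdate, -36)
and    not exists
       (select 1
        from   sales_orders s
        where  s.cust_id   = c.cust_id
        and    s.sale_date > add_months(sysdate, -12))
order by c.cust_id
/
```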


SQL> desc customers;

Name              Null?       Type            
_________________ ___________ _______________ 
CUST_ID           NOT NULL    NUMBER(6)       
CUST_NAME         NOT NULL    VARCHAR2(56)    
CUST_REGN_DATE    NOT NULL    DATE                                                     
SQL> select count(*) from customers;
                                                                                                                     
   COUNT(*)                                                                                                                   
___________                                                                                                                            
        100                                                                                                                                   
                                                                                                                                                   
SQL> desc sales_orders;
                                                                                                                                                                    
Name          Null?       Type          
_____________ ___________ _____________ 
ORDER_ID      NOT NULL    NUMBER(12)    
SALE_DATE                 DATE          
CUST_ID                   NUMBER(6)     
PRODUCT_ID                NUMBER(6)     
QUANTITY                  NUMBER(12)    
SQL> 
SQL> select count(*) from sales_orders;

   COUNT(*) 
___________ 
    1979580 

SQL> select count(distinct(cust_id)) from sales_orders;

   COUNT(DISTINCT(CUST_ID)) 
___________________________ 
                         99 

SQL> 
SQL> create table sale_made_tracker (cust_id number(6,0), sales_made varchar2(3));

Table SALE_MADE_TRACKER created.

SQL> 

Given that we have 100 customers and 1.98 million sales orders, we want to add an entry to SALE_MADE_TRACKER for each customer who has a sales order.  However, this is an "on-demand" query that an analyst runs from a front-end, customer-by-customer, for specific customers.  Thus, it is not executed as a batch across all 100 customers and 1.98 million sales orders.

The developer likes to use Bind Variables and comes up with a query like this :


insert into sale_made_tracker
select cust_id, 'YES'
from customers a
where a.cust_name = :bind_cust_name
and exists
(select '1'
 from customers c, sales_orders s
 where s.cust_id=c.cust_id
 and c.cust_id = a.cust_id)
 

This allows the user to specify a customer name without knowing the cust_id when executing the query.

When testing this in 23c, I've had to add the NO_UNNEST Hint so that I could see the FILTER operation (similar to what I saw in a current 12.1 database):


 SQL> explain plan for
  2  insert into sale_made_tracker
  3  select cust_id, 'YES'
  4  from customers a
  5  where a.cust_name = :bind_cust_name
  6  and exists
  7  (select  '1' 
  8   from customers c, sales_orders s
  9   where s.cust_id=c.cust_id
 10   and c.cust_id = a.cust_id) 
 11  /

Explained.

SQL> 
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT                                                                                 
_________________________________________________________________________________________________ 
Plan hash value: 1657757803                                                                       
                                                                                                  
----------------------------------------------------------------------------------------------    
| Id  | Operation                | Name              | Rows  | Bytes | Cost (%CPU)| Time     |    
----------------------------------------------------------------------------------------------    
|   0 | INSERT STATEMENT         |                   |     1 |    45 |  2447   (1)| 00:00:01 |    
|   1 |  LOAD TABLE CONVENTIONAL | SALE_MADE_TRACKER |       |       |            |          |    
|   2 |   NESTED LOOPS SEMI      |                   |     1 |    45 |  2447   (1)| 00:00:01 |    
|*  3 |    TABLE ACCESS FULL     | CUSTOMERS         |     1 |    43 |     3   (0)| 00:00:01 |    
|   4 |    VIEW PUSHED PREDICATE | VW_SQ_1           |   208 |   416 |  2444   (1)| 00:00:01 |    
|   5 |     NESTED LOOPS         |                   | 20828 |   528K|  2444   (1)| 00:00:01 |    
|*  6 |      INDEX UNIQUE SCAN   | SYS_C0012587      |     1 |    13 |     1   (0)| 00:00:01 |    
|*  7 |      TABLE ACCESS FULL   | SALES_ORDERS      | 20828 |   264K|  2443   (1)| 00:00:01 |    
----------------------------------------------------------------------------------------------    
                                                                                                  
Predicate Information (identified by operation id):                                               
---------------------------------------------------                                               
                                                                                                  
   3 - filter("A"."CUST_NAME"=:BIND_CUST_NAME)                                                    
   6 - access("C"."CUST_ID"="A"."CUST_ID")                                                        
   7 - filter("S"."CUST_ID"="A"."CUST_ID")                                                        
                                                                                                  
Note                                                                                              
-----                                                                                             

PLAN_TABLE_OUTPUT                                           
___________________________________________________________ 
   - dynamic statistics used: dynamic sampling (level=2)    

25 rows selected. 

SQL> delete plan_table;

15 rows deleted.

SQL> 
SQL> explain plan for
  2  insert into sale_made_tracker
  3  select cust_id, 'YES'
  4  from customers a
  5  where a.cust_name = :bind_cust_name
  6  and exists
  7  (select /*+ NO_UNNEST */ '1' 
  8   from customers c, sales_orders s
  9   where s.cust_id=c.cust_id
 10   and c.cust_id = a.cust_id) 
 11  /

Explained.

SQL> 
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT                                                                                 
_________________________________________________________________________________________________ 
Plan hash value: 2963538242                                                                       
                                                                                                  
----------------------------------------------------------------------------------------------    
| Id  | Operation                | Name              | Rows  | Bytes | Cost (%CPU)| Time     |    
----------------------------------------------------------------------------------------------    
|   0 | INSERT STATEMENT         |                   |     1 |    43 |     6   (0)| 00:00:01 |    
|   1 |  LOAD TABLE CONVENTIONAL | SALE_MADE_TRACKER |       |       |            |          |    
|*  2 |   FILTER                 |                   |       |       |            |          |    
|*  3 |    TABLE ACCESS FULL     | CUSTOMERS         |     1 |    43 |     3   (0)| 00:00:01 |    
|   4 |    NESTED LOOPS          |                   |     1 |    26 |     3   (0)| 00:00:01 |    
|*  5 |     INDEX UNIQUE SCAN    | SYS_C0012587      |     1 |    13 |     1   (0)| 00:00:01 |    
|*  6 |     TABLE ACCESS FULL    | SALES_ORDERS      | 20828 |   264K|     2   (0)| 00:00:01 |    
----------------------------------------------------------------------------------------------    
                                                                                                  
Predicate Information (identified by operation id):                                               
---------------------------------------------------                                               
                                                                                                  
   2 - filter( EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM "SALES_ORDERS" "S","CUSTOMERS"              
              "C" WHERE "C"."CUST_ID"=:B1 AND "S"."CUST_ID"=:B2))                                 
   3 - filter("A"."CUST_NAME"=:BIND_CUST_NAME)                                                    
   5 - access("C"."CUST_ID"=:B1)                                                                  
   6 - filter("S"."CUST_ID"=:B1)                                                                  
                                                                                                  
Note                                                                                              

PLAN_TABLE_OUTPUT                                           
___________________________________________________________ 
-----                                                       
   - dynamic statistics used: dynamic sampling (level=2)    

26 rows selected. 

SQL> delete plan_table;

7 rows deleted.

SQL> 

So, for the purpose of this demo of the FILTER operation for an EXISTS clause, I will use the second form of the query.



If I run the query for customer name "HEMANT" :

 SQL> exec :bind_cust_name := 'HEMANT'

PL/SQL procedure successfully completed.

SQL> 
SQL> insert /*+ gather_plan_statistics */ into sale_made_tracker
  2  select cust_id, 'YES'
  3  from customers a
  4  where a.cust_name = :bind_cust_name
  5  and exists
  6  (select /*+ NO_UNNEST */ '1' 
  7   from customers c, sales_orders s
  8   where s.cust_id=c.cust_id
  9   and c.cust_id = a.cust_id) 
 10  /

1 row inserted.

SQL> 
SQL> select * from table(dbms_xplan.display_cursor(format=>'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT                                                                                           
___________________________________________________________________________________________________________ 
SQL_ID  1j3jggfsurwg4, child number 0                                                                       
-------------------------------------                                                                       
insert /*+ gather_plan_statistics */ into sale_made_tracker select                                          
cust_id, 'YES' from customers a where a.cust_name = :bind_cust_name and                                     
exists (select /*+ NO_UNNEST */ '1'   from customers c, sales_orders s                                      
where s.cust_id=c.cust_id  and c.cust_id = a.cust_id)                                                       
                                                                                                            
Plan hash value: 2963538242                                                                                 
                                                                                                            
--------------------------------------------------------------------------------------------------------    
| Id  | Operation                | Name              | Starts | E-Rows | A-Rows |   A-Time   | Buffers |    
--------------------------------------------------------------------------------------------------------    
|   0 | INSERT STATEMENT         |                   |      1 |        |      0 |00:00:00.01 |     195 |    
|   1 |  LOAD TABLE CONVENTIONAL | SALE_MADE_TRACKER |      1 |        |      0 |00:00:00.01 |     195 |    
|*  2 |   FILTER                 |                   |      1 |        |      1 |00:00:00.01 |     193 |    
|*  3 |    TABLE ACCESS FULL     | CUSTOMERS         |      1 |      1 |      1 |00:00:00.01 |       7 |    
|   4 |    NESTED LOOPS          |                   |      1 |      1 |      1 |00:00:00.01 |     186 |    
|*  5 |     INDEX UNIQUE SCAN    | SYS_C0012587      |      1 |      1 |      1 |00:00:00.01 |       1 |    
|*  6 |     TABLE ACCESS FULL    | SALES_ORDERS      |      1 |  20828 |      1 |00:00:00.01 |     185 |    
--------------------------------------------------------------------------------------------------------    
                                                                                                            
Predicate Information (identified by operation id):                                                         
---------------------------------------------------                                                         
                                                                                                            

PLAN_TABLE_OUTPUT                                           
___________________________________________________________ 
   2 - filter( IS NOT NULL)                                 
   3 - filter("A"."CUST_NAME"=:BIND_CUST_NAME)              
   5 - access("C"."CUST_ID"=:B1)                            
   6 - filter("S"."CUST_ID"=:B1)                            
                                                            
Note                                                        
-----                                                       
   - dynamic statistics used: dynamic sampling (level=2)    
                                                            

33 rows selected. 

SQL> 
This shows that a total of 185 blocks were read from the SALES_ORDERS table for "HEMANT"'s orders, plus 1 block for the Index Unique Scan and 7 blocks for the Full Table Scan of CUSTOMERS (a total of 193 blocks for the SELECT).  (Ignore the A-Rows being 0 for the actual INSERT in the Plan output; the execution shows that 1 row was inserted.)



But if I run the query for  customer name "LARRY" :
SQL> exec :bind_cust_name := 'LARRY'

PL/SQL procedure successfully completed.

SQL> 
SQL> insert /*+ gather_plan_statistics */ into sale_made_tracker
  2  select cust_id, 'YES'
  3  from customers a
  4  where a.cust_name = :bind_cust_name
  5  and exists
  6  (select /*+ NO_UNNEST */ '1' 
  7   from customers c, sales_orders s
  8   where s.cust_id=c.cust_id
  9   and c.cust_id = a.cust_id) 
 10  /

0 rows inserted.

SQL> 
SQL> select * from table(dbms_xplan.display_cursor(format=>'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT                                                                                           
___________________________________________________________________________________________________________ 
SQL_ID  1j3jggfsurwg4, child number 0                                                                       
-------------------------------------                                                                       
insert /*+ gather_plan_statistics */ into sale_made_tracker select                                          
cust_id, 'YES' from customers a where a.cust_name = :bind_cust_name and                                     
exists (select /*+ NO_UNNEST */ '1'   from customers c, sales_orders s                                      
where s.cust_id=c.cust_id  and c.cust_id = a.cust_id)                                                       
                                                                                                            
Plan hash value: 2963538242                                                                                 
                                                                                                            
--------------------------------------------------------------------------------------------------------    
| Id  | Operation                | Name              | Starts | E-Rows | A-Rows |   A-Time   | Buffers |    
--------------------------------------------------------------------------------------------------------    
|   0 | INSERT STATEMENT         |                   |      1 |        |      0 |00:00:00.06 |    8723 |    
|   1 |  LOAD TABLE CONVENTIONAL | SALE_MADE_TRACKER |      1 |        |      0 |00:00:00.06 |    8723 |    
|*  2 |   FILTER                 |                   |      1 |        |      0 |00:00:00.06 |    8723 |    
|*  3 |    TABLE ACCESS FULL     | CUSTOMERS         |      1 |      1 |      1 |00:00:00.01 |       7 |    
|   4 |    NESTED LOOPS          |                   |      1 |      1 |      0 |00:00:00.06 |    8716 |    
|*  5 |     INDEX UNIQUE SCAN    | SYS_C0012587      |      1 |      1 |      1 |00:00:00.01 |       1 |    
|*  6 |     TABLE ACCESS FULL    | SALES_ORDERS      |      1 |  20828 |      0 |00:00:00.06 |    8715 |    
--------------------------------------------------------------------------------------------------------    
                                                                                                            
Predicate Information (identified by operation id):                                                         
---------------------------------------------------                                                         
                                                                                                            

PLAN_TABLE_OUTPUT                                           
___________________________________________________________ 
   2 - filter( IS NOT NULL)                                 
   3 - filter("A"."CUST_NAME"=:BIND_CUST_NAME)              
   5 - access("C"."CUST_ID"=:B1)                            
   6 - filter("S"."CUST_ID"=:B1)                            
                                                            
Note                                                        
-----                                                       
   - dynamic statistics used: dynamic sampling (level=2)    
                                                            

33 rows selected. 

SQL>

This shows that a total of 8,715 blocks were read from the SALES_ORDERS table alone for "LARRY"'s orders, but the actual rows returned was 0 -- so no rows were inserted into SALE_MADE_TRACKER.

Both "HEMANT" and "LARRY" exist in the CUSTOMERS table.


SQL> select cust_id, cust_name 
  2  from customers
  3  where cust_id in (1,2)
  4* /

   CUST_ID CUST_NAME    
__________ ____________ 
         1 HEMANT       
         2 LARRY        

SQL>
SQL> select cust_id, count(*)
  2   from sales_orders
  3  where cust_id in (1,2)
  4  group by cust_id
  5  order by 1
  6* /

   CUST_ID    COUNT(*) 
__________ ___________ 
         1       20502 

SQL> 
SQL> select /*+ gather_plan_statistics FULL (s) */ count(*)
  2  from sales_orders s
  3* /

   COUNT(*) 
___________ 
    1979580 

SQL> select * from table(dbms_xplan.display_cursor(format=>'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT                                                                                
________________________________________________________________________________________________ 
SQL_ID  042jxgv8y5xny, child number 0                                                            
-------------------------------------                                                            
select /*+ gather_plan_statistics FULL (s) */ count(*) from                                      
sales_orders s                                                                                   
                                                                                                 
Plan hash value: 1409771706                                                                      
                                                                                                 
---------------------------------------------------------------------------------------------    
| Id  | Operation          | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |    
---------------------------------------------------------------------------------------------    
|   0 | SELECT STATEMENT   |              |      1 |        |      1 |00:00:00.03 |    8715 |    
|   1 |  SORT AGGREGATE    |              |      1 |      1 |      1 |00:00:00.03 |    8715 |    
|   2 |   TABLE ACCESS FULL| SALES_ORDERS |      1 |   2082K|   1979K|00:00:00.02 |    8715 |    
---------------------------------------------------------------------------------------------    
                                                                                                 
Note                                                                                             
-----                                                                                            
   - dynamic statistics used: dynamic sampling (level=2)                                         
                                                                                                 

19 rows selected. 

SQL> 

While "HEMANT" has 20,502 sales orders, "LARRY", despite being a registered customer, has never placed an order.

The EXISTS query for "HEMANT" had to read only 185 blocks from the SALES_ORDERS table before it was *stopped* by the FILTER operation : the FILTER stops the subquery as soon as the first row is returned -- that is, as soon as the subquery returns TRUE (you can also see this as the actual rows from the SALES_ORDERS table being "1").

"LARRY" has no orders, so the EXISTS query had to run through the whole SALES_ORDERS table, reading 8,715 blocks.
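That first-row short-circuit can also be expressed explicitly in the SQL itself -- this is only an illustrative sketch against the same tables, using a hypothetical :bind_cust_id variable -- by asking the probe subquery to stop at the first matching row :

```sql
-- Sketch only : a probe that, like the FILTER operation,
-- stops at the first matching row instead of scanning all of SALES_ORDERS
select case
         when exists (select 1
                      from   sales_orders s
                      where  s.cust_id = :bind_cust_id
                      and    rownum = 1)
         then 'YES'
         else 'NO'
       end as sale_made
from   dual
/
```

For a customer like "LARRY" with no orders, of course, even this form must scan all of SALES_ORDERS (or use an index on CUST_ID, if one existed) before it can return 'NO'.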


Side note on the use of the Bind Variable here :  the fact that the developer used Bind Variables means that the Estimated Rows from the SALES_ORDERS table in the Execution Plan was always going to be the same -- no matter which customer name was queried for -- because Bind Variables prevent the use of Histograms (if they existed) on the table.
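Whether a Histogram actually exists on the join column can be checked with a quick sketch against the standard USER_TAB_COL_STATISTICS dictionary view :

```sql
-- Sketch only : does CUST_ID on SALES_ORDERS have a Histogram ?
select column_name, num_distinct, histogram
from   user_tab_col_statistics
where  table_name  = 'SALES_ORDERS'
and    column_name = 'CUST_ID'
/
```

HISTOGRAM would show NONE here if no Histogram has been gathered -- in which case the Estimated Rows would be the same for every customer anyway, Bind Variables or not.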

Categories: DBA Blogs

Recent Video PlayLists

Mon, 2023-01-09 08:19

 I've added a few videos to my YouTube Channel in the past two years.


Oracle MultiTenant Database Configurations


Oracle DataGuard


Categories: DBA Blogs

Instance Parameters for Pluggable Databases

Sun, 2022-10-30 09:38
Since 12c, Oracle has supported Pluggable Databases with the MultiTenant Architecture. Normally, instance parameters (those in the init<SID>.ora or spfile<SID>.ora) apply to the whole Instance -- and thus to all the Pluggable Databases.

However, certain parameters can be changed at the PDB (Pluggable Database) level.

Oracle stores the values for these changed parameters in the PDB_SPFILE$ table in the Root Container.

Here is a simple demonstration (in 19c) :


-- connect to the root and check certain parameter
SQL> connect / as sysdba
Connected.
SQL> show parameter optimizer_features_enable

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_features_enable            string      19.1.0
SQL> show parameter sort_area_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sort_area_size                       integer     65536
SQL> show parameter workarea_size_policy

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
workarea_size_policy                 string      AUTO
SQL>


-- override them at the PDB level for ORCLPDB1
SQL> alter session set container=ORCLPDB1;

Session altered.

SQL> alter system set optimizer_features_enable='12.2.0.1';

System altered.

SQL> alter system set sort_area_size=10485760;
alter system set sort_area_size=10485760
*
ERROR at line 1:
ORA-02096: specified initialization parameter is not modifiable with this option

SQL> -- note that changing S_A_S has to be done as "DEFERRED" to apply to subsequent sessions
SQL> alter system set sort_area_size=10485760 deferred;

System altered.

SQL> alter system set workarea_size_policy='MANUAL';

System altered.

SQL>
SQL> show parameter optimizer_features_enable;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_features_enable            string      12.2.0.1
SQL> show parameter workarea_size_policy;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
workarea_size_policy                 string      MANUAL
SQL>

-- connect to the PDB and verify that they have changed for the user session
SQL> connect hemant/hemant@orclpdb1
ERROR:
ORA-28002: the password will expire within 7 days


Connected.
SQL> show parameter optimizer_features_enable

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_features_enable            string      12.2.0.1
SQL> show parameter sort_area_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sort_area_size                       integer     10485760
SQL> show parameter workarea_size_policy

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
workarea_size_policy                 string      MANUAL
SQL>


-- return to the root CDB and check that it retains the old (default) values
SQL> connect / as sysdba
Connected.
SQL> show parameter optimizer_features_enable

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_features_enable            string      19.1.0
SQL> show parameter sort_area_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sort_area_size                       integer     65536
SQL> show parameter workarea_size_policy

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
workarea_size_policy                 string      AUTO
SQL>


-- identify from the CDB where the overriding values are
SQL> connect / as sysdba
Connected.
SQL> get show_orclpdb1_params.sql
1 select pdb.dbid, pdb.con_uid, sp.name, sp.value$
2 from v$pdbs pdb, pdb_spfile$ sp
3 where pdb.name = 'ORCLPDB1'
4 and pdb.con_uid=sp.pdb_uid
5* order by sp.name
SQL> /

      DBID    CON_UID NAME                                     VALUE$
---------- ---------- ---------------------------------------- ----------------------------------------
4057974068 4057974068 deferred_segment_creation                FALSE
4057974068 4057974068 nls_date_format                          ''
4057974068 4057974068 optimizer_features_enable                '12.2.0.1'
4057974068 4057974068 sort_area_size                           10485760
4057974068 4057974068 workarea_size_policy                     'MANUAL'

SQL>


(The two parameters "deferred_segment_creation" and "nls_date_format" are from my YouTube demonstration.)

Thus, we can see that Oracle stores PDB-level parameter overrides in a table called PDB_SPFILE$ in the Root Container.
These will apply whenever the Pluggable Database is re-opened -- and thus override, for that PDB, the Instance-wide parameters that come from the Server Parameter File (spfile<SID>.ora).
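To undo such a PDB-level override, the parameter can be RESET while connected to the PDB, so that it reverts to inheriting the value from the Root -- a sketch :

```sql
-- Sketch only : remove the PDB-level override set earlier for ORCLPDB1
alter session set container = ORCLPDB1;
alter system reset workarea_size_policy;
-- the corresponding row should then disappear from PDB_SPFILE$ in the Root
```

The query against V$PDBS and PDB_SPFILE$ shown above can be re-run from the Root to confirm that the row for the parameter is gone.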

I have presented a short Video Demonstration as well on YouTube.

 


Categories: DBA Blogs

SCN Synchronization when using a Database Link

Wed, 2022-09-07 09:36

 In my previous blog post about querying across a Database Link, I had stated : "running a query (just a SELECT statement) requires the two databases to synchronise SCNs.  Whichever database has the lower SCN has to increment its SCN to match the other database's (higher) SCN."


Here is a quick demonstration :


-- First I verify the current SCN in two databases "ORCLCDB" and "CDB2"
ORCLCDB SQL>select name, systimestamp, current_scn from v$database;

NAME SYSTIMESTAMP CURRENT_SCN
--------- ------------------------------------ -----------
ORCLCDB 07-SEP-22 10.10.49.643026 PM +08:00 19617903

ORCLCDB SQL>

NAME SYSTIMESTAMP CURRENT_SCN
--------- ------------------------------------ -----------
CDB2 07-SEP-22 10.10.52.378969 PM +08:00 18562711

CDB2 SQL>
-- there is a wide gap between the two -- with ORCLCDB ahead by approximately 1 million
============================================================================================



-- now I connect a client to a PDB in ORCLCDB
-- and run a SELECT query that queries across a DBLink a table in a PDB in CDB2
-- this query is NOT a user transaction, only a SELECT statement
SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> set time on
22:11:33 SQL>
22:11:48 SQL> select count(*) from tmp_data@pdbtmp_data;

COUNT(*)
----------
73181

22:12:05 SQL>


-- Now I check the SCNs in the two databases
ORCLCDB SQL>select name, systimestamp, current_scn from v$database;

NAME SYSTIMESTAMP CURRENT_SCN
--------- ------------------------------------ -----------
ORCLCDB 07-SEP-22 10.12.19.955477 PM +08:00 19620089

ORCLCDB SQL>

CDB2 SQL>select name, systimestamp, current_scn from v$database;

NAME SYSTIMESTAMP CURRENT_SCN
--------- ------------------------------------ -----------
CDB2 07-SEP-22 10.12.23.499409 PM +08:00 19620089

CDB2 SQL>
-- I can see that CDB2's SCN has "mysteriously" caught up with ORCLCDB
-- in fact they are exactly the same
-- in the real world there would still be some difference by the time I query again
-- because other transactions in either or both the databases would have incremented the SCN(s)
================================================================================================



-- now I disconnect from the client
-- this *closes* the database link
22:12:58 SQL> disconnect
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0

-- and reconnect to the client
22:13:04 SQL> connect hemant/hemant@orclpdb1
Connected.
22:13:14 SQL>

-- I compare the SCNs in the two databases
ORCLCDB SQL>select name, systimestamp, current_scn from v$database;

NAME SYSTIMESTAMP CURRENT_SCN
--------- ------------------------------------ -----------
ORCLCDB 07-SEP-22 10.13.22.738662 PM +08:00 19620223

ORCLCDB SQL>

CDB2 SQL>select name, systimestamp, current_scn from v$database;

NAME SYSTIMESTAMP CURRENT_SCN
--------- ------------------------------------ -----------
CDB2 07-SEP-22 10.13.30.874874 PM +08:00 19620229

CDB2 SQL>
-- the gap is very small
-- in fact some internal operations in CDB2 have slightly incremented it ahead
===============================================================================



-- I run a PLSQL procedure in my client that does 1million COMMITs in the PDB
-- to increase the SCN only in ORCLCDB
22:14:06 SQL> execute onemillioncommits;

PL/SQL procedure successfully completed.

22:15:03 SQL>
22:15:07 SQL>


-- I now compare the two SCNs
ORCLCDB SQL>select name, systimestamp, current_scn from v$database;

NAME SYSTIMESTAMP CURRENT_SCN
--------- ------------------------------------ -----------
ORCLCDB 07-SEP-22 10.15.21.713518 PM +08:00 20654338

ORCLCDB SQL>
CDB2 SQL>select name, systimestamp, current_scn from v$database;

NAME SYSTIMESTAMP CURRENT_SCN
--------- ------------------------------------ -----------
CDB2 07-SEP-22 10.15.23.779855 PM +08:00 19620296

CDB2 SQL>
-- and find that there is again a huge gap between the two databases, with ORCLCDB ahead again
==============================================================================================



-- I query across the Database Link again from my client session
22:15:46 SQL> select count(*) from tmp_data@pdbtmp_data;

COUNT(*)
----------
73181

22:15:50 SQL>


-- and now compare the two SCNs
ORCLCDB SQL>select name, systimestamp, current_scn from v$database;
NAME SYSTIMESTAMP CURRENT_SCN
--------- ------------------------------------ -----------
ORCLCDB 07-SEP-22 10.16.02.362146 PM +08:00 20654373

ORCLCDB SQL>
CDB2 SQL>select name, systimestamp, current_scn from v$database;

NAME SYSTIMESTAMP CURRENT_SCN
--------- ------------------------------------ -----------
CDB2 07-SEP-22 10.16.14.122201 PM +08:00 20654453

CDB2 SQL>
-- now the gap is very small
-- some other operations in ORCLCDB have already incremented the SCN after the query
====================================================================================


So, simply running a SELECT statement (not even a user-initiated COMMIT) causes the two databases to synchronise SCNs.  The database with the lower SCN "inherits" the higher SCN.

So, the next time you see database SCNs mysteriously jump (e.g. in V$DATABASE.CURRENT_SCN  or in V$ARCHIVED_LOG.FIRST_CHANGE#  or in V$DATAFILE.CHECKPOINT_SCN) very significantly, check to see if you have queries to/from the database that use one or more Database Links.
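A simple way to check for such links is to query the data dictionary as a suitably privileged user; for example :

```sql
-- List Database Links across all containers
-- (use DBA_DB_LINKS instead of CDB_DB_LINKS in a non-CDB database)
select con_id, owner, db_link, host, created
from   cdb_db_links
order  by con_id, owner, db_link;
```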

Categories: DBA Blogs

Querying across a Database Link

Tue, 2022-08-23 10:17

 In this demonstration, assume that I have a local table called "TARGET_DATA" with a list of Countries and Products.  A new company is being acquired and that organisation's product list has been loaded into a table called TMP_DATA in a different database PDBTMP.

So, I run a MINUS query to find the local products that are not in the new company's product list -- but this query is run across a Database Link.

Here is how I create the Database Link, run the MINUS query and then obtain a tkprof of the trace file generated for the query.


SQL> create database link pdbtmp_data
2 connect to data_load identified by data_load using 'pdbtmp';

Database link created.

SQL>
SQL> alter session set tracefile_identifier='MY_DBLINK_QRY';

Session altered.

SQL> alter session set statistics_level='ALL';

Session altered.

SQL>
SQL> exec dbms_session.session_trace_enable(waits=>TRUE,binds=>FALSE);

PL/SQL procedure successfully completed.

SQL>
SQL> select * from target_data
2 minus
3 select * from tmp_data@pdbtmp_data
4 /

...
...

600 rows selected.

SQL> select 'x' from dual;

'
-
x

SQL> show array
arraysize 15
SQL>
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>


-- tkprof of the trace file

select * from target_data
minus
select * from tmp_data@pdbtmp_data

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.02 0.24 0 0 1 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 41 0.17 0.30 453 455 0 600
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 43 0.19 0.55 453 455 1 600

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
600 600 600 MINUS (cr=455 pr=453 pw=0 time=306868 us starts=1)
71682 71682 71682 SORT UNIQUE (cr=455 pr=453 pw=0 time=142899 us starts=1 cost=872 size=2876679 card=73761)
73761 73761 73761 TABLE ACCESS FULL TARGET_DATA (cr=455 pr=453 pw=0 time=80877 us starts=1 cost=130 size=2876679 card=73761)
71143 71143 71143 SORT UNIQUE (cr=0 pr=0 pw=0 time=148559 us starts=1 cost=831 size=2927240 card=73181)
73181 73181 73181 REMOTE TMP_DATA (cr=0 pr=0 pw=0 time=96385 us starts=1 cost=80 size=2927240 card=73181)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
PGA memory operation 135 0.00 0.00
SQL*Net message to client 41 0.00 0.00
Disk file operations I/O 1 0.00 0.00
db file sequential read 1 0.00 0.00
db file scattered read 21 0.00 0.04
SQL*Net message to dblink 4 0.00 0.00
SQL*Net message from dblink 4 0.00 0.00
SQL*Net more data from dblink 352 0.01 0.05
SQL*Net message from client 41 8.84 8.89
********************************************************************************
SQL ID: 04vfkrajpkrnj Plan Hash: 1388734953

select 'x'
from
dual


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 0 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.00 0.00 0 0 0 1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 FAST DUAL (cr=0 pr=0 pw=0 time=1 us starts=1 cost=2 size=0 card=1)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 11.31 11.31



********************************************************************************


OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.02 0.24 0 0 1 0
Execute 3 0.01 0.01 0 676 0 1
Fetch 43 0.17 0.30 453 455 0 601
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 48 0.20 0.56 453 1131 1 602

Misses in library cache during parse: 2
Misses in library cache during execute: 1

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
PGA memory operation 137 0.00 0.00
SQL*Net message to client 44 0.00 0.00
SQL*Net message from client 44 16.85 37.06
db file sequential read 1 0.00 0.00
single-task message 1 0.02 0.02
SQL*Net message from dblink 15 0.05 0.15
SQL*Net message to dblink 14 0.00 0.00
Disk file operations I/O 1 0.00 0.00
db file scattered read 21 0.00 0.04
SQL*Net more data from dblink 352 0.01 0.05




The query returns 600 rows but, with an ARRAYSIZE of 15 for the sqlplus session, this results in 40 fetch round trips (SQL*Net message to/from client), plus 1 additional round trip (for the parse), for a total of 41 SQL*Net messages.
However, the 73,181 rows from the remote database (PDBTMP) appear as only 4 DBLink messages (SQL*Net message to/from dblink) and 352 "more data from dblink" waits.  So, there were many more network packets from PDBTMP to my local database.
But the total count of dblink messages is 15 (from dblink) and 14 (to dblink) -- an excess of about 11 over the initial 4 -- which means that there was some additional "chatty" interaction between my local database session and the one created in PDBTMP.
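As an aside, the client-side round trips (though not the dblink messages) can be reduced by increasing the fetch array size in sqlplus.  Assuming the same 600-row result set :

```sql
-- With ARRAYSIZE 100, the 600 result rows need only about 6 fetch
-- round trips instead of the 40 seen with ARRAYSIZE 15
set arraysize 100
select * from target_data
minus
select * from tmp_data@pdbtmp_data
/
```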


I also trace the session that is created by the Database Link operation in PDBTMP.  The tkprof for the trace file of that session is below :

SQL ID: bv2zwsyu4hq3k Plan Hash: 0

SELECT /*+ FULL(P) +*/ *
FROM
"TMP_DATA" P


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.04 0.05 4 101 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.04 0.05 4 101 0 0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 109

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 0.14 0.14
********************************************************************************
SQL ID: 33v4pbtw6zjxt Plan Hash: 3633341221

SELECT "COUNTRY","PRODUCT_NAME"
FROM
"TMP_DATA" "TMP_DATA"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 4 0.05 0.08 450 455 0 73181
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6 0.05 0.08 450 455 0 73181

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 109
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
73181 73181 73181 TABLE ACCESS FULL TMP_DATA (cr=455 pr=450 pw=0 time=66315 us starts=1 cost=124 size=2927240 card=73181)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 4 0.00 0.00
db file sequential read 1 0.00 0.00
db file scattered read 19 0.00 0.03
SQL*Net message from client 4 20.26 20.31
SQL*Net more data to client 352 0.00 0.00



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ----------- ---------- ---------- ----------
Parse 2 0.04 0.05 4 101 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 4 0.05 0.08 450 455 0 73181
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 7 0.09 0.13 454 556 0 73181

Misses in library cache during parse: 2

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
PGA memory operation 1 0.00 0.00
log file sync 1 0.00 0.00
SQL*Net message to client 11 0.00 0.00
SQL*Net message from client 11 20.26 20.47
db file sequential read 1 0.00 0.00
db file scattered read 19 0.00 0.03
SQL*Net more data to client 352 0.00 0.00




Note how the "select * from tmp_data" portion passed to PDBTMP is rewritten as 
SELECT /*+ FULL(P) +*/ *
FROM
"TMP_DATA" P
- and then as
SELECT "COUNTRY","PRODUCT_NAME"
FROM
"TMP_DATA" "TMP_DATA"

The remote database also shows that it sent 4 messages to/from "client" (in this case the client was my database session in my local database, not the sqlplus program) and 352 "more data" packets.  These numbers match what we see in the local trace file.
However, this also shows an additional 11 messages to/from "client".
There are some additional lines in the raw trace file which indicate what these overheads are.

One of them is for the logon trigger in PDBTMP that I have created to ensure that the DBLink connection generates a trace file.
The others are for recursive parse and other calls in PDBTMP (where parse success is sent to my "client" session in my local database).  Calls like :
select decode(upper(failover_method), NULL, 0 , 'BASIC', 1, 'PRECONNECT', 2 , 'PREPARSE', 4 , 0), decode(upper(failover_type), NULL, 1,  'NONE', 1, 'SESSION', 2, 'SELECT',  4, 'TRANSACTION',  8, 'AUTO' , 32,  1 ), failover_retries, failover_delay, flags, nvl(replay_initiation_timeout, 900), decode(upper(session_state_consistency), 'STATIC', 16, 0), failover_restore from sys.service$ where name = :1

insert into link_logons$(logon_time , source_id) VALUES ( SYSTIMESTAMP AT TIME ZONE 'UTC', :srcid )

XCTEND rlbk=0, rd_only=0, tim=....

XCTEND rlbk=1, rd_only=1, tim=...

XCTEND rlbk=0, rd_only=1, tim=...


So, a Database Link query has overheads.  Also, running a query (just a SELECT statement) requires the two databases to synchronise SCNs.  Whichever database has the lower SCN has to increment its SCN to match the other database's (higher) SCN.  A distributed query is effectively a *transaction* which requires an implicit COMMIT/ROLLBACK.

(Also notice the "insert into link_logons$", which seems to be an audit implementation in the remote database PDBTMP.)


Bonus : This is the ON LOGON TRIGGER in PDBTMP which causes it to write to a Trace File :


SQL> select dbms_metadata.get_ddl('TRIGGER','DATA_LOAD_LOGON_TRIG') from dual;

DBMS_METADATA.GET_DDL('TRIGGER','DATA_LOAD_LOGON_TRIG')
--------------------------------------------------------------------------------

CREATE OR REPLACE NONEDITIONABLE TRIGGER "SYS"."DATA_LOAD_LOGON_TRIG"
after logon on database
WHEN (user = 'DATA_LOAD') begin
execute immediate 'alter session set tracefile_identifier=''DATA_LOAD''';
execute immediate 'alter session set statistics_level=''ALL''';
dbms_session.session_trace_enable(waits=>TRUE,binds=>FALSE);
end;
ALTER TRIGGER "SYS"."DATA_LOAD_LOGON_TRIG" ENABLE


Such a trigger is useful when you want to trace every logon by a particular user but don't know when the logon will occur.
Categories: DBA Blogs

Direct Path Inserts and Locks

Sun, 2022-08-14 04:40

 In my previous post I had demonstrated how a Direct Path Insert in one session blocks another Direct Path Insert from another session, except when the two sessions explicitly name separate target Partitions.

Here, I will look at how Oracle creates Locks for such operations.  In the output below, the SQL prompt is set to either 'Sesn_1' or 'Sesn_2' or 'SYSTEM' (the DBA session) to indicate which session is executing the SQL statement.

First, running the Direct Path Insert without specifying the target Partition name.



SYSTEM>l
1 select object_name, subobject_name, object_type, object_id, data_object_id
2 from dba_objects
3 where owner = 'HEMANT'
4 and object_name in ('MY_PART_TABLE')
5 and object_type in ('TABLE','TABLE PARTITION')
6* order by 1, 2 nulls first, 4
SYSTEM>/

OBJECT_NAME SUBOBJECT_NAME OBJECT_TYPE OBJECT_ID DATA_OBJECT_ID
---------------- ---------------- ----------------------- ---------- --------------
MY_PART_TABLE TABLE 81817
MY_PART_TABLE P_100 TABLE PARTITION 81818 81827
MY_PART_TABLE P_200 TABLE PARTITION 81819 81828
MY_PART_TABLE P_300 TABLE PARTITION 81820 81820

SYSTEM>
SYSTEM>truncate table hemant.my_part_table;

Table truncated.

SYSTEM>
SYSTEM>select count(*) from v$locked_object;

COUNT(*)
----------
0

SYSTEM>



Sesn_1>select distinct sid from v$mystat;

SID
----------
138

Sesn_1>
Sesn_1>l
1 insert /*+ APPEND */ into my_part_table
2 select rownum, dbms_random.string('X',12)
3 from dual
4* connect by rownum < 51
Sesn_1>/

50 rows created.

Sesn_1>



Sesn_2>select distinct sid from v$mystat;

SID
----------
384

Sesn_2>
Sesn_2>l
1 insert /*+ APPEND */ into my_part_table
2 select rownum+101, dbms_random.string('X',12)
3 from dual
4* connect by rownum < 51
Sesn_2>/
---------- Sesn_2 is now in a Wait



SYSTEM>select event from v$session where sid=384;

EVENT
----------------------------------------------------------------
enq: TM - contention

SYSTEM>
SYSTEM>select session_id, object_id, locked_mode
2 from v$locked_object
3 /

SESSION_ID OBJECT_ID LOCKED_MODE
---------- ---------- -----------
138 81817 6
384 81817 0

SYSTEM>
---------- Object ID 81817 is the Table itself, locked by Session 138 -- Sesn_1
SYSTEM>select sid, type, id1, id2, lmode, request, block
2 from v$lock
3 where sid in (138,384)
4 order by 1
5 /

SID TY ID1 ID2 LMODE REQUEST BLOCK
---------- -- ---------- ---------- ---------- ---------- ----------
138 TM 81817 0 6 0 1
138 AE 134 4057974068 4 0 0
138 TX 393246 7880 6 0 0
384 AE 134 4057974068 4 0 0
384 TM 81817 0 0 6 0

SYSTEM>
---------- Session 138 has a TM Lock in Mode 6 on Object 81817 -- the Table itself
---------- Session 384 (Sesn_2, Waiting) is requesting a TM Lock in Mode 6



So, it is clear that, when the Direct Path Insert specified the Table name alone, Session 1 (SID 138) held a TM Lock (Mode->6) on the Table, blocking Session 2 (SID 384), which is requesting the same Mode->6 lock.
The TX lock held by SID 138 is the Transaction Row Exclusive Lock.  What is blocking Session 2 (SID 384) is that it is requesting a Mode->6 lock on the same Object (the Table) as is already held by Session 1 (SID 138).
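Instead of decoding V$LOCK manually, the blocker can also be identified directly from V$SESSION, which exposes BLOCKING_SESSION for a waiting session.  (The SIDs here are the ones from this demonstration.)

```sql
-- For the waiting session (384), BLOCKING_SESSION should report 138
select sid, event, blocking_session, blocking_session_status
from   v$session
where  sid in (138, 384);
```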

Then, when I issue a Rollback (or Commit) from Sesn_1, the Insert by Sesn_2 goes through.

Before this next test, I rollback both the Inserts

Next, with the target Partition named



Sesn_1>l
1 insert /*+ APPEND */ into my_part_table partition (p_100)
2 select rownum, dbms_random.string('X',12)
3 from dual
4* connect by rownum < 51
Sesn_1>/

50 rows created.

Sesn_1>



Sesn_2>l
1 insert /*+ APPEND */ into my_part_table partition (p_200)
2 select rownum+101, dbms_random.string('X',12)
3 from dual
4* connect by rownum < 51
Sesn_2>/

50 rows created.

Sesn_2>



SYSTEM>select event from v$session where sid=384;

EVENT
----------------------------------------------------------------
SQL*Net message from client

SYSTEM>
SYSTEM>select session_id, object_id, locked_mode
2 from v$locked_object
3 /

SESSION_ID OBJECT_ID LOCKED_MODE
---------- ---------- -----------
138 81817 3
138 81818 6
384 81817 3
384 81819 6

SYSTEM>
SYSTEM>l
1 select sid, type, id1, id2, lmode, request, block
2 from v$lock
3 where sid in (138,384)
4* order by 1,2,3
SYSTEM>/

SID TY ID1 ID2 LMODE REQUEST BLOCK
---------- -- ---------- ---------- ---------- ---------- ----------
138 AE 134 4057974068 4 0 0
138 TM 81817 0 3 0 0
138 TM 81818 0 6 0 0
138 TX 65540 6296 6 0 0
384 AE 134 4057974068 4 0 0
384 TM 81817 0 3 0 0
384 TM 81819 0 6 0 0
384 TX 458781 6211 6 0 0

8 rows selected.

SYSTEM>
---------- Both sessions SIDs 138 (Sesn_1) and 384 (Sesn_2) have the Table Object (81817) locked in Mode3, not Mode6
---------- But the Partitions (Objects 81818 and 81819) are locked in Mode6
---------- Neither is SID 138 Blocking any other session nor is SID 384 being blocked


Now we see that the Table (object 81817) is locked in Mode->3 and not Mode->6 by *both sessions*, while the respective Partitions P_100 (object 81818) and P_200 (object 81819) are locked in Mode->6, without session 384 (Sesn_2) being blocked.

Thus, as the Table itself is not locked in Mode->6, Sesn_2 is allowed a Direct Path Insert into another Partition -- but only so long as it explicitly names the target Partition.  (If Sesn_2 attempts a Direct Path Insert without naming a target Partition, it will again begin waiting on Sesn_1, without the REQUEST being evident in v$lock.)

Mode 6 is an Exclusive lock; in the first case it was held on the Table itself.
In the second case, the Table is locked only in Mode 3, which is Row-Exclusive, not an Exclusive Table-level lock; only the named Partitions are locked in Mode 6.

The "TX" lock is a Transaction Enqueue.

The "AE" lock type is an "Edition Enqueue", which we are not concerned with in this test : it relates to Editioning and appears even when Editioning is not actively being used.
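For reference, the numeric LMODE and REQUEST values in V$LOCK can be decoded into their documented names with a query like this :

```sql
select sid, type, id1,
       decode(lmode,   0,'None', 1,'Null', 2,'Row-Share', 3,'Row-Exclusive',
                       4,'Share', 5,'Share Row-Exclusive', 6,'Exclusive') as held,
       decode(request, 0,'None', 1,'Null', 2,'Row-Share', 3,'Row-Exclusive',
                       4,'Share', 5,'Share Row-Exclusive', 6,'Exclusive') as requested
from   v$lock
where  type in ('TM','TX')
order  by sid, type;
```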


Categories: DBA Blogs

Direct Path Insert into a Partitioned Table

Fri, 2022-08-12 09:45

 Normally a Direct Path Insert that is not committed blocks other concurrent Direct Path Inserts.  This behaviour also extends to Partitioned Tables.

Thus :


--------- From Session 1 : inserting only into the first Partition p_100
SQL> l
1 create table my_part_table (id_col number, data_col varchar2(25))
2 partition by range (id_col)
3 (partition p_100 values less than (101),
4 partition p_200 values less than (201),
5 partition p_300 values less than (301)
6* )
SQL>
SQL> /

Table created.

SQL>
SQL> l
1 insert /*+ APPEND */ into my_part_table
2 select rownum, dbms_random.string('X',12)
3 from dual
4* connect by rownum < 51
SQL> /

50 rows created.

SQL>

---------- From Session 2 : where we know that the rows will actually be inserted into a different Partition p_200
SQL> l
1 insert /*+ APPEND */ into my_part_table
2 select rownum+101, dbms_random.string('X',12)
3 from dual
4* connect by rownum < 51
SQL> /

-- Yet, Session 2 is blocked and has to wait until Session 1 does a COMMIT or ROLLBACK


Even though the 2 sessions will be inserting into separate Partitions (i.e. separate Segments), the first session blocks the other Direct Path Insert.

However, this blocking can be avoided by explicitly naming the target Partition

Thus :


--------- From Session 1 : inserting only into the first Partition p_100 explicitly named 
SQL> l
1 insert /*+ APPEND */ into my_part_table partition (p_100)
2 select rownum, dbms_random.string('X',12)
3 from dual
4* connect by rownum < 51
SQL> /

50 rows created.

SQL>
---------- From Session 2 : inserting into the second Partition p_200, explicitly named
SQL> l
1 insert /*+ APPEND */ into my_part_table partition (p_200)
2 select rownum+101, dbms_random.string('X',12)
3 from dual
4* connect by rownum < 51
SQL> /

50 rows created.

SQL>
-------- Of course, neither session can query the table again until it COMMITs or ROLLBACKs
--- note : each session gets ORA-12838 if it tries to read the table it has just modified with a Direct Path Insert; and, as always, uncommitted rows from one session are not visible to the other
SQL> select count(*) from my_part_table;
select count(*) from my_part_table
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


SQL>
SQL> commit; -- issued by both sessions so as to be able to see their own rows as well

Commit complete.

SQL>
SQL> select count(*) from my_part_table;

COUNT(*)
----------
100

SQL>
SQL> select count(*) from my_part_table partition (p_100);

COUNT(*)
----------
50

SQL> select count(*) from my_part_table partition (p_200);

COUNT(*)
----------
50

SQL>


Thus, although there is general advice not to explicitly name a target Partition, I find this method useful if I have multiple concurrent Direct Path Inserts.




Categories: DBA Blogs

The format of the ROWID

Sun, 2022-08-07 04:31

 A ROWID is a method of identifying the physical location of a row in an Oracle Database.  An Index on a Table captures the ROWIDs for the rows holding the index key values, and these entries in the Index are how an Index lookup redirects a query to the row (i.e. its physical location) in the table.

A ROWID (called an Extended ROWID) consists of 4 components :

    -    DataObject Number

    -    DataFile Number Relative to the Tablespace

    -    DataBlock Number (within the DataFile)

    -    RowNumber within the DataBlock

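All four components can be extracted from a ROWID with the DBMS_ROWID package.  (MY_TABLE below is a placeholder -- substitute any table; ROWID_ROW_NUMBER returns the fourth component, the row number within the block.)

```sql
select rowid,
       dbms_rowid.rowid_object(rowid)       as data_object_number,
       dbms_rowid.rowid_relative_fno(rowid) as relative_file_number,
       dbms_rowid.rowid_block_number(rowid) as block_number,
       dbms_rowid.rowid_row_number(rowid)   as row_number_in_block
from   MY_TABLE;
```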

A Partitioned Table actually consists of multiple segments.  Each segment has a different DataObject Number.

Here is a quick demo of the difference between a normal (Non-Partitioned) Table and a Partitioned Table :


The Normal Table :



SQL> create table NONPARTITIONED (id_col number, data_col varchar2(1000)) pctfree 99 tablespace HEMANT_DATA;

Table created.

SQL>
SQL> insert into NONPARTITIONED
2 values (1, dbms_random.string('X',999));

1 row created.

SQL> insert into NONPARTITIONED
2 values (2,'This is the second row');

1 row created.

SQL> commit;

Commit complete.

SQL>
SQL> select rowid, id_col, substr(data_col,1,24)
2 from NONPARTITIONED
3 order by id_col
4 /

ROWID ID_COL SUBSTR(DATA_COL,1,24)
------------------ ---------- ------------------------------------------------------------------------------------------------
AAAT70AAgAAAACTAAA 1 CBXBRIP5ZNQ9VPZNC4HHVJJH
AAAT70AAgAAAACXAAA 2 This is the second row

SQL>
SQL> l
1 select id_col,
2 dbms_rowid.rowid_object(rowid) ObjectNumber,
3 dbms_rowid.rowid_relative_fno(rowid) RelativeFileNumber,
4 dbms_rowid.rowid_block_number(rowid) BlockNumber
5 from NONPARTITIONED
6* order by id_col
SQL> /

ID_COL OBJECTNUMBER RELATIVEFILENUMBER BLOCKNUMBER
---------- ------------ ------------------ -----------
1 81652 32 147
2 81652 32 151

SQL>


Because I created the Table with PCTFREE 99 and inserted a long string in the first row, the second row was created in a different block.  Both Blocks are in the same Relative File Number (32) and belong to the same Object (ObjectNumber 81652).  Is this really the Object ID ?



The Partitioned Table :


SQL> l
1 create table PARTITIONED (id_col number, data_col varchar2(1000))
2 partition by range (id_col)
3 (
4 partition P_1 values less than (2) tablespace HEMANT_DATA,
5 partition P_2 values less than (3) tablespace HEMANT_DATA,
6 partition P_3 values less than (4) tablespace HEMANT_DATA,
7 partition P_MAX values less than (MAXVALUE) tablespace HEMANT_DATA
8 )
9* tablespace HEMANT_DATA
SQL> /

Table created.

SQL>
SQL> insert into PARTITIONED
2 values (1, dbms_random.string('X',999));

1 row created.

SQL> insert into PARTITIONED
2 values (2,'This is the second row');

1 row created.

SQL> commit;

Commit complete.

SQL>
SQL> select rowid, id_col, substr(data_col,1,24)
2 from PARTITIONED
3 order by id_col
4 /

ROWID ID_COL SUBSTR(DATA_COL,1,24)
------------------ ---------- ------------------------------------------------------------------------------------------------
AAAT77AAfAAAAJ3AAA 1 RFU3DNMCD6GXL2ZNV9DDGBG2
AAAT78AAfAAAAZ3AAA 2 This is the second row

SQL>
SQL> l
1 select id_col,
2 dbms_rowid.rowid_object(rowid) ObjectNumber,
3 dbms_rowid.rowid_relative_fno(rowid) RelativeFileNumber,
4 dbms_rowid.rowid_block_number(rowid) BlockNumber
5 from PARTITIONED
6* order by id_col
SQL> /

ID_COL OBJECTNUMBER RELATIVEFILENUMBER BLOCKNUMBER
---------- ------------ ------------------ -----------
1 81659 31 631
2 81660 31 1655

SQL>


In this case, the two rows are in different Blocks not because of the PCTFREE (which has defaulted to 10) but because they are in different Segments -- as you can see from the ObjectNumbers being different for the two rows.
(You might have also noticed that these were created in a separate datafile, Relative File Number 31 instead of 32 {as was the case for the first table}, but that is because Oracle tries to spread new segments across different datafiles)

In the ROWID format, the ObjectNumber is actually the *Data Object Number* that identifies the Segment, not the Object Number of the Table.

Thus, to verify the Segments of the two tables, I can query and check :


SQL> l
1 select object_name, subobject_name, object_type, object_id, data_object_id
2 from user_objects
3 where object_name in ('NONPARTITIONED','PARTITIONED')
4 and object_type in ('TABLE','TABLE PARTITION')
5* order by 1, 2 nulls first, 4
SQL> /

OBJECT_NAME SUBOBJECT_NAME OBJECT_TYPE OBJECT_ID DATA_OBJECT_ID
---------------- ---------------- ----------------------- ---------- --------------
NONPARTITIONED TABLE 81652 81652
PARTITIONED TABLE 81658
PARTITIONED P_1 TABLE PARTITION 81659 81659
PARTITIONED P_2 TABLE PARTITION 81660 81660
PARTITIONED P_3 TABLE PARTITION 81661 81661
PARTITIONED P_MAX TABLE PARTITION 81662 81662

6 rows selected.

SQL>



Thus, for the NONPARTITIONED Table, the Object_ID and Data_Object_ID and that returned by DBMS_ROWID are all the same -- 81652.  
But the logical entry for the PARTITIONED Table has an Object_ID of 81658 without any segment and, therefore, without a Data_Object_ID.
The rows in this Partitioned Table are actually created in the two different Partition Segments with the corresponding Data_Object_IDs (81659 and 81660).



We know that when we rebuild a Table, the ROWIDs change.  But this is actually because a new Segment is allocated.

Thus, if I were to do a MOVE of the "Normal" Table :



SQL> alter table NONPARTITIONED move;

Table altered.

SQL>
SQL> l
1 select id_col,
2 dbms_rowid.rowid_object(rowid) ObjectNumber,
3 dbms_rowid.rowid_relative_fno(rowid) RelativeFileNumber,
4 dbms_rowid.rowid_block_number(rowid) BlockNumber
5 from NONPARTITIONED
6* order by id_col
SQL> /

ID_COL OBJECTNUMBER RELATIVEFILENUMBER BLOCKNUMBER
---------- ------------ ------------------ -----------
1 81663 32 155
2 81663 32 156

SQL>
SQL> l
1 select object_name, subobject_name, object_type, object_id, data_object_id
2 from user_objects
3 where object_name in ('NONPARTITIONED','PARTITIONED')
4 and object_type in ('TABLE','TABLE PARTITION')
5* order by 1, 2 nulls first, 4
SQL> /

OBJECT_NAME      SUBOBJECT_NAME   OBJECT_TYPE              OBJECT_ID DATA_OBJECT_ID
---------------- ---------------- ----------------------- ---------- --------------
NONPARTITIONED                    TABLE                        81652          81663
PARTITIONED                       TABLE                        81658
PARTITIONED      P_1              TABLE PARTITION              81659          81659
PARTITIONED      P_2              TABLE PARTITION              81660          81660
PARTITIONED      P_3              TABLE PARTITION              81661          81661
PARTITIONED      P_MAX            TABLE PARTITION              81662          81662

6 rows selected.

SQL>


Executing a MOVE of the Non-Partitioned Table changed the *Data Object Number* (i.e. DATA_OBJECT_ID) from 81652 to 81663 without changing the OBJECT_ID.



For a couple more interesting aspects of ROWIDs, see the YouTube video "Think you know how the ROWID works? Think again!" by Connor McDonald.




Categories: DBA Blogs

Index Statistic NUM_ROWS excludes Rows with NULL value

Sun, 2022-06-12 05:03

A quick demonstration of the difference between the Table Statistic NUM_ROWS and the Index Statistic NUM_ROWS.



SQL> desc customers
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CUST_ID                                   NOT NULL NUMBER
 CUST_NAME                                          VARCHAR2(128)
 CUST_CITY                                          VARCHAR2(128)
 CUST_START_DATE                                    DATE

SQL> select index_name, column_position, column_name
2 from user_ind_columns
3 where table_name = 'CUSTOMERS'
4 order by 1,2
5 /

INDEX_NAME       COLUMN_POSITION COLUMN_NAME
---------------- --------------- ----------------
CUST_CITY_NDX                  1 CUST_CITY
CUST_ID_NDX                    1 CUST_ID

SQL>
SQL> select count(*) from customers;

COUNT(*)
----------
1299647

SQL>
SQL> exec dbms_stats.gather_table_stats('','CUSTOMERS');

PL/SQL procedure successfully completed.

SQL>
SQL> select last_analyzed, sample_size, num_rows
2 from user_tables
3 where table_name = 'CUSTOMERS'
4 /

LAST_ANAL SAMPLE_SIZE   NUM_ROWS
--------- ----------- ----------
12-JUN-22     1299647    1299647

SQL>
SQL> select index_name, last_analyzed, sample_size, num_rows
2 from user_indexes
3 where table_name = 'CUSTOMERS'
4 order by index_name
5 /

INDEX_NAME       LAST_ANAL SAMPLE_SIZE   NUM_ROWS
---------------- --------- ----------- ----------
CUST_CITY_NDX    12-JUN-22     1179648    1179648
CUST_ID_NDX      12-JUN-22     1299647    1299647

SQL>
SQL> select count(*)
2 from customers
3 where cust_city is null
4 /

COUNT(*)
----------
119999

SQL>
SQL> select 1299647-1179648 from dual;

1299647-1179648
---------------
119999

SQL>
SQL> create index cust_id_city_ndx
2 on customers(cust_id, cust_city)
3 /

Index created.

SQL>
SQL> select index_name, last_analyzed, sample_size, num_rows
2 from user_indexes
3 where table_name = 'CUSTOMERS'
4 order by index_name
5 /

INDEX_NAME       LAST_ANAL SAMPLE_SIZE   NUM_ROWS
---------------- --------- ----------- ----------
CUST_CITY_NDX    12-JUN-22     1179648    1179648
CUST_ID_CITY_NDX 12-JUN-22     1299647    1299647
CUST_ID_NDX      12-JUN-22     1299647    1299647

SQL>
SQL> drop index cust_id_city_ndx;

Index dropped.

SQL>
SQL> create index cust_city_id_ndx
2 on customers(cust_city, cust_id)
3 /

Index created.

SQL>
SQL> select index_name, last_analyzed, sample_size, num_rows
2 from user_indexes
3 where table_name = 'CUSTOMERS'
4 order by index_name
5 /

INDEX_NAME       LAST_ANAL SAMPLE_SIZE   NUM_ROWS
---------------- --------- ----------- ----------
CUST_CITY_ID_NDX 12-JUN-22     1299647    1299647
CUST_CITY_NDX    12-JUN-22     1179648    1179648
CUST_ID_NDX      12-JUN-22     1299647    1299647

SQL>


In recent versions, CREATE INDEX implicitly gathers statistics on the new Index by default, so the two new Indexes also had up-to-date statistics.

The CUST_CITY column has 119999 rows with NULLs.  So, the Statistics on the Index CUST_CITY_NDX on this column did not include these rows.

The two new Indexes that I created (CUST_ID_CITY_NDX and CUST_CITY_ID_NDX) were composite indexes where at least one column (CUST_ID, the Primary Key) is NOT NULL.
Therefore, Statistics on these Indexes did include all the rows because, for every row in the Table, at least one indexed column had a non-NULL value.
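The underlying rule is that a B*Tree index only contains an entry for a row when at least one of its indexed columns is non-NULL, so all-NULL keys are invisible to the index and to its statistics. A small Python sketch of that counting rule (illustrative data of my own, not the demo's CUSTOMERS table):

```python
# Toy rows standing in for a table: cust_id is the (NOT NULL) primary key,
# cust_city is nullable, as in the CUSTOMERS demonstration.
rows = [
    {"cust_id": 1, "cust_city": "Singapore"},
    {"cust_id": 2, "cust_city": None},       # NULL city: no entry in a city-only index
    {"cust_id": 3, "cust_city": "Mumbai"},
]

def index_num_rows(rows, columns):
    """Count rows whose index key is not entirely NULL --
    only these rows get B*Tree index entries."""
    return sum(1 for r in rows if any(r[c] is not None for c in columns))

table_num_rows = len(rows)                                    # all rows
city_only     = index_num_rows(rows, ["cust_city"])           # excludes the NULL-city row
city_and_id   = index_num_rows(rows, ["cust_city", "cust_id"])  # includes every row
```

With the composite key, every row has a non-NULL cust_id, so the composite index's NUM_ROWS matches the table's, mirroring the 1299647 vs 1179648 difference in the demonstration.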

Categories: DBA Blogs

PDB Recovery Isolation in 21c

Sun, 2022-05-08 10:14

 In my previous blog post, where I demonstrated PDB Flashback, I had alluded to the "PDB Recovery Isolation" feature of 21c.

Here is a demonstration.

On the Primary, I issue only two commands to create a new PDB from an *existing, active* PDB (one that is in Read-Write mode and is not PDB$SEED).  The source PDBHKC is active, and you will notice the Archive Log Sequence# being incremented significantly as transactions remain in-flight and continuing while PDBNEW is being CREATEd and OPENed.


SQL> create pluggable database pdbnew from pdbhkc;

Pluggable database created.

SQL> alter pluggable database pdbnew open;

Pluggable database altered.

SQL> quit


These are the relevant Primary Database Alert entries (not every entry) :

2022-05-08T22:32:56.582810+08:00
create pluggable database pdbnew from pdbhkc

2022-05-08T22:33:02.783531+08:00
PDBNEW(6):Endian type of dictionary set to little

2022-05-08T22:33:03.103173+08:00
****************************************************************
Pluggable Database PDBNEW with pdb id - 6 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
local undo-1, localundoscn-0x000000000000011d
****************************************************************
PDBNEW(6):Media Recovery Start
2022-05-08T22:33:03.293420+08:00
PDBNEW(6):Serial Media Recovery started
PDBNEW(6):max_pdb is 6
2022-05-08T22:33:03.393288+08:00
PDBNEW(6):Recovery of Online Redo Log: Thread 1 Group 1 Seq 73 Reading mem 0
PDBNEW(6): Mem# 0: /u02/oradata/DB213H1/redo01.log
2022-05-08T22:33:03.541053+08:00
Thread 1 advanced to log sequence 75 (LGWR switch), current SCN: 4931897
Current log# 3 seq# 75 mem# 0: /u02/oradata/DB213H1/redo03.log
2022-05-08T22:33:03.541508+08:00
PDBNEW(6):Recovery of Online Redo Log: Thread 1 Group 2 Seq 74 Reading mem 0
PDBNEW(6): Mem# 0: /u02/oradata/DB213H1/redo02.log
2022-05-08T22:33:03.598468+08:00
PDBNEW(6):Incomplete Recovery applied until change 4931866 time 05/08/2022 22:33:02
PDBNEW(6):NET (PID:16256): Media Recovery Complete [dbsdrv.c:15048]
2022-05-08T22:33:03.615889+08:00
NET (PID:16388): Archived Log entry 119 added for B-1102494684.T-1.S-74 ID 0x7fbfe6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:03.786205+08:00
PDBNEW(6):Pluggable database PDBNEW pseudo opening
PDBNEW(6):SUPLOG: Initialize PDB SUPLOG SGA, old value 0x0, new value 0x18
PDBNEW(6):Autotune of undo retention is turned on.
PDBNEW(6):Undo initialization recovery: Parallel FPTR failed: start:15704441 end:15704446 diff:5 ms (0.0 seconds)
PDBNEW(6):Undo initialization recovery: err:0 start: 15704439 end: 15704511 diff: 72 ms (0.1 seconds)
PDBNEW(6):[16256] Successfully onlined Undo Tablespace 2.
PDBNEW(6):Undo initialization online undo segments: err:0 start: 15704512 end: 15704530 diff: 18 ms (0.0 seconds)
PDBNEW(6):Undo initialization finished serial:0 start:15704439 end:15704538 diff:99 ms (0.1 seconds)
PDBNEW(6):Database Characterset for PDBNEW is AL32UTF8
PDBNEW(6):Pluggable database PDBNEW pseudo closing
PDBNEW(6):JIT: pid 16256 requesting stop
PDBNEW(6):Closing sequence subsystem (15704602605).
PDBNEW(6):Buffer Cache flush started: 6
PDBNEW(6):Buffer Cache flush finished: 6
Completed: create pluggable database pdbnew from pdbhkc


2022-05-08T22:33:18.209272+08:00
alter pluggable database pdbnew open
2022-05-08T22:33:18.210926+08:00
PDBNEW(6):Pluggable database PDBNEW opening in read write
PDBNEW(6):SUPLOG: Initialize PDB SUPLOG SGA, old value 0x0, new value 0x18
PDBNEW(6):Autotune of undo retention is turned on.
PDBNEW(6):Endian type of dictionary set to little
PDBNEW(6):Undo initialization recovery: Parallel FPTR complete: start:15718778 end:15718780 diff:2 ms (0.0 seconds)
PDBNEW(6):Undo initialization recovery: err:0 start: 15718778 end: 15718780 diff: 2 ms (0.0 seconds)
PDBNEW(6):[16256] Successfully onlined Undo Tablespace 2.
PDBNEW(6):Undo initialization online undo segments: err:0 start: 15718780 end: 15718873 diff: 93 ms (0.1 seconds)
PDBNEW(6):Undo initialization finished serial:0 start:15718778 end:15718875 diff:97 ms (0.1 seconds)
PDBNEW(6):Database Characterset for PDBNEW is AL32UTF8
PDBNEW(6):Deleting old file#13 from file$
PDBNEW(6):Deleting old file#14 from file$
PDBNEW(6):Deleting old file#15 from file$
PDBNEW(6):Adding new file#22 to file$(old file#13). fopr-1, newblks-35840, oldblks-19200
PDBNEW(6):Adding new file#23 to file$(old file#14). fopr-1, newblks-47360, oldblks-15360
PDBNEW(6):Adding new file#24 to file$(old file#15). fopr-1, newblks-12800, oldblks-12800
PDBNEW(6):Successfully created internal service PDBNEW at open
****************************************************************
Post plug operations are now complete.
Pluggable database PDBNEW with pdb id - 6 is now marked as NEW.
****************************************************************
PDBNEW(6):SUPLOG: Set PDB SUPLOG SGA at PDB OPEN, old 0x18, new 0x0 (no suplog)


2022-05-08T22:33:21.760166+08:00
Thread 1 advanced to log sequence 81 (LGWR switch), current SCN: 4933127
Current log# 3 seq# 81 mem# 0: /u02/oradata/DB213H1/redo03.log


2022-05-08T22:33:23.162867+08:00
PDBNEW(6):Opening pdb with no Resource Manager plan active
Pluggable database PDBNEW opened read write
Completed: alter pluggable database pdbnew open


(The "Deleting old file#" and "Adding new file#" entries are simply about updating the controlfile)
Note that Log Sequence# has moved from 72 to 81 during the course of the CREATE and OPEN of the new PDB.


These are the Standby Alert Log entries :


2022-05-08T22:32:57.237583+08:00
ARC0 (PID:1354): Archived Log entry 55 added for B-1102494684.T-1.S-73 ID 0x7f7ae6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:32:57.237902+08:00
rfs (PID:12549): Opened LNO:5 for DBID:3870017436 B-1102494684.T-1.S-74 [krsr.c:18143]
2022-05-08T22:32:57.386776+08:00
PR00 (PID:1438): Media Recovery Waiting for T-1.S-74 (in transit) [krsm.c:6185]
2022-05-08T22:32:57.388789+08:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 74 Reading mem 0
Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_5_k5yt8yjw_.log
2022-05-08T22:33:02.900342+08:00
Recovery created pluggable database PDBNEW
PDBNEW(6):Tablespace-SYSTEM during PDB create offlined since source is in r/w mode or this is a refresh clone
PDBNEW(6):File #22 added to control file as '/u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_system_k7ho4s6r_.dbf'.
PDBNEW(6):Tablespace-SYSAUX during PDB create offlined since source is in r/w mode or this is a refresh clone
PDBNEW(6):File #23 added to control file as '/u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_sysaux_k7ho4s6s_.dbf'.
PDBNEW(6):Tablespace-UNDOTBS1 during PDB create offlined since source is in r/w mode or this is a refresh clone
PDBNEW(6):File #24 added to control file as '/u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_undotbs1_k7ho4s6t_.dbf'.
PDBNEW(6):Tablespace-TEMP during PDB create offlined since source is in r/w mode or this is a refresh clone
2022-05-08T22:33:03.709388+08:00
rfs (PID:12549): Opened LNO:4 for DBID:3870017436 B-1102494684.T-1.S-75 [krsr.c:18143]
2022-05-08T22:33:03.724813+08:00
ARC2 (PID:1362): Archived Log entry 56 added for B-1102494684.T-1.S-74 ID 0x7fbbe6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:04.051241+08:00
PR00 (PID:1438): Media Recovery Waiting for T-1.S-75 (in transit) [krsm.c:6185]
2022-05-08T22:33:04.052945+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 75 Reading mem 0
Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_4_k5yt8y5f_.log
2022-05-08T22:33:07.786948+08:00
rfs (PID:12549): Opened LNO:5 for DBID:3870017436 B-1102494684.T-1.S-76 [krsr.c:18143]
2022-05-08T22:33:07.811804+08:00
ARC0 (PID:1354): Archived Log entry 57 added for B-1102494684.T-1.S-75 ID 0x7f7ae6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:07.926136+08:00
PR00 (PID:1438): Media Recovery Waiting for T-1.S-76 (in transit) [krsm.c:6185]
2022-05-08T22:33:07.927698+08:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 76 Reading mem 0
Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_5_k5yt8yjw_.log
2022-05-08T22:33:08.525489+08:00
PDBNEW(6):Full restore complete of datafile 24 /u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_undotbs1_k7ho53ny_.dbf. Elapsed time: 0:00:01
PDBNEW(6): checkpoint is 4932003
PDBNEW(6): last deallocation scn is 3928601
PDBNEW(6):Datafile #24 has been copied to the standby.
2022-05-08T22:33:11.424337+08:00
rfs (PID:12549): Opened LNO:4 for DBID:3870017436 B-1102494684.T-1.S-77 [krsr.c:18143]
2022-05-08T22:33:11.492712+08:00
PR00 (PID:1438): Media Recovery Waiting for T-1.S-77 (in transit) [krsm.c:6185]
2022-05-08T22:33:11.494117+08:00
ARC2 (PID:1362): Archived Log entry 58 added for B-1102494684.T-1.S-76 ID 0x7fbbe6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:11.552303+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 77 Reading mem 0
Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_4_k5yt8y5f_.log
2022-05-08T22:33:12.363978+08:00
PDBNEW(6):Full restore complete of datafile 23 /u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_sysaux_k7ho54o6_.dbf. Elapsed time: 0:00:04
PDBNEW(6): checkpoint is 4932003
PDBNEW(6): last deallocation scn is 3862144
PDBNEW(6):Datafile #23 has been copied to the standby.
2022-05-08T22:33:14.572554+08:00
rfs (PID:12549): Opened LNO:5 for DBID:3870017436 B-1102494684.T-1.S-78 [krsr.c:18143]
2022-05-08T22:33:14.572781+08:00
Trying to expand controlfile section 11 for Oracle Managed Files
Expanded controlfile section 11 from 56 to 112 records
Requested to grow by 56 records; added 2 blocks of records
2022-05-08T22:33:14.585789+08:00
ARC0 (PID:1354): Archived Log entry 59 added for B-1102494684.T-1.S-77 ID 0x7f7ae6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:14.650210+08:00
PR00 (PID:1438): Media Recovery Waiting for T-1.S-78 (in transit) [krsm.c:6185]
2022-05-08T22:33:14.698805+08:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 78 Reading mem 0
Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_5_k5yt8yjw_.log
2022-05-08T22:33:15.936076+08:00
PDBNEW(6):Full restore complete of datafile 22 /u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_system_k7ho58jz_.dbf. Elapsed time: 0:00:03
PDBNEW(6): checkpoint is 4932003
PDBNEW(6): last deallocation scn is 3860354
PDBNEW(6):Datafile #22 has been copied to the standby.
2022-05-08T22:33:15.948328+08:00
PDBNEW(6):PDB Side Media Recovery started for pdbid(6)
PDBNEW(6):.... (PID:12667): Managed Recovery starting Real Time Apply [krsm.c:15865]
PDBNEW(6):max_pdb is 6
2022-05-08T22:33:16.031099+08:00
PDBNEW(6):Media Recovery Log /u01/app/oracle/DB213H1_STDBY/archivelog/2022_05_08/o1_mf_1_75_k7ho53pg_.arc
2022-05-08T22:33:16.045816+08:00
PDBNEW(6):Media Recovery Log /u01/app/oracle/DB213H1_STDBY/archivelog/2022_05_08/o1_mf_1_76_k7ho57fz_.arc
2022-05-08T22:33:16.076227+08:00
PDBNEW(6):Media Recovery Log /u01/app/oracle/DB213H1_STDBY/archivelog/2022_05_08/o1_mf_1_77_k7ho5bgp_.arc
PDBNEW(6):.... (PID:12667): Media Recovery Waiting for T-1.S-78 (in transit) [krsm.c:6185]
2022-05-08T22:33:16.106119+08:00
PDBNEW(6):Media Recovery of Online Log [Thread=1, Seq=78]
2022-05-08T22:33:16.106586+08:00
PDBNEW(6):Recovery of Online Redo Log: Thread 1 Group 5 Seq 78 Reading mem 0
PDBNEW(6): Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_5_k5yt8yjw_.log
PDBNEW(6):PSR[6]:influxScn=4932081,nxtCkptScn=4932081,minScn=4932004,krdsini=0,krsmrs=9082,nof=0,kcvrlc=1102494684,kcvrls=2601843
PDBNEW(6):The merge request has been submitted by side recovery for pdbid 6
2022-05-08T22:33:16.114934+08:00
PR00 (PID:1438): Background Media Recovery cancelled with status 16037 [krd.c:27039]
PSR: Background Media Recovery will be reactivated in an attempt to take over PDB side recovery sessions
2022-05-08T22:33:16.119431+08:00
Errors in file /u01/app/oracle/diag/rdbms/db213h1_stdby/DB213H1/trace/DB213H1_pr00_1438.trc:
ORA-16037: user requested cancel of managed recovery operation
PR00 (PID:1438): Managed Recovery not using Real Time Apply [krsm.c:15876]
Recovery interrupted!
Recovered data files to a consistent state at change 4932082
Stopping change tracking
2022-05-08T22:33:16.347544+08:00
Errors in file /u01/app/oracle/diag/rdbms/db213h1_stdby/DB213H1/trace/DB213H1_pr00_1438.trc:
ORA-16037: user requested cancel of managed recovery operation
2022-05-08T22:33:17.205906+08:00
PDBNEW(6):.... (PID:12667): Side Recovery Complete [krds.c:1584]
2022-05-08T22:33:17.250805+08:00
ARC3 (PID:1364): Archived Log entry 60 added for B-1102494684.T-1.S-78 ID 0x7fd0e6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:17.251185+08:00
rfs (PID:12549): Opened LNO:4 for DBID:3870017436 B-1102494684.T-1.S-79 [krsr.c:18143]
2022-05-08T22:33:20.411643+08:00
ARC1 (PID:1360): Archived Log entry 61 added for B-1102494684.T-1.S-79 ID 0x7fd1e6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:20.411942+08:00
rfs (PID:12549): Opened LNO:5 for DBID:3870017436 B-1102494684.T-1.S-80 [krsr.c:18143]
2022-05-08T22:33:21.944250+08:00
rfs (PID:12549): Opened LNO:4 for DBID:3870017436 B-1102494684.T-1.S-81 [krsr.c:18143]
2022-05-08T22:33:21.944465+08:00
ARC2 (PID:1362): Archived Log entry 62 added for B-1102494684.T-1.S-80 ID 0x7fbbe6abae9c LAD:1 [krse.c:4933]


I have extracted below the important entries on the Standby that show the behaviour of PDB Recovery Isolation: at the Standby, Oracle handles Recovery of the new PDB separately from the rest of the CDB.



2022-05-08T22:33:02.900342+08:00
Recovery created pluggable database PDBNEW
PDBNEW(6):Tablespace-SYSTEM during PDB create offlined since source is in r/w mode or this is a refresh clone
PDBNEW(6):File #22 added to control file as '/u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_system_k7ho4s6r_.dbf'.
PDBNEW(6):Tablespace-SYSAUX during PDB create offlined since source is in r/w mode or this is a refresh clone
PDBNEW(6):File #23 added to control file as '/u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_sysaux_k7ho4s6s_.dbf'.
PDBNEW(6):Tablespace-UNDOTBS1 during PDB create offlined since source is in r/w mode or this is a refresh clone
PDBNEW(6):File #24 added to control file as '/u02/oradata/DB213H1_STDBY/DE81F2476C7F3F80E0538338A8C0063E/datafile/o1_mf_undotbs1_k7ho4s6t_.dbf'.
PDBNEW(6):Tablespace-TEMP during PDB create offlined since source is in r/w mode or this is a refresh clone

PDBNEW(6):PDB Side Media Recovery started for pdbid(6)
PDBNEW(6):.... (PID:12667): Managed Recovery starting Real Time Apply [krsm.c:15865]
PDBNEW(6):max_pdb is 6

PDBNEW(6):Recovery of Online Redo Log: Thread 1 Group 5 Seq 78 Reading mem 0
PDBNEW(6): Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_5_k5yt8yjw_.log
PDBNEW(6):PSR[6]:influxScn=4932081,nxtCkptScn=4932081,minScn=4932004,krdsini=0,krsmrs=9082,nof=0,kcvrlc=1102494684,kcvrls=2601843
PDBNEW(6):The merge request has been submitted by side recovery for pdbid 6
2022-05-08T22:33:16.114934+08:00
PR00 (PID:1438): Background Media Recovery cancelled with status 16037 [krd.c:27039]
PSR: Background Media Recovery will be reactivated in an attempt to take over PDB side recovery sessions
2022-05-08T22:33:16.119431+08:00
Errors in file /u01/app/oracle/diag/rdbms/db213h1_stdby/DB213H1/trace/DB213H1_pr00_1438.trc:
ORA-16037: user requested cancel of managed recovery operation
PR00 (PID:1438): Managed Recovery not using Real Time Apply [krsm.c:15876]
Recovery interrupted!
Recovered data files to a consistent state at change 4932082
Stopping change tracking
2022-05-08T22:33:16.347544+08:00
Errors in file /u01/app/oracle/diag/rdbms/db213h1_stdby/DB213H1/trace/DB213H1_pr00_1438.trc:
ORA-16037: user requested cancel of managed recovery operation
2022-05-08T22:33:17.205906+08:00
PDBNEW(6):.... (PID:12667): Side Recovery Complete [krds.c:1584]

2022-05-08T22:33:17.250805+08:00
ARC3 (PID:1364): Archived Log entry 60 added for B-1102494684.T-1.S-78 ID 0x7fd0e6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:17.251185+08:00
rfs (PID:12549): Opened LNO:4 for DBID:3870017436 B-1102494684.T-1.S-79 [krsr.c:18143]
2022-05-08T22:33:20.411643+08:00
ARC1 (PID:1360): Archived Log entry 61 added for B-1102494684.T-1.S-79 ID 0x7fd1e6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:20.411942+08:00
rfs (PID:12549): Opened LNO:5 for DBID:3870017436 B-1102494684.T-1.S-80 [krsr.c:18143]
2022-05-08T22:33:21.944250+08:00
rfs (PID:12549): Opened LNO:4 for DBID:3870017436 B-1102494684.T-1.S-81 [krsr.c:18143]
2022-05-08T22:33:21.944465+08:00
ARC2 (PID:1362): Archived Log entry 62 added for B-1102494684.T-1.S-80 ID 0x7fbbe6abae9c LAD:1 [krse.c:4933]
2022-05-08T22:33:37.537823+08:00
all data files of pdbid 6 are brought online.
Started logmerger process
2022-05-08T22:33:37.592170+08:00

IM on ADG: Start of Empty Journal

IM on ADG: End of Empty Journal
PR00 (PID:12697): Managed Recovery starting Real Time Apply [krsm.c:15865]
max_pdb is 6
2022-05-08T22:33:37.736778+08:00
Parallel Media Recovery started with 2 slaves
2022-05-08T22:33:37.858641+08:00
Stopping change tracking
2022-05-08T22:33:37.914220+08:00
PR00 (PID:12697): Media Recovery Log /u01/app/oracle/DB213H1_STDBY/archivelog/2022_05_08/o1_mf_1_78_k7ho5f5y_.arc [krd.c:9408]
2022-05-08T22:33:38.024387+08:00
PR00 (PID:12697): Media Recovery Log /u01/app/oracle/DB213H1_STDBY/archivelog/2022_05_08/o1_mf_1_79_k7ho5jc9_.arc [krd.c:9408]
2022-05-08T22:33:38.548017+08:00
PR00 (PID:12697): Media Recovery Log /u01/app/oracle/DB213H1_STDBY/archivelog/2022_05_08/o1_mf_1_80_k7ho5kvg_.arc [krd.c:9408]
2022-05-08T22:33:38.764339+08:00
PR00 (PID:12697): Media Recovery Waiting for T-1.S-81 (in transit) [krsm.c:6185]
2022-05-08T22:33:38.764667+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 81 Reading mem 0




First, it identifies that the Source PDB is in Read-Write mode.
Then it begins a "Side Media Recovery" for the new PDB PDBNEW(6).
Later, it temporarily halts recovery of the full Standby CDB to merge the  "Side Recovery".
Finally, it resumes "normal" Recovery of the Standby from ArchiveLog Sequence 78 onwards.


Categories: DBA Blogs

PDB Flashback in 21c

Tue, 2022-05-03 02:24

 Oracle 21c allows you to Flashback a Pluggable Database to a past point-in-time / SCN.


Here is a quick demo  (also with alert log messages from the Primary and Standby) :


On the Primary :


SQL> create pluggable database pdbhkc admin user hemant identified by hemant;

Pluggable database created.

SQL> alter pluggable database pdbhkc open;

Pluggable database altered.

SQL>
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
3688309

SQL>
SQL> alter session set container=PDBHKC;

Session altered.

SQL> grant dba to hemant; -- grant is done *after* SCN 3688309

Grant succeeded.

SQL> connect hemant/hemant@pdbhkc
Connected.
SQL>
SQL> create table obj_list as select * from user_objects;

Table created.

SQL> select count(*) from obj_list;

COUNT(*)
----------
0

SQL>
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
3688878

SQL>
SQL> insert into obj_list select * from user_objects;

1 row created.

SQL> commit;

Commit complete.

SQL> insert into obj_list select * from obj_list;

1 row created.

SQL> commit;

Commit complete.

SQL> insert into obj_list select * from obj_list;

2 rows created.

SQL> commit;

Commit complete.

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
3689045

SQL>
SQL> quit
Disconnected from Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0


Now I will attempt a FLASHBACK to SCN 3688309

[oracle@primary admin]$ rman target /

Recovery Manager: Release 21.0.0.0.0 - Production on Tue May 3 14:18:15 2022
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle and/or its affiliates. All rights reserved.

connected to target database: DB213H1 (DBID=3870017436)

RMAN> alter pluggable database pdbhkc close;

Statement processed

RMAN>
RMAN> flashback pluggable database pdbhkc to scn 3688309;

Starting flashback at 03-MAY-22
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=152 device type=DISK


starting media recovery
media recovery complete, elapsed time: 00:00:03

Finished flashback at 03-MAY-22

RMAN>
RMAN> alter pluggable database pdbhkc open resetlogs;

Statement processed

RMAN>
RMAN> quit


Recovery Manager complete.
[oracle@primary admin]$ sqlplus hemant/hemant@pdbhkc

SQL*Plus: Release 21.0.0.0.0 - Production on Tue May 3 14:20:01 2022
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle. All rights reserved.


Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL> select count(*) from user_objects;

COUNT(*)
----------
0

SQL>
SQL> select current_scn from v$database; -- this fails because DBA grant is not yet present
select current_scn from v$database
*
ERROR at line 1:
ORA-00942: table or view does not exist


SQL> connect / as sysdba
Connected.
SQL> alter session set container=PDBHKC;

Session altered.

SQL> grant dba to hemant;

Grant succeeded.

SQL> connect hemant/hemant@pdbhkc;
Connected.
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
3691628

SQL>
SQL> connect / as sysdba
Connected.
SQL> alter system archive log current;

System altered.

SQL>
SQL> col name format a8
SQL> get pdb_incarnations.sql
1 select pi.con_id, p.name, pi.pdb_incarnation#,
2 pi.incarnation_scn, pi.prior_pdb_incarnation#, pi.status
3 from v$pdb_incarnation pi, v$pdbs p
4 where pi.con_id=p.con_id
5* order by 1,2,3,4
SQL> /

    CON_ID NAME     PDB_INCARNATION# INCARNATION_SCN PRIOR_PDB_INCARNATION# STATUS
---------- -------- ---------------- --------------- ---------------------- -------
         2 PDB$SEED                0               1                        PARENT
         2 PDB$SEED                0         2601843                      0 CURRENT
         3 PDB1                    0               1                        PARENT
         3 PDB1                    0         2601843                      0 CURRENT
         4 PDBHKC                  0         2601843                        PARENT
         4 PDBHKC                  1         3688310                      0 CURRENT

6 rows selected.

SQL>


Note how the "GRANT DBA" issued after the Flashback SCN was lost and had to be regranted.  After the FLASHBACK, the custom table "obj_list" no longer exists.


Primary Alert Log Entries :

Primary Instance alert log :
==============================
2022-05-03T14:18:28.808263+08:00
alter pluggable database pdbhkc close
2022-05-03T14:18:28.815690+08:00
PDBHKC(4):Pluggable database PDBHKC closing
PDBHKC(4):JIT: pid 7263 requesting stop
PDBHKC(4):Closing sequence subsystem (5915010500).
PDBHKC(4):Buffer Cache flush started: 4
PDBHKC(4):Buffer Cache flush finished: 4
Pluggable database PDBHKC closed
Completed: alter pluggable database pdbhkc close
2022-05-03T14:19:12.776065+08:00
RMAN flashback PDB to scn 3688309
Flashback Restore Start
Restore Flashback Pluggable Database PDBHKC (4) until change 3686342
Flashback Restore Complete
Flashback Media Recovery Start
2022-05-03T14:19:13.675674+08:00
Serial Media Recovery started
max_pdb is 4
2022-05-03T14:19:13.769641+08:00
Recovery of Online Redo Log: Thread 1 Group 1 Seq 28 Reading mem 0
Mem# 0: /u02/oradata/DB213H1/redo01.log
2022-05-03T14:19:13.970865+08:00
Recovery of Online Redo Log: Thread 1 Group 2 Seq 29 Reading mem 0
Mem# 0: /u02/oradata/DB213H1/redo02.log
2022-05-03T14:19:14.621235+08:00
Incomplete Recovery applied until change 3688310 time 05/03/2022 14:12:57
Flashback Media Recovery Complete
Flashback Pluggable Database PDBHKC (4) recovered until change 3688310
Completed: RMAN flashback PDB to scn 3688309
2022-05-03T14:19:42.382165+08:00
alter pluggable database pdbhkc open resetlogs
2022-05-03T14:19:42.461760+08:00
Online datafile 15
Online datafile 14
Online datafile 13
PDBHKC(4):Pluggable database PDBHKC pseudo opening
PDBHKC(4):SUPLOG: Initialize PDB SUPLOG SGA, old value 0x0, new value 0x18
PDBHKC(4):Autotune of undo retention is turned on.
PDBHKC(4):Endian type of dictionary set to little
PDBHKC(4):Undo initialization recovery: Parallel FPTR failed: start:5988531 end:5988533 diff:2 ms (0.0 seconds)
PDBHKC(4):Undo initialization recovery: err:0 start: 5988531 end: 5988547 diff: 16 ms (0.0 seconds)
PDBHKC(4):[7263] Successfully onlined Undo Tablespace 2.
PDBHKC(4):Undo initialization online undo segments: err:0 start: 5988547 end: 5988555 diff: 8 ms (0.0 seconds)
PDBHKC(4):Undo initialization finished serial:0 start:5988531 end:5988556 diff:25 ms (0.0 seconds)
PDBHKC(4):Database Characterset for PDBHKC is AL32UTF8
PDBHKC(4):Pluggable database PDBHKC pseudo closing
PDBHKC(4):JIT: pid 7263 requesting stop
PDBHKC(4):Closing sequence subsystem (5988605782).
PDBHKC(4):Buffer Cache flush started: 4
PDBHKC(4):Buffer Cache flush finished: 4
PDBHKC(4):Pluggable database PDBHKC opening in read write
PDBHKC(4):SUPLOG: Initialize PDB SUPLOG SGA, old value 0x0, new value 0x18
PDBHKC(4):Autotune of undo retention is turned on.
PDBHKC(4):Endian type of dictionary set to little
PDBHKC(4):Undo initialization recovery: Parallel FPTR complete: start:5989023 end:5989027 diff:4 ms (0.0 seconds)
PDBHKC(4):Undo initialization recovery: err:0 start: 5989023 end: 5989027 diff: 4 ms (0.0 seconds)
PDBHKC(4):[7263] Successfully onlined Undo Tablespace 2.
PDBHKC(4):Undo initialization online undo segments: err:0 start: 5989027 end: 5989188 diff: 161 ms (0.2 seconds)
PDBHKC(4):Undo initialization finished serial:0 start:5989023 end:5989194 diff:171 ms (0.2 seconds)
PDBHKC(4):Pluggable database PDBHKC dictionary check beginning
2022-05-03T14:19:43.379867+08:00
PDBHKC(4):Pluggable Database PDBHKC Dictionary check complete
PDBHKC(4):Database Characterset for PDBHKC is AL32UTF8
2022-05-03T14:19:43.935738+08:00
PDBHKC(4):SUPLOG: Set PDB SUPLOG SGA at PDB OPEN, old 0x18, new 0x0 (no suplog)
2022-05-03T14:19:44.385306+08:00
PDBHKC(4):Opening pdb with no Resource Manager plan active
2022-05-03T14:19:44.967205+08:00
Control autobackup written to DISK device

handle '/u01/app/oracle/DB213H1/autobackup/2022_05_03/o1_mf_s_1103725184_k71ld0sw_.bkp'

Pluggable database PDBHKC closed
Completed: alter pluggable database pdbhkc open resetlogs
2022-05-03T14:21:30.693825+08:00
2022-05-03T14:21:30.693825+08:00
PDBHKC(4):TABLE AUDSYS.AUD$UNIFIED: ADDED INTERVAL PARTITION SYS_P328 (2864) VALUES LESS THAN (TIMESTAMP' 2022-05-04 00:00:00')
2022-05-03T14:23:50.678383+08:00
ALTER SYSTEM ARCHIVE LOG
2022-05-03T14:23:50.737171+08:00
Thread 1 advanced to log sequence 30 (LGWR switch), current SCN: 3691895
Current log# 3 seq# 30 mem# 0: /u02/oradata/DB213H1/redo03.log
2022-05-03T14:23:50.840999+08:00
Deleted Oracle managed file /u01/app/oracle/DB213H1/archivelog/2022_04_20/o1_mf_1_9_k5yopcl1_.arc
2022-05-03T14:23:50.843496+08:00
NET (PID:7571): Archived Log entry 29 added for B-1102494684.T-1.S-29 ID 0x7f6ee6abae9c LAD:1 [krse.c:4933]
2022-05-03T14:23:50.879882+08:00
Deleted Oracle managed file /u01/app/oracle/DB213H1/archivelog/2022_04_20/o1_mf_1_10_k5yos1cc_.arc


Note the messages 
"RMAN flashback PDB to scn 3688309" 
"Restore Flashback Pluggable Database PDBHKC (4) until change 3686342" 
"Incomplete Recovery applied until change 3688310" 
"Completed: RMAN flashback PDB to scn 3688309"



Standby Alert Log Entries :

Standby Instance alert log :
============================
2022-05-03T14:19:42.855709+08:00
PDBHKC(4):PDB Side Media Recovery started for pdbid(4)
PDBHKC(4):.... (PID:6103): Managed Recovery starting Real Time Apply [krsm.c:15865]
PDBHKC(4):max_pdb is 4
PDBHKC(4):Datafile 15 (ckpscn 3690771) is orphaned on PDB incarnation#=1
PDBHKC(4):Datafile 14 (ckpscn 3690771) is orphaned on PDB incarnation#=1
PDBHKC(4):Datafile 13 (ckpscn 3690771) is orphaned on PDB incarnation#=1
PDBHKC(4):PSR[4]: the incarnation of data files don't match, so we are trying to fetch data files again from the primary database.
2022-05-03T14:19:42.962411+08:00
PDBHKC(4):Media Recovery failed with error 65138
PDBHKC(4):ORA-65138 is signaled while preparing the media recovery.
2022-05-03T14:19:43.506242+08:00
PDBHKC(4):Applying tablespace dictionary check redo for tablespace #0
PDBHKC(4):Applying tablespace dictionary check redo for tablespace #1
PDBHKC(4):Applying tablespace dictionary check redo for tablespace #2
PDBHKC(4):Applying tablespace dictionary check redo for tablespace #3
PDBHKC(4):Applying tablespace dictionary check redo for tablespace #4
PDBHKC(4):Applying datafile dictionary check redo for datafile #13
PDBHKC(4):Applying datafile dictionary check redo for datafile #14
PDBHKC(4):Applying datafile dictionary check redo for datafile #15
2022-05-03T14:19:45.892494+08:00
PDBHKC(4):Full restore complete of datafile 15 /u02/oradata/DB213H1_STDBY/DE155F0301020A30E0538338A8C07AFA/datafile/o1_mf_undotbs1_k71kvzq4_.dbf. Elapsed time: 0:00:00
PDBHKC(4): checkpoint is 3691257
PDBHKC(4): last deallocation scn is 3
PDBHKC(4):Datafile #15 has been copied to the standby.
2022-05-03T14:19:49.845284+08:00
PDBHKC(4):Full restore complete of datafile 14 /u02/oradata/DB213H1_STDBY/DE155F0301020A30E0538338A8C07AFA/datafile/o1_mf_sysaux_k71kvzq3_.dbf. Elapsed time: 0:00:03
PDBHKC(4): checkpoint is 3691259
PDBHKC(4): last deallocation scn is 1071312
PDBHKC(4):Datafile #14 has been copied to the standby.
2022-05-03T14:19:53.340847+08:00
PDBHKC(4):Full restore complete of datafile 13 /u02/oradata/DB213H1_STDBY/DE155F0301020A30E0538338A8C07AFA/datafile/o1_mf_system_k71kvzpm_.dbf. Elapsed time: 0:00:03
PDBHKC(4): checkpoint is 3691268
PDBHKC(4): last deallocation scn is 2645444
PDBHKC(4):Datafile #13 has been copied to the standby.
2022-05-03T14:19:53.354185+08:00
PDBHKC(4):PDB Side Media Recovery started for pdbid(4)
PDBHKC(4):.... (PID:6103): Managed Recovery starting Real Time Apply [krsm.c:15865]
PDBHKC(4):max_pdb is 4
PDBHKC(4):.... (PID:6103): Media Recovery Waiting for T-1.S-29 (in transit) [krsm.c:6185]
2022-05-03T14:19:53.419125+08:00
PDBHKC(4):Media Recovery of Online Log [Thread=1, Seq=29]
2022-05-03T14:19:53.419332+08:00
PDBHKC(4):Recovery of Online Redo Log: Thread 1 Group 5 Seq 29 Reading mem 0
PDBHKC(4): Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_5_k5yt8yjw_.log
PDBHKC(4):PSR[4]:influxScn=3691275,nxtCkptScn=3691275,minScn=3691268,krdsini=0,krsmrs=9082,nof=0,kcvrlc=1102494684,kcvrls=2601843
PDBHKC(4):The merge request has been submitted by side recovery for pdbid 4
2022-05-03T14:19:53.462873+08:00
PR00 (PID:1896): Background Media Recovery cancelled with status 16037 [krd.c:27039]
PSR: Background Media Recovery will be reactivated in an attempt to take over PDB side recovery sessions
2022-05-03T14:19:53.466115+08:00
Errors in file /u01/app/oracle/diag/rdbms/db213h1_stdby/DB213H1/trace/DB213H1_pr00_1896.trc:
ORA-16037: user requested cancel of managed recovery operation
PR00 (PID:1896): Managed Recovery not using Real Time Apply [krsm.c:15876]
Recovery interrupted!
Recovered data files to a consistent state at change 3691276
Stopping change tracking
2022-05-03T14:19:53.679424+08:00
Errors in file /u01/app/oracle/diag/rdbms/db213h1_stdby/DB213H1/trace/DB213H1_pr00_1896.trc:
ORA-16037: user requested cancel of managed recovery operation
2022-05-03T14:19:54.506794+08:00
PDBHKC(4):.... (PID:6103): Side Recovery Complete [krds.c:1584]
2022-05-03T14:20:14.680123+08:00
all data files of pdbid 4 are brought online.
Started logmerger process
2022-05-03T14:20:14.739054+08:00

IM on ADG: Start of Empty Journal

IM on ADG: End of Empty Journal
PR00 (PID:6144): Managed Recovery starting Real Time Apply [krsm.c:15865]
max_pdb is 4
2022-05-03T14:20:14.902709+08:00
Parallel Media Recovery started with 4 slaves
2022-05-03T14:20:15.022317+08:00
Stopping change tracking
PR00 (PID:6144): Media Recovery Waiting for T-1.S-29 (in transit) [krsm.c:6185]
2022-05-03T14:20:15.207572+08:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 29 Reading mem 0
Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_5_k5yt8yjw_.log
2022-05-03T14:23:50.977661+08:00
rfs (PID:2029): Opened LNO:4 for DBID:3870017436 B-1102494684.T-1.S-30 [krsr.c:18143]
2022-05-03T14:23:50.978472+08:00
ARC1 (PID:1786): Archived Log entry 11 added for B-1102494684.T-1.S-29 ID 0x7f21e6abae9c LAD:1 [krse.c:4933]
2022-05-03T14:23:51.046002+08:00
PR00 (PID:6144): Media Recovery Waiting for T-1.S-30 (in transit) [krsm.c:6185]
2022-05-03T14:23:51.046688+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 30 Reading mem 0
Mem# 0: /u02/oradata/DB213H1_STDBY/onlinelog/o1_mf_4_k5yt8y5f_.log


Note the messages 
"PDBHKC(4):PSR[4]: the incarnation of data files don't match, so we are trying to fetch data files again from the primary database."
"PDBHKC(4):Media Recovery failed with error 65138"
"PDBHKC(4):ORA-65138 is signaled while preparing the media recovery."
"PDBHKC(4):Full restore complete of datafile ..<fileid>" for each datafile in the PDB
"PDBHKC(4):Datafile #<fileid> has been copied to the standby." for each datafile in the PDB
"PDBHKC(4):PDB Side Media Recovery started for pdbid(4)"
"PDBHKC(4):.... (PID:6103): Managed Recovery starting Real Time Apply [krsm.c:15865]"
"PDBHKC(4):PSR[4]:influxScn=3691275,nxtCkptScn=3691275,minScn=3691268,krdsini=0,krsmrs=9082,nof=0,kcvrlc=1102494684,kcvrls=2601843"
"PDBHKC(4):The merge request has been submitted by side recovery for pdbid 4"
"PR00 (PID:1896): Background Media Recovery cancelled with status 16037 [krd.c:27039]"
"PSR: Background Media Recovery will be reactivated in an attempt to take over PDB side recovery sessions"
"ORA-16037: user requested cancel of managed recovery operation"
"PR00 (PID:1896): Managed Recovery not using Real Time Apply [krsm.c:15876]"
"PDBHKC(4):.... (PID:6103): Side Recovery Complete [krds.c:1584]"
"all data files of pdbid 4 are brought online."
" Started logmerger process" 
"PR00 (PID:6144): Managed Recovery starting Real Time Apply [krsm.c:15865]"

These messages relate to the 21c feature "PDB Recovery Isolation", whereby Oracle Data Guard detects that one PDB has to be recovered separately -- i.e. using a separate background session (this seems to be called "side recovery" in the messages).

Here are the ORA error descriptions :


65138, 00000, "Data file %s of pluggable database %s belongs to an orphan PDB incarnation."
// *Cause: Either the specified data file was restored from a backup that was
// taken during a period of time that was discarded by
// a RESETLOGS operation, or Oracle could not identify which pluggable
// database incarnation the file belongs to. One would also encounter
// this error if a pluggable database (PDB) flashback or restore was
// performed to an orphan PDB incarnation followed by complete
// recovery of the PDB. The alert log contains more information.
// *Action: Restore a backup of this file that belonged to either the current
// or a prior incarnation of the pluggable database. If you are using
// RMAN to restore, RMAN will automatically select a correct backup.
// If you are using Flashback, then perform flashback to a point
// prior to the start of the orphan incarnation.


16037, 00000, "user requested cancel of managed recovery operation"
// *Cause: The managed standby database recovery operation has been
// canceled per user request.
// *Action: No action is required.
//

ORA-16037 is a standard message here, raised because Oracle stops media recovery for a while.

However, this behaviour indicates that Oracle actually attempts to copy the entire PDB from the Primary to the Standby.  It would be interesting to see how this works when a backup of the PDB is available at the Standby -- particularly if the PDB is very large and copying it over the network would take time.

Categories: DBA Blogs

A Fresh Look at AWR Reports --- 4 : SQL*Net Waits

Fri, 2022-04-15 06:18

 After the first three posts on CPU Usage, Redo Size and I/O Profile, I present my interpretation of SQL*Net waits.

From the same AWR Report as the first three reviews, I have extracted the specific SQL*Net waits from the Foreground Wait Events table:

SQL*Net Waits from the Foreground Waits Table

"SQL*Net more data from client" waits occur when the server process hasn't received the required data (e.g. an INSERT statement data) within 1 network packet and has to wait to receive more more.  "SQL*Net more data to client" is the reverse -- the client has requested, say a large row or dataset that doesn't fit into 1 network packet.  The network packet size is determined essentially by two parameters "Session Data Unit" (aka "SDU") and "Transport Data Unit" (aka "TDU").  When sending rows out of the server, it can automatically also do compression also. ARRAYSIZE (or Fetch Size, depending on your client code, in java) can also achieve some compression. TDU is obsolete in all recent versions but SDU can still be tuned.  For guidance on tuning SDU,  section 14.2 of the Tuning Guide.

"SQL*Net message to client" doesn't really show time taken to send the message (e.g. sql select statement result or execution of commit) to the client.  It is actually the time taken by the Server Process to put the results on to the Network Layer on the DB Server.  Beyond that, the Oracle Server Process does *not* how long it takes for the data to reach the client (whether a user desktop or an application server / middle-tier).  The actual time would depend on network latency and bandwidth.  So, you can interpret the number of occurrences of the Wait Event but NOT the time.

"SQL*Net break/reset" would be presented when an error occurs or an unhandled exception is raised. A high number of such waits indicate repeated occurrences of errors in SQL calls.

"SQL*Net message from client" may or may not be a problem (and that is why Oracle does not include it in % DB time).  There are different manifestations of this event :

1. The database server has sent a row or a set of rows (depending on the arraysize or fetchsize) and is waiting for the client / application server to acknowledge and request the next set of rows.  The client could be a user who is running, say, SQL*Developer, has seen the first 50 rows of a 1000-row result set, but has walked away from his desk to get some coffee and has not yet requested the next set of 50 rows.  (Yes, it is true that SQL*Developer and most clients do NOT fetch and present all the rows to the user immediately; this is done in batches -- the batch size is configurable in most client or java programs).  The user may come back 5 minutes or 30 minutes later and ask for the next 50 rows.  Till then, the Oracle Server Process is "waiting" on this event.

2. It could also be an application server / middle tier that is formatting / sorting the rows or is writing the results to disk or is joining this set of rows with another data set (even from another source -- e.g. an ODBC connection, an Excel file etc) and is taking some milliseconds to do so.  Till then, the Oracle Server Process may be "waiting" on this event.

3. The database server has sent the results of a query (or the success of execution of a DML statement) to the client / application server and is waiting for the next SQL call.  The next SQL call could be a COMMIT (or ROLLBACK) or any SELECT / INSERT / UPDATE / DELETE statement.  It may come 1 second later, or it may come 30 minutes later, or it may come 10 hours later.  As long as the client / application server has maintained a session on the Database, the Oracle Server Process will be "waiting" on this event.



Other than the above set of "Wait" Events relating to a client / application server, exactly the same Events may occur relating to a Database Link : "SQL*Net more data from/to dblink", "SQL*Net message from/to dblink" and "SQL*Net break/reset to dblink".  They don't appear in my AWR report because my example load does not include Database Links to / from other databases, but you need to interpret them as well.  For example, if you have Database Links for periodic refreshes of Materialized Views between Databases, you may well see such "SQL*Net message" and "SQL*Net more data" waits relating to "dblink"s.
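A quick way to check whether such dblink-related waits have been occurring at all, outside of an AWR report, is to query the cumulative wait-event view (a sketch; V$SYSTEM_EVENT and these event names are standard, but the figures will obviously be specific to your instance) :

```sql
-- system-wide totals since instance startup
select event, total_waits, round(time_waited_micro/1e6, 1) as seconds_waited
from   v$system_event
where  event like 'SQL*Net%dblink%'
order  by time_waited_micro desc;
```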



A Fresh Look at AWR Reports -- 3 : I/O Profile

Sat, 2022-04-02 03:56

 After viewing CPU Usage and Redo Size statistics, here's a look at the I/O Profile.

Here's the header which indicates that this AWR report is for duration of 2hours on 19c on Windows :

AWR Header


Instead of the Load Profile, this time I will skip directly to the I/O Profile section :

I/O Profile

In the above table, "Database" seems to include both "Datafile" and "Tempfile".

Also, for some correlation, the IOStat section :

IOStat by Filetype Summary

As I've already mentioned in my previous 2 posts, the load that I generated was more I/O Intensive than CPU Intensive.
Here we can see a break-down of the I/O Profile.  I am more interested in the *Writes*.

From the previous post, we've already seen that 20GB of writes were for Duplexed Online Redo Logs and 10GB for ArchiveLogs.
But what about the rate per second ?

Online Redo Log file writes were 2.7MB/second.

Datafile + Tempfile writes were 3.3MB/second.  Note that Tempfile I/O was higher than Datafile I/O.

Although Datafile write requests per second were higher, Datafile writes by DBWR may well be in smaller batches (i.e. fewer blocks at each write) -- more I/O calls (85.59 per second), fewer total bytes written (1.388M per second).  This is in the nature of DBWR behaviour, as DBWR periodically "collects" dirty buffers to flush to disk.

Writes to the Tempfile for Sort / Group By operations (10.59 calls per second for 1.89M bytes per second) can be more blocks per call.  This is because such operations can consist of a large number of blocks in each session.  These writes are from the PGA, not the Buffer Cache in the SGA.
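The "smaller batches" point can be sanity-checked from the per-second figures in the I/O Profile table above.  This is only a rough sketch -- it assumes the default 8KB db_block_size, which is not stated in the report excerpt :

```python
BLOCK_SIZE = 8192  # assumed db_block_size of 8KB

def avg_blocks_per_write(bytes_per_sec, writes_per_sec):
    """Average number of blocks written per I/O call."""
    return bytes_per_sec / writes_per_sec / BLOCK_SIZE

# Datafile : 85.59 write calls/sec at 1.388M bytes/sec (DBWR flushes)
datafile_blocks = avg_blocks_per_write(1.388e6, 85.59)
# Tempfile : 10.59 write calls/sec at 1.89M bytes/sec (PGA sort spills)
tempfile_blocks = avg_blocks_per_write(1.89e6, 10.59)

print(round(datafile_blocks, 1))   # about 2 blocks per call
print(round(tempfile_blocks, 1))   # about 22 blocks per call
```

So each Tempfile write call moves roughly ten times as many blocks as each Datafile write call, consistent with large PGA sort-run spills versus DBWR's scattered dirty-buffer flushes.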

So, what could have been happening here, that Tempfile writes were so high, relative to Datafile writes ?  Sort Operations, Group By Queries, Create / Rebuild Index operations.

Note : Create / Rebuild Index operations are unlikely to be all visible in the SQL Statements section of the AWR report because, as DDL statements, they get invalidated and aged out of the Shared Pool quickly, before each AWR Snapshot run.  Yet, their I/O profile is visible in the I/O sections of the AWR Report.
Create/Rebuild Index operations do not generate Redo for their Tempfile writes but *do* generate Redo for their Datafile writes if they are not executed as NoLogging operations.










A Fresh Look at AWR Reports -- 2 : Redo Size

Wed, 2022-03-23 09:39

 In my previous blog post, I showed how to interpret CPU Usage from an AWR report.

Here, I take the same 2-hour AWR report and help interpret Redo Size -- i.e. the volume of Redo Generated and Written to Disk.

Here's the header which indicates that this AWR report is for duration of 2hours on 19c on Windows :

AWR Header


These are the first few lines of the Load Profile :

First few lines of Load Profile

This indicates that the Redo Size (total Redo Generated) is just about 10GB  (multiply the Redo Size bytes per second by the total elapsed time of 120.71 minutes, converted to seconds)

I can confirm that from the Key Instance Activity Stats section :

Instance Activity Statistics

So, did my database write about 10GB (about 9.8GB actually, if you divide by 1024) of Redo to disk ?   Is that true ?

Let me go to the IOStat(s) section.  (Please ignore the "average service time" in this table because I've run this test on a PC with a standard Hard-Disk -- not an Enterprise SAN !)

IO Statistics by FileType

What do I see here ?  The total volume written to Log Files (i.e. Online Redo Logs) is 19GB !

Why is that so ?

Because, in my database, every Redo Log Group is duplexed -- it has 2 members.  So, Redo of approximately 9.8GB actually translates to 19GB (or close to 20GB) of Physical Writes to Online Redo Log files.

Note that there are 10GB of Reads from the Online Redo Log files.  That is the Archiver Process(es) reading the Online Redo Logs in order to write them out as Archive Logs -- which you can see as 10GB of Writes to Archive Log !

Thus, the initial "Redo Size" of 10GB is actually 30GB of Physical Writes to Disk (20GB for Duplexed Online Redo Logs  PLUS 10GB of Archive Logs).

If I had defined an additional ArchiveLogDestination on my server (as I have done in previous implementations where dedicated disks or NFS mount points were assigned), there would have been another 10GB of Physical Writes.

For every Standby Database (note how in previous blog posts here and here , I have demonstrated 3 separate Standby Databases for the Primary Database), 10GB of Redo has to be transferred over the Network and then written to the Standby Redo Logs at each Standby Site/Server.

So, my 10GB of Redo could effectively translate to anywhere from 30GB to 70GB of Physical Write of Redo  Plus 10GB to 30GB of transfer over the network !!
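The multiplication described above can be condensed into a simple calculation (the figures and destination counts here are the illustrative ones from this post; archival that happens at each Standby site would add further copies, which I do not count) :

```python
def redo_write_volume_gb(redo_gb, log_members=2, archive_dests=1, standbys=0):
    """Total physical writes generated by one batch of redo :
    each online log group member gets a full copy, each archive
    destination gets a full copy, and each standby database writes
    the redo into its own Standby Redo Logs."""
    return redo_gb * (log_members + archive_dests + standbys)

# 10GB of redo, duplexed logs, one local archive destination :
print(redo_write_volume_gb(10))                               # 30 (GB)
# add a second archive destination and three standby databases :
print(redo_write_volume_gb(10, archive_dests=2, standbys=3))  # 70 (GB)
```

The same 10GB "Redo Size" in the Load Profile thus fans out to 30GB to 70GB of physical writes, matching the range above.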

When you look at the SAN or underlying storage, you must consider how all those writes get through.  Does the SAN have enough Write-Through Cache ?  Are those Redo Logs and the one or more ArchiveLogDestination(s) on the Primary Server  actually on separate Disks  -- you might have separate FileSystem MountPoints or ASM DiskGroups but how do they translate down to the Disks in the SAN ?



A Fresh Look at AWR Reports -- 1 : CPU Usage

Sun, 2022-03-20 09:57

 I have covered AWR Reports (from 10g to 12c, including MultiTenant) in the past.  You could search for "AWR" in the "Search this Blog" panel on the right, or by specifying :
site:hemantoracledba.blogspot.com AWR
in the Google Search window

In this post I present a comparison between OS and Oracle CPU Statistics.

AWR Header

So, I have an AWR Report for 19c on Windows for a duration of 120 minutes with 12 CPUs.  That is a total of 86,400 available CPU seconds (120 minutes X 12 CPUs X 60 seconds per minute)



Load Profile


The Load Profile shows 0.4 Oracle CPU seconds per elapsed second.  This means that, on average, I am consuming less than half of 1 CPU.


Listing of Waits and Wait Classes

Oracle Database CPU time is 13.5% of total Oracle Time.  This is 1,918 seconds.
Note very carefully that this does NOT mean that the Oracle Database Processes account for 13.5% of Server CPU Usage.



O.S. and Oracle Instance CPU
This shows that the Server reports 24% CPU Usage, but Oracle used only 2.4% of the total CPU available (approximately 2.4% of 86,400 seconds is 2,073 seconds, close to Oracle's DB CPU figure)
This also translates to about 9% of the total actual CPU "busyness" on the Server.

So, if Oracle consumed only 9% of the actual CPU Usage on the server, I would need to look for other processes on the same server that would have been consuming CPU.  

This could be other Oracle Database Instances (my report is for database instance "CDB1" -- was there / were there one or more other Database Instances active on the same server ?).  This could also be non-Oracle programs, Application Server software,  other system / monitoring tools running (I've seen monitoring tools alone consuming 20% of the CPU on a server !).

If this server is hosting only this 1 Database Instance (CDB1), it is excessively provisioned (12 CPUs) for the actual load.  Of course, the load that I have generated here is I/O intensive, but it means that I could have actually reduced the number of CPUs to, say, 4 CPUs and paid much less in Hardware, Oracle Licensing and Oracle Support costs.  (4 CPUs would have allowed for the OS and the other non-Oracle processes that seem to be accounting for 24% user-time of 12 CPUs).
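The arithmetic in this post can be sketched in a few lines.  Note that the OS statistics table rounds Oracle's usage to 2.4% (about 2,073 seconds); I use the 1,918-second "DB CPU" figure from the Wait Classes table, which is why the percentages come out slightly lower :

```python
elapsed_minutes = 120
cpus = 12
available_cpu_seconds = elapsed_minutes * 60 * cpus   # 86,400

host_busy_pct = 24.0    # server-wide CPU usage from the OS statistics
db_cpu_seconds = 1918   # Oracle's "DB CPU" time for this instance

host_busy_seconds = available_cpu_seconds * host_busy_pct / 100
oracle_pct_of_available = 100 * db_cpu_seconds / available_cpu_seconds
oracle_pct_of_busy = 100 * db_cpu_seconds / host_busy_seconds

print(available_cpu_seconds)              # 86400
print(round(oracle_pct_of_available, 1))  # share of all available CPU
print(round(oracle_pct_of_busy, 1))       # share of actual host CPU usage
```

Roughly 2.2% of available CPU and 9% of the CPU actually consumed on the server -- the remaining ~90% of the busy time belongs to something other than this instance.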





SQL Execution Statistics -- 4 : Adaptive Plans

Sat, 2022-03-12 08:45

In the previous 3 blog posts, I have shown how you can use the GATHER_PLAN_STATISTICS Hint or the STATISTICS_LEVEL='ALL' to get sql execution statistics.

This works with Adaptive Plans as well.



SQL> select /*+ gather_plan_statistics */ count(*)
2 from tables_list t
3 where (owner, table_name)
4 not in
5 (
6 select owner, table_name
7 from indexes_list i
8 )
9 /

COUNT(*)
----------
593

SQL> select * from table(dbms_xplan.display_cursor('',format=>'ALLSTATS LAST +COST +ADAPTIVE'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID g6q8m94krdvz7, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from tables_list t where
(owner, table_name) not in ( select owner, table_name from
indexes_list i )

Plan hash value: 2460844421

---------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 11 (100)| 1 |00:00:00.01 | 37 | | | |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.01 | 37 | | | |
| * 2 | HASH JOIN ANTI | | 1 | 22 | 11 (0)| 593 |00:00:00.01 | 37 | 1335K| 1335K| 1376K (0)|
|- 3 | NESTED LOOPS ANTI | | 1 | 22 | 11 (0)| 2202 |00:00:00.01 | 17 | | | |
|- 4 | STATISTICS COLLECTOR | | 1 | | | 2202 |00:00:00.01 | 17 | | | |
| 5 | INDEX FAST FULL SCAN| TABLES_LIST_NDX | 1 | 2202 | 5 (0)| 2202 |00:00:00.01 | 17 | | | |
|- * 6 | INDEX RANGE SCAN | INDEXES_LIST_NDX_1 | 0 | 2918 | 6 (0)| 0 |00:00:00.01 | 0 | | | |
| 7 | INDEX FAST FULL SCAN | INDEXES_LIST_NDX_1 | 1 | 2918 | 6 (0)| 2918 |00:00:00.01 | 20 | | | |
---------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("OWNER"="OWNER" AND "TABLE_NAME"="TABLE_NAME")
6 - access("OWNER"="OWNER" AND "TABLE_NAME"="TABLE_NAME")

Note
-----
- this is an adaptive plan (rows marked '-' are inactive)


31 rows selected.

SQL>
SQL> alter session set statistics_level='ALL';

Session altered.

SQL> select count(*)
2 from tables_list t
3 where (owner, table_name)
4 not in
5 (
6 select owner, table_name
7 from indexes_list i
8 )
9 /

COUNT(*)
----------
593

SQL> select * from table(dbms_xplan.display_cursor('',format=>'ALLSTATS LAST +COST +ADAPTIVE'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID c1fdcdxhvjrat, child number 0
-------------------------------------
select count(*) from tables_list t where (owner, table_name) not in (
select owner, table_name from indexes_list i )

Plan hash value: 2460844421

---------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 11 (100)| 1 |00:00:00.01 | 37 | | | |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.01 | 37 | | | |
| * 2 | HASH JOIN ANTI | | 1 | 22 | 11 (0)| 593 |00:00:00.01 | 37 | 1335K| 1335K| 1381K (0)|
|- 3 | NESTED LOOPS ANTI | | 1 | 22 | 11 (0)| 2202 |00:00:00.01 | 17 | | | |
|- 4 | STATISTICS COLLECTOR | | 1 | | | 2202 |00:00:00.01 | 17 | | | |
| 5 | INDEX FAST FULL SCAN| TABLES_LIST_NDX | 1 | 2202 | 5 (0)| 2202 |00:00:00.01 | 17 | | | |
|- * 6 | INDEX RANGE SCAN | INDEXES_LIST_NDX_1 | 0 | 2918 | 6 (0)| 0 |00:00:00.01 | 0 | | | |
| 7 | INDEX FAST FULL SCAN | INDEXES_LIST_NDX_1 | 1 | 2918 | 6 (0)| 2918 |00:00:00.01 | 20 | | | |
---------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("OWNER"="OWNER" AND "TABLE_NAME"="TABLE_NAME")
6 - access("OWNER"="OWNER" AND "TABLE_NAME"="TABLE_NAME")

Note
-----
- this is an adaptive plan (rows marked '-' are inactive)


30 rows selected.

SQL>


Note : The two SQL statements, although being the same, have different SQL_IDs because of the presence / absence of the GATHER_PLAN_STATISTICS Hint.


SQL Execution Statistics -- 3 : Using STATISTICS_LEVEL

Sun, 2022-03-06 09:11

 In my previous posts here and here, I have shown how to use the GATHER_PLAN_STATISTICS Hint to collect additional information.  But what if you can't modify the SELECT query statement ?

You could set STATISTICS_LEVEL='ALL' at the session level, run the query, and then reset STATISTICS_LEVEL to its default value 'TYPICAL'



SQL> alter session set tracefile_identifier='H2';

Session altered.

SQL> alter session set statistics_level='ALL';

Session altered.

SQL> exec DBMS_SESSION.SET_SQL_TRACE(sql_trace=>TRUE); -- just to get the SQL_ID in the trace file

PL/SQL procedure successfully completed.

SQL>
SQL> select
2 d.department_id, d.department_name, e.first_name, e.last_name
3 from hr.departments d join hr.employees e
4 on d.department_id=e.department_id
5 order by 1,2,4,3
6 /
....
....
....
106 rows selected.

SQL>
SQL> exec DBMS_SESSION.SET_SQL_TRACE(sql_trace=>FALSE);

PL/SQL procedure successfully completed.

SQL>
SQL> select * from table(dbms_xplan.display_cursor('4aa2k5b98ybfr',format=>'ALLSTATS LAST')); -- SQL_ID identified from the trace file

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 4aa2k5b98ybfr, child number 0
-------------------------------------
select d.department_id, d.department_name, e.first_name, e.last_name
from hr.departments d join hr.employees e on
d.department_id=e.department_id order by 1,2,4,3

Plan hash value: 2209899241

-------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 106 |00:00:00.10 | 10 | 14 | | | |
| 1 | SORT ORDER BY | | 1 | 106 | 106 |00:00:00.10 | 10 | 14 | 11264 | 11264 |10240 (0)|
| 2 | MERGE JOIN | | 1 | 106 | 106 |00:00:00.10 | 10 | 14 | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| DEPARTMENTS | 1 | 27 | 12 |00:00:00.03 | 2 | 2 | | | |
| 4 | INDEX FULL SCAN | DEPT_ID_PK | 1 | 27 | 12 |00:00:00.03 | 1 | 1 | | | |
|* 5 | SORT JOIN | | 12 | 107 | 106 |00:00:00.06 | 8 | 12 | 9216 | 9216 | 8192 (0)|
| 6 | VIEW | index$_join$_002 | 1 | 107 | 106 |00:00:00.06 | 8 | 12 | | | |
|* 7 | HASH JOIN | | 1 | | 106 |00:00:00.06 | 8 | 12 | 1610K| 1610K| 1658K (0)|
| 8 | INDEX FAST FULL SCAN | EMP_DEPARTMENT_IX | 1 | 107 | 106 |00:00:00.06 | 4 | 6 | | | |
| 9 | INDEX FAST FULL SCAN | EMP_NAME_IX | 1 | 107 | 107 |00:00:00.01 | 4 | 6 | | | |
-------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - access("D"."DEPARTMENT_ID"="E"."DEPARTMENT_ID")
filter("D"."DEPARTMENT_ID"="E"."DEPARTMENT_ID")
7 - access(ROWID=ROWID)


30 rows selected.

SQL>


Thus, I was able to get similar execution statistics without the Hint if I set STATISTICS_LEVEL='ALL'.

(Note : The top-level operation here is a MERGE JOIN, while that in the first example of the first post of this series on 27-January-22 was a HASH JOIN.  The result here is 106 rows, not the 108 rows in the first example.  The first example was in an HR schema in another database -- e.g. you would see DEPARTMENT_ID=280 'New Department' and Employee 'HEMANT CHITALE' in that listing.  These are two different HR schemas where I have modified some of the data.  Thus, you'll see a difference in the Execution Plan as well between the first example and this one, although the query is similar -- because the databases, patch levels, possibly optimizer parameters, schema and data are different.)

Even a slight difference in data or parameters or patch levels can result in different execution plans !








SQL Execution Statistics -- 2 : SQL_ID from a Tracefile

Tue, 2022-02-22 08:45

In my previous blog post, I have demonstrated sql execution statistics for the last SQL statement of the current session.

However, you can trace SQL execution in the current session or in any other session and then review the information from the tracefile.

(For different methods of SQL Tracing, see this series of blog posts.)

For example, in an SQLPlus session :



SQL> alter session set tracefile_identifier='CUSTOMER_COUNT';

Session altered.

SQL>
SQL> select *
2 from v$diag_info
3 where name = 'Default Trace File';

INST_ID NAME
---------- ----------------------------------------------------------------
VALUE
---------------------------------------------------------------------------------------
CON_ID
----------
1 Default Trace File
/opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_9416_CUSTOMER_COUNT.trc
0


SQL>
SQL> exec DBMS_SESSION.SET_SQL_TRACE(sql_trace=>TRUE);

PL/SQL procedure successfully completed.

SQL> select /*+ gather_plan_statistics */ count(*) from tpcc.orders;

COUNT(*)
----------
303697

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>

From the tracefile :
PARSING IN CURSOR #139722640816616 len=62 dep=0 uid=106 oct=3 lid=106 tim=20489021058 hv=3158106211 ad='777865e0' sqlid='du49mwuy3ts33'
select /*+ gather_plan_statistics */ count(*) from tpcc.orders
END OF STMT
PARSE #139722640816616:c=10950,e=11447,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=630573765,tim=20489021057
EXEC #139722640816616:c=52,e=51,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=630573765,tim=20489021253
FETCH #139722640816616:c=78319,e=464572,p=1504,cr=1508,cu=0,mis=0,r=1,dep=0,og=1,plh=630573765,tim=20489485885
STAT #139722640816616 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=1508 pr=1504 pw=0 str=1 time=464570 us)'
STAT #139722640816616 id=2 cnt=303697 pid=1 pos=1 obj=80213 op='TABLE ACCESS FULL ORDERS (cr=1508 pr=1504 pw=0 str=1 time=566811 us cost=411 size=0 card=303697)'
FETCH #139722640816616:c=1,e=1,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=630573765,tim=20489486318


From a tkprof of the tracefile :
SQL ID: du49mwuy3ts33 Plan Hash: 630573765

select /*+ gather_plan_statistics */ count(*)
from
tpcc.orders


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.01 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.07 0.46 1504 1508 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.08 0.47 1504 1508 0 1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 SORT AGGREGATE (cr=1508 pr=1504 pw=0 time=464570 us starts=1)
303697 303697 303697 TABLE ACCESS FULL ORDERS (cr=1508 pr=1504 pw=0 time=566811 us starts=1 cost=411 size=0 card=303697)


SQL> select * from table(dbms_xplan.display_cursor('du49mwuy3ts33',format=>'ALLSTATS LAST +COST'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID du49mwuy3ts33, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from tpcc.orders

Plan hash value: 630573765

-------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
-------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 411 (100)| 1 |00:00:00.38 | 1506 | 1505 |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.38 | 1506 | 1505 |
| 2 | TABLE ACCESS FULL| ORDERS | 1 | 303K| 411 (1)| 303K|00:00:00.07 | 1506 | 1505 |
-------------------------------------------------------------------------------------------------------------


14 rows selected.

SQL>


From the tracefile, I could identify the SQL_ID (du49mwuy3ts33) and then run dbms_xplan.display_cursor for that SQL_ID.  

There is a slight discrepancy between the tracefile and the dbms_xplan.display_cursor output on the Actual Time and Buffers ("cr" in the tracefile) and Physical Reads ("pr" in the tracefile).  Similarly, you might notice a discrepancy in the time reported in the raw trace file STAT lines for operation Ids 2 and 1.  When the timing is in microseconds, some discrepancies may arise.
You must also note that gather_plan_statistics itself introduces an overhead on execution time for SQL statements that run in microseconds/milliseconds.


(For an explanation of reading the tracefile, see this post on the details and this post on the summary)

