Feed aggregator
AWS Certified Machine Learning – Specialty: Step-by-Step Hands-On
AWS Certified Machine Learning – Specialty [MLS-C01] Step-by-Step Activity Guides (Hands-On Labs): hands-on activity guides that you must perform in order to learn AWS Certified Machine Learning and clear the AWS MLS-C01 certification exam. Read the blog at k21academy.com/awsml05 to get a deep understanding of the AWS MLS-C01 certification exam. The blog covers: • […]
How to export data for a specific time period using the DBMS_DATAPUMP API
Hostname for oracle database will change
cursor
Can I change an existing column to a computed column with an Alter Table statement
Oracle Pivot table
Manual block level recovery steps on standby database using backup from primary database
FAST RECOVERY AREA best practices and what is not recommended
Comparing Timestamps with Time Zone
If you break out into a sweat reading the title, it probably means that like me, you have had too little exposure to working with timestamps in Oracle.

Until recently I never really had much to do with time zones because I lived in the (now even more so, due to COVID) insular state of Western Australia. In WA most places pretend that there is no such thing as time zones – so our exposure to Oracle data types is limited to simple DATEs and TIMESTAMPs, with nary a time zone in sight. We just set the server time zone to AUSTRALIA/Perth and forget it.
Now I’ve helped build a system that needs to concurrently serve the needs of customers in any time zone – whether in the US, in Africa, or here in Australia. We therefore set the server time zone to UTC and use data types that support time zones, namely:
- TIMESTAMP WITH TIME ZONE – for dates and times that need to include the relevant time zone;
- TIMESTAMP WITH LOCAL TIME ZONE – for dates and times of system events (e.g. record audit data) that we want to always be shown as of the session time zone (i.e. UTC), where we don't care what time zone they were originally created in. The difference between the two is sketched below.
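To make the distinction concrete, here is a minimal sketch (the table name tz_demo is hypothetical, and I'm assuming a database time zone of UTC with a session time zone of AUSTRALIA/Perth):

alter session set time_zone = 'AUSTRALIA/Perth';

create table tz_demo (
    ts_tz  timestamp with time zone,
    ts_ltz timestamp with local time zone
);

insert into tz_demo values (systimestamp, systimestamp);

-- ts_tz is reported with the zone it was stored with;
-- ts_ltz is normalised to the database time zone on the way in
-- and rendered in the session time zone (here AUSTRALIA/Perth) on the way out.
select ts_tz, ts_ltz from tz_demo;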
A colleague came to me with the following issue: a business rule needed to compare an appointment date/time against the current date; if the appointment was for the prior day, an error message should be shown saying the customer was too late for their appointment. A test case was failing and they couldn't see why.
Here is the code (somewhat obfuscated):
if appointment_time < trunc(current_time) then
    :p1_msg := 'This appointment was for the previous day and has expired.';
end if;
We had used TRUNC here because we want to check if the appointment time was prior to midnight of the current date, from the perspective of the relevant time zone. The values of appointment_time and current_time seemed to indicate it shouldn’t fail:
appointment_time = 05-MAR-2021 07.00.00.000000 AM AUSTRALIA/Perth
current_time     = 05-MAR-2021 06.45.00.000000 AM AUSTRALIA/Perth
We can see that the appointment time and current time are in the same time zone, and the same day – so the tester expected no error message would be shown. (Note that the “current time” here is computed using localtimestamp at the time zone of the record being compared)
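In other words, the current_time value was derived with an expression roughly like the following sketch, where the hard-coded time zone literal stands in for the zone stored on the appointment record:

-- a sketch of how "current time" was computed for the comparison
select localtimestamp at time zone 'AUSTRALIA/Perth' as current_time
from dual;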
After checking that our assumptions were correct (yes, both appointment_time and current_time are TIMESTAMP WITH TIME ZONEs; and yes, they had the values shown above) we ran a query on the database to start testing our assumptions about the logic being run here.
select to_timestamp_tz('05-MAR-2021 07.00.00.000000 AM AUSTRALIA/Perth') as appt_time,
       to_timestamp_tz('05-MAR-2021 06.45.00.000000 AM AUSTRALIA/Perth') as current_time
from dual;

APPT_TIME    = '05-MAR-2021 07.00.00.000000000 AM AUSTRALIA/PERTH'
CURRENT_TIME = '05-MAR-2021 06.45.00.000000000 AM AUSTRALIA/PERTH'
So far so good. What does an ordinary comparison show for these values?
with q as (
    select to_timestamp_tz('05-MAR-2021 07.00.00.000000 AM AUSTRALIA/Perth') as appt_time,
           to_timestamp_tz('05-MAR-2021 06.45.00.000000 AM AUSTRALIA/Perth') as current_time
    from dual
)
select q.appt_time, q.current_time,
       case when q.appt_time < q.current_time then 'FAIL' else 'SUCCESS' end as test
from q;

APPT_TIME    = '05-MAR-2021 07.00.00.000000000 AM AUSTRALIA/PERTH'
CURRENT_TIME = '05-MAR-2021 06.45.00.000000000 AM AUSTRALIA/PERTH'
TEST         = 'SUCCESS'
That’s what we expected; the appointment time is not before the current time, so the test is successful. Now, let’s test the expression actually used in our failing code, where the TRUNC has been added:
with q as (
    select to_timestamp_tz('05-MAR-2021 07.00.00.000000 AM AUSTRALIA/Perth') as appt_time,
           to_timestamp_tz('05-MAR-2021 06.45.00.000000 AM AUSTRALIA/Perth') as current_time
    from dual
)
select q.appt_time, q.current_time, trunc(q.current_time),
       case when q.appt_time < trunc(q.current_time) then 'FAIL' else 'SUCCESS' end as test
from q;

APPT_TIME           = '05-MAR-2021 07.00.00.000000000 AM AUSTRALIA/PERTH'
CURRENT_TIME        = '05-MAR-2021 06.45.00.000000000 AM AUSTRALIA/PERTH'
TRUNC(CURRENT_TIME) = '03/05/2021'
TEST                = 'FAIL'
Good: we have reproduced the problem. Now we can try to work out why it is failing. My initial suspicion was that an implicit conversion was causing the issue – perhaps the appointment date was being converted to a DATE prior to the comparison, and was somehow being converted to the UTC time zone, which was the database time zone?
with q as (
    select to_timestamp_tz('05-MAR-2021 07.00.00.000000 AM AUSTRALIA/Perth') as appt_time,
           to_timestamp_tz('05-MAR-2021 06.45.00.000000 AM AUSTRALIA/Perth') as current_time
    from dual
)
select q.appt_time, q.current_time,
       cast(q.appt_time as date), cast(q.current_time as date)
from q;

APPT_TIME                  = '05-MAR-2021 07.00.00.000000000 AM AUSTRALIA/PERTH'
CURRENT_TIME               = '05-MAR-2021 06.45.00.000000000 AM AUSTRALIA/PERTH'
CAST(APPT_TIME AS DATE)    = '03/05/2021 07:00:00 AM'
CAST(CURRENT_TIME AS DATE) = '03/05/2021 06:45:00 AM'
Nope. When cast to a DATE, both timestamps still fall on the same date. Then I thought, maybe when a DATE is compared with a TIMESTAMP, Oracle first converts the DATE to a TIMESTAMP?
with q as (
    select to_timestamp_tz('05-MAR-2021 07.00.00.000000 AM AUSTRALIA/Perth') as appt_time,
           to_timestamp_tz('05-MAR-2021 06.45.00.000000 AM AUSTRALIA/Perth') as current_time
    from dual
)
select q.appt_time, q.current_time,
       cast(trunc(q.current_time) as timestamp with time zone),
       case when q.appt_time < trunc(q.current_time) then 'FAIL' else 'SUCCESS' end as test
from q;

APPT_TIME                              = '05-MAR-2021 07.00.00.000000000 AM AUSTRALIA/PERTH'
CURRENT_TIME                           = '05-MAR-2021 06.45.00.000000000 AM AUSTRALIA/PERTH'
CAST(TRUNC(CURRENT_TIME) AS TIMESTAMP) = '05-MAR-2021 12.00.00.000000 AM +00:00'
TEST                                   = 'FAIL'
Ah! Now we can see the cause of our problem. After TRUNCating a timestamp, we have converted it to a DATE (with no timezone information); since Oracle needs to implicitly convert this back to a TIMESTAMP WITH TIME ZONE, it simply slaps the UTC time zone on it. Now, when it is compared with the appointment time, it fails the test because the time is 12am (midnight) versus 7am.
Our original requirement was only to compare the dates involved, not the time of day; if the appointment was on the previous day (in the time zone relevant to the record), the error message should appear. We therefore need to ensure that Oracle performs no implicit conversion, by first converting the appointment time to a DATE:
with q as (
    select to_timestamp_tz('05-MAR-2021 07.00.00.000000 AM AUSTRALIA/Perth') as appt_time,
           to_timestamp_tz('05-MAR-2021 06.45.00.000000 AM AUSTRALIA/Perth') as current_time
    from dual
)
select q.appt_time, q.current_time,
       case when cast(q.appt_time as date) < trunc(q.current_time) then 'FAIL' else 'SUCCESS' end as test
from q;

APPT_TIME    = '05-MAR-2021 07.00.00.000000000 AM AUSTRALIA/PERTH'
CURRENT_TIME = '05-MAR-2021 06.45.00.000000000 AM AUSTRALIA/PERTH'
TEST         = 'SUCCESS'
Our logic therefore should be:
if cast(appointment_time as date) < trunc(current_time) then
    :p1_msg := 'This appointment was for the previous day and has expired.';
end if;
It should be noted that if the tester had run this just an hour later in the day, they would not have noticed the problem – because Perth is UTC+08:00, and the timestamps in the test data were prior to 8am.
Lesson #1: in any system that deals with timestamps and time zones it’s quite easy for subtle bugs to survive quite a bit of testing.
Lesson #2: when writing any comparison code involving timestamps and time zones, make sure that the data types are identical – and if they aren’t, add code to explicitly convert them first.
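As a closing sketch of that second lesson – hypothetical data, with both sides of the comparison explicitly reduced to DATE so no implicit conversion can sneak a time zone in:

with q as (
    select to_timestamp_tz('05-MAR-2021 07.00.00.000000 AM AUSTRALIA/Perth') as appt_time,
           to_timestamp_tz('05-MAR-2021 06.45.00.000000 AM AUSTRALIA/Perth') as current_time
    from dual
)
-- both operands are DATEs, so the comparison is purely date arithmetic
select case
           when cast(q.appt_time as date) < trunc(cast(q.current_time as date))
           then 'EXPIRED'
           else 'OK'
       end as check_result
from q;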
Fetching last record from a table
select purging in V$ARCHIVED_LOG
Insert 400 million rows faster from one table to another
Commit point optimization and Golden Gate
use_nl_with_index
One of the less well-known hints is the hint /*+ use_nl_with_index() */ (link to 19c reference manual) which appeared in the 10g timeline, where the manuals supply the following description:
The USE_NL_WITH_INDEX hint will cause the optimizer to join the specified table to another row source with a nested loops join using the specified table as the inner table but only under the following condition. If no index is specified, the optimizer must be able to use some index with at least one join predicate as the index key. If an index is specified, the optimizer must be able to use that index with at least one join predicate as the index key.
It looks like a fairly redundant hint, really, since it could easily (and with greater safety, perhaps) be replaced by the pair /*+ use_nl(alias) index(alias) */ with some refinement on the index() hint that I've shown. In fact I think I've only ever seen the hint "in the wild" once, and that was in an internal view definition where it had been used incorrectly (see this recent update to a note on one of the dynamic performance views that I wrote a few years ago).
The note I’ve just referenced prompted me to take a closer look at the hint to see how accurate the definition was. Here’s a data set I created for testing:
rem
rem     Script:         use_nl_with_index.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2021
rem
rem     Last tested
rem             19.3.0.0
rem

create table t1
as
with generator as (
        select rownum id
        from   dual
        connect by level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        mod(rownum,10)                  n10,
        mod(rownum,1000)                n1000,
        mod(rownum,2000)                n2000,
        lpad(mod(rownum,1000),10,'0')   v1000,
        lpad('x',100,'x')               padding
from
        generator v1,
        generator v2
where
        rownum <= 1e5 -- > comment to avoid WordPress format issue
;

create table t2 as
select distinct n10, n1000, v1000
from t1
;

create index t1_i1000    on t1(n1000);
create index t1_i10_1000 on t1(n10,n1000);
create index t1_i2000    on t1(n2000);
create bitmap index t1_b1 on t1(n1000, n10);
I’ve set up the data to do a join between t2 and t1, and I’m going to hint a query to force the join order t2 -> t1, and thanks to the data pattern the default path should be a hash join. Once I’ve established the default path I’m going to use the use_nl_with_index() hint to see how it behaves with respect to the various indexes I’ve created. So here’s the query with the default path:
set autotrace traceonly explain

select
        /*+ leading(t2 t1) */
        t1.*
from    t2, t1
where   t2.n10 = 1
and     t1.n1000 = t2.n1000
;

Execution Plan
----------------------------------------------------------
Plan hash value: 2959412835

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |  1318K|   259   (8)| 00:00:01 |
|*  1 |  HASH JOIN         |      | 10000 |  1318K|   259   (8)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| T2   |   100 |   700 |     2   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   |   100K|    12M|   252   (6)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."N1000"="T2"."N1000")
   2 - filter("T2"."N10"=1)

Note
-----
   - this is an adaptive plan
So the join order is as required, and the default is a hash join. The join predicate is t1.n1000 = t2.n1000, and if you examine the indexes I've created you'll see I've got:
- t1_i1000 on t1(n1000) – the perfect index
- t1_i10_1000 on t1(n10, n1000) – which could be used for a skip scan
- t1_i2000 on t1(n2000) – which doesn’t include a suitable join predicate
- t1_b1 on t1(n1000, n10) – which is a bitmap index
So here is the first batch of tests – all rolled into a single statement with the optional hints included as comments:
select
        /*+
                leading(t2 t1)
                use_nl_with_index(t1)
--              use_nl_with_index(t1 t1_i1000)
--              use_nl_with_index(t1(n1000))
        */
        t1.*
from    t2, t1
where   t2.n10 = 1
and     t1.n1000 = t2.n1000
;

Execution Plan
----------------------------------------------------------
Plan hash value: 3315267048

-----------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |          | 10000 |  1318K| 10133   (1)| 00:00:01 |
|   1 |  NESTED LOOPS                |          | 10000 |  1318K| 10133   (1)| 00:00:01 |
|   2 |   NESTED LOOPS               |          | 10000 |  1318K| 10133   (1)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL         | T2       |   100 |   700 |     2   (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN          | T1_I1000 |   100 |       |     1   (0)| 00:00:01 |
|   5 |   TABLE ACCESS BY INDEX ROWID| T1       |   100 | 12800 |   101   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T2"."N10"=1)
   4 - access("T1"."N1000"="T2"."N1000")
If I don’t specify an index the optimizer picks the best possible index; alternatively I can specify the index on (n1000) by name or by description and the optimizer will still use it. So what do I get if I reference the index on (n2000):
select
        /*+
                leading(t2 t1)
                use_nl_with_index(t1(n2000))
        */
        t1.*
from    t2, t1
where   t2.n10 = 1
and     t1.n1000 = t2.n1000
;

Execution Plan
----------------------------------------------------------
Plan hash value: 2959412835

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |  1318K|   259   (8)| 00:00:01 |
|*  1 |  HASH JOIN         |      | 10000 |  1318K|   259   (8)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| T2   |   100 |   700 |     2   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   |   100K|    12M|   252   (6)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."N1000"="T2"."N1000")
   2 - filter("T2"."N10"=1)

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   3 -  SEL$1 / T1@SEL$1
           U -  use_nl_with_index(t1(n2000))

Note
-----
   - this is an adaptive plan
I’m back to the tablescan with hash join – and since I’m testing on 19.3.0.0 Oracle kindly tells me in the Hint Report that I have an unused hint: the one that can’t be used because the referenced index doesn’t have any columns that are join predicates.
So what about the skip scan option:
select
        /*+
                leading(t2 t1)
                use_nl_with_index(t1(n10, n1000))
--              use_nl_with_index(t1(n10))
--              index_ss(t1 (n10))
        */
        t1.*
from    t2, t1
where   t2.n10 = 1
and     t1.n1000 = t2.n1000
;
Even though the index I’ve specified in the hint does contain a column in the join predicate the execution plan reports a full tablescan and hash join – unless I include an explicit index_ss() hint: but in that case I might as well have used the vanilla flavoured use_nl() hint. I did have a look at the 10053 (CBO) trace file for this example, and found that if I didn’t include the index_ss() hint the optimizer calculated the cost of using an index full scan (and no other option) for every single index on t1 before choosing the tablescan with hash join.
Finally, and without repeating the query, I’ll just note that when I referenced t1_b1 (n1000, n10) in the hint Oracle was happy to use the index in a nested loop join:
---------------------------------------------------------------------------------------
| Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |       | 10000 |  1318K|  2182   (1)| 00:00:01 |
|   1 |  NESTED LOOPS                  |       | 10000 |  1318K|  2182   (1)| 00:00:01 |
|   2 |   NESTED LOOPS                 |       | 10000 |  1318K|  2182   (1)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL           | T2    |   100 |   700 |     2   (0)| 00:00:01 |
|   4 |    BITMAP CONVERSION TO ROWIDS |       |       |       |            |          |
|*  5 |     BITMAP INDEX RANGE SCAN    | T1_B1 |       |       |            |          |
|   6 |   TABLE ACCESS BY INDEX ROWID  | T1    |   100 | 12800 |  2182   (1)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T2"."N10"=1)
   5 - access("T1"."N1000"="T2"."N1000")
       filter("T1"."N1000"="T2"."N1000")

Summary
The use_nl_with_index() hint generally works as described in the manuals – with the exception that it doesn’t consider an index skip scan as a valid option when trying to match the join predicate. That exception is one of those annoying little details that could waste a lot of your time.
Since it’s so easy to replace use_nl_with_index() with a pair of hints – including an index hint that could be an index_desc(), index_ss(), or index_combine() hint – I can’t come up with a good reason for using the use_nl_with_index() hint.
Partner Webcast – High Performance Weblogic on OCI with Oracle GraalVM Enterprise
Purge Database Audit Trail Table
The segments for the AUD$ and FGA_LOG$ tables currently reside in the SYSTEM tablespace and will be moved to a new AUDIT_TBS tablespace before configuring the purge.
The requirement is to purge audit records older than 7 years (366*7 = 2562 days), hence l_days NUMBER := 2562 in the scheduler job below.
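As a quick sanity check, the cutoff date that the jobs below will keep maintaining can be previewed with a one-liner (a sketch using the same 2562-day window):

-- everything with a timestamp before this date is eligible for purging
select trunc(systimestamp) - 2562 as purge_before from dual;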
DEMO:
$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.4.0 Production on Wed Mar 3 18:22:35 2021
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
18:22:35 SYS@DB01 AS SYSDBA> select file_name from dba_data_files where tablespace_name='SYSAUX';
FILE_NAME
--------------------------------------------------------------------------------
/oradata/db01/datafile/sysaux_01.dbf
Elapsed: 00:00:00.01
18:22:44 SYS@DB01 AS SYSDBA> create tablespace AUDIT_TBS datafile '&location/audit_tbs_01.dbf' size 5g autoextend on maxsize 30g;
Enter value for location: /oradata/db01/datafile
old 1: create tablespace AUDIT_TBS datafile '&location/audit_tbs_01.dbf' size 5g autoextend on maxsize 30g
new 1: create tablespace AUDIT_TBS datafile '/oradata/db01/datafile/audit_tbs_01.dbf' size 5g autoextend on maxsize 30g
Tablespace created.
Elapsed: 00:00:24.68
18:24:29 SYS@DB01 AS SYSDBA> @ audit.sql
18:24:37 SYS@DB01 AS SYSDBA>
18:24:37 SYS@DB01 AS SYSDBA> select owner, segment_name, segment_type, tablespace_name, round(bytes/1024/1024,2) USED_MB
18:24:37 2 from dba_segments where segment_name IN ('AUD$','FGA_LOG$') order by 2
18:24:37 3 ;
OWNER SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME USED_MB
-------------------- -------------------- ------------------ -------------------- ----------
SYS AUD$ TABLE SYSTEM 2946
SYS FGA_LOG$ TABLE SYSTEM .06
Elapsed: 00:00:00.09
18:24:37 SYS@DB01 AS SYSDBA> select MIN(TIMESTAMP), MAX(TIMESTAMP), TRUNC(systimestamp)-MIN(TIMESTAMP) diff_day, COUNT(*)
18:24:37 2 from dba_audit_trail
18:24:37 3 ;
MIN(TIMES MAX(TIMES DIFF_DAY COUNT(*)
--------- --------- ---------- ----------
23-SEP-14 03-MAR-21 2352.58642 20801590
Elapsed: 00:00:53.32
18:25:30 SYS@DB01 AS SYSDBA> select * from dba_audit_mgmt_config_params order by 1
18:25:30 2 ;
PARAMETER_NAME PARAMETER_VALUE AUDIT_TRAIL
------------------------------ -------------------- --------------------
AUDIT FILE MAX AGE 5 OS AUDIT TRAIL
AUDIT FILE MAX AGE 5 XML AUDIT TRAIL
AUDIT FILE MAX SIZE 10000 OS AUDIT TRAIL
AUDIT FILE MAX SIZE 10000 XML AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE 10000 STANDARD AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE 10000 FGA AUDIT TRAIL
DB AUDIT TABLESPACE SYSAUX FGA AUDIT TRAIL
DB AUDIT TABLESPACE SYSAUX STANDARD AUDIT TRAIL
OS FILE CLEAN BATCH SIZE 1000 XML AUDIT TRAIL
OS FILE CLEAN BATCH SIZE 1000 OS AUDIT TRAIL
10 rows selected.
Elapsed: 00:00:00.01
18:25:30 SYS@DB01 AS SYSDBA> select * from dba_audit_mgmt_last_arch_ts
18:25:30 2 ;
no rows selected
Elapsed: 00:00:00.00
18:25:30 SYS@DB01 AS SYSDBA> exec dbms_audit_mgmt.SET_AUDIT_TRAIL_LOCATION(audit_trail_type => dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD, AUDIT_TRAIL_LOCATION_VALUE => 'AUDIT_TBS')
PL/SQL procedure successfully completed.
Elapsed: 00:09:17.79
18:34:48 SYS@DB01 AS SYSDBA> ;
1 select * from dba_audit_mgmt_last_arch_ts
2*
18:34:48 SYS@DB01 AS SYSDBA> exec dbms_audit_mgmt.SET_AUDIT_TRAIL_LOCATION(audit_trail_type => dbms_audit_mgmt.AUDIT_TRAIL_FGA_STD, AUDIT_TRAIL_LOCATION_VALUE => 'AUDIT_TBS')
PL/SQL procedure successfully completed.
Elapsed: 00:00:07.83
18:34:56 SYS@DB01 AS SYSDBA> ;
1 select * from dba_audit_mgmt_last_arch_ts
2*
18:34:56 SYS@DB01 AS SYSDBA>
18:34:56 SYS@DB01 AS SYSDBA> begin
18:34:56 2 dbms_audit_mgmt.INIT_CLEANUP (
18:34:56 3 audit_trail_type => dbms_audit_mgmt.audit_trail_all,
18:34:56 4 default_cleanup_interval => 24);
18:34:56 5 end;
18:34:56 6 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:05.21
18:35:01 SYS@DB01 AS SYSDBA>
18:35:01 SYS@DB01 AS SYSDBA> begin
18:35:01 2 if
18:35:01 3 dbms_audit_mgmt.IS_CLEANUP_INITIALIZED(dbms_audit_mgmt.AUDIT_TRAIL_ALL) then
18:35:01 4 dbms_output.put_line('******* YES *******');
18:35:01 5 else
18:35:01 6 dbms_output.put_line('******* NO *******');
18:35:01 7 end if;
18:35:01 8 end;
18:35:01 9 /
******* YES *******
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.00
18:35:01 SYS@DB01 AS SYSDBA>
18:35:01 SYS@DB01 AS SYSDBA> select * from dba_audit_mgmt_config_params order by 1
18:35:01 2 ;
PARAMETER_NAME PARAMETER_VALUE AUDIT_TRAIL
------------------------------ -------------------- --------------------
AUDIT FILE MAX AGE 5 XML AUDIT TRAIL
AUDIT FILE MAX AGE 5 OS AUDIT TRAIL
AUDIT FILE MAX SIZE 10000 OS AUDIT TRAIL
AUDIT FILE MAX SIZE 10000 XML AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE 10000 STANDARD AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE 10000 FGA AUDIT TRAIL
DB AUDIT TABLESPACE AUDIT_TBS STANDARD AUDIT TRAIL
DB AUDIT TABLESPACE AUDIT_TBS FGA AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 24 XML AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 24 OS AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 24 STANDARD AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 24 FGA AUDIT TRAIL
OS FILE CLEAN BATCH SIZE 1000 OS AUDIT TRAIL
OS FILE CLEAN BATCH SIZE 1000 XML AUDIT TRAIL
14 rows selected.
Elapsed: 00:00:00.01
18:35:01 SYS@DB01 AS SYSDBA>
18:35:01 SYS@DB01 AS SYSDBA> begin
18:35:01 2 dbms_audit_mgmt.CREATE_PURGE_JOB(
18:35:01 3 audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
18:35:01 4 audit_trail_purge_interval => 24,
18:35:01 5 audit_trail_purge_name => 'PURGE_ALL_AUDIT_TRAILS',
18:35:01 6 use_last_arch_timestamp => TRUE);
18:35:01 7 end;
18:35:01 8 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.77
18:35:02 SYS@DB01 AS SYSDBA>
18:35:02 SYS@DB01 AS SYSDBA> begin
18:35:02 2 dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(
18:35:02 3 audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
18:35:02 4 last_archive_time => SYSTIMESTAMP-2562);
18:35:02 5 end;
18:35:02 6 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.05
18:35:02 SYS@DB01 AS SYSDBA>
18:35:02 SYS@DB01 AS SYSDBA> begin
18:35:02 2 DBMS_SCHEDULER.create_job (
18:35:02 3 job_name => 'AUDIT_LAST_ARCHIVE_TIME',
18:35:02 4 job_type => 'PLSQL_BLOCK',
18:35:02 5 job_action => 'DECLARE
18:35:02 6 l_days NUMBER := 2562;
18:35:02 7 BEGIN
18:35:02 8 dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD, TRUNC(SYSTIMESTAMP)-l_days);
18:35:02 9 dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_FGA_STD, TRUNC(SYSTIMESTAMP)-l_days);
18:35:02 10 dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_OS, TRUNC(SYSTIMESTAMP)-l_days);
18:35:02 11 dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_XML, TRUNC(SYSTIMESTAMP)-l_days);
18:35:02 12 END;',
18:35:02 13 start_date => SYSTIMESTAMP,
18:35:02 14 repeat_interval => 'freq=daily; byhour=0; byminute=0; bysecond=0;',
18:35:02 15 end_date => NULL,
18:35:02 16 enabled => TRUE,
18:35:02 17 comments => 'Automatically set audit last archive time.');
18:35:02 18 end;
18:35:02 19 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.07
18:35:02 SYS@DB01 AS SYSDBA>
18:35:02 SYS@DB01 AS SYSDBA> select * from dba_audit_mgmt_config_params order by 1
18:35:02 2 ;
PARAMETER_NAME PARAMETER_VALUE AUDIT_TRAIL
------------------------------ -------------------- --------------------
AUDIT FILE MAX AGE 5 XML AUDIT TRAIL
AUDIT FILE MAX AGE 5 OS AUDIT TRAIL
AUDIT FILE MAX SIZE 10000 OS AUDIT TRAIL
AUDIT FILE MAX SIZE 10000 XML AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE 10000 STANDARD AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE 10000 FGA AUDIT TRAIL
DB AUDIT TABLESPACE AUDIT_TBS STANDARD AUDIT TRAIL
DB AUDIT TABLESPACE AUDIT_TBS FGA AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 24 XML AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 24 OS AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 24 STANDARD AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 24 FGA AUDIT TRAIL
OS FILE CLEAN BATCH SIZE 1000 OS AUDIT TRAIL
OS FILE CLEAN BATCH SIZE 1000 XML AUDIT TRAIL
14 rows selected.
Elapsed: 00:00:00.00
18:35:02 SYS@DB01 AS SYSDBA> select * from dba_audit_mgmt_last_arch_ts
18:35:02 2 ;
AUDIT_TRAIL RAC_INSTANCE LAST_ARCHIVE_TS
-------------------- ------------ ---------------------------------------------
STANDARD AUDIT TRAIL 0 26-FEB-14 06.35.02.000000 PM +00:00
Elapsed: 00:00:00.00
18:35:02 SYS@DB01 AS SYSDBA> select job_action from dba_scheduler_jobs where job_name='PURGE_ALL_AUDIT_TRAILS'
18:35:02 2 ;
JOB_ACTION
------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(15, TRUE); END;
Elapsed: 00:00:00.17
18:35:02 SYS@DB01 AS SYSDBA> select job_action from dba_scheduler_jobs where job_name='AUDIT_LAST_ARCHIVE_TIME'
18:35:02 2 ;
JOB_ACTION
------------------------------------------------------------------------------------------------------------------------------------------------------
DECLARE
l_days NUMBER := 2562;
BEGIN
dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD, TRUNC(SYSTIMESTAMP)-l_days);
dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_FGA_STD, TRUNC(SYSTIMESTAMP)-l_days);
dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_OS, TRUNC(SYSTIMESTAMP)-l_days);
dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_XML, TRUNC(SYSTIMESTAMP)-l_days);
END;
Elapsed: 00:00:00.14
18:35:02 SYS@DB01 AS SYSDBA> commit;
Commit complete.
Elapsed: 00:00:00.00
18:35:02 SYS@DB01 AS SYSDBA>
--- audit.sql
col parameter_name for a30
col parameter_value for a20
col audit_trail for a20
col owner for a20
col segment_name for a20
col tablespace_name for a20
col last_archive_ts for a45
col job_action for a150
set lines 200 pages 100 serverout on echo on

select owner, segment_name, segment_type, tablespace_name, round(bytes/1024/1024,2) USED_MB
from dba_segments where segment_name IN ('AUD$','FGA_LOG$') order by 2
;

select MIN(TIMESTAMP), MAX(TIMESTAMP), TRUNC(systimestamp)-MIN(TIMESTAMP) diff_day, COUNT(*)
from dba_audit_trail
;

select * from dba_audit_mgmt_config_params order by 1
;

select * from dba_audit_mgmt_last_arch_ts
;

exec dbms_audit_mgmt.SET_AUDIT_TRAIL_LOCATION(audit_trail_type => dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD, AUDIT_TRAIL_LOCATION_VALUE => 'AUDIT_TBS')
;
exec dbms_audit_mgmt.SET_AUDIT_TRAIL_LOCATION(audit_trail_type => dbms_audit_mgmt.AUDIT_TRAIL_FGA_STD, AUDIT_TRAIL_LOCATION_VALUE => 'AUDIT_TBS')
;

begin
  dbms_audit_mgmt.INIT_CLEANUP (
    audit_trail_type         => dbms_audit_mgmt.audit_trail_all,
    default_cleanup_interval => 24);
end;
/

begin
  if dbms_audit_mgmt.IS_CLEANUP_INITIALIZED(dbms_audit_mgmt.AUDIT_TRAIL_ALL) then
    dbms_output.put_line('******* YES *******');
  else
    dbms_output.put_line('******* NO *******');
  end if;
end;
/

select * from dba_audit_mgmt_config_params order by 1
;

begin
  dbms_audit_mgmt.CREATE_PURGE_JOB(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
    audit_trail_purge_interval => 24,
    audit_trail_purge_name     => 'PURGE_ALL_AUDIT_TRAILS',
    use_last_arch_timestamp    => TRUE);
end;
/

begin
  dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    last_archive_time => SYSTIMESTAMP-2562);
end;
/

begin
  DBMS_SCHEDULER.create_job (
    job_name        => 'AUDIT_LAST_ARCHIVE_TIME',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE
                          l_days NUMBER := 2562;
                        BEGIN
                          dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD, TRUNC(SYSTIMESTAMP)-l_days);
                          dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_FGA_STD, TRUNC(SYSTIMESTAMP)-l_days);
                          dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_OS, TRUNC(SYSTIMESTAMP)-l_days);
                          dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_XML, TRUNC(SYSTIMESTAMP)-l_days);
                        END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'freq=daily; byhour=0; byminute=0; bysecond=0;',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Automatically set audit last archive time.');
end;
/

select * from dba_audit_mgmt_config_params order by 1
;

select * from dba_audit_mgmt_last_arch_ts
;

select job_action from dba_scheduler_jobs where job_name='PURGE_ALL_AUDIT_TRAILS'
;

select job_action from dba_scheduler_jobs where job_name='AUDIT_LAST_ARCHIVE_TIME'
;

commit;
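Once the jobs are in place, a query along these lines (a sketch, reusing the same 2562-day window) can confirm that nothing older than the cutoff survives in the standard audit trail:

-- should trend towards 0 once the purge job has caught up
select count(*) as rows_older_than_cutoff
from dba_audit_trail
where timestamp < trunc(systimestamp) - 2562;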
Find all Sundays of the year
Two approaches to see the Segment Advisor findings?
DBVERIFY (dbv) outputs block_id for bigfiles
I posted a question about this and didn’t find an answer. I searched both Oracle’s support site and the internet in general. I ran the DBVERIFY utility dbv against a bigfile datafile that had corrupt blocks and wanted to relate the blocks back to a table using the DBA_EXTENTS view. For smallfile datafiles I could have used dbms_utility.data_block_address_block. But the manual says that it does not work with bigfiles. I did a small test and found that with bigfiles the address output by dbv is just the block_id within the data file. With a smallfile tablespace it was some combination of block_id and file_id. Really, it is more helpful for dbv to spit out the block_id if you are running it against a datafile because you already know which datafile you have. I will include some of the output of the test below.
Steps of my test:
- create a small bigfile tablespace
- create empty table nologging
- take a rman backup
- do a nologging update
- delete tablespace/datafile
- restore and recover it
- verify corruption exists
- run dbv to get DBA – block address
- run rman backup validate to get file id and block id
- select from dba_extents to get block locations for table
1 - create a small bigfile tablespace
create bigfile tablespace big datafile '/home/oracle/product/oradata/ORCL/big.dbf' size 10M;
[oracle@ora19 bigfiletests]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Wed Mar 3 07:46:15 2021
Version 19.10.0.0.0
Copyright (c) 1982, 2020, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.10.0.0.0
SQL> create bigfile tablespace big datafile '/home/oracle/product/oradata/ORCL/big.dbf' size 10M;
Tablespace created.
This was on a small 19c test database on a Linux VM.
2 - create an empty NOLOGGING table with CTAS from dba_tables (where 1=2)
ORCL:SYSTEM>create table bigfiletest tablespace big nologging as select * from dba_tables where 1=2;
Table created.
Had to create table with NOLOGGING to make the insert append below unrecoverable.
3 - take a rman backup
[oracle@ora19 ORCL]$ rman target /
Recovery Manager: Release 19.0.0.0.0 - Production on Wed Mar 3 08:11:29 2021
Version 19.10.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORCL (DBID=1534990846)
RMAN> backup database;
Starting backup at 03-MAR-21
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=37 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/home/oracle/product/oradata/ORCL/system01.dbf
input datafile file number=00003 name=/home/oracle/product/oradata/ORCL/sysaux01.dbf
input datafile file number=00004 name=/home/oracle/product/oradata/ORCL/undotbs01.dbf
input datafile file number=00005 name=/home/oracle/product/oradata/ORCL/big.dbf
input datafile file number=00007 name=/home/oracle/product/oradata/ORCL/users01.dbf
channel ORA_DISK_1: starting piece 1 at 03-MAR-21
channel ORA_DISK_1: finished piece 1 at 03-MAR-21
piece handle=/home/oracle/product/db/19.0.0/dbs/04voq09o_4_1_1 tag=TAG20210303T081136 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:02:05
Finished backup at 03-MAR-21
Starting Control File and SPFILE Autobackup at 03-MAR-21
piece handle=/home/oracle/product/db/19.0.0/dbs/c-1534990846-20210303-02 comment=NONE
Finished Control File and SPFILE Autobackup at 03-MAR-21
Just a regular backup before doing the unrecoverable insert append into nologging table.
4 - do a nologging update: insert /*+ append */ a row from dba_tables and commit
ORCL:SYSTEM>insert /*+ append */ into bigfiletest
2 select * from dba_tables where rownum < 2;
1 row created.
ORCL:SYSTEM>
ORCL:SYSTEM>commit;
Commit complete.
Just one row – should be one corrupt block.
5 - delete tablespace/datafile
[oracle@ora19 ORCL]$ cd /home/oracle/product/oradata/ORCL
[oracle@ora19 ORCL]$ ls -altr
total 2813440
drwxr-x---. 3 oracle oinstall 17 Jul 30 2019 ..
-rw-r-----. 1 oracle oinstall 209715712 Mar 3 07:50 redo02.log
-rw-r-----. 1 oracle oinstall 209715712 Mar 3 07:50 redo03.log
-rw-r-----. 1 oracle oinstall 57679872 Mar 3 07:51 temp01.dbf
drwxr-x---. 2 oracle oinstall 4096 Mar 3 08:02 .
-rw-r-----. 1 oracle oinstall 1142956032 Mar 3 08:11 system01.dbf
-rw-r-----. 1 oracle oinstall 692068352 Mar 3 08:11 sysaux01.dbf
-rw-r-----. 1 oracle oinstall 356524032 Mar 3 08:11 undotbs01.dbf
-rw-r-----. 1 oracle oinstall 5251072 Mar 3 08:11 users01.dbf
-rw-r-----. 1 oracle oinstall 10493952 Mar 3 08:14 big.dbf
-rw-r-----. 1 oracle oinstall 209715712 Mar 3 08:15 redo01.log
-rw-r-----. 1 oracle oinstall 10600448 Mar 3 08:15 control01.ctl
-rw-r-----. 1 oracle oinstall 10600448 Mar 3 08:15 control02.ctl
[oracle@ora19 ORCL]$ rm big.dbf
Now the unrecoverable nologging insert append change is lost. It is not on the backup and not on the redo or archived redo logs.
6 - restore and recover it
[oracle@ora19 ORCL]$ rman target /
Recovery Manager: Release 19.0.0.0.0 - Production on Wed Mar 3 08:16:07 2021
Version 19.10.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORCL (DBID=1534990846)
RMAN> alter tablespace big offline immediate;
using target database control file instead of recovery catalog
Statement processed
RMAN> restore tablespace big;
Starting restore at 03-MAR-21
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=37 device type=DISK
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /home/oracle/product/oradata/ORCL/big.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/product/db/19.0.0/dbs/04voq09o_4_1_1
channel ORA_DISK_1: piece handle=/home/oracle/product/db/19.0.0/dbs/04voq09o_4_1_1 tag=TAG20210303T081136
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 03-MAR-21
RMAN> recover tablespace big;
Starting recover at 03-MAR-21
using channel ORA_DISK_1
starting media recovery
media recovery complete, elapsed time: 00:00:00
Finished recover at 03-MAR-21
RMAN> alter tablespace big online;
Statement processed
Simple tablespace restore and recovery. Had to alter tablespace offline immediate because the file was not there.
7 - verify corruption exists
ORCL:SYSTEM>select * from bigfiletest;
select * from bigfiletest
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 5, block # 787)
ORA-01110: data file 5: '/home/oracle/product/oradata/ORCL/big.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option
This just shows that the block is corrupt. It also gives us the file number (which we already knew) and the block id which would relate back to DBA_EXTENTS.
8 - run dbv to get DBAs - block addresses
dbv file=/home/oracle/product/oradata/ORCL/big.dbf blocksize=8192
[oracle@ora19 ORCL]$ dbv file=/home/oracle/product/oradata/ORCL/big.dbf blocksize=8192
DBVERIFY: Release 19.0.0.0.0 - Production on Wed Mar 3 08:21:45 2021
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
DBVERIFY - Verification starting : FILE = /home/oracle/product/oradata/ORCL/big.dbf
DBV-00201: Block, DBA 787, marked corrupt for invalid redo application
This was kind of the moment of truth. The DBA from the DBVERIFY utility dbv was 787 which is the same as the block number in the error from the select.
9 - run rman backup validate to get file id and block ids
[oracle@ora19 ORCL]$ rman target /
Recovery Manager: Release 19.0.0.0.0 - Production on Wed Mar 3 08:30:47 2021
Version 19.10.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORCL (DBID=1534990846)
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;
...
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
5 OK 1 53 1280 2503068
File Name: /home/oracle/product/oradata/ORCL/big.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 1
Index 0 0
Other 0 1226
ORCL:SYSTEM>select * from V$DATABASE_BLOCK_CORRUPTION;
no rows selected
ORCL:SYSTEM>
ORCL:SYSTEM>select * from V$NONLOGGED_BLOCK;
FILE# BLOCK# BLOCKS NONLOGGED_START_CHANGE# NONLOGGED NONLOGGED_END_CHANGE# NONLOGGED RESETLOGS_CHANGE# RESETLOGS OBJECT# REASON CON_ID
---------- ---------- ---------- ----------------------- --------- --------------------- --------- ----------------- --------- ---------------------------------------- --------- ----------
5 787 1 2502865 2502865 1920977 02-MAY-19 74043 UNKNOWN 0
RMAN VALIDATE has the same block number – 787.
10 - select from dba_extents to get block locations for table
ORCL:SYSTEM>select * from dba_extents
2 where segment_name='BIGFILETEST';
OWNER SEGMENT_NAME PARTITION_NAME SEGMENT_TYPE TABLESPACE_NAME EXTENT_ID FILE_ID BLOCK_ID BYTES BLOCKS RELATIVE_FNO
-------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------- ------------------ ------------------------------ ---------- ---------- ---------- ---------- ---------- ------------
SYSTEM BIGFILETEST TABLE BIG 0 5 784 65536 8 1024
It seems to be the fourth block: the extent starts at block 784 and block 787 is the corrupt one.
I had a larger test database with many corrupt blocks due to the way we had populated it with an RMAN restore and recover. I knew which table was corrupt because I ran select count(*) queries against every table in the database and only found one that was corrupt. I checked the DBA values from dbv against the DBA_EXTENTS view for over 300 sample corrupt blocks, and they all pointed back to the table I knew was corrupt. I queried it like this:
SELECT
tablespace_name, segment_type, owner, segment_name
FROM my_extents
WHERE file_id = 29 and 15340893 between block_id AND block_id + blocks - 1;
I created the my_extents table from dba_extents to speed up these queries:
create table my_extents as
select * from dba_extents;
create index my_extents_i1 on my_extents(block_id);
execute dbms_stats.gather_table_stats('MYOWNER','MY_EXTENTS');
Anyway, I do not know if this holds true in every situation, but it appears that the DBA values from dbv for bigfiles correspond to the block_id values in DBA_EXTENTS.
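As an aside, for smallfile tablespaces the DBA that dbv reports encodes both the file and the block, and (per the manual, for smallfiles only) dbms_utility can decode it. A sketch with a made-up DBA value:

-- hypothetical smallfile DBA; this decoding does not apply to bigfiles
select dbms_utility.data_block_address_file(121634835)  as file_id,
       dbms_utility.data_block_address_block(121634835) as block_id
from dual;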
Bobby
How to configure additional listeners on ODA
Oracle Database Appliance has quite a lot of nice features, but at least one thing is missing from the documentation: how do you configure multiple listeners? odacli apparently doesn't know what a listener is. Let's find out how to add new ones.
odacli

Everything should be done using odacli on ODA, but unfortunately odacli has no commands for configuring listeners:
odacli -h | grep listener
nothing!
You could argue that configuring a listener is easy: you just have to describe it in the listener.ora file, for example:
echo "DBI_LSN=(DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oda-dbi-test)(PORT=1576))))" >> $ORACLE_HOME/network/admin/listener.ora
… and start it with:
lsnrctl start DBI_LSN
While this works fine, it's not the best way to do it. Why? Simply because it will not survive a reboot.
A better way to configure a listener: through Grid Infrastructure

ODA makes use of Grid Infrastructure for its default listener on port 1521. The listener is an Oracle service running in Grid Infrastructure, so additional listeners should also be declared in Grid Infrastructure using srvctl. This is an example of configuring a new listener on port 1576:
su - grid
which srvctl
/u01/app/19.0.0.0/oracle/bin/srvctl
srvctl add listener -listener DBI_LSN -endpoints 1576
srvctl config listener -listener DBI_LSN
Name: DBI_LSN
Type: Database Listener
Network: 1, Owner: grid
Home:
End points: TCP:1576
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
srvctl start listener -listener DBI_LSN
ps -ef | grep tnslsn | grep DBI
oracle 71530 1 0 12:41 ? 00:00:00 /u01/app/19.0.0.0/oracle/bin/tnslsnr DBI_LSN -no_crs_notify -inherit
The new listener is running fine, and the listener.ora has been completed with this new item:
cat /u01/app/19.0.0.0/oracle/network/admin/listener.ora | grep DBI
DBI_LSN=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=DBI_LSN)))) # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_DBI_LSN=ON # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_DBI_LSN=SUBNET # line added by Agent
Of course, configuring a listener on a particular port is only possible if that port is not already in use.
Removing a listener

If you want to remove a listener, you just need to remove the service from Grid Infrastructure:
su - grid
srvctl stop listener -listener DBI_LSN
srvctl remove listener -listener DBI_LSN
ps -ef | grep tnslsn | grep DBI
no more listener DBI_LSN running
cat /u01/app/19.0.0.0/oracle/network/admin/listener.ora | grep DBI
no more configuration in the listener.ora file
Obviously, if you plan to remove a listener, please make sure that no database is using it prior to removing it.
How to use this listener for my database

Since 12c, a new LREG process in the instance handles registering the database with a listener; previously this job was done by the PMON process. The default behavior is to register the instance with the standard listener on port 1521. If you want your database to use the new listener you just created with srvctl, set the local_listener parameter:
su - oracle
. oraenv <<< DBTEST
sqlplus / as sysdba
alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=oda-dbi-test)(PORT=1576))' scope=both;
alter system register;
exit;
No need to reboot anything.
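If you want to double-check which endpoint the instance will register with, the parameter can be queried from the same session (a quick sanity check, nothing more):

-- verify the listener endpoint the instance registers with
select value
from v$parameter
where name = 'local_listener';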
What about the standard 1521 listener?

You may think about removing the standard listener on port 1521, but I wouldn't do that. I think it's better to keep this default one, even if none of your databases are using it. Removing it could later cause trouble when patching or configuring something else on your ODA.
Conclusion

Listener management with odacli may come one day, but for now (19.9 and earlier) you still have to configure listeners through Grid Infrastructure. It's quite easy and pretty straightforward if you do it the proper way.