Feed aggregator
Non-availability of new versions of Oracle Database for Solaris SPARC.
Poor performance on CLEANUP/PURGE of SYS.AUD$
Good Audit Trail Design
Posted by Pete On 02/10/23 At 12:37 PM
About utl_smtp
External table stopped working on Oracle 19c
Partitioning by YYYY and subpartitioning by MM
Pros and Cons of Developing Your Own ChatGPT Plugin
Zero-ETL: What is it and what Oracle products support it
Businesses are constantly finding themselves increasingly reliant on data ingestion, big data, analytics, and following traditional ETL (Extract, Transform, Load) […]
The post Zero-ETL: What is it and what Oracle products support it appeared first on DBASolved.
Drop RAC Database Using RMAN
As part of data center consolidation, some databases are no longer required and need to be dropped.
I have previously posted about dropping databases; however, that covered a single-instance database rather than RAC.
### Check status of database:
db01-oracle:/home/oracle$ srvctl status database -d loadtest -v
Database loadtest is running. Instance status: Open.
db01-oracle:/home/oracle$
### Modify cluster database parameters:
SQL> show parameter cluster_database
NAME TYPE VALUE
------------------------- ----------- ------------
cluster_database boolean FALSE
cluster_database_instances integer 1
SQL> show parameter db_name
NAME TYPE VALUE
------------------------- ----------- ------------
db_name string loadtest
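The CLUSTER_DATABASE parameter above is already FALSE here. On a database still running in RAC mode it would need to be disabled first, and the remaining instances stopped, before a single instance is restarted for the drop. A minimal sketch (database name assumed to be loadtest, as above):

```sql
-- Sketch: disable RAC mode across all instances (spfile change only).
-- Afterwards, stop the database from the OS with:
--   srvctl stop database -d loadtest
-- then restart one instance with: startup force mount restrict
alter system set cluster_database=false scope=spfile sid='*';
```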
### Startup force mount restrict:
SQL> startup force mount restrict;
ORACLE instance started.
Total System Global Area 2.1475E+10 bytes
Fixed Size 3795712 bytes
Variable Size 1.2885E+10 bytes
Database Buffers 8522825728 bytes
Redo Buffers 63311872 bytes
Database mounted.
!!! Verify logins=RESTRICTED and parallel=NO !!!
SQL> select logins, parallel from v$instance;
LOGINS PAR
---------- ---
RESTRICTED NO
SQL>
### RMAN: drop database including backups ###
/home/oracle$ rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Mon Sep 25 11:41:45 2023
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
connected to target database: loadtest (DBID=983159180, not open)
RMAN> drop database including backups;

database name is "loadtest" and DBID is 983159180

Do you really want to drop all backups and the database (enter YES or NO)? yes
using target database control file instead of recovery catalog
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=101 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 7.1.3.0
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=148 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: Data Protection for Oracle: version 7.1.3.0
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=199 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=248 device type=DISK
..............................
deleted archived log
archived log file name=+ORFLASHBK/loadtest/ARCHIVELOG/2023_09_22/thread_1_seq_150008.8873.1148227803 RECID=150008 STAMP=1148227802
Deleted 8 objects
..............................
database name is "loadtest" and DBID is 983159180
database dropped
RMAN>
How To Drop A RAC Database Using RMAN (Doc ID 732042.1)
Joel Kallman Day 2023 : Announcement
Since 2016 we’ve had an Oracle community day where we push out content on the same day to try and get a bit of a community buzz. The name has changed over the years, but in 2021 it was renamed to the “Joel Kallman Day”. Joel was big on community, and it seems like a … Continue reading "Joel Kallman Day 2023 : Announcement"
The post Joel Kallman Day 2023 : Announcement first appeared on The ORACLE-BASE Blog. It was first posted on October 1, 2023 at 3:39 pm.
ERP Oracle Database Security
Posted by Pete On 29/09/23 At 01:20 PM
Schema Privileges and Grant option
Find the select statement for a sys_refcursor
Oracle GoldenGate 23c “Free” – What you need to know!
Last week at Oracle Cloud World, the Oracle GoldenGate Product team announced the latest version of Oracle GoldenGate “Free”! This […]
The post Oracle GoldenGate 23c “Free” – What you need to know! appeared first on DBASolved.
LLama 2 LLM for PDF Invoice Data Extraction
A Performance Comparison between Oracle’s REGEXP_SUBSTR and SUBSTR functions
Oracle SQL offers support for POSIX regular expressions by means of a suite of native functions.

Whilst it provides the complex search conditions that regular expressions offer, REGEXP_SUBSTR has acquired a reputation for being a fair bit slower than the standard SUBSTR function.
What I’m going to demonstrate here is :
- how SUBSTR seems generally to be faster than REGEXP_SUBSTR
- how the performance of REGEXP_SUBSTR can improve dramatically when used with INSTR
- how REGEXP_SUBSTR performs better when it matches the start of a string
To start with though, we'll discover why you'll never see a Sith Lord on Sesame Street ( hint : it's the way they count in a Galaxy Far, Far Away)…
A Test Table
We have a two-column table that looks like this :
create table star_wars_numbers
(
id number generated always as identity,
episode varchar2(4000)
)
/
insert into star_wars_numbers( episode)
values('Four');
insert into star_wars_numbers( episode)
values('Four|Five');
insert into star_wars_numbers( episode)
values('Four|Five|Six');
insert into star_wars_numbers( episode)
values('Four|Five|Six|One');
insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two');
insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three');
insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven');
insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven|Three-and-a-bit');
insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven|Three-and-a-bit|Three and a bit less of a bit');
insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven|Three-and-a-bit|Three and a bit less of a bit|Eight');
insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven|Three-and-a-bit|Three and a bit less of a bit|Eight|Nine');
commit;
Whilst the contents of the EPISODE column may be the result of arcane Sith ritual, Moof-Milker activity in the design process cannot be ruled out.
Either way, the EPISODE column contains up to 11 values, separated by a '|'.
Extracting each value in SQL
Using SUBSTR and INSTR to separate out the values in EPISODE is, as ever, quite a bit of typing…
select
id,
episode,
substr
(
episode,
1,
case
when instr(episode, '|') > 0 then instr(episode, '|') -1
else length(episode)
end
) as "A New Hope",
case when instr(episode, '|', 1, 1) > 0 then
substr
(
episode,
instr(episode, '|', 1, 1) + 1, -- start
case
when instr(episode, '|', 1, 2) > 0 then instr(episode, '|', 1,2) -1
else length(episode)
end
-
instr(episode, '|', 1, 1)
)
end as "The Empire Strikes Back",
case when instr(episode, '|', 1, 2) > 0 then
substr( episode, instr(episode, '|', 1, 2) + 1,
case when instr(episode, '|', 1,3) > 0 then instr(episode, '|',1,3) - 1 else length(episode) end
-
instr(episode, '|', 1, 2))
end as "Return of the Jedi",
case when instr(episode, '|', 1, 3) > 0 then
substr( episode, instr(episode, '|', 1, 3) + 1,
case when instr(episode, '|', 1,4) > 0 then instr(episode, '|',1,4) - 1 else length(episode) end
-
instr(episode, '|', 1, 3))
end as "Phantom Menace",
case when instr(episode, '|', 1, 4) > 0 then
substr( episode, instr(episode, '|', 1, 4) + 1,
case when instr(episode, '|', 1,5) > 0 then instr(episode, '|',1,5) - 1 else length(episode) end
-
instr(episode, '|', 1, 4))
end as "Attack of the Clones",
case when instr(episode, '|', 1, 5) > 0 then
substr( episode, instr(episode, '|', 1, 5) + 1,
case when instr(episode, '|', 1,6) > 0 then instr(episode, '|',1,6) - 1 else length(episode) end
-
instr(episode, '|', 1, 5))
end as "Revenge of the Sith",
case when instr(episode, '|', 1, 6) > 0 then
substr( episode, instr(episode, '|', 1, 6) + 1,
case when instr(episode, '|', 1,7) > 0 then instr(episode, '|',1,7) - 1 else length(episode) end
-
instr(episode, '|', 1, 6))
end as "The Force Awakens",
case when instr(episode, '|', 1, 7) > 0 then
substr( episode, instr(episode, '|', 1, 7) + 1,
case when instr(episode, '|', 1,8) > 0 then instr(episode, '|',1,8) - 1 else length(episode) end
-
instr(episode, '|', 1, 7))
end as "Rogue One",
case when instr(episode, '|', 1, 8) > 0 then
substr( episode, instr(episode, '|', 1, 8) + 1,
case when instr(episode, '|', 1,9) > 0 then instr(episode, '|',1,9) - 1 else length(episode) end
-
instr(episode, '|', 1, 8))
end as "Solo",
case when instr(episode, '|', 1, 9) > 0 then
substr( episode, instr(episode, '|', 1, 9) + 1,
case when instr(episode, '|', 1, 10) > 0 then instr(episode, '|',1, 10) - 1 else length(episode) end
-
instr(episode, '|', 1, 9))
end as "The Last Jedi",
case when instr(episode, '|', 1, 10) > 0 then
substr( episode, instr(episode, '|', 1, 10) + 1,
case when instr(episode, '|', 1, 11) > 0 then instr(episode, '|',1, 11) - 1 else length(episode) end
-
instr(episode, '|', 1, 10))
end as "The Rise of Skywalker"
from star_wars_numbers
/
…although these days ( version 12c and later), we can save ourselves a bit of effort by using an inline function…
with function extract_episode( i_string in varchar2, i_pos in pls_integer)
return varchar2
is
begin
if i_pos = 1 then
return substr(i_string, 1, case when instr(i_string, '|') > 0 then instr(i_string, '|') - 1 else length(i_string) end);
end if;
return
case when instr(i_string, '|', 1, i_pos -1 ) > 0 then
substr( i_string, instr(i_string, '|', 1, i_pos -1) + 1,
case when instr(i_string, '|', 1, i_pos) > 0 then instr(i_string, '|',1, i_pos) - 1 else length(i_string) end
-
instr( i_string, '|', 1, i_pos - 1))
end;
end;
select
id, episode,
extract_episode( episode, 1) as "A New Hope",
extract_episode( episode, 2) as "Empire Strikes Back",
extract_episode( episode, 3) as "Return of the Jedi",
extract_episode( episode, 4) as "The Phantom Menace",
extract_episode( episode, 5) as "Attack of the Clones",
extract_episode( episode, 6) as "Revenge of the Sith",
extract_episode( episode, 7) as "The Force Awakens",
extract_episode( episode, 8) as "Rogue One",
extract_episode( episode, 9) as "Solo",
extract_episode( episode, 10) as "The Last Jedi",
extract_episode( episode, 11) as "The Rise of Skywalker"
from star_wars_numbers
/
Whether you find the equivalent regexp query more elegant or just horrible is somewhat subjective :
select
regexp_substr(episode, '[^|]+') as "A New Hope",
regexp_substr(episode, '[^|]+',1, 2) as "The Empire Strikes Back",
regexp_substr(episode, '[^|]+',1, 3) as "Return of the Jedi",
regexp_substr(episode, '[^|]+',1, 4) as "The Phantom Menace",
regexp_substr(episode, '[^|]+',1, 5) as "Attack of the Clones",
regexp_substr(episode, '[^|]+',1, 6) as "Revenge of the Sith",
regexp_substr(episode, '[^|]+',1, 7) as "The Force Awakens",
regexp_substr(episode, '[^|]+',1, 8) as "Rogue One",
regexp_substr(episode, '[^|]+',1, 9) as "Solo",
regexp_substr(episode, '[^|]+',1, 10) as "The Last Jedi",
regexp_substr(episode, '[^|]+',1, 11) as "The Rise of Skywalker"
from star_wars_numbers
/
Irrespective of aesthetic considerations, I should explain the regexp in use here, if only so I can understand it when I read this a few months from now :
[^|] - match any character that isn't a pipe ("|")
+    - match one or more times
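A quick sanity check of the pattern against a literal taken from the test data (the EP column alias is just for display):

```sql
-- Occurrence 2 of a run of non-pipe characters, starting at position 1
select regexp_substr('Four|Five|Six', '[^|]+', 1, 2) as ep
from dual;
-- returns: Five
```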
Next, we need to find out how the regexp function stacks up when it comes to performance, and we’re not going to find that with a mere 11 rows of data…
Performance Test Setup
Let's make some use of all that space in the Galaxy Far, Far Away…
create table galaxy (
id number generated always as identity,
episode varchar2(4000))
/
begin
for i in 1..100000 loop
insert into galaxy(episode)
select episode from star_wars_numbers;
commit;
end loop;
end;
/
exec dbms_stats.gather_table_stats(user, 'galaxy');
All of the following tests were run on Oracle 19c Enterprise Edition running on Oracle Linux.
Everything was run in a SQL*Plus session from the command line on the database server.
The queries were run in SQL*Plus with the following settings :
set lines 130
clear screen
set autotrace trace
set timing on
Each query was executed at least twice consecutively to ensure that results were not skewed by the state of the buffer cache.
It's also worth noting that I found no low-level explanation for the performance discrepancies between the two functions when trawling through trace files. Therefore, I've concentrated on elapsed time as the main performance metric in these tests.
Test 1 – All Fields extracted in the Select Clause
Let's start with the SUBSTR function ( referred to as "Standard" henceforth) :
select
id,
substr
(
episode, -- input string
1, -- start position
case
when instr(episode, '|') > 0 then instr(episode, '|') -1
else length(episode)
end -- number of characters to extract
) as "A New Hope",
case when instr(episode, '|', 1, 1) > 0 then
substr
(
episode,
instr(episode, '|', 1, 1) + 1, -- start
case
when instr(episode, '|', 1, 2) > 0 then instr(episode, '|', 1,2) -1
else length(episode)
end
-
instr(episode, '|', 1, 1)
)
end as "The Empire Strikes Back",
case when instr(episode, '|', 1, 2) > 0 then
substr( episode, instr(episode, '|', 1, 2) + 1,
case when instr(episode, '|', 1,3) > 0 then instr(episode, '|',1,3) - 1 else length(episode) end
-
instr(episode, '|', 1, 2))
end as "Return of the Jedi",
case when instr(episode, '|', 1, 3) > 0 then
substr( episode, instr(episode, '|', 1, 3) + 1,
case when instr(episode, '|', 1,4) > 0 then instr(episode, '|',1,4) - 1 else length(episode) end
-
instr(episode, '|', 1, 3))
end as "Phantom Menace",
case when instr(episode, '|', 1, 4) > 0 then
substr( episode, instr(episode, '|', 1, 4) + 1,
case when instr(episode, '|', 1,5) > 0 then instr(episode, '|',1,5) - 1 else length(episode) end
-
instr(episode, '|', 1, 4))
end as "Attack of the Clones",
case when instr(episode, '|', 1, 5) > 0 then
substr( episode, instr(episode, '|', 1, 5) + 1,
case when instr(episode, '|', 1,6) > 0 then instr(episode, '|',1,6) - 1 else length(episode) end
-
instr(episode, '|', 1, 5))
end as "Revenge of the Sith",
case when instr(episode, '|', 1, 6) > 0 then
substr( episode, instr(episode, '|', 1, 6) + 1,
case when instr(episode, '|', 1,7) > 0 then instr(episode, '|',1,7) - 1 else length(episode) end
-
instr(episode, '|', 1, 6))
end as "The Force Awakens",
case when instr(episode, '|', 1, 7) > 0 then
substr( episode, instr(episode, '|', 1, 7) + 1,
case when instr(episode, '|', 1,8) > 0 then instr(episode, '|',1,8) - 1 else length(episode) end
-
instr(episode, '|', 1, 7))
end as "Rogue One",
case when instr(episode, '|', 1, 8) > 0 then
substr( episode, instr(episode, '|', 1, 8) + 1,
case when instr(episode, '|', 1,9) > 0 then instr(episode, '|',1,9) - 1 else length(episode) end
-
instr(episode, '|', 1, 8))
end as "Solo",
case when instr(episode, '|', 1, 9) > 0 then
substr( episode, instr(episode, '|', 1, 9) + 1,
case when instr(episode, '|', 1, 10) > 0 then instr(episode, '|',1, 10) - 1 else length(episode) end
-
instr(episode, '|', 1, 9))
end as "The Last Jedi",
case when instr(episode, '|', 1, 10) > 0 then
substr( episode, instr(episode, '|', 1, 10) + 1,
case when instr(episode, '|', 1, 11) > 0 then instr(episode, '|',1, 11) - 1 else length(episode) end
-
instr(episode, '|', 1, 10))
end as "The Rise of Skywalker"
from galaxy
/
Running this query, we get :
1100000 rows selected.

Elapsed: 00:00:20.32

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   35857676  bytes sent via SQL*Net to client
     811886  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed
Before we look at the REGEXP_SUBSTR equivalent, it’s probably worth considering the more streamlined in-line function version of this query :
with function extract_episode( i_string in varchar2, i_pos in pls_integer)
return varchar2
is
begin
return
case when instr(i_string, '|', 1, i_pos -1 ) > 0 then
substr( i_string, instr(i_string, '|', 1, i_pos -1) + 1,
case when instr(i_string, '|', 1, i_pos) > 0 then instr(i_string, '|',1, i_pos) - 1 else length(i_string) end
-
instr( i_string, '|', 1, i_pos - 1))
end;
end;
select
id,
substr
(
episode,
1,
case
when instr(episode, '|') > 0 then instr(episode, '|') -1
else length(episode)
end
) as "A New Hope",
extract_episode( episode, 2) as "Empire Strikes Back",
extract_episode( episode, 3) as "Return of the Jedi",
extract_episode( episode, 4) as "The Phantom Menace",
extract_episode( episode, 5) as "Attack of the Clones",
extract_episode( episode, 6) as "Revenge of the Sith",
extract_episode( episode, 7) as "The Force Awakens",
extract_episode( episode, 8) as "Rogue One",
extract_episode( episode, 9) as "Solo",
extract_episode( episode, 10) as "The Last Jedi",
extract_episode( episode, 11) as "The Rise of Skywalker"
from galaxy
/
Whilst it’s a bit more readable, it’s also a lot slower :
1100000 rows selected.

Elapsed: 00:00:41.76

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   35857676  bytes sent via SQL*Net to client
     810042  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed
REGEXP_SUBSTR however, takes slow to a new level…
select id,
regexp_substr(episode, '[^|]+') as "A New Hope",
regexp_substr(episode, '[^|]+',1, 2) as "The Empire Strikes Back",
regexp_substr(episode, '[^|]+',1, 3) as "Return of the Jedi",
regexp_substr(episode, '[^|]+',1, 4) as "The Phantom Menace",
regexp_substr(episode, '[^|]+',1, 5) as "Attack of the Clones",
regexp_substr(episode, '[^|]+',1, 6) as "Revenge of the Sith",
regexp_substr(episode, '[^|]+',1, 7) as "The Force Awakens",
regexp_substr(episode, '[^|]+',1, 8) as "Rogue One",
regexp_substr(episode, '[^|]+',1, 9) as "Solo",
regexp_substr(episode, '[^|]+',1, 10) as "The Last Jedi",
regexp_substr(episode, '[^|]+',1, 11) as "The Rise of Skywalker"
from galaxy
/
1100000 rows selected.

Elapsed: 00:01:27.25

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   35857680  bytes sent via SQL*Net to client
     809519  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed
Now see what happens when we give REGEXP_SUBSTR a little help from the humble INSTR (the “hybrid” query) :
select id,
regexp_substr(episode, '[^|]+') as "A New Hope",
decode( instr(episode, '|'), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,1) + 1)) as "The Empire Strikes Back",
decode( instr(episode, '|',1, 2), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,2) + 1)) as "Return of the Jedi",
decode( instr(episode, '|',1, 3), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,3) + 1)) as "The Phantom Menace",
decode( instr(episode, '|',1, 4), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,4) + 1)) as "Attack of the Clones",
decode( instr(episode, '|',1, 5), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,5) + 1)) as "Revenge of the Sith",
decode( instr(episode, '|',1, 6), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,6) + 1)) as "The Force Awakens",
decode( instr(episode, '|',1, 7), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,7) + 1)) as "Rogue One",
decode( instr(episode, '|',1, 8), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,8) + 1)) as "Solo",
decode( instr(episode, '|',1, 9), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,9) + 1)) as "The Last Jedi",
decode( instr(episode, '|',1, 10), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,10) + 1)) as "The Rise of Skywalker"
from galaxy
/
Yes, I have cheated a bit on the aesthetics here by using the more compact DECODE instead of CASE.
However, this does not affect the runtime of the query, which is a bit faster than the pure REGEXP_SUBSTR equivalent :
1100000 rows selected.

Elapsed: 00:00:30.83

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   35857680  bytes sent via SQL*Net to client
     810158  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed

Test 1 Results Summary

Query Method                        Time (secs)
SUBSTR (Standard)                         20.32
REGEXP_SUBSTR with INSTR (Hybrid)         30.83
SUBSTR with in-line function              41.76
REGEXP_SUBSTR (Regexp)                    87.25
The performance of the Hybrid approach does raise the question of how the REGEXP_SUBSTR compares when we’re just looking to extract a single field from a string, rather than all of them…
Test 2 – Extract the first field only
In this instance we're just looking for the first field in EPISODES.
In this context, the hybrid approach doesn’t apply because we’re always starting our search at the start of the input string.
Starting, once again with the Standard approach :
select id,
substr
(
episode, 1,
case
when instr(episode, '|', 1,1) > 0 then instr(episode, '|', 1,1) -1
else length(episode)
end
) as "A New Hope"
from galaxy
/
1100000 rows selected.

Elapsed: 00:00:05.33

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   25086067  bytes sent via SQL*Net to client
     808790  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed
Now for REGEXP_SUBSTR :
select id,
regexp_substr( episode, '[^|]+') as "A New Hope"
from galaxy
/
1100000 rows selected.

Elapsed: 00:00:06.38

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   25086067  bytes sent via SQL*Net to client
     808868  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed
Interestingly, whilst it’s still not as fast, the Regexp runtime is only 20% longer than the Standard.
This contrasts markedly with our first test, where the Regexp query took over four times as long as the Standard.
Test 2 Results Summary

Query Method   Time (secs)
Standard             5.33
Regexp               6.38

Does this relative performance hold true for any single field in the input string ?
Test 3 – Extract the last field only
Starting with the Standard query :
select
id,
case when instr(episode, '|', 1, 10) > 0 then
substr( episode, instr(episode, '|', 1, 10) + 1,
case when instr(episode, '|', 1, 11) > 0 then instr(episode, '|',1, 11) - 1 else length(episode) end
-
instr(episode, '|', 1, 10))
end as "The Rise of Skywalker"
from galaxy
/
1100000 rows selected.

Elapsed: 00:00:05.44

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   25686033  bytes sent via SQL*Net to client
     808915  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed
Once again, the Regexp Query is much slower…
select
id,
regexp_substr(episode, '[^|]+',1, 11) as "The Rise of Skywalker"
from galaxy
/
1100000 rows selected.

Elapsed: 00:00:16.16

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   25686033  bytes sent via SQL*Net to client
     808888  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed
…unless we use the hybrid approach…
select
id,
decode( instr(episode, '|',1, 10), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,10) + 1)) as "The Rise of Skywalker"
from galaxy
/
1100000 rows selected.

Elapsed: 00:00:05.60

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   25686033  bytes sent via SQL*Net to client
     808736  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed

Test 3 Results Summary

Query Method   Time (secs)
Standard             5.44
Hybrid               5.60
Regexp              16.16

Conclusions
These tests seem to bear out that, in general, SUBSTR offers better performance than REGEXP_SUBSTR.
However, the performance of REGEXP_SUBSTR can be greatly improved if used in conjunction with INSTR.
As always, the decision as to which function to use will depend on the specific circumstances you find yourself in.
Sql Konferenz 2023 in Hanau, Germany
It was a great pleasure to take part in the Sql Server Konferenz 2023 in Hanau, Germany, on 12 and 13 September 2023.
I went with my colleague Petar, from the same Microsoft Sql Server team at dbi services, Switzerland.
For Microsoft Sql Server fans, this event was really the right place to be.
Where is Hanau located
First of all, for those who don't know where Hanau is: it's a small and charming city in Germany, 350 km from Basel, Switzerland (a 3 hour 30 min drive), or 30 km from Frankfurt, Germany.

The event took place in the Congress Park Hanau

Once you arrive, the first step, as usual, is to go to the registration desk to get your badge and a blue bag containing some useful items (pen, notepad, programme…)


Take a look at the agenda here: there are 6 parallel sessions of 1 hour each, all very interesting and covering varied themes (Sql Server on-premises, Azure, Power BI, best practices…), from 9 am to 5 pm.
Let's start now
It was very intense and time passed very quickly; with so many choices of which session to attend, it was a real dilemma each time.

During the keynote on the first day, all speakers, partners and sponsors briefly introduced what they would present.


Here are some examples of great sessions and speakers. I cannot enumerate them all, of course, as all were great and there were a lot; to keep it simple, I will just mention some of the Sql Server MVP speakers, who are evangelists for us, as we use their experience in our daily Sql Server work:
1. Bob Ward – Principal Architect, Microsoft Azure Data.
For all new versions, new features and so on, he is among the first to know about them and to reveal them officially.


Presenting the topic: Transitioning your On-Prem Database Skills to Azure SQL
2. Rie Merritt – Sr Program Manager, MVP & SQL Community for Microsoft, Organizer




Here with Rie Merritt and my colleague Petar Prljevic
3. Glenn Berry – Services Database Engineer, Microsoft Data Platform MVP
Presenting the topic: How to Survive a SQL Server Health Check
He presented some of his diagnostic queries, which are widely used and very helpful for DBAs like me in our daily duties at customer sites.




It was really a great Microsoft SQL Server event; in my opinion too short, because there were so many interesting topics.
I recommend it to everyone who wants to stay up to date or round out their knowledge.
The article Sql Konferenz 2023 in Hanau, Germany appeared first on the dbi Blog.
Restoring a PDB from another CDB can increase your SCN
I start with a 19c CDB called "ORCLCDB" containing the PDB "ORCLPDB1".
SQL> col name format a16 SQL> select dbid, name, current_scn from v$database; DBID NAME CURRENT_SCN ---------- ---------------- ----------- 2778483057 ORCLCDB 20906515 SQL> select dbid, guid, name, open_mode from v$pdbs; DBID GUID NAME OPEN_MODE ---------- -------------------------------- ---------------- ---------- 2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED READ ONLY 4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1 READ WRITE 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT MOUNTED SQL> SQL> alter pluggable database orclpdb1 close; Pluggable database altered. SQL> quit Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.12.0.0.0 oracle19c>rman target / Recovery Manager: Release 19.0.0.0.0 - Production on Sun Sep 24 09:39:55 2023 Version 19.12.0.0.0 Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved. connected to target database: ORCLCDB (DBID=2778483057) RMAN> backup pluggable database orclpdb1 tag for_migration; Starting backup at 24-SEP-23 using channel ORA_DISK_1 using channel ORA_DISK_2 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00034 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_tpcctab_jyl0mzsp_.dbf input datafile file number=00009 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf input datafile file number=00026 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf input datafile file number=00032 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf channel ORA_DISK_1: starting piece 1 at 24-SEP-23 channel ORA_DISK_2: starting full datafile backup set channel ORA_DISK_2: specifying datafile(s) in backup set input datafile file number=00011 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf input datafile file number=00010 
name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf input datafile file number=00012 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf input datafile file number=00031 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf channel ORA_DISK_2: starting piece 1 at 24-SEP-23 channel ORA_DISK_1: finished piece 1 at 24-SEP-23 piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2023_09_24/o1_mf_nnndf_FOR_MIGRATION_ljz5825q_.bkp tag=FOR_MIGRATION comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03 channel ORA_DISK_2: finished piece 1 at 24-SEP-23 piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2023_09_24/o1_mf_nnndf_FOR_MIGRATION_ljz5828m_.bkp tag=FOR_MIGRATION comment=NONE channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03 Finished backup at 24-SEP-23 Starting Control File and SPFILE Autobackup at 24-SEP-23 piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2023_09_24/o1_mf_s_1148377605_ljz585bw_.bkp comment=NONE Finished Control File and SPFILE Autobackup at 24-SEP-23 RMAN> RMAN> quit Recovery Manager complete. oracle19c>sqlplus / as sysdba SQL*Plus: Release 19.0.0.0.0 - Production on Sun Sep 24 09:48:04 2023 Version 19.12.0.0.0 Copyright (c) 1982, 2021, Oracle. All rights reserved. Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.12.0.0.0 SQL> alter pluggable database orclpdb1 unplug into '/opt/oracle/product/19c/dbhome_1/orclpdb1.xml'; Pluggable database altered. SQL> !ls -l /opt/oracle/product/19c/dbhome_1/orclpdb1.xml -rw-r--r--. 1 oracle oinstall 12583 Sep 24 09:50 /opt/oracle/product/19c/dbhome_1/orclpdb1.xml SQL> SQL> quit Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.12.0.0.0 oracle19c>
Now I go to the target CDB CDB2 and identify that there is no conflicting PDB. I also find that it has a lower SCN.
SQL> col name format a16 SQL> select dbid, name, current_scn from v$database; DBID NAME CURRENT_SCN ---------- ---------------- ----------- 837065713 CDB2 20664227 SQL> select dbid, name, current_scn from v$database; DBID NAME CURRENT_SCN ---------- ---------------- ----------- 837065713 CDB2 20664231 SQL> SQL> select * from v$pdbs 2 / CON_ID DBID CON_UID GUID ---------- ---------- ---------- -------------------------------- NAME -------------------------------------------------------------------------------------------------------------------------------- OPEN_MODE RES OPEN_TIME CREATE_SCN TOTAL_SIZE BLOCK_SIZE RECOVERY ---------- --- --------------------------------------------------------------------------- ---------- ---------- ---------- -------- SNAPSHOT_PARENT_CON_ID APP APP APP APPLICATION_ROOT_CON_ID APP PRO LOCAL_UNDO UNDO_SCN UNDO_TIME CREATION_ DIAGNOSTICS_SIZE ---------------------- --- --- --- ----------------------- --- --- ---------- ---------- --------- --------- ---------------- PDB_COUNT AUDIT_FILES_SIZE MAX_SIZE MAX_DIAGNOSTICS_SIZE MAX_AUDIT_SIZE LAST_CHANGE TEM ---------- ---------------- ---------- -------------------- -------------- ----------- --- TENANT_ID ------------------------------------------------------------------------------------------------------------------------------------ UPGRADE_LEVEL GUID_BASE64 ------------- ------------------------------ 2 2054948555 2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED READ ONLY NO 24-SEP-23 09.26.31.678 AM +08:00 1997190 957349888 8192 ENABLED NO NO NO NO NO 1 280 04-MAY-19 0 0 0 0 0 0 COMMON USER NO 1 iBKSY7mfS73gUwEAAH973wA= 3 2205532678 2205532678 E6BD9C73839C10BEE0550A00275FC834 PDBTMP MOUNTED 17541716 999292928 8192 ENABLED NO NO NO NO NO 1 280 21-AUG-22 0 0 0 0 0 0 COMMON USER NO 1 5r2cc4OcEL7gVQoAJ1/INAA=
I then have the backups and the "closed" datafiles from the source CDB copied over to the target. (The CREATE PLUGGABLE DATABASE command will not use the RMAN backups but the PDB describe XML file and the "closed" datafiles.)
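As an aside: the plug-in shown in this post uses the COPY clause, which duplicates the datafiles under db_create_file_dest. A minimal alternative sketch, assuming the unplugged datafiles are already on storage the target CDB can read and write at their final paths, would be to plug them in without copying:

```sql
-- Sketch only (not from this demo): plug in an unplugged PDB reusing
-- the datafiles in place. NOCOPY assumes the files referenced in the
-- XML manifest are already at their final, target-visible paths.
CREATE PLUGGABLE DATABASE orclpdb1_new
  USING '/tmp/From_Source/orclpdb1.xml'
  NOCOPY
  TEMPFILE REUSE;
```

NOCOPY avoids the duplicate I/O of COPY, but the source CDB must no longer use those files, since they are adopted as-is.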
oracle19c>pwd /tmp/From_Source oracle19c>ls -l total 1882384 -rw-r-----. 1 oracle oinstall 1146109952 Sep 24 09:56 o1_mf_nnndf_FOR_MIGRATION_ljz5825q_.bkp -rw-r-----. 1 oracle oinstall 758202368 Sep 24 09:56 o1_mf_nnndf_FOR_MIGRATION_ljz5828m_.bkp -rw-r-----. 1 oracle oinstall 23232512 Sep 24 09:55 o1_mf_s_1148377605_ljz585bw_.bkp -rw-r--r--. 1 oracle oinstall 12583 Sep 24 09:59 orclpdb1.xml oracle19c>
I now prepare to plug the PDB into the target CDB2. First I check for compatibility, and then I plug in the PDB under a new name, "ORCLPDB1_NEW".
SQL> select dbid, name, current_scn from v$database; DBID NAME CURRENT_SCN ---------- ---------------- ----------- 837065713 CDB2 20664231 SQL> set serveroutput on SQL> declare 2 compatible constant varchar2(3) := 3 case dbms_pdb.check_plug_compatibility( 4 pdb_descr_file=>'/tmp/From_Source/orclpdb1.xml', 5 pdb_name=>'ORCLPDB1') 6 when true then 'YES' 7 else 'NO' 8 END; 9 begin 10 dbms_output.put_line(compatible); 11 end; 12 / YES PL/SQL procedure successfully completed. SQL> SQL> show parameter db_create NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ db_create_file_dest string /opt/oracle/oradata SQL> SQL> create pluggable database orclpdb1_new 2 using '/tmp/From_Source/orclpdb1.xml' 3 copy; --- copy will copy the datafiles to the location based on db_create_file_dest and CDB database name Pluggable database created. SQL> SQL> select dbid, name, current_scn from v$database; DBID NAME CURRENT_SCN ---------- ---------------- ----------- 837065713 CDB2 20910195 SQL> SQL> select dbid, guid, name, open_mode from v$pdbs; DBID GUID NAME OPEN_MODE ---------- -------------------------------- ---------------- ---------- 2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED READ ONLY 2205532678 E6BD9C73839C10BEE0550A00275FC834 PDBTMP MOUNTED 4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1_NEW READ WRITE SQL>
The SCN of the target CDB2 has now advanced to the much higher 20910195 (it was 20664231 before I "plugged in" ORCLPDB1 as ORCLPDB1_NEW). This is because it "read" the SCN from the headers of the datafiles that were plugged in. Back on the source ORCLCDB, I can now drop the original ORCLPDB1:
SQL> select dbid, name, current_scn from v$database; DBID NAME CURRENT_SCN ---------- ---------------- ----------- 2778483057 ORCLCDB 20910076 SQL> select dbid, guid, name, open_mode from v$pdbs; DBID GUID NAME OPEN_MODE ---------- -------------------------------- ---------------- ---------- 2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED READ ONLY 4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1 MOUNTED 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT MOUNTED SQL> SQL> drop pluggable database orclpdb1 including datafiles; Pluggable database dropped. SQL> SQL> select dbid, guid, name, open_mode from v$pdbs; DBID GUID NAME OPEN_MODE ---------- -------------------------------- ---------------- ---------- 2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED READ ONLY 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT MOUNTED SQL> SQL> quit Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.12.0.0.0 oracle19c>
So, I have "migrated" ORCLPDB1 from ORCLCDB to CDB2 and, in the process, have found that the SCN in CDB2 got reset to a much higher value, because it "inherited" the higher Checkpoint SCN that was in the datafiles of ORCLPDB1.
SQL> select con_id, file#, plugin_change#, checkpoint_change# 2 from v$datafile 3 order by 1,2 4 / CON_ID FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE# ---------- ---------- -------------- ------------------ 1 1 0 20658011 1 3 0 20658011 1 4 0 20658011 1 7 0 20658011 2 5 0 13768423 2 6 0 13768423 2 8 0 13768423 3 58 0 20657657 3 59 0 20657657 3 60 0 20657657 4 69 0 20908595 4 70 0 20908595 4 71 0 20908595 4 72 0 20908595 4 73 0 20908595 4 74 0 20908595 4 75 0 20908595 4 76 0 20908595 18 rows selected. SQL> alter system checkpoint; System altered. SQL> select con_id, file#, plugin_change#, checkpoint_change# 2 from v$datafile 3 order by 1,2 4 / CON_ID FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE# ---------- ---------- -------------- ------------------ 1 1 0 20912767 1 3 0 20912767 1 4 0 20912767 1 7 0 20912767 2 5 0 13768423 2 6 0 13768423 2 8 0 13768423 3 58 0 20657657 3 59 0 20657657 3 60 0 20657657 4 69 0 20912767 4 70 0 20912767 4 71 0 20912767 4 72 0 20912767 4 73 0 20912767 4 74 0 20912767 4 75 0 20912767 4 76 0 20912767 18 rows selected. SQL> col name format a16 SQL> select con_id, name, open_mode from v$pdbs; CON_ID NAME OPEN_MODE ---------- ---------------- ---------- 2 PDB$SEED READ ONLY 3 PDBTMP MOUNTED 4 ORCLPDB1_NEW READ WRITE SQL> alter pluggable database pdbtmp open; Pluggable database altered. SQL> select con_id, file#, plugin_change#, checkpoint_change# 2 from v$datafile 3 order by 1,2 4 / CON_ID FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE# ---------- ---------- -------------- ------------------ 1 1 0 20912767 1 3 0 20912767 1 4 0 20912767 1 7 0 20912767 2 5 0 13768423 2 6 0 13768423 2 8 0 13768423 3 58 0 20912888 3 59 0 20912888 3 60 0 20912888 4 69 0 20912767 4 70 0 20912767 4 71 0 20912767 4 72 0 20912767 4 73 0 20912767 4 74 0 20912767 4 75 0 20912767 4 76 0 20912767 18 rows selected. SQL> alter system checkpoint; System altered. 
SQL> select con_id, file#, plugin_change#, checkpoint_change# 2 from v$datafile 3 order by 1,2 4 / CON_ID FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE# ---------- ---------- -------------- ------------------ 1 1 0 20912967 1 3 0 20912967 1 4 0 20912967 1 7 0 20912967 2 5 0 13768423 2 6 0 13768423 2 8 0 13768423 3 58 0 20912967 3 59 0 20912967 3 60 0 20912967 4 69 0 20912967 4 70 0 20912967 4 71 0 20912967 4 72 0 20912967 4 73 0 20912967 4 74 0 20912967 4 75 0 20912967 4 76 0 20912967 18 rows selected. SQL> SQL> alter session set container=ORCLPDB1_NEW; Session altered. SQL> col name format a54 SQL> select name from v$datafile; NAME ------------------------------------------------------ /opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA 424/datafile/o1_mf_system_ljz7d02h_.dbf /opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA 424/datafile/o1_mf_sysaux_ljz7d02l_.dbf /opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA 424/datafile/o1_mf_undotbs1_ljz7d02m_.dbf /opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA 424/datafile/o1_mf_users_ljz7d02o_.dbf /opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA 424/datafile/o1_mf_data_min_ljz7d02p_.dbf /opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA 424/datafile/o1_mf_hemant_d_ljz7d02r_.dbf /opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA 424/datafile/o1_mf_hemant_d_ljz7d02s_.dbf /opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA 424/datafile/o1_mf_tpcctab_ljz7d02t_.dbf 8 rows selected. SQL>
CON_ID=1 is the Root, and its datafiles get the new SCN after a checkpoint. Note that these datafiles still had the "older" SCN 20658011 before I issued the checkpoint.
Oracle Forensics Response
Posted by Pete On 22/09/23 At 01:07 PM
ODA X10: the new Oracle Database Appliances are already there
The ODA X8-2 series was sold for 3 years, and I would have expected the X9-2 series to be on the market for 2-3 years, but I was wrong: the ODA X10 series has just been announced. Maybe Oracle has a different strategy for appliances now; I will explain my thoughts later.
You will notice that Oracle removed the number 2 from the name of this new ODA series: X9-2S becomes X10-S, X9-2L becomes X10-L and X9-2HA becomes X10-HA. The number 2 makes less sense today: it referred to the number of sockets/processors. But the X10-S only has 1 processor, the X10-L has 2, and the X10-HA has 4 (2x 2). For sure, the ODA nodes are still 2-socket servers, but this number was quite confusing for some people.
Now let’s try to find out what’s new, what’s good and what’s not so good.
What is an Oracle Database Appliance?
ODA, or Oracle Database Appliance, is an engineered system from Oracle. Basically, it's an x86-64 server with a dedicated software distribution including Linux, the Oracle database software, a Command Line Interface (CLI), a Browser User Interface (BUI) and a virtualization layer. The goal is to simplify the database lifecycle and maximize performance. Market positioning for the ODA is quite tricky for Oracle: it sits between OCI (the Oracle public cloud) and Exadata (the highest-level database hardware, a kind of ODA on steroids). For some people, the ODA avoids migrating to OCI by bringing enough simplification to Oracle Database environments; for others, the ODA is powerful enough for most workloads and Exadata wouldn't make sense. For me, the ODA has always been one of my preferred solutions, and undoubtedly a solution to consider.
For Enterprise Edition users, ODA has a strong advantage over its competitors: capacity on demand licensing. You can enable as few as 2 cores per node, meaning a single processor licence. You can later scale up by enabling additional cores according to your needs.
From Intel Xeon to AMD Epyc
The ODA X9-2 was available in 3 flavors:
- X9-2S with 1 Xeon CPU, 256GB of RAM and 2x 6.8TB NVMe SSDs
- X9-2L with 2 Xeon CPUs, 512GB of RAM and 2x up to 12x 6.8TB NVMe SSDs
- X9-2HA with 2 nodes (similar to X9-2L without the disks) and one or two disk enclosures with various configurations (SSDs or spinning disks)
The first change you will notice is the switch from Intel Xeon to AMD Epyc CPUs. Oracle already made this change on the Exadata platform several months ago. It's part of a broader move by Oracle to AMD across its x86-64 server lineup, including the servers in OCI.
The X9-2 series ODAs used a Xeon Silver 4314 with 16 cores running between 2.4GHz and 3.4GHz; the X10 series uses an AMD Epyc 9334 with 32 cores running between 2.7GHz and 3.9GHz. The number of cores is not that important, because most customers using ODAs run Standard Edition or Enterprise Edition with a limited number of cores. But regarding pure CPU speed, a significant increase is expected, something like 30% for single-thread processing.
Having a better CPU doesn't only mean better performance: it also means fewer licences for Enterprise Edition users. Having 30% faster CPUs means you need about 30% fewer cores for the same tasks: if you needed 6 Xeon cores for your workload, you may run the same workload on 4 Epyc cores, so on 2 processor licences instead of 3.
Internal SSDs: back to 480GB disks
This was one of my complaints about the X9-2 series: the 2x M.2 internal disks were smaller than those on the X8-2, down from 480GB to 240GB. For the X10 series, Oracle brings back 480GB internal disks, which is much better.
RAM: same sizes but expandable up to 3 times
The X10-S has 256GB of RAM, like the X9-2S, but you can now triple that with additional memory modules (768GB). The X10-L comes with 512GB in its basic configuration, and you can go up to 1.5TB. The X10-HA is basically 2x X10-L with an external storage enclosure, so you have 2x 512GB, 2x 1TB or 2x 1.5TB configurations.
Data disks: same size but fewer disks for X10-L
On the X10-S and L, data disks are the same as on the X9-2 series: 6.8TB NVMe disks. The X10-S has the same limitation as the X9-2S: only 2 disks and no possible extension.
There is a catch regarding the X10-L. On the X9-2L you could have a maximum of 12 disks, meaning a huge 81TB of RAW storage. The X10-L is now limited to 8 disks, meaning only 54TB of RAW storage. One could argue that this is already a big capacity, but it is RAW TB: usable capacity is only 21TB when using normal redundancy. For some of my customers with fully populated X8-2M ODAs, their databases won't fit anymore in one X10-L… There is another drawback when it comes to upgrading storage on your X10-L: there are only 4 disk slots on the front panel. So how can you put in 8 disks with only 4 slots? By adding them inside the server, as an AIC (Add-In Card). These disks have a different form factor, and you will need to open the server to add them, meaning downtime for your databases. Definitely a loss of flexibility.
The X10-HA is not that different from the X9-2HA: there is still a High Performance (HP) version and a High Capacity (HC) version, the first composed of SSDs only, the second of a mix of SSDs and HDDs. SSDs are still 7.68TB each, and only the HC gets a capacity bump thanks to bigger HDDs: from 18TB to 22TB each.
Network interfaces
Nothing new regarding network interfaces. You can have up to 3 of them (2 are optional), and for each you choose between a quad-port 10GBase-T (copper) and a two-port 10/25GbE (SFP28). You should know that SFP28 won't connect to a 1Gbps fiber network.
Specific software bundle
The latest software bundle for ODA is 19.20, but Oracle has just released a dedicated version of the patch for the X10: 19.20.0.1. The next patch bundle will probably be compatible with the X10 as well as older models. Currently, the X7-2, X8-2 and X9-2 series are still supported.
What about editions, licences and support?
First of all, these new ODAs no longer support Standard Edition 2! This is a major change, as a lot of customers use Standard Edition 2 on these appliances.
It's crystal clear in the documentation:
Choice of Oracle Database Software:– Oracle Database 21c Enterprise Edition (DB Systems only)
– Oracle Database 19c Enterprise Edition
My thought is that Oracle will keep X9-2 series for now, and will reserve X10 and AMD Epyc for Enterprise Edition users. X9-2 series is still relevant today, as price and performance match actual needs for most Standard Edition customers.
You may know that an ODA is not sold with the database licences: you need to bring your own or buy them at the same time. You will need Enterprise Edition for these new ODAs, starting from 1 processor licence on the S and L models (2 activated cores) and at least 2 processor licences on the HA (2 activated cores per node). Enabling your EE licence on an ODA will actually decrease the number of cores on the server to make sure you are compliant, but be careful: it doesn't prevent you from using unlicensed options. You can also use CPU pools to keep the remaining CPUs available for other purposes, for example running application VMs.
Regarding support, as with other hardware vendors, you'll have to pay for your ODA to be supported in case of hardware failure. The first year of support will usually be part of your initial order.
Support for the database versions is limited to 19c and 21c. Don’t forget that only 19c databases are supported with Premier Support. 21c being an innovation release, it’s only for testing.
What are the differences between the 3 models?
The X10-S is an entry price point for a small number of small databases.
The X10-L is much more capable and can get disk expansions. A big infrastructure with hundreds of databases can easily fit on several X10-L. OK, disk capacity is far behind previous X9-2L, but most customers won’t put 10 additional disks in their ODA: a fully populated X9-2L was 3 times more expensive than the base model.
The X10-HA is for RAC users because High Availability is included. The disk capacity is much higher than single node models, and HDDs are still an option. With X10-HA, big infrastructures can be consolidated on a very small number of HA ODAs.
Model         | DB Edition | Nodes | U    | RAM      | RAM max   | RAW TB | RAW TB max | Base price
ODA X10-S     | EE only    | 1     | 2    | 256GB    | 768GB     | 13.6   | 13.6       | 21'579$
ODA X10-L     | EE only    | 1     | 2    | 512GB    | 1536GB    | 13.6   | 54.4       | 34'992$
ODA X10-HA HP | EE only    | 2     | 8/12 | 2x 512GB | 2x 1536GB | 46     | 368        | 97'723$
ODA X10-HA HC | EE only    | 2     | 8/12 | 2x 512GB | 2x 1536GB | 390    | 792        | 97'723$
ODA X10 specs summary
And regarding the price?
Looking into the latest engineered systems price list (search "exadata price list" and you will easily find it), you will find the X10 series as well as the X9-2 series. Prices for the X10-S and X10-L are identical to the X9-2S and X9-2L; the X10-HA costs 9% more than the X9-2HA. The X10-HA being quite expensive now (nearly 100'000$), I would definitely compare it to an Exadata Cloud@Customer solution.
Which one should you choose?
If you are using Standard Edition, the X10 series is not for you. You will need to order the X9-2 series, or keep your older ODAs.
If your databases can comfortably fit in the S model, don’t hesitate as you will probably never need more.
The most interesting model is still the new L: its maximum disk capacity is 33% less than its predecessor's, but that's a trade-off for something like 33% more performance thanks to AMD Epyc. The L is quite affordable, still has a great capacity, and is upgradable if you don't buy the full configuration at first.
If you still want/need RAC and its associated complexity, the HA is for you.
Don't forget that you will need at least 2 ODAs for Disaster Recovery purposes, using Data Guard. No one would recommend buying a single ODA; it's probably better to have no ODA at all than a single one.
I would still prefer buying 2x ODA X10-L over 1x ODA X10-HA: NVMe speed, no RAC and a single box are definite advantages. And extreme consolidation may not be the best solution.
Conclusion
The ODA X10 series will bring a nice CPU speed improvement thanks to the switch to AMD Epyc, but customers running Standard Edition are not concerned and remain limited to the X9-2 series. The X10-L is also a little disappointing regarding storage possibilities compared to the X9-2L. Nevertheless, the ODA is still a good platform for database consolidation and simplification. And until now, it has remained very popular among our customers.
The article ODA X10: the new Oracle Database Appliances are already there appeared first on the dbi Blog.