Feed aggregator

Non-availability of new versions of Oracle Database for Solaris SPARC

Tom Kyte - 1 hour 21 min ago
Hi Tom! In my organization, an S7-2 SPARC server was purchased in 2019 to run Oracle 19c SE2. Version 21c was released for multiple operating systems but not for Solaris SPARC, and now the latest version, 23c, is also not available for Solaris SPARC. Could you explain why the new versions are not available for Solaris SPARC?
Categories: DBA Blogs

Poor performance on CLEANUP/PURGE of SYS.AUD$

Tom Kyte - 1 hour 21 min ago
Hi Tom, we need to purge, archive, and clean up our SYS.AUD$ table. We have a 2-node RAC with an Active Data Guard standby, too. We have a self-defined procedure, which is working appropriately (using DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL). However, the table is >700GB and >700 million rows. The procedure runs every day: it retrieves the data from AUD$, does a CTAS to insert the data into a regular table based on the date range, and then the appropriate date range is purged and cleaned up.

SQL> select /*+ PARALLEL(64) */ to_char(ntimestamp#,'YYYYMMDD') as date_aud, count(0) row_num
     from sys.aud$
     group by to_char(ntimestamp#,'YYYYMMDD')
     order by to_char(ntimestamp#,'YYYYMMDD');

A partial selection of the AUD$ table:

DATE_AUD    ROW_NUM
-------- ----------
20230712    4722355
20230713    4748860
20230714    4231487
20230715    4360975
20230716    4173837
20230717    5148050
20230718    5414122
20230719    4281115
20230720    4599951
20230721    4002760
20230722    3982341
..................
..................
78 rows selected

We also use the maximum batch size (1M) for cleanup:

SELECT * FROM dba_audit_mgmt_config_params;

DB AUDIT CLEAN BATCH SIZE    1000000    STANDARD AUDIT TRAIL

The issue is that the CLEANUP/ARCHIVE is very I/O intensive and takes 4-5 hours to clean up/archive the data for a single date range, and we can run it at most twice a day (from 12pm to 12am), as some very I/O-intensive ETL jobs run before and after this time period. We also tried to MOVE the table to a new tablespace, to get a freshly organized segment for better performance, using:

SQL> BEGIN
       DBMS_AUDIT_MGMT.set_audit_trail_location(
         audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD, -- this moves table FGA_LOG$
         audit_trail_location_value => 'NEW_TBS');
     END;
     /

But the MOVE process (in a downtime window) takes more than 20 hours, which is not affordable in a PROD system. We also opened 2 SRs, with no success on an acceptable result in terms of I/O.

Also, the audit data keeps being produced every day, so we are running behind: with every daily cleanup there is new data, and it takes forever to catch up. Can you recommend any other approach to achieve this? We also have a snapshot standby in case you have an idea. Please let me know if any info is missing or you need more input. Thank you very much! Ali
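
For context, the documented pattern the question refers to is a timestamp-bounded cleanup, where everything older than a chosen point is first marked as archived and then deleted in batches. A minimal sketch (the 90-day retention value is purely illustrative):

```sql
BEGIN
    -- Declare everything older than 90 days as already archived...
    DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
        audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
        last_archive_time => SYSTIMESTAMP - INTERVAL '90' DAY);

    -- ...then delete only rows up to that timestamp, in batches
    -- sized by the DB AUDIT CLEAN BATCH SIZE parameter shown above.
    DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
        audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
        use_last_arch_timestamp => TRUE);
END;
/
```
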
Categories: DBA Blogs

Good Audit Trail Design

Pete Finnigan - Mon, 2023-10-02 13:26
Continuing the series of posting the MS PPT slides (as PDFs) from recent talks on all aspects of Oracle security, I have a new one for you. There are a few more still to post. I don't really know....[Read More]

Posted by Pete On 02/10/23 At 12:37 PM

Categories: Security Blogs

About utl_smtp

Tom Kyte - Mon, 2023-10-02 13:26
Hi Tom, I created the procedure send_mail as in your book, in the SYSTEM user.

create or replace procedure send_mail
( p_sender    in varchar2,
  p_recipient in varchar2,
  p_message   in varchar2 )
as
    l_mailhost  varchar2(255) := '10.228.1.75';
    l_mail_conn utl_smtp.connection;
begin
    l_mail_conn := utl_smtp.open_connection(l_mailhost, 25);
    utl_smtp.helo(l_mail_conn, l_mailhost);
    utl_smtp.mail(l_mail_conn, p_sender);
    utl_smtp.rcpt(l_mail_conn, p_recipient);
    utl_smtp.open_data(l_mail_conn);
    utl_smtp.write_data(l_mail_conn, p_message);
    utl_smtp.close_data(l_mail_conn);
    utl_smtp.quit(l_mail_conn);
    dbms_output.put_line('Message sent to the user successfully');
end;

The procedure compiled without any error. I call the above procedure in the same user as:

begin
    send_mail('vrbala@visto.com', 'vr_bala@yahoo.com', 'This is the test message from oracle server');
end;

I got the following error. What should I do to run this?

ORA-20002: 554 Relay rejected for policy reasons.
ORA-06512: at "SYS.UTL_SMTP", line 86
ORA-06512: at "SYS.UTL_SMTP", line 223
ORA-06512: at "SYSTEM.SEND_MAIL", line 12
ORA-06512: at line 2

Thanks, Bala
Categories: DBA Blogs

External table stopped working on Oracle 19c

Tom Kyte - Mon, 2023-10-02 13:26
Hello All, I upgraded the Oracle DB version from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 to Oracle Database 19c Enterprise Edition Release 19.0.0.0. I'm trying to load data into my database using an external table. My external table definition:

<code>create table TAB_EXT_8094845(
    A_S_TYPE INTEGER,
    N_I_ADDR VARCHAR2(255),
    N_P INTEGER,
    A_S_ID VARCHAR2(255),
    C_S_ID VARCHAR2(255),
    CD_S_ID VARCHAR2(255),
    A_T_CAUSE INTEGER,
    A_S_TIME VARCHAR2(255),
    S_TIME TIMESTAMP,
    C_TIME TIMESTAMP,
    D_TIME TIMESTAMP,
    A_S_E_REALM VARCHAR2(255),
    IGN VARCHAR2(255),
    A_S_I_REALM VARCHAR2(255),
    A_P_D_DELAY INTEGER,
    A_P_A_ID VARCHAR2(255),
    A_S_DIV VARCHAR2(255),
    A_S_DISP INTEGER,
    A_D_INIT INTEGER,
    A_D_CA VARCHAR2(255),
    A_S_STAT VARCHAR2(255),
    A_E_F_R_NU VARCHAR2(255),
    D_A VARCHAR2(255),
    D_A_ORG VARCHAR2(255),
    D_B VARCHAR2(255),
    D_B_ORG VARCHAR2(255)
)
ORGANIZATION EXTERNAL
(
    TYPE oracle_loader
    DEFAULT DIRECTORY MY_EXTERNAL_TABLES
    ACCESS PARAMETERS
    (
        RECORDS DELIMITED BY NEWLINE
        NOLOGFILE
        FIELDS TERMINATED BY ',' optionally enclosed by '"'
        MISSING FIELD VALUES ARE NULL
        REJECT ROWS WITH ALL NULL FIELDS
        (A_S_TYPE, N_I_ADDR, N_P, A_S_ID, C_S_ID, CD_S_ID, A_T_CAUSE, A_S_TIME,
         S_TIME char(40) date_format TIMESTAMP WITH TIMEZONE MASK '.hh24:mi:ss.ff3 TZD MON dd yyyy.',
         C_TIME char(40) date_format TIMESTAMP WITH TIMEZONE MASK '.hh24:mi:ss.ff3 TZD MON dd yyyy.',
         D_TIME char(40) date_format TIMESTAMP WITH TIMEZONE MASK '.hh24:mi:ss.ff3 TZD MON dd yyyy.',
         IGN, A_S_E_REALM, A_S_I_REALM, A_P_D_DELAY, A_P_A_ID, A_S_DIV, A_S_DISP,
         A_D_INIT, A_D_CA, A_S_STAT, A_E_F_R_NU, D_A, D_A_ORG, D_B, D_B_ORG)
    )
    LOCATION ('src_file')
)
PARALLEL
REJECT LIMIT 0
NOMONITORING</code>

It worked on version 12c, but on version 19c it stopped working. When I try to run a query, I get these errors:

<code>ORA-12801: error signaled in parallel query server P000
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-30653: reject limit reached</code>

I think there may be something wrong with the dates (S_TIME, C_TIME, D_TIME). Example values from the file:

10:51:50.332 CEST Sep 06 2023
00:00:00.000 UTC Jan 01 1970
10:51:50.527 CEST Sep 06 2023
10:51:50.169 CEST Sep 06 2023
00:00:00.000 UTC Jan 01 1970
10:51:50.504 CEST Sep 06 2023

Do you have any clues what is wrong? Thanks in advance!
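
One way to narrow down a suspected date-mask problem (assuming the mask behaves the same outside the access driver, which is not guaranteed but is a cheap first check) is to feed a sample value from the file to TO_TIMESTAMP_TZ with the same format model:

```sql
-- If the format model cannot parse the sample value, this raises a
-- conversion error directly, without involving the external table at all.
-- Note the DDL wraps the mask in '.' delimiters; here it is used bare.
select to_timestamp_tz('10:51:50.332 CEST Sep 06 2023',
                       'hh24:mi:ss.ff3 TZD MON dd yyyy') as sample_ts
from   dual;
```

If this fails with an ORA-018xx date/time error, the mask itself is the problem rather than a 12c-to-19c access-driver change.
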
Categories: DBA Blogs

Partitioning by YYYY and subpartitioning by MM

Tom Kyte - Mon, 2023-10-02 13:26
I have a table which is currently partitioned AND sub-partitioned on the same column, PRD_DESC. The partitions are one per year, with 12 monthly sub-partitions per partition:

<code>CREATE TABLE TESTTAB (
    "PRD_DESC" VARCHAR2(6 BYTE) NOT NULL ENABLE,
    "PRD_DESC_YMW" VARCHAR2(8 BYTE) NOT NULL ENABLE,
    ...
    ...
    ...
PARTITION BY RANGE ("PRD_DESC")
SUBPARTITION BY LIST ("PRD_DESC")
(PARTITION "PROJ_COSTS_DTL_HIST_2009" VALUES LESS THAN ('200999')
   ( SUBPARTITION "PROJ_COSTS_DTL_HIST_2009_00" VALUES ('200900'),
     SUBPARTITION "PROJ_COSTS_DTL_HIST_2009_01" VALUES ('200901'),
     ...
     ...
     ... )
 PARTITION "PROJ_COSTS_DTL_HIST_2023" VALUES LESS THAN ('2024')
   ( SUBPARTITION "PROJ_COSTS_DTL_HIST_2023_01" VALUES ('202301'),
     SUBPARTITION "PROJ_COSTS_DTL_HIST_2023_02" VALUES ('202302'),
     SUBPARTITION "PROJ_COSTS_DTL_HIST_2023_03" VALUES ('202303'),
     SUBPARTITION "PROJ_COSTS_DTL_HIST_2023_04" VALUES ('202304'),
     SUBPARTITION "PROJ_COSTS_DTL_HIST_2023_05" VALUES ('202305'),
     SUBPARTITION "PROJ_COSTS_DTL_HIST_2023_06" VALUES ('202306'),
     SUBPARTITION "PROJ_COSTS_DTL_HIST_2023_07" VALUES ('202307'),
     SUBPARTITION "PROJ_COSTS_DTL_HIST_2023_08" VALUES ('202308'),
     SUBPARTITION "PROJ_COSTS_DTL_HIST_2023_09" VALUES ('202309'),
     SUBPARTITION "PROJ_COSTS_DTL_HIST_2023_10" VALUES ('202310'),
     SUBPARTITION "PROJ_COSTS_DTL_HIST_2023_11" VALUES ('202311'),
     SUBPARTITION "PROJ_COSTS_DTL_HIST_2023_12" VALUES ('202312')</code>

The data in that field is stored in YYYYMM format. In short, the partition key is on the "YYYY" portion of the column and the sub-partitions are on the "MM" portion. Disclaimer: I didn't design this and recently inherited it.

Every year in December, we add partitions and sub-partitions to the table manually, by splitting the highest partition and adding sub-partitions by hand. I am trying to automate this whole thing by enabling the interval partitioning method.

But I am not sure what the DDL would look like if I have to accomplish the objective: partition by the YYYY part and sub-partition by the MM part. I have this DDL in mind, but I am not sure it would work:

<code>CREATE TABLE TESTTAB (
    "PRD_DESC" VARCHAR2(6 BYTE) NOT NULL ENABLE,
    "LDG_PRD_DESC_YMW" VARCHAR2(8 BYTE) NOT NULL ENABLE,
    ...
    ...
    ...
)
tablespace USERS
partition by range (PRD_DESC) INTERVAL( NUMTOYMINTERVAL (1, 'YEAR'))
subpartition by list (PRD_DESC)
subpartition template (
    subpartition SP_01 values (01),
    subpartition SP_02 values (02),
    subpartition SP_03 values (03),
    subpartition SP_04 values (04),
    subpartition SP_05 values (05),
    subpartition SP_06 values (06),
    subpartition SP_07 values (07),
    subpartition SP_08 values (08),
    subpartition SP_09 values (09),
    subpartition SP_10 values (10),
    subpartition SP_11 values (11),
    subpartition SP_12 values (12)
)</code>

The probl...
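
One point of reference here: interval partitioning requires a NUMBER or DATE partitioning key, so a VARCHAR2 column holding 'YYYYMM' strings can't drive the interval directly. A commonly suggested shape (a sketch only, not tested against this table; column names beyond PRD_DESC are hypothetical) derives virtual columns from the string and partitions on those:

```sql
create table testtab (
    prd_desc  varchar2(6) not null,
    -- virtual DATE derived from the YYYYMM string drives the yearly interval
    prd_date  date   generated always as (to_date(prd_desc, 'YYYYMM')) virtual,
    -- virtual NUMBER month drives the monthly list subpartitions
    prd_month number generated always as (to_number(substr(prd_desc, 5, 2))) virtual
)
partition by range (prd_date) interval (numtoyminterval(1, 'YEAR'))
subpartition by list (prd_month)
subpartition template (
    subpartition sp_01 values (1),
    subpartition sp_02 values (2)
    -- ... and so on through sp_12 values (12)
)
(partition p_hist values less than (date '2010-01-01'))
/
```

Queries would then need to filter on the virtual columns (or on expressions matching their definitions) for partition pruning to kick in.
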
Categories: DBA Blogs

Pros and Cons of Developing Your Own ChatGPT Plugin

Andrejus Baranovski - Mon, 2023-10-02 05:52
I've been running a ChatGPT plugin in prod for a month, and I'm sharing my thoughts about the pros and cons of developing it. Would I build a new ChatGPT plugin? 

 

Zero-ETL: What is it and what Oracle products support it

DBASolved - Sun, 2023-10-01 16:32

Businesses are constantly finding themselves increasingly reliant on data ingestion, big data, analytics, and following traditional ETL (Extract, Transform, Load) […]

The post Zero-ETL: What is it and what Oracle products support it appeared first on DBASolved.

Categories: DBA Blogs

Drop RAC Database Using RMAN

Michael Dinh - Sun, 2023-10-01 11:09

As part of data center consolidation, some databases are no longer required and need to be dropped.

I have previously posted about dropping databases; however, that was for a single-instance database rather than RAC.

### Check status of database:
db01-oracle:/home/oracle$ srvctl status database -d loadtest -v
Database loadtest is running. Instance status: Open.
db01-oracle:/home/oracle$

### Modify cluster database parameters:
SQL> show parameter cluster_database

NAME                        TYPE     VALUE
--------------------------- -------- ------------
cluster_database            boolean  FALSE
cluster_database_instances  integer  1

SQL> show parameter db_name

NAME                        TYPE     VALUE
--------------------------- -------- ------------
db_name                     string   loadtest
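
The output above shows cluster_database already set to FALSE; the step that changes it (per the MOS note cited at the end of this post; run as SYSDBA before the restart shown next) is typically:

```sql
-- Convert the RAC database to single-instance mode so RMAN can drop it.
-- Takes effect at the "startup force" below, since it is a spfile-only change.
alter system set cluster_database=false scope=spfile sid='*';
```
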

### Startup force mount restrict:
SQL> startup force mount restrict;
ORACLE instance started.
Total System Global Area 2.1475E+10 bytes
Fixed Size                  3795712 bytes
Variable Size            1.2885E+10 bytes
Database Buffers         8522825728 bytes
Redo Buffers               63311872 bytes
Database mounted.

!!! Verify logins=RESTRICTED and parallel=NO !!!
SQL> select logins, parallel from v$instance;
LOGINS     PAR
---------- ---
RESTRICTED NO

SQL>

### RMAN: drop database including backups ###
/home/oracle$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Sep 25 11:41:45 2023

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: loadtest (DBID=983159180, not open)

RMAN> drop database including backups;

database name is "loadtest" and DBID is 983159180


Do you really want to drop all backups and the database (enter YES or NO)? yes

using target database control file instead of recovery catalog
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=101 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 7.1.3.0
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=148 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: Data Protection for Oracle: version 7.1.3.0
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=199 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=248 device type=DISK

..............................

deleted archived log
archived log file name=+ORFLASHBK/loadtest/ARCHIVELOG/2023_09_22/thread_1_seq_150008.8873.1148227803 RECID=150008 STAMP=1148227802
Deleted 8 objects

..............................

database name is "loadtest" and DBID is 983159180
database dropped

RMAN>
Reference: How To Drop A RAC Database Using RMAN (Doc ID 732042.1)

Joel Kallman Day 2023 : Announcement

Tim Hall - Sun, 2023-10-01 09:39

Since 2016 we’ve had an Oracle community day where we push out content on the same day to try and get a bit of a community buzz. The name has changed over the years, but in 2021 it was renamed to the “Joel Kallman Day”. Joel was big on community, and it seems like a … Continue reading "Joel Kallman Day 2023 : Announcement"

The post Joel Kallman Day 2023 : Announcement first appeared on The ORACLE-BASE Blog.

ERP Oracle Database Security

Pete Finnigan - Fri, 2023-09-29 12:06
I did a talk at the UKOUG in 2020 about ERP security and its effects on Oracle database security. I have just uploaded the slides from this ERP Security talk to our website so you can download and have a....[Read More]

Posted by Pete On 29/09/23 At 01:20 PM

Categories: Security Blogs

Schema Privileges and Grant option

Tom Kyte - Fri, 2023-09-29 12:06
Hello, I wanted to ask about a new 23c feature, schema-level privileges: is it possible to combine it with "WITH GRANT OPTION"? According to the documentation https://docs.oracle.com/en/database/oracle/oracle-database/23/dbseg/configuring-privilege-and-role-authorization.html#GUID-EEE53C39-D610-4792-82EA-6F49B880B036, "WITH ADMIN OPTION" is the only extension. In other terms: would it be possible for a user who was granted schema privileges to create a view on top of tables from another schema, or will we have to stick to object-level privileges with WITH GRANT OPTION included? Thanks in advance, kind regards
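
For readers unfamiliar with the feature, the 23c schema-level grant the question refers to looks roughly like this (schema and user names are hypothetical; only ADMIN appears as an option in the documentation the question cites):

```sql
-- Schema-level privilege: covers current AND future tables in schema HR
grant select any table on schema hr to app_reader;

-- The documented extension is ADMIN, not GRANT:
grant select any table on schema hr to app_reader with admin option;
```
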
Categories: DBA Blogs

Find the select statement for a sys_refcursor

Tom Kyte - Fri, 2023-09-29 12:06
Hello All, is it possible to find the SELECT statement which is used for a sys_refcursor?

<code>function f_my (id in number) return sys_refcursor
is
    l_rc sys_refcursor;
begin
    open l_rc for select * from emp where empno = id;
    return l_rc;
end;</code>

Here is what happens: it returns the result set, but I would like to see which SELECT statement was used to get the result set. In simple words: the contents of the sys_refcursor. Expecting the output/return value as select * from emp where empno=id. Thanks in advance.
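
One way to see the statements behind cursors that are currently open in your own session (a sketch only; it requires read access to the V$ views, and the text in V$OPEN_CURSOR is truncated to the first part of the statement) is:

```sql
-- SQL text of cursors currently open in this session;
-- run while the ref cursor is still open and unfetched.
select sql_id, sql_text
from   v$open_cursor
where  sid = sys_context('userenv', 'sid');
```
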
Categories: DBA Blogs

Oracle GoldenGate 23c “Free” – What you need to know!

DBASolved - Mon, 2023-09-25 08:54

Last week at Oracle Cloud World, the Oracle GoldenGate Product team announced the latest version of Oracle GoldenGate “Free”!  This […]

The post Oracle GoldenGate 23c “Free” – What you need to know! appeared first on DBASolved.

Categories: DBA Blogs

LLama 2 LLM for PDF Invoice Data Extraction

Andrejus Baranovski - Mon, 2023-09-25 02:47
I show how you can extract data from a text PDF invoice using the LLama 2 LLM model running on a free Colab GPU instance. I specifically explain how you can improve data retrieval using carefully crafted prompts.

 

A Performance Comparison between Oracle’s REGEXP_SUBSTR and SUBSTR functions

The Anti-Kyte - Mon, 2023-09-25 01:30

Oracle SQL offers support for POSIX regular expressions by means of a suite of native functions.

[Image: Darth Sidious, light-saber in hand, imparting ancient Sith wisdom regarding regular expressions.]
Regular Expressions are a pathway to many abilities that some consider…unnatural.

Whilst providing the complex search conditions that regular expressions offer, REGEXP_SUBSTR has acquired a reputation for being a fair bit slower than the standard SUBSTR function.

What I’m going to demonstrate here is :

  • how SUBSTR seems generally to be faster than REGEXP_SUBSTR
  • how the performance of REGEXP_SUBSTR can improve dramatically when used with INSTR
  • how REGEXP_SUBSTR performs better when it matches the start of a string

To start with though, we'll discover why you'll never see a Sith Lord on Sesame Street ( hint : it's the way they count in a Galaxy Far, Far Away)…

A Test Table

We have a two-column table that looks like this :

create table star_wars_numbers
(
    id number generated always as identity,
    episode varchar2(4000)
)
/

insert into star_wars_numbers( episode)
values('Four');

insert into star_wars_numbers( episode)
values('Four|Five');

insert into star_wars_numbers( episode)
values('Four|Five|Six');

insert into star_wars_numbers( episode)
values('Four|Five|Six|One');

insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two');

insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three');

insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven');

insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven|Three-and-a-bit');

insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven|Three-and-a-bit|Three and a bit less of a bit');

insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven|Three-and-a-bit|Three and a bit less of a bit|Eight');

insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven|Three-and-a-bit|Three and a bit less of a bit|Eight|Nine');

commit;

Whilst the contents of the EPISODE column may be the result of arcane Sith ritual, Moof-Milker activity in the design process cannot be ruled out.

Either way, the EPISODE column contains up to 11 values, separated by a ‘|’.

Extracting each value in SQL

Using SUBSTR and INSTR to separate out the values in EPISODES is, as ever, quite a bit of typing…

select 
    id, 
    episode, 
    substr
    (
        episode, 
        1, 
        case 
            when instr(episode, '|') > 0 then instr(episode, '|') -1 
            else length(episode) 
        end
    )  as "A New Hope",
    case when instr(episode, '|', 1, 1) > 0 then
        substr
        (
            episode, 
            instr(episode, '|', 1, 1) + 1, -- start
            case 
                when instr(episode, '|', 1, 2) > 0 then instr(episode, '|', 1,2) -1 
                else length(episode) 
            end
            -
            instr(episode, '|', 1, 1)   
        ) 
    end as "The Empire Strikes Back",
    case when instr(episode, '|', 1, 2) > 0 then
        substr( episode, instr(episode, '|', 1, 2) + 1,
            case when instr(episode, '|', 1,3) > 0 then instr(episode, '|',1,3) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 2))
    end as "Return of the Jedi",    
    case when instr(episode, '|', 1, 3) > 0 then
        substr( episode, instr(episode, '|', 1, 3) + 1,
            case when instr(episode, '|', 1,4) > 0 then instr(episode, '|',1,4) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 3))
    end as "Phantom Menace",
    case when instr(episode, '|', 1, 4) > 0 then
        substr( episode, instr(episode, '|', 1, 4) + 1,
            case when instr(episode, '|', 1,5) > 0 then instr(episode, '|',1,5) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 4))
    end as "Attack of the Clones",
    case when instr(episode, '|', 1, 5) > 0 then
        substr( episode, instr(episode, '|', 1, 5) + 1,
            case when instr(episode, '|', 1,6) > 0 then instr(episode, '|',1,6) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 5))
    end as "Revenge of the Sith",
    case when instr(episode, '|', 1, 6) > 0 then
        substr( episode, instr(episode, '|', 1, 6) + 1,
            case when instr(episode, '|', 1,7) > 0 then instr(episode, '|',1,7) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 6))
    end as "The Force Awakens",
    case when instr(episode, '|', 1, 7) > 0 then
        substr( episode, instr(episode, '|', 1, 7) + 1,
            case when instr(episode, '|', 1,8) > 0 then instr(episode, '|',1,8) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 7))
    end as "Rogue One",        
    case when instr(episode, '|', 1, 8) > 0 then
        substr( episode, instr(episode, '|', 1, 8) + 1,
            case when instr(episode, '|', 1,9) > 0 then instr(episode, '|',1,9) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 8))
    end as "Solo",        
    case when instr(episode, '|', 1, 9) > 0 then
        substr( episode, instr(episode, '|', 1, 9) + 1,
            case when instr(episode, '|', 1, 10) > 0 then instr(episode, '|',1, 10) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 9))
    end as "The Last Jedi",        
    case when instr(episode, '|', 1, 10) > 0 then
        substr( episode, instr(episode, '|', 1, 10) + 1,
            case when instr(episode, '|', 1, 11) > 0 then instr(episode, '|',1, 11) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 10))
    end as "The Rise of Skywalker"                
from star_wars_numbers
/

…although these days (version 12c and later), we can save ourselves a bit of effort by using an inline function…

with function extract_episode( i_string in varchar2, i_pos in pls_integer)
    return varchar2
is
begin
    if i_pos = 1 then
        return substr(i_string, 1, case when instr(i_string, '|') > 0 then instr(i_string, '|') - 1 else length(i_string) end);
    end if;    
    return
        case when instr(i_string, '|', 1, i_pos -1 ) > 0 then
            substr( i_string, instr(i_string, '|', 1, i_pos -1) + 1,
                case when instr(i_string, '|', 1, i_pos) > 0 then instr(i_string, '|',1, i_pos) - 1 else length(i_string) end 
                -
                instr( i_string, '|', 1, i_pos - 1))
        end;
end;
select 
id, episode, 
    extract_episode( episode, 1) as "A New Hope",
    extract_episode( episode, 2) as "Empire Strikes Back",
    extract_episode( episode, 3) as "Return of the Jedi",
    extract_episode( episode, 4) as "The Phantom Menace",
    extract_episode( episode, 5) as "Attack of the Clones",
    extract_episode( episode, 6) as "Revenge of the Sith",
    extract_episode( episode, 7) as "The Force Awakens",
    extract_episode( episode, 8) as "Rogue One",
    extract_episode( episode, 9) as "Solo",
    extract_episode( episode, 10) as "The Last Jedi",
    extract_episode( episode, 11) as "The Rise of Skywalker"
from star_wars_numbers
/
    
    

Whether you find the equivalent regexp query more elegant or just horrible is somewhat subjective :

select 
    regexp_substr(episode, '[^|]+') as "A New Hope",
    regexp_substr(episode, '[^|]+',1, 2) as "The Empire Strikes Back",
    regexp_substr(episode, '[^|]+',1, 3) as "Return of the Jedi",
    regexp_substr(episode, '[^|]+',1, 4) as "The Phantom Menace",
    regexp_substr(episode, '[^|]+',1, 5) as "Attack of the Clones",
    regexp_substr(episode, '[^|]+',1, 6) as "Revenge of the Sith",
    regexp_substr(episode, '[^|]+',1, 7) as "The Force Awakens",
    regexp_substr(episode, '[^|]+',1, 8) as "Rogue One",
    regexp_substr(episode, '[^|]+',1, 9) as "Solo",
    regexp_substr(episode, '[^|]+',1, 10) as "The Last Jedi",
    regexp_substr(episode, '[^|]+',1, 11) as "The Rise of Skywalker"
from star_wars_numbers
/

Irrespective of aesthetic considerations, I should explain the regexp in use here, if only so I can understand it when I read this a few months from now :

[^|] - match any character that isn't pipe ("|")
+ - match one or more times
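
Applied to one of the sample rows, the occurrence argument then walks along the delimited fields. A quick illustration:

```sql
-- Occurrence 2 of "one or more non-pipe characters" is the second field.
select regexp_substr('Four|Five|Six', '[^|]+', 1, 2) as ep
from   dual;
-- EP: Five
```
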

Next, we need to find out how the regexp function stacks up when it comes to performance, and we’re not going to find that with a mere 11 rows of data…

Performance Test Setup

Let’s make some use of all that space in the Galaxy Far, Far Away…

create table galaxy (
    id number generated always as identity,
    episode varchar2(4000))
/

begin
    for i in 1..100000 loop
        insert into galaxy(episode)
        select episode from star_wars_numbers;
        
        commit;
    end loop;
end;
/

exec dbms_stats.gather_table_stats(user, 'galaxy');

All of the following tests were run on Oracle 19c Enterprise Edition running on Oracle Linux.

Everything was run in a SQL*Plus session from the command line on the database server.

The queries were run in SQL*Plus with the following settings :

set lines 130
clear screen
set autotrace trace
set timing on

Each query was executed at least twice consecutively to ensure that results were not skewed by the state of the buffer cache.

It’s also worth noting that I found no low-level explanation for the performance discrepancies between the two functions when trawling through trace files. Therefore, I’ve concentrated on elapsed time as the main performance metric in these tests.

Test 1 – All Fields extracted in the Select Clause

Let’s start with the SUBSTR function ( referred to as “Standard” henceforth) :

select 
    id, 
    substr
    (
        episode,  -- input string
        1,  -- start position
        case 
            when instr(episode, '|') > 0 then instr(episode, '|') -1 
            else length(episode) 
        end -- number of characters to extract
    )  as "A New Hope",
    case when instr(episode, '|', 1, 1) > 0 then
        substr
        (
            episode, 
            instr(episode, '|', 1, 1) + 1, -- start
            case 
                when instr(episode, '|', 1, 2) > 0 then instr(episode, '|', 1,2) -1 
                else length(episode) 
            end
            -
            instr(episode, '|', 1, 1)   
        ) 
    end as "The Empire Strikes Back",
    case when instr(episode, '|', 1, 2) > 0 then
        substr( episode, instr(episode, '|', 1, 2) + 1,
            case when instr(episode, '|', 1,3) > 0 then instr(episode, '|',1,3) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 2))
    end as "Return of the Jedi",    
    case when instr(episode, '|', 1, 3) > 0 then
        substr( episode, instr(episode, '|', 1, 3) + 1,
            case when instr(episode, '|', 1,4) > 0 then instr(episode, '|',1,4) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 3))
    end as "Phantom Menace",
    case when instr(episode, '|', 1, 4) > 0 then
        substr( episode, instr(episode, '|', 1, 4) + 1,
            case when instr(episode, '|', 1,5) > 0 then instr(episode, '|',1,5) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 4))
    end as "Attack of the Clones",
    case when instr(episode, '|', 1, 5) > 0 then
        substr( episode, instr(episode, '|', 1, 5) + 1,
            case when instr(episode, '|', 1,6) > 0 then instr(episode, '|',1,6) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 5))
    end as "Revenge of the Sith",
    case when instr(episode, '|', 1, 6) > 0 then
        substr( episode, instr(episode, '|', 1, 6) + 1,
            case when instr(episode, '|', 1,7) > 0 then instr(episode, '|',1,7) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 6))
    end as "The Force Awakens",
    case when instr(episode, '|', 1, 7) > 0 then
        substr( episode, instr(episode, '|', 1, 7) + 1,
            case when instr(episode, '|', 1,8) > 0 then instr(episode, '|',1,8) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 7))
    end as "Rogue One",        
    case when instr(episode, '|', 1, 8) > 0 then
        substr( episode, instr(episode, '|', 1, 8) + 1,
            case when instr(episode, '|', 1,9) > 0 then instr(episode, '|',1,9) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 8))
    end as "Solo",        
    case when instr(episode, '|', 1, 9) > 0 then
        substr( episode, instr(episode, '|', 1, 9) + 1,
            case when instr(episode, '|', 1, 10) > 0 then instr(episode, '|',1, 10) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 9))
    end as "The Last Jedi",        
    case when instr(episode, '|', 1, 10) > 0 then
        substr( episode, instr(episode, '|', 1, 10) + 1,
            case when instr(episode, '|', 1, 11) > 0 then instr(episode, '|',1, 11) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 10))
    end as "The Rise of Skywalker"                
from galaxy
/

Running this query, we get :

1100000 rows selected.

Elapsed: 00:00:20.32

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   35857676  bytes sent via SQL*Net to client                                                                                     
     811886  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       

Before we look at the REGEXP_SUBSTR equivalent, it’s probably worth considering the more streamlined in-line function version of this query :

with function extract_episode( i_string in varchar2, i_pos in pls_integer)
    return varchar2
is
begin
    return
        case when instr(i_string, '|', 1, i_pos -1 ) > 0 then
            substr( i_string, instr(i_string, '|', 1, i_pos -1) + 1,
                case when instr(i_string, '|', 1, i_pos) > 0 then instr(i_string, '|',1, i_pos) - 1 else length(i_string) end 
                -
                instr( i_string, '|', 1, i_pos - 1))
        end;
end;
select 
    id, 
    substr
    (
        episode, 
        1, 
        case 
            when instr(episode, '|') > 0 then instr(episode, '|') -1 
            else length(episode) 
        end
    ) as "A New Hope",
    extract_episode( episode, 2) as "Empire Strikes Back",
    extract_episode( episode, 3) as "Return of the Jedi",
    extract_episode( episode, 4) as "The Phantom Menace",
    extract_episode( episode, 5) as "Attack of the Clones",
    extract_episode( episode, 6) as "Revenge of the Sith",
    extract_episode( episode, 7) as "The Force Awakens",
    extract_episode( episode, 8) as "Rogue One",
    extract_episode( episode, 9) as "Solo",
    extract_episode( episode, 10) as "The Last Jedi",
    extract_episode( episode, 11) as "The Rise of Skywalker"
from galaxy
/

Whilst it’s a bit more readable, it’s also a lot slower :

1100000 rows selected.

Elapsed: 00:00:41.76

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   35857676  bytes sent via SQL*Net to client                                                                                     
     810042  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       

REGEXP_SUBSTR, however, takes slow to a new level…

select id, 
    regexp_substr(episode, '[^|]+') as "A New Hope",
    regexp_substr(episode, '[^|]+',1, 2) as "The Empire Strikes Back",
    regexp_substr(episode, '[^|]+',1, 3) as "Return of the Jedi",
    regexp_substr(episode, '[^|]+',1, 4) as "The Phantom Menace",
    regexp_substr(episode, '[^|]+',1, 5) as "Attack of the Clones",
    regexp_substr(episode, '[^|]+',1, 6) as "Revenge of the Sith",
    regexp_substr(episode, '[^|]+',1, 7) as "The Force Awakens",
    regexp_substr(episode, '[^|]+',1, 8) as "Rogue One",
    regexp_substr(episode, '[^|]+',1, 9) as "Solo",
    regexp_substr(episode, '[^|]+',1, 10) as "The Last Jedi",
    regexp_substr(episode, '[^|]+',1, 11) as "The Rise of Skywalker"
from galaxy
/
1100000 rows selected.

Elapsed: 00:01:27.25

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   35857680  bytes sent via SQL*Net to client                                                                                     
     809519  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       

Now see what happens when we give REGEXP_SUBSTR a little help from the humble INSTR (the “hybrid” query) :

select id, 
    regexp_substr(episode, '[^|]+') as "A New Hope",
    decode( instr(episode, '|'), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,1) + 1)) as "The Empire Strikes Back",
    decode( instr(episode, '|',1, 2), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,2) + 1)) as "Return of the Jedi",
    decode( instr(episode, '|',1, 3), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,3) + 1)) as "The Phantom Menace",
    decode( instr(episode, '|',1, 4), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,4) + 1)) as "Attack of the Clones",
    decode( instr(episode, '|',1, 5), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,5) + 1)) as "Revenge of the Sith",
    decode( instr(episode, '|',1, 6), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,6) + 1)) as "The Force Awakens",
    decode( instr(episode, '|',1, 7), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,7) + 1)) as "Rogue One",
    decode( instr(episode, '|',1, 8), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,8) + 1)) as "Solo",
    decode( instr(episode, '|',1, 9), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,9) + 1)) as "The Last Jedi",
    decode( instr(episode, '|',1, 10), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,10) + 1)) as "The Rise of Skywalker"
from galaxy
/
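For anyone less familiar with DECODE, the expression used for each field above is equivalent to the following CASE expression (shown here for the second field only, as a sketch) :

```sql
case
    when instr(episode, '|') = 0 then null
    else regexp_substr(episode, '[^|]+', instr(episode, '|', 1, 1) + 1)
end as "The Empire Strikes Back"
```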

Yes, I have cheated a bit on the aesthetics here by using the more compact DECODE instead of CASE.
However, this does not affect the runtime of the query, which is a bit faster than the pure REGEXP_SUBSTR equivalent :

1100000 rows selected.

Elapsed: 00:00:30.83

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   35857680  bytes sent via SQL*Net to client                                                                                     
     810158  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       


Test 1 Results Summary

Query Method                        Time (secs)
SUBSTR (Standard)                         20.32
REGEXP_SUBSTR with INSTR (Hybrid)         30.83
SUBSTR with in-line function              41.76
REGEXP_SUBSTR (Regexp)                    87.25
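For reference, elapsed times and statistics like those shown above can be captured in SQL*Plus along these lines (a sketch — the exact settings used for these tests are not shown in the post; note also that the elapsed times include the client fetch time across all those SQL*Net roundtrips) :

```sql
-- SQL*Plus session settings for a simple timing/autotrace comparison
set timing on          -- report elapsed time for each statement
set autotrace traceonly -- show the plan and statistics, suppress row output
-- ...then run the query under test against GALAXY
```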

The performance of the Hybrid approach does raise the question of how the REGEXP_SUBSTR compares when we’re just looking to extract a single field from a string, rather than all of them…

Test 2 – Extract the first field only

In this instance we’re just looking for the first field in EPISODES.

In this context, the hybrid approach doesn’t apply because we’re always starting our search at the start of the input string.

Starting, once again with the Standard approach :

select id, 
    substr
    (
        episode, 1, 
        case 
            when instr(episode, '|', 1,1) > 0 then instr(episode, '|', 1,1) -1 
            else length(episode) 
        end 
    ) as "A New Hope"
from galaxy
/
1100000 rows selected.

Elapsed: 00:00:05.33

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   25086067  bytes sent via SQL*Net to client                                                                                     
     808790  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       

Now for REGEXP_SUBSTR :

select id,
    regexp_substr( episode, '[^|]+') as "A New Hope"
from galaxy
/

1100000 rows selected.

Elapsed: 00:00:06.38

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   25086067  bytes sent via SQL*Net to client                                                                                     
     808868  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       

Interestingly, whilst it’s still not as fast, the Regexp runtime is only 20% longer than the Standard.

This contrasts markedly with our first test, where the Regexp query took over four times as long as the Standard one (87.25 vs 20.32 seconds).

Test 2 Results Summary

Query Method    Time (secs)
Standard               5.33
Regexp                 6.38

Does this relative performance hold true for any single field in the input string ?

Test 3 – Extract the last field only

Starting with the Standard query :

select 
    id, 
    case when instr(episode, '|', 1, 10) > 0 then
        substr( episode, instr(episode, '|', 1, 10) + 1,
            case when instr(episode, '|', 1, 11) > 0 then instr(episode, '|',1, 11) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 10))
    end as "The Rise of Skywalker"                
from galaxy
/
1100000 rows selected.

Elapsed: 00:00:05.44

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   25686033  bytes sent via SQL*Net to client                                                                                     
     808915  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       

Once again, the Regexp Query is much slower…

select 
    id,
    regexp_substr(episode, '[^|]+',1, 11) as "The Rise of Skywalker"
from galaxy 
/

1100000 rows selected.

Elapsed: 00:00:16.16

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   25686033  bytes sent via SQL*Net to client                                                                                     
     808888  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       

…unless we use the hybrid approach…

select 
    id,
    decode( instr(episode, '|',1, 10), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,10) + 1)) as "The Rise of Skywalker"
from galaxy 
/
1100000 rows selected.

Elapsed: 00:00:05.60

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   25686033  bytes sent via SQL*Net to client                                                                                     
     808736  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       
Test 3 Results Summary

Query Method    Time (secs)
Standard               5.44
Hybrid                 5.60
Regexp                16.16

Conclusions

These tests suggest that, in general, SUBSTR offers better performance than REGEXP_SUBSTR.
However, the performance of REGEXP_SUBSTR can be greatly improved when it is used in conjunction with INSTR.
As always, the choice of function will depend on the specific circumstances you find yourself in.
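As a parting sketch, the hybrid pattern generalises to any field position : use INSTR to locate the (n-1)th delimiter, then let REGEXP_SUBSTR start its search from just past it. Here :n is a bind variable greater than 1, and the template is my own illustration rather than code from the tests above :

```sql
-- Hybrid extraction of the nth '|'-delimited field (n > 1):
-- NULL if the string has fewer than n fields.
select id,
       decode( instr(episode, '|', 1, :n - 1), 0, null,
               regexp_substr(episode, '[^|]+',
                             instr(episode, '|', 1, :n - 1) + 1) ) as field_n
from galaxy
/
```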

Sql Konferenz 2023 in Hanau, Germany

Yann Neuhaus - Sun, 2023-09-24 17:39
Introduction

It was a great pleasure to take part in the SQL Server Konferenz 2023 in Hanau, Germany, on 12 and 13 September 2023.

I went with my colleague Petar, from the same Microsoft SQL Server team at dbi services, Switzerland.

For Microsoft Sql Server fans, this event was really the right place to be.

Where Hanau is located

First of all, for those who don’t know where Hanau is, it’s a small, charming city in Germany, 350 km from Basel, Switzerland (a 3.5-hour drive), or 30 km from Frankfurt, Germany.

The event took place at the Congress Park Hanau.

1st step Registration

On arrival, the first step, as usual, was to go to the registration desk to collect the badge and a blue bag containing some useful items (pen, notepad, programme…).

Take a look at the agenda here: there were 6 parallel one-hour sessions, all very interesting and covering a wide variety of themes (SQL Server on-premises, Azure, Power BI, best practices…), from 9am to 5pm.

Let’s start now

It was very intense; time passed very quickly, and with so many sessions to choose from, deciding which one to attend was a real dilemma each time.

During the keynote on the first day, all speakers, partners and sponsors briefly introduced what they would present.

Here are some examples of great sessions and speakers. Of course I cannot list them all, as there were many and all were great, so to keep it simple I will just highlight some of the SQL Server MVP speakers, who are evangelists for us, as we draw on their experience in our daily work with SQL Server:

1. Bob Ward – Principal Architect, Microsoft Azure Data.

For all new versions and features, he is among the first to know about them and to reveal them officially.

Presenting the topic: Transitioning your On-Prem Database Skills to Azure SQL

2. Rie Merritt – Sr Program Manager, MVP & SQL Community for Microsoft, Organizer

Here I am with Rie Merritt and my colleague Petar Prljevic.

3. Glenn Berry – Services Database Engineer, Microsoft Data Platform MVP

Presenting the topic: How to Survive a SQL Server Health Check

He presented some of his diagnostic queries, which are widely used and very helpful for all DBAs like me in our daily duties at customer sites.

Conclusion

It was really a great event on Microsoft SQL Server; in my opinion it was too short, because there were so many interesting topics.

I recommend it to everyone who wants to stay up to date or to round out their knowledge.

The article Sql Konferenz 2023 in Hanau, Germany appeared first on the dbi Blog.

Restoring a PDB from another CDB can increase your SCN

Hemant K Chitale - Sat, 2023-09-23 22:08

 I start with a 19c CDB called "ORCLCDB" and PDB "ORCLPDB1" that is present in it.


SQL> col name format a16
SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
2778483057 ORCLCDB             20906515

SQL> select dbid, guid, name, open_mode from v$pdbs;

      DBID GUID                             NAME             OPEN_MODE
---------- -------------------------------- ---------------- ----------
2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED         READ ONLY
4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1         READ WRITE
 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT          MOUNTED

SQL>
SQL> alter pluggable database orclpdb1 close;

Pluggable database altered.

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Sun Sep 24 09:39:55 2023
Version 19.12.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCLCDB (DBID=2778483057)

RMAN> backup pluggable database orclpdb1 tag for_migration;

Starting backup at 24-SEP-23
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00034 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_tpcctab_jyl0mzsp_.dbf
input datafile file number=00009 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf
input datafile file number=00026 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf
input datafile file number=00032 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf
channel ORA_DISK_1: starting piece 1 at 24-SEP-23
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00011 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf
input datafile file number=00010 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf
input datafile file number=00012 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf
input datafile file number=00031 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf
channel ORA_DISK_2: starting piece 1 at 24-SEP-23
channel ORA_DISK_1: finished piece 1 at 24-SEP-23
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2023_09_24/o1_mf_nnndf_FOR_MIGRATION_ljz5825q_.bkp tag=FOR_MIGRATION comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_2: finished piece 1 at 24-SEP-23
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2023_09_24/o1_mf_nnndf_FOR_MIGRATION_ljz5828m_.bkp tag=FOR_MIGRATION comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03
Finished backup at 24-SEP-23

Starting Control File and SPFILE Autobackup at 24-SEP-23
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2023_09_24/o1_mf_s_1148377605_ljz585bw_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 24-SEP-23

RMAN>
RMAN> quit


Recovery Manager complete.
oracle19c>sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Sep 24 09:48:04 2023
Version 19.12.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0

SQL> alter pluggable database orclpdb1 unplug into '/opt/oracle/product/19c/dbhome_1/orclpdb1.xml';

Pluggable database altered.

SQL> !ls -l /opt/oracle/product/19c/dbhome_1/orclpdb1.xml
-rw-r--r--. 1 oracle oinstall 12583 Sep 24 09:50 /opt/oracle/product/19c/dbhome_1/orclpdb1.xml

SQL>
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>


Now I go to the target CDB CDB2 and identify that there is no conflicting PDB.  I also find that it has a lower SCN.

SQL> col name format a16
SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
 837065713 CDB2                20664227

SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
 837065713 CDB2                20664231

SQL>
SQL> select * from v$pdbs
  2  /

    CON_ID       DBID    CON_UID GUID
---------- ---------- ---------- --------------------------------
NAME
--------------------------------------------------------------------------------------------------------------------------------
OPEN_MODE  RES OPEN_TIME                                                                   CREATE_SCN TOTAL_SIZE BLOCK_SIZE RECOVERY
---------- --- --------------------------------------------------------------------------- ---------- ---------- ---------- --------
SNAPSHOT_PARENT_CON_ID APP APP APP APPLICATION_ROOT_CON_ID APP PRO LOCAL_UNDO   UNDO_SCN UNDO_TIME CREATION_ DIAGNOSTICS_SIZE
---------------------- --- --- --- ----------------------- --- --- ---------- ---------- --------- --------- ----------------
 PDB_COUNT AUDIT_FILES_SIZE   MAX_SIZE MAX_DIAGNOSTICS_SIZE MAX_AUDIT_SIZE LAST_CHANGE TEM
---------- ---------------- ---------- -------------------- -------------- ----------- ---
TENANT_ID
------------------------------------------------------------------------------------------------------------------------------------
UPGRADE_LEVEL GUID_BASE64
------------- ------------------------------
         2 2054948555 2054948555 88129263B99F4BBDE0530100007F7BDF
PDB$SEED
READ ONLY  NO  24-SEP-23 09.26.31.678 AM +08:00                                               1997190  957349888       8192 ENABLED
                       NO  NO  NO                          NO  NO           1        280           04-MAY-19                0
         0                0          0                    0              0 COMMON USER NO

            1 iBKSY7mfS73gUwEAAH973wA=

         3 2205532678 2205532678 E6BD9C73839C10BEE0550A00275FC834
PDBTMP
MOUNTED                                                                                      17541716  999292928       8192 ENABLED
                       NO  NO  NO                          NO  NO           1        280           21-AUG-22                0
         0                0          0                    0              0 COMMON USER NO

            1 5r2cc4OcEL7gVQoAJ1/INAA=


I then have the backups and the "closed" datafiles from the source CDB copied over to the target.  (The CREATE PLUGGABLE DATABASE command will not use the RMAN backups but the PDB Describe XML file and the "closed" datafiles.)

oracle19c>pwd
/tmp/From_Source
oracle19c>ls -l
total 1882384
-rw-r-----. 1 oracle oinstall 1146109952 Sep 24 09:56 o1_mf_nnndf_FOR_MIGRATION_ljz5825q_.bkp
-rw-r-----. 1 oracle oinstall  758202368 Sep 24 09:56 o1_mf_nnndf_FOR_MIGRATION_ljz5828m_.bkp
-rw-r-----. 1 oracle oinstall   23232512 Sep 24 09:55 o1_mf_s_1148377605_ljz585bw_.bkp
-rw-r--r--. 1 oracle oinstall      12583 Sep 24 09:59 orclpdb1.xml
oracle19c>


I now prepare to copy the PDB to the target CDB2. First I check for compatibility, and then I plug in the PDB with a new name, "ORCLPDB1_NEW".

SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
 837065713 CDB2                20664231
SQL> set serveroutput on
SQL> declare
  2   compatible constant varchar2(3) :=
  3     case dbms_pdb.check_plug_compatibility(
  4       pdb_descr_file=>'/tmp/From_Source/orclpdb1.xml',
  5       pdb_name=>'ORCLPDB1')
  6     when true then 'YES'
  7     else 'NO'
  8  END;
  9  begin
 10    dbms_output.put_line(compatible);
 11  end;
 12  /
YES

PL/SQL procedure successfully completed.

SQL>
SQL> show parameter db_create

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      /opt/oracle/oradata

SQL>
SQL> create pluggable database orclpdb1_new
  2  using '/tmp/From_Source/orclpdb1.xml'
  3  copy;  --- copy will copy the datafiles to the location based on db_create_file_dest and CDB database name

Pluggable database created.

SQL>
SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
 837065713 CDB2                20910195

SQL>
SQL> select dbid, guid, name, open_mode from v$pdbs;

      DBID GUID                             NAME             OPEN_MODE
---------- -------------------------------- ---------------- ----------
2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED         READ ONLY
2205532678 E6BD9C73839C10BEE0550A00275FC834 PDBTMP           MOUNTED
4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1_NEW     READ WRITE

SQL>


The SCN of the target CDB2 has now been set to the much higher 20910195 (it was 20664231 before I "plugged in" ORCLPDB1 as ORCLPDB1_NEW).  This is because it "read" the SCN from the headers of the datafiles that were plugged in.
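If you want to see where the inherited SCN came from, the checkpoint SCN carried in each datafile header is exposed in V$DATAFILE_HEADER. A minimal sketch (assuming a SYSDBA connection to the target CDB; the CDB's CURRENT_SCN must end up at least as high as the highest header SCN of the plugged-in files):

```sql
-- Sketch: compare the checkpoint SCN in the headers of the plugged-in
-- datafiles with the CDB-wide current SCN.
select con_id, file#, checkpoint_change#
from   v$datafile_header
order  by con_id, file#;

select current_scn from v$database;
```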


I can go back to the source and drop the PDB.

SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
2778483057 ORCLCDB             20910076

SQL> select dbid, guid, name, open_mode from v$pdbs;

      DBID GUID                             NAME             OPEN_MODE
---------- -------------------------------- ---------------- ----------
2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED         READ ONLY
4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1         MOUNTED
 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT          MOUNTED

SQL> 
SQL> drop pluggable database orclpdb1 including datafiles;

Pluggable database dropped.

SQL> 
SQL> select dbid, guid, name, open_mode from v$pdbs;

      DBID GUID                             NAME             OPEN_MODE
---------- -------------------------------- ---------------- ----------
2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED         READ ONLY
 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT          MOUNTED

SQL>
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>


So, I have "migrated" ORCLPDB1 from ORCLCDB to CDB2 and, in the process, have found that the SCN in CDB2 got reset to a much higher value, because it "inherited" the higher Checkpoint SCN that was in the datafiles of ORCLPDB1.

I can verify that all the datafiles (that are OPEN) in CDB2 get stamped with the new SCN.

SQL> select con_id, file#, plugin_change#, checkpoint_change#
  2  from v$datafile
  3  order by 1,2
  4  /

    CON_ID      FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE#
---------- ---------- -------------- ------------------
         1          1              0           20658011
         1          3              0           20658011
         1          4              0           20658011
         1          7              0           20658011
         2          5              0           13768423
         2          6              0           13768423
         2          8              0           13768423
         3         58              0           20657657
         3         59              0           20657657
         3         60              0           20657657
         4         69              0           20908595
         4         70              0           20908595
         4         71              0           20908595
         4         72              0           20908595
         4         73              0           20908595
         4         74              0           20908595
         4         75              0           20908595
         4         76              0           20908595

18 rows selected.

SQL> alter system checkpoint;

System altered.

SQL> select con_id, file#, plugin_change#, checkpoint_change#
  2  from v$datafile
  3  order by 1,2
  4  /

    CON_ID      FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE#
---------- ---------- -------------- ------------------
         1          1              0           20912767
         1          3              0           20912767
         1          4              0           20912767
         1          7              0           20912767
         2          5              0           13768423
         2          6              0           13768423
         2          8              0           13768423
         3         58              0           20657657
         3         59              0           20657657
         3         60              0           20657657
         4         69              0           20912767
         4         70              0           20912767
         4         71              0           20912767
         4         72              0           20912767
         4         73              0           20912767
         4         74              0           20912767
         4         75              0           20912767
         4         76              0           20912767

18 rows selected.

SQL> col name format a16
SQL> select con_id, name, open_mode from v$pdbs;

    CON_ID NAME             OPEN_MODE
---------- ---------------- ----------
         2 PDB$SEED         READ ONLY
         3 PDBTMP           MOUNTED
         4 ORCLPDB1_NEW     READ WRITE

SQL> alter pluggable database pdbtmp open;

Pluggable database altered.

SQL> select con_id, file#, plugin_change#, checkpoint_change#
  2  from v$datafile
  3  order by 1,2
  4  /

    CON_ID      FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE#
---------- ---------- -------------- ------------------
         1          1              0           20912767
         1          3              0           20912767
         1          4              0           20912767
         1          7              0           20912767
         2          5              0           13768423
         2          6              0           13768423
         2          8              0           13768423
         3         58              0           20912888
         3         59              0           20912888
         3         60              0           20912888
         4         69              0           20912767
         4         70              0           20912767
         4         71              0           20912767
         4         72              0           20912767
         4         73              0           20912767
         4         74              0           20912767
         4         75              0           20912767
         4         76              0           20912767

18 rows selected.

SQL> alter system checkpoint;

System altered.

SQL> select con_id, file#, plugin_change#, checkpoint_change#
  2  from v$datafile
  3  order by 1,2
  4  /

    CON_ID      FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE#
---------- ---------- -------------- ------------------
         1          1              0           20912967
         1          3              0           20912967
         1          4              0           20912967
         1          7              0           20912967
         2          5              0           13768423
         2          6              0           13768423
         2          8              0           13768423
         3         58              0           20912967
         3         59              0           20912967
         3         60              0           20912967
         4         69              0           20912967
         4         70              0           20912967
         4         71              0           20912967
         4         72              0           20912967
         4         73              0           20912967
         4         74              0           20912967
         4         75              0           20912967
         4         76              0           20912967

18 rows selected.

SQL>
SQL> alter session set container=ORCLPDB1_NEW;

Session altered.
SQL> col name format a54
SQL> select name from v$datafile;

NAME
------------------------------------------------------
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA
424/datafile/o1_mf_system_ljz7d02h_.dbf

/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA
424/datafile/o1_mf_sysaux_ljz7d02l_.dbf

/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA
424/datafile/o1_mf_undotbs1_ljz7d02m_.dbf

/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA
424/datafile/o1_mf_users_ljz7d02o_.dbf

/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA
424/datafile/o1_mf_data_min_ljz7d02p_.dbf

/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA
424/datafile/o1_mf_hemant_d_ljz7d02r_.dbf

/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA
424/datafile/o1_mf_hemant_d_ljz7d02s_.dbf

/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA
424/datafile/o1_mf_tpcctab_ljz7d02t_.dbf


8 rows selected.

SQL>



CON_ID=1 is the Root; its datafiles get the new SCN after a Checkpoint.  Note that these datafiles still had the "older" SCN 20658011 before I issued the Checkpoint.
CON_ID=2 is the SEED PDB, so it does not get updated.
CON_ID=3 is PDBTMP, at 20657657, and is stamped to 20912888 after I OPEN it.
CON_ID=4 is the ORCLPDB1_NEW that I plugged in today.
All the non-SEED datafiles are stamped again (to 20912967) after another Checkpoint.

Categories: DBA Blogs

Oracle Forensics Response

Pete Finnigan - Fri, 2023-09-22 15:06
I have spoken a few times on this blog about forensics and Oracle, and in 2021 I did a talk at the UKOUG about Oracle forensics. I have just posted the slides from that talk to our site....[Read More]

Posted by Pete On 22/09/23 At 01:07 PM

Categories: Security Blogs

ODA X10: the new Oracle Database Appliances are already there

Yann Neuhaus - Thu, 2023-09-21 15:39
Introduction

The ODA X8-2 series was sold for 3 years, and I would have expected the X9-2 series to stay on the market for 2 to 3 years, but I was wrong. The ODA X10 series has just been announced. Maybe Oracle has a different strategy for appliances now; I will explain my thoughts later.

You will notice that Oracle removed the number 2 from the name of this new ODA series: X9-2S becomes X10-S, X9-2L becomes X10-L and X9-2HA becomes X10-HA. The number 2 makes less sense today: it referred to the number of sockets/processors. But the X10-S only has 1 processor, the X10-L has 2, and the X10-HA has 4 (2x 2). For sure, the ODA nodes are still 2-socket servers, but this number was quite confusing for some people.

Now let’s try to find out what’s new, what’s good and what’s not so good.

What is an Oracle Database Appliance?

ODA, or Oracle Database Appliance, is an engineered system from Oracle. Basically, it's an x86-64 server with a dedicated software distribution including Linux, the Oracle database software, a Command Line Interface (CLI), a Browser User Interface (BUI) and a virtualization layer. The goal is to simplify the database lifecycle and maximize performance. Market positioning for the ODA is quite tricky for Oracle: it sits between OCI (the Oracle public cloud) and Exadata (the highest-end database hardware, a kind of ODA on steroids). For some people, the ODA avoids migrating to OCI by bringing enough simplification to Oracle Database environments; for others, the ODA is powerful enough for most workloads and an Exadata wouldn't make sense. For me, the ODA has always been one of my preferred solutions, and undoubtedly a solution to consider.

For Enterprise Edition users, ODA has a strong advantage over its competitors: capacity on demand licensing. You can enable as few as 2 cores per node, meaning a single processor licence. You can later scale up by enabling additional cores according to your needs.

From Intel Xeon to AMD Epyc

ODA X9-2 was available in 3 flavors:

  • X9-2S with 1 Xeon CPU, 256GB of RAM and 2x 6.8TB NVMe SSDs
  • X9-2L with 2 Xeon CPUs, 512GB of RAM and 2 to 12x 6.8TB NVMe SSDs
  • X9-2HA with 2 nodes (similar to the X9-2L without the disks) and one or two disk enclosures in various configurations (SSDs or spinning disks)

The first change you will notice is the switch from Intel Xeon to AMD Epyc CPUs. Oracle already made this change on the Exadata platform several months ago. It's really a global move by Oracle to AMD across its x86-64 server lineup, including the servers in OCI.

X9-2 series ODAs were using the Xeon Silver 4314 with 16 cores running between 2.4GHz and 3.4GHz; X10 series ODAs use the AMD Epyc 9334 with 32 cores running between 2.7GHz and 3.9GHz. The number of cores is not so important, because most customers using ODAs run Standard Edition or use Enterprise Edition with a limited number of cores. But regarding pure CPU speed, a significant increase is expected, something like 30% for single-thread processing.

A better CPU doesn't only mean better performance: it also means fewer licences for Enterprise Edition users. CPUs that are 30% faster mean you need about 30% fewer cores for the same tasks: if you needed 6 Xeon cores for your workload, you may run the same workload on 4 Epyc cores, so on 2 processor licences instead of 3.

Internal SSDs: back to 480GB disks

It was one of my complaints about the X9-2 series: the 2x M.2 internal disks were smaller than those on the X8-2, down from 480GB to 240GB. For the X10 series, Oracle brings back 480GB internal disks, and this size is much more comfortable.

RAM: same sizes but expandable up to 3 times

The X10-S has 256GB of RAM, like the X9-2S, but you can now triple that with additional memory modules (768GB). The X10-L comes with 512GB in its base configuration, and you can go up to 1.5TB. The X10-HA is basically 2x X10-L with an external storage enclosure, so you have 2x 512GB, 2x 1TB or 2x 1.5TB configurations.

Data disks: same size but less disks for X10-L

On the X10-S and L, data disks are the same as on the X9-2 series: 6.8TB NVMe disks. The X10-S has the same limitation as the X9-2S: only 2 disks and no possible extension.

There is a catch regarding the X10-L. On the X9-2L you could have a maximum of 12 disks, meaning a huge 81TB of RAW storage. The X10-L is now limited to 8 disks, meaning only 54TB of RAW storage. One could argue that this is still a big capacity, but it is RAW TB: usable capacity is only 21TB when using normal redundancy. For some of my customers with fully populated X8-2M ODAs, their databases won't fit anymore in one X10-L… There is another drawback when it comes to upgrading storage on your X10-L: there are only 4 disk slots on the front panel. So how can you put in 8 disks with only 4 slots? By adding them inside the server, as AIC (Add-In Card). These disks have a different form factor, and you will need to open the server to add them, meaning downtime for your databases. Definitely a loss of flexibility.

The X10-HA is not that different from the X9-2HA: there is still a High Performance (HP) version and a High Capacity (HC) version, the first composed of SSDs only, the second of a mix of SSDs and HDDs. SSDs are still 7.68TB each, and only the HC version gets a capacity bump thanks to bigger HDDs: from 18TB to 22TB each.

Network interfaces

Nothing new regarding network interfaces. You can have up to 3 of them (2 are optional), and for each you will choose between a quad-port 10GBase-T (copper) or a two-port 10/25GbE (SFP28) card. Be aware that SFP28 won't connect to a 1Gbps fiber network.

Specific software bundle

The latest software bundle for the ODA is 19.20, but Oracle just released a dedicated version of the patch for the X10: 19.20.0.1. The next patch bundle will probably be compatible with the X10 as well as with older models. Currently, the X7-2, X8-2 and X9-2 series are still supported.
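To find out which bundle a given appliance is currently running, the ODA CLI can report the deployed component versions. A sketch (assuming root access on the ODA node; the exact output format varies between releases):

```shell
# Sketch: show installed component versions vs. the versions available
# in the latest applied patch bundle.
odacli describe-component

# Show the latest patch release deployed on the appliance.
odacli describe-latestpatch
```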

What about editions, licences and support?

First of all, these new ODAs no longer support Standard Edition 2! It's a major break, as a lot of customers use Standard Edition 2 on these appliances.

It's crystal clear in the documentation:

Choice of Oracle Database Software:
– Oracle Database 21c Enterprise Edition (DB Systems only)
– Oracle Database 19c Enterprise Edition

My guess is that Oracle will keep the X9-2 series for now and reserve the X10 and AMD Epyc for Enterprise Edition users. The X9-2 series is still relevant today, as its price and performance match the actual needs of most Standard Edition customers.

You may know that the ODA is not sold with the database licences: you need to bring yours or buy them at the same time. You will need Enterprise Edition for these new ODAs, starting from 1 processor licence on the S and L models (2 activated cores) and at least 2 processor licences on the HA (2 activated cores per node). Enabling your EE licence on an ODA will actually decrease the number of cores on the server to make sure you are compliant, but be careful: it doesn't prevent you from using unlicensed options. You can also use CPU pools to keep the remaining CPUs available for other purposes, for example running application VMs.
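This capacity-on-demand mechanism is driven from the ODA CLI. A sketch (assumptions: run as root on the appliance, and the core count of 2 matches a 1-processor-licence EE configuration; note that on ODA the enabled core count can typically only be increased afterwards, not decreased, so check the documentation for your release before applying it):

```shell
# Sketch: check, then set, the number of enabled CPU cores to match
# your Enterprise Edition licence count.
odacli describe-cpucore     # show the currently enabled core count
odacli update-cpucore -c 2  # enable only 2 cores (1 EE processor licence)
```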

Regarding support, as with other hardware vendors, you'll have to pay for your ODA to be supported in case of hardware failure. The 1st year of support will usually be part of your initial order.

Support for database versions is limited to 19c and 21c. Don't forget that only 19c databases come with Premier Support; 21c, being an innovation release, is only for testing.

What are the differences between the 3 models?

The X10-S is an entry price point for a small number of small databases.

The X10-L is much more capable and can take disk expansions. A big infrastructure with hundreds of databases can easily fit on several X10-Ls. OK, disk capacity is far behind the previous X9-2L, but most customers won't put 10 additional disks in their ODA: a fully populated X9-2L was 3 times more expensive than the base model.

The X10-HA is for RAC users, because High Availability is included. The disk capacity is much higher than on the single-node models, and HDDs are still an option. With the X10-HA, big infrastructures can be consolidated on a very small number of HA ODAs.

Model          DB Edition  Nodes  U     RAM       RAM Max    RAW TB  RAW TB Max  Base price
ODA X10-S      EE only     1      2     256GB     768GB      13.6    13.6        21'579$
ODA X10-L      EE only     1      2     512GB     1536GB     13.6    54.4        34'992$
ODA X10-HA HP  EE only     2      8/12  2x 512GB  2x 1536GB  46      368         97'723$
ODA X10-HA HC  EE only     2      8/12  2x 512GB  2x 1536GB  390     792         97'723$

ODA X10 specs summary

And regarding the price?

Looking into the latest engineered systems price list (search for "exadata price list" and you will easily find it), you will find the X10 series as well as the X9-2 series. Prices for the X10-S and X10-L are identical to the X9-2S and X9-2L. The X10-HA costs 9% more than the X9-2HA. The X10-HA being quite expensive now (nearly 100'000$), I would definitely compare it to an Exadata Cloud@Customer solution.

Which one should you choose?

If you are using Standard Edition, the X10 series is not for you. You will need to order X9-2 series hardware, or keep your older ODAs.

If your databases comfortably fit in the S model, don't hesitate, as you will probably never need more.

The most interesting model is still the new L: maximum disk capacity is 33% less than its predecessor, but that's the trade-off for something like 33% more performance thanks to AMD Epyc. The L is quite affordable, still has great capacity, and is upgradable if you don't buy the full system at first.

If you still want/need RAC and its associated complexity, the HA is for you.

Don't forget that you will need at least 2 ODAs for Disaster Recovery purposes, using Data Guard. No one would recommend buying a single ODA; it's probably better to have no ODA at all than a single one.

I would still prefer buying 2x ODA X10-L over 1x ODA X10-HA: NVMe speed, no RAC and a single box is definitely better. And extreme consolidation may not be the best solution anyway.

Conclusion

The ODA X10 series brings a nice CPU speed improvement thanks to the switch to AMD Epyc, but customers running Standard Edition are left out and limited to the X9-2 series. The X10-L is also a little disappointing regarding storage possibilities compared to the X9-2L. Nevertheless, the ODA is still a good platform for database consolidation and simplification. And until now, it has been very popular among our customers.

The article ODA X10: the new Oracle Database Appliances are already there appeared first on the dbi Blog.

Pages

Subscribe to Oracle FAQ aggregator