Saturday, June 29, 2024

EMPTY STRING ON NULL for JSON Generation in Oracle Database 23ai


Oracle database 23ai has introduced the ability to convert nulls to empty strings during JSON generation.

◉ Setup


We create and populate a table. Notice each row has a different combination of null and non-null values.

conn testuser1/testuser1@//localhost:1521/freepdb1

drop table if exists t1 purge;

create table t1 (
  col1 VARCHAR2(10),
  col2 VARCHAR2(10)
);

insert into t1 values ('ONE', 'TWO');
insert into t1 values ('ONE', null);
insert into t1 values (null, 'TWO');
insert into t1 values (null, null);
commit;


select rownum, col1, col2 from t1;

    ROWNUM COL1       COL2
---------- ---------- ----------
         1 ONE        TWO
         2 ONE
         3            TWO
         4

SQL>

◉ NULL ON NULL and ABSENT ON NULL


In previous releases our only options for null handling during JSON generation were NULL ON NULL and ABSENT ON NULL. We'll use JSON_OBJECT as an example, where the default behaviour is NULL ON NULL.

When using NULL ON NULL, implicitly or explicitly, any null values in the data are generated as "null" in the output.

-- Implicit - NULL ON NULL.
select json_object(*) as data from t1;

DATA
--------------------------------------------------------------------------------
{"COL1":"ONE","COL2":"TWO"}
{"COL1":"ONE","COL2":null}
{"COL1":null,"COL2":"TWO"}
{"COL1":null,"COL2":null}

SQL>


-- Explicit - NULL ON NULL.
select json_object(* null on null) as data from t1;

DATA
--------------------------------------------------------------------------------
{"COL1":"ONE","COL2":"TWO"}
{"COL1":"ONE","COL2":null}
{"COL1":null,"COL2":"TWO"}
{"COL1":null,"COL2":null}

SQL>

When using ABSENT ON NULL any null values in the data result in the corresponding element being removed from the output.

select json_object(* absent on null) as data from t1;

DATA
--------------------------------------------------------------------------------
{"COL1":"ONE","COL2":"TWO"}
{"COL1":"ONE"}
{"COL2":"TWO"}
{}

SQL>

Here's an example with JSON_ARRAY, which uses ABSENT ON NULL by default.

-- Default - ABSENT ON NULL
select json_array(col1) as data from t1;

DATA
--------------------------------------------------------------------------------
["ONE"]
["ONE"]
[]
[]

SQL>


-- Explicit - ABSENT ON NULL
select json_array(col1 absent on null) as data from t1;

DATA
--------------------------------------------------------------------------------
["ONE"]
["ONE"]
[]
[]

SQL>


-- NULL ON NULL
select json_array(col1 null on null) as data from t1;

DATA
--------------------------------------------------------------------------------
["ONE"]
["ONE"]
[null]
[null]

SQL>

◉ EMPTY STRING ON NULL


In Oracle database 23ai we now have the option of using EMPTY STRING ON NULL. In this case null values are presented as empty strings in the output.

Here we see JSON_OBJECT being used with EMPTY STRING ON NULL.

select json_object(* empty string on null) as data from t1;

DATA
--------------------------------------------------------------------------------
{"COL1":"ONE","COL2":"TWO"}
{"COL1":"ONE","COL2":""}
{"COL1":"","COL2":"TWO"}
{"COL1":"","COL2":""}

SQL>

Here is an example of JSON_ARRAY using EMPTY STRING ON NULL.

select json_array(col1 empty string on null) as data from t1;

DATA
--------------------------------------------------------------------------------
["ONE"]
["ONE"]
[""]
[""]

SQL>

◉ Thoughts


In Oracle SQL and PL/SQL an empty string and a null are treated the same. This is not the case in other languages, so it's important to give people an option of presenting nulls as empty strings if required.
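
To see why this matters to consumers in other languages, here is a small Python sketch (illustrative only, not Oracle code) reproducing the three null-handling policies for a row where COL2 is null. Note that Python's json module, like most languages, treats null (None) and "" as distinct values.

```python
import json

def to_json(row, on_null="null"):
    """Mimic Oracle's JSON generation null-handling policies:
    'null'   -> NULL ON NULL (emit JSON null)
    'absent' -> ABSENT ON NULL (drop the key)
    'empty'  -> EMPTY STRING ON NULL (emit "")."""
    out = {}
    for key, value in row.items():
        if value is None:
            if on_null == "absent":
                continue  # drop the key entirely
            value = "" if on_null == "empty" else None
        out[key] = value
    return json.dumps(out, separators=(",", ":"))

row = {"COL1": "ONE", "COL2": None}
print(to_json(row, "null"))    # {"COL1":"ONE","COL2":null}
print(to_json(row, "absent"))  # {"COL1":"ONE"}
print(to_json(row, "empty"))   # {"COL1":"ONE","COL2":""}
```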

The default null handling of each JSON generation function can differ, so it's important to check the documentation, or always be explicit so other developers know your intentions.

Source: oracle-base.com

Friday, June 28, 2024

Announcing Globally Distributed Autonomous Database Support for Oracle Database 23ai Enabling Raft Replication


Over the last four years, the Oracle Database Development team has diligently worked to deliver the next long-term support release: Oracle Database 23ai, with a focus on AI and enhancing developer productivity.

Today, we are thrilled to announce support for Oracle Database 23ai on the Oracle Globally Distributed Autonomous Database! Oracle Database 23ai, in conjunction with the Oracle Globally Distributed Autonomous Database, is a game-changer for enterprises worldwide seeking to build cloud-scale distributed databases with ultra-high scalability, availability, and advanced AI capabilities.

With the latest release, Oracle Globally Distributed Autonomous Database allows customers to select their preferred replication strategy to ensure high availability and disaster recovery. You can choose between the new built-in Raft replication (introduced in 23ai) and Oracle Data Guard (available in releases 19c and 23ai). The service automatically deploys the specified replication topology to the configured systems and enables seamless data replication. If you are used to NoSQL databases and do not want to think about how replication works, the built-in Raft replication in Globally Distributed Autonomous Database just works.

Key benefits for Raft replication include:


Fast failover with zero data loss: Raft replication is built into Globally Distributed Autonomous Database to provide a consensus-based, high-performance, low-overhead availability solution, with distributed replicas and fast failover with zero data loss, while automatically maintaining the replication factor if shards fail.

Active-Active symmetric configuration: Each node accepts writes and reads for a subset of data facilitating an Active-Active symmetric distributed database architecture that enhances availability, and optimizes resource utilization globally.

Simplified management: Raft replication is integrated and transparent to applications without requiring configuration of Oracle GoldenGate or Oracle Data Guard. Raft replication automatically reconfigures replication in case of shard host failures or when shards are added or removed from the sharded database.

Automatic data distribution: The service handles workload distribution and balancing upon scaling (addition or removal of shards), including planned or unplanned changes in shard availability. 
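
The consensus arithmetic behind these claims is simple majority voting. Here is a minimal sketch of the generic Raft quorum maths (a general illustration, not the service's actual implementation):

```python
def quorum(replicas: int) -> int:
    """Smallest number of replicas that must acknowledge a write
    before it is considered committed (a strict majority)."""
    return replicas // 2 + 1

def tolerated_failures(replicas: int) -> int:
    """Number of replicas that can fail while the group can
    still commit writes."""
    return replicas - quorum(replicas)

# With a replication factor of 3, a write commits once 2 replicas
# acknowledge it, and 1 replica can fail with zero data loss.
print(quorum(3), tolerated_failures(3))  # 2 1
print(quorum(5), tolerated_failures(5))  # 3 2
```

This is why failover can be fast and lossless: any committed write already exists on a majority of replicas, so a new leader elected from that majority has it.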

By supporting Oracle Database release 23ai, Globally Distributed Autonomous Database offers customers enhanced replication options and advanced AI capabilities including Select AI and AI Vector Search. It provides all of the new benefits on top of 23ai including unparalleled flexibility, improved operational efficiency, enhanced performance, simplified scalability, and compliance for modern applications. Stay tuned for more updates and detailed guides on leveraging these new capabilities to maximize your database’s potential.

Source: oracle.com

Wednesday, June 26, 2024

Oracle Spatial Studio 24.1 Available Now

We’re pleased to announce the release of Oracle Spatial Studio 24.1, a no-code web tool for working with the spatial features of Oracle Database. In this release we've focused on enhancing existing features and adding support for a popular file format, 3D Tiles. This blog post explores some of the enhancements available in this release.

Enhanced embedding of published Studio projects


Spatial Studio provides functionality to embed published maps into other web applications. New capabilities enable interactive behavior between your application and an embedded map using an enhanced API. For an application built on a platform without native map visualization features, embedding Spatial Studio maps is an easy alternative to custom coding. But even for an application built on a platform with map features, Spatial Studio provides advantages, for example high-performance map rendering of large data volumes. The image below is an elementary example of a Spatial Studio map embedded into an Oracle APEX application, allowing the user to select items in the map to highlight corresponding records in the native APEX table, and vice versa.


Enhanced Cesium 3D visualizations to support updated 3D Tiles specification


3D Tiles is a popular file format for streaming massive heterogeneous 3D geospatial datasets, allowing for efficient interaction at runtime. The Open Geospatial Consortium updated the 3D Tiles specification in 2023, and subsequently, Cesium has included support for the features in this standard. Spatial Studio has been enhanced to include these features.
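
As a rough illustration of what the format looks like, the sketch below builds a minimal tileset description in Python. The field names follow the public OGC 3D Tiles specification (version 1.1 is the 2023 revision); real tilesets carry far more detail, and the content URI here is purely hypothetical.

```python
import json

# Minimal 3D Tiles tileset sketch. A bounding volume "box" is 12
# numbers: a centre point plus three half-axis vectors.
tileset = {
    "asset": {"version": "1.1"},   # the 2023 revision of the spec
    "geometricError": 500.0,       # error threshold for refinement
    "root": {
        "boundingVolume": {
            "box": [0, 0, 0, 100, 0, 0, 0, 100, 0, 0, 0, 10],
        },
        "geometricError": 100.0,
        "refine": "REPLACE",
        "content": {"uri": "root.glb"},  # hypothetical tile content
    },
}

# A tileset.json file is just this structure serialised as JSON.
print(len(json.dumps(tileset)) > 0)
```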

Support for incremental Geocoding


Spatial Studio provides the ability to geocode data, whereby addresses are converted to point locations for mapping and spatial analysis. For scenarios where your data are continually or periodically appended with new addresses, Spatial Studio has added the ability to perform “incremental geocoding”. This means that only new addresses are geocoded, regardless of the volume of previously geocoded addresses. This reduces rework, saving resources and improving efficiency.
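
The idea can be sketched as a cache keyed by address, where a stand-in geocoder function represents the real service (the function names here are illustrative, not Spatial Studio's API):

```python
def incremental_geocode(addresses, cache, geocode_one):
    """Geocode only addresses not already in the cache; previously
    geocoded addresses are served from the cache untouched."""
    new = [a for a in addresses if a not in cache]
    for address in new:
        cache[address] = geocode_one(address)  # only new work happens here
    return {a: cache[a] for a in addresses}, len(new)

# Fake geocoder for illustration: records calls, returns a dummy point.
calls = []
def fake_geocoder(address):
    calls.append(address)
    return (0.0, 0.0)

cache = {"1 Main St": (51.5, -0.1)}  # previously geocoded
result, newly = incremental_geocode(["1 Main St", "2 High St"],
                                    cache, fake_geocoder)
print(newly)  # 1 -- only "2 High St" needed geocoding
```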

Source: oracle.com

Monday, June 24, 2024

OCI AI Vision Facial Detection in Oracle Analytics Cloud

This post explores the OCI service AI Vision Facial Detection exposed directly in Oracle Analytics. With prebuilt models available, developers can easily build image recognition and text recognition into their applications without machine learning (ML) expertise.  This post will walk users through registering their AI Vision Face Detection model, uploading images, running their dataflow, and analyzing the output. 

Users can analyze single images or a batch of images directly in OAC using the pre-trained face detection model. The OCI AI Vision Facial Detection feature provides the following information:

1. The existence of faces in each image.
2. The location of each face in the image, given as bounding box coordinates.
3. Facial landmarks for each detected face, including the left eye, right eye, nose tip, and left and right edges of the mouth.
4. The visual quality of each face: a higher score indicates the face is more likely to be suitable for biometrics.
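
The four outputs above can be pictured as a small data structure per detected face. The shape below is illustrative only; the real OCI AI Vision response schema may differ.

```python
# Hypothetical shape of one detected face, covering the four outputs:
# presence, bounding box, landmarks, and quality score. Coordinates
# here are assumed to be normalised to the 0..1 range.
face = {
    "bounding_box": {"x": 0.25, "y": 0.10, "width": 0.30, "height": 0.40},
    "landmarks": {"left_eye": (0.32, 0.22), "right_eye": (0.45, 0.22),
                  "nose_tip": (0.38, 0.30),
                  "mouth_left": (0.33, 0.38), "mouth_right": (0.44, 0.38)},
    "quality_score": 0.91,
}

def face_area(face):
    """Fraction of the image covered by the face's bounding box."""
    box = face["bounding_box"]
    return box["width"] * box["height"]

def is_biometric_grade(face, threshold=0.85):
    """Higher quality scores indicate suitability for biometrics;
    the threshold here is an arbitrary illustrative cut-off."""
    return face["quality_score"] >= threshold

print(round(face_area(face), 2), is_biometric_grade(face))  # 0.12 True
```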

Register your OCI AI Vision Face Detection Model


◉ In the OCI AI Vision service, register your Face Detection Model.

◉ Select the connection to your OCI tenancy.
◉ Choose the desired compartment.
◉ Select the face detection model and provide a staging bucket.

Uploading and preparing your image files


First, upload your images to an OCI bucket. Once all images reside in an OCI bucket, there are two options for building the required data input file that will be used in your data flow.

1. The first option is to create a CSV/xls file that points directly to all the images you wish to use with the face detection model (see left side of screenshot).
2. The second option is to create a CSV/xls that points to the root of the bucket where all images reside.  In this scenario, the model will consume all images within the bucket (see right side of screenshot).
3. Once the input data file is created, you can proceed directly to OAC to build your dataflow and run the face detection model.

Executing the Face Detection model


Dataset Prep

1. To this point, the following steps have been completed: registering the face detection model, uploading images to an OCI bucket, and creating an input data file.
2. With all these components in place, it's time to build the dataflow and execute the model. First, you need to upload the input data file created in the previous step.
3. Follow the screenshot flow below to upload and save the data file in OAC.  This example shows an input file that points directly to each individual image file.  Once the data file is saved, you can build your dataflow.

Build the Dataflow

3. To build the dataflow, start with your newly created input dataset and add the AI Face Detection model.

4. With the model added to the data flow, you'll see all the output columns that will be produced by the Face Detection Model.

  • In the parameters sections, choose the input column (the source column of your images within your data file).
  • For the input type, use the dropdown to indicate whether your input dataset points to individual image files or to a bucket containing the images.

5. The last step is to save the model output in a data file.

  • Add a step to your dataflow, 'save data'.
  • Once you've provided a dataset name, click run, and provide a dataflow name.
  • As part of the save data process, you'll see an output preview of the Face Detection model. Note that depending on the number of images in your model, execution times will vary (currently OAC supports up to 250 face detections per image).

Visualize in OAC


With the data flow successfully executed, you can now create a new workbook in OAC and visualize the output. Before starting, you need to first download the vision series plugin from the OAC library and upload it into your OAC deployment, as this is the visualization type you'll use with the Face Detection Model.

Build your OAC Workbook

1. Start by adding your model output dataset to the workbook and selecting the Vision Plugin.
2. Then, using facial landmarks, you can now identify the exact coordinates and features of each face within your images.
3. In addition, the model provides a quality and confidence score.
4. In this sample, the use of dashboard filters allows for a quick review of all images that were put through the model.

AI Explain

In addition, through the use of AI Explain, you can gain valuable insights into the model, with all images analyzed and reported on together.

Source: oracle.com

Friday, June 21, 2024

Direct Joins for UPDATE and DELETE Statements in Oracle Database 23ai


From Oracle database 23ai onward we can use direct joins to other tables to drive UPDATE and DELETE statements.

◉ Setup


The following tables are necessary to run the examples in this article.

drop table if exists t1 purge;
drop table if exists t2 purge;
drop table if exists t3 purge;

create table t1 as
select level as id,
       'CODE' || level as code,
       'Description for ' || level as description
from   dual
connect by level <= 100;

alter table t1 add constraint t1_pk primary key (id);

create table t2 as
select level as id,
       'CODE' || (level*10) as code,
       'Updated description for ' || (level*10) as description
from   dual
connect by level <= 100;

alter table t2 add constraint t2_pk primary key (id);

create table t3 as
select level as id,
       'CODE' || (level*10) as code,
       'Updated description for ' || (level*10) as description
from   dual
connect by level <= 100;

alter table t3 add constraint t3_pk primary key (id);

◉ Direct Joins for UPDATE


First we check the data for the first five rows.

column code format a10
column description format a30

select * from t1 where id <= 5;

        ID CODE       DESCRIPTION
---------- ---------- ------------------------------
         1 CODE1      Description for 1
         2 CODE2      Description for 2
         3 CODE3      Description for 3
         4 CODE4      Description for 4
         5 CODE5      Description for 5

SQL>

Now we update the data in T1 using a join to the T2 table. We want to update the T1.CODE and T1.DESCRIPTION values with the values from T2.CODE and T2.DESCRIPTION, joining on the ID column.

update t1 a 
set    a.code        = b.code,
       a.description = b.description
from   t2 b
where  a.id = b.id
and    b.id <= 5; 

Now we see the T1.CODE and T1.DESCRIPTION values have been updated.

select * from t1 where id <= 5;

        ID CODE       DESCRIPTION
---------- ---------- ------------------------------
         1 CODE10     Updated description for 10
         2 CODE20     Updated description for 20
         3 CODE30     Updated description for 30
         4 CODE40     Updated description for 40
         5 CODE50     Updated description for 50

SQL>
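
For readers who think in code rather than SQL, the join-driven update can be pictured with two keyed dictionaries in Python. This is only an analogy for which rows get touched, not how the database actually executes the statement.

```python
def update_from(t1, t2, predicate):
    """Mimic UPDATE t1 ... FROM t2: for each t1 row whose id matches
    a t2 row and passes the predicate, copy code/description across."""
    updated = 0
    for row_id, row in t1.items():
        source = t2.get(row_id)       # the join on ID
        if source is not None and predicate(row_id):
            row["code"] = source["code"]
            row["description"] = source["description"]
            updated += 1
    return updated

t1 = {1: {"code": "CODE1", "description": "Description for 1"},
      6: {"code": "CODE6", "description": "Description for 6"}}
t2 = {1: {"code": "CODE10", "description": "Updated description for 10"},
      6: {"code": "CODE60", "description": "Updated description for 60"}}

print(update_from(t1, t2, lambda i: i <= 5))  # 1 row updated
print(t1[1]["code"], t1[6]["code"])           # CODE10 CODE6
```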

Let's rollback the changes.

rollback;

We can't use the ANSI join syntax between T1 and T2, but if there were several tables driving the update, those could be joined together using ANSI joins. The following example is a bit silly, but it proves a point by joining T2 to T3.

update t1 a 
set    a.code        = b.code,
       a.description = b.description
from   t2 b
join   t3 c on b.id = c.id
where  a.id = b.id
and    b.id <= 5;

rollback;

◉ Direct Joins for DELETE


First we check the data for the first five rows.

column code format a10
column description format a30

select * from t1 where id <= 5;

        ID CODE       DESCRIPTION
---------- ---------- ------------------------------
         1 CODE1      Description for 1
         2 CODE2      Description for 2
         3 CODE3      Description for 3
         4 CODE4      Description for 4
         5 CODE5      Description for 5

SQL>

We delete rows from T1 based on a query from T2. Notice we have a join between the two tables using the ID column, and one or more predicates to determine which rows from T2 are being used to drive the delete.

delete t1 a 
from   t2 b
where  a.id = b.id
and    b.id <= 5; 

We can see the rows have been deleted.

select * from t1 where id <= 5;

no rows selected

SQL>

Let's rollback the changes.

rollback;

We can add in the FROM keyword after the DELETE keyword, but it doesn't scan well.

delete from t1 a 
from   t2 b
where  a.id = b.id
and    b.id <= 5;

rollback;

We can't use the ANSI join syntax between T1 and T2, but if there were several tables driving the delete, those could be joined together using ANSI joins. The following example is a bit silly, but it proves a point by joining T2 to T3.

delete t1 a 
from   t2 b
join   t3 c on b.id = c.id
where  a.id = b.id
and    b.id <= 5;

rollback;

Source: oracle-base.com

Wednesday, June 19, 2024

DML RETURNING Clause Enhancements in Oracle Database 23ai


This article demonstrates the enhancements to the DML RETURNING clause introduced in Oracle database 23ai.

Setup


The following tables are required to run the examples in this article.

drop table if exists t1;

create table t1 (
  id number,
  code varchar2(6),
  description varchar(25),
  constraint t1_pk primary key (id)
);

insert into t1 values (1, 'ONE', 'Description for ONE');
insert into t1 values (2, 'TWO', 'Description for TWO');
commit;


drop table if exists t2;

create table t2 (
  id          number,
  code        varchar2(6),
  description varchar(25),
  constraint t2_pk primary key (id)
);

insert into t2 values (3, 'THREE', 'Description for THREE');
insert into t2 values (4, 'FOUR', 'Description for FOUR');
commit;

Single Row Update


We can now return old and new values of columns from rows that are touched by an UPDATE statement.

( RETURN | RETURNING ) ( OLD | NEW ) expr [ , ( OLD | NEW ) expr ]+
INTO [ data_item ] ...

In this example we define variables to hold the old and new values of the CODE and DESCRIPTION columns. We update the T1 table, appending "2" onto the end of the CODE and DESCRIPTION column values for the second row. The RETURNING clause defines the values we want to return, using the OLD and NEW keywords to indicate which value. The INTO clause lists the variables to hold those values.

set serveroutput on
declare
  l_old_code        t1.code%type;
  l_old_description t1.description%type;
  l_new_code        t1.code%type;
  l_new_description t1.description%type;
begin
  update t1
  set    code        = code || '2',
         description = description || '2'
  where  id          = 2
  returning old code , old description, new code, new description
  into l_old_code, l_old_description, l_new_code, l_new_description;

  dbms_output.put_line('l_old_code        = ' || l_old_code);
  dbms_output.put_line('l_old_description = ' || l_old_description);
  dbms_output.put_line('l_new_code        = ' || l_new_code);
  dbms_output.put_line('l_new_description = ' || l_new_description);

  rollback;
end;
/
l_old_code        = TWO
l_old_description = Description for TWO
l_new_code        = TWO2
l_new_description = Description for TWO2

PL/SQL procedure successfully completed.

SQL>
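
Outside the database, the same bookkeeping looks like this in plain Python — an illustrative analogy for capturing old and new values in a single step, not an Oracle API:

```python
import copy

def update_returning(row, changes):
    """Apply changes to a row and hand back (old, new) snapshots in
    one call, like RETURNING OLD ... NEW ... INTO."""
    old = copy.deepcopy(row)   # snapshot before the change
    row.update(changes)
    new = copy.deepcopy(row)   # snapshot after the change
    return old, new

row = {"id": 2, "code": "TWO", "description": "Description for TWO"}
old, new = update_returning(row, {"code": "TWO2",
                                  "description": "Description for TWO2"})
print(old["code"], new["code"])  # TWO TWO2
```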

Multiple Row Update


We can do something similar for updates that touch multiple rows, but this time we need to use collections to hold the returned values. In this example we define a collection for each of the returned values. We update the table in a similar way, but this time we change INTO to a BULK COLLECT INTO and list the collections to hold the returned values. We loop through the collections to display the returned values.

set serveroutput on
declare
  type l_old_code_t is table of t1.code%type;
  type l_old_description_t is table of t1.description%type;
  type l_new_code_t is table of t1.code%type;
  type l_new_description_t is table of t1.description%type;

  l_old_code        l_old_code_t;
  l_old_description l_old_description_t;
  l_new_code        l_new_code_t;
  l_new_description l_new_description_t;
begin
  update t1
  set    code        = code || '1',
         description = description || '1'
  returning old code, old description, new code, new description
  bulk collect into l_old_code, l_old_description, l_new_code, l_new_description;

  for i in 1 .. l_old_code.count loop
    dbms_output.put_line('row               = ' || i);
    dbms_output.put_line('l_old.code        = ' || l_old_code(i));
    dbms_output.put_line('l_old.description = ' || l_old_description(i));
    dbms_output.put_line('l_new.code        = ' || l_new_code(i));
    dbms_output.put_line('l_new.description = ' || l_new_description(i));
  end loop;

  rollback;
end;
/
row               = 1
l_old.code        = ONE
l_old.description = Description for ONE
l_new.code        = ONE1
l_new.description = Description for ONE1
row               = 2
l_old.code        = TWO
l_old.description = Description for TWO
l_new.code        = TWO1
l_new.description = Description for TWO1

PL/SQL procedure successfully completed.

SQL>

Insert and Delete


The NEW and OLD keywords can be used on INSERT and DELETE also, but they don't bring anything new to the table. For inserts we have no old values, and for deletes we have no new values.

set serveroutput on
declare
  l_old_code        t1.code%type;
  l_old_description t1.description%type;
  l_new_code        t1.code%type;
  l_new_description t1.description%type;
begin
  insert into t1 (id, code, description)
  values (5, 'FIVE', 'Description for FIVE')
  returning old code , old description, new code, new description
  into l_old_code, l_old_description, l_new_code, l_new_description;

  dbms_output.put_line('l_old_code        = ' || l_old_code);
  dbms_output.put_line('l_old_description = ' || l_old_description);
  dbms_output.put_line('l_new_code        = ' || l_new_code);
  dbms_output.put_line('l_new_description = ' || l_new_description);

  rollback;
end;
/
l_old_code        =
l_old_description =
l_new_code        = FIVE
l_new_description = Description for FIVE

PL/SQL procedure successfully completed.

SQL>

set serveroutput on
declare
  l_old_code        t1.code%type;
  l_old_description t1.description%type;
  l_new_code        t1.code%type;
  l_new_description t1.description%type;
begin
  delete from t1
  where  id = 2
  returning old code , old description, new code, new description
  into l_old_code, l_old_description, l_new_code, l_new_description;

  dbms_output.put_line('l_old_code        = ' || l_old_code);
  dbms_output.put_line('l_old_description = ' || l_old_description);
  dbms_output.put_line('l_new_code        = ' || l_new_code);
  dbms_output.put_line('l_new_description = ' || l_new_description);

  rollback;
end;
/
l_old_code        = TWO
l_old_description = Description for TWO
l_new_code        =
l_new_description =

PL/SQL procedure successfully completed.

SQL>

Source: oracle-base.com

Monday, June 17, 2024

CASE Statement and CASE Expression Enhancements in Oracle Database 23ai


In Oracle database 23ai the simple CASE statement and expression are more flexible, allowing dangling predicates and multiple choices in a single WHEN clause.

This brings the PL/SQL simple CASE statement and expression in line with the SQL:2003 standard [ISO03a, ISO03b].

◉ Setup


The examples in this article use the following table.

drop table if exists t1 purge;

create table t1 (
  pct number
);

insert into t1 (pct)
values (-1), (0), (10), (40), (70), (80), (90), (100), (101);
commit;

◉ The Problem


In previous releases simple CASE statements and expressions were only capable of performing equality checks. If we needed comparisons other than equality checks we would have to use a searched case statement or expression.

In the following example we use a searched CASE statement to evaluate a threshold. We are using a searched CASE statement because most of the comparisons are not simple equality checks.

set serveroutput on
declare
  l_threshold varchar2(20);
begin
  for cur_rec in (select pct from t1) loop
    case 
      when cur_rec.pct  = 40 then l_threshold := 'Optimal';
      when cur_rec.pct <= 70 then l_threshold := 'Safe';
      when cur_rec.pct <= 80 then l_threshold := 'Check';
      when cur_rec.pct <= 90 then l_threshold := 'Warning';
      when cur_rec.pct  > 90 then l_threshold := 'Critical';
    end case;

    dbms_output.put_line(cur_rec.pct || ' : ' || l_threshold);
  end loop;
end;
/
-1 : Safe
0 : Safe
10 : Safe
40 : Optimal
70 : Safe
80 : Check
90 : Warning
100 : Critical
101 : Critical

PL/SQL procedure successfully completed.

SQL>

This example uses a searched CASE expression to do the same thing.

set serveroutput on
declare
  l_threshold varchar2(20);
begin
  for cur_rec in (select pct from t1) loop
    l_threshold := case 
                     when cur_rec.pct  = 40 then 'Optimal'
                     when cur_rec.pct <= 70 then 'Safe'
                     when cur_rec.pct <= 80 then 'Check'
                     when cur_rec.pct <= 90 then 'Warning'
                     when cur_rec.pct  > 90 then 'Critical'
                   end;

    dbms_output.put_line(cur_rec.pct || ' : ' || l_threshold);
  end loop;
end;
/
-1 : Safe
0 : Safe
10 : Safe
40 : Optimal
70 : Safe
80 : Check
90 : Warning
100 : Critical
101 : Critical

PL/SQL procedure successfully completed.

SQL>

◉ Dangling Predicates


In Oracle 23ai we can do the same thing using a simple CASE statement or expression using dangling predicates. A dangling predicate is an expression with its left operand missing.

This example uses a simple CASE statement with dangling predicates to achieve the same result.

set serveroutput on
declare
  l_threshold varchar2(20);
begin
  for cur_rec in (select pct from t1) loop
    case cur_rec.pct
      when 40    then l_threshold := 'Optimal';
      when <= 70 then l_threshold := 'Safe';
      when <= 80 then l_threshold := 'Check';
      when <= 90 then l_threshold := 'Warning';
      when  > 90 then l_threshold := 'Critical';
    end case;

    dbms_output.put_line(cur_rec.pct || ' : ' || l_threshold);
  end loop;
end;
/
-1 : Safe
0 : Safe
10 : Safe
40 : Optimal
70 : Safe
80 : Check
90 : Warning
100 : Critical
101 : Critical

PL/SQL procedure successfully completed.

SQL>

Here is the simple CASE expression equivalent.

set serveroutput on
declare
  l_threshold varchar2(20);
begin
  for cur_rec in (select pct from t1) loop
    l_threshold := case cur_rec.pct
                     when 40    then 'Optimal'
                     when <= 70 then 'Safe'
                     when <= 80 then 'Check'
                     when <= 90 then 'Warning'
                     when  > 90 then 'Critical'
                   end;

    dbms_output.put_line(cur_rec.pct || ' : ' || l_threshold);
  end loop;
end;
/
-1 : Safe
0 : Safe
10 : Safe
40 : Optimal
70 : Safe
80 : Check
90 : Warning
100 : Critical
101 : Critical

PL/SQL procedure successfully completed.

SQL>

◉ Multiple Choices in a Single WHEN Clause


A single WHEN clause can include multiple equality checks or dangling predicates as a comma-separated list.

In the following example we use a simple CASE statement to show an error if a value is below 0, exactly 0.5 or greater than 100. We also add values 41 and 42 to the optimal threshold.

set serveroutput on
declare
  l_threshold varchar2(20);
begin
  for cur_rec in (select pct from t1) loop
    case cur_rec.pct
      when < 0, 0.5, > 100 then l_threshold := 'Error';
      when 40, 41, 42      then l_threshold := 'Optimal';
      when <= 70           then l_threshold := 'Safe';
      when <= 80           then l_threshold := 'Check';
      when <= 90           then l_threshold := 'Warning';
      when  > 90           then l_threshold := 'Critical';
    end case;

    dbms_output.put_line(cur_rec.pct || ' : ' || l_threshold);
  end loop;
end;
/
-1 : Error
0 : Safe
10 : Safe
40 : Optimal
70 : Safe
80 : Check
90 : Warning
100 : Critical
101 : Error

PL/SQL procedure successfully completed.

SQL>

This is the simple CASE expression equivalent of the previous example.

set serveroutput on
declare
  l_threshold varchar2(20);
begin
  for cur_rec in (select pct from t1) loop
    l_threshold := case cur_rec.pct
                     when < 0, 0.5, > 100 then 'Error'
                     when 40, 41, 42      then 'Optimal'
                     when <= 70           then 'Safe'
                     when <= 80           then 'Check'
                     when <= 90           then 'Warning'
                     when  > 90           then 'Critical'
                   end;

    dbms_output.put_line(cur_rec.pct || ' : ' || l_threshold);
  end loop;
end;
/
-1 : Error
0 : Safe
10 : Safe
40 : Optimal
70 : Safe
80 : Check
90 : Warning
100 : Critical
101 : Error

PL/SQL procedure successfully completed.

SQL>

◉ SQL CASE Expressions


This functionality is not supported directly by SQL CASE expressions.

select pct,
       case pct
         when 0.5, < 0, > 100 then 'Error'
         when 40, 41, 42      then 'Optimal'
         when <= 70           then 'Safe'
         when <= 80           then 'Check'
         when <= 90           then 'Warning'
         when  > 90           then 'Critical'
       end as status
from   t1;

when 0.5, < 0, > 100 then 'Error'
                 *
ERROR at line 3:
ORA-02000: missing THEN keyword

SQL>

We can achieve a similar result by defining a function in the WITH clause to perform the CASE expression.

with
  function get_status(p_pct in number) return varchar2 is
  begin
    return case p_pct
             when < 0, 0.5, > 100 then 'Error'
             when 40, 41, 42      then 'Optimal'
             when <= 70           then 'Safe'
             when <= 80           then 'Check'
             when <= 90           then 'Warning'
             when  > 90           then 'Critical'
           end;
  end;
select pct, get_status(pct) as status
from   t1
/

       PCT STATUS
---------- --------------------------
        -1 Error
         0 Safe
        10 Safe
        40 Optimal
        70 Safe
        80 Check
        90 Warning
       100 Critical
       101 Error

9 rows selected.

SQL>

Source: oracle-base.com

Saturday, June 15, 2024

DB_DEVELOPER_ROLE Role in Oracle Database 23ai

DB_DEVELOPER_ROLE Role in Oracle Database 23ai

Oracle database 23ai introduced a new DB_DEVELOPER_ROLE role to provide the basic roles and privileges Oracle believes are necessary for a database developer.

◉ DB_DEVELOPER_ROLE Role Basics


Oracle database 23ai introduced a new DB_DEVELOPER_ROLE role to provide the basic roles and privileges Oracle believes are necessary for a database developer. Prior to this release, people would often grant the CONNECT and RESOURCE roles as a starting point for most developers, but the DB_DEVELOPER_ROLE role is more than the sum of these two roles, as we will see below.

The DB_DEVELOPER_ROLE role is granted and revoked in the normal way.

conn sys/SysPassword1@//localhost:1521/freepdb1 as sysdba

grant db_developer_role to testuser1;

revoke db_developer_role from testuser1;
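
We can check which users have been granted the role by querying the DBA_ROLE_PRIVS view:

```sql
select grantee
from   dba_role_privs
where  granted_role = 'DB_DEVELOPER_ROLE'
order by 1;
```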

◉ Comparing CONNECT, RESOURCE and DB_DEVELOPER_ROLE


The following three sections display the system, role and object privileges associated with each role.

    ◉ CONNECT Role

Here are the system, role and object privileges associated with the CONNECT role.

variable v_role VARCHAR2(30)
exec :v_role := 'CONNECT';

-- System Privileges
select sp.privilege
from   dba_sys_privs sp
where  sp.grantee = :v_role
order by 1;

PRIVILEGE
----------------------------------------
CREATE SESSION
SET CONTAINER

SQL>

-- Role Privileges
select rp.granted_role
from   dba_role_privs rp
where  rp.grantee = :v_role
order by 1;

no rows selected

SQL>

-- Object Privileges
column privilege format a30
column table_name format a30

select tp.privilege, tp.table_name 
from   dba_tab_privs tp
where  tp.grantee = :v_role
order by 1, 2;

no rows selected

SQL>

    ◉ RESOURCE Role

Here are the system, role and object privileges associated with the RESOURCE role. There are seven more system privileges granted to the RESOURCE role in 23ai compared to 19c.

variable v_role VARCHAR2(30)
exec :v_role := 'RESOURCE';

-- System Privileges
select sp.privilege
from   dba_sys_privs sp
where  sp.grantee = :v_role
order by 1;

PRIVILEGE
----------------------------------------
CREATE ANALYTIC VIEW
CREATE ATTRIBUTE DIMENSION
CREATE CLUSTER
CREATE HIERARCHY
CREATE INDEXTYPE
CREATE MATERIALIZED VIEW
CREATE OPERATOR
CREATE PROCEDURE
CREATE PROPERTY GRAPH
CREATE SEQUENCE
CREATE SYNONYM
CREATE TABLE
CREATE TRIGGER
CREATE TYPE
CREATE VIEW

15 rows selected.

SQL>

-- Role Privileges
select rp.granted_role
from   dba_role_privs rp
where  rp.grantee = :v_role
order by 1;

GRANTED_ROLE
------------------------------------------------
SODA_APP

SQL>

-- Object Privileges
column table_name format a30

select tp.privilege, tp.table_name 
from   dba_tab_privs tp
where  tp.grantee = :v_role
order by 1, 2;

no rows selected

SQL>

    ◉ DB_DEVELOPER_ROLE Role

Here are the system, role and object privileges associated with the DB_DEVELOPER_ROLE role.

variable v_role VARCHAR2(30)
exec :v_role := 'DB_DEVELOPER_ROLE';

-- System Privileges
select sp.privilege
from   dba_sys_privs sp
where  sp.grantee = :v_role
order by 1;

PRIVILEGE
------------------------------
CREATE CUBE
CREATE CUBE BUILD PROCESS
CREATE CUBE DIMENSION
CREATE DIMENSION
CREATE DOMAIN
CREATE JOB
CREATE MINING MODEL
CREATE MLE
CREATE SESSION
DEBUG CONNECT SESSION
EXECUTE DYNAMIC MLE
FORCE TRANSACTION
ON COMMIT REFRESH

13 rows selected.

SQL>

-- Role Privileges
select rp.granted_role
from   dba_role_privs rp
where  rp.grantee = :v_role
order by 1;

GRANTED_ROLE
------------------------------------------------
CTXAPP
RESOURCE

SQL>

-- Object Privileges
column privilege format a30
column table_name format a30

select tp.privilege, tp.table_name 
from   dba_tab_privs tp
where  tp.grantee = :v_role
order by 1, 2;

PRIVILEGE                      TABLE_NAME
------------------------------ ------------------------------
EXECUTE                        DBMS_REDACT
EXECUTE                        DBMS_RLS
EXECUTE                        DBMS_TSDP_MANAGE
EXECUTE                        DBMS_TSDP_PROTECT
EXECUTE                        JAVASCRIPT
READ                           V_$PARAMETER
READ                           V_$STATNAME
SELECT                         DBA_PENDING_TRANSACTIONS

SQL>

◉ Considerations


For many years we have been discouraged from using the predefined roles, and encouraged to use the "least privilege" approach when granting privileges to users. From a security perspective, the fewer privileges a user has, the smaller the attack surface.

We can see from the above comparison, the DB_DEVELOPER_ROLE role has more privileges than the CONNECT and the RESOURCE roles combined. For some people or companies this might be considered too much privilege. It's up to you or your company to decide if this role is suitable or not.

The addition of more privileges to the RESOURCE role means people who are in the habit of granting the CONNECT plus RESOURCE roles are getting something approaching the level of privilege granted by the DB_DEVELOPER_ROLE. It is worth considering the impact of this.
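
As a rough way to quantify the difference, we can compare the direct system privileges with a MINUS query. Note this ignores privileges inherited through nested role grants (RESOURCE, SODA_APP, CTXAPP), so it only gives a partial picture:

```sql
-- Direct system privileges of DB_DEVELOPER_ROLE that are not
-- granted directly to CONNECT or RESOURCE.
select privilege
from   dba_sys_privs
where  grantee = 'DB_DEVELOPER_ROLE'
minus
select privilege
from   dba_sys_privs
where  grantee in ('CONNECT', 'RESOURCE')
order by 1;
```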

Source: oracle-base.com

Friday, June 14, 2024

Bigfile Tablespace Defaults in Oracle Database 23ai

Bigfile Tablespace Defaults in Oracle Database 23ai

Bigfile tablespaces were introduced in Oracle 10g. In Oracle database 23ai bigfile tablespaces are the default.

◉ BIGFILE Default for SYSAUX, SYSTEM, and USER Tablespaces


In Oracle database 23ai most of the Oracle provided tablespaces are bigfile tablespaces by default.

We connect to the root container and check out the tablespaces for this container. All are bigfile tablespaces excluding the TEMP tablespace.

conn / as sysdba

select tablespace_name, bigfile
from   dba_tablespaces
order by 1;

TABLESPACE_NAME                BIG
------------------------------ ---
SYSAUX                         YES
SYSTEM                         YES
TEMP                           NO
UNDOTBS1                       YES
USERS                          YES

SQL>

We switch to a pluggable database and display the tablespaces for this container. This is similar, but the USERS tablespace in the PDB is a smallfile tablespace. This is the default behaviour in this release of 23ai Free.

alter session set container=freepdb1;

select tablespace_name, bigfile
from   dba_tablespaces
order by 1;

TABLESPACE_NAME                BIG
------------------------------ ---
SYSAUX                         YES
SYSTEM                         YES
TEMP                           NO
UNDOTBS1                       YES
USERS                          NO

SQL>

◉ BIGFILE Default for User Defined Tablespaces


Bigfile is the default file size when creating a new tablespace. In the following example we create a new tablespace without explicitly setting the bigfile/smallfile file size. We are using Oracle Managed Files (OMF), so we don't need to manually name the associated datafile.

create tablespace new_ts datafile size 2g;

Notice the new tablespace has been created as a bigfile tablespace.

select tablespace_name, bigfile
from   dba_tablespaces
order by 1;

TABLESPACE_NAME                BIG
------------------------------ ---
NEW_TS                         YES
SYSAUX                         YES
SYSTEM                         YES
TEMP                           NO
UNDOTBS1                       YES
USERS                          NO

SQL>
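
If we specifically want a smallfile tablespace, we can still ask for one explicitly using the SMALLFILE keyword. A sketch, using a hypothetical SMALL_TS tablespace:

```sql
-- Explicitly request a smallfile tablespace, overriding the
-- 23ai bigfile default. OMF names the datafile for us.
create smallfile tablespace small_ts datafile size 2g;
```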

◉ Considerations


These new defaults were introduced in 23.4. If I run the same tests in 23.3 I get the following.

conn / as sysdba

select tablespace_name, bigfile
from   dba_tablespaces
order by 1;

TABLESPACE_NAME                BIG
------------------------------ ---
SYSAUX                         NO
SYSTEM                         NO
TEMP                           NO
UNDOTBS1                       NO
USERS                          NO

SQL>

alter session set container=pdb1;

select tablespace_name, bigfile
from   dba_tablespaces
order by 1;

TABLESPACE_NAME                BIG
------------------------------ ---
SYSAUX                         NO
SYSTEM                         NO
TEMP                           NO
UNDOTBS1                       NO
USERS                          NO

6 rows selected.

SQL>

create tablespace new_ts datafile size 2g;

select tablespace_name, bigfile
from   dba_tablespaces
order by 1;

TABLESPACE_NAME                BIG
------------------------------ ---
NEW_TS                         NO
SYSAUX                         NO
SYSTEM                         NO
TEMP                           NO
UNDOTBS1                       NO
USERS                          NO

SQL>

I've been told some of the 23ai cloud services have different defaults for this behaviour. I suspect this is because they are based on images created from different versions of the software. I would expect future release updates to result in all tablespaces, including TEMP, being bigfile when a clean installation is performed.

Source: oracle-base.com

Wednesday, June 12, 2024

DBMS_JOB Jobs Converted to DBMS_SCHEDULER Jobs in Oracle Database 19c

DBMS_JOB Jobs Converted to DBMS_SCHEDULER Jobs in Oracle Database 19c

The DBMS_JOB package has been deprecated since 12cR2. Oracle 19c takes the demise of the DBMS_JOB package a step further by converting any DBMS_JOB jobs to DBMS_SCHEDULER jobs.

◉ Create a Job Using DBMS_JOB


In Oracle 19c jobs created using the DBMS_JOB package are implemented as DBMS_SCHEDULER jobs, as demonstrated below.

We can see from the output below we don't have any jobs for this user.

conn test/test@pdb1

column what format a30

select job, what from user_jobs;

0 rows selected.

SQL>

column job_name format a30
column job_action format a30

select job_name, job_action from user_scheduler_jobs;

0 rows selected.

SQL>

We create a job using the DBMS_JOB.SUBMIT procedure, but we are not going to issue a COMMIT statement.

declare
  l_job  pls_integer;
begin
  dbms_job.submit (
    job       => l_job,
    what      => 'begin null; end;',
    next_date => trunc(sysdate)+1,
    interval  => 'trunc(sysdate)+1'
  );
end;
/

We can see the job is listed in the USER_JOBS and USER_SCHEDULER_JOBS views.

select job, what from user_jobs;

       JOB WHAT
---------- ------------------------------
         1 begin null; end;

1 row selected.

SQL>


select job_name, job_action from user_scheduler_jobs;

JOB_NAME                       JOB_ACTION
------------------------------ ------------------------------
DBMS_JOB$_1                    begin null; end;

1 row selected.

SQL>

Notice the JOB_NAME of "DBMS_JOB$_?" for the DBMS_SCHEDULER job that has been generated.

◉ Transactional Jobs


One of the reasons people still use the DBMS_JOB package is that it allows you to create jobs that are part of a bigger transaction. If a failure causes an exception, all the current work, along with the jobs defined as part of it, can be rolled back. We can demonstrate this using the job created above. Remember, we didn't issue a COMMIT, so the job is not visible from another session connected to the same user.

Without closing the original session, open a new connection and check for the jobs.

conn test/test@pdb1

select job, what from user_jobs;

0 rows selected.

SQL>


select job_name, job_action from user_scheduler_jobs;

0 rows selected.

SQL>

Now return to the original session and the jobs are still visible.

select job, what from user_jobs;

       JOB WHAT
---------- ------------------------------
         1 begin null; end;

1 row selected.

SQL>


select job_name, job_action from user_scheduler_jobs;

JOB_NAME                       JOB_ACTION
------------------------------ ------------------------------
DBMS_JOB$_1                    begin null; end;

1 row selected.

SQL>

Issue a ROLLBACK, and the job definition will be removed.

rollback;

select job, what from user_jobs;

0 rows selected.

SQL>


select job_name, job_action from user_scheduler_jobs;

0 rows selected.

SQL>

As a result, the DBMS_JOB package can still be used to create transactional jobs, which are now implemented using DBMS_SCHEDULER. This also provides backwards compatibility.
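
Conversely, if we want to keep the job, we must issue an explicit COMMIT, just as we always had to with DBMS_JOB:

```sql
declare
  l_job  pls_integer;
begin
  dbms_job.submit (
    job       => l_job,
    what      => 'begin null; end;',
    next_date => trunc(sysdate)+1,
    interval  => 'trunc(sysdate)+1'
  );

  -- Make the job definition permanent.
  commit;
end;
/
```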

◉ Materialized View Refresh Groups


Up to and including Oracle 18c, materialized view refresh groups were implemented using the kernel APIs exposed by the old DBMS_JOB package. In Oracle 19c things look a little different.

Create a table, a materialized view, and a refresh group that includes the materialized view.

create table t1 (id number);

create materialized view t1_mv
refresh force
on demand
as
select * from t1;

begin
   dbms_refresh.make(
     name                 => 'MINUTE_REFRESH',
     list                 => '',
     next_date            => sysdate,
     interval             => '/*1:mins*/ sysdate + 1/(60*24)',
     implicit_destroy     => false,
     lax                  => false,
     job                  => 0,
     rollback_seg         => null,
     push_deferred_rpc    => true,
     refresh_after_errors => true,
     purge_option         => null,
     parallelism          => null,
     heap_size            => null);
end;
/

begin
   dbms_refresh.add(
     name => 'MINUTE_REFRESH',
     list => 'T1_MV',
     lax  => true);
end;
/

We don't see a job in the USER_JOBS view, but we do see one in the USER_SCHEDULER_JOBS view.

select job, what from user_jobs;

0 rows selected.

SQL>


select job_name, job_action from user_scheduler_jobs;

JOB_NAME                       JOB_ACTION
------------------------------ ------------------------------
MV_RF$J_0_S_210                dbms_refresh.refresh('"TEST"."
                               MINUTE_REFRESH"');


1 row selected.

SQL>

But this job is transactional, in that a ROLLBACK will remove the job, along with the refresh group definition.

rollback;


select job, what from user_jobs;

0 rows selected.

SQL>


select job_name, job_action from user_scheduler_jobs;

0 rows selected.

SQL>

It would appear the refresh group functionality has been re-implemented using the kernel APIs that sit under the DBMS_SCHEDULER package, but without the implicit commit, similar to the way the DBMS_JOB interface has been re-implemented. This is not 100% backwards compatible, as the associated job is not visible in the USER_JOBS view. If you have any functionality that relies on the link between refresh groups and the old scheduler, it will need revisiting. I can't imagine that will be a problem for most people.

You can clean up the test table and materialized view using these commands.

drop materialized view t1_mv;
drop table t1 purge;

◉ Security : The CREATE JOB Privilege is Required?


At first glance the loophole discussed below sounds really bad, but remember that even in Oracle 18c, any user connected to the database could create a job using the DBMS_JOB interface, so this loophole is no worse than what came before. It just breaks the DBMS_SCHEDULER security model.

As Connor McDonald pointed out, the conversion means users require the CREATE JOB privilege to allow them to create jobs using the DBMS_JOB package, where previously they didn't. We can see this if we create a user with just the CREATE SESSION privilege and attempt to create a job.

create user test2 identified by test2;
grant create session to test2;

conn test2/test2@pdb1

declare
  l_job  pls_integer;
begin
  dbms_job.submit (
    job       => l_job,
    what      => 'begin null; end;',
    next_date => trunc(sysdate)+1,
    interval  => 'trunc(sysdate)+1'
  );
end;
/

Error report -
ORA-27486: insufficient privileges
ORA-06512: at "SYS.DBMS_ISCHED", line 9387
ORA-06512: at "SYS.DBMS_ISCHED", line 9376
ORA-06512: at "SYS.DBMS_ISCHED", line 175
ORA-06512: at "SYS.DBMS_ISCHED", line 9302
ORA-06512: at "SYS.DBMS_IJOB", line 196
ORA-06512: at "SYS.DBMS_JOB", line 168
ORA-06512: at line 4
27486. 00000 -  "insufficient privileges"
*Cause:    An attempt was made to perform a scheduler operation without the
           required privileges.
*Action:   Ask a sufficiently privileged user to perform the requested
           operation, or grant the required privileges to the proper user(s).
SQL>

There is a loophole caused by the refresh group implementation. If we repeat the previous refresh group example, we can see we are able to create a job without the CREATE JOB privilege.

begin
   dbms_refresh.make(
     name                 => 'MINUTE_REFRESH',
     list                 => '',
     next_date            => sysdate,
     interval             => '/*1:mins*/ sysdate + 1/(60*24)',
     implicit_destroy     => false,
     lax                  => false,
     job                  => 0,
     rollback_seg         => null,
     push_deferred_rpc    => true,
     refresh_after_errors => true,
     purge_option         => null,
     parallelism          => null,
     heap_size            => null);
end;
/

select job_name, job_action from user_scheduler_jobs;

JOB_NAME                       JOB_ACTION
------------------------------ ------------------------------
MV_RF$J_0_S_242                dbms_refresh.refresh('"TEST2".
                               "MINUTE_REFRESH"');


1 row selected.

SQL>

That in itself is not devastating because it's for a very specific purpose, but most of Oracle's security is based on you being able to do whatever you want with objects you already own, so what happens if we try to change the attributes?

begin
  dbms_scheduler.set_attribute (
    name      => 'MV_RF$J_0_S_242',
    attribute => 'job_action',
    value     => 'begin null; end;'
  );
end;
/

SELECT job_name, job_action FROM user_scheduler_jobs;

JOB_NAME                       JOB_ACTION
------------------------------ ------------------------------
MV_RF$J_0_S_242                begin null; end;

1 row selected.

SQL>

So we can create a job using the DBMS_REFRESH package, then alter it to suit our purpose, giving us the ability to create a job without the need for the CREATE JOB privilege.

It would appear the re-implementation of the DBMS_REFRESH package has not followed the same security rules as the other scheduler interfaces. I'm sure this will get fixed in a future release.

Until this issue is resolved, you should probably revoke EXECUTE on the DBMS_REFRESH package from PUBLIC, as you may already do for the DBMS_JOB package.
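
A sketch of that revoke, run as a suitably privileged user in the relevant container:

```sql
-- Stop ordinary users calling the package directly.
revoke execute on sys.dbms_refresh from public;
```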

Note. I raised this issue as "SR 3-20860955641 : Jobs can be created without the CREATE JOB privilege". This is now Bug 30357828 and is being worked on.

◉ Miscellaneous
 
◉ The CREATE JOB privilege is necessary to create jobs using the DBMS_JOB package.
◉ During upgrades to 19c, any jobs defined using DBMS_JOB get converted to DBMS_SCHEDULER jobs. See Mike Dietrich's post about this.
◉ The SCHEDULER$_DBMSJOB_MAP dictionary table provides the mapping between the old DBMS_JOB job number and the new DBMS_SCHEDULER job name.

SQL> desc scheduler$_dbmsjob_map
Name            Null?    Type
--------------- -------- -------------
DBMS_JOB_NUMBER NOT NULL NUMBER
JOB_OWNER       NOT NULL VARCHAR2(128)
JOB_NAME        NOT NULL VARCHAR2(128)
SQL>
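
Based on those columns, we can join the mapping table to the scheduler views to see both identities side by side. A sketch, run as a privileged user:

```sql
-- Map each legacy DBMS_JOB number to its DBMS_SCHEDULER job.
select m.dbms_job_number, s.owner, s.job_name, s.job_action
from   sys.scheduler$_dbmsjob_map m
       join dba_scheduler_jobs s
         on s.owner    = m.job_owner
        and s.job_name = m.job_name
order by 1;
```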

Source: oracle-base.com