Monday, May 30, 2022

Oracle Spatial Studio 22.1 now on Oracle Cloud Marketplace

Oracle Spatial Studio is a no-code web tool for accessing the spatial features of Oracle Database. The release of Oracle Spatial Studio 22.1 introduces a variety of valuable new capabilities as summarized here. The Oracle Cloud Marketplace has now been updated with this latest version for new deployments. Please find the listing here. The following walks you through the Marketplace deployment.

Before you begin

Review the prerequisites listed under Usage Instructions, namely 1) if your OCI user is not in the OCI Administrators group, then assign required policies and 2) create Secret(s) in OCI Vault to store passwords.


Once the prerequisites are complete, you may click Launch Stack to begin the stack wizard.

Enter Stack Information


You may leave defaults, or edit the stack name and description.


Click Next to continue.

Configure Variables


Compute Instance

Leave the default resource name prefix, or edit if desired. Select an Availability Domain and then Server Instance (Compute) Shape.

Note: You now have the option to deploy Spatial Studio to flexible shapes ("Flex shapes"), with the ability to customize the number of OCPUs and the amount of memory for your instance. You can also deploy to Arm-based compute, which has seen growing popularity for cloud deployments. The minimum recommended Flex memory for Spatial Studio is 4 GB.


Scroll down to continue.

Advanced Configuration

You may leave the default HTTPS port and Spatial Studio admin user name or edit if desired. As described in the prerequisites, you must select a Secret to use for the admin password.


Scroll down to continue.

Configure Networking

You may leave the default network configuration, which creates a new virtual network for the deployment, or select an existing virtual network.


Scroll down to continue.

Add SSH keys

Browse and select (or copy/paste) your SSH public key.


Scroll down to continue.

Configure Autonomous Database

On first startup, Oracle Spatial Studio requires connection to an Oracle Database for its metadata repository. The Oracle Spatial Studio deployment process on the Cloud Marketplace now provides the following options for metadata repository configuration:

1. Automatically create a new Oracle Autonomous Data Warehouse and configure Spatial Studio’s metadata repository. When deployment is complete you may log in to Spatial Studio and begin working immediately.

2. Automatically use an existing database and configure Spatial Studio’s metadata repository. When deployment is complete you may log in to Spatial Studio and begin working immediately.

3. Defer metadata repository until first login. When deployment is complete you will log in to Spatial Studio and be prompted to supply database connection info to generate Spatial Studio’s metadata repository.

Select your preferred strategy for Spatial Studio's metadata repository connection. If creating a new Autonomous Data Warehouse or using an existing one, you will need to select a Secret for the database user's password.


Click Next to continue.

Review

Review the summary of your stack configuration.


Scroll to the bottom and check Run Apply in order to automatically run the stack (i.e., perform the deployment) after you click Create.


Click Create to create and apply the stack.

Monitor progress


You will be navigated to the Job Details page for your stack Apply job. Log content will begin to appear after roughly 1 minute, and deployment will complete in roughly 5 minutes. When State shows Succeeded your job is done. If State shows Failed then the log will contain the cause, such as lack of resource quota in your account.


Scroll down to the bottom of the log for important details:

◉ URL to access your Spatial Studio instance (labelled instance_https_url)
◉ Links for those interested in configuring an HTTPS certificate and configuring Oracle IDCS as an SSO provider


You may also click the Application Information tab, which provides a button to open your Spatial Studio instance along with other deployment info. As noted in Comments, wait 2-3 minutes after the Apply job completes to access your Spatial Studio instance.


Access your Spatial Studio instance


After waiting 2-3 minutes following stack Apply job completion, click the Open Spatial Studio button or navigate to the instance URL shown in the log. The deployment is pre-configured with a self-signed HTTPS certificate, so you will see a browser security warning. Accept and proceed to the site.


Log in using the Spatial Studio administrator user name (default is admin unless changed in the stack wizard) and password (password associated with the Secret used in the stack wizard).


Source: oracle.com

Friday, May 27, 2022

Difference between Structured and Object-Oriented Analysis

Analysis simply means studying or examining the structure of something, its elements, and system requirements, in a detailed and methodical way. Structured analysis and object-oriented analysis are both analysis techniques used in software engineering and are important for software development, but they differ from each other.

1. Structured Analysis

Structured analysis is a development method that allows the analyst to understand the system and all of its activities in a logical way. It uses graphical representations to specify the presentation of the application.

Example – 


2. Object-Oriented Analysis


Object-Oriented Analysis (OOA) is a technical approach generally used for analyzing and designing applications, systems, or even businesses by applying object-oriented programming, along with visual modeling throughout the development process, to guide stakeholder communication and product quality. It is a process of discovery in which a development team understands and models all the requirements of the system.

Example – 


Difference Between Structured and Object-oriented analysis:

◉ Focus: Structured analysis focuses mainly on the processes and procedures of the system; object-oriented analysis focuses on data structure and the real-world objects that are important.
◉ Methodology: Structured analysis uses the System Development Life Cycle (SDLC) methodology for planning, analyzing, designing, implementing, and supporting an information system; object-oriented analysis uses an incremental or iterative methodology to refine and extend the design.
◉ Suitability: Structured analysis is suitable for well-defined projects with stable user requirements; object-oriented analysis is suitable for large projects with changing user requirements.
◉ Risk and reusability: With structured analysis, risk is high and reusability is low; with object-oriented analysis, risk is low and reusability is high.
◉ Artifacts: Structured analysis structures requirements with DFDs (Data Flow Diagrams), Structured English, ER (Entity Relationship) diagrams, CFDs (Control Flow Diagrams), a Data Dictionary, decision tables/trees, and state transition diagrams; object-oriented requirement engineering includes the use case model (use cases, flow of events, activity diagrams), the object model (classes and class relations, object interaction, object-to-ER mapping), statechart diagrams, and deployment diagrams.
◉ Adoption: Structured analysis is the older technique and is usually not preferred; object-oriented analysis is newer and is mostly preferred.

Source: geeksforgeeks.org

Thursday, May 26, 2022

Unified Auditing for Top-Level Statements in the Oracle Database

Auditing a database undoubtedly has its benefits, but for the DBA it can cause serious headaches in terms of managing space and performance. This is especially true in the cloud, where security and auditing are even more important than on premises.

Did you know that there are 239 available audit actions in 19c? select * from AUDIT_ACTIONS; shows them all.


A new database feature introduced in 19c and 21c, called “auditing top-level statements“, can help us minimize the issue with space and performance (caused by auditing), by ignoring SQL statements that run from within PL/SQL procedures, triggers or functions, as they may be less relevant for auditing purposes.

Inside the Oracle database, a top-level statement is a statement that is executed directly by a user, not a statement that is run from within a PL/SQL procedure.
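To illustrate the distinction, here is a hedged sketch (the emp table and the raise_sal procedure are hypothetical, not from the original post):

```sql
-- Issued directly by a user: a top-level statement
UPDATE emp SET sal = sal * 1.1;

-- Calling this procedure is top-level, but the UPDATE it runs internally is
-- recursive SQL, so an ONLY TOPLEVEL policy records the call and skips the UPDATE
CREATE OR REPLACE PROCEDURE raise_sal AS
BEGIN
  UPDATE emp SET sal = sal * 1.1;
END;
/
```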

Top-level statements from the user SYS can also be audited. The unified audit trail can grow immensely if all statements are audited, especially when a large number of audit trail records are generated for a single statement covered by the unified audit policy. By auditing only top-level statements, we can reduce the auditing of recursive SQL statements. An example of this scenario is auditing DBMS_STATS.GATHER_DATABASE_STATS, which can generate over 200,000 individual audit records.

The unified audit records are written to a table in the AUDSYS schema called AUD$UNIFIED. Access to the AUDSYS.AUD$UNIFIED table is provided via the view SYS.UNIFIED_AUDIT_TRAIL. Here is the size of AUD$UNIFIED in an autonomous database where I have not configured anything beyond the out-of-the-box auditing. The auditing options enabled by default in ADB are logon, alter user, insert, update and change password:
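One way to check that size yourself, assuming you have access to DBA_SEGMENTS (a sketch, not a query from the original post):

```sql
SELECT ROUND(bytes / 1024 / 1024, 2) AS size_mb
FROM   dba_segments
WHERE  owner = 'AUDSYS'
AND    segment_name = 'AUD$UNIFIED';
```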


Note that there are two components to the Audit Trail: (1) the AUDSYS schema in the database and (2) the spillover files on disk. More details in Receiving ORA-55940: An error occurred during execution of ktliIngestExternData by Mike Dietrich.

The X$UNIFIED_AUDIT_TRAIL table is owned by SYS, it is not in the SYSAUD schema! Check Unified Auditing – some insights by Thomas Saviour and X$UNIFIED_AUDIT_TRAIL is slow by Magnus Johansson for more details.

Now, configuring a Unified Audit Policy to capture only top-level statements is simple:
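The screenshot is not reproduced here; a minimal sketch of such a policy (the policy name and the chosen actions are hypothetical) looks like:

```sql
CREATE AUDIT POLICY toplevel_ddl_pol
  ACTIONS CREATE TABLE, DROP TABLE
  ONLY TOPLEVEL;

AUDIT POLICY toplevel_ddl_pol;
```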


The ONLY TOPLEVEL clause in the CREATE AUDIT POLICY statement enables you to audit only the SQL statements that are directly issued by an end user by honoring the audit configuration in the audit policy.

In my autonomous database, I have decided to top-level audit all actions from SYS and ADMIN:
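A hedged reconstruction of that configuration (the policy name is hypothetical):

```sql
CREATE AUDIT POLICY all_actions_toplevel_pol
  ACTIONS ALL
  ONLY TOPLEVEL;

AUDIT POLICY all_actions_toplevel_pol BY SYS, ADMIN;
```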



It is important to understand that the ONLY TOPLEVEL clause has no impact on the output for an individual unified audit trail record. The only effect that ONLY TOPLEVEL has on a policy is to limit the number of records generated for the given unified audit policy.

Source: juliandontcheff.wordpress.com

Wednesday, May 25, 2022

Deeper Analysis from Data Science Model Deployment Logs

Logs are a powerful tool for understanding the behavior of any service or application. In ML, post-deployment logs are essential for tracking all interactions with a published ML model, in order to understand who is trying to invoke the model and which predictions the model made.

This article shows you how to collect the logs generated by a Data Science Model Deployment, copy them to an Autonomous Data Warehouse, and analyze them using Oracle Analytics Cloud.

At the end of this article you will have deployed the architecture below:

Architecture for Deeper Model Deployment Logs Analysis

To achieve the architecture above, you first need to train and deploy a ML model. For that you will use an OCI Data Science notebook. If you are unsure how to create an OCI Data Science notebook, take a look at this documentation.

Model Deployment


OCI Data Science covers the complete machine learning model life cycle, including model deployment. OCI Data Science Model Deployment is fully managed OCI infrastructure for putting your model in production, exposing it as a web HTTP endpoint that serves predictions in real time.
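Invoking such an endpoint means sending an OCI-signed HTTP request with a JSON payload. A sketch of what that looks like (the endpoint URL and feature values are hypothetical, and the signed call is left commented out because it requires your OCI credentials):

```python
import json

# Hypothetical payload for a model that expects a list of feature vectors
payload = {"data": [[5.1, 3.5, 1.4, 0.2]]}
body = json.dumps(payload)

# The actual call would sign the request with your OCI credentials, e.g.:
# import oci, requests
# signer = oci.signer.Signer(
#     tenancy="...", user="...", fingerprint="...",
#     private_key_file_location="~/.oci/oci_api_key.pem")
# resp = requests.post("https://<model-deployment-endpoint>/predict",
#                      data=body, auth=signer)
print(body)
```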

OCI Data Science Model Deployment Flow

You will use a pre-prepared notebook to complete all the steps detailed in the flow above. This notebook is in a GitHub repository and will be cloned to your notebook session using the native Git interface. To do that, open your notebook session, click Git, then select the option Clone Repository.

Git Clone Repository Menu

Input the following URL https://github.com/dougsouzars/Deeper-Model-Deployment-Logs-Analysis.git and click Clone. All the files in this repository will be copied to your environment, creating a new folder. Open the created folder and double-click the file Deeper Model Deployment Logs Analysis.ipynb to open it.

Ensure you are using the conda environment generalml_p37_cpu_v1. If you are not familiar with conda environments, take a look at this article and install the conda environment General Machine Learning for CPUs on Python 3.7. Then, in the menu, click Run, and click Run All Cells.

Run All Cells

The notebook should take about 15 minutes to complete the Model Deployment process. After that, the last cell runs a loop invoking the Model Deployment endpoint to generate logs for further analysis.

Loop for Model Deployment invocation
 

Autonomous Data Warehouse


Create a new Autonomous Data Warehouse if you do not have one yet. You can follow these steps to create a new Autonomous Data Warehouse instance.

Access your Autonomous Data Warehouse instance and click Database Actions.

ADW Database Actions
 
Log in with the admin user and the password you set when you created the database.

The Database Actions | Launchpad window is displayed.

Click SQL.

Open SQL Developer Web
 
Enter the following command:

soda create logs;

Go back to the Autonomous Data Warehouse detail page and click Service Console.

Open Service Console

In the new window, click Development, then click Copy under RESTful Services and SODA, and save the URL for later.

Copy RESTful Services and SODA

Go back to the Autonomous Data Warehouse detail page and click DB Connection.

Click Download Wallet, input a password and then click Download. 

Click Close.

Functions


You need to create a Function that will be responsible for copying the logs from OCI Logging to the Autonomous Data Warehouse.

After creating the Function Application, click Getting started and follow the Cloud Shell Setup tutorial to create your function, up to step 7.

Function Cloud Shell Setup

For steps 8 and 9 run the following commands:

fn init --runtime python send-logs

cd send-logs

This creates a new folder called send-logs containing 3 files: func.py, func.yaml and requirements.txt. Replace the content of the func.py file with:

import io
import json
import logging
import requests

from fdk import response

# soda_insert uses the Autonomous Database REST API to insert JSON documents
def soda_insert(ordsbaseurl, dbschema, dbuser, dbpwd, collection, logentries):
    auth = (dbuser, dbpwd)
    sodaurl = ordsbaseurl + dbschema + '/soda/latest/'
    bulkinserturl = sodaurl + 'custom-actions/insert/' + collection + "/"
    headers = {'Content-Type': 'application/json'}
    resp = requests.post(bulkinserturl, auth=auth, headers=headers, data=json.dumps(logentries))
    return resp.json()

def handler(ctx, data: io.BytesIO = None):
    logger = logging.getLogger()
    logger.info("function start")

    # Retrieving the Function configuration values
    try:
        cfg = dict(ctx.Config())
        ordsbaseurl = cfg["ordsbaseurl"]
        dbschema = cfg["dbschema"]
        dbuser = cfg["dbuser"]
        dbpwd = cfg["dbpwd"]
        collection = cfg["collection"]
    except Exception:
        logger.error('Missing configuration keys: ordsbaseurl, dbschema, dbuser, dbpwd and collection')
        raise

    # Retrieving the log entries from Service Connector Hub as part of the Function payload
    try:
        logentries = json.loads(data.getvalue())
        if not isinstance(logentries, list):
            raise ValueError
    except Exception:
        logger.error('Invalid payload')
        raise

    # The log entries are a list of dictionaries. We can iterate over the list of
    # entries and process them. For example, we put the id of each log entry in
    # the function execution log.
    logger.info("Processing the following LogIds:")
    for logentry in logentries:
        logger.info(logentry["oracle"]["logid"])

    # Now insert the log entries into the JSON database
    resp = soda_insert(ordsbaseurl, dbschema, dbuser, dbpwd, collection, logentries)
    logger.info(resp)
    if "items" in resp:
        logger.info("Logs successfully inserted")
        logger.info(json.dumps(resp))
    else:
        raise Exception("Error while inserting logs into the database: " + json.dumps(resp))

    # The function is done; return an empty response
    logger.info("function end")
    return response.Response(
        ctx,
        response_data="",
        headers={"Content-Type": "application/json"}
    )

You also need to update requirements.txt to include the requests library, so the content of the requirements.txt file should be:

fdk>=0.1.40
requests
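To sanity-check the parsing logic locally before deploying, you can feed code shaped like the handler a hand-made payload matching what Service Connector Hub delivers. The field names below follow the logentry["oracle"]["logid"] access in func.py; the sample values are hypothetical:

```python
import json

# Hypothetical sample of the JSON array Service Connector Hub posts to the function
sample = json.dumps([
    {"oracle": {"logid": "ocid1.log.oc1..aaaa"}, "data": {"message": "prediction served"}},
    {"oracle": {"logid": "ocid1.log.oc1..bbbb"}, "data": {"message": "prediction served"}},
])

logentries = json.loads(sample)
assert isinstance(logentries, list)  # same validation func.py performs
logids = [entry["oracle"]["logid"] for entry in logentries]
print(logids)  # the ids func.py would write to its execution log
```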

When you have concluded the steps above, execute step 10.

After completing those steps, click Configuration and set your variables.

Functions Parameters Configuration

Input your values for each variable listed:

◉ ordsbaseurl - your RESTful URL you copied previously
◉ dbschema - your autonomous admin schema
◉ dbuser - your autonomous admin user
◉ dbpwd - your autonomous admin password
◉ collection - your collection called logs

After completing the input, you should see a screen similar to the one displayed below:

Filled Functions Configuration

Service Connector Hub


You need to create a Service Connector Hub. Open the navigation menu and click Analytics & AI. Under Messaging, click Service Connector Hub.

Choose the Compartment where you want to create the service connector.

Click Create Service Connector.

Type a Connector Name such as "Send Logs to My Autonomous Database." Avoid entering confidential information.

Select the Resource Compartment where you want to store the new service connector.

Under Configure Service Connector, select the source and target services so the log data is moved to your function:

- Source: Logging
- Target: Functions

Under Configure source connection, select a Compartment Name, Log Group, and Log.

Under Configure target connection, select the Function Application and Function corresponding to the function you created.

Click Create.

Example Service Connector Hub configuration

Once everything is done, you can check whether the logs are being copied by looking directly at the Service Connector detail page.

Service Connector Hub Metrics

Oracle Analytics Cloud


As the logs are copied to the Autonomous Data Warehouse, you can create analytics dashboards to understand Model Deployment behavior. In this case you will analyze the prediction logs and the dates they were generated.

In the OCI Home window, open the navigation menu and click Analytics & AI. Under Analytics, click Analytics Cloud.

Choose the Compartment where you want to create the Analytics Cloud.

If you already have an instance, click the instance name. If not, click Create Instance, fill in your information and click Create.

Access your instance by clicking the instance name, then click Analytics Home Page.

At the Home Page, in the top right corner, click the three-dotted button, then click Import Workbook/Flow.

Three dotted action menu button

Click Select File, then select the Deeper Model Deployment Logs Analysis.dva file.

In the top menu, click Data, then hover over the adwa connection and click its three-dotted button. Click Inspect.

Inspect Connection

Input your Autonomous Data Warehouse connection details and upload the wallet you downloaded previously. Click Save.


In the top menu, click Data, then click the Workbook to open it.

Open the workbook

You should see a dashboard like the one below with the prediction analysis.

OAC Final Dashboard

Source: oracle.com

Monday, May 23, 2022

SQL Set Operator Enhancements (EXCEPT, EXCEPT ALL, MINUS ALL, INTERSECT ALL) in Oracle Database 21c

Oracle 21c includes a number of enhancements to SQL set operators including EXCEPT, EXCEPT ALL, MINUS ALL and INTERSECT ALL.


In previous releases the ALL keyword was added to UNION to prevent the removal of duplicate values, and thereby improve performance. In Oracle 21c the ALL keyword can also be added to the MINUS and INTERSECT operators, so their operations are on a per-row basis, rather than a distinct-row basis. Oracle 21c also introduces EXCEPT and EXCEPT ALL operators, which are functionally equivalent to MINUS and MINUS ALL respectively.

◉ Setup

The examples in this article require the following tables.

drop table t1 purge;
drop table t2 purge;
drop table t3 purge;

create table t1 (
  id           number(2) constraint t1_pk primary key,
  record_type  number(2),
  description  varchar2(12)
);

insert into t1 values (1, 10,'TEN');
insert into t1 values (2, 10,'TEN');
insert into t1 values (3, 20,'TWENTY');
insert into t1 values (4, 20,'TWENTY');
insert into t1 values (5, 30,'THIRTY');
insert into t1 values (6, 30,'THIRTY');
commit;

create table t2 (
  id           number(2) constraint t2_pk primary key,
  record_type  number(2),
  description  varchar2(12)
);

insert into t2 values (1, 20,'TWENTY');
insert into t2 values (2, 30,'THIRTY');
commit;

create table t3 (
  id           number(2) constraint t3_pk primary key,
  record_type  number(2),
  description  varchar2(12)
);

insert into t3 values (1, 20,'TWENTY');
insert into t3 values (2, 20,'TWENTY');
insert into t3 values (3, 30,'THIRTY');
insert into t3 values (4, 30,'THIRTY');
commit;

The T1 table has two copies of record types 10, 20 and 30. The T2 table has one copy of record types 20 and 30. The T3 table has two copies of record types 20 and 30.

select record_type, description from t1;

RECORD_TYPE   DESCRIPTION
----------- ------------
         10     TEN
         10     TEN
         20     TWENTY
         20     TWENTY
         30     THIRTY
         30     THIRTY

SQL>

select record_type, description from t2;

RECORD_TYPE   DESCRIPTION
----------- ------------
         20     TWENTY
         30     THIRTY

SQL>

select record_type, description from t3;

RECORD_TYPE   DESCRIPTION
----------- ------------
         20     TWENTY
         20     TWENTY
         30     THIRTY
         30     THIRTY

SQL>

◉ MINUS ALL

The MINUS set operator returns all distinct rows selected by the first query but not the second. This is functionally equivalent to the ANSI set operator EXCEPT DISTINCT. The MINUS ALL set operator doesn't remove duplicate rows.

In the following query we select all rows from T1, minus those from T2. Since the initial set now has duplicate rows removed, we only see a single copy of record type 10, and record types 20 and 30 have been removed entirely.

select record_type, description
from   t1
minus
select record_type, description
from   t2
order by 1;

RECORD_TYPE   DESCRIPTION
----------- ------------
         10     TEN

SQL>

If we switch the MINUS to a MINUS ALL, the duplicates are no longer removed from the initial set, so we see two copies of record type 10, and the MINUS ALL has only removed one copy of record types 20 and 30 from the result set.

select record_type, description
from   t1
minus all
select record_type, description
from   t2
order by 1;

RECORD_TYPE   DESCRIPTION
----------- ------------
         10     TEN
         10     TEN
         20     TWENTY
         30     THIRTY

SQL>

This time we select all the rows from T1, minus the rows from T3. Now we only see the duplicates of record type 10, since both copies of record types 20 and 30 are removed.

select record_type, description
from   t1
minus all
select record_type, description
from   t3
order by 1;

RECORD_TYPE   DESCRIPTION
----------- ------------
         10     TEN
         10     TEN

SQL>
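As a mental model (not part of the original article), the difference between MINUS and MINUS ALL can be sketched in Python: `set` difference gives the DISTINCT semantics, while `collections.Counter` difference gives the multiset (ALL) semantics. The `t1`, `t2` and `t3` lists below are hypothetical stand-ins for the RECORD_TYPE values in the tables above.

```python
from collections import Counter

# Hypothetical in-memory stand-ins for the RECORD_TYPE columns of T1, T2 and T3.
t1 = [10, 10, 20, 20, 30, 30]
t2 = [20, 30]
t3 = [20, 20, 30, 30]

# MINUS: distinct rows from the first set that are absent from the second.
minus = sorted(set(t1) - set(t2))  # [10]

# MINUS ALL: multiset difference -- each row in the second set cancels
# at most one matching row in the first set, so duplicates can survive.
minus_all = sorted((Counter(t1) - Counter(t2)).elements())  # [10, 10, 20, 30]

# Against T3, both copies of record types 20 and 30 are cancelled.
minus_all_t3 = sorted((Counter(t1) - Counter(t3)).elements())  # [10, 10]
```

The results match the three SQL listings above: one 10 for MINUS, and the surviving duplicates for MINUS ALL.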

◉ INTERSECT ALL

The INTERSECT set operator returns all distinct rows selected by both queries. That means only those rows common to both queries will be present in the final result set. The INTERSECT ALL set operator doesn't remove duplicate rows.

In the following query we use INTERSECT to select all rows from T1 that are present in T2. Notice there is one row each for record types 20 and 30, as both these appear in the result sets for their respective queries.

select record_type, description
from   t1
intersect
select record_type, description
from   t2
order by 1;

RECORD_TYPE   DESCRIPTION
----------- ------------
         20     TWENTY
         30     THIRTY

SQL>

We repeat with INTERSECT ALL and get the same result, as the query after the INTERSECT ALL only contains a single copy of record types 20 and 30, so there is only a single intersect for each.

select record_type, description
from   t1
intersect all
select record_type, description
from   t2
order by 1;

RECORD_TYPE   DESCRIPTION
----------- ------------
         20     TWENTY
         30     THIRTY

SQL>

This time we use T3 for the second query, so we have duplicates on both sides of the INTERSECT ALL operation. We now see duplicates due to multiple matches on each side.

select record_type, description
from   t1
intersect all
select record_type, description
from   t3
order by 1;

RECORD_TYPE   DESCRIPTION
----------- ------------
         20     TWENTY
         20     TWENTY
         30     THIRTY
         30     THIRTY

SQL>

If we switch back to INTERSECT, the duplicates are removed again.

select record_type, description
from   t1
intersect
select record_type, description
from   t3
order by 1;

RECORD_TYPE   DESCRIPTION
----------- ------------
         20     TWENTY
         30     THIRTY

SQL>
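The same mental model works for INTERSECT (again, a sketch rather than part of the original article): `set` intersection gives DISTINCT semantics, while `collections.Counter` intersection keeps the minimum count of each value from the two sides, which is exactly why duplicates only appear when both queries return them. The lists are the same hypothetical stand-ins for the RECORD_TYPE values.

```python
from collections import Counter

# Hypothetical in-memory stand-ins for the RECORD_TYPE columns of T1, T2 and T3.
t1 = [10, 10, 20, 20, 30, 30]
t2 = [20, 30]
t3 = [20, 20, 30, 30]

# INTERSECT: distinct rows present in both sets.
intersect = sorted(set(t1) & set(t2))  # [20, 30]

# INTERSECT ALL against T2: each side has the matches, but T2 holds only
# one copy of each, so only one intersect per value.
intersect_all_t2 = sorted((Counter(t1) & Counter(t2)).elements())  # [20, 30]

# INTERSECT ALL against T3: both sides hold two copies of 20 and 30,
# so the duplicates survive.
intersect_all_t3 = sorted((Counter(t1) & Counter(t3)).elements())  # [20, 20, 30, 30]
```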

◉ EXCEPT

The EXCEPT set operator returns all distinct rows selected by the first query but not the second. This is functionally equivalent to the ANSI set operator EXCEPT DISTINCT and the MINUS operator.

In the example below, the first select would return record types 10, 20 and 30, but record types 20 and 30 are removed because they are returned by the second select. This leaves a single row for record type 10.

select record_type, description
from   t1
except
select record_type, description
from   t2
order by 1;

RECORD_TYPE   DESCRIPTION
----------- ------------
         10     TEN

SQL>

◉ EXCEPT ALL

The EXCEPT ALL set operator returns all rows selected by the first query but not the second. This is functionally equivalent to the MINUS ALL operator.

In the following query we use EXCEPT to select all rows from T1 that are not present in T2. Since the initial set now has duplicate rows removed, we only see a single copy of record type 10, and record types 20 and 30 have been removed entirely.

select record_type, description
from   t1
except
select record_type, description
from   t2
order by 1;

RECORD_TYPE   DESCRIPTION
----------- ------------
         10     TEN

SQL>

If we switch the EXCEPT to an EXCEPT ALL, the duplicates are no longer removed from the initial set, so we see two copies of record type 10, and the EXCEPT ALL has only removed one copy of record types 20 and 30 from the result set.

select record_type, description
from   t1
except all
select record_type, description
from   t2
order by 1;

RECORD_TYPE   DESCRIPTION
----------- ------------
         10     TEN
         10     TEN
         20     TWENTY
         30     THIRTY

SQL>

This time we select all the rows from T1, except the rows from T3. Now we only see the duplicates of record type 10, since both copies of record types 20 and 30 are removed.

select record_type, description
from   t1
except all
select record_type, description
from   t3
order by 1;

RECORD_TYPE   DESCRIPTION
----------- ------------
         10     TEN
         10     TEN

SQL>

◉ Explicit DISTINCT

In previous releases the DISTINCT keyword could not be added explicitly to the UNION, MINUS and INTERSECT set operators; its presence was implied. In Oracle 21c the DISTINCT keyword can be added explicitly to these operators, as well as to the new EXCEPT set operator.

select record_type, description
from   t1
union distinct
select record_type, description
from   t2
order by 1;

RECORD_TYPE   DESCRIPTION
----------- ------------
         10     TEN
         20     TWENTY
         30     THIRTY

SQL>

select record_type, description
from   t1
minus distinct
select record_type, description
from   t2
order by 1;

RECORD_TYPE   DESCRIPTION
----------- ------------
         10     TEN

SQL>

select record_type, description
from   t1
intersect distinct
select record_type, description
from   t2
order by 1;

RECORD_TYPE   DESCRIPTION
----------- ------------
         20     TWENTY
         30     THIRTY

SQL>

select record_type, description
from   t1
except distinct
select record_type, description
from   t2
order by 1;

RECORD_TYPE   DESCRIPTION
----------- ------------
         10     TEN

SQL>
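Since the explicit DISTINCT keyword only makes the default behaviour visible, the four queries above all reduce to plain set operations. As a sketch (not from the original article, using the same hypothetical RECORD_TYPE lists), Python's built-in set operators reproduce every DISTINCT variant:

```python
# Hypothetical stand-ins for the RECORD_TYPE columns of T1 and T2.
t1 = [10, 10, 20, 20, 30, 30]
t2 = [20, 30]

# UNION DISTINCT == UNION: combine both sets, duplicates removed.
union_distinct = sorted(set(t1) | set(t2))      # [10, 20, 30]

# MINUS DISTINCT == MINUS (and EXCEPT DISTINCT == EXCEPT).
minus_distinct = sorted(set(t1) - set(t2))      # [10]

# INTERSECT DISTINCT == INTERSECT.
intersect_distinct = sorted(set(t1) & set(t2))  # [20, 30]
```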

◉ Query Transformations

The EXCEPT and EXCEPT ALL set operators are implemented as query transformations. When we issue them, they are transformed to MINUS and MINUS ALL respectively.

In all cases, a set operator with an explicit DISTINCT keyword is transformed to the equivalent statement without the DISTINCT keyword.

Here are the transformations, visible in a 10053 trace file.

EXCEPT             -> MINUS
EXCEPT ALL         -> MINUS ALL
EXCEPT DISTINCT    -> MINUS
UNION DISTINCT     -> UNION
MINUS DISTINCT     -> MINUS
INTERSECT DISTINCT -> INTERSECT