Wednesday, May 31, 2023

Unlimited data-driven collaboration with Data Sharing of Oracle Autonomous Database

Data sharing means making data available to other users – inside or outside your organization – to facilitate collaboration, improve business processes, expand business opportunities, or simply make data available across siloed systems. Data sharing is as old as computer technology and an essential component of any IT architecture that spans more than a single computer or system.

The traditional way to share data


In the past, data sharing has primarily relied on traditional methods such as FTP, email, and ETL tools. While these approaches have served their purpose, they come with various limitations. One major concern is the lack of scalability, making it difficult for organizations to manage growing volumes of data and recipients as they expand.

Maintaining the infrastructure for these data sharing methods can also be challenging, as it often involves manually configuring and updating servers, security protocols, and user permissions. This maintenance work can become quite labor-intensive and complex. Furthermore, these methods are often slow and inefficient, especially when dealing with large datasets.

Another issue with traditional data sharing methods is the limited support for collaboration and real-time data access. This slows productivity, as team members may need to wait for the latest data to be sent or retrieved instead of accessing it directly. Lastly, security concerns are always prevalent, as these older methods may not have the robust security features needed to protect sensitive information from unauthorized access.

The modern way to share data


Modern data sharing solutions offer a range of features that address the limitations of traditional methods. These solutions enable data sharing without duplicating or propagating data to all recipients, significantly reducing storage and bandwidth requirements. They also provide centralized data governance and management, ensuring consistency across all users. To maintain security, modern data sharing methods grant secure access centrally without directly exposing the data source system, safeguarding sensitive information. Moreover, these solutions are designed to scale with massive data volumes and support platform-independent, open APIs, promoting interoperability and flexibility within diverse technological ecosystems.


Data Sharing in Autonomous Database provides both an open way of sharing data with any recipient, Oracle or non-Oracle, and an optimized Oracle-to-Oracle mechanism. Oracle Autonomous Database offers data sharing capabilities that address all the requirements of modern data sharing architectures:

◉ Delta Sharing, a modern open data sharing protocol introduced by Databricks, enables secure and efficient data sharing across organizations. It provides a unified, high-performance, and cost-effective solution for sharing large datasets without the need for data duplication or complex ETL processes. The protocol is designed to work with any data storage and processing platform (language, framework, BI tool). By leveraging this open protocol, organizations can overcome the challenges of traditional data sharing methods.
 
◉ Cloud Links are a cutting-edge data sharing solution native to Oracle Autonomous Database. Leveraging Oracle Cloud Infrastructure and its metadata, Cloud Links enable secure and efficient data sharing between single databases or groups of databases, for example, at the compartment level. They provide automatic data discovery and notification, and instantaneous data access. Cloud Links offer a streamlined, high-performance, and cost-effective approach for sharing data among multiple Autonomous Databases without the need for data duplication or complex ETL processes.

Sharing data with ADB, using the Delta Sharing protocol


Oracle Autonomous Database supports the Delta Sharing protocol as a Data Provider and a Data Recipient, enabling secure and seamless data exchange with Oracle and external non-Oracle systems like PowerBI, Spark, Pandas, and others. Using Delta Sharing, Autonomous Database users can easily share data across various platforms and applications.

Whenever you need to share data with external systems - PowerBI, Spark, Pandas, and others - there’s no need to push or duplicate data; with Delta Sharing, a single dataset can be shared and simultaneously consumed by a variety of Delta Sharing clients (recipients), simplifying cross-environment integration and collaboration.

Conversely, when external systems, like Databricks, share data with Autonomous Database, you can consume data from these platforms, allowing organizations to combine their data assets with external ones.

With Delta Sharing, Autonomous Database enhances collaboration efficiency by breaking down barriers between platforms, allowing them to combine datasets without data duplication. See for yourself how easy it is to share data with anyone using Oracle Autonomous Database Data Studio.

Sharing data with ADB, using Cloud Links


Cloud Links, a cloud-native data sharing solution designed for secure and efficient data exchange between Autonomous Databases, offers performance, manageability, and usability optimizations. Leveraging the Oracle Cloud ecosystem and its metadata, data sharing between Autonomous Databases is optimized with automatic data discovery, agile and flexible levels of data sharing, and instantaneous data access.

Data Sharing using Cloud Links is optimized for Autonomous Database and provides the most comprehensive data collaboration between Autonomous Databases within the Oracle Cloud ecosystem.

Conclusion

Modern data sharing is changing how organizations share and work with data assets. Data Sharing in Oracle Autonomous Database addresses older methods' problems and offers secure, efficient, and easy-to-scale data sharing.

By supporting both the open standard Delta Sharing and optimized Oracle-to-Oracle mechanisms, Oracle Autonomous Database makes it easy for users to share data with stakeholders in a secure and governed way. Oracle Data Sharing improves teamwork and collaboration, both inside and outside your organization, allowing you to use valuable data assets for faster decision-making and better efficiency.

Source: oracle.com

Monday, May 29, 2023

MAX_COLUMNS : Increase the Maximum Number of Columns for a Table (Wide Tables) in Oracle Database 23c


Oracle 23c introduced the MAX_COLUMNS initialization parameter, which allows us to have up to 4096 columns in a table. This is sometimes described as wide tables.

The Problem


By default, the maximum number of columns allowed for a table is 1000. The following code creates a table called T1 with 1000 columns named "COLn", where "n" is a number from 1 to 1000.

conn testuser1/testuser1@//localhost:1521/freepdb1

declare
  l_col_count number := 1000;
  l_str       clob;
begin
  execute immediate 'drop table if exists t1 purge';
  
  l_str := 'create table t1 (';
  for i in 1 .. l_col_count loop
    l_str := l_str || 'col' || to_char(i) || ' number, ';
  end loop;
  l_str := substr(l_str, 1, length(l_str)-2);
  l_str := l_str || ')';
  
  execute immediate l_str;
end;
/

desc t1

SQL> desc t1
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 COL1                                               NUMBER
 COL2                                               NUMBER
 COL3                                               NUMBER
... edited for brevity ...
 COL998                                             NUMBER
 COL999                                             NUMBER
 COL1000                                            NUMBER

SQL>

This time we will try 1001 columns.

declare
  l_col_count number := 1001;
  l_str       clob;
begin
  execute immediate 'drop table if exists t1 purge';
  
  l_str := 'create table t1 (';
  for i in 1 .. l_col_count loop
    l_str := l_str || 'col' || to_char(i) || ' number, ';
  end loop;
  l_str := substr(l_str, 1, length(l_str)-2);
  l_str := l_str || ')';
  
  execute immediate l_str;
end;
/

declare
*
ERROR at line 1:
ORA-01792: maximum number of columns in a table or view is 1000
ORA-06512: at line 14


SQL>

The Solution : MAX_COLUMNS


The maximum number of columns is controlled by the MAX_COLUMNS initialization parameter, which has a default value of "STANDARD".

conn sys/SysPassword1@//localhost:1521/freepdb1 as sysdba

show parameters max_columns

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_columns                          string      STANDARD
SQL>

We set the MAX_COLUMNS value to "EXTENDED" in the PDB and restart the PDB.

conn sys/SysPassword1@//localhost:1521/freepdb1 as sysdba

alter system set max_columns=EXTENDED scope=spfile;
shutdown immediate;
startup;

We can now create a table with up to 4096 columns.

conn testuser1/testuser1@//localhost:1521/freepdb1

declare
  l_col_count number := 4096;
  l_str       clob;
begin
  execute immediate 'drop table if exists t1 purge';
  
  l_str := 'create table t1 (';
  for i in 1 .. l_col_count loop
    l_str := l_str || 'col' || to_char(i) || ' number, ';
  end loop;
  l_str := substr(l_str, 1, length(l_str)-2);
  l_str := l_str || ')';
  
  execute immediate l_str;
end;
/

SQL> desc t1
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 COL1                                               NUMBER
 COL2                                               NUMBER
 COL3                                               NUMBER
... edited for brevity ...
 COL4094                                            NUMBER
 COL4095                                            NUMBER
 COL4096                                            NUMBER

SQL>

We can't revert the setting of MAX_COLUMNS while we have one or more tables with more than 1000 columns.

conn sys/SysPassword1@//localhost:1521/freepdb1 as sysdba

alter system set max_columns=STANDARD scope=spfile;

alter system set max_columns=STANDARD scope=spfile
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-60471: max_columns can not be set to STANDARD as there are one or more objects with more than
1000 columns

SQL>

If we drop the table, we can reset the value.

conn sys/SysPassword1@//localhost:1521/freepdb1 as sysdba

drop table if exists testuser1.t1 purge;

alter system set max_columns=STANDARD scope=spfile;
shutdown immediate;
startup;

Considerations


Some things to consider about this functionality.

◉ Most of the time 1000 columns is more than enough, but occasionally people have use cases where more would be preferable.
◉ The MAX_COLUMNS parameter can't be set at session level, only system level, but it can be limited to a specific PDB.
◉ The MAX_COLUMNS parameter must be the same on all instances in a RAC cluster.
◉ Having a large number of columns is likely to result in row chaining, even on a clean insert, and increased row migration, depending on the lifecycle of a row.
◉ The "scope=memory" option has been disallowed to force a restart so every subsystem sees the new setting consistently. Thanks to Roger MacNicol for pointing this out.
◉ We must use a compatible client (23c or above) to use this functionality.
◉ The limit of 4096 columns includes virtual columns, as the sketch after this list illustrates.
◉ When using wide tables with HCC, you must use HCC Archive Low compression, rather than the default Query High.
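
As a quick illustration of the virtual column point, here is a minimal sketch using a small fresh table (the T1 table from earlier was dropped above). Virtual columns appear in the USER_TAB_COLS view, so we can confirm they count toward the total.

drop table if exists t1 purge;

create table t1 (col1 number, col2 number);

-- Add a virtual column, which counts toward the column limit.
alter table t1 add (col_sum number generated always as (col1 + col2) virtual);

-- The total number of columns the dictionary sees includes the virtual column.
select count(*) as total_cols
from   user_tab_cols
where  table_name = 'T1';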

Source: oracle-base.com

Friday, May 26, 2023

Full Text Indexing using Oracle Text


Oracle Text, previously known as interMedia Text and ConText, is an extensive full text indexing technology allowing you to efficiently query free text and produce document classification applications. In this article I'll only scratch the surface of this very complex feature.

◉ Setup


The examples in this article require access to the CTX_DDL package, which is granted as follows.

GRANT EXECUTE ON CTX_DDL TO <username>;

◉ CONTEXT Indexes


The CONTEXT index type is used to index large amounts of text such as Word, PDF, XML, HTML or plain text documents. In this example we will store the data in a BLOB column, which allows us to store binary documents like Word and PDF as well as plain text. Using a CLOB is preferable if only plain text documents are being used.

First we build a sample schema to hold our data.

DROP TABLE my_docs;
DROP SEQUENCE my_docs_seq;
DROP PROCEDURE load_file_to_my_docs;

CREATE TABLE my_docs (
  id    NUMBER(10)     NOT NULL,
  name  VARCHAR2(200)  NOT NULL,
  doc   BLOB           NOT NULL
);

ALTER TABLE my_docs ADD (
  CONSTRAINT my_docs_pk PRIMARY KEY (id)
);

CREATE SEQUENCE my_docs_seq;

CREATE OR REPLACE DIRECTORY documents AS 'C:\work';

Next we load several files as follows.

CREATE OR REPLACE PROCEDURE load_file_to_my_docs (p_file_name  IN  my_docs.name%TYPE) AS
  v_bfile      BFILE;
  v_blob       BLOB;
BEGIN
  INSERT INTO my_docs (id, name, doc)
  VALUES (my_docs_seq.NEXTVAL, p_file_name, empty_blob())
  RETURN doc INTO v_blob;

  v_bfile := BFILENAME('DOCUMENTS', p_file_name);
  Dbms_Lob.Fileopen(v_bfile, Dbms_Lob.File_Readonly);
  Dbms_Lob.Loadfromfile(v_blob, v_bfile, Dbms_Lob.Getlength(v_bfile));
  Dbms_Lob.Fileclose(v_bfile);

  COMMIT;
END;
/

EXEC load_file_to_my_docs('FullTextIndexingUsingOracleText9i.doc');
EXEC load_file_to_my_docs('FullTextIndexingUsingOracleText9i.asp');
EXEC load_file_to_my_docs('XMLOverHTTP9i.asp');
EXEC load_file_to_my_docs('UNIXForDBAs.asp');
EXEC load_file_to_my_docs('emp_ws_access.sql');
EXEC load_file_to_my_docs('emp_ws_test.html');
EXEC load_file_to_my_docs('9ivsSS2000forPerformanceV22.pdf');

Next we create a CONTEXT type index on the doc column and gather table statistics.

CREATE INDEX my_docs_doc_idx ON my_docs(doc) INDEXTYPE IS CTXSYS.CONTEXT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_DOCS', cascade=>TRUE);

Finally we query the table, looking for documents with specific content.

SELECT SCORE(1) score, id, name
FROM   my_docs
WHERE  CONTAINS(doc, 'SQL Server', 1) > 0
ORDER BY SCORE(1) DESC;

     SCORE         ID NAME
---------- ---------- ------------------------------------------------
       100        127 9ivsSS2000forPerformanceV22.pdf

1 row selected.

Execution Plan
----------------------------------------------------------
   0       SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=2 Bytes=58)
   1    0   SORT (ORDER BY) (Cost=4 Card=2 Bytes=58)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'MY_DOCS' (Cost=2 Card=2 Bytes=58)
   3    2       DOMAIN INDEX OF 'MY_DOCS_DOC_IDX' (Cost=0)


SELECT SCORE(1) score, id, name
FROM   my_docs
WHERE  CONTAINS(doc, 'XML', 1) > 0
ORDER BY SCORE(1) DESC;

     SCORE         ID NAME
---------- ---------- ------------------------------------------------
        74        123 XMLOverHTTP9i.asp
         9        125 emp_ws_access.sql

2 rows selected.

Execution Plan
----------------------------------------------------------
   0       SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=2 Bytes=58)
   1    0   SORT (ORDER BY) (Cost=4 Card=2 Bytes=58)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'MY_DOCS' (Cost=2 Card=2 Bytes=58)
   3    2       DOMAIN INDEX OF 'MY_DOCS_DOC_IDX' (Cost=0)

◉ CTXCAT Indexes


The CTXCAT index type is best suited to smaller text fragments that must be indexed along with other relational data. In this example the data will be stored in a VARCHAR2 column.

First we create a schema to hold the data.

DROP TABLE my_items;
DROP SEQUENCE my_items_seq;
EXEC CTX_DDL.DROP_INDEX_SET('my_items_iset');

CREATE TABLE my_items (
  id           NUMBER(10)      NOT NULL,
  name         VARCHAR2(200)   NOT NULL,
  description  VARCHAR2(4000)  NOT NULL,
  price        NUMBER(7,2)     NOT NULL
);

ALTER TABLE my_items ADD (
  CONSTRAINT my_items_pk PRIMARY KEY (id)
);

CREATE SEQUENCE my_items_seq;

Next we populate the schema with some dummy data.

BEGIN
  FOR i IN 1 .. 1000 LOOP
    INSERT INTO my_items (id, name, description, price)
    VALUES (my_items_seq.NEXTVAL, 'Bike: '||i, 'Bike Description ('||i||')', i);
  END LOOP;

  FOR i IN 1 .. 1000 LOOP
    INSERT INTO my_items (id, name, description, price)
    VALUES (my_items_seq.NEXTVAL, 'Car: '||i, 'Car Description ('||i||')', i);
  END LOOP;

  FOR i IN 1 .. 1000 LOOP
    INSERT INTO my_items (id, name, description, price)
    VALUES (my_items_seq.NEXTVAL, 'House: '||i, 'House Description ('||i||')', i);
  END LOOP;

  COMMIT;
END;
/

Next we create a CTXCAT index on the DESCRIPTION and PRICE columns and gather table statistics. In order to create the index we must create an index-set with a sub-index for each column referenced by the CATSEARCH function.

EXEC CTX_DDL.CREATE_INDEX_SET('my_items_iset');
EXEC CTX_DDL.ADD_INDEX('my_items_iset','price');

CREATE INDEX my_items_name_idx ON my_items(description) INDEXTYPE IS CTXSYS.CTXCAT
PARAMETERS ('index set my_items_iset');

EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_ITEMS', cascade=>TRUE);

Finally we query the table, looking for items with a description that contains our specified words and an appropriate price.

SELECT id, price, name
FROM   my_items
WHERE  CATSEARCH(description, 'Bike', 'price BETWEEN 1 AND 5')> 0;

        ID      PRICE NAME
---------- ---------- ------------------------------------------------
         1          1 Bike: 1
         2          2 Bike: 2
         3          3 Bike: 3
         4          4 Bike: 4
         5          5 Bike: 5

5 rows selected.

Execution Plan
----------------------------------------------------------
   0       SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=150 Bytes=6000)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'MY_ITEMS' (Cost=2 Card=150 Bytes=6000)
   2    1     DOMAIN INDEX OF 'MY_ITEMS_NAME_IDX'


SELECT id, price, name
FROM   my_items
WHERE  CATSEARCH(description, 'Car', 'price BETWEEN 101 AND 105 ORDER BY price DESC')> 0;

        ID      PRICE NAME
---------- ---------- ------------------------------------------------
      1105        105 Car: 105
      1104        104 Car: 104
      1103        103 Car: 103
      1102        102 Car: 102
      1101        101 Car: 101

5 rows selected.

Execution Plan
----------------------------------------------------------
   0        SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=150 Bytes=6000)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'MY_ITEMS' (Cost=2 Card=150 Bytes=6000)
   2    1     DOMAIN INDEX OF 'MY_ITEMS_NAME_IDX'

Every column used to restrict the selection or order the output in the CATSEARCH function should have a sub-index within the index-set. The CTXCAT index type is transactional, so there is no need to synchronize the index.

◉ CTXRULE Indexes


The CTXRULE index type can be used to build document classification applications.

First we must define our document categories and store them, along with a suitable query for the MATCHES function.

DROP TABLE my_doc_categories;
DROP TABLE my_categories;
DROP SEQUENCE my_categories_seq;
DROP TABLE my_docs;
DROP SEQUENCE my_docs_seq;

CREATE TABLE my_categories (
  id        NUMBER(10)      NOT NULL,
  category  VARCHAR2(30)    NOT NULL,
  query     VARCHAR2(2000)  NOT NULL
);

ALTER TABLE my_categories ADD (
  CONSTRAINT my_categories_pk PRIMARY KEY (id)
);

CREATE SEQUENCE my_categories_seq;

INSERT INTO my_categories VALUES(my_categories_seq.NEXTVAL, 'Oracle', 'ABOUT(Oracle)');
INSERT INTO my_categories VALUES(my_categories_seq.NEXTVAL, 'SQL Server', 'ABOUT(SQL Server)');
INSERT INTO my_categories VALUES(my_categories_seq.NEXTVAL, 'UNIX', 'ABOUT(UNIX)');

Next we create a table to hold our documents.

CREATE TABLE my_docs (
  id    NUMBER(10)     NOT NULL,
  name  VARCHAR2(200)  NOT NULL,
  doc   CLOB           NOT NULL
);

ALTER TABLE my_docs ADD (
  CONSTRAINT my_docs_pk PRIMARY KEY (id)
);

CREATE SEQUENCE my_docs_seq;

Then we create an intersection table to resolve the many-to-many relationship between documents and categories.

CREATE TABLE my_doc_categories (
  my_doc_id       NUMBER(10)  NOT NULL,
  my_category_id  NUMBER(10)  NOT NULL
);

ALTER TABLE my_doc_categories ADD (
  CONSTRAINT my_doc_categories_pk PRIMARY KEY (my_doc_id, my_category_id)
);

Next we create a BEFORE INSERT trigger on the MY_DOCS table to automatically assign the documents to the relevant categories as they are being inserted. The MATCHES function is used to decide if the document matches any of our category queries. The resulting cursor is used to insert the matches into the intersect table.

CREATE OR REPLACE TRIGGER my_docs_trg
  BEFORE INSERT ON my_docs
  FOR EACH ROW
BEGIN
  FOR c1 IN (SELECT id
             FROM   my_categories
             WHERE  MATCHES(query, :new.doc)>0)
  LOOP
    BEGIN
      INSERT INTO my_doc_categories(my_doc_id, my_category_id)
      VALUES (:new.id, c1.id);
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/

Next we create the CTXRULE index to support the trigger. For completeness we also create a CONTEXT index on the document itself, although this is not involved in the category assignment process.

CREATE INDEX my_categories_query_idx ON my_categories(query) INDEXTYPE IS CTXSYS.CTXRULE;
CREATE INDEX my_docs_doc_idx ON my_docs(doc) INDEXTYPE IS CTXSYS.CONTEXT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_CATEGORIES', cascade=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_DOCS', cascade=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_DOC_CATEGORIES', cascade=>TRUE);

Finally we test the mechanism by inserting some rows and checking the classification.

INSERT INTO my_docs (id, name, doc)
VALUES (my_docs_seq.NEXTVAL, 'Oracle Document', 'This document contains the word Oracle!');

INSERT INTO my_docs (id, name, doc)
VALUES (my_docs_seq.NEXTVAL, 'SQL Server Document', 'This document contains the words SQL Server!');

INSERT INTO my_docs (id, name, doc)
VALUES (my_docs_seq.NEXTVAL, 'UNIX Document', 'This document contains the word UNIX!');

INSERT INTO my_docs (id, name, doc)
VALUES (my_docs_seq.NEXTVAL, 'Oracle UNIX Document', 'This document contains the word UNIX and the word Oracle!');


COLUMN name FORMAT A30;
SELECT a.name, b.category
FROM   my_docs a,
       my_categories b,
       my_doc_categories c
WHERE  c.my_doc_id      = a.id
AND    c.my_category_id = b.id;

NAME                           CATEGORY
------------------------------ ------------------------------
Oracle Document                Oracle
SQL Server Document            SQL Server
UNIX Document                  UNIX
Oracle UNIX Document           UNIX
Oracle UNIX Document           Oracle

5 rows selected.

The output shows that the documents have been assigned to the correct categories. Note, the "Oracle UNIX Document" document has been assigned to both the "Oracle" and "UNIX" categories.

◉ Index Maintenance


Not all Oracle Text indexes are automatically updated as records are added or deleted. To synchronize the index with the table we must call the CTX_DDL.SYNC_INDEX procedure.

SQL> EXEC CTX_DDL.SYNC_INDEX('my_docs_doc_idx');

Regular synchronizations of the index can be automated using the DBMS_JOB package. The following script is provided to make this task easier.

$ORACLE_HOME/ctx/sample/script/drjobdml.sql

It can be called from SQL*Plus whilst logged on as the index owner as follows.

SQL> @drjobdml.sql index-name interval-mins
SQL> @drjobdml.sql my_docs_doc_idx 60
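
Since DBMS_JOB is deprecated in recent releases, a DBMS_SCHEDULER job is an alternative way to automate the synchronization. Here is a minimal sketch, assuming an hourly interval is acceptable; the job name is arbitrary.

begin
  dbms_scheduler.create_job(
    job_name        => 'sync_my_docs_doc_idx',  -- arbitrary job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin ctx_ddl.sync_index(''my_docs_doc_idx''); end;',
    start_date      => systimestamp,
    repeat_interval => 'freq=hourly; interval=1',  -- synchronize every hour
    enabled         => true);
end;
/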

Regular synchronization of text indexes can cause fragmentation, which affects query performance. To correct this situation the index can be rebuilt or optimized. Index optimization can be performed in three basic modes (FAST, FULL or TOKEN). The FAST mode compacts fragmented rows but does not remove old data.

BEGIN
  CTX_DDL.OPTIMIZE_INDEX('my_docs_doc_idx','FAST');
END;
/

The FULL mode optimizes either the entire index or a portion of it, with old data removed.

BEGIN
  CTX_DDL.OPTIMIZE_INDEX('my_docs_doc_idx','FULL');
END;
/

The TOKEN mode performs a full optimization for a specific token.

BEGIN
  CTX_DDL.OPTIMIZE_INDEX('my_docs_doc_idx','TOKEN', token=>'Oracle');
END;
/

Source: oracle-base.com

Wednesday, May 24, 2023

XML, JSON and Oracle Text Search Index Enhancements in Oracle Database 23c


In Oracle 23c the CREATE SEARCH INDEX statement allows us to create search indexes on XML, JSON and text data, making the syntax consistent between them.

◉ XML Search Indexes


The XDB.XMLIndex index type was introduced in a previous release to allow us to index XML data.

drop table if exists t1 purge;

create table t1 (
  id    number,
  data  xmltype
);

create index t1_xmlindex_idx on t1 (data) indextype is xdb.xmlindex;

In Oracle 23c we can create search indexes for XML using the CREATE SEARCH INDEX ... FOR XML syntax. The full syntax is available here.

drop table if exists xml_tab purge;

create table xml_tab (
  id    number,
  data  xmltype
)
xmltype column data
store as transportable binary xml;

create search index xml_tab_search_idx on xml_tab (data) for XML
parameters (
  'search_on text'
);

The SEARCH_ON parameter must be set for XML search indexes.

◉ TEXT : Only text data is indexed for full-text search queries.
◉ VALUE(data_types) : Enables range-search for the specified data types. One or more of BINARY_DOUBLE, NUMBER, TIMESTAMP, VARCHAR2 as a comma-separated list.
◉ TEXT_VALUE(data_types) : A combination of support for full-text search queries and range-search for the specified data types.

The XMLTYPE column must be stored as transportable binary XML to build an XML search index. For other storage types we can create an Oracle Text search index instead.

drop table if exists xml_tab purge;

create table xml_tab (
  id    number,
  data  xmltype
);

create search index xml_tab_search_idx on xml_tab (data);

◉ JSON Search Indexes


The original syntax for JSON search indexes in Oracle 12.1 was rather ugly.

drop table if exists t1 purge;

create table t1 (
  id    number,
  data  clob,
  constraint t1_json_chk check (data is json)
);

create index t1_search_idx on t1 (data)
  indextype is ctxsys.context
  parameters ('section group ctxsys.json_section_group sync (on commit)');

In Oracle 12.2 the JSON search index syntax was simplified using the CREATE SEARCH INDEX ... FOR JSON syntax. The full syntax is available here.

drop index if exists t1_search_idx;

create search index t1_search_idx on t1 (data) for json;
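
With the search index in place we can run full-text style queries against the JSON documents. A minimal sketch, assuming some rows have been loaded into T1 and the index is synchronized; the JSON_TEXTCONTAINS operator requires a JSON search index on the column.

-- Find documents containing the word "apple" anywhere in the JSON.
select id
from   t1
where  json_textcontains(data, '$', 'apple');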

The SEARCH_ON parameter defaults to TEXT_VALUE for JSON search indexes.

- NONE : JSON queries are not supported. Only the data guide index is maintained.
- TEXT : Only text data is indexed for full-text search queries.
- VALUE(data_types) : Enables range-search for the specified data types. One or more of NUMBER, TIMESTAMP, VARCHAR2 as a comma-separated list.
- TEXT_VALUE(data_types) : A combination of support for full-text search queries and range-search for the specified data types.
- TEXT_VALUE_STRING : A combination of support for full-text search queries and range-search for NUMBER, TIMESTAMP and VARCHAR2 data types.
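
As a sketch of setting one of these values explicitly, the PARAMETERS clause mirrors the XML example shown earlier. This assumes the SEARCH_ON keyword is accepted the same way for JSON search indexes.

drop index if exists t1_search_idx;

create search index t1_search_idx on t1 (data) for json
parameters (
  'search_on text_value_string'
);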

◉ Oracle Text Search Indexes


Oracle has a long history of full-text indexing for text data. 

In Oracle 23c we can create full-text search indexes using the CREATE SEARCH INDEX syntax. The full syntax is available here.

drop table if exists text_tab purge;

create table text_tab (
  id    number,
  data  clob
);

create search index text_tab_search_idx on text_tab (data);
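
The resulting index supports the usual Oracle Text operators. A minimal sketch, assuming some rows have been loaded into TEXT_TAB and the index is synchronized.

-- Full-text query against the CLOB column via the search index.
select id
from   text_tab
where  contains(data, 'Oracle') > 0;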

Source: oracle-base.com

Tuesday, May 23, 2023

Benefits of 1Z0-931-22 Certification for Your Career Development


What Is 1Z0-931-22 Certification?

The 1Z0-931-22 certification is a prestigious credential offered by Oracle, and it specifically focuses on the Oracle Autonomous Database Cloud 2022 Professional. This certification validates an individual's expertise in managing and leveraging data effectively using the latest advancements in data management technology.

By obtaining the 1Z0-931-22 certification, professionals demonstrate their proficiency in Oracle Autonomous Database Cloud 2022, which incorporates cutting-edge technologies such as artificial intelligence, machine learning, and automation. This certification provides individuals with comprehensive knowledge and hands-on experience in utilizing these advanced tools to drive innovation and efficiency in data management processes.

The 1Z0-931-22 certification is highly regarded in the industry and is recognized by employers globally. It serves as a tangible validation of an individual's skills and knowledge in data management and analytics, establishing them as credible experts in the field. Professionals with this certification often enjoy enhanced career prospects, improved earning potential, and peer recognition.

Furthermore, the 1Z0-931-22 certification is not a one-time achievement but a commitment to continuous professional development. Oracle continually updates its certification programs to align with industry trends and emerging practices. Certified professionals have access to a vast network of resources and opportunities for ongoing growth and learning, provided they stay up to date with the latest developments in the field.

Unlocking Success: 1Z0-931-22 Certification Benefits

In the rapidly growing world of technology, staying ahead of the curve is crucial for professionals seeking success. As businesses strive to harness the power of data, the demand for skilled individuals who can manage and leverage data effectively has skyrocketed.

Acquiring the proper certifications in this highly competitive landscape can unlock new opportunities and propel your career forward. One such certification that stands out is the 1Z0-931-22 certification, which offers many advantages for ambitious professionals.

1. Enhanced Career Prospects with 1Z0-931-22 Certification

The 1Z0-931-22 certification can significantly enhance your data management and analytics career prospects. Employers across industries recognize the value of Oracle certifications and often prioritize candidates who hold them. This certification demonstrates your proficiency in Oracle Autonomous Database Cloud 2022 Professional, making you a sought-after asset in today's data-driven organizations. With this credential, you'll have a competitive edge over your peers and open doors to exciting job opportunities.

2. Expertise in Cutting-Edge Technology

The 1Z0-931-22 certification equips you with comprehensive knowledge and expertise in the latest advancements in data management technology. Oracle Autonomous Database Cloud 2022 combines artificial intelligence, machine learning, and automation to revolutionize data management processes. By becoming certified, you'll gain an in-depth understanding and hands-on experience in utilizing these cutting-edge tools effectively. This expertise enhances your professional profile and allows you to drive innovation and efficiency within your organization.

3. Validation of Skills and Knowledge

Earning the 1Z0-931-22 certification is a real validation of your data management and analytics skills and knowledge. It showcases your commitment to continuous learning and professional development, establishing you as a credible expert in your field. This validation can be particularly advantageous when competing for promotions or pursuing leadership positions within your organization. Employers value certified professionals who can bring proven expertise to the table, and this certification provides precisely that.

4. Increased Earning Potential with 1Z0-931-22 Certification

In addition to improving your career prospects, the 1Z0-931-22 certification can positively impact your earning potential. Certified professionals often command higher salaries compared to their non-certified counterparts. With the rapid growth of data-centric roles and the scarcity of qualified professionals, organizations are willing to invest in top talent. By acquiring this certification, you position yourself as a highly skilled individual who can deliver exceptional results, translating into better compensation packages and financial rewards.

5. Continuous Professional Development

Oracle certifications, including the 1Z0-931-22, go beyond a one-time achievement and provide a platform for continuous professional development and learning. Oracle's commitment to updating its certification programs ensures that certified professionals stay abreast of the latest industry trends and developments. As technology evolves, staying up to date with emerging practices and acquiring new skills becomes essential. With this certification, you can access a vast network of professionals and resources that foster ongoing growth and knowledge enhancement.

6. Recognition and Prestige

Obtaining the 1Z0-931-22 certification brings recognition and prestige to your professional profile. Being associated with a globally recognized certification from Oracle elevates your standing within the industry and among your peers. The certification showcases your dedication to excellence and commitment to delivering high-quality results. As a certified professional, you gain a sense of pride in your accomplishments and enjoy increased credibility in your professional interactions.

In Conclusion

The 1Z0-931-22 certification offers a multitude of benefits for ambitious professionals seeking to unlock success in the realm of data management and analytics. From enhanced career prospects and cutting-edge expertise to validation of skills and increased earning potential, this certification opens doors to new opportunities and propels your professional growth. Additionally, its focus on continuous professional development ensures that you stay at the forefront of industry advancements.

Embrace the power of the 1Z0-931-22 certification and take your career to new heights!

Monday, May 22, 2023

SELECT Without FROM Clause in Oracle Database 23c


From Oracle 23c onward we can use a SELECT without a FROM clause in some circumstances.

◉ SELECT Without FROM Clause


From Oracle 23c onward, rather than selecting expressions from the DUAL table, we can just omit the FROM clause completely. So these are functionally identical.

select sysdate;

SYSDATE
---------
04-APR-23

SQL>

select sysdate from dual;

SYSDATE
---------
04-APR-23

SQL>

◉ PL/SQL Support


The omission of the FROM clause is also valid in PL/SQL.

set serveroutput on

declare
  v_date date;
begin
  select sysdate
  into v_date;

  dbms_output.put_line(v_date);
end;
/
04-APR-23

PL/SQL procedure successfully completed.

SQL>

Normally we would expect to achieve the above with a direct assignment, not a SELECT ... INTO, but it wouldn't illustrate the point.
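
For comparison, the direct assignment version looks like this.

set serveroutput on

declare
  v_date date;
begin
  -- Direct assignment, with no SELECT involved.
  v_date := sysdate;

  dbms_output.put_line(v_date);
end;
/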

◉ Implicit Statement Results


In other database engines we often see this type of syntax used for passing results out of procedures, so we might expect this to be possible, but unfortunately it's not.

create or replace procedure get_date as
begin
  select sysdate;
end;
/

Warning: Procedure created with compilation errors.

SQL>show errors
Errors for PROCEDURE GET_DATE:

LINE/COL ERROR
-------- -----------------------------------------------------------------
3/3      PLS-00428: an INTO clause is expected in this SELECT statement
SQL>

We can of course replicate the expected functionality using implicit statement results, introduced in Oracle 12.1.

create or replace procedure get_date as
  l_cursor sys_refcursor;
begin
  open l_cursor for
    select sysdate;
  dbms_sql.return_result(l_cursor);
end;
/

exec get_date;

PL/SQL procedure successfully completed.

ResultSet #1

SYSDATE
---------
04-APR-23

SQL>

That allows us to mimic what we see in other database engines, but it's not as simple as we might have wanted.

◉ Query Transformation


Let's see what happens behind the scenes when we use this new syntax.

First we flush the shared pool and identify the trace file that will be created for our new session.

conn sys/SysPassword1@//localhost:1521/freepdb1 as sysdba
alter system flush shared_pool;

conn testuser1/testuser1@//localhost:1521/freepdb1

set linesize 100
column value format a65

select value
from   v$diag_info
where  name = 'Default Trace File';

VALUE
-----------------------------------------------------------------
/opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_17498.trc

SQL>

Now we do a 10053 trace of the statement.

alter session set events '10053 trace name context forever';

select sysdate; 

alter session set events '10053 trace name context off';

We check the resulting trace file, searching for the section beginning with "Final query after transformations", and we see the following statement.

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT SYSDATE@! "SYSDATE" FROM "SYS"."DUAL" "DUAL"

The statement has been transformed to include FROM DUAL, so this is syntactic sugar. The feature lets us type less, but the query we are used to still runs on the server.

Source: oracle-base.com

Friday, May 19, 2023

Schema Privileges in Oracle Database 23c


Schema privileges allow us to simplify grants where a user or role needs privileges on all objects in a schema.

◉ Setup


The examples in this article require the following setup.

Create two test users and a role.

conn sys/SysPassword1@//localhost:1521/freepdb1 as sysdba

drop user if exists testuser1 cascade;
drop user if exists testuser2 cascade;
drop role if exists t1_schema_role;

create user testuser1 identified by testuser1 quota unlimited on users;
grant db_developer_role to testuser1;

create user testuser2 identified by testuser2 quota unlimited on users;
grant create session to testuser2;

create role t1_schema_role;

Create some objects in the first test user.

conn testuser1/testuser1@//localhost:1521/freepdb1

-- Sequences
create sequence t1_seq;
create sequence t2_seq;


-- Tables
create table t1 (id  number);
insert into t1 values (t1_seq.nextval);
commit;

create table t2 (id  number);
insert into t2 values (t2_seq.nextval);
commit;


-- Views
create view t1_v as select * from t1;
create view t2_v as select * from t2;


-- Procedures
create or replace procedure p1 as
begin
  null;
end;
/

create or replace procedure p2 as
begin
  null;
end;
/

◉ Grant Schema Privileges


The following code shows how to perform various schema privilege grants to users and roles.

conn testuser1/testuser1@//localhost:1521/freepdb1

-- Sequences
grant select any sequence on schema testuser1 to testuser2;
grant select any sequence on schema testuser1 to t1_schema_role;

-- Tables, views, materialized views
grant select any table on schema testuser1 to testuser2;
grant insert any table on schema testuser1 to testuser2;
grant update any table on schema testuser1 to testuser2;
grant delete any table on schema testuser1 to testuser2;
grant select any table on schema testuser1 to t1_schema_role;
grant insert any table on schema testuser1 to t1_schema_role;
grant update any table on schema testuser1 to t1_schema_role;
grant delete any table on schema testuser1 to t1_schema_role;

-- Procedures, functions and packages
grant execute any procedure on schema testuser1 to testuser2;
grant execute any procedure on schema testuser1 to t1_schema_role;

◉ Test Schema Privileges


The following code tests the grants we made earlier.

conn testuser2/testuser2@//localhost:1521/freepdb1

-- Sequences

select testuser1.t1_seq.nextval;

   NEXTVAL
----------
         2

SQL>

select testuser1.t2_seq.nextval;

   NEXTVAL
----------
         2

SQL>


-- Tables

select count(*) from testuser1.t1;

  COUNT(*)
----------
         1

SQL>

select count(*) from testuser1.t2;

  COUNT(*)
----------
         1

SQL>


-- Views
select * from testuser1.t1_v;

        ID
----------
         1

SQL>

select * from testuser1.t2_v;

        ID
----------
         1

SQL>


-- Procedures

exec testuser1.p1;

PL/SQL procedure successfully completed.

SQL>

exec testuser1.p2;

PL/SQL procedure successfully completed.

SQL>

◉ Auditing


Privileges granted at the schema level show up in the audit trail in the normal way, as demonstrated below.

We create and enable a new audit policy on some of the TESTUSER1 objects.

conn sys/SysPassword1@//localhost:1521/freepdb1 as sysdba

noaudit policy test_audit_policy;
drop audit policy test_audit_policy;

create audit policy test_audit_policy
  actions delete on testuser1.t1,
          insert on testuser1.t1,
          update on testuser1.t1,
          select on testuser1.t1_seq
  when    'sys_context(''userenv'', ''session_user'') = ''TESTUSER2'''
  evaluate per session
  container = current;

audit policy test_audit_policy;

We connect to the TESTUSER2 user and insert some data by referencing a sequence.

conn testuser2/testuser2@//localhost:1521/freepdb1

insert into testuser1.t1 (id) values (testuser1.t1_seq.nextval);
commit;

We check the audit trail.

conn sys/SysPassword1@//localhost:1521/freepdb1 as sysdba

-- You might need to flush the audit information before it is visible.
-- exec dbms_audit_mgmt.flush_unified_audit_trail;


column event_timestamp format a30
column dbusername format a10
column action_name format a20
column object_schema format a10
column object_name format a20

select event_timestamp,
       dbusername,
       action_name,
       object_schema,
       object_name
from   unified_audit_trail
where  dbusername = 'TESTUSER2'
order by event_timestamp;

EVENT_TIMESTAMP                DBUSERNAME ACTION_NAME          OBJECT_SCH OBJECT_NAME
------------------------------ ---------- -------------------- ---------- --------------------
30-APR-23 05.00.39.944494 PM   TESTUSER2  SELECT               TESTUSER1  T1_SEQ
30-APR-23 05.00.39.948816 PM   TESTUSER2  INSERT               TESTUSER1  T1

SQL>

We can see the individual actions are audited as expected.

◉ Views


The following views display information about schema privileges.

- DBA_SCHEMA_PRIVS
- ROLE_SCHEMA_PRIVS
- USER_SCHEMA_PRIVS
- SESSION_SCHEMA_PRIVS
- V$ENABLEDSCHEMAPRIVS

Here are some examples of their usage.

conn testuser2/testuser2@//localhost:1521/freepdb1

column username format a10
column privilege format a25
column schema format a10

select * from user_schema_privs;

USERNAME   PRIVILEGE                 SCHEMA     ADM COM INH
---------- ------------------------- ---------- --- --- ---
TESTUSER2  EXECUTE ANY PROCEDURE     TESTUSER1  NO  NO  NO
TESTUSER2  SELECT ANY SEQUENCE       TESTUSER1  NO  NO  NO
TESTUSER2  DELETE ANY TABLE          TESTUSER1  NO  NO  NO
TESTUSER2  UPDATE ANY TABLE          TESTUSER1  NO  NO  NO
TESTUSER2  INSERT ANY TABLE          TESTUSER1  NO  NO  NO
TESTUSER2  SELECT ANY TABLE          TESTUSER1  NO  NO  NO

6 rows selected.

SQL>

select * from session_schema_privs;

PRIVILEGE                 SCHEMA
------------------------- ----------
EXECUTE ANY PROCEDURE     TESTUSER1
SELECT ANY SEQUENCE       TESTUSER1
DELETE ANY TABLE          TESTUSER1
UPDATE ANY TABLE          TESTUSER1
INSERT ANY TABLE          TESTUSER1
SELECT ANY TABLE          TESTUSER1

6 rows selected.

SQL>

conn sys/SysPassword1@//localhost:1521/freepdb1 as sysdba

column role format a20
column privilege format a25
column schema format a10

select * from role_schema_privs;

ROLE                 PRIVILEGE                 SCHEMA     ADM COM INH
-------------------- ------------------------- ---------- --- --- ---
T1_SCHEMA_ROLE       EXECUTE ANY PROCEDURE     TESTUSER1  NO  NO  NO
T1_SCHEMA_ROLE       SELECT ANY SEQUENCE       TESTUSER1  NO  NO  NO
T1_SCHEMA_ROLE       DELETE ANY TABLE          TESTUSER1  NO  NO  NO
T1_SCHEMA_ROLE       UPDATE ANY TABLE          TESTUSER1  NO  NO  NO
T1_SCHEMA_ROLE       INSERT ANY TABLE          TESTUSER1  NO  NO  NO
T1_SCHEMA_ROLE       SELECT ANY TABLE          TESTUSER1  NO  NO  NO

6 rows selected.

SQL>

◉ Revoke Schema Privileges


The following code shows how to revoke the schema privileges we granted earlier.

conn testuser1/testuser1@//localhost:1521/freepdb1

-- Sequences
revoke select any sequence on schema testuser1 from testuser2;
revoke select any sequence on schema testuser1 from t1_schema_role;

-- Tables, views, materialized views
revoke select any table on schema testuser1 from testuser2;
revoke insert any table on schema testuser1 from testuser2;
revoke update any table on schema testuser1 from testuser2;
revoke delete any table on schema testuser1 from testuser2;
revoke select any table on schema testuser1 from t1_schema_role;
revoke insert any table on schema testuser1 from t1_schema_role;
revoke update any table on schema testuser1 from t1_schema_role;
revoke delete any table on schema testuser1 from t1_schema_role;

-- Procedures, functions and packages
revoke execute any procedure on schema testuser1 from testuser2;
revoke execute any procedure on schema testuser1 from t1_schema_role;

◉ Considerations


Here are some things to consider when using this functionality.

- The schema privileges can be granted to, and revoked from, users and roles.

- The grant for a specific object type only has to be issued once. Any newly created objects of the same type will automatically be available via the grant, as the sketch after this list shows.

- From a security perspective schema privileges can present a problem. We always discuss using "least privileges" to minimize the attack surface of our system. Schema privileges go against the "least privileges" principle by granting access to all objects of a specific type. For many use cases we should avoid schema privileges, which makes our lives harder, but our systems potentially safer.

- There are many system and admin privileges that are excluded from schema privileges, listed here.
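
Here is a minimal sketch of the second point, assuming the schema-level grants shown earlier are in place (they were revoked in the previous section, so they would need to be re-issued first). A table created after the grant is immediately accessible to the grantee with no further action.

conn testuser1/testuser1@//localhost:1521/freepdb1

-- Create a new table after the schema-level grants were issued.
create table t3 (id number);
insert into t3 values (1);
commit;

conn testuser2/testuser2@//localhost:1521/freepdb1

-- No additional grant is needed. The schema-level grant covers the new table.
select count(*) from testuser1.t3;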

Source: oracle-base.com

Wednesday, May 17, 2023

Removal of Touch-Once Restriction after Parallel DML (Unrestricted Direct Loads) in Oracle Database 23c


Oracle database 23c makes parallel DML more flexible by removing the touch-once restriction after parallel DML. This is also known as unrestricted direct loads.

◉ The Problem : Touch-Once Restriction


In releases prior to Oracle database 23c we can't select or modify an object after it has been modified using parallel DML in the same transaction. We have to issue either a commit or rollback to end the transaction before the object can be referenced again. This is called the touch-once restriction. Here is an example of it in 19c.

We create a test table as a copy of ALL_OBJECTS.

drop table t1 purge;

create table t1 as
select *
from   all_objects;

Table created.

SQL>

We enable parallel DML and perform a parallel insert into the T1 table.

alter session enable parallel dml;

insert /*+ parallel(t1 4) */ into t1
select /*+ parallel(t1 4)*/ * from t1;

118,673 rows inserted.

SQL>

Without issuing a commit or rollback, we attempt to select from the T1 table and we get an error caused by the touch-once restriction.

select count(*) from t1;

Error starting at line : 1 in command -
select count(*) from t1
Error at Command Line : 1 Column : 22
Error report -
SQL Error: ORA-12838: cannot read/modify an object after modifying it in parallel
12838. 00000 -  "cannot read/modify an object after modifying it in parallel"
*Cause:    Within the same transaction, an attempt was made to add read or
           modification statements on a table after it had been modified in parallel
           or with direct load. This is not permitted.
*Action:   Rewrite the transaction, or break it up into two transactions
           one containing the initial modification and the second containing the
           parallel modification operation.
SQL>

We have to commit or rollback before referencing the table modified by the parallel DML.

commit;

select count(*) from t1;

   COUNT(*)
-----------
     237346

SQL>

◉ The Solution: The Removal of the Touch-Once Restriction


Oracle database 23c has removed the touch-once restriction, so we no longer have to issue a commit or rollback before referencing the object modified by a parallel DML operation. Here is a repeat of the previous example in Oracle database 23c.

We create a test table as a copy of ALL_OBJECTS.

drop table if exists t1 purge;

create table t1 as
select *
from   all_objects;

Table created.

SQL>

We enable parallel DML and perform a parallel insert into the T1 table.

alter session enable parallel dml;

insert /*+ parallel(t1 4) */ into t1
select /*+ parallel(t1 4)*/ * from t1;

79144 rows inserted.

SQL>

Without issuing a commit or rollback, we attempt to select from the T1 table and we no longer get an error.

select count(*) from t1;

  COUNT(*)
----------
    158288

SQL>

We can also perform more parallel or direct loads without a commit.

insert /*+ parallel(t1 4) */ into t1
select /*+ parallel(t1 4)*/ * from t1;

158288 rows created.

SQL>

insert /*+ append */ into t1
select * from t1;

316576 rows created.

SQL>

We can think of this as a quality of life improvement.

Source: oracle-base.com

Monday, May 15, 2023

Precheck Constraints using JSON Schema in Oracle Database 23c


In Oracle database 23c we can use the PRECHECK keyword to mark check constraints as being validated externally by an application. The DBMS_JSON_SCHEMA package allows us to describe objects in the form of a JSON schema, which can be passed to an application to define how the data should be presented to the database so it is processed without errors.

◉ Basic Check Constraints


The PRECHECK keyword indicates a check constraint is prechecked by the application before the data is sent to the database. In its default form, the check constraint is still validated in the database.

In this example we create a table with a check constraint set to PRECHECK. We see that invalid data still causes a constraint violation with the PRECHECK option.

drop table if exists t1 purge;

create table t1 (
  id         number,
  valid      varchar2(1),
  constraint t1_pk primary key (id),
  constraint valid_chk check (valid in ('Y','N')) precheck
);

insert into t1 (id, valid) values (1, 'B');
*
ERROR at line 1:
ORA-02290: check constraint (TESTUSER1.VALID_CHK) violated

SQL>

We use the ALTER TABLE command to set the constraint to DISABLE PRECHECK, which means we are totally reliant on the application to validate the data. This means we can insert invalid data if we fail to manually validate it.

alter table t1 modify constraint valid_chk disable precheck;

insert into t1 (id, valid) values (1, 'B');

1 row created.

SQL>

Remember, we can only enable the constraint if the underlying data doesn't violate it, unless we use the ENABLE NOVALIDATE option. In the following example we use the ALTER TABLE command to cycle through various constraint settings.

select status, validated, precheck
from   user_constraints
where  constraint_name = 'VALID_CHK';

STATUS   VALIDATED     PRECHECK
-------- ------------- --------
DISABLED NOT VALIDATED PRECHECK

SQL>

alter table t1 modify constraint valid_chk enable novalidate precheck;

select status, validated, precheck
from   user_constraints
where  constraint_name = 'VALID_CHK';

STATUS   VALIDATED     PRECHECK
-------- ------------- --------
ENABLED  NOT VALIDATED PRECHECK

SQL>

alter table t1 modify constraint valid_chk enable noprecheck;
                                 *
ERROR at line 1:
ORA-02293: cannot validate (TESTUSER1.VALID_CHK) - check constraint
violated

SQL>

truncate table t1;
alter table t1 modify constraint valid_chk enable noprecheck;

select status, validated, precheck
from   user_constraints
where  constraint_name = 'VALID_CHK';

STATUS   VALIDATED     PRECHECK
-------- ------------- --------
ENABLED  VALIDATED

SQL>

◉ JSON Schema Check Constraints


We can use the PRECHECK option for check constraints that validate JSON data against a JSON Schema.

In this example we create a table using the JSON data type, and use a check constraint to validate that the JSON data conforms to a specific JSON schema. We are using the PRECHECK option, but even when we insert valid JSON data we get a JSON Schema violation if it doesn't conform to the JSON Schema definition.

drop table if exists t2 purge;

create table t2 (
  id         number,
  json_data  json,
  constraint t2_pk primary key (id),
  constraint json_data_chk check (json_data is json validate '{
  "type"       : "object",
  "properties" : {"fruit"    : {"type"      : "string",
                                "minLength" : 1,
                                "maxLength" : 10},
                  "quantity" : {"type"      : "number",
                                "minimum"   : 0,
                                "maximum"   : 100}},
  "required"   : ["fruit", "quantity"]
}') precheck
);

insert into t2 (id, json_data) values (2, json('{"fruit":"apple"}'));
            *
ERROR at line 1:
ORA-40875: JSON schema validation error

SQL>

Here we use the ALTER TABLE command to set the constraint to DISABLE PRECHECK, which means we are totally reliant on the application to validate that the data matches the JSON Schema. This means we can add valid JSON that doesn't conform to the JSON Schema definition.

alter table t2 modify constraint json_data_chk disable precheck;

insert into t2 (id, json_data) values (3, json('{"fruit":"apple"}'));

1 row created.

SQL>

We cycle through the various constraint states as before.

select status, validated, precheck
from   user_constraints
where  constraint_name = 'JSON_DATA_CHK';

STATUS   VALIDATED     PRECHECK
-------- ------------- --------
DISABLED NOT VALIDATED PRECHECK

SQL>

alter table t2 modify constraint json_data_chk enable novalidate precheck;

select status, validated, precheck
from   user_constraints
where  constraint_name = 'JSON_DATA_CHK';

STATUS   VALIDATED     PRECHECK
-------- ------------- --------
ENABLED  NOT VALIDATED PRECHECK

SQL>

alter table t2 modify constraint json_data_chk enable noprecheck;
                                 *
ERROR at line 1:
ORA-02293: cannot validate (TESTUSER1.JSON_DATA_CHK) - check constraint violated

SQL>

truncate table t2;
alter table t2 modify constraint json_data_chk enable noprecheck;

select status, validated, precheck
from   user_constraints
where  constraint_name = 'JSON_DATA_CHK';

STATUS   VALIDATED     PRECHECK
-------- ------------- --------
ENABLED  VALIDATED

SQL>

◉ DBMS_JSON_SCHEMA.DESCRIBE


The DESCRIBE function in the DBMS_JSON_SCHEMA package generates a JSON schema describing the referenced object. It supports a variety of objects listed here. The description of the object can be used by an application to validate the data prior to sending it to the database.
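
For example, an application layer could precheck a JSON payload before attempting the insert. A minimal sketch, assuming the DBMS_JSON_SCHEMA.IS_VALID overload that accepts two JSON values, and the T2 table and schema shown earlier.

set serveroutput on

declare
  l_schema json;
  l_data   json;
begin
  -- The same JSON schema used in the T2 check constraint.
  l_schema := json('{
    "type"       : "object",
    "properties" : {"fruit"    : {"type" : "string", "minLength" : 1, "maxLength" : 10},
                    "quantity" : {"type" : "number", "minimum" : 0, "maximum" : 100}},
    "required"   : ["fruit", "quantity"]
  }');

  l_data := json('{"fruit":"apple","quantity":10}');

  -- Precheck in the application layer before sending the insert.
  if dbms_json_schema.is_valid(l_data, l_schema) then
    insert into t2 (id, json_data) values (10, l_data);
    commit;
  else
    dbms_output.put_line('Payload fails the JSON schema precheck.');
  end if;
end;
/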

We describe the T1 table created earlier. The output includes the table definition, and the check constraint for the VALID column. We've used the JSON_SERIALIZE function to pretty print the output, but this is not necessary.

set long 1000000 pagesize 100
column json_schema format a80

select json_serialize(
         dbms_json_schema.describe(
           object_name => 'T1',
           owner_name  => 'TESTUSER1')
         pretty) as json_schema;

JSON_SCHEMA
--------------------------------------------------------------------------------
{
  "title" : "T1",
  "dbObject" : "TESTUSER1.T1",
  "type" : "object",
  "dbObjectType" : "table",
  "properties" :
  {
    "ID" :
    {
      "extendedType" : "number"
    },
    "VALID" :
    {
      "extendedType" :
      [
        "null",
        "string"
      ],
      "maxLength" : 1,
      "allOf" :
      [
        {
          "enum" :
          [
            "Y",
            "N"
          ]
        }
      ]
    }
  },
  "required" :
  [
    "ID"
  ],
  "dbPrimaryKey" :
  [
    "ID"
  ]
}

SQL>

In this example we restrict the description to the VALID column.

set long 1000000 pagesize 100
column json_schema format a80

select json_serialize(
         dbms_json_schema.describe(
           object_name => 'T1',
           owner_name  => 'TESTUSER1',
           column_name => 'VALID')
         pretty) as json_schema;

JSON_SCHEMA
--------------------------------------------------------------------------------
{
  "dbColumn" : "VALID",
  "extendedType" :
  [
    "null",
    "string"
  ],
  "maxLength" : 1,
  "allOf" :
  [
    {
      "enum" :
      [
        "Y",
        "N"
      ]
    }
  ]
}

SQL>

We describe the T2 table created earlier. The output includes the table definition, and the check constraint for the JSON_DATA column, which itself includes the JSON schema definition.

set long 1000000 pagesize 100
column json_schema format a80

select json_serialize(
         dbms_json_schema.describe(
           object_name => 'T2',
           owner_name  => 'TESTUSER1')
         pretty) as json_schema;

JSON_SCHEMA
--------------------------------------------------------------------------------
{
  "title" : "T2",
  "dbObject" : "TESTUSER1.T2",
  "type" : "object",
  "dbObjectType" : "table",
  "properties" :
  {
    "ID" :
    {
      "extendedType" : "number"
    },
    "JSON_DATA" :
    {
      "allOf" :
      [
        {
          "type" : "object",
          "properties" :
          {
            "fruit" :
            {
              "type" : "string",
              "minLength" : 1,
              "maxLength" : 10
            },
            "quantity" :
            {
              "type" : "number",
              "minimum" : 0,
              "maximum" : 100
            }
          },
          "required" :
          [
            "fruit",
            "quantity"
          ]
        }
      ]
    }
  },
  "required" :
  [
    "ID"
  ],
  "dbPrimaryKey" :
  [
    "ID"
  ]
}

SQL>

In this example we restrict the description to the JSON_DATA column.

set long 1000000 pagesize 100
column json_schema format a80

select json_serialize(
         dbms_json_schema.describe(
           object_name => 'T2',
           owner_name  => 'TESTUSER1',
           column_name => 'JSON_DATA')
         pretty) as json_schema;

JSON_SCHEMA
--------------------------------------------------------------------------------
{
  "dbColumn" : "JSON_DATA",
  "allOf" :
  [
    {
      "type" : "object",
      "properties" :
      {
        "fruit" :
        {
          "type" : "string",
          "minLength" : 1,
          "maxLength" : 10
        },
        "quantity" :
        {
          "type" : "number",
          "minimum" : 0,
          "maximum" : 100
        }
      },
      "required" :
      [
        "fruit",
        "quantity"
      ]
    }
  ]
}

SQL>

In this example we describe a JSON-relational duality view created in the article here.

set long 1000000 pagesize 200
column json_schema format a80

select json_serialize(
         dbms_json_schema.describe(
           object_name => 'DEPARTMENT_DV',
           owner_name  => 'TESTUSER1')
         pretty) as json_schema;

JSON_SCHEMA
--------------------------------------------------------------------------------
{
  "title" : "DEPARTMENT_DV",
  "dbObject" : "TESTUSER1.DEPARTMENT_DV",
  "dbObjectType" : "dualityView",
  "dbObjectProperties" :
  [
    "insertable",
    "updatable",
    "deletable",
    "check"
  ],
  "type" : "object",
  "properties" :
  {
    "_metadata" :
    {
      "etag" :
      {
        "extendedType" : "string",
        "maxLength" : 200
      },
      "asof" :
      {
        "extendedType" : "string",
        "maxLength" : 20
      }
    },
    "location" :
    {
      "extendedType" :
      [
        "string",
        "null"
      ],
      "maxLength" : 13,
      "dbAnnotations" :
      [
        "update",
        "check"
      ]
    },
    "departmentName" :
    {
      "extendedType" :
      [
        "string",
        "null"
      ],
      "maxLength" : 14,
      "dbAnnotations" :
      [
        "update",
        "check"
      ]
    },
    "departmentNumber" :
    {
      "extendedType" : "number",
      "sqlPrecision" : 2,
      "sqlScale" : 0,
      "dbAnnotations" :
      [
        "check"
      ]
    },
    "employees" :
    {
      "type" : "array",
      "items" :
      {
        "type" : "object",
        "properties" :
        {
          "job" :
          {
            "extendedType" :
            [
              "string",
              "null"
            ],
            "maxLength" : 9,
            "dbAnnotations" :
            [
              "update",
              "check"
            ]
          },
          "salary" :
          {
            "extendedType" :
            [
              "number",
              "null"
            ],
            "sqlPrecision" : 7,
            "sqlScale" : 2,
            "dbAnnotations" :
            [
              "update",
              "check"
            ]
          },
          "employeeNumber" :
          {
            "extendedType" : "number",
            "sqlPrecision" : 4,
            "sqlScale" : 0,
            "dbAnnotations" :
            [
              "check"
            ]
          },
          "employeeName" :
          {
            "extendedType" :
            [
              "string",
              "null"
            ],
            "maxLength" : 10,
            "dbAnnotations" :
            [
              "update",
              "check"
            ]
          }
        },
        "required" :
        [
          "employeeNumber"
        ]
      }
    }
  },
  "required" :
  [
    "departmentNumber"
  ]
}

SQL>

Source: oracle-base.com