Thursday, November 23, 2023

Some new PL/SQL features in Oracle Database 23c

PL/SQL stands for Procedural Language for SQL. PL/SQL is Oracle’s procedural extension for SQL and is loosely based on Ada (a Pascal-influenced language developed for the US Department of Defense). PL/SQL has been available in Oracle Database since version 6 (released in 1988), while stored PL/SQL procedures/functions/packages/triggers became available with version 7.

Let us have a look at what is new with Oracle Database 23c. There are 5 main areas:

1. The SQL Transpiler


A new init.ora parameter in Oracle Database 23c called SQL_TRANSPILER enables or disables the SQL transpiler (think of it as a combination of translate and compile). When this parameter is set to ON, the SQL transpiler feature is enabled and PL/SQL functions are automatically transpiled into SQL expressions whenever possible (it does not work for all functions), without user intervention. The conversion operation is transparent to users and can improve performance by reducing the overhead accrued from switching between the SQL and PL/SQL runtimes.

You can change the value with either “alter system” or “alter session”, and in RAC different instances can have different values. The default value is OFF and the parameter can be modified at the PDB level too.

SQL> alter system set SQL_TRANSPILER=on scope=both;

System altered.
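
For illustration, a simple deterministic function like the sketch below (the function and the ORDERS table are made up for this example) is the kind of candidate the transpiler can replace with an equivalent SQL expression when the function is called from SQL:

create or replace function f_total_price (
  p_price    number,
  p_quantity number
) return number is
begin
  return p_price * p_quantity;
end;
/

-- With SQL_TRANSPILER=ON, the call below may be transpiled into the
-- SQL expression (price * quantity), avoiding a SQL-to-PL/SQL context
-- switch for every row.
select order_id, f_total_price(price, quantity) from orders;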

2. The BOOLEAN data type


Another new init.ora parameter called PLSQL_IMPLICIT_CONVERSION_BOOL (FALSE is the default) allows or disallows implicit conversions to and from the BOOLEAN data type in PL/SQL. Here are the data types that are converted to BOOLEAN and vice versa:

Numeric data type to BOOLEAN:

  • Non-zero numeric values are implicitly converted to the BOOLEAN value TRUE
  • The numeric value of 0 is implicitly converted to the BOOLEAN value FALSE

Character data type to BOOLEAN:

  • Character values such as ‘true’, ‘t’, ‘yes’, ‘y’, and ‘on’ (case-insensitive) are implicitly converted to the BOOLEAN value TRUE
  • Character values such as ‘false’, ‘f’, ‘no’, ‘n’, and ‘off’ (case-insensitive) are implicitly converted to the BOOLEAN value FALSE

BOOLEAN to numeric data type:

  • The BOOLEAN value TRUE is implicitly converted to the numeric value 1
  • The BOOLEAN value FALSE is implicitly converted to the numeric value 0

BOOLEAN to character data type:

  • The BOOLEAN value TRUE is implicitly converted to the character value ‘TRUE’
  • The BOOLEAN value FALSE is implicitly converted to the character value ‘FALSE’
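
For example, with PLSQL_IMPLICIT_CONVERSION_BOOL set to TRUE, a small hypothetical block like this compiles and runs:

declare
  l_flag boolean;
  l_num  number;
begin
  l_flag := 1;       -- non-zero numeric value implicitly becomes TRUE
  l_num  := l_flag;  -- TRUE implicitly becomes 1
  dbms_output.put_line(l_num);
end;
/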

When this parameter is set to FALSE, PL/SQL does not support such implicit conversions. Let us see how this works in practice: here is a small table called DBAs, and let us add a few Oracle ACEs (and a few more non-ACEs) into it (in honor of Lionel Messi winning the Golden Ball/Ballon d’Or for the 8th time on the 30th of October 2023):

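A minimal sketch of what such a table and inserts could look like (the names and values are illustrative):

create table dbas (
  name    varchar2(30),
  is_ace  boolean
);

insert into dbas values ('DBA One',   true);
insert into dbas values ('DBA Two',   'yes');   -- character value
insert into dbas values ('DBA Three', 'off');   -- character value
insert into dbas values ('DBA Four',  1);       -- numeric value
commit;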

Note that regardless of the way we inserted the boolean value for each one, the result is either TRUE or FALSE when PLSQL_IMPLICIT_CONVERSION_BOOL is set to TRUE:

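A query along these lines would show every stored value normalized to TRUE or FALSE:

select name, is_ace from dbas;

NAME       IS_ACE
---------- -----------
DBA One    TRUE
DBA Two    TRUE
DBA Three  FALSE
DBA Four   TRUE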

3. IF [NOT] EXISTS Syntax Support


In 23c, the clauses IF NOT EXISTS and IF EXISTS are supported by CREATE, ALTER, and DROP DDL statements. They are used to suppress potential errors otherwise raised by the existence or non-existence of a given object, allowing you to write idempotent DDL scripts.

IF NOT EXISTS cannot be used in combination with OR REPLACE in commands using the CREATE DDL statement.

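A short sketch of the syntax (the object names are illustrative):

-- No error if the table already exists.
create table if not exists t1 (id number);

-- No error if the table does not exist.
drop table if exists t1;

-- Not allowed: IF NOT EXISTS cannot be combined with OR REPLACE.
-- create or replace view if not exists v1 as select * from t1;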

4. Extended CASE Controls


The simple CASE statement is extended in PL/SQL to support the use of dangling predicates and choice lists, allowing for simplified and less redundant code.

Dangling predicates are ordinary expressions with their left operands missing (for example > 275) that can be used as a selector_value either instead of or in combination with any number of literals or expressions. With dangling predicates, more complicated comparisons can be made without requiring a searched CASE statement.

Here is an example of how dangling predicates work:

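A hedged sketch of the extended syntax (the values are made up):

declare
  l_salary number := 300;
begin
  case l_salary
    when <= 100        then dbms_output.put_line('low');     -- dangling predicate
    when 150, 200, 250 then dbms_output.put_line('medium');  -- choice list
    when > 275         then dbms_output.put_line('high');    -- dangling predicate
    else dbms_output.put_line('unclassified');
  end case;
end;
/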

The CASE statement is extended in PL/SQL to be consistent with the updated definitions of CASE expressions and CASE statements in the SQL:2003 Standard [ISO03a, ISO03b]. Dangling predicates allow tests other than equality to be performed in simple CASE operations. Multiple choices in WHEN clauses allow CASE operations to be written with less duplicated code.

Currently, the dangling predicates IS JSON and IS OF are not supported.

5. JSON Constructor and JSON_VALUE Support of PL/SQL Aggregate Types


The JSON constructor can now accept a PL/SQL aggregate type and return a JSON object or array populated with the aggregate type data. Conversely, the built-in function json_value now supports PL/SQL aggregate types in the RETURNING clause, mapping from JSON to the specified aggregate type.

All PL/SQL record field and collection data element type constraints are honored by json_value, including maximum character length, integer range checks, and NOT NULL constraints.

SQL objects and PL/SQL record type instances, including implicit records created by the %ROWTYPE attribute, are allowed as valid input to the JSON constructor. The expanded support for user-defined types as input streamlines data interchange between PL/SQL applications and languages that support JSON.

Here is a simple example of using the JSON_SERIALIZE function to “transform” and pretty-print a PL/SQL aggregate (we used to call it unofficially a pseudo table in Oracle 7) into JSON:

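A minimal sketch (the record type and its fields are made up, and json_serialize is assumed to be callable directly from PL/SQL here):

declare
  type t_dba is record (
    name   varchar2(30),
    is_ace boolean
  );
  l_dba  t_dba := t_dba(name => 'DBA One', is_ace => true);
  l_json json;
begin
  -- In 23c the JSON constructor accepts the PL/SQL record directly.
  l_json := json(l_dba);
  dbms_output.put_line(json_serialize(l_json pretty));
end;
/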

Here is what the output from the PL/SQL block above looks like:

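With the sketch above, the pretty-printed output would be something like:

{
  "NAME" : "DBA One",
  "IS_ACE" : true
}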

Source: juliandontcheff.wordpress.com

Wednesday, November 22, 2023

JSON Schema in Oracle Database 23c


In Oracle database 23c a JSON Schema can validate the structure and contents of JSON documents in your database.

What is a JSON Schema?


JSON is extremely flexible, but sometimes we want to validate the structure and contents of our JSON. A JSON Schema is a declarative language that allows us to annotate and validate JSON documents. You can get a full explanation of JSON Schema here.

In the examples below we will use the following JSON schema. It validates that the JSON is made up of a JSON object, with two mandatory items, with their minimum and maximum sizes defined.

{
  "type"       : "object",
  "properties" : {"fruit"    : {"type"      : "string",
                                "minLength" : 1,
                                "maxLength" : 10},
                  "quantity" : {"type"      : "number",
                                "minimum"   : 0,
                                "maximum"   : 100}},
  "required"   : ["fruit", "quantity"]
}

Oracle first introduced the ability to display a JSON schema when they introduced the JSON Data Guide in Oracle 12.2, and later made it easier when they enhanced the JSON_DATAGUIDE function in Oracle 18c. In both cases this is the ability to display a JSON Schema, not enforce one.
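
For reference, a hierarchical (JSON Schema shaped) data guide can be displayed with a query along these lines, assuming a table T1 with a JSON_DATA column:

select json_dataguide(json_data, dbms_json.format_hierarchical, dbms_json.pretty)
from   t1;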

VALIDATE Keyword During Table Creation


We use the VALIDATE clause along with the JSON schema when defining a JSON column in our table.

drop table if exists t1 purge;

create table t1 (
  id         number,
  json_data  json validate '{
  "type"       : "object",
  "properties" : {"fruit"    : {"type"      : "string",
                                "minLength" : 1,
                                "maxLength" : 10},
                  "quantity" : {"type"      : "number",
                                "minimum"   : 0,
                                "maximum"   : 100}},
  "required"   : ["fruit", "quantity"]
}',
  constraint t1_pk primary key (id)
);

We can see the JSON schema is associated with the column by using the USER_JSON_SCHEMA_COLUMNS view.

set long 1000000
column table_name format a10
column column_name format a11
column constraint_name format a15
column json_schema format a40

select table_name,
       column_name,
       constraint_name,
       json_schema
from   user_json_schema_columns;

TABLE_NAME COLUMN_NAME CONSTRAINT_NAME JSON_SCHEMA
---------- ----------- --------------- ----------------------------------------
T1         JSON_DATA   SYS_C0012374    {"type":"object","properties":{"fruit":{
                                       "type":"string","minLength":1,"maxLength
                                       ":10},"quantity":{"type":"number","minim
                                       um":0,"maximum":100}},"required":["fruit
                                       ","quantity"]}

SQL>

We create some data to test the JSON schema.

-- Valid
insert into t1 (id, json_data) values (1, json('{"fruit":"apple","quantity":10}'));

1 row created.

SQL>

-- Valid : Extra weight element.
insert into t1 (id, json_data) values (7, json('{"fruit":"apple","quantity":10,"weight":11}'));

1 row created.

SQL>

-- Missing quantity
insert into t1 (id, json_data) values (2, json('{"fruit":"apple"}'));
            *
ERROR at line 1:
ORA-40875: JSON schema validation error

SQL>

-- Missing fruit
insert into t1 (id, json_data) values (3, json('{"quantity":10}'));
            *
ERROR at line 1:
ORA-40875: JSON schema validation error

SQL>

-- Fruit name too long.
insert into t1 (id, json_data) values (4, json('{"fruit":"abcdefghijk","quantity":10}'));
            *
ERROR at line 1:
ORA-40875: JSON schema validation error

SQL>

-- Fruit name too short.
insert into t1 (id, json_data) values (5, json('{"fruit":"","quantity":10}'));
            *
ERROR at line 1:
ORA-40875: JSON schema validation error

SQL>

-- Quantity too big.
insert into t1 (id, json_data) values (6, json('{"fruit":"apple","quantity":101}'));
            *
ERROR at line 1:
ORA-40875: JSON schema validation error

SQL>

-- Quantity too small.
insert into t1 (id, json_data) values (7, json('{"fruit":"apple","quantity":-1}'));
            *
ERROR at line 1:
ORA-40875: JSON schema validation error

SQL>

Notice that one of the valid examples included an additional property ("weight") that was not in the JSON Schema. This is because JSON is extensible by nature, so a JSON Schema only validates minimum requirements by default. We can prevent additional properties from being included in the data by setting "additionalProperties" to false, as shown below.

drop table if exists t1 purge;

create table t1 (
  id         number,
  json_data  json validate '{
  "type"       : "object",
  "properties" : {"fruit"    : {"type"      : "string",
                                "minLength" : 1,
                                "maxLength" : 10},
                  "quantity" : {"type"      : "number",
                                "minimum"   : 0,
                                "maximum"   : 100}},
  "required"   : ["fruit", "quantity"],
  "additionalProperties"  : false
}',
  constraint t1_pk primary key (id)
);

insert into t1 (id, json_data) values (7, json('{"fruit":"apple","quantity":10,"weight":11}'));
            *
ERROR at line 1:
ORA-40875: JSON schema validation error

SQL>

VALIDATE Keyword With IS JSON Condition


We can use VALIDATE as part of an IS JSON condition. In the following example we recreate the table, this time using the IS JSON condition as part of a check constraint.

drop table if exists t1 purge;

create table t1 (
  id         number,
  json_data  json,
  constraint t1_pk primary key (id),
  constraint json_data_chk check (json_data is json validate '{
  "type"       : "object",
  "properties" : {"fruit"    : {"type"      : "string",
                                "minLength" : 1,
                                "maxLength" : 10},
                  "quantity" : {"type"      : "number",
                                "minimum"   : 0,
                                "maximum"   : 100}},
  "required"   : ["fruit", "quantity"]
}')
);

We can also use the VALIDATE keyword with an IS JSON condition in a query. We recreate the table without using the VALIDATE keyword, and populate it with a variety of JSON documents.

drop table if exists t1 purge;

create table t1 (
  id         number,
  json_data  json,
  constraint t1_pk primary key (id)
);

insert into t1 (id, json_data) values (1, json('{"fruit":"apple"}'));
insert into t1 (id, json_data) values (2, json('{"quantity":10}'));
insert into t1 (id, json_data) values (3, json('{"fruit":"apple","quantity":10}'));

We query the table using the IS JSON VALIDATE condition, so we only return data that matches the JSON schema.

select *
from   t1
where  json_data is json validate '{
  "type"       : "object",
  "properties" : {"fruit"    : {"type"      : "string",
                                "minLength" : 1,
                                "maxLength" : 10},
                  "quantity" : {"type"      : "number",
                                "minimum"   : 0,
                                "maximum"   : 100}},
  "required"   : ["fruit", "quantity"]
}';

        ID JSON_DATA
---------- --------------------------------------------------------------------------------
         3 {"fruit":"apple","quantity":10}

SQL>

VALIDATE Keyword With Domains


We can create a domain that uses the VALIDATE keyword to check a JSON schema. This allows us to create a reusable JSON schema.

drop domain if exists json_schema_domain;

create domain json_schema_domain as json
  constraint json_data_chk check (json_schema_domain is json validate '{
  "type"       : "object",
  "properties" : {"fruit"    : {"type"      : "string",
                                "minLength" : 1,
                                "maxLength" : 10},
                  "quantity" : {"type"      : "number",
                                "minimum"   : 0,
                                "maximum"   : 100}},
  "required"   : ["fruit", "quantity"]
}');

We can use this domain during table creation.

drop table if exists t1 purge;

create table t1 (
  id         number,
  json_data  domain json_schema_domain
);

There is a simplified form of this type of domain. Notice the check constraint has been removed.

drop table if exists t1 purge;
drop domain if exists json_schema_domain;

create domain json_schema_domain as json validate '{
  "type"       : "object",
  "properties" : {"fruit"    : {"type"      : "string",
                                "minLength" : 1,
                                "maxLength" : 10},
                  "quantity" : {"type"      : "number",
                                "minimum"   : 0,
                                "maximum"   : 100}},
  "required"   : ["fruit", "quantity"]
}';

We can use this domain during table creation.

create table t1 (
  id         number,
  json_data  domain json_schema_domain
);

Extended Data Types


JSON has limited data type support. Native binary JSON data (OSON format) adds support for scalar types not present in JSON. There is a full list of the extended types here.

For example, JSON doesn't have datetime data types, so it represents all datetime data as strings. How do we validate that data with a JSON Schema? To demonstrate this we will recreate the test table, altering the JSON Schema to add "expiryDate" as a date extended data type.

drop table if exists t1 purge;

create table t1 (
  id         number,
  json_data  json validate '{
  "type"       : "object",
  "properties" : {"fruit"    : {"type"      : "string",
                                "minLength" : 1,
                                "maxLength" : 10},
                  "quantity" : {"type"      : "number",
                                "minimum"   : 0,
                                "maximum"   : 100},
                  "expiryDate" : {"extendedType": "date"}},
  "required"   : ["fruit", "quantity", "expiryDate"]
}',
  constraint t1_pk primary key (id)
);

We try to insert some data.

insert into t1 (id, json_data) values (1, json('
{
  "fruit" : "apple",
  "quantity" : 10,
  "expiryDate" : "2023-06-30T09:30:26+0000"
}'));
            *
ERROR at line 1:
ORA-40875: JSON schema validation error

SQL>

Although this represents a valid date, it is a string, so it violates the JSON schema. We can alter the JSON to explicitly identify it as a DATE data type using "$oracleDate", and add the EXTENDED keyword to the end of the JSON constructor call.

insert into t1 (id, json_data) values (1, json('
{
  "fruit" : "apple",
  "quantity" : 10,
  "expiryDate" : {"$oracleDate" : "2023-06-30T09:30:26+0000"}
}' extended));

1 row created.

SQL>

DBMS_JSON_SCHEMA.IS_SCHEMA_VALID


The IS_SCHEMA_VALID function in the DBMS_JSON_SCHEMA package can check the validity of a JSON schema definition. In the following example we call it with a valid JSON schema, then an invalid one.

select dbms_json_schema.is_schema_valid('{
  "type"       : "object",
  "properties" : {"fruit"    : {"type"      : "string",
                                "minLength" : 1,
                                "maxLength" : 10},
                  "quantity" : {"type"      : "number",
                                "minimum"   : 0,
                                "maximum"   : 100}},
  "required"   : ["fruit", "quantity"]
}') as is_valid;

  IS_VALID
----------
         1

SQL>

select dbms_json_schema.is_schema_valid('banana') as is_valid;
*
ERROR at line 1:
ORA-40441: JSON syntax error

SQL>

Source: oracle-base.com

Monday, November 20, 2023

SQL Property Graphs and SQL/PGQ in Oracle Database 23c

Oracle have had a Graph Server and Client product for some time, but in Oracle database 23c some of the property graph functionality has been built directly into the database.

This article focuses on the new SQL property graph feature and ignores the previous Graph Server product and PGQL.

Setup


We create a new test user. We add a couple of extra privileges that allow us to do some tracing and display execution plans, but these are not necessary when using SQL property graphs.

conn sys/SysPassword1@//localhost:1521/freepdb1 as sysdba

drop user if exists testuser1 cascade;

create user testuser1 identified by testuser1 quota unlimited on users;
grant db_developer_role to testuser1;

-- Only used for tracing and execution plan examples.
grant alter session to testuser1;
grant select_catalog_role to testuser1;

We need to make sure our test user has the ability to create property graphs.

grant create property graph to testuser1;

This privilege is already part of the RESOURCE role, and will probably be added to the DB_DEVELOPER_ROLE role in the future.

We create a table to hold people, and a table to record the connections between those people.

conn testuser1/testuser1@//localhost:1521/freepdb1

drop table if exists sales purge;
drop table if exists products purge;
drop table if exists connections purge;
drop table if exists people purge;

create table people (
  person_id number primary key,
  name      varchar2(15)
);

insert into people (person_id, name)
values (1, 'Wonder Woman'),
       (2, 'Peter Parker'),
       (3, 'Jean Grey'),
       (4, 'Clark Kent'),
       (5, 'Bruce Banner');
commit;

create table connections (
  connection_id  number primary key,
  person_id_1    number,
  person_id_2    number,
  constraint connections_people_1_fk foreign key (person_id_1) references people (person_id),
  constraint connections_people_2_fk foreign key (person_id_2) references people (person_id)
);

create index connections_person_1_idx on connections (person_id_1);
create index connections_person_2_idx on connections (person_id_2);

insert into connections (connection_id, person_id_1, person_id_2)
values (1,  1, 2),
       (2,  1, 3),
       (3,  1, 4),
       (4,  2, 4),
       (5,  3, 1),
       (6,  3, 4),
       (7,  3, 5),
       (8,  4, 1),
       (9,  5, 1),
       (10, 5, 2),
       (11, 5, 3);
commit;

We create a table to hold products, and a table to hold sales, linked back to the people we defined earlier.

create table products (
  product_id  number primary key,
  name        varchar2(10)
);

insert into products (product_id, name)
values (1, 'apple'),
       (2, 'banana'),
       (3, 'lemon'),
       (4, 'lime');
commit;

create table sales (
  sale_id     number primary key,
  person_id   number,
  product_id  number,
  quantity    number,
  constraint sales_people_fk foreign key (person_id) references people (person_id),
  constraint sales_products_fk foreign key (product_id) references products (product_id)
);

create index sales_person_id_idx on sales (person_id);
create index sales_product_idx on sales (product_id);

insert into sales (sale_id, person_id, product_id, quantity)
values (1, 1, 2, 10),
       (2, 1, 3, 5),
       (3, 2, 2, 6),
       (4, 2, 1, 4),
       (5, 3, 2, 3),
       (6, 3, 1, 6),
       (7, 3, 3, 2),
       (8, 4, 3, 8),
       (9, 5, 2, 6),
       (10, 5, 4, 5),
       (11, 5, 3, 3);
commit;

We gather statistics on all the tables.

exec dbms_stats.gather_table_stats(null, 'people');
exec dbms_stats.gather_table_stats(null, 'connections');
exec dbms_stats.gather_table_stats(null, 'products');
exec dbms_stats.gather_table_stats(null, 'sales');

SQL Property Graphs


A property graph is a model that describes nodes (vertices) and the relationships between them (edges). In the case of SQL property graphs introduced in Oracle database 23c, the vertices and edges are schema objects such as tables, external tables, materialized views or synonyms to these objects. The property graph is stored in the database and can be referenced by SQL/PGQ queries. No data is materialized by the SQL property graph; it is just metadata. All the actual data comes from the referenced objects.

We create a property graph where the PEOPLE table represents the vertices (the points) and the CONNECTIONS table represents the edges (the lines between the points).

--drop property graph connections_pg;

create property graph connections_pg
  vertex tables (
    people
      key (person_id)
      label person
      properties all columns
  )
  edge tables (
    connections
      key (connection_id)
      source key (person_id_1) references people (person_id)
      destination key (person_id_2) references people (person_id)
      label connection
      properties all columns
  );

The LABEL keyword allows us to associate a meaningful name, which we can reference in queries. Notice the edge table has a SOURCE KEY and DESTINATION KEY describing the connection between the vertices.

We create a property graph for the sales of products to people. The vertex tables are PEOPLE and PRODUCTS, and the edge table is SALES.

--drop property graph sales_pg;

create property graph sales_pg
  vertex tables (
    people
      key (person_id)
      label person
      properties all columns,
    products
      key (product_id)
      label product
      properties all columns
  )
  edge tables (
    sales
      key (sale_id)
      source key (person_id) references people (person_id)
      destination key (product_id) references products (product_id)
      label sale
      properties all columns
  );

We display the current property graphs using the USER_PROPERTY_GRAPHS view.

column graph_name format a20
column graph_mode format a10
column allows_mixed_types format a17
column inmemory format a8

select * from user_property_graphs;

GRAPH_NAME           GRAPH_MODE ALLOWS_MIXED_TYPE INMEMORY
-------------------- ---------- ----------------- --------
CONNECTIONS_PG       TRUSTED    NO                NO
SALES_PG             TRUSTED    NO                NO

SQL>

SQL/PGQ (GRAPH_TABLE)

With the property graphs in place we can use the GRAPH_TABLE operator to query the property graphs using SQL/PGQ. We display people who are connected to each other.

select person1, person2
from   graph_table (connections_pg
         match
         (p1 is person) -[c is connection]-> (p2 is person)
         columns (p1.name as person1,
                  p2.name as person2)
       )
order by 1;

PERSON1         PERSON2
--------------- ---------------
Bruce Banner    Wonder Woman
Bruce Banner    Peter Parker
Bruce Banner    Jean Grey
Clark Kent      Wonder Woman
Jean Grey       Clark Kent
Jean Grey       Wonder Woman
Jean Grey       Bruce Banner
Peter Parker    Clark Kent
Wonder Woman    Clark Kent
Wonder Woman    Peter Parker
Wonder Woman    Jean Grey

11 rows selected.

SQL>

Let's break this query down.

  • We call GRAPH_TABLE passing the property graph.
  • We define the graph element pattern in the MATCH clause using the vertex and edge labels to show our relationship.
  • We define the COLUMNS that will be available in our select list.

The MATCH clause elements can include a WHERE clause to limit the rows, and of course we can include a WHERE clause in the main body of the query.

select person1, person2
from   graph_table (connections_pg
         match
         (p1 is person where p1.name = 'Jean Grey') -[c is connection]-> (p2 is person)
         columns (p1.name as person1,
                  p2.name as person2)
       )
order by 1;

PERSON1         PERSON2
--------------- ---------------
Jean Grey       Wonder Woman
Jean Grey       Clark Kent
Jean Grey       Bruce Banner

SQL>

select person1, person2
from   graph_table (connections_pg
         match
         (p1 is person where p1.name = 'Jean Grey') -[c is connection]-> (p2 is person)
         columns (p1.name as person1,
                  p2.name as person2)
       )
where  person2 = 'Clark Kent'
order by 1;

PERSON1               PERSON2
--------------------- ---------------------
Jean Grey             Clark Kent

SQL>

To display the graph using Oracle Graph Visualization we need to include the VERTEX_ID and EDGE_ID functions to identify the IDs of the vertices and edges. These produce a fully qualified description of the element. In the following example we return a single row containing these values.

column id_p1 format a20
column id_c format a20
column id_p2 format a20
set linesize 200 pagesize 100

select person1, person2, id_p1, id_c, id_p2
from   graph_table (connections_pg
         match
         (p1 is person) -[c is connection]-> (p2 is person)
         columns (p1.name as person1,
                  p2.name as person2,
                  vertex_id(p1) AS id_p1,
                  edge_id(c) AS id_c,
                  vertex_id(p2) AS id_p2)
       )
where  rownum = 1
order by 1;

PERSON1         PERSON2         ID_P1                ID_C                 ID_P2
--------------- --------------- -------------------- -------------------- --------------------
Wonder Woman    Peter Parker    {"GRAPH_OWNER":"TEST {"GRAPH_OWNER":"TEST {"GRAPH_OWNER":"TEST
                                USER1","GRAPH_NAME": USER1","GRAPH_NAME": USER1","GRAPH_NAME":
                                "CONNECTIONS_PG","EL "CONNECTIONS_PG","EL "CONNECTIONS_PG","EL
                                EM_TABLE":"PEOPLE"," EM_TABLE":"CONNECTIO EM_TABLE":"PEOPLE","
                                KEY_VALUE":{"PERSON_ NS","KEY_VALUE":{"CO KEY_VALUE":{"PERSON_
                                ID":1}}              NNECTION_ID":1}}     ID":2}}


SQL>

This example uses our second property graph, and allows us to see people who are related by purchasing the same items.

select person1, product, person2
from   graph_table (sales_pg
         match
         (p1 is person) -[s1 is sale]-> (pr is product) <-[s2 is sale]- (p2 is person)
         columns (p1.name as person1,
                  p2.name as person2,
                  pr.name as product)
       )
where  person1 != person2
order by 1;

PERSON1         PRODUCT    PERSON2
--------------- ---------- ---------------
Bruce Banner    lemon      Jean Grey
Bruce Banner    banana     Jean Grey
Bruce Banner    banana     Peter Parker
Bruce Banner    lemon      Clark Kent
Bruce Banner    banana     Wonder Woman
Bruce Banner    lemon      Wonder Woman
Clark Kent      lemon      Bruce Banner
Clark Kent      lemon      Jean Grey
Clark Kent      lemon      Wonder Woman
Jean Grey       banana     Peter Parker
Jean Grey       apple      Peter Parker
Jean Grey       lemon      Wonder Woman
Jean Grey       banana     Wonder Woman
Jean Grey       lemon      Bruce Banner
Jean Grey       banana     Bruce Banner
Jean Grey       lemon      Clark Kent
Peter Parker    banana     Jean Grey
Peter Parker    banana     Wonder Woman
Peter Parker    apple      Jean Grey
Peter Parker    banana     Bruce Banner
Wonder Woman    banana     Jean Grey
Wonder Woman    lemon      Jean Grey
Wonder Woman    lemon      Bruce Banner
Wonder Woman    lemon      Clark Kent
Wonder Woman    banana     Peter Parker
Wonder Woman    banana     Bruce Banner

26 rows selected.

SQL>

As before, we need to include the IDs of the vertices and edges if we want to visualize the graph.

select person1, product, person2, id_p1, id_s1, id_pr, id_s2, id_p2
from   graph_table (sales_pg
         match
         (p1 is person) -[s1 is sale]-> (pr is product) <-[s2 is sale]- (p2 is person)
         columns (p1.name as person1,
                  p2.name as person2,
                  pr.name as product,
                  vertex_id(p1) AS id_p1,
                  edge_id(s1) AS id_s1,
                  vertex_id(pr) AS id_pr,
                  edge_id(s2) AS id_s2,
                  vertex_id(p2) AS id_p2)
       )
where  person1 != person2
order by 1;

Oracle Graph Visualization


There are several ways to deploy the Oracle Graph Visualization tool. In this case we deployed it using Tomcat.

Click on the "Oracle Graph Client" link here, and select the architecture you require. In this case we downloaded "Oracle Graph Webapps 23.2.0 for Linux x86-64". The SQL/PGQ support was added to the visualizer in version 23.1, so make sure you are not using an older version.

On the Tomcat server we did the following.

cd /tmp
unzip -oq V1033696-01-linux-webapps.zip
unzip -oq oracle-graph-webapps-23.1.0.zip
cp graphviz-23.2.0-tomcat.war $CATALINA_BASE/webapps/

We were then able to access the visualizer on the following URL.

https://localhost:8443/graphviz-23.2.0-tomcat

We log in with the following details.

Username: testuser1
Password: testuser1
Advanced > Database: jdbc:oracle:thin:@//localhost:1521/freepdb1

Oracle Graph Visualization creates tables, materialized views and views when you run a query. They all have the prefix "$GRAPHVIZ_TEMP_TABLE_". Make sure the user you are connecting to has the privilege to do that.

We click on the "SQL/PGQ" tab and place the following query into the visualizer. Notice it is missing the trailing ";". This query displays the connections between people.

select id_p1, id_c, id_p2
from   graph_table (connections_pg
         match
         (p1 is person) -[c is connection]-> (p2 is person)
         columns (vertex_id(p1) AS id_p1,
                  edge_id(c) AS id_c,
                  vertex_id(p2) AS id_p2)
       )

When we run it we get the following output.

[Image: graph visualization of the connections query results]

We click the settings (cog) button, click the "Customizations" tab, and set the "Vertex Label" to the NAME column.

[Image: the Customizations tab with "Vertex Label" set to the NAME column]

Once we click "OK" we see the graph with the labels.

[Image: the connections graph displayed with name labels]

We use the following query against our second property graph to show the people who have purchased the same products.

select id_p1, id_s1, id_pr, id_s2, id_p2
from   graph_table (sales_pg
         match
         (p1 is person) -[s1 is sale]-> (pr is product) <-[s2 is sale]- (p2 is person)
         columns (p1.name as person1,
                  p2.name as person2,
                  vertex_id(p1) AS id_p1,
                  edge_id(s1) AS id_s1,
                  vertex_id(pr) AS id_pr,
                  edge_id(s2) AS id_s2,
                  vertex_id(p2) AS id_p2)
       )
where  person1 != person2

We run the query. In the "Customizations" tab we set the "Vertex Label" to NAME and the "Edge Label" to QUANTITY.

[Image: the sales graph with NAME vertex labels and QUANTITY edge labels]

Not only can we see the relationship between the people and the products, but we can also see the quantity of the products purchased on the edge labels.

Query Transformation


Let's see what happens behind the scenes when we use SQL/PGQ.

First we flush the shared pool and identify the trace file that will be created for our new session.

conn sys/SysPassword1@//localhost:1521/freepdb1 as sysdba

alter system flush shared_pool;

conn testuser1/testuser1@//localhost:1521/freepdb1

set linesize 100
column value format a65

select value
from   v$diag_info
where  name = 'Default Trace File';

VALUE
-----------------------------------------------------------------
/opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_14020.trc

SQL>

Now we do a 10053 trace of the statement.

alter session set events '10053 trace name context forever';

select person1, product, person2
from   graph_table (sales_pg
         match
         (p1 is person) -[s1 is sale]-> (pr is product) <-[s2 is sale]- (p2 is person)
         columns (p1.name as person1,
                  p2.name as person2,
                  pr.name as product)
       )
where  person1 != person2
order by 1;

alter session set events '10053 trace name context off';

We check the resulting trace file, searching for the section beginning with "Final query after transformations", and we see the following statement.

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "P1"."NAME" "PERSON1",
       "PR"."NAME" "PRODUCT",
       "P2"."NAME" "PERSON2"
FROM   "TESTUSER1"."PEOPLE" "P1",
       "TESTUSER1"."SALES" "S1",
       "TESTUSER1"."PRODUCTS" "PR",
       "TESTUSER1"."SALES" "S2",
       "TESTUSER1"."PEOPLE" "P2"
WHERE  "P1"."NAME"<>"P2"."NAME"
AND    "P1"."PERSON_ID"="S1"."PERSON_ID"
AND    "PR"."PRODUCT_ID"="S1"."PRODUCT_ID"
AND    "P2"."PERSON_ID"="S2"."PERSON_ID"
AND    "PR"."PRODUCT_ID"="S2"."PRODUCT_ID"
ORDER BY "P1"."NAME"

The statement has been transformed to convert the SQL/PGQ into regular joins. In this example there is not a great deal of difference in complexity between the SQL/PGQ and the transformed SQL statement, but in some scenarios the SQL/PGQ can massively simplify the joins and unions necessary to return the data.

Since the SQL/PGQ is converted to regular SQL, it's possible to display the execution plan in the normal way.

select person1, product, person2
from   graph_table (sales_pg
         match
         (p1 is person) -[s1 is sale]-> (pr is product) <-[s2 is sale]- (p2 is person)
         columns (p1.name as person1,
                  p2.name as person2,
                  pr.name as product)
       )
where  person1 != person2
order by 1;

select * from table(dbms_xplan.display_cursor(format=>'ALL'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID  2j4yv6j0gks9y, child number 0
-------------------------------------
select person1, product, person2 from   graph_table (sales_pg
match          (p1 is person) -[s1 is sale]-> (pr is product) <-[s2 is
sale]- (p2 is person)          columns (p1.name as person1,
      p2.name as person2,                   pr.name as product)
) where  person1 != person2 order by 1

Plan hash value: 3808001794

--------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                     |       |       |    14 (100)|          |
|   1 |  SORT ORDER BY                   |                     |    24 |  1224 |    14  (15)| 00:00:01 |
|*  2 |   HASH JOIN                      |                     |    24 |  1224 |    13   (8)| 00:00:01 |
|*  3 |    HASH JOIN                     |                     |    30 |  1080 |    10  (10)| 00:00:01 |
|*  4 |     HASH JOIN                    |                     |    11 |   330 |     8  (13)| 00:00:01 |
|   5 |      MERGE JOIN                  |                     |    11 |   165 |     5  (20)| 00:00:01 |
|   6 |       TABLE ACCESS BY INDEX ROWID| PRODUCTS            |     4 |    36 |     2   (0)| 00:00:01 |
|   7 |        INDEX FULL SCAN           | SYS_C0012556        |     4 |       |     1   (0)| 00:00:01 |
|*  8 |       SORT JOIN                  |                     |    11 |    66 |     3  (34)| 00:00:01 |
|   9 |        VIEW                      | index$_join$_003    |    11 |    66 |     2   (0)| 00:00:01 |
|* 10 |         HASH JOIN                |                     |       |       |            |          |
|  11 |          INDEX FAST FULL SCAN    | SALES_PERSON_ID_IDX |    11 |    66 |     1   (0)| 00:00:01 |
|  12 |          INDEX FAST FULL SCAN    | SALES_PRODUCT_IDX   |    11 |    66 |     1   (0)| 00:00:01 |
|  13 |      TABLE ACCESS FULL           | PEOPLE              |     5 |    75 |     3   (0)| 00:00:01 |
|  14 |     VIEW                         | index$_join$_005    |    11 |    66 |     2   (0)| 00:00:01 |
|* 15 |      HASH JOIN                   |                     |       |       |            |          |
|  16 |       INDEX FAST FULL SCAN       | SALES_PERSON_ID_IDX |    11 |    66 |     1   (0)| 00:00:01 |
|  17 |       INDEX FAST FULL SCAN       | SALES_PRODUCT_IDX   |    11 |    66 |     1   (0)| 00:00:01 |
|  18 |    TABLE ACCESS FULL             | PEOPLE              |     5 |    75 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$E6D2A9F4
   6 - SEL$E6D2A9F4 / "PR"@"SEL$88741CE7"
   7 - SEL$E6D2A9F4 / "PR"@"SEL$88741CE7"
   9 - SEL$2237A74B / "S1"@"SEL$88741CE7"
  10 - SEL$2237A74B
  11 - SEL$2237A74B / "indexjoin$_alias$_001"@"SEL$2237A74B"
  12 - SEL$2237A74B / "indexjoin$_alias$_002"@"SEL$2237A74B"
  13 - SEL$E6D2A9F4 / "P1"@"SEL$88741CE7"
  14 - SEL$37BF1F62 / "S2"@"SEL$88741CE7"
  15 - SEL$37BF1F62
  16 - SEL$37BF1F62 / "indexjoin$_alias$_001"@"SEL$37BF1F62"
  17 - SEL$37BF1F62 / "indexjoin$_alias$_002"@"SEL$37BF1F62"
  18 - SEL$E6D2A9F4 / "P2"@"SEL$88741CE7"

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("P2"."PERSON_ID"="S2"."PERSON_ID")
       filter("P1"."NAME"<>"P2"."NAME")
   3 - access("PR"."PRODUCT_ID"="S2"."PRODUCT_ID")
   4 - access("P1"."PERSON_ID"="S1"."PERSON_ID")
   8 - access("PR"."PRODUCT_ID"="S1"."PRODUCT_ID")
       filter("PR"."PRODUCT_ID"="S1"."PRODUCT_ID")
  10 - access(ROWID=ROWID)
  15 - access(ROWID=ROWID)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - (#keys=1) "P1"."NAME"[VARCHAR2,15], "PR"."NAME"[VARCHAR2,10], "P2"."NAME"[VARCHAR2,15]
   2 - (#keys=1) "P1"."NAME"[VARCHAR2,15], "PR"."NAME"[VARCHAR2,10], "P1"."NAME"[VARCHAR2,15],
       "P2"."NAME"[VARCHAR2,15], "P2"."NAME"[VARCHAR2,15]
   3 - (#keys=1) "P1"."NAME"[VARCHAR2,15], "PR"."NAME"[VARCHAR2,10], "P1"."NAME"[VARCHAR2,15],
       "S2"."PERSON_ID"[NUMBER,22]
   4 - (#keys=1) "PR"."PRODUCT_ID"[NUMBER,22], "PR"."NAME"[VARCHAR2,10],
       "P1"."NAME"[VARCHAR2,15], "P1"."NAME"[VARCHAR2,15]
   5 - "PR"."PRODUCT_ID"[NUMBER,22], "PR"."NAME"[VARCHAR2,10], "S1"."PERSON_ID"[NUMBER,22]
   6 - "PR"."PRODUCT_ID"[NUMBER,22], "PR"."NAME"[VARCHAR2,10]
   7 - "PR".ROWID[ROWID,10], "PR"."PRODUCT_ID"[NUMBER,22]
   8 - (#keys=1) "S1"."PRODUCT_ID"[NUMBER,22], "S1"."PERSON_ID"[NUMBER,22]
   9 - "S1"."PRODUCT_ID"[NUMBER,22], "S1"."PERSON_ID"[NUMBER,22]
  10 - (#keys=1) "S1"."PERSON_ID"[NUMBER,22], "S1"."PRODUCT_ID"[NUMBER,22]
  11 - ROWID[ROWID,10], "S1"."PERSON_ID"[NUMBER,22]
  12 - ROWID[ROWID,10], "S1"."PRODUCT_ID"[NUMBER,22]
  13 - "P1"."PERSON_ID"[NUMBER,22], "P1"."NAME"[VARCHAR2,15]
  14 - "S2"."PRODUCT_ID"[NUMBER,22], "S2"."PERSON_ID"[NUMBER,22]
  15 - (#keys=1) "S2"."PERSON_ID"[NUMBER,22], "S2"."PRODUCT_ID"[NUMBER,22]
  16 - ROWID[ROWID,10], "S2"."PERSON_ID"[NUMBER,22]
  17 - ROWID[ROWID,10], "S2"."PRODUCT_ID"[NUMBER,22]
  18 - "P2"."PERSON_ID"[NUMBER,22], "P2"."NAME"[VARCHAR2,15]

Note
-----
   - this is an adaptive plan

92 rows selected.

SQL>

JSON Support


In the previous examples we used PROPERTIES ALL COLUMNS to keep things simple. We could have limited the properties to specific table columns using a comma-separated list, as shown below.

--drop property graph connections_pg;

create property graph connections_pg
  vertex tables (
    people
      key (person_id)
      label person
      properties (name)
  )
  edge tables (
    connections
      key (connection_id)
      source key (person_id_1) references people (person_id)
      destination key (person_id_2) references people (person_id)
      label connection
      properties (person_id_1, person_id_2)
  );

Where we have columns containing JSON data, we can drill down into the JSON data using dot notation, or SQL/JSON function calls to define properties.

We add a JSON column to the PEOPLE table and populate it with some JSON data.

alter table people add (json_data json);

update people
set    json_data = json('{"gender":"female", "universe":"DC"}')
where  person_id = 1;

update people
set    json_data = json('{"gender":"male", "universe":"Marvel"}')
where  person_id = 2;

update people
set    json_data = json('{"gender":"female", "universe":"DC"}')
where  person_id = 3;

update people
set    json_data = json('{"gender":"male", "universe":"Marvel"}')
where  person_id = 4;

update people
set    json_data = json('{"gender":"male", "universe":"Marvel"}')
where  person_id = 5;

commit;

We recreate the property graph including the two new properties using an example of dot notation and a JSON_VALUE call.

drop property graph connections_pg;

create property graph connections_pg
  vertex tables (
    people
      key (person_id)
      label person
      properties (name,
                  people.json_data.gender.string() as gender,
                  json_value(people.json_data, '$.universe') as universe)
  )
  edge tables (
    connections
      key (connection_id)
      source key (person_id_1) references people (person_id)
      destination key (person_id_2) references people (person_id)
      label connection
      properties (person_id_1, person_id_2)
  );

We project the new properties as columns and add them into our select list.

column gender1 format a10
column gender2 format a10
column universe1 format a10
column universe2 format a10

select person1, gender1, universe1, person2, gender2, universe2
from   graph_table (connections_pg
         match
         (p1 is person) -[c is connection]-> (p2 is person)
         columns (p1.name as person1,
                  p1.gender as gender1,
                  p1.universe as universe1,
                  p2.name as person2,
                  p2.gender as gender2,
                  p2.universe as universe2)
       )
order by 1;

PERSON1         GENDER1    UNIVERSE1  PERSON2         GENDER2    UNIVERSE2
--------------- ---------- ---------- --------------- ---------- ----------
Bruce Banner    male       Marvel     Peter Parker    male       Marvel
Bruce Banner    male       Marvel     Wonder Woman    female     DC
Bruce Banner    male       Marvel     Jean Grey       female     DC
Clark Kent      male       Marvel     Wonder Woman    female     DC
Jean Grey       female     DC         Wonder Woman    female     DC
Jean Grey       female     DC         Clark Kent      male       Marvel
Jean Grey       female     DC         Bruce Banner    male       Marvel
Peter Parker    male       Marvel     Clark Kent      male       Marvel
Wonder Woman    female     DC         Clark Kent      male       Marvel
Wonder Woman    female     DC         Jean Grey       female     DC
Wonder Woman    female     DC         Peter Parker    male       Marvel

11 rows selected.

SQL>

Saturday, November 18, 2023

Language bindings for Autonomous Database: An overview for Developers

Language bindings for Autonomous Database: An overview for Developers

As databases evolve to be more autonomous and cloud-friendly, developers need efficient and reliable methods to connect and interact with them. This post will provide an overview of how to connect to and query Oracle's Autonomous Database (ADB) from some popular programming languages, and of some of the differences in each implementation.

Prerequisites


Before diving into the database connections from specific languages, you obviously need to have the language-specific support for each available Oracle database binding. Typically, this means having the language and related tools installed on your system (interpreter/compiler, package manager, etc). For some driver connections, you will also need the Oracle libraries, which are included in Oracle DB installations or can be obtained via the Oracle Instant Client.

Oracle Instant Client enables development and deployment of applications that connect to Oracle Database, either on-premise or in the Cloud. The Instant Client libraries provide the necessary network connectivity and advanced data features to make full use of Oracle Database. The libraries are used by the Oracle APIs of popular languages and environments including Python, Node.js, Go, PHP and Ruby, as well as providing access for Oracle Call Interface (OCI), Oracle C++ Call Interface (OCCI), JDBC OCI, ODBC and Pro*C applications.

Language bindings for Oracle often rely on the Oracle Instant Client libraries to facilitate connections to Oracle databases, including ADB, either via Oracle Call Interface (OCI) or Oracle Database Programming Interface for C (ODPI-C) (more on these below).

Note: if you're a Mac developer, at the time of this writing there is no native version of the Oracle Instant Client libraries for Apple silicon (ARM-based M1/M2) MacBooks. Developers with these types of machines will need to use Rosetta 2 to bridge this gap (until ARM-based libraries are released).

A wide range of Oracle database bindings


Once you've set up your Autonomous Database on Oracle Cloud Infrastructure (OCI), the next step is establishing a connection from your application. The landscape of drivers and bindings available for this purpose is as diverse as it is robust:

[Image: Oracle database language bindings]

The array of options spans from open-source initiatives to Oracle-supported drivers. These various connectors offer unique advantages and cater to different scenarios. Some are community-driven efforts, reflecting the innovation and collaborative spirit of open-source culture. Others come with the endorsement and support of Oracle, assuring compatibility and often comprehensive documentation.

A critical distinction among these bindings is the division between "Thin" and "Thick" drivers. Thin drivers are designed for streamlined operation; they establish connectivity without the need for Oracle Client libraries, facilitating a lightweight and agile setup. This can be particularly advantageous when looking to minimize the application's footprint or when working within environments where deploying Oracle-specific libraries is impractical.

In contrast, drivers that require Oracle libraries (Thick drivers) tap into a deeper level of Oracle's functionality, leveraging OCI or ODPI-C for their connectivity layers. Each of these layers presents its own complexity and capabilities:

  • OCI (Oracle Call Interface): This is a traditional and feature-rich API that allows for a fine-grained control over database interactions. It is often the go-to choice for applications that need to exploit the full spectrum of Oracle's features and where performance is a critical concern.
  • ODPI-C (Oracle Database Programming Interface for C): Acting as a high-level abstraction over OCI, ODPI-C offers a simpler, more streamlined API that still provides comprehensive Oracle functionality but with reduced complexity. It is suitable for applications that require a balance between ease of use for the developer and access to advanced database features.

Two Rust bindings: rust-oracle and Sibyl


For Rust developers, rust-oracle and Sibyl are two prominent open source "thick" driver bindings. They both depend on the Oracle libraries mentioned above, with rust-oracle being OCI-based and Sibyl utilizing ODPI-C. Both provide robust means to connect to ADB, but they differ mostly in terms of implementation complexity and feature set (for example, the Sibyl library includes native support for async calls).

Let's take a look at a basic code snippet to connect and pass a query to fetch the server version in rust-oracle:

// Rust code using rust-oracle to connect to ADB and execute a SQL query
use oracle::{Connection, Error};
use std::env;

fn main() -> Result<(), Error> {
    // Set the TNS_ADMIN environment variable
    env::set_var("TNS_ADMIN", "/path/to/your/wallet_directory");

    let conn = Connection::connect("user", "password", "//adb.instance.url")?;
    println!("Connected to database successfully.");

    let sql = "SELECT * FROM v$version WHERE banner LIKE 'Oracle%'";
    let rows = conn.query(sql, &[])?;

    for row_result in rows {
        let row = row_result?;
        let version: String = row.get("banner")?;
        println!("Oracle DB Version: {}", version);
    }
    Ok(())
}

And now basically the same code with Sibyl:

// Rust code using Sibyl to connect to ADB and execute a SQL query
use std::env;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Set the TNS_ADMIN environment variable
    env::set_var("TNS_ADMIN", "/path/to/your/wallet_directory");

    let oracle = sibyl::env()?;
    let session = oracle.connect("//adb.instance.url", "user", "password")?;
    println!("Connected to database successfully.");

    let stmt = session.prepare("SELECT * FROM v$version WHERE banner LIKE 'Oracle%'")?;
    let rows = stmt.query(())?;

    while let Some(row) = rows.next()? {
        let version: String = row.get(0)?;
        println!("Oracle DB Version: {}", version);
    }

    Ok(())
}

As you can see, the code is very similar in both libraries, and some of the differences become more apparent when you get into more advanced code like DML handling, async support, etc.

For these snippets to work, the Oracle libraries have to be visible on the dynamic library path of your OS. And remember, if you're on Apple silicon you'll need Rosetta 2 as mentioned before, and a build that targets Intel:

cargo build --target x86_64-apple-darwin

One Go binding: godror


For Go developers, godror is a comprehensive package built on a multi-level architecture (Go, CGO, ODPI-C and then OCI).

Here's a similar code snippet connecting to ADB and passing a query:

// Go code using godror to connect to ADB and execute a SQL query
package main

import (
    "database/sql"
    "fmt"
    "os"

    _ "github.com/godror/godror"
)

func main() {
    // Set the path to the directory containing your Oracle wallet
    os.Setenv("TNS_ADMIN", "/path/to/your/wallet_directory")

    db, err := sql.Open("godror", "user/password@adb.instance.url")
    if err != nil {
        fmt.Println("Error connecting to the database: ", err)
        return
    }
    defer db.Close()

    rows, err := db.Query("SELECT * FROM v$version WHERE banner LIKE 'Oracle%'")
    if err != nil {
        fmt.Println("Error performing query: ", err)
        return
    }
    defer rows.Close()

    for rows.Next() {
        var version string
        if err := rows.Scan(&version); err != nil {
            fmt.Println("Error reading rows: ", err)
            return
        }
        fmt.Println("Oracle DB Version: ", version)
    }
}

Although the library is powerful, according to some tests the CGO layer can introduce significant overhead, so you might want to carefully test the performance of your ADB-enabled, godror-based application.

One "thin" driver: node-oracledb


The node-oracledb binding is an open-source driver supported by Oracle. It operates in both "thick" and "thin" modes ("thin" by default). The "thin" implementation does not depend on the Oracle libraries mentioned above and is generally easier to set up. Let's take a look at a similar code snippet to connect to ADB and pass a query:

// Node.js code using node-oracledb "thin" driver to connect to ADB and execute a SQL query
const oracledb = require('oracledb');

// Set the path to the directory containing your Oracle wallet
process.env.TNS_ADMIN = '/path/to/your/wallet_directory';

async function run() {
    let connection;
    try {
        connection = await oracledb.getConnection({
            user: "user",
            password: "password",
            connectionString: "adb.instance.url"
        });
        console.log("Connected to database successfully.");

        const result = await connection.execute(
            "SELECT * FROM v$version WHERE banner LIKE 'Oracle%'"
        );
        console.log("Oracle DB Version: ", result.rows);
    } catch (err) {
        console.error(err);
    } finally {
        if (connection) {
            try {
                await connection.close();
            } catch (err) {
                console.error(err);
            }
        }
    }
}

run();

As you can see above, there is no distinction at the code level between "thin" mode and "thick" mode. The difference is at the configuration level, where you basically have to enable "thick" mode and provide the Oracle libraries path:

const oracledb = require('oracledb');

let clientOpts = {};
if (process.platform === 'win32') {
  // Windows
  // If you use backslashes in the libDir string, you will
  // need to double them.
  clientOpts = { libDir: 'C:\\oracle\\instantclient_19_19' };
} else if (process.platform === 'darwin' && process.arch === 'x64') {
  // macOS Intel
  clientOpts = { libDir: process.env.HOME + '/Downloads/instantclient_19_8' };
}
// else on other platforms like Linux the system library search path MUST always be
// set before Node.js is started, for example with ldconfig or LD_LIBRARY_PATH.

// enable node-oracledb Thick mode
oracledb.initOracleClient(clientOpts);

Note that by default the library runs in thin mode, so we don't need any local libraries to facilitate the connection (as the connection protocol is implemented in the driver itself).

Another "thin" driver: python-oracledb (formerly cx_Oracle)


The situation is the same for Python and the python-oracledb library: it works in "thin" mode by default and in "thick" mode with the proper configuration. Plus, the code is extremely simple:

# Python code using python-oracledb "thin" driver to connect to ADB and execute a SQL query
import oracledb
import os

# Assuming you have your Oracle wallet files in the directory specified in TNS_ADMIN environment variable
os.environ["TNS_ADMIN"] = "/path/to/your/wallet_directory"

# Construct the connection string. This typically includes the TNS name, which you get from your tnsnames.ora file.
connection_string = "user/password@adb_tns_name"

try:
    # Connect to the database
    with oracledb.connect(connection_string) as connection:
        # Create a cursor
        with connection.cursor() as cursor:
            # Execute the SQL query
            cursor.execute("SELECT * FROM v$version WHERE banner LIKE 'Oracle%'")
            
            # Fetch and print the results
            for row in cursor:
                print("Oracle DB Version:", row[0])

except oracledb.Error as e:
    print("Oracle-DB error:", e)

Note that you need to install the library first:

pip install oracledb

And to enable "thick" mode, after installing the Oracle Client libraries, you typically call init_oracle_client before connecting, optionally pointing it at the Instant Client directory:

# Enable python-oracledb thick mode
oracledb.init_oracle_client(lib_dir="/path/to/instantclient")

Driverless connections


Alternatively, developers can bypass traditional drivers altogether using Oracle REST Data Services (ORDS) to create RESTful services directly against ADB. This driverless approach simplifies connectivity. Another similar approach is combining REST with standard drivers like JDBC (there are both "thin" and "thick" versions of JDBC). Even if not strictly "driverless", the driver is transparent to the developer, as a higher-level framework is used for operations on the database. For instance, you could create APIs with Spring Boot that communicate with ADB over REST, making it accessible from any HTTP-capable environment.
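
As a quick illustration, AutoREST-enabling a schema and a table from the database side is done with the ORDS PL/SQL package; a minimal sketch (the T1 table is illustrative) looks like this:

begin
  -- Expose the current schema over REST.
  ords.enable_schema(p_enabled => true);

  -- Expose a single table via AutoREST.
  ords.enable_object(p_enabled     => true,
                     p_object      => 'T1',
                     p_object_type => 'TABLE');
  commit;
end;
/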

Source: oracle.com