Saturday, August 10, 2024

Introducing Oracle Exadata Database Service on Exascale Infrastructure

We are pleased to announce the general availability of Oracle Exadata Database Service on Exascale Infrastructure. Exadata Database Service provides customers around the world with extreme performance, reliability, availability, and security benefits they depend on for their Oracle Database workloads. With the introduction of highly scalable and affordable Exadata Exascale infrastructure, these benefits are now available in Oracle Cloud Infrastructure (OCI) for any size workload and every Oracle Database customer.
 

Oracle Exadata Exascale: World’s Only Intelligent Data Architecture for Cloud


Exadata Exascale is Oracle’s revolutionary multitenant architecture that combines the intelligence of Exadata with all the benefits of modern clouds. This loosely-coupled architecture incorporates the Exascale intelligent storage cloud and a pool of Exascale database-optimized compute to provide hyper-elasticity while meeting even the most demanding workload requirements.

Exascale delivers the best of Exadata and the best of cloud

When running Exadata Database Service on Exascale Infrastructure, you have access to its intelligent capabilities across your application portfolio. Database-optimized intelligent infrastructure runs AI, analytics, mission-critical OLTP, and developer workloads faster and more efficiently, reducing overall IT costs. Some of these capabilities include:

• Intelligent AI: The combination of AI Smart Scan with Exadata System Software 24ai offloads key AI Vector Search operations to the Exascale intelligent storage cloud, enabling massive amounts of vector data to be searched up to 30X faster.

• Intelligent analytics: The combination of intelligent columnarization and Smart Scan makes hundreds or thousands of processing cores in the Exascale intelligent storage cloud available to process any SQL query.

• Intelligent OLTP: Exascale’s intelligent architecture automatically tiers data between DRAM, flash, and disk, enabling low-latency, high-throughput Remote Direct Memory Access (RDMA) to frequently accessed data. It delivers the performance of DRAM, the IOPS of flash, and the capacity of disk.

• Database-aware intelligent clones: The Exascale intelligent storage cloud uses redirect-on-write snapshot technology to instantly create space-efficient database clones from read-write sources.

Extreme Performance for Workloads at Any Scale


Until now, Exadata Database Service ran only on dedicated infrastructure allocated to a single tenant. While this approach is instrumental in providing extreme performance for demanding workloads, isolation for every customer, and efficient database consolidation, it requires a higher minimum investment for Exadata adoption.

Starting today, you can enjoy Exadata Database Service benefits at a lower entry cost for smaller, but no less critical, databases. Exadata Database Service on Exascale Infrastructure bridges the gap between dedicated Exadata infrastructure and virtual machines on shared commodity infrastructure. It extends Exadata’s advantages to every organization in every industry, regardless of size or workload.

You can start small using virtual machine (VM) cluster configurations with as little as 8 ECPUs and 22 GB of memory per VM, and 300 GB of intelligent database storage. You can easily scale up the number of ECPUs and number of VMs in the cluster, and then shrink them when you don’t need them. To ensure ECPU capacity is available when you need it, you can reserve ECPUs at a nominal cost.


The same powerful automation available with Exadata Database Service on Dedicated Infrastructure is used to manage the VM cluster and databases, but with the physical compute and storage abstracted from the service. You just deploy the VM cluster and databases without concern for the Oracle-managed infrastructure on which they are running. Your databases are isolated in VM clusters that are fully under your control. VM images are stored on Exascale block storage, eliminating the size limitations of local drives and facilitating migration in the event of a failure.
 

Benefits of Oracle Exadata Database Service on Exascale Infrastructure


Exadata Database Service benefits are now even more affordable

• Powerful: Exascale infrastructure inherits all the capabilities of Exadata that deliver extreme performance, reliability, availability, and security. AI, analytics, and mission-critical OLTP workloads are accelerated by Exascale’s intelligent data architecture for cloud.

• Extreme low cost: With Exascale infrastructure, you only pay for the compute and storage resources used by your database, starting from a highly affordable minimum size. There’s no additional charge for IOPS, making costs predictable.

• Scalable pooled resources: Exascale infrastructure leverages pools of shared intelligent storage and compute, allowing databases to easily scale without concern for downtime, server-based size limitations, or disruptive migrations.

• Agile development: Exascale infrastructure lets you create fast, space-efficient database thin clones with native Exadata performance, lowering storage costs and enhancing developer agility.


Application Modernization with Oracle Database 23ai


Exadata Database Service on Exascale Infrastructure works exclusively with Oracle Database 23ai, which not only extends extreme performance to AI Vector Search, but also includes many developer productivity-enhancing features, such as JSON Relational Duality, that dramatically simplify building and modernizing your applications.

Key Takeaways


Exadata Database Service on Exascale Infrastructure is a new way for any organization to gain the benefits of using Exadata in OCI. Built on Exadata Exascale, the world’s only intelligent data architecture for cloud, this new deployment option delivers Exadata intelligence with all the benefits of modern clouds.

Exadata Database Service on Exascale Infrastructure extends the performance, reliability, availability, and security benefits of Exadata to workloads at any scale. It provides extremely affordable consumption-based pricing and the elasticity to grow as needed.

Your development and testing teams will enhance their productivity with Exascale’s flexible cloning capabilities. By leveraging Oracle Database 23ai features such as JSON Relational Duality and Exadata capabilities like AI Smart Scan, they will be able to quickly modernize applications and integrate new powerful features.

Source: oracle.com

Saturday, August 3, 2024

Table Values Constructor in Oracle Database 23ai


The table values constructor allows us to define multiple rows using a single constructor for use in SQL statements.

◉ Setup


The following table is required to run the examples in this article.

drop table if exists t1;

create table t1 (
  id number,
  code varchar2(6),
  description varchar2(25),
  constraint t1_pk primary key (id)
);

◉ INSERT


The table values constructor allows us to insert multiple rows into a table in a single step.

insert into t1
values (1, 'ONE', 'Description for ONE'),
       (2, 'TWO', 'Description for TWO'),
       (3, 'THREE', 'Description for THREE');

commit;


select * from t1;

        ID CODE   DESCRIPTION
---------- ------ -------------------------
         1 ONE    Description for ONE
         2 TWO    Description for TWO
         3 THREE  Description for THREE

SQL>

That's a single network round trip without having to combine all the insert statements into a PL/SQL block.
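
For comparison, the pre-23ai way to get a single round trip was typically to wrap the individual insert statements in an anonymous PL/SQL block, similar to the following sketch.

begin
  -- One call to the database, but each row still needs its own insert statement.
  insert into t1 values (1, 'ONE', 'Description for ONE');
  insert into t1 values (2, 'TWO', 'Description for TWO');
  insert into t1 values (3, 'THREE', 'Description for THREE');
end;
/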

◉ SELECT


The same type of table values constructor can be used in the FROM clause of a SELECT statement. Notice we have to alias the column names so they are presented correctly.

select *
from   (values
          (4, 'FOUR', 'Description for FOUR'),
          (5, 'FIVE', 'Description for FIVE'),
          (6, 'SIX', 'Description for SIX')
       ) a (id, code, description);

        ID CODE DESCRIPTION
---------- ---- --------------------
         4 FOUR Description for FOUR
         5 FIVE Description for FIVE
         6 SIX  Description for SIX

SQL>

◉ WITH Clause


The table values constructor can be used as part of a WITH clause.

with a (id, code, description) AS (
  values (7, 'SEVEN', 'Description for SEVEN'),
         (8, 'EIGHT', 'Description for EIGHT'),
         (9, 'NINE', 'Description for NINE')
)
select * from a;

        ID CODE  DESCRIPTION
---------- ----- ---------------------
         7 SEVEN Description for SEVEN
         8 EIGHT Description for EIGHT
         9 NINE  Description for NINE

SQL>
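
The values constructor in a WITH clause can also be joined to a regular table. The following is just an illustrative example using the table and data from the setup above.

with new_codes (id, new_description) as (
  values (1, 'Updated description for ONE'),
         (2, 'Updated description for TWO')
)
select t1.id, t1.code, new_codes.new_description
from   t1
       join new_codes on new_codes.id = t1.id;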

◉ MERGE


The table values constructor can be used as the source data for a MERGE statement.

merge into t1 a
  using (values
          (4, 'FOUR', 'Description for FOUR'),
          (5, 'FIVE', 'Description for FIVE'),
          (6, 'SIX', 'Description for SIX')
        ) b (id, code, description)
  on (a.id = b.id)
  when matched then
    update set a.code        = b.code,
               a.description = b.description
  when not matched then
    insert (a.id, a.code, a.description)
    values (b.id, b.code, b.description);

3 rows merged.

SQL>

select * from t1;

        ID CODE   DESCRIPTION
---------- ------ -------------------------
         1 ONE    Description for ONE
         2 TWO    Description for TWO
         3 THREE  Description for THREE
         4 FOUR   Description for FOUR
         5 FIVE   Description for FIVE
         6 SIX    Description for SIX

6 rows selected.

SQL>

rollback;

Source: oracle-base.com

Wednesday, July 10, 2024

Transport Layer Security (TLS) Connections without a Client Wallet in Oracle Database 23ai


In previous releases, HTTPS callouts from the database required the use of a client wallet. From Oracle Database 23ai onward, we can use the operating system certificate store instead.

Operating systems usually have a certificate store containing the root certificates of popular certificate authorities. This allows the operating system to make trusted connections to sites using those root certificates. These certificate stores are kept up to date with operating system patches. Oracle 23ai allows us to make use of the operating system certificate store, rather than having to use a wallet, which removes the burden of certificate management from us.

Test a URL From the Operating System


The simplest way to test an HTTPS URL is to use the curl command from the operating system command line. If a valid HTTPS connection is possible, we should get a "200 OK" response. We can see it works fine.

$ curl -is https://oracle-base.com/sitemap.xml | grep HTTP
HTTP/1.1 200 OK
$
This means the required root certificate is present in the operating system certificate store.
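
If you want to see which certificate authority issued the site's certificate, one option (assuming the openssl client is installed) is something like the following.

$ echo | openssl s_client -connect oracle-base.com:443 2>/dev/null | openssl x509 -noout -issuer -subject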

Create an ACL


In order to do a database callout we need an Access Control List (ACL) for the host. The following example creates an ACL for the host "oracle-base.com" on port 443. The principal is TESTUSER1, which is the user we will make the call from.

conn sys/SysPassword1@//localhost:1521/freepdb1 as sysdba

begin
  dbms_network_acl_admin.append_host_ace (
    host       => 'oracle-base.com', 
    lower_port => 443,
    upper_port => 443,
    ace        => xs$ace_type(privilege_list => xs$name_list('http'),
                              principal_name => 'testuser1',
                              principal_type => xs_acl.ptype_db)); 
end;
/
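
As an optional sanity check, we can confirm the ACE is in place by querying the DBA_HOST_ACES view from a privileged account.

select host, lower_port, upper_port, principal, privilege
from   dba_host_aces
where  host = 'oracle-base.com';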

conn testuser1/testuser1@//localhost:1521/freepdb1

Test a Database Callout Without a Wallet


We create a procedure using the UTL_HTTP package to return the contents of a URL.

create or replace procedure show_html_from_url (
  p_url  in  varchar2
) as
  l_http_request   utl_http.req;
  l_http_response  utl_http.resp;
  l_text           varchar2(32767);
begin
  -- Make a http request and get the response.
  l_http_request  := utl_http.begin_request(p_url);

  l_http_response := utl_http.get_response(l_http_request);

  -- Loop through the response.
  begin
    loop
      utl_http.read_text(l_http_response, l_text, 32766);
      dbms_output.put_line (l_text);
    end loop;
  exception
    when utl_http.end_of_body then
      utl_http.end_response(l_http_response);
  end;
exception
  when others then
    utl_http.end_response(l_http_response);
    raise;
end show_html_from_url;
/

In previous releases attempting to use the procedure without opening a wallet would result in a certificate validation failure. That is not the case in Oracle 23ai, as we are using the operating system certificate store by default.

set serveroutput on long 1000000
exec show_html_from_url('https://oracle-base.com/sitemap.xml');

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>

... output removed ...

  <url>
</urlset>

PL/SQL procedure successfully completed.

SQL>

We could have achieved a similar result using HTTPURITYPE.

set serveroutput on long 1000000
select HTTPURITYPE.createuri('https://oracle-base.com/sitemap.xml').getclob();

HTTPURITYPE.CREATEURI('HTTPS://ORACLE-BASE.COM/SITEMAP.XML').GETCLOB()
--------------------------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>

... output removed ...

  <url>
</urlset>

SQL>

Test a Database Callout With a Wallet


We can still use a wallet containing the relevant root certificate, but once we reference the wallet it takes priority. If the wallet doesn't contain the correct root certificate, the call will fail. To demonstrate this we make a new wallet containing a self-signed certificate, rather than the root certificate of the URL we are calling.

mkdir -p /u01/wallet
orapki wallet create -wallet /u01/wallet -pwd WalletPasswd123 -auto_login

orapki wallet add -wallet /u01/wallet -pwd WalletPasswd123 \
  -dn "CN=`hostname`, OU=Example Department, O=Example Company, L=Birmingham, ST=West Midlands, C=GB" \
  -keysize 1024 -self_signed -validity 365
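
If you want to check the contents of the wallet at this point, the orapki wallet display command can be run against the same wallet location and password.

orapki wallet display -wallet /u01/wallet -pwd WalletPasswd123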

We connect to a new session, open the wallet, and attempt to run the procedure to make the callout. As expected, this results in a certificate validation failure.

conn testuser1/testuser1@//localhost:1521/freepdb1

set serveroutput on long 1000000
exec utl_http.set_wallet('file:/u01/wallet', null);

exec show_html_from_url('https://oracle-base.com/sitemap.xml');
*
ERROR at line 1:
ORA-29273: HTTP request failed
ORA-06512: at "TESTUSER1.SHOW_HTML_FROM_URL", line 26
ORA-29024: Certificate validation failure
ORA-06512: at "SYS.UTL_HTTP", line 380
ORA-06512: at "SYS.UTL_HTTP", line 1189
ORA-06512: at "TESTUSER1.SHOW_HTML_FROM_URL", line 9
ORA-06512: at line 1
Help: https://docs.oracle.com/error-help/db/ora-29273/

SQL>

We get the same error when we try to use HTTPURITYPE.

conn testuser1/testuser1@//localhost:1521/freepdb1

set serveroutput on
exec utl_http.set_wallet('file:/u01/wallet', null);

select HTTPURITYPE.createuri('https://oracle-base.com/sitemap.xml').getclob();

ERROR:
ORA-29273: HTTP request failed
ORA-06512: at "SYS.HTTPURITYPE", line 38
ORA-29024: Certificate validation failure
ORA-06512: at "SYS.UTL_HTTP", line 380
ORA-06512: at "SYS.UTL_HTTP", line 1189
ORA-06512: at "SYS.HTTPURITYPE", line 23
Help: https://docs.oracle.com/error-help/db/ora-29273/

SQL>
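
To fix this, the correct root certificate needs to be available as a file on the database server. Assuming the target site's certificate chains up to Let's Encrypt's ISRG Root X1 (an assumption; check the actual issuer for your site), one way to fetch it is shown below.

curl -o "/tmp/ISRG Root X1.crt" https://letsencrypt.org/certs/isrgrootx1.pem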

We add the correct certificate to the wallet.

orapki wallet add -wallet /u01/wallet -trusted_cert -cert "/tmp/ISRG Root X1.crt" -pwd WalletPasswd123

Now the previous tests work as expected.

conn testuser1/testuser1@//localhost:1521/freepdb1

set serveroutput on long 1000000
exec utl_http.set_wallet('file:/u01/wallet', null);

exec show_html_from_url('https://oracle-base.com/sitemap.xml');

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>

... output removed ...

  <url>
</urlset>

PL/SQL procedure successfully completed.

SQL>


select HTTPURITYPE.createuri('https://oracle-base.com/sitemap.xml').getclob();

HTTPURITYPE.CREATEURI('HTTPS://ORACLE-BASE.COM/SITEMAP.XML').GETCLOB()
--------------------------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>

... output removed ...

  <url>
</urlset>

SQL>

Mark a Self-Signed Certificate as Trusted


We are not always working with certificates from a certificate authority. Sometimes we may need to make callouts to services using self-signed certificates. Fortunately we can mark self-signed certificates as trusted on the OS of the database server. The following actions were performed as the "root" user.

We create a self-signed certificate, as demonstrated below.

mkdir -p ~/certs

openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout ~/certs/${HOSTNAME}.key \
  -x509 -days 3650 -out ~/certs/${HOSTNAME}.crt \
  -subj "/C=GB/ST=West Midlands/L=Birmingham/O=Example Company/OU=Devs/CN=Tim Hall/emailAddress=me@example.com"

We copy it to the "/etc/pki/ca-trust/source/anchors/" directory and run update-ca-trust to make the change take effect.

cp ~/certs/${HOSTNAME}.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust

We use the trust list command to check that the certificate is present.

# trust list --filter=ca-anchors | grep -B 4 -A 2 "Tim Hall"
pkcs11:id=%EE%3A%98%8F%93%C2%64%23%E0%42%7F%52%97%54%6D%87%7F%C0%2F%05;type=cert
    type: certificate
    label: Tim Hall
    trust: anchor
    category: authority
#

Source: oracle-base.com

Monday, July 8, 2024

XML, JSON and Oracle Text Search Index Enhancements in Oracle Database 23ai


In Oracle 23ai the CREATE SEARCH INDEX statement allows us to create search indexes on XML, JSON and text data, making the syntax consistent between them.

◉ XML Search Indexes


The XDB.XMLIndex index type was introduced in a previous release to allow us to index XML data.

drop table if exists t1 purge;

create table t1 (
  id    number,
  data  xmltype
);

create index t1_xmlindex_idx on t1 (data) indextype is xdb.xmlindex;

In Oracle 23ai we can create search indexes for XML using the CREATE SEARCH INDEX ... FOR XML syntax. The full syntax is available here.

drop table if exists xml_tab purge;

create table xml_tab (
  id    number,
  data  xmltype
)
xmltype column data
store as transportable binary xml;

create search index xml_tab_search_idx on xml_tab (data) for XML
parameters (
  'search_on text'
);

The SEARCH_ON parameter must be set for XML search indexes.

  • TEXT : Only text data is indexed for full-text search queries.
  • VALUE(data_types) : Enables range-search for the specified data types. One or more of BINARY_DOUBLE, NUMBER, TIMESTAMP, VARCHAR2 as a comma-separated list.
  • TEXT_VALUE(data_types) : A combination of support for full-text search queries and range-search for the specified data types.

The XMLTYPE column must be stored as transportable binary XML to build an XML search index. For other storage types we can create an Oracle Text search index instead.

drop table if exists xml_tab purge;

create table xml_tab (
  id    number,
  data  xmltype
);

create search index xml_tab_search_idx on xml_tab (data);

◉ JSON Search Indexes


The original syntax for JSON search indexes in Oracle 12.1 was rather ugly.

drop table if exists t1 purge;

create table t1 (
  id    number,
  data  clob,
  constraint t1_json_chk check (data is json)
);

create index t1_search_idx on t1 (data)
  indextype is ctxsys.context
  parameters ('section group ctxsys.json_section_group sync (on commit)');

In Oracle 12.2 the JSON search index syntax was simplified using the CREATE SEARCH INDEX ... FOR JSON syntax. The full syntax is available here.

drop index if exists t1_search_idx;

create search index t1_search_idx on t1 (data) for json;
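
Once the index exists, full-text style queries against the JSON data can make use of it. The following is just an illustrative example, assuming some JSON documents have been loaded into the table.

select id
from   t1
where  json_textcontains(data, '$', 'oracle');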

The SEARCH_ON parameter defaults to TEXT_VALUE for JSON search indexes.

  • NONE : JSON queries are not supported. Only the data guide index is maintained.
  • TEXT : Only text data is indexed for full-text search queries.
  • VALUE(data_types) : Enables range-search for the specified data types. One or more of NUMBER, TIMESTAMP, VARCHAR2 as a comma-separated list.
  • TEXT_VALUE(data_types) : A combination of support for full-text search queries and range-search for the specified data types.
  • TEXT_VALUE_STRING : A combination of support for full-text search queries and range-search for NUMBER, TIMESTAMP and VARCHAR2 data types.

◉ Oracle Text Search Indexes


Oracle has a long history of full-text indexing for text data. You can read about it here.

- Full Text Indexing using Oracle Text

In Oracle 23ai we can create full-text search indexes using the CREATE SEARCH INDEX syntax. The full syntax is available here.

drop table if exists text_tab purge;

create table text_tab (
  id    number,
  data  clob
);

create search index text_tab_search_idx on text_tab (data);
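
As with traditional Oracle Text indexes, we can then run CONTAINS queries against the indexed column. The following is just an illustrative example, assuming some rows have been inserted.

select id, score(1)
from   text_tab
where  contains(data, 'oracle', 1) > 0;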

Source: oracle-base.com

Saturday, July 6, 2024

Oracle Database 23ai Free RPM Installation On Oracle Linux 8 (OL8)


Oracle Database 23ai Free can be installed on Oracle Linux 8 using an RPM. This article describes the RPM installation of Oracle Database 23ai Free 64-bit on Oracle Linux 8 (OL8) 64-bit. The article is based on a server installation with a minimum of 2G of swap and SELinux set to permissive.

1. Hosts File


The "/etc/hosts" file must contain a fully qualified name for the server.

<IP-address>  <fully-qualified-machine-name>  <machine-name>

For example.

127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.56.107  ol8-23.localdomain  ol8-23

Set the correct hostname in the "/etc/hostname" file.

ol8-23.localdomain

2. Oracle Installation


Download the relevant RPM from the download page here.

- oracle-database-free-23ai-1.0-1.el8.x86_64.rpm

With the RPM file downloaded, we can install the Oracle prerequisites using the following command as the "root" user.

dnf install -y oracle-database-preinstall-23ai

We now install the 23ai software using the following command as the root user. This assumes the RPM file is in the "/tmp" directory.

dnf -y localinstall /tmp/oracle-database-free-23ai-1.0-1.el8.x86_64.rpm

The ORACLE_HOME for the software installation is "/opt/oracle/product/23ai/dbhomeFree".

3. Create Database


In addition to the software installation, the RPM creates a script that allows us to create a demo database called "FREE", with a pluggable database (PDB) called "FREEPDB1". In the following example we set the DB_PASSWORD environment variable so we can do a silent database creation using the script.

# export DB_PASSWORD=SysPassword1

# (echo "${DB_PASSWORD}"; echo "${DB_PASSWORD}";) | /etc/init.d/oracle-free-23ai configure
Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle Database FREE.
Enter SYS user password:
*************
Enter SYSTEM user password:
**************
Enter PDBADMIN User Password:
************
Prepare for db operation
7% complete
Copying database files
29% complete
Creating and starting Oracle instance
30% complete
33% complete
36% complete
39% complete
43% complete
Completing Database Creation
47% complete
49% complete
50% complete
Creating Pluggable Databases
54% complete
71% complete
Executing Post Configuration Actions
93% complete
Running Custom Scripts
100% complete
Database creation complete. For details check the logfiles at:
 /opt/oracle/cfgtoollogs/dbca/FREE.
Database Information:
Global Database Name:FREE
System Identifier(SID):FREE
Look at the log file "/opt/oracle/cfgtoollogs/dbca/FREE/FREE.log" for further details.

Connect to Oracle Database using one of the connect strings:
     Pluggable database: localhost.localdomain/FREEPDB1
     Multitenant container database: localhost.localdomain
[root@localhost yum.repos.d]#

We can of course create a database in the normal way, using the Database Configuration Assistant (DBCA). We don't have to use this script.

4. Using It


From the "oracle" user we can connect as follows.

export ORACLE_HOME=/opt/oracle/product/23ai/dbhomeFree
export PATH=$ORACLE_HOME/bin:$PATH

-- Root container
sqlplus sys/SysPassword1@//localhost:1521/free as sysdba

-- Pluggable database
sqlplus sys/SysPassword1@//localhost:1521/freepdb1 as sysdba
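
Once connected to the root container, a quick optional sanity check is to confirm the release and list the pluggable databases, for example.

select banner_full from v$version;

show pdbs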

We can stop and start the service from the root user with the following commands.

/etc/init.d/oracle-free-23ai stop
/etc/init.d/oracle-free-23ai start

5. Vagrant Example


If you want to see it in action, you might want to try one of these Vagrant builds.


Source: oracle-base.com

Friday, July 5, 2024

Oracle Database 23ai Free RPM Installation On Oracle Linux 9 (OL9)


Oracle Database 23ai Free can be installed on Oracle Linux 9 using an RPM. This article describes the RPM installation of Oracle Database 23ai Free 64-bit on Oracle Linux 9 (OL9) 64-bit. The article is based on a server installation with a minimum of 2G of swap and SELinux set to permissive.

◉ Hosts File


The "/etc/hosts" file must contain a fully qualified name for the server.

<IP-address>  <fully-qualified-machine-name>  <machine-name>

For example.

127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.56.107  ol9-23.localdomain  ol9-23

Set the correct hostname in the "/etc/hostname" file.

ol9-23.localdomain

◉ Oracle Installation


Download the relevant RPM from the download page here.

- oracle-database-free-23ai-1.0-1.el9.x86_64.rpm

With the RPM file downloaded, we can install the Oracle prerequisites using the following command as the "root" user.

dnf install -y oracle-database-preinstall-23ai

We now install the 23ai software using the following command as the root user. This assumes the RPM file is in the "/tmp" directory.

dnf -y localinstall /tmp/oracle-database-free-23ai-1.0-1.el9.x86_64.rpm

The ORACLE_HOME for the software installation is "/opt/oracle/product/23ai/dbhomeFree".

◉ Create Database


In addition to the software installation, the RPM creates a script that allows us to create a demo database called "FREE", with a pluggable database (PDB) called "FREEPDB1". In the following example we set the DB_PASSWORD environment variable so we can do a silent database creation using the script.

# export DB_PASSWORD=SysPassword1

# (echo "${DB_PASSWORD}"; echo "${DB_PASSWORD}";) | /etc/init.d/oracle-free-23ai configure
Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle Database FREE.
Enter SYS user password:
*************
Enter SYSTEM user password:
**************
Enter PDBADMIN User Password:
************
Prepare for db operation
7% complete
Copying database files
29% complete
Creating and starting Oracle instance
30% complete
33% complete
36% complete
39% complete
43% complete
Completing Database Creation
47% complete
49% complete
50% complete
Creating Pluggable Databases
54% complete
71% complete
Executing Post Configuration Actions
93% complete
Running Custom Scripts
100% complete
Database creation complete. For details check the logfiles at:
 /opt/oracle/cfgtoollogs/dbca/FREE.
Database Information:
Global Database Name:FREE
System Identifier(SID):FREE
Look at the log file "/opt/oracle/cfgtoollogs/dbca/FREE/FREE.log" for further details.

Connect to Oracle Database using one of the connect strings:
     Pluggable database: localhost.localdomain/FREEPDB1
     Multitenant container database: localhost.localdomain
[root@localhost yum.repos.d]#

We can of course create a database in the normal way, using the Database Configuration Assistant (DBCA). We don't have to use this script.

◉ Using It


From the "oracle" user we can connect as follows.

export ORACLE_HOME=/opt/oracle/product/23ai/dbhomeFree
export PATH=$ORACLE_HOME/bin:$PATH

-- Root container
sqlplus sys/SysPassword1@//localhost:1521/free as sysdba

-- Pluggable database
sqlplus sys/SysPassword1@//localhost:1521/freepdb1 as sysdba

We can stop and start the service from the root user with the following commands.

/etc/init.d/oracle-free-23ai stop
/etc/init.d/oracle-free-23ai start

◉ Vagrant Example


If you want to see it in action, you might want to try one of these Vagrant builds.


Source: oracle-base.com

Wednesday, July 3, 2024

Unlock the Power of Your Data: Seamlessly Create and Refresh Datasets from Files Stored in OCI Object Storage

In this post, I provide an overview and examples of creating datasets from files stored in Oracle Cloud Infrastructure (OCI) Object Storage. I also explore the new centralized file storage capabilities that enable scheduling periodic reloads of the data stored in Object Storage. I review the creation of an OCI Resource connection, the creation of a dataset using the new connection, using the new UI to search and navigate compartments, buckets, and objects, and creating datasets based on some files. Finally, I describe how you can manually reload or schedule a periodic reload of the dataset by updating files in Object Storage.

What Is OCI Object Storage?


OCI Object Storage enables you to securely store any type of data in its native format. With built-in redundancy, OCI Object Storage is ideal for building modern applications that require scale and flexibility, because it can be used to consolidate multiple data sources for analytics, backup, or archive purposes.

Creating an OCI Resource Connection


In order to access files stored in OCI Object Storage, you first create an OCI Resource Connection using an API Key. This connection is the same type of connection required for connecting Oracle Analytics to OCI functions and OCI models such as Vision and Language.


Creating Datasets from Files in OCI Object Storage


Once a connection has been successfully created, you can start creating datasets from files in the OCI Object Storage buckets. Start by clicking Create Dataset on the home page. Notice that the OCI Resource Connection is displayed as one of the data sources in the Create Dataset dialog.


Region Selection


After selecting the OCI connection, use the dialog to review the default region and change it if necessary using the drop-down list, and to easily search for the right compartments, buckets, and objects, which can include folders, subfolders, and files.


Navigating and Searching Compartments


After selecting or keeping the default region, either manually navigate or enter a full or partial search string to search all the compartments. The search results are filtered to display only those compartments that meet the search criteria. The search is a wildcard, case-insensitive search.


Navigating and Searching Buckets


After clicking the correct compartment where the buckets reside, do the same type of navigation or wildcard search on the buckets. After locating the bucket where the files reside, click it and notice that all the objects in the selected bucket are displayed on the right-hand panel.


Navigating and Searching Objects and Selecting a File


Again, manually navigate the objects in the bucket, which can consist of folders, subfolders, and files, or perform a wildcard, case-insensitive search. After locating the file to import into the dataset, click OK. The system imports the file into OAC and provides a preview of its contents for review. After the review, click OK to bring the file into the Dataset Editor, where a representative sample is extracted, deep semantic profiling is triggered, and the results are displayed as Data Quality Insights for the contents of the file.


Adding Another File from the Same OCI Connection


After creating the table based on the file from the OCI Object Storage bucket, notice that the connection (My OCI Connection), the resource (OCI Object Storage), and the imported file are listed in the left-hand panel. To add another file from the same connection, click the icon to the right of the resource. The navigation dialog is displayed again, and you can drill into the bucket to get the second file. Add as many files as you need and join them to create the dataset. You can also join files from OCI Object Storage with database tables and other files.


Extra Credit – Scheduling a Dataset Reload


One of the really cool advantages of creating datasets from files in OCI Object Storage buckets is that you can create a recurring workflow where an upstream process can place updated files of the same name in the same bucket periodically. You can then schedule dataset refreshes to automatically update the data in cache from those updated files. You can set up either a one-time or a recurring schedule. Additionally, you can check the details of a schedule to see the last run time and the next scheduled run. This process and capability provides a way to update visualizations with the latest data from datasets based on files.


Call to Action


We hope you've enjoyed this overview of creating datasets from files in OCI Object Storage buckets! And we challenge you to start creating datasets from your files stored in buckets and hope that you find them to be both powerful and user-friendly. Keep exploring the powerful world of self-service data modeling and stay tuned for our upcoming blog posts, where we'll share more tips and tricks on both new and existing features of our product.

Source: oracle.com

Monday, July 1, 2024

Announcing New Spatial Machine Learning Algorithms in OML4Py on Autonomous Database Serverless

We are pleased to announce the general availability of a new feature of Oracle Machine Learning for Python (OML4Py), Spatial AI. OML4Py Spatial AI provides specialized machine learning algorithms that incorporate the effects of location. Using machine learning with spatial algorithms can improve model quality and prediction accuracy by accounting for the effects of location. For example, spatial regression algorithms are able to enhance home value predictions by incorporating the influence of neighboring home values. Spatial algorithms also allow you to detect location patterns, like spatial clustering of traffic accidents. As part of OML4Py on Autonomous Database Serverless, Spatial AI provides  a single environment for spatial ML workflows that minimizes data movement to external ML engines, simplifies your architecture, and accelerates time to value.

Sample notebooks are provided to help get you started. From the Oracle Machine Learning user interface, navigate to Templates > Examples and filter for "spatial". Clicking on the title of a sample notebook opens it in read-only mode to review the content. To create your own editable/runnable copy of a sample notebook, first select the sample notebook by clicking on the tile, click "Create Notebook", and then select your project. Begin with the notebook "OML4Py Spatial AI Run Me First" to seed sample data, and then try other notebooks based on your areas of interest.


After the data are seeded, sample notebooks can be run in any order. For example, in the sample notebook OML4Py Spatial AI Agglomerative Clustering and Regionalization, you apply a spatial ML algorithm that combines sets of Census Block Groups into broad regions based on similar demographics. Such regions are useful for regional marketing, where strategies are adjusted based on localized demographics and associated buying patterns. The following image shows the results of a non-spatial clustering based solely on similar feature values (left), and the results of clustering with a spatial ML algorithm that combines areas into regions based on similarity of both feature values and location (right).


Source: oracle.com

Saturday, June 29, 2024

EMPTY STRING ON NULL for JSON Generation in Oracle Database 23ai


Oracle Database 23ai has introduced the ability to convert nulls to empty strings during JSON generation.

◉ Setup


We create and populate a table. Notice each row has a different combination of null and non-null values.

conn testuser1/testuser1@//localhost:1521/freepdb1

drop table if exists t1 purge;

create table t1 (
  col1 VARCHAR2(10),
  col2 VARCHAR2(10)
);

insert into t1 values ('ONE', 'TWO');
insert into t1 values ('ONE', null);
insert into t1 values (null, 'TWO');
insert into t1 values (null, null);
commit;


select rownum, col1, col2 from t1;

    ROWNUM COL1       COL2
---------- ---------- ----------
         1 ONE        TWO
         2 ONE
         3            TWO
         4

SQL>

◉ NULL ON NULL and ABSENT ON NULL


In previous releases our only options for null handling during JSON generation were NULL ON NULL and ABSENT ON NULL. We'll use JSON_OBJECT as an example, where the default behaviour is NULL ON NULL.

When using NULL ON NULL, implicitly or explicitly, any null values in the data are generated as "null" in the output.

-- Implicit - NULL ON NULL.
select json_object(*) as data from t1;

DATA
--------------------------------------------------------------------------------
{"COL1":"ONE","COL2":"TWO"}
{"COL1":"ONE","COL2":null}
{"COL1":null,"COL2":"TWO"}
{"COL1":null,"COL2":null}

SQL>


-- Explicit - NULL ON NULL.
select json_object(* null on null) as data from t1;

DATA
--------------------------------------------------------------------------------
{"COL1":"ONE","COL2":"TWO"}
{"COL1":"ONE","COL2":null}
{"COL1":null,"COL2":"TWO"}
{"COL1":null,"COL2":null}

SQL>

When using ABSENT ON NULL any null values in the data result in the corresponding element being removed from the output.

select json_object(* absent on null) as data from t1;

DATA
--------------------------------------------------------------------------------
{"COL1":"ONE","COL2":"TWO"}
{"COL1":"ONE"}
{"COL2":"TWO"}
{}

SQL>

Here's an example with JSON_ARRAY, which uses ABSENT ON NULL by default.

-- Default - ABSENT ON NULL
select json_array(col1) as data from t1;

DATA
--------------------------------------------------------------------------------
["ONE"]
["ONE"]
[]
[]

SQL>


-- Explicit - ABSENT ON NULL
select json_array(col1 absent on null) as data from t1;

DATA
--------------------------------------------------------------------------------
["ONE"]
["ONE"]
[]
[]

SQL>


-- NULL ON NULL
select json_array(col1 null on null) as data from t1;

DATA
--------------------------------------------------------------------------------
["ONE"]
["ONE"]
[null]
[null]

SQL>

◉ EMPTY STRING ON NULL


In Oracle Database 23ai we now have the option of using EMPTY STRING ON NULL. In this case null values are presented as empty strings in the output.

Here we see JSON_OBJECT being used with EMPTY STRING ON NULL.

select json_object(* empty string on null) as data from t1;

DATA
--------------------------------------------------------------------------------
{"COL1":"ONE","COL2":"TWO"}
{"COL1":"ONE","COL2":""}
{"COL1":"","COL2":"TWO"}
{"COL1":"","COL2":""}

SQL>

Here is an example of JSON_ARRAY using EMPTY STRING ON NULL.

select json_array(col1 empty string on null) as data from t1;

DATA
--------------------------------------------------------------------------------
["ONE"]
["ONE"]
[""]
[""]

SQL>

◉ Thoughts


In Oracle SQL and PL/SQL an empty string and a null are treated the same. This is not the case in other languages, so it's important to give people an option of presenting nulls as empty strings if required.

The default null handling of each JSON generation function can differ, so it's important to check the documentation, or always be explicit so other developers know your intentions.

Source: oracle-base.com

Friday, June 28, 2024

Announcing Globally Distributed Autonomous Database Support for Oracle Database 23ai Enabling Raft Replication


Over the last four years, the Oracle Database Development team has diligently worked to deliver the next long-term support release: Oracle Database 23ai, with a focus on AI and enhancing developer productivity.

Today, we are thrilled to announce support for Oracle Database 23ai on the Oracle Globally Distributed Autonomous Database! Oracle Database 23ai, in conjunction with the Oracle Globally Distributed Autonomous Database, is a game-changer for enterprises worldwide seeking to build cloud-scale distributed databases with ultra-high scalability, availability, and advanced AI capabilities.

With the latest release, Oracle Globally Distributed Autonomous Database allows customers to select their preferred replication strategy to ensure high availability and disaster recovery. You can choose between the new built-in Raft replication (introduced in 23ai) and Oracle Data Guard (available in releases 19c and 23ai). The service automatically deploys the specified replication topology to the configured systems and enables seamless data replication. If you are used to NoSQL databases and do not want to think about how replication works, the built-in Raft replication in Globally Distributed Autonomous Database just works.

Key benefits for Raft replication include:


Fast failover with zero data loss: Raft replication is built into Globally Distributed Autonomous Database to provide a consensus-based, high-performance, low-overhead availability solution, with distributed replicas and fast failover with zero data loss, while automatically maintaining the replication factor if shards fail.

Active-Active symmetric configuration: Each node accepts writes and reads for a subset of data, facilitating an Active-Active symmetric distributed database architecture that enhances availability and optimizes resource utilization globally.

Simplified management: Raft replication is integrated and transparent to applications without requiring configuration of Oracle GoldenGate or Oracle Data Guard. Raft replication automatically reconfigures replication in case of shard host failures or when shards are added or removed from the sharded database.

Automatic data distribution: The service handles workload distribution and balancing upon scaling (addition or removal of shards), including planned or unplanned changes in shard availability. 


By supporting Oracle Database release 23ai, Globally Distributed Autonomous Database offers customers enhanced replication options and advanced AI capabilities, including Select AI and AI Vector Search. It provides all of these new benefits on top of 23ai, including unparalleled flexibility, improved operational efficiency, enhanced performance, simplified scalability, and compliance for modern applications. Stay tuned for more updates and detailed guides on leveraging these new capabilities to maximize your database’s potential.

Source: oracle.com

Wednesday, June 26, 2024

Oracle Spatial Studio 24.1 Available Now

We’re pleased to announce the release of Oracle Spatial Studio 24.1, a no-code web tool for working with the spatial features of Oracle Database. In this release we've focused on enhancing existing features and adding support for a popular format, 3D Tiles. This blog post explores some of the enhancements available in this release.

Enhanced embedding of published Studio projects


Spatial Studio provides functionality to embed published maps into other web applications. New capabilities enable interactive behavior between your application and an embedded map using an enhanced API. For an application built on a platform without native map visualization features, embedding Spatial Studio maps is an easy alternative to custom coding. But even for an application built on a platform that has map features, Spatial Studio provides advantages, for example, high-performance map rendering of large data volumes. The image below is an elementary example of a Spatial Studio map embedded into an Oracle APEX application, allowing the user to select items in the map to highlight corresponding records in the native APEX table, and vice versa.


Enhanced Cesium 3D visualizations to support updated 3D Tiles specification


3D Tiles is a popular file format for streaming massive heterogeneous 3D geospatial datasets, allowing for efficient interaction at runtime. The Open Geospatial Consortium updated the 3D Tiles specification in 2023, and subsequently, Cesium has included support for the features in this standard. Spatial Studio has been enhanced to include these features.


Support for incremental Geocoding


Spatial Studio provides the ability to geocode data, whereby addresses are converted to point locations for mapping and spatial analysis. For scenarios where your data are continually or periodically appended with new addresses, Spatial Studio has added the ability to perform “incremental geocoding”. This means that only new addresses are geocoded, regardless of the volume of previously geocoded addresses. This reduces rework, saving resources and improving efficiency.


Source: oracle.com

Monday, June 24, 2024

OCI AI Vision Facial Detection in Oracle Analytics Cloud

This post explores the OCI AI Vision Facial Detection service exposed directly in Oracle Analytics. With prebuilt models available, developers can easily build image recognition and text recognition into their applications without machine learning (ML) expertise. This post walks users through registering their AI Vision Face Detection model, uploading images, running their dataflow, and analyzing the output.

Users can analyze single images or a batch of images directly in OAC using the pre-trained face detection model. The OCI AI Vision Facial Detection feature provides the following information:

1. The existence of faces in each image.
2. The location of faces in each image, using bounding box coordinates.
3. Facial landmarks for each detected face, including left eye, right eye, nose tip, and left and right edges of the mouth.
4. The visual quality of each face: a higher score indicates that the image of the face is more likely to be suitable for biometrics.

Register your OCI AI Vision Face Detection Model


◉ In the OCI AI Vision service, register your Face Detection Model.


◉ Select the connection to your OCI tenancy.
◉ Choose the desired compartment.
◉ Select the face detection model and provide a staging bucket.


Uploading and preparing your image files


To start, first upload images to an OCI bucket. Once all images reside in an OCI bucket, there are two options to build the required data input file that will be used in your data flow.

1. The first option is to create a CSV/xls file that points directly to all the images you wish to use with the face detection model (see the left side of the screenshot; a sample file layout is shown below).
2. The second option is to create a CSV/xls that points to the root of the bucket where all images reside. In this scenario, the model will consume all images within the bucket (see the right side of the screenshot).
3. Once the input data file is created, you can proceed directly to OAC to build your dataflow and run the face detection model.
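
The exact layout of the input file depends on your data, but a minimal hypothetical example of option 1 could look like the following, where the region, namespace, bucket, and file names are placeholders rather than real values.

image_id,image_url
1,https://objectstorage.us-ashburn-1.oraclecloud.com/n/mynamespace/b/faces-bucket/o/photo1.jpg
2,https://objectstorage.us-ashburn-1.oraclecloud.com/n/mynamespace/b/faces-bucket/o/photo2.jpg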


Executing the Face Detection model


Dataset Prep

1. To this point, the following steps have been completed: registering the face detection model, uploading images to an OCI bucket, and creating an input data file.
2. With all these components in place, it's time to build the dataflow and execute the model. First, you need to upload the input data file created in the previous step.
3. Follow the screenshot flow below to upload and save the data file in OAC. This example shows an input file that points directly to each individual image file. Once the data file is saved, you can build your dataflow.


Build the Dataflow.

3. To build the dataflow, start with your newly created input dataset and add the AI Face Detection model.


4. With the model added to the data flow, you'll see all the output columns that will be produced by the Face Detection Model.

  • In the parameters section, choose the input column (the source column of your images within your data file).
  • For the input type, use the dropdown to indicate whether your input dataset points to individual image files or to a bucket containing the images.

5. The last step is to save the model output in a data file.

  • Add a step to your dataflow, 'save data'.
  • Once you've provided a dataset name, click Run, and provide a dataflow name.
  • As part of the save data process, you'll see an output preview of the Face Detection model. Note that depending on the number of images in your model, execution times will vary (currently OAC supports up to 250 face detections per image).


Visualize in OAC


With the data flow successfully executed, you can now create a new workbook in OAC and visualize the output. Before starting, you need to first download the vision series plugin from the OAC library and upload it into your OAC deployment, as this is the visualization type you'll use with the Face Detection Model.


Build your OAC Workbook

1. Start by adding your model output dataset to the workbook and selecting the Vision Plugin.
2. Then, using facial landmarks, you can now identify the exact coordinates and features of each face within your images.
3. In addition, the model provides a quality and confidence score.
4. In this sample, the use of dashboard filters allows for a quick review of all images that were put through the model.


AI Explain

In addition, through the use of AI Explain, you can gain valuable insights into the model output, where all images are analyzed and reported on together.


Source: oracle.com