Monday, May 31, 2021

Analyst insight - Oracle launches GoldenGate data integration service on Oracle Cloud Infrastructure

Oracle is reinventing its GoldenGate real-time replication capability as a managed cloud service running on Oracle Cloud Infrastructure (OCI), designed to provide event-based integration of operational and analytic databases, regardless of where they run.

Oracle has taken another step in rounding out its cloud managed services portfolio by adding its GoldenGate data integration service to the mix. The OCI GoldenGate service is a serverless offering that runs on the Oracle Cloud, providing an event-based approach to integrating operational and analytic databases between on-premises data centers and any public, private, or hybrid clouds.

GoldenGate, which has been typically employed with change-data-capture feeds for keeping different instances of databases across data centers or geographic locations in sync, has often been used for disaster recovery use cases with remote hot sites. While Oracle still supports that use case, with the managed cloud service, Oracle is broadening the focus to integrate data across on-premises and one (or more) public clouds under a common umbrella.

Since its acquisition by Oracle back in 2009, GoldenGate has largely existed outside the limelight; although offered standalone, GoldenGate was also a component of Oracle’s broader Data Integration suite and Maximum Availability Architecture (MAA). Part of the lack of recognition is that, unlike many Oracle acquisitions, GoldenGate kept the brand name of the formerly independent company, and unlike descriptively named Oracle products such as the Zero Data Loss Recovery Appliance, its branding is not exactly evocative of its function.

Even in the background, GoldenGate built a fairly substantial installed base, counting over 80% of the Fortune Global 100. It was designed to provide a real-time alternative to the batch-oriented data extraction and movement tools of the time. And although Oracle has owned GoldenGate for over a decade, the tool never became Oracle database-specific. In recent years, Oracle has expanded GoldenGate by adding support for big data streaming integration and analytics.

When Oracle acquired GoldenGate, the tool differentiated itself by not requiring the staging servers that were the norm at the time; instead, it introspected change logs from source databases, acting as a change-data-capture (CDC) tool. That approach was ahead of its time in the early 2000s; fast forward to the present, and in the cloud, intercepting database change logs at the source has become the norm for data replication. The pathway to the new managed cloud service offering was opened several years ago when Oracle reengineered GoldenGate with a microservices architecture.
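
The log-based CDC model described above can be sketched in a few lines: a replicator tails the source database's ordered change log and applies each committed operation to the target. This is a conceptual illustration only; the function and record shapes are invented for the sketch and are not GoldenGate APIs.

```python
# Minimal sketch of log-based change-data-capture (CDC): apply an ordered
# stream of committed changes to a replica table (here, a dict keyed by id).
# All names and record shapes are illustrative, not GoldenGate interfaces.

def apply_change(replica, change):
    """Apply one change-log entry to the replica."""
    op, key, row = change["op"], change["key"], change.get("row")
    if op in ("insert", "update"):
        replica[key] = row
    elif op == "delete":
        replica.pop(key, None)
    return replica

# The source's change log: the ordered stream of committed operations.
change_log = [
    {"op": "insert", "key": 1, "row": {"fruit": "apple", "qty": 10}},
    {"op": "insert", "key": 2, "row": {"fruit": "orange", "qty": 20}},
    {"op": "update", "key": 1, "row": {"fruit": "apple", "qty": 15}},
    {"op": "delete", "key": 2},
]

replica = {}
for change in change_log:
    apply_change(replica, change)

print(replica)  # the replica converges to the source's committed state
```

Because only the change log is read, no staging server or full-table extract is involved, which is why this model displaced batch-oriented extraction.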

Today, Oracle GoldenGate competes with replication tools from niche providers like Striim (whose cofounders came from Oracle), Qlik Data Integration (formerly Attunity), and HVR; while most of these tools are available in cloud provider marketplaces, they are not delivered as managed PaaS services. Rival cloud services, such as AWS Glue, a serverless ETL offering, provide some similar capabilities within the AWS cloud but otherwise are not directly comparable with OCI GoldenGate.

Oracle’s service, which runs only in OCI, takes advantage of the same autoscaling technologies used by the Oracle Autonomous Database. It can manage connections to remote databases sitting in other clouds. Nonetheless, as Oracle has always kept GoldenGate friendly to non-Oracle databases, we believe that Oracle should offer this service in other public clouds.

Just like the Autonomous Database, Oracle is pricing the OCI GoldenGate service aggressively, in this case at $0.67/hour per vCPU, which it claims is cheaper than rival cloud database migration services. With elastic cloud pricing and a much simpler user experience, Oracle is seeking to expand beyond its enterprise comfort zone to midsized businesses that would not normally use such tools. The OCI GoldenGate service is generally available now in all of Oracle’s cloud regions globally.
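
The quoted rate makes back-of-envelope sizing straightforward. A quick sketch, assuming an illustrative 730-hour month and a 2-vCPU shape (neither figure is Oracle's published sizing, only the $0.67 rate comes from the article):

```python
# Rough monthly cost at the quoted OCI GoldenGate rate of $0.67/hour per
# vCPU. The 730-hour month and the vCPU counts are illustrative assumptions.
RATE_PER_VCPU_HOUR = 0.67

def monthly_cost(vcpus, hours=730):
    """Cost in dollars for running `vcpus` for `hours` in a month."""
    return vcpus * hours * RATE_PER_VCPU_HOUR

print(f"${monthly_cost(2):,.2f}")  # 2 vCPUs for a full month
```

With elastic scaling, the service would bill only for hours actually consumed, so bursty replication workloads land well below this always-on figure.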

Source: oracle.com

Wednesday, May 26, 2021

How three companies brought the simplicity and flexibility of Oracle Cloud into their own data centers

Beyond the industry buzz around Oracle’s groundbreaking Exadata Cloud@Customer service are hundreds of companies, big and small, that are finding it the obvious solution for bringing cloud innovation into their data centers.

As the benefits of cloud become ever more obvious, many businesses have felt held back by security and regulatory concerns from tapping the cloud’s savings, flexibility, and innovation. Oracle Exadata Cloud@Customer unleashes them. Here are three examples.

The IT team at McMaster University, which enrolls more than 31,000 students, was determined to get the elastic scalability benefits of cloud while keeping control of their data on-premises. “The ability to scale up or down is great for the campus lifecycle calendar,” says Kevin de Kock, the school’s director of enterprise applications and data systems, because students research, register for classes, and make changes in brief bursts of activity only a few times a year. De Kock’s team supports Oracle’s PeopleSoft Campus Solutions among other enterprise applications for students and staff of the university, located in Hamilton, Ontario, Canada.

Bringing Exadata Cloud@Customer into its data center gave McMaster the elastic scalability of Oracle Cloud Infrastructure, as well as a platform to consolidate more than 175 databases to under 75. Exadata Cloud@Customer is built on Exadata Database Machine, an integrated mix of hardware and software designed to run Oracle Database securely, at maximum performance, scalability, and availability. Businesses have been using Exadata for their most challenging database workloads since 2007, often using it to consolidate databases while vastly improving their performance and availability. Exadata is the foundation of Oracle Cloud database services, such as Exadata Cloud@Customer and Autonomous Database. 

With Exadata Cloud@Customer handling routine database management activities, provisioning databases for dev and test projects takes McMaster less than a quarter of the time it once did, freeing employees to do more challenging work. The university has shortened its end-to-end data refresh process from 10 hours to 1.5 hours and its backups from an hour to seven minutes. De Kock reports that ETLs are 25 percent faster and scheduled query runtimes have improved by 70 percent. “Exadata Cloud@Customer was the perfect solution because it provides cloud functionality while hosting on-premises, with a low-risk path to the cloud,” De Kock says.

Outcomes like this have not gone unnoticed by industry analysts, such as IDC and others. They rank Oracle’s solution for bringing cloud computing to customers’ data centers ahead of programs at AWS, Microsoft, and others. For example, dbinsight’s Tony Baer says, “No other cloud or database provider offers this level of simplicity or flexibility on-premises.”

NTT West builds a region-wide program that meets data sovereignty and compliance requirements

Japan’s Nippon Telegraph and Telephone West Corp (NTT West) wanted to build a more responsive cloud platform for large customers that use its technology services. NTT West brought Oracle Exadata Cloud@Customer into its data centers because its elastic scalability helps the company reduce the cost of offering pay-per-use pricing to its customers. And as a true cloud service, it lowers the cost of skilled labor and infrastructure maintenance because that work is managed by Oracle. Equally important for NTT West is that Exadata Cloud@Customer “uses the superior technology of Oracle Database as a cloud service delivered in our own data centers, meeting all of our data sovereignty and compliance requirements,” says Norihito Senda, Nagoya Branch, Advanced Solution Department, NTT West.

As NTT West implemented Exadata Cloud@Customer in its data center, the company saw a 40 percent drop in system processing time. NTT West took advantage of features such as Exadata Smart Scan, to speed up processing of large data batches, and Oracle Multitenant, to build multiple independent pluggable databases and manage them simply as one.

Bringing Oracle Autonomous Database into customers’ data centers is a big example of how Oracle Exadata continues to speed ahead of the competition for hosted data management options. “Oracle is at least two generations ahead of everyone else with respect to delivering a production-proven autonomous database—available in the cloud and on-premises,” says Holger Mueller, VP and principal analyst at Constellation Research.

Lalux creates a new path to innovation

Luxembourg-based insurer Lalux had two key goals for its infrastructure refresh: get more use from the computing resources it was already paying for, and firmly meet industry regulations such as Solvency II, cloud IT-governance guidelines, GDPR, and more, says Vincent Arnal, CIO of Lalux. Oracle Exadata Cloud@Customer enabled the company to do both while providing a cloud platform it could use to become more innovative.

Using two quarter-rack configurations of Exadata Cloud@Customer, Lalux got the elastic scalability to handle peaks of six times average usage, plus enough capacity to support the company’s growth and new projects for the next five years.

Another benefit of the Exadata Cloud@Customer service is that it includes many database features at no additional cost, says Arnal. Such features include Partitioning, Advanced Compression, Advanced Security, Real Application Testing, Active Data Guard, Management Packs, and more. "This fact is a real game-changer because it enables new ways of designing IT solutions,” Arnal says. Creating that kind of new path to innovation is exactly what Lalux had hoped to find with its infrastructure refresh and its journey to the cloud.

Source: oracle.com

Monday, May 24, 2021

Exadata uses Persistent Memory for Fast Transactions

An incredible amount of engineering work has gone into the latest generation of Exadata, focused on maintaining and extending our lead in database performance using Intel Optane Persistent Memory (PMEM) combined with Remote Direct Memory Access (RDMA). In this article, we will concentrate on advancements in the three primary areas of database performance and scalability that are critical to transactional applications:

◉ Reading of Individual Database Blocks from Storage

◉ Writing to the Database Transaction Log

◉ Transfer of Database Blocks Across the Cluster Interconnect

These latency-sensitive operations deep within the Oracle Database have been greatly improved once again, putting Exadata yet another generation ahead of the competition in both on-premises and Cloud deployments.

Transaction Processing

Consider one of the simplest and most common examples of a database transaction: inserting a web shopping cart into a database. The application in this example models its data as JSON (JavaScript Object Notation), so all of the complexities of the data are encapsulated in a single object that simply needs to be inserted into the database. This application design minimizes the back-and-forth interactions between application and database, and it is accelerated even further by Exadata technology. The same fundamental behavior occurs with JSON objects as with relational tables, because indexes are fundamentally the same in either case. Inserting the JSON object includes updating the index, and some of the index blocks need to be read from storage. The final result must be committed to the database and made durable, as we will show.

Latency is Key


Input/Output Operations Per Second (IOPS) is an important system-wide measure of overall performance, but latency (response time) is critical to the performance of each transaction. Exadata delivers millions of IOPS, dramatically more than competing platforms. For example, the smallest Exadata X8M available in the Oracle Cloud delivers 11.5 times more IOPS than Oracle Database running on the latest AWS RDS instances (3,000,000 IOPS vs. 260,000 for AWS RDS). Exadata Cloud Service also scales to many times this size by simply adding more Database Servers and Storage Servers. While Exadata delivers this incredible scale, the response time, or latency, of individual database operations is critical to transaction performance, including single block reads and redo writes during commit processing.
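
The 11.5x figure is simply the ratio of the two published IOPS numbers; checking the arithmetic:

```python
# Ratio of the two IOPS figures quoted in the text.
exadata_x8m_iops = 3_000_000
aws_rds_iops = 260_000

ratio = exadata_x8m_iops / aws_rds_iops
print(round(ratio, 1))  # ≈ 11.5, matching the article's claim
```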

Single Block Reads (<19µsec)


The first time-critical operation to consider is the single block read. As shown in the transaction example above, databases are often too large to fit into memory, and some blocks of data must be read from storage. Even a small amount of I/O can impact transaction performance if those I/O operations are not fast enough. A high-speed connection between Exadata Database Servers and Storage Servers enables the software to use Remote Direct Memory Access (RDMA).

Oracle Database on Exadata reads single blocks of data from storage in less than 19µsec.  This can be easily measured via Automatic Workload Repository (AWR) in the "cell single block physical read" metric. The equivalent AWR metric is "db file sequential read" on non-Exadata systems. There is simply no other solution available that provides this level of performance along with data integrity and fault resilience.

Intel Optane Persistent Memory (PMEM) is used as a caching layer in Exadata, specifically targeted at caching blocks that are the subject of single-block reads, as well as caching writes to the Database transaction log.  Caching data into Persistent Memory allows the Oracle Database to access those contents much faster compared to any storage technology that uses a request/response processing model. The Oracle Database software does not "request" a block from storage. Rather, the Database simply reads the contents of the PMEM and retrieves the blocks it needs. The Exadata Storage Server software is responsible for identifying the "hot" blocks of data and placing them into PMEM.
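
The read path described above can be sketched conceptually: a PMEM hit is a direct read (no request/response round trip), and only misses fall back to the slower storage path, after which the storage software may promote the block. Block IDs, contents, and the promotion policy here are illustrative stand-ins, not Exadata internals.

```python
# Conceptual sketch of a cache-first read path: hot blocks in the
# persistent-memory cache are read directly; misses go to storage and
# may be promoted. All names and the promotion policy are illustrative.

pmem_cache = {"blk-1": b"hot block contents"}  # hot blocks placed by storage software
storage = {"blk-1": b"hot block contents", "blk-2": b"cold block contents"}

def read_block(block_id):
    if block_id in pmem_cache:
        return pmem_cache[block_id], "pmem"    # direct read: no I/O request issued
    data = storage[block_id]                   # miss: conventional request to storage
    pmem_cache[block_id] = data                # storage software promotes the block
    return data, "storage"

print(read_block("blk-1")[1])  # pmem
print(read_block("blk-2")[1])  # storage
print(read_block("blk-2")[1])  # pmem (promoted after first read)
```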

Accelerating Commits with Persistent Memory (~25µsec)


The second time-critical operation for transaction processing that Exadata addresses is writing to the database transaction log, specifically during COMMIT processing. Writes to the transaction log are always critical, but especially when those writes contain commit markers: it is not simply the object being written (redo blocks) that matters, but the context under which the write happens (during commit). Oracle has built in numerous optimizations that accelerate the transaction log, and the latest innovation is the use of Persistent Memory to accelerate commit processing. The transaction log (redo blocks) is written to 3 Storage Servers in unison.

Writes to the database transaction log will vary depending on the amount of data being processed, but will generally complete in about 25µsec. The critical redo log write operations can be seen under the AWR metrics "log file sync" and "log file parallel write" in any Oracle Enterprise Edition Database. Transaction processing inside of an Oracle Database occurs primarily in memory, but the transaction log is persisted to storage for durability, and Exadata will persist these writes to 3 Storage Servers simultaneously in a HIGH redundancy configuration (standard in Oracle Cloud). This operation inside of an Oracle Database has simply gotten more critical as processing volumes have increased. Exadata delivers innovations such as the Persistent Memory Commit Accelerator to keep pace with modern transaction processing applications.
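
The commit durability rule described above, that a redo write counts only once all three copies are persisted in a HIGH redundancy configuration, can be sketched as follows. The three in-memory lists are stand-ins for real Storage Servers; the function name is invented for the sketch.

```python
# Sketch of triple-mirrored redo writes: a commit is acknowledged only
# when the redo record is durable on all three storage servers. The
# in-memory lists stand in for real persistent media.

storage_servers = [[], [], []]  # three redo staging areas (HIGH redundancy)

def write_redo(redo_record):
    """Write the record to all three servers; succeed only if every copy landed."""
    for server in storage_servers:
        server.append(redo_record)
    return all(server[-1] == redo_record for server in storage_servers)

committed = write_redo({"txn": 42, "commit": True})
print(committed)  # True: durable on all three copies, so the commit completes
```

Because all three writes proceed in unison rather than serially, the commit latency is close to that of a single write, which is how the ~25µsec figure remains achievable despite triple redundancy.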

Remote Direct Memory Access for Cluster Interconnect (~10µsec)


The third time-critical operation to consider is transfer of blocks across the cluster interconnect. Remote Direct Memory Access (RDMA) is the technology that allows a process on one computer to directly access the memory of another computer such as reading single blocks and writing to the Oracle Database transaction log as discussed above. RDMA is also the basis for the fastest cluster interconnect available and is the foundation for the Exafusion technology that keeps Oracle Database buffer caches synchronized on Exadata.

The Oracle Cache Fusion protocol is known as Exafusion on the Exadata platform because of enhancements that provide even greater performance. Oracle Database on Exadata uses RDMA to access data on other nodes in the range of ~10µsec, providing greater performance and scalability than other platforms. Traditional networking technology used in non-Oracle clouds sees latencies in the range of 500 to 1,000µsec (50-100x slower).

Source: oracle.com

Friday, May 21, 2021

Freedom to Build - Oracle Adds APEX Low-Code Application Development and Autonomous JSON Database to Growing List of Always Free Oracle Cloud Services

Oracle Corporation further demonstrates its commitment to developers by expanding Oracle Cloud Free Tier with two new Always Free services that enable developers to rapidly create low-code Oracle APEX applications and manage JSON document data. Both services are completely free to Oracle Cloud users for an unlimited time. These two new services add to the growing list of Always Free services released in Oracle Cloud since 2019.

New Always Free Oracle APEX Application Development


With new Always Free Oracle APEX Application Development (aka Free APEX Service), citizen and professional developers can rapidly build and deploy low-code apps in Oracle Cloud free of charge for as long as they need. Oracle APEX empowers developers to create modern data-driven applications up to 38x faster than coding. Developers can get the most from data assets by building converged applications that integrate relational, JSON, spatial, and graph data. They can pull data from the included Oracle Autonomous Database or virtually any REST endpoint. In addition, developers optionally can extend applications by incorporating Oracle and third-party APEX plugins or by adding custom JavaScript, SQL, and PL/SQL logic.

Free APEX Service delivers Oracle APEX running in Oracle Cloud as an all-inclusive and fully managed service with up to 1 OCPU (shared) and 20 GB of storage. It is suitable for diverse low-code applications including bespoke apps, interactive reports, dashboards, spreadsheet replacements, and more. Free APEX Service can support approximately 3-6 users accessing the service simultaneously, plus an unlimited number of applications, developer accounts, and end-user accounts. If applications outgrow these constraints, it is easy to upgrade to Paid APEX Service with a single click and then provision additional OCPUs and storage.

New Always Free Oracle Autonomous JSON Database


Always Free Oracle Autonomous JSON Database (aka Free JSON Database) delivers the benefits of Oracle’s Paid Autonomous JSON Database free of charge for as long as needed. Autonomous JSON Database is a cloud document database service that makes it simple to develop JSON-centric applications and handle JSON workloads. It features simple JSON document APIs, full SQL access, serverless scaling, high performance ACID transactions, and comprehensive security. Because the service is a member of the Oracle Autonomous Database family, it automates provisioning, configuring, tuning, scaling, patching, encrypting, and repairing of the database, thus eliminating most database management tasks while still delivering extremely high availability.

Free JSON Database provides up to 1 OCPU (shared) and 20 GB of storage, making it suitable for a wide range of small- to medium-sized JSON-centric applications. If apps built using Free JSON Database outgrow these constraints, a single click can upgrade from Always Free to a paid service, where it is easy to provision additional compute and storage resources.

Oracle’s Growing List of Always Free Cloud Services


These new free services are just the most recent additions to Oracle Cloud Free Tier. They complement the existing Always Free Compute VM, Object Storage, Block Storage, Load Balancer, Autonomous Data Warehouse, Autonomous Transaction Processing, and NoSQL Database services. The new Always Free services underscore Oracle’s continuing commitment to provide free Oracle Cloud resources to developers of all kinds, whether they are employed by an enterprise, working independently, or volunteering. With Oracle Cloud, developers everywhere get access to a growing list of free resources for an unlimited time.

Source: oracle.com

Monday, May 17, 2021

Exadata System Software 21.2

Exadata System Software Release 21.2.0.0.0 is now generally available.

This release follows Exadata System Software Release 20.1 and brings further innovations to performance and manageability of the Exadata fleet, making the platform even more predictable in high performance settings, and even more scalable for consolidation environments. Making everything easier with Exadata!

The past year has been an unprecedented time for all, affecting our customers globally, who are expected to do more remotely. With the Exadata 21.2 release, we have focused on manageability and performance, both of which drive towards managing increased workloads with the same or fewer resources. 

Our unique Exadata software allows us to ensure your investment continues performing at the peak of productivity. As mentioned when we released Exadata Software 20.1, "We continue our tradition of protecting your investment in Exadata, with all new software features compatible across RoCE and InfiniBand systems, and all supported Exadata hardware generations”.

Exadata Product Management is happy to state this tradition has not changed with the Exadata Software 21.2 release.

Persistent Storage Index

Most people familiar with Exadata realize the importance of Storage Indexes in avoiding unnecessary I/O from/to the Storage Servers. To recap, Storage Indexes are automatically created to track and maintain min/max column values for regions of storage. These indexes are held in the ephemeral memory of each Storage Server and are used by data warehouse and analytic workloads to eliminate unnecessary I/O. New in Exadata 21.2 - Storage Indexes are persisted on local disk, avoiding a rebuild of the storage index after planned maintenance or unplanned downtime. Eliminating the cycles needed to rebuild Storage Indexes after server restarts keeps application performance consistent.
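
The min/max pruning that Storage Indexes perform can be sketched in a few lines: each storage region tracks the minimum and maximum of a column, so a predicate whose value falls outside that range skips the region without any read. The region layout and values below are illustrative, not Exadata's actual region granularity.

```python
# Sketch of min/max region pruning (the idea behind Storage Indexes):
# regions whose [min, max] range cannot contain the predicate value are
# eliminated without any I/O. Region sizes and values are illustrative.

regions = [
    {"min": 1,   "max": 100, "rows": list(range(1, 101))},
    {"min": 101, "max": 200, "rows": list(range(101, 201))},
    {"min": 201, "max": 300, "rows": list(range(201, 301))},
]

def scan_equals(value):
    """Return matching rows and the number of regions actually read."""
    scanned = 0
    hits = []
    for region in regions:
        if region["min"] <= value <= region["max"]:  # index says region may match
            scanned += 1
            hits += [row for row in region["rows"] if row == value]
        # otherwise the region is eliminated: no read at all
    return hits, scanned

hits, scanned = scan_equals(150)
print(hits, scanned)  # [150] found after reading only 1 of 3 regions
```

Persisting these min/max summaries to disk, as 21.2 now does, means the pruning benefit is available immediately after a restart instead of only after the summaries are rebuilt by fresh scans.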

Persistent Columnar Cache

Columnar cache is a portion of Exadata Smart Flash Cache that contains data in columnar format. As part of query performance tuning, data in Exadata Smart Flash Cache is automatically converted and stored in columnar format in this cache. New in Exadata 21.2 - If a storage server running in Write-Back mode is shut down (gracefully, for an upgrade, or due to a failure), the columnar cache persists, enabling more consistent query performance immediately after the Storage Server restarts.

I/O Resource Management Cluster Plan

I/O Resource Management (IORM) is a powerful tool that enables user-defined policies to be assigned to workloads and databases sharing Oracle Exadata Storage Server resources, reducing the incidence of "noisy neighbor" impacts on critical databases. New in Exadata 21.2 - Database Machine Administrators are now able to allocate storage resources along cluster boundaries, rather than to individual databases or consumer groups. This allows for more flexible and holistic resource allocation. By defining the plan at the cluster level, any database (existing or new) in a cluster inherits the I/O plan of that cluster, ensuring new databases created without a specific plan do not overrun existing database workloads.

Smart Scan Metadata Sharing

A foundational feature of Exadata Storage Software for analytic workloads is the Smart Scan operation. When Oracle Databases send queries to the Exadata Storage tier, associated metadata is also sent. Query metadata contains information such as projected columns, predicates, session information, bloom filters, and more. This information is key to driving the query offload to the storage server. New in Exadata 21.2 - Rather than each parallel query worker packaging the metadata for each execution, common query metadata is shared across Smart Scan sessions belonging to the same parallel query. This reduces per-worker processing on the storage server, thereby increasing overall system throughput. The benefit is most visible in sophisticated operations with large amounts of metadata, such as data mining, JSON/XML-type tables, and joins with bloom filters.
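
The sharing idea can be sketched as a cache keyed by parallel-query identity: only the first worker pays the cost of packaging the metadata, and the rest reuse it. The cache key and metadata shape below are invented for illustration.

```python
# Sketch of metadata sharing across workers of one parallel query: the
# metadata is packaged once and reused, instead of once per worker.
# The query id and metadata fields are illustrative.

metadata_cache = {}
builds = 0  # how many times metadata was actually packaged

def get_query_metadata(query_id):
    global builds
    if query_id not in metadata_cache:
        builds += 1  # only the first worker pays the packaging cost
        metadata_cache[query_id] = {
            "columns": ["fruit", "quantity"],
            "predicates": ["quantity > 10"],
        }
    return metadata_cache[query_id]

# Eight workers of the same parallel query request the metadata:
for _ in range(8):
    get_query_metadata("pq-1001")
print(builds)  # 1: packaged once, shared by all eight Smart Scan sessions
```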

Additional Management Features

Other additional features of Exadata Software 21.2 include:

◉ Increased Management for Exadata Networking Infrastructure - New capabilities to monitor and update Cisco switches using Oracle Auto Service Request (ASR) and Patch Manager Utility (patchmgr). This capability covers the latest Cisco Management Switch and the RoCE Network Fabric switches.

◉ Streamlined Rebalancing Operations - Preserving redundancy levels is critical in highly regulated database environments. New options in the CellCLI command allow Exadata to automatically rebalance data before removing ASM disks.

◉ Faster Upgrades - In a world of zero downtime, every minute counts. Exadata Software 21.2 accelerates software updates, reducing the time to update each server by 5 minutes by pre-staging ILOM updates. On a full-rack Exadata, that translates to an overall reduction in rolling-upgrade time of almost 2 hours.

◉ Automatic and Faster Recovery from Component Failures - Additional recovery automation in High Capacity and Extended Storage Servers, and faster restoration of Flash Cache after failures, work toward increasing the resilience and predictability of Storage Server performance.

◉ Enhanced Database Server Alerting - The DBMCLI LIST ALERTHISTORY command now shows incidents relating to Oracle Grid Infrastructure and Oracle Database software, allowing administrators to see across components during analysis of incidents.
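
The "almost 2 hours" figure in the Faster Upgrades item above follows from per-server savings times server count. A sketch of the arithmetic, assuming a full rack of 8 database servers and 14 storage servers (the rack shape is our assumption; only the 5-minute figure comes from the text):

```python
# Back-of-envelope check of the rolling-upgrade savings: 5 minutes saved
# per server, across an assumed full rack of 8 database + 14 storage servers.
minutes_saved_per_server = 5
servers_in_full_rack = 8 + 14  # assumed full-rack shape

total_saved = minutes_saved_per_server * servers_in_full_rack
print(total_saved / 60)  # ≈ 1.8 hours, i.e. "almost 2 hours"
```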

Source: oracle.com

Tuesday, May 11, 2021

JSON Data Type in Oracle Database 21c

The JSON data type was introduced in the Oracle 20c preview release to provide native JSON support and improve the performance of JSON processing. It has become generally available in Oracle 21c.

◉ JSON Data Type

The JSON data type uses an Oracle-optimized binary JSON format called OSON. It is designed for faster query and DML performance in the database and in database clients from version 20c/21c onward.

We create a column in a table using the JSON data type, much like we would with any other data type.

-- drop table t1 purge;

create table t1 (
  id         number generated always as identity,
  json_data  json,
  constraint ta_pk primary key (id)
);

We can populate the table with JSON data from a number of sources using the JSON constructor.

declare
  l_varchar2  varchar2(32767);
  l_clob      clob;
  l_blob      blob;
begin
  l_varchar2 := '{"fruit":"apple","quantity":10}';
  l_clob     := '{"fruit":"orange","quantity":20}';
  l_blob     := utl_raw.cast_to_raw('{"fruit":"banana","quantity":30}');

  insert into t1 (json_data) values (json(l_varchar2));
  insert into t1 (json_data) values (json(l_clob));
  insert into t1 (json_data) values (json(l_blob));
  commit;
end;
/

Some inserts will work fine without the JSON constructor, but it makes sense to use it explicitly.

The data is stored in the table in binary format, so a basic query isn't very useful.

set linesize 200
column json_data format a65

select * from t1;

        ID JSON_DATA
---------- -----------------------------------------------------------------
         1 7B226672756974223A226170706C65222C227175616E74697479223A31307D
         2 7B226672756974223A226F72616E6765222C227175616E74697479223A32307D
         3 7B226672756974223A2262616E616E61222C227175616E74697479223A33307D

SQL>

◉ Query JSON Data

The JSON_SERIALIZE function is used to convert JSON from any supported type into text. It was introduced in Oracle 19c, but it has been extended to support the JSON data type.

set linesize 200
column json_data format a50

select id, json_serialize(json_data) as json_data from t1;

        ID JSON_DATA
---------- --------------------------------------------------
         1 {"fruit":"apple","quantity":10}
         2 {"fruit":"orange","quantity":20}
         3 {"fruit":"banana","quantity":30}

SQL>

We could query values using the normal SQL/JSON functions introduced in previous releases, which also support the new JSON type.

Here is an example of using the JSON_VALUE function.

column fruit format a10

select a.id,
       json_value(a.json_data, '$.fruit') as fruit,
       json_value(a.json_data, '$.quantity' returning number) as quantity
from   t1 a
order by 1;

        ID FRUIT        QUANTITY
---------- ---------- ----------
         1 apple              10
         2 orange             20
         3 banana             30

SQL>

Here is an example of using the JSON_QUERY function.

column fruit format a10
column quantity format a10

select a.id,
       json_query(a.json_data, '$.fruit' returning varchar2) as fruit,
       json_query(a.json_data, '$.quantity' returning varchar2) as quantity
from   t1 a
order by 1;

        ID FRUIT      QUANTITY
---------- ---------- ----------
         1 "apple"    10
         2 "orange"   20
         3 "banana"   30

SQL>

Here is an example of using the JSON_TABLE function.

select a.id,
       jt.fruit,
       jt.quantity
from   t1 a,
       json_table(a.json_data, '$'
         columns (fruit    varchar2(10 char) path '$.fruit',
                  quantity number path '$.quantity')) jt;

        ID FRUIT        QUANTITY
---------- ---------- ----------
         1 apple              10
         2 orange             20
         3 banana             30

SQL>

We can also query the data using dot notation. Remember the data is binary, so we have to convert it to text using the JSON_SERIALIZE function.

select a.id,

       json_serialize(a.json_data.fruit) as fruit,

       json_serialize(a.json_data.quantity) as quantity

from   t1 a

order by 1;

        ID FRUIT      QUANTITY

---------- ---------- ----------

         1 "apple"    10

         2 "orange"   20

         3 "banana"   30

SQL>

◉ JSON_SCALAR Function

The JSON_SCALAR function creates an instance of a JSON type from a SQL scalar value.

column scalar_number format A20

column scalar_string format A20

column scalar_date format A50

select json_scalar(1) as scalar_number,

       json_scalar('string') as scalar_string,

       json_scalar(date '2020-12-13') as scalar_date

from   dual;

SCALAR_NUMBER        SCALAR_STRING        SCALAR_DATE

-------------------- -------------------- --------------------------------------------------

31                   22737472696E6722     22323032302D31322D31335430303A30303A303022

SQL>

We can see what we created by converting the value back to text using the JSON_SERIALIZE function.

select json_serialize(json_scalar(1)) as scalar_number,

       json_serialize(json_scalar('string')) as scalar_string,

       json_serialize(json_scalar(date '2020-12-13')) as scalar_date

from   dual;

SCALAR_NUMBER        SCALAR_STRING        SCALAR_DATE

-------------------- -------------------- --------------------------------------------------

1                    "string"             "2020-12-13T00:00:00"

SQL>

◉ PL/SQL Object Types for JSON Support

For many operations it may be simpler to use the JSON_TRANSFORM function, introduced in Oracle Database 21c, rather than the PL/SQL Object Types for JSON. If you do need to use the PL/SQL Object Types for JSON, that's no problem.
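As a point of comparison (a minimal sketch against the T1 table used above), a simple value change that would otherwise require round-tripping the document through PL/SQL object types can be done in a single statement with JSON_TRANSFORM.

```sql
-- Change the quantity for row 1 directly in the JSON column.
update t1
set    json_data = json_transform(json_data, set '$.quantity' = 15)
where  id = 1;
```

The SET operation shown here is one of several JSON_TRANSFORM operations; the modification happens entirely inside the SQL engine.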

The JSON_OBJECT_T constructor supports the new JSON data type. The following example retrieves a JSON value from the T1 table and converts it to a JSON_OBJECT_T type. We can then process it with the PL/SQL Object Types for JSON.

set serveroutput on

declare

  l_json  JSON;

  l_obj   json_object_t;

begin

  -- Get the JSON data.

  select json_data

  into   l_json

  from   t1

  where  id = 1;

  -- Create a JSON_OBJECT_T object and output the contents.

  l_obj := json_object_t(l_json);

  dbms_output.put_line('l_obj.stringify = ' || l_obj.stringify);

end;

/

l_obj.stringify = {"fruit":"apple","quantity":10}

PL/SQL procedure successfully completed.

SQL>

Once we've finished processing the JSON data in the JSON_OBJECT_T object, we can convert it back to the JSON data type using the TO_JSON member function, and use the result to update the database.

set serveroutput on

declare

  l_json  JSON;

  l_obj   json_object_t;

begin

  -- Get the JSON data.

  select json_data

  into   l_json

  from   t1

  where  id = 1;

  -- Create a JSON_OBJECT_T object and output the contents.

  l_obj := json_object_t(l_json);

  dbms_output.put_line('l_obj.stringify = ' || l_obj.stringify);

  -- Convert it back to JSON.

  l_json := l_obj.to_json;

  dbms_output.put_line('l_json = ' || json_serialize(l_json));

  -- Update the JSON column.

  update t1

  set    json_data = l_json

  where  id = 1;

end;

/

l_obj.stringify = {"fruit":"apple","quantity":10}

l_json = {"fruit":"apple","quantity":10}

PL/SQL procedure successfully completed.

SQL>

There are a number of new member functions, as well as overloads of existing member functions, that support the new data type.
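For example (a brief sketch using long-standing JSON_OBJECT_T members rather than an exhaustive list of the new overloads), the familiar GET_NUMBER and PUT methods work unchanged on an object built from the new JSON type.

```sql
set serveroutput on
declare
  l_json  JSON;
  l_obj   json_object_t;
begin
  -- Get the JSON data and build an object from the new JSON type.
  select json_data into l_json from t1 where id = 1;
  l_obj := json_object_t(l_json);

  -- Read a value, modify it, and display the result.
  l_obj.put('quantity', l_obj.get_number('quantity') + 5);
  dbms_output.put_line(l_obj.stringify);
end;
/
```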

Wednesday, May 5, 2021

SQLcl now under the Oracle Free Use Terms and Conditions license


SQLcl: The modern command line


SQLcl is the modern command-line interface for the Oracle Database. New features have been added in every release, including recent additions such as comprehensive Liquibase support for the Oracle Database and OCI Cloud Storage. The easiest way to see most of the additions is to issue the help command, which highlights the new features alongside the basics. Additions not shown there include command completion, multi-line buffer editing, and robust history. The most powerful feature is the script command, which enables JavaScript (via Graal) to be mixed in with SQL scripts. It also allows the creation of custom commands, making the tool truly customizable.


Access to SQLcl


SQLcl is distributed via the product page on oracle.com and bundled with SQL Developer, Oracle Database, OCI Cloud Shell, OCI Linux Yum Repository, and the OCI Developer Image in the marketplace.

These channels are all subject to either the click-through license agreement or, in the case of Cloud Shell and OCI Yum, the overall OCI Cloud license.
 

SQLcl in Cloud Shell


SQLcl is installed out of the box in Cloud Shell and is ready to use.


SQLcl in OCI Linux Yum


Any compute node in Oracle Cloud can issue a simple yum install sqlcl to have the tool installed. The same applies to Oracle REST Data Services with yum install ords. The releases in this repository are updated as new releases are published on oracle.com.


SQLcl: Easiest Access


SQLcl has just changed its licensing to the Oracle Free Use Terms and Conditions.

That means the tool can now be downloaded without manually clicking to agree, which also makes automated downloads possible.

The download page on oracle.com is unchanged, so the links can still be found in the usual place. The release that has gone live today is https://download.oracle.com/otn_software/java/sqldeveloper/sqlcl-21.1.1.113.1704.zip

There is also a sqlcl-latest.zip, which will always be the most current release. This makes it as easy as possible to download the latest released version of the tool in an automated way. The path to that zip file is: https://download.oracle.com/otn_software/java/sqldeveloper/sqlcl-latest.zip

From now on, you can download SQLcl via a simple cURL command:

curl -O https://download.oracle.com/otn_software/java/sqldeveloper/sqlcl-latest.zip

or even download, extract, and install it in one go, ready to connect to your Oracle databases.


Source: oracle.com

Tuesday, May 4, 2021

Oracle Database 21c Features Backported to Oracle Database 19c


New in Oracle Database 19c Release Updates

Many customers will be aware that the April 2021 Release Update (RU) for Oracle Database 19c - RU 19.11 - was recently made available for download on My Oracle Support (MOS). Make sure to check out Mike Dietrich's blog post on upgrading to the latest RU 19.11.

Customers should also be aware that the latest Oracle Database 19c 'New Features Guide' has been updated to reflect what's new in each Oracle Database 19c RU (in addition to the original set of new features). Worthy of note is that a few of the new features introduced in Oracle Database 21c have since been backported to Oracle Database 19c, including SQL (Table) Macros (RU 19.7), Blockchain Tables (RU 19.10), and Immutable Tables (RU 19.11).

BTW, the Oracle Database Features and Licensing App has also been updated to reflect new features that have since become available in base releases via recent RUs.


For more information on Oracle Database, visit Oracle.com/database/technologies.

Source: oracle.com