Wednesday, March 31, 2021

NEW: Oracle Zero Downtime Migration 21c


Zero Downtime Migration (ZDM) 21c is available for download! ZDM is Oracle’s premier solution for moving your on-premises Oracle Database workloads to Oracle Cloud, supporting a variety of Oracle Database versions as the source and most Oracle Cloud Database Services as targets. Zero Downtime Migration 21c enhances the existing functionality by adding the long-requested Logical Migration workflow, which provides even more zero downtime migration choices!

What's New in Oracle Zero Downtime Migration 21c?


Logical Migration


In addition to the Physical Migration workflow which has been supported since the first version of ZDM (see for example: https://dbexamstudy.blogspot.com/2021/03/announcing-oracle-zero-downtime.html), ZDM 21c offers a Logical Migration to the Oracle Cloud in eight simple steps, which can be scheduled and monitored as needed. The Logical Migration workflow allows you to migrate to Oracle Autonomous Database leveraging ZDM automation.


You can leverage two logical migration workflows with different methodologies:

◉ Logical Offline Migration
     ◉ Data Pump and Database Links based methodology
     ◉ Data Pump and Backup Location based methodology
◉ Logical Online Migration 
     ◉ GoldenGate, Data Pump and Database Links based methodology
     ◉ GoldenGate, Data Pump and Backup Location based methodology
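The workflow and methodology are selected in the ZDM response file. As a rough sketch only (parameter names follow the ZDM 21c documentation, but the values are illustrative and the full required set is larger; consult the documentation before use), a logical online migration over a database link might be configured like this:

```
MIGRATION_METHOD=ONLINE_LOGICAL
DATA_TRANSFER_MEDIUM=DBLINK
SOURCEDATABASE_ADMINUSERNAME=SYSTEM
TARGETDATABASE_OCID=ocid1.autonomousdatabase.oc1..example
```

Switching DATA_TRANSFER_MEDIUM to a backup location (for example OSS for Object Storage) selects the Backup Location based methodology instead.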

Exadata On-Premises Support


Oracle ZDM 21c includes Exadata Database Machine as a new target. You can now migrate your current database deployments to an on-premises Exadata Database Machine, leveraging the full automation and power of ZDM. Migration to Exadata On-Premises is only supported via ZDM’s physical migration workflow.

Autonomous Database Support


Oracle ZDM 21c can now migrate databases directly to Autonomous Database using any of the logical migration workflows. Supported targets are: 

◉ Autonomous Transaction Processing (Shared & Dedicated)
◉ Autonomous Data Warehouse (Shared & Dedicated)

Non-CDB to CDB Migration Support


Non-CDB to CDB migration is now supported in both Oracle ZDM’s physical and logical migration workflows. The simplified migration process includes the non-CDB to PDB conversion, providing you with more migration flexibility and a wider choice of cloud targets.

Pre-Migration Advisor


A pre-migration advisor that evaluates the source database for feature compatibility and other limits is now included in the logical migration workflow only. Leverage this advisor to better prepare for your database migration journey.

Support for Database Links


Logical Database Migration offers the possibility to establish direct Database Links between the source and the target, thus avoiding the need for a backup location as part of the migration workflow. Support for Database Links is optional, as you can also leverage a backup location such as Object Storage, NFS or the Recovery Appliance for your migration. Database Links based migration is not supported for Autonomous Database Dedicated.

Lighter ZDM Home


The Oracle ZDM software home is now lighter and more compact; users will notice the difference as soon as they download the binaries. ZDM installation and deployment have also been optimized and trimmed, allowing for an even faster start to your first migration.

Source: oracle.com

Monday, March 29, 2021

Announcing Oracle Zero Downtime Migration Release 19.7


ZDM is Oracle’s premier solution to move your on-premises Oracle Database workloads to the Oracle Cloud. ZDM supports a wide range of Oracle Database versions as sources and Oracle Cloud Database Services as targets, and ensures minimal to no production database impact during migration, following Oracle Maximum Availability Architecture (MAA) principles. Zero Downtime Migration Patch Release 19.7 enhances the existing functionality of ZDM, providing bug fixes and more control of the migration process.


Oracle Zero Downtime Migration supports Oracle Database versions 11g, 12c, 18c, 19c and newer, and can migrate to Oracle Cloud Database Services on Virtual Machines, Bare Metal, Exadata Cloud Service and Exadata Cloud at Customer. ZDM performs these Oracle Database migrations to the Oracle Cloud in eight simple steps, which can be scheduled and monitored as needed.


What's New in Oracle Zero Downtime Migration 19.7?

Zero Downtime Migration Patch Release 19.7 enhances the existing functionality of ZDM, providing bug fixes and more control of the migration process. Here is what’s new in a nutshell:


ZDMCLI Command Enhancements


◉ The ZDMCLI MIGRATE DATABASE command has new parameters that facilitate and automate the use of wallet-based credentials. It also no longer requires the -targethome parameter, since the target database home is now automatically discovered. The new wallet-based parameters let users specify the full path for the source sys wallet file, the OSS backup user wallet file, and the TDE keystore wallet file. On the Zero Downtime Migration service host, these parameters are -sourcesyswallet, -osswallet, and -tdekeystorewallet.
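For illustration, a MIGRATE DATABASE invocation using the new wallet parameters might look like the following sketch (the database name, host names, and wallet paths here are hypothetical; other required options are elided):

```
zdmcli migrate database -sourcedb proddb -sourcenode srcdbhost \
  -srcauth zdmauth -targetnode tgtdbhost \
  -rsp /u01/app/zdmbase/zdm_template.rsp \
  -sourcesyswallet /u01/app/zdmbase/wallets/syswallet \
  -osswallet /u01/app/zdmbase/wallets/osswallet \
  -tdekeystorewallet /u01/app/zdmbase/wallets/tdewallet
```

With the wallets in place, no passwords need to be typed interactively or stored in plain text during the migration.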

◉ The ZDMCLI QUERY JOB command expands its functionality with new parameters, providing further information on existing migration jobs. These parameters allow for more complex queries that retrieve an array of information on ZDM migration jobs. Furthermore, queries can now be narrowed to specific source databases, source SIDs, and target nodes.

ZDMCLI Response File Parameter Enhancements


◉ Oracle ZDM uses a response file to provide relevant parameters related to the source database, migration method, target database, backup location, and other required elements.

ZDM Backup-related Response File Parameter Enhancements


◉ Oracle ZDM leverages database backups as part of its overall migration workflow. New parameters have been added to the response file, providing users the ability to set intervals for monitoring and reporting the progress of backup and restore operations. The intervals default to ten minutes and can be disabled by setting them to zero.

◉ Further enhancements to the response file include source database service retention, object store access retry, custom location for CURL, pre-authenticated URL for log files upload, custom TNS_ADMIN location and SSH connection reattempt.

◉ Finally, parameters related to post-migration tasks have been added in order to automate datapatch execution. These parameters allow users to skip datapatch execution post-migration (TGT_SKIP_DATAPATCH), set a maximum wait time for post-migration datapatch operations (MAX_DATAPATCH_DURATION_MINS), and stop all but one database instance on the target until datapatch completes, after which ZDM restarts all stopped instances (DATAPATCH_WITH_ONE_INSTANCE_RUNNING).
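Pulling the datapatch-related parameters together, a response-file fragment might look like this sketch (the parameter names come from the release notes above; the values shown are illustrative, not recommendations):

```
TGT_SKIP_DATAPATCH=FALSE
MAX_DATAPATCH_DURATION_MINS=60
DATAPATCH_WITH_ONE_INSTANCE_RUNNING=TRUE
```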

Bug Fixes


◉ Oracle ZDM Patch Release 19.7 provides a series of bug fixes addressing issues in different aspects of the migration workflow. The list of all bug fixes can be found in the ZDM Documentation and Release Notes section on the ZDM homepage.

Source: oracle.com

Friday, March 26, 2021

Building a modern app with Oracle's Converged Database

Synergized, Secure, Scalable: Oracle’s Converged Database helps Development Teams Get Application-Smart

Business today runs on high-performance applications. But in order to innovate and differentiate in an always-on world, application development needs to be swift, efficient, and responsive to complex data. Oracle’s converged database delivers a unified solution loaded with the power to move business forward at speed.

When it comes to doing business in the 21st century, a company’s success depends on its applications. Not only do applications differentiate a company, their strength, speed, responsiveness, reliability, and performance can make or break the customer experience.


As a result, modern applications are more prevalent than ever. To customers, they represent brand experience. To employees, they’re the key to efficient operations and empowerment. To executives, apps drive business growth and competitiveness in the marketplace — factors measured externally by visibility as well as customer acquisition, retention, and loyalty. Internally, apps drive analytics and insight into all operations, which in turn facilitate better corporate decision-making.

In light of all this, drop a layer deeper (across any enterprise) and an important truth becomes crystal clear. Databases are the backbone of every application.

Databases not only contain rafts of crucial data, they keep it safe, secure, and available 24/7. But this data is diverse. Gone are the days of capturing simple transactions and generating monthly reports. Now, customers are their own transaction agents. They travel and transact from mobile phones, laptops, and other devices, any time of day or night. They nurture relationships to other people, places, and things. Their needs and behaviors drive the always-on world.

When it comes to capturing all this data, this trend of transactions pushing out to customers creates exponential compounding of data. But let that data remain latent or untapped, and not only will a company’s applications suffer, the company will also miss the real-time cues that are most valuable to any organization.

Many companies are making their way through this complex landscape with piecemeal solutions — technologies and architectures reached for as needed over time; some of them proprietary, some home-grown, others open-source. Regardless, the resulting patchwork can’t handle the data load. Data fragments. Security becomes inconsistent. 

Administration is ad hoc. The systems become brittle as data compounds, which quickly hampers a company’s agility, as well as its ability to innovate. 

This exacts a high price: a literal and metaphoric cost that can be radically mitigated with Oracle’s converged database.

Let’s unpack some of the key ways Oracle’s synergized solution delivers the power to keep you smart. Application Smart. On our way there, we’ll begin with a fairly common view of the challenges many companies face, and how IT organizations typically respond — and can course-correct.

The modern application experience is continuously evolving. This change in business needs demands fluid adaptation when it comes to the back-end. Tech stack, application architecture, dev/test, release processes, support: IT’s responsiveness, and its ability to efficiently handle various workflows, becomes the fulcrum of the enterprise — the leverage-point that requires deliberate strategies and savvy choices to deliver the necessary simplification, unification, and automation of architectures, technologies, and processes.

Without these deliberate choices, applications that combine analytics, geo-location, network relationships, machine learning, and transaction activity are often fueled by separate, single-purpose databases and their resulting technology stacks. That may seem viable at first blush. But within this solution construct, the more data moves, the more it degrades an application’s performance. Dev time increases. Release schedules lag. Apps become less competitive. And each one requires IT specialists trained in specific stacks.

The overall hit to efficiency can be devastating.

To demonstrate this with specifics, let’s take a quick look at the data, data types, and demands of a truly modern application.

For a long time, data and data types meant rows and columns representing relational data. As IT has advanced, data such as key-value, spatial, graph, blockchain, document, time series, and IoT data has subsequently emerged. Each of these datasets must co-exist with relational data. Seen in its totality, every modern, data-driven application has inherent data-type complexity. As a result, workloads like transactional, analytics, machine learning, and IoT require different database algorithms to solve for unique demands. This in turn has given rise to single-purpose database solutions.

But while single-purpose databases might appear to be best-of-breed in isolation, they quickly become worst-of-weakness when cobbled and patched together.

As an example, let’s look at the retail application of “eShop.biz”, a fictitious company with a common challenge that came to us after having chosen independent, single-purpose databases as its architecture for online retail.

As you can see in the graphic below, each of eShop.biz’s retail services (product and customer catalogs; payment, geo-location, cache, and recommendation engine services; plus finance, inventory, and delivery services) relied on its own single-purpose database.

A visual example of eShop.biz’s various single-purpose databases and some of the tech stacks each database holds. 

Committed to meeting the ever-changing demands of B2C online retail, eShop.biz opted to run their prime retail application on the compute layer, hosting the processing of all complex application logic for multiple services within auto-scaling compute environments. These services needed to communicate with the data layer, which, in this case, was built from linked, single-purpose databases, each of which solves a unique and specific problem (such as provisioning a document database for a specific catalog requirement, or a spatial database for a geo-location requirement). When viewed one by one, these discrete databases appeared to offer the easiest, most natural, specific solutions for the dev team tasked with adopting the APIs.

But quickly, the reality proved otherwise.

For one, the product-catalog service needed to propagate data from its own unique database to three other databases — in this case, the cache, spatial, and warehouse databases. This meant eShop.biz’s dev team needed to write additional code for the job, which in turn meant any developer assigned to it needed to deeply understand cache and spatial databases, in order to make sure both that transaction communications were frictionless across proprietary APIs and that response times were fast enough.

Meanwhile, as the above change was being implemented within the production system, an additional, multi-pronged change needed to be considered. This involved the addition of a geo-spatial service that would empower the recommendation engine to suggest relevant items to the multi-faceted production system. All of this needed to work seamlessly with the warehouse databases, as well as the customer catalog for regional performance, while adding machine-learning capabilities to the warehouse databases in order to optimize eShop.biz’s product-delivery engines.

Complexity grew exponentially.

In this case, each single-purpose database had its own DDL and DML. As a result, adding the above new features required merging data and application calls, which in turn involved additional code, written in the proprietary APIs of each discrete database. Simultaneously, IT needed to make absolutely sure they weren’t impacting any earlier change with their required additional code.

Very soon, IT realized every change had a base cost that was a function of compute, network, storage, integration, coding, management, and more. Most important, in addition to base-cost per change, the dev team also needed to address the security profiles related to each change, making sure security was in sync with the prime application logic — all while eShop.biz continued to scale.

Needless to say, eShop.biz’s IT organization had a tough time meeting the demands of its retail application. Increasing complexity; proprietary APIs, languages, and transaction models; fragmented data; inconsistent management; fractured security; limited views into various datasets (as opposed to a holistic view); apps locked to a discrete database; slow development cycles … the problems multiplied. eShop.biz’s prime application was neither fault-tolerant nor able to shard data. Far worse, scalability became markedly difficult, if not impossible.

eShop.biz knew they needed a multi-pronged solution with the power to address the myriad challenges they suddenly faced.

At the core of the problem, eShop.biz needed to make data-driven application development and deployment simple, streamlined, and swift. They needed faster development cycles (agile development with programming languages), along with easier management, data consistency, data security (the kind that would give them granular security controls, and security profiles at the database level), plus native high-availability, scalability, and cross-site consistency.

Equally important, eShop.biz needed to host multi-model data types, such as JSON, spatial, graph, timeseries, and ledger data. They also needed to run different workloads together, while hosting multiple tenants and in-memory area/cache streams without any code changes. Above all, they needed to streamline to one language, versus many disparate, specialized languages.

Oracle was uniquely positioned and ready to help, as Oracle’s converged database addressed eShop.biz’s many challenges.

To use a perfect analogy, at the highest level, a converged database is akin to a smart phone — a single device that houses what used to entail multiple separate tools and functions. What once required an analog calendar, desk phone, audio player, alarm clock, pager, camera, etc. became highly simplified and collapsed into a single device — a platform that delivered everything within a unified operating system.

The smart-phone — a single device that seamlessly runs multiple apps, tools, and functions — is the perfect analogy for the workload aggregation offered by Oracle converged database.

As the following graphic illustrates, a converged database operates in a similar way. It supports the seamless mixing of workloads, data types, and algorithms. It enables SQL, as well as transactions across any data type; it provides unified security and management across all data; and it prevents data fragmentation and copy contagion. 

Oracle’s converged database delivers a single, unified system that supports IT organizations in achieving peak efficiency and cost savings.

Beyond that, by enabling powerful synergies across features, Oracle’s converged database removes initial and recurring integration costs, while being simpler, more reliable, and far more economical overall.

eShop.biz soon realized that the multi-model flexibility of Oracle’s converged database encompassed relational, JSON, XML, graph, spatial, and OLAP. It also facilitated multiple workloads (including transactions, analytics, in-memory, IoT, streaming, and blockchain), while offering multitenant functionality with container, isolation, orchestration, and scale-out with transactional consistency.

Oracle’s converged database delivers synergy, efficiency, security, and performance within a single, unified system.

For eShop.biz’s developers, Oracle’s converged database provided one database for all development, while allowing APIs and queries to be written across data types. Meanwhile, dev ops was able to provision dev and test environments for support and UAT much more quickly. Plus, fewer tech-stacks meant dev teams could work efficiently without specialized knowledge. Simultaneously, system admins and DBAs enjoyed consistent, high availability and disaster recovery across all data types and data processing. Even better, data was easily and consistently governed, secured, redacted, and masked for testing and support. 

When it came to infrastructure management, eShop.biz gained a platform that could run as easily in an on-premises data center as it could in the cloud, while providing the ease of migration as budgets and business dictated.

Diving deeper into detail, once deployed, Oracle’s converged database empowered eShop.biz with predictable performance even as it scaled, with Real Application Clusters (RAC), sharding, and real-time in-memory cache analytics requiring no code changes. When it came to data availability, eShop.biz was also able to build solutions with Oracle Active Data Guard, implementing maximum availability architecture (MAA) with RAC, GoldenGate, and Data Guard for the converged database.

Security was also strengthened, with controls like Encryption Everywhere, VPD, redaction, masking, Oracle Label Security, Oracle Audit Vault, and Oracle Database Firewall that secured databases with granular profiles while providing an easy way to implement them. Furthermore, with solutions like Oracle GoldenGate and Oracle Database Advanced Queuing (AQ), the database could handle feeds from distributed systems. All while eShop.biz developers were able to make use of REST services, JSON, XML, Python, Node, SQL, PL/SQL, and machine learning — all of which are native to Oracle’s converged database.

But perhaps the most powerful benefits delivered by the synergy, efficiency, security, and overall performance of Oracle’s converged database resulted in the ability of eShop.biz to accelerate innovation. No longer hampered by compounding complexity, eShop.biz’s IT organization was able to measurably simplify IT operations. This allowed them to quickly respond to eShop’s business needs with smart solutions that reduced business risk in the process.

As a final example of the synergy, efficiency, and multiple benefits of the Oracle converged database, let’s look at how eShop.biz made an architectural change at the database layer.

eShop.biz streamlined and optimized its architecture at the database layer by adopting the Oracle converged database.

Setting the stage, Oracle’s converged database natively hosts unique data types, including JSON, XML, spatial, graph, timeseries, blockchain, tables, and more. These are run from a single transaction engine, thereby eliminating exponential complexity at scale.

Oracle’s converged database also seamlessly handles different transaction modes, such as analytics, transaction, in-memory, IoT, and blockchain. And its multi-tenant deployment provides a container platform, along with the necessary isolation among the various services. 

Taken together, the above benefits (plus more itemized below) introduced efficiencies and cost savings within eShop.biz’s IT organization, while enabling innovation that improved eShop.biz’s offering and performance.

Because Oracle’s converged database is a single, unified system that treats all data types with a single DML (Oracle SQL), eShop.biz’s dev team was able to achieve transaction consolidation: actions that had formerly been performed across single-purpose databases, each of which required additional code, now took place in a single architecture. This consolidation eliminated additional cost, effort, maintenance, and time-consuming integration.

Additionally, by leveraging Pluggable Databases in achieving microservice data isolation with multitenant, eShop.biz simplified its microservice architecture by improving simultaneous isolation among containers, even at scale. This enabled eShop’s prime retail app to quickly respond to business demand. 
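As a minimal sketch of this isolation pattern, each microservice can be given its own pluggable database inside one container database. The PDB and admin-user names here are hypothetical, and the statements assume Oracle Managed Files (or PDB file-name conversion) is configured:

```sql
-- Run as a common user in the container database (CDB): one PDB per service.
CREATE PLUGGABLE DATABASE catalog_pdb  ADMIN USER pdb_admin IDENTIFIED BY "ChangeMe_1";
CREATE PLUGGABLE DATABASE payments_pdb ADMIN USER pdb_admin IDENTIFIED BY "ChangeMe_1";

-- Open the new PDBs so the services can connect to them independently.
ALTER PLUGGABLE DATABASE catalog_pdb OPEN;
ALTER PLUGGABLE DATABASE payments_pdb OPEN;
```

Each service then connects to its own PDB, getting container-style isolation while the CDB provides shared management, backup, and high availability.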

Furthermore, because Oracle’s converged database automatically generates REST APIs on top of SQL, eShop.biz developers were able to code less while accessing data like any other service. Not only did this standardize their API workflows, it allowed them to generate new services across warehouses, which helped the business optimize its logistics.
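The REST layer referred to here is Oracle REST Data Services (ORDS) with its AutoREST feature. Roughly, exposing a table over REST takes a couple of PL/SQL calls like the following sketch (the ESHOP schema and PRODUCTS table are hypothetical):

```sql
BEGIN
  -- Enable the schema for ORDS, then AutoREST-enable one table in it.
  ORDS.ENABLE_SCHEMA(p_enabled => TRUE, p_schema => 'ESHOP');
  ORDS.ENABLE_OBJECT(p_enabled     => TRUE,
                     p_schema      => 'ESHOP',
                     p_object      => 'PRODUCTS',
                     p_object_type => 'TABLE');
END;
/
```

After that, the table is reachable over standard REST endpoints (GET, POST, PUT, DELETE) without hand-written API code.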

As any dev knows, moving data takes time. eShop.biz developers were able to minimize that time — especially when it came to analytics — with in-place machine learning, since Oracle’s converged database allows machine-learning training to occur with the data in place (by bringing the algorithms to the data). With 30 algorithms (and counting), this led to additional cost savings for eShop IT.

In-place graph analytics offered a similar advantage to eShop.biz developers, as they found it simpler to discover influencers and dependencies, while better understanding various insights revealed through relationships.

Summing it up, eShop.biz was able to radically simplify their architecture, and thereby, their IT operations, with the many benefits delivered through Oracle’s converged database.

Best of all, the ongoing advantages delivered by Oracle’s unified system have made it possible for eShop.biz developers to efficiently address application requirements while achieving greater operational, security, availability, scalability, and deployment efficiencies. This meant they were able to activate innovation that has optimized and transformed eShop’s business.

Source: oracle.com

Wednesday, March 24, 2021

Graph Databases: What Can They Do?


Let’s say you have to analyze customer buying behavior, uncover collusive crime in financial transactions, provide product recommendations, or discover information about social networks.

Discovering the answer to each of these questions can be complicated with typical systems.


How Do Graphs Work?


Taking data out of the restrictions of the relational database makes it easier to traverse your data to find connections—that is, if you map the data into the form of a graph with entities represented as vertices and relationships represented as edges.

By doing so, you can discover relationships that weren’t obvious before, which is why graph technology has become increasingly popular and important in today’s world of connected data. For the last three years running, Gartner has selected graph technology as one of its top ten data and analytics trends.

Why Have Graphs Become Popular?


Graph theory has been around for centuries, and graph technology has been around for a few decades. But it has recently grown in popularity for a few key reasons:


◉ Graphs enable new kinds of analysis that are more needed than ever in today’s connected world. They complement relational technology and can become the basis for machine learning, especially with neural networks.

◉ Graphs are very visual, often with easily observable results, and they provide simple data modeling.

◉ Graphs have a very flexible data model with no predefined schema, making them extensible and also useful in cases with sparse data.


What Do Graphs Do?


Graphs are designed to help you model relationships in your applications. They are purpose-built to handle highly connected data. Because so much of today’s data is connected through relationships, and both the volume and connectedness of data keep increasing, there is a tremendous opportunity for graphs to provide extraordinary value for your organization. Uses include:

◉ Use graph analytics to discover social media bots
◉ Find fraud payments quickly
◉ Track the use of sensitive data for compliance purposes
◉ Discover tax evasion
◉ Improve datacenter management
◉ Provide product recommendations
◉ Detect anomalies in social networks, healthcare, and more
◉ Discover outages in utilities networks
◉ Spot vulnerabilities in IP networks
◉ Perform turnover analysis to keep valuable employees
◉ Create portals for citizens to access data from multiple sources, including statistical data, gender, or demographic data
◉ Track chemical and drug names for pharmaceutical companies
◉ Track proteins for vaccine development

The possibilities are virtually limitless. If your data is connected via relationships in some way, graph databases can likely help. Questions that graphs can answer include:

◉ Which supplier am I most dependent upon?
◉ Who is the most influential customer?
◉ Do my products appeal to certain communities?
◉ Which anomalous patterns are there that could indicate fraudulent behavior?

Best of all, you can answer these questions with queries that run many times faster. For more use cases, download our ebook on graph technology use cases, but keep in mind that our suggestions are only the beginning.

Graph Analytics in Oracle Database and Oracle Autonomous Database


Oracle provides a graph analytics engine with Oracle Database and Oracle Autonomous Database so users can discover more insights in their data by using the power of graph algorithms, pattern matching queries, and visualization.

Having graph technologies in Oracle Database and Oracle Autonomous Database provides scalability, performance, and security. Users can run graph analytics with more than 60 graph algorithms, query graphs with a graph query language, and build visualizations. Standard interfaces simplify this process, and integration with machine learning tools makes it simpler to apply graph results to machine learning.

How Does Graph Technology Work?


Oracle Database is built upon the idea of a multi-model database: one database that supports multiple data models, languages, and protocols.

Of course there is the relational database. But Oracle also supports additional data models, including JSON, XML, spatial data, and of course, graph structures.


Oracle’s graph database has an in-memory server that holds data in a graph format, providing easy access and high speed analytics along with the power of the database. This combines the best of both worlds—graph data held in graph structures, and the power of Oracle Database.

Storing data as a graph and loading it in memory requires only a few easy steps. You can directly load data from standard relational tables into the in-memory server as a graph.

Alternatively, load the data into a property graph schema, for a persisted copy of the graph, and load from the property graph schema into the in-memory graph server. 

Soon, Oracle will release Graph Studio, which builds on existing graph capabilities to make graph analytics and graph database management easier for everyone. It includes automated modeling, integrated visualization, and pre-built workflows for different use cases.

With graph technologies, you can unfold the data landscape in a completely new way. Discover insights. Solve complex problems. Unlock endless possibilities.

Source: oracle.com

Monday, March 22, 2021

JSON_SERIALIZE in Oracle Database 19c


The JSON_SERIALIZE function converts a JSON document from any supported data type to text.

◉ The Problem

We can store JSON data in a number of different data types, including binary types. Let's create a test table to demonstrate the issue.

-- DROP TABLE json_documents PURGE;

CREATE TABLE json_documents (
  id    NUMBER,
  data  BLOB,
  CONSTRAINT json_documents_is_json CHECK (data IS JSON)
);

INSERT INTO json_documents (id, data) VALUES (1, '{"id":1,"first_name":"Iron","last_name":"Man"}');
COMMIT;

If we try to display the data directly, we don't get anything useful.

SELECT data FROM json_documents;

DATA
--------------------------------------------------------------------------------
7B226964223A312C2266697273745F6E616D65223A2249726F6E222C226C6173745F6E616D65223A

SQL>


We can manually convert a BLOB to a CLOB. For example, we can use the BLOB_TO_CLOB function created by the blob_to_clob.sql script or, for small amounts of data, the UTL_RAW package.

SELECT blob_to_clob(data) AS data FROM json_documents;

DATA
--------------------------------------------------------------------------------
{"id":1,"first_name":"Iron","last_name":"Man"}

SQL>

SELECT UTL_RAW.cast_to_varchar2(data) AS data FROM json_documents;

DATA
--------------------------------------------------------------------------------
{"id":1,"first_name":"Iron","last_name":"Man"}

SQL>

We could also use the JSON_QUERY function to return the whole document, rather than a fragment.

SELECT JSON_QUERY(data, '$') AS data FROM json_documents;

DATA
----------------------------------------------------------------------------------------------------
{"id":1,"first_name":"Iron","last_name":"Man"}

1 row selected.

SQL>

◉ JSON_SERIALIZE Basic Usage

The documentation provides the following description of the JSON_SERIALIZE function.

JSON_SERIALIZE (target_expr [ json_query_returning_clause ] [ PRETTY ]
                [ ASCII ] [ TRUNCATE ] [ json_query_on_error_clause ])

The target expression is the JSON we want to convert.

In its basic form we can convert the JSON data from any supported data type to text, similar to what we did with the BLOB_TO_CLOB function.

SELECT JSON_SERIALIZE(data) AS data FROM json_documents;

DATA
--------------------------------------------------------------------------------
{"id":1,"first_name":"Iron","last_name":"Man"}

SQL>

We can use the JSON_SERIALIZE function to convert the output from other SQL/JSON calls. In this case we use the JSON_OBJECT function to produce a JSON document in binary form, then convert it to text using the JSON_SERIALIZE function.

SELECT JSON_SERIALIZE(
         JSON_OBJECT(empno, ename, hiredate RETURNING BLOB)
       PRETTY) AS data
FROM   emp
WHERE  empno = 7369;

DATA
--------------------------------------------------------------------------------
{
  "empno" : 7369,
  "ename" : "SMITH",
  "hiredate" : "1980-12-17T00:00:00"
}

SQL>

◉ Format Output

The returning clause works like that of the other SQL/JSON functions.

The PRETTY keyword displays the output in a human readable form, rather than minified.

SELECT JSON_SERIALIZE(a.data PRETTY) AS data
FROM   json_documents a
WHERE  a.data.first_name = 'Iron';

DATA
--------------------------------------------------------------------------------
{
  "id" : 1,
  "first_name" : "Iron",
  "last_name" : "Man"
}

SQL>

The TRUNCATE keyword indicates the output should be truncated to fit the return type. In the following example the return type is VARCHAR2(10), so the output is truncated to fit.

SELECT JSON_SERIALIZE(a.data RETURNING VARCHAR2(10) TRUNCATE) AS data
FROM   json_documents a
WHERE  a.data.first_name = 'Iron';

DATA
---------------------------------------------------------------------------------
{"id":1,"f

SQL>

Unlike with some of the other SQL/JSON functions, the TRUNCATE keyword doesn't seem necessary here, as the output appears to be truncated to match the returning clause even without it.

SELECT JSON_SERIALIZE(a.data RETURNING VARCHAR2(10)) AS data
FROM   json_documents a
WHERE  a.data.first_name = 'Iron';

DATA
---------------------------------------------------------------------------------
{"id":1,"f

SQL>

The ASCII keyword indicates the output should convert any non-ASCII characters to JSON escape sequences.
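A minimal sketch of the ASCII keyword (the input literal below is our own illustration, not from the article):

```sql
-- Illustrative sketch: with the ASCII keyword, a non-ASCII character
-- such as 'ë' (U+00EB) is emitted as a JSON escape sequence (\u00EB)
-- in the serialized output.
SELECT JSON_SERIALIZE('{"first_name":"Zoë"}' ASCII) AS data
FROM   dual;
```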

◉ Error Handling

If there are any failures during the processing of the data, the default response is to return a NULL value. The way an error is handled can be specified explicitly with the ON ERROR clause.

-- Default behaviour.

SELECT JSON_SERIALIZE('This is not JSON!' NULL ON ERROR) AS data

FROM   dual;

DATA

--------------------------------------------------------------------------------

SQL>

SELECT JSON_SERIALIZE('This is not JSON!' ERROR ON ERROR) AS data

FROM   dual;

       *

ERROR at line 2:

ORA-40441: JSON syntax error

SQL>

◉ PL/SQL Support

There is no support for JSON_SERIALIZE in direct PL/SQL assignments.

SET SERVEROUTPUT ON
DECLARE
  l_blob  BLOB;
  l_clob  CLOB;
BEGIN
  l_blob := UTL_RAW.cast_to_raw('{"id":1,"first_name":"Iron","last_name":"Man"}');
  l_clob := JSON_SERIALIZE(l_blob);
  DBMS_OUTPUT.put_line('After : ' || l_clob);
END;
/

  l_clob := JSON_SERIALIZE(l_blob);
            *
ERROR at line 6:
ORA-06550: line 6, column 13:
PLS-00201: identifier 'JSON_SERIALIZE' must be declared
ORA-06550: line 6, column 3:
PL/SQL: Statement ignored

SQL>

The simple workaround for this is to make the assignment using a query from dual.

DECLARE
  l_blob  BLOB;
  l_clob  CLOB;
BEGIN
  l_blob := UTL_RAW.cast_to_raw('{"id":1,"first_name":"Iron","last_name":"Man"}');

  SELECT JSON_SERIALIZE(l_blob)
  INTO   l_clob
  FROM   dual;

  DBMS_OUTPUT.put_line('After : ' || l_clob);
END;
/

After : {"id":1,"first_name":"Iron","last_name":"Man"}

SQL>

◉ 21c Update

Oracle 21c introduced the JSON data type. The JSON_SERIALIZE function also supports this new data type.
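As a brief sketch (the table name is hypothetical), JSON_SERIALIZE can be applied directly to a column of the native JSON type in 21c:

```sql
-- Hypothetical example table using the 21c native JSON data type.
CREATE TABLE json_documents_21c (
  id    NUMBER,
  data  JSON
);

INSERT INTO json_documents_21c (id, data)
VALUES (1, JSON('{"id":1,"first_name":"Iron","last_name":"Man"}'));

-- Serialize the native JSON value to text, pretty-printed.
SELECT JSON_SERIALIZE(data PRETTY) AS data
FROM   json_documents_21c;
```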

Friday, March 19, 2021

It’s Independence Day every day with the new Autonomous Data Warehouse Data Tools


In an earlier blog, we reviewed how Oracle Departmental Data Warehouse enables business teams to get the deep, trustworthy, data-driven insights they need to make quick decisions. We described how the governed, secure solution reduces risks and complexity while increasing both IT and analysts’ productivity, allowing IT teams to rely on a simple, reliable, and repeatable approach for all data analytics requests from business departments.

Did we stop there? No we didn’t.

We aim to keep providing more value to analysts, as well as to line of business developers, data scientists, and DBAs. To that end, we recently released a new suite of data tools included for free in Autonomous Database.

Indeed, while Oracle Departmental Data Warehouse enables business users in finance, HR and other departments to independently set up data marts in minutes and rapidly get insights from a single source of truth, they may still have needed to turn to IT for operations such as data loading and transformation. The new Autonomous Data Warehouse data tools further decrease business users’ reliance on IT teams – representing a benefit for both groups.

This new suite of built-in, self-service tools includes:

1. Data Loading

Business users can perform drag-and-drop data loading to swiftly load data themselves from local files (such as spreadsheets), databases, and object storage (Oracle and non-Oracle) into Autonomous Data Warehouse. No need to call on IT.


Once loaded, analysts can easily inspect the data and identify quality problems. They can then remediate any quality issue with data transformation, which we’ll talk about next.


2. Data Transformation


Analysts can, independently from IT, perform drag-and-drop data transformation with zero coding required. Business users simply drag and drop to specify what they want to do, without worrying about how the tool does it under the covers. Transformations can include, for example, filtering out unneeded data and cleansing it.


The new Data Transforms tool leverages the full power of Oracle Data Integrator (ODI) through a new, easy-to-use web interface. Code is automatically generated for all data sources and targets supported by ODI, including databases, applications, and other sources.


3. Business Modelling


Considering that analysts typically work with semantic/business models rather than directly against tables in a database, our new Business Models tool automatically discovers relationships within data (hierarchies, facts, attributes, levels) and makes it extremely easy to build a business model. This business model is created in Autonomous Data Warehouse, providing a consistent view that can be accessed by all analysts, application developers, and data scientists, and used with any analytics tool. It therefore prevents the potential inconsistencies arising from different business models being defined in different analytics tools by different teams.


Another very important benefit is query performance. By recognizing the hierarchy defined in the database, Autonomous Database can automatically pre-compute and store top-level aggregates. It then transparently rewrites queries to access them, delivering exceptional performance even with huge data sets and federated data sources.
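The underlying mechanism resembles a materialized view with query rewrite enabled. As an illustrative sketch (table and column names are hypothetical, and Autonomous Database manages such summaries automatically, so this manual DDL only illustrates the idea):

```sql
-- Hypothetical sketch: a pre-computed top-level aggregate.
-- With query rewrite enabled, the optimizer can transparently
-- answer matching "SUM(amount) ... GROUP BY region" queries from
-- this stored summary instead of scanning the detail table.
CREATE MATERIALIZED VIEW sales_by_region_mv
  ENABLE QUERY REWRITE
AS
SELECT region, SUM(amount) AS total_amount
FROM   sales
GROUP  BY region;
```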


4. Data Insights


How would you like to automatically get insights from the data you’ve just loaded? This is exactly what Data Insights does. Autonomous Database uses its knowledge of the business model to automate the ‘slice and dice’ process that an analyst usually performs manually. Data Insights automatically discovers anomalies, outliers, and hidden patterns in data using built-in machine learning algorithms. It then represents deviations from expected values via charts in Autonomous Data Warehouse, enabling analysts to investigate further. This capability saves them significant time and effort.



5. Catalog


The new Catalog, built into Autonomous Database, centralizes metadata to deliver a full and consistent view of your organization’s data and its location. It provides two key benefits:

◉ Data lineage: A clear view of data provenance (what data sets were combined, what transformations were applied, and so on) means that all stakeholders trust the data, and therefore the insights and predictions, which is a major benefit. Teams can focus on making decisions and taking actions, as opposed to arguing about the data (sound familiar?). Additionally, data lineage is extremely useful for data governance and helping to ensure regulatory compliance.

◉ Impact Analysis: A tree of dependencies helps teams understand the impact of potential changes and fix data processing issues by replaying lineage steps.



In summary, the new Autonomous Data Warehouse data tools help make business teams even more independent from IT, so they can rapidly go from data to insights. The suite equips them with powerful enterprise-class data management capabilities accessed from a simple, drag-and-drop interface. There is nothing more to buy, install, or integrate. The more independent business users are, the faster they can make decisions and take actions.

And that’s also great news for overworked DBAs and IT teams who may have a hard time keeping up with business users’ demands for access to ever more data from disparate sources. IT can provide business departments with the autonomy they desire, within the framework of a governed and secure solution, reducing risks while improving their own productivity and ability to focus on higher-value tasks.

In addition to the new data tools, analysts, line of business developers, data scientists, and DBAs can leverage the following Autonomous Database built-in, self-service capabilities:

◉ In-database Machine Learning: to build and deploy high-performance Machine Learning models using Python or SQL, without moving the data to another system.

◉ Graph analytics: to learn from the relationships in data by easily converting relational data into a graph model with automated tools.

◉ Spatial analytics: to manage different types of geospatial data, perform hundreds of location intelligence analytics, and use interactive map visualization tools.

◉ Low-code application development: to build and deploy modern data-driven applications up to 38X faster than with traditional coding in Oracle APEX, a preconfigured, secured and fully managed low-code application development platform.

◉ Security assessment: to discover sensitive data, mask it, evaluate security risks, and implement security controls with Oracle Data Safe.
 
Oracle Autonomous Data Warehouse eliminates all the complexities of operating a data warehouse, automating provisioning, configuring, securing, tuning, scaling, patching, backing up, and repairing of the data mart/data warehouse. The complete suite of built-in, self-service tools, including the new data tools we discussed in this blog, enables analysts, data scientists, and line of business developers to independently deliver even faster results, accelerating insights and time to market. Hence for them, it’s indeed Independence Day every day! And IT teams can enjoy the fireworks with a solution reducing risks and improving productivity.

Source: oracle.com

Wednesday, March 17, 2021

How to Simplify Development and Optimize your Data with Oracle's Converged Database

 In our digitized, data-fueled world, enterprises must create unique assets — and make the most of them — in order to remain competitive. This puts developers front and center when it comes to managing and optimizing database performance. Developers not only shape the underlying technologies throughout the stack, they now drive efficiency and best-practices from a variety of hybrid roles (DevOps, Full-Stack, etc.). But developers are often forced to choose between data productivity or their own. 

Oracle's converged database has rendered that dilemma obsolete.

Developers know better than anyone that the digital landscape is ever-changing. New languages, new frameworks, new tools, new protocols — the demands and opportunities can quickly become overwhelming unless there’s a clear signal amid the noise. In keeping with the KISS (“keep it simple”) principle, developers strive for modern development while not sacrificing operational efficiencies required to keep their companies competitive.

Single-purpose databases, often known as purpose-built databases, are engineered to solve targeted problems. However, when operations grow, integration of multiple single-purpose databases is required. 

Enter Oracle's converged database — a platform loaded with a suite of capabilities that are simpler, easier, and far less error-prone than multiple single-purpose databases combined. 

In this article, we’ll highlight a few key advantages delivered by Oracle converged database that help our customers keep it simple while staying smart. Development Smart. Especially when developing with multiple data types.

As companies pursue the efficiency and relevancy necessary for their success, they must digitize complex processes and data-integration at scale. As a result, developer teams are faced with a difficult choice: optimize for fast application-development now, or prioritize easier data-capture later. Developer productivity or data productivity. It’s a difficult either/or.

In the first case (developer productivity), developer teams might spin-up single-purpose databases for specific projects (especially when it comes to greenfield development). Each database offers a low barrier for entry that often includes a convenient data model for the purpose at hand along with related APIs. Development moves quickly, as each database has its own operational characteristics that are useful in the immediate. But as a project grows and additional databases or cloud services are required, complexity multiplies and data fragments. Efficiencies gained are quickly lost, as each database’s once-enticing characteristics no longer function at scale. As a result, developer teams face long-term difficulties such as lack of standardization (APIs, query languages, etc.), data fragmentation, security risks, and management gaps.

A snapshot of some of the integration difficulties produced by single-purpose databases.

In the second case (data productivity), especially when it comes to brownfield projects, developers might build on a corporate standard database that’s usually a generic relational or relational multi-model database. But while the corporate standard database enforces official policies and simplifies things such as operations, security, and data reuse (reporting, analytics, etc.), limited functionality can often slow and even prevent innovation. This risks putting the entire business at a disadvantage in a fast-paced, highly competitive market landscape.

Neither of the above scenarios are ideal, and both are limiting in their own ways. Which begs an important question: What if there were a third option? 

That option is Oracle's converged database — a first-in-class solution that balances both developer and data productivity.

At its highest level, a converged database is a multi-model, multi-tenant, multi-workload architecture (see the graphic below). More specifically, Oracle's converged database delivers the data models and access methods needed for modern development on all workloads (OLAP, OLTP, IoT, etc.). Better, it delivers the specific tooling, security, integration, and operational characteristics developers and developer teams are seeking — powerful, smart capabilities that simplify, streamline, and out-perform at scale.

Oracle Database 19c is the world’s first converged database — one that balances developer and data productivity for a powerful solution at scale.

As any developer will tell you, talk is cheap; show me the code. So let’s take a look at two different environments, along with a few code examples, to better understand how Oracle's converged database streamlines the data-tier for development.

We will illustrate the inherent simplicity of using Oracle's converged database by comparing two three-tier deployments of a vacation-rental online-marketplace application. (In this case, we’ll use an Airbnb clone named “Host-a-bnb,” depicted in the graphic below.)

The unique data assets required for this application are:

1. Key-Value (used to store users’ search-histories in order to deliver them quick, personalized future searches)
2. Graph (used to store users’ relationships with one another in order to optimize and enhance social-media features and functionality)
3. Relational (used to store users’ transactions for booking and payments)

The requirements above translate to the following (for simplicity, we’ll forgo a caching layer):

1. The single-purpose database deployment will consist of MongoDB (key-value), Neo4j (graph), and Firebird (relational) databases.
2. The Oracle Database deployment will consist of a single pluggable database with corresponding tables for key-value, graph, and relational data.
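The single-pluggable-database schema can be sketched as follows. This is an illustrative guess at the tables referenced by the query examples later in this article (BOOKINGS, RELATIONSHIPS, SESSION_HISTORY); the column definitions are our own assumptions:

```sql
-- Illustrative sketch: all three data shapes live in one schema.
-- Relational data: bookings and payments.
CREATE TABLE bookings (
  booking_id    NUMBER PRIMARY KEY,
  user_id       NUMBER,
  booking_date  DATE
);

-- Graph data: user-to-user relationships, traversable
-- hierarchically or via a property graph.
CREATE TABLE relationships (
  user_id            NUMBER,
  friend_id          NUMBER,
  relationship_type  VARCHAR2(30),
  friendship_length  NUMBER,
  booking_id         NUMBER
);

-- Key-value/document data: per-user session histories as JSON.
CREATE TABLE session_history (
  json_doc  BLOB,
  CONSTRAINT session_history_is_json CHECK (json_doc IS JSON)
);
```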

A simplified example of Host-a-bnb’s three-tier environment when run on multiple single-databases versus Oracle's converged database.

Within this construct, let’s zero-in on something that should be relatively painless: connecting the "Host-a-bnb" web-application to the corresponding data tiers.

SINGLE-PURPOSE DATABASES

# INSTALL NPM MODULES
npm i mongodb
npm i neo4j-driver
npm i node-firebird

// DEMONSTRATION PURPOSES ONLY
// JS
// ...
// CONNECT TO MONGODB CLIENT
const { MongoClient } = require('mongodb'); // include MongoDB module
MongoClient.connect(mongoURL, mongoOptions, (err, client) => {
    if (err) {/* ERROR HANDLING */} else {/* CONNECTED TO MONGODB */}
});
// CONNECT TO NEO4J
const neo4j = require('neo4j-driver'); // include Neo4j module
const driver = neo4j.driver(uri, neo4j.auth.basic(user, password));
const neo4jSession = driver.session();
try {
    const result = await neo4jSession.run(/* CONNECTED TO NEO4J */);
} finally {
    await neo4jSession.close();
}
// CONNECT TO FIREBIRD CLIENT
const Firebird = require('node-firebird'); // include Firebird module
Firebird.attach(options, function(err, firebirdDB) {
    if (err) {/* ERROR HANDLING */} else {/* CONNECTED TO FIREBIRD */}
});
// ...
// DEMONSTRATION PURPOSES ONLY

ORACLE DATABASE (CONVERGED)

# INSTALL NPM MODULES
npm i oracledb

// DEMONSTRATION PURPOSES ONLY
// JS
// ...
// CONNECT TO ORACLE DATABASE CLIENT
const oracledb = require('oracledb');
function oracleDbConnect(dbOptions) {
    return new Promise(async (resolve, reject) => {
        try {
            await oracledb.getConnection(dbOptions).then(conn => {
                resolve(/* CONNECTED TO ORACLE DB */);
            });
        } catch (err) {
            reject(err);
        }
    });
}
// ...
// DEMONSTRATION PURPOSES ONLY

A comparison of data-tier connection across single-purpose databases versus the Oracle converged Database.

As you’ll see in the graphic immediately above, some additional work is required when connecting single-purpose databases. Granted, while this example is trivial, it foreshadows the complexity to come. Looking at the code (below), which demonstrates simple queries from each single-purpose database, things start to get messy – fast.

SINGLE-PURPOSE DATABASES

// DEMONSTRATION PURPOSES ONLY
// JS
// Use Case: Host-a-bnb (vacation rental company) wants to build a dashboard that visualizes session data of users who booked bnbs at a
// given date with long-term friends (> 5 years) and needs to pull out the session data to do it. 
// The query below will pull the session data using predefined variables 'VAR_USER_ID', 'VAR_BOOKING_DT', and 'VAR_FRIENDSHIP_LENGTH'.
// ...
// Step 1.a: Let's approach this problem by querying and pulling all the 'firebirdBookingIds' the user had on the 'VAR_BOOKING_DT' from Firebird
firebirdBookingIds = [];
Firebird.attach(options, function(err, firebirdDB) {
    if (err) {/* ERROR HANDLING */} else {/* CONNECTED TO FIREBIRD */}  
    firebirdDB.query('SELECT BOOKING_ID FROM BOOKINGS WHERE USER_ID = ? AND BOOKING_DATE = ?', 
    [VAR_USER_ID, VAR_BOOKING_DT], (err, result) => 
    {
        if (err) throw err;
        // Step 1.b: Clean 'firebirdBookingIds' for use in next step (think ETL, what a pain!)
        firebirdBookingIds = transformResult(result) {/* ... */};
        firebirdDB.detach();
    });
})
// Step 2.a: Then, using the 'firebirdBookingIds', query which bookings included friends with 'VAR_FRIENDSHIP_LENGTH' > 5 years from Neo4j
neo4jFilteredBookingIds = [];
session.run(
    'MATCH (u:Person)-[:IS_FRIENDS_WITH]-(friends) WHERE u.id = $user_id AND friends.friendship_length > $fLength' + 
    'RETURN FILTER(x in friends.booking_ids WHERE x IN $userBookingIds)',
    { userId: VAR_USER_ID,
      fLength: VAR_FRIENDSHIP_LENGTH,
      userBookingIds: firebirdBookingIds
    }).then(result => {
        // Step 2.b: Clean 'neo4jFilteredBookingIds' for use in next step (think ETL, what a pain!)
        neo4jFilteredBookingIds = transformResult(result.records) {/* ... */};
    }).catch (error => {
        console.log(error);
    }).then(() => session.close())
// Step 3: Finally, using 'neo4jFilteredBookingIds', return the session history from MongoDB
finalResult = [];   
MongoClient.connect(url,mongoOptions, (err, client) => {
    if(err) console.log(err);
    const db = client.db(/* DB NAME */);
    const sessionHistoryCollection = db.collection(/* COLLECTION_SESSION_HISTORY */);
    sessionHistoryCollection.find({"booking_id": {"$in": [neo4jFilteredBookingIds]}},{"session_data": 1}).toArray((err, items) => {
        if (err) console.log(err);
            finalResult = items;
    })
});
console.log(finalResult); // Print session history
// ...
// DEMONSTRATION PURPOSES ONLY

A simplified example of querying within a single-purpose database data-tier.

In contrast, the unified approach of Oracle's converged database can query this data as one would expect: in a single query the data is joined together and the desired result is returned, with no hassle or transformation between databases or data types.

ORACLE DATABASE (CONVERGED)

// DEMONSTRATION PURPOSES ONLY
// JS
// Use Case: Host-a-bnb (vacation rental company) wants to build a dashboard that visualizes session data of users who booked bnbs at a
// given date with long-term friends (> 5 years) and needs to pull out the session data to do it.
// The query below will pull the session data using predefined variables 'VAR_USER_ID', 'VAR_BOOKING_DT', and 'VAR_FRIENDSHIP_LENGTH'.
// ...
finalResult = [];
oracledb.execute(
    `with booking_cte as ( -- Create temporary result set of needed information from *relational data* on the date 'VAR_BOOKING_DT'
    SELECT BOOKING_ID from BOOKINGS WHERE USER_ID = :1 AND BOOKING_DATE = to_date( :2 , 'MM/DD/YY')
), relationship_cte as ( -- Create temporary result set of needed information from *graph data* where friendships with 'VAR_USER_ID' exist
    SELECT USER_ID, FRIEND_ID, RELATIONSHIP_TYPE, FRIENDSHIP_LENGTH, BOOKING_ID FROM RELATIONSHIPS
    START WITH USER_ID = :3
    CONNECT BY NOCYCLE PRIOR
        USER_ID = FRIEND_ID
), session_history_cte as ( -- Create temporary result set of needed information from *key/value data*
SELECT jt.BOOKING_ID, jt.SESSION_DATA from SESSION_HISTORY,
    json_table( json_doc, '$'
        columns (
            nested path '$.history_of_bookings[*]' columns (
                booking_id number path '$.booking_id',
                session_data FORMAT JSON path '$.session_data'
        ))
    ) jt
)SELECT s.SESSION_DATA FROM booking_cte b 
    JOIN relationship_cte r ON r.BOOKING_ID = b.BOOKING_ID
    JOIN session_history_cte s ON s.BOOKING_ID = r.BOOKING_ID
WHERE r.FRIENDSHIP_LENGTH > :4
-- Finally, query to pull 'SESSION_DATA' from the 'BOOKING_ID's where users booked bnbs on the date 'VAR_BOOKING_DT' with 
-- long term friends ('r.FRIENDSHIP_LENGTH' > VAR_FRIENDSHIP_LENGTH [5 years])`,
    [VAR_USER_ID, VAR_BOOKING_DATE, VAR_USER_ID, VAR_FRIENDSHIP_LENGTH], {outFormat: oracledb.ARRAY}
    ).then(result => {
        finalResult = result;
    }).catch(err => {
        console.log(err);
   });
console.log(finalResult); // Print session history
// ...
// DEMONSTRATION PURPOSES ONLY

A simplified example of querying within the Oracle converged database data-tier.

A comparison of querying (QL) within a single-purpose Databases versus the Oracle converged database data-tiers.

The most notable thing to unpack here is that single-purpose databases put developer teams at a disadvantage when querying data. Each database requires its own specific query language (QL, referenced in the graphic above). Beyond that, once queried, the data is pulled back in pieces, often serially. This is in no way efficient from a development perspective, as each developer must figure out how to reconcile three separate data sets within a single application. Moreover, developers must then perform data integration manually in order to produce the desired result.

As we can see in the query-code example above, Oracle’s converged database solves this problem at the root by using one QL to perform all queries across the database. This allows the database, instead of the developer, to do the heavy lifting.

Side-by-side comparison of querying data within single-purpose databases versus the Oracle converged database data-tiers.

Highlighted in the table above, fragmented behavior is systemic in single-purpose database products, and significant effort is spent engineering integrations between them. Oracle’s converged database avoids these pitfalls by providing an inclusive product that not only requires minimal integration but also merges cutting-edge functionality across the platform’s architecture. For example, different data types can live in the same table, or can easily be joined together. This cleanly eliminates the need to spin up another product just to add another data type. Instead, it’s as easy as creating a column or table.

Side-by-side comparison of querying analytics within single-purpose databases versus Oracle's converged database data-tiers.

Operational use of the data is only one aspect of data management. With Oracle's converged database, developers can avoid expensive ETL (Extract, Transform, and Load) operations, often required to move data from one database to another (such as from OLTP to OLAP in order to enable analytics and reporting). This pattern of intuitive, simple, high-quality integration extends throughout Oracle's converged database.

Oracle's converged database is built upon years of innovation and sits alongside a full array of tools designed to maximize efficiency and performance. As the second-largest software company in the world, Oracle provides a variety of products that empower developers across the CI/CD process. These include:

1. Helidon.io

◉ A lightweight Java framework that simplifies microservice development. Best of all, it runs anywhere.

2. Coherence

◉ A scalable, fault-tolerant, cloud-ready, distributed platform for building grid-based applications that store data reliably.

3. Docker/cri-o and Kubernetes

◉ Enables cloud-native applications to be built and deployed alongside monolithic applications via Oracle’s support for containerization and orchestration.

4. WLS Kubernetes Operator

◉ Facilitates the easiest way to develop, deploy, and manage Oracle WebLogic. WLS Kubernetes Operator makes it easy to modernize while broadening existing skill-sets.

5. Verrazzano

◉ An enterprise container platform for deploying traditional and cloud-native applications in multi-cloud and hybrid scenarios.

6. Terraform and Oracle Cloud Infrastructure (OCI)

◉ Allows developers to safely and predictably create, change, and improve Oracle Cloud Infrastructure.

The role of the developer is ever-expanding. And now more than ever, IT processes are driven by code. Oracle understands this. This is why we provide a developer-friendly environment throughout the CI/CD process. No matter the task, and no matter where you sit, our toolsets will help you keep it simple while staying smart. Development Smart. With simple, secure technologies that offer rock-solid reliability.

In a world of increasing complexity that demands ever-increasing efficiency, that’s a powerful thing indeed.

Source: oracle.com