Wednesday, June 30, 2021

Database Consolidation using Oracle Autonomous Database with fractional OCPU and granular (GB) storage allocation


Oracle Autonomous Database is a self-managing database that delivers end-to-end automation of tasks traditionally performed by DBAs. It is available in two infrastructure choices: Shared Exadata Infrastructure and Dedicated Exadata Infrastructure. With Shared, as the name implies, multiple tenants share the same Exadata machine. With Dedicated, customers have their own Exadata infrastructure, isolated from other tenants. Dedicated offers more control over the infrastructure, software, and maintenance lifecycle, including the separation of environments for Production, Development, Test, etc.

A typical use case for Exadata deployment is database consolidation, i.e., the ability to over-provision CPU so that you can assign multiple databases to a physical core. This ability is beneficial for running non-production databases such as Development or Test environments and other non-critical databases where the workload does not justify a whole CPU core. Exadata allows companies to reduce cost without sacrificing the platform's performance, availability, and security.

This blog describes how you can now achieve similar database consolidation on Autonomous Database Dedicated using partial OCPU instead of a whole OCPU. This feature applies to Autonomous Database on Dedicated Infrastructure and Autonomous Database on Cloud@Customer.

Before Oracle Database Release Update 19.11, the maximum number of Autonomous databases that you could provision was equal to the number of OCPUs on the Exadata Infrastructure, with the smallest database being a single OCPU. Therefore, on an Exadata quarter rack with 100 OCPUs, 100 was the maximum number of Autonomous databases you could create. Another related constraint was the minimum storage size of 1 TB per database.

From Release Update 19.11 onwards, you can now provision Autonomous databases using fractional OCPUs and storage in GBs. Fractional OCPU means you can create databases with less than 1 OCPU, in fractional units from 0.1 to 0.9 OCPU, with up to 10 databases running on a single OCPU. Using the previous example, you can now provision up to 1,000 Autonomous databases on an Exadata quarter rack. Along with fractional OCPUs, you can now provision storage in GB instead of TB, with a minimum of 32 GB and a scaling increment of 1 GB. So, instead of assigning 1 TB to a 90 GB development database, Autonomous Database users can now allocate 100 GB for better storage efficiency. As with integer-OCPU databases and TB storage, you can scale Autonomous databases with fractional OCPUs and GB storage up or down, including converting between fractional and integer OCPUs and between GB and TB storage, with no downtime.

To create an Autonomous Transaction Processing database with fractional OCPU and storage allocation in GB, enter a decimal OCPU count between 0.1 and 0.9 and specify the storage size in GB on the Create Autonomous Database page.


All Autonomous Database features such as Auto Scaling, Cloning, and Autonomous Data Guard are supported with fractional OCPUs. You can develop and test your production database functionalities such as DR failover with minimal OCPU and storage allocations.


As with integer OCPU Autonomous databases, fractional OCPU databases will get performance-related resources allocated proportionally based on the number of OCPUs chosen. For example, an Autonomous database with 0.3 OCPU will get memory and concurrent sessions allocation that is 30% of the memory and concurrent sessions allocation for a single OCPU Autonomous database.

A critical difference between fractional and integer OCPU databases is the availability of the predefined database services; these are connection types created at database creation time. Autonomous Data Warehouse with fractional OCPU supports only the low service and its TCP and TCPS counterparts (low, low_tls, low_ro, low_ro_tls). Autonomous Transaction Processing with fractional OCPU supports both the tp and low services.

Billing for fractional OCPU usage is similar to integer OCPU usage; the total active OCPUs (both integer and fractional) for an Autonomous Container Database are aggregated and then rounded to the nearest integer OCPU. For example, a container database with 38 Autonomous Transaction Processing databases of 0.1 OCPU each has 38 × 0.1 = 3.8 active OCPUs, which rounds to 4 billed OCPUs.

Fractional OCPU and granular (GB) storage enable Autonomous Database users to further reduce costs through database consolidation. Utilizing the power of the Exadata Infrastructure allows greater consolidation density without sacrificing the platform's performance, availability, and security.

Source: oracle.com

Monday, June 28, 2021

What we found: Oracle Analytics COVID-19 analysis


This analysis was conducted in the context of the 2021 Gartner BI Bake-Off. All data used is publicly available, and insights highlighted result from ingesting, preparing, and analyzing that data. This does not represent the opinions of Oracle Corporation and should be used strictly as a demonstration of the Oracle Analytics product line. 


Every year Gartner invites analytics vendors to present at the Gartner Data & Analytics Summit to show how their products can help solve real-world problems and find insights in real-world data. Last year we looked at life expectancy. As one would expect, our focus this year is on the COVID-19 pandemic. During our analysis phase in March and April 2021, the situation changed rapidly. Each day, new data brought new insights, and larger challenges emerged in real time. 

◉ Questions about vaccine safety arose during April 2021

◉ There have been significant increases in disease incidence around the globe, especially in India, starting in the last few weeks of April 2021

◉ Government restrictions are loosening in some key geographies as case numbers begin to fall and vaccine administration picks up steam, while tightening in others, leading to a patchwork of guidelines within and across countries

We've drawn some conclusions—many that are reflected in what we hear from news sources, but also some that aren’t. Looking at this data in a week or a month may lead to different conclusions that we can’t yet anticipate. 

One thing is certain, though: This pandemic has shown gaps in response across and within countries and regions, with socioeconomic factors playing a big role in COVID-19’s relentless spread around the globe. 

The phrase “think globally, act locally” is appropriate to the analysis of data about COVID-19 and the vaccines that help combat it. Management of the pandemic has been handled very differently based on geographic, social, political, and economic factors. Here are some insights that we uncovered. 

◉ While vaccination rates are increasing throughout Europe, new case counts have been trending upward from February onward. As late as December 2020, it looked like the number of cases was trending lower. Vaccinations started in the UK in late December, but the rollout in the EU has been slower than in other geographies, leading Europe back into a COVID case-number growth curve.  

◉ North America is also showing an upward trajectory on COVID cases, but only starting in March 2021. In the US, the highest number of vaccinations given was on April 11. Two days later, safety concerns over one vaccine brand were raised, leading to a lower uptake of vaccines in the US—the vaccine rate has not recovered to the same level in the US since that date.

◉ All geographies are showing an uptick in cases, even as the number of vaccinations climbs. While vaccinations are often viewed as the panacea, the spread of the disease is outpacing the ability to get shots in arms. 

◉ High-income and upper-middle-income countries are far ahead in vaccine doses delivered, which is no surprise. But the absolute number of vaccinations delivered isn’t the key factor in slowing the infection rate. It’s the percentage of the total population getting fully vaccinated that makes the difference. For example, about 30% of the US is fully vaccinated as of the end of April 2021, whereas India, despite administering a large number of vaccine doses, has only about 3.1% of its population fully vaccinated. 

◉ Just twelve countries account for more than 50% of the vaccines given, based on vaccinations-per-million measures.

◉ Governments issued thousands of mandates and actions in an attempt to control the spread of COVID-19. Full lockdowns, domestic and international travel restrictions, and social distancing (including school closures) in aggregate had the biggest influence on the spread of the disease across many countries in EMEA and South America. Responses to government actions differed by country. 

These are just some of the highlights we saw, but this is by no means an exhaustive list. 

◉ Belgium and the UK saw a greater positive impact from lockdowns, Argentina and France less so. The first question that comes to mind is “why?” There’s nothing we could find in the data to explain the different responses by country. But citizens of those countries often point to anecdotal cultural attitudes that may or may not explain the relative impact of lockdowns. 

◉ Social distancing guidelines were not effective in France but were far more effective in Israel.

◉ US results overall weren’t promising for any action, as mandates didn’t have the desired impact, especially in the second half of 2020. We did some drilling down into individual states, and there were markedly different responses by state and region. Because mandates were set by both the states and the federal government, aggregating at the US level did not yield clear conclusions. 

◉ The conclusion we draw from all this: One-size mandates don’t fit all countries and regions. What works in London may not work in Lyon. What works in the northeast USA may not work in the southwest USA. Mandates must be tried and tailored to the geography and the situation on the ground to be effective. 

Tweets about COVID and vaccines are largely neutral to positive. The larger the reach (how many people may be influenced), the more neutral they are. When analyzing Twitter data about specific vaccine brands, there is some variability in tone (positive, neutral, negative) but not enough to say if it’s impacting vaccine hesitancy. As we enter the next phase of vaccinations worldwide, a “charm campaign” may be needed to get past vaccine hesitancy. Tweets from influencers with large reaches may be one avenue to help sway people to get vaccinated. 

Source: oracle.com

Friday, June 25, 2021

Create Graph Databases with Graph Studio

Graph Studio Now Available with Oracle Autonomous Database

Graphs are amazing. You can model data in a new way that lets you understand relationships, discover patterns and anomalies, and classify and analyze connected data. This lets you answer questions such as who the most important customers and suppliers are, where the critical points in a supply chain are, or which financial transactions may be fraudulent. Graph databases make it easier to manage, represent, and interact with complex relationships in data.


Oracle is announcing the General Availability of a new feature in Autonomous Database (ADB) built for developers, analysts, and data scientists. The Graph Studio in ADB makes it easy to create graph models from data in your database or data warehouse, perform graph analysis, develop graph applications, and visualize and share results.

For more details, download the Graph Studio ebook


Make Graph Databases Easy with Autonomous Database


With Oracle Autonomous Database, you get a complete graph database platform that can be deployed in minutes with one-click provisioning, integrated tooling, and security. This means you don’t have to be a database expert or graph specialist to get started and be productive.


The new comprehensive tooling includes:

◉ Automated graph modeling
◉ Extensive graph analytics and graph query support
◉ Advanced notebooks and integrated visualization
◉ Automated install, upgrade, and provisioning
◉ Autosave, backup, and checkpoint data restoration features
◉ Ability to schedule graph analysis
◉ Sample notebooks and pre-built templates and workflows for different graph use cases

Industry-leading Graph Analysis, Database Features and Performance


The graph database, analytic, and visualization features in Autonomous Database offer industry-leading capabilities.

The Graph Studio point-and-click, low-code user interface automates graph data management and modeling, and simplifies analysis, visualization, and deployment. Autonomous Database with Graph Studio brings graph databases, graph queries, and graph analytics to analysts, developers, and data scientists across the entire graph analytics lifecycle.

One of the biggest challenges when working with graphs is creating the graph model from your data. The Modeler in Graph Studio automatically creates graphs from database tables using the existing relationships in the data. You can use these graphs as-is or modify them if you prefer. 

Query the graph, use prebuilt graph algorithms


What makes graphs so powerful is how much easier they make it to explore and analyze relationships. The property graph query language – PGQL – combines familiar SQL constructs like SELECT, FROM, WHERE, GROUP BY and ORDER BY with graph pattern-matching capabilities, letting you search surrounding nodes, traverse property paths, and extract sub-graphs. 
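As a flavor of the language, here is an illustrative PGQL sketch (not from the original post; the Account label, TRANSFERS edge label, and properties are hypothetical):

-- Accounts that transferred money directly to account 903, largest amounts first.
SELECT a.account_no, t.amount
FROM MATCH (a:Account) -[t:TRANSFERS]-> (b:Account)
WHERE b.account_no = 903
ORDER BY t.amount DESC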

In addition to querying and traversing relationships in your data, graphs simplify analyzing that data by supporting algorithms to discover and identify anomalies, clusters, communities and influencers. Oracle Autonomous Database includes nearly 60 prebuilt graph algorithms.

Powerful graph visualizations and notebooks


Data is only valuable if it’s used. Graph Studio notebooks enable interactive graph visualization through tables, charts, and more. They let you work collaboratively and share results through a multi-purpose notebook, with guides for documentation, visualization, and interactive analysis.

Enterprise-quality graph database: easier to get started, simpler to use


"After facing performance issues with an open-source graph database due to a skyrocketing number of new users, we turned to the graph feature of Oracle Database. Oracle’s graph database easily achieved scalability while managing massive amounts of user entity information and their keys. And with the Oracle Autonomous Database, we’re gaining large-scale, secure graph capabilities – despite being a startup without a database administrator."

Tatsuro Kamoshida, CTO and Director, AMENIDY

"With Graph Studio in Oracle Autonomous Database, you don’t need to be a graph expert to explore the power of graphs. Oracle’s new simple interface and pre-built workflows make it possible for almost anyone to build graphs, apply analytics, and create visualizations for fraud detection, customer recommendations, smart manufacturing, and other groundbreaking graph use cases.”

Gianni Ceresa, Managing Director of DATAlysis and Oracle ACE Director

"Graph Studio’s point-and-click, low-code user interface enables us to cast traditional relational tables as a graph, allowing us to capitalize on Oracle's comprehensive set of graph algorithms. It's perfect for analyzing complex enterprise-level data relationships."

Dan Vlamis, President, Vlamis Software Solutions 

Source: oracle.com

Wednesday, June 23, 2021

Simple Oracle Document Access (SODA) for PL/SQL in Oracle Database 18c


This article gives an overview of the Simple Oracle Document Access (SODA) for PL/SQL functionality in Oracle Database 18c.

◉ Create a Test Database User

We need a new database user for our testing.

CONN / AS SYSDBA

ALTER SESSION SET CONTAINER=pdb1;

DROP USER sodauser CASCADE;

CREATE USER sodauser IDENTIFIED BY sodauser1

  DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;

GRANT CREATE SESSION, CREATE TABLE TO sodauser;

GRANT SODA_APP TO sodauser;

Notice the grant for the SODA_APP role.

◉ Enable ORDS and SODA

We don't need to enable ORDS for the schema to use SODA for PL/SQL, but it makes sense to do this as it will allow the collection to be accessible from SODA for REST also.

Enable REST web services for the test schema. We could use any unique and legal URL mapping pattern for the schema, so it is not necessary to expose the schema name as we have done here.

CONN sodauser/sodauser1@pdb1

BEGIN

  ORDS.enable_schema(

    p_enabled             => TRUE,

    p_schema              => 'SODAUSER',

    p_url_mapping_type    => 'BASE_PATH',

    p_url_mapping_pattern => 'sodauser',

    p_auto_rest_auth      => FALSE

  );

  COMMIT;

END;

/

We are now ready to start.

◉ Collections

As the name suggests, collections are a way of grouping documents. It probably makes sense to define separate collections for different types of documents, but there is nothing to stop you keeping a variety of document types in a single collection.

- Check Collection Exists

We can check if a collection exists by attempting to open it. If the DBMS_SODA.OPEN_COLLECTION function returns a NULL, we know the collection doesn't exist. If the collection does exist, a reference to it will be returned as the SODA_COLLECTION_T type.

SET SERVEROUTPUT ON

DECLARE

  l_collection  SODA_COLLECTION_T;

BEGIN

  l_collection := DBMS_SODA.open_collection('TestCollection1');

  IF l_collection IS NOT NULL THEN

    DBMS_OUTPUT.put_line('collection: ' || l_collection.get_name());

  ELSE

    DBMS_OUTPUT.put_line('Collection does not exist.');  

  END IF;

END;

/

Collection does not exist.

PL/SQL procedure successfully completed.

SQL>

- Create a Collection

The DBMS_SODA.CREATE_COLLECTION function creates a new collection and returns the collection reference as the SODA_COLLECTION_T type.

SET SERVEROUTPUT ON

DECLARE

  l_collection  SODA_COLLECTION_T;

BEGIN

  l_collection := DBMS_SODA.create_collection('TestCollection1');

  IF l_collection IS NOT NULL THEN

    DBMS_OUTPUT.put_line('Collection ID : ' || l_collection.get_name());

  ELSE

    DBMS_OUTPUT.put_line('Collection does not exist.');  

  END IF;

END;

/

Collection ID : TestCollection1

PL/SQL procedure successfully completed.

SQL>

The table has been created in the test schema. The table name is case sensitive, so you will have to double-quote the table name.

DESC "TestCollection1"

 Name            Null?     Type
 --------------- --------- ----------------
 ID              NOT NULL  VARCHAR2(255)
 CREATED_ON      NOT NULL  TIMESTAMP(6)
 LAST_MODIFIED   NOT NULL  TIMESTAMP(6)
 VERSION         NOT NULL  VARCHAR2(255)
 JSON_DOCUMENT             BLOB

SQL>

This is essentially a table holding key-value pairs, with the key being the ID column and the value being the JSON_DOCUMENT column.
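Because the collection is just a relational table, the documents can also be queried directly with SQL/JSON functions. A minimal sketch, assuming the sample employee documents inserted later in this article:

-- Pull a scalar value out of each stored JSON document.
-- FORMAT JSON tells JSON_VALUE the BLOB column contains JSON data.
SELECT json_value(t.json_document FORMAT JSON, '$.employee_name') AS employee_name
FROM   "TestCollection1" t;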

- List All Collections

The DBMS_SODA.LIST_COLLECTION_NAMES function returns a list of the available collections as the SODA_COLLNAME_LIST_T type.

SET SERVEROUTPUT ON

DECLARE

  l_coll_list  SODA_COLLNAME_LIST_T;

BEGIN

  l_coll_list := DBMS_SODA.list_collection_names;

  IF l_coll_list.COUNT > 0 THEN

    FOR i IN 1 .. l_coll_list.COUNT LOOP

      DBMS_OUTPUT.put_line(i || ' : ' || l_coll_list(i));

    END LOOP;

  END IF;

END;

/

1 : TestCollection1

PL/SQL procedure successfully completed.

SQL>

- Drop a Collection

The DBMS_SODA.DROP_COLLECTION function drops the specified collection and returns "1" if it is successful, or "0" if it fails.

SET SERVEROUTPUT ON

DECLARE

  l_status  NUMBER := 0;

BEGIN

  l_status := DBMS_SODA.drop_collection('TestCollection1');

  DBMS_OUTPUT.put_line('status    : ' || l_status);

END;

/

status    : 1

PL/SQL procedure successfully completed.

SQL>

The table has been removed from the schema.

DESC "TestCollection1"

ERROR:

ORA-04043: object "TestCollection1" does not exist

SQL>

The remaining examples assume the "TestCollection1" collection is present, so if you deleted it previously, recreate it.

◉ Documents

A document is a combination of the JSON document you wish to persist in a collection and some document metadata, including a document identifier/key (ID). The document key can be assigned manually, or automatically if the key presented is null.

- Create a Document

A new document is created using the SODA_DOCUMENT_T constructor. There are overloads to create documents from VARCHAR2, CLOB and BLOB content. The document key and media type are optional. If the key is not set manually, a system generated key is used once the document is inserted into a collection.

SET SERVEROUTPUT ON

DECLARE

  l_varchar2_doc  SODA_DOCUMENT_T;

  l_clob_doc      SODA_DOCUMENT_T;

  l_blob_doc      SODA_DOCUMENT_T;

BEGIN

  DBMS_OUTPUT.put_line('==========');

  DBMS_OUTPUT.put_line('Key and VARCHAR2 Content.');

  l_varchar2_doc := SODA_DOCUMENT_T(

                      key => '1234',

                      v_content => '{"employee_number":7369,"employee_name":"SMITH"}'

                    );

  DBMS_OUTPUT.put_line('key       : ' || l_varchar2_doc.get_key);

  DBMS_OUTPUT.put_line('content   : ' || l_varchar2_doc.get_varchar2);

  DBMS_OUTPUT.put_line('media_type: ' || l_varchar2_doc.get_media_type);

  DBMS_OUTPUT.put_line('==========');

  DBMS_OUTPUT.put_line('CLOB Content and Media Type.');

  l_clob_doc := SODA_DOCUMENT_T(

                  c_content => '{"employee_number":7499,"employee_name":"ALLEN"}',

                  media_type => 'application/json'

                );

  DBMS_OUTPUT.put_line('key       : ' || l_clob_doc.get_key);

  DBMS_OUTPUT.put_line('content   : ' || l_clob_doc.get_clob);

  DBMS_OUTPUT.put_line('media_type: ' || l_clob_doc.get_media_type);

  DBMS_OUTPUT.put_line('==========');

  DBMS_OUTPUT.put_line('BLOB Content.');

  l_blob_doc := SODA_DOCUMENT_T(

                  b_content => UTL_RAW.cast_to_raw('{"employee_number":7521,"employee_name":"WARD"}')

                );

  DBMS_OUTPUT.put_line('key       : ' || l_blob_doc.get_key);

  DBMS_OUTPUT.put_line('content   : ' || UTL_RAW.cast_to_varchar2(l_blob_doc.get_blob));

  DBMS_OUTPUT.put_line('media_type: ' || l_blob_doc.get_media_type);

END;

/

==========

Key and VARCHAR2 Content.

key       : 1234

content   : {"employee_number":7369,"employee_name":"SMITH"}

media_type: application/json

==========

CLOB Content and Media Type.

key       :

content   : {"employee_number":7499,"employee_name":"ALLEN"}

media_type: application/json

==========

BLOB Content.

key       :

content   : {"employee_number":7521,"employee_name":"WARD"}

media_type: application/json

PL/SQL procedure successfully completed.

SQL>

Notice the KEY is null unless it is specified manually. A unique system generated key will be assigned once the document is inserted into a collection.

- Insert a Document

A new document is added to the collection using the INSERT_ONE or INSERT_ONE_AND_GET member functions of the SODA_COLLECTION_T type. If you don't care about retrieving a system generated key use the INSERT_ONE member function.

SET SERVEROUTPUT ON

DECLARE

  l_collection  SODA_COLLECTION_T;

  l_document    SODA_DOCUMENT_T;

  l_status      NUMBER;

BEGIN

  l_collection := DBMS_SODA.open_collection('TestCollection1');

  l_document := SODA_DOCUMENT_T(

                  b_content => UTL_RAW.cast_to_raw('{"employee_number":7521,"employee_name":"WARD"}')

                );

  l_status := l_collection.insert_one(l_document);

  DBMS_OUTPUT.put_line('status    : ' || l_status);

  COMMIT;

END;

/

status    : 1

PL/SQL procedure successfully completed.

SQL>

If you need to retrieve the system generated key you should use INSERT_ONE_AND_GET member function.

SET SERVEROUTPUT ON

DECLARE

  l_collection    SODA_COLLECTION_T;

  l_document      SODA_DOCUMENT_T;

  l_document_out  SODA_DOCUMENT_T;

BEGIN

  l_collection := DBMS_SODA.open_collection('TestCollection1');

  l_document := SODA_DOCUMENT_T(

                  b_content => UTL_RAW.cast_to_raw('{"employee_number":7521,"employee_name":"WARD"}')

                );

  l_document_out := l_collection.insert_one_and_get(l_document);

  DBMS_OUTPUT.put_line('key       : ' || l_document_out.get_key);

  DBMS_OUTPUT.put_line('content   : ' || UTL_RAW.cast_to_varchar2(l_document_out.get_blob));

  DBMS_OUTPUT.put_line('media_type: ' || l_document_out.get_media_type);

  COMMIT;

END;

/

key       : 6D9566A935014FE7BF1D0630B7E44313

content   :

media_type: application/json

PL/SQL procedure successfully completed.

SQL>

Notice the document content is not present in the document returned by the INSERT_ONE_AND_GET function. This is intentional, as it would represent a waste of resources if we were dealing with large documents.

We can see rows containing the documents have been added to the associated table.

SELECT COUNT(*) FROM "TestCollection1";

  COUNT(*)

----------

2

SQL>

- Retrieve Documents

You retrieve a document using the FIND_ONE member function of the SODA_COLLECTION_T type.

SET SERVEROUTPUT ON

DECLARE

  l_collection    SODA_COLLECTION_T;

  l_document      SODA_DOCUMENT_T;

BEGIN

  l_collection := DBMS_SODA.open_collection('TestCollection1');

  l_document := l_collection.find_one('6D9566A935014FE7BF1D0630B7E44313');

  DBMS_OUTPUT.put_line('key       : ' || l_document.get_key);

  DBMS_OUTPUT.put_line('content   : ' || UTL_RAW.cast_to_varchar2(l_document.get_blob));

  DBMS_OUTPUT.put_line('media_type: ' || l_document.get_media_type);

  COMMIT;

END;

/

key       : 6D9566A935014FE7BF1D0630B7E44313

content   : {"employee_number":7521,"employee_name":"WARD"}

media_type: application/json

PL/SQL procedure successfully completed.

SQL>

- Update a Document

An existing document in the collection is updated using the REPLACE_ONE or REPLACE_ONE_AND_GET member functions of the SODA_COLLECTION_T type. The REPLACE_ONE member function returns "1" if the replace is successful and "0" if it isn't.

SET SERVEROUTPUT ON

DECLARE

  l_collection    SODA_COLLECTION_T;

  l_document      SODA_DOCUMENT_T;

  l_status        NUMBER;

BEGIN

  l_collection := DBMS_SODA.open_collection('TestCollection1');

  l_document := SODA_DOCUMENT_T(

                  b_content => UTL_RAW.cast_to_raw('{"employee_number":7499,"employee_name":"ALLEN"}')

                );

  l_status := l_collection.replace_one('6D9566A935014FE7BF1D0630B7E44313', l_document);

  DBMS_OUTPUT.put_line('status    : ' || l_status);

  COMMIT;

END;

/

status    : 1

PL/SQL procedure successfully completed.

SQL>

The REPLACE_ONE_AND_GET member function is similar to the INSERT_ONE_AND_GET member function, in that it returns a document minus the content.

SET SERVEROUTPUT ON

DECLARE

  l_collection    SODA_COLLECTION_T;

  l_document      SODA_DOCUMENT_T;

  l_document_out  SODA_DOCUMENT_T;

BEGIN

  l_collection := DBMS_SODA.open_collection('TestCollection1');

  l_document := SODA_DOCUMENT_T(

                  b_content => UTL_RAW.cast_to_raw('{"employee_number":7499,"employee_name":"ALLEN"}')

                );

  l_document_out := l_collection.replace_one_and_get('6D9566A935014FE7BF1D0630B7E44313', l_document);

  DBMS_OUTPUT.put_line('key       : ' || l_document_out.get_key);

  DBMS_OUTPUT.put_line('content   : ' || UTL_RAW.cast_to_varchar2(l_document_out.get_blob));

  DBMS_OUTPUT.put_line('media_type: ' || l_document_out.get_media_type);

  COMMIT;

END;

/

key       : 6D9566A935014FE7BF1D0630B7E44313

content   :

media_type: application/json

PL/SQL procedure successfully completed.

SQL>

- Delete a Document

An existing document in the collection is removed using the REMOVE_ONE member function of the SODA_COLLECTION_T type.

SET SERVEROUTPUT ON

DECLARE

  l_collection    SODA_COLLECTION_T;

  l_status        NUMBER;

BEGIN

  l_collection := DBMS_SODA.open_collection('TestCollection1');

  l_status := l_collection.remove_one('6D9566A935014FE7BF1D0630B7E44313');

  DBMS_OUTPUT.put_line('status    : ' || l_status);

  COMMIT;

END;

/

status    : 1

PL/SQL procedure successfully completed.

SQL>

◉ Custom Collections

By default, collections have a BLOB payload and the IDs are handled automatically. Srikrishnan Suresh pointed out that this default behaviour can be modified by specifying metadata for the collection. The following CREATE_COLLECTION call includes metadata that allows the key column to be set manually, and changes the document content column to a CLOB.

DECLARE

  l_collection  SODA_COLLECTION_T;

  l_metadata    VARCHAR2(32767);

BEGIN

  l_metadata := '{

        "keyColumn":{

            "assignmentMethod": "CLIENT"

        },

        "contentColumn": {

            "sqlType": "CLOB"

        }

    }';

  l_collection := DBMS_SODA.create_collection('TestCollection2', l_metadata);


  IF l_collection IS NOT NULL THEN

    DBMS_OUTPUT.put_line('Collection ID : ' || l_collection.get_name());

  ELSE

    DBMS_OUTPUT.put_line('Collection does not exist.');  

  END IF;

END;

/

Collection ID : TestCollection2

PL/SQL procedure successfully completed.

SQL>

The associated table structure looks different now, matching the metadata.

SQL> desc "TestCollection2"

 Name            Null?     Type
 --------------- --------- ----------------
 ID              NOT NULL  VARCHAR2(255)
 JSON_DOCUMENT             CLOB

SQL>

We can add a document to the collection using the CLOB overload of the SODA_DOCUMENT_T object. In this case we're manually setting the key.

SET SERVEROUTPUT ON

DECLARE

  l_collection    SODA_COLLECTION_T;

  l_clob_doc      SODA_DOCUMENT_T;

  l_document_out  SODA_DOCUMENT_T;

BEGIN

  l_collection := DBMS_SODA.open_collection('TestCollection2');

  l_clob_doc := SODA_DOCUMENT_T(

                  key        => '1234',

                  c_content  => '{"employee_number":7499,"employee_name":"ALLEN"}',

                  media_type => 'application/json'

                );

  l_document_out := l_collection.insert_one_and_get(l_clob_doc);

  DBMS_OUTPUT.put_line('key       : ' || l_document_out.get_key);

  DBMS_OUTPUT.put_line('content   : ' || l_document_out.get_clob);

  DBMS_OUTPUT.put_line('media_type: ' || l_document_out.get_media_type);

  COMMIT;

END;

/

key       : 1234

content   :

media_type: application/json

PL/SQL procedure successfully completed.

SQL>

We can drop the collection in the normal way.

SET SERVEROUTPUT ON

DECLARE

  l_status  NUMBER := 0;

BEGIN

  l_status := DBMS_SODA.drop_collection('TestCollection2');

  DBMS_OUTPUT.put_line('status    : ' || l_status);

END;

/

status    : 1

PL/SQL procedure successfully completed.

SQL>

Source: oracle-base.com

Monday, June 21, 2021

LISTAGG DISTINCT in Oracle Database 19c


The LISTAGG function was introduced in Oracle 11gR2 to make string aggregation simpler. In Oracle 12cR2 it was extended to include overflow error handling. Oracle 19c includes the ability to remove duplicates from the LISTAGG results by including the DISTINCT keyword.

1. Setup

The examples in this article use the following table.

-- DROP TABLE EMP PURGE;

CREATE TABLE EMP (

  EMPNO NUMBER(4) CONSTRAINT PK_EMP PRIMARY KEY,

  ENAME VARCHAR2(10),

  JOB VARCHAR2(9),

  MGR NUMBER(4),

  HIREDATE DATE,

  SAL NUMBER(7,2),

  COMM NUMBER(7,2),

  DEPTNO NUMBER(2)

);

INSERT INTO EMP VALUES (7369,'SMITH','CLERK',7902,to_date('17-12-1980','dd-mm-yyyy'),800,NULL,20);

INSERT INTO EMP VALUES (7499,'ALLEN','SALESMAN',7698,to_date('20-2-1981','dd-mm-yyyy'),1600,300,30);

INSERT INTO EMP VALUES (7521,'WARD','SALESMAN',7698,to_date('22-2-1981','dd-mm-yyyy'),1250,500,30);

INSERT INTO EMP VALUES (7566,'JONES','MANAGER',7839,to_date('2-4-1981','dd-mm-yyyy'),2975,NULL,20);

INSERT INTO EMP VALUES (7654,'MARTIN','SALESMAN',7698,to_date('28-9-1981','dd-mm-yyyy'),1250,1400,30);

INSERT INTO EMP VALUES (7698,'BLAKE','MANAGER',7839,to_date('1-5-1981','dd-mm-yyyy'),2850,NULL,30);

INSERT INTO EMP VALUES (7782,'CLARK','MANAGER',7839,to_date('9-6-1981','dd-mm-yyyy'),2450,NULL,10);

INSERT INTO EMP VALUES (7788,'SCOTT','ANALYST',7566,to_date('13-JUL-87','dd-mm-rr')-85,3000,NULL,20);

INSERT INTO EMP VALUES (7839,'KING','PRESIDENT',NULL,to_date('17-11-1981','dd-mm-yyyy'),5000,NULL,10);

INSERT INTO EMP VALUES (7844,'TURNER','SALESMAN',7698,to_date('8-9-1981','dd-mm-yyyy'),1500,0,30);

INSERT INTO EMP VALUES (7876,'ADAMS','CLERK',7788,to_date('13-JUL-87', 'dd-mm-rr')-51,1100,NULL,20);

INSERT INTO EMP VALUES (7900,'JAMES','CLERK',7698,to_date('3-12-1981','dd-mm-yyyy'),950,NULL,30);

INSERT INTO EMP VALUES (7902,'FORD','ANALYST',7566,to_date('3-12-1981','dd-mm-yyyy'),3000,NULL,20);

INSERT INTO EMP VALUES (7934,'MILLER','CLERK',7782,to_date('23-1-1982','dd-mm-yyyy'),1300,NULL,10);

COMMIT;

2. The Problem

The default action of the LISTAGG function is shown below.

COLUMN employees FORMAT A40

SELECT deptno, LISTAGG(ename, ',') WITHIN GROUP (ORDER BY ename) AS employees

FROM   emp

GROUP BY deptno

ORDER BY deptno;

    DEPTNO EMPLOYEES

---------- ----------------------------------------

        10 CLARK,KING,MILLER

        20 ADAMS,FORD,JONES,SCOTT,SMITH

        30 ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD

3 rows selected.

SQL>

Let's add some extra people called "MILLER" into department 10, to give us duplicates in the aggregated list.

INSERT INTO emp VALUES (9998,'MILLER','ANALYST',7782,to_date('23-1-1982','dd-mm-yyyy'),1600,NULL,10);

INSERT INTO emp VALUES (9999,'MILLER','MANAGER',7782,to_date('23-1-1982','dd-mm-yyyy'),1500,NULL,10);

COMMIT;

As expected, we now see multiple entries for the name "MILLER" in department 10.

COLUMN employees FORMAT A40

SELECT deptno, LISTAGG(ename, ',') WITHIN GROUP (ORDER BY ename) AS employees

FROM   emp

GROUP BY deptno

ORDER BY deptno;

    DEPTNO EMPLOYEES

---------- ----------------------------------------

        10 CLARK,KING,MILLER,MILLER,MILLER

        20 ADAMS,FORD,JONES,SCOTT,SMITH

        30 ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD

3 rows selected.

SQL>

If that's what we are expecting, great. If we want to remove duplicates, what do we do?

3. The Solution : Pre-19c

We could solve this in a number of ways. In the following example we use the ROW_NUMBER analytic function to remove any duplicates, then use the conventional LISTAGG function to aggregate the data.

COLUMN employees FORMAT A40

SELECT e2.deptno, LISTAGG(e2.ename, ',') WITHIN GROUP (ORDER BY e2.ename) AS employees

FROM   (SELECT e.*,

               ROW_NUMBER() OVER (PARTITION BY e.deptno, e.ename ORDER BY e.empno) AS myrank

        FROM   emp e) e2

WHERE  e2.myrank = 1

GROUP BY e2.deptno

ORDER BY e2.deptno;

    DEPTNO EMPLOYEES

---------- ----------------------------------------

        10 CLARK,KING,MILLER

        20 ADAMS,FORD,JONES,SCOTT,SMITH

        30 ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD

3 rows selected.

SQL>

Alternatively we could use DISTINCT in an inline view to remove the duplicate rows, then use the conventional LISTAGG function call to aggregate the data.

COLUMN employees FORMAT A40

SELECT e2.deptno, LISTAGG(e2.ename, ',') WITHIN GROUP (ORDER BY e2.ename) AS employees

FROM   (SELECT DISTINCT e.deptno, e.ename

        FROM   emp e) e2

GROUP BY e2.deptno

ORDER BY e2.deptno;

    DEPTNO EMPLOYEES

---------- ----------------------------------------

        10 CLARK,KING,MILLER

        20 ADAMS,FORD,JONES,SCOTT,SMITH

        30 ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD

3 rows selected.

SQL>

4. The Solution: 19c Onward

Oracle 19c introduced a simpler solution. We can now include the DISTINCT keyword directly in the LISTAGG function call.

COLUMN employees FORMAT A40

SELECT deptno, LISTAGG(DISTINCT ename, ',') WITHIN GROUP (ORDER BY ename) AS employees

FROM   emp

GROUP BY deptno

ORDER BY deptno;

    DEPTNO EMPLOYEES

---------- ----------------------------------------

        10 CLARK,KING,MILLER

        20 ADAMS,FORD,JONES,SCOTT,SMITH

        30 ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD

3 rows selected.

SQL>

The default functionality is to include all results, which we can express explicitly using the ALL keyword.

SELECT deptno, LISTAGG(ALL ename, ',') WITHIN GROUP (ORDER BY ename) AS employees

FROM   emp

GROUP BY deptno

ORDER BY deptno;

    DEPTNO EMPLOYEES

---------- ----------------------------------------

        10 CLARK,KING,MILLER,MILLER,MILLER

        20 ADAMS,FORD,JONES,SCOTT,SMITH

        30 ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD

3 rows selected.

SQL>
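As a side note, the DISTINCT keyword can be combined with the overflow clause introduced in 12cR2. A minimal sketch using the documented 19c syntax, against the same EMP table:

COLUMN employees FORMAT A40

-- Remove duplicates and truncate gracefully if the result exceeds VARCHAR2 limits.
SELECT deptno,
       LISTAGG(DISTINCT ename, ',' ON OVERFLOW TRUNCATE '...' WITH COUNT)
         WITHIN GROUP (ORDER BY ename) AS employees
FROM   emp
GROUP BY deptno
ORDER BY deptno;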

Friday, June 18, 2021

Graph Database Use Cases for Financial Services Companies

For as long as financial services companies have existed, criminals have tried to defraud them of their earnings. In today’s world, this game of company-versus-fraudster has become more complicated than ever. Criminals are savvier, more inventive, and more audacious, and financial services firms have their reputations, and the prospect of massive fines, on the line.


But with graph databases, financial services firms have an effective weapon they can use in this ongoing battle. Because no matter how hard they try, financial criminals are often linked by relationships—whether it’s relationships to other criminals, locations, or of course, bank accounts. Graph databases, which make it easier to discover insights into relationships, take advantage of this fact to unfold new possibilities in the financial services world.

Here are just a few examples of use cases that graph databases can address.

Download the free ebook on graph database use cases

Graph database use case: Money laundering


The problem

Conceptually, money laundering is simple. Dirty money is passed around to blend it with legitimate funds and then turned into hard assets. This is the kind of process that was used in the Panama Papers analysis. More specifically, a circular money transfer involves a criminal who sends large amounts of fraudulently obtained money to himself or herself—but hides it through a long and complex series of valid transfers between “normal” accounts.

These “normal” accounts are actually accounts created with synthetic identities. They typically share certain similar information because they are generated from stolen identities (email addresses, addresses, etc.) and it’s this related information that makes graph analysis such a good fit to make them reveal their fraudulent origins.

The graph solution

To make fraud detection simpler, users can create a graph from transactions between entities as well as entities that share some information, including the email addresses, passwords, addresses, and more. Once a graph is created, running a simple query will find all customers with accounts who have similar information, and reveal which accounts are sending money to each other.
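A minimal PGQL sketch of such a query; the Customer and Email labels and the HAS_EMAIL edges are illustrative assumptions, not from the post:

-- Pairs of distinct customers whose accounts share the same email address.
SELECT c1.name AS customer_1, c2.name AS customer_2, e.address AS shared_email
FROM MATCH (c1:Customer) -[:HAS_EMAIL]-> (e:Email) <-[:HAS_EMAIL]- (c2:Customer)
WHERE id(c1) <> id(c2)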

Graph database use case: Detecting money mules and mule fraud



The problem

Mule fraud involves a person, called a money mule, who transfers illicit goods; in the financial industry, this usually means money. The money mule transfers money to his or her own account, and the money is then transferred to another scam operator, who is usually in another country. Traditionally, rule-based models create alerts and suspicious accounts are flagged by humans. Machine learning is also used to predict human decisions. However, it is often difficult to improve these models because the accounts themselves usually carry limited information.

The graph solution

This is where graphs come in. With graph technology, users can take the transaction information as edges and generate more features of the accounts based on surrounding relationships and transactions. For example, by using graph-based centrality scores, users can determine how close certain accounts are to known mule accounts.

In addition, these false accounts often share similar information (such as address or telephone numbers) because such information is necessary for registering the accounts—and the criminals only have so many identities to draw from. By using graph-based queries, graph users can quickly discover the accounts with similar relationships or the accounts involved with patterns like circulation and flag them for further investigation.
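For example, a fixed-length circular transfer pattern can be expressed as a PGQL sketch like the following (the Account label and TRANSFER edges are hypothetical):

-- Accounts whose money comes back to them through two intermediaries.
SELECT DISTINCT id(a) AS account_in_cycle
FROM MATCH (a:Account) -[:TRANSFER]-> (b:Account) -[:TRANSFER]-> (c:Account) -[:TRANSFER]-> (a)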

Through this method, graph technology can enhance machine learning models trained to discover money mules and mule fraud.

Graph database use case: Real-time fraud detection


The problem

In today’s world, consumers demand instant access to services and to money transfers, which opens up opportunities for criminals. For example, payment services apps try to deliver money as quickly as possible to valid users, while also ensuring money isn’t sent for illicit purposes or routed circuitously to hide the real receiver. This necessitates real-time fraud detection.

The graph solution

Because graphs enable lightning-fast answers to queries and because they expand access to data, they have become a popular technology in the realm of real-time fraud detection. When investigating transactions with graph technology, it’s not only the transactions that can be modeled in graphs. Graphs are extremely flexible, which means the heterogeneous surrounding information can also be modeled. For example, client IP addresses, ATM geolocation, card numbers, and account IDs can all become vertices, and the connections can all become edges.

Property graphs are often used for fraud detection, especially in online banking and ATM location analysis, because users can design the rules for detecting fraud based on their datasets. For example, detection rules can be set up for the following (a sketch of the first rule as a query appears after the list):

◉ IPs which log in with multiple cards registered in different places
◉ Cards used in places that are geographically far apart
◉ Accounts receiving one-time inbound transactions from other accounts registered in various places
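A PGQL sketch of the first rule; the Card and IpAddress labels and the LOGGED_IN_FROM edges are illustrative assumptions:

-- IP addresses seen with an unusually high number of distinct cards.
SELECT ip.address, COUNT(DISTINCT id(c)) AS card_count
FROM MATCH (c:Card) -[:LOGGED_IN_FROM]-> (ip:IpAddress)
GROUP BY ip.address
ORDER BY card_count DESC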

These rules can be applied in real time because Oracle’s graph technologies can:

◉ Keep graphs updated and synchronized to the original relational table dataset
◉ Run high-performance queries and algorithms.

See how PaySafe uses Oracle’s graph database for fraud detection.

Source: oracle.com

Wednesday, June 16, 2021

Shell Commands From PL/SQL


Using a Java stored procedure it is possible to perform shell commands from PL/SQL.

◉ Create the Java Stored Procedure

First we need to create the Java class to perform the shell command.

CONN test/test

CREATE OR REPLACE AND COMPILE JAVA SOURCE NAMED "Host" AS

import java.io.*;

public class Host {

  public static void executeCommand(String command) {

    try {

      String[] finalCommand;

      if (isWindows()) {

        finalCommand = new String[4];

        // Use the appropriate path for your windows version.

        //finalCommand[0] = "C:\\winnt\\system32\\cmd.exe";    // Windows NT/2000

        finalCommand[0] = "C:\\windows\\system32\\cmd.exe";    // Windows XP/2003

        //finalCommand[0] = "C:\\windows\\syswow64\\cmd.exe";  // Windows 64-bit

        finalCommand[1] = "/y";

        finalCommand[2] = "/c";

        finalCommand[3] = command;

      }

      else {

        finalCommand = new String[3];

        finalCommand[0] = "/bin/sh";

        finalCommand[1] = "-c";

        finalCommand[2] = command;

      }

      final Process pr = Runtime.getRuntime().exec(finalCommand);

      // Note: waiting here can block if the command fills the OS pipe buffer
      // before the reader threads below start consuming its output.
      pr.waitFor();

      new Thread(new Runnable(){

        public void run() {

          BufferedReader br_in = null;

          try {

            br_in = new BufferedReader(new InputStreamReader(pr.getInputStream()));

            String buff = null;

            while ((buff = br_in.readLine()) != null) {

              System.out.println("Process out :" + buff);

              try {Thread.sleep(100); } catch(Exception e) {}

            }

            br_in.close();

          }

          catch (IOException ioe) {

            System.out.println("Exception caught printing process output.");

            ioe.printStackTrace();

          }

          finally {

            try {

              br_in.close();

            } catch (Exception ex) {}

          }

        }

      }).start();

      new Thread(new Runnable(){

        public void run() {

          BufferedReader br_err = null;

          try {

            br_err = new BufferedReader(new InputStreamReader(pr.getErrorStream()));

            String buff = null;

            while ((buff = br_err.readLine()) != null) {

              System.out.println("Process err :" + buff);

              try {Thread.sleep(100); } catch(Exception e) {}

            }

            br_err.close();

          }

          catch (IOException ioe) {

            System.out.println("Exception caught printing process error.");

            ioe.printStackTrace();

          }

          finally {

            try {

              br_err.close();

            } catch (Exception ex) {}

          }

        }

      }).start();

    }

    catch (Exception ex) {

      System.out.println(ex.getLocalizedMessage());

    }

  }

  public static boolean isWindows() {

    if (System.getProperty("os.name").toLowerCase().indexOf("windows") != -1)

      return true;

    else

      return false;

  }

};

/

show errors java source "Host"

◉ Publish the Java call specification

Next we publish the call specification using a PL/SQL wrapper procedure.

CREATE OR REPLACE PROCEDURE host_command (p_command  IN  VARCHAR2)

AS LANGUAGE JAVA 

NAME 'Host.executeCommand (java.lang.String)';

/

◉ Grant Privileges

In this example we are granting access to all directories on the server. That is really dangerous. You need to be more specific about these grants and/or be very careful about who you grant this functionality to.

The relevant permissions must be granted from SYS for JServer to access the file system. In this case we grant access to all files accessible to the Oracle software owner, but in reality that is a very dangerous thing to do.

CONN / AS SYSDBA

DECLARE

  l_schema VARCHAR2(30) := 'TEST'; -- Adjust as required.

BEGIN

  DBMS_JAVA.grant_permission(l_schema, 'java.io.FilePermission', '<<ALL FILES>>', 'read, write, execute, delete');

  DBMS_JAVA.grant_permission(l_schema, 'SYS:java.lang.RuntimePermission', 'writeFileDescriptor', '');

  DBMS_JAVA.grant_permission(l_schema, 'SYS:java.lang.RuntimePermission', 'readFileDescriptor', '');

END;

/

The effects of the grant will not be noticed until the grantee reconnects. In addition to this, the owner of the Oracle software must have permission to access the file system being referenced.

◉ Test It.

Finally we call the PL/SQL procedure with our command text.

CONN test/test

SET SERVEROUTPUT ON SIZE 1000000

CALL DBMS_JAVA.SET_OUTPUT(1000000);

BEGIN

  host_command (p_command => 'move C:\test1.txt C:\test2.txt');

  --host_command (p_command => '/bin/mv /home/oracle/test1.txt /home/oracle/test2.txt');

END;

/

The output from the host command can be captured using the DBMS_OUTPUT.get_lines procedure.

CONN test/test

SET SERVEROUTPUT ON SIZE 1000000

CALL DBMS_JAVA.SET_OUTPUT(1000000);

DECLARE

  l_output DBMS_OUTPUT.chararr;

  l_lines  INTEGER := 1000;

BEGIN

  DBMS_OUTPUT.enable(1000000);

  DBMS_JAVA.set_output(1000000);

  host_command('dir C:\');

  --host_command('/bin/ls /home/oracle');

  DBMS_OUTPUT.get_lines(l_output, l_lines);

  FOR i IN 1 .. l_lines LOOP

    -- Do something with the line.

    -- Data in the collection - l_output(i)

    DBMS_OUTPUT.put_line(l_output(i));

  END LOOP;

END;

/

◉ Known Issues.

- Depending on the environment, the process may continue running as a zombie after the command has been executed, even if the destroy() method is called manually. If this happens the process is only cleaned up when the session ends. Under normal circumstances this doesn't represent a problem, but when called as part of a job the zombie processes will only die when the Job Queue Coordinator is stopped.

- No profile is run for the OS callout, so no environment variables will be set. As a result you will need to use full paths to any executables ("ls" becomes "/bin/ls") or scripts. Alternatively, write all operations as scripts and set the relevant environment variables inside the scripts.

Source: oracle-base.com

Monday, June 14, 2021

FTP From PL/SQL


Sometimes it's preferable to trigger FTP jobs directly from PL/SQL rather than rely on CRON or AT. This article contains a brief description of the two methods I use.

◉ Shell Script

The first method relies on a Java stored procedure, described in Shell Commands From PL/SQL, which can be used to trigger a shell script to perform the transfer. The shell script may look like the following.


#! /bin/ksh

# Move to appropriate directory on local server

cd /extracts

# FTP all files in directory

ftp -inv ftp.company.com <<EOF

user ftpuser ftppassword

# Move to appropriate directory on remote server.

cd /loads

ascii

mput *.*

bye

EOF

◉ PL/SQL FTP API

The second approach uses a combination of the UTL_TCP and UTL_FILE packages to create a simple FTP API (ftp.pks, ftp.pkb). Once the API is loaded into the appropriate schema simple FTP commands can be initiated as follows.

CREATE OR REPLACE DIRECTORY my_docs AS '/u01/app/oracle/';

SET SERVEROUTPUT ON SIZE 1000000

@c:\ftp.pks

@c:\ftp.pkb

-- Retrieve an ASCII file from a remote FTP server.

DECLARE

  l_conn  UTL_TCP.connection;

BEGIN

  l_conn := ftp.login('ftp.company.com', '21', 'ftpuser', 'ftppassword');

  ftp.ascii(p_conn => l_conn);

  ftp.get(p_conn      => l_conn,

          p_from_file => '/u01/app/oracle/test.txt',

          p_to_dir    => 'MY_DOCS',

          p_to_file   => 'test_get.txt');

  ftp.logout(l_conn);

END;

/

-- Send an ASCII file to a remote FTP server.

DECLARE

  l_conn  UTL_TCP.connection;

BEGIN

  l_conn := ftp.login('ftp.company.com', '21', 'ftpuser', 'ftppassword');

  ftp.ascii(p_conn => l_conn);

  ftp.put(p_conn      => l_conn,

          p_from_dir  => 'MY_DOCS',

          p_from_file => 'test_get.txt',

          p_to_file   => '/u01/app/oracle/test_put.txt');

  ftp.logout(l_conn);

END;

/

-- Retrieve a binary file from a remote FTP server.

DECLARE

  l_conn  UTL_TCP.connection;

BEGIN

  l_conn := ftp.login('ftp.company.com', '21', 'ftpuser', 'ftppassword');

  ftp.binary(p_conn => l_conn);

  ftp.get(p_conn      => l_conn,

          p_from_file => '/u01/app/oracle/product/9.2.0.1.0/sysman/reporting/gif/jobs.gif',

          p_to_dir    => 'MY_DOCS',

          p_to_file   => 'jobs_get.gif');

  ftp.logout(l_conn);

END;

/

-- Send a binary file to a remote FTP server.

DECLARE

  l_conn  UTL_TCP.connection;

BEGIN

  l_conn := ftp.login('ftp.company.com', '21', 'ftpuser', 'ftppassword');

  ftp.binary(p_conn => l_conn);

  ftp.put(p_conn      => l_conn,

          p_from_dir  => 'MY_DOCS',

          p_from_file => 'jobs_get.gif',

          p_to_file   => '/u01/app/oracle/jobs_put.gif');

  ftp.logout(l_conn);

END;

/

-- Get a directory listing from a remote FTP server.

DECLARE

  l_conn  UTL_TCP.connection;

  l_list  ftp.t_string_table;

BEGIN

  l_conn := ftp.login('ftp.company.com', '21', 'ftpuser', 'ftppassword');

  ftp.list(p_conn   => l_conn,

           p_dir   => '/u01/app/oracle',

           p_list  => l_list);

  ftp.logout(l_conn);

  IF l_list.COUNT > 0 THEN

    FOR i IN l_list.first .. l_list.last LOOP

      DBMS_OUTPUT.put_line(i || ': ' || l_list(i));

    END LOOP;

  END IF;

END;

/

-- Get a directory listing (file names only) from a remote FTP server.

DECLARE

  l_conn  UTL_TCP.connection;

  l_list  ftp.t_string_table;

BEGIN

  l_conn := ftp.login('ftp.company.com', '21', 'ftpuser', 'ftppassword');

  ftp.nlst(p_conn   => l_conn,

           p_dir   => '/u01/app/oracle',

           p_list  => l_list);

  ftp.logout(l_conn);

  IF l_list.COUNT > 0 THEN

    FOR i IN l_list.first .. l_list.last LOOP

      DBMS_OUTPUT.put_line(i || ': ' || l_list(i));

    END LOOP;

  END IF;

END;

/

-- Rename a file on a remote FTP server.

DECLARE

  l_conn  UTL_TCP.connection;

BEGIN

  l_conn := ftp.login('ftp.company.com', '21', 'ftpuser', 'ftppassword');

  ftp.rename(p_conn => l_conn,

             p_from => '/u01/app/oracle/dba/shutdown',

             p_to   => '/u01/app/oracle/dba/shutdown.old');

  ftp.logout(l_conn);

END;

/

-- Delete a file on a remote FTP server.

DECLARE

  l_conn  UTL_TCP.connection;

BEGIN

  l_conn := ftp.login('ftp.company.com', '21', 'ftpuser', 'ftppassword');

  ftp.delete(p_conn => l_conn,

             p_file => '/u01/app/oracle/dba/temp.txt');

  ftp.logout(l_conn);

END;

/

-- Create a directory on a remote FTP server.

DECLARE

  l_conn  UTL_TCP.connection;

BEGIN

  l_conn := ftp.login('ftp.company.com', '21', 'ftpuser', 'ftppassword');

  ftp.mkdir(p_conn => l_conn,

            p_dir => '/u01/app/oracle/test');

  ftp.logout(l_conn);

END;

/

-- Remove a directory from a remote FTP server.

DECLARE

  l_conn  UTL_TCP.connection;

BEGIN

  l_conn := ftp.login('ftp.company.com', '21', 'ftpuser', 'ftppassword');

  ftp.rmdir(p_conn => l_conn,

            p_dir  => '/u01/app/oracle/test');

  ftp.logout(l_conn);

END;

/

The basic functions are implemented using LOBs, allowing FTP without having to access files on the local filesystem. The get and put procedures string these together to form a complete job using all the functions. If a straightforward FTP to, or from, the local filesystem is required, it is more efficient to use the GET_DIRECT and PUT_DIRECT procedures, as they avoid the temporary LOBs.
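For example, a direct retrieval might look like the following sketch. The GET_DIRECT parameter names are assumptions modelled on the ftp.get call above, so check ftp.pks for the exact signature.

DECLARE
  l_conn  UTL_TCP.connection;
BEGIN
  l_conn := ftp.login('ftp.company.com', '21', 'ftpuser', 'ftppassword');
  ftp.binary(p_conn => l_conn);
  -- Assumed signature: mirrors ftp.get, but writes straight to the
  -- directory object, avoiding the temporary LOB.
  ftp.get_direct(p_conn      => l_conn,
                 p_from_file => '/u01/app/oracle/test.txt',
                 p_to_dir    => 'MY_DOCS',
                 p_to_file   => 'test_direct.txt');
  ftp.logout(l_conn);
END;
/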

The current implementation has the following issues:

- The mput and mget operations are not supported directly, but can be implemented using a combination of the list/nlst and get/put operations.

- The implementation of binary transfers relies on UTL_FILE features only available in Oracle9i Release 2 upwards.

- There is no support for ASCII mode in the PUT_DIRECT procedure.

◉ ACL for 11g

The introduction of Fine-Grained Access to Network Services in Oracle Database 11g Release 1 means you will need to configure an access control list (ACL) to allow UTL_TCP to access the network. The examples above work correctly with the following basic ACL. You will need to amend the FTP server details and username details to match your FTP server address and the Oracle username running the FTP API.

DECLARE
  l_acl_name         VARCHAR2(30) := 'utl_tcp.xml';
  l_ftp_server_ip    VARCHAR2(20) := '192.168.0.131';
  l_ftp_server_name  VARCHAR2(20) := 'ftp.company.com';
  l_username         VARCHAR2(30) := 'TEST';
BEGIN
  -- Create the ACL, granting the connect privilege to the user.
  DBMS_NETWORK_ACL_ADMIN.create_acl (
    acl          => l_acl_name,
    description  => 'Allow connections using UTL_TCP',
    principal    => l_username,
    is_grant     => TRUE,
    privilege    => 'connect',
    start_date   => SYSTIMESTAMP,
    end_date     => NULL);
  COMMIT;

  -- Also grant the resolve privilege, so the FTP server's host name
  -- can be resolved.
  DBMS_NETWORK_ACL_ADMIN.add_privilege (
    acl         => l_acl_name,
    principal   => l_username,
    is_grant    => TRUE,
    privilege   => 'resolve',
    position    => NULL,
    start_date  => NULL,
    end_date    => NULL);
  COMMIT;

  -- Assign the ACL to the FTP server, by IP address and by host name.
  DBMS_NETWORK_ACL_ADMIN.assign_acl (
    acl         => l_acl_name,
    host        => l_ftp_server_ip,
    lower_port  => NULL,
    upper_port  => NULL);

  DBMS_NETWORK_ACL_ADMIN.assign_acl (
    acl         => l_acl_name,
    host        => l_ftp_server_name,
    lower_port  => NULL,
    upper_port  => NULL);
  COMMIT;
END;
/
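
Note that CREATE_ACL, ADD_PRIVILEGE and ASSIGN_ACL are deprecated from Oracle 12c onward in favour of the simpler APPEND_HOST_ACE call. A minimal 12c+ sketch, assuming the same TEST user and FTP host as above, might look like this.

-- 12c and later: grant the connect privilege on the FTP host directly.
BEGIN
  DBMS_NETWORK_ACL_ADMIN.append_host_ace (
    host => 'ftp.company.com',
    ace  => xs$ace_type(privilege_list => xs$name_list('connect'),
                        principal_name => 'TEST',
                        principal_type => xs_acl.ptype_db));
  COMMIT;
END;
/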

◉ SFTP and FTPS

The world has changed drastically since this article was first written. FTP transmits credentials and data in clear text, so if you are using it today, you are probably making a very big mistake. This leaves you with two FTP-like alternatives.

- FTPS : This is FTP, but using a certificate to encrypt the communication. It requires a suitable FTPS service to be present on the server, so it is not very popular. Anton Scheffer wrote about accessing FTPS directly from PL/SQL. It is essentially what I've done for FTP, with the certificate and wallet thrown into the mix.

- SFTP : This is basically FTP-like file transfer over SSH. It feels like FTP, but it is actually just a connection to the SSH service on the server, so no additional software is needed. If you need an FTP-like interface, this is probably the best route to take. I've not seen an SFTP implementation written purely in PL/SQL, but I explain below how I implement it.

To use SFTP from PL/SQL, I do the following (a sketch follows the list).

- Use key pair authentication between the database server and the remote SSH service, allowing passwordless connections.

- Write a shell script to perform the SFTP command.

- Call the shell script from a Java stored procedure or, preferably, from an executable job using the scheduler.
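
The scheduler approach might look like the sketch below. All names and paths are hypothetical, and it assumes the script is executable by the database server's OS user, with key-based SSH authentication to the remote host already configured.

-- Contents of a hypothetical /u01/app/oracle/scripts/sftp_put.sh:
--   #!/bin/bash
--   sftp -b - remoteuser@remote.company.com <<EOF
--   put /u01/app/oracle/outgoing/data.csv /incoming/data.csv
--   EOF

-- Run the script as an external (executable) scheduler job.
BEGIN
  DBMS_SCHEDULER.create_job (
    job_name   => 'sftp_put_job',
    job_type   => 'EXECUTABLE',
    job_action => '/u01/app/oracle/scripts/sftp_put.sh',
    enabled    => TRUE,
    auto_drop  => TRUE);
END;
/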

Source: oracle-base.com

Friday, June 11, 2021

Custom Database Software Images now available on Gen2 Exadata Cloud@Customer

We are pleased to announce the General Availability (GA) of custom Database Software Images on Gen 2 Exadata Cloud@Customer. Until now, Exadata Cloud@Customer users have patched and provisioned their Oracle Database Homes using standard Oracle published images. With this release, you can create custom Database Software Images by specifying additional customizations on top of the standard Oracle published image, and use them to provision and patch Database Homes. This gives customers complete control to standardize and automate the use of Oracle Database Home software images with specific one-off patches, according to their application compatibility requirements.


Custom Database Software Images provide:

◉ An easy, automated way to customize Database Software Images with Database Embedded JVM (OJVM) patches and application-specific patches

◉ A way to apply a standardized custom Database Software Image across multiple Database Homes for specific application needs

◉ An easy path to move on-prem databases running custom one-off software updates to Exadata Cloud@Customer

◉ A seamless way to use one-off patches to build custom images within the Exadata Cloud@Customer service, without the special entitlements required to download patches from My Oracle Support (MOS)

Custom Database Software Images are managed as an OCI resource and automatically saved to a service-managed object store bucket. To build a custom image, you specify a major database version (such as 19c) with a specific Release Update (RU), and optionally supply one-off patch IDs as a comma-separated list and/or upload an existing Oracle Database Home inventory file (opatch lsinventory output). You can fully manage the life cycle of these custom Database Software Images, which are compartment-aware and belong to the region in which they are created. A command-line sketch follows.
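
For automation outside the Console, the same resource can be created with the OCI CLI. The following is an illustrative sketch only: the OCID, display name and patch IDs are placeholders, and the parameter names are assumptions based on the resource fields described above, so verify them against the CLI help (oci db database-software-image create --help) before use.

# Illustrative sketch -- placeholder values; verify parameter names
# against your CLI version before use.
oci db database-software-image create \
  --compartment-id ocid1.compartment.oc1..example \
  --display-name "DB19c-RU11-custom" \
  --database-version "19.0.0.0" \
  --patch-set "19.11.0.0" \
  --database-software-image-one-off-patches '["12345678","23456789"]'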

OCI Console Experience

Let's take a look at how you can use the OCI Console with custom Database Software Images.

◉ Create a Custom Database Software Image

◉ Use Custom Database Software Image to Provision Database Home

◉ Patch Database Home with Custom Database Software Image

1. Create Custom Database Software Image

From the Exadata Cloud@Customer service home page, create a custom Database Software Image in a specific OCI region. As part of the creation flow, you select the database version and a specific patch set/release update on which to build the image. You can optionally specify one-off patch numbers (comma-separated) and/or upload an Oracle Database Home inventory file to customize the Database Software Image to meet your specific requirements.

The custom Database Software Image is created with the selected database version and specified one-off patches.

2. Use Custom Database Software Image to Provision Database Home

While provisioning a Database Home, you are presented with the latest Oracle published software image by default.

You can change the database image selection to choose from older versions of Oracle published images, or switch to select from your own repository of custom Database Software Images.

You can choose a specific custom Database Software Image from the compartment of your choice. The selection list also supports a quick filter to narrow down images by database version.

The Database Home details page clearly calls out the database version and the underlying custom Database Software Image (displayed as a hyperlink for quick navigation) used to provision the home. 

When a custom Database Software Image is used to provision a home, the image is automatically downloaded and cached on the local Control Plane Servers by the Exadata Cloud@Customer service for subsequent use.

3. Patch Database Home with Custom Database Software Image

You can choose to patch the Database Home with an Oracle published image or use a custom Database Software Image.

As patching options for the Database Home, you are presented with custom Database Software Images built on the latest four RU versions. In addition, custom Database Software Images with the same version but different one-off patch IDs are also available to apply as a patch. The custom Database Software Image selection list available for patching is filtered by compartment.

A few items to note while managing custom Database Software Images:

◉ You are billed for the storage used by the custom Database Software Images.

◉ Custom Database Software Images do not expire automatically. You are responsible for deleting images that are no longer in use.

◉ Custom Database Software Images are a regional resource. You will need to manage/create them in each region as required.

◉ An Oracle Data Guard association will automatically use the custom Database Software Image associated with the primary database to create the new standby Database Home.

◉ For the first release, the 'Enable Data Guard' operation will not proceed if the underlying Database Software Image used by the primary database is no longer available.

Availability

Support for custom Database Software Images is now available on Exadata Cloud@Customer, Exadata Cloud Service, and Database Cloud Service BM/VM in all OCI commercial regions.

Source: oracle.com