Monday, November 29, 2021

Steps to add ExaCS/ExaCC storage servers into Oracle Enterprise Manager (OEM)

Introduction:

Oracle Enterprise Manager Cloud Control provides a complete cloud lifecycle management solution for Oracle Database Exadata Cloud Service (ExaCS) and Oracle Database Exadata Cloud at Customer (ExaCC). In this blog, we will go step by step through adding ExaCS storage servers to OEM monitoring. The same steps apply to ExaCC.

Prerequisites:

1. The EM agent is already deployed on the ExaCS nodes.

2. The Exadata plugin is available on the OMS and the agents.

3. The ExaCS VM node, database, and ASM targets are already added to monitoring.

Steps:

To associate the storage servers of the grid infrastructure with the cloud target, we need to create an EM named credential storing the ExaCLI username and password used to connect to the Exadata storage servers. Follow steps 1 and 2 to get the username and password.

1. Get the ExaCLI username:

The username is in the format cloud_user_<clustername>.

Obtain the cluster name from the ExaCS VM: as the grid user, run "crsctl get cluster name". The output is shown below.

[grid@exaash-XXXXXX ~]$ crsctl get cluster name

CRS-6724: Current cluster name is 'cl-27XXXXca-16b'

The ExaCLI user name is cloud_user_cl-27XXXXca-16b

2. Get the ExaCLI password:

As the root user, run the script "/opt/exacloud/get_cs_data.py" on the first ExaCS node.

#cd /opt/exacloud/

#./get_cs_data.py

The output is shown below.

ExaCli initial password is:

XXXXXXXXXXXX

3. Create the ExaCLI named credential in OEM

       1. Click the setup icon > click Security > select Named Credentials. The Named Credentials page is displayed.

       2. Click Create. The Create Credential page opens.

       3. Provide the following information:

  • Credential Name: Provide a suitable name to the credential. For example, EXADATA_CRED.
  • Credential Description: Describe the purpose of the credential and the intended use.
  • Authenticating Target Type: Specify the target type for which this credential set will be used for authentication. Select Oracle Exadata Storage Server from the menu.
  • Credential Type: Specify the type of the credential that you're creating. Select Credential for ExaCLI or RESTful API from the menu.
  • Scope: Select the visibility of the credential in Enterprise Manager. Select Global.
  • Username and Password: Provide the username and password to access the storage cells, which we obtained in steps 1 and 2.
  • Click Save.

You can now view the new credential created in the Named Credentials page.

4. Create a configuration file on the OMS server with the following information, which will be used to discover the storage servers using EMCLI.

Example: we create a file named $OMS_HOME/bin/exacs1.txt with the content below.

configMap.targetName=ExaCS1
configMap.region=Ashburn
configMap.tenancy=DevOCI
configMap.serviceType=ExaCS
configMap.monitorAgentUrl.0=https://exacs-xxxxx1.exacs1.corp.oraclevcn.com:3872/emd/main/
credMap.cellCredSet=SYSMAN:ExaCS_Storage_Cred
host.name.0=exacs-xxxxx1.exacs1.corp.oraclevcn.com
host.name.1=exacs-xxxxx2.exacs1.corp.oraclevcn.com
  • configMap.targetName: Name of the target that you want to create
  • configMap.region: The cloud region where the target is created
  • configMap.tenancy: The tenancy where the cloud target is created
  • configMap.serviceType: Specify ExaCC if the service type is Oracle Database Exadata Cloud at Customer, or ExaCS if the service type is Oracle Database Exadata Cloud Service
  • configMap.monitorAgentUrl.0: The URL of the agent that will be used to monitor the new target. You can obtain this from the agent home page, or run emcli status agent to get the agent URL.
  • credMap.cellCredSet: The credential set that we created earlier, which is used to discover the storage servers in the new cloud target. In the above example, SYSMAN is the owner of the credential and ExaCS_Storage_Cred is the name of the credential.
    • To obtain the credential set information, click the setup icon > click Security > select Named Credentials. The Named Credentials page is displayed. Collect the required information about the credential set in this page. See Step 3.
  • host.name.x: The host names running the databases that must be associated with the new cloud target. Specify all the host names for the discovery to complete successfully. Here, x is the host name index, which starts at 0 and must be incremented by 1 for every additional host name added to the file.
5. Run the emcli command on the OMS server as the oracle user.

cd $OMS_HOME/bin

$ ./emcli submit_procedure -name=ExaCloudServiceDiscovery -input_file=data:exacs1.txt -notification="scheduled, action required, running"

6. Monitor the discovery status in OEM

You can view the status of the discovery in the Procedure Activity page: go to the Enterprise Manager Home page > click the Enterprise icon > click Provisioning and Patching.

7. If the storage servers are not discovered, apply this workaround:

Go to ExaCS node 1, take a backup of the Perl script below, and edit it:

<Agent_Home>/plugins/oracle.sysman.xa.discovery.plugin_*/discover/csCellDiscovery.pl

Example: /u01/app/oem/agent_13.4.0.0.0/plugins/oracle.sysman.xa.discovery.plugin_*/discover/csCellDiscovery.pl

Find the line below:

my $exaclicommand = "exacli -l $username -c ".$ip." --xml -e \"list cell attributes name\"";

and replace exacli with /usr/local/bin/exacli. The line will then look like this:

my $exaclicommand = "/usr/local/bin/exacli -l $username -c ".$ip." --xml -e \"list cell attributes name\"";

8. On the OMS server, run the following command to delete the ExaCS target:

$ ./emcli delete_target -name="<target_name>" -type="oracle_exadata_cloud_service"

9. Perform step 5 again and check the status as outlined in step 6.

10. Explore the Cloud Target Home Page

After the discovery is complete, visit the cloud target home page and explore the various views in EM.

Go to Enterprise Manager Home > click the Targets icon > click Exadata.

You will see the ExaCS targets, including the storage servers.

Source: oracle.com

Friday, November 26, 2021

Oracle Database 21c (21.3) comes with full production support for Oracle Sharding on Kubernetes and Docker!

We are excited to announce that customers can now take advantage of Kubernetes- and Docker-based deployment for sharded databases starting with Oracle Database 21c (21.3).

Oracle Sharding distributes segments of a data set across many databases (shards) on independent compute resources, on-premises, or in the cloud. It enables globally distributed, linearly scalable, multi-model databases. It requires no specialized hardware or software. Oracle Sharding does all of this while providing strong consistency, the full power of SQL, support for structured and unstructured data, and the Oracle Database ecosystem (SQL Developer, Enterprise Manager Cloud Control, RMAN, and Data Pump). Oracle Sharding meets data sovereignty requirements and supports applications that require low latency and high availability.

Oracle Sharding on Kubernetes

With many Oracle customers already using Kubernetes to deploy applications and databases, there is increasing demand to manage large datasets with high throughput. This is where Oracle Sharding can be very useful: it brings extreme scalability, fault isolation, and geographical distribution of data by distributing the data set across different shards.

Oracle Sharding on Kubernetes uses StatefulSet to provide stable, unique network identifiers and stable, persistent storage, so you can create and manage your Oracle Sharding replica set natively in Kubernetes with Oracle-supported Helm chart templates. Data is stored on a persistent volume, so when a pod is recreated all the data is still there. Some of the other benefits of running Oracle Sharding on Kubernetes are as follows:

◉ Quick deployments using pre-built configurations

◉ Rapid provisioning and de-provisioning of Oracle Sharding makes CI/CD integration easier

◉ Self-monitoring for pod-failures

◉ Elasticity (scale-in and scale out) without losing data

◉ Seamless lifecycle management including patching

Oracle Sharding can be deployed on Kubernetes both on-premises and in the cloud. To get started with Oracle Sharding on Kubernetes, follow the instructions on GitHub.

Customers can also take advantage of running Oracle Sharding with Kubernetes by deploying the workload on Oracle Container Engine for Kubernetes (OKE) in Oracle Cloud which provides additional benefits like pre-built architecture patterns, pre-built images for Oracle Sharding as well as automation of Kubernetes, container, and database lifecycle management. To get started with Oracle Sharding on OKE, download the pre-built image from Oracle’s Container Registry.

Oracle Sharding on Docker


Docker provides a quick mechanism to deploy Oracle Sharding using an existing Oracle Single Instance Database Image. Some of the benefits of running Oracle Sharding on Docker are as follows:

◉ Simplified packaging includes Oracle Database and GDS to expedite delivery
◉ Automatic service startup based on input parameters (YAML or environment variables)
◉ Support for shard scale up and scale down based on input parameters

Oracle provides sample Docker build files to facilitate sharded database installation, configuration, and environment setup for DevOps users for all steps, including installing and configuring the Docker engine, creating a global service manager (shard director) and Oracle Database images, creating a network bridge, creating containers for the Oracle Sharding objects and shard director, and deploying the containers.

Find the instructions and downloads for sharded database deployment with Docker in the GitHub repository.

Source: oracle.com

Wednesday, November 24, 2021

Announcing a New Sample Schema: Customer Orders

Calling all Oracle Database developers! There's a new sample schema in town!


Customer Orders is a simple new schema for you to use in your presentations, demos, and blogs.

Why Have We Created a New Schema?


The current sample schemas (HR, OE, SH, PM, IX, & BI) were created in the early 2000s, back when 9i was still hip and new.

Nearly two decades and five major releases later, Oracle Database has improved significantly. And the development community has evolved too. JSON has replaced XML as the de facto data transfer format. 

So we wanted a data set showing you how to use JSON in Oracle Database, while also highlighting other enhancements added over the years, such as identity columns.

Customer Orders requires Oracle Database 12c or higher.

While we could have added new features to one of the existing schemas, one of the key reasons for using them is they've become a standard you can rely upon. Everyone works from the same definition, which means you can use the tables in your blog posts, scripts, and presentations without further explanation. Changing the existing schemas breaks this common reference point.

So we decided to have a fresh start. And Customer Orders was born!

What Is the New Schema?


Customer Orders models a simple retail application using a handful of tables, including PRODUCTS, ORDERS, and ORDER_ITEMS.

The sample data represents a basic clothing line. PRODUCTS.PRODUCT_DETAILS stores a JSON document describing each clothing item and its customer reviews. For example:

{
  "colour" : "red",
  "gender" : "Girl's",
  "brand" : "BRANDNAME",
  "description" : "description",
  "sizes" : [ 
    "1 Yr", "2 Yr", "3-4 Yr", "5-6 Yr", "7-8 Yr", "9-10 Yr" 
  ],
  "reviews" :  [
    {
      "rating" : 9,
      "review" : "Review text"
    }
  ]
}

This allows you the flexibility to use this schema to store a wide range of products. All you need to do is add the relevant attributes to your JSON!
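
For example, you can pull individual attributes out of the document with json_value. Here is a minimal sketch, assuming the PRODUCTS table and the JSON shape shown above:

select p.product_name,
       json_value ( p.product_details, '$.colour' ) colour,
       json_value ( p.product_details, '$.brand' ) brand
from   products p;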

Where Can I Find Customer Orders?


Download the schema from the Oracle sample schemas GitHub repository.

How Do I Install Customer Orders?


To install customer orders, you must use Oracle Database 12.1.0.2 or higher.

Once you've downloaded the scripts, run co_main with the following parameters:

@co_main <CO_password> <connect string> <tablespace> <temp tablespace>

This will drop and recreate the user CO.

For example:

@co_main copassword localhost:1521/pdb USERS TEMP
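
To sanity-check the install, you can list what was created with a data dictionary query like this (a sketch; it assumes you installed into the user CO as above):

select table_name
from   all_tables
where  owner = 'CO'
order  by table_name;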

If you want to install the tables in an existing schema, run the following scripts:

@co_ddl
@co_dml

What Can I Do with This Schema?


Whatever you like! Please use this to build demos, write scripts, and use in presentations.

Looking for inspiration?

The installation includes several views and sample queries to get your SQL juices flowing.

Here are a few examples:

Extract Product Reviews from PRODUCT_DETAILS JSON


The sample data includes an array of reviews for each product in its JSON data.

Using 12c's JSON_table, you can extract these into traditional rows and columns, like so:

select p.product_name, r.rating, 
       round ( 
         avg ( r.rating ) over (
           partition by product_name
         ),
         2
       ) avg_rating,
       r.review
from   products p,
       json_table (
         p.product_details, '$'
         columns ( 
           nested path '$.reviews[*]'
           columns (
             rating integer path '$.rating',
             review varchar2(4000) path '$.review'
           )
         )
       ) r;

Find High-Value Customers

Knowing who your best customers are and keeping them happy is a great way to keep your company going.

The following uses 12c's row pattern matching to find all the people who placed at least £100 worth of orders for three consecutive months:

with rws as (
  select 
         o.customer_id, trunc ( o.order_datetime, 'mm' ) order_month,
         sum ( oi.quantity * oi.unit_price ) month_total
  from   products p
  join   order_items oi
  on     p.product_id = oi.product_id
  join   orders o
  on     oi.order_id = o.order_id
  group  by o.customer_id, trunc ( o.order_datetime, 'mm' )
)
  select * from rws 
  match_recognize (
    partition by customer_id
    order by order_month
    measures
      count(*) as num_months,
      sum ( month_total ) as total_value 
    pattern ( high_value consecutive{2,} )
    define
      high_value as 
        month_total >= 100,
      consecutive as 
        order_month = prev ( add_months ( order_month, 1 ) )
        and month_total >= 100
);

Produce a Matrix of Sales Value by Month and Year

Finance teams often want sales broken down into a table with months across the top and years down the side.

The following shows you how to do this with the PIVOT clause:

with order_totals as (
  select extract ( year from o.order_datetime ) order_year,
         to_char ( o.order_datetime, 'MON', 'NLS_DATE_LANGUAGE = english' ) order_month,
         sum ( oi.quantity * oi.unit_price ) value_of_orders
  from   orders o
  join   order_items oi
  on     o.order_id = oi.order_id
  group  by extract ( year from o.order_datetime ),
         to_char ( o.order_datetime, 'MON', 'NLS_DATE_LANGUAGE = english' )
)
  select * from order_totals
  pivot (
    sum ( value_of_orders ) value
    for order_month in (
      'JAN' JAN, 'FEB' FEB, 'MAR' MAR, 'APR' APR, 'MAY' MAY, 'JUN' JUN,
      'JUL' JUL, 'AUG' AUG, 'SEP' SEP, 'OCT' OCT, 'NOV' NOV, 'DEC' DEC
    )
  )
order by order_year;

Source: oracle.com

Monday, November 22, 2021

How to split comma separated value strings into rows in Oracle Database

Often you receive a CSV string that you need to convert to rows. To do this with SQL you need to:

◉ Generate a row for each value

◉ For each row N, extract the value at position N from the string

You can do this in Oracle Database with a query like:

with rws as (
  select 'split,into,rows' str from dual
)
  select regexp_substr (
           str,
           '[^,]+',
           1,
           level
         ) value
  from   rws
  connect by level <= 
    length ( str ) - length ( replace ( str, ',' ) ) + 1;

VALUE
split
into
rows

So what's going on here?

The connect by level clause generates a row for each value. It finds how many values there are by:

◉ Using replace ( str, ',' ) to remove all the commas from the string

◉ Subtracting the length of the replaced string from the original to get the number of commas

◉ Adding one to this result to get the number of values

For example, 'split,into,rows' is 15 characters long and 13 with its commas removed, giving 15 - 13 + 1 = 3 values.

The regexp_substr extracts each value using this regular expression:

[^,]+

This searches for:

◉ Characters not in the list after the caret. So everything except a comma.

◉ The plus operator means it must match one or more of these non-comma characters

The third argument tells regexp_substr to start the search at the first character. And the final one instructs it to fetch the Nth occurrence of the pattern. So row one finds the first value, row two the second, and so on.
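
For instance, asking regexp_substr for the second occurrence directly returns the second value:

select regexp_substr ( 'split,into,rows', '[^,]+', 1, 2 ) value
from   dual;

VALUE
into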

Before we move on, note the solution above assumes there are no commas at the start or end of the string. If this is possible, you should trim the string first:

with rws as (
  select ',leading,commas,and,trailing,' str from dual
)
  select regexp_substr (
           str,
           '[^,]+',
           1,
           level
         ) value
  from   rws
  connect by level <= 
    length ( trim ( both ',' from str ) ) - 
    length ( replace ( str, ',' ) ) + 1;
    
VALUE      
leading     
commas      
and         
trailing

You can also adapt this to split strings with other delimiters.

How to turn space, semicolon, pipe or other delimited strings into rows


CSV may be the most common format for delimited values. But often you'll want to split strings with other delimiters, such as pipes or semicolons. To do this, swap commas for the new character:

with rws as (
  select 'split;semicolons;into;rows' str from dual
)
  select regexp_substr (
           str,
           '[^;]+',
           1,
           level
         ) value
  from   rws
  connect by level <= 
    length ( str ) - 
    length ( replace ( str, ';' ) ) + 1;
  
VALUE        
split         
semicolons    
into          
rows

Using this basic method you can overcome a common challenge: passing a CSV as input to an in list.

How to do variable in lists


The in condition returns all rows where the target matches one of the values in a comma-separated list. For example:

select employee_id, first_name, last_name
from   hr.employees
where  last_name in ( 'King', 'Kochhar', 'De Haan' );

EMPLOYEE_ID FIRST_NAME    LAST_NAME   
        102 Lex           De Haan      
        156 Janette       King         
        100 Steven        King         
        101 Neena         Kochhar

The issue here is the items in the list need to be separate values. If you pass them as a single CSV string like this:

select employee_id, first_name, last_name
from   hr.employees
where  last_name in ( 'King,Kochhar,De Haan' );

no rows selected

The query returns no rows!

This is because it's searching for someone with the name King,Kochhar,De Haan. Unsurprisingly, no one has that name.

This makes in hard to use with bind variables, particularly if the number of values is unknown.

Using the CSV-to-rows trick above you can pass the search string as a single value and split it into rows like this:

with rws as (
  select 'King,Kochhar,De Haan' str from dual
)
select employee_id, first_name, last_name
from   hr.employees
where  last_name in ( 
  select regexp_substr (
           str,
           '[^,]+',
           1,
           level
         ) value
  from   rws
  connect by level <= 
    length ( str ) - 
    length ( replace ( str, ',' ) ) + 1
);

EMPLOYEE_ID FIRST_NAME    LAST_NAME   
        156 Janette       King         
        100 Steven        King         
        101 Neena         Kochhar      
        102 Lex           De Haan

This enables you to use a bind variable for the in list:

declare
  emp_cur     sys_refcursor;
  search_str  varchar2(100) := 'King,Kochhar,De Haan';
  employee_id integer;
  first_name  varchar2(30);
  last_name   varchar2(30);
begin
  open emp_cur for q'!select employee_id, first_name, last_name
from   hr.employees
where  last_name in ( 
  select regexp_substr (
           :str,
           '[^,]+',
           1,
           level
         ) value
  from   dual
  connect by level <= 
    length ( :str ) - length ( replace ( :str, ',' ) ) + 1
)!' using search_str, search_str, search_str;

  loop
    fetch emp_cur into employee_id, first_name, last_name;
    exit when emp_cur%notfound;
    dbms_output.put_line (  
      employee_id || ' ' || first_name || ' ' || last_name 
    );
  end loop;
end;
/

156 Janette King
100 Steven King
101 Neena Kochhar
102 Lex De Haan

This works well when you're splitting one value, such as a string literal or bind variable.

But what if the strings are stored in a table and you have many rows with strings to split?

Splitting delimited values stored in columns into rows


Sometimes you'll find columns storing lists of separated values. For example:

create table csvs (
  csv_id   integer
    primary key,
  csv_text varchar2(1000)
);

insert into csvs 
  values ( 1, 'split,into,rows' );
insert into csvs 
  values ( 2, 'even,more,values,to,extract' );
  
commit;

This is bad design, violates first normal form, and causes many headaches.

For example, if you want to join each value to rows in another table, you need to convert the string into rows first.

You can do this with the CSV-to-rows trick above as the basis. But it needs a tweak.

If you swap dual for your table (csvs here), the query links every row with every other row in the table! This leads to generating a huge data set and slowing the query to a crawl.

There are a few ways to avoid this.

From 12c you can use a lateral join. This enables you to write a subquery that uses values from tables to its left. So the database only generates a tree corresponding to each source row.

This gives a query like:

select csv_id, regexp_substr (
         csv_text,
         '[^,]+',
         1,
         rn
       ) val
from   csvs
cross  join lateral (
  select level rn from dual
  connect by level <=   
    length ( csv_text ) - length ( replace ( csv_text, ',' ) ) + 1
);

CSV_ID VAL       
     1 split      
     1 into       
     1 rows       
     2 even       
     2 more       
     2 values     
     2 to         
     2 extract

If you're stuck on ancient versions of Oracle Database, you can extend the connect by clause instead:

select csv_id, regexp_substr (
         csv_text,
         '[^,]+',
         1,
         level
       ) val
from   csvs 
connect by level <=   
  length ( csv_text ) - length ( replace ( csv_text, ',' ) ) + 1
and    prior csv_text = csv_text
and    prior sys_guid () is not null;

CSV_ID VAL       
     2 even       
     2 more       
     2 values     
     2 to         
     2 extract    
     1 split      
     1 into       
     1 rows

So what do the extra connect by clauses do?

◉ prior csv_text = csv_text ensures the query only creates new rows linked to the original

◉ prior sys_guid () is not null adds a unique bit of data to each row. This prevents cycles because Oracle Database uses all the columns with prior operators in the connect by to check for loops. If two rows in a tree have the same value for these columns, the database considers this a loop.

These methods help you turn delimited values into rows, but so far we've made a big assumption:

The separator cannot appear within the values themselves.

What do you do if it can?

Converting quoted values containing the separator into rows


Say you've received a CSV with the values in "lastname, firstname" format. Splitting on each comma will place last and first names on separate lines, giving twice the number of rows you need!

To overcome this, change the regular expression. Instead of searching for the Nth token that excludes a comma, an alternative is to:

◉ Find the number of tokens by counting how many times "," (quote-comma-quote) appears and adding one; regexp_count ( str, '","' ) is a neat way to count these
◉ Find the position of the opening double quote of the Nth value. Each value is wrapped in two double quotes, so this is the ( 2N - 1 )th quote in the string: instr ( str, '"', 1, ( rn * 2 ) - 1 )
◉ Return all the characters that are not a double quote from this point

Which looks something like:

with rws as (
  select department_id, 
         listagg ( '"' || last_name || ', ' || first_name || '"', ',' ) 
           within group ( order by employee_id ) str
  from   hr.employees
  where  department_id in ( 10, 20, 30 )
  group  by department_id
)
  select department_id, regexp_substr (
           str,
           '[^"]+',
           instr  ( str, '"', 1, (rn*2)-1 ), 
           1
         ) employee
  from   rws, lateral (
    select level rn from dual
    connect by level <=   
      regexp_count ( str, '","' ) + 1
  );
  
DEPARTMENT_ID EMPLOYEE             
           10 Whalen, Jennifer      
           20 Hartstein, Michael    
           20 Fay, Pat              
           30 Raphaely, Den         
           30 Khoo, Alexander       
           30 Baida, Shelli         
           30 Tobias, Sigal         
           30 Himuro, Guy           
           30 Colmenares, Karen

This makes two important assumptions:

◉ Every value contains exactly one comma. If there can be a different number of delimiters in each value, you'll need to adjust the logic.
◉ Each value is enclosed in quotes (or some other character not present in the values themselves)

A note on performance


At this point you may be wondering:

Why not use regexp_count in the original query to find how many values there are? Why subtract the lengths of the strings?

The short answer is: you can!

But be aware that regular expressions are slower than other string manipulation functions.

On small data sets processing short CSVs you're unlikely to notice this difference. But if gaining every last microsecond matters to you, avoid the regular expressions. You can also swap out the regexp_substr expression in the select for one using substr and instr.
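
As a sketch of that idea, here is the first example rewritten with substr and instr only. Padding the string with a leading and trailing comma keeps the position arithmetic uniform; note this illustration, unlike the regexp version, returns empty values for consecutive commas:

with rws as (
  select ',' || 'split,into,rows' || ',' str from dual
)
  select substr (
           str,
           instr ( str, ',', 1, level ) + 1,
           instr ( str, ',', 1, level + 1 ) -
           instr ( str, ',', 1, level ) - 1
         ) value
  from   rws
  connect by level <=
    length ( str ) - length ( replace ( str, ',' ) ) - 1;

VALUE
split
into
rows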

Whichever delimited values to rows method you use, writing the query is fiddly. As this is a common task, it'd be good to create a function to split strings for you.

You can split a single string with pipelined table functions (PTFs) like this:

create or replace type string_table as 
  table of varchar2 (4000);
/

create or replace function split_string (
  delimited_string varchar2,
  separator        varchar2 default ','
) return string_table
  pipelined
as
  delimited_string_cleaned varchar2(32767);
  substring varchar2(4000);
  pos       pls_integer;
begin
  delimited_string_cleaned := 
    trim ( both separator from delimited_string ) || 
    separator;
  pos := instr ( delimited_string_cleaned, separator );
  substring := substr ( delimited_string_cleaned, 1, pos - 1 );
  
  loop
    exit when substring is null;
    pipe row ( substring );
  
    substring := substr ( 
      delimited_string_cleaned, 
      pos + 1, 
      instr ( 
        delimited_string_cleaned, separator, pos + 1 
      ) - pos - 1 
    );
    pos := instr ( delimited_string_cleaned, separator, pos + 1 );   
  end loop;

  return;
end;
/

select *
from   split_string ( 'King,Kochhar,De Haan' );

COLUMN_VALUE   
King            
Kochhar         
De Haan

While PTFs can accept cursors, making a generic PTF that converts CSVs stored in a table to rows is hard.

Luckily Oracle Database 19c offers a better way: SQL macros!

Make reusable string-to-rows functions with SQL macros


Added in Oracle Database 19.6, SQL macros return table SQL expressions. From 21c you can also make macros that return scalar expressions.

You can think of SQL macros as query templates. They return a string containing the text of your expression. The function's parameters are placeholders in this string.

At parse time the database resolves the expression. It replaces the parameter placeholders with the text of the actual values you call it with. It then substitutes this expression back into your query.

This enables you to define a reusable query you can pass tables or columns to at runtime.

This is a big advantage over PTFs which can't accept tables as parameters.

Macros also resolve to pure SQL. This avoids any context switches from SQL to PL/SQL and gives the optimizer better visibility of the query. These benefits mean using macros is likely to be faster than PTFs.

In this post we've identified two template queries:

◉ Taking a CSV string literal and converting it into rows
◉ Extracting delimited values stored in a table

So we need two macros. One that accepts an input string, the other a table name and source column. You could create these as two functions with different names. Or create one overloaded function in a package. I'll do the latter here.

One overload will accept a string, the other a table and column containing delimited values. Both will have an optional parameter for the separator, which defaults to a comma:

create or replace package string_macros_pkg as 
  function split_string ( 
    tab dbms_tf.table_t,
    col dbms_tf.columns_t,
    separator varchar2 default ','
  ) return clob sql_macro;
  
  function split_string ( 
    delimited_string varchar2,
    separator        varchar2 default ','
  ) return clob sql_macro;
end;
/

Place the template queries above into these like so:

create or replace package body string_macros_pkg as 
  function split_string ( 
    tab dbms_tf.table_t,
    col dbms_tf.columns_t,
    separator varchar2 default ','
  ) return clob sql_macro as
    sql_text clob;
  begin
    
    sql_text := 'select t.*, 
         regexp_substr (
           ' || col ( 1 ) || ',
           ''[^'' || separator || '']+'',
           1,
           pos
         ) str,
         pos
  from   tab t,
         lateral (
           select level pos
           from   dual
           connect by level <= 
             length ( ' || col ( 1 ) || ' ) 
               - length ( replace ( ' || col ( 1 ) || ', separator ) ) 
               + 1
         )';
  
    return sql_text;
    
  end split_string;
  
  function split_string ( 
    delimited_string varchar2,
    separator        varchar2 default ','
  ) return clob sql_macro as
      sql_text clob;
  begin
    
    sql_text := 'select 
         regexp_substr (
           delimited_string,
           ''[^'' || separator || '']+'',
           1,
           level
         ) str,
         level pos
  from   dual
  connect by level <= 
    length ( delimited_string ) 
      - length ( replace ( delimited_string, separator ) ) 
      + 1';
  
    return sql_text;
    
  end split_string;

end;
/

With this in place, you can now call the macro, passing either a string or a table's column to split. Here are a few examples to get you going:

select employee_id, first_name, last_name
from   hr.employees
where  last_name in (
  select str 
  from   string_macros_pkg.split_string ( 'King,Kochhar,De Haan' )
);

EMPLOYEE_ID FIRST_NAME    LAST_NAME   
        100 Steven        King         
        156 Janette       King         
        101 Neena         Kochhar      
        102 Lex           De Haan  

select employee_id, first_name, last_name 
from   hr.employees
join   string_macros_pkg.split_string ( 'King,Kochhar,De Haan' )
on     last_name = str;

EMPLOYEE_ID FIRST_NAME    LAST_NAME
        100 Steven        King     
        156 Janette       King     
        101 Neena         Kochhar  
        102 Lex           De Haan  
        
select csv_id, str, pos 
from   string_macros_pkg.split_string ( 
  csvs, columns ( csv_text ), ',' 
);

CSV_ID STR       POS   
     1 split       1 
     1 into        2 
     1 rows        3 
     2 even        1 
     2 more        2 
     2 values      3 
     2 to          4 
     2 extract     5

Source: oracle.com

Friday, November 19, 2021

Free developer resources to help you get started with Oracle Database

Learn SQL with these free courses

Oracle Dev Gym has many classes to help you learn Oracle technology. All the classes include a free certificate of completion.

Databases for Developers: Foundations

Learn SQL in this FREE 12-part class. It covers the basics of SQL in Oracle Database, teaching you how to:

◉ Create tables

◉ Query and join tables

◉ Add, change, and remove rows with insert, update, and delete

The course is a series of videos to teach you database concepts, interactive SQL tutorials, and quizzes to reinforce the ideas.

Databases for Developers: Next Level

This 10-part course continues where Databases for Developers: Foundations left off, taking you further on your SQL journey. It covers many data manipulation methods, including:

◉ Sorting data

◉ Getting running totals

◉ Converting rows to columns

◉ Hierarchical queries

As with Foundations, each module includes a video, interactive tutorials, and quizzes to test you.

Databases for Developers: Performance

Once you've mastered writing SQL, the next step is to learn how to make it fast. This class teaches you the basics of optimizing SQL. It shows you how to:

◉ Get and read execution plans to understand how the database processes SQL

◉ Make statements faster by creating indexes and materialized views

◉ Find slow SQL

With a range of other tuning techniques, this class will start you on the path to being a SQL tuning expert. It follows the winning formula of videos, interactive SQL tutorials, and quizzes to teach you how to tune SQL.

Analytic SQL for Developers

This 6-module boot camp helps you become an expert with Oracle analytic SQL functions. It takes a deeper dive into analytic functions, teaching you how the over () clause works and how to use it to do the following (a short example appears after the list):

◉ Rank rows

◉ Get running totals

◉ Find values from the next or previous row

◉ Convert rows to string lists
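
For a flavour of what the over () clause looks like in practice, here is a minimal sketch; it assumes the standard HR sample schema is installed:

select first_name,
       last_name,
       salary,
       rank () over ( order by salary desc ) salary_rank,
       sum ( salary ) over ( order by hire_date, employee_id ) running_total,
       lag ( salary ) over ( order by hire_date, employee_id ) previous_salary
from   hr.employees;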

Each module has a series of videos and quizzes to give you a thorough understanding of each topic.

Free access to Oracle Database

Oracle Cloud Free Tier

Oracle's Always Free cloud services are the perfect way to get started with Oracle Database. These give you unlimited access to:

◉ 2 Autonomous Databases, 20 GB each

◉ Up to 4 instances of Arm Compute

◉ 200 GB block volume

◉ 10 GB object storage

Oracle Live SQL

Live SQL is a free, browser-based SQL client. With Live SQL you can save and share your SQL scripts. You can also access the code library. This has thousands of scripts made by community members and a range of interactive tutorials from Oracle experts.

Oracle Database XE

If you want a local database get Oracle Database 18c Express Edition (XE). This is a full-featured release you can install on your machine - all completely free. With Oracle Database XE you can use up to:

◉ 12 GB of user data

◉ 2 GB of database RAM

◉ 2 CPU threads

◉ 3 Pluggable Databases

It also includes the In-Memory, Partitioning, Advanced Analytics, and Advanced Security options. This makes it an ideal local environment for you to tinker with Oracle Database.

Oracle Live Labs

To help you get started with Oracle's cloud services, Oracle LiveLabs has an extensive catalogue of workshops. These tutorials give you free access to Oracle Cloud tools while you run the lab. With hundreds of workshops to choose from, you can get to grips with Oracle technologies. Whether you're a developer, DBA, or data scientist, you'll find labs to help you learn how these work.

Oracle APEX

Oracle Application Express (APEX) is a low-code development environment. Using APEX, developers can quickly develop and deploy compelling apps that solve real problems and provide immediate value.

Request a free workspace on apex.oracle.com to see how fast it is to build complete applications with APEX.

Stay in touch

Ask the Oracle Mentors (TOM)

Founded by Tom Kyte, Ask TOM is a place to get your questions about Oracle Database and related products answered by a team of Oracle experts. Submit your questions to the team or join the live Ask TOM Office Hours sessions.

Oracle Developer Community Newsletters

Subscribe to these monthly newsletters to hear about the latest blog posts, videos, and product announcements from Oracle.

Source: oracle.com

Wednesday, November 17, 2021

Is TimesTen an Oracle Database Cache or a System of Record?

Introduction

There seems to be some confusion on whether the Oracle TimesTen In-Memory Database is a cache or a system of record.

I often hear comments like:

◉ TimesTen can only be a cache as it has no persistence

◉ TimesTen is only a cache for Oracle databases

◉ TimesTen should only be used as a cache

This blog looks at the TimesTen fundamentals to answer those questions.

TimesTen Persistence

TimesTen is an in-memory database, so all of the data is in memory all of the time. This means that TimesTen can be optimized for in-memory performance and, as a result, it can go really fast. Even though TimesTen is an in-memory database, it still supports ACID transactions and database recovery.

TimesTen enables ACID transactions and database recovery much like an Oracle database does. TimesTen has transaction log files which store the REDO and UNDO records for transactions.

TimesTen also has checkpoint files which are like Oracle database data files as they store database objects like tables, indexes, views, materialized views, sequences and PLSQL packages/procedures/functions.

The combination of transaction log files and checkpoint files means that TimesTen can restart from both scheduled and unscheduled database shutdowns, much like an Oracle database does. This means that things like power failures, machines being turned off, and database processes being killed will not result in data loss.

TimesTen transaction log files and checkpoint files are just files stored on a POSIX file system (e.g., ext4 or XFS on Linux). The latency and bandwidth of the persistent storage will determine the database load time and, if needed, the database recovery time.

The consequence of this design is that TimesTen has the same persistence and recovery ability as databases like Oracle, MySQL, PostgreSQL and SQL Server.

TimesTen as a System of Record


As TimesTen supports ACID transactions, data persistence, and recovery, the TimesTen In-Memory Database can be used as a system of record. Most TimesTen customers do not like single points of failure, so TimesTen is almost always deployed in some high availability configuration.

TimesTen has two different architectures:

◉ TimesTen Classic is a single instance In-Memory Database
◉ TimesTen Scaleout is a multi-instance, shared nothing, scale-out version of the TimesTen In-Memory Database

TimesTen Classic uses replication to enable high availability. There are many different possible replication configurations, but some form of active-standby or active-active are the most common.

TimesTen Scaleout is a sharded database which has multiple copies of each shard to enable high availability.

The TimesTen In-Memory Database enables low-latency SQL applications where microseconds matter, for systems such as telecommunications, financial services, and trading.

TimesTen can be used as a system of record, and about half of TimesTen's customers use it that way.

TimesTen as a Cache


TimesTen can also be used as a read/write or a read-only cache for other databases.

  • For read caching, Oracle database triggers + log tables with polling are used
    • Customers define the polling interval on a per-table basis to get the committed inserts, updates and deletes of interest
    • Data can be loaded dynamically from the Oracle database if it does not already exist in the cache tables
  • For write caching, the committed inserts, updates and deletes of interest are written to the Oracle database
    • Parallel replication allows these writes to occur in parallel while maintaining commit order
      • OCI array processing or PLSQL blocks are used to optimize the OracleNet traffic for the writes

For read-only caching, TimesTen can also be used as an Oracle GoldenGate target. This does not require triggers on the Oracle database.

This means that given one or more GoldenGate configured source databases, the committed inserts, updates and deletes of interest can be captured, replicated and applied to a TimesTen database, whether it is TimesTen Classic or TimesTen Scaleout.

Source: oracle.com

Monday, November 15, 2021

Simplified TimesTen 18.1 Licensing

Based on customer feedback, the Oracle TimesTen 18.1 licensing has been simplified:

◉ TimesTen In-Memory Database can now also be used for caching

◉ TimesTen Application Tier Database Cache can now also use TimesTen Scaleout

The complete licensing document for TimesTen 18.1 is here.

The relevant portions of the new licensing text are summarized below.

Oracle TimesTen In-Memory Database Licensing


TimesTen In-Memory Database 18.1 supports:

◉ Single Instance TimesTen Classic
◉ Replicated TimesTen Classic
◉ TimesTen Scaleout
◉ Single Instance TimesTen Classic as a cache for the Oracle Database
◉ Replicated TimesTen Classic as a cache for the Oracle Database
◉ TimesTen Scaleout as a cache for the Oracle Database
 

Oracle TimesTen Application Tier Database Cache Licensing


TimesTen Application Tier Database Cache 18.1 supports:

◉ Single Instance TimesTen Classic as a cache for the Oracle Database [Enterprise Edition]

◉ Replicated TimesTen Classic as a cache for the Oracle Database [Enterprise Edition]

◉ TimesTen Scaleout as a cache for the Oracle Database [Enterprise Edition]

◉ At least one table on an Enterprise Edition Oracle Database needs to be cached for this product to be licensed

Disclaimer: These are my personal thoughts and do not represent Oracle's official viewpoint in any way, shape, or form.

Source: oracle.com