Monday, June 29, 2020

DBMS - Data Independence

If a database system is not multi-layered, it becomes difficult to make any changes to it. That is why database systems are designed in multiple layers, as we learnt earlier.

Data Independence


A database system normally contains a lot of data in addition to users’ data. For example, it stores data about data, known as metadata, to locate and retrieve data easily. It is rather difficult to modify or update a set of metadata once it is stored in the database. But as a DBMS expands, it needs to change over time to satisfy the requirements of the users. If all of this data were tightly dependent on one another, changing it would become a tedious and highly complex job.

Metadata itself follows a layered architecture, so that when we change data at one layer, it does not affect the data at another level. This data is independent of, but mapped to, the data at the other layers.

Logical Data Independence


Logical data is data about the database, that is, it stores information about how data is managed inside. For example, a table (relation) stored in the database and all the constraints applied on that relation.

Logical data independence is the mechanism that decouples this logical data from the actual data stored on the disk. If we make changes to the table format, they should not change the data residing on the disk.
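
As a minimal sketch of the idea (the table and column names here are hypothetical), adding a column changes the logical schema only; the rows already stored on disk are not rewritten, and existing queries keep working:

-- A logical schema change: no existing data on disk is rewritten.
ALTER TABLE employees ADD (middle_name VARCHAR2(50));

-- Queries written against the original columns are unaffected.
SELECT first_name, last_name FROM employees;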

Physical Data Independence


All the schemas are logical, and the actual data is stored in bit format on the disk. Physical data independence is the power to change the physical data without impacting the schema or logical data.

For example, in case we want to change or upgrade the storage system itself − suppose we want to replace hard-disks with SSD − it should not have any impact on the logical data or schemas.

Saturday, June 27, 2020

What Is Oracle Database?

What is a database


A database is an organized collection of structured data stored electronically in a computer system.

When the computer was first invented, it was mainly used for scientific research to perform calculations quickly.

As the computer was adopted more and more widely, the requirements also grew: computers were expected to store larger volumes of data and retrieve it quickly.

Before the database system was invented, the flat file structure was commonly used to store data. For example, here is the comma-separated value (CSV) file that stores employee information:

first name, last name, phone
John, Doe, (408)-245-2345
Jane, Doe, (503)-234-2355
...

The CSV file has three columns, which are known as fields, and rows, which are known as records. When the number of rows in the flat file grows, e.g., to millions of rows, it becomes unmanageable.

In the 1970s, Dr. Edgar F. (Ted) Codd, a computer scientist, invented the relational model for database management. The relational model addresses many of the issues caused by the flat file model. According to his model, data is organized in entities and attributes, instead of combining everything in a single structure.

An entity is a person, place, or thing, and attributes describe that person, place, or thing. For example, you can use the relational model to organize the employee information into an employee entity with the attributes first name, last name, and phone:

Because each employee may have one or more contacts, you can create a contact entity and relate the employee entity to the contact entity through a relationship called one-to-many.


By the way, we often refer to entities as tables, records as rows, and fields as columns.

The relational model is better than the flat file model because it removes duplicate data. For example, if you put employee and contact information in the same file, an employee who has more than one contact will appear in multiple rows.
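
As a minimal sketch (the table and column names are illustrative, not taken from the article), the one-to-many relationship can be expressed with a foreign key:

CREATE TABLE employees (
  employee_id NUMBER       PRIMARY KEY,
  first_name  VARCHAR2(50) NOT NULL,
  last_name   VARCHAR2(50) NOT NULL
);

CREATE TABLE contacts (
  contact_id  NUMBER       PRIMARY KEY,
  employee_id NUMBER       NOT NULL REFERENCES employees (employee_id),
  phone       VARCHAR2(20) NOT NULL
);

Each employee row is stored once, and any number of contact rows can refer to it, which removes the duplication described above.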

The Relational Database Management System, or RDBMS for short, manages relational data. Oracle Database is the RDBMS with the largest market share.

Besides the Oracle Database, there are other RDBMS products available. Here are some notable ones:

◉ Db2 from IBM.

◉ SQL Server from Microsoft.

◉ MySQL – the most popular open-source database, also from Oracle.

◉ PostgreSQL – the most advanced open source database.

Oracle Database features


Oracle Database allows you to quickly and safely store and retrieve data. Here are the integration benefits of the Oracle Database:

◉ Oracle Database is cross-platform. It can run on various hardware across operating systems including Windows Server, Unix, and various distributions of GNU/Linux.

◉ Oracle Database has its own networking stack that allows applications on different platforms to communicate with the database smoothly. For example, applications running on Windows can connect to an Oracle Database running on Unix.

◉ ACID-compliant – Oracle is an ACID-compliant database, which helps maintain data integrity and reliability.

◉ Commitment to open technologies – Oracle was one of the first databases to support GNU/Linux in the late 1990s, before GNU/Linux became a commercial product, and it has supported this open platform ever since.

Oracle Database has several structural features that make it popular:

◉ Logical data structure – Oracle uses logical data structures to store data, so that you can interact with the database without knowing where the data is stored physically.

◉ Partitioning – a high-performance feature that allows you to divide a large table into smaller pieces and store each piece across storage devices (see the sketch after this list).

◉ Memory caching – the memory caching architecture allows a very large database to scale up while still performing at high speed.

◉ Data Dictionary – a set of internal tables and views that help you administer Oracle Database more effectively.

◉ Backup and recovery – ensures the integrity of the data in case of system failure. Oracle includes a powerful tool called Recovery Manager (RMAN), which allows DBAs to perform cold, hot, and incremental database backups and point-in-time recoveries.

◉ Clustering – Oracle Real Application Clusters (RAC) provides high availability, keeping the system up and running without interruption of service in case one or more servers in a cluster fail.
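
As a minimal sketch of the partitioning feature mentioned above (the table and partition names are hypothetical), a large table can be split by ranges of dates:

CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE
)
PARTITION BY RANGE (order_date) (
  PARTITION orders_2019 VALUES LESS THAN (DATE '2020-01-01'),
  PARTITION orders_2020 VALUES LESS THAN (DATE '2021-01-01')
);

Queries that filter on order_date can then touch only the relevant partition instead of scanning the whole table.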

Oracle Database Editions


Oracle provides three main editions of Oracle Databases as follows:

1) Enterprise Edition (EE) is the most common and most expensive edition of Oracle Database. It has the following characteristics:

◉ No maximum number of CPUs
◉ No limits on memory or database size
◉ Includes premium features that are not available in other editions.

2) Standard Edition (SE) is a limited edition of the Enterprise Edition that has the following characteristics:

◉ Limited to four or fewer CPUs
◉ No limit on memory or database size
◉ Includes many features, but not as many as EE

3) Express Edition (XE) is a free-to-use edition of Oracle Database that is available on both Windows and GNU/Linux platforms. These are the features of Oracle Database XE 18c:

◉ Limited to 2 CPUs
◉ Can use a maximum of 2GB of RAM and store up to 12GB of user data.
◉ Very limited features

Source: oracletutorial.com

Friday, June 26, 2020

HTML with Embedded Images from PL/SQL

◉ Background


The IMG tag allows you to include images in HTML. Typically you see this with the source containing a URL, as shown below.

<img src="http://oracle-base.com/images/site_logo.gif" />

Using a URL in the tag means the browser (or email client) must make a separate HTTP request to get the image. An alternative is to actually embed the image data into the IMG tag. The basic format of the tag contents is shown below, where "mimetype" is the mime type of the image ("image/png", "image/gif" etc.) and "data" is the base64 encoded data that makes up the image.

<img src="data:<mimetype>;base64,<data>" />

There are a couple of reasons why you might prefer this method:

◉ Reducing the total number of HTTP requests against your app server may improve performance, provided you are not constantly embedding large images.
◉ When HTML may be viewed offline, like HTML emails, embedding the images rather than using URLs can be advantageous.

This article illustrates how to build an embedded image in HTML using PL/SQL.

◉ Setup


The example code in this article requires a number of objects to be created.

First, create a directory object pointing to a physical directory on the database server that the "oracle" user has read/write permissions on. This will be used by the file system examples.

CONN / AS SYSDBA
CREATE OR REPLACE DIRECTORY images AS '/host/';
GRANT READ, WRITE ON DIRECTORY images TO test;

Next, create and populate a table to hold images for the database example. In this case I'm just using a single image called "site_logo.gif" that is loaded into the table from the 'IMAGES' directory created previously.

CONN test/test

CREATE TABLE images (
  id     NUMBER(10)    NOT NULL,
  name   VARCHAR2(50)  NOT NULL,
  image  BLOB          NOT NULL,
  CONSTRAINT images_pk PRIMARY KEY (id),
  CONSTRAINT images_uk UNIQUE (name)
);

CREATE SEQUENCE images_seq;

DECLARE
  l_dir    VARCHAR2(10) := 'IMAGES';
  l_file   VARCHAR2(20) := 'site_logo.gif';
  l_bfile  BFILE;
  l_blob   BLOB;

  l_dest_offset INTEGER := 1;
  l_src_offset  INTEGER := 1;
BEGIN
  INSERT INTO images (id, name, image)
  VALUES (images_seq.NEXTVAL, l_file, empty_blob())
  RETURN image INTO l_blob;

  l_bfile := BFILENAME(l_dir, l_file);
  DBMS_LOB.fileopen(l_bfile, DBMS_LOB.file_readonly);
  -- loadfromfile deprecated.
  -- DBMS_LOB.loadfromfile(l_blob, l_bfile, DBMS_LOB.getlength(l_bfile));
  DBMS_LOB.loadblobfromfile (
    dest_lob    => l_blob,
    src_bfile   => l_bfile,
    amount      => DBMS_LOB.lobmaxsize,
    dest_offset => l_dest_offset,
    src_offset  => l_src_offset);
  DBMS_LOB.fileclose(l_bfile);

  COMMIT;
END;
/

The HTTP example needs access to the internet (or some other HTTP server). If you are using Oracle 11g, you will need to make sure an appropriate ACL is present to allow network access from the database.

We need a way of checking that the HTML is actually generated correctly. To do that we will write it out to the file system using the following procedure.

CREATE OR REPLACE PROCEDURE create_file_from_clob (p_dir  IN VARCHAR2,
                                                   p_file IN VARCHAR2,
                                                   p_clob IN OUT NOCOPY CLOB)
AS
  l_file  UTL_FILE.file_type;
  l_step  PLS_INTEGER := 12000;
BEGIN
  l_file := UTL_FILE.fopen(p_dir, p_file, 'w', 32767);

  FOR i IN 0 .. TRUNC((DBMS_LOB.getlength(p_clob) - 1 )/l_step) LOOP
    UTL_FILE.put(l_file, DBMS_LOB.substr(p_clob, l_step, i * l_step + 1));
    UTL_FILE.fflush(l_file);
  END LOOP;
  UTL_FILE.fclose(l_file);
END;
/

The code in the first draft of the article was a little verbose. Anton Scheffer suggested using the following function to do the base64 encoding, so I've switched across to it where possible, or used a similar approach where a straight substitution doesn't fit.

CREATE OR REPLACE FUNCTION base64encode(p_blob IN BLOB)
  RETURN CLOB
IS
  l_clob CLOB;
  l_step PLS_INTEGER := 12000; -- make sure you set a multiple of 3 not higher than 24573
BEGIN
  FOR i IN 0 .. TRUNC((DBMS_LOB.getlength(p_blob) - 1 )/l_step) LOOP
    l_clob := l_clob || UTL_RAW.cast_to_varchar2(UTL_ENCODE.base64_encode(DBMS_LOB.substr(p_blob, l_step, i * l_step + 1)));
  END LOOP;
  RETURN l_clob;
END;
/

◉ Encoding Image Data


The original image could come from the file system, a BLOB column in a table in the database or an HTTP request from an app server. We shall deal with each of these below.

Although I am using stored procedures in these examples, in a real system the code would be placed in packages.

    ◉ Encoding Images from the File System

The following procedure uses the DBMS_LOB package to read chunks of data from a BFILE pointing to the image on the filesystem. The UTL_ENCODE and UTL_RAW packages are used to encode the data and convert it to a string suitable for inclusion into the HTML.

CREATE OR REPLACE PROCEDURE get_enc_img_from_fs (p_dir  IN VARCHAR2,
                                                 p_file IN VARCHAR2,
                                                 p_clob IN OUT NOCOPY CLOB)
AS
  l_bfile BFILE;
  l_step  PLS_INTEGER := 12000;
BEGIN
  l_bfile := BFILENAME(p_dir, p_file);
  DBMS_LOB.fileopen(l_bfile, DBMS_LOB.file_readonly);

  FOR i IN 0 .. TRUNC((DBMS_LOB.getlength(l_bfile) - 1 )/l_step) LOOP
    p_clob := p_clob || UTL_RAW.cast_to_varchar2(UTL_ENCODE.base64_encode(DBMS_LOB.substr(l_bfile, l_step, i * l_step + 1)));
  END LOOP;

  DBMS_LOB.fileclose(l_bfile);
END;
/

    ◉ Encoding Images from HTTP

The following procedure is similar to the previous one, except it reads data from an HTTP request, rather than from the filesystem.

CREATE OR REPLACE PROCEDURE get_enc_img_from_http (p_url  IN VARCHAR2,
                                                   p_clob IN OUT NOCOPY CLOB)
AS
  l_http_request   UTL_HTTP.req;
  l_http_response  UTL_HTTP.resp;
  l_raw            RAW(32767);
BEGIN
  l_http_request  := UTL_HTTP.begin_request(p_url);
  l_http_response := UTL_HTTP.get_response(l_http_request);

  BEGIN
    LOOP
      UTL_HTTP.read_raw(l_http_response, l_raw, 12000);
      p_clob := p_clob || UTL_RAW.cast_to_varchar2(UTL_ENCODE.base64_encode(l_raw));
    END LOOP;
  EXCEPTION
    WHEN UTL_HTTP.end_of_body THEN
      UTL_HTTP.end_response(l_http_response);
  END;
END;
/

Alternatively, you could achieve the same result using the HTTPURITYPE subtype of the URITYPE.

CREATE OR REPLACE PROCEDURE get_enc_img_from_http (p_url  IN VARCHAR2,
                                                   p_clob IN OUT NOCOPY CLOB)
AS
BEGIN
  p_clob := p_clob || base64encode(HTTPURITYPE.createuri(p_url).getblob());
END;
/

    ◉ Encoding Images from a BLOB Column

The following procedure uses an image stored in a BLOB column of a table.

CREATE OR REPLACE PROCEDURE get_enc_img_from_tab (p_image_name IN VARCHAR2,
                                                  p_clob       IN OUT NOCOPY CLOB)
AS
BEGIN
  SELECT p_clob || base64encode(image)
  INTO   p_clob
  FROM   images
  WHERE  name = p_image_name;
END;
/

◉ Test It


We now have three procedures to retrieve image data and encode it so it can be added to an HTML document. Next, we need to write the code to generate some HTML and call one of the procedures to embed the image data. The following code creates an HTML document in a temporary CLOB. Examples of all three procedure calls are present, so try each of them out in turn.

Once the HTML is in the temporary CLOB it could be published as an email, a web page (using the embedded PL/SQL gateway or a mod_plsql enabled application server) or written to the file system. For simplicity I will just write it to the filesystem, but examples of the other technologies are available from the links at the end of the article.

DECLARE
  l_clob  CLOB;
BEGIN
  DBMS_LOB.createtemporary(l_clob, FALSE);

  -- Build the start of the HTML document, including the start of the IMG tag
  -- and place it in a CLOB.
  l_clob := '<html>
   <head>
     <title>Test HTML with Embedded Image</title>
   </head>
   <body>
     <h1>Test HTML with Embedded Image</h1>
     <p>And here it is:</p>
     <img src="data:image/gif;base64,';

  get_enc_img_from_fs (p_dir  => 'IMAGES',
                       p_file => 'site_logo.gif',
                       p_clob => l_clob);

  --get_enc_img_from_http (p_url  => 'http://oracle-base.com/images/site_logo.gif',
  --                       p_clob => l_clob);

  --get_enc_img_from_tab (p_image_name => 'site_logo.gif',
  --                      p_clob       => l_clob);

  -- Close off the IMG tag and complete the HTML document.
  l_clob := l_clob || '" alt="Site Logo" />
  <p>The end.</p>
  </body>
  </html>';

  -- The CLOB now contains the complete HTML with the embedded image, so do something with it.
  -- In this case I'm going to write it to the file system.
  create_file_from_clob (p_dir  => 'IMAGES',
                         p_file => 'EmbeddedImageTest.htm',
                         p_clob => l_clob);


  DBMS_LOB.freetemporary(l_clob);
EXCEPTION
  WHEN OTHERS THEN
    DBMS_LOB.freetemporary(l_clob);
    RAISE;
END;
/

Wednesday, June 24, 2020

DBMS - File Structure

Relative data and information is stored collectively in file formats. A file is a sequence of records stored in binary format. A disk drive is formatted into several blocks that can store records. File records are mapped onto those disk blocks.

File Organization


File Organization defines how file records are mapped onto disk blocks. We have four types of File Organization to organize file records −

Heap File Organization


When a file is created using Heap File Organization, the Operating System allocates memory area to that file without any further accounting details. File records can be placed anywhere in that memory area. It is the responsibility of the software to manage the records. Heap File does not support any ordering, sequencing, or indexing on its own.

Sequential File Organization


Every file record contains a data field (attribute) to uniquely identify that record. In sequential file organization, records are placed in the file in some sequential order based on the unique key field or search key. Practically, it is not possible to store all the records sequentially in physical form.

Hash File Organization


Hash File Organization uses Hash function computation on some fields of the records. The output of the hash function determines the location of disk block where the records are to be placed.
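
As a minimal illustration of the idea (not how a DBMS implements it internally, and the table is hypothetical), Oracle's ORA_HASH function maps a key value to one of a fixed number of buckets, which here stand in for disk blocks:

-- ORA_HASH(key, 7) returns a bucket number between 0 and 7.
SELECT employee_id,
       ORA_HASH(employee_id, 7) AS block_no
FROM   employees;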

Clustered File Organization


Clustered file organization is not considered good for large databases. In this mechanism, related records from one or more relations are kept in the same disk block, that is, the ordering of records is not based on primary key or search key.

File Operations


Operations on database files can be broadly classified into two categories −

◉ Update Operations
◉ Retrieval Operations

Update operations change the data values by insertion, deletion, or update. Retrieval operations, on the other hand, do not alter the data but retrieve them after optional conditional filtering. In both types of operations, selection plays a significant role. Other than creation and deletion of a file, there could be several operations, which can be done on files.

◉ Open − A file can be opened in one of the two modes, read mode or write mode. In read mode, the operating system does not allow anyone to alter data. In other words, data is read only. Files opened in read mode can be shared among several entities. Write mode allows data modification. Files opened in write mode can be read but cannot be shared.

◉ Locate − Every file has a file pointer, which tells the current position where the data is to be read or written. This pointer can be adjusted accordingly. Using find (seek) operation, it can be moved forward or backward.

◉ Read − By default, when files are opened in read mode, the file pointer points to the beginning of the file. There are options where the user can tell the operating system where to locate the file pointer at the time of opening a file. The very next data to the file pointer is read.

◉ Write − A user can choose to open a file in write mode, which enables them to edit its contents. This can be deletion, insertion, or modification. The file pointer can be located at the time of opening or can be dynamically changed if the operating system allows it.

◉ Close − This is the most important operation from the operating system’s point of view. When a request to close a file is generated, the operating system

     ◉ removes all the locks (if in shared mode),
     ◉ saves the data (if altered) to the secondary storage media, and
     ◉ releases all the buffers and file handlers associated with the file.

The organization of data inside a file plays a major role here. The process of locating the file pointer at a desired record inside a file varies based on whether the records are arranged sequentially or clustered.

Monday, June 22, 2020

DBMS - Data Backup

Loss of Volatile Storage


A volatile storage like RAM stores all the active logs, disk buffers, and related data. In addition, it stores all the transactions that are being currently executed. What happens if such a volatile storage crashes abruptly? It would obviously take away all the logs and active copies of the database. It makes recovery almost impossible, as everything that is required to recover the data is lost.

The following techniques may be adopted in case of loss of volatile storage −

◉ We can have checkpoints at multiple stages so as to save the contents of the database periodically.

◉ A state of active database in the volatile memory can be periodically dumped onto a stable storage, which may also contain logs and active transactions and buffer blocks.

◉ A <dump> marker can be written to the log file whenever the database contents are dumped from volatile memory to stable storage.

Recovery

◉ When the system recovers from a failure, it can restore the latest dump.

◉ It can maintain a redo-list and an undo-list as checkpoints.

◉ It can recover the system by consulting undo-redo lists to restore the state of all transactions up to the last checkpoint.

Database Backup & Recovery from Catastrophic Failure


A catastrophic failure is one where a stable, secondary storage device gets corrupt. With the storage device, all the valuable data that is stored inside is lost. We have two different strategies to recover data from such a catastrophic failure −

◉ Remote backup − Here a backup copy of the database is stored at a remote location from where it can be restored in case of a catastrophe.

◉ Alternatively, database backups can be taken on magnetic tapes and stored at a safer place. This backup can later be transferred onto a freshly installed database to bring it to the point of backup.

Large, mature databases are too bulky to be backed up frequently. In such cases, we have techniques that can restore a database just from its logs. So all we need to do is take a backup of all the logs at frequent intervals of time. The database can be backed up once a week, and the logs, being very small, can be backed up every day or as frequently as possible.
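
In Oracle, this weekly-database/daily-logs strategy might be sketched with Recovery Manager (RMAN) commands such as the following; the schedule itself is an assumption, with the first command run weekly and the second daily:

-- Weekly full database backup.
RMAN> BACKUP DATABASE;

-- Daily backup of the archived redo logs, deleting them from disk once backed up.
RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;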

Remote Backup


Remote backup provides a sense of security in case the primary location where the database is located gets destroyed. Remote backup can be offline or real-time or online. In case it is offline, it is maintained manually.

Online backup systems are more real-time and lifesavers for database administrators and investors. An online backup system is a mechanism where every bit of the real-time data is backed up simultaneously at two distant places. One of them is directly connected to the system and the other one is kept at a remote place as backup.

As soon as the primary database storage fails, the backup system senses the failure and switches the user system to the remote storage. Sometimes this is so instant that the users can’t even realize a failure.

Saturday, June 20, 2020

UTL_CALL_STACK : Get Detailed Information About the Currently Running Subprogram in Oracle Database 12c

Oracle database 12c introduced the UTL_CALL_STACK package to allow programmatic access to the call stack and error stack, giving much greater flexibility for debugging and error handling of PL/SQL code. This is only a replacement for the existing functionality if you need the extra level of control. The existing functionality in the DBMS_UTILITY package is still available and has not been deprecated.

◉ Call Stack


The call stack allows you to identify exactly where you are in the currently running code, which includes information about nesting of subprogram calls. In previous releases this information was displayed using the DBMS_UTILITY.FORMAT_CALL_STACK function, as shown below.

-- Procedure to display the call stack.
CREATE OR REPLACE PROCEDURE display_call_stack AS
BEGIN
  DBMS_OUTPUT.put_line('***** Call Stack Start *****');
  DBMS_OUTPUT.put_line(DBMS_UTILITY.format_call_stack);
  DBMS_OUTPUT.put_line('***** Call Stack End *****');
END;
/

-- Test package to show a nested call.
CREATE OR REPLACE PACKAGE test_pkg AS
  PROCEDURE proc_1;
  PROCEDURE proc_2;
  PROCEDURE proc_3;
END;
/

CREATE OR REPLACE PACKAGE BODY test_pkg AS

  PROCEDURE proc_1 AS
  BEGIN
    proc_2;
  END;

  PROCEDURE proc_2 AS
  BEGIN
    proc_3;
  END;

  PROCEDURE proc_3 AS
  BEGIN
    display_call_stack;
  END;

END;
/

-- Run the test.
SET SERVEROUTPUT ON
EXEC test_pkg.proc_1;
***** Call Stack Start *****
----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0xb6d4ac18         4  procedure TEST.DISPLAY_CALL_STACK
0xb6d14298        15  package body TEST.TEST_PKG
0xb6d14298        10  package body TEST.TEST_PKG
0xb6d14298         5  package body TEST.TEST_PKG
0xb99fe7c8         1  anonymous block

***** Call Stack End *****

PL/SQL procedure successfully completed.

SQL>

As you can see, the output from the DBMS_UTILITY.FORMAT_CALL_STACK function is rather ugly and we have no control over it, other than to manually parse it.

The UTL_CALL_STACK package contains APIs to display the contents of the call stack in a more readable form.

◉ DYNAMIC_DEPTH : The number of subprograms on the call stack, starting at the current position in the call stack, to the initial call.

◉ LEXICAL_DEPTH : Lexical depth of the subprogram within the current call.

◉ UNIT_LINE : Line number in the subprogram of the current call.

◉ SUBPROGRAM : Subprogram name associated with the current call.

◉ CONCATENATE_SUBPROGRAM : Returns the UNIT.SUBPROGRAM.LOCAL_SUBPROGRAM form of the subprogram name.

◉ OWNER : The owner of the subprogram associated with the current call.

◉ CURRENT_EDITION : The edition of the subprogram associated with the current call.

The following example recreates the DISPLAY_CALL_STACK procedure to use the UTL_CALL_STACK package, then re-runs the test.

-- Procedure to display the call stack.
CREATE OR REPLACE PROCEDURE display_call_stack AS
  l_depth PLS_INTEGER;
BEGIN
  l_depth := UTL_CALL_STACK.dynamic_depth;

  DBMS_OUTPUT.put_line('***** Call Stack Start *****');

  DBMS_OUTPUT.put_line('Depth     Lexical   Line      Owner     Edition   Name');
  DBMS_OUTPUT.put_line('.         Depth     Number');
  DBMS_OUTPUT.put_line('--------- --------- --------- --------- --------- --------------------');

  FOR i IN 1 .. l_depth LOOP
    DBMS_OUTPUT.put_line(
      RPAD(i, 10) ||
      RPAD(UTL_CALL_STACK.lexical_depth(i), 10) ||
      RPAD(TO_CHAR(UTL_CALL_STACK.unit_line(i),'99'), 10) ||
      RPAD(NVL(UTL_CALL_STACK.owner(i),' '), 10) ||
      RPAD(NVL(UTL_CALL_STACK.current_edition(i),' '), 10) ||
      UTL_CALL_STACK.concatenate_subprogram(UTL_CALL_STACK.subprogram(i))
    );
  END LOOP;

  DBMS_OUTPUT.put_line('***** Call Stack End *****');
END;
/

-- Run the test.
SET SERVEROUTPUT ON
EXEC test_pkg.proc_1;
***** Call Stack Start *****

Depth     Lexical   Line      Owner     Edition   Name
.         Depth     Number
--------- --------- --------- --------- --------- --------------------
1         0          13       TEST                DISPLAY_CALL_STACK
2         1          15       TEST                TEST_PKG.PROC_3
3         1          10       TEST                TEST_PKG.PROC_2
4         1           5       TEST                TEST_PKG.PROC_1
5         0           1                           __anonymous_block

***** Call Stack End *****

PL/SQL procedure successfully completed.

SQL>

Starting with the call to DISPLAY_CALL_STACK, we can work back through all the nested calls to the original anonymous block. The output includes the procedure names in the package as well as the associated line numbers of the calls. If we wanted to, we could have displayed the output in reverse order, starting at the top-level call.

-- Procedure to display the call stack.
CREATE OR REPLACE PROCEDURE display_call_stack AS
  l_depth PLS_INTEGER;
BEGIN
  l_depth := UTL_CALL_STACK.dynamic_depth;

  DBMS_OUTPUT.put_line('***** Call Stack Start *****');

  DBMS_OUTPUT.put_line('Depth     Lexical   Line      Owner     Edition   Name');
  DBMS_OUTPUT.put_line('.         Depth     Number');
  DBMS_OUTPUT.put_line('--------- --------- --------- --------- --------- --------------------');

  FOR i IN REVERSE 1 .. l_depth LOOP
    DBMS_OUTPUT.put_line(
      RPAD(i, 10) ||
      RPAD(UTL_CALL_STACK.lexical_depth(i), 10) ||
      RPAD(TO_CHAR(UTL_CALL_STACK.unit_line(i),'99'), 10) ||
      RPAD(NVL(UTL_CALL_STACK.owner(i),' '), 10) ||
      RPAD(NVL(UTL_CALL_STACK.current_edition(i),' '), 10) ||
      UTL_CALL_STACK.concatenate_subprogram(UTL_CALL_STACK.subprogram(i))
    );
  END LOOP;

  DBMS_OUTPUT.put_line('***** Call Stack End *****');
END;
/


-- Run the test.
SET SERVEROUTPUT ON
EXEC test_pkg.proc_1;
***** Call Stack Start *****

Depth     Lexical   Line      Owner     Edition   Name
.         Depth     Number
--------- --------- --------- --------- --------- --------------------
5         0           1                           __anonymous_block
4         1           5       TEST                TEST_PKG.PROC_1
3         1          10       TEST                TEST_PKG.PROC_2
2         1          15       TEST                TEST_PKG.PROC_3
1         0          13       TEST                DISPLAY_CALL_STACK

***** Call Stack End *****

PL/SQL procedure successfully completed.

SQL>

You now have programmatic control to interrogate and display the call stack if you need to.

◉ Error Stack


Exceptions are often handled by exception handlers and re-raised. In some cases, exceptions in nested calls result in different errors being produced by the error handler of the calling routine. The error stack allows you to display chains of errors, making it easier to determine the real cause of the issue. In previous releases this information was displayed using the DBMS_UTILITY.FORMAT_ERROR_STACK function, as shown below.

-- Procedure to display the call stack.
CREATE OR REPLACE PROCEDURE display_error_stack AS
BEGIN
  DBMS_OUTPUT.put_line('***** Error Stack Start *****');
  DBMS_OUTPUT.put_line(DBMS_UTILITY.format_error_stack);
  DBMS_OUTPUT.put_line('***** Error Stack End *****');
END;
/

-- Test package to show a nested call.
CREATE OR REPLACE PACKAGE test_pkg AS
  PROCEDURE proc_1;
  PROCEDURE proc_2;
  PROCEDURE proc_3;
END;
/

CREATE OR REPLACE PACKAGE BODY test_pkg AS

  PROCEDURE proc_1 AS
  BEGIN
    proc_2;
  EXCEPTION
    WHEN OTHERS THEN
      display_error_stack;
  END;

  PROCEDURE proc_2 AS
  BEGIN
    proc_3;
  EXCEPTION
    WHEN OTHERS THEN
      RAISE DUP_VAL_ON_INDEX;
  END;

  PROCEDURE proc_3 AS
  BEGIN
    RAISE NO_DATA_FOUND;
  EXCEPTION
    WHEN OTHERS THEN
      RAISE TOO_MANY_ROWS;
  END;

END;
/


-- Run the test.
SET SERVEROUTPUT ON
EXEC test_pkg.proc_1;
***** Error Stack Start *****
ORA-00001: unique constraint (.) violated
ORA-06512: at "TEST.TEST_PKG", line 16
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at "TEST.TEST_PKG", line 24
ORA-01403: no data found

***** Error Stack End *****

PL/SQL procedure successfully completed.

SQL>

The output from DBMS_UTILITY.FORMAT_ERROR_STACK function is fine, but there are occasional formatting errors and we can't order the output to suit our taste.

The UTL_CALL_STACK package contains APIs to display the contents of the error stack.

◉ ERROR_DEPTH : The number of errors on the error stack.
◉ ERROR_MSG : The error message associated with the current line in the error stack.
◉ ERROR_NUMBER : The error number associated with the current line in the error stack.

The following example recreates the DISPLAY_ERROR_STACK procedure to use the UTL_CALL_STACK package, then re-runs the test.

-- Procedure to display the call stack.
CREATE OR REPLACE PROCEDURE display_error_stack AS
  l_depth PLS_INTEGER;
BEGIN
  l_depth := UTL_CALL_STACK.error_depth;

  DBMS_OUTPUT.put_line('***** Error Stack Start *****');

  DBMS_OUTPUT.put_line('Depth     Error     Error');
  DBMS_OUTPUT.put_line('.         Code      Message');
  DBMS_OUTPUT.put_line('--------- --------- --------------------');

  FOR i IN 1 .. l_depth LOOP
    DBMS_OUTPUT.put_line(
      RPAD(i, 10) ||
      RPAD('ORA-' || LPAD(UTL_CALL_STACK.error_number(i), 5, '0'), 10) ||
      UTL_CALL_STACK.error_msg(i)
    );
  END LOOP;

  DBMS_OUTPUT.put_line('***** Error Stack End *****');
END;
/


-- Run the test.
SET SERVEROUTPUT ON
EXEC test_pkg.proc_1;
***** Error Stack Start *****

Depth     Error     Error
.         Code      Message
--------- --------- --------------------
1         ORA-00001 unique constraint (.) violated

2         ORA-06512 at "TEST.TEST_PKG", line 16

3         ORA-01422 exact fetch returns more than requested number of rows

4         ORA-06512 at "TEST.TEST_PKG", line 24

5         ORA-01403 no data found

***** Error Stack End *****

PL/SQL procedure successfully completed.

SQL>

In the previous example, the display order matches the DBMS_UTILITY.FORMAT_ERROR_STACK output, which reports last to first in the chain. We could easily reverse it to display first to last.

-- Procedure to display the call stack.
CREATE OR REPLACE PROCEDURE display_error_stack AS
  l_depth PLS_INTEGER;
BEGIN
  l_depth := UTL_CALL_STACK.error_depth;

  DBMS_OUTPUT.put_line('***** Error Stack Start *****');

  DBMS_OUTPUT.put_line('Depth     Error     Error');
  DBMS_OUTPUT.put_line('.         Code      Message');
  DBMS_OUTPUT.put_line('--------- --------- --------------------');

  FOR i IN REVERSE 1 .. l_depth LOOP
    DBMS_OUTPUT.put_line(
      RPAD(i, 10) ||
      RPAD('ORA-' || LPAD(UTL_CALL_STACK.error_number(i), 5, '0'), 10) ||
      UTL_CALL_STACK.error_msg(i)
    );
  END LOOP;

  DBMS_OUTPUT.put_line('***** Error Stack End *****');
END;
/


-- Run the test.
SET SERVEROUTPUT ON
EXEC test_pkg.proc_1;
***** Error Stack Start *****

Depth     Error     Error
.         Code      Message
--------- --------- --------------------
5         ORA-01403 no data found

4         ORA-06512 at "TEST.TEST_PKG", line 24

3         ORA-01422 exact fetch returns more than requested number of rows

2         ORA-06512 at "TEST.TEST_PKG", line 16

1         ORA-00001 unique constraint (.) violated

***** Error Stack End *****

PL/SQL procedure successfully completed.

SQL>

◉ Backtrace


Backtrace shows a walk through the call stack from the line where the exception was raised, to the last call before the exception was trapped. This shows the propagation of the exception, which allows you to identify the actual line that caused the problem. In previous releases this information was displayed using the DBMS_UTILITY.FORMAT_ERROR_BACKTRACE function, as shown below.

-- Procedure to display the call stack.
CREATE OR REPLACE PROCEDURE display_backtrace AS
BEGIN
  DBMS_OUTPUT.put_line('***** Backtrace Start *****');
  DBMS_OUTPUT.put_line(DBMS_UTILITY.format_error_backtrace);
  DBMS_OUTPUT.put_line('***** Backtrace End *****');
END;
/

-- Test package to show a nested call.
CREATE OR REPLACE PACKAGE test_pkg AS
  PROCEDURE proc_1;
  PROCEDURE proc_2;
  PROCEDURE proc_3;
END;
/

CREATE OR REPLACE PACKAGE BODY test_pkg AS

  PROCEDURE proc_1 AS
  BEGIN
    proc_2;
  EXCEPTION
    WHEN OTHERS THEN
      display_backtrace;
  END;

  PROCEDURE proc_2 AS
  BEGIN
    proc_3;
  END;

  PROCEDURE proc_3 AS
  BEGIN
    RAISE NO_DATA_FOUND;
  END;

END;
/


-- Run the test.
SET SERVEROUTPUT ON
EXEC test_pkg.proc_1;
***** Backtrace Start *****
ORA-06512: at "TEST.TEST_PKG", line 18
ORA-06512: at "TEST.TEST_PKG", line 13
ORA-06512: at "TEST.TEST_PKG", line 5

***** Backtrace End *****

PL/SQL procedure successfully completed.

SQL>

With the exception of some minor formatting issues, this output is fine and will probably be OK for most situations.

The UTL_CALL_STACK package contains APIs to display the backtrace.

◉ BACKTRACE_DEPTH : The number of backtrace messages on the error stack.
◉ BACKTRACE_LINE : Line number in the subprogram of the current call.
◉ BACKTRACE_UNIT : Subprogram name associated with the current call.

The following example recreates the DISPLAY_BACKTRACE procedure to use the UTL_CALL_STACK package, then re-runs the test.

-- Procedure to display the call stack.
CREATE OR REPLACE PROCEDURE display_backtrace AS
  l_depth PLS_INTEGER;
BEGIN
  l_depth := UTL_CALL_STACK.backtrace_depth;

  DBMS_OUTPUT.put_line('***** Backtrace Start *****');

  DBMS_OUTPUT.put_line('Depth     BTrace     BTrace');
  DBMS_OUTPUT.put_line('.         Line       Unit');
  DBMS_OUTPUT.put_line('--------- --------- --------------------');

  FOR i IN 1 .. l_depth LOOP
    DBMS_OUTPUT.put_line(
      RPAD(i, 10) ||
      RPAD(TO_CHAR(UTL_CALL_STACK.backtrace_line(i),'99'), 10) ||
      UTL_CALL_STACK.backtrace_unit(i)
    );
  END LOOP;

  DBMS_OUTPUT.put_line('***** Backtrace End *****');
END;
/

-- Run the test.
SET SERVEROUTPUT ON
EXEC test_pkg.proc_1;
***** Backtrace Start *****

Depth     BTrace     BTrace
.         Line       Unit
--------- --------- --------------------
1           5       TEST.TEST_PKG
2          13       TEST.TEST_PKG
3          18       TEST.TEST_PKG
***** Backtrace End *****

PL/SQL procedure successfully completed.

SQL>

There is very little you can do with the backtrace, other than reordering it. The "ORA-06512" error is not included, but this is implied because it is a backtrace message. The following example shows the backtrace in reverse order.

-- Procedure to display the call stack.
CREATE OR REPLACE PROCEDURE display_backtrace AS
  l_depth PLS_INTEGER;
BEGIN
  l_depth := UTL_CALL_STACK.backtrace_depth;

  DBMS_OUTPUT.put_line('***** Backtrace Start *****');

  DBMS_OUTPUT.put_line('Depth     BTrace     BTrace');
  DBMS_OUTPUT.put_line('.         Line       Unit');
  DBMS_OUTPUT.put_line('--------- --------- --------------------');

  FOR i IN REVERSE 1 .. l_depth LOOP
    DBMS_OUTPUT.put_line(
      RPAD(i, 10) ||
      RPAD(TO_CHAR(UTL_CALL_STACK.backtrace_line(i),'99'), 10) ||
      UTL_CALL_STACK.backtrace_unit(i)
    );
  END LOOP;

  DBMS_OUTPUT.put_line('***** Backtrace End *****');
END;
/

-- Run the test.
SET SERVEROUTPUT ON
EXEC test_pkg.proc_1;
***** Backtrace Start *****

Depth     BTrace     BTrace
.         Line       Unit
--------- --------- --------------------
3          18       TEST.TEST_PKG
2          13       TEST.TEST_PKG
1           5       TEST.TEST_PKG

***** Backtrace End *****

PL/SQL procedure successfully completed.

SQL>

Friday, June 19, 2020

SQL - Database Tuning

It takes time to become a database expert or an expert database administrator. It all comes with a lot of experience in various database designs and good training.

But the following list may help beginners achieve good database performance −

◉ Use the 3NF database design explained in the RDBMS Concepts chapter of this tutorial.

◉ Avoid number-to-character conversions because numbers and characters compare differently and lead to performance downgrade.

◉ While using SELECT statement, only fetch whatever information is required and avoid using * in your SELECT queries because it would load the system unnecessarily.

◉ Create your indexes carefully on all the tables where you have frequent search operations. Avoid indexes on tables where you have few search operations and many insert and update operations.

◉ A full-table scan occurs when the columns in the WHERE clause do not have an index associated with them. You can avoid a full-table scan by creating an index on columns that are used as conditions in the WHERE clause of an SQL statement (see the sketch after this list).

◉ Be very careful of equality operators with real numbers and date/time values. Both of these can have small differences that are not obvious to the eye but that make an exact match impossible, thus preventing your queries from ever returning rows.

◉ Use pattern matching judiciously. LIKE COL% is a valid WHERE condition, reducing the returned set to only those records with data starting with the string COL. However, COL%Y does not further reduce the returned results set since %Y cannot be effectively evaluated. The effort to do the evaluation is too large to be considered. In this case, the COL% is used, but the %Y is thrown away. For the same reason, a leading wildcard %COL effectively prevents the entire filter from being used.

◉ Fine-tune your SQL queries by examining the structure of the queries (and subqueries) and the SQL syntax, to discover whether you have designed your tables to support fast data manipulation and written the query in an optimal manner, allowing your DBMS to manipulate the data efficiently.

◉ For queries that are executed on a regular basis, try to use procedures. A procedure is a potentially large group of SQL statements. Procedures are compiled by the database engine and then executed. Unlike an SQL statement, the database engine need not optimize the procedure before it is executed.

◉ Avoid using the logical operator OR in a query if possible. OR inevitably slows down nearly any query against a table of substantial size.

◉ You can optimize bulk data loads by dropping indexes. Imagine the history table with many thousands of rows. That history table is also likely to have one or more indexes. When you think of an index, you normally think of faster table access, but in the case of batch loads, you can benefit by dropping the index(es).

◉ When performing batch transactions, perform a COMMIT after a fair number of records have been created, instead of committing after every single record.

◉ Plan to defragment the database on a regular basis, even if doing so means developing a weekly routine.
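
As a minimal sketch of the indexing advice above (the table, column, and index names are hypothetical), an index on a column used in WHERE clauses lets the optimizer avoid a full-table scan:

-- The column is filtered on frequently, so an index is worthwhile.
CREATE INDEX employees_last_name_idx ON employees (last_name);

-- This predicate can now use the index instead of scanning the whole table.
SELECT employee_id, first_name, last_name
FROM   employees
WHERE  last_name = 'Doe';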

Built-In Tuning Tools


Oracle has many tools for managing SQL statement performance, but two of them are very popular. These two tools are −

◉ Explain plan − identifies the access path that will be taken when the SQL statement is executed.

◉ tkprof − measures the performance by time elapsed during each phase of SQL statement processing.

If you want to simply measure the elapsed time of a query in Oracle, you can use the SQL*Plus command SET TIMING ON.
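
A minimal sketch of using Explain plan and timing from SQL*Plus (the query and table are hypothetical; tkprof itself is run from the command line against a trace file):

SET TIMING ON

EXPLAIN PLAN FOR
  SELECT * FROM employees WHERE last_name = 'Doe';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);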

Wednesday, June 17, 2020

DBMS - Data Recovery

Crash Recovery


DBMS is a highly complex system with hundreds of transactions being executed every second. The durability and robustness of a DBMS depends on its complex architecture and its underlying hardware and system software. If it fails or crashes amid transactions, it is expected that the system would follow some sort of algorithm or techniques to recover lost data.

Failure Classification


To see where the problem has occurred, we generalize a failure into various categories, as follows −

Transaction failure

A transaction has to abort when it fails to execute or when it reaches a point from where it can’t go any further. This is called transaction failure, where only a few transactions or processes are affected.

Reasons for a transaction failure could be −

◉ Logical errors − Where a transaction cannot complete because it has some code error or any internal error condition.

◉ System errors − Where the database system itself terminates an active transaction because the DBMS is not able to execute it, or it has to stop because of some system condition. For example, in case of deadlock or resource unavailability, the system aborts an active transaction.

System Crash

There are problems − external to the system − that may cause the system to stop abruptly and crash. For example, an interruption in the power supply may cause the underlying hardware or software to fail.

Examples may include operating system errors.

Disk Failure

In the early days of technology evolution, it was a common problem that hard-disk drives or storage drives failed frequently.

Disk failures include the formation of bad sectors, inaccessibility of the disk, a disk head crash, or any other failure that destroys all or part of the disk storage.

Storage Structure


We have already described the storage system. In brief, the storage structure can be divided into two categories −

◉ Volatile storage − As the name suggests, a volatile storage cannot survive system crashes. Volatile storage devices are placed very close to the CPU; normally they are embedded onto the chipset itself. For example, main memory and cache memory are examples of volatile storage. They are fast but can store only a small amount of information.

◉ Non-volatile storage − These memories are made to survive system crashes. They are huge in data storage capacity, but slower in accessibility. Examples may include hard-disks, magnetic tapes, flash memory, and non-volatile (battery backed up) RAM.

Recovery and Atomicity


When a system crashes, it may have several transactions being executed and various files opened for them to modify the data items. Transactions are made of various operations, which are atomic in nature. But according to ACID properties of DBMS, atomicity of transactions as a whole must be maintained, that is, either all the operations are executed or none.

When a DBMS recovers from a crash, it should maintain the following −

◉ It should check the states of all the transactions, which were being executed.

◉ A transaction may be in the middle of some operation; the DBMS must ensure the atomicity of the transaction in this case.

◉ It should check whether the transaction can be completed now or it needs to be rolled back.

◉ No transactions would be allowed to leave the DBMS in an inconsistent state.

There are two types of techniques, which can help a DBMS in recovering as well as maintaining the atomicity of a transaction −

◉ Maintaining the logs of each transaction, and writing them onto some stable storage before actually modifying the database.

◉ Maintaining shadow paging, where the changes are done on a volatile memory, and later, the actual database is updated.

Log-based Recovery


Log is a sequence of records, which maintains the records of actions performed by a transaction. It is important that the logs are written prior to the actual modification and stored on a stable storage media, which is failsafe.

Log-based recovery works as follows −

◉ The log file is kept on a stable storage media.

◉ When a transaction enters the system and starts execution, it writes a log about it.

<Tn, Start>

◉ When the transaction modifies an item X, it writes a log record as follows −

<Tn, X, V1, V2>

It reads: Tn has changed the value of X from V1 to V2.

◉ When the transaction finishes, it logs −

<Tn, commit>

The database can be modified using two approaches −

◉ Deferred database modification − All logs are written on to the stable storage and the database is updated when a transaction commits.

◉ Immediate database modification − Each log record is followed by the actual database modification. That is, the database is modified immediately after every operation (see the example below).
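
For example (a hypothetical transaction), if T1 changes the value of X from 100 to 200 and then commits, the log contains −

<T1, Start>
<T1, X, 100, 200>
<T1, commit>

With deferred modification, the change to X is applied to the database only after the <T1, commit> record reaches stable storage; with immediate modification, X is updated right after the <T1, X, 100, 200> record is written.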

Recovery with Concurrent Transactions


When more than one transaction are being executed in parallel, the logs are interleaved. At the time of recovery, it would become hard for the recovery system to backtrack all logs, and then start recovering. To ease this situation, most modern DBMS use the concept of 'checkpoints'.

Checkpoint

Keeping and maintaining logs in real time and in real environment may fill out all the memory space available in the system. As time passes, the log file may grow too big to be handled at all. Checkpoint is a mechanism where all the previous logs are removed from the system and stored permanently in a storage disk. Checkpoint declares a point before which the DBMS was in consistent state, and all the transactions were committed.

Recovery

When a system with concurrent transactions crashes and recovers, it behaves in the following manner −

◉ The recovery system reads the logs backwards from the end to the last checkpoint.

◉ It maintains two lists, an undo-list and a redo-list.

◉ If the recovery system sees a log with <Tn, Start> and <Tn, Commit> or just <Tn, Commit>, it puts the transaction in the redo-list.

◉ If the recovery system sees a log with <Tn, Start> but no commit or abort log found, it puts the transaction in undo-list.

All the transactions in the undo-list are then undone and their logs are removed. For the transactions in the redo-list, their previous logs are removed and the transactions are then redone before their logs are saved.

Tuesday, June 16, 2020

Data Guard Physical Standby Setup Using the Data Guard Broker in Oracle Database 19c

Data Guard is the name for Oracle's standby database solution, used for disaster recovery and high availability. This article gives an example of the setup and administration of Data Guard using the Data Guard Broker.

◉ TL;DR


If you already know about Data Guard and want to quickly set up a demo environment using VirtualBox and Vagrant you can follow the instructions in my GitHub repository.

Oracle Database 19c Data Guard on Oracle Linux 7 (OL7)
Oracle Database 19c Data Guard on Oracle Linux 8 (OL8)

◉ Assumptions


◉ You have two servers (physical or VMs) with an operating system and Oracle installed on them. In this case I've used Oracle Linux 7.6 and Oracle Database 19c.

◉ The primary server (ol7-19-dg1.localdomain) has a running instance.

◉ The standby server (ol7-19-dg2.localdomain) has a software-only installation.

◉ There is nothing blocking communication between the machines over the listener ports. If you are using the default port 1521, node 1 should be able to communicate with node 2 on 1521 and node 2 should be able to communicate with node 1 on 1521. Check that network and local firewalls are not blocking the communication.

◉ Primary Server Setup

     ◉ Logging

Check that the primary database is in archivelog mode.

SELECT log_mode FROM v$database;

LOG_MODE
------------
NOARCHIVELOG

SQL>

If it is in noarchivelog mode, switch it to archivelog mode.

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

Enable forced logging by issuing the following command.

ALTER DATABASE FORCE LOGGING;
-- Make sure at least one logfile is present.
ALTER SYSTEM SWITCH LOGFILE;

Create standby redo logs on the primary database (in case of switchovers). The standby redo logs should be at least as big as the largest online redo log and there should be one extra group per thread compared to the online redo logs. In my case, the following standby redo logs must be created on both servers.

-- If Oracle Managed Files (OMF) is used.
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 10 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 11 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 12 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 13 SIZE 50M;

-- If Oracle Managed Files (OMF) is not used.
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 10 ('/u01/oradata/cdb1/standby_redo01.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 11 ('/u01/oradata/cdb1/standby_redo02.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 12 ('/u01/oradata/cdb1/standby_redo03.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 13 ('/u01/oradata/cdb1/standby_redo04.log') SIZE 50M;

If you want to use flashback database, enable it on the primary now, so it will be enabled on the standby also. It's very useful as you will see below.

ALTER DATABASE FLASHBACK ON;

     ◉ Initialization Parameters

Check the setting for the DB_NAME and DB_UNIQUE_NAME parameters. In this case they are both set to "cdb1" on the primary database.

SQL> show parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      cdb1

SQL> show parameter db_unique_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      cdb1

SQL>

The DB_NAME of the standby database will be the same as that of the primary, but it must have a different DB_UNIQUE_NAME value. For this example, the standby database will have the value "cdb1_stby".

Make sure the STANDBY_FILE_MANAGEMENT parameter is set.

ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;

     ◉ Service Setup


Entries for the primary and standby databases are needed in the "$ORACLE_HOME/network/admin/tnsnames.ora" files on both servers. You can create these using the Network Configuration Utility (netca) or manually. The following entries were used during this setup. Notice the use of the SID, rather than the SERVICE_NAME in the entries. This is important as the broker will need to connect to the databases when they are down, so the services will not be present.

cdb1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ol7-19-dg1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = cdb1)
    )
  )

cdb1_stby =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ol7-19-dg2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = cdb1)
    )
  )

The "$ORACLE_HOME/network/admin/listener.ora" file on the primary server contains the following configuration.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ol7-19-dg1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = cdb1_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/19.0.0/db_1)
      (SID_NAME = cdb1)
      (ENVS="TNS_ADMIN=/u01/app/oracle/product/19.0.0/db_1/network/admin")
    )
  )

ADR_BASE_LISTENER = /u01/app/oracle

The "$ORACLE_HOME/network/admin/listener.ora" file on the standby server contains the following configuration. Since the broker will need to connect to the database when it's down, we can't rely on auto-registration with the listener, hence the explicit entry for the database.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ol7-19-dg2)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = cdb1_stby_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/19.0.0/db_1)
      (SID_NAME = cdb1)
      (ENVS="TNS_ADMIN=/u01/app/oracle/product/19.0.0/db_1/network/admin")
    )
  )

ADR_BASE_LISTENER = /u01/app/oracle

Once the listener.ora changes are in place, restart the listener on both servers.

lsnrctl stop
lsnrctl start

◉ Standby Server Setup

     ◉ Prepare for Duplicate

Create a parameter file for the standby database called "/tmp/initcdb1_stby.ora" with the following contents.

*.db_name='cdb1'

Create the necessary directories on the standby server.

mkdir -p /u01/app/oracle/oradata/cdb1/pdbseed
mkdir -p /u01/app/oracle/oradata/cdb1/pdb1
mkdir -p /u01/app/oracle/fast_recovery_area/cdb1
mkdir -p /u01/app/oracle/admin/cdb1/adump

Create a password file, with the SYS password matching that of the primary database.

$ orapwd file=/u01/app/oracle/product/19.0.0/db_1/dbs/orapwcdb1 password=Password1 entries=10

     ◉ Create Standby using DUPLICATE

Start the auxiliary instance on the standby server using the temporary "init.ora" file.

$ export ORACLE_SID=cdb1
$ sqlplus / as sysdba

SQL> STARTUP NOMOUNT PFILE='/tmp/initcdb1_stby.ora';

Connect to RMAN, specifying a full connect string for both the TARGET and AUXILIARY instances. Do not attempt to use OS authentication.

$ rman TARGET sys/Password1@cdb1 AUXILIARY sys/Password1@cdb1_stby

Now issue the following DUPLICATE command.

DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER
  SPFILE
    SET db_unique_name='cdb1_stby' COMMENT 'Is standby'
  NOFILENAMECHECK;

If you need to convert file locations, or alter any initialisation parameters, you can do this during the DUPLICATE using the SET command.

DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER
  SPFILE
    SET db_unique_name='cdb1_stby' COMMENT 'Is standby'
    SET db_file_name_convert='/original/directory/path1/','/new/directory/path1/','/original/directory/path2/','/new/directory/path2/'
    SET log_file_name_convert='/original/directory/path1/','/new/directory/path1/','/original/directory/path2/','/new/directory/path2/'
    SET job_queue_processes='0'
  NOFILENAMECHECK;

A brief explanation of the individual clauses is shown below.

◉ FOR STANDBY: This tells RMAN the duplicate is for a standby database, so it will not force a DBID change.

◉ FROM ACTIVE DATABASE: The DUPLICATE will be created directly from the source datafiles, without an additional backup step.

◉ DORECOVER: The DUPLICATE will include the recovery step, bringing the standby up to the current point in time.

◉ SPFILE: Allows us to reset values in the spfile when it is copied from the source server.

◉ NOFILENAMECHECK: Destination file locations are not checked.

Once the command is complete, we can start using the broker.
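
Before moving on, it is worth confirming the duplicate really did produce a mounted physical standby. The following query, run on the standby server, should report the role and the unique name set during the DUPLICATE (the output shown is indicative).

$ export ORACLE_SID=cdb1
$ sqlplus / as sysdba

SQL> SELECT name, db_unique_name, database_role, open_mode FROM v$database;

NAME      DB_UNIQUE_NAME                 DATABASE_ROLE    OPEN_MODE
--------- ------------------------------ ---------------- --------------------
CDB1      CDB1_STBY                      PHYSICAL STANDBY MOUNTED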

◉ Enable Broker


At this point we have a primary database and a standby database, so now we need to start using the Data Guard Broker to manage them. Connect to both databases (primary and standby) and issue the following command.

ALTER SYSTEM SET dg_broker_start=true;
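
You can confirm the broker has started on each instance by checking the parameter; the DMON background process is also started at this point. The output below is illustrative.

SQL> show parameter dg_broker_start

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
dg_broker_start                      boolean     TRUE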

On the primary server, issue the following command to register the primary server with the broker.

$ dgmgrl sys/Password1@cdb1
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Tue Feb 26 22:39:33 2018
Version 19.2.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.
DGMGRL> CREATE CONFIGURATION my_dg_config AS PRIMARY DATABASE IS cdb1 CONNECT IDENTIFIER IS cdb1;
Configuration "my_dg_config" created with primary database "cdb1"
DGMGRL>

Now add the standby database.

DGMGRL> ADD DATABASE cdb1_stby AS CONNECT IDENTIFIER IS cdb1_stby MAINTAINED AS PHYSICAL;
Database "cdb1_stby" added
DGMGRL>

Now we enable the new configuration.

DGMGRL> ENABLE CONFIGURATION;
Enabled.
DGMGRL>

The following commands show how to check the configuration and status of the databases from the broker.

DGMGRL> SHOW CONFIGURATION;

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  cdb1      - Primary database
    cdb1_stby - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 26 seconds ago)

DGMGRL> SHOW DATABASE cdb1;

Database - cdb1

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    cdb1

Database Status:
SUCCESS

DGMGRL> SHOW DATABASE cdb1_stby;

Database - cdb1_stby

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 5.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    cdb1

Database Status:
SUCCESS

DGMGRL>
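
The same lag information is also available directly from the standby database via the V$DATAGUARD_STATS view, which can be handy when scripting checks outside the broker. As an illustrative example, run the following on the standby; both rows should report values close to "+00 00:00:00" when the standby is keeping up.

SQL> SELECT name, value FROM v$dataguard_stats WHERE name IN ('transport lag', 'apply lag');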

◉ Database Switchover


A database can operate in one of two mutually exclusive roles (primary or standby). These roles can be switched at runtime without loss of data and without resetting the redo logs. This process is known as a switchover and can be performed using the following commands. Connect to the primary database (cdb1) and switch over to the standby database (cdb1_stby).
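
Before issuing the switchover from the DGMGRL session shown below, it is worth running the optional VALIDATE DATABASE command, which asks the broker whether the standby is ready for the role change and reports anything that would block or slow it, such as missing standby redo logs.

DGMGRL> VALIDATE DATABASE cdb1_stby;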

$ dgmgrl sys/Password1@cdb1
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Tue Feb 26 22:39:33 2018
Version 19.2.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.
DGMGRL> SWITCHOVER TO cdb1_stby;
Performing switchover NOW, please wait...
Operation requires a connection to instance "cdb1" on database "cdb1_stby"
Connecting to instance "cdb1"...
Connected as SYSDBA.
New primary database "cdb1_stby" is opening...
Operation requires start up of instance "cdb1" on database "cdb1"
Starting instance "cdb1"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "cdb1_stby"
DGMGRL>

Let's switch back to the original primary. Connect to the new primary (cdb1_stby) and switchover to the new standby database (cdb1).

$ dgmgrl sys/Password1@cdb1_stby
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Tue Feb 26 22:53:36 2018
Version 19.2.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.
DGMGRL> SWITCHOVER TO cdb1;
Performing switchover NOW, please wait...
Operation requires a connection to instance "cdb1" on database "cdb1"
Connecting to instance "cdb1"...
Connected as SYSDBA.
New primary database "cdb1" is opening...
Operation requires start up of instance "cdb1" on database "cdb1_stby"
Starting instance "cdb1"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "cdb1"
DGMGRL>

◉ Database Failover


If the primary database is not available, the standby database can be activated as the new primary using the following commands. Connect to the standby database (cdb1_stby) and perform the failover.

$ dgmgrl sys/Password1@cdb1_stby
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Tue Feb 26 22:53:36 2018
Version 19.2.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.
DGMGRL> FAILOVER TO cdb1_stby;
Performing failover NOW, please wait...
Failover succeeded, new primary is "cdb1_stby"
DGMGRL>

Since the standby database is now the primary database, it should be backed up immediately.

The original primary database can now be configured as a standby. If flashback database was enabled on the primary database, then this can be done relatively easily with the following command.

DGMGRL> REINSTATE DATABASE cdb1;
Reinstating database "cdb1", please wait...
Operation requires shut down of instance "cdb1" on database "cdb1"
Shutting down instance "cdb1"...
ORACLE instance shut down.
Operation requires start up of instance "cdb1" on database "cdb1"
Starting instance "cdb1"...
ORACLE instance started.
Database mounted.
Continuing to reinstate database "cdb1" ...
Reinstatement of database "cdb1" succeeded
DGMGRL>

If flashback database is not enabled, you would have to manually recreate cdb1 as a standby. The basic process is the reverse of what you did previously.

# 1) Clean up the old instance.
sqlplus / as sysdba <<EOF
SHUTDOWN IMMEDIATE;
EXIT;
EOF

rm -Rf /u01/app/oracle/oradata/cdb1/*
rm -Rf /u01/app/oracle/fast_recovery_area/cdb1
rm -Rf /u01/app/oracle/fast_recovery_area/cdb1_stby
rm -Rf /u01/app/oracle/admin/cdb1
mkdir -p /u01/app/oracle/fast_recovery_area/cdb1
mkdir -p /u01/app/oracle/admin/cdb1/adump
mkdir -p /u01/app/oracle/oradata/cdb1/pdbseed
mkdir -p /u01/app/oracle/oradata/cdb1/pdb1
rm $ORACLE_HOME/dbs/spfilecdb1.ora

export ORACLE_SID=cdb1
sqlplus / as sysdba <<EOF
STARTUP NOMOUNT PFILE='/tmp/initcdb1_stby.ora';
EXIT;
EOF

# 2) Connect to RMAN.
$ rman TARGET sys/Password1@cdb1_stby AUXILIARY sys/Password1@cdb1

# 3) Duplicate the database.
DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER
  SPFILE
    SET db_unique_name='cdb1' COMMENT 'Is standby'
  NOFILENAMECHECK;

# 4) Connect to DGMGRL on the current primary.
$ dgmgrl sys/Password1@cdb1_stby

# 5) Enable the new standby.
DGMGRL> ENABLE DATABASE cdb1;

◉ Flashback Database


It was already mentioned in the previous section, but it is worth drawing your attention to Flashback Database once more. Although a switchover/switchback is safe for both the primary and standby database, a failover leaves the original primary database in a state where it can no longer simply be converted to a standby. If flashback database is not enabled, the original primary must be scrapped and recreated as a standby database.

An alternative is to enable flashback database on the primary (and the standby if desired) so that, in the event of a failover, the old primary can be flashed back to the time before the failover and quickly converted to a standby database, as shown above.
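
As an illustration, flashback database can be checked and enabled on the primary with the following commands, assuming a fast recovery area is already configured (as it is in this setup). On current releases this can be done while the database is open.

SQL> SELECT flashback_on FROM v$database;

FLASHBACK_ON
------------------
NO

SQL> ALTER DATABASE FLASHBACK ON;

Database altered.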

◉ Read-Only Standby and Active Data Guard


Once a standby database is configured, it can be opened in read-only mode to allow query access. This is often used to offload reporting to the standby server, thereby freeing up resources on the primary server. When open in read-only mode, archive log shipping continues, but managed recovery is stopped, so the standby database becomes increasingly out of date until managed recovery is resumed.

To switch the standby database into read-only mode, do the following.

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE OPEN READ ONLY;

To resume managed recovery, do the following.

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
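
If you want to confirm recovery has restarted, the managed recovery process (MRP0) should be visible again on the standby. The query below is a simple illustrative check; it should return the MRP0 process with a status such as APPLYING_LOG. Newer releases also expose this information via V$DATAGUARD_PROCESS.

SQL> SELECT process, status FROM v$managed_standby WHERE process LIKE 'MRP%';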

In 11g, Oracle introduced the Active Data Guard feature. This allows the standby database to be open in read-only mode while still applying redo information, so the standby can be available for querying yet remain up to date. Active Data Guard is a separately licensed option, but the following commands show how it can be enabled.

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Since managed recovery continues with Active Data Guard, there is no need to switch back to managed recovery from read-only mode in this case.
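
You can confirm Active Data Guard is in effect by checking the open mode on the standby, which should show that redo is being applied while the database is open for queries (indicative output shown). The broker's SHOW DATABASE output should also report "Real Time Query: ON" for the standby.

SQL> SELECT open_mode FROM v$database;

OPEN_MODE
--------------------
READ ONLY WITH APPLY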

◉ Snapshot Standby


Introduced in 11g, snapshot standby allows the standby database to be opened in read-write mode. When it is switched back into standby mode, all changes made whilst in read-write mode are lost. This is achieved using flashback database, but the standby database does not need to have flashback database explicitly enabled to take advantage of this feature, though it works just the same if it is.

Connect to the primary (cdb1) database and convert the standby database (cdb1_stby) to a snapshot standby.

$ dgmgrl sys/Password1@cdb1
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Tue Feb 26 22:53:36 2018
Version 19.2.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.
DGMGRL> CONVERT DATABASE cdb1_stby TO SNAPSHOT STANDBY;
Converting database "cdb1_stby" to a Snapshot Standby database, please wait...
Database "cdb1_stby" converted successfully
DGMGRL>
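
At this point the standby is open read-write, and the broker has created a guaranteed restore point behind the scenes to allow the conversion back. A quick check on the standby confirms the new role (output shown for illustration).

SQL> SELECT database_role, open_mode FROM v$database;

DATABASE_ROLE    OPEN_MODE
---------------- --------------------
SNAPSHOT STANDBY READ WRITE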

When you are finished with the snapshot standby, convert it back to a standby database.

$ dgmgrl sys/Password1@cdb1
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Tue Feb 26 22:53:36 2018
Version 19.2.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.
DGMGRL> CONVERT DATABASE cdb1_stby TO PHYSICAL STANDBY;
Converting database "cdb1_stby" to a Physical Standby database, please wait...
Operation requires shut down of instance "cdb1" on database "cdb1_stby"
Shutting down instance "cdb1"...
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires start up of instance "cdb1" on database "cdb1_stby"
Starting instance "cdb1"...
ORACLE instance started.
Database mounted.
Continuing to convert database "cdb1_stby" ...
Database "cdb1_stby" converted successfully
DGMGRL>

The standby is once again in managed recovery and archive log shipping has resumed. Notice that flashback database is still not enabled on the standby.

DGMGRL> SHOW CONFIGURATION;

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  cdb1      - Primary database
    cdb1_stby - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 38 seconds ago)

DGMGRL>