Wednesday, April 29, 2020

Extended Data Types in Oracle Database 12c


Prior to Oracle 12c, regardless of the character semantics used, the maximum sizes of VARCHAR2, NVARCHAR2 and RAW columns in the database are as follows.

◉ VARCHAR2 : 4000 bytes
◉ NVARCHAR2 : 4000 bytes
◉ RAW : 2000 bytes

With the introduction of Extended Data Types, Oracle 12c optionally increases these maximum sizes.

◉ VARCHAR2 : 32767 bytes
◉ NVARCHAR2 : 32767 bytes
◉ RAW : 32767 bytes

Remember, these figures are in bytes, not characters. The total number of characters that can be stored will depend on the character sets being used.

◉ 12cR2 Update


Prior to the on-prem release, people were suggesting extended data types would be the default in Oracle 12.2. The Database Cloud Service on the Oracle Public Cloud has this feature turned on by default, which added some weight to this suggestion. The on-prem release of Oracle 12.2 does not have extended data types enabled by default.

Extended data types are necessary if you want to use the column-level collation feature available in 12.2 onward.
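
For context, here is a minimal sketch of that 12.2 column-level collation syntax, which only works once extended data types are enabled (the collation_demo table is purely illustrative).

CREATE TABLE collation_demo (
  id   NUMBER,
  name VARCHAR2(50 CHAR) COLLATE BINARY_CI
);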

◉ Enabling Extended Data Types in a non-CDB


The extended data types functionality is controlled using the MAX_STRING_SIZE initialization parameter. The default value is STANDARD, which restricts the maximum sizes to the traditional lengths. Setting the parameter value to EXTENDED allows for the new maximum lengths.
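
Before making any changes you can confirm the current setting with either of the following.

SHOW PARAMETER max_string_size

SELECT value FROM v$parameter WHERE name = 'max_string_size';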

The process of switching to the extended data types is a one-way operation. Once you switch to extended data types you can't switch back without some form of database recovery. In addition to changing the parameter, you must run the "utl32k.sql" script to invalidate and recompile any objects that may be affected by the maximum length changes. An example of the commands required to enable extended data types in a single instance non-CDB database is shown below.

A commenter (Hristiyan) also suggested running PURGE DBA_RECYCLEBIN before proceeding. I've never done this and I've not had any issues as a result of it, but it's probably a good idea.

CONN / AS SYSDBA
-- Precaution to prevent possible failures. Suggested by Hristiyan.
PURGE DBA_RECYCLEBIN;

SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
ALTER SYSTEM SET max_string_size=extended;
@?/rdbms/admin/utl32k.sql
SHUTDOWN IMMEDIATE;
STARTUP;
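
Once the database is back up, it is worth confirming the recompilation has left nothing invalid. A simple check is shown below.

COLUMN object_name FORMAT A30

SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID';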

The MAX_STRING_SIZE documentation includes the procedure for other types of database, including:

◉ Pluggable Databases (PDBs)
◉ RAC Databases
◉ Oracle Data Guard Logical Standby Database

The "utl32k.sql" script produces output that looks something like this.

Session altered.

DOC>################################################
DOC>################################################
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database has not been opened for UPGRADE.
DOC>
DOC>   Perform a "SHUTDOWN ABORT"  and
DOC>   restart using UPGRADE.
DOC>###############################################
DOC>###############################################
DOC>#

no rows selected

DOC>################################################
DOC>################################################
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database does not have compatible >= 12.0.0
DOC>
DOC>   Set compatible >= 12.0.0 and retry.
DOC>#################################################
DOC>#################################################
DOC>#

PL/SQL procedure successfully completed.

Session altered.

0 rows updated.

Commit complete.

System altered.

PL/SQL procedure successfully completed.

Commit complete.

System altered.

Session altered.

PL/SQL procedure successfully completed.

No errors.

Session altered.

PL/SQL procedure successfully completed.

Commit complete.

Package altered.

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2013-07-10 10:11:26

DOC>   The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC>   objects in the database. Recompilation time is proportional to the
DOC>   number of invalid objects in the database, so this command may take
DOC>   a long time to execute on a database with a large number of invalid
DOC>   objects.
DOC>
DOC>   Use the following queries to track recompilation progress:
DOC>
DOC>   1. Query returning the number of invalid objects remaining. This
DOC>      number should decrease with time.
DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC>   2. Query returning the number of objects compiled so far. This number
DOC>      should increase with time.
DOC>         SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC>   This script automatically chooses serial or parallel recompilation
DOC>   based on the number of CPUs available (parameter cpu_count) multiplied
DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC>   On RAC, this number is added across all RAC nodes.
DOC>
DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC>   recompilation. Jobs are created without instance affinity so that they
DOC>   can migrate across RAC nodes. Use the following queries to verify
DOC>   whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC>   1. Query showing jobs created by UTL_RECOMP
DOC>         SELECT job_name FROM dba_scheduler_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC>   2. Query showing UTL_RECOMP jobs that are running
DOC>         SELECT job_name FROM dba_scheduler_running_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2013-07-10 10:11:33

DOC> The following query reports the number of objects that have compiled
DOC> with errors.
DOC>
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

OBJECTS WITH ERRORS
-------------------
                  0

DOC> The following query reports the number of errors caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC>#

ERRORS DURING RECOMPILATION
---------------------------
                          0

Function created.

PL/SQL procedure successfully completed.

Function dropped.

...Database user "SYS", database schema "APEX_040200", user# "98" 11:26:30
...Compiled 0 out of 2998 objects considered, 0 failed compilation 11:26:31
...263 packages
...255 package bodies
...453 tables
...11 functions
...16 procedures
...3 sequences
...458 triggers
...1322 indexes
...207 views
...0 libraries
...6 types
...0 type bodies
...0 operators
...0 index types
...Begin key object existence check 11:26:31
...Completed key object existence check 11:26:31
...Setting DBMS Registry 11:26:31
...Setting DBMS Registry Complete 11:26:31
...Exiting validate 11:26:31

PL/SQL procedure successfully completed.

SQL>

◉ Enabling Extended Data Types in a PDB


The process of enabling extended data types is similar for pluggable databases, but you have to remember to perform the change on the root container and all pluggable databases.

Prepare the root container and all pluggable databases so we can run the "utl32k.sql" script in them.

CONN / AS SYSDBA
ALTER SYSTEM SET max_string_size=extended SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
ALTER PLUGGABLE DATABASE ALL OPEN UPGRADE;
EXIT;

A commenter (Hristiyan) also suggested running PURGE DBA_RECYCLEBIN before proceeding. I've never done this and I've not had any issues as a result of it, but it's probably a good idea. Remember to run it in all containers.

Run the "utl32k.sql" script using "catcon.pl", so the change is made to the root container and all the pluggable databases.

$ cd $ORACLE_HOME/rdbms/admin/
$ $ORACLE_HOME/perl/bin/perl catcon.pl -d $ORACLE_HOME/rdbms/admin -l /tmp -b utl32k_output utl32k.sql

Check the output in the log files to make sure the scripts ran correctly.

$ ls /tmp/utl32k_output*
/tmp/utl32k_output0.log  /tmp/utl32k_output3.log
/tmp/utl32k_output1.log  /tmp/utl32k_output_catcon_4581.lst
/tmp/utl32k_output2.log  /tmp/utl32k_output_catcon_4740.lst
$

Restart the database to complete the process.

CONN / AS SYSDBA
SHUTDOWN IMMEDIATE;
STARTUP;

◉ Using Extended Data Types


As stated previously, the maximum sizes are quoted in bytes, so database columns defined using character semantics have differing maximum sizes dependent on the character set used. Remember, NVARCHAR2 is always defined using character semantics. For example, with the AL16UTF16 national character set each character occupies two bytes, so the largest possible NVARCHAR2 declaration is 16383 characters, which is why that value is used below.

With extended data types enabled we can see the change to the standard behaviour.

DROP TABLE t1 PURGE;

CREATE TABLE t1 (
  id             NUMBER,
  varchar2_data  VARCHAR2(32767),
  nvarchar2_data NVARCHAR2(16383),
  raw_data       RAW(32767)
);

Table created.

SQL>

The following code inserts some maximum size data into the table and queries the lengths of the data.

INSERT INTO t1
SELECT 1,
       RPAD('X', 32767, 'X') AS varchar2_data,
       RPAD('X', 16383, 'X') AS nvarchar2_data,
       UTL_RAW.cast_to_raw(RPAD('X', 32767, 'X')) as raw_data
FROM   dual;

SELECT id,
       LENGTH(varchar2_data),
       LENGTH(nvarchar2_data),
       LENGTH(raw_data)
FROM   t1;

        ID LENGTH(VARCHAR2_DATA) LENGTH(NVARCHAR2_DATA) LENGTH(RAW_DATA)
---------- --------------------- ---------------------- ----------------
         1                 32767                  16383            32767

1 row selected.

SQL>

◉ Implicit LOBs


If we look at the contents of our schema, we can see that our extended data types are really just a veneer hiding LOB processing. Each of the large columns has an associated LOB segment and LOB index.

COLUMN object_name FORMAT A40

SELECT object_type, object_name
FROM   user_objects
ORDER BY object_type, object_name;

OBJECT_TYPE             OBJECT_NAME
----------------------- ----------------------------------------
INDEX                   SYS_IL0000092349C00002$$
INDEX                   SYS_IL0000092349C00003$$
INDEX                   SYS_IL0000092349C00004$$
LOB                     SYS_LOB0000092349C00002$$
LOB                     SYS_LOB0000092349C00003$$
LOB                     SYS_LOB0000092349C00004$$
TABLE                   T1

7 rows selected.

SQL>
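
We can dig a little further using the USER_LOBS view, which confirms the storage type (SecureFile or BasicFile) and the tablespace of each hidden LOB segment.

COLUMN column_name FORMAT A20
COLUMN segment_name FORMAT A30

SELECT column_name, segment_name, securefile, tablespace_name
FROM   user_lobs
WHERE  table_name = 'T1';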

These implicit LOBs come with a number of restrictions beyond those seen with conventional LOBs.

◉ VARCHAR2 and NVARCHAR2 columns with a declared size greater than 4000 bytes are considered extended data types.

◉ RAW columns with a declared size greater than 2000 bytes are considered extended data types.

◉ All extended data types (see the previous two definitions) are stored out-of-line in LOB segments.

◉ You have no manual control over these LOB segments. Their management is totally internal.

◉ The LOB segments will always be stored in the same tablespace as the associated table.

◉ LOBs stored in Automatic Segment Space Management (ASSM) tablespaces are stored as SecureFiles. Those stored in non-ASSM tablespaces are stored as BasicFiles.

◉ All normal LOB restrictions apply.

◉ Thoughts

This feature is listed under the "Reduced Cost and Complexities of Migrating to Oracle" section of the New Features Guide. I think it is worth considering this feature if you are involved in a migration project and don't want to refactor code to switch to conventional LOB processing, but I don't feel it should be used in a conventional Oracle project.

If you do decide to use this feature in an existing project, you need to do some serious testing before you commit yourself.

◉ Bugs


Petar Spasov mentioned the following bug in the comments, related to one of the data pump AQ tables (KUPC$DATAPUMP_QUETAB*), which prevented him from using extended data types until the bug was patched.

UTL32K.SQL FAILING WITH ORA-24005 ON AQ QUEUE TABLE (Doc ID 2092248.1)

As mentioned previously, such a large scale change needs to be tested in your system. Don't just assume everything will be fine.

Jwaleet mentioned getting an "ORA-01441: cannot decrease column length because some value is too big" error when trying to enable extended data types. The resolution was to drop a materialized view to allow the database to function. Presumably they rebuilt it after the database was working again.

Tuesday, April 28, 2020

Row Limiting Clause for Top-N Queries in Oracle Database 12c


1. Introduction


A Top-N query is used to retrieve the top or bottom N rows from an ordered set. Combining two Top-N queries gives you the ability to page through an ordered set. This concept is not a new one. In fact, Oracle already provides multiple ways to perform Top-N queries, as discussed in the Top-N Queries article. These methods work fine, but they look rather complicated compared to the methods provided by other database engines. For example, MySQL uses a LIMIT clause to page through an ordered result set.

SELECT *
FROM   my_table
ORDER BY column_1
LIMIT 0, 40;

Oracle 12c has introduced the row limiting clause to simplify Top-N queries and paging through ordered result sets.
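
For comparison, the equivalent of the MySQL paging query above written with the 12c row limiting clause looks like this (my_table and column_1 are the same hypothetical names).

SELECT *
FROM   my_table
ORDER BY column_1
OFFSET 0 ROWS FETCH NEXT 40 ROWS ONLY;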

2. Setup


To be consistent, we will use the same example table used in the Top-N Queries article.

Create and populate a test table.

DROP TABLE rownum_order_test;

CREATE TABLE rownum_order_test (
  val  NUMBER
);

INSERT ALL
  INTO rownum_order_test
  INTO rownum_order_test
SELECT level
FROM   dual
CONNECT BY level <= 10;

COMMIT;

The following query shows we have 20 rows with 10 distinct values.

SELECT val
FROM   rownum_order_test
ORDER BY val;

       VAL
----------
         1
         1
         2
         2
         3
         3
         4
         4
         5
         5
         6

       VAL
----------
         6
         7
         7
         8
         8
         9
         9
        10
        10

20 rows selected.

SQL>

3. Top-N Queries


The syntax for the row limiting clause looks a little complicated at first glance.

[ OFFSET offset { ROW | ROWS } ]
[ FETCH { FIRST | NEXT } [ { rowcount | percent PERCENT } ]
    { ROW | ROWS } { ONLY | WITH TIES } ]

Actually, for the classic Top-N query it is very simple. The example below returns the 5 largest values from an ordered set. Using the ONLY clause limits the number of rows returned to the exact number requested.

SELECT val
FROM   rownum_order_test
ORDER BY val DESC
FETCH FIRST 5 ROWS ONLY;

       VAL
----------
        10
        10
         9
         9
         8

5 rows selected.

SQL>

Using the WITH TIES clause may result in more rows being returned if multiple rows match the value of the Nth row. In this case the 5th row has the value "8", but there are two rows that tie for 5th place, so both are returned.

SELECT val
FROM   rownum_order_test
ORDER BY val DESC
FETCH FIRST 5 ROWS WITH TIES;

       VAL
----------
        10
        10
         9
         9
         8
         8

6 rows selected.

SQL>

In addition to limiting by row count, the row limiting clause also allows us to limit by percentage of rows. The following query returns the bottom 20% of rows.

SELECT val
FROM   rownum_order_test
ORDER BY val
FETCH FIRST 20 PERCENT ROWS ONLY;

       VAL
----------
         1
         1
         2
         2

4 rows selected.

SQL>

4. Paging Through Data


Paging through an ordered result set was a little annoying using the classic Top-N query approach, as it required two Top-N queries, one nested inside the other. For example, if we wanted the second block of 4 rows we might do the following.

SELECT val
FROM   (SELECT val, rownum AS rnum
        FROM   (SELECT val
                FROM   rownum_order_test
                ORDER BY val)
        WHERE rownum <= 8)
WHERE  rnum >= 5;

       VAL
----------
         3
         3
         4
         4

4 rows selected.

SQL>

With the row limiting clause we can achieve the same result using the following query.

SELECT val
FROM   rownum_order_test
ORDER BY val
OFFSET 4 ROWS FETCH NEXT 4 ROWS ONLY;

       VAL
----------
         3
         3
         4
         4

4 rows selected.

SQL>

The starting point for the FETCH is OFFSET+1.
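
The FETCH part of the clause is optional, so an OFFSET on its own simply skips the specified number of rows and returns everything that remains. The following sketch skips the first 16 rows and returns the last 4.

SELECT val
FROM   rownum_order_test
ORDER BY val
OFFSET 16 ROWS;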

The OFFSET is always based on a number of rows, but this can be combined with a FETCH using a PERCENT.

SELECT val
FROM   rownum_order_test
ORDER BY val
OFFSET 4 ROWS FETCH NEXT 20 PERCENT ROWS ONLY;

       VAL
----------
         3
         3
         4
         4

4 rows selected.

SQL>

Not surprisingly, the offset, rowcount and percent can, and probably should, be bind variables.

VARIABLE v_offset NUMBER;
VARIABLE v_next NUMBER;

BEGIN
  :v_offset := 4;
  :v_next   := 4;
END;
/

SELECT val
FROM   rownum_order_test
ORDER BY val
OFFSET :v_offset ROWS FETCH NEXT :v_next ROWS ONLY;

       VAL
----------
         3
         3
         4
         4

SQL>

5. Extra Information


◉ The keywords ROW and ROWS can be used interchangeably, as can the FIRST and NEXT keywords. Pick the ones that scan best when reading the SQL like a sentence.

◉ If the offset is not specified it is assumed to be 0.

◉ Negative values for the offset, rowcount or percent are treated as 0.

◉ Null values for offset, rowcount or percent result in no rows being returned (see the sketch after this list).

◉ Fractional portions of offset, rowcount or percent are truncated.

◉ If the offset is greater than or equal to the total number of rows in the set, no rows are returned.

◉ If the rowcount or percent are greater than the total number of rows after the offset, all rows are returned.

◉ The row limiting clause cannot be used with the FOR UPDATE clause, the CURRVAL and NEXTVAL sequence pseudocolumns, or in a fast refresh materialized view.
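
The NULL behaviour mentioned above is easy to demonstrate using a bind variable that has not been assigned a value, so it is NULL.

VARIABLE v_rows NUMBER;

SELECT val
FROM   rownum_order_test
ORDER BY val
FETCH FIRST :v_rows ROWS ONLY;

no rows selected

SQL>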

6. Query Transformation


It's worth keeping in mind this new functionality is a query transformation. If we take one of the previous queries and perform a 10053 trace we can see this.

Identify the trace file for the session.

SELECT value FROM v$diag_info WHERE  name = 'Default Trace File';

VALUE
----------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb1/cdb1/trace/cdb1_ora_15539.trc

1 row selected.

SQL>

Perform a 10053 trace of the statement.

ALTER SESSION SET EVENTS '10053 trace name context forever';

SELECT val
FROM   rownum_order_test
ORDER BY val DESC
FETCH FIRST 5 ROWS ONLY;

ALTER SESSION SET EVENTS '10053 trace name context off';

The section beginning with "Final query after transformations" shows the statement that was actually processed, after the query transformation.

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "from$_subquery$_002"."VAL" "VAL"
FROM  (SELECT "ROWNUM_ORDER_TEST"."VAL" "VAL",
              "ROWNUM_ORDER_TEST"."VAL" "rowlimit_$_0",
              ROW_NUMBER() OVER ( ORDER BY "ROWNUM_ORDER_TEST"."VAL" DESC ) "rowlimit_$$_rownumber"
       FROM "TEST"."ROWNUM_ORDER_TEST" "ROWNUM_ORDER_TEST") "from$_subquery$_002"
WHERE  "from$_subquery$_002"."rowlimit_$$_rownumber"<=5
ORDER BY "from$_subquery$_002"."rowlimit_$_0" DESC

As you can see, the statement has been rewritten to a form we might have used prior to 12c.

Monday, April 27, 2020

Recovery Manager (RMAN) Table Point In Time Recovery (PITR) in Oracle Database 12c


In previous releases point in time recovery of a table or table partition was only possible by manually creating a point in time clone of the database, retrieving the table using data pump, then removing the clone. Oracle 12c includes a new RMAN feature which performs all these steps, initiated from a single command.

◉ Setup


To demonstrate this, we need to create a table to do a PITR on. This example assumes you are running in archivelog mode and have adequate backups in place to allow a recovery via a point in time clone. For such a recent modification, using a flashback query would be more appropriate, but this serves the purpose for this test.

CONN / AS SYSDBA

CREATE USER test IDENTIFIED BY test
  QUOTA UNLIMITED ON users;

GRANT CREATE SESSION, CREATE TABLE TO test;

CONN test/test

CREATE TABLE t1 (id NUMBER);
INSERT INTO t1 VALUES (1);
COMMIT;

Check the current SCN.

CONN / AS SYSDBA

SELECT DBMS_FLASHBACK.get_system_change_number FROM dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
                 1853267

SQL>

Add some more data since the SCN was checked.

CONN test/test

INSERT INTO t1 VALUES (2);
COMMIT;

SELECT * FROM t1;

        ID
----------
         1
         2

SQL>
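
As mentioned in the setup, for such a recent change a flashback query could pull back the old version of the data directly. A quick sketch using the SCN captured above.

SELECT * FROM t1 AS OF SCN 1853267;

        ID
----------
         1

SQL>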

Exit from SQL*Plus so we can connect to RMAN.

◉ Table Point In Time Recovery (PITR)


Log in to RMAN as a user with SYSDBA or SYSBACKUP privilege.

$ rman target=/

Issue the RECOVER TABLE command, giving a suitable AUXILIARY DESTINATION location for the auxiliary database. The point in time can be specified using UNTIL SCN, UNTIL TIME or UNTIL SEQUENCE. In the following example the REMAP TABLE clause is included to give the recovered table a new name so we can compare the before and after.

# SCN
RECOVER TABLE TEST.T1
  UNTIL SCN 1853267
  AUXILIARY DESTINATION '/u01/aux' 
  REMAP TABLE 'TEST'.'T1':'T1_PREV';

# TIME
RECOVER TABLE TEST.T1
  UNTIL TIME "TO_DATE('01-JAN-2013 15:00', 'DD-MON-YYYY HH24:MI')"
  AUXILIARY DESTINATION '/u01/aux' 
  REMAP TABLE 'TEST'.'T1':'T1_PREV';

The output of the RECOVER TABLE command is rather long, but it clearly shows the creation of the clone and the data pump export and import operations. Once the operation is complete, we can see the T1_PREV table has been created and contains the data as it was when the SCN was captured.

sqlplus test/test

SELECT * FROM t1_prev;

ID
----------
1

SQL>
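
For comparison, the original table is untouched by the recovery and still contains both rows.

SELECT * FROM t1;

        ID
----------
         1
         2

SQL>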

◉ Table Point In Time Recovery (PITR) to Dump File


Rather than completing the whole recovery, you can just stop at the point where the recovered table is in a data pump dump file, which you can import manually at a later time. The following example uses the DATAPUMP DESTINATION, DUMP FILE and NOTABLEIMPORT clauses to achieve this.

RECOVER TABLE TEST.T1
  UNTIL SCN 1853267
  AUXILIARY DESTINATION '/u01/aux'
  DATAPUMP DESTINATION '/u01/export'
  DUMP FILE 'test_t1_prev.dmp'
  NOTABLEIMPORT;

Once the operation is complete, we can see the resulting dump file in the specified directory.

$ ls -al /u01/export
total 120
drwxr-xr-x. 2 oracle oinstall   4096 Dec 26 17:33 .
drwxrwxr-x. 5 oracle oinstall   4096 Dec 26 12:30 ..
-rw-r-----. 1 oracle oinstall 114688 Dec 26 17:34 test_t1_prev.dmp
$

◉ Table Point In Time Recovery (PITR) in a Pluggable Database (PDB)


The process for performing a point in time recovery of a table in a PDB is similar to that of a non-CDB database. You just have to add the OF PLUGGABLE DATABASE clause.

RECOVER TABLE TEST.T1 OF PLUGGABLE DATABASE pdb1
  UNTIL SCN 1853267
  AUXILIARY DESTINATION '/u01/aux' 
  REMAP TABLE 'TEST'.'T1':'T1_PREV';

◉ Remapping (12.1)


In a previous section we saw an example of remapping a table name. In the following example the table point in time recovery of T1 results in a new table called T1_PREV.

RECOVER TABLE TEST.T1
  UNTIL SCN 1853267
  AUXILIARY DESTINATION '/u01/aux' 
  REMAP TABLE 'TEST'.'T1':'T1_PREV';

It is also possible to remap the tablespace, either independently or in conjunction with the remap of the table name. In the following example the previous table point in time recovery is repeated, but the resulting table is created in the EXAMPLES tablespace, rather than the USERS tablespace.

RECOVER TABLE TEST.T1
  UNTIL SCN 1853267
  AUXILIARY DESTINATION '/u01/aux' 
  REMAP TABLE 'TEST'.'T1':'T1_PREV'
  REMAP TABLESPACE 'USERS':'EXAMPLES';

◉ Remapping (12.2+)


In addition to the remapping operations of 12.1, Oracle 12.2 allows you to remap the schema of the table during a tablespace point in time recovery. The remap table syntax now allows the inclusion of the destination schema. The following example switches the schema from TEST to TEST2.

RECOVER TABLE TEST.T1
  UNTIL SCN 1853267
  AUXILIARY DESTINATION '/u01/aux' 
  REMAP TABLE 'TEST'.'T1':'TEST2'.'T1_PREV'
  REMAP TABLESPACE 'USERS':'EXAMPLES';

◉ Space Check (12.2+)


Table and partition point in time recovery (PITR) requires the creation of an auxiliary instance, created in the location specified by the AUXILIARY DESTINATION clause. In Oracle 12.1 the recovery operation would begin regardless of the space available in the auxiliary destination. If there wasn't enough space the operation would fail.

In Oracle 12.2 RMAN checks the space available for the auxiliary instance before beginning the operation.

Sunday, April 26, 2020

Recovery Manager (RMAN) Database Duplication Enhancements in Oracle Database 12c


The Recovery Manager (RMAN) DUPLICATE command has been available since at least Oracle 8i, allowing you to perform backup-based duplication of a database. Oracle 11g introduced the concept of active database duplication, allowing database duplication without the need for additional backups.

Active Database Duplication using Backup Sets


In previous releases, active duplicates were performed using implicit image copy backups, transferred directly to the destination server. From 12.1 it is also possible to perform active duplicates using backup sets by including the USING BACKUPSET clause. Compared to image copy backups, the unused block compression associated with a backup set can greatly reduce the amount of data pulled across the network for databases containing lots of unused blocks. The example below performs an active duplicate of a source database (cdb1) to a new destination database (cdb2) using backup sets rather than image copy backups.

DUPLICATE DATABASE TO cdb2
  FROM ACTIVE DATABASE
  USING BACKUPSET
  SPFILE
    parameter_value_convert ('cdb1','cdb2')
    set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
    set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
    set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
    set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
    set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
    set db_name='cdb2'
  NOFILENAMECHECK;

Active Database Duplication using Compressed Backup Sets


In addition to conventional backup sets, active duplicates can also be performed using compressed backup sets by adding the USING COMPRESSED BACKUPSET clause, which further reduces the amount of data passing over the network. The example below performs an active duplicate of a source database (cdb1) to a new destination database (cdb2) using compressed backup sets.

DUPLICATE DATABASE TO cdb2
  FROM ACTIVE DATABASE
  USING COMPRESSED BACKUPSET
  SPFILE
    parameter_value_convert ('cdb1','cdb2')
    set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
    set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
    set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
    set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
    set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
    set db_name='cdb2'
  NOFILENAMECHECK;

Active Database Duplication and Encryption


 Always check the licensing implications of encryption before using it on a real system. Some encryption operations require the advanced security option.

Oracle allows backup sets to be encrypted. Transparent encryption uses a wallet to hold the encryption key and is seamless to the DBA, since backup sets are encrypted and decrypted as required using the wallet. Password encryption requires the DBA to enter a password for each backup and restore operation.

Since Oracle 12c now supports active duplicates using backup sets, it also supports encryption of those backup sets using both methods.

◉ If the source database uses transparent encryption of backups, the wallet containing the encryption key must be made available on the destination database.

◉ If password encryption is used on the source database, the SET ENCRYPTION ON IDENTIFIED BY <password> command can be used to define an encryption password for the active duplication process. If you are running in mixed mode, you can use SET ENCRYPTION ON IDENTIFIED BY <password> ONLY to override transparent encryption.

The encryption algorithm used by the active duplication can be set using the SET ENCRYPTION ALGORITHM command, where the possible algorithms can be displayed using the V$RMAN_ENCRYPTION_ALGORITHMS view. If the encryption algorithm is not set, the default (AES128) is used.
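
If you want to see which algorithms are available on your system, the V$RMAN_ENCRYPTION_ALGORITHMS view mentioned above can be queried directly.

SELECT algorithm_name, algorithm_description
FROM   v$rman_encryption_algorithms;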

The example below performs an active duplicate of a source database (cdb1) to a new destination database (cdb2) using password encrypted backup sets.

SET ENCRYPTION ALGORITHM 'AES128';
SET ENCRYPTION ON IDENTIFIED BY MyPassword1 ONLY;

DUPLICATE DATABASE TO cdb2
  FROM ACTIVE DATABASE
  USING BACKUPSET
  SPFILE
    parameter_value_convert ('cdb1','cdb2')
    set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
    set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
    set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
    set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
    set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
    set db_name='cdb2'
  NOFILENAMECHECK;

Active Database Duplication and Parallelism (Multisection)


Active database duplications can take advantage of the multisection backup functionality introduced in Oracle 12c, whether using image copies or backup sets. Including the SECTION SIZE clause indicates multisection backups should be used.

There must be multiple channels available for multisection backups to work, so you will either need to configure persistent channel parallelism using CONFIGURE DEVICE TYPE ... PARALLELISM or set the parallelism for the current operation by issuing multiple ALLOCATE CHANNEL commands.

The example below performs an active duplicate of a source database (cdb1) to a new destination database (cdb2) using multisection backups.

CONFIGURE DEVICE TYPE disk PARALLELISM 4;

DUPLICATE DATABASE TO cdb2
  FROM ACTIVE DATABASE
  USING BACKUPSET
  SPFILE
    parameter_value_convert ('cdb1','cdb2')
    set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
    set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
    set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
    set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
    set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
    set db_name='cdb2'
  NOFILENAMECHECK
  SECTION SIZE 400M;

Multitenant Considerations


All the examples shown previously involve multitenant databases, but there are some extra considerations when you are using the multitenant architecture.

If you are building an "initSID.ora" file from scratch, you must remember to include the following parameter.

enable_pluggable_database=TRUE

The previous examples didn't have to do this as the SPFILE was created as a copy of the source SPFILE, which already contained this parameter setting.

The DUPLICATE command includes some additional clauses related to the multitenant option.

Adding the PLUGGABLE DATABASE clause allows you to specify which pluggable databases should be included in the duplication. The following example creates a new container database (cdb2), but it only contains two pluggable databases (pdb1 and pdb2). The third pluggable database (pdb3) is not included in the clone.

DUPLICATE DATABASE TO cdb2 PLUGGABLE DATABASE pdb1, pdb2
  FROM ACTIVE DATABASE
  SPFILE
    parameter_value_convert ('cdb1','cdb2')
    set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
    set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
    set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
    set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
    set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
    set db_name='cdb2'
  NOFILENAMECHECK;

The resulting clone contains the following PDBs.

SELECT name FROM v$pdbs;

NAME
------------------------------
PDB$SEED
PDB1
PDB2

SQL>

Using the SKIP PLUGGABLE DATABASE clause will create a duplicate CDB with all the PDBs except those in the list. The following example creates a container database (cdb2) with a single pluggable database (pdb3). The other two pluggable databases (pdb1 and pdb2) are excluded from the clone.

DUPLICATE DATABASE TO cdb2 SKIP PLUGGABLE DATABASE pdb1, pdb2
  FROM ACTIVE DATABASE
  SPFILE
    parameter_value_convert ('cdb1','cdb2')
    set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
    set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
    set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
    set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
    set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
    set db_name='cdb2'
  NOFILENAMECHECK;

The resulting clone contains the following PDBs.

SELECT name FROM v$pdbs;

NAME
------------------------------
PDB$SEED
PDB3

SQL>

You can also limit the tablespaces that are included in a PDB using the TABLESPACE clause. If we connect to the source container database (cdb1) and check the tablespaces in the pdb1 pluggable database we see the following.

CONN sys/Password1@cdb1 AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;

SELECT tablespace_name FROM dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS
TEST_TS

SQL>

Next, we perform a duplicate for the whole of the pdb2 pluggable database, but just the TEST_TS tablespace in the pdb1 pluggable database.

DUPLICATE DATABASE TO cdb2 PLUGGABLE DATABASE pdb2 TABLESPACE pdb1:test_ts
  FROM ACTIVE DATABASE
  SPFILE
    parameter_value_convert ('cdb1','cdb2')
    set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
    set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
    set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
    set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
    set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
    set db_name='cdb2'
  NOFILENAMECHECK;

Checking the completed clone reveals both the pdb1 and pdb2 pluggable databases are present, but the pdb1 pluggable database does not include the USERS tablespace.

CONN sys/Password1@cdb2 AS SYSDBA

SELECT name FROM v$pdbs;

NAME
------------------------------
PDB$SEED
PDB1
PDB2

SQL>

ALTER SESSION SET CONTAINER = pdb1;

SELECT tablespace_name FROM dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
TEST_TS

SQL>

Clones always contain a fully functional CDB and functional PDBs. Even when we just ask for the TEST_TS tablespace in pdb1, we also get the SYSTEM, SYSAUX and TEMP tablespaces in the PDB. The TABLESPACE clause can be used on its own without the PLUGGABLE DATABASE clause, if no full PDBs are to be duplicated.

The SKIP TABLESPACE clause allows you to exclude specific tablespaces, rather than use the inclusion approach. The following example clones all the pluggable databases, but excludes the TEST_TS tablespace from pdb1 during the duplicate.

DUPLICATE DATABASE TO cdb2 SKIP TABLESPACE pdb1:test_ts
  FROM ACTIVE DATABASE
  SPFILE
    parameter_value_convert ('cdb1','cdb2')
    set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
    set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
    set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
    set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
    set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
    set db_name='cdb2'
  NOFILENAMECHECK;

Not surprisingly, the resulting clone contains all the pluggable databases, but the pdb1 pluggable database is missing the TEST_TS tablespace.

CONN sys/Password1@cdb2 AS SYSDBA

SELECT name FROM v$pdbs;

NAME
------------------------------
PDB$SEED
PDB1
PDB2
PDB3

SQL>

ALTER SESSION SET CONTAINER = pdb1;

SELECT tablespace_name FROM dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS

SQL>

Appendix


The examples in this article are based on the following assumptions.

◉ The source database is a container database (cdb1), with three pluggable databases (pdb1, pdb2 and pdb3).
◉ The destination database is called cdb2.
◉ Both the source and destination databases use file system storage and do not use Oracle Managed Files (OMF), hence the need for the file name conversions.
◉ The basic setup for active duplicates was performed using the same process described for 11g.

Between every test the following clean-up was performed.

# Set the paths using the source DB.
export ORAENV_ASK=NO
export ORACLE_SID=cdb1
. oraenv
export ORAENV_ASK=YES

# Set the SID for the new clone (destination).
export ORACLE_SID=cdb2

# Stop the clone if it already exists.
sqlplus / as sysdba <<EOF
SHUTDOWN IMMEDIATE;
EXIT;
EOF

# Cleanup any previous clone attempts.
mkdir -p /u01/app/oracle/admin/cdb2/adump
mkdir -p /u01/app/oracle/admin/cdb2/cdump
mkdir -p /u01/app/oracle/oradata/cdb2/
rm -Rf /u01/app/oracle/oradata/cdb2/*
mkdir -p /u01/app/oracle/oradata/cdb2/pdbseed/
mkdir -p /u01/app/oracle/oradata/cdb2/pdb1/
mkdir -p /u01/app/oracle/oradata/cdb2/pdb2/
mkdir -p /u01/app/oracle/oradata/cdb2/pdb3/
rm $ORACLE_HOME/dbs/spfilecdb2.ora
rm $ORACLE_HOME/dbs/initcdb2.ora
rm $ORACLE_HOME/dbs/orapwcdb2

# Recreate an init.ora and password file.
echo "db_name=cdb2" > $ORACLE_HOME/dbs/initcdb2.ora
orapwd file=/u01/app/oracle/product/12.1.0.2/db_1/dbs/orapwcdb2 password=Password1 entries=10

# Start the clone (auxiliary) instance in NOMOUNT mode.
sqlplus / as sysdba <<EOF
STARTUP NOMOUNT;
EXIT;
EOF

# Connect to the target and auxiliary instances in RMAN using the tnsnames.ora entries.
rman target sys/Password1@cdb1 auxiliary sys/Password1@cdb2

At this point the relevant RMAN DUPLICATE command was run.

Friday, April 24, 2020

Data Pump (expdp, impdp) Enhancements in Oracle Database 12c


Oracle Data Pump was introduced in Oracle 10g. This article provides an overview of the main Data Pump enhancements in Oracle Database 12c Release 1, which are described in the sections below.

◉ NOLOGGING Option (DISABLE_ARCHIVE_LOGGING)


The TRANSFORM parameter of impdp has been extended to include a DISABLE_ARCHIVE_LOGGING option. The default setting of "N" has no effect on logging behaviour. Using a value of "Y" reduces the logging associated with tables and indexes during the import by setting their logging attribute to NOLOGGING before the data is imported and resetting it to LOGGING once the operation is complete.

TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y

The effect can be limited to a specific type of object (TABLE or INDEX) by appending the object type.

TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:TABLE

TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:INDEX

An example of its use is shown below.

$ impdp system/Password1@pdb1 directory=test_dir dumpfile=emp.dmp logfile=impdp_emp.log \
     remap_schema=scott:test transform=disable_archive_logging:y

The DISABLE_ARCHIVE_LOGGING option has no effect if the database is running in FORCE LOGGING mode.
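
A quick way to check whether the database is running in FORCE LOGGING mode before relying on this option is shown below.

SELECT force_logging FROM v$database;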

◉ LOGTIME Parameter


The LOGTIME parameter determines if timestamps should be included in the output messages from the expdp and impdp utilities.

LOGTIME=[NONE | STATUS | LOGFILE | ALL]

The allowable values are explained below.

◉ NONE : The default value, which indicates that no timestamps should be included in the output, making the output look similar to that of previous versions.
◉ STATUS : Timestamps are included in output to the console, but not in the associated log file.
◉ LOGFILE : Timestamps are included in output to the log file, but not in the associated console messages.
◉ ALL : Timestamps are included in output to the log file and console.

An example of the output is shown below.

$ expdp scott/tiger@pdb1 tables=emp directory=test_dir dumpfile=emp.dmp logfile=expdp_emp.log logtime=all

Export: Release 12.1.0.1.0 - Production on Wed Nov 20 22:11:57 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Advanced Analytics
and Real Application Testing options
20-NOV-13 22:12:09.312: Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/********@pdb1 tables=emp directory=test_dir dumpfile=emp.dmp logfile=expdp_emp.log logtime=all
20-NOV-13 22:12:13.602: Estimate in progress using BLOCKS method...
20-NOV-13 22:12:17.797: Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
20-NOV-13 22:12:18.145: Total estimation using BLOCKS method: 64 KB
20-NOV-13 22:12:30.583: Processing object type TABLE_EXPORT/TABLE/TABLE
20-NOV-13 22:12:33.649: Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
20-NOV-13 22:12:37.744: Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
20-NOV-13 22:12:38.065: Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
20-NOV-13 22:12:38.723: Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
20-NOV-13 22:12:41.052: Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
20-NOV-13 22:12:41.337: Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
20-NOV-13 22:13:38.255: . . exported "SCOTT"."EMP"                                8.75 KB      14 rows
20-NOV-13 22:13:40.483: Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
20-NOV-13 22:13:40.507: ******************************************************************************
20-NOV-13 22:13:40.518: Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
20-NOV-13 22:13:40.545:   /home/oracle/emp.dmp
20-NOV-13 22:13:40.677: Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at Wed Nov 20 22:13:40 2013 elapsed 0 00:01:36

$

◉ Export View as Table


The VIEWS_AS_TABLES parameter allows Data Pump to export the specified views as if they were tables. The table structure matches the view columns, with the data being the rows returned by the query supporting the views.

VIEWS_AS_TABLES=[schema_name.]view_name[:table_name], ...

To see it working, create a view.

CONN scott/tiger@pdb1

CREATE VIEW emp_v AS
  SELECT * FROM emp;

Now export the view using the VIEWS_AS_TABLES parameter.

$ expdp scott/tiger views_as_tables=scott.emp_v directory=test_dir dumpfile=emp_v.dmp logfile=expdp_emp_v.log

By default expdp creates a temporary table as a copy of the view, but with no data, to provide a source of the metadata for the export. Alternatively, you can specify a table with the appropriate structure. This probably only makes sense if you are using this functionality in a read-only database.

◉ Change Table Compression at Import


The TABLE_COMPRESSION_CLAUSE clause of the TRANSFORM parameter allows the table compression characteristics of the tables in an import to be altered on the fly.

TRANSFORM=TABLE_COMPRESSION_CLAUSE:[NONE | compression_clause]

The allowable values for the TABLE_COMPRESSION_CLAUSE include the following.

◉ NONE : The table compression clause is omitted, so the table takes on the compression characteristics of the tablespace.
◉ NOCOMPRESS : Disables table compression.
◉ COMPRESS : Enables basic table compression.
◉ ROW STORE COMPRESS BASIC : Same as COMPRESS.
◉ ROW STORE COMPRESS ADVANCED : Enables advanced compression, also known as OLTP compression.
◉ COLUMN STORE COMPRESS FOR QUERY : Hybrid Columnar Compression (HCC) available in Exadata and ZFS storage appliances.
◉ COLUMN STORE COMPRESS FOR ARCHIVE : Hybrid Columnar Compression (HCC) available in Exadata and ZFS storage appliances.

Compression clauses that contain whitespace must be enclosed in single or double quotes. On the command line those quotes may themselves need escaping for the operating system, so a parameter file (sketched after the example below) is often simpler.

An example of its use is shown below.

$ impdp system/Password1@pdb1 directory=test_dir dumpfile=emp.dmp logfile=impdp_emp.log \
     remap_schema=scott:test transform=table_compression_clause:compress
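
Because shell quoting rules vary by platform, a parameter file is the least error-prone way to pass a multi-word compression clause. The sketch below is illustrative only: emp_imp.par is a hypothetical file name and the clause used is just one of the values listed above. The contents of emp_imp.par would be:

directory=test_dir
dumpfile=emp.dmp
logfile=impdp_emp.log
remap_schema=scott:test
transform=table_compression_clause:"ROW STORE COMPRESS ADVANCED"

The import then references the parameter file.

$ impdp system/Password1@pdb1 parfile=emp_imp.par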

◉ Change Table LOB Storage at Import


The LOB_STORAGE clause of the TRANSFORM parameter allows the LOB storage characteristics of table columns in a non-transportable import to be altered on the fly.

TRANSFORM=LOB_STORAGE:[SECUREFILE | BASICFILE | DEFAULT | NO_CHANGE]

The allowable values for the LOB_STORAGE clause include the following.

◉ SECUREFILE : The LOBs are stored as SecureFiles.
◉ BASICFILE : The LOBs are stored as BasicFiles.
◉ DEFAULT : The LOB storage is determined by the database default.
◉ NO_CHANGE : The LOB storage matches that of the source object.

An example of its use is shown below.

$ impdp system/Password1@pdb1 directory=test_dir dumpfile=lob_table.dmp logfile=impdp_lob_table.log \
     transform=lob_storage:securefile

◉ Dumpfile Compression Options


As part of the Advanced Compression option, you can specify the COMPRESSION_ALGORITHM parameter to determine the level of compression of the export dumpfile. This is not related to the table compression discussed previously.

COMPRESSION_ALGORITHM=[BASIC | LOW | MEDIUM | HIGH]

The meanings of the available values are described below.

◉ BASIC : The same compression algorithm used in previous versions. Provides good compression, without severely impacting on performance.

◉ LOW : For use when reduced CPU utilisation is a priority over compression ratio.

◉ MEDIUM : The recommended option. Similar characteristics to BASIC, but uses a different algorithm.

◉ HIGH : Maximum available compression, but more CPU intensive.

An example of its use is shown below.

$ expdp scott/tiger tables=emp directory=test_dir dumpfile=emp.dmp logfile=expdp_emp.log \
        compression=all compression_algorithm=medium

◉ Multitenant Option Support (CDB and PDB)

Oracle Database 12c introduced the multitenant option, allowing multiple pluggable databases (PDBs) to reside in a single container database (CDB). For the most part, using data pump against a PDB is indistinguishable from using it against a non-CDB instance.

A full export (FULL=Y) taken from 11.2.0.2 or higher can be imported into a clean PDB in the same way you would expect for a regular full import (see the sketch below).
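
Only the connect string identifies the PDB. The following is a minimal sketch rather than an example from this article: it assumes a TNS alias (pdb1) pointing at the target PDB, a directory object (TEST_DIR) visible inside that PDB, and a dump file produced by a full export of the source database.

$ expdp system/Password1 full=y directory=test_dir dumpfile=full.dmp logfile=expdp_full.log

$ impdp system/Password1@pdb1 full=y directory=test_dir dumpfile=full.dmp logfile=impdp_full.log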

◉ Audit Commands


Oracle 12c allows data pump jobs to be audited by creating an audit policy.

CREATE AUDIT POLICY policy_name
  ACTIONS COMPONENT=DATAPUMP [EXPORT | IMPORT | ALL];

When this policy is applied to a user, their data pump jobs will appear in the audit trail. The following policy audits all data pump operations. The policy is applied to the SCOTT user.

CONN / AS SYSDBA
CREATE AUDIT POLICY audit_dp_all_policy ACTIONS COMPONENT=DATAPUMP ALL;
AUDIT POLICY audit_dp_all_policy BY scott;

Run the following data pump command.

$ expdp scott/tiger tables=emp directory=test_dir dumpfile=emp.dmp logfile=expdp_emp.log

Checking the audit trail shows the data pump job was audited.

-- Flush audit information to disk.
EXEC DBMS_AUDIT_MGMT.FLUSH_UNIFIED_AUDIT_TRAIL;

SET LINESIZE 200
COLUMN event_timestamp FORMAT A30
COLUMN dp_text_parameters1 FORMAT A30
COLUMN dp_boolean_parameters1 FORMAT A30

SELECT event_timestamp,
       dp_text_parameters1,
       dp_boolean_parameters1
FROM   unified_audit_trail
WHERE  audit_type = 'Datapump';

EVENT_TIMESTAMP        DP_TEXT_PARAMETERS1       DP_BOOLEAN_PARAMETERS1
------------------------------ ------------------------------ ------------------------------
14-DEC-13 09.47.40.098637 PM   MASTER TABLE:  "SCOTT"."SYS_EX MASTER_ONLY: FALSE, DATA_ONLY:
       PORT_TABLE_01" , JOB_TYPE: EXP  FALSE, METADATA_ONLY: FALSE,
       ORT, METADATA_JOB_MODE: TABLE_ DUMPFILE_PRESENT: TRUE, JOB_RE
       EXPORT, JOB VERSION: 12.1.0.0. STARTED: FALSE
       0, ACCESS METHOD: AUTOMATIC, D
       ATA OPTIONS: 0, DUMPER DIRECTO
       RY: NULL  REMOTE LINK: NULL, T
       ABLE EXISTS: NULL, PARTITION O
       PTIONS: NONE

SQL>

◉ Encryption Password Enhancements


In previous versions, data pump encryption required the ENCRYPTION_PASSWORD parameter to be entered on the command line, making password snooping relatively easy.

In Oracle 12c, the ENCRYPTION_PWD_PROMPT parameter enables encryption without requiring the password to be entered as a command line parameter. Instead, the user is prompted for the password at runtime, with their response not echoed to the screen.

ENCRYPTION_PWD_PROMPT=[YES | NO]

An example of its use is shown below.

$ expdp scott/tiger tables=emp directory=test_dir dumpfile=emp.dmp logfile=expdp_emp.log \
        encryption_pwd_prompt=yes

Export: Release 12.1.0.1.0 - Production on Sat Dec 14 21:09:11 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Encryption Password:
Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/******** tables=emp directory=test_dir
dumpfile=emp.dmp logfile=expdp_emp.log encryption_pwd_prompt=yes
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/POST_TABLE_ACTION
. . exported "SCOTT"."EMP"                               8.765 KB      14 rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
  /tmp/emp.dmp
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at Sat Dec 14 21:09:55 2013 elapsed 0 00:00:41

$

◉ Transportable Database


The TRANSPORTABLE option can now be combined with the FULL option to transport a whole database.

$ expdp system/Password1 full=Y transportable=always version=12 directory=TEMP_DIR \
   dumpfile=orcl.dmp logfile=expdporcl.log
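
The export only produces the dump file; the datafiles of the user-defined tablespaces must still be copied to the destination themselves. A possible matching import is sketched below. It is an assumption-heavy illustration: the datafile path is a placeholder, and the TRANSPORT_DATAFILES list must name every datafile of the tablespaces being transported.

$ impdp system/Password1 directory=TEMP_DIR dumpfile=orcl.dmp logfile=impdporcl.log \
   transport_datafiles='/u01/app/oracle/oradata/ORCL/users01.dbf'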

Thursday, April 23, 2020

Automatic Diagnostics Repository (ADR) Enhancements in Oracle Database 12c (ADRCI)

Oracle Database 12c, Oracle Database Tutorial and Material, Oracle Database Prep, Oracle DB Exam

The basic concepts of the Automatic Diagnostics Repository (ADR) remain unchanged in Oracle Database 12c, but there have been several notable changes to the way the diagnostics information is handled.

◉ DDL Log


Setting the ENABLE_DDL_LOGGING parameter to TRUE activates DDL logging. All DDL statements are logged into a plain text file and an XML file in the ADR location.

$ADR_BASE/diag/rdbms/{DB-name}/{SID}/log/ddl_{SID}.log
$ADR_BASE/diag/rdbms/{DB-name}/{SID}/log/ddl/log.xml

The parameter is container-specific, so you will need to enable/disable it independently in the root container and any PDBs.

The following example shows how the parameter works and what output you can expect.

CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;
ALTER SYSTEM SET ENABLE_DDL_LOGGING=TRUE;

CONN test/test@pdb1

CREATE TABLE test1 (id NUMBER);
ALTER TABLE test1 ADD CONSTRAINT test1_pk PRIMARY KEY (id);
DROP TABLE test1 PURGE;

Navigating to the ADR home for the database and into the "log" directory, we can see the plain text DDL log.

$ cd /u01/app/oracle/diag/rdbms/cdb1/cdb1/log
$ ls
ddl  ddl_cdb1.log  debug  test
$ cat ddl_cdb1.log
Wed Jun 25 10:11:11 2014
diag_adl:CREATE TABLE test1 (id NUMBER)
diag_adl:ALTER TABLE test1 ADD CONSTRAINT test1_pk PRIMARY KEY (id)
diag_adl:DROP TABLE test1 PURGE
$

Moving down into the "ddl" directory, we can see the XML log.

$ cd ddl
$ ls
log.xml
$ cat log.xml
<msg time='2014-06-25T10:11:11.578+01:00' org_id='oracle' comp_id='rdbms'
 msg_id='kpdbLogDDL:15115:2946163730' type='UNKNOWN' group='diag_adl'
 level='16' host_id='ol6-121.localdomain' host_addr='192.168.56.2'
 version='1'>
 <txt>CREATE TABLE test1 (id NUMBER)
 </txt>
</msg>
<msg time='2014-06-25T10:11:11.596+01:00' org_id='oracle' comp_id='rdbms'
 msg_id='kpdbLogDDL:15115:2946163730' type='UNKNOWN' group='diag_adl'
 level='16' host_id='ol6-121.localdomain' host_addr='192.168.56.2'>
 <txt>ALTER TABLE test1 ADD CONSTRAINT test1_pk PRIMARY KEY (id)
 </txt>
</msg>
<msg time='2014-06-25T10:11:11.628+01:00' org_id='oracle' comp_id='rdbms'
 msg_id='kpdbLogDDL:15115:2946163730' type='UNKNOWN' group='diag_adl'
 level='16' host_id='ol6-121.localdomain' host_addr='192.168.56.2'>
 <txt>DROP TABLE test1 PURGE
 </txt>
</msg>
$

Interestingly, there does not appear to be any information in the logs to indicate which container the DDL came from.

◉ Debug Log


Oracle Database 12c has thinned out some of the contents of the alert log and trace files. Some activities that may be considered unusual, but do not directly cause problems with the database, are no longer written to the alert log and/or trace files. Instead, they are written to the debug log, typically located as follows.

$ADR_BASE/diag/rdbms/{DB-name}/{SID}/log/debug

As a DBA, you need to remember to keep an eye on the contents of the debug log in addition to the alert log. The debug log is included in any packages created to send to Oracle Support.
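
For reference, such packages are created with the ADRCI IPS commands. The following is a sketch only; the problem number and destination directory are placeholders, not values from this article.

adrci> ips pack problem 1 in /tmp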

◉ New ADRCI Command (SHOW LOG)


The SHOW LOG command has been added to ADRCI to give visibility of the new DDL and debug logs. The available options are visible using the HELP SHOW LOG command.

adrci> help show log

  Usage: SHOW LOG [-l <log name>] [-p <predicate_string>]  [-term]
                    [ [-tail [num] [-f]] ]
  Purpose: Show diagnostic log messages.

  Options:
    [-p <predicate_string>]: The predicate string must be double-quoted.
    The fields in the predicate are the fields:
        ORIGINATING_TIMESTAMP         timestamp
        NORMALIZED_TIMESTAMP          timestamp
        ORGANIZATION_ID               text(65)
        COMPONENT_ID                  text(65)
        HOST_ID                       text(65)
        HOST_ADDRESS                  text(17)
        MESSAGE_TYPE                  number
        MESSAGE_LEVEL                 number
        MESSAGE_ID                    text(65)
        MESSAGE_GROUP                 text(65)
        CLIENT_ID                     text(65)
        MODULE_ID                     text(65)
        PROCESS_ID                    text(33)
        THREAD_ID                     text(65)
        USER_ID                       text(65)
        INSTANCE_ID                   text(65)
        DETAILED_LOCATION             text(161)
        UPSTREAM_COMP_ID              text(101)
        DOWNSTREAM_COMP_ID            text(101)
        EXECUTION_CONTEXT_ID          text(101)
        EXECUTION_CONTEXT_SEQUENCE    number
        ERROR_INSTANCE_ID             number
        ERROR_INSTANCE_SEQUENCE       number
        MESSAGE_TEXT                  text(2049)
        MESSAGE_ARGUMENTS             text(129)
        SUPPLEMENTAL_ATTRIBUTES       text(129)
        SUPPLEMENTAL_DETAILS          text(4000)
        PROBLEM_KEY                   text(65)

    [-l <log_name>]: Output specified log, otherwise all messages from
    all diagnostic logs under the current ADR Home are displayed.

    [-tail [num] [-f]]: Output last part of the log messages and
    output latest messages as the log grows. If num is not specified,
    the last 10 messages are displayed. If "-f" is specified, new data
    will append at the end as new log messages are generated.
    This option requires that a log name be supplied via -l option.

    [-term]: Direct results to terminal. If this option is not specified,
    the results will be open in an editor.
    By default, it will open in emacs, but "set editor" can be used
    to set other editors.

  Examples:
    show log -l debug
    show log -p "message_text like '%tablespace%'"
    show log -l ddl -tail 20

adrci>

As the examples suggest, you can display the contents of the DDL log with the following command.

adrci> show log -l ddl

ADR Home = /u01/app/oracle/diag/rdbms/cdb1/cdb1:
*************************************************************************
Output the results to file: /tmp/utsout_8165_140616_1.ado
adrci>

2014-06-25 10:11:11.578000 +01:00
CREATE TABLE test1 (id NUMBER)
ALTER TABLE test1 ADD CONSTRAINT test1_pk PRIMARY KEY (id)
DROP TABLE test1 PURGE

The contents of the debug log, if one is present, can be displayed with the following command.

adrci> show log -l debug

ADR Home = /u01/app/oracle/diag/rdbms/cdb1/cdb1:
*************************************************************************

No diagnostic log in selected home
adrci>

Wednesday, April 22, 2020

Database Migration Assistant for Unicode (DMU) in Oracle Database 12c

The Database Migration Assistant for Unicode (DMU) is listed as a 12c new feature, but it is actually a separate tool that supports a number of different database versions.

Download


Version 1.2 of the DMU is shipped with the 12c database. You can see it by running the following commands.

$ cd $ORACLE_HOME/dmu
$ sh dmu.sh

Oracle recommend you always download the latest release before starting a migration. At the time of writing, version 2.0 is available at OTN.

◉ Oracle Database Migration Assistant for Unicode: Downloads

Once you've downloaded it, it can be unzipped and run from anywhere, but I prefer to unzip it under a new directory in the ORACLE_HOME.

$ mkdir $ORACLE_HOME/dmu-2.0
$ cd $ORACLE_HOME/dmu-2.0
$ unzip /tmp/dmu-2.0.zip

Prepare


Make sure the SYS.DBMS_DUMA_INTERNAL package is installed.

$ sqlplus / as sysdba

SQL> @?/rdbms/admin/prvtdumi.plb

Library created.

Package created.

No errors.

Package body created.

No errors.
SQL>

It's a good idea to create a separate tablespace for the repository, rather than using the SYSAUX tablespace.

SQL> CREATE TABLESPACE unicode_repo DATAFILE SIZE 100M AUTOEXTEND ON NEXT 10M;

If you are running against a database version prior to 11.2.0.3, you will probably need to patch your database. You can see the compatibility matrix here and the patch here.

Migrate


During the migration process the Migration Status tab normally tells you the next step to take and provides links to jump straight to it, so you don't have to use the menus. In this article I use the menus directly, but you can always click back on the Migration Status tab between steps to see what you should do next.

We are now ready to start the migration.

$ export JAVA_HOME=$ORACLE_HOME/jdk
$ cd $ORACLE_HOME/dmu-2.0/dmu
$ sh dmu.sh

Right-click on the "Databases" node in the Navigator pane and select the "New Database Connection" option from the popup menu.


Enter the connection details and click the "Test Connection" button. Once you get a successful test, click the "Save" button to return to the previous screen.


When you return to the previous screen, expand the new connection node in the tree to start the Repository Configuration Wizard. Accept the "Install the repository in migration mode" option by clicking the "Next" button.


Accept the "ALT32UTF8" option by clicking the "Next" button.


Select the tablespace to hold the repository, then click the "Finish" button. Wait for the repository to be created, then click the "OK" button on the subsequent message box.


Right-click on the database connection and select the "Scan Database" option from the popup menu.


Click the "Next" button.


Alter any parameters as required, then click the "Next" button.


Select the objects to scan, then click the "Next" button.


Check the scan details, then click the "Finish" button.


Wait while the scan takes place. Once complete, click the "OK" button on the message box.


Right-click on the database connection and select the "Database Scan Report" option from the popup menu.


Check the report for possible problems.


Right-click on the database connection and select the "Bulk Cleansing" option from the popup menu.


Click the "Next" button.


Accept the "Migrate to character length semantics" option by clicking the "Next" button.


Decide if you want column changes to happen immediately, or during the conversion. Click the "Next" button.


Select the objects to be converted, then click the "Next" button.


Click the "Finish" button. Wait for the bulk cleansing to complete, then click the "OK" button on the resulting message box.


Check the results of the bulk cleansing.


Right-click on the database connection and select the "Convert Database" option from the popup menu.


Click the "Convert" button.


Click the "Yes" button to start the conversion.


The conversion will initiate the Scan Wizard again. Accept all the defaults by clicking the "Next" and "Finish" buttons where appropriate. Once the "Finish" button is clicked, the conversion will start. Wait for it to complete, then click the "OK" button on the final message box. You are presented with a screen showing you the outcome of the conversion.


The Migration Status tab also shows the conversion is complete.


Right-click on the database connection and select the "Configure DMU Repository" option from the popup menu.


Select the "uninstall the repository" option and click the "Next" button.


Click the "Finish" button. Once complete, click on the "OK" button on the message box


Exit the Database Migration Assistant for Unicode (DMU) using the "File > Exit" menu option.

The conversion is complete.
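
If you want to confirm the outcome independently of the DMU, the database character set can be checked with a standard dictionary query. This is not part of the DMU workflow; it simply verifies that the target character set (AL32UTF8 in this case) is now in place.

SQL> SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';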

Tidy Up


If you created a new tablespace for the migration, you can drop it now.

$ sqlplus / as sysdba

SQL> DROP TABLESPACE unicode_repo INCLUDING CONTENTS AND DATAFILES;