Wednesday, December 30, 2020

Create a Listener in Oracle Database 19c


In this guide, you will learn how to create a listener in Oracle Database 19c with NETCA and Oracle Net Manager.

A listener is a process that accepts incoming client connection requests over a network protocol (TCP/IP, TCPS, IPC, etc.) and routes them to the appropriate database instance. Creating a listener is therefore necessary if you want your users to access the database remotely.
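Whichever tool you use, the result is an entry in the listener.ora file under $ORACLE_HOME/network/admin. As a rough sketch (the listener name, host and port below are placeholders, not values from this guide):

```
LISTENER19 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = myserver.example.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )
```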

You can create a listener in Oracle Database 19c with the Network Configuration Assistant (NETCA) or Oracle Net Manager. Both tools are installed with the Oracle Database 19c software.

In the following, I will show you how to create a listener in Oracle Database 19c with these tools.

Create a listener in Oracle Database 19c with NETCA

1. Launch the Network Configuration Assistant with the netca command.


2. Select “Listener Configuration” and click “Next“.


3. Select “Add” then click “Next“.


4. Give the listener a name that distinguishes it from other configured listeners, and provide the Oracle Home user password.


5. Choose which protocols client applications will use to access the database. For TCP and secure TCP (TCPS), a port number is required.


6. Provide port numbers for TCP and TCPS protocols. Make sure port numbers are greater than 1024.
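If you want to verify a port is not already taken before entering it in the wizard, a quick check from a bash shell on the database server looks like this (port 1521 is just the conventional default, and bash's /dev/tcp pseudo-device is assumed to be available):

```shell
#!/bin/bash
# Report whether a TCP port on this host already has something listening on it.
# Relies on bash's /dev/tcp pseudo-device, so no extra tools are required.
port_in_use() {
  local port="$1"
  if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    echo "in use"
  else
    echo "free"
  fi
}

port_in_use 1521
```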


7. Finish the listener configuration.


Create a listener in Oracle Database 19c using Oracle Net Manager


To create a listener with Oracle Net Manager, follow these steps.

1. Start Oracle Net Manager from the Windows Start menu. Then click on “Listeners” and the “+” button.


2. Give a name to the listener and click “OK”.


3. Next, select “Listening Locations” and click on “Add Address” to specify the addresses the listener will accept connections on.


4. Select a protocol (e.g. “TCP/IP“) and provide the server hostname (or IP address) and a port number. You can repeat the process with the “Add Address” button to add another protocol (e.g. “TCP/IP with SSL“).


5. Configure the database to use the listener. To do so, click the listener name on the left, choose “Database Services”, then click “Add Database“.


6. Provide the database global name, the Oracle Home Directory and the SID.
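These details are written to listener.ora as a static service entry. With a hypothetical listener named LISTENER19 and a database db19c, the generated section looks roughly like this (note that databases also register with the listener dynamically, so a static entry is not always required):

```
SID_LIST_LISTENER19 =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = db19c.example.com)
      (ORACLE_HOME = C:\app\oracle\product\19.0.0\dbhome_1)
      (SID_NAME = db19c)
    )
  )
```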


7. Save the listener configuration and exit.


Start a listener in Oracle Database 19c


In order to use a listener, you will need to start it. Oracle Database comes with a simple tool to control your listeners: Listener Control, or LSNRCTL for short. You can use it to:

◉ check the status of a listener;
◉ start or reload a listener;
◉ stop a listener.


To start a listener, run start listener_name at the LSNRCTL prompt; stop listener_name and reload listener_name work the same way.
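The same operations can also be run directly from the OS shell by passing them as arguments to lsnrctl. A small sketch, with a hypothetical listener name LISTENER19 and a guard so it only prints the command when no Oracle client is on the PATH:

```shell
#!/bin/bash
# Run an LSNRCTL lifecycle operation against a named listener.
# The listener name LISTENER19 used below is hypothetical.
listener_ctl() {
  local op="$1" name="${2:-LISTENER}"
  case "$op" in
    status|start|stop|reload)
      if command -v lsnrctl >/dev/null 2>&1; then
        lsnrctl "$op" "$name"
      else
        # No Oracle client on this host; show what would run.
        echo "would run: lsnrctl $op $name"
      fi
      ;;
    *)
      echo "usage: listener_ctl {status|start|stop|reload} [name]" >&2
      return 1
      ;;
  esac
}

listener_ctl start LISTENER19
```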


You can also control the listener from Windows services. Press “Windows + R“, then type services.msc.


Look for the listener service and use the left panel to start, stop or restart it.


Monday, December 28, 2020

Upgrading to Oracle Database 19c (Non-CDB) - 11g to 19c


This article provides an overview of upgrading an existing non-CDB database to Oracle 19c. Upgrades can be very complicated, so you must always read the upgrade manual, and test thoroughly before considering an upgrade of a production environment.

This article also includes the conversion of the upgraded database to a pluggable database. You don't have to do this step if you want a non-CDB instance.

Remember, 19c is essentially 12.2.0.3, so the possible upgrade options will be similar to those described in upgrading to Oracle Database 12c.

◉ Assumptions

This article is focused on upgrading a non-CDB database. If your starting point is a database using the multitenant architecture, you should be reading one of the following articles instead.

Multitenant : Upgrading to Oracle Database 19c

Multitenant : Upgrade a PDB using Unplug/Plugin

This article assumes your source database is of a version supported for direct upgrade to 19c.

11.2.0.4, 12.1.0.2, 12.2.0.1, 18

In this example we are doing an upgrade from 11.2 to 19c. The process is very similar for all supported versions, but the pre-upgrade and post-upgrade fixup actions may vary a little.

It's important to have backups of everything before you start! Some of these steps are destructive, and if something goes wrong you have no alternative but to restore from backups and start again.

◉ Prerequisites

Make sure you have all the OS prerequisites in place. On Oracle Linux you can do this by installing the 19c preinstall package. It probably makes sense to update the remaining packages too.

yum install -y oracle-database-preinstall-19c

yum update -y

◉ Install 19c Software

You can read about the installation process in more detail here, but for this example I'll keep it brief. The following commands will perform a silent installation of the 19c software.

export ORACLE_HOME=$ORACLE_BASE/product/19.0.0/dbhome_1

export SOFTWARE_DIR=/u01/software

export ORA_INVENTORY=/u01/app/oraInventory

mkdir -p ${ORACLE_HOME}

cd $ORACLE_HOME

/bin/unzip -oq ${SOFTWARE_DIR}/LINUX.X64_193000_db_home.zip

./runInstaller -ignorePrereq -waitforcompletion -silent                        \

    -responseFile ${ORACLE_HOME}/install/response/db_install.rsp               \

    oracle.install.option=INSTALL_DB_SWONLY                                    \

    ORACLE_HOSTNAME=${ORACLE_HOSTNAME}                                         \

    UNIX_GROUP_NAME=oinstall                                                   \

    INVENTORY_LOCATION=${ORA_INVENTORY}                                        \

    SELECTED_LANGUAGES=en,en_GB                                                \

    ORACLE_HOME=${ORACLE_HOME}                                                 \

    ORACLE_BASE=${ORACLE_BASE}                                                 \

    oracle.install.db.InstallEdition=EE                                        \

    oracle.install.db.OSDBA_GROUP=dba                                          \

    oracle.install.db.OSBACKUPDBA_GROUP=dba                                    \

    oracle.install.db.OSDGDBA_GROUP=dba                                        \

    oracle.install.db.OSKMDBA_GROUP=dba                                        \

    oracle.install.db.OSRACDBA_GROUP=dba                                       \

    SECURITY_UPDATES_VIA_MYORACLESUPPORT=false                                 \

    DECLINE_SECURITY_UPDATES=true

Run the root scripts when prompted.

As a root user, execute the following script(s):

        1. /u01/app/oracle/product/19.0.0/dbhome_1/root.sh

At this point you should also patch the new Oracle home, but in this case we will forgo that step to keep things simple.

◉ Run preupgrade.jar

Download the latest "preupgrade.jar" file from MOS 884522.1. If you don't have MOS access you can skip the next step.

At the time of writing, the latest preupgrade.jar is the one shipped with 19c, so there is no need to download a newer one. It's worth checking whether this situation has changed when you do your upgrade.

Put the latest "preupgrade.jar" into the 19c Oracle home.

export ORACLE_HOME=$ORACLE_BASE/product/19.0.0/dbhome_1

cd $ORACLE_HOME/rdbms/admin

unzip -o /u01/software/preupgrade_19_cbuild_??_lf.zip

Make sure you are using the original Oracle home and run the "preupgrade.jar".

export ORACLE_SID=db11g

export ORAENV_ASK=NO

. oraenv

export ORAENV_ASK=YES

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0.4/db_1

$ORACLE_BASE/product/19.0.0/dbhome_1/jdk/bin/java -jar $ORACLE_BASE/product/19.0.0/dbhome_1/rdbms/admin/preupgrade.jar TERMINAL TEXT

Here is the output from an example run against an 11.2 database.

$ $ORACLE_BASE/product/19.0.0/dbhome_1/jdk/bin/java -jar $ORACLE_BASE/product/19.0.0/dbhome_1/rdbms/admin/preupgrade.jar TERMINAL TEXT

Report generated by Oracle Database Pre-Upgrade Information Tool Version

19.0.0.0.0 Build: 1 on 2019-02-27T18:57:23

Upgrade-To version: 19.0.0.0.0

=======================================

Status of the database prior to upgrade

=======================================

      Database Name:  DB11G

     Container Name:  Not Applicable in Pre-12.1 database

       Container ID:  Not Applicable in Pre-12.1 database

            Version:  11.2.0.4.0

     DB Patch Level:  No Patch Bundle applied

         Compatible:  11.2.0.4.0

          Blocksize:  8192

           Platform:  Linux x86 64-bit

      Timezone File:  14

  Database log mode:  NOARCHIVELOG

           Readonly:  FALSE

            Edition:  EE

  Oracle Component                       Upgrade Action    Current Status

  ----------------                       --------------    --------------

  Oracle Server                          [to be upgraded]  VALID

  JServer JAVA Virtual Machine           [to be upgraded]  VALID

  Oracle XDK for Java                    [to be upgraded]  VALID

  Oracle Workspace Manager               [to be upgraded]  VALID

  OLAP Analytic Workspace                [to be upgraded]  VALID

  Oracle Enterprise Manager Repository   [to be upgraded]  VALID

  Oracle Text                            [to be upgraded]  VALID

  Oracle XML Database                    [to be upgraded]  VALID

  Oracle Java Packages                   [to be upgraded]  VALID

  Oracle Multimedia                      [to be upgraded]  VALID

  Oracle Spatial                         [to be upgraded]  VALID

  Expression Filter                      [to be upgraded]  VALID

  Rule Manager                           [to be upgraded]  VALID

  Oracle OLAP API                        [to be upgraded]  VALID

==============

BEFORE UPGRADE

==============

  REQUIRED ACTIONS

  ================

  None

  RECOMMENDED ACTIONS

  ===================

  1.  Update NUMERIC INITIALIZATION PARAMETERS to meet estimated minimums.

      This action may be done now or when starting the database in upgrade mode

      using the 19 ORACLE HOME.

       Parameter                                 Currently  19 minimum

       ---------                                 ---------  ------------------

       processes                                       150                 300

      The database upgrade process requires certain initialization parameters

      to meet minimum values.  The Oracle upgrade process itself has minimum

      values which may be higher and are marked with an asterisk.  After

      upgrading, those asterisked parameter values may be reset if needed.

  2.  Remove the EM repository.

      - Copy the $ORACLE_HOME/rdbms/admin/emremove.sql script from the target

      19 ORACLE_HOME into the source 11.2.0.4.0 ORACLE_HOME.

      Step 1: If database control is configured, stop EM Database Control,

      using the following command

        $> emctl stop dbconsole

      Step 2: Connect to the database using the SYS account AS SYSDBA

        SET ECHO ON;

        SET SERVEROUTPUT ON;

        @emremove.sql

      Without the set echo and serveroutput commands, you will not be able to

      follow the progress of the script.

      The database has an Enterprise Manager Database Control repository.

      Starting with Oracle Database 12c, the local Enterprise Manager Database

      Control does not exist anymore. The repository will be removed from your

      database during the upgrade.  This step can be manually performed before

      the upgrade to reduce downtime.

  3.  Remove OLAP Catalog by running the 11.2.0.4.0 SQL script

      $ORACLE_HOME/olap/admin/catnoamd.sql script.

      The OLAP Catalog component, AMD, exists in the database.

      Starting with Oracle Database 12c, the OLAP Catalog (OLAP AMD) is

      desupported and will be automatically marked as OPTION OFF during the

      database upgrade if present. Oracle recommends removing OLAP Catalog

      (OLAP AMD) before database upgrade.  This step can be manually performed

      before the upgrade to reduce downtime.

  4.  Upgrade Oracle Application Express (APEX) manually before the database

      upgrade.

      The database contains APEX version 3.2.1.00.12. Upgrade APEX to at least

      version 18.2.0.00.12.

      Starting with Oracle Database Release 18, APEX is not upgraded

      automatically as part of the database upgrade. Refer to My Oracle Support

      Note 1088970.1 for information about APEX installation and upgrades.

  5.  (AUTOFIXUP) Gather stale data dictionary statistics prior to database

      upgrade in off-peak time using:

        EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

      Dictionary statistics do not exist or are stale (not up-to-date).

      Dictionary statistics help the Oracle optimizer find efficient SQL

      execution plans and are essential for proper upgrade timing. Oracle

      recommends gathering dictionary statistics in the last 24 hours before

      database upgrade.

      For information on managing optimizer statistics, refer to the 11.2.0.4

      Oracle Database Performance Tuning Guide.

  6.  (AUTOFIXUP) Directly grant ADMINISTER DATABASE TRIGGER privilege to the

      owner of the trigger or drop and re-create the trigger with a user that

      was granted directly with such. You can list those triggers using: SELECT

      OWNER, TRIGGER_NAME FROM DBA_TRIGGERS WHERE

      TRIM(BASE_OBJECT_TYPE)='DATABASE' AND OWNER NOT IN (SELECT GRANTEE FROM

      DBA_SYS_PRIVS WHERE PRIVILEGE='ADMINISTER DATABASE TRIGGER').

      There is one or more database triggers whose owner does not have the

      right privilege on the database.

      The creation of database triggers must be done by users granted with

      ADMINISTER DATABASE TRIGGER privilege. Privilege must have been granted

      directly.

  7.  (AUTOFIXUP) Gather statistics on fixed objects prior the upgrade.

      None of the fixed object tables have had stats collected.

      Gathering statistics on fixed objects, if none have been gathered yet, is

      recommended prior to upgrading.

      For information on managing optimizer statistics, refer to the 11.2.0.4

      Oracle Database Performance Tuning Guide.

  INFORMATION ONLY

  ================

  8.  To help you keep track of your tablespace allocations, the following

      AUTOEXTEND tablespaces are expected to successfully EXTEND during the

      upgrade process.

                                                 Min Size

      Tablespace                        Size     For Upgrade

      ----------                     ----------  -----------

      SYSAUX                             490 MB       738 MB

      SYSTEM                             740 MB      1175 MB

      TEMP                                29 MB       150 MB

      UNDOTBS1                            70 MB       446 MB

      Minimum tablespace sizes for upgrade are estimates.

  9.  Run $ORACLE_HOME/rdbms/admin/catnoexf.sql located in the new Oracle

      Database Oracle home to remove both EXF and RUL.

      Expression Filter (EXF) or Rules Manager (RUL) exist in the database.

      Starting with Oracle Database release 12.1, the Expression Filter (EXF)

      and Database Rules Manager (RUL) features are desupported, and are

      removed during the upgrade process.  This step can be manually performed

      before the upgrade to reduce downtime.

  10. Check the Oracle Backup and Recovery User's Guide for information on how

      to manage an RMAN recovery catalog schema.

      If you are using a version of the recovery catalog schema that is older

      than that required by the RMAN client version, then you must upgrade the

      catalog schema.

      It is good practice to have the catalog schema the same or higher version

      than the RMAN client version you are using.

  ORACLE GENERATED FIXUP SCRIPT

  =============================

  All of the issues in database DB11G

  which are identified above as BEFORE UPGRADE "(AUTOFIXUP)" can be resolved by

  executing the following

    SQL>@/u01/app/oracle/cfgtoollogs/db11g/preupgrade/preupgrade_fixups.sql

=============

AFTER UPGRADE

=============

  REQUIRED ACTIONS

  ================

  None

  RECOMMENDED ACTIONS

  ===================

  11. Upgrade the database time zone file using the DBMS_DST package.

      The database is using time zone file version 14 and the target 19 release

      ships with time zone file version 32.

      Oracle recommends upgrading to the desired (latest) version of the time

      zone file.  For more information, refer to "Upgrading the Time Zone File

      and Timestamp with Time Zone Data" in the 19 Oracle Database

      Globalization Support Guide.

  12. To identify directory objects with symbolic links in the path name, run

      $ORACLE_HOME/rdbms/admin/utldirsymlink.sql AS SYSDBA after upgrade.

      Recreate any directory objects listed, using path names that contain no

      symbolic links.

      Some directory object path names may currently contain symbolic links.

      Starting in Release 18c, symbolic links are not allowed in directory

      object path names used with BFILE data types, the UTL_FILE package, or

      external tables.

  13. (AUTOFIXUP) Gather dictionary statistics after the upgrade using the

      command:

        EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

      Oracle recommends gathering dictionary statistics after upgrade.

      Dictionary statistics provide essential information to the Oracle

      optimizer to help it find efficient SQL execution plans. After a database

      upgrade, statistics need to be re-gathered as there can now be tables

      that have significantly changed during the upgrade or new tables that do

      not have statistics gathered yet.

  14. Gather statistics on fixed objects after the upgrade and when there is a

      representative workload on the system using the command:

        EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

      This recommendation is given for all preupgrade runs.

      Fixed object statistics provide essential information to the Oracle

      optimizer to help it find efficient SQL execution plans.  Those

      statistics are specific to the Oracle Database release that generates

      them, and can be stale upon database upgrade.

      For information on managing optimizer statistics, refer to the 11.2.0.4

      Oracle Database Performance Tuning Guide.

  INFORMATION ONLY

  ================

  15. Check the Oracle documentation for the identified components for their

      specific upgrade procedure.

      The database upgrade script will not upgrade the following Oracle

      components:  OLAP Catalog,OWB

      The Oracle database upgrade script upgrades most, but not all Oracle

      Database components that may be installed.  Some components that are not

      upgraded may have their own upgrade scripts, or they may be deprecated or

      obsolete.

  ORACLE GENERATED FIXUP SCRIPT

  =============================

  All of the issues in database DB11G

  which are identified above as AFTER UPGRADE "(AUTOFIXUP)" can be resolved by

  executing the following

    SQL>@/u01/app/oracle/cfgtoollogs/db11g/preupgrade/postupgrade_fixups.sql

==================

PREUPGRADE SUMMARY

==================

  /u01/app/oracle/cfgtoollogs/db11g/preupgrade/preupgrade.log

  /u01/app/oracle/cfgtoollogs/db11g/preupgrade/preupgrade_fixups.sql

  /u01/app/oracle/cfgtoollogs/db11g/preupgrade/postupgrade_fixups.sql

Execute fixup scripts as indicated below:

Before upgrade:

Log into the database and execute the preupgrade fixups

@/u01/app/oracle/cfgtoollogs/db11g/preupgrade/preupgrade_fixups.sql

After the upgrade:

Log into the database and execute the postupgrade fixups

@/u01/app/oracle/cfgtoollogs/db11g/preupgrade/postupgrade_fixups.sql

Preupgrade complete: 2019-02-27T18:57:24

$

◉ Perform Pre-Upgrade Actions

The output from the "preupgrade.jar" lists a number of pre-upgrade recommendations. Some must be manually applied. Others are incorporated into the "preupgrade_fixups.sql" script. In the following example we run all the manual operations as well as the "preupgrade_fixups.sql" script.

# 1) Increase the processes parameter.

sqlplus / as sysdba <<EOF

alter system set processes=300 scope=spfile;

shutdown immediate;

startup;

exit;

EOF

# 2) Remove EM DB Console config.

cp $ORACLE_BASE/product/19.0.0/dbhome_1/rdbms/admin/emremove.sql $ORACLE_HOME/rdbms/admin/emremove.sql

sqlplus / as sysdba <<EOF

SET ECHO ON;

SET SERVEROUTPUT ON;

@$ORACLE_HOME/rdbms/admin/emremove.sql

exit;

EOF

# 3) Remove OLAP catalog.

sqlplus / as sysdba <<EOF

@$ORACLE_HOME/olap/admin/catnoamd.sql

exit;

EOF

# 4) Ignoring the APEX upgrade. This is a junk test instance. Normally APEX would be at latest version.

# 5) Included in AUTOFIXUP.

# 6) This will be unnecessary after removal of EM repository above.

# 7) Included in AUTOFIXUP.

# Recompile invalid objects.

sqlplus / as sysdba <<EOF

@$ORACLE_HOME/rdbms/admin/utlrp.sql

SET SERVEROUTPUT ON;

EXECUTE DBMS_PREUP.INVALID_OBJECTS;

exit;

EOF

# Run preupgrade-fixups.sql script.

sqlplus / as sysdba <<EOF

@/u01/app/oracle/cfgtoollogs/db11g/preupgrade/preupgrade_fixups.sql

exit;

EOF

◉ Upgrade the Database

With the pre-upgrade actions complete we can start the upgrade. Shutdown the source database.

sqlplus / as sysdba <<EOF

shutdown immediate;

exit;

EOF

Copy the config files from the old to the new Oracle home. You should check the contents of the "listener.ora" file to see if there are any references to the Oracle home path. If there are, amend them.
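A quick way to find and fix stale home references is grep and sed. The sketch below runs against a scratch copy of listener.ora so it is self-contained; the old and new home paths are the ones used in this article:

```shell
#!/bin/bash
# Rewrite references to the old Oracle home in a listener.ora copy.
OLD_HOME=/u01/app/oracle/product/11.2.0.4/db_1
NEW_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
FILE=listener.ora   # work on a copy, not the live file

# Build a demo file so the sketch is self-contained.
cat > "$FILE" <<EOF
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = $OLD_HOME)
      (SID_NAME = db11g)
    )
  )
EOF

if grep -q "$OLD_HOME" "$FILE"; then
  sed -i "s|$OLD_HOME|$NEW_HOME|g" "$FILE"   # GNU sed in-place edit
fi
grep ORACLE_HOME "$FILE"
```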

cp $ORACLE_HOME/network/admin/*.ora $ORACLE_BASE/product/19.0.0/dbhome_1/network/admin

# Add this to $ORACLE_BASE/product/19.0.0/dbhome_1/network/admin/sqlnet.ora

# Need to correct password versions and remove this.

cat >> $ORACLE_BASE/product/19.0.0/dbhome_1/network/admin/sqlnet.ora <<EOF

# This should be temporary while you deal with old passwords.

SQLNET.ALLOWED_LOGON_VERSION_SERVER=11

EOF

cp $ORACLE_HOME/dbs/orapwdb11g $ORACLE_BASE/product/19.0.0/dbhome_1/dbs/

cp $ORACLE_HOME/dbs/spfiledb11g.ora $ORACLE_BASE/product/19.0.0/dbhome_1/dbs/

Switch to the 19c listener.

lsnrctl stop

export ORACLE_HOME=$ORACLE_BASE/product/19.0.0/dbhome_1

export PATH=${ORACLE_HOME}/bin:$PATH

lsnrctl start

Start the database using the 19c Oracle home, ready for the upgrade.

sqlplus / as sysdba <<EOF

startup upgrade;

exit;

EOF

You can run the upgrade using either of the following commands. The second is actually just a shorthand for the former.

# Regular upgrade command.

cd $ORACLE_HOME/rdbms/admin

$ORACLE_HOME/perl/bin/perl catctl.pl catupgrd.sql

# Shorthand command.

$ORACLE_HOME/bin/dbupgrade

Here is the output from an example run against an 11.2 database.

$ $ORACLE_HOME/bin/dbupgrade

Argument list for [/u01/app/oracle/product/19.0.0/dbhome_1/rdbms/admin/catctl.pl]

For Oracle internal use only A = 0

Run in                       c = 0

Do not run in                C = 0

Input Directory              d = 0

Echo OFF                     e = 1

Simulate                     E = 0

Forced cleanup               F = 0

Log Id                       i = 0

Child Process                I = 0

Log Dir                      l = 0

Priority List Name           L = 0

Upgrade Mode active          M = 0

SQL Process Count            n = 0

SQL PDB Process Count        N = 0

Open Mode Normal             o = 0

Start Phase                  p = 0

End Phase                    P = 0

Reverse Order                r = 0

AutoUpgrade Resume           R = 0

Script                       s = 0

Serial Run                   S = 0

RO User Tablespaces          T = 0

Display Phases               y = 0

Debug catcon.pm              z = 0

Debug catctl.pl              Z = 0

catctl.pl VERSION: [19.0.0.0.0]

           STATUS: [Production]

            BUILD: [RDBMS_19.2.0.0.0_LINUX.X64_190204]

/u01/app/oracle/product/19.0.0/dbhome_1/rdbms/admin/orahome = [/u01/app/oracle/product/19.0.0/dbhome_1]

/u01/app/oracle/product/19.0.0/dbhome_1/bin/orabasehome = [/u01/app/oracle/product/19.0.0/dbhome_1]

catctlGetOraBaseLogDir = [/u01/app/oracle/product/19.0.0/dbhome_1]

Analyzing file /u01/app/oracle/product/19.0.0/dbhome_1/rdbms/admin/catupgrd.sql

Log file directory = [/tmp/cfgtoollogs/upgrade20190227131211]

catcon::set_log_file_base_path: ALL catcon-related output will be written to [/tmp/cfgtoollogs/upgrade20190227131211/catupgrd_catcon_32060.lst]

catcon::set_log_file_base_path: catcon: See [/tmp/cfgtoollogs/upgrade20190227131211/catupgrd*.log] files for output generated by scripts

catcon::set_log_file_base_path: catcon: See [/tmp/cfgtoollogs/upgrade20190227131211/catupgrd_*.lst] files for spool files, if any

Number of Cpus        = 2

Database Name         = db11g

DataBase Version      = 11.2.0.4.0

catcon::set_log_file_base_path: ALL catcon-related output will be written to [/u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/db11g/upgrade20190227131213/catupgrd_catcon_32060.lst]

catcon::set_log_file_base_path: catcon: See [/u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/db11g/upgrade20190227131213/catupgrd*.log] files for output generated by scripts

catcon::set_log_file_base_path: catcon: See [/u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/db11g/upgrade20190227131213/catupgrd_*.lst] files for spool files, if any

Log file directory = [/u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/db11g/upgrade20190227131213]

Components in [db11g]

    Installed [APEX APS CATALOG CATJAVA CATPROC CONTEXT JAVAVM ORDIM OWM SDO XDB XML XOQ]

Not Installed [DV EM MGW ODM OLS RAC WK]

Parallel SQL Process Count            = 4

------------------------------------------------------

Phases [0-107]         Start Time:[2019_02_27 19:12:15]

------------------------------------------------------

***********   Executing Change Scripts   ***********

Serial   Phase #:0    [db11g] Files:1    Time: 130s

***************   Catalog Core SQL   ***************

Serial   Phase #:1    [db11g] Files:5    Time: 62s

Restart  Phase #:2    [db11g] Files:1    Time: 0s

***********   Catalog Tables and Views   ***********

Parallel Phase #:3    [db11g] Files:19   Time: 20s

Restart  Phase #:4    [db11g] Files:1    Time: 0s

*************   Catalog Final Scripts   ************

Serial   Phase #:5    [db11g] Files:7    Time: 21s

*****************   Catproc Start   ****************

Serial   Phase #:6    [db11g] Files:1    Time: 18s

*****************   Catproc Types   ****************

Serial   Phase #:7    [db11g] Files:2    Time: 11s

Restart  Phase #:8    [db11g] Files:1    Time: 1s

****************   Catproc Tables   ****************

Parallel Phase #:9    [db11g] Files:67   Time: 29s

Restart  Phase #:10   [db11g] Files:1    Time: 0s

*************   Catproc Package Specs   ************

Serial   Phase #:11   [db11g] Files:1    Time: 83s

Restart  Phase #:12   [db11g] Files:1    Time: 2s

**************   Catproc Procedures   **************

Parallel Phase #:13   [db11g] Files:94   Time: 12s

Restart  Phase #:14   [db11g] Files:1    Time: 0s

Parallel Phase #:15   [db11g] Files:120  Time: 25s

Restart  Phase #:16   [db11g] Files:1    Time: 0s

Serial   Phase #:17   [db11g] Files:22   Time: 3s

Restart  Phase #:18   [db11g] Files:1    Time: 0s

*****************   Catproc Views   ****************

Parallel Phase #:19   [db11g] Files:32   Time: 21s

Restart  Phase #:20   [db11g] Files:1    Time: 1s

Serial   Phase #:21   [db11g] Files:3    Time: 12s

Restart  Phase #:22   [db11g] Files:1    Time: 0s

Parallel Phase #:23   [db11g] Files:25   Time: 132s

Restart  Phase #:24   [db11g] Files:1    Time: 0s

Parallel Phase #:25   [db11g] Files:12   Time: 78s

Restart  Phase #:26   [db11g] Files:1    Time: 0s

Serial   Phase #:27   [db11g] Files:1    Time: 0s

Serial   Phase #:28   [db11g] Files:3    Time: 4s

Serial   Phase #:29   [db11g] Files:1    Time: 0s

Restart  Phase #:30   [db11g] Files:1    Time: 0s

***************   Catproc CDB Views   **************

Serial   Phase #:31   [db11g] Files:1    Time: 1s

Restart  Phase #:32   [db11g] Files:1    Time: 0s

Serial   Phase #:34   [db11g] Files:1    Time: 0s

*****************   Catproc PLBs   *****************

Serial   Phase #:35   [db11g] Files:293  Time: 25s

Serial   Phase #:36   [db11g] Files:1    Time: 0s

Restart  Phase #:37   [db11g] Files:1    Time: 0s

Serial   Phase #:38   [db11g] Files:6    Time: 5s

Restart  Phase #:39   [db11g] Files:1    Time: 1s

***************   Catproc DataPump   ***************

Serial   Phase #:40   [db11g] Files:3    Time: 55s

Restart  Phase #:41   [db11g] Files:1    Time: 0s

******************   Catproc SQL   *****************

Parallel Phase #:42   [db11g] Files:13   Time: 83s

Restart  Phase #:43   [db11g] Files:1    Time: 1s

Parallel Phase #:44   [db11g] Files:11   Time: 8s

Restart  Phase #:45   [db11g] Files:1    Time: 1s

Parallel Phase #:46   [db11g] Files:3    Time: 1s

Restart  Phase #:47   [db11g] Files:1    Time: 0s

*************   Final Catproc scripts   ************

Serial   Phase #:48   [db11g] Files:1    Time: 9s

Restart  Phase #:49   [db11g] Files:1    Time: 0s

**************   Final RDBMS scripts   *************

Serial   Phase #:50   [db11g] Files:1    Time: 24s

************   Upgrade Component Start   ***********

Serial   Phase #:51   [db11g] Files:1    Time: 0s

Restart  Phase #:52   [db11g] Files:1    Time: 0s

**********   Upgrading Java and non-Java   *********

Serial   Phase #:53   [db11g] Files:2    Time: 521s

*****************   Upgrading XDB   ****************

Restart  Phase #:54   [db11g] Files:1    Time: 0s

Serial   Phase #:56   [db11g] Files:3    Time: 32s

Serial   Phase #:57   [db11g] Files:3    Time: 3s

Parallel Phase #:58   [db11g] Files:10   Time: 1s

Parallel Phase #:59   [db11g] Files:25   Time: 4s

Serial   Phase #:60   [db11g] Files:4    Time: 7s

Serial   Phase #:61   [db11g] Files:1    Time: 0s

Serial   Phase #:62   [db11g] Files:32   Time: 3s

Serial   Phase #:63   [db11g] Files:1    Time: 0s

Parallel Phase #:64   [db11g] Files:6    Time: 5s

Serial   Phase #:65   [db11g] Files:2    Time: 29s

Serial   Phase #:66   [db11g] Files:3    Time: 84s

****************   Upgrading ORDIM   ***************

Restart  Phase #:67   [db11g] Files:1    Time: 0s

Serial   Phase #:69   [db11g] Files:1    Time: 1s

Parallel Phase #:70   [db11g] Files:2    Time: 42s

Restart  Phase #:71   [db11g] Files:1    Time: 0s

Parallel Phase #:72   [db11g] Files:2    Time: 1s

Serial   Phase #:73   [db11g] Files:2    Time: 1s

*****************   Upgrading SDO   ****************

Restart  Phase #:74   [db11g] Files:1    Time: 0s

Serial   Phase #:76   [db11g] Files:1    Time: 69s

Serial   Phase #:77   [db11g] Files:2    Time: 2s

Restart  Phase #:78   [db11g] Files:1    Time: 0s

Serial   Phase #:79   [db11g] Files:1    Time: 20s

Restart  Phase #:80   [db11g] Files:1    Time: 0s

Parallel Phase #:81   [db11g] Files:3    Time: 80s

Restart  Phase #:82   [db11g] Files:1    Time: 1s

Serial   Phase #:83   [db11g] Files:1    Time: 5s

Restart  Phase #:84   [db11g] Files:1    Time: 0s

Serial   Phase #:85   [db11g] Files:1    Time: 10s

Restart  Phase #:86   [db11g] Files:1    Time: 0s

Parallel Phase #:87   [db11g] Files:4    Time: 99s

Restart  Phase #:88   [db11g] Files:1    Time: 0s

Serial   Phase #:89   [db11g] Files:1    Time: 0s

Restart  Phase #:90   [db11g] Files:1    Time: 0s

Serial   Phase #:91   [db11g] Files:2    Time: 8s

Restart  Phase #:92   [db11g] Files:1    Time: 1s

Serial   Phase #:93   [db11g] Files:1    Time: 0s

Restart  Phase #:94   [db11g] Files:1    Time: 0s

*******   Upgrading ODM, WK, EXF, RUL, XOQ   *******

Serial   Phase #:95   [db11g] Files:1    Time: 37s

Restart  Phase #:96   [db11g] Files:1    Time: 0s

***********   Final Component scripts    ***********

Serial   Phase #:97   [db11g] Files:1    Time: 1s

*************   Final Upgrade scripts   ************

Serial   Phase #:98   [db11g] Files:1    Time: 26s

*******************   Migration   ******************

Serial   Phase #:99   [db11g] Files:1    Time: 68s

***   End PDB Application Upgrade Pre-Shutdown   ***

Serial   Phase #:100  [db11g] Files:1    Time: 0s

Serial   Phase #:101  [db11g] Files:1    Time: 0s

Serial   Phase #:102  [db11g] Files:1    Time: 140s

*****************   Post Upgrade   *****************

Serial   Phase #:103  [db11g] Files:1    Time: 45s

****************   Summary report   ****************

Serial   Phase #:104  [db11g] Files:1    Time: 1s

***   End PDB Application Upgrade Post-Shutdown   **

Serial   Phase #:105  [db11g] Files:1    Time: 0s

Serial   Phase #:106  [db11g] Files:1    Time: 0s

Serial   Phase #:107  [db11g] Files:1     Time: 75s

------------------------------------------------------

Phases [0-107]         End Time:[2019_02_27 19:50:36]

------------------------------------------------------

Grand Total Time: 2303s

 LOG FILES: (/u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/db11g/upgrade20190227131213/catupgrd*.log)

Upgrade Summary Report Located in:

/u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/db11g/upgrade20190227131213/upg_summary.log

Grand Total Upgrade Time:    [0d:0h:38m:23s]

$

The database is shut down at the end of the upgrade process, so you need to start it before moving on to the post-upgrade actions.

sqlplus / as sysdba <<EOF

startup

exit;

EOF

◉ Perform Post-Upgrade Actions

The output from the "preupgrade.jar" lists a number of post-upgrade recommendations. Some must be manually applied. Others are incorporated into the "postupgrade_fixups.sql" script. In the following example we run all the manual operations as well as the "postupgrade_fixups.sql" script.

# 11) Time zone file.

sqlplus / as sysdba <<EOF

-- Check current settings.

SELECT * FROM v$timezone_file;

SHUTDOWN IMMEDIATE;

STARTUP UPGRADE;

-- Begin upgrade to the latest version.

SET SERVEROUTPUT ON

DECLARE

  l_tz_version PLS_INTEGER;

BEGIN

  l_tz_version := DBMS_DST.get_latest_timezone_version;

  DBMS_OUTPUT.put_line('l_tz_version=' || l_tz_version);

  DBMS_DST.begin_upgrade(l_tz_version);

END;

/

SHUTDOWN IMMEDIATE;

STARTUP;

-- Do the upgrade.

SET SERVEROUTPUT ON

DECLARE

  l_failures   PLS_INTEGER;

BEGIN

  DBMS_DST.upgrade_database(l_failures);

  DBMS_OUTPUT.put_line('DBMS_DST.upgrade_database : l_failures=' || l_failures);

  DBMS_DST.end_upgrade(l_failures);

  DBMS_OUTPUT.put_line('DBMS_DST.end_upgrade : l_failures=' || l_failures);

END;

/

-- Check new settings.

SELECT * FROM v$timezone_file;

COLUMN property_name FORMAT A30

COLUMN property_value FORMAT A20

SELECT property_name, property_value

FROM   database_properties

WHERE  property_name LIKE 'DST_%'

ORDER BY property_name;

exit;

EOF

# 12) Ignored

# 13) AUTOFIXUP

# 14) Gather fixed object stats.

sqlplus / as sysdba <<EOF

EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

exit;

EOF

# AUTOFIXUP

sqlplus / as sysdba <<EOF

@/u01/app/oracle/cfgtoollogs/db11g/preupgrade/postupgrade_fixups.sql

exit;

EOF

Assuming you didn't hit any problems along the way, your database is upgraded and ready to go now. The following steps are only necessary if you want to convert the non-CDB instance to a pluggable database (PDB).

◉ Create New Container Database (CDB)

There are a number of considerations when creating a container database, but for this example we will keep it simple. The command below creates a new container database (CDB) with no user-defined pluggable databases (PDBs).

# Create new instance.

export SYS_PASSWORD=SysPassword1

export ORACLE_SID=cdb1

dbca -silent -createDatabase                                                    \

     -templateName General_Purpose.dbc                                          \

     -gdbname ${ORACLE_SID} -sid ${ORACLE_SID} -responseFile NO_VALUE           \

     -characterSet AL32UTF8                                                     \

     -sysPassword ${SYS_PASSWORD}                                               \

     -systemPassword ${SYS_PASSWORD}                                            \

     -createAsContainerDatabase true                                            \

     -numberOfPDBs 0                                                            \

     -databaseType MULTIPURPOSE                                                 \

     -memoryMgmtType auto_sga                                                   \

     -totalMemory 2000                                                          \

     -storageType FS                                                            \

     -datafileDestination "/u02/oradata/"                                       \

     -redoLogFileSize 500                                                       \

     -emConfiguration NONE                                                      \

     -ignorePreReqs

For a proper conversion we would have to make sure the CDB had all necessary configuration in place, including options and initialisation parameters. We will ignore the rest of the instance setup to keep it simple.

◉ Convert Non-CDB to PDB

Oracle Database 19c, Oracle Database Exam Prep, Oracle Database Preparation
Describe the non-CDB instance and shut it down.

export ORACLE_SID=db11g

sqlplus / as sysdba <<EOF

SHUTDOWN IMMEDIATE;

STARTUP OPEN READ ONLY;

BEGIN

  DBMS_PDB.DESCRIBE(

    pdb_descr_file => '/u01/software/db11g.xml');

END;

/

SHUTDOWN IMMEDIATE;

exit;

EOF

Create a PDB using the description of the non-CDB, moving the datafiles into the new location. Remember to run the "noncdb_to_pdb.sql" script to clean up before opening the pluggable database.

export ORACLE_SID=cdb1

mkdir -p /u02/oradata/CDB1/pdb1/

sqlplus / as sysdba <<EOF

CREATE PLUGGABLE DATABASE pdb1 USING '/u01/software/db11g.xml'

  MOVE

  FILE_NAME_CONVERT = ('/u02/oradata/db11g/', '/u02/oradata/CDB1/pdb1/');

ALTER SESSION SET CONTAINER=pdb1;

@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

ALTER PLUGGABLE DATABASE OPEN;

ALTER PLUGGABLE DATABASE SAVE STATE;

exit;

EOF

◉ Final Steps

We need to clean up the remains of the non-CDB instance. We can use the DBCA to do this.

dbca -silent -deleteDatabase -sourceDB db11g -sysDBAUserName sys -sysDBAPassword ${SYS_PASSWORD}

We can delete any remaining files.

rm /u01/app/oracle/product/11.2.0.4/db_1/dbs/*db11g*

rm /u01/app/oracle/product/19.0.0/dbhome_1/dbs/*db11g*

rm -Rf $ORACLE_BASE/admin/db11g

rm -Rf /u01/app/oracle/diag/rdbms/db11g

rm -Rf /u02/oradata/db11g

Edit the "/etc/oratab" file as required.
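The edit itself is a one-line change per instance: each oratab entry maps an instance name to its Oracle home. A minimal sketch, demonstrated on a scratch copy rather than the live file (point ORATAB at /etc/oratab for the real edit):

```shell
# Work on a scratch copy for demonstration; use ORATAB=/etc/oratab for real.
ORATAB=$(mktemp)
printf '%s\n' \
  'db11g:/u01/app/oracle/product/11.2.0.4/db_1:N' \
  'cdb1:/u01/app/oracle/product/19.0.0/dbhome_1:N' > "$ORATAB"

# In-place upgrade path: repoint db11g at the new 19c home.
sed -i 's|:/u01/app/oracle/product/11.2.0.4/db_1:|:/u01/app/oracle/product/19.0.0/dbhome_1:|' "$ORATAB"

# PDB conversion path (as in this article): db11g was deleted, so drop its entry.
# sed -i '/^db11g:/d' "$ORATAB"

cat "$ORATAB"
```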

Source: oracle-base.com

Friday, December 25, 2020

Introduction of Relational Model and Codd Rules in DBMS

Oracle Database Tutorial and Material, Database Exam Prep, Database Learning, Database Career

Terminology

Relational Model: The relational model represents data in the form of relations, or tables.

Relational Schema: A schema represents the structure of a relation. For example, the relational schema of the STUDENT relation can be represented as:

STUDENT (STUD_NO, STUD_NAME, STUD_PHONE, STUD_STATE, STUD_COUNTRY, STUD_AGE)

Relational Instance: The set of values present in a relation at a particular instant of time is known as a relational instance, as shown in Table 1 and Table 2.

Attribute: Each relation is defined in terms of some properties, each of which is known as an attribute. For example, STUD_NO, STUD_NAME, etc. are attributes of the relation STUDENT.

Domain of an attribute: The set of possible values an attribute can take in a relation is called its domain. For example, the domain of STUD_AGE can be 18 to 40.

Tuple: Each row of a relation is known as a tuple. For example, the STUDENT relation given below has 4 tuples.

NULL values: The values of some attributes for some tuples may be unknown, missing, or undefined; these are represented by NULL. Two NULL values in a relation are considered different from each other.
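These terms map directly onto SQL DDL. The following is a hypothetical sketch of the STUDENT relation (column types and the sample row are illustrative, not taken from Table 1), with the domain of STUD_AGE enforced as a check constraint:

```sql
-- Hypothetical DDL for the STUDENT relation; types and values are illustrative.
CREATE TABLE STUDENT (
  STUD_NO      NUMBER        PRIMARY KEY,                   -- key attribute
  STUD_NAME    VARCHAR2(50)  NOT NULL,                      -- attribute
  STUD_PHONE   VARCHAR2(15),                                -- may be NULL (unknown/missing)
  STUD_STATE   VARCHAR2(30),
  STUD_COUNTRY VARCHAR2(30),
  STUD_AGE     NUMBER CHECK (STUD_AGE BETWEEN 18 AND 40)    -- domain of STUD_AGE
);

-- Each INSERT adds one tuple; the current set of tuples is the relational instance.
INSERT INTO STUDENT VALUES (1, 'RAM', '9455123451', 'HARYANA', 'INDIA', 20);
```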

Table 1 and Table 2 represent relational model having two relations STUDENT and STUDENT_COURSE.

Oracle Database Tutorial and Material, Database Exam Prep, Database Learning, Database Career

Codd Rules


Codd rules were proposed by E.F. Codd and should be satisfied by any relational database management system.

1. Foundation Rule: For any system that is advertised as, or claimed to be, a relational database management system, that system must be able to manage databases entirely through its relational capabilities.

2. Information Rule: All data stored in the relational model must be a value of some cell of a table.

3. Guaranteed Access Rule: Every data element must be accessible by a combination of table name, primary key value, and attribute name.

4. Systematic Treatment of NULL Values: A NULL value in the database must only correspond to missing, unknown, or not applicable values.

5. Active Online Catalog: The structure of the database must be stored in an online catalog that can be queried by authorized users.

6. Comprehensive Data Sub-language Rule: The database must be accessible via a language that supports data definition, data manipulation, and transaction management operations.

7. View Updating Rule: The different views created for various purposes should be automatically updatable by the system.

8. High-level Insert, Update and Delete Rule: The relational model should support insert, update, and delete operations at the set level rather than only row by row. Set operations like union, intersection, and minus should also be supported.

9. Physical Data Independence: Any change in the physical location of a table should not require modification at the application level.

10. Logical Data Independence: Any change in the logical or conceptual schema of a table should not require modification at the application level. For example, merging two tables into one should not affect the applications accessing them; this is one of the most difficult rules to satisfy.

11. Integrity Independence: Integrity constraints modified at the database level should not require modification at the application level.

12. Distribution Independence: The distribution of data over various locations should not be visible to end-users.

13. Non-Subversion Rule: Low-level access to the data should not be able to bypass the integrity rules to change the data.
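The Guaranteed Access Rule (rule 3) is easy to see in SQL terms: any single value is addressable by the combination of table name, primary key value, and attribute name. A hypothetical query against the STUDENT relation from the terminology section:

```sql
-- Guaranteed Access Rule: locate one value via
-- table name (STUDENT) + primary key (STUD_NO = 1) + attribute name (STUD_NAME).
SELECT STUD_NAME
FROM   STUDENT
WHERE  STUD_NO = 1;
```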

Wednesday, December 23, 2020

A Simple Guide for Migration of data to Autonomous Database using EM

Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides

Moving the workloads of your on-premises database to an Oracle Cloud gives you a competitive advantage today. Databases in Oracle Cloud leverage Oracle's Generation 2 infrastructure, providing maximum performance, better pricing, and the most secure environment for your enterprise workloads. OCI also provides the Oracle Autonomous Database, the industry’s first and only self-driving, self-securing, and self-repairing database.

Oracle continues to extend the capabilities of our industry-leading Enterprise Manager offering to support our customers' journey to cloud. We are excited to introduce you to the Database Migration Workbench, which will help you migrate on-premises databases to Oracle Autonomous Database. This blog will guide you through the various steps of the migration process.

Overview of Database Migration Workbench

Database Migration Workbench simplifies the migration of an on-premises database to an Autonomous Database with a unified guided flow. Figure 1 (below) shows an overview of the Database Migration Workbench. It uses Data Pump to migrate data to an Autonomous Database; a DBA can use Data Pump regardless of the endian format and character set of the on-premises database. The source database can be a Single-Instance Database (SIDB), a RAC Database (RACDB), or a Pluggable Database (PDB), and the destination can be an Autonomous Data Warehouse (ADW) or Autonomous Transaction Processing (ATP) database. As part of the migration, EM takes an export dump of the source data, uploads it to an object storage bucket, and imports it into the Autonomous Database.

Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides
Figure 1 - Overview of Database Migration Workbench

Prerequisites for Database Migration


1. Connectivity between the on-premises datacenter and OCI services must be in place; it can be established with a FastConnect or an IPSec VPN connection.

2. For a full database export, or to export any schema other than the user's own, create an on-premises database user with privileges to export data, i.e., GRANT DATAPUMP_EXP_FULL_DATABASE TO <user_name>. For example:

◉ Execute the following SQL if the source database is an SIDB or a RACDB:

CREATE USER MIGUSER IDENTIFIED BY MPass_123;
GRANT CREATE SESSION TO MIGUSER;
GRANT CONNECT TO MIGUSER;
GRANT SELECT ANY TABLE TO MIGUSER;
GRANT DATAPUMP_EXP_FULL_DATABASE TO MIGUSER;
ALTER USER MIGUSER QUOTA UNLIMITED ON USERS;

◉ Execute the following SQL if the source database is a PDB:

ALTER SESSION SET CONTAINER=SALESPDB;
CREATE USER MIGUSER IDENTIFIED BY MPass_123 CONTAINER=current;
GRANT CREATE SESSION TO MIGUSER;
GRANT CONNECT TO MIGUSER;
GRANT SELECT ANY TABLE TO MIGUSER;
GRANT DATAPUMP_EXP_FULL_DATABASE TO MIGUSER;
ALTER USER MIGUSER QUOTA UNLIMITED ON USERS;

3. Create a directory object with enough disk space to accommodate the data to be migrated.

CREATE OR REPLACE DIRECTORY MIG_DMP_DIR AS '/u01/mig_data';
GRANT READ, WRITE ON DIRECTORY MIG_DMP_DIR TO MIGUSER;

4. Create an "Auth Token"-based credential in the Autonomous Database and set it as the default credential; it is required for authentication between the Autonomous Database and OCI Object Storage. Follow these steps:

a.  Log in to your OCI console and generate an Auth Token.

b.  Log in to your Autonomous Database and create a credential. For example:

     ◉ For Autonomous Database - Dedicated

SET DEFINE OFF
BEGIN DBMS_CREDENTIAL.CREATE_CREDENTIAL(
     credential_name => 'ADB_CRED',
     username => 'jhon_doe',
     password => 'auth_token');
END;
/

     ◉ For Autonomous Database - Serverless

SET DEFINE OFF
BEGIN DBMS_CLOUD.CREATE_CREDENTIAL(
     credential_name => 'ADB_CRED',
     username => 'jhon_doe',
     password => 'auth_token');
END;
/

5. Create a new object storage bucket, or identify an existing one, in the region of your OCI tenancy where the Autonomous Database resides.

a.  The "Storage" tier setting for the bucket must be Standard.
b.  The "Visibility" setting for the bucket must be Private.

6. If not already discovered, discover both the source and destination databases in Enterprise Manager.  

7. cURL should be available on the host from which the export dumps will be uploaded to Object Storage.
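A quick sanity check for the cURL prerequisite: confirm that cURL is installed on the intended upload host (the connectivity probe is commented out and uses a sample endpoint, not one from your tenancy):

```shell
# Check that cURL is installed on the upload host.
if command -v curl >/dev/null; then
  echo "curl present: $(curl --version | head -1)"
else
  echo "curl missing - install it before submitting the migration activity"
fi

# Optional: probe the object storage endpoint for your region
# (us-ashburn-1 is only a sample region identifier).
# curl -sS -o /dev/null -w '%{http_code}\n' https://objectstorage.us-ashburn-1.oraclecloud.com
```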
 

Creating a Data Migration activity


Navigate to the Database Migration Workbench to create a migration activity. You can get there either from the global context or from a target context.

Launch Database Migration Workbench from a global context

Navigate to Enterprise > Migration and Consolidation > Database Migration Workbench

Launch Database Migration Workbench from target context

Go to the target home page (the target can be an SIDB, RACDB, PDB, or an Autonomous Database), then navigate to Oracle Database > Migration.

Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides
Figure 2 - Navigating to Database Migration Workbench
 
From the Database Migration Workbench dashboard, click the "Create Migration Activity" button (as shown in Figure 3).

Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides
Figure 3 - Database Migration Workbench Dashboard
 

Source, Destination, and Object Storage


Selecting a source and destination database

You can either perform a full database migration or migrate specific schemas. Select the source database and destination database. The source database can be an SIDB, a RACDB, or a PDB. The destination database can be an ATP or an ADW.

Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides
Figure 4a - Select source and destination for migration
 
Selecting targets

Source and destination database targets can be selected by clicking the magnifying glass icon in the Select Source/Destination Database pane. Figure 4b (below) shows the target selector for the source database.

Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides
Figure 4b - Select source database
 
Upload Migration Tools

Database Migration Workbench makes use of two tools: Oracle Instant Client and the Autonomous Database Schema Analyzer. The latest version of these tools is downloaded automatically via the "Migration Prerequisite Download Job" and saved in the Software Library. If these utilities are not available in the Software Library for your operating system, you will need to download them manually. Manual downloads are usually required when the latest version of the migration tools is not compatible with your operating system version. The following binaries need to be uploaded:

1. Instant Client Files

Instant client basic zip package
Instant client SQL*Plus zip package
Instant client tools zip package

2. Autonomous Database Schema Advisor file

**Enterprise Manager automatically downloads the tools when it has a direct internet connection or a MOS proxy setup.
 
Adding Source, Destination, and Object Storage details

Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides
Figure 5 - Adding source and destination details

Source

1. Database Credentials - Select a named credential of an on-premises database user with Data Pump export privileges. This region gets auto-populated if the preferred credential is set.

◉ To create database credentials, navigate to Setup > Security > Named Credentials and click Create. Enter a credential name, select "Database Instance" as the authenticating target type and "Database Credentials" as the credential type, enter the username, password, and role, and click Save.

◉ Database Credential can also be created via EMCLI as follows:

emcli create_named_credential \
-auth_target_type=oracle_database \
-cred_type=DBCreds \
-cred_scope=Global \
-cred_name=SRC_DB_CREDS \
-attributes="DBUserName:db_user;DBPassword:Db_password;DBRole:Normal"

2. Host Credentials - Select a named credential for the database host user. This region gets auto-populated if the preferred credential is set.

◉ To create host credentials, navigate to Setup > Security > Named Credentials and click Create. Enter a credential name, select "Host" as the authenticating target type and "Host Credentials" as the credential type, enter the username and password, and click Save.

◉ Host Credential can also be created via EMCLI as follows:

emcli create_named_credential \
-auth_target_type=host \
-cred_type=HostCreds \
-cred_scope=Global \
-cred_name=SRC_HOST_CREDS \
-attributes="HostUserName:host_user;HostPassword:Host_password"

Destination

1. Database Credentials - Select a named credential of an Autonomous Database user with Data Pump import privileges. This region gets auto-populated if the preferred credential is set.

◉ To create database credentials, navigate to Setup > Security > Named Credentials and click Create. Enter a credential name, select "Autonomous Transaction Processing" or "Autonomous Data Warehouse" as the authenticating target type and "Database Credentials" as the credential type, enter the username, password, and role, and click Save.

◉ Database credential can also be created via EMCLI as follows:

emcli create_named_credential \
-auth_target_type=oracle_cloud_atp \
-cred_type=DBCreds \
-cred_scope=Global \
-cred_name=DEST_ADB_CREDS \
-attributes="DBUserName:adb_user;DBPassword:Adb_pass123;DBRole:Normal"

**Provide auth_target_type=oracle_cloud_adw if the destination is Autonomous Data Warehouse

2. Agent Host Credentials - Select a named credential for the destination ADB's monitoring agent user. This region gets auto-populated if the preferred credential is set.

◉ To create host credentials, navigate to Setup > Security > Named Credentials and click Create. Enter a credential name, select "Host" as the authenticating target type and "Host Credentials" as the credential type, enter the username and password, and click Save.

◉ Host Credential can also be created via EMCLI as follows:

emcli create_named_credential \
-auth_target_type=host \
-cred_type=HostCreds \
-cred_scope=Global \
-cred_name=DEST_HOST_CREDS \
-attributes="HostUserName:host_user;HostPassword:Host_password"

3. Service Name - Select the Autonomous Database service name to use for the migration. The "high" service is recommended for data migration.

Action

◉ Migration Method - The migration method for on-premises to Autonomous Database migration is always "Data Pump".

◉ Recompile invalid objects after migration - If this checkbox is selected, all invalid objects in the migrated schemas are recompiled.

◉ Compare performance after migration - If this checkbox is selected, SQL Performance Analyzer is used to compare performance before and after the migration. You can see a detailed SQL Performance Analyzer report once the migration activity is complete.

◉ Source Data Pump Directory - Select a directory object in the source database on which the selected database user has read and write privileges. If the source database is a RAC target, the Data Pump directory must be shared across all RAC nodes. The Data Pump directory cannot be an ASM-based location.

◉ Encryption Password - Provide an encryption password for the export dumps. The export dumps generated from the source database are always encrypted.

◉ Cloud Storage URL - Provide the object storage URL for the region where your Autonomous Database resides. The Cloud Storage URL has the following format:

objectstorage.[Region Identifier].oraclecloud.com

You can get the value of the region identifier from the OCI console: open the "Regions" menu and click "Manage Regions". You will find the region identifier in the list of regions.
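With the region identifier in hand, the Cloud Storage URL is a simple substitution. A sketch (us-ashburn-1 is only a sample identifier; substitute your own):

```shell
# Build the Cloud Storage URL from a region identifier.
REGION_ID="us-ashburn-1"   # sample; use your tenancy's region identifier
CLOUD_STORAGE_URL="objectstorage.${REGION_ID}.oraclecloud.com"
echo "$CLOUD_STORAGE_URL"  # prints objectstorage.us-ashburn-1.oraclecloud.com
```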

Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides
Figure 6 - Finding region identifier from OCI console

◉ Bucket Name - Enter the object storage bucket name (this bucket should reside in the same region as the destination ADB).

◉ OCI Credential - Select a named credential for the Autonomous Database of type Oracle Cloud Infrastructure.

     ◉ To create OCI credentials, navigate to Setup > Security > Named Credentials and click Create. Enter a credential name, select "Autonomous Transaction Processing" or "Autonomous Data Warehouse" as the authenticating target type and "Oracle Cloud Infrastructure Credential" as the credential type, and enter the Tenancy OCID, User OCID, Private Key, Fingerprint, and Passphrase.

     ◉ An OCI credential can also be created via EMCLI as follows:

emcli create_named_credential \
-cred_name=OCI_CRED \
-auth_target_type="oracle_cloud_atp" \
-cred_type="OCICredential" \
-cred_scope=Global \
-target_type="Autonomous Transaction Processing" \
-attributes="TenancyOCID#<OCI Tenancy OCID>;UserOCID#<OCI User OCID>;PrivateKey#<Private_key>;FingerPrint#<Fingerprint>;PassPhrase#<Pass_phrase>" \
-subseparator=attributes="#"

**Provide -auth_target_type=oracle_cloud_adw if the destination is Autonomous Data Warehouse
**Provide -target_type="Autonomous Data Warehouse" if the destination is Autonomous Data Warehouse
**Steps to set up SSH keys can be found here
**The newlines in <Private_key> should be delimited as \r\n when used with emcli
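The last note above (delimiting the private key's newlines as \r\n) can be handled with a small awk one-liner. A sketch using a placeholder key file (key.pem and its contents are stand-ins for your real key):

```shell
# Placeholder stand-in for your real PEM private key file.
printf '%s\n' '-----BEGIN PRIVATE KEY-----' 'MIIEvQIBADANBg...' '-----END PRIVATE KEY-----' > key.pem

# Join the lines with literal \r\n separators, as emcli expects for <Private_key>.
PRIVATE_KEY=$(awk 'NR>1 {printf "\\r\\n"} {printf "%s", $0}' key.pem)
echo "$PRIVATE_KEY"
```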

◉ Database OCI Auth Credential - Select the Auth Token-based credential created in the Autonomous Database.
 
Named credentials may be scoped as target-specific or global. All the examples above create named credentials with global scope. More details on named credential creation can be found here.

Customizing the Activity


Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides
Figure 7 - Customizing export/import options, STS, and pre/post custom scripts

Export and Import Options - You can provide the applicable parameters for the export and import commands to be used during the migration. The defaults are pre-populated with the values required for migration to an Autonomous Database.

A few options are added implicitly by Database Migration Workbench, as shown in Figure 8 below.

Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides
Figure 8 - Implicitly added options for export and import

Compare Performance After Migration - SQL Performance Analyzer will be used to compare performance before and after migration. You can choose an existing SQL Tuning Set (STS), or Database Migration Workbench will create a new STS from AWR.

Pre/Post Script - Executes pre- or post-migration scripts stored in the Software Library. The scripts should be uploaded to the Software Library under <ROOT>/Database Migration/User Defined Scripts as a Directive or Generic Component. The following custom scripts are supported:

Pre-Migration Script - a shell (SH) script executed before migration on the on-premises database host.
Post-Migration Script - a shell (SH) script executed after migration on the ADB monitoring agent's host.
Post-Migration SQL Script - a SQL script executed after migration on the ADB.
 
Upload Options - Database Migration Workbench uses the system cURL to upload the export dumps to Object Storage. By default, the source database host performs the upload, but you can select a different agent host from which the connection to object storage will be made. If the dump location is not shared with the upload agent host, the dump files are copied to a staging directory.

Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides
Figure 9a - Flow of data from an on-premises database to Oracle Autonomous Database when data is uploaded from the source host

Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides
Figure 9b - Flow of data from an on-premises database to Oracle Autonomous Database when data is uploaded from a different host

Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides
Figure 10 - Inputs for uploading data from different host
 

Analyze Source Data


Based on the provided inputs, Analyze Source generates a report describing the on-premises database objects that will not be successfully migrated to the Autonomous Database. It makes use of the Schema Analyzer tool uploaded in the first step. Click "Analyze Source" on the Review & Submit page. You must select either the named credential of a user where the schema advisor is installed, or a credential with SYSDBA privileges. If a SYSDBA credential is selected, a temporary schema advisor user is created and then deleted after the advisor report is generated.

Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides
Figure 11 - Report generated by Analyze Source
 
Validate Inputs Before Submission

On the Review & Submit page, click the "Validate" button to verify all the inputs for the source, destination, object storage, and export/import options provided in the activity. Validation is not mandatory, but it is advisable to validate the inputs before submitting the activity. Figure 12 shows the list of validations executed as part of this step.

Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides
Figure 12 - Validations

Post Migration Activities


Once the migration activity is complete, you can compare performance, view an analysis of the activity, view its detailed logs, or clean up the dumps from the source and object storage. All of these operations are available from the activity's Actions menu.

Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides
Figure 13 - Various options of Actions Menu
 
Cleanup - Deletes the export dumps from the source location and object storage.

View Details - Redirects to the deployment activity page, where you get step-by-step progress details and logs for the corresponding migration activity.

View Analysis - This page analyzes the validation, export, and import operations executed by the migration activity. It summarizes the job steps and then drills down into the details of individual object errors.

Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides
Figure 14 - Analyzing a Migration Activity
 
Compare Performance - Shows the SQL Performance Analyzer report of the migration activity.

Marking an activity as "Complete" - Importing data into an Autonomous Database might result in a few import errors. In such situations, the migration activity ends with the status "Completed with errors". You can analyze the migration (using View Analysis) and, if the errors reported during import are ignorable, mark the activity as complete.

Oracle Database Tutorial and Material, Oracle Database Exam Prep, Oracle Database Career, Oracle Database Guides
Figure 15 - Marking an activity as complete