Friday, April 29, 2022

Multitenant : Unplug/Plugin PDB Upgrade to Oracle Database 21c (AutoUpgrade)


This article provides an overview of upgrading an existing PDB to Oracle 21c on the same server using AutoUpgrade Unplug/Plugin. Upgrades can be very complicated, so you must always read the upgrade manual, and test thoroughly before considering an upgrade of a production environment.

◉ Assumptions

This article is focused on upgrading a pluggable database using unplug/plugin. If you want to upgrade a CDB and all its PDBs directly, follow the article below instead.

- Multitenant : Upgrade to Oracle Database 21c (AutoUpgrade)

This article assumes your source database is a version supported for direct upgrade to 21c: 19c, 18c, or 12.2.

In this example we are doing an upgrade from 19c multitenant to 21c. The process is very similar for all supported versions.

It's important to have backups of everything before you start! Some of these steps are destructive, and if something goes wrong you have no alternative but to restore from backups and start again.

◉ Prerequisites

Make sure you have all the OS prerequisites in place. On Oracle Linux you can do this by installing the 21c preinstall package. It probably makes sense to update the remaining packages as well.

yum install -y oracle-database-preinstall-21c

yum update -y

◉ Install 21c Software

You can read about the installation process in more detail here (OL7, OL8), but for this example we'll keep it brief. The following commands will perform a silent installation of the 21c software.

export ORACLE_HOME=$ORACLE_BASE/product/21.0.0/dbhome_1

export SOFTWARE_DIR=/vagrant/software

export ORA_INVENTORY=/u01/app/oraInventory

mkdir -p ${ORACLE_HOME}

cd $ORACLE_HOME

/bin/unzip -oq ${SOFTWARE_DIR}/LINUX.X64_213000_db_home.zip

./runInstaller -ignorePrereq -waitforcompletion -silent                        \

    -responseFile ${ORACLE_HOME}/install/response/db_install.rsp               \

    oracle.install.option=INSTALL_DB_SWONLY                                    \

    ORACLE_HOSTNAME=${ORACLE_HOSTNAME}                                         \

    UNIX_GROUP_NAME=oinstall                                                   \

    INVENTORY_LOCATION=${ORA_INVENTORY}                                        \

    SELECTED_LANGUAGES=en,en_GB                                                \

    ORACLE_HOME=${ORACLE_HOME}                                                 \

    ORACLE_BASE=${ORACLE_BASE}                                                 \

    oracle.install.db.InstallEdition=EE                                        \

    oracle.install.db.OSDBA_GROUP=dba                                          \

    oracle.install.db.OSBACKUPDBA_GROUP=dba                                    \

    oracle.install.db.OSDGDBA_GROUP=dba                                        \

    oracle.install.db.OSKMDBA_GROUP=dba                                        \

    oracle.install.db.OSRACDBA_GROUP=dba                                       \

    SECURITY_UPDATES_VIA_MYORACLESUPPORT=false                                 \

    DECLINE_SECURITY_UPDATES=true

Run the root scripts when prompted.

As a root user, execute the following script(s):

        1. /u01/app/oracle/product/21.0.0/dbhome_1/root.sh

At this point you should also patch the new Oracle home, but in this case we will forgo that step to keep things simple.
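If you do want to patch the home at this point, the following is a minimal sketch using OPatch; the patch number and staging directory are placeholders, and the patch README remains the authoritative set of steps.

# Placeholders: <patch_id> is a Release Update patch number and
# /tmp/patches is an assumed staging directory.
export PATH=$ORACLE_HOME/OPatch:$PATH

cd /tmp/patches
unzip -oq p<patch_id>_210000_Linux-x86-64.zip
cd <patch_id>
opatch apply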

◉ Create 21c Container Database

We need to create a container database (CDB) as the destination for the resulting PDB. The following example creates a CDB called "cdb2" with no PDBs.

#dbca -silent -deleteDatabase -sourceDB cdb2 -sysDBAUserName sys -sysDBAPassword SysPassword1

dbca -silent -createDatabase \

 -templateName General_Purpose.dbc \

 -gdbname cdb2 -sid cdb2 -responseFile NO_VALUE \

 -characterSet AL32UTF8 \

 -sysPassword SysPassword1 \

 -systemPassword SysPassword1 \

 -createAsContainerDatabase true \

 -numberOfPDBs 0 \

 -databaseType MULTIPURPOSE \

 -memoryMgmtType auto_sga \

 -totalMemory 1536 \

 -storageType FS \

 -datafileDestination "/u02/oracle/" \

 -redoLogFileSize 50 \

 -emConfiguration NONE \

 -ignorePreReqs

Prepare for db operation

10% complete

Copying database files

40% complete

Creating and starting Oracle instance

42% complete

46% complete

52% complete

56% complete

60% complete

Completing Database Creation

66% complete

69% complete

70% complete

Executing Post Configuration Actions

100% complete

Database creation complete. For details check the logfiles at:

 /u01/app/oracle/cfgtoollogs/dbca/cdb2.

Database Information:

Global Database Name:cdb2

System Identifier(SID):cdb2

Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/cdb2/cdb2.log" for further details.

$

We enable the fast recovery area, Oracle Managed Files (OMF) and archivelog mode.

sqlplus / as sysdba <<EOF

alter system set db_recovery_file_dest_size=40g;

alter system set db_recovery_file_dest='/u01/app/oracle/fast_recovery_area';

alter system set db_create_file_dest = '/u02/oradata';

shutdown immediate;

startup mount;

alter database archivelog;

alter database open;

exit;

EOF

You will need to adjust the instance parameters to make sure the container can cope with the demands of the final PDB, but for this example we will ignore that.
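As a minimal sketch of what that adjustment could look like (the values are purely illustrative, not recommendations), you might raise the memory and process limits on the new CDB and restart it:

sqlplus / as sysdba <<EOF
-- Illustrative values only: size these to cover the incoming PDB's
-- demands on top of the existing workload.
alter system set sga_target=4g scope=spfile;
alter system set processes=1000 scope=spfile;
shutdown immediate;
startup;
exit;
EOF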

◉ Run AutoUpgrade Analyze

Download the latest "autoupgrade.jar" file from MOS 2485457.1. If you don't have MOS access, you can skip this step.

cd $ORACLE_BASE/product/21.0.0/dbhome_1/rdbms/admin

mv autoupgrade.jar autoupgrade.jar.`date +"%Y"-"%m"-"%d"`

cp /tmp/autoupgrade.jar .

Make sure you are using the original Oracle home before running the "autoupgrade.jar" commands.

export ORACLE_SID=cdb1

export ORAENV_ASK=NO

. oraenv

export ORAENV_ASK=YES

export ORACLE_HOME=$ORACLE_BASE/product/19.0.0/dbhome_1

Generate a sample file for a full database upgrade.

$ORACLE_BASE/product/21.0.0/dbhome_1/jdk/bin/java \

  -jar $ORACLE_BASE/product/21.0.0/dbhome_1/rdbms/admin/autoupgrade.jar \

  -create_sample_file config /tmp/config.txt unplug

Edit the resulting "/tmp/config.txt" file, setting the details for your required upgrade. In this case we used the following parameters. We are only upgrading a single PDB, but if we had multiple we could use a comma-separated list of PDBs.

upg1.log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade/pdb1

upg1.sid=cdb1

upg1.source_home=/u01/app/oracle/product/19.0.0/dbhome_1

upg1.target_cdb=cdb2

upg1.target_home=/u01/app/oracle/product/21.0.0/dbhome_1

upg1.pdbs=pdb1                              # Comma delimited list of pdb names that will be upgraded and moved to the target CDB

#upg1.target_pdb_name.mypdb1=altpdb1        # Optional. Name of the PDB to be created on the target CDB

#upg1.target_pdb_copy_option.mypdb1=file_name_convert=('mypdb1', 'altpdb1')  # Optional. file_name_convert option used when creating the PDB on the target CDB

#upg1.target_pdb_name.mypdb2=altpdb2

upg1.start_time=NOW                        # Optional. [NOW | +XhYm (X hours, Y minutes after launch) | dd/mm/yyyy hh:mm:ss]

upg1.upgrade_node=localhost                # Optional. To find out the name of your node, run the hostname utility. Default is 'localhost'

upg1.run_utlrp=yes                         # Optional. Whether or not to run utlrp after upgrade

upg1.timezone_upg=yes                      # Optional. Whether or not to run the timezone upgrade

upg1.target_version=21                     # Oracle version of the target ORACLE_HOME.  Only required when the target Oracle database version is 12.2

Run the upgrade in analyze mode to see if there are any expected issues with the upgrade.

$ORACLE_BASE/product/21.0.0/dbhome_1/jdk/bin/java \

  -jar $ORACLE_BASE/product/21.0.0/dbhome_1/rdbms/admin/autoupgrade.jar \

  -config /tmp/config.txt -mode analyze

AutoUpgrade tool launched with default options

Processing config file ...

+--------------------------------+

| Starting AutoUpgrade execution |

+--------------------------------+

1 databases will be analyzed

Type 'help' to list console commands

upg> Job 100 completed

------------------- Final Summary --------------------

Number of databases            [ 1 ]

Jobs finished                  [1]

Jobs failed                    [0]

Jobs pending                   [0]

Please check the summary report at:

/u01/app/oracle/cfgtoollogs/autoupgrade/cfgtoollogs/upgrade/auto/status/status.html

/u01/app/oracle/cfgtoollogs/autoupgrade/cfgtoollogs/upgrade/auto/status/status.log

The output files list the status of the analysis, and any manual intervention that is needed before an upgrade can take place. The output of the "status.log" file is shown below. The detail section gives a file containing the details of the steps in the upgrade process. If you've seen the output from the "preupgrade.jar", it will look familiar. If there are any required manual actions in the main log file, the detail file should give more information.

==========================================

          Autoupgrade Summary Report

==========================================

[Date]           Sun Aug 22 14:49:01 UTC 2021

[Number of Jobs] 1

==========================================

[Job ID] 100

==========================================

[DB Name]                cdb1

[Version Before Upgrade] 19.12.0.0.0

[Version After Upgrade]  21.3.0.0.0

------------------------------------------

[Stage Name]    PRECHECKS

[Status]        SUCCESS

[Start Time]    2021-08-22 14:48:44

[Duration]      0:00:16

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/100/prechecks

[Detail]        /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/100/prechecks/cdb1_preupgrade.log

                Precheck passed and no manual intervention needed

------------------------------------------

The log directory contains a number of files, including an HTML version of the detailed report. It contains the same information as the log file, but some people may prefer reading this format.

Once any required manual fixups are complete, run the analysis again and you should see a clean analysis report.
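AutoUpgrade can also apply its automated fixups for you. The following invocation is a sketch using the same config file; run analyze again afterwards to confirm a clean report.

# Optional: let AutoUpgrade apply its automated fixups, then re-analyze.
$ORACLE_BASE/product/21.0.0/dbhome_1/jdk/bin/java \
  -jar $ORACLE_BASE/product/21.0.0/dbhome_1/rdbms/admin/autoupgrade.jar \
  -config /tmp/config.txt -mode fixups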

◉ Run AutoUpgrade Deploy

We are now ready to run the database upgrade with the following command. The upgrade takes some time, so you will be left at the "upg" prompt until it's complete.

$ORACLE_BASE/product/21.0.0/dbhome_1/jdk/bin/java \

  -jar $ORACLE_BASE/product/21.0.0/dbhome_1/rdbms/admin/autoupgrade.jar \

  -config /tmp/config.txt -mode deploy

AutoUpgrade tool launched with default options

Processing config file ...

+--------------------------------+

| Starting AutoUpgrade execution |

+--------------------------------+

1 databases will be processed

Type 'help' to list console commands

upg>

Use the "help" command to see the command line options. We can list the current jobs and check on the job status using the following commands.

upg> lsj

+----+-------+---------+---------+-------+--------------+--------+----------------+

|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME| UPDATED|         MESSAGE|

+----+-------+---------+---------+-------+--------------+--------+----------------+

| 101|   cdb1|DBUPGRADE|EXECUTING|RUNNING|21/08/22 14:51|14:56:28|22%Upgraded PDB1|

+----+-------+---------+---------+-------+--------------+--------+----------------+

Total jobs 1

upg> status -job 101

Progress

-----------------------------------

Start time:      21/08/22 14:51

Elapsed (min):   5

End time:        N/A

Last update:     2021-08-22T14:56:28.019

Stage:           DBUPGRADE

Operation:       EXECUTING

Status:          RUNNING

Pending stages:  7

Stage summary:

    SETUP             <1 min

    PREUPGRADE        <1 min

    PRECHECKS         <1 min

    PREFIXUPS         1 min

    DRAIN             <1 min

    DBUPGRADE         3 min (IN PROGRESS)

Job Logs Locations

-----------------------------------

Logs Base:    /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1

Job logs:     /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101

Stage logs:   /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/dbupgrade

TimeZone:     /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/temp

Additional information

-----------------------------------

Details:

[Upgrading] is [22%] completed for [cdb1-pdb1]

                 +---------+-------------+

                 |CONTAINER|   PERCENTAGE|

                 +---------+-------------+

                 |     PDB1|UPGRADE [22%]|

                 +---------+-------------+

Error Details:

None

upg>

Once the job completes, a summary message is displayed.

upg> Job 101 completed

------------------- Final Summary --------------------

Number of databases            [ 1 ]

Jobs finished                  [1]

Jobs failed                    [0]

Jobs pending                   [0]

Please check the summary report at:

/u01/app/oracle/cfgtoollogs/autoupgrade/cfgtoollogs/upgrade/auto/status/status.html

/u01/app/oracle/cfgtoollogs/autoupgrade/cfgtoollogs/upgrade/auto/status/status.log

The "status.log" contains the top-level information about the upgrade process.

==========================================

          Autoupgrade Summary Report

==========================================

[Date]           Sun Aug 22 15:15:50 UTC 2021

[Number of Jobs] 1

==========================================

[Job ID] 101

==========================================

[DB Name]                cdb1

[Version Before Upgrade] 19.12.0.0.0

[Version After Upgrade]  21.3.0.0.0

------------------------------------------

[Stage Name]    PREUPGRADE

[Status]        SUCCESS

[Start Time]    2021-08-22 14:51:03

[Duration]      0:00:00

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/preupgrade

------------------------------------------

[Stage Name]    PRECHECKS

[Status]        SUCCESS

[Start Time]    2021-08-22 14:51:03

[Duration]      0:00:20

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/prechecks

[Detail]        /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/prechecks/cdb1_preupgrade.log

                Precheck passed and no manual intervention needed

------------------------------------------

[Stage Name]    PREFIXUPS

[Status]        SUCCESS

[Start Time]    2021-08-22 14:51:24

[Duration]      0:01:43

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/prefixups

------------------------------------------

[Stage Name]    DRAIN

[Status]        SUCCESS

[Start Time]    2021-08-22 14:53:08

[Duration]      0:00:12

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/drain

------------------------------------------

[Stage Name]    DBUPGRADE

[Status]        SUCCESS

[Start Time]    2021-08-22 14:53:20

[Duration]      0:15:56

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/dbupgrade

------------------------------------------

[Stage Name]    NONCDBTOPDBXY

[Status]        SUCCESS

[Start Time]    2021-08-22 15:09:16

[Duration]      0:00:00

------------------------------------------

[Stage Name]    POSTCHECKS

[Status]        SUCCESS

[Start Time]    2021-08-22 15:09:16

[Duration]      0:00:09

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/postchecks

[Detail]        /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/postchecks/cdb1_postupgrade.log

------------------------------------------

[Stage Name]    POSTFIXUPS

[Status]        SUCCESS

[Start Time]    2021-08-22 15:09:26

[Duration]      0:06:23

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/postfixups

------------------------------------------

[Stage Name]    POSTUPGRADE

[Status]        SUCCESS

[Start Time]    2021-08-22 15:15:49

[Duration]      0:00:00

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/postupgrade

------------------------------------------

[Stage Name]    SYSUPDATES

[Status]        SUCCESS

[Start Time]    2021-08-22 15:15:50

[Duration]      0:00:00

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/sysupdates

------------------------------------------

Upgrade Summary: /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/dbupgrade/upg_summary.log

Check the "upg_summary.log" file, and if anything looks wrong, examine the associated log files. At this point I do a shutdown and startup to make sure everything is running in the correct mode.

export ORACLE_HOME=$ORACLE_BASE/product/21.0.0/dbhome_1

export PATH=$ORACLE_HOME/bin:$PATH

export ORACLE_SID=cdb2

sqlplus / as sysdba <<EOF

alter pluggable database PDB1 save state;

shutdown immediate;

startup;

show pdbs

exit;

EOF

◉ Final Steps

If you've finished with the 19c CDB1 instance, you can remove it.

export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1

export PATH=$ORACLE_HOME/bin:$PATH

export ORACLE_SID=cdb1

dbca -silent -deleteDatabase -sourceDB cdb1 -sysDBAUserName sys -sysDBAPassword SysPassword1

Edit the "/etc/oratab" file and any environment files as required.

◉ Appendix

The following commands are used to rebuild the databases if you want to rerun the examples.

Rebuild the 21c CDB2 multitenant database with no PDBs.

export ORACLE_HOME=/u01/app/oracle/product/21.0.0/dbhome_1

export PATH=$ORACLE_HOME/bin:$PATH

export ORACLE_SID=cdb2

#dbca -silent -deleteDatabase -sourceDB cdb2 -sysDBAUserName sys -sysDBAPassword SysPassword1

dbca -silent -createDatabase \

 -templateName General_Purpose.dbc \

 -gdbname cdb2 -sid cdb2 -responseFile NO_VALUE \

 -characterSet AL32UTF8 \

 -sysPassword SysPassword1 \

 -systemPassword SysPassword1 \

 -createAsContainerDatabase true \

 -numberOfPDBs 0 \

 -databaseType MULTIPURPOSE \

 -memoryMgmtType auto_sga \

 -totalMemory 1536 \

 -storageType FS \

 -datafileDestination "/u02/oradata/" \

 -redoLogFileSize 50 \

 -emConfiguration NONE \

 -ignorePreReqs

sqlplus / as sysdba <<EOF

alter system set db_recovery_file_dest_size=40g;

alter system set db_recovery_file_dest='/u01/app/oracle/fast_recovery_area';

alter system set db_create_file_dest = '/u02/oradata';

shutdown immediate;

startup mount;

alter database archivelog;

alter database open;

exit;

EOF

Rebuild the 19c CDB1 multitenant database with one PDB.

export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1

export PATH=$ORACLE_HOME/bin:$PATH

export ORACLE_SID=cdb1

#dbca -silent -deleteDatabase -sourceDB cdb1 -sysDBAUserName sys -sysDBAPassword SysPassword1

dbca -silent -createDatabase \

 -templateName General_Purpose.dbc \

 -gdbname cdb1 -sid cdb1 -responseFile NO_VALUE \

 -characterSet AL32UTF8 \

 -sysPassword SysPassword1 \

 -systemPassword SysPassword1 \

 -createAsContainerDatabase true \

 -numberOfPDBs 1 \

 -pdbName pdb1 \

 -pdbAdminPassword SysPassword1 \

 -databaseType MULTIPURPOSE \

 -memoryMgmtType auto_sga \

 -totalMemory 1536 \

 -storageType FS \

 -datafileDestination "/u02/oradata/" \

 -redoLogFileSize 50 \

 -emConfiguration NONE \

 -ignorePreReqs

sqlplus / as sysdba <<EOF

alter pluggable database pdb1 save state;

alter system set db_recovery_file_dest_size=40g;

alter system set db_recovery_file_dest='/u01/app/oracle/fast_recovery_area';

alter system set db_create_file_dest = '/u02/oradata';

shutdown immediate;

startup mount;

alter database archivelog;

alter database open;

exit;

EOF

Source: oracle-base.com

Wednesday, April 27, 2022

Introducing Exadata Cloud Infrastructure X9M

More Compute, Storage, and Faster Networking at the Same Price

The world’s fastest public cloud database platform is now even faster. Exadata Cloud Infrastructure X9M runs database workloads dramatically faster, with extremely low 19 microsecond latency for SQL reads, up to 87% more SQL IOPS for OLTP, and 80% faster analytic scans. All powered by 2.5x more database cores, 28% more storage, and 2x the bandwidth to application server clients than the X8M, at the same price!

Exadata Cloud Infrastructure X9M offers extreme scale out capabilities that benefit both Autonomous Database and Exadata Database Service. With more compute, storage, and networking resources in the latest generation platform, customers can consolidate even more mixed database workloads on less infrastructure. The granular scalability of Exadata Cloud Infrastructure X9M also lowers costs by enabling customers to start with smaller configurations and right-size their hardware configuration to match their growing workload requirements.

Exadata Cloud Infrastructure X9M delivers significant improvements for organizations running mission-critical databases in Oracle Cloud Infrastructure.


As a result, organizations can run OLTP workloads at up to 22.4M IOPS in a configuration with only 8 database servers and 12 storage servers, while analytics and machine learning workloads can achieve up to 540GB/sec of scan throughput in the same configuration. This allows customers to run the same workloads faster and at lower costs, run larger workloads and more complex analyses in the same amount of time, and run more workloads on the same infrastructure.

If organizations need additional infrastructure to support their workloads, they can independently scale the number of database servers and storage servers they utilize. When fully scaled out with 32 database servers and 64 storage servers, Exadata Cloud Infrastructure X9M allows customers to run any type of workload at impressive scale and with impressive performance, effectively meaning that no workload or database is too large to run.


Exadata Smart System Software


With Exadata we combine the best compute, storage, networking, and system software available and build a scale-out engineered system specifically architected to run the Oracle Database. Exadata uses state-of-the-art hardware components, but since Exadata is built using the same industry-standard components available to any vendor, what makes Exadata special? Simple: it’s the smart software we build into the system. Exadata includes dozens of enhanced software features and algorithms that, when combined with the hardware, provide superior performance, scalability, availability, and security for OLTP, Analytics, and Consolidation.


Exadata uses these smart technologies across on-premises, hybrid cloud, and OCI deployments, making it unique not just in its capabilities but also by having consistent and compatible capabilities with all deployment models. Exadata Cloud Infrastructure and Exadata Cloud@Customer run the same database services – Autonomous Database and Exadata Database Service – making it easy for organizations to develop once and deploy anywhere or move workloads between different locations. The power of the core Exadata platform is combined with cloud operations, automation, and cloud-based economics to eliminate infrastructure management, simplify or eliminate database administration tasks, and reduce TCO.

Next-generation Exadata Infrastructure in OCI


For those of you interested in all the specifics for how the CPU, storage, and networking resources improved so dramatically over the previous X8M generation, here’s a summary of the Exadata Cloud Infrastructure X9M Data Sheet.

Exadata Cloud Infrastructure X9M Database Server:

◉ 126 AMD EPYC™ Processor cores
◉ 1.35 TB Memory
◉ 50 Gbps Network (Client/Backup)

Exadata Cloud Infrastructure X9M Storage Server:

◉ 48 Intel® Xeon® Processor cores
◉ 1.5 TB Intel® Optane™ PMem
◉ 25.6 TB NVMe Flash
◉ 63.6 TB Usable Disk Capacity

Exadata Cloud Infrastructure X9M increases the throughput over its 100 Gbps active-active Remote Direct Memory Access over Converged Ethernet (RoCE) internal network fabric, providing more bandwidth than previous generations and an extremely low-latency interconnect between all compute and storage servers. The servers now use PCIe4 to enable the full 200 Gbps aggregate bandwidth of the active-active RoCE fabric.

Exadata Database Service on Dedicated Infrastructure


Exadata Cloud Infrastructure X9M runs Exadata Database Service, where infrastructure is managed by Oracle and the customer has control over most aspects of their database service. This provides customers with an easy path to migrate applications to the cloud by using the same Exadata capabilities available on premises but without having to modify operational procedures.

We continue to develop new software capabilities for Exadata Database Service running in the cloud. These include the following recent cloud automation capabilities that make it even easier to use.

◉ Scale and right-size configurations with elastic compute and storage provisioning and expansion

◉ Easily set up a standby database using the Cloud UI or APIs

◉ Deploy, start, stop, delete, and clone Pluggable Databases from the UI and APIs

◉ Easily deploy customer-specific fixes across multiple systems with custom database home images

◉ Centrally manage encryption keys with OCI Vault

◉ Monitor database performance with the built-in Performance Hub and metrics screens

Autonomous Database on Dedicated Exadata Infrastructure


For Autonomous Database running on Dedicated Exadata Cloud Infrastructure, there are additional benefits. Not only do you get the 87% greater IOPS and 80% higher throughput available with the Exadata Cloud Infrastructure X9M starting configuration, but you also have access to granular compute and storage expandability that allows organizations to right-size their configurations to match their workloads. As a result, Autonomous Database customers can now take advantage of the full Exadata Cloud Infrastructure expandability. This effectively provides Autonomous Database with up to 40x more processor cores, 16x more memory, 27x more storage capacity, and 38x more scan throughput than the previous limit for X8M infrastructure.

Autonomous Database is the simplest cloud database service for running any type of application at any scale or with any level of criticality. When running it on dedicated Exadata Cloud Infrastructure X9M, organizations can run all types of database workloads faster, eliminate human error and most database administration tasks, and automatically scale database consumption in consolidated environments to minimize costs.

Here are some recent software enhancements for Autonomous Database on Dedicated Exadata Infrastructure:

◉ Automatic Failover to Data Guard Standby Database

◉ Bring Your Own Key to Autonomous Data Guard using OCI Vault for Cross Region key management

◉ Integration with Database Management Service to manage full database fleet from one console

◉ Save costs for small databases and dev/test by running up to 10 databases on 1 OCPU

◉ GoldenGate change capture is now available to implement replication and data mesh architectures

Source: oracle.com

Monday, April 25, 2022

Steps to Enable Exadata Cloud Service Monitoring using OCI Performance Hub

Introduction:

We can use OCI Performance Hub to analyze and tune the performance of Oracle Cloud Infrastructure Shared and Dedicated Autonomous Databases, Virtual Machine and Bare Metal DB systems, Oracle Exadata Cloud Service, and external Oracle databases.

In this blog, we will view the steps to enable monitoring in Oracle Exadata Cloud Service (ExaCS).

High level steps and Prerequisites 

1. Assign the OCI user group the permissions required to enable Database Management

2. Grant the dbsnmp user the required privileges in the target database

3. Add a secret in the OCI Vault service with the dbsnmp user password

4. Add an NSG to enable communication between Database Management and the Oracle Cloud Database

5. Add a private endpoint to enable connectivity between Database Management and the ExaCS VMs

6. Verify the Exadata monitoring and metrics

Detailed Steps:

1. Make sure the OCI user group has the required permissions.

Database Management Permissions

Here are examples of the policies to grant the DB-MGMT-ADMIN user group permission to create a Database Management private endpoint and monitor the work requests associated with the private endpoint:


Allow group DB-MGMT-ADMIN to manage dbmgmt-private-endpoints in tenancy

Allow group DB-MGMT-ADMIN to read dbmgmt-work-requests in tenancy

Alternatively, a single policy using the Database Management aggregate resource-type grants the DB-MGMT-ADMIN user group the same permissions detailed in the preceding paragraph:

Allow group DB-MGMT-ADMIN to manage dbmgmt-family in tenancy

Exadata Cloud service permission

Here's an example of a policy that grants the DB-MGMT-ADMIN user group the permission to enable Database Management for the Oracle Cloud Databases in the tenancy:

Allow group DB-MGMT-ADMIN to use database-family in tenancy

Networking service permissions

Here are examples of the individual policies that grant the DB-MGMT-ADMIN user group the required permissions:

Allow group DB-MGMT-ADMIN to manage vnics in tenancy

Allow group DB-MGMT-ADMIN to use subnets in tenancy

Allow group DB-MGMT-ADMIN to use network-security-groups in tenancy

or

Allow group DB-MGMT-ADMIN to use security-lists in tenancy

Alternatively, a single policy using the Networking service aggregate resource-type grants the DB-MGMT-ADMIN user group the same permissions detailed in the preceding paragraph:

Allow group DB-MGMT-ADMIN to manage virtual-network-family in tenancy

Vault service permissions

Here's an example of the policy that grants the DB-MGMT-ADMIN user group the permission to create and use secrets in the tenancy:

Allow group DB-MGMT-ADMIN to manage secret-family in tenancy

In addition to the user group policy for the Vault service, the following service policy is required to grant Database Management (dpd) the permission to read database password secrets in a specific vault:

Allow service dpd to read secret-family in compartment ABC where target.vault.id = 'Vault OCID'

2. Grant the dbsnmp user the required privileges and set a compliant password.

a) Grant privileges: 

      GRANT CREATE PROCEDURE TO dbsnmp;     

      GRANT SELECT ANY DICTIONARY, SELECT_CATALOG_ROLE TO dbsnmp;

      GRANT ALTER SYSTEM TO dbsnmp;

      GRANT ADVISOR TO dbsnmp;

      GRANT EXECUTE ON DBMS_WORKLOAD_REPOSITORY TO dbsnmp;

b) Set password for compliance:

The database user password checks in Database Management require the password to be Federal Information Processing Standards (FIPS) compliant:

◉ Password length must be between 14 and 127 characters.

◉ Password must have at least two lowercase, two uppercase, two digits, and two special characters.

SQL> alter user dbsnmp account unlock;

User altered.

SQL> alter user dbsnmp identified by "<password>";

User altered.

3. Create a secret in the OCI Vault service for the dbsnmp user password.

Use the Oracle Cloud Infrastructure Vault service to save the database user password in a secret with an encryption key. The Vault service is a managed service that enables you to centrally manage the encryption keys that protect your data and the secret credentials that you use to securely access resources.

Note that if you change the database user password, you must also update the secret with the new password by creating a new version of the secret and updating the contents.

Create Vault

1. Open the navigation menu, click Identity & Security, and then click Vault.

2. Under List Scope, in the Compartment list, click the name of the compartment where you want to create the vault.

3. Click Create Vault.

4. In the Create Vault dialog box, click Name, and then enter a display name for the vault.


Create Key

1. Open the navigation menu, click Identity & Security, and then click Vault.

2. Under List Scope, in the Compartment list, click the name of the compartment where you want to create the key.

3. From the list of vaults in the compartment, click the name of the vault where you want to create the key.

4. Click Master Encryption Keys, and then click Create Key.

5. In the Create Key dialog box, choose a compartment from the Create in Compartment list.

6. Click Protection Mode, and then do one of the following:

- To create a master encryption key that is stored and processed on a hardware security module (HSM), choose HSM.
- To create a master encryption key that is stored and processed on a server, choose Software.

You cannot change a key's protection mode after you create it.

7. Click Name, and then enter a name to identify the key.


Create Secret
 
1. Open the navigation menu, click Identity & Security, and then click Vault.

2. Under List Scope, in the Compartment list, click the name of the compartment where you want to create a secret.

3. From the list of vaults in the compartment, click the name of the vault where you want to create a secret.

4. Click Secrets, and then click Create Secret.

5. In the Create Secret dialog box, choose a compartment from the Create in Compartment list.

6. Click Name, and then enter a name to identify the secret.

7. Click Description, and then enter a brief description of the secret to help identify it.

8. Choose the master encryption key that you want to use to encrypt the secret contents while they're imported to the vault. (The key must belong to the same vault. The key must also be a symmetric key. You cannot encrypt secrets with asymmetric keys.)

9. Specify the format of the secret contents you're providing by choosing a template type from the Secret Type Template list.

10. Click Secret Contents, and then enter the secret contents – the dbsnmp password.
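If you prefer to script these console steps, the following OCI CLI sketch shows one possible equivalent; every OCID, name, and endpoint here is a placeholder, and the key creation needs the management endpoint of the vault you just created.

# Placeholders throughout; run from a host with the OCI CLI configured.
oci kms management vault create \
  --compartment-id <compartment_ocid> \
  --display-name DBMGMT-Vault \
  --vault-type DEFAULT

oci kms management key create \
  --compartment-id <compartment_ocid> \
  --display-name DBMGMT-Key \
  --key-shape '{"algorithm":"AES","length":32}' \
  --endpoint <vault_management_endpoint>

# The secret content must be base64 encoded.
oci vault secret create-base64 \
  --compartment-id <compartment_ocid> \
  --secret-name dbsnmp-password \
  --vault-id <vault_ocid> \
  --key-id <key_ocid> \
  --secret-content-content "$(echo -n '<dbsnmp_password>' | base64)"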


4. NSG to enable communication between Database Management and the Oracle Cloud Database


You must add ingress and egress security rules to Network Security Groups (NSGs) or Security Lists in the Oracle Cloud Database's VCN to allow communication between the Database Management private endpoint and the Oracle Cloud Database.

- Ingress rule for the ExaCS VM Cluster Subnet: The ExaCS VM Cluster Subnet (on port 1521) can receive incoming traffic from the Database Management private endpoint's subnet from any port.
- Egress rule for the Database Management private endpoint: The Database Management private endpoint's subnet (from any port) can send requests to the ExaCS VM Cluster Subnet on port 1521.
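As a sketch, the same two rules could be added to an NSG with the OCI CLI; the NSG OCID and the CIDRs below are placeholders for your private endpoint and VM cluster subnets.

# rules.json - placeholder CIDRs: 10.0.1.0/24 = private endpoint subnet,
# 10.0.2.0/24 = ExaCS VM cluster subnet.
[
  {"direction": "INGRESS", "protocol": "6",
   "source": "10.0.1.0/24", "sourceType": "CIDR_BLOCK",
   "tcpOptions": {"destinationPortRange": {"min": 1521, "max": 1521}}},
  {"direction": "EGRESS", "protocol": "6",
   "destination": "10.0.2.0/24", "destinationType": "CIDR_BLOCK",
   "tcpOptions": {"destinationPortRange": {"min": 1521, "max": 1521}}}
]

oci network nsg rules add --nsg-id <nsg_ocid> --security-rules file://rules.json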


Assign/add the NSG to ExaCS VM Cluster – Client Network Security Groups.
 

5. Create a Database Management private endpoint


A Database Management private endpoint is required to enable communication between Database Management and the Oracle Cloud Database in a VCN. The Database Management private endpoint is its network point of presence in the VCN in which the Oracle Cloud Database can be accessed.

To create a Database Management private endpoint:

1. Open the navigation menu, click Observability & Management. Under Database Management, click Administration.

2. On the left pane on the Administration page, click Private Endpoint and select the compartment in which you want to create the private endpoint.

3. On the Private Endpoints page, click Create Private Endpoint.

4. In the Create Private Endpoint dialog:

    1. Name: Enter a name for the private endpoint.
    2. Description: Optionally, enter a description for the private endpoint.
    3. Choose Compartment: Select the compartment in which you want the private endpoint to reside.
    4. Use this private endpoint for RAC databases: Select this check box. The Database Management private endpoint for RAC Oracle Cloud Databases is a limited resource and you can create only one such private endpoint in your tenancy.
    5. Virtual Cloud Network in <Compartment>: Select the VCN in which the Oracle Cloud Database can be accessed.
    6. Subnet in <Compartment>: Select a subnet within the selected VCN. Note that the subnet can be in a different compartment than the VCN, however, it must have access to the database subnet in the VCN.
    7. Network Security Group: select an NSG added to the Exadata VM cluster.


6. Enable Database Management


Go to the target database details page.
Under Associated Services, next to Database Management, click Enable.
In the Enable Database Management dialog:

Specify the following details:

Note: The Database Type, VM Cluster, Database Home, and Database details are preselected and read-only.

    1. Service Name: The unique service name in the Database_Name.Host_Domain_Name format (this can be found in the service_name field of the tnsnames.ora file on the DB server)

Specify the database credentials for the connection to the selected Oracle Cloud Database.

    2. Database User Name: Enter the database user name – dbsnmp.

    3. Use existing secret: Select the radio button to use an existing Oracle Cloud Infrastructure Vault service secret that contains the database user password.

    4. Database User Password Secret in <Compartment>: Select the secret that contains the database user password from the drop-down list.

    5. Private Endpoint in <Compartment>: Select the private endpoint that will act as a representation of Database Management in the VCN in which the Oracle Cloud Database can be accessed.

    6. Management Options: Select the Full management option.

    7. Click Enable Database Management.


A confirmation message with a link to the Oracle Cloud Database's Work Requests page is displayed. Click the link to monitor the progress of the work request.

7. Verify


You can verify if Database Management is successfully enabled on the following pages:

◉ Database Details page of the Database: Select Metrics on the left pane under Resources and check if the database metrics are displayed.

◉ Database Details page of the Database: Click the Performance Hub tab, then click the Exadata tab to see the Exadata monitoring screen.


◉ Click Exadata Details on the same page to see additional Exadata metrics.


Note: This Exadata monitoring requires the Database Management full management option, which is an additional-cost option.

Source: oracle.com

Friday, April 22, 2022

Using Ansible with Oracle Autonomous Database on Dedicated Exadata Infrastructure

Have you ever wondered if you can use Ansible with ​Oracle Autonomous Database on Dedicated Exadata Infrastructure?  The answer is absolutely! Since I recently helped update the Ansible Autonomous Database samples to include Dedicated, and the GitHub updates have merged, I thought a blog post about what they do and how to use them was in order. 

The sample scripts assume that an Exadata Infrastructure has been created and an Autonomous Exadata VM Cluster is running on top of it.  These resources are the minimum requirements for running the Ansible sample.

There are two Ansible files needed (located here):

sample.yaml is the main script and will (1) create a new Autonomous Container Database (ACD); (2) create a new Autonomous Database (ADB) on top of the newly created ACD; (3) list ADBs in a compartment filtered by display name; (4) get facts of the ADB that was created earlier; (5) stop/start the ADB; and (6) kick off the cleanup script.

teardown.yaml is the cleanup script and cannot be run by itself.  When run from the main script, the created ADB and ACD will be deleted, in that order.  This script can take extra time to run as it has a looping wait coded in it, which ensures that the ACD is in the correct state before attempting to delete it.  After an ACD is created, a backup is taken – this step must be completed before a delete operation is allowed.  The teardown script handles everything automatically.  If you want to keep the ACD and ADB for testing, the teardown.yaml file can be commented out of the sample.yaml.    

To use the sample, we need somewhere to install the OCI Ansible Collection and its dependencies, OCI SDK for Python and Ansible.  An OCI Oracle Linux 8 compute instance is perfect for this.


In a few minutes or less, the VM will be up and running. After logging in as opc, we need to get the necessary software downloaded and set up:

sudo yum install -y oci-ansible-collection --enablerepo ol8_developer --enablerepo ol8_developer_EPEL

After installing the RPM and the dependencies, you must configure the SDK and CLI configuration file as explained in Configuring Authentication.

The default configuration file name and location is ~/.oci/config, which you will need to create.

Once you have gone through the steps of generating your API signing key and setting up your configuration file, you can test that everything is working using:

ansible localhost -m oracle.oci.oci_object_storage_namespace_facts


If the command returns your Object Storage namespace, Ansible is set up and the configuration is working correctly!

To pull the samples down, I used wget after making a directory to store them:

mkdir ansible
cd ansible
wget https://raw.githubusercontent.com/oracle/oci-ansible-collection/master/samples/database/autonomous_database/dedicated/sample.yaml
wget https://raw.githubusercontent.com/oracle/oci-ansible-collection/master/samples/database/autonomous_database/dedicated/teardown.yaml

You need to set OS variables or update the Ansible variables for the scripts to work.  I find it easiest to create the OS variables this way:

export SAMPLE_COMPARTMENT_OCID=ocid1.compartment.oc1..a........teyo5ln6zl5p3ncgcpo7ur
export SAMPLE_OCPU_COUNT=.5
export SAMPLE_ADB_DISPLAY_NAME=ANSDED
export SAMPLE_DB_NAME=ANSDED
export SAMPLE_ADMIN_PASSWORD=OciOci2022##
export SAMPLE_DATA_STORAGE_SIZE_IN_GBS=75
export SAMPLE_IS_DEDICATED=true
export SAMPLE_CLOUD_AUTONOMOUS_VM_CLUSTER_ID=ocid1.cloudexadatainfrastructure.oc1.oc1.iad...gjwp2p
export SAMPLE_ACD_DISPLAY_NAME=ANSACD

These are the bare minimum variables that have to be set for the sample to work.  Many other parameters can be set to customize the creation of the resources.   

As of the time of writing, more details can be found in the module documentation: for all of the parameters that can be customized, review oracle.oci.oci_database_autonomous_container_database and oracle.oci.oci_database_autonomous_database.
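To give a feel for these modules, here is a minimal hand-written sketch (not the actual sample.yaml) that creates a dedicated ADB from the same SAMPLE_* environment variables; the ACD OCID and the CPU count are placeholders.

---
- hosts: localhost
  collections:
    - oracle.oci
  tasks:
    # Sketch only: create a dedicated ADB on an existing ACD.
    - name: Create Autonomous Database (dedicated)
      oci_database_autonomous_database:
        compartment_id: "{{ lookup('env', 'SAMPLE_COMPARTMENT_OCID') }}"
        display_name: "{{ lookup('env', 'SAMPLE_ADB_DISPLAY_NAME') }}"
        db_name: "{{ lookup('env', 'SAMPLE_DB_NAME') }}"
        admin_password: "{{ lookup('env', 'SAMPLE_ADMIN_PASSWORD') }}"
        data_storage_size_in_gbs: "{{ lookup('env', 'SAMPLE_DATA_STORAGE_SIZE_IN_GBS') }}"
        cpu_core_count: 1                              # placeholder value
        is_dedicated: true
        autonomous_container_database_id: "<acd_ocid>" # placeholder
        state: present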

Run the sample using:

nohup ansible-playbook sample.yaml &

This will run the process in the background in case our session times out and will create a file called nohup.out that we can review and watch the progress using:

tail -f nohup.out


After the script completes, you should see a successful recap at the end of the nohup.out output.


The creation script will take about 60 minutes, and the teardown about 40 minutes to complete.

As you can see, getting Ansible set up and running in your dedicated environment is an easy process, and the benefits of being able to script and automate your resource creation are substantial.

Source: oracle.com

Wednesday, April 20, 2022

Oracle Database 21c - Persistent Memory Database – Eliminate Latency with Directly Mapped Buffer Cache


Oracle Database 21c Persistent Memory Database maps the database buffer cache directly onto Persistent Memory, which means the buffer cache and storage become one and the same. The majority of input/output operations simply disappear because the database directly uses data that resides in storage and operates on that data in the same manner it uses data in memory in the database buffer cache. The Directly Mapped Buffer Cache is the defining feature of Oracle Database 21c Persistent Memory Database and embodies the true promise of Persistent Memory.

Oracle Database 19c also supports Persistent Memory, as outlined in our previous blog (Oracle Database 19c Running @Memory Speed), but in a quite different manner. Oracle Database 19c allows database files to reside on Persistent Memory, which gives the benefit of dramatically improved I/O performance. Oracle Database 21c eliminates I/O by mapping the buffer cache onto the Persistent Memory contents, allowing the database to bypass the I/O layer entirely. Of course, these features are also quite different from Exadata, which uses Persistent Memory on the storage side. Both Oracle Database 19c (using OMS) and Oracle Database 21c Persistent Memory Database use server-side Persistent Memory.

In this blog, we will go through the process of setting up PMEM devices and creating an Oracle database on Persistent Memory. We will explore how Oracle databases can be created from scratch (using the CREATE DATABASE command) or using RMAN ACTIVE DUPLICATE command to bring an active database from another server.

What is Persistent Memory

Persistent Memory is a solid-state, high-performance, byte-addressable technology (like DRAM) that resides on the memory bus, giving PMEM the access and speed characteristics of DRAM. Memory (or DRAM) is ephemeral, and storage (such as Disk or Flash) has traditionally been required for persistent copies of data. For any computer application to work, the application data stored in HDD/SSD must be loaded in memory, introducing latency into the process.

Persistent Memory, on the other hand, serves a dual purpose. It is a storage device that sits on the memory bus and can be used for memory expansion or for adding storage to a server.

Persistent Memory Module

With the advancements in infrastructure technology (compute, storage, memory, networking, etc.) and ever-faster database systems, there has always been a struggle to optimize database workloads and improve performance. Increasing data volumes have taken a big toll on bringing data (the I/O system) to the compute for processing, and disk or flash (persistent storage) performance has been a bottleneck. Although we have seen huge advancements in the storage hardware itself (mechanical to flash to NVMe) and clever technologies on top of it (cache layers built on top of storage devices), I/O has always been the bigger bottleneck.

In the later part of the blog, we will show how to set up an existing Oracle database on a Persistent Memory store. Let’s first look at the memory storage hierarchy.

Memory Storage Hierarchy

Persistent Memory sits between DRAM and conventional storage (flash and disk) in the memory and storage hierarchy, which determines the latency of bringing data to the compute.


How does Oracle utilize Persistent Memory?

Oracle databases can utilize Persistent Memory in three different ways.

1. Persistent Memory Database using Oracle Memory Speed (OMS). This feature is available starting with Oracle Database 19c and was covered in an earlier blog.

2. Persistent Memory Database with Directly Mapped Buffer Cache (available from Oracle Database 21c onwards). This accelerates DBMS operations: I/O is done via memory copy and/or direct access, data is accessed directly from PMEM as an integral part of the database buffer cache, and the Persistent Memory Filestore is tightly coupled with the database. (This is the capability we cover in this blog.)

3. Storage side (Exadata). Combined with RDMA (Remote Direct Memory Access), the Oracle database accesses data cached in PMEM on the storage side of Exadata.

In this blog, we will focus on the Directly Mapped Buffer Cache feature which fully utilizes the potential of persistent memory. So, what is the Directly Mapped Buffer Cache? Let us find out.

Introduction to Directly Mapped Buffer Cache

Typically, when data is read from storage, it is necessary to copy data blocks into DRAM (Dynamic Random-Access Memory), regardless of whether the underlying storage device is a hard disk, flash storage, or even Persistent Memory (as we have seen with Oracle Database 19c using OMS). Using the Directly Mapped Buffer Cache feature of Oracle Database 21c, Oracle Database accesses data directly from the PMEM device rather than copying the data into the buffer cache (DRAM), eliminating the majority of I/O operations entirely. With Directly Mapped Buffer Cache, database files on PMEM become part of the database buffer cache, as we will see in the setup.

Avoiding Fractured Blocks (aka torn writes) and Data loss with Persistent Memory

Persistent Memory, like DRAM, is byte addressable rather than block addressable as with conventional disk or flash storage. With PMEM, data is persisted 8 bytes at a time rather than in increments of 4,096 bytes as with typical O/S pages or storage sector sizes. If an Oracle Database writes a block of data (8192 bytes = 8K), the data is persisted into PMEM in 1024 chunks of 8 bytes each. Any disruption (a power failure, for example) during this write can leave the block "fractured" or "torn", with inconsistent data in some of those 1024 chunks (a mix of old and new data). Both the 19c (OMS) and the 21c Directly Mapped Buffer Cache solutions eliminate this fractured block problem. Oracle accomplishes this internally by doing an out-of-place copy with a metadata update that references the new block only after the entire block has been successfully written.

While database blocks are accessed directly in PMEM as part of the Directly Mapped Buffer Cache, Oracle copies blocks into DRAM when they are modified. All changed blocks are written by the Database Writer (DBWR) process, including retaining copies of prior block versions in rollback segments for read consistency purposes. In any well-tuned system, writing of blocks is done asynchronously from the transaction and is not latency sensitive. Of course, writing of REDO blocks is latency sensitive, so REDO is written directly to PMEM in Oracle Database 21c Persistent Memory Database. Oracle Database 21c includes logic to prevent fracturing (or tearing) of REDO blocks to guard against corruption of REDO as well.

Let us get into details of how to configure Oracle Database with Persistent Memory using Directly Mapped Buffer Cache. We will start with basic requirements including, software versions and then move on to create PMEM File store followed by creating an Oracle database.

Requirements and guidelines for Directly Mapped Buffer Cache

Requirements:

◉ An Intel Cascade Lake or Ice Lake based server or later.
◉ Persistent Memory devices (PMEM) in each socket capable of memory interleaving.
◉ Oracle Linux with Huge-Page support.
◉ A DAX-capable file system such as XFS.
◉ Oracle version 21.3 with patch – 33516412 (DBRU 21.5.0.0.0)

bash-4.2$ $ORACLE_HOME/OPatch/opatch lspatches

33907983;MERGE ON DATABASE RU 21.5.0.0.0 OF 33339444 33486067

33516412;Database Release Update: 21.5.0.0.220118 (33516412)

Persistent Memory Filestore:

Oracle Database 21c Persistent Memory Database includes the integrated Persistent Memory Filestore to store all files.

Guidelines:

The Directly Mapped Buffer Cache feature is only supported for single-instance Oracle database deployments, not Oracle Real Application Clusters. In addition, each PMEM Filestore can only contain Oracle datafiles from one Oracle Database; multiple PMEM Filestores are required to deploy multiple databases on a single server.

Let us start setting up:

1. Set up environment variables (tst21pm is our target instance on the PMEM Filestore)

export ORACLE_HOME=/u01/app/oracle/product/21.5

export ORACLE_BASE=/u01/app/oracle

export PATH=$ORACLE_HOME/bin:$PATH

export PATH=$ORACLE_HOME/perl/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

export TWO_TASK=

export ORACLE_SID=tst21pm

2. The following command lists some of the XFS file systems created using PMEM devices (a sketch of creating such a filesystem follows the listing).
df -hT

/dev/pmem0p1                 xfs       1.5T  1.4T   57G  97% /mnt/pmem12

/dev/pmem1p1                 xfs       1.5T  1.4T   57G  97% /mnt/pmem13
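If you still need to create such a filesystem, the following is a rough sketch; the device and mountpoint names are placeholders, it assumes an fsdax namespace on interleaved PMEM, and your platform documentation is the authority on the exact steps.

# Placeholders: /dev/pmem0 and /mnt/pmem12. Create an fsdax namespace,
# make an XFS filesystem with reflink disabled (required for DAX on
# many kernels), then mount it with the dax option.
ndctl create-namespace --mode=fsdax --region=region0
mkfs.xfs -f -m reflink=0 /dev/pmem0
mkdir -p /mnt/pmem12
mount -o dax /dev/pmem0 /mnt/pmem12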

3. Create sample init.ora (initialization file) to start up the instance

*.audit_file_dest='/u01/app/oracle/admin/tst21pm/adump'

*.audit_trail='db'

*.compatible='21.0.0'

*.control_files='/u01/app/oracle/oradata/TST21PM/control01.ctl','/u01/app/oracle/oradata/TST21PM/control02.ctl'

*.db_block_size=8192

*.db_file_name_convert='/u01/app/oracle/oradata/TSTLOCAL','/u01/app/oracle/oradata/TST21PM'

*.log_file_name_convert='/u01/app/oracle/oradata/TSTLOCAL','/u01/app/oracle/oradata/onlinelog/TST21PM','/u01/app/oracle/oradata/onlinelog/TSTLOCAL','/u01/app/oracle/oradata/onlinelog/TST21PM'

*.db_name='tst21pm'

*.db_recovery_file_dest='/u01/app/oracle/oradata/fast_recovery_area'

*.db_recovery_file_dest_size=10g

*.diagnostic_dest='/u01/app/oracle'

*.dispatchers='(PROTOCOL=TCP) (SERVICE=tst21pmXDB)'

*.enable_pluggable_database=true

*.local_listener='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=<Host_Name>)(PORT=<Port_Number>)))'

*.log_archive_format='%t_%s_%r.dbf'

*.nls_language='AMERICAN'

*.nls_territory='AMERICA'

*.open_cursors=300

*.pga_aggregate_target=1g

*.processes=8960

*.remote_login_passwordfile='EXCLUSIVE'

*.sga_target=8g

*.undo_tablespace='UNDOTBS1'

4. Make sure the /pmemfs/TST21PM directory exists and that the oracle user has the proper permissions on it.

5. Edit the /etc/fuse.conf file to add the 'user_allow_other' option (or uncomment it if one already exists).

$ cat /etc/fuse.conf

user_allow_other

6. The Linux user that owns the Oracle Database server binary (located at $ORACLE_HOME/bin/oracle) must have execute permission on the /bin/fusermount program. This can be accomplished in different ways, such as by changing file permissions or by adding this user to the Access Control List associated with /bin/fusermount.

Please note that the /bin/fusermount program has the setuid bit set, and it is typically owned by root.

$ ls -l /bin/fusermount

-rwsr-x---. 1 root root 32584 Sep  6  2016 /bin/fusermount

$ chmod o+rx /bin/fusermount

$ ls -l /bin/fusermount

-rwsr-xr-x. 1 root root 32584 Sep  6  2016 /bin/fusermount

7. Create SPFILE

Create an spfile from the initial init file we created in step 3. The reason is that when the "CREATE PMEM FILESTORE" command is run, the spfile records the mountpoint for the PMEM device for subsequent startups of the instance. If you choose to use an init file instead of an spfile, you will have to manually specify the pmem_filestore parameter and its mapping.

SQL> CREATE spfile='/u01/app/oracle/dbs/spfiletst21pm.ora' from pfile='/u01/app/oracle/dbs/inittst21pm.ora';

8. Start the instance and create the PMEM filestore.

SQL> startup nomount

SQL> create pmem filestore TST21PM MOUNTPOINT '/u01/app/oracle/oradata/TST21PM' BACKINGFILE '/pmemfs/TST21PM/tst21pm.bf' SIZE 80G;

PMEM filestore created.

In the alert log you will see the following messages:

PDBID-1 OFS:TST21PM of type fsdirect CREATED

PDBID-1 OFS:TST21PM MOUNTED at /u01/app/oracle/oradata/TST21PM/ w/type fsdirect

2022-03-04T19:30:16.545076-08:00

PDBID-1 OFS:/u01/app/oracle/oradata/TST21PM/ UNMOUNTED

PDBID-1 OFS:TST21PM of type fsdirect DESTROYED

2022-03-04T19:30:16.624065-08:00

Creating PMEM Filestore with the following parameters:

  Backing Storage    :/pmemfs/TST21PM/tst21pm.bf

  Filestore Name     :TST21PM

  Block Size         :8192

  Filestore Size     :85899345920

  Auto Extend Size   :0

  Max Size           :85899345920

  InternalVsize      :167503724544

  InternalRootSz     :10485760

2022-03-04T19:35:28.440379-08:00

PMEM Filestore is mounted with the following parameters:

  Backing Storage    :/pmemfs/TST21PM/tst21pm.bf

  Filestore Name     :TST21PM

  Block Size         :8192

  Filestore Size     :86769664000

  Auto Extend Size   :0

  Max Size           :86769664000

  InternalVsize      :167503724544

  InternalRootSz     :10485760

2022-03-04T19:35:28.442010-08:00

PDBID-1 OFS:TST21PM of type fsdirect CREATED

PDBID-1 OFS:TST21PM MOUNTED at /u01/app/oracle/oradata/TST21PM/ w/type fsdirect

2022-03-04T19:35:28.593751-08:00

Completed: create pmem filestore TST21PM MOUNTPOINT '/u01/app/oracle/oradata/TST21PM' BACKINGFILE '/pmemfs/TST21PM/tst21pm.bf' SIZE 80G

The spfile is updated with the following parameter:

SQL> show parameter pfile


NAME                             TYPE        VALUE

------------------------------------ ----------- -------------------------------------------------

spfile                               string      /u01/app/oracle/dbs/spfiletst21pm.ora

SQL> show parameter pmem

NAME                        TYPE        VALUE

------------------------------------ ----------- --------------------------------------------------

pmem_filestore      string      /u01/app/oracle/oradata/TST21PM, /pmemfs/TST21PM/tst21pm.bf

9. Make sure you can connect to the source database, and to the PMEM DB we are going to create, using the same SYS password. You may want to create a static listener registration for the PMEM DB so that you can connect to it as SYS while the instance is in NOMOUNT mode, as sketched below.
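A static registration might look like the following listener.ora fragment (values assumed from this environment); reload the listener afterwards with "lsnrctl reload".

# listener.ora fragment - static registration so a SYS connection
# works while tst21pm is still in NOMOUNT.
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = tst21pm)
      (ORACLE_HOME = /u01/app/oracle/product/21.5)
      (SID_NAME = tst21pm)
    )
  )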

10. I will use the RMAN ACTIVE DUPLICATE command to clone the source database to the PMEM DB.

The following script runs the DUPLICATE command. tstlocal is the source database running on a different host. tst21pm is the target database that will be the PMEM database.

connect target sys/<password>@tstlocal;

connect auxiliary sys/<password>@tst21pm;

RUN {

ALLOCATE CHANNEL d1 TYPE DISK;

ALLOCATE CHANNEL d2 TYPE DISK;

ALLOCATE CHANNEL d3 TYPE DISK;

ALLOCATE CHANNEL d4 TYPE DISK;

ALLOCATE AUXILIARY CHANNEL aux1 TYPE DISK;

ALLOCATE AUXILIARY CHANNEL aux2 TYPE DISK;

ALLOCATE AUXILIARY CHANNEL aux3 TYPE DISK;

ALLOCATE AUXILIARY CHANNEL aux4 TYPE DISK;

DUPLICATE TARGET DATABASE TO TST21PM FROM ACTIVE DATABASE NOFILENAMECHECK;

RELEASE CHANNEL d1;

RELEASE CHANNEL d2;

RELEASE CHANNEL d3;

RELEASE CHANNEL d4;

RELEASE CHANNEL aux1;

RELEASE CHANNEL aux2;

RELEASE CHANNEL aux3;

RELEASE CHANNEL aux4;

}

11. Once the database is cloned, let us validate the data files and their destination file system.

To validate that the datafiles are located on the right mount point, query their names and sizes, as sketched below.
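An illustrative query (not the exact one from the original screenshot) is shown below; every FILE_NAME should resolve under the PMEM filestore mountpoint.

sqlplus / as sysdba <<EOF
-- Datafiles should live under /u01/app/oracle/oradata/TST21PM.
set linesize 120
column file_name format a70
select file_name, round(bytes/1024/1024) as size_mb
from   dba_data_files;
exit;
EOF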



Source: oracle.com