Wednesday, March 30, 2022

Oracle Database 19c Running @Memory Speed

Introduction

Intel Optane Persistent Memory is a new memory technology that Oracle has harnessed to deliver the highest possible database performance available today. This technical advancement fuses memory with storage capabilities.

Oracle Database has been able to use Persistent Memory since it was first introduced, using what is known as Memory Mode, which enables much larger memory configurations than conventional Dynamic Random-Access Memory (DRAM) allows. Starting with Oracle Database version 19.12, Oracle Database can also use what is known as AppDirect Mode. Persistent Memory in AppDirect Mode (which we address in this blog) allows PMEM to be used as a persistent store for data and related files.

Oracle Database can now reside on Persistent Memory and take advantage of the byte-addressable capabilities of PMEM, with much lower latency than even the fastest Flash storage can provide. To further improve performance, data sitting on a PMEM device is accessed directly by the Oracle Database, eliminating the overhead of copying data into the database buffer cache. In this blog, we will go through the process of setting up PMEM devices and creating file systems on top of them. Later, we will clone an active source Oracle Database onto these file systems. An Oracle Database can also be created from scratch (using the CREATE DATABASE command) once the file system is set up; here, we will see how to use the standard RMAN ACTIVE DUPLICATE command to bring over an active database from another server.

What is Persistent Memory

Persistent Memory is a solid-state, high-performance, byte-addressable memory (like DRAM) that resides on the memory bus, giving DRAM-like access to data and DRAM-like speed in the range of nanoseconds. DRAM is ephemeral, meaning its contents disappear when a server is rebooted. Storage has traditionally been a distinctly separate tier in the form of disk or Flash drives. For any computer application to work, the application data stored on disk or Flash must be loaded into memory, introducing additional latency in the process. Persistent Memory (in AppDirect Mode) operates both like memory and like storage: it is a storage device that sits on the memory bus and is byte-addressable like memory, but it also persists data across server reboots.


With advancements in infrastructure technology (compute, storage, memory, networking, etc.) and ever-faster database systems, there has always been a struggle to optimize database workloads and improve performance, and bringing data from storage into memory is a key bottleneck. PMEM devices reside directly on the memory bus, which reduces IO latency. Later in this blog, we will show how to set up an existing Oracle database on a Persistent Memory store. First, let us talk about the memory/storage hierarchy.

Memory Storage Hierarchy

In the below diagram, you can see where Persistent Memory sits in the memory and storage hierarchy and the latency involved in bringing data to the compute.

[Diagram: the memory/storage hierarchy, showing where Persistent Memory sits between DRAM and Flash, and the latency of each tier]

How does Oracle utilize Persistent Memory?

Persistent Memory fits into an Oracle Database system architecture in two different ways: it can reside within servers or within storage. Oracle supports both of these system architectures as follows:

Server-Side Persistent Memory

◉ Oracle Memory Speed (OMS) file system (Oracle version 19.12+). Also available in 21c.
◉ Directly Mapped Buffer Cache (Oracle version 21.3+).

Storage-Side Persistent Memory

◉ Exadata uses PMEM in the storage tier accessed using RDMA (Remote Direct Memory Access)

In this blog, we will focus on Oracle Database 19c (19.12 and later) using the OMS (Oracle Memory Speed) file system. The Directly Mapped Buffer Cache feature of Oracle 21c will be covered in a later blog post. Oracle Memory Speed (OMS) file system fully utilizes the potential of persistent memory to reduce storage latencies. So, what is OMS? Let’s find out.

Introduction to OMS (Oracle Memory Speed)

As we have seen in the memory/storage hierarchy above, Persistent Memory is a new tier that fits between DRAM and Flash storage. PMEM performance is close to that of volatile memory (DRAM). However, unlike DRAM, data residing on PMEM devices is not lost across system restarts or power failures. PMEM devices are usually deployed with a DAX (direct access) file system. Because PMEM devices are byte addressable, unlike storage devices (HDD, SSD) that are block addressable, there are scenarios in which data stored in DAX-enabled file systems can become corrupt. The Oracle Memory Speed (OMS) file system is designed to avoid these corruptions, known as “fractured blocks” or “torn writes”, while still delivering the full performance potential of Persistent Memory.

Avoiding Fractured Blocks (aka torn writes) and Data loss with Persistent Memory

Persistent Memory, like DRAM, operates byte by byte (hence byte addressable) rather than in blocks as with conventional disk or Flash storage. With PMEM, data is persisted 8 bytes at a time rather than in 4K OS pages or storage sectors. If an Oracle Database writes a block of data (8192 bytes = 8K), the data is persisted into PMEM in 1024 chunks of 8 bytes each. Any disruption (a power failure, for example) during this write can leave the block “fractured” or “torn”, with a mix of old and new data across those 1024 chunks. OMS eliminates this fractured-block problem by doing an out-of-place copy into DRAM; the metadata is updated to point to the new block only after the entire block is written.

With a firm understanding of Persistent Memory and how it works, let’s get into the details of how to configure Oracle Database with Persistent Memory. We start with system configuration and basic installation, then move on to creating a backing file to store Oracle data files, redo logs, temporary files, etc.

Requirements and guidelines for Oracle Memory Speed:

Requirements:

◉ An Intel Cascade Lake or Ice Lake based server, or later.
◉ Persistent Memory (PMEM) devices in each socket, capable of memory interleaving.
◉ Oracle Linux with HugePage support.
◉ A DAX-capable file system such as XFS.
◉ Oracle Database 19.12 or later; our setup uses patch 33192793 (DBRU 19.13), as shown below:

-bash-4.2$ cd $ORACLE_HOME/OPatch

-bash-4.2$ ./opatch lspatches

33192793;Database Release Update : 19.13.0.0.211019 (33192793)

Guidelines:

The OMS file system is only supported for single-instance Oracle Database deployments and is not supported with Oracle Real Application Clusters (RAC). Each OMS file store also supports a single Oracle Database, meaning you can place datafiles from only one database on a given OMS file store. To deploy multiple databases on a server (or virtual machine), create multiple OMS file stores on that server.

Let us start setting up:

1. Set up the hardware and operating system components. In our setup, we have two OMS file systems configured over 12 PMEM devices (128 GB each): 12 x 128 = 1536 GiB, which is 1.5 TiB.

2. The following command lists some of the XFS file systems created using PMEM devices
df -hT

/dev/pmem0p1                 xfs       1.5T  1.4T   57G  97% /mnt/pmem12

/dev/pmem1p1                 xfs       1.5T  1.4T   57G  97% /mnt/pmem13

3. Configure PMEM devices and set up OMS file store:

# ipmctl show -memoryresources

Capacity=3029.4 GiB

MemoryCapacity=0.0 GiB

AppDirectCapacity=3024.0 GiB

UnconfiguredCapacity=0.0 GiB

InaccessibleCapacity=5.4 GiB

ReservedCapacity=0.0 GiB
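
The show commands above display a system that is already provisioned. If your modules are not yet provisioned for AppDirect, a provisioning goal can be created first; a minimal sketch (the goal only takes effect after a reboot):

# ipmctl create -goal PersistentMemoryType=AppDirect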

4. Verify that the regions are configured in AppDirect mode:

# ipmctl show -region

SocketID | ISetID             | PersistentMemoryType | Capacity   | FreeCapacity | HealthState
==============================================================================================
0x0000   | 0x97407f48df982ccc | AppDirect            | 1512.0 GiB | 0.0 GiB      | Healthy
0x0001   | 0xfb907f48d59a2ccc | AppDirect            | 1512.0 GiB | 0.0 GiB      | Healthy

 

Here, there are two PMEM regions; one per socket. Each PMEM region is 1512 GiB and supports AppDirect mode.
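
If no namespaces have been created yet on these regions, they can be created in fsdax mode, one per region; a hedged sketch, assuming the regions are named region0 and region1:

# ndctl create-namespace --mode=fsdax --region=region0

# ndctl create-namespace --mode=fsdax --region=region1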

5. Check for namespaces

In the example below, two namespaces, namespace0.0 and namespace1.0, are available for NUMA nodes 0 and 1 respectively. The corresponding block devices are /dev/pmem0 and /dev/pmem1.

# ndctl list -u

[

  {

    "dev":"namespace1.0",

    "mode":"fsdax",

    "map":"dev",

    "size":"1488.37 GiB (1598.13 GB)",

    "uuid":"f8ec094d-5a69-4f87-9198-fb9117f2ea8e",

    "sector_size":512,

    "align":2097152,

    "blockdev":"pmem1"

  },

  {

    "dev":"namespace0.0",

    "mode":"fsdax",

    "map":"dev",

    "size":"1488.37 GiB (1598.13 GB)",

    "uuid":"8924aa35-701f-40cc-ba08-654aa5729611",

    "sector_size":512,

    "align":2097152,

    "blockdev":"pmem0"

  }

]

6. Format the devices by specifying a stripe unit (su) of 2 MiB and a stripe width (sw) of 1. Repeat this step for both file systems.

# mkfs.xfs -f -d su=2m,sw=1 /dev/pmem0

...

...

data     = bsize=4096 blocks=777240064, imaxpct=5

         = sunit=512 swidth=512 blks

naming   =version 2 bsize=4096 ascii-ci=0 ftype=1

log      =internal log bsize=4096 blocks=379511, version=2

         = sectsz=4096 sunit=1 blks, lazy-count=1

realtime =none

In this example, the data section of the output has bsize=4096 (4K blocks), so the data block size for this file system is 4096 bytes. With sunit=512 and swidth=512 blks, the stripe unit is 512 * 4096 bytes = 2 MiB and the stripe width is 512 * 4096 bytes = 2 MiB. A single stripe of this file system is therefore composed of a single stripe unit (512 blocks / 512 blocks per unit).

A minimum contiguous allocation of 2 MiB, or contiguous allocations in exact multiples of 2 MiB, is enforced for data so that the address space can be represented with HugePage mappings and the page cache can be bypassed. (Huge Pages in Linux are 2 MiB in size.)

Huge Pages – Typically, for a large Oracle database, we configure HugePages to map SGA memory. This gives a substantial benefit in virtual memory management by reducing the number of pages to be managed by 512x (the standard page size is 4 KiB; the HugePage size is 2 MiB), which in turn reduces page faults.

Similarly, with PMEM devices, we create a HugePage-aware (DAX) file system whose mmap mappings and block allocations are both 2 MiB. Partitions created on top of a PMEM namespace should be 2 MiB aligned. By default, fdisk creates partitions that are 1 MiB (2048 sectors) aligned; a file system built on top of such a partition cannot provide DAX with 2 MiB-aligned block allocations. We need the partition to begin at a 2 MiB-aligned boundary.
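
One way to do this is with parted, which accepts an explicit 2 MiB start offset; a minimal sketch (the partition layout is an assumption, verify against your own device naming):

# parted -s /dev/pmem0 mklabel gpt

# parted -s /dev/pmem0 mkpart primary 2MiB 100%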

Now we mount the partitions using the dax option:

# mount -o dax /dev/pmem0p1 /mnt/pmem12

# mount -o dax /dev/pmem1p1 /mnt/pmem13

Verify that the file systems are mounted with the dax option:

# mount | grep dax

/dev/pmem0p1 on /mnt/pmem12 type xfs (rw,relatime,attr2,dax,inode64,rtdev=/dev/pmem0p2,noquota)

/dev/pmem1p1 on /mnt/pmem13 type xfs (rw,relatime,attr2,dax,inode64,rtdev=/dev/pmem1p2,noquota)

7. Now we are ready to create an uber file for the Oracle Memory Speed (OMS) file system.

An uber file is like a volume in a traditional kernel-based file system. The uber file stores all the metadata and data for OMS.

Use fallocate to create uber files in the DAX-mounted file systems:

# fallocate -l 1400G /mnt/pmem12/omsuberfile.tst19oms

# fallocate -l 1400G /mnt/pmem13/omsuberfile.tst19oms

# chown oracle:dba /mnt/pmem12/omsuberfile.tst19oms

# chown oracle:dba /mnt/pmem13/omsuberfile.tst19oms

# chmod 644 /mnt/pmem12/omsuberfile.tst19oms

# chmod 644 /mnt/pmem13/omsuberfile.tst19oms

In the example above, fallocate creates a 1400 GiB file in each of the DAX-enabled file systems. Ensure that the size of each file is an exact multiple of 2 MiB; this aligns the file extents on 2 MiB boundaries and enables access through HugePage virtual memory mappings.
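
A quick sanity check that a file's size is an exact multiple of 2 MiB (2097152 bytes); a minimal sketch, where an output of 0 means the size is aligned:

$ SIZE=$(stat -c %s /mnt/pmem12/omsuberfile.tst19oms)

$ echo $((SIZE % 2097152))

0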

Also note the suffix “tst19oms” at the end of the uber file name. It means this uber file can be used by only one database instance, “tst19oms”: the suffix is tied to an Oracle SID (the unique name for an Oracle database instance).

8. Create and mount the OMS file systems. Here we create two directories and make them read-only:

$ mkdir /home/oracle/product/oradata/TST19OMS

$ chmod -wx /home/oracle/product/oradata/TST19OMS

$ mkdir /home/oracle/product/redo/TST19OMS

$ chmod -wx /home/oracle/product/redo/TST19OMS

9. Link the Oracle binaries for OMS use:

cd $ORACLE_HOME/rdbms/lib

make -f ins_rdbms.mk oms_on

cd $ORACLE_HOME/bin

chmod +x oms_daemon

chmod +x omsfscmds

10. Set up the OMS daemon:

(IMPORTANT: ORACLE_HOME, ORACLE_BASE, and ORACLE_SID must be set in the shell before attempting daemon startup.)
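
For example, based on the paths used elsewhere in this post (the ORACLE_HOME path is an illustrative assumption; adjust to your installation):

$ export ORACLE_BASE=/home/oracle/product

$ export ORACLE_HOME=/home/oracle/product/19c/dbhome_1

$ export ORACLE_SID=tst19oms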
 
$ cd $ORACLE_HOME/bin

$ ./oms_daemon

An OMS daemon process with the name oms_tst19oms (“oms_” followed by the SID name) will be created, with its trace file located in $ORACLE_BASE/diag/oms/.

$ ps -ef | grep oms

oracle    10303      1  0 Feb16 ?        00:00:00 oms_tst19oms

11. Once at the OMS command prompt, we can mount the OMS file systems using the directories created in step 8 above.

OMS> mkfs /mnt/pmem12/omsuberfile.tst19oms

OMS:mkfs:No blocksize specified, using 4K

OMS:mkfs: Device /mnt/pmem12/omsuberfile.tst19oms formatted with blocksize 4096

OMS> mount /mnt/pmem12/omsuberfile.tst19oms /home/oracle/product/oradata/TST19OMS

OMS:mount: Mounted /mnt/pmem12/omsuberfile.tst19oms at /home/oracle/product/oradata/TST19OMS

OMS> mkfs /mnt/pmem13/omsuberfile.tst19oms

OMS:mkfs:No blocksize specified, using 4K

OMS:mkfs: Device /mnt/pmem13/omsuberfile.tst19oms formatted with blocksize 4096

OMS> mount /mnt/pmem13/omsuberfile.tst19oms /home/oracle/product/redo/TST19OMS

OMS:mount: Mounted /mnt/pmem13/omsuberfile.tst19oms at /home/oracle/product/redo/TST19OMS

Let us now validate the mount status

OMS> lsmount

fsindex : 0

Mountpt : /home/oracle/product/oradata/TST19OMS

Deviceid: /mnt/pmem12/omsuberfile.tst19oms

fsindex : 1

Mountpt : /home/oracle/product/redo/TST19OMS

Deviceid: /mnt/pmem13/omsuberfile.tst19oms

12. Create an init file for the instance, which we will use to start the target instance in NOMOUNT mode.

Below is a sample initialization file (inittst19oms.ora). From it, you can see that we are creating control files on a PMEM device. I am also using the db_file_name_convert and log_file_name_convert parameters to convert directory paths for the files that we will bring over from the active source database; the destination paths are on PMEM devices.

*.audit_file_dest='/home/oracle/product/admin/tst19oms/adump'

*.audit_trail='db'

*.compatible='19.0.0'

*.control_files='/home/oracle/product/oradata/TST19OMS/control1.ctl','/home/oracle/product/oradata/TST19OMS/control2.ctl'

*.db_block_size=8192

*.db_file_name_convert='/u01/app/oracle/oradata/TST19DB/TST19DB/datafile', '/home/oracle/product/oradata/TST19OMS', '/u01/app/oracle/oradata/TST19DB/TST19DB/D82AA715ADA54DD1E053A15F1F0A8DE5/datafile', '/home/oracle/product/oradata/TST19OMS'

*.log_file_name_convert='/u01/app/oracle/oradata/onlinelog/TST19DB/TST19DB/onlinelog', '/home/oracle/product/fast_recovery_area/TST19OMS/onlinelog','/u01/app/oracle/oradata/fast_recovery_area/TST19DB/TST19DB/onlinelog', '/home/oracle/product/fast_recovery_area/TST19OMS/onlinelog'

*.db_name='tst19oms'

*.db_unique_name='tst19oms'

*.db_recovery_file_dest='/home/oracle/product/fast_recovery_area'

*.db_recovery_file_dest_size=20g

*.diagnostic_dest='/home/oracle/product'

*.dispatchers='(PROTOCOL=TCP) (SERVICE=tst1dbXDB)'

*.enable_pluggable_database=true

*.local_listener='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<host_name>)(PORT=1522)))'

*.log_archive_format='%t_%s_%r.dbf'

*.nls_language='AMERICAN'

*.nls_territory='AMERICA'

*.open_cursors=300

*.pga_aggregate_target=1072m

*.processes=320

*.remote_login_passwordfile='EXCLUSIVE'

*.sga_target=3214m

*.undo_tablespace='UNDOTBS1'

13. Start up the instance in NOMOUNT mode:

$ cd $ORACLE_HOME/dbs

sqlplus / as sysdba

startup nomount pfile='inittst19oms.ora'

When the instance starts, watch for these messages in the alert log to confirm that Oracle Memory Speed (OMS) is in use:

2021-12-09T12:01:40.402435-08:00

Oracle instance running with ODM: OMS (Oracle memory speed) ODM Library, Version: 2.0 without DataGuard.
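
You can also check for this message from the shell; a sketch assuming the standard ADR layout under ORACLE_BASE:

$ grep -i "oracle memory speed" $ORACLE_BASE/diag/rdbms/tst19oms/tst19oms/trace/alert_tst19oms.log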

14. Make sure you can connect to the source database and to the PMEM DB (that we are going to create) using the same SYS password. You may want to create a static listener registration for the PMEM DB so that you can connect to it as SYS while the instance is in NOMOUNT mode.
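
A static registration in listener.ora might look like the following sketch; the host name is a placeholder, the ORACLE_HOME path is an assumption, and the port matches the local_listener entry in the sample init file above:

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = <host_name>)(PORT = 1522))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = tst19oms)
      (ORACLE_HOME = /home/oracle/product/19c/dbhome_1)
      (SID_NAME = tst19oms)
    )
  )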

15. We will use the RMAN ACTIVE DUPLICATE command to clone the source database to the PMEM DB.

We have a script that we will run for the DUPLICATE command. tst19db is the source database, running on a different host; tst19oms is the target database that will be the PMEM database.

connect target sys/<password>@tst19db;

connect auxiliary sys/<password>@tst19oms;

RUN {

ALLOCATE CHANNEL d1 TYPE DISK;

ALLOCATE CHANNEL d2 TYPE DISK;

ALLOCATE CHANNEL d3 TYPE DISK;

ALLOCATE CHANNEL d4 TYPE DISK;

ALLOCATE AUXILIARY CHANNEL aux1 TYPE DISK;

ALLOCATE AUXILIARY CHANNEL aux2 TYPE DISK;

ALLOCATE AUXILIARY CHANNEL aux3 TYPE DISK;

ALLOCATE AUXILIARY CHANNEL aux4 TYPE DISK;

DUPLICATE TARGET DATABASE TO TST19OMS

        FROM ACTIVE DATABASE

                NOFILENAMECHECK;

RELEASE CHANNEL d1;

RELEASE CHANNEL d2;

RELEASE CHANNEL d3;

RELEASE CHANNEL d4;

RELEASE CHANNEL aux1;

RELEASE CHANNEL aux2;

RELEASE CHANNEL aux3;

RELEASE CHANNEL aux4;

}

16. Once the database is cloned, let us validate the data files and their destination file system.

To validate that the datafiles are located on the right mount point, query the names and sizes of the datafiles.
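
For example, a quick query along these lines (a sketch) lists each datafile with its size; the paths should resolve under /home/oracle/product/oradata/TST19OMS:

$ sqlplus -s / as sysdba <<'EOF'
SELECT name, bytes/1024/1024 AS size_mb FROM v$datafile;
EOF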


Source: oracle.com

Sunday, March 27, 2022

How to Apply patch on DBCS with Data guard

Oracle Database Cloud Service (DBCS) runs Enterprise and Standard Edition databases on Virtual Machine (VM) DB Systems and enables you to easily build, scale, and secure Oracle databases cost-effectively in the cloud. Database patching is an important and critical subject. This article provides a walkthrough of the patching process for DBCS with Data Guard.

PATCHING

To patch DBCS with Data Guard, we need to first apply the patch on the standby database and then on the primary. Follow the steps below to apply the patch.

Friday, March 25, 2022

Multiple VM Autonomous Database on Exadata Cloud@Customer debuts

I am excited to announce the launch of Multiple VM Autonomous Database on Exadata Cloud@Customer. One of our customers' most frequently requested features has been the ability to create Autonomous Database VM clusters and Exadata Database VM clusters on a single Exadata Cloud@Customer infrastructure. With the launch of Multiple VM Autonomous Database, customers can now create multiple Autonomous Exadata VM Clusters and Exadata Database VM Clusters on all their existing Exadata Cloud@Customer platforms from X7 Gen 2 to the newest generation.

Oracle's Exadata Cloud@Customer is the world's simplest path for customers to realize cloud benefits for database: self-service API agility, pay-per-use financial model, high availability, security, and standardization that reduces business risk. It brings these cloud benefits to the customer, behind their firewall, inside their data centers, and fully managed by Oracle using cloud native APIs. Exadata Cloud@Customer allows organizations to modernize their database estate and take advantage of cloud benefits without changing anything at the application layer in their enterprise architecture while meeting security, governance, and regulatory requirements.

Oracle Autonomous Database is the most operationally complete and simple to use database service for developers and administrators of database applications. The service provides machine-learning driven touchless mission critical capabilities with automatic and dynamic scaling, performance, and security. The service is especially well suited for modern application architectures that utilize multiple data types, workloads, and analytic functions in a single solution.  

Autonomous Database on Exadata Cloud@Customer (ADB-C@C) was announced in July 2020, providing an operationally complete and simple-to-use database service with all the benefits of the cloud in customers’ data centers.

To take advantage of ADB-C@C, customers have to create four primary resources:

◉ Exadata Cloud@Customer infrastructure

◉ Autonomous Exadata VM Cluster

◉ Autonomous Container Database

◉ Autonomous Database

However, as shown in Figure 1, the original deployment architecture required customers to dedicate the entire platform to either an Autonomous Exadata VM Cluster or Exadata Database VM Clusters.

Figure 1: Autonomous Database on Exadata Cloud@Customer in 2020

With the announcement and general availability of Multiple VM Autonomous Database, customers can now create multiple Autonomous Exadata VM Clusters and Exadata Database VM Clusters on a single Exadata Cloud@Customer – as shown in Figure 2.

Figure 2: Multiple VM Autonomous Database on Exadata Cloud@Customer

Multiple VM Autonomous Database runs on a group of symmetric Virtual Machines that isolate highly available Autonomous Database Service instances and can be deployed alongside Exadata Database VM Clusters, running on a single Exadata Cloud@Customer infrastructure.

Each Autonomous VM Cluster supports separate network configuration, maintenance scheduling, license type selection (BYOL and License Included), and customizable memory, storage, and compute allocations on an Exadata Cloud@Customer to create and run Autonomous Databases.

Multiple VM Autonomous Database allows customers to provision isolated operational environments such as dev-test, staging, and production, each with its own access rules, quotas, and performance SLOs. This, in turn, enables organizations not only to migrate and modernize existing databases to gain cloud benefits, but also to create a self-service database application development platform that meets corporate governance standards, allowing internal developers to build new applications using autonomous databases that auto-tune, auto-scale, and auto-manage.

By employing multiple VM clusters on existing and new Exadata Cloud@Customer deployments customers gain multiple benefits:  

  • Single Infrastructure with both Exadata Database Service and Autonomous Database Service
    • Gain Autonomous Database experience using capacity on your existing infrastructure
    • Efficiently allocate resources for different workloads on the same physical resources
    • Incrementally upgrade all databases and conveniently migrate to Autonomous Database on the same Exadata Cloud@Customer
  • Lowest Cost to Adopt Autonomous Database
    • Set up Autonomous VM Clusters at no cost, enabling the creation of a cost-effective private Database as-a-Service (DBaaS) environment
    • Pay only for running Autonomous Database workloads
    • Leverage fractional CPU cores, auto-scale consumption as needed, and start/stop individual databases to reduce cost
  • Simplify new and existing workloads
    • Fully automate and optimize existing workloads
    • Provide developer self-service databases for creating new applications
  • Flexible license types
    • Use both BYOL and License Included Autonomous Databases on the same Exadata Cloud@Customer infrastructure
  • Defer production costs for critical deployments
    • Create and test Autonomous Data Guard between Autonomous Exadata VM Clusters on the same Cloud@Customer infrastructure
    • Enable dev/test use cases that require Autonomous Data Guard at a low cost
  • Enable specialized workloads
    • Customize compute, storage, and memory of each Autonomous Exadata VM Cluster configuration to optimally support different workloads
  • Secure environment separation
    • Network-isolated Autonomous Exadata VM Clusters provide enhanced security and predictable performance for specific workloads (dev-test, staging, and production)

OCI Console Experience


Create Autonomous Exadata VM Clusters

To create an Autonomous Exadata VM Cluster, navigate to the Autonomous Exadata VM Cluster list view page and select "Create Autonomous Exadata VM Cluster".

Each Autonomous Exadata VM Cluster resource is network-isolated by VLANs configured for the Exadata Cloud@Customer deployment. The networks are set up separately, and at least one VLAN should be configured before creating your first cluster resource.

While creating an Autonomous Exadata VM Cluster, you must allocate resources that will be used for Autonomous Container Databases and Autonomous Databases. Key resource configuration parameters:

◉ Number of Autonomous Container Databases you plan to create in the Autonomous Exadata VM Cluster - Local storage is automatically allocated based on this value

◉ OCPU count per node – sets OCPUs per node in the Autonomous VM Cluster for Autonomous Databases

◉ Database memory per OCPU – sets total memory in the VM cluster for Autonomous Database workloads based on total OCPU allocation

◉ Autonomous Database Storage – User data storage for your Autonomous Databases

Resource configuration sliders default to the minimum values needed for the Autonomous Exadata VM Cluster.

Once the resource configuration parameters are set, the aggregate resources needed to create the Autonomous Exadata VM Cluster and the formulas used to calculate those values are displayed on the right side of the resource configuration section.


Each Autonomous Exadata VM Cluster has a separate maintenance schedule. Select the "Modify Maintenance" button to configure your Autonomous Maintenance preference. Set your maintenance schedule and click "Save Changes".


Note: Once the maintenance runs are scheduled, you can skip or reschedule them.

Select the license type and click "Create Autonomous Exadata VM Cluster". With Multiple-VM Autonomous Database, you can have Autonomous Databases with different license types on the same Exadata Cloud@Customer infrastructure.
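
The same operation can also be scripted. A hedged OCI CLI sketch (all OCIDs are placeholders, and the parameter set should be verified against the current CLI reference):

$ oci database autonomous-vm-cluster create \
    --compartment-id ocid1.compartment.oc1..example \
    --display-name "adb-vm-cluster-1" \
    --exadata-infrastructure-id ocid1.exadatainfrastructure.oc1..example \
    --vm-cluster-network-id ocid1.vmclusternetwork.oc1..example \
    --license-model BRING_YOUR_OWN_LICENSE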

Note: The console experience for creating Exadata Cloud@Customer, Autonomous Container Database and Autonomous Database does not change with Multiple VM Autonomous Database functionality.

Availability


Console, SDK, CLI, and Terraform support for Multiple-VM Autonomous Database is available starting March 16, 2022 in all OCI commercial regions.

Source: oracle.com

Wednesday, March 23, 2022

Use Oracle Enterprise Manager data with OCI to unlock new insights

Oracle Enterprise Manager 13.5 customers gain greater operations, capacity planning, and forecasting insight by enabling rich Enterprise Manager target-level data to be shared with Oracle Cloud Observability & Management platform services. Enterprise Manager transfers data from managed targets and the Oracle Management Repository (OMR) to OCI Object Storage, where it is accessible by Oracle Cloud O&M services like Operations Insights, Logging Analytics, etc.

Gain greater insight from the data Enterprise Manager already collects  

Oracle Cloud enables new storage, processing, and machine learning technologies to help customers solve more business problems using target-level data already collected and managed by Enterprise Manager. It helps eliminate custom scripting, warehousing building, or the need to have extra hardware or storage onsite to perform IT Operational analytics and planning.

Customers configure Oracle Enterprise Manager to transfer data from the targets it manages and the Oracle Management Repository (OMR) into Oracle Cloud Infrastructure (OCI) Object Storage, enabling it to be securely accessible by Oracle Cloud services. Once OCI connectivity is set up, data automatically and routinely uploads to OCI Object Storage to enable a data lake. IT Operations and DBAs then use OCI Operations Insights to perform capacity planning and forecasting activities on Enterprise Manager data and other sources. Prerequisites for this capability: customers should already have the Oracle Diagnostics Pack licensed (or add it) and should have an Oracle Cloud account set up with credentials and connectivity, as detailed in the EM 13.5 documentation.

The following graphic illustrates how target data flows from Enterprise Manager to an Oracle Cloud service once the configuration process has been completed.

Figure 1: Oracle Enterprise Manager to Oracle Cloud Flow

Set up and configure Enterprise Manager target data flows to an Oracle Cloud service


Step 1: Export Enterprise Manager data to OCI

For target data to move from Enterprise Manager to OCI, create an OCI Bridge in Enterprise Manager. The OCI Bridge defines a data connection to the OCI Object Storage bucket residing in the Oracle Cloud. The user needs Super Administrator privileges for this one-time setup.

Log in to Enterprise Manager and from the Setup menu, choose Export EM Data to OCI.

Figure 2: Choosing Export EM Data to OCI
 
Then the Manage EM Data for OCI Services page displays. Note that it will be empty the first time you access it. Click the Enable Data Export tab.

Figure 3: Managing EM Data to OCI Services

The OCI Service Data Export dialog displays as follows:

Figure 4: Setting up an OCI Bridge

Click the + to the right of the Source text field and the Select Group dialog displays.

Then, select a group that contains the targets for which you want data exported to OCI and then click Select Target Group. Either select an existing OCI Bridge from the drop-down menu or click the + to create a new OCI Bridge as shown below.

Figure 5: Configuring an OCI Bridge
 
Enter the OCI properties.

The Save As field displays an auto-generated bridge name (you can optionally change the name). Click the Test button to the right of the field to validate the properties you entered.

When you are all done with your entries, click Submit to create the OCI Bridge. Note that only one OCI Bridge per OCI service is allowed.

Step 2: Import Data from the Object Storage Bucket to the OCI Service

Once you've set up an OCI Bridge to enable data to move from Enterprise Manager to the OCI Object Storage bucket, you will need to create an EM Bridge to move Enterprise Manager target data from the Object Storage bucket to your OCI service for processing.

Set up the OCI Bridge and target groups in OCI to validate that the configuration worked


Once you've successfully created an OCI Bridge in Enterprise Manager and selected one or more target groups for data export to OCI, the Manage EM Data for OCI Services dashboard will show the current upload status for each service including the groups that have been added for each service.

Figure 6: Validate the configuration works

Connecting EM to Oracle Cloud enables data lake capabilities so users can harvest rich metric, database telemetry, SQL, and other configuration data for operational analytic reporting and forecasting use-cases.

Bridge

◉ Connects EM with OCI - harvesting metric, SQL, and configuration data for deep operational analytics

◉ Provides visibility and analytics across multiple EM installations

Data Lake

◉ Automated Extract, Transform, Load (ETL) of EM data to Oracle Cloud

◉ Enables capacity planning analysis, trending, and forecasting on long-term metrics and configuration data

◉ Allows other cloud services such as OPSI to provide visibility into the performance of critical workloads including Exadata, Oracle Database, and more

Source: oracle.com

Monday, March 21, 2022

Enhancing Database and Apps Troubleshooting with ML Driven Log Analytics using EM OCI Bridge

Enterprise applications are becoming complex by the day and the ability to analyze log data generated by different systems easily is critical for ensuring smooth business operations.

Oracle Enterprise Manager is Oracle’s premier monitoring solution for on-premises and cloud environments. Oracle Cloud Infrastructure services can take advantage of this valuable set of telemetry data to perform further resource analysis.  Oracle Cloud Logging Analytics is a cloud solution that allows you to index, enrich, aggregate, explore, search, analyze, correlate, visualize and monitor all log data from your applications and system infrastructure on cloud or on-premises. 

Enterprise Manager (EM) users can now set up a cloud bridge to import their target model data, such as associations and property information, into Oracle Cloud Logging Analytics and start analyzing logs in a few steps.

EM collects detailed information from managed targets, from the configuration necessary to locate log files to the hierarchical relationships between entities that enable topology views of applications and infrastructure components. The Logging Analytics EM integration automatically builds the topology view across the application stack to help app, IT, and database admins troubleshoot availability and performance issues quickly.

Figure 1: Oracle Enterprise Manager OCI Bridge and OCI O&M Services

OCI Bridge


◉ The EM targets which represent the on-premises resources are equivalents of entities in Oracle Cloud Logging Analytics. Therefore, you can leverage the discovered targets in EM to achieve faster onboarding in Logging Analytics.

◉ The topology in EM, including that of complex composite targets is retained and reproduced in Oracle Cloud Logging Analytics. 

◉ Relevant target metadata in EM is stored as entity properties in Oracle Cloud Logging Analytics enabling better monitoring of those EM resources.

◉ The entity properties are periodically updated with the target information that is transferred from EM to OCI Object Storage, and in turn, collected by Oracle Cloud Logging Analytics using an EM Bridge.

Below is the high-level architecture and steps for integrating Enterprise Manager with OCI Logging Analytics:

Figure 2: EM and Logging Analytics Cloud Bridge Architecture

1. Configure OCI Logging Analytics EM Bridge & OCI Object Storage Bucket.

2. Configure EM to start sending target information to the object storage bucket in your tenancy.

3. Enable logs collection for the discovered entities in Logging Analytics using Management Agent

Once set up, entities will be automatically created in Logging Analytics along with their relationships and application tiers.

Figure 3: Auto-created entities in Logging Analytics, collected from EM Cloud Bridge

The next step is to enable log collection for the discovered entities in Logging Analytics using the new Add Data guided wizard.

Pro Tip: You can enable auto-association of each source for any entities discovered in the future.

Figure 4: Enabling log collection for auto-created entities

At this point, log collection will start from the EM targets into Oracle Cloud Logging Analytics. You can view the logs in the log explorer, and you’ll also see a topology view of the targets harvested from EM. Below is an example of an E-Business Suite topology automatically created in Logging Analytics after configuring the cloud bridge.

Figure 5: Example topology view of an E-Business Suite environment harvested from EM

The entity topology view in the scope filters can be used to visually select entities for filtering logs, and understanding their relationships while troubleshooting issues.

Figure 6: Sample Application Dashboard

Source: oracle.com

Friday, March 18, 2022

Difference between Structured, Semi-structured and Unstructured data


Big Data involves huge volume, high velocity, and an extensible variety of data. It comes in three types: structured data, semi-structured data, and unstructured data.

1. Structured data

Structured data is data whose elements are addressable for effective analysis. It has been organized into a formatted repository, typically a database. It covers all data that can be stored in a SQL database in tables with rows and columns. Structured data has relational keys and can easily be mapped into pre-designed fields. Today, it is the most processed and simplest-to-manage kind of data. Example: relational data.

2. Semi-Structured data

Semi-structured data is information that does not reside in a relational database but has some organizational properties that make it easier to analyze. With some processing, you can store it in a relational database (though this can be very hard for some kinds of semi-structured data), but the semi-structured form exists to ease storage. Example: XML data.
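
For instance, a small XML document (an illustrative sample) carries its structure in nested tags rather than in a fixed table schema:

<person>
  <name>Alice</name>
  <hobbies>
    <hobby>reading</hobby>
    <hobby>cycling</hobby>
  </hobbies>
</person>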

3. Unstructured data

Unstructured data is data that is not organized in a predefined manner and does not have a predefined data model; thus, it is not a good fit for a mainstream relational database. Alternative platforms exist for storing and managing unstructured data, which is increasingly prevalent in IT systems and is used by organizations in a variety of business intelligence and analytics applications. Examples: Word, PDF, text, and media logs.

Differences between Structured, Semi-structured and Unstructured data: 

◉ Technology: structured data is based on relational database tables; semi-structured data is based on XML/RDF (Resource Description Framework); unstructured data is based on character and binary data.

◉ Transaction management: structured data has mature transaction support and various concurrency techniques; semi-structured data has transaction support adapted from the DBMS, not matured; unstructured data has no transaction management and no concurrency.

◉ Version management: structured data supports versioning over tuples, rows, and tables; semi-structured data allows versioning over tuples or graphs; unstructured data is versioned as a whole.

◉ Flexibility: structured data is schema-dependent and less flexible; semi-structured data is more flexible than structured data but less flexible than unstructured data; unstructured data is the most flexible, with no schema at all.

◉ Scalability: scaling a structured DB schema is very difficult; scaling semi-structured data is simpler; unstructured data is the most scalable.

◉ Robustness: structured data is very robust; semi-structured data relies on newer, less widespread technology; not applicable for unstructured data.

◉ Query performance: structured queries allow complex joins; semi-structured data allows queries over anonymous nodes; unstructured data supports only textual queries.

Source: geeksforgeeks.org

Wednesday, March 16, 2022

Oracle Zero Downtime Migration 21.3


Zero Downtime Migration (ZDM) 21.3 is available for download! ZDM is Oracle’s premier solution for moving Oracle Database workloads to Oracle Cloud, supporting various Oracle Database versions as the source and most Oracle Cloud Database Services as targets. Zero Downtime Migration 21.3 enhances the existing functionality by adding online cross-platform migration, Standby Databases as a source, Data Guard Broker Integration, and many more features!


What’s New in Oracle Zero Downtime Migration 21.3


Physical Online Migration Standby Database Support

ZDM 21.3 enhances the Physical Online migration workflow by introducing Standby Database support, thus minimizing any impact on primary databases. ZDM leverages the standby database to instantiate the target cloud database via a restore-from-service mechanism. The rest of the migration workflow continues to leverage Oracle Data Guard, using the original on-premises primary database for synchronization and switchover purposes.

Oracle Data Guard Broker Support for Physical Online Migration

The Oracle ZDM Physical Online Migration workflow now includes Data Guard Broker support. You benefit directly from ZDM orchestration and the Data Guard Broker’s automation, which complements the database switchover process. Data Guard Broker usage is fully customizable and can be adjusted case by case.

Migration Enhancements from Amazon Web Services Oracle RDS Sources

Support for source Oracle Databases in Amazon Web Services (AWS) RDS has expanded. You can now use ZDM Logical Online Migration with Data Pump and GoldenGate, migrate to new targets like DBCS and Exadata Cloud Service, and leverage Oracle’s Cloud Premigration Advisor Tool (CPAT).

Online Migration from Solaris & AIX based Source Databases

ZDM 21.3 enhances support for cross-platform cloud migration for source Oracle Databases hosted on Oracle Solaris or IBM AIX operating systems. You can now leverage the logical online workflow and Oracle Autonomous Database and Co-managed Cloud Oracle Databases as targets.

Logical Online Migration to Exadata On-Premises

Oracle ZDM Logical Online workflow with Data Pump and GoldenGate can now be leveraged to migrate to On-Premises Exadata Targets. This methodology provides a platform for in-flight upgrades, hardware refresh, and cross-platform migration. Usage of Oracle GoldenGate in this workflow requires an On-Premises GoldenGate Hub with a customer-provided GoldenGate license.

Logical Migration Job Suspension and Update Enhancements

ZDM’s logical workflow now includes the ability to pause an in-flight job with the zdmcli suspend job command. Furthermore, you can now update an Oracle GoldenGate Extract or Replicat configuration while the ZDM job runs.
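
For instance, a hedged sketch (the job ID is a placeholder, and the exact syntax should be checked against the ZDM CLI reference):

$ zdmcli suspend job -jobid 7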
 
Logical Migration Enhancements

You can now specify multiple schemas, and ZDM will perform the migration in parallel; different schemas, or all schemas of the same database, can be selected simultaneously. In addition, Zero Downtime Migration 21.3 introduces the capability to automatically retry specific Data Pump jobs in case of failure; previously, a Data Pump failure would cause the whole migration to fail.

Customization Enhancements and other Fixes

ZDM now registers with the Source Database feature usage tracking API. You can directly query the Source Database to track all features, including ZDM, utilized during the migration.

Before release 21.3, ZDM required the target database in Exadata Cloud@Customer migrations to be registered with the OCI Database Service, prompting API calls to OCI. You can now perform logical migrations to ExaC@C without OCI REST API OCID calls.

Source: oracle.com