Wednesday, September 29, 2021

Top 7 Databases You Must Know For Software Development Projects

A database is like a room in an office where all the files and important information related to a project are stored. Every company needs a database to store and organize its information. Because the stored information can be very sensitive, we always have to be careful when accessing or manipulating it. Choosing the right database depends entirely on the purpose of the project, and over the years programmers and industry specialists have gravitated toward the databases that best fulfil their requirements.


Now, if you wonder which databases are the most popular in the world, then according to the recent ranking by DB-Engines, below is the list…

Image Source: DB-Engines

1. Oracle


Oracle is the most popular RDBMS, written in assembly language, C, C++, and Java. The current version of Oracle Database is 19c, though a lot of organizations are still using 11g and 12c. It is a very powerful, secure database with well-written documentation. It takes less space and processes data quickly, and you can also find good newer features such as JSON from SQL. Some of its other features are given below…


◉ Oracle provides functionality for Cloud, Document Store, Key-value storage, Graph DBMS, RDF stores, and BLOBs.
◉ It fulfills the requirements in the areas of performance, scalability, high availability, security, data warehousing, etc.
◉ It supports multiple Windows, UNIX, and Linux versions.

2. MySQL


MySQL is a very popular open-source RDBMS used by most major tech companies, such as Facebook, Google, Twitter, and Adobe. Oracle acquired it as part of its acquisition of Sun Microsystems, announced in 2009. It uses Structured Query Language and is written in C and C++. The latest version of MySQL is 8.0, which has a better recovery option. MySQL offers different features in different editions (Enterprise Edition, Standard Edition, and Classic Edition). Some of the good features of MySQL are given below…

◉ It is widely used in web development because it gives high performance and is secure, flexible, and reliable.
◉ It supports C, C++, Java, Perl, PHP, Python, and Tcl for client programming.
◉ It supports Unicode, replication, transactions, full-text search, triggers, and stored procedures.
◉ MySQL supports large databases, up to 50 million rows or more in a table.
◉ MySQL can run on Linux, Windows, OS X, FreeBSD, and Solaris.

3. Microsoft SQL Server


This database was launched in 1989 and is also one of the most popular relational database management systems (RDBMS) in the world. It is written in C and C++ and supports Structured Query Language. The latest version is SQL Server 2019. It works well with Microsoft products and is available on both Windows and Linux platforms. There are several editions of this database, such as Azure SQL Database (the cloud-based version), Compact Edition, Enterprise Edition (preferred by most companies), and Developer Edition. Some of the main features are given below…

◉ It is a platform-dependent, high-performance database.
◉ It uses data compression techniques, so you need to worry less about storage space.
◉ It can be integrated with non-relational sources like Hadoop.
◉ For security-related concerns, it offers row-level security, dynamic data masking, transparent data encryption, and robust auditing.
◉ It comes with custom-built graphical integration that saves users a lot of time.
◉ The Object Explorer feature allows users to view the creation of tables.
◉ Users can create designs and tables and view data without writing syntax.
◉ It manages workloads efficiently and allows multiple users to use the same database.

4. PostgreSQL


This database is also an open-source object-relational DBMS, but users are free to create NoSQL databases with it. It is written in C, and its popularity is increasing day by day. It is ideal for companies that frequently deal with large volumes of data; a lot of gaming apps, database automation tools, and domain registries use it, as do companies such as Apple (the macOS Server operating system uses this database), Cisco, Fujitsu, Skype, and IMDb. PostgreSQL runs on many operating systems, including Windows, Linux, Solaris, and Mac OS X. It is a good fit for single-machine applications, large internet-facing applications, and everything in between. It is also good for building fault-tolerant environments, managing data, and protecting data integrity. Let's discuss some more features…

◉ High scalability, predefined functions, support for JSON, easy data portability, multiple interfaces.
◉ It provides support for tablespaces, as well as for stored procedures, joins, views, triggers, etc.
◉ Security and disaster recovery features.
◉ Extensibility through stored functions and procedures, procedural languages, and foreign data wrappers.
◉ Allows you to create custom data types and query methods.
◉ Robust, secure and fast.

5. MongoDB


MongoDB is a cross-platform NoSQL database written in C++, C, and JavaScript. You can use it for mobile apps, real-time analytics, and IoT, and it can provide a real-time view of all your data. MongoDB is a high-speed database that stores data in the form of JSON-style documents. MongoDB uses internal memory, so the data is easily accessible, and you can process a large amount of data simultaneously.

◉ Fast, easy to use, auto-sharding, deployment flexibility, high performance, high availability and easy scalability.
◉ Supports JSON, and the schema can be changed without downtime.
◉ Easy to administer in the case of failures.
◉ For data migrations, it provides complete deployment flexibility.
◉ Queries can be easily optimized for output.
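
To make the document model concrete, here is a minimal sketch of what a JSON-style document looks like. It uses only Python's standard library, and the field names are invented for illustration:

```python
import json

# A JSON-style document: nested fields and arrays live in a single
# record, rather than being split across relational tables.
# (Field names are invented for illustration.)
employee_doc = {
    "empid": 10001,
    "name": "Rahul",
    "skills": ["python", "sql"],
    "address": {"city": "Delhi", "zip": "110001"},
}

# MongoDB stores and exchanges such documents in a JSON-like form:
print(json.dumps(employee_doc, indent=2))
```

Because the whole record travels as one document, related data can be read or written without joins.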

6. IBM DB2


The latest release of DB2 is 11.5 which runs queries faster. This database supports the relational model but in recent years products have been extended to support object-relational features and non-relational structures like JSON and XML. The database offers AI-dedicated capabilities that are designed to manage and structure complex data. Some of its good features are given below.

◉ It supports private as well as cloud environments.
◉ It supports most of the data science languages to handle simple or complex frameworks.
◉ It supports multiple Windows, UNIX, and Linux versions.
◉ Easy to install and easily accessible.
◉ DB2 has different server editions which are designed for specific tasks.

7. Elasticsearch


Elasticsearch is a search engine based on the Lucene library. It is a distributed, open-source full-text search and analytics engine that is multitenant-capable, with an HTTP web interface and schema-free JSON documents. A lot of big organizations, such as Wikipedia, The Guardian, Stack Overflow, and GitHub, use it. Some of the key features are given below…

◉ It allows you to create replicas of your indexes and shards.
◉ It is scalable up to petabytes of structured and unstructured data.
◉ Multi tenancy and extremely fast search.
◉ Java-based and designed to operate in real-time.
◉ Document oriented with a higher performance result.

Source: geeksforgeeks.org

Monday, September 27, 2021

Oracle Database Connection in Python


Sometimes, as part of programming, we need to work with databases because we want to store huge amounts of information, so we use databases such as Oracle, MySQL, etc. In this article, we will discuss connecting to an Oracle database using Python. This can be done through the module named cx_Oracle.

Oracle Database 

For communicating with any database through our Python program, we require a connector, which is nothing but the cx_Oracle module.

Read More: 1Z0-067: Upgrade Oracle 9i/10g/11g OCA to Oracle Database 12c OCP

For installing cx-Oracle : 

If you are using Python >= 3.6 use the below command in Linux: –

pip install cx-Oracle

If you are using Python >= 3.6 use the below command in Windows: –

py -m pip install cx-Oracle

With this command you can install the cx-Oracle package, but you must install Oracle Database on your PC first.

◉ Import database specific module 

Ex. import cx_Oracle

◉ connect(): Now Establish a connection between the Python program and Oracle database by using connect() function. 

con = cx_Oracle.connect('username/password@localhost')

◉ cursor(): To execute a SQL query and fetch its results, a special object is required, which is nothing but the cursor object. Note that it is created from the connection, not from the cx_Oracle module:

cursor = con.cursor()

◉ execute/executemany method :

cursor.execute(sqlquery) – – – -> to execute a single query.

cursor.executemany(sqlqueries) – – – -> to execute the same query multiple times with different sets of bind variables/placeholders.

◉ commit(): DML (Data Manipulation Language) queries comprise operations like update, insert, and delete. We need to call commit(); only then do the changes reflect in the database.

◉ fetchone(), fetchmany(int), fetchall():

1. fetchone() : This method is used to fetch one single row from the top of the result set.

2. fetchmany(int): This method is used to fetch a limited number of rows based on the argument passed in it.

3. fetchall() : This method is used to fetch all rows from the result set.

◉ close(): After everything is done, it is mandatory to close all operations.

cursor.close()

con.close()

Execution of SQL statement: 

1. Creation of table

# importing module
import cx_Oracle

# Create a table in the Oracle database
con = None
cursor = None
try:
    con = cx_Oracle.connect('tiger/scott@localhost:1521/xe')
    print(con.version)

    # Now execute the sqlquery
    cursor = con.cursor()

    # Creating a table employee
    cursor.execute("create table employee(empid integer primary key, name varchar2(30), salary number(10, 2))")
    print("Table Created successfully")

except cx_Oracle.DatabaseError as e:
    print("There is a problem with Oracle", e)

# by writing finally, we can close all database
# operations even if an error occurs
finally:
    if cursor:
        cursor.close()
    if con:
        con.close()

Output: 

Table Created successfully

DDL statements don't need to be committed; they are committed automatically. In the above program, I have used the execute() method to execute a SQL statement.

2. Inserting a record into table using execute() method

# importing module
import cx_Oracle

# Inserting a record into a table in Oracle database
con = None
cursor = None
try:
    con = cx_Oracle.connect('tiger/scott@localhost:1521/xe')
    cursor = con.cursor()
    # con.autocommit = True
    # Inserting a record into table employee
    cursor.execute("insert into employee values(10001, 'Rahul', 50000.50)")

    # commit() to make changes reflect in the database
    con.commit()
    print('Record inserted successfully')

except cx_Oracle.DatabaseError as e:
    print("There is a problem with Oracle", e)

# by writing finally, we can close all database
# operations even if an error occurs
finally:
    if cursor:
        cursor.close()
    if con:
        con.close()

Output: 

Record inserted successfully

Once we execute any DML statement, we must commit the transaction. You can commit a transaction in two ways:

1. con.commit(): commits a transaction manually.
2. con.autocommit = True: commits every transaction automatically.

3. Inserting multiple records into a table using executemany() method

import cx_Oracle

# Load data into an Oracle table using executemany
con = None
try:
    con = cx_Oracle.connect('tiger/scott@localhost:1521/xe')

except cx_Oracle.DatabaseError as er:
    print('There is an error in Oracle database:', er)

else:
    cur = None
    try:
        cur = con.cursor()
        data = [[10007, 'Vikram', 48000.0], [10008, 'Sunil', 65000.1], [10009, 'Sameer', 75000.0]]

        # Inserting multiple records into employee table
        # (:1,:2,:3) are placeholders. They pick data from the list supplied as argument
        cur.executemany('insert into employee values(:1,:2,:3)', data)

    except cx_Oracle.DatabaseError as er:
        print('There is an error in Oracle database:', er)

    except Exception as er:
        print(er)

    else:
        # To commit the transaction manually
        con.commit()
        print('Multiple records are inserted successfully')

    finally:
        if cur:
            cur.close()
        if con:
            con.close()

Output:  

Multiple records are inserted successfully

There might be times when you need to execute a SQL statement multiple times with different values supplied each time. This can be achieved using the executemany() method. We supply a list of lists, where each inner list provides the values that replace the placeholders in the SQL query.

From the above case

◉ :1 is substituted by value 10007
◉ :2 is substituted by value ‘Vikram’
◉ :3 is substituted by value 48000.0

And so on, for each subsequent list of values in the supplied list.

Similarly, you can supply a list of dictionaries. In that case, instead of positional placeholders, we use named bind variables (discussed later).
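
For instance, positional rows can be reshaped into the dictionaries that named bind variables expect. This is a sketch: rows_to_binds() is a hypothetical helper, not part of cx_Oracle, and the bind names empid, name, and salary are assumptions matching the employee table used in this article:

```python
# Sketch: building the list of dictionaries that executemany() accepts
# when the SQL uses named bind variables such as :empid, :name, :salary.
# rows_to_binds() is a hypothetical helper, not a cx_Oracle API.

def rows_to_binds(rows, names):
    """Pair each row's values with the given bind names."""
    return [dict(zip(names, row)) for row in rows]

rows = [[10010, 'Asha', 52000.0], [10011, 'Ravi', 61000.0]]
binds = rows_to_binds(rows, ('empid', 'name', 'salary'))
print(binds[0])  # {'empid': 10010, 'name': 'Asha', 'salary': 52000.0}

# With a live connection, these could then be inserted as:
# cur.executemany('insert into employee values(:empid, :name, :salary)', binds)
```

Each dictionary's keys must match the bind variable names used in the SQL statement.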

4. View result set from a select query using fetchall(), fetchmany(int), fetchone()

import cx_Oracle

con = None
try:
    con = cx_Oracle.connect('tiger/scott@localhost:1521/xe')

except cx_Oracle.DatabaseError as er:
    print('There is an error in the Oracle database:', er)

else:
    cur = None
    try:
        cur = con.cursor()

        # fetchall() is used to fetch all records from the result set
        cur.execute('select * from employee')
        rows = cur.fetchall()
        print(rows)

        # fetchmany(int) is used to fetch a limited number of records
        # from the result set, based on the integer argument passed in it
        cur.execute('select * from employee')
        rows = cur.fetchmany(3)
        print(rows)

        # fetchone() is used to fetch one record from the top of the result set
        cur.execute('select * from employee')
        rows = cur.fetchone()
        print(rows)

    except cx_Oracle.DatabaseError as er:
        print('There is an error in the Oracle database:', er)

    except Exception as er:
        print('Error:' + str(er))

    finally:
        if cur:
            cur.close()

finally:
    if con:
        con.close()

Output:

[(10001, 'Rahul', 50000.5), (10002, 'Sanoj', 40000.75), (10003, 'Soumik', 30000.25), (10004, 'Sayan', 45000.0), (10005, 'Sobhan', 60000.1), (10006, 'Gururaj', 70000.0), (10007, 'Vikram', 48000.0), (10008, 'Sunil', 65000.1), (10009, 'Sameer', 75000.0)]
[(10001, 'Rahul', 50000.5), (10002, 'Sanoj', 40000.75), (10003, 'Soumik', 30000.25)]
(10001, 'Rahul', 50000.5)

In the above program, we have used 3 methods 

1. fetchall() : The fetchall() is used to fetch all records from the result set.

2. fetchmany(int) : The fetchmany(int) is used to fetch the limited number of records from the result set based on the integer argument passed in it.

3. fetchone() : The fetchone() is used to fetch one record from the top of the result set.

5. View result set from a select query using bind variable

import cx_Oracle

con = None
try:
    con = cx_Oracle.connect('tiger/scott@localhost:1521/xe')

except cx_Oracle.DatabaseError as er:
    print('There is an error in the Oracle database:', er)

else:
    cur = None
    try:
        cur = con.cursor()
        cur.execute('select * from employee where salary > :sal', {'sal': 50000})
        rows = cur.fetchall()
        print(rows)

    except cx_Oracle.DatabaseError as er:
        print('There is an error in the Oracle database:', er)

    except Exception as er:
        print('Error:', er)

    finally:
        if cur:
            cur.close()

finally:
    if con:
        con.close()

Output:

[(10001, 'Rahul', 50000.5), (10005, 'Sobhan', 60000.1), (10006, 'Gururaj', 70000.0),
 (10008, 'Sunil', 65000.1), (10009, 'Sameer', 75000.0)]

In this case, I have passed a dictionary to the execute() method. The dictionary contains the name of the bind variable as a key and its corresponding value. When the SQL query is executed, the value for each key is substituted in place of the corresponding bind variable.

Friday, September 24, 2021

Difference between Centralized Database and Distributed Database

1. Centralized Database

A centralized database is a database that is stored, located, and maintained at a single location only. This type of database is modified and managed from that location itself, which is typically a database system or a centralized computer system. The centralized location is accessed via a network connection (LAN, WAN, etc.). Centralized databases are mainly used by institutions and organizations.


Advantages – 

◉ Since all data is stored at a single location only thus it is easier to access and coordinate data.

◉ The centralized database has very minimal data redundancy since all data is stored in a single place.

◉ It is cheaper in comparison to all other databases available.

Disadvantages – 

◉ The data traffic in the case of a centralized database is higher.

◉ If any kind of system failure occurs at the centralized system, then the entire data will be lost.

2. Distributed Database


A distributed database is a database that consists of multiple databases connected with each other and spread across different physical locations. The data stored at each physical location can be managed independently of the other locations. Communication between databases at different physical locations is done over a computer network.


Advantages – 

◉ This database can be easily expanded as data is already spread across different physical locations.

◉ The distributed database can easily be accessed from different networks.

◉ This database is more secure in comparison to centralized database.

Disadvantages – 

◉ This database is very costly and it is difficult to maintain because of its complexity.

◉ In this database, it is difficult to provide a uniform view to the user since it is spread across different physical locations.

Difference between Centralized database and Distributed database : 

◉ A centralized database is stored, located, and maintained at a single location only, whereas a distributed database consists of multiple interconnected databases spread across different physical locations.
◉ The data access time for multiple users is higher in a centralized database and lower in a distributed database.
◉ Management, modification, and backup of a centralized database are easier, as the entire data is present at the same location; for a distributed database they are very difficult, as it is spread across different physical locations.
◉ A centralized database provides a uniform and complete view to the user; since a distributed database is spread across different locations, it is difficult for it to provide a uniform view.
◉ A centralized database has more data consistency; a distributed database may have some data replications, so data consistency is lower.
◉ If a centralized database fails, users cannot access it at all; in a distributed database, if one database fails, users still have access to the other databases.
◉ A centralized database is less costly; a distributed database is very expensive.

Source: geeksforgeeks.org

Wednesday, September 22, 2021

Fine grained Network Access Control for Oracle Autonomous Database on Exadata Cloud@Customer

Network perimeter security is the primary method for securing cloud resources. This is generally done by creating virtual networks/subnets, security lists, and firewalls. For Oracle multi-tenant databases deployed on Exadata Cloud@Customer, there is now an additional layer of network security available: Access Control Lists at the pluggable database level. This means that while the Exadata Infrastructure may be deployed in a private customer subnet, with access governed by the security rules associated with that subnet, each ADB or pluggable database can have its own set of access rules. This is done by defining an Access Control List at the time of provisioning an ADB, or at a later stage if desired. An Access Control List can contain one or more IP addresses or CIDR blocks.

More Info: 1Z0-888: MySQL 5.7 Database Administrator

Typically, an Autonomous Container Database (ACD) may host multiple Autonomous Databases (ADBs), providing a higher degree of consolidation and cost efficiency by leveraging service features such as online auto-scaling. By defining an Access Control List (ACL) for each ADB, you now have much better control over which specific SQL clients or users can access the database.


As an example, take an Exadata Cloud@Customer deployed in a customer data center in a private network with CIDR 10.22.0.0/16. The network security rules may be set up to allow traffic from any host within this corporate network, but by setting up ACLs at the database level you can now allow or disallow traffic from specific applications or client users. This has other security benefits, such as:

a) Blocking ad-hoc users from accessing production databases directly (even if they have the right credentials).

b) Allowing client connections to certain databases from office locations only (although I cannot imagine such an ACL rule in these times).

c) If you are providing Database as a Service to your organization using Autonomous (which is a great use case, by the way), each database can be associated with a line of business, application, or project team.

Let's take a look at how to set up ACLs.

We'll do this from the database console in Oracle Cloud Infrastructure, but you can easily automate the process using the robust set of REST APIs available for each action.
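
As a sketch of what such automation might check before calling the API (this helper is an assumption, not part of Oracle's SDK; it only validates that each ACL entry is a single IP address or a CIDR block, as described above, using Python's standard library):

```python
import ipaddress

def validate_acl_entries(entries):
    """Return normalized ACL entries, raising ValueError on malformed input."""
    normalized = []
    for entry in entries:
        if '/' in entry:
            # CIDR block, e.g. the 10.22.0.0/16 corporate network above;
            # strict=True rejects entries with host bits set
            normalized.append(str(ipaddress.ip_network(entry, strict=True)))
        else:
            # Single client IP address
            normalized.append(str(ipaddress.ip_address(entry)))
    return normalized

print(validate_acl_entries(['10.22.1.15', '10.22.0.0/16']))
```

Validating entries client-side avoids submitting an update call that the service would reject, and catches the empty-list case mentioned later before it locks you out.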

1. Set up ACLs at the time of your Autonomous Database provisioning


You are provided the choice to enable network ACLs on the provisioning page as shown below. 


Next, you simply provide your list of IP Addresses or CIDR ranges to allow connections from.


Make sure you do not leave this list empty once you enable ACLs, or else your database may be inaccessible (although you can always come back and edit it).

2. Setup or modify ACLs for existing ADBs


If you wish to set up ACLs for your existing databases, or change a rule later, you can easily do that from the database console.


... and in the event you clone your database (a nifty feature in Autonomous), your ACLs carry over to the clone, and once again you may edit them as desired.


Source: oracle.com

Monday, September 20, 2021

Multitenant : Unplug/Plugin PDB Upgrade to Oracle Database 21c (AutoUpgrade)


This article provides an overview of upgrading an existing PDB to Oracle 21c on the same server using AutoUpgrade Unplug/Plugin. Upgrades can be very complicated, so you must always read the upgrade manual, and test thoroughly before considering an upgrade of a production environment.

◉ Assumptions

This article is focused on upgrading a pluggable database using unplug/plugin. If you want to upgrade a CDB and all PDBs directly, you need to follow this article.

Multitenant : Upgrade to Oracle Database 21c (AutoUpgrade)

This article assumes your source database is of a version supported for direct upgrade to 21c.

19c, 18c, 12.2

In this example we are doing an upgrade from 19c multitenant to 21c. The process is very similar for all supported versions.

Read More: 1Z0-432: Oracle Real Application Clusters 12c Essentials

It's important to have backups of everything before you start! Some of these steps are destructive, and if something goes wrong you have no alternative but to restore from backups and start again.

◉ Prerequisites

Make sure you have all the OS prerequisites in place by running the 21c preinstall package. On Oracle Linux you can do this by installing the oracle-database-preinstall-21c package. It probably makes sense to update the remaining packages too.

yum install -y oracle-database-preinstall-21c

yum update -y

◉ Install 21c Software

You can read about the installation process in more detail here (OL7, OL8), but for this example we'll keep it brief. The following commands will perform a silent installation of the 21c software.

export ORACLE_HOME=$ORACLE_BASE/product/21.0.0/dbhome_1
export SOFTWARE_DIR=/vagrant/software
export ORA_INVENTORY=/u01/app/oraInventory

mkdir -p ${ORACLE_HOME}
cd $ORACLE_HOME
/bin/unzip -oq ${SOFTWARE_DIR}/LINUX.X64_213000_db_home.zip

./runInstaller -ignorePrereq -waitforcompletion -silent                        \
    -responseFile ${ORACLE_HOME}/install/response/db_install.rsp               \
    oracle.install.option=INSTALL_DB_SWONLY                                    \
    ORACLE_HOSTNAME=${ORACLE_HOSTNAME}                                         \
    UNIX_GROUP_NAME=oinstall                                                   \
    INVENTORY_LOCATION=${ORA_INVENTORY}                                        \
    SELECTED_LANGUAGES=en,en_GB                                                \
    ORACLE_HOME=${ORACLE_HOME}                                                 \
    ORACLE_BASE=${ORACLE_BASE}                                                 \
    oracle.install.db.InstallEdition=EE                                        \
    oracle.install.db.OSDBA_GROUP=dba                                          \
    oracle.install.db.OSBACKUPDBA_GROUP=dba                                    \
    oracle.install.db.OSDGDBA_GROUP=dba                                        \
    oracle.install.db.OSKMDBA_GROUP=dba                                        \
    oracle.install.db.OSRACDBA_GROUP=dba                                       \
    SECURITY_UPDATES_VIA_MYORACLESUPPORT=false                                 \
    DECLINE_SECURITY_UPDATES=true

Run the root scripts when prompted.

As a root user, execute the following script(s):

        1. /u01/app/oracle/product/21.0.0/dbhome_1/root.sh

At this point you should also patch the new Oracle home, but in this case we will forgo that step to keep things simple.

◉ Create 21c Container Database

We need to create a container database (CDB) as the destination for the resulting PDB. The following example creates a CDB called "cdb2" with no PDBs.

#dbca -silent -deleteDatabase -sourceDB cdb2 -sysDBAUserName sys -sysDBAPassword SysPassword1

dbca -silent -createDatabase \
 -templateName General_Purpose.dbc \
 -gdbname cdb2 -sid cdb2 -responseFile NO_VALUE \
 -characterSet AL32UTF8 \
 -sysPassword SysPassword1 \
 -systemPassword SysPassword1 \
 -createAsContainerDatabase true \
 -numberOfPDBs 0 \
 -databaseType MULTIPURPOSE \
 -memoryMgmtType auto_sga \
 -totalMemory 1536 \
 -storageType FS \
 -datafileDestination "/u02/oracle/" \
 -redoLogFileSize 50 \
 -emConfiguration NONE \
 -ignorePreReqs

Prepare for db operation
10% complete
Copying database files
40% complete
Creating and starting Oracle instance
42% complete
46% complete
52% complete
56% complete
60% complete
Completing Database Creation
66% complete
69% complete
70% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/cdb2.
Database Information:
Global Database Name:cdb2
System Identifier(SID):cdb2
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/cdb2/cdb2.log" for further details.
$

We enable the fast recovery area, Oracle Managed Files (OMF) and archivelog mode.

sqlplus / as sysdba <<EOF
alter system set db_recovery_file_dest_size=40g;
alter system set db_recovery_file_dest='/u01/app/oracle/fast_recovery_area';
alter system set db_create_file_dest = '/u02/oradata';
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
exit;
EOF

You will need to adjust the instance parameters to make sure the container can cope with the demands of the final PDB, but for this example we will ignore that.

◉ Run AutoUpgrade Analyze

Download the latest "autoupgrade.jar" file from MOS 2485457.1. If you don't have MOS access you can skip the next step.

cd $ORACLE_BASE/product/21.0.0/dbhome_1/rdbms/admin
mv autoupgrade.jar autoupgrade.jar.`date +"%Y"-"%m"-"%d"`
cp /tmp/autoupgrade.jar .

Make sure you are using the original Oracle home before running the "autoupgrade.jar" commands.

export ORACLE_SID=cdb1
export ORAENV_ASK=NO
. oraenv
export ORAENV_ASK=YES
export ORACLE_HOME=$ORACLE_BASE/product/19.0.0/dbhome_1

Generate a sample file for a full database upgrade.

$ORACLE_BASE/product/21.0.0/dbhome_1/jdk/bin/java \
  -jar $ORACLE_BASE/product/21.0.0/dbhome_1/rdbms/admin/autoupgrade.jar \
  -create_sample_file config /tmp/config.txt unplug

Edit the resulting "/tmp/config.txt" file, setting the details for your required upgrade. In this case we used the following parameters. We are only upgrading a single PDB, but if we had multiple we could use a comma-separated list of PDBs.

upg1.log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade/pdb1
upg1.sid=cdb1
upg1.source_home=/u01/app/oracle/product/19.0.0/dbhome_1
upg1.target_cdb=cdb2
upg1.target_home=/u01/app/oracle/product/21.0.0/dbhome_1
upg1.pdbs=pdb1                              # Comma delimited list of pdb names that will be upgraded and moved to the target CDB
#upg1.target_pdb_name.mypdb1=altpdb1        # Optional. Name of the PDB to be created on the target CDB
#upg1.target_pdb_copy_option.mypdb1=file_name_convert=('mypdb1', 'altpdb1')  # Optional. file_name_convert option used when creating the PDB on the target CDB
#upg1.target_pdb_name.mypdb2=altpdb2
upg1.start_time=NOW                        # Optional. [NOW | +XhYm (X hours, Y minutes after launch) | dd/mm/yyyy hh:mm:ss]
upg1.upgrade_node=localhost                # Optional. To find out the name of your node, run the hostname utility. Default is 'localhost'
upg1.run_utlrp=yes                         # Optional. Whether or not to run utlrp after upgrade
upg1.timezone_upg=yes                      # Optional. Whether or not to run the timezone upgrade
upg1.target_version=21                     # Oracle version of the target ORACLE_HOME. Only required when the target Oracle database version is 12.2

Run the upgrade in analyze mode to see if there are any expected issues with the upgrade.

$ORACLE_BASE/product/21.0.0/dbhome_1/jdk/bin/java \
  -jar $ORACLE_BASE/product/21.0.0/dbhome_1/rdbms/admin/autoupgrade.jar \
  -config /tmp/config.txt -mode analyze

AutoUpgrade tool launched with default options
Processing config file ...
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
1 databases will be analyzed
Type 'help' to list console commands
upg> Job 100 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]
Jobs finished                  [1]
Jobs failed                    [0]
Jobs pending                   [0]

Please check the summary report at:
/u01/app/oracle/cfgtoollogs/autoupgrade/cfgtoollogs/upgrade/auto/status/status.html
/u01/app/oracle/cfgtoollogs/autoupgrade/cfgtoollogs/upgrade/auto/status/status.log

The output files list the status of the analysis, and any manual intervention that is needed before an upgrade can take place. The output of the "status.log" file is shown below. The [Detail] section points to a file containing the details of each step in the upgrade process. If you've seen the output from the "preupgrade.jar", it will look familiar. If there are any required manual actions in the main log file, the detail file should give more information.
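For scripted runs you may not want to eyeball the summary. Below is a hedged sketch that fails if the "status.log" reports failed jobs. The sample log content is inlined here for illustration; in practice, point STATUS_LOG at the real file.

```shell
# Parse the "Jobs failed" counter out of an AutoUpgrade status.log.
STATUS_LOG=/tmp/status_sample.log
cat > "$STATUS_LOG" <<'EOF'
Number of databases            [ 1 ]
Jobs finished                  [1]
Jobs failed                    [0]
Jobs pending                   [0]
EOF

FAILED=$(grep 'Jobs failed' "$STATUS_LOG" | tr -dc '0-9')
if [ "${FAILED:-1}" -ne 0 ]; then
  echo "AutoUpgrade reported ${FAILED} failed job(s); check the detail log." >&2
  exit 1
fi
echo "Clean analysis: no failed jobs."
```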

==========================================

          Autoupgrade Summary Report

==========================================

[Date]           Sun Aug 22 14:49:01 UTC 2021

[Number of Jobs] 1

==========================================

[Job ID] 100

==========================================

[DB Name]                cdb1

[Version Before Upgrade] 19.12.0.0.0

[Version After Upgrade]  21.3.0.0.0

------------------------------------------

[Stage Name]    PRECHECKS

[Status]        SUCCESS

[Start Time]    2021-08-22 14:48:44

[Duration]      0:00:16

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/100/prechecks

[Detail]        /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/100/prechecks/cdb1_preupgrade.log

                Precheck passed and no manual intervention needed

------------------------------------------

The log directory contains a number of files, including an HTML version of the detailed report. It's the same information as the log file, but some people may prefer reading this format.

Once any required manual fixups are complete, run the analysis again and you should see a clean analysis report.

◉ Run AutoUpgrade Deploy

We are now ready to run the database upgrade with the following command. The upgrade takes some time, so you will be left at the "upg" prompt until it's complete.

$ORACLE_BASE/product/21.0.0/dbhome_1/jdk/bin/java \

  -jar $ORACLE_BASE/product/21.0.0/dbhome_1/rdbms/admin/autoupgrade.jar \

  -config /tmp/config.txt -mode deploy

AutoUpgrade tool launched with default options

Processing config file ...

+--------------------------------+

| Starting AutoUpgrade execution |

+--------------------------------+

1 databases will be processed

Type 'help' to list console commands

upg>

Use the "help" command to see the command line options. We can list the current jobs and check on the job status using the following commands.

upg> lsj

+----+-------+---------+---------+-------+--------------+--------+----------------+

|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME| UPDATED|         MESSAGE|

+----+-------+---------+---------+-------+--------------+--------+----------------+

| 101|   cdb1|DBUPGRADE|EXECUTING|RUNNING|21/08/22 14:51|14:56:28|22%Upgraded PDB1|

+----+-------+---------+---------+-------+--------------+--------+----------------+

Total jobs 1

upg> status -job 101

Progress

-----------------------------------

Start time:      21/08/22 14:51

Elapsed (min):   5

End time:        N/A

Last update:     2021-08-22T14:56:28.019

Stage:           DBUPGRADE

Operation:       EXECUTING

Status:          RUNNING

Pending stages:  7

Stage summary:

    SETUP             <1 min

    PREUPGRADE        <1 min

    PRECHECKS         <1 min

    PREFIXUPS         1 min

    DRAIN             <1 min

    DBUPGRADE         3 min (IN PROGRESS)

Job Logs Locations

-----------------------------------

Logs Base:    /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1

Job logs:     /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101

Stage logs:   /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/dbupgrade

TimeZone:     /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/temp

Additional information

-----------------------------------

Details:

[Upgrading] is [22%] completed for [cdb1-pdb1]

                 +---------+-------------+

                 |CONTAINER|   PERCENTAGE|

                 +---------+-------------+

                 |     PDB1|UPGRADE [22%]|

                 +---------+-------------+

Error Details:

None

upg>

Once the job completes, a summary message is displayed.

upg> Job 101 completed

------------------- Final Summary --------------------

Number of databases            [ 1 ]

Jobs finished                  [1]

Jobs failed                    [0]

Jobs pending                   [0]

Please check the summary report at:

/u01/app/oracle/cfgtoollogs/autoupgrade/cfgtoollogs/upgrade/auto/status/status.html

/u01/app/oracle/cfgtoollogs/autoupgrade/cfgtoollogs/upgrade/auto/status/status.log

The "status.log" contains the top-level information about the upgrade process.

==========================================

          Autoupgrade Summary Report

==========================================

[Date]           Sun Aug 22 15:15:50 UTC 2021

[Number of Jobs] 1

==========================================

[Job ID] 101

==========================================

[DB Name]                cdb1

[Version Before Upgrade] 19.12.0.0.0

[Version After Upgrade]  21.3.0.0.0

------------------------------------------

[Stage Name]    PREUPGRADE

[Status]        SUCCESS

[Start Time]    2021-08-22 14:51:03

[Duration]      0:00:00

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/preupgrade

------------------------------------------

[Stage Name]    PRECHECKS

[Status]        SUCCESS

[Start Time]    2021-08-22 14:51:03

[Duration]      0:00:20

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/prechecks

[Detail]        /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/prechecks/cdb1_preupgrade.log

                Precheck passed and no manual intervention needed

------------------------------------------

[Stage Name]    PREFIXUPS

[Status]        SUCCESS

[Start Time]    2021-08-22 14:51:24

[Duration]      0:01:43

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/prefixups

------------------------------------------

[Stage Name]    DRAIN

[Status]        SUCCESS

[Start Time]    2021-08-22 14:53:08

[Duration]      0:00:12

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/drain

------------------------------------------

[Stage Name]    DBUPGRADE

[Status]        SUCCESS

[Start Time]    2021-08-22 14:53:20

[Duration]      0:15:56

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/dbupgrade

------------------------------------------

[Stage Name]    NONCDBTOPDBXY

[Status]        SUCCESS

[Start Time]    2021-08-22 15:09:16

[Duration]      0:00:00

------------------------------------------

[Stage Name]    POSTCHECKS

[Status]        SUCCESS

[Start Time]    2021-08-22 15:09:16

[Duration]      0:00:09

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/postchecks

[Detail]        /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/postchecks/cdb1_postupgrade.log

------------------------------------------

[Stage Name]    POSTFIXUPS

[Status]        SUCCESS

[Start Time]    2021-08-22 15:09:26

[Duration]      0:06:23

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/postfixups

------------------------------------------

[Stage Name]    POSTUPGRADE

[Status]        SUCCESS

[Start Time]    2021-08-22 15:15:49

[Duration]      0:00:00

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/postupgrade

------------------------------------------

[Stage Name]    SYSUPDATES

[Status]        SUCCESS

[Start Time]    2021-08-22 15:15:50

[Duration]      0:00:00

[Log Directory] /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/sysupdates

------------------------------------------

Upgrade Summary: /u01/app/oracle/cfgtoollogs/autoupgrade/pdb1/cdb1/101/dbupgrade/upg_summary.log
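As a quick sanity check on where the time went, the stage [Duration] values in the report can be totalled with a one-liner. This is just an illustrative awk sketch with the durations from the report above inlined; in practice you would pipe the real "status.log" into it.

```shell
# Sum h:mm:ss [Duration] values from a status.log-style report.
awk '/\[Duration\]/ {
  split($NF, t, ":")                     # $NF is the h:mm:ss value
  total += t[1]*3600 + t[2]*60 + t[3]
}
END { printf "Total: %d min %d sec\n", total/60, total%60 }' <<'EOF'
[Duration]      0:00:20
[Duration]      0:01:43
[Duration]      0:00:12
[Duration]      0:15:56
[Duration]      0:06:23
EOF
# Total: 24 min 34 sec
```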

Check the "upg_summary.log" file, and if anything looks wrong, review the associated log files. At this point I do a shutdown and startup to make sure everything is running in the correct mode.

export ORACLE_HOME=$ORACLE_BASE/product/21.0.0/dbhome_1

export PATH=$ORACLE_HOME/bin:$PATH

export ORACLE_SID=cdb2

sqlplus / as sysdba <<EOF

alter pluggable database PDB1 save state;

shutdown immediate;

startup;

show pdbs

exit;

EOF

◉ Final Steps

If you've finished with the 19c CDB1 instance, you can remove it.

export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1

export PATH=$ORACLE_HOME/bin:$PATH

export ORACLE_SID=cdb1

dbca -silent -deleteDatabase -sourceDB cdb1 -sysDBAUserName sys -sysDBAPassword SysPassword1

Edit the "/etc/oratab" file and any environment files as required.
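The oratab change can also be scripted. A hedged sketch follows, operating on a copy of the file for illustration; edit "/etc/oratab" itself with care, as the oracle user.

```shell
# Remove the old 19c cdb1 entry and flag cdb2 for auto-start.
# We work on a copy here; point ORATAB at /etc/oratab for real use.
ORATAB=/tmp/oratab
cat > "$ORATAB" <<'EOF'
cdb1:/u01/app/oracle/product/19.0.0/dbhome_1:N
cdb2:/u01/app/oracle/product/21.0.0/dbhome_1:N
EOF

sed -i '/^cdb1:/d' "$ORATAB"                  # drop the removed 19c instance
sed -i 's|^\(cdb2:.*\):N$|\1:Y|' "$ORATAB"    # let dbstart bring cdb2 up
cat "$ORATAB"                                 # shows the remaining entry
```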

◉ Appendix

The following commands are used to rebuild the databases if you want to rerun the examples.

Rebuild the 21c CDB2 multitenant database with no PDBs.

export ORACLE_HOME=/u01/app/oracle/product/21.0.0/dbhome_1

export PATH=$ORACLE_HOME/bin:$PATH

export ORACLE_SID=cdb2

#dbca -silent -deleteDatabase -sourceDB cdb2 -sysDBAUserName sys -sysDBAPassword SysPassword1

dbca -silent -createDatabase \

 -templateName General_Purpose.dbc \

 -gdbname cdb2 -sid cdb2 -responseFile NO_VALUE \

 -characterSet AL32UTF8 \

 -sysPassword SysPassword1 \

 -systemPassword SysPassword1 \

 -createAsContainerDatabase true \

 -numberOfPDBs 0 \

 -databaseType MULTIPURPOSE \

 -memoryMgmtType auto_sga \

 -totalMemory 1536 \

 -storageType FS \

 -datafileDestination "/u02/oradata/" \

 -redoLogFileSize 50 \

 -emConfiguration NONE \

 -ignorePreReqs

sqlplus / as sysdba <<EOF

alter system set db_recovery_file_dest_size=40g;

alter system set db_recovery_file_dest='/u01/app/oracle/fast_recovery_area';

alter system set db_create_file_dest = '/u02/oradata';

shutdown immediate;

startup mount;

alter database archivelog;

alter database open;

exit;

EOF

Rebuild the 19c CDB1 multitenant database with one PDB.

export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1

export PATH=$ORACLE_HOME/bin:$PATH

export ORACLE_SID=cdb1

#dbca -silent -deleteDatabase -sourceDB cdb1 -sysDBAUserName sys -sysDBAPassword SysPassword1

dbca -silent -createDatabase \

 -templateName General_Purpose.dbc \

 -gdbname cdb1 -sid cdb1 -responseFile NO_VALUE \

 -characterSet AL32UTF8 \

 -sysPassword SysPassword1 \

 -systemPassword SysPassword1 \

 -createAsContainerDatabase true \

 -numberOfPDBs 1 \

 -pdbName pdb1 \

 -pdbAdminPassword SysPassword1 \

 -databaseType MULTIPURPOSE \

 -memoryMgmtType auto_sga \

 -totalMemory 1536 \

 -storageType FS \

 -datafileDestination "/u02/oradata/" \

 -redoLogFileSize 50 \

 -emConfiguration NONE \

 -ignorePreReqs

sqlplus / as sysdba <<EOF

alter pluggable database pdb1 save state;

alter system set db_recovery_file_dest_size=40g;

alter system set db_recovery_file_dest='/u01/app/oracle/fast_recovery_area';

alter system set db_create_file_dest = '/u02/oradata';

shutdown immediate;

startup mount;

alter database archivelog;

alter database open;

exit;

EOF

Source: oracle-base.com

Friday, September 17, 2021

Does your document database provide these 3 critical capabilities? Ensure you future-proof your applications today!


Most businesses operate in a dynamic environment with rapidly changing customer expectations and preferences. This forces businesses to be creative in delivering value to their customers and partners through applications or application programming interfaces (APIs), shipping new features and capabilities daily or even hourly.

Read More: 1Z0-432: Oracle Real Application Clusters 12c Essentials

Consider a typical airline mobile app: it lets users manage reservations, get flight updates, and keep track of membership benefits. Yet not long ago, generating electronic boarding passes, selecting seats, and tracking bags in real time were offline business processes.

Businesses increasingly depend on developers and IT teams with an established continuous delivery process to meet customer expectations. The unfortunate reality is that business requirements for applications can often be vague, and it is not unusual for business leaders to demand the ability to define or change requirements frequently and at short notice. This can cause a lot of frustration between business leaders and development teams. Enter JSON.

JSON to the rescue

According to Wikipedia, JSON (JavaScript Object Notation) is a lightweight data-interchange format for web applications. JSON is programming-language independent, easy for humans to read and write, and easy for machines to parse and generate. These properties make JSON an ideal data-interchange language. It is derived from JavaScript, the most popular language among developers for nine years in a row according to the 2021 Stack Overflow survey.


In short, the JSON data format allows applications to tolerate changes to the application model and supporting data model without incurring application downtime. Additionally, JSON can support structured, semi-structured, and unstructured data sources, which are becoming common in modern applications. The popularity of JSON has resulted in the evolution of document databases that are suitable for high-volume web applications.
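To make this concrete, here is a hypothetical boarding-pass document for the airline app described earlier. The field names are invented for illustration; the point is that optional attributes (like `bagTags`) can appear on some documents and be absent from others, with no schema change required.

```json
{
  "passId": "BP-98231",
  "passenger": { "name": "A. Traveller", "tier": "Gold" },
  "flight": { "number": "UA123", "depart": "2021-09-17T08:45:00Z" },
  "seat": "14C",
  "bagTags": ["0074312345", "0074312346"]
}
```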

Evolution of document databases


Armed with JSON and its promise of speed, agility, and developer productivity, development teams started using document databases for data persistence. The document model closely resembles JSON and can handle massive data sizes without the need for upfront schema modeling. These databases have become an alternative to the fixed relational schema of tables and columns required by relational databases. Or have they?

While a switch from relational to JSON document storage may offer significant advantages for application developers, it can lead to significant challenges for other consumers of this data. 79% of business executives say they don't thrive when they operate in data silos, and businesses are increasingly relying on data analytics for faster innovation, improved efficiency, and ROI. Unfortunately, these workloads are not well supported by document databases, which is precisely why a database with converged capabilities is needed. Mind the gap!


In pursuit of ideal data persistence


An ideal database should support open standards such as Structured Query Language (SQL) and process transactional workloads (OLTP) alongside analytics workloads (OLAP) without performance degradation. Developers shouldn't be forced to add custom application logic and orchestrate complex, fragile, and inconsistent data movement between different specialized data stores just to support graph, XML, and spatial data types, or to run fast analytical queries that report across multiple, different JSON documents.

1. API-driven access


Developers should have schema flexibility for their applications, with ACID transaction guarantees. Better yet, they should get simple API-driven access and model-specific languages, while still having recourse to powerful SQL capabilities whenever they want.

Developers should be able to load JSON documents into the database natively, generate a schema and indexes from JSON objects for transactions, and have the ability to join with spatial, graph, and relational data for analytics.
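The last point is where a converged database shines. As a hedged sketch of what such a join can look like in Oracle SQL (the table and field names here are hypothetical, invented for illustration), JSON_TABLE projects document fields as relational columns that join directly to ordinary tables:

```sql
-- Hypothetical schema: boarding_passes(doc CLOB CHECK (doc IS JSON)),
-- customers(customer_id, loyalty_tier).
SELECT jt.pass_id,
       jt.flight_no,
       c.loyalty_tier
FROM   boarding_passes bp,
       JSON_TABLE(bp.doc, '$'
         COLUMNS (pass_id     VARCHAR2(20) PATH '$.passId',
                  flight_no   VARCHAR2(10) PATH '$.flight.number',
                  customer_id NUMBER       PATH '$.customerId')) jt,
       customers c
WHERE  c.customer_id = jt.customer_id;
```

This kind of query is what lets analysts report across JSON documents and relational data without moving anything between specialized stores.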

2. Future-proof application needs with multi-workload support

Application use cases evolve as businesses evolve, and different kinds of database workloads require different software optimizations. For example, smart sensors that track millions of checked bags passing through airports daily, taking frequent measurements, need a database that can ingest a large number of new records extremely quickly.

But training machine-learning models is a very different kind of job. For example, the airline app should be able to provide personalized lounge recommendations to its customers and suggest popular activities from their affiliates. This involves extremely large numbers of relatively simple calculations, which means a lot of CPU cycles spent across hundreds of common machine-learning algorithms.


3. Freedom to get the job done

Developer productivity improves when a core database engine fully supports different kinds of application workloads without requiring excessive customization or app tuning. Likewise, data productivity improves when that same core engine also fully supports the analytical workloads critical to data scientists and business analysts. Developers need the freedom to spend more time coding, more time learning, and to contribute more to the community by leveraging the cloud and complete machine-learning-based automation.

Optimized and fine-tuned for JSON workloads


Oracle Autonomous JSON Database is part of the Oracle Autonomous Database family, running on Oracle Exadata engineered systems that are optimized for database workloads and finely tuned for JSON workloads. Autonomous JSON Database delivers industry-leading performance with single-digit microsecond response times, significantly outperforming specialized NoSQL document stores without sacrificing transactional consistency. This makes it an ideal persistent store for microservices-based applications that must constantly adapt to business requirements and tolerate variability in demand. As with any Autonomous Database, Oracle Autonomous JSON Database has full support for SQL, including complex joins and long-running transactions.

2X faster and 35% cheaper than MongoDB Atlas and AWS DocumentDB


The Yahoo! Cloud Serving Benchmark (YCSB) is an industry-standard open source tool that provides a common set of workloads for evaluating the performance of different types of NoSQL data stores and Database as a Service (DBaaS) offerings available in the market today. We used it to evaluate performance against MongoDB Atlas.

YCSB consists of the YCSB Client, an extensible workload generator, and core workloads, a set of workload scenarios to be executed by the generator. The source code for YCSB is available for download from GitHub.

We welcome you to run your own workload by following these steps to configure and run the YCSB benchmark against Autonomous JSON Database:

◉ Step 1: Provision an 8 OCPU instance with 1 TB storage. Follow these steps to provision the AJD instance.
◉ Step 2: Provision a VM.Standard2.24 compute instance on Oracle Cloud Infrastructure (OCI) in the same cloud region as the Autonomous JSON Database for generating the test load. For some workloads, you may need more than one YCSB client to drive traffic. If so, repeat this step to provision more client compute instances.
◉ Step 3: Configure and set up the YCSB workload by cloning the GitHub repo. Follow the instructions to install and run YCSB for multiple workloads.

In our internal evaluation, an Autonomous JSON Database with an 8 OCPU and 1 TB storage configuration, which is comparable to MongoDB Atlas M60 and AWS DocumentDB on an R4.4xlarge configuration, consistently outperformed both for different types of workloads. These workloads were performed on sets of 4 million and 81 million documents.

◉ The first workload performs update-heavy activity, invoking 50% reads and 50% updates of the data.

◉ The second workload performs a medium-range scan that involves 75% scans and 25% updates.

◉ The third workload performs a short-range scan that involves 95% scans and 5% updates.

We used the lower end of the performance improvements in the following comparison graph; however, your workloads could run much faster.


Simplified data platform for current and future application needs


Developers, IT, and business leaders should all work together towards the primary goal of serving customers with flexibility and independence. Developers should have the freedom to choose technology that makes them more productive; IT should have a unified data tier that makes reuse of data and overall management simpler; and business leaders should have the flexibility to make data-driven decisions. An Autonomous JSON Database is the same as an Autonomous Transaction Processing database, except that the JSON database is limited to storing 20 GB of non-JSON document collections. If your application needs change over time, you can do a one-click, in-place upgrade to an Autonomous Transaction Processing database without any data migration or application rewrite. Ultimately, what this means for enterprises is a simplified data platform to both digitize business processes quickly and offer valued services more confidently for current and future business needs.

Source: oracle.com

Wednesday, September 15, 2021

Metrics and Performance Hub for ExaCS and DBCS

We are pleased to announce the General Availability (GA) of the Metrics and Performance Hub features for Exadata Cloud Service (ExaCS) and Database Cloud Service (DBCS) in all OCI commercial regions. These capabilities empower customers to monitor their database instances and get comprehensive database performance analysis and management capabilities natively within the OCI console for ExaCS and DBCS.

Key Customer Benefits

◉ Database metrics help users monitor useful quantitative data, such as CPU and storage utilization, the number of successful and failed database logons and connection attempts, database operations, SQL queries, and transactions. Users can use metrics data to diagnose and troubleshoot problems with their databases.

Read More: 1Z0-149: Oracle Database Program with PL/SQL

◉ Performance Hub provides in-depth diagnostics capability for doing Oracle database performance analysis and tuning. With this capability, users can have a consolidated view of real-time and historical performance data.

In order to use the Metrics and Performance Hub features for ExaCS and DBCS, users need to enable the Database Management service on their databases. There are two management options (Basic and Full) to choose from. Details about the management options are here.

OCI Console Experience for ExaCS and DBCS

From the Bare Metal, VM, and Exadata service home page, navigate to Exadata VM Cluster → Database (for ExaCS) or DB Systems → Database (for DBCS). On the Database Details page, click 'Enable' next to the Database Management label. Provide the required details and choose between Full Management and Basic Management. There are also options to disable or edit the Database Management settings.


While enabling Database Management, you provide user credentials and a private endpoint, and choose between the two management options: Basic Management and Full Management.



Once Database Management is enabled, you can view database metrics by clicking 'Metrics' under the 'Resources' section. The database details page shows six database metrics: CPU Utilization, Storage Utilization, Execute Count, Block Changes, Parse Count (Total), and Current Logons. Additional metrics can be viewed in the OCI Database Management service or the OCI Monitoring service.



Once Database Management is enabled, you can click the 'Performance Hub' tab on the database details page to view real-time and historical performance data.




When Full Management is enabled, Database Management service-specific features such as Fleet Summary, advanced Performance Hub features, AWR Explorer, Database Groups, Scheduled Jobs, etc., can be accessed by navigating to Observability & Management → Database Management. Some of these features are extra-cost options. Please check with your Oracle sales representative.

Considerations


◉ The Database Metrics feature is available via the OCI console and API, whereas Performance Hub is available via the OCI console only.

◉ The Database Metrics and Performance Hub features are available for container databases (CDBs) on ExaCS and DBCS. Support for pluggable databases (PDBs) is on the roadmap.

◉ The Full Management option of Database Management is an extra-cost option.

Scope

This announcement applies to ExaCS and DBCS. Support for Gen2 Exadata Cloud@Customer is on the roadmap.

Availability

Support for Metrics and Performance Hub is available for ExaCS and DBCS in all OCI commercial regions.

Source: oracle.com