Friday, July 30, 2021

Oracle Database CICD with SQLcl and Liquibase

Part 1: Beginning Continuous Integration for the Database

What is CI/CD?

Continuous Integration (CI) and Continuous Delivery/Deployment (CD) are a set of principles or methodologies that empower development teams to deliver code faster, more reliably, and with fewer bugs compared to traditional software development such as the waterfall model. Teams that use CI/CD also usually adhere to some version of the Agile Methodology, working on smaller features as part of a larger overarching requirement in sprints rather than monolithic tasks spanning months or years.

We have also seen the emergence of DevOps (Development and Operations) as the Agile Methodology collided with the ability to quickly create infrastructure as code in the cloud. As these teams surveyed their new development landscape, it became apparent that they could no longer work in a vacuum, living in their own worlds as they had done in the past. As traditional job roles blurred and more work was done by fewer people, order needed to rise from the chaos of this new complexity.

So, what is CI/CD? What does it do to help with this mess of developers, operations, and stakeholders who need to get features and functions into their software? To answer this, let’s start with the CI of CI/CD, Continuous Integration.

Continuous Integration brings us the ability to have multiple development and operations teams working on different features on a single project simultaneously without stepping all over each other. While this premise sounds too good to be true, there is some truth to be had here.

To have a common thread we can always refer to, this set of posts will be using a common example with real code behind it to illustrate the concepts and processes we will be talking about. So, let’s jump right into it with an example we can reference as we build upon our development process.

Here is our scenario: we have a team of developers working on an application that tracks local trees in a community via a web interface on the city’s web site. A citizen can access this app and enter the information for a historic tree they find in the city. Now this team also uses the Agile Methodology and it’s time for the next sprint. We have a few features we need to get in, mainly the ability to see these trees on a map and the ability to upload a picture with the tree entry form. The tasks are assigned to the developers and the sprint starts.

When does Continuous Integration start? When the tasks are handed out, each developer copies the code from the latest release in a repository and creates their own copy, or branch. This latest release is an exact copy of the code that is currently deployed and running in production. Each developer is also given a personal development database (more on this later). As developers finish their assigned tasks, they commit their work to their branch in the repository. Before creating a pull request/merge, they check the current main branch for any changes from other developers who have already merged clean branches. If changes are found, they pull them down into their branch and commit again, because one of those changes might have affected their code in a negative way, and the build process that runs when a pull request/merge happens will catch this. A code review is scheduled, and if accepted, the developer can create the pull request/merge into the main branch. A build process also runs every time code is merged into main to catch any bugs or issues, including issues caused by other developers' code. If problems are found, they are addressed, a new pull request/merge is created, and the automated testing runs again. Once the testing comes back clean, the code can be merged into the main branch.


Summary Points:


➥ Start Sprint
➥ Pull Code from Main
➥ Create Local Branch
➥ Work on Tickets/Features
➥ Check for Updates in Main
➥ Commit Branch
➥ Create Merge Request
➥ Code Review
➥ Merge into Main/Current Release
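In git terms, the loop above looks roughly like the following. This is a condensed, hedged sketch only: the remote, project, and branch names are placeholders, and the pull request, code review, and automated build steps happen in your repository's web UI and CI tooling rather than at the command line.

> git clone git@github.com:YOUR_ORG/YOUR_PROJECT.git
> git checkout -b TICKET#_USERNAME
  (work on the tickets/features, committing to the branch as you go)
> git fetch origin
> git merge origin/main
  (main may be called master in your repository)
> git push --set-upstream origin TICKET#_USERNAME
  (open a pull request, pass code review and the automated build, then merge into main)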

What advantages do we gain from this process? First, as we can see throughout this process, we can catch issues before they get into production. By running automated tests on code changes, we can simulate what is going to happen at deployment time and eliminate surprises. This leads to better releases, more confidence in the development team by stakeholders and end users, as well as a higher standard of code from our developers.

Accountability is another advantage. Through code commits, merges, and reviews we can see exactly what is being put into our code repository and by whom. We can catch code supply chain issues or hacks because each file and line of code can be reviewed on a merge. Modern code repositories also record who made each change and when, so a historical record is always present.

Stateless vs. Stateful


Continuous Integration works very well with stateless files and deployments: static files such as Java classes or JavaScript combined with a stateless container deployment; in with the new and out with the old. It is very easy to create a new deployment in a Docker container with the latest code and replace the old container when working with stateless files.

We see this with blue-green deployment models. In this model, you have your running application container, the blue. The green is your idle, newly deployed application container. At some point you switch to the green, and all traffic now uses the new version while the blue container sits idle. If you need to roll back or switch back, you just change back to the idle blue container.


We quickly see these models are not going to work for database deployments. The database has state; transactions are being committed all the time, so unless we can afford downtime to copy the entire database, deploy the changes, and then resume operations, this poses a challenge. Because of this challenge, DBAs and database developers are all too often blamed for deployment bottlenecks. They are often told that databases can never do DevOps, let alone CI/CD; it's too legacy. Where does this leave the DBA and database developer? Branded with a legacy stigma and seen as a roadblock to CI/CD.

Tools of the Trade


While we wish there were a magic wand to make all database deployments easy and drop right into stateless CI/CD flows, we do have to accept that there is some truth to these DevOps concerns. But not all hope is lost! Oracle Database CI/CD can be a reality, but it requires some shifts in traditional methodologies and processes as well as some new tricks using features and functions of the Oracle Database that may not have been used before.

Let’s refer to our process flow and see where we can add database development specific steps:


The sprint starts and we have our database developers pull the latest code from the latest release. Where is this code? What tools can we use to allow developers to have local copies of our database code? We use a Code Repository.

What is a Code Repository?

There are many code repositories available to us, such as GitHub, GitLab, and Bitbucket, all built on Git. These repositories help us store our files and provide versioning, accountability, and visibility into our development process. But we are starting from scratch here; we have no repository or code release to work with. We will need to create a baseline with a Change Management/Tracking tool.

How can I do Change Management/Tracking with Oracle?

SQLcl with Liquibase.

What’s SQLcl?

Oracle SQLcl (SQL Developer Command Line) is a small, lightweight, Java-based command-line interface for Oracle Database. SQLcl provides inline editing, statement completion, command recall, and also supports existing SQL*Plus scripts. You can download SQLcl from oracle.com and it is installed in the OCI Cloud Shell by default.

What’s Liquibase?

Liquibase is an open-source database-independent library for tracking, managing and applying database schema changes.

How do they work together?

The Liquibase feature in SQLcl enables you to execute commands to generate a changelog for a single object or for a full schema (changeset and changelogs). We also add Oracle specific features and enhancements to Liquibase in SQLcl.

What Database?

For this article, the simplest way to get up and running with an Oracle Database is a free OCI account and an always free Autonomous Database. They come up in minutes and are never going to cost you a dime to use. If you do not have an always free OCI account, you can start here. Once you are logged into your account, you can create an always free Autonomous Database by following this guide here. For our purposes, an Autonomous Transaction Processing database is fine. Just remember the password you used when creating the database; we will need it later.

No Really, What Database?

While we will use a single autonomous database for this article's examples, in the real world you have a few choices. The most important takeaway is that all developers MUST have personal databases to work in; this point cannot be stressed enough. This can be tricky due to many factors, but here are some suggestions on making this step a reality in your development organization.

1) Use Multi-tenant in the Oracle Database

Did you know that with 19c and later, you can have up to 3 pluggable databases for free? No need for the multi-tenant license; just create and use. PDBs (pluggable databases) also have the ability to clone just the metadata from an origin PDB, which makes it super simple to create copies of a production code database for all developers. It also aids in the testing automation process we will discuss in the next article. Another quick note: installing Oracle REST Data Services (ORDS) on a database enables APIs for cloning and creating PDBs via REST calls.
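As a hedged illustration only (the PDB names here are hypothetical, and the usual prerequisites apply: a common user with the CREATE PLUGGABLE DATABASE privilege, OMF or a FILE_NAME_CONVERT clause, and a source PDB that is open read-only or a CDB configured with local undo for hot clones), a metadata-only clone uses the NO DATA clause:

SQL> create pluggable database dev1_pdb from prod_pdb no data;
SQL> alter pluggable database dev1_pdb open;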

2) The Autonomous Database in OCI

Using ADB in OCI works very similarly to multi-tenant: you can create full and metadata-only clones of any ADB. These ADB instances may only need to be up for minutes to weeks.

3) Reusable Instances

There are many features in the Oracle Database that allow you to recover state so that developers have clean slates to work with in non-multi-tenant instances. Features such as Guaranteed Restore Points/Flashback Database, RMAN duplicates/clones, and Data Pump allow you to have a clean instance for each developer as sprints start.
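As a quick, hedged sketch of the restore point approach (the restore point name is made up, the instance must meet the usual ARCHIVELOG prerequisites, and flashing back requires a bounce to MOUNT mode):

SQL> create restore point before_sprint1 guarantee flashback database;
-- developers work in the instance, then the sprint ends
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to restore point before_sprint1;
SQL> alter database open resetlogs;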

4) Docker/Virtual Machines

Either cloud based or local, virtualization technologies can also give developers personal instances to work in, with copies of our main code repository inside. Developers can also create VM database instances in OCI based on the latest backup of our production instance, if need be, to help with the personal database effort.

Hello Repo


Again referring to our process flow and example, we need to create a baseline or main release so that our developers all start with the same code. To do this, we need to use SQLcl and Liquibase.

Setups Needed

For the examples we will be going over, we will be using git at the command line with our SSH keys pre-set, our repository in GitHub and the project name being db-cicd-project.

You have two choices here:

Manually create the repository or fork the repository that has been pre-created for this article. I would suggest forking it from here:

https://github.com/Mmm-Biscuits/db-cicd-project

Next, set up SSH connection via the following document:

Connecting to GitHub with SSH

A GitHub Desktop GUI is also available.

Now we need to clone it to our local machine/computer/laptop. Navigate to a directory where you would like to store your git repositories locally and issue the clone command:

> git clone git@github.com:YOUR_GITHUB_USERNAME/db-cicd-project.git

The above command will create a directory called db-cicd-project for you and pull any code that was in the repository. It is now time to fill this repository with database goodness.

The First Genschema

The sample repository that you forked contains the following directory structure:

db-cicd-project
  • apps
  • database
  • etc
    • presetup
    • setup
    • postsetup
  • static
    • css
    • html
    • images
    • js
  • ut
    • utPLSQL
  • README.md
  • version.txt

Note: This directory structure assumes a few things (and we all know what assuming does). The assumption is that the apps directory may contain Application Express apps, so subdirectories would be f101, for example. If not using APEX, you could consolidate the static directory and the apps directory into a single directory.

Using this directory structure, we are going to start to work with the database and SQLcl now. Using our free ADB, we need to download the wallet that contains all the information for SQLcl to connect to the database. You can look here for the steps to download the wallet.

Once the ADB wallet is downloaded, at a command line, change the directory to the database directory in our repository project. We should be at db-cicd-project -> database. Start SQLcl but do not log into a database yet:

> sql /nolog
 
Next, we have to tell SQLcl where to look for the ADB wallet. First, remember where you downloaded it and we can use the following command to set its location:

SQL> set cloudconfig /DIRECTORY_WHERE_WALLET_IS/WALLET.zip

The naming of the wallet is in the format Wallet_DB_NAME.zip. So, if I named my database ADB, the wallet name would be Wallet_ADB.zip. And if I put the wallet in my Downloads folder, the full command would be:

SQL> set cloudconfig /Users/bspendol/Downloads/Wallet_ADB.zip

Next, we need to connect to the database. The syntax is:

SQL> conn USERNAME@DB_NAME_high/medium/low/tp/tpurgent

The high/medium/low/tp/tpurgent service names provide different levels of performance for various clients or applications connecting to the database. From the documentation:

➥ tpurgent: The highest priority application connection service for time critical transaction processing operations. This connection service supports manual parallelism.

➥ tp: A typical application connection service for transaction processing operations. This connection service does not run with parallelism.

➥ high: A high priority application connection service for reporting and batch operations. All operations run in parallel and are subject to queuing.

➥ medium: A typical application connection service for reporting and batch operations. All operations run in parallel and are subject to queuing. Using this service the degree of parallelism is limited to four (4).

➥ low: A lowest priority application connection service for reporting or batch processing operations. This connection service does not run with parallelism.

For what we want to do, the high service name will be fine. We also want to connect as the admin user so we can set up a schema and grant permissions on that schema. With our database named ADB, we would have the following connect command:

SQL> conn admin@ADB_high
 
And then provide the password you used when creating the ADB at the password prompt:

Password? (**********?)

And we are in. Time to create a schema, give that schema some permissions, and then set up a sample to use Liquibase with. Run the following commands:

SQL> create user demo identified by "PAssw0rd11##11" quota unlimited on data;

SQL> grant connect, resource to demo;
 
Now connect as this user:

SQL> conn demo@ADB_high
 
And then provide the password at the password prompt:

Password? (**********?)

It's time to create some sample database objects for our repository. Run the following scripts, included in the sample repository, to create some objects:

SQL > @../demo_scripts/create_table.sql

SQL > @../demo_scripts/insert_data.sql

SQL > @../demo_scripts/create_logic.sql

This has created a table, some data in that table, a trigger with some logic, and two procedures. Let's say this is the base of the application we are going to create. It's now time to use Liquibase to create our baseline and commit it to the repository.
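For reference, the objects behind these scripts look roughly like the following. This is a hedged, illustrative sketch reconstructed from the object names that appear later in this article; the real DDL lives in the demo_scripts directory of the forked repository, and any column not mentioned elsewhere (such as TREE_NAME) is made up here.

create table trees (
  tree_id        number constraint tree_id_pk primary key,
  tree_name      varchar2(100),
  submition_date timestamp(6)
);

create or replace trigger set_date_bi
before insert on trees
for each row
begin
  -- stamp the row with the submission date on insert
  :new.submition_date := systimestamp;
end;
/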

At the SQL prompt, issue the following command:

SQL> lb genschema -split
 
Once this is finished, we can exit out of SQLcl and start to commit our changes to the repository.

SQL> exit

If you take a quick look around the database directory, you will now see folders for indexes, tables, procedures, and triggers: all the objects we just created. We also have a controller.xml file. This file is a change log that includes all the files in each directory, in the proper order, so that the schema can be deployed correctly to other databases as part of our CI process.
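Abridged and for illustration only (the generated controller.xml will contain the full XML declaration, the Liquibase namespaces, and one include entry per generated file), the file is essentially an ordered list of includes:

<databaseChangeLog>
   <include file="table/trees_table.xml" relativeToChangelogFile="true"/>
   <include file="index/tree_id_pk_index.xml" relativeToChangelogFile="true"/>
   <include file="trigger/set_date_bi_trigger.xml" relativeToChangelogFile="true"/>
   <include file="procedure/admin_email_set_procedure.xml" relativeToChangelogFile="true"/>
</databaseChangeLog>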

Time to commit our code to the repository. Change your directory to the top level of the project (the db-cicd-project directory) and run the following command:

> git add .

This command adds our new files to the local staging area. Now we commit the files:

> git commit -m "v1.0"

A git commit takes a snapshot of your local repository; you can think of it as saving the current state. Finally, let's push these files up to the GitHub repository:

> git push

The push command uploads our commit or state to the main repository. Once the push is done, you can see our files in our repository on GitHub. Congratulations, you have just created your code baseline or version 1.0!

Ready, Set, Branch


We have our main code line for our application in the repository. We can now have our developers clone the repository and start their assigned tickets/tasks in this development cycle or sprint. The issue with this is that they would all be committing to main and probably stepping all over each other. This requires each developer to create a branch of the repository that they can work in and commit their code to.

Let’s pretend we are a developer who has been given the URL to clone the repository and start coding. We start by using git to clone the repository to our local machine:

> git clone git@github.com:YOUR_GITHUB_USERNAME/db-cicd-project.git
 
Now we need to create a personal branch of this code:

> git checkout -b BRANCH_NAME

A good practice is to have developers name their branch after the ticket or sprint they are working on and include their own name in the branch. Here are some examples:

> git checkout -b TICKET#_USERNAME

> git checkout -b JIRA_TICKET#_USERNAME

> git checkout -b SPRINT#_USERNAME

For our example here, let’s use the following (use your name, not mine). Run this command in the top level folder of the project (db-cicd-project)

> git checkout -b sprint1_brian.spendolini

And you will see the response:

> Switched to a new branch 'sprint1_brian.spendolini'

We can now start developing and committing code to the repository without mixing our changes with the main code line. You can always issue a git status to see what branch you are in:

> git status
On branch sprint1_brian.spendolini
nothing to commit, working tree clean

Only the Changes


Back to our sprint. If you remember from our example scenario, the first ticket says that we need to add a new column to the trees table so that citizens can upload a picture of the tree they are entering. OK, simple enough task.

Once we are in our local git repository for the project db-cicd-project, we move to the database directory. And as before, we use SQLcl to log into our autonomous database:

> sql /nolog

Set the wallet location again (your directory path will be different):

SQL> set cloudconfig /Users/bspendol/Downloads/Wallet_ADB.zip
 
And connect to the database:

SQL> conn demo@ADB_high
 
And then provide the password at the password prompt (password is PAssw0rd11##11):

Password? (**********?)

Time to add the column. We can issue the following SQL to add the tree_picture column to the trees table:

SQL> alter table trees add (tree_picture blob);
Table TREES altered.
 
Ticket done, time to generate the schema again using Liquibase.

SQL> lb genschema -split
 
We can exit SQLcl and go back to the top level of our local git repository. In the db-cicd-project directory, we issue a git add:

> git add .

And then a commit

> git commit -m "ticket1"

Then we push the new code up to our repository. This push will be slightly different because we are pushing to a branch now. If you issue a git push, you will see something similar to below:

> git push
fatal: The current branch sprint1_brian.spendolini has no upstream branch.
To push the current branch and set the remote as upstream, use

    git push --set-upstream origin sprint1_brian.spendolini

So we need to issue the push with our branch this one time:

> git push --set-upstream origin sprint1_brian.spendolini
 
If you go into the GitHub web UI, you can see in this repository that a new branch has been created and that only the table file has been updated as expected.


The other files which contain the procedure and trigger code have not been changed. Again, Liquibase is tracking the changes for us, and we can see it in action here.

Apply Yourself


Seeing is believing, so let’s do exactly that: apply our master code line to a schema in our database, then apply the new branch and see what Liquibase does.

We need to switch back to the master branch in our local code repository. Before we do that, let's note what this switch will change. While still on the branch we just committed to, go into the database directory and take a look at the trees_table.xml file. We should see the new column in the XML as:

         <COL_LIST_ITEM>
            <NAME>TREE_PICTURE</NAME>
            <DATATYPE>BLOB</DATATYPE>
         </COL_LIST_ITEM>
      </COL_LIST>
 
Now issue the following git command:

> git checkout master
 
And look at the trees_table.xml file again. You can see that the last column in this file is SUBMITION_DATE and no longer TREE_PICTURE.

         <COL_LIST_ITEM>
            <NAME>SUBMITION_DATE</NAME>
            <DATATYPE>TIMESTAMP</DATATYPE>
            <SCALE>6</SCALE>
         </COL_LIST_ITEM>
      </COL_LIST>
 
Why is that? It’s because you switched to the master branch where we have our baseline code. We have yet to merge the new code into that main branch, so the new column change only exists in our development branch. Let’s use this to our advantage. While on the master branch, start up SQLcl and log in as the admin user:

> sql /nolog

SQL> set cloudconfig /Users/bspendol/Downloads/Wallet_ADB.zip

SQL> conn admin@ADB_high

And we will create a new schema

SQL> create user test identified by "PAssw0rd11##11" quota unlimited on data;

SQL> grant connect, resource to test;

SQL> conn test@ADB_high
Password? (**********?) **************
Connected.

We need to be in the database directory of our local project. If we are not already there, we can use some of SQLcl's built-in functionality to get there. We can navigate directories with the cd command. If you are in the tables directory under the database directory, you can simply issue a:

SQL> cd ..

And move up a directory into the database directory in our project home. We can also use the host command with SQLcl to see where we are:

SQL> host pwd
/Users/bspendol/git/db-cicd-project/database

Ensure you are in the /db-cicd-project/database directory where mine would be as above:
/Users/bspendol/git/db-cicd-project/database

You can also issue a host ls command to see the files in that directory:

SQL> host ls
controller.xml        index            table
database_files_here    procedure        trigger
 
We are looking to use the controller.xml file.

Quick checkpoint: We are logged into our database as the test user we just made, we are on the master code branch in the local repository and we are in the database directory in our project home. Once here, we are going to use Liquibase to create the objects from our master branch. Issue the following command:

SQL> lb update -changelog controller.xml

Once you issue this command, you will see all our objects being created in the database.

SQL> lb update -changelog controller.xml
ScriptRunner Executing: table/trees_table.xml::96726c6d630653c9a9169df8b67089f2cdee1135::(DEMO)-Generated -- DONE
ScriptRunner Executing:
procedure/admin_email_set_procedure.xml::9d1175a5b6ca6d0c969a2eb183062e0a873ee226::(DEMO)-Generated -- DONE
ScriptRunner Executing: index/tree_id_pk_index.xml::b4dd6c2359e7f1e881b55ba728056510d537967a::(DEMO)-Generated -- DONE
ScriptRunner Executing: trigger/set_date_bi_trigger.xml::3b0d8da4c19a11b80f035a748c9b40752f010110::(DEMO)-Generated -- DONE

######## ERROR SUMMARY ##################
Errors encountered:0

######## END ERROR SUMMARY ##################
 
Now, we can move to our development branch and apply that to this schema. Exit out of SQLcl:

SQL> exit
 
Change branches (remember, your branch will be named differently, unless you are a clone of me with the same name... if so, email me please):

> git checkout sprint1_brian.spendolini
Switched to branch 'sprint1_brian.spendolini'
Your branch is up to date with 'origin/sprint1_brian.spendolini'.

And log back into the database as this test user:

> sql /nolog

SQL> set cloudconfig /Users/bspendol/Downloads/Wallet_ADB.zip

SQL> conn test@ADB_high

And now, using our development branch, we will apply the changes to this schema:

SQL> lb update -changelog controller.xml
ScriptRunner Executing: table/trees_table.xml::f81a149311b20a5a1dfe95a35108b8728df33b7f::(DEMO)-Generated -- DONE

######## ERROR SUMMARY ##################
Errors encountered:0

######## END ERROR SUMMARY ##################
 
There are two tables that Liquibase uses to track changes, DATABASECHANGELOG and DATABASECHANGELOG_ACTIONS. There is also a view that combines these tables into a more readable format, DATABASECHANGELOG_DETAILS. If we take a look at the SQL column of the DATABASECHANGELOG_DETAILS view, we can see that only an ALTER TABLE was executed and that the table was not dropped and recreated:
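A minimal query is enough to confirm this while still connected as the test user (I am assuming the view exposes the executed statement in a column named SQL, as the output below suggests):

SQL> select sql from databasechangelog_details;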

SQL
--------------------------------------------
ALTER TABLE "TREES" ADD ("TREE_PICTURE" BLOB)
 

This is not only the behavior you would see in, say, a production instance, but more importantly the behavior you would expect from a change management tool.

Merge Ahead


You now have the tools to start change management with SQLcl, Liquibase and a git repository for your development group or organization. As with all new processes, start slow. Maybe start with just getting everyone used to committing code to a central repository using git. But once we start adding these building blocks, we can continue down the CICD road to the next stages.

This process also allows you to start combining your database sprint code with stateless application code in the same repository, setting the entire development team up for a CI/CD process. No longer are database developers and DBAs left out in the cold and called legacy; they can now adopt the same change management and repository practices that other development teams have enjoyed.

Source: oracle.com

Wednesday, July 28, 2021

Difference Between Primary Key and Candidate Key


Primary Key vs Candidate Key

Though the primary key is selected from the candidate keys, there exist some differences between the primary key and the other candidate keys, which will be discussed in detail in this article. Database design is one of the most important activities in maintaining and storing data. During this design process, different tables with many relationships have to be created. In order to access these tables in a database, different types of keys are used in modern database systems such as MySQL, MS Access, SQLite, etc. Out of these keys, candidate keys and primary keys have become essential in database design practices.

What is a Candidate Key?

A candidate key is a single column or set of columns in a table of a database that can be used to uniquely identify any database record without referring to any other data. Each table of a database can have one or more candidate keys. A set of candidate keys can be created by using functional dependencies. There are some important features of a candidate key. They are:

• Candidate keys should be unique within the domain and should not contain any NULL values.

• A candidate key should never change, and it must hold the same value for a specific occurrence of an entity.

The main purpose of a candidate key is to help to identify one single row out of millions of rows in a big table. Each candidate key is qualified to become a primary key. However, out of all the candidate keys, the most important and special candidate key will become the primary key of a table and it is the best among the candidate keys.

What is a Primary Key?

A primary key is the best candidate key of a table that is used to uniquely identify records that are stored in a table. When creating a new table in a database we are asked to select a primary key. Therefore, selection of a primary key for a table is the most critical decision that should be taken by a database designer. The most important constraint, which should be considered when deciding a primary key, is that the selected column of the table should only contain unique values, and it should not contain any NULL values. Some of the primary keys that are commonly used when designing tables are Social Security Number (SSN), ID and National Identity Card Number (NIC).

The programmer should remember to choose a primary key carefully because it is difficult to change. Therefore, according to the programmers, the best practice of creating a primary key is to use an internally generated primary key such as Record ID created by AutoNumber data type of MS Access. If we try to insert a record into a table with a primary key that duplicates an existing record, the insertion will fail. The primary key value should not keep changing, so it is more important to keep a static primary key.

A primary key is the best candidate key.

What is the difference between Primary Key and Candidate Key?


• A candidate key is a column (or set of columns) that qualifies as unique, whereas the primary key is the column that uniquely identifies a record.

• A table without candidate keys does not represent any relation.

• There can be many candidate keys for a table in a database, but there should be only one primary key for a table.

• Although the primary key is one of the candidate keys, sometimes it is the only candidate key.

• Once a primary key is selected, the other candidate keys become unique keys (see the sketch after this list).

• In practice, a column treated as a candidate key may allow NULL values even though it currently contains none. Such a candidate key is not qualified to be the primary key, because the primary key should not contain any NULL values.

• It is also possible that a candidate key that is unique at the moment may later contain duplicate values, which disqualifies it from becoming the primary key.
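As a small, hedged sketch in Oracle SQL (the table and column names are made up for illustration): EMPLOYEE_ID, SSN, and NIC could all serve as candidate keys; one is promoted to the primary key and the remaining candidate keys are kept as unique keys.

create table employees (
  employee_id number       constraint employees_pk primary key,          -- the chosen candidate key
  ssn         varchar2(11) not null constraint employees_ssn_uk unique,  -- remaining candidate key
  nic         varchar2(12) not null constraint employees_nic_uk unique,  -- remaining candidate key
  full_name   varchar2(100)
);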

Source: differencebetween.com

Monday, July 26, 2021

OCI Data Science now offers E3 Flex Shapes with AMD’s 64-core processor

If you’re dealing with increasingly compute-intensive workloads, you want more cores and memory — but you also want flexibility. That’s why it’s important to be able to specify exactly the compute shape you need.

To make that even easier, OCI Data Science recently launched support for E3 Flex shapes that let you specify the number of cores and memory for the compute shape. E3 standard instances are based on the AMD EPYC 7742 processor, which has a base clock frequency of 2.25 GHz, a max boost of up to 3.4 GHz, and 64 cores (which AMD calls Tetrahexaconta-core).

While the previous generation shape limited you to 1, 2, 4, 8, or 16 cores, you can now choose from 1 to 64 OCPUs. You also get up to 1 TB of memory. The E3 instances have 16 GB of memory per OCPU, which is double the ratio of the AMD E2 shapes.

Do you need even more flexibility for your workload? With this update, you can deactivate your notebook session, choose a different shape configuration, then reactivate the notebook session.

Let’s take a closer look at the simple process of customizing your shapes.

Choosing your compute shape families

When you log into OCI Data Science, you will see a new UI that allows you to pick different shape families.


Upon choosing the AMD Rome or the E3 Flex Shape, you can customize the number of OCPUs and memory in your instance.


That’s it — you’ve customized your E3 instance to deliver just the amount of performance you need.

Speed-tested on our benchmarks

Just how much faster are the E3 shapes? We’ve found the second generation AMD EPYC processor server performs impressively compared to existing VM instances on our standardized benchmarks. Compared to the X7 Standard instances, the E3 Standard instance delivers a 60% increase in integer performance per core, a 41% increase in floating point performance per core, and a 52% increase in STREAM Triad bandwidth per core.

Source: oracle.com

Friday, July 23, 2021

How Oracle transformed its employee experience—and cut support requests by 70%—with Oracle Guided Learning


There are more than 130,000 Oracle employees worldwide. With that broad reach comes an ongoing responsibility to innovate: continually providing our employees with new experiences that make work easier, better, and more enjoyable.

With the rapid deployment of new Oracle Cloud HCM features, we saw a clear need to help employees quickly learn new processes and get the information they needed to complete key tasks: from setting goals, to completing performance check-ins, to updating their talent profiles.

As we considered this initiative and how to approach it, we kept a guiding insight in mind: that the backgrounds, tenures, and roles of our employees vary widely. We weren’t designing for a static user or persona. Some Oracle employees were “power users” with a proficient understanding of our talent management system and processes. Meanwhile, others were occasional users or new employees with limited experience with these flows here at Oracle.  

To help employees be successful, we knew we needed to be able to speak fluently and efficiently to these very different audiences, based on their individual roles and experiences.

Empowering employees through a modernized experience: The Oracle Guided Learning (OGL) story

Our recent implementation of Oracle Guided Learning (OGL) for Oracle HCM Talent Management Cloud has allowed us to personalize these employee experiences—and achieve tremendous time and cost savings along the way.

OGL has revolutionized the delivery of system updates, enhancements, and a variety of time-sensitive changes that affect our employees across the globe. Contextual messages, presentations, training videos, and other knowledge and communications can now be delivered and presented directly into the user’s flow of work, in one locale, when and where they need it.

These new tools and features are empowering Oracle’s people and modernizing how they operate in three key areas:

➽ Process and policy adoption: When employees know what to do and how to do it, they don’t have to leave the application to search for help or other resources. This in turn garners higher completion rates and adoption overall.

➽ Enhanced user experience: The centralized content experience and contextual guidance in the flow of work come together to form a revolutionized user experience.

➽ Time and dollars saved: OGL is saving time and driving efficiency for our internal team as well: enabling them to quickly auto-generate content for the experience, particularly non-audio instructional videos (once a time- and resource-intensive process).


The results are already compelling. Oracle’s Talent Management Center of Expertise has seen Slack channel service requests, queries, and other support requests decrease by an estimated 70%. 

Additionally, time spent on creating content (i.e., video production and associated review cycles) has dropped by 50%. This capability has saved the team, on average, 20 hours of labor per video created, allowing them to focus their time on other mission critical initiatives, like continuing to implement new initiatives to support career development and employee success—helping drive the modernized employee experience forward even more.

Source: oracle.com

Wednesday, July 21, 2021

Using the Logging Service in OCI with Oracle REST Data Services


In this post, we are going to go over setting up your Oracle REST Data Services instances on OCI x86 compute VMs to send logs over to the OCI logging service automatically. Once in the logging service, you can set alerts, notifications or even have the logs inserted into a database for further analysis and reporting.

Preparing ORDS for Logging

To have ORDS start logging in standalone mode, we need to change a few parameters and create a file so that ORDS produces not only Java logs but also access logs.

1. Update the standalone.properties file

In the directory where the ORDS configuration files are (typically ords/conf/ords/ or if installing from yum, /opt/oracle/ords/conf/ords/), find the standalone directory.

In this directory is the standalone.properties file. Add the following lines but be sure to replace <LOG_DIRECTORY> with an actual directory on your VM instance. This walkthrough uses /home/opc/ords/conf/ords/standalone/logs/ as the logging directory.

standalone.access.log=<LOG_DIRECTORY>

standalone.access.format=%{client}a %u %t "%r" %s %{CLF}O "%{Referer}i" "%{User-Agent}i" %{Host}i

Using /home/opc/ords/conf/ords/standalone/logs/, the standalone.properties file would look like the following:

standalone.access.log=/home/opc/ords/conf/ords/standalone/logs/

standalone.access.format=%{client}a %u %t "%r" %s %{CLF}O "%{Referer}i" "%{User-Agent}i" %{Host}i

2. Create the mylogfile.properties file

Where the ords.war file is located, create a file called mylogfile.properties. Add the following to that file, again replacing <LOG_DIRECTORY> with the directory you want to use and used in the previous set up step:

handlers=java.util.logging.FileHandler

# Default global logging level for ORDS

.level=INFO

java.util.logging.FileHandler.pattern=<LOG_DIRECTORY>/ords-sys.log

java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter

java.util.logging.SimpleFormatter.format = %1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS %4$-6s %2$s %5$s%6$s%n

Using /home/opc/ords/conf/ords/standalone/logs/, the mylogfile.properties file would look like the following:

handlers=java.util.logging.FileHandler

# Default global logging level for ORDS

.level=INFO

java.util.logging.FileHandler.pattern=/home/opc/ords/conf/ords/standalone/logs/ords-sys.log

java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter

java.util.logging.SimpleFormatter.format = %1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS %4$-6s %2$s %5$s%6$s%n

Now, when starting up ORDS in standalone mode, you will add the following:

-Djava.util.logging.config.file=mylogfile.properties

with the full command looking like the following

java -Djava.util.logging.config.file=mylogfile.properties -jar ords.war standalone &

OCI Setups Using OCI CLI

The following steps are for using the OCI CLI for setting up logging.

Note: When running these commands, you will need to save off the OCIDs of the following resources:

VM Instance OCID:_______________________

Compartment OCID:_______________________

Dynamic Group OCID:_______________________

Log Group OCID:_______________________

Log OCID:_______________________

1. Create a Dynamic Group

Use the following OCI CLI command to create a dynamic group. The instance.id in the matching rule should be the OCID of the VM instance you want to monitor for logs. Replace <INSTANCE.ID> with your VM OCID.

oci iam dynamic-group create --description ORDS_DynamicGroup --matching-rule "Any {instance.id = '<INSTANCE.ID>'}" --name ORDS_DynamicGroup

with the results being similar to the following:

{
  "data": {
    "compartment-id": "ocid1.tenancy.oc1..aaaaaaaambnyexddfd...",
    "defined-tags": {},
    "description": "ORDS_DynamicGroup",
    "freeform-tags": {},
    "id": "ocid1.dynamicgroup.oc1..aaaaaaaafddf...",
    "inactive-status": null,
    "lifecycle-state": "ACTIVE",
    "matching-rule": "Any {instance.id = 'ocid1.instance.oc1.eu-frankfurt-1.anth...'}",
    "name": "ORDS_DynamicGroup",
    "time-created": "2021-06-17T18:03:45.166000+00:00"
  },
  "etag": "7c4e50ca466387eabc2eee1001001e233d414ad2589"
}

Remember to take note of the "id" attribute's value

"id": "ocid1.dynamicgroup.oc1..aaaaaaaafddf...",

2. Create a Log Group

Use the following OCI CLI command to create a Log group. The --compartment-id should be the OCID of a compartment you want to use for all the logs. Replace <COMPARTMENT.ID> with the Compartment OCID you would like to use.

oci logging log-group create --compartment-id <COMPARTMENT.ID> --display-name "ORDS_LogGroup"

3. List the Log Groups

Run the following command to get a list of the log groups you have created so we can find the Log Group OCID. Replace <COMPARTMENT.ID> with the Compartment OCID you used in the previous step.

oci logging log-group list --compartment-id <COMPARTMENT.ID>

with the results being similar to the following:

{
  "data": [
    {
      "compartment-id": "ocid1.compartment.oc1..aaaaaaaasoku6xtelb2c...",
      "defined-tags": {},
      "description": null,
      "display-name": "ORDS_LogGroup",
      "freeform-tags": {},
      "id": "ocid1.loggroup.oc1.eu-frankfurt-1.amaaaaaau3i6vkyayig4yxj7un...",
      "lifecycle-state": "ACTIVE",
      "time-created": "2021-06-17T16:34:09.913000+00:00",
      "time-last-modified": "2021-06-17T16:34:09.913000+00:00"
    },
    {
      "compartment-id": "ocid1.compartment.oc1..aaaaaaaasoku6xtelb2...",
      "defined-tags": {},
      "description": null,
      "display-name": "test",
      "freeform-tags": {},
      "id": "ocid1.loggroup.oc1.eu-frankfurt-1.amaaaaaau3i6vkyadrtdi2tzklev...",
      "lifecycle-state": "ACTIVE",
      "time-created": "2021-06-03T21:37:49.893000+00:00",
      "time-last-modified": "2021-06-03T21:37:49.893000+00:00"
    }
  ]
}

Remember to take note of the "id" attribite's value for the ORDS_LogGroup Log Group.

"display-name": "ORDS_LogGroup",

"freeform-tags": {},

"id": "ocid1.loggroup.oc1.eu-frankfurt-1.amaaaaaau3i6vkyayig4yxj7un...",

4. Create a Log

Use the following OCI CLI command to create a Log using the OCID of the Log Group you previously created. Replace <LOG_GROUP.ID> with the Log Group OCID you created in the Create a Log Group step and found the OCID in the List Log Groups step.

oci logging log create --display-name ORDS_Logs --log-group-id <LOG_GROUP.ID> --log-type CUSTOM

5. List the Logs

Use the following OCI CLI command to list all the logs that belong to a log group. It uses the Log Group OCID as a parameter. Replace <LOG_GROUP.ID> with the Log Group OCID you created in the Create a Log Group step and found the OCID in the List Log Groups step.

oci logging log list --log-group-id <LOG_GROUP.ID>

with the results being similar to the following:

{
  "data": [
    {
      "compartment-id": "ocid1.compartment.oc1..aaaaaaaasoku6xtelb2...",
      "configuration": null,
      "defined-tags": {},
      "display-name": "ORDS_Logs",
      "freeform-tags": {},
      "id": "ocid1.log.oc1.eu-frankfurt-1.amaaaaaau3i6vkyaduc...",
      "is-enabled": true,
      "lifecycle-state": "ACTIVE",
      "log-group-id": "ocid1.loggroup.oc1.eu-frankfurt-1.amaaaaaau3i6vkyayig4yxj7unl...",
      "log-type": "CUSTOM",
      "retention-duration": 30,
      "time-created": "2021-06-17T16:42:46.599000+00:00",
      "time-last-modified": "2021-06-17T16:42:46.599000+00:00"
    }
  ]
}

Remember to take note of the "id" attribite's value for the ORDS_Logs Log Group.

"display-name": "ORDS_Logs",

"freeform-tags": {},

"id": "ocid1.log.oc1.eu-frankfurt-1.amaaaaaau3i6vkyaduc...",

6. List the Dynamic Groups

Use the following OCI CLI command to list all the Dynamic Groups. We will be adding the --name parameter so that we only get back the dynamic group we created called ORDS_DynamicGroup.

oci iam dynamic-group list --name ORDS_DynamicGroup

with the results being similar to the following:

{
  "data": [
    {
      "compartment-id": "ocid1.tenancy.oc1..aaaaaaaambnyexdta...",
      "defined-tags": {},
      "description": "ORDS_DynamicGroup",
      "freeform-tags": {},
      "id": "ocid1.dynamicgroup.oc1..aaaaaaaajbgg7wq7aawnnkmrocvo...",
      "inactive-status": null,
      "lifecycle-state": "ACTIVE",
      "matching-rule": "Any {instance.id = 'ocid1.instance.oc1.eu-frankfurt-1.antheljsu3i6vkyc...'}",
      "name": "ORDS_DynamicGroup",
      "time-created": "2021-06-17T18:03:45.166000+00:00"
    }
  ]
}

Remember to take note of the "id" attribite's value.

"description": "ORDS_DynamicGroup",

"freeform-tags": {},

"id": "ocid1.dynamicgroup.oc1..aaaaaaaajbgg7wq7aawnnkmrocvo...",

7. Create an Agent Config to Harvest Logs on the Compute Instance

Use the following OCI CLI command to create an agent to harvest our ORDS log files and place them into the Logging Service. Replace <COMPARTMENT.ID> with the Compartment OCID you used previously.

oci logging agent-configuration create --compartment-id <COMPARTMENT.ID> --is-enabled TRUE --display-name ORDS_Logging_Agent --service-configuration file://serv.json --group-association file://group.json --description ORDS_Logging_Agent

Before running this command, we need to stage and fill two files: serv.json and group.json.

group.json

In the group.json file, you will replace <DYNAMIC_GROUP.ID> with the OCID of your dynamic group you previously created.

{
  "groupList": [
    "<DYNAMIC_GROUP.ID>"
  ]
}

serv.json

In the serv.json file, you will replace <LOG.ID> with the OCID of your Log you previously created.

{
  "configurationType": "LOGGING",
  "destination": {
    "logObjectId": "<LOG.ID>"
  },
  "sources": [
    {
      "name": "ordslog2",
      "parser": {
        "field-time-key": null,
        "is-estimate-current-event": null,
        "is-keep-time-key": null,
        "is-null-empty-string": null,
        "message-key": null,
        "null-value-pattern": null,
        "parser-type": "NONE",
        "timeout-in-milliseconds": null,
        "types": null
      },
      "paths": [
        "/home/opc/ords/conf/ords/standalone/logs/*.log"
      ],
      "source-type": "LOG_TAIL"
    }
  ]
}

Shortly after the agent is created, you will begin to see logs from the ORDS VM instance appear in this log. The agent is set to tail the log files in the path /home/opc/ords/conf/ords/standalone/logs/*.log. Your logging path may differ depending on the ORDS logging setup steps.
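If you prefer to check for the harvested entries from the command line instead of the console, something like the following should work. This is a hedged sketch: the exact subcommand and flag names can vary by OCI CLI version (verify with oci logging-search search-logs --help), and the OCIDs and times are placeholders.

oci logging-search search-logs \
  --time-start "2021-06-17T00:00:00Z" \
  --time-end "2021-06-18T00:00:00Z" \
  --search-query 'search "<COMPARTMENT.ID>/<LOG_GROUP.ID>/<LOG.ID>"'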


Source: oracle.com

Monday, July 19, 2021

PDB Management on Database Cloud Service

We are pleased to announce the General Availability (GA) of PDB Management on Database Cloud Service (DBCS) for VM and BM databases. Gone are the days when a new database was provisioned with a single PDB, lifecycle operations were based on that CDB, and PDB management had to be performed manually via SQL commands. With this new feature, users can create and manage pluggable databases (PDBs) via the OCI console and APIs. This functionality provides complete user control for creating, managing, and performing lifecycle operations on PDBs.


The new PDB Management provides:

◉ Create and delete PDBs within the existing container database

◉ Start and stop PDBs

◉ Clone a PDB into the existing CDB, or to a remote CDB

◉ Connection string to each PDB 

◉ Support backup, restore, and Data Guard operations for CDB with PDBs

Pluggable Databases are managed as an OCI resource in the context of DB System and Database, and all the lifecycle operations can be performed on them just like any other OCI resource. To create a PDB, specify the PDB name and database-specific security credentials. Users can fully manage the lifecycle of these PDBs, which are compartment-aware and belong to the specific CDBs in which they are created.

OCI Console Experience

We will go over the following core user journey highlights for PDB management using OCI console: 

◉ Create a pluggable database (PDB)

◉ How to perform lifecycle operations on PDBs

◉ How to clone a pluggable database

Create a pluggable database (PDB)

From the Bare Metal, VM, and Exadata service home page, navigate to DB System → Database → Pluggable Databases, and click on 'Create Pluggable Database'. To create a PDB, provide PDB name, PDB password, etc. The PDB is a resource of the database.
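The same create operation is also exposed through the APIs and CLI. As a hedged sketch of the CLI equivalent (parameter names may differ slightly across OCI CLI versions, so confirm with oci db pluggable-database create --help; the OCID and passwords are placeholders):

oci db pluggable-database create \
  --container-database-id <CDB_OCID> \
  --pdb-name SALESPDB \
  --pdb-admin-password <PDB_ADMIN_PASSWORD> \
  --tde-wallet-password <TDE_WALLET_PASSWORD>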


Perform lifecycle operations on PDBs


PDB lifecycle operations of starting, stopping, tagging, and deleting can be performed directly from the OCI Console. The PDB Connection tab provides connection strings to use when connecting to that PDB.


How to clone a PDB


You can clone a PDB into the same CDB. Go to the PDB details page and click 'Clone'. Provide the new PDB details on the screen.


A few additional points to note while creating and managing PDBs:

◉ There is no change in the billing with the introduction of this feature.

◉ Remote cloning of PDB is currently not available on OCI Console. Currently, it is only available via API, SDK, and Terraform.

◉ Backup, restore, and Data Guard operations performed on the database take the underlying PDBs into consideration as well. For example, when you take a backup of the database and restore it at a later point in time, your PDBs will be part of your restored database.

◉ For this initial release, existing PDBs (not plugged in) in a database will be replicated to the standby at the time of Data Guard configuration. Post Data Guard configuration, you cannot create PDBs via the console/API; however, you can create additional PDBs via SQL*Plus and they will be visible in the OCI console.

◉ Other PDB lifecycle operations such as refresh, hot clone, etc via OCI console and APIs are being worked on as a roadmap item.

Source: oracle.com

Friday, July 16, 2021

Elastic Storage Expansion now available on Gen2 Exadata Cloud@Customer

We are pleased to announce the General Availability (GA) of Elastic Storage Expansion on Gen 2 Exadata Cloud@Customer. Previously, Exadata Cloud@Customer customers provisioned Exadata Infrastructure with a fixed number of storage servers determined by the shape selected during initial infrastructure deployment. With this release, you can dynamically increase the shared Exadata storage capacity of the infrastructure to meet your business needs by adding storage servers on-demand.

Key Customer Benefits

With the elastic storage expansion capability, you can now

1. Provision new Exadata Infrastructure with custom storage capacity best suited for your business needs without being constrained by the standard supported shapes

2. Start with a smaller storage footprint for the Exadata Infrastructure at install time, thereby saving cost and planning overhead to accommodate for maximum future usage upfront 

3. Expand the storage capacity on existing deployed Exadata Infrastructure on-demand in an automated, elastic fashion without any disruption to current running workloads

4. Allocate additional storage capacity available from newly added storage servers to already deployed VM Clusters and/or use them for provisioning new VM Clusters on the infrastructure

Adding new storage servers to an existing Exadata Infrastructure follows the same overall customer experience and ordering process outlined while setting up a new infrastructure deployment. Once the order for additional storage servers is processed, Oracle will work with you to deploy, activate and add storage capacity from the new servers to your Exadata Infrastructure. For new Exadata Infrastructure deployments, any additional storage servers ordered along with the infrastructure will automatically be included and made available as part of the infrastructure activation process. Exadata Infrastructures deployed with additional storage servers will be configured as an Elastic shape. The total number of storage servers and usable capacity is clearly called out for the given infrastructure.

OCI Console Experience

We will go over the following core user journey highlights for elastic storage expansion using the OCI console

◉ Scale Exadata Infrastructure to Add Storage Capacity

◉ Scale VM Cluster to Use Additional Storage Capacity

◉ Create Exadata Infrastructure with Additional Storage Servers

1. Scale Exadata Infrastructure to Add Storage Capacity

From the Exadata Infrastructure details page, you can initiate a request to scale your infrastructure with additional storage server(s). Before initiating this request from the OCI console, you should order the additional storage servers planned for elastic expansion.


You can specify the number of storage servers to add as part of the scale infrastructure request. The additional storage capacity, current storage capacity, and total storage capacity are clearly called out as part of the scale request.


Once the scale infrastructure request has been completed, you can download the generated configuration bundle and provide it to the Oracle field engineer while coordinating the delivery, deployment, and activation of the additional storage server(s) in your data center.


Oracle field engineering team will work with you to ensure that the requested number of additional storage server(s) are deployed as part of Exadata Infrastructure for which the scale request was initiated. After deploying the additional storage server(s), Oracle will provide a configuration file to upload and activate the new server(s).


After the new storage servers are activated, you can add the storage capacity from the newly added servers to the total usable Shared Exadata Storage capacity and make it available for VM Clusters to allocate and consume. 


Adding storage capacity to the infrastructure's shared Exadata Storage capacity free pool triggers an ASM rebalance of existing disk groups (used by the provisioned VM Clusters) to all storage servers visible to the infrastructure, including the newly added ones.
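If you want to keep an eye on the rebalance, one hedged option (assuming you can connect to an ASM instance on one of the VM cluster nodes, for example as the grid user) is to query the standard ASM operation view, which shows progress and the estimated minutes remaining:

SQL> select group_number, operation, pass, state, sofar, est_work, est_minutes from v$asm_operation;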


Note: While rebalance is in progress, you may see some performance impact depending on how much capacity is already in use by the existing disk groups.

After completing all these steps for elastic storage expansion, you can view the total number of storage servers and the total Exadata Shared Storage capacity on the infrastructure details page.


2. Scale VM Cluster to Use Additional Storage Capacity


As part of the VM Cluster scale workflow, you can allocate and use the additional storage capacity from the newly added storage server(s). The new maximum limit for shared storage capacity is reflected and used for validation.


Note: The additional storage capacity from the newly added storage server(s) is also presented and available as part of the workflow to create new VM Clusters.
 

3. Create Exadata Infrastructure with Additional Storage Servers


While provisioning a new Exadata Infrastructure, you can choose the number of storage servers to be included and deployed as part of the initial infrastructure creation workflow. The total storage capacity from all the storage servers that are part of initial infrastructure deployment is clearly called out.


Source: oracle.com