Monday, January 30, 2023

1Z0-1069-22 Exam Tips to Pass Oracle Certification In 2023


Earning an Oracle Recruiting Cloud 2022 Implementation Professional certification will help you learn new technologies and skills, thus increasing your probability of a promotion. Oracle professionals believe that getting an Oracle 1Z0-1069-22 certification can help grow their careers.

Rapid advances in technology have brought a wave of competition to the cloud industry. One of the best ways to survive in this cut-throat competitive world is to demonstrate your skills and get certified. The Oracle Recruiting Cloud 2022 Implementation Professional (1Z0-1069-22) exam serves this goal and plays a crucial role in achieving your career ambitions. Oracle certifications are globally accepted and improve your opportunities for higher salaries.

The Oracle 1Z0-1069-22 exam is well suited to professionals who want to boost their careers in IT. This exam is an opportunity to open new realms in your career. We present this preparation guide for the Oracle 1Z0-1069-22 exam to assist you in the process. To complete the loop, it is equally important to understand all the essential details of the exam.

Why Choose the Oracle 1Z0-1069-22 Exam?

Oracle 1Z0-1069-22 validates your skills to become an Oracle Recruiting Cloud 2022 Certified Implementation Professional and supplies you with an edge over your competitors. Moreover, Oracle certifications are well-recognized around the globe.

Therefore you can derive the following benefits from this certification:

  • To begin with, the 1Z0-1069-22 exam develops your knowledge, and an Oracle certification is in demand with prospective employers.
  • Also, it enhances your potential earning power to command a higher salary.
  • Subsequently, you receive access to a secure digital badge to add to your social media profiles.
  • Moreover, through this exam, you get to join the 80% of Oracle certification holders who reported a promotion or salary increase and career improvement.

This certification exam is undoubtedly a move ahead in your career advancement. If you are unsure how to prepare for the exam, relax: we provide a specially curated guide that resolves all your doubts.

Study Tips for Oracle 1Z0-1069-22 Exam

We know how overwhelming all this information can be, and you probably have plenty of questions. Just strap in, because we provide preparation tips specially curated by experts. This study guide will help you throughout your journey for this exam and act as a bridge to your career advancement.

So let’s start, step by step:

1. Understand the 1Z0-1069-22 Exam Topics

The course domains act as a blueprint for the exam. You should be very clear with the syllabus of the Oracle 1Z0-1069-22 exam. Remember to refer to the official website before commencing your preparations.

2. Take a Look at Study Resources

Many online and offline study resources are available for this exam. Look over all of them and choose the options that suit you best.

3. Prepare from Oracle 1Z0-1069-22 Books

Books have always been the golden way to prepare for an exam. They guarantee in-depth clarity of complicated topics and are easy to refer to. There are different books available for this certification exam. You may review them and choose the one that suits you the best.

4. Online Training Course

This exam is practically oriented; therefore, it requires hands-on experience in labs and clarity on the topics. Training is essential for answering questions quickly and accurately. Oracle offers a training program for your preparation through the Oracle Talent Management Cloud Learning Subscription.

5. Start Taking 1Z0-1069-22 Practice Test

1Z0-1069-22 practice tests are a must for any exam preparation. Practice tests help you evaluate your performance and identify your weaker areas. They provide an environment similar to the exam so you can check your accuracy and speed. Attempt multiple practice tests to improve and excel in this field.

You can refer to online expert learning resources designed for 1Z0-1069-22 practice, which provide a genuine exam experience. Also, work through question sets of varying difficulty levels to check your preparation. Attempt as many practice tests as possible before the final exam and analyze your weak areas to improve.

6. Join an Online Community

Online forums and study groups are critical in preparing for an exam and help you learn from experts. Moreover, such online study groups are fruitful because they connect you with other candidates taking the same exam, provide valuable insights into the certification, and clarify your doubts.

Follow the preparation tips, and your success is guaranteed. You will surely clear this 1Z0-1069-22 exam on the first attempt with flying colors.

Friday, January 20, 2023

What is a Converged Database?


"We need to build a new mobile app so customers can submit and retrieve documents, so I will need a specialized database to store the documents, right?

Oh, wait, next week, this other project requires we provide a new payment system with user fraud protection built in to meet compliance, and I am going to store relational data as well. We’ll need a specialized Blockchain database and a relational database, right?"

These conversations can leave you concerned and frustrated, wondering how you will allocate your resources to these specialized data stores.

But perhaps the solution to your problem is not more resources but a Converged Database.

A Converged Database is a database that has native support for all modern data types, analytics, and the latest development paradigms built into one product.

Converged Databases support Spatial data for location awareness, JSON for flexibility, IoT for device integration, in-memory technologies for real-time analytics, and of course, traditional relational data. A Converged Database can run all sorts of workloads, from Geo-Distributed to Blockchain to Analytics and Machine Learning. It can also handle any development paradigm, including Microservices, Events, REST, SaaS, and CI/CD, to name a few.

Traditionally, when new data management technologies first come out, they are implemented as separate products. For example, when Blockchain first came out, it was a separate stand-alone system that required an entirely different, proprietary way to store and access data.

By integrating new data types, workloads, and paradigms as features within a Converged Database, you can support mixed workloads and data types in a much simpler way. You don't need to manage and maintain multiple systems or worry about providing unified security across them.

You also get synergy across these capabilities. For example, by supporting Machine Learning algorithms and Spatial data in the same database, you can easily do predictive analytics on Spatial data, making it dramatically easier and faster to develop data-driven apps.

A good analogy for a Converged Database is a smartphone. In the past, if you wanted to make phone calls, you would use a phone, and if you wanted to take a picture or video, you would use a camera. If you tried to navigate somewhere, you would need a map or a navigation system. If you wanted to listen to music, you needed an iPod or other similar device.

But with a smartphone, all these products have been converted or converged into one. Each of these original products is now a feature of the smartphone. Having all these features combined into a single product inherently makes your life easier, as you can stream music over the phone's data plan or upload pictures or videos directly to social media sites.

The same ease of use and convenience you get from a smartphone also holds true for a Converged Database.

The Oracle Database is an excellent example of a Converged Database, as it provides support for Machine Learning, Blockchain, Graph, Spatial, JSON, REST, Events, Editions, and IoT Streaming as part of the core database at no additional cost. It allows you to support many diverse projects using a single platform, significantly reducing complexity and management overhead while minimizing risk.
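
To make the idea concrete, here is a minimal python-oracledb sketch (not from the original post; the connection details and table are placeholders, assuming an Oracle Database 21c or later instance) that stores relational columns and a JSON document in the same table and queries both together:

import oracledb

# Placeholder connection details for a local database.
conn = oracledb.connect(user="demo", password="demo_pwd", dsn="localhost/freepdb1")
cur = conn.cursor()

# One table holds both relational columns and a native JSON document.
cur.execute("""
    CREATE TABLE payments (
        id      NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        amount  NUMBER(10, 2),
        details JSON
    )""")

cur.execute(
    "INSERT INTO payments (amount, details) VALUES (:1, JSON(:2))",
    [99.50, '{"channel": "mobile", "fraud_score": 0.02}'],
)
conn.commit()

# A relational predicate and a JSON path expression in the same query.
for row in cur.execute("""
        SELECT id, amount, JSON_VALUE(details, '$.channel')
        FROM   payments
        WHERE  JSON_VALUE(details, '$.fraud_score' RETURNING NUMBER) < 0.5"""):
    print(row)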

A converged database does not mean data must be in one monolithic database. A converged database enables you to choose your data architecture for each application. You decide when to consolidate data into fewer databases and when a fully distributed design is best.


Source: oracle.com

Wednesday, January 18, 2023

User Assistance Enhancements in 2022: Autonomous Database On Dedicated Exadata Infrastructure


Happy New Year, 2023! Greetings from the User Assistance Desk.


We at User Assistance for Autonomous Database on Dedicated Exadata Infrastructure worked continuously throughout 2022 to add enhancements to our technical documentation. This blog presents the ones that made Autonomous Database documentation easier to locate, consume, and try. Even though they are currently implemented only in Autonomous Database on Dedicated Exadata Infrastructure, they will be expanded to all database documentation over time, as appropriate.

Let's get started!

Revised Folder Structure


Autonomous Database articles are now reorganized into groups that are more intuitive to the audience! 

An article is a single-page publication (with scrolling) that contains all the topics needed to cover a specific customer need. Look into Quick Start to get up to speed with the service quickly and try out our LiveLabs, or dive into Features to explore a specific feature you're interested in. If you have already been using the service for some time and are looking for detailed instructions on implementing something, jump straight into How-to Guides and get going! Tasks guide you in using the service for a specific job, such as migration, cloning, development, user management, and many more. Get a peek at the latest announcements and typical workflows in Overview, and find reference materials in the Reference section. Try it!


Self-Contained Articles for Key Features


We now have articles that give you a holistic picture of the critical Autonomous Database features backed up with overview graphics, use cases, or lists wherever applicable.

Look into Key Security Features for a bird's eye view of the various security features supported by Autonomous Database and how they relate to each other. Check out the Service Maintenance article for in-depth information about Autonomous Database's change management capabilities. Explore the Migrate and Load articles for a comprehensive list of migration options mapped to the applicable use cases. Along with a big picture of a service feature, you will also find helpful pointers to the related how-to guides wherever appropriate. Isn't this valuable?

More and more features to follow this route!

Badges


Badges? Yes, badges. As our documentation covers the Autonomous Database on Oracle Public Cloud and Exadata Cloud@Customer (ExaCC) platforms, we devised badges to tag content that applies to only one of the platforms:

◉ APPLIES TO: Oracle Public Cloud only
◉ APPLIES TO: Exadata Cloud@Customer only

Even though both platforms support a standard set of features for Autonomous Databases, there are a few exceptions where only one of these platforms supports a feature. For example, Changing the License Type on an Autonomous VM Cluster is supported only on the Exadata Cloud@Customer platform, whereas Using Operations Insights on Autonomous Database is only allowed on Oracle Public Cloud.

Tip: For the complete list of features supported exclusively by Oracle Public Cloud or Exadata Cloud@Customer platforms, refer to Feature Parity Between Oracle Public Cloud and Exadata Cloud@Customer Deployments.

Revised Oracle Help Center (OHC) Landing Pages


You may be a developer interested in content about developing applications with Autonomous Database, or a fleet administrator trying to perform administrative tasks for your service. Finding user role-specific tasks for Autonomous Database has gotten easier: you can now see user-role-specific tiles on the Autonomous Database landing pages in Oracle Help Center (OHC). Click the tile you're interested in, and the content that applies to that specific user role is loaded! We hope this is useful; try it out the next time you need it.

Tip: See User Roles Associated with Autonomous Database to understand the different user roles.


This is not an exhaustive list of enhancements implemented in 2022 but an attempt to capture what's happening in the User Assistance area for the Autonomous Database.

Source: oracle.com

Friday, January 13, 2023

Configure OCI DataFlow Interactive Notebook & Access Autonomous DataWarehouse & Object Storage

Oracle Cloud Infrastructure (OCI) Data Flow is a fully managed big data service that lets you run Apache Spark applications at any scale with almost no administration. Spark has become the leading big data processing framework and OCI Data Flow is the easiest way to run Spark in Oracle Cloud because there’s nothing for developers to install or manage.

Now perform your analytics on OCI DataFlow using Interactive Data Science Notebook:


OCI Data Flow now comes with an interactive OCI Data Science notebook. Users can seamlessly enable it and start exploring their data from the Data Science notebook, an experience that leverages the power of Apache Spark. Here, we will see how easily an OCI Data Flow environment can be created on OCI using Conda. We will create an OCI Data Science session and notebook and access the OCI Data Flow Spark platform through a Livy session. You can submit fault-tolerant Spark jobs from the notebook using synchronous and asynchronous methods and retrieve the output.

OCI Data Flow with the interactive OCI Data Science notebook also introduces SparkMagic commands, adding its own flavors and upgrades. SparkMagic allows interactive communication with Spark using Livy through the `%%spark` magic directive within a JupyterLab code cell.

The purpose of this document is to walk you through the setup required to access OCI Data Flow Sessions through a Data Science Notebook Session. These sessions let you run interactive Spark workloads on a long-lasting Data Flow cluster through an Apache Livy integration.

Also, once the OCI Data Flow Spark session is created, we will go through some sample code for performing Spark operations on OCI Object Storage and Autonomous Data Warehouse.

Features & Benefits:

◉ Use an interactive Jupyter notebook with OCI Data Flow.
◉ Data Flow Sessions support auto-scaling Data Flow cluster capabilities. Users can enable auto-scaling while creating a Spark-Livy session, or after the session is created, from the notebook.
◉ Data Flow Sessions support the use of conda environments as customizable Spark runtime environments.
◉ Import open source libraries and start using them after building conda environments.
◉ Use Spark Delta Lake in OCI Data Flow and perform ACID transactions.

Use Interactive OCI Data Science Notebook with OCI Data Flow:


To use the OCI Data Science notebook with OCI Data Flow, follow the steps below:

◉ Create required buckets 
    ◉ Create a bucket named dataflow-logs in your tenancy. 
    ◉ Create a bucket named dataflow-warehouse in your tenancy. 
◉ Create a dynamic group in a specific compartment:

ALL {resource.type='dataflowrun', resource.compartment.id='<compartment_id>'}

ALL {resource.type='datasciencenotebooksession', resource.compartment.id='<compartment_id>'}

Any {resource.type = 'datacatalogmetastore'}

◉ Create a policy to manage OCI resources from OCI data flow, Data Science:

ALLOW DYNAMIC-GROUP <df-dynamic-group> TO MANAGE objects IN TENANCY WHERE ANY
{target.bucket.name='<bucket_name>',
target.bucket.name='dataflow-logs',
target.bucket.name='dataflow-warehouse'
}

ALLOW DYNAMIC-GROUP '<ds-dynamic-group>' TO MANAGE dataflow-family
in compartment '<your-compartment-name>'

ALLOW DYNAMIC-GROUP '<df-dynamic-group>' TO MANAGE data-catalog-metastores IN TENANCY

ALLOW DYNAMIC-GROUP '<dcat-hive-group>' TO READ buckets IN TENANCY

ALLOW DYNAMIC-GROUP '<dcat-hive-group>' TO MANAGE object-family IN TENANCY WHERE ANY
{ target.bucket.name = '<bucket_name>',
target.bucket.name = '<managed-table-location-bucket>',
target.bucket.name = '<external-table-location-bucket>'
}

ALLOW DYNAMIC-GROUP '<ds-dynamic-group>' TO MANAGE objects IN TENANCY
WHERE ALL {target.bucket.name='ds-conda-env'}

ALLOW DYNAMIC-GROUP '<df-dynamic-group>' TO MANAGE objects IN TENANCY
WHERE ALL {target.bucket.name='ds-conda-env'}

◉ Create OCI Data Science Project & Session.


◉ Open a new OCI Data Science session. From the File menu, choose New Launcher and click Terminal.


◉ Install and activate the pyspark32_p38_cpu_v1  conda environment from your terminal:

odsc conda install -s pyspark32_p38_cpu_v1
source activate /home/datascience/conda/pyspark32_p38_cpu_v1

◉ Once the conda environment is activated, go to the New Launcher tab and click Settings. Fill in the required information about the Object Storage location where the conda package will be uploaded, and save it.


◉ Now, publish the conda environment:

odsc conda publish -s pyspark3_2anddataflowv1_0

Note: Publishing will take some time. Once it completes, you can see the conda package uploaded in the Object Storage bucket.

◉ Open a notebook from the New Launcher using "PySpark and DataFlow" as the kernel.
◉ Execute the commands below to set up and create a Data Flow Spark session using the Livy service:

1. Set up authentication using ADS

import ads
ads.set_auth("resource_principal") # Supported values: resource_principal, api_key

2. Load Extension

%load_ext dataflow.magics

3. Create OCI DataFlow Spark Session using Livy:

#Create an OCI Data Flow session using the Livy service through the OCI Data Science notebook.
import json
command = {
    "compartmentId": "ocid1.compartment.oc1..xxxxxxxxxxxxxx",
    "displayName": "Demo_DataFlow_Spark_v1",
    "sparkVersion": "3.2.1",
    "driverShape": "VM.Standard.E3.Flex",
    "executorShape": "VM.Standard.E3.Flex",
    "driverShapeConfig":{"ocpus":1,"memoryInGBs":16},
    "executorShapeConfig":{"ocpus":1,"memoryInGBs":16},
    "numExecutors": 1,
    "logsBucketUri": "<oci://bucket@namespace/>",
    "archiveUri": "<oci://bucket@namespace/archive.zip>"
    "configuration":{"spark.archives":"<oci://bucket@namespace/>#conda",
                     "spark.oracle.datasource.enabled":"true"}
}

command = f'\'{json.dumps(command)}\''
print("command",command)

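#Optional: to enable auto-scaling, merge the configuration block below into the command above: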
#"configuration":{
#    "spark.dynamicAllocation.enabled":"true",
#    "spark.dynamicAllocation.shuffleTracking.enabled":"true",
#    "spark.dynamicAllocation.minExecutors":"1",
#    "spark.dynamicAllocation.maxExecutors":"4",
#    "spark.dynamicAllocation.executorIdleTimeout":"60",
#    "spark.dynamicAllocation.schedulerBacklogTimeout":"60",
#    "spark.dataflow.dynamicAllocation.quotaPolicy":"min" }}'

%create_session -l python -c $command


The OCI Data Flow Spark session is created. Next, we will go through some sample code for performing Spark operations on:


1. OCI Object Storage
2. OCI Autonomous DataWarehouse

◉ Now you can import dependent libraries into the session:

%%spark
#Import required libraries.

import json
import os
import sys
import datetime
import oci
import pyspark.sql 
from pyspark.sql.functions import countDistinct

from delta.tables import *

◉ Perform a Spark read operation on Object Storage.

Read a file from Object Storage using spark.read in the Livy session:

%%spark -o df_Bronze_Insurance_Data

#Read Claim Insurance files from OCI Object Storage in Spark Dataframe.

df_Bronze_Insurance_Data = spark.read.format("csv").option("header", "true") \
.option("multiLine", "true").load("oci://test-demo@OSNamespace/insur_claim/claim.csv*")

print("df_RawZone_Data",df_Bronze_Insurance_Data)
df_Bronze_Insurance_Data.show(5)
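
You can also run further Spark transformations in the same session. As a quick illustration using the countDistinct function imported earlier (REGION and CLAIM_ID are assumed column names, not taken from the sample file):

%%spark
# Hypothetical aggregation; REGION and CLAIM_ID are assumed column names.
df_Bronze_Insurance_Data.groupBy("REGION") \
    .agg(countDistinct("CLAIM_ID").alias("distinct_claims")) \
    .show()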


◉ Perform a Spark write operation on Object Storage.

%%spark
df_Bronze_Insurance_Data.write.format("json").option("mode","overwrite").save("oci://test-demo@OSNamespace/insur_claim/claim_curated")
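
Because the read cell earlier used %%spark -o df_Bronze_Insurance_Data, SparkMagic (in its standard behavior) also mirrors that DataFrame into the local notebook kernel as a pandas DataFrame, so you can inspect it without another Spark round trip:

# Plain (local) notebook cell, not a %%spark cell
df_Bronze_Insurance_Data.head()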

◉ Perform read and write operations on Autonomous Data Warehouse:

    ◉ Load data into ADW using a Secret Vault for the wallet:

Copy the code below as-is.

Reference: https://github.com/oracle-samples/oracle-dataflow-samples/tree/main/python/loadadw

%%spark

def get_authenticated_client(token_path, client, file_location=None, profile_name=None):
    """
    Get an authenticated OCI client.
    Example: get_authenticated_client(token_path, oci.object_storage.ObjectStorageClient)
    """
    import oci

    if not in_dataflow():
        # We are running locally, use our API Key.
        if file_location is None:
            file_location = oci.config.DEFAULT_LOCATION
        if profile_name is None:
            profile_name = oci.config.DEFAULT_PROFILE
        config = oci.config.from_file(file_location=file_location, profile_name=profile_name)
        authenticated_client = client(config)
    else:
        # We are running in Data Flow, use our Delegation Token.
        with open(token_path) as fd:
            delegation_token = fd.read()
        signer = oci.auth.signers.InstancePrincipalsDelegationTokenSigner(
            delegation_token=delegation_token
        )
        authenticated_client = client(config={}, signer=signer)
    return authenticated_client

def get_password_from_secrets(token_path, password_ocid):
    """
    Get a password from the OCI Secrets Service.
    """
    import base64
    import oci

    secrets_client = get_authenticated_client(token_path, oci.secrets.SecretsClient)
    response = secrets_client.get_secret_bundle(password_ocid)
    base64_secret_content = response.data.secret_bundle_content.content
    base64_secret_bytes = base64_secret_content.encode("ascii")
    base64_message_bytes = base64.b64decode(base64_secret_bytes)
    secret_content = base64_message_bytes.decode("ascii")
    return secret_content

def get_delegation_token_path(spark):
    """
    Get the delegation token path when we're running in Data Flow.
    """
    if not in_dataflow():
        return None
    token_key = "spark.hadoop.fs.oci.client.auth.delegationTokenPath"
    token_path = spark.sparkContext.getConf().get(token_key)
    if not token_path:
        raise Exception(f"{token_key} is not set")
    return token_path    

def get_temporary_directory():
    if in_dataflow():
        return "/opt/spark/work-dir/"
    else:
        import tempfile
        return tempfile.gettempdir()

def in_dataflow():
    """
    Determine if we are running in OCI Data Flow by checking the environment.
    """
    if os.environ.get("HOME") == "/home/dataflow":
        return True
    return False

def download_wallet(spark, wallet_path):
    """
    Download an ADW/ATP wallet file and prepare it for use in a Data Flow
    application.
    """
    import oci
    import zipfile

    # Get an object store client.
    token_path = get_delegation_token_path(spark)
    object_store_client = get_authenticated_client(
        token_path, oci.object_storage.ObjectStorageClient
    )

    # Download the wallet file.
    from urllib.parse import urlparse
    parsed = urlparse(wallet_path)
    bucket_name, namespace = parsed.netloc.split("@")
    file_name = parsed.path[1:]
    response = object_store_client.get_object(namespace, bucket_name, file_name)
    temporary_directory = get_temporary_directory()
    zip_file_path = os.path.join(temporary_directory, "wallet.zip")
    with open(zip_file_path, "wb") as fd:
        for chunk in response.data.raw.stream(1024 * 1024, decode_content=False):
            fd.write(chunk)

    # Extract everything locally.
    with zipfile.ZipFile(zip_file_path, "r") as zip_ref:
        zip_ref.extractall(temporary_directory)

    # Distribute all wallet files.
    contents = "cwallet.sso ewallet.p12 keystore.jks ojdbc.properties sqlnet.ora tnsnames.ora truststore.jks".split()
    spark_context = spark.sparkContext
    for file in contents:
        spark_context.addFile(os.path.join(temporary_directory, file))

    return temporary_directory

◉ Set the parameters below for your ADW instance and wallet.

%%spark
PASSWORD_SECRET_OCID = "ocid1.vaultsecret.oc1.phx.xxxxxxx"
TARGET_TABLE = "ADMIN.TB_NAME"
TNSNAME = "demolakehouseadw_medium"
USER = "admin"
WALLET_PATH = "oci://bucketname@osnamespace/Wallet_DemoLakeHouseADW.zip"

# Download and distribute our wallet file.
wallet_path = download_wallet(spark, WALLET_PATH)
adw_url = "jdbc:oracle:thin:@{}?TNS_ADMIN={}".format(TNSNAME, wallet_path)

◉ Now, get the password using the secret service.

%%spark
# Get our password using the secret service.
print("Getting wallet password")
token_path = get_delegation_token_path(spark)
password = get_password_from_secrets(token_path, PASSWORD_SECRET_OCID)
print("Done getting wallet password")

# Save the results to the database.
print("Saving processed data to " + adw_url)
properties = {
    "driver": "oracle.jdbc.driver.OracleDriver",
    "oracle.net.tns_admin": TNSNAME,
    "password": password,
    "user": USER
}

◉ Read a sample table from ADW.

%%spark
SOURCE_TABLE = "ADMIN.RETAILPOS"
df_RetailPOS_15min = spark.read.jdbc(url=adw_url, table=SOURCE_TABLE, properties=properties)

◉ Load the above DataFrame into ADW.

%%spark

#Load into ADW:

TARGET_TABLE = "ADMIN.RETAILPOS_15MINUTES"
print("TARGET_TABLE : ",TARGET_TABLE)

# Write to ADW.
print("Write to ADW : ")
df_RetailPOS_15min.write.jdbc(url=adw_url, table=TARGET_TABLE, mode="Append", properties=properties)
print("Writing done to ADW : ")


Source: oracle.com

Wednesday, January 11, 2023

Calculate Distances Between Point Geometries in Database through OAC Dataset Definition

Oracle Analytics (OA) is equipped with the capability to use spatial database functions to process datasets containing geometries. One way to do this in Oracle Analytics is by editing the definition of a data source using spatial SQL syntax when you create the dataset from a database connection. Assuming you are connecting to an Oracle Database, you can complete spatial calculations such as distance, area, length, and many more by entering the corresponding SQL function from the SDO_GEOM package in the dataset definition.

In this blog, I will show how to calculate the distances between the hotels and Thai restaurants in some locations in the US using the SDO_DISTANCE function. With this function, users can get the distances of their warehouses to their customers, find the distances of the nearest banks from store locations, and calculate how far the schools are from residential houses. You may refer to this link for more details about this function in the Oracle Database.

Creating the database connection


First, you should connect to the database containing the metadata of the spatial datasets. If you’re connecting to an Oracle database, you may refer to this link. If you’re connecting to an autonomous database, you may refer to this link for more details.


Once you have created the connection to the database, you should be able to see the database when you create a dataset.


For our use case, I’m using an autonomous database connection which I named OA_HERE_MAPS. Select the database connection to open the database and create a dataset.

Creating the dataset


In creating a dataset from a database connection, we have the option to manually query the database tables or to first select the database schema to find the data tables that we want to use as the data source. For our use case, I’m using the data tables from the OA_HERE_MAPS schema.


Find the data tables from the database and drag or double-click a data table to open it. For our use case, I’m using the hotel POI data table found in the OA_HERE_MAPS schema.


Go to the data table and click on Edit Definition found on the upper right.


Editing the dataset definition using SQL editor


In the Edit Definition page, the default view shows the column selector interface, which enables users to select, drag, and delete the columns that they want to use for the dataset. For our use case, we need to use custom SQL functions, so we select Enter SQL, which shows the SQL editor containing the default statement for the current data table selection.


Once in the SQL editor, we can enter the SQL statement that selects the columns from the data tables and computes the spatial calculation as well. Here we will compute distance on point geometries of the hotels and restaurants in kilometers (KM) using SDO_DISTANCE.

SELECT t1.POI_ID as HOTEL_PO_ID, t1.NAME as HOTEL, t2.POI_ID as RESTO_PO_ID, t2.NAME as RESTO,
SDO_GEOM.SDO_DISTANCE(t1.GEOMETRY, t2.GEOMETRY, 0.005, 'unit=KM') distance_KM
FROM OA_HERE_MAPS.NTC_MAP_POI_HOTEL t1, OA_HERE_MAPS.NTC_MAP_POI_RESTAURANT t2
WHERE t1.ISO_COUNTRY_CODE='USA' AND t2.ISO_COUNTRY_CODE='USA' AND t2.CUISINE_ID=14

In our example, the output dataset is filtered using a WHERE clause to limit the query to the US for both data tables, and to Thai restaurants only (based on CUISINE_ID) for the restaurant POI data table.
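
Outside Oracle Analytics, you can also prototype the same query directly against the database. The following is a minimal python-oracledb sketch (the connection details are placeholders, and the 1-kilometer cutoff is illustrative, matching the map shown later):

import oracledb

# Placeholder ADW wallet connection details.
conn = oracledb.connect(
    user="admin", password="your_password", dsn="demoadw_medium",
    config_dir="/path/to/wallet", wallet_location="/path/to/wallet",
    wallet_password="your_wallet_password",
)

sql = """
SELECT t1.NAME AS hotel, t2.NAME AS resto,
       SDO_GEOM.SDO_DISTANCE(t1.GEOMETRY, t2.GEOMETRY, 0.005, 'unit=KM') AS distance_km
FROM   OA_HERE_MAPS.NTC_MAP_POI_HOTEL t1,
       OA_HERE_MAPS.NTC_MAP_POI_RESTAURANT t2
WHERE  t1.ISO_COUNTRY_CODE = 'USA'
  AND  t2.ISO_COUNTRY_CODE = 'USA'
  AND  t2.CUISINE_ID = 14
  AND  SDO_GEOM.SDO_DISTANCE(t1.GEOMETRY, t2.GEOMETRY, 0.005, 'unit=KM') < 1
"""

with conn.cursor() as cur:
    # Print the first ten hotel/restaurant pairs within 1 km of each other.
    for hotel, resto, km in cur.execute(sql).fetchmany(10):
        print(hotel, resto, round(km, 3))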


Click OK and edit the metadata as needed. In this case, we change the treatment for IDs from Measure to Attribute.


 Once done, name and save the dataset.


Finally, we can now create map visualizations using the created dataset. Below is an example of a map visualization showing the Thai restaurants within 1-kilometer distance from the selected hotels.


In this blog, we have learned how to calculate distances on point geometries by editing the definition of a data source using the SQL syntax for SDO_DISTANCE when creating the dataset from a database connection. This function can be used with other spatial functions in the Oracle Database for other use cases such as getting the area and perimeter of land parcels, finding the nearest ATM machines to office buildings, and calculating the length of the streets from one point to another.

Source: oracle.com

Sunday, January 8, 2023

Restaurant Industry Salary Analysis by Role using Oracle Analytics Cloud

This article describes how you can use an Oracle Analytics Cloud (OAC) workbook to analyze trends and shifts in payroll, worked hours, and self-identified health rates for restaurant industry employees based on their roles. By using these insights, managers can identify gaps and can adjust their decisions according to industry statistics and payroll standards. The workbook includes an image map, KPI tiles, and horizontal bar graphs to visualize trends and insights within the dataset and the custom background.  Note that the data in this workbook is for demonstration purposes only.


To begin, open a workbook in OAC. To use an image as a map, click the hamburger menu on the top-left corner of the page and select Console, then select Maps. Under Image Background, click the plus sign to add an image, or drag and drop the image to be used as a background. Once the image is available in the list, click the option icon (indicated by 3 dots) and select Create Map Layer to open the image as a map layer, then create polygons, lines, or points. Polygons are used in this workbook as shown in the following image.


In your dataset, assign a column to the polygons. For this image, each character in the picture represents a role at a restaurant, so the Role column from the dataset is assigned to this map. Select the respective column and from the options menu, select Location Details to select a map layer and see the match quality between your data column and the map layer polygons.


Now you can include the map layer in the workbook. Before starting to create visualizations, create custom calculations by right-clicking My Calculations, then selecting Add Calculation. Calculations required for this workbook include salary change, health rate, and work hours change from 2020 to 2021; for example, salary change can be expressed as (Salary 2021 - Salary 2020) / Salary 2020 * 100.

To use the image map layer, drag the role and required metrics onto the visualization window to be shown in the image. Add the appropriate metrics in the tooltips section. When users hover over the image map, they can see the metrics for each role. To make the polygons transparent, set the transparency to 100 in the grammar menu.


To visualize the average annual salary, total worked hours, and salary per hour for each year, use a tile visualization to present each of these KPIs.

Use horizontal bar charts to present the changes in annual salary, worked hours, and health rates between 2020 and 2021 using the custom calculations. For the Annual Change in Work Hours visualization, add a reference line to show the median change and to help with comparisons. Right-click the chart and from the menu, select Add Statistics and Reference Line. In the grammar menu on the left pane, change the function to average, median, or other appropriate values.


To match the workbook colors with the image map, use a custom dark background image. At the bottom of the workbook, click Canvas Properties, select Custom for the background, and select the desired background color. The following image shows the workbook using a dark background color from the Fill Color section and an uploaded background image.


Use custom colors for bar charts by right-clicking the chart, selecting the colors, and assigning a custom color to each metric.


Lastly, use text boxes for the workbook’s header and footer.

This has been a quick tour through creating a fun, eye-catching workbook that helps restaurant managers understand employee activity, which in turn helps them better support and manage their staff.

Source: oracle.com

Friday, January 6, 2023

Drive innovation with Oracle Analytics Cloud and OCI Vision image classification

Example of brain MRI pictures used in Oracle Analytics to recognize tumors based on an OCI Vision model.

Oracle Analytics Cloud (OAC) is integrated with Oracle Cloud Infrastructure (OCI) Vision, an AI service for image analysis at scale. One of its capabilities is image classification. OCI Vision classifies images and can assign labels to pictures to detect objects and changes in patterns or anomalies.

OCI allows the use of pretrained models. You can create custom models, upload data, and train an image classification model, then review the custom model's metrics and apply the model in Oracle Analytics Cloud on a new set of images. This drives innovation and quickly integrates the results into an analytics platform. You can use OAC and OCI Vision together for many industry use cases. For example, you can see in this video how Children's Medical Research Institute can more quickly analyze microscope images, significantly reducing their simulation time and increasing the speed at which they can drive progress.

This blog describes some steps you can take to get the benefits of using OAC and OCI Vision in a low-code/no-code setting.  You can achieve more advanced use cases for mpMRI using the PyTorch framework MONAI (Medical Open Network for AI).

Create a bucket in OCI Object Storage:

1. Connect to OCI at this URL: https://www.oracle.com/cloud/sign-in.html
2. Click the menu option, then Storage, and create a Bucket.
3. Upload in the bucket all the images needed to train and test your model.
4. Make your bucket public for now.
5. Label your pictures and place the JSON file with labels in your bucket.

For example, you might upload 3,000 pictures showing brain tumors and label each picture with the type of brain tumor and save those to the JSON file.
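
For instance, the training images and the labels file can be uploaded with the OCI Python SDK instead of the console. The following is a sketch with placeholder bucket, file, and object names:

import oci

# Uses the default profile in ~/.oci/config.
config = oci.config.from_file()
object_storage = oci.object_storage.ObjectStorageClient(config)
namespace = object_storage.get_namespace().data

bucket = "brain-mri-images"  # placeholder bucket name

# Upload one training image.
with open("mri_001.jpg", "rb") as f:
    object_storage.put_object(namespace, bucket, "train/mri_001.jpg", f)

# Upload the JSON labels file from step 5 the same way.
with open("labels.json", "rb") as f:
    object_storage.put_object(namespace, bucket, "labels.json", f)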

Create the OCI Vision model

1. Click the menu option, then Analytics & AI.
2. Under AI Services, click Vision.
3. Go to the Custom Models menu and click Projects.
4. Create a Project, then create a Model.
5. For the type of model, select Image Classification.
6. Choose existing dataset to train the model.
7. Select Object Storage and select your bucket.
8. Select your JSON file as the Data Labeling File.
9. Launch the training of the model, which can run for up to 24 hours.
10. Check the results of the model.

The following is an example of a model trained to recognize brain tumors. The model's recognition precision is approximately 0.89.

Example from OCI of training an AI model for image classification.

Apply the model to classify new images

1. Open OAC.
2. On the Homepage, click Register Model/Function.
3. Select OCI Vision Models and select your model.
4. Create a Data Flow.
5. Add the CSV file with your bucket URL. (See GitHub for an example.)
6. Add the Apply AI Model icon to apply the model to the images.
7. Save the data flow and Run it.

The data flow applies the model and labels the images. It creates a dataset with the result of the image classification for each image and indicates what type of tumor it recognized.

In the following example, the data flow connects to the Oracle Cloud AI model and analyzes the images. Then the AI model applies the label to the pictures resulting in a dataset including the URL and populated labels of the new images.  Beside each MRI brain picture, the AI model indicates the likelihood of a pituitary tumor and the percentage of confidence in the recommendation. 
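
If you prefer code over the OAC data flow, the same custom model can also be invoked with the OCI Python SDK. The following is a minimal sketch; the model OCID, namespace, bucket, and object names are placeholders:

import oci

config = oci.config.from_file()  # default ~/.oci/config profile
client = oci.ai_vision.AIServiceVisionClient(config)

details = oci.ai_vision.models.AnalyzeImageDetails(
    features=[
        oci.ai_vision.models.ImageClassificationFeature(
            max_results=5,
            model_id="ocid1.aivisionmodel.oc1..example",  # your custom model OCID
        )
    ],
    image=oci.ai_vision.models.ObjectStorageImageDetails(
        namespace_name="my-namespace",
        bucket_name="brain-mri-images",
        object_name="mri_001.jpg",
    ),
)

response = client.analyze_image(analyze_image_details=details)
for label in response.data.labels:
    # Each label carries the predicted class and the model's confidence.
    print(label.name, label.confidence)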

Example of a data flow with a bucket URL and the AI model applied to all the pictures.

Explore the results and get insights in Oracle Analytics

1. Open OAC and create a Workbook.
2. Add the dataset created by the data flow.
3. Create a visualization using the image plugin to see all the pictures from the bucket.
4. Use this image visualization as a filter.
5. Create a table to show all the columns of the dataset or create any data visualization.
6. Click an MRI brain Picture to analyze the type of tumors returned by the AI model.
7. For each picture, notice the percentage of confidence by type of label (for example, tumor type).

In the following image, OAC displays data visualizations that represent the confidence percentage or precision of the classification as a specific tumor based on the picture selected.

OAC workbook using the dataset from a data flow with an applied OCI Vision model.

The power of integration between Oracle Cloud Infrastructure AI services and Oracle Analytics is fascinating and has the potential to change the world in the coming years. The ability to build such models using the low-code/no-code interface and to share them in 5 minutes on an analytics platform enables faster exploration and research and helps medical professionals easily use more data.

Important Statement

This article provides a basic example of using OCI Vision and OAC. This example is for illustration and demonstration purposes only and isn't intended to replace any medical imagery analysis tool or official diagnosis recommendation made by a professional. Any data visualizations, machine learning models, or applications created using the Oracle tools are not approved for diagnostic use. Users need to obtain the proper regulatory compliance and approvals before using them for medical and diagnostic purposes.

Data Visualization Tips

◉ If you have issues seeing your images and the objects detected, enter this URL in the Google Chrome browser: chrome://flags/#block-insecure-private-network-requests.


◉ As a debugging test, you can also temporarily disable “Block insecure private network requests” if you have issues accessing your object storage.

Source: oracle.com