Friday, September 29, 2023

Mastering the Oracle 1Z0-129 Exam: Your One-Stop Guide to Success

To prepare well for the Oracle 1Z0-129 certification, give each exam topic thoughtful attention and an equal share of your study time.

Embarking on the Oracle 1Z0-129 certification journey is a significant step towards becoming proficient in Primavera P6 2021 Enterprise Project Portfolio Management. This comprehensive guide will walk you through the essential steps and strategies to excel in the 1Z0-129 exam.

Understanding the Oracle 1Z0-129 Exam

A. Purpose and Scope of Oracle 1Z0-129 Exam

The 1Z0-129 exam assesses your knowledge of Construction and Engineering concepts and technologies. It evaluates your ability to implement, manage, and troubleshoot these solutions.

B. Prerequisites

Before attempting the 1Z0-129 certification, it is crucial to have a foundational understanding of Construction and Engineering concepts and basic cloud computing knowledge.

C. Oracle 1Z0-129 Exam Format and Duration

The exam consists of multiple-choice questions that test both your theoretical knowledge and its practical application. You'll have three hours to complete it.

D. Registration Process

Registering for the 1Z0-129 exam is straightforward. Visit Oracle's official website, create an account, select the exam, and choose a convenient test center and date.

Preparing for Success in 1Z0-129 Exam

A. Setting Realistic Goals

Establish clear, attainable goals for your certification journey. Break down your study plan into manageable milestones.

B. Creating a Study Schedule

Devote dedicated time each day to study. Consistency is key. Allocate specific slots for theory, practical exercises, and review sessions.

C. Suggested Study Materials

Utilize a combination of Oracle's official documentation, online tutorials, and reputable study guides. Join forums and discussion groups to gain insights from experienced professionals.

D. Utilizing Oracle's Official Resources

Explore Oracle's comprehensive suite of resources, including documentation, whitepapers, and video tutorials. Leverage their cloud services for hands-on practice.

Test-Taking Strategies for Oracle 1Z0-129 Exam

A. Time Management Tips

Practice time-bound mock exams to enhance your speed and accuracy. Allocate a specific amount of time to each section of the exam.

B. Answering Multiple-Choice Questions

Read each question carefully, considering all options before selecting the best answer. Eliminate incorrect choices.

C. Tackling Practical Scenarios

For practical scenarios, follow a systematic approach. Analyze the problem, formulate a solution, and validate it before submission.

Final Review and Oracle 1Z0-129 Mock Exams

A. Comprehensive Review Checklist

Before the big day, revisit critical concepts, review your notes, and ensure you've covered all exam objectives.

B. Utilizing Oracle 1Z0-129 Practice Tests

Simulate exam conditions with practice tests. Assess your performance and focus on areas that need improvement.

C. Analyzing and Learning from Mistakes

Review incorrect answers from practice tests. Understand the rationale behind the correct answer to avoid similar mistakes in the exam.

Additional Tips for the Oracle 1Z0-129 Exam

A. Engage in Oracle 1Z0-129 Exam Hands-On Practice

Set up a virtual environment to gain practical experience with Oracle Construction and Engineering. Hands-on practice reinforces theoretical knowledge and increases your confidence.

B. Join Study Groups and Forums

Connect with fellow certification aspirants. Discussing concepts, sharing insights, and helping each other with doubts can significantly enhance your understanding.

C. Stay Updated with Oracle's Latest Offerings

Oracle frequently introduces new features and updates. Stay informed about these advancements to ensure your knowledge remains current and relevant.

D. Seek Feedback and Adapt Your Study Plan

Regularly assess your progress and be open to adapting your study plan. Seek feedback from practice tests and adjust your focus accordingly.

E. Building Confidence and Reducing Anxiety

Stay calm and confident. Engage in relaxation techniques and positive affirmations to alleviate exam-related stress.

Arrive early, well-prepared, and with all necessary documents. Follow instructions carefully and organize your time effectively during the exam.

Leveraging Your Oracle 1Z0-129 Certification for Career Growth

A. Updating Your Resume and LinkedIn Profile

Highlight your Oracle 1Z0-129 certification prominently on your resume and LinkedIn profile. This shows your expertise in Oracle Construction and Engineering solutions.

B. Exploring Job Opportunities

With your certification, you are eligible for various job roles such as Administrator, Cloud Solutions Architect, and Developer. Keep an eye on job portals and professional networks for relevant openings.

C. Networking with Industry Professionals

Attend industry events, webinars, and meetups to grow your professional network. Engage in conversations and exchange ideas with fellow professionals in the Oracle ecosystem.

D. Pursuing Advanced Oracle Certifications

Consider advancing your skills with higher-level Oracle certifications. This showcases your commitment to ongoing learning and enhances your career prospects.

Summary

Embarking on the Oracle 1Z0-129 certification journey is a significant endeavor. Following this comprehensive guide gives you the knowledge and strategies to excel in the exam. Remember, consistent effort and a structured study plan are crucial to success.

Fusion ERP Analytics adds support for Oracle Accounting Hub to create system of insights across accounting data

Unifying and consistently maintaining accurate accounting data for the purpose of analysis and reporting is a formidable challenge when dealing with multiple source applications and large volumes of data. Nonetheless, it is a necessary requirement for business agility, regulatory compliance, company growth, and maintaining a competitive advantage.

The introduction of Accounting Hub Analytics within Oracle Fusion ERP Analytics empowers finance organizations to establish a comprehensive system of insights for accounting data. This data is drawn from Oracle Accounting Hub sub-ledger applications, enabling organizations to gain valuable insights from both operational and financial data. Furthermore, the integration of data from Accounting Hub sub-ledger applications with the broader Oracle Fusion Cloud ERP application data forms a robust foundation for in-depth analysis.

Uncover operational and financial data connections


Accounting Hub acts as a bridge to general ledger (GL) data from diverse transactional sources, while preserving transaction context through associated supporting references. This supporting reference information is stored in Accounting Hub but not transferred to GL, and these details are often necessary to unearth valuable operational and financial insights.

Accounting Hub Analytics enables you to discover meaningful correlations among balances, journals, and granular sub-ledger transaction details, bolstered by supporting references, which allows for the swift detection of irregularities with end-to-end traceability. This empowers organizations to enhance operational efficiency, sharpen their decision-making capabilities, and work toward optimized outcomes.

Accounting Hub

Create a system of insights for all accounting data


Here are a few use cases on needs for Accounting Hub Analytics:

Industry-specific analytics: Financial services companies aim to assess risk and profitability using data from various sources. With Accounting Hub Analytics, banks can analyze booking trends for mortgages, car loans, deposits, and other offerings based on supporting reference details. Insurance companies also require insights into auto, home, life, and other insurance businesses, enabling them to delve into granular details.

Enterprise analytics and reporting platform: Multinational companies often expand through acquisitions without centralizing IT operations in their new subsidiaries. This can lead to difficulties in providing comprehensive, accurate, and timely enterprise-wide reports due to inconsistent reporting standards and data quality issues. By using Accounting Hub Analytics, organizations can leverage supporting references to reconcile management dimensions and create consolidated financial reporting.

Rapid onboarding of acquisitions: Consolidating IT functions after an acquisition can also leverage the advantages of Accounting Hub Analytics. Initially, upon acquiring a subsidiary, Accounting Hub can facilitate the seamless integration of the subsidiary's legacy Chart of Accounts (COA) with the primary ledger's COA. Accounting Hub Analytics can then provide various financial representations and reports, supporting the reconciliation process.

Attain a cross-functional view of your business


By combining Accounting Hub data with other Oracle Fusion Cloud ERP data such as projects, general ledger, payables, and receivables, exploration can be enriched without complex data integration. Oracle Fusion ERP Analytics seamlessly integrates with other Fusion Analytics prebuilt analytical solutions designed for various Oracle Cloud Applications, including human capital management (HCM), supply chain management (SCM), and customer experience (CX) to create a unified analytical data model that spans multiple departments and incorporates prebuilt shared dimensions.

With Oracle Fusion ERP Analytics, you can easily import data from third-party sources, including operational and digital systems. That data can be combined with accounting data to discover intricate patterns and gain invaluable insights that drive transformative strategies and empower data-driven decision-making across multifaceted domains.

Source: oracle.com

Thursday, September 28, 2023

Oracle Roving Edge Infrastructure now available in UK


We’re excited to announce the availability of Oracle Roving Edge Infrastructure in London, England. The expansion enables commercial users across the United Kingdom (UK) to distribute their cloud workloads beyond the data center.

How Roving Edge Works


Oracle Roving Edge Infrastructure extends Oracle Cloud Infrastructure (OCI) services to the edge in transportable form factors, with core cloud services designed as an extension of a customer’s OCI tenancy to distribute workloads in remote locations. The identity, network, compute, and storage services on Roving Edge nodes enable workload processing at the edge, where data is generated or consumed, giving you the following benefits:

  • Get faster insights: Compress time to results by processing data at its source; reduce round-trip latency to the cloud by running applications at the edge of networks; and analyze, transform, and filter data at the edge before moving it to the cloud for further processing.
  • Benefit from a unified user experience and tools: Build and test workloads in the cloud and deploy them to the edge using the OCI UI. Use the same CLI, familiar APIs, and a web UI with the same look and feel as the Oracle Cloud Console. Sync data between the edge and the cloud as needed with built-in data synchronization.
  • Extend cloud services to remote and austere environments: Portable, ruggedized devices support operations at the edge of the network and in disconnected areas. Optional military-grade ruggedized shock mount cases provide electromagnetic shielding, protection, and shock absorption.
  • Bring a rich portfolio of applications and solutions to the edge: Run Oracle Database, MySQL, WebLogic, GoldenGate, Oracle Application Server, artificial intelligence (AI) and machine learning (ML) solutions, smart anomaly detection with MSET, and more from Oracle and third parties.

Use Roving Edge Infrastructure to solve edge challenges such as latency, improving application performance and the user experience. Deploy the Roving Edge nodes at retail or remote locations to gather data, distribute application layers to the edge for performance enhancements, or deploy artificial intelligence (AI) and machine learning (ML) models. The AI and ML models can quickly analyze the data gathered to provide insights for business decisions or personalize customer experiences. Camera streams directed to Roving Edge Infrastructure nodes running AI and ML video processing applications can provide real-time security, safety, and business insights. Data gathered from internet of things (IoT) sensors can be processed on Roving Edge nodes by applying techniques, such as the multivariate state estimation technique (MSET), to improve operations, equipment uptime, reliability, manufacturing quality, and productivity.

Healthcare satellite locations, such as remote clinics and emergency vehicles, can utilize Roving Edge nodes to store and update client or patient records locally for quick access. Digital content, such as video and images, can be cached at the edge to reduce latency and improve customer experiences or business operations at edge locations. Data collected or updated at the edge can sync back to the OCI tenancy for central management and consistent data across all locations.

Roving Edge Devices (RED) and the Roving Edge Ultra


Oracle Roving Edge Infrastructure comes in two form factors.  The Roving Edge Device (RED) is a 2U rack mountable server with 40 OCPUs, 512 GB of RAM, 61 TB of raw NVMe storage, an embedded NVIDIA GPU, and high-speed networking.  An optional ruggedized 3U military grade (MIL-STD-810) shock mount case is available for the RED, providing electromagnetic shielding, protection, and shock-absorbing properties.

The Roving Edge Ultra is a lightweight, ultraportable, battery-operated device with 12-core Xeon D-based CPU, 96 GB of RAM, and 7.68 TB of raw storage.  Both devices can operate in disconnected, low bandwidth, and connected environments. Oracle Cloud Marketplace provides compatible solutions for both RED and Ultra devices from Oracle and third-party partners for rapid application deployment to devices.


Source: oracle.com

Wednesday, September 27, 2023

Channels in OCI Queue enable messaging fairness, ephemeral destinations, and more

Oracle Cloud Infrastructure (OCI) Queue is a serverless asynchronous messaging service designed to handle large volumes of data with multiple concurrent producers and consumers of messages. When multiple producers and consumers are processing messages within the same queue, challenges can arise, such as the following examples: 

◉ An aggressive producer or a sudden volume burst from one producer can impact the performance of the application and cause a significant processing delay for messages coming from other producers.

◉ Certain request-response messaging patterns need ephemeral destinations, which are short-lived queues meant for processing messages temporarily. When these messages are processed in bulk, creating and deleting the short-lived queues can significantly increase administrative overhead.

◉ When a queue has multiple consumers, individual consumers might be interested in consuming only certain types of messages or those coming from a specific producer, requiring developers to design a complex solution with multiple queues dedicated to each consumer type.

New: Channels in OCI Queue


To overcome these challenges, we’re announcing a new capability in OCI Queue: queue channels. Channels enable ephemeral destinations within a queue and can improve message processing fairness, enable message selection, and support request-response messaging patterns.

Messages published to a channel are available for consumption at both the queue and channel level. Channels don’t require explicit creation or deletion. While publishing a message, you can choose to specify a channel ID. All messages created with the same channel ID are published to the same channel. Channels are ephemeral: if no messages remain in a particular channel, the channel is deleted automatically. Channels share the same permissions, encryption method, and limits as their parent queue, but a queue can be configured with a channel capacity limit, restricting the maximum share of the queue’s overall capacity that a single channel can use at any time.

Enabling message processing fairness


When multiple producers are producing to the same queue, a sudden spike in message volume from one producer can cause a significant delay in processing messages from other producers. Sometimes, the application acts as a middleware for the end users, using a queue-per-end-user pattern, which requires the application to protect message consumption from skewing in favor of customers with higher traffic—the noisy neighbor problem.

In the following diagram, when the message producer (P1) starts producing messages in bulk, the messages from other producers (P2 or P3) can see a significant delay in processing.


To consume messages from all the producers evenly and ensure message processing fairness, you must maintain separate queues for each producer type and implement a load-balancing mechanism across them.


With the introduction of channels, producers can now publish messages to a specific channel. When a consumer consumes messages from a queue without specifying a channel ID, the messages are returned from a random channel that has available messages. This separation not only helps increase fairness and protect against sudden volume spikes, but also allows you to use long polling for more effective data consumption.


Enabling message selection

You can include an optional parameter in requests to select messages that were published with a matching channel ID. OCI Queue returns available messages only from one of the channels that match the ID.

If multiple channels’ IDs match the selection in a request, messages are returned from a random channel that has available messages. If messages are published to a specific channel, but a consumption request doesn’t include a channel, messages are returned from a random channel with available messages.

Enabling ephemeral queues for the request-response pattern

The request-response pattern is a well-established integration pattern that enables the sender application to send a request and provides a way for the receiver to correctly send a response back to the sender application. This pattern typically needs a short-lived queue or topic for the application to send responses to. Channels provide a simple solution that lets you create and delete multiple channels, which can be short-lived, without making any explicit CreateQueue or DeleteQueue API calls.

Multiple applications can send their requests to a single request queue, with a specific channel ID parameter set to uniquely identify the sender application. The receiver application can process the requests and send replies with the channel ID parameter to uniquely identify the sender that had published the message. The application that sent the request can then receive messages on the specific channel ID and correctly process the replies. When the message processes successfully and the consumer deletes it, the channel is automatically deleted if it has no more messages to process.


Getting started with OCI Queue channels


Queue creation and channel capacity allocation

When creating or updating a queue, you can specify the percentage allocation for a channel to limit the maximum amount of the queue’s overall capacity that the channel can use at any time. By default, the limit is set to 100%, which means a channel can take up the entire queue’s bandwidth. If you set it to 50%, a channel can take up only 50% of the queue’s bandwidth.


Sending messages to a specific channel

You can send messages to a specific channel. If the channel doesn’t already exist, it is created automatically when you specify its channel ID while sending the message. If no channel ID is specified, the message is sent to the parent queue.


Listing channels

View the list of channels from the Channels page.


Poll for messages from channels

From the Channels List page, poll messages from a random channel or from a specific channel.


Purge channel messages

You can purge messages from a specific channel using the Action menu of the specific channel.


Source: oracle.com

Monday, September 18, 2023

Oracle Analytics Cloud : Set up and configure Oracle Analytics Cloud environments using Terraform

Oracle Analytics Cloud (OAC) is a single and complete platform that empowers your entire organization to ask any question of any data, across any environment, on any device. It fits into your ecosystem, enabling analysis in the cloud while also providing easy access to any data source. The result? The best use of all your data, no matter where it is. Oracle Analytics Cloud provides a variety of options for intelligent analysis without being overwhelming to deploy and manage, making it an effective way to engage more people in analysis and extend your organization’s expertise.

When you first deploy OAC, you have a choice: you can provision and configure your OAC environment manually, or you can use Terraform to create and maintain it. Terraform is an infrastructure-as-code tool that allows you to build, change, and version your infrastructure using code techniques. It lets you define your applications and infrastructure in configuration files and stores your infrastructure's state so you can modify or update it later. You can use Terraform to create and maintain multiple OAC environments. Not only will it automate the manual steps of deployment, but it will also allow the consistent rollout of additional OAC instances as needed.

This blog uses OAC with a private endpoint as a sample architecture to describe how to create and set up these resources with Terraform. Based on your requirements, you can update the Terraform script to add or update network resources such as VCNs, subnets, routing tables, security lists, and so on.


For the above architecture, you will perform the following tasks with Terraform:

  • Set up OCI network resources for the OAC private endpoint (PE) environment.
  • Create a VCN, a public subnet for the Bastion host, and two separate private subnets for the database and OAC.
  • Oracle Autonomous Database (ADB) will be provisioned with the given configuration.
  • An OAC instance with a private access channel (PAC) will be provisioned with the provided configuration.
  • PAC egress IPs will be configured in the security list of the database private subnet (to establish connectivity between OAC and ADB).
  • A Bastion host will be set up to access private subnet resources (e.g., ADB and OAC) for maintenance purposes. (A Bastion host is a compute instance used to access private network resources from an external network such as the internet.)
  • The required routing and security rules will be configured so that the environment is ready to use.

Terraform Artifacts


These are the Terraform scripts that are provided in the "Deployment Scripts" section.

  • Variable.tf — Terraform variable file which contains all variable definitions.
  • Config.tf — Terraform config file which contains all variable values. Variables are grouped into different categories (e.g., VCN, subnet, ADB) and are self-explanatory. The sample values provided in the attached file are just an example.
  • Deployment.tf — Terraform resource file for the OCI environment, which sets up all network resources (including the VCN and subnets) and provisions all PaaS components such as the Autonomous Database, OAC, private access channel, and Bastion host. (For your reference, a Terraform resource for Oracle Database VM has also been provided as a commented section in Deployment.tf.)

How to Use these Terraform Artifacts


Prerequisites: Terraform should be installed on the machine where you are planning to execute these scripts.

  • Make a directory on the machine where Terraform is installed.
  • Copy the Variable.tf, Config.tf, and Deployment.tf files into the directory.
  • Provide all the values required in Config.tf for the provider, VCN, subnets, and PaaS resources.
  • Set resource_suffix_dev to “Dev” or “Test”, as this will be used in all resource names, e.g., Dev_Public_Subnet. You can change it based on your naming specifications.
  • Execute "terraform init" to initialize the working directory and download the required providers.
  • Execute "terraform plan -var-file=config.tfvars"
  • Execute "terraform apply -var-file=config.tfvars"

Execution of Terraform Script

Output: All network resources (VCN, Internet GW, Service GW, NAT GW, Subnets, Route Tables, Security Lists) and all PaaS components (OAC, PAC, ADB, and Bastion host) will be deployed for the OAC environment.

Test OAC Connectivity with Oracle Autonomous Database


Once the Terraform deployment completes, you can test the connectivity of OAC with your ADB instance. Follow these steps to make an OAC connection to ADB.

  • Log in to your OAC Instance.
  • In the upper right corner, click the “Create” button and select Oracle Autonomous Data Warehouse Cloud.
  • Enter the details as shown in the example by filling in the connection name, your username and password, and by uploading the ADB Wallet.zip file.
  • Click Save.
  • (To download the ADB wallet: 1. Navigate to the Oracle Autonomous Database instance. 2. Click Database connection. 3. Under Wallet type, make sure Instance Wallet is selected, then click Download wallet. 4. Enter a wallet password and click Download.)


Download Terraform artifacts from GitHub


You can download Terraform Artifacts from this GitHub Repository.

Source: oracle.com

Wednesday, September 13, 2023

Comparing Data Flows in Oracle Analytics Cloud with Spreadsheet-Based Tools

In many companies, IT departments play a crucial role in gathering and organizing data from disparate sources. Ideally, these companies would implement processes to handle large amounts of data and would have a tool to effectively prepare data for reporting and analysis. It’s difficult to have large amounts of data formatted and aggregated properly for every reporting use case. Many companies rely on spreadsheets to prepare data for reporting and analysis, yet spreadsheets present a few challenges:

  • Spreadsheets are often prone to data entry and formula errors.
  • They lack visibility into the transformations that have been applied.
  • They lack automation.
  • They introduce security concerns when emailed.
  • Spreadsheets don't support advanced statistical and machine learning capabilities.

Data flows in Oracle Analytics Cloud (OAC) enable users to leverage the benefits of spreadsheet-based tools while alleviating many of these challenges. The low-code environment makes it easy for existing spreadsheet users to build preparation workflows and to derive meaning from their data stores.

This article highlights some of the advanced capabilities provided by data flows, including the ability to track data transformations through the visual UI, automate workflow runs, share workflows with other users in a secure manner, and leverage advanced machine learning models with no coding experience.

Tracking Transformations


Data flows in OAC are designed so that each “step” implements a change to the dataset. It’s clear what steps were applied, along with the order in which they were applied. There are four main “categories” of data flow steps: data ingestion, data preparation, machine learning, and database analytics. Each of these categories offers a range of functions for data preparation. Data flows allow users to view a preview of the data after each step to verify that the transformations yield the desired result. Unlike spreadsheet-based tools, the data flow editor combined with the data preview makes it easy to track changes and observe how each transformation influences the resulting dataset.


Automation and Scheduling


Users of OAC can configure schedules on their data flows to control when and how often the data flow is run. This introduces a layer of automation to ensure that output datasets used in workbooks are up-to-date and reflect the information present in sources. You can configure data flow schedules to meet your business needs: for example, you can configure schedules to run at a certain time after your source database systems update. You can also configure schedules to run at hourly, daily, weekly, monthly, or yearly increments. Running data flows using schedules prevents users from having to manually run the workflows and introduces a layer of automation often lacking from spreadsheet tools. The following animated image illustrates how users can create and configure schedules for data flows.


Data Flow Sharing


Users in OAC can share data flows with other users and application roles, which presents several benefits related to collaboration and governance. The benefits and use cases related to data flow sharing include:

  • Teams can easily collaborate on single data preparation workflows, reducing duplicated efforts.
  • Sharing data flows allows for quality control and error reduction. Allowing multiple users to work on and/or have access to a data flow can potentially prevent data preparation mistakes from being overlooked.
  • If someone leaves the company or goes on vacation, the data flow is accessible to other users. The company doesn't have to create data flows from scratch.
  • Sharing data flows makes troubleshooting easier. If a user runs into an issue, they can share with someone else on their team for assistance.

Sharing data flows with users and application roles is relatively straightforward, as the following animated image illustrates.


Tailored to Citizen Data Scientists


If you're new to machine learning or don’t have much experience coding in Python or R, data flows in OAC offer the ability to easily prepare data and apply machine learning models in the same low-code workflow. No coding experience is required to leverage many of the data flow machine learning offerings. If you're interested in learning more about the machine learning offerings, check out this link. Many spreadsheet tools don't offer the wide range of machine learning algorithms that are provided in OAC.

Source: oracle.com


Friday, September 8, 2023

Multilingual Engine (MLE) for JavaScript in Oracle Database 23c


The multilingual engine (MLE) for JavaScript was first introduced as a beta option for Oracle 18c. It was published in 21c, with the ability to execute ad-hoc JavaScript using the DBMS_MLE package. Oracle 23c extends the multilingual engine with the addition of persistent MLE modules and call specs that allow modules to be called from SQL and PL/SQL.

This article intentionally keeps the JavaScript really simple, so it can demonstrate the basic mechanism of creating and calling MLE modules in the Oracle database. Teaching JavaScript is beyond the scope of this article.

Setup


We create a new test user.

conn sys/SysPassword1@//localhost:1521/freepdb1 as sysdba

drop user if exists testuser1 cascade;

create user testuser1 identified by testuser1 quota unlimited on users;
grant create session to testuser1;

To create MLE modules and the PL/SQL call specs we need the following two grants.

grant create mle to testuser1;
grant create procedure to testuser1;

If we want the ability to execute JavaScript, we must grant the following to our test user.

grant execute on javascript to testuser1;

If we want to run dynamic JavaScript using the DBMS_MLE package, we will need the following grant also.

grant execute dynamic mle to testuser1;

We'll add DB_DEVELOPER_ROLE as a catch-all.

grant db_developer_role to testuser1;

We check that the multilingual engine is enabled for our database. Setting the MULTILINGUAL_ENGINE parameter to DISABLE turns off the functionality at the PDB or CDB level, depending on where it is set.

SQL> show parameter multilingual_engine

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
multilingual_engine                  string      enable
SQL>
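If you need to turn the feature off, the MULTILINGUAL_ENGINE parameter can be changed at the CDB or PDB level. The following is a minimal sketch, assuming you are connected as a suitably privileged user and that the parameter can be modified with ALTER SYSTEM in your environment.

-- Disable the multilingual engine for the current container (CDB or PDB).
alter system set multilingual_engine = 'disable';

-- Re-enable it.
alter system set multilingual_engine = 'enable';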

Connect to the test user.

conn testuser1/testuser1@//localhost:1521/freepdb1

One of the examples needs access to the following table.

drop table if exists t1 purge;

create table t1 (
  id           number generated always as identity primary key,
  description  varchar2(20)
);

insert into t1 (description) values ('one'), ('two'), ('three');
commit;

Create MLE Modules


We create a JavaScript module using the CREATE MLE MODULE command. In this example we create a simple module to add two numbers together.

create or replace mle module math_mod language javascript as

export function add_numbers(num1, num2) {
  return(num1 + num2);
}
/

We create a call spec to allow the JavaScript module to be called from SQL and PL/SQL.

create or replace function add_numbers_fnc (
  num1 number,
  num2 number)
  return number
as mle module math_mod
signature 'add_numbers(number, number)';
/

We can now test it using a call from SQL or PL/SQL.

select add_numbers_fnc(1, 3);

ADD_NUMBERS_FNC(1,3)
--------------------
                   4

SQL>

In this example we create a module containing two functions. Notice only one of them is exported, so we have one public and one private function. We've used console.log to write output.

create or replace mle module math_mod language javascript as

export function add_numbers(num1, num2) {
  return do_the_work(num1, num2);
}

function do_the_work(num1, num2) {
  console.log('Doing some work');
  return(num1 + num2);
}
/

Alternatively, we could have exported the function like this.

create or replace mle module math_mod language javascript as

function add_numbers(num1, num2) {
  return do_the_work(num1, num2);
}

function do_the_work(num1, num2) {
  console.log('Doing some work');
  return(num1 + num2);
}

export { add_numbers }
/

We can use the same call spec, because only add_numbers is public.

create or replace function add_numbers_fnc (
  num1 number,
  num2 number)
  return number
as mle module math_mod
signature 'add_numbers(number, number)';
/

We turn on SERVEROUTPUT so we can see the message from console.log.

set serveroutput on
select add_numbers_fnc(1, 3);

ADD_NUMBERS_FNC(1,3)
--------------------
                   4

Doing some work
SQL>

The USER_MLE_MODULES view displays the modules in our schema.

column module_name format a20
column language_name format a20

select module_name, language_name
from   user_mle_modules;

MODULE_NAME          LANGUAGE_NAME
-------------------- --------------------
MATH_MOD             JAVASCRIPT

SQL>

The USER_SOURCE view allows us to display the contents of the module.

column text format a50

select line, text
from   user_source
where  type = 'MLE MODULE'
and    name = 'MATH_MOD'
order by line;

      LINE TEXT
---------- --------------------------------------------------
         1 function add_numbers(num1, num2) {
         2   return do_the_work(num1, num2);
         3 }
         4
         5 function do_the_work(num1, num2) {
         6   console.log('Doing some work');
         7   return(num1 + num2);
         8 }
         9
        10 export { add_numbers }

10 rows selected.

SQL>

We can load modules from files on the host file system. We create a directory object pointing to a directory holding our JavaScript module.

conn sys/SysPassword1@//localhost:1521/freepdb1 as sysdba

create or replace directory tmp_dir as '/tmp';
grant read on directory tmp_dir to testuser1;

conn testuser1/testuser1@//localhost:1521/freepdb1

We have a file called "/tmp/math_mod2.js" with the following contents.

function add_numbers(num1, num2) {
  return do_the_work(num1, num2);
}

function do_the_work(num1, num2) {
  console.log('Doing some work');
  return(num1 + num2);
}

export { add_numbers }

We create the module with a BFILE reference to the source file.

create or replace mle module math_mod2
language javascript using bfile(TMP_DIR,'math_mod2.js');
/

The source is loaded into the dictionary as a one-off operation. If the source file changes, it has to be loaded again manually.
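For example, if "/tmp/math_mod2.js" is edited, we can refresh the stored copy by re-running the same statement used to create the module.

-- Reload the module source from the file after it has changed.
create or replace mle module math_mod2
language javascript using bfile(TMP_DIR,'math_mod2.js');
/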

column text format a50

select line, text
from   user_source
where  type = 'MLE MODULE'
and    name = 'MATH_MOD2'
order by line;

      LINE TEXT
---------- --------------------------------------------------
         1 function add_numbers(num1, num2) {
         2   return do_the_work(num1, num2);
         3 }
         4
         5 function do_the_work(num1, num2) {
         6   console.log('Doing some work');
         7   return(num1 + num2);
         8 }
         9
        10 export { add_numbers }

10 rows selected.

SQL>

Once loaded, it can be used in the same way shown before.

create or replace function add_numbers_fnc2 (
  num1 number,
  num2 number)
  return number
as mle module math_mod2
signature 'add_numbers(number, number)';
/

set serveroutput on
select add_numbers_fnc2(1, 3);

ADD_NUMBERS_FNC2(1,3)
---------------------
                    4

Doing some work
SQL>

We can also assign a version to a module using the optional VERSION keyword.

create or replace mle module math_mod
language javascript
version '1.0'
as

export function add_numbers(num1, num2) {
  return(num1 + num2);
}
/

select version
from   user_mle_modules
where  module_name = 'MATH_MOD';

VERSION
--------------------------------------------------------------------------------
1.0

SQL>

MLE Environments


We are able to reuse modules by importing them into a new module. To do this we need to create an MLE environment.

We create a new environment importing the two modules we created previously.

drop mle env if exists math_env;

create mle env math_env imports ('MATH_MOD' module MATH_MOD, 'MATH_MOD2' module MATH_MOD2);

The USER_MLE_ENVS and USER_MLE_ENV_IMPORTS views display information about our environment.

select env_name
from   user_mle_envs;

ENV_NAME
--------------------------------------------------------------------------------
MATH_ENV

SQL>

column env_name format a10
column import_name format a12
column module_owner format a12
column module_name format a12

select env_name,
       import_name,
       module_owner,
       module_name
from   user_mle_env_imports;

ENV_NAME   IMPORT_NAME  MODULE_OWNER MODULE_NAME
---------- ------------ ------------ ------------
MATH_ENV   MATH_MOD     TESTUSER1    MATH_MOD
MATH_ENV   MATH_MOD2    TESTUSER1    MATH_MOD2

SQL>

We create a new module, importing one of the modules from our environment.

create or replace mle module imp_math_mod language javascript as

import * as mm from "MATH_MOD"

export function add_numbers(num1, num2) {
  return mm.add_numbers(num1, num2);
}
/

We create a call spec and execute the module.

create or replace function imp_add_numbers_fnc (
  num1 number,
  num2 number)
  return number
as mle module imp_math_mod
env math_env
signature 'add_numbers(number, number)';
/

set serveroutput on
select imp_add_numbers_fnc(1, 3);

IMP_ADD_NUMBERS_FNC(1,3)
------------------------
                       4

SQL>

Drop MLE Modules


We use the DROP MLE MODULE command to drop modules.

drop mle module if exists math_mod;
drop mle module if exists math_mod2;

Remember to drop any call specs also.

drop function if exists add_numbers_fnc;
drop function if exists add_numbers_fnc2;

MLE JavaScript SQL Driver


The MLE JavaScript SQL Driver allows JavaScript modules to interact with the database. As you can imagine, there is a lot of documentation related to this functionality (here), but we'll just give a simple example.

In this example we query some data from the T1 table based on the supplied ID value.

create or replace mle module sql_mod_1 language javascript as

function queryT1(id) {
  if (id === undefined) {
    throw "Parameter ID is mandatory.";
  }

  try {
    const result = session.execute(
      `SELECT id, description FROM t1 WHERE id = :id`,
      [ id ],
      { outFormat: oracledb.OUT_FORMAT_OBJECT }
    );

    if (result.rows.length > 0) {
      for (let row of result.rows) {
        console.log(`The query found a row : id=${row.ID} description=${row.DESCRIPTION}`);
      }
    } else {
      console.log(`No data found.`);
    }

  } catch (err) {
    console.error(`Error: ${err.message}`);
  }
}

export { queryT1 };
/

We create a call spec for the module and execute it.

create or replace procedure queryT1 (
  id number)
as mle module sql_mod_1 
signature 'queryT1(number)';
/

set serveroutput on
exec queryt1(2);

The query found a row : id=2 description=two

PL/SQL procedure successfully completed.

SQL>

Dynamic MLE Execution (DBMS_MLE)


In Oracle 21c dynamic MLE execution using the DBMS_MLE package was the main way to execute JavaScript in the database. With the inclusion of MLE modules in Oracle 23c, it is likely to be used far less frequently. We can think of the DBMS_MLE package as the JavaScript equivalent of the DBMS_SQL package used for dynamic SQL.

Here is a simple example of using the DBMS_MLE package to execute some JavaScript to add two numbers together.

set serveroutput on;
declare
  l_ctx     dbms_mle.context_handle_t;
  l_source  clob;
  l_num1    number := 1;
  l_num2    number := 3;
  l_output  varchar2(100);
begin
  l_ctx := dbms_mle.create_context();

  dbms_mle.export_to_mle(l_ctx, 'num1', l_num1);
  dbms_mle.export_to_mle(l_ctx, 'num2', l_num2);

  l_source := q'~
    (async () => {
       const bindings = await import("mle-js-bindings");
       const num1 = bindings.importValue("num1");
       const num2 = bindings.importValue("num2");
       const output = num1 + "+" + num2 + "=" + (num1+num2);
       bindings.exportValue("output", output);
     }
    )();
  ~';

  dbms_mle.eval(l_ctx, 'JAVASCRIPT', l_source);
  dbms_mle.import_from_mle(l_ctx, 'output', l_output);
  dbms_output.put_line(l_output);
  dbms_mle.drop_context(l_ctx);
end;
/
1+3=4

PL/SQL procedure successfully completed.

SQL>

The code block is executed as an asynchronous anonymous function. Alternatively, we can separate the function definition from its invocation.

set serveroutput on;
declare
  l_ctx     dbms_mle.context_handle_t;
  l_source  clob;
  l_num1    number := 1;
  l_num2    number := 3;
  l_output  varchar2(100);
begin
  l_ctx := dbms_mle.create_context();

  dbms_mle.export_to_mle(l_ctx, 'num1', l_num1);
  dbms_mle.export_to_mle(l_ctx, 'num2', l_num2);

  l_source := q'~
    async function dbms_mle_example() {
      const bindings = await import("mle-js-bindings");
      const num1 = bindings.importValue("num1");
      const num2 = bindings.importValue("num2");
      const output = num1 + "+" + num2 + "=" + (num1+num2);
      bindings.exportValue("output", output);
    }

    dbms_mle_example();
  ~';

  dbms_mle.eval(l_ctx, 'JAVASCRIPT', l_source);
  dbms_mle.import_from_mle(l_ctx, 'output', l_output);
  dbms_output.put_line(l_output);
  dbms_mle.drop_context(l_ctx);
end;
/

Thoughts


The Multilingual Engine (MLE) is all about choice. Most long-term users of Oracle will be comfortable with SQL and PL/SQL, and probably won't consider using JavaScript in the database on a regular basis, in much the same way those users didn't adopt Java in the database when it was introduced in Oracle 8i.

If a new user comes to Oracle with existing JavaScript skills, they can choose to use those skills rather than focussing on PL/SQL. There may also be some tasks that are easier to achieve using existing JavaScript modules.

So JavaScript is not a replacement for PL/SQL. It is simply another option for developers.

Source: oracle-base.com

Monday, September 4, 2023

Oracle REST Data Services (ORDS) : AutoREST of JSON-Relational Duality Views


This article gives an overview of the AutoREST functionality of Oracle REST Data Services (ORDS) against JSON-relational duality views in Oracle 23c.

◉ Create a Test Database User


We need a new database user for our testing.

conn sys/SysPassword1@//localhost:1521/freepdb1 as sysdba

drop user if exists testuser2 cascade;
create user testuser2 identified by testuser2
  default tablespace users quota unlimited on users;
  
grant connect, resource to testuser2;

Create and populate a copy of the EMP and DEPT tables.

conn testuser2/testuser2@//localhost:1521/freepdb1

drop table if exists emp purge;
drop table if exists dept purge;

create table dept (
  deptno number(2) constraint pk_dept primary key,
  dname varchar2(14),
  loc varchar2(13)
) ;

create table emp (
  empno number(4) constraint pk_emp primary key,
  ename varchar2(10),
  job varchar2(9),
  mgr number(4),
  hiredate date,
  sal number(7,2),
  comm number(7,2),
  deptno number(2) constraint fk_deptno references dept
);

create index emp_dept_fk_i on emp(deptno);

insert into dept values (10,'ACCOUNTING','NEW YORK');
insert into dept values (20,'RESEARCH','DALLAS');
insert into dept values (30,'SALES','CHICAGO');
insert into dept values (40,'OPERATIONS','BOSTON');

insert into emp values (7369,'SMITH','CLERK',7902,to_date('17-12-1980','dd-mm-yyyy'),800,null,20);
insert into emp values (7499,'ALLEN','SALESMAN',7698,to_date('20-2-1981','dd-mm-yyyy'),1600,300,30);
insert into emp values (7521,'WARD','SALESMAN',7698,to_date('22-2-1981','dd-mm-yyyy'),1250,500,30);
insert into emp values (7566,'JONES','MANAGER',7839,to_date('2-4-1981','dd-mm-yyyy'),2975,null,20);
insert into emp values (7654,'MARTIN','SALESMAN',7698,to_date('28-9-1981','dd-mm-yyyy'),1250,1400,30);
insert into emp values (7698,'BLAKE','MANAGER',7839,to_date('1-5-1981','dd-mm-yyyy'),2850,null,30);
insert into emp values (7782,'CLARK','MANAGER',7839,to_date('9-6-1981','dd-mm-yyyy'),2450,null,10);
insert into emp values (7788,'SCOTT','ANALYST',7566,to_date('13-JUL-87','dd-mm-rr')-85,3000,null,20);
insert into emp values (7839,'KING','PRESIDENT',null,to_date('17-11-1981','dd-mm-yyyy'),5000,null,10);
insert into emp values (7844,'TURNER','SALESMAN',7698,to_date('8-9-1981','dd-mm-yyyy'),1500,0,30);
insert into emp values (7876,'ADAMS','CLERK',7788,to_date('13-JUL-87', 'dd-mm-rr')-51,1100,null,20);
insert into emp values (7900,'JAMES','CLERK',7698,to_date('3-12-1981','dd-mm-yyyy'),950,null,30);
insert into emp values (7902,'FORD','ANALYST',7566,to_date('3-12-1981','dd-mm-yyyy'),3000,null,20);
insert into emp values (7934,'MILLER','CLERK',7782,to_date('23-1-1982','dd-mm-yyyy'),1300,null,10);
commit;

Create a JSON-relational duality view against the base tables. You can get more information about JSON-relational duality views here. We've purposely excluded some of the optional columns to make things a little simpler.

drop view if exists department_dv;

create json relational duality view department_dv as
select json {'departmentNumber' : d.deptno,
             'departmentName'   : d.dname,
             'location'         : d.loc,
             'employees' :
               [ select json {'employeeNumber' : e.empno,
                              'employeeName'   : e.ename,
                              'job'            : e.job,
                              'salary'         : e.sal}
                 from   emp e with insert update delete
                 where  d.deptno = e.deptno ]}
from dept d with insert update delete;

Notice the view references the departments table, but includes a list of all employees in the department. So this maps to a real-world object, not just a single table.
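
To sanity-check the documents the view produces before exposing them over REST, you can query the view directly in SQL. The following is a minimal sketch, assuming the duality view exposes its JSON documents through the usual DATA column.

set long 1000000
set pagesize 1000

-- Pretty-print the document for department 10.
-- Assumes the duality view exposes its documents via the DATA column.
select json_serialize(d.data pretty)
from   department_dv d
where  json_value(d.data, '$.departmentNumber' returning number) = 10;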

◉ Enable ORDS and AutoREST


Enable REST web services for the test schema. We can use any unique and legal URL mapping pattern for the schema, so we don't have to expose the schema name. In this case we use "hr" as the schema alias.

conn testuser2/testuser2@//localhost:1521/freepdb1

begin
  ords.enable_schema(
    p_enabled             => TRUE,
    p_schema              => 'TESTUSER2',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'hr',
    p_auto_rest_auth      => FALSE
  );
    
  commit;
end;
/

Web services from the schema can now be referenced using the following base URL.

http://localhost:8080/ords/hr/

The final step is to enable AutoREST for the JSON-relational duality view. This is done in the same way as enabling AutoREST for any other view.

begin
  ords.enable_object (
    p_enabled      => TRUE, -- Default  { TRUE | FALSE }
    p_schema       => 'TESTUSER2',
    p_object       => 'DEPARTMENT_DV',
    p_object_type  => 'VIEW', -- Default  { TABLE | VIEW }
    p_object_alias => 'departments'
  );
    
  commit;
end;
/

Notice the object is called DEPARTMENT_DV, but we want the web service to refer to it as "departments", hence the object alias. To disable AutoREST repeat the call with the P_ENABLED parameter set to FALSE.
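
For reference, a sketch of the disabling call, repeating the same parameters with P_ENABLED set to FALSE.

begin
  ords.enable_object (
    p_enabled      => FALSE, -- Disable AutoREST for the object.
    p_schema       => 'TESTUSER2',
    p_object       => 'DEPARTMENT_DV',
    p_object_type  => 'VIEW',
    p_object_alias => 'departments'
  );

  commit;
end;
/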

We are now ready to start.

◉ GET Web Services (READ)


By default browsers use the GET method for HTTP calls, so the following URLs can be called from a browser URL bar.

The following URLs return JSON documents containing metadata about the objects in the test schema and the structure of the specified object respectively.

Available Objects : http://localhost:8080/ords/hr/metadata-catalog/
Object Description: http://localhost:8080/ords/hr/metadata-catalog/departments/

There are a variety of ways to query data from an AutoREST enabled table or view. The following URL returns all the data from the DEPARTMENT_DV view. Remember, the object alias was set to "departments".

http://localhost:8080/ords/hr/departments/

The data for an individual row is returned using the primary key value. A comma-separated list of values is used for composite keys.

http://localhost:8080/ords/hr/departments/10

It's possible to page through data using the offset and limit parameters. The following URL returns a page of 2 rows of data from the DEPARTMENT_DV view, starting at row 3.

http://localhost:8080/ords/hr/departments/?offset=2&limit=2

There are a variety of operators that can be used to filter the data returned from the object. Depending on your client, you may need to encode parts of the URI.

# departmentName = 'SALES'
Normal : http://localhost:8080/ords/hr/departments/?q={"items.departmentName":"SALES"}
Encoded: http://localhost:8080/ords/hr/departments/?q=%7B%22departmentName%22:%22SALES%22%7D

# departmentNumber >= 30
Normal : http://localhost:8080/ords/hr/departments/?q={"departmentNumber":{"$gte":30}}
Encoded: http://localhost:8080/ords/hr/departments/?q=%7B%22departmentNumber%22:%7B%22$gte%22:30%7D%7D

# departmentName = 'SALES' AND departmentNumber >= 30
Normal : http://localhost:8080/ords/hr/departments/?q={"departmentName":"SALES","departmentNumber":{"$gte":30}}
Encoded: http://localhost:8080/ords/hr/departments/?q=%7B%22departmentName%22:%22SALES%22,%22departmentNumber%22:%7B%22$gte%22:30%7D%7D

◉ POST Web Services (INSERT)


New records are created using the POST method. The URL, method, header and payload necessary to do this are displayed below.

URL        : http://localhost:8080/ords/hr/departments/
Method     : POST
Header     : Content-Type: application/json
Raw Payload:
{
  "departmentNumber" : 50,
  "departmentName" : "DBA",
  "location" : "BIRMINGHAM",
  "employees" : [
    {
      "employeeNumber" : 9999,
      "employeeName" : "HALL",
      "job" : "CLERK",
      "salary" : 500
    }
  ]
}

If the payload is placed in a file called "/tmp/insert-payload.json", the following "curl" command will add a department via the DEPARTMENT_DV view.

$ curl -i -X POST --data-binary @/tmp/insert-payload.json -H "Content-Type: application/json" http://localhost:8080/ords/hr/departments/
HTTP/1.1 201
Content-Location: http://localhost:8080/ords/hr/departments/50
ETag: "77052B06E84B60749E410D5C2BA797DF"
Location: http://localhost:8080/ords/hr/departments/50
Cache-Control: max-age=0
Expires: Wed, 12 Apr 2023 09:55:13 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Date: Wed, 12 Apr 2023 09:55:13 GMT

{"departmentNumber":50,"departmentName":"DBA","location":"BIRMINGHAM","employees":[{"employeeNumber":9999,"employeeName":"HALL","job":"CLERK","salary":500}],"_metadata":{"etag":"77052B06E84B60749E410D5C2BA797DF","asof":"00000000002710B9"},"links":[{"rel":"self","href":"http://localhost:8080/ords/hr/departments/50"},{"rel":"describedby","href":"http://localhost:8080/ords/hr/metadata-catalog/departments/item"},{"rel":"collection","href":"http://localhost:8080/ords/hr/departments/"}]}$

In addition to the web service output, we can see rows have been created in the base tables.

select * from dept where deptno = 50;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        50 DBA            BIRMINGHAM

SQL>


select * from emp where empno = 9999;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      9999 HALL       CLERK                                 500                    50

SQL>

◉ PUT Web Services (UPDATE)


Records are updated, or inserted if they are missing, using the PUT method. The URL, method, header and payload necessary to do this are displayed below.

URL        : http://localhost:8080/ords/hr/departments/40
Method     : PUT
Header     : Content-Type: application/json
Raw Payload: 
{
  "departmentNumber" : 40,
  "departmentName" : "OPERATIONS",
  "location" : "BOSTON",
  "employees" : [
    {
      "employeeNumber" : 9998,
      "employeeName" : "WOOD",
      "job" : "CLERK",
      "salary" : 500
    }
  ]
}

Notice the row to be updated is determined by the URL, in a similar way to a GET call using the primary key. Excluding the PK columns, any columns not specified in the payload are set to null.

If the payload is placed in a file called "/tmp/update-payload.json", the following "curl" command will add a new employee to department 40 via the DEPARTMENT_DV view.

$ curl -i -X PUT --data-binary @/tmp/update-payload.json -H "Content-Type: application/json" http://localhost:8080/ords/hr/departments/40
HTTP/1.1 200
Content-Location: http://localhost:8080/ords/hr/departments/40
ETag: "AAC7DB6EB25FAB98572C2855225DE82B"
Cache-Control: max-age=0
Expires: Wed, 12 Apr 2023 10:06:51 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Date: Wed, 12 Apr 2023 10:06:51 GMT

{"departmentNumber":40,"departmentName":"OPERATIONS","location":"BOSTON","employees":[{"employeeNumber":9998,"employeeName":"WOOD","job":"CLERK","salary":500}],"_metadata":{"etag":"AAC7DB6EB25FAB98572C2855225DE82B","asof":"00000000002713E6"},"links":[{"rel":"self","href":"http://localhost:8080/ords/hr/departments/40"},{"rel":"describedby","href":"http://localhost:8080/ords/hr/metadata-catalog/departments/item"},{"rel":"collection","href":"http://localhost:8080/ords/hr/departments/"}]}$

In addition to the web service output, we can see the row has been updated by querying the table.

select * from emp where deptno = 40;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      9998 WOOD       CLERK                                 500                    40

SQL>

◉ DELETE Web Services (DELETE)


Records are deleted using the DELETE method. The URL and method necessary to do this are displayed below.

URL        : http://localhost:8080/ords/hr/departments/50
Method     : DELETE

The following "curl" command will delete a row from the EMP table. The URL is an encoded version of the one shown above.

$ curl -i -X DELETE  http://localhost:8080/ords/hr/departments/50
HTTP/1.1 200
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: max-age=0
Expires: Wed, 12 Apr 2023 10:11:09 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Date: Wed, 12 Apr 2023 10:11:09 GMT

{"rowsDeleted":1}
$

In addition to the web service output, we can see the row has been deleted by querying the table.

select * from dept where deptno = 50;

no rows selected

SQL>


select * from emp where deptno = 50;

no rows selected

SQL>

◉ Managing State


In all the previous operations we've ignored state, assuming the data is not changing. In reality it's possible the data has changed between our service calls. JSON-relational duality views give us a way to manage state, providing an "etag" which is effectively a document version we can use for optimistic locking. The following example shows this.

We delete department "50" to give us a clean starting point.

delete from emp where deptno = 50;
delete from dept where deptno = 50;
commit;

We create a new department using a REST call as we did previously.

URL        : http://localhost:8080/ords/hr/departments/
Method     : POST
Header     : Content-Type: application/json
Raw Payload:
{
  "departmentNumber" : 50,
  "departmentName" : "DBA",
  "location" : "BIRMINGHAM",
  "employees" : [
    {
      "employeeNumber" : 9999,
      "employeeName" : "HALL",
      "job" : "CLERK",
      "salary" : 500
    }
  ]
}

If the payload is placed in a file called "/tmp/insert-payload.json", the following "curl" command will add a department via the DEPARTMENT_DV view.

$ curl -i -X POST --data-binary @/tmp/insert-payload.json -H "Content-Type: application/json" http://localhost:8080/ords/hr/departments/
HTTP/1.1 201
Content-Location: http://localhost:8080/ords/hr/departments/50
ETag: "77052B06E84B60749E410D5C2BA797DF"
Location: http://localhost:8080/ords/hr/departments/50
Cache-Control: max-age=0
Expires: Wed, 12 Apr 2023 12:51:57 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Date: Wed, 12 Apr 2023 12:51:57 GMT

{"departmentNumber":50,"departmentName":"DBA","location":"BIRMINGHAM","employees":[{"employeeNumber":9999,"employeeName":"HALL","job":"CLERK","salary":500}],"_metadata":{"etag":"77052B06E84B60749E410D5C2BA797DF","asof":"0000000000274448"},"links":[{"rel":"self","href":"http://localhost:8080/ords/hr/departments/50"},{"rel":"describedby","href":"http://localhost:8080/ords/hr/metadata-catalog/departments/item"},{"rel":"collection","href":"http://localhost:8080/ords/hr/departments/"}]}
$

Notice the resulting "etag" value of "77052B06E84B60749E410D5C2BA797DF".

We add another employee to department "50" using a conventional insert. This simulates the data changing between the last time we checked the document.

insert into emp values (9997,'WOOD','CLERK',null,null,1300,null,50);
commit;

Now we attempt to update the department, passing the original "etag" value in the "_metadata" tag.

URL        : http://localhost:8080/ords/hr/departments/50
Method     : PUT
Header     : Content-Type: application/json
Raw Payload:
{
  "_metadata" : {"etag" : "77052B06E84B60749E410D5C2BA797DF"},
  "departmentNumber" : 50,
  "departmentName" : "DBA",
  "location" : "BIRMINGHAM",
  "employees" : [
    {
      "employeeNumber" : 9999,
      "employeeName" : "HALL",
      "job" : "SALESMAN",
      "salary" : 1000
    }
  ]
}

If the payload is placed in a file called "/tmp/update-payload.json", the following "curl" command will update the employee details in department 50 via the DEPARTMENT_DV view.

$ curl -i -X PUT --data-binary @/tmp/update-payload.json -H "Content-Type: application/json" http://localhost:8080/ords/hr/departments/50
HTTP/1.1 412
Cache-Control: max-age=0
Expires: Wed, 12 Apr 2023 12:58:40 GMT
Content-Type: application/problem+json
Content-Length: 204
Date: Wed, 12 Apr 2023 12:58:40 GMT

{
    "code": "PredconditionFailed",
    "message": "Predcondition Failed",
    "type": "tag:oracle.com,2020:error/PredconditionFailed",
    "instance": "tag:oracle.com,2020:ecid/qOqFfmt7AEbuGbIfns-vKg"
}
$

The data change has caused the "etag" value to change, so the update caused a "PredconditionFailed" error. In order to proceed, we would have to re-query the data to get the new "etag" value, then try again.

◉ Batch Load


In addition to basic DML and queries, it's possible to upload batches of data using AutoREST.

Make sure the additional departments and employees are removed.

delete from emp where deptno > 40;
delete from dept where deptno > 40;
commit;

The URL, method, header and payload necessary to do this are displayed below. Notice the payload is a JSON array of documents to load.

URL        : http://localhost:8080/ords/hr/departments/batchload
Method     : POST
Header     : Content-Type : application/json
Raw Payload:
[
  {
    "departmentNumber" : 60,
    "departmentName" : "DEVELOPER",
    "location" : "LONDON",
    "employees" : [
      {
        "employeeNumber" : 9997,
        "employeeName" : "SMITH",
        "job" : "MANAGER",
        "salary" : 3000
      }
    ]
  },
  {
    "departmentNumber" : 70,
    "departmentName" : "PROJECTS",
    "location" : "LONDON",
    "employees" : [
      {
        "employeeNumber" : 9996,
        "employeeName" : "JONES",
        "job" : "MANAGER",
        "salary" : 3500
      }
    ]
  },
  {
    "departmentNumber" : 80,
    "departmentName" : "MAINTENANCE",
    "location" : "LONDON",
    "employees" : [
      {
        "employeeNumber" : 9995,
        "employeeName" : "DAVIS",
        "job" : "MAMAGER",
        "salary" : 2500
      }
    ]
  }
]

If the payload is placed in a file called "/tmp/data.json", the following "curl" command will perform a batch load into the EMP table.

$ curl -i -X POST --data-binary @/tmp/data.json -H "Content-Type: application/json" http://localhost:8080/ords/hr/departments/batchload
HTTP/1.1 200
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: max-age=0
Expires: Wed, 12 Apr 2023 14:58:58 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Date: Wed, 12 Apr 2023 14:58:58 GMT

#INFO Number of rows processed: 3
#INFO Number of rows in error: 0
#INFO Last row processed in final committed batch: 3
SUCCESS: Processed without errors
$

In addition to the web service output, we can see the rows have been loaded by querying the table.

select * from dept where deptno > 40;

  DEPTNO DNAME          LOC
---------- -------------- -------------
        70 PROJECTS       LONDON
        80 MAINTENANCE    LONDON
        60 DEVELOPER      LONDON

SQL>


select * from emp where deptno > 40;

  EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      9996 JONES      MANAGER                              3500                    70
      9995 DAVIS      MAMAGER                              2500                    80
      9997 SMITH      MANAGER                              3000                    60

SQL>

The parameters that can be used to influence the batch load are documented here.

◉ Display Enabled Objects


The USER_ORDS_ENABLED_OBJECTS view displays enabled objects.

set linesize 200
column parsing_schema format a20
column parsing_object format a20
column object_alias format a20
column type format a20
column status format a10

select parsing_schema,
       parsing_object,
       object_alias,
       type,
       status
from   user_ords_enabled_objects
order by 1, 2;

◉ Thoughts


My biggest issue with AutoREST in the past was that it was table/view centric. Very few real-world units of work map directly to a single table or view. As a result I often spurned AutoREST in favour of manually coding APIs in PL/SQL and presenting them as REST web services.

With JSON-relational duality views we can easily map real-world objects to multiple database tables. The integration between ORDS and JSON-relational duality views makes AutoREST a lot more appealing.

Source: oracle-base.com