Wednesday, November 30, 2022

Making XA Easy for Microservices

In my last post, I explained how the Oracle Transaction Manager for Microservices (MicroTx) can help applications adopt microservices without giving up consistency. In this post, I'll show how easy it is for a microservices-based application to adopt XA transactions to ensure strong consistency. The XA distributed transaction protocol is one of the most widely used distributed transaction protocols in the world. Virtually every application server, transaction processing monitor, database, and messaging system provides support for XA transactions.

If we replace a monolithic application with a set of microservices using XA transactions, we will see something like the following picture. A client or other microservice calls microservice A, which starts the XA transaction by calling the Transaction Manager. A’s business logic updates its resource manager and calls B which also calls the transaction manager to enlist in the transaction. B’s business logic does some updates to its resource manager and calls C. Likewise, C calls the transaction manager to enlist in the transaction and its business logic updates its resource manager. Finally, A calls the transaction manager to commit or rollback the transaction. The transaction manager then prepares and commits A, B, and C’s resource managers or rolls them back.

Figure 1 Multiple Resource Managers
 
This is a little complicated, so let's look at how it can be simplified.

Simplifying XA Transactions for Microservices


XA transactions are often considered an anti-pattern in microservices because of their perceived complexity and the potential for performance issues due to locking. Many articles decry the use of XA transactions in microservices for these reasons. While it is certainly true that XA transactions can be problematic, the same can be said of any solution to distributed consistency. The question, in some sense, is whether one trusts the implementation of the transaction manager (in the case of XA), or trusts one's developers to correctly code the completion and compensation functions under all failure scenarios (in eventual consistency models). Transaction managers tend to be well tested and highly available.

Part of the perceived difficulty in using XA is all the interactions that must take place. However, these can easily be hidden from the application developer, making their lives far easier than hand-coding the compensation logic required for eventual consistency. In its simplest form, an application using XA only needs to demarcate the transaction boundaries. Committing or rolling back the transaction is left to the transaction manager and the associated resource managers.
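
To make that concrete, here is a minimal, language-agnostic sketch of how little the initiating service has to do. The `TransactionManager` class below is a stand-in for illustration only, not the MicroTx client API:

```python
# Hedged sketch: transaction demarcation is the only application-side job.
# TransactionManager is a stand-in, NOT the MicroTx client library API.

class TransactionManager:
    """Minimal stand-in for a transaction manager client."""
    def __init__(self):
        self.state = "none"

    def begin(self):
        self.state = "active"

    def commit(self):
        # a real TM would drive prepare/commit on every enlisted RM here
        self.state = "committed"

    def rollback(self):
        # a real TM would drive rollback on every enlisted RM here
        self.state = "rolled back"


def run_in_transaction(tm, business_logic):
    """The application's only obligation: mark where the transaction
    starts and whether it should commit or roll back."""
    tm.begin()
    try:
        business_logic()   # update RMs, call participant services, etc.
        tm.commit()
    except Exception:
        tm.rollback()
        raise
```

Everything between `begin` and `commit`/`rollback` is ordinary business logic; the completion protocol is entirely the transaction manager's concern.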

One of the major differences between XA and Sagas or Try-Confirm/Cancel is that XA participants hold locks for the duration of the transaction. Sagas and Try-Confirm/Cancel use local transactions that only span the duration of the participant’s business logic. Otherwise, from the developer’s perspective they are all very similar. An initiator starts the distributed transaction, invokes one or more participant microservices which enlist in the transaction, and then decides to commit/complete/confirm or rollback/compensate/cancel the transaction. 

The major difference between XA and either Sagas or Try-Confirm/Cancel comes from what happens once the initiator has decided to commit/complete/confirm the transaction. In XA, the transaction manager uses a two-phase commitment protocol whereas Sagas and Try-Confirm/Cancel use a one-phase commitment protocol. However, these differences should largely be up to the transaction manager to deal with and not the microservices themselves.
In XA, the flow is as follows:

1. Initiator starts the distributed transaction
2. Called microservices enlist in the transaction
3. Initiator asks transaction manager to commit or rollback the transaction
4. If the initiator decided to commit, the transaction manager asks each microservice to prepare
    1. If all participants successfully prepare, they are all asked to commit
    2. If any of the participants fail to prepare, they are all asked to rollback
5. If the initiator decided to rollback the transaction, the transaction manager asks each microservice to rollback
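
The two-phase decision in steps 4 and 5 can be sketched in a few lines. This is an illustrative simulation of the coordinator's logic, not any real transaction manager's code; the class and method names are made up:

```python
# Hedged sketch of the two-phase commit decision a transaction manager
# applies to enlisted participants. All names are illustrative.

class Participant:
    """Stand-in for a microservice's resource manager."""
    def __init__(self, can_prepare=True):
        self.can_prepare = can_prepare
        self.outcome = None

    def prepare(self):
        # phase 1: vote yes only if a later commit can be guaranteed
        return self.can_prepare

    def commit(self):
        self.outcome = "committed"

    def rollback(self):
        self.outcome = "rolled back"


def two_phase_commit(participants):
    """Commit only if every participant votes yes in the prepare phase."""
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return "committed"
    # any "no" vote dooms the transaction; rollback is safe even for
    # participants that were never asked to prepare
    for p in participants:
        p.rollback()
    return "rolled back"
```

The prepare phase is what buys atomicity: once every participant has voted yes, each has promised it can commit, so the coordinator's decision is final.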

In LRA and TCC, the flow is as follows:

1. Initiator starts the distributed transaction
2. Called microservices enlist in the transaction
3. Initiator asks transaction manager to complete/confirm or compensate/cancel the transaction.
4. Transaction manager asks each enlisted microservice to complete/confirm or compensate/cancel
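
By contrast, the one-phase completion used by Sagas (LRA) and Try-Confirm/Cancel has no prepare round at all; the coordinator simply relays the initiator's decision to each enlisted participant. A hedged sketch, with illustrative names:

```python
# Hedged sketch of one-phase completion (Sagas/LRA and TCC): no prepare
# round; the coordinator relays the decision to every participant.

class SagaParticipant:
    """Records which completion callback the coordinator invoked."""
    def __init__(self):
        self.outcome = None

    def complete(self):
        # application-supplied logic that makes the local work final
        self.outcome = "completed"

    def compensate(self):
        # application-supplied logic that undoes the local work
        self.outcome = "compensated"


def one_phase_complete(participants, decision):
    """decision: 'complete' (confirm) or 'compensate' (cancel)."""
    for p in participants:
        if decision == "complete":
            p.complete()
        else:
            p.compensate()
    return [p.outcome for p in participants]
```

Note where the burden falls: here `complete` and `compensate` are application code the developer must get right for every failure scenario, whereas in the XA sketch the equivalent work is done by the resource managers.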

The primary differences lie in what the transaction manager must do and, for Sagas, in what participants must provide in their business logic to complete or compensate the transaction. With XA, no application logic is required to perform the commit or rollback functions, as these are handled by the infrastructure and the resource managers.

XA RM Proxy


In the XA protocol, the transaction manager needs to communicate with the resource managers to commit or roll back each resource manager's involvement in the transaction. In a monolithic application this is trivial, as the transaction manager is essentially built into the application, typically via an application server such as WebLogic Server or Tuxedo, so the application already has connections to the resource managers. In the microservices world, each microservice may have its own resource manager, so the transaction manager would need to connect to each resource manager to prepare/commit/rollback the transaction. This means the transaction manager must have unique client libraries and credentials for each resource manager used by the microservices, which makes it very difficult for a transaction manager to support an arbitrary set of resource managers.

Figure 2 Without RM Proxy

To solve this problem in MicroTx, a resource manager proxy is provided by the MicroTx client libraries. When the transaction manager needs to prepare/commit/rollback a participating resource manager, it simply makes a callback to the microservice, and the proxy relays the request to the resource manager being used by the microservice. These REST-based callbacks allow the transaction manager to be agnostic to the resource manager being used by the microservice. However, this also means that participant microservices must use the MicroTx client libraries, which register the callbacks and provide the implementation of the callbacks for the resource manager being used.
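
The proxy idea itself is simple: the transaction manager only ever speaks a small, generic callback vocabulary, and the proxy translates each callback into a call on whatever driver the service already uses. A hedged sketch (the verbs, classes, and return values below are illustrative, not the MicroTx wire protocol):

```python
# Hedged sketch of the RM-proxy idea: the TM speaks only generic
# callbacks; the proxy inside each microservice translates them into
# calls on the service's own RM driver. Names are illustrative.

class FakeDriver:
    """Stand-in for the RM client library the microservice already has."""
    def __init__(self):
        self.calls = []

    def prepare(self, xid):
        self.calls.append(("prepare", xid))
        return "prepared"

    def commit(self, xid):
        self.calls.append(("commit", xid))
        return "committed"

    def rollback(self, xid):
        self.calls.append(("rollback", xid))
        return "rolled back"


class RmProxy:
    """Relays a TM callback to the local driver, so the TM needs no
    RM-specific client libraries or credentials of its own."""
    VERBS = {"prepare", "commit", "rollback"}

    def __init__(self, driver):
        self.driver = driver

    def handle_callback(self, verb, xid):
        # in real life this would be a REST endpoint the TM POSTs to
        if verb not in self.VERBS:
            raise ValueError(f"unknown callback: {verb}")
        return getattr(self.driver, verb)(xid)
```

Because every proxy exposes the same three verbs regardless of the driver behind it, the transaction manager can coordinate an arbitrary mix of resource managers.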

Figure 3 With RM Proxy
 

Request/Reply Interceptors


Another issue in any distributed transaction protocol is how a microservice enlists in the transaction. In MicroTx, interceptors are provided in the MicroTx client libraries to intercept both incoming and outgoing REST calls and their responses. These interceptors use headers to propagate the transaction context, so that called microservices can automatically be enlisted in the transaction by the interceptor. The interceptors also ensure the appropriate transaction headers are propagated in any outgoing REST call.

The typical flow looks like:

Figure 4 Interceptors

When a microservice using the MicroTx client libraries makes an outbound REST request, the library's interceptors add transaction headers to the outbound request if the microservice has started, or is currently participating in, a distributed transaction. When the recipient receives the request, the interceptors in the recipient see the transaction headers and automatically enlist the participant in the distributed transaction. For XA, the interceptors also automatically start an XA transaction on the participant's resource manager.
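
The mechanics can be sketched as a pair of functions, one on each side of the call. The header name below is invented for illustration; the real header used by MicroTx may differ:

```python
# Hedged sketch of header-based transaction-context propagation.
# TX_HEADER is an illustrative name, not the actual MicroTx header.

TX_HEADER = "X-Transaction-Id"

def outbound_interceptor(headers, current_tx):
    """Stamp the transaction header onto an outgoing request, but only
    if the caller is inside a distributed transaction."""
    if current_tx is not None:
        headers = dict(headers, **{TX_HEADER: current_tx})
    return headers

def inbound_interceptor(headers, enlisted):
    """Enlist this service when an incoming request carries a context."""
    tx = headers.get(TX_HEADER)
    if tx is not None and tx not in enlisted:
        enlisted.add(tx)   # in real life: call the TM to enlist, and
                           # for XA also start an XA branch on the RM
    return tx
```

Neither the business logic of the caller nor that of the callee has to know the header exists; propagation and enlistment ride along on every request automatically.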

Adopting XA Transactions in Microservices-Based Applications


For microservices to use XA with MicroTx, the only significant changes required are to:

1. Include the MicroTx client library in their microservice implementation

2. Use CDI annotations or MicroTx client library APIs to register the required interceptors and callbacks

3. Use CDI annotations or MicroTx client library APIs in participant microservices to obtain the connection to their XA compliant resource manager.

4. Use the MicroTx client library APIs to delineate transaction boundaries: indicating that an XA transaction has been started, and then committing or rolling back the transaction.

Source: oracle.com

Monday, November 28, 2022

Database links within pluggable databases

Sometimes, you might need a database link between two schemas within the same (pluggable) database.

Why? There are several reasons. Here is one: maybe you want to refresh one schema from another using Data Pump via a network link. This is a very common practice for development databases. In this blog, I will show step-by-step how this can be done.


Here is what is needed before you can start: two tnsnames.ora entries pointing to the same service name, just with different names. I will also need a logical directory, say schema_dir, although I will not place anything in it.

I am doing the schema cloning within the same PDB in a 21c CDB, although nothing is preventing us from doing the same in 12c, 18c or 19c.

The schema julian will be duplicated into another schema called kerry:

julian1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = PDB1.laika2.laika.oraclevcn.com)
)
)

julian2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = PDB1.laika2.laika.oraclevcn.com)
)
)

[oracle@aeg admin]$ sqlplus sys/password@//localhost:1521/PDB1.laika2.laika.oraclevcn.com as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Thu Nov 10 10:48:47 2022
Version 21.1.0.0.0

Copyright (c) 1982, 2020, Oracle. All rights reserved.

Connected to:
Oracle Database 21c EE High Perf Release 21.0.0.0.0 - Production
Version 21.1.0.0.0

SQL> CREATE OR REPLACE DIRECTORY schema_dir AS '/u01/app/oracle/homes/OraDB21Home1/datapump';

Directory created.

SQL> GRANT READ, WRITE ON DIRECTORY schema_dir TO julian;

Grant succeeded.

SQL> conn julian/password@julian1 as sysdba
Connected.

SQL> create user kerry identified by password;

User created.

SQL> grant dba to kerry;

Grant succeeded.

SQL> conn julian/password@julian1
Connected.

-- Now, let us create the database link:

SQL> create database link data_pump_link connect to kerry identified by password using 'julian2';

Database link created.

SQL> select sysdate from dual@data_pump_link;
select sysdate from dual@data_pump_link
*
ERROR at line 1:
ORA-02085: database link DATA_PUMP_LINK.LAIKA2.LAIKA.ORACLEVCN.COM connects to
PDB1.LAIKA2.LAIKA.ORACLEVCN.COM

SQL> show parameter global

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
allow_global_dblinks                 boolean     FALSE
global_names                         boolean     TRUE
global_txn_processes                 integer     1

SQL> alter system set global_names=false scope=memory;

System altered.

SQL> select sysdate from dual@data_pump_link;

SYSDATE
---------
10-NOV-22

SQL>

-- and now it is time to do the import:

[oracle@aeg datapump]$ impdp julian/password@julian1 DIRECTORY=schema_dir NETWORK_LINK=data_pump_link schemas=julian remap_schema=julian:kerry

Import: Release 21.0.0.0.0 - Production on Thu Nov 10 11:12:22 2022
Version 21.1.0.0.0

Copyright (c) 1982, 2020, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 21c EE High Perf Release 21.0.0.0.0 - Production
Starting "JULIAN"."SYS_IMPORT_SCHEMA_01": julian/@julian1 DIRECTORY=schema_dir NETWORK_LINK=data_pump_link schemas=julian remap_schema=julian:kerry
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 2.162 GB
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"KERRY" already exists

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ORACLE_OBJECT_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PASSWORD_HISTORY
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39129: Object type TABLE: "JULIAN"."SYS_IMPORT_SCHEMA_01" not imported. Name conflicts with the master table

. . imported "KERRY"."SALES" 37790720 rows
. . imported "KERRY"."BLOGS" 73991 rows
. .
. .
. . imported "KERRY"."RDBMS_BRANDS" 12 rows
. . imported "KERRY"."SHARDINGADVISOR_ECPREDS" 1 rows
. . imported "KERRY"."SHARDINGADVISOR_PREDS" 4 rows
. . imported "KERRY"."SHARDINGADVISOR_CONFIGDETAILS" 0 rows
. . imported "KERRY"."SHARDINGADVISOR_CONFIGURATIONS" 0 rows
. . imported "KERRY"."SHARDINGADVISOR_IMPORTANT_TABS" 0 rows
. . imported "KERRY"."SHARDINGADVISOR_QUERYTYPES" 0 rows
. . imported "KERRY"."USER_TABLE" 0 rows
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
ORA-39083: Object type COMMENT failed to create with error:
ORA-00942: table or view does not exist

Failing sql is:
COMMENT ON TABLE "KERRY"."SYS_IMPORT_SCHEMA_01" IS 'Data Pump Master Table IMPORT SCHEMA '

Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
ORA-39083: Object type INDEX:"KERRY"."SYS_MTABLE_00001374A_IND_3" failed to create with error:
ORA-00942: table or view does not exist

...

Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/MATERIALIZED_ZONEMAP
Job "JULIAN"."SYS_IMPORT_SCHEMA_01" completed with 9 error(s) at Thu Nov 10 11:34:49 2022 elapsed 0 00:22:22

[oracle@aeg datapump]$

Note the errors related to the master table SYS_IMPORT_SCHEMA_01, which Data Pump uses for processing exports and imports. Because the import runs within the same pluggable database, there is a conflict in replacing the master table, and hence these errors can be safely ignored.

Note also the importance of GLOBAL_NAMES when creating the DB link.

Wednesday, November 23, 2022

Introducing Project Analytics in Oracle Fusion ERP Analytics

Whether managing large enterprise projects or small projects within an organization, project teams need agility to thrive in a constantly changing business environment. To reach their goals and reduce risk, modern project management organizations (PMOs) are data-driven and recognize the need to analyze their data and understand its impacts and correlations cross-functionally at every stage of the project lifecycle. A critical part of their role is budget control and margin optimization, and for this purpose they need not only a bird's-eye view of their costs, budgets, forecasts, and spend, but also the ability to undertake detailed analyses to find hidden insights.

Oracle has introduced Project Analytics as part of Fusion ERP Analytics. Using a prebuilt, integrated analytics approach, Project Analytics helps project members identify cost optimization opportunities, understand revenue and billing trends, predict variances in budgets and forecasts, and integrate all data across departments to enable faster decisions and better project outcomes.

Quickly infer the financial health of the project portfolio with prebuilt, best-practice KPIs


The biggest challenge is keeping project expenditures on track with allocated budgets as the project progresses. With information spread across multiple systems, project members spend a lot of their time on manual data extraction and manipulation, trying to keep track of and monitor budgets and forecasts, then analyze the variances from what was planned and from the original baseline cost.

Budget and forecast analysis: Project Analytics smooths out the budget and forecast analysis process. With prebuilt KPIs, project teams can keep financial health in check by identifying “at-risk” projects with proactive comparisons of budget and forecasts versus approved original and current baseline costs. Embedded analytics helps to predict budget and forecast overruns by leveraging trends from historical performance. This allows PMOs to make necessary adjustments in costs, expenses, resources, and commitments to reach project goals.

Three key questions that Project Analytics helps users quickly answer are:

◉ What's the overall financial health of projects that I'm managing?
◉ In which cost categories am I overrunning the budget?
◉ What's our current and updated forecast against budget?

Financial Performance Dashboard

Get timely visual insights on revenue and billing  


Organizations often have difficulty getting accurate and timely project performance information for themselves and their clients. With complex project data managed across siloed systems and continued reliance on multiple spreadsheets, it's hard for teams to connect the dots and understand what the up-to-date revenue trends are, who their top customers are, and who isn’t paying on time.

Revenue and Billing Analysis: With interactive dashboards and easy-to-use self-service visualizations, project teams can now follow day-to-day project trends and variances across the project execution phases and billing milestones. With complete visibility in trends across revenue, billings, customers, contracts, and funding allocations, project teams can instantly get the answer to their queries without waiting for IT to extract the reports they need:

◉ Who are my top ten customers, and who's the most profitable?
◉ Which are the top projects driving revenue?
◉ How do current billings compare with billings from the last quarter? 
◉ What's accrued revenue compared with billings right now?
◉ How am I performing against original funding contracts?
◉ Which of my projects are doing well and how do they compare with each other?

Time is of the essence. Particularly with the rise of virtual collaboration, there's an immediate need across project organizations to enable seamless and rapid communications. With built-in natural language processing (NLP) features, PMOs can use speech-driven visualizations to get insights related to revenue and billing at any time from any device. Ask time-sensitive questions such as “how's my revenue trending this month?” or “Which customers haven’t paid this month?” to receive visual insights and share the results with stakeholders.  

Mobile phone with voice

Improve controls over project costs and expenses


PMOs need to constantly monitor the status of their project costs and manage changes to the budget. Always a challenge as the project evolves, this requires project members to analyze various costing reports periodically—whether that's expenses, variances from initial estimates, or profit margins and commitments. All of this can be a time-consuming process, as project-related data and associated costs are often stored in different locations and managed by different people, thus reducing the visibility for teams into project costs and overall spending.  

Project Cost Analysis: Project Analytics provides a telescopic view, powered by prebuilt analyses and self-service visualizations, that enables dynamic decision-making to quickly assess the impact of project costs on margins and budgets. With the ability to take a deep dive and uncover variances proactively, project members can make project cost adjustments and improve project control. Get detailed insights into the drivers of cost commitments and expenses. Slice and dice data in multiple ways at any time, whether by project organization, project type, or category. Dive into revenue and cost distributions at the account level by project, or by general ledger account and fiscal period, using prebuilt analyses and metrics. Some of the queries that the prebuilt cost analysis can answer include:

◉ Where am I forecasting to overspend and how can I control that?
◉ Which resources can I use to maximize profitability and minimize cost?
◉ What's my revenue variance quarter over quarter?
◉ How's my cost trending over the last six months by project and category?

Project Cost Analysis

Gain an integrated view of projects with finance, HR, and supply chain operations—all in one place


Typically, project members have very limited visibility into project-related information across other departments that can either positively or negatively impact their milestones. Project Analytics, in combination with Fusion SCM (Supply Chain Management) Analytics, gives teams the ability to slice and dice supply chain data, such as sales orders, procurement requisitions and orders, and inventory transactions, by project attributes.

Cross-Departmental and Project-Driven Supply Chain: With visually connected insights, project members now have an integrated view across the organization. This helps them not only recognize how to leverage resources but also deliver projects on time. The single, analytical, extensible data model provides integrated visualizations to quickly present correlations of project data across finance, HR, and supply chain operations. This enables members to find correlations in project-driven supply chains to reduce risk and control costs. They can recognize opportunities for the next budget allocation and see how much additional labor is required to stay within budget. They can get ahead of any bottlenecks in supply chain operations that would hinder their outcomes. Using the extensibility feature, project members can easily bring together data from different sources or import external files with drag-and-drop integrations for secure analysis:

◉ What are my open sales orders by project?
◉ Which orders have recently been shipped, and for which customers?
◉ Which are the current POs that are open by supplier and by project?
◉ How many hours of labor are yet to be processed in projects?

Project-driven Supply Chain Analysis

Source: oracle.com

Monday, November 21, 2022

Oracle Analytics Cloud at Oracle CloudWorld 2022

October 2022 is going to be an exciting month! Not only is Oracle hosting the first ever Oracle CloudWorld event in fabulous Las Vegas, but there are three significant Oracle Analytics Cloud updates coming to the platform. These updates provide greater depth to existing functionality to better support:

1. Data modelers, with a new enterprise semantic modeler
2. Citizen data scientists, through better integration with AI/ML services
3. Business users, with new automated insights

1. Semantic modeling and multi-user development


For over two decades, Oracle Analytics has had a rich enterprise semantic model that helps deliver consistent and trusted numbers. Until now, creating, editing, and publishing the semantic model (aka RPD) has been limited to a desktop tool. The next generation of semantic modeler, however, is entirely web-based and introduces functionality never available through the desktop tool. It provides a seamless, multi-user developer experience with tight Git integration, and it includes a new Semantic Model Markup Language (SMML) for more flexible ways to edit and update models. For example, users can now make updates directly through code rather than through the tool's user interface.

The new semantic modeler with Git integration

“This is the real game-changer for the entire Oracle Analytics community”

Vyshak Palanisamy & Slaven Mandic, ClearPeaks

Web access with Git integration gives all internal analytics users the ability to contribute their business knowledge to the centralized semantic model, effectively democratizing the enterprise semantic layer. As the variety of data sources grows, this distributed approach allows the business to contribute to the organization's data-mesh without relying or waiting on IT for such updates. Using Git, semantic-layer content can be versioned, merged, branched, and managed with other Git operations. Final submissions can go through checks and approval processes before being integrated into the primary model.

This capability has been highly anticipated and will have real positive impact for analytics practitioners. ClearPeaks, one of Oracle’s systems integrator partners had this to say, “… the new integration with Git offers a real multi-user experience where dozens of different users can work on the semantic model simultaneously, without the risk of their changes being overwritten by someone else when published. This is the real game-changer for the entire Oracle Analytics community, making the development of new models faster, more efficient, and more fun!”

2. AI/ML new integrations and enhancements


Oracle Analytics is embedded with machine learning (ML) capabilities for all levels of users, from advanced users who know the commonly used algorithms and how to tune them, to users who want a simple, code-free experience for applying ML to their data sets. The new capability lets you extend the ML functionality already built into OAC with other OCI services, such as the OCI Vision service. OCI Vision is an AI service that applies computer-vision technology to analyze image-based content. Now, OAC can render the information into a familiar, easy-to-read dashboard. In addition, seamless integration with OCI Functions allows OAC administrators to register OCI functions directly into OAC, and business users can then execute those functions in their data flows without the need to code. Further integration with other OCI AI services is currently in development.

OCI Vision (AI service) with Oracle Analytics Cloud for image recognition and classification

In this image detection and classification example, Oracle’s Ben Arnulf (Senior Director, Analytics Product Strategy) has used his doorbell camera to capture images, leveraged OCI Vision to identify himself, classified a variety of attributes about what the camera is seeing, and then displayed the findings in an OAC project.

3. Advanced composite visualization and proactive automated insights


This update has two parts: Advanced Composite Visualizations and Auto-insights. These capabilities aim to increase productivity, especially for business users, by simplifying analytics projects and using unbiased machine learning to generate analytics-driven insights.

Advanced Composite Visualizations

As the world gets more complicated every day, so does the data we require to understand it. This means larger amounts and wider varieties of data must be processed and considered in our business decisions. With business users gaining access to an organization's data-mesh, much more data is brought into analytics projects – great for decisions, hard for dashboard design. Ultimately, this additional complexity can unintentionally end up in our visualization projects as we try to show all the relevant information simultaneously. What's needed is a denser way to deliver more information faster, without increasing the dashboard's complexity. A new capability in Oracle Analytics Cloud, Advanced Composite Visualizations, helps organize content and reduce the complexity of building and maintaining dashboard projects. These composite visualizations allow the author to easily add additional metrics to a variety of graphics while reducing the effort to manage and maintain the dashboard. Overall, this capability reduces the number of individual visualizations while increasing the information density of the dashboard, without making it look busy or cluttered.

Six advanced composite visualizations displaying multiple metrics

Auto-insights

Using machine learning to find analytics-driven insights is something all companies are striving for, but how do you make that ML capability available to the users who really need it? Auto-insights is an easy-to-use, one-click capability that lets OAC analyze your data set and make smart recommendations for visualizations based on the unbiased findings it discovers. OAC proactively analyzes data sets and uses ML to produce automated insights about its findings. The recommended visualizations can be added to an analytics project with a single click, boosting productivity while building a compelling story for stakeholders.

Gain machine learning generated auto-insights about your datasets
 
Oracle Analytics Cloud is frequently updated with new capabilities for all types of users, from data specialists to ordinary business users. OAC is built to help you easily connect your organization’s data sources and create or expand a data-mesh, making it easy to retrieve any relevant information and display it clearly so that you can build your analytics-driven stories that lead to better business decisions. 

Source: oracle.com

Friday, November 18, 2022

Most Guaranteed Tips to Pass the Oracle 1Z0-149 Exam


Are you looking for a certification exam to validate your expertise in Oracle Database Program with PL/SQL? Oracle offers the Oracle Database Program with PL/SQL 1Z0-149 certification exam for exactly that purpose. Exam 1Z0-149 is intended for individuals with essential knowledge and skills in Oracle Database PL/SQL development and leads to the Oracle Database PL/SQL Developer Certified Professional (OCP) credential. Candidates can take this Oracle exam from the comfort of their homes or workplaces without disrupting their daily schedules.

Moreover, this 1Z0-149 certification will help you land a well-paying job and open doors to more advanced certifications, securing your future. But we understand that cracking the 1Z0-149 exam requires good study resources and thorough preparation.

Prepare for Oracle 1Z0-149 Exam in Simple Steps

To earn the Oracle Database Program with PL/SQL certification, you must pass exam 1Z0-149. The test is challenging and needs serious preparation, and you must demonstrate a high level of expertise across the technical aspects of Oracle Database PL/SQL development.

1. Study the 1Z0-149 Exam Syllabus Topics

Go through the 1Z0-149 syllabus topics. You must know the test's details, such as the number of questions and the time allotted, and study the different topics you must understand while preparing for the exam. With the information you gather, you can decide how much time to spend on each question during the test.

Take these notes as a basis and develop a checklist of what you must accomplish before sitting the test. This checklist should include the Oracle Database Program with PL/SQL exam topics and how you plan to tackle each one.

Also Read: Oracle Database Program with PL/SQL 1Z0-149 Certification for Your Database Career

2. Gather Relevant Resource 1Z0-149 Exam Study Materials

Numerous resources are available for your Oracle Database Program with PL/SQL exam preparation; you only need to find the appropriate ones. You can start your search in a browser and look for materials, and you can download the official 1Z0-149 PDF exam manual to make your search for resources easier. It is a guide to help you through your preparation process.

3. Take Notes on Your Study

Having collected relevant materials for your 1Z0-149 exam preparation, you must make a conscious effort to take notes on them. It is usually tough to go over materials from beginning to end again after your first pass, and only practical if you have a few resources to study during preparation. Taking proper notes as you go through the resource materials creates a reading note that you can refer to at any time before sitting exam 1Z0-149. Another important reason to take notes is that doing so helps you memorize the details of what you read.

4. Keep Track of Best 1Z0-149 Practice Tests

During the preparation period, candidates must keep practicing the topics. This helps candidates stay sharp on each topic and points out weak areas. Moreover, assessing yourself will improve your answering skills, saving time during the 1Z0-149 exam. Many websites supply practice tests for the Oracle Database Program with PL/SQL exam; research them and start taking 1Z0-149 practice tests.

5. Joining Study groups

During the preparation period, it helps if candidates join Oracle exam study groups. These groups will help you keep up to date with the latest Oracle changes or any updates to exam 1Z0-149. Moreover, these groups include both beginners and professionals. In other words, if you have any issue or query related to exam 1Z0-149, you can start a discussion in the group to get the best possible solution.

6. Must Do Revision

Revision is essential to your preparation. After you cover your study plan, spare time to review the notes you have made to refresh your memory and boost your knowledge on the topics again.

Final Words

Oracle Database certifications are well-known for equipping candidates with knowledge and skills in Oracle Database. Earning the Oracle Database PL/SQL Developer Certified Professional credential is a first step toward a professional career path and high-paying jobs. You must pass exam 1Z0-149 to receive the Oracle Database Program with PL/SQL certification.

The Semantic Modeler in Oracle Analytics Cloud

Oracle Analytics has a seasoned, rich semantic model that thousands of analytics customers have used over the past two decades. Today, I'm excited to share the next-generation modeling tool and the modeling language used to create those semantic models.

Semantic Modeler

Data Model Tools Today


Let's take a quick look at the modeling tools available in Oracle Analytics today:

Semantic Modeler (New): A web-based tool for creating semantic models and publishing the semantic model as an RPD file for deployment. We'll cover this in detail in this post.

Model Administration Tool: A developer-focused modeling tool that provides complete governed data modeling capabilities by enabling a developer to define rich business semantics, data governance, and data interaction rules to fetch, process, and present data at different granularity from disparate data systems. The semantic model created by a developer using this tool is the brains behind the intelligent analytics query engine that generates optimized queries across various data sources.

Data Modeler: A web-based, easy-to-use data modeling tool to create simple semantic models. It doesn't cover the complete spectrum of semantic model capabilities. Data Modeler doesn't support creating semantic models that involve data at different grains and federated and fragmented data sources. Data Modeler will be replaced by Semantic Modeler.

Dataset Editor: A simple, easy-to-use data modeling and data preparation tool that empowers data analysts and business analysts to create datasets based on data from local and remote files, more than 50 connection types, and Subject Areas. Datasets enable business users to create self-service data models on top of existing governed semantic models.

Web-based Semantic Modeling Tool


The new Semantic Modeler is web-based, with a modern interface for data modeling, and it provides a streamlined user experience to create governed semantic models. It has a tight integration with Git to provide a seamless multi-user development experience. Developers create the models using the Semantic Modeler UI, or they can create the models using the Semantic Model Markup Language (SMML). 

Semantic Modeler is an alternative to the Model Administration Tool with complete semantic modeling capabilities, currently available in Oracle Analytics Cloud for relational sources only. Semantic Modeler generates Semantic Model Markup Language (SMML) to define semantic models. Developers fond of creating semantic models with code can use SMML directly; those who prefer a modeling tool with diagramming capabilities can use Semantic Modeler.

Salient features of Semantic Modeler include:

◉ Modern and browser-based modeling tool that's an integrated component of Oracle Analytics Cloud.
◉ Complete semantic modeling capabilities including physical diagrams, logical diagrams, and lineage diagrams.
◉ Tight integration with any Git-based platform. You can perform most common Git operations from within Semantic Modeler.
◉ Transparent SMML generation to define semantic models.
◉ An SMML editor that includes smart integration with an expression editor to validate calculations and advanced expressions.
◉ A lineage viewer that shows the mapping of physical, logical, and presentation layers.
◉ Streamlined search integration that seamlessly shows relationships among objects.


Here is a sneak peek of Semantic Modeler features:

Physical Diagram

Logical Diagram

Git Integration - View Diff

Git Integration - Merge Conflict

Language to Define Semantic Models


Introducing Semantic Model Markup Language (SMML), a modeling language based on the popular JSON format familiar among developers to add business semantics to data. SMML provides a grammar, syntax, and structure for defining semantic models. A semantic model is a collection of SMML files that can be version controlled with the Git integration.

SMML files organized by Semantic Layers

◉ Design-time semantic model definitions in human-readable format with SMML can be exported to a run-time RPD file for deployment.
◉ Each semantic model object is a file containing SMML definition.
◉ File names map to the object names you see in the UI.
◉ File granularity is at the table level, thereby reducing the number of files to manage.
◉ Object references are easy to define with fully qualified names of the objects.
◉ SMML (along with Git integration) enables collaborative multi-user development and version control of semantic models.
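To give a feel for the format, a semantic model object might be described by a JSON file along these lines. This is a purely illustrative sketch, not actual SMML grammar; refer to the Oracle Analytics documentation for the real schema:

```json
{
  "type": "logicalTable",
  "name": "Sales",
  "columns": [
    { "name": "Revenue",    "aggregation": "SUM" },
    { "name": "Order Date", "dataType": "DATE" }
  ],
  "source": "'Sample Model'.'Physical Layer'.'SALES_FACT'"
}
```

Because each object is a plain-text file like this, ordinary Git diff and merge tooling works on semantic models.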

SMML Editor in Semantic Modeler

Preview and share your feedback


Semantic Modeler currently supports only relational sources. If you have a semantic model that uses Oracle Essbase, Oracle OLAP, or Analytic Views, continue to use the Model Administration Tool rather than Semantic Modeler.

To preview Semantic Modeler in your environment after the May update, display the Console and select System Settings and Preview to enable the Semantic Modeler. You must have Service Administrator privileges to perform this action. 

Enable Semantic Modeler Preview​

Try importing your existing semantic models (RPD files) or create a semantic model from scratch.

Source: oracle.com

Wednesday, November 16, 2022

List of the Oracle Database 23c New Features


Here is a compiled list of the new features I am aware of:

OLTP and Core DB:


Accelerate SecureFiles LOB Write Performance
Automatic SecureFiles Shrink
Automatic Transaction Abort
Escrow Column Concurrency Control
Fast Ingest (Memoptimize for Write) Enhancements
Increased Column Limit to 4k
Managing Flashback Database Logs Outside the Fast Recovery Area
Remove One-Touch Restrictions after Parallel DML
Annotations – Define Metadata for Database Objects
SELECT Without the FROM Clause
Usage of Column Alias in GROUP BY and HAVING
Table Value Constructor – Group Multiple Rows of Data in a Single DML or SELECT statement
Better Error Messages to Explain why a Statement Failed to Execute
New Developer Role: dbms_developer_admin.grant_privs(‘JULIAN’);
Schema Level Privileges
RURs are transitioning to MRPs (available on Linux x86-64)
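A few of the SQL-facing items above are easy to picture with short statements. The sketch below is untested and uses made-up table names, but follows the syntax described for 23c:

```sql
-- SELECT without the FROM clause: no more SELECT ... FROM dual
SELECT SYSDATE;

-- Column alias usable in GROUP BY and HAVING
SELECT UPPER(region) AS reg, COUNT(*) AS cnt
FROM   orders
GROUP  BY reg
HAVING cnt > 10;

-- Table value constructor: multiple rows in a single DML statement
INSERT INTO regions (id, name)
VALUES (1, 'EMEA'), (2, 'APAC'), (3, 'AMER');

-- Schema-level privileges simplify access control
GRANT SELECT ANY TABLE ON SCHEMA hr TO julian;
```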

Application Development:


Aggregation over INTERVAL Data Types
Asynchronous Programming
Blockchain Table Enhancements
DEFAULT ON NULL for UPDATE Statements
Direct Joins for UPDATE and DELETE Statements
GROUP BY Column Alias or Position
Introduction to JavaScript Modules and MLE Environments
MLE – Module Calls
New Database Role for Application Developers
OJVM Web Services Callout Enhancement
OJVM Allow HTTP and TCP Access While Disabling Other OS Calls
Oracle Text Indexes with Automatic Maintenance
Sagas for Microservices
SQL Domains
SQL Support for Boolean Datatype
SQL UPDATE RETURN Clause Enhancements
Table Value Constructor
Transparent Application Continuity
Transportable Binary XML
Ubiquitous Search With DBMS_SEARCH Packages
Unicode IVS (Ideographic Variation Sequence) Support
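Two of the developer-facing features above can likewise be sketched. These are hypothetical tables, untested against a real 23c instance:

```sql
-- Native BOOLEAN datatype in SQL
CREATE TABLE feature_flags (
  name    VARCHAR2(50),
  enabled BOOLEAN DEFAULT FALSE
);
SELECT name FROM feature_flags WHERE enabled;

-- Direct join in an UPDATE via the new FROM clause
UPDATE employees e
SET    e.salary = e.salary * 1.10
FROM   departments d
WHERE  e.department_id = d.department_id
AND    d.department_name = 'IT';
```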

Compression:


Improve Performance and Disk Utilization for Hybrid Columnar Compression
Index-Organized Tables (IOTs) Advanced Low Compression

Data Guard:


Per-PDB Data Guard Integration Enhancements

Event Processing:


Advanced Queuing and Transactional Event Queues Enhancements
OKafka (Oracle’s Kafka implementation)
Prometheus/Grafana Observability for Oracle Database

In-Memory:


Automatic In-Memory enhancements for improving column store performance

Java:


JDBC Enhancements to Transparent Application Continuity
JDBC Support for Native BOOLEAN Datatype
JDBC Support for OAuth2.0 for DB Authentication and Azure AD Integration
JDBC Support for Radius Enhancements (Challenge Response Mode a.k.a. Two Factor Authentication)
JDBC Support for Self-Driven Diagnosability
JDBC-Thin support for longer passwords
UCP Asynchronous Extension

JSON:


JSON-Relational Duality View
JSON SCHEMA

RAC:


Local Rolling Patching
Oracle RAC on Kubernetes
Sequence Optimizations in Oracle RAC
Simplified Database Deployment
Single-Server Rolling Patching
Smart Connection Rebalance

Security:


Ability to Audit Object Actions at the Column Level for Tables and Views
Enhancements to RADIUS Configuration
Increased Oracle Database Password Length: 1024 Byte Password
Schema Privileges to Simplify Access Control
TLS 1.3

Sharding:


JDBC Support for Split Partition Set and Directory based Sharding
New Directory-Based Sharding Method
RAFT Replication
UCP Support for XA Transactions with Oracle Database Sharding

Spatial and Graph:


Native Representation of Graphs in Oracle Database
Spatial: 3D Models and Analytics
Spatial: Spatial Studio UI Support for Point Cloud Features
Support for the ISO/IEC SQL Property Graph Queries (SQL/PGQ) Standard
Use JSON Collections as a Graph Data Source

Source: juliandontcheff.wordpress.com

Monday, November 14, 2022

Industrialization of artificial intelligence and machine learning for SaaS applications


This article provides an overview of how machine learning (ML) applications, a new capability of Oracle Cloud Infrastructure (OCI) Data Science, help software-as-a-service (SaaS) organizations industrialize artificial intelligence (AI) and machine learning and embed AI and ML functionality into their product experiences. It discusses our experience and the challenges we had to overcome when we faced the problem of delivering hundreds of AI and ML features to tens of thousands of customers. It describes key capabilities of ML applications that shorten the time-to-market for AI and ML features by standardizing implementation, packaging, and delivery. The article also provides an outlook into other areas beyond SaaS that can benefit from using ML applications.

It has been widely recognized that building an AI and ML solution isn’t the same as building a general software solution. Today, businesses understand that moving from a data science proof-of-concept to production deployment is a challenging discipline.

Many known success stories of AI and ML solution productizations exist, not only among high-tech IT companies but across many industries like healthcare, retail, energy, agriculture, and more. Thanks to AI and ML frameworks, toolkits, and especially services provided by major hyper-scale cloud vendors like Oracle, it is becoming easier to develop, deploy and operate AI and ML solutions. The rate of successful projects is also positively influenced by the fact that companies are adopting practices like MLOps that streamline the ML lifecycle.

However, with SaaS applications, the percentage of successful projects is lower. For example, let’s imagine you want to enrich your SaaS application with a new AI and ML feature. It’s on a completely different level! You need to develop and run thousands of pipelines on thousands of varied customer datasets, training and deploying thousands of models. To avoid this management nightmare, you must automate these tasks as much as possible.

The success of these projects depends on the efforts and knowledge of large, professional teams of experienced software engineers, data engineers, and data scientists. This investment can be expensive for an organization, and not every attempt gets a happy ending. AI and ML projects tend to go over budget in general. According to a McKinsey & Co. survey, the cost of 51% of AI projects was higher than expected when AI high performers were excluded. Another serious issue can be delays. Teams building AI and ML features for SaaS on their own can experience setbacks that extend the project by a year or more.

Some problems and challenges associated with the delivery and operations of AI and ML features for SaaS applications are too complex for every team or organization to address repeatedly. A better strategy relies on ML applications to solve them for you, enabling your development and data science teams to better focus on business problems.

Our experience: Building bespoke solutions for SaaS


At Oracle, we have been delivering market-leading SaaS applications for decades and taking the ML applications approach for years through working with SaaS teams to help them add new AI and ML capabilities to their SaaS applications. This work enabled us to gain a deep understanding of both the needs of SaaS products and challenges related to delivering AI and ML features within SaaS applications.

To stay competitive, SaaS organizations like Oracle Fusion Applications or Oracle NetSuite need the ability to introduce new AI and ML use cases as intelligent features for their customers. They need to rapidly develop, deploy, and operate AI and ML features. The development lifecycle needs to be shortened from months to weeks. This goal poses a challenge because of the large number of SaaS customers and the size of SaaS organizations. To give a sense of the scale, with thousands of customers who each have hundreds of AI and ML features, Oracle runs millions of ML pipelines and models in production!

We also must cover the entire lifecycle of AI and ML features. To accomplish this goal, it must be simple for data scientists and engineers to implement business requirements and to debug, test, and deploy solutions to production. These organizations must then be able to monitor and troubleshoot the production fleet of solution instances provisioned for thousands of customers and used by millions of users.

Taking into consideration the scale of AI and ML development and operations, SaaS organizations need to standardize the implementation and development of AI and ML features to efficiently manage and evolve them.

ML application origins: Adaptive intelligent applications


The roots of ML applications go back to our work with Adaptive Intelligent Applications (AIA). Under the umbrella of the AIA organization, we have built several generations of a framework that helps SaaS teams build AI and ML features for SaaS applications. We focused primarily on Fusion applications like enterprise resource planning (ERP), human capital management (HCM), and customer experience (CX). However, other non-Fusion applications and teams were involved even in the early days.

To make a long story short, we helped AIA to move most of their applications to production. If you are a Fusion user, you have likely already interacted with AIA functionality. To find out more about AI Apps for finance, human resources, sales, service, and procurement, visit AI Apps Embedded in Oracle Cloud Applications.

For further explanation, we provide a brief description for three examples of successfully productized AIA features.

ERP intelligent account combination defaulting

This AIA feature assists the payables function by using AI and ML to create default code combination segments when processed invoices don't match purchase orders (POs). Predicting and defaulting code combination segments reduces manual keystroke effort, reduces human errors, saves invoice processing time, and reduces costs.


AI-UX suggested actions

News feed suggestions make Oracle customers more productive by recommending important tasks to them in a timely fashion. Fusion apps have a wide range of functionality and knowing what you have access to across those apps at a particular time and how to navigate to important tasks can be challenging.

News Feed Suggestions track the navigation behavior of a user and users like them to make recommendations for the tasks they are most likely to perform at that time. For example, if a group of users historically submits timecards each Friday, they see a suggestion for that task on that day. If tasks related to closing at the end of each month exist, the users with roles associated with those tasks see suggestions for those tasks. With news feed suggestions, Oracle helps your users get to the tasks that matter faster.


CX sales lead conversion probability

Within CX sales, AI and ML models analyze data on profiles, sales, and the interactions you have with company prospects and customers to create a score to indicate the propensity of each lead to progress to a sale close. This functionality enables sales teams to have more effective lead management with better prioritization, which leads to more incremental sales, more efficient processes through a reduced need to wrangle data, and improved marketing effectiveness and return on investment driven by tighter targeting and personalized content.


ML applications: Basics


ML applications can help SaaS teams address most of the challenges they face when trying to add AI and ML features to their SaaS applications. Let's look at a few basic properties of ML applications to understand how they work and what benefits they bring.

Provider and consumer roles

ML applications distinguish between users who provide or author ML applications (providers) and users who consume or use them (consumers). These roles are reflected in the Oracle Cloud Console, where ML applications provide two corresponding UI areas.


ML application resource

An ML application resource, or just ML app, is a self-contained representation of an AI and ML use case. Its boundaries are set by well-defined contracts, and its implementation is defined by components.


A typical ML app consists of the following features:

◉ Data pipelines responsible for the preparation of training data
◉ ML pipelines that train and deploy models
◉ Triggers and schedules that define workflows
◉ Storage for the data used by other components
◉ Model deployments that serve predictions
◉ AI services that can be used instead of or in addition to custom pipelines and models

You can think of an ML app as a blueprint that defines how an AI and ML use case is implemented for a customer. This blueprint is used to instantiate a solution for each SaaS customer.

ML application instance resource

An ML application instance resource, or ML app instance, represents an ML app prepared, or instantiated, for a specific customer. An ML app instance is created for a customer during provisioning, according to the blueprint the ML app defines for the end-to-end solution. The solution typically trains customer-specific models on customer datasets. SaaS applications then use the prediction services provided by the ML app instances created for them.
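The blueprint/instance relationship can be sketched in plain Python. These are hypothetical classes for illustration only, not the OCI SDK or the actual ML applications API:

```python
from dataclasses import dataclass


@dataclass
class MLApplication:
    """Blueprint: defines how an AI/ML use case is implemented (hypothetical)."""
    name: str
    pipeline: str  # reference to a training pipeline definition

    def instantiate(self, customer_id: str) -> "MLAppInstance":
        # Provisioning creates a customer-specific instance from the blueprint.
        return MLAppInstance(app_name=self.name, customer_id=customer_id)


@dataclass
class MLAppInstance:
    """Instance: the blueprint provisioned for one specific customer."""
    app_name: str
    customer_id: str
    model: str = "untrained"

    def train(self, dataset: str) -> None:
        # A customer-specific model is trained on that customer's dataset.
        self.model = f"model({self.app_name}, {dataset})"

    def predict(self, payload: dict) -> str:
        # The SaaS application calls the instance's prediction service.
        return f"prediction by {self.model} for {payload}"


# One blueprint, many per-customer instances:
app = MLApplication(name="lead-scoring", pipeline="train_lead_model")
instances = {c: app.instantiate(c) for c in ["acme", "globex"]}
instances["acme"].train("acme_sales_data")
```

The key point the sketch captures is that a single blueprint fans out into many independently trained, per-customer instances.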


As-a-service delivery across tenancies

A key characteristic and benefit provided by ML applications is the standardization of implementation, packaging, and the delivery of AI and ML use cases, which allows ML application providers to offer prediction services that they implement as-a-service. Consumers can see only the surface of the provided ML apps. They interact with ML apps through defined contracts. They can’t see the implementations.

On the other hand, providers are responsible for the management and operations of the implementations. They ensure that prediction services implemented in the ML apps are up and running. They monitor them, react to outages, and roll out fixes and new updates.

Unlike other OCI services that operate in a single OCI tenancy, ML applications support the interactions between consumers and providers across OCI tenancies. Providers and consumers can work independently in their own separate OCI tenancies. It allows for loose coupling between providers and consumers. This separation is a huge advantage because teams of engineers and data scientists developing ML apps don’t need to ask for access to the tenancy of the SaaS application. This configuration simplifies and improves security for the overall solution. Compute and storage resources used by ML apps also don’t interfere with resource consumption and service limits in SaaS tenancy.


You might ask how providers can support consumers without access to the SaaS tenancy. ML applications guarantee observability to providers: for every ML app instance created, an ML app instance view resource is created for the provider in their tenancy. Instance views mirror ML app instances from consumer tenancies and link all related resources, such as buckets or model deployments. This setup benefits operations and troubleshooting, because providers can easily navigate to the resources used by a specific customer and review monitoring metrics and logs.

Democratization of AI: ML applications and AI services

Although ML applications drastically simplify the development of AI and ML features for SaaS applications, you still need data scientists to build ML pipelines and models for your ML apps.

We found a way to further accelerate the development of AI and ML features. In many cases, you don’t need to develop new pipelines and models and can use a generic AI and ML solution. For example, when you need sentiment analysis, anomaly detection, or forecasting, you can use AI Services. AI Services provide the best-in-class implementation of common AI and ML problems. You can use AI Services to implement the machine learning part of your ML apps.

By adopting AI Services in conjunction with ML applications, you don’t need a large data science department. Instead, your citizen data scientists can enrich your SaaS applications with cutting-edge AI features. Under the hood, ML applications and AI services apply transfer learning and tailor AI and ML models to the specific data sets used by your customers.

Best of all, this shortcut doesn’t close the door to future evolution. You can update your implementation later and provide your own ML pipelines and models if you choose. Thanks to the versioning capabilities, you can roll out even such a substantial change without affecting clients using your initial implementation.

ML applications: Advanced features


The delivery and operations of AI and ML features for SaaS applications involve problems and challenges too complex for every team or organization to address repeatedly. A better strategy relies on ML applications to solve them for you, enabling your development and data science teams to focus on business problems. The following sections give you insight into these problems and solutions.

Versioning: Evolving the fleet

Versioning is one of the most important benefits of ML applications for providers. Typically, teams developing AI and ML use cases don’t want to solve this type of problem on their own. They’re aware of or soon realize how complex it can get, and they start looking for tools or services that solve these problems.

When you decide on a custom solution, think through how to support versioning and consider the following questions:
 
◉ When and how is a new version provisioned for new customers?
◉ How can you roll out a new version to all the customers who have been already using an older version?
◉ How can you update implementations for existing customers when a pipeline is changed? What if the change isn’t backwards compatible?
◉ What if the interface of your prediction service needs to be changed?
◉ What if the change of your prediction service is backwards incompatible?
◉ How do you deal with migrating customers from old to new versions?
◉ How do you migrate without downtime?
◉ What if your SaaS application has been updated only for some customers and your prediction services are receiving different versions of incompatible requests?

You can imagine that engineers must answer hundreds of tough questions.

On the other hand, consider usability and user experience. You don’t want your data scientists to spend days and weeks following a complex change management process, filling in forms, and asking for approvals. Imagine that a data scientist fixes a typo in an ML pipeline. It must be easy to implement the change, test it, and roll it out to production without needless delay.

ML applications allow providers to release changes to production within minutes, independently of the SaaS application release process. At the same time, ML applications' strong versioning capabilities give providers confidence that validated changes are delivered to customers without customers noticing. Providers only need to update the implementation of their ML apps; ML applications ensure that the whole fleet of existing instances used by customers is updated without outages.

Finally, versioning guarantees reproducibility and traceability. Providers always know which version, down to the code revision and line of code, is used by a particular customer in an environment, and which changes were applied to the customer's implementation. Without this feature, troubleshooting problems can be challenging, and you will struggle to answer questions from auditors: for every change ever implemented, you must be able to track who introduced it and when.

Fleet monitoring and management

ML Applications are a powerful tool for deploying AI and ML functionality at mass scale. This raises the question of how to monitor and manage a fleet of millions of AI and ML models.

When you have a few hundred customers and a few ML apps, the basic features of ML applications combined with the OCI Monitoring and Logging services can be sufficient for monitoring and troubleshooting customers’ ML app instances. However, when adoption grows and you operate dozens of ML apps for thousands of customers, you need more sophisticated tools. The situation usually gets even more complicated because of the necessity to provide the applications in multiple regions and environments.

Different roles in your organization can benefit from an aggregated view that collects information across teams, regions, and environments. Your managers can use fleet monitoring to answer the following questions:

◉ How many customers are affected by a failure or defect?
◉ Are sudden failures of prediction services limited to a region?
◉ How many instances of an ML app are provisioned within a region?
◉ How many ML apps are used by a major customer?

Your product managers can analyze trends around customer adoption and usage. They also control the exposure of new features to customers by defining customer segments, such as early adopters. Without aggregated monitoring metrics, they’re flying blind.

Fleet monitoring also gives data scientists valuable assistance. They face a difficult problem: they need to generalize their models across the diverse data sets that customers have, and ensure that model performance metrics meet business requirements for most customers, while managing the tradeoffs between accuracy and operational performance and efficiency. Using fleet monitoring, they can evaluate model performance metrics and take corrective actions.
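As a minimal sketch of how such an aggregated view answers the fleet questions above, consider a few grouping helpers over per-instance health records. The record shape and field names are made up for illustration; a real deployment would pull these from the OCI Monitoring and Logging services:

```python
from collections import Counter

# Hypothetical per-instance health records, as collected by fleet monitoring
records = [
    {"app": "churn-predictor", "customer": "acme",    "region": "us-ashburn-1",   "healthy": True},
    {"app": "churn-predictor", "customer": "globex",  "region": "eu-frankfurt-1", "healthy": False},
    {"app": "demand-forecast", "customer": "acme",    "region": "us-ashburn-1",   "healthy": False},
    {"app": "demand-forecast", "customer": "initech", "region": "us-ashburn-1",   "healthy": True},
]

def instances_per_region(records):
    """How many instances are provisioned within each region?"""
    return Counter(r["region"] for r in records)

def customers_affected(records, app):
    """Which customers are hit by failures of a given ML app?"""
    return {r["customer"] for r in records if r["app"] == app and not r["healthy"]}

def apps_used_by(records, customer):
    """Which ML apps does a given customer use?"""
    return {r["app"] for r in records if r["customer"] == customer}
```

The same aggregations, sliced by region or customer segment, are what let a manager see blast radius and a product manager see adoption trends.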


Reusable components and patterns

ML applications reflect the observation that machine learning implementations tend to follow a relatively small number of patterns. We provide a library of common patterns that you can use as a foundation for the implementation of new ML applications. Providers can pick a pattern and build a new ML application on top of the pattern by adding more business logic instead of building the complete end-to-end implementation from scratch. The pattern ensures that the implementation follows best practices, is efficient and scalable, addresses all corner cases, and covers nonfunctional aspects like security and monitoring.

Patterns define extensibility points that allow providers to add customization into the overall end-to-end implementation of the ML application. For example, a pattern can introduce a transformation extensibility point that runs your Spark SQL in the transformation step within a data pipeline that’s part of the pattern’s implementation.

Demonstrating the benefits of this approach is easy by considering, for example, a pattern that ingests data from Fusion and serves predictions with a REST endpoint. The pattern implements a data pipeline using OCI Data Integration that incrementally ingests Fusion data and transforms them using OCI Data Flow. Next, an ML pipeline reads transformed training data, trains a new model, and finally deploys the newly trained model as a model deployment. Then, active monitoring triggers when something unusual happens, such as a data ingestion failure or an out-of-memory error in training. Providers are notified and can fix the problem or take proactive action to avoid failures.
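The pattern idea, including the transformation extensibility point, can be sketched as follows. The class name, pipeline steps, and Spark SQL string are hypothetical stand-ins for the pattern library, not the real implementation:

```python
class FusionServingPattern:
    """Hypothetical pattern: a fixed end-to-end pipeline (ingest, transform,
    train, deploy) that delegates one transformation step to the provider."""
    def __init__(self, transform_sql):
        self.transform_sql = transform_sql  # provider customization

    def run(self, execute):
        # 'execute' stands in for the underlying pipeline engine.
        execute("INGEST: incremental Fusion data extract")     # fixed by the pattern
        execute(self.transform_sql)                            # extensibility point
        execute("TRAIN: fit model on transformed data")        # fixed by the pattern
        execute("DEPLOY: publish model behind REST endpoint")  # fixed by the pattern

# A provider plugs in only its business-specific transformation:
pattern = FusionServingPattern(
    "SELECT account_id, SUM(amount) AS total FROM orders GROUP BY account_id"
)
steps = []
pattern.run(steps.append)
```

The provider's code is confined to the extensibility point, while ingestion, training, and deployment stay under the pattern's control, which is where the best-practice and corner-case handling lives.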

You can imagine how inefficient and error-prone it would be to let every team implement their own ML apps from scratch instead of applying the pattern. Better to let engineers and data scientists focus on their business problems instead of challenging them with questions like, “What if the data connection is interrupted while a data increment is downloaded? Is the ingestion going to resume without data loss? What if the training pipeline fails?”

Another essential feature of ML Applications is that providers can customize patterns: they can adapt the provided patterns and even build new greenfield patterns that they can reuse across their organization.

Vision of ML applications


ML Applications have already been a game changer for Oracle SaaS organizations for years. Now, as part of OCI Data Science, we’re expanding the core functionality and addressing new business problems so all our customers can take advantage.

Helping OCI customers: Independent software vendors building SaaS applications

The obvious extension to the internal usage of ML applications by Oracle teams is to open this capability to you: OCI customers building your own SaaS applications. You can add AI and ML features to your SaaS applications with the same efficiency as the Oracle SaaS organizations.

These days, it can take several years before teams mature their AI and ML development and manage to integrate AI and ML features into their SaaS applications. Most of those teams understand the challenges and expect to get the required functionality as a service instead of implementing everything from scratch on their own. A typical example of functionality that no one wants to build is versioning. Neither engineers nor data scientists want to think through all the corner cases and define how to introduce backward compatible or even incompatible changes into various parts of their AI and ML implementation. They don’t want to define update strategies for models and services used by SaaS customers.

Similarly, ML applications can help you when you need to launch AI and ML features for multiple lines of business or geographic locations. For example, financial institutions might want to instantiate ML features with different configurations for the AMER, EMEA, and APAC regions. Healthcare providers might need to provide an independent instance of an ML app to each hospital.
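The per-region instantiation described above can be sketched with a made-up `MLApp` class; real provisioning goes through OCI, and the configuration keys here are invented for illustration:

```python
class MLApp:
    """Hypothetical ML app definition that can be instantiated many times,
    each instance with its own configuration and isolated state."""
    def __init__(self, name):
        self.name = name
        self.instances = {}

    def instantiate(self, key, config):
        self.instances[key] = {"config": config, "status": "ACTIVE"}
        return self.instances[key]

fraud = MLApp("fraud-detection")
for region, threshold in [("AMER", 0.80), ("EMEA", 0.90), ("APAC", 0.85)]:
    # Each line of business gets an independent instance with its own config.
    fraud.instantiate(region, {"alert_threshold": threshold})
```

One definition, many isolated instances: the same shape works for one instance per hospital instead of one per region.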

SaaS application customization: Bring your own implementation (BYOI)

Large companies using SaaS applications with ML application-based features, such as customers of Fusion ERP, might need to customize the AI and ML capabilities provided out of the box, drawing on extra data they hold outside the SaaS application or on their private intellectual property. ML applications are designed to enable those companies to build custom ML applications that replace the out-of-the-box functionality provided by Oracle.

Such customizations can even come from third-party partners who provide specialized functionality for business verticals, as described in the next section.

Beyond SaaS: ML application marketplace

One of the key benefits of ML Applications is the ability to offer prediction services as-a-service to the SaaS applications consuming them. SaaS applications using the prediction services don’t need to worry about their implementations and operations. They simply use the provided functionality. On the other hand, the implementers of prediction services are responsible for the operations of their ML app deployments.

We plan to open this capability to anyone building applications on OCI. Anyone can build ML apps on OCI, becoming a provider of ML applications, and register them in a marketplace where any OCI customer can discover and use them.

Anyone who needs a specific AI and ML functionality can consume ML applications registered in the marketplace. They can use prediction services designed for a certain business problem without needing to invest in the development of their own machine learning pipelines and models. The marketplace can also assist customers and suggest ML apps that a customer can use with data sets they own.

Source: oracle.com