Friday, March 31, 2023

Oracle Database Service for Azure: Benefits, Features, and Best Practices


If you're looking for a cloud-based database service for your business, Oracle Database Service for Azure might be the right choice. This article will explore the benefits, features, and best practices of using Oracle Database Service for Azure.

Introduction


Oracle Database Service for Azure is a cloud-based database service that offers many benefits to businesses. It is a managed service that provides the scalability, availability, and security that businesses need. The service is fully integrated with Azure and is designed to help businesses save time, money, and resources.

Benefits of Using Oracle Database Service for Azure


Here are some of the key benefits of using Oracle Database Service for Azure:

1. Scalability

One of the biggest benefits of using Oracle Database Service for Azure is scalability. The service allows businesses to easily scale up or down as needed, without having to worry about the underlying infrastructure. This means businesses can quickly respond to changing business needs without having to invest in additional hardware or software.

2. Availability

Oracle Database Service for Azure is designed to be highly available. The service provides automatic failover and disaster recovery capabilities, ensuring that businesses can access their data at all times. This is especially important for businesses that rely on their data for critical business operations.

3. Security

Security is always a concern for businesses when it comes to their data. Oracle Database Service for Azure provides a high level of security, with features such as encryption, network isolation, and access controls. Businesses can be confident that their data is protected from unauthorized access.

4. Cost Savings

Using a cloud-based database service can help businesses save money in several ways. First, there is no need to invest in additional hardware or software. Second, businesses only pay for the resources they use, which can be a significant cost savings compared to traditional on-premises solutions.

Features of Oracle Database Service for Azure


Oracle Database Service for Azure offers many features that businesses can take advantage of. Here are some of the key features:

1. Automated Backups

Oracle Database Service for Azure provides automated backups, which can help businesses protect their data in case of an outage or disaster. Backups can be scheduled to occur at regular intervals, and businesses can easily restore data if needed.

2. High Availability

As mentioned earlier, Oracle Database Service for Azure is designed to be highly available. The service provides automatic failover and disaster recovery capabilities, ensuring that businesses can access their data at all times.

3. Security Features

Oracle Database Service for Azure provides a high level of security. The service offers encryption, network isolation, and access controls to protect data from unauthorized access.

4. Performance Monitoring

Oracle Database Service for Azure provides performance monitoring tools that can help businesses identify and resolve performance issues quickly. This can help ensure that applications running on the service are running smoothly and efficiently.

Best Practices for Using Oracle Database Service for Azure


Here are some best practices to keep in mind when using Oracle Database Service for Azure:

1. Plan for Scalability

When setting up Oracle Database Service for Azure, it's important to plan for scalability. This means considering factors such as expected growth, workload, and performance requirements.

2. Use Security Features

Oracle Database Service for Azure provides many security features that can help protect data. It's important to use these features to ensure that data is protected from unauthorized access.

3. Optimize Performance

Optimizing performance is critical for ensuring that applications running on Oracle Database Service for Azure are running smoothly and efficiently. This can be done by monitoring performance, identifying and resolving issues, and tuning the database as needed.

4. Backup Data Regularly

Regularly backing up data is important for ensuring that data is protected in case of an outage or disaster. Oracle Database Service for Azure provides automated backups, but businesses should also consider creating their own backups to ensure redundancy and quick recovery.

5. Monitor Costs

While using a cloud-based database service can help businesses save money, it's still important to monitor costs. Businesses should regularly review their usage and adjust their resources as needed to avoid unexpected costs.

Conclusion

Oracle Database Service for Azure is a powerful cloud-based database service that offers many benefits to businesses. Its scalability, availability, security, and cost savings make it an attractive option for businesses looking to move their data to the cloud. However, to make the most of the service, businesses should follow best practices such as planning for scalability, using security features, optimizing performance, backing up data regularly, and monitoring costs.

Wednesday, March 29, 2023

Automate media processing jobs

In today’s fast-paced world, delivering high-quality media content to a global audience has become a critical aspect of many businesses. The traditional methods of media delivery, such as satellite and terrestrial networks, are becoming less feasible because of the high costs, complexity, and limitations associated with these systems.

Here, the Oracle Cloud Infrastructure (OCI) Media Flow service comes in as a game changer in the media delivery landscape. In this blog, we review how we can automate media processing with OCI Media Flow using OCI prebuilt functions.

What is Media Flow?


OCI Media Flow is a cloud-based media processing solution provided by Oracle Cloud Infrastructure. It provides a flexible and scalable platform for delivering high-quality video content over the internet to a global audience. The solution uses the power of OCI’s highly available and secure infrastructure to deliver a reliable and high-quality media delivery experience to end-users.

OCI Media Flow provides processing through a media workflow-based approach. You can customize a media workflow to perform tasks such as transcoding, speech-to-text output, object detection, text detection, and thumbnail generation. After a media workflow is configured, you create a media workflow job for each media file that needs processing. These media files must be in OCI Object Storage for the media workflow job to pick them up; after processing, the output files are written back to OCI Object Storage and, optionally, to OCI Media Streams.

Why automate media processing?


Media files can be large when they come straight from a camera, and they often need edits to make the video or audio more accessible to the audience. OCI Media Flow requires you to pass the media file details to each media workflow job, so if you have 100 files to process, you must select those 100 files and submit 100 media workflow jobs. Done by hand, this process is doubtlessly time-consuming and costly.

How to automate media processing


OCI now offers prebuilt functions that let you create OCI Functions solutions from standard code supplied by Oracle that you can use without having to maintain. The prebuilt functions catalog includes a "Media Workflow Job Spawner" function that automates those 100 jobs: each file is processed as it's uploaded to OCI Object Storage.
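The spawner follows a simple event-driven pattern: an Object Storage "object created" event triggers a function that submits one media workflow job for the uploaded file. A minimal sketch of that pattern (the function names and event fields here are illustrative stand-ins, not the actual prebuilt function code or OCI SDK API):

```python
# Sketch of the event-driven pattern behind the "Media Workflow Job Spawner"
# prebuilt function. submit_job and the event field names are illustrative,
# not real OCI SDK calls.

def submit_job(workflow_id: str, bucket: str, object_name: str) -> dict:
    """Stand-in for the OCI call that creates a media workflow job."""
    return {"workflowId": workflow_id, "input": f"{bucket}/{object_name}"}

def handle_upload_event(event: dict, workflow_id: str) -> dict:
    """Invoked once per Object Storage 'object created' event: one upload,
    one media workflow job -- no manual submission of 100 jobs for 100 files."""
    data = event["data"]
    return submit_job(workflow_id, data["bucketName"], data["resourceName"])

event = {"eventType": "com.oraclecloud.objectstorage.createobject",
         "data": {"bucketName": "raw-media", "resourceName": "intro.mp4"}}
job = handle_upload_event(event, workflow_id="ocid1.mediaworkflow.oc1..example")
print(job["input"])  # raw-media/intro.mp4
```

The point of the pattern is that job submission scales with uploads automatically: the function runs per event, so there is no batch script to maintain or re-run.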


Source: oracle.com

Friday, March 24, 2023

15 important things DBAs need to know about Oracle Zero Downtime Migration (ZDM)

Utopias are common in fiction, from Plato's "The Republic" in 370 BCE to the Federation in Star Trek. Does zero downtime exist only in PowerPoint presentations? Let us find out.

Business services where companies make or lose money depending on whether their systems are up or down include, for example: online transactions, online authorizations, online stores, consumer applications, and factory applications.

An SLA of 99.99% means ~52 minutes of downtime a year, while 99.999% means less than 6 minutes of downtime a year.
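Those figures fall out of simple arithmetic: the allowed downtime is (1 minus availability) times the minutes in a year. A quick check:

```python
# Downtime budget implied by an availability SLA.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(sla_percent: float) -> float:
    """Minutes of allowed downtime per year for a given availability SLA."""
    return (1 - sla_percent / 100) * MINUTES_PER_YEAR

print(round(downtime_minutes_per_year(99.99), 1))   # ~52.6 minutes ("four nines")
print(round(downtime_minutes_per_year(99.999), 1))  # ~5.3 minutes ("five nines")
```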


◉ International Data Corporation (IDC) estimates that the cost of an 11-hour IT outage is approximately one million dollars, and that unplanned application downtime costs the Fortune 1000 from $1.25 billion to $2.5 billion every year.

◉ A Gartner report found that employees experience at least six hours of unplanned downtime per month.

◉ Large companies, those with 2,500 users or more, report even higher losses: up to 16 hours of unplanned downtime per month.

Put simply: downtime equals lost revenue.


Many businesses, such as retail, travel, and banking, no longer have the extended downtime windows for planning upgrades and migrations.

Oracle ZDM follows Oracle Maximum Availability Architecture (MAA) principles and incorporates products such as GoldenGate and Data Guard to ensure high availability, with an online migration workflow that leverages technologies such as Recovery Manager, Data Pump, and database links.

Here are (probably) the most important 15 things every DBA should know about ZDM:

1. The latest version of Zero Downtime Migration (ZDM), 21.4, is available for download.

2. Oracle ZDM supports the following Oracle database versions: 11.2.0.4, 12.1.0.2, 12.2.0.1, 18c, 19c, 21c.

3. For physical migrations, the source and target databases must be the same database version. For logical migrations, the versions can differ, enabling in-flight upgrades.

4. Oracle ZDM supports Oracle databases hosted on Linux operating systems. For logical migrations, ZDM also supports AIX and Solaris as a source.

5. The source database can be a single-instance database migrating to a single-instance or RAC database, or a RAC One Node/RAC database migrating to a RAC database.

6. Oracle ZDM supports Enterprise & Standard Edition Oracle databases as source databases. Enterprise Edition databases are migrated leveraging Oracle Data Guard; Standard Edition databases are migrated in an offline manner using a backup and restore methodology.

7. Oracle ZDM allows for the source database to be a non-CDB or a container database (CDB) with one or more pluggable databases (PDBs).

8. Starting in 21c, ZDM allows for non-CDB to CDB migration with both its physical and logical migration workflows.

9. ZDM supports on-premises databases to be migrated to:

◉ Oracle Database Cloud Service Bare Metal
◉ Oracle Database Cloud Service Virtual Machine
◉ Exadata Cloud Service, Exadata Cloud at Customer, Exadata On-Premises
◉ Autonomous Database (Logical Workflow only)


10. ZDM Supports the following backup mediums: OCI Object Storage, Oracle Zero Data Loss Recovery Appliance, NFS Storage.

11. The Oracle ZDM binaries must be installed on a separate host that fulfills the following requirements:

◉ A Linux host running Oracle Linux 7
◉ 100 GB of free storage space

12. The source database must be in archive log mode. If the source database is on 12c Release 2 or later and Transparent Data Encryption (TDE) is not enabled, you must configure the TDE wallet.

13. The target database must be created prior to the migration, and the target database version should be the same as the source database version.

14. Ensure that both the source database server and the target database server can access the backup medium (Object Store for DBCS BM/VM and ExaCS, Recovery Appliance or NFS Storage for ExaCC).

15. The following port requirements must be met:

◉ ZDM Service Node: Port 22 must be open. This port is used for SSH, enabling connectivity between the service node and the source database server, and between the service node and the target database server.

◉ Source Database Server: Port 1521 must be open and not blocked by a firewall; this port is used for Oracle Net Services connectivity between the source and target database servers, enabling Data Guard synchronization. Port 443 must also be open and not blocked by a firewall; it is used to access the Object Store.

◉ Target Database Server: Port 1521 must be open and not blocked by a firewall; this port is used for Oracle Net Services connectivity between the source and target database servers, enabling Data Guard synchronization. Port 443 must also be open and not blocked by a firewall; it is used to access the Object Store.
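Before starting a migration, it's worth verifying these prerequisites from the ZDM service node. A small, generic TCP reachability check can do that (the host names in the commented example are placeholders, not real endpoints):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refusal, timeout, and DNS failure
        return False

# Prerequisite checks for a ZDM migration (hosts are placeholders):
# checks = {
#     "SSH to source":        port_open("source-db.example.com", 22),
#     "SQL*Net to target":    port_open("target-db.example.com", 1521),
#     "HTTPS to Object Store": port_open("objectstorage.example.com", 443),
# }
```

A check like this only proves TCP reachability; firewall rules that inspect traffic, or listener configuration problems, can still block the actual migration protocols.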


Good to know: Oracle GoldenGate for Oracle can be used for 183 days to perform migrations into Oracle databases located in Oracle Cloud Infrastructure using Oracle Zero Downtime Migration; this also applies to migrations to Exadata Cloud@Customer. In 21.4, you can configure section sizes for RMAN and also upgrade the time zone (TZ) file on the target during the ZDM post-migration steps.

Finally, note the new product names: BaseDB (formerly DBCS) and ExaDB-D (formerly ExaCS).

Source: juliandontcheff.wordpress.com

Wednesday, March 22, 2023

What is Synthetic Data? Exploring Its Uses and Benefits


As data continues to play a vital role in shaping businesses and industries, there is an increasing demand for data that is both diverse and abundant. However, obtaining large amounts of high-quality data can be challenging and expensive. Enter synthetic data, a new approach to data generation that is rapidly gaining popularity. In this article, we will explore what synthetic data is, its benefits, and its various applications.

What is Synthetic Data?


Synthetic data refers to artificially generated data that mimics real data in terms of statistical properties and distributions. It is created using algorithms that are trained on real data and can produce new data that is similar to the original data. Synthetic data is often used when real data is either too expensive or too difficult to obtain.
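In its simplest form, mimicking statistical properties means estimating distribution parameters from the real data and sampling new records from the fitted distribution. A deliberately minimal sketch using only the standard library (a stand-in for richer generators such as GANs):

```python
import random
import statistics

def fit_and_sample(real_values, n, seed=42):
    """Fit a normal distribution to real data and draw n synthetic samples
    that mimic its mean and standard deviation -- a deliberately minimal
    stand-in for richer generators such as GANs."""
    mu = statistics.mean(real_values)
    sigma = statistics.stdev(real_values)
    rng = random.Random(seed)  # seeded for reproducibility
    return [rng.gauss(mu, sigma) for _ in range(n)]

real = [102.3, 98.7, 101.1, 99.5, 100.8, 97.9, 103.2, 100.1]
synthetic = fit_and_sample(real, n=1000)
# The synthetic sample tracks the real data's mean and spread without
# containing any of the original records.
```

Real generators must of course capture correlations between columns and non-normal shapes, which is exactly what the GAN and simulation approaches described below aim to do.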

How is Synthetic Data Generated?


There are various ways to generate synthetic data, but one common method is through the use of generative adversarial networks (GANs). GANs are a type of machine learning algorithm that consists of two neural networks: a generator and a discriminator. The generator creates synthetic data while the discriminator evaluates the quality of the synthetic data against real data.

Another method is to use simulation software to generate synthetic data. This approach is often used in fields such as robotics and autonomous vehicles, where it is difficult to obtain large amounts of real-world data.

Benefits of Synthetic Data


1. Cost-effective: Synthetic data is significantly cheaper to produce than real data.

2. Diverse: Synthetic data can be generated with different attributes and characteristics, allowing for greater diversity in datasets.

3. Scalable: Synthetic data can be generated quickly and easily, making it highly scalable.

4. Privacy: Synthetic data can be used to protect sensitive information in the original data, as it does not contain any real information.

Applications of Synthetic Data


1. Machine learning: Synthetic data is often used to train machine learning models, as it can provide large amounts of diverse data.

2. Testing: Synthetic data can be used to test the performance of algorithms and models.

3. Healthcare: Synthetic data can be used to create virtual patients for medical research, which can help reduce the use of animal testing.

4. Autonomous vehicles: Synthetic data can be used to train autonomous vehicles in simulated environments before testing in the real world.

Challenges of Synthetic Data


1. Realism: Synthetic data may not always accurately represent the real world, as it is based on statistical properties and distributions.

2. Bias: Synthetic data can be biased if the original data used to train the algorithm is biased.

3. Ethics: The use of synthetic data raises ethical concerns around privacy and the potential misuse of the data.

Conclusion


Synthetic data is a promising solution for generating large amounts of high-quality data that is both diverse and cost-effective. Its benefits range from scalability and diversity to privacy and cost-effectiveness. While there are still challenges to overcome, the potential applications of synthetic data are vast and varied.

Monday, March 20, 2023

Achieving Data Sovereignty with Oracle Sharding (Part 1)

In this two-part blog on data sovereignty, I will share how customers are achieving data sovereignty with Oracle Sharding, including step-by-step implementation details.

Part 1: Overview of data sovereignty and how Oracle Sharding can provide a perfect solution for achieving it

Part 2: Customer use case - how one of the biggest reinsurance companies is using Oracle Sharding to achieve data sovereignty

Join us at Oracle CloudWorld 2022 for a session on data sovereignty: LIT4195: Complying with Data Sovereignty Regulation using Oracle Sharding [Wednesday, Oct 19 | 4:20 PM PDT]

Introduction


Country- or region-specific data placement, residency, and sovereignty regulations are becoming more prevalent. We discuss how Oracle Database customers leverage Oracle Sharding to comply with such regulations. We also discuss various flavors of such rules and corresponding architecture and implementation patterns.

Data sovereignty generally refers to how data is governed by regulations specific to the region in which it originated. These regulations can specify where data is stored, how it is accessed, how it is processed, and the life cycle of the data.

With the exponential growth of data crossing borders and public cloud regions, more than 100 countries have now passed regulations concerning where data is stored and how it is transferred. Personally identifiable information (PII) in particular is increasingly subject to the laws and governance structures of the nation in which it is collected. Data transfers to other countries are often restricted or allowed based on whether that country offers similar levels of data protection, and whether that nation collaborates in forensic investigations.

Data sovereignty requirements are driven by local regulations, which can result in different application architectures. A few examples:

◉ Data must be physically stored in a certain geographic location, for example within the boundaries of a specific country or a region comprising several countries. It is fine to access and process the data remotely, as long as the data is not stored in remote locations. From a technical standpoint, this implies that data stores such as databases, object stores, and messaging stores that physically hold the persistent data must be in a certain geographic location. However, the application runtime that contains the business logic for processing the data can be outside that location. Examples of such application parts include application servers, mobile applications, API gateways, workflows, and so on.

◉ Data must be physically stored and processed in a certain geographic location: In this case, storing of data and processing of data must take place within the defined geographic location.

Problem Statement


Achieving data sovereignty has become a complex problem for IT because, in most cases, teams end up duplicating the entire stack (application and database) to provide a solution to the business. This not only adds hardware cost but also increases operational cost. As more and more countries introduce compliance policies around data sovereignty, this approach does not scale, and some companies are even considering restricting business to certain countries because the cost of running a separate stack does not add up to the business benefits.

Use Case of Achieving Data Sovereignty with Oracle Sharding


A large but imaginary financial institution, Shard Bank, wants to offer credit services to users in multiple countries. Each country where the credit service will be provided has its own data privacy regulations, and Personally Identifiable Information (PII) must be stored in that country.

Access to the data must be limited, and data administrators in one country cannot see data in others. The solution for this use case is user-defined sharding with shards configured in different countries, plus Real Application Security (RAS) for data access control.

Overview of Oracle Sharding Solution


Oracle Sharding solution provides you with in-country data storage, and still supports a global view of all the data.

The example below demonstrates a hybrid Oracle Sharding user-defined deployment between OCI data centers and on-premises across multiple regions. In this Oracle Sharding configuration, you can store and process all data locally. Each database (in each sovereign region) is made into a shard and the shards belong to a single sharded database. Oracle Sharding allows you to query data in one shard (within one country), and Oracle Sharding supports multi-shard queries (that can query data from all the countries).


The global sharded database is sharded by a key indicating the country in which it must reside. In-country applications connect to the local database as usual, and all data is stored and processed locally.

Any multi-shard queries are directed to the shard coordinator. The coordinator rewrites the query and sends it to each shard (country) that has the required data, then processes and aggregates the results from all of the countries and returns the result.
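The routing and scatter-gather behavior described above can be sketched with plain data structures standing in for the per-country shards (the dictionaries and functions here are illustrative, not Oracle Sharding APIs):

```python
# Sketch of user-defined sharding by country key with a coordinator that
# scatter-gathers multi-shard queries. The dict-based "shards" are
# illustrative stand-ins for the per-country databases.

shards = {  # country code -> that country's locally stored rows
    "DE": [{"cust": 1, "country": "DE", "balance": 120}],
    "FR": [{"cust": 2, "country": "FR", "balance": 80},
           {"cust": 3, "country": "FR", "balance": 40}],
}

def route(country: str):
    """Direct-to-shard routing: an in-country query touches only the local shard."""
    return shards[country]

def multi_shard_total():
    """Coordinator-style query: sent to every shard and aggregated centrally,
    so no raw data has to be copied out of any region."""
    return sum(row["balance"] for shard in shards.values() for row in shard)

print(len(route("FR")))      # 2  (only the French rows)
print(multi_shard_total())   # 240
```

Note that only the aggregate leaves each "country" in the multi-shard case, which is the property that keeps cross-region reporting compatible with residency rules.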

Oracle Sharding makes this use case possible with the following capabilities:

◉ Direct-to-shard routing for in-country queries.
◉ The user-defined sharding method allows you to use a range or list of countries to partition data among the shards.
◉ Automatic configuration of replication using Oracle Active Data Guard, with the replicas constrained to remain in-country.

The benefits of this approach are:

◉ Each shard can be in a cloud or on-premises within the country.
◉ Shards can use different cloud providers (multi-cloud strategy) and replicas of a shard can be in a different cloud or on-premises.
◉ Online resharding allows you to move data between clouds, or to and from the cloud and on-premises.
◉ Strict enforcement of data sovereignty, providing protection from inadvertent cross-region data leaks.
◉ A single multimodel big data store with a reduced volume of data duplication.
◉ Better fault isolation, as planned or unplanned downtime within one region/LOB does not impact other regions/LOBs.
◉ Ability to split busy partitions and shards as needed.
◉ Support for full ACID properties is critical for transactional applications.

Benefits of Implementing Data Sovereignty with Oracle Sharding


Oracle Sharding meets data sovereignty requirements and supports applications that require low latency and high availability.

◉ Sharding makes it possible to locate different parts of the data in different countries or regions – thus satisfying regulatory requirements where data has to be located in a certain jurisdiction.
◉ It also supports storing particular data closer to its consumers. Oracle Sharding automates the entire lifecycle of a sharded database – deployment, schema creation, data-dependent routing with superior run-time performance, elastic scaling, and life-cycle management.
◉ It also provides the advantages of an enterprise RDBMS, including relational schema, SQL, and other programmatic interfaces, support for complex data types, online schema changes, multi-core scalability, advanced security, compression, high-availability, ACID properties, consistent reads, developer agility with JSON, and much more.

Implementing Data Sovereignty with Oracle Sharding


Oracle Sharding distributes segments of a data set across many databases (shards) on different computers, on-premises, or in the cloud. These shards can be deployed in multiple regions across the globe. This enables Oracle Sharding to create globally distributed databases honoring data residency.

All of the shards in a given database are presented to the application as a single logical database. Applications are seamlessly connected to the right shard based on the queries they run. For example, if an application instance deployed in the US needs data that resides in Europe, the application request is seamlessly routed to an EU data center, without the application having to do anything special.


Additionally, Oracle Database security features such as Real Application Security (RAS) and Oracle Database Vault can be used to limit data access further, even within a region. For example, an administrator in the EU region can further be restricted to see data only from a subset of countries and not all EU countries. Within a Data Sovereignty region, data can be replicated across multiple data centers by using Oracle Data Guard and Oracle GoldenGate for such replication.

Oracle Sharding management interfaces give you control of the global metadata and provide a view of the physical databases (replicas), data they contain, replication topology, and more. Oracle Sharding handles data redistribution when nodes are added or dropped.

You can access worldwide reporting without actually copying the data from the various regions: Sharding can run multi-shard reports without copying any data, because Oracle Sharding pushes queries to the nodes where the data resides.

Oracle Sharding provides comprehensive data sovereignty solutions that focus on the following aspects:

◉ Data Residency: Data can be distributed across multiple shards, which can be deployed in different geographical locations.
◉ Data Processing: Application requests are automatically routed to the correct shard irrespective of where the application is running.
◉ Data Access: Data access within a region can be restricted further using the Virtual Private Database capability of Oracle Database.
◉ Derivative Data: Ensuring that the data is stored in an Oracle Database, and using Oracle Database features to contain the proliferation of derivative data.
◉ Data Replication: Oracle Sharding can be used with Oracle Data Guard or Oracle GoldenGate to replicate data within the same Data Sovereignty region.

Source: oracle.com

Wednesday, March 15, 2023

Announcing Oracle Transaction Manager for Microservices 22.3.1


Oracle is pleased to announce a new release of Oracle Transaction Manager for Microservices Free (MicroTx). This new release, 22.3.1, provides a number of enterprise enhancements and demonstrates Oracle's commitment to microservices.

Enhancing Transaction Coordinator Availability, Scalability, and Recoverability


This new release includes a number of resiliency enhancements that improve the availability, scalability, and recoverability of the MicroTx transaction coordinator microservice. With this release, the coordinator microservice, tcs, can be configured to store transaction records in either Oracle Database or etcd in place of the current in-memory transaction log. This allows tcs to recover in-flight transactions should a tcs instance fail. More importantly, it allows multiple tcs instances to be deployed as a cluster sharing the transaction logs stored in Oracle Database or etcd. By clustering this microservice, you can scale the number of tcs instances up and down as needed for load and availability.

Session Affinity


Customers often want to run multiple instances of their microservices. When one of these instances enlists in an XA transaction, it provides a callback URL that tcs can use to manage the transaction. Typically this URL is common across all instances of the microservice, so when tcs calls back to a participant microservice using XA, the request is routed to whichever participant instance the service mesh chooses. There are, however, several use cases where tcs needs to call back to the same instance that enlisted in the transaction. Here are some of the situations in which session affinity is required:

1. The participant is a Last Logging Resource (LLR) in an XA transaction. For LLR, the MicroTx transaction coordinator prepares all the other participants and then sends a commit request to the LLR participant with a copy of the commit record. That commit record is written to the LLR's resource manager as part of the local resource manager transaction, and the participant then commits that local transaction, which includes any resource manager updates the participant made.

2. The participant uses PostgreSQL as its resource manager. XA callbacks made by the MicroTx coordinator must go to the same participant instance that enlisted in the transaction, because PostgreSQL requires those operations to be performed on the same connection that was used when participating in the XA transaction.

3. The participant in a MicroTx transaction holds state that is not shared across instances of the participant. Session affinity ensures that the requests associated with a specific transaction are routed to the same instance.

Session affinity can be enabled on a per-participant basis. Unless session affinity is required, it is recommended not to enable it.

Resource Manager ID (RMID)


In the initial release of MicroTx, participant microservices were required to define a unique resource manager ID (RMID) identifying the resource manager the microservice was using. The purpose of this configuration item is to let the MicroTx transaction coordinator perform optimizations for XA transactions. In this release the RMID is optional, but providing it correctly can improve application performance. When a participant microservice enlists in an XA transaction, it includes the RMID, if configured, in the enlistment request. The MicroTx transaction coordinator uses this information to determine whether a new transaction branch needs to be created for the enlistment. If another participant has already enlisted in the XA transaction with the same RMID, the new enlistment receives the same XID, including the same branch qualifier, as the other participant. This reduces the number of branches in the XA transaction, which reduces the time to commit the transaction and increases application performance. In cases where all the participant microservices use the same resource manager, such as a single Oracle Database, the transaction can even be committed with a one-phase commit, skipping the prepare phase and further increasing the performance of the application.
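The branch-sharing logic can be sketched as follows; the class and names are illustrative, not MicroTx internals:

```python
# Sketch of the RMID optimization: participants that declare the same
# resource manager ID join the same transaction branch, so the coordinator
# has fewer branches to prepare and commit. All names are illustrative.

import itertools

class Transaction:
    def __init__(self, gtrid):
        self.gtrid = gtrid                  # global transaction ID
        self._branch_seq = itertools.count(1)
        self._branch_by_rmid = {}           # RMID -> existing branch qualifier
        self.branches = []

    def enlist(self, participant, rmid=None):
        """Return the (gtrid, branch qualifier) XID for a participant.
        A known RMID reuses its existing branch; no RMID opens a new one."""
        if rmid is not None and rmid in self._branch_by_rmid:
            return (self.gtrid, self._branch_by_rmid[rmid])
        bqual = f"bq{next(self._branch_seq)}"
        if rmid is not None:
            self._branch_by_rmid[rmid] = bqual
        self.branches.append(bqual)
        return (self.gtrid, bqual)

tx = Transaction("gtrid-1")
xid_a = tx.enlist("orders-svc",  rmid="oracledb-1")
xid_b = tx.enlist("billing-svc", rmid="oracledb-1")  # same RMID -> same branch
xid_c = tx.enlist("audit-svc")                       # no RMID -> new branch
print(xid_a == xid_b, len(tx.branches))  # True 2
```

With two participants sharing one branch, the coordinator prepares and commits two branches instead of three; when a single branch remains, a one-phase commit becomes possible.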

Source: oracle.com

Monday, March 13, 2023

Long-Term Backups (up to 10 years) on Autonomous Dedicated!

The long-term storage of backups is generally an essential part of cybersecurity and disaster recovery plans in any serious business. Moreover, many applications have regulatory, compliance, or other business purposes that require the retention of database backups beyond a lapse of just days or months.

Until now, all backups (automatic and manual) in Oracle's Autonomous Database on Dedicated Exadata Infrastructure (ADB-D) and Exadata Cloud@Customer (ADB-C@C) were tied to the retention set on the Autonomous Container Database (ACD) for up to 60 days. So today, we're announcing the launch of Long-Term backups for Autonomous Databases (ADB) on ADB-D and ADB-C@C.

With this update, customers can now more easily meet their cybersecurity, DR plans, and compliance requirements for storing backups. This new capability allows customers to select a retention time between 90 days and 10 years for long-term backups.

OCI Console Experience


Once you have navigated to the ADB on which you want to run the long-term backup, scroll to the bottom left of the page and select Backups under Resources.


In the Backups section, you now have a new option called “Create long-term backup.”


A screen will pop up where you must enter a backup name and set the desired retention. 

Public Cloud:



Cloud@Customer:


In the initial release for Cloud@Customer, long-term backups must be stored on an NFS device, but it does not have to be the same device that the automatic and manual backups use. Select an existing NFS device or create a new backup destination under the Infrastructure section of your Cloud@Customer resources in the UI.


Click Create, and that is it. An asynchronous backup job will be automatically created for you and can be monitored under the Work Requests tab. Once the backup has been completed, a few operations can be done with it.


A new ADB can be created from the long-term backup, the retention time can be edited as needed from 90 days to up to 10 years, and the backup can be deleted if desired. 

A few final things to understand about long-term backups:

◉ In Data Guard environments, long-term backups are taken on whichever database the action was initiated on, primary or standby. To back up both the primary and the standby database, you must make two separate requests.

◉ A long-term backup will be available as long as the ADB is available in the running or stopped state (not terminated).

◉ You can use a long-term backup only to create a new database. You cannot use it for an in-place/PITR restore.

◉ Long-term backups are fully supported in the UI, through the CLI and APIs, and through Terraform.

◉ An ADB created from a long-term backup is always upgraded to a currently supported version (the target ACD version). For example, 19c may no longer be available for a 5-year-old backup, so Oracle will upgrade the database to 23c. However, the availability of the data in the new ADB does not guarantee that an application will still work against it.
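If you automate long-term backup requests through the CLI, APIs, or Terraform, it can be worth validating the retention value client-side before submitting the request. The helper below is a hypothetical sketch, not part of any OCI SDK; it simply encodes the documented 90-day-to-10-year window.

```python
MIN_RETENTION_DAYS = 90          # documented lower bound for long-term backups
MAX_RETENTION_DAYS = 10 * 365    # documented upper bound: 10 years

def validate_long_term_retention(days: int) -> int:
    """Reject retention values outside the supported 90-day-to-10-year window."""
    if not MIN_RETENTION_DAYS <= days <= MAX_RETENTION_DAYS:
        raise ValueError(
            f"long-term retention must be between {MIN_RETENTION_DAYS} and "
            f"{MAX_RETENTION_DAYS} days, got {days}")
    return days
```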

That's it. Long-term backups put you in full control of your recoverability and compliance requirements and are completely automated and managed by Oracle! Remember, it's a good idea to test your long-term backups occasionally by creating a clone from them and verifying that your data is available.

Source: oracle.com

Saturday, March 11, 2023

Oracle Cloud Guard and Hybrid Tablespace Encryption

“There are two types of encryption: one that will prevent your sister from reading your diary and one that will prevent your government” – Bruce Schneier

What is Hybrid Tablespace Encryption? With the latest release updates of Oracle 19c (19.16 and above; 19.18 is the latest as of February 2023), it is now possible to decrypt redo operations in a hybrid cloud DR configuration where the cloud database is encrypted with TDE (Transparent Data Encryption) and the on-prem database is not.

We are seeing more and more Oracle on-prem databases using Data Guard with the standby database in the cloud, mostly in OCI. When the on-prem database is not using TDE and the cloud one is, it is important to ensure that redo data is encrypted during transport. The Advanced Security Option plays a key role here, as this new feature is controlled via the (dynamic) TABLESPACE_ENCRYPTION initialization parameter.

V$PARAMETER_VALID_VALUES shows the accepted values of all database init.ora parameters. Here is what we have for tablespace encryption (viewable even in ADB):


In a multitenant environment, remember that you need to change the value of TABLESPACE_ENCRYPTION in the root container. You cannot set TABLESPACE_ENCRYPTION in a PDB.

Here is how to set up encryption, depending on (1) whether you have the Advanced Security Option and (2) how you would like to encrypt:

  • AUTO_ENABLE is the default for cloud databases. It encrypts all new tablespaces if you have the Oracle Advanced Security license.
    • If an existing tablespace is not encrypted, then the database writes a warning to the alert log.
    • Encrypted tablespaces cannot be converted to unencrypted tablespaces.
    • Because all tablespaces must be encrypted in OCI, setting TABLESPACE_ENCRYPTION to DECRYPT_ONLY or MANUAL_ENABLE will be ignored in the cloud and the database will behave as if the setting is AUTO_ENABLE.
    • In the primary database, this setting encrypts the new tablespace with an encryption key.
    • In a standby database, this setting adds a key to the new tablespace and encrypts all blocks.
  • DECRYPT_ONLY prevents new tablespaces from being encrypted. Use it in your on-prem database if you do not have the Advanced Security Option.
    • If an existing tablespace is encrypted, then the database writes a warning to the alert log.
    • Unencrypted tablespaces cannot be changed to encrypted tablespaces!
  • MANUAL_ENABLE enables you to selectively encrypt tablespaces if the database is licensed for Oracle Advanced Security. This is the default for both on-prem primary and standby databases.

Note that (1) for RAC, you should set TABLESPACE_ENCRYPTION to the same value for all instances of the primary database and for all instances of the standby database, and (2) modifying TABLESPACE_ENCRYPTION does not affect master key rotation operations.
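The behavior of the three settings for newly created tablespaces can be condensed into a small decision table. The function below merely encodes the rules described above (including the OCI override); it is an illustrative summary, not an Oracle API.

```python
def new_tablespace_behavior(setting: str, in_oci: bool) -> str:
    """Summarize how a *new* tablespace is treated for each
    TABLESPACE_ENCRYPTION setting, per the rules described above."""
    setting = setting.upper()
    if in_oci:
        # In OCI, DECRYPT_ONLY and MANUAL_ENABLE are ignored; the database
        # behaves as if AUTO_ENABLE were set.
        setting = "AUTO_ENABLE"
    return {
        "AUTO_ENABLE": "encrypted automatically",
        "DECRYPT_ONLY": "created unencrypted (encrypted tablespaces trigger an alert-log warning)",
        "MANUAL_ENABLE": "encrypted only if you explicitly choose to encrypt it",
    }[setting]
```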


On the second topic: Oracle Cloud Guard is useful for maintaining a good security posture by detecting weak security configurations and activities that can indicate cloud security risks. Cloud Guard detects security problems within a customer tenancy by ingesting audit and configuration data about resources in each region, processing it based on detector rules, and correlating the problems at the reporting region. Identified problems are used to produce dashboards and metrics and may also trigger one or more provided responders to help resolve the problem.

Here are the key features of Oracle Cloud Guard:

  • Detects misconfigured resources and identifies risky activities.
  • Delivers out-of-the-box recipes that apply security best practices with a few clicks.
  • Provides automated responders to trigger corrective actions.
  • Gives a global view of customer’s overall security posture of their OCI tenancy, including security and risk scores and activity maps.

Enabling Cloud Guard is simple. The first step is to select/define the policies and then provide some basic information:


And just click the “Enable” button.

Source: juliandontcheff.wordpress.com

Friday, March 10, 2023

Disaster Recovery Solution for Oracle Analytics Server on Oracle Cloud Marketplace using Snapshot Replication

High Level Steps involved in the Disaster Recovery Setup


1. Create Primary Oracle Analytics Server Instance in one Region e.g. Ashburn

2. Create Disaster Recovery Oracle Analytics Server Instance in another Region e.g. Phoenix

3. Configure both Oracle Analytics Server Instances to use the same external connections, like the SMTP Server, Data Sources, and the Database Tables involved in the Security Configuration (BISQLGroupProvider, Act As Configuration, Data Level Security, etc.).

4. Configure both Oracle Analytics Server Instances to share the same Security Rules to connect to the same on-premises or Cloud Data Sources.

5. Both Oracle Analytics Server Instances run independently of each other, which means they do not share the RCU Database Schemas to sync connections, Catalog Objects, etc. (Answers Reports/DV Projects).

6. The same set of Users and Groups can be accessed within the Oracle Analytics Cloud Service Console (Users and Roles section), as both Services run on the same Oracle Identity Cloud Service (IDCS) for Identity Management.

7. Snapshot Create and Restore is the method available to synchronize the content between the two Oracle Analytics Server Instances.

8. Creation and restore of Snapshots can be automated using WLST Scripting or, in future releases, using REST APIs.

9. A Snapshot Create on the Primary Oracle Analytics Server Instance and Restore on the Disaster Recovery Instance will not sync Data File content or BI Publisher JDBC Connections; these may need to be moved across Instances manually.

10. At this moment we don't have an automatic Snapshot Create and Snapshot Restore mechanism; this task must be performed manually and periodically.

11. Due to the periodic manual tasks involved in restoring the content from the Primary Oracle Analytics Server Instance to the Disaster Recovery Instance, live content sync is not possible between the Instances. We might need to live with delayed content sync.

12. Create a Load Balancer in each Region and decide on a single DNS name, e.g. oas.oracleceal.com, for both Load Balancers.

13. Get the SSL/TLS Certificate for the DNS Name and implement the same SSL Certificate to both the Load Balancers in each Region.

14. Use Oracle Cloud Infrastructure Traffic Management Steering Policy (Failover) and configure Ashburn Load Balancer as the Primary and Phoenix Load Balancer as the Disaster Recovery Load Balancer.

15. Oracle Cloud Infrastructure Traffic Management Steering Policy (Failover) maps the Primary Load Balancer IP Address to the DNS Name using “A” Record in Oracle Cloud Infrastructure Zones.

16. When the Primary Load Balancer or Oracle Analytics Server is not reachable based on the Health Check Policy defined, the Oracle Cloud Infrastructure Traffic Management Steering Policy (Failover) will remove the Primary Load Balancer IP Address from the DNS Mapping in “A” Record and maps the Disaster Recovery Load Balancer IP Address to the DNS Name in the “A” Record of the Oracle Cloud Infrastructure Zones.

17. With this configuration we achieve Disaster Recovery while making sure end users always use the same URL.

18. When the Primary Load Balancer or Oracle Analytics Server becomes reachable again, based on the Health Check Policy defined, the Oracle Cloud Infrastructure Traffic Management Steering Policy (Failover) will fall back to the Primary Load Balancer, i.e., remove the Secondary Load Balancer IP Address from the DNS mapping in the “A” Record and map the Primary Load Balancer IP Address to the DNS Name in the “A” Record of the Oracle Cloud Infrastructure Zones.

19. Before the Oracle Cloud Infrastructure Traffic Management Steering Policy (Failover) falls back to the Primary, we need to take the latest snapshot of the Secondary Oracle Analytics Server and restore it on the Primary Oracle Analytics Server.
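Steps 14 through 18 boil down to a simple steering rule: answer DNS queries with the highest-priority pool that passes its health check, and fall back when health changes. A minimal sketch of that rule (illustrative only; in practice OCI Traffic Management performs this for you):

```python
def steer(pools, health):
    """Return the IP the DNS 'A' record should answer with.

    pools:  list of (name, ip) in priority order (primary first)
    health: dict name -> bool, from the health checks
    Falls back to the last pool if nothing is healthy."""
    for name, ip in pools:
        if health.get(name, False):
            return ip
    return pools[-1][1]
```

For example, with Ashburn as primary and Phoenix as DR, queries resolve to the Ashburn load balancer IP while its health check passes, and to the Phoenix IP otherwise.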

Oracle Analytics Server Disaster Recovery Architecture and Configuration across Different Oracle Cloud Infrastructure (OCI) Regions


Oracle Analytics Server Instance configured with Load Balancer in the Front End.

In this Blog, we have created Oracle Analytics Server Instance in a Private Subnet and Load Balancer in a Public Subnet of the VCN.

Demonstrating Disaster Recovery Architecture using Oracle Cloud Infrastructure Traffic Management Steering Policy (Failover)

You can also create Oracle Analytics Server Instance and Load Balancer in the same Public Subnet of the VCN.

Oracle Cloud Infrastructure (OCI) Traffic Management Steering Policy

For Disaster Recovery we use Failover Policy Type

◉ Failover:  Use Oracle Cloud Infrastructure Health Checks to determine the health of answers. If the primary answer is determined to be unhealthy, DNS traffic is automatically steered to the secondary answer.

Subscribe for Secondary Oracle Cloud Infrastructure (OCI) Region (Phoenix)

We have two regions, Ashburn and Phoenix, subscribed for the Oracle Cloud Infrastructure Tenancy.


On the Oracle Cloud Infrastructure (OCI) Console of Home Region (Ashburn)

  • Create a Compartment
  • Create a Virtual Cloud Network (VCN)
  • Access Control
  • Route Rules
  • Create Oracle Analytics Server Instance in the Private Subnet of the VCN
  • Generate SSL Certificate for Load Balancer for the desired DNS Name
  • Create a Load Balancer with Reserved Public IP Address on the Public Subnet of the VCN
  • Configure the Load Balancer to the backend Oracle Analytics Server
  • Get a Domain from Domain Providers like GoDaddy based on the DNS Name
  • Create Public Zone
  • Create Oracle Cloud Infrastructure Traffic Management Steering Policy

On the Oracle Cloud Infrastructure Console Secondary Region (Phoenix)

  • Use Existing Compartment created in Home Region Ashburn
  • Create a Virtual Cloud Network (VCN)
  • Access Control
  • Route Rules
  • Create Oracle Analytics Server Instance in the Private Subnet of the VCN
  • Use the already generated same SSL Certificate for Load Balancer
  • Create a Load Balancer with Reserved Public IP Address on the Public Subnet of the VCN
  • Configure the Load Balancer to the backend Oracle Analytics Server
  • Use Existing Public Zone created in Home Region, no steps required here
  • Use Existing Oracle Cloud Infrastructure Traffic Management Steering Policy, no steps required here

Let us verify that in both Oracle Cloud Infrastructure Regions (Ashburn and Phoenix) we have an Oracle Analytics Server front end with a Load Balancer, and that the Load Balancer is configured with the same SSL Certificate and Hostname as the DNS Name, e.g. oas.oracleceal.com.

Oracle Analytics Server Compute Instance

Ashburn


Phoenix


Load Balancer

Ashburn


Phoenix


Load Balancer Backend Set Health Check


How to find the Status Code

From any Bastion Server or Windows Server (on the Public Subnet) that can access the Oracle Analytics Server (on the Private Subnet), run the command below:

curl -k -vvv http://<Oracle Analytics Server Private IP Address>:<Port No>/dv

e.g.  curl -k -vvv http://10.0.1.253:9502/dv  

Two Load Balancer URLs for Oracle Analytics Server (Primary and Disaster Recovery)

Ashburn:   https://129.213.208.51/dv

Phoenix:    https://129.153.95.118/dv

Oracle Cloud Infrastructure Traffic Management Steering Policy (Failover)

To work with the Oracle Cloud Infrastructure Traffic Management Steering Policies, you need a delegated Zone in Oracle Cloud Infrastructure DNS. Oracle is not a registrar, so you need a domain; get one if you don’t have one.

Get a Domain e.g. oracleceal.com from Domain Provider like GoDaddy


GoDaddy uses its own NameServers


You can create an “A” Record at GoDaddy DNS Management mapping the Load Balancer Reserved IP Address to the required Sub Domain, e.g. oas.oracleceal.com, but we need to delegate the Domain to the Oracle Cloud Zone.

To delegate the Domain, we should create a DNS Zone in Oracle Cloud Infrastructure for that Domain and use the Oracle Cloud Infrastructure NameServers at the Domain Provider, e.g. GoDaddy.

Create Public Zone on the Oracle Cloud Infrastructure (OCI) Home Region (e.g. Ashburn)

In the Oracle Cloud Infrastructure Console, navigate to Networking → DNS Management → Zones → Create Public Zone


Obtain the Public Zone Name Server hostnames

Add these Oracle NameServers at your Domain Provider, e.g. GoDaddy.

Log in to the Domain Provider portal and change the NameServers to those of the DNS Zone created in Oracle Cloud Infrastructure.


Create Oracle Cloud Infrastructure Traffic Management Steering Policy (Failover)

Log in to the Oracle Cloud Infrastructure Console as an Administrator and select the Home Region, e.g. Ashburn.

Navigate to Networking → under DNS Management, select Traffic Management Steering Policies


Select the Compartment as “oasmp”.

Click on Create Traffic Management Steering Policy

Select the Policy Type as Failover


Create Answer Pools (Pool 1 - Ashburn, Pool 2 - Phoenix)


Create the Pool Priority (which Pool should be the Primary Instance and which the Disaster Recovery Instance, based on a Health Check failure of the Primary Instance)


Create a Health Check


Click on Show Advanced Options


Select the Compartment where Oracle Cloud Infrastructure Public Zone is created. e.g. oacdr


Test the access to Primary Oracle Analytics Server Instance.

https://oas.oracleceal.com/dv


Test the Oracle Cloud Infrastructure Traffic Management Steering Failover from Primary to Disaster Recovery Load Balancer, by stopping services at Primary Oracle Analytics Server Instance.

https://oas.oracleceal.com/dv


LIMITATION: Since the content is replicated using Snapshots and not via common Database RCU Schemas, we suggest taking a snapshot of the DR Instance (i.e., the secondary Oracle Analytics Server Instance) after a failover from Primary to Secondary and restoring it on the Primary Oracle Analytics Server Instance before allowing Users to access the Primary Instance through the same URL.

Migrate Metadata and Content between Primary and Disaster Recovery Oracle Analytics Server Instances

Check this Section in Oracle Analytics Server Documentation, Move Oracle Analytics Server Between Environments

Migrating Content using Snapshots

A snapshot captures the state of your environment at a point in time.

Exporting Snapshot in Primary Oracle Analytics Server Instance using script:

Run the exportarchive command to create a BAR file:

$DOMAIN_HOME/bitools/bin/exportarchive.sh <service instance key> <export directory> encryptionpassword=<password>

Example: ./exportarchive.sh ssi /tmp encryptionpassword=Admin123

Result:


Importing Snapshot in Disaster Recovery Oracle Analytics Server Instance using script:

Copy the snapshot created in Primary Oracle Analytics Server Instance to Disaster Recovery Oracle Analytics Server Instance.

Run the importarchive command to restore the BAR file:

$DOMAIN_HOME/bitools/bin/importarchive.sh <service instance key> <location of BAR file> encryptionpassword=<password>

Example: ./importarchive.sh ssi /tmp/ssi.bar encryptionpassword=Admin123
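Per step 8 earlier, snapshot creation and restore can be scripted for periodic runs. The helper below only assembles the exportarchive.sh/importarchive.sh command lines shown in this post (the domain home path, the service instance key ssi, and the password are placeholders from the examples); the resulting lists can then be passed to subprocess.run() on the respective hosts, with the BAR file copied from Primary to DR in between.

```python
import os

def export_cmd(domain_home, instance_key, export_dir, password):
    """Command line for exportarchive.sh, as shown above."""
    return [os.path.join(domain_home, "bitools", "bin", "exportarchive.sh"),
            instance_key, export_dir, f"encryptionpassword={password}"]

def import_cmd(domain_home, instance_key, bar_file, password):
    """Command line for importarchive.sh, as shown above."""
    return [os.path.join(domain_home, "bitools", "bin", "importarchive.sh"),
            instance_key, bar_file, f"encryptionpassword={password}"]
```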

Result:


Snapshot Exclusions:

There are a few items that aren't included in a snapshot:

  • System settings - Any properties that you configured on the System Settings page.
  • Snapshot list - The list of snapshots that you see on the Snapshot page.
  • BI Publisher JDBC connections.
  • Data that's hosted on external data sources.

Migrating File-based Data

Snapshots don’t include the file-based data used for creating data sets in Oracle Analytics Server.

To migrate file-based data, you must export your data files to an archive file in the Primary Oracle Analytics Server Instance and import the archive file into the Disaster Recovery Oracle Analytics Server Instance.

Export Data files from Primary Oracle Analytics Server Instance

  • Navigate to ORACLE_HOME/bi/modules/oracle.bi.publicscripts
  • Copy migrate_datafiles.py to migrate_datafiles_export.py
  • Update migrate_datafiles_export.py as follows

domain_home = topology.get_domain_home()

oracle_home = topology.get_oracle_home()

wlst_path = topology.get_wlst_path()

internal_scripts_path = os.path.join(topology.get_publicscripts_path(), 'internal')

To

domain_home = '[PATH_DOMAIN_HOME]'

oracle_home = '[PATH_ORACLE_HOME]'

wlst_path = '[PATH_WLST.SH]'

internal_scripts_path = '[PATH_INTERNAL_SCRIPTS]'

Example:

domain_home = "/u01/fmw/user_projects/domains/bi"

oracle_home = "/u01/fmw/bi"

wlst_path = "/u01/fmw/oracle_common/common/bin/wlst.sh"

internal_scripts_path = "/u01/fmw/bi/modules/oracle.bi.publicscripts/internal"

Export the data files to an archive file in the Primary Oracle Analytics Server Instance using the following command:

python migrate_datafiles.py /tmp/DATAFILES export --logdir=/tmp

Result:


Import Data files to Disaster Recovery Oracle Analytics Server Instance

  • Copy the DATAFILES archive file from the source environment to the target environment.
  • Navigate to ORACLE_HOME/bi/modules/oracle.bi.publicscripts
  • Copy migrate_datafiles.py to migrate_datafiles_import.py
  • Update migrate_datafiles_import.py as follows

domain_home = topology.get_domain_home()

oracle_home = topology.get_oracle_home()

wlst_path = topology.get_wlst_path()

internal_scripts_path = os.path.join(topology.get_publicscripts_path(), 'internal')

To

domain_home = '[PATH_DOMAIN_HOME]'

oracle_home = '[PATH_ORACLE_HOME]'

wlst_path = '[PATH_WLST.SH]'

internal_scripts_path = '[PATH_INTERNAL_SCRIPTS]'

Example:

domain_home = "/u01/fmw/user_projects/domains/bi"

oracle_home = "/u01/fmw/bi"

wlst_path = "/u01/fmw/oracle_common/common/bin/wlst.sh"

internal_scripts_path = "/u01/fmw/bi/modules/oracle.bi.publicscripts/internal"

Import the data files from the archive file into the Disaster Recovery Oracle Analytics Server Instance using the following command:

python migrate_datafiles.py /tmp/DATAFILES import --logdir=/tmp

Result:


Migrating BI Publisher JDBC Connections:

Snapshots don’t include BI Publisher JDBC Connections; hence, copy the datasources.xml file from the Primary Oracle Analytics Server Instance to the Disaster Recovery Oracle Analytics Server Instance.

Path: /fmw/user_projects/domains/bi/config/fmwconfig/biconfig/bipublisher/Admin/DataSource/datasources.xml

Note: Once the JDBC Connections are migrated, you need to re-enter the passwords in the Disaster Recovery Oracle Analytics Server Instance and submit/apply the changes.

Source: oracle.com