Monday, April 29, 2024

Where is the Complexity? Part 2

Introduction


In my previous blog post, I described how a simple Teller microservice that transfers funds from one account to another could be implemented as REST services in Spring Boot. What I didn’t cover were all the ways even a simple application like this can fail; I only mentioned a couple that could have been handled. To deal with failures such as network outages, service crashes, and database errors, one option is to use a distributed transaction pattern such as the popular Saga pattern. In this post, I’ll examine what it takes to move from the simplistic version of these microservices, with minimal or no error handling, to one that leverages Eclipse MicroProfile Long Running Actions. As you’ll see, adopting Sagas adds considerable complexity to the application code. Ideally a distributed transaction pattern should simplify the developer’s life, not complicate it. Continue reading to see how much complexity the developer takes on in using Sagas.

To use the Saga pattern with Eclipse MicroProfile Long Running Actions (LRA), the developer must keep track of information that is essentially unrelated to the business problem being solved. The basic flow for the LRA in this use case is:


  1. Teller microservice is called to transfer X from account A in department 1 to account B in department 2
  2. Teller microservice begins an LRA. This causes LRA headers to be included in subsequent calls
  3. Teller microservice calls withdraw on department1 for account A
    1. Under the covers, department 1 sees the LRA headers and joins the LRA by enlisting with the LRA coordinator
    2. Department 1 withdraws X from account A and returns to the teller microservice
  4. Teller microservice calls deposit on department 2 for account B
    1. Under the covers, department 2 sees the LRA headers and joins the LRA by enlisting with the LRA coordinator
    2. Department 2 deposits X into account B and returns to the teller microservice
  5. Teller microservice closes the LRA which causes
    1. LRA Coordinator to call back to the complete endpoint for each joined participant
    2. Each participant’s complete endpoint then performs any final steps for the LRA; more on this later
  6. Teller microservice returns success

Handling errors

To handle the various failures that can occur, the following additional steps may be needed:

  1. If the withdraw request fails, teller should cancel the LRA and return failure to its caller
    1. This in turn will cause the LRA coordinator to call back to each enrolled participant at their compensate endpoint, which at this point is only the initiator
    2. The initiator is called back on its compensate endpoint if one is defined, at which point it has nothing to do. The sample code doesn’t define compensate or complete callbacks for the teller, as the teller maintains no state and therefore has nothing to compensate or complete.
  2. Likewise, if the deposit request to account B fails, teller should cancel the LRA.
    1. This in turn will cause the LRA coordinator to call back to each enrolled participant at their compensate endpoint, which at this point is likely just the initiator and department 1. It might also include department 2 if department 2 had a chance to join before failing to deposit.
      1. The initiator will get called back on its compensate endpoint if defined, at which point it has nothing to do
      2. Department 1 will get called back on its compensate endpoint, at which point it needs to figure out how to compensate the previous withdraw operation. This is where the fun begins. More on this shortly!
      3. Department 2 may also get called back on its compensate endpoint and go through the same drill as department 1.
  3. If both withdraw and deposit succeed, the teller should close the LRA. This causes the LRA coordinator to:
    1. Call the completion endpoint on the teller
      1. The teller really has nothing to do at this point
    2. Call the completion endpoint on department 1
      1. Department 1’s completion endpoint needs to clean up any bookkeeping information it has on the LRA
    3. Call the completion endpoint on department 2
      1. Department 2’s completion endpoint needs to clean up any bookkeeping information it has on the LRA

And we’re done! Well, sort of. Let’s examine in more detail the “fun” and the bookkeeping mentioned above. When a participant is called at its complete or compensate endpoint, all that is passed is the LRA id in the REST headers. What it means to complete or compensate the transaction is entirely up to the application code provided by the developer. Typically this means creating some sort of log or journal to record the changes made as part of the LRA, so that the microservice knows what needs to be done to complete or compensate the transaction.
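The Journal entity used for this bookkeeping is not shown in the snippets below. Based on how it is constructed and queried later in this post, a minimal sketch might look like the following JPA entity; the field names are inferred and the actual MicroTx sample may differ:

// Hypothetical sketch of the journal entry; field names mirror how Journal is used below.
@Entity
public class Journal {

    @Id
    @GeneratedValue
    private Long id;

    private String journalType;   // WITHDRAW or DEPOSIT
    private String accountId;     // account affected by the LRA
    private double journalAmount; // amount withdrawn or deposited
    private String lraId;         // LRA this entry belongs to
    private String lraState;      // ParticipantStatus: Active, Completed, Compensated, ...

    protected Journal() {         // required by JPA
    }

    public Journal(String journalType, String accountId, double journalAmount,
                   String lraId, String lraState) {
        this.journalType = journalType;
        this.accountId = accountId;
        this.journalAmount = journalAmount;
        this.lraId = lraId;
        this.lraState = lraState;
    }

    // getters and setters omitted for brevity
}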

Here is what the updated Teller microservice code in TransferResource.java looks like:

    @Autowired
    @Qualifier("MicroTxLRA")
    RestTemplate restTemplate;

    @Value("${departmentOneEndpoint}")
    String departmentOneEndpoint;

    @Value("${departmentTwoEndpoint}")
    String departmentTwoEndpoint;

    @RequestMapping(value = "transfer", method = RequestMethod.POST)
    @LRA(value = LRA.Type.REQUIRES_NEW, end = true, cancelOnFamily = {HttpStatus.Series.CLIENT_ERROR, HttpStatus.Series.SERVER_ERROR})
    public ResponseEntity<?> transfer(@RequestBody Transfer transferDetails,
                                      @RequestHeader(LRA_HTTP_CONTEXT_HEADER) String lraId) {

        LOG.info("Transfer initiated: {}", transferDetails);
        try {
            ResponseEntity<String> withdrawResponse = withdraw(transferDetails.getFrom(), transferDetails.getAmount());
            if (!withdrawResponse.getStatusCode().is2xxSuccessful()) {
                LOG.error("Withdraw failed: {} Reason: {}", transferDetails, withdrawResponse.getBody());
                return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                        .body(new TransferResponse("Withdraw failed. " + withdrawResponse.getBody()));
            }

            ResponseEntity<String> depositResponse = deposit(transferDetails.getTo(), transferDetails.getAmount());
            if (!depositResponse.getStatusCode().is2xxSuccessful()) {
                LOG.error("Deposit failed: {} Reason: {} ", transferDetails, depositResponse.getBody());
                return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                        .body(new TransferResponse("Deposit failed"));
            }
        } catch (Exception e) {
            LOG.error("Transfer failed with exception {}", e.getLocalizedMessage());
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                    .body(new TransferResponse("Transfer failed. " + e.getLocalizedMessage()));
        }
        LOG.info("Transfer successful: {}", transferDetails);
        return ResponseEntity
                .ok(new TransferResponse("Transfer completed successfully"));
    }

You’ll notice that we’ve switched the RestTemplate to one provided by MicroTx, which installs filters that automate things like propagating the MicroTx transaction headers. We added the @LRA annotation to indicate that a new LRA needs to be started, and we removed the redepositWithdrawnAmount method, since the LRA will ensure corrective action takes place by eventually calling department 1’s compensate endpoint. Finally, because of the “end = true” option in the @LRA annotation, the LRA is automatically closed (or canceled if the return status is a client or server error, 4xx or 5xx) when the transfer method completes.
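To give a feel for what that MicroTx RestTemplate does under the covers, here is a purely conceptual sketch of an interceptor that propagates the LRA context header on outgoing calls. MicroTx ships its own filters, so you do not write this yourself, and the currentLraId lookup below is a hypothetical placeholder:

// Conceptual sketch only: illustrates propagating the LRA context header on outgoing calls.
public class LraContextPropagatingInterceptor implements ClientHttpRequestInterceptor {

    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] body,
                                        ClientHttpRequestExecution execution) throws IOException {
        String lraId = currentLraId();   // hypothetical lookup of the active LRA id
        if (lraId != null) {
            // LRA_HTTP_CONTEXT_HEADER is the standard "Long-Running-Action" header
            request.getHeaders().add(LRA_HTTP_CONTEXT_HEADER, lraId);
        }
        return execution.execute(request, body);
    }

    private String currentLraId() {
        // Placeholder: a real filter obtains the active LRA id from the incoming request context.
        return null;
    }
}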

So far, the changes are minimal. Let’s see what changes need to be made to the department deposit and withdraw services. In the first blog post, these services were contained in the AccountResource.java file. In this post, we’ve split that class into 3 separate classes (AccountAdminService, DepositService, and WithdrawService) as we want to register completion and/or compensation callbacks that are unique to the withdraw or deposit services.

Let’s look at the changes to how the withdraw service is implemented in WithdrawService.java: 

@RestController
@RequestMapping("/withdraw")
public class WithdrawService {

    private static final Logger LOG = LoggerFactory.getLogger(WithdrawService.class);

    @Autowired
    IAccountOperationService accountService;

    @Autowired
    JournalRepository journalRepository;

    @Autowired
    AccountTransferDAO accountTransferDAO;

    @Autowired
    IAccountQueryService accountQueryService;

    /**
     * cancelOnFamily attribute in @LRA is set to empty array to avoid cancellation from participant.
     * As per the requirement, only the initiator can trigger cancel, while the participant returns the appropriate HTTP status code to the initiator
     */
    @RequestMapping(value = "/{accountId}", method = RequestMethod.POST)
    @LRA(value = LRA.Type.MANDATORY, end = false, cancelOnFamily = {})
    public ResponseEntity<?> withdraw(@RequestHeader(LRA_HTTP_CONTEXT_HEADER) String lraId,
                                      @PathVariable("accountId") String accountId, @RequestParam("amount") double amount) {
        try {
            this.accountService.withdraw(accountId, amount);
            accountTransferDAO.saveJournal(new Journal(JournalType.WITHDRAW.name(), accountId, amount, lraId, ParticipantStatus.Active.name()));
            LOG.info(amount + " withdrawn from account: " + accountId);
            return ResponseEntity.ok("Amount withdrawn from the account");
        } catch (NotFoundException e) {
            return ResponseEntity.status(HttpStatus.NOT_FOUND).body(e.getMessage());
        } catch (UnprocessableEntityException e) {
            LOG.error(e.getLocalizedMessage());
            return ResponseEntity.status(HttpStatus.UNPROCESSABLE_ENTITY).body(e.getMessage());
        } catch (Exception e) {
            LOG.error(e.getLocalizedMessage());
            return ResponseEntity.internalServerError().body(e.getLocalizedMessage());
        }
    }

As you can see, the withdraw service got more complex: we needed to introduce a journal to keep track of the changes made by an LRA. We keep this journal so that, when asked to compensate, we know how to undo the request. The withdraw service updates the account balance immediately and records the change in its journal.
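The journal bookkeeping goes through an AccountTransferDAO helper whose implementation is not shown in this post. A rough sketch of the two methods used here, assuming a Spring Data repository with a hypothetical finder method, might be:

// Hedged sketch of the journal bookkeeping helpers; the real MicroTx sample may differ.
@Component
public class AccountTransferDAO {

    @Autowired
    JournalRepository journalRepository;   // Spring Data repository for Journal entities

    public void saveJournal(Journal journal) {
        journalRepository.save(journal);
    }

    public Journal getJournalForLRAid(String lraId, JournalType journalType) {
        // Assumes the repository declares a finder such as findByLraIdAndJournalType
        return journalRepository.findByLraIdAndJournalType(lraId, journalType.name());
    }

    // The completion/compensation helpers shown later in this post presumably live here too.
}

With that in mind, here is the rest of the WithdrawService class: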

    /**
     * Update LRA state. Do nothing else.
     */
    @RequestMapping(value = "/complete", method = RequestMethod.PUT)
    @Complete
    public ResponseEntity<?> completeWork(@RequestHeader(LRA_HTTP_CONTEXT_HEADER) String lraId) {
        LOG.info("withdraw complete called for LRA : " + lraId);
        Journal journal = accountTransferDAO.getJournalForLRAid(lraId, JournalType.WITHDRAW);
        if (journal != null) {
            String lraState = journal.getLraState();
            if (lraState.equals(ParticipantStatus.Completing.name()) ||
                    lraState.equals(ParticipantStatus.Completed.name())) {
                // idempotency: if the current LRA state is already Completed, do nothing
                return ResponseEntity.ok(ParticipantStatus.valueOf(lraState));
            }
            journal.setLraState(ParticipantStatus.Completed.name());
            accountTransferDAO.saveJournal(journal);
        }
        return ResponseEntity.ok(ParticipantStatus.Completed.name());
    }

    /**
     * Read the journal and increase the balance by the amount withdrawn earlier in the LRA, restoring the pre-LRA balance
     */
    @RequestMapping(value = "/compensate", method = RequestMethod.PUT)
    @Compensate
    public ResponseEntity<?> compensateWork(@RequestHeader(LRA_HTTP_CONTEXT_HEADER) String lraId) {
        LOG.info("Account withdraw compensate() called for LRA : " + lraId);
        try {
            Journal journal = accountTransferDAO.getJournalForLRAid(lraId, JournalType.WITHDRAW);
            if (journal != null) {
                String lraState = journal.getLraState();
                if (lraState.equals(ParticipantStatus.Compensating.name()) ||
                        lraState.equals(ParticipantStatus.Compensated.name())) {
                    // idempotency: if the current LRA state is already Compensated, do nothing
                    return ResponseEntity.ok(ParticipantStatus.valueOf(lraState));
                }
                accountTransferDAO.doCompensationWork(journal);
            } else {
                LOG.warn("Journal entry does not exist for LRA : {} ", lraId);
            }
            return ResponseEntity.ok(ParticipantStatus.Compensated.name());
        } catch (Exception e) {
            LOG.error("Compensate operation failed : " + e.getMessage());
            return ResponseEntity.ok(ParticipantStatus.FailedToCompensate.name());
        }
    }

    @RequestMapping(value = "/status", method = RequestMethod.GET)
    @Status
    public ResponseEntity<?> status(@RequestHeader(LRA_HTTP_CONTEXT_HEADER) String lraId,
                                    @RequestHeader(LRA_HTTP_PARENT_CONTEXT_HEADER) String parentLRA) throws Exception {
        return accountTransferDAO.status(lraId, JournalType.WITHDRAW);
    }

    /**
     * Delete journal entry for LRA (or keep for auditing)
     */
    @RequestMapping(value = "/after", method = RequestMethod.PUT)
    @AfterLRA
    public ResponseEntity<?> afterLRA(@RequestHeader(LRA_HTTP_ENDED_CONTEXT_HEADER) String lraId, @RequestBody String status) {
        LOG.info("After LRA called for lraId : {} with status {} ", lraId, status);
        accountTransferDAO.afterLRA(lraId, status, JournalType.WITHDRAW);
        return ResponseEntity.ok().build();
    }

The remaining portion of the WithdrawService class deals with completing or compensating the LRA, exposing the department's view of the LRA status, and registering a callback for when the LRA has ended. If the LRA is canceled and the compensate endpoint is called by the coordinator, the service redeposits the withdrawn amount and sets its LRA state to compensated. If the LRA is closed, the complete endpoint is called, which has nothing to do since the funds have already been withdrawn and committed, so only its LRA state is updated. When the LRA is finally done, either completed or compensated, the coordinator makes the after callback, which here simply deletes the journal entry as it is no longer needed.
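The doCompensationWork method invoked from the compensate callback above is not shown in the sample excerpts. Mirroring the doCompleteWork method shown below for the deposit service, a hedged sketch of it might look like this: it re-credits the withdrawn amount and records the outcome in the journal.

    public void doCompensationWork(Journal journal) throws Exception {
        try {
            // Re-credit the amount that was withdrawn earlier in this LRA
            Account account = accountQueryService.getAccountDetails(journal.getAccountId());
            account.setAmount(account.getAmount() + journal.getJournalAmount());
            accountService.save(account);
            journal.setLraState(ParticipantStatus.Compensated.name());
            journalRepository.save(journal);
        } catch (Exception e) {
            journal.setLraState(ParticipantStatus.FailedToCompensate.name());
            journalRepository.save(journal);
            throw new Exception("Failed to compensate", e);
        }
    }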

Let's look at the deposit service DepositService.java:

@RestController
@RequestMapping("/deposit")
public class DepositService {

    private static final Logger LOG = LoggerFactory.getLogger(DepositService.class);

    @Autowired
    IAccountOperationService accountService;

    @Autowired
    JournalRepository journalRepository;

    @Autowired
    AccountTransferDAO accountTransferDAO;

    /**
     * cancelOnFamily attribute in @LRA is set to empty array to avoid cancellation from participant.
     * As per the requirement, only the initiator can trigger cancel, while the participant returns the appropriate HTTP status code to the initiator
     */
    @RequestMapping(value = "/{accountId}", method = RequestMethod.POST)
    @LRA(value = Type.MANDATORY, end = false, cancelOnFamily = {})
    public ResponseEntity<?> deposit(@RequestHeader(LRA_HTTP_CONTEXT_HEADER) String lraId,
                                     @PathVariable("accountId") String accountId, @RequestParam("amount") double amount) {
        accountTransferDAO.saveJournal(new Journal(JournalType.DEPOSIT.name(), accountId, amount, lraId, ParticipantStatus.Active.name()));
        return ResponseEntity.ok("Amount deposited to the account");
    }

    /**
     * Increase balance amount as recorded in journal during deposit call.
     * Update LRA state to ParticipantStatus.Completed.
     */
    @RequestMapping(value = "/complete", method = RequestMethod.PUT)
    @Complete
    public ResponseEntity<?> completeWork(@RequestHeader(LRA_HTTP_CONTEXT_HEADER) String lraId) {
        try {
            LOG.info("deposit complete called for LRA : " + lraId);
            Journal journal = accountTransferDAO.getJournalForLRAid(lraId, JournalType.DEPOSIT);
            if (journal != null) {
                String lraState = journal.getLraState();
                if (lraState.equals(ParticipantStatus.Completing.name()) ||
                        lraState.equals(ParticipantStatus.Completed.name())) {
                    // idempotency: if the current LRA state is already Completed, do nothing
                    return ResponseEntity.ok(ParticipantStatus.valueOf(lraState));
                }
                accountTransferDAO.doCompleteWork(journal);
            } else {
                LOG.warn("Journal entry does not exist for LRA : {} ", lraId);
            }
            return ResponseEntity.ok(ParticipantStatus.Completed.name());
        } catch (Exception e) {
            LOG.error("Complete operation failed : " + e.getMessage());
            return ResponseEntity.ok(ParticipantStatus.FailedToComplete.name());
        }
    }

and the doCompleteWork method looks like:

    public void doCompleteWork(Journal journal) throws Exception {
        try {
            Account account = accountQueryService.getAccountDetails(journal.getAccountId());
            account.setAmount(account.getAmount() + journal.getJournalAmount());
            accountService.save(account);
            journal.setLraState(ParticipantStatus.Completed.name());
            journalRepository.save(journal);
        } catch (Exception e) {
            journal.setLraState(ParticipantStatus.FailedToComplete.name());
            journalRepository.save(journal);
            throw new Exception("Failed to complete", e);
        }
    }

which finally adds the amount deposited to the account. If we look at the compensate callback for deposit, it just sets the LRA status in the journal to compensated:

    /**
     * Update LRA state to ParticipantStatus.Compensated.
     */
    @RequestMapping(value = "/compensate", method = RequestMethod.PUT)
    @Compensate
    public ResponseEntity<?> compensateWork(@RequestHeader(LRA_HTTP_CONTEXT_HEADER) String lraId) {
        LOG.info("Account deposit compensate() called for LRA : " + lraId);
        Journal journal = accountTransferDAO.getJournalForLRAid(lraId, JournalType.DEPOSIT);
        if (journal != null) {
            String lraState = journal.getLraState();
            if (lraState.equals(ParticipantStatus.Compensating.name()) ||
                    lraState.equals(ParticipantStatus.Compensated.name())) {
                // idempotency: if the current LRA state is already Compensated, do nothing
                return ResponseEntity.ok(ParticipantStatus.valueOf(lraState));
            }
            journal.setLraState(ParticipantStatus.Compensated.name());
            accountTransferDAO.saveJournal(journal);
        }
        return ResponseEntity.ok(ParticipantStatus.Compensated.name());
    }

    @RequestMapping(value = "/status", method = RequestMethod.GET)
    @Status
    public ResponseEntity<?> status(@RequestHeader(LRA_HTTP_CONTEXT_HEADER) String lraId,
                                    @RequestHeader(LRA_HTTP_PARENT_CONTEXT_HEADER) String parentLRA) throws Exception {
        return accountTransferDAO.status(lraId, JournalType.DEPOSIT);
    }

Finally the after callback will be triggered and it will delete the associated journal entry by calling the afterLRA method on the accountTransferDAO.

    /**
     * Delete journal entry for LRA (or keep for auditing)
     */
    @RequestMapping(value = "/after", method = RequestMethod.PUT)
    @AfterLRA
    public ResponseEntity<?> afterLRA(@RequestHeader(LRA_HTTP_ENDED_CONTEXT_HEADER) String lraId, @RequestBody String status) {
        LOG.info("After LRA Called : " + lraId);
        accountTransferDAO.afterLRA(lraId, status, JournalType.DEPOSIT);
        return ResponseEntity.ok().build();
    }

The accountTransferDAO afterLRA method is what actually deletes the journal entry:

    public void afterLRA(String lraId, String lraStatus, JournalType journalType){
        Journal journal = getJournalForLRAid(lraId, journalType);
        if (Objects.nonNull(journal) && isLRASuccessfullyEnded(lraStatus)) {
            journalRepository.delete(journal);
        }
    }
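The isLRASuccessfullyEnded helper is also not shown. Presumably it just checks whether the final status reported to the AfterLRA callback is one of the terminal LRA states, along these lines:

    private boolean isLRASuccessfullyEnded(String lraStatus) {
        // LRAStatus is the MicroProfile LRA enum: Closed means the LRA completed,
        // Cancelled means it was compensated; both are treated as successful terminal outcomes here.
        return LRAStatus.Closed.name().equals(lraStatus)
                || LRAStatus.Cancelled.name().equals(lraStatus);
    }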

As the above code shows, a lot of additional work is required of the application developer to ensure data consistency. Even with all these changes, the code doesn't handle two deposit or withdraw calls within the same LRA, because the journal implementation only records the last update.

Source: oracle.com

Friday, April 26, 2024

The 5 major benefits of ECPUs in the Oracle Autonomous Database

Oracle Autonomous Database (ADB) recently introduced a more advanced ECPU billing metric and is now retiring the legacy OCPU billing metric for Autonomous Data Warehouse and Autonomous Transaction Processing over the next 12 months. Oracle recommends switching from the OCPU billing metric to the ECPU billing metric, which does not incur any downtime or service interruption.


ECPUs will provide the same great price-performance as OCPUs and continuous improvements in price-performance over time. Updating to the ECPU billing metric provides the following benefits:

  • 50% lower entry cost: The smallest Autonomous Database that can be provisioned with ECPUs is 50% less expensive ($0.672 per hour vs $1.3441 per hour with OCPUs)
  • Finer granularity for database scaling: Each incremental increase in ECPU database size is only $0.336
  • Lower storage costs: Autonomous Data Warehouse storage price is reduced from $118.40 to $25.00 per TB per month, and Autonomous Transaction Processing storage can be provisioned in increments of 1GB (this is a huge thing!), with a minimum of 20GB. This brings the ADW in-database storage price on par with object storage, so data lakes can be designed around architectural requirements rather than cost
  • Up to 87% lower costs with database consolidation: Elastic Resource Pools, available on ECPU ADB Serverless databases, help consolidate deployments leading to major cost savings
  • New features for Autonomous Database may only be available with ECPUs

Note that the prices mentioned above are the current list prices for Autonomous Databases with the License Included license type.


There are also differences in backups between OCPUs and ECPUs. ECPU backup storage is billed separately, and the backup retention period can be set between 1 and 60 days. With OCPUs, 60 days of backup storage is included in the storage price. The customer-controllable ECPU backup retention is beneficial because customers can now control the backup storage size and further reduce the cost of dev/test environments. In my case, I reduced the retention from 60 days to 31 days (and later to 7 days).


I scaled down my database in two phases: (1) I switched to the ECPU model (1 OCPU -> 4 ECPUs), and then (2) reduced the ECPU count from 4 to 2 and the storage from 1024GB to 20GB (both in one go, with no downtime).


Here are some general guidelines related to the new ECPU metric:

  1. Provision all new Autonomous Data Warehouse and Autonomous Transaction Processing databases or clones with the ECPU billing metric
  2. Update all existing databases to the ECPU billing metric, which is a simple and seamless button click or API call
  3. Note that if you choose not to update your existing databases’ billing metric at this time, Oracle may convert your databases from the OCPU billing metric to the ECPU billing metric in the future

Updating your Autonomous Database Serverless to the ECPU billing metric has no impact on your service and incurs no downtime. Oracle Autonomous Database will be retiring the OCPU-based SKUs and replacing them with ECPU-based SKUs. Starting in August 2023, some new Autonomous Database features may be available only on ECPUs. For example, Elastic Resource Pools are only available with ECPUs.

Note that ECPUs have already been introduced for MySQL HeatWave on AWS, and other services may offer ECPUs in the future.


Source: juliandontcheff.wordpress.com

Wednesday, April 24, 2024

Performance Tuning in the Oracle Autonomous Database: Creating Indexes for JSON Collections

The Oracle Database and in particular the Oracle Autonomous Database offers multiple technologies to accelerate queries and improve database performance over JSON data, including indexes, materialized views, in-memory column storage, and Exadata storage-cell pushdown.

Simple Oracle Document Access (SODA) is a set of NoSQL-style APIs for storing, editing and deleting JSON documents. A SODA collection is a set of documents (in particular JSON) that is backed by an Oracle Database table or view. By default, creating a SODA document collection creates the following in the Oracle Database:

◉ Persistent default collection metadata.
◉ A table for storing the collection, in the database schema to which your SODA client is connected.

If the init.ora parameter compatible is at least 20, then SODA uses the JSON data type by default for JSON content, and the version method is UUID. If compatible is less than 20, then SODA stores JSON content by default as textual data in a BLOB column (the data is character data).

Collection metadata is composed of multiple components. The kind of database you use determines the collection metadata that is used by default, and which of its field values you can modify for custom metadata.

Let us consider a collection of movies within the ADMIN schema in a 19c Oracle Autonomous JSON Database (think of one JSON file per movie). After opening JSON from Database Actions, we will create an index on the MOVIES collection.


Creating an index for a JSON collection requires a name and a type. The available types are Functional, Spatial, and Search.

Before creating a functional index on the runtime field (we will be running order-by queries against it), let us look at the fields to enter for each index type:

  • For a functional type index, the fields to enter are:
    • Unique: make all indexed values unique
    • Index Nulls: use the index in order-by queries
    • Path Required: the path must select a scalar value, even a JSON null value
    • Properties: select the property that you want to index on, or easier is to just type * to display all available document properties in the collection
    • Composite Index: use more than one property
    • Advanced: change the storage properties of the indexed property
  • For search index, the options are:
    • Dataguide off-on: create JSON data guide for collection
    • Text Search off-on: index all properties in documents to support full-text search based on string equality
    • Range Search off-on: choose on to support range search when string-range search or temporal search (equality or range) is required
  • Spatial index is used to index GeoJSON geographic data. The selected property should be of GeoJSON type.
    • Path Required: the path must select a value, even if it is a JSON null value
    • Lax: the targeted field does not need to be present or does not have a GeoJSON geometry object as its value


The properties of the selected index appear in JSON format below the listed indexes.


We can also index a singleton scalar field using SQL; here is how:

CREATE INDEX YEAR_NDX ON movies (json_value(data, '$.year.number()' ERROR ON ERROR));

Normally, item method numberOnly() would be used in the path expression that identifies the field to index, to ensure that the field value is numeric; numberOnly() is preferred over number() because number() also allows conversion of non-numeric fields to numbers. However, as the year field in one of my JSON documents is stored as a string, using numberOnly() raises ORA-01722: invalid number, which is why the statement above uses number(). And since several movies were produced in the same year, I cannot make the index unique: ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found.

Creating a relational view of the JSON collection is simple: we select the columns we need, say title, year, and runtime.


In SQL, we can use the view and order the movies by runtime.
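For example, assuming the view created above is named MOVIE_VIEW (the actual name depends on what you chose when creating it), the query might be:

-- Order the movies by runtime using the relational view over the JSON collection
SELECT title, year, runtime
FROM   movie_view
ORDER  BY runtime;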


Regardless of your database release you can create whatever Oracle Database indexes you need directly, using:

(1) the JSON page of Database Actions (see Creating Indexes for JSON Collections)

(2) Simple Oracle Document Access (SODA)

(3) SQL — see Indexes for JSON Data in Oracle Database JSON Developer’s Guide.

Using the JSON page in ADB is perhaps the easiest approach to indexing JSON data.
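As a hedged illustration of option (3), and assuming the collection is backed by a table named movies with a JSON column named data (as in the json_value example above), a search index and a second functional index could be created like this:

-- Full-text / search index over the entire JSON column
CREATE SEARCH INDEX MOVIES_SEARCH_NDX ON movies (data) FOR JSON;

-- Functional index on the runtime field, analogous to the year index above
-- (assumes runtime is stored as a number in every document)
CREATE INDEX RUNTIME_NDX ON movies (json_value(data, '$.runtime.numberOnly()' ERROR ON ERROR));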

The static data dictionary view USER_SODA_COLLECTIONS lists the basic features of all of your SODA collections. Correspondingly, there are also DBA_SODA_COLLECTIONS and ALL_SODA_COLLECTIONS. Collection metadata, expressed in JavaScript Object Notation (JSON), can be obtained from the last column, called JSON_DESCRIPTOR.

Users will typically work with JSON collections using native language drivers, for example, SODA for Java or SODA for Python. SODA native language drivers generally provide more throughput (operations per second) than the REST driver (SODA for REST). It is recommended to configure the SODA drivers as follows (a short code sketch follows the list):

  • Enable SODA Metadata Cache: The SODA driver needs to know the metadata of each JSON collection (the column names, types, etc.). By enabling the metadata cache, roundtrips to the database can be saved, improving latency and throughput.
  • Enable Statement Cache: Statement caching improves performance by caching executable statements that are used repeatedly, such as in a loop or in a method that is called repeatedly. For Java, the statement cache is enabled using JDBC.
  • For load-balanced systems: turn off DNS caching: Load balancing allows SODA operations to be distributed across different nodes. If DNS caching is turned on, then all connections are likely to use the same node, nullifying the load balancing. For Java, the following system property should be set: inet.addr.ttl=0
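Here is a hedged Java sketch of the first two recommendations above, using SODA for Java and the Oracle JDBC driver; the property and method names should be checked against the documentation for your driver version:

// url, user and password are defined elsewhere
Properties props = new Properties();
props.put("oracle.soda.sharedMetadataCache", "true");      // enable the SODA metadata cache

OracleRDBMSClient client = new OracleRDBMSClient(props);

OracleConnection conn = (OracleConnection) DriverManager.getConnection(url, user, password);
conn.setImplicitCachingEnabled(true);                       // enable JDBC statement caching
conn.setStatementCacheSize(50);

OracleDatabase db = client.getDatabase(conn);
OracleCollection movies = db.openCollection("movies");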

The database performance tuning techniques also apply to SODA: for example, SODA collections can be partitioned or sharded, and queries can be accelerated using indexes and/or materialized views.


How to monitor all that? You can turn on performance monitoring of the SQL operations that underlie a SODA read or write operation, by adding a SQL hint to the SODA operation.

Use only the hints MONITOR (turn on monitoring) or NO_MONITOR (turn off monitoring). This mechanism can technically be used to pass any SQL hint, but MONITOR and NO_MONITOR are the useful ones for SODA, and an inappropriate hint can cause the optimizer to produce a suboptimal query plan.

Note in addition that when using SODA with Autonomous Database the following restrictions apply:

  • Automatic indexing is not supported for SQL and PL/SQL code that uses the SQL/JSON function json_exists.
  • Automatic indexing is not supported for SODA query-by-example (QBE)

Final note: since Auto Indexing is disabled by default in the Autonomous Database, you can enable it by running: EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');

Source: juliandontcheff.wordpress.com

Monday, April 22, 2024

Oracle Database API for MongoDB - Best Practices


The Oracle Database API for MongoDB (hereafter MongoDB API) allows you to access Oracle Databases from MongoDB-compatible programs and drivers simply by modifying the access URI of your program or application.

There are two ways to use the Oracle Database API for MongoDB (or MongoDB API):

1. As part of a fully managed Autonomous Database Serverless.
2. Using a standalone ORDS server connected to an Autonomous Database Dedicated/CC or an unmanaged Oracle Database, on-premises or in the Cloud.

As a fully managed service, the first option requires no configuration (beyond turning the feature on) nor any maintenance from a customer perspective. This document will therefore concentrate on the second option, where the customer manages the deployment and support of the MongoDB API.

MongoDB API architecture


MongoDB programs and drivers communicate with their database(s) over a socket connection, usually using port 27017. A URI contains all the information needed to define the server connection and typically takes the form

mongodb://[user:pass@]servername:27017/databasename?parameters

In the case of a MongoDB database, the database server itself listens on port 27017 and acts on incoming requests. Oracle MongoDB API provides the same transparent connection through Oracle REST Data Services (ORDS), so all you need to do is to change your connection string. No additional port openings or firewall changes are required; all you need is network connectivity to your ORDS installation, on-premises, or in the Cloud. 

The MongoDB API is an integral part of ORDS. It rewrites MongoDB requests into appropriate Oracle SQL or SODA operations and passes them to the Oracle database server over SQL*Net. Storing your documents centrally in an Oracle Database enables you to provide MongoDB compatibility with your data stored and managed in the most mature, mission-critical Oracle Database and enhance your capabilities to work with the data beyond pure document-store APIs, such as full SQL/JSON access for analytics.
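To illustrate the "only the connection string changes" point, here is a hedged sketch using the standard MongoDB Java driver against an ORDS-hosted endpoint. The host name, credentials, and URI query parameters below are placeholders; check the MongoDB API documentation for the exact connection-string options your deployment requires:

// Hypothetical connection string pointing at the ORDS-hosted MongoDB API endpoint
String uri = "mongodb://scott:tiger@ords-host:27017/scott?authMechanism=PLAIN&authSource=%24external&tls=true&retryWrites=false&loadBalanced=true";

try (MongoClient client = MongoClients.create(uri)) {
    MongoDatabase db = client.getDatabase("scott");
    MongoCollection<Document> movies = db.getCollection("movies");
    movies.insertOne(new Document("title", "Example").append("year", 2024));
    System.out.println("Documents in collection: " + movies.countDocuments());
}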


You need to deploy ORDS as a standalone application, meaning that the ORDS process spins up its own Jetty Web Server. Such a deployment has the advantage of requiring only Java and a minimal resource footprint.

The following defines some best practices for deploying the MongoDB API for your applications.

I have an Autonomous Database Serverless. Is there any reason to use a standalone ORDS server?


No.

In most cases, the built-in MongoDB API support is sufficient, and you don’t want to deploy and manage anything yourself. However, you may get more tunability and performance under certain challenging conditions by utilizing a standalone ORDS server on an OCI VM or a machine outside of OCI. If you decide to deploy your own ORDS, you can still use the MongoDB API interface provided by the Autonomous Database Service in parallel, for example, for testing or development purposes.

Can I use WebLogic Server or Apache Tomcat?


No.

Although ORDS itself can run standalone or as part of WebLogic Server (WLS) or Tomcat, the MongoDB API is currently only supported through standalone ORDS deployments.

Where should I run ORDS?


The location of where to deploy ORDS is primarily a decision of your architectural preference and how you leverage ORDS in your enterprise. It also depends on whether ORDS is just a means for providing the MongoDB API or is also leveraged for other ORDS functionality, such as serving as a REST Server or as a backend for Oracle APEX. Depending on your use case, you can either:

◉ Deploy ORDS centrally as a separate component.
◉ Consider ORDS part of your application that requires MongoDB API compatibility.
 
When working with ORDS, where the sole purpose is to provide the MongoDB API, it is usually best to run the ORDS server in the application node next to your application. When you have multiple application servers on individual nodes, you can run an ORDS instance with each application server. That way:

◉ The MongoDB API scales with the number of application server nodes.
◉ You only need to size the resources for each ORDS installation based on the requirements of a single application node.
◉ There’s no need to keep other ports open between your application node and the ORDS node. The only connection port required to communicate with the Oracle Database is the port for your SQL*Net connection.
◉ You don’t need to maintain separate hardware or containers for the ORDS nodes.
◉ Assuming you have a high-availability and/or load-balancing solution for your application nodes, that will also automatically work for ORDS.

However, there can be advantages to running ORDS on a separate cluster, mainly in the case of using ORDS for more than the MongoDB API. The benefits would be:

◉ A single ORDS instance can serve multiple applications and/or databases.
◉ ORDS can be scaled independently of the application.
◉ You can maintain a shared connection pool.

When deploying ORDS as a separate central component in a multi-tenant fashion, you need to ensure that:

◉ Your HA requirements must be met with multiple deployments and a mechanism to transparently work with any of your deployed ORDS instances, for example, through Load Balancers in a Cloud environment.
◉ You need to size the resources required for ORDS based on the aggregated requirements of all ORDS-supported services.

Are there other recommendations for installing and using Standalone ORDS?


Yes. Please see Oracle REST Data Services (ORDS) best practices, but when reading it, remember that any references to WLS or Tomcat should be ignored; any recommendation with respect to high availability applies to standalone ORDS installations. Most of these recommendations are more relevant for separate, centrally deployed ORDS environments.

Anything I need to do to achieve high throughput?


Yes. ORDS is configured with a relatively basic set of performance settings out of the box. Those might be OK for testing or small applications with light-weight utilization, but if you want high throughput, you must tune settings such as the connection pool size. Please see Section 9.7, Achieving High Performance, in the ORDS Installation and Configuration Guide.

Source: oracle.com

Friday, April 19, 2024

Unleashing the Power of Oracle Enterprise Manager Cloud Control for Oracle Exadata Cloud


Introduction: Understanding Oracle Enterprise Manager Cloud Control


Oracle Enterprise Manager Cloud Control is a comprehensive management solution that offers a unified and centralized approach to managing Oracle environments. Specifically designed for Oracle Exadata Cloud, this powerful tool empowers organizations to streamline operations, enhance performance, and optimize resources effectively.

Streamlined Management with Oracle Enterprise Manager Cloud Control


Centralized Monitoring and Administration

Oracle Enterprise Manager Cloud Control provides a centralized platform for monitoring and administering Oracle Exadata Cloud environments. With real-time insights into system performance, resource utilization, and application health, administrators gain full visibility and control over their infrastructure.

Proactive Performance Management

By leveraging advanced monitoring capabilities, Oracle Enterprise Manager Cloud Control enables proactive performance management. Administrators can identify potential issues, analyze trends, and take preemptive actions to ensure optimal performance and prevent downtime.

Automated Provisioning and Scaling

One of the key benefits of Oracle Enterprise Manager Cloud Control is its ability to automate provisioning and scaling tasks. With predefined templates and policies, administrators can effortlessly deploy new resources, scale capacity up or down as needed, and optimize resource utilization based on workload demands.

Enhanced Security and Compliance


Comprehensive Security Controls

Oracle Enterprise Manager Cloud Control offers a wide range of security controls to safeguard Oracle Exadata Cloud environments. From user authentication and access control to encryption and audit logging, administrators can enforce stringent security measures to protect sensitive data and ensure compliance with regulatory requirements.

Continuous Compliance Monitoring

With built-in compliance management features, Oracle Enterprise Manager Cloud Control enables continuous monitoring of regulatory compliance. Administrators can define and enforce compliance policies, conduct regular audits, and generate compliance reports to demonstrate adherence to industry standards and regulations.

Optimized Resource Management


Efficient Resource Allocation

Oracle Enterprise Manager Cloud Control optimizes resource management by providing tools for efficient allocation and utilization of resources. By analyzing workload patterns and performance metrics, administrators can allocate resources dynamically, optimize resource utilization, and eliminate resource bottlenecks.

Cost Optimization

By optimizing resource usage and improving operational efficiency, Oracle Enterprise Manager Cloud Control helps organizations minimize costs associated with Oracle Exadata Cloud deployment. Administrators can identify opportunities for cost savings, optimize licensing agreements, and streamline resource provisioning to achieve maximum ROI.

Conclusion: Empowering Organizations with Oracle Enterprise Manager Cloud Control

Oracle Enterprise Manager Cloud Control is a game-changing solution for organizations leveraging Oracle Exadata Cloud. From streamlined management and enhanced security to optimized resource utilization and cost savings, this powerful tool offers a comprehensive suite of features to empower organizations and drive business success.

Monday, April 15, 2024

Where is the Complexity?


One of the common arguments I hear for avoiding XA distributed transactions is their complexity. In this series of blog posts I’ll examine that claim by looking at three versions of the same application. The first version of the application ignores data consistency issues and operates as though failures essentially don’t occur. This unfortunately is a pretty common practice, due to the perceived complexity of introducing distributed transactions into an application. The second version adopts the saga pattern that is all the rage; it uses Eclipse MicroProfile Long Running Actions to implement the saga. Unlike the toy examples used to illustrate how sagas work, this version includes the logic actually needed to complete or compensate a transaction. Finally, the third version will use the XA pattern to ensure data consistency.

Basic Problem to Solve


The application I’ll use to illustrate the issues associated with ensuring data consistency transfers money from one account to another, where each account is serviced by a different microservice. Fundamentally it is a very simple application, if you ignore failures. The basic flow is that the Transfer microservice:

1. Accepts a request to transfer an amount of money from one account to another account
2. Makes a request to withdraw the amount from the first account
3. Makes a request to deposit the amount in the second account
4. Returns success


A very simple application, what could possibly go wrong?

Simplistic Application Without Considering Failures


Let’s look first at a possible simple Transfer microservice. It offers a single service named “transfer”, which transfers money from an account in one microservice (department1) to an account in a different microservice (department2). Here is a simple Spring Boot based teller service that handles transferring the money:

@RestController
@RequestMapping("/transfers")
@RequestScope
public class TransferResource {

    private static final Logger LOG = LoggerFactory.getLogger(TransferResource.class);

    @Autowired
    RestTemplate restTemplate;

    @Value("${departmentOneEndpoint}")
    String departmentOneEndpoint;

    @Value("${departmentTwoEndpoint}")
    String departmentTwoEndpoint;

    @RequestMapping(value = "transfer", method = RequestMethod.POST)
    public ResponseEntity<?> transfer(@RequestBody Transfer transferDetails) throws TransferFailedException {
        ResponseEntity<String> withdrawResponse = null;
        ResponseEntity<String> depositResponse = null;

        LOG.info("Transfer initiated: {}", transferDetails);
        try {
            withdrawResponse = withdraw(transferDetails.getFrom(), transferDetails.getAmount());
            if (!withdrawResponse.getStatusCode().is2xxSuccessful()) {
                LOG.error("Withdraw failed: {} Reason: {}", transferDetails, withdrawResponse.getBody());
                throw new TransferFailedException(String.format("Withdraw failed: %s Reason: %s", transferDetails, withdrawResponse.getBody()));
            }
        } catch (Exception e) {
            LOG.error("Transfer failed as withdraw failed with exception {}", e.getLocalizedMessage());
            throw new TransferFailedException(String.format("Withdraw failed: %s Reason: %s", transferDetails, Objects.nonNull(withdrawResponse) ? withdrawResponse.getBody() : withdrawResponse));
        }

        try {
            depositResponse = deposit(transferDetails.getTo(), transferDetails.getAmount());
            if (!depositResponse.getStatusCode().is2xxSuccessful()) {
                LOG.error("Deposit failed: {} Reason: {} ", transferDetails, depositResponse.getBody());
                LOG.error("Reverting withdrawn amount from account {}, as deposit failed.", transferDetails.getFrom());
                redepositWithdrawnAmount(transferDetails.getFrom(), transferDetails.getAmount());
                throw new TransferFailedException(String.format("Deposit failed: %s Reason: %s ", transferDetails, depositResponse.getBody()));
            }
        } catch (Exception e) {
            LOG.error("Transfer failed as deposit failed with exception {}", e.getLocalizedMessage());
            LOG.error("Reverting withdrawn amount from account {}, as deposit failed.", transferDetails.getFrom());
            redepositWithdrawnAmount(transferDetails.getFrom(), transferDetails.getAmount());
            throw new TransferFailedException(String.format("Deposit failed: %s Reason: %s ", transferDetails, Objects.nonNull(depositResponse) ? depositResponse.getBody() : depositResponse));
        }
        LOG.info("Transfer successful: {}", transferDetails);
        return ResponseEntity
                .ok(new TransferResponse("Transfer completed successfully"));
    }
    
    /**
     * Send an HTTP request to the service to re-deposit the previously withdrawn amount into the provided account identity
     *
     * @param accountId The account Identity
     * @param amount    The amount to be re-deposited
     */
    private void redepositWithdrawnAmount(String accountId, double amount) {
        URI departmentUri = UriComponentsBuilder.fromUri(URI.create(departmentOneEndpoint))
                .path("/accounts")
                .path("/" + accountId)
                .path("/deposit")
                .queryParam("amount", amount)
                .build()
                .toUri();

        ResponseEntity<String> responseEntity = restTemplate.postForEntity(departmentUri, null, String.class);
        LOG.info("Re-Deposit Response: \n" + responseEntity.getBody());
    }

    /**
     * Send an HTTP request to the service to withdraw amount from the provided account identity
     *
     * @param accountId The account Identity
     * @param amount    The amount to be withdrawn
     * @return HTTP Response from the service
     */
    private ResponseEntity<String> withdraw(String accountId, double amount) {
        URI departmentUri = UriComponentsBuilder.fromUri(URI.create(departmentOneEndpoint))
                .path("/accounts")
                .path("/" + accountId)
                .path("/withdraw")
                .queryParam("amount", amount)
                .build()
                .toUri();

        ResponseEntity<String> responseEntity = restTemplate.postForEntity(departmentUri, null, String.class);
        LOG.info("Withdraw Response: \n" + responseEntity.getBody());
        return responseEntity;
    }

    /**
     * Send an HTTP request to the service to deposit amount into the provided account identity
     *
     * @param accountId The account Identity
     * @param amount    The amount to be deposited
     * @return HTTP Response from the service
     */
    private ResponseEntity<String> deposit(String accountId, double amount) {
        URI departmentUri = UriComponentsBuilder.fromUri(URI.create(departmentTwoEndpoint))
                .path("/accounts")
                .path("/" + accountId)
                .path("/deposit")
                .queryParam("amount", amount)
                .build()
                .toUri();

        ResponseEntity<String> responseEntity = restTemplate.postForEntity(departmentUri, null, String.class);
        LOG.info("Deposit Response: \n" + responseEntity.getBody());
        return responseEntity;
    }

}

Note that this simplistic implementation of the transfer service at least considers the possibility that the deposit request fails and, if so, attempts to redeposit the withdrawn amount back into the source account. However, as should be obvious, the redeposit itself can fail, leaving the funds in limbo.

Here is the code providing the REST service interface for withdraw for Department1:

@RequestMapping(value = "/{accountId}/withdraw", method = RequestMethod.POST)
    public ResponseEntity<?> withdraw(@PathVariable("accountId") String accountId, @RequestParam("amount") double amount) {
        try {
            this.accountOperationService.withdraw(accountId, amount);
            return ResponseEntity.ok("Amount withdrawn from the account");
        } catch (NotFoundException e) {
            return ResponseEntity.status(HttpStatus.NOT_FOUND).body(e.getMessage());
        } catch (UnprocessableEntityException e) {
            LOG.error(e.getLocalizedMessage());
            return ResponseEntity.status(HttpStatus.UNPROCESSABLE_ENTITY).body(e.getMessage());
        } catch (Exception e) {
            LOG.error(e.getLocalizedMessage());
            return ResponseEntity.internalServerError().body(e.getLocalizedMessage());
        }
    }

The withdraw service calls the withdraw method on the accountOperationService. Here is the code providing the withdraw method used by the above REST service; it uses an injected EntityManager for JPA:

/**
 * Service that connects to the accounts database and provides methods to interact with the account
 */
@Component
@RequestScope
@Transactional
public class AccountOperationService implements IAccountOperationService {

    private static final Logger LOG = LoggerFactory.getLogger(AccountOperationService.class);

    @Autowired
    EntityManager entityManager;

    @Autowired
    IAccountQueryService accountQueryService;

    @Override
    public void withdraw(String accountId, double amount) throws UnprocessableEntityException, NotFoundException {
        Account account = accountQueryService.getAccountDetails(accountId);
        if (account.getAmount() < amount) {
            throw new UnprocessableEntityException("Insufficient balance in the account");
        }
        LOG.info("Current Balance: " + account.getAmount());
        account.setAmount(account.getAmount() - amount);
        account = entityManager.merge(account);
        entityManager.flush();
        LOG.info("New Balance: " + account.getAmount());
        LOG.info(amount + " withdrawn from account: " + accountId);
    }

    @Override
    public void deposit(String accountId, double amount) throws NotFoundException {
        Account account = accountQueryService.getAccountDetails(accountId);
        LOG.info("Current Balance: " + account.getAmount());
        account.setAmount(account.getAmount() + amount);
        account = entityManager.merge(account);
        entityManager.flush();
        LOG.info("New Balance: " + account.getAmount());
        LOG.info(amount + " deposited to account: " + accountId);
    }
}

The withdraw method gets the current account balance, checks for sufficient funds, and throws an exception if the balance is insufficient; otherwise it updates the account balance and saves the account. We can also see the deposit method, which gets the current account balance, adds the amount to deposit, and saves the updated account information.

What About Failures?


The developer of this teller service realizes there might be some failure scenarios to handle. For example, what happens if the deposit request fails? The developer solved that by having the teller service take the corrective measure of redepositing the money back into the first account. Problem solved. But what happens if the teller microservice dies between the withdrawal request and the request to redeposit the funds? What happens to the funds that were withdrawn? As it stands, they’re lost!

The developer could solve this problem by creating a table of pending operations that is examined when the teller microservice starts up. But that also means the deposit service must be idempotent, since the only thing the teller service can do is retry the deposit request until it succeeds, at which point it removes the entry from its pending-operations table. Until the deposit succeeds, the funds are basically in limbo, inaccessible to the account owner or anyone else.
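To make the idea concrete, here is a hedged sketch of such a recovery pass at startup. The PendingOperation entity, its repository, and the stored deposit URI are hypothetical; none of them exist in the sample application:

// Hypothetical sketch of a startup recovery pass over a pending-operations table.
@Component
public class PendingTransferRecovery {

    private static final Logger LOG = LoggerFactory.getLogger(PendingTransferRecovery.class);

    @Autowired
    PendingOperationRepository pendingOps;   // hypothetical Spring Data repository

    @Autowired
    RestTemplate restTemplate;

    @EventListener(ApplicationReadyEvent.class)
    public void retryPendingDeposits() {
        for (PendingOperation op : pendingOps.findAll()) {
            try {
                // The deposit endpoint must be idempotent, because this retry may run
                // after a crash that happened before the pending entry was removed.
                restTemplate.postForEntity(op.getDepositUri(), null, String.class);
                pendingOps.delete(op);
            } catch (Exception e) {
                LOG.warn("Deposit retry failed, will retry later: {}", e.getLocalizedMessage());
            }
        }
    }
}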

So far, the developer has only handled some of the possible failures by adding error recovery logic into their microservice. And this is only for a trivial microservice. As more state information is updated, more complex recovery mechanisms may need to be added to the microservice. In the next post, we’ll look at how we can apply the saga pattern to solve this data consistency problem using Eclipse MicroProfile Long Running Actions, coordinated by MicroTx.

Source: oracle.com