Friday, September 30, 2022

How to Install Oracle Database 11g on Windows?

Oracle Database (also known as Oracle RDBMS) is a relational database management system produced and marketed by Oracle Corporation. 

Its most fundamental and common use is to store data of a pre-defined type. It supports Structured Query Language (SQL) to manage and manipulate that data, and it is one of the most reliable and widely used relational database engines. 
  
There are many versions of Oracle Database, such as Oracle Database 10g, Oracle Database 11g, Oracle Database 12c, Oracle Database 19c, and Oracle Database 21c; 19c is the current long-term support release. In this article, we will learn how to install version 11g on Windows. 

Downloading the Installation Files


Step 1: Go to oracle.com and click the options menu. 


Step 2: Click the Download button and scroll down to the Database section. 


Step 3: Click Database 11g Enterprise/Standard Editions, after which you’ll find different builds of Oracle for different operating systems. Download the files that match your OS. 


Step 4: After clicking the download button, you will be redirected to a login screen where you’ll need to sign in with your Oracle account. If you don’t have one, you must sign up first, because without an account you won’t be able to download the files. 


Step 5: Repeat the same steps to download both files. 
  
After downloading the files successfully, you’ll find both of them in your Downloads folder in compressed form, so you’ll need to extract them.


Extraction of Downloaded Files


Step 1: Extract both files into the same directory using WinRAR or any other extraction tool of your choice. 


Step 2: In the second extracted folder, go to database -> stage -> Components and copy all the contents of this folder. 


Step 3: In the main (first) database folder, go to stage and paste the copied content into the Components folder there. 

Installation of Oracle Database


Step 1: Go to the main database folder, where you’ll find the setup files. Right-click the setup.exe file and choose Run as Administrator. 


Step 2: Click Yes to continue. This will start Oracle Universal Installer. 


Step 3: Provide your email address to receive notifications and news alerts from Oracle. 


If you don’t want to receive any alerts from Oracle, simply leave these fields empty and click Next to move ahead. 
  
Step 4: Select one of the three installation options, described below, according to your needs.


◉ Option 1 – Install the Oracle server software and also create a database.
◉ Option 2 – Install the Oracle server software only.
◉ Option 3 – Upgrade an existing database.

Step 5: Choose between Server Class and Desktop Class as per your requirement and click on Next. 


Note: Ignore the warning related to the Admin Password.

Step 6: Configure the basic settings and create a password for your database. Once the configuration is done, click Next to continue. 


Step 7: Here, Oracle Universal Installer (OUI) checks prerequisites such as hardware compatibility. 

If there are any errors, OUI will list them here and recommend changes. 


Step 8: Click on Finish to start the installation process. The installation might take some time depending on your hardware. 


Step 9: Click OK to finish the installation. 


Step 10: Copy the localhost link provided to open Oracle Enterprise Manager. 


Click the Close button, and you are done with the installation process. 

Getting Started with Oracle Enterprise Manager


There are two ways to start Oracle Enterprise Manager: 

◉ One way to launch Enterprise Manager is to use the link provided in the last step of the installation process. 

Note: If you misplace this link in the future, follow the next steps to start it the other way. 

◉ Alternatively, go to the Start menu and:
    ◉ Search for the Oracle folder.
    ◉ Click Database Control – Oracle (your global database name).
    ◉ This will take you to the login screen of Oracle Enterprise Manager.


◉ In case your browser shows a security warning at this point, just click Advanced -> Proceed to localhost.


Log in with your credentials and begin using Oracle Database 11g on Windows.
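
To sanity-check the installation from the command line, you can also connect with SQL*Plus. A minimal sketch, assuming the default ORCL service on port 1521 and the SYSTEM password you created during Step 6:

sqlplus system/your_password@localhost:1521/orcl

SELECT banner FROM v$version;

The banner should report Oracle Database 11g Enterprise or Standard Edition, depending on what you installed.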

Source: geeksforgeeks.org

Wednesday, September 28, 2022

Oracle Analytics Server 2022 is available!

Oracle is proud to announce the availability of our customer-managed analytics platform: Oracle Analytics Server 2022. This is the next generation of Oracle Business Intelligence Enterprise Edition (OBIEE) and a great path to modernization for anyone needing to deploy analytics on-premises or customer-managed in the cloud via the Oracle Cloud Infrastructure Marketplace.

The total value of the Oracle Analytics platform


Our analytics platform is continually improving. Oracle Analytics Server (OAS) users leverage excellent data visualization and storytelling capabilities, the ability to explore real-time insights, and the power of Machine Learning for all levels from clickers to coders. With integrated data preparation and enrichment, our self-service data analytics delivers performance, compliance, and easy administration, through a secure unified system. See the Analytics Capabilities Explorer for additional information.


Selected new features in OAS 2022:


◉ Multi-table datasets: Perform self-service data modeling with datasets by adding multiple tables to a dataset from one or more relational data source connections.

◉ Auto insights: Oracle Analytics analyzes datasets when you connect to them and suggests which visualizations will give you the best insights.

◉ New mapping capabilities: Smooth and powerful interactions with map visualizations through client-side libraries from Mapbox GL and WebGL, and map layers on unique data sets.

◉ Strong hierarchy support for EPM and Essbase: Navigate EPM hierarchies when connecting to subject areas or datasets. You can drill up and down the hierarchy tree in pivot and table visualizations.

◉ Graph and text analysis: Enhance your visualizations of network graphs using powerful property graph analytics. Compute the shortest path between two vertices or identify connected vertices in the graph.

◉ Data quality insights: Explore your data and use a visual overview of your content to assess and improve data quality.

◉ Custom data enrichment knowledge: Extend the system knowledge in Oracle Analytics by adding your own enterprise-specific reference data. This automatically increases Oracle Analytics’ ability to discover and offer more relevant enrichment recommendations.

◉ Analyze Performance: Use built-in performance tools to analyze statistics such as query time, server time, and streaming time for visualization components in workbooks.

◉ New efficient Redwood UI design: An updated product look and feel, with new default colors and fonts, follows guidelines focused on usability and accessibility.

For a complete overview of new features and enhancements, please see "What’s New for Oracle Analytics Server 2022."

How does Oracle Analytics Server differ from Oracle Analytics Cloud?


Oracle Analytics Cloud is a fully Oracle-managed cloud service that's updated several times a year. Oracle Analytics Server is customer-managed and is released annually to keep up with Oracle Analytics Cloud capabilities. Oracle Analytics Server includes most of the features available in Oracle Analytics Cloud, except those with cloud dependencies. See the Feature Availability and Comparison Between Oracle Analytics Server and Oracle Analytics Cloud for more details.

However, because Oracle Analytics Server is fully customer-managed, you have more flexibility with connections, server configuration, and UI customization than customers using Oracle Analytics Cloud. To make Oracle Analytics Server deployment on OCI much easier than on other cloud providers, we offer Oracle Analytics Server on the Oracle Cloud Marketplace. Two licensing options are available:


◉ Bring Your Own License (BYOL): for customers with an Oracle Fusion Middleware on-premises license. Deploy OAS quickly and easily, paying only for the OCI components you use.

◉ Universal Credits (UCM): for customers with an active Oracle Cloud account (Pay As You Go or Annual Universal Credits). The OAS license is charged at a rate of $1.75 per CPU per hour, only for the hours during which the instance is active.

One analytics platform for all customer needs


The Oracle Analytics platform is a complete, modern analytics platform that supports smarter predictions and better decisions. Embedded machine learning and artificial intelligence power intelligent enterprise reporting, ad hoc analysis, and self-service data visualization. Oracle Analytics Server provides a choice to modernize, either on-premises in your own data center, or through the Oracle Cloud Infrastructure marketplace.

Source: oracle.com

Monday, September 26, 2022

InfStones and Autonomous JSON Database

Introduction


InfStones is an enterprise-grade Platform as a Service (PaaS) blockchain infrastructure provider that is trusted by top blockchain companies around the world. The Oracle Autonomous JSON Database (AJD) is a fully managed OCI service for building and deploying scalable applications that primarily need to store JSON data (aka JSON documents) in the Cloud. In this post, we will describe how InfStones was able to leverage AJD to build a low-latency query service over billions of blockchain records.

InfStones’ AI-based infrastructure provides developers worldwide with a rugged, powerful node management platform alongside an easy-to-use API. With over 15,000 nodes supported on over 60 blockchains, InfStones gives developers all the control they need - reliability, speed, efficiency, security, and scalability - for cross-chain DeFi, NFT, GameFi, and decentralized application development. InfStones is trusted by the biggest blockchain companies in the world including Binance, CoinList, BitGo, OKX, Chainlink, Polygon, Harmony, and KuCoin, among a hundred other customers. InfStones is dedicated to developing the next evolution of a better world through limitless Web3 innovation.

When building a new set of query services, InfStones had the following database requirements:

◉ Low-management: They wanted a managed database solution where they would not have to manually do administrative tasks like database backups, patching, and other server maintenance. And they wanted a secure-by-default database where they would not have to configure certificates, encryption settings, or worry if their database configuration was secure enough.

◉ Low-latency: The service provided to their end-users needs to support ad-hoc queries that return in milliseconds. Their end-users may use the results to support interactive, real-time applications so response time is critical.

◉ Scalable: The data sets backing their service contain billions of documents and new data is ingested continuously. At the same time, the InfStones platform continues to grow as new users access their platform. They needed a database that could automatically scale both storage and compute capacity as they grew.

◉ Flexible development options: The engineering team was primarily writing application code using Google Go and they preferred to interact with this dataset as a simple JSON collection using the MongoDB Go Driver. 

AJD was an obvious choice as it met all requirements while significantly reducing database compute costs. After only a few months of development, the new InfStones query processing services are now in production and executing millions of requests each day.

Architecture


[Architecture diagram: main components of the InfStones query processing service]

The above diagram describes the main components of the InfStones query processing service. This architecture is used for several different blockchains, including BSC, OKC, and Ethereum. Historical transactions were initially loaded into AJD using mongoimport. For BSC, for example, this resulted in a starting collection of about 2 billion JSON documents. The Data Manager is a background process that periodically loads new transactions as they occur. In the case of BSC, it loads about 100 rows every three seconds. Each transaction document is roughly 100 bytes and has the following structure:

{
  "_id": "6275561be8e4de7b67280cf7",
  "blockNumber": 14090932,
  "address": "5C15e82yIRvkFj+n",
  "data": "0x00000000000000000000000080372346523562456342345634562341534511",
  "transactionIndex": "0x4",
  "transactionash": "0xdedcb68e1023982345129C306d85768ee579f",
  "blockHash": "0x69387e811a0a442e1574ac05d783e42c7c73627d22342b39435089296",
  "removed": true,
  "logIndex": "0x0",
  "topic0": "0x000000000000000000000000803734dddb64c4392910b5ae0a2923733ab1c6b0",
  "topic1": "0x00000000000000000000000016d1160c4dddb64c4733b5ae0a2923733ab1c6b0",
  "topic2": "0x000000000000000000000000935a2252803734d682b8f64cfbf0fdb17797f586"
}

SQL was used to create function-based indexes like the following:

CREATE INDEX ADDR_BLOCK_IDX ON collectionTable (
     json_value(data, '$.address.string()' ERROR ON ERROR) asc,
     json_value(data, '$.blockNumber.number()' ERROR ON ERROR) desc
);

CREATE INDEX BLOCK_IDX ON collectionTable (
     json_value(data, '$.blockNumber.number()' ERROR ON ERROR) desc
);

These indexes support queries on address, blockNumber, or both address and blockNumber. The “ERROR ON ERROR” clause causes an error to be raised if incoming data doesn’t match the expected type (in this case string or number). The ONLINE option could have been added when creating these indexes but it was not necessary as the indexes were created in advance. 

The distributed query service is written in Google Go and it issues filter expressions against the collection using the Oracle API for MongoDB and the MongoDB Go driver. For example:

filter := bson.M{"blockNumberInt": bson.M{"$gte": fromBlock, "$lte": toBlock}}
filter["address"] = bson.M{"$in": userRequest.Address}
findOptions.SetHint(indexName)
findOptions.SetBatchSize(BATCH_SIZE + 1)
cursor, _ := collection.Find(context.TODO(), filter, findOptions)

The query service exposes a REST interface to InfStones end users, allowing them to submit ad-hoc queries against the dataset. Examples of queries include:

◉ Select transactions with blockNumber values within a specified range
◉ Select transactions with blockNumber values within a specified range, having a specific address value.
◉ Select transactions with a matching topic

Because the dataset is so large, these queries must be index-driven, as scanning the entire dataset serially can take hours or even days. The result size of each query is limited to 10,000 rows in order to limit the amount of data sent to the end user to around 7 MB. Overall, InfStones executes around 75 million requests across their databases each month.
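
Under the covers, the Oracle API for MongoDB translates such filters into SQL/JSON predicates. A rough sketch of the SQL shape the second query type resolves to (the bind variable names are illustrative; the real statements are generated by the API layer):

SELECT t.data
FROM collectionTable t
WHERE json_value(t.data, '$.address.string()') = :addr
AND json_value(t.data, '$.blockNumber.number()') BETWEEN :from_block AND :to_block
FETCH FIRST 10000 ROWS ONLY;

This is exactly the shape that the ADDR_BLOCK_IDX composite index shown earlier can drive.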

Lessons Learned


Working with billions of documents while supporting fast range queries with large result sets created a unique set of challenges that are typically not encountered with smaller datasets. Additionally, a goal was to use the database as efficiently as possible: just throwing more hardware at a problem is an easy but expensive solution. The remainder of this post, therefore, shares some of the techniques used while developing the InfStones query processor. Most of the techniques shared here are not only relevant to AJD but also to Oracle Database in general.

Creating indexes

Operations that need to process all 2 billion rows, such as adding a new index, can take hours or even days if done using a single processor. InfStones relied on the ability of AJD to scale up compute capacity temporarily and use parallel processing to greatly reduce the time required to create indexes. That is, for normal operations, only 2 OCPUs are needed to support the workload described above, but when creating an index, the database would be temporarily scaled up to 40-100 OCPUs. This was done manually from the AJD Console and without incurring any database downtime.

[Screenshot: adjusting the OCPU count in the Autonomous Database console]

This screenshot shows where the scale can be changed in the database console (this can also be changed programmatically). Once the database is scaled up to use more OCPUs, the index is created from SQL using the HIGH consumer group. 

[Screenshot: choosing the consumer group in Database Actions]

This screenshot shows where the consumer group can be selected in Database Actions. Using the HIGH consumer group ensures that the index is created in parallel. Once the index has been created, the database is scaled back down. Billing increases for the additional OCPUs only for the time they are allocated. See more on consumer group settings here: https://docs.oracle.com/en/cloud/paas/autonomous-database/adbsa/service-names-tranaction-processing.html
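
As a sketch of that flow, the index DDL itself can also request an explicit degree of parallelism (the index name and degree below are illustrative):

CREATE INDEX TOPIC0_IDX ON collectionTable (
     json_value(data, '$.topic0.string()' ERROR ON ERROR)
) PARALLEL 40;

-- Reset the degree afterwards so normal query plans stay serial
ALTER INDEX TOPIC0_IDX NOPARALLEL;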

Transforming Data

At some point after the initial collection was loaded, a decision was made to change the structure of the JSON data - fields were renamed and an unnecessarily nested object was eliminated. To perform bulk transformations on the data, the most efficient way found was to make a modified copy of the collection using the SQL/JSON function JSON_TRANSFORM. 

First, an empty collection was created using the JSON page in Database Actions.

[Screenshot: creating a new empty collection on the JSON page of Database Actions]

This screenshot shows the JSON page where a new empty collection can be created. The empty collection could alternatively be created from a MongoDB client or tool. After creating the empty collection and scaling up the database, a SQL “insert as select” was executed, again using the HIGH consumer group for optimal parallel execution.

[Screenshot: the JSON_TRANSFORM insert-as-select statement]

The query in the screenshot above is a JSON_TRANSFORM query used to insert a modified copy of the data into a second collection. The application is then updated to refer to the new collection, and the old one is dropped. Creating a modified copy in this manner was more efficient than attempting to update the data in place.
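
As a hedged sketch of what such an insert-as-select can look like (the renamed field and the nested object below are hypothetical; the actual statement matched InfStones' real schema changes):

INSERT INTO newCollection (id, created_on, last_modified, version, data)
SELECT id, created_on, last_modified, version,
       json_transform(data,
         RENAME '$.txHash' = 'transactionHash',                -- hypothetical field rename
         SET '$.address' = json_value(data, '$.meta.address'), -- lift a nested value to the top level
         REMOVE '$.meta')                                      -- drop the now-redundant nested object
FROM oldCollection;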

Using SQL Monitoring and Hints

The query service mainly interacts with the database using the Oracle API for MongoDB. However, at a lower level, all MongoDB commands received are translated into Oracle SQL statements. Database Actions includes a Performance Hub page that allows you to monitor the execution and performance of any SQL statement, including those executed as a result of commands sent using the Oracle API for MongoDB.

The InfStones query processor must support many concurrent users and queries at once. At this data scale, these operations need to be index driven or use parallel query processing in order to execute quickly. During the development, occasionally either a SQL statement or a MongoDB command would be executed serially that wasn’t able to use an existing index. Even with Exadata Smart Scan and OSON in-place query evaluation, the operation would still seem to hang as it scanned all 2 billion rows, potentially taking hours or days depending on the operation. The Performance Hub was used in such cases to see the query plan being used and potentially kill the long-running operation.

[Screenshot: the Performance Hub page in Database Actions]

This screenshot shows the Performance Hub page of Database Actions. In this case, one SQL statement is actively being monitored. Statements are picked up by SQL monitoring if the MONITOR hint is used or if the query takes longer than five seconds to execute. From the Mongo API, the $native hint can be used to pass the MONITOR hint to commands that run in less than five seconds.

filter := bson.M{"blockNumber": bson.M{"$gte": fromBlock, "$lte": toBlock}}
filter["address"] = bson.M{"$in": userRequest.Address}
findOptions := options.Find()
findOptions.SetLimit(RESULT_LIMIT + 1)
findOptions.SetHint(bson.M{"$native": "MONITOR"})
findOptions.SetBatchSize(BATCH_SIZE + 1)
cursor, _ := collection.Find(context.TODO(), filter, findOptions)

This can be useful for diagnosing performance issues in queries that finish in under five seconds. The $native hint passes the specified text directly to the underlying statement as a SQL hint.

Hinting can also be used to force index pick-up, as shown in the following example:

filter := bson.M{"blockNumber": bson.M{"$gte": fromBlock, "$lte": toBlock}}
filter["address"] = bson.M{"$in": userRequest.Address}
findOptions := options.Find()
findOptions.SetLimit(RESULT_LIMIT + 1)
indexName := "BLK_IDX"
findOptions.SetHint(indexName)
findOptions.SetBatchSize(BATCH_SIZE + 1)
cursor, _ := collection.Find(context.TODO(), filter, findOptions)

Optimizing Response Time for Large Range Queries

One of the goals of InfStones was to keep query response time well under one second in all cases. Initially, however, range queries with larger results sometimes took longer than a second the first time they were run. Running exactly the same query again would be much faster, but the initial “cold” run was sometimes over a second. We examined the SQL monitoring report for the cold run:

[Screenshot: SQL monitoring report for the cold run of the range query]

What we observed is that even though the query plan was optimally using an index to perform the range scan, each row retrieved was incurring a physical I/O request. In this particular case, the query produces 133K rows (roughly 13 MB of data) and incurs 132K physical I/Os (each I/O is 8 KB): the rows selected by the index were each pointing to different data blocks. When the query is run a second time over exactly the same range of the data, it is much faster (milliseconds), because the relevant data blocks are already in the buffer cache.

For smaller datasets, an option to improve performance in this case would be to increase the available memory so that subsequent queries have a higher likelihood of finding the relevant blocks already paged into memory. However, for a dataset this big, it is not feasible to keep enough of the data in memory. Instead, the solution we chose was to organize the data along the index being used. In this case, the range scan was on a composite index on attributes “address” and “blockNumber”, so we recreated the dataset so that it would be stored in order of these attributes:

insert into newCollection (id, created_on, last_modified, version, data)
select id, created_on, last_modified, version, data
  from oldCollection t
  order by t.data.address.string(), t.data.blockNumber.number();

In this new collection, since the data is organized along the attributes of the index, range scans fetch roughly 8 documents per physical I/O (data blocks are 8 KB and the blockchain documents are on average 100 bytes). This reduces the overall physical I/O by a factor of 8 and consequently keeps query times well under 1 second, even when none of the data is in memory at the time the query is executed. Since the blockchain transactions are not modified after they have been inserted, there is no risk of the data becoming disorganized over time. 

As new data comes in, we need to ensure that it is stored such that data along this index tends to be co-located in data blocks. Given that blockNumber values are chronologically increasing, we decided to additionally partition the table by the creation timestamp for newly inserted data:

alter table collectiontable modify
  partition by range (CREATED_ON)
  interval (interval '1' day) (
    partition p1 values less than (to_date('2022-06-21','YYYY-MM-DD'))
  );

This ensures that data loaded on any given day will be in the same set of data blocks. This design allows the query service to provide millisecond latency for ad-hoc queries over 2 billion rows with minimal database compute and memory resources.

Source: oracle.com

Friday, September 23, 2022

Time to rebuild indexes in the Oracle database?

One of the most controversial topics among DBAs is whether to rebuild indexes or not.

But it is so generic that the answer to the question depends on the database version, the index type and the reasons behind doing it.

Oracle Database Exam Prep, Database Career, Database Skills, Database Jobs, Database Learning, Database Prep, Database Preparation

In most cases, we consider b-tree indexes for rebuild. Note that the “b” stands not for “binary” but for “balanced”. We create indexes mostly for performance reasons. If performance is the real concern, we need to first understand, at least at a high level, how those indexes are organized.

The MOS note Lists All Indexes that Benefit from a Rebuild (Doc ID 122008.1) suggests that indexes considered for rebuild are indexes for which:

– deleted entries represent 20% or more of the current entries
– the index depth is more than 4 levels

A possible candidate for a bitmap index was considered when the distinctiveness was more than 99%:

if ( height > 5 ) OR ( (del_lf_rows/lf_rows) > 0.2 ) then
    dbms_output.put_line (rpad(r_indx.owner,16,' ') || rpad(r_indx.index_name,40,' ') || 
                          lpad(round((del_lf_rows/lf_rows)*100,3),17,' ') || 
                          lpad(height-1,7,' ') || lpad(round((lf_rows-distinct_keys)*100/lf_rows,3),16,' '));
end if;

However, it is a good question whether this still applies to version 11g and above. I have personally seen good benefits from rebuilding indexes satisfying those conditions in lower versions of the database.
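
For reference, the del_lf_rows/lf_rows figures used in such scripts come from the INDEX_STATS view, which only has data after validating an index. A minimal sketch (the index name is illustrative):

-- Populates INDEX_STATS for exactly one index; it takes a lock, so avoid busy systems
analyze index hr.emp_name_ix validate structure;

select name, height, lf_rows, del_lf_rows,
       round(del_lf_rows/lf_rows*100, 2) pct_deleted
from index_stats;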

Another MOS article, Script to investigate a b-tree index structure (Doc ID 989186.1), provides a script which is quite handy, as it verifies the structure of a b-tree index based on the existing table and index statistics. The script calculates the following items:

– an estimate of the size the index should be, since optimal packing can be specified

– the index layout

The script keeps a history of the data gathered in the INDEX_HIST table. This can be useful for detecting a recurring pattern in index rebuilding: an index that keeps qualifying for rebuilds is a sign that the rebuilds are not helping. The history log can be user-defined.

Yet another MOS article, Index Rebuild, the Need vs the Implications (Doc ID 989093.1), specifies that the most common justifications given for rebuilding an index are:

– index becomes fragmented
– index grows and grows – deleted space is not re-used
– index clustering factor becomes out of sync

However, as noted, the impact of rebuilding the index can be quite significant:

◉ Most scripts around depend on the index_stats dynamic table
◉ Redo activity may increase and general performance might be impacted as a direct result of rebuilding an index

An index coalesce is often preferred instead of an index rebuild. It has the following advantages:

◉ does not require approximately 2 times the disk storage
◉ always online
◉ does not restructure the index, but combines index leaf blocks as much as possible, avoiding system overhead

Due to the reasons listed above, it is strongly advised not to rebuild indexes on a regular basis but instead use proper diagnostics.
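
For reference, both operations are one-liners; the schema and index names below are illustrative:

-- Coalesce: merge sparsely filled leaf blocks in place, always online
alter index sales.orders_date_ix coalesce;

-- If a rebuild is genuinely justified, prefer doing it online
alter index sales.orders_date_ix rebuild online;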

Next, here is my personal experience with rebuilding indexes, whether by drop-and-create or a simple rebuild. I have seen 3 main use cases:

1. For performance reasons: because of too many deleted rows or because of split index nodes. With every new release of Oracle, it seems we need to spend less and less time on such rebuild operations.

2. Indexes get fragmented over time and occupy too much space. There was a mission-critical production database where the data was about 500MB and the indexes altogether were about 4TB. You can sometimes release quite a lot of space.

3. Indexes should, in my opinion, be in a separate tablespace, not where the data is. If something goes wrong with the index tablespace (logical or physical corruption, or files deleted unintentionally), then it is just a matter of time to recreate the indexes.

In Oracle 21c, there is a new feature called Automatic Index Optimization. The optimization process includes 3 actions:

• Compress: Compresses portions of the key values in an index segment (~3 times)

• Shrink: Merges the contents of index blocks where possible to free blocks for reuse

• Rebuild: Rebuilds an index to improve space usage and access speed

For a very long time, both DBAs and developers have been struggling (really struggling) with which indexes should be created, what type they should be, and which indexes should be dropped from the database. By far the most interesting new feature of Oracle Database 19c is Automatic Index creation (AI Creation). In the long run, this may become one of the most important features in the Oracle database. Note that you cannot manually rebuild an auto index! A nice and useful capability of AI is that Oracle automatically rebuilds auto indexes that are marked as “Unusable”.
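
As a minimal sketch, automatic indexing is controlled through the DBMS_AUTO_INDEX package (the mode chosen below is illustrative):

-- Let the database both create and use auto indexes
exec dbms_auto_index.configure('AUTO_INDEX_MODE', 'IMPLEMENT');

-- Report recent automatic indexing activity
select dbms_auto_index.report_activity() from dual;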

Source: juliandontcheff.wordpress.com

Wednesday, September 21, 2022

How to calculate the size of a database?

Often even DBAs are misaligned on what is meant by “database size”. As data and databases grow at a rather high rate, it is important to understand the size of the database estate. Worldwide data is expected to hit 175 zettabytes by 2025, representing a 61% CAGR. Also, 51% of the data will be in data centers and 49% will be in the public cloud. So, when moving to the cloud, it is important to know how much storage is needed for the databases.

Ever wondered which is the biggest database in the world? Here is an interesting link: 10 Largest Databases in the World. “Really? Sure about that? How do you actually know if it’s true? Who is the source of information, and can we trust it?” I was wondering exactly the same when checking the list.

And also: how should all the data be stored? According to Data: A Lenovo Solutions Perspective, the answer is that within the core (i.e. data center or cloud) the classic relational database will still be the dominant approach for many years. Thus, one more reason to properly estimate and calculate the size of your databases.


So, how do we calculate the size of, say, an Oracle database (check the links at the end for other database brands)? The most common way is to simply calculate the space which the database files physically consume on disk:

select sum(bytes)/1024/1024/1024 size_in_GB from dba_data_files;

But how about the other files? Like temp, control and redolog files? Are the undo files really part of the database? Do we always want to include the files from SYSTEM and SYSAUX too?

Also, not all the space in the files listed in dba_data_files is necessarily allocated; there could be sections of these files that are not used. In that case, we can go with this method:

select sum(bytes)/1024/1024/1024 size_in_GB from dba_segments;

I often prefer using the query which ignores the system, temp and undo data:

select nvl(ceil(sum(bytes)/(1024*1024*1024)),0) size_in_GB
from dba_extents
where tablespace_name not in ('SYSTEM','TEMP','SYSAUX') and tablespace_name not like '%UNDO%';

Another approach is to calculate only the size of the real data in each schema; the methods above include the indexes too. This requires recent statistics for all tables (computed at 100%, not estimated):

select owner, nvl(ceil(sum(num_rows*avg_row_len)/(1024*1024*1024)),0) size_in_GB
from dba_tables
where owner not in ('SYSTEM','OUTLN','SYS') group by owner;

However, checking the overall size including TEMP and REDO is perhaps the best approach. There is a MOS note, How to Calculate the Size of the Database (Doc ID 1360446.1), which is also worth reading. Here is the suggested method to calculate the total database size:

select a.data_size+b.temp_size+c.redo_size+d.cont_size "total_size"
from ( select sum(bytes) data_size
       from dba_data_files ) a,
     ( select nvl(sum(bytes),0) temp_size
       from dba_temp_files ) b,
     ( select sum(bytes) redo_size
       from sys.v_$logfile lf, sys.v_$log l
       where lf.group# = l.group#) c,
     ( select sum(block_size*file_size_blks) cont_size
       from v$controlfile ) d;


If you have enabled block change tracking and want to be really pedantic with the database size, you should also add the BCT file.
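
The file’s size can be read directly from its dynamic performance view; for example:

select status, filename, bytes/1024/1024 size_mb
from v$block_change_tracking;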

select owner "TABLE_OWNER", tablename "TABLE_NAME", tablesize "TABLE SIZE (GB)", indexname "INDEX_NAME", indexsize "INDEX SIZE (GB)", indexsize/tablesize "INDEX/TABLE" from
(
with
tabs as (select owner, segment_name tablename,sum(bytes/1024/1024/1024) tablesize from dba_segments where segment_type='TABLE' group by owner, segment_name),
inds as (select i.owner, i.index_name indexname, i.table_name tablename, sum(s.bytes/1024/1024/1024) indexsize from dba_indexes i join dba_segments s on (i.owner=s.owner and i.index_name=s.segment_name) group by i.owner, i.index_name, i.table_name)
select * from tabs natural join inds where indexsize > tablesize and indexsize>1
)
order by indexsize/tablesize desc;

Finally, there is a view in Oracle called dba_hist_tbspc_space_usage, which displays historical tablespace usage statistics, so you can retrieve the historical growth of the tablespaces and thus of the database as a whole. Check this blog post: How to retrieve growth history for Oracle tablespaces. Note that the history is available only as far back as AWR data is retained. It is considered good practice to track the database size on, say, a weekly basis; you will then always be able to answer questions on capacity needs, trends, growth, size, etc.
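
A minimal sketch of such a growth query, joining dba_hist_snapshot for timestamps (this assumes a single uniform block size):

select s.begin_interval_time, t.tablespace_id,
       round(t.tablespace_usedsize * p.value / 1024/1024/1024, 2) used_gb
from dba_hist_tbspc_space_usage t
join dba_hist_snapshot s on s.snap_id = t.snap_id and s.dbid = t.dbid
join v$parameter p on p.name = 'db_block_size'
order by s.begin_interval_time, t.tablespace_id;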

P.S. DBA_HIGH_WATER_MARK_STATISTICS contains a column called DB_SIZE which is dependent on the internal package DBMS_SWRF_REPORT_INTERNAL. The definition of DBA_HIGH_WATER_MARK_STATISTICS is:

select dbid, hwm.name, version, highwater, last_value, description
from wri$_dbu_high_water_mark hwm, wri$_dbu_hwm_metadata mt
where hwm.name = mt.name and
hwm.name not like '_HWM_TEST%' and /* filter out test hwm */
bitand(mt.method, 4) != 4          /* filter out disabled hwm */

The table WRI$_DBU_HIGH_WATER_MARK is most likely populated using DBMS_SWRF_REPORT_INTERNAL. The view DBA_HIGH_WATER_MARK_STATISTICS (existing since 10g!) is useful as it tells you the maximum size of the database for each version over the lifespan of the database.

Source: juliandontcheff.wordpress.com

Monday, September 19, 2022

How to Import Data into Oracle Database?

An Oracle database is a collection of data treated as a unit. The purpose of a database is to store and retrieve related information. An Oracle database holds its data in tables, organized into rows and columns. In this article, we will see how to import data into an Oracle database.

Here are a few steps through which we can import file data into an Oracle database in the form of rows and columns.

Step 1: Log in to your Oracle database.

Step 2: Click Utilities -> Data Load/Unload -> Load.


Step 3: You can load text data, spreadsheet data, or XML data. Click the Load Text Data option to load a text file in CSV (comma-separated values) format.


Step 4: Select New table, since we want to load the data into a new table, and select the Upload file option. Click Next.


Step 5: Select the file you want to import. Here the selected file is student_info.txt, which contains seven rows of student information. Set the separator to a comma ( , ), as our file is comma-separated. Click Next.


Step 6: Give a name to the table in which the data will be stored and click Next. Then select whatever customization options you want and click Next again; your data is then uploaded into the database.


Step 7: You can also check the result through a SQL command. 

Query:

Select * from gfg_demo_table;
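
The same CSV could also be loaded from the command line with SQL*Loader. A minimal sketch, where the control file and the three column names are hypothetical and must match the actual layout of student_info.txt:

-- student_info.ctl
LOAD DATA
INFILE 'student_info.txt'
INTO TABLE gfg_demo_table
FIELDS TERMINATED BY ','
(student_id, student_name, student_city)

Run it with: sqlldr userid=your_user/your_password control=student_info.ctl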

Source: geeksforgeeks.org