Oracle is commercial software developed by Oracle Corporation. It is a widely used RDBMS that allows quick and safe storage and retrieval of data, and it is used for running Online Transaction Processing and Data Warehousing workloads. Oracle runs on most major operating systems, including macOS, UNIX, Windows, and Linux. Oracle also has a networking stack that allows applications on different platforms to communicate with the database easily.
2. dBASE:
dBASE was one of the most successful database management systems for microcomputers and the first commercially successful database system for personal computers. It is used for creating and manipulating relational databases. dBASE uses procedural functions and commands similar to the BASIC language, with simple data manipulation commands such as USE, GO TOP, and more.
Difference between Oracle and dBASE:
S.No. | Oracle | dBASE
----- | ------ | -----
1 | Developed by Oracle Corporation. | Developed by Ashton-Tate.
2 | Launched in 1980. | Launched in 1979.
3 | Runs on most major operating systems, including UNIX, Linux, Windows, and macOS. | Runs on DOS (dBASE Classic) and Windows (dBASE Pro).
4 | Commercial software. | Also commercial software.
5 | JDBC, ODBC, ODP.NET, and Oracle Call Interface (OCI) are the APIs and access methods used. | No APIs or access methods are used.
6 | Implemented in C and C++. | Implemented in SQL.
7 | Uses horizontal partitioning, with optional Oracle Partitioning methods. | Does not use any partitioning method.
8 | Referential integrity is supported. | Referential integrity is supported.
9 | Uses a selectable replication method. | Does not use any replication method.
10 | ACID transaction properties are supported. | No transaction concept for dBASE internal data, but the IDE supports transactions when accessing an external DBMS.
Databases store critical business information and are essential for the efficient operation of modern organizations. DBAs are often overburdened with the time-consuming manual tasks of managing and maintaining databases. The demands of current workloads can lead to DBA errors, which can have a catastrophic impact on uptime, performance, and security.
For example, failing to apply a patch or security update can create vulnerabilities. Failing to apply the patch correctly can weaken or eliminate security protections altogether. If the database is not secure, the enterprise can be at risk for data breaches that can have serious financial repercussions and negatively impact a company’s reputation.
Business applications add new records to existing databases or use database information to create reports, analyze trends, or look for anomalies. This can cause databases to grow to many terabytes in size and become highly complex, making them even more difficult for DBAs to manage, secure, and tune for maximum performance. Databases that are slow-running or unavailable due to downtime can negatively impact employee productivity and frustrate customers.
The volume and velocity of data available to the enterprise are accelerating. This amplifies the need for efficient, secure database management that enhances data security, reduces downtime, improves performance, and is not vulnerable to human error. An autonomous database can achieve these objectives.
Types of Data Stored in Databases
Information stored in a database management system can be either highly structured (such as accounting records or customer information) or unstructured (such as digital images or spreadsheets). The data may be accessed directly by customers and employees, or indirectly through enterprise software, websites, or mobile apps. Additionally, many types of software—such as business intelligence, customer relationship management, and supply chain applications—use information stored in databases.
Components of an Autonomous Database
An autonomous database consists of two key elements that align with workload types.
◉ A data warehouse performs numerous functions related to business intelligence activities, and uses data that’s been prepared in advance for analysis. The data warehouse environment also manages all database lifecycle operations, can perform query scans on millions of rows, is scalable to business needs, and can be deployed in a matter of seconds.
◉ Transaction processing enables time-based transactional processes such as real-time analytics, personalization, and fraud detection. Transaction processing typically involves a very small number of records, is based on predefined operations, and allows for simple application development and deployment.
How an Autonomous Database Works
An autonomous database leverages AI and machine learning to provide full, end-to-end automation for provisioning, security, updates, availability, performance, change management, and error prevention.
In this respect, an autonomous database has specific characteristics.
◉ It is self-driving. All database and infrastructure management, monitoring, and tuning processes are automated. DBAs are still needed for tasks such as managing how applications connect to the database and helping developers use in-database features and functions without changing their application code.
◉ It is self-securing. Built-in capabilities protect against both external attacks and malicious internal users. This helps eliminate concerns about cyberattacks on unpatched or unencrypted databases.
◉ It is self-repairing. This can prevent downtime, including unplanned maintenance. An autonomous database can require fewer than 2.5 minutes of downtime per month, including patching.
Benefits of an Autonomous Database
There are several benefits of an autonomous database.
◉ Maximum database uptime, performance, and security―including automatic patches and fixes
◉ Elimination of manual, error-prone management tasks through automation
◉ Reduced costs and improved productivity by automating routine tasks
An autonomous database also allows an organization to refocus database management staff on higher-level work that creates greater business value, such as data modeling, assisting programmers with data architecture, and planning for future capacity. In some cases, an autonomous database can help a business save money by reducing the number of DBAs needed to manage its databases or by redeploying them to more strategic tasks.
Intelligent Technologies Support Autonomous Databases
Several fundamental intelligent technologies support autonomous databases―enabling the automation of mundane but important tasks such as routine maintenance, scaling, security, and database tuning. For example, an autonomous database’s machine learning and AI algorithms include query optimization, automatic memory management, and storage management to provide a completely self-tuning database.
Machine learning algorithms help companies improve database security by analyzing reams of logged data and flagging outliers and anomalous patterns before intruders can do damage. Machine learning can also automatically and continuously patch, tune, back up, and upgrade the system without manual intervention, all while the system is running. This automation minimizes the possibility that either human error or malicious behavior will affect database operations or security.
In addition, autonomous databases have some specific capabilities.
◉ Easy scalability. A cloud-based database server can expand or reduce its compute and memory resources instantly, as needed. For example, a company could scale up from 8 cores of database computing to 16 cores for end-of-quarter processing, and then scale down to the less-expensive 8 cores afterward. In fact, all compute resources could be shut down over the weekend to reduce costs, and then be started up again on Monday morning.
◉ Seamless database patching. Many data breaches are enabled by system vulnerabilities for which a security or vulnerability patch is already available but not yet applied. An autonomous database prevents this issue by automatically rolling patches against the cloud servers in a sequence designed to eliminate business downtime.
◉ Integrated intelligence. An autonomous database integrates monitoring, management, and analytics capabilities that leverage machine learning and AI techniques. The goal is to automate database tuning, prevent application outages, and harden security across the entire database application.
Deployment Options for Autonomous Databases
There are two options for deploying an autonomous database.
◉ Serverless deployment. In serverless deployment, multiple users share the same cloud infrastructure resources. Serverless deployment is the simplest option; it requires no minimum commitment and users can take advantage of quick data provisioning and application development. Users also enjoy independent compute and storage scalability. In this deployment model, users are responsible for database provisioning and management while the provider takes care of infrastructure deployment and management responsibilities.
◉ Dedicated deployment. Dedicated deployment allows the user to provision the autonomous database within a dedicated (unshared) cloud infrastructure. This deployment model has no shared processor, memory, network, or storage resources. Dedicated deployment offers greater control and customization over the entire environment and is ideal for users who want to tailor their autonomous database to meet specific organizational needs. Additionally, dedicated deployment allows for an easy transition from on-premise databases to a fully autonomous and isolated private database cloud.
Choosing an Autonomous Database
Autonomous databases offer many benefits to businesses. When you’re ready to evaluate the offerings available to your organization, weigh the capabilities and deployment options described above against your workloads and operational needs.
The Future of Autonomous Databases
Data is being generated today at a rate that is fast outpacing how quickly it can be manually managed and processed to efficiently and securely deliver business-critical insights. Because of their intelligent automation capabilities, autonomous databases offer enterprises many advantages over traditional databases. The expectation is that enterprises will increasingly migrate to this database model to enjoy these advantages, maintain a competitive edge, and gain the ability to refocus IT efforts on innovation rather than database management.
When it comes to database management software, Oracle Database is the top pick for most organizations. It has advanced features and capabilities that make it popular in the DBMS world. Oracle was designed for businesses and corporations, and within the software market it holds the leading role as a complete database solution. In other words, Oracle Database has some unique features that make it the best DBMS solution. Let’s see why Oracle is the best among all other database management systems.
Functionality: Oracle provides the combination of high-level security and integrated business solutions. As it fits perfectly with the needs of businesses and corporations, they are the most common users of Oracle Database. Today, Oracle dominates the corporate and banking industries; the top ten banks use Oracle to access and manage their data. Oracle is the best solution for corporations that have an enormous amount of data to handle and access.
Portability: Oracle is the best choice as a portable DBMS solution. It supports more than 100 hardware platforms and 20 networking protocols. You can also develop a fully featured Oracle application on any operating system without even knowing the underlying layers of that system. Oracle has user-friendly, easy-to-use features that work on any platform, making it the best portable database solution.
Quick recovery: If your system fails due to any technical issue, Oracle offers the quickest online backup and recovery features, so you can recover the existing database from the server when your system is back online. You can also perform a point-in-time recovery, because Oracle supports continuous backup and recovery from pre-defined server storage.
Speed and Performance: When you are managing a large database, speed and performance are everything. No matter how big the database is, Oracle provides the best access speed with the highest level of performance. The performance of Oracle is not only “raw” but includes locking and transaction controls, which also ensure the high-level security of your database system.
Multiple Database Support: Oracle uses the two-phase commit protocol to manage multiple databases within the same transaction. With multiple database support, you can move data from node to node, as well as mirror the data within the same network. It is one of Oracle’s unique features, allowing you to update more than one database with a single request.
Reliability: When it comes to handling complex tasks, Oracle delivers the best performance. It uses the ACID test to ensure the integrity of stored data. Reliable data storage is the main purpose of using a DBMS, and by passing the ACID test Oracle ensures top-level reliability of your system as well as high integrity of data storage.
Other advantages of Oracle database: Oracle offers the four main properties that every database must have:
◉ consistency,
◉ isolation,
◉ atomicity and
◉ durability.
These four properties ensure the capability of a database management system. Oracle Database is well developed with all of them, and thus provides a reliable and complete database system.
Oracle CX Commerce is one of the first and leading commerce platforms, with a proven history of innovation, and you can count on the Oracle CX Commerce 2020 Implementation Essentials certification, coded 1Z0-1068-20, to help you make the most of modern commerce technologies.
An Oracle Commerce Cloud 2020 Implementation Specialist has proven the expertise needed to implement and configure the service, covering Store Implementation Requirements and Design, Storefront Development, Search Engine Optimization (SEO), Managing Multiple Sites in a Single OCC Instance, Custom Payment Integration, and Account-Based Commerce (B2B).
Oracle Commerce presents all you need to delight clients, promote brand engagement, and produce more revenue and loyalty. To leverage all these features, one needs to pass the Oracle 1Z0-1068-20 exam. This article will discuss Oracle 1Z0-1068-20 exam details and how to ace the exam with the right approach.
To earn Oracle CX Commerce 2020 Implementation Essentials certification, you need to write and pass the 1Z0-1068-20 exam. Preparing for the exam requires that you do your best and revise all the available study resources. Below are useful tips that you can follow to have an effective preparation.
6 Easy Steps to Deliver High Results in the Oracle 1Z0-1068-20 Certification Exam
1. Understand Oracle 1Z0-1068-20 Exam
Make sure you understand what the Oracle 1Z0-1068-20 exam involves. Go online and research all that the exam is about. The best way to do this is by exploring the official page for the Oracle CX Commerce 2020 Implementation Essentials certification. Know the format and the objectives of the exam. Don’t make the mistake of starting your preparation before knowing the exam objectives in detail.
Main Oracle 1Z0-1068-20 Exam objectives are the following:
Store Implementation Requirements and Design
Storefront Development
Multiple Sites in a Single Oracle Commerce Cloud Instance
Search Engine Optimization (SEO)
Custom Payment Integration
Account-Based Commerce (B2B)
2. Obtain a Relevant Study Guide
It is important to get a study guide for effective preparation. With an appropriate study guide, you can focus on the essential information related to the exam topics. These are described comprehensively, making complicated concepts and technology easy to understand. The study guide will also show you the areas to which you need to pay more attention.
3. Join an Online Community or Forum
Many applicants study for different Oracle exams every year. Joining a community for exam preparation can be an excellent way to communicate with like-minded people. You can share ideas with others, obtain insight from those who’ve passed the exam, and pick up knowledge and resources to help you prepare. Online forums or groups will let you and other candidates share tips, questions, challenges, and experiences.
4. Take Help from Experts
While adopting a self-study approach, some of the concepts might be difficult to understand. Taking help from experts will let you ask questions and will also resolve your doubts. You can follow blogs and articles published by experts on relevant topics. Interacting with experts allows you to gain solid knowledge of difficult concepts.
5. Take Oracle 1Z0-1068-20 Practice Test
Practice tests for the Oracle 1Z0-1068-20 exam are a useful tool. They help you get familiar with the exam structure and equip you for all types of questions set in the exam. They are also an excellent way to identify weak areas and work on them, but you need to get practice tests from reliable platforms. Once you’re consistently scoring above 90% on mock tests, you can be confident about passing the actual exam. This is a perfect way to assess your understanding and know the areas that need improvement.
6. Take Enough Rest
Get a good night’s sleep the night before the examination. Candidates who are well rested tend to score better in exams. Also, make sure you arrive at the testing center well before the scheduled start time. This will allow you to refresh before you start the exam.
Final Outcome
With all these tips outlined, now is the time to begin your preparation. In today’s competitive IT industry, an Oracle certification can be your secret weapon for climbing the corporate ladder and standing out in front of potential hiring managers.
Oracle Application Express (APEX) 20.2 is now generally available!
Oracle APEX is the world's most popular low-code platform for enterprise apps and enables you to build scalable, secure enterprise apps, with world-class features, that can be deployed anywhere.
This release of APEX introduces several new features and enhancements to help developers be more productive than ever before. From the all-new Cards component, Automations, and REST Data Synchronization, to charts in Faceted Search, REST Connector Plug-ins, Report Printing improvements, the Redwood Light theme style, and an exceptional code editing experience across all of App Builder, APEX 20.2 is an epic release that has something for everyone. Here are some of the feature highlights:
All New Cards Component
The new Cards component unlocks a powerful new way to present data and provides developers with the ultimate control and flexibility over the user interface.
The Cards region provides easy customization of nearly every aspect of the card UI, including the layout, appearance, icon, badge, media, advanced HTML expressions that support client-side templating, and for the first time, the ability to define multiple actions per card. This means that you can include any number of actions, such as links or buttons, on a single card, all without writing a single line of code.
Automations
Automations are a conditional set of actions that are automatically triggered by changes that occur within database tables or remote data sources. They are used to monitor data and then perform the appropriate action (examples are auto-approving specific requests and sending email alerts). An automation can be triggered on Schedule or on Demand, by invoking the APEX_AUTOMATION API.
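For example, a minimal PL/SQL sketch for triggering an automation on demand might look like the following (the application ID and static ID are hypothetical placeholders to adapt to your app):
begin
    -- Run an automation on demand; the IDs below are illustrative only
    apex_automation.execute(
        p_application_id => 100,
        p_static_id      => 'auto-approve-requests' );
end;
/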
Faceted Search Enhancements
◉ Bar or pie charts of facet value counts
Quickly display a chart of facet value counts in a dialog or 'dashboard' area.
◉ Groups of checkbox facets for Boolean columns
Checking the facet will find records that match the 'yes' or 'true' value of the column.
◉ Input Field facet type
Input Field facet type supports comparing a user-entered value with the facet column. This enables faceted searches such as finding stores within a user-entered number of miles, or records where a column contains user-entered text.
◉ Performance optimization for distinct value facets
Report Printing
◉ Built-in PDF printing and Excel download
Built-in APIs to print PDF and download Excel in interactive reports and classic reports.
◉ Interactive Report - Send E-Mail
All download formats can now be attached.
◉ Enhanced integration with Oracle BI Publisher
◉ New APIs to generate files
Introduced new APEX_REGION.EXPORT_DATA and APEX_DATA_EXPORT APIs to programmatically generate PDF, CSV, Excel, HTML, JSON and XML files (see the sketch after this list).
◉ More language support
Built-in PDF now supports Chinese, Japanese and Korean languages.
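As a rough sketch of the new export APIs (the query and table name are illustrative assumptions), generating a PDF programmatically might look like:
declare
    l_context apex_exec.t_context;
    l_export  apex_data_export.t_export;
begin
    -- Open a query context over an illustrative local table
    l_context := apex_exec.open_query_context(
        p_location  => apex_exec.c_location_local_db,
        p_sql_query => 'select * from employees' );
    -- Export the result set as a PDF document
    l_export := apex_data_export.export(
        p_context => l_context,
        p_format  => apex_data_export.c_format_pdf );
    apex_exec.close( l_context );
end;
/
In an APEX session you would typically hand l_export to apex_data_export.download to send the generated file to the browser.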
REST Data Source Synchronization
APEX supports data synchronization from a REST Data Source (formerly known as Web Source Modules) to a local table. Synchronization can run either on Schedule or on Demand, by calling the APEX_REST_SOURCE_SYNC package. Developers don't need to build custom PL/SQL code in order to copy data from REST services to local tables; APEX provides this as a declarative option.
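As a minimal sketch of the on-demand option (the procedure parameter and static ID are assumptions; check the APEX_REST_SOURCE_SYNC documentation for the exact signature), a synchronization call might look like:
begin
    -- Synchronize an illustrative REST Data Source into its local table
    apex_rest_source_sync.synchronize_data(
        p_module_static_id => 'my_rest_source' );
end;
/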
◉ More control over REST Source Data
REST Source Data can be appended or merged to the local table. Replacing all local data is also supported.
◉ Local or REST Source Data
APEX components using the REST Data Source can be configured to use the local table instead.
◉ Configure Technical Details
Technical details like HTTP request limits, commit intervals or delete methods for the Replace mode are configurable.
◉ Automatic Table Generation
REST Data Source Connector Plug-Ins
The APEX Plug-In infrastructure has been extended to support Connector Plug-Ins for external REST APIs. This enables APEX to fully leverage REST API features like result pagination or server-side filtering, often used with 3rd Party REST Services (note: APEX natively supports ORDS and Oracle Fusion SaaS Services today!)
◉ REST Service Handling
The Plug-In code handles REST service-specific implementation details like the pagination style or how filters are passed to the REST API
◉ Automatically Pass Relevant Information
When APEX invokes a REST Data Source (e.g. to render a report), the engine will invoke the Plug-In code and pass all relevant context information.
◉ HTTP Requests
The Plug-In code executes one or multiple HTTP requests and passes results back to the APEX engine.
◉ Automatic Processing
APEX processes the REST response received from the Plug-In.
New Web Credential Types
APEX 20.2 introduces new URL Query String and HTTP Header types for Web Credentials. This allows developers to use the highly secure and encrypted credential storage also for REST Services which expect an element (like an API key) as part of the URL. APEX makes sure that such sensitive parts are not written to debug or execution logs.
A web credential can now be protected by providing a URL pattern. APEX will only use the Web Credential for URLs starting with the given pattern; otherwise an error message will be raised. To change the URL pattern, the secret part of the Web Credential needs to be entered again.
Redwood Light Theme Style
Universal Theme now provides a new Redwood Light theme style for your apps, available via Theme Roller. This new theme style aligns with Oracle's new user experience design system, and provides the new design, color, and textures that extend across all of Universal Theme.
Refresh your existing apps to uptake the latest version of Universal Theme and this new theme style.
Developer Experience
◉ Multiple Tabs in Property Editor
Page Designer has been enhanced to support multiple tabs in the Property Editor pane, making it more efficient to access the attributes of a region.
◉ Monaco Editor
Oracle APEX now uses Monaco Editor to provide a vastly improved coding experience throughout the development environment. The new editor provides in-context code completion, syntax highlighting, and superior accessibility.
◉ Inspect the SQL, PL/SQL and JavaScript code with Ease
The Embedded Code utility allows developers to inspect the SQL, PL/SQL and JavaScript contained within an APEX application. Having the ability to view the embedded code makes conducting tasks such as code reviews, security evaluations, or application tuning far more convenient. Code can be saved to the file system from the App Builder, or using the APEXExport utility.
◉ Quick SQL Enhancements
Quick SQL now enables you to save and load data models, provides automatic primary key column defaults, and delivers far better coding experience with improved syntax highlighting and autocompletion for table and column directives.
New and Improved Items
◉ New Checkbox
This single checkbox offers an alternative to the Switch item type for Boolean columns. This works in Interactive Grid as well, even when not in edit mode. The previous Checkbox type has been renamed to Checkbox Group.
◉ File Browse
File Browse item type has been enhanced to support rendering as a drop zone, supporting the drag & drop of a file to be uploaded.
◉ Rich Text Editor
Rich Text Editor item type has been upgraded to use CKEditor 5 and now supports markdown output.
◉ Text Field
The Text Field item type has a new Text Case setting to optionally transform the user-entered text to upper or lower case. The Text Field Trim Spaces and Text Case settings and the Textarea Trim Spaces setting are now applied on the client as well as the server.
SQL is a programming language for Relational Databases. It is designed over relational algebra and tuple relational calculus. SQL comes as a package with all major distributions of RDBMS.
SQL comprises both data definition and data manipulation languages. Using the data definition properties of SQL, one can design and modify database schema, whereas the data manipulation properties allow SQL to store and retrieve data from the database.
Data Definition Language
SQL uses the following set of commands to define database schema −
CREATE
Creates new databases, tables, and views in an RDBMS.
For example −
Create database dbexamstudy;
Create table article (id int, title varchar(100));  -- column list is illustrative
Create view for_students as select id, title from article;
DROP
Drops views, tables, and databases from an RDBMS.
For example −
Drop object_type object_name;
Drop database dbexamstudy;
Drop table article;
Drop view for_students;
ALTER
Modifies database schema.
Alter object_type object_name parameters;
For example −
Alter table article add subject varchar(100);
This command adds an attribute in the relation article with the name subject of string type.
Data Manipulation Language
SQL is equipped with data manipulation language (DML). DML modifies the database instance by inserting, updating, and deleting its data, and is responsible for all forms of data modification in a database. SQL contains the following set of commands in its DML section −
◉ SELECT/FROM/WHERE
◉ INSERT INTO/VALUES
◉ UPDATE/SET/WHERE
◉ DELETE FROM/WHERE
These basic constructs allow database programmers and users to enter data into the database and retrieve it efficiently using a number of filter options.
SELECT/FROM/WHERE
◉ SELECT − This is one of the fundamental query commands of SQL. It is similar to the projection operation of relational algebra. It selects the attributes based on the condition described by the WHERE clause.
◉ FROM − This clause takes a relation name as an argument from which attributes are to be selected/projected. If more than one relation name is given, this clause corresponds to a Cartesian product.
◉ WHERE − This clause defines the predicate or conditions that must match in order for the attributes to qualify to be projected.
For example −
Select author_name
From book_author
Where age > 50;
This command will yield the names of authors from the relation book_author whose age is greater than 50.
INSERT INTO/VALUES
This command is used for inserting values into the rows of a table (relation).
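For example, assuming the book_author relation from the previous query (the sample values are illustrative):
Insert into book_author (author_name, age)
Values ('Jane Smith', 62);
This command adds a row for an author named Jane Smith, aged 62, to the relation book_author.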
On July 8, perhaps a bit lost among the excitement of Autonomous Database on Exadata Cloud@Customer and Dedicated Region Cloud@Customer, Oracle announced availability of Exadata Cloud@Customer X8M. Oracle Exadata X8M was announced back at OOW 2019, along with Gen 2 Exadata Cloud@Customer. Many would have thought that meant Gen 2 Exadata Cloud@Customer was built on Exadata X8M, but it was not. Rather, until now, Gen 2 Exadata Cloud@Customer was based on Exadata X8, the second-best database platform on the planet. For those of you waiting for Exadata Cloud@Customer to support the latest Exadata technology, your wait is over.
What’s the big deal about a letter being added to the model number—after all, both have the same CPUs, memory, and storage? Those components stay the same, but Exadata X8M boosts its interconnect fabric from a 40Gb/sec InfiniBand Network to a 100Gb/sec Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) network. It also upgrades the Xen hypervisor technology to the more popular Kernel Virtual Machine (KVM). In addition, Exadata Cloud@Customer X8M almost doubles the usable memory in each database server, and Oracle Exadata X8M adds 1.5TB of persistent memory (PMEM) to each storage server.
As with many things Exadata, it is the software innovation and how Oracle uses industry standard hardware components that truly differentiate Exadata from other solutions. Exadata X8M uses PMEM as a new tier of storage in the storage servers, maximizing its benefit by allowing it to be shared by all databases in the platform, and improving its HA characteristics in the event of a failure. More importantly, Exadata exposes PMEM as memory modules, enabling database clients to use RDMA to access the data in PMEM directly over the network, without any involvement from the OS or network stack on the storage cell, dramatically reducing latency by 10x for sensitive operations like database reads and commit log writes.
The magic software is the “Data Accelerator” and “Commit Accelerator,” features that speed up data reads and commit writes, respectively. To appreciate the importance of the Exadata approach to using PMEM, we need to look at a few numbers. The latency to read an 8K database block from a network-attached NVMe device is around 190usec, with about 100usec of the time spent sending the block across the network. PMEM is about 10x faster than flash, so simply replacing flash with PMEM significantly drops the device read latency. However, most of that improvement is masked by the 100usec network latency. To realize the benefit of PMEM’s speed, we need to eliminate most of the network latency. Exadata X8M does that using RDMA. RDMA bypasses the OS and network stack, enabling the NIC on the remote database server to directly read and write to the memory location on the storage server. The result is an average of 19usec to read that same 8K block, 10x faster than before. The Commit Accelerator provides a similar benefit to latency-sensitive commit writes to the REDO logs.
The result of these accelerators is improved response time for OLTP applications, which will spend less time waiting for data and commit acknowledgements. The 2.5x faster RoCE network will allow data warehouse and analytics applications to send more data more quickly to the database servers for processing, and the doubling of memory in the database servers will allow for more databases in a single rack. Overall, these enable higher consolidation levels, which in turn lowers TCO.
Exadata Cloud@Customer X8M will outperform any on-premises storage arrays, even those using PMEM, due to its smart software and RDMA access directly from the database. It is hardly even worth comparing to other cloud solutions: AWS, with its network-attached flash storage, promises 1msec access to data, over 50x slower.
So, if you’re interested in moving to a Cloud business model, but need to keep the data local, and want to get the same performance available on the best on-premises database platform, Oracle Exadata X8M, you should look at Oracle Exadata Cloud@Customer X8M—the best “in datacenter” database cloud platform is now even better.
MS SQL Server is a database product of Microsoft. It allows users to write and execute SQL queries. It is among the most stable, secure, and reliable database solutions, and it supports a wide variety of transaction processing, analytics, and business intelligence applications in corporate IT environments.
What is Oracle Database?
Oracle Database is an RDBMS from Oracle Corporation. The software is built around the relational database framework and allows data objects to be accessed by users using the SQL language. Oracle is a completely scalable RDBMS architecture which is widely used all over the world.
Oracle is also one of the biggest vendors in the IT market, and “Oracle” is the shorthand name of its flagship RDBMS product, formally called Oracle Database.
Early History of Microsoft SQL:
In 1987, Microsoft partnered with Sybase Solutions to develop a DBMS that could compete with those of other IT giants like IBM and Oracle. The two companies agreed that Sybase would have all the selling rights and profit earned from the product designed for non-Microsoft platforms.
Microsoft, in turn, would have the rights to the database product designed for Microsoft platforms. In 1989, the first version of the database server product was released.
Later, Microsoft purchased all the rights from Sybase and changed the product’s name to MS SQL Server. To date, about 30 versions of MS SQL Server have been released.
KEY DIFFERENCE
◉ Oracle runs on a wide variety of platforms, while SQL Server installs on Windows Server only.
◉ Oracle supports star query optimization, while SQL Server doesn’t offer star query optimization.
◉ In Oracle, values do not change before commit, whereas in SQL Server values are changed even before commit.
◉ Oracle allows rollback during the transaction process, whereas SQL Server doesn’t allow rollback in the transaction process.
◉ Oracle supports many "schemas" within the instance, whereas SQL Server offers "schemas" within each user database.
◉ Oracle allows database, full, file-level, incremental, and differential backups; SQL Server, on the other hand, allows full, partial, and incremental backups.
◉ Oracle uses both "after" and "before" triggers whereas SQL server mostly uses only "after" triggers.
Early History of Oracle:
Oracle Corporation has come a long way to become the multinational technology company it is today.
In 1977, the Oracle organisation was founded by two developers, Larry Ellison and Bob Miner. Both had prior experience in developing database software for different organizations.
In 1978, they were able to build the first RDBMS using SQL. This software became known as the Oracle Relational Database Management System.
Oracle became the first company to sell RDBMS software, and by 1982 it had yearly revenue of $2.5 million.
Features of Microsoft SQL Server
1. Support tools: SQL Server Profiler, BI tools, SQL Server Management Studio, and Database Tuning Advisor
2. Offers online documentation and live product support
3. Provides advanced customization options for data type mappings and for deleting and renaming objects
4. Displays error and warning messages about the migration in a progress window
5. A single, integrated environment for SQL Server Database Engine management and authoring
6. Resizable dialogs allow access to multiple tools while a dialog is open
7. An activity monitor feature with filtering and automatic refresh
8. Importing and exporting from SQL Server Management Studio
Features of Oracle
1. Ease of data recovery compared to other databases
2. The RDBMS can easily handle large amounts of data
3. Allows you to change platforms at any time
4. Gives the option for scale-up and scale-out strategies
5. Allows you to rerun actual production workloads, including online user and batch workloads, in test environments
6. Support for hardware- and OS-specific virtualization technologies
7. Provides VMware support for test and production SAP environments
8. If the primary database becomes unavailable, the standby database can act as the primary database
9. The standby can also be used for read-write, reporting, testing, or backups, reducing the load on the primary database
10. Uninterrupted processing for users, which eliminates the need for manual recovery
Difference between SQL Server and Oracle
Parameters | MS SQL Server | Oracle
---------- | ------------- | ------
Parent company | Owned by Microsoft Corporation. | Owned by Oracle Corporation.
Syntax | Simpler and easier syntax. | More complex but more efficient syntax.
Download | 120- or 180-day evaluation versions are available from www.microsoft.com/sqlserver. | A free version can be downloaded from otn.oracle.com.
Platform support | Installs on Windows Server only, though version 2017 onwards can be installed on Linux. | Runs on a wide variety of platforms.
Language | Uses Transact-SQL (T-SQL). | Uses PL/SQL.
Job scheduling | Job scheduling via SQL Server Agent. | Job scheduling via Oracle Scheduler or OEM.
Bitmap indexes | No bitmap indexes or indexes based on reverse keys and functions. | Uses bitmap indexes and indexes based on functions and reverse keys.
Query optimization | No star query optimization. | Uses star query optimization.
Triggers | Mostly uses "after" triggers. | Uses both "after" and "before" triggers.
Support & troubleshooting | Provides technical notes, bug descriptions, scripts, patches, and downloads at no additional charge. | Support calls are chargeable for each support case.
Rollback | Not allowed in the transaction process. | Rollback is allowed during the transaction process.
Concurrent access | Concurrent access is not allowed while a write is in progress, which increases wait times. | Concurrent access is permitted, and wait times are generally shorter.
Change of value | Values are changed even before commit. | Values do not change before commit.
Method | Uses a row or page blocking method; it never allows a read while the page is blocked. | Uses a copy of the record, so reads of the original data are allowed while it is being modified.
Error handling | Executes each command separately, so it is quite difficult to make changes if any errors are encountered during the process. | Treats each new database connection as a new transaction.
Human intervention | Follows global memory allocation, so there is less intrusion by the database admin and fewer chances of human error. | Follows dynamic memory allocation; the DBA has to intervene more, so there are higher chances of human error.
Parallel execution | INSERT, UPDATE, and DELETE statements are executed serially. | Executes INSERT, UPDATE, DELETE, and MERGE statements in parallel.
Automation support | SQL Upgrade Advisor is available for automation. | Database Upgrade Assistant is available for automation.
Redo stream | Redo streams are unique to each user and database. | One redo stream at the database level.
Schemas | "Schemas" within each user database. | Many "schemas" within the instance.
Protection | Logins authenticated at the instance level and database level. | Users authenticated via database credentials and OS roles.
Sharability | Every database has its own, unshared disk file on the server. | All database objects are grouped by schemas; database objects are shared among all schemas and users.
Backups | Allows full, partial, and incremental backups. | Allows database, full, file-level, incremental, and differential backups.
Maintenance | Tables are usually stored index-organized. | Automatically updates statistics and identifies SQL issues.
The fastest cloud database service is now the most scalable
Oracle Exadata Cloud Service, the world’s fastest cloud database service, is now even faster, supporting the latest Exadata X8M hardware and bringing the benefits of Intel® Optane™ Persistent Memory (PMEM) and Remote Direct Memory Access over Converged Ethernet (RoCE) to the Oracle Cloud. On top of that, Exadata Cloud Service X8M redefines elasticity in the cloud, with configurations that scale to over 1,600 cores and 25 PB of HCC compressed data.
How did Exadata Cloud become even faster? With smart use of persistent memory and the RoCE networking fabric, Exadata Cloud now has 50x lower storage access latency than AWS databases. With typical storage access latency as low as 19usec, Exadata X8M is now up to 10x faster than before, boosting performance of a wide range of OLTP and Analytical applications. What are you going to do with all that speed? You can consolidate more workloads with less infrastructure and fewer CPUs, and ultimately lower your total cost of ownership. To boost Exadata’s consolidation capabilities, ExaCS X8M will have almost double the usable memory (1,390 GB per server) and double the local disk space for Oracle Homes.
But Exadata Cloud Service X8M is not just catching up to its sibling Exadata Cloud@Customer, it is redefining elasticity in the cloud. Exadata Cloud Service now supports independent scaling of compute and storage up to 32 compute servers and 64 storage cells. This provides three key benefits to customers:
◉ No workload is too large: Customers can scale to support huge workloads with up to 1,600 cores, 44 TB memory, 96 TB PMEM, 1.6 PB Flash, and 2.5 PB of usable data (25 PB after 10x HCC Compression).
◉ Right-sized systems: Customers can choose the optimum mix of storage and compute. They are no longer required to deploy compute and storage in fixed ratios, eliminating the need to pay for unnecessary infrastructure. They can choose what is best for their workload.
◉ On-demand Elasticity: Grow your system at any time by adding additional compute and storage servers. Pay for that capacity only when you need it.
To make this work, we have redefined the concept of an Exadata in the cloud. Gone are the fixed rack sizes (Quarter, Half, and Full). Now, you start out with a minimal elastic configuration (happens to be the same as a quarter rack) of two database servers and three storage servers, and you add database and storage servers one or more at a time as required. The Oracle cloud takes care of provisioning these servers from pools of RoCE interconnected compute and storage, building the ideal system as you go.
Start with a small quarter rack and grow it over time as your business grows. If you need more storage, add more. If your system needs additional compute to remain balanced, add compute. Pay for this additional infrastructure only when you need it. With maximum configurations of 32 compute servers and 64 storage cells, be confident that the system can scale to meet future growth.
Exadata Cloud Service X8M will be available globally in the next few weeks. Pricing is unchanged from Exadata Cloud Service X8—you get persistent memory and the new faster RoCE interconnect for no additional charge. You can try it out with little investment as we allow you to provision a system for as little as 48 hours.
Oracle’s Autonomous Database has been gaining momentum as the world’s simplest way to develop and deploy data-driven applications in the cloud. Now the extreme availability feature Autonomous Data Guard (AuDG), previously available for Shared Infrastructure deployments, is also available on Dedicated Infrastructure deployments.
Autonomous Database on Dedicated Infrastructure provides the entire technology stack (storage, network, compute) to a single tenant and adds operational customization and policy controls that can influence how the autonomous software layer performs its tasks.
The addition of Autonomous Data Guard improves service uptime as necessary for more business-critical or mission-critical database deployments, protecting against failure scenarios that include, among other things, site-level outages. Autonomous Data Guard on Dedicated Infrastructure adds functionality commonly requested by customers, delivering high availability and disaster recovery in an architecture similar to that found in the most critical on-premises Oracle deployments.
Autonomous Data Guard on Dedicated Infrastructure is a manifestation of Oracle’s Maximum Availability Architecture (MAA), incorporating autonomous operation of high availability best practices found in mission critical deployments from Oracle’s extensive set of enterprise customers. Autonomous Data Guard includes key technologies: Real Application Clusters (RAC), Active Data Guard (ADG), and Transparent Application Continuity (TAC).
◉ RAC enables the database to scale horizontally for both read and write operations while working in tandem with TAC to provide an always-online maintenance experience, and protects against the most common failures: local hardware- and software-related failures.
◉ ADG is a database replication technology that protects against site-level outages and block-level data corruptions that can occur due to hardware failures. ADG provides a complete standby replica of your database to take over in the event of severe disasters where entire data center sites are lost. The standby site can also be actively used for reporting during normal operation.
◉ TAC is an intelligent driver technology that works with connection pooling, connection services, and Oracle database state management capabilities to rebuild the session state of running transactions on a surviving cluster server, masking hardware- and software-level failures in a way that preserves all in-flight transactions.
In the Autonomous Database on Dedicated Infrastructure, the above capabilities are autonomously operated to provide a zero downtime application experience for any kind of failure event. Autonomous Data Guard on Dedicated Infrastructure includes the following:
◉ Autonomous Data Guard enabled at container creation
◉ Enabled containers filtered and selectable at Autonomous Database creation
◉ Switchover, failover, and reinstate controls for customer validation testing
◉ Transport and apply lag metrics and alerting for Recovery Time Objective analysis
◉ Symmetric standby – primary storage and compute scale actions mirrored on the standby
◉ Autonomous Database clone into an AuDG-enabled container, to move existing databases into an AuDG protection level of service
◉ Standby site read-only connections and operation
◉ Backup operations from both the primary and standby sites
◉ Protection modes (set when enabled):
   – Max Availability – synchronous replication of database transactions
   – Max Performance – asynchronous replication of database transactions
In this blog we are going to discuss the various types of database data compression, the differences between the compression types and when to use (and not use) the different data compression types.
Basic Table Compression
If you are not familiar with Basic Table Compression, then some important points to know about Basic Table Compression are that it is a free data compression capability and it is included with Oracle Database Enterprise Edition. Basic Compression compresses data that is loaded using bulk load operations (direct path), but does not compress data that is added/updated through conventional path DML operations (INSERT or UPDATE). If INSERTS and UPDATES are performed on a Basic compressed table/partition over time, then that table/partition would have to be re-compressed to get the changes compressed.
USAGE: Basic Table Compression isn’t intended for OLTP applications, and instead, is best suited for data warehouse applications (read-mostly) where data is loaded using bulk load operations and is never (or very rarely) modified.
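For example, Basic Table Compression can be enabled when a table is created, as in this sketch (the table and its columns are illustrative):
CREATE TABLE sales_history (
    sale_id    NUMBER,
    sale_date  DATE,
    amount     NUMBER )
  ROW STORE COMPRESS BASIC;
Remember that only direct path loads into this table (for example, INSERT /*+ APPEND */ or direct path SQL*Loader) are compressed.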
Advanced Row Compression
Advanced Row Compression is the data compression feature of Advanced Compression that uses the same algorithm as Basic Compression, but differs from Basic Compression in that Advanced Row Compression maintains data compression during all types of data manipulation operations, including conventional path DML such as INSERT and UPDATE.
Advanced Row Compression (and Basic Compression) use a compression algorithm specifically designed to eliminate duplicate values within a database block, even across multiple columns. The compression ratio achieved in a given environment depends on the data being compressed, specifically the cardinality of the data. In general, organizations can expect to reduce their storage space consumption by a factor of 2x to 4x by using Advanced Row Compression and/or Basic Compression. That is, the amount of space consumed by uncompressed data will be two to four times larger than that of the compressed data.
USAGE: Advanced Row Compression is intended for both OLTP and Data Warehouse applications.
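For example (the table is illustrative; Advanced Row Compression is a feature of the Oracle Advanced Compression option):
CREATE TABLE orders (
    order_id  NUMBER,
    customer  VARCHAR2(100),
    status    VARCHAR2(20) )
  ROW STORE COMPRESS ADVANCED;
Unlike Basic Compression, rows added or changed later through conventional INSERT and UPDATE statements remain compressed.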
Hybrid Columnar Compression (HCC)
Unlike both Basic and Advanced Row Compression, Oracle’s Hybrid Columnar Compression technology utilizes a combination of both row and columnar methods for storing data. This hybrid approach achieves the compression benefits of columnar storage, while avoiding the performance shortfalls of a pure columnar format. A logical construct called the compression unit (CU) is used to store a set of hybrid columnar compressed rows. When data is loaded, column values for a set of rows are grouped together and compressed. After the column data for a set of rows has been compressed, it is stored in a compression unit. To maximize storage savings with Hybrid Columnar Compression, data must be loaded using bulk loading (direct path) techniques. Examples of commonly used bulk load operations include: INSERT statements with the APPEND hint, parallel DML, direct path SQL*Loader, and Create Table as Select (CTAS). In general, organizations can expect to reduce their storage space consumption by a factor of 6x to 15x+ by using Hybrid Columnar Compression.
USAGE: Hybrid Columnar Compression is best suited for data warehouse applications (read-mostly) where data is loaded using bulk load operations and is never (or very rarely) modified. While HCC compressed data can be modified using conventional path Data Manipulation Language (DML) operations, such as UPDATE and INSERT, HCC is best suited for applications with no, or very limited DML operations. If frequent UPDATE and INSERT operations are planned on a table or partition, then Advanced Row Compression (a feature of Oracle Advanced Compression) is better suited for such data. Hybrid Columnar Compression automatically compresses new data from SQL INSERT ... SELECT statements, without the APPEND hint and array inserts from programmatic interfaces such as PL/SQL and the Oracle Call Interface (OCI).
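For example, an HCC table might be created and bulk loaded as in this sketch (the tables are illustrative, and HCC requires supported storage such as Exadata):
CREATE TABLE sales_archive
  COLUMN STORE COMPRESS FOR QUERY HIGH
  AS SELECT * FROM sales;

-- Later direct path loads keep the data in compression units
INSERT /*+ APPEND */ INTO sales_archive
  SELECT * FROM sales_staging;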
One of my least favorite database administration activities is managing users – creating users, changing passwords, granting roles – zero fun. Even further down on my scale of fun ways to spend an afternoon is the governance that goes with user management – which users have left the organization? Do they still need the privileges they have? It’s important work, but it is a constant never-ending drumbeat of repetitive tasks that seems to always interrupt more interesting activities.
Fortunately, with a few built-in features of the database (in other words, not extra-cost options, not extra-cost software that we have to maintain) we can make someone else do that tedious work and free up our time for higher value tasks.
The solution? Microsoft Active Directory. I haven’t worked with many organizations in the past 15 years that do NOT have Active Directory running. And fortunately, the Oracle Database plays well with Active Directory and has ever since Oracle 9i. Even better, starting with Oracle Database 18c and improving in Oracle Database 19c that integration with Active Directory has become easier to set up and requires even less maintenance.
To integrate with Active Directory we are going to use two database features – Kerberos authentication, and Centrally Managed Users (note: Centrally Managed Users is an Enterprise Edition feature). When I started to write this I realized I was going WAY beyond a reasonable length for a blog, so I’m splitting this up across two different entries, one for Kerberos, the other for Centrally Managed Users. If you’d prefer to see the details on Kerberos in video form, please take a look at this YouTube video.
At the end of this post I've got a link to another video that goes into more depth on implementation and troubleshooting.
We will use Kerberos to authenticate database users. Each Active Directory domain controller is also a Kerberos Distribution Center. Kerberos can be used standalone to authenticate database users (in place of a password). A lot of times I’ll advise my clients to just use Kerberos, because that places password management on the Active Directory team’s plate and also gets me out of having to worry about immediately deprovisioning a database user when someone leaves the company. Once the account is deleted from Active Directory, that account can no longer log in to the database; we can clean it up when we get around to it. Even if we are going to configure Centrally Managed Users (CMU), I’ll usually configure Kerberos first, because once that’s done, adding on CMU is just a few more minutes’ worth of work.
To setup Kerberos we will need to make changes in three places:
◉ Database Server
◉ Client Workstation
◉ Active Directory
On the Database Server we configure a new network file called a krb5.conf. This file tells the Kerberos libraries where the Kerberos Distribution Center (from here on out I’m just going to call it the domain controller) is located, what port it is listening on, and which alias to send to that server. A typical krb5.conf file looks like this:
[libdefaults]
default_realm = DBSECLABS.COM
clockskew = 6000
passwd_check_s_address = false
noaddresses = true
forwardable = yes
[realms]
DBSECLABS.COM = {
kdc = DBSECLABS.COM:88
}
[domain_realm]
DBSECLABS.COM = DBSECLABS.COM
.DBSECLABS.COM = DBSECLABS.COM
dbseclabs.com = DBSECLABS.COM
.dbseclabs.com = DBSECLABS.COM
The file has three sections:
◉ libdefaults – parameters that control how Kerberos is going to behave
◉ realms – for each Kerberos domain, where are the domain controllers, and what port do they listen for Kerberos requests on (almost always port 88)? Pro tip: notice I don’t actually have a server name or IP address; instead I use the Active Directory domain name, and DNS sends me to whichever domain controller is available. This builds in fault tolerance and lets me survive the constant maintenance of domain controllers.
◉ domain_realm – these are aliases for the domains listed in the realms section. The value on the left side of the equal sign is the alias, the value on the right is the entry in the realms section that this alias should direct requests to. Kerberos is case sensitive, so I list my aliases in both upper and lower case because I never can be sure what format a client will use.
The next change we need to make on the database server is to install a Kerberos keytab file. The AD administrator needs to:
1. Create a service account for our database server – this is just a regular Active Directory user account, nothing special. Because it’s a service account, I usually set “Password never expires,” but follow your organization’s standards.
2. Have your Active Directory administrator create a keytab for you. The command they will use to do this will look like this:
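A typical invocation uses the Windows ktpass utility; the service principal, account name, password, and output file below are illustrative assumptions:
ktpass -princ oracle/dbserver.dbseclabs.com@DBSECLABS.COM -mapuser oradbserver -pass <password> -crypto AES256-SHA1 -ptype KRB5_NT_PRINCIPAL -out dbserver.keytab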
3. Copy the keytab to our database server. I usually put the keytab in the $ORACLE_HOME/network/admin directory. If I’m using the new read-only $ORACLE_HOME feature, I put it in $ORACLE_BASE_HOME/network/admin instead.
The last file we work with is our sqlnet.ora file. In this file we will add seven new parameters:
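A representative set of entries is shown below (the file paths and clock-skew value are illustrative assumptions; verify the parameter names against your release):
SQLNET.AUTHENTICATION_SERVICES = (BEQ, KERBEROS5)
SQLNET.AUTHENTICATION_KERBEROS5_SERVICE = oracle
SQLNET.KERBEROS5_CONF = /u01/app/oracle/network/admin/krb5.conf
SQLNET.KERBEROS5_CONF_MIT = TRUE
SQLNET.KERBEROS5_KEYTAB = /u01/app/oracle/network/admin/dbserver.keytab
SQLNET.KERBEROS5_CC_NAME = /tmp/krb5cc
SQLNET.KERBEROS5_CLOCKSKEW = 6000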
These parameters are described in the Database Net Services Reference guide so I’ll save space by not defining them here. One of the entries, sqlnet.kerberos5_conf, points to the krb5.conf file we discussed earlier. Another, sqlnet.kerberos5_keytab, points to the keytab file generated above.
Copy the krb5.conf file to the client workstation and update the client's sqlnet.ora with the relevant parameters:
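On a Linux client, for example, the relevant sqlnet.ora entries might look like this sketch (the paths are illustrative assumptions):
SQLNET.AUTHENTICATION_SERVICES = (BEQ, KERBEROS5)
SQLNET.KERBEROS5_CONF = /home/oracle/krb5.conf
SQLNET.KERBEROS5_CONF_MIT = TRUE
SQLNET.KERBEROS5_CC_NAME = /tmp/krb5cc_oracle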
That’s it – now we are ready to begin using Kerberos. First, we create an externally authenticated database user (identified by the Kerberos principal name) in the database and grant that user the CREATE SESSION privilege.
SQL> create user RUSS identified externally as 'rlowenth@DBSECLABS.COM';

User created.

SQL> grant create session to RUSS;

Grant succeeded.
Now we get a Kerberos ticket from Active Directory and use it to login to the database. The "rlowenth" you see in the Kerberos principal name above, and the okinit command below, is my Active Directory username.
[oracle@russ-test ~]$ okinit rlowenth
Kerberos Utilities for Linux: Version 19.0.0.0.0 - Production on 18-SEP-2020 22:41:01
Copyright (c) 1996, 2019 Oracle. All rights reserved.