There are six new Data Pump features in Oracle Database 20c; here are short descriptions of each:
1. Oracle Data Pump checksums support for cloud migrations
The new CHECKSUM parameter can be used for validation, as a checksum is now added to the dumpfile. Oracle Data Pump can be, and mostly is, used for migrating data from on-premises Oracle databases into the Oracle Public Cloud. You can use the checksum to help confirm that the file is valid after a transfer to or from the Oracle Cloud object store. Checksums are also useful after saving dumpfiles to on-premises locations, for confirming that the dump files have not been changed accidentally or maliciously.
Here is an example of how to use the new CHECKSUM parameter – I am exporting my own schema. The parameter accepts two values:
YES – Oracle calculates a file checksum for each dump file in the export dump file set
NO – Oracle does not calculate file checksums
The checksum is calculated at the end of the Data Pump job. Notice that below I am not specifying the other new parameter, CHECKSUM_ALGORITHM, so its default value SHA256 is used. The other supported algorithms are CRC32, SHA384 and SHA512.
SQL> !expdp dbexam DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp CHECKSUM=YES
Export: Release 20.0.0.0.0 - Production on Sat May 30 07:20:55 2020
Version 20.2.0.0.0
Copyright (c) 1982, 2020, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 20c EE High Perf Release 20.0.0.0.0 - Production
Starting "DBEXAM"."SYS_EXPORT_SCHEMA_01": dbexam/********@//localhost:1521/novopdb1.laika7.laika.oraclevcn.com DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp CHECKSUM=YES
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PASSWORD_HISTORY
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "DBEXAM"."BLOGS" 9.983 MB 73991 rows
. . exported "DBEXAM"."SALES" 14.38 MB 295240 rows
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . exported "DBEXAM"."RDBMS_BRANDS" 7.726 KB 12 rows
. . exported "DBEXAM"."CLIENTS" 6.007 KB 2 rows
. . exported "DBEXAM"."T" 5.476 KB 1 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "DBEXAM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Generating checksums for dump file set
******************************************************************************
Dump file set for DBEXAM.SYS_EXPORT_SCHEMA_01 is:
/u01/app/oracle/admin/ORCL/dpdump/9D45645C541E0B7FE0530206F40AE9E9/jmd.dmp
Job "DBEXAM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat May 30 07:24:59 2020 elapsed 0 00:04:01
You probably noticed the ORA-39173 warning at the end: it appears because I did not specify an encryption password while exporting encrypted data. It is just a warning, not a real error.
It goes without saying that COMPATIBLE must be set to at least 20.0.
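On the receiving side, the checksum can then be verified. As a sketch, assuming the same directory object and dump file as above, impdp can validate the dump file checksums without importing anything via the VERIFY_ONLY parameter (or verify them as part of a real import with VERIFY_CHECKSUM=YES):
impdp dbexam DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp VERIFY_ONLY=YES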
2. Oracle Data Pump exports from Autonomous Database
Starting with Oracle Database 20c, Data Pump can perform exports from Oracle Autonomous Database into dump files in a cloud object store. This means we can now easily migrate data out of an Oracle Autonomous Database and import it into another location.
New in 20c is the CREDENTIAL parameter, which enables the export to write dump files to object stores. The CREDENTIAL parameter changes how expdp interprets the text string in DUMPFILE. If the CREDENTIAL parameter is not specified, then the DUMPFILE parameter can specify an optional directory object and file name in directory-object-name:file-name format. If the CREDENTIAL parameter is used, then it provides authentication and authorization for expdp to write to one or more object storage URIs specified by DUMPFILE.
Here is an example assuming that we have already created the credential_name JMD_OBJ_STORE_CRED:
expdp dbexam DUMPFILE=https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/dbexam/data_pump/jmd.dmp
CREDENTIAL=jmd_obj_store_cred
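For completeness, here is a minimal sketch of creating such a credential with DBMS_CLOUD.CREATE_CREDENTIAL; the user name and auth token below are placeholders:
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'JMD_OBJ_STORE_CRED',
    username        => 'oci_user@example.com',  -- placeholder cloud user name
    password        => 'my_auth_token'          -- placeholder auth token
  );
END;
/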
3. Oracle Data Pump includes and excludes in the same operation
Oracle Data Pump 20c can include and exclude objects in the same export or import operation, meaning that Data Pump commands can now contain both INCLUDE and EXCLUDE parameters in the same operation. By enabling greater specificity about what is being migrated, this enhancement makes it easier to migrate to Oracle Cloud, or to another on-premises Oracle database.
Note: when you include both parameters in a command, Oracle Data Pump processes the INCLUDE parameter first and includes all objects identified by it. Then it processes the EXCLUDE parameter, eliminating the excluded objects from the included set. Here is an example that includes only two tables (SALES and CLIENTS) but excludes all indexes except the primary keys. A real use case: you want to enable Oracle Auto Indexing in ADB, so while importing the data you need to drop all indexes except the PKs:
expdp dbexam SCHEMAS=DBEXAM DUMPFILE=dbexam.dmp REUSE_DUMPFILES=YES
INCLUDE=TABLE:\"IN \(\'CLIENTS\',\'SALES\'\)\"
EXCLUDE=INDEX:\"LIKE \'IDX\%\'\"
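The backslashes above are needed only because the INCLUDE and EXCLUDE values are typed at the OS prompt. Putting them in a parameter file avoids the escaping altogether; here is a sketch with a hypothetical file name dbexam.par:
SCHEMAS=DBEXAM
DUMPFILE=dbexam.dmp
REUSE_DUMPFILES=YES
INCLUDE=TABLE:"IN ('CLIENTS','SALES')"
EXCLUDE=INDEX:"LIKE 'IDX%'"
The export is then simply run as:
expdp dbexam PARFILE=dbexam.par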
4. Oracle Data Pump parallelizes transportable tablespace metadata operations
Starting with Oracle Database 20c, Data Pump improves transportable tablespace (TTS) metadata operations with parallelism: TTS exports and imports can now be done with a degree of parallelism greater than 1.
Parallelism higher than 1 improves TTS export and import performance, especially when there are very many database objects in the data files, including tables, indexes, partitions and subpartitions. The real benefit will probably show when exporting packaged application schemas from SAP, EBS, etc.
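A sketch of what such a job could look like, with a hypothetical tablespace name (remember that the tablespace must be read-only before a TTS export): the degree of parallelism is set with the PARALLEL parameter, and the %U substitution variable lets the parallel workers write into multiple dump files:
expdp system DIRECTORY=data_pump_dir DUMPFILE=tts_%U.dmp TRANSPORT_TABLESPACES=sales_ts PARALLEL=4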
5. Oracle Data Pump provides optional index compression
In Oracle Database 20c, Data Pump supports optional index compression on import, including imports into Autonomous Database, by introducing a new TRANSFORM parameter clause, INDEX_COMPRESSION_CLAUSE. Thus, you can control whether index compression is performed during import.
If NONE is specified in the INDEX_COMPRESSION_CLAUSE, then the index compression clause is omitted (and the index is given the default compression for the tablespace). However, if you use compression, then Oracle recommends that you use COMPRESS ADVANCED LOW. Indexes are created with the specified compression.
If the index compression clause is more than one word, then it must be contained in single or double quotation marks. Also, your operating system may require you to enclose the clause in escape characters, such as the backslash character. Here is an example of how to use the INDEX_COMPRESSION_CLAUSE:
TRANSFORM=INDEX_COMPRESSION_CLAUSE:\"COMPRESS ADVANCED LOW\"
Specifying this transform changes the type of compression for all indexes in the job.
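Put into a complete command, importing the dump file from the first example with advanced low index compression could look like this (quotes again escaped for the OS prompt):
impdp dbexam DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp TRANSFORM=INDEX_COMPRESSION_CLAUSE:\"COMPRESS ADVANCED LOW\"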
6. Oracle Data Pump resumes transportable tablespace jobs
Starting with Oracle Database 20c, Data Pump can resume transportable tablespace export and import jobs that were stopped due to errors or other problems. Oracle Data Pump’s ability to resume these stopped jobs saves time and makes the system more available.
Transportable jobs are now restartable at or near the point of failure.
To restart the job JMD_EXP_20C, first perform:
expdp system/password attach=jmd_exp_20c
Then restart the job with:
Export> continue_client
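If you do not remember the job name, the stopped job can be looked up in DBA_DATAPUMP_JOBS before attaching:
SQL> SELECT owner_name, job_name, state FROM dba_datapump_jobs;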