ZDM – Logical Offline Migration with Standard Edition 2

In today’s post, we will focus on using ZDM to migrate a database from Standard Edition to Enterprise Edition. I remember once taking part in a conversation in which high-ranking managers were surprised that anyone could use anything other than the Enterprise edition. Well, Standard Edition is not extinct (and probably will not become extinct any time soon), unlike the Yeti…
My test environment looked like this:

Zero Downtime Migration – the tool from Oracle that I used for the migration. ZDM provides many migration options in several categories – physical/logical, online/offline.
In the case of Standard-to-Enterprise edition migrations we are limited to the logical methods only (i.e. Data Pump or replication with GoldenGate). The tool has become even more important with multicloud solutions such as Oracle@Azure and Oracle@Google.
Both databases are in version 19.23 and are visible as pluggable databases. Logical migration requires configuring a transfer medium. In the case of my lab environment (OCI and only OCI 🙂 ) the choice was simple – OCI Object Storage -> an OCI bucket.
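For reference, the transfer-medium part of the response file boils down to a few entries like the ones below (an illustrative sketch – the bucket name comes from my environment, and the exact parameter set should always be cross-checked against the response file template shipped with your ZDM release):

# logical offline migration via OCI Object Storage
MIGRATION_METHOD=OFFLINE_LOGICAL
DATA_TRANSFER_MEDIUM=OSS
DATAPUMPSETTINGS_DATABUCKET_NAMESPACENAME=<my_object_storage_namespace>
DATAPUMPSETTINGS_DATABUCKET_BUCKETNAME=zdmlogicalmigration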
The final command used for the migration is presented below.

zdmcli migrate database \
-sourcedb POLITSE_SRC \
-rsp /home/zdmuser/zdm_logical_offline.rsp \
-sourcenode sourceseoda -srcauth zdmauth \
-srcarg1 user:opc \
-srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa \
-srcarg3 sudo_location:/usr/bin/sudo \
-targetnode targetseoda \
-tgtauth zdmauth \
-tgtarg1 user:opc \
-tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa \
-tgtarg3 sudo_location:/usr/bin/sudo \
-ignoreadvisor
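
Before the actual run it is worth executing the same command with the -eval option first – ZDM then runs only the validation phases without moving any data:

zdmcli migrate database ... -eval     # same arguments as above, plus -eval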

The final log for the ZDM migration job:

[zdmuser@zdm-host-tz ~]$ $ORACLE_HOME/bin/zdmcli query job -jobid 55
zdm-host-tz.sub04182101030.odaapilot.oraclevcn.com: Audit ID: 597
Job ID: 55
User: zdmuser
Client: zdm-host-tz
Job Type: "MIGRATE"
Scheduled job command: "zdmcli migrate database -sourcedb POLITSE_SRC -rsp /home/zdmuser/zdm_logical_offline.rsp -sourcenode sourceseoda -srcauth zdmauth -srcarg1 user:opc -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode targetseoda -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -ignoreadvisor"
Scheduled job execution start time: 2024-06-19T14:59:58Z. Equivalent local time: 2024-06-19 14:59:58
Current status: SUCCEEDED
Result file path: "/home/zdmuser/zdm/config/chkbase/scheduled/job-55-2024-06-19-15:00:23.log"
Metrics file path: "/home/zdmuser/zdm/config/chkbase/scheduled/job-55-2024-06-19-15:00:23.json"
Excluded objects file path: "/home/zdmuser/zdm/config/chkbase/scheduled/job-55-filtered-objects-2024-06-19T15:00:31.835.json"
Job execution start time: 2024-06-19 15:00:23
Job execution end time: 2024-06-19 15:06:58
Job execution elapsed time: 6 minutes 34 seconds
ZDM_VALIDATE_TGT ...................... COMPLETED
ZDM_VALIDATE_SRC ...................... COMPLETED
ZDM_SETUP_SRC ......................... COMPLETED
ZDM_PRE_MIGRATION_ADVISOR ............. COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_SRC .... COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT .... COMPLETED
ZDM_PREPARE_DATAPUMP_SRC .............. COMPLETED
ZDM_DATAPUMP_ESTIMATE_SRC ............. COMPLETED
ZDM_PREPARE_DATAPUMP_TGT .............. COMPLETED
ZDM_DATAPUMP_EXPORT_SRC ............... COMPLETED
ZDM_UPLOAD_DUMPS_SRC .................. COMPLETED
ZDM_DATAPUMP_IMPORT_TGT ............... COMPLETED
ZDM_POST_DATAPUMP_SRC ................. COMPLETED
ZDM_POST_DATAPUMP_TGT ................. COMPLETED
ZDM_POST_ACTIONS ...................... COMPLETED
ZDM_CLEANUP_SRC ....................... COMPLETED

There were a few issues along the way that needed to be resolved.
Let’s look at them one by one.

  • Incorrect OCIDs in the parameter file

    You must enter the OCIDs in the ZDM parameter file carefully. The Base Database Service DB system has one OCID, the container database another, and the pluggable database yet another (see the illustration below).

    The affected parameters: TARGETDATABASE*, SOURCEDATABASE*, SOURCECONTAINERDATABASE*
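
    A quick sanity check is to look at the resource-type prefix of each OCID (the values below are shortened and purely illustrative):

    ocid1.dbsystem.oc1...            -> Base Database Service DB system
    ocid1.database.oc1...            -> container database (CDB)
    ocid1.pluggabledatabase.oc1...   -> pluggable database (PDB)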
  • No passwordless SSH connection from the ZDM host to the source and target systems

    My oversight – fixed as sketched below.
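
    A minimal sketch of the fix, run as zdmuser on the ZDM host (hostnames as in the command above; on OCI compute nodes password logins are disabled, so the public key has to be appended to ~opc/.ssh/authorized_keys on each node through whatever access you already have):

    ssh-keygen -t rsa -f /home/zdmuser/.ssh/id_rsa -N ""
    cat /home/zdmuser/.ssh/id_rsa.pub    # add this key to ~opc/.ssh/authorized_keys on both nodes
    ssh -i /home/zdmuser/.ssh/id_rsa opc@sourceseoda hostname
    ssh -i /home/zdmuser/.ssh/id_rsa opc@targetseoda hostname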
  • Failed CPAT (Cloud Premigration Advisor Tool) report

    Although the main purpose of CPAT is to check the environment before a migration to an Autonomous Database, it is worth having it at hand during every migration. CPAT is started automatically as a dedicated step of the migration job. Unfortunately, it is not fully supported for Standard Edition. It can be ignored in the migration job using the appropriate parameter (-ignoreadvisor in the command above). This does not mean it will not run – it will, but failed check statuses will not block the ZDM migration job.
    Another topic is the lack of full CPAT support for Standard Edition – the following warnings appear during the run.

    CPAT-4008: Warning: CPAT has detected the database being analyzed is not Oracle Enterprise Edition. The instance is not fully supported by CPAT and results are on a ‘best effort’ basis. Some validation checks may not work properly, and extra care must be assumed by the user when interpreting results

    Failed. The Cloud Premigration Advisor was unable to complete its analysis. Please contact Oracle Support Services.
  • ORA-39173: Encrypted data has been stored unencrypted in dump file set

    It seems that Data Pump dumps should be encrypted. But how? Data Pump has an ENCRYPTION_PASSWORD parameter that enables this, but it is only available in Enterprise Edition. Can this behaviour be disabled in ZDM? It does not look like such an option exists. The solution was simply to ignore this error in the Data Pump export phase – the bucket is encrypted by default anyway.

    IGNOREEXPORTERRORS=ORA-31684,ORA-39111,ORA-39082,ORA-39173
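
    The "encrypted by default" part can be double-checked with the OCI CLI (bucket name taken from my job log; a null kms-key-id simply means Oracle-managed keys are used – the data is still encrypted at rest):

    oci os bucket get --bucket-name zdmlogicalmigration --query 'data."kms-key-id"'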
  • CPAT’s has_common_objects check failed

    In my test environment the databases were created using the Oracle Base Database Service.
    It turns out that a common user named C##DBLCMUSER appears in such a database. What is it for?
    It is needed to automate database backups and is used by the OCI dbaascli tool. Common objects are not migrated by ZDM, so we have to recreate them on the target (unless the tooling does it itself).

    The C##DBLCMUSER user is very nicely described in the note below:

    Creating PDB hang due to excessive execution of common user C##DBLCMUSER
    grant command (Doc ID 2963803.1)

    There is one more common object – the profile C##DBLCMPROFILE associated with the C##DBLCMUSER user. Interestingly, it turns out that profiles can easily be excluded from the export.
    It looks much worse in the case of the user: it seems you cannot exclude it from the process, but you can remap it – and you must not forget about it.
    After some experimentation, the following two parameters and values were used in the final migration job run:

    EXCLUDEOBJECTS-1=owner:.*,objectName:"C##DBLCMPROFILE",objectType:PROFILE
    DATAPUMPSETTINGS_METADATAREMAPS-1=type:REMAP_SCHEMA,oldValue:C##DBLCMUSER,newValue:DBLCMUSER
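
    A simple pre-check against the source CDB makes such surprises visible up front (illustrative queries):

    -- non-Oracle-maintained common users that will need remapping or manual handling
    SELECT username FROM dba_users WHERE common = 'YES' AND oracle_maintained = 'NO';
    -- common profiles defined in the CDB
    SELECT DISTINCT profile FROM dba_profiles WHERE common = 'YES';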

  • Disappearing dumps when multi-phased mode is enabled during import.

    Out of curiosity, I divided the import into several phases.

    DATAPUMPSETTINGS_METADATAFIRST=TRUE

    This mode is mainly used to inject modifications between individual import phases. I had no such need – I was curious how exactly the phases would appear, in the context of whether the import could be optimized by parallelizing the index creation. It turns out there is a problem: while the additional phases did appear, the dumps are not kept between the phases.
zdm-host-tz: 2024-06-19T13:25:33.777Z : deleting Data Pump dump in directory path /u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/log/1B2C755344AC494AE0634C01000AC853 on node targetseoda ...
zdm-host-tz: 2024-06-19T13:25:34.301Z : Execution of phase ZDM_DATAPUMP_IMPORT_USER_TGT completed
zdm-host-tz: 2024-06-19T13:25:34.320Z : Executing phase ZDM_DATAPUMP_IMPORT_METADATA_TGT
zdm-host-tz: 2024-06-19T13:25:34.585Z : Oracle Data Pump Import parallelism set to 16 ...
zdm-host-tz: 2024-06-19T13:25:34.588Z : Oracle Data Pump errors to be ignored are ORA-39083,ORA-65141...
zdm-host-tz: 2024-06-19T13:25:34.588Z : starting Data Pump Import for database "PDB1.TARGETPRIVATESU.ODAAPILOT.ORACLEVCN.COM"
zdm-host-tz: 2024-06-19T13:25:34.588Z : running Oracle Data Pump job "ZDM_38_DP_IMPORT_METADATA_189" for database "PDB1.TARGETPRIVATESU.ODAAPILOT.ORACLEVCN.COM"
zdm-host-tz: 2024-06-19T13:25:45.885Z : Oracle Data Pump log ZDM_38_DP_IMPORT_METADATA_189.log uploaded to Oracle cloud storage bucket zdmlogicalmigration
zdm-host-tz: 2024-06-19T13:26:16.040Z : ------------------------------------------------------
zdm-host-tz: 2024-06-19T13:26:16.041Z : Oracle Data Pump job ZDM_38_DP_IMPORT_METADATA_189 completed with 4 non-ignorable errors
zdm-host-tz: 2024-06-19T13:26:16.041Z : ------------------------------------------------------
Non-ignorable errors found in Oracle Data Pump job ZDM_38_DP_IMPORT_METADATA_189 log are
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/log/1B2C755344AC494AE0634C01000AC853/ZDM_38_DP_EXPORT_4928_dmp_1_01.dmp" for read
ORA-27037: unable to obtain file status
ORA-39012: Client detached before the job started.
  • There are settings that assume that the target database will be an Autonomous Database

    Oracle@Azure allows the use of ADB databases (both Dedicated and Serverless – running on Oracle-managed infrastructure). In my exercise I was interested only in a traditional database.

    In this use case we have to focus on the two parameters below:

    DATAPUMPSETTINGS_DATAPUMPPARAMETERS_RETAININDEX and DATAPUMPSETTINGS_SECUREFILELOB

    By default indexes are not created and LOBs are automatically converted from BasicFile (Standard Edition) to SecureFile (Enterprise Edition). Unfortunately, SecureFiles have some limitations related to tablespaces, so an additional manual step was needed to adjust the tablespace parameters to allow SecureFile storage as part of the migration exercise (see the sketch below).
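
    The usual tablespace-related limitation of SecureFiles is that they require automatic segment space management (ASSM); assuming that is the restriction in play here, the check and fix look roughly like this (the tablespace name is hypothetical, and segment space management cannot be changed on an existing tablespace – it has to be created that way):

    -- tablespaces on the target that cannot hold SecureFile LOBs
    SELECT tablespace_name, segment_space_management FROM dba_tablespaces
     WHERE segment_space_management <> 'AUTO';
    -- recreate the offending tablespace with ASSM before re-running the import
    CREATE TABLESPACE app_lob_data DATAFILE SIZE 10G AUTOEXTEND ON
      SEGMENT SPACE MANAGEMENT AUTO;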
  • Besides the output log file, ZDM also produces a metrics file from the run. For the Data Pump migration mode it is not very rich in information, and, strangely, the final progress is reported as 0.
	{
	    "dataPumpJobs": [
	        {
	            "name": "ZDM_55_DP_IMPORT_7657",
	            "type": "IMPORT",
	            "progress": "0"
	        },
	        {
	            "name": "ZDM_55_DP_EXPORT_2380",
	            "type": "EXPORT",
	            "progress": "0"
	        }
	    ]
        }

In the end I had to put quite a few ORA- errors into the ZDM parameter file to make the ZDM migration run smoothly. Well, why should it be any different 🙂