Domain 7: Migrate Oracle Databases to OCI Database Services (20%)
Domain 7 is the second-heaviest domain on the OCI 2025 Migration Architect Professional exam (1Z0-1123-25), representing approximately 10 questions. It covers the full spectrum of Oracle database migration strategies, tools, and services. You must know when to use each migration method, understand the tradeoffs between physical and logical approaches, and be able to design end-to-end migration plans for various source environments.
1. OCI Database Services Overview
Before selecting a migration method, you must know what target services are available and which workloads they serve. (OCI Database Overview)
| Service | Deployment | Use Case | Key Characteristics |
|---|---|---|---|
| Autonomous Database Serverless (ADB-S) | Shared Exadata infrastructure | OLTP, data warehouse, JSON, APEX workloads | Fully managed; auto-scaling, auto-patching, auto-tuning; no OS/infrastructure access |
| Autonomous Database Dedicated (ADB-D) | Dedicated Exadata infrastructure | Regulated workloads, multi-tenant consolidation | Dedicated Exadata; customer controls maintenance schedule; Fleet Administration |
| Exadata Database Service (ExaDB-D) | Dedicated Exadata infrastructure | High-performance OLTP/mixed, RAC, large databases | Full Oracle Database control; RAC supported; highest I/O throughput |
| ExaDB Cloud@Customer (ExaDB-C@C) | On-premises Exadata managed by OCI | Data sovereignty, latency-sensitive apps | Same OCI control plane, hardware in customer data center |
| ExaDB on Exascale (ExaDB-XS) | Exascale storage infrastructure | Cost-effective Exadata for smaller workloads | Exadata performance without dedicated infrastructure commitment |
| Base Database Service (BM) | Bare metal | Full OS control, single-instance workloads | Direct hardware access; single-node only; customer manages patching and backups |
| Base Database Service (VM) | Virtual machine | General-purpose Oracle Database; 2-node RAC option | Scalable OCPU/storage; fast provisioning; customer manages DB |
| MySQL HeatWave | Managed MySQL | MySQL workloads with analytics | OLTP + in-memory analytics in one service; MySQL-compatible |
| OCI PostgreSQL | Managed PostgreSQL | PostgreSQL-native workloads | Managed PostgreSQL with OCI integration |
| OCI NoSQL | Managed NoSQL | Low-latency key-value and document workloads | Serverless; automatic sharding |
Exam trap: Autonomous Database targets require logical migration methods (Data Pump, GoldenGate). You cannot use RMAN backup/restore or Data Guard standby to migrate directly to ADB-S. Physical migration methods (RMAN, Data Guard) work with ExaDB-D, ExaDB-C@C, and Base Database Service targets.
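The target-service constraint in the exam trap above can be captured as a small lookup. A minimal sketch (the service keys and category names are shorthand for this illustration, not official identifiers):

```python
# Migration-category support per OCI target, mirroring the exam trap above:
# Autonomous targets accept only logical methods; infrastructure targets
# (Exadata, Base Database) accept both physical and logical.
TARGET_METHODS = {
    "ADB-S":     {"logical"},
    "ADB-D":     {"logical"},
    "ExaDB-D":   {"physical", "logical"},
    "ExaDB-C@C": {"physical", "logical"},
    "BaseDB":    {"physical", "logical"},
}

def allowed(target: str, category: str) -> bool:
    """True if the migration category (physical/logical) is valid for the target."""
    return category in TARGET_METHODS.get(target, set())
```

For example, `allowed("ADB-S", "physical")` is `False`, which immediately rules out RMAN and Data Guard for an Autonomous target.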
2. Physical vs. Logical Migration
Every migration method falls into one of two categories. Understanding this distinction is fundamental to every decision on this exam.
| Attribute | Physical Migration | Logical Migration |
|---|---|---|
| How data moves | Block-level copy of datafiles (exact replica) | Row-by-row export/import of data and metadata |
| Tools | RMAN, Data Guard, Unplug/Plug | Data Pump, GoldenGate, DMS (logical mode) |
| Speed | Faster (block copy) | Slower (row processing) |
| Cross-version | Source and target must be same major version (or upgrade path supported) | Cross-version supported (export from lower, import to higher) |
| Cross-platform | Same endianness required (or RMAN CONVERT) | Platform-independent |
| Cross-character-set | Must match exactly | Can convert (with caveats) |
| Schema restructuring | Not possible | Remap tablespaces, schemas, datafiles during import |
| ADB target | Not supported | Supported |
| Typical downtime | Minutes (Data Guard switchover) to hours (RMAN restore) | Hours (Data Pump) to minutes (GoldenGate) |
(Oracle Database Migration Technologies)
Exam trap: "Physical migration is always faster" is generally true for data transfer speed, but Data Guard physical migration achieves lower downtime than a full RMAN restore because it uses redo apply to stay synchronized. Do not confuse transfer speed with downtime.
3. RMAN-Based Migration
RMAN (Recovery Manager) is Oracle's native backup and recovery utility. For migration, it creates a physical copy of the database that can be restored on OCI. (Migrate with RMAN)
3.1 Migration Patterns
| Pattern | How It Works | When to Use |
|---|---|---|
| Backup to Object Storage + Restore | Install Oracle Database Cloud Backup Module (SBT library) on source; RMAN backup to OCI Object Storage; restore on target | No FastConnect/VPN available; large databases; internet-based transfer acceptable |
| RMAN Duplicate from Active Database | `DUPLICATE TARGET DATABASE ... FROM ACTIVE DATABASE`; streams data over the network from source to target | FastConnect or VPN in place; target DB instance already provisioned; want a one-step copy |
| Backup to NFS/shared storage + Restore | RMAN backup to NFS mount visible to both source and target; restore on target | Source and target share network-accessible storage |
3.2 Oracle Database Cloud Backup Module
This module provides an SBT (System Backup to Tape) interface that integrates RMAN with OCI Object Storage. It requires:
- JDK 1.7+
- OCI Object Storage bucket
- Authentication credentials (tenancy OCID, user OCID, API key fingerprint, private key file)
- The `libopc.so` (Linux) or `libopc.dll` (Windows) SBT library
Configuration: `CONFIGURE CHANNEL DEVICE TYPE SBT PARMS='SBT_LIBRARY=/path/libopc.so, SBT_PARMS=(OPC_PFILE=/path/opc_config.ora)'`
(Back Up a Database to Object Storage Using RMAN)
3.3 Key Considerations
- Downtime: The source database must be quiesced for the final incremental backup, and the target is opened with RESETLOGS. Downtime is proportional to the size of the final incremental.
- Incremental strategy: Take a full backup first, then incremental backups. Restore full + apply incrementals. Final incremental captures last changes before cutover.
- Encryption: RMAN backups to Object Storage should be encrypted. Use `SET ENCRYPTION ON` with a password or use TDE wallet-based encryption.
- Cross-platform: RMAN supports cross-platform migration using `CONVERT DATABASE` or `CONVERT DATAFILE`, but source and target must have compatible endianness (or explicit conversion).
Exam trap: RMAN backup to Object Storage does NOT require VPN or FastConnect. The Cloud Backup Module uses HTTPS over the public internet. This makes it viable even without dedicated connectivity.
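The full-plus-incremental strategy above lends itself to scripting. A sketch that templates the RMAN script for the backup-to-Object-Storage pattern (the channel name `c1` and the file paths are illustrative; the SBT `PARMS` string follows the configuration shown in 3.2):

```python
def rman_backup_script(level: int, sbt_library: str, opc_pfile: str) -> str:
    """Build an RMAN backup-to-Object-Storage script for a given
    incremental level (0 = baseline full, 1 = incremental before cutover)."""
    return (
        "SET ENCRYPTION ON;\n"  # encrypt backup pieces before upload
        "RUN {\n"
        "  ALLOCATE CHANNEL c1 DEVICE TYPE SBT\n"
        f"    PARMS='SBT_LIBRARY={sbt_library},"
        f"SBT_PARMS=(OPC_PFILE={opc_pfile})';\n"
        f"  BACKUP INCREMENTAL LEVEL {level} DATABASE PLUS ARCHIVELOG;\n"
        "}\n"
    )
```

Run the level-0 script first, periodic level-1 scripts afterward, and a final level-1 during the quiesce window before opening the target with RESETLOGS.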
4. Data Pump Export/Import
Oracle Data Pump (expdp/impdp) is a logical migration tool that exports database objects as dump files and imports them into a target database. (Data Pump Migration Methods)
4.1 Export Modes
| Mode | Parameter | What It Exports |
|---|---|---|
| Full | `FULL=Y` | Entire database (all schemas, objects, data) |
| Schema | `SCHEMAS=HR,OE` | Specified schemas only |
| Table | `TABLES=HR.EMPLOYEES` | Specified tables only |
| Tablespace | `TABLESPACES=USERS` | All objects in the specified tablespaces |
| Transportable Tablespace | `TRANSPORT_TABLESPACES=USERS` | Metadata only; datafiles copied separately (physical + logical hybrid) |
4.2 Data Transfer Methods
| Method | How It Works | Best For |
|---|---|---|
| Dump file to Object Storage | Export to local dump file, upload to OCI Object Storage, import on target | Databases >50 GB; best parallelism; recommended by Oracle |
| Network link (DB link) | `NETWORK_LINK=dblink_to_source` on impdp; imports directly over the network without dump files | Smaller databases; simplicity; no intermediate storage needed |
| Dump file to NFS | Export to shared NFS mount; import from same mount on target | Source and target share accessible storage |
| Direct to/from Object Storage (21c+) | `CREDENTIAL` parameter with an OCI Object Storage URI | Oracle 21c+ databases; direct cloud object store integration |
4.3 Parallel Execution and Tuning
Parallelism is the primary performance lever for Data Pump.
| Parameter | Guidance |
|---|---|
| `PARALLEL` | Set to 0.25 x ECPU count (ADB) or 1 x OCPU count (standard). Start conservative; increase if CPU/I/O headroom exists. |
| `DUMPFILE` | Use the `%U` substitution variable for multiple dump files: `DUMPFILE=export_%U.dmp` |
| `FILESIZE` | Split dump files (e.g., `FILESIZE=10G`) for easier upload to Object Storage |
| `CLUSTER=N` | Disable RAC cluster-wide operation if only one node should handle the export |
| `EXCLUDE` / `INCLUDE` | Filter unnecessary objects (statistics, grants) to reduce export size |
| `CONTENT=DATA_ONLY` | Skip DDL if the schema is pre-created on the target |
| `REMAP_TABLESPACE` | Remap tablespace names during import: `REMAP_TABLESPACE=USERS:DATA` |
| `REMAP_SCHEMA` | Remap schema names during import |
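The `PARALLEL` rule of thumb above is easy to encode. A sketch (the 0.25x/1x factors come straight from the guidance; everything else is illustrative):

```python
import math

def datapump_parallel(cpu_count: int, target_is_adb: bool) -> int:
    """Starting PARALLEL value per the tuning guidance above:
    0.25 x ECPU count for Autonomous targets, 1 x OCPU count otherwise.
    Start here and raise the value only while CPU/I/O headroom remains."""
    factor = 0.25 if target_is_adb else 1.0
    return max(1, math.floor(cpu_count * factor))
```

For a 16-ECPU Autonomous target this yields `PARALLEL=4`; for an 8-OCPU Base Database target, `PARALLEL=8`.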
4.4 Key Considerations
- Cross-version: Data Pump supports importing from lower versions to higher versions. You can export from 11.2 and import into 19c.
- Character set: Data Pump handles character set conversion, but verify compatibility to avoid data loss.
- No ongoing sync: Data Pump is a point-in-time export. Changes after the export begins are NOT captured (unlike GoldenGate).
- Downtime: Proportional to database size. For large databases, downtime can be significant.
Exam trap: Network link Data Pump (NETWORK_LINK) transfers data directly without dump files, but performance depends entirely on network bandwidth for the entire duration. Oracle recommends Object Storage transfer for databases over 50 GB because parallelism is better. (OCI DMS Documentation)
5. Multitenant Architecture Migration (PDB)
Oracle Multitenant (12c+) provides two migration methods unique to the pluggable database architecture. (Remote PDB Cloning)
5.1 Remote Cloning (PDB Clone Over DB Link)
Remote cloning creates a copy of a PDB from a source CDB to a target CDB using a database link.
Steps:
- Create a database link on the target CDB pointing to the source CDB
- Execute `CREATE PLUGGABLE DATABASE target_pdb FROM source_pdb@dblink`
- The target CDB copies all datafiles over the network
Hot Cloning (12.2+): If the source CDB is in ARCHIVELOG mode and uses LOCAL UNDO, the source PDB can remain open read/write during cloning. No downtime is required for the clone operation itself.
Prerequisites:
- Same endianness (both little-endian for OCI)
- Compatible character sets
- Same or compatible database options installed
- `CREATE PLUGGABLE DATABASE` privilege for the user referenced by the DB link
- SQL*Net connectivity between source and target
OCI API support: Remote cloning can be initiated via OCI Console, CLI, SDKs, or Terraform using the RemoteClonePluggableDatabase API.
5.2 Unplug/Plug (PDB Relocation)
Unplug/Plug physically moves a PDB's datafiles from one CDB to another.
Steps:
- Close the PDB on the source CDB: `ALTER PLUGGABLE DATABASE pdb CLOSE`
- Unplug the PDB: `ALTER PLUGGABLE DATABASE pdb UNPLUG INTO '/path/pdb.xml'`
- Copy the PDB datafiles and the XML manifest to OCI (Object Storage, NFS, or direct copy)
- Plug into the target CDB: `CREATE PLUGGABLE DATABASE pdb USING '/path/pdb.xml'`
- Run compatibility checks and open the PDB
Key constraints:
- Source platform must be little-endian (or use RMAN CONVERT for endian conversion)
- Oracle Database 12c or higher required
- Source and target must be same or compatible database version
- PDB must be closed during unplug (downtime required)
Exam trap: Unplug/Plug requires the PDB to be CLOSED before unplugging. Remote Cloning (12.2+ hot clone) does NOT require the source PDB to be closed. If the question asks about "minimal downtime PDB migration," the answer is remote cloning with hot clone, not unplug/plug.
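The hot-clone prerequisites and the downtime trade-off above reduce to two small checks. A sketch (function and parameter names are illustrative):

```python
def can_hot_clone(cdb_version: tuple, archivelog: bool, local_undo: bool) -> bool:
    """Hot (no-downtime) remote PDB clone needs Oracle 12.2+,
    ARCHIVELOG mode, and LOCAL UNDO, per the prerequisites above."""
    return cdb_version >= (12, 2) and archivelog and local_undo

def pick_pdb_method(minimal_downtime: bool, hot_clone_ok: bool) -> str:
    # Minimal-downtime PDB migration -> remote hot clone; unplug/plug
    # always requires the PDB to be closed first.
    return "remote hot clone" if minimal_downtime and hot_clone_ok else "unplug/plug"
```

This mirrors the exam trap: when "minimal downtime PDB migration" is asked, the answer path leads to the remote hot clone, never unplug/plug.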
6. Data Guard for Migration
Oracle Data Guard maintains a synchronized standby database using redo transport and apply. When used for migration, the OCI database starts as a standby and becomes the primary via switchover. (Hybrid Data Guard Configuration)
6.1 Migration Architecture
On-Premises Primary ──── redo transport ────▶ OCI Standby
(source) (FastConnect/VPN) (target)
│
switchover ◀────────────────┘
│
OCI Primary (new) ◀─── role transition ──── On-Prem Standby (fallback)
6.2 Steps
- Provision target: Create a database on OCI (ExaDB-D, Base DB VM/BM) with the same version, character set, and options as the source
- Network configuration: Establish FastConnect or VPN connectivity; configure SQL*Net (port 1521) between primary and standby
- Instantiate standby: Use `RMAN DUPLICATE ... FOR STANDBY FROM ACTIVE DATABASE` or restore from backup
- Enable redo transport: Configure `LOG_ARCHIVE_DEST_n` to ship redo to the OCI standby
- Apply redo: Enable the managed recovery process (MRP) on the standby to apply redo in real time
- Validate synchronization: Confirm there is no apply lag using `V$DATAGUARD_STATS`
- Switchover: Execute `ALTER DATABASE SWITCHOVER TO target_db` (Data Guard Broker: `SWITCHOVER TO target_db`)
- Post-switchover: The OCI database is now primary; the on-premises database becomes a standby (fallback option)
6.3 Key Considerations
| Factor | Detail |
|---|---|
| Downtime | Seconds to minutes during switchover (application reconnection time) |
| Fallback | On-premises database remains as standby after switchover; switchback possible |
| Version | Source and target must be same major version |
| Network | Requires persistent, reliable connectivity (FastConnect recommended for production) |
| RAC support | Both primary and standby can be RAC |
| Encryption | Redo transport can be encrypted; TDE wallet must be copied to standby |
Exam trap: Data Guard switchover is a planned, graceful role transition with zero data loss. Failover is an unplanned transition (used in disaster recovery) that may lose data depending on protection mode. For migration, you always use switchover, not failover.
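The "validate synchronization" step above is the gate before switchover. A sketch of that check, operating on rows shaped like `V$DATAGUARD_STATS` output (the `(NAME, VALUE)` pair format and the `'+DD HH:MM:SS'` interval string are assumptions for this illustration; verify against your release):

```python
def ready_for_switchover(stats_rows) -> bool:
    """Gate a planned switchover on zero apply AND transport lag.
    `stats_rows` mimics V$DATAGUARD_STATS as (NAME, VALUE) pairs with
    VALUE as an interval string such as '+00 00:00:00'."""
    lags = {n: v for n, v in stats_rows if n in ("apply lag", "transport lag")}
    # Both rows must be present and both intervals must be zero.
    return len(lags) == 2 and all(v.endswith("00:00:00") for v in lags.values())
```

Only when this returns `True` should the `ALTER DATABASE SWITCHOVER` be issued, guaranteeing the zero-data-loss role transition the exam expects.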
7. GoldenGate for Zero-Downtime Migration
Oracle GoldenGate provides real-time data replication for zero-downtime (or near-zero-downtime) migration. It is the only method that supports heterogeneous databases, cross-version migration, and continuous bidirectional replication. (Oracle GoldenGate)
7.1 Architecture Components
| Component | Role |
|---|---|
| Extract | Captures committed transactions from the source database redo logs or transaction logs |
| Trail files | Sequential log files that store captured changes for transport |
| Distribution Service | Reads local trails and sends them to the target (Microservices Architecture) |
| Receiver Service | Accepts trail data and writes target-side trails (Microservices Architecture) |
| Replicat | Applies captured changes to the target database |
| Manager | Controls and monitors GoldenGate processes (Classic Architecture) |
7.2 Migration Pattern
- Initial Load: Full export of source data to target (via Data Pump, RMAN, or GoldenGate direct load)
- Change Data Capture: Extract captures ongoing changes from source redo logs starting from the SCN of the initial load
- Apply changes: Replicat applies trail data to the target, keeping it synchronized
- Cutover: When apply lag reaches near-zero, stop application writes to source, let Replicat drain remaining changes, switch application connections to target
7.3 OCI GoldenGate Service
OCI GoldenGate is a fully managed service. Key facts:
- No infrastructure to provision or manage
- Supports Oracle-to-Oracle and heterogeneous replication (Oracle to MySQL, PostgreSQL, Kafka, etc.)
- Integrates with OCI Vault for credential storage
- Auto-scaling and high availability built in
- Supports bidirectional replication for active-active configurations
- Replicates DDL operations by default
7.4 Key Considerations
| Factor | Detail |
|---|---|
| Downtime | Near-zero (seconds during application cutover) |
| Cross-version | Supported (e.g., 11.2 to 19c) |
| Cross-platform | Supported (including non-Oracle sources) |
| Bidirectional | Supported for active-active and fallback scenarios |
| Supplemental logging | Must be enabled on source database (ALTER DATABASE ADD SUPPLEMENTAL LOG DATA) |
| Initial load time | Varies by database size; runs in parallel with CDC |
| Complexity | Highest of all methods; requires Extract/Replicat configuration and monitoring |
Exam trap: GoldenGate requires supplemental logging on the source. Without it, GoldenGate cannot capture enough information to replicate DML operations. This is a prerequisite that candidates forget.
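The cutover sequence in 7.2 can be sketched as a small orchestration loop. The three environment interactions are injected as callables (all hypothetical stand-ins for real monitoring and application controls):

```python
import time

def goldengate_cutover(get_lag_seconds, stop_source_writes, redirect_apps,
                       threshold=5, poll_interval=1.0):
    """Cutover per the migration pattern above: wait until Replicat lag
    is under `threshold` seconds, freeze source writes, let Replicat
    drain the remaining trail data to zero, then repoint applications."""
    while get_lag_seconds() > threshold:   # 1. wait for near-zero lag
        time.sleep(poll_interval)
    stop_source_writes()                   # 2. freeze the source
    while get_lag_seconds() > 0:           # 3. drain remaining changes
        time.sleep(poll_interval)
    redirect_apps()                        # 4. switch connections to target
```

The downtime window is only steps 2-4: the seconds it takes to drain the residual lag and repoint connections, which is why GoldenGate is the near-zero-downtime option.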
8. OCI Database Migration Service (DMS)
OCI Database Migration (DMS) is a fully managed service that automates database migration to OCI. It is internally driven by Zero Downtime Migration (ZDM) but abstracts the complexity behind a console-based workflow. (OCI Database Migration Overview)
8.1 Architecture
┌──────────────────────┐ ┌─────────────────────────────┐
│ DMS Service Tenancy │ │ Customer Tenancy │
│ (Oracle-managed) │ │ │
│ │ PE │ ┌─────────┐ ┌─────────┐ │
│ ZDM Engine ─────────┼──────┤ │ Source │ │ Target │ │
│ │ │ │ DB Conn │ │ DB Conn │ │
│ │ │ └─────────┘ └─────────┘ │
│ │ │ │ │ │
│ │ │ ┌────┴────┐ ┌─────┴────┐ │
│ │ │ │ Source │ │ Target │ │
│ │ │ │ Database │ │ Database │ │
│ │ │ └─────────┘ └──────────┘ │
│ │ │ │
│ │ │ ┌──────────────────────┐ │
│ │ │ │ OCI GoldenGate │ │
│ │ │ │ (online migrations) │ │
│ │ │ └──────────────────────┘ │
│ │ │ │
│ │ │ ┌──────────────────────┐ │
│ │ │ │ Object Storage / │ │
│ │ │ │ DB Link (data xfer) │ │
│ │ │ └──────────────────────┘ │
└──────────────────────┘ └─────────────────────────────┘
- DMS runs in Oracle's service tenancy, not the customer's
- Communicates with customer resources via Private Endpoints (PEs)
- Uses OCI Vault to store database credentials securely
- Data flows through GoldenGate and Data Pump within the customer tenancy
8.2 Migration Types
| Type | Mechanism | Downtime | GoldenGate Required |
|---|---|---|---|
| Offline | Data Pump export/import (point-in-time copy) | Hours (proportional to DB size) | No |
| Online | Data Pump initial load + GoldenGate CDC replication | Near-zero (seconds at cutover) | Yes (auto-provisioned) |
8.3 Migration Lifecycle
| Phase | What Happens |
|---|---|
| Create connections | Register source and target databases with DMS; provide credentials stored in OCI Vault |
| Create migration | Define migration type (online/offline), data transfer medium (Object Storage or DB link), include/exclude schemas |
| Validate | DMS runs pre-migration checks: connectivity, privileges, compatibility, source readiness |
| Run migration | DMS executes Data Pump export, transfers data, imports into target. For online: starts GoldenGate replication after initial load |
| Monitor | Track progress in OCI Console; view phase completion, errors, and warnings |
| Switch over | For online: confirm apply lag is zero, initiate switchover, DMS tears down GoldenGate replication |
| Clean up | Remove DMS resources (connections, migration job) after successful validation |
8.4 Supported Sources and Targets
| Source | Target |
|---|---|
| On-premises Oracle Database (11.2.0.4+) | Autonomous Database (ADB-S, ADB-D) |
| OCI Base Database Service | Exadata Database Service (ExaDB-D) |
| Amazon RDS for Oracle | ExaDB Cloud@Customer |
| Other cloud Oracle instances | Base Database Service (VM/BM) |
8.5 Data Transfer Medium
| Medium | When to Use |
|---|---|
| Object Storage | Databases >50 GB; better parallelism; Oracle-recommended |
| Database Link | Databases <50 GB; simpler setup; performance depends on network bandwidth |
Exam trap: DMS is internally driven by ZDM, but you do NOT need to install or configure ZDM yourself. DMS is a managed service. If the question asks about a "fully managed, console-driven migration service," the answer is DMS, not ZDM. ZDM is the installable tool you run on your own host.
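The two DMS design decisions covered above (migration type and transfer medium) follow simple thresholds. A sketch (dictionary keys and labels are illustrative, not DMS API fields):

```python
def dms_migration_plan(db_size_gb: float, near_zero_downtime: bool) -> dict:
    """Sketch of the DMS choices above: online migrations auto-provision
    GoldenGate; Object Storage is the recommended medium above 50 GB,
    a database link below it."""
    return {
        "type": "online" if near_zero_downtime else "offline",
        "goldengate_required": near_zero_downtime,  # auto-provisioned by DMS
        "medium": "Object Storage" if db_size_gb > 50 else "Database Link",
    }
```

A 200 GB database needing near-zero downtime therefore maps to an online migration over Object Storage with GoldenGate provisioned automatically.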
9. Zero Downtime Migration (ZDM)
ZDM is an installable migration tool (not a managed service) that orchestrates end-to-end database migration using MAA best practices. It supports both physical and logical workflows. (ZDM Documentation)
9.1 ZDM vs. DMS
| Attribute | ZDM | DMS |
|---|---|---|
| Deployment | Customer installs on a Linux host | Fully managed OCI service |
| Interface | CLI (`zdmcli`) | OCI Console, CLI, API, Terraform |
| Physical migration | Yes | No (logical only) |
| Logical migration | Yes | Yes |
| Online migration | Yes (via Data Guard or GoldenGate) | Yes (via GoldenGate) |
| Target platforms | OCI, ExaDB-C@C, Database@Azure, Database@AWS | OCI database services |
| Multicloud | Yes (Database@Azure, Database@AWS, Database@Google Cloud) | OCI only |
9.2 Workflow Matrix
| Workflow | Technology | Downtime | Key Requirement |
|---|---|---|---|
| Physical Online | Data Guard (redo ship + apply + switchover) | Seconds to minutes | Same version, same endianness, FastConnect/VPN |
| Physical Offline | RMAN backup + restore | Hours | Same version, same endianness |
| Logical Online | Data Pump initial load + GoldenGate CDC | Near-zero | GoldenGate license or OCI GoldenGate |
| Logical Offline | Data Pump export + transfer + import | Hours | Backup location (Object Storage or NFS) |
9.3 ZDM Service Host Requirements
| Requirement | Detail |
|---|---|
| OS | Oracle Linux 7/8 or RHEL 8 |
| Storage | 100 GB free |
| Connectivity | SSH (port 22) and SQL*Net (port 1521) to source and target |
| SSH keys | RSA format required (`-----BEGIN RSA PRIVATE KEY-----` header) |
| Packages | `glibc-devel`, `expect`, `unzip`, `libaio` |
9.4 Supported Database Versions
ZDM supports source databases from 11.2.0.4 through 26ai. Physical workflows require version compatibility between source and target. Logical workflows allow cross-version migration.
Exam trap: ZDM physical online workflow uses Data Guard under the hood. ZDM logical online workflow uses GoldenGate under the hood. If a question describes a "ZDM physical migration with near-zero downtime," the underlying technology is Data Guard, not GoldenGate.
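The workflow matrix in 9.2 (and the exam trap above) is effectively a four-entry lookup. A sketch:

```python
# ZDM workflow matrix: (mode, timing) -> underlying technology.
ZDM_WORKFLOWS = {
    ("physical", "online"):  "Data Guard",
    ("physical", "offline"): "RMAN backup/restore",
    ("logical",  "online"):  "Data Pump + GoldenGate",
    ("logical",  "offline"): "Data Pump",
}

def zdm_underlying_tech(mode: str, timing: str) -> str:
    """Answer the exam-style question: what runs under a given ZDM workflow?"""
    return ZDM_WORKFLOWS[(mode, timing)]
```

So "ZDM physical migration with near-zero downtime" resolves to Data Guard, never GoldenGate.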
10. Pre-Migration Assessment and Workload Analysis
10.1 AWR and ADDM
Before migration, analyze the source database workload to select the right OCI target and size it correctly.
| Tool | Purpose | Key Outputs |
|---|---|---|
| AWR (Automatic Workload Repository) | Collects performance statistics at regular intervals (snapshots) | CPU usage, I/O throughput, wait events, SQL statistics, memory usage |
| ADDM (Automatic Database Diagnostic Monitor) | Expert system that analyzes AWR data | Performance findings, recommendations, impact rankings |
| OCI Ops Insights AWR Hub | Cloud-based AWR repository that consolidates snapshots from multiple databases | Cross-database analysis, historical trending, capacity planning |
| Cloud Premigration Advisor Tool (CPAT) | Assesses source database readiness for cloud migration | Incompatible features, required remediation steps, target compatibility |
Best practice: Collect AWR snapshots spanning 3-6 months (covering peak and off-peak periods) before planning migration. Use ADDM findings to identify performance issues that should be resolved BEFORE migration, not after.
10.2 Sizing Guidance
| Source Metric | Maps To |
|---|---|
| Peak CPU utilization (AWR) | OCPU/ECPU count on target |
| Peak IOPS and throughput (AWR, iostat) | Storage tier selection (Balanced, Higher Performance) |
| Database size (DBA_SEGMENTS, RMAN) | Storage allocation on target |
| Memory usage (SGA + PGA) | Shape selection (memory-optimized vs. standard) |
| RAC node count | Number of cluster nodes on target |
Exam trap: Size for the 80th percentile workload, not the absolute peak. OCI allows scaling up for occasional peaks. Over-provisioning wastes money; under-provisioning causes performance issues during migration validation.
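The 80th-percentile sizing rule from the exam trap above can be sketched as a function over AWR CPU samples (the sample format and `headroom` parameter are assumptions for this illustration):

```python
import math

def size_ocpus(cpu_samples, percentile=0.80, headroom=1.0):
    """Size target OCPUs from AWR CPU-usage readings at the 80th
    percentile, not the absolute peak, per the guidance above.
    `cpu_samples` are average-active-CPU readings per AWR snapshot."""
    ranked = sorted(cpu_samples)
    # Index of the requested percentile within the sorted samples.
    idx = min(len(ranked) - 1, math.ceil(percentile * len(ranked)) - 1)
    return max(1, math.ceil(ranked[idx] * headroom))
```

Given samples with one extreme spike (say a reading of 40 CPUs during a batch window), this sizes to the 80th-percentile value rather than 40, relying on OCI scale-up for the occasional peak.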
11. Migration Method Selection Decision Tree
Use this decision matrix to select the appropriate migration method. This is heavily tested on the exam.
Decision Criteria
1. Is the TARGET Autonomous Database?
├── YES → Logical migration only (Data Pump, GoldenGate, DMS)
└── NO → Continue to #2
2. Is ZERO (or near-zero) downtime required?
├── YES → Data Guard (physical online) or GoldenGate (logical online) or ZDM/DMS online
└── NO → Continue to #3
3. Is the source platform BIG-ENDIAN (Solaris SPARC, AIX, HP-UX)?
├── YES → Logical migration (Data Pump, GoldenGate) or RMAN with CONVERT
└── NO → Continue to #4
4. Must the migration include a VERSION UPGRADE?
├── YES → Logical migration (Data Pump, GoldenGate) or physical + separate upgrade
└── NO → Continue to #5
5. Is the database a PLUGGABLE DATABASE (12c+)?
├── YES → Consider Remote Cloning (hot clone for minimal downtime) or Unplug/Plug
└── Also consider other methods below
6. Is the database VERY LARGE (multi-TB)?
├── YES → RMAN (fastest bulk transfer) or Data Guard (minimal downtime) or
│ Transportable Tablespaces (hybrid physical+logical)
└── NO → Data Pump (simplest for smaller databases)
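The decision criteria above can be walked in order as a single function. A sketch only: real selection also weighs connectivity, licensing, and operational skill, and the returned method lists are shorthand, not an exhaustive ranking:

```python
def pick_migration_method(target_is_adb, need_near_zero_downtime,
                          big_endian_source, version_upgrade,
                          is_pdb=False, multi_tb=False):
    """Apply the decision criteria above in order; return candidate methods."""
    if target_is_adb:                       # criterion 1: ADB = logical only
        return ["Data Pump", "GoldenGate", "DMS"]
    if need_near_zero_downtime:             # criterion 2
        return ["Data Guard", "GoldenGate", "ZDM/DMS online"]
    if big_endian_source:                   # criterion 3
        return ["Data Pump", "GoldenGate", "RMAN CONVERT"]
    if version_upgrade:                     # criterion 4
        return ["Data Pump", "GoldenGate"]
    if is_pdb:                              # criterion 5
        return ["Remote clone (hot)", "Unplug/Plug"]
    if multi_tb:                            # criterion 6
        return ["RMAN", "Data Guard", "Transportable Tablespaces"]
    return ["Data Pump"]                    # default for smaller databases
```

Note that the ADB check comes first: an Autonomous target forces logical methods even when the downtime requirement would otherwise suggest Data Guard.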
Comprehensive Comparison Table
| Method | Downtime | Cross-Version | Cross-Platform | ADB Target | Complexity | Best For |
|---|---|---|---|---|---|---|
| Data Guard | Seconds | No (same version) | No (same endianness) | No | Medium | Large DBs, minimal downtime, fallback needed |
| GoldenGate | Seconds | Yes | Yes | Yes | High | Zero downtime, heterogeneous, bidirectional |
| RMAN Backup/Restore | Hours | No (same version) | Limited (endian convert) | No | Low | Large DBs, no connectivity requirement, simple |
| Data Pump | Hours | Yes | Yes | Yes | Low | Small-medium DBs, schema subset, version upgrade |
| Remote Clone (PDB) | Minutes (hot) | Limited | Same endianness | No | Low | PDB migration, 12.2+ hot clone |
| Unplug/Plug (PDB) | Hours | Same version | Same endianness | No | Low | Simple PDB relocation, little-endian only |
| DMS (offline) | Hours | Yes | Yes | Yes | Low | Managed migration, console-driven |
| DMS (online) | Seconds | Yes | Yes | Yes | Medium | Managed zero-downtime migration |
| ZDM | Varies | Varies by workflow | Varies by workflow | Yes (logical) | Medium | Automated, multicloud targets |
12. Post-Migration Validation
After migration, validate before decommissioning the source.
12.1 Validation Checklist
| Check | Method |
|---|---|
| Row counts | Compare SELECT COUNT(*) on key tables between source and target |
| Data integrity | Checksum comparison using DBMS_SQLHASH or application-level validation |
| Schema comparison | Compare DDL using DBMS_METADATA.GET_DDL on both source and target |
| Invalid objects | Run SELECT COUNT(*) FROM DBA_OBJECTS WHERE STATUS = 'INVALID'; recompile with UTL_RECOMP |
| Performance baseline | Generate AWR report on target; compare with source AWR baseline |
| Application testing | Full functional regression test against the target database |
| Backup verification | Confirm automated backups are configured and running on OCI target |
| DR configuration | Set up Data Guard or Autonomous Data Guard on the target for ongoing HA |
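The row-count check from the validation checklist above can be sketched as a comparison over per-table counts (the dict-of-counts shape is an assumption; in practice each count comes from a `SELECT COUNT(*)` on source and target):

```python
def compare_row_counts(source_counts, target_counts):
    """Compare per-table row counts (dicts of table -> count) and
    report tables that are missing or mismatched on the target."""
    issues = {}
    for table, src in source_counts.items():
        tgt = target_counts.get(table)      # None if the table is missing
        if tgt != src:
            issues[table] = (src, tgt)
    return issues  # an empty dict means the row-count check passed
```

Run it against the key tables before decommissioning the source; any non-empty result warrants the deeper `DBMS_SQLHASH` integrity check.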
12.2 Common Post-Migration Issues
| Issue | Cause | Resolution |
|---|---|---|
| Performance degradation | Stale statistics on target | Run DBMS_STATS.GATHER_DATABASE_STATS |
| SQL plan regressions | Different optimizer behavior on new hardware | Use SQL Plan Management (SPM) to capture and evolve SQL plans |
| Invalid objects | Missing dependent objects or version differences | Recompile with UTL_RECOMP.RECOMP_SERIAL or RECOMP_PARALLEL |
| Network timeouts | Application connection strings still pointing to source | Update TNS entries, connection pools, and DNS records |
| TDE wallet issues | Wallet not copied or not opened on target | Copy wallet from source; ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN |
13. Exam Tips and Common Traps
ADB = logical only: You cannot use RMAN or Data Guard to migrate directly to Autonomous Database. Data Pump, GoldenGate, or DMS are the only options.
DMS vs. ZDM: DMS is a managed service (OCI Console). ZDM is an installable tool (CLI). DMS uses ZDM internally but abstracts it. DMS only supports logical workflows. ZDM supports both physical and logical.
Data Guard = physical, near-zero downtime: Data Guard uses redo apply to keep a standby synchronized, then switchover completes in seconds. It requires same version and same endianness.
GoldenGate = logical, zero downtime, highest complexity: GoldenGate captures from redo logs and replays transactions. It supports cross-version, cross-platform, and even non-Oracle sources. But it requires supplemental logging and careful Extract/Replicat configuration.
Data Pump is NOT zero downtime: Data Pump creates a point-in-time snapshot. Changes after export starts are lost. For large databases, downtime can be hours.
Object Storage transfer > DB Link for large databases: Oracle recommends Object Storage for databases over 50 GB because parallelism is better and network dependency is reduced.
Physical online ZDM = Data Guard underneath: When ZDM runs a physical online workflow, it is orchestrating Data Guard. When it runs logical online, it is orchestrating GoldenGate.
Switchover vs. Failover: Migration uses switchover (planned, zero data loss). Failover is for disaster recovery (unplanned, potential data loss depending on protection mode).
Hot clone requires ARCHIVELOG + LOCAL UNDO: Remote PDB cloning without downtime (hot clone) requires the source CDB to be in ARCHIVELOG mode with LOCAL UNDO enabled. Available from 12.2+.
Cloud Premigration Advisor Tool (CPAT): Run this before migration to identify incompatible features, unsupported options, and required remediation. It is integrated with ZDM logical workflows.
References
- OCI Database Services Overview
- OCI Database Migration Service
- Migrating Databases to OCI
- Oracle Database Migration Technologies
- Zero Downtime Migration Documentation
- Migrate with RMAN
- Back Up a Database to Object Storage Using RMAN
- Data Guard Hybrid Cloud Configuration
- Hybrid Data Guard to OCI (PDF)
- Remote PDB Cloning on ExaDB-C@C
- Oracle GoldenGate
- OCI GoldenGate Service
- Import Data Using Data Pump on ADB
- Migration Decision Tree
- OCI Ops Insights AWR Hub
- Oracle Database Migration FAQ