Domain 7: Migrating to Autonomous Database (7%)

Domain 7 of the 1Z0-931-25 Oracle AI Autonomous Database 2025 Professional exam covers migration methods, tools, and best practices for moving data and applications into Oracle Autonomous Database. At 7%, this is the smallest domain on the exam -- expect approximately 3-4 questions out of 50 (90 minutes, 68% passing score).

The exam syllabus defines two objectives for this domain:

  1. Describe the options and considerations for migrating to Autonomous Database
  2. Migrate to an Autonomous Database using Data Pump

Despite the small weight, these questions reward candidates who know specific tool capabilities, parameter syntax, and migration architecture. You cannot guess your way through questions about exclude parameters or credential configuration.

1. Migration Methods Overview

Oracle provides six primary migration paths to Autonomous Database. Each has different downtime characteristics, automation levels, and source requirements. The exam expects you to match the right tool to the right scenario. (Oracle ADB Migration Methods)

Online Migration Methods (Minimal/Zero Downtime)

| Method | Type | Source Requirement | Key Feature |
| --- | --- | --- | --- |
| OCI Database Migration Service (DMS) | Fully managed cloud service | Oracle DB 11.2.0.4+ | Runs CPAT + Data Pump + GoldenGate automatically; fault-tolerant and incremental |
| Zero Downtime Migration (ZDM) | Self-managed CLI tool | Oracle DB 11.2.0.4+ | Greater user control; runs CPAT + Data Pump + GoldenGate under the hood |
| Oracle GoldenGate | Real-time replication | Oracle DB (GoldenGate 12.3.0.1.2+) | Continuous replication; supports near-zero-downtime cutover |

Offline Migration Methods

| Method | Best For | Key Detail |
| --- | --- | --- |
| Oracle Data Pump (expdp/impdp) | Oracle-to-Oracle bulk migration | Export schemas to dump files, stage on Object Storage, import into ADB |
| Database Links | Small datasets, no extra tools | INSERT over a database link from ADB to the source; multi-session parallelism possible |
| Unload and Load via Files | CSV, JSON, Parquet, ORC, Avro data | Use DBMS_CLOUD.COPY_DATA or external tables; supports OCI, AWS S3, Azure Blob |
| Materialized Views | Incremental sync during migration | Database link + materialized views for ongoing refresh until cutover |
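The database-link path can be made concrete. A minimal sketch, assuming the source is network-reachable from ADB; the link, host, table, and credential names are placeholders, and the exact CREATE_DATABASE_LINK arguments (especially the TLS certificate DN) depend on your source's network and wallet setup:

```sql
-- Create an outbound database link from ADB to the source.
-- Assumes SRC_DB_CRED was created first with DBMS_CLOUD.CREATE_CREDENTIAL
-- holding the source database username/password.
BEGIN
  DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
    db_link_name       => 'SRC_LINK',
    hostname           => 'source-db.example.com',   -- placeholder
    port               => 1521,
    service_name       => 'srcpdb.example.com',      -- placeholder
    ssl_server_cert_dn => NULL,                      -- depends on TLS setup
    credential_name    => 'SRC_DB_CRED');
END;
/

-- Copy a table over the link; direct-path insert for bulk speed
INSERT /*+ APPEND */ INTO my_table
  SELECT * FROM my_table@SRC_LINK;
COMMIT;
```

Running several such inserts in separate sessions is how the "multi-session parallelism" in the table above is achieved in practice.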

Exam trap: The exam may ask which method provides zero downtime. Data Pump alone is offline -- the source database is quiesced during export. For zero/near-zero downtime, you need GoldenGate (directly, through DMS, or through ZDM). Database links and materialized views also allow ongoing source activity but are not classified as "zero downtime migration" tools.

2. Data Pump Migration to ADB (Core Exam Topic)

Data Pump is the most heavily tested migration method in this domain. The exam expects you to know the end-to-end workflow, critical parameters, and ADB-specific restrictions. (Oracle Data Pump with ADB)

Data Pump Workflow

Source DB  -->  expdp (schema mode)  -->  Dump files  -->  Object Storage  -->  impdp into ADB

Step-by-step:

  1. Run CPAT on the source database to identify compatibility issues before export
  2. Export using expdp in schema mode (not full mode) to dump files
  3. Upload dump files to OCI Object Storage (or AWS S3, Azure Blob)
  4. Store credentials in ADB using DBMS_CLOUD.CREATE_CREDENTIAL
  5. Import using impdp with the Object Storage URL as the dumpfile parameter
  6. Validate imported data and application functionality
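The steps above can be sketched end to end. Connection strings, the bucket, region, namespace, and credential names are placeholders, not defaults; the impdp-over-Object-Storage syntax requires an 18c or later client:

```shell
# 1-2. Export the schema on the source (parameters abbreviated; see the
#      full expdp parameter list below)
expdp my_user@source_db schemas=MY_SCHEMA dumpfile=export_%U.dmp \
      directory=data_pump_dir exclude=cluster,db_link

# 3. Stage the dump files in OCI Object Storage with the OCI CLI
oci os object bulk-upload --bucket-name migration-bucket \
    --src-dir /u01/app/oracle/dpdump --include "export_*.dmp"

# 4. In ADB, create the credential with DBMS_CLOUD.CREATE_CREDENTIAL
#    (shown in the Object Storage and Credentials section)

# 5. Import into ADB, reading the dump files straight from Object Storage
impdp admin@mydb_high credential=OBJ_STORE_CRED \
      dumpfile=https://objectstorage.us-ashburn-1.oraclecloud.com/n/mytenancy/b/migration-bucket/o/export_%U.dmp \
      remap_tablespace=%:DATA transform=segment_attributes:n
```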

Critical Export Parameters (expdp)

schemas=YOUR_SCHEMA
dumpfile=export_%U.dmp
directory=data_pump_dir
parallel=<number_of_ECPUs>
compression=ALL
compression_algorithm=MEDIUM
encryption=ALL
encryption_pwd_prompt=yes
exclude=cluster,db_link
logtime=all
metrics=yes

For ADW (Data Warehouse) targets, add additional excludes:

exclude=index,indextype,materialized_view,materialized_view_log,materialized_zonemap,cluster,db_link

Source: Database Heartbeat - Data Pump Considerations

Exam trap: You must know which objects to exclude. cluster, db_link, and indextype are always excluded for ADB. For ADW workloads, indexes and materialized views are also excluded because ADW auto-creates its own indexes and the optimizer manages materialized views.

Critical Import Parameters (impdp)

schemas=YOUR_SCHEMA
dumpfile=https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/<bucket>/o/export_%U.dmp
credential=def_cred_name
directory=data_pump_dir
parallel=<number_of_ECPUs>
remap_tablespace=%:DATA
transform=segment_attributes:n
transform=dwcs_cvt_iots:y
transform=constraint_use_default_index:y
logtime=all
metrics=yes

For ADW targets, add:

partition_options=merge
data_options=group_partition_table_data
exclude=index,indextype,materialized_view,materialized_view_log,materialized_zonemap
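Because impdp on ADB writes its log file to a database directory (DATA_PUMP_DIR by default) rather than to a client filesystem, a common follow-up step is copying the log out to Object Storage for review. A sketch using DBMS_CLOUD.PUT_OBJECT; the file name and bucket URI are placeholders:

```sql
-- Copy the Data Pump import log from DATA_PUMP_DIR to Object Storage
BEGIN
  DBMS_CLOUD.PUT_OBJECT(
    credential_name => 'OBJ_STORE_CRED',
    object_uri      => 'https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/<bucket>/o/import.log',
    directory_name  => 'DATA_PUMP_DIR',
    file_name       => 'import.log');   -- placeholder log file name
END;
/
```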

ADB-Specific Restrictions to Memorize

| Restriction | Detail |
| --- | --- |
| Default tablespace | Always DATA -- you cannot create or use custom tablespaces; use remap_tablespace=%:DATA |
| Service names | Limited to _low, _medium, _high, _tp, _tpurgent -- no custom services |
| Database version | Always the latest (currently 19c/23ai) -- no older versions |
| Removed features | Java in the database, Oracle Multimedia, certain clustering features |
| Time zone | UTC only; use CURRENT_DATE/CURRENT_TIMESTAMP instead of SYSDATE for UTC-aware behavior |
| SYSDBA export | Never export as SYSDBA -- an official Oracle restriction for Data Pump |
| Character set | Single-byte source character sets can cause ORA-12899 (value too large) errors on import into ADB's AL32UTF8 |

Exam trap: The remap_tablespace=%:DATA parameter is mandatory for ADB imports. ADB has a single DATA tablespace -- you cannot create additional tablespaces. If a question asks about tablespace handling during migration, this is the answer.

Object Storage and Credentials

Dump files must be staged in cloud object storage before ADB can import them. ADB supports importing from four cloud storage providers:

| Provider | Credential Type |
| --- | --- |
| OCI Object Storage | Auth token or resource principal |
| OCI Object Storage Classic | Swift credentials |
| AWS S3 | Access key + secret key |
| Azure Blob Storage | Storage account + access key |

Credentials are stored in ADB using:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OBJ_STORE_CRED',
    username        => 'your_oci_username',
    password        => 'your_auth_token'
  );
END;
/

Verify uploaded files with:

SELECT object_name, bytes, checksum
FROM DBMS_CLOUD.LIST_OBJECTS('OBJ_STORE_CRED',
  'https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/<bucket>/');

Source: K21 Academy - Migrate to ADB

Exam trap: For files over 100 MB, use multipart upload to Object Storage. Maximum object size is 10 TB.
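The OCI CLI performs multipart upload for you; a sketch where the bucket name and sizing values are illustrative, not prescriptive:

```shell
# Upload one large dump file in parallel parts (part size in MiB)
oci os object put \
    --bucket-name migration-bucket \
    --file export_01.dmp \
    --part-size 128 \
    --parallel-upload-count 10
```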

3. DBMS_CLOUD Data Loading

Beyond Data Pump, the DBMS_CLOUD package is the primary PL/SQL interface for loading flat files into ADB. This is a separate path from Data Pump -- used when you have CSV, JSON, Parquet, or other file-format data rather than Oracle dump files. (Oracle DBMS_CLOUD Loading)

DBMS_CLOUD.COPY_DATA

Loads data from cloud-stored files into existing ADB tables:

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'MY_TABLE',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.<region>.oraclecloud.com/.../data.csv',
    format          => json_object('type' VALUE 'csv', 'skipheaders' VALUE '1')
  );
END;
/

Supported file formats: CSV, JSON, Parquet, ORC, Avro.

Supported cloud sources: OCI Object Storage, AWS S3, Azure Blob Storage, Google Cloud Storage.

SQL*Loader

SQL*Loader connects directly from a client machine to ADB over the network. Performance depends on network bandwidth between client and ADB -- suitable for smaller datasets only. Requires the connection wallet and environment variables configured on the client side.

Exam trap: SQL*Loader is client-to-database (network-bound). DBMS_CLOUD.COPY_DATA loads from Object Storage (server-side, faster for large datasets). The exam may ask which is more performant for large data -- the answer is DBMS_CLOUD from Object Storage.
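A typical SQL*Loader invocation against ADB, assuming the client wallet is already downloaded and TNS_ADMIN points at it; the wallet path and control/data file names are placeholders:

```shell
# Client-side load over the network; throughput is bandwidth-bound
export TNS_ADMIN=/opt/oracle/wallet_mydb    # placeholder wallet directory
sqlldr userid=admin@mydb_high \
       control=employees.ctl \
       data=employees.csv \
       log=employees.log
```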

4. Cloud Premigration Advisor Tool (CPAT)

CPAT is a Java-based command-line tool that analyzes source database metadata for compatibility with Autonomous Database. It is read-only -- it creates no users, no packages, and requires no roles or privileges on the source database. (Oracle CPAT Documentation)

Key CPAT Facts

| Aspect | Detail |
| --- | --- |
| What it checks | Database metadata only (no application/business data) |
| Number of checks | 100+ possible checks; only a subset applies per invocation |
| Output formats | TXT, JSON, and HTML reports |
| Target types | ATPS (ATP Serverless), ADWS (ADW Serverless), ATPD (ATP Dedicated), ADWD (ADW Dedicated), DEFAULT (ExaCS/ExaCC) |
| Integration | Run automatically by both DMS and ZDM; also available standalone |
| MOS Note | 2758371.1 |

Running CPAT

./premigration.sh \
  --connectstring jdbc:oracle:oci:@ \
  --sysdba \
  --targetcloud ATPS \
  --pdbname PDB1 \
  --schemas SAMPLEUSER \
  --reportformat json

CPAT identifies incompatible features, prioritizes their severity, and suggests remediation steps. Run it before starting any export.

Exam trap: CPAT is metadata-only and read-only. It does not modify the source database in any way. If a question implies CPAT makes changes or requires special privileges, that is incorrect.

5. OCI Database Migration Service (DMS)

DMS is Oracle's fully managed migration service. Internally, it is driven by the Zero Downtime Migration (ZDM) engine and orchestrates CPAT, Data Pump, and GoldenGate automatically. (OCI DMS Overview)

| Feature | DMS | ZDM (Self-Managed) |
| --- | --- | --- |
| Management | Fully managed (OCI console/API) | CLI tool you install and run |
| Source version | Oracle DB 11.2.0.4+ | Oracle DB 11.2.0.4+ |
| Online migration | Yes (GoldenGate-based) | Yes (GoldenGate-based) |
| Offline migration | Yes (Data Pump-based) | Yes (Data Pump-based) |
| CPAT | Runs automatically | Runs automatically |
| User control | Lower (automated decisions) | Higher (manual configuration) |
| Concurrent migrations | Supported | Supported |

Offline DMS workflow: Export source with Data Pump, transfer to Object Storage, import into ADB target.

Online DMS workflow: Initial Data Pump load + continuous GoldenGate replication until cutover.

Exam trap: DMS uses GoldenGate internally for online migrations -- it is not a separate replication technology. If asked what enables online (zero-downtime) DMS migrations, the answer is GoldenGate.

6. Oracle GoldenGate with ADB

GoldenGate enables real-time replication for near-zero-downtime migration. Key constraints for ADB: (GoldenGate with ADB)

| Constraint | Detail |
| --- | --- |
| ADB as target | Fully supported with non-integrated Replicat (Parallel Replicat also supported) |
| ADB as source | Supported for ADB-to-ADB migrations (between tiers, regions, Serverless/Dedicated) |
| Minimum GoldenGate version | 12.3.0.1.2 or later |
| Connection | TLS wallet required; wallet stored in the GoldenGate deployment directory |
| Source prerequisites | ARCHIVELOG mode, FORCE LOGGING, and minimal supplemental logging enabled |
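The source prerequisites listed above map to standard commands run on the source database as a privileged user (switching to ARCHIVELOG mode requires a bounce to MOUNT state first):

```sql
-- Enable ARCHIVELOG mode (database must be mounted, not open)
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Force logging and minimal supplemental logging for GoldenGate capture
ALTER DATABASE FORCE LOGGING;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
```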

Exam trap: GoldenGate supports ADB as both source and target for ADB-to-ADB migrations. Older documentation stated ADB could only be a target -- this changed. However, for on-premises-to-ADB migrations, ADB is always the target.

7. MV2ADB Tool

MV2ADB (Move to Autonomous Database) is a command-line tool that provides "one-click" migration by automating the entire Data Pump workflow. Available from My Oracle Support (MOS Note 2463574.1). (Oracle MV2ADB Reference Architecture)

What mv2adb auto does in a single command:

  1. Runs the Schema Advisor to analyze source schemas for ADB compatibility
  2. Exports data from source using Data Pump
  3. Transfers dump files to OCI Object Storage
  4. Imports data into ADB using Data Pump

MV2ADB is essentially an automation wrapper around Data Pump + Object Storage + Schema Advisor. It does not use GoldenGate and is therefore an offline migration tool.

8. Migrating Non-Oracle Databases

ADB provides the DBMS_CLOUD_MIGRATION package for translating SQL from non-Oracle databases to Oracle SQL. This is application-level migration, not data migration. (Oracle SQL Translation)

Supported Source Databases

| Source Database | Translation Target |
| --- | --- |
| PostgreSQL | Oracle SQL |
| MySQL | Oracle SQL |
| SQL Server (T-SQL) | Oracle SQL |

Translation Methods

| Method | Procedure | Use Case |
| --- | --- | --- |
| Single statement | DBMS_CLOUD_MIGRATION.MIGRATE_SQL | Translate one SQL statement at a time |
| File-based | DBMS_CLOUD_MIGRATION.MIGRATE_FILE | Translate an entire SQL file stored in Object Storage |
| Real-time session | DBMS_CLOUD_MIGRATION.ENABLE_TRANSLATION | Auto-translate non-Oracle SQL in the current session |
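As an illustration of the single-statement path, a sketch of MIGRATE_SQL used as a function in a query. Treat the function-style call and the 'POSTGRES' source identifier as assumptions to verify against the DBMS_CLOUD_MIGRATION package documentation:

```sql
-- Translate one PostgreSQL statement to Oracle SQL (sketch; the
-- function form and the 'POSTGRES' literal are assumptions to verify)
SELECT DBMS_CLOUD_MIGRATION.MIGRATE_SQL(
         'SELECT * FROM employees LIMIT 10',
         'POSTGRES')
FROM dual;
```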

Key limitations: Only one SQL statement per MIGRATE_SQL call. MySQL @var_name global variables and backtick-enclosed identifiers are not supported. Requires ADMIN user or EXECUTE privilege on DBMS_CLOUD_MIGRATION.

For data migration from non-Oracle sources, use Oracle SQL Developer Migration Wizard or export to flat files (CSV/JSON) and load with DBMS_CLOUD.

9. Database Actions Data Loading

Database Actions (the web-based SQL Developer successor built into ADB) provides a GUI-based data loading interface. (Oracle Database Actions Data Load)

Supported upload formats: CSV, XLS, XLSX, TSV, TXT, XML, JSON, GeoJSON, Avro, Parquet, GZ, GZIP, ZIP, PDF, PNG, JPG, JPEG, TIFF.

Supported cloud sources for loading: OCI Object Storage, AWS S3, Azure Blob Storage, Google Cloud Storage.

Capabilities: Create new tables from uploaded files, update existing tables, merge into existing tables, create external tables linked to cloud storage.

Exam trap: Database Actions can load from local files (uploaded through the browser) and from cloud object storage. It is the easiest method for ad-hoc loading but not suitable for large-scale production migration.

10. Post-Migration Validation

After migration, validate the following:

| Validation Step | How |
| --- | --- |
| Row counts | Compare source and target table row counts |
| Data checksums | Compare checksums on critical columns |
| Application testing | Run the application test suite against ADB |
| Performance baselines | Compare query execution times; ADB auto-indexing may change plans |
| Invalid objects | Run DBMS_UTILITY.COMPILE_SCHEMA and check DBA_OBJECTS for INVALID status |
| Constraint verification | Confirm all constraints are enabled and validated |
| Statistics | Gather optimizer statistics with DBMS_STATS.GATHER_SCHEMA_STATS |
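The invalid-object, constraint, and statistics checks can be scripted directly against standard dictionary views and packages (the schema name is a placeholder):

```sql
-- Recompile everything, then list anything still invalid
EXEC DBMS_UTILITY.COMPILE_SCHEMA(schema => 'MY_SCHEMA', compile_all => FALSE);

SELECT object_name, object_type
FROM   dba_objects
WHERE  owner = 'MY_SCHEMA'
AND    status = 'INVALID';

-- Confirm constraints are enabled and validated
SELECT constraint_name, status, validated
FROM   dba_constraints
WHERE  owner = 'MY_SCHEMA';

-- Refresh optimizer statistics after the load
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('MY_SCHEMA');
```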

Quick-Reference Decision Matrix

| Scenario | Recommended Tool |
| --- | --- |
| Oracle DB to ADB, downtime acceptable | Data Pump via Object Storage |
| Oracle DB to ADB, zero downtime required | DMS (online mode) or GoldenGate |
| Oracle DB to ADB, one-click automation | MV2ADB |
| Large CSV/Parquet files to ADB | DBMS_CLOUD.COPY_DATA from Object Storage |
| Small dataset, no tools available | Database Links or SQL*Loader |
| Ad-hoc file upload (browser) | Database Actions Data Load |
| Non-Oracle app SQL translation | DBMS_CLOUD_MIGRATION package |
| Pre-migration compatibility check | CPAT |
| Full automated assessment + migration | DMS or ZDM (both run CPAT automatically) |
