Domain 6: Migrate Infrastructure Workloads Between OCI Regions (15%)
Domain 6 of the 1Z0-1123-25 Oracle Cloud Infrastructure 2025 Migration Architect Professional exam covers cross-region workload migration, replication strategies, disaster recovery architecture, and the Full Stack DR service. At 15% of the exam, this domain accounts for approximately 8 questions. The questions focus on which replication technology to use, how each one works mechanically, and how Full Stack DR orchestrates recovery across an entire application stack.
1. OCI Regions and Availability Domains
Before cross-region migration makes sense, you must understand the physical topology.
| Concept | Definition | Exam Relevance |
|---|---|---|
| Region | Independent geographic location containing one or more availability domains. Regions are connected by Oracle's private backbone network. | Cross-region replication always means between two different regions. |
| Availability Domain (AD) | One or more isolated, fault-tolerant data centers within a region. Multi-AD regions have 3 ADs. | Some services support cross-AD replication within the same region (Block Volume, File Storage). |
| Fault Domain | Logical grouping of hardware within an AD. Each AD has 3 fault domains. | Not directly relevant to cross-region, but appears in questions about HA vs DR. |
| Region Subscription | Tenancy must be explicitly subscribed to a region before using it. | You cannot replicate to a region you have not subscribed to. This is a prerequisite for all cross-region operations. |
Exam trap: HA (High Availability) and DR (Disaster Recovery) are different. HA uses multiple ADs or fault domains within a region to survive data center failures. DR uses multiple regions to survive regional outages. Questions will test whether you pick the right scope.
2. Cross-Region Replication: Service-by-Service Comparison
This is the core of Domain 6. Three storage services offer native cross-region replication, each with different mechanics and trade-offs.
Quick Comparison Table
| Feature | Object Storage | Block Volume | File Storage (FSS) |
|---|---|---|---|
| Replication type | Asynchronous, object-level | Asynchronous, volume-level | Asynchronous, snapshot-based |
| Typical RPO | Eventual (no SLA published) | < 30 minutes (can exceed 1 hr under heavy I/O) | Depends on interval (minimum 15 min) |
| Destination state | Read-only | Replica volume | Read-only target file system |
| Max replicas per source | 1 | Multiple regions/ADs | Up to 3 replication jobs |
| Pre-existing data | NOT replicated (only new objects after policy creation) | Full initial sync | Full initial sync |
| Encryption restriction | SSE-C objects cannot replicate | Customer-managed Vault keys block replication | File locks and encryption keys not replicated |
| Cross-region network cost | Yes | Yes | Yes (free within same region) |
Source: Object Storage Replication, Block Volume Replication, File System Replication
3. Object Storage Replication
How It Works
You create a replication policy on the source bucket specifying a destination region and bucket. After creation, the destination bucket becomes read-only and receives asynchronous copies of objects written to the source. Replication is unidirectional only -- there is no bidirectional or chained replication. (Object Storage Replication)
Critical Constraints
| Constraint | Detail |
|---|---|
| One policy per source bucket | A source bucket can have exactly one replication policy. |
| One-to-one relationship | One source to one destination. No fan-out, no fan-in. |
| No chaining | A destination bucket cannot also be a replication source. |
| Pre-existing objects skipped | Only objects uploaded after the policy is created are replicated. Existing objects are NOT copied. |
| SSE-C blocks replication | Objects encrypted with Server-Side Encryption with Customer-Provided Keys cannot be replicated. Oracle-managed and Vault-managed keys work fine. |
| Destination is read-only | The destination bucket accepts only replication updates. You cannot write to it directly. |
| Deletion replication | Objects deleted from the source after policy creation are automatically deleted from the destination. |
| Replication metrics | Not currently available in the Console. You cannot monitor lag via the Console. |
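The constraints above can be checked mechanically. The sketch below is an illustrative model only (not the OCI SDK); the dictionary fields and error strings are hypothetical, but each rule mirrors a row of the table:

```python
# Illustrative model of Object Storage replication constraints.
# Field names and error strings are invented for this sketch.

def validate_replication_policy(source, destination):
    """Return a list of constraint violations; an empty list means the policy is allowed."""
    errors = []
    if source.get("replication_policy") is not None:
        errors.append("source already has a policy (one policy per bucket)")
    if destination.get("is_replication_destination"):
        errors.append("fan-in not allowed (one-to-one only)")
    if source.get("is_replication_destination"):
        errors.append("chaining not allowed (a destination cannot be a source)")
    if source.get("sse_c_objects"):
        errors.append("SSE-C encrypted objects cannot be replicated")
    return errors

src = {"replication_policy": None, "is_replication_destination": False,
       "sse_c_objects": False}
dst = {"is_replication_destination": False}
print(validate_replication_policy(src, dst))  # []
```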
IAM Policies Required
Two layers of authorization are required:
- User policies -- the administrator must have manage object-family permissions in both source and destination compartments.
- Service policies -- the Object Storage service must be authorized in each region:
Allow service objectstorage-<region-id> to manage object-family in compartment <compartment>
The region identifier format is objectstorage-us-phoenix-1, objectstorage-eu-frankfurt-1, etc. Without service authorization, replication silently fails. (Object Storage Replication)
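As a concrete illustration, replicating from us-phoenix-1 to eu-frankfurt-1 would need service statements along these lines (the compartment name migration-demo is hypothetical):

```
Allow service objectstorage-us-phoenix-1 to manage object-family in compartment migration-demo
Allow service objectstorage-eu-frankfurt-1 to manage object-family in compartment migration-demo
```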
Stopping Replication
- Delete policy on source -- permanent deletion. To replicate again, create a new policy.
- Stop from destination -- makes the destination writable again. The source replication status enters a "client error state." To resume, you must delete the source policy and create a new one.
Exam trap: Pre-existing objects are NOT replicated. If a question describes a bucket with 500 GB of existing data and asks what happens when you enable replication, the answer is that the 500 GB stays in the source only. You must copy existing objects manually with the OCI CLI -- for example with oci os object copy per object, or by re-uploading them with oci os object bulk-upload.
Exam trap: Lifecycle policies that attempt to delete objects from a read-only destination bucket do not work. The read-only constraint overrides lifecycle rules.
4. Block Volume Replication
How It Works
Block Volume replication provides automatic asynchronous replication of block volumes, boot volumes, and volume groups to other regions or availability domains. The initial sync transfers all data, after which replication is continuous. There is no downtime or impact on source volumes during replication. (Block Volume Replication)
RPO and RTO
| Metric | Target | Caveat |
|---|---|---|
| RPO | < 30 minutes | Can exceed 1 hour for volumes with heavy write I/O. RPO varies with data change rate. |
| RTO | Not explicitly defined by Oracle | Depends on activation workflow -- typically minutes for promoting a replica. |
Cost Model
- Storage: Replica billed at Block Storage Lower Cost option price regardless of source volume performance tier.
- Network: Cross-region replication incurs outbound data transfer charges. Cross-AD replication within the same region has no network cost.
- Volumes with continual updates incur higher network costs. Total data transferred is visible in Console under Replica Details.
Limitations
| Limitation | Impact |
|---|---|
| Customer-managed Vault keys | Volumes encrypted with customer-managed Vault encryption keys CANNOT be replicated. |
| Resizing | Cannot resize a volume while replication is enabled. Must disable replication (which deletes the replica), resize, then re-enable (restarts from scratch). |
| RPO variability | Heavy write workloads can push RPO well beyond 30 minutes. |
| Region availability | Not all region pairs support replication. Check volume-replica-disallowed-regions.json for exclusions. |
| Tenancy subscription | Destination region must be subscribed. |
Exam trap: Block Volume replication and Block Volume backups are complementary, not alternatives. Replication provides the current version of your data in another region (for DR). Backups provide point-in-time snapshots (for recovery from corruption or accidental deletion). A well-designed DR plan uses both.
Exam trap: If you need to resize a replicated volume, the entire replication relationship must be destroyed and rebuilt. The initial sync restarts. Factor this into maintenance windows.
5. File Storage (FSS) Replication
How It Works
FSS replication uses a snapshot-based mechanism with four components: source file system, target file system, replication resource (on source), and replication target resource (on destination). At each replication interval, the service captures an incremental (delta) snapshot on the source, transfers it to the target, and applies it. (File System Replication)
Replication Cycle States
Idle --> Capturing --> Transferring --> Applying --> Idle (repeat)
- Capturing: Taking a delta snapshot of changes since last replication.
- Transferring: Sending the snapshot data to the target.
- Applying: Committing the snapshot data to the target file system.
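The cycle above is a simple repeating state machine, which can be sketched as follows (the states mirror the documented cycle; everything else here is illustrative):

```python
# Minimal sketch of the FSS replication cycle as a repeating state machine.

CYCLE = ["Idle", "Capturing", "Transferring", "Applying"]

def next_state(state):
    """Advance one step in the replication cycle; Applying wraps back to Idle."""
    i = CYCLE.index(state)
    return CYCLE[(i + 1) % len(CYCLE)]

# One full replication interval walks the entire cycle and returns to Idle.
state = "Idle"
trace = [state]
for _ in range(4):
    state = next_state(state)
    trace.append(state)
print(" -> ".join(trace))
# Idle -> Capturing -> Transferring -> Applying -> Idle
```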
Configuration Parameters
| Parameter | Value |
|---|---|
| Minimum replication interval | 15 minutes |
| Maximum replication jobs per file system | 3 |
| Target file system requirement | Must never have been exported and must have no user snapshots |
What Gets Replicated vs What Does Not
| Replicated | NOT Replicated |
|---|---|
| File and folder structure | File locks |
| Snapshots | Encryption keys |
| Metadata | Tags on source file system |
| Permissions | Clones of source file system |
| Quota rules (copied but disabled) | |
Target File System Restrictions
The target file system is read-only after replication is configured. It must meet strict prerequisites:
- Never previously exported
- No existing user snapshots
- No snapshot policies attached
Workaround: If you need to use a previously exported file system as a target, create a clone of it first and use the clone as the target.
Failover Process
- Export the target file system (makes it accessible to applications/users).
- Applications connect to the target file system in the DR region.
- The target takes over from the source.
For failback, Oracle recommends cloning the source from the last completely applied snapshot, which is faster and more cost-effective than creating a new file system.
Cost Model
- Source and target file systems are both metered at the same rate for capacity stored.
- Cross-region replication incurs network egress charges.
- Same-region, cross-AD replication has no bandwidth charge.
RPO Monitoring
The key metric is Replication Recovery Point Age, which tracks the age of the last successfully applied snapshot. Set alarms on this metric to detect replication lag before it becomes a problem.
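An alarm on this metric amounts to comparing the recovery point age against a multiple of the replication interval. The sketch below models that check in plain Python (a real alarm would live in the OCI Monitoring service; the 2x tolerance is an assumption, not an Oracle recommendation):

```python
# Sketch of an alarm condition on the Replication Recovery Point Age metric.
# The metric value (seconds since the last applied snapshot) is supplied directly.

def recovery_point_alarm(age_seconds, interval_minutes=15, tolerance=2.0):
    """Fire when recovery point age exceeds `tolerance` x the replication interval.

    With a 15-minute interval, a healthy target should rarely lag much more
    than one interval; 2x gives headroom for a slow Transfer/Apply phase.
    """
    threshold = interval_minutes * 60 * tolerance
    return age_seconds > threshold

print(recovery_point_alarm(10 * 60))   # 10 min lag, healthy -> False
print(recovery_point_alarm(45 * 60))   # 45 min lag, lagging -> True
```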
Exam trap: Quota rules are copied to the target but arrive disabled. After failover, you must manually re-enable quota rules. This is easy to miss in a real DR scenario and is exactly the kind of detail the exam tests.
Exam trap: If you delete a replication resource, the replication snapshots on the target are converted to user snapshots. If you then delete those converted user snapshots, the target file system can never be used for future replications. This is irreversible.
6. Cross-Region Workload Architecture Patterns
DR Topology Comparison
| Topology | RPO | RTO | Cost | Description |
|---|---|---|---|---|
| Backup and Restore | Hours | Hours | $ | Backups stored cross-region, infrastructure provisioned on demand after failure. |
| Pilot Light | Minutes | Minutes | $$ | Minimal infrastructure pre-provisioned in DR region. Core data replicated. Compute scaled up on failover. |
| Warm Standby | Seconds | Minutes | $$$ | Scaled-down replica of production running in DR region. Data continuously replicated. |
| Active-Active | Near zero | Near zero | $$$$ | Both regions serve production traffic simultaneously. Requires application-level data consistency. |
Exam trap: Oracle's official CAF DR documentation defines four topologies (Backup/Restore, Pilot Light, Warm Standby, Active-Active). Some third-party resources add "Hot Standby" as a separate tier, but it does not appear in the official OCI framework.
Source: OCI Cloud Adoption Framework - Disaster Recovery
Database MAA Tiers (Exam-Critical)
Oracle's Maximum Availability Architecture (MAA) defines tiered DR for databases. These tiers map directly to exam questions:
| MAA Tier | Technology | RPO | RTO | Supported DB Types |
|---|---|---|---|---|
| Bronze | RMAN backups (local + replicated) | Last backup | Hours | ADB-S, ADB-D/C@C, Base DB, ExaDB-D, ExaDB-C@C |
| Silver | RAC + replicated backup | Last backup | Hours (zero for planned maintenance) | ADB-S, ADB-D/C@C, Base DB (2+ nodes, EE-EP), ExaDB-D/C@C |
| Aurous | Refreshable PDB + Autonomous Data Guard | Last refresh | Minutes | ADB-S, ADB-D/C@C |
| Gold | Active Data Guard (active-passive) | Zero | Seconds | All with Data Guard license |
| Platinum | OCI GoldenGate (active-active) | Zero | Zero | All with GoldenGate |
Source: OCI Cloud Adoption Framework - Disaster Recovery
Exam trap: Gold tier (Data Guard) provides zero RPO and seconds RTO. Platinum tier (GoldenGate) provides zero RPO and zero RTO. Know the difference -- Data Guard is active-passive (standby is read-only or mounted), GoldenGate is active-active (both sides accept writes).
DNS-Based Failover with Traffic Management
OCI Traffic Management provides DNS-based traffic steering using steering policies. For cross-region DR, the Failover policy type is most relevant. (Traffic Management Failover Policy)
How failover steering works:
- Create answer pools pointing to endpoints in each region (e.g., load balancer IPs).
- Configure pool priority (primary region first, DR region second).
- Attach health checks to monitor endpoint availability.
- When health checks detect the primary endpoint is unhealthy, DNS responses automatically steer traffic to the DR endpoint.
The failover template processes rules in this order: FILTER --> HEALTH --> PRIORITY --> LIMIT.
| Component | Purpose |
|---|---|
| Answer Pools | Groups of DNS answers (IPs, CNAMEs) associated with a region. |
| Pool Priority | Defines primary vs secondary ordering. |
| Health Checks | HTTP, HTTPS, or TCP probes that evaluate endpoint availability. |
| Policy TTL | Controls how quickly clients pick up the DNS change. Lower TTL = faster failover but more DNS queries. |
Exam trap: Traffic Management failover depends on DNS TTL propagation. If the TTL is set to 300 seconds, clients may continue hitting the failed primary for up to 5 minutes after failover. For faster failover, use a lower TTL -- but this increases DNS query volume and cost.
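The worst-case failover window can be estimated as detection time (consecutive failed probes) plus one full TTL of client-side caching. A back-of-envelope model, with illustrative numbers:

```python
# Back-of-envelope model of worst-case DNS failover time: clients can keep
# resolving the failed primary until health checks mark it down AND their
# cached DNS record (TTL) expires. All numbers are illustrative.

def worst_case_failover_seconds(check_interval, failures_to_trip, ttl):
    """Detection time (consecutive failed probes) plus one full TTL of caching."""
    detection = check_interval * failures_to_trip
    return detection + ttl

# 30 s probes, 3 consecutive failures to mark unhealthy, 300 s TTL:
print(worst_case_failover_seconds(30, 3, 300))  # 390 seconds (~6.5 minutes)
# Dropping the TTL to 60 s shrinks the window, at the cost of more DNS queries:
print(worst_case_failover_seconds(30, 3, 60))   # 150 seconds
```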
7. Full Stack Disaster Recovery (Full Stack DR)
Full Stack DR is OCI's managed disaster recovery orchestration service. It does not perform replication itself -- it orchestrates the recovery of entire application stacks that use the underlying replication services (Block Volume replication, Data Guard, FSS replication, etc.). (Full Stack DR Overview)
Core Components
| Component | Description |
|---|---|
| DR Protection Group | Collection of OCI resources that form an application and must be recovered together. Can span multiple compartments. |
| DR Plan | Ordered sequence of steps that define how to switchover or failover the protection group. |
| DR Plan Execution | A single run of a DR plan (switchover, failover, or drill). |
| Prechecks | Non-disruptive validation that a DR plan can execute successfully. |
Member Types
Protection group members are classified as moving or non-moving:
| Classification | Behavior | Typical Resources |
|---|---|---|
| Moving Members | Resources that migrate from primary to standby region during DR. Instances are terminated in primary and recreated in standby. | Compute instances, load balancers, volume groups |
| Non-Moving Members | Resources that exist in both regions and use replication. DR transitions the active role. | Databases (via Data Guard/replication), network resources (VCNs, subnets) |
Moving Compute Instance Details
When a compute instance is configured as a moving member:
- You specify VNIC-to-subnet mappings for the destination region (which subnet each VNIC attaches to in the DR region).
- If source and destination subnets have matching CIDR blocks and the IP is available, Full Stack DR assigns the same private IP. Otherwise, it assigns a new available IP.
- Boot volumes and block volumes must be in a replicated volume group (using Block Volume cross-region replication).
- Full Stack DR automatically creates the compute instance in the standby region during plan execution -- no need to pre-provision.
Source: Add a Moving Instance, Moving Instance Features
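The private-IP decision described above can be sketched as a small function. This is a simplified model of the documented behavior, not the actual service logic; the helper name and subnet representation are hypothetical, and real OCI subnets reserve their first addresses, which this sketch ignores:

```python
# Sketch of the private-IP decision for a moving instance's VNIC: keep the
# same private IP only when the destination subnet has a matching CIDR and
# the address is free; otherwise assign a new available address.

import ipaddress

def assign_private_ip(source_ip, source_cidr, dest_cidr, dest_used_ips):
    if source_cidr == dest_cidr and source_ip not in dest_used_ips:
        return source_ip  # same private IP carried over to the DR region
    # Otherwise take the first free host address in the destination subnet
    # (real OCI subnets reserve the first addresses; ignored in this sketch).
    net = ipaddress.ip_network(dest_cidr)
    for host in net.hosts():
        ip = str(host)
        if ip not in dest_used_ips:
            return ip
    raise RuntimeError("no free IPs in destination subnet")

# Matching CIDRs and the IP is free -> same IP preserved:
print(assign_private_ip("10.0.1.5", "10.0.1.0/24", "10.0.1.0/24", set()))
# 10.0.1.5
# Different CIDR -> a new IP is drawn from the destination range:
print(assign_private_ip("10.0.1.5", "10.0.1.0/24", "10.1.1.0/24", {"10.1.1.1"}))
# 10.1.1.2
```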
DR Plan Types
| Plan Type | Scenario | Requirements | Role Change |
|---|---|---|---|
| Switchover | Planned, orderly migration | Both regions must be operational | Primary <--> Standby swap |
| Failover | Unplanned, disaster response | Only standby region needs to be operational | Standby promoted to Primary |
| Start Drill | Non-destructive test of DR readiness | Creates replica stack in standby | No role change (DrillInProgress state) |
| Stop Drill | Terminates drill and cleans up replica | Must be in DrillInProgress state | Reverts to Active state |
Plan Structure: Groups and Steps
DR plans are organized into plan groups (logical groupings) containing plan steps (individual actions).
Built-in steps (pre-configured by the service):
- Launch/terminate compute instances
- Update load balancer backends
- Promote database standby
- Restore volumes from replicas
- Update DNS/routing configuration
User-defined steps (custom):
- Run custom scripts (e.g., application startup, health checks, notifications)
- Execute application-specific recovery logic
- Perform external system updates
Steps can execute sequentially or in parallel within a plan group. Timeout handling is configurable per step.
Prechecks
Prechecks validate DR plan readiness without affecting production. They verify:
- Network connectivity between regions
- Storage replication status
- Compute resource availability in standby region
- IAM permission validity
- Database replication eligibility
- Custom script accessibility
Oracle recommends running prechecks weekly on all DR plans. Prechecks have zero impact on production workloads.
DR Drills
Drills test your DR readiness without affecting production:
- Start Drill creates a replica of the production stack in the standby region.
- The protection groups enter DrillInProgress state.
- While in DrillInProgress: you cannot execute switchover, failover, or another start drill. You cannot add or remove members.
- Stop Drill tears down the replica and reverts to Active state.
Exam trap: During a drill, protection group roles do NOT change. The primary remains primary and the standby remains standby. Only the lifecycle sub-state changes to DrillInProgress.
Execution Policies
| Policy | Behavior |
|---|---|
| Stop on Failure | Halts entire plan if any step fails. |
| Continue on Failure | Skips failed steps and continues. |
| Timeout | Configurable per step and globally. |
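The interaction between plan groups, steps, and the two failure policies can be modeled as a toy executor. Step names and functions below are illustrative, not real Full Stack DR steps:

```python
# Toy model of DR plan execution: ordered plan groups containing steps,
# with the two documented failure policies. Steps run sequentially here;
# the real service also supports parallel steps within a group.

def run_plan(groups, policy="stop_on_failure"):
    """Execute plan groups in order and return (completed_steps, failed_steps).

    stop_on_failure halts the whole plan at the first failed step;
    continue_on_failure records the failure and moves on.
    """
    completed, failed = [], []
    for group in groups:
        for name, step in group:
            try:
                step()
                completed.append(name)
            except Exception:
                failed.append(name)
                if policy == "stop_on_failure":
                    return completed, failed
    return completed, failed

def ok():
    pass

def boom():
    raise RuntimeError("step failed")

plan = [[("promote-db", ok)], [("launch-compute", boom), ("update-dns", ok)]]
print(run_plan(plan, "stop_on_failure"))      # (['promote-db'], ['launch-compute'])
print(run_plan(plan, "continue_on_failure"))  # (['promote-db', 'update-dns'], ['launch-compute'])
```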
Typical Recovery Timelines
| Operation | Typical Duration |
|---|---|
| Switchover (planned) | 20-55 minutes total (prep 5-15 min, execution 10-30 min, validation 5-10 min) |
| Failover (unplanned) | 20-45 minutes total (detection 1-5 min, assessment 2-5 min, execution 10-20 min, validation 5-15 min) |
8. Business Continuity Planning
Designing for Cross-Region DR
The exam expects you to select the right combination of services for a given RTO/RPO requirement:
| Requirement | Recommended Approach |
|---|---|
| RPO hours, RTO hours, low cost | Backup and restore (RMAN backups cross-region, Object Storage replication for data, compute images pre-replicated) |
| RPO minutes, RTO minutes | Pilot light (Block Volume replication, minimal compute in DR, Full Stack DR for orchestration) |
| RPO seconds, RTO minutes | Warm standby (Data Guard for DB, FSS replication, scaled-down compute in DR, Traffic Management for DNS failover) |
| RPO zero, RTO seconds | Hot standby -- not an official CAF tier, effectively a full-scale warm standby (Active Data Guard in synchronous mode, Block Volume replication, full compute in DR) |
| RPO zero, RTO zero | Active-active (GoldenGate for DB, Traffic Management load-balanced across regions, identical compute in both regions) |
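The selection logic implied by the table reduces to "pick the cheapest topology whose RPO/RTO capability meets the requirement." The thresholds below are rough readings of the table (Hours ~ several hours, Minutes ~ tens of minutes, Seconds ~ one minute), not Oracle SLAs:

```python
# Sketch of DR topology selection: given required RPO/RTO in seconds, return
# the cheapest topology that can meet them. Thresholds are illustrative.

TOPOLOGIES = [  # (name, achievable_rpo_s, achievable_rto_s), cheapest first
    ("Backup and Restore", 4 * 3600, 4 * 3600),
    ("Pilot Light",        15 * 60,  30 * 60),
    ("Warm Standby",       60,       10 * 60),
    ("Active-Active",      0,        0),
]

def pick_topology(rpo_required_s, rto_required_s):
    for name, rpo, rto in TOPOLOGIES:
        if rpo <= rpo_required_s and rto <= rto_required_s:
            return name
    return "Active-Active"  # zero RPO/RTO falls through to the top tier

print(pick_topology(6 * 3600, 6 * 3600))  # hours of slack -> Backup and Restore
print(pick_topology(600, 1800))           # tight RPO -> Warm Standby
print(pick_topology(0, 0))                # zero RPO/RTO -> Active-Active
```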
Key Planning Decisions
Capacity Reservations: Reserve compute shapes in the DR region to guarantee availability during a regional outage. Without reservations, you may not be able to launch instances when everyone fails over simultaneously.
Network pre-configuration: VCNs, subnets, security lists, route tables, and DRGs must be pre-created in the DR region. These are non-moving members and must exist before failover.
Custom image replication: Compute custom images must be replicated to the DR region before they are needed. This is a prerequisite for moving compute members in Full Stack DR.
SSL certificates: Certificates used by load balancers must be available in the DR region. They are not automatically replicated.
9. Cost Considerations
| Cost Category | Details |
|---|---|
| Storage replication | Replicated block volumes billed at Lower Cost tier. Object Storage and FSS replicas billed at standard rates. |
| Network egress | All cross-region replication incurs outbound data transfer charges. Inbound is free. Same-region cross-AD replication for Block Volume and FSS has no network cost. |
| Standby compute | Pilot light and warm standby incur compute costs for running instances. Hot standby doubles compute costs. |
| Capacity reservations | Reserved capacity in DR region incurs charges whether used or not, but guarantees availability. |
| Full Stack DR | The Full Stack DR service itself is included at no additional charge. You pay for the underlying resources (compute, storage, networking). |
| Traffic Management | Steering policies and health checks are billed per policy and per health check. |
| Data Guard | Requires Enterprise Edition or higher licensing. Active Data Guard requires Enterprise Edition Extreme Performance. |
| GoldenGate | Separate service with its own pricing (OCPU-based). |
Exam trap: Full Stack DR itself is free. The cost is in the infrastructure it orchestrates. If a question asks about the cost of implementing DR with Full Stack DR, focus on compute, storage, networking, and database licensing -- not the DR service fee.
10. Exam Focus Areas and Common Traps
High-Probability Question Topics
- Which replication service for which scenario -- Object Storage for unstructured data, Block Volume for compute-attached storage, FSS for shared file systems, Data Guard for databases.
- RPO differences -- Block Volume < 30 min, FSS depends on interval (min 15 min), Object Storage has no published RPO SLA.
- Full Stack DR plan types -- Know the four types (switchover, failover, start drill, stop drill) and when each is used.
- Moving vs non-moving members -- Compute instances are moving; databases are non-moving (they use replication).
- Prechecks and drills -- Prechecks are non-disruptive validation. Drills create a replica stack. Neither affects production.
- Encryption restrictions -- SSE-C blocks Object Storage replication. Customer-managed Vault keys block Block Volume replication.
- Pre-existing object behavior -- Object Storage does NOT replicate pre-existing objects.
- DR topology selection -- Given RTO/RPO requirements and budget, select the correct topology.
Summary of Key Gotchas
| Gotcha | Service |
|---|---|
| Pre-existing objects not replicated | Object Storage |
| SSE-C encryption blocks replication | Object Storage |
| Service authorization required per region | Object Storage |
| Destination bucket is permanently read-only until replication stopped | Object Storage |
| Cannot resize volume while replication enabled | Block Volume |
| Customer-managed Vault keys block replication | Block Volume |
| RPO can exceed 1 hour under heavy write I/O | Block Volume |
| Target file system must never have been exported | FSS |
| Quota rules copied but disabled on target | FSS |
| Deleting converted user snapshots permanently prevents target reuse | FSS |
| Minimum replication interval is 15 minutes | FSS |
| Drill state blocks switchover/failover execution | Full Stack DR |
| Protection group roles do not change during drills | Full Stack DR |
| Full Stack DR service itself is free | Full Stack DR |