Domain 6: Migrate Workloads to OCI (10%)
Domain 6 of the 1Z0-1124-25 Oracle Cloud Infrastructure 2025 Networking Professional exam covers the networking strategies required to migrate workloads from on-premises environments and other cloud providers into OCI. At 10% of the exam, this domain accounts for approximately 5 of the 50 questions (90-minute exam, 68% passing score). Migration questions test your ability to select the correct connectivity model for a given scenario, understand cross-cloud interconnect architectures, and design multi-cloud network topologies.
1. On-Premises to OCI Connectivity for Migration
Two primary connectivity services exist for connecting on-premises data centers to OCI: Site-to-Site VPN and FastConnect. Choosing between them (or combining them) is a core exam topic.
Site-to-Site VPN (IPSec)
Site-to-Site VPN provides encrypted IPSec tunnels over the public internet between an on-premises Customer-Premises Equipment (CPE) device and a Dynamic Routing Gateway (DRG) in OCI. (Site-to-Site VPN Overview)
| Attribute | Detail |
|---|---|
| Protocol | IPSec (tunnel mode only; transport mode not supported) |
| Tunnels per connection | 2 redundant IPSec tunnels |
| Max connections per CPE IP | 8 IPSec connections |
| Routing options | BGP dynamic routing, static routing (max 10 routes), policy-based routing |
| Authentication | IKE pre-shared key (shared secret); letters, numbers, spaces only |
| Setup time | Minutes to hours (no physical provisioning required) |
| Cost | No dedicated line cost; uses public internet |
| Bandwidth | Constrained by internet link; no guaranteed throughput |
Routing type selection per tunnel: Each tunnel can independently use BGP, static, or policy-based routing. Oracle recommends using the same type across all tunnels in a connection. When migrating from static to BGP, switch one tunnel at a time to avoid total connection downtime. (Site-to-Site VPN Overview)
BGP route preference controls (when using BGP):
- Local preference on CPE: Controls which tunnel is preferred for on-premises-to-VCN traffic
- More specific routes: Advertise narrower prefixes on the preferred tunnel (longest prefix match wins)
- AS path prepending: Shorter AS path is preferred; prepend extra hops on the backup tunnel
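These preference mechanics can be sketched in Python. The prefixes, tunnel names, and local-preference values below are illustrative, not taken from OCI docs; the point is the evaluation order: longest prefix match wins first, and BGP attributes only break ties between equal-length prefixes.

```python
import ipaddress

# Candidate routes for on-premises-to-VCN traffic (hypothetical prefixes).
# Each entry: (advertised prefix, tunnel it was learned on, local preference).
routes = [
    (ipaddress.ip_network("10.0.0.0/16"), "tunnel-1", 200),
    (ipaddress.ip_network("10.0.0.0/16"), "tunnel-2", 100),
    (ipaddress.ip_network("10.0.4.0/24"), "tunnel-2", 100),  # more specific
]

def best_route(dest, routes):
    """Longest prefix match first; local preference breaks ties."""
    matches = [r for r in routes if dest in r[0]]
    matches.sort(key=lambda r: (r[0].prefixlen, r[2]), reverse=True)
    return matches[0]

# The /24 wins on prefix length regardless of local preference:
print(best_route(ipaddress.ip_address("10.0.4.9"), routes)[1])  # tunnel-2
# Outside the /24, the higher local preference (200) decides:
print(best_route(ipaddress.ip_address("10.0.9.9"), routes)[1])  # tunnel-1
```

This is why advertising a narrower prefix on the preferred tunnel is such a reliable steering technique: it overrides any attribute-based preference the other tunnel might have.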
Exam trap: Changing a tunnel's shared secret causes tunnel downtime during reprovisioning. The exam may present a scenario where a key rotation appears safe -- it is not a zero-downtime operation.
Exam trap: If the CPE is behind a NAT device, the CPE IKE identifier must be set to the CPE's private IP address (not the public NAT IP). This is a frequently tested detail.
FastConnect
FastConnect provides a dedicated, private connection between on-premises networks and OCI that does not traverse the public internet. (FastConnect Overview)
| Attribute | Detail |
|---|---|
| Connection models | Oracle Partner, Third-Party Provider, Colocation with Oracle |
| Port speeds | 1 Gbps, 10 Gbps, 100 Gbps, 400 Gbps (400 Gbps: colocation and third-party only) |
| Virtual circuit types | Private (VCN access via DRG) and Public (OCI public services without internet) |
| Routing | BGP mandatory for all models |
| Redundancy | Multiple cross-connects in a LAG (cross-connect group); separate FastConnect locations in same metro area |
| Letter of Authority | Required for colocation and third-party models; not applicable for partner model |
| Setup time | Days to weeks (physical cross-connect provisioning required) |
Three connection models compared:
| Capability | Oracle Partner | Third-Party Provider | Colocation |
|---|---|---|---|
| Port speeds | 1/10/100 Gbps | 1/10/100/400 Gbps | 1/10/100/400 Gbps |
| Cross-connect | Partner provides | Customer arranges | Customer arranges |
| LOA required | No | Yes | Yes |
| Network connectivity | Customer arranges | Customer arranges | Not applicable (on-site) |
| Cross-connect group (LAG) | Not applicable | Yes | Yes |
Public virtual circuits provide access to OCI public services (Object Storage, Console, APIs, public load balancers) over the dedicated connection rather than the internet. This is relevant during migration when you need high-speed access to Object Storage for data transfer. No DRG is involved with public virtual circuits. (FastConnect Overview)
Exam trap: FastConnect requires BGP for all connection models. There is no static routing option on FastConnect, unlike Site-to-Site VPN. If a question describes a CPE that does not support BGP, FastConnect is not an option.
VPN vs. FastConnect: Migration Decision Matrix
| Factor | Site-to-Site VPN | FastConnect |
|---|---|---|
| Speed to establish | Minutes | Days to weeks |
| Bandwidth | Limited by internet link | Dedicated 1-400 Gbps |
| Latency | Variable (internet path) | Low, predictable |
| Encryption | Built-in IPSec | Not encrypted by default (add IPSec over FastConnect for encryption) |
| Data volume suitability | Small to moderate | Large to massive |
| Cost model | Internet bandwidth costs | Port fees + partner/colo fees |
| Redundancy | 2 tunnels per connection | Multiple circuits, cross-connects, locations |
Common migration pattern: Stand up a Site-to-Site VPN first for immediate connectivity (application testing, DNS validation, small data transfers). Provision FastConnect in parallel for high-volume data migration. Run both simultaneously during the migration window; use VPN as backup if FastConnect has issues. Decommission VPN after migration completes if only FastConnect is needed long-term. Both VPN and FastConnect attach to the same DRG.
Parallel Connectivity (VPN + FastConnect)
A DRG can terminate both Site-to-Site VPN tunnels and FastConnect virtual circuits simultaneously. This enables:
- Migration redundancy: FastConnect as primary path, VPN as fallback
- Staged migration: VPN for control plane / management traffic, FastConnect for bulk data
- Zero-downtime cutover: Traffic flowing on both paths during transition
Route preference between VPN and FastConnect on the same DRG is controlled through BGP attributes (local preference, AS path length, MED). The DRG uses standard BGP best-path selection. (FastConnect Overview)
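A minimal sketch of the tie-breaking order the DRG applies between the two paths. The attribute values are hypothetical, and real BGP best-path selection evaluates more attributes than shown; this captures only the three controls named above.

```python
# Hypothetical paths to the same on-premises prefix, as a DRG might see them.
paths = [
    {"via": "fastconnect", "local_pref": 100, "as_path_len": 1, "med": 0},
    {"via": "vpn",         "local_pref": 100, "as_path_len": 3, "med": 0},  # prepended
]

def select(paths):
    """Simplified BGP best-path: highest local pref, then shortest AS path, then lowest MED."""
    return min(paths, key=lambda p: (-p["local_pref"], p["as_path_len"], p["med"]))

print(select(paths)["via"])  # fastconnect (AS path prepending demotes the VPN tunnel)
```

With equal local preference, prepending extra AS hops on the VPN advertisement is enough to keep FastConnect as the primary path while the VPN stands by as backup.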
OCI Data Transfer Service (Bulk Offline Migration)
For datasets too large to transfer over the network in a reasonable timeframe, Oracle provides a physical Data Transfer service:
| Attribute | Detail |
|---|---|
| Capacity | Up to 50 TB per appliance |
| Multiple appliances | Supported for larger datasets |
| Destination | OCI Object Storage bucket |
| Cost | No data transfer charges |
| Turnaround | Days (vs. weeks/months over the network) |
| Security | Encrypted at rest on appliance |
When to recommend: If transferring tens of terabytes or more, and available bandwidth (even via FastConnect) would result in multi-week transfer times, the Data Transfer Appliance is the correct answer. The exam may present a scenario with a 100 TB dataset and a 1 Gbps link -- do the math: 100 TB over 1 Gbps takes roughly 9 days at full utilization with no overhead. With realistic overhead, two to three weeks. The Data Transfer Appliance is faster and has no transfer cost.
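The arithmetic behind that estimate, in Python (decimal terabytes; the 40% effective-throughput figure is an illustrative assumption for protocol overhead and link contention):

```python
# Back-of-the-envelope transfer time: 100 TB over a 1 Gbps link.
dataset_bits = 100e12 * 8         # 100 TB (decimal) in bits
link_bps = 1e9                    # 1 Gbps
seconds = dataset_bits / link_bps
days_full = seconds / 86400
print(round(days_full, 1))        # ~9.3 days at 100% utilization
# At an assumed 40% effective throughput, the window stretches past three weeks:
print(round(days_full / 0.4, 1))  # ~23.1 days
```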
2. Migration from Other Cloud Providers
Oracle Interconnect for Azure
Oracle Interconnect for Azure creates a private, dedicated cross-cloud connection between an OCI VCN and an Azure VNet. Traffic uses private IP addresses and never traverses the public internet. (Interconnect for Azure)
Architecture:
| Component | Azure Side | OCI Side |
|---|---|---|
| Virtual network | VNet | VCN |
| Virtual circuit | ExpressRoute circuit | FastConnect private virtual circuit |
| Gateway | Virtual Network Gateway | Dynamic Routing Gateway (DRG) |
| Routing | Route tables | Route tables |
| Security | Network Security Groups | NSGs + Security Lists |
Setup flow: Create Azure ExpressRoute circuit (choose "Oracle Cloud Infrastructure FastConnect" as provider) -> receive Service Key -> create OCI FastConnect virtual circuit using "Microsoft Azure: ExpressRoute" as partner -> enter the ExpressRoute Service Key -> both circuits provision within minutes -> configure route tables on both sides -> verify with test traffic. (Interconnect for Azure)
BGP requirements: Two address blocks (/28 to /31 each) for the primary and secondary BGP peering sessions. Each block yields an Oracle BGP IP and a customer (Azure-side) BGP IP. Redundancy is built in, so a single FastConnect virtual circuit paired with a single ExpressRoute circuit is sufficient.
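The addressing math for one peering block can be checked with Python's standard `ipaddress` module. The /30 below is a placeholder, and which endpoint takes which of the two addresses is a configuration choice; the split shown is purely illustrative.

```python
import ipaddress

# Hypothetical /30 supplied for the primary BGP peering session.
block = ipaddress.ip_network("192.168.100.0/30")
usable = list(block.hosts())  # a /30 yields exactly two usable host addresses
oracle_bgp_ip, customer_bgp_ip = usable  # assignment order is illustrative
print(oracle_bgp_ip, customer_bgp_ip)    # 192.168.100.1 192.168.100.2
```

The secondary peering session needs its own, non-overlapping block of the same kind.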
Key regions (partial list):
| OCI Region | Azure ExpressRoute Location |
|---|---|
| US East (Ashburn) | Washington DC |
| US West (Phoenix) | Phoenix |
| Germany Central (Frankfurt) | Frankfurt |
| UK South (London) | London |
| Japan East (Tokyo) | Tokyo |
| Canada Southeast (Toronto) | Toronto |
Exam trap: The Interconnect for Azure does NOT support on-premises passthrough traffic. You cannot route on-premises -> VCN -> VNet or on-premises -> VNet -> VCN. It is strictly cloud-to-cloud. If a question describes a requirement for on-premises to reach Azure through OCI, the Interconnect alone does not solve it.
Exam trap: VCN and VNet CIDR blocks must not overlap. This is a hard requirement. If a migration scenario has overlapping address spaces between Azure and OCI, you must re-address one side before establishing the interconnect.
Oracle Interconnect for Google Cloud
Oracle Interconnect for Google Cloud connects an OCI VCN to a Google Cloud VPC using GCP Partner Interconnect (VLAN attachment) on the Google side and FastConnect on the OCI side. (Interconnect for Google Cloud)
Architecture:
| Component | GCP Side | OCI Side |
|---|---|---|
| Virtual network | VPC | VCN |
| Virtual circuit | VLAN attachment (Partner Interconnect) | FastConnect private virtual circuit |
| Gateway | Google Cloud Router | Dynamic Routing Gateway (DRG) |
| Routing | BGP via Cloud Router | Route tables + DRG |
| Security | Service Perimeters / Firewall rules | NSGs + Security Lists |
Setup flow: Create GCP Partner Interconnect VLAN attachments (select Oracle FastConnect as partner, paired region, Cloud Router) -> receive pairing key(s) -> create OCI FastConnect virtual circuit using "Google Cloud: OCI Interconnect" -> enter pairing key -> activate VLAN attachment if not pre-activated -> configure route tables -> verify BGP session is ESTABLISHED -> test connectivity. (Interconnect for Google Cloud)
Key regions (partial list):
| OCI Region | GCP Region |
|---|---|
| us-ashburn-1 (Ashburn) | us-east4 (N. Virginia) |
| us-phoenix-1 (Phoenix) | us-west2 (Los Angeles) |
| eu-frankfurt-1 (Frankfurt) | europe-west3 (Frankfurt) |
| uk-london-1 (London) | europe-west2 (London) |
| ap-tokyo-1 (Tokyo) | asia-northeast1 (Tokyo) |
| ap-sydney-1 (Sydney) | australia-southeast1 (Sydney) |
Recommended MTU: 1500 bytes. Larger MTU values on the VLAN attachment can cause hanging connections. (Interconnect for Google Cloud)
Same limitations as Azure Interconnect: No on-premises passthrough. Non-overlapping CIDRs required. Cloud-to-cloud only.
Cross-Cloud Interconnect Comparison
| Attribute | Azure Interconnect | Google Cloud Interconnect |
|---|---|---|
| OCI side technology | FastConnect (partner) | FastConnect (partner) |
| Other cloud technology | ExpressRoute | Partner Interconnect (VLAN) |
| Key exchange | ExpressRoute Service Key | GCP Pairing Key |
| BGP address blocks | 2 blocks, /28 to /31 | 2 blocks, /28 to /31 |
| On-premises passthrough | Not supported | Not supported |
| CIDR overlap | Not allowed | Not allowed |
| Data transfer charges | Standard cloud egress applies | Google waives cross-cloud transfer fees |
| Provisioning time | Minutes | Minutes (plus optional activation step) |
| Redundancy | Built-in (single circuit sufficient) | Must create redundant pair for HA |
Exam trap: Google Cloud Interconnect waives data transfer fees for cross-cloud traffic. Azure Interconnect does not have this same blanket waiver. This cost difference can be a factor in migration planning questions.
AWS to OCI Connectivity
There is no native Oracle-AWS interconnect equivalent to the Azure or Google Cloud interconnects. Connectivity from AWS to OCI requires:
- Internet-based Site-to-Site VPN: IPSec tunnels between AWS VPN Gateway and OCI DRG. Quickest to set up but bandwidth limited and latency variable.
- Third-party partner interconnect: Use a network partner (such as Megaport, Equinix Fabric) that has presence in both AWS Direct Connect and OCI FastConnect locations. Provides private, dedicated bandwidth but requires a partner relationship.
- Direct peering at a colocation facility: If you have presence at a facility that hosts both AWS Direct Connect and OCI FastConnect, you can establish private connectivity through your own network infrastructure.
Exam trap: The exam may test whether you know that a native Oracle-AWS interconnect does not exist. If a question asks about private cross-cloud connectivity between AWS and OCI, the answer involves a third-party partner or VPN -- not a direct Oracle interconnect product.
3. Multi-Cloud Networking Patterns
Split Workload (Database on OCI, Application on Another Cloud)
The most common multi-cloud pattern Oracle promotes: Oracle Database (Autonomous or Exadata) runs on OCI while application tiers run on Azure or GCP. The cross-cloud interconnect provides the low-latency, private connectivity needed for database connections (TCP 1521 for SQL*Net). (Interconnect for Azure)
Oracle Database@Azure takes this further by placing Oracle-operated Exadata infrastructure physically inside Azure data centers, so database traffic stays within the Azure data center network and achieves the lowest possible latency.
DR/Failover Multi-Cloud
Use one cloud as primary and OCI as disaster recovery (or vice versa). The cross-cloud interconnect provides the replication path. Network design considerations:
- Replication bandwidth: Size the interconnect to handle steady-state replication throughput
- DNS failover: External DNS with health checks to redirect traffic during failover
- Asymmetric routing: Ensure security rules and stateful firewalls on both clouds account for traffic that may enter via one path and return via another during failover transitions
Hybrid Cloud with OCI Extension
On-premises infrastructure connected to OCI via FastConnect or VPN, with a secondary cloud connected via cross-cloud interconnect. OCI acts as the hub. Important: on-premises traffic cannot transit through OCI to reach the other cloud via the interconnect (passthrough not supported). On-premises to Azure/GCP requires its own separate connectivity (ExpressRoute, Cloud Interconnect, or VPN).
4. Migration Networking Best Practices
DNS Migration and Cutover
- Pre-migration: Lower TTL values on DNS records weeks before cutover (reduces propagation delay)
- During migration: Use weighted or failover DNS routing to gradually shift traffic
- Post-migration: Update DNS records to point to OCI endpoints; restore normal TTL values
- Private DNS: OCI Private DNS zones resolve within the VCN; configure DNS forwarders between on-premises DNS and OCI Private DNS during hybrid operation
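Weighted routing during cutover amounts to answering a fraction of lookups with the new endpoint and gradually raising that fraction. A toy simulation of the idea (the hostnames and the 90/10 split are invented for illustration, not a real DNS API):

```python
import random

# Illustrative weighted answer selection: 90% of lookups stay on the old
# on-premises endpoint, 10% shift to OCI at the start of a gradual cutover.
ENDPOINTS = {"onprem.example.com": 90, "oci.example.com": 10}

def resolve(rng):
    """Return one endpoint, chosen in proportion to its weight."""
    names, weights = zip(*ENDPOINTS.items())
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)  # seeded so the split is reproducible
hits = [resolve(rng) for _ in range(1000)]
print(hits.count("oci.example.com"))  # roughly 100 of 1000 lookups land on OCI
```

In production this weighting lives in the DNS service (e.g. traffic-steering policies), not in client code; the simulation just shows why low TTLs matter, since each cached answer pins a client to one endpoint until the TTL expires.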
IP Address Planning
- Non-overlapping CIDRs: Mandatory for VCN peering, cross-cloud interconnects, and on-premises connectivity. Plan the OCI address space to avoid conflicts with every connected network.
- Overlapping CIDR workaround: If overlap is unavoidable, use NAT at the boundary. OCI does not natively NAT between VCN and on-premises; this would need to be handled on the CPE or a network virtual appliance (NVA) in the VCN.
- Address conservation: Use appropriately sized subnets. Do not allocate /16 VCNs when /24 subnets are sufficient.
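Overlap between two CIDR blocks is easy to verify programmatically before any peering or interconnect work begins. A small check using Python's standard `ipaddress` module (the example blocks are hypothetical):

```python
import ipaddress

def overlaps(cidr_a, cidr_b):
    """True if two CIDR blocks share any addresses (disallowed for peering/interconnects)."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# Hypothetical address-plan checks before connecting networks:
print(overlaps("10.0.0.0/16", "10.0.128.0/17"))  # True  -- must re-address one side
print(overlaps("10.0.0.0/16", "172.16.0.0/16"))  # False -- safe to interconnect
```

Running a check like this across every pair of connected networks (VCNs, VNets, VPCs, and on-premises ranges) catches conflicts before they become a forced re-addressing project mid-migration.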
Security During Migration
- Encrypted tunnels: Site-to-Site VPN encrypts by default. FastConnect does not -- use IPSec over FastConnect if encryption in transit is required.
- Temporary security rules: Open only the ports needed for migration traffic (e.g., database replication, file transfer). Remove these rules post-migration.
- Multiple enforcement layers: OCI enforces security at route tables, security lists, NSGs, and instance firewalls (firewalld/iptables). All layers must permit the migration traffic. (Interconnect for Azure)
Testing Before Cutover
- Connectivity verification: Ping (ICMP type 8), SSH (TCP 22), application ports (TCP 1521 for database) from source to destination
- BGP session verification: Confirm BGP state is ESTABLISHED on FastConnect virtual circuits and VPN tunnels before routing production traffic
- Route table verification: Confirm route rules direct traffic to the correct DRG and that the DRG has learned the expected routes
- Performance baseline: Measure latency and throughput on the migration path before starting bulk transfer
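A quick port-reachability probe can be scripted with nothing but the standard library. The hosts and ports in the commented examples are placeholders for your own migration endpoints.

```python
import socket

def tcp_reachable(host, port, timeout=3.0):
    """Attempt a TCP handshake to host:port; True on success, False on refusal/timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example checks from a migration host (addresses are placeholders):
# tcp_reachable("10.1.2.3", 22)    # SSH to a target instance
# tcp_reachable("10.1.2.4", 1521)  # Oracle SQL*Net listener
```

Note that a failed probe can mean any layer is blocking: a route table, a security list, an NSG, or the instance firewall, so a False result is the start of the investigation, not the end.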
Rollback Planning
- Keep old connectivity alive: Do not decommission on-premises VPN/FastConnect or source cloud interconnect until migration is validated
- DNS rollback: Retain the ability to switch DNS records back to original endpoints
- Route table rollback: Document all route table changes; be prepared to revert
- Data sync: Maintain reverse replication or backup so the source environment can resume if migration fails
5. Exam Focus Areas
| Topic | Why It Matters |
|---|---|
| VPN vs. FastConnect selection | Scenario-based questions will describe bandwidth, latency, and setup time requirements |
| Cross-cloud interconnects | Know which clouds have native interconnects (Azure, GCP) and which do not (AWS) |
| No on-premises passthrough | Both Azure and GCP interconnects are cloud-to-cloud only |
| CIDR overlap restrictions | Hard requirement for all interconnects and peering |
| BGP mandatory on FastConnect | No static routing option; disqualifies CPEs without BGP support |
| IPSec over FastConnect | Required if encryption in transit is needed on FastConnect |
| Data Transfer Appliance | Correct answer for massive offline data migrations (tens of TB+) |
| DRG as single entry point | Both VPN and FastConnect terminate at the same DRG |