Domain 4: Transitive Routing (10%)

Domain 4 of the OCI 2025 Networking Professional exam (1Z0-1124-25) covers transitive routing -- routing traffic between networks that are not directly connected. At 10% of the exam (~5 questions), this domain requires precise knowledge of DRG route tables, import route distributions, LPG-based transit, NVA routing patterns, and traffic flows through hub-and-spoke topologies.


1. Why OCI Doesn't Support Transitive Routing by Default

OCI VCN peering is non-transitive. If VCN-A peers with VCN-B, and VCN-B peers with VCN-C, traffic from VCN-A cannot reach VCN-C through VCN-B. Each peering relationship is isolated. This is by design -- it prevents unintended route propagation and enforces explicit network segmentation. (Transit Routing)
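The rule can be shown with a tiny model. This is an illustrative sketch, not an OCI API: it treats each peering as an undirected edge and shows that reachability requires a direct edge -- a second peering hop is never traversed.

```python
# Model VCN peering as a set of undirected edges (hypothetical names).
peerings = {("VCN-A", "VCN-B"), ("VCN-B", "VCN-C")}

def can_reach(src, dst):
    """A packet is routable only over a direct peering; OCI never
    forwards it across a second peering hop."""
    return (src, dst) in peerings or (dst, src) in peerings

print(can_reach("VCN-A", "VCN-B"))  # True  -- direct peering exists
print(can_reach("VCN-A", "VCN-C"))  # False -- no transit through VCN-B
```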

To achieve transitive routing, you must explicitly configure one of three patterns:

Pattern | Hub Component | On-Premises Connectivity | Inspection Capability
DRG-based transit | Upgraded DRG with route tables | FastConnect / VPN attached to DRG | Via NVA in hub VCN
LPG-based transit | Hub VCN with LPGs to each spoke | DRG attached to hub VCN | Via NVA in hub VCN (optional)
Upgraded DRG direct peering | Single DRG, all VCNs attached | Same DRG | No inline inspection (DRG only)

2. DRG-Based Transit Routing

The upgraded DRG (post-May 2021) is the primary mechanism for transit routing in OCI. It supports multiple VCN attachments, VCN-to-VCN routing, and transit between on-premises and spoke VCNs. (Managing DRGs)

2.1 Hub-and-Spoke Topology with DRG as Hub

The standard architecture uses a single DRG as the central router with an optional hub VCN containing a network virtual appliance (NVA) for traffic inspection.

                  On-Premises (172.16.0.0/16)
                          │
                   FastConnect / VPN
                          │
                     ┌────┴────┐
                     │  DRG    │
                     │  (Hub)  │
                     └┬──┬──┬──┘
                      │  │  │
              ┌───────┘  │  └───────┐
              ▼          ▼          ▼
        ┌──────────┐ ┌──────────┐ ┌──────────┐
        │ Hub VCN  │ │  VCN-A   │ │  VCN-B   │
        │ (NVA)    │ │ (Spoke)  │ │ (Spoke)  │
        │10.0.0/16 │ │192.168.10│ │192.168.20│
        └──────────┘ └──────────┘ └──────────┘

All VCNs attach to the same DRG. Spoke VCN attachments use a DRG route table (RT-Spoke) that sends all traffic to the hub VCN. The hub VCN attachment uses a different DRG route table (RT-Hub) that knows how to reach each spoke and on-premises. (Transit Routing with Firewall)

2.2 DRG Route Tables for Transit Traffic

Three DRG route tables control all transit traffic:

RT-Spoke (assigned to all spoke VCN attachments):

Destination | Target | Type
0.0.0.0/0 | VCN-Hub attachment | Static

RT-Hub (assigned to the hub VCN attachment):

Destination | Target | Type
192.168.10.0/24 | VCN-A attachment | Dynamic (imported)
192.168.20.0/24 | VCN-B attachment | Dynamic (imported)
172.16.0.0/16 | Virtual Circuit attachment | Dynamic (imported)

RT-OnPrem (assigned to FastConnect/VPN attachment):

Destination | Target | Type
192.168.10.0/24 | VCN-Hub attachment | Static
192.168.20.0/24 | VCN-Hub attachment | Static

RT-OnPrem routes on-premises traffic through the hub VCN (not directly to spokes), ensuring all traffic passes through the NVA for inspection. (Transit Routing with Firewall)
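The lookup behavior behind these tables can be sketched in a few lines. This is an illustrative model (table names are hypothetical; contents follow the tables above): the DRG matches the destination against the route table assigned to the attachment the packet arrives on, longest prefix first.

```python
import ipaddress

# DRG route tables from section 2.2 (hypothetical names).
RT_ONPREM = {"192.168.10.0/24": "hub-vcn-attachment",
             "192.168.20.0/24": "hub-vcn-attachment"}
RT_SPOKE  = {"0.0.0.0/0": "hub-vcn-attachment"}

def next_hop(route_table, dest_ip):
    """Longest-prefix match over a route table; None means no route (drop)."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [c for c in route_table if ip in ipaddress.ip_network(c)]
    if not matches:
        return None
    best = max(matches, key=lambda c: ipaddress.ip_network(c).prefixlen)
    return route_table[best]

# On-premises traffic to a spoke is steered to the hub VCN attachment,
# never directly to the spoke, so it must pass through the NVA.
print(next_hop(RT_ONPREM, "192.168.10.5"))  # hub-vcn-attachment
print(next_hop(RT_SPOKE, "172.16.4.9"))     # hub-vcn-attachment (default route)
```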

2.3 Import Route Distributions

Import route distributions control which routes are dynamically imported into a DRG route table. The RT-Hub table uses an import distribution (Import-Hub) with statements that match specific attachments:

Statement | Priority | Match Criteria | Effect
1 | 10 | VCN-A attachment | Import VCN-A subnet CIDRs
2 | 20 | VCN-B attachment | Import VCN-B subnet CIDRs
3 | 30 | Attachment type: Virtual Circuit | Import on-premises routes via BGP

Statements are evaluated by priority (lowest number = highest priority), but priority does not affect route preference -- that is determined by the conflict resolution rules (static > dynamic, then AS path length, then attachment type). (Managing DRGs)
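The statement evaluation described above can be sketched as follows -- an illustrative model with hypothetical names, not the OCI API: statements are tried in ascending priority order, and the first match admits the route into the table.

```python
# Import distribution statements from the table above (hypothetical names).
statements = [
    {"priority": 10, "match_attachment": "vcn-a-attachment"},
    {"priority": 20, "match_attachment": "vcn-b-attachment"},
    {"priority": 30, "match_type": "VIRTUAL_CIRCUIT"},
]

def is_imported(route):
    """A route is imported if any statement matches it, checked in
    ascending priority order (lowest number first)."""
    for stmt in sorted(statements, key=lambda s: s["priority"]):
        if stmt.get("match_attachment") == route["attachment"]:
            return True
        if stmt.get("match_type") == route["attachment_type"]:
            return True
    return False

# A BGP route from the virtual circuit matches statement 3 and is imported;
# a route from an unlisted IPSec attachment matches nothing and is not.
print(is_imported({"attachment": "vc-1", "attachment_type": "VIRTUAL_CIRCUIT"}))   # True
print(is_imported({"attachment": "ipsec-1", "attachment_type": "IPSEC_TUNNEL"}))   # False
```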

When a DRG is created, two auto-generated import distributions are created:

  • One that imports only VCN routes
  • One that imports all routes from all attachment types

You can create additional custom import distributions. You cannot create custom export distributions. (Managing DRGs)

2.4 Route Conflict Resolution

When multiple routes exist for the same CIDR, the DRG resolves conflicts in this order:

  1. Static routes always win over dynamic routes
  2. Shortest AS path wins (VCN and STATIC sources have empty AS paths)
  3. Attachment type priority: VCN > VIRTUAL_CIRCUIT > IPSEC_TUNNEL > RPC
  4. Same type conflicts: Arbitrary but stable selection (for VCN and RPC); ECMP if enabled (for VIRTUAL_CIRCUIT and IPSEC_TUNNEL, max 8 paths)

Conflicting routes are marked with conflict status in the route table listing. (Managing DRGs)
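A minimal sketch of the resolution order (steps 1-3; ECMP tie-breaking for equal-cost VIRTUAL_CIRCUIT/IPSEC_TUNNEL routes is not modeled, and route names are hypothetical):

```python
# Lower tuple wins; attachment type priority: VCN > VC > IPSEC > RPC.
TYPE_PRIORITY = {"VCN": 0, "VIRTUAL_CIRCUIT": 1, "IPSEC_TUNNEL": 2, "RPC": 3}

def route_rank(route):
    """Rank a candidate route for the same CIDR. VCN and STATIC routes
    have empty AS paths."""
    return (0 if route["static"] else 1,     # 1. static beats dynamic
            len(route["as_path"]),           # 2. shortest AS path
            TYPE_PRIORITY[route["type"]])    # 3. attachment type priority

candidates = [
    {"static": False, "as_path": [64512], "type": "VIRTUAL_CIRCUIT", "name": "vc"},
    {"static": False, "as_path": [64512, 64513], "type": "IPSEC_TUNNEL", "name": "ipsec"},
    {"static": True,  "as_path": [], "type": "VCN", "name": "static-vcn"},
]
winner = min(candidates, key=route_rank)
print(winner["name"])  # static-vcn -- static routes always win
```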

2.5 Spoke-to-Spoke Routing Through DRG

Without an NVA, the upgraded DRG can route directly between spokes. Attach all VCNs to the same DRG, and use the default auto-generated route tables with the "all routes" import distribution. Traffic flow: VCN-A subnet route table sends to DRG, DRG route table has imported route for VCN-B CIDR pointing to VCN-B attachment, packet arrives in VCN-B. No hub VCN needed for uninspected spoke-to-spoke traffic.

With an NVA, spoke-to-spoke traffic follows a longer path: Spoke-A to DRG (RT-Spoke routes to hub VCN), hub VCN ingress route table sends to NVA private IP, NVA processes and returns to hub subnet route table, hub subnet routes back to DRG, DRG (RT-Hub) routes to Spoke-B. (Transit Routing with Firewall)
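The inspected spoke-to-spoke path can be traced with a small chained-lookup model. This is an illustrative sketch assuming the tables from sections 2.2 and 4.2; table names are hypothetical. Each entry maps a destination to (next target, next table consulted).

```python
# Route tables on the inspected Spoke-A -> Spoke-B path (hypothetical names).
TABLES = {
    "spoke-a-subnet": {"192.168.20.0/24": ("drg", "rt-spoke")},
    "rt-spoke":       {"0.0.0.0/0":       ("hub-vcn", "hub-ingress")},
    "hub-ingress":    {"192.168.20.0/24": ("nva", "hub-subnet")},
    "hub-subnet":     {"192.168.20.0/24": ("drg", "rt-hub")},
    "rt-hub":         {"192.168.20.0/24": ("vcn-b", None)},
}

def trace(start, dest):
    """Follow the chain of route-table lookups until delivery."""
    hops, table = [], start
    while table is not None:
        routes = TABLES[table]
        target, table = routes.get(dest) or routes["0.0.0.0/0"]
        hops.append(target)
    return hops

print(trace("spoke-a-subnet", "192.168.20.0/24"))
# ['drg', 'hub-vcn', 'nva', 'drg', 'vcn-b']
```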


3. LPG-Based Transit Routing

LPG-based transit uses a hub VCN with local peering gateways (LPGs) to each spoke VCN, plus a DRG for on-premises connectivity. This pattern works with both legacy and upgraded DRGs. (Transit Routing)

3.1 Architecture

      On-Premises            Hub VCN (10.0.0.0/16)
          │                 ┌──────────────────────┐
       FC / VPN             │ DRG      LPG-H-1 ────┼──── LPG-1   Spoke VCN-A
          │                 │          LPG-H-2 ────┼──── LPG-2   Spoke VCN-B
          └─────────────────┼── DRG attachment     │
                            └──────────────────────┘

Each spoke requires its own LPG pair (one in hub, one in spoke). Route tables on the DRG attachment and each hub LPG control transit traffic.

3.2 Route Table Configuration (Direct Transit)

The following route tables are required for direct gateway-to-gateway transit (shown for a single spoke):

DRG attachment route table (inside VCN, associated with DRG attachment):

Destination | Target
192.168.0.0/16 (spoke) | LPG-H-1

LPG-H-1 route table (inside hub VCN, associated with hub LPG):

Destination | Target
172.16.0.0/12 (on-prem) | DRG

Spoke subnet route table:

Destination | Target
172.16.0.0/12 (on-prem) | LPG-1 (spoke LPG)
10.0.0.0/16 (hub) | LPG-1 (spoke LPG)

(Transit Routing)

3.3 LPG vs DRG for Transit Routing

Factor | LPG-Based Transit | DRG-Based Transit
DRG requirement | Legacy or upgraded | Upgraded only
Scalability | One LPG pair per spoke (limited by LPG quota) | All VCNs attach to a single DRG
Spoke-to-spoke | Not supported (LPG peering is non-transitive) | Supported through DRG route tables
Route management | Manual route tables per LPG | Import distributions automate route propagation
NVA support | Yes (route through private IP) | Yes (route through private IP in hub VCN)
Cross-region | No (LPG is intra-region only) | Yes (via RPC attachments)
Recommendation | Legacy environments only | Preferred for all new deployments

Exam trap: LPG peering is non-transitive. If Spoke-A peers with the hub and the hub peers with Spoke-B, Spoke-A cannot reach Spoke-B through the LPGs by default. Spoke-to-spoke traffic through the hub requires explicit route rules on the hub LPG route tables (or an NVA in the hub VCN) to forward traffic from one LPG to the other -- and even then it is hub-forwarded transit, not direct spoke-to-spoke peering.

3.4 LPG Route Table Restrictions

Gateway Route Table | Allowed Targets | NOT Allowed
DRG attachment | Service Gateway, Private IP, LPG | Internet Gateway, NAT Gateway, DRG itself
LPG | Service Gateway, Private IP, DRG | Internet Gateway, NAT Gateway

(Transit Routing)


4. Network Virtual Appliance (NVA) Transit Routing

An NVA (firewall, IDS/IPS, or other inspection appliance) in the hub VCN enables centralized traffic inspection for all transit traffic.

4.1 NVA Configuration Requirements

Requirement | Detail
Source/destination check | Must be disabled on all NVA VNICs; otherwise each VNIC drops packets not addressed to its own IP.
Private IP | Must be static; it is used as the route target in route tables.
Dual VNICs (LPG pattern) | Frontend VNIC (DRG-facing subnet) and backend VNIC (LPG-facing subnet).
Single VNIC (DRG pattern) | One VNIC in the hub subnet suffices, since all traffic enters and exits through the DRG.
OS configuration | Secondary VNICs must be configured in the OS where applicable (not automatic on most images).

(Transit Routing with Firewall), (Transit Routing)

4.2 NVA Traffic Flow: DRG Pattern

For the DRG-based pattern with a single NVA VNIC (IP 10.0.0.10) in the hub VCN:

Hub VCN ingress route table (associated with hub VCN's DRG attachment):

Destination | Target
172.16.0.0/16 (on-prem) | 10.0.0.10 (NVA)
192.168.10.0/24 (VCN-A) | 10.0.0.10 (NVA)
192.168.20.0/24 (VCN-B) | 10.0.0.10 (NVA)

Hub subnet route table (for the NVA's subnet):

Destination | Target
172.16.0.0/16 (on-prem) | DRG
192.168.10.0/24 (VCN-A) | DRG
192.168.20.0/24 (VCN-B) | DRG

Traffic enters hub VCN through DRG attachment, ingress route table sends to NVA, NVA inspects and forwards, hub subnet route table sends back to DRG, DRG route table (RT-Hub) routes to destination attachment. (Transit Routing with Firewall)

4.3 NVA Traffic Flow: LPG Pattern (Dual VNIC)

For the LPG-based pattern with dual VNICs (frontend 10.0.4.3, backend 10.0.8.3):

DRG attachment route table: 192.168.0.0/16 -> 10.0.4.3 (frontend NVA)

LPG-H-1 route table: 172.16.0.0/12 -> 10.0.8.3 (backend NVA)

Traffic from on-premises enters through DRG, hits the frontend VNIC, NVA processes, sends out the backend VNIC, backend subnet route table routes to LPG, LPG forwards to spoke. Return traffic follows the reverse path through the backend VNIC. (Transit Routing)

4.4 NVA High Availability

A single NVA is a single point of failure. Consider:

  • Multiple NVA instances across fault domains or availability domains
  • OCI Network Load Balancer in front of NVA instances for health checking and failover
  • Route table updates needed if a primary NVA fails (unless using NLB as the route target)
  • All transit traffic passes through the NVA -- it is a potential performance bottleneck

4.5 VCN Local Routing Constraint

Exam trap: VCN local routing always takes precedence over route table rules. Traffic destined for an address within the hub VCN's own CIDR block bypasses all route tables and routes directly within the VCN. This means:

  • Traffic from a spoke to a hub VCN address cannot be inspected by the NVA
  • Do not place production workloads in the hub VCN if you need full NVA inspection
  • Keep hub VCNs dedicated to transit functions only

(Transit Routing)
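A sketch of this precedence, assuming the hub VCN CIDR from section 2.1: the VCN-local check happens before any route table is consulted, which is why NVA inspection can never apply to hub-internal destinations.

```python
import ipaddress

HUB_VCN_CIDR = ipaddress.ip_network("10.0.0.0/16")  # hub VCN from section 2.1

def hub_next_hop(dest_ip):
    """VCN-local delivery takes precedence over every route table rule."""
    if ipaddress.ip_address(dest_ip) in HUB_VCN_CIDR:
        return "local delivery (route tables bypassed)"
    return "route table lookup (e.g. NVA or DRG)"

print(hub_next_hop("10.0.5.7"))      # local delivery -- cannot be inspected
print(hub_next_hop("192.168.10.5"))  # route table lookup
```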


5. Critical Constraints and Exam Traps

5.1 CIDR and Overlap Rules

Rule | Enforced?
Hub and spoke VCN CIDRs cannot overlap | Yes (enforced at peering/attachment)
Spoke-to-spoke CIDRs cannot overlap | No (must validate manually)
On-premises and VCN CIDRs cannot overlap | No (must validate manually)

5.2 DRG Route Propagation Restrictions

IPSec/Virtual Circuit isolation: Routes learned from IPSec tunnel or virtual circuit attachments are never exported to other IPSec or virtual circuit attachments. Packets entering the DRG through an IPSec or virtual circuit attachment cannot exit through another IPSec or virtual circuit attachment -- they are dropped. This prevents OCI from becoming a transit network between on-premises sites. (Managing DRGs)

Dynamic export to VCN not supported: Routes cannot be dynamically exported to VCN attachments. VCN subnet route tables must be configured manually to point to the DRG. (Managing DRGs)

RPC propagation depth limit: Routes cannot propagate through more than 4 DRGs via RPC connections. (Managing DRGs)

5.3 Legacy DRG Limitations

DRGs created before May 2021 cannot perform transit routing between VCNs and cannot attach multiple VCNs. They support only a single VCN attachment plus RPC for remote peering. An upgrade is available (one-way, ~30 minutes per on-premises attachment with BGP reset). (Managing DRGs)

5.4 Static Route Rules

  • Static routes in DRG route tables cannot have a next-hop of IPSec tunnel or virtual circuit attachment
  • Static routes always win over dynamic routes for the same CIDR
  • Cannot have two static routes with identical CIDRs in the same DRG route table

5.5 Unsupported Pattern: Chained DRG Attachments

You cannot chain DRG attachments across VCNs -- for example, DRG-1 attached to VCN-1, VCN-1 LPG-peered to VCN-2, and DRG-2 attached to VCN-2 does not create a routable path between the two DRGs' networks. For multi-DRG scenarios within a region, use RPC connections instead. (Transit Routing)


6. Scenario-Based Configuration Summary

6.1 On-Premises to Spoke VCN via Hub VCN with NVA

  1. Create DRG, attach hub VCN and all spoke VCNs
  2. Create RT-Spoke (static 0.0.0.0/0 -> hub VCN attachment), assign to all spoke attachments
  3. Create RT-Hub with import distribution importing spoke VCN and virtual circuit routes, assign to hub attachment
  4. Create RT-OnPrem routing spoke CIDRs to hub VCN attachment, assign to virtual circuit/IPSec attachment
  5. Create hub VCN ingress route table routing all remote CIDRs to NVA private IP
  6. Create hub subnet route table routing all remote CIDRs to DRG
  7. Disable source/destination check on NVA VNIC
  8. Configure spoke subnet route tables to send remote traffic to DRG

6.2 Internet-Bound Traffic Through NVA

Route internet-bound traffic from spoke VCNs through the NVA for inspection:

  1. Spoke subnet route table: 0.0.0.0/0 -> DRG
  2. RT-Spoke: 0.0.0.0/0 -> hub VCN attachment
  3. Hub VCN ingress route table: 0.0.0.0/0 -> NVA private IP
  4. NVA inspects and forwards to hub subnet route table
  5. Hub subnet route table: 0.0.0.0/0 -> NAT Gateway (or Internet Gateway if public)

Note: The NAT/Internet Gateway must be in the hub VCN. Spoke VCNs do not need their own internet gateways in this pattern.
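The egress path above can be traced with the same chained-lookup idea -- an illustrative sketch with hypothetical table names, following steps 1-5:

```python
# Default routes carry spoke traffic to the NVA; the hub subnet's default
# route hands the inspected traffic to the NAT gateway (hypothetical names).
TABLES = {
    "spoke-subnet": {"0.0.0.0/0": ("drg", "rt-spoke")},
    "rt-spoke":     {"0.0.0.0/0": ("hub-vcn", "hub-ingress")},
    "hub-ingress":  {"0.0.0.0/0": ("nva", "hub-subnet")},
    "hub-subnet":   {"0.0.0.0/0": ("nat-gateway", None)},
}

def egress_path():
    """Follow the default route from a spoke subnet to the internet edge."""
    hops, table = [], "spoke-subnet"
    while table is not None:
        target, table = TABLES[table]["0.0.0.0/0"]
        hops.append(target)
    return hops

print(egress_path())  # ['drg', 'hub-vcn', 'nva', 'nat-gateway']
```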


7. Quick-Reference Decision Table

Scenario | Solution
Spoke-to-spoke, no inspection needed | Upgraded DRG, default route tables with auto-import
Spoke-to-spoke with firewall inspection | Upgraded DRG + hub VCN with NVA
On-premises to multiple VCNs, no inspection | Upgraded DRG, RT-Spoke routes to hub or direct spoke routes
On-premises to multiple VCNs with inspection | Upgraded DRG + hub VCN with NVA (full pattern)
Legacy DRG, on-premises to spokes | LPG-based transit through hub VCN
Cross-region transit | DRG with RPC attachments (max 4-DRG depth)
Centralized internet egress | NVA in hub VCN + NAT Gateway

References