
Domain 4: Design and Implement Migrations Using Oracle Cloud VMware Solution (15%)

Domain 4 of the 1Z0-1123-25 Oracle Cloud Infrastructure 2025 Migration Architect Professional exam covers designing and implementing migrations using Oracle Cloud VMware Solution (OCVS). At 15% of the exam, this domain accounts for approximately 8 questions. Unlike Domain 3 (Oracle Cloud Migrations service, which migrates workloads to native OCI compute), this domain focuses on migrating VMware workloads while preserving the entire VMware stack -- vSphere, vSAN, NSX-T, and HCX -- on OCI bare metal infrastructure.

1. Oracle Cloud VMware Solution (OCVS) Overview

What OCVS Is

Oracle Cloud VMware Solution enables creation and management of VMware Software-Defined Data Centers (SDDCs) running on OCI bare metal compute instances. It provides a fully automated VMware environment with full administrative (root-level) access to all components. The VMware software bundle is pre-installed and managed through OCI, while the VMware operational layer (vCenter, NSX Manager) remains under customer control. (OCVS Overview)

The critical distinction: OCVS is not a managed VMware service where Oracle runs your VMs. You get bare metal hosts with VMware installed, and you manage the VMware environment exactly as you would on-premises. Existing operational practices, runbooks, and tooling work unchanged.

SDDC Components

Every OCVS SDDC includes these VMware components:

| Component | Edition | Purpose |
|---|---|---|
| vSphere (ESXi + vCenter) | Enterprise Plus | Type 1 hypervisor and centralized management |
| vSAN | Advanced with Encryption | Hyperconverged storage across ESXi hosts |
| NSX-T Data Center | Advanced | Network virtualization, microsegmentation, security |
| HCX | Advanced or Enterprise | Application mobility and migration (optional but must be enabled at creation) |

Source: OCVS Overview

Exam trap: HCX can only be enabled during SDDC creation. If you skip HCX during provisioning, you cannot add it later. This is a permanent decision point.

Current Software Versions

OCVS supports two major vSphere tracks:

| Component | vSphere 8.0 Track | vSphere 7.0 Track |
|---|---|---|
| ESXi | 8.0 Update 3g (Build 24859861) | 7.0 U3w (Build 24784741) |
| vCenter | 8.0 Update 3g (Build 24853646) | 7.0 U3s (Build 24201990) |
| NSX-T | 4.2.3.1 (Build 24954571) | 3.2.4 (Build 23653566) |
| HCX Cloud | 4.11.2 (Build 24933578) | 4.10.2.0 (Build 24404455) |

Source: OCVS Overview

vSphere 6.7 and 6.5 reached end of support on October 15, 2022 and are no longer available for new SDDCs.

2. OCVS vs. Native OCI Compute: When to Use Which

This is a core exam topic. The exam tests whether you can recommend the right migration target.

| Factor | OCVS | Native OCI Compute |
|---|---|---|
| Use case | Lift-and-shift VMware workloads without refactoring | Cloud-native deployments, modernized applications |
| Skill set | Existing VMware team can operate immediately | Requires OCI skills (IAM, VCN, compute shapes) |
| Refactoring required | None -- same VMware tools and processes | Application may need OS/driver/networking changes |
| Migration speed | Fast -- HCX enables live migration with zero downtime | Slower -- requires image conversion, testing, validation |
| Cost model | Per-host (bare metal) -- fixed capacity | Per-instance (flexible shapes) -- pay for what you use |
| Long-term strategy | Interim landing zone; modernize over time | Target architecture for cloud-native workloads |
| Oracle Database licensing | VMware-based DB licensing applies | Native OCI DB services may offer better economics |
| Storage | vSAN (local NVMe) or Block Volumes | Block Volumes, Boot Volumes, File Storage |

Source: OCVS Overview; Migrate VMware Workloads

Exam trap: OCVS is the correct answer when questions mention "minimal changes to existing VMware operations," "existing VMware skills," or "fastest path to cloud." Native OCI is correct when questions mention "cloud-native," "modernization," or "optimized cost." If a question mentions migrating Oracle Databases specifically, the answer is often Oracle Zero Downtime Migration (ZDM) to native OCI Database services, not OCVS.

3. SDDC Architecture: Cluster Sizing, Storage, and Networking

SDDC Types

| Type | Hosts | Clusters | Use Case |
|---|---|---|---|
| Multi-Host SDDC | 3-64 per cluster | 1-15 | Production workloads |
| Single-Host SDDC | 1 | 1 | PoC, testing, short-term development only |

Source: OCVS Overview

Cluster Structure

Every SDDC has exactly one management cluster (also called the unified management cluster) created during provisioning. This cluster hosts vCenter Server, NSX Manager Cluster, NSX Edge Nodes, and management services. The management cluster can also run workload VMs.

Up to 14 additional workload clusters can be added (total: 15 clusters per SDDC). Workload clusters contain no management components. The aggregate host count across all clusters cannot exceed 64.

| Limit | Value |
|---|---|
| Maximum clusters per SDDC | 15 |
| Maximum total hosts per SDDC | 64 |
| Dense shapes: hosts per cluster | 3-64 |
| Standard shapes: hosts per cluster | 3-32 |
| Single-host SDDCs per tenancy per region | 10 |
| Multi-AD recommendation | Maximum 16 hosts per SDDC |

Source: OCVS Overview

Supported Compute Shapes

Dense Shapes (Local NVMe Storage, vSAN)

| Shape | Processor | OCPUs | Memory | Network | Max Hosts/Cluster | Multi-AD |
|---|---|---|---|---|---|---|
| BM.DenseIO2.52 | Intel | 52 | 768 GB | 2 x 25 Gbps | 64 | Yes |
| BM.DenseIO.E4.128 | AMD EPYC 3rd Gen | 128 | 2048 GB | 2 x 50 Gbps | 64 | Yes |
| BM.DenseIO.E5.128 | AMD EPYC 4th Gen | 128 | 1536 GB | 1 x 100 Gbps | 64 | Yes |

Dense shapes include local NVMe SSDs that form the vSAN datastore. All pricing commitment types (Hourly, Monthly, Yearly, 3-Year) are available.

Standard Shapes (Block Volume Storage)

| Shape | Processor | OCPUs | Memory | Network | Max Hosts/Cluster | Multi-AD |
|---|---|---|---|---|---|---|
| BM.Standard2.52 | Intel | 52 | 768 GB | 50 Gbps | 32 | No |
| BM.Standard3.64 | Intel | 64 | 1024 GB | 100 Gbps | 32 | No |
| BM.Standard.E4.128 | AMD EPYC 3rd Gen | 128 | 2048 GB | 100 Gbps | 32 | No |
| BM.Standard.E5.192 | AMD EPYC 4th Gen | 192 | 2304 GB | 100 Gbps | 32 | No |

Standard shapes require OCI Block Volume storage. An 8 TB management datastore volume is auto-created. Block Volume limits: max 32 volumes, max 32 TB per volume, minimum 50 GB per volume. Standard shapes support only Hourly billing (no Monthly, Yearly, or 3-Year).

GPU Shape

| Shape | Processor | OCPUs | Memory | GPUs | GPU Memory | Local NVMe |
|---|---|---|---|---|---|---|
| BM.GPU.A10.4 | Intel Xeon X9 | 64 | 1024 GB | 4x NVIDIA A10 | 96 GB | 11.52 TB |

Source: OCVS Overview

Exam trap: Standard shapes do not support Multi-AD deployment. Only dense and GPU shapes support spanning availability domains. Standard shapes also cannot be used for single-host SDDCs.

Exam trap: When mixing dense shapes in a cluster (e.g., E4 + E5), all shapes must share the same processor vendor (all Intel or all AMD). You cannot mix Intel and AMD in the same cluster. Mixed E4/E5 clusters must align vSAN configuration to the lower-spec shape (E4's 8 NVMe disks), leaving 4 of the E5's 12 disks unused.
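The same-vendor rule above can be sketched as a small validation helper. This is illustrative only -- the shape-to-vendor mapping is taken from the dense-shape table in this section, and the function names are not any OCI API:

```python
# Sketch: validate dense-shape mixing within one OCVS cluster.
# Rule from this section: shapes may be mixed, but all must share
# one processor vendor (all Intel or all AMD).
SHAPE_VENDOR = {
    "BM.DenseIO2.52": "Intel",
    "BM.DenseIO.E4.128": "AMD",
    "BM.DenseIO.E5.128": "AMD",
}

def validate_cluster_shapes(shapes):
    """Return (ok, reason) for a proposed list of host shapes."""
    vendors = {SHAPE_VENDOR[s] for s in shapes}
    if len(vendors) > 1:
        return False, "Intel and AMD shapes cannot be mixed in one cluster"
    return True, "OK"
```

A mixed E4/E5 cluster passes this check (both AMD), but remember the vSAN layout then aligns to E4's 8 NVMe disks, leaving 4 of each E5's 12 disks unused.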

Storage Architecture

vSAN (Dense/GPU Shapes): All-flash converged storage using local NVMe SSDs across ESXi hosts. Data is replicated across hosts using vSAN storage policies. A single shared vsanDatastore serves both management and workload VMs. vSAN fault domains distribute replica copies across logical boundaries so a single domain failure only affects one replica. (Architecture)

Block Volumes (Standard Shapes): OCI-managed block storage with automatic redundancy across storage servers. The 8 TB management datastore is auto-created during provisioning. Additional volumes can be added during or after cluster creation.

SDDC Networking Architecture

The SDDC resides inside an OCI Virtual Cloud Network (VCN). Networking uses a dual-layer model:

  1. OCI Layer: VCN with VLANs, subnets, route tables, security lists, Network Security Groups (NSGs), and gateways
  2. VMware Layer: NSX-T overlay segments for VM-to-VM and VM-to-external traffic

Required VLANs per cluster (vSphere 7.x):

| VLAN | Purpose |
|---|---|
| NSX Edge Uplink 1 | SDDC-to-OCI communication |
| NSX Edge Uplink 2 | Reserved for public-facing applications |
| NSX Edge VTEP | Data plane: ESXi host to NSX Edge |
| NSX VTEP | Data plane: ESXi host to ESXi host |
| vMotion | VM live migration traffic |
| vSAN | Storage data traffic |
| vSphere | SDDC management (ESXi, vCenter, NSX-T, NSX Edge) |
| Replication Net | vSphere Replication engine |
| Provisioning Net | VM cold migration, cloning, snapshots |
| HCX (if enabled) | HCX traffic |

Total: 9 VLANs + 1 provisioning subnet (10 VLANs + subnet if HCX enabled).

Source: Creating an SDDC

Connectivity options:

  • On-premises: OCI FastConnect (recommended) or Site-to-Site VPN
  • Oracle Services Network: Direct access to OCI-native services
  • Internet: Via Internet Gateway
  • Other VCN resources: Native integration with OCI compute, databases, Autonomous Database

Known limitation: VMs in OCVS cannot communicate directly with OCI Network Load Balancers (NLBs) on the same VCN. A workaround using a separate VCN or alternative configuration is required. (OCVS Overview)

4. SDDC Provisioning

Prerequisites

Before creating an SDDC, you need:

  1. Service limits: Minimum ESXi host count of 3 and SDDC count of 1 for the target region. Also verify compute core and memory limits for the chosen shape.
  2. VCN: Existing VCN with available CIDR block of /24 or larger.
  3. SSH key pair: Required for ESXi host access.
  4. NAT Gateway (if HCX enabled): Required for HCX activation via VMware SaaS portal.
  5. On-premises connectivity (recommended): Set up FastConnect or VPN before SDDC creation.

CIDR Sizing

| CIDR Block | Segment Size | Max Nodes |
|---|---|---|
| /24 | /28 | 3-12 |
| /23 | /27 | 3-28 |
| /22 | /26 | 3-60 |
| /21 | /25 | 3-64 |

Source: Creating an SDDC

Exam trap: If you plan to add multiple clusters, you need multiple CIDR blocks. Each cluster requires its own set of VLANs and a provisioning subnet. Plan your IP space before provisioning.
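The CIDR sizing table above can be turned into a quick planning check. A minimal sketch, assuming the prefix-to-host mapping from the table (this is a planning aid, not an OCI validation call):

```python
import ipaddress

# Sketch: map SDDC CIDR prefix length to the maximum supported host count,
# per the CIDR sizing table above (illustrative lookup, not an API).
MAX_HOSTS_BY_PREFIX = {24: 12, 23: 28, 22: 60, 21: 64}

def cidr_supports_hosts(cidr, planned_hosts):
    """True if the CIDR block is large enough for the planned host count."""
    prefix = ipaddress.ip_network(cidr).prefixlen
    if prefix not in MAX_HOSTS_BY_PREFIX:
        raise ValueError("SDDC CIDR must be between /21 and /24")
    return planned_hosts <= MAX_HOSTS_BY_PREFIX[prefix]
```

For example, `cidr_supports_hosts("10.0.0.0/22", 48)` returns True, while a /24 block cannot accommodate 20 hosts.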

Provisioning Workflow

The SDDC creation workflow has three steps:

Step 1 -- Basic Information: SDDC name (1-16 chars, must start with letter, unique per region across creating/active/updating SDDCs), VMware software version, HCX license type, SSH public key.

Step 2 -- Clusters: Host configuration (shape, count, availability domain, pricing commitment), networking (VCN, CIDR, VLANs -- create new or select existing), datastores (standard shapes get 8 TB auto-created), notifications.

Step 3 -- Review and Create: Confirm configuration and submit.

Key constraints during provisioning:

| Decision | Permanence |
|---|---|
| HCX enablement | Permanent -- cannot add later |
| Shielded instances | Permanent -- cannot enable after creation |
| Provisioning subnet | Cannot be changed after provisioning |
| SDDC name | Unique per region among active SDDCs |

Source: Creating an SDDC
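The SDDC name constraints from Step 1 (1-16 characters, must start with a letter, unique per region) can be sketched as a pre-flight check. The regex is an assumption for illustration -- the console may enforce additional character rules:

```python
import re

# Sketch: validate an SDDC display name against the Step 1 constraints.
# Pattern is an assumption: letter first, then letters/digits/hyphens,
# 16 characters maximum.
NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9-]{0,15}$")

def sddc_name_ok(name, existing_names):
    """True if the name is well-formed and not already used in the region."""
    return bool(NAME_RE.match(name)) and name not in existing_names
```

Note that uniqueness is checked against creating, active, and updating SDDCs in the region, so `existing_names` should include all three states.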

Single-Host SDDC Limitations

Single-host SDDCs are for testing and PoC only. Key restrictions:

  • Dense shapes only (no standard shapes)
  • No SLA, limited Oracle support (commercially reasonable), VMware support for first 60 days only
  • No HA, no DRS, no distributed management
  • Cannot be upgraded to multi-cluster SDDC
  • No backup -- data is lost if the host fails
  • Hourly and Monthly billing only (no Yearly or 3-Year)
  • Maximum 10 per tenancy per region

Source: OCVS Overview

5. VMware HCX: Migration Engine

HCX (Hybrid Cloud Extension) is the primary migration tool for moving VMware workloads to OCVS. It provides application mobility, network extension, and workload rebalancing between on-premises and cloud.

HCX Licensing

| Feature | HCX Advanced | HCX Enterprise |
|---|---|---|
| Activation keys | 3 | 10 |
| Cost (dense shapes) | Included | Monthly billed upgrade |
| Cost (standard shapes) | Included | Included at no cost |
| Bulk Migration | Yes | Yes |
| vMotion | Yes | Yes |
| Cold Migration | Yes | Yes |
| Replication Assisted vMotion | No | Yes |
| OS Assisted Migration | No | Yes |
| SRM integration | No | Yes |

Source: OCVS Overview

HCX Enterprise billing is monthly, independent of host billing intervals. Upgrading from Advanced to Enterprise takes effect immediately. Downgrading from Enterprise to Advanced is pending until the billing cycle end date.

HCX Migration Types

This is heavily tested. Know each migration type, its downtime profile, and when to use it.

| Migration Type | Protocol | Downtime | Parallelism | Best For |
|---|---|---|---|---|
| vMotion | VMware vMotion | Zero (live migration) | Single VM at a time | Individual critical VMs requiring zero downtime |
| Bulk Migration | vSphere Replication | Reboot-equivalent (brief switchover) | Multiple VMs in parallel, schedulable | Large-scale migrations where brief downtime is acceptable |
| Cold Migration | VMware NFC | Full (VM powered off) | Multiple VMs | Powered-off VMs, template migrations |
| Replication Assisted vMotion (RAV) | Replication + vMotion | Zero (combines both) | Parallel, schedulable | Enterprise-only; best of both worlds -- parallel zero-downtime migration |
| OS Assisted Migration | Agent-based | Varies | Multiple VMs | Non-vSphere VMs migrating to VMware (Enterprise only) |

Source: HCX Migration Types; Migrate VMware Workloads

Exam trap: vMotion provides zero downtime but migrates only one VM at a time. Bulk Migration handles parallel VMs but requires a reboot-equivalent outage. Replication Assisted vMotion (RAV) combines both advantages but requires HCX Enterprise license. If a question asks for "zero downtime migration of many VMs in parallel," the answer is RAV with Enterprise license.

Exam trap: Cold Migration is automatically selected when the source VM is powered off. It uses the VMware NFC protocol, not vMotion.
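The migration-type trade-offs above make a useful decision drill for the exam. A sketch of the selection logic, with boolean inputs as assumptions (this encodes the table above, not any HCX API):

```python
# Sketch: pick an HCX migration type from workload requirements,
# following the downtime/parallelism trade-offs described above.
def pick_migration_type(powered_on, zero_downtime, many_vms, enterprise_license):
    if not powered_on:
        return "Cold Migration"  # NFC protocol, auto-selected for powered-off VMs
    if zero_downtime and many_vms:
        if not enterprise_license:
            raise ValueError("RAV requires an HCX Enterprise license")
        return "Replication Assisted vMotion"
    if zero_downtime:
        return "vMotion"         # zero downtime, but one VM at a time
    return "Bulk Migration"      # parallel, reboot-equivalent switchover
```

This mirrors the exam traps: "zero downtime for many VMs in parallel" resolves to RAV only when Enterprise is licensed.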

HCX Deployment Architecture

HCX uses a paired architecture with components at both source and destination:

  • HCX Cloud Manager: Pre-deployed and pre-configured in the OCVS SDDC during provisioning
  • HCX Connector: Deployed on-premises by the customer as an OVA appliance in the source vCenter

The two instances are paired via site pairing (port 443), and a service mesh is created to deploy the interconnect, network extension, and WAN optimization appliances at both sites.

HCX Deployment Steps (Exam Sequence)

  1. Verify OCVS HCX Cloud Manager: Update HCX bundle in OCVS vCenter if updates are available. Obtain the HCX Connector OVA download link from OCVS HCX Manager. (HCX Configuration Guide)

  2. Deploy HCX Connector on-premises: Deploy the OVA in the source vCenter. Configure admin/root passwords, IP address, hostname, DNS, NTP, and SSH. The connector must have outbound internet access for VMware SaaS portal activation.

  3. Activate HCX Connector: Use the activation key from the OCI Console (found under SDDC details > HCX on-premises Connector Activation Keys). The connector contacts the VMware SaaS portal for verification.

  4. Connect to vCenter and NSX: Register the connector with the on-premises vCenter Server and optionally NSX Manager. Configure SSO/PSC details.

  5. Create Site Pairing: Pair the on-premises HCX Connector with the OCVS HCX Cloud Manager using the remote HCX URL and OCVS vCenter credentials over port 443.

  6. Create Network Profiles: Define four separate network profiles on the source side:

    | Profile | DvPortgroup | Routed? | Purpose |
    |---|---|---|---|
    | HCX-Management | Management vmk | Routed | Communication with vCenter, DNS, NTP |
    | HCX-Uplink | HCX-Uplink VLAN | Routed | Interconnect appliance communication |
    | HCX-vMotion | vMotion vmk | Non-routed | vMotion traffic |
    | HCX-Replication | Replication vmk | Non-routed | Bulk migration replication traffic |

    Best practice: Never combine functions into a single network profile. (HCX Configuration Guide)

  7. Create Compute Profile: Define compute, storage, and network placement for HCX appliances. Select services to activate, target cluster, datastore, folder, and map to the four network profiles.

  8. Create Service Mesh: Deploy HCX Interconnect, Network Extension, and WAN Optimization appliances at both sites. Select source and destination compute profiles. Configure uplink profile and network container mapping.

  9. Update OCI Network Security Groups: Add ingress and egress rules to OCVS VLAN NSGs to accept traffic from the on-premises VMware network.

HCX Prerequisites

| Requirement | Detail |
|---|---|
| Connectivity | OCI FastConnect with minimum 1 Gbps bandwidth |
| Routing | All required routes published between on-premises and OCVS VCN CIDR |
| Compatibility | Verify source VMware version against VMware Interoperability Matrix |
| Port 443 | Required between sites for site pairing |
| NAT Gateway | Required in the OCVS VCN for HCX activation via VMware SaaS portal |
| Bastion host | Recommended in OCI for accessing OCVS management components |
| NTP and DNS | Must be configured at both sites |

Source: HCX Configuration Guide

HCX Network Extension

HCX Network Extension stretches Layer 2 networks from on-premises to OCVS, allowing VMs to retain their IP addresses during migration. This eliminates the need for IP re-addressing, DNS changes, or firewall rule updates. Network extension is configured through the service mesh and uses the HCX VLAN.

If NSX is deployed on-premises, transport zones are eligible for extension. If using distributed vSwitches (dvSwitch) without NSX, the dvSwitch is used for L2 extension.

6. Migration Strategy and Best Practices

Source Environment Discovery and Assessment

Before migrating, assess the source VMware environment:

  1. Inventory: Catalog all VMs, templates, resource pools, and vApps
  2. Dependencies: Map application-to-application and application-to-infrastructure dependencies
  3. Compatibility: Verify VMware version compatibility using the VMware Interoperability Matrix
  4. Network: Document all VLANs, subnets, firewall rules, and load balancer configurations
  5. Storage: Calculate total storage consumption and IOPS requirements for cluster sizing
  6. Licensing: Identify Oracle Database VMs (these may be better migrated to native OCI DB services using ZDM)

Migration Decision Framework

| Workload Type | Recommended Target | Migration Tool |
|---|---|---|
| VMware application VMs | OCVS | HCX (vMotion, Bulk, RAV) |
| Non-Oracle database VMs | OCVS | HCX (vMotion, Bulk) |
| Oracle Database VMs | Native OCI DB services | Oracle Zero Downtime Migration (ZDM) |
| VMs requiring modernization | Native OCI compute | Oracle Cloud Migrations (OCM) or manual |

Source: Migrate VMware Workloads

Cluster Sizing Best Practices

  • Work with an Oracle Cloud Architect for optimal shape and cluster sizing
  • Size based on compute, memory, and storage requirements from the assessment phase
  • For standard shapes, do not scale to the maximum 32 hosts -- reserve 1-2 hosts for potential replacements
  • Host deployment takes 20-25 minutes per host
  • Validate sizing during implementation; assessment-phase estimates need updating based on actual workload behavior
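A back-of-the-envelope host-count estimate can make the sizing bullets concrete. Everything here is an assumption for illustration -- the per-host defaults loosely follow BM.Standard.E4.128 (128 OCPUs, 2048 GB), and the vCPU-per-OCPU ratio and headroom are design choices; real sizing should be done with an Oracle Cloud Architect as noted above:

```python
import math

# Sketch: rough host-count estimate for a standard-shape cluster.
# Per-host capacity, consolidation ratio, and headroom are assumptions.
def estimate_hosts(total_vcpus, total_mem_gb, ocpus_per_host=128,
                   mem_per_host_gb=2048, vcpus_per_ocpu=4, headroom=0.7):
    # Size against the tighter of CPU and memory, leaving ~30% headroom.
    by_cpu = total_vcpus / (ocpus_per_host * vcpus_per_ocpu * headroom)
    by_mem = total_mem_gb / (mem_per_host_gb * headroom)
    hosts = max(3, math.ceil(max(by_cpu, by_mem)))  # 3-host cluster minimum
    if hosts > 30:  # keep 1-2 hosts of the 32-host standard limit in reserve
        raise ValueError("Demand exceeds a single standard-shape cluster")
    return hosts
```

For example, an estate of 2,000 vCPUs and 8 TB of RAM lands at 6 hosts under these assumptions; small estates still get the 3-host minimum.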

Migration Best Practices

  1. Enable HCX at SDDC creation -- it cannot be added later
  2. Use FastConnect (minimum 1 Gbps) for production migrations, not VPN
  3. Separate network profiles -- do not combine management, uplink, vMotion, and replication into one profile
  4. Stage migrations: Start with non-critical workloads, validate, then migrate production
  5. Use RAV for large-scale zero-downtime migrations if HCX Enterprise is licensed
  6. Plan IP addressing: Use /21 or /22 CIDR blocks if you expect to scale to many hosts
  7. Set up monitoring: Configure OCI Notifications with alarm topics during SDDC creation

7. Scaling OCVS Clusters

Adding Hosts

Hosts can be added to any cluster after provisioning. Key rules:

  • Different shapes are allowed within a cluster, but all shapes must share the same processor vendor (Intel or AMD)
  • Different billing intervals can be mixed within a cluster
  • Different minor ESXi versions can coexist
  • Dense shapes: scale to 64 hosts per cluster
  • Standard shapes: scale to 32 hosts per cluster (recommend max 30-31 to reserve for replacements)

Adding Clusters

Add up to 14 workload clusters beyond the initial management cluster. Each new cluster requires its own:

  • CIDR block
  • VLANs
  • Provisioning subnet

Cluster resources are independent of other clusters.

Source: OCVS Overview

8. Pricing and Billing Model

Commitment Types

| Commitment | Availability | Discount |
|---|---|---|
| Hourly | Dense and Standard shapes (not GPU) | No discount (on-demand) |
| Monthly | Dense and GPU shapes (not Standard) | Moderate discount |
| Yearly | Dense and GPU shapes (not Standard) | Significant discount |
| 3-Year | Dense and GPU shapes (not Standard) | Maximum discount |

Source: OCVS Overview

Exam trap: Standard shapes support only Hourly billing. GPU shapes do not support Hourly. If a question presents a standard-shape SDDC with monthly billing, or a GPU SDDC with hourly billing, those are invalid configurations.

Exam trap: Single-host SDDCs support only Hourly and Monthly billing. Yearly and 3-Year commitments are not available.
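The commitment rules in the table and traps above reduce to a simple lookup, which is worth drilling. A sketch with the rules encoded as data (illustrative only, not an OCI validation call):

```python
# Sketch: check whether a pricing commitment is valid for a shape class,
# per the commitment table and exam traps above.
VALID_COMMITMENTS = {
    "dense":    {"Hourly", "Monthly", "Yearly", "3-Year"},
    "standard": {"Hourly"},
    "gpu":      {"Monthly", "Yearly", "3-Year"},
}

def commitment_is_valid(shape_class, commitment, single_host=False):
    if commitment not in VALID_COMMITMENTS[shape_class]:
        return False
    if single_host and commitment not in {"Hourly", "Monthly"}:
        return False  # single-host SDDCs: Hourly or Monthly only
    return True
```

Both invalid configurations called out in the traps -- Monthly billing on standard shapes and Hourly billing on GPU shapes -- fail this check.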

Key Billing Rules

  • Billing is per-cluster and can be mixed across clusters
  • Different hosts within a cluster can have different billing intervals
  • HCX Enterprise is billed monthly, independent of host billing
  • If a host is deleted before its commitment period ends, billing continues for the full commitment duration
  • Billing commitments can be transferred between hosts
  • Failed provisioning is not billed

Reserved Capacity

Reserve bare metal capacity in advance for guaranteed availability. Billing works in two stages:

  1. In reserved pool: Billed at Reserved Capacity SKU pricing
  2. Provisioned to SDDC: Billed at VMware Solution SKU pricing

Reserved capacity is not supported for SDDCs spanning multiple availability domains.

9. Shielded Instances

OCVS supports Shielded Instances for ESXi hosts, providing:

  • Secure Boot: Validates cryptographic signatures of boot firmware, drivers, and OS
  • Trusted Platform Module (TPM): Secure storage for certificates, encryption keys, and platform authentication artifacts

Critical constraint: Shielded instances must be enabled during cluster creation. They apply to all hosts in the cluster and cannot be enabled for specific hosts or added after creation. If you need shielded instances later, you must recreate the cluster. (OCVS Overview)

10. Management and Monitoring

Management Interfaces

| Interface | Controls |
|---|---|
| OCI Console / API / CLI | SDDC lifecycle, cluster management, host scaling, networking |
| vSphere Client (vCenter) | VM creation, vSAN management, resource pools |
| NSX Manager | Overlay segments, distributed firewall, load balancing |
| HCX Manager | Migration workflows, network extension, service mesh |

Exam trap: Changes made via the OCI Console (SSH key updates, software version changes) are not automatically reflected in vCenter. Manual synchronization may be required.

Software Upgrades

OCVS provides automated upgrade workflows. Oracle notifies when new VMware versions are available. The upgrade process varies by current version and follows guided steps.

11. Exam Traps and Common Pitfalls

| Trap | Correct Answer |
|---|---|
| "Can I add HCX after SDDC creation?" | No. Must be enabled during creation. |
| "Can I enable shielded instances on existing hosts?" | No. Must be enabled during cluster creation for all hosts. |
| "What migration type for zero-downtime parallel migration?" | Replication Assisted vMotion (RAV) -- requires HCX Enterprise. |
| "Can standard shapes span multiple ADs?" | No. Only dense and GPU shapes support Multi-AD. |
| "Monthly billing for standard shapes?" | Not available. Standard shapes support Hourly only. |
| "Maximum hosts per SDDC?" | 64 total across all clusters. |
| "Can I upgrade single-host SDDC to multi-cluster?" | No. Single-host SDDCs cannot be upgraded. |
| "Oracle DB on VMware -- migrate where?" | Native OCI DB services using ZDM, not OCVS. |
| "Mix Intel and AMD in same cluster?" | Not allowed. All shapes must share same processor vendor. |
| "What does bulk migration downtime look like?" | Equivalent to a reboot (brief switchover). |
| "HCX Connector -- who deploys it?" | Customer deploys OVA on-premises. HCX Cloud Manager is pre-deployed in OCVS. |
| "Minimum FastConnect bandwidth for HCX?" | 1 Gbps. |
| "What happens if host deleted before commitment ends?" | Billing continues for the full commitment period. |
| "OCVS VMs can talk to NLB on same VCN?" | No. Known limitation requiring a workaround. |

References