
Cloud Services Cross-Reference: Compute

Compute is the foundational category across all major cloud providers, covering the virtual and physical server resources used to run workloads. Each provider offers virtual machines, bare metal servers, spot/preemptible pricing options, dedicated isolation, HPC-optimized instances, image management, and auto-scaling — but with different naming conventions, shape models, and differentiating features. This document maps equivalent services and highlights provider-specific distinctions.


1. Virtual Machines (General Purpose)

General purpose VMs are designed for balanced CPU-to-memory ratios and are suitable for the widest range of workloads including web servers, dev/test environments, small-to-medium databases, and containerized applications.

Function | AWS | Azure | OCI | GCP
Core VM service | Amazon EC2 | Azure Virtual Machines | OCI Compute Instances | Compute Engine
General purpose family | M-series (M7i Intel, M7a AMD, M7g Graviton) | D-family (Dv5, Dasv6, Dpsv6 series) | VM.Standard3, VM.Standard.E4/E5/E6 | N4, N2, N2D, E2, C4, C4D
Burstable/entry-level | T-series (T3, T4g) | B-family (Bsv2, Basv2, Bpsv2) | VM.Standard.A1.Flex (Ampere) | E2
Arm-based | Graviton (M7g, C7g, R7g) | Dpsv5 (Ampere Altra), Dpsv6 (Cobalt) | VM.Standard.A1.Flex, A2.Flex, A4.Flex (Ampere) | Tau T2A (Ampere), N4A and C4A (Axion)

AWS uses the M-series as its primary general purpose family, with sub-variants for Intel (M7i), AMD (M7a), and its custom Graviton Arm processors (M7g). The T-series provides burstable CPU credit-based performance for variable workloads.
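The burstable model is worth making concrete: credits accrue at a fixed baseline rate and are spent at actual utilization, so sustained load above the baseline drains the bank. A minimal sketch of that accounting; the 20% baseline and 288-credit cap here are illustrative placeholders, not the published rates for any particular instance size.

```python
# Sketch of a T-series-style CPU credit model. The credit arithmetic
# mirrors how burstable instances work in general; the specific numbers
# (20% baseline, 288-credit cap) are illustrative assumptions.

def credit_balance(hours: float, util: float,
                   baseline: float = 0.20, max_credits: float = 288.0) -> float:
    """Credits banked after running at `util` average CPU for `hours`.

    One credit = one vCPU at 100% for one minute; a vCPU earns
    baseline * 60 credits per hour and burns util * 60 per hour.
    """
    earn_per_hour = baseline * 60          # credits accrued per hour
    burn_per_hour = util * 60              # credits consumed per hour
    balance = (earn_per_hour - burn_per_hour) * hours
    return max(0.0, min(balance, max_credits))

# A workload idling at 5% CPU for 10 hours banks credits for later bursts:
print(credit_balance(hours=10, util=0.05))   # 90.0
```

Running above the baseline for long enough exhausts the bank, after which the instance is throttled to the baseline (or billed extra in unlimited mode).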

Azure uses the D-family as its standard general purpose offering. The D-family has many sub-series based on processor vendor (Intel, AMD, Arm) and feature set (local disk, confidential computing). The B-family provides burstable CPU credit functionality equivalent to AWS T-series.

OCI offers fixed and flexible shapes. Fixed shapes have a preset CPU/memory ratio. Flexible shapes (e.g., VM.Standard.E4.Flex) let you specify exact OCPU count and memory within defined limits, which is a differentiator not available in the same form on other providers. Ampere Altra-based A1 shapes are widely used for cost-efficient Arm workloads.
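A launch-time validator makes the flexible-shape envelope concrete. The E4.Flex-style bounds used below (1 to 64 OCPUs, up to 64 GB per OCPU, 1024 GB total) are assumptions for illustration; check the published limits for the shape you actually use.

```python
# Validation sketch for an OCI-style flexible shape, where OCPU count and
# memory are chosen independently at launch. Bounds are illustrative
# assumptions modeled on VM.Standard.E4.Flex, not authoritative limits.

def validate_flex_config(ocpus: int, memory_gb: int,
                         max_ocpus: int = 64,
                         max_gb_per_ocpu: int = 64,
                         max_total_gb: int = 1024) -> bool:
    """Return True if the OCPU/memory pair fits the shape's envelope."""
    if not (1 <= ocpus <= max_ocpus):
        return False
    if not (1 <= memory_gb <= min(ocpus * max_gb_per_ocpu, max_total_gb)):
        return False
    return True

print(validate_flex_config(4, 64))     # True: 4 OCPUs, 64 GB
print(validate_flex_config(2, 512))    # False: exceeds 64 GB per OCPU
print(validate_flex_config(64, 2048))  # False: above the 1024 GB total cap
```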

GCP organizes general purpose instances into multiple series by processor generation. The E2 series is the most cost-efficient entry-level option. N-series instances (N2, N2D, N4) are the standard general purpose tier. C4 and C4D are newer, larger-core series with Intel and AMD processors respectively.

Unique / Differentiating Features

  • OCI Flexible Shapes: OCI lets you independently specify OCPU count and memory at launch for standard VM shapes. GCP offers comparable flexibility through custom machine types on select series (within tighter per-vCPU memory ratios), while AWS and Azure tie each instance size to a fixed CPU/memory pairing.
  • AWS Graviton: Custom Arm-based processors (Graviton3, Graviton4) designed by AWS, offering up to 40% better price performance than comparable x86 instances.
  • GCP Axion: Google's custom Arm-based processor (Armv9 architecture), available in N4A and C4A series as of 2025.
  • Azure Cobalt: Microsoft's custom Arm-based processor used in Dpsv6/Epsv6 series.

2. Compute-Optimized Instances

Compute-optimized instances provide a high CPU-to-memory ratio for workloads that require significant processing power relative to memory: batch processing, media transcoding, high-performance web servers, and scientific computing.

Function | AWS | Azure | OCI | GCP
Compute-optimized family | C-series (C7i, C7a, C7g) | F-family (Fasv7, Fsv2), FX-family | VM.Optimized3.Flex, BM.Optimized3.36 | C2, C2D, H3, H4D
High single-core frequency | C7i (Intel Sapphire Rapids) | FX-series (Intel Ice Lake) | BM.Optimized3.36 (Intel Xeon 6354) | C2 (Intel Cascade Lake)

AWS C-series instances are available across Intel (C7i), AMD (C7a), and Graviton (C7g) processor families. The C7i uses Intel Sapphire Rapids processors.

Azure F-family VMs are the primary compute-optimized tier. The FX-series is a specialized sub-family targeting Electronic Design Automation (EDA) and workloads requiring high single-core performance with large cache.

OCI VM.Optimized3.Flex and BM.Optimized3.36 shapes use Intel Xeon 6354 processors running at 3.0–3.6 GHz, providing high single-core frequency for tightly coupled compute workloads.

GCP C2 and C2D are the core compute-optimized series. H3 and H4D serve the upper end where compute-optimized overlaps with HPC use cases.


3. Memory-Optimized Instances

Memory-optimized instances provide large amounts of RAM relative to CPU count, suited for in-memory databases (SAP HANA, Redis), large-scale analytics, real-time data processing, and enterprise workloads.

Function | AWS | Azure | OCI | GCP
Memory-optimized family | R-series (R7i, R7a, R7g) | E-family (Easv7, Ev5, Epsv6), M-family | VM.Standard.E4.Flex (up to 1024 GB), VM.Standard.E6.Flex (up to 1454 GB) | M1, M2, M3, M4
Ultra-high memory | X-series (X2idn, X2iedn, X8aedz) | M-family (Msv3, Msv2 High Memory) | BM.Standard.E6.256 (up to ~6 TB via Extended Memory VMs) | X4 (up to 32 TB memory)
SAP HANA certified | X-series, R-series | M-family, E-family | VM.Standard.E4.Flex with extended memory | M2, M3

AWS R-series is the primary memory-optimized tier. The X-series provides extreme memory configurations up to multiple terabytes, with X8aedz instances based on 5th Gen AMD EPYC providing up to 3,072 GiB of memory.

Azure E-family covers the standard memory-optimized range. The M-family addresses extreme memory requirements. Msv3 and Mdsv3 series offer Medium, High, and Very High Memory sub-series. SAP HANA certification spans both E and M families.

OCI uses flexible shapes to cover most memory-optimized use cases. VM.Standard.E5.Flex supports up to 1,049 GB and VM.Standard.E6.Flex supports up to 1,454 GB. Extended Memory VMs allow configurations beyond the standard shape maximums.

GCP M-series (M1, M2, M3, M4) are the memory-optimized line. The X4 series provides the highest memory configurations, up to 32 TB, for in-memory database workloads at the extreme end.


4. GPU / Accelerated Computing Instances

GPU instances attach hardware accelerators (primarily NVIDIA GPUs, with some AMD GPU options) to handle AI/ML training and inference, scientific simulation, high-performance rendering, and other massively parallel workloads.

Function | AWS | Azure | OCI | GCP
AI/ML training (NVIDIA) | P5 (H100), P4 (A100), P3 (V100) | ND H100 v5, ND A100 v4, NCv3 (V100) | BM.GPU.H100.8, BM.GPU.H200.8, BM.GPU.B200.8, BM.GPU.A100-v2.8 | A3 (H100/H200), A2 (A100), A4 (B200), A4X (GB200)
AI inference | Inf2 (Inferentia2), G6 (L4) | NV-family (A10), NC A100 v4 | VM.GPU.A10.1/2, BM.GPU.A10.4 | G2 (L4), G4
Graphics/VDI | G5 (NVIDIA A10G) | NV-family (NVadsA10_v5, NVv4), NGads (AMD Radeon) | VM.GPU.A10 | G2 (L4)
Proprietary AI accelerators | Trainium (Trn2), Inferentia (Inf2) | n/a | n/a | TPU (v4, v5e, v5p, Trillium v6e)

AWS P-series targets training workloads (P5 uses H100 80GB NVLink). The Inf2 and Trn2 families use AWS-designed Inferentia and Trainium chips respectively for inference and training, avoiding NVIDIA licensing costs and offering competitive pricing for specific ML workloads.

Azure ND-family (large-memory GPU) and NC-family (compute-focused GPU) cover training and inference. The ND MI300X v5 series adds AMD MI300X GPU support. NV-family targets visualization and VDI use cases.

OCI offers a wide range of GPU bare metal shapes covering NVIDIA H100, H200, B200, and AMD MI300X / MI355X. OCI is notable for offering some of the latest NVIDIA GPU generations quickly and in large cluster configurations. VM GPU shapes are available for smaller workloads.

GCP A-series (A2, A3, A4, A4X) are NVIDIA-based accelerator-optimized instances. G-series (G2, G4) target inference and lighter training. GCP uniquely offers TPUs (Tensor Processing Units) as a first-party AI accelerator, available as v4, v5e, v5p, and Trillium v6e — a capability not available on AWS, Azure, or OCI without third-party hardware.

Unique / Differentiating Features

  • AWS Trainium/Inferentia: Custom AI accelerator chips that provide an alternative to NVIDIA GPUs for training (Trainium) and inference (Inferentia) workloads.
  • GCP TPU: Google's custom Tensor Processing Units for ML workloads, available nowhere else in the same integrated form.
  • OCI GPU Shapes: OCI offers GPU bare metal shapes with AMD MI300X/MI355X GPUs, providing AMD GPU options at scale alongside NVIDIA options.

5. Bare Metal Servers

Bare metal instances provide direct access to the physical server hardware with no hypervisor between the workload and the hardware. Used for workloads with strict licensing requirements (Oracle Database, per-core licensing), high-performance computing, and latency-sensitive applications.

Function | AWS | Azure | OCI | GCP
Bare metal service | EC2 bare metal instances (e.g., m7i.metal, c7i.metal) | Azure Bare Metal Infrastructure | OCI Bare Metal Compute (BM.Standard, BM.DenseIO, BM.GPU, BM.HPC shapes) | Metal instance sizes (Z3, C3) plus sole-tenant nodes
Availability | Selected instance families have metal sizes | Separate specialized service | First-class shape type alongside VMs | Available in select series

AWS bare metal instances are available as specific sizes within existing instance families (e.g., m7i.metal-24xl, c7i.metal-48xl). They run within the same EC2 ecosystem and support the same APIs. Not all instance families offer metal variants.

Azure separates bare metal into Azure Bare Metal Infrastructure, a distinct service targeting SAP HANA, Oracle, and high-performance workloads. It is not a self-service resource in the same way as EC2 bare metal — it is provisioned through Microsoft's sales and infrastructure team for specific certified workloads.

OCI treats bare metal as a first-class shape type alongside VMs. All BM.* shapes are bare metal: BM.Standard3.64, BM.Standard.E5.192, BM.GPU.H100.8, BM.HPC.E5.144, etc. OCI bare metal runs the same Compute service APIs and console as VMs, with no separate service boundary.
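The BM./VM. naming convention is regular enough to parse mechanically, which is handy when inventorying shapes. The helper below is an illustrative sketch, not an official OCI utility; it splits a shape name into hosting model, family, and the trailing size or Flex marker.

```python
# Illustrative parser for OCI shape names such as BM.Standard3.64,
# BM.GPU.H100.8, and VM.Standard.E4.Flex. Assumes the convention shown
# in the text: a BM/VM prefix, dotted family fields, and a trailing
# numeric size or "Flex" suffix.

def parse_oci_shape(shape: str) -> dict:
    parts = shape.split(".")
    last = parts[-1]
    has_suffix = last.isdigit() or last == "Flex"
    return {
        "bare_metal": parts[0] == "BM",
        "flexible": last == "Flex",
        "size": int(last) if last.isdigit() else None,
        "family": ".".join(parts[1:-1] if has_suffix else parts[1:]),
    }

print(parse_oci_shape("BM.GPU.H100.8"))
# {'bare_metal': True, 'flexible': False, 'size': 8, 'family': 'GPU.H100'}
print(parse_oci_shape("VM.Standard.E4.Flex")["flexible"])   # True
```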

GCP provides bare metal access primarily through sole-tenant nodes (see Dedicated Hosts section) and select instance series that offer metal sizes (Z3, C3). The offering is more limited compared to AWS or OCI in terms of self-service breadth.


6. Dedicated Hosts

Dedicated hosts provide a physical server allocated entirely to a single customer's use, enabling per-socket, per-core, or per-VM software licensing compliance, and meeting compliance requirements for physical host isolation.

Function | AWS | Azure | OCI | GCP
Dedicated host service | Amazon EC2 Dedicated Hosts | Azure Dedicated Hosts | OCI Dedicated Virtual Machine Hosts | GCP Sole-Tenant Nodes
Billing model | Billed per host; instances run at no additional compute charge | Billed per host | Billed per dedicated host; VMs on it are not billed separately | Billed for all vCPU and memory on the node, plus a 10% sole-tenancy premium
License bring-your-own | Yes (BYOL per-socket/per-core) | Yes (BYOL with Azure Hybrid Benefit) | Yes (supports per-core licensing isolation) | Yes (BYOL support)
License Manager integration | AWS License Manager | Azure Hybrid Benefit | OCI License Manager | Google Cloud License Manager

AWS Dedicated Hosts are integrated with AWS License Manager to automate host allocation and track license usage. The host is allocated to your account and you choose when to place instances on it. Hosts can be shared across an AWS Organization.

Azure Dedicated Hosts follow the same concept and support a Host Group construct for grouping multiple dedicated hosts. Azure Hybrid Benefit integrates with Windows Server and SQL Server licensing.

OCI Dedicated VM Hosts are created with a specific shape that determines total capacity (e.g., DVH.Standard.E4.128 means 128 OCPUs available on the dedicated host). Multiple VMs of varying shapes can be placed on a single dedicated host as long as capacity permits.
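Placement on a dedicated host is a simple capacity-accounting problem: the shape fixes the total, and VMs fit while the sum of their OCPUs stays under it. A sketch using the DVH.Standard.E4.128 figure from the text; the VM mix below is hypothetical.

```python
# Capacity check for an OCI-style dedicated VM host. The 128-OCPU total
# comes from the DVH.Standard.E4.128 example in the text; the placed VM
# sizes are hypothetical.

def can_place(host_ocpus: int, placed: list, new_vm_ocpus: int) -> bool:
    """True if a VM of new_vm_ocpus fits next to the VMs already placed."""
    return sum(placed) + new_vm_ocpus <= host_ocpus

host = 128
vms = [32, 32, 48]                  # three flexible-shape VMs already placed
print(can_place(host, vms, 16))     # True: 112 + 16 == 128
print(can_place(host, vms, 17))     # False: would exceed 128 OCPUs
```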

GCP Sole-Tenant Nodes serve the same purpose. Node types define total vCPU and memory capacity. You define node templates and node groups to manage allocation. A 10% premium above standard VM pricing applies to all resources on the sole-tenant node.
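The sole-tenant billing model charges for the whole node whether or not it is full, which changes the break-even math versus standard VMs. A sketch of that calculation; the per-unit rates below are placeholders, not real GCP prices, and the node dimensions are modeled on an n2-node-80-640-style node.

```python
# Sole-tenant billing sketch: the whole node's vCPU and memory are billed
# at standard rates plus the 10% sole-tenancy premium, regardless of how
# many VMs run on it. Rates are placeholder assumptions, not GCP prices.

VCPU_RATE_HR = 0.03     # placeholder $/vCPU-hour
GB_RATE_HR = 0.004      # placeholder $/GB-hour
PREMIUM = 0.10          # the 10% sole-tenancy premium

def sole_tenant_hourly(node_vcpus: int, node_gb: int) -> float:
    base = node_vcpus * VCPU_RATE_HR + node_gb * GB_RATE_HR
    return round(base * (1 + PREMIUM), 4)

# An 80-vCPU, 640 GB node: billed in full, occupied or not.
print(sole_tenant_hourly(80, 640))   # 5.456
```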


7. Spot / Preemptible / Low-Priority Instances

These instance types run on spare cloud capacity and are offered at steep discounts (50–91%) relative to on-demand pricing. The provider can interrupt them when the capacity is needed for on-demand customers.
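The discount figures translate directly into effective hourly rates. A small helper for comparing providers; the $1.00 on-demand price is an illustrative placeholder.

```python
# Effective spot cost at a given discount. The on-demand rate here is a
# placeholder input; the discount values come from the ranges in the text.

def spot_cost(on_demand_hr: float, discount: float) -> float:
    """Effective hourly cost at a given discount (0.50 = 50% off)."""
    return round(on_demand_hr * (1 - discount), 4)

on_demand = 1.00   # $/hour, illustrative
print(spot_cost(on_demand, 0.50))   # 0.5  (OCI's fixed preemptible discount)
print(spot_cost(on_demand, 0.90))   # 0.1  (best case on AWS/Azure)
print(spot_cost(on_demand, 0.91))   # 0.09 (best case on GCP)
```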

Function | AWS | Azure | OCI | GCP
Service name | EC2 Spot Instances | Azure Spot Virtual Machines | OCI Preemptible Instances | GCP Spot VMs (replace Preemptible VMs)
Maximum discount | Up to 90% vs on-demand | Up to 90% vs pay-as-you-go | Fixed 50% off on-demand | Up to 91% vs on-demand
Pricing model | Market-based Spot price (optional max price; no bidding since 2017) | Market-based (set a max price or pay the current price) | Fixed 50% discount, no bidding | Dynamic pricing, no bidding
Interruption notice | 2-minute warning | 30-second eviction notice | 2-minute warning event | 30-second preemption notice
Maximum runtime | Unlimited (until interrupted) | Unlimited (until evicted) | Unlimited (until reclaimed) | Unlimited (legacy Preemptible VMs had a 24-hour max; Spot VMs do not)
Auto Scaling support | Yes (EC2 Auto Scaling, Spot Fleet) | Yes (VMSS with Spot Priority Mix) | Yes (via instance pools) | Yes (MIG with Spot VMs)

AWS Spot Instances run at the current market-based Spot price; you can optionally set a maximum price you are willing to pay (AWS retired explicit bidding in 2017). Spot Fleet and EC2 Auto Scaling support mixed on-demand and Spot instance pools. Instance interruption notices are published to instance metadata and EventBridge 2 minutes before interruption.
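On an EC2 Spot instance, the interruption notice appears in instance metadata as a small JSON document at /latest/meta-data/spot/instance-action. The sketch below parses a sample payload offline to show the 2-minute window; on a real instance you would poll the metadata endpoint (169.254.169.254) instead of using a hard-coded string.

```python
# Offline sketch of handling an EC2 Spot interruption notice. The payload
# shape ({"action": ..., "time": ...}) matches the documented
# spot/instance-action metadata document; the timestamp is a made-up sample.
import json
from datetime import datetime, timezone

sample = '{"action": "terminate", "time": "2025-06-01T08:22:00Z"}'

def seconds_until_interruption(payload: str, now: datetime) -> float:
    notice = json.loads(payload)
    when = datetime.fromisoformat(notice["time"].replace("Z", "+00:00"))
    return (when - now).total_seconds()

now = datetime(2025, 6, 1, 8, 20, tzinfo=timezone.utc)
print(seconds_until_interruption(sample, now))   # 120.0, the 2-minute window
```

A drain handler would use this window to checkpoint work and deregister from load balancers before the instance is reclaimed.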

Azure Spot VMs are evicted either when the capacity is needed or when the current price exceeds your specified maximum. The Spot Priority Mix feature in VMSS allows combining standard and Spot VMs within a single scale set, enabling availability/cost trade-off tuning.

OCI Preemptible Instances have a simpler model: always 50% off on-demand pricing, no bidding, and no market-based pricing variability. This makes cost planning predictable. Available on all VM shapes except VM.Standard.E2.1.Micro.

GCP Spot VMs are the current generation, replacing legacy Preemptible VMs. Legacy preemptible VMs had a hard 24-hour maximum runtime; Spot VMs do not. GCP Spot VMs work with Managed Instance Groups (MIGs) and GKE cluster autoscaler.


8. High-Performance Computing (HPC)

HPC instances are purpose-built for tightly coupled, massively parallel scientific and engineering workloads that require high-bandwidth, low-latency inter-node networking (typically RDMA or InfiniBand/EFA).

Function | AWS | Azure | OCI | GCP
HPC instance family | Hpc6a, Hpc6id, Hpc7g, Hpc7a | HB-family (HBv2, HBv3, HBv4, HBv5), HC-series, HX-series | BM.HPC.E5.144, BM.Optimized3.36 | H3, H4D
RDMA / cluster networking | Elastic Fabric Adapter (EFA) | InfiniBand (HDR/NDR) | RDMA Cluster Network | Cloud RDMA (H4D)
Max inter-node bandwidth | 300 Gbps (Hpc7a, EFA) | 400 Gbps NDR InfiniBand (HBv4) | 100 Gbps RDMA (BM.HPC.E5) | 200 Gbps (H4D)
Managed HPC service | AWS ParallelCluster | Azure CycleCloud, Azure HPC Cache | HPC cluster networks (via instance pools + RDMA) | Google Cloud HPC Toolkit
Job scheduler integration | Slurm, PBS, SGE (via ParallelCluster) | Slurm, PBS (via CycleCloud) | Slurm (manual or via HPC templates) | Slurm (via HPC Toolkit)

AWS HPC instances include Hpc6a (96-core AMD EPYC, 100 Gbps EFA), Hpc7g (Graviton3E, 200 Gbps EFA), and Hpc7a (192-core 4th Gen AMD EPYC Genoa, 300 Gbps EFA). EFA (Elastic Fabric Adapter) is AWS's RDMA-capable networking layer. AWS ParallelCluster automates HPC cluster deployment and management.

Azure H-family is split across HB (high memory bandwidth for CFD and weather), HC (high-density compute for molecular dynamics and chemistry), and HX (large memory for EDA). The HBv4 series pairs 4th Gen AMD EPYC with 400 Gbps NDR InfiniBand; HBv5 raises inter-node bandwidth further. Azure CycleCloud provides job scheduler-aware cluster management.

OCI BM.HPC.E5.144 is OCI's dedicated HPC bare metal shape with AMD EPYC 9J14 processors (144 OCPUs) and RDMA cluster networking. Tightly coupled HPC jobs use OCI's cluster network, which provides low-latency RDMA between nodes in the same cluster.

GCP H3 (Intel Sapphire Rapids, 88 vCPUs, 352 GB) and H4D (AMD EPYC Turin, 192 vCPUs, Cloud RDMA) are the HPC-optimized series. Cloud RDMA on H4D provides low-latency fabric for HPC clustering. Google Cloud HPC Toolkit handles Slurm-based cluster deployment.
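Inter-node bandwidth is what bounds collective-communication time in tightly coupled jobs, which is why the fabric numbers above matter. A back-of-envelope lower bound for a ring all-reduce, where each node moves 2*(n-1)/n of the buffer; this ignores latency and protocol overhead, and the bandwidth figures are the ones quoted in this section.

```python
# Back-of-envelope ring all-reduce estimate tied to inter-node bandwidth.
# Ignores latency and protocol overhead; a lower bound, not a benchmark.

def allreduce_seconds(buffer_gb: float, nodes: int, gbps: float) -> float:
    """Lower-bound time for a ring all-reduce of buffer_gb across nodes."""
    gigabits_per_node = 2 * (nodes - 1) / nodes * buffer_gb * 8
    return round(gigabits_per_node / gbps, 4)

# 10 GB gradient buffer across 8 nodes:
print(allreduce_seconds(10, 8, 300))  # Hpc7a-class EFA at 300 Gbps
print(allreduce_seconds(10, 8, 100))  # 100 Gbps RDMA cluster network
```

Tripling fabric bandwidth cuts the communication floor by the same factor, which is the practical argument for the 200-400 Gbps fabrics in the table.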


9. VM Image Management

VM image services allow users to create, store, share, version, and distribute machine images (pre-configured OS + software stacks) used to launch compute instances.

Function | AWS | Azure | OCI | GCP
Image format / term | AMI (Amazon Machine Image) | Managed Image, Azure Compute Gallery image | Custom Image, Platform Image | Custom Image, Image Family
Image build automation | EC2 Image Builder | Azure VM Image Builder | OCI Compute image import / BYOI | Packer (community), no native build service
Image registry / gallery | AMI (regional, account-scoped) | Azure Compute Gallery (formerly Shared Image Gallery) | OCI Custom Images (regional, per-tenancy) | Compute Engine image resources + Image Families
Cross-region replication | Yes (AMI copy across regions) | Yes (Compute Gallery with replication) | Yes (export/import via Object Storage) | Yes (image copy across regions)
Cross-account sharing | Yes (AMI sharing via account ID or org) | Yes (Compute Gallery RBAC + community gallery) | Yes (via pre-authenticated request or tenancy sharing) | Yes (IAM roles on the image resource)
Marketplace images | AWS Marketplace | Azure Marketplace | OCI Marketplace | Google Cloud Marketplace

AWS EC2 Image Builder is a fully managed service that automates the creation, testing, and distribution of AMIs and container images. It supports pipelines with build, test, and distribution stages. AMIs are regional and must be explicitly copied to other regions. Cross-account sharing via AWS Organizations is well integrated.

Azure Compute Gallery (previously Shared Image Gallery) organizes images into definitions and versions, supports zone-redundant storage, and replicates to multiple regions. Azure VM Image Builder (based on Packer) automates image customization pipelines. Community Gallery allows sharing images publicly.

OCI custom image management is built into the Compute service. Images can be exported to Object Storage in QCOW2 or OCI format and imported in other regions or tenancies. OCI's Bring Your Own Image (BYOI) feature allows importing images from on-premises or other clouds. No native pipeline/build service equivalent to EC2 Image Builder exists; Packer is commonly used.

GCP custom images are stored as Compute Engine resources and can be grouped into Image Families. An image family always points to the latest non-deprecated image in the family, simplifying instance template references. GCP does not provide a native pipeline-based image builder (equivalent to EC2 Image Builder or Azure VM Image Builder); Packer with the Google Cloud builder plugin is the standard approach.
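The image-family rule (the family name resolves to the newest non-deprecated image) is easy to sketch. The image records below are hypothetical stand-ins for Compute Engine image resources, but the resolution logic matches the rule described above.

```python
# Sketch of GCP-style image-family resolution: the family resolves to the
# newest image that is not deprecated. Image records are hypothetical.

def resolve_family(images: list, family: str):
    """Return the name of the newest non-deprecated image, or None."""
    candidates = [
        img for img in images
        if img["family"] == family and not img.get("deprecated", False)
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda img: img["created"])["name"]

images = [
    {"name": "web-v1", "family": "web", "created": "2025-01-01", "deprecated": True},
    {"name": "web-v2", "family": "web", "created": "2025-03-01"},
    {"name": "web-v3", "family": "web", "created": "2025-05-01", "deprecated": True},
]
print(resolve_family(images, "web"))   # web-v2, the newest non-deprecated image
```

This is why instance templates reference a family rather than a pinned image: deprecating a bad release immediately rolls the family pointer back.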


10. Auto-Scaling

Auto-scaling services automatically adjust the number of running compute instances in response to load changes (metric-based) or on a schedule, reducing costs during low utilization and maintaining capacity during peak load.

Function | AWS | Azure | OCI | GCP
Auto-scaling service | EC2 Auto Scaling (Auto Scaling Groups) | Virtual Machine Scale Sets (VMSS) | OCI Autoscaling (instance pools) | Managed Instance Groups (MIG) with Autoscaler
Launch template / config | Launch Template | VM Scale Set configuration (instance profile) | Instance Configuration | Instance Template
Scaling triggers | CPU, memory, custom CloudWatch metrics, scheduled, predictive | CPU, custom Azure Monitor metrics, scheduled, HTTP queue depth | CPU, memory (metric-based); schedule-based | CPU, custom Cloud Monitoring metrics, scheduled, HTTP load balancing
Min/max/desired capacity | Yes | Yes | Yes | Yes
Mixed on-demand + spot | Yes (Spot Fleet, mixed instance policies) | Yes (Spot Priority Mix) | Yes (standard + preemptible in the same pool) | Yes (MIG with Spot VMs)
Predictive scaling | Yes (AWS Predictive Scaling) | Yes (Azure Predictive Autoscale) | No | No
Health checks | EC2 status checks, ELB health checks | Azure Load Balancer / App Gateway health probes | OCI Load Balancer health checks | GCP Load Balancing health checks

AWS Auto Scaling Groups (ASGs) are deeply integrated with EC2, ELB, CloudWatch, and AWS predictive scaling. Mixed instance policies allow combining multiple instance types and sizes within a single ASG, and Spot Fleet enables more sophisticated Spot capacity diversification. Predictive Scaling uses ML to forecast capacity needs and scales proactively.
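Target tracking, the most common ASG policy type, scales the group proportionally: desired capacity is current capacity times the ratio of the observed metric to the target, rounded up on scale-out so the target is not exceeded. A sketch of that rule, bounded by min/max capacity as in an ASG; the workload numbers are illustrative.

```python
# Target-tracking capacity rule: desired = current * metric / target,
# rounded up and clamped to the group's min/max. Inputs are illustrative.
import math

def desired_capacity(current: int, metric: float, target: float,
                     cap_min: int, cap_max: int) -> int:
    raw = current * metric / target
    desired = math.ceil(raw)          # round up so the target is not exceeded
    return max(cap_min, min(desired, cap_max))

# 10 instances at 75% average CPU against a 50% target:
print(desired_capacity(10, 75.0, 50.0, cap_min=2, cap_max=20))   # 15
# Load drops to 20%: scale in toward the target, floored at cap_min:
print(desired_capacity(10, 20.0, 50.0, cap_min=2, cap_max=20))   # 4
```

Real implementations add cooldowns and instance warm-up so transient spikes do not thrash the group, but the proportional core is the same.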

Azure VMSS is Azure's equivalent. The Spot Priority Mix feature (2024+) allows a configurable percentage of Spot VMs within a standard VMSS. VMSS supports both uniform (all instances from same config) and flexible orchestration modes.

OCI autoscaling operates on Instance Pools. An Instance Configuration acts as the launch template. Autoscaling policies are either metric-based (CPU, memory threshold triggers) or schedule-based. OCI does not currently offer predictive scaling.

GCP Managed Instance Groups (MIGs) are the equivalent unit. The GCP Autoscaler supports CPU utilization, HTTP load balancing, Cloud Monitoring custom metrics, and scheduled scaling. MIGs support both zonal and regional distribution of instances. No native predictive scaling feature.


Summary Cross-Reference Table

Capability | AWS | Azure | OCI | GCP
Core VM service | Amazon EC2 | Azure Virtual Machines | OCI Compute | Compute Engine
General purpose VMs | M-series | D-family | VM.Standard (E4/E5/E6 Flex) | N4, N2, C4, E2
Compute optimized | C-series | F-family, FX-family | VM.Optimized3.Flex | C2, C2D, H3
Memory optimized | R-series, X-series | E-family, M-family | VM.Standard.E* Flex + Extended Memory | M-series, X4
GPU / accelerated | P, G, Inf, Trn series | NC, ND, NV, NG families | BM/VM.GPU.* shapes | A-series, G-series
Bare metal | EC2 .metal sizes | Azure Bare Metal Infrastructure | BM.* shapes (first-class) | Z3/C3 metal sizes, sole-tenant nodes
Dedicated hosts | EC2 Dedicated Hosts | Azure Dedicated Hosts | Dedicated VM Hosts | Sole-Tenant Nodes
Spot / preemptible | EC2 Spot Instances | Azure Spot VMs | Preemptible Instances | Spot VMs
HPC | Hpc6a/7a/7g + EFA | HB/HC/HX families + InfiniBand | BM.HPC + RDMA Cluster Network | H3, H4D + Cloud RDMA
Image management | AMI + EC2 Image Builder | Managed Image + Azure Compute Gallery | Custom Image + BYOI | Custom Image + Image Families
Auto-scaling | EC2 Auto Scaling (ASG) | Virtual Machine Scale Sets (VMSS) | Autoscaling + instance pools | MIG + Autoscaler
Proprietary accelerators | Graviton (CPU), Trainium, Inferentia | Cobalt (CPU) | n/a | Axion (CPU), TPU

Key Differentiators

OCI Flexible Shapes: OCI allows arbitrary OCPU and memory selection within defined bounds for standard VM shapes. GCP's custom machine types offer similar, though more constrained, flexibility on select series; AWS and Azure use fixed CPU/memory pairings per instance size.

OCI Preemptible Pricing Model: OCI's flat 50% discount with no bidding or market variation makes spot pricing simpler and more predictable than AWS (market-based), Azure (market-based with a max price), or GCP (dynamic pricing without bidding).

AWS Ecosystem Depth: AWS has the broadest range of instance types (600+), the most mature spot/mixed fleet capabilities, predictive scaling, and the most tightly integrated managed HPC service (ParallelCluster).

Azure Confidential Computing: Azure has the most comprehensive confidential computing VM lineup (DC-family, EC-family) integrating hardware TEEs (AMD SEV-SNP, Intel TDX) directly into the standard VM portfolio.

GCP TPUs: Google Cloud is the only provider offering first-party AI training accelerators (TPUs) as a standard compute product. TPU v5e/v5p and Trillium v6e provide alternatives to NVIDIA GPUs at scale for specific ML frameworks.

OCI Bare Metal as First-Class: OCI treats bare metal as a standard shape alongside VMs in the same Compute service, making the operational model consistent. AWS follows a similar approach with .metal EC2 sizes; Azure separates bare metal into a distinct premium service.

