
Cloud Services Cross-Reference: Containers & Serverless

This document maps container orchestration, serverless compute, container registry, and service mesh services across AWS, Azure, Oracle Cloud Infrastructure (OCI), and Google Cloud Platform (GCP). All four providers offer managed Kubernetes, serverless container execution, Functions-as-a-Service, and private container registries, but their architectural choices, pricing models, and managed-versus-self-service tradeoffs differ significantly. Use this reference when evaluating which cloud best fits a workload's operational complexity tolerance, cost model, and existing ecosystem.


1. Managed Kubernetes

Managed Kubernetes services handle the control plane, etcd, and API server as a cloud-managed concern. The key differentiator across providers is how much of the data plane (node lifecycle, patching, scaling) is also managed.

AWS — Amazon EKS (Elastic Kubernetes Service)
EKS provides a highly available, multi-AZ Kubernetes control plane. Compute choices are EC2 managed node groups, Fargate (serverless pods), or EKS Auto Mode (launched 2024), which automatically provisions right-sized infrastructure, selects optimal instance types, and dynamically scales nodes. EKS Anywhere extends EKS to on-premises and edge environments; EKS Hybrid Nodes bridges on-premises worker nodes into a cloud-hosted control plane.

Azure — AKS (Azure Kubernetes Service)
AKS manages the Kubernetes control plane at no charge; you pay only for worker node VMs. AKS supports system node pools (reserved for system pods), user node pools, and virtual nodes backed by Azure Container Instances for burst scaling. Azure Red Hat OpenShift (ARO) is a jointly engineered alternative for organizations already using OpenShift. AKS integrates natively with Microsoft Entra ID (formerly Azure Active Directory) for RBAC and with Azure Monitor for observability.

OCI — OKE (Oracle Container Engine for Kubernetes)
OKE offers three node types: managed nodes (shared OCI/customer responsibility), virtual nodes (fully serverless, pay-per-pod pricing, no node infrastructure to manage), and self-managed nodes (maximum customization, GPU, high-performance networking). Virtual nodes eliminate the operational overhead of node patching and scaling by delegating all infrastructure concerns to OCI. OKE integrates with OCI IAM, OCI Container Registry (OCIR), OCI Vault, and Oracle DevOps.

GCP — GKE (Google Kubernetes Engine)
GKE offers Standard mode (full node control) and Autopilot mode. Autopilot provisions nodes automatically, enforces security baselines (no privileged containers, no host path volumes), and charges per-pod rather than per-node, providing a true serverless Kubernetes experience. GKE Fleet management enables centralized governance, policy, and workload management across multiple clusters. GKE was ranked first in every critical capability in the 2025 Gartner Critical Capabilities for Container Management report.

Feature | AWS EKS | Azure AKS | OCI OKE | GCP GKE
Managed control plane | Yes (paid) | Yes (free) | Yes (free) | Yes (free)
Serverless node option | Fargate | Virtual Nodes (ACI) | Virtual Nodes | Autopilot
Pricing model | Per cluster + EC2/Fargate | Worker nodes only | Managed/virtual nodes | Standard (nodes) / Autopilot (pods)
On-premises extension | EKS Anywhere / Hybrid Nodes | Azure Arc | None (self-managed OKE on customer infra) | Anthos / GKE on-prem
OpenShift option | ROSA (Red Hat) | ARO (Red Hat) | None native | None native
Spot/preemptible nodes | Spot Instances | Spot VMs | Preemptible instances | Spot VMs
Multi-cluster management | EKS Fleet (preview) | Azure Fleet Manager | None native | GKE Fleet
Auto node provisioning | EKS Auto Mode | Cluster Autoscaler / Node Autoprovisioning | OKE Autoscaler | Autopilot built-in

Key differentiators:

  • GKE Autopilot is the most fully automated serverless Kubernetes offering, with per-pod billing and enforced security baselines.
  • OKE Virtual Nodes deliver comparable serverless Kubernetes at a competitive per-pod price; strong choice when Oracle database integration is required.
  • EKS Auto Mode (2024) is AWS's answer to Autopilot: automated node lifecycle with no manual node pool management required.
  • AKS control plane is free; lowest entry cost for a managed Kubernetes tier.
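The per-node versus per-pod billing distinction above can be sketched numerically. A minimal Python sketch; the unit prices and node size below are made-up illustrative assumptions, not real GKE, OKE, or EKS list prices:

```python
import math

# Illustrative sketch of per-node vs. per-pod Kubernetes billing.
# NODE_HOURLY, NODE_VCPU, and POD_VCPU_HOURLY are assumed numbers.
NODE_HOURLY = 0.10      # assumed cost of one worker node per hour
NODE_VCPU = 4           # vCPUs per worker node
POD_VCPU_HOURLY = 0.03  # assumed per-vCPU-hour rate under per-pod billing

def node_billed_cost(pod_vcpus, hours):
    """Per-node billing: whole nodes are paid for, stranded capacity included."""
    nodes = math.ceil(sum(pod_vcpus) / NODE_VCPU)  # naive packing lower bound
    return nodes * NODE_HOURLY * hours

def pod_billed_cost(pod_vcpus, hours):
    """Per-pod billing (Autopilot-style): only requested resources are paid for."""
    return sum(pod_vcpus) * POD_VCPU_HOURLY * hours

pods = [0.25, 0.25, 0.5]  # three small pods requesting 1 vCPU in total
print(round(node_billed_cost(pods, 24), 2))  # a whole 4-vCPU node is billed
print(round(pod_billed_cost(pods, 24), 2))   # only the requested vCPU is billed
```

With small, fractional pod requests, per-node billing pays for the stranded capacity of the whole node, which is why per-pod models (Autopilot, OKE virtual nodes) tend to win for spiky or low-utilization workloads.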

2. Serverless Container Execution

Serverless container services run OCI-compliant container images without requiring a Kubernetes cluster or node management. They target single-container workloads, microservices, batch jobs, and APIs that do not require full Kubernetes orchestration.

AWS — AWS Fargate + Amazon ECS
Fargate is a serverless compute engine that works as the compute layer under ECS (Elastic Container Service) or EKS. ECS with Fargate is AWS's primary non-Kubernetes container platform: task-based execution, service auto-scaling, and event-driven task launch via EventBridge or SQS. Fargate charges per vCPU and memory per second.

AWS — AWS App Runner
App Runner deploys containerized web applications and APIs directly from a container image or source code repository, handling load balancing, TLS, auto-scaling, and health checks automatically. It is positioned as the simplest path to deploy a container without infrastructure decisions, and charges per vCPU-second of active request processing plus provisioned concurrency if configured.

Azure — Azure Container Instances (ACI)
ACI runs a single container (or container group) on-demand, billed per second for CPU and memory. It is a low-level primitive: no built-in load balancing, scaling policies, or health checks. ACI is commonly used as a burst-scaling backend for AKS virtual nodes, or for isolated CI/CD jobs and one-off tasks. Not intended as a production application platform.

Azure — Azure Container Apps
Container Apps is a managed serverless container platform built on Kubernetes, Dapr, KEDA, and Envoy. It provides service discovery, traffic splitting, revision management, KEDA-based scale-to-zero, and Dapr integration for microservice communication patterns. Azure Functions can run hosted within Container Apps, unifying FaaS and container workloads on a single platform. Supports serverless GPU workloads (GA 2025). Does not expose the Kubernetes API directly.

OCI — OCI Container Instances
Container Instances runs containers without Kubernetes for workloads that do not require orchestration: APIs, web apps, CI/CD jobs, data processing, and automation tasks. Billed per OCPU and memory per second. Supports ARM-based processors (Ampere A1). Not a managed platform: no built-in scaling, load balancing, or service discovery.

GCP — Cloud Run
Cloud Run runs stateless containers on demand with scale-to-zero, billing CPU and memory in 100 ms increments while requests are being processed. Services handle HTTP/gRPC traffic; Jobs handle batch and scheduled execution without HTTP. GPU support (NVIDIA L4) has been GA since 2024, enabling inference workloads with scale-to-zero economics. Cloud Run is positioned as the primary serverless container platform on GCP and is the recommended default for stateless containerized workloads.
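The 100 ms billing granularity mentioned above means each request's duration is rounded up to the next 100 ms increment before being priced. A small sketch; the rate constant is an assumed figure for illustration, not a published Cloud Run price:

```python
import math

# Sketch of 100 ms billing granularity: durations round up to the next
# 100 ms increment. VCPU_SECOND_RATE is an assumed illustrative figure.
VCPU_SECOND_RATE = 0.000024

def billed_seconds(duration_ms):
    """Round a request duration up to the nearest 100 ms, returned in seconds."""
    return math.ceil(duration_ms / 100) * 100 / 1000

def request_cost(duration_ms, vcpus=1):
    return billed_seconds(duration_ms) * vcpus * VCPU_SECOND_RATE

print(billed_seconds(130))   # 130 ms bills as 200 ms -> 0.2
print(billed_seconds(1000))  # exact multiples are not rounded up -> 1.0
```

The rounding penalty matters most for very short requests: a 1 ms handler still bills a full 100 ms.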

Feature | AWS Fargate (ECS) | AWS App Runner | Azure ACI | Azure Container Apps | OCI Container Instances | GCP Cloud Run
Kubernetes required | No (ECS or EKS) | No | No | No (built on K8s) | No | No
Built-in load balancing | ECS Service | Yes | No | Yes | No | Yes
Scale to zero | No (min 1 task) | Yes (with pause) | N/A | Yes | No | Yes
KEDA / event-driven scale | No (ECS) | No | No | Yes (KEDA native) | No | No (triggers-based)
Dapr integration | No | No | No | Yes (native) | No | No
GPU support | No | No | No | Yes (GA 2025) | No | Yes (NVIDIA L4, GA 2024)
Jobs / batch support | ECS Scheduled Tasks | No | No | Container Apps Jobs | No | Cloud Run Jobs
Pricing unit | vCPU+mem/sec | vCPU/sec (requests) | vCPU+mem/sec | vCPU+mem/sec | OCPU+mem/sec | 100 ms (CPU+mem)

Key differentiators:

  • Cloud Run has the most mature scale-to-zero serverless container execution with GPU support and a Jobs construct for batch.
  • Azure Container Apps is unique in bundling KEDA, Dapr, and revision-based traffic management in a managed serverless platform.
  • App Runner is the simplest AWS option: from source code to a running HTTPS endpoint with minimal configuration, at the cost of limited control.
  • OCI Container Instances and Azure ACI are primitive building blocks, not full application platforms.

3. Container Registries

Private container registries store and distribute OCI-compliant container images and artifacts. All four major cloud providers offer fully managed private registries with vulnerability scanning, access control via their respective IAM systems, and geo-replication.

AWS — Amazon ECR (Elastic Container Registry)
ECR is a fully managed OCI-compliant private registry. Features include lifecycle policies for automatic image cleanup, automated vulnerability scanning via Amazon Inspector (OS and programming language packages), immutable image tags, cross-region and cross-account replication, and a public registry (ECR Public / Gallery). Integrates with ECS, EKS, Lambda, and CodePipeline without additional authentication configuration.
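As an illustration of the lifecycle policy feature, here is a minimal policy that expires untagged images beyond the ten most recent, sketched in Python so it can be serialized and inspected. The field names follow ECR's documented lifecycle policy schema; applying it (for example via `aws ecr put-lifecycle-policy`) is not shown:

```python
import json

# Sketch of an ECR lifecycle policy that expires untagged images once more
# than ten exist. The rule values are illustrative; tune countNumber and
# tagStatus to your cleanup needs.
policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Keep only the 10 most recent untagged images",
            "selection": {
                "tagStatus": "untagged",
                "countType": "imageCountMoreThan",
                "countNumber": 10,
            },
            "action": {"type": "expire"},
        }
    ]
}

policy_text = json.dumps(policy, indent=2)
print(policy_text)
```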

Azure — Azure Container Registry (ACR)
ACR supports OCI images, OCI Artifacts, and Helm charts. Key features: geo-replication across Azure regions, ACR Tasks for automated image builds and base image update triggers, integrated vulnerability scanning via Microsoft Defender for Containers, and content trust (Notary v2 signing). Three tiers: Basic, Standard, and Premium (adds geo-replication and private endpoints).

OCI — OCI Container Registry (OCIR)
OCIR is OCI's fully managed private registry. Integrates with the OCI Vulnerability Scanning Service (VSS) for CVE-based image scanning; scanning can be configured to run automatically on push and on new CVE database updates. Supports image signing via OCI Vault master encryption keys. Free for images stored in the same region as the tenancy's home region; cross-region replication is available.

GCP — Google Artifact Registry
Artifact Registry (GAR) is the current recommended registry for GCP, replacing the older Google Container Registry (GCR). GAR supports Docker images, Helm charts, Maven and npm packages, Python packages, and other artifact types in a unified repository. Vulnerability scanning is provided via Container Analysis. Repositories can be regional or multi-regional. Integrates with Cloud Build, GKE, Cloud Run, and Binary Authorization for supply chain security.

Feature | AWS ECR | Azure ACR | OCI OCIR | GCP Artifact Registry
OCI image support | Yes | Yes | Yes | Yes
Helm chart storage | Yes | Yes | Yes | Yes
Vulnerability scanning | Amazon Inspector | Defender for Containers | OCI VSS | Container Analysis
Image signing | Yes (Notation/Notary) | Yes (Notary v2) | Yes (OCI Vault) | Yes (Binary Authorization)
Geo-replication | Yes (cross-region) | Yes (Premium tier) | Yes | Yes (multi-regional repos)
Public registry | ECR Public | No | Yes (public repos) | Artifact Registry Public
Non-image artifact types | OCI Artifacts only | OCI Artifacts | OCI Artifacts (Helm) | Yes (Maven, npm, Python, etc.)
Free tier storage | 500 MB/month | None | Free in home region | None

Key differentiators:

  • GCP Artifact Registry supports multiple artifact types (not just containers) in one service, reducing the number of registries to manage.
  • Azure ACR Tasks enable automated image builds triggered by base image updates, unique among the four.
  • OCI OCIR auto-rescans images when new CVEs are published, not only on push.
  • ECR Public Gallery provides a hosted public distribution channel comparable to Docker Hub.

4. Functions-as-a-Service (FaaS)

FaaS platforms execute event-driven code in response to triggers without managing any underlying server infrastructure. All four providers support multiple runtimes, container image deployment, and event source integrations with their respective cloud ecosystems.

AWS — AWS Lambda
Lambda is the market-leading FaaS platform, used by over 70% of AWS customers. Supports Node.js, Python, Java, Go, .NET, and Ruby, plus custom runtimes via the Lambda Runtime API (commonly distributed as layers or container images). Container image support (up to 10 GB) enables packaging arbitrary runtimes. SnapStart reduces cold starts for Java and .NET to under 200 ms by pre-initializing execution environments. Lambda@Edge and CloudFront Functions enable code execution at CDN edge locations globally. ARM/Graviton2 execution is available at roughly 20% lower cost. Maximum execution timeout: 15 minutes.

Azure — Azure Functions
Azure Functions offers multiple hosting plans: Consumption (true serverless, scale to zero), Flex Consumption (GA 2025, fixed memory sizes of 512 MB, 2 GB, and 4 GB, per-function scaling), Premium (pre-warmed instances, VNet integration), and Dedicated (App Service plan). Durable Functions v3 enables stateful, long-running orchestrations and fan-out/fan-in patterns. Azure Functions can also run hosted within Azure Container Apps, unifying the FaaS and container serverless ecosystems. Supports PowerShell, a strong fit for Windows automation workloads.

OCI — OCI Functions
OCI Functions is built on the open-source Fn Project, providing portability: functions can run on OCI or self-hosted Fn servers. Supports Python, Go, Java, Node.js, and C#. Container image-based deployment. Native integration with OCI Events, OCI API Gateway, OCI Streaming, Oracle Integration Cloud, and Oracle Autonomous Database. Functions are invoked synchronously or via OCI Events triggers. Maximum execution timeout: 120 seconds (default), configurable up to 300 seconds.

GCP — Google Cloud Functions
Cloud Functions (2nd gen) is built on Cloud Run under the hood, giving it access to Cloud Run's concurrency model: a single instance can handle up to 1,000 concurrent requests, reducing cold starts and instance count compared to per-request isolation. Supports Node.js, Python, Go, Java, .NET, Ruby, and PHP. HTTP functions support up to a 60-minute timeout (2nd gen). CloudEvents standard support for event-driven triggers. Cloud Functions Gen 2 achieves sub-second cold starts with startup CPU boost.
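The effect of per-instance concurrency on fleet size can be estimated with Little's law: in-flight requests equal arrival rate times average latency, and the instance count is that divided by per-instance concurrency. A sketch with illustrative traffic numbers:

```python
import math

def instances_needed(rps, avg_latency_s, concurrency_per_instance):
    """Little's law estimate: in-flight requests / per-instance concurrency."""
    in_flight = rps * avg_latency_s
    return max(1, math.ceil(in_flight / concurrency_per_instance))

# 500 req/s at 200 ms average latency keeps about 100 requests in flight.
print(instances_needed(500, 0.2, 1))     # one-request-per-instance model: 100
print(instances_needed(500, 0.2, 1000))  # 1,000-concurrency model: 1
```

Fewer instances means fewer cold starts and less idle billing, which is the practical payoff of the Gen 2 concurrency model described above.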

Feature | AWS Lambda | Azure Functions | OCI Functions | GCP Cloud Functions
Open-source engine | No | No | Yes (Fn Project) | No
Container image deploy | Yes (up to 10 GB) | Yes | Yes | Yes
Cold start mitigation | SnapStart (Java/.NET) | Premium plan (pre-warmed) | None native | Startup CPU boost
Max timeout | 15 minutes | Unbounded (Premium/Dedicated) | 300 seconds | 60 minutes (HTTP, Gen 2)
Edge execution | Lambda@Edge | No native | No | No
Stateful orchestration | AWS Step Functions | Durable Functions | None native | Cloud Workflows (separate)
Concurrency model | 1 request/instance | Multiple per instance (configurable) | 1 request/instance | Up to 1,000/instance (Gen 2)
ARM/low-cost option | Graviton2 (~20% discount) | No | Ampere A1 | No
PowerShell support | Yes | Yes | No | No
Open standards | No | No | CloudEvents, Docker | CloudEvents

Key differentiators:

  • Lambda@Edge is unique: execute FaaS code at CloudFront edge nodes globally with no equivalent from other providers.
  • GCP Cloud Functions Gen 2 concurrency (1,000 req/instance) dramatically reduces cold starts and cost vs. one-request-per-instance models.
  • OCI Functions is the only provider built on an open-source engine (Fn Project), enabling true portability.
  • Azure Durable Functions provides the richest built-in stateful orchestration pattern library (chaining, fan-out, human approval, etc.) of any FaaS platform.

5. Serverless Application Platforms

These are opinionated, higher-level platforms for deploying web applications and APIs without infrastructure management, sitting above raw containers or FaaS.

AWS — AWS App Runner
App Runner deploys web apps and APIs from container images or source code (GitHub). It handles load balancing, TLS termination, auto-scaling (including scale to zero when paused), and health checks. No VPC, subnet, or IAM role configuration is required at the application layer. Supports private VPC connectivity via the App Runner VPC Connector. Priced per vCPU-second during active request handling.

Azure — Azure App Service
Azure App Service provides fully managed web application hosting for code or containers, optimized for web apps and web APIs. Supports auto-scaling, deployment slots for blue/green deployments, custom domains with managed TLS, and integration with Azure CDN. Available on Windows and Linux. It is highly managed but not serverless in the scale-to-zero sense (on Azure, scale to zero comes via the Functions Consumption plan).

OCI — No native equivalent
OCI does not offer a direct equivalent to App Runner or Azure App Service as a native managed web application platform. OCI API Gateway combined with OCI Functions provides a serverless API pattern. For web application hosting, the available paths are OKE behind a load balancer or OCI Container Instances behind a load balancer.

GCP — Cloud Run (dual-purpose)
Cloud Run serves as both a serverless container execution platform (listed under Section 2) and the de facto serverless application platform on GCP, replacing the older Google App Engine standard for most new workloads. It provides HTTPS endpoints, custom domains, scale to zero, and global load balancing integration without requiring GKE.

GCP — Google App Engine (legacy)
App Engine (Standard and Flexible environments) remains available but is generally superseded by Cloud Run for new workloads. The Standard environment offers per-language runtimes with scale-to-zero. The Flexible environment runs containers on Compute Engine VMs, similar to AWS Elastic Beanstalk.

Feature | AWS App Runner | Azure App Service | OCI (none native) | GCP Cloud Run
Source code deploy | Yes (GitHub) | Yes | N/A | Yes (Cloud Build integration)
Container image deploy | Yes | Yes | OCI Container Instances | Yes
Scale to zero | Yes (pause) | No (Consumption only via Functions) | N/A | Yes
Custom domains + TLS | Yes | Yes | Via Load Balancer | Yes
VPC connectivity | Yes (VPC Connector) | Yes | N/A | Yes (Serverless VPC Connector)
Deployment slots | No | Yes (blue/green) | N/A | Yes (traffic splitting by revision)

6. Service Mesh

Service meshes provide traffic management, mutual TLS (mTLS), observability, and policy enforcement for microservice-to-microservice communication, typically via sidecar proxies injected into pods.

AWS — Amazon VPC Lattice (replacement for App Mesh)
AWS App Mesh is deprecated, effective September 30, 2026. AWS recommends ECS workloads migrate to ECS Service Connect and EKS workloads migrate to Amazon VPC Lattice. VPC Lattice is a fully managed application networking layer that provides consistent connectivity, security, and monitoring across ECS, EKS, Lambda, and EC2 without requiring sidecar proxies. It simplifies cross-cluster and cross-account service communication with built-in authentication and authorization policies.

AWS — ECS Service Connect
Service Connect is an ECS-native service networking feature that enables service-to-service communication within an ECS cluster using logical service names and automatic health-based routing. It does not require Envoy configuration knowledge. Targeted at ECS workloads migrating off App Mesh.

Azure — Istio add-on for AKS
Azure provides a managed Istio control plane as an AKS add-on. Microsoft handles Istio lifecycle management, including upgrades when triggered by the user. The add-on has verified integrations with Azure Monitor managed Prometheus and Azure Managed Grafana, and it replaces the deprecated Open Service Mesh (OSM) add-on. Official Azure support is provided for the add-on, making it the recommended service mesh for AKS.

OCI — Istio add-on for OKE
OCI Service Mesh reached end-of-life on May 31, 2025. The replacement is the Istio cluster add-on for OKE. The add-on supports Oracle Linux 7 and 8 worker nodes running Kubernetes 1.26 or later. Oracle manages Istio version updates when opted in. Using the add-on form (vs. manual Istio installation) simplifies enable/disable, version selection, and configuration via approved key/value arguments.

GCP — Cloud Service Mesh (CSM)
Google Cloud Service Mesh is the unified product that consolidates Anthos Service Mesh and Traffic Director. CSM provides managed and in-cluster control plane options, integrates with GKE, supports both Istio-compatible APIs and gRPC proxyless service mesh, and connects to GCP services such as Cloud Armor, IAP, and Cloud Load Balancing. CSM supports both GKE-hosted clusters and on-premises clusters via the fleet API.
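All three Istio-based options above (the AKS add-on, the OKE add-on, and CSM's Istio-compatible APIs) enforce mesh-wide mutual TLS the same way, through a PeerAuthentication resource in the Istio root namespace. A minimal sketch, expressed as a Python dict for illustration; in practice this is YAML applied with kubectl:

```python
import json

# Minimal Istio PeerAuthentication enforcing strict mTLS mesh-wide.
# Placing it in the root namespace (istio-system by default) makes the
# policy apply to every workload in the mesh.
peer_auth = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "PeerAuthentication",
    "metadata": {"name": "default", "namespace": "istio-system"},
    "spec": {"mtls": {"mode": "STRICT"}},
}

print(json.dumps(peer_auth, indent=2))
```

STRICT mode rejects plaintext traffic outright; PERMISSIVE accepts both during migration, which is the usual first step when onboarding an existing cluster to a mesh.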

Feature | AWS VPC Lattice | AWS ECS Service Connect | Azure Istio (AKS add-on) | OCI Istio (OKE add-on) | GCP Cloud Service Mesh
Sidecar proxy required | No | No (ECS-managed agent) | Yes (Envoy) | Yes (Envoy) | Optional (proxyless gRPC)
Managed control plane | Yes (fully managed) | Yes (ECS-native) | Yes (Microsoft-managed) | Yes (Oracle-managed updates) | Yes
mTLS | Yes | Yes | Yes (Istio) | Yes (Istio) | Yes
Cross-cluster support | Yes (cross-account) | No | Multi-cluster Istio | Via OKE add-on | Yes (Fleet)
Kubernetes required | No (multi-compute) | No (ECS) | Yes (AKS) | Yes (OKE) | No (GKE + on-prem)
Predecessor / EOL | App Mesh (EOL Sep 2026) | App Mesh (ECS) | OSM (deprecated) | OCI Service Mesh (EOL May 2025) | Anthos Service Mesh + Traffic Director

Key differentiators:

  • AWS VPC Lattice is proxyless, operating at the VPC networking layer rather than requiring per-pod sidecar injection, a distinctive architectural approach.
  • GCP Cloud Service Mesh supports proxyless gRPC service mesh, enabling service mesh semantics without sidecars for gRPC workloads.
  • Azure and OCI both standardized on managed Istio add-ons after deprecating proprietary mesh products.
  • AWS App Mesh deprecation (EOL 2026) is the most significant service mesh migration event currently active across the industry.

Summary Cross-Reference

Category | AWS | Azure | OCI | GCP
Managed Kubernetes | Amazon EKS | AKS | OKE | GKE
Serverless Kubernetes | EKS + Fargate / EKS Auto Mode | AKS Virtual Nodes (ACI) | OKE Virtual Nodes | GKE Autopilot
Serverless containers | AWS Fargate (ECS/EKS) | Azure Container Apps | OCI Container Instances | Cloud Run
Simple app platform | AWS App Runner | Azure App Service | None native | Cloud Run / App Engine
Container registry | Amazon ECR | Azure Container Registry | OCI Container Registry (OCIR) | Google Artifact Registry
Functions-as-a-Service | AWS Lambda | Azure Functions | OCI Functions | Cloud Functions (Gen 2)
Service mesh | VPC Lattice / ECS Service Connect | Istio add-on for AKS | Istio add-on for OKE | Cloud Service Mesh
On-premises Kubernetes | EKS Anywhere / Hybrid Nodes | Azure Arc + AKS | None native | Anthos / GKE on-prem
OpenShift managed | ROSA | ARO | None | None
