Domain 2: Exadata Database Service (20%)
Domain 2 of the 1Z0-1093-25 Oracle AI Cloud Database Services 2025 Professional exam covers Oracle Exadata Database Service on OCI. At 20% of the exam weight, this represents approximately 10 questions. The exam tests your knowledge of Exadata architecture, provisioning workflows, infrastructure shapes, VM cluster management, elastic scaling, patching, backup/recovery, Data Guard, networking, and the Exadata-specific performance features that differentiate it from Base Database (BaseDB) Service.
1. Exadata Database Service Overview
What It Is
Oracle Exadata Database Service on Dedicated Infrastructure (ExaDB-D) provides Oracle Exadata Database Machine as a managed service in OCI data centers. It combines Exadata hardware (compute servers, storage servers, RoCE/InfiniBand networking) with OCI cloud automation for provisioning, patching, backup, and lifecycle management. (ExaDB-D Description)
The service is purpose-built for database consolidation -- a single Exadata infrastructure can host multiple VM clusters, each running multiple databases across multiple DB Homes. This is fundamentally different from Base Database Service, which runs individual database instances on standard OCI compute shapes with block volume storage.
Two Deployment Models
| Deployment | Location | Managed By | Key Use Case |
|---|---|---|---|
| ExaDB-D (Dedicated Infrastructure) | Oracle's OCI data centers | Oracle manages hardware; customer manages VMs, GI, DB software | Cloud-native Exadata in public cloud |
| ExaDB-C@C (Cloud@Customer) | Customer's own data center | Oracle manages hardware remotely via OCI Control Plane; customer manages VMs, GI, DB | Data residency, regulatory compliance, hybrid cloud |
Both use identical OCI Console, CLI, and API interfaces. The exam may test whether you know that Cloud@Customer hardware is physically in the customer's data center but managed through OCI. (ExaDB-C@C Overview)
Shared Responsibility Model
| Oracle Manages | Customer Manages |
|---|---|
| Physical hardware (DB servers, storage servers) | Guest VM OS and Grid Infrastructure patching |
| Hypervisor layer | Database software patching and updates |
| Storage networking | Database lifecycle (create, scale, backup, terminate) |
| Base OS and hardware patching | Data, schemas, encryption keys |
| Security scans and updates | VM and database management via Cloud Automation |
| Infrastructure monitoring | Additional software installation in VMs |
Exam trap: Oracle staff are NOT authorized to access customer VMs. This is a hard security boundary. The customer is fully responsible for everything inside the VM.
2. Infrastructure Shapes and Specifications
Scalable (Flexible) Systems
Scalable systems start with a minimum configuration and allow independent expansion of compute and storage servers. (ExaDB-D Description)
| Spec (per system minimum) | X11M | X9M | X8M |
|---|---|---|---|
| DB Servers (min/max) | 2 / 32 | 2 / 32 | 2 / 32 |
| Storage Servers (min/max) | 3 / 64 | 3 / 64 | 3 / 64 |
| Per DB Server: Compute | 760 ECPUs | 126 usable cores | 50 usable cores |
| Per DB Server: Memory | 1,390 GB | 1,390 GB | 1,390 GB |
| Per Storage Server: Disk | 80 TB usable | 63.6 TB usable | 49.9 TB usable |
| Total ECPUs/Cores (min config) | 1,520 ECPUs | 252 cores | 100 cores |
| Total Memory (min config) | 2,780 GB | 2,780 GB | 2,780 GB |
| Total Disk (min config) | 240 TB | 190 TB | 149 TB |
| Total Flash (min config) | 81.6 TB | 76.8 TB | 76.8 TB |
| Max Local Storage per DB Server | 2,243 GB | 2,243 GB | 2,243 GB |
| Max VM Clusters | 8 | 8 | 8 |
| Max VMs per DB Server | 8 | 8 | 8 |
| Interconnect | RoCE (RDMA) | RoCE (RDMA) | RoCE (RDMA) |
Fixed-Shape Systems (Legacy)
Fixed shapes cannot be scaled after provisioning. They come in Quarter, Half, and Full rack configurations. (ExaDB-D Description)
| Property | X8 Quarter | X8 Half | X8 Full |
|---|---|---|---|
| Shape Name | Exadata.Quarter3.100 | Exadata.Half3.200 | Exadata.Full3.400 |
| DB Servers | 2 | 4 | 8 |
| Storage Servers | 3 | 6 | 12 |
| Total Usable Cores | 100 | 200 | 400 |
| Total Memory (GB) | 1,440 | 2,880 | 5,760 |
| Total Flash (TB) | 76.8 | 179.2 | 358.4 |
| Total Disk (TB) | 149 | 299 | 598 |
| Interconnect | InfiniBand | InfiniBand | InfiniBand |
X7 and X6 fixed shapes follow the same Quarter/Half/Full pattern with lower specs. X6 shapes require License Included pricing (BYOL not supported).
Exam trap: X8M and later use RoCE (RDMA over Converged Ethernet). X8 and X7 fixed shapes use InfiniBand. The "M" suffix indicates the modern RoCE-based interconnect.
Key Limits to Memorize
| Limit | Value |
|---|---|
| Max VM Clusters per infrastructure | 8 |
| Max VMs per DB Server | 8 |
| VM Image Size (minimum/default) | 244 GB (includes 60 GB for /u02) |
| Max File System Size per VM | 900 GB |
| Minimum billing commitment | 48 hours per infrastructure |
| Billing after minimum | Per-second |
| OCPU scaling billing | Per-second with 1-minute minimum per added OCPU |
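The billing floors in the table can be sketched as a small helper. This is an illustrative model of the stated rules, not Oracle's actual metering logic; the function names are hypothetical.

```python
# Sketch of the ExaDB-D billing floors described above. Hypothetical helpers,
# not Oracle's metering implementation.

MINIMUM_COMMITMENT_SECONDS = 48 * 3600  # 48-hour minimum per infrastructure

def billable_infrastructure_seconds(actual_seconds: int) -> int:
    """Infrastructure is billed per-second, but never less than 48 hours."""
    return max(actual_seconds, MINIMUM_COMMITMENT_SECONDS)

def billable_ocpu_seconds(actual_seconds: int) -> int:
    """Each added OCPU is billed per-second with a 1-minute minimum."""
    return max(actual_seconds, 60)

# A 24-hour run is still charged the full 48-hour commitment:
assert billable_infrastructure_seconds(24 * 3600) == 48 * 3600
# A 72-hour run is charged per-second for the full 72 hours:
assert billable_infrastructure_seconds(72 * 3600) == 72 * 3600
# An OCPU added for 10 seconds is billed for 60 seconds:
assert billable_ocpu_seconds(10) == 60
```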
3. Resource Hierarchy and Provisioning Workflow
Four-Level Resource Hierarchy
Creating an Exadata database requires provisioning resources in a strict top-down sequence. (Provisioning Guide)
Cloud Exadata Infrastructure (physical rack)
└── Cloud VM Cluster (virtual machines on the rack)
└── DB Home (Oracle Database software installation)
└── Database (CDB with optional PDBs)
Each level depends on the one above it. You cannot create a VM Cluster without first having an Infrastructure resource. You cannot create a DB Home without a VM Cluster.
Step 1: Create Cloud Exadata Infrastructure
This provisions the physical Exadata rack in an OCI availability domain. Key parameters:
- Compartment and Availability Domain
- Model Selection: Fixed (X6/X7/X8 rack sizes) or Scalable (X8M/X9M/X11M with server counts)
- Maintenance Schedule: Rolling (one server at a time, minimal downtime) or Non-rolling (all servers at once, full downtime)
- Contacts: 1-10 email addresses for maintenance notifications
The infrastructure status transitions from Provisioning to Available before you can create VM clusters.
Step 2: Create Cloud VM Cluster
The VM Cluster defines the virtual machines that run on the infrastructure. Key parameters:
- VM Cluster Type: Standard or Developer (cannot be changed after creation)
- DB Server Selection: Choose which physical servers host this cluster (minimum 1, recommend 2+ for HA)
- OCPU/ECPU Allocation: X10M and earlier: minimum 2 OCPUs; X11M: minimum 8 ECPUs per VM
- Memory Per VM: Minimum 30 GB
- Local Storage Per VM: Minimum 60 GB for /u02
- Exadata Storage: Multiples of 1 TB, minimum 2 TB (ASM); or Exascale vault (X8M+ only)
- Grid Infrastructure Version: 19c or 26ai (determines which DB versions you can run)
- Guest OS Version: Oracle Linux 8
- Networking: Client subnet, backup subnet, hostname prefix (max 12 chars), SCAN listener port (default 1521, range 1024-8999)
- Licensing: License Included or BYOL
Exam trap: The VM Cluster type (Standard vs Developer) cannot be changed after creation. Developer clusters are limited to a single VM, 2 threads (1 core) per PDB, 8 GB memory per PDB, 20 GB storage per PDB, and 30 sessions per PDB. Data Guard is prohibited on Developer clusters.
Step 3: Create DB Home
A DB Home is an Oracle Database software installation. Multiple DB Homes can exist on a single VM Cluster, allowing you to run different database versions or patch levels simultaneously. This is a key advantage over BaseDB, where each DB system has a single DB Home.
Step 4: Create Database
The database itself (CDB with optional PDBs) lives inside a DB Home. Default settings include ASM storage, American locale, and an auto-generated db_unique_name.
Networking Requirements
| Component | Purpose | Constraints |
|---|---|---|
| Client Subnet | Application connectivity to databases | Must NOT overlap with 192.168.16.16/28 |
| Backup Subnet | Backup traffic and Data Guard replication | Must NOT overlap with 192.168.128.0/20 |
| SCAN Listener | Single Client Access Name for database connectivity | Port 1024-8999, default 1521 |
| Service Gateway | Required for Object Storage backups | Must be configured in VCN |
Exam trap: The SCAN listener port cannot be manually changed after provisioning. Attempting to change it may cause Data Guard failures.
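The reserved-range constraints above can be checked before provisioning with the standard library's `ipaddress` module. A minimal sketch; the function name is an assumption, not part of any Oracle tooling.

```python
import ipaddress

# Reserved ranges from the networking table above.
RESERVED = {
    "client": ipaddress.ip_network("192.168.16.16/28"),
    "backup": ipaddress.ip_network("192.168.128.0/20"),
}

def subnet_is_valid(role: str, cidr: str) -> bool:
    """Return False if the proposed subnet overlaps the reserved range for its role."""
    return not ipaddress.ip_network(cidr).overlaps(RESERVED[role])

assert subnet_is_valid("client", "10.0.1.0/24")          # no overlap: OK
assert not subnet_is_valid("client", "192.168.16.0/24")  # contains 192.168.16.16/28
assert not subnet_is_valid("backup", "192.168.130.0/24") # inside 192.168.128.0/20
```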
4. Elastic Scaling
Compute Scaling
Exadata allows online scaling of OCPUs/ECPUs with zero downtime. The scaling increment depends on the infrastructure. (ExaDB-D Description)
| System | Scaling Increment |
|---|---|
| X11M | Multiples of 4 ECPUs x DB server count (e.g. 6 servers -> increments of 24 ECPUs) |
| X9M/X8M | Multiples equal to DB server count |
| X8 Quarter Rack | Multiples of 2 |
| X8 Half Rack | Multiples of 4 |
| X8 Full Rack | Multiples of 8 |
You can scale down to zero CPU cores, paying only infrastructure charges. Scaling is billed per-second with a 1-minute minimum per added OCPU.
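The per-system increments in the table can be expressed as a validity check. This is an illustrative sketch of the stated rules under the assumption that the increment scales with DB server count; the function name is hypothetical.

```python
def valid_scale_target(system: str, db_servers: int, total_units: int) -> bool:
    """Check a requested total CPU allocation against the per-system scaling
    increment from the table above. Scaling to zero is allowed."""
    increments = {
        "X11M": 4 * db_servers,  # ECPUs: 4 per core, per DB server
        "X9M": db_servers,       # OCPUs: multiples equal to DB server count
        "X8M": db_servers,
    }
    return total_units % increments[system] == 0

assert valid_scale_target("X11M", 6, 48)      # 48 is a multiple of 24
assert not valid_scale_target("X11M", 6, 20)  # 20 is not
assert valid_scale_target("X9M", 4, 16)       # multiple of server count
assert valid_scale_target("X9M", 4, 0)        # scale to zero is allowed
```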
Storage Scaling
On scalable systems (X8M/X9M/X11M), you can independently add storage servers up to 64 total. Fixed-shape systems cannot scale storage after provisioning.
Exam trap: Unlike BaseDB (which uses OCI Block Volumes that scale independently), Exadata storage scaling means adding entire storage servers. Compute and storage scale independently on flexible systems, but storage on fixed shapes is permanently set at provisioning.
5. Exadata-Specific Performance Features
These features are what make Exadata fundamentally different from running Oracle Database on standard OCI compute (BaseDB). (Exadata Features)
Smart Scan (SQL Offload)
Smart Scan offloads data-intensive SQL operations from database servers to storage servers. Instead of shipping all data blocks to the database server for filtering, the storage server applies WHERE clause predicates, column projection, and certain join processing before returning only the qualifying rows. This achieves up to 31 TB/second of scan throughput across the storage tier.
Exam trap: Smart Scan works automatically for full table scans and index fast full scans. It does NOT offload index range scans or single-row lookups. If the exam asks "which scan type benefits from Smart Scan," the answer is full table scan.
Storage Indexes
Storage Indexes are lightweight, in-memory structures maintained automatically on each storage server. They track the minimum and maximum values of columns for each 1 MB storage region. When a query includes a WHERE clause predicate, the storage index can skip entire storage regions where no matching data exists -- without reading any data from disk or flash. This works in conjunction with Smart Scan to further reduce I/O.
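The region-skipping idea can be modeled in a few lines. This is a toy simulation of the concept, not Exadata internals; the data layout and names are illustrative only.

```python
# Toy model: each 1 MB storage region tracks min/max column values, so the
# storage server can skip regions that cannot contain matching rows.

regions = [
    {"min": 1,   "max": 100, "rows": [5, 42, 99]},
    {"min": 200, "max": 300, "rows": [250, 299]},
    {"min": 400, "max": 500, "rows": [404, 450]},
]

def scan_with_storage_index(regions, lo, hi):
    """Read only regions whose [min, max] range overlaps the predicate [lo, hi]."""
    regions_read, hits = 0, []
    for r in regions:
        if r["max"] < lo or r["min"] > hi:
            continue  # region skipped: no disk or flash I/O at all
        regions_read += 1
        hits += [v for v in r["rows"] if lo <= v <= hi]
    return regions_read, hits

regions_read, hits = scan_with_storage_index(regions, 240, 260)
assert regions_read == 1  # two of three regions skipped without any read
assert hits == [250]
```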
Hybrid Columnar Compression (HCC)
HCC provides compression ratios typically around 10:1 by combining columnar organization with compression algorithms optimized for each column's data type. HCC is available ONLY on Exadata storage and Oracle's engineered systems -- it is NOT available on BaseDB or any non-Exadata platform.
RDMA and PMEM (X8M and Later)
Starting with X8M, Exadata uses RDMA over Converged Ethernet (RoCE) for the internal storage network, replacing InfiniBand on older systems. This enables: (PMEM and RDMA)
- PMEM Cache: Database clients perform RDMA reads directly from persistent memory on storage servers, bypassing the storage server software (cellsrv) entirely. Read latency drops to as low as 14 microseconds.
- PMEM Log: Redo log write latency is reduced by sending redo buffers via RDMA directly to PMEM on storage servers.
- iDB Protocol: Exadata's purpose-built protocol for communication between database servers and storage servers over RoCE.
| Feature | How It Helps |
|---|---|
| PMEM Cache | Bypasses cellsrv for reads; ~14 microsecond latency |
| PMEM Log | Reduces redo write latency for OLTP workloads |
| RDMA (RoCE) | Direct memory-to-memory data transfer; eliminates OS/CPU overhead |
| Smart Flash Cache | SSD-based caching tier between DRAM and disk |
| Smart Flash Log | SSD-based redo log write acceleration |
I/O Resource Management (IORM)
IORM automatically eliminates resource contention between databases sharing the same Exadata infrastructure. It prioritizes I/O based on configured policies, ensuring predictable performance for critical workloads during database consolidation.
6. Patching and Updating
Exadata has three independent patching layers, each with its own lifecycle. Patch in this order. (Patching Guide)
Patching Order (Mandatory Sequence)
1. Grid Infrastructure (VM Cluster level) -- patch first
2. Database Home -- patch second
3. Individual Databases -- patch third
Patching Methods by Layer
| Layer | Method | Rolling? | Scope | Downtime |
|---|---|---|---|---|
| Grid Infrastructure | In-place upgrade on VM Cluster | Yes (one node at a time) | All DBs on cluster | Minimal |
| Database Home | In-place patch OR move DB to new Home | Yes (node by node) | All DBs in Home (in-place) or single DB (move) | Yes |
| Database | Move to new DB Home (recommended) | Yes | Single database | Brief |
| OS (Guest VM) | Automated image update | Yes (node by node) | VM cluster nodes | Minimal |
Recommended Patching Method: Move Database to New DB Home
Oracle recommends patching databases by moving them to a DB Home with the target patch level rather than patching the Home in place. Advantages:
- Only affects the single database being moved
- Easy rollback (move back to original Home)
- Datapatch executes automatically (can be skipped)
- Target Home must be the same major version, at a patch level within the supported range (latest through N-3)
Precheck Operations
Always run a precheck before applying any patch. Prerequisites for patching:
- /u01 must have a minimum of 15 GB free space
- Oracle Clusterware must be running
- ALL nodes of VM cluster must be up
- No infrastructure maintenance scheduled within 24 hours
Restrictions During GI Upgrade
While Grid Infrastructure is upgrading, the following operations are blocked:
- Starting, stopping, rebooting nodes
- Scaling CPU
- Provisioning or managing DB Homes / databases
- Database restoration
- Editing IORM settings
- Data Guard enable, switchover, failover to DB on same cluster
Exam trap: Failover to a standby on a DIFFERENT VM cluster IS allowed during GI patching. The restriction only applies to the cluster being patched.
Database Version Support
Supported versions: Oracle AI Database 26ai, Oracle Database 19c, 12.2, 12.1, 11.2 (upgrade support).
Patches are available for the latest version through N-3 (four versions total). For example, for 19c: 19.8, 19.7, 19.6, 19.5 would be supported; 19.4 and earlier would not.
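The N-3 rule can be sketched as simple version arithmetic. A hedged illustration of the rule as stated; the helper name is hypothetical and the version handling is simplified to major.minor strings.

```python
def supported_patch_levels(latest: str, depth: int = 4) -> list:
    """Patch images are offered for the latest release update through N-3
    (four versions total). Illustrative version arithmetic only."""
    major, minor = (int(p) for p in latest.split("."))
    return [f"{major}.{m}" for m in range(minor, max(minor - depth, -1), -1)]

# With 19.8 as the latest release update:
assert supported_patch_levels("19.8") == ["19.8", "19.7", "19.6", "19.5"]
assert "19.4" not in supported_patch_levels("19.8")
```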
Exam trap: To run 26ai databases, Grid Infrastructure MUST be 26ai. For 19c databases, GI can be 19c or 26ai. You cannot run a database version higher than the GI version.
7. Backup and Recovery
Backup Destinations
Exadata offers three mutually exclusive backup approaches (you cannot mix them). (Backup Guide)
| Destination | Retention Options | Key Feature |
|---|---|---|
| Autonomous Recovery Service (recommended) | Bronze 14d, Silver 35d (default), Gold 65d, Platinum 95d, Custom up to 10 years | Real-time data protection (~0 RPO), ZDLRA-based, long-term retention 90-3,650 days |
| OCI Object Storage | 7, 15, 30 (default), 45, 60 days | L0 (full) + L1 (incremental) backups, cost-effective |
| Direct RMAN (not recommended) | Customer-managed | For existing RMAN scripts; must unregister from backup automation first |
Exam trap: These three approaches are MUTUALLY EXCLUSIVE. Mixing them causes backup automation to break. If the exam asks about hybrid backup configurations, the answer is that they are not supported.
Automatic Backup Schedule
- Default backup window: 00:00-06:00 UTC
- Custom windows: 2-hour windows on even-numbered hours
- Backups do NOT necessarily complete within the scheduling window
- All backups are encrypted with the TDE master key
- System automatically deletes expired backups per retention policy
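The custom-window rule above (2-hour slots starting on even-numbered UTC hours) can be captured in a one-line check. A sketch of the stated constraint; the function name is an assumption.

```python
def valid_backup_window_start(start_hour: int) -> bool:
    """Custom automatic-backup windows are 2-hour slots starting on
    even-numbered UTC hours (00:00-02:00, 02:00-04:00, ..., 22:00-00:00)."""
    return 0 <= start_hour <= 22 and start_hour % 2 == 0

assert valid_backup_window_start(0)        # the default window starts at 00:00 UTC
assert valid_backup_window_start(14)       # 14:00-16:00 UTC is a valid slot
assert not valid_backup_window_start(13)   # odd start hours are not offered
```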
RMAN Channel Allocation
Channels are allocated based on OCPU count per node:
| OCPUs Per Node | Backup Channels | Restore Channels |
|---|---|---|
| 12 or fewer | 2 | 4 |
| 13-24 | 4 | 8 |
| More than 24 | 8 | 16 |
Maximum 255 channels cluster-wide. Custom allocation via dbaascli (1-32 per node).
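The allocation table maps directly to a small lookup function. Illustrative only; the real allocation is handled by the backup automation (with custom overrides via dbaascli).

```python
def rman_channels(ocpus_per_node: int) -> tuple:
    """Return (backup_channels, restore_channels) per node,
    following the OCPU-based allocation table above."""
    if ocpus_per_node <= 12:
        return (2, 4)
    if ocpus_per_node <= 24:
        return (4, 8)
    return (8, 16)

assert rman_channels(8) == (2, 4)
assert rman_channels(16) == (4, 8)
assert rman_channels(32) == (8, 16)
```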
Recovery Options
- Restore to latest: Last known good state, minimum data loss
- Restore to timestamp: Specific point in time
- Restore to SCN: Specific System Change Number
Long-term retention (LTR) backups represent a single point in time and cannot use point-in-time recovery. LTR backups can restore to a NEW database only; in-place restore is NOT supported.
8. Data Guard
Oracle Data Guard on Exadata provides high availability and disaster recovery through physical standby databases. (Data Guard Guide)
Key Configuration Facts
| Property | Value |
|---|---|
| Standby type supported | Physical standby only |
| Max standbys per primary | 6 |
| Active Data Guard | Supported (requires license); read-only standby with real-time query |
| Protection modes | Maximum Performance (default, async), Maximum Availability (sync, zero data loss) |
| Cross-region | Supported via Remote VCN Peering |
| Cross-AD (same region) | Supported; recommended for fault isolation |
| Minimum TCP port | 1521 |
| db_unique_name max length | 30 characters |
| Cascading standby | NOT supported |
Prerequisites
- Two existing Exadata VM Clusters (one for primary, one for standby)
- Standby DB Home must have identical patches as primary DB Home
- Primary and standby must be on same major release version (standby can be higher minor version)
- Same-region: both instances in same VCN; cross-region: Remote VCN Peering required
- Network security rules allowing TCP port 1521 between primary and standby subnets
Operations
| Operation | Description | Data Loss? |
|---|---|---|
| Switchover | Planned role reversal; primary becomes standby, standby becomes primary | Zero data loss |
| Failover | Unplanned; standby becomes primary when primary fails | Depends on protection mode |
| Reinstate | Returns failed database back to standby role | N/A |
Exam trap: You CANNOT terminate a primary database while it has active standby associations. You must first terminate all standbys or switchover the primary to standby role, then terminate.
Health Status Indicators
The Console shows switchover/failover readiness with color-coded indicators:
- Green: All standbys ready
- Yellow: Subset of standbys ready (multi-standby scenario)
- Red: No standbys ready
- Gray: Unknown / cannot determine
Backup from Standby
Starting with dbaascli 25.3.1.0.0, Data Guard broker is mandatory when configuring Recovery Service as the backup destination for Data Guard-enabled databases. Backups can be scheduled on the standby to offload the primary.
9. Management Interfaces
Exadata supports four management interfaces for all lifecycle operations:
| Interface | Use Case |
|---|---|
| OCI Console | Web-based GUI for provisioning, patching, monitoring |
| OCI CLI | Command-line automation; oci db command family |
| REST API | Programmatic access; full lifecycle management |
| dbaascli | In-VM command-line tool for database-specific operations (backup, patching, recovery, diagnostics) |
Key dbaascli commands for the exam:
dbaascli database backup --start --dbname <name> # On-demand backup
dbaascli database backup --getConfig --dbname <name> # View backup config
dbaascli database runDatapatch --dbname <name> # Apply datapatch
dbaascli database upgrade --dbname <name> # Upgrade database
dbaascli database getDetails --dbname <name> # View database details
dbaascli database convertToPDB --dbname <name> # Convert non-CDB to PDB
dbaascli database addInstance --dbname <name> # Add RAC instance
10. Key Differences: Exadata vs BaseDB
This comparison appears frequently on the exam. Understand what Exadata provides that BaseDB does not.
| Feature | Exadata Database Service | Base Database Service |
|---|---|---|
| Hardware | Dedicated Exadata rack (engineered system) | Standard OCI compute shapes + Block Volumes |
| Storage | Exadata Storage Servers (ASM on dedicated hardware) | OCI Block Volumes (iSCSI) |
| Smart Scan | Yes (SQL offloaded to storage servers) | No |
| Storage Indexes | Yes (automatic I/O elimination) | No |
| HCC | Yes (10:1 compression) | No (only basic/advanced compression) |
| RDMA / PMEM | Yes (X8M+, ~14 microsecond reads) | No |
| Database Edition | Enterprise Edition Extreme Performance only | Standard Edition or Enterprise Edition |
| Multiple DB Homes per system | Yes (multiple per VM Cluster) | Single DB Home per DB system |
| VM Clusters | Up to 8 per infrastructure | N/A (each DB system is independent) |
| RAC | Built-in (multi-node VM Clusters) | Available on multi-node shapes |
| Scaling | Add/remove OCPUs online; add storage servers | Scale compute shape; expand block volume storage |
| Database consolidation | Primary use case (many DBs on shared infrastructure) | Individual DB systems |
| Minimum cost | 48-hour minimum commitment | Per-hour billing |
| IORM | Yes (automatic I/O prioritization) | No |
Exam trap: Exadata is ALWAYS Enterprise Edition Extreme Performance (which includes all options: In-Memory, RAC, Active Data Guard, etc.). BaseDB can use Standard Edition or Enterprise Edition. If the exam asks about running Standard Edition on Exadata, the answer is that you cannot.
11. Licensing and Billing
Licensing Models
| Model | Description | Constraint |
|---|---|---|
| License Included | Oracle EE Extreme Performance included in subscription; all management packs and options included | Available on all shapes |
| BYOL | Use existing Oracle EE licenses; customer validates license compliance | NOT available on X6 shapes |
Billing Model
- Minimum commitment: 48 hours per infrastructure instance
- After minimum: Per-second billing
- OCPU scaling: Per-second with 1-minute minimum per added OCPU
- Zero core: Infrastructure-only charges when scaled to zero cores
- Payment models: Pay As You Go (PAYG) or Annual Universal Credits
Developer VM Cluster
Oracle Database licenses are waived on Developer VM Clusters. You pay only infrastructure costs. However, Developer clusters have severe constraints (single VM, 1 core/8GB/20GB per PDB, no Data Guard, no cross-region backup).
12. Exam Traps and Common Gotchas
- VM Cluster type is permanent: Standard vs Developer cannot change after creation.
- SCAN listener port is permanent: Cannot be changed after VM Cluster provisioning.
- GI version gates DB version: Database version cannot exceed GI version. 26ai DB requires 26ai GI.
- Patching order matters: GI first, then DB Home, then Database. Physical infrastructure patching is Oracle-managed and not a customer step. Skipping the customer-controlled order causes failures.
- Backup approaches are mutually exclusive: Autonomous Recovery Service, Object Storage, and direct RMAN cannot be mixed on the same database.
- HCC is Exadata-only: If a question mentions 10:1 compression in the cloud, the answer is Exadata, not BaseDB.
- X8M = RoCE, X8 = InfiniBand: The "M" suffix denotes the modern RoCE interconnect.
- Max 6 standbys per primary: Data Guard supports up to 6 physical standbys. Cascading standby is NOT supported.
- Cannot terminate primary with active standbys: Must remove standbys or switchover first.
- Cloud@Customer is OCI-managed: Even though hardware is in the customer's data center, provisioning and lifecycle management go through OCI Console/API.
- 15 GB free in /u01 required for patching: Precheck will fail without it.
- No infrastructure maintenance within 24 hours of patching: The system blocks patching if maintenance is scheduled.
- Recovery Service LTR: In-place restore NOT supported for long-term retention backups; must restore to a new database.
- Fixed shapes cannot scale: Quarter/Half/Full rack storage and compute are set at provisioning.
- Subnets have reserved ranges: Client subnet must not overlap 192.168.16.16/28; backup subnet must not overlap 192.168.128.0/20.
References
- Exadata Database Service on Dedicated Infrastructure Overview
- ExaDB-D Description and Specifications
- Provisioning Exadata Cloud Infrastructure
- Patching and Updating ExaDB-D
- Backup and Recovery on ExaDB-D
- Data Guard with Exadata Cloud Infrastructure
- Exadata Cloud@Customer Overview
- PMEM and RDMA on Exadata
- Exadata Database Service Features
- 1Z0-1093-25 Exam Page