Oracle 1Z0-1093-25 Focused Study Guide
Oracle AI Cloud Database Services 2025 Professional. Covers the key tested topics with full context from the domain study guides.
Domain 1: Base Database Service
VCN Networking Fundamentals
Every BaseDB DB system must be placed in a VCN subnet. The VCN must exist before DB system creation.
Service Gateway
A service gateway provides resources in a private subnet with access to supported Oracle services (like Object Storage) within the same region — without sending traffic over the internet. Key facts:
- Works with private subnets only — the whole point is to avoid internet exposure
- Connects to Oracle services in the same region as the VCN only — it cannot reach services in other regions
- Does NOT provide internet connectivity — that's what an Internet Gateway does
- Common use: BaseDB in a private subnet needs a service gateway to back up to Object Storage and to receive managed updates
Dynamic Routing Gateway
A DRG connects your VCN to on-premises networks. It supports two connection types:
- IPSec VPN — encrypted tunnel over the public internet
- FastConnect — dedicated private connection (no internet traversal)
The DRG does NOT connect through a "database service network" — that is not a real OCI concept. It also does not provide access to OCI services (that's the service gateway's role).
Route Tables
Route tables control how traffic leaves the VCN. They do not control inbound traffic (that's security lists/NSGs).
- Each rule specifies a destination CIDR block and a target (a gateway like DRG, IGW, or SGW)
- Gateways added to your VCN DO require route table entries to function — traffic won't reach them without a matching rule
- The default route table is shared by subnets that don't specify a custom route table — you cannot delete the default route table
- Route tables do NOT control ingress/egress rules or specify traffic types — that's the job of security lists and NSGs
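The most-specific-match behavior implied above can be sketched with the standard `ipaddress` module. The route table below is hypothetical: real OCI route rules reference gateway OCIDs, and service gateway rules use a service CIDR label rather than a plain CIDR, which a stand-in range represents here.

```python
import ipaddress

# Hypothetical route table mapping destination CIDR -> target gateway.
# Names are illustrative; real rules reference gateway OCIDs.
ROUTE_RULES = {
    "0.0.0.0/0": "internet-gateway",     # default route
    "10.0.0.0/8": "drg",                 # on-premises traffic via DRG
    "134.70.0.0/17": "service-gateway",  # stand-in for a service CIDR label
}

def select_route(dest_ip):
    """Return the target of the most specific (longest-prefix) matching
    rule; without a matching rule, traffic never reaches a gateway."""
    ip = ipaddress.ip_address(dest_ip)
    best = None
    for cidr, target in ROUTE_RULES.items():
        net = ipaddress.ip_network(cidr)
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1] if best else None
```

For example, traffic to 10.1.2.3 matches both the default route and 10.0.0.0/8, and the more specific /8 rule wins, so it is sent to the DRG.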
IAM: Permissions and Compartments
DB System Patching Permissions
To patch DB systems in your tenancy, you need the manage db-systems permission. There is no dedicated "patch" verb in OCI IAM for database systems. Patching operations fall under the broader manage verb. The correct policy statement is:
Allow group database-admins to manage db-systems in tenancy
Compartments
Compartments are OCI's primary mechanism for organizing and isolating resources. Key facts:
- Compartments are tenancy-wide — when you create a compartment, it is available in every region your tenancy is subscribed to
- They can be organized in a hierarchical (nested) fashion — up to six levels deep
- They allow you to isolate resources and control access via IAM policies
- After creating a compartment, you need to write at least one policy to grant any group access to resources in it
Exam trap: A statement saying "compartments are not tenancy-wide across regions" is FALSE — this is a frequently tested trick question.
Oracle Data Guard Protection Modes
BaseDB Data Guard supports two protection modes (Maximum Protection is NOT available through the OCI Console):
| Protection Mode | Transport Type | Behavior |
|---|---|---|
| Maximum Performance | ASYNC | Default mode. No primary performance impact. Minimal data loss possible because redo is shipped asynchronously. |
| Maximum Availability | SYNC | Zero data loss under normal operation. The primary waits for standby acknowledgment before committing. Falls back to async if the standby becomes unavailable — this prevents the primary from hanging. |
Key points:
- Maximum Availability does NOT use sync transport only — it has an async fallback mechanism
- Fast-Start Failover (FSFO) is not supported through the OCI Console
- The protection mode can be changed after Data Guard is initially enabled
- Maximum one standby per primary database
- Both databases must have identical versions and editions
Backup Recovery Options
When you use the OCI Console to recover a DB system database, three restore methods are available:
| Method | Description |
|---|---|
| Restore to Latest | Recovers to the last known good state with minimal data loss |
| Restore to Timestamp | Point-in-time recovery to a specific date/time you specify |
| Restore to SCN | Recovers to a specific System Change Number |
All three methods are available through the Console UI. Additionally:
- Object Storage IS an available backup destination — databases absolutely have access to it
- If Data Guard is enabled, you must terminate the standby database association before performing a recovery
- Backups occur at the CDB level — individual PDB-level backup is not supported
- Cross-region backup and restore is supported
VM DB System Scaling and Rebooting
OCPU Scaling
Scaling OCPUs on a VM DB system is done by changing the shape. Oracle's documentation titles this operation "Change the Shape of a DB System". The behavior differs by configuration:
| Configuration | Behavior | Downtime? |
|---|---|---|
| Single-node | Shape change requires a restart | Yes |
| 2-node RAC | Shape change occurs in a rolling fashion | No downtime |
Prerequisites for shape change: DB system must be Available, database must use SPFILE (not PFILE), SGA_TARGET must be nonzero, Cluster Ready Services must be running. Shape change does NOT affect storage allocation.
Rebooting a Node
To reboot a VM DB system node using the OCI Console:
- Navigate to the VM DB system details page
- Find the specific node you want to reboot
- Click the Actions menu (three dots) on that node
- Select Reboot
You can reboot nodes individually — you do NOT have to reboot all nodes at once for multi-node systems. This is not limited to REST APIs or DBCLI.
Block Storage After Provisioning
After provisioning a VM DB system, storage scaling is one-directional:
| Operation | Supported? | Impact |
|---|---|---|
| Increase storage | Yes, at any time | No downtime — online operation |
| Decrease storage | No — not supported in place | Must create a new DB system and migrate data |
Storage range: 256 GB to 80 TB for data, plus up to 20 TB for recovery. Oracle recommends keeping recovery storage at 20% of total or higher.
Key difference from Exadata: BaseDB storage can only scale up. Exadata storage can scale up AND down elastically.
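A small validator makes the one-directional rule concrete. This is a sketch of the constraints stated above (the 256 GB to 80 TB data range, and no in-place decrease for BaseDB); the function name and `service` parameter are illustrative, not part of any OCI API.

```python
def validate_storage_change(current_gb, requested_gb, service="basedb"):
    """Reject storage changes that the service described above would
    refuse. Bounds are the data-storage range from this guide."""
    MIN_GB, MAX_GB = 256, 80 * 1024
    if not MIN_GB <= requested_gb <= MAX_GB:
        raise ValueError(f"requested size must be {MIN_GB}-{MAX_GB} GB")
    if service == "basedb" and requested_gb < current_gb:
        raise ValueError("BaseDB storage cannot be decreased in place; "
                         "create a new DB system and migrate the data")
    return requested_gb
```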
Storage Management Software
When configuring a VM DB system, you choose between two storage management options. This choice is permanent — you cannot change it after provisioning.
| Storage Manager | Single-Node | 2-Node RAC | Notes |
|---|---|---|---|
| ASM (Automatic Storage Management) | Yes | Yes | Standard choice for all configurations |
| LVM (Logical Volume Manager) | Yes | No | Enables fast provisioning; single-node only |
Valid DB system configuration options:
- Two-node RAC with ASM — RAC always requires ASM
- Single-node with LVM — enables fast provisioning
- Single-node with ASM — also valid but without fast provisioning benefit
Invalid: Two-node RAC with LVM is NOT supported.
Patching Methods
When you need to patch a database within a month of Oracle releasing a patch, the method with the least downtime is out-of-place patching:
- Create a new Database Home at the target patch level
- Move the database from the old Home to the new Home
Why this is the least downtime approach:
- Only affects the single database being moved (not all databases in the Home)
- The new Home is fully patched before the database moves — no patching-while-running
- Easy rollback — just move the database back to the original Home
- Datapatch executes automatically during the move
Always patch in this order: Grid Infrastructure (DB system level) first, then Database Home, then individual databases. Skipping this order causes failures.
Other methods (download from MOS and apply in-place, use DBAASCLI for in-place patching) all require more downtime because the database is being patched while it resides in the Home.
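The required patch ordering can be encoded as a simple sort, which is one way automation scripts avoid the failure mode mentioned above. The component names here are illustrative labels, not OCI resource types.

```python
# Patch order from the guide: Grid Infrastructure first, then
# Database Home, then individual databases.
PATCH_ORDER = ["grid_infrastructure", "db_home", "database"]

def sorted_patch_plan(components):
    """Order pending patch targets so GI is patched before DB Homes,
    and DB Homes before databases."""
    rank = {name: i for i, name in enumerate(PATCH_ORDER)}
    unknown = [c for c in components if c not in rank]
    if unknown:
        raise ValueError(f"unknown component(s): {unknown}")
    return sorted(components, key=rank.__getitem__)
```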
Domain 2: Exadata Database Service
Shared Responsibility Model
Exadata Database Service splits management responsibilities between Oracle and the customer. Understanding who manages what is critical for the exam.
What the Customer Manages:
| Responsibility | Details |
|---|---|
| Guest VM OS patching | Customer controls when and how VM OS is updated |
| Grid Infrastructure patching | Customer initiates GI updates through Console/CLI/API |
| Database software patching | Customer decides when to apply database patches |
| Database lifecycle | Create, scale, backup, terminate databases |
| Data, schemas, encryption keys | All data-level decisions |
| Additional software in VMs | Any non-Oracle software installed on the VMs |
What Oracle Manages:
| Responsibility | Details |
|---|---|
| Physical hardware | DB servers, storage servers — all physical components |
| Hypervisor | The virtualization layer is Oracle-controlled |
| Exadata storage servers | Customer has NO direct access to storage servers |
| Firmware and BIOS | All firmware updates are Oracle's responsibility |
| Base OS and hardware patching | Infrastructure-level patches |
| Security scans and updates | Hardware-level security |
| Infrastructure monitoring | Physical health monitoring |
Oracle staff are NOT authorized to access customer VMs. This is a hard security boundary.
Infrastructure Shapes: X8 vs X8M
Exadata systems use different internal network fabrics depending on the generation:
| Shape Generation | Internal Interconnect | Key Features |
|---|---|---|
| X8M, X9M, X11M (modern) | RoCE (RDMA over Converged Ethernet) | PMEM cache, PMEM log, ~14 microsecond read latency, iDB protocol |
| X8, X7, X6 (legacy) | InfiniBand | Traditional Exadata interconnect |
The "M" suffix indicates the modern RoCE-based interconnect generation. Key facts:
- X8M introduced Persistent Memory (PMEM) for ultra-low-latency reads and redo writes
- X8M introduced RoCE, replacing InfiniBand
- Saying "both X8 and X8M use InfiniBand" is FALSE — X8M uses RoCE
- Saying "X8 uses InfiniBand, X8M uses RoCE" is TRUE — this correctly describes the difference
OCPU/ECPU Scaling
Exadata allows online scaling of OCPUs/ECPUs with zero downtime. This is a key advantage over BaseDB where shape changes require a restart (single-node) or rolling change (RAC).
Minimum OCPU at creation vs. scaling to zero:
| Context | Minimum |
|---|---|
| At VM Cluster creation (X10M and earlier) | 2 OCPUs per VM |
| At VM Cluster creation (X11M) | 8 ECPUs per VM |
| After creation (scaling down) | Zero — you can scale down to zero CPU cores |
When scaled to zero, you pay only infrastructure charges. Scaling is billed per-second with a 1-minute minimum per added OCPU.
CLI tool for scripting OCPU changes: Use the OCI CLI (oci db command family). There is no separate "OCPU CLI" tool. DBAASCLI is for database operations within the VM, not for infrastructure scaling.
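The per-second billing rule with a 1-minute minimum is easy to model. This is a back-of-the-envelope sketch; `price_per_ocpu_hour` is a hypothetical rate, not a real list price.

```python
def billable_seconds(actual_seconds):
    """Per-second billing with a 1-minute minimum, as described above."""
    return max(60, actual_seconds)

def ocpu_charge(added_ocpus, seconds_used, price_per_ocpu_hour):
    """Approximate charge for temporarily added OCPUs.
    price_per_ocpu_hour is a placeholder rate for illustration."""
    secs = billable_seconds(seconds_used)
    return added_ocpus * secs / 3600 * price_per_ocpu_hour
```

So a 10-second burst still bills as 60 seconds, while a 30-minute scale-up of 2 OCPUs bills exactly one OCPU-hour.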
Network Security Groups (NSGs)
NSGs provide security rules that apply to a specific set of VNICs (not entire subnets). For Exadata:
- NSG rules apply to a subset of VNICs — only those VNICs explicitly added to the NSG
- They can be applied to both client and backup network VNICs
- They are NOT limited to the client network only
- They are NOT limited to a single network
- They do NOT automatically apply to all VNICs on a subnet — that's what security lists do
The key distinction: Security lists apply to ALL traffic in/out of a subnet. NSGs apply only to the VNICs that are members of the group. NSGs are more granular.
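The scoping difference can be expressed as a small rule-resolution function: subnet-wide security lists always apply, NSG rules only via membership. All data structures and rule names below are hypothetical.

```python
def rules_for_vnic(vnic, subnet_security_lists, nsgs):
    """Collect the rules that apply to one VNIC: every security-list
    rule for its subnet, plus rules from NSGs it is a member of."""
    applicable = list(subnet_security_lists.get(vnic["subnet"], []))
    for nsg in nsgs:
        if vnic["id"] in nsg["member_vnics"]:
            applicable.extend(nsg["rules"])
    return applicable
```

A VNIC in the client subnet therefore inherits the subnet's security-list rules automatically, but picks up NSG rules only if it was explicitly added to that NSG.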
VM Cluster and Compartment Management
Moving a VM Cluster to a Different Compartment
When you move a VM cluster to a different OCI compartment, the behavior is:
- The VM cluster moves to the new compartment
- All dependent resources move with it — this includes DB Homes and databases within those homes
- Resources that are NOT dependent on the VM cluster stay in the original compartment — this includes the Cloud Exadata Infrastructure resource itself (the physical rack)
- The Exadata infrastructure and the VM cluster can be in different compartments
This is important because the four-level Exadata resource hierarchy means each level can potentially be in a different compartment:
Cloud Exadata Infrastructure (stays in original compartment)
└── Cloud VM Cluster (moves to new compartment)
└── DB Home (moves with VM cluster)
└── Database (moves with DB Home)
Cloud Exadata Infrastructure Resource Capabilities
The infrastructure resource represents the physical Exadata rack. What it allows and does NOT allow:
| Allowed | NOT Allowed |
|---|---|
| Enable compute and storage server expansion (add more servers) | Provide customer direct access to storage servers (Oracle manages these) |
| Schedule automatic infrastructure maintenance windows | Allow customers to execute infrastructure maintenance themselves |
Oracle manages all infrastructure maintenance. Customers schedule the window; Oracle performs the work.
Cloud VM Cluster Resource Capabilities
The VM cluster is where customers have the most control:
| Allowed | NOT Allowed |
|---|---|
| Scale OCPUs up and down to match workload demand | Schedule infrastructure maintenance (that's at the Infrastructure level) |
| Manage networking — client subnet, backup subnet, NSGs | Add compute/storage servers (that's at the Infrastructure level) |
| Manage memory allocation per VM | Upgrade the Exadata system model (hardware is fixed) |
| Manage storage allocation | |
| Manage Grid Infrastructure version and patches | |
| Manage VM OS | |
| Manage DB Homes — create, patch, delete | |
| Manage database backups | |
Subnet Configuration for Exadata
Exadata requires two separate subnets, both of which should be private:
| Subnet | Purpose | Reserved CIDR (must NOT overlap) |
|---|---|---|
| Client Subnet | Application connectivity to databases via SCAN listener | 192.168.16.16/28 |
| Backup Subnet | Backup traffic and Data Guard replication | 192.168.128.0/20 |
Additional requirements:
- A service gateway is required in the VCN for Object Storage backup access
- The SCAN listener port range is 1024-8999 (default 1521) — cannot be changed after provisioning
- Oracle recommends using regional subnets for high availability across availability domains
- Both subnets must be private — there is no "service subnet" type in OCI (the service gateway is a VCN gateway, not a subnet)
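The no-overlap requirement from the table above can be checked up front with the standard `ipaddress` module before attempting to provision:

```python
import ipaddress

# CIDRs reserved by Exadata, per the table above. Client and backup
# subnet CIDRs must not overlap their respective reserved range.
RESERVED = {
    "client": ipaddress.ip_network("192.168.16.16/28"),
    "backup": ipaddress.ip_network("192.168.128.0/20"),
}

def check_subnet(kind, cidr):
    """Raise if the proposed subnet CIDR overlaps the reserved range."""
    net = ipaddress.ip_network(cidr)
    if net.overlaps(RESERVED[kind]):
        raise ValueError(
            f"{cidr} overlaps reserved {kind} range {RESERVED[kind]}")
    return net
```

For example, 192.168.130.0/24 fails as a backup subnet because it falls inside 192.168.128.0/20, while 10.0.1.0/24 is fine for either role.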
Storage Server Monitoring
Exadata provides four management interfaces, but only one is specifically designed for storage server monitoring:
| Tool | Purpose | Scope |
|---|---|---|
| ExaCLI | Monitoring and managing Exadata storage servers | Storage server metrics, alerts, disks, grid disks |
| DBAASCLI | Database-specific operations (backup, patching, recovery) | Database lifecycle within VMs |
| OCI Console | Web GUI for provisioning, patching, monitoring | All Exadata resources |
| OCI CLI | Command-line automation (oci db commands) | All OCI resources |
| REST API | Programmatic access | All OCI resources |
ExaCLI provides access to storage server metrics (METRICCURRENT, METRICHISTORY), alerts (ALERTHISTORY), and storage objects (CELLDISK, GRIDDISK, PHYSICALDISK). It is the dedicated tool for the storage tier.
Performance Hub
Performance Hub is the primary diagnostic interface within OCI Database Management. It is NOT enabled out of the box on Exadata — you must enable the Database Management service first.
| Prerequisite | Required |
|---|---|
| Database edition | Enterprise Edition |
| Management level | Depends on feature (see below) |
| Enablement | Must explicitly enable Database Management service |
| Performance Hub Feature | Available at Basic (Free) | Available at Full (Paid) |
|---|---|---|
| ASH Analytics | Yes | Yes |
| SQL Monitoring | Yes | Yes |
| ADDM | No | Yes (19c+ EE) |
| AWR Report | No | Yes (12.1+ EE) |
| Blocking Sessions | No | Yes (12.2+ EE) |
| Top Activity Lite | No | Yes |
Backup and Key Management
Automated Backup Destinations
For ExaDB in the OCI public cloud, automated backups can go to three destinations — but they are mutually exclusive (you cannot mix them):
| Destination | Notes |
|---|---|
| Autonomous Recovery Service (recommended) | Based on ZDLRA technology; real-time protection (~0 RPO); retention 14-3,650 days |
| OCI Object Storage | L0 (full) + L1 (incremental); retention 7-60 days; cost-effective |
| Direct RMAN (not recommended) | For existing RMAN scripts; must unregister from backup automation first |
Recovery Appliance is for on-premises/Cloud@Customer — not standard public cloud automated backups.
On-Demand Backups
You can take on-demand backups in addition to automatic backups — they work alongside each other:
- Click the Create Backup button in the backup section of the database details page
- You do NOT need to disable automatic backups first
- Can also use `dbaascli database backup --start --dbname <name>` from within the VM
Key Management
TDE (Transparent Data Encryption) is enabled by default on all Exadata databases. Three facts about key management:
| Fact | Detail |
|---|---|
| Choose at creation | When creating a database, you select Oracle-managed keys or customer-managed keys (via OCI Vault) |
| Change after creation | You CAN change from Oracle-managed to customer-managed keys after the database is created (one-way transition) |
| Rotate keys | Rotate the Vault encryption key from the database details page in the OCI Console |
What is NOT true:
- "Cannot use OCI Vault" — FALSE, you absolutely can use Vault
- "Must rotate every 24 hours" — FALSE, there is no such requirement
Domain 3: MySQL HeatWave
Service Architecture and Provisioning
MySQL HeatWave is a fully managed database service on OCI built on MySQL Enterprise Edition (NOT Community Edition). This is a critical distinction — MySQL Community Edition is not a component of the service.
Two Core Components:
| Component | Description |
|---|---|
| DB System | A cloud-based compute instance running MySQL Enterprise Edition. Handles OLTP workloads, connections, security. This is the MySQL server itself. |
| HeatWave Cluster | Optional. One or more HeatWave nodes providing in-memory query acceleration, ML, GenAI, Lakehouse. Data is sharded and distributed among nodes. |
A DB System without a HeatWave cluster is simply a managed MySQL database. The HeatWave cluster is an optional add-on accelerator.
DB System components include:
- Oracle Linux operating system
- Network attached block storage
- Virtual network interface (VNIC)
- A compute instance
- MySQL Enterprise Edition software
Provisioning prerequisites:
Before creating a MySQL DB system, you must have networking in place. The first step is to create a Virtual Cloud Network (VCN) using the VCN Wizard. MySQL HeatWave DB Systems are deployed in a private subnet within a VCN — they are not directly accessible from the internet. Connection requires a compute instance, VPN, or bastion session in the same VCN.
Primary HeatWave Services
HeatWave provides five key services. The three primary ones tested on the exam are:
| Service | What It Does |
|---|---|
| OLAP | In-memory query acceleration using the RAPID engine. Queries are automatically offloaded from InnoDB to the HeatWave cluster. Provides orders-of-magnitude speedup for analytical queries. |
| AutoML | In-database machine learning. Supports classification, regression, time series forecasting, anomaly detection, recommendations, and topic modeling. Runs entirely inside HeatWave — no external ML service needed. |
| Lakehouse | Query data stored in OCI Object Storage without loading it into InnoDB. Supports multiple file formats. Requires a HeatWave cluster. |
Additional services: GenAI (vector store, RAG, natural language to SQL) and Auto Pilot (automated performance tuning — always active, not a separate purchase).
NOT HeatWave services: Blockchain, Bastion (these are separate OCI services).
Automated DBA Tasks
HeatWave MySQL automates several time-consuming DBA tasks as part of the managed service:
| Automated by Oracle | Customer Responsibility |
|---|---|
| Backup and recovery — automatic backup scheduling with full and incremental backups | Diagnosing database errors and bad queries — performance troubleshooting is a customer task |
| Patching the underlying OS and updating the MySQL server — Oracle manages all patches | Defining data access and retention policies — IAM and data governance are customer decisions |
| Instance resource provisioning — compute, storage, and networking setup is automated | |
Backup Types
MySQL HeatWave supports exactly two backup creation types:
| Type | Description |
|---|---|
| Full | Complete backup of the entire database |
| Incremental | Captures only changes since the last full or incremental backup |
Incremental backups are functionally equivalent to full backups for recovery — you do not need to maintain a chain. Any single backup (full or incremental) can restore the DB system to the point in time when that backup was taken.
NOT valid backup types in HeatWave MySQL: differential, mirror, MySQL dump, point-in-time backup (PITR is a restore option, not a backup type).
HeatWave Lakehouse
Lakehouse enables querying data stored in OCI Object Storage (NOT Block Storage) without loading it into InnoDB tables on the DB System.
Supported file formats:
- CSV
- Parquet
- Avro
- JSON
- Aurora/Redshift export files
Data is read from Object Storage, transformed to HeatWave's memory-optimized format, and loaded into HeatWave cluster memory for in-memory query processing. You can join InnoDB tables with Lakehouse external tables in the same SQL query — no ETL pipeline needed.
Requires a HeatWave cluster to be enabled. A standalone DB System without HeatWave cannot query Object Storage data.
HeatWave Cluster Architecture
The HeatWave Cluster uses a distributed, scalable, shared-nothing architecture:
- Each HeatWave node hosts an instance of the RAPID query processing engine
- Data is sharded and distributed among the HeatWave nodes using workload-aware partitioning
- Each CPU core processes its data partition in parallel (massively parallel architecture)
- Data is stored in main memory using a hybrid columnar format — this is in-memory processing, NOT file-based storage
- DML operations (INSERT, UPDATE, DELETE) on loaded tables are automatically propagated to HeatWave nodes — no manual synchronization needed
- The HeatWave Storage Layer automatically persists loaded data to OCI Object Storage, enabling pause/resume without data loss
What is NOT true about HeatWave clusters:
- "Modifications to MySQL data do not reflect within the HeatWave cluster" — FALSE, changes propagate automatically
- "Nodes operate independently allowing seamless scaling" — misleading, nodes work together on distributed queries
- "Nodes use file storage for data retention" — FALSE, HeatWave uses in-memory processing
Stopped MySQL DB System Behavior
When you stop a MySQL DB system through the OCI Console:
| Aspect | What Happens |
|---|---|
| OCPU billing | Stops — you no longer pay for compute |
| Storage billing | Continues — you still pay for block storage |
| Database connections | Cannot connect — the MySQL endpoint is unreachable |
| Backups | NOT deleted — existing backups are retained |
| System state | NOT permanently deleted — the system can be restarted |
| Maintenance | Deferred — maintenance applies when the system is restarted |
Data Import from Object Storage
To import data from an OCI Object Storage bucket into a standalone HeatWave DB system, use the data import feature available in the OCI Console. This allows you to import data directly from Object Storage into MySQL tables.
For loading data into the HeatWave cluster memory (after it exists in InnoDB), use the Auto Parallel Load utility, which automates the sharding and distribution process across HeatWave nodes.
Domain 4: NoSQL Database Service
NoSQL Handle Interface
The NoSQLHandle is the primary interface for interacting with the NoSQL Database Cloud Service through the SDKs. It is a thread-safe, reusable connection object.
What the handle CAN do:
- Get rows from a table — via
get()andquery()methods - Access multiple tables — a single handle can operate on any table in the service; operations specify the table name per request
- Get dynamic information on a table — via
getTableUsage()for usage statistics andgetTable()for table metadata
What the handle CANNOT do:
- Set row retention time (TTL) — TTL is NOT a handle-level operation. It is set via DDL (
CREATE TABLE ... USING TTL) at the table level, or per-row during write operations (put()with TTL parameter). The handle interface has no method for setting retention time.
Read Unit Costs by Consistency Level
The capacity model charges differently based on the consistency level of read operations:
| Consistency Level | Cost per 1 KB | Guarantee |
|---|---|---|
| Eventually Consistent | 1 Read Unit | Data may not reflect the most recent write; reads from any replica |
| Absolute Consistent | 2 Read Units (2x cost) | Data guaranteed to reflect the most recent write; reads from the master replica |
Read sizes are rounded up to the next KB. A 1.5 KB record costs 2 RU (eventual) or 4 RU (absolute).
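The round-up-then-multiply charging rule is worth internalizing for exam arithmetic; a minimal sketch of the calculation described above:

```python
import math

def read_units(size_bytes, absolute=False):
    """Read cost per the table above: size rounds up to the next KB,
    and absolute consistency doubles the charge."""
    kb = max(1, math.ceil(size_bytes / 1024))
    return kb * (2 if absolute else 1)
```

A 1.5 KB (1536-byte) record rounds up to 2 KB, costing 2 RU at eventual consistency and 4 RU at absolute consistency, matching the worked example above.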
IAM Permissions for NoSQL
NoSQL access control uses OCI IAM with three resource types controlled independently:
| Resource Type | Controls |
|---|---|
| nosql-tables | Table DDL: CREATE, ALTER, DROP, MOVE |
| nosql-rows | Data DML: GET, PUT, DELETE, QUERY |
| nosql-indexes | Index DDL: CREATE INDEX, DROP INDEX |
The four IAM verbs and what they grant:
| Verb | nosql-tables | nosql-rows | nosql-indexes |
|---|---|---|---|
| inspect | List tables | N/A | List indexes |
| read | Get table metadata | Read rows | Get index metadata |
| use | Alter table limits | Read/write rows | N/A |
| manage | Full DDL control | Full DML control | Full index control |
Policy examples use specific permissions like NOSQL_TABLE_CREATE, NOSQL_TABLE_DROP, NOSQL_TABLE_ALTER.
Supported Data Types
NDCS supports 16 data types for fixed-schema columns:
| Type | Category | Notes |
|---|---|---|
| STRING | Scalar | Unicode UTF-8 |
| INTEGER | Scalar | 32-bit |
| LONG | Scalar | 64-bit |
| FLOAT | Scalar | 32-bit IEEE |
| DOUBLE | Scalar | 64-bit IEEE |
| NUMBER | Scalar | Arbitrary precision (most expensive) |
| BINARY | Scalar | Variable-length byte array |
| FIXED_BINARY | Scalar | Fixed-size byte array |
| BOOLEAN | Scalar | TRUE or FALSE |
| TIMESTAMP | Scalar | UTC point in time (precision 0-9) |
| UUID | Scalar | Universally unique identifier |
| ENUM | Complex | Symbolic tokens from a defined set |
| ARRAY | Complex | Ordered collection of typed items |
| MAP | Complex | Unordered string-keyed pairs |
| RECORD | Complex | Fixed collection of key-value pairs |
| JSON | Complex | Any valid JSON data |
NOT supported: XML, Triple
Exam note: Both Boolean/Timestamp/Float and Enum/Map/Record are valid sets of supported types. The exam answer for "which set is supported" is Enum, Map, Record — these are the NoSQL-specific complex types that distinguish it from relational databases.
OCID Format
Every OCI resource has an Oracle Cloud Identifier (OCID) with this format:
ocid1.<resource_type>.<realm>.[region].[future_use].<unique_id>
| Component | In the OCID? |
|---|---|
| Resource type | Yes |
| Unique ID | Yes |
| Region | Yes (optional for global resources) |
| Realm | Yes |
| Compartment ID | No — compartment association is metadata about the resource, not embedded in the OCID string |
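A quick parser shows what is and is not recoverable from the OCID string itself. The sample OCIDs in the test are synthetic, not real resources.

```python
def parse_ocid(ocid):
    """Split an OCID into the components listed in the table above.
    Note there is no compartment component: compartment association
    is resource metadata, not part of the OCID string."""
    parts = ocid.split(".")
    if len(parts) < 5 or parts[0] != "ocid1":
        raise ValueError("not a recognizable OCID")
    return {
        "resource_type": parts[1],
        "realm": parts[2],
        "region": parts[3] or None,  # empty for global resources
        "unique_id": parts[-1],
    }
```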
Compartments
(See Domain 1 section above.) Compartments are tenancy-wide across all subscribed regions. This fact is tested in both NoSQL and general OCI contexts.
Connection Requirements
To connect to the NoSQL Database Cloud Service programmatically, you need four authentication components stored in the OCI configuration file (~/.oci/config):
| Component | Purpose |
|---|---|
| Signing key | RSA private key used to sign API requests |
| Fingerprint | MD5 hash of the public key, used to identify the key pair |
| API signing key | The key pair registered with your OCI user account |
| Tenancy OCID | Identifies your OCI tenancy |
Other authentication methods include: Instance Principal (for OCI Compute), Delegation Token, Session Token, and OKE Workload Identity.
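The OCI configuration file uses a plain INI layout, so the components above can be read with the standard `configparser`. Every value in this sample is a placeholder, not a real credential.

```python
import configparser

# Synthetic ~/.oci/config content; all values are placeholders.
SAMPLE = """\
[DEFAULT]
user=ocid1.user.oc1..exampleuser
fingerprint=aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..exampletenancy
region=us-ashburn-1
"""

def load_profile(text, profile="DEFAULT"):
    """Read one profile from OCI-config-style INI text and return the
    authentication components the guide lists, plus the region."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    section = cfg[profile]
    return {k: section[k] for k in
            ("user", "fingerprint", "key_file", "tenancy", "region")}
```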
DDL Rate Limit
The NoSQL Database Cloud Service enforces a rate limit on DDL (Data Definition Language) operations:
- Maximum 4 DDL operations per minute per region
- This limit applies to both non-hosted (standard) and hosted (dedicated) environments
- DDL operations include: CREATE TABLE, ALTER TABLE, DROP TABLE, CREATE INDEX, DROP INDEX
- This limit is NOT customizable via service limits requests
- Index creation is also subject to this rate limit
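Client code that issues many DDL statements (for example, recreating tables and indexes in a deployment script) can respect this limit with a sliding-window check. This is a client-side sketch of the limit described above, not how the service enforces it internally.

```python
from collections import deque

class DdlRateLimiter:
    """Sliding-window limiter: at most max_ops DDL operations per
    window seconds, mirroring the 4-per-minute service limit."""
    def __init__(self, max_ops=4, window=60.0):
        self.max_ops, self.window = max_ops, window
        self.stamps = deque()  # timestamps of recent allowed operations

    def allow(self, now):
        """Return True if a DDL operation may run at time `now`."""
        while self.stamps and now - self.stamps[0] >= self.window:
            self.stamps.popleft()  # drop operations outside the window
        if len(self.stamps) < self.max_ops:
            self.stamps.append(now)
            return True
        return False
```

A real script would sleep until `allow()` returns True rather than failing the fifth statement.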
Domain 5: Database Management Service
Optimizer Statistics vs. SQL Plan Management
Database Management separates these into two distinct feature areas:
Optimizer Statistics:
- Monitor Automatic Statistics Collection jobs — status, completions, failures
- Monitor High-Frequency Statistics collection
- Monitor Manual Statistics collection jobs
- View and implement Optimizer Statistics Advisor findings
SQL Plan Management — SEPARATE feature:
- SPM configuration
- SQL plan baselines
- Automatic SPM Evolve Advisor — this belongs here, NOT under Optimizer Statistics
- Selective plan capturing
- High-frequency SPM Evolve (19c+ on Exadata only)
The SPM Evolve Advisor configuration is a SQL Plan Management capability, not an Optimizer Statistics capability. If asked what Optimizer Statistics does NOT perform, the answer is SPM Evolve Advisor configuration.
OCI Monitoring Methods
OCI Monitoring uses three core concepts for measuring quantitative metrics:
| Concept | Description |
|---|---|
| Metrics | Time-series data collected from OCI resources (e.g., CpuUtilization, StorageUsed, DBTime) |
| Queries | MQL (Monitoring Query Language) expressions used to retrieve and aggregate metric data |
| Alarms | Rules that evaluate metrics against thresholds and trigger notifications when breached |
What are NOT core monitoring methods:
- Notifications — this is a separate OCI service (OCI Notifications) used to deliver alarm notifications via email, Slack, PagerDuty, etc. It is a delivery mechanism, not a monitoring method.
- Data points — these are individual timestamp-value pairs within a metric; they are sub-components of metrics, not a separate monitoring method.
Database Management metrics are emitted in the oracle_oci_database namespace with 90-day retention.
Fleet Summary Dashboard
The fleet summary provides a single-page view of all managed databases. It has six tiles:
| Tile | What It Shows |
|---|---|
| Inventory | Database count categorized by type, deployment, version, or cluster |
| Monitoring Status | Donut chart showing which databases are being successfully monitored |
| Resource Usage | CPU and Storage allocation/utilization with change percentages |
| Alarms | Open alarm count broken down by severity |
| Members | Individual databases with Avg Active Sessions, CPU, Storage, I/O |
| Performance | Tree map visualization of database performance |
Three key capabilities:
- Compare database performance metrics over time (via the Performance tile and time period comparison)
- View current database resource usage (via the Resource Usage tile)
- View the statuses of the databases (via the Monitoring Status tile)
NOT fleet summary capabilities: migrating databases to OCI, executing DDL commands, viewing database log entries.
Supported Database Types
Database Management supports a wide range of Oracle database deployments:
| Supported Type | Connection Mode |
|---|---|
| Base Database Service (BaseDB) | Private Endpoint |
| ExaDB-D (Dedicated Infrastructure) | Private Endpoint or Management Agent |
| ExaDB-XS (Exascale Infrastructure) | Private Endpoint |
| ExaDB-C@C (Cloud@Customer) | Management Agent only |
| Autonomous AI Databases | Private Endpoint |
| External Databases (on-premises) | Management Agent + Connector |
| AWS RDS Oracle | Management Agent |
| Oracle Database@Azure | Private Endpoint |
Minimum version: Oracle Database 11.2.0.4 and later. Both CDB and PDB monitoring are supported (PDB requires Full Management on the parent CDB).
NOT supported: MongoDB, PostgreSQL, non-Oracle databases, Oracle versions prior to 11.2.0.4.
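The 11.2.0.4 version floor can be checked with a component-wise tuple comparison, a common pattern for Oracle-style dotted versions. The parsing here assumes purely numeric dotted strings.

```python
# Minimum supported version from the guide: Oracle Database 11.2.0.4.
MIN_SUPPORTED = (11, 2, 0, 4)

def is_supported_version(version):
    """True if a dotted numeric version string meets the 11.2.0.4 floor.
    Components compare left to right, so '19.0' passes on the first digit."""
    parts = tuple(int(p) for p in version.split("."))
    return parts >= MIN_SUPPORTED
```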
Management Agents and Dynamic Groups
When installing and configuring management agents for Database Management, dynamic groups play a critical role in authentication:
How it works:
- Management Agents are installed on compute instances that have network access to the database
- A dynamic group is created with a matching rule that includes the management agent resource
- The dynamic group enables the agent to authenticate with OCI services using instance principals — this means the agent on the compute instance can call OCI Management Agent service APIs without storing credentials
- Resource principal policies are used separately to allow managed databases to read secrets from OCI Vault
The distinction:
- Instance principals (via dynamic groups) = how the Management Agent authenticates with the OCI Management Agent service
- Resource principals = how the managed database authenticates to read Vault secrets (policy type:
request.principal.type = dbmgmtmanageddatabase)
Preferred Credential Types
Database Management uses three configurable preferred credential types for connecting to managed databases:
| Credential Type | Purpose |
|---|---|
| Administration | Full administrative access for tuning, SQL jobs, parameter changes |
| Basic monitoring | Read-only monitoring access for metrics collection |
| Advanced diagnostics | Access to diagnostic features (AWR, ADDM, Performance Hub) |
"Superuser" is NOT a valid credential type in Database Management. If a question asks which credential type cannot be configured, the answer is Superuser.