Instruction

Creating an Oracle AI Database 26ai RAC One Node Database (Silent DBCA Method)

Assumptions

This instruction assumes:

  • Oracle Linux 8.x or 9.x (x86_64) is installed and operational on all cluster nodes
  • Oracle Grid Infrastructure 26ai is installed and running with full Clusterware (not Oracle Restart) on all nodes
  • The ASM instance is running with +DATA and +RECO diskgroups created and mounted on all nodes
  • Oracle AI Database 26ai Enterprise Edition software is already installed (software-only) in a separate ORACLE_HOME on all candidate nodes
  • The oracle user exists and belongs to the required OS groups (oinstall, dba, oper, backupdba, dgdba, kmdba, racdba)
  • The SCAN listener is running and accessible from all nodes
  • The cluster has a minimum of 2 nodes (required for RAC One Node failover capability)
  • Each candidate node has at least 20 GB of available physical memory for SGA (16 GB) + PGA (4 GB)
  • The +DATA diskgroup has at least 20 GB of free space
  • The +RECO diskgroup has at least 50 GB of free space for the Fast Recovery Area
  • /dev/shm is sized to at least 16 GB on all candidate nodes
  • SSH user equivalence is configured between all candidate nodes for the oracle user
  • No X11 display is needed or available
  • The reader is logged in as the oracle user on one of the candidate nodes

Prerequisites

Automatic setup

No automatic setup is available for this task. Grid Infrastructure and database software installation are covered in separate instructions. This instruction begins after both are installed.

Manual setup

All verification steps are performed as the oracle user unless noted otherwise. The srvctl and asmcmd commands in this section require the Grid Infrastructure bin directory in PATH. If srvctl is not found, prefix commands with the full path (e.g., $GRID_HOME/bin/srvctl) or export PATH temporarily: export PATH=/u01/app/26.0.0/grid/bin:$PATH.

  1. Verify Clusterware is running on all nodes

As root on each node (use the full path if crsctl is not in root's PATH):

/u01/app/26.0.0/grid/bin/crsctl check crs

Expected output:

CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

All three services must show online. If any service is offline, start Clusterware with /u01/app/26.0.0/grid/bin/crsctl start crs as root before proceeding. Replace the path with the actual Grid home location on your system.
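With many nodes, the three-line check can be scripted. Below is a minimal sketch (the helper name crs_all_online is ours, not an Oracle tool) that succeeds only when all three CRS-45xx services report online:

```shell
# Counts the three expected "online" lines from `crsctl check crs`.
# Exits 0 only when CRS, CSS, and EVM all report online.
crs_all_online() {
  awk '/CRS-4537.*online/ { c++ }
       /CRS-4529.*online/ { c++ }
       /CRS-4533.*online/ { c++ }
       END { exit (c == 3) ? 0 : 1 }'
}

# Usage (per node, as root):
#   /u01/app/26.0.0/grid/bin/crsctl check crs | crs_all_online && echo "CRS OK"
```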

  2. Verify ASM instance status
srvctl status asm

Expected output:

ASM is running on node1,node2

ASM must be running on all candidate nodes.

  3. Verify diskgroup status and free space
asmcmd lsdg

Expected output:

State    Type    Rebal  Sector  Logical_Sector  Block  AU  ...  Total_MB  Free_MB  ...  Name
MOUNTED  EXTERN  N      512     512             4096   4M  ...  204800    184320   ...  DATA/
MOUNTED  EXTERN  N      512     512             4096   4M  ...  102400    97280    ...  RECO/

Both DATA and RECO must show MOUNTED state. Verify Free_MB meets the minimum requirements: 20480 MB (20 GB) for DATA and 51200 MB (50 GB) for RECO.
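Comparing Free_MB against the thresholds by eye is error-prone. A rough sketch that locates the Free_MB column from the lsdg header line and compares it to a minimum (the helper name check_dg_free is ours; it assumes the header and data rows have matching field counts, as asmcmd lsdg normally prints):

```shell
# $1 = diskgroup name, $2 = minimum Free_MB; stdin = `asmcmd lsdg` output.
check_dg_free() {
  awk -v dg="$1" -v min="$2" '
    NR == 1 { for (i = 1; i <= NF; i++) if ($i == "Free_MB") col = i; next }
    $NF == dg "/" {
      seen = 1
      if ($col + 0 >= min) print dg ": OK (" $col " MB free)"
      else { print dg ": LOW (" $col " MB free, minimum " min ")"; low = 1 }
    }
    END { exit (seen && !low) ? 0 : 1 }'
}

# Usage:
#   asmcmd lsdg | check_dg_free DATA 20480
#   asmcmd lsdg | check_dg_free RECO 51200
```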

  4. Verify SCAN listener status
srvctl status scan_listener

Expected output:

SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node1

At least one SCAN listener must be running. A typical cluster has three SCAN listeners (LISTENER_SCAN1 through LISTENER_SCAN3), each running on a different node. Your output may show one or three depending on the cluster SCAN configuration.

  5. Verify SSH equivalence between nodes

From the node where you will run DBCA:

ssh node2 hostname

node2 -- the other candidate node. Replace with each candidate node hostname.

Expected output:

node2

The command must complete without a password prompt. If it prompts for a password, SSH equivalence is not configured. Configure it using ssh-keygen and ssh-copy-id before proceeding.
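To test every candidate node in one pass, here is a sketch using BatchMode, which makes ssh fail instead of prompting, so a missing key surfaces as FAIL rather than a hang (the helper name check_ssh_equiv is ours):

```shell
# Checks passwordless SSH to each hostname given as an argument.
# BatchMode=yes disables password prompts; a node without key-based
# access reports FAIL instead of blocking the script.
check_ssh_equiv() {
  local rc=0 node
  for node in "$@"; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$node" true 2>/dev/null; then
      echo "$node: OK"
    else
      echo "$node: FAIL (no passwordless SSH)"
      rc=1
    fi
  done
  return $rc
}

# Usage: check_ssh_equiv node1 node2
```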

  6. Verify database software on all candidate nodes
ls -l $ORACLE_HOME/bin/dbca

Expected output:

-rwxr-x--- 1 oracle oinstall ... /u01/app/oracle/product/26/db_1/bin/dbca

Repeat on each candidate node via SSH:

ssh node2 "ls -l /u01/app/oracle/product/26/db_1/bin/dbca"

  7. Verify the DBCA template exists
ls -l $ORACLE_HOME/assistants/dbca/templates/General_Purpose.dbc

Expected output:

-rw-r--r-- 1 oracle oinstall ... /u01/app/oracle/product/26/db_1/assistants/dbca/templates/General_Purpose.dbc

  8. Verify /dev/shm size on all candidate nodes

The SGA requires /dev/shm to be at least as large as sga_max_size (16 GB). Check every candidate node, not just the local node.

for node in node1 node2; do
  echo "=== $node ==="
  ssh $node "df -h /dev/shm"
done

node1, node2 -- replace with the actual hostnames of all candidate nodes.

Expected output:

=== node1 ===
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            32G   0   32G   0% /dev/shm
=== node2 ===
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            32G   0   32G   0% /dev/shm

The Size column must be at least 16 GB on every node. If any node shows less than 16 GB, resize as root on that node:

mount -o remount,size=17G /dev/shm

To persist across reboots, update /etc/fstab on the affected node to include size=17G in the tmpfs mount options for /dev/shm.
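For reference, a persisted /etc/fstab entry might look like the line below. This is a sketch: merge size=17G into whatever options your existing tmpfs line already carries rather than replacing them wholesale.

```
tmpfs   /dev/shm   tmpfs   defaults,size=17G   0   0
```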

Additional setup

  1. Set environment variables

As the oracle user on the node where you will run DBCA, set the environment for the database home. The PATH must list the database home before the Grid home so that DBCA and other database utilities resolve to the correct binaries.

{ORACLE_BASE} -- the Oracle base directory (e.g., /u01/app/oracle).

{DB_HOME} -- the Oracle AI Database 26ai home directory (e.g., /u01/app/oracle/product/26/db_1).

{GRID_HOME} -- the Grid Infrastructure home directory (e.g., /u01/app/26.0.0/grid).

{PROD_DC1_SVC} -- the RAC One Node failover service name. Application JDBC URLs and tnsnames.ora entries reference this service for transparent connectivity.

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/26/db_1
export ORACLE_SID=PROD_DC1
export PATH=$ORACLE_HOME/bin:$PATH

The ORACLE_SID is set to PROD_DC1 (the SID prefix) before running DBCA. After database creation, DBCA names the active instance PROD_DC1_1 (prefix plus _1). Step 3 updates ORACLE_SID to match the running instance.

Verify the environment:

echo "ORACLE_BASE=$ORACLE_BASE"
echo "ORACLE_HOME=$ORACLE_HOME"
echo "ORACLE_SID=$ORACLE_SID"
which dbca

Expected output:

ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/26/db_1
ORACLE_SID=PROD_DC1
/u01/app/oracle/product/26/db_1/bin/dbca

Confirm that which dbca returns the path under the database home, not the Grid home. If it returns a Grid home path, adjust PATH to place $ORACLE_HOME/bin first.

Database Creation

Step 1: Run DBCA in silent mode

Warning: The passwords specified in this command will be visible in ps output and shell history while the command runs. To suppress history recording, run set +o history before executing the command and set -o history after.

As the oracle user, run the following command. Replace the password placeholders with your chosen passwords.

{SYS_PASSWORD} -- password for the SYS account (minimum 8 characters, must include uppercase, lowercase, and digit).

{SYSTEM_PASSWORD} -- password for the SYSTEM account (same requirements).

{PDB_ADMIN_PASSWORD} -- password for the PDB admin account (same requirements).
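The password rules above can be pre-checked before a password ever reaches the command line. A rough sketch that tests only the minimum policy stated here, not the full Oracle profile verification function (the helper name pw_ok is ours):

```shell
# $1 = candidate password; returns 0 only if it meets the stated minimums:
# length >= 8 with at least one uppercase, one lowercase, and one digit.
pw_ok() {
  case "$1" in *[A-Z]*) ;; *) return 1 ;; esac   # at least one uppercase
  case "$1" in *[a-z]*) ;; *) return 1 ;; esac   # at least one lowercase
  case "$1" in *[0-9]*) ;; *) return 1 ;; esac   # at least one digit
  [ "${#1}" -ge 8 ]                              # minimum length 8
}

# Usage: pw_ok 'MyPassw0rd' && echo "meets the minimum policy"
```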

dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName PROD \
  -sid PROD_DC1 \
  -databaseConfigType RACONENODE \
  -RACOneNodeServiceName PROD_DC1_SVC \
  -nodelist node1,node2 \
  -createAsContainerDatabase true \
  -numberOfPDBs 1 \
  -pdbName PROD_PDB1 \
  -pdbAdminPassword {PDB_ADMIN_PASSWORD} \
  -sysPassword {SYS_PASSWORD} \
  -systemPassword {SYSTEM_PASSWORD} \
  -storageType ASM \
  -datafileDestination +DATA \
  -recoveryAreaDestination +RECO \
  -recoveryAreaSize 51200 \
  -characterSet AL32UTF8 \
  -memoryMgmtType AUTO_SGA \
  -initParams "db_unique_name=PROD_DC1,db_name=PROD,sga_target=16G,sga_max_size=16G,pga_aggregate_target=4G,memory_target=0" \
  -databaseType MULTIPURPOSE \
  -redoLogFileSize 1024 \
  -emConfiguration NONE \
  -useOMF true

Replace node1,node2 with the actual hostnames of all candidate nodes in the cluster. Every node listed becomes a candidate server for RAC One Node failover. The first node in the list is where DBCA starts the initial instance.

Parameter notes:

  • -gdbName PROD -- sets DB_NAME to PROD.
  • -sid PROD_DC1 -- sets the SID prefix. DBCA uses this as the instance name prefix for RAC One Node. The active instance will be named PROD_DC1_1.
  • -databaseConfigType RACONENODE -- creates a RAC One Node database instead of a single-instance or full RAC database.
  • -RACOneNodeServiceName PROD_DC1_SVC -- defines the failover service that enables online relocation between candidate nodes.
  • -nodelist node1,node2 -- specifies all candidate nodes for failover. Omitting this parameter silently creates a single-instance database with no error or warning.
  • -createAsContainerDatabase true -- creates a CDB. This is mandatory; non-CDB architecture is desupported.
  • -memoryMgmtType AUTO_SGA -- enables Automatic Shared Memory Management (ASMM). Oracle manages SGA component sizes within the sga_target limit. Automatic Memory Management (AUTO, which uses memory_target) is not supported in RAC environments when HugePages are configured, which is the standard Oracle Linux configuration for production databases.
  • -initParams -- sets initialization parameters explicitly. The memory_target=0 entry confirms AMM is disabled. Using -initParams for SGA and PGA sizing provides reliable control over memory allocation.
  • -recoveryAreaSize 51200 -- sets the Fast Recovery Area to 50 GB (51200 MB). Adjust based on your backup retention requirements.
  • -redoLogFileSize 1024 -- sets each online redo log file to 1024 MB (1 GB).
  • -useOMF true -- enables Oracle Managed Files for automatic file naming and placement.

Database creation takes 15-45 minutes depending on hardware and storage performance.

Expected output:

Prepare for db operation
10% complete
Registering database with Oracle Clusterware
20% complete
Copying database files
40% complete
Creating and starting Oracle instance
50% complete
60% complete
70% complete
Completing Database Creation
80% complete
90% complete
100% complete

Look at the log file "$ORACLE_BASE/cfgtoollogs/dbca/PROD_DC1/PROD_DC1.log" for further details.

Step 2: Check the DBCA log for errors

tail -30 $ORACLE_BASE/cfgtoollogs/dbca/PROD_DC1/PROD_DC1.log

Expected output:

The final lines should show successful completion with no ORA- or PRCR- errors. Look for a line similar to:

Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/PROD_DC1

If the log contains errors, consult the Troubleshooting section before proceeding.
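The scan can also be done mechanically. A sketch (the helper name scan_dbca_log is ours; some ORA- codes in DBCA logs can be benign, so treat hits as lines to review rather than automatic failures):

```shell
# $1 = DBCA log file; lists any ORA-/PRCR-/DBT- lines with their line
# numbers and returns 1 if any are found.
scan_dbca_log() {
  if grep -nE 'ORA-[0-9]+|PRCR-[0-9]+|DBT-[0-9]+' "$1"; then
    return 1
  fi
  echo "no ORA-/PRCR-/DBT- codes found in $1"
}

# Usage: scan_dbca_log "$ORACLE_BASE/cfgtoollogs/dbca/PROD_DC1/PROD_DC1.log"
```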

Step 3: Update ORACLE_SID to the active instance name

After DBCA completes, the active instance is named using the prefix plus _1. Update the environment to reflect the running instance name:

export ORACLE_SID=PROD_DC1_1

Verify the instance is accessible:

sqlplus / as sysdba <<'EOF'
SELECT instance_name, status FROM v$instance;
EXIT;
EOF

Expected output:

INSTANCE_NAME    STATUS
---------------- ------
PROD_DC1_1       OPEN

Step 4: Save PDB state

By default, PDBs remain in MOUNTED state after a CDB restart. Before saving the PDB state, verify that the PDB is open:

sqlplus / as sysdba <<'EOF'
SHOW PDBS;
EXIT;
EOF

If PROD_PDB1 shows MOUNTED instead of READ WRITE, open it first:

sqlplus / as sysdba <<'EOF'
ALTER PLUGGABLE DATABASE PROD_PDB1 OPEN;
EXIT;
EOF

Once PROD_PDB1 is in READ WRITE mode, save the state so it opens automatically after every CDB restart:

sqlplus / as sysdba <<'EOF'
ALTER PLUGGABLE DATABASE PROD_PDB1 SAVE STATE;
EXIT;
EOF

Expected output:

Pluggable database altered.

Verify the state was saved:

sqlplus / as sysdba <<'EOF'
SELECT con_name, instance_name, state FROM dba_pdb_saved_states;
EXIT;
EOF

Expected output:

CON_NAME       INSTANCE_NAME  STATE
-------------- -------------- -----
PROD_PDB1      PROD_DC1_1     OPEN

Validation

Quick check

The -d flag takes DB_UNIQUE_NAME (PROD_DC1), not DB_NAME (PROD). All srvctl commands that reference a database use DB_UNIQUE_NAME.

srvctl status database -d PROD_DC1

Expected output:

Instance PROD_DC1_1 is running on node node1
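For monitoring scripts, the active node can be extracted from that one-line status. A sketch that assumes the exact "is running on node" phrasing shown above (the helper name active_node is ours):

```shell
# stdin: `srvctl status database` output; prints the node name of the
# running instance and exits 1 if no instance is reported running.
active_node() {
  awk '/is running on node/ { print $NF; found = 1 }
       END { exit (found ? 0 : 1) }'
}

# Usage: srvctl status database -d PROD_DC1 | active_node
```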

Full validation

  1. Verify Clusterware registration and RAC One Node configuration
srvctl config database -d PROD_DC1

Expected output (key fields):

Database unique name: PROD_DC1
Database name: PROD
Oracle home: /u01/app/oracle/product/26/db_1
Oracle user: oracle
Spfile: +DATA/PROD_DC1/PARAMETERFILE/spfilePROD_DC1.ora
...
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: PROD_DC1
Candidate servers: node1,node2
...

Verify that Type is RACOneNode, Database name is PROD, Database unique name is PROD_DC1, and Candidate servers lists all expected nodes.

  2. Verify database version and CDB status
sqlplus / as sysdba <<'EOF'
SELECT banner_full FROM v$version;
SELECT name, db_unique_name, open_mode, cdb FROM v$database;
EXIT;
EOF

Expected output:

BANNER_FULL
--------------------------------------------------------------------------------
Oracle AI Database 26ai Enterprise Edition Release 26.1.0.0.0 - Production
Version 26.1.0.0.0

NAME   DB_UNIQUE_NAME  OPEN_MODE   CDB
------ --------------- ----------  ---
PROD   PROD_DC1        READ WRITE  YES

Confirm NAME is PROD, DB_UNIQUE_NAME is PROD_DC1, and CDB is YES. The exact release number in the banner depends on the installed patch level.

  3. Verify PDB status
sqlplus / as sysdba <<'EOF'
SHOW PDBS;
EXIT;
EOF

Expected output:

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PROD_PDB1                      READ WRITE NO

PROD_PDB1 must be in READ WRITE mode. If it shows MOUNTED, open it manually with ALTER PLUGGABLE DATABASE PROD_PDB1 OPEN; and then run ALTER PLUGGABLE DATABASE PROD_PDB1 SAVE STATE;.

  4. Verify memory parameters
sqlplus / as sysdba <<'EOF'
SHOW PARAMETER sga_target;
SHOW PARAMETER sga_max_size;
SHOW PARAMETER pga_aggregate_target;
SHOW PARAMETER memory_target;
EXIT;
EOF

Expected output:

NAME            TYPE        VALUE
--------------- ----------- -----
sga_target      big integer 16G

NAME            TYPE        VALUE
--------------- ----------- -----
sga_max_size    big integer 16G

NAME                    TYPE        VALUE
----------------------- ----------- -----
pga_aggregate_target    big integer 4G

NAME            TYPE        VALUE
--------------- ----------- -----
memory_target   big integer 0

  5. Verify datafile locations and recovery area
sqlplus / as sysdba <<'EOF'
SELECT name FROM v$datafile;
SELECT name, space_limit/1024/1024 AS limit_mb FROM v$recovery_file_dest;
EXIT;
EOF

Expected output:

NAME
----------------------------------------------------------------------
+DATA/PROD_DC1/DATAFILE/system.xxx.xxxxxxxxx
+DATA/PROD_DC1/DATAFILE/sysaux.xxx.xxxxxxxxx
+DATA/PROD_DC1/DATAFILE/undotbs1.xxx.xxxxxxxxx
+DATA/PROD_DC1/DATAFILE/users.xxx.xxxxxxxxx
+DATA/PROD_DC1/xxxxxxxx/DATAFILE/system.xxx.xxxxxxxxx
+DATA/PROD_DC1/xxxxxxxx/DATAFILE/sysaux.xxx.xxxxxxxxx
+DATA/PROD_DC1/xxxxxxxx/DATAFILE/undotbs1.xxx.xxxxxxxxx

NAME                                       LIMIT_MB
------------------------------------------ --------
+RECO                                         51200

All datafiles must reside on +DATA. The recovery area must be +RECO. The xxxxxxxx component in the PDB datafile paths is the Oracle-assigned PDB GUID, a hexadecimal string generated at creation time. The value will differ on every installation.
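Here is a sketch for checking the datafile list mechanically. It assumes one file name per line with no leading spaces (e.g., after trimming SQL*Plus output); the diskgroup-prefix test and the helper name check_on_dg are ours:

```shell
# $1 = expected diskgroup (e.g. +DATA); stdin = file names, one per line.
# Prints any file not under that diskgroup and exits 1 if one is found.
check_on_dg() {
  awk -v dg="$1" '
    NF && index($0, dg "/") != 1 { print "unexpected location: " $0; bad = 1 }
    END { exit (bad ? 1 : 0) }'
}

# Usage: paste the v$datafile names into stdin, e.g.
#   check_on_dg +DATA < datafile_names.txt
```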

  6. Verify the RAC One Node failover service
srvctl status service -d PROD_DC1

Expected output:

Service PROD_DC1_SVC is running on node(s): node1

The failover service PROD_DC1_SVC must be running on the active node.

  7. Verify candidate node readiness

Confirm the database resource is registered on all candidate nodes (even though only one instance runs at a time):

srvctl status database -d PROD_DC1 -verbose

Expected output:

Instance PROD_DC1_1 is running on node node1
Online relocation: INACTIVE

Also verify the remote node can see the database resource. Note that $ORACLE_HOME inside the double quotes expands on the local node before ssh runs; this works because the database home path is identical on all candidate nodes:

ssh node2 "$ORACLE_HOME/bin/srvctl status database -d PROD_DC1"

Expected output:

Instance PROD_DC1_1 is running on node node1

The remote node must report the same instance status. If the command fails, verify that Oracle home and Grid home are correctly installed on the remote node.

Troubleshooting

Problem: ORA-27102: out of memory
Cause: /dev/shm is smaller than sga_max_size, or insufficient physical memory.
Solution: Resize /dev/shm as root: mount -o remount,size=17G /dev/shm. Verify with df -h /dev/shm on all candidate nodes.

Problem: DBCA creates a single-instance database instead of RAC One Node
Cause: The -nodelist parameter was omitted, or -databaseConfigType RACONENODE was not specified.
Solution: Caution: the fix permanently drops the database and all datafiles. Drop the database with dbca -silent -deleteDatabase -sourceDB PROD_DC1 -sysDBAPassword {SYS_PASSWORD} (replace {SYS_PASSWORD} with the actual SYS password) and re-run the DBCA command with both -databaseConfigType RACONENODE and -nodelist node1,node2. Verify with srvctl config database -d PROD_DC1 and check for Type: RACOneNode.

Problem: DIM-00019: create service error
Cause: The SCAN listener is not running.
Solution: Start the SCAN listener: srvctl start scan_listener. Verify with srvctl status scan_listener.

Problem: ORA-15001: diskgroup "DATA" does not exist
Cause: Wrong diskgroup name, or ASM is not running on the local node.
Solution: Verify ASM is running: srvctl status asm. Verify diskgroup names: asmcmd lsdg. Use the + prefix in DBCA parameters (e.g., +DATA, not DATA).

Problem: ORA-15025: could not open disk
Cause: ASM disk permissions do not allow the oracle user (or grid user) to access the raw devices.
Solution: Verify disk ownership and permissions. For AFD: asmcmd afd_lsdsk. For standard ASM: check /dev/oracleasm/disks/ or raw device permissions.

Problem: PRCR-1079: Failed to start resource
Cause: Clusterware registration failure, often due to incorrect Oracle home permissions or missing binaries on a candidate node.
Solution: Verify the database home exists on all candidate nodes: ssh node2 ls $ORACLE_HOME/bin/oracle. Check the CRS log: /u01/app/26.0.0/grid/log/{hostname}/crsd/crsd.log (replace /u01/app/26.0.0/grid with the actual Grid home location).

Problem: [DBT-50000] Unable to check mount point, or DBCA hangs during creation
Cause: A candidate node is unreachable via SSH.
Solution: Verify SSH equivalence: ssh node2 hostname. Reconfigure SSH keys if needed.

Problem: sga_target or pga_aggregate_target shows unexpected values after creation
Cause: The DBCA template overrode the -initParams values.
Solution: Connect as SYSDBA and set manually: ALTER SYSTEM SET sga_target=16G SCOPE=SPFILE; and ALTER SYSTEM SET pga_aggregate_target=4G SCOPE=SPFILE;. Restart the database with srvctl stop database -d PROD_DC1 and srvctl start database -d PROD_DC1.

Problem: PDB is in MOUNTED state after CDB restart
Cause: The PDB state was not saved.
Solution: Run ALTER PLUGGABLE DATABASE PROD_PDB1 OPEN; then ALTER PLUGGABLE DATABASE PROD_PDB1 SAVE STATE; to persist across restarts.

Problem: Instance not starting on the expected node
Cause: Clusterware selected a different candidate node.
Solution: Check candidate servers: srvctl config database -d PROD_DC1. Relocate if needed: srvctl relocate database -d PROD_DC1 -n node1.

Problem: ORA-01078: failure in processing system parameters
Cause: The SPFile is missing or corrupt in ASM.
Solution: Check the SPFile location: srvctl config database -d PROD_DC1 (look for the Spfile line). Verify the file exists: asmcmd ls +DATA/PROD_DC1/PARAMETERFILE/. The actual filename uses an OMF-generated name with a numeric suffix.

References