Oracle 19c: Convert RAC Database to RAC One Node (4-Node Cluster)
Assumptions
This guide assumes:
- Oracle Linux 7.x or 8.x (x86_64) is installed on all 4 cluster nodes
- Oracle Grid Infrastructure 19c is installed and running on all 4 nodes with full Clusterware (not Oracle Restart)
- Oracle Database 19c (19.3 or later with any Release Update) is installed in a shared Oracle home or identical local Oracle homes at the same path on all 4 nodes
- The database is an administrator-managed RAC database with 4 running instances (one per node), named RACDB1, RACDB2, RACDB3, RACDB4
- The database is NOT part of an Oracle Data Guard configuration (standby databases require additional steps not covered here)
- Storage uses Oracle ASM with shared disk groups for data and Fast Recovery Area
- The SPFile is stored in Oracle ASM (standard RAC configuration)
- The password file is stored in Oracle ASM (standard RAC configuration) -- if filesystem-based, see the two-password-file requirement in the Prerequisites section
- The reader has OS access as the oracle user on all 4 cluster nodes
- The reader has SYSDBA privilege on the database
- Oracle RAC One Node is properly licensed (it is a separately licensed Enterprise Edition option, distinct from full RAC)
- The SCAN listener and node VIP listeners are configured and operational via Grid Infrastructure
- No ongoing or failed online database relocation exists
- The database is either a non-CDB or a CDB with PDBs (the conversion is a metadata-only operation in Clusterware and does not affect datafiles, PDBs, or tablespace content)
- A maintenance window has been scheduled -- stopping 3 of 4 instances will disconnect all sessions on those instances
Throughout this guide, the following placeholder values are used:
| Placeholder | Description | Example Value |
|---|---|---|
| RACDB | Database unique name (db_unique_name) | RACDB |
| RACDB1 through RACDB4 | Original RAC instance names | RACDB1, RACDB2, RACDB3, RACDB4 |
| node1 through node4 | Cluster node hostnames | node1, node2, node3, node4 |
| RACDB_1 | RAC One Node instance name after conversion | RACDB_1 |
| racdb_svc | Application service name | racdb_svc |
| scan-name | SCAN listener hostname | scan-cluster.example.com |
Prerequisites
Automatic setup
No separate automated setup is available for this task; the srvctl convert database command itself handles all Clusterware registration changes. Specifically, Oracle handles:
- Clusterware re-registration: The command re-registers the database as type RACONENODE in the Oracle Cluster Registry (OCR). No manual OCR editing is required.
- Instance naming: After conversion, Oracle names the instance using the pattern prefix_1 (where prefix is either the first 12 characters of db_unique_name or the value specified via the -instance parameter). During online relocation, a second instance named prefix_2 is started on the target node; after relocation completes, the database keeps running under that alternate name, so the instance name alternates between prefix_1 and prefix_2 on successive relocations.
- Failover: Oracle Clusterware automatically restarts the instance on a candidate node if the active node fails. Failover retains the current instance name (e.g., prefix_1), unlike online relocation, which alternates between prefix_1 and prefix_2.
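The instance-naming rule can be sketched as a small shell helper. This is illustrative only; one_node_instance_name is not an Oracle utility.

```shell
#!/usr/bin/env bash
# Illustrative helper (not an Oracle tool): derive the RAC One Node
# instance name from a db_unique_name, per the documented rule:
# prefix = first 12 characters of db_unique_name (unless overridden
# with -instance), suffix = _1.
one_node_instance_name() {
  prefix=$(printf '%s' "$1" | cut -c1-12)   # first 12 characters
  printf '%s_1\n' "$prefix"
}

one_node_instance_name "RACDB"                # -> RACDB_1
one_node_instance_name "VERYLONGUNIQUENAME"   # -> VERYLONGUNIQ_1 (truncated)
```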
Manual setup
All commands are run as the oracle user unless noted otherwise.
- Verify current RAC configuration
srvctl config database -db RACDB
Expected output:
Database unique name: RACDB
Database name: RACDB
Oracle home: /u01/app/oracle/product/19.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/RACDB/PARAMETERFILE/spfile.ora
Password file: +DATA/RACDB/PASSWORD/pwdracdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA,RECO
Mount point paths:
Services: racdb_svc
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: oper
Database instances: RACDB1,RACDB2,RACDB3,RACDB4
Configured nodes: node1,node2,node3,node4
...
Verify that Type is RAC and Database instances lists all 4 instances.
- Verify all instances are running
srvctl status database -db RACDB
Expected output:
Instance RACDB1 is running on node node1
Instance RACDB2 is running on node node2
Instance RACDB3 is running on node node3
Instance RACDB4 is running on node node4
All 4 instances must be running before you begin.
- Verify redo threads exist
The database must either use Oracle Managed Files (OMF) or have at least two redo threads. A 4-instance RAC database has 4 redo threads, satisfying this requirement.
sqlplus -s / as sysdba <<'EOF'
SELECT thread#, COUNT(*) AS log_groups FROM v$log GROUP BY thread# ORDER BY thread#;
EXIT;
EOF
Expected output:
THREAD# LOG_GROUPS
---------- ----------
1 3
2 3
3 3
4 3
Four threads must be present (one per original instance).
- Verify SPFile is in shared storage (ASM)
srvctl config database -db RACDB | grep -i spfile
Expected output:
Spfile: +DATA/RACDB/PARAMETERFILE/spfile.ora
The SPFile path must begin with + (indicating ASM). If the SPFile is on a local filesystem, move it to ASM before proceeding.
- Verify password file location
For RAC One Node to relocate or fail over to other nodes, the password file must be accessible from all nodes.
srvctl config database -db RACDB | grep -i "Password file"
Expected output:
Password file: +DATA/RACDB/PASSWORD/pwdracdb.ora
If the password file path begins with + (ASM), no further action is needed.
If the password file is on a local filesystem (e.g., $ORACLE_HOME/dbs/orapwRACDB1), you must create two copies on every candidate node, named for the two RAC One Node instance name variants:
# On each candidate node, copy the password file to both names:
cp $ORACLE_HOME/dbs/orapwRACDB1 $ORACLE_HOME/dbs/orapwRACDB_1
cp $ORACLE_HOME/dbs/orapwRACDB1 $ORACLE_HOME/dbs/orapwRACDB_2
Both orapwRACDB_1 and orapwRACDB_2 are required because RAC One Node alternates between RACDB_1 and RACDB_2 during online relocation. You must recopy both files to all candidate nodes every time you update the password file. Using ASM-stored password files avoids this requirement entirely.
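The distribution of both password file variants can be scripted. This dry-run sketch prints the copy commands rather than executing them; the node names and ORACLE_HOME path are this guide's example values.

```shell
#!/usr/bin/env bash
# Dry-run sketch: generate the copy commands needed to distribute both
# filesystem-based password file variants to every candidate node.
# Commands are printed, not executed; on a real cluster, run them
# directly (or pipe the output to bash).
ORACLE_HOME=${ORACLE_HOME:-/u01/app/oracle/product/19.0.0/dbhome_1}
CANDIDATE_NODES="node2 node3 node4"   # every node except the current one

copy_cmds=$(
  for node in $CANDIDATE_NODES; do
    for pwfile in orapwRACDB_1 orapwRACDB_2; do
      echo "scp $ORACLE_HOME/dbs/$pwfile $node:$ORACLE_HOME/dbs/"
    done
  done
)
echo "$copy_cmds"
```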
- Check for services with PRECONNECT TAF policy
srvctl config service -db RACDB
Review each service listed in the output. If any service shows TAF policy: PRECONNECT, change it to BASIC or NONE before conversion:
srvctl modify service -db RACDB -service racdb_svc -tafpolicy BASIC
- Verify at least one dynamic database service exists
Oracle RAC One Node databases require at least one dynamic database service (in addition to the default database service) to be configured with the remaining instance as preferred. This is an Oracle requirement for RAC One Node -- the database must have a registered service for connection management during online relocation and failover.
srvctl config service -db RACDB
If no services are listed, create one:
srvctl add service -db RACDB -service racdb_svc -preferred RACDB1
srvctl start service -db RACDB -service racdb_svc
Expected output (after start):
$
Verify the service is running:
srvctl status service -db RACDB -service racdb_svc
Expected output:
Service racdb_svc is running on instance(s) RACDB1
- Take a full RMAN backup
Warning: The conversion itself is a metadata-only operation, but removing instances from the CRS configuration is irreversible without manual re-addition. Back up the database before proceeding.
rman target / <<'EOF'
BACKUP DATABASE PLUS ARCHIVELOG;
EXIT;
EOF
Verify the backup completed successfully before continuing.
Additional setup
- Notify application teams and schedule the maintenance window
Stopping 3 of 4 instances will immediately disconnect all sessions connected to those instances. Sessions connected via the SCAN listener and a service name may reconnect to the remaining instance (if TAF or Application Continuity is configured), but sessions using direct SID-based connections (e.g., RACDB2, RACDB3, RACDB4) will fail and cannot reconnect without connection string changes.
Plan for:
- Application connection draining before stopping instances (covered in Step 1 of the Conversion procedure)
- Updating any connection strings that reference specific SIDs to use the service name instead
Conversion
Step 1: Drain connections from instances to be removed
Warning: Skipping connection draining will immediately terminate all active sessions on the instances being stopped. For production environments, always drain connections first.
Before stopping each instance, drain existing connections to allow active transactions to complete. The -drain_timeout parameter specifies how many seconds to wait for sessions to finish before the instance stops. Set the timeout based on your longest expected transaction.
Stop instance RACDB4 with connection draining (120 seconds):
srvctl stop instance -db RACDB -instance RACDB4 -drain_timeout 120 -stopoption "TRANSACTIONAL LOCAL"
Expected output:
$
Verify the instance is stopped:
srvctl status instance -db RACDB -instance RACDB4
Expected output:
Instance RACDB4 is not running on node node4
Repeat for RACDB3:
srvctl stop instance -db RACDB -instance RACDB3 -drain_timeout 120 -stopoption "TRANSACTIONAL LOCAL"
Verify:
srvctl status instance -db RACDB -instance RACDB3
Expected output:
Instance RACDB3 is not running on node node3
Repeat for RACDB2:
srvctl stop instance -db RACDB -instance RACDB2 -drain_timeout 120 -stopoption "TRANSACTIONAL LOCAL"
Verify:
srvctl status instance -db RACDB -instance RACDB2
Expected output:
Instance RACDB2 is not running on node node2
Verify only RACDB1 remains running:
srvctl status database -db RACDB
Expected output:
Instance RACDB1 is running on node node1
Instance RACDB2 is not running on node node2
Instance RACDB3 is not running on node node3
Instance RACDB4 is not running on node node4
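The three stop-and-verify sequences above can be generated in a loop. This dry-run sketch prints each srvctl command instead of executing it; on a real cluster, run the printed commands one at a time and confirm each instance stops.

```shell
#!/usr/bin/env bash
# Dry-run sketch of Step 1: generate the drain-and-stop commands for
# instances RACDB4, RACDB3, RACDB2 (leaving RACDB1 running).
# Commands are printed, not executed.
DB=RACDB
DRAIN_SECS=120

cmds=$(
  for inst in RACDB4 RACDB3 RACDB2; do
    echo "srvctl stop instance -db $DB -instance $inst -drain_timeout $DRAIN_SECS -stopoption \"TRANSACTIONAL LOCAL\""
    echo "srvctl status instance -db $DB -instance $inst"
  done
)
echo "$cmds"
```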
Step 2: Remove non-target instances from Clusterware
Warning: Removing an instance from the CRS configuration is irreversible. To restore a removed instance, you must use srvctl add instance to re-register it manually.
Remove each stopped instance from the Clusterware registration. For administrator-managed databases, stopped instances must be removed (not just stopped) before conversion.
srvctl remove instance -db RACDB -instance RACDB4 -noprompt
Expected output:
$
srvctl remove instance -db RACDB -instance RACDB3 -noprompt
srvctl remove instance -db RACDB -instance RACDB2 -noprompt
Verify only one instance remains in the configuration:
srvctl config database -db RACDB | grep "Database instances"
Expected output:
Database instances: RACDB1
Step 3: Reconfigure services for single instance
All database services must be modified so the preferred instance is the single remaining instance and the available instance list is cleared. The -modifyconfig flag ensures that only the named instances are assigned to the service -- any previously configured instances are removed.
For each service:
srvctl modify service -db RACDB -service racdb_svc -modifyconfig -preferred "RACDB1"
Expected output:
$
When -modifyconfig is used with only -preferred specified (no -available parameter), the available instance list is implicitly cleared because -modifyconfig assigns only the explicitly named instances.
Verify the service configuration:
srvctl config service -db RACDB -service racdb_svc
Expected output:
Service name: racdb_svc
...
Preferred instances: RACDB1
Available instances:
...
The Available instances line must be empty. If you have additional services, repeat the srvctl modify service command for each one.
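With several services, the per-service modify commands can be generated in a loop. In this sketch the service list is a hypothetical stand-in (on a real cluster, derive it from srvctl config service -db RACDB), and the commands are printed rather than executed.

```shell
#!/usr/bin/env bash
# Sketch: print the modify command for every registered service so the
# single remaining instance (RACDB1) is the only preferred instance.
# SERVICES is a hypothetical example list; commands are printed, not run.
DB=RACDB
SERVICES="racdb_svc racdb_report_svc"   # hypothetical service names

modify_cmds=$(
  for svc in $SERVICES; do
    echo "srvctl modify service -db $DB -service $svc -modifyconfig -preferred \"RACDB1\""
  done
)
echo "$modify_cmds"
```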
Verify the service is running:
srvctl status service -db RACDB -service racdb_svc
Expected output:
Service racdb_svc is running on instance(s) RACDB1
Step 4: Convert the database to RAC One Node
The -instance parameter specifies the instance name prefix (not the full instance name). With -instance RACDB, the resulting instance will be named RACDB_1. If you omit -instance, Oracle uses the first 12 characters of db_unique_name as the prefix.
srvctl convert database -db RACDB -dbtype RACONENODE -instance RACDB
Expected output:
$
Services were reconfigured in Step 3 before conversion. After restarting the database in Step 5, verify services are running on the new instance name (RACDB_1) as part of Post-Conversion Step 1.
Step 5: Restart the database
The database must be restarted for the new RAC One Node configuration to take effect.
srvctl stop database -db RACDB
Expected output:
$
srvctl start database -db RACDB
Expected output:
$
Verify the instance is running under its new name:
srvctl status database -db RACDB
Expected output:
Instance RACDB_1 is running on node node1
Step 6: Configure candidate servers
By default, after conversion the database can only run on the node where it was converted. Configure all 4 nodes as candidate servers to enable online relocation and failover to any node.
srvctl modify database -db RACDB -server "node1,node2,node3,node4"
Expected output:
$
Oracle Clusterware attempts to start the RAC One Node instance on the servers in the order listed. If the first server fails, it tries the second, and so on. The node where the database is currently running must be included in the list.
Verify the candidate server configuration:
srvctl config database -db RACDB | grep -i "Candidate servers"
Expected output:
Candidate servers: node1,node2,node3,node4
Post-Conversion
Step 1: Verify service status after restart
After the restart, verify that services are running on the new instance:
srvctl status service -db RACDB
Expected output:
Service racdb_svc is running on instance(s) RACDB_1
If the service is not running or shows the old instance name (RACDB1), reconfigure it for the new RAC One Node instance name and start it:
srvctl modify service -db RACDB -service racdb_svc -modifyconfig -preferred "RACDB_1"
srvctl start service -db RACDB -service racdb_svc
Step 2: Update connection strings
After conversion, the instance name changes from RACDB1 to RACDB_1. Any application or tool that connects using a SID-based connection string (referencing RACDB1, RACDB2, RACDB3, or RACDB4 directly) will fail.
All connections must use the service name (racdb_svc) through the SCAN listener. Service-based connections are not affected by the conversion and will automatically follow the instance during online relocation and failover.
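For clients that resolve connections through tnsnames.ora, a service-based alias looks like the following sketch, built from this guide's placeholder SCAN name and service name (adjust host, port, and names to your environment):

```
RACDB_SVC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = scan-cluster.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = racdb_svc)
    )
  )
```

Because the alias references the service name rather than a SID, it continues to work unchanged after relocation or failover.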
Verify connectivity through the service name:
sqlplus sys@"(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=scan-name)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=racdb_svc)))" as sysdba <<'EOF'
SELECT instance_name, host_name FROM v$instance;
EXIT;
EOF
Expected output:
INSTANCE_NAME HOST_NAME
---------------- ----------
RACDB_1 node1
Step 3: Clean up orphaned undo tablespaces (optional)
The undo tablespaces for the removed instances (UNDOTBS2, UNDOTBS3, UNDOTBS4) are no longer needed. Dropping them reclaims disk space.
Warning: Dropping a tablespace with
INCLUDING CONTENTS AND DATAFILESpermanently removes the tablespace and its datafiles. This operation cannot be undone. Ensure you have a current backup before proceeding.
First, verify no active undo segments remain in the tablespaces:
sqlplus -s / as sysdba <<'EOF'
SET LINESIZE 120
SELECT tablespace_name, status, COUNT(*) AS segments
FROM dba_undo_extents
WHERE tablespace_name IN ('UNDOTBS2','UNDOTBS3','UNDOTBS4')
AND status = 'ACTIVE'
GROUP BY tablespace_name, status;
EXIT;
EOF
Expected output:
no rows selected
If rows are returned with status ACTIVE, wait for the transactions to complete before proceeding.
Drop each orphaned undo tablespace:
sqlplus -s / as sysdba <<'EOF'
ALTER TABLESPACE UNDOTBS2 OFFLINE;
DROP TABLESPACE UNDOTBS2 INCLUDING CONTENTS AND DATAFILES;
ALTER TABLESPACE UNDOTBS3 OFFLINE;
DROP TABLESPACE UNDOTBS3 INCLUDING CONTENTS AND DATAFILES;
ALTER TABLESPACE UNDOTBS4 OFFLINE;
DROP TABLESPACE UNDOTBS4 INCLUDING CONTENTS AND DATAFILES;
EXIT;
EOF
Expected output (per tablespace):
Tablespace altered.
Tablespace dropped.
Leave the extra redo threads (2, 3, 4) in place. They consume negligible resources and are required if the database is ever converted back to RAC.
To reverse the conversion back to full RAC, use srvctl convert database -db RACDB -dbtype RAC, then re-add instances with srvctl add instance. See the Oracle Real Application Clusters Administration and Deployment Guide for 19c for the full reverse procedure.
Step 4: Update monitoring and backup scripts
After conversion:
- Instance name changed: Update any monitoring scripts, RMAN backup scripts, or cron jobs that reference RACDB1 through RACDB4. The active instance is now RACDB_1 (and temporarily RACDB_2 during online relocation).
- Oracle Enterprise Manager: If OEM is monitoring this database, remove the old RAC instance targets and update the database target type. The specifics depend on your OEM configuration.
- RMAN backup scripts: Update any scripts that reference specific instance names. RMAN connects through the service name and will work without changes if your scripts already use service-based connections.
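Rather than hardcoding RACDB_1 or RACDB_2, scripts can discover the active instance and node at runtime by parsing the srvctl status line. In this sketch a sample status line stands in for the real command output.

```shell
#!/usr/bin/env bash
# Sketch: discover the active RAC One Node instance name and node
# instead of hardcoding them. On a real cluster, populate status_line
# from srvctl; here a sample line stands in.
status_line="Instance RACDB_1 is running on node node1"
# status_line=$(srvctl status database -db RACDB)   # real cluster

current_instance=$(echo "$status_line" | awk '/is running/ {print $2}')
current_node=$(echo "$status_line" | awk '/is running/ {print $NF}')

echo "instance=$current_instance node=$current_node"
```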
Validation
Quick check
srvctl config database -db RACDB | grep "Type:"
Expected output:
Type: RACOneNode
Full validation
- Verify complete RAC One Node configuration
srvctl config database -db RACDB
Expected output (key fields):
Database unique name: RACDB
Database name: RACDB
...
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: RACDB
Candidate servers: node1,node2,node3,node4
Database instances: RACDB_1
...
Verify Type is RACOneNode, Candidate servers lists all 4 nodes, and Database instances shows RACDB_1.
- Verify instance is running
srvctl status database -db RACDB
Expected output:
Instance RACDB_1 is running on node node1
- Verify service status
srvctl status service -db RACDB
Expected output:
Service racdb_svc is running on instance(s) RACDB_1
- Verify database accessibility
sqlplus -s / as sysdba <<'EOF'
SELECT instance_name, status, database_status FROM v$instance;
EXIT;
EOF
Expected output:
INSTANCE_NAME STATUS DATABASE_STATUS
---------------- -------- -----------------
RACDB_1 OPEN ACTIVE
- Test online relocation to another node
This validates that the candidate server list is configured correctly and that RAC One Node can move the instance between nodes.
srvctl relocate database -db RACDB -node node2
Expected output:
$
During relocation, a temporary second instance (RACDB_2) starts on the target node, services migrate, and the original instance shuts down. This takes approximately 1-5 minutes depending on active connections and the relocation timeout.
Verify the instance moved:
srvctl status database -db RACDB
Expected output:
Instance RACDB_2 is running on node node2
The instance name is now RACDB_2 (it alternates between _1 and _2 on each relocation). On failover (unplanned), the instance name remains RACDB_1.
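The alternation rule can be expressed as a small helper, useful in monitoring scripts that need to predict the post-relocation name. This is illustrative only; next_relocation_instance is not an Oracle utility.

```shell
#!/usr/bin/env bash
# Illustrative helper: predict the instance name after an online
# relocation, per the documented alternation between <prefix>_1 and
# <prefix>_2. Unplanned failover keeps the current name instead.
next_relocation_instance() {
  current="$1"
  case "$current" in
    *_1) printf '%s_2\n' "${current%_1}" ;;
    *_2) printf '%s_1\n' "${current%_2}" ;;
    *)   printf '%s\n' "$current" ;;   # not a RAC One Node name
  esac
}

next_relocation_instance "RACDB_1"   # after relocation: RACDB_2
next_relocation_instance "RACDB_2"   # after relocating back: RACDB_1
```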
Relocate back to the original node:
srvctl relocate database -db RACDB -node node1
Verify:
srvctl status database -db RACDB
Expected output:
Instance RACDB_1 is running on node node1
- Verify service follows the relocated instance
After each relocation above, confirm the service moved with the instance:
srvctl status service -db RACDB
Expected output:
Service racdb_svc is running on instance(s) RACDB_1
Troubleshooting
| Problem | Cause | Solution |
|---|---|---|
| PRCD-1214: Administrator-managed RAC database has more than one instance during srvctl convert | More than one instance still registered in CRS | Remove all instances except one: srvctl stop instance -db RACDB -instance RACDB2 then srvctl remove instance -db RACDB -instance RACDB2 -noprompt. Repeat for each extra instance. |
| PRCD-1153: Failed to convert the configuration of cluster database during srvctl convert | An ongoing or failed relocation is blocking the conversion | Complete the relocation: srvctl relocate database -db RACDB -node node1. If stuck, abort it: srvctl stop database -db RACDB -stopoption ABORT then srvctl start database -db RACDB. |
| PRCD-1156: Failed to convert database due to services having PRECONNECT TAF policy | A service has PRECONNECT TAF, which is incompatible with RAC One Node | Change to BASIC: srvctl modify service -db RACDB -service racdb_svc -tafpolicy BASIC. |
| Relocation fails with "target node not in candidate list" | Target node is not in the candidate server list | Add the node: srvctl modify database -db RACDB -server "node1,node2,node3,node4". |
| ORA-01017: invalid username/password after relocation to a new node | Password file not found on target node (filesystem-based password files only) | Copy both password files to the target node: scp $ORACLE_HOME/dbs/orapwRACDB_1 node2:$ORACLE_HOME/dbs/ and scp $ORACLE_HOME/dbs/orapwRACDB_2 node2:$ORACLE_HOME/dbs/. Use ASM-stored password files to avoid this issue. |
| Instance not starting after srvctl convert database | Database was not restarted after conversion | Restart: srvctl stop database -db RACDB then srvctl start database -db RACDB. |
| Service not running after conversion and restart | Service was not started or its configuration was not updated | Start the service: srvctl start service -db RACDB -service racdb_svc. If it fails, reconfigure: srvctl modify service -db RACDB -service racdb_svc -modifyconfig -preferred "RACDB_1". |
| Applications fail to connect after conversion | Connection strings use SID-based connections (RACDB1, RACDB2, etc.) instead of the service name | Update connection strings to use the service name (racdb_svc) through the SCAN listener. SID-based connections to the old instance names will not work after conversion. |
| ACTIVE undo segments found when dropping orphaned tablespace | Transactions from the removed instances have not fully expired | Wait for the undo retention period to pass (check the UNDO_RETENTION parameter), then retry. Monitor with: SELECT tablespace_name, status, COUNT(*) FROM dba_undo_extents WHERE tablespace_name = 'UNDOTBS2' GROUP BY tablespace_name, status; |
| PDBs in MOUNTED state after relocation or failover | PDB does not have an associated service, so Oracle leaves it in MOUNTED state after relocation | Create a service for the PDB: srvctl add service -db RACDB -service pdb_svc -pdb PDB1 -preferred RACDB_1 and start it. Then save the PDB state: ALTER PLUGGABLE DATABASE PDB1 SAVE STATE;. |