Instruction

Patching Oracle 19c Grid Infrastructure and Database in a Data Guard Environment (In-Place OPatchAuto Method)

Assumptions

This instruction assumes:

  • Oracle Linux 7.x, 8.x, or 9.x (x86_64)
  • Oracle Grid Infrastructure 19c and Oracle Database 19c are installed on all nodes
  • The environment uses Oracle RAC (two or more nodes per site) with separate GI and DB homes
  • A physical standby database is configured on a separate site using Data Guard
  • Data Guard Broker manages the configuration -- all Data Guard operations use DGMGRL exclusively
  • Fast-Start Failover (FSFO) is enabled (steps to disable and re-enable are included; skip if FSFO is not configured)
  • The database uses CDB architecture with database files on ASM
  • Root access is available on all nodes (OPatchAuto requires root)
  • The Release Update patch and latest OPatch have been downloaded from My Oracle Support
  • The patch README confirms Data Guard Standby-First Installable support

Prerequisites

Automatic setup

No additional RPM packages are required beyond the original Grid Infrastructure and Database installation.

Manual setup

All steps must be completed on every node across both primary and standby sites before patching begins.

  1. Update OPatch in the GI home on each node

Back up the existing OPatch directory and extract the latest OPatch (patch 6880880 from My Oracle Support). Run as the grid user:

mv {GRID_HOME}/OPatch {GRID_HOME}/OPatch.bak
unzip -oq /u01/software/p6880880_190000_Linux-x86-64.zip -d {GRID_HOME}
{GRID_HOME}/OPatch/opatch version

{GRID_HOME} -- the Grid Infrastructure home directory (e.g., /u01/app/19.0.0.0/grid).

Expected output:

OPatch Version: 12.2.0.1.44
...

The version number varies. The OPatch README on MOS specifies the minimum version required for each RU.
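Where patching is scripted, the installed OPatch version can be compared against the README's minimum before proceeding. A minimal sketch; the version numbers shown are examples, and `sort -V` assumes GNU coreutils (standard on Oracle Linux):

```shell
# Return success if the installed OPatch version meets the minimum.
# Dotted version strings compare correctly under GNU "sort -V".
opatch_version_ok() {
  local installed="$1" minimum="$2"
  [ "$(printf '%s\n%s\n' "$minimum" "$installed" | sort -V | head -n1)" = "$minimum" ]
}

# Example check (version values are illustrative):
opatch_version_ok "12.2.0.1.44" "12.2.0.1.37" && echo "OPatch OK" || echo "OPatch too old"
```

In a wrapper script, the first argument would come from parsing {GRID_HOME}/OPatch/opatch version output.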

  2. Update OPatch in the DB home on each node

Run as the oracle user:

mv {DB_HOME}/OPatch {DB_HOME}/OPatch.bak
unzip -oq /u01/software/p6880880_190000_Linux-x86-64.zip -d {DB_HOME}
{DB_HOME}/OPatch/opatch version

{DB_HOME} -- the Oracle Database home directory (e.g., /u01/app/oracle/product/19.0.0/dbhome_1).

  3. Extract the Release Update patch on each node

Extract the patch zip to a staging directory. The directory must be readable by both the grid and oracle OS users. Run as root:

mkdir -p /u01/software/{PATCH_NUMBER}
cd /u01/software/{PATCH_NUMBER}
unzip /u01/software/p{PATCH_NUMBER}_190000_Linux-x86-64.zip
chown -R root:oinstall /u01/software/{PATCH_NUMBER}
chmod -R 755 /u01/software/{PATCH_NUMBER}

{PATCH_NUMBER} -- the Release Update patch number from My Oracle Support (e.g., 36916690 for 19.25.0).

  4. Run conflict check against the GI home

Run as the grid user on each node:

{GRID_HOME}/OPatch/opatch prereq CheckConflictAgainstOHWithDetail \
  -phBaseDir /u01/software/{PATCH_NUMBER}/{PATCH_NUMBER}

Expected output:

Prereq "checkConflictAgainstOHWithDetail" passed.

  5. Run conflict check against the DB home

Run as the oracle user on each node:

{DB_HOME}/OPatch/opatch prereq CheckConflictAgainstOHWithDetail \
  -phBaseDir /u01/software/{PATCH_NUMBER}/{PATCH_NUMBER}

Expected output:

Prereq "checkConflictAgainstOHWithDetail" passed.

  6. Verify sufficient disk space in GI and DB homes

In-place patching requires free space in both homes. Run on each node:

df -h {GRID_HOME} {DB_HOME}

Verify at least 10 GB free in each filesystem. The exact requirement varies per RU -- check the patch README.
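A scripted pre-check can fail fast when free space is below the threshold. A sketch, assuming the README's figure is substituted for the 10 GB used in the example:

```shell
# Fail if the filesystem holding the given path has less than the
# required free space (in GB). Uses POSIX "df -Pk" (KB units).
check_free_space() {
  local path="$1" required_gb="$2"
  local avail_kb
  avail_kb=$(df -Pk "$path" | awk 'NR==2 {print $4}')
  if [ "$avail_kb" -lt $((required_gb * 1024 * 1024)) ]; then
    echo "INSUFFICIENT: ${path} (${avail_kb} KB free, need ${required_gb} GB)"
    return 1
  fi
  echo "OK: ${path}"
}

# Example (substitute the real homes in a live run):
# check_free_space {GRID_HOME} 10 && check_free_space {DB_HOME} 10
```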

Additional setup (optional)

  1. Back up the GI and DB homes

Optional but recommended. Create tar backups of both homes before patching for additional rollback safety. Run as root on each node:

tar czf /u01/backup/grid_home_backup_$(hostname)_$(date +%Y%m%d).tar.gz -C {GRID_HOME} .
tar czf /u01/backup/db_home_backup_$(hostname)_$(date +%Y%m%d).tar.gz -C {DB_HOME} .

Pre-Patch Checks

Step 1: Verify Data Guard health via Broker

Connect to DGMGRL from the primary site as the oracle user (or sysdg):

dgmgrl sys/{SYS_PASSWORD}@{PRIMARY_DB}

{PRIMARY_DB} -- the primary database unique name or TNS alias.

{SYS_PASSWORD} -- the SYS password. Use an Oracle Wallet or password file to avoid passing passwords on the command line in production.

DGMGRL> SHOW CONFIGURATION;

Expected output:

Configuration - {DG_CONFIG}

  Protection Mode: MaxPerformance
  Members:
  {PRIMARY_DB} - Primary database
    {STANDBY_DB} - Physical standby database

Fast-Start Failover:  Enabled

Configuration Status:
SUCCESS

{DG_CONFIG} -- the Data Guard Broker configuration name.

{STANDBY_DB} -- the standby database unique name.

Verify each database individually:

DGMGRL> SHOW DATABASE '{PRIMARY_DB}';
DGMGRL> SHOW DATABASE '{STANDBY_DB}';

Both databases must show SUCCESS. Transport lag and apply lag must be zero or minimal. Do not proceed if either database shows a warning or error.
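For automation, the Broker output can be checked programmatically instead of by eye. A sketch that parses SHOW CONFIGURATION output; the wallet-based dgmgrl invocation in the comment is an assumption, so adapt it to your authentication setup:

```shell
# Succeed only when the Broker "Configuration Status:" line is followed
# by SUCCESS. Reads "SHOW CONFIGURATION" output on stdin.
dg_is_healthy() {
  awk '/Configuration Status:/ {getline; gsub(/^[ \t]+|[ \t]+$/, ""); print; exit}' \
    | grep -qx 'SUCCESS'
}

# Example gate (assumes a wallet so no password is exposed on the CLI):
# dgmgrl -silent /@{PRIMARY_DB} "show configuration" | dg_is_healthy || exit 1
```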

Step 2: Verify ASM disk group health

Run as the grid user on any node:

asmcmd lsdg

Expected output:

State    Type    Rebal  ...  Name
MOUNTED  EXTERN  N      ...  DATA/
MOUNTED  EXTERN  N      ...  RECO/
...

All disk groups must show MOUNTED with no rebalance operations in progress.
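The lsdg output can also be screened in a script. A sketch that flags problem disk groups; it assumes column 1 is State and column 3 is Rebal, as in the 19c asmcmd layout shown above:

```shell
# Print the names of disk groups that are not MOUNTED or are
# mid-rebalance. Reads "asmcmd lsdg" output on stdin (line 1 = header).
asm_dg_issues() {
  awk 'NR > 1 && ($1 != "MOUNTED" || $3 == "Y") {print $NF}'
}

# Example gate:
# [ -z "$(asmcmd lsdg | asm_dg_issues)" ] || { echo "disk group issues"; exit 1; }
```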

Step 3: Disable Fast-Start Failover (if running)

Skip this step if FSFO is not enabled in your environment.

Warning: If FSFO remains enabled during patching, an automatic failover could occur to a standby that is partially patched or shut down. Disable FSFO before patching any node.

DGMGRL> DISABLE FAST_START FAILOVER;

Expected output:

Disabled.

Verify:

DGMGRL> SHOW FAST_START FAILOVER;

Confirm the output shows Fast-Start Failover: Disabled.

Step 4: Disable scheduled jobs on the primary database (optional)

Optional. Prevents jobs from running during the patching window. Connect to the primary CDB as SYSDBA. First, capture the list of enabled jobs for re-enablement after patching:

SPOOL /tmp/enabled_jobs_before_patch.lst
SELECT owner, job_name FROM dba_scheduler_jobs WHERE enabled = 'TRUE' AND owner NOT IN ('SYS','SYSTEM');
SPOOL OFF

Disable all non-system scheduled jobs:

ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0 SCOPE=BOTH;

BEGIN
  FOR rec IN (SELECT owner, job_name FROM dba_scheduler_jobs WHERE enabled = 'TRUE' AND owner NOT IN ('SYS','SYSTEM'))
  LOOP
    DBMS_SCHEDULER.DISABLE(rec.owner || '.' || rec.job_name, TRUE);
  END LOOP;
END;
/

Patch Application

Phase 1: Patch Standby Site

Patch the standby site first. Run opatchauto apply on one standby node at a time.

Step 5: Patch standby node 1

Warning: OPatchAuto stops the entire GI stack on the node being patched, including all database instances, listeners, and ASM. The node is unavailable during patching. On a multi-node RAC standby, remaining nodes continue to operate.

Run as root on standby node 1:

export ORACLE_HOME={GRID_HOME}
{GRID_HOME}/OPatch/opatchauto apply /u01/software/{PATCH_NUMBER}/{PATCH_NUMBER}

OPatchAuto performs the following automatically:

  1. Stops the GI stack on the node (database instances, listener, ASM, CRS)
  2. Patches the GI home
  3. Patches the DB home (if the patch contains DB components)
  4. Restarts the GI stack and all managed resources

Patching takes approximately 15-45 minutes depending on hardware and patch size.

Expected output (summary at end):

OPatchauto session completed at ...
opatchauto succeeded.

Step 6: Verify standby node 1 after patching

Run as the grid user on standby node 1:

crsctl stat res -t

Verify the GI stack, ASM, database instances, and listeners are running on the patched node.

Check the patch inventory. Run as the grid user:

{GRID_HOME}/OPatch/opatch lspatches

Expected output:

{PATCH_NUMBER};Database Release Update : 19.x.0.0.yymmdd
...

Run as the oracle user:

{DB_HOME}/OPatch/opatch lspatches

Confirm the RU patch number appears in both GI and DB home inventories.
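When many nodes are involved, the inventory confirmation can be scripted. A minimal sketch; the file arguments are captured opatch lspatches output (collected locally or over ssh), and the patch ID is the field before the first semicolon:

```shell
# Succeed only if the given patch ID appears in every listed
# "opatch lspatches" capture file.
patch_present() {
  local patch_id="$1" f
  shift
  for f in "$@"; do
    grep -q "^${patch_id};" "$f" || return 1
  done
}

# Example:
# {GRID_HOME}/OPatch/opatch lspatches > /tmp/gi.lst
# {DB_HOME}/OPatch/opatch lspatches  > /tmp/db.lst
# patch_present {PATCH_NUMBER} /tmp/gi.lst /tmp/db.lst && echo "both homes patched"
```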

Step 7: Patch remaining standby nodes

Repeat steps 5 and 6 for each remaining standby node, one node at a time. Wait for each node to complete and verify before proceeding to the next.

Step 8: Validate standby Data Guard health

After all standby nodes are patched, verify Data Guard is healthy. Connect to DGMGRL:

DGMGRL> SHOW CONFIGURATION;

Expected output:

Configuration - {DG_CONFIG}

  Protection Mode: MaxPerformance
  Members:
  {PRIMARY_DB} - Primary database
    {STANDBY_DB} - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS

Verify redo apply is running on the standby:

DGMGRL> SHOW DATABASE '{STANDBY_DB}';

Confirm Database Status: SUCCESS and that transport lag and apply lag are zero or decreasing.

Phase 2: Patch Primary Site

After validating the standby site, patch the primary site one node at a time.

Step 9: Relocate services from primary node 1

Before patching each primary node, relocate services to surviving nodes to minimize client impact. Run as the oracle user:

srvctl status service -db {DB_NAME}

{DB_NAME} -- the database name registered with CRS (e.g., cdb1).

For each service running on the node being patched:

srvctl relocate service -db {DB_NAME} -service {SERVICE_NAME} -oldinst {INSTANCE_ON_THIS_NODE} -newinst {INSTANCE_ON_OTHER_NODE}

{SERVICE_NAME} -- the service name to relocate.

{INSTANCE_ON_THIS_NODE} -- the instance name on the node about to be patched (e.g., cdb1_1).

{INSTANCE_ON_OTHER_NODE} -- the instance name on a surviving node (e.g., cdb1_2).

Verify services have relocated:

srvctl status service -db {DB_NAME}
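Identifying which services to relocate can be scripted from the status output. A sketch; it assumes srvctl prints lines of the form "Service <name> is running on instance(s) <inst1>,<inst2>", so verify the format against your srvctl version before relying on it:

```shell
# Print the services running on a given instance, reading
# "srvctl status service -db {DB_NAME}" output on stdin.
services_on_instance() {
  local inst="$1"
  awk -v inst="$inst" '
    /is running on instance/ {
      svc = $2                      # "Service <name> is running ..."
      n = split($NF, insts, ",")    # last field: comma-separated instances
      for (i = 1; i <= n; i++) if (insts[i] == inst) print svc
    }'
}

# Example:
# srvctl status service -db {DB_NAME} | services_on_instance {INSTANCE_ON_THIS_NODE}
```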

Step 10: Patch primary node 1

Run as root on primary node 1:

export ORACLE_HOME={GRID_HOME}
{GRID_HOME}/OPatch/opatchauto apply /u01/software/{PATCH_NUMBER}/{PATCH_NUMBER}

OPatchAuto stops the GI stack on this node, patches GI and DB homes, and restarts everything. Active sessions connected to the instance on this node are terminated. TAF or Application Continuity handles reconnection to surviving instances.

Expected output (summary at end):

OPatchauto session completed at ...
opatchauto succeeded.

Step 11: Verify primary node 1 after patching

Run as the grid user:

crsctl stat res -t

Verify GI stack, ASM, database instances, and listeners are running.

Check patch inventory in both homes:

{GRID_HOME}/OPatch/opatch lspatches
{DB_HOME}/OPatch/opatch lspatches

Confirm the RU patch number appears in both inventories.

Step 12: Patch remaining primary nodes

Repeat steps 9-11 for each remaining primary node. Relocate services away from each node before patching it. Wait for each node to complete and verify before proceeding.

Post-Patch Validation

Step 13: Run datapatch

OPatchAuto runs datapatch automatically during in-place patching. Verify that datapatch completed successfully by checking the patch registry. Connect to the primary CDB as SYSDBA:

SELECT patch_id, patch_uid, target_version, status, action, description
FROM dba_registry_sqlpatch
ORDER BY action_time DESC;

Expected output:

PATCH_ID    PATCH_UID   TARGET_VERSION   STATUS     ACTION     DESCRIPTION
----------- ----------- ---------------- ---------- ---------- --------------------------------
{PATCH_NUMBER} ...       19.x.0.0.yymmdd  SUCCESS    APPLY      Database Release Update : 19.x...

The STATUS column must show SUCCESS for the RU patch.

Verify datapatch applied to all PDBs:

SELECT con_id, patch_id, status FROM cdb_registry_sqlpatch ORDER BY con_id, action_time DESC;

All containers (CDB$ROOT and each PDB) must show SUCCESS for the RU patch.
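For scripted validation, the rows from the query above can be spooled and screened for anything other than SUCCESS. A sketch, assuming whitespace-separated "con_id patch_id status" rows as input:

```shell
# Report containers whose sqlpatch status is not SUCCESS. Reads rows of
# "con_id patch_id status" (e.g., spooled from SQL*Plus) on stdin;
# non-numeric header/underline rows are skipped.
sqlpatch_failures() {
  awk '$1 ~ /^[0-9]+$/ && NF >= 3 {
    status = $3
    for (i = 4; i <= NF; i++) status = status " " $i
    if (status != "SUCCESS") print "con_id " $1 ": " status
  }'
}

# Example gate:
# [ -z "$(sqlpatch_failures < /tmp/sqlpatch_rows.lst)" ] || echo "datapatch issues"
```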

If any container shows WITH ERRORS, review the datapatch log under {ORACLE_BASE}/cfgtoollogs/sqlpatch/ and re-run datapatch manually:

{DB_HOME}/OPatch/datapatch -verbose

{ORACLE_BASE} -- the Oracle base directory (e.g., /u01/app/oracle).

Step 14: Verify Data Guard health via Broker

DGMGRL> SHOW CONFIGURATION;

Expected output:

Configuration - {DG_CONFIG}

  Protection Mode: MaxPerformance
  Members:
  {PRIMARY_DB} - Primary database
    {STANDBY_DB} - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS

Verify both databases individually:

DGMGRL> SHOW DATABASE '{PRIMARY_DB}';
DGMGRL> SHOW DATABASE '{STANDBY_DB}';

Both must show SUCCESS with zero transport lag and apply lag.

Step 15: Verify database components

SELECT comp_name, version, status FROM dba_registry ORDER BY comp_name;

Expected output:

COMP_NAME                                  VERSION            STATUS
------------------------------------------ ------------------ ----------
...                                        19.0.0.0.0         VALID
...

All components must show VALID status.

Step 16: Recompile invalid objects

@?/rdbms/admin/utlrp.sql

Expected output:

...
PL/SQL procedure successfully completed.

TIMESTAMP
----------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END ...

DOC>   The following query reports the number of objects that are INVALID
DOC>   after the recompilation.

OBJECTS WITH ERRORS
-------------------
                  0

Zero objects with errors indicates a clean recompilation. A small number of invalid objects may persist for licensed options not in use -- verify these are expected.

Step 17: Re-enable Fast-Start Failover (if disabled in Step 3)

DGMGRL> ENABLE FAST_START FAILOVER;

Expected output:

Enabled.

Verify:

DGMGRL> SHOW FAST_START FAILOVER;

Confirm Fast-Start Failover: Enabled and that the observer is connected.

Step 18: Re-enable scheduled jobs (if disabled in Step 4)

Restore JOB_QUEUE_PROCESSES to its previous value (1000 is the default; adjust if your environment used a different setting):

ALTER SYSTEM SET JOB_QUEUE_PROCESSES=1000 SCOPE=BOTH;

Re-enable the jobs that were disabled in Step 4. For each job recorded:

BEGIN
  DBMS_SCHEDULER.ENABLE('{OWNER}.{JOB_NAME}');
END;
/

Step 19: Final GI and database verification

Run on all nodes across both sites.

Verify GI stack:

crsctl stat res -t

Verify database instances:

srvctl status database -db {DB_NAME}

Expected output:

Instance {DB_NAME}_1 is running on node {NODE1}
Instance {DB_NAME}_2 is running on node {NODE2}

All instances must be running on all nodes.
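In a wrapper script, the srvctl output can gate this final check. A sketch; it assumes down instances are reported with the phrase "is not running" (verify against your srvctl version):

```shell
# Succeed only if no instance in "srvctl status database -db {DB_NAME}"
# output is reported as down. Reads the srvctl output on stdin.
all_instances_running() {
  ! grep -q 'is not running'
}

# Example:
# srvctl status database -db {DB_NAME} | all_instances_running || echo "instance down"
```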

Validation

Quick check

Verify the patch is applied in both GI and DB homes on any node:

{GRID_HOME}/OPatch/opatch lspatches && {DB_HOME}/OPatch/opatch lspatches

Expected output:

{PATCH_NUMBER};Database Release Update : 19.x.0.0.yymmdd
...

The RU patch number must appear in both home inventories on all nodes.

Full validation

  1. Verify patch inventory on all nodes

Run on each node as the grid user and oracle user respectively:

{GRID_HOME}/OPatch/opatch lspatches
{DB_HOME}/OPatch/opatch lspatches

All nodes must show the same RU patch in both homes.

  2. Verify GI stack on all nodes

crsctl stat res -t

All resources must show ONLINE on their respective nodes.

  3. Verify Data Guard configuration

DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE '{PRIMARY_DB}';
DGMGRL> SHOW DATABASE '{STANDBY_DB}';

All databases SUCCESS, transport lag and apply lag zero.

  4. Verify datapatch status

SELECT patch_id, status, action FROM dba_registry_sqlpatch ORDER BY action_time DESC;

The RU patch must show STATUS = SUCCESS.

  5. Verify FSFO is re-enabled

DGMGRL> SHOW FAST_START FAILOVER;

Fast-Start Failover: Enabled with observer connected.

Troubleshooting

Problem: opatchauto apply fails with a conflict error
Cause: An existing patch conflicts with the RU
Solution: Run opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir {patch_dir} to identify the conflict, then roll back the conflicting patch first: opatch rollback -id {conflict_patch_id}

Problem: GI stack does not restart after patching
Cause: CRS daemon failed to start or OCR is corrupted
Solution: Check the GI alert log in the grid user's ADR home (diag/crs/{hostname}/crs/trace/alert.log under the grid Oracle base). Try a manual start: crsctl start crs. If OCR is corrupted, restore from backup: ocrconfig -restore {backup_file}

Problem: OPatchAuto fails with a permission error
Cause: Not running as root, or patch directory permissions are incorrect
Solution: Run opatchauto apply as root. Verify patch directory ownership (chown -R root:oinstall /u01/software/{PATCH_NUMBER}) and permissions (chmod -R 755 /u01/software/{PATCH_NUMBER})

Problem: Data Guard shows WARNING or ERROR after standby patching
Cause: Redo apply lag from patching downtime
Solution: Allow time for the standby to catch up. Verify with DGMGRL> SHOW DATABASE '{STANDBY_DB}'. If apply lag persists, restart redo apply via the Broker: DGMGRL> EDIT DATABASE '{STANDBY_DB}' SET STATE='APPLY-ON';

Problem: datapatch -verbose fails with ORA-20008
Cause: Datapatch detects a mismatched inventory state
Solution: Verify that opatch lspatches output matches across all homes. Re-run datapatch: {DB_HOME}/OPatch/datapatch -verbose. Check the logs under {ORACLE_BASE}/cfgtoollogs/sqlpatch/

Problem: Services do not relocate before patching a primary node
Cause: The target instance is not running, or the service is not configured for the target
Solution: Verify the target instance is running: srvctl status instance -db {DB_NAME} -instance {INSTANCE}. Check the service configuration: srvctl config service -db {DB_NAME} -service {SERVICE_NAME}

Problem: Database instance does not start after node patching
Cause: The SPFile or password file on ASM is inaccessible
Solution: Verify ASM is running: srvctl status asm. Check the SPFile location: srvctl config database -db {DB_NAME}. Verify the file exists: asmcmd ls +DATA/{DB_NAME}/PARAMETERFILE/

Problem: OPatchAuto hangs during patching
Cause: A lock file remains from a previous failed attempt
Solution: Check for lock files in {GRID_HOME}/.patch_storage/ and {DB_HOME}/.patch_storage/. Remove stale lock files and re-run

Rollback Procedure

If the patch causes issues, roll back using opatchauto rollback. Roll back in reverse order: primary site first, then standby site.

Step 1: Roll back primary nodes

Run as root on each primary node, one at a time:

export ORACLE_HOME={GRID_HOME}
{GRID_HOME}/OPatch/opatchauto rollback /u01/software/{PATCH_NUMBER}/{PATCH_NUMBER}

OPatchAuto stops the GI stack, removes the patch from GI and DB homes, and restarts everything.

Step 2: Run datapatch rollback on primary

After rolling back all primary nodes, run datapatch to reverse SQL changes. Connect as SYSDBA on the primary:

{DB_HOME}/OPatch/datapatch -verbose

Datapatch detects the rolled-back binaries and reverses the SQL changes.

Step 3: Roll back standby nodes

Run as root on each standby node, one at a time:

export ORACLE_HOME={GRID_HOME}
{GRID_HOME}/OPatch/opatchauto rollback /u01/software/{PATCH_NUMBER}/{PATCH_NUMBER}

Step 4: Verify rollback

Check patch inventory on all nodes:

{GRID_HOME}/OPatch/opatch lspatches
{DB_HOME}/OPatch/opatch lspatches

The rolled-back patch number must no longer appear. Verify Data Guard health via Broker:

DGMGRL> SHOW CONFIGURATION;

Configuration status must be SUCCESS.

Step 5: Re-enable FSFO and scheduled jobs after rollback

Re-enable Fast-Start Failover via Broker:

DGMGRL> ENABLE FAST_START FAILOVER;

Re-enable scheduled jobs using the list captured before patching (see /tmp/enabled_jobs_before_patch.lst from Step 4 of Pre-Patch Checks):

ALTER SYSTEM SET JOB_QUEUE_PROCESSES=1000 SCOPE=BOTH;

BEGIN
  DBMS_SCHEDULER.ENABLE('{OWNER}.{JOB_NAME}');
END;
/

Repeat for each job in the captured list.
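Re-enabling a long job list by hand is error-prone, so the spooled list can be turned into a script instead. A sketch, assuming the spool contains two whitespace-separated columns (OWNER, JOB_NAME) plus the usual SQL*Plus header and underline rows; adjust the filters to your spool formatting:

```shell
# Convert the spooled owner/job list into DBMS_SCHEDULER.ENABLE calls.
# Reads the spool file on stdin; skips header and underline rows.
gen_enable_script() {
  awk 'NF == 2 && $1 != "OWNER" && $1 !~ /^-/ {
    printf "EXEC DBMS_SCHEDULER.ENABLE('\''%s.%s'\'');\n", $1, $2
  }'
}

# Example:
# gen_enable_script < /tmp/enabled_jobs_before_patch.lst > /tmp/reenable_jobs.sql
# then run /tmp/reenable_jobs.sql in SQL*Plus as SYSDBA
```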
