Wednesday, December 28, 2016

RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process

My database is Oracle 12c (12.1.0.2.0). Obsolete archive log files in one of my databases were not being deleted, and the delete command returned the warning below:

RMAN> delete noprompt force archivelog until time 'sysdate-14';
RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process
archived log file name=+DATA/XXX_AZ/ARCHIVELOG/2016_09_28/thread_2_seq_12062.22129.923740541 thread=2 sequence=12062
RMAN>


As a temporary workaround, I deleted the old archive log files manually through RMAN.
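
Before applying the workaround, it is worth confirming that an upstream capture process is what is pinning the logs. A minimal check, assuming a Streams/GoldenGate integrated capture is registered in this database (the view returns no rows otherwise):

SQL> select capture_name, status, required_checkpoint_scn from dba_capture;

Any row returned means RMAN protects archived logs newer than that capture's required checkpoint SCN, regardless of the retention policy.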

Here is the current RMAN configuration:


RMAN> show all;
using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name FRONTEND_AZ are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u04/rmanbackup/FRONTEND1/auto_%d_%F.ctl';
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET PARALLELISM 1;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+data/xxx_az/controlfile/snap_XXX.ctl';

RMAN>


Temporary solution:

Step 1  Change the archive log deletion policy to SHIPPED TO STANDBY.

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO STANDBY;

new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO STANDBY;
new RMAN configuration parameters are successfully stored
RMAN-08591: WARNING: invalid archived log deletion policy

The RMAN-08591 warning appears because no standby destination is actually configured for this database, so the SHIPPED TO STANDBY policy cannot be validated. The policy is still stored, which is all this workaround needs.


Step 2  Delete the obsolete archive log files.

RMAN> delete noprompt force archivelog until time 'sysdate-14';


archived log file name=+DATA/XXXXXXX_AZ/ARCHIVELOG/2016_10_06/thread_2_seq_12858.23739.924517385 RECID=73224 STAMP=924517385
deleted archived log
archived log file name=+DATA/XXXXXXX_AZ/ARCHIVELOG/2016_10_06/thread_2_seq_12859.23741.924518285 RECID=73230 STAMP=924518284
deleted archived log
archived log file name=+DATA/XXXXXXX_AZ/ARCHIVELOG/2016_10_06/thread_2_seq_12860.23743.924519185 RECID=73236 STAMP=924519185
deleted archived log
archived log file name=+DATA/XXXXXXX_AZ/ARCHIVELOG/2016_10_06/thread_2_seq_12861.23745.924520085 RECID=73242 STAMP=924520084
deleted archived log
archived log file name=+DATA/XXXXXXX_AZ/ARCHIVELOG/2016_10_06/thread_2_seq_12862.23747.924520985 RECID=73248 STAMP=924520985
deleted archived log
archived log file name=+DATA/XXXXXXX_AZ/ARCHIVELOG/2016_10_06/thread_2_seq_12863.23749.924521885 RECID=73254 STAMP=924521884
Deleted 21176 objects

RMAN-08591: WARNING: invalid archived log deletion policy

RMAN>

Now the archive log files are deleted successfully!
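
To double-check that nothing older than the cutoff is still registered, the same time clause works with LIST:

RMAN> list archivelog until time 'sysdate-14';

An empty result confirms the cleanup.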


Step 3  Change the RMAN deletion policy back to the original value.


RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;

old RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO STANDBY;
new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
new RMAN configuration parameters are successfully stored


Step 4  Verify the RMAN settings.


 RMAN> show all;

RMAN configuration parameters for database with db_unique_name XXXXXXX_AZ are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u04/rmanbackup/XXXXXXX1/auto_%d_%F.ctl';
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET PARALLELISM 1;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+data/XXXXXXX_az/controlfile/snap_XXXXXXX.ctl';

RMAN>


Permanent solution:  GoldenGate had been implemented earlier for data replication during the upgrade to Oracle 12c. GoldenGate is no longer used, but a couple of its extract processes were still present, and they caused the archive log deletion issue.

Once the remaining GoldenGate extract processes were stopped and removed, the archive log deletion job started deleting all the obsolete archive log files.

Here are the steps I followed to remove the remaining extract processes:

GGSCI (hostname) 2> info all

Program     Status      Group       Lag at Chkpt    Time Since Chkpt

MANAGER     RUNNING
EXTRACT     ABENDED     FEEXT       00:00:00        5234:47:30
EXTRACT     RUNNING     FEPMP       00:00:00        00:00:05
EXTRACT     STOPPED     GWEXT       00:00:02        4671:09:27
EXTRACT     STOPPED     GWPMP       00:00:00        4671:09:09
EXTRACT     STOPPED     S           00:00:01        6650:16:15





ggsci> delete extract FEEXT 

ggsci> stop extract FEPMP 

ggsci> delete extract FEPMP 

ggsci> delete extract GWEXT 

ggsci> delete extract GWPMP 

ggsci> delete extract S 

ggsci> stop mgr 

It will prompt (y/n); answer y and press Enter to continue. (There is no DELETE command for the manager process; stopping it is enough.)

ggsci> info all 
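
Note: if an extract was registered with the database for integrated capture, the capture process can survive the GGSCI delete and keep pinning archive logs. A cleanup sketch with hypothetical names (ggadmin and OGG$CAP_FEEXT are placeholders; take the real capture name from DBA_CAPTURE):

ggsci> dblogin userid ggadmin, password ********
ggsci> unregister extract FEEXT database

If the extract has already been deleted, stop and drop the orphaned capture from SQL*Plus instead:

SQL> exec DBMS_CAPTURE_ADM.STOP_CAPTURE('OGG$CAP_FEEXT');
SQL> exec DBMS_CAPTURE_ADM.DROP_CAPTURE('OGG$CAP_FEEXT');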





Friday, December 2, 2016

How to drop a RAC database manually

This post demonstrates how to drop a database manually in a RAC environment on Oracle 12c (12.1.0.2.0).

I have a two-node RAC cluster and the database name is TEST. The instance names are TEST1 and TEST2.
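
For reference, DBCA can do the whole removal non-interactively, including the cluster cleanup; the manual steps below show what happens under the hood. A sketch of the DBCA route (12c-style flags; supply the real SYS password):

[oracle@usbenhost01 ~]$ dbca -silent -deleteDatabase -sourceDB TEST -sysDBAUserName sys -sysDBAPassword <password>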

Step 1  Verify the instances and cluster resources.

   
[oracle@usbenhost01 ~]$ srvctl status database -d TEST
Instance TEST1 is running on node usbenhost01
Instance TEST2 is running on node usbenhost02


[oracle@usbenhost01 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details     
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
ora.TEST.dg
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
ora.asm
               ONLINE  ONLINE       usbenhost01              Started,STABLE
               ONLINE  ONLINE       usbenhost02              Started,STABLE
ora.net1.network
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
ora.ons
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       usbenhost02              STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       usbenhost01              169.254.59.131 192.1
                                                             68.1.101,STABLE
ora.cvu
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       usbenhost01              Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       usbenhost02              STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.test.db
      1        ONLINE  ONLINE       usbenhost01              Open,STABLE
      2        ONLINE  ONLINE       usbenhost02              Open,STABLE
ora.usben.db
      1        ONLINE  ONLINE       usbenhost01              Open,STABLE
      2        ONLINE  ONLINE       usbenhost02              Open,STABLE
ora.usbenhost01.vip
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.usbenhost02.vip
      1        ONLINE  ONLINE       usbenhost02              STABLE
--------------------------------------------------------------------------------


Step 2  Shut down both instances.


SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
TEST1

SQL>  shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
TEST2

SQL>  shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
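
Both instances can also be stopped in one command from any node, which keeps the cluster resource state in sync:

[oracle@usbenhost01 ~]$ srvctl stop database -d TEST -o immediate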

Step 3  Mount the first instance and update the cluster parameter. DROP DATABASE requires the database to be mounted in exclusive mode, so cluster_database must be set to FALSE first.


 SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
TEST1
SQL> startup mount
ORACLE instance started.

Total System Global Area  524288000 bytes
Fixed Size                  2926320 bytes
Variable Size             436209936 bytes
Database Buffers           79691776 bytes
Redo Buffers                5459968 bytes
Database mounted.
SQL>
SQL> sho parameter cluster_data

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     TRUE
cluster_database_instances           integer     2
SQL>  alter system set cluster_database=FALSE scope=spfile;

System altered.

Step 4  Mount the first instance in restricted mode and drop the database.


SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.

SQL> startup mount restrict
ORACLE instance started.

Total System Global Area  524288000 bytes
Fixed Size                  2926320 bytes
Variable Size             415238416 bytes
Database Buffers          100663296 bytes
Redo Buffers                5459968 bytes
Database mounted.
SQL> sho parameter cluster_data

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     FALSE
cluster_database_instances           integer     1
SQL> drop database;

Database dropped.

Monitor the alert log while dropping the database. DROP DATABASE removes the datafiles, online redo logs, and control files known to the database, but not archived logs or backups, which is why the OS/ASM cleanup in the last step is still needed.

Step 5  Update the OCR by removing the database resource.


[oracle@usbenhost01 ~]$ srvctl status database -d TEST
Instance TEST1 is not running on node usbenhost01
Instance TEST2 is not running on node usbenhost02
[oracle@usbenhost01 ~]$
[oracle@usbenhost01 ~]$ srvctl remove database -d TEST
Remove the database TEST? (y/[n]) y
[oracle@usbenhost01 ~]$   

Step 6  Verify the removal.


 [oracle@usbenhost01 ~]$  srvctl status database -d TEST
PRCD-1120 : The resource for database TEST could not be found.
PRCR-1001 : Resource ora.test.db does not exist
[oracle@usbenhost01 ~]$  crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
ora.TEST.dg
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
ora.asm
               ONLINE  ONLINE       usbenhost01              Started,STABLE
               ONLINE  ONLINE       usbenhost02              Started,STABLE
ora.net1.network
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
ora.ons
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       usbenhost02              STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       usbenhost01              169.254.59.131 192.1
                                                             68.1.101,STABLE
ora.cvu
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       usbenhost01              Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       usbenhost02              STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.usben.db
      1        ONLINE  ONLINE       usbenhost01              Open,STABLE
      2        ONLINE  ONLINE       usbenhost02              Open,STABLE
ora.usbenhost01.vip
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.usbenhost02.vip
      1        ONLINE  ONLINE       usbenhost02              STABLE
--------------------------------------------------------------------------------
[oracle@usbenhost01 ~]$   

Step 7  At the OS level, clean up any files that remain for this database. In ASM, clean up any leftover files for this database as well.
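
A sketch of the ASM side of that cleanup, run with the Grid Infrastructure environment set (the +TEST/TEST path assumes the database directory lives in the TEST disk group shown earlier; always ls before rm):

[oracle@usbenhost01 ~]$ asmcmd
ASMCMD> ls +TEST
ASMCMD> rm -rf +TEST/TEST
ASMCMD> exit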