Wednesday, December 28, 2016

RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process

My database is Oracle 12c (12.1.0.2.0). Obsolete archive log files on one of my databases were not getting deleted; the delete failed with the error below:

RMAN>delete noprompt force archivelog until time 'sysdate-14';
RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process
archived log file name=+DATA/XXX_AZ/ARCHIVELOG/2016_09_28/thread_2_seq_12062.22129.923740541 thread=2 sequence=12062
RMAN>


As a temporary workaround, I deleted the old archive log files manually using the steps below.

Here are the current RMAN settings:


RMAN> show all;
using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name FRONTEND_AZ are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u04/rmanbackup/FRONTEND1/auto_%d_%F.ctl';
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET PARALLELISM 1;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+data/xxx_az/controlfile/snap_XXX.ctl';

RMAN>


Temporary solution:

Step 1  Change the archive log deletion policy to SHIPPED TO STANDBY. (RMAN flags this policy as invalid because no standby is configured here, but with it in place the DELETE in Step 2 succeeds.)

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO STANDBY;

new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO STANDBY;
new RMAN configuration parameters are successfully stored
RMAN-08591: WARNING: invalid archived log deletion policy


Step 2  Delete the obsolete archive log files.

 RMAN>delete noprompt force archivelog until time 'sysdate-14';


archived log file name=+DATA/XXXXXXX_AZ/ARCHIVELOG/2016_10_06/thread_2_seq_12858.23739.924517385 RECID=73224 STAMP=924517385
deleted archived log
archived log file name=+DATA/XXXXXXX_AZ/ARCHIVELOG/2016_10_06/thread_2_seq_12859.23741.924518285 RECID=73230 STAMP=924518284
deleted archived log
archived log file name=+DATA/XXXXXXX_AZ/ARCHIVELOG/2016_10_06/thread_2_seq_12860.23743.924519185 RECID=73236 STAMP=924519185
deleted archived log
archived log file name=+DATA/XXXXXXX_AZ/ARCHIVELOG/2016_10_06/thread_2_seq_12861.23745.924520085 RECID=73242 STAMP=924520084
deleted archived log
archived log file name=+DATA/XXXXXXX_AZ/ARCHIVELOG/2016_10_06/thread_2_seq_12862.23747.924520985 RECID=73248 STAMP=924520985
deleted archived log
archived log file name=+DATA/XXXXXXX_AZ/ARCHIVELOG/2016_10_06/thread_2_seq_12863.23749.924521885 RECID=73254 STAMP=924521884
Deleted 21176 objects

RMAN-08591: WARNING: invalid archived log deletion policy

RMAN>

Now the archive log files are getting deleted successfully!


Step 3  Change the RMAN deletion policy back to the original value.


RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;

old RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO STANDBY;
new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
new RMAN configuration parameters are successfully stored


Step 4  Verify the RMAN settings.


 RMAN> show all;

RMAN configuration parameters for database with db_unique_name XXXXXXX_AZ are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u04/rmanbackup/XXXXXXX1/auto_%d_%F.ctl';
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET PARALLELISM 1;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+data/XXXXXXX_az/controlfile/snap_XXXXXXX.ctl';

RMAN>


Permanent solution:  GoldenGate had been implemented earlier for data replication during the upgrade to Oracle 12c. GoldenGate is no longer used, but a couple of GoldenGate extract processes were still present, and they caused the archive log deletion issue.

Once the remaining GoldenGate extract processes were stopped and deleted, the archive log deletion job started deleting all the obsolete archive log files.

Here are the steps I followed to remove the remaining extract processes:

GGSCI (hostname) 2> info all

Program     Status      Group       Lag at Chkpt    Time Since Chkpt

MANAGER     RUNNING
EXTRACT     ABENDED     FEEXT       00:00:00        5234:47:30
EXTRACT     RUNNING     FEPMP       00:00:00        00:00:05
EXTRACT     STOPPED     GWEXT       00:00:02        4671:09:27
EXTRACT     STOPPED     GWPMP       00:00:00        4671:09:09
EXTRACT     STOPPED     S           00:00:01        6650:16:15





ggsci> delete extract FEEXT 

ggsci> stop extract FEPMP 

ggsci> delete extract FEPMP 

ggsci> delete extract GWEXT 

ggsci> delete extract GWPMP 

ggsci> delete extract S 

ggsci> stop mgr 

It will prompt (y/n); answer y and continue (press Enter). 

ggsci> delete mgr 

ggsci> info all 
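
From the database side, you can also confirm whether any capture processes are still registered and therefore still pinning archived logs. A minimal check, assuming the standard DBA_CAPTURE dictionary view; once the leftover extracts are removed, this should return no rows:

SQL> col capture_name format a30
SQL> select capture_name, capture_type, status, required_checkpoint_scn
  2  from dba_capture;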





Friday, December 2, 2016

How to drop RAC database manually

I am demonstrating how to drop a database manually in a RAC environment, on Oracle 12c (12.1.0.2.0).

I have a two-node RAC, and the database name is TEST. The instance names are TEST1 & TEST2.

Step 1  Verify the instance

   
[oracle@usbenhost01 ~]$ srvctl status database -d TEST
Instance TEST1 is running on node usbenhost01
Instance TEST2 is running on node usbenhost02


[oracle@usbenhost01 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details     
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
ora.TEST.dg
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
ora.asm
               ONLINE  ONLINE       usbenhost01              Started,STABLE
               ONLINE  ONLINE       usbenhost02              Started,STABLE
ora.net1.network
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
ora.ons
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       usbenhost02              STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       usbenhost01              169.254.59.131 192.1
                                                             68.1.101,STABLE
ora.cvu
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       usbenhost01              Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       usbenhost02              STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.test.db
      1        ONLINE  ONLINE       usbenhost01              Open,STABLE
      2        ONLINE  ONLINE       usbenhost02              Open,STABLE
ora.usben.db
      1        ONLINE  ONLINE       usbenhost01              Open,STABLE
      2        ONLINE  ONLINE       usbenhost02              Open,STABLE
ora.usbenhost01.vip
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.usbenhost02.vip
      1        ONLINE  ONLINE       usbenhost02              STABLE
--------------------------------------------------------------------------------


Step 2  Shut down both instances


SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
TEST1

SQL>  shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
TEST2

SQL>  shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
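
Alternatively, both instances can be stopped in one shot with srvctl from any node; an equivalent sketch:

[oracle@usbenhost01 ~]$ srvctl stop database -d TEST -o immediate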

Step 3  Mount the first instance and update the cluster parameter. DROP DATABASE requires an exclusive mount, so cluster_database must be set to FALSE first.


 SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
TEST1
SQL> startup mount
ORACLE instance started.

Total System Global Area  524288000 bytes
Fixed Size                  2926320 bytes
Variable Size             436209936 bytes
Database Buffers           79691776 bytes
Redo Buffers                5459968 bytes
Database mounted.
SQL>
SQL> sho parameter cluster_data

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     TRUE
cluster_database_instances           integer     2
SQL>  alter system set cluster_database=FALSE scope=spfile;

System altered.

Step 4  Mount the first instance in restricted mode and drop the database.


SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.

SQL> startup mount restrict
ORACLE instance started.

Total System Global Area  524288000 bytes
Fixed Size                  2926320 bytes
Variable Size             415238416 bytes
Database Buffers          100663296 bytes
Redo Buffers                5459968 bytes
Database mounted.
SQL> sho parameter cluster_data

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     FALSE
cluster_database_instances           integer     1
SQL> drop database;

Database dropped.

Monitor the alert log while dropping the database.
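
A simple way to follow it, assuming the standard ADR layout under $ORACLE_BASE:

[oracle@usbenhost01 ~]$ tail -f $ORACLE_BASE/diag/rdbms/test/TEST1/trace/alert_TEST1.log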

Step 5  Update the OCR


[oracle@usbenhost01 ~]$ srvctl status database -d TEST
Instance TEST1 is not running on node usbenhost01
Instance TEST2 is not running on node usbenhost02
[oracle@usbenhost01 ~]$
[oracle@usbenhost01 ~]$ srvctl remove database -d TEST
Remove the database TEST? (y/[n]) y
[oracle@usbenhost01 ~]$   

Step 6  Verify the instance removal.


 [oracle@usbenhost01 ~]$  srvctl status database -d TEST
PRCD-1120 : The resource for database TEST could not be found.
PRCR-1001 : Resource ora.test.db does not exist
[oracle@usbenhost01 ~]$  crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details                                                                                                                                                                   
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
ora.TEST.dg
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
ora.asm
               ONLINE  ONLINE       usbenhost01              Started,STABLE
               ONLINE  ONLINE       usbenhost02              Started,STABLE
ora.net1.network
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
ora.ons
               ONLINE  ONLINE       usbenhost01              STABLE
               ONLINE  ONLINE       usbenhost02              STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       usbenhost02              STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       usbenhost01              169.254.59.131 192.1
                                                             68.1.101,STABLE
ora.cvu
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       usbenhost01              Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       usbenhost02              STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.usben.db
      1        ONLINE  ONLINE       usbenhost01              Open,STABLE
      2        ONLINE  ONLINE       usbenhost02              Open,STABLE
ora.usbenhost01.vip
      1        ONLINE  ONLINE       usbenhost01              STABLE
ora.usbenhost02.vip
      1        ONLINE  ONLINE       usbenhost02              STABLE
--------------------------------------------------------------------------------
[oracle@usbenhost01 ~]$   

Step 7  At the OS level, clean up any remaining files related to the database. Then check the ASM disk groups and clean up any files left over from this database.
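
For the ASM side, asmcmd can list and remove anything left behind. A sketch, assuming the database files lived under +DATA/TEST (run with the Grid Infrastructure environment set, e.g. ORACLE_SID=+ASM1):

[oracle@usbenhost01 ~]$ asmcmd ls +DATA/TEST
[oracle@usbenhost01 ~]$ asmcmd rm -rf +DATA/TEST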

Sunday, October 16, 2016

ORA-27303: additional information: startup egid = 54322 (dba), current egid = 54321 (oinstall)

My database is a two-node Oracle 12c RAC database, and I have a standby database that is also a two-node RAC.

I was getting the below error during business hours. We were not able to connect to the database; Oracle stopped accepting new connections. Existing connections were fine, though, and transactions were not impacted.

[oracle@usbenhost1 ~]$ sqlplus gthangavelu

SQL*Plus: Release 12.1.0.2.0 Production on Fri Oct 14 15:22:15 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter password:
ERROR:
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54322 (dba), current egid = 54321 (oinstall)


Enter user-name:


I was able to connect to the standby database. Some My Oracle Support (Metalink) research showed that this is related to a file permission issue on $ORACLE_HOME/bin/oracle.

I looked at the $ORACLE_HOME/bin/oracle file permissions on the standby database; they differed from the primary. I concluded that the $ORACLE_HOME/bin/oracle file permissions were the root cause of this connectivity issue.
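
A quick way to compare the binary across all four nodes (a sketch, assuming passwordless ssh between the hosts):

[oracle@usbenhost1 ~]$ for h in usbenhost1 usbenhost2 usbenhost3 usbenhost4; do
>   echo "== $h =="
>   ssh $h "ls -l /u01/app/oracle/product/12.1.0.2/db_1/bin/oracle"
> done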

Primary database, first node privileges:
[oracle@usbenhost1 bin]$ pwd
/u01/app/oracle/product/12.1.0.2/db_1/bin
[oracle@usbenhost1 bin]$ ls -ltr oracle
-rwxr-x--x. 1 oracle oinstall 323762228 Dec 28  2014 oracle
[oracle@usbenhost1 bin]$

Primary database, second node privileges:
[oracle@usbenhost2 bin]$ pwd
/u01/app/oracle/product/12.1.0.2/db_1/bin
[oracle@usbenhost2 bin]$ ls -ltr oracle
-rwxr-x--x. 1 oracle oinstall 323762228 Dec 28  2014 oracle
[oracle@usbenhost2 bin]$

Standby database, first node privileges:
[oracle@usbenhost3 bin]$ pwd
/u01/app/oracle/product/12.1.0.2/db_1/bin
[oracle@usbenhost3 bin]$ ls -ltr oracle
-rwsr-s--x. 1 oracle dba 323762276 Feb 23  2015 oracle
[oracle@usbenhost3 bin]$

Standby database, second node privileges:
[oracle@usbenhost4 bin]$ pwd
/u01/app/oracle/product/12.1.0.2/db_1/bin
[oracle@usbenhost4 bin]$ ls -ltr oracle
-rwsr-s--x. 1 oracle dba 323762276 Feb 23  2015 oracle
[oracle@usbenhost4 bin]$


I changed the permissions as below on both primary nodes and was then able to connect to the database. chmod 6751 restores the setuid and setgid bits (the 's' flags seen on the standby binaries).

[oracle@usbenhost1 bin]$ pwd
/u01/app/oracle/product/12.1.0.2/db_1/bin
[oracle@usbenhost1 bin]$ ls -ltr oracle
-rwxr-x--x. 1 oracle dba 323762228 Dec 28  2014 oracle
[oracle@usbenhost1 bin]$ chmod 6751 oracle
[oracle@usbenhost1 bin]$ ls -ltr oracle
-rwsr-s--x. 1 oracle dba 323762276 Feb 23  2015 oracle
[oracle@usbenhost1 bin]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Oct 14 17:24:33 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL>


[oracle@usbenhost2 bin]$ pwd
/u01/app/oracle/product/12.1.0.2/db_1/bin
[oracle@usbenhost2 bin]$ ls -ltr oracle
-rwxr-x--x. 1 oracle dba 323762228 Dec 28  2014 oracle
[oracle@usbenhost2 bin]$ chmod 6751 oracle
[oracle@usbenhost2 bin]$ ls -ltr oracle
-rwsr-s--x. 1 oracle dba 323762228 Dec 28  2014 oracle
[oracle@usbenhost2 bin]$ sqlplus  / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Oct 14 17:23:23 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL>

Issue resolved now!

This issue can also happen when the $GI_HOME/bin/oracle permissions are changed in the same way; in my case, the problem was with the $ORACLE_HOME/bin/oracle permissions.

Hope this post is helpful!

Tuesday, March 29, 2016

Removing Node from oracle12c RAC Cluster

In this article, I am removing a node from an existing Oracle 12c cluster. I have a three-node RAC, and I am going to remove the third node from the cluster.

Environment:

Hostnames   : ractest1, ractest2, ractest3
Instances   : usben1, usben2, usben3
DB name     : usben
OS          : Red Hat Enterprise Linux Server release 6.4 (Santiago)
DB version  : 12.1.0.2.0

Goal : Remove the ractest3 node from the cluster.

High level steps:
  • Pre verification
  • Removing the Oracle database instance
  • Removing the RDBMS software
  • Removing the node from the cluster
  • Post verification
Pre Verification


 [root@RACTEST1 ~]# sudo su - oracle
[oracle@RACTEST1 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /grid/app/oracle
[oracle@RACTEST1 ~]$ olsnodes
ractest1
ractest2
ractest3
[oracle@RACTEST1 ~]$ crsctl get cluster mode status
Cluster is running in "standard" mode
[oracle@RACTEST1 ~]$ srvctl config gns
PRKF-1110 : Neither GNS server nor GNS client is configured on this cluster
[oracle@RACTEST1 ~]$ oifcfg getif
eth0  192.168.56.0  global  public
eth1  192.168.1.0  global  cluster_interconnect
[oracle@RACTEST1 ~]$ crsctl get node role config
Node 'ractest1' configured role is 'hub'
[oracle@RACTEST1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode disabled
[oracle@RACTEST1 ~]$ asmcmd showclusterstate
Normal
[oracle@RACTEST1 ~]$ srvctl status asm -detail
ASM is running on ractest2,ractest3,ractest1
ASM is enabled.
[oracle@RACTEST1 ~]$  crsctl get node role config -all
Node 'ractest1' configured role is 'hub'
Node 'ractest2' configured role is 'hub'
Node 'ractest3' configured role is 'hub'
[oracle@RACTEST1 ~]$  crsctl get node role status -all
Node 'ractest1' active role is 'hub'
Node 'ractest2' active role is 'hub'
Node 'ractest3' active role is 'hub'
[oracle@RACTEST1 ~]$  crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details                                                                                                                                                                   
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
               ONLINE  ONLINE       ractest3                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
               OFFLINE OFFLINE      ractest3                 STABLE
ora.TEST.dg
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
               ONLINE  ONLINE       ractest3                 STABLE
ora.VOTE.dg
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
               ONLINE  ONLINE       ractest3                 STABLE
ora.VOTE1.dg
               ONLINE  OFFLINE      ractest1                 STABLE
               ONLINE  OFFLINE      ractest2                 STABLE
               ONLINE  OFFLINE      ractest3                 STABLE
ora.VOTE2.dg
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
               ONLINE  ONLINE       ractest3                 STABLE
ora.asm
               ONLINE  ONLINE       ractest1                 Started,STABLE
               ONLINE  ONLINE       ractest2                 Started,STABLE
               ONLINE  ONLINE       ractest3                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
               ONLINE  ONLINE       ractest3                 STABLE
ora.ons
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
               ONLINE  ONLINE       ractest3                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       ractest2                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       ractest3                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       ractest1                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       ractest1                 169.254.66.3 192.16
                                                             8.1.101,STABLE
ora.cvu
      1        ONLINE  ONLINE       ractest1                 STABLE
ora.mgmtdb
      1        OFFLINE OFFLINE                               STABLE
ora.oc4j
      1        ONLINE  ONLINE       ractest1                 STABLE
ora.ractest1.vip
      1        ONLINE  ONLINE       ractest1                 STABLE
ora.ractest2.vip
      1        ONLINE  ONLINE       ractest2                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       ractest2                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       ractest3                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       ractest1                 STABLE
ora.usben.db
      1        ONLINE  ONLINE       ractest1                 Open,STABLE
      2        ONLINE  ONLINE       ractest2                 Open,STABLE
      3        ONLINE  ONLINE       ractest3                 Open,STABLE
--------------------------------------------------------------------------------
[oracle@RACTEST1 ~]$

[oracle@RACTEST1 ~]$ exit
logout
[root@RACTEST1 ~]# olsnodes -s
ractest1        Active
ractest2        Active
ractest3        Active
[root@RACTEST1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   5d17422445e54f1abf131f15b967c07f (ORCL:VOTE2) [VOTE2]
Located 1 voting disk(s).
[root@RACTEST1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1752
         Available space (kbytes) :     407816
         ID                       :  540510110
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@RACTEST1 ~]# srvctl status database -d usben
Instance usben1 is running on node ractest1
Instance usben2 is running on node ractest2
Instance usben3 is running on node ractest3
[root@RACTEST1 ~]# srvctl config service -d usben
[root@RACTEST1 ~]# srvctl status service -d usben
[root@RACTEST1 ~]#

Removing Oracle Database Instance 

Log in as the oracle account and run dbca in silent mode. Run it from a node that will remain in the cluster; in this case, I can log in to either ractest1 or ractest2. 

 [root@RACTEST1 ~]# sudo su - oracle
[oracle@RACTEST1 ~]$ . oraenv
ORACLE_SID = [oracle] ? usben1
The Oracle base has been set to /ora/app/oracle
[oracle@RACTEST1 ~]$ which dbca
/ora/app/oracle/product/12.1.0.1/db_1/bin/dbca
[oracle@RACTEST1 ~]$ dbca -silent -deleteInstance -nodeList ractest3 -gdbName usben -instanceName usben3 -sysDBAUserName sys -sysDBAPassword admin123
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/ora/app/oracle/cfgtoollogs/dbca/usben1.log" for further details.
[oracle@RACTEST1 ~]$ cat /ora/app/oracle/cfgtoollogs/dbca/usben1.log
The Database Configuration Assistant will delete the Oracle instance and its associated OFA directory structure. All information about this instance will be deleted.

Do you want to proceed?
Deleting instance
DBCA_PROGRESS : 1%
DBCA_PROGRESS : 2%
DBCA_PROGRESS : 6%
DBCA_PROGRESS : 13%
DBCA_PROGRESS : 20%
DBCA_PROGRESS : 26%
DBCA_PROGRESS : 33%
DBCA_PROGRESS : 40%
DBCA_PROGRESS : 46%
DBCA_PROGRESS : 53%
DBCA_PROGRESS : 60%
DBCA_PROGRESS : 66%
Completing instance management.
DBCA_PROGRESS : 100%
Instance "usben3" deleted successfully from node "ractest3".
[oracle@RACTEST1 ~]$

[oracle@RACTEST1 ~]$ srvctl status database -d usben
Instance usben1 is running on node ractest1
Instance usben2 is running on node ractest2
[oracle@RACTEST1 ~]$ srvctl config database -d usben -v
Database unique name: usben
Database name:
Oracle home: /ora/app/oracle/product/12.1.0.1/db_1
Oracle user: oracle
Spfile: +DATA/USBEN/PARAMETERFILE/spfileusben.ora
Password file: +DATA/USBEN/PASSWORDFILE/orapwusben
Domain: localdomain
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: oper
Database instances: usben1,usben2
Configured nodes: ractest1,ractest2
Database is administrator managed

Check the redo log thread and UNDO tablespace for the removed instance.  
sys@usben1> select inst_id, instance_name, status,
  2  to_char(startup_time,'DD-MON-YYYY HH24:MI:SS') as "START_TIME"
  3  from gv$instance order by inst_id;

   INST_ID INSTANCE_NAME    STATUS       START_TIME
---------- ---------------- ------------ --------------------
         1 usben1           OPEN         29-MAR-2016 08:12:57
         2 usben2           OPEN         29-MAR-2016 09:11:19

sys@usben1> select thread#,instance from v$thread;

   THREAD# INSTANCE
---------- --------------------
         1 usben1
         2 usben2
sys@usben1>

sys@usben1> select group# from v$log where thread# = 3;

no rows selected

sys@usben1> select tablespace_name from dba_tablespaces where tablespace_name like '%UNDO%';

TABLESPACE_NAME
------------------------------
UNDOTBS1
UNDOTBS2

2 rows selected.

sys@usben1> exit

All undo and redo objects for the removed instance have been cleaned up.
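
In this case dbca cleaned everything up. Had the redo thread or undo tablespace still been present, they could have been removed manually; a hypothetical sketch (the group number and tablespace name are examples only):

sys@usben1> alter database disable thread 3;
sys@usben1> alter database drop logfile group 5;
sys@usben1> drop tablespace UNDOTBS3 including contents and datafiles;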

Check the Listener configuration.
[oracle@RACTEST1 ~]$ srvctl config listener -a
Name: LISTENER
Type: Database Listener
Network: 1, Owner: oracle
Home:
  /grid/app/12.1.0/grid on node(s) ractest3,ractest2,ractest1
End points: TCP:1521
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
[oracle@RACTEST1 ~]$

Oracle database instance usben3 has been successfully removed from node ractest3! Let us now remove the RDBMS software on the ractest3 node.

Removing RDBMS Software

Log in to the node that is being deleted and run the commands below. The first command updates the local inventory on ractest3 so that its node list contains only ractest3, removing ractest1 and ractest2 from it.


[oracle@RACTEST3 ~]$ . oraenv
ORACLE_SID = [oracle] ? usben3
The Oracle base has been set to /ora/app/oracle
[oracle@RACTEST3 ~]$ echo $ORACLE_HOME
/ora/app/oracle/product/12.1.0.1/db_1
[oracle@RACTEST3 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@RACTEST3 bin]$ pwd
/ora/app/oracle/product/12.1.0.1/db_1/oui/bin
[oracle@RACTEST3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={ractest3}" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3997 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@RACTEST3 bin]$

Run the following command on the ractest3 node to deinstall the Oracle home from ractest3.

[oracle@RACTEST3 bin]$ cd $ORACLE_HOME/deinstall
[oracle@RACTEST3 deinstall]$ pwd
/ora/app/oracle/product/12.1.0.1/db_1/deinstall
[oracle@RACTEST3 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /grid/app/oraInventory/logs/

############ ORACLE DECONFIG TOOL START ############


######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /ora/app/oracle/product/12.1.0.1/db_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /ora/app/oracle
Checking for existence of central inventory location /grid/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /grid/app/12.1.0/grid
The following nodes are part of this cluster: ractest3,ractest2,ractest1
Checking for sufficient temp space availability on node(s) : 'RACTEST3'

## [END] Install check configuration ##


Network Configuration check config START

Network de-configuration trace file location: /grid/app/oraInventory/logs/netdc_check2016-03-29_12-10-22-PM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /grid/app/oraInventory/logs/databasedc_check2016-03-29_12-10-30-PM.log

Use comma as separator when specifying list of values as input

Specify the list of database names that are configured locally on this node for this Oracle home. Local configurations of the discovered databases will be removed [govinddb3,usben3]: (hit Enter)
Database Check Configuration END
Oracle Configuration Manager check START
OCM check log file location : /grid/app/oraInventory/logs//ocm_check5734.log
Oracle Configuration Manager check END

######################### DECONFIG CHECK OPERATION END #########################


####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /grid/app/12.1.0/grid
The following nodes are part of this cluster: ractest3,ractest2,ractest1
The cluster node(s) on which the Oracle home deinstallation will be performed are:RACTEST3
Oracle Home selected for deinstall is: /ora/app/oracle/product/12.1.0.1/db_1
Inventory Location where the Oracle home registered is: /grid/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/grid/app/oraInventory/logs/deinstall_deconfig2016-03-29_00-10-13-PM.out'
Any error messages from this session will be written to: '/grid/app/oraInventory/logs/deinstall_deconfig2016-03-29_00-10-13-PM.err'

######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /grid/app/oraInventory/logs/databasedc_clean2016-03-29_12-17-36-PM.log

Network Configuration clean config START

Network de-configuration trace file location: /grid/app/oraInventory/logs/netdc_clean2016-03-29_12-17-36-PM.log

Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /grid/app/oraInventory/logs//ocm_clean5734.log
Oracle Configuration Manager clean END

######################### DECONFIG CLEAN OPERATION END #########################


####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
#######################################################################
############# ORACLE DECONFIG TOOL END #############

Using properties file /tmp/deinstall2016-03-29_00-09-19PM/response/deinstall_2016-03-29_00-10-13-PM.rsp
Location of logs /grid/app/oraInventory/logs/

############ ORACLE DEINSTALL TOOL START ############

####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/grid/app/oraInventory/logs/deinstall_deconfig2016-03-29_00-10-13-PM.out'
Any error messages from this session will be written to: '/grid/app/oraInventory/logs/deinstall_deconfig2016-03-29_00-10-13-PM.err'

######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to RACTEST3
Setting CLUSTER_NODES to RACTEST3
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2016-03-29_00-09-19PM/oraInst.loc
Setting oracle.installer.local to true

## [END] Preparing for Deinstall ##

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/ora/app/oracle/product/12.1.0.1/db_1' from the central inventory on the local node : Done

Delete directory '/ora/app/oracle/product/12.1.0.1/db_1' on the local node : Done

The Oracle Base directory '/ora/app/oracle' will not be removed on local node. The directory is not empty.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2016-03-29_00-09-19PM' on node 'RACTEST3'

## [END] Oracle install clean ##

######################### DEINSTALL CLEAN OPERATION END #########################

####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/ora/app/oracle/product/12.1.0.1/db_1' from the central inventory on the local node.
Successfully deleted directory '/ora/app/oracle/product/12.1.0.1/db_1' on the local node.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL TOOL END #############
[oracle@RACTEST3 deinstall]$



Run the command below on any node that remains in the cluster; in my case, either ractest1 or ractest2. This removes ractest3 from the node list on ractest1 and ractest2.

 [oracle@RACTEST1 ~]$ . oraenv
ORACLE_SID = [oracle] ? usben1
The Oracle base has been set to /ora/app/oracle
[oracle@RACTEST1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@RACTEST1 bin]$ pwd
/ora/app/oracle/product/12.1.0.1/db_1/oui/bin
[oracle@RACTEST1 bin]$  ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={ractest1,ractest2}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3993 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@RACTEST1 bin]$


Now verify the inventory and make sure ractest3 is completely removed. Run the check below on any node that remains in the cluster; in my case, either ractest1 or ractest2.
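
One way is to search the central inventory for the node name; a sketch, assuming the inventory lives under /grid/app/oraInventory (as the prompts later in this post show):

[oracle@RACTEST1 ~]$ grep -i ractest3 /grid/app/oraInventory/ContentsXML/inventory.xml

No output from grep means ractest3 is no longer referenced.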

I ran it on both nodes, and ractest3 is removed from the inventory.

Run the equivalent check below on ractest3.
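
This time look for the Oracle home that was just deinstalled; a sketch:

[oracle@RACTEST3 ~]$ grep -i db_1 /grid/app/oraInventory/ContentsXML/inventory.xml

No output means the home is no longer registered in the local inventory.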

The Oracle home information is completely removed on the ractest3 node, which shows that the RDBMS software has been fully removed from ractest3.

Removing Node from the Cluster

Run the command below, make sure the node we want to delete is active, and check whether it is pinned. If it is pinned, unpin it first (see the sketch after the listings below).


[root@RACTEST1 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /grid/app/oracle
[root@RACTEST1 ~]# pwd
/root
[root@RACTEST1 ~]# olsnodes -s -t
ractest1        Active  Unpinned
ractest2        Active  Unpinned
ractest3        Active  Unpinned
[root@RACTEST1 ~]#


[root@RACTEST3 ~]# . oraenv
ORACLE_SID = [+ASM3] ?
The Oracle base remains unchanged with value /grid/app/oracle
[root@RACTEST3 ~]# olsnodes -s -t
ractest1        Active  Unpinned
ractest2        Active  Unpinned
ractest3        Active  Unpinned
[root@RACTEST3 ~]#
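
Here all nodes are unpinned, so no action is needed. If ractest3 had shown as Pinned, it would have to be unpinned first; a sketch:

[root@RACTEST1 ~]# crsctl unpin css -n ractest3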


Disable the Oracle Clusterware applications and daemons on ractest3
[root@RACTEST3 ~]# cd $ORACLE_HOME/crs/install
[root@RACTEST3 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network 1 exists
Subnet IPv4: 192.168.56.0/255.255.255.0/eth0, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node ractest1
VIP Name: RACTEST1-vip
VIP IPv4 Address: 192.168.56.113
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node ractest2
VIP Name: RACTEST2-vip
VIP IPv4 Address: 192.168.56.114
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:
PRKO-2313 : A VIP named ractest3 does not exist.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'ractest3'
CRS-2673: Attempting to stop 'ora.crsd' on 'ractest3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'ractest3'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'ractest3'
CRS-2673: Attempting to stop 'ora.TEST.dg' on 'ractest3'
CRS-2673: Attempting to stop 'ora.VOTE2.dg' on 'ractest3'
CRS-2677: Stop of 'ora.DATA.dg' on 'ractest3' succeeded
CRS-2677: Stop of 'ora.VOTE2.dg' on 'ractest3' succeeded
CRS-2677: Stop of 'ora.TEST.dg' on 'ractest3' succeeded
CRS-2673: Attempting to stop 'ora.VOTE.dg' on 'ractest3'
CRS-2677: Stop of 'ora.VOTE.dg' on 'ractest3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'ractest3'
CRS-2677: Stop of 'ora.asm' on 'ractest3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'ractest3' has completed
CRS-2677: Stop of 'ora.crsd' on 'ractest3' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'ractest3'
CRS-2673: Attempting to stop 'ora.evmd' on 'ractest3'
CRS-2673: Attempting to stop 'ora.storage' on 'ractest3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'ractest3'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'ractest3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'ractest3'
CRS-2677: Stop of 'ora.storage' on 'ractest3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'ractest3'
CRS-2677: Stop of 'ora.drivers.acfs' on 'ractest3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'ractest3' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'ractest3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'ractest3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'ractest3' succeeded
CRS-2677: Stop of 'ora.asm' on 'ractest3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'ractest3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'ractest3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'ractest3'
CRS-2677: Stop of 'ora.cssd' on 'ractest3' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'ractest3'
CRS-2677: Stop of 'ora.crf' on 'ractest3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'ractest3'
CRS-2677: Stop of 'ora.gipcd' on 'ractest3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'ractest3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2016/03/29 14:17:07 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.

2016/03/29 14:17:31 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.

error: package cvuqdisk is not installed
2016/03/29 14:17:32 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node

[root@RACTEST3 install]#

Run the following command as root from a node that remains in the cluster to update the Clusterware configuration and delete the node from the cluster.

[root@RACTEST1 ~]# crsctl delete node -n ractest3
CRS-4661: Node ractest3 successfully deleted.
[root@RACTEST1 ~]# olsnodes -s -t
ractest1        Active  Unpinned
ractest2        Active  Unpinned
[root@RACTEST1 ~]#

As the Oracle Grid owner, run the below command on the node being removed to update the inventory.

[root@RACTEST3 bin]# sudo su - oracle
[oracle@RACTEST3 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM3
The Oracle base has been set to /grid/app/oracle
[oracle@RACTEST3 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@RACTEST3 bin]$ echo $ORACLE_HOME
/grid/app/12.1.0/grid
[oracle@RACTEST3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/grid/app/12.1.0/grid "CLUSTER_NODES={ractest3}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3999 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@RACTEST3 bin]$

As the Oracle Grid owner, run the deinstall command from the node being removed to delete the Oracle Grid Infrastructure software.

[root@RACTEST3 bin]# sudo su - oracle
[oracle@RACTEST3 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM3
The Oracle base has been set to /grid/app/oracle
[oracle@RACTEST3 ~]$ cd /grid/app/12.1.0/grid/deinstall/
[oracle@RACTEST3 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /grid/app/oraInventory/logs/

As the Grid owner, execute runInstaller (without the -local option) from one of the nodes that remains in the cluster. This updates the inventories with the list of nodes that are to remain in the cluster. 

[oracle@RACTEST1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ?
The Oracle base remains unchanged with value /grid/app/oracle
[oracle@RACTEST1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@RACTEST1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/grid/app/12.1.0/grid "CLUSTER_NODES={ractest1,ractest2}" CRS=TRUE -silent
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3993 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@RACTEST1 bin]$


Post Verification

Check the inventory on either ractest1 or ractest2 and make sure ractest3 is completely gone.
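
One way is to look at the node lists recorded in the central inventory (the ContentsXML directory seen in the prompts below); only ractest1 and ractest2 should appear:

[oracle@RACTEST1 ~]$ cd /grid/app/oraInventory/ContentsXML
[oracle@RACTEST1 ContentsXML]$ grep -i "node name" inventory.xml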

Run the cluvfy command to perform the post check for node removal.

[oracle@RACTEST1 ContentsXML]$  cluvfy stage -post nodedel -n ractest3 -verbose

Performing post-checks for node removal

Checking CRS integrity...
The Oracle Clusterware is healthy on node "ractest1"

CRS integrity check passed

Clusterware version consistency passed.
Result:
Node removal check passed

Post-check for node removal was successful.
[oracle@RACTEST1 ContentsXML]$

Check the cluster and local resources and make sure ractest3 no longer appears.


[oracle@RACTEST1 ContentsXML]$  olsnodes -s -t
ractest1        Active  Unpinned
ractest2        Active  Unpinned
[oracle@RACTEST1 ContentsXML]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
ora.LISTENER.lsnr
               ONLINE  OFFLINE      ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
ora.TEST.dg
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
ora.VOTE.dg
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
ora.VOTE1.dg
               ONLINE  OFFLINE      ractest1                 STABLE
               ONLINE  OFFLINE      ractest2                 STABLE
ora.VOTE2.dg
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
ora.asm
               ONLINE  ONLINE       ractest1                 Started,STABLE
               ONLINE  ONLINE       ractest2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
ora.ons
               ONLINE  ONLINE       ractest1                 STABLE
               ONLINE  ONLINE       ractest2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       ractest2                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       ractest1                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       ractest2                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       ractest1                 169.254.66.3 192.168
                                                             .1.101,STABLE
ora.cvu
      1        ONLINE  ONLINE       ractest2                 STABLE
ora.mgmtdb
      1        OFFLINE OFFLINE                               STABLE
ora.oc4j
      1        ONLINE  ONLINE       ractest1                 STABLE
ora.ractest1.vip
      1        ONLINE  INTERMEDIATE ractest2                 FAILED OVER,STABLE
ora.ractest2.vip
      1        ONLINE  ONLINE       ractest2                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       ractest2                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       ractest1                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       ractest2                 STABLE
ora.usben.db
      1        ONLINE  ONLINE       ractest1                 Open,STABLE
      2        ONLINE  ONLINE       ractest2                 Open,STABLE
--------------------------------------------------------------------------------
 [oracle@RACTEST1 ContentsXML]$ crsctl status res -t | grep -i ractest3
[oracle@RACTEST1 ContentsXML]$


Hope this post helps!