In this article, I remove a node from an existing Oracle 12c cluster. I have a three-node RAC and I am going to remove the third node (ractest3) from the cluster.
Environment :
Hostnames : ractest1, ractest2, ractest3
Instance : usben1, usben2, usben3
DB name : usben
OS : Red Hat Enterprise Linux Server release 6.4 (Santiago)
DB version : 12.1.0.2.0
Goal : Remove the ractest3 node from the cluster.
High level steps:
- Pre Verification
- Removing the Oracle database instance
- Removing the RDBMS software
- Removing the node from the cluster
- Post Verification
Pre Verification
[oracle@RACTEST1 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /grid/app/oracle
[oracle@RACTEST1 ~]$ olsnodes
ractest1
ractest2
ractest3
[oracle@RACTEST1 ~]$ crsctl get cluster mode status
Cluster is running in "standard" mode
[oracle@RACTEST1 ~]$ srvctl config gns
PRKF-1110 : Neither GNS server nor GNS client is configured on this cluster
[oracle@RACTEST1 ~]$ oifcfg getif
eth0 192.168.56.0 global public
eth1 192.168.1.0 global cluster_interconnect
[oracle@RACTEST1 ~]$ crsctl get node role config
Node 'ractest1' configured role is 'hub'
[oracle@RACTEST1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode disabled
[oracle@RACTEST1 ~]$ asmcmd showclusterstate
Normal
[oracle@RACTEST1 ~]$ srvctl status asm -detail
ASM is running on ractest2,ractest3,ractest1
ASM is enabled.
[oracle@RACTEST1 ~]$ crsctl get node role config -all
Node 'ractest1' configured role is 'hub'
Node 'ractest2' configured role is 'hub'
Node 'ractest3' configured role is 'hub'
[oracle@RACTEST1 ~]$ crsctl get node role status -all
Node 'ractest1' active role is 'hub'
Node 'ractest2' active role is 'hub'
Node 'ractest3' active role is 'hub'
[oracle@RACTEST1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE ractest1 STABLE
ONLINE ONLINE ractest2 STABLE
ONLINE ONLINE ractest3 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE ractest1 STABLE
ONLINE ONLINE ractest2 STABLE
OFFLINE OFFLINE ractest3 STABLE
ora.TEST.dg
ONLINE ONLINE ractest1 STABLE
ONLINE ONLINE ractest2 STABLE
ONLINE ONLINE ractest3 STABLE
ora.VOTE.dg
ONLINE ONLINE ractest1 STABLE
ONLINE ONLINE ractest2 STABLE
ONLINE ONLINE ractest3 STABLE
ora.VOTE1.dg
ONLINE OFFLINE ractest1 STABLE
ONLINE OFFLINE ractest2 STABLE
ONLINE OFFLINE ractest3 STABLE
ora.VOTE2.dg
ONLINE ONLINE ractest1 STABLE
ONLINE ONLINE ractest2 STABLE
ONLINE ONLINE ractest3 STABLE
ora.asm
ONLINE ONLINE ractest1 Started,STABLE
ONLINE ONLINE ractest2 Started,STABLE
ONLINE ONLINE ractest3 Started,STABLE
ora.net1.network
ONLINE ONLINE ractest1 STABLE
ONLINE ONLINE ractest2 STABLE
ONLINE ONLINE ractest3 STABLE
ora.ons
ONLINE ONLINE ractest1 STABLE
ONLINE ONLINE ractest2 STABLE
ONLINE ONLINE ractest3 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE ractest2 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE ractest3 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE ractest1 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE ractest1 169.254.66.3 192.168.1.101,STABLE
ora.cvu
1 ONLINE ONLINE ractest1 STABLE
ora.mgmtdb
1 OFFLINE OFFLINE STABLE
ora.oc4j
1 ONLINE ONLINE ractest1 STABLE
ora.ractest1.vip
1 ONLINE ONLINE ractest1 STABLE
ora.ractest2.vip
1 ONLINE ONLINE ractest2 STABLE
ora.scan1.vip
1 ONLINE ONLINE ractest2 STABLE
ora.scan2.vip
1 ONLINE ONLINE ractest3 STABLE
ora.scan3.vip
1 ONLINE ONLINE ractest1 STABLE
ora.usben.db
1 ONLINE ONLINE ractest1 Open,STABLE
2 ONLINE ONLINE ractest2 Open,STABLE
3 ONLINE ONLINE ractest3 Open,STABLE
--------------------------------------------------------------------------------
[oracle@RACTEST1 ~]$
[oracle@RACTEST1 ~]$ exit
logout
[root@RACTEST1 ~]# olsnodes -s
ractest1 Active
ractest2 Active
ractest3 Active
[root@RACTEST1 ~]# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 5d17422445e54f1abf131f15b967c07f (ORCL:VOTE2) [VOTE2]
Located 1 voting disk(s).
[root@RACTEST1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1752
Available space (kbytes) : 407816
ID : 540510110
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
[root@RACTEST1 ~]# srvctl status database -d usben
Instance usben1 is running on node ractest1
Instance usben2 is running on node ractest2
Instance usben3 is running on node ractest3
[root@RACTEST1 ~]# srvctl config service -d usben
[root@RACTEST1 ~]# srvctl status service -d usben
[root@RACTEST1 ~]#
Removing Oracle Database Instance
Log in as the oracle user and run dbca in silent mode. Log in to a node that remains in the cluster to remove ractest3; in this case, either ractest1 or ractest2.
[oracle@RACTEST1 ~]$ . oraenv
ORACLE_SID = [oracle] ? usben1
The Oracle base has been set to /ora/app/oracle
[oracle@RACTEST1 ~]$ which dbca
/ora/app/oracle/product/12.1.0.1/db_1/bin/dbca
[oracle@RACTEST1 ~]$ dbca -silent -deleteInstance -nodeList ractest3 -gdbName usben -instanceName usben3 -sysDBAUserName sys -sysDBAPassword admin123
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/ora/app/oracle/cfgtoollogs/dbca/usben1.log" for further details.
[oracle@RACTEST1 ~]$ cat /ora/app/oracle/cfgtoollogs/dbca/usben1.log
The Database Configuration Assistant will delete the Oracle instance and its associated OFA directory structure. All information about this instance will be deleted.
Do you want to proceed?
Deleting instance
DBCA_PROGRESS : 1%
DBCA_PROGRESS : 2%
DBCA_PROGRESS : 6%
DBCA_PROGRESS : 13%
DBCA_PROGRESS : 20%
DBCA_PROGRESS : 26%
DBCA_PROGRESS : 33%
DBCA_PROGRESS : 40%
DBCA_PROGRESS : 46%
DBCA_PROGRESS : 53%
DBCA_PROGRESS : 60%
DBCA_PROGRESS : 66%
Completing instance management.
DBCA_PROGRESS : 100%
Instance "usben3" deleted successfully from node "ractest3".
[oracle@RACTEST1 ~]$
[oracle@RACTEST1 ~]$ srvctl status database -d usben
Instance usben1 is running on node ractest1
Instance usben2 is running on node ractest2
[oracle@RACTEST1 ~]$ srvctl config database -d usben -v
Database unique name: usben
Database name:
Oracle home: /ora/app/oracle/product/12.1.0.1/db_1
Oracle user: oracle
Spfile: +DATA/USBEN/PARAMETERFILE/spfileusben.ora
Password file: +DATA/USBEN/PASSWORDFILE/orapwusben
Domain: localdomain
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: oper
Database instances: usben1,usben2
Configured nodes: ractest1,ractest2
Database is administrator managed
Check the redo log threads and the UNDO tablespace for the removed instance.
sys@usben1> select inst_id, instance_name, status,
            to_char(startup_time,'DD-MON-YYYY HH24:MI:SS') as "START_TIME"
            from gv$instance order by inst_id;

   INST_ID INSTANCE_NAME    STATUS       START_TIME
---------- ---------------- ------------ --------------------
         1 usben1           OPEN         29-MAR-2016 08:12:57
         2 usben2           OPEN         29-MAR-2016 09:11:19

sys@usben1> select thread#, instance from v$thread;

   THREAD# INSTANCE
---------- --------------------
         1 usben1
         2 usben2

sys@usben1> select group# from v$log where thread# = 3;

no rows selected

sys@usben1> select tablespace_name from dba_tablespaces where tablespace_name like '%UNDO%';

TABLESPACE_NAME
------------------------------
UNDOTBS1
UNDOTBS2

2 rows selected.

sys@usben1> exit
All undo and redo objects for ractest3 have been cleaned up.
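In this case dbca already dropped the thread 3 redo log groups and the ractest3 undo tablespace. Had anything been left behind, it could be cleaned up manually along these lines (a sketch only; the group numbers and the UNDOTBS3 name are assumptions, not taken from the session above):
sys@usben1> alter database disable thread 3;
sys@usben1> alter database drop logfile group 5;
sys@usben1> alter database drop logfile group 6;
sys@usben1> drop tablespace UNDOTBS3 including contents and datafiles;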
Check the Listener configuration.
[oracle@RACTEST1 ~]$ srvctl config listener -a
Name: LISTENER
Type: Database Listener
Network: 1, Owner: oracle
Home: /grid/app/12.1.0/grid on node(s) ractest3,ractest2,ractest1
End points: TCP:1521
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
[oracle@RACTEST1 ~]$
Oracle database instance usben3 has been successfully removed from node ractest3. Let us now remove the RDBMS software from ractest3.
Removing RDBMS Software
Log in to the node that is being deleted and run the commands below.
The command below updates the node list on ractest3 so that it contains only ractest3, removing ractest1 and ractest2 from the local inventory on ractest3.
[oracle@RACTEST3 ~]$ . oraenv
ORACLE_SID = [oracle] ? usben3
The Oracle base has been set to /ora/app/oracle
[oracle@RACTEST3 ~]$ echo $ORACLE_HOME
/ora/app/oracle/product/12.1.0.1/db_1
[oracle@RACTEST3 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@RACTEST3 bin]$ pwd
/ora/app/oracle/product/12.1.0.1/db_1/oui/bin
[oracle@RACTEST3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={ractest3}" -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 3997 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@RACTEST3 bin]$
Run the following command on the ractest3 node to deinstall the Oracle home from ractest3.
[oracle@RACTEST3 bin]$ cd $ORACLE_HOME/deinstall
[oracle@RACTEST3 deinstall]$ pwd
/ora/app/oracle/product/12.1.0.1/db_1/deinstall
[oracle@RACTEST3 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /grid/app/oraInventory/logs/
############ ORACLE DECONFIG TOOL START ############
######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /ora/app/oracle/product/12.1.0.1/db_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /ora/app/oracle
Checking for existence of central inventory location /grid/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /grid/app/12.1.0/grid
The following nodes are part of this cluster: ractest3,ractest2,ractest1
Checking for sufficient temp space availability on node(s) : 'RACTEST3'
## [END] Install check configuration ##
Network Configuration check config START
Network de-configuration trace file location: /grid/app/oraInventory/logs/netdc_check2016-03-29_12-10-22-PM.log
Network Configuration check config END
Database Check Configuration START
Database de-configuration trace file location: /grid/app/oraInventory/logs/databasedc_check2016-03-29_12-10-30-PM.log
Use comma as separator when specifying list of values as input
Specify the list of database names that are configured locally on this node for this Oracle home. Local configurations of the discovered databases will be removed [govinddb3,usben3]: Hit Enter Key
Database Check Configuration END
Oracle Configuration Manager check START
OCM check log file location : /grid/app/oraInventory/logs//ocm_check5734.log
Oracle Configuration Manager check END
######################### DECONFIG CHECK OPERATION END #########################
####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /grid/app/12.1.0/grid
The following nodes are part of this cluster: ractest3,ractest2,ractest1
The cluster node(s) on which the Oracle home deinstallation will be performed are: RACTEST3
Oracle Home selected for deinstall is: /ora/app/oracle/product/12.1.0.1/db_1
Inventory Location where the Oracle home registered is: /grid/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/grid/app/oraInventory/logs/deinstall_deconfig2016-03-29_00-10-13-PM.out'
Any error messages from this session will be written to: '/grid/app/oraInventory/logs/deinstall_deconfig2016-03-29_00-10-13-PM.err'
######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /grid/app/oraInventory/logs/databasedc_clean2016-03-29_12-17-36-PM.log
Network Configuration clean config START
Network de-configuration trace file location: /grid/app/oraInventory/logs/netdc_clean2016-03-29_12-17-36-PM.log
Network Configuration clean config END
Oracle Configuration Manager clean START
OCM clean log file location : /grid/app/oraInventory/logs//ocm_clean5734.log
Oracle Configuration Manager clean END
######################### DECONFIG CLEAN OPERATION END #########################
####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
#######################################################################
############# ORACLE DECONFIG TOOL END #############
Using properties file /tmp/deinstall2016-03-29_00-09-19PM/response/deinstall_2016-03-29_00-10-13-PM.rsp
Location of logs /grid/app/oraInventory/logs/
############ ORACLE DEINSTALL TOOL START ############
####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/grid/app/oraInventory/logs/deinstall_deconfig2016-03-29_00-10-13-PM.out'
Any error messages from this session will be written to: '/grid/app/oraInventory/logs/deinstall_deconfig2016-03-29_00-10-13-PM.err'
######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to RACTEST3
Setting CLUSTER_NODES to RACTEST3
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2016-03-29_00-09-19PM/oraInst.loc
Setting oracle.installer.local to true
## [END] Preparing for Deinstall ##
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/ora/app/oracle/product/12.1.0.1/db_1' from the central inventory on the local node : Done
Delete directory '/ora/app/oracle/product/12.1.0.1/db_1' on the local node : Done
The Oracle Base directory '/ora/app/oracle' will not be removed on local node. The directory is not empty.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2016-03-29_00-09-19PM' on node 'RACTEST3'
## [END] Oracle install clean ##
######################### DEINSTALL CLEAN OPERATION END #########################
####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/ora/app/oracle/product/12.1.0.1/db_1' from the central inventory on the local node.
Successfully deleted directory '/ora/app/oracle/product/12.1.0.1/db_1' on the local node.
Oracle Universal Installer cleanup was successful.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL TOOL END #############
[oracle@RACTEST3 deinstall]$
Run the command below on any node that remains in the cluster; in my case, either ractest1 or ractest2. This removes ractest3 from the node list on ractest1 and ractest2.
[oracle@RACTEST1 ~]$ . oraenv
ORACLE_SID = [oracle] ? usben1
The Oracle base has been set to /ora/app/oracle
[oracle@RACTEST1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@RACTEST1 bin]$ pwd
/ora/app/oracle/product/12.1.0.1/db_1/oui/bin
[oracle@RACTEST1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={ractest1,ractest2}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 3993 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@RACTEST1 bin]$
Now verify the inventory and make sure ractest3 has been completely removed. Run the check below on any node that remains in the cluster; in my case, either ractest1 or ractest2. I ran it on both nodes and ractest3 has been removed from the inventory.
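The verification command itself is not captured above; a quick sketch (assuming the central inventory at /grid/app/oraInventory, as reported by the deinstall logs) is to look at the node list recorded for the database home in inventory.xml, which should now show only ractest1 and ractest2:
[oracle@RACTEST1 ~]$ grep -A 3 "db_1" /grid/app/oraInventory/ContentsXML/inventory.xml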
Run the check below on ractest3.
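This check was not captured either; one way (again a sketch, assuming the same inventory location) is to grep ractest3's copy of inventory.xml for the database home, which should return nothing now that the deinstall has detached the home from the local inventory:
[oracle@RACTEST3 ~]$ grep "db_1" /grid/app/oraInventory/ContentsXML/inventory.xml
[oracle@RACTEST3 ~]$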
The Oracle home information is completely gone from ractest3's inventory, which shows that the RDBMS software has been removed from that node.
Removing Node from the Cluster
Run the command below and make sure the node we want to delete is active and unpinned. If it is pinned, unpin it first (see the note after the listing below).
[root@RACTEST1 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to
/grid/app/oracle
[root@RACTEST1 ~]# pwd
/root
[root@RACTEST1 ~]# olsnodes -s -t
ractest1 Active Unpinned
ractest2 Active Unpinned
ractest3 Active Unpinned
[root@RACTEST1 ~]#
[root@RACTEST3 ~]# . oraenv
ORACLE_SID = [+ASM3] ?
The Oracle base remains unchanged
with value /grid/app/oracle
[root@RACTEST3 ~]# olsnodes -s -t
ractest1 Active Unpinned
ractest2 Active Unpinned
ractest3 Active Unpinned
[root@RACTEST3 ~]#
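All three nodes show Unpinned here, so nothing further is needed. If the node being deleted were pinned, it would have to be unpinned first, roughly as follows (run as root from any cluster node; shown for ractest3):
[root@RACTEST1 ~]# crsctl unpin css -n ractest3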
Disable the Oracle Clusterware applications and daemons on ractest3
[root@RACTEST3 install]# cd $ORACLE_HOME/crs/install
[root@RACTEST3 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network 1 exists
Subnet IPv4: 192.168.56.0/255.255.255.0/eth0, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node ractest1
VIP Name: RACTEST1-vip
VIP IPv4 Address: 192.168.56.113
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node ractest2
VIP Name: RACTEST2-vip
VIP IPv4 Address: 192.168.56.114
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:
PRKO-2313 : A VIP named ractest3 does not exist.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'ractest3'
CRS-2673: Attempting to stop 'ora.crsd' on 'ractest3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'ractest3'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'ractest3'
CRS-2673: Attempting to stop 'ora.TEST.dg' on 'ractest3'
CRS-2673: Attempting to stop 'ora.VOTE2.dg' on 'ractest3'
CRS-2677: Stop of 'ora.DATA.dg' on 'ractest3' succeeded
CRS-2677: Stop of 'ora.VOTE2.dg' on 'ractest3' succeeded
CRS-2677: Stop of 'ora.TEST.dg' on 'ractest3' succeeded
CRS-2673: Attempting to stop 'ora.VOTE.dg' on 'ractest3'
CRS-2677: Stop of 'ora.VOTE.dg' on 'ractest3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'ractest3'
CRS-2677: Stop of 'ora.asm' on 'ractest3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'ractest3' has completed
CRS-2677: Stop of 'ora.crsd' on 'ractest3' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'ractest3'
CRS-2673: Attempting to stop 'ora.evmd' on 'ractest3'
CRS-2673: Attempting to stop 'ora.storage' on 'ractest3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'ractest3'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'ractest3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'ractest3'
CRS-2677: Stop of 'ora.storage' on 'ractest3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'ractest3'
CRS-2677: Stop of 'ora.drivers.acfs' on 'ractest3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'ractest3' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'ractest3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'ractest3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'ractest3' succeeded
CRS-2677: Stop of 'ora.asm' on 'ractest3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'ractest3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'ractest3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'ractest3'
CRS-2677: Stop of 'ora.cssd' on 'ractest3' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'ractest3'
CRS-2677: Stop of 'ora.crf' on 'ractest3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'ractest3'
CRS-2677: Stop of 'ora.gipcd' on 'ractest3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'ractest3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2016/03/29 14:17:07 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2016/03/29 14:17:31 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
error: package cvuqdisk is not installed
2016/03/29 14:17:32 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node
[root@RACTEST3 install]#
Run the following command as root to update the Clusterware configuration and delete the node from the cluster.
[root@RACTEST1 ~]# crsctl delete node -n ractest3
CRS-4661: Node ractest3 successfully deleted.
[root@RACTEST1 ~]# olsnodes -s -t
ractest1 Active Unpinned
ractest2 Active Unpinned
[root@RACTEST1 ~]#
As the Oracle Grid owner, run the below command on the node being removed to update the inventory.
[root@RACTEST3 bin]# sudo su - oracle
[oracle@RACTEST3 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM3
The Oracle base has been set to /grid/app/oracle
[oracle@RACTEST3 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@RACTEST3 bin]$ echo $ORACLE_HOME
/grid/app/12.1.0/grid
[oracle@RACTEST3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/grid/app/12.1.0/grid "CLUSTER_NODES={ractest3}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 3999 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@RACTEST3 bin]$
As the Oracle Grid owner, run the deinstall command from the node being removed to delete the Oracle Grid Infrastructure software.
[root@RACTEST3 bin]# sudo su - oracle
[oracle@RACTEST3 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM3
The Oracle base has been set to /grid/app/oracle
[oracle@RACTEST3 ~]$ cd /grid/app/12.1.0/grid/deinstall/
[oracle@RACTEST3 ~]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /grid/app/oraInventory/logs/
As the Grid owner, execute runInstaller (without the -local option) from one of the nodes that remain in the cluster. This updates the inventories with the list of nodes that are to remain in the cluster.
[oracle@RACTEST1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ?
The Oracle base remains unchanged with value /grid/app/oracle
[oracle@RACTEST1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@RACTEST1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/grid/app/12.1.0/grid "CLUSTER_NODES={ractest1,ractest2}" CRS=TRUE -silent
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 3993 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@RACTEST1 bin]$
Check the inventory on either ractest1 or ractest2 and make sure ractest3 is completely gone.
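The check itself is not shown in the captured output; a simple sketch (assuming the central inventory at /grid/app/oraInventory) is to grep inventory.xml for the node name, which should return nothing at this point:
[oracle@RACTEST1 ~]$ cd /grid/app/oraInventory/ContentsXML
[oracle@RACTEST1 ContentsXML]$ grep -i ractest3 inventory.xml
[oracle@RACTEST1 ContentsXML]$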
Post Verification
Run the cluvfy command to perform the post-check for node removal.
[oracle@RACTEST1 ContentsXML]$ cluvfy stage -post nodedel -n ractest3 -verbose
Performing post-checks for node removal
Checking CRS integrity...
The Oracle Clusterware is healthy on node "ractest1"
CRS integrity check passed
Clusterware version consistency passed.
Result: Node removal check passed
Post-check for node removal was successful.
[oracle@RACTEST1 ContentsXML]$
Check the cluster and local resources and make sure ractest3 no longer appears.
[oracle@RACTEST1 ContentsXML]$ olsnodes -s -t
ractest1 Active Unpinned
ractest2 Active Unpinned
[oracle@RACTEST1 ContentsXML]$ crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE ractest1 STABLE
ONLINE ONLINE ractest2 STABLE
ora.LISTENER.lsnr
ONLINE OFFLINE ractest1 STABLE
ONLINE ONLINE ractest2 STABLE
ora.TEST.dg
ONLINE ONLINE ractest1 STABLE
ONLINE ONLINE ractest2 STABLE
ora.VOTE.dg
ONLINE ONLINE ractest1 STABLE
ONLINE ONLINE ractest2 STABLE
ora.VOTE1.dg
ONLINE OFFLINE ractest1 STABLE
ONLINE OFFLINE ractest2 STABLE
ora.VOTE2.dg
ONLINE ONLINE ractest1 STABLE
ONLINE ONLINE ractest2 STABLE
ora.asm
ONLINE ONLINE ractest1 Started,STABLE
ONLINE ONLINE ractest2 Started,STABLE
ora.net1.network
ONLINE ONLINE ractest1 STABLE
ONLINE ONLINE ractest2 STABLE
ora.ons
ONLINE ONLINE ractest1 STABLE
ONLINE ONLINE ractest2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE ractest2 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE ractest1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE ractest2 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE ractest1 169.254.66.3 192.168.1.101,STABLE
ora.cvu
1 ONLINE ONLINE ractest2 STABLE
ora.mgmtdb
1 OFFLINE OFFLINE STABLE
ora.oc4j
1 ONLINE ONLINE ractest1 STABLE
ora.ractest1.vip
1 ONLINE INTERMEDIATE ractest2 FAILED OVER,STABLE
ora.ractest2.vip
1 ONLINE ONLINE ractest2 STABLE
ora.scan1.vip
1 ONLINE ONLINE ractest2 STABLE
ora.scan2.vip
1 ONLINE ONLINE ractest1 STABLE
ora.scan3.vip
1 ONLINE ONLINE ractest2 STABLE
ora.usben.db
1 ONLINE ONLINE ractest1 Open,STABLE
2 ONLINE ONLINE ractest2 Open,STABLE
--------------------------------------------------------------------------------
[oracle@RACTEST1 ContentsXML]$ crsctl status res -t | grep -i ractest3
[oracle@RACTEST1 ContentsXML]$
Hope this post helps!