Wednesday, October 28, 2015

How to Configure and Create ASM disks using ASMLib

I am going to cover the steps to configure and create an ASM disk group in a two-node RAC cluster environment.

The storage team attaches the raw disks to the nodes and hands them over to the UNIX SA; the SA formats, labels and scans the disks and then hands them over to the DBA team. The DBA creates the disk group in the ASM instance. I am using a two-node RAC, and the node names are RACTEST1 & RACTEST2.

High-level steps for adding the disks:

  1. Storage team adds the disks to the server
  2. Format the disks on the ractest1 node
  3. Label the disks on the ractest1 node
  4. Query and list the disks
  5. Scan the disks on the ractest2 node
  6. Create the disk group in the ASM instance on ractest1
  7. Verify the disks in the ASM instance on both nodes

We are adding 6 disks to the server, with the following names.

  1. ORACLEDISK01_PRI
  2. ORACLEDISK01_SEC
  3. ORACLEDISK02_PRI
  4. ORACLEDISK02_SEC
  5. ORACLEDISK03_PRI
  6. ORACLEDISK03_SEC

Step 1  Storage team adds the raw disks to the server.

The storage team adds the above 6 disks on the first node and makes them shareable, then attaches the same disks to the second node. Now we can see these disks on both nodes.

After adding the disks, the new device names will appear on the server as below.
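
The exact device names depend on the environment; on this cluster the six new LUNs came up as /dev/sdf through /dev/sdk (the names used in the formatting step below). A quick sketch to confirm they are visible, run as root on each node:

# list the disks and check for the six new, unpartitioned devices
fdisk -l 2>/dev/null | grep '^Disk /dev/sd'
ls -l /dev/sd[f-k]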

Step 2  Format the newly added disks on node 1

Log in to the RACTEST1 node and partition the disks as below. Follow the same steps for sdf, sdg, sdh, sdi, sdj & sdk.
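
A sketch of the usual fdisk dialogue for creating a single primary partition that spans the whole disk (the keystrokes below are the standard fdisk prompts):

# as root on RACTEST1; repeat for /dev/sdg, /dev/sdh, /dev/sdi, /dev/sdj and /dev/sdk
fdisk /dev/sdf
#   n        -> new partition
#   p        -> primary
#   1        -> partition number 1
#   <Enter>  -> accept the default first cylinder/sector
#   <Enter>  -> accept the default last cylinder/sector (use the whole disk)
#   w        -> write the partition table and exit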

After formatting, the new partitions will show on the server as below.
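
Assuming the device names above, each disk now has a single partition, which can be checked with a simple listing:

ls -l /dev/sd[f-k]1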

Step 3  Label the newly added disks on node 1

Here are the commands to label the disks (run as root):
oracleasm createdisk ORACLEDISK01_PRI /dev/sdf1
oracleasm createdisk ORACLEDISK01_SEC /dev/sdg1
oracleasm createdisk ORACLEDISK02_PRI /dev/sdh1
oracleasm createdisk ORACLEDISK02_SEC /dev/sdi1
oracleasm createdisk ORACLEDISK03_PRI /dev/sdj1
oracleasm createdisk ORACLEDISK03_SEC /dev/sdk1



Run the above commands on the RACTEST1 node to label the disks.

Step 4  Query the newly added disks on node 1

Now we can query the disks on RACTEST1 as below. This is a verification step to make sure the disks were created as expected.

oracleasm querydisk /dev/sdf1
oracleasm querydisk /dev/sdg1
oracleasm querydisk /dev/sdh1
oracleasm querydisk /dev/sdi1
oracleasm querydisk /dev/sdj1
oracleasm querydisk /dev/sdk1
oracleasm listdisks
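
As an additional check, the labels can also be listed from the ASMLib device directory that the disk group creation step below refers to, or queried by label instead of device path; a quick sketch:

ls -l /dev/oracleasm/disks/
oracleasm querydisk ORACLEDISK01_PRI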

Step 5  Scan the disks on the RACTEST2 node. The disks will not appear on node 2 unless we scan them there.

Here is the command to scan the disks (run as root on RACTEST2):

oracleasm scandisks
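
After the scan, confirm on RACTEST2 that all six labels are visible; for example:

oracleasm listdisks
ls -l /dev/oracleasm/disks/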

Step 6  Now the disks are ready for ASM. Here we create a new disk group in the ASM instance on RACTEST1.


SQL> CREATE DISKGROUP oracledata NORMAL REDUNDANCY
FAILGROUP oracledisk01 DISK
'/dev/oracleasm/disks/ORACLEDISK01_PRI' NAME ORADISK01_PRI,
'/dev/oracleasm/disks/ORACLEDISK02_PRI' NAME ORADISK01_SEC
FAILGROUP oracledisk02 DISK
'/dev/oracleasm/disks/ORACLEDISK01_SEC' NAME ORADISK02_PRI,
'/dev/oracleasm/disks/ORACLEDISK02_SEC' NAME ORADISK02_SEC
ATTRIBUTE 'au_size'='4M',
'compatible.asm' = '11.2.0.2.0',
'compatible.rdbms' = '11.2.0.2.0',
'compatible.advm' = '11.2.0.2.0';
CREATE DISKGROUP oracledata NORMAL REDUNDANCY
*
ERROR at line 1:
ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/dev/oracleasm/disks/ORACLEDISK02_SEC' matches no disks
ORA-15014: path '/dev/oracleasm/disks/ORACLEDISK02_SEC' is not in the discovery set
ORA-15031: disk specification '/dev/oracleasm/disks/ORACLEDISK01_SEC' matches no disks
ORA-15014: path '/dev/oracleasm/disks/ORACLEDISK01_SEC' is not in the discovery set
ORA-15031: disk specification '/dev/oracleasm/disks/ORACLEDISK02_PRI' matches no disks
ORA-15014: path '/dev/oracleasm/disks/ORACLEDISK02_PRI' is not in the discovery set
ORA-15031: disk specification '/dev/oracleasm/disks/ORACLEDISK01_PRI' matches no disks
ORA-15014: path '/dev/oracleasm/disks/ORACLEDISK01_PRI' is not in the discovery set


SQL>

The above attempt failed because these disks are managed by ASMLib, so the default ASM discovery string does not pick up the /dev/oracleasm/disks/* paths. With ASMLib, the disks have to be referenced by their labels using the 'ORCL:<label>' syntax; raw device paths like the ones above would only work with a udev-based configuration (or after changing the asm_diskstring parameter).

SQL> CREATE DISKGROUP oracledata NORMAL REDUNDANCY
FAILGROUP oracledisk01 DISK
'ORCL:ORACLEDISK01_PRI' NAME ORADISK01_PRI,
'ORCL:ORACLEDISK02_PRI' NAME ORADISK01_SEC
FAILGROUP oracledisk02 DISK
'ORCL:ORACLEDISK01_SEC' NAME ORADISK02_PRI,
'ORCL:ORACLEDISK02_SEC' NAME ORADISK02_SEC
ATTRIBUTE 'au_size'='4M',
'compatible.asm' = '11.2.0.2.0',
'compatible.rdbms' = '11.2.0.2.0',
'compatible.advm' = '11.2.0.2.0'
/

Diskgroup created.

SQL>


Step 7  Now the disk group is successfully created. Let us verify the disks in the ASM instance.

First, let us check the disk group status on both nodes.
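
A query along these lines, run from either ASM instance, shows the mount state per instance:

select inst_id, name, state
from gv$asm_diskgroup
where name = 'ORACLEDATA';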

Oh... the disk group status shows DISMOUNTED on the second node. Let us mount the disk group there now.
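
To mount it, connect to the ASM instance on RACTEST2 (as SYSASM) and mount the disk group; a minimal sketch:

-- on the ASM instance on RACTEST2
ALTER DISKGROUP oracledata MOUNT;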

Now both nodes look good.

Let us check some more information about the disks and the disk group.

Query to check the disk group space:

select name Diskgroup,
round(total_mb/1024,2) "Total_GB",
round(free_mb/1024,2) "Free_GB",
round(((free_mb/total_mb)*100),2) "Available%"
from v$asm_diskgroup where name='ORACLEDATA';

Query to check the disk status information:


select dg.inst_id, 
dg.name dg_name,  
dg.state dg_state,  
dg.type, 
d.MOUNT_STATUS, 
d.HEADER_STATUS H_status, 
d.MODE_STATUS,
d.FAILGROUP ,
d.DISK_NUMBER dsk_no 
FROM GV$ASM_DISK d,  
Gv$asm_diskgroup dg
where dg.inst_id=d.inst_id 
and  dg.group_number(+)=d.group_number 
and dg.name='ORACLEDATA'
/

We are now done creating the disk group, and it is ready to be used for the database!

Tuesday, October 13, 2015

Registering archive logfiles manually on a standby database

Sometimes the standby database can fall far behind the primary. One common reason is that archive logs are not shipping to the standby quickly due to a network issue, so several archive logs may be sitting at the standby location waiting on one missing archive log file.


For instance, suppose archive log sequence# 100 was not shipped to the standby, but the later archive logs (sequence# 101 to sequence# 200) were; the standby is now stuck waiting for sequence# 100. In this scenario, the DBA can copy the missing archive log file and register it manually. Once it is registered, sequence# 101 to sequence# 200 will be applied automatically on the standby database.
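
To confirm exactly which sequence the standby is waiting for, the gap can be checked from the standby itself, for example:

select thread#, low_sequence#, high_sequence#
from v$archive_gap;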

Step 1

Let us assume the archive log file devdb12_1_283_801988635.arc was not shipped to the standby for some reason. Copy devdb12_1_283_801988635.arc manually from the primary to the standby archive destination location.


Step 2

Run the below command in standby database.

ALTER DATABASE REGISTER LOGFILE '/dbArch/oradata/devdb12/devdb12_1_283_801988635.arc';

Step 3

Verify on the standby database that the copied archive log file has been applied.

 
select sequence#, applied from v$archived_log where sequence# >= 283;

Monitor the alert log and keep an eye on the archive log shipping.

Saturday, October 10, 2015

Oracle Data Guard fail over using Broker

This topic demonstrates how to fail over in an Oracle 11g Data Guard environment.

The scenario below was tested on version 11.2.0.4. The following databases are used for the failover test.

Data Guard role    Database
Primary            devdb12
Standby            devdb12_dg

The manual failover is tested from the primary (devdb12) to the standby (devdb12_dg).

Things to check prior to failover:

Make sure the application is down and there are no application connections to the database.

Verify the Data Guard replication and confirm the standby is close to the primary; we need to ensure that all archive logs have been shipped to the standby and applied successfully. If not, there will be data loss after the failover.

If we have multiple standbys, it is better to choose the standby with the highest log sequence# to minimize data loss.

Generally, failover is done in circumstances such as the following:
  • The primary has crashed
  • The primary is not performing well
  • The primary database is stuck in mount mode and cannot be opened
Prerequisite: the Data Guard broker must already be configured for the databases. Refer to the broker configuration post below (October 9, 2015) if you need to review those steps.


>dgmgrl
DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /
Connected.
DGMGRL> show configuration;

Configuration - devdb12_dg_broker

 Protection Mode: MaxPerformance
  Databases:
    chfsdb43    - Primary database
      Error: ORA-16810: multiple errors or warnings detected for the database

    devdb12_dg - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
ERROR

DGMGRL> failover to devdb12_dg;
Performing failover NOW, please wait...
Error: ORA-16798: unable to complete recovery during failover

Failed.
Unable to failover
DGMGRL> failover to devdb12_dg immediate;
Performing failover NOW, please wait...
Failover succeeded, new primary is "devdb12_dg"
DGMGRL> exit



After the failover, we need to rebuild the old primary as a standby, or flash it back to a prior state and let it apply the archive log files.

Oracle Data Guard switch over using Broker

I am covering the switchover steps (using the broker) in this topic. The steps below were tested on Oracle version 11.2.0.4.

Primary database : devdb12
Standby database : devdb12_dg

Just in case you want to do a manual switchover instead, refer to the manual switchover post.

Prerequisite: the Data Guard broker must already be configured for the databases; refer to the broker configuration post below (October 9, 2015) if you need to review those steps. Also double-check the archive logs and make sure the standby is not behind; otherwise the switchover may take a long time and cause a longer application outage.

Step 1  Make sure the application is down and there are no other connections to the database.

Run this query and verify the DB connections.

select count(*),username from v$session
group by username;

Sometimes, if there is still a lot of activity on the database, the remaining Oracle client server processes (LOCAL=NO) may optionally be killed. Here is a sample one-liner.

$ tokill=`ps -ef | grep LOCAL=NO | grep -v grep | awk '{print $2}'`; echo $tokill
6809 6811 6813 6815 6817 6819 6873 6941

$ tokill=`ps -ef | grep LOCAL=NO | grep -v grep | awk '{print $2}'`; kill -9 $tokill

Step 2  Log in to devdb12 and connect as SYS through DGMGRL (the broker command-line interface), then issue:

switchover to devdb12_dg;
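
The full DGMGRL session would look roughly like this (same connect style as in the failover post above):

>dgmgrl
DGMGRL> connect /
DGMGRL> show configuration;
DGMGRL> switchover to devdb12_dg;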

Watch the alert logs and make sure data is now replicating from devdb12_dg (the new primary) to devdb12 (the new standby).
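
A quick sanity check is to confirm the roles have actually swapped; run this on both databases:

select name, database_role, open_mode from v$database;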



Step 3  This step is optional, in case you want to switch back to the old primary.

switchover to devdb12

Watch the alert logs and make sure data is replicating from devdb12 to devdb12_dg again.




Friday, October 9, 2015

How to Configure the Data Guard broker?

Data Guard broker is a nice tool for switching over and failing over databases in a Data Guard environment. Using the broker is like driving a car with a navigation system: it hides the individual steps. The flip side is that if the broker fails in the middle of a switchover, we don't immediately know where we are, and we have to troubleshoot to find out at which step it failed.

I personally prefer manual switchover, though of course there are some databases where I use the broker too.

How do we configure the broker?  Please follow these steps.

My Database info :

Primary database devdb12
Standby database  devdb12_dg

Step 1   Configuring the broker parameters

Set the initialization parameter DG_BROKER_START to TRUE on both databases so that the Data Guard broker background process (DMON) starts automatically on each instance.

ALTER SYSTEM SET dg_broker_start = TRUE;
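
To confirm the parameter took effect and the DMON background process is running, something along these lines can be run from SQL*Plus:

show parameter dg_broker_start

select program from v$process where program like '%DMON%';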

The broker creates two configuration files to store its metadata, by default under $ORACLE_HOME/dbs. We can change the location with the commands below; note that each parameter must point to a file name, not just a directory (the file names below follow the conventional dr1<db_unique_name>.dat / dr2<db_unique_name>.dat pattern).

alter system set dg_broker_config_file1 = '/data01/dbArch/dr1devdb12.dat' scope=both;
alter system set dg_broker_config_file2 = '/data01/dbArch/dr2devdb12.dat' scope=both;

Step 2   Configuring the listener.

During actions like a switchover or a failover, the Data Guard broker stops and starts the instances, so it needs a static registration of the databases in the listener.ora files. The required format for the static service name is db_unique_name_DGMGRL[.db_domain].

The entry below should be in the primary database's listener.ora file. The listener should be up and running.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 99.999.99.999)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
   (SID_DESC =
      (GLOBAL_DBNAME = devdb12)
      (ORACLE_HOME = /ora/app/oracle/product/11.2.0/db_1)
      (SID_NAME = devdb12)
    )
   (SID_DESC =
      (GLOBAL_DBNAME = devdb12_dgmgrl)
      (ORACLE_HOME = /ora/app/oracle/product/11.2.0/db_1)
      (SID_NAME = devdb12)
    )
)

The entry below should be in the standby database's listener.ora file. The listener should be up and running.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 99.999.99.999)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
   (SID_DESC =
      (GLOBAL_DBNAME = devdb12_dg)
      (ORACLE_HOME = /ora/app/oracle/product/11.2.0/db_1)
      (SID_NAME = devdb12_dg)
    )
   (SID_DESC =
      (GLOBAL_DBNAME = devdb12_dg_dgmgrl)
      (ORACLE_HOME = /ora/app/oracle/product/11.2.0/db_1)
      (SID_NAME = devdb12_dg)
    )
)

Step 3   Create the broker configuration for the primary database. This command can be executed only on the primary database. Use the CREATE CONFIGURATION command to initialize a Data Guard broker configuration named devdb12_broker, and confirm its creation with the SHOW CONFIGURATION command:

create configuration 'devdb12_broker' as
primary database is devdb12
connect identifier is devdb12;

show configuration

Step 4  Add the standby database to the broker configuration. This command can be executed only on the primary database. If we have three standbys, we need to run the ADD DATABASE command three times on the primary.

add database devdb12_dg as
 connect identifier is devdb12_dg
 maintained as physical;

show configuration

Step 5  Enabling the broker

Issue the ENABLE CONFIGURATION command to activate the configuration and then confirm the successful activation of the primary and standby databases via the SHOW DATABASE command:

enable configuration;

show database devdb12
show database devdb12_dg

Show configuration
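
Once the configuration is enabled and healthy, SHOW CONFIGURATION reports something along these lines (same format as in the failover post above; exact wording varies by version):

DGMGRL> show configuration;

Configuration - devdb12_broker

  Protection Mode: MaxPerformance
  Databases:
    devdb12    - Primary database
    devdb12_dg - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS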

Now you are done with the broker setup! :-)

Thursday, October 8, 2015

Soft Link Versus Hard Link in Unix

We have all worked on Unix for a long time, but many of us may not know the exact difference between a soft link (symlink) and a hard link. Today I am going to cover how to create each one and what the difference between the two is.

Soft Link (Symlink): A soft link is like a shortcut in Windows. It is a pointer to another file: creating a soft link creates a new file, with its own inode, whose content is simply the path of the original file.

In other words, the soft link is a small, separate file that just stores the path to the original file.
Here is an example of creating a soft link.
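
A minimal sketch (the file names are just examples):

$ echo "original content" > original.txt
$ ln -s original.txt softlink.txt      # create the soft link
$ ls -li original.txt softlink.txt     # note the two different inode numbers
$ cat softlink.txt                     # reads the original file through the link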

The soft link becomes invalid (dangling) if we delete the original file or move it to a different location.
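
For example, continuing the sketch above (the exact error text may vary):

$ rm original.txt
$ cat softlink.txt        # fails: the link is now dangling
cat: softlink.txt: No such file or directory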

A soft link can also point to a directory, or to a file on a different file system.
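
For instance (the paths here are only illustrative):

$ ln -s /u01/app/oracle /home/oracle/orahome_link    # a directory on a different file system
$ ls -ld /home/oracle/orahome_link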

Hard Link: A hard link is a bit different from a soft link. It references the inode directly on disk. The hard link does not break if we move the original file to a different location or delete it. We can create hard links only for files (not directories).

In other words, a hard link is just another directory entry that points to the same inode as the original file.
Here is an example of creating a hard link.
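
A minimal sketch (again, the file names are just examples):

$ echo "original content" > original.txt
$ ln original.txt hardlink.txt         # create the hard link
$ ls -li original.txt hardlink.txt     # same inode number; the link count shows 2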

The hard link is never broken when we delete the original file or move it to a different location.
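
Continuing the sketch above:

$ rm original.txt
$ cat hardlink.txt                     # still works; the data stays until the last link is removed
original content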

A hard link cannot be created across file systems, but a soft link can.
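
For example, assuming /home and /tmp live on separate file systems on this box (the error wording varies by platform):

$ ln /home/oracle/hardlink.txt /tmp/hardlink.txt     # fails with a cross-device link error
$ ln -s /home/oracle/hardlink.txt /tmp/softlink.txt  # a soft link across file systems works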

A hard link cannot be created for a directory, but a soft link can.
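
A quick sketch:

$ mkdir mydir
$ ln mydir mydir_hard          # fails: hard links to directories are not allowed
$ ln -s mydir mydir_soft       # a soft link to the directory works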

Now we have a better idea of how both kinds of links work.

Conclusion:


When to use Soft link?


  1. To create a link across file systems
  2. To create a link to a directory


When to use Hard link?

  1. When you want the link to keep working even if the original file is moved or deleted within the same file system
  2. It does not create a new file with a different inode, so it uses slightly less storage.
  3. Performance is slightly better, since access goes directly to the inode.