Overview:
Symmetrix Remote Data Facility (SRDF) is a Symmetrix-based business continuance and disaster restart solution. In simple terms, SRDF is a configuration of multiple Symmetrix units whose purpose is to maintain real-time copies of logical data volumes in more than one location. The Symmetrix units can be in the same room, in different buildings on the same campus, or hundreds of miles apart.
SRDF Configuration:
The local SRDF device, known as the source (R1) device, is configured in a pairing relationship with a remote target (R2) device, forming an SRDF pair. While the R2 device is mirrored with the R1 device, the R2 device is write-disabled to its host. After the R2 device becomes synchronized with its R1 device, you can split the R2 device from R1 at any time, making the R2 device fully accessible to its host. After the split, the target R2 device contains the R1 data and is available for business continuity operations.
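For reference, the split and re-establish operations are normally driven by the storage team from the SYMCLI (Solutions Enabler) console. A minimal sketch, assuming a hypothetical device group named oradb_dg that contains the R1/R2 pairs:
symrdf -g oradb_dg query        (check the current state of the R1/R2 pairs)
symrdf -g oradb_dg split        (make the R2 devices read/write to the DR hosts)
symrdf -g oradb_dg establish    (resume replication; R2 becomes write-disabled again)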
SRDF can be used asynchronously or synchronously. At my client's site, they used SRDF/A (SRDF/Asynchronous), which provides dependent-write-consistent (Oracle restartable) images of the database while using reduced network bandwidth compared with SRDF/Synchronous solutions. SRDF/A adds no latency overhead to the database, regardless of database size or replication distance.
DR Site Configuration with Oracle:
Client Environment:
Source: Oracle RAC 11gR2 (11.2.0.3) on Windows 2008 R2 with two nodes (e.g. OR01 & OR02); there are two diskgroups (DGDATA and DGFLASH) on two disks (LUNs).
Target: Oracle RAC 11gR2 (11.2.0.3) on Windows 2008 R2 with two nodes (e.g. DOR01 & DOR02). The same disks (LUNs) used at the source site have been exposed/replicated to the target systems using EMC's SRDF replication console. An additional 10 GB LUN (disk) has been added at the target site.
Steps:
Below are the steps taken to get the replicated database up and running on the DR site. As mentioned above, the same disks being used at the source site have already been attached to the DR nodes as replicas. The storage attachment was done by the infrastructure team, not by the DBA.
1- Prepare the DR nodes (cluster nodes)
Just prepare the nodes as you normally would.
2- Prepare the shared storage
For this step you only need to work on the additional 10 GB LUN on the DR nodes. The other LUNs are replicas of the source site, so don't touch them at all.
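For example, on Windows the new LUN can be prepared with diskpart: bring the disk online and create an unformatted partition with no drive letter so ASM can use it. This is only a sketch; the disk number is an assumption, so confirm it with "list disk" first.
diskpart
DISKPART> list disk
DISKPART> select disk 4          (assumption: the new 10 GB LUN shows up as disk 4)
DISKPART> online disk noerr
DISKPART> attributes disk clear readonly
DISKPART> create partition extended
DISKPART> create partition logical
DISKPART> exit
You may also need to stamp the new partition with asmtoolg (or adjust the ASM disk discovery string) so the installer can see it on the "Create ASM Disk Group" page.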
3- Install GI on the new DR nodes, starting from node1 (e.g. DOR01). When you reach the "Create ASM Disk Group" installation page, select your new disk (the 10 GB LUN) for the OCR and voting files, and select External redundancy. The GI installation will now use the new LUN for your cluster.
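Once the installation completes, you can confirm that the clusterware stack is healthy on both DR nodes, e.g.:
crsctl check cluster -all
crsctl stat res -t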
4- After GI is successfully installed, start ASMCA on any node and mount the replicated diskgroups (DGDATA and DGFLASH).
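If you prefer the command line to ASMCA, the same mount can be done from SQL*Plus on the ASM instance (a sketch, assuming OS authentication for SYSASM is in place):
sqlplus / as sysasm
SQL> alter diskgroup DGDATA mount;
SQL> alter diskgroup DGFLASH mount;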
5- After mounting the diskgroups, you can verify their contents using ASMCMD.
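For example (the +DGDATA/MYDB path assumes an OMF-style layout and is only illustrative):
asmcmd
ASMCMD> lsdg
ASMCMD> ls +DGDATA/MYDB/DATAFILE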
6- Now copy the database parameter files (e.g. mydb1.ora & mydb2.ora) from the source site and place them on the respective target DR nodes.
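In a typical RAC-on-ASM setup each node's parameter file is just a one-line pointer to the SPFILE stored in ASM; the path below is an assumption, so keep whatever your source pfiles actually contain:
SPFILE='+DGDATA/MYDB/spfilemydb.ora'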
7- Now add the database and its instances to the newly created cluster (on the DR site):
srvctl add database -d MYDB -o D:\app\Inam\product\11.2.0\dbhome_1
srvctl add instance -d MYDB -i MYDB1 -n DOR01
srvctl add instance -d MYDB -i MYDB2 -n DOR02
srvctl start database -d MYDB
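Optionally, verify the registration and the status of the database after the start, e.g.:
srvctl config database -d MYDB
srvctl status database -d MYDB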
8- You need to change remote_listener to point to the DR SCAN, as the remote listener setting carried over from the source site will be different, e.g.:
alter system set remote_listener='dr-scan:1521';
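You can confirm the DR SCAN name and the new setting afterwards, e.g.:
srvctl config scan
SQL> show parameter remote_listener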