Category Archives: RAC Notes

RAC Notes Scanned Documents

10gR2 RAC Installation on RHEL 4U8

Administering RAC Using SRVCTL and CRSCTL

Cluster Startup Process in 10g and 11g Release 1

Clusterware Architecture

Deinstallation of RAC Environment

RAC Service Management

Converting Standalone database to RAC Using rconfig utility


> Oracle introduced the rconfig utility in version 10g.

> The prerequisite for rconfig is that the database area location must be either a
cluster filesystem or ASM.


# su - oracle
rac1>$ crsctl check crs
$ ps -ef | grep smon
$ srvctl stop database -d hrms
(no need to shut down the standalone database; it should be up and running during
the conversion)

>>> Creating a standalone database

# xhost +
. Oracle single instance database
. Create database
. General Purpose
. Global database name: prod
. Use the same password
. Confirm password
. Select ASM
. Select one disk group
. Use common location for all data files
. Click on Browse
. Select ASM_DG_FRA
. Click on OK
. Enable archiving
. Edit archive mode parameters
. Remove the entries
. Click on OK

Note: To find a file location when the instance is not running:

$ find /u01 -name "alert_*.log"

rac1># su - oracle
$ ps -ef | grep smon
$ sqlplus / as sysdba

SQL> select name,open_mode,log_mode from v$database;
SQL> show parameter cluster
SQL> select name from v$controlfile;
SQL> select name from v$datafile;
SQL> select member from v$logfile;

# su - oracle
$ cd $ORACLE_HOME/assistants/rconfig/SampleXMLs
$ cp ConvertTORAC.xml ~
$ cd ~
$ vi ConvertTORAC.xml

Specify the current Oracle home of the non-RAC database for the source DBHome:


Specify the Oracle home where the RAC database should be configured:


Specify the SID of the non-RAC database and its credentials:

<n:SourceDBInfo SID="prod">

Note: the ASMInfo element is required only if the current non-RAC database uses ASM storage.


Specify a prefix for the RAC instance names and the list of nodes:

<n:InstancePrefix>prod</n:InstancePrefix>
<n:Node name="rac1"/>
<n:Node name="rac2"/>

The non-RAC database should be on shared storage:

<n:SharedStorage type="ASM">

Specify the database area location to be configured:


Specify the Flash Recovery Area location:


$ rconfig ConvertTORAC.xml
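Once rconfig finishes, the conversion can be checked from any node. A minimal sketch, assuming the converted database is named prod (the instance names prod1/prod2 follow from the InstancePrefix above):

```shell
# Verify that the database is now registered with Clusterware as a RAC database
srvctl status database -d prod

# Confirm all instances are open and cluster_database is true
sqlplus -s / as sysdba <<'EOF'
select inst_id, instance_name, status from gv$instance;
show parameter cluster_database
EOF
```

These commands only report status; they make no changes, so they are safe to run immediately after the conversion.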

Troubleshooting Oracle Clusterware and collecting diagnostic information from CRS_HOME and DB_HOME in RAC

> The default location of the cluster alert log is $ORA_CRS_HOME/log/<hostname>:
$ cd $ORA_CRS_HOME/log/lnx01
$ tail -50 alertlnx01.log | more

Collecting diagnostic information from the CRS home
> This needs to be run as the root user:
lnx01]# export ORA_CRS_HOME=/u01/app/oracle/product/10.2.0/crs_home
# cd $ORA_CRS_HOME/bin
bin]# ./ --collect --crs $ORA_CRS_HOME
# ls *.gz
# mv *.gz $HOME
# gunzip ocrData_lnx01.tar.gz
# tar -xvf ocrData_lnx01.tar

Collecting diagnostic information from ORACLE_HOME

lnx01]# export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_home
# cd $ORA_CRS_HOME/bin
# ./ --collect --oh $ORACLE_HOME
# ls *.gz
# mv *.gz $HOME
# gunzip oraData_lnx01.tar.gz
# tar -xvf oraData_lnx01.tar

SCAN IP and SCAN Name (Single Client Access Name) 11gR2 RAC

> In 10g and 11gR1, during node additions and deletions we had to modify the service entries
manually. To overcome this problem, in 11gR2 Oracle introduced the SCAN name and SCAN IPs.
> Irrespective of the number of nodes, Oracle recommends having 3 SCAN IPs with a single SCAN name,
which is resolved to any of the SCAN IPs in a round-robin manner.

Note: the subnet mask of the public IP, virtual IP, and SCAN IP should be the same.

> During Grid Infrastructure installation, for every SCAN IP Oracle creates one SCAN VIP and one
SCAN listener.

> A SCAN VIP and a SCAN listener form a pair.

> SCAN IPs can be placed either in /etc/hosts or in a DNS server.

Service entries in 11gR2 (/etc/hosts)
----------------------------------------------------------
lnx01
lnx02
lnx01-priv
lnx02-priv
lnx01-vip
lnx02-vip
cluster-scan
cluster-scan
cluster-scan

DNS Entries

localhost IN A
dns IN A
lnx01 IN A
lnx02 IN A
lnx01-vip IN A
lnx02-vip IN A
cluster-scan IN A

In RAC Nodes
#vi /etc/resolv.conf
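On each RAC node, /etc/resolv.conf points name resolution at the DNS server holding the SCAN records. A sketch, where the nameserver address 192.168.1.10 and the domain example.com are hypothetical values for illustration:

```shell
# Point the node at the DNS server (hypothetical address and domain)
cat >> /etc/resolv.conf <<'EOF'
search example.com
nameserver 192.168.1.10
EOF

# With three A records for cluster-scan, repeated lookups should
# return the SCAN IPs in round-robin order
nslookup cluster-scan
```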

RAC Backup and Recovery

Backup and Recovery

1. RMAN Backup
> Full backup
> Incremental/Differential backup
> Compressed backup

Configuration modes:
> Catalog
> No catalog

2. Physical Backup
> Cold backup / offline backup / consistent backup
> Hot backup / online backup / inconsistent backup

3. Logical Backup
> Traditional export/import utilities (exp/imp)
> Data Pump utilities

Note: If the database storage area location is ASM, then only logical and RMAN backups
are possible.
> If the database storage area location is CFS, all of the above backups are possible.
> In a RAC system, see that channels are equally distributed among all the instances.
> In some environments an instance is totally dedicated to RMAN backups.
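Distributing channels across instances is done by allocating each channel through a connect string that targets a specific instance. A sketch, assuming TNS aliases hrms1 and hrms2 resolve to the two instances and that the sys credentials shown are placeholders:

```shell
rman target / <<'EOF'
run {
  # One channel per instance, so backup I/O is spread across the cluster
  # (connect strings and password are hypothetical)
  allocate channel c1 device type disk connect 'sys/oracle@hrms1';
  allocate channel c2 device type disk connect 'sys/oracle@hrms2';
  backup database;
}
EOF
```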

Example: OCR backup
$ ocrconfig -export (in 10g)
$ ocrconfig -export (in 11g)
Note: online backup is possible in RAC.
Voting disk:
$ dd command
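A sketch of these backups, assuming a /backup directory exists and the voting disk sits on the raw device /dev/raw/raw2 (both hypothetical paths):

```shell
# Logical export of the OCR; the target file name is our choice
ocrconfig -export /backup/ocr_export.dmp

# List the automatic OCR backups that Clusterware takes on its own
ocrconfig -showbackup

# Copy the voting disk block-for-block with dd (10g-style raw device)
dd if=/dev/raw/raw2 of=/backup/votedisk.bak bs=4k
```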



> A patch is a bug fix.

> A collection of bug fixes is called a patch set.

> The different types of patches released by Oracle are:
1. Interim patches / one-off patches
2. Patch sets
3. Critical patch updates (CPUs)
4. Patch set updates (PSUs)
5. CRS bundle patches

> All the above patches can be installed using the opatch utility, except patch sets.

> Patch sets are installed by invoking runInstaller.

> CRS bundle patches fix bugs in the clusterware.

> Clusterware can be patched in two ways:
1. Rolling upgrade
2. Non-rolling upgrade

> In case of a rolling upgrade, we bring down all the services on the node on which we
wish to install the patch set. This is a node-by-node activity.

> In case of a non-rolling upgrade, we bring down the entire cluster and install the patch set.

> To list the patches installed in CRS_HOME:
$ opatch lsinventory -detail -oh $ORA_CRS_HOME

> To list the patches installed in ORACLE_HOME:
$ opatch lsinventory -detail -oh $ORACLE_HOME
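For the interim patches mentioned above, the usual flow is to unzip the patch and run opatch apply from inside the patch directory. A sketch, with a hypothetical patch number:

```shell
# Unzip the downloaded interim patch (patch number is hypothetical)
cd /opt
unzip p6123456_10204_Linux-x86-64.zip
cd 6123456

# Apply against a specific home; opatch prompts before touching binaries
$ORACLE_HOME/OPatch/opatch apply -oh $ORACLE_HOME
```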


Pre-patch considerations and recommendations
> Take a backup of the Oracle inventory
> Take a backup of the clusterware binaries and Oracle binaries
> Take a backup of the Oracle database

$cat /etc/oraInst.loc
$cat /var/opt/oracle/oraInst.loc

lnx01]# cd /opt
opt]# unzip p63………
# cp Readme.html /root/Desktop
Open the Readme in Mozilla/Firefox: File > Open File > Desktop > Readme.html

$ su - oracle
$ export ORACLE_SID=hrms1
$ emctl stop dbconsole
$ ssh lnx02
$ emctl stop dbconsole
lnx01]$ isqlplusctl stop (to stop iSQL*Plus)
$ srvctl stop service -d hrms
$ srvctl stop database -d hrms
$ ps -ef
$ srvctl stop asm -n lnx01
$ srvctl stop asm -n lnx02
$ srvctl stop nodeapps -n lnx01
$ srvctl stop nodeapps -n lnx02
lnx01]# /etc/init.d/ stop (to stop the cluster on the first node)
# ssh lnx02 /etc/init.d/ stop (to stop the cluster on the second node)
$crs_stat -t
$crsctl query crs softwareversion
$crsctl query crs activeversion

lnx01]# xhost +
# su - oracle
$ cd /opt/Disk1
Change the path and run the installer script.
lnx01]# /u01/app/oracle/product/10.2.0/crs_home/bin/crsctl stop crs
lnx01]# su - oracle
$ crsctl check crs
$ ps -ef | grep smon
$ crsctl query crs softwareversion
$ crsctl query crs activeversion
lnx01] ssh lnx02 /u01/app/oracle/product/10.2.0/crs_home/bin/crsctl stop crs
lnx01] ssh lnx02 /u01/app/oracle/product/10.2.0/crs_home/install/
Note: Upgrade completed successfully.
$ sqlplus -v (to verify the version)
Now we will install the patchset on the Oracle Home.
Note: At the time of installing the patchset on ORACLE_HOME, the cluster must be up and running.

lnx01]$ export ORACLE_SID=hrms1
$emctl stop dbconsole
$isqlplusctl stop
$srvctl stop service -d hrms
$srvctl stop database -d hrms
$srvctl stop asm -n lnx01
$srvctl stop asm -n lnx02
$srvctl stop listener -n lnx01
$srvctl stop listener -n lnx02
$cd /opt/Disk1/
Next > Next > (until you get the Install button)
(execute the script on both the nodes)
exit > OK > Yes
# exit

Now we will upgrade the database first.

lnx01]$srvctl start listener -n lnx01
lnx02]$srvctl start listener -n lnx02
lnx01]$srvctl start asm -n lnx01
lnx01]$srvctl start asm -n lnx02
$export ORACLE_SID=hrms1
$sqlplus / as sysdba
SQL>startup nomount
SQL>shut immediate
SQL>startup upgrade
SQL>@$ORACLE_HOME/rdbms/admin/catupgrd.sql (script to upgrade the database)
After the completion of the process, do the following:
SQL>shut immediate
SQL>startup
SQL>select object_name,status from dba_objects where status='INVALID';
SQL>select comp_name,version,status from dba_registry;
SQL>alter system set cluster_database=true scope=spfile;
$srvctl start database -d hrms
$srvctl start service -d hrms
$emca -upgrade db -cluster
(this will upgrade enterprise manager console)
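Before handing the database back to users, the invalid objects left by catupgrd.sql are normally recompiled with utlrp.sql. A minimal sketch:

```shell
sqlplus / as sysdba <<'EOF'
-- Recompile all invalid objects after the upgrade
@?/rdbms/admin/utlrp.sql
-- Re-check: the invalid-object count should now be zero or close to it
select count(*) from dba_objects where status='INVALID';
EOF
```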

Creating Stored Scripts


RMAN>list script names; (to see existing script names)

RMAN>create script bkp
{backup datafile 4;} (local script method)

RMAN>create global script bkp1
{backup database;}

RMAN>list script names;

RMAN>print script bkp;

RMAN>print script bkp1; (to see script contents)

RMAN>run {execute script bkp;}

Taking Incremental Backup

RMAN>backup incremental level 0 database;

RMAN>backup incremental level 1 database;

RMAN>backup incremental level 2 database;

RMAN>backup incremental level 1 cumulative database; (in 11g)

Adding a node to the existing RAC environment


STEPS: Configure the hardware and operating system.

1> Propagate the clusterware to the new node by executing from $ORA_CRS_HOME/oui/bin

2> Reconfigure the virtual IPs by invoking vipca

3> Propagate the Oracle binaries to the new node by executing from $ORACLE_HOME/oui/bin

4> Reconfigure the listener by invoking netca

5> Add an instance by invoking dbca

dbca ----> Instance Management ----> Add instance
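The propagation steps typically use the addNode.sh script shipped under each home's oui/bin directory. A sketch, assuming the new node is lnx03 with VIP lnx03-vip (both hypothetical names):

```shell
# Propagate the clusterware from an existing node (step 1)
cd $ORA_CRS_HOME/oui/bin
./addNode.sh "CLUSTER_NEW_NODES={lnx03}" \
             "CLUSTER_NEW_VIRTUAL_HOSTNAMES={lnx03-vip}"

# Propagate the database binaries the same way (step 3)
cd $ORACLE_HOME/oui/bin
./addNode.sh "CLUSTER_NEW_NODES={lnx03}"
```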

11gR2 RAC New Features


> ASM has been integrated with the clusterware binaries, i.e. grid.
> Oracle has re-architected the grid infrastructure into two stacks:
1. Oracle High Availability Services stack
2. Cluster Ready Services stack
> We cannot place the OCR and voting files on raw partitions (we can place them in ASM disk groups).
> CTSS has been introduced to synchronize date and time.
> Oracle has introduced the SCAN name and SCAN IPs; the SCAN name and SCAN IPs can be placed
either in /etc/hosts or in a DNS server.

NOTE: If the SCAN IPs are placed in /etc/hosts, only one SCAN IP will be enabled.
If they are placed in DNS, all three will be enabled.

> SSH configuration is automated.
> We can start and stop all nodes in the cluster with a single command.
> Oracle has introduced the SCAN listener; for every SCAN IP, it creates one SCAN VIP
and one SCAN listener.
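The single-command start/stop of all nodes is done with crsctl in 11gR2, run as root from the Grid home:

```shell
# Stop Clusterware on every node of the cluster from one session
crsctl stop cluster -all

# Start it back up on all nodes
crsctl start cluster -all

# Check the stack state across all nodes
crsctl check cluster -all
```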

Collecting RAC Diagnostic Information


rac1># cd /u01/product/11.2.0/grid_home/bin
rac1># ./ --collect --crs $ORACLE_HOME
# ls *.gz
# mv *.gz $HOME
# export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_home
# ./ --collect --oh $ORACLE_HOME

rac1>$ cd $ORACLE_HOME/log
log>$ cd rac1
$ tail -50 alertrac1.log | more