Red Hat AS 2.1 No-Single-Point-of-Failure Cluster
Installation Guide for a Two-Node Hot-Standby Oracle System
1 Hardware environment configuration:
Disk array: 1 x Gs4008 with 4 x 73 GB disks
Servers: 2 x PT4400R (PIII 900, 36 GB disk, SCSI 29160LP adapter)
Operating system: Red Hat Advanced Server 2.1
Application software: Oracle 9i Enterprise Edition
Connect the two machines directly with a crossover cable to serve as the heartbeat link.
Each machine is also connected to the external network by one network cable.
Connect the two SCSI adapters to the two channels of the array, one adapter per channel.
2 Operating system configuration:
2.1 When installing Red Hat AS 2.1, choose the "Everything" package selection.
2.2 Edit the /etc/hosts file
/etc/hosts on db1:
127.0.0.1 db1
10.10.22.50 cluster0
10.10.200.50 ecluster0
10.10.22.51 cluster1
10.10.200.51 ecluster1
10.10.22.52 clusteralias
/etc/hosts on db2:
127.0.0.1 db2
10.10.22.51 cluster1
10.10.200.51 ecluster1
10.10.22.50 cluster0
10.10.200.50 ecluster0
10.10.22.52 clusteralias
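Both nodes must resolve the same five cluster names, and a stray duplicate entry in /etc/hosts is a common source of heartbeat trouble. The sketch below (a temporary file stands in for /etc/hosts) flags any hostname listed more than once:

```shell
# Sketch: flag hostnames that appear more than once in a hosts file.
# The sample file mirrors db1's /etc/hosts from this guide.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
127.0.0.1 db1
10.10.22.50 cluster0
10.10.200.50 ecluster0
10.10.22.51 cluster1
10.10.200.51 ecluster1
10.10.22.52 clusteralias
EOF
# Print every hostname column, then report duplicates (none expected here)
dups=$(awk '{for (i = 2; i <= NF; i++) print $i}' "$hosts_file" | sort | uniq -d)
[ -z "$dups" ] && echo "no duplicate hostnames"
rm -f "$hosts_file"
```

Run the same check on both nodes; the name-to-address mappings must agree between them.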
2.3 Check the GRUB boot configuration:
default=0
timeout=10
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
title Red Hat Linux Advanced Server (2.4.9-e.3smp)
root (hd0,0)
kernel /boot/vmlinuz-2.4.9-e.3smp ro root=/dev/sda1 max_scsi_luns=2 nmi_watchdog=1
initrd /boot/initrd-2.4.9-e.3smp.img
title Red Hat Linux Advanced Server-up (2.4.9-e.3)
root (hd0,0)
kernel /boot/vmlinuz-2.4.9-e.3 ro root=/dev/sda1
initrd /boot/initrd-2.4.9-e.3.img
2.4 Display the devices configured in the kernel (output of cat /proc/devices):
Character devices:
1 mem
2 pty
3 ttyp
4 ttyS
5 cua
7 vcs
10 misc
29 fb
36 netlink
128 ptm
129 ptm
130 ptm
131 ptm
132 ptm
133 ptm
134 ptm
135 ptm
136 pts
137 pts
138 pts
139 pts
140 pts
141 pts
142 pts
143 pts
162 raw
180 usb
254 iscsictl
Block devices:
1 ramdisk
2 fd
3 ide0
8 sd
9 md
12 unnamed
14 unnamed
38 unnamed
39 unnamed
65 sd
66 sd
3 Cluster hardware configuration:
3.1 Configure the heartbeat link:
10.10.200.51 ecluster1
10.10.200.50 ecluster0
3.2 Configuring Power Switches:
Create the watchdog device special file
Change to the /dev directory
# cd /dev
Make the device
# ./MAKEDEV watchdog
Set up and test compatibility for NMI watchdog
If you're using the software watchdog, you should also use the
NMI watchdog timer. The NMI watchdog timer may not work on some
legacy systems. Carry out the following on both nodes in the cluster.
Use vi to edit the /etc/grub.conf file and add nmi_watchdog=1 to
the end of the kernel line:
title RedHat Linux Advanced Server (LinuxHA)
root (hd0,0)
kernel /vmlinuz-kernel-version ro root=/dev/sda3 nmi_watchdog=1
initrd /fibreha.img
You then need to check that the server supports the NMI watchdog
timer. To do this, reboot the server and log in as root:
# init 6
Type the following at a command line:
# cat /proc/interrupts
If the NMI entry is not zero, the server supports the NMI watchdog timer.
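The NMI entry in /proc/interrupts holds one counter per CPU. A small sketch of the check, summing the per-CPU counts from a captured line (the counts below are illustrative, not from a real system):

```shell
# Sum the per-CPU NMI counts from a captured /proc/interrupts line.
# The numbers here are made up for illustration.
nmi_line="NMI:      10245      10240"
total=$(echo "$nmi_line" | awk '{s = 0; for (i = 2; i <= NF; i++) s += $i; print s}')
if [ "$total" -gt 0 ]; then
    echo "NMI watchdog appears supported (total count: $total)"
fi
# prints "NMI watchdog appears supported (total count: 20485)"
```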
3.3 Configure the disk array:
3.3.1 Configuring Quorum Partitions
Partition layout (from parted on /dev/sdb):
Disk geometry for /dev/sdb: 0.000-140008.000 megabytes
Disk label type: msdos
Minor Start End Type Filesystem Flags
1 0.016 20.000 primary ext3
2 21.000 41.000 primary ext3
3 50.000 140007.000 primary ext3
Configure the raw devices by editing /etc/sysconfig/rawdevices:
cat /etc/sysconfig/rawdevices
# raw device bindings
# format: <rawdev> <major> <minor>
# <rawdev> <blockdev>
# example: /dev/raw/raw1 /dev/sda1
# /dev/raw/raw2 8 5
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdb2
Activate the bindings by rebooting, or by executing the following command:
service rawdevices restart
Query all the raw devices by using the command raw -aq:
# raw -aq
/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2: bound to major 8, minor 18
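The major/minor pairs reported by raw -aq can be sanity-checked by hand: Linux SCSI disks use block major 8 and reserve 16 minor numbers per disk (sda = 0-15, sdb = 16-31), so partition N of sdb gets minor 16 + N:

```shell
# SCSI disk device numbering: block major 8,
# minor = 16 * disk_index + partition, where sda is disk 0, sdb is disk 1.
disk=1    # sdb
for part in 1 2; do
    echo "/dev/sdb$part -> major 8, minor $((16 * disk + part))"
done
# prints:
#   /dev/sdb1 -> major 8, minor 17
#   /dev/sdb2 -> major 8, minor 18
```

These match the "bound to major 8, minor 17/18" lines above, confirming the raw devices point at the two quorum partitions.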
3.3.2 Create the filesystem (-j adds an ext3 journal; -b 4096 sets a 4 KB block size):
# mke2fs -j -b 4096 /dev/sdb3
3.4 Cluster software installation:
3.4.1 Check the installed cluster software version:
rpm -qa | grep clum
clumanager-1.0.11-1
Configure logging. Add the following to /etc/syslog.conf:
#
# Cluster messages coming in on local4 go to /var/log/cluster
#
local4.* /var/log/cluster
# Log anything (except mail) of level info or higher.
# Don’t log private authentication messages!
*.info;mail.none;news.none;authpriv.none;local4.none /var/log/messages
3.4.2 Configure the cluster software:
cluconfig
Red Hat Cluster Manager Configuration Utility (running on db1)
- Configuration file exists already.
Would you like to use those prior settings as defaults?
(yes/no) [yes]:
Enter cluster name [Red Hat Cluster Manager]:
Enter IP address for cluster alias [10.10.22.52]:
--------------------------------
Information for Cluster Member 0
--------------------------------
Enter name of cluster member [cluster0]:
Looking for host cluster0 (may take a few seconds)...
Enter number of heartbeat channels (minimum = 1) [1]:
Information about Channel 0
Channel type: net or serial [net]:
Enter hostname of the cluster member on heartbeat channel 0
[ecluster0]:
Looking for host ecluster0 (may take a few seconds)...
Information about Quorum Partitions
Enter Primary Quorum Partition [/dev/raw/raw1]:
Enter Shadow Quorum Partition [/dev/raw/raw2]:
Information About the Power Switch That Power Cycles Member
'cluster0'
Choose one of the following power switches:
o NONE
o RPS10
o BAYTECH
o APCSERIAL
o APCMASTER
o WTI_NPS
o SW_WATCHDOG
Power switch [sw_watchdog]: SW_WATCHDOG
--------------------------------
Information for Cluster Member 1
--------------------------------
Enter name of cluster member [cluster1]:
Looking for host cluster1 (may take a few seconds)...
Information about Channel 0
Enter hostname of the cluster member on heartbeat channel 0
[ecluster1]:
Looking for host ecluster1 (may take a few seconds)...
Information about Quorum Partitions
Enter Primary Quorum Partition [/dev/raw/raw1]:
Enter Shadow Quorum Partition [/dev/raw/raw2]:
Information About the Power Switch That Power Cycles Member
'cluster1'
Choose one of the following power switches:
o NONE
o RPS10
o BAYTECH
o APCSERIAL
o APCMASTER
o WTI_NPS
o SW_WATCHDOG
Power switch [sw_watchdog]: SW_WATCHDOG
Cluster name: Red Hat Cluster Manager
Cluster alias IP address: 10.10.22.52
--------------------
Member 0 Information
--------------------
Name: cluster0
Primary quorum partition: /dev/raw/raw1
Shadow quorum partition: /dev/raw/raw2
Heartbeat channels: 1
Channel type: net, Name: ecluster0
Power switch IP address or hostname: cluster0
Identifier on power controller for member cluster0: unused
--------------------
Member 1 Information
--------------------
Name: cluster1
Primary quorum partition: /dev/raw/raw1
Shadow quorum partition: /dev/raw/raw2
Heartbeat channels: 1
Channel type: net, Name: ecluster1
Power switch IP address or hostname: cluster1
Identifier on power controller for member cluster1: unused
--------------------------
Power Switch 0 Information
--------------------------
Power switch IP address or hostname: cluster0
Type: sw_watchdog
Login or port: unused
Password: unused
--------------------------
Power Switch 1 Information
--------------------------
Power switch IP address or hostname: cluster1
Type: sw_watchdog
Login or port: unused
Password: unused
Save the cluster member information? yes/no [yes]:
Writing to configuration file...done
Configuration information has been saved to /etc/cluster.conf.
----------------------------
Setting up Quorum Partitions
----------------------------
Warning: Cannot run cludiskutil: either or both device names
for quorum partitions not properly set. Fix raw partitions before
going further
Saving configuration information to quorum partitions: done
Do you wish to allow remote monitoring of the cluster? yes/no
[yes]:
----------------------------------------------------------------
Configuration on this member is complete.
To configure the next member, invoke the following command on
that system:
# /sbin/cluconfig --init=/dev/raw/raw1
Refer to the Red Hat Cluster Manager Installation and
Administration Guide
for details.
On the other node, db2, run the following command:
/sbin/cluconfig --init=/dev/raw/raw1
Red Hat Cluster Manager Configuration Utility (running on db2)
- Retrieving the database from the quorum partition...
- Verifying database information...
--------------------------------
Information for Cluster Member 0
--------------------------------
Looking for host cluster0 (may take a few seconds)...
Information about Channel 0
Looking for host ecluster0 (may take a few seconds)...
Information about Quorum Partitions
Information About the Power Switch That Power Cycles Member 'cluster0'
--------------------------------
Information for Cluster Member 1
--------------------------------
Looking for host cluster1 (may take a few seconds)...
Information about Channel 0
Looking for host ecluster1 (may take a few seconds)...
Information about Quorum Partitions
Information About the Power Switch That Power Cycles Member 'cluster1'
Press <Return> to continue.
Cluster name: Red Hat Cluster Manager
Cluster alias IP address: 10.10.22.52
--------------------
Member 0 Information
--------------------
Name: cluster0
Primary quorum partition: /dev/raw/raw1
Shadow quorum partition: /dev/raw/raw2
Heartbeat channels: 1
Channel type: net, Name: ecluster0
Power switch IP address or hostname: cluster0
Identifier on power controller for member cluster0: unused
--------------------
Member 1 Information
--------------------
Name: cluster1
Primary quorum partition: /dev/raw/raw1
Shadow quorum partition: /dev/raw/raw2
Heartbeat channels: 1
Channel type: net, Name: ecluster1
Power switch IP address or hostname: cluster1
Identifier on power controller for member cluster1: unused
--------------------------
Power Switch 0 Information
--------------------------
Power switch IP address or hostname: cluster0
Type: sw_watchdog
Login or port: unused
Password: unused
--------------------------
Power Switch 1 Information
--------------------------
Power switch IP address or hostname: cluster1
Type: sw_watchdog
Login or port: unused
Password: unused
Save the cluster member information? yes/no [yes]:
Writing to configuration file...done
Configuration information has been saved to /etc/cluster.conf.
Saving configuration information to quorum partitions: done
Do you wish to allow remote monitoring of the cluster? yes/no [yes]:
----------------------------------------------------------------
Configuration on this member is complete.
Execute "/sbin/service cluster start" to start the cluster software.
Run the following on each of the two servers:
/sbin/service cluster start
Starting cluster management agent: done.
Starting cluster manager services: done.
3.4.3 Add the Oracle service to the cluster
3.4.3.1 Oracle service start and stop scripts:
The cluster service control script, /home/oracle/oracle:
#!/bin/sh
#
# Cluster service script to start/stop Oracle
#
cd /home/oracle
case "$1" in
'start')
    su - oracle -c "./startdb"
    ;;
'stop')
    su - oracle -c "./stopdb"
    ;;
esac
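As written, the script silently ignores any argument other than start or stop. A sketch of the same dispatch pattern with an explicit default branch (the function name and messages are illustrative, not part of the guide's script):

```shell
# Same case-dispatch pattern as the service script, with a default branch
# that reports unknown arguments instead of ignoring them (illustrative).
svc_dispatch() {
    case "$1" in
        start) echo "starting oracle service" ;;
        stop)  echo "stopping oracle service" ;;
        *)     echo "usage: {start|stop}" >&2; return 1 ;;
    esac
}
svc_dispatch start    # prints "starting oracle service"
```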
The database startup script, startdb:
#!/bin/sh
#
#
# Script to start the Oracle Database Server instance.
#
###########################################################################
#
# ORACLE_RELEASE
#
# Specifies the Oracle product release.
#
###########################################################################
#ORACLE_RELEASE=8.1.6
###########################################################################
#
# ORACLE_SID
#
# Specifies the Oracle system identifier or "sid", which is the name of the
# Oracle Server instance.
#
###########################################################################
#export ORACLE_SID=TESTDB
###########################################################################
#
# ORACLE_BASE
#
# Specifies the directory at the top of the Oracle software product and
# administrative file structure.
#
###########################################################################
#export ORACLE_BASE=/u01/app/oracle
###########################################################################
#
# ORACLE_HOME
#
# Specifies the directory containing the software for a given release.
# The Oracle recommended value is $ORACLE_BASE/product/<release>
#
###########################################################################
#export ORACLE_HOME=/u01/app/oracle/product/${ORACLE_RELEASE}
###########################################################################
#
# LD_LIBRARY_PATH
#
# Required when using Oracle products that use shared libraries.
#
###########################################################################
#export LD_LIBRARY_PATH=/u01/app/oracle/product/${ORACLE_RELEASE}/lib
###########################################################################
#
# PATH
#
# Verify that the user's search path includes $ORACLE_HOME/bin
#
###########################################################################
#export PATH=$PATH:/u01/app/oracle/product/${ORACLE_RELEASE}/bin
###########################################################################
#
# This does the actual work.
#
# The oracle server manager is used to start the Oracle Server instance
# based on the initSID.ora initialization parameters file specified.
#
###########################################################################
/u01/app/oracle/product/8.1.7/bin/svrmgrl <<EOF
connect internal;
startup ;
spool off
EOF
/u01/app/oracle/product/8.1.7/bin/lsnrctl start
exit 0
The database shutdown script, stopdb:
#!/bin/sh
#
#
# Script to STOP the Oracle Database Server instance.
#
###########################################################################
#
# ORACLE_RELEASE
#
# Specifies the Oracle product release.
#
###########################################################################
#ORACLE_RELEASE=8.1.6
###########################################################################
#
# ORACLE_SID
#
# Specifies the Oracle system identifier or "sid", which is the name of the
# Oracle Server instance.
#
###########################################################################
#export ORACLE_SID=TESTDB
###########################################################################
#
# ORACLE_BASE
#
# Specifies the directory at the top of the Oracle software product and
# administrative file structure.
#
###########################################################################
#export ORACLE_BASE=/u01/app/oracle
###########################################################################
#
# ORACLE_HOME
#
# Specifies the directory containing the software for a given release.
# The Oracle recommended value is $ORACLE_BASE/product/<release>
#
###########################################################################
#export ORACLE_HOME=/u01/app/oracle/product/${ORACLE_RELEASE}
###########################################################################
#
# LD_LIBRARY_PATH
#
# Required when using Oracle products that use shared libraries.
#
###########################################################################
#export LD_LIBRARY_PATH=/u01/app/oracle/product/${ORACLE_RELEASE}/lib
###########################################################################
#
# PATH
#
# Verify that the user's search path includes $ORACLE_HOME/bin
#
###########################################################################
#export PATH=$PATH:/u01/app/oracle/product/${ORACLE_RELEASE}/bin
###########################################################################
#
# This does the actual work.
#
# The oracle server manager is used to STOP the Oracle Server instance
# in a tidy fashion.
#
###########################################################################
/u01/app/oracle/product/8.1.7/bin/svrmgrl << EOF
spool /home/oracle/stopdb.log
connect internal;
shutdown immediate;
spool off
exit
EOF
/u01/app/oracle/product/8.1.7/bin/lsnrctl stop
exit 0
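Both scripts rely on the shell here-document (`<<EOF ... EOF`): everything between the two markers is fed to svrmgrl on standard input, exactly as if an operator had typed it interactively. The same mechanism with `cat` standing in for svrmgrl, for illustration (no Oracle software required):

```shell
# Here-document demo: cat receives the two lines on stdin and echoes them.
cat <<EOF
connect internal;
shutdown immediate;
EOF
```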
3.4.3.2 Configure the service:
cluadmin
service add
The user interface will prompt you for information about the service.
Not all information is required for all services.
Enter a question mark (?) at a prompt to obtain help.
Enter a colon (:) and a single-character command at a prompt to do
one of the following:
c - Cancel and return to the top-level cluadmin command
r - Restart to the initial prompt while keeping previous responses
p - Proceed with the next prompt
Currently defined services:
Service name: oracle
Preferred member [None]: cluster0
Relocate when the preferred member joins the cluster (yes/no/?) [no]: no
User script (e.g., /usr/foo/script or None) [None]: /home/oracle/oracle
Status check interval [0]: 20
Do you want to add an IP address to the service (yes/no/?) [no]: yes
IP Address Information
IP address: 10.10.22.52
Netmask (e.g. 255.255.255.0 or None) [None]: 255.255.255.0
Broadcast (e.g. X.Y.Z.255 or None) [None]: 10.10.22.255
Do you want to (a)dd, (m)odify, (d)elete or (s)how an IP address, or are you
(f)inished adding IP addresses [f]: f
Do you want to add a disk device to the service (yes/no/?) [no]: yes
Disk Device Information
Device special file (e.g., /dev/sdb4): /dev/sdb3
Filesystem type (e.g., ext2, or ext3): ext3
Mount point (e.g., /usr/mnt/service1) [None]: /data2
Mount options (e.g., rw,nosuid,sync):
Forced unmount support (yes/no/?) [yes]: yes
Would you like to allow NFS access to this filesystem (yes/no/?) [no]:
Would you like to share to Windows clients (yes/no/?) [no]:
Do you want to (a)dd, (m)odify, (d)elete or (s)how DEVICES, or are you (f)inished adding DEVICES [f]: f
name: oracle
preferred node: cluster0
relocate: no
user script: /home/oracle/oracle
monitor interval: 20
IP address 0: 10.10.22.52
netmask 0: 255.255.255.0
broadcast 0: 10.10.22.255
device 0: /dev/sdb3
mount point, device 0: /data2
force unmount, device 0: yes
samba share, device 0: None
Add oracle service as shown? (yes/no/?) yes
0) cluster0
1) cluster1
c) cancel
Choose member to start service on: 0
Added oracle
3.4.3.3 Check the service status:
clustat
Cluster Status Monitor (Red Hat Cluster Manager)                       10:44:08

Cluster alias: clusteralias

========================  M e m b e r   S t a t u s  =========================
  Member         Status     Node Id    Power Switch
  -------------- ---------- ---------- ------------
  cluster0       Up         0          Good
  cluster1       Up         1          Good

========================  H e a r t b e a t   S t a t u s  ===================
  Name                           Type       Status
  ------------------------------ ---------- ------------
  ecluster0 <--> ecluster1       network    ONLINE

========================  S e r v i c e   S t a t u s  =======================
                                          Last             Monitor  Restart
  Service        Status   Owner           Transition       Interval Count
  -------------- -------- --------------- ---------------- -------- -------
  oracle         started  cluster0
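For unattended operation you may want to parse clustat output rather than watch it. A hypothetical monitoring sketch (the column layout is assumed to match the Member Status table above; the Down status is injected for the demo):

```shell
# Flag any member whose Status column is not "Up".
# The sample lines mimic clustat's member table; "Down" is injected here.
member_table='cluster0 Up 0 Good
cluster1 Down 1 Good'
echo "$member_table" | awk '$2 != "Up" {print "ALERT: " $1 " is " $2}'
# prints "ALERT: cluster1 is Down"
```

A real deployment would feed the live member lines of clustat output into the awk filter and send the alert wherever is appropriate.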