It was not until a friend asked for it that I took another look at Zenoss HA: how the data is synchronized, how the services are kept running, and whether the distributed structure stays intact during a high-availability failover.
After several days of testing and research I finally put together a demo environment. The whole process and the thinking behind the configuration are shared here.
1. Architecture Overview
The figure shows two Zenoss servers in a master/slave pair. Heartbeat controls the VIP interface, the MySQL and Zenoss services, and the mounting of the DRBD devices, while DRBD keeps the relevant Zenoss data (the MySQL database, the Zenoss home directory, and the Zenoss performance directory) synchronized between the two servers. Note that a DRBD device is only a virtual block device, so each server needs dedicated partitions to hold the Zenoss data. In the demo environment a new disk was added in VMware and partitioned as follows:
/dev/sdb1 /var/lib/mysql MySQL database
/dev/sdb2 /opt/zenoss/ Zenoss home directory
/dev/sdb3 /opt/zenoss/perf Zenoss RRD data
Note that the sdbX partitions do not need to be formatted at this point, so create them with fdisk after the operating system is installed. The detailed steps are well documented online and are not repeated here; a rough sketch follows.
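As a minimal sketch only (the partition sizes depend entirely on your environment), the new disk can be partitioned and verified like this:
- # fdisk /dev/sdb        # interactively create three primary partitions sdb1, sdb2, sdb3
- # partprobe /dev/sdb    # ask the kernel to re-read the partition table
- # fdisk -l /dev/sdb     # confirm that /dev/sdb1, /dev/sdb2 and /dev/sdb3 exist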
2. Preparation
Two identical CentOS systems (CentOS 5.6, 32-bit) were created in VMware. Note that MySQL is not installed during the CentOS installation, mainly so that the MySQL data directory can later live on DRBD and be shared by the two nodes. The network configuration of the systems is as follows:
zenoss master
hostname:zenossha1
eth0 ip:119.10.119.5
eth1 ip:192.168.100.5
zenoss slave
hostname:zenossha2
eth0 ip:119.10.119.6
eth1 ip:192.168.100.6
The VIP is planned as 119.10.119.8; a reference interface configuration is sketched below.
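For reference only, a minimal static configuration of the two interfaces on zenossha1, assuming the standard CentOS network-scripts layout (adjust the addresses accordingly on zenossha2):
- # vi /etc/sysconfig/network-scripts/ifcfg-eth0
- DEVICE=eth0
- BOOTPROTO=static
- IPADDR=119.10.119.5
- NETMASK=255.255.255.240
- ONBOOT=yes
- # vi /etc/sysconfig/network-scripts/ifcfg-eth1
- DEVICE=eth1
- BOOTPROTO=static
- IPADDR=192.168.100.5
- NETMASK=255.255.255.0
- ONBOOT=yes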
Once the systems are installed, first set up the hosts entries on both servers so that Heartbeat and DRBD can later communicate with each other by hostname.
- # vi /etc/hosts
- 192.168.100.5 zenossha1
- 192.168.100.6 zenossha2
3. Installing and Configuring DRBD
With the preparation complete, install DRBD on both servers:
- # yum -y install drbd82 kmod-drbd82
After the installation, create the same DRBD configuration file on both servers:
- # vi /etc/drbd.conf
- global {
-     usage-count no;
- }
- common {
-     protocol C;
-     disk {
-         on-io-error detach; no-disk-flushes; no-md-flushes;
-     }
-     net {
-         max-buffers 2048; unplug-watermark 2048;
-     }
-     syncer {
-         rate 700000K; al-extents 1801;
-     }
- }
- resource mysql {
-     device /dev/drbd1; disk /dev/sdb1; meta-disk internal;
-     on zenossha1 {
-         address 192.168.100.5:7789;
-     }
-     on zenossha2 {
-         address 192.168.100.6:7789;
-     }
- }
- resource zenhome {
-     device /dev/drbd2; disk /dev/sdb2; meta-disk internal;
-     on zenossha1 {
-         address 192.168.100.5:7790;
-     }
-     on zenossha2 {
-         address 192.168.100.6:7790;
-     }
- }
- resource zenperf {
-     device /dev/drbd3; disk /dev/sdb3; meta-disk internal;
-     on zenossha1 {
-         address 192.168.100.5:7791;
-     }
-     on zenossha2 {
-         address 192.168.100.6:7791;
-     }
- }
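Since /etc/drbd.conf must be identical on both nodes, one simple way to keep the copies in sync is to edit it on the master and push it to the slave (hostname as defined in /etc/hosts above):
- # scp /etc/drbd.conf zenossha2:/etc/drbd.conf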
Next we create the DRBD metadata. Before doing so, test that the partition can be written to (this also wipes any old filesystem signature that would make create-md fail):
- # dd if=/dev/zero of=/dev/sdb1 bs=1M count=128
Then create the metadata for each of the three resources:
- # drbdadm create-md mysql
- # drbdadm create-md zenhome
- # drbdadm create-md zenperf
Start DRBD on both servers:
- # service drbd start
Note that the first time DRBD is started on both nodes, a full synchronization of the devices takes place; how long it takes depends on the network. Progress can be watched in /proc/drbd:
- # more /proc/drbd
- version: 8.2.6 (api:88/proto:86-88)
- GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by buildsvn@c5-x8664-build, 2008-10-03 11:30:17
- 1: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---
- ns:35982212 nr:0 dw:476244 dr:35982432 al:209 bm:2167 lo:1 pe:50 ua:14883 ap:43 oos:12687508
- [=============>......] sync'ed: 73.7% (12390/47063)M
- finish: 1:14:11 speed: 2,832 (3,780) K/sec
- 2: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---
- ns:35724156 nr:0 dw:892220 dr:34843136 al:414 bm:2155 lo:1 pe:8 ua:348 ap:0 oos:12674912
- [=============>......] sync'ed: 73.8% (12377/47063)M
- finish: 0:54:52 speed: 3,848 (3,784) K/sec
- 3: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---
- ns:35911460 nr:0 dw:435600 dr:35961280 al:189 bm:2165 lo:1 pe:2055 ua:14912 ap:2049 oos:10854136
- [==============>.....] sync'ed: 76.6% (10599/45251)M
- finish: 0:44:07 speed: 4,072 (3,780) K/sec
Wait until the DRBD synchronization has finished before carrying out the steps below.
Next, the DRBD devices have to be formatted and the primary/secondary relationship between the two servers has to be established, so the following steps are carried out node by node.
On the master server, first check the DRBD status:
- # cat /proc/drbd
- version: 8.2.6 (api:88/proto:86-88)
- GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by buildsvn@c5-i386-build, 2008-10-03 11:42:32
- 1: cs:WFConnection st:Primary/Unknown ds:UpToDate/DUnknown C r---
- ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 oos:2570252
- 2: cs:WFConnection st:Secondary/Unknown ds:Inconsistent/DUnknown C r---
- ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 oos:2650604
- 3: cs:WFConnection st:Secondary/Unknown ds:Inconsistent/DUnknown C r---
- ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 oos:2650604
Force all three resources to primary on the master:
- # drbdadm -- -o primary mysql
- # drbdadm -- -o primary zenhome
- # drbdadm -- -o primary zenperf
Then create filesystems on the DRBD devices:
- # mkfs.ext3 /dev/drbd1
- # mkfs.ext3 /dev/drbd2
- # mkfs.ext3 /dev/drbd3
Once formatting is done, stop DRBD on the master:
- # service drbd stop
On the slave server, repeat the master's operations (force the resources to primary and create the filesystems):
- # drbdadm -- -o primary mysql
- # drbdadm -- -o primary zenhome
- # drbdadm -- -o primary zenperf
- # mkfs.ext3 /dev/drbd1
- # mkfs.ext3 /dev/drbd2
- # mkfs.ext3 /dev/drbd3
After both servers have formatted the devices, demote the slave's DRBD resources back to secondary:
- # drbdadm secondary mysql
- # drbdadm secondary zenhome
- # drbdadm secondary zenperf
Back on the master server, start DRBD again and force the resources to primary:
- # service drbd start
- # drbdadm -- -o primary mysql
- # drbdadm -- -o primary zenhome
- # drbdadm -- -o primary zenperf
Create the mount points and mount the DRBD devices on the master:
- # mkdir /var/lib/mysql -p
- # mount /dev/drbd1 /var/lib/mysql
- # mkdir /opt/zenoss/ -p
- # mount /dev/drbd2 /opt/zenoss
- # mkdir /opt/zenoss/perf -p
- # mount /dev/drbd3 /opt/zenoss/perf
Once the devices are mounted, the Zenoss-related directories on the master can be used directly. The slave cannot use the DRBD resources at the same time, but we still install MySQL and Zenoss on both machines. (On the slave I really only need the MySQL and Zenoss services to be installed; the actual data will ultimately come from the DRBD devices.)
4. Installing MySQL and Zenoss
Install MySQL and Zenoss on both servers:
- # yum -y install mysql mysql-server
- # service mysqld start
- # rpm -ivh zenoss-3.2.1.el5.i386.rpm
Because /opt/zenoss/perf is a separately mounted DRBD device, make sure it is owned by the zenoss user:
- # chown zenoss:zenoss -R /opt/zenoss/perf
Then start Zenoss:
- # service zenoss start
After confirming that Zenoss starts, stop the services and disable Zenoss's automatic startup, since Heartbeat will be in charge of starting it from now on:
- # service zenoss stop
- # service mysqld stop
- # chkconfig zenoss off
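Since Heartbeat will also be starting and stopping MySQL, it may make sense to disable its automatic startup as well; this step is an addition and is not part of the original listing:
- # chkconfig mysqld off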
5. Installing and Configuring Heartbeat
Next, install Heartbeat. Note that installing Heartbeat with yum has a known bug, so the install command has to be run twice:
- #yum -y install heartbeat
- #yum -y install heartbeat
Verify that the Heartbeat packages are installed:
- # rpm -qa |grep heartbeat
- heartbeat-stonith-2.1.3-3.el5.centos
- heartbeat-2.1.3-3.el5.centos
- heartbeat-pils-2.1.3-3.el5.centos
Enable the heartbeat service at boot:
- # chkconfig --add heartbeat
- # chkconfig heartbeat on
Next, configure Heartbeat. First create the authentication key file, which must be identical on both nodes:
- # vi /etc/ha.d/authkeys
- auth 3
- #1 crc
- #2 sha1 HI!
- 3 md5 zenosshaTestforMurA!
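Heartbeat refuses to start if the key file is readable by anyone other than root, so restrict its permissions (a step not shown in the original listing):
- # chmod 600 /etc/ha.d/authkeys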
Then create the main configuration file:
- # vi /etc/ha.d/ha.cf
- debugfile /var/log/ha-debug
- logfile /var/log/ha-log
- keepalive 1
- deadtime 20
- warntime 5
- initdead 40
- udpport 694
- ucast eth1 192.168.100.6 # the ucast IP is the peer's address: use 192.168.100.6 on the master and 192.168.100.5 on the slave
- auto_failback on
- node zenossha1
- node zenossha2
- ping 192.168.100.1
- respawn hacluster /usr/lib/heartbeat/ipfail
- apiauth ipfail uid=hacluster
- use_logd yes
Finally, define the resource group in haresources; this file must also be identical on both nodes:
- # vi /etc/ha.d/haresources
- zenossha1 IPaddr::119.10.119.8/28/eth0 drbddisk::mysql Filesystem::/dev/drbd1::/var/lib/mysql::ext3 drbddisk::zenhome Filesystem::/dev/drbd2::/opt/zenoss drbddisk::zenperf Filesystem::/dev/drbd3::/opt/zenoss/perf::ext3::noatime,data=writeback mysqld zenoss
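For readability only, the single haresources line breaks down as follows (in the actual file everything stays on one line; resources are started left to right on takeover and stopped in reverse order on release):
- zenossha1                                                     # preferred node for the resource group
- IPaddr::119.10.119.8/28/eth0                                  # bring up the VIP on eth0
- drbddisk::mysql Filesystem::/dev/drbd1::/var/lib/mysql::ext3  # promote and mount the MySQL device
- drbddisk::zenhome Filesystem::/dev/drbd2::/opt/zenoss         # promote and mount the Zenoss home
- drbddisk::zenperf Filesystem::/dev/drbd3::/opt/zenoss/perf::ext3::noatime,data=writeback  # promote and mount the RRD data
- mysqld zenoss                                                 # finally start the MySQL and Zenoss init scripts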
6. Testing Zenoss HA
The configuration work is now complete. Start heartbeat on both the master and the slave and test the HA setup.
- # service heartbeat start
On the master, verify that the VIP, the DRBD mounts, and the services have come up:
- # ip a
- 1: lo: mtu 16436 qdisc noqueue
-     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
-     inet 127.0.0.1/8 scope host lo
- 2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
-     link/ether 00:0c:29:f7:36:75 brd ff:ff:ff:ff:ff:ff
-     inet 119.10.119.5/28 brd 119.10.119.15 scope global eth0
-     inet 119.10.119.8/28 brd 119.10.119.15 scope global secondary eth0:0
- 3: eth1: mtu 1500 qdisc pfifo_fast qlen 1000
-     link/ether 00:0c:29:f7:36:7f brd ff:ff:ff:ff:ff:ff
-     inet 192.168.100.5/24 brd 192.168.100.255 scope global eth1
- # df -l
- Filesystem           1K-blocks      Used Available Use% Mounted on
- /dev/mapper/VolGroup00-LogVol00
- 6030784 2748296 2971192 49% /
- /dev/sda1 101086 12353 83514 13% /boot
- tmpfs 1037484 0 1037484 0% /dev/shm
- /dev/drbd1 3162284 99996 2901648 4% /var/lib/mysql
- /dev/drbd2 3162316 479952 2521724 16% /opt/zenoss
- /dev/drbd3 3992452 77052 3712588 3% /opt/zenoss/perf
- # service mysqld status
- mysqld (pid 12631) is running...
- # service zenoss status
- Daemon: zeoctl program running; pid=12938
- Daemon: zopectl program running; pid=12943
- Daemon: zenhub program running; pid=12978
- Daemon: zenjobs program running; pid=13007
- Daemon: zenping program running; pid=13073
- Daemon: zensyslog program running; pid=13111
- Daemon: zenstatus program running; pid=13113
- Daemon: zenactions program running; pid=13140
- Daemon: zentrap program running; pid=13240
- Daemon: zenmodeler program running; pid=13245
- Daemon: zenperfsnmp program running; pid=13279
- Daemon: zencommand program running; pid=13313
- Daemon: zenprocess program running; pid=13339
- Daemon: zenwin program running; pid=13377
- Daemon: zeneventlog program running; pid=13415
Now simulate a failover by stopping Heartbeat on the master:
- # service heartbeat stop
- Stopping High-Availability services:
- [  OK  ]
On the master, the VIP, the DRBD mounts, and the services have been released:
- # ip a
- 1: lo: mtu 16436 qdisc noqueue
-     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
-     inet 127.0.0.1/8 scope host lo
- 2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
-     link/ether 00:0c:29:f7:36:75 brd ff:ff:ff:ff:ff:ff
-     inet 119.10.119.5/28 brd 119.10.119.15 scope global eth0
- 3: eth1: mtu 1500 qdisc pfifo_fast qlen 1000
-     link/ether 00:0c:29:f7:36:7f brd ff:ff:ff:ff:ff:ff
-     inet 192.168.100.5/24 brd 192.168.100.255 scope global eth1
- # df -l
- Filesystem           1K-blocks      Used Available Use% Mounted on
- /dev/mapper/VolGroup00-LogVol00
- 6030784 2746420 2973068 49% /
- /dev/sda1 101086 12353 83514 13% /boot
- tmpfs 1037484 0 1037484 0% /dev/shm
- # service mysqld status
- mysqld is stopped
- # service zenoss status
- Startup script not found at /opt/zenoss/bin/zenoss.
The ha-debug log on the master records the resources being released:
- # cat /var/log/ha-debug
- Daemon: zeneventlog stopping...
- Daemon: zenwin stopping...
- Daemon: zenprocess stopping...
- Daemon: zencommand stopping...
- Daemon: zenperfsnmp stopping...
- Daemon: zenmodeler stopping...
- Daemon: zentrap stopping...
- Daemon: zenactions stopping...
- Daemon: zenstatus stopping...
- Daemon: zensyslog stopping...
- Daemon: zenping stopping...
- Daemon: zenjobs stopping...
- Daemon: zenhub stopping...
- Daemon: zopectl .
- daemon process stopped
- Daemon: zeoctl .
- daemon process stopped
- Stopping MySQL:  [  OK  ]
- INFO: Success
- INFO: Success
- INFO: Success
- In IP Stop
- SIOCDELRT: No such process
- INFO: Success
Meanwhile, on the slave server, the VIP, the DRBD mounts, and the services have been taken over:
- # ip a
- 1: lo: mtu 16436 qdisc noqueue
-     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
-     inet 127.0.0.1/8 scope host lo
- 2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
-     link/ether 00:50:56:2a:c0:d8 brd ff:ff:ff:ff:ff:ff
-     inet 119.10.119.6/28 brd 119.10.119.15 scope global eth0
-     inet 119.10.119.8/28 brd 119.10.119.15 scope global secondary eth0:0
- 3: eth1: mtu 1500 qdisc pfifo_fast qlen 1000
-     link/ether 00:50:56:3c:96:bc brd ff:ff:ff:ff:ff:ff
-     inet 192.168.100.6/24 brd 192.168.100.255 scope global eth1
- # df -l
- Filesystem           1K-blocks      Used Available Use% Mounted on
- /dev/mapper/VolGroup00-LogVol00
- 6030784 3226032 2493456 57% /
- /dev/sda1 101086 12353 83514 13% /boot
- tmpfs 1037484 0 1037484 0% /dev/shm
- /dev/drbd1 3162284 99996 2901648 4% /var/lib/mysql
- /dev/drbd2 3162316 479676 2522000 16% /opt/zenoss
- /dev/drbd3 3992452 77052 3712588 3% /opt/zenoss/perf
- # service mysqld status
- mysqld (pid 9919) is running...
- # service zenoss status
- Daemon: zeoctl program running; pid=10232
- Daemon: zopectl program running; pid=10237
- Daemon: zenhub program running; pid=10272
- Daemon: zenjobs program running; pid=10301
- Daemon: zenping program running; pid=10363
- Daemon: zensyslog program running; pid=10402
- Daemon: zenstatus program running; pid=10408
- Daemon: zenactions program running; pid=10434
- Daemon: zentrap program running; pid=10539
- Daemon: zenmodeler program running; pid=10544
- Daemon: zenperfsnmp program running; pid=10578
- Daemon: zencommand program running; pid=10613
- Daemon: zenprocess program running; pid=10638
- Daemon: zenwin program running; pid=10675
- Daemon: zeneventlog program running; pid=10713
- # cat /var/log/ha-debug
- ipfail[23546]: 2012/04/11_12:35:54 debug: Other side is unstable.
- heartbeat[23536]: 2012/04/11_12:36:08 info: Received shutdown notice from 'zenossha1'.
- heartbeat[23536]: 2012/04/11_12:36:08 info: Resources being acquired from zenossha1.
- heartbeat[23536]: 2012/04/11_12:36:08 debug: StartNextRemoteRscReq(): child count 1
- heartbeat[8929]: 2012/04/11_12:36:08 info: acquire local HA resources (standby).
- heartbeat[8929]: 2012/04/11_12:36:08 info: local HA resource acquisition completed (standby).
- heartbeat[8930]: 2012/04/11_12:36:08 info: No local resources [/usr/share/heartbeat/ResourceManager listkeys zenossha2] to acquire.
- heartbeat[23536]: 2012/04/11_12:36:08 info: Standby resource acquisition done [foreign].
- heartbeat[23536]: 2012/04/11_12:36:08 debug: StartNextRemoteRscReq(): child count 1
- heartbeat[8955]: 2012/04/11_12:36:08 debug: notify_world: setting SIGCHLD Handler to SIG_DFL
- logd is not runningharc[8955]: 2012/04/11_12:36:08 info: Running /etc/ha.d/rc.d/status status
- logd is not runningmach_down[8971]: 2012/04/11_12:36:08 info: Taking over resource group IPaddr::119.10.119.8/28/eth0
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:08 info: Acquiring resource group: zenossha1 IPaddr::119.10.119.8/28/eth0 drbddisk::mysql Filesystem::/dev/drbd1::/var/lib/mysql::ext3 drbddisk::zenhome Filesystem::/dev/drbd2::/opt/zenoss drbddisk::zenperf Filesystem::/dev/drbd3::/opt/zenoss/perf::ext3::noatime,data=writeback mysqld zenoss
- logd is not runningIPaddr[9024]: 2012/04/11_12:36:09 INFO: Resource is stopped
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:09 info: Running /etc/ha.d/resource.d/IPaddr 119.10.119.8/28/eth0 start
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:09 debug: Starting /etc/ha.d/resource.d/IPaddr 119.10.119.8/28/eth0 start
- logd is not runningIPaddr[9122]: 2012/04/11_12:36:09 INFO: Using calculated netmask for 119.10.119.8: 255.255.255.240
- logd is not runningIPaddr[9122]: 2012/04/11_12:36:09 DEBUG: Using calculated broadcast for 119.10.119.8: 119.10.119.15
- logd is not runningIPaddr[9122]: 2012/04/11_12:36:09 INFO: eval ifconfig eth0:0 119.10.119.8 netmask 255.255.255.240 broadcast 119.10.119.15
- logd is not runningIPaddr[9122]: 2012/04/11_12:36:09 DEBUG: Sending Gratuitous Arp for 119.10.119.8 on eth0:0 [eth0]
- logd is not runningIPaddr[9093]: 2012/04/11_12:36:10 INFO: Success
- INFO: Success
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:10 debug: /etc/ha.d/resource.d/IPaddr 119.10.119.8/28/eth0 start done. RC=0
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:10 info: Running /etc/ha.d/resource.d/drbddisk mysql start
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:10 debug: Starting /etc/ha.d/resource.d/drbddisk mysql start
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:10 debug: /etc/ha.d/resource.d/drbddisk mysql start done. RC=0
- logd is not runningFilesystem[9269]: 2012/04/11_12:36:10 INFO: Resource is stopped
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:10 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /var/lib/mysql ext3 start
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:10 debug: Starting /etc/ha.d/resource.d/Filesystem /dev/drbd1 /var/lib/mysql ext3 start
- logd is not runningFilesystem[9350]: 2012/04/11_12:36:11 INFO: Running start for /dev/drbd1 on /var/lib/mysql
- logd is not runningFilesystem[9339]: 2012/04/11_12:36:11 INFO: Success
- INFO: Success
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:11 debug: /etc/ha.d/resource.d/Filesystem /dev/drbd1 /var/lib/mysql ext3 start done. RC=0
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:11 info: Running /etc/ha.d/resource.d/drbddisk zenhome start
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:11 debug: Starting /etc/ha.d/resource.d/drbddisk zenhome start
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:11 debug: /etc/ha.d/resource.d/drbddisk zenhome start done. RC=0
- logd is not runningFilesystem[9459]: 2012/04/11_12:36:11 INFO: Resource is stopped
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:11 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd2 /opt/zenoss start
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:11 debug: Starting /etc/ha.d/resource.d/Filesystem /dev/drbd2 /opt/zenoss start
- logd is not runningFilesystem[9540]: 2012/04/11_12:36:12 INFO: Running start for /dev/drbd2 on /opt/zenoss
- logd is not runningFilesystem[9540]: 2012/04/11_12:36:12 INFO: Starting filesystem check on /dev/drbd2
- fsck 1.39 (29-May-2006)
- /dev/drbd2: clean, 29826/402400 files, 132525/803216 blocks
- logd is not runningFilesystem[9529]: 2012/04/11_12:36:12 INFO: Success
- INFO: Success
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:12 debug: /etc/ha.d/resource.d/Filesystem /dev/drbd2 /opt/zenoss start done. RC=0
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:12 info: Running /etc/ha.d/resource.d/drbddisk zenperf start
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:12 debug: Starting /etc/ha.d/resource.d/drbddisk zenperf start
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:12 debug: /etc/ha.d/resource.d/drbddisk zenperf start done. RC=0
- logd is not runningFilesystem[9654]: 2012/04/11_12:36:12 INFO: Resource is stopped
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:13 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd3 /opt/zenoss/perf ext3 noatime,data=writeback start
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:13 debug: Starting /etc/ha.d/resource.d/Filesystem /dev/drbd3 /opt/zenoss/perf ext3 noatime,data=writeback start
- logd is not runningFilesystem[9735]: 2012/04/11_12:36:13 INFO: Running start for /dev/drbd3 on /opt/zenoss/perf
- logd is not runningFilesystem[9724]: 2012/04/11_12:36:13 INFO: Success
- INFO: Success
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:13 debug: /etc/ha.d/resource.d/Filesystem /dev/drbd3 /opt/zenoss/perf ext3 noatime,data=writeback start done. RC=0
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:13 info: Running /etc/init.d/mysqld start
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:13 debug: Starting /etc/init.d/mysqld start
- Starting MySQL:  [  OK  ]
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:15 debug: /etc/init.d/mysqld start done. RC=0
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:24 info: Running /etc/init.d/zenoss start
- logd is not runningResourceManager[8997]: 2012/04/11_12:36:24 debug: Starting /etc/init.d/zenoss start
- Daemon: zeoctl .
- daemon process started, pid=10232
- Daemon: zopectl heartbeat[23536]: 2012/04/11_12:36:30 WARN: node zenossha1: is dead
- heartbeat[23536]: 2012/04/11_12:36:30 info: Dead node zenossha1 gave up resources.
- heartbeat[23536]: 2012/04/11_12:36:30 info: Link zenossha1:eth1 dead.
- ipfail[23546]: 2012/04/11_12:36:30 info: Status update: Node zenossha1 now has status dead
- ipfail[23546]: 2012/04/11_12:36:30 debug: Found ping node 192.168.100.1!
- ipfail[23546]: 2012/04/11_12:36:31 info: NS: We are still alive!
- ipfail[23546]: 2012/04/11_12:36:31 info: Link Status update: Link zenossha1/eth1 now has status dead
- ipfail[23546]: 2012/04/11_12:36:31 debug: Found ping node 192.168.100.1!
- ipfail[23546]: 2012/04/11_12:36:32 info: Asking other side for ping node count.
- ipfail[23546]: 2012/04/11_12:36:32 debug: Message [num_ping] sent.
- ipfail[23546]: 2012/04/11_12:36:32 info: Checking remote count of ping nodes.
- .
- daemon process started, pid=10237
- Daemon: zenhub starting...
- Daemon: zenjobs starting...
- Daemon: zenping starting...
- Daemon: zensyslog starting...
- Daemon: zenstatus starting...
- Daemon: zenactions starting...
- Daemon: zentrap starting...
- Daemon: zenmodeler starting...
- Daemon: zenperfsnmp starting...
- Daemon: zencommand starting...
- Daemon: zenprocess starting...
- Daemon: zenwin starting...
- Daemon: zeneventlog starting...
- logd is not runningResourceManager[8997]: 2012/04/11_12:37:08 debug: /etc/init.d/zenoss start done. RC=0
- logd is not runningmach_down[8971]: 2012/04/11_12:37:08 info: /usr/share/heartbeat/mach_down: nice_failback: foreign resources acquired
- heartbeat[23536]: 2012/04/11_12:37:08 info: mach_down takeover complete.
- logd is not runningmach_down[8971]: 2012/04/11_12:37:08 info: mach_down takeover complete for node zenossha1.
Finally, start the heartbeat service on the master again to switch the resources back to it (auto_failback is on). The verification is left to the reader; a quick check is sketched below.
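A minimal sanity check on the master after the failback, assuming the same hosts and VIP as above:
- # service heartbeat start        # on zenossha1; with auto_failback on, the resources move back
- # ip a | grep 119.10.119.8       # the VIP should reappear on the master's eth0
- # df -l | grep drbd              # the three DRBD filesystems should be mounted again
- # service mysqld status          # MySQL should be running
- # service zenoss status          # the Zenoss daemons should be running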