[root@es3 ~]# masterha_check_repl --conf=/root/app1.cnf
Tue Aug 20 10:22:41 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Aug 20 10:22:41 2019 - [info] Reading application default configuration from /root/app1.cnf..
Tue Aug 20 10:22:41 2019 - [info] Reading server configuration from /root/app1.cnf..
Tue Aug 20 10:22:41 2019 - [info] MHA::MasterMonitor version 0.58.
Tue Aug 20 10:22:42 2019 - [info] GTID failover mode = 1
Tue Aug 20 10:22:42 2019 - [info] Dead Servers:
Tue Aug 20 10:22:42 2019 - [info] Alive Servers:
Tue Aug 20 10:22:42 2019 - [info]   es1(192.168.56.14:3306)
Tue Aug 20 10:22:42 2019 - [info]   es2(192.168.56.15:3306)
Tue Aug 20 10:22:42 2019 - [info]   es3(192.168.56.16:3306)
Tue Aug 20 10:22:42 2019 - [info] Alive Slaves:
Tue Aug 20 10:22:42 2019 - [info]   es1(192.168.56.14:3306)  Version=5.7.24-log (oldest major version between slaves) log-bin:enabled
Tue Aug 20 10:22:42 2019 - [info]     GTID ON
Tue Aug 20 10:22:42 2019 - [info]     Replicating from es3(192.168.56.16:3306)
Tue Aug 20 10:22:42 2019 - [info]   es2(192.168.56.15:3306)  Version=5.7.24-log (oldest major version between slaves) log-bin:enabled
Tue Aug 20 10:22:42 2019 - [info]     GTID ON
Tue Aug 20 10:22:42 2019 - [info]     Replicating from 192.168.56.16(192.168.56.16:3306)
Tue Aug 20 10:22:42 2019 - [info] Current Alive Master: es3(192.168.56.16:3306)
Tue Aug 20 10:22:42 2019 - [info] Checking slave configurations..
Tue Aug 20 10:22:42 2019 - [info]  read_only=1 is not set on slave es2(192.168.56.15:3306).
Tue Aug 20 10:22:42 2019 - [info] Checking replication filtering settings..
Tue Aug 20 10:22:42 2019 - [info]  binlog_do_db= , binlog_ignore_db=
Tue Aug 20 10:22:42 2019 - [info]  Replication filtering check ok.
Tue Aug 20 10:22:42 2019 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Tue Aug 20 10:22:42 2019 - [info] Checking SSH publickey authentication settings on the current master..
Tue Aug 20 10:22:43 2019 - [info] HealthCheck: SSH to es3 is reachable.
Tue Aug 20 10:22:43 2019 - [info]
es3(192.168.56.16:3306) (current master)
 +--es1(192.168.56.14:3306)
 +--es2(192.168.56.15:3306)
Tue Aug 20 10:22:43 2019 - [info] Checking replication health on es1..
Tue Aug 20 10:22:43 2019 - [info]  ok.
Tue Aug 20 10:22:43 2019 - [info] Checking replication health on es2..
Tue Aug 20 10:22:43 2019 - [info]  ok.
Tue Aug 20 10:22:43 2019 - [info] Checking master_ip_failover_script status:
Tue Aug 20 10:22:43 2019 - [info]   /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=es3 --orig_master_ip=192.168.56.16 --orig_master_port=3306
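For reference, a minimal /root/app1.cnf consistent with the paths and hosts visible in this output might look like the sketch below. This is a reconstruction from the logs, not the author's actual file; every value shown (work directory, script path, ssh_user, host list) is inferred from the check output and the error messages later in this test:

```ini
[server default]
# inferred from the app1.failover.complete path in the error log
manager_workdir=/data/manager
# script path as printed by masterha_check_repl above
master_ip_failover_script=/usr/local/bin/master_ip_failover
ssh_user=root

[server1]
hostname=192.168.56.14

[server2]
hostname=192.168.56.15

[server3]
hostname=192.168.56.16
```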
----- Failover Report -----
app1: MySQL Master failover es3(192.168.56.16:3306) to es1(192.168.56.14:3306) succeeded
Master es3(192.168.56.16:3306) is down!
Check MHA Manager logs at es3 for details.
Started automated(non-interactive) failover.
Invalidated master IP address on es3(192.168.56.16:3306)
Power off es3.
Selected es1(192.168.56.14:3306) as a new master.
es1(192.168.56.14:3306): OK: Applying all logs succeeded.
es1(192.168.56.14:3306): OK: Activated master IP address.
es2(192.168.56.15:3306): OK: Slave started, replicating from es1(192.168.56.14:3306)
es1(192.168.56.14:3306): Resetting slave info succeeded.
Master failover to es1(192.168.56.14:3306) completed successfully.
Tue Aug 20 10:23:14 2019 - [info] Sending mail..
[root@es3 ~]# ll
2. Problem encountered: by default, if MHA detects two consecutive master crashes less than eight hours apart, it refuses to perform another failover. To proceed, you must delete the most recent app1.failover.complete marker file.
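The cleanup described above is a single command. The marker file lives under the manager work directory, which in this setup is /data/manager, as the error message below confirms:

```shell
# Remove MHA's "last failover" marker so a new failover is allowed
# within the default 8-hour window. The path is the manager_workdir
# configured in app1.cnf (here /data/manager).
rm -f /data/manager/app1.failover.complete
```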
[error][/usr/share/perl5/vendor_perl/MHA/MasterFailover.pm, ln310] Last failover was done at 2019/08/20 10:23:14. Current time is too early to do failover again. If you want to do failover, manually remove /data/manager/app1.failover.complete and run this script again.
Tue Aug 20 10:54:20 2019 - [error][/usr/share/perl5/vendor_perl/MHA/ManagerUtil.pm, ln177] Got ERROR:  at /usr/bin/masterha_manager line 65.
Alternatively, start the manager with the following parameter added: