- Installation overview
- High availability setup
- High availability and load balancing tests
- Troubleshooting
# 1. Installation Overview

## 1.1 Purpose

MySQL officially provides InnoDB Cluster, which consists of MySQL MGR and MySQL Router. MySQL MGR provides autonomous high availability at the database layer, while MySQL Router proxies client access. Once deployed, MySQL Router itself is a single point of failure: if it goes down, access to the database cluster is affected. To improve the overall availability of the database service, we therefore need a high availability solution for MySQL Router.
## 1.2 MySQL Router high availability components

The high availability solution in this article is built with Corosync and Pacemaker, two open-source projects that together provide communication, synchronization, resource management, and failover for high availability clusters.
### 1.2.1 Corosync

Corosync is an open-source cluster communication and synchronization service. It handles communication and data synchronization between cluster nodes and provides reliable message delivery and membership management, keeping the cluster stable in a distributed environment. Corosync communicates over a reliable UDP multicast protocol and offers a pluggable protocol-stack interface, so it supports multiple protocols and network environments. It also exposes an API that other applications can use for communication and synchronization.
### 1.2.2 Pacemaker

Pacemaker is an open-source cluster resource manager and failover tool. It automatically manages resources (such as virtual IPs, file systems, and databases) across cluster nodes and migrates them when a node or resource fails, keeping the whole system available and continuous. Pacemaker supports multiple resource management policies that can be configured as needed, and provides a flexible plugin framework covering different cluster environments and scenarios such as virtualization and cloud computing.

Combining Corosync and Pacemaker gives a complete high availability cluster solution: Corosync handles communication and synchronization between the nodes, while Pacemaker handles resource management and failover. Together they provide reliable communication, synchronization, resource management, and failover services, and form a solid foundation for building reliable, efficient distributed systems.
### 1.2.3 ldirectord

ldirectord is a load balancing tool for Linux. It manages services on multiple servers and distributes client requests across one or more of them to improve service availability and performance. It is usually used together with cluster software such as Heartbeat or Keepalived to provide high availability and load balancing. Its main capabilities are:

- Load balancing: requests are distributed according to a configurable scheduling algorithm, such as round robin, weighted round robin, least connections, or source address hashing, spreading client requests across multiple backend servers.
- Health checks: ldirectord periodically checks the availability of the backend servers and removes unavailable ones from the pool, keeping the service available and stable.
- Session persistence: requests from the same client (identified by IP address, cookie, and so on) can be routed to the same backend server, so the connection between client and backend is not interrupted.
- Dynamic configuration: backend servers and services can be added, removed, or modified at runtime through the command line or the configuration file.

ldirectord was written specifically to monitor LVS: it watches the state of the real servers in the LVS server pool. Running as a daemon on the IPVS node, it sends a request to each real server in the pool. If a server does not respond, ldirectord considers it unavailable and removes it from the IPVS table via ipvsadm; when a later check succeeds, the server is added back, again via ipvsadm.
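As a quick illustration (an extra check, not an original step in this article), once LVS is active you can inspect the IPVS table to watch ldirectord add and remove real servers; the VIP and ports shown are the ones planned later in this article, and the output lines are only indicative:

```bash
# Show the current IPVS virtual services and their real servers (numeric output)
ipvsadm -L -n

# Illustrative output once the full setup below is running: a healthy pool lists all
# three routers, and a failed one disappears until its health check passes again.
# TCP  172.17.129.1:6446 rr
#   -> 172.17.140.24:6446    Route   1   0   0
#   -> 172.17.140.25:6446    Route   1   0   0
#   -> 172.17.139.164:6446   Route   1   0   0
```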
## 1.3 Installation plan
Both MySQL and MySQL Router are version 8.0.32.

| IP | Hostname | Installed components | Ports |
|---|---|---|---|
| 172.17.140.25 | gdb1 | MySQL, MySQL Router, ipvsadm, ldirectord, pcs, pacemaker, corosync | MySQL: 3309; MySQL Router: 6446/6447; pcs TCP: 13314; pcs UDP: 13315 |
| 172.17.140.24 | gdb2 | MySQL, MySQL Router, ipvsadm, ldirectord, pcs, pacemaker, corosync | MySQL: 3309; MySQL Router: 6446/6447; pcs TCP: 13314; pcs UDP: 13315 |
| 172.17.139.164 | gdb3 | MySQL, MySQL Router, ipvsadm, ldirectord, pcs, pacemaker, corosync | MySQL: 3309; MySQL Router: 6446/6447; pcs TCP: 13314; pcs UDP: 13315 |
| 172.17.129.1 | VIP | | 6446, 6447 |
| 172.17.139.62 | MySQL client | | |
The overall installation steps are as follows.
# 2. High Availability Setup

## 2.1 Base environment setup (perform on all three servers)

- Set the hostname on each server according to the plan:
hostnamectl set-hostname gdb1
hostnamectl set-hostname gdb2
hostnamectl set-hostname gdb3
- Append the following entries to /etc/hosts on all three servers:
172.17.140.25 gdb1
172.17.140.24 gdb2
172.17.139.164 gdb3
- Disable the firewall on all three servers:
systemctl stop firewalld
systemctl disable firewalld
- Disable SELinux on all three servers. If SELinux was not already disabled, the server must be rebooted after the configuration file is changed for the change to fully take effect.
Output showing that SELinux is disabled confirms the change is complete.
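The original commands and output for this step are not reproduced here; a minimal sketch of the usual way to do it on CentOS/RHEL (verify the file path on your systems) is:

```bash
# Turn off enforcement for the running system immediately
setenforce 0

# Make it permanent; a reboot is required for this to fully take effect
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# "Permissive" now, or "Disabled" after the reboot, means SELinux is no longer enforcing
getenforce
```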
- Run the following commands on each of the three servers to set up mutual SSH trust between them.
Mutual trust only makes copying files between the servers more convenient; it is not a prerequisite for building the cluster.
ssh-keygen -t dsa
ssh-copy-id gdb1
ssh-copy-id gdb2
ssh-copy-id gdb3
The process looks like this:
[#19#root@gdb1 ~ 16:16:54]19 ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa): ## press Enter
/root/.ssh/id_dsa already exists.
Overwrite (y/n)? y ## if an old key file exists, enter y to overwrite it
Enter passphrase (empty for no passphrase): ## press Enter
Enter same passphrase again: ## press Enter
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
SHA256:qwJXgfN13+N1U5qvn9fC8pyhA29iuXvQVhCupExzgTc root@gdb1
The key's randomart image is:
+---[DSA 1024]----+
| . .. .. |
| o . o Eo. .|
| o ooooo.o o.|
| oo = .. *.o|
| . S .. o +o|
| . . .o o . .|
| o . * ....|
| . . + *o+o+|
| .. .o*.+++o|
+----[SHA256]-----+
[#20#root@gdb1 ~ 16:17:08]20 ssh-copy-id gdb1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_dsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@gdb1's password: ## enter the root password for gdb1
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'gdb1'"
and check to make sure that only the key(s) you wanted were added.
[#21#root@gdb1 ~ 16:17:22]21 ssh-copy-id gdb2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_dsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@gdb2's password: ## enter the root password for gdb2
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'gdb2'"
and check to make sure that only the key(s) you wanted were added.
[#22#root@gdb1 ~ 16:17:41]22 ssh-copy-id gdb3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_dsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@gdb3's password: ## enter the root password for gdb3
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'gdb3'"
and check to make sure that only the key(s) you wanted were added.
[#23#root@gdb1 ~ 16:17:44]23
If you can ssh from any server to any other without being asked for a password, mutual trust has been set up successfully.
[#24#root@gdb1 ~ 16:21:16]24 ssh gdb1
Last login: Tue Feb 21 16:21:05 2023 from 172.17.140.25
[#1#root@gdb1 ~ 16:21:19]1 logout
Connection to gdb1 closed.
[#25#root@gdb1 ~ 16:21:19]25 ssh gdb2
Last login: Tue Feb 21 16:21:09 2023 from 172.17.140.25
[#1#root@gdb2 ~ 16:21:21]1 logout
Connection to gdb2 closed.
[#26#root@gdb1 ~ 16:21:21]26 ssh gdb3
Last login: Tue Feb 21 10:53:47 2023
[#1#root@gdb3 ~ 16:21:22]1 logout
Connection to gdb3 closed.
[#27#root@gdb1 ~ 16:21:24]27
- Time synchronization. Clock synchronization is critical for both distributed and centralized clusters; inconsistent clocks cause all kinds of anomalies.
yum -y install ntpdate // install the ntpdate client
ntpdate ntp1.aliyun.com // if the servers can reach the internet, use the Aliyun NTP server, otherwise point to an internal NTP server
hwclock -w // write the system time back to the hardware (BIOS) clock
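ntpdate only performs a one-off sync. If you do not run chronyd/ntpd, one simple way (an assumption, not part of the original steps) to keep the clocks aligned is a periodic cron entry:

```bash
# Re-sync against the NTP server every hour and write the time back to the hardware clock
(crontab -l 2>/dev/null; echo '0 * * * * /usr/sbin/ntpdate ntp1.aliyun.com && /usr/sbin/hwclock -w') | crontab -
```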
## 2.2 Build a read/write-splitting MGR cluster with MySQL Router

For the detailed steps, see https://gitee.com/GreatSQL/GreatSQL-Doc/blob/master/deep-dive-mgr/deep-dive-mgr-07.md
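The referenced article covers the cluster build in detail; as a minimal sketch only (assuming MySQL Shell's AdminAPI is used, with the hosts, port 3309, credentials, and the cluster name gdbCluster from the plan above), the InnoDB Cluster behind the Router configuration below would be created roughly like this:

```bash
# Assumed sketch: create the InnoDB Cluster 'gdbCluster' and add the other two members
mysqlsh --js -uroot --password='Abc1234567*' -hgdb1 -P3309 <<'EOF'
var cluster = dba.createCluster('gdbCluster');
cluster.addInstance('root:Abc1234567*@gdb2:3309', {recoveryMethod: 'clone'});
cluster.addInstance('root:Abc1234567*@gdb3:3309', {recoveryMethod: 'clone'});
print(cluster.status());
EOF
```

In practice each instance usually needs to be prepared with dba.configureInstance() first; follow the referenced article for the full procedure.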
## 2.3 Deploy and start MySQL Router on each of the three servers. The MySQL Router configuration file is as follows:
# File automatically generated during MySQL Router bootstrap
[DEFAULT]
name=system
user=root
keyring_path=/opt/software/mysql-router-8.0.32-linux-glibc2.17-x86_64-minimal/var/lib/mysqlrouter/keyring
master_key_path=/opt/software/mysql-router-8.0.32-linux-glibc2.17-x86_64-minimal/mysqlrouter.key
connect_timeout=5
read_timeout=30
dynamic_state=/opt/software/mysql-router-8.0.32-linux-glibc2.17-x86_64-minimal/bin/../var/lib/mysqlrouter/state.json
client_ssl_cert=/opt/software/mysql-router-8.0.32-linux-glibc2.17-x86_64-minimal/var/lib/mysqlrouter/router-cert.pem
client_ssl_key=/opt/software/mysql-router-8.0.32-linux-glibc2.17-x86_64-minimal/var/lib/mysqlrouter/router-key.pem
client_ssl_mode=DISABLED
server_ssl_mode=AS_CLIENT
server_ssl_verify=DISABLED
unknown_config_option=error
[logger]
level=INFO
[metadata_cache:bootstrap]
cluster_type=gr
router_id=1
user=mysql_router1_g9c62rk29lcn
metadata_cluster=gdbCluster
ttl=0.5
auth_cache_ttl=-1
auth_cache_refresh_interval=2
use_gr_notifications=0
[routing:bootstrap_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://gdbCluster/?role=PRIMARY
routing_strategy=first-available
protocol=classic
[routing:bootstrap_ro]
bind_address=0.0.0.0
bind_port=6447
destinations=metadata-cache://gdbCluster/?role=SECONDARY
routing_strategy=round-robin-with-fallback
protocol=classic
[routing:bootstrap_x_rw]
bind_address=0.0.0.0
bind_port=6448
destinations=metadata-cache://gdbCluster/?role=PRIMARY
routing_strategy=first-available
protocol=x
[routing:bootstrap_x_ro]
bind_address=0.0.0.0
bind_port=6449
destinations=metadata-cache://gdbCluster/?role=SECONDARY
routing_strategy=round-robin-with-fallback
protocol=x
[http_server]
port=8443
ssl=1
ssl_cert=/opt/software/mysql-router-8.0.32-linux-glibc2.17-x86_64-minimal/var/lib/mysqlrouter/router-cert.pem
ssl_key=/opt/software/mysql-router-8.0.32-linux-glibc2.17-x86_64-minimal/var/lib/mysqlrouter/router-key.pem
[http_auth_realm:default_auth_realm]
backend=default_auth_backend
method=basic
name=default_realm
[rest_router]
require_realm=default_auth_realm
[rest_api]
[http_auth_backend:default_auth_backend]
backend=metadata_cache
[rest_routing]
require_realm=default_auth_realm
[rest_metadata_cache]
require_realm=default_auth_realm
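The original does not show the start command itself; with the directory layout used in the configuration above, starting the Router and checking its ports would look roughly like this (the config file path is an assumption based on the bootstrap layout):

```bash
# Start MySQL Router in the background with the bootstrap-generated configuration
cd /opt/software/mysql-router-8.0.32-linux-glibc2.17-x86_64-minimal
nohup ./bin/mysqlrouter -c ./mysqlrouter.conf > ./mysqlrouter.log 2>&1 &

# The classic-protocol routing ports from the configuration should now be listening
ss -ltnp | grep -E '6446|6447'
```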
## 2.4 Verify connections through the three MySQL Router instances
[#12#root@gdb2 ~ 14:12:45]12 mysql -uroot -pAbc1234567* -h172.17.140.25 -P6446 -N -e 'select now()' 2> /dev/null
+---------------------+
| 2023-03-17 14:12:46 |
+---------------------+
[#13#root@gdb2 ~ 14:12:46]13 mysql -uroot -pAbc1234567* -h172.17.140.25 -P6447 -N -e 'select now()' 2> /dev/null
+---------------------+
| 2023-03-17 14:12:49 |
+---------------------+
[#14#root@gdb2 ~ 14:12:49]14 mysql -uroot -pAbc1234567* -h172.17.140.24 -P6446 -N -e 'select now()' 2> /dev/null
+---------------------+
| 2023-03-17 14:12:52 |
+---------------------+
[#15#root@gdb2 ~ 14:12:52]15 mysql -uroot -pAbc1234567* -h172.17.140.24 -P6447 -N -e 'select now()' 2> /dev/null
+---------------------+
| 2023-03-17 14:12:55 |
+---------------------+
[#16#root@gdb2 ~ 14:12:55]16 mysql -uroot -pAbc1234567* -h172.17.139.164 -P6446 -N -e 'select now()' 2> /dev/null
+---------------------+
| 2023-03-17 14:12:58 |
+---------------------+
[#17#root@gdb2 ~ 14:12:58]17 mysql -uroot -pAbc1234567* -h172.17.139.164 -P6447 -N -e 'select now()' 2> /dev/null
+---------------------+
| 2023-03-17 14:13:01 |
+---------------------+
[#18#root@gdb2 ~ 14:13:01]18
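The queries above only prove that the ports answer; to additionally confirm that 6446 lands on the PRIMARY and 6447 rotates across the SECONDARY members, a check like the following (not in the original) shows which backend actually served each connection:

```bash
# Through 6446 you should always see the PRIMARY member's hostname; repeat the 6447
# query a few times and the reported hostname should alternate between the secondaries.
mysql -uroot -pAbc1234567* -h172.17.140.25 -P6446 -N -e 'select @@hostname, @@port' 2> /dev/null
mysql -uroot -pAbc1234567* -h172.17.140.25 -P6447 -N -e 'select @@hostname, @@port' 2> /dev/null
```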
## 2.5 Install Pacemaker

- Install pacemaker
Installing pacemaker pulls in corosync as a dependency, so installing the single pacemaker package is enough.
[#1#root@gdb1 ~ 10:05:55]1 yum -y install pacemaker
- Install the pcs management tool
[#1#root@gdb1 ~ 10:05:55]1 yum -y install pcs
- Set the password of the cluster authentication OS user: the user name is hacluster and the password is set to abc123
[#13#root@gdb1 ~ 10:54:13]13 echo abc123 | passwd --stdin hacluster
Changing password for user hacluster.
passwd: all authentication tokens updated successfully.
- Start pcsd and enable it at boot
[#16#root@gdb1 ~ 10:55:30]16 systemctl enable pcsd
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
[#17#root@gdb1 ~ 10:56:03]17 systemctl start pcsd
[#18#root@gdb1 ~ 10:56:08]18 systemctl status pcsd
● pcsd.service - PCS GUI and remote configuration interface
Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2023-02-22 10:56:08 CST; 6s ago
Docs: man:pcsd(8)
man:pcs(8)
Main PID: 27677 (pcsd)
Tasks: 4
Memory: 29.9M
CGroup: /system.slice/pcsd.service
└─27677 /usr/bin/ruby /usr/lib/pcsd/pcsd
Feb 22 10:56:07 gdb1 systemd[1]: Starting PCS GUI and remote configuration interface...
Feb 22 10:56:08 gdb1 systemd[1]: Started PCS GUI and remote configuration interface.
[#19#root@gdb1 ~ 10:56:14]19
- Change the pcsd TCP port to the planned 13314
sed -i '/#PCSD_PORT=2224/a\
PCSD_PORT=13314' /etc/sysconfig/pcsd
Restart the pcsd service so that the new port takes effect:
[#23#root@gdb1 ~ 11:23:20]23 systemctl restart pcsd
[#24#root@gdb1 ~ 11:23:39]24 systemctl status pcsd
● pcsd.service - PCS GUI and remote configuration interface
Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2023-02-22 11:23:39 CST; 5s ago
Docs: man:pcsd(8)
man:pcs(8)
Main PID: 30041 (pcsd)
Tasks: 4
Memory: 27.3M
CGroup: /system.slice/pcsd.service
└─30041 /usr/bin/ruby /usr/lib/pcsd/pcsd
Feb 22 11:23:38 gdb1 systemd[1]: Starting PCS GUI and remote configuration interface...
Feb 22 11:23:39 gdb1 systemd[1]: Started PCS GUI and remote configuration interface.
[#25#root@gdb1 ~ 11:23:45]25
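An optional check (not shown in the original output) that the new port is really in use:

```bash
# pcsd should now listen on 13314 instead of the default 2224
ss -ltnp | grep 13314
```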
- Configure cluster authentication using the OS user hacluster
[#27#root@gdb1 ~ 11:31:43]27 cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
[#28#root@gdb1 ~ 11:32:15]28 pcs cluster auth gdb1:13314 gdb2:13314 gdb3:13314 -u hacluster -p 'abc123'
gdb1: Authorized
gdb2: Authorized
gdb3: Authorized
[#29#root@gdb1 ~ 11:33:18]29
- Create the cluster; this can be run on any node.
## cluster name gdb_ha, UDP multicast port 13315, address mask 24, member hosts gdb1, gdb2, gdb3
[#31#root@gdb1 ~ 11:41:48]31 pcs cluster setup --force --name gdb_ha --transport=udp --addr0 24 --mcastport0 13315 gdb1 gdb2 gdb3
Destroying cluster on nodes: gdb1, gdb2, gdb3...
gdb1: Stopping Cluster (pacemaker)...
gdb2: Stopping Cluster (pacemaker)...
gdb3: Stopping Cluster (pacemaker)...
gdb2: Successfully destroyed cluster
gdb1: Successfully destroyed cluster
gdb3: Successfully destroyed cluster
Sending 'pacemaker_remote authkey' to 'gdb1', 'gdb2', 'gdb3'
gdb2: successful distribution of the file 'pacemaker_remote authkey'
gdb3: successful distribution of the file 'pacemaker_remote authkey'
gdb1: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
gdb1: Succeeded
gdb2: Succeeded
gdb3: Succeeded
Synchronizing pcsd certificates on nodes gdb1, gdb2, gdb3...
gdb1: Success
gdb2: Success
gdb3: Success
Restarting pcsd on the nodes in order to reload the certificates...
gdb1: Success
gdb2: Success
gdb3: Success
- Review the full cluster configuration; this can be checked on any node.
[#21#root@gdb2 ~ 11:33:18]21 more /etc/corosync/corosync.conf
totem {
version: 2
cluster_name: gdb_ha
secauth: off
transport: udp
rrp_mode: passive
interface {
ringnumber: 0
bindnetaddr: 24
mcastaddr: 239.255.1.1
mcastport: 13315
}
}
nodelist {
node {
ring0_addr: gdb1
nodeid: 1
}
node {
ring0_addr: gdb2
nodeid: 2
}
node {
ring0_addr: gdb3
nodeid: 3
}
}
quorum {
provider: corosync_votequorum
}
logging {
to_logfile: yes
logfile: /var/log/cluster/corosync.log
to_syslog: yes
}
[#22#root@gdb2 ~ 14:23:50]22
- Start the pacemaker-related services on all cluster nodes; run on any node.
[#35#root@gdb1 ~ 15:30:51]35 pcs cluster start --all
gdb1: Starting Cluster (corosync)...
gdb2: Starting Cluster (corosync)...
gdb3: Starting Cluster (corosync)...
gdb3: Starting Cluster (pacemaker)...
gdb1: Starting Cluster (pacemaker)...
gdb2: Starting Cluster (pacemaker)...
To stop the services, use pcs cluster stop --all, or pcs cluster stop <server> to stop a single node.
- Enable the pacemaker-related services to start at boot on every node
[#35#root@gdb1 ~ 15:30:51]35 systemctl enable pcsd corosync pacemaker
[#36#root@gdb1 ~ 15:30:53]36 pcs cluster enable --all
- With no STONITH devices available, disable the STONITH component.
After STONITH is disabled, the distributed lock manager (DLM) and every service that depends on it (for example cLVM2, GFS2, and OCFS2) will not be able to start; if STONITH is left enabled without devices, error messages such as the ones below appear.
pcs property set stonith-enabled=false
The complete sequence of commands and output looks like this:
[#32#root@gdb1 ~ 15:48:20]32 systemctl status pacemaker
● pacemaker.service - Pacemaker High Availability Cluster Manager
Loaded: loaded (/usr/lib/systemd/system/pacemaker.service; disabled; vendor preset: disabled)
Active: active (running) since Wed 2023-02-22 15:35:48 CST; 1min 54s ago
Docs: man:pacemakerd
https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Explained/index.html
Main PID: 25661 (pacemakerd)
Tasks: 7
Memory: 51.1M
CGroup: /system.slice/pacemaker.service
├─25661 /usr/sbin/pacemakerd -f
├─25662 /usr/libexec/pacemaker/cib
├─25663 /usr/libexec/pacemaker/stonithd
├─25664 /usr/libexec/pacemaker/lrmd
├─25665 /usr/libexec/pacemaker/attrd
├─25666 /usr/libexec/pacemaker/pengine
└─25667 /usr/libexec/pacemaker/crmd
Feb 22 15:35:52 gdb1 crmd[25667]: notice: Fencer successfully connected
Feb 22 15:36:11 gdb1 crmd[25667]: notice: State transition S_ELECTION -> S_INTEGRATION
Feb 22 15:36:12 gdb1 pengine[25666]: error: Resource start-up disabled since no STONITH resources have been defined
Feb 22 15:36:12 gdb1 pengine[25666]: error: Either configure some or disable STONITH with the stonith-enabled option
Feb 22 15:36:12 gdb1 pengine[25666]: error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Feb 22 15:36:12 gdb1 pengine[25666]: notice: Delaying fencing operations until there are resources to manage
Feb 22 15:36:12 gdb1 pengine[25666]: notice: Calculated transition 0, saving inputs in /var/lib/pacemaker/pengine/pe-input-0.bz2
Feb 22 15:36:12 gdb1 pengine[25666]: notice: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 22 15:36:12 gdb1 crmd[25667]: notice: Transition 0 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-0.bz2): Complete
Feb 22 15:36:12 gdb1 crmd[25667]: notice: State transition S_TRANSITION_ENGINE -> S_IDLE
[#33#root@gdb1 ~ 15:37:43]33 pcs property set stonith-enabled=false
[#34#root@gdb1 ~ 15:48:20]34 systemctl status pacemaker
● pacemaker.service - Pacemaker High Availability Cluster Manager
Loaded: loaded (/usr/lib/systemd/system/pacemaker.service; disabled; vendor preset: disabled)
Active: active (running) since Wed 2023-02-22 15:35:48 CST; 12min ago
Docs: man:pacemakerd
https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Explained/index.html
Main PID: 25661 (pacemakerd)
Tasks: 7
Memory: 51.7M
CGroup: /system.slice/pacemaker.service
├─25661 /usr/sbin/pacemakerd -f
├─25662 /usr/libexec/pacemaker/cib
├─25663 /usr/libexec/pacemaker/stonithd
├─25664 /usr/libexec/pacemaker/lrmd
├─25665 /usr/libexec/pacemaker/attrd
├─25666 /usr/libexec/pacemaker/pengine
└─25667 /usr/libexec/pacemaker/crmd
Feb 22 15:36:12 gdb1 pengine[25666]: notice: Calculated transition 0, saving inputs in /var/lib/pacemaker/pengine/pe-input-0.bz2
Feb 22 15:36:12 gdb1 pengine[25666]: notice: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Feb 22 15:36:12 gdb1 crmd[25667]: notice: Transition 0 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-0.bz2): Complete
Feb 22 15:36:12 gdb1 crmd[25667]: notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Feb 22 15:48:20 gdb1 crmd[25667]: notice: State transition S_IDLE -> S_POLICY_ENGINE
Feb 22 15:48:21 gdb1 pengine[25666]: warning: Blind faith: not fencing unseen nodes
Feb 22 15:48:21 gdb1 pengine[25666]: notice: Delaying fencing operations until there are resources to manage
Feb 22 15:48:21 gdb1 pengine[25666]: notice: Calculated transition 1, saving inputs in /var/lib/pacemaker/pengine/pe-input-1.bz2
Feb 22 15:48:21 gdb1 crmd[25667]: notice: Transition 1 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): Complete
Feb 22 15:48:21 gdb1 crmd[25667]: notice: State transition S_TRANSITION_ENGINE -> S_IDLE
[#35#root@gdb1 ~ 15:48:31]35
- Verify that the pcs cluster status is healthy; the command should produce no error output
[#35#root@gdb1 ~ 15:48:31]35 crm_verify -L
[#36#root@gdb1 ~ 17:33:31]36
## 2.6 Install ldirectord (on all three servers)

- Download ldirectord
Download address: https://rpm.pbone.net/info_idpl_23860919_distro_centos_6_com_ldirectord-3.9.5-3.1.x86_64.rpm.html
Open the page in a new tab to get the direct RPM link, then download it with any download tool.
- Download the dependency package ipvsadm
[#10#root@gdb1 ~ 19:51:20]10 wget http://mirror.centos.org/altarch/7/os/aarch64/Packages/ipvsadm-1.27-8.el7.aarch64.rpm
- Run the installation; if additional dependencies are needed during the install, resolve them as required.
[#11#root@gdb1 ~ 19:51:29]11 yum -y install ldirectord-3.9.5-3.1.x86_64.rpm ipvsadm-1.27-8.el7.aarch64.rpm
- Create the configuration file /etc/ha.d/ldirectord.cf with the following content:
checktimeout=3
checkinterval=1
autoreload=yes
logfile="/var/log/ldirectord.log"
quiescent=no
virtual=172.17.129.1:6446
real=172.17.140.25:6446 gate
real=172.17.140.24:6446 gate
real=172.17.139.164:6446 gate
scheduler=rr
service=mysql
protocol=tcp
checkport=6446
checktype=connect
login="root"
passwd="Abc1234567*"
database="information_schema"
request="SELECT 1"
virtual=172.17.129.1:6447
real=172.17.140.25:6447 gate
real=172.17.140.24:6447 gate
real=172.17.139.164:6447 gate
scheduler=rr
service=mysql
protocol=tcp
checkport=6447
checktype=connect
login="root"
passwd="Abc1234567*"
database="information_schema"
request="SELECT 1"
Parameter reference:

- `checktimeout=3`: how long to wait for a backend server to answer a health check
- `checkinterval=1`: interval between two consecutive checks
- `autoreload=yes`: automatically reload the configuration file when it changes, adding or removing real servers accordingly
- `logfile="/var/log/ldirectord.log"`: full path of the log file
- `quiescent=no`: when a server fails, remove it from the pool and break all of its connections (instead of quiescing it)
- `virtual=172.17.129.1:6446`: the VIP
- `real=172.17.140.25:6446 gate`: a real server (gate = direct routing)
- `scheduler=rr`: scheduling algorithm; rr is round robin, wrr is weighted round robin
- `service=mysql`: service ldirectord uses when health-checking the real servers
- `protocol=tcp`: service protocol
- `checktype=connect`: method the ldirectord daemon uses to monitor the real servers
- `checkport=6446`: port used for the health check
- `login="root"`: user name used for the health check
- `passwd="Abc1234567*"`: password used for the health check
- `database="information_schema"`: default database accessed by the health check
- `request="SELECT 1"`: statement executed by the health check

Distribute the finished configuration file to the other two servers:
[#22#root@gdb1 ~ 20:51:57]22 cd /etc/ha.d/
[#23#root@gdb1 /etc/ha.d 20:52:17]23 scp ldirectord.cf gdb2:`pwd`
ldirectord.cf 100% 1300 1.1MB/s 00:00
[#24#root@gdb1 /etc/ha.d 20:52:26]24 scp ldirectord.cf gdb3:`pwd`
ldirectord.cf 100% 1300 1.4MB/s 00:00
[#25#root@gdb1 /etc/ha.d 20:52:29]25
## 2.7 Configure the VIP on the loopback interface (on all three servers)

This step supports the LVS load balancing managed by the pcs cluster: the VIP must be configured on the lo interface of each real server so that it accepts traffic forwarded for the VIP; without it, load balancing does not work. The script vip.sh is shown below; placing it in the mysql_bin directory is sufficient.
#!/bin/bash
. /etc/init.d/functions
SNS_VIP=172.17.129.1
case "$1" in
start)
ifconfig lo:0 $SNS_VIP netmask 255.255.240.0 broadcast $SNS_VIP
# /sbin/route add -host $SNS_VIP dev lo:0
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p >/dev/null 2>&1
echo "RealServer Start OK"
;;
stop)
ifconfig lo:0 down
# route del $SNS_VIP >/dev/null 2>&1
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
echo "RealServer Stoped"
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
esac
exit 0
Apply the configuration:
# sh vip.sh start
Remove the configuration:
# sh vip.sh stop
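A quick way (not part of the original steps) to confirm the script worked on each real server:

```bash
# The VIP should appear on lo, and the ARP suppression flags should be set
ip addr show dev lo
cat /proc/sys/net/ipv4/conf/lo/arp_ignore /proc/sys/net/ipv4/conf/all/arp_announce
```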
## 2.8 Add cluster resources (run on any node)

- Add the VIP resource in pcs
[#6#root@gdb1 ~ 11:27:30]6 pcs resource create vip --disabled ocf:heartbeat:IPaddr nic=eth0 ip=172.17.129.1 cidr_netmask=24 broadcast=172.17.143.255 op monitor interval=5s timeout=20s
Command breakdown:

- `pcs resource create`: the pcs command that creates a resource object
- `vip`: the name of the virtual IP (VIP) resource object; it can be anything you like
- `--disabled`: create the resource in the disabled state, so Pacemaker does not start using it before it is fully configured
- `ocf:heartbeat:IPaddr`: tells Pacemaker to manage this VIP with the IPaddr agent from the heartbeat OCF provider
- `nic=eth0`: the network interface the VIP is bound to
- `ip=172.17.129.1`: the IP address assigned to the VIP
- `cidr_netmask=24`: the VIP's netmask in CIDR notation; 24 is equivalent to 255.255.255.0
- `broadcast=172.17.143.255`: the broadcast address
- `op monitor interval=5s timeout=20s`: the monitor operation for this resource; Pacemaker checks its state every 5 seconds and waits up to 20 seconds for a response, after which the resource is considered unavailable
- Add the LVS resource in pcs
[#7#root@gdb1 ~ 11:34:50]7 pcs resource create lvs --disabled ocf:heartbeat:ldirectord op monitor interval=10s timeout=10s
Command breakdown:

- `pcs resource create`: the pcs command that creates a resource object
- `lvs`: the name of the LVS resource object; it can be anything you like
- `--disabled`: create the resource in the disabled state, so Pacemaker does not start using it before it is fully configured
- `ocf:heartbeat:ldirectord`: tells Pacemaker to manage the LVS director with the ldirectord agent from the heartbeat OCF provider, using the /etc/ha.d/ldirectord.cf configured above
- `op monitor interval=10s timeout=10s`: the monitor operation for this resource; Pacemaker checks its state every 10 seconds and waits up to 10 seconds for a response, after which the resource is considered unavailable
- After creation, check the resource status
[#9#root@gdb1 ~ 11:35:42]9 pcs resource show
vip (ocf::heartbeat:IPaddr): Stopped (disabled)
lvs (ocf::heartbeat:ldirectord): Stopped (disabled)
[#10#root@gdb1 ~ 11:35:48]10
- Create a resource group and add the resources to it
[#10#root@gdb1 ~ 11:37:36]10 pcs resource group add dbservice vip
[#11#root@gdb1 ~ 11:37:40]11 pcs resource group add dbservice lvs
[#12#root@gdb1 ~ 11:37:44]12
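To confirm that the group now contains both resources in the intended order (a check not shown in the original), display the group definition:

```bash
# The dbservice group should list vip first, then lvs
pcs resource show dbservice
```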
## 2.9 Starting and stopping the cluster

Starting the cluster:

- Enable the resources
# pcs resource enable vip lvs    (or: # pcs resource enable dbservice)
If there were earlier failures, clear them with the commands below and then enable the resources again:
# pcs resource cleanup vip
# pcs resource cleanup lvs
- Confirm the startup state by running:
pcs status
[#54#root@gdb1 /etc/ha.d 15:54:22]54 pcs status
Cluster name: gdb_ha
Stack: corosync
Current DC: gdb1 (version 1.1.23-1.el7_9.1-9acf116022) - partition with quorum
Last updated: Thu Feb 23 15:55:27 2023
Last change: Thu Feb 23 15:53:55 2023 by hacluster via crmd on gdb2
3 nodes configured
2 resource instances configured
Online: [ gdb1 gdb2 gdb3 ]
Full list of resources:
Resource Group: dbservice
lvs (ocf::heartbeat:ldirectord): Started gdb2
vip (ocf::heartbeat:IPaddr): Started gdb3
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[#55#root@gdb1 /etc/ha.d 15:55:27]55
Explanation of the output:

- `Cluster name: gdb_ha`: the cluster is named gdb_ha.
- `Stack: corosync`: the cluster uses corosync as its communication stack.
- `Current DC: gdb1 (version 1.1.23-1.el7_9.1-9acf116022) - partition with quorum`: the current Designated Controller (DC) is gdb1, running version 1.1.23-1.el7_9.1-9acf116022, and the partition it belongs to has quorum.
- `Last updated: Thu Feb 23 15:55:27 2023`: the cluster status information was last refreshed at 2023-02-23 15:55:27.
- `Last change: Thu Feb 23 15:53:55 2023 by hacluster via crmd on gdb2`: the cluster configuration was last changed at 2023-02-23 15:53:55 by user hacluster through crmd on node gdb2.
- `3 nodes configured`: three nodes are configured in the cluster.
- `2 resource instances configured`: two resource instances are configured in the cluster.
- `Online: [ gdb1 gdb2 gdb3 ]`: nodes gdb1, gdb2, and gdb3 are currently online.
- `Full list of resources`: lists every resource in the cluster with its name, type, the node it runs on, and its current state. Here dbservice is the resource group, lvs is a resource of type ocf::heartbeat:ldirectord started on gdb2, and vip is a resource of type ocf::heartbeat:IPaddr started on gdb3.