High-performance load-balancing cluster with LVS + keepalived + nginx


Limitations of DR mode:

1. The real servers must serve on the same port as the LVS VIP.

   In other words, if the VIP listens on port 80 but the back-end service actually runs on port 8080, LVS DR mode cannot map between them.

2. A real server and the LVS director cannot run on the same machine.

3. The real servers and the LVS director must be in the same VLAN / local network.
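Limitation 1 can be seen directly in raw ipvsadm rules. A sketch (using the VIP and real-server addresses from later in this article; requires root, so shown for illustration only):

```shell
# DR mode (-g): forwarded packets keep their original destination port, so
# the port given after the real-server address is effectively ignored --
# the real server must listen on the service port (80 here).
ipvsadm -A -t 192.168.11.218:80 -s rr
ipvsadm -a -t 192.168.11.218:80 -r 192.168.11.213:80 -g

# Port mapping (VIP:80 -> RIP:8080) needs NAT mode (-m) instead:
# ipvsadm -a -t 192.168.11.218:80 -r 192.168.11.213:8080 -m
```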

1. Installing nginx

Firewall setup:

firewall-cmd --permanent --add-port=80/tcp

firewall-cmd --permanent --add-port=443/tcp

firewall-cmd --reload

firewall-cmd --list-all-zones


wget http://nginx.org/download/nginx-1.14.0.tar.gz

wget

wget https://sourceforge.net/projects/pcre/files/pcre/8.42/pcre-8.42.tar.gz

yum -y install gcc gcc-c++ autoconf automake zlib zlib-devel openssl openssl-devel pcre-devel perl*

useradd -M -s /sbin/nologin www

tar -xzf openssl-1.0.2o.tar.gz

cd /opt/openssl-1.0.2o

./config

make

make install

tar -xzf pcre-8.42.tar.gz

tar -xzf nginx-1.14.0.tar.gz

cd nginx-1.14.0

Workaround (nginx's build script expects the OpenSSL artifacts under a .openssl/ subdirectory, which does not exist when OpenSSL was built and installed separately as above):

Open the file /opt/nginx-1.14.0/auto/lib/openssl/conf in the nginx source tree and locate this section:

CORE_INCS="$CORE_INCS $OPENSSL/.openssl/include"

CORE_DEPS="$CORE_DEPS $OPENSSL/.openssl/include/openssl/ssl.h"

CORE_LIBS="$CORE_LIBS $OPENSSL/.openssl/lib/libssl.a"

CORE_LIBS="$CORE_LIBS $OPENSSL/.openssl/lib/libcrypto.a"

CORE_LIBS="$CORE_LIBS $NGX_LIBDL"

Change it to:

CORE_INCS="$CORE_INCS $OPENSSL/include"

CORE_DEPS="$CORE_DEPS $OPENSSL/include/openssl/ssl.h"

CORE_LIBS="$CORE_LIBS $OPENSSL/libssl.a"

CORE_LIBS="$CORE_LIBS $OPENSSL/libcrypto.a"

CORE_LIBS="$CORE_LIBS $NGX_LIBDL"

./configure --prefix=/usr/local/nginx --user=www --group=www --with-http_stub_status_module --with-http_v2_module --with-http_ssl_module --with-http_sub_module --with-http_gzip_static_module --with-http_realip_module --with-http_flv_module --with-http_mp4_module --with-pcre --with-pcre-jit --with-stream --with-openssl=../openssl-1.0.2o --with-pcre=../pcre-8.42

make && make install
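A quick smoke test of the freshly built binary (a sketch; the paths follow the --prefix used above, and starting nginx requires root):

```shell
/usr/local/nginx/sbin/nginx -V              # confirm the compiled-in modules
/usr/local/nginx/sbin/nginx -t              # syntax-check the default config
/usr/local/nginx/sbin/nginx                 # start
curl -sI http://127.0.0.1/ | head -n 1      # expect an HTTP status line
```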

2. LVS + keepalived

Environment planning

Network topology diagram:


6.1. Enable IP forwarding

Run the following on both the lvs master and the lvs slave:

vim /etc/sysctl.conf

net.ipv4.ip_forward = 1

net.ipv4.conf.all.send_redirects = 0

net.ipv4.conf.default.send_redirects = 0

net.ipv4.conf.ens33.send_redirects = 0

sysctl -p    # apply the settings without a reboot

6.2. Installing ipvsadm

Run the following on both the lvs master and the lvs slave:

yum -y install ipvsadm

ipvsadm    # run once so that the ip_vs kernel module gets loaded

lsmod | grep ip_vs


6.3. Installing keepalived

Run the following on both the lvs master and the lvs slave:

yum -y install keepalived

6.4. Configuring keepalived

6.4.1. lvs master configuration:

cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived


global_defs {

# notification_email {

# [email protected]

# [email protected]

# [email protected]

# }

# notification_email_from [email protected]

# smtp_server 192.168.200.1

# smtp_connect_timeout 30

router_id LVS_01

#vrrp_skip_check_adv_addr    # keep this block commented out, otherwise the VIP is unreachable after the master is stopped

#vrrp_strict

#vrrp_garp_interval 0

#vrrp_gna_interval 0

}


vrrp_instance VI_1 {

state MASTER

interface ens33

virtual_router_id 51

priority 100

advert_int 1

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

192.168.11.218/23 dev ens33 label ens33:1    # must be on the same subnet as the servers

}

}


virtual_server 192.168.11.218 80 {

delay_loop 6

lb_algo rr    # load-balancing scheduling algorithm; wrr, rr and wlc are the common choices

lb_kind DR    # forwarding method, one of DR, NAT or TUN

persistence_timeout 50

protocol TCP


real_server 192.168.11.213 80 {

weight 1

TCP_CHECK {

connect_timeout 3

nb_get_retry 3

delay_before_retry 3

connect_port 80

}

}

real_server 192.168.11.214 80 {

weight 1

TCP_CHECK {

connect_timeout 3

nb_get_retry 3

delay_before_retry 3

connect_port 80

}

}

}
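The rr scheduler configured above simply hands new connections to the real servers in turn. A toy bash illustration of that selection order (not how ipvs implements it internally):

```shell
#!/bin/bash
# Round-robin over the two real servers from the virtual_server block.
servers=(192.168.11.213 192.168.11.214)
i=0
for req in 1 2 3 4; do
  echo "connection $req -> ${servers[i]}"
  i=$(( (i + 1) % ${#servers[@]} ))    # advance and wrap around
done
# connection 1 -> 192.168.11.213
# connection 2 -> 192.168.11.214
# connection 3 -> 192.168.11.213
# connection 4 -> 192.168.11.214
```

Note that persistence_timeout 50 overrides this: connections from the same client within 50 seconds stick to the same real server.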

6.4.2. lvs slave configuration:

cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived


global_defs {

# notification_email {

# [email protected]

# [email protected]

# [email protected]

# }

# notification_email_from [email protected]

# smtp_server 192.168.200.1

# smtp_connect_timeout 30

router_id LVS_02

#vrrp_skip_check_adv_addr    # keep this block commented out, otherwise the VIP is unreachable after the backup is stopped

#vrrp_strict

#vrrp_garp_interval 0

#vrrp_gna_interval 0

}


vrrp_instance VI_1 {

state BACKUP

interface ens33

virtual_router_id 51

priority 80

advert_int 1

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

192.168.11.218/23 dev ens33 label ens33:1    # must be on the same subnet as the servers

}

}


virtual_server 192.168.11.218 80 {

delay_loop 6

lb_algo rr

lb_kind DR

persistence_timeout 50

protocol TCP


real_server 192.168.11.213 80 {

weight 1

TCP_CHECK {

connect_timeout 3

nb_get_retry 3

delay_before_retry 3

connect_port 80

}

}

real_server 192.168.11.214 80 {

weight 1

TCP_CHECK {

connect_timeout 3

nb_get_retry 3

delay_before_retry 3

connect_port 80

}

}

}
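What the TCP_CHECK stanza in both configurations does can be mimicked with plain bash (a sketch using bash's /dev/tcp pseudo-device; the parameters mirror connect_timeout, nb_get_retry and delay_before_retry):

```shell
#!/bin/bash
# Probe host:port the way keepalived's TCP_CHECK does: a time-bounded
# connect attempt, retried a few times with a delay between attempts.
tcp_check() {
  local host=$1 port=$2 retries=${3:-3} timeout=${4:-3} i
  for ((i = 0; i < retries; i++)); do
    if timeout "$timeout" bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
      return 0    # port answered: the real server stays in the pool
    fi
    sleep 1       # delay_before_retry
  done
  return 1        # every attempt failed: keepalived would remove the server
}

# Port 1 on localhost is almost certainly closed, so this reports DOWN.
tcp_check 127.0.0.1 1 1 1 && echo "UP" || echo "DOWN"
```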

6.5. Real server configuration

Run the following script on both web servers:

cat /etc/rc.d/init.d/realserver.sh

#!/bin/bash

SNS_VIP=192.168.11.218

#/etc/rc.d/init.d/functions

case "$1" in

start)

ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP

/sbin/route add -host $SNS_VIP dev lo:0

echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore

echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce

echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore

echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce

sysctl -p >/dev/null 2>&1

echo "RealServer Start OK"

;;

stop)

ifconfig lo:0 down

route del $SNS_VIP >/dev/null 2>&1

echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore

echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce

echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore

echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce

echo "RealServer Stopped"

;;

*)

echo "Usage: $0 {start|stop}"

exit 1

esac

exit 0


chmod u+x /etc/rc.d/init.d/realserver.sh

/etc/rc.d/init.d/realserver.sh start
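After the script has run, the VIP and ARP settings can be verified on each real server (the expected values come from the script above):

```shell
ip addr show lo                                 # lo:0 should carry 192.168.11.218/32
cat /proc/sys/net/ipv4/conf/all/arp_ignore      # expect 1
cat /proc/sys/net/ipv4/conf/all/arp_announce    # expect 2
```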


6.6. Start keepalived and test

systemctl start firewalld

systemctl start keepalived

systemctl stop firewalld

ps -ef |grep keepalived

Note: after restarting the keepalived service, the lvs master gains an ens33:1 address on its local interface, i.e. the VIP.

Tip from experience: if the VIP is unreachable, reboot the server, start keepalived first, and only then stop the firewall.


tail -f /var/log/messages


ipvsadm -L -n


ip add | grep ens33    # the lvs master holds the VIP


ip add | grep ens33    # the lvs backup does not hold the VIP


watch ipvsadm -Ln


ipvsadm -D -t 127.0.0.1:80    # delete an LVS virtual service

6.7. Testing load balancing

Kill nginx on 192.168.11.214:

pkill nginx    # run on 192.168.11.214

ipvsadm -L -n    # check the LVS forwarding table


Visit the VIP: http://192.168.11.218


Restart nginx on 192.168.11.214:

./nginx    # run from /usr/local/nginx/sbin

ipvsadm -L -n


Stop keepalived on one of the directors: the VIP fails over to the other keepalived server, the LVS server can still ping the VIP, and the site remains reachable:

systemctl stop keepalived


Summary of the tests: stop each service in turn (master keepalived, backup keepalived, nginx on .213, nginx on .214) and check each time that http://192.168.11.218 is still reachable.
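A simple client-side loop for these tests (a sketch; it assumes each nginx serves an index page that identifies its host, so the round-robin alternation and any failover are visible):

```shell
for i in $(seq 1 6); do
  curl -s http://192.168.11.218/    # responses should alternate between .213 and .214
done
```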

6.8. Firewall configuration

Firewall rules on both LVS servers:

firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 \

--in-interface ens33 --destination 224.0.0.18 --protocol vrrp -j ACCEPT


firewall-cmd --direct --permanent --add-rule ipv4 filter OUTPUT 0 \

--out-interface ens33 --destination 224.0.0.18 --protocol vrrp -j ACCEPT


firewall-cmd --zone=public --add-port=80/tcp --permanent

firewall-cmd --reload


Firewall rules on both nginx servers:

firewall-cmd --zone=public --add-port=80/tcp --permanent

firewall-cmd --reload


View the direct rules:

iptables -L OUTPUT_direct --line-numbers

iptables -L INPUT_direct --line-numbers

Remove the rules:

firewall-cmd --direct --permanent --remove-rule ipv4 filter INPUT 0 \

--in-interface ens33 --destination 224.0.0.18 --protocol vrrp -j ACCEPT


firewall-cmd --direct --permanent --remove-rule ipv4 filter OUTPUT 0 \

--out-interface ens33 --destination 224.0.0.18 --protocol vrrp -j ACCEPT

firewall-cmd --zone=public --remove-port=80/tcp --permanent

firewall-cmd --reload


Summary:

When the MASTER can no longer provide service, the VIP is removed from it; the BACKUP is promoted to MASTER, binds the VIP, and takes over. When the repaired MASTER rejoins the network, it automatically reclaims the VIP and becomes MASTER again. When a back-end nginx service hangs, traffic is automatically switched to the other nginx server.

