1. Tomcat Cluster
(1) httpd + tomcat cluster
httpd: mod_proxy, mod_proxy_http, mod_proxy_balancer
tomcat cluster: http connector
(2) httpd + tomcat cluster
httpd: mod_proxy, mod_proxy_ajp, mod_proxy_balancer
tomcat cluster: ajp connector
(3) nginx + tomcat cluster
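Either httpd combination depends on the proxy modules being loaded on the director; a quick check, assuming the stock CentOS 7 httpd 2.4 package (which loads them via conf.modules.d by default):
httpd -M | grep -E 'proxy|lbmethod'   ---should list proxy_module, proxy_http_module, proxy_ajp_module, proxy_balancer_module and the lbmethod modules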
Lab environment
1. Synchronize the time on all three hosts
2. Set the hostnames of the three hosts according to the topology diagram above
3. Edit each host's /etc/hosts file so the three hosts can resolve each other's hostnames
4. Install OpenJDK and Tomcat on A and B, and start Tomcat (see the command sketch below)
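A minimal command sketch for these prep steps, assuming CentOS 7, the IPs used later in these notes, and hostnames node1/node2 for A/B (node3 for the director is an assumed name; adjust to your own topology):
ntpdate <your-ntp-server>   ---or configure chronyd; the server address is a placeholder
hostnamectl set-hostname node1   ---node2 / node3 on the other hosts
cat >> /etc/hosts <<EOF
172.18.21.107 node1
172.18.21.7   node2
172.18.21.106 node3
EOF
yum install -y java-1.8.0-openjdk-devel tomcat tomcat-webapps tomcat-docs-webapp   ---on A and B only; package names assume the CentOS 7 repositories
systemctl start tomcat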
Example 1: nginx reverse proxy to a Tomcat cluster
1. Configuration on A and B
[root@node1 tomcat]#mkdir -pv /usr/share/tomcat/webapps/myapp/WEB-INF
vim /usr/share/tomcat/webapps/myapp/index.jsp
<%@ page language="java" %>
<html>
<head><title>TomcatA</title></head>
<body>
<h1><font color="red">TomcatA.magedu.com</font></h1> ---on B change the color to green and TomcatA to TomcatB
<table align="centre" border="1">
<tr>
<td>Session ID</td>
<% session.setAttribute("magedu.com","magedu.com"); %>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on</td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</body>
</html>
Test
http://172.18.21.107:8080/myapp/
http://172.18.21.7:8080/myapp/
2. Configuration on the director
yum install nginx
vim /etc/nginx/nginx.conf
upstream tcsrvs {   ---define inside the http{} block
server 172.18.21.107:8080;
server 172.18.21.7:8080;
}
vim /etc/nginx/conf.d/default.conf
location / {
proxy_pass http://tcsrvs;
}
Test: http://172.18.21.106/myapp/
Example 2: httpd reverse proxy to a Tomcat cluster
1. The configuration on A and B is the same as above
2. Configuration on the director
yum install httpd -y
cd /etc/httpd/conf.d
vim vhost.conf
<Proxy balancer://tcsrvs>   ---defines a group of backend servers
BalancerMember http://172.18.21.107:8080   ---to connect to the backends over AJP instead, change http to ajp and the port to 8009 (see the AJP sketch after this example)
BalancerMember http://172.18.21.7:8080
ProxySet lbmethod=byrequests   ---lbmethod selects the scheduling algorithm; there are three: byrequests (similar to rr/wrr), bybusyness (similar to LC), bytraffic (schedules by traffic volume)
</Proxy>
NameVirtualHost *:80   ---not needed on httpd 2.4; it only triggers a warning and can be omitted
<VirtualHost *:80>
ServerName www.magedu.com
DocumentRoot /app/website1
<Directory /app/website1>
Require all granted
</Directory>
ProxyVia On
ProxyRequests Off
ProxyPreserveHost On
ProxyPass / balancer://tcsrvs/
ProxyPassReverse / balancer://tcsrvs/
</VirtualHost>
httpd -t
service httpd start
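For reference, a minimal AJP variant of the balancer above (a sketch: it assumes mod_proxy_ajp is loaded and the AJP connector on port 8009 is enabled in each Tomcat's server.xml; the rest of the virtual host stays the same):
<Proxy balancer://tcsrvs>
BalancerMember ajp://172.18.21.107:8009
BalancerMember ajp://172.18.21.7:8009
ProxySet lbmethod=byrequests
</Proxy>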
Test
http://172.18.21.106/myapp/
httpd's load-balancing feature includes health checking.
Stop one of the backend Tomcats,
then access http://172.18.21.106/myapp/
and you will see that requests are no longer scheduled to the stopped Tomcat host.
Notes:
Key=value pairs can be appended after BalancerMember http://172.18.21.107:8080, e.g. to manually set the backend host's status, weight and so on.
For example, BalancerMember http://172.18.21.107:8080 status=D loadfactor=2 manually marks this backend host as disabled and sets its weight to 2.
BalancerMember:
BalancerMember [balancerurl] url [key=value [key=value ...]]
status: manually sets the backend host's status
D: Worker is disabled and will not accept any requests.
S: Worker is administratively stopped.
I: Worker is in ignore-errors mode and will always be considered available.
H: Worker is in hot-standby mode and will only be used if no other viable workers are available.
E: Worker is in an error state.
N: Worker is in drain mode and will only accept existing sticky sessions destined for itself and ignore all other requests.
loadfactor: load factor, i.e. the weight
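As an aside (not part of the original notes), mod_proxy_balancer also provides a balancer-manager handler for viewing and changing worker status and loadfactor from a browser; a minimal sketch, restricted to a trusted network (the address range is an assumption):
<Location /balancer-manager>
SetHandler balancer-manager
Require ip 172.18.0.0/16
</Location>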
2. Implementing session sticky binding with httpd and nginx
A Tomcat cluster can maintain client sessions in three ways:
(1) session sticky: configured on the director. A cookie is added to the response the backend Tomcat sends to the client; the client stores it and presents it on later requests, so it is always scheduled to the same backend Tomcat, giving session stickiness.
(2) session cluster: configured on the backend Tomcats. The client's session is replicated to all Tomcats in the cluster so that every node holds the same sessions; even if one server fails, the others can still serve them.
Tomcat DeltaManager
(3) session server: a dedicated cache server, such as memcached, stores the sessions.
This example follows the topology diagram above.
Example 1: session binding with httpd
Configuration on the two backend Tomcats
vim /etc/tomcat/server.xml
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatA"> ---add jvmRoute="tomcatA" to this line; on B add jvmRoute="tomcatB"
systemctl restart tomcat
Configuration on the director
vim vhost.conf
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED ---the cookie is set whenever the route changes, so the client keeps being scheduled to the same server unless that backend goes down, achieving session binding
<Proxy balancer://tcsrvs>
BalancerMember http://172.18.21.107:8080 route=tomcatA
BalancerMember http://172.18.21.7:8080 route=tomcatB
ProxySet lbmethod=byrequests
ProxySet stickysession=ROUTEID
</Proxy>
NameVirtualHost *:80
<VirtualHost *:80>
ServerName www.magedu.com
DocumentRoot /app/website1
<Directory /app/website1>
Require all granted
</Directory>
ProxyVia On
ProxyRequests Off
ProxyPreserveHost On
ProxyPass / balancer://tcsrvs/
ProxyPassReverse / balancer://tcsrvs/
</VirtualHost>
service httpd reload
Test
http://172.18.21.106/myapp/
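A quick way to watch the stickiness from the command line (a sketch using standard curl options):
curl -s -D - -o /dev/null http://172.18.21.106/myapp/   ---the response headers should include Set-Cookie: ROUTEID=.tomcatA (or .tomcatB)
curl -b "ROUTEID=.tomcatA" http://172.18.21.106/myapp/   ---replaying that cookie should always return the TomcatA page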
Example 2: session binding with nginx
vim /etc/nginx/nginx.conf
upstream tcsrvs {
server 172.18.21.107:8080;
server 172.18.21.7:8080;
hash $request_uri consistent;   ---consistent hashing on the request URI: requests for the same URI always go to the same backend
}
vim /etc/nginx/conf.d/default.conf
location / {
proxy_pass http://tcsrvs;
}
nginx -t
nginx -s reload
Test
http://172.18.21.106/myapp/ ---requests are always scheduled to the backend that served the first visit, achieving session binding (with hash $request_uri, every request for the same URI maps to the same backend).
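As an aside (not in the original notes): hashing on $request_uri binds a URI rather than a client to a backend; for per-client binding, nginx's ip_hash directive is the usual alternative, e.g.:
upstream tcsrvs {
ip_hash;   ---schedule by client source IP
server 172.18.21.107:8080;
server 172.18.21.7:8080;
}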
3. Making the two backend Tomcat servers hold identical sessions (session cluster)
This experiment uses the topology from section 1.
Configuration on A and B
Open the official Tomcat documentation:
http://172.18.21.107:8080/docs/cluster-howto.html
Documentation ----> Clustering
Copy the following content from the official docs into the <Engine> or <Host> section of the Tomcat config file; in this experiment it goes inside <Engine>.
vim /etc/tomcat/server.xml
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatA"> ---on B use jvmRoute="tomcatB"
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
channelSendOptions="8">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="228.0.21.4" ---多播地址,用来后端的两个tomcat进行多播通讯,发送心跳信息等,证明它们在同一个集群内,并且告诉对方是否存活,为了防止实验时和教室其他同学冲突,最好修改一下
port="45564"
frequency="500" ---send a heartbeat every 0.5 s to tell the other members this node is still alive
dropTime="3000"/> ---a member that has sent nothing for 3 s is considered dead
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="172.18.21.107" ---另外一台主机修改为172.18.21.7
port="4000"
autoBind="100"
selectorTimeout="5000"
maxThreads="6"/> ---maximum number of threads for talking to the other cluster members; with only a couple of nodes, 2 is enough
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="/tmp/war-temp/"
deployDir="/tmp/war-deploy/"
watchDir="/tmp/war-listen/"
watchEnabled="false"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
Note: the configuration example in the docs bundled with Tomcat on CentOS 7 has a syntax error: the trailing / is missing from these two elements:
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
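If firewalld is running on A and B, the cluster ports from the config above need to be opened; a rough sketch (this covers only the membership and unicast receiver ports; multicast handling itself varies by environment):
firewall-cmd --permanent --add-port=45564/udp   ---McastService membership port
firewall-cmd --permanent --add-port=4000-4100/tcp   ---NioReceiver port (autoBind="100" may shift it above 4000)
firewall-cmd --reload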
cp /etc/tomcat/web.xml /usr/share/tomcat/webapps/myapp/WEB-INF/ ---in production the application already ships its own web.xml in WEB-INF, so this copy is not needed
cd /usr/share/tomcat/webapps/myapp/WEB-INF/
vim web.xml
Add the following inside the file, in a non-commented area:
<distributable/> ---note it must go inside the document (within <web-app>), not be appended after the end
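For clarity, a minimal illustration of the placement (the real web.xml has many more entries; only where the element sits matters):
<web-app ...>   ---existing root element of web.xml, attributes omitted
<distributable/>   ---as a direct child of <web-app>
</web-app>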
systemctl restart tomcat
Configuration on the director
vim vhost.conf
#Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://tcsrvs>
BalancerMember http://172.18.21.107:8080
BalancerMember http://172.18.21.7:8080
ProxySet lbmethod=byrequests
#ProxySet stickysession=ROUTEID
</Proxy>
NameVirtualHost *:80
<VirtualHost *:80>
ServerName www.magedu.com
DocumentRoot /app/website1
<Directory /app/website1>
Require all granted
</Directory>
ProxyVia On
ProxyRequests Off
ProxyPreserveHost On
ProxyPass / balancer://tcsrvs/
ProxyPassReverse / balancer://tcsrvs/
</VirtualHost>
service httpd reload
Test
http://172.18.21.106/myapp/
No matter which backend host serves the request, the session is the same, showing that the client's session has been successfully replicated to both Tomcat servers.
4. memcached
- Overview
memcached is a high-performance, distributed in-memory object caching system. Both keys and values are kept in memory, so access is very fast. memcached relies on client-side intelligence: the client first fetches the data from the backend Tomcat server, then asks memcached to cache it, and on later requests reads the cached copy from memcached directly. This differs from varnish, where the client talks to the varnish cache server directly and, on a miss, varnish fetches the content from the backend on the client's behalf and caches it. memcached is therefore called a look-aside (bypass) cache and depends heavily on client intelligence.
varnish works like recursion, while memcached works like iteration.
- Features
Cache only: no persistent storage; everything lives in memory and is lost on power failure.
Bypass (look-aside) cache: relies on client-side intelligence.
K/V cache: only streamable (serializable) data can be stored, i.e. data that can be broken up before sending and reassembled on the receiving end.
- Installation and configuration
yum install memcached
Listening ports: 11211/tcp, 11211/udp
systemctl start memcached
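On CentOS 7 the daemon's runtime options live in /etc/sysconfig/memcached; the stock contents look roughly like this (package defaults; tune cache size and connections as needed):
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"   ---cache size in MB
OPTIONS=""   ---e.g. -l 172.18.21.107 to bind to a specific address (illustrative)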
memcached -h   ---shows the common command-line options
- Commands:
Statistics: stats, stats items, stats slabs, stats sizes
Storage: set, add, replace, append (append a value at the end), prepend (insert a value at the front)
Command format: <command name> <key> <flags> <exptime> <bytes> [<cas unique>]
Retrieval: get, delete, incr/decr (increment and decrement)
Flush: flush_all
Example
[root@node1 ~]#telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
add mykey 1 600 7 ---mykey is the key, 1 is the flags value (can be anything), 600 is the expiration time (10 minutes), 7 means 7 bytes will be stored
helloo ---the value; the byte count must match the declared 7 bytes (a leading space is added to make it 7), otherwise the server reports an error
STORED
get mykey
VALUE mykey 1 7
helloo
END
append mykey 1 600 7 ---append content after the existing value
system
STORED
get mykey
VALUE mykey 1 14
helloo system
END
prepend mykey 1 600 4 ---insert content before the existing value
new
STORED
get mykey
VALUE mykey 1 18
new helloo system
END
add count 1 1200 1 ---add a key named count
0
STORED
get count
VALUE count 1 1
0
END
incr count 2 ---increment by 2
2
get count
VALUE count 1 1
2
END
incr count 2
4
decr count 1 ---decrement by 1
3
delete count ---delete the key
DELETED
get count
END
stats ---show statistics
flush_all ---flush all keys and values
OK
5. Persisting sessions to memcached servers
To store sessions on the backend memcached servers, with every memcached holding the same sessions, you need the memcached-session-manager project.
Project page: https://github.com/magro/memcached-session-manager
Download the following jar files into each Tomcat's /usr/share/tomcat/lib/ directory; replace ${version} with the version you need and tc${6,7,8} with the one matching your Tomcat version.
1. Add memcached-session-manager jars to tomcat
memcached-session-manager-2.1.1.jar
memcached-session-manager-tc7-2.1.1.jar ---must match the Tomcat version; this experiment runs Tomcat 7, so the tc7 jar is used
spymemcached-2.9.1.jar
2. Add custom serializers to your webapp (optional)
The kryo serializer is used here; the following jar files need to be downloaded:
msm-kryo-serializer-2.1.1.jar
kryo-serializers-0.42.jar
kryo-4.0.1.jar
minlog-1.3.0.jar
reflectasm-1.11.3-shaded.jar
reflectasm-1.11.3.jar
asm-5.2.jar
objenesis-2.6.jar
The implementation steps are as follows
1. On the director, set up nginx or httpd as a reverse proxy to the Tomcat cluster; this experiment uses httpd
vim /etc/httpd/conf.d/vhost.conf
<Proxy balancer://tcsrvs>
BalancerMember http://172.18.21.107:8080
BalancerMember http://172.18.21.7:8080
ProxySet lbmethod=byrequests
</Proxy>
NameVirtualHost *:80
<VirtualHost *:80>
ServerName www.magedu.com
DocumentRoot /app/website1
<Directory /app/website1>
Require all granted
</Directory>
ProxyVia On
ProxyRequests Off
ProxyPreserveHost On
ProxyPass / balancer://tcsrvs/
ProxyPassReverse / balancer://tcsrvs/
</VirtualHost>
service httpd start
2. Configuration on the two backend servers
Install Tomcat and memcached and start the services
[root@node1 app]#ls /app
asm-5.2.jar memcached-session-manager-2.1.1.jar msm-kryo-serializer-2.1.1.jar reflectasm-1.11.3-shaded.jar
kryo-4.0.1.jar memcached-session-manager-tc7-2.1.1.jar objenesis-2.6.jar spymemcached-2.9.1.jar
kryo-serializers-0.42.jar minlog-1.3.0.jar reflectasm-1.11.3.jar
[root@node1 app]#cd /app
[root@node1 app]#cp * /usr/share/tomcat/lib/ ---copy the .jar files into this directory
vim /etc/tomcat/server.xml ---copy the following snippet from the project documentation into the Tomcat config file
<Context path="/myapp" docBase="/usr/share/tomcat/webapps/myapp" reloadable="true"> ---requests for the /myapp URI are actually served from /usr/share/tomcat/webapps/myapp
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="n1:172.18.21.107:11211,n2:172.18.21.7:11211"
sticky="false"
sessionBackupAsync="false"
lockingMode="uriPattern:/path1|/path2"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>
</Context>
systemctl restart tomcat
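A quick sanity check that the manager is active (paths assume the CentOS tomcat package; the log file name may differ in your environment):
grep -i memcached /var/log/tomcat/catalina.*.log   ---look for MemcachedBackupSessionManager start-up messages
ss -tnp | grep 11211   ---after a test request, Tomcat should hold connections to the memcached nodes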
3. On each Tomcat, create the test directory and the .jsp file
mkdir -pv /usr/share/tomcat/webapps/myapp/WEB-INF/
vim /usr/share/tomcat/webapps/myapp/index.jsp
<%@ page language="java" %>
<html>
<head><title>TomcatA</title></head> ---change to TomcatB on B
<body>
<h1><font color="red">TomcatA.magedu.com</font></h1> ---on B change to TomcatB and green
<table align="centre" border="1">
<tr>
<td>Session ID</td>
<% session.setAttribute("magedu.com","magedu.com"); %>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on</td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</body>
</html>
4. Test
Install the client tools
yum install -y libmemcached ---without this package, client tools such as memdump are not available
http://172.18.21.106/myapp/ ---requests are scheduled to different Tomcat hosts, but the session stays the same
[root@node2 myapp]#memdump --server 172.18.21.107:11211 ---lists the keys cached on that memcached node
validity:643C757E5D5176595045F4BC02048072-n1
643C757E5D5176595045F4BC02048072-n1
bak:643C757E5D5176595045F4BC02048072-n2
[root@node2 myapp]#systemctl stop memcached ---stop one of the memcached servers
http://172.18.21.106/myapp/ ---on further visits the session is still unchanged, showing the session was cached on both memcached servers
Summary: the client sends a request and the front-end director schedules it to a backend Tomcat. If it lands on TomcatA, the session is cached on memcached A and backed up to memcached B, so both cache servers hold the client's session; on the next visit, whichever Tomcat receives the request fetches the session from the backend memcached, and the client gets the same session content.