Solving Session Sharing in an Nginx + Resin (Tomcat) Cluster
Author: 平凡屋之换 | Source: Internet | 2014-05-28 09:40
The following configuration changes are needed on the application servers.
In Resin:
shell $> vim resin.conf
## Find:
<http address="*" port="8080"/>
## Comment it out:
<!-- <http address="*" port="8080"/> -->
## Find:
<server id="" address="127.0.0.1" port="6800">
## Replace it with the node's own server id and address (e.g. on 192.168.0.100; use id="b" and its own address on the other node):
<server id="a" address="192.168.0.100" port="6800">
In Tomcat (verified by testing: virtual hosts are also supported; the single change below is enough):
Edit Tomcat's server.xml. In the configuration file on each of the two servers, find the Engine element:
<Engine name="Catalina" defaultHost="localhost">
and change it to, respectively:
Tomcat01 (192.168.0.100):
<Engine name="Catalina" defaultHost="localhost" jvmRoute="a">
Tomcat02 (192.168.0.101):
<Engine name="Catalina" defaultHost="localhost" jvmRoute="b">
Changes on the nginx side:
nginx_upstream_jvm_route is an Nginx extension module that implements cookie-based session stickiness. Tomcat, for instance, appends its jvmRoute value to every session ID it issues (e.g. JSESSIONID=ABCD1234.a); the jvm_route directive reads that identifier from the cookie (or from the jsessionid URL parameter) and keeps routing the client to the upstream server declared with the matching srun_id.
Installation:
1. First obtain the nginx_upstream_jvm_route module:
Download: http://sh0happly.blog.51cto.com/attachment/201004/1036375_1271836572.zip, unpack it and upload it to /root.
2. Enter the Nginx source directory and apply the patch:
shell $> cd nginx-0.7.61
shell $> patch -p0 < ../nginx-upstream-jvm-route/jvm_route.patch
You should see output like the following (the "offset" notes just mean the hunks applied at slightly different line numbers and are harmless):
patching file src/http/ngx_http_upstream.c
Hunk #1 succeeded at 3869 (offset 132 lines).
Hunk #3 succeeded at 4001 (offset 132 lines).
Hunk #5 succeeded at 4100 (offset 132 lines).
patching file src/http/ngx_http_upstream.h
3. Build and install nginx:
shell $> ./configure --prefix=/usr/local/nginx \
  --with-http_stub_status_module --with-http_ssl_module \
  --add-module=/root/nginx-upstream-jvm-route/
shell $> make
shell $> make install
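Before continuing, it is worth confirming that the module really was compiled in and that the binary accepts a configuration; a quick check, using the paths implied by the --prefix above:
shell $> /usr/local/nginx/sbin/nginx -V   # the listed configure arguments should include --add-module=.../nginx-upstream-jvm-route/
shell $> /usr/local/nginx/sbin/nginx -t   # syntax-checks the configuration before starting or reloading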
4. Modify the nginx configuration, for example:
1. For Resin:
upstream backend {
    server 192.168.0.100 srun_id=a;   # srun_id=a corresponds to server id="a" in server1's Resin configuration
    server 192.168.0.101 srun_id=b;
    jvm_route $COOKIE_JSESSIONID|sessionid;
}
2. For Tomcat:
upstream tomcat {
    server 192.168.0.100:8080 srun_id=a;   # srun_id=a corresponds to jvmRoute="a" in tomcat01's configuration
    server 192.168.0.101:8080 srun_id=b;   # srun_id=b corresponds to jvmRoute="b" in tomcat02's configuration
    jvm_route $COOKIE_JSESSIONID|sessionid reverse;
}
server {
    server_name test.com;
    charset utf-8,GB2312;
    index index.html;
    if (-d $request_filename) {
        rewrite ^/(.*)([^/])$ http://$host/$1$2/ permanent;
    }
    location / {
        proxy_pass http://tomcat/;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}
Add a file logger to the Tomcat configuration on both machines, for example:
<Logger className="org.apache.catalina.logger.FileLogger" prefix="crm_log." suffix=".txt" timestamp="true"/>
Create a new index.jsp under /usr/local/tomcat/apps/jsp with the following content:
<%
    String name = request.getParameter("name");
    out.println("this is 192.168.0.100: hello " + name + "!");   // use 192.168.0.101 on the other node
%>
Now visit http://test.com: the page keeps being served by 192.168.0.100. After clearing the cookies and session and refreshing again, the page sticks to 192.168.0.101 instead.
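The same check can be done from the command line; a quick sketch, assuming test.com resolves to the nginx host:
shell $> curl -s -D - -o /dev/null http://test.com/ | grep -i '^Set-Cookie'
# expect a JSESSIONID ending in ".a" (or ".b"); replaying it, e.g.
# shell $> curl -s -b 'JSESSIONID=<that value>' http://test.com/
# should keep returning the page from the same backend.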
A complete example: http://hi.baidu.com/scenkoy/blog/item/2cd89da9b57696f71e17a29e.html
Test environment:
server1 runs nginx + tomcat01
server2 runs only tomcat02
server1 IP address: 192.168.2.88
server2 IP address: 192.168.2.89
Installation steps:
1. Install and configure nginx + nginx_upstream_jvm_route on server1
shell $> wget -c http://sysoev.ru/nginx/nginx-0.7.61.tar.gz
shell $> svn checkout http://nginx-upstream-jvm-route.googlecode.com/svn/trunk/ nginx-upstream-jvm-route-read-only
shell $> tar zxvf nginx-0.7.61.tar.gz
shell $> cd nginx-0.7.61
shell $> patch -p0 < ../nginx-upstream-jvm-route-read-only/jvm_route.patch
shell $> useradd www
shell $> ./configure --user=www --group=www \
  --prefix=/usr/local/nginx --with-http_stub_status_module \
  --with-http_ssl_module \
  --add-module=/root/nginx-upstream-jvm-route-read-only
shell $> make
shell $> make install
2. Install Tomcat and Java on both machines (omitted here).
Edit Tomcat's server.xml: in the configuration file on each of the two servers, find the Engine element:
<Engine name="Catalina" defaultHost="localhost">
and change it to, respectively:
Tomcat01:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="a">
Tomcat02:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="b">
Then create an aa folder under webapps on each machine and put the test index.jsp inside it, with the following content:
<%@ page language="java" import="java.util.*" pageEncoding="UTF-8"%>
88
<% out.print(request.getSession()); %>
<% out.println(request.getHeader("COOKIE")); %>
The file is the same on both Tomcats; only the part shown in red (the "88") needs to be changed on the second machine, e.g. to "89".
Start both Tomcats.
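Before putting nginx in front, each node can be sanity-checked directly on port 8080; a sketch, assuming Tomcat is installed under /usr/local/tomcat as in the path used earlier:
shell $> /usr/local/tomcat/bin/startup.sh     # run on each server
shell $> curl -s -D - -o /dev/null http://192.168.2.88:8080/aa/index.jsp | grep -i '^Set-Cookie'
# the JSESSIONID issued here should end in ".a" (and in ".b" when hitting 192.168.2.89:8080),
# which confirms that the jvmRoute settings took effect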
3. Configure nginx
shell $> cd /usr/local/nginx/conf
shell $> mv nginx.conf nginx.bak
shell $> vi nginx.conf
## The configuration is as follows ###
user www www;
worker_processes 4;
error_log logs/nginx_error.log crit;
pid /usr/local/nginx/nginx.pid;
# Specifies the value for maximum file descriptors that can be opened by this process.
worker_rlimit_nofile 51200;
events
{
use epoll;
worker_connections 2048;
}
http
{
upstream backend {
    server 192.168.2.88:8080 srun_id=a;
    server 192.168.2.89:8080 srun_id=b;
    jvm_route $COOKIE_JSESSIONID|sessionid reverse;
}
include mime.types;
default_type application/octet-stream;
#charset gb2312;
charset UTF-8;
server_names_hash_bucket_size 128;
client_header_buffer_size 32k;
large_client_header_buffers 4 32k;
client_max_body_size 20m;
limit_rate 1024k;
sendfile on;
tcp_nopush on;
keepalive_timeout 60;
tcp_nodelay on;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 128k;
gzip on;
#gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types text/plain application/x-Javascript text/css application/xml;
gzip_vary on;
#limit_zone crawler $binary_remote_addr 10m;
server
{
    listen 80;
    server_name 192.168.2.88;
    index index.html index.htm index.jsp;
    root /var/www;
    #location ~ .*\.jsp$
    location /aa/
    {
        proxy_pass http://backend;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
    }
    location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$
    {
        expires 30d;
    }
    location ~ .*\.(js|css)?$
    {
        expires 1h;
    }
    location /Nginxstatus {
        stub_status on;
        access_log off;
    }
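    # The /Nginxstatus location above exposes nginx's stub_status counters, a quick way to confirm
    # requests are flowing through this proxy once it is running; for example, assuming nginx is up
    # on 192.168.2.88:
    #   curl -s http://192.168.2.88/Nginxstatus
    # prints counters such as "Active connections", "server accepts handled requests" and
    # "Reading / Writing / Waiting".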
log_format access '$remote_addr - $remote_user [$time_local] "$request" '