MongoDB replica set startup notes (without sharding)
ngx13
numactl --interleave=all /data/dbs/mongodb/bin/mongod --rest --replSet rhodb/ngx13:10003 --fork --port 10003 --dbpath /data/dbs/node13/ --logpath /data/dbs/log
ngx12
/data/dbs/mongodb/bin/mongod --rest --replSet rhodb/ngx12:10002 --fork --port 10002 --dbpath /data/dbs/node12/ --logpath /data/dbs/log
ngx11
/data/mongodb-linux-x86_64-2.0.7/bin/mongod --rest --replSet rhodb/ngx11:10001 --fork --port 10001 --dbpath /data/dbs/node11/ --logpath /data/dbs/log
Initialize the set from ngx13
/data/dbs/mongodb/bin/mongo ngx13:10003
/data/dbs/mongodb/bin/mongo ngx12:10002
> rs.initiate({
... _id : "rhodb",
... members : [
... {_id : 1, host : "ngx13:10003", priority:2},
... {_id : 2, host : "ngx12:10002", priority:3},
... {_id : 3, host : "ngx11:10001", priority:4},
... ]
... });
After initialization completes, it returns:
{
"info" : "Config now saved locally. Should come online in about a
minute.",
"ok" : 1
}
>
SECONDARY>
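To confirm that the set has come up, the status can be checked from any member (a minimal check, using the same shell path and hosts as above):
/data/dbs/mongodb/bin/mongo ngx13:10003 --eval "printjson(rs.status())"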
Make the secondaries readable
/data/dbs/mongodb/bin/mongo ngx12:10002
SECONDARY> rs.slaveOk();
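After slaveOk(), reads against this secondary succeed; for example (a sketch, assuming a database named rhodb exists, as restored further below):
SECONDARY> db.getSiblingDB("rhodb").stats()   // "rhodb" as the database name is an assumption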
Connecting mongoose to the replica set
mongoose connects to a replica set as follows:
mongoose.connectSet('mongodb://ngx11:10001/rhodb,mongodb://ngx12:10002/rhodb,mongodb://ngx13:10003/rhodb', {read_secondary: true});
The read_secondary option allows reads to be served from the secondaries, easing the read/write load on the primary.
Importing data from a standalone instance into the replica set
mongodump --host ngx13:27017 -o /tmp/rhodb
mongorestore --host rhodb/ngx11:10001,ngx12:10002,ngx13:10003 /tmp/rhodb
During testing, killing one of the servers causes another to take over automatically, and the application keeps running.
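To see which member has taken over as primary after killing a node (a minimal check; db.isMaster() reports the current primary):
/data/dbs/mongodb/bin/mongo ngx12:10002 --eval "printjson(db.isMaster())"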
Abandoned: "nginx-gridfs doesn't work with authenticated replica set"
Hi Doug,
Unfortunately the nginx-gridfs module isn't officially supported by 10gen, and I don't think it's very aggressively maintained (looks like the last update was 4 months ago, before 2.0.7 came out). Your best bet might be to ping the nginx-gridfs creators.
"s1">On Monday, August 27, 2012 5:54:49 PM UTC-4, Doug Mayer wrote:
"s1"> Has anyone been able to successfully build nginx-gridfs and pointed it to a MongoDB 2.0.7 replica set that requires authentication? I'm having trouble getting it to run in Ubuntu 12.04. Nginx logs the following errors, no errors in MongoDB's log (as one would expect):
"s1"> 2012/08/27 21:41:08 [error] 8016#0: Mongo Exception: Unknown Error
"s1"> 2012/08/27 21:41:08 [error] 8013#0: Mongo Exception: Unknown Error
"s1"> 2012/08/27 21:41:08 [error] 8018#0: Mongo Exception: Unknown Error
"s1"> 2012/08/27 21:41:08 [alert] 8010#0: worker process 8014 exited with fatal code 2 and cannot be respawned
"s1"> 2012/08/27 21:41:08 [alert] 8010#0: worker process 8015 exited with fatal code 2 and cannot be respawned
"s1"> 2012/08/27 21:41:08 [alert] 8010#0: worker process 8016 exited with fatal code 2 and cannot be respawned
"s1"> 2012/08/27 21:41:08 [alert] 8010#0: worker process 8013 exited with fatal code 2 and cannot be respawned
"s1"> 2012/08/27 21:41:08 [alert] 8010#0: worker process 8018 exited with fatal code 2 and cannot be respawned
"s1"> 2012/08/27 21:41:08 [error] 8021#0: Mongo Exception: Unknown Error
"s1"> 2012/08/27 21:41:08 [alert] 8010#0: worker process 8021 exited with fatal code 2 and cannot be respawned
And my relevant nginx config location:
location /gridfs/ {
internal;
gridfs mydbname
user=myuser
pass=mypass;
mongo "ebp"
10.1.2.3:27017
10.1.2.4:27017
10.1.2.5:27017;
}
"s1"> I can use the same exact user/pass to log in to the mongo shell and get to that DB. I've tried against nginx-gridfs v0.8 as well as master with the same results.
Anyone have a clue what may be going on here?
Changing MongoDB replica set member priorities (reconfiguration)
cfg = "o">{
... _id : "rhodb",
... version : 4,
... members : [
... {_id : 1, host :
"s2">"ngx13:10003"},
... {_id : 2, host :
"s2">"ngx12:10002"},
... {_id : 3, host :
"s2">"ngx11:10001"},
... ]
... }
cfg "o">= rs.conf()
cfg.members[1 "o">].priority = 4
cfg.members[2 "o">].priority = 3
cfg.members[3 "o">].priority = 2
rs.reconfig(cfg)
{
... _id : "rhodb",
... version : 3,
... members : [
... {_id : 1, host : "ngx13:10003", priority:5},
... {_id : 2, host : "ngx12:10002", priority:4},
... {_id : 3, host : "ngx11:10001", priority:3},
... ]
... }
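To confirm that the new priorities are in effect after rs.reconfig (a minimal check, using the same paths and hostnames as above):
/data/dbs/mongodb/bin/mongo ngx13:10003 --eval "printjson(rs.conf())"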
Maintenance notes for nginx, memcache, and MongoDB
Machine roles overview
============
xxx.xxx.xxx.68 / 69 are the load balancers for the .Net main application and also host:
1. Dual-machine, dual-master cache (htmlcache below)
2. Single-machine Memcache (jscache below)
3. Nginx-based web server
How it works:
1. Dynamic requests are forwarded to 10.10.10.50
2. HTML is served from htmlcache
3. JS and CSS are cached automatically in jscache (disabled before the holiday)
4. Images will be improved later, pending a decision from management
Exception handling
The main risk is 10.10.10.50; a failure there can cause:
1. 68/69 failing to mount the .Net main application share on machine 50 (check with the df command)
2. Dynamic requests returning "502 Bad Gateway"
If this happens, notify the Windows host administrator.
Nothing else.
============
xxx.xxx.xxx.86 / 87 (ngx11 and ngx12 below) are the load balancers for the image application and also host:
1. MongoDB (rhodb below)
2. Level-1 image cache Rhosync (Nginx-based web server)
xxx.xxx.xxx.88 (ngx13 below) is the back-end mongoDB Cache and hosts the level-2 image cache:
1. MongoDB (rhodb below)
2. Nginx-based web server
How it works:
1. On an image request, ngx11/ngx12 check the level-1 image cache; if the image is there it is returned directly
2. If the level-1 cache misses, the mongoDB Cache is checked; if the image is there it is returned
3. If the level-2 cache also misses, the request is forwarded to ngx13
4. ngx13 mounts the Dell NAS image storage array (10.10.10.15)
5. Small-image (thumbnail) requests are processed on ngx13 through the GimagesMagic image library
Exception handling
The main risk is 10.10.10.15; a failure there can cause:
1. ngx13 failing to mount the Dell NAS image storage array on machine 15 (check with the df command)
2. Image requests returning "502 Bad Gateway"
If this happens, notify the Windows host administrator of 10.10.10.15.
Nothing else.
A secondary risk is xxx.xxx.xxx.88, which can cause:
1. Image requests returning "502 Bad Gateway"
If this happens, check further (see the sketch after this list):
1) Whether the machine answers ping; if it has been rebooted, restart mongod with: numactl --interleave=all /data/dbs/mongodb/bin/mongod --rest --replSet rhodb/ngx13:10003 --fork --port 10003 --dbpath /data/dbs/node13/ --logpath /data/dbs/log
2) Resource usage
3) Whether the MongoDB service has stopped; if so, also check whether the MongoDB services on ngx11 and ngx12 are still running
Nothing else.
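A minimal set of checks for the case above (a sketch; hostnames and paths are taken from the startup commands earlier in these notes):
ping -c 3 ngx13
df -h                      # confirm the NAS mount is still present
ps aux | grep [m]ongod     # confirm mongod is running (repeat on ngx11/ngx12)
/data/dbs/mongodb/bin/mongo ngx13:10003 --eval "printjson(rs.status())"   # replica set health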
============
Starting Memcached
/usr/local/bin/memcached -d -m 8192 -u root -p 11211 -c 12800 -P /tmp/memcached.pid
-p  TCP listen port (default: 11211)
-U  UDP listen port (default: 11211; 0 disables UDP)
-l  bind address (default: all interfaces, internal, external, or a changed IP alike, which is a security risk; set it to 127.0.0.1 to allow access from the local machine only)
-d  run as a daemon
-u  run the process as the specified user
-m  maximum memory to use, in MB (default: 64 MB)
-P  write the PID to a file for a quick shutdown later; must be used together with -d
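A quick way to verify the daemon and to stop it via the PID file (a sketch; assumes nc is installed and the default port and PID path used above):
printf "stats\r\nquit\r\n" | nc 127.0.0.1 11211 | head -5   # should print STAT lines
kill $(cat /tmp/memcached.pid)                              # stop the daemon using the -P file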