Environment: VMware + CentOS 7 + MongoDB 4.2.0
1. Start the mongod instances for the two shards and configure their replica sets
/usr/local/mongodb/bin/mongod --dbpath /data/mongodb/m18 --logpath /data/wwwlog/mlog/m18.log --port 27018 --fork --shardsvr --replSet=rs0
/usr/local/mongodb/bin/mongod --dbpath /data/mongodb/m19 --logpath /data/wwwlog/mlog/m19.log --port 27019 --fork --shardsvr --replSet=rs0
/usr/local/mongodb/bin/mongo --port 27018
use admin
rs.initiate({_id: 'rs0', members: [{_id: 0, host: '127.0.0.1:27018'}, {_id: 1, host: '127.0.0.1:27019'}]})
/usr/local/mongodb/bin/mongod --dbpath /data/mongodb/m20 --logpath /data/wwwlog/mlog/m20.log --port 27020 --fork --shardsvr --replSet=rs1
/usr/local/mongodb/bin/mongod --dbpath /data/mongodb/m21 --logpath /data/wwwlog/mlog/m21.log --port 27021 --fork --shardsvr --replSet=rs1
/usr/local/mongodb/bin/mongo --port 27020
use admin
rs.initiate({_id: 'rs1', members: [{_id: 0, host: '127.0.0.1:27020'}, {_id: 1, host: '127.0.0.1:27021'}]})
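Optional check: before moving on, confirm each replica set has elected a primary. In the shell connected to 27018, and again on 27020, run:
rs.status().members.forEach(function(m){ print(m.name + ' ' + m.stateStr) })
One member should report PRIMARY and the other SECONDARY. A two-member set has no failover majority if a node drops out, so this layout is only suitable for a test setup like this one.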
2. Start the config server (configsvr) instances
/usr/local/mongodb/bin/mongod --dbpath /data/mongodb/m22 --logpath /data/wwwlog/mlog/m22.log --port 27022 --fork --configsvr --replSet=conf
/usr/local/mongodb/bin/mongod --dbpath /data/mongodb/m23 --logpath /data/wwwlog/mlog/m23.log --port 27023 --fork --configsvr --replSet=conf
/usr/local/mongodb/bin/mongo --port 27022
use admin
rs.initiate({_id: 'conf', members: [{_id: 0, host: '127.0.0.1:27022'}, {_id: 1, host: '127.0.0.1:27023'}]})
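The same optional check applies to the config replica set; in the shell on 27022:
rs.status().ok
This returns 1 once the conf set is initialized, and rs.status() should show one PRIMARY before mongos is started.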
3. Start the mongos instance
/usr/local/mongodb/bin/mongos --logpath /data/wwwlog/mlog/m24.log --port 27024 --configdb conf/127.0.0.1:27022,127.0.0.1:27023 --fork
4. Connect to the mongos on port 27024
/usr/local/mongodb/bin/mongo --port 27024
5. Add the shards
sh.addShard('rs0/127.0.0.1:27018,127.0.0.1:27019');
sh.addShard('rs1/127.0.0.1:27020,127.0.0.1:27021');
sh.status();
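Besides sh.status(), the registered shards can also be listed with the standard listShards admin command on the mongos, as an optional cross-check:
use admin
db.adminCommand({listShards: 1})
Both rs0 and rs1 should appear in the returned shards array.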
6. Specify which database to shard
sh.enableSharding('database');
7. Specify which collection to shard and on which key
sh.shardCollection('database.collection', {field:1});
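If the shard key only ever increases (timestamps, auto-increment ids), all new writes land in the last chunk; a hashed shard key is the standard alternative for spreading writes more evenly. Shown here only as an option, using the same placeholder names:
sh.shardCollection('database.collection', {field: 'hashed'});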
MongoDB splits a sharded collection's data into chunks, and by default many documents are packed into a single chunk. The balancer only migrates chunks to other shards once the chunk counts across shards become unbalanced enough (a difference of roughly 3 or more), and those migrations themselves cost performance (see the Foursquare MongoDB outage).
The default chunk size is 64 MB; the current setting is stored in the settings collection of the config database: db.settings.find();
To change the chunk size (value in MB), run against the config database via mongos: db.settings.save({_id:'chunksize', value:4});
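To verify the chunk size setting and the balancer state, these standard commands can be run on the mongos as an optional check:
use config
db.settings.find({_id:'chunksize'})
sh.getBalancerState()
sh.isBalancerRunning()
getBalancerState() reports whether the balancer is enabled; isBalancerRunning() reports whether a balancing round is in progress right now.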
Pre-splitting chunks: the split points should be planned in advance based on the expected data and workload.
sh.shardCollection('shop.user',{userid:1})
Pre-allocate 40 chunks, each covering a range of 1000 userid values:
for(var i=1;i<=40;i++){sh.splitAt('shop.user',{userid:i*1000})}
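To see how the pre-split chunks end up distributed across rs0 and rs1, the config database can be queried through the mongos as an optional check (40 splitAt() calls on the single initial chunk produce 41 chunks):
use config
db.chunks.find({ns:'shop.user'}).count()
db.chunks.aggregate([{$match:{ns:'shop.user'}}, {$group:{_id:'$shard', chunks:{$sum:1}}}])
sh.status() also prints the per-shard chunk counts for shop.user.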