Sharding is the process of splitting a database and spreading the data across multiple machines. By distributing the data over different machines, you can store more data and handle a heavier load without needing a single powerful server.
The basic idea is to cut a collection into small chunks and distribute these chunks across a number of shards, each shard being responsible for only a portion of the total data. Clients operate through a routing process called mongos, which knows (via the config servers) which data lives on which shard. Most deployments use sharding to solve disk-space problems; writes may actually get slower (see the explanation between the +++ markers below), and queries should avoid crossing shards whenever possible. When to use sharding:
1. The machine is running out of disk space. Sharding solves the disk-space problem.
2. A single mongod can no longer keep up with the write load. Sharding spreads the write pressure across the shards, using each shard server's own resources.
3. You want to keep a large amount of data in memory to improve performance. As above, sharding lets you use the memory of every shard server.
II. Deployment and installation: the prerequisite is that MongoDB is already installed (3.0 is used for the tests in this article).
Before building the sharded cluster, let's first look at the role each component plays.
① Config server. A standalone mongod process that stores the cluster and shard metadata, i.e. which data each shard contains. It is started first, with journaling enabled. Start it like an ordinary mongod, specifying the configsvr option. It does not need much space or many resources: roughly 1KB on a config server corresponds to about 200MB of real data, since it only stores the data-distribution table.
② Router server. This is mongos, which acts as a router and is what applications connect to. It stores no data itself; it loads the cluster information from the config servers at startup, so it must be given their addresses via the configdb option.
③ Shard server. An ordinary standalone mongod process that stores the actual data. It can be a replica set or a single server.
Deployment environment: 3 machines
A: config servers (3), router 1, shard 1;
B: shard 2, router 2;
C: shard 3
Before deploying, it is important to understand the shard key; a good shard key is critical. The shard key must be an indexed field (sh.shardCollection creates the index automatically). A monotonically increasing shard key is poor for writes and for even data distribution, because all inserts land on one shard until a threshold is reached and chunks start moving to other shards; queries by such a key, however, are very efficient. A random shard key distributes data very evenly. Try to avoid queries that must hit multiple shards: when a query runs on all shards, mongos has to merge-sort the results.
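For example, if even write distribution matters more than range queries, the collection could be sharded on a hashed key instead of the plain ascending key that is used later in this article. This is only an illustrative sketch, not part of the actual test below:

mongos> sh.enableSharding("dba")
mongos> sh.shardCollection("dba.account", { "name": "hashed" })   # hashed shard key: inserts are spread evenly across shards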
Now start all of the services above. Because they run as background daemons, they are started from configuration files; the files are shown and explained below.
1) Starting the config servers (3 instances on A, ports 20000, 21000, 22000)
A config server is an ordinary mongod process, so you only need to start new instances. You must run either 1 or 3 config servers; starting 2 produces an error:
BadValue need either 1 or 3 configdbs
Because they run in the background and are started from configuration files, create the config files first:
/etc/mongod_20000.conf
# data directory
dbpath=/usr/local/config/
# log file
logpath=/var/log/mongodb/mongodb_config.log
# append to the log
logappend=true
# port
port = 20000
# max connections
maxConns = 50
pidfilepath = /var/run/mongo_20000.pid
# journal (redo log)
journal = true
# journal commit interval
journalCommitInterval = 200
# run as a daemon
fork = true
# how often data is flushed to disk
syncdelay = 60
#storageEngine = wiredTiger
# oplog size, in MB
oplogSize = 1000
# namespace file size, default 16MB, max 2GB
nssize = 16
noauth = true
unixSocketPrefix = /tmp
configsvr = true
/etc/mongod_21000.conf
# data directory
dbpath=/usr/local/config1/
# log file
logpath=/var/log/mongodb/mongodb_config1.log
# append to the log
logappend=true
# port
port = 21000
# max connections
maxConns = 50
pidfilepath = /var/run/mongo_21000.pid
# journal (redo log)
journal = true
# journal commit interval
journalCommitInterval = 200
# run as a daemon
fork = true
# how often data is flushed to disk
syncdelay = 60
#storageEngine = wiredTiger
# oplog size, in MB
oplogSize = 1000
# namespace file size, default 16MB, max 2GB
nssize = 16
noauth = true
unixSocketPrefix = /tmp
configsvr = true
Start the config servers:
root@mongo1:~# mongod -f /etc/mongod_20000.conf
about to fork child process, waiting until server is ready for connections.
forked process: 8545
child process started successfully, parent exiting
root@mongo1:~# mongod -f /etc/mongod_21000.conf
about to fork child process, waiting until server is ready for connections.
forked process: 8595
child process started successfully, parent exiting
Start a third config server on port 22000 in the same way.
# data directory
dbpath=/usr/local/config2/
# log file
logpath=/var/log/mongodb/mongodb_config2.log
# append to the log
logappend=true
# port
port = 22000
# max connections
maxConns = 50
pidfilepath = /var/run/mongo_22000.pid
# journal (redo log)
journal = true
# journal commit interval
journalCommitInterval = 200
# run as a daemon
fork = true
# how often data is flushed to disk
syncdelay = 60
#storageEngine = wiredTiger
# oplog size, in MB
oplogSize = 1000
# namespace file size, default 16MB, max 2GB
nssize = 16
noauth = true
unixSocketPrefix = /tmp
configsvr = true
2) Starting the router servers (1 on each of A and B, port 30000)
The router server stores no data; it only needs logging configured.
# mongos
# log file
logpath=/var/log/mongodb/mongodb_route.log
# append to the log
logappend=true
# port
port = 30000
# max connections
maxConns = 100
# bind address
#bind_ip=192.168.200.*,...,
pidfilepath = /var/run/mongo_30000.pid
# must list either 1 or 3 config servers
configdb=192.168.200.A:20000,192.168.200.A:21000,192.168.200.A:22000
#configdb=127.0.0.1:20000    # this would cause an error
# run as a daemon
fork = true
The config-server addresses given to configdb must not be written as localhost or 127.0.0.1; they must be addresses that the other shards can also reach, i.e. 192.168.200.A:20000/21000/22000. Otherwise addShard will fail with:
{ "ok" : 0, "errmsg" : "can't use localhost as a shard since all shards need to communicate. either use all shards and configdbs in localhost or all in actual IPs host: 172.16.5.104:20000 isLocalHost:0" }
Start mongos:
root@mongo1:~# mongos -f /etc/mongod_30000.conf
2015-07-10T14:42:58.741+0800 W SHARDING running with 1 config server should be done only for testing purposes and is not recommended for production
about to fork child process, waiting until server is ready for connections.
forked process: 8965
child process started successfully, parent exiting
3) Starting the shard servers:
A shard server is just an ordinary mongod process:
root@mongo1:~# mongod -f /etc/mongod_40000.conf
note: noprealloc may hurt performance in many applications
about to fork child process, waiting until server is ready for connections.
forked process: 9020
child process started successfully, parent exiting
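The shard's configuration file /etc/mongod_40000.conf is not shown in the original; based on the config-server files above, a minimal version might look like the following (the dbpath and pidfile paths are assumptions, and note there is no configsvr option since this is a plain data-bearing mongod):

# data directory (assumed path)
dbpath=/usr/local/mongod_40000/
# log file
logpath=/var/log/mongodb/mongodb_40000.log
logappend=true
port = 40000
maxConns = 100
pidfilepath = /var/run/mongo_40000.pid
journal = true
fork = true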
All services on server A are now running:
root@mongo1:~# ps -ef | grep mongo
root      9020     1  0 14:47 ?  00:00:06 mongod -f /etc/mongod_40000.conf
root      9990     1  0 15:14 ?  00:00:02 mongod -f /etc/mongod_20000.conf
root     10004     1  0 15:14 ?  00:00:01 mongod -f /etc/mongod_21000.conf
root     10076     1  0 15:20 ?  00:00:00 mongod -f /etc/mongod_22000.conf
root     10096     1  0 15:20 ?  00:00:00 mongos -f /etc/mongod_30000.conf
Following the same steps, start a shard server and a router on B (same config files), and a shard server on C. At this point the config servers, router servers, and shard servers are all deployed.
III. Configuring sharding: all of the operations below are executed in the mongo shell.
1) Adding shards: sh.addShard("IP:Port")
Connect to the router server (mongos):
root@mongo1:~# mongo --port=30000
MongoDB shell version: 3.0.4
connecting to: 127.0.0.1:30000/test
mongos>
Add the shards:
mongos> sh.status()        # check the cluster status
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("559f72470f93270ba60b26c6")
}
  shards:
  balancer:
    Currently enabled:  yes
    Currently running:  no
    Failed balancer rounds in last 5 attempts:  0
    Migration Results for the last 24 hours:
        No recent migrations
  databases:
    {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }

mongos> sh.addShard("192.168.200.A:40000")    # add a shard
{ "shardAdded" : "shard0000", "ok" : 1 }
mongos> sh.addShard("192.168.200.B:40000")    # add a shard
{ "shardAdded" : "shard0001", "ok" : 1 }
mongos> sh.addShard("192.168.200.C:40000")    # add a shard
{ "shardAdded" : "shard0002", "ok" : 1 }

mongos> sh.status()        # check the cluster status again
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("559f72470f93270ba60b26c6")
}
  shards:                  # shard information
    {  "_id" : "shard0000",  "host" : "192.168.200.A:40000" }
    {  "_id" : "shard0001",  "host" : "192.168.200.B:40000" }
    {  "_id" : "shard0002",  "host" : "192.168.200.C:40000" }
  balancer:
    Currently enabled:  yes
    Currently running:  no
    Failed balancer rounds in last 5 attempts:  0
    Migration Results for the last 24 hours:
        No recent migrations
  databases:
    {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
2) Enabling sharding: sh.enableSharding("<db>") and sh.shardCollection("<db>.<collection>", {"key": 1})
mongos> sh.enableSharding("dba")    # first enable sharding on the database
{ "ok" : 1 }
mongos> sh.status()                 # check the sharding status
--- Sharding Status ---
...
  databases:
    {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
    {  "_id" : "test",  "partitioned" : false,  "primary" : "shard0000" }
    {  "_id" : "dba",  "partitioned" : true,  "primary" : "shard0000" }

mongos> sh.shardCollection("dba.account",{"name":1})    # then shard the collection; the name field is the shard key
{ "collectionsharded" : "dba.account", "ok" : 1 }
mongos> sh.status()
--- Sharding Status ---
...
  shards:
    {  "_id" : "shard0000",  "host" : "192.168.200.51:40000" }
    {  "_id" : "shard0001",  "host" : "192.168.200.52:40000" }
    {  "_id" : "shard0002",  "host" : "192.168.200.53:40000" }
...
  databases:
    {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
    {  "_id" : "test",  "partitioned" : false,  "primary" : "shard0000" }
    {  "_id" : "dba",  "partitioned" : true,  "primary" : "shard0000" }    # the database
        dba.account
            shard key: { "name" : 1 }                                     # the collection
            chunks:
                shard0000    1
            { "name" : { "$minKey" : 1 } } -->> { "name" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 0)
The dba database and the dba.account shard-key entries in the output above show that the sharding configuration is complete.
IV. Testing: write random documents into the account collection of the dba database and check whether they are spread across the 3 shards.
Random writes are generated with a Python script: 100,000 records are written through each of the two mongos routers (A and B).
#!/usr/bin/env python
#-*- coding:utf-8 -*-
# Random-write test against the MongoDB shard cluster
import pymongo
import time
from random import Random

def random_str(randomlength=8):
    # build a random alphanumeric string of the given length
    s = ''
    chars = 'AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz0123456789'
    length = len(chars) - 1
    random = Random()
    for _ in range(randomlength):
        s += chars[random.randint(0, length)]
    return s

def inc_data(conn):
    db = conn.dba
    # db = conn.test
    collection = db.account
    for i in range(100000):
        name = random_str(15)
        collection.insert({"name": name, "age": 123 + i, "address": "hangzhou" + name})

if __name__ == '__main__':
    # run once against each mongos (A and B)
    conn = pymongo.MongoClient(host='192.168.200.A/B', port=30000)
    StartTime = time.time()
    print "===============$inc==============="
    print "StartTime : %s" % StartTime
    inc_data(conn)
    EndTime = time.time()
    print "EndTime : %s" % EndTime
    CostTime = round(EndTime - StartTime)
    print "CostTime : %s" % CostTime
Check whether the data is sharded: db.collection.stats()
mongos> db.account.stats()    # check how the collection is distributed
...
    "shards" : {
        "shard0000" : {
            "ns" : "dba.account",
            "count" : 89710,
            "size" : 10047520,
            ...
        "shard0001" : {
            "ns" : "dba.account",
            "count" : 19273,
            "size" : 2158576,
            ...
        "shard0002" : {
            "ns" : "dba.account",
            "count" : 91017,
            "size" : 10193904,
            ...
...
The per-shard sections above show the collection's distribution: sharding worked, and every shard holds part of the data (see the count fields). The MongoDB sharded cluster is now up and running.
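As an alternative to reading the raw stats() output, the mongo shell also provides getShardDistribution(), which prints a per-shard summary of document counts and data size. A quick check might look like this (no output is reproduced here since it was not captured in the original test):

mongos> use dba
mongos> db.account.getShardDistribution()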
++++++++++++++++++++++++++++++++++++++++++++++++
Readers who are interested can take a look at the following rather interesting behavior:
# Sharding status before the writes:
mongos> sh.status()
--- Sharding Status ---
...
  databases:
    {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
    {  "_id" : "test",  "partitioned" : false,  "primary" : "shard0000" }
    {  "_id" : "dba",  "partitioned" : true,  "primary" : "shard0000" }
        dba.account
            shard key: { "name" : 1 }
            chunks:
                shard0000    1
            { "name" : { "$minKey" : 1 } } -->> { "name" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 0)
# At this point all writes for the shard key go to shard0000.

# Sharding status while the writes are running:
mongos> sh.status()
--- Sharding Status ---
...
  databases:
    {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
    {  "_id" : "test",  "partitioned" : false,  "primary" : "shard0000" }
    {  "_id" : "dba",  "partitioned" : true,  "primary" : "shard0000" }
        dba.account
            shard key: { "name" : 1 }
            chunks:                      # chunk distribution
                shard0000    1
                shard0001    1
                shard0002    1
            { "name" : { "$minKey" : 1 } } -->> { "name" : "5yyfY8mmR5HyhGJ" } on : shard0001 Timestamp(2, 0)
            { "name" : "5yyfY8mmR5HyhGJ" } -->> { "name" : "woQAv99Pq1FVoMX" } on : shard0002 Timestamp(3, 0)
            { "name" : "woQAv99Pq1FVoMX" } -->> { "name" : { "$maxKey" : 1 } } on : shard0000 Timestamp(3, 1)
# The shard-key ranges are now spread across the shards.

# Sharding status after the writes have finished:
mongos> sh.status()
--- Sharding Status ---
...
  databases:
    {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
    {  "_id" : "test",  "partitioned" : false,  "primary" : "shard0000" }
    {  "_id" : "dba",  "partitioned" : true,  "primary" : "shard0000" }
        dba.account
            shard key: { "name" : 1 }
            chunks:                      # chunk distribution
                shard0000    2
                shard0001    1
                shard0002    2
            { "name" : { "$minKey" : 1 } } -->> { "name" : "5yyfY8mmR5HyhGJ" } on : shard0001 Timestamp(2, 0)
            { "name" : "5yyfY8mmR5HyhGJ" } -->> { "name" : "UavMbMlfszZOFrz" } on : shard0000 Timestamp(4, 0)
            { "name" : "UavMbMlfszZOFrz" } -->> { "name" : "t9LyVSNXDmf6esP" } on : shard0002 Timestamp(4, 1)
            { "name" : "t9LyVSNXDmf6esP" } -->> { "name" : "woQAv99Pq1FVoMX" } on : shard0002 Timestamp(3, 4)
            { "name" : "woQAv99Pq1FVoMX" } -->> { "name" : { "$maxKey" : 1 } } on : shard0000 Timestamp(3, 1)
# Final distribution of the shard-key ranges.
Comparing the snapshots above: originally each shard held only one chunk, but at the end shard0000 and shard0002 each hold two chunks, i.e. their chunks were split; shard0001 is unchanged. This happens because when mongos receives a write it checks the current chunk against the split threshold; once the threshold is reached it asks the shard to split the chunk. In this example the chunks on shard0000 and shard0002 were split, data was migrated between shards (which has a cost), and the balancer then redistributed the chunks. So it is normal to see a shard's document count shrink while the writes are still in progress.
balancer:                      # the balancer
    Currently enabled:  yes
    Currently running:  yes    # a migration is in progress
        Balancer lock taken at Fri Jul 10 2015 22:57:27 GMT+0800 (CST) by mongo2:30000:1436540125:1804289383:Balancer:846930886
So if writes through the sharded cluster are slower than writes to a single node, it is because of the extra work sharding introduces: the router (mongos) must maintain metadata, chunks are split and migrated, and every operation pays a routing overhead.
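If the balancer activity during a bulk load is a problem, it can be inspected and paused from the mongos shell using the standard helpers below; the 32MB chunk size is only an illustrative value, not something used in this article:

mongos> sh.getBalancerState()      # is the balancer enabled?
mongos> sh.isBalancerRunning()     # is a migration happening right now?
mongos> sh.stopBalancer()          # pause balancing, e.g. during a bulk load
mongos> sh.startBalancer()         # re-enable it afterwards

mongos> use config
mongos> db.settings.save({ _id: "chunksize", value: 32 })   # change the chunk size (in MB); the default is 64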
++++++++++++++++++++++++++++++++++++++++++++++++
All of the shards above are single nodes: if one shard fails, its data is lost. Can a replica set (introduced in an earlier article) be used as a shard? The following shows how.
1) Adding a replica-set shard (the replica set is named mmm). For this test only one shard is backed by a replica set; for full high availability every shard should be a replica set, so that there is no single point of failure.
An ordinary replica set:
mmm:PRIMARY> rs.status()
{
    "set" : "mmm",
    "date" : ISODate("2015-07-10T16:17:19Z"),
    "myState" : 1,
    "members" : [
        { "_id" : 2, "name" : "192.168.200.245:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 418, "optime" : Timestamp(1436545003, 1), "optimeDate" : ISODate("2015-07-10T16:16:43Z"), "lastHeartbeat" : ISODate("2015-07-10T16:17:17Z"), "lastHeartbeatRecv" : ISODate("2015-07-10T16:17:18Z"), "pingMs" : 0, "syncingTo" : "192.168.200.25:27017" },
        { "_id" : 3, "name" : "192.168.200.25:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 891321, "optime" : Timestamp(1436545003, 1), "optimeDate" : ISODate("2015-07-10T16:16:43Z"), "self" : true },
        { "_id" : 4, "name" : "192.168.200.245:37017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 36, "optime" : Timestamp(1436545003, 1), "optimeDate" : ISODate("2015-07-10T16:16:43Z"), "lastHeartbeat" : ISODate("2015-07-10T16:17:17Z"), "lastHeartbeatRecv" : ISODate("2015-07-10T16:17:17Z"), "pingMs" : 0, "syncingTo" : "192.168.200.25:27017" }
    ],
    "ok" : 1
}
Now add this replica set to the cluster as a shard:
mongos> sh.addShard("mmm/192.168.200.25:27017,192.168.200.245:27017,192.168.200.245:37017") #加入副本集分片 { "shardAdded" : "mmm", "ok" : 1 } mongos> sh.status() --- Sharding Status --- ...
...
shards: { "_id" : "mmm", "host" : "mmm/192.168.200.245:27017,192.168.200.245:37017,192.168.200.25:27017" } { "_id" : "shard0000", "host" : "192.168.200.51:40000" } { "_id" : "shard0001", "host" : "192.168.200.52:40000" } { "_id" : "shard0002", "host" : "192.168.200.53:40000" } balancer: Currently enabled: yes Currently running: no Failed balancer rounds in last 5 attempts: 0 Migration Results for the last 24 hours: 4 : Success databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "test", "partitioned" : false, "primary" : "shard0000" } { "_id" : "dba", "partitioned" : true, "primary" : "shard0000" } dba.account shard key: { "name" : 1 } chunks: mmm 1 shard0000 1 shard0001 1 shard0002 2 { "name" : { "$minKey" : 1 } } -->> { "name" : "5yyfY8mmR5HyhGJ" } on : shard0001 Timestamp(2, 0) { "name" : "5yyfY8mmR5HyhGJ" } -->> { "name" : "UavMbMlfszZOFrz" } on : mmm Timestamp(5, 0) { "name" : "UavMbMlfszZOFrz" } -->> { "name" : "t9LyVSNXDmf6esP" } on : shard0002 Timestamp(4, 1) { "name" : "t9LyVSNXDmf6esP" } -->> { "name" : "woQAv99Pq1FVoMX" } on : shard0002 Timestamp(3, 4) { "name" : "woQAv99Pq1FVoMX" } -->> { "name" : { "$maxKey" : 1 } } on : shard0000 Timestamp(5, 1) { "_id" : "abc", "partitioned" : false, "primary" : "shard0000" } #未设置分片
The output above shows that the replica-set shard was added successfully, and that a newly added shard receives part of the existing data (a chunk was migrated to mmm).
mongos> db.account.stats()
...
    "shards" : {
        "mmm" : {
            "ns" : "dba.account",
            "count" : 7723,              # the newly added shard has received data
            "size" : 741408,
            "avgObjSize" : 96,
            "storageSize" : 2793472,
            "numExtents" : 5,
            "nindexes" : 2,
            "lastExtentSize" : 2097152,
            "paddingFactor" : 1,
            "systemFlags" : 1,
            "userFlags" : 0,
            "totalIndexSize" : 719488,
            "indexSizes" : {
                "_id_" : 343392,
                "name_1" : 376096
            },
            "ok" : 1
        },
...
2) Continue writing data with the Python script so that the replica-set shard gets filled.
Because the original replica set was running an older version (2.4), writes to the replica-set shard failed with:
mongos> db.account.insert({"name":"UavMbMlfsz1OFrz"})
WriteResult({
    "nInserted" : 0,
    "writeError" : {
        "code" : 83,
        "errmsg" : "write results unavailable from 192.168.200.25:27017 :: caused by :: Location28563 cannot send batch write operation to server 192.168.200.25:27017 (192.168.200.25)"
    }
})
Very frustrating: the error message is not very helpful, and it took a long time to track down. It shows how important version consistency is. A new replica set was therefore created:
mablevi:PRIMARY> rs.status()
{
    "set" : "mablevi",
    "date" : ISODate("2015-07-10T18:22:36.761Z"),
    "myState" : 1,
    "members" : [
        { "_id" : 1, "name" : "192.168.200.53:50000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 820, "optime" : Timestamp(1436552412, 213), "optimeDate" : ISODate("2015-07-10T18:20:12Z"), "electionTime" : Timestamp(1436551910, 1), "electionDate" : ISODate("2015-07-10T18:11:50Z"), "configVersion" : 2, "self" : true },
        { "_id" : 2, "name" : "192.168.200.53:50001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 650, "optime" : Timestamp(1436552412, 213), "optimeDate" : ISODate("2015-07-10T18:20:12Z"), "lastHeartbeat" : ISODate("2015-07-10T18:22:36.737Z"), "lastHeartbeatRecv" : ISODate("2015-07-10T18:22:36.551Z"), "pingMs" : 0, "syncingTo" : "192.168.200.53:50000", "configVersion" : 2 },
        { "_id" : 3, "name" : "192.168.200.53:50002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 614, "optime" : Timestamp(1436552412, 213), "optimeDate" : ISODate("2015-07-10T18:20:12Z"), "lastHeartbeat" : ISODate("2015-07-10T18:22:36.742Z"), "lastHeartbeatRecv" : ISODate("2015-07-10T18:22:36.741Z"), "pingMs" : 0, "syncingTo" : "192.168.200.53:50001", "configVersion" : 2 }
    ],
    "ok" : 1,
    "$gleStats" : {
        "lastOpTime" : Timestamp(1436551942, 1),
        "electionId" : ObjectId("55a00ae6a08c789ce9e4b50d")
    }
}
The old replica-set shard (mmm) was removed from the cluster; see 3) below for how to remove a shard.
Add the new replica set as a shard:
mongos> sh.addShard("mablevi/192.168.200.53:50000,192.168.200.53:50001,192.168.200.53:50002") { "shardAdded" : "mablevi", "ok" : 1 } mongos> sh.status() --- Sharding Status --- ... ... shards: { "_id" : "mablevi", "host" : "mablevi/192.168.200.53:50000,192.168.200.53:50001,192.168.200.53:50002" } { "_id" : "shard0000", "host" : "192.168.200.51:40000" } { "_id" : "shard0001", "host" : "192.168.200.52:40000" } { "_id" : "shard0002", "host" : "192.168.200.53:40000" } ... ... dba.account shard key: { "name" : 1 } chunks: mablevi 1 shard0000 1 shard0001 1 shard0002 2 { "name" : { "$minKey" : 1 } } -->> { "name" : "5yyfY8mmR5HyhGJ" } on : shard0001 Timestamp(2, 0) { "name" : "5yyfY8mmR5HyhGJ" } -->> { "name" : "UavMbMlfszZOFrz" } on : mablevi Timestamp(9, 0) #新加入的分片得到数据 { "name" : "UavMbMlfszZOFrz" } -->> { "name" : "t9LyVSNXDmf6esP" } on : shard0002 Timestamp(4, 1) { "name" : "t9LyVSNXDmf6esP" } -->> { "name" : "woQAv99Pq1FVoMX" } on : shard0002 Timestamp(3, 4) { "name" : "woQAv99Pq1FVoMX" } -->> { "name" : { "$maxKey" : 1 } } on : shard0000 Timestamp(9, 1) { "_id" : "abc", "partitioned" : false, "primary" : "shard0000" } { "_id" : "mablevi", "partitioned" : false, "primary" : "shard0001" }
Continue writing with the Python script:
mongos> db.account.stats()
{
...
    "shards" : {
        "mablevi" : {
            "ns" : "dba.account",
            "count" : 47240,
            "size" : 5290880,
...
...
The replica-set shard now holds 47,240 documents. Now shut down the replica-set shard's Primary and check again:
mongos> db.account.stats()
{
    "sharded" : true,
    "code" : 13639,
    "ok" : 0,
    "errmsg" : "exception: can't connect to new replica set master [192.168.200.53:50000], err: couldn't connect to server 192.168.200.53:50000 (192.168.200.53), connection attempt failed"
    # After the Primary is shut down, electing a new primary takes a few seconds; during that window the
    # replica set's data cannot be accessed, so the shard's data is unavailable as well.
}

mongos> db.account.stats()
...
    "shards" : {
        "mablevi" : {
            "ns" : "dba.account",
            "count" : 47240,     # once the new primary is elected, the shard is accessible again;
                                 # no data was lost, so high availability is achieved
            "size" : 5290880,
...
...
If the replica-set shard is reduced to a single surviving member (a Secondary), the shard reports an error:
mongos> db.account.stats()
{
    "sharded" : true,
    "code" : 10009,
    "ok" : 0,
    "errmsg" : "exception: ReplicaSetMonitor no master found for set: mablevi"    # the data cannot be accessed
}
3) To remove a shard that is no longer wanted:
mongos> use admin       # removeshard must be run against the admin database
switched to db admin
mongos> db.runCommand({"removeshard":"mmm"})
{
    "msg" : "draining started successfully",
    "state" : "started",            # draining has started: the data is being moved off the shard
    "shard" : "mmm",
    "ok" : 1
}
mongos> sh.status()
--- Sharding Status ---
...
  shards:
    {  "_id" : "mmm",  "host" : "mmm/192.168.200.245:27017,192.168.200.245:37017,192.168.200.25:27017",  "draining" : true }    # the removed shard's data is being moved to the other shards
    {  "_id" : "shard0000",  "host" : "192.168.200.51:40000" }
    {  "_id" : "shard0001",  "host" : "192.168.200.52:40000" }
    {  "_id" : "shard0002",  "host" : "192.168.200.53:40000" }
...
  databases:
    {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
    {  "_id" : "test",  "partitioned" : false,  "primary" : "shard0000" }
    {  "_id" : "dba",  "partitioned" : true,  "primary" : "shard0000" }
        dba.account
            shard key: { "name" : 1 }
            chunks:
                shard0000    2
                shard0001    1
                shard0002    2
            { "name" : { "$minKey" : 1 } } -->> { "name" : "5yyfY8mmR5HyhGJ" } on : shard0001 Timestamp(2, 0)
            { "name" : "5yyfY8mmR5HyhGJ" } -->> { "name" : "UavMbMlfszZOFrz" } on : shard0000 Timestamp(8, 0)
            { "name" : "UavMbMlfszZOFrz" } -->> { "name" : "t9LyVSNXDmf6esP" } on : shard0002 Timestamp(4, 1)    # the removed shard no longer owns any chunks
            { "name" : "t9LyVSNXDmf6esP" } -->> { "name" : "woQAv99Pq1FVoMX" } on : shard0002 Timestamp(3, 4)
            { "name" : "woQAv99Pq1FVoMX" } -->> { "name" : { "$maxKey" : 1 } } on : shard0000 Timestamp(7, 1)
    {  "_id" : "abc",  "partitioned" : false,  "primary" : "shard0000" }
    {  "_id" : "mablevi",  "partitioned" : false,  "primary" : "shard0001" }

mongos> db.runCommand({"removeshard":"mmm"})    # run the command again until it reports completion; if the shard held a lot of data this can take a while
{
    "msg" : "removeshard completed successfully",
    "state" : "completed",          # the removal is complete
    "shard" : "mmm",
    "ok" : 1
}
mongos> sh.status()
--- Sharding Status ---
...
  shards:                           # the shard is gone
    {  "_id" : "shard0000",  "host" : "192.168.200.51:40000" }
    {  "_id" : "shard0001",  "host" : "192.168.200.52:40000" }
    {  "_id" : "shard0002",  "host" : "192.168.200.53:40000" }
...
            { "name" : { "$minKey" : 1 } } -->> { "name" : "5yyfY8mmR5HyhGJ" } on : shard0001 Timestamp(2, 0)
            { "name" : "5yyfY8mmR5HyhGJ" } -->> { "name" : "UavMbMlfszZOFrz" } on : shard0000 Timestamp(8, 0)
            { "name" : "UavMbMlfszZOFrz" } -->> { "name" : "t9LyVSNXDmf6esP" } on : shard0002 Timestamp(4, 1)    # no trace of the removed shard remains
            { "name" : "t9LyVSNXDmf6esP" } -->> { "name" : "woQAv99Pq1FVoMX" } on : shard0002 Timestamp(3, 4)
            { "name" : "woQAv99Pq1FVoMX" } -->> { "name" : { "$maxKey" : 1 } } on : shard0000 Timestamp(7, 1)
    {  "_id" : "abc",  "partitioned" : false,  "primary" : "shard0000" }
    {  "_id" : "mablevi",  "partitioned" : false,  "primary" : "shard0001" }
After a shard is removed, its data is migrated to the remaining shards, so nothing is lost. If a mongos still shows stale routing information afterwards, its cached routing table can be refreshed with:

db.adminCommand({"flushRouterConfig":1})
Finally, list the shard members: db.runCommand({ listshards : 1 })
mongos> use admin       # must be run from the admin database
switched to db admin
mongos> db.runCommand({ listshards : 1 })
{
    "shards" : [
        { "_id" : "shard0000", "host" : "192.168.200.51:40000" },
        { "_id" : "shard0001", "host" : "192.168.200.52:40000" },
        { "_id" : "shard0002", "host" : "192.168.200.53:40000" },
        { "_id" : "mablevi", "host" : "mablevi/192.168.200.53:50000,192.168.200.53:50001,192.168.200.53:50002" }
    ],
    "ok" : 1
}
This covers the basics of MongoDB sharding: the principles, the setup, and how it is used.
Sharding nicely addresses the hardware limits of a single server (disk space, memory, CPU) by splitting the data horizontally and reducing the access pressure on each node. Every shard is an independent database, and together all the shards form one logically complete database. Sharding therefore reduces the amount of data each shard has to store and operate on, so that multiple servers can absorb ever-growing load and data volume. Later articles will cover other aspects of sharding.
Overview: http://docs.mongodb.org/manual/core/sharding-introduction/
Deployment: http://docs.mongodb.org/manual/tutorial/deploy-shard-cluster/
Application: http://www.caiyiting.com/blog/2014/replica-sets-sharding-realization.html