Replica Set Overview
- A replica set is a cluster of MongoDB instances with failover capability, consisting of one primary server and multiple secondary servers. Through replication, data updates are pushed from the primary to the other instances, so that after a short delay every MongoDB instance holds an identical copy of the data set. Maintaining these redundant copies enables off-site backup, read/write splitting, and automatic failover.
- A MongoDB replica set has no fixed primary node: after startup, the member nodes automatically elect one. That node is called the primary; the one or more remaining nodes are called secondaries. The primary is essentially the master node, except that the primary role may be held by different servers at different times. If the current primary fails, the remaining members of the replica set attempt to elect a new primary.
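Because clients address the replica set as a whole rather than a single server, read/write splitting and failover are largely transparent to the application. Below is a minimal sketch (assuming the hosts and replica set name testrs0 configured later in this article) of connecting from mongosh with a read preference:

```javascript
// Sketch: connect to the replica set, not to a single mongod.
// The shell/driver discovers the current primary, so writes keep working
// after a failover; readPreference lets reads be served by secondaries.
const db = connect(
  "mongodb://172.16.70.181:27017,172.16.70.182:27017,172.16.70.183:27017" +
  "/test?replicaSet=testrs0&readPreference=secondaryPreferred"
);
db.runCommand({ ping: 1 });   // routed according to the connection settings
```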

Node Types
A MongoDB replica set achieves high availability and read/write splitting by combining several kinds of nodes. Each replica set instance contains one primary node, one or more secondary nodes, a hidden node, an arbiter node, and optionally one or more read-only nodes. The primary, secondary, and hidden nodes are collectively referred to as the "primary/standby nodes". The node types are described below:
Primary node
- Handles and responds to data read and write requests. Each replica set instance has exactly one primary. The primary records every change to its data set in its operation log (oplog), whose default size is 5% of free disk space, capped at 50 GB.
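The current oplog size and the time window it covers can be checked on the primary; a minimal sketch in mongosh:

```javascript
// Prints the configured oplog size, how much of it is used, and the
// time range (first/last oplog event) it currently covers.
rs.printReplicationInfo();
```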
Secondary node
- Replicates the primary's data through the operation log (oplog) and can be elected as the new primary if the primary fails, providing high availability.
- When connecting through a secondary node's connection address, you can only read data, not write it.
- Secondary nodes are protected for high availability: if a secondary fails, the system automatically swaps it with the hidden node; if the swap does not happen automatically, you can trigger it yourself, and the secondary's connection address stays the same.
Triggering a node role switch causes one connection interruption of up to 30 seconds, so perform it during off-peak hours or make sure the application can reconnect automatically.
Hidden node
- Replicates the primary's data through the oplog; if a secondary fails it can take over as the new secondary, and if a read-only node fails it can take over as the new read-only node, providing high availability.
- A hidden node exists only for high availability and is invisible to clients (a configuration sketch follows this node list).
- A hidden node is not on the primary's candidate list and cannot be elected primary, but it does vote in primary elections.
- Each replica set instance has exactly one hidden node.
Arbiter node
- An arbiter only votes, with a fixed voting weight of 1; it holds no data and can never be promoted to primary.
- Arbiters are typically used in replica sets with an even number of members.
- An arbiter is usually deployed on an application server; never deploy it on the same server as a primary or secondary node.
Read-only node (ReadOnly)
- Replicates data through the oplog from whichever primary or secondary node has the lowest replication latency. Read-only nodes suit read-heavy workloads and offload traffic from the primary and secondary nodes. Two or more read-only nodes can be accessed through a ReadOnly Connection String URI to load-balance read requests.
- Read-only nodes are also protected for high availability: if a read-only node fails, the system automatically swaps it with the hidden node; if the swap does not happen automatically, you can trigger it yourself, and the read-only node's connection address stays the same.
Triggering a node role switch causes one connection interruption of up to 30 seconds, so perform it during off-peak hours or make sure the application can reconnect automatically.
- A read-only node has its own connection address, which makes it suitable for direct access from a separate system without interfering with the existing primary/secondary connections.
- A read-only node is not on the primary's candidate list; it can neither be elected primary nor vote in primary elections.
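In a self-managed replica set, these roles map onto ordinary member options in the replica set configuration. A minimal sketch follows; the member indexes and host names here are illustrative only, not part of the deployment built later in this article:

```javascript
// Sketch: shape member roles with rs.reconfig() (run on the primary).
cfg = rs.conf();
cfg.members[2].priority = 0;    // required for hidden members: never primary
cfg.members[2].hidden = true;   // invisible to clients, like a hidden node
// A "read-only"-style member: never becomes primary and does not vote.
// cfg.members[3].priority = 0;
// cfg.members[3].votes = 0;
rs.reconfig(cfg);
// An arbiter is added as a separate voting-only member:
// rs.addArb("mongodb-arbiter.example.net:27017");
```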
Replica Set Deployment Architecture
- The official MongoDB 6.x documentation recommends at least 3 members per replica set, preferably an odd number, with a maximum of 50 members of which at most 7 can vote in elections.
- The member count is limited mainly because too many members in one replica set increase the cost of replication and can drag down overall cluster performance.
- Too many voting members also lengthens elections, and an odd number of members is recommended to avoid split-brain scenarios, as the fault-tolerance figures below illustrate.
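A quick way to see why even member counts add little: the majority required to elect a primary, and therefore the number of member failures the set can tolerate, only improves at odd sizes. A small illustration (plain arithmetic, not a MongoDB API):

```javascript
// Majority and fault tolerance for a few voting-member counts.
for (const n of [3, 4, 5, 6, 7]) {
  const majority = Math.floor(n / 2) + 1;
  print(`${n} voting members -> majority ${majority}, tolerates ${n - majority} failure(s)`);
}
// 3 -> tolerates 1, 4 -> tolerates 1, 5 -> tolerates 2, 6 -> tolerates 2, 7 -> tolerates 3
```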
Building the Replica Set
Environment Preparation

| Hostname | IP address | Role |
| --- | --- | --- |
| MongoDB-Master | 172.16.70.181 | Primary |
| MongoDB-Slave01 | 172.16.70.182 | Secondary |
| MongoDB-Slave02 | 172.16.70.183 | Secondary |

- # Apply the same settings on all three nodes; MongoDB-Master is shown here as the example
- [root@MongoDB-Master ~]# cat /etc/redhat-release
- CentOS Linux release 7.9.2009 (Core)
- [root@MongoDB-Master ~]# uname -r
- 3.10.0-1160.el7.x86_64
- # Adjust ulimit system resource limits
- [root@MongoDB-Master ~]# cat /etc/security/limits.conf
- ....append the following lines at the end of the file....
- root soft nproc 65535
- root hard nproc 65535
- root hard nofile 65535
- root soft nofile 65535
- [root@MongoDB-Master ~]# setenforce 0
- [root@MongoDB-Master ~]# sed -i.bak '7s/enforcing/disabled/' /etc/selinux/config
- [root@MongoDB-Master ~]# systemctl stop firewalld
- [root@MongoDB-Master ~]# systemctl status firewalld
- ● firewalld.service - firewalld - dynamic firewall daemon
- Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
- Active: inactive (dead)
- Docs: man:firewalld(1)
Install MongoDB 6.0
- # Install the current stable release via yum; MongoDB-Master is shown here, and the two secondary nodes are set up the same way.
- [root@MongoDB-Master ~]# cat /etc/yum.repos.d/mongodb-org-6.0.repo
- [mongodb-org-6.0]
- name=MongoDB Repository
- baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/6.0/x86_64/
- gpgcheck=1
- enabled=1
- gpgkey=https://www.mongodb.org/static/pgp/server-6.0.asc
- [root@MongoDB-Master ~]# yum install -y mongodb-org
- =========================================================================================================
- # To install a specific release of MongoDB, specify each component package individually and append the version number to the package name, for example:
- yum install -y mongodb-org-6.0.10 mongodb-org-database-6.0.10 mongodb-org-server-6.0.10 mongodb-org-mongos-6.0.10 mongodb-org-tools-6.0.10
- # yum upgrades packages when newer versions become available. To prevent unintended upgrades, pin the packages by adding the following exclude directive to /etc/yum.conf:
- exclude=mongodb-org,mongodb-org-database,mongodb-org-server,mongodb-mongosh,mongodb-org-mongos,mongodb-org-tools
- =========================================================================================================
- # Check the installed version
- [root@MongoDB-Master ~]# mongod --version
- db version v6.0.11
- Build Info: {
- "version": "6.0.11",
- "gitVersion": "f797f841eaf1759c770271ae00c88b92b2766eed",
- "openSSLVersion": "OpenSSL 1.0.1e-fips 11 Feb 2013",
- "modules": [],
- "allocator": "tcmalloc",
- "environment": {
- "distmod": "rhel70",
- "distarch": "x86_64",
- "target_arch": "x86_64"
- }
- }
- # Edit the configuration file (bindIp: 127.0.0.1,172.16.70.181)
- [root@MongoDB-Master ~]# grep -Ev "^$" /etc/mongod.conf
- # mongod.conf
- # for documentation of all options, see:
- # http://docs.mongodb.org/manual/reference/configuration-options/
- # where to write logging data.
- systemLog:
-   destination: file
-   logAppend: true
-   path: /var/log/mongodb/mongod.log
- # Where and how to store data.
- storage:
-   dbPath: /var/lib/mongo
-   journal:
-     enabled: true
- #  engine:
- #  wiredTiger:
- # how the process runs
- processManagement:
-   timeZoneInfo: /usr/share/zoneinfo
- # network interfaces
- net:
-   port: 27017
-   # Note: list this host's own IP address here, otherwise the replica set initialization later may fail!
-   bindIp: 127.0.0.1,172.16.70.181  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
- #security:
- #operationProfiling:
- replication:
-   # Define the replica set name
-   replSetName: testrs0
- #sharding:
- ## Enterprise-Only Options
- #auditLog:
- #snmp:
- # Make sure the user running MongoDB can access the relevant directories
- [root@MongoDB-Master ~]# grep mongo /etc/passwd
- mongod:x:997:996:mongod:/var/lib/mongo:/bin/false
- [root@MongoDB-Master ~]# ls -ld /var/log/mongodb/mongod.log /var/lib/mongo
- drwxr-xr-x 4 mongod mongod 4096 Oct 20 10:25 /var/lib/mongo
- -rw-r----- 1 mongod mongod 171974 Oct 20 10:22 /var/log/mongodb/mongod.log
- # Start MongoDB
- [root@MongoDB-Master ~]# systemctl start mongod && systemctl enable mongod
- [root@MongoDB-Master ~]# systemctl list-units | grep mongod
- mongod.service loaded active running MongoDB Database Server
- [root@MongoDB-Master ~]# ps axu | grep mongod
- mongod 1563 1.8 2.4 2805700 97536 ? Ssl 11:26 0:01 /usr/bin/mongod -f /etc/mongod.conf
- root 1703 0.0 0.0 112808 968 pts/0 S+ 11:27 0:00 grep --color=auto mongod
- [root@MongoDB-Master ~]# netstat -ntpl | grep mongod
- tcp 0 0 172.16.70.181:27017 0.0.0.0:* LISTEN 1563/mongod
- tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 1563/mongod
- [root@MongoDB-Master ~]# ls -l /tmp/mongodb-27017.sock
- srwx------ 1 mongod mongod 0 Oct 20 11:26 /tmp/mongodb-27017.sock
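Before initializing the replica set, it is worth confirming that each node can reach the others on port 27017 (otherwise rs.initiate may fail because of bindIp or firewall settings). A small sketch that can be run from mongosh on any node, assuming the three IPs above:

```javascript
// Ping each prospective member to verify network reachability.
const hosts = ["172.16.70.181:27017", "172.16.70.182:27017", "172.16.70.183:27017"];
for (const host of hosts) {
  const res = new Mongo("mongodb://" + host).getDB("admin").runCommand({ ping: 1 });
  print(`${host} -> ok: ${res.ok}`);
}
```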
Deploy the Replica Set
The _id below must match the replSetName defined in mongod.conf.
- # rs.initiate can be run on any node; here the initialization is performed on MongoDB-Master.
- [root@MongoDB-Master ~]# mongosh
- test> rs.initiate( {
- ... _id : "testrs0",
- ... members: [
- ... { _id: 0, host: "172.16.70.181:27017" },
- ... { _id: 1, host: "172.16.70.182:27017" },
- ... { _id: 2, host: "172.16.70.183:27017" }
- ... ]
- ... })
- { ok: 1 }
- # View the replica set configuration and confirm there is only one primary
- testrs0 [direct: primary] test> rs.conf()
- {
- _id: 'testrs0',
- version: 1,
- term: 1,
- members: [
- {
- _id: 0,
- host: '172.16.70.181:27017',
- arbiterOnly: false, # whether this member is an arbiter; defaults to false
- buildIndexes: true, # whether this member builds indexes
- hidden: false, # whether this member is hidden
- priority: 1, # range 0-1000, default 1; higher values are preferred as primary, 0 means the member can never become primary (e.g. an arbiter)
- tags: {},
- secondaryDelaySecs: Long("0"), # replication delay for this member, in seconds
- votes: 1 # number of votes this member has in elections
- },
- {
- _id: 1,
- host: '172.16.70.182:27017',
- arbiterOnly: false,
- buildIndexes: true,
- hidden: false,
- priority: 1, # default 1
- tags: {},
- secondaryDelaySecs: Long("0"),
- votes: 1
- },
- {
- _id: 2,
- host: '172.16.70.183:27017',
- arbiterOnly: false,
- buildIndexes: true,
- hidden: false,
- priority: 1, # default 1
- tags: {},
- secondaryDelaySecs: Long("0"),
- votes: 1
- }
- ],
- protocolVersion: Long("1"),
- writeConcernMajorityJournalDefault: true,
- settings: {
- chainingAllowed: true,
- heartbeatIntervalMillis: 2000,
- heartbeatTimeoutSecs: 10,
- electionTimeoutMillis: 10000,
- catchUpTimeoutMillis: -1,
- catchUpTakeoverDelayMillis: 30000,
- getLastErrorModes: {},
- getLastErrorDefaults: { w: 1, wtimeout: 0 },
- replicaSetId: ObjectId("6531f7207c8f661d6f787810")
- }
- }
- testrs0 [direct: primary] test> rs.status()
- {
- set: 'testrs0', # replica set name
- date: ISODate("2023-10-20T04:06:56.352Z"), # current time
- myState: 1, # this member's replica state (0-10); common values: 1 PRIMARY, 2 SECONDARY, 7 ARBITER, 8 DOWN
- term: Long("2"), # current election term
- syncSourceHost: '', # hostname of the member this instance syncs from
- syncSourceId: -1, # member _id of the sync source (-1 when none)
- heartbeatIntervalMillis: Long("2000"), # heartbeat interval in milliseconds
- majorityVoteCount: 2, # votes required to elect a primary
- writeMajorityCount: 2, # members required to satisfy a majority write concern
- votingMembersCount: 3, # number of voting members in the replica set
- writableVotingMembersCount: 3, # number of writable (data-bearing) voting members
- optimes: {
- lastCommittedOpTime: { ts: Timestamp({ t: 1697774807, i: 1 }), t: Long("2") },
- lastCommittedWallTime: ISODate("2023-10-20T04:06:47.335Z"),
- readConcernMajorityOpTime: { ts: Timestamp({ t: 1697774807, i: 1 }), t: Long("2") },
- appliedOpTime: { ts: Timestamp({ t: 1697774807, i: 1 }), t: Long("2") },
- durableOpTime: { ts: Timestamp({ t: 1697774807, i: 1 }), t: Long("2") },
- lastAppliedWallTime: ISODate("2023-10-20T04:06:47.335Z"),
- lastDurableWallTime: ISODate("2023-10-20T04:06:47.335Z")
- },
- lastStableRecoveryTimestamp: Timestamp({ t: 1697774793, i: 1 }),
- electionCandidateMetrics: {
- lastElectionReason: 'stepUpRequestSkipDryRun',
- lastElectionDate: ISODate("2023-10-20T04:04:37.304Z"),
- electionTerm: Long("2"),
- lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1697774676, i: 1 }), t: Long("1") },
- lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1697774676, i: 1 }), t: Long("1") },
- numVotesNeeded: 2,
- priorityAtElection: 1,
- electionTimeoutMillis: Long("10000"),
- priorPrimaryMemberId: 1,
- numCatchUpOps: Long("0"),
- newTermStartDate: ISODate("2023-10-20T04:04:37.311Z"),
- wMajorityWriteAvailabilityDate: ISODate("2023-10-20T04:04:38.331Z")
- },
- electionParticipantMetrics: {
- votedForCandidate: true,
- electionTerm: Long("1"),
- lastVoteDate: ISODate("2023-10-20T03:42:36.120Z"),
- electionCandidateMemberId: 1,
- voteReason: '',
- lastAppliedOpTimeAtElection: { ts: Timestamp({ t: 1697773344, i: 1 }), t: Long("-1") },
- maxAppliedOpTimeInSet: { ts: Timestamp({ t: 1697773344, i: 1 }), t: Long("-1") },
- priorityAtElection: 1
- },
- members: [
- {
- _id: 0, # member _id within the replica set
- name: '172.16.70.181:27017', # host and port
- health: 1, # 1 = healthy, 0 = unreachable
- state: 1, # replica state code (1 = PRIMARY, 2 = SECONDARY)
- stateStr: 'PRIMARY', # human-readable state: PRIMARY or SECONDARY
- uptime: 2424, # uptime in seconds
- optime: { ts: Timestamp({ t: 1697774807, i: 1 }), t: Long("2") }, # last oplog entry applied by this member
- optimeDate: ISODate("2023-10-20T04:06:47.000Z"), # time of the last applied oplog entry
- lastAppliedWallTime: ISODate("2023-10-20T04:06:47.335Z"), # wall-clock time of the last operation applied by this member
- lastDurableWallTime: ISODate("2023-10-20T04:06:47.335Z"), # wall-clock time of the last operation written durably to this member's journal
- syncSourceHost: '',
- syncSourceId: -1,
- infoMessage: '',
- electionTime: Timestamp({ t: 1697774677, i: 1 }), # election timestamp from the oplog
- electionDate: ISODate("2023-10-20T04:04:37.000Z"), # time this member was elected primary
- configVersion: 1, # replica set config version
- configTerm: 2,
- self: true,
- lastHeartbeatMessage: ''
- },
- {
- _id: 1,
- name: '172.16.70.182:27017',
- health: 1,
- state: 2, # 2 = SECONDARY
- stateStr: 'SECONDARY', # secondary node
- uptime: 84,
- optime: { ts: Timestamp({ t: 1697774807, i: 1 }), t: Long("2") },
- optimeDurable: { ts: Timestamp({ t: 1697774807, i: 1 }), t: Long("2") },
- optimeDate: ISODate("2023-10-20T04:06:47.000Z"),
- optimeDurableDate: ISODate("2023-10-20T04:06:47.000Z"),
- lastAppliedWallTime: ISODate("2023-10-20T04:06:47.335Z"),
- lastDurableWallTime: ISODate("2023-10-20T04:06:47.335Z"),
- lastHeartbeat: ISODate("2023-10-20T04:06:55.647Z"),
- lastHeartbeatRecv: ISODate("2023-10-20T04:06:55.575Z"),
- pingMs: Long("0"),
- lastHeartbeatMessage: '',
- syncSourceHost: '172.16.70.183:27017',
- syncSourceId: 2,
- infoMessage: '',
- configVersion: 1,
- configTerm: 2
- },
- {
- _id: 2,
- name: '172.16.70.183:27017',
- health: 1,
- state: 2, # 2 = SECONDARY
- stateStr: 'SECONDARY', # secondary node
- uptime: 1471,
- optime: { ts: Timestamp({ t: 1697774807, i: 1 }), t: Long("2") },
- optimeDurable: { ts: Timestamp({ t: 1697774807, i: 1 }), t: Long("2") },
- optimeDate: ISODate("2023-10-20T04:06:47.000Z"),
- optimeDurableDate: ISODate("2023-10-20T04:06:47.000Z"),
- lastAppliedWallTime: ISODate("2023-10-20T04:06:47.335Z"),
- lastDurableWallTime: ISODate("2023-10-20T04:06:47.335Z"),
- lastHeartbeat: ISODate("2023-10-20T04:06:55.561Z"),
- lastHeartbeatRecv: ISODate("2023-10-20T04:06:54.598Z"),
- pingMs: Long("0"),
- lastHeartbeatMessage: '',
- syncSourceHost: '172.16.70.181:27017',
- syncSourceId: 0,
- infoMessage: '',
- configVersion: 1,
- configTerm: 2
- }
- ],
- ok: 1,
- '$clusterTime': {
- clusterTime: Timestamp({ t: 1697774807, i: 1 }),
- signature: {
- hash: Binary.createFromBase64("AAAAAAAAAAAAAAAAAAAAAAAAAAA=", 0),
- keyId: Long("0")
- }
- },
- operationTime: Timestamp({ t: 1697774807, i: 1 })
- }
- testrs0 [direct: primary] test>
Replication Test
- # On the primary (172.16.70.181), create the mydb database and insert a document into the myColl collection
- [root@MongoDB-Master ~]# mongosh
- testrs0 [direct: primary] test>
- testrs0 [direct: primary] test> show dbs
- admin 80.00 KiB
- config 208.00 KiB
- local 484.00 KiB
- testrs0 [direct: primary] test> use mydb
- switched to db mydb
- testrs0 [direct: primary] mydb> db.myColl.insertOne({ name: "zhang" })
- {
- acknowledged: true,
- insertedId: ObjectId("65361759be7d5c1abe9d83ee")
- }
- testrs0 [direct: primary] mydb> db.myColl.find()
- [ { _id: ObjectId("65361759be7d5c1abe9d83ee"), name: 'zhang' } ]
- # On the secondary nodes (172.16.70.182/183), verify that the data has been replicated
- [root@MongoDB-Slave01 ~]# mongosh
- testrs0 [direct: secondary] test> use mydb
- switched to db mydb
- testrs0 [direct: secondary] mydb> show collections
- myColl
- testrs0 [direct: secondary] mydb> db.myColl.find()
- MongoServerError: not primary and secondaryOk=false - consider using db.getMongo().setReadPref() or readPreference in the connection string
- # MongoServerError!
- # By default MongoDB routes reads and writes to the primary and secondaries reject reads, so reads on a secondary must be explicitly allowed
- testrs0 [direct: secondary] mydb> db.getMongo().setReadPref('secondary')
- testrs0 [direct: secondary] mydb> db.myColl.find()
- [ { _id: ObjectId("65361759be7d5c1abe9d83ee"), name: 'zhang' } ]
- # The primary's data has now been replicated to the secondary
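You can also check how far each secondary lags behind the primary; a minimal sketch, run from mongosh on the primary:

```javascript
// Shows, for each secondary, the time of its last replicated write ("syncedTo")
// and how many seconds it is behind the primary.
rs.printSecondaryReplicationInfo();
```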
Failover Test
- # Simulate a failure of the primary (172.16.70.181)
- [root@MongoDB-Master ~]# systemctl stop mongod
- [root@MongoDB-Master ~]# netstat -ntpl | grep mongod
- # Log in to a secondary node and check the replica set status
- [root@MongoDB-Slave01 ~]# mongosh
- testrs0 [direct: primary] test> rs.status()
- {
- set: 'testrs0',
- date: ISODate("2023-10-23T07:55:53.369Z"),
- myState: 1,
- term: Long("2"),
- syncSourceHost: '',
- syncSourceId: -1,
- heartbeatIntervalMillis: Long("2000"),
- majorityVoteCount: 2,
- writeMajorityCount: 2,
- votingMembersCount: 3,
- writableVotingMembersCount: 3,
- optimes: {
- lastCommittedOpTime: { ts: Timestamp({ t: 1698047743, i: 1 }), t: Long("2") },
- lastCommittedWallTime: ISODate("2023-10-23T07:55:43.519Z"),
- readConcernMajorityOpTime: { ts: Timestamp({ t: 1698047743, i: 1 }), t: Long("2") },
- appliedOpTime: { ts: Timestamp({ t: 1698047743, i: 1 }), t: Long("2") },
- durableOpTime: { ts: Timestamp({ t: 1698047743, i: 1 }), t: Long("2") },
- lastAppliedWallTime: ISODate("2023-10-23T07:55:43.519Z"),
- lastDurableWallTime: ISODate("2023-10-23T07:55:43.519Z")
- },
- lastStableRecoveryTimestamp: Timestamp({ t: 1698047693, i: 1 }),
- electionCandidateMetrics: {
- lastElectionReason: 'stepUpRequestSkipDryRun',
- lastElectionDate: ISODate("2023-10-23T07:51:13.449Z"),
- electionTerm: Long("2"),
- lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1698047468, i: 1 }), t: Long("1") },
- lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1698047468, i: 1 }), t: Long("1") },
- numVotesNeeded: 2,
- priorityAtElection: 1,
- electionTimeoutMillis: Long("10000"),
- priorPrimaryMemberId: 0,
- numCatchUpOps: Long("0"),
- newTermStartDate: ISODate("2023-10-23T07:51:13.456Z"),
- wMajorityWriteAvailabilityDate: ISODate("2023-10-23T07:51:14.459Z")
- },
- electionParticipantMetrics: {
- votedForCandidate: true,
- electionTerm: Long("1"),
- lastVoteDate: ISODate("2023-10-23T07:47:58.216Z"),
- electionCandidateMemberId: 0,
- voteReason: '',
- lastAppliedOpTimeAtElection: { ts: Timestamp({ t: 1698047267, i: 1 }), t: Long("-1") },
- maxAppliedOpTimeInSet: { ts: Timestamp({ t: 1698047267, i: 1 }), t: Long("-1") },
- priorityAtElection: 1
- },
- members: [
- {
- _id: 0,
- name: '172.16.70.181:27017',
- health: 0,
- state: 8,
- stateStr: '(not reachable/healthy)',
- uptime: 0,
- optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
- optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
- optimeDate: ISODate("1970-01-01T00:00:00.000Z"),
- optimeDurableDate: ISODate("1970-01-01T00:00:00.000Z"),
- lastAppliedWallTime: ISODate("2023-10-23T07:51:14.906Z"),
- lastDurableWallTime: ISODate("2023-10-23T07:51:14.906Z"),
- lastHeartbeat: ISODate("2023-10-23T07:55:52.462Z"),
- lastHeartbeatRecv: ISODate("2023-10-23T07:51:28.493Z"),
- pingMs: Long("0"),
- lastHeartbeatMessage: 'Error connecting to 172.16.70.181:27017 :: caused by :: Connection refused', # note the connection error
- syncSourceHost: '',
- syncSourceId: -1,
- infoMessage: '',
- configVersion: 1,
- configTerm: 2
- },
- {
- _id: 1,
- name: '172.16.70.182:27017',
- health: 1,
- state: 1,
- stateStr: 'PRIMARY',
- uptime: 1005,
- optime: { ts: Timestamp({ t: 1698047743, i: 1 }), t: Long("2") },
- optimeDate: ISODate("2023-10-23T07:55:43.000Z"),
- lastAppliedWallTime: ISODate("2023-10-23T07:55:43.519Z"),
- lastDurableWallTime: ISODate("2023-10-23T07:55:43.519Z"),
- syncSourceHost: '',
- syncSourceId: -1,
- infoMessage: '',
- electionTime: Timestamp({ t: 1698047473, i: 1 }),
- electionDate: ISODate("2023-10-23T07:51:13.000Z"),
- configVersion: 1,
- configTerm: 2,
- self: true,
- lastHeartbeatMessage: ''
- },
- {
- _id: 2,
- name: '172.16.70.183:27017',
- health: 1,
- state: 2,
- stateStr: 'SECONDARY',
- uptime: 486,
- optime: { ts: Timestamp({ t: 1698047743, i: 1 }), t: Long("2") },
- optimeDurable: { ts: Timestamp({ t: 1698047743, i: 1 }), t: Long("2") },
- optimeDate: ISODate("2023-10-23T07:55:43.000Z"),
- optimeDurableDate: ISODate("2023-10-23T07:55:43.000Z"),
- lastAppliedWallTime: ISODate("2023-10-23T07:55:43.519Z"),
- lastDurableWallTime: ISODate("2023-10-23T07:55:43.519Z"),
- lastHeartbeat: ISODate("2023-10-23T07:55:52.043Z"),
- lastHeartbeatRecv: ISODate("2023-10-23T07:55:52.568Z"),
- pingMs: Long("0"),
- lastHeartbeatMessage: '',
- syncSourceHost: '172.16.70.182:27017',
- syncSourceId: 1,
- infoMessage: '',
- configVersion: 1,
- configTerm: 2
- }
- ],
- ok: 1,
- '$clusterTime': {
- clusterTime: Timestamp({ t: 1698047743, i: 1 }),
- signature: {
- hash: Binary.createFromBase64("AAAAAAAAAAAAAAAAAAAAAAAAAAA=", 0),
- keyId: Long("0")
- }
- },
- operationTime: Timestamp({ t: 1698047743, i: 1 })
- }
- testrs0 [direct: primary] test>
- # After the election, the former secondary (172.16.70.182) has become the new primary.
- # Once the original primary (172.16.70.181) recovers, it rejoins as a secondary of the new primary (172.16.70.182).
- # To make a specific member the preferred primary, give it a higher priority (default 1; m is a number from 0 to 1000, higher values are more likely to become primary, and m = 0 means the member can never become primary).
- # One way is to remove the member with rs.remove("ip:port") and add it back with rs.add( { host: "ip:port", priority: Num } ); the next section shows the simpler rs.reconfig() approach.
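Besides stopping mongod, a controlled failover can be triggered from the current primary, which is handy for rehearsing the switch without killing a process; a minimal sketch:

```javascript
// Ask the current primary to step down and not seek re-election for 120 seconds,
// forcing the remaining members to elect a new primary.
rs.stepDown(120);
// rs.status() on any member then shows which node won the election.
```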
Change Member Priority
- # Load the current replica set configuration
- cfg = rs.conf()
- # n is the index into the members array (0 for the first member, 1 for the second, ...)
- # The default priority is 1; m is a number from 0 to 1000, higher values are more likely to become primary, and m = 0 means the member can never become primary (e.g. an arbiter)
- cfg.members[n].priority = m
- # Apply the new configuration to the replica set
- rs.reconfig(cfg)
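For example, to make the member at index 1 (172.16.70.182 in this deployment) the preferred primary:

```javascript
// Raise one member's priority; the others keep the default of 1.
cfg = rs.conf();
cfg.members[1].priority = 2;
rs.reconfig(cfg);
// Once this member is healthy and caught up, the replica set elects it primary.
```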
Add Replica Set Members
- # These commands must be run on the primary
- # Add a member with the default votes and priority
- rs.add( { host: "mongodbd4.example.net:27017" } )
- rs.add( "mongodbd4.example.net:27017" )
- # Add a member with priority 0
- rs.add( { host: "mongodbd4.example.net:27017", priority: 0 } )
- # Add an arbiter member
- rs.add( { host: "mongodb3.example.net:27017", arbiterOnly: true } )
- rs.add("mongodb3.example.net:27017", true)
Remove a Replica Set Member
- rs.remove("mongod3.example.net:27017")
- rs.remove("mongod3.example.net")
Replace a Replica Set Member
- cfg = rs.conf()
- cfg.members[0].host = "mongo2.example.net"
- rs.reconfig(cfg)
Author: 上古南城
Source: https://www.cnblogs.com/zhangwencheng
Copyright: this article is jointly copyrighted by the author and 博客园 (cnblogs). Reposting is welcome, but unless the author agrees otherwise this notice must be kept and a prominent link to the original article must be provided.