Title: Offline Docker installation and deployment of common middleware (x86 architecture)

Author: 傲渊山岳    Time: 2025-1-21 10:08

Preface: This article covers setting up a system on x86 servers that sit on an internal network with no internet access.
I. Offline Docker installation

1. Download the Docker binary package

https://download.docker.com/linux/static/stable/x86_64/
Version: docker-23.0.6.tgz

2. Upload docker-23.0.6.tgz to the server; here it is placed under /home.


3. Create the docker.service file

  # Go to /etc/systemd/system and create docker.service
  cd /etc/systemd/system
  touch docker.service

Paste the content below into docker.service, then save with :wq
  [Unit]
  Description=Docker Application Container Engine
  Documentation=https://docs.docker.com
  After=network-online.target firewalld.service
  Wants=network-online.target

  [Service]
  Type=notify
  # the default is not to use systemd for cgroups because the delegate issues still
  # exists and systemd currently does not support the cgroup feature set required
  # for containers run by docker
  ExecStart=/usr/bin/dockerd
  ExecReload=/bin/kill -s HUP $MAINPID
  # Having non-zero Limit*s causes performance problems due to accounting overhead
  # in the kernel. We recommend using cgroups to do container-local accounting.
  LimitNOFILE=infinity
  LimitNPROC=infinity
  LimitCORE=infinity
  # Uncomment TasksMax if your systemd version supports it.
  # Only systemd 226 and above support this version.
  #TasksMax=infinity
  TimeoutStartSec=0
  # set delegate yes so that systemd does not reset the cgroups of docker containers
  Delegate=yes
  # kill only the docker process, not all processes in the cgroup
  KillMode=process
  # restart the docker process if it exits prematurely
  Restart=on-failure
  StartLimitBurst=3
  StartLimitInterval=60s

  [Install]
  WantedBy=multi-user.target
4. Installation steps

  # Go to the directory containing the tarball
  cd /home
  # Unpack it
  tar -zxvf docker-23.0.6.tgz
  # Unpacking produces a docker/ directory
  # Copy its contents to /usr/bin/
  cp docker/* /usr/bin/
  # Make docker.service executable
  chmod +x /etc/systemd/system/docker.service
  # Start docker
  systemctl daemon-reload
  systemctl start docker
  systemctl enable docker.service
  # Check the docker version
  docker -v
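The manual steps above can be wrapped into one helper. This is a sketch, not part of the original guide: the default tarball path is an assumption, and DRY_RUN=1 only prints each command so you can review them before running as root.

```shell
#!/bin/sh
# Sketch of the install steps above as a single function.
# Assumptions: tarball under /home, binaries installed to /usr/bin.
# DRY_RUN=1 prints each command instead of executing it.
install_docker() {
    tarball=${1:-/home/docker-23.0.6.tgz}
    run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }
    run tar -zxvf "$tarball" -C /tmp       # unpack to /tmp/docker
    run cp -a /tmp/docker/. /usr/bin/      # install the binaries
    run systemctl daemon-reload
    run systemctl start docker
    run systemctl enable docker.service
    run docker -v                          # confirm the install
}

# Preview first:
# DRY_RUN=1 install_docker /home/docker-23.0.6.tgz
```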
II. Offline docker-compose installation

1. Download location

https://github.com/docker/compose/releases
Version: docker-compose-linux-x86_64

2. Place the docker-compose binary under /home as well.

3. Run the following:

  mv docker-compose-linux-x86_64 /usr/local/bin/docker-compose
  chmod +x /usr/local/bin/docker-compose
  # Check the version
  docker-compose -v
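Because every artifact is carried onto the offline host by hand, it is worth verifying integrity after each copy. A small sketch (the helper name is mine, not from the original post): record the SHA-256 on the internet-connected machine, then compare on the server.

```shell
# Hypothetical helper: compare a copied file against the checksum
# recorded on the internet-connected machine before the transfer.
verify_copy() {
    file=$1; expected=$2
    actual=$(sha256sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "OK: $file"
    else
        echo "MISMATCH: $file"
        return 1
    fi
}

# On the online machine:  sha256sum docker-compose-linux-x86_64
# On the server:          verify_copy /home/docker-compose-linux-x86_64 <that hash>
```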
Reference: https://blog.csdn.net/Chat_FJ/article/details/136738261

III. Packaging a jar into a Docker image

1. Offline JDK image installation

(1) On an internet-connected x86 machine, pull a JDK image with Docker:

  # Search available JDK images; dockette/jdk8 is used here
  docker search jdk
  # Pull the image
  docker pull dockette/jdk8
  # Export the image to a local tarball
  docker save -o /home/jdk8.tar dockette/jdk8:latest

(2) Copy the exported tarball to the offline server, again under /home:

  # Load the image
  docker load -i jdk8.tar
  # List installed images
  docker images
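When more than one image has to be moved (the JDK here, plus the MySQL, nginxWebUI and Redis images later in this guide), a small loop keeps the tarball names consistent. A sketch of mine, not from the original post; the image list in the usage line is illustrative:

```shell
# Print one "docker save" command per image; '/' and ':' in image names
# are mapped to '_' so each tarball gets a flat filename under /home.
save_commands() {
    for img in "$@"; do
        tarball="/home/$(echo "$img" | tr '/:' '__').tar"
        echo "docker save -o $tarball $img"
    done
}

# Review, then pipe to sh to actually export:
# save_commands dockette/jdk8:latest mysql:8.2.0 redis:6.2.5 | sh
```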

2. Building a Docker image for the jar

(1) Create a directory to serve as the custom build context:

  cd /home
  mkdir mydocker-jar

(2) Copy the jar into this directory and create a Dockerfile:

  # Create it directly with vi
  vi Dockerfile

Enter the following:

  # Build on top of the dockette/jdk8 image
  FROM dockette/jdk8
  # Set the working directory inside the container to /test
  WORKDIR /test
  # Log output path
  ENV LOG_PATH /var/log/myapp.log
  # Copy the jar into the container working directory /test
  COPY base-app-platform.jar /test/base-app-platform.jar
  # Start the jar. Shell form is used so $LOG_PATH is actually expanded
  # (exec form would pass "${LOG_PATH}" literally), and the -D JVM flag
  # must come before -jar or it is handed to the app as an argument.
  CMD java -Dfile.encoding=utf-8 -jar base-app-platform.jar --logging.file=$LOG_PATH
(3) Build the image and start the container:

  docker build -t app-docker .

  # Run the container
  # --restart=always: restart the container with the host
  # -v /home/mydocker-jar/logs:/var/log mounts the container logs on the host for easy viewing
  # -v /home/mydocker-jar/base-app-platform.jar:/test/base-app-platform.jar mounts the jar from the host, so upgrading only requires replacing the jar on the host
  # -p 9001:9001 maps the jar's port in the container to the host
  docker run -it --restart=always -v /home/mydocker-jar/logs:/var/log -v /home/mydocker-jar/base-app-platform.jar:/test/base-app-platform.jar -p 9001:9001 --name appdocker -d app-docker

A STATUS of "Up" in docker ps means the container started successfully; you can then hit ip:port to test the API.
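Rather than refreshing ip:port by hand, a small poll loop can wait for the service to come up. This helper is my own sketch, not from the original post; it relies on bash's /dev/tcp, so run it under bash:

```shell
# Poll host:port until it accepts a TCP connection or the attempts run out.
# Prints "up" on success, "timeout" on failure.
wait_for_port() {
    host=$1; port=$2; tries=${3:-30}
    i=0
    while [ "$i" -lt "$tries" ]; do
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            echo "up"; return 0
        fi
        i=$((i + 1)); sleep 1
    done
    echo "timeout"; return 1
}

# wait_for_port 127.0.0.1 9001 60   # wait up to a minute for the jar's port
```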
IV. Offline MySQL installation

1. On an internet-connected x86 machine, pull a MySQL image with Docker:

  # Pull the image
  docker pull mysql:8.2.0
  # Export the image to a local tarball
  docker save -o /home/xht/mysql-8.2.0.tar mysql:8.2.0

2. Copy the exported tarball to the offline server, again under /home:

  # Load the image
  docker load -i mysql-8.2.0.tar
  # List installed images
  docker images
3. Create the directories mounted from the host:

  mkdir -p /home/mysql8/data /home/mysql8/logs /home/mysql8/conf
  cd /home/mysql8/conf
  touch my.cnf
  vi my.cnf
Put the following into my.cnf (the original listed [mysqld] twice; the settings are merged into one section here):

  [mysqld]
  pid-file        = /var/run/mysqld/mysqld.pid
  socket          = /var/run/mysqld/mysqld.sock
  datadir         = /var/lib/mysql
  #log-error      = /var/log/mysql/error.log
  # Disabling symbolic-links is recommended to prevent assorted security risks
  symbolic-links=0
  max_connections = 2000
  max_user_connections = 1900
  max_connect_errors = 100000
  max_allowed_packet = 50M
  lower_case_table_names=1
  skip-name-resolve
4. Start the MySQL container (the data mount uses the /home/mysql8/data directory created in step 3; the original command mounted /home/mysql8/mysql, leaving data/ unused):

  docker run -p 3306:3306 --privileged --restart=always --name mysql8 -v /home/mysql8/conf/my.cnf:/etc/mysql/my.cnf -v /home/mysql8/logs:/logs -v /home/mysql8/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=admin123 -d mysql:8.2.0

You can then test the connection with Navicat. Note: this installs MySQL 8, and older Navicat versions will report an error on connect. Either change MySQL 8's authentication plugin to mysql_native_password or connect with a newer Navicat.
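The plugin change mentioned above takes one statement once you are connected as root. A sketch, assuming the root account and the admin123 password from the docker run command; adjust to your own account:

```sql
-- Let older clients (e.g. older Navicat) authenticate: switch the account
-- from MySQL 8's default caching_sha2_password to mysql_native_password.
ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'admin123';
FLUSH PRIVILEGES;
```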

5. Master-slave replication

(1) Edit my.cnf on the master server, then restart the database:

  [mysqld]
  pid-file        = /var/run/mysqld/mysqld.pid
  socket          = /var/run/mysqld/mysqld.sock
  datadir         = /var/lib/mysql
  #log-error      = /var/log/mysql/error.log
  # Disabling symbolic-links is recommended to prevent assorted security risks
  symbolic-links=0
  # Enable the binary log and set its base name
  log-bin = mysql-bin
  # Keep binlogs for at most seven days, auto-deleting, so the disk does not fill up
  # expire_logs_days = 7 # no longer available in MySQL 8; use binlog_expire_logs_seconds
  binlog_expire_logs_seconds=604800
  # Server ID; must be unique across master and slaves
  server-id = 3
  max_connections = 2000
  max_user_connections = 1900
  max_connect_errors = 100000
  max_allowed_packet = 50M
  lower_case_table_names=1
  skip-name-resolve
(2) Connect to the master and create a dedicated replication user:

  # Check the server id
  show variables like 'server_id';
  # List users
  SELECT user FROM mysql.user;
  # Create the replication user and allow connections from other hosts
  CREATE USER 'myslave'@'%' IDENTIFIED WITH 'mysql_native_password' BY '<password>';
  GRANT REPLICATION SLAVE ON *.* TO 'myslave'@'%';
  # Check grants
  SHOW GRANTS FOR 'myslave'@'%';
  # Reload privileges
  FLUSH PRIVILEGES;
  # Show the master's status, including the current log file and position
  SHOW MASTER STATUS;
(3) Install MySQL on the slave server following the steps above, and edit its my.cnf:

  [mysqld]
  pid-file        = /var/run/mysqld/mysqld.pid
  socket          = /var/run/mysqld/mysqld.sock
  datadir         = /var/lib/mysql
  #log-error      = /var/log/mysql/error.log
  # Disabling symbolic-links is recommended to prevent assorted security risks
  symbolic-links=0
  # Enable the binary log and set its base name
  log-bin = mysql-bin
  # Keep binlogs for at most seven days, auto-deleting, so the disk does not fill up
  # expire_logs_days = 7 # no longer available in MySQL 8; use binlog_expire_logs_seconds
  binlog_expire_logs_seconds=604800
  # Server ID; must be unique across master and slaves
  server-id = 5
  max_connections = 2000
  max_user_connections = 1900
  max_connect_errors = 100000
  max_allowed_packet = 50M
  lower_case_table_names=1
  skip-name-resolve
(4) Connect to the slave and configure replication:

  show variables like 'server_id';
  change master to
  master_host='<master ip>',
  master_port=<master port>,
  master_user='myslave',
  master_password='<password>',
  # Fill these from SHOW MASTER STATUS on the master:
  # File maps to master_log_file, Position maps to master_log_pos
  master_log_file='mysql-bin.000002',
  master_log_pos=4201,
  master_connect_retry=30;
  # Start replication
  start slave;
  # Check replication status; both Slave_IO_Running and Slave_SQL_Running should be Yes
  show slave status;
  # Stop replication
  stop slave;
Note: restarting the master may change the binlog coordinates, flipping Slave_SQL_Running to No. In that case, stop replication, update master_log_file and master_log_pos, re-run change master to..., and start replication again.
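Concretely, the recovery described in the note above looks like this (the file name and position are placeholders; read the real values from SHOW MASTER STATUS on the master):

```sql
-- On the master: note the current File and Position
SHOW MASTER STATUS;

-- On the slave: stop replication, re-point it at the new coordinates, restart
STOP SLAVE;
CHANGE MASTER TO
  master_log_file = 'mysql-bin.000003',   -- placeholder: the master's File
  master_log_pos  = 157;                  -- placeholder: the master's Position
START SLAVE;
SHOW SLAVE STATUS;                        -- both *_Running columns should be Yes
```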
V. Offline nginxWebUI installation

1. On an internet-connected x86 machine, pull the image with Docker:

  # Pull the image
  docker pull cym1102/nginxwebui:3.7.3
  # Export the image to a local tarball (the original omitted the cym1102/ prefix)
  docker save -o /home/xht/nginxwebui-3.7.3.tar cym1102/nginxwebui:3.7.3

2. Copy the exported tarball to the offline server, again under /home:

  # Load the image
  docker load -i nginxwebui-3.7.3.tar
  # List installed images
  docker images
3. Run the following:

  docker run -itd --restart=always --name=nginxWebUI -v /home/nginxWebUI:/home/nginxWebUI -e BOOT_OPTIONS="--server.port=8081" --privileged=true --net=host cym1102/nginxwebui:3.7.3 /bin/bash

Note: even after the container starts, ip:8081 may be unreachable. Either disable the firewall or open port 8081 (e.g. firewall-cmd --permanent --add-port=8081/tcp && firewall-cmd --reload).
VI. Offline Redis installation

1. On an internet-connected x86 machine, pull the image with Docker:

  # Pull the image
  docker pull redis:6.2.5
  # Export the image to a local tarball
  docker save -o /home/xht/redis-6.2.5.tar redis:6.2.5

2. Copy the exported tarball to the offline server, again under /home:

  # Load the image
  docker load -i redis-6.2.5.tar
  # List installed images
  docker images
3、创建当地挂载目次

  1. cd /home
  2. mkdir redis
  3. mkdir redis/data
  4. cd redis
  5. touch redis.conf
  6. # 修改配置文件,可参考下述
  7. vi redis.conf
  8. # Redis configuration file example
  9. # Note on units: when memory size is needed, it is possible to specify
  10. # it in the usual form of 1k 5GB 4M and so forth:
  11. #
  12. # 1k => 1000 bytes
  13. # 1kb => 1024 bytes
  14. # 1m => 1000000 bytes
  15. # 1mb => 1024*1024 bytes
  16. # 1g => 1000000000 bytes
  17. # 1gb => 1024*1024*1024 bytes
  18. #
  19. # units are case insensitive so 1GB 1Gb 1gB are all the same.
  20. ################################## INCLUDES ###################################
  21. # Include one or more other config files here.  This is useful if you
  22. # have a standard template that goes to all Redis servers but also need
  23. # to customize a few per-server settings.  Include files can include
  24. # other files, so use this wisely.
  25. #
  26. # Notice option "include" won't be rewritten by command "CONFIG REWRITE"
  27. # from admin or Redis Sentinel. Since Redis always uses the last processed
  28. # line as value of a configuration directive, you'd better put includes
  29. # at the beginning of this file to avoid overwriting config change at runtime.
  30. #
  31. # If instead you are interested in using includes to override configuration
  32. # options, it is better to use include as the last line.
  33. #
  34. # include .path        olocal.conf
  35. # include c:path        oother.conf
  36. ################################ GENERAL  #####################################
  37. # On Windows, daemonize and pidfile are not supported.
  38. # However, you can run redis as a Windows service, and specify a logfile.
  39. # The logfile will contain the pid.
  40. # Accept connections on the specified port, default is 6379.
  41. # If port 0 is specified Redis will not listen on a TCP socket.
  42. port 6379
  43. # TCP listen() backlog.
  44. #
  45. # In high requests-per-second environments you need an high backlog in order
  46. # to avoid slow clients connections issues. Note that the Linux kernel
  47. # will silently truncate it to the value of /proc/sys/net/core/somaxconn so
  48. # make sure to raise both the value of somaxconn and tcp_max_syn_backlog
  49. # in order to get the desired effect.
  50. tcp-backlog 511
  51. # By default Redis listens for connections from all the network interfaces
  52. # available on the server. It is possible to listen to just one or multiple
  53. # interfaces using the "bind" configuration directive, followed by one or
  54. # more IP addresses.
  55. #
  56. # Examples:
  57. #
  58. # bind 192.168.1.100 10.0.0.1
  59. # bind 127.0.0.1
  60. # Specify the path for the Unix socket that will be used to listen for
  61. # incoming connections. There is no default, so Redis will not listen
  62. # on a unix socket when not specified.
  63. #
  64. # unixsocket /tmp/redis.sock
  65. # unixsocketperm 700
  66. # Close the connection after a client is idle for N seconds (0 to disable)
  67. timeout 0
  68. # TCP keepalive.
  69. #
  70. # If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
  71. # of communication. This is useful for two reasons:
  72. #
  73. # 1) Detect dead peers.
  74. # 2) Take the connection alive from the point of view of network
  75. #    equipment in the middle.
  76. #
  77. # On Linux, the specified value (in seconds) is the period used to send ACKs.
  78. # Note that to close the connection the double of the time is needed.
  79. # On other kernels the period depends on the kernel configuration.
  80. #
  81. # A reasonable value for this option is 60 seconds.
  82. tcp-keepalive 0
  83. # Specify the server verbosity level.
  84. # This can be one of:
  85. # debug (a lot of information, useful for development/testing)
  86. # verbose (many rarely useful info, but not a mess like the debug level)
  87. # notice (moderately verbose, what you want in production probably)
  88. # warning (only very important / critical messages are logged)
  89. loglevel notice
  90. # Specify the log file name. Also 'stdout' can be used to force
  91. # Redis to log on the standard output.
  92. logfile ""
  93. # To enable logging to the Windows EventLog, just set 'syslog-enabled' to
  94. # yes, and optionally update the other syslog parameters to suit your needs.
  95. # If Redis is installed and launched as a Windows Service, this will
  96. # automatically be enabled.
  97. # syslog-enabled no
  98. # Specify the source name of the events in the Windows Application log.
  99. # syslog-ident redis
  100. # Set the number of databases. The default database is DB 0, you can select
  101. # a different one on a per-connection basis using SELECT <dbid> where
  102. # dbid is a number between 0 and 'databases'-1
  103. databases 16
  104. ################################ SNAPSHOTTING  ################################
  105. #
  106. # Save the DB on disk:
  107. #
  108. #   save <seconds> <changes>
  109. #
  110. #   Will save the DB if both the given number of seconds and the given
  111. #   number of write operations against the DB occurred.
  112. #
  113. #   In the example below the behaviour will be to save:
  114. #   after 900 sec (15 min) if at least 1 key changed
  115. #   after 300 sec (5 min) if at least 10 keys changed
  116. #   after 60 sec if at least 10000 keys changed
  117. #
  118. #   Note: you can disable saving completely by commenting out all "save" lines.
  119. #
  120. #   It is also possible to remove all the previously configured save
  121. #   points by adding a save directive with a single empty string argument
  122. #   like in the following example:
  123. #
  124. #   save ""
  125. save 900 1
  126. save 300 10
  127. save 60 10000
  128. # By default Redis will stop accepting writes if RDB snapshots are enabled
  129. # (at least one save point) and the latest background save failed.
  130. # This will make the user aware (in a hard way) that data is not persisting
  131. # on disk properly, otherwise chances are that no one will notice and some
  132. # disaster will happen.
  133. #
  134. # If the background saving process will start working again Redis will
  135. # automatically allow writes again.
  136. #
  137. # However if you have setup your proper monitoring of the Redis server
  138. # and persistence, you may want to disable this feature so that Redis will
  139. # continue to work as usual even if there are problems with disk,
  140. # permissions, and so forth.
  141. stop-writes-on-bgsave-error yes
  142. # Compress string objects using LZF when dump .rdb databases?
  143. # For default that's set to 'yes' as it's almost always a win.
  144. # If you want to save some CPU in the saving child set it to 'no' but
  145. # the dataset will likely be bigger if you have compressible values or keys.
  146. rdbcompression yes
  147. # Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
  148. # This makes the format more resistant to corruption but there is a performance
  149. # hit to pay (around 10%) when saving and loading RDB files, so you can disable it
  150. # for maximum performances.
  151. #
  152. # RDB files created with checksum disabled have a checksum of zero that will
  153. # tell the loading code to skip the check.
  154. rdbchecksum yes
  155. # The filename where to dump the DB
  156. dbfilename dump.rdb
  157. # The working directory.
  158. #
  159. # The DB will be written inside this directory, with the filename specified
  160. # above using the 'dbfilename' configuration directive.
  161. #
  162. # The Append Only File will also be created inside this directory.
  163. #
  164. # Note that you must specify a directory here, not a file name.
  165. dir ./
  166. ################################# REPLICATION #################################
  167. # Master-Slave replication. Use slaveof to make a Redis instance a copy of
  168. # another Redis server. A few things to understand ASAP about Redis replication.
  169. #
  170. # 1) Redis replication is asynchronous, but you can configure a master to
  171. #    stop accepting writes if it appears to be not connected with at least
  172. #    a given number of slaves.
  173. # 2) Redis slaves are able to perform a partial resynchronization with the
  174. #    master if the replication link is lost for a relatively small amount of
  175. #    time. You may want to configure the replication backlog size (see the next
  176. #    sections of this file) with a sensible value depending on your needs.
  177. # 3) Replication is automatic and does not need user intervention. After a
  178. #    network partition slaves automatically try to reconnect to masters
  179. #    and resynchronize with them.
  180. #
  181. # slaveof <masterip> <masterport>
  182. # If the master is password protected (using the "requirepass" configuration
  183. # directive below) it is possible to tell the slave to authenticate before
  184. # starting the replication synchronization process, otherwise the master will
  185. # refuse the slave request.
  186. #
  187. # masterauth <master-password>
  188. # When a slave loses its connection with the master, or when the replication
  189. # is still in progress, the slave can act in two different ways:
  190. #
  191. # 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
  192. #    still reply to client requests, possibly with out of date data, or the
  193. #    data set may just be empty if this is the first synchronization.
  194. #
  195. # 2) if slave-serve-stale-data is set to 'no' the slave will reply with
  196. #    an error "SYNC with master in progress" to all the kind of commands
  197. #    but to INFO and SLAVEOF.
  198. #
  199. slave-serve-stale-data yes
  200. # You can configure a slave instance to accept writes or not. Writing against
  201. # a slave instance may be useful to store some ephemeral data (because data
  202. # written on a slave will be easily deleted after resync with the master) but
  203. # may also cause problems if clients are writing to it because of a
  204. # misconfiguration.
  205. #
  206. # Since Redis 2.6 by default slaves are read-only.
  207. #
  208. # Note: read only slaves are not designed to be exposed to untrusted clients
  209. # on the internet. It's just a protection layer against misuse of the instance.
  210. # Still a read only slave exports by default all the administrative commands
  211. # such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
  212. # security of read only slaves using 'rename-command' to shadow all the
  213. # administrative / dangerous commands.
  214. slave-read-only yes
  215. # Replication SYNC strategy: disk or socket.
  216. #
  217. # -------------------------------------------------------
  218. # WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
  219. # -------------------------------------------------------
  220. #
  221. # New slaves and reconnecting slaves that are not able to continue the replication
  222. # process just receiving differences, need to do what is called a "full
  223. # synchronization". An RDB file is transmitted from the master to the slaves.
  224. # The transmission can happen in two different ways:
  225. #
  226. # 1) Disk-backed: The Redis master creates a new process that writes the RDB
  227. #                 file on disk. Later the file is transferred by the parent
  228. #                 process to the slaves incrementally.
  229. # 2) Diskless: The Redis master creates a new process that directly writes the
  230. #              RDB file to slave sockets, without touching the disk at all.
  231. #
  232. # With disk-backed replication, while the RDB file is generated, more slaves
  233. # can be queued and served with the RDB file as soon as the current child producing
  234. # the RDB file finishes its work. With diskless replication instead once
  235. # the transfer starts, new slaves arriving will be queued and a new transfer
  236. # will start when the current one terminates.
  237. #
  238. # When diskless replication is used, the master waits a configurable amount of
  239. # time (in seconds) before starting the transfer in the hope that multiple slaves
  240. # will arrive and the transfer can be parallelized.
  241. #
  242. # With slow disks and fast (large bandwidth) networks, diskless replication
  243. # works better.
  244. repl-diskless-sync no
  245. # When diskless replication is enabled, it is possible to configure the delay
  246. # the server waits in order to spawn the child that transfers the RDB via socket
  247. # to the slaves.
  248. #
  249. # This is important since once the transfer starts, it is not possible to serve
  250. # new slaves arriving, that will be queued for the next RDB transfer, so the server
  251. # waits a delay in order to let more slaves arrive.
  252. #
  253. # The delay is specified in seconds, and by default is 5 seconds. To disable
  254. # it entirely just set it to 0 seconds and the transfer will start ASAP.
  255. repl-diskless-sync-delay 5
  256. # Slaves send PINGs to server in a predefined interval. It's possible to change
  257. # this interval with the repl_ping_slave_period option. The default value is 10
  258. # seconds.
  259. #
  260. # repl-ping-slave-period 10
  261. # The following option sets the replication timeout for:
  262. #
  263. # 1) Bulk transfer I/O during SYNC, from the point of view of slave.
  264. # 2) Master timeout from the point of view of slaves (data, pings).
  265. # 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
  266. #
  267. # It is important to make sure that this value is greater than the value
  268. # specified for repl-ping-slave-period otherwise a timeout will be detected
  269. # every time there is low traffic between the master and the slave.
  270. #
  271. # repl-timeout 60
  272. # Disable TCP_NODELAY on the slave socket after SYNC?
  273. #
  274. # If you select "yes" Redis will use a smaller number of TCP packets and
  275. # less bandwidth to send data to slaves. But this can add a delay for
  276. # the data to appear on the slave side, up to 40 milliseconds with
  277. # Linux kernels using a default configuration.
  278. #
  279. # If you select "no" the delay for data to appear on the slave side will
  280. # be reduced but more bandwidth will be used for replication.
  281. #
  282. # By default we optimize for low latency, but in very high traffic conditions
  283. # or when the master and slaves are many hops away, turning this to "yes" may
  284. # be a good idea.
  285. repl-disable-tcp-nodelay no
  286. # Set the replication backlog size. The backlog is a buffer that accumulates
  287. # slave data when slaves are disconnected for some time, so that when a slave
  288. # wants to reconnect again, often a full resync is not needed, but a partial
  289. # resync is enough, just passing the portion of data the slave missed while
  290. # disconnected.
  291. #
  292. # The bigger the replication backlog, the longer the time the slave can be
  293. # disconnected and later be able to perform a partial resynchronization.
  294. #
  295. # The backlog is only allocated once there is at least a slave connected.
  296. #
  297. # repl-backlog-size 1mb
  298. # After a master has no longer connected slaves for some time, the backlog
  299. # will be freed. The following option configures the amount of seconds that
  300. # need to elapse, starting from the time the last slave disconnected, for
  301. # the backlog buffer to be freed.
  302. #
  303. # A value of 0 means to never release the backlog.
  304. #
  305. # repl-backlog-ttl 3600
  306. # The slave priority is an integer number published by Redis in the INFO output.
  307. # It is used by Redis Sentinel in order to select a slave to promote into a
  308. # master if the master is no longer working correctly.
  309. #
  310. # A slave with a low priority number is considered better for promotion, so
  311. # for instance if there are three slaves with priority 10, 100, 25 Sentinel will
  312. # pick the one with priority 10, that is the lowest.
  313. #
  314. # However a special priority of 0 marks the slave as not able to perform the
  315. # role of master, so a slave with priority of 0 will never be selected by
  316. # Redis Sentinel for promotion.
  317. #
  318. # By default the priority is 100.
  319. slave-priority 100
  320. # It is possible for a master to stop accepting writes if there are less than
  321. # N slaves connected, having a lag less or equal than M seconds.
  322. #
  323. # The N slaves need to be in "online" state.
  324. #
  325. # The lag in seconds, that must be <= the specified value, is calculated from
  326. # the last ping received from the slave, that is usually sent every second.
  327. #
  328. # This option does not GUARANTEE that N replicas will accept the write, but
  329. # will limit the window of exposure for lost writes in case not enough slaves
  330. # are available, to the specified number of seconds.
  331. #
  332. # For example to require at least 3 slaves with a lag <= 10 seconds use:
  333. #
  334. # min-slaves-to-write 3
  335. # min-slaves-max-lag 10
  336. #
  337. # Setting one or the other to 0 disables the feature.
  338. #
  339. # By default min-slaves-to-write is set to 0 (feature disabled) and
  340. # min-slaves-max-lag is set to 10.
  341. ################################## SECURITY ###################################
  342. # Require clients to issue AUTH <PASSWORD> before processing any other
  343. # commands.  This might be useful in environments in which you do not trust
  344. # others with access to the host running redis-server.
  345. #
  346. # This should stay commented out for backward compatibility and because most
  347. # people do not need auth (e.g. they run their own servers).
  348. #
  349. # Warning: since Redis is pretty fast an outside user can try up to
  350. # 150k passwords per second against a good box. This means that you should
  351. # use a very strong password otherwise it will be very easy to break.
  352. #
  353. # requirepass foobared
  354. # Command renaming.
  355. #
  356. # It is possible to change the name of dangerous commands in a shared
  357. # environment. For instance the CONFIG command may be renamed into something
  358. # hard to guess so that it will still be available for internal-use tools
  359. # but not available for general clients.
  360. #
  361. # Example:
  362. #
  363. # rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
  364. #
  365. # It is also possible to completely kill a command by renaming it into
  366. # an empty string:
  367. #
  368. # rename-command CONFIG ""
  369. #
  370. # Please note that changing the name of commands that are logged into the
  371. # AOF file or transmitted to slaves may cause problems.
  372. ################################### LIMITS ####################################
  373. # Set the max number of connected clients at the same time. By default
  374. # this limit is set to 10000 clients, however if the Redis server is not
  375. # able to configure the process file limit to allow for the specified limit
  376. # the max number of allowed clients is set to the current file limit
  377. # minus 32 (as Redis reserves a few file descriptors for internal uses).
  378. #
  379. # Once the limit is reached Redis will close all the new connections sending
  380. # an error 'max number of clients reached'.
  381. #
  382. # maxclients 10000
  383. # If Redis is to be used as an in-memory-only cache without any kind of
  384. # persistence, then the fork() mechanism used by the background AOF/RDB
  385. # persistence is unnecessary. As an optimization, all persistence can be
  386. # turned off in the Windows version of Redis. This will redirect heap
  387. # allocations to the system heap allocator, and disable commands that would
  388. # otherwise cause fork() operations: BGSAVE and BGREWRITEAOF.
  389. # This flag may not be combined with any of the other flags that configure
  390. # AOF and RDB operations.
  391. # persistence-available [(yes)|no]
  392. # Don't use more memory than the specified amount of bytes.
  393. # When the memory limit is reached Redis will try to remove keys
  394. # according to the eviction policy selected (see maxmemory-policy).
  395. #
  396. # If Redis can't remove keys according to the policy, or if the policy is
  397. # set to 'noeviction', Redis will start to reply with errors to commands
  398. # that would use more memory, like SET, LPUSH, and so on, and will continue
  399. # to reply to read-only commands like GET.
  400. #
  401. # This option is usually useful when using Redis as an LRU cache, or to set
  402. # a hard memory limit for an instance (using the 'noeviction' policy).
  403. #
  404. # WARNING: If you have slaves attached to an instance with maxmemory on,
  405. # the size of the output buffers needed to feed the slaves are subtracted
  406. # from the used memory count, so that network problems / resyncs will
  407. # not trigger a loop where keys are evicted, and in turn the output
  408. # buffer of slaves is full with DELs of keys evicted triggering the deletion
  409. # of more keys, and so forth until the database is completely emptied.
  410. #
  411. # In short... if you have slaves attached it is suggested that you set a lower
  412. # limit for maxmemory so that there is some free RAM on the system for slave
  413. # output buffers (but this is not needed if the policy is 'noeviction').
  414. #
  415. # WARNING: not setting maxmemory will cause Redis to terminate with an
  416. # out-of-memory exception if the heap limit is reached.
  417. #
  418. # NOTE: since Redis uses the system paging file to allocate the heap memory,
  419. # the Working Set memory usage showed by the Windows Task Manager or by other
  420. # tools such as ProcessExplorer will not always be accurate. For example, right
  421. # after a background save of the RDB or the AOF files, the working set value
  422. # may drop significantly. In order to check the correct amount of memory used
  423. # by the redis-server to store the data, use the INFO client command. The INFO
  424. # command shows only the memory used to store the redis data, not the extra
  425. # memory used by the Windows process for its own requirements. Th3 extra amount
  426. # of memory not reported by the INFO command can be calculated subtracting the
  427. # Peak Working Set reported by the Windows Task Manager and the used_memory_peak
  428. # reported by the INFO command.
  429. #
  430. # maxmemory <bytes>
  431. # MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
  432. # is reached. You can select among five behaviors:
  433. #
  434. # volatile-lru -> remove the key with an expire set using an LRU algorithm
  435. # allkeys-lru -> remove any key according to the LRU algorithm
  436. # volatile-random -> remove a random key with an expire set
  437. # allkeys-random -> remove a random key, any key
  438. # volatile-ttl -> remove the key with the nearest expire time (minor TTL)
  439. # noeviction -> don't expire at all, just return an error on write operations
  440. #
  441. # Note: with any of the above policies, Redis will return an error on write
  442. #       operations, when there are no suitable keys for eviction.
  443. #
  444. #       At the date of writing these commands are: set setnx setex append
  445. #       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
  446. #       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
  447. #       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
  448. #       getset mset msetnx exec sort
  449. #
  450. # The default is:
  451. #
  452. # maxmemory-policy noeviction
  453. # LRU and minimal TTL algorithms are not precise algorithms but approximated
  454. # algorithms (in order to save memory), so you can select as well the sample
  455. # size to check. For instance for default Redis will check three keys and
  456. # pick the one that was used less recently, you can change the sample size
  457. # using the following configuration directive.
  458. #
  459. # maxmemory-samples 3
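# --- Example added for illustration (not part of the stock redis.conf) ---
# A cache-style instance might combine the directives above roughly as
# follows; the values are assumptions to adapt, not recommendations:
#
# maxmemory 2gb
# maxmemory-policy allkeys-lru
# maxmemory-samples 5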
  460. ############################## APPEND ONLY MODE ###############################
  461. # By default Redis asynchronously dumps the dataset on disk. This mode is
  462. # good enough in many applications, but an issue with the Redis process or
  463. # a power outage may result into a few minutes of writes lost (depending on
  464. # the configured save points).
  465. #
  466. # The Append Only File is an alternative persistence mode that provides
  467. # much better durability. For instance using the default data fsync policy
  468. # (see later in the config file) Redis can lose just one second of writes in a
  469. # dramatic event like a server power outage, or a single write if something
  470. # wrong with the Redis process itself happens, but the operating system is
  471. # still running correctly.
  472. #
  473. # AOF and RDB persistence can be enabled at the same time without problems.
  474. # If the AOF is enabled on startup Redis will load the AOF, that is the file
  475. # with the better durability guarantees.
  476. #
  477. # Please check http://redis.io/topics/persistence for more information.
  478. appendonly no
  479. # The name of the append only file (default: "appendonly.aof")
  480. appendfilename "appendonly.aof"
  481. # The fsync() call tells the Operating System to actually write data on disk
  482. # instead of waiting for more data in the output buffer. Some OS will really flush
  483. # data on disk, some other OS will just try to do it ASAP.
  484. #
  485. # Redis supports three different modes:
  486. #
  487. # no: don't fsync, just let the OS flush the data when it wants. Faster.
  488. # always: fsync after every write to the append only log. Slow, Safest.
  489. # everysec: fsync only one time every second. Compromise.
  490. #
  491. # The default is "everysec", as that's usually the right compromise between
  492. # speed and data safety. It's up to you to understand if you can relax this to
  493. # "no" that will let the operating system flush the output buffer when
  494. # it wants, for better performances (but if you can live with the idea of
  495. # some data loss consider the default persistence mode that's snapshotting),
  496. # or on the contrary, use "always" that's very slow but a bit safer than
  497. # everysec.
  498. #
  499. # More details please check the following article:
  500. # http://antirez.com/post/redis-persistence-demystified.html
  501. #
  502. # If unsure, use "everysec".
  503. # appendfsync always
  504. appendfsync everysec
  505. # appendfsync no
  506. # When the AOF fsync policy is set to always or everysec, and a background
  507. # saving process (a background save or AOF log background rewriting) is
  508. # performing a lot of I/O against the disk, in some Linux configurations
  509. # Redis may block too long on the fsync() call. Note that there is no fix for
  510. # this currently, as even performing fsync in a different thread will block
  511. # our synchronous write(2) call.
  512. #
  513. # In order to mitigate this problem it's possible to use the following option
  514. # that will prevent fsync() from being called in the main process while a
  515. # BGSAVE or BGREWRITEAOF is in progress.
  516. #
  517. # This means that while another child is saving, the durability of Redis is
  518. # the same as "appendfsync none". In practical terms, this means that it is
  519. # possible to lose up to 30 seconds of log in the worst scenario (with the
  520. # default Linux settings).
  521. #
  522. # If you have latency problems turn this to "yes". Otherwise leave it as
  523. # "no" that is the safest pick from the point of view of durability.
  524. no-appendfsync-on-rewrite no
  525. # Automatic rewrite of the append only file.
  526. # Redis is able to automatically rewrite the log file implicitly calling
  527. # BGREWRITEAOF when the AOF log size grows by the specified percentage.
  528. #
  529. # This is how it works: Redis remembers the size of the AOF file after the
  530. # latest rewrite (if no rewrite has happened since the restart, the size of
  531. # the AOF at startup is used).
  532. #
  533. # This base size is compared to the current size. If the current size is
  534. # bigger than the specified percentage, the rewrite is triggered. Also
  535. # you need to specify a minimal size for the AOF file to be rewritten, this
  536. # is useful to avoid rewriting the AOF file even if the percentage increase
  537. # is reached but it is still pretty small.
  538. #
  539. # Specify a percentage of zero in order to disable the automatic AOF
  540. # rewrite feature.
  541. auto-aof-rewrite-percentage 100
  542. auto-aof-rewrite-min-size 64mb
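# Worked example (added note, not in the stock file): with the two values
# above, if the AOF measured 80mb after the last rewrite, BGREWRITEAOF is
# triggered automatically once the file grows past 160mb (100% growth);
# a file still under the 64mb floor is never rewritten automatically.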
  543. # An AOF file may be found to be truncated at the end during the Redis
  544. # startup process, when the AOF data gets loaded back into memory.
  545. # This may happen when the system where Redis is running
  546. # crashes, especially when an ext4 filesystem is mounted without the
  547. # data=ordered option (however this can't happen when Redis itself
  548. # crashes or aborts but the operating system still works correctly).
  549. #
  550. # Redis can either exit with an error when this happens, or load as much
  551. # data as possible (the default now) and start if the AOF file is found
  552. # to be truncated at the end. The following option controls this behavior.
  553. #
  554. # If aof-load-truncated is set to yes, a truncated AOF file is loaded and
  555. # the Redis server starts emitting a log to inform the user of the event.
  556. # Otherwise if the option is set to no, the server aborts with an error
  557. # and refuses to start. When the option is set to no, the user requires
  558. # to fix the AOF file using the "redis-check-aof" utility before to restart
  559. # the server.
  560. #
  561. # Note that if the AOF file will be found to be corrupted in the middle
  562. # the server will still exit with an error. This option only applies when
  563. # Redis will try to read more data from the AOF file but not enough bytes
  564. # will be found.
  565. aof-load-truncated yes
  566. ################################ LUA SCRIPTING  ###############################
  567. # Max execution time of a Lua script in milliseconds.
  568. #
  569. # If the maximum execution time is reached Redis will log that a script is
  570. # still in execution after the maximum allowed time and will start to
  571. # reply to queries with an error.
  572. #
  573. # When a long running script exceeds the maximum execution time only the
  574. # SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
  575. # used to stop a script that did not yet called write commands. The second
  576. # is the only way to shut down the server in the case a write command was
  577. # already issued by the script but the user doesn't want to wait for the natural
  578. # termination of the script.
  579. #
  580. # Set it to 0 or a negative value for unlimited execution without warnings.
  581. lua-time-limit 5000
  582. ################################ REDIS CLUSTER  ###############################
  583. #
  584. # ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
  585. # WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
  586. # in order to mark it as "mature" we need to wait for a non trivial percentage
  587. # of users to deploy it in production.
  588. # ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
  589. #
  590. # Normal Redis instances can't be part of a Redis Cluster; only nodes that are
  591. # started as cluster nodes can. In order to start a Redis instance as a
  592. # cluster node enable the cluster support uncommenting the following:
  593. #
  594. # cluster-enabled yes
  595. # Every cluster node has a cluster configuration file. This file is not
  596. # intended to be edited by hand. It is created and updated by Redis nodes.
  597. # Every Redis Cluster node requires a different cluster configuration file.
  598. # Make sure that instances running in the same system do not have
  599. # overlapping cluster configuration file names.
  600. #
  601. # cluster-config-file nodes-6379.conf
  602. # Cluster node timeout is the amount of milliseconds a node must be unreachable
  603. # for it to be considered in failure state.
  604. # Most other internal time limits are multiple of the node timeout.
  605. #
  606. # cluster-node-timeout 15000
  607. # A slave of a failing master will avoid to start a failover if its data
  608. # looks too old.
  609. #
  610. # There is no simple way for a slave to actually have a exact measure of
  611. # its "data age", so the following two checks are performed:
  612. #
  613. # 1) If there are multiple slaves able to failover, they exchange messages
  614. #    in order to try to give an advantage to the slave with the best
  615. #    replication offset (more data from the master processed).
  616. #    Slaves will try to get their rank by offset, and apply to the start
  617. #    of the failover a delay proportional to their rank.
  618. #
  619. # 2) Every single slave computes the time of the last interaction with
  620. #    its master. This can be the last ping or command received (if the master
  621. #    is still in the "connected" state), or the time that elapsed since the
  622. #    disconnection with the master (if the replication link is currently down).
  623. #    If the last interaction is too old, the slave will not try to failover
  624. #    at all.
  625. #
  626. # The point "2" can be tuned by user. Specifically a slave will not perform
  627. # the failover if, since the last interaction with the master, the time
  628. # elapsed is greater than:
  629. #
  630. #   (node-timeout * slave-validity-factor) + repl-ping-slave-period
  631. #
  632. # So for example if node-timeout is 30 seconds, and the slave-validity-factor
  633. # is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
  634. # slave will not try to failover if it was not able to talk with the master
  635. # for longer than 310 seconds.
  636. #
  637. # A large slave-validity-factor may allow slaves with too old data to failover
  638. # a master, while a too small value may prevent the cluster from being able to
  639. # elect a slave at all.
  640. #
  641. # For maximum availability, it is possible to set the slave-validity-factor
  642. # to a value of 0, which means, that slaves will always try to failover the
  643. # master regardless of the last time they interacted with the master.
  644. # (However they'll always try to apply a delay proportional to their
  645. # offset rank).
  646. #
  647. # Zero is the only value able to guarantee that when all the partitions heal
  648. # the cluster will always be able to continue.
  649. #
  650. # cluster-slave-validity-factor 10
  651. # Cluster slaves are able to migrate to orphaned masters, that are masters
  652. # that are left without working slaves. This improves the cluster ability
  653. # to resist to failures as otherwise an orphaned master can't be failed over
  654. # in case of failure if it has no working slaves.
  655. #
  656. # Slaves migrate to orphaned masters only if there are still at least a
  657. # given number of other working slaves for their old master. This number
  658. # is the "migration barrier". A migration barrier of 1 means that a slave
  659. # will migrate only if there is at least 1 other working slave for its master
  660. # and so forth. It usually reflects the number of slaves you want for every
  661. # master in your cluster.
  662. #
  663. # Default is 1 (slaves migrate only if their masters remain with at least
  664. # one slave). To disable migration just set it to a very large value.
  665. # A value of 0 can be set but is useful only for debugging and dangerous
  666. # in production.
  667. #
  668. # cluster-migration-barrier 1
  669. # By default Redis Cluster nodes stop accepting queries if they detect there
  670. # is at least an hash slot uncovered (no available node is serving it).
  671. # This way if the cluster is partially down (for example a range of hash slots
  672. # are no longer covered) all the cluster becomes, eventually, unavailable.
  673. # It automatically returns available as soon as all the slots are covered again.
  674. #
  675. # However sometimes you want the subset of the cluster which is working,
  676. # to continue to accept queries for the part of the key space that is still
  677. # covered. In order to do so, just set the cluster-require-full-coverage
  678. # option to no.
  679. #
  680. # cluster-require-full-coverage yes
  681. # In order to setup your cluster make sure to read the documentation
  682. # available at http://redis.io web site.
  683. ################################## SLOW LOG ###################################
  684. # The Redis Slow Log is a system to log queries that exceeded a specified
  685. # execution time. The execution time does not include the I/O operations
  686. # like talking with the client, sending the reply and so forth,
  687. # but just the time needed to actually execute the command (this is the only
  688. # stage of command execution where the thread is blocked and can not serve
  689. # other requests in the meantime).
  690. #
  691. # You can configure the slow log with two parameters: one tells Redis
  692. # what is the execution time, in microseconds, to exceed in order for the
  693. # command to get logged, and the other parameter is the length of the
  694. # slow log. When a new command is logged the oldest one is removed from the
  695. # queue of logged commands.
  696. # The following time is expressed in microseconds, so 1000000 is equivalent
  697. # to one second. Note that a negative number disables the slow log, while
  698. # a value of zero forces the logging of every command.
  699. slowlog-log-slower-than 10000
  700. # There is no limit to this length. Just be aware that it will consume memory.
  701. # You can reclaim memory used by the slow log with SLOWLOG RESET.
  702. slowlog-max-len 128
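# Example (added for illustration): the slow log can be inspected and
# cleared at runtime with redis-cli:
#
#   redis-cli SLOWLOG GET 10     # show the 10 most recent slow entries
#   redis-cli SLOWLOG RESET      # discard all entries and reclaim memory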
  703. ################################ LATENCY MONITOR ##############################
  704. # The Redis latency monitoring subsystem samples different operations
  705. # at runtime in order to collect data related to possible sources of
  706. # latency of a Redis instance.
  707. #
  708. # Via the LATENCY command this information is available to the user that can
  709. # print graphs and obtain reports.
  710. #
  711. # The system only logs operations that were performed in a time equal or
  712. # greater than the amount of milliseconds specified via the
  713. # latency-monitor-threshold configuration directive. When its value is set
  714. # to zero, the latency monitor is turned off.
  715. #
  716. # By default latency monitoring is disabled since it is mostly not needed
  717. # if you don't have latency issues, and collecting data has a performance
  718. # impact, that while very small, can be measured under big load. Latency
  719. # monitoring can easily be enabled at runtime using the command
  720. # "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
  721. latency-monitor-threshold 0
  722. ############################# Event notification ##############################
  723. # Redis can notify Pub/Sub clients about events happening in the key space.
  724. # This feature is documented at http://redis.io/topics/notifications
  725. #
  726. # For instance if keyspace events notification is enabled, and a client
  727. # performs a DEL operation on key "foo" stored in the Database 0, two
  728. # messages will be published via Pub/Sub:
  729. #
  730. # PUBLISH __keyspace@0__:foo del
  731. # PUBLISH __keyevent@0__:del foo
  732. #
  733. # It is possible to select the events that Redis will notify among a set
  734. # of classes. Every class is identified by a single character:
  735. #
  736. #  K     Keyspace events, published with __keyspace@<db>__ prefix.
  737. #  E     Keyevent events, published with __keyevent@<db>__ prefix.
  738. #  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
  739. #  $     String commands
  740. #  l     List commands
  741. #  s     Set commands
  742. #  h     Hash commands
  743. #  z     Sorted set commands
  744. #  x     Expired events (events generated every time a key expires)
  745. #  e     Evicted events (events generated when a key is evicted for maxmemory)
  746. #  A     Alias for g$lshzxe, so that the "AKE" string means all the events.
  747. #
  748. #  The "notify-keyspace-events" takes as argument a string that is composed
  749. #  of zero or multiple characters. The empty string means that notifications
  750. #  are disabled.
  751. #
  752. #  Example: to enable list and generic events, from the point of view of the
  753. #           event name, use:
  754. #
  755. #  notify-keyspace-events Elg
  756. #
  757. #  Example 2: to get the stream of the expired keys subscribing to channel
  758. #             name __keyevent@0__:expired use:
  759. #
  760. #  notify-keyspace-events Ex
  761. #
  762. #  By default all notifications are disabled because most users don't need
  763. #  this feature and the feature has some overhead. Note that if you don't
  764. #  specify at least one of K or E, no events will be delivered.
  765. notify-keyspace-events ""
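# Example (added for illustration): expired-key events could be enabled
# and observed at runtime, without editing this file, along these lines:
#
#   redis-cli CONFIG SET notify-keyspace-events Ex
#   redis-cli PSUBSCRIBE '__keyevent@0__:expired'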
  766. ############################### ADVANCED CONFIG ###############################
  767. # Hashes are encoded using a memory efficient data structure when they have a
  768. # small number of entries, and the biggest entry does not exceed a given
  769. # threshold. These thresholds can be configured using the following directives.
  770. hash-max-ziplist-entries 512
  771. hash-max-ziplist-value 64
  772. # Similarly to hashes, small lists are also encoded in a special way in order
  773. # to save a lot of space. The special representation is only used when
  774. # you are under the following limits:
  775. list-max-ziplist-entries 512
  776. list-max-ziplist-value 64
  777. # Sets have a special encoding in just one case: when a set is composed
  778. # of just strings that happen to be integers in radix 10 in the range
  779. # of 64 bit signed integers.
  780. # The following configuration setting sets the limit in the size of the
  781. # set in order to use this special memory saving encoding.
  782. set-max-intset-entries 512
  783. # Similarly to hashes and lists, sorted sets are also specially encoded in
  784. # order to save a lot of space. This encoding is only used when the length and
  785. # elements of a sorted set are below the following limits:
  786. zset-max-ziplist-entries 128
  787. zset-max-ziplist-value 64
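# Example (added for illustration): the encoding actually chosen for a key
# can be checked at runtime; e.g. a small hash or sorted set below the
# limits above reports "ziplist", a small integer-only set "intset":
#
#   redis-cli OBJECT ENCODING somekey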
  788. # HyperLogLog sparse representation bytes limit. The limit includes the
  789. # 16 bytes header. When an HyperLogLog using the sparse representation crosses
  790. # this limit, it is converted into the dense representation.
  791. #
  792. # A value greater than 16000 is totally useless, since at that point the
  793. # dense representation is more memory efficient.
  794. #
  795. # The suggested value is ~ 3000 in order to have the benefits of
  796. # the space efficient encoding without slowing down too much PFADD,
  797. # which is O(N) with the sparse encoding. The value can be raised to
  798. # ~ 10000 when CPU is not a concern, but space is, and the data set is
  799. # composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
  800. hll-sparse-max-bytes 3000
  801. # Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
  802. # order to help rehashing the main Redis hash table (the one mapping top-level
  803. # keys to values). The hash table implementation Redis uses (see dict.c)
  804. # performs a lazy rehashing: the more operation you run into a hash table
  805. # that is rehashing, the more rehashing "steps" are performed, so if the
  806. # server is idle the rehashing is never complete and some more memory is used
  807. # by the hash table.
  808. #
  809. # The default is to use this millisecond 10 times every second in order to
  810. # actively rehash the main dictionaries, freeing memory when possible.
  811. #
  812. # If unsure:
  813. # use "activerehashing no" if you have hard latency requirements and it is
  814. # not a good thing in your environment that Redis can reply from time to time
  815. # to queries with 2 milliseconds delay.
  816. #
  817. # use "activerehashing yes" if you don't have such hard requirements but
  818. # want to free memory asap when possible.
  819. activerehashing yes
  820. # The client output buffer limits can be used to force disconnection of clients
  821. # that are not reading data from the server fast enough for some reason (a
  822. # common reason is that a Pub/Sub client can't consume messages as fast as the
  823. # publisher can produce them).
  824. #
  825. # The limit can be set differently for the three different classes of clients:
  826. #
  827. # normal -> normal clients including MONITOR clients
  828. # slave  -> slave clients
  829. # pubsub -> clients subscribed to at least one pubsub channel or pattern
  830. #
  831. # The syntax of every client-output-buffer-limit directive is the following:
  832. #
  833. # client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
  834. #
  835. # A client is immediately disconnected once the hard limit is reached, or if
  836. # the soft limit is reached and remains reached for the specified number of
  837. # seconds (continuously).
  838. # So for instance if the hard limit is 32 megabytes and the soft limit is
  839. # 16 megabytes / 10 seconds, the client will get disconnected immediately
  840. # if the size of the output buffers reach 32 megabytes, but will also get
  841. # disconnected if the client reaches 16 megabytes and continuously overcomes
  842. # the limit for 10 seconds.
  843. #
  844. # By default normal clients are not limited because they don't receive data
  845. # without asking (in a push way), but just after a request, so only
  846. # asynchronous clients may create a scenario where data is requested faster
  847. # than it can read.
  848. #
  849. # Instead there is a default limit for pubsub and slave clients, since
  850. # subscribers and slaves receive data in a push fashion.
  851. #
  852. # Both the hard or the soft limit can be disabled by setting them to zero.
  853. client-output-buffer-limit normal 0 0 0
  854. client-output-buffer-limit slave 256mb 64mb 60
  855. client-output-buffer-limit pubsub 32mb 8mb 60
  856. # Redis calls an internal function to perform many background tasks, like
  857. # closing connections of clients in timeout, purging expired keys that are
  858. # never requested, and so forth.
  859. #
  860. # Not all tasks are performed with the same frequency, but Redis checks for
  861. # tasks to perform according to the specified "hz" value.
  862. #
  863. # By default "hz" is set to 10. Raising the value will use more CPU when
  864. # Redis is idle, but at the same time will make Redis more responsive when
  865. # there are many keys expiring at the same time, and timeouts may be
  866. # handled with more precision.
  867. #
  868. # The range is between 1 and 500, however a value over 100 is usually not
  869. # a good idea. Most users should use the default of 10 and raise this up to
  870. # 100 only in environments where very low latency is required.
  871. hz 10
  872. # When a child rewrites the AOF file, if the following option is enabled
  873. # the file will be fsync-ed every 32 MB of data generated. This is useful
  874. # in order to commit the file to the disk more incrementally and avoid
  875. # big latency spikes.
  876. aof-rewrite-incremental-fsync yes
  877. ################################## INCLUDES ###################################
  878. # Include one or more other config files here.  This is useful if you
  879. # have a standard template that goes to all Redis server but also need
  880. # to customize a few per-server settings.  Include files can include
  881. # other files, so use this wisely.
  882. #
  883. # include /path/to/local.conf
  884. # include /path/to/other.conf
复制代码
4、Start the container

  1. docker run --restart=always -p 6379:6379 --name myredis \
  2.   -v /home/redis/redis.conf:/etc/redis/redis.conf \
  3.   -v /home/redis/data:/data \
  4.   -d redis:6.2.5 redis-server /etc/redis/redis.conf
复制代码
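Before the first `docker run`, it helps to make sure the host-side paths referenced by the `-v` mounts exist; otherwise Docker auto-creates them as root-owned directories, including a directory where `redis.conf` should be a regular file. A minimal sketch, assuming the article's `/home/redis` layout:

```shell
# Prepare host paths for the -v mounts (paths taken from the article).
BASE=/home/redis
mkdir -p "$BASE/data"                       # data volume for RDB/AOF files
# redis.conf must exist as a regular file before mounting; create an
# empty placeholder here only if it is missing.
[ -f "$BASE/redis.conf" ] || touch "$BASE/redis.conf"
ls -l "$BASE"
```

After the container starts, `docker ps` and `docker logs myredis` confirm it is running.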