Redis Data Migration - RedisShake


redis-shake is a tool open-sourced by the Alibaba Cloud Redis team for Redis data migration and data filtering.

1. Basic Features

      
redis-shake supports five modes: decode, restore, dump, sync, and rump.

restore: restores an RDB file into the target Redis database.

dump: backs up the full data set of the source Redis into an RDB file.

decode: reads an RDB file and parses it into JSON.

sync: synchronizes data from the source Redis to the target Redis; supports both full and incremental migration, and supports synchronization between standalone, master-replica, and cluster deployments.

rump: synchronizes data from the source Redis to the target Redis; supports full migration only, using the SCAN and RESTORE commands, and works across different cloud vendors and different Redis versions (see the redis-cli sketch after this list).
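To make the rump approach concrete, here is a minimal redis-cli sketch of the SCAN plus DUMP/RESTORE idea it is built on, reusing the source and target addresses from the configuration later in this post; the key name user:1001 is made up for illustration.

# scan one batch of keys on the source
redis-cli -h 127.0.0.1 -p 6379 scan 0 count 100
# serialize a single key on the source and re-create it on the target;
# DUMP returns a binary payload, so a real migration passes it through a client
# library instead of copy-pasting it on the command line
redis-cli -h 127.0.0.1 -p 6379 dump user:1001
redis-cli -h 192.168.72.129 -p 6379 restore user:1001 0 "<payload returned by DUMP>"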

   

2. Basic Principles
3. How RedisShake Synchronization Works

1. The source Redis instance acts as the master and Redis-shake acts as a replica: Redis-shake sends the PSYNC command to the source Redis instance.

2. The source Redis instance first transfers an RDB file to Redis-shake, and Redis-shake forwards that RDB file to the target instance.

3. The source instance then sends its incremental commands to Redis-shake, and Redis-shake replays those incremental commands on the target instance.
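One way to observe this: while a sync task is running, the source instance should list Redis-shake as a connected replica, since Redis-shake attaches with PSYNC much like an ordinary replica. A quick check, assuming the source used in the test below (192.168.72.128):

# on the source instance, while redis-shake is running
redis-cli -h 192.168.72.128 -p 6379 info replication
# connected_slaves should include an entry for the host running redis-shake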

   

4. Installing RedisShake

Make sure you have a Golang environment set up on your local machine.

4.1 Download the release package

Releases · tair-opensource/RedisShake · GitHub
4.2 Extract the package

tar -zxvf redis-shake-linux-amd64.tar.gz -C /home/redisshake/

After extraction there are two files:
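For reference, a small sketch of preparing the extraction directory and confirming the result, assuming the two files are the redis-shake binary and the shake.toml configuration file used in the following steps:

mkdir -p /home/redisshake   # make sure the extraction directory exists before running tar
ls -l /home/redisshake/     # expected after extraction: redis-shake and shake.toml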

            

4.3 Edit the shake.toml configuration file

   
function = ""

[sync_reader]
cluster = false            # set to true if source is a redis cluster
address = "127.0.0.1:6379" # when cluster is true, set address to one of the cluster node
username = ""              # keep empty if not using ACL
password = ""              # keep empty if no authentication is required
tls = false
sync_rdb = true # set to false if you don't want to sync rdb
sync_aof = true # set to false if you don't want to sync aof
prefer_replica = true # set to true if you want to sync from replica node

#[scan_reader]
#cluster = false            # set to true if source is a redis cluster
#address = "127.0.0.1:6379" # when cluster is true, set address to one of the cluster node
#username = ""              # keep empty if not using ACL
#password = ""              # keep empty if no authentication is required
#tls = false
#dbs = []                   # set you want to scan dbs such as [1,5,7], if you don't want to scan all
#scan = true                # set to false if you don't want to scan keys
#ksn = false                # set to true to enabled Redis keyspace notifications (KSN) subscription
#count = 1                  # number of keys to scan per iteration

# [rdb_reader]
# filepath = "/tmp/dump.rdb"

# [aof_reader]
# filepath = "/tmp/.aof"
# timestamp = 0              # subsecond

[redis_writer]
cluster = false            # set to true if target is a redis cluster
sentinel = false           # set to true if target is a redis sentinel
master = ""                # set to master name if target is a redis sentinel
address = "192.168.72.129:6379" # when cluster is true, set address to one of the cluster node
username = ""              # keep empty if not using ACL
password = ""              # keep empty if no authentication is required
tls = false
off_reply = false       # turn off the server reply

[advanced]
dir = "data"
ncpu = 0        # runtime.GOMAXPROCS, 0 means use runtime.NumCPU() cpu cores
pprof_port = 0  # pprof port, 0 means disable
status_port = 0 # status port, 0 means disable

# log
log_file = "shake.log"
log_level = "info"     # debug, info or warn
log_interval = 5       # in seconds

# redis-shake gets key and value from rdb file, and uses RESTORE command to
# create the key in target redis. Redis RESTORE will return a "Target key name
# is busy" error when key already exists. You can use this configuration item
# to change the default behavior of restore:
# panic:   redis-shake will stop when meet "Target key name is busy" error.
# rewrite: redis-shake will replace the key with new value.
# ignore:  redis-shake will skip restore the key when meet "Target key name is busy" error.
rdb_restore_command_behavior = "panic" # panic, rewrite or skip

# redis-shake uses pipeline to improve sending performance.
# This item limits the maximum number of commands in a pipeline.
pipeline_count_limit = 1024

# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default. This amount is normally 1gb.
target_redis_client_max_querybuf_len = 1024_000_000

# In the Redis protocol, bulk requests, that are, elements representing single
# strings, are normally limited to 512 mb.
target_redis_proto_max_bulk_len = 512_000_000

# If the source is Elasticache or MemoryDB, you can set this item.
aws_psync = "" # example: aws_psync = "10.0.0.1:6379@nmfu2sl5osync,10.0.0.1:6379@xhma21xfkssync"

# destination will delete itself entire database before fetching files
# from source during full synchronization.
# This option is similar redis replicas RDB diskless load option:
#   repl-diskless-load on-empty-db
empty_db_before_sync = false

[module]
# The data format for BF.LOADCHUNK is not compatible in different versions. v2.6.3 <=> 20603
target_mbbloom_version = 20603
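Before starting the sync it is worth confirming that the source and target endpoints configured above are actually reachable; a minimal redis-cli check (add -a <password> if authentication is enabled):

redis-cli -h 127.0.0.1 -p 6379 ping        # source address from [sync_reader], expect PONG
redis-cli -h 192.168.72.129 -p 6379 ping   # target address from [redis_writer], expect PONG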
Official usage guide: Sync Reader | RedisShake

4.4 Start RedisShake

   
./redis-shake shake.toml
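For a longer-running migration you may prefer to keep the process in the background and follow its log; a sketch, assuming the log file ends up under the dir = "data" and log_file = "shake.log" settings from the [advanced] section above:

nohup ./redis-shake shake.toml > shake.out 2>&1 &   # run in the background
tail -f data/shake.log                              # follow the sync progress (assumed log path)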
   

4.5 Test the data migration

Insert a few keys on the machine at 192.168.72.128.
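For example, four test keys can be written on the source like this (the key names are made up for illustration and match the 4 entries mentioned below):

redis-cli -h 192.168.72.128 -p 6379 set k1 v1
redis-cli -h 192.168.72.128 -p 6379 set k2 v2
redis-cli -h 192.168.72.128 -p 6379 set k3 v3
redis-cli -h 192.168.72.128 -p 6379 set k4 v4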

      

                           
You can see that the RedisShake tool is listening for new keys in real time; whenever keys are added, it migrates them to the Redis instance on the other machine. The printed log shows that 4 entries were written.

        

                                   
Check the migrated data on the machine at 192.168.72.129.
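For example, with the hypothetical test keys above:

redis-cli -h 192.168.72.129 -p 6379 keys 'k*'   # KEYS is fine for a small test, avoid it in production
redis-cli -h 192.168.72.129 -p 6379 get k1      # should return the value written on the source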

         

         

4.6 Data verification

Use the INFO keyspace command to check the total number of keys and the number of keys with expirations.
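Running the command on both machines and comparing the counts is the simplest check; once the sync has caught up, the numbers on both sides should match:

redis-cli -h 192.168.72.128 -p 6379 info keyspace   # source
redis-cli -h 192.168.72.129 -p 6379 info keyspace   # target
# each dbN line reports keys=<total>,expires=<keys with a TTL>,avg_ttl=<...>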

            

                              

         

