[Data Warehouse] Deploying StarRocks with Docker


1. Environment Preparation

Reference: https://docs.starrocks.io/zh/docs/2.5/deployment/environment_configurations/
Install Docker and docker-compose

docker
# Remove any old Docker packages first
yum remove docker \
           docker-common \
           docker-selinux \
           docker-engine
# Set up the yum repository
yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install Docker
sudo yum install -y docker-ce docker-ce-cli containerd.io
# Docker daemon configuration
cat > /etc/docker/daemon.json <<EOF
{
    "data-root": "/data/docker",
    "storage-driver": "overlay2",
    "exec-opts": ["native.cgroupdriver=systemd"],
    "live-restore": true,
    "registry-mirrors": [
        "https://docker.rainbond.cc",
        "https://docker.m.daocloud.io",
        "https://noohub.ru",
        "https://huecker.io",
        "https://dockerhub.timeweb.cloud",
        "https://3md2h0z0.mirror.aliyuncs.com",
        "https://registry.docker-cn.com",
        "http://hub-mirror.c.163.com",
        "https://mirror.ccs.tencentyun.com",
        "https://docker.mirrors.ustc.edu.cn",
        "http://f1361db2.m.daocloud.io"
    ],
    "log-opts": {"max-size":"500m", "max-file":"3"},
    "log-driver": "json-file"
}
EOF
# Start Docker and enable it at boot
systemctl start docker
systemctl enable docker
docker-compose
wget https://github.com/docker/compose/releases/download/v2.15.1/docker-compose-Linux-x86_64 -O /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
Operating system settings to disable and configure [CentOS Linux 7 (Core)]

Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
Disable SELinux
setenforce 0
sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
sed -i 's/SELINUXTYPE/#SELINUXTYPE/' /etc/selinux/config
Memory settings
cat >> /etc/sysctl.conf << EOF
vm.overcommit_memory=1
EOF
sysctl -p
High-concurrency settings
cat >> /etc/sysctl.conf << EOF
vm.max_map_count = 262144
EOF
sysctl -p
echo 120000 > /proc/sys/kernel/threads-max
echo 200000 > /proc/sys/kernel/pid_max
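The two echo writes to /proc take effect immediately but are lost on reboot. A sketch of making them persistent as well, using the corresponding sysctl keys (kernel.threads-max and kernel.pid_max):
cat >> /etc/sysctl.conf << EOF
kernel.threads-max = 120000
kernel.pid_max = 200000
EOF
sysctl -p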
2. StarRocks v2.5 [storage and compute coupled; 3 FE, 3 BE]

Reference: https://docs.starrocks.io/zh/docs/2.5/deployment/prepare_deployment_files/
Prepare the configuration files on each node first, and pull the matching images (the last release in the 2.5 line is 2.5.22, but it has no official Docker image, so 2.5.21 is used).


  • BE: docker pull starrocks/be-ubuntu:2.5.21
  • FE: docker pull starrocks/fe-ubuntu:2.5.21
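A quick way to confirm the pulls succeeded on each node:
docker images | grep starrocks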
BE node configuration

Directories
mkdir -pv /data/starrocks/be/conf
cd /data/starrocks/be
./docker-compose.yaml
cat > ./docker-compose.yaml <<EOF
version: '3.7'
services:
  be:
    image: starrocks/be-ubuntu:2.5.21
    container_name: be
    restart: always
    network_mode: host
    command:
      /opt/starrocks/be/bin/start_be.sh
    volumes:
      - ./conf/be.conf:/opt/starrocks/be/conf/be.conf
      - ./storage:/opt/starrocks/be/storage
      - ./log:/opt/starrocks/be/log
      - /etc/localtime:/etc/localtime
    healthcheck:
      test: ["CMD-SHELL","curl -s -w '%{http_code}' -o /dev/null http://127.0.0.1:8040/api/health || bash -c 'kill -s 15 -1 && (sleep 10; kill -s 9 -1)'"]
      interval: 30s
      timeout: 20s
      retries: 3
      start_period: 3m
    logging:
      driver: "json-file"
      options:
        tag: "{{.Name}}"
        max-size: "10m"
EOF
./conf/be.conf


  • Set priority_networks to the host machine's subnet.
   For the remaining options, see: https://docs.starrocks.io/zh/docs/2.5/administration/Configuration/#%E9%85%8D%E7%BD%AE-be-%E9%9D%99%E6%80%81%E5%8F%82%E6%95%B0
cat > ./conf/be.conf <<'EOF'
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
# INFO, WARNING, ERROR, FATAL
sys_log_level = INFO
#JAVA_HOME=/usr/local/jdk
# ports for admin, web, heartbeat service
be_port = 9060
webserver_port = 8040
heartbeat_service_port = 9050
brpc_port = 8060
# Choose one if there are more than one ip except loopback address.
# Note that there should at most one ip match this list.
# If no ip match this rule, will choose one randomly.
# use CIDR format, e.g. 10.10.10.0/24
# Default value is empty.
# Specify the BE IP in CIDR form (e.g. 10.10.10.0/24); useful when the machine
# has multiple IPs and you need to pin the preferred network.
priority_networks = 10.101.1.0/24
# data root path, separate by ';'
# you can specify the storage medium of each root path, HDD or SSD, separate by ','
# eg:
# storage_root_path = /data1,medium:HDD;/data2,medium:SSD;/data3
# /data1, HDD;
# /data2, SSD;
# /data3, HDD(default);
#
# Default value is ${STARROCKS_HOME}/storage, you should create it by hand.
# storage_root_path = ${STARROCKS_HOME}/storage
# Advanced configurations
# sys_log_dir = ${STARROCKS_HOME}/log
# sys_log_roll_mode = SIZE-MB-1024
# sys_log_roll_num = 10
# sys_log_verbose_modules = *
# log_buffer_level = -1
default_rowset_type = beta
cumulative_compaction_num_threads_per_disk = 4
base_compaction_num_threads_per_disk = 2
cumulative_compaction_check_interval_seconds = 2
routine_load_thread_pool_size = 40
cumulative_compaction_budgeted_bytes=314572800
brpc_max_body_size = 8589934592
trash_file_expire_time_sec=600
mem_limit = 90%
pipeline_max_num_drivers_per_exec_thread=102400
disable_storage_page_cache = true
#disable_column_pool=true
#chunk_reserved_bytes_limit=100000000
EOF
./conf/log4j.properties
cat > ./conf/log4j.properties <<EOF
# log configuration for jars called via JNI in BE
# There are almost no logs other than the JDBC bridge for now, so writing to stdout is enough.
# If necessary, we can add dedicated log files later.
log4j.rootLogger=ERROR, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%t] (%F:%L) - %m%n
EOF
./conf/hadoop_env.sh
cat > ./conf/hadoop_env.sh <<'EOF'
# This file is licensed under the Elastic License 2.0. Copyright 2021-present, StarRocks Inc.
export HADOOP_CLASSPATH=${STARROCKS_HOME}/lib/hadoop/common/*:${STARROCKS_HOME}/lib/hadoop/common/lib/*:${STARROCKS_HOME}/lib/hadoop/hdfs/*:${STARROCKS_HOME}/lib/hadoop/hdfs/lib/*
if [ -z "${HADOOP_USER_NAME}" ]
then
    if [ -z "${USER}" ]
    then
        export HADOOP_USER_NAME=$(id -u -n)
    else
        export HADOOP_USER_NAME=${USER}
    fi
fi
# the purpose is to use local hadoop configuration first.
# under HADOOP_CONF_DIR(eg. /etc/ecm/hadoop-conf), there are hadoop/hdfs/hbase conf files.
# and by putting HADOOP_CONF_DIR at front of HADOOP_CLASSPATH, local hadoop conf file will be searched & used first.
# local hadoop configuration is usually well-tailored and optimized, we'd better leverage that.
# for example, if local hdfs has enabled short-circuit read, then we can use short-circuit read and save io time
if [ ${HADOOP_CONF_DIR}"X" != "X" ]; then
    export HADOOP_CLASSPATH=${HADOOP_CONF_DIR}:${HADOOP_CLASSPATH}
fi
EOF
./conf/core-site.xml
cat > ./conf/core-site.xml <<EOF
<configuration>
  <property>
      <name>fs.s3.impl</name>
      <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
   </property>
</configuration>
EOF
FE node configuration

Directories
mkdir -pv /data/starrocks/fe/conf
cd /data/starrocks/fe
./docker-compose.yaml
cat > ./docker-compose.yaml <<EOF
version: '3.7'
services:
  fe:
    image: starrocks/fe-ubuntu:2.5.21
    container_name: fe
    restart: always
    network_mode: host
    command:
      /opt/starrocks/fe/bin/start_fe.sh
    volumes:
      - ./conf/fe.conf:/opt/starrocks/fe/conf/fe.conf
      - ./meta:/opt/starrocks/fe/meta
      - ./log:/opt/starrocks/fe/log
      - /etc/localtime:/etc/localtime
    healthcheck:
      test: ["CMD-SHELL","curl -s -w '%{http_code}' -o /dev/null http://127.0.0.1:8030/api/bootstrap || bash -c 'kill -s 15 -1 && (sleep 10; kill -s 9 -1)'"]
      interval: 30s
      timeout: 20s
      retries: 3
      start_period: 3m
    logging:
      driver: "json-file"
      options:
        tag: "{{.Name}}"
        max-size: "10m"
EOF
./conf/fe.conf


  • priority_networks: set to the host machine's subnet.
  • JAVA_OPTS_FOR_JDK_9: size the JVM heap to the host; the value below assumes a host with 32 GB of total memory.
   For the remaining options, see: https://docs.starrocks.io/zh/docs/2.5/administration/Configuration/#%E9%85%8D%E7%BD%AE-fe-%E9%9D%99%E6%80%81%E5%8F%82%E6%95%B0
cat > ./conf/fe.conf <<'EOF'
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
#####################################################################
## The uppercase properties are read and exported by bin/start_fe.sh.
## To see all Frontend configurations,
## see fe/src/com/starrocks/common/Config.java
# the output dir of stderr and stdout
LOG_DIR = ${STARROCKS_HOME}/log
#JAVA_HOME=/usr/local/jdk
DATE = "$(date +%Y%m%d-%H%M%S)"
JAVA_OPTS="-Dlog4j2.formatMsgNoLookups=true -Xmx8192m -XX:+UseMembar -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=7 -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -XX:-CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -Xloggc:$STARROCKS_HOME/log/fe.gc.log.$DATE"
# For jdk 9+, this JAVA_OPTS will be used as default JVM options
#JAVA_OPTS_FOR_JDK_9="-Dlog4j2.formatMsgNoLookups=true -Duser.timezone=GMT+8 -Xmx8g -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=7 -XX:+CMSClassUnloadingEnabled -XX:-CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -Xlog:gc*:$STARROCKS_HOME/log/fe.gc.log.$DATE:time"
JAVA_OPTS_FOR_JDK_9="-Dlog4j2.formatMsgNoLookups=true -Duser.timezone=GMT+8 -Xmx8g -XX:+UseG1GC -Xlog:gc*:$STARROCKS_HOME/log/fe.gc.log.$DATE:time"
##
## the lowercase properties are read by main program.
##
# INFO, WARN, ERROR, FATAL
sys_log_level = INFO
# store metadata, create it if it does not exist.
# Default value is ${STARROCKS_HOME}/meta
# meta_dir = ${STARROCKS_HOME}/meta
http_port = 8030
rpc_port = 9020
query_port = 9030
edit_log_port = 9010
mysql_service_nio_enabled = true
# Choose one if there are more than one ip except loopback address.
# Note that there should at most one ip match this list.
# If no ip match this rule, will choose one randomly.
# use CIDR format, e.g. 10.10.10.0/24
# Default value is empty.
# priority_networks = 10.10.10.0/24;192.168.0.0/16
# Declares a selection policy for servers that have multiple IP addresses.
# Note that at most one IP address should match this list. It is a semicolon-separated
# list in CIDR notation, e.g. 10.10.10.0/24.
# If no IP matches this rule, one is chosen at random.
priority_networks = 10.101.1.0/24
# Advanced configurations
# log_roll_size_mb = 1024
# sys_log_dir = ${STARROCKS_HOME}/log
# sys_log_roll_num = 10
# sys_log_verbose_modules =
# audit_log_dir = ${STARROCKS_HOME}/log
# audit_log_modules = slow_query, query
# audit_log_roll_num = 10
# meta_delay_toleration_second = 10
# qe_max_connection = 1024
# max_conn_per_user = 100
# qe_query_timeout_second = 300
# qe_slow_log_ms = 5000
max_create_table_timeout_second = 120
report_queue_size = 2048
max_routine_load_task_num_per_be = 40
enable_collect_query_detail_info = true
enable_udf = true
EOF
./conf/core-site.xml
cat > ./conf/core-site.xml <<EOF
<configuration>
  <property>
      <name>fs.s3.impl</name>
      <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
   </property>
</configuration>
EOF
./conf/hadoop_env.sh
cat > ./conf/hadoop_env.sh <<'EOF'
# This file is licensed under the Elastic License 2.0. Copyright 2021-present, StarRocks Inc.
export HADOOP_CLASSPATH=${STARROCKS_HOME}/lib/hadoop/common/*:${STARROCKS_HOME}/lib/hadoop/common/lib/*:${STARROCKS_HOME}/lib/hadoop/hdfs/*:${STARROCKS_HOME}/lib/hadoop/hdfs/lib/*
if [ -z "${HADOOP_USER_NAME}" ]
then
    if [ -z "${USER}" ]
    then
        export HADOOP_USER_NAME=$(id -u -n)
    else
        export HADOOP_USER_NAME=${USER}
    fi
fi
# the purpose is to use local hadoop configuration first.
# under HADOOP_CONF_DIR(eg. /etc/ecm/hadoop-conf), there are hadoop/hdfs/hbase conf files.
# and by putting HADOOP_CONF_DIR at front of HADOOP_CLASSPATH, local hadoop conf file will be searched & used first.
# local hadoop configuration is usually well-tailored and optimized, we'd better leverage that.
# for example, if local hdfs has enabled short-circuit read, then we can use short-circuit read and save io time
if [ ${HADOOP_CONF_DIR}"X" != "X" ]; then
    export HADOOP_CLASSPATH=${HADOOP_CONF_DIR}:${HADOOP_CLASSPATH}
fi
EOF
Starting the services

Reference: https://docs.starrocks.io/zh/docs/2.5/deployment/deploy_manually/
Start the Leader FE node
# Log in to one of the FE servers; any one will do, a single node is enough
cd /data/starrocks/fe
docker-compose up -d
# Check whether the FE node started successfully
docker ps
cat ./log/fe.log | grep thrift
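The FE can also be probed over HTTP with the same endpoint the compose healthcheck uses (8030 is the http_port from fe.conf); an HTTP 200 here is a quick sanity check:
curl -s -w '%{http_code}\n' -o /dev/null http://127.0.0.1:8030/api/bootstrap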
Start the BE nodes
After at least 3 BE nodes have been deployed and added to a StarRocks cluster, they automatically form a highly available BE cluster.
# Start all 3 BE nodes
cd /data/starrocks/be
docker-compose up -d
# Check whether the BE nodes started successfully
docker ps
cat ./log/be.INFO | grep heartbeat
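Likewise, each BE exposes the health endpoint used by its compose healthcheck (8040 is webserver_port from be.conf):
curl -s http://127.0.0.1:8040/api/health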
Adding the BE nodes

Connect to the Leader FE
docker exec -it fe mysql -h 127.0.0.1 -P9030 -uroot
Check node status
SHOW PROC '/frontends'\G


  • If the Alive field is true, the FE node started successfully and has joined the cluster.
  • If the Role field is FOLLOWER, the FE node is eligible to be elected Leader FE.
  • If the Role field is LEADER, the FE node is the current Leader FE.
Add the BE nodes to the cluster
-- Replace <be_address> with each BE node's IP address (priority_networks) or FQDN,
-- and replace <heartbeat_service_port> (default: 9050) with the heartbeat_service_port you set in be.conf.
ALTER SYSTEM ADD BACKEND "<be_address>:<heartbeat_service_port>", "<be2_address>:<heartbeat_service_port>", "<be3_address>:<heartbeat_service_port>";
-- Check BE node status
SHOW PROC '/backends'\G


  • If the Alive field is true, the BE node started successfully and has joined the cluster.
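For instance, with three hypothetical BE hosts at 10.101.1.4 through 10.101.1.6 on the subnet used throughout this post, the statement would read:
-- hypothetical addresses; substitute your actual BE IPs
ALTER SYSTEM ADD BACKEND "10.101.1.4:9050", "10.101.1.5:9050", "10.101.1.6:9050";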
Adding the FE nodes

Log in to the two servers other than the Leader FE
When adding a new Follower FE node to the cluster, you must assign it a helper node (essentially an existing Follower FE node) the first time it starts, so it can synchronize all FE metadata.
cd /data/starrocks/fe
# Do not use docker-compose yet; start the container manually first
docker run --rm \
  --network host \
  --privileged=true -it \
  -v /data/starrocks/fe/log:/opt/starrocks/fe/log \
  -v /data/starrocks/fe/meta:/opt/starrocks/fe/meta \
  -v /data/starrocks/fe/conf:/opt/starrocks/fe/conf \
  starrocks/fe-ubuntu:2.5.21 bash
# Now inside the container. For the helper IP, use the Leader FE's address (see SHOW PROC '/frontends'\G)
/opt/starrocks/fe/bin/start_fe.sh --helper 10.101.1.1:9010 --daemon
# Check the FE log to verify the node started. Run the ADD FOLLOWER step below first, then come back and check:
cat fe/log/fe.log | grep thrift
Leader FE node
# [Leader FE] connect to the cluster and add the node with that IP
ALTER SYSTEM ADD FOLLOWER "10.101.1.2:9010";
# [Leader FE] to remove a node from the cluster:
# ALTER SYSTEM DROP FOLLOWER "10.101.1.2:9010";
# [Leader FE] check node status; the Join and Alive fields should be true
SHOW PROC '/frontends'\G
Follower FE node
# Exit the container on the follower node and start it with docker-compose; repeat the same steps on the other node to join it to the cluster
docker-compose up -d
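Once each follower is back up under docker-compose, membership can be re-verified from the Leader FE:
docker exec -it fe mysql -h 127.0.0.1 -P9030 -uroot -e "SHOW PROC '/frontends'\G"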
3. Monitoring (to be completed)
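Until this section is filled in, one common approach: FE and BE expose Prometheus-format metrics at /metrics on their HTTP ports (8030 and 8040 in the configs above). A minimal scrape-config sketch, assuming hypothetical node IPs on the 10.101.1.0/24 subnet (adjust the targets to your hosts):
# prometheus.yml fragment (sketch; the IPs are placeholders)
scrape_configs:
  - job_name: 'starrocks'
    metrics_path: '/metrics'
    static_configs:
      - targets: ['10.101.1.1:8030', '10.101.1.2:8030', '10.101.1.3:8030']  # FE http_port
      - targets: ['10.101.1.4:8040', '10.101.1.5:8040', '10.101.1.6:8040']  # BE webserver_port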


4. VIP + Nginx + Keepalived (to be completed)


keepalived.service
systemctl status keepalived.service
/etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
     bnd@bndxqc.com.cn
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL_1
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 5
    weight   -15
}
vrrp_instance VI_NGINX {
    state MASTER
    interface ens192
    virtual_router_id 157
    mcast_src_ip 10.101.1.1
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass aaa123456
    }
    virtual_ipaddress {
        10.101.1.7/24
    }
    track_script {
        chk_nginx
    }
}
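The vrrp_script block above references /etc/keepalived/check_nginx.sh, which is not shown here. A minimal sketch of such a script (an assumption; adapt to how Nginx runs on these hosts), exiting non-zero when nginx is absent so keepalived applies the weight -15 penalty:
cat > /etc/keepalived/check_nginx.sh <<'EOF'
#!/bin/bash
# Fail the VRRP track script when no nginx process is running
if ! pgrep -x nginx > /dev/null; then
    exit 1
fi
exit 0
EOF
chmod +x /etc/keepalived/check_nginx.sh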
5. StarRocks v3.2 [storage and compute separated; 1 FE, 1 CN]

Shared-data StarRocks 3.2
mkdir -p /data/starrocks
cd /data/starrocks
cat > docker-compose.yml <<- 'EOF'
version: "3"
services:
  minio:
    container_name: starrocks-minio
    image: minio/minio:latest
    environment:
      MINIO_ROOT_USER: miniouser
      MINIO_ROOT_PASSWORD: miniopassword
    volumes:
      - ./minio/data:/minio_data
    ports:
      - "9001:9001"
      - "9000:9000"
    entrypoint: sh
    command: '-c ''mkdir -p /minio_data/starrocks && minio server /minio_data --console-address ":9001"'''
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      network:
        ipv4_address: 10.5.0.6
  minio_mc:
    # This service is short lived, it does this:
    # - starts up
    # - checks to see if the MinIO service `minio` is ready
    # - creates a MinIO Access Key that the StarRocks services will use
    # - exits
    image: minio/mc:latest
    entrypoint:
      - sh
      - -c
      - |
        until mc ls minio > /dev/null 2>&1; do
          sleep 0.5
        done
        # Set the alias
        mc alias set myminio http://minio:9000 miniouser miniopassword

        # Create the service account
        mc admin user svcacct add --access-key AAAAAAAAAAAAAAAAAAAA \
        --secret-key BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB \
        myminio \
        miniouser

        # Update the service account: set expiry to one year from now (adjust as needed)
        EXPIRE_DATE=$(date -Iseconds -d '+1 year')
        mc admin user svcacct update myminio --access-key=AAAAAAAAAAAAAAAAAAAA --expire="${EXPIRE_DATE}Z"
    depends_on:
      - minio
    networks:
      network:
        ipv4_address: 10.5.0.7

  starrocks-fe:
    image: starrocks/fe-ubuntu:3.1-latest
    hostname: starrocks-fe
    container_name: starrocks-fe
    user: root
    volumes:
      - ./starrocks/fe/meta:/opt/starrocks/fe/meta
      - ./starrocks/fe/log:/opt/starrocks/fe/log
    command: >
      bash -c "echo run_mode=shared_data >> /opt/starrocks/fe/conf/fe.conf &&
      echo aws_s3_path=starrocks >> /opt/starrocks/fe/conf/fe.conf &&
      echo aws_s3_endpoint=minio:9000 >> /opt/starrocks/fe/conf/fe.conf &&
      echo aws_s3_access_key=AAAAAAAAAAAAAAAAAAAA >> /opt/starrocks/fe/conf/fe.conf &&
      echo aws_s3_secret_key=BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB >> /opt/starrocks/fe/conf/fe.conf &&
      echo aws_s3_use_instance_profile=false >> /opt/starrocks/fe/conf/fe.conf &&
      echo cloud_native_storage_type=S3 >> /opt/starrocks/fe/conf/fe.conf &&
      echo aws_s3_use_aws_sdk_default_behavior=true >> /opt/starrocks/fe/conf/fe.conf &&
      bash /opt/starrocks/fe/bin/start_fe.sh"
    ports:
      - 8030:8030
      - 9020:9020
      - 9030:9030
    healthcheck:
      test: 'mysql -uroot -h10.5.0.2 -P 9030 -e "show frontends\G" |grep "Alive: true"'
      interval: 10s
      timeout: 5s
      retries: 3
    depends_on:
      - minio
    networks:
      network:
        ipv4_address: 10.5.0.2
  starrocks-cn:
    image: starrocks/cn-ubuntu:3.1-latest
    command:
      - /bin/bash
      - -c
      - |
        sleep 15s;
        mysql --connect-timeout 2 -h starrocks-fe -P9030 -uroot -e 'ALTER SYSTEM ADD COMPUTE NODE "starrocks-cn:9050";'
        /opt/starrocks/cn/bin/start_cn.sh
    ports:
      - 8040:8040
    hostname: starrocks-cn
    container_name: starrocks-cn
    user: root
    volumes:
      - ./starrocks/cn/storage:/opt/starrocks/cn/storage
      - ./starrocks/cn/log:/opt/starrocks/cn/log
    depends_on:
      - starrocks-fe
      - minio
    healthcheck:
      test: 'mysql -uroot -h10.5.0.2 -P 9030 -e "SHOW COMPUTE NODES\G" |grep "Alive: true"'
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      network:
        ipv4_address: 10.5.0.3
networks:
  network:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
          gateway: 10.5.0.1
EOF
docker-compose up -d
docker ps
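After the stack is up, a quick check that the CN registered with the FE (container names as defined in the compose file above):
docker exec -it starrocks-fe mysql -h127.0.0.1 -P9030 -uroot -e 'SHOW COMPUTE NODES\G'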