1. Environment Preparation
Reference: https://docs.starrocks.io/zh/docs/2.5/deployment/environment_configurations/
Install Docker and docker-compose
Docker
- # Remove any old Docker versions first
- yum remove docker \
- docker-common \
- docker-selinux \
- docker-engine
- # Set up the yum repository
- yum install -y yum-utils \
- device-mapper-persistent-data \
- lvm2
- yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
- # Install Docker
- sudo yum install -y docker-ce docker-ce-cli containerd.io
- # Docker daemon configuration
- cat > /etc/docker/daemon.json <<EOF
- {
- "data-root": "/data/docker",
- "storage-driver": "overlay2",
- "exec-opts": ["native.cgroupdriver=systemd"],
- "live-restore": true,
- "registry-mirrors": [
- "https://docker.rainbond.cc",
- "https://docker.m.daocloud.io",
- "https://noohub.ru",
- "https://huecker.io",
- "https://dockerhub.timeweb.cloud",
- "https://3md2h0z0.mirror.aliyuncs.com",
- "https://registry.docker-cn.com",
- "http://hub-mirror.c.163.com",
- "https://mirror.ccs.tencentyun.com",
- "https://docker.mirrors.ustc.edu.cn",
- "http://f1361db2.m.daocloud.io"
- ],
- "log-opts": {"max-size":"500m", "max-file":"3"},
- "log-driver": "json-file"
- }
- EOF
- # Start Docker and enable it at boot
- systemctl start docker
- systemctl enable docker
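Once the daemon is up, it is worth confirming that daemon.json was actually picked up; a minimal check (the grep patterns are illustrative, matching docker info's usual field names):
- docker info | grep -E 'Docker Root Dir|Storage Driver|Cgroup Driver'
- # Expected: Docker Root Dir /data/docker, Storage Driver overlay2, Cgroup Driver systemd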
docker-compose
- wget https://github.com/docker/compose/releases/download/v2.15.1/docker-compose-linux-x86_64 -O /usr/local/bin/docker-compose
- sudo chmod +x /usr/local/bin/docker-compose
- docker-compose --version
Operating system configuration [CentOS Linux 7 (Core)]
Disable the firewall
- systemctl stop firewalld.service
- systemctl disable firewalld.service
Disable SELinux
- setenforce 0
- sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
- sed -i 's/SELINUXTYPE/#SELINUXTYPE/' /etc/selinux/config
Memory settings
- cat >> /etc/sysctl.conf << EOF
- vm.overcommit_memory=1
- EOF
- sysctl -p
High-concurrency settings
- cat >> /etc/sysctl.conf << EOF
- vm.max_map_count = 262144
- EOF
- sysctl -p
- echo 120000 > /proc/sys/kernel/threads-max
- echo 200000 > /proc/sys/kernel/pid_max
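Note that writing to /proc takes effect immediately but does not survive a reboot; to persist the two values above, the same sysctl.conf approach used earlier applies (standard kernel sysctl key names):
- cat >> /etc/sysctl.conf << EOF
- kernel.threads-max = 120000
- kernel.pid_max = 200000
- EOF
- sysctl -p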
2. StarRocks v2.5 [shared-nothing, 3 FE, 3 BE]
Reference: https://docs.starrocks.io/zh/docs/2.5/deployment/prepare_deployment_files/
Prepare the configuration files and pull the images on every node first. (The last v2.5 release is 2.5.22, but it has no official Docker image, so 2.5.21 is used.)
- BE: docker pull starrocks/be-ubuntu:2.5.21
- FE: docker pull starrocks/fe-ubuntu:2.5.21
BE node configuration
Directories
- mkdir -pv /data/starrocks/be/conf
- cd /data/starrocks/be
./docker-compose.yaml
- cat > ./docker-compose.yaml <<EOF
- version: '3.7'
- services:
- be:
- image: starrocks/be-ubuntu:2.5.21
- container_name: be
- restart: always
- network_mode: host
- command:
- /opt/starrocks/be/bin/start_be.sh
- volumes:
- - ./conf/be.conf:/opt/starrocks/be/conf/be.conf
- - ./storage:/opt/starrocks/be/storage
- - ./log:/opt/starrocks/be/log
- - /etc/localtime:/etc/localtime
- healthcheck:
- test: ["CMD-SHELL","curl -s -w '%{http_code}' -o /dev/null http://127.0.0.1:8040/api/health || bash -c 'kill -s 15 -1 && (sleep 10; kill -s 9 -1)'"]
- interval: 30s
- timeout: 20s
- retries: 3
- start_period: 3m
- logging:
- driver: "json-file"
- options:
- tag: "{{.Name}}"
- max-size: "10m"
- EOF
./conf/be.conf
- Change priority_networks to the host's subnet
For the remaining options, see: https://docs.starrocks.io/zh/docs/2.5/administration/Configuration/#%E9%85%8D%E7%BD%AE-be-%E9%9D%99%E6%80%81%E5%8F%82%E6%95%B0
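If you are unsure which CIDR to use, list each interface's address and derive the network from it; a quick sketch (the example address is hypothetical):
- ip -o -4 addr show | awk '{print $2, $4}'
- # e.g. "ens192 10.101.1.1/24"  ->  priority_networks = 10.101.1.0/24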
- cat > ./conf/be.conf <<'EOF'  # quote EOF so ${STARROCKS_HOME} in the comments is not expanded by the host shell
- # Licensed to the Apache Software Foundation (ASF) under one
- # or more contributor license agreements. See the NOTICE file
- # distributed with this work for additional information
- # regarding copyright ownership. The ASF licenses this file
- # to you under the Apache License, Version 2.0 (the
- # "License"); you may not use this file except in compliance
- # with the License. You may obtain a copy of the License at
- #
- # http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing,
- # software distributed under the License is distributed on an
- # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- # KIND, either express or implied. See the License for the
- # specific language governing permissions and limitations
- # under the License.
- # INFO, WARNING, ERROR, FATAL
- sys_log_level = INFO
- #JAVA_HOME=/usr/local/jdk
- # ports for admin, web, heartbeat service
- be_port = 9060
- webserver_port = 8040
- heartbeat_service_port = 9050
- brpc_port = 8060
- # Choose one if there are more than one ip except loopback address.
- # Note that at most one ip should match this list.
- # If no ip match this rule, will choose one randomly.
- # use CIDR format, e.g. 10.10.10.0/24
- # Default value is empty.
- # Specify the BE IP in CIDR form (e.g. 10.10.10.0/24), for machines with multiple IPs where a preferred network must be chosen.
- priority_networks = 10.101.1.0/24
- # data root path, separate by ';'
- # you can specify the storage medium of each root path, HDD or SSD, separated by ','
- # eg:
- # storage_root_path = /data1,medium:HDD;/data2,medium:SSD;/data3
- # /data1, HDD;
- # /data2, SSD;
- # /data3, HDD(default);
- #
- # Default value is ${STARROCKS_HOME}/storage, you should create it by hand.
- # storage_root_path = ${STARROCKS_HOME}/storage
- # Advanced configurations
- # sys_log_dir = ${STARROCKS_HOME}/log
- # sys_log_roll_mode = SIZE-MB-1024
- # sys_log_roll_num = 10
- # sys_log_verbose_modules = *
- # log_buffer_level = -1
- default_rowset_type = beta
- cumulative_compaction_num_threads_per_disk = 4
- base_compaction_num_threads_per_disk = 2
- cumulative_compaction_check_interval_seconds = 2
- routine_load_thread_pool_size = 40
- cumulative_compaction_budgeted_bytes=314572800
- brpc_max_body_size = 8589934592
- trash_file_expire_time_sec=600
- mem_limit = 90%
- pipeline_max_num_drivers_per_exec_thread=102400
- disable_storage_page_cache = true
- #disable_column_pool=true
- #chunk_reserved_bytes_limit=100000000
- EOF
./conf/log4j.properties
- cat > ./conf/log4j.properties <<EOF
- # log configuration for jars called via JNI in BE
- # There are almost no logs other than the jdbc bridge for now, so output to stdout is enough.
- # If necessary, we can add special log files later
- log4j.rootLogger=ERROR, stdout
- log4j.appender.stdout=org.apache.log4j.ConsoleAppender
- log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
- log4j.appender.stdout.layout.ConversionPattern=%5p [%t] (%F:%L) - %m%n
- EOF
./conf/hadoop_env.sh
- cat > ./conf/hadoop_env.sh <<'EOF'  # quote EOF so ${STARROCKS_HOME} etc. stay literal and expand inside the container
- # This file is licensed under the Elastic License 2.0. Copyright 2021-present, StarRocks Inc.
- export HADOOP_CLASSPATH=${STARROCKS_HOME}/lib/hadoop/common/*:${STARROCKS_HOME}/lib/hadoop/common/lib/*:${STARROCKS_HOME}/lib/hadoop/hdfs/*:${STARROCKS_HOME}/lib/hadoop/hdfs/lib/*
- if [ -z "${HADOOP_USER_NAME}" ]
- then
- if [ -z "${USER}" ]
- then
- export HADOOP_USER_NAME=$(id -u -n)
- else
- export HADOOP_USER_NAME=${USER}
- fi
- fi
- # the purpose is to use local hadoop configuration first.
- # under HADOOP_CONF_DIR(eg. /etc/ecm/hadoop-conf), there are hadoop/hdfs/hbase conf files.
- # and by putting HADOOP_CONF_DIR at front of HADOOP_CLASSPATH, local hadoop conf file will be searched & used first.
- # local hadoop configuration is usually well-tailored and optimized, so we'd better leverage it.
- # for example, if local hdfs has enabled short-circuit read, then we can use short-circuit read and save io time
- if [ ${HADOOP_CONF_DIR}"X" != "X" ]; then
- export HADOOP_CLASSPATH=${HADOOP_CONF_DIR}:${HADOOP_CLASSPATH}
- fi
- EOF
./conf/core-site.xml
- cat > ./conf/core-site.xml <<EOF
- <configuration>
- <property>
- <name>fs.s3.impl</name>
- <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
- </property>
- </configuration>
- EOF
FE node configuration
Directories
- mkdir -pv /data/starrocks/fe/conf
- cd /data/starrocks/fe
./docker-compose.yaml
- cat > ./docker-compose.yaml <<EOF
- version: '3.7'
- services:
- fe:
- image: starrocks/fe-ubuntu:2.5.21
- container_name: fe
- restart: always
- network_mode: host
- command:
- /opt/starrocks/fe/bin/start_fe.sh
- volumes:
- - ./conf/fe.conf:/opt/starrocks/fe/conf/fe.conf
- - ./meta:/opt/starrocks/fe/meta
- - ./log:/opt/starrocks/fe/log
- - /etc/localtime:/etc/localtime
- healthcheck:
- test: ["CMD-SHELL","curl -s -w '%{http_code}' -o /dev/null http://127.0.0.1:8030/api/bootstrap || bash -c 'kill -s 15 -1 && (sleep 10; kill -s 9 -1)'"]
- interval: 30s
- timeout: 20s
- retries: 3
- start_period: 3m
- logging:
- driver: "json-file"
- options:
- tag: "{{.Name}}"
- max-size: "10m"
- EOF
./conf/fe.conf
- priority_networks: change to the host's subnet
- JAVA_OPTS_FOR_JDK_9: size the JVM heap to the host; this config assumes a host with 32 GB of total memory
For the remaining options, see: https://docs.starrocks.io/zh/docs/2.5/administration/Configuration/#%E9%85%8D%E7%BD%AE-fe-%E9%9D%99%E6%80%81%E5%8F%82%E6%95%B0
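As a rough sizing aid before writing fe.conf (the divide-by-four heuristic is an assumption that matches the 8 GB heap on a 32 GB host used below; tune it to your workload):
- awk '/MemTotal/ {printf "RAM %.0f GB -> e.g. -Xmx%.0fg\n", $2/1048576, $2/1048576/4}' /proc/meminfo
- # On a 32 GB host this prints "RAM 32 GB -> e.g. -Xmx8g"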
- cat > ./conf/fe.conf <<'EOF'  # quote EOF so $DATE and $STARROCKS_HOME expand at FE startup, not when writing the file
- # Licensed to the Apache Software Foundation (ASF) under one
- # or more contributor license agreements. See the NOTICE file
- # distributed with this work for additional information
- # regarding copyright ownership. The ASF licenses this file
- # to you under the Apache License, Version 2.0 (the
- # "License"); you may not use this file except in compliance
- # with the License. You may obtain a copy of the License at
- #
- # http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing,
- # software distributed under the License is distributed on an
- # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- # KIND, either express or implied. See the License for the
- # specific language governing permissions and limitations
- # under the License.
- #####################################################################
- ## The uppercase properties are read and exported by bin/start_fe.sh.
- ## To see all Frontend configurations,
- ## see fe/src/com/starrocks/common/Config.java
- # the output dir of stderr and stdout
- LOG_DIR = ${STARROCKS_HOME}/log
- #JAVA_HOME=/usr/local/jdk
- DATE = "$(date +%Y%m%d-%H%M%S)"
- JAVA_OPTS="-Dlog4j2.formatMsgNoLookups=true -Xmx8192m -XX:+UseMembar -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=7 -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -XX:-CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -Xloggc:$STARROCKS_HOME/log/fe.gc.log.$DATE"
- # For jdk 9+, this JAVA_OPTS will be used as default JVM options
- #JAVA_OPTS_FOR_JDK_9="-Dlog4j2.formatMsgNoLookups=true -Duser.timezone=GMT+8 -Xmx8g -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=7 -XX:+CMSClassUnloadingEnabled -XX:-CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -Xlog:gc*:$STARROCKS_HOME/log/fe.gc.log.$DATE:time"
- JAVA_OPTS_FOR_JDK_9="-Dlog4j2.formatMsgNoLookups=true -Duser.timezone=GMT+8 -Xmx8g -XX:+UseG1GC -Xlog:gc*:$STARROCKS_HOME/log/fe.gc.log.$DATE:time"
- ##
- ## the lowercase properties are read by main program.
- ##
- # INFO, WARN, ERROR, FATAL
- sys_log_level = INFO
- # store metadata; create it if it does not exist.
- # Default value is ${STARROCKS_HOME}/meta
- # meta_dir = ${STARROCKS_HOME}/meta
- http_port = 8030
- rpc_port = 9020
- query_port = 9030
- edit_log_port = 9010
- mysql_service_nio_enabled = true
- # Choose one if there are more than one ip except loopback address.
- # Note that at most one ip should match this list.
- # If no ip match this rule, will choose one randomly.
- # use CIDR format, e.g. 10.10.10.0/24
- # Default value is empty.
- # priority_networks = 10.10.10.0/24;192.168.0.0/16
- # Declares a selection policy for servers that have multiple IP addresses.
- # Note that at most one IP address should match this list. It is a semicolon-separated list in CIDR notation, e.g. 10.10.10.0/24.
- # If no IP matches this rule, one is chosen at random.
- priority_networks = 10.101.1.0/24
- # Advanced configurations
- # log_roll_size_mb = 1024
- # sys_log_dir = ${STARROCKS_HOME}/log
- # sys_log_roll_num = 10
- # sys_log_verbose_modules =
- # audit_log_dir = ${STARROCKS_HOME}/log
- # audit_log_modules = slow_query, query
- # audit_log_roll_num = 10
- # meta_delay_toleration_second = 10
- # qe_max_connection = 1024
- # max_conn_per_user = 100
- # qe_query_timeout_second = 300
- # qe_slow_log_ms = 5000
- max_create_table_timeout_second = 120
- report_queue_size = 2048
- max_routine_load_task_num_per_be = 40
- enable_collect_query_detail_info = true
- enable_udf = true
- EOF
./conf/core-site.xml
- cat > ./conf/core-site.xml <<EOF
- <configuration>
- <property>
- <name>fs.s3.impl</name>
- <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
- </property>
- </configuration>
- EOF
./conf/hadoop_env.sh
- cat > ./conf/hadoop_env.sh <<'EOF'  # quote EOF so ${STARROCKS_HOME} etc. stay literal and expand inside the container
- # This file is licensed under the Elastic License 2.0. Copyright 2021-present, StarRocks Inc.
- export HADOOP_CLASSPATH=${STARROCKS_HOME}/lib/hadoop/common/*:${STARROCKS_HOME}/lib/hadoop/common/lib/*:${STARROCKS_HOME}/lib/hadoop/hdfs/*:${STARROCKS_HOME}/lib/hadoop/hdfs/lib/*
- if [ -z "${HADOOP_USER_NAME}" ]
- then
- if [ -z "${USER}" ]
- then
- export HADOOP_USER_NAME=$(id -u -n)
- else
- export HADOOP_USER_NAME=${USER}
- fi
- fi
- # the purpose is to use local hadoop configuration first.
- # under HADOOP_CONF_DIR(eg. /etc/ecm/hadoop-conf), there are hadoop/hdfs/hbase conf files.
- # and by putting HADOOP_CONF_DIR at front of HADOOP_CLASSPATH, local hadoop conf file will be searched & used first.
- # local hadoop configuration is usually well-tailored and optimized, so we'd better leverage it.
- # for example, if local hdfs has enabled short-circuit read, then we can use short-circuit read and save io time
- if [ ${HADOOP_CONF_DIR}"X" != "X" ]; then
- export HADOOP_CLASSPATH=${HADOOP_CONF_DIR}:${HADOOP_CLASSPATH}
- fi
- EOF
Starting the services
Reference: https://docs.starrocks.io/zh/docs/2.5/deployment/deploy_manually/
Start the Leader FE node
- # Log in to one of the FE servers; any single one will do
- cd /data/starrocks/fe
- docker-compose up -d
- # Check whether the FE node started successfully
- docker ps
- cat ./log/fe.log | grep thrift
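A quick HTTP probe of the FE, mirroring the compose healthcheck above (a 200 status is assumed to indicate a healthy node):
- curl -s -w '%{http_code}\n' -o /dev/null http://127.0.0.1:8030/api/bootstrap
- # 200 means the FE web service is answering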
Start the BE nodes
Once at least 3 BE nodes have been deployed and added to a StarRocks cluster, they automatically form a highly available BE cluster.
- # Start all 3 BE nodes
- cd /data/starrocks/be
- docker-compose up -d
- # Check whether the BE node started successfully
- docker ps
- cat ./log/be.INFO | grep heartbeat
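Likewise, each BE can be probed over HTTP, mirroring its compose healthcheck (again assuming 200 means healthy):
- curl -s -w '%{http_code}\n' -o /dev/null http://127.0.0.1:8040/api/health
- # 200 means the BE web service is answering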
Adding the BE nodes
Connect to the Leader FE
- docker exec -it fe mysql -h 127.0.0.1 -P9030 -uroot
Check node status
- If the Alive field is true, the FE node started correctly and joined the cluster.
- If the Role field is FOLLOWER, the FE node is eligible to be elected as the Leader FE.
- If the Role field is LEADER, the FE node is the current Leader FE.
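These fields come from the frontends proc table; in the mysql session opened above, run:
- SHOW PROC '/frontends'\G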
Add the BE nodes to the cluster
- -- Replace <be_address> with each BE node's IP address (priority_networks) or FQDN,
- -- and replace <heartbeat_service_port> (default: 9050) with the heartbeat_service_port set in be.conf.
- ALTER SYSTEM ADD BACKEND "<be_address>:<heartbeat_service_port>", "<be2_address>:<heartbeat_service_port>", "<be3_address>:<heartbeat_service_port>";
- -- Check BE node status
- SHOW PROC '/backends'\G
- If the Alive field is true, the BE node started correctly and joined the cluster.
Adding the FE nodes
Log in to the other two servers (not the Leader FE node)
When adding a new Follower FE node to the cluster, you must assign it a helper node (essentially an existing Follower FE node) on its first startup so it can sync all FE metadata.
- cd /data/starrocks/fe
- # Start with docker run first, not docker-compose
- docker run --rm \
-   --network host \
-   --privileged=true -it \
-   -v /data/starrocks/fe/log:/opt/starrocks/fe/log \
-   -v /data/starrocks/fe/meta:/opt/starrocks/fe/meta \
-   -v /data/starrocks/fe/conf:/opt/starrocks/fe/conf \
-   starrocks/fe-ubuntu:2.5.21 bash
- # Now inside the container.
- # For the helper address, use the Leader FE's IP as shown by SHOW PROC '/frontends'\G
- /opt/starrocks/fe/bin/start_fe.sh --helper 10.101.1.1:9010 --daemon
- # To verify this FE started, check the log; run the ADD FOLLOWER step below first, then come back:
- cat fe/log/fe.log | grep thrift
On the Leader FE node
- # [Leader FE] In the mysql session, add the new node by IP
- ALTER SYSTEM ADD FOLLOWER "10.101.1.2:9010";
- # [Leader FE] To remove a node from the cluster:
- # ALTER SYSTEM DROP FOLLOWER "10.101.1.2:9010";
- # [Leader FE] Check node status: the Join and Alive fields should both be true
- SHOW PROC '/frontends'\G
On the Follower FE nodes
- # On the follower, exit the container and start via docker-compose; do the same on the other node to join it to the cluster
- docker-compose up -d
3. Monitoring (TODO)
…
4. VIP + Nginx + Keepalived (TODO)
…
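The intent of this section is to put a VIP in front of the FE query port. A minimal Nginx TCP load-balancing sketch (the stream block, the third FE IP, and the 19030 listen port are assumptions; 19030 avoids clashing with a local FE on 9030):
- # /etc/nginx/nginx.conf -- stream context, alongside (not inside) the http block
- stream {
-     upstream starrocks_fe {
-         server 10.101.1.1:9030;
-         server 10.101.1.2:9030;
-         server 10.101.1.3:9030;
-     }
-     server {
-         listen 19030;
-         proxy_pass starrocks_fe;
-     }
- }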
keepalived.service
- systemctl status keepalived.service
/etc/keepalived/keepalived.conf
- global_defs {
- notification_email {
- acassen@firewall.loc
- failover@firewall.loc
- sysadmin@firewall.loc
- bnd@bndxqc.com.cn
- }
- notification_email_from Alexandre.Cassen@firewall.loc
- smtp_server 127.0.0.1
- smtp_connect_timeout 30
- router_id LVS_DEVEL_1
- vrrp_skip_check_adv_addr
- #vrrp_strict
- vrrp_garp_interval 0
- vrrp_gna_interval 0
- }
- vrrp_script chk_nginx {
- script "/etc/keepalived/check_nginx.sh"
- interval 5
- weight -15
- }
- vrrp_instance VI_NGINX {
- state MASTER
- interface ens192
- virtual_router_id 157
- mcast_src_ip 10.101.1.1
- priority 120
- advert_int 1
- authentication {
- auth_type PASS
- auth_pass aaa123456
- }
- virtual_ipaddress {
- 10.101.1.7/24
- }
- track_script {
- chk_nginx
- }
- }
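The chk_nginx health script referenced above is not included in the original post; a minimal sketch of what /etc/keepalived/check_nginx.sh typically contains (the restart-then-recheck behavior is an assumption):
- #!/bin/bash
- # Exit non-zero when nginx is down so keepalived applies the weight -15 penalty.
- if ! pgrep -x nginx > /dev/null; then
-     systemctl restart nginx
-     sleep 2
-     pgrep -x nginx > /dev/null || exit 1
- fi
- exit 0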
5. StarRocks v3.2 [shared-data, 1 FE, 1 CN]
Shared-data StarRocks 3.2
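This section is a stub; as a starting point, here is a hedged sketch of the extra fe.conf keys a 3.2 shared-data FE needs (key names follow the StarRocks 3.2 shared-data docs; the endpoint, bucket, and credentials are placeholders):
- # Append to fe.conf for shared-data mode (all values are placeholders)
- run_mode = shared_data
- cloud_native_meta_port = 6090
- cloud_native_storage_type = S3
- aws_s3_endpoint = <s3_endpoint>
- aws_s3_path = <bucket>/<prefix>
- aws_s3_access_key = <access_key>
- aws_s3_secret_key = <secret_key>
- # CN nodes start from the starrocks/cn-ubuntu image via start_cn.sh and are added with:
- # ALTER SYSTEM ADD COMPUTE NODE "<cn_address>:9050";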