1. Introduction
For this monitoring platform I use two Alibaba Cloud ECS servers to complete the whole deployment. Their specifications are as follows:
- Alibaba Cloud ECS 1: 2 cores / 2 GB RAM, Ubuntu 22.04, internal IP 172.16.0.178, open ports: 3306, 9104
- Alibaba Cloud ECS 2: 2 cores / 2 GB RAM, Ubuntu 22.04, internal IP 172.16.0.179, open ports: 9090, 3000
The overall deployment architecture is shown in the diagram below:
2. Setup Process
2.1 Install Docker
Since I am using Alibaba Cloud ECS with Ubuntu 22.04 for this walkthrough, please refer to this article for the Docker installation steps (a minimal install sketch also follows below):
https://xuzhibin.blog.csdn.net/article/details/142757626
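If you just want something quick, the following is a minimal sketch of installing Docker on Ubuntu 22.04 with Docker's official convenience script, run on both servers; the linked article above covers the full apt-based installation.

```bash
# Minimal Docker install on Ubuntu 22.04 using Docker's convenience script (run on both ECS instances)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Start Docker on boot and verify the installation
sudo systemctl enable --now docker
sudo docker version
```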
2.2 Install MySQL
On server 1, run the commands below to create the schema.sql initialization script; it is executed automatically when the container starts:
    # Initialize the MySQL config; the script runs automatically when the container starts
    mkdir -p /etc/mysql/init.d
    cat > /etc/mysql/init.d/schema.sql <<-'EOF'
    SET NAMES utf8mb4;
    SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
    SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
    SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='TRADITIONAL';

    # Initialize the database and table
    DROP DATABASE IF EXISTS sakila;
    CREATE DATABASE sakila;
    USE sakila;
    CREATE TABLE actor (
      actor_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
      first_name VARCHAR(45) NOT NULL,
      last_name VARCHAR(45) NOT NULL,
      last_update TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
      PRIMARY KEY (actor_id),
      KEY idx_actor_last_name (last_name)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

    # Create the remote user, allowed to connect from any host
    CREATE USER remote@'%' IDENTIFIED WITH mysql_native_password BY 'remote';
    GRANT ALL PRIVILEGES ON *.* TO remote@'%';

    # Create the exporter user and grant it the privileges it needs
    CREATE USER 'exporter'@'%' IDENTIFIED BY 'exporter';
    GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'%';
    GRANT SELECT ON performance_schema.* TO 'exporter'@'%';
    FLUSH PRIVILEGES;
    EOF
On server 1, create the MySQL container with Docker (a quick check of the init result is sketched after the command):
    docker run \
      -p 3306:3306 \
      --name db \
      -v /etc/mysql/init.d:/docker-entrypoint-initdb.d \
      -e MYSQL_ROOT_PASSWORD=root \
      -d mysql:8
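To confirm the initialization script actually ran, a quick check like the one below (reusing the container name db and the root password root from the command above) should list the sakila database and the remote/exporter users:

```bash
# Give MySQL a few seconds to initialize, then inspect the logs and the schema
docker logs db --tail 20
docker exec db mysql -uroot -proot \
  -e "SHOW DATABASES; SELECT user, host FROM mysql.user WHERE user IN ('remote', 'exporter');"
```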
2.3 Install MySQL Exporter
On server 1, install MySQL Exporter, which exposes the metrics that Prometheus will scrape. The command is as follows:
    docker run -d -p 9104:9104 --name mysql_exporter \
      -e DATA_SOURCE_NAME="exporter:exporter@(172.16.0.178:3306)/sakila" \
      prom/mysqld-exporter
You can check the running container with docker ps:
At this point the metrics page is reachable via the public IP plus the port number:
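You can also verify from the shell on server 1 that the exporter can reach MySQL; mysql_up should report 1. One caveat: newer prom/mysqld-exporter releases reportedly no longer read the DATA_SOURCE_NAME variable and expect a my.cnf-style config file instead, so if the container exits immediately, pinning an older tag such as prom/mysqld-exporter:v0.14.0 is a simple workaround.

```bash
# mysql_up = 1 means the exporter connected to MySQL successfully
curl -s http://172.16.0.178:9104/metrics | grep -E "^mysql_up"
```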
2.4 Install Prometheus
On server 2, initialize the configuration file with the commands below (a syntax check is sketched right after the block):
    mkdir /etc/prometheus
    cat > /etc/prometheus/prometheus.yml <<-'EOF'
    # my global config
    global:
      scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
      evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
      # scrape_timeout is set to the global default (10s).

    # Alertmanager configuration
    alerting:
      alertmanagers:
        - static_configs:
            - targets:
              # - alertmanager:9093

    # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
    rule_files:
      # - "first_rules.yml"
      # - "second_rules.yml"

    # A scrape configuration containing exactly one endpoint to scrape:
    # Here it's Prometheus itself.
    scrape_configs:
      # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
      - job_name: "prometheus"
        # metrics_path defaults to '/metrics'
        # scheme defaults to 'http'.
        static_configs:
          - targets: ["localhost:9090"]

      ### MySQL scrape configuration
      - job_name: 'mysql_metrics'
        scrape_interval: 5s
        metrics_path: '/metrics'
        static_configs:
          # internal IP and port of the mysql-exporter host
          - targets: ['172.16.0.178:9104']
    EOF
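Before starting the container, it can be worth validating the file with promtool; the sketch below assumes the official prom/prometheus image, which bundles promtool:

```bash
# Validate prometheus.yml syntax without starting a server
docker run --rm \
  -v /etc/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
  --entrypoint promtool \
  prom/prometheus check config /etc/prometheus/prometheus.yml
```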
On server 2, start the Prometheus container with the following command:
    docker run -d -p 9090:9090 --name=prometheus \
      -v /etc/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
      bitnami/prometheus:latest
Now, opening the public IP plus port in a browser shows the Prometheus page:
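To confirm Prometheus is actually scraping the exporter, check Status → Targets in the UI or query the HTTP API as below. If the mysql_metrics target does not appear, note that the bitnami/prometheus image normally reads its configuration from /opt/bitnami/prometheus/conf/prometheus.yml, so you may need to mount the file at that path instead (or use the official prom/prometheus image, whose default config path is /etc/prometheus/prometheus.yml).

```bash
# Run on server 2: list scrape targets and their health
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[^"]*"'

# Quick PromQL sanity check: mysql_up should return 1 for the exporter target
curl -s 'http://localhost:9090/api/v1/query?query=mysql_up'
```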
2.5 Install Grafana
On server 2, stand up the Grafana visualization dashboard with the following command:
    docker run -d -p 3000:3000 --name=grafana grafana/grafana
Once it is up, you can access Grafana directly via the public IP plus port; the initial username and password are both admin:
After logging in, the first step is to configure the data source for the MySQL metrics (the Prometheus instance set up above), as shown below; a command-line alternative is sketched after this step:
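If you prefer the command line over the UI for this step, the sketch below creates the same Prometheus data source through Grafana's HTTP API, assuming the default admin/admin credentials and the server-2 internal IP from section 1:

```bash
# Run on server 2: register Prometheus as a Grafana data source via the HTTP API
curl -s -u admin:admin \
  -H "Content-Type: application/json" \
  -X POST http://localhost:3000/api/datasources \
  -d '{
        "name": "Prometheus",
        "type": "prometheus",
        "url": "http://172.16.0.179:9090",
        "access": "proxy",
        "isDefault": true
      }'
```

With "access": "proxy", Grafana queries Prometheus server-side, so the Prometheus port does not need to be reachable from your browser.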
Step two: set up the dashboard:
Finally, once you see the screen below, the MySQL monitoring platform is up and running!