ElasticStack: From Beginner to Proficient

What is the Elastic Stack?
The Elastic Stack was originally known as ELK.
ELK stands for three components:

  • ElasticSearch
    Handles data storage and retrieval.
  • Logstash
    Handles data collection, shipping source data into ElasticSearch for storage.
  • Kibana
    Handles data visualization, similar to Grafana.
Because Logstash is a heavyweight product (its installation package exceeds 300 MB) and many people only need it for log collection, lighter collectors such as Flume and Fluentd are often used instead.
Elastic later recognized this and developed the Beats family of products, the typical representatives being Filebeat, Metricbeat, and Heartbeat.
After that, X-Pack and related components were introduced for security, along with components for cloud environments.
The stack was later named the ELK Stack, and the company eventually rebranded it as the Elastic Stack for promotion.
Elastic Stack architecture
Elastic Stack versions
https://www.elastic.co/   official Elastic website
The latest version is 8+, which enables HTTPS by default. We will install version 7.17 first and enable HTTPS manually, then practice installing version 8 later.
Choose an Elastic installation method; here we deploy Elastic on Ubuntu.
Deploying a single-node ES environment from the binary package
Deployment
1. Download the installation package
root@elk:~# cat install_elk.sh
#!/bin/bash
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.28-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.28-linux-x86_64.tar.gz.sha512
shasum -a 512 -c elasticsearch-7.17.28-linux-x86_64.tar.gz.sha512
tar -xzf elasticsearch-7.17.28-linux-x86_64.tar.gz -C /usr/local
cd elasticsearch-7.17.28/
2. Edit the configuration file
root@elk:~# vim /usr/local/elasticsearch-7.17.28/config/elasticsearch.yml
root@elk:~# egrep -v "^#|^$" /usr/local/elasticsearch-7.17.28/config/elasticsearch.yml
cluster.name: xu-elasticstack
path.data: /var/lib/es7
path.logs: /var/log/es7
network.host: 0.0.0.0
discovery.type: single-node
Parameter notes:
        port
                The default port is 9200.
                # By default Elasticsearch listens for HTTP traffic on the first free port it
                # finds starting at 9200. Set a specific HTTP port here:
                #
                #http.port: 9200

        cluster.name
                The name of the cluster.

        path.data
                Where ES stores its data.

        path.logs
                Where ES stores its logs.

        network.host
                # By default Elasticsearch only allows access from localhost
                # By default Elasticsearch is only accessible on localhost. Set a different
                # address here to expose this node on the network:
                #
                #network.host: 192.168.0.1
                The address the ES service listens on.

        discovery.type
                # For a multi-node ES cluster, configure discovery.seed_hosts and cluster.initial_master_nodes instead
                # Pass an initial list of hosts to perform discovery when this node is started:
                # The default list of hosts is ["127.0.0.1", "[::1]"]
                #
                #discovery.seed_hosts: ["host1", "host2"]
                #
                # Bootstrap the cluster using an initial set of master-eligible nodes:
                #
                #cluster.initial_master_nodes: ["node-1", "node-2"]
                The ES cluster deployment type; "single-node" here denotes a single-node environment.

3. Starting Elasticsearch directly at this point fails
3.1 Reproduce the error; the official start command is:
Elasticsearch can be started from the command line as follows:
./bin/elasticsearch
root@elk:~# /usr/local/elasticsearch-7.17.28/bin/elasticsearch
# Java-level errors follow
Mar 17, 2025 7:44:51 AM sun.util.locale.provider.LocaleProviderAdapter <clinit>
WARNING: COMPAT locale provider will be removed in a future release
[2025-03-17T07:44:53,125][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [elk] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:173) ~[elasticsearch-7.17.28.jar:7.17.28]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:160) ~[elasticsearch-7.17.28.jar:7.17.28]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77) ~[elasticsearch-7.17.28.jar:7.17.28]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112) ~[elasticsearch-cli-7.17.28.jar:7.17.28]
        at org.elasticsearch.cli.Command.main(Command.java:77) ~[elasticsearch-cli-7.17.28.jar:7.17.28]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:125) ~[elasticsearch-7.17.28.jar:7.17.28]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80) ~[elasticsearch-7.17.28.jar:7.17.28]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:107) ~[elasticsearch-7.17.28.jar:7.17.28]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:183) ~[elasticsearch-7.17.28.jar:7.17.28]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434) ~[elasticsearch-7.17.28.jar:7.17.28]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169) ~[elasticsearch-7.17.28.jar:7.17.28]
        ... 6 more
uncaught exception in thread [main]
java.lang.RuntimeException: can not run elasticsearch as root  # ES refuses to start directly as root
        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:107)
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:183)
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434)
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169)
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:160)
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77)
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112)
        at org.elasticsearch.cli.Command.main(Command.java:77)
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:125)
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80)
For complete error details, refer to the log at /var/log/es7/xu-elasticstack.log
2025-03-17 07:44:53,713764 UTC [1860] INFO  Main.cc@111 Parent process died - ML controller exiting
3.2 Create a dedicated user to run ES
root@elk:~# useradd -m elastic
root@elk:~# id elastic
uid=1001(elastic) gid=1001(elastic) groups=1001(elastic)
# Start as the elastic user; another error appears
root@elk:~# su - elastic -c "/usr/local/elasticsearch-7.17.28/bin/elasticsearch"
could not find java in bundled JDK at /usr/local/elasticsearch-7.17.28/jdk/bin/java
# The Java binary does exist on the system, but the elastic user cannot find it; switch to elastic and check
root@elk:~# ll /usr/local/elasticsearch-7.17.28/jdk/bin/java
-rwxr-xr-x 1 root root 12328 Feb 20 09:09 /usr/local/elasticsearch-7.17.28/jdk/bin/java*
root@elk:~# su - elastic
$ pwd
/home/elastic
$ ls /usr/local/elasticsearch-7.17.28/jdk/bin/java
# The cause is a permission denial: elastic has no access to the bundled JDK
ls: cannot access '/usr/local/elasticsearch-7.17.28/jdk/bin/java': Permission denied
# Working outward directory by directory shows /usr/local/elasticsearch-7.17.28/jdk/bin lacks the needed permissions, hence the error
root@elk:~# chown elastic:elastic -R /usr/local/elasticsearch-7.17.28/
root@elk:~# ll -d /usr/local/elasticsearch-7.17.28/jdk/bin/
drwxr-x--- 2 elastic elastic 4096 Feb 20 09:09 /usr/local/elasticsearch-7.17.28/jdk/bin//
# Start again; a different error appears:
# the path.data and path.logs directories we configured do not exist and must be created manually
java.lang.IllegalStateException: Unable to access 'path.data' (/var/lib/es7)
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: Unable to access 'path.data' (/var/lib/es7)
root@elk:~# install -d /var/{log,lib}/es7 -o elastic -g elastic
root@elk:~# ll -d /var/{log,lib}/es7
drwxr-xr-x 2 elastic elastic 4096 Mar 17 08:01 /var/lib/es7/
drwxr-xr-x 2 elastic elastic 4096 Mar 17 07:44 /var/log/es7/
# Restart the service; it now starts successfully. Check the ports
root@elk:~# su - elastic -c "/usr/local/elasticsearch-7.17.28/bin/elasticsearch"
root@elk:~# netstat -tunlp | egrep "9[23]00"
tcp6       0      0 :::9200                 :::*                    LISTEN      2544/java
tcp6       0      0 :::9300                 :::*                    LISTEN      2544/java
Access port 9200 from a browser.
Elasticsearch also exposes an API to view the current cluster nodes:
[root@zabbix ~]# curl 192.168.121.21:9200/_cat/nodes
172.16.1.21 40 97 0 0.11 0.29 0.20 cdfhilmrstw * elk
# Queried from the command line; since this is a single-node deployment, there is only one node
# So far we started ES in the foreground, which has two problems:
1. It occupies the terminal.
2. Stopping ES is awkward, so ES is normally started in the background.
The official way to run it in the background is elasticsearch's -d flag:
To run Elasticsearch as a daemon, specify -d on the command line, and record the process ID in a file using the -p option:
./bin/elasticsearch -d -p pid
root@elk:~# su - elastic -c '/usr/local/elasticsearch-7.17.28/bin/elasticsearch -d'
# Common startup errors
Q1: The maximum number of virtual memory maps is too small
bootstrap check failure [1] of [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
ERROR: Elasticsearch did not exit normally - check the logs at /var/log/es7/AAA.log
root@elk:~# sysctl -q vm.max_map_count
vm.max_map_count = 65530
root@elk:~# echo "vm.max_map_count = 262144" >> /etc/sysctl.d/es.conf
root@elk:~# sysctl -w vm.max_map_count=262144
vm.max_map_count = 262144
root@elk:~# sysctl -q vm.max_map_count
vm.max_map_count = 262144
Q2: A mistake in the ES configuration file
java.net.UnknownHostException: single-node
Q3: A "lock" message means another ES instance is already running; kill the existing process and run the start command again
java.lang.IllegalStateException: failed to obtain node locks, tried [[/var/lib/es7]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
Q4: The ES cluster is misconfigured and lacks a master role
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
Uninstalling the environment
1. Stop elasticsearch
root@elk:~# kill `ps -ef | grep java | grep -v grep | awk '{print $2}'`
root@elk:~# ps -ef | grep java
root        4437    1435  0 09:21 pts/2    00:00:00 grep --color=auto java
2. Remove the data directory, log directory, installation, and user
root@elk:~# rm -rf /usr/local/elasticsearch-7.17.28/ /var/{lib,log}/es7/
root@elk:~# userdel -r elastic
Installing a single-node ES from the deb package
1. Download the deb package
root@elk:~# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.28-amd64.deb
2. Install ES
root@elk:~# dpkg -i elasticsearch-7.17.28-amd64.deb
# ES installed from the deb package can be managed with systemctl
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
sudo systemctl start elasticsearch.service
Created elasticsearch keystore in /etc/elasticsearch/elasticsearch.keystore
3. Edit the ES configuration file
root@elk:~# vim /etc/elasticsearch/elasticsearch.yml
root@elk:~# egrep -v "^#|^$" /etc/elasticsearch/elasticsearch.yml
cluster.name: xu-es
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.type: single-node
4. Start ES
systemctl enable elasticsearch --now
# Inspect the ES service unit; the settings below are exactly what we did by hand for the binary install
User=elasticsearch
Group=elasticsearch
ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet
cat /usr/share/elasticsearch/bin/systemd-entrypoint
#!/bin/sh
# This wrapper script allows SystemD to feed a file containing a passphrase into
# the main Elasticsearch startup script
if [ -n "$ES_KEYSTORE_PASSPHRASE_FILE" ] ; then
  exec /usr/share/elasticsearch/bin/elasticsearch "$@" < "$ES_KEYSTORE_PASSPHRASE_FILE"
else
  exec /usr/share/elasticsearch/bin/elasticsearch "$@"
fi
Common ES terminology
1. Index
        The unit through which users read and write data.
2. Shard
        An index has at least one shard. If an index has only one shard, all of its data is stored in full on a single node; a shard cannot be split and belongs to one node.
        In other words, the shard is the smallest scheduling unit of an ES cluster.
        An index's data can also be spread across multiple shards placed on different nodes, which gives distributed storage.
3. Replica
        Replicas are defined per shard; a shard can have zero or more replicas.
        With zero replicas there are only primary shards, and the data becomes unavailable when the node holding a primary shard goes down.
        With one or more replicas there are both primary shards and replica shards:
                the primary shard handles reads and writes (read-write, rw);
                replica shards load-balance reads (read-only, ro).
4. Document
        The data users store, consisting of metadata and source data.
        Metadata:
                data that describes the source data.
        Source data:
                the data the user actually stores.
5. Allocation
        The process of distributing an index's shards (primary and replica) across the cluster.
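As a minimal sketch tying these terms together, the request below creates an index with an explicit shard and replica layout (the index name test_index and the counts are arbitrary examples, not taken from the text above):
curl -X PUT '127.0.0.1:9200/test_index' \
  -H 'Content-Type: application/json' \
  -d '{"settings": {"number_of_shards": 3, "number_of_replicas": 1}}'
# 3 primary shards, each with 1 replica -> 6 shards in total across the cluster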
Checking cluster status
# ES exposes the /_cat/health API
root@elk:~# curl 127.1:9200/_cat/health
1742210504 11:21:44 xu-es green 1 1 3 3 0 0 0 0 - 100.0%
root@elk:~# curl 127.1:9200/_cat/health?v
epoch      timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1742210512 11:21:52  xu-es   green           1         1      3   3    0    0        0             0                  -                100.0%
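The health API only reports cluster-wide totals; the same _cat family can break state down further. The two commands below (run against the same cluster) list per-index and per-shard detail:
curl 127.1:9200/_cat/indices?v   # one row per index: health, status, doc count, store size
curl 127.1:9200/_cat/shards?v    # one row per shard: index, shard number, prirep (p=primary, r=replica), state, node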
Deploying an ES cluster
1. Install the ES package on every cluster node
root@elk1:~# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.28-amd64.deb
root@elk1:~# dpkg -i elasticsearch-7.17.28-amd64.deb
root@elk2:~# dpkg -i elasticsearch-7.17.28-amd64.deb
root@elk3:~# dpkg -i elasticsearch-7.17.28-amd64.deb
2. Configure ES identically on all three machines
# discovery.type is no longer needed
[root@elk1 ~]# grep -E "^(cluster|path|network|discovery|http)" /etc/elasticsearch/elasticsearch.yml
cluster.name: es-cluster
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.121.91", "192.168.121.92", "192.168.121.93"]
3. Start the service
systemctl enable elasticsearch --now
4. Test; the node marked with * is the master
root@elk:~# curl 127.1:9200/_cat/nodes
172.16.1.23  6 97 25 0.63 0.57 0.25 cdfhilmrstw - elk3
172.16.1.22  5 96 23 0.91 0.76 0.33 cdfhilmrstw - elk2
172.16.1.21 19 90 39 1.22 0.87 0.35 cdfhilmrstw * elk
root@elk:~# curl 127.1:9200/_cat/nodes?v
ip          heap.percent ram.percent cpu load_1m load_5m load_15m node.role   master name
172.16.1.23            9          83   2    0.12    0.21     0.18 cdfhilmrstw -      elk3
172.16.1.22            8          96   3    0.16    0.28     0.24 cdfhilmrstw -      elk2
172.16.1.21           22          97   3    0.09    0.30     0.25 cdfhilmrstw *      elk
# Cluster deployment fault: no uuid, the cluster lacks a master
[root@elk3 ~]# curl http://192.168.121.92:9200/_cat/nodes?v
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
[root@elk3 ~]# curl 192.168.121.91:9200
{
  "name" : "elk91",
  "cluster_name" : "es-cluster",
  "cluster_uuid" : "_na_",
  ...
}
[root@elk3 ~]# curl 10.0.0.92:9200
{
  "name" : "elk92",
  "cluster_name" : "es-cluster",
  "cluster_uuid" : "_na_",
  ...
}
[root@elk3 ~]# curl 10.0.0.93:9200
{
  "name" : "elk93",
  "cluster_name" : "es-cluster",
  "cluster_uuid" : "_na_",
  ...
}
[root@elk3 ~]# curl http://192.168.121.91:9200/_cat/nodes
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
# Fix
1. Stop the ES service on every node
[root@elk91 ~]# systemctl stop elasticsearch.service
[root@elk92 ~]# systemctl stop elasticsearch.service
[root@elk93 ~]# systemctl stop elasticsearch.service
2. Delete the data, logs, and temporary files
[root@elk91 ~]# rm -rf /var/{lib,log}/elasticsearch/* /tmp/*
[root@elk92 ~]# rm -rf /var/{lib,log}/elasticsearch/* /tmp/*
[root@elk93 ~]# rm -rf /var/{lib,log}/elasticsearch/* /tmp/*
3. Add the missing configuration option
[root@elk1 ~]# grep -E "^(cluster|path|network|discovery|http)" /etc/elasticsearch/elasticsearch.yml
cluster.name: es-cluster
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.121.91", "192.168.121.92", "192.168.121.93"]
cluster.initial_master_nodes: ["192.168.121.91", "192.168.121.92", "192.168.121.93"]  ######
4. Restart the services
5. Test
ES cluster master election flow
1. On startup a node checks whether the cluster already has a master; if so, no election is initiated.
2. At initial startup every node considers itself the master and broadcasts its information (ClusterStateVersion, ID, etc.) to the other cluster nodes.
3. Through a gossip-like protocol each node obtains the list of master-eligible nodes.
4. ClusterStateVersion is compared first: the highest version has priority and is elected master.
5. If that does not decide it, IDs are compared: the node with the smaller ID becomes master.
6. Once more than half of the nodes have taken part, the election completes: with N nodes, (N/2)+1 votes are enough to confirm a master.
7. The elected master is then announced to the cluster node list; only then is the election complete.
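To see which node currently holds the master role without scanning the _cat/nodes output for the *, ES also exposes a dedicated endpoint:
root@elk:~# curl 127.1:9200/_cat/master?v
# Columns: id, host, ip, node — the listed node is the elected master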
DSL

  • ES compared with MySQL

    • MySQL is a relational database:
      CRUD: done with SQL.
    • ES is a document database, much like MongoDB:
      CRUD: done with DSL statements, a query language specific to ES.
      For fuzzy matching, MySQL cannot make full use of its indexes and performs poorly, whereas fuzzy queries against ES are very efficient.

Adding a single document to ES
Testing with Postman
# Under the hood this is just curl
curl --location 'http://192.168.121.21:9200/test_linux/_doc' \
--header 'Content-Type: application/json' \
--data '{
    "name": "孙悟空",
    "hobby": [
        "蟠桃",
        "紫霞仙子"
    ]
}'
curl --location '192.168.121.21:9200/_bulk' \
--header 'Content-Type: application/json' \
--data '{ "create" : { "_index" : "test_linux_ss", "_id" : "1001" } }
{ "name" : "猪八戒","hobby": ["猴哥","高老庄"] }
{"create": {"_index":"test_linux_ss","_id":"1002"}}
{"name":"白龙马","hobby":["驮唐僧","吃草"]}
'
Querying data
curl --location '192.168.121.22:9200/test_linux_ss/_doc/1001' \
--data ''
curl --location --request GET '192.168.121.22:9200/test_linux_ss/_search' \
--header 'Content-Type: application/json' \
--data '{
    "query":{
        "match":{
            "name":"猪八戒"
        }
    }
}'
Deleting data
curl --location --request DELETE '192.168.121.22:9200/test_linux_ss/_doc/1001'
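Deleting by ID removes a single document. To remove every document matching a query, ES also offers the _delete_by_query API; a sketch against the same index (the match clause is an arbitrary example):
curl --location --request POST '192.168.121.22:9200/test_linux_ss/_delete_by_query' \
--header 'Content-Type: application/json' \
--data '{
    "query": {
        "match": { "name": "白龙马" }
    }
}'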
kibana
Deploying Kibana
Kibana is a visualization tool built for ES; from here on, most operations against ES can be performed through it.
1. Download kibana
root@elk:~# wget https://artifacts.elastic.co/downloads/kibana/kibana-7.17.28-amd64.deb
2. Install kibana
root@elk:~# dpkg -i kibana-7.17.28-amd64.deb
3. Edit the configuration file
root@elk:~# vim /etc/kibana/kibana.yml
root@elk:~# grep -E "^(elasticsearch.host|i18n|server)" /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.121.21:9200","http://192.168.121.22:9200","http://192.168.121.23:9200"]
i18n.locale: "zh-CN"
4. Start kibana
root@elk:~# systemctl enable kibana.service --now
Synchronizing state of kibana.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable kibana
Created symlink /etc/systemd/system/multi-user.target.wants/kibana.service → /etc/systemd/system/kibana.service.
root@elk:~# netstat -tunlp | grep 5601
tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      19392/node
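Besides checking the listening port, Kibana's own status API gives a quick health readout from the shell (endpoint path as in Kibana 7.x):
root@elk:~# curl -s http://127.0.0.1:5601/api/status | head -c 300   # JSON containing an overall state field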
Web access test
Basic KQL usage
Filtering data
Filebeat
Deploying Filebeat
1. Download Filebeat
root@elk2:~# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.28-amd64.deb
2. Install Filebeat
root@elk2:~# dpkg -i filebeat-7.17.28-amd64.deb
3. Write a Filebeat configuration file
# We create the config directory ourselves, then write the configuration file in it
mkdir /etc/filebeat/config
vim /etc/filebeat/config/01-log-to-console.yaml
# A Filebeat configuration has two parts, Input and Output: the input says where to collect data from, the output says where to store it; configure both per the official docs
# There are no service logs to collect yet, so the input points at a scratch file and the output goes to the console
root@elk2:~# cat /etc/filebeat/config/01-log-to-console.yaml
# Where the data comes from
filebeat.inputs:
  # The data source type is log, i.e. read from a file
- type: log
  # Path of the file
  paths:
    - /tmp/student.log
# Send the data to the terminal
output.console:
  pretty: true
4. Run the filebeat instance
filebeat -e -c /etc/filebeat/config/01-log-to-console.yaml
5. Create student.log and write data into it
root@elk2:~# echo ABC > /tmp/student.log
// Console output
{
  "@timestamp": "2025-03-18T14:48:42.432Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.17.28"
  },
  "message": "ABC",   // the detected content change
  "input": {
    "type": "log"
  },
  "ecs": {
    "version": "1.12.0"
  },
  "host": {
    "name": "elk2"
  },
  "agent": {
    "type": "filebeat",
    "version": "7.17.28",
    "hostname": "elk2",
    "ephemeral_id": "7f116862-382c-48f4-8797-c4b689e6e6fe",
    "id": "ba0b7fa3-59b2-4988-bfa1-d9ac8728bcaf",
    "name": "elk2"
  },
  "log": {
    "offset": 0,   // offset=0 means reading started at the beginning of the file
    "file": {
      "path": "/tmp/student.log"
    }
  }
}
# Append more data to student.log
root@elk2:~# echo 123 >> /tmp/student.log
// Filebeat output
{
  "@timestamp": "2025-03-18T14:51:17.449Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.17.28"
  },
  "log": {
    "offset": 4,  // the offset now starts at 4
    "file": {
      "path": "/tmp/student.log"
    }
  },
  "message": "123",   // the collected data: 123
  "input": {
    "type": "log"
  },
  "ecs": {
    "version": "1.12.0"
  },
  "host": {
    "name": "elk2"
  },
  "agent": {
    "id": "ba0b7fa3-59b2-4988-bfa1-d9ac8728bcaf",
    "name": "elk2",
    "type": "filebeat",
    "version": "7.17.28",
    "hostname": "elk2",
    "ephemeral_id": "7f116862-382c-48f4-8797-c4b689e6e6fe"
  }
}
Filebeat behavior
# Append with echo -n (no trailing newline); filebeat collects nothing
root@elk2:~# echo -n 456 >> /tmp/student.log
root@elk2:~# cat /tmp/student.log
ABC
123
456root@elk2:~#
root@elk2:~# echo -n abc  >> /tmp/student.log
root@elk2:~# cat /tmp/student.log
ABC
123
456789abcroot@elk2:~#
# Write with a trailing newline; now filebeat can collect the data
root@elk2:~# echo haha >> /tmp/student.log
root@elk2:~# cat /tmp/student.log
ABC
123
456789abchaha
// Filebeat output
{
  "@timestamp": "2025-03-18T14:55:37.476Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.17.28"
  },
  "host": {
    "name": "elk2"
  },
  "agent": {
    "name": "elk2",
    "type": "filebeat",
    "version": "7.17.28",
    "hostname": "elk2",
    "ephemeral_id": "7f116862-382c-48f4-8797-c4b689e6e6fe",
    "id": "ba0b7fa3-59b2-4988-bfa1-d9ac8728bcaf"
  },
  "log": {
    "offset": 8,  // offset = 8
    "file": {
      "path": "/tmp/student.log"
    }
  },
  "message": "456789abchaha",  // the collected line is all the newline-less appends plus the final newline-terminated one
  "input": {
    "type": "log"
  },
  "ecs": {
    "version": "1.12.0"
  }
}
复制代码
This gives us Filebeat's first property:
filebeat collects data line by line by default; a line only counts as complete once it ends with a newline.
# Now stop Filebeat, modify student.log, and start collecting again: will Filebeat re-read the whole file, or only what was appended while it was stopped?
root@elk2:~# echo xixi >> /tmp/student.log
# Restart Filebeat
// Filebeat output
{
  "@timestamp": "2025-03-18T15:00:51.759Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.17.28"
  },
  "ecs": {
    "version": "1.12.0"
  },
  "host": {
    "name": "elk2"
  },
  "agent": {
    "type": "filebeat",
    "version": "7.17.28",
    "hostname": "elk2",
    "ephemeral_id": "81db6575-7f98-4ca4-a86f-4d0127c1e2a4",
    "id": "ba0b7fa3-59b2-4988-bfa1-d9ac8728bcaf",
    "name": "elk2"
  },
  "log": {
    "offset": 22,  // the offset does not restart at 0 either
    "file": {
      "path": "/tmp/student.log"
    }
  },
  "message": "xixi",  // only the content appended while Filebeat was stopped is collected
  "input": {
    "type": "log"
  }
}
// The log above contains a hint: on restart filebeat loads the json files under the /var/lib/filebeat/registry/filebeat directory
2025-03-18T15:00:51.756Z        INFO        memlog/store.go:124        Finished loading transaction log file for '/var/lib/filebeat/registry/filebeat'. Active transaction id=5
// The very first start also tries to load from that directory, but it does not exist yet, so there is nothing to import
// The json file under /var/lib/filebeat/registry/filebeat records the offset; this is why filebeat does not start over from the beginning of a file after a restart
{"op":"set","id":1}
{"k":"filebeat::logs::native::1441831-64768","v":{"id":"native::1441831-64768","prev_id":"","timestamp":[431172511,1742309322],"ttl":-1,"identifier_name":"native","source":"/tmp/student.log","offset":0,"type":"log","FileStateOS":{"inode":1441831,"device":64768}}}
{"op":"set","id":2}
{"k":"filebeat::logs::native::1441831-64768","v":{"prev_id":"","source":"/tmp/student.log","type":"log","FileStateOS":{"inode":1441831,"device":64768},"id":"native::1441831-64768","offset":4,"timestamp":[434614328,1742309323],"ttl":-1,"identifier_name":"native"}}
{"op":"set","id":3}
{"k":"filebeat::logs::native::1441831-64768","v":{"id":"native::1441831-64768","identifier_name":"native","ttl":-1,"type":"log","FileStateOS":{"inode":1441831,"device":64768},"prev_id":"","source":"/tmp/student.log","offset":8,"timestamp":[450912955,1742309478]}}
{"op":"set","id":4}
{"k":"filebeat::logs::native::1441831-64768","v":{"type":"log","identifier_name":"native","offset":22,"timestamp":[478003874,1742309738],"source":"/tmp/student.log","ttl":-1,"FileStateOS":{"inode":1441831,"device":64768},"id":"native::1441831-64768","prev_id":""}}
{"op":"set","id":5}
{"k":"filebeat::logs::native::1441831-64768","v":{"id":"native::1441831-64768","ttl":-1,"FileStateOS":{"device":64768,"inode":1441831},"identifier_name":"native","prev_id":"","source":"/tmp/student.log","offset":22,"timestamp":[478003874,1742309738],"type":"log"}}
{"op":"set","id":6}
{"k":"filebeat::logs::native::1441831-64768","v":{"offset":22,"timestamp":[759162512,1742310051],"type":"log","FileStateOS":{"device":64768,"inode":1441831},"id":"native::1441831-64768","prev_id":"","identifier_name":"native","source":"/tmp/student.log","ttl":-1}}
{"op":"set","id":7}
{"k":"filebeat::logs::native::1441831-64768","v":{"offset":22,"timestamp":[759368397,1742310051],"type":"log","FileStateOS":{"inode":1441831,"device":64768},"prev_id":"","source":"/tmp/student.log","ttl":-1,"identifier_name":"native","id":"native::1441831-64768"}}
{"op":"set","id":8}
{"k":"filebeat::logs::native::1441831-64768","v":{"ttl":-1,"identifier_name":"native","id":"native::1441831-64768","source":"/tmp/student.log","timestamp":[761513338,1742310052],"FileStateOS":{"inode":1441831,"device":64768},"prev_id":"","offset":27,"type":"log"}}
{"op":"set","id":9}
{"k":"filebeat::logs::native::1441831-64768","v":{"source":"/tmp/student.log","timestamp":[795028411,1742310356],"FileStateOS":{"inode":1441831,"device":64768},"prev_id":"","offset":27,"ttl":-1,"type":"log","identifier_name":"native","id":"native::1441831-64768"}}
This is Filebeat's second property:
by default Filebeat records the offset of each collected file under "/var/lib/filebeat", so the next run resumes collecting from that position.
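A quick way to confirm this is to read the recorded offsets straight out of the registry; a sketch, assuming the 7.x memlog transaction log file name log.json for the directory shown above:
root@elk2:~# grep -o '"offset":[0-9]*' /var/lib/filebeat/registry/filebeat/log.json | tail -n 3
"offset":22
"offset":22
"offset":27   # values are illustrative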
Writing Filebeat output to ES
Point Filebeat's Output at ES.
Per the official documentation:
The Elasticsearch output sends events directly to Elasticsearch using the Elasticsearch HTTP API.
Example configuration:
output.elasticsearch:
  hosts: ["https://myEShost:9200"]
root@elk2:~# cat /etc/filebeat/config/02-log-to-es.yaml
# Where the data comes from
filebeat.inputs:
  # The data source type is log, i.e. read from a file
- type: log
  # Path of the file
  paths:
    - /tmp/student.log
# Send the data to ES
output.elasticsearch:
  hosts:
    - 192.168.121.21:9200
    - 192.168.121.22:9200
    - 192.168.121.23:9200
# Delete filebeat's registry files
root@elk2:~# rm -rf /var/lib/filebeat
# Start the filebeat instance
root@elk2:~# filebeat -e -c /etc/filebeat/config/02-log-to-es.yaml
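Before switching to Kibana you can confirm the data arrived by listing indices on any ES node (with no custom index configured, the name follows the default filebeat-<version>-<date> pattern):
root@elk2:~# curl -s 192.168.121.21:9200/_cat/indices | grep filebeat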
The collected data now appears in Kibana.
Viewing the collected data
Setting the refresh interval
Custom index names
# The index name can be customized; the official way is the index setting
output.elasticsearch:
  hosts: ["http://localhost:9200"]
  index: "%{[fields.log_type]}-%{[agent.version]}-%{+yyyy.MM.dd}"

root@elk2:~# cat /etc/filebeat/config/03-log-to-constom.yaml
# Where the data comes from
filebeat.inputs:
  # The data source type is log, i.e. read from a file
- type: log
  # Path of the file
  paths:
    - /tmp/student.log
# Send the data to ES
output.elasticsearch:
  hosts:
    - 192.168.121.21:9200
    - 192.168.121.22:9200
    - 192.168.121.23:9200
  # Custom index name
  index: "test_filebeat-%{+yyyy.MM.dd}"
# Start filebeat; this now fails
root@elk2:~# filebeat -e -c /etc/filebeat/config/03-log-to-constom.yaml
2025-03-19T02:55:18.951Z        INFO        instance/beat.go:698        Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat] Hostfs Path: [/]
2025-03-19T02:55:18.958Z        INFO        instance/beat.go:706        Beat ID: a109c2d1-fbb6-4b82-9416-29f9488ccabc
# setup.template.name and setup.template.pattern must be set; in other words, a custom index name requires both parameters
2025-03-19T02:55:18.958Z        ERROR        instance/beat.go:1027        Exiting: setup.template.name and setup.template.pattern have to be set if index name is modified
Exiting: setup.template.name and setup.template.pattern have to be set if index name is modified
# The official docs call this out for setup.template.name and setup.template.pattern:
If you change this setting, you also need to configure the setup.template.name and setup.template.pattern options (see Elasticsearch index template).
# The official parameter descriptions:
setup.template.name
        The name of the template. The default is filebeat. The Filebeat version is always appended to the given name, so the final name is filebeat-%{[agent.version]}.

setup.template.pattern
        The template pattern to apply to the default index settings. The default pattern is filebeat. The Filebeat version is always included in the pattern, so the final pattern is filebeat-%{[agent.version]}.
Example:
setup.template.name: "filebeat"
setup.template.pattern: "filebeat"
# Shards and replicas also need to be set; the official defaults:
setup.template.settings:
  index.number_of_shards: 1
  index.number_of_replicas: 1
# Configure our own index template (the rules used to create indices)
root@elk2:~# cat /etc/filebeat/config/03-log-to-constom.yaml
# Where the data comes from
filebeat.inputs:
  # The data source type is log, i.e. read from a file
- type: log
  # Path of the file
  paths:
    - /tmp/student.log
# Send the data to ES
output.elasticsearch:
  hosts:
    - 192.168.121.21:9200
    - 192.168.121.22:9200
    - 192.168.121.23:9200
  # Custom index name
  index: "test_filebeat-%{+yyyy.MM.dd}"
# Define the name of the index template (the rules used when creating indices)
setup.template.name: "test_filebeat"
# Define the template's match pattern, i.e. which indices the template applies to
setup.template.pattern: "test_filebeat-*"
# Define the index template's settings
setup.template.settings:
  # Number of shards
  index.number_of_shards: 3
  # Number of replicas per shard
  index.number_of_replicas: 0
# Start filebeat; it starts fine, but Kibana shows no index with our name; check the startup logs
root@elk2:~# filebeat -e -c /etc/filebeat/config/03-log-to-constom.yaml
# The gist: ILM is set to auto, and when it is enabled all custom index settings are ignored, so ILM must be set to false
2025-03-19T03:10:02.548Z        INFO        [index-management]        idxmgmt/std.go:260        Auto ILM enable success.
2025-03-19T03:10:02.558Z        INFO        [index-management.ilm]        ilm/std.go:170        ILM policy filebeat exists already.
2025-03-19T03:10:02.559Z        INFO        [index-management]        idxmgmt/std.go:396        Set setup.template.name to '{filebeat-7.17.28 {now/d}-000001}' as ILM is enabled.
# The official docs on index lifecycle management (ILM):
When index lifecycle management (ILM) is enabled, the default index is "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}-%{index_num}", for example, "filebeat-8.17.3-2025-03-17-000001". Custom index settings are ignored when ILM is enabled. If you're sending events to a cluster that supports index lifecycle management, see Index lifecycle management (ILM) to learn how to change the index name.
# ilm defaults to auto; supported values are true, false, and auto
Enables or disables index lifecycle management on any new indices created by Filebeat. Valid values are true, false, and auto. When auto (the default) is specified on version 7.0 and later
setup.ilm.enabled: auto
# Add the ilm setting to our own configuration file
# Start filebeat
root@elk2:~# filebeat -e -c /etc/filebeat/config/03-log-to-constom.yaml
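When iterating on configuration like this, Filebeat's built-in test subcommands catch mistakes before a full start; a sketch against the file above:
root@elk2:~# filebeat test config -c /etc/filebeat/config/03-log-to-constom.yaml   # validates the syntax, prints "Config OK"
root@elk2:~# filebeat test output -c /etc/filebeat/config/03-log-to-constom.yaml   # dials each configured ES host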
The index template has been created.
# Now suppose we want to change the shard and replica counts
# Edit the configuration file directly: 5 shards, 0 replicas
The index still shows 3 shards and 0 replicas.
# That is because setup.template.overwrite defaults to false, i.e. an existing template is not overwritten
setup.template.overwrite
A boolean that specifies whether to overwrite the existing template. The default is false. Do not enable this option if you start more than one instance of Filebeat at the same time. It can overload Elasticsearch by sending too many template update requests.
# Set setup.template.overwrite to true
root@elk2:~# cat /etc/filebeat/config/03-log-to-constom.yaml
# Where the data comes from
filebeat.inputs:
  # The data source type is log, i.e. read from a file
- type: log
  # Path of the file
  paths:
    - /tmp/student.log
# Send the data to ES
output.elasticsearch:
  hosts:
    - 192.168.121.21:9200
    - 192.168.121.22:9200
    - 192.168.121.23:9200
  # Custom index name
  index: "test_filebeat-%{+yyyy.MM.dd}"
# Disable index lifecycle management (ILM)
# If ILM is enabled, all custom index settings are ignored
setup.ilm.enabled: false
# Whether to overwrite an existing index template; defaults to false, set to true only when explicitly needed.
# The docs advise leaving it false, because too many template update requests (each write opening a TCP connection) can overload ES.
setup.template.overwrite: true
# Define the name of the index template (the rules used when creating indices)
setup.template.name: "test_filebeat"
# Define the template's match pattern, i.e. which indices the template applies to
setup.template.pattern: "test_filebeat-*"
# Define the index template's settings
setup.template.settings:
  # Number of shards
  index.number_of_shards: 5
  # Number of replicas per shard
  index.number_of_replicas: 0
# Start filebeat
# The shard and replica counts are now updated
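The updated settings can also be confirmed from ES itself; Filebeat 7.x writes legacy index templates, so the _template endpoint shows the stored values:
root@elk2:~# curl -s '192.168.121.21:9200/_template/test_filebeat?pretty' | grep -E 'number_of_(shards|replicas)'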
Collecting nginx logs with Filebeat (hands-on)
1. Install nginx
root@elk2:~# apt install -y nginx
2. Start nginx
root@elk2:~# systemctl start nginx
root@elk2:~# netstat -tunlp | grep 80
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      17956/nginx: master
tcp6       0      0 :::80                   :::*                    LISTEN      17956/nginx: master
3. Test access
root@elk2:~# curl 127.1
# Log location
root@elk2:~# ll /var/log/nginx/access.log
-rw-r----- 1 www-data adm 86 Mar 19 06:58 /var/log/nginx/access.log
root@elk2:~# cat /var/log/nginx/access.log
127.0.0.1 - - [19/Mar/2025:06:58:31 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.81.0"
4. Write the Filebeat instance
root@elk2:~# cat /etc/filebeat/config/04-log-to-nginx.yaml
# Where the data comes from
filebeat.inputs:
  # The data source type is log, i.e. read from a file
- type: log
  # Paths of the files
  paths:
    - /var/log/nginx/access.log*
# Send the data to ES
output.elasticsearch:
  hosts:
    - 192.168.121.21:9200
    - 192.168.121.22:9200
    - 192.168.121.23:9200
  # Custom index name
  index: "test_filebeat-%{+yyyy.MM.dd}"
# Disable index lifecycle management (ILM)
# If ILM is enabled, all custom index settings are ignored
setup.ilm.enabled: false
# Whether to overwrite an existing index template; defaults to false, set to true only when explicitly needed.
# The docs advise leaving it false, because too many template update requests can overload ES.
setup.template.overwrite: true
# Define the name of the index template (the rules used when creating indices)
setup.template.name: "test_filebeat"
# Define the template's match pattern, i.e. which indices the template applies to
setup.template.pattern: "test_filebeat-*"
# Define the index template's settings
setup.template.settings:
  # Number of shards
  index.number_of_shards: 5
  # Number of replicas per shard
  index.number_of_replicas: 0
5. Start filebeat
root@elk2:~# filebeat -e -c /etc/filebeat/config/04-log-to-nginx.yaml
Analyzing nginx logs with Filebeat
filebeat modules
# Filebeat ships with many modules
# The official description of filebeat modules: they simplify parsing common log formats
# Filebeat modules simplify the collection, parsing, and visualization of common log formats.
# All modules are disabled by default; enable the ones you need
root@elk2:~# ls -l /etc/filebeat/modules.d/
total 300
-rw-r--r-- 1 root root   484 Feb 13 16:58 activemq.yml.disabled
-rw-r--r-- 1 root root   476 Feb 13 16:58 apache.yml.disabled
-rw-r--r-- 1 root root   281 Feb 13 16:58 auditd.yml.disabled
-rw-r--r-- 1 root root  2112 Feb 13 16:58 awsfargate.yml.disabled
...
root@elk2:~# ls -l /etc/filebeat/modules.d/ | wc -l
72
# List which modules are enabled and which are disabled
root@elk2:~# filebeat modules list
# Enable modules
root@elk2:~# filebeat modules enable apache nginx mysql redis
Enabled apache
Enabled nginx
Enabled mysql
Enabled redis
# Disable modules
root@elk2:~# filebeat modules disable apache mysql redis
Disabled apache
Disabled mysql
Disabled redis
Configuring Filebeat to monitor nginx
# The module machinery is configured in the filebeat config; /etc/filebeat/filebeat.yml documents the format
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s
# Write the filebeat instance
root@elk2:~# cat /etc/filebeat/config/07-module-nginx-to-es.yaml
# Config modules
filebeat.config.modules:
  # Glob pattern for configuration loading: the path the module configs are loaded from
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading: automatically reload the yml files under /etc/filebeat/modules.d/
  reload.enabled: true
  # Period on which files under path should be checked for changes
  #reload.period: 10s
output.elasticsearch:
  hosts:
    - 192.168.121.21:9200
    - 192.168.121.22:9200
    - 192.168.121.23:9200
  index: "module_nginx-%{+yyyy.MM.dd}"
setup.ilm.enabled: false
setup.template.overwrite: true
setup.template.name: "module_nginx"
setup.template.pattern: "module_nginx-*"
setup.template.settings:
  index.number_of_shards: 5
  index.number_of_replicas: 0
# Prepare nginx access-log test data
root@elk2:~# cat /var/log/nginx/access.log
192.168.121.1 - - [19/Mar/2025:16:42:23 +0000] "GET / HTTP/1.1" 200 396 "-" "Mozilla/5.0 (Linux; Android) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36 CrKey/1.54.248666"
1.168.121.1 - - [19/Mar/2025:16:42:26 +0000] "GET / HTTP/1.1" 200 396 "-" "Mozilla/5.0 (Linux; Android) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36 CrKey/1.54.248666"
92.168.121.1 - - [19/Mar/2025:16:42:29 +0000] "GET / HTTP/1.1" 200 396 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 16_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Mobile/15E148 Safari/604.1"
192.168.11.1 - - [19/Mar/2025:16:42:31 +0000] "GET / HTTP/1.1" 200 396 "-" "Mozilla/5.0 (Linux; Android 13; SM-G981B) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Mobile Safari/537.36"
192.168.121.1 - - [19/Mar/2025:16:42:40 +0000] "GET / HTTP/1.1" 200 396 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.0 Safari/605.1.15"
# Start the filebeat instance
root@elk2:~# filebeat -e -c /etc/filebeat/config/07-module-nginx-to-es.yaml
# The collected results include both access and error logs; to collect only access logs, adjust the nginx module config /etc/filebeat/modules.d/nginx.yml
- module: nginx
  # Access logs
  access:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/nginx/access.log"]
  # Error logs
  error:
    enabled: false    # changed from true to false
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
  # Ingress-nginx controller logs. This is disabled by default. It could be used in Kubernetes environments to parse ingress-nginx logs
  ingress_controller:
    enabled: false
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
Analyzing PV in Kibana
PV: page view, the number of page requests.
One request counts as one PV.
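What the Kibana PV visualization computes is essentially a date-histogram aggregation over the nginx index; a hedged DSL equivalent (index pattern module_nginx-* from the config above, hourly interval chosen arbitrarily):
curl -s '192.168.121.21:9200/module_nginx-*/_search?pretty' \
--header 'Content-Type: application/json' \
--data '{
  "size": 0,
  "aggs": {
    "pv_per_hour": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "1h" }
    }
  }
}'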
Analyzing IPs in Kibana
Analyzing bandwidth in Kibana
Building a Dashboard in Kibana
Analyzing devices in Kibana
Analyzing operating-system share in Kibana
Analyzing global user distribution in Kibana
Collecting Tomcat logs with Filebeat
Deploying Tomcat
[root@elk2 ~]# wget https://dlcdn.apache.org/tomcat/tomcat-11/v11.0.5/bin/apache-tomcat-11.0.5.tar.gz
[root@elk2 ~]# tar xf apache-tomcat-11.0.5.tar.gz -C /usr/local
# Set up environment variables
# ES bundles its own JDK; point the environment at it so Tomcat can reuse ES's JDK
# ES's JDK directory:
[root@elk2 ~]# ll /usr/share/elasticsearch/jdk/
# Add the environment variables
[root@elk2 ~]# vim /etc/profile.d/tomcat.sh
[root@elk2 ~]# source /etc/profile.d/tomcat.sh
[root@elk2 ~]# cat /etc/profile.d/tomcat.sh
#!/bin/bash
export JAVA_HOME=/usr/share/elasticsearch/jdk
export TOMCAT_HOME=/usr/local/apache-tomcat-11.0.5
export PATH=$PATH:$JAVA_HOME/bin:$TOMCAT_HOME/bin
[root@elk3 ~]# java -version
openjdk version "22.0.2" 2024-07-16
OpenJDK Runtime Environment (build 22.0.2+9-70)
OpenJDK 64-Bit Server VM (build 22.0.2+9-70, mixed mode, sharing)
# Tomcat's default access-log format carries little information, so edit the Tomcat config to change the log format
# Note: inside server.xml the double quotes within the pattern attribute must be XML-escaped as &quot;
[root@elk3 ~]# vim /usr/local/apache-tomcat-11.0.5/conf/server.xml
...
          <Host name="tomcat.test.com"  appBase="webapps"
                unpackWARs="true" autoDeploy="true">
                <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
            prefix="tomcat.test.com_access_log" suffix=".json"
pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;request&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;http_user_agent&quot;:&quot;%{User-Agent}i&quot;}"/>
          </Host>
# Start tomcat
[root@elk2 ~]# catalina.sh start
[root@elk2 ~]# netstat -tunlp | grep 8080
tcp6       0      0 :::8080                 :::*                    LISTEN      98628/java
# Test access
[root@elk2 ~]# cat /etc/hosts
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.121.92 tomcat.test.com
[root@elk2 ~]# cat /usr/local/apache-tomcat-11.0.5/logs/tomcat.test.com_access_log.2025-03-23.json
{"clientip":"192.168.121.92","ClientUser":"-","authenticated":"-","AccessTime":"[23/Mar/2025:20:55:41 +0800]","request":"GET / HTTP/1.1","status":"200","SendBytes":"11235","Query?string":"","partner":"-","http_user_agent":"curl/7.81.0"}
Configuring Filebeat to monitor Tomcat
# Enable the tomcat module
[root@elk3 ~]# filebeat modules enable tomcat
Enabled tomcat
[root@elk3 ~]# ll /etc/filebeat/modules.d/tomcat.yml
-rw-r--r-- 1 root root 623 Feb 14 00:58 /etc/filebeat/modules.d/tomcat.yml
# Configure the tomcat module
[root@elk3 ~]# cat /etc/filebeat/modules.d/tomcat.yml
# Module: tomcat
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.17/filebeat-module-tomcat.html
- module: tomcat
  log:
    enabled: true
    # Set which input to use between udp (default), tcp or file.
    # var.input: udp
    var.input: file
    # var.syslog_host: tomcat.test.com
    # var.syslog_port: 8080
    # Set paths for the log files when file input is used.
    # var.paths:
    #   - /var/log/tomcat/*.log
    var.paths:
      - /usr/local/apache-tomcat-11.0.5/logs/tomcat.test.com_access_log.2025-03-23.json
    # Toggle output of non-ECS fields (default true).
    # var.rsa_fields: true
    # Set custom timezone offset.
    # "local" (default) for system timezone.
    # "+02:00" for GMT+02:00
    # var.tz_offset: local
# Configure filebeat
[root@elk3 ~]# cat /etc/filebeat/config/02-tomcat-es.yaml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
output.elasticsearch:
  hosts:
  - 192.168.121.91:9200
  - 192.168.121.92:9200
  - 192.168.121.93:9200
  index: test-modules-tomcat-%{+yyyy.MM.dd}
setup.ilm.enabled: false
setup.template.name: "test-modules-tomcat"
setup.template.pattern: "test-modules-tomcat-*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 5
  index.number_of_replicas: 0
# Start filebeat
[root@elk3 ~]# filebeat -e -c /etc/filebeat/config/02-tomcat-es.yaml
filebeat processors
https://www.elastic.co/guide/en/beats/filebeat/7.17/filtering-and-enhancing-data.html
# Our tomcat access logs are JSON (per the pattern configured earlier); to break the JSON out into individual fields, configure a filebeat processor
# The decode_json_fields processor parses JSON
# Official configuration:
processors:
  - decode_json_fields:
      fields: ["field1", "field2", ...]
      process_array: false
      max_depth: 1
      target: ""
      overwrite_keys: false
      add_error_key: true
# fields: which fields to JSON-decode
# process_array: a bool controlling whether arrays are decoded; default false; optional
# max_depth: maximum decode depth, default 1, meaning only the JSON objects in the listed fields are decoded; a value of 2 also decodes objects embedded in the fields of those parsed documents; optional
# target: the field the decoded JSON is written to. By default the decoded JSON object replaces the string field it was read from. To merge the decoded fields into the root of the event, specify an empty string (target: ""). Note that a null value (target:) is treated as an unset field; optional
# overwrite_keys: a bool controlling whether existing keys in the event are overwritten by keys from the decoded JSON object; default false; optional
# add_error_key: if true and JSON decoding fails, an error field with the error message becomes part of the event; if false, the event carries no error field; default false; optional
# Write the filebeat config
[root@elk3 ~]# cat /etc/filebeat/config/02-tomcat-es.yaml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
processors:
  - decode_json_fields:
      fields: ["event.original"]
      process_array: false
      max_depth: 1
      target: ""
      overwrite_keys: false
      add_error_key: true
output.elasticsearch:
  hosts:
  - 192.168.121.91:9200
  - 192.168.121.92:9200
  - 192.168.121.93:9200
  index: test-modules-tomcat-%{+yyyy.MM.dd}
setup.ilm.enabled: false
setup.template.name: "test-modules-tomcat"
setup.template.pattern: "test-modules-tomcat-*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 5
  index.number_of_replicas: 0
# Start filebeat
[root@elk3 ~]# filebeat -e -c /etc/filebeat/config/02-tomcat-es.yaml
# Dropping a field with filebeat
processors:
  - drop_fields:
      when:
        condition
      fields: ["field1", "field2", ...]
      ignore_missing: false

The supported conditions are:
equals
contains
regexp
range
network
has_fields
or
and
not
# Drop the event.module field when status is 404
[root@elk3 ~]# cat /etc/filebeat/config/02-tomcat-es.yaml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
processors:
  - decode_json_fields:
      fields: ["event.original"]
      process_array: false
      max_depth: 1
      target: ""
      overwrite_keys: false
      add_error_key: true
  - drop_fields:
      when:
        equals:
          status: "404"
      fields: ["event.module"]
      ignore_missing: false
output.elasticsearch:
  hosts:
  - 192.168.121.91:9200
  - 192.168.121.92:9200
  - 192.168.121.93:9200
  index: test-modules-tomcat-%{+yyyy.MM.dd}
setup.ilm.enabled: false
setup.template.name: "test-modules-tomcat"
setup.template.pattern: "test-modules-tomcat-*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 5
  index.number_of_replicas: 0
[root@elk3 ~]# filebeat -e -c /etc/filebeat/config/02-tomcat-es.yaml
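When tuning a processor chain it helps to preview the transformed events before they reach ES; swapping the output for the console (the same trick as the first Filebeat example) shows exactly what decode_json_fields and drop_fields produce. The file name below is hypothetical:
[root@elk3 ~]# cat /etc/filebeat/config/debug-processors.yaml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
processors:
  - decode_json_fields:
      fields: ["event.original"]
      target: ""
# Print events to the terminal instead of shipping them
output.console:
  pretty: true
[root@elk3 ~]# filebeat -e -c /etc/filebeat/config/debug-processors.yaml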
Collecting ES cluster logs with Filebeat
# Enable the module
[root@elk1 ~]# filebeat modules enable elasticsearch
Enabled elasticsearch
[root@elk1 ~]# cat /etc/filebeat/config/02-es-log.yml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
output.elasticsearch:
  hosts:
  - 192.168.121.91:9200
  - 192.168.121.92:9200
  - 192.168.121.93:9200
  index: es-log-modules-eslog-%{+yyyy.MM.dd}
setup.ilm.enabled: false
setup.template.name: "es-log"
setup.template.pattern: "es-log-*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 5
  index.number_of_replicas: 0
Collecting MySQL logs with Filebeat
# Deploy MySQL
[root@elk1 ~]# wget https://dev.mysql.com/get/Downloads/MySQL-8.4/mysql-8.4.4-linux-glibc2.28-x86_64.tar.xz
[root@elk1 ~]# tar xf mysql-8.4.4-linux-glibc2.28-x86_64.tar.xz -C /usr/local/
# Prepare the init script and set its paths
[root@elk1 ~]# cp /usr/local/mysql-8.4.4-linux-glibc2.28-x86_64/support-files/mysql.server /etc/init.d/
[root@elk1 ~]# vim /etc/init.d/mysql.server
[root@elk1 ~]# grep -E "^(basedir=|datadir=)" /etc/init.d/mysql.server
basedir=/usr/local/mysql-8.4.4-linux-glibc2.28-x86_64/
datadir=/var/lib/mysql
[root@elk1 ~]# useradd -m mysql
[root@elk1 ~]# install -d /var/lib/mysql -o mysql -g mysql
[root@elk1 ~]# ll -d /var/lib/mysql
drwxr-xr-x 2 mysql mysql 4096 Mar 25 17:05 /var/lib/mysql/
# Prepare the configuration file
[root@elk1 ~]# vim /etc/my.cnf
[root@elk1 ~]# cat /etc/my.cnf
[mysqld]
basedir=/usr/local/mysql-8.4.4-linux-glibc2.28-x86_64/
datadir=/var/lib/mysql
socket=/tmp/mysql80.sock
port=3306
[client]
socket=/tmp/mysql80.sock
# Initialize and start the database
[root@elk1 ~]# vim /etc/profile.d/mysql.sh
[root@elk1 ~]# cat /etc/profile.d/mysql.sh
#!/bin/bash
export MYSQL_HOME=/usr/local/mysql-8.4.4-linux-glibc2.28-x86_64/
export PATH=$PATH:$MYSQL_HOME/bin
[root@elk1 ~]# source /etc/profile.d/mysql.sh
[root@elk1 ~]# mysqld --initialize-insecure --user=mysql --datadir=/var/lib/mysql --basedir=/usr/local/mysql-8.4.4-linux-glibc2.28-x86_64
2025-03-25T09:08:36.829914Z 0 [System] [MY-015017] [Server] MySQL Server Initialization - start.
2025-03-25T09:08:36.842773Z 0 [System] [MY-013169] [Server] /usr/local/mysql-8.4.4-linux-glibc2.28-x86_64/bin/mysqld (mysqld 8.4.4) initializing of server in progress as process 7905
2025-03-25T09:08:36.918780Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2025-03-25T09:08:37.818933Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2025-03-25T09:08:42.504501Z 6 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
2025-03-25T09:08:46.909940Z 0 [System] [MY-015018] [Server] MySQL Server Initialization - end.
[root@elk1 ~]# /etc/init.d/mysql.server start
Starting mysql.server (via systemctl): mysql.server.service.
[root@elk1 ~]# netstat -tunlp | grep 3306
tcp6       0      0 :::3306                 :::*                    LISTEN      8141/mysqld
tcp6       0      0 :::33060                :::*                    LISTEN      8141/mysqld
# Enable filebeat's mysql module
[root@elk1 ~]# filebeat modules enable mysql
Enabled mysql
# Configure filebeat
[root@elk1 ~]# cat /etc/filebeat/config/03-es-mysql-log.yaml
filebeat.config.modules:
  path: ${path.config}/modules.d/mysql.yml
  reload.enabled: true
output.elasticsearch:
  hosts:
  - 192.168.121.91:9200
  - 192.168.121.92:9200
  - 192.168.121.93:9200
  index: es-modules-mysql-%{+yyyy.MM.dd}
setup.ilm.enabled: false
setup.template.name: "es-modules-mysql"
setup.template.pattern: "es-modules-mysql-*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 5
  index.number_of_replicas: 0
# Configure the mysql module
[root@elk1 ~]# cat /etc/filebeat/modules.d/mysql.yml
# Module: mysql
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.17/filebeat-module-mysql.html
- module: mysql
  # Error logs
  error:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    var.paths: ["/var/lib/mysql/elk1.err"]
  # Slow logs
  slowlog:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
# Start the filebeat instance
[root@elk1 ~]# filebeat -e -c /etc/filebeat/config/03-es-mysql-log.yaml
Collecting Redis logs with Filebeat
# Install redis
[root@elk1 ~]# apt install -y redis
# Redis log file location
[root@elk1 ~]# cat /var/log/redis/redis-server.log
8618:C 25 Mar 2025 17:18:37.442 # WARNING supervised by systemd - you MUST set appropriate values for TimeoutStartSec and TimeoutStopSec in your service unit.
8618:C 25 Mar 2025 17:18:37.442 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
8618:C 25 Mar 2025 17:18:37.442 # Redis version=6.0.16, bits=64, commit=00000000, modified=0, pid=8618, just started
8618:C 25 Mar 2025 17:18:37.442 # Configuration loaded
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 6.0.16 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 8618
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'
8618:M 25 Mar 2025 17:18:37.446 # Server initialized
8618:M 25 Mar 2025 17:18:37.446 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
8618:M 25 Mar 2025 17:18:37.447 * Ready to accept connections
# Enable the redis module
[root@elk1 ~]# filebeat modules enable redis
Enabled redis
[root@elk1 ~]# cat /etc/filebeat/config/04-es-redis-log.yaml
filebeat.config.modules:
  path: ${path.config}/modules.d/redis.yml
  reload.enabled: true
output.elasticsearch:
  hosts:
  - 192.168.121.91:9200
  - 192.168.121.92:9200
  - 192.168.121.93:9200
  index: es-modules-redis-%{+yyyy.MM.dd}
setup.ilm.enabled: false
setup.template.name: "es-modules-redis"
setup.template.pattern: "es-modules-redis-*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 5
  index.number_of_replicas: 0
# Start the filebeat instance
[root@elk1 ~]# filebeat -e -c /etc/filebeat/config/04-es-redis-log.yaml
Filebeat multiline merging
# Manage multiline messages
#
parsers:
- multiline:
    type: pattern
    pattern: '^\['
    negate: true
    match: after

multiline.type
        Defines which aggregation method to use. The default is pattern. The other option is count, which lets you aggregate a constant number of lines.
multiline.pattern
        Specifies the regular expression pattern to match. Note that the regexp patterns supported by Filebeat differ somewhat from the patterns supported by Logstash. See Regular expression support for a list of supported regexp patterns. Depending on how you configure other multiline options, lines that match the specified regular expression are considered either continuations of a previous line or the start of a new multiline event. You can set the negate option to negate the pattern.
multiline.negate
        Defines whether the pattern is negated. The default is false.
multiline.match
        Specifies how Filebeat combines matching lines into an event. The settings are after or before. The behavior of these settings depends on what you specify for negate:
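A quick worked example of how pattern, negate and match interact (a minimal sketch, assuming a log where every event starts with a [timestamp] line, matching the '^\[' pattern above):

# Input lines in the log file:
[2025-03-25 17:00:01] ERROR something failed
    at com.example.Foo.bar()
    at com.example.Main.main()
[2025-03-25 17:00:05] INFO recovered

# With pattern '^\[', negate: true, match: after, every line that does NOT
# start with '[' is appended after the preceding matching line, so Filebeat
# emits two events:
#   event 1: the ERROR line plus its two indented stack-trace lines
#   event 2: the INFO line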
Managing multiline Redis log messages
# Use multiline message handling to improve the redis log collection rules
# type: filestream is the replacement for the older log input type
[root@elk1 ~]# cat /etc/filebeat/config/04-es-redis-log.yaml
filebeat.inputs:
- type: filestream
  paths:
    - /var/log/redis/redis-server.log*
  # Configure the parsers
  parsers:
    # Define multiline matching
  - multiline:
      # Specify the match type
      type: pattern
      # Define the match pattern
      pattern: '^\d'
      # Reference: https://www.elastic.co/guide/en/beats/filebeat/current/multiline-examples.html
      negate: true
      match: after
output.elasticsearch:
  hosts:
  - 192.168.121.91:9200
  - 192.168.121.92:9200
  - 192.168.121.93:9200
  index: es-modules-redis-%{+yyyy.MM.dd}
setup.ilm.enabled: false
setup.template.name: "es-modules-redis"
setup.template.pattern: "es-modules-redis-*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 5
  index.number_of_replicas: 0
[root@elk1 ~]# filebeat -e -c /etc/filebeat/config/04-es-redis-log.yaml
# The ASCII-art banner in the redis log is now folded into a single event
Managing multiline Tomcat error log messages
# tomcat error log path: /usr/local/apache-tomcat-11.0.5/logs/catalina.*
[root@elk2 ~]# cat /etc/filebeat/config/01-es-cluster-tomcat.yml
filebeat.inputs:
- type: filestream
  paths:
    - /usr/local/apache-tomcat-11.0.5/logs/catalina*
  parsers:
  - multiline:
      type: pattern
      pattern: '^\d'
      negate: true
      match: after
output.elasticsearch:
  hosts:
  - 192.168.121.91:9200
  - 192.168.121.92:9200
  - 192.168.121.93:9200
  index: test-modules-tomcat-elk2-%{+yyyy.MM.dd}
setup.ilm.enabled: false
setup.template.name: "test-modules-tomcat-elk2"
setup.template.pattern: "test-modules-tomcat-elk2*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 5
  index.number_of_replicas: 0
Running multiple Filebeat instances
1. Start instance 1
filebeat -e -c /etc/filebeat/config/01-log-to-console.yaml  --path.data /tmp/xixi
2. Start instance 2
filebeat -e -c /etc/filebeat/config/02-log-to-es.yaml  --path.data /tmp/haha
# Collect /var/log/syslog and /var/log/auth.log with filebeat
root@elk2:~# cat /etc/filebeat/config/05-log-to-syslog.yaml
# Define where the data comes from
filebeat.inputs:
  # The input type is log, i.e. read data from a file
- type: log
  # Path of the file
  paths:
    - /var/log/syslog
# Define where the data goes (the ES cluster)
output.elasticsearch:
  hosts:
    - 192.168.121.21:9200
    - 192.168.121.22:9200
    - 192.168.121.23:9200
  # Custom index name
  index: "test_syslog-%{+yyyy.MM.dd}"
# Disable index lifecycle management (ILM)
# If ILM stays enabled, all custom index settings are ignored
setup.ilm.enabled: false
# Whether to overwrite an existing index template; defaults to false. Set it to true only when you really need it:
# the official advice is false, since every overwrite sets up a tcp connection and costs resources.
setup.template.overwrite: true
# Name of the index template (the rule set used to create indices)
setup.template.name: "test_syslog"
# Index template pattern, i.e. which indices this template applies to
setup.template.pattern: "test_syslog-*"
# Settings carried by the index template
setup.template.settings:
  # Number of shards
  index.number_of_shards: 5
  # Number of replicas per shard
  index.number_of_replicas: 0
root@elk2:~# cat /etc/filebeat/config/06-log-to-auth.yaml
# Define where the data comes from
filebeat.inputs:
  # The input type is log, i.e. read data from a file
- type: log
  # Path of the file
  paths:
    - /var/log/auth.log
# Define where the data goes (the ES cluster)
output.elasticsearch:
  hosts:
    - 192.168.121.21:9200
    - 192.168.121.22:9200
    - 192.168.121.23:9200
  # Custom index name
  index: "test_auth-%{+yyyy.MM.dd}"
# Disable index lifecycle management (ILM)
# If ILM stays enabled, all custom index settings are ignored
setup.ilm.enabled: false
# Whether to overwrite an existing index template; defaults to false. Set it to true only when you really need it:
# the official advice is false, since every overwrite sets up a tcp connection and costs resources.
setup.template.overwrite: true
# Name of the index template (the rule set used to create indices)
setup.template.name: "test_auth"
# Index template pattern, i.e. which indices this template applies to
setup.template.pattern: "test_auth-*"
# Settings carried by the index template
setup.template.settings:
  # Number of shards
  index.number_of_shards: 5
  # Number of replicas per shard
  index.number_of_replicas: 0
# Start filebeat as two separate instances
root@elk2:~# filebeat -e -c /etc/filebeat/config/05-log-to-syslog.yaml --path.data /tmp/xixi
root@elk2:~# filebeat -e -c /etc/filebeat/config/06-log-to-auth.yaml --path.data /tmp/haha
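Each instance needs its own --path.data because the data directory holds that instance's registry (read offsets) plus a lock file, so two filebeat processes cannot share one. After both are running, each data path should contain its own state, roughly like this (file names as typically seen on 7.x; treat the listing as illustrative):

root@elk2:~# ls /tmp/xixi /tmp/haha
/tmp/haha:
filebeat.lock  meta.json  registry
/tmp/xixi:
filebeat.lock  meta.json  registry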
Analyzing a web cluster with EFK
Deploying the web cluster
1. Deploy the tomcat servers
# Deploy tomcat on 192.168.121.92 and 192.168.121.93
Follow the tomcat deployment steps from the "collecting tomcat logs with filebeat" chapter
2. Deploy nginx
# Deploy nginx on 192.168.121.91
[root@elk1 ~]# apt install -y nginx
[root@elk1 ~]# vim /etc/nginx/nginx.conf
...
upstream es-web {
    server 192.168.121.92:8080;
    server 192.168.121.93:8080;
}
server {
    server_name es.web.com;
    location / {
        proxy_pass http://es-web;
    }
}
...
[root@elk1 ~]# nginx -t
[root@elk1 ~]# systemctl restart nginx
# Access test
[root@elk1 ~]# curl es.web.com
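The final curl only works if es.web.com resolves to the nginx node; the Ansible playbook later in this chapter does exactly this with an /etc/hosts entry, for example:

[root@elk1 ~]# echo "192.168.121.91 es.web.com" >> /etc/hosts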
Collecting the web cluster logs
# elk1 (91) enables the nginx module
# elk2 (92) and elk3 (93) enable the tomcat module
[root@elk1 ~]# filebeat modules enable nginx
Enabled nginx
[root@elk2 ~]# filebeat modules enable tomcat
Enabled tomcat
[root@elk3 ~]# filebeat modules enable tomcat
Enabled tomcat
1. Configure the nginx module
[root@elk1 ~]# cat /etc/filebeat/modules.d/nginx.yml
# Module: nginx
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.17/filebeat-module-nginx.html
- module: nginx
  # Access logs
  access:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: /var/log/nginx/access.log
  # Error logs
  error:
    enabled: false
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
  # Ingress-nginx controller logs. This is disabled by default. It could be used in Kubernetes environments to parse ingress-nginx logs
  ingress_controller:
    enabled: false
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
2. Configure the tomcat module
[root@elk2 ~]# cat /etc/filebeat/modules.d/tomcat.yml
# Module: tomcat
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.17/filebeat-module-tomcat.html
- module: tomcat
  log:
    enabled: true
    # Set which input to use between udp (default), tcp or file.
    var.input: file
    # var.syslog_host: localhost
    # var.syslog_port: 9501
    # Set paths for the log files when file input is used.
    var.paths:
      - /usr/local/apache-tomcat-11.0.5/logs/*.json
    # Toggle output of non-ECS fields (default true).
    # var.rsa_fields: true
    # Set custom timezone offset.
    # "local" (default) for system timezone.
    # "+02:00" for GMT+02:00
    # var.tz_offset: local
3. Filebeat config on 91 (nginx)
[root@elk1 ~]# cat /etc/filebeat/config/01-es-web-nginx.yaml
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: true
output.elasticsearch:
  hosts:
  - 192.168.121.91:9200
  - 192.168.121.92:9200
  - 192.168.121.93:9200
  index: es-web-nginx-%{+yyyy.MM.dd}
setup.ilm.enabled: false
setup.template.name: "es-web-nginx"
setup.template.pattern: "es-web-nginx-*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 5
  index.number_of_replicas: 0
4. Filebeat config on 92 (tomcat)
[root@elk2 ~]# cat /etc/filebeat/config/01-es-web-tomcat.yaml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
processors:
  - decode_json_fields:
      fields: ["event.original"]
      process_array: false
      max_depth: 1
      target: ""
      overwrite_keys: false
      add_error_key: true
  - drop_fields:
      when:
        equals:
          status: "404"
      fields: ["event.module"]
      ignore_missing: false
output.elasticsearch:
  hosts:
  - 192.168.121.91:9200
  - 192.168.121.92:9200
  - 192.168.121.93:9200
  index: test-modules-tomcat91-%{+yyyy.MM.dd}
setup.ilm.enabled: false
setup.template.name: "test-modules-tomcat91"
setup.template.pattern: "test-modules-tomcat91-*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 5
  index.number_of_replicas: 0
5. Filebeat config on 93 (tomcat)
[root@elk3 ~]# cat /etc/filebeat/config/02-es-web-tomcat.yaml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
processors:
  - decode_json_fields:
      fields: ["event.original"]
      process_array: false
      max_depth: 1
      target: ""
      overwrite_keys: false
      add_error_key: true
  - drop_fields:
      when:
        equals:
          status: "404"
      fields: ["event.module"]
      ignore_missing: false
output.elasticsearch:
  hosts:
  - 192.168.121.91:9200
  - 192.168.121.92:9200
  - 192.168.121.93:9200
  index: test-modules-tomcat93-%{+yyyy.MM.dd}
setup.ilm.enabled: false
setup.template.name: "test-modules-tomcat93"
setup.template.pattern: "test-modules-tomcat93-*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 5
  index.number_of_replicas: 0
# Start filebeat on each node in turn
Changing field types
We want to aggregate bandwidth, but right now we cannot:
the byte count is stored as a string value.
# Change field types with processors
# Config shape from the official docs
# Supported types
# The supported types include: integer, long, float, double, string, boolean, and ip.
processors:
  - convert:
      fields:
        - {from: "src_ip", to: "source.ip", type: "ip"}
        - {from: "src_port", to: "source.port", type: "integer"}
      ignore_missing: true
      fail_on_error: false
# Filebeat config
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
processors:
  - decode_json_fields:
      fields: ["event.original"]
      process_array: false
      max_depth: 1
      target: ""
      overwrite_keys: false
      add_error_key: true
  - convert:
      fields:
        - {from: "SendBytes", type: "long"}
  - drop_fields:
      when:
        equals:
          status: "404"
      fields: ["event.module"]
      ignore_missing: false
output.elasticsearch:
  hosts:
  - 192.168.121.91:9200
  - 192.168.121.92:9200
  - 192.168.121.93:9200
  index: test-modules-tomcat91-%{+yyyy.MM.dd}
setup.ilm.enabled: false
setup.template.name: "test-modules-tomcat91"
setup.template.pattern: "test-modules-tomcat91-*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 5
  index.number_of_replicas: 0
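Whether the conversion took effect can be checked against the index mapping; a quick look, assuming the index name above (output trimmed and illustrative):

[root@elk2 ~]# curl -s "192.168.121.91:9200/test-modules-tomcat91-*/_mapping?pretty" | grep -A 2 '"SendBytes"'
        "SendBytes" : {
          "type" : "long"
        },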
Deploying the EFK cluster with Ansible
[root@ansible efk]# cat set_es.sh
#!/bin/bash
ansible-playbook 01-install-elaticsearch.yaml
ansible-playbook 02-install-kibana.yaml
ansible-playbook 03-install-filebeat.yaml
ansible-playbook 04-set-web.yaml
ansible-playbook 05-config-filebeat.yaml
[root@ansible efk]# bash set_es.sh
[root@ansible efk]# cat 01-install-elaticsearch.yaml
---
- name: Install es cluster
  hosts: all
  tasks:
    - name: get es deb package
      get_url:
        url: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.28-amd64.deb
        dest: /root/
    - name: Install es
      shell:
        cmd: dpkg -i /root/elasticsearch-7.17.28-amd64.deb | cat
    - name: Configure es
      copy:
        src: conf/elasticsearch.yml
        dest: /etc/elasticsearch/elasticsearch.yml
    - name: start es
      systemd:
        name: elasticsearch
        state: started
        enabled: yes
[root@ansible efk]# cat 02-install-kibana.yaml
---
- name: Install kibana
  hosts: elk1
  tasks:
    - name: Get kibana deb package
      get_url:
        url: https://artifacts.elastic.co/downloads/kibana/kibana-7.17.28-amd64.deb
        dest: /root
    - name: Install kibana
      shell:
        cmd: dpkg -i kibana-7.17.28-amd64.deb | cat
    - name: Config kibana
      copy:
        src: conf/kibana.yml
        dest: /etc/kibana/kibana.yml
    - name: Start kibana
      systemd:
        name: kibana
        state: started
        enabled: yes
[root@ansible efk]# cat 03-install-filebeat.yaml
---
- name: Install filebeat
  hosts: elk
  tasks:
    - name: Get filebeat code
      get_url:
        url: https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.28-amd64.deb
        dest: /root
    - name: Install filebeat
      shell:
        cmd: dpkg -i filebeat-7.17.28-amd64.deb | cat
    - name: Configure filebeat
      file:
        path: /etc/filebeat/config
        state: directory
[root@ansible efk]# cat 04-set-web.yaml
---
- name: Set nginx
  hosts: elk1
  tasks:
    - name: Install nginx
      shell:
        cmd: apt install -y nginx | cat
    - name: config nginx
      copy:
        src: conf/nginx.conf
        dest: /etc/nginx/nginx.conf
    - name: start nginx
      systemd:
        name: nginx
        state: started
        enabled: yes
    - name: Configure hosts
      copy:
        content: 192.168.121.91 es.web.com
        dest: /etc/hosts
- name: Set tomcat
  hosts: elk2,elk3
  tasks:
    - name: Get tomcat code
      get_url:
        url: https://dlcdn.apache.org/tomcat/tomcat-11/v11.0.5/bin/apache-tomcat-11.0.5.tar.gz
        dest: /root/
    - name: unarchive tomcat code
      unarchive:
        src: /root/apache-tomcat-11.0.5.tar.gz
        dest: /usr/local
        remote_src: yes
    - name: Configure jdk PATH
      copy:
        src: conf/tomcat.sh
        dest: /etc/profile.d/
    - name: reload profile
      shell:
        cmd: source /etc/profile.d/tomcat.sh | cat
    - name: Configure tomcat
      copy:
        src: conf/server.xml
        dest: /usr/local/apache-tomcat-11.0.5/conf/server.xml
    - name: start tomcat
      shell:
        cmd: catalina.sh  start |cat
[root@ansible efk]# cat 05-config-filebeat.yaml
---
- name: configure filebeat
  hosts: elk1
  tasks:
    - name: enable nginx modules
      shell:
        cmd: filebeat modules enable nginx | cat
    - name: configure nginx modules
      copy:
        src: conf/nginx.yml
        dest: /etc/filebeat/modules.d/nginx.yml
    - name: configure filebeat
      copy:
        src: conf/01-es-cluster-nginx.yml
        dest: /etc/filebeat/config/01-es-cluster-nginx.yml
- name: configure filebeat
  hosts: elk2,elk3
  tasks:
    - name: enable tomcat modules
      shell:
        cmd: filebeat modules enable tomcat | cat
    - name: configure tomcat modules
      copy:
        src: conf/tomcat.yml
        dest: /etc/filebeat/modules.d/tomcat.yml
    - name: configure filebeat
      template:
        src: conf/01-es-cluster-tomcat.yml.j2
        dest: /etc/filebeat/config/01-es-cluster-tomcat.yml
logstash
Installing and configuring Logstash
1. Install logstash
[root@elk3 ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-7.17.28-amd64.deb
[root@elk3 ~]# dpkg -i logstash-7.17.28-amd64.deb
2. Create a symlink so the logstash command is on PATH
[root@elk3 ~]# ln -svf  /usr/share/logstash/bin/logstash /usr/local/bin/
'/usr/local/bin/logstash' -> '/usr/share/logstash/bin/logstash'
3. Start an instance from the command line, passing the config inline with -e (not recommended)
[root@elk3 ~]# logstash -e "input { stdin { type => stdin } } output { stdout { codec => rubydebug } }"  --log.level warn
...
The stdin plugin is now waiting for input:
111111111111111111111111111
{
    "@timestamp" => 2025-03-13T06:51:32.821Z,
          "type" => "stdin",
       "message" => "111111111111111111111111111",
          "host" => "elk93",
      "@version" => "1"
}
4. Start Logstash from a config file
[root@elk3 ~]# vim /etc/logstash/conf.d/01-stdin-to-stdout.conf
[root@elk3 ~]# cat /etc/logstash/conf.d/01-stdin-to-stdout.conf
input {
  stdin {
    type => stdin
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
[root@elk3 ~]# logstash -f /etc/logstash/conf.d/01-stdin-to-stdout.conf
...
333333333333333333333333333333
{
          "type" => "stdin",
       "message" => "333333333333333333333333333333",
          "host" => "elk93",
    "@timestamp" => 2025-03-13T06:54:20.223Z,
      "@version" => "1"
}
# https://www.elastic.co/guide/en/logstash/7.17/plugins-inputs-file.html
[root@elk3 ~]# cat /etc/logstash/conf.d/02-file-to-stdout.conf
input {
  file {
    path => "/tmp/student.txt"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
[WARN ] 2025-03-26 09:40:52.788 [[main]<file] plain - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
{
          "path" => "/tmp/student.txt",
      "@version" => "1",
    "@timestamp" => 2025-03-26T01:40:52.879Z,
       "message" => "aaaddd",
          "host" => "elk3"
}
How Logstash collects text logs
Logstash's collection strategy is similar to filebeat's:
        1. It reads line by line, with the newline character as the delimiter.
        2. It also keeps an offset, like filebeat's registry, stored in a sincedb file.
[root@elk3 ~]# ll /usr/share/logstash/data/plugins/inputs/file/.sincedb_782d533684abe27068ac85b78871b9fd
-rw-r--r-- 1 root root 53 Mar 26 09:57 /usr/share/logstash/data/plugins/inputs/file/.sincedb_782d533684abe27068ac85b78871b9fd
[root@elk3 ~]# cat /usr/share/logstash/data/plugins/inputs/file/.sincedb_782d533684abe27068ac85b78871b9fd
408794 0 64768 12 1742955373.9715059 /tmp/student.txt  # 12 is the byte offset
[root@elk3 ~]# cat /tmp/student.txt
ABC
2025def
[root@elk3 ~]# ll -i /tmp/student.txt
408794 -rw-r--r-- 1 root root 12 Mar 26 09:45 /tmp/student.txt
# You can edit the offset directly to resume collection from a chosen position.
# Here we set the offset to 8 and look at what gets collected:
{
    "@timestamp" => 2025-03-26T02:20:50.776Z,
          "host" => "elk3",
       "message" => "def",
          "path" => "/tmp/student.txt",
      "@version" => "1"
}
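To rewind like that: stop Logstash, rewrite the fourth column of the sincedb line (the offset), then start it again. A hypothetical sed one-liner matching the sincedb line above:

[root@elk3 ~]# sed -i 's/^408794 0 64768 12/408794 0 64768 8/' /usr/share/logstash/data/plugins/inputs/file/.sincedb_782d533684abe27068ac85b78871b9fd
[root@elk3 ~]# logstash -f /etc/logstash/conf.d/02-file-to-stdout.conf
# bytes 8..11 of "ABC\n2025def\n" are "def\n", hence the "def" event shown above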
start_position
With filebeat, deleting its registry file makes the next run start from the beginning of the file; Logstash does not behave that way.
[root@elk3 ~]# rm -f /usr/share/logstash/data/plugins/inputs/file/.sincedb_782d533684abe27068ac85b78871b9fd
[root@elk3 ~]# logstash -f /etc/logstash/conf.d/02-file-to-stdout.conf
# By default it still only picks up data appended at the end
[root@elk3 ~]# echo 123 >> /tmp/student.txt
[root@elk3 ~]# cat /tmp/student.txt
ABC
2025def
123
{
      "@version" => "1",
    "@timestamp" => 2025-03-26T02:26:17.008Z,
       "message" => "123",
          "host" => "elk3",
          "path" => "/tmp/student.txt"
}
# This is what the start_position parameter is for
start_position
Value can be any of: beginning, end
Default value is "end"
[root@elk3 ~]# cat /etc/logstash/conf.d/02-file-to-stdout.conf
input {
  file {
    path => "/tmp/student.txt"
    start_position => "beginning"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
[root@elk3 ~]# rm -f /usr/share/logstash/data/plugins/inputs/file/.sincedb_782d533684abe27068ac85b78871b9fd
[root@elk3 ~]# logstash -f /etc/logstash/conf.d/02-file-to-stdout.conf
{
      "@version" => "1",
          "host" => "elk3",
       "message" => "2025def",
          "path" => "/tmp/student.txt",
    "@timestamp" => 2025-03-26T02:31:50.020Z
}
{
      "@version" => "1",
          "host" => "elk3",
       "message" => "ABC",
          "path" => "/tmp/student.txt",
    "@timestamp" => 2025-03-26T02:31:49.813Z
}
{
      "@version" => "1",
          "host" => "elk3",
       "message" => "123",
          "path" => "/tmp/student.txt",
    "@timestamp" => 2025-03-26T02:31:50.037Z
}
filter plugins
# Logstash events carry many fields; filter plugins let you drop the ones you don't want
# Remove the @version field
[root@elk3 ~]# cat /etc/logstash/conf.d/02-file-to-stdout.conf
input {
  file {
    path => "/tmp/student.txt"
    start_position => "beginning"
  }
}
filter {
  mutate {
    remove_field => [ "@version" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
# Starting logstash with -r enables config reloading
[root@elk3 ~]# logstash -r -f /etc/logstash/conf.d/02-file-to-stdout.conf
{
    "@timestamp" => 2025-03-26T03:01:02.078Z,
          "host" => "elk3",
       "message" => "111",
          "path" => "/tmp/student.txt"
}
Logstash architecture
Running multiple Logstash instances
Start instance 1:
[root@elk93 ~]# logstash -f /etc/logstash/conf.d/01-stdin-to-stdout.conf
Start instance 2:
[root@elk93 ~]# logstash -rf /etc/logstash/conf.d/02-file-to-stdout.conf  --path.data /tmp/logstash-multiple
Logstash and pipelines
- A Logstash instance can run multiple pipelines; if no pipeline id is defined, the default is the main pipeline.
- Every pipeline is composed of three components, of which the filter plugins are optional (see the minimal skeleton below):
        - input:  where the data comes from.
        - filter: which plugins process the data; this component is optional.
        - output: where the data goes.
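A minimal pipeline skeleton with all three components (a sketch; stdin/stdout stand in for any real input and output):

input {
  stdin {}
}
filter {
  # optional: transform or enrich events here
  mutate {
    add_field => { "pipeline" => "demo" }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}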
Collecting nginx logs with Logstash
1. Install nginx
[root@elk3 ~]# apt install -y nginx
2. Collect the nginx logs with logstash
[root@elk3 ~]# cat /etc/logstash/conf.d/03-nginx-grok.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
  }
}
filter {
  mutate {
    remove_field => [ "@version" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
[root@elk3 ~]# logstash -r -f /etc/logstash/conf.d/03-nginx-grok.conf
{
          "host" => "elk3",
       "message" => "127.0.0.1 - - [26/Mar/2025:14:43:58 +0800] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.81.0\"",
    "@timestamp" => 2025-03-26T06:45:24.375Z,
          "path" => "/var/log/nginx/access.log"
}
{
          "host" => "elk3",
       "message" => "127.0.0.1 - - [26/Mar/2025:14:43:57 +0800] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.81.0\"",
    "@timestamp" => 2025-03-26T06:45:24.293Z,
          "path" => "/var/log/nginx/access.log"
}
{
          "host" => "elk3",
       "message" => "127.0.0.1 - - [26/Mar/2025:14:43:58 +0800] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.81.0\"",
    "@timestamp" => 2025-03-26T06:45:24.373Z,
          "path" => "/var/log/nginx/access.log"
}
grok plugins
# Regex-based field extraction
Logstash ships with about 120 patterns by default. You can find them here: https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns.
[root@elk3 ~]# cat  /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.3.4/patterns/legacy/httpd
HTTPDUSER %{EMAILADDRESS}|%{USER}
HTTPDERROR_DATE %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{YEAR}
# Log formats
HTTPD_COMMONLOG %{IPORHOST:clientip} %{HTTPDUSER:ident} %{HTTPDUSER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" (?:-|%{NUMBER:response}) (?:-|%{NUMBER:bytes})
HTTPD_COMBINEDLOG %{HTTPD_COMMONLOG} %{QS:referrer} %{QS:agent}
# Error logs
HTTPD20_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[%{LOGLEVEL:loglevel}\] (?:\[client %{IPORHOST:clientip}\] ){0,1}%{GREEDYDATA:message}
HTTPD24_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[(?:%{WORD:module})?:%{LOGLEVEL:loglevel}\] \[pid %{POSINT:pid}(:tid %{NUMBER:tid})?\]( \(%{POSINT:proxy_errorcode}\)%{DATA:proxy_message}:)?( \[client %{IPORHOST:clientip}:%{POSINT:clientport}\])?( %{DATA:errorcode}:)? %{GREEDYDATA:message}
HTTPD_ERRORLOG %{HTTPD20_ERRORLOG}|%{HTTPD24_ERRORLOG}
# Deprecated
COMMONAPACHELOG %{HTTPD_COMMONLOG}
COMBINEDAPACHELOG %{HTTPD_COMBINEDLOG}
# Configure logstash
[root@elk3 ~]# cat /etc/logstash/conf.d/03-nginx-grok.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
  }
}
filter {
  mutate {
    remove_field => [ "@version" ]
  }
  # Extract arbitrary text with regular expressions and wrap it into named fields, using the prebuilt patterns
  grok {
        match => { "message" => "%{HTTPD_COMMONLOG}" }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
[root@elk3 ~]# logstash -r -f /etc/logstash/conf.d/03-nginx-grok.conf
{
        "message" => "192.168.121.1 - - [26/Mar/2025:14:52:06 +0800] \"GET / HTTP/1.1\" 200 396 \"-\" \"Mozilla/5.0 (Linux; Android 8.0.0; SM-G955U Build/R16NW) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Mobile Safari/537.36\"",
           "path" => "/var/log/nginx/access.log",
        "request" => "/",
       "clientip" => "192.168.121.1",
           "host" => "elk3",
      "timestamp" => "26/Mar/2025:14:52:06 +0800",
           "auth" => "-",
           "verb" => "GET",
       "response" => "200",
          "ident" => "-",
    "httpversion" => "1.1",
     "@timestamp" => 2025-03-26T06:52:07.342Z,
          "bytes" => "396"
}
useragent plugins
Extracts the user's device information from the User-Agent string.
[root@elk3 ~]# cat /etc/logstash/conf.d/03-nginx-grok.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
  }
}
filter {
  mutate {
    remove_field => [ "@version" ]
  }
  grok {
        match => { "message" => "%{HTTPD_COMMONLOG}" }
  }
  useragent {
    # Which field to parse the device information from
    source => 'message'
    # Store the parsed result in a dedicated field; if unset, the fields land at the top level.
    target => "xu-ua"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
[root@elk3 ~]# logstash -r -f /etc/logstash/conf.d/03-nginx-grok.conf
{
        "message" => "192.168.121.1 - - [26/Mar/2025:16:45:10 +0800] \"GET / HTTP/1.1\" 200 396 \"-\" \"Mozilla/5.0 (Linux; Android 13; SM-G981B) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Mobile Safari/537.36\"",
       "clientip" => "192.168.121.1",
      "timestamp" => "26/Mar/2025:16:45:10 +0800",
        "request" => "/",
          "bytes" => "396",
           "verb" => "GET",
    "httpversion" => "1.1",
     "@timestamp" => 2025-03-26T08:45:11.587Z,
           "host" => "elk3",
           "auth" => "-",
          "xu-ua" => {
              "name" => "Chrome Mobile",
           "version" => "134.0.0.0",
                "os" => "Android",
           "os_name" => "Android",
        "os_version" => "13",
            "device" => "Samsung SM-G981B",
           "os_full" => "Android 13",
             "minor" => "0",
          "os_major" => "13",
             "patch" => "0",
             "major" => "134"
    },
          "ident" => "-",
           "path" => "/var/log/nginx/access.log",
       "response" => "200"
}
geoip plugins
Derives latitude/longitude coordinates from a public IP address.
[root@elk3 ~]# cat /etc/logstash/conf.d/03-nginx-grok.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
  }
}
filter {
  mutate {
    remove_field => [ "@version" ]
  }
  grok {
        match => { "message" => "%{HTTPD_COMMONLOG}" }
  }
  useragent {
    source => 'message'
    target => "xu-ua"
  }
  geoip {
        source => "clientip"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
[root@elk3 ~]# logstash -r -f /etc/logstash/conf.d/03-nginx-grok.conf
"geoip" => {
             "longitude" => -119.705,
         "country_code2" => "US",
           "region_name" => "Oregon",
              "timezone" => "America/Los_Angeles",
                    "ip" => "52.222.36.125",
        "continent_code" => "NA",
         "country_code3" => "US",
              "latitude" => 45.8401,
          "country_name" => "United States",
              "dma_code" => 810,
           "postal_code" => "97818",
           "region_code" => "OR",
              "location" => {
            "lat" => 45.8401,
            "lon" => -119.705
        },
             "city_name" => "Boardman"
    }
date plugins
[root@elk3 ~]# cat /etc/logstash/conf.d/03-nginx-grok.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
  }
}
filter {
  mutate {
    remove_field => [ "@version" ]
  }
  grok {
        match => { "message" => "%{HTTPD_COMMONLOG}" }
  }
  useragent {
    source => 'message'
    target => "xu-ua"
  }
  geoip {
        source => "clientip"
  }
  date {
    # Parse the date field into a proper timestamp before it is stored in ES;
    # match the format against the official examples:
    # https://www.elastic.co/guide/en/logstash/7.17/plugins-filters-date.html#plugins-filters-date-match
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    # Write the parsed date into the given target field; if unset, it overwrites "@timestamp" by default.
    target => "xu-timestamp"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
[root@elk3 ~]# logstash -r -f /etc/logstash/conf.d/03-nginx-grok.conf
"xu-timestamp" => 2025-03-26T09:17:18.000Z,
mutate plugins
To aggregate bandwidth we find that "bytes" => "396"
is a string value, which cannot be summed, so we use the mutate plugin to convert its type.
[root@elk3 ~]# cat /etc/logstash/conf.d/03-nginx-grok.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
  }
}
filter {
  mutate {
    convert => {
      "bytes" => "integer"
    }
    remove_field => [ "@version" ]
  }
  grok {
        match => { "message" => "%{HTTPD_COMMONLOG}" }
  }
  useragent {
    source => 'message'
    target => "xu-ua"
  }
  geoip {
        source => "clientip"
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    target => "xu-timestamp"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Shipping collected logs from Logstash to ES
[root@elk3 ~]# cat /etc/logstash/conf.d/08-nginx-to-es.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{HTTPD_COMMONLOG}" }
  }
  useragent {
    source => "message"
    target => "xu_user_agent"
  }
  geoip {
    source => "clientip"
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    target => "xu-timestamp"
  }
  # Convert selected fields
  mutate {
    # Cast the field to the type we need
    convert => {
      "bytes" => "integer"
    }

    remove_field => [ "@version","host","message" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
      # Hosts of the target ES cluster
      hosts => ["10.0.0.91:9200","10.0.0.92:9200","10.0.0.93:9200"]
      # Index name in the ES cluster
      index => "xu-elk-nginx"
  }
}
Known issue:
        Failed (timed out waiting for connection to open). Sleeping for 0.02
Description:
        On ElasticStack 7.17.28, Logstash may fail to write to ES with the timeout above.
TODO:
        Check whether an upstream change means extra configuration is now required for writes to succeed.
Temporary workarounds:
        - Roll back to version 7.17.23.
        - Comment out the geoip configuration.
Fixing slow geoip lookups when writing to ES
The official docs show that the geoip filter can point at a local database; specifying the bundled database explicitly avoids the slow lookup.
1. Inspect the geoip databases bundled with Logstash
[root@elk3 ~]# tree /usr/share/logstash/data/plugins/filters/geoip/1742980310/
/usr/share/logstash/data/plugins/filters/geoip/1742980310/
├── COPYRIGHT.txt
├── elastic-geoip-database-service-agreement-LICENSE.txt
├── GeoLite2-ASN.mmdb
├── GeoLite2-ASN.tgz
├── GeoLite2-City.mmdb
├── GeoLite2-City.tgz
├── LICENSE.txt
└── README.txt
0 directories, 8 files
2. Configure logstash
[root@elk3 ~]# cat /etc/logstash/conf.d/04-nginx-to-es.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
  }
}
filter {
  mutate {
    convert => {
      "bytes" => "integer"
    }
    remove_field => [ "@version" ]
  }
  grok {
        match => { "message" => "%{HTTPD_COMMONLOG}" }
  }
  useragent {
    source => 'message'
    target => "xu-ua"
  }
  geoip {
        source => "clientip"
        database => "/usr/share/logstash/data/plugins/filters/geoip/1742980310/GeoLite2-City.mmdb"
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    target => "xu-timestamp"
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
        index => "xu-logstash"
        hosts => ["http://192.168.121.91:9200","http://192.168.121.92:9200","http://192.168.121.93:9200"]
  }
}
Fixing the geoip.location data type
At this point the latitude/longitude are stored as float fields, so Kibana cannot draw a map from them; geoip.location needs the geo_point type.
Create an index template in Kibana before the index is (re)created.
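A sketch of such a template, using the legacy template API of 7.x (run in Kibana Dev Tools; the template name is illustrative, and the pattern matches the xu-logstash index used above):

PUT _template/xu-logstash
{
  "index_patterns": ["xu-logstash*"],
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 0
  },
  "mappings": {
    "properties": {
      "geoip": {
        "properties": {
          "location": { "type": "geo_point" }
        }
      }
    }
  }
}
# Delete and re-ingest the index afterwards so the new mapping takes effect.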
The ELFK architecture
A json plugin example
graph LR
filebeat --->|send| logstash
Logstash receives the data collected by filebeat, so logstash must be started before filebeat;
in other words, the logstash input plugin type is beats.
# Configure logstash
[root@elk3 ~]# grep -v "^#" /etc/logstash/conf.d/05-beat-es.conf
input {
  beats {
      port => 5044
  }
}
filter {
  mutate {
    remove_field => [ "@version","host","agent","ecs","tags","input","log" ]
  }
  json {
     source => "message"
   }
}
output {
  stdout {
    codec => rubydebug
  }
}
# Configure filebeat
[root@elk1 ~]# cat /etc/filebeat/config/05-json.yaml
filebeat.inputs:
- type: filestream
  paths:
    - /tmp/student.json
output.logstash:
  hosts: ["192.168.121.93:5044"]
# Start logstash first, then filebeat
[root@elk3 conf.d]# logstash -rf 05-beat-es.conf
[root@elk3 ~]# netstat -tunlp | grep 5044
tcp6       0      0 :::5044                 :::*                    LISTEN      120181/java
# Then start filebeat
[root@elk1 ~]# filebeat  -e -c /etc/filebeat/config/05-json.yaml
# Prepare the test data
{
  "name":"aaa",
  "hobby":["写小说","唱歌"]
}
{
  "name":"bbb",
  "hobby":["健身","台球","打豆豆"]
}
{
  "name":"ccc",
  "hobby":["乒乓球","游泳","游戏"]
}
{
   "name": "ddd",
   "hobby": ["打游戏","打篮球"]
}
# Check the result: filebeat reads line by line, so one JSON document comes out as several events
"message" => "   \"name\": \"ddd\",",
"message" => "   \"hobby\": [\"打游戏\",\"打篮球\"]",
...
# Filebeat's multiline merging fixes this
[root@elk1 ~]# cat /etc/filebeat/config/05-json.yaml
filebeat.inputs:
- type: filestream
  paths:
    - /tmp/student.json
  parsers:
    - multiline:
        type: count
        count_lines: 4
output.logstash:
  hosts: ["192.168.121.93:5044"]
# Inspect the collected data
{
       "message" => "{\n  \"name\":\"aaa\",\n  \"hobby\":[\"写小说\",\"唱歌\"]\n}",
          "name" => "aaa",
         "hobby" => [
        [0] "写小说",
        [1] "唱歌"
    ],
    "@timestamp" => 2025-03-27T07:46:14.390Z
}
Writing to ES
[root@elk3 ~]# grep -v "^#" /etc/logstash/conf.d/05-beat-es.conf
input {
  beats {
      port => 5044
  }
}
filter {
  mutate {
    remove_field => [ "@version","host","agent","ecs","tags","input","log" ]
  }
  json {
     source => "message"
   }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
        hosts => ["http://192.168.121.91:9200"]
  }
}
ELFK walkthrough: an e-commerce metrics case study
1. Generate test data
[root@elk1 ~]# cat gen-log.py
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
# @author : Jason Yin
import datetime
import random
import logging
import time
import sys

LOG_FORMAT = "%(levelname)s %(asctime)s [com.oldboyedu.%(module)s] - %(message)s "
DATE_FORMAT = "%Y-%m-%d %H:%M:%S"
# Basic config for the root logging.Logger instance
logging.basicConfig(level=logging.INFO, format=LOG_FORMAT, datefmt=DATE_FORMAT, filename=sys.argv[1], filemode='a')
actions = ["浏览页面", "评论商品", "加入收藏", "加入购物车", "提交订单", "使用优惠券", "领取优惠券", "搜索", "查看订单", "付款", "清空购物车"]
while True:
    time.sleep(random.randint(1, 5))
    user_id = random.randint(1, 10000)
    # Keep 2 decimal places on the generated float
    price = round(random.uniform(15000, 30000), 2)
    action = random.choice(actions)
    svip = random.choice([0, 1, 2])
    logging.info("DAU|{0}|{1}|{2}|{3}".format(user_id, action, svip, price))
[root@elk1 ~]#  python3 gen-log.py /tmp/apps.log
2. Inspect the data
[root@elk1 ~]# tail -f /tmp/apps.log
...
INFO 2025-03-27 17:03:10 [com.oldboyedu.gen-log] - DAU|7973|加入购物车|0|19300.65
INFO 2025-03-27 17:03:13 [com.oldboyedu.gen-log] - DAU|8617|加入购物车|2|19720.57
INFO 2025-03-27 17:03:14 [com.oldboyedu.gen-log] - DAU|6879|搜索|2|24774.85
INFO 2025-03-27 17:03:19 [com.oldboyedu.gen-log] - DAU|804|付款|2|21352.22
INFO 2025-03-27 17:03:22 [com.oldboyedu.gen-log] - DAU|3014|清空购物车|0|19908.62
...
# Start the logstash instance
[root@elk3 conf.d]# cat 06-beats_apps-to-es.conf
input {
  beats {
      port => 9999
  }
}
filter {
  mutate {
    split => { "message" => "|" }
    add_field => {
      "other" => "%{[message][0]}"
      "userId" => "%{[message][1]}"
      "action" => "%{[message][2]}"
      "svip" => "%{[message][3]}"
      "price" => "%{[message][4]}"
    }
  }
  mutate {
    split => { "other" => " " }
    add_field => {
       datetime => "%{[other][1]} %{[other][2]}"
    }
    convert => {
       "price" => "float"
     }
    remove_field => [ "@version","host","agent","ecs","tags","input","log","message","other"]
  }
}
output {
#  stdout {
#    codec => rubydebug
#  }
  elasticsearch {
     index => "linux96-logstash-elfk-apps"
     hosts => ["http://192.168.121.91:9200","http://192.168.121.92:9200","http://192.168.121.93:9200"]
  }
}
# Start the filebeat instance
[root@elk1 ~]# cat /etc/filebeat/config/06-filestream-to-logstash.yml
filebeat.inputs:
- type: filestream
  paths:
    - /tmp/apps.log
output.logstash:
  hosts: ["192.168.121.93:9999"]
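With the split/add_field rules above, each DAU line should come out of the pipeline roughly as the following event (illustrative; @timestamp is the ingest time, since this config has no date filter):

{
    "@timestamp" => 2025-03-27T09:03:10.123Z,
        "userId" => "7973",
        "action" => "加入购物车",
          "svip" => "0",
         "price" => 19300.65,
      "datetime" => "2025-03-27 17:03:10"
}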
The ELK architecture
Logstash if statements
Logstash supports if statements: with several inputs you can apply different filters to, and route different outputs from, each input.
# Configure logstash if
[root@elk3 ~]# cat /etc/logstash/conf.d/08-logstash-if.conf
input {
  beats {
    port => 9999
    type => "xu-filebeat"
  }
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
    type => "xu-file"
  }
  tcp {
    port => 8888
    type => "xu-tcp"
  }
}
filter {
  if [type] == "xu-tcp" {
    mutate {
      add_field => {
        school => "school1"
        class => "one"
      }
      remove_field => [ "@version","port"]
    }
  } else if [type] == "xu-filebeat" {
    mutate {
      split => { "message" => "|" }
      add_field => {
        "other" => "%{[message][0]}"
        "userId" => "%{[message][1]}"
        "action" => "%{[message][2]}"
        "svip" => "%{[message][3]}"
        "price" => "%{[message][4]}"
        "address" => "1.1.1.1"
      }
    }
    mutate {
      split => { "other" => " " }
      add_field => {
        datetime => "%{[other][1]} %{[other][2]}"
      }
      convert => {
        "price" => "float"
      }
      remove_field => [ "@version","host","agent","ecs","tags","input","log","message","other"]
    }
    date {
      match => [ "datetime", "yyyy-MM-dd HH:mm:ss" ]
    }
  } else {
    grok {
      match => { "message" => "%{HTTPD_COMMONLOG}" }
    }
    useragent {
      source => "message"
      target => "xu_user_agent"
    }
    geoip {
      source => "clientip"
      database => "/usr/share/logstash/data/plugins/filters/geoip/CC/GeoLite2-City.mmdb"
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
      target => "xu-timestamp"
    }
    mutate {
      convert => {
        "bytes" => "integer"
      }
      add_field => {
        office => "https://studylinux.cn"
      }
      remove_field => [ "@version","host","message" ]
    }
  }
}
output {
  if [type] == "xu-filebeat" {
    elasticsearch {
      index => "xu-logstash-if-filebeat"
      hosts => ["http://10.0.0.91:9200","http://10.0.0.92:9200","http://10.0.0.93:9200"]
    }
  } else if [type] == "xu-tcp" {
    elasticsearch {
      index => "xu-logstash-if-tcp"
      hosts => ["http://10.0.0.91:9200","http://10.0.0.92:9200","http://10.0.0.93:9200"]
    }
  } else {
    elasticsearch {
      index => "xu-logstash-if-file"
      hosts => ["http://10.0.0.91:9200","http://10.0.0.92:9200","http://10.0.0.93:9200"]
    }
  }
}
pipeline
# Location of the pipelines config file
[root@elk3 ~]# ll /etc/logstash/pipelines.yml
-rw-r--r-- 1 root root 285 Feb 18 18:52 /etc/logstash/pipelines.yml
# Edit the pipelines config file
[root@elk3 ~]# tail -4 /etc/logstash/pipelines.yml
- pipeline.id: xixi
  path.config: "/etc/logstash/conf.d/01-file-to-stdout.conf"
- pipeline.id: haha
  path.config: "/etc/logstash/conf.d/03-nginx-grok.conf"
# Start logstash; with pipelines.yml in place, `logstash -r` needs no -f config file
[root@elk3 ~]# logstash  -r
        # This fails immediately: ERROR: Failed to read pipelines yaml file. Location: /usr/share/logstash/config/pipelines.yml
        # logstash looks for the file under /usr/share/logstash/config/pipelines.yml by default; a symlink fixes that
# Create the symlink
[root@elk3 ~]# mkdir /usr/share/logstash/config/
[root@elk3 ~]# ln -svf /etc/logstash/pipelines.yml  /usr/share/logstash/config/
'/usr/share/logstash/config/pipelines.yml' -> '/etc/logstash/pipelines.yml'
[root@elk3 ~]# logstash  -r
...
[INFO ] 2025-03-29 10:16:50.372 [[xixi]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"xixi"}
[INFO ] 2025-03-29 10:16:54.380 [[haha]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"haha"}
...
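pipelines.yml also accepts per-pipeline tuning, which is a key reason to split work into pipelines; a sketch with illustrative worker/batch values:

- pipeline.id: xixi
  path.config: "/etc/logstash/conf.d/01-file-to-stdout.conf"
  pipeline.workers: 1
- pipeline.id: haha
  path.config: "/etc/logstash/conf.d/03-nginx-grok.conf"
  pipeline.workers: 2
  pipeline.batch.size: 250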
ES cluster security
Basic-auth security
Securing the ES cluster
# Before security is configured, anonymous access works
[root@elk1 ~]# curl 127.1:9200/_cat/nodes
192.168.121.92  6 94 0 0.05 0.03 0.00 cdfhilmrstw - elk2
192.168.121.91 22 95 4 0.25 0.27 0.25 cdfhilmrstw * elk1
192.168.121.93 29 94 6 0.02 0.25 0.48 cdfhilmrstw - elk3
1. Generate the certificate file
[root@elk1 ~]# /usr/share/elasticsearch/bin/elasticsearch-certutil cert -out /etc/elasticsearch/elastic-certificates.p12 -pass ""
...
Certificates written to /etc/elasticsearch/elastic-certificates.p12
This file should be properly secured as it contains the private key for
your instance.
This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.
2. Copy the certificate to the other nodes
[root@elk1 ~]# chmod 640 /etc/elasticsearch/elastic-certificates.p12
[root@elk1 ~]# scp -r /etc/elasticsearch/elastic-certificates.p12  192.168.121.92:/etc/elasticsearch/elastic-certificates.p12
[root@elk1 ~]# scp -r /etc/elasticsearch/elastic-certificates.p12  192.168.121.93:/etc/elasticsearch/elastic-certificates.p12
3. Update the ES config and sync it to all nodes
[root@elk1 ~]# tail -5 /etc/elasticsearch/elasticsearch.yml
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
[root@elk1 ~]# scp /etc/elasticsearch/elasticsearch.yml  192.168.121.92:/etc/elasticsearch/elasticsearch.yml
[root@elk1 ~]# scp /etc/elasticsearch/elasticsearch.yml  192.168.121.93:/etc/elasticsearch/elasticsearch.yml
4. Restart ES
[root@elk1 ~]# systemctl restart elasticsearch.service
# Anonymous access is now rejected
[root@elk1 ~]# curl 127.1:9200
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}
5. Generate random passwords
[root@elk1 ~]# /usr/share/elasticsearch/bin/elasticsearch-setup-passwords  auto
Changed password for user apm_system
PASSWORD apm_system = aBsQ3WI9ydUVTx2hk2JT
Changed password for user kibana_system
PASSWORD kibana_system = xoMBWbFyYmadDyrYcwyI
Changed password for user kibana
PASSWORD kibana = xoMBWbFyYmadDyrYcwyI
Changed password for user logstash_system
PASSWORD logstash_system = fWx19jXFHinpcraglh8E
Changed password for user beats_system
PASSWORD beats_system = NgKipgH0LfnFGFAazun6
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = Af4hu6PrhPYvn2S5zcEj
Changed password for user elastic
PASSWORD elastic = 0Nj2dpMTSNYurPqQHInA
# (the credentials used below come from a different run of setup-passwords, hence the different values)
[root@elk1 ~]# curl -u elastic:MSfRhWKA3lRhufYpxF9u 127.1:9200/_cat/nodes
192.168.121.91 40 96 22 0.62 0.74 0.53 cdfhilmrstw - elk1
192.168.121.92 17 96 20 0.44 0.67 0.36 cdfhilmrstw * elk2
192.168.121.93 23 96 32 0.54 1.00 0.73 cdfhilmrstw - elk3
6. Connect Kibana to ES
        6.1 Update the kibana config
[root@elk1 ~]# tail -2 /etc/kibana/kibana.yml
elasticsearch.username: "kibana_system"
elasticsearch.password: "47UD4ZOypuWO100QciH4"
        6.2 Restart kibana
[root@elk1 ~]# systemctl restart kibana.service
        6.3 Open Kibana in the browser
Resetting the ES password
ES ships a root-like superuser role; create a user with that role, then use it to change the elastic password.
1. Create a superuser account
[root@elk1 ~]# /usr/share/elasticsearch/bin/elasticsearch-users useradd xu -p 123456 -r superuser
2. Change the password with that account
[root@elk1 ~]# curl -s --user xu:123456 -XPUT "http://localhost:9200/_xpack/security/user/elastic/_password?pretty" -H 'Content-Type: application/json' -d'
     {
       "password" : "654321"
     }'

[root@elk1 ~]# curl -uelastic:654321 127.1:9200/_cat/nodes
192.168.121.91 35 96 7 0.38 0.43 0.52 cdfhilmrstw - elk1
192.168.121.92 20 96 2 0.20 0.20 0.25 cdfhilmrstw * elk2
192.168.121.93 27 97 5 0.10 0.18 0.38 cdfhilmrstw - elk3
Filebeat with ES authentication
[root@elk1 ~]# cat /etc/filebeat/config/07-tcp-to-es_tls.yaml
filebeat.inputs:
- type: tcp
  host: "0.0.0.0:9000"
output.elasticsearch:
  hosts:
  - 192.168.121.91:9200
  - 192.168.121.92:9200
  - 192.168.121.93:9200
  # Username for the ES cluster
  username: "elastic"
  # Password for the ES cluster
  password: "654321"
  index: xu-es-tls-filebeat
setup.ilm.enabled: false
setup.template.name: "xu-es-tls-filebeat"
setup.template.pattern: "xu-es-tls-filebeat-*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 0
Logstash with ES authentication
[root@elk3 ~]# cat /etc/logstash/conf.d/09-tcp-to-es_tls.conf
input {
  tcp {
    port => 8888
  }
}
output {
  elasticsearch {
    hosts => ["192.168.121.91:9200","192.168.121.92:9200","192.168.121.93:9200"]
    index => "oldboyedu-logstash-tls-es"
    user => elastic
    password => "654321"
  }
}
api-key
Why enable api-keys?
        Authenticating with a username and password exposes user credentials.
        ElasticSearch also supports api-key authentication, which keeps those credentials safe: an api-key cannot be used to log in to Kibana.
        On top of that, api-keys can carry fine-grained permissions.
The api-key feature is off by default in elasticsearch and has to be enabled in the config file.
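Once enabled, a client authenticates by sending the base64 encoding of id:api_key in the Authorization header; for example, with the encoded key that is decoded in the "Creating an API key" section below:

[root@elk1 ~]# curl -H "Authorization: ApiKey TzBCTzY1VUJiWUdnVHlBNjZRTXc6eE9JWW9wT3dTT09Sam1UNE5RYnRjUQ==" 127.1:9200/_cat/nodes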
Enabling the ES api-key feature
[root@elk1 ~]# tail /etc/elasticsearch/elasticsearch.yml
# Enable the api_key feature
xpack.security.authc.api_key.enabled: true
# Hashing algorithm for API keys
xpack.security.authc.api_key.hashing.algorithm: pbkdf2
# How long API keys are cached
xpack.security.authc.api_key.cache.ttl: 1d
# Upper limit on the number of cached API keys
xpack.security.authc.api_key.cache.max_keys: 10000
# Hash algorithm for the in-memory API key credential cache
xpack.security.authc.api_key.cache.hash_algo: ssha256
[root@elk1 ~]# !scp
scp /etc/elasticsearch/elasticsearch.yml  192.168.121.93:/etc/elasticsearch/elasticsearch.yml
root@192.168.121.93's password:
elasticsearch.yml                                                                                                                                          100% 4270   949.6KB/s   00:00
[root@elk1 ~]# scp /etc/elasticsearch/elasticsearch.yml  192.168.121.92:/etc/elasticsearch/elasticsearch.yml
root@192.168.121.92's password:
elasticsearch.yml
[root@elk1 ~]# systemctl restart elasticsearch.service
Creating an API key
# Decode an api-key (the encoded value below was generated beforehand; the next section shows how to create one via the ES API)
[root@elk1 ~]# echo "TzBCTzY1VUJiWUdnVHlBNjZRTXc6eE9JWW9wT3dTT09Sam1UNE5RYnRjUQ==" | base64 -d ;echo
O0BO65UBbYGgTyA66QMw:xOIYopOwSOORjmT4NQbtcQ
# Configure filebeat
[root@elk1 ~]# cat /etc/filebeat/config/07-tcp-to-es_tls.yaml
filebeat.inputs:
- type: tcp
  host: "0.0.0.0:9000"
output.elasticsearch:
  hosts:
  - 192.168.121.91:9200
  - 192.168.121.92:9200
  - 192.168.121.93:9200
  #username: "elastic"
  #password: "654321"
  api_key: zvWA4JUBqFmHNaf3P8bM:d-goeFONRPelMuRxSr2Bxg
  index: xu-es-tls-filebeat
setup.ilm.enabled: false
setup.template.name: "xu-es-tls-filebeat"
setup.template.pattern: "xu-es-tls-filebeat-*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 0
[root@elk1 ~]# filebeat -e -c /etc/filebeat/config/07-tcp-to-es_tls.yaml
Creating an api-key with permission control via the ES API
References:
https://www.elastic.co/guide/en/beats/filebeat/7.17/beats-api-keys.html
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/security-privileges.html#privileges-list-cluster
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/security-privileges.html#privileges-list-indices
1. Create the api-key
# Request (Dev Tools console syntax)
POST /_security/api_key
{
  "name": "jasonyin2020",
  "role_descriptors": {
    "filebeat_monitoring": {
      "cluster": ["all"],
      "index": [
        {
          "names": ["xu-es-apikey*"],
          "privileges": ["create_index", "create"]
        }
      ]
    }
  }
}
# Response
{
  "id" : "0vXs4ZUBqFmHNaf3s8Zn",
  "name" : "jasonyin2020",
  "api_key" : "y1Vi5fL6RfGy_B47YWBXcw",
  "encoded" : "MHZYczRaVUJxRm1ITmFmM3M4Wm46eTFWaTVmTDZSZkd5X0I0N1lXQlhjdw=="
}
# Decode
[root@elk1 ~]# echo MHZYczRaVUJxRm1ITmFmM3M4Wm46eTFWaTVmTDZSZkd5X0I0N1lXQlhjdw== | base64 -d  ;echo
0vXs4ZUBqFmHNaf3s8Zn:y1Vi5fL6RfGy_B47YWBXcw
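Note that Beats wants the decoded id:api_key form in its api_key setting, not the base64 "encoded" value from the response; wired into the filebeat output it would look like this (values taken from the response above):

output.elasticsearch:
  api_key: "0vXs4ZUBqFmHNaf3s8Zn:y1Vi5fL6RfGy_B47YWBXcw"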
https
Configuring HTTPS for the ES cluster
1. Create a self-signed CA
[root@elk1 ~]# /usr/share/elasticsearch/bin/elasticsearch-certutil ca --out /etc/elasticsearch/elastic-stack-ca.p12 --pass ""
[root@elk1 ~]# ll /etc/elasticsearch/elastic-stack-ca.p12
-rw------- 1 root elasticsearch 2672 Mar 29 20:44 /etc/elasticsearch/elastic-stack-ca.p12
2. Issue the ES certificate from the CA
[root@elk1 ~]# /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca /etc/elasticsearch/elastic-stack-ca.p12 --out /etc/elasticsearch/elastic-certificates-https.p12 --pass "" --days 3650 --ca-pass ""
[root@elk1 ~]# ll /etc/elasticsearch/elastic-stack-ca.p12
-rw------- 1 root elasticsearch 2672 Mar 29 20:44 /etc/elasticsearch/elastic-stack-ca.p12
[root@elk1 ~]# ll /etc/elasticsearch/elastic-certificates-https.p12
-rw------- 1 root elasticsearch 3596 Mar 29 20:48 /etc/elasticsearch/elastic-certificates-https.p12
3. Update the config files
[root@elk1 ~]# tail -2 /etc/elasticsearch/elasticsearch.yml
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: elastic-certificates-https.p12
[root@elk1 ~]# chmod 640 /etc/elasticsearch/elastic-certificates-https.p12
[root@elk1 ~]# scp  -rp /etc/elasticsearch/elastic{-certificates-https.p12,search.yml} 192.168.121.92:/etc/elasticsearch/
root@192.168.121.92's password:
elastic-certificates-https.p12                                                                                                                             100% 3596     1.6MB/s   00:00
elasticsearch.yml                                                                                                                                          100% 4378     6.0MB/s   00:00
[root@elk1 ~]# scp  -rp /etc/elasticsearch/elastic{-certificates-https.p12,search.yml} 192.168.121.93:/etc/elasticsearch/
root@192.168.121.93's password:
elastic-certificates-https.p12                                                                                                                             100% 3596   894.2KB/s   00:00
elasticsearch.yml
4. Restart the ES cluster
[root@elk1 ~]# systemctl restart elasticsearch.service
[root@elk2 ~]# systemctl restart elasticsearch.service
[root@elk3 ~]# systemctl restart elasticsearch.service
[root@elk1 ~]# curl https://127.1:9200/_cat/nodes -u elastic:654321 -k
192.168.121.92 16 94 63 1.88 0.92 0.35 cdfhilmrstw - elk2
192.168.121.91 14 96 30 0.79 0.90 0.55 cdfhilmrstw * elk1
192.168.121.93  8 97 53 1.22 0.71 0.33 cdfhilmrstw - elk3
5. Point Kibana at https and skip verification of the self-signed certificate
[root@elk1 ~]# vim /etc/kibana/kibana.yml
...
# ES cluster endpoints, now https
elasticsearch.hosts: ["https://192.168.121.91:9200","https://192.168.121.92:9200","https://192.168.121.93:9200"]
# Skip certificate verification
elasticsearch.ssl.verificationMode: none
[root@elk1 ~]# systemctl restart kibana.service
复制代码
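Before deciding to skip verification, the certificate the cluster actually serves can be inspected (a sketch; assumes openssl is installed on the host):

# Print subject, issuer and validity of the certificate presented on port 9200
openssl s_client -connect 192.168.121.91:9200 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates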
Filebeat against the https-encrypted cluster
# Filebeat configuration
[root@elk92 filebeat]# cat 17-tcp-to-es-tls.yaml
filebeat.inputs:
- type: tcp
  host: "0.0.0.0:9000"
output.elasticsearch:
  hosts:
  - https://192.168.121.91:9200
  - https://192.168.121.92:9200
  - https://192.168.121.93:9200
  api_key: "m1wPlJUBrDbi_DeiIc-1:RcEw7Mk2QQKH_CGhMBnfbg"
  index: xu-es-apikey-tls-2025
  # TLS towards the ES cluster; certificate verification is skipped here. Default value: full
  # Reference:
  #         https://www.elastic.co/guide/en/beats/filebeat/7.17/configuration-ssl.html#client-verification-mode
  ssl.verification_mode: none
setup.ilm.enabled: false
setup.template.name: "xu"
setup.template.pattern: "xu*"
setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 0
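For anything beyond a lab, a safer option than ssl.verification_mode: none is to export the CA certificate to PEM and let Filebeat verify against it (a sketch; the output path /etc/filebeat/es-ca.pem is an arbitrary choice):

# Export the CA certificate from the PKCS#12 bundle created earlier (empty passphrase)
openssl pkcs12 -in /etc/elasticsearch/elastic-stack-ca.p12 -clcerts -nokeys \
  -passin pass: -out /etc/filebeat/es-ca.pem
# Then, in the Filebeat output, replace ssl.verification_mode: none with:
#   ssl.certificate_authorities: ["/etc/filebeat/es-ca.pem"]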
Logstash against the https-encrypted cluster
[root@elk93 logstash]# cat 13-tcp-to-es_api-key.conf
input {
  tcp {
    port => 8888
  }
}
output {
  elasticsearch {
    hosts => ["192.168.121.91:9200","192.168.121.92:9200","192.168.121.93:9200"]
    index => "xu-api-key"
    #user => elastic
    #password => "123456"
    # Authenticate with an api-key
    api_key => "oFwZlJUBrDbi_DeiLc9O:HWBj0LC2RWiUNTudV-6CBw"

    # api-key authentication requires ssl to be enabled
    ssl => true
    # Skip verification of the self-signed certificate
    ssl_certificate_verification => false
  }
}
[root@elk93 logstash]# logstash -rf 13-tcp-to-es_api-key.conf
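A quick end-to-end check, mirroring the Filebeat test above (a sketch; the message text is arbitrary and elastic:654321 is this lab's credential):

# Push a test event into the Logstash TCP input on elk93
echo "hello logstash tls" | nc -w 1 192.168.121.93 8888
# Verify the event landed in the target index
curl -sk -u elastic:654321 "https://192.168.121.91:9200/xu-api-key/_search?pretty&size=1"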
RBAC via Kibana
Reference:
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/security-privileges.html
Create a role
Create a user
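Both steps can be done in Kibana's Stack Management UI, or equivalently through the security API. A minimal sketch (the role name xu_reader, the user xu_dev and their settings are illustrative):

# Create a role limited to read access on indices matching xu-*
curl -sk -u elastic:654321 -X POST "https://192.168.121.91:9200/_security/role/xu_reader" \
  -H 'Content-Type: application/json' \
  -d '{"indices":[{"names":["xu-*"],"privileges":["read","view_index_metadata"]}]}'
# Create a user and assign the role
curl -sk -u elastic:654321 -X POST "https://192.168.121.91:9200/_security/user/xu_dev" \
  -H 'Content-Type: application/json' \
  -d '{"password":"changeme123","roles":["xu_reader"]}'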
Deploying ES8
Single-node ES8 deployment
Environment:
        192.168.121.191 elk191
        192.168.121.192 elk192
        192.168.121.193 elk193
1.Download and install the es8 package
[root@elk191 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.17.3-amd64.deb
[root@elk191 ~]# dpkg -i elasticsearch-8.17.3-amd64.deb
# es8 ships with https enabled out of the box
--------------------------- Security autoconfiguration information ------------------------------
Authentication and authorization are enabled.
TLS for the transport and HTTP layers is enabled and configured.
The generated password for the elastic built-in superuser is : P0-MRYuCOTFj*4*rGNZk   # password of the built-in elastic superuser
If this node should join an existing cluster, you can reconfigure this with
'/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <token-here>'
after creating an enrollment token on your existing cluster.
You can complete the following actions at any time:
Reset the password of the elastic built-in superuser with
'/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic'.
Generate an enrollment token for Kibana instances with
'/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana'.
Generate an enrollment token for Elasticsearch nodes with
'/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node'.
-------------------------------------------------------------------------------------------------
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
sudo systemctl start elasticsearch.service
2.Start es8
[root@elk191 ~]# systemctl enable elasticsearch.service --now
Created symlink /etc/systemd/system/multi-user.target.wants/elasticsearch.service → /lib/systemd/system/elasticsearch.service.
[root@elk191 ~]# netstat  -tunlp | grep -E "9[2|3]00"
tcp6       0      0 127.0.0.1:9300          :::*                    LISTEN      1669/java
tcp6       0      0 ::1:9300                :::*                    LISTEN      1669/java
tcp6       0      0 :::9200                 :::*                    LISTEN      1669/java
3.Test access (the password here differs from the generated one above; it was presumably reset between runs)
[root@elk191 ~]# curl -u elastic:NVPLcMy0_n8aGL=UGAGc https://127.1:9200 -k
{
  "name" : "elk191",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "-cw1TGvZSau0J2x-ThOJsg",
  "version" : {
    "number" : "8.17.3",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "a091390de485bd4b127884f7e565c0cad59b10d2",
    "build_date" : "2025-02-28T10:07:26.089129809Z",
    "build_snapshot" : false,
    "lucene_version" : "9.12.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
[root@elk191 ~]# curl -u elastic:NVPLcMy0_n8aGL=UGAGc https://127.1:9200/_cat/nodes -k
127.0.0.1 9 97 13 0.35 0.59 0.31 cdfhilmrstw * elk191
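If the generated password is misplaced, it can be reset with the script the installer output points to:

# Generate a new random password for the elastic user
/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
# Or choose the password interactively
/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic -i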
Deploying Kibana 8
1.Download and install kibana
[root@elk191 ~]# wget  https://artifacts.elastic.co/downloads/kibana/kibana-8.17.3-amd64.deb
[root@elk191 ~]# dpkg -i kibana-8.17.3-amd64.deb
2.Configure kibana
[root@elk191 ~]# grep -vE "^$|^#" /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file
pid.file: /run/kibana/kibana.pid
i18n.locale: "zh-CN"
3.Start kibana
[root@elk191 ~]# systemctl enable --now kibana.service
[root@elk191 ~]# ss -ntl | grep 5601
LISTEN 0      511               0.0.0.0:5601      0.0.0.0:*
4.Generate a kibana enrollment token
[root@elk191 ~]# /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
eyJ2ZXIiOiI4LjE0LjAiLCJhZHIiOlsiMTkyLjE2OC4xMjEuMTkxOjkyMDAiXSwiZmdyIjoiZmNjMWI3MzJlNzIwMzMzMjI0ZDc5Zjk1YTUyZjIzZmUyNjMzMzYwZDIxY2Q0NzY3YjQ2ZjExZDhiOGYxZTFlZiIsImtleSI6IjdjNTk3SlVCeEI5S3NHd1ZPWVQ5OmYtN0FRWkhEUTVtMnlCZXdiMnJLbXcifQ==
5.Get the verification code on the kibana server
[root@elk191 ~]# /usr/share/kibana/bin/kibana-verification-code
Your verification code is:  414 756
6.Open http://192.168.121.191:5601 in a browser, paste the enrollment token, then enter the verification code to finish the setup.
ES8 cluster deployment
1.Copy the package to the other nodes
[root@elk191 ~]# scp elasticsearch-8.17.3-amd64.deb 10.0.0.192:~
[root@elk191 ~]# scp elasticsearch-8.17.3-amd64.deb 10.0.0.193:~
2.Install the ES8 package on the other nodes
[root@elk192 ~]# dpkg -i elasticsearch-8.17.3-amd64.deb
[root@elk193 ~]# dpkg -i elasticsearch-8.17.3-amd64.deb
# Configure es8
[root@elk191 ~]# grep -Ev "^$|^#" /etc/elasticsearch/elasticsearch.yml
cluster.name: xu-application
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["192.168.121.191","192.168.121.192","192.168.121.193"]
cluster.initial_master_nodes: ["192.168.121.191","192.168.121.192","192.168.121.193"]
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
http.host: 0.0.0.0
3.Generate an enrollment token on any node of the existing cluster
[root@elk191 ~]#  /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
4.Reconfigure each joining node with that token
[root@elk192 ~]# /usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token  eyJ2ZXIiOiI4LjE0LjAiLCJhZHIiOlsiMTkyLjE2OC4xMjEuMTkxOjkyMDAiXSwiZmdyIjoiMzIwODY0YzMxNmEyMDQ4YmIwYzVjNDNhY2FlZjQ4MTg2OTM3MmVhNTg2NjdiYTAwMjBjN2Y2ZTczN2YzNWU0MCIsImtleSI6IkE3RTY4SlVCU1BhTWhMRFN0VWdlOmdaM0dIS0RNUndld3o3ZWM0Qk1ySEEifQ==
[root@elk193 ~]# /usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token  eyJ2ZXIiOiI4LjE0LjAiLCJhZHIiOlsiMTkyLjE2OC4xMjEuMTkxOjkyMDAiXSwiZmdyIjoiMzIwODY0YzMxNmEyMDQ4YmIwYzVjNDNhY2FlZjQ4MTg2OTM3MmVhNTg2NjdiYTAwMjBjN2Y2ZTczN2YzNWU0MCIsImtleSI6IkE3RTY4SlVCU1BhTWhMRFN0VWdlOmdaM0dIS0RNUndld3o3ZWM0Qk1ySEEifQ==
5.Sync the configuration file
[root@elk191 ~]# scp /etc/elasticsearch/elasticsearch.yml  192.168.121.192:/etc/elasticsearch/
[root@elk191 ~]# scp /etc/elasticsearch/elasticsearch.yml  192.168.121.193:/etc/elasticsearch/
6.Start ES on the joining nodes
[root@elk192 ~]# systemctl enable elasticsearch.service --now
[root@elk193 ~]# systemctl enable elasticsearch.service --now
7.Test access
[root@elk193 ~]# curl -u elastic:123456 -k https://192.168.121.191:9200/_cat/nodes
192.168.121.191 17 97 10 0.61 0.55 0.72 cdfhilmrstw * elk191
192.168.121.193 15 97 55 1.72 1.05 0.49 cdfhilmrstw - elk193
192.168.121.192 13 97  4 0.25 0.45 0.52 cdfhilmrstw - elk192
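The deb package also writes the auto-generated CA to /etc/elasticsearch/certs/http_ca.crt, so the -k flag can be avoided by trusting that CA explicitly (a sketch):

# Verify the HTTPS endpoint against the auto-generated CA instead of skipping checks
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:123456 \
  "https://192.168.121.191:9200/_cat/nodes?v"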
Common errors
Q1:
ERROR: Aborting enrolling to cluster. Unable to remove existing secure settings. Error was: Aborting enrolling to cluster. Unable to remove existing security configuration, elasticsearch.keystore did not contain expected setting [autoconfiguration.password_hash]., with exit code 74
Analysis:
A security configuration already exists on this node. Delete the old "elasticsearch.keystore".
Fix:
rm -f /etc/elasticsearch/elasticsearch.keystore
Q2:
ERROR: Skipping security auto configuration because this node is configured to bootstrap or to join a multi-node cluster, which is not supported., with exit code 80
Fix:
export IS_UPGRADE=false
Q3:
ERROR: Aborting enrolling to cluster. This node doesn't appear to be auto-configured for security. Expected configuration is missing from elasticsearch.yml., with exit code 64
Analysis:
The security-related settings are missing from the configuration file; syncing 'elasticsearch.yml' to this node probably failed.
Fix:
Add the security settings to "/etc/elasticsearch/elasticsearch.yml", e.g. by copying the configuration over from the elk192 node.
If that still fails, diff the elk191 and elk192 configuration files and copy whatever differs; in this lab the certs directory was missing.
[root@elk191 ~]# scp -rp /etc/elasticsearch/certs/ 10.0.0.192:/etc/elasticsearch/
[root@elk191 ~]# scp /etc/elasticsearch/elasticsearch.yml 10.0.0.192:/etc/elasticsearch/
[root@elk191 ~]# scp /etc/elasticsearch/elasticsearch.keystore  10.0.0.192:/etc/elasticsearch/
[root@elk191 ~]# scp -rp /etc/elasticsearch/elasticsearch.yml  10.0.0.192:/etc/elasticsearch/
ERROR: Aborting enrolling to cluster. Unable to remove existing secure settings. Error was: Aborting enrolling to cluster. Unable to remove existing security configuration, elasticsearch.keystore did not contain expected setting [xpack.security.transport.ssl.keystore.secure_password]., with exit code 74
Differences between ES8 and ES7
- ES8 vs ES7 deployment
        1.ES8 enables https, authentication and authorization by default;
        2.ES8 adds the 'elasticsearch-reset-password' script, which makes resetting the elastic user's password much simpler;
        3.ES8 adds the 'elasticsearch-create-enrollment-token' script to create enrollment tokens for components such as Kibana;
        4.Kibana 8 adds 'kibana-verification-code' for generating the enrollment verification code;
        5.Kibana supports more locales: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR";
        6.The Kibana web UI is richer, with an AI assistant, manual index creation and more;
        7.When deploying an ES8 cluster, new nodes join an existing cluster via the 'elasticsearch-reconfigure-node' script; the default configuration is a single-master-node setup.
ES7 JVM tuning
1.By default ES claims half of the machine's physical memory
[root@elk91 ~]# ps -ef | grep java | grep Xms
elastic+   10045       1  2 Mar14 ?        00:56:32 /usr/share/elasticsearch/jdk/bin/java ...  -Xms1937m -Xmx1937m ...
2.Heap sizing rules for an ES cluster
                - Set the JVM heap to half of physical memory, but never above 32GB;
                - e.g. with 32GB of RAM the default half (16GB) is fine, but a 128GB machine would also default to half (64GB), so the heap must be capped manually at 32GB.
3.Set the ES heap to 256MB (lab-sized machines)
[root@elk1 ~]# vim /etc/elasticsearch/jvm.options
[root@elk1 ~]# egrep "^-Xm[s|x]" /etc/elasticsearch/jvm.options
-Xms256m
-Xmx256m
4.Copy the file and perform a rolling restart of the ES7 cluster
[root@elk1 ~]# scp /etc/elasticsearch/jvm.options 192.168.121.92:/etc/elasticsearch/
jvm.options                                          100% 3474     2.7MB/s   00:00
[root@elk1 ~]# scp /etc/elasticsearch/jvm.options 192.168.121.93:/etc/elasticsearch/
jvm.options
[root@elk1 ~]# systemctl restart elasticsearch.service
[root@elk2 ~]# systemctl restart elasticsearch.service
[root@elk3 ~]# systemctl restart elasticsearch.service
5.Verify
[root@elk1 ~]# free -h
               total        used        free      shared  buff/cache   available
Mem:           3.8Gi       1.1Gi       1.9Gi       1.0Mi       800Mi       2.4Gi
Swap:          3.8Gi        26Mi       3.8Gi
[root@elk1 ~]# ps -ef | grep java | grep Xms
-Xms256m -Xmx256m
curl -k -u elastic:123456 https://127.1:9200/_cat/nodes
192.168.121.92 68 67 94 4.01 2.12 0.96 cdfhilmrstw * elk2
192.168.121.91 59 56 42 1.72 0.87 0.43 cdfhilmrstw - elk1
192.168.121.93 63 61 92 3.30 2.26 1.14 cdfhilmrstw - elk3
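Editing jvm.options in place works, but package upgrades may overwrite it; ES 7.x package installs also read drop-in files from /etc/elasticsearch/jvm.options.d/, which survive upgrades (a sketch; the file name heap.options is arbitrary):

# Keep the heap override in a drop-in file instead of editing jvm.options itself
cat > /etc/elasticsearch/jvm.options.d/heap.options <<'EOF'
-Xms256m
-Xmx256m
EOF
systemctl restart elasticsearch.service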