ELK 8.15.3: configuring SSL between Logstash/Kibana and ES for secure HTTPS connections

美食家大橙子 | 2024-11-6 15:11:32
The log server deployed at a client site was flagged by a security-compliance scan for vulnerabilities in ES (Elasticsearch) 7.17.8, so it had to be upgraded to the latest 8.15.3.
ES 8.x enables security by default, that is, the xpack.security.enabled setting in elasticsearch.yml defaults to true, so once ES starts, accessing port 9200 prompts for a username and password.
Below I share, step by step, how to manually set the password for the elastic superuser, and how to have Logstash write data to ES (logstash output) and Kibana read from it (Kibana needs to read ES data to visualize the documents in each index) over HTTPS.
ELK deployment

OS: Ubuntu 22.04 LTS
Deployment: docker-compose (that is, logstash, es and kibana all run as containers on the same Docker network)
Pulling the Docker images

```shell
# Pull es
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.15.3
# Pull logstash
docker pull docker.elastic.co/logstash/logstash:8.15.3
# Pull kibana
docker pull docker.elastic.co/kibana/kibana:8.15.3
```
Once the pulls finish, list the images with the docker images command:

Swapping in the new images

Stop all running containers that use the 7.x images:
```shell
docker-compose down
```
Edit the docker-compose.yaml file and change the image tags to 8.15.3.
Next, edit elasticsearch.yml, the config file on the host that is mapped into the es container. We first set the password for the elastic superuser and sort out the certificates needed for HTTPS later, so for now set xpack.security.transport.ssl.enabled to false (it defaults to true).
Add the following settings:
```yaml
# Enables or disables Elasticsearch's security features
xpack.security.enabled: true
# Enables or disables SSL/TLS on the transport layer (node-to-node traffic);
# left off for now until the certificates are in place
xpack.security.transport.ssl.enabled: false
```
Then bring the stack up with docker-compose:
```shell
docker-compose up -d
```
Setting the elastic superuser's password

Once the elasticsearch container is running normally, try visiting http://ip:9200; a dialog now pops up asking for a username and password (because of xpack.security.enabled: true).
Now get a shell inside the elasticsearch container:
```shell
docker exec -it elasticsearch bash
elasticsearch@elasticsearch:~$ pwd
# Current directory inside the container
/usr/share/elasticsearch
```
There are two ways to set the password, manually or auto-generated:
Setting the password manually

The -i flag runs the tool in interactive mode so you can type the password yourself:
```shell
./bin/elasticsearch-reset-password -u elastic -i
```
Enter the new password you want for the elastic user at the prompt.
Auto-generating a random password

Without -i, Elasticsearch generates a random password automatically:
```shell
./bin/elasticsearch-reset-password -u elastic
```
Suppose the password you set is: p0s9Lb3uThEJfN5T0v6x
Now visit http://ip:9200 again and log in with elastic/p0s9Lb3uThEJfN5T0v6x to get through.
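For reference, that login dialog (and curl's `-u` flag, used later in this post) speaks HTTP Basic authentication, which is nothing more than a base64-encoded `user:password` pair in the `Authorization` header. A minimal sketch of what the client sends, using the example password from this post:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the HTTP Basic Authorization header value that curl -u sends."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

header = basic_auth_header("elastic", "p0s9Lb3uThEJfN5T0v6x")
```

If this header is wrong or missing, ES answers 401, which is exactly what the browser prompt works around.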
Connecting Logstash to es over plain HTTP with the elastic account

Although the end goal is to have logstash and kibana talk to es over https (http+ssl), before sorting out the certificates we can first try connecting logstash to es with just a username and password over plain http. (Kibana cannot do this: with xpack.security.enabled: true it must go through https.)
1. Put the elastic superuser's password into docker-compose.yml

```yaml
services:
  elasticsearch:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch:8.15.3
    container_name: elasticsearch
    hostname: elasticsearch
    privileged: true
    environment:
      - "ES_JAVA_OPTS=-Xms8192m -Xmx8192m"
      - "http.host=0.0.0.0"
      - "node.name=node-01"
      - "cluster.name=cluster-01"
      - "discovery.type=single-node"
      # Pass the password in as an environment variable
      - "ELASTIC_PASSWORD=p0s9Lb3uThEJfN5T0v6x"
```
2. Update logstash.yml and the pipeline configs that output to es, on the host

Add the following to logstash.yml:
```yaml
# Enable xpack monitoring
# ========= after the es version upgrade, credentials are required =========
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "p0s9Lb3uThEJfN5T0v6x"
```
For the pipeline config that processes incoming logs and outputs them to es, the account credentials need to be added.
Below is an example generated by ChatGPT; note that:

  • hosts must carry the http:// prefix;
  • if the output has conditional branches, every branch that writes to es needs the user/password settings.
```
input {
  # Read events from stdin for this example
  stdin {}
}
filter {
  # Custom processing in Ruby
  ruby {
    code => "
      event.set('new_field', event.get('message').upcase)  # uppercase 'message' into 'new_field'
      event.set('timestamp', Time.now)                     # stamp the current time into 'timestamp'
    "
  }
}
output {
  # Write to Elasticsearch
  elasticsearch {
    # Note the http:// prefix; it is not needed while security is disabled
    hosts => ["http://elasticsearch:9200"]
    index => "logstash-ruby-example"
    # Username
    user => "elastic"
    # Password
    password => "p0s9Lb3uThEJfN5T0v6x"
    document_id => "%{[@metadata][fingerprint]}"  # custom document ID (optional)
  }
}
```
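The Ruby filter above only uppercases `message` into `new_field` and stamps the current time. The same event mutation, sketched in Python to make it explicit (field names taken from the pipeline above; `apply_filter` is an illustrative helper, not part of Logstash):

```python
from datetime import datetime, timezone

def apply_filter(event: dict) -> dict:
    """Mirror the Ruby filter: uppercase 'message' into 'new_field', add 'timestamp'."""
    event["new_field"] = event["message"].upper()
    event["timestamp"] = datetime.now(timezone.utc).isoformat()
    return event

event = apply_filter({"message": "hello elk"})
```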
3. Restart the elasticsearch and logstash containers

The restart commands:
```shell
docker-compose restart elasticsearch
docker-compose restart logstash
```
After the restart, check the logstash and elasticsearch container logs for error messages:
```shell
docker logs logstash
docker logs elasticsearch
# To see only the latest N lines, add the --tail flag
docker logs --tail N <logstash-container-name>
```
If the logs are clean, exec into the logstash container and run a quick check to verify it can reach es:
```shell
# Enter the logstash container
docker exec -it logstash /bin/bash
logstash@d996504f0329:~$ pwd
/usr/share/logstash
# Run the check below; on success it prints the es cluster info
curl -u elastic:p0s9Lb3uThEJfN5T0v6x http://elasticsearch:9200
```
Example output on a successful connection to Elasticsearch (generated by ChatGPT):
```json
{
  "name" : "your-node-name",
  "cluster_name" : "your-cluster-name",
  "cluster_uuid" : "uuid-string",
  "version" : {
    "number" : "8.15.3",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "abc12345",
    "build_date" : "2024-01-01T00:00:00Z",
    "build_snapshot" : false,
    "lucene_version" : "9.7.0",
    "minimum_wire_compatibility_version" : "7.10.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
```
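When scripting this check, it is more robust to parse the JSON and assert on the version than to eyeball the output. A small sketch against a trimmed copy of the sample response above:

```python
import json

# Trimmed version of the sample GET / response shown above
sample = '''
{
  "name": "your-node-name",
  "cluster_name": "your-cluster-name",
  "version": {"number": "8.15.3", "build_type": "docker"},
  "tagline": "You Know, for Search"
}
'''

info = json.loads(sample)
major = int(info["version"]["number"].split(".")[0])
assert major >= 8, "expected an 8.x cluster after the upgrade"
```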
You can also feed logstash some real data and check that new documents appear in es, to confirm that logstash can write its processed output successfully.
Generating the CA and certificates in ES

Why is this needed? (answer generated by ChatGPT)
   In Elasticsearch, generating a CA (Certificate Authority) and certificates enables secure communication: SSL/TLS encryption protects both the traffic between Elasticsearch nodes and the traffic between clients and the server.
  The detailed steps (copied from reference [1]):
```shell
# Enter the es container
docker exec -it elasticsearch bash
# Generate the CA certificate; press Enter at the prompts (twice)
bin/elasticsearch-certutil ca
# Generate the node certificate; press Enter at the prompts (three times)
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

# Switch to the config directory
cd config
# Create a certs folder
mkdir certs
# Back to the original directory
cd ..
# Move the certificates into certs
mv elastic-certificates.p12 elastic-stack-ca.p12 config/certs
```
With the two generated certificate files in place under /usr/share/elasticsearch/config/certs in the container, you can exit for now.
Now edit elasticsearch.yml on the host and add the following:
```yaml
# Comment out the line that set xpack.security.transport.ssl.enabled to false
# xpack.security.transport.ssl.enabled: false
# Add the following
xpack.security.enrollment.enabled: true
# SSL on the HTTP layer
xpack.security.http.ssl:
  enabled: true
  keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
  truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
# SSL on the transport layer
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
  truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
```
Generating the HTTP certificate

Why is this needed? (answer generated by ChatGPT)
   Elasticsearch generates an HTTP certificate to protect the HTTP layer with HTTPS (SSL/TLS encryption). The main point is to ensure that when clients (such as a browser, Kibana, or Logstash) talk to Elasticsearch over the REST API, the data is encrypted, with authentication and data integrity.
  The steps below follow reference [2].
Enter the elasticsearch container again and generate the http certificate.
```shell
# Enter the es container
docker exec -it elasticsearch bash
# Generate the HTTP certificate
./bin/elasticsearch-certutil http
```
The tool asks quite a few questions; see the official documentation for details.

The full session is reproduced below for reference:
```
elasticsearch@elasticsearch:~$ ./bin/elasticsearch-certutil http
## Elasticsearch HTTP Certificate Utility
The 'http' command guides you through the process of generating certificates
for use on the HTTP (Rest) interface for Elasticsearch.
This tool will ask you a number of questions in order to generate the right
set of files for your needs.
## Do you wish to generate a Certificate Signing Request (CSR)?
A CSR is used when you want your certificate to be created by an existing
Certificate Authority (CA) that you do not control (that is, you don't have
access to the keys for that CA).
If you are in a corporate environment with a central security team, then you
may have an existing Corporate CA that can generate your certificate for you.
Infrastructure within your organisation may already be configured to trust this
CA, so it may be easier for clients to connect to Elasticsearch if you use a
CSR and send that request to the team that controls your CA.
If you choose not to generate a CSR, this tool will generate a new certificate
for you. That certificate will be signed by a CA under your control. This is a
quick and easy way to secure your cluster with TLS, but you will need to
configure all your clients to trust that custom CA.
Generate a CSR? [y/N]N
## Do you have an existing Certificate Authority (CA) key-pair that you wish to use to sign your certificate?
If you have an existing CA certificate and key, then you can use that CA to
sign your new http certificate. This allows you to use the same CA across
multiple Elasticsearch clusters which can make it easier to configure clients,
and may be easier for you to manage.
If you do not have an existing CA, one will be generated for you.
Use an existing CA? [y/N]y
## What is the path to your CA?
Please enter the full pathname to the Certificate Authority that you wish to
use for signing your new http certificate. This can be in PKCS#12 (.p12), JKS
(.jks) or PEM (.crt, .key, .pem) format.
CA Path: /usr/share/elasticsearch/config/certs/elastic-stack-ca.p12
Reading a PKCS12 keystore requires a password.
It is possible for the keystore's password to be blank,
in which case you can simply press <ENTER> at the prompt
Password for elastic-stack-ca.p12:
## How long should your certificates be valid?
Every certificate has an expiry date. When the expiry date is reached clients
will stop trusting your certificate and TLS connections will fail.
Best practice suggests that you should either:
(a) set this to a short duration (90 - 120 days) and have automatic processes
to generate a new certificate before the old one expires, or
(b) set it to a longer duration (3 - 5 years) and then perform a manual update
a few months before it expires.
You may enter the validity period in years (e.g. 3Y), months (e.g. 18M), or days (e.g. 90D)
For how long should your certificate be valid? [5y]
## Do you wish to generate one certificate per node?
If you have multiple nodes in your cluster, then you may choose to generate a
separate certificate for each of these nodes. Each certificate will have its
own private key, and will be issued for a specific hostname or IP address.
Alternatively, you may wish to generate a single certificate that is valid
across all the hostnames or addresses in your cluster.
If all of your nodes will be accessed through a single domain
(e.g. node01.es.example.com, node02.es.example.com, etc) then you may find it
simpler to generate one certificate with a wildcard hostname (*.es.example.com)
and use that across all of your nodes.
However, if you do not have a common domain name, and you expect to add
additional nodes to your cluster in the future, then you should generate a
certificate per node so that you can more easily generate new certificates when
you provision new nodes.
Generate a certificate per node? [y/N]y
## What is the name of node #1?
This name will be used as part of the certificate file name, and as a
descriptive name within the certificate.
You can use any descriptive name that you like, but we recommend using the name
of the Elasticsearch node.
node #1 name: node-01
## Which hostnames will be used to connect to node-01?
These hostnames will be added as "DNS" names in the "Subject Alternative Name"
(SAN) field in your certificate.
You should list every hostname and variant that people will use to connect to
your cluster over http.
Do not list IP addresses here, you will be asked to enter them later.
If you wish to use a wildcard certificate (for example *.es.example.com) you
can enter that here.
Enter all the hostnames that you need, one per line.
When you are done, press <ENTER> once more to move on to the next step.
elasticsearch
You entered the following hostnames.
 - elasticsearch
Is this correct [Y/n]y
## Which IP addresses will be used to connect to node-01?
If your clients will ever connect to your nodes by numeric IP address, then you
can list these as valid IP "Subject Alternative Name" (SAN) fields in your
certificate.
If you do not have fixed IP addresses, or not wish to support direct IP access
to your cluster then you can just press <ENTER> to skip this step.
Enter all the IP addresses that you need, one per line.
When you are done, press <ENTER> once more to move on to the next step.
You did not enter any IP addresses.
Is this correct [Y/n]y
## Other certificate options
The generated certificate will have the following additional configuration
values. These values have been selected based on a combination of the
information you have provided above and secure defaults. You should not need to
change these values unless you have specific requirements.
Key Name: node-01
Subject DN: CN=node-01
Key Size: 2048
Do you wish to change any of these options? [y/N]n
Generate additional certificates? [Y/n]n
## What password do you want for your private key(s)?
Your private key(s) will be stored in a PKCS#12 keystore file named "http.p12".
This type of keystore is always password protected, but it is possible to use a
blank password.
If you wish to use a blank password, simply press <enter> at the prompt below.
Provide a password for the "http.p12" file:  [<ENTER> for none]
## Where should we save the generated files?
A number of files will be generated including your private key(s),
public certificate(s), and sample configuration options for Elastic Stack products.
These files will be included in a single zip archive.
What filename should be used for the output zip file? [/usr/share/elasticsearch/elasticsearch-ssl-http.zip]
```
The questions that matter:

  • What is the name of node #1?
    The node name entered here must match the node.name configured for Elasticsearch in docker-compose.yml.
```
## What is the name of node #1?
This name will be used as part of the certificate file name, and as a
descriptive name within the certificate.
You can use any descriptive name that you like, but we recommend using the name
of the Elasticsearch node.
node #1 name: node-01
```

  • Which hostnames will be used to connect to node-01?
    Since ES runs as a Docker container and the whole ELK stack shares one docker network, logstash and kibana can reach es directly via its container name, elasticsearch, and es does not need to be reachable from outside. So the hostname here can simply be the container name, elasticsearch.
```
## Which hostnames will be used to connect to node-01?
These hostnames will be added as "DNS" names in the "Subject Alternative Name"
(SAN) field in your certificate.
You should list every hostname and variant that people will use to connect to
your cluster over http.
Do not list IP addresses here, you will be asked to enter them later.
If you wish to use a wildcard certificate (for example *.es.example.com) you
can enter that here.
Enter all the hostnames that you need, one per line.
When you are done, press <ENTER> once more to move on to the next step.
elasticsearch
You entered the following hostnames.
 - elasticsearch
Is this correct [Y/n]y
```
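The reason this hostname matters: during the TLS handshake the client compares the name it dialed (here the container name elasticsearch) against the DNS entries in the certificate's Subject Alternative Name list, with a `*.` wildcard covering exactly one label. A simplified matcher illustrating the rule (`san_matches` is an illustrative helper, not the actual TLS implementation):

```python
def san_matches(hostname: str, san_dns_names: list[str]) -> bool:
    """Simplified SAN check: exact match, or a wildcard on the leftmost label only."""
    for san in san_dns_names:
        if san == hostname:
            return True
        if san.startswith("*."):
            # '*' covers exactly one DNS label
            parts = hostname.split(".", 1)
            if len(parts) == 2 and parts[1] == san[2:]:
                return True
    return False

# logstash/kibana dial the container name, so that exact name must be in the SAN list
assert san_matches("elasticsearch", ["elasticsearch"])
assert not san_matches("es.internal", ["elasticsearch"])
assert san_matches("node01.es.example.com", ["*.es.example.com"])
```

This is why connecting by a name (or IP) that is not in the SAN list fails verification even when the certificate is otherwise valid.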
The generated elasticsearch-ssl-http.zip file is now visible:
```shell
elasticsearch@elasticsearch:~$ ls
LICENSE.txt  NOTICE.txt  README.asciidoc  bin  config  data  elasticsearch-ssl-http.zip  jdk  lib  logs  modules  plugins
# Unzip elasticsearch-ssl-http.zip
elasticsearch@elasticsearch:~$ unzip elasticsearch-ssl-http.zip
Archive:  elasticsearch-ssl-http.zip
   creating: elasticsearch/
  inflating: elasticsearch/README.txt
  inflating: elasticsearch/http.p12
  inflating: elasticsearch/sample-elasticsearch.yml
   creating: kibana/
  inflating: kibana/README.txt
  inflating: kibana/elasticsearch-ca.pem
  inflating: kibana/sample-kibana.yml
```
Unzipping elasticsearch-ssl-http.zip produces two folders, elasticsearch/ and kibana/.
```shell
# The three files under kibana/
elasticsearch@elasticsearch:~/kibana$ ls
README.txt  elasticsearch-ca.pem  sample-kibana.yml
# The three files under elasticsearch/
elasticsearch@elasticsearch:~/elasticsearch$ ls
README.txt  http.p12  sample-elasticsearch.yml
```
Now move /usr/share/elasticsearch/elasticsearch/http.p12 into /usr/share/elasticsearch/config/certs:
```shell
mv elasticsearch/http.p12 /usr/share/elasticsearch/config/certs
```
certs/ now holds three files:
```shell
elasticsearch@elasticsearch:~/config/certs$ ls
elastic-certificates.p12  elastic-stack-ca.p12  http.p12
```
Generating Kibana's certificate files in elasticsearch

```shell
# Generate the kibana CSR (-dns is followed by the kibana container name)
./bin/elasticsearch-certutil csr -name kibana -dns kibana
# Unzip the bundle
elasticsearch@elasticsearch:~$ unzip csr-bundle.zip
Archive:  csr-bundle.zip
  inflating: kibana/kibana.csr
  inflating: kibana/kibana.key
```
Copying the generated files from the es container to kibana's config directory on the host

```shell
# Exit the container
exit
# Go to kibana's config directory on the host
cd /usr/local/config/kibana/config/
# Create a certs folder
mkdir certs
# Copy the certificates elasticsearch generated inside the container into certs
docker cp elasticsearch:/usr/share/elasticsearch/kibana/kibana.csr /usr/local/config/kibana/config/certs
docker cp elasticsearch:/usr/share/elasticsearch/kibana/kibana.key /usr/local/config/kibana/config/certs
# Also copy the elasticsearch-ca.pem generated in the previous step over to kibana
docker cp elasticsearch:/usr/share/elasticsearch/kibana/elasticsearch-ca.pem /usr/local/config/kibana/config/certs
# Enter the certs folder
cd certs
# Generate the crt file
openssl x509 -req -in kibana.csr -signkey kibana.key -out kibana.crt
# The four files now under certs/
elasticsearch-ca.pem
kibana.crt
kibana.csr
kibana.key
```
Copying elasticsearch-ca.pem to logstash's config directory on the host


Why is this needed?
   When configuring the HTTPS connection from Logstash to Elasticsearch, the config file needs to reference elasticsearch-ca.pem as the trusted CA certificate.
```shell
# Go to logstash's config directory on the host
cd /usr/local/config/logstash/config/
# Create a certs folder
mkdir certs
docker cp elasticsearch:/usr/share/elasticsearch/kibana/elasticsearch-ca.pem /usr/local/config/logstash/config/certs
```
Persisting the files under certs in the elasticsearch container to the host

```shell
# Switch to elasticsearch's config directory
cd /usr/local/config/elasticsearch/config
# Copy certs out of the container onto the host
docker cp elasticsearch:/usr/share/elasticsearch/config/certs /usr/local/config/elasticsearch/config
```
Deploying kibana's certificate files

Now sync everything under /usr/local/config/kibana/config/certs on the host into the container.
```shell
# Upload kibana's certs from the host into the container
docker cp /usr/local/config/kibana/config/certs kibana:/usr/share/kibana/config
# Enter the kibana container as root
docker exec -it -u root kibana bash
# Enter the config folder
cd config
# Fix ownership of certs
chown -R kibana certs
```
Check the file permissions inside the container:
```shell
root@286e3f490b30:/usr/share/kibana/config/certs# ll -l
total 24
drwxr-xr-x 2 kibana root 4096 Oct 28 12:42 ./
drwxrwxrwx 3 root   root 4096 Oct 28 13:18 ../
-rw-r--r-- 1 kibana root 1200 Oct 28 08:18 elasticsearch-ca.pem
-rw-r--r-- 1 kibana root  985 Oct 28 12:42 kibana.crt
-rw-r--r-- 1 kibana root  936 Oct 28 12:36 kibana.csr
-rw-r--r-- 1 kibana root 1675 Oct 28 12:36 kibana.key
```
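Besides chown, it is worth confirming the mode bits look like the `-rw-r--r--` listing above, i.e. the files are readable by the service user. A small sketch that checks the world-readable bit (run here against a temp file standing in for the real certs):

```python
import os
import stat
import tempfile

def is_world_readable(path: str) -> bool:
    """True if the 'other' read bit is set, as in the -rw-r--r-- listing above."""
    return bool(os.stat(path).st_mode & stat.S_IROTH)

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o644)  # same mode as the cert files listed above
readable = is_world_readable(path)
os.unlink(path)
```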
Deploying logstash's certificate file

Same procedure as for kibana:
sync everything under /usr/local/config/logstash/config/certs on the host into the container.
```shell
# Upload logstash's certs from the host into the container
docker cp /usr/local/config/logstash/config/certs logstash:/usr/share/logstash/config
# Enter the logstash container as root
docker exec -it -u root logstash bash
# Enter the config folder
cd config
# Fix ownership of certs
chown -R logstash certs
```
Check the permissions on elasticsearch-ca.pem inside the container:
```shell
root@d996504f0329:/usr/share/logstash/config/certs# ll -l
total 24
drwxr-xr-x 2 logstash root 4096 Oct 29 03:11 ./
drwxrwxrwx 3 root     root 4096 Oct 30 03:26 ../
-rw-r--r-- 1 logstash root 1200 Oct 28 15:11 elasticsearch-ca.pem
```
Configuration adjustments

1. logstash.yml: http -> https

```yaml
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "p0s9Lb3uThEJfN5T0v6x"
xpack.monitoring.enabled: true
# Changed from http to https
xpack.monitoring.elasticsearch.hosts: ["https://elasticsearch:9200"]
xpack.monitoring.collection.interval: 10s
log.level: debug
```

  • In the logstash output-to-es pipeline config, enable ssl and point cacert at the CA file:
```
input {
  # Read events from stdin for this example
  stdin {}
}
filter {
  # Custom processing in Ruby
  ruby {
    code => "
      event.set('new_field', event.get('message').upcase)  # uppercase 'message' into 'new_field'
      event.set('timestamp', Time.now)                     # stamp the current time into 'timestamp'
    "
  }
}
output {
  # Write to Elasticsearch
  elasticsearch {
    # http changed to https
    hosts => ["https://elasticsearch:9200"]
    # Enable ssl
    ssl => true
    # Point at the CA certificate
    cacert => "/usr/share/logstash/config/certs/elasticsearch-ca.pem"
    index => "logstash-ruby-example"
    # Username
    user => "elastic"
    # Password
    password => "p0s9Lb3uThEJfN5T0v6x"
    document_id => "%{[@metadata][fingerprint]}"  # custom document ID (optional)
  }
}
```

  • Adjust kibana.yml
    The elasticsearch.hosts entries must use the https protocol.
    elasticsearch.password can be any placeholder for now; we reset it from inside the es container shortly.
```yaml
i18n.locale: zh-CN
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "https://elasticsearch:9200" ]
elasticsearch.username: "kibana_system"
elasticsearch.password: "12345678"
# =============================== ssl settings start =============================
server.ssl.enabled: true
server.ssl.certificate: /usr/share/kibana/config/certs/kibana.crt
server.ssl.key: /usr/share/kibana/config/certs/kibana.key
elasticsearch.ssl.verificationMode: none
elasticsearch.ssl.certificateAuthorities: [ "/usr/share/kibana/config/certs/elasticsearch-ca.pem" ]
# =============================== ssl settings end ===============================
monitoring.ui.container.elasticsearch.enabled: true
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["https://elasticsearch:9200"]
xpack.monitoring.kibana.collection.enabled: true
xpack.monitoring.kibana.collection.interval: 10000
```
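A leftover http:// in any of these host lists is an easy way to end up with connection errors once SSL is enabled. A quick sanity check over the config text (a sketch; `find_plain_http_hosts` is a hypothetical helper, not a Kibana or Logstash feature):

```python
def find_plain_http_hosts(config_text: str) -> list[str]:
    """Return non-comment config lines that still point at plain http:// endpoints."""
    return [line.strip() for line in config_text.splitlines()
            if "http://" in line and not line.lstrip().startswith("#")]

# The https:// entries from the kibana.yml above pass the check
cfg = '''elasticsearch.hosts: [ "https://elasticsearch:9200" ]
xpack.monitoring.elasticsearch.hosts: ["https://elasticsearch:9200"]'''
offenders = find_plain_http_hosts(cfg)
```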
Enter the elasticsearch container and reset the kibana_system account's password:
```shell
# Enter the elasticsearch container
docker exec -it elasticsearch /bin/bash
# Reset the kibana_system password
elasticsearch@elasticsearch:~$ ./bin/elasticsearch-reset-password -u kibana_system -i --url https://elasticsearch:9200
WARNING: Owner of file [/usr/share/elasticsearch/config/users] used to be [root], but now is [elasticsearch]
WARNING: Owner of file [/usr/share/elasticsearch/config/users_roles] used to be [root], but now is [elasticsearch]
This tool will reset the password of the [kibana_system] user.
You will be prompted to enter the password.
Please confirm that you would like to continue [y/N]
```
Then update kibana.yml with the reset password:
```yaml
elasticsearch.username: "kibana_system"
elasticsearch.password: "password_reset"
```
Finally, update the volume mappings in docker-compose.yml

This step ensures the files under each ELK component's config/certs directory stay in sync between the host and the container; check the volumes section of each service for anything that needs adjusting.
In my setup, elasticsearch needed a new mapping:
```yaml
services:
  elasticsearch:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch:8.15.3
    container_name: elasticsearch
    hostname: elasticsearch
    privileged: true
    volumes:
      # New volume mapping
      - "/usr/local/config/elasticsearch/config/certs:/usr/share/elasticsearch/config/certs"
```
With the config changes done, restart all the containers:
```shell
docker-compose restart
```
Note that the credentials for logging into Kibana are not the kibana_system/password_reset pair from the config file; log in with the elastic superuser and its password, i.e. elastic/p0s9Lb3uThEJfN5T0v6x.
Overview of certificate file locations

Files under /usr/local/config/elasticsearch/config/certs on the host:
```
.
├── elastic-certificates.p12
├── elastic-stack-ca.p12
└── http.p12

0 directories, 3 files
```
Files under /usr/local/config/logstash/config/certs on the host:
```
elasticsearch-ca.pem
```
Files under /usr/local/config/kibana/config/certs on the host:
```
.
├── elasticsearch-ca.pem
├── kibana.crt
├── kibana.csr
└── kibana.key

0 directories, 4 files
```
Finally, many thanks to the blog posts below, which helped a great deal. I hope this write-up is useful to anyone who needs it!
[1] docker开启es集群安全认证
[2] Elasticsearch8.X+ Kibana 8.X安全设置
