Table of Contents
1. Create the elk, elasticsearch, kibana, and filebeat directories
2. Create docker-compose.yml (install Docker and Docker Compose first if you have not)
3. Start the containers and check their status
4. Copy the elasticsearch, kibana, and filebeat config files
5. Edit the elasticsearch, kibana, and filebeat config files
6. After editing the config files, update docker-compose.yml
7. Restart the ELK stack
8. Change the passwords of the Elasticsearch built-in users
9. Access the services from a browser
10. Other useful commands
1. Create the elk, elasticsearch, kibana, and filebeat directories
sudo mkdir -p /usr/local/elk/elasticsearch/config
sudo mkdir -p /usr/local/elk/elasticsearch/data
sudo mkdir -p /usr/local/elk/elasticsearch/logs
sudo mkdir -p /usr/local/elk/kibana/config
sudo mkdir -p /usr/local/elk/kibana/data
sudo mkdir -p /usr/local/elk/kibana/logs
sudo mkdir -p /usr/local/elk/filebeat/config
sudo mkdir -p /usr/local/elk/filebeat/data
sudo mkdir -p /usr/local/elk/filebeat/logs
Set ownership: make the current user (its UID and GID) the owner of /usr/local/elk and everything under it.
sudo chown -R $(id -u):$(id -g) /usr/local/elk
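The nine mkdir calls plus the ownership fix above can also be wrapped in one small helper; a sketch (the function name is mine; on the server you would run it as root against /usr/local/elk):

```shell
# Create config/data/logs for each ELK component under one base directory,
# then hand the whole tree to the current user (same fix as above).
make_elk_dirs() {
  base="$1"
  for comp in elasticsearch kibana filebeat; do
    for sub in config data logs; do
      mkdir -p "$base/$comp/$sub"
    done
  done
  chown -R "$(id -u):$(id -g)" "$base"
}

# On the server (as root / via sudo):
#   make_elk_dirs /usr/local/elk
```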
2. Create docker-compose.yml (install Docker and Docker Compose first if you have not)
cd /usr/local/elk/ && vi docker-compose.yml
Paste the code below into the file and save it.
Note: for now, every volume mount except the time-sync mount (/etc/localtime) stays commented out, as shown:
# docker-compose.yml
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.6.2
    container_name: elasticsearch
    environment:
      - cluster.name=es-app-cluster
      - bootstrap.memory_lock=true
      - node.name=node-01
      - discovery.type=single-node
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=false
      - xpack.security.transport.ssl.enabled=false
      - ingest.geoip.downloader.enabled=false  # disable the GeoIP downloader
      - ELASTIC_USERNAME=elastic
      - ELASTIC_PASSWORD=elastic
      - "ES_JAVA_OPTS=-Xms128m -Xmx128m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      #- /usr/local/elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      #- /usr/local/elk/elasticsearch/data:/usr/share/elasticsearch/data
      #- /usr/local/elk/elasticsearch/logs:/usr/share/elasticsearch/logs
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - elk-net
    restart: always
    privileged: true
  kibana:
    image: docker.elastic.co/kibana/kibana:8.6.2
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=elastic
      - XPACK_SECURITY_ENABLED=true
      - SERVER_NAME=kibana
    volumes:
      #- /usr/local/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
      #- /usr/local/elk/kibana/data:/usr/share/kibana/data
      #- /usr/local/elk/kibana/logs:/usr/share/kibana/logs
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 5601:5601
    networks:
      - elk-net
    depends_on:
      - elasticsearch
    restart: always
    privileged: true
  filebeat:
    image: docker.elastic.co/beats/filebeat:8.6.2
    container_name: filebeat
    volumes:
      #- ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
      #- ./filebeat/data:/usr/share/filebeat/data
      #- ./filebeat/logs:/usr/share/filebeat/logs
      #- /usr/workspace/logs/wclflow:/host/var/log/wclflow  # assumes the app's logs live here on the host
      #- /usr/nginx/logs/access.log:/host/var/log/nginx/logs/access.log
      #- /usr/nginx/logs/error.log:/host/var/log/nginx/logs/error.log
      - /etc/localtime:/etc/localtime:ro
    networks:
      - elk-net
    depends_on:
      - elasticsearch
    restart: always
    privileged: true
    user: root

networks:
  elk-net:
    driver: bridge
3. Start the containers and check their status
Start the containers (the image downloads can be slow; wait patiently for them to finish and for the containers to start):
docker-compose up -d
If docker-compose up -d reports "command not found", substitute docker compose for docker-compose throughout this guide.
Check whether the services are running:
docker-compose ps
All three containers showing a status of Up means they have started.
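Note that docker-compose ps reports Up as soon as a container starts, which is before the Elasticsearch JVM is actually ready to serve requests. A small polling helper can bridge that gap (a sketch; the function name is mine and it assumes curl is installed):

```shell
# Poll an HTTP endpoint until it answers successfully, or give up after N attempts.
wait_for_http() {
  url="$1"
  attempts="${2:-30}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fsS -o /dev/null "$url" 2>/dev/null; then
      echo "up: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "gave up waiting for $url" >&2
  return 1
}

# After docker-compose up -d (port 9200 answers 401 until you authenticate,
# so either probe Kibana or pass credentials with curl -u elastic:elastic):
#   wait_for_http http://localhost:5601 60
```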
4. Copy the elasticsearch, kibana, and filebeat config files
Copy the config, data, and logs directories from the elasticsearch and kibana containers (plus filebeat.yml, data, and logs from the filebeat container) into the host directories we just created:
- Copy the elasticsearch container's directories to the matching host directories:
docker cp elasticsearch:/usr/share/elasticsearch/config /usr/local/elk/elasticsearch/
docker cp elasticsearch:/usr/share/elasticsearch/data /usr/local/elk/elasticsearch/
docker cp elasticsearch:/usr/share/elasticsearch/logs /usr/local/elk/elasticsearch/
- Copy the kibana container's directories to the matching host directories:
docker cp kibana:/usr/share/kibana/config /usr/local/elk/kibana/
docker cp kibana:/usr/share/kibana/data /usr/local/elk/kibana/
docker cp kibana:/usr/share/kibana/logs /usr/local/elk/kibana/
- Copy the filebeat container's files to the matching host directories:
docker cp filebeat:/usr/share/filebeat/filebeat.yml /usr/local/elk/filebeat/config/
docker cp filebeat:/usr/share/filebeat/data /usr/local/elk/filebeat/
docker cp filebeat:/usr/share/filebeat/logs /usr/local/elk/filebeat/
5. Edit the elasticsearch, kibana, and filebeat config files
- Edit the elasticsearch config file:
cd elasticsearch/config/ && rm -rf elasticsearch.yml && vi elasticsearch.yml
Enter the following content and save it:

# elasticsearch.yml
cluster.name: "es-app-cluster"
# make sure Elasticsearch listens on all interfaces
network.host: 0.0.0.0
node.name: node-01
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
http.port: 9200
discovery.type: single-node
xpack.security.enabled: true
bootstrap.memory_lock: true
# disable TLS certificate checks
xpack.security.http.ssl.enabled: false
xpack.security.transport.ssl.enabled: false
# the GeoIP database maps IP addresses to geographic locations; turn it off
ingest.geoip.downloader.enabled: false
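One host-level prerequisite that often bites Docker deployments of Elasticsearch: the official docs call for the kernel's vm.max_map_count to be at least 262144, and with a lower value the node commonly fails to start. A quick check to run on the Docker host:

```shell
# Elasticsearch wants vm.max_map_count >= 262144 on the Docker host.
required=262144
current=$(cat /proc/sys/vm/max_map_count)
echo "vm.max_map_count=$current"
if [ "$current" -lt "$required" ]; then
  # to persist across reboots, also add the line to /etc/sysctl.conf
  echo "too low; raise it with: sudo sysctl -w vm.max_map_count=$required"
fi
```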
- Edit the kibana config file:
cd ../../kibana/config/ && rm -rf kibana.yml && vi kibana.yml
Enter the following content and save it:

server.host: "0.0.0.0"
server.shutdownTimeout: "10s"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: "zh-CN"
xpack.reporting.roles.enabled: false
- Edit the filebeat config file:
cd ../../filebeat/config/ && rm -rf filebeat.yml && vi filebeat.yml
Enter the following content and save it:

filebeat.inputs:
  - type: filestream
    id: filestream-wclflow        # each id must be unique
    enabled: true
    paths:
      - /host/var/log/wclflow/*.log
    fields_under_root: true
    fields:
      type: wclflow
      project: alllogs
      app: wclflow
  - type: filestream
    id: filestream-nginx-access   # each id must be unique
    enabled: true
    paths:
      - /host/var/log/nginx/logs/access.log
    fields_under_root: true
    fields:
      type: nginx_access
      project: access
      app: nginx
  - type: filestream
    id: filestream-nginx-error    # each id must be unique
    enabled: true
    paths:
      - /host/var/log/nginx/logs/error.log
    fields_under_root: true
    fields:
      type: nginx_error
      project: error
      app: nginx

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]
  username: elastic
  password: elastic
  index: "wclflow-%{+yyyy.MM.dd}"
  indices:
    - index: "nginx-logs-access-%{+yyyy.MM.dd}"
      when.contains:
        type: "nginx_access"
    - index: "nginx-logs-error-%{+yyyy.MM.dd}"
      when.contains:
        type: "nginx_error"

setup.template.name: "nginx-logs"       # template name
setup.template.pattern: "nginx-logs-*"  # template pattern
setup.ilm.enabled: false  # disable if you don't need index lifecycle management, or the cluster has no ILM policies
setup.kibana:
  host: "kibana:5601"
Some notes on the configuration above.
It simulates log collection for three projects:
Project 1: the wclflow application's log directory
Project 2: the nginx access log
Project 3: the nginx error log
Write one filestream block like those above for every project log directory you want to collect.
type, enabled, and fields_under_root stay fixed; set the remaining values to match your own environment.
(If Filebeat reports a YAML format error when run with this configuration, indent each "- type:" block as a whole two more spaces to the right.)
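Adding a fourth project, for example, would mean one more filestream block under filebeat.inputs; in this sketch the id, path, and field values are hypothetical placeholders, and only type, enabled, and fields_under_root stay fixed:

```yaml
  - type: filestream
    id: filestream-myapp          # must be unique across inputs (placeholder)
    enabled: true
    paths:
      - /host/var/log/myapp/*.log # placeholder path
    fields_under_root: true
    fields:
      type: myapp
      project: myapp
      app: myapp
```

To route it to its own index, also add a matching entry under output.elasticsearch.indices with when.contains: type: "myapp". Filebeat's own filebeat test config subcommand can syntax-check the finished file.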
6. After editing the config files, update docker-compose.yml
Go to the elk directory:
cd /usr/local/elk
Edit docker-compose.yml: uncomment the volume mounts that were commented out earlier, and set the filebeat container's user to root so Filebeat has sufficient permissions.
The file should now read as follows; use this version (for convenience, just copy and paste it):
# docker-compose.yml
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.6.2
    container_name: elasticsearch
    environment:
      - cluster.name=es-app-cluster
      - bootstrap.memory_lock=true
      - node.name=node-01
      - discovery.type=single-node
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=false
      - xpack.security.transport.ssl.enabled=false
      - ingest.geoip.downloader.enabled=false  # disable the GeoIP downloader
      - ELASTIC_USERNAME=elastic
      - ELASTIC_PASSWORD=elastic
      - "ES_JAVA_OPTS=-Xms128m -Xmx128m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /usr/local/elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /usr/local/elk/elasticsearch/data:/usr/share/elasticsearch/data
      - /usr/local/elk/elasticsearch/logs:/usr/share/elasticsearch/logs
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - elk-net
    restart: always
    privileged: true
  kibana:
    image: docker.elastic.co/kibana/kibana:8.6.2
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=elastic
      - XPACK_SECURITY_ENABLED=true
      - SERVER_NAME=kibana
    volumes:
      - /usr/local/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
      - /usr/local/elk/kibana/data:/usr/share/kibana/data
      - /usr/local/elk/kibana/logs:/usr/share/kibana/logs
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 5601:5601
    networks:
      - elk-net
    depends_on:
      - elasticsearch
    restart: always
    privileged: true
  filebeat:
    image: docker.elastic.co/beats/filebeat:8.6.2
    container_name: filebeat
    volumes:
      - ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - ./filebeat/data:/usr/share/filebeat/data
      - ./filebeat/logs:/usr/share/filebeat/logs
      - /usr/workspace/logs/wclflow:/host/var/log/wclflow  # assumes the app's logs live here on the host
      - /usr/nginx/logs/access.log:/host/var/log/nginx/logs/access.log
      - /usr/nginx/logs/error.log:/host/var/log/nginx/logs/error.log
      - /etc/localtime:/etc/localtime:ro
    networks:
      - elk-net
    depends_on:
      - elasticsearch
    restart: always
    privileged: true
    user: root

networks:
  elk-net:
    driver: bridge
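Before restarting, it can save a round-trip to let Compose validate the edited file: docker-compose config -q exits non-zero on YAML or schema errors and prints nothing on success. A tiny wrapper (the function name is mine):

```shell
# Exit 0 if the compose file parses cleanly, non-zero otherwise.
compose_ok() {
  docker-compose -f "$1" config -q
}

# usage, from /usr/local/elk:
#   compose_ok docker-compose.yml && docker-compose up -d
```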
7. Restart the ELK stack
Stop the services:
docker-compose down
Start them again:
docker-compose up -d
Check whether the services are running:
docker-compose ps
All three containers showing a status of Up means they have started.
8. Change the passwords of the Elasticsearch built-in users
Enter the elasticsearch container:
docker exec -it elasticsearch /bin/bash
Run:
./bin/elasticsearch-setup-passwords interactive
Answer "y" at the prompt, then patiently work through the long enter-password / confirm-password sequence for each built-in user until it completes.
Note that Kibana connects to Elasticsearch as kibana_system with the password given in docker-compose.yml (ELASTICSEARCH_PASSWORD=elastic); if you set a different password for kibana_system here, update that value to match and restart.
When the passwords are set, leave the container with exit.
Stop and restart the services:
docker-compose down   # stop the containers
docker-compose up -d  # start the containers
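If the full interactive walk-through is more than you need, Elasticsearch 8.x also ships bin/elasticsearch-reset-password, which resets one user at a time. A hedged sketch (the wrapper name is mine; it assumes the container is named elasticsearch, as in the compose file):

```shell
# Reset a single built-in user's password; -i prompts for the new value.
reset_es_password() {
  docker exec -it elasticsearch bin/elasticsearch-reset-password -u "$1" -i
}

# usage:
#   reset_es_password elastic
#   reset_es_password kibana_system
```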
9. Access the services from a browser
My server's IP is 192.168.7.46.
Open http://<server-ip>:9200/ in a browser to check Elasticsearch; a login prompt appears.
Log in with the password you just set for the elastic user (I used 123456); on success Elasticsearch returns its cluster information.
Then open http://<server-ip>:5601/ (your server's IP, port 5601). On first visit Kibana asks you to configure Elastic; choose manual configuration and enter the Elastic service address, adjusting the IP and port to your server (192.168.7.46:9200 in my case). A login prompt follows; as before, log in as elastic with the password you set (123456 for me).
After logging in you land on the Kibana home page.
View the indices
Open index management to see the index data, data streams, and index templates produced by our configuration.
Configure Kibana for log viewing
Create a data view for wclflow, then repeat the same steps to create the nginx-logs-access and nginx-logs-error data views.
With the views created, open Discover in the left-hand menu to reach the log query screen.
In Discover you can switch between projects to view their logs, for example the wclflow service logs.
At this point the logs are being shipped to ELK.
10. Other useful commands
If you later need to change a single container's configuration, you can rebuild just that container; taking kibana as the example:
Stop the kibana container:
docker-compose stop kibana
Edit kibana.yml or the kibana section of docker-compose.yml, then recreate and start only the kibana container, leaving the other containers untouched:
docker-compose up -d --force-recreate --no-deps kibana