
Building Your Own ELK Stack on K8S (K3S)
The corresponding YAML manifests are available at https://github.com/nicelizhi/k8s-elk.
elasticsearch service
Service
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
spec:
  ports:
    - name: elasticsearch
      protocol: TCP
      port: 9200
      targetPort: 9200
  selector:
    app: elasticsearch
  type: ClusterIP
  sessionAffinity: None
ConfigMap
kind: ConfigMap
apiVersion: v1
metadata:
  name: elasticsearch-config
data:
  elasticsearch.yml: |
    network.host: 0.0.0.0
    xpack.monitoring.collection.enabled: true
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    xpack.security.enabled: true
    xpack.security.authc.api_key.enabled: true
Deployment
kind: Deployment
apiVersion: apps/v1
metadata:
  name: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      volumes:
        - name: config
          configMap:
            name: elasticsearch-config
            defaultMode: 420
        - name: es-data
          hostPath:
            path: /data/es
      initContainers:
        - name: increase-vm-max-map
          image: busybox
          command:
            - sysctl
            - '-w'
            - vm.max_map_count=262144
          securityContext:
            privileged: true
      containers:
        - name: elasticsearch
          image: 'docker.elastic.co/elasticsearch/elasticsearch:7.16.0'
          resources:
            requests:
              memory: 1524Mi
              cpu: 500m
            limits:
              memory: 1824Mi
              cpu: 1
          ports:
            - containerPort: 9200
              protocol: TCP
            - containerPort: 9300
              protocol: TCP
          env:
            - name: ES_JAVA_OPTS
              value: '-Xms256m -Xmx256m'
            - name: discovery.type
              value: single-node
          volumeMounts:
            - name: config
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              subPath: elasticsearch.yml
            - name: es-data
              mountPath: /usr/share/elasticsearch/data/
The manifests above demonstrate mounting host disk storage (hostPath) for the data directory and using a ConfigMap for elasticsearch.yml.
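The hostPath volume keeps the Elasticsearch data on the node's local disk, which is fine for a single-node K3S setup but ties the data to that one machine. As a rough sketch only, the es-data volume could instead be backed by a PersistentVolumeClaim such as the following (the claim name, size, and storage class here are assumptions, not part of the repository's manifests):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: es-data                   # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path    # K3S's default local-path provisioner; adjust for your cluster
  resources:
    requests:
      storage: 10Gi               # assumed size; pick one that fits your retention needs

The Deployment's es-data volume would then reference this claim via persistentVolumeClaim.claimName instead of hostPath.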
kibana service
Service
kind: Service
apiVersion: v1
metadata:
  name: kibana
spec:
  ports:
    - name: kibana
      protocol: TCP
      port: 5601
      targetPort: 5601
  selector:
    component: kibana
  type: LoadBalancer
ConfigMap
kind: ConfigMap
apiVersion: v1
metadata:
  name: kibana-config
data:
  kibana.yml: |
    server.name: kibana
    server.host: 0.0.0.0
    elasticsearch.hosts: ["http://elasticsearch:9200"]
    elasticsearch.username: "elastic"
    monitoring.ui.container.elasticsearch.enabled: true
Deployment
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      component: kibana
  template:
    metadata:
      labels:
        component: kibana
    spec:
      volumes:
        - name: config
          configMap:
            name: kibana-config
            defaultMode: 420
        - name: secrets
          secret:
            secretName: es-user-pass
            defaultMode: 0400
      containers:
        - name: elk-kibana
          image: 'docker.elastic.co/kibana/kibana:7.16.0'
          resources:
            requests:
              memory: 512Mi
              cpu: 200m
            limits:
              memory: 1Gi
              cpu: 1
          ports:
            - name: kibana
              containerPort: 5601
              protocol: TCP
          env:
            - name: KIBANA_SYSTEM_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: es-user-pass
                  key: password
            - name: ELASTICSEARCH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: es-user-pass
                  key: password
          volumeMounts:
            - name: config
              mountPath: /usr/share/kibana/config/kibana.yml
              subPath: kibana.yml
This Deployment shows both ConfigMap-based configuration and Secret consumption: kibana.yml is mounted from the kibana-config ConfigMap, while the Elasticsearch password is injected from the es-user-pass Secret through environment variables.
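The es-user-pass Secret is referenced here but not included in the manifests above. A minimal sketch of what it might look like, assuming it only carries the password key that the Deployment reads and reusing the abc123456 value that appears later in the Logstash config:

kind: Secret
apiVersion: v1
metadata:
  name: es-user-pass
type: Opaque
stringData:
  password: abc123456   # assumed value; must match the password of the elastic user in your cluster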
logstash service
Deployment
kind: Deployment
apiVersion: apps/v1
metadata:
  name: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      volumes:
        - name: config
          configMap:
            name: logstash-config
            defaultMode: 420
        - name: pipelines
          configMap:
            name: logstash-pipelines
            defaultMode: 420
      containers:
        - name: logstash
          image: 'docker.elastic.co/logstash/logstash:7.16.0'
          resources:
            requests:
              memory: 512Mi
              cpu: 500m
            limits:
              memory: 1024Mi
              cpu: 1
          ports:
            - containerPort: 5044
              protocol: TCP
            - containerPort: 5000
              protocol: TCP
            - containerPort: 5000
              protocol: UDP
            - containerPort: 9600
              protocol: TCP
          env:
            - name: ELASTICSEARCH_HOST
              value: 'http://elasticsearch:9200'
            - name: LS_JAVA_OPTS
              value: '-Xms512m -Xmx512m'
          volumeMounts:
            - name: pipelines
              mountPath: /usr/share/logstash/pipeline
            - name: config
              mountPath: /usr/share/logstash/config/logstash.yml
              subPath: logstash.yml
Service
kind: Service
apiVersion: v1
metadata:
  name: logstash
spec:
  ports:
    - name: logstash
      protocol: TCP
      port: 10000
      targetPort: 9600
    - name: filebeat
      protocol: TCP
      port: 5044
      targetPort: 5044
  selector:
    app: logstash
  type: LoadBalancer
  sessionAffinity: None
ConfigMap
kind: ConfigMap
apiVersion: v1
metadata:
  name: logstash-config
  namespace: default
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    xpack.monitoring.enabled: true
    config.reload.automatic: true
    xpack.monitoring.elasticsearch.hosts: ["elasticsearch:9200"]
    xpack.monitoring.elasticsearch.username: "elastic"
    xpack.monitoring.elasticsearch.password: "abc123456"
ConfigMap (pipeline)
kind: ConfigMap
apiVersion: v1
metadata:
  name: logstash-pipelines
data:
  logstash.conf: |
    input {
      syslog {
        type => "syslog"
        port => 5044
      }
    }
    filter {
      grok {
        match => ["message", "%{SYSLOG5424PRI}%{NONNEGINT:syslog5424_ver} +(?:%{TIMESTAMP_ISO8601:timestamp}|-) +(?:%{HOSTNAME:heroku_drain_id}|-) +(?:%{WORD:heroku_source}|-) +(?:%{DATA:heroku_dyno}|-) +(?:%{WORD:syslog5424_msgid}|-) +(?:%{SYSLOG5424SD:syslog5424_sd}|-|) +%{GREEDYDATA:heroku_message}"]
      }
      mutate { rename => ["heroku_message", "message"] }
      kv { source => "message" }
      mutate { convert => ["sample#memory-free", "integer"] }
      mutate { convert => ["sample#memory-total", "integer"] }
      mutate { convert => ["sample#memory-redis", "integer"] }
      mutate { convert => ["sample#memory-cached", "integer"] }
      mutate { convert => ["sample#load-avg-5m", "float"] }
      mutate { convert => ["sample#load-avg-1m", "float"] }
      mutate { convert => ["sample#load-avg-15m", "float"] }
      syslog_pri { syslog_pri_field_name => "syslog5424_pri" }
    }
    output {
      elasticsearch {
        hosts => ["http://elasticsearch:9200"]
        user => "elastic"
        password => "abc123456"
        index => "logstash-%{heroku_dyno}"
        template_overwrite => true
      }
    }
The pipeline above handles log collection from the Heroku platform, and automatic pipeline reloading (config.reload.automatic) is enabled, a feature you will often want in practice.
During log collection, grok can perform any necessary format conversion so that logs land in Elasticsearch in exactly the shape you need.
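As a sketch of that idea, the snippet below adds a hypothetical extra key, app-logs.conf, to the logstash-pipelines ConfigMap; the port, log format, and index name are invented for illustration. Note that by default Logstash merges every file under /usr/share/logstash/pipeline into a single pipeline, so events from all inputs pass through all filters and outputs unless you separate them with conditionals or pipelines.yml.

data:
  app-logs.conf: |
    input {
      tcp {
        port => 5045              # assumed extra port; it would also need to be exposed on the container and Service
        codec => line
      }
    }
    filter {
      # Example line: 2024-05-01T12:00:00Z INFO order created in 35ms
      grok {
        match => ["message", "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}"]
      }
    }
    output {
      elasticsearch {
        hosts => ["http://elasticsearch:9200"]
        user => "elastic"
        password => "abc123456"
        index => "app-logs-%{+YYYY.MM.dd}"
      }
    }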
The example above runs on a single 2-core / 4 GB server, which is why K3S is used and why Elasticsearch runs as a single data node; keep both points in mind before using this setup in production.