First, create a namespace to hold all the logging-related components:
kubectl create namespace logging
Next, pull the Elasticsearch Docker image and run a temporary container to generate the required SSL certificates:
docker pull elasticsearch:7.13.1
# Start a container to generate the certificates. Note: no --rm here, because
# the stopped container must survive so the certificate can be copied out of it.
docker run --name elasticsearch-certgen -it --entrypoint=/bin/sh elasticsearch:7.13.1 -c \
"elasticsearch-certutil ca --out /tmp/elastic-stack-ca.p12 --pass '' && \
elasticsearch-certutil cert --name security-node --dns security-node --ca /tmp/elastic-stack-ca.p12 --pass '' --ca-pass '' --out /tmp/elastic-certificates.p12"
# Copy the certificate to the host
docker cp elasticsearch-certgen:/tmp/elastic-certificates.p12 ./
# Clean up the container
docker rm -f elasticsearch-certgen
# Convert the PKCS12 certificate to PEM format
openssl pkcs12 -nodes -passin pass:'' -in elastic-certificates.p12 -out elastic-certificate.pem
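The conversion can be sanity-checked locally with a throwaway self-signed certificate before running it against the real CA output; the `demo.*` filenames below are purely illustrative:

```shell
# Dry run of the same PKCS12 -> PEM conversion on a throwaway self-signed
# certificate, so the commands can be tested without the Elasticsearch CA.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -days 1 -subj "/CN=demo"
# Bundle key + cert into a PKCS12 file with an empty password, as above
openssl pkcs12 -export -inkey demo.key -in demo.crt -passout pass: -out demo.p12
# The same conversion applied to elastic-certificates.p12 in the real setup
openssl pkcs12 -nodes -passin pass: -in demo.p12 -out demo.pem
# demo.pem now holds both the private key and the certificate
grep -c "BEGIN" demo.pem
```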
Then store the certificate and the Elasticsearch access credentials in Kubernetes Secret objects:
kubectl create secret generic elastic-certs --from-file=elastic-certificates.p12 -n logging
kubectl create secret generic elastic-auth --from-literal=username=elastic --from-literal=password=your_password -n logging
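Keep in mind that Secret values are stored base64-encoded, not encrypted. A quick local round-trip shows what a command like `kubectl get secret elastic-auth -n logging -o jsonpath='{.data.password}'` would return for the placeholder password used above:

```shell
# Secret data fields are base64-encoded; encode the placeholder locally
encoded=$(printf '%s' 'your_password' | base64)
echo "$encoded"
# Decoding recovers the original value, exactly as kubectl consumers do
printf '%s' "$encoded" | base64 -d
```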
To simplify management and deployment, we will use Helm to install Elasticsearch and the other components. First, add the official Helm repository and update the local cache:
helm repo add elastic https://helm.elastic.co
helm repo update
Download and unpack the Elasticsearch Helm chart, then change into its directory and edit the values files as needed (values-master.yaml, values-data.yaml, and values-client.yaml), setting the cluster name, node roles, resource requests, and other parameters. For the master nodes, for example, the configuration might look like this:
clusterName: "elasticsearch"
nodeGroup: "master"
roles:
  master: "true"
  ingest: "false"
  data: "false"
image: "elasticsearch"
imageTag: "7.13.1"
replicas: 3
resources:
  requests:
    cpu: "2000m"
    memory: "2Gi"
  limits:
    cpu: "2000m"
    memory: "2Gi"
persistence:
  enabled: true
volumeClaimTemplate:
  storageClassName: "csi-rbd-sc"
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
protocol: http
secretMounts:
  - name: elastic-certs
    secretName: elastic-certs
    path: "/usr/share/elasticsearch/config/certs"
esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: "/usr/share/elasticsearch/config/certs/elastic-certificates.p12"
    xpack.security.transport.ssl.truststore.path: "/usr/share/elasticsearch/config/certs/elastic-certificates.p12"
extraEnvs:
  - name: ELASTIC_USERNAME
    valueFrom:
      secretKeyRef:
        name: elastic-auth
        key: username
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elastic-auth
        key: password
antiAffinity: "soft"
tolerations:
  - operator: "Exists"
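The values-data.yaml and values-client.yaml files follow the same shape; essentially only the nodeGroup, the role flags, and the sizing change. A plausible sketch of the parts that differ for the data nodes (replica count and sizes are illustrative, not prescriptive):

```
clusterName: "elasticsearch"
nodeGroup: "data"
roles:
  master: "false"
  ingest: "true"
  data: "true"
replicas: 3
volumeClaimTemplate:
  storageClassName: "csi-rbd-sc"
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 50Gi
```

The client file would instead set `nodeGroup: "client"` with all three role flags `"false"` except ingest as desired, and typically disables persistence.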
Install each component with Helm:
helm install es-master -f values-master.yaml -n logging .
helm install es-data -f values-data.yaml -n logging .
helm install es-client -f values-client.yaml -n logging .
Likewise for Kibana: create or edit a values-prod.yaml file, setting the Kibana image, the Elasticsearch connection address, resource requests, and other parameters:
image: "kibana"
imageTag: "7.13.2"
elasticsearchHosts: "http://es-client:9200"
extraEnvs:
  - name: "ELASTICSEARCH_USERNAME"
    valueFrom:
      secretKeyRef:
        name: elastic-auth
        key: username
  - name: "ELASTICSEARCH_PASSWORD"
    valueFrom:
      secretKeyRef:
        name: elastic-auth
        key: password
resources:
  requests:
    cpu: "500m"
    memory: "1Gi"
  limits:
    cpu: "500m"
    memory: "1Gi"
kibanaConfig:
  kibana.yml: |
    i18n.locale: "zh-CN"
service:
  type: NodePort
  nodePort: "30601"
Finally, install Kibana with Helm:
helm install kibana -f values-prod.yaml -n logging .
To collect and forward logs from the Kubernetes cluster, we also need to configure Fluentd. Create a ConfigMap object that defines Fluentd's configuration files, for example fluentd-conf.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-conf
  namespace: logging
data:
  containers.input.conf: |
  forward.input.conf: |
  output.conf: |
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      include_tag_key true
      host es-client
      port 9200
      user elastic
      password your_password
      logstash_format true
      logstash_prefix k8s
      request_timeout 30s
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size 2M
        queue_limit_length 8
        overflow_action block
      </buffer>
    </match>
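The containers.input.conf section is left empty above. For reference, a typical container-log source looks roughly like the following sketch; the paths assume the standard kubelet log layout and the JSON log driver, so adjust them to your runtime:

```
<source>
  @id fluentd-containers.log
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/es-containers.log.pos
  tag raw.kubernetes.*
  read_from_head true
  <parse>
    @type json
    time_format %Y-%m-%dT%H:%M:%S.%NZ
  </parse>
</source>
```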
In addition, you can customize Fluentd's Dockerfile to include extra plugins, such as Kafka support:
cat <<EOF > Dockerfile
FROM quay.io/fluentd_elasticsearch/fluentd:v3.2.0
RUN echo "source 'https://mirrors.tuna.tsinghua.edu.cn/rubygems/'" > Gemfile && gem install bundler
RUN gem install fluent-plugin-kafka -v 0.16.1 --no-document
EOF
docker build -t ecloudedu/fluentd-kafka:v3.2.0 .
docker push ecloudedu/fluentd-kafka:v3.2.0
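With fluent-plugin-kafka baked into the image, output.conf could route events to Kafka instead of writing to Elasticsearch directly. A minimal sketch, assuming a broker at kafka.logging:9092 and a k8s-logs topic (both hypothetical names for your environment):

```
<match **>
  @type kafka2
  brokers kafka.logging:9092
  default_topic k8s-logs
  <format>
    @type json
  </format>
</match>
```

A downstream consumer (for example Logstash) would then read from the topic and index into Elasticsearch, decoupling log producers from the cluster's ingest capacity.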
With the steps above in place, you have a working EFKLK logging pipeline that collects, analyzes, and visualizes logs from your Kubernetes cluster.