
Installing ELK 7.12.0 with docker-compose, enabling X-Pack, and configuring Kibana log alerts

  1. Introduction to the ELK Stack
    ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana. The stack has since gained Beats, a lightweight log collection agent. Beats consumes few resources and is well suited to collecting logs on each server and shipping them to Logstash, and it is the officially recommended collector. Because Beats joined the original ELK Stack members, the stack has been renamed the Elastic Stack.

The Elastic Stack consists of:

  1. Elasticsearch is an open-source distributed search engine that provides data collection, analysis, and storage. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing. See the Elasticsearch definitive guide for details.
  2. Logstash is a tool for collecting, parsing, and filtering logs, and it supports a large number of input methods. It typically runs in a client/server architecture: the client is installed on each host whose logs need to be collected, and the server filters and transforms the logs received from the nodes before forwarding them to Elasticsearch.
  3. Kibana is also open source and free. It provides a friendly web UI for analyzing the logs stored by Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.
  4. Beats here refers to a lightweight log shipper; the Beats family actually has six members. Early ELK architectures used Logstash to collect and parse logs, but Logstash is relatively heavy on memory, CPU, and I/O. Compared with Logstash, the CPU and memory footprint of Beats is almost negligible.

ELK Stack (after version 5.0) -> Elastic Stack == ELK Stack + Beats.

Beats currently includes six tools:

  • Packetbeat: network data (collects network traffic data)
  • Metricbeat: metrics (collects system-, process-, and filesystem-level data such as CPU and memory usage)
  • Filebeat: log files (collects file data)
  • Winlogbeat: Windows event logs (collects Windows event log data)
  • Auditbeat: audit data (collects audit logs)
  • Heartbeat: uptime monitoring (collects runtime availability data)

The Kubernetes Filebeat configuration files live at /home/k8s_filebeat on the cluster hosts (uat: 192.168.180.37, prd: 192.168.192.10).
Current deployment location: 192.168.181.52:/home/stack_elk
Prerequisites:

  • 1. The log data comes from Redis (Filebeat pushes it into Redis; see the sketch after this list).
  • 2. CentOS 7, Docker version 18.03.1-ce
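For reference, a minimal Filebeat output sketch that pushes log lines into the Redis list consumed later by Logstash might look like the following. The Redis host, password, key, and db values are taken from the logstash.conf in Part 2; the log path and the tag are placeholders you would adapt to your own services.

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/app/*.log        # placeholder path, adjust to your services
    tags: ["uat-paas"]            # used as the index prefix by the Logstash output

output.redis:
  hosts: ["192.168.181.18:6379"]  # the same Redis instance Logstash reads from
  password: "sinoeyes"
  key: "sinoeyes-io"              # the list key Logstash consumes
  db: 5
  datatype: "list"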

Part 1: Create the compose and configuration files below (Elasticsearch + Kibana) and configure transport layer security (TLS) encryption


1. (Base file preparation, start) Create instances.yml to identify the instances you need to create certificates for.

instances:
  - name: es01
    dns:
      - es01
      - localhost
    ip:
      - 192.168.181.52
  - name: 'kib01'
    dns:
      - kib01
      - localhost

2. Create create-certs.yml, which is used to generate the certificates for Elasticsearch and Kibana.

version: '2'

services:
  create_certs:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    #image: elk_es:v1
    container_name: create_certs
    command: >
      bash -c '
        yum install -y -q -e 0 unzip;
        if [[ ! -f /certs/bundle.zip ]]; then
          bin/elasticsearch-certutil cert --silent --pem --in config/certificates/instances.yml -out /certs/bundle.zip;
          unzip /certs/bundle.zip -d /certs;
        fi;
        chown -R 1000:0 /certs
      '
    working_dir: /usr/share/elasticsearch
    volumes:
      #- certs:/certs
      - ./certs:/certs
      - .:/usr/share/elasticsearch/config/certificates
    networks:
      - elastic

#volumes:
#  certs:
#    driver: local

networks:
  elastic:
    driver: bridge

3. Create the single-node Elasticsearch configuration elastic-docker-tls.yml

It contains a single-node Elasticsearch, a TLS-enabled Kibana instance, and an elasticsearch-head instance (Logstash is deployed separately in Part 2).

version: '2.1'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    container_name: es01
    restart: always
    environment:
      - "discovery.type=single-node"
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
      # Generate and apply a trial license that supports transport layer security.
      - xpack.license.self_generated.type=trial
      - xpack.security.enabled=true
      # Enable TLS to encrypt HTTP client communications.
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=/usr/share/elasticsearch/config/certificates/es01/es01.key
      - xpack.security.http.ssl.certificate_authorities=/usr/share/elasticsearch/config/certificates/ca/ca.crt
      - xpack.security.http.ssl.certificate=/usr/share/elasticsearch/config/certificates/es01/es01.crt
      # Enable TLS to encrypt inter-node communications.
      - xpack.security.transport.ssl.enabled=true
      # Allow self-signed certificates by not requiring hostname verification.
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.certificate_authorities=/usr/share/elasticsearch/config/certificates/ca/ca.crt
      - xpack.security.transport.ssl.certificate=/usr/share/elasticsearch/config/certificates/es01/es01.crt
      - xpack.security.transport.ssl.key=/usr/share/elasticsearch/config/certificates/es01/es01.key
      - "http.cors.enabled=true"
      - "http.cors.allow-origin=*"
      - "http.cors.allow-headers=Authorization,X-Requested-With,Content-Length,Content-Type"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data01:/usr/share/elasticsearch/data
      - ./certs:/usr/share/elasticsearch/config/certificates
    ports:
      - 9200:9200
    networks:
      - elastic
    healthcheck:
      test: curl --cacert /usr/share/elasticsearch/config/certificates/ca/ca.crt -s https://localhost:9200 >/dev/null; if [[ $$? == 52 ]]; then echo 0; else echo 1; fi
      # Interval between checks
      interval: 30s
      # Timeout for the check command
      timeout: 10s
      # Number of retries
      retries: 5

  es-head:
    image: mobz/elasticsearch-head:5
    container_name: elasticsearch-head
    restart: always
    ports:
      - 9100:9100

  kib01:
    image: docker.elastic.co/kibana/kibana:7.12.0
    container_name: kib01
    restart: always
    depends_on: {"es01": {"condition": "service_healthy"}}
    ports:
      - 5601:5601
    environment:
      SERVERNAME: localhost
      ELASTICSEARCH_URL: https://es01:9200
      ELASTICSEARCH_HOSTS: https://es01:9200
      ELASTICSEARCH_USERNAME: kibana_system
      ELASTICSEARCH_PASSWORD: n3nhaKiYczkls3AoXup6
      ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES: /usr/share/elasticsearch/config/certificates/ca/ca.crt
      SERVER_SSL_ENABLED: "true"
      SERVER_SSL_KEY: /usr/share/elasticsearch/config/certificates/kib01/kib01.key
      SERVER_SSL_CERTIFICATE: /usr/share/elasticsearch/config/certificates/kib01/kib01.crt
    volumes:
      - ./certs:/usr/share/elasticsearch/config/certificates
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    networks:
      - elastic

networks:
  elastic:
    driver: bridge

① Configure the logtrail plugin for Kibana (if you don't need this plugin, skip steps ①, ②, and ③)

cd images/kibana/
Download the matching logtrail plugin package:

wget https://github.com/sivasamyk/logtrail/releases/download/v0.1.31/logtrail-7.9.1-0.1.31.zip

② Create the Kibana Dockerfile

FROM docker.elastic.co/kibana/kibana:7.9.1
ADD ./logtrail-7.9.1-0.1.31.zip /opt/kibana/plugins/logtrail-7.9.1-0.1.31.zip
RUN ./bin/kibana-plugin install file:///opt/kibana/plugins/logtrail-7.9.1-0.1.31.zip
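Build and tag the image from this directory; the tag below is only an example, and if you go this route, point the kib01 service in the compose file at whatever tag you choose instead of the stock Kibana image:

docker build -t kibana_logtrail:7.9.1 .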

③ (Base file preparation, end) Configure the logtrail display rules (logtrail.json)

cat logtrail.json

{"version": 2,"index_patterns": [{"es": {"default_index": "prod-*","allow_url_parameter": false,"timezone": "UTC"},"tail_interval_in_seconds": 10,"es_index_time_offset_in_seconds": 0,"display_timezone": "local","display_timestamp_format": "YYYY年MM月DD日 HH:mm:ss","max_buckets": 500,"nested_objects": false,"default_time_range_in_days": 5,"max_hosts": 100,"max_events_to_keep_in_viewer": 5000,"default_search": "","fields": {"mapping": {"timestamp": "@timestamp","program": "tags","hostname": "kubernetes.labels.name","message": "message"},"hostname_format": "{{{kubernetes.namespace}}} | {{{hostname}}}","message_format": "{{{kubernetes.namespace}}} | {{{message}}}","keyword_suffix": "keyword"}},{"es": {"default_index": "uat-*","allow_url_parameter": false,"timezone": "UTC"},"tail_interval_in_seconds": 10,"es_index_time_offset_in_seconds": 0,"display_timezone": "local","display_timestamp_format": "YYYY年MM月DD日 HH:mm:ss","max_buckets": 500,"nested_objects": false,"default_time_range_in_days": 5,"max_hosts": 100,"max_events_to_keep_in_viewer": 5000,"default_search": "","fields": {"mapping": {"timestamp": "@timestamp","program": "tags","hostname": "kubernetes.labels.name","message": "message"},"hostname_format": "{{{kubernetes.namespace}}} | {{{hostname}}}","message_format": "{{{kubernetes.namespace}}} | {{{message}}}","keyword_suffix": "keyword"}},{"es": {"default_index": "st-*","allow_url_parameter": false,"timezone": "UTC"},"tail_interval_in_seconds": 10,"es_index_time_offset_in_seconds": 0,"display_timezone": "local","display_timestamp_format": "YYYY年MM月DD日 HH:mm:ss","max_buckets": 500,"nested_objects": false,"default_time_range_in_days": 5,"max_hosts": 100,"max_events_to_keep_in_viewer": 5000,"default_search": "","fields": {"mapping": {"timestamp": "@timestamp","program": "tags","hostname": "attrs.service","message": "log"},"message_format": "{{{log}}} | {{{marker}}}","keyword_suffix": "keyword"}}]
}
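logtrail reads its display rules from a logtrail.json inside the plugin directory. One way to apply the file above is to mount it into the custom Kibana image; the container path below is an assumption based on where kibana-plugin installs logtrail, so verify it in your image before relying on it:

    volumes:
      - ./logtrail.json:/usr/share/kibana/plugins/logtrail/logtrail.json   # assumed plugin path, verify in your image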

1.1 (Start) Generate the certificates for Elasticsearch by running the create-certs container:

[root@dev23 stack_elk]# docker-compose -f create-certs.yml run --rm create_certs

1.2 Bring up the single-node Elasticsearch cluster:

[root@dev23 stack_elk]# mkdir data01
[root@dev23 stack_elk]# chmod 777 data01
[root@dev23 stack_elk]# docker-compose -f elastic-docker-tls.yml up -d es01

1.3 Run the elasticsearch-setup-passwords tool to generate passwords for all built-in users (including the kibana_system user). Use auto to generate the passwords automatically, or interactive to set them yourself.

[root@dev23 stack_elk]# docker exec es01 /bin/bash -c "bin/elasticsearch-setup-passwords auto --batch --url https://es01:9200"
# Once the elastic user's password has been set, the bootstrap password is no longer valid, and running elasticsearch-setup-passwords again throws the error below:
Failed to authenticate user 'elastic' against https://es01:9200/_security/_authenticate?pretty
Possible causes include:
 * The password for the 'elastic' user has already been changed on this cluster
 * Your elasticsearch node is running against a different keystore
   This tool used the keystore at /usr/share/elasticsearch/config/elasticsearch.keystore

ERROR: Failed to verify bootstrap password

1.4 In the compose file, set ELASTICSEARCH_PASSWORD for the kib01 service to the password generated for the kibana_system user.

kib01:
  image: docker.elastic.co/kibana/kibana:${VERSION}
  container_name: kib01
  depends_on: {"es01": {"condition": "service_healthy"}}
  ports:
    - 5601:5601
  environment:
    SERVERNAME: localhost
    ELASTICSEARCH_URL: https://es01:9200
    ELASTICSEARCH_HOSTS: https://es01:9200
    ELASTICSEARCH_USERNAME: kibana_system
    # Change this password
    ELASTICSEARCH_PASSWORD: CHANGEME
  ...

1.5 Restart Elasticsearch and Kibana with docker-compose:

[root@dev23 stack_elk]# docker-compose -f elastic-docker-tls.yml up -d

1.6 Access Kibana at https://192.168.181.52:5601

Log in to Kibana as the elastic user.
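Before logging in, you can verify that the TLS-secured cluster is reachable with curl. This sketch assumes you run it from the stack_elk directory, so the generated CA sits at ./certs/ca/ca.crt, and that you substitute the elastic password produced in step 1.3:

curl --cacert ./certs/ca/ca.crt -u elastic:YOUR_ELASTIC_PASSWORD \
  "https://192.168.181.52:9200/_cluster/health?pretty"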

Part 2: Configure Logstash to connect to Elasticsearch over TLS

cd logstash/
The usernames and passwords referenced in the configuration files below are the ones generated in step 1.3.

2.1 Configure the docker-compose.yml for Logstash

version: '2'
services:
  logstash:
    container_name: logstash
    #image: logstash_elk:v1
    image: logstash:7.12.0
    restart: always
    ports:
      - 5044:5044
    volumes:
      - ./config/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      - ./config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ../certs/ca/:/etc/logstash/config/certs/

2.2 Create the pipeline configuration file logstash.conf

cat config/logstash.conf

input {
  redis {
    host => "192.168.181.18"
    port => "6379"
    password => "sinoeyes"
    key => "sinoeyes-io"
    data_type => "list"
    db => "5"
  }
}

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:log_date}\s*(?([\S+]*))\s*(?([\S+]*))\s*%{LOGLEVEL:log_level}" }
  }
}

output {
  elasticsearch {
    hosts => "192.168.181.52:9200"
    index => "%{tags}-%{+YYYY.MM.dd}"
    ssl => true
    user => "elastic"
    password => "W7ZOoGqxdncD5IyLSeKX"
    cacert => '/etc/logstash/config/certs/ca.crt'
  }
}

2.3 Create the Logstash settings file

cat config/logstash.yml

# Settings file in YAML
#
# Settings can be specified either in hierarchical form, e.g.:
#
# pipeline:
# batch:
# size: 125
# delay: 5
#
# Or as flat keys:
#
# pipeline.batch.size: 125
# pipeline.batch.delay: 5
#
# ------------ Node identity ------------
#
# Use a descriptive name for the node:
#
# node.name: test
#
# If omitted the node name will default to the machine's host name
#
# ------------ Data path ------------------
#
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
#
# path.data:
#
# ------------ Pipeline Settings --------------
#
# The ID of the pipeline.
#
# pipeline.id: main
#
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
#
# This defaults to the number of the host's CPU cores.
#
# pipeline.workers: 2
#
# How many events to retrieve from inputs before sending to filters+workers
#
# pipeline.batch.size: 125
#
# How long to wait in milliseconds while polling for the next event
# before dispatching an undersized batch to filters+outputs
#
# pipeline.batch.delay: 50
#
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
#
# WARNING: enabling this can lead to data loss during shutdown
#
# pipeline.unsafe_shutdown: false
#
# Set the pipeline event ordering. Options are "auto" (the default), "true" or "false".
# "auto" will automatically enable ordering if the 'pipeline.workers' setting
# is also set to '1'.
# "true" will enforce ordering on the pipeline and prevent logstash from starting
# if there are multiple workers.
# "false" will disable any extra processing necessary for preserving ordering.
#
pipeline.ordered: auto
#
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
# path.config:
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
# At startup, test if the configuration is valid and exit (dry run)
#
# config.test_and_exit: false
#
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
#
# config.reload.automatic: false
#
# How often to check if the pipeline configuration has changed (in seconds)
# Note that the unit value (s) is required. Values without a qualifier (e.g. 60)
# are treated as nanoseconds.
# Setting the interval this way is not recommended and might change in later versions.
#
# config.reload.interval: 3s
#
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
#
# config.debug: false
#
# When enabled, process escaped characters such as \n and \" in strings in the
# pipeline configuration files.
#
# config.support_escapes: false
#
# ------------ HTTP API Settings -------------
# Define settings related to the HTTP API here.
#
# The HTTP API is enabled by default. It can be disabled, but features that rely
# on it will not work as intended.
# http.enabled: true
#
# By default, the HTTP API is bound to only the host's local loopback interface,
# ensuring that it is not accessible to the rest of the network. Because the API
# includes neither authentication nor authorization and has not been hardened or
# tested for use as a publicly-reachable API, binding to publicly accessible IPs
# should be avoided where possible.
#
# http.host: 127.0.0.1
#
# The HTTP API web server will listen on an available port from the given range.
# Values can be specified as a single port (e.g., `9600`), or an inclusive range
# of ports (e.g., `9600-9700`).
#
# http.port: 9600-9700
#
# ------------ Module Settings ---------------
# Define modules here. Modules definitions must be defined as an array.
# The simple way to see this is to prepend each `name` with a `-`, and keep
# all associated variables under the `name` they are associated with, and
# above the next, like this:
#
# modules:
# - name: MODULE_NAME
# var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
# var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
# var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
# var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
#
# Module variable names must be in the format of
#
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
# modules:
#
# ------------ Cloud Settings ---------------
# Define Elastic Cloud settings here.
# Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
# and it may have an label prefix e.g. staging:dXMtZ...
# This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
# cloud.id:
#
# Format of cloud.auth is: <user>:<pass>
# This is optional
# If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
# If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
# cloud.auth: elastic:<password>
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
# queue.type: memory
#
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
#
# path.queue:
#
# If using queue.type: persisted, the page data files size. The queue data consists of
# append-only data files separated into pages. Default is 64mb
#
# queue.page_capacity: 64mb
#
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
#
# queue.max_events: 0
#
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criteria is reached first
# Default is 1024mb or 1gb
#
# queue.max_bytes: 1024mb
#
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.acks: 1024
#
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.writes: 1024
#
# If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
#
# queue.checkpoint.interval: 1000
#
# ------------ Dead-Letter Queue Settings --------------
# Flag to turn on dead-letter queue.
#
# dead_letter_queue.enable: false
#
# If using dead_letter_queue.enable: true, the maximum size of each dead letter queue. Entries
# will be dropped if they would increase the size of the dead letter queue beyond this setting.
# Default is 1024mb
# dead_letter_queue.max_bytes: 1024mb
#
# If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
# Default is path.data/dead_letter_queue
#
# path.dead_letter_queue:
#
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
# http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
# * fatal
# * error
# * warn
# * info (default)
# * debug
# * trace
#
# log.level: info
# path.logs:
#
# ------------ Other Settings --------------
#
# Where to find custom plugins
# path.plugins: []
#
# Flag to output log lines of each pipeline in its separate log file. Each log filename contains the pipeline.name
# Default is false
# pipeline.separate_logs: false
#
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: "u0bMA5cORrtEkBaOfW2F"
#xpack.monitoring.elasticsearch.proxy: ["http://proxy:port"]
xpack.monitoring.elasticsearch.hosts: ["https://192.168.181.52:9200"]
# an alternative to hosts + username/password settings is to use cloud_id/cloud_auth
#xpack.monitoring.elasticsearch.cloud_id: monitoring_cluster_id:xxxxxxxxxx
#xpack.monitoring.elasticsearch.cloud_auth: logstash_system:password
# another authentication alternative is to use an Elasticsearch API key
#xpack.monitoring.elasticsearch.api_key: "id:api_key"
#xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/var/lib/docker/volumes/es_certs/_data/ca/ca.crt" ]
xpack.monitoring.elasticsearch.ssl.certificate_authority: /etc/logstash/config/certs/ca.crt
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#xpack.monitoring.elasticsearch.sniffing: false
#xpack.monitoring.collection.interval: 10s
#xpack.monitoring.collection.pipeline.details.enabled: true
#
# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
#xpack.management.enabled: false
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: password
#xpack.management.elasticsearch.proxy: ["http://proxy:port"]
#xpack.management.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
# an alternative to hosts + username/password settings is to use cloud_id/cloud_auth
#xpack.management.elasticsearch.cloud_id: management_cluster_id:xxxxxxxxxx
#xpack.management.elasticsearch.cloud_auth: logstash_admin_user:password
# another authentication alternative is to use an Elasticsearch API key
#xpack.management.elasticsearch.api_key: "id:api_key"
#xpack.management.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.ssl.verification_mode: certificate
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s

2.4 Start it

docker-compose up -d
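After the container is up, you can optionally sanity-check the pipeline syntax with Logstash's built-in config test. This is just a convenience sketch; the extra --path.data points at a throwaway directory so the test does not clash with the data directory of the running instance:

docker exec logstash /usr/share/logstash/bin/logstash --path.data /tmp/logstash-test \
  -f /usr/share/logstash/pipeline/logstash.conf --config.test_and_exit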

Reference: enabling TLS for Logstash.

Part 3: Log alerting

1. Create a connector
(screenshot)
2. Create an alert
Check the logs every ten minutes, and send an email if more than one ERROR log entry is detected.
(screenshot)
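The alert condition can also be expressed as a plain Elasticsearch query. The sketch below only illustrates that condition (count ERROR entries over the last 10 minutes); it is not the exact query Kibana's alerting rule generates, and the index pattern, password, and the assumption that the log level appears in the message field are placeholders to adapt:

curl -k -u elastic:YOUR_ELASTIC_PASSWORD \
  "https://192.168.181.52:9200/prod-*/_count?pretty" \
  -H 'Content-Type: application/json' -d '
{
  "query": {
    "bool": {
      "filter": [
        { "match": { "message": "ERROR" } },
        { "range": { "@timestamp": { "gte": "now-10m" } } }
      ]
    }
  }
}'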

Reference: defining alerts.

Part 4: UI adjustments


4.1 Log entries are truncated on the Discover page

Solution:
On the Advanced Settings page, the truncate:maxHeight property controls the maximum height a table cell may occupy when displayed; set it to 0 to remove the limit.
(screenshot)
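If you prefer to script the change instead of clicking through the UI, Kibana exposes its advanced settings over an API. This is a hedged sketch (endpoint and header as used by 7.x, so verify against your version):

curl -k -u elastic:YOUR_ELASTIC_PASSWORD \
  -X POST "https://192.168.181.52:5601/api/kibana/settings" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"changes": {"truncate:maxHeight": 0}}'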

Reference: all Advanced Settings fields.

Part 5: Scheduled log deletion

#!/bin/bash
# es-index-clear
# Keep only the last few days of log indices (-5 days || 5 days ago)
ST_LAST_DATA=`date -d "-7 days" "+%Y.%m.%d"`
UAT_LAST_DATA=`date -d "-7 days" "+%Y.%m.%d"`
PROD_LAST_DATA=`date -d "-30 days" "+%Y.%m.%d"`

# Delete the expired indices
curl --user elastic:password -XDELETE "https://172.188.180.52:9200/st-${ST_LAST_DATA}" -k
#curl -XGET "https://172.188.180.52:9200/st-${ST_LAST_DATA}"
curl --user elastic:password -XDELETE "https://172.188.180.52:9200/uat-paas-${UAT_LAST_DATA}" -k
#curl -XGET "https://172.188.180.52:9200/uat-dev15-${UAT_LAST_DATA}"
curl --user elastic:password -XDELETE "https://172.188.180.52:9200/prod-admin-paas-${PROD_LAST_DATA}" -k
#curl -XGET "https://172.188.180.52:9200/prod-admin-paas-${PROD_LAST_DATA}"

# Add a scheduled task with crontab -e:
#0 1 * * * /home/stack_elk/clear_index/es-index-clear.sh

The -k flag allows curl to connect to an SSL site without verifying its certificate.

Official reference: https://www.elastic.co/guide/en/elastic-stack-get-started/7.9/get-started-docker.html#get-started-docker-tls

