Introduction
Logstash is a powerful tool for centralizing and analyzing logs, which can help provide an overview of your environment and identify issues with your servers. One way to increase the effectiveness of your ELK Stack (Elasticsearch, Logstash, and Kibana) setup is to collect important application logs and structure the log data by employing filters, so the data can be readily analyzed and queried. We will build our filters around "grok" patterns, which parse the data in the logs into useful bits of information.
This tutorial is a follow-up to "How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04", and focuses on adding Logstash filters for a variety of common application logs.
Prerequisites
To follow this tutorial, you must have a running Logstash server that is receiving logs from a shipper such as Filebeat. If you do not, follow "How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04" to set up such an environment.
ELK Server Assumptions
- Logstash is installed in /opt/logstash
- Logstash configuration files are located in /etc/logstash/conf.d
- The input file is named 02-beats-input.conf
- The output file is named 30-elasticsearch-output.conf
Create the patterns directory with the following commands:
sudo mkdir -p /opt/logstash/patterns
sudo chown logstash: /opt/logstash/patterns
Client Server Assumptions
- On each client application server, Filebeat is configured to send syslog/auth.log to your Logstash server.
If your setup differs, simply adjust this guide to match your environment.
Introduction to Grok
Grok works by parsing text patterns, using regular expressions, and assigning them to an identifier.
The syntax for a grok pattern is %{PATTERN:IDENTIFIER}. A Logstash filter contains a sequence of grok patterns that matches and assigns the various pieces of a log message to the respective identifiers, which gives the log output structure.
To learn more about grok, visit the Logstash grok page and the Logstash Default Patterns listing.
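To get a feel for the %{PATTERN:IDENTIFIER} syntax, here is a toy re-implementation in Python. This is a simplified sketch, not Logstash's actual grok engine, and the pattern definitions below are hand-written stand-ins for a few of Logstash's defaults:

```python
import re

# Simplified stand-ins for a few Logstash default patterns
# (the real definitions are more thorough).
PATTERNS = {
    "IP": r"\d{1,3}(?:\.\d{1,3}){3}",
    "WORD": r"\w+",
    "NUMBER": r"\d+",
}

def grok_to_regex(grok):
    """Expand each %{PATTERN:identifier} into a named regex group."""
    return re.sub(
        r"%\{(\w+):(\w+)\}",
        lambda m: "(?P<%s>%s)" % (m.group(2), PATTERNS[m.group(1)]),
        grok,
    )

line = "192.168.0.5 GET 200"
regex = grok_to_regex("%{IP:clientip} %{WORD:verb} %{NUMBER:response}")
print(re.match(regex, line).groupdict())
# {'clientip': '192.168.0.5', 'verb': 'GET', 'response': '200'}
```

The matched pieces of the line end up under the identifiers (clientip, verb, response), which is exactly the structure grok gives to log events.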
How To Use This Guide
Each main section below contains the details needed to gather and filter logs for a given application. For each application whose logs you want to gather and filter, you will need to make configuration changes on both the client servers (Filebeat) and the Logstash server.
Logstash Patterns Subsection
The Logstash Patterns subsections contain grok patterns that you can add to the /opt/logstash/patterns directory on the Logstash server. The new patterns can then be used in the Logstash Filter sections.
Logstash Filters Subsection
The Logstash Filter subsections include a filter that can be added to a new file, between the input and output configuration files, in /etc/logstash/conf.d on the Logstash server. The filter determines how the Logstash server parses the relevant log files. Remember to restart the Logstash service after adding a new filter, to load your changes.
Filebeat Prospector Subsection
Filebeat Prospectors are used to specify which logs to send to Logstash. Additional prospector configurations should be added to the /etc/filebeat/filebeat.yml file, directly after the existing prospectors in the prospectors section:
Prospector Examples
filebeat:
  # List of prospectors to fetch data.
  prospectors:
    -
      paths:
        - /var/log/secure
        - /var/log/messages
      document_type: syslog
    -
      paths:
        - /var/log/app/*.log
      document_type: app-access
...
In the above example, the second prospector sends all of the .log files in /var/log/app/ to Logstash with the app-access type. After any changes are made, Filebeat must be reloaded to put the changes into effect.
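Conceptually, each prospector matches files against its path globs and attaches its document_type to every event it ships (Logstash later branches on this as the event's type field). The routing logic can be sketched like this; the table is my own illustration mirroring the YAML above, not Filebeat code:

```python
from fnmatch import fnmatch

# Hypothetical prospector table mirroring the example YAML above:
# each list of globs maps to the document_type attached to matching events.
PROSPECTORS = [
    (["/var/log/secure", "/var/log/messages"], "syslog"),
    (["/var/log/app/*.log"], "app-access"),
]

def document_type_for(path):
    """Return the document_type the first matching prospector would attach."""
    for globs, doc_type in PROSPECTORS:
        if any(fnmatch(path, g) for g in globs):
            return doc_type
    return None  # no prospector watches this file

print(document_type_for("/var/log/app/orders.log"))  # app-access
print(document_type_for("/var/log/messages"))        # syslog
```

This is why choosing distinct document_type values per application matters: the type is the only thing the Logstash filters below have to decide which grok pattern to apply.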
Now that you know how to use this guide, the rest of the guide will show you how to gather and filter application logs!
Application: Nginx
Logstash Patterns: Nginx
Nginx log patterns are not included in Logstash's default patterns, so we will add Nginx patterns manually.
On your ELK server, create a new pattern file called nginx:
sudo vi /opt/logstash/patterns/nginx
Then insert the following lines:
Nginx Grok Pattern
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} (?:%{NUMBER:bytes}|-) (?:"(?:%{URI:referrer}|-)"|%{QS:referrer}) %{QS:agent}
Save and exit. The NGINXACCESS pattern parses and assigns the data to various identifiers (e.g. clientip, ident, auth, etc.).
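You can prototype what a line of your access log will be parsed into without running Logstash. The regex below is a simplified, hand-written approximation of the NGINXACCESS pattern (not the pattern file itself), applied to a made-up sample line:

```python
import re

# Simplified approximation of the NGINXACCESS grok pattern; the real
# default patterns are stricter about IPs, dates, and so on.
NGINX_RE = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) HTTP/(?P<httpversion>\S+)" '
    r'(?P<response>\d+) (?P<bytes>\d+|-) "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

# Made-up sample access log line for illustration.
sample = ('203.0.113.10 - - [22/Jan/2016:14:24:01 +0000] '
          '"GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0"')

fields = NGINX_RE.match(sample).groupdict()
print(fields["clientip"], fields["verb"], fields["response"])
# 203.0.113.10 GET 200
```

Each named group corresponds to one of the identifiers in the grok pattern, which become searchable fields once the events reach Elasticsearch.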
Next, change the ownership of the pattern file to logstash:
sudo chown logstash: /opt/logstash/patterns/nginx
Logstash Filter: Nginx
On your ELK server, create a new filter configuration file called 11-nginx-filter.conf:
sudo vi /etc/logstash/conf.d/11-nginx-filter.conf
Then add the following filter:
Nginx Filter
filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "%{NGINXACCESS}" }
    }
  }
}
Save and exit. Note that this filter will attempt to match messages of nginx-access type with the NGINXACCESS pattern, defined above.
Now restart Logstash to reload the configuration:
sudo service logstash restart
Filebeat Prospector: Nginx
On your Nginx servers, open the filebeat.yml configuration file for editing:
sudo vi /etc/filebeat/filebeat.yml
Add the following Prospector in the filebeat section to send the Nginx access logs as type nginx-access to your Logstash server:
Nginx Prospector
    -
      paths:
        - /var/log/nginx/access.log
      document_type: nginx-access
Save and exit. Reload Filebeat to put the changes into effect:
sudo service filebeat restart
Now your Nginx logs will be gathered and filtered!
Application: Apache HTTP Web Server
Apache's log patterns are included in the default Logstash patterns, so it is fairly easy to set up a filter for it.
Note: If you are using a RedHat variant, such as CentOS, the logs are located at /var/log/httpd instead of /var/log/apache2, which is used in the examples.
Logstash Filter: Apache
On your ELK server, create a new filter configuration file called 12-apache.conf:
sudo vi /etc/logstash/conf.d/12-apache.conf
Then add the following filter:
Apache Filter
filter {
  if [type] == "apache-access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}
Save and exit. Note that this filter will attempt to match messages of apache-access type with the COMBINEDAPACHELOG pattern, one of the default Logstash patterns.
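The conditional wrapping each grok block is what keeps filters from clashing: Logstash checks the event's type field (set from document_type by Filebeat) before applying a pattern. A rough sketch of that routing, purely my own illustration:

```python
# Sketch of how [type] conditionals route events to grok patterns.
# The routing table mirrors the two filters defined in this guide.
def pick_pattern(event):
    """Return the grok pattern name a filter like the above would apply."""
    routes = {
        "apache-access": "COMBINEDAPACHELOG",
        "nginx-access": "NGINXACCESS",
    }
    return routes.get(event.get("type"))  # None -> no filter matches

print(pick_pattern({"type": "apache-access", "message": "..."}))
# COMBINEDAPACHELOG
```

Events whose type matches no conditional simply pass through unparsed, which is why the document_type values in Filebeat must agree exactly with the strings in the filter conditionals.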
Now restart Logstash to reload the configuration:
sudo service logstash restart
Filebeat Prospector: Apache
On your Apache servers, open the filebeat.yml configuration file for editing:
sudo vi /etc/filebeat/filebeat.yml
Add the following Prospector in the filebeat section to send the Apache logs as type apache-access to your Logstash server:
Apache Prospector
    -
      paths:
        - /var/log/apache2/access.log
      document_type: apache-access
Save and exit. Reload Filebeat to put the changes into effect:
sudo service filebeat restart
Now your Apache logs will be gathered and filtered!
Conclusion
It is possible to collect and parse logs of pretty much any type. Try writing your own filters and patterns for other log files.
Feel free to comment with filters that you would like to see, or with patterns of your own!
If you aren't familiar with using Kibana, check out this tutorial: How To Use Kibana Visualizations and Dashboards.
https://www.digitalocean.com/community/tutorials/adding-logstash-filters-to-improve-centralized-logging