1. Introduction to Logstash

Logstash is a data-processing tool whose main job is to collect and analyze log data. The whole stack can be thought of as an MVC model: Logstash is the controller layer, Elasticsearch is the model layer, and Kibana is the view layer.
Data is first fed to Logstash, which filters and formats it (converting it to JSON) and then passes it to Elasticsearch for storage and search indexing. Kibana provides the front-end pages for search and chart visualization;
it calls the Elasticsearch API and visualizes the data returned. Logstash and Elasticsearch both run on the JVM (Elasticsearch is written in Java, Logstash largely in JRuby), while Kibana is built on Node.js.
Architecture diagram: (image omitted)
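Every Logstash pipeline, including all the examples below, follows the same three-stage skeleton; a minimal sketch (the plugins named here are just placeholders):

input {                      # where events come from: file, udp, redis, ...
    stdin {}
}
filter {                     # how events are parsed and enriched: grok, mutate, ...
}
output {                     # where events go: stdout, elasticsearch, redis, ...
    stdout { codec => rubydebug }
}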
3. The plugins in detail

(1) input
File

file: reads an event stream from the specified file(s);
it uses FileWatch (a Ruby gem) to watch the files for changes.
.sincedb: records, for every watched file, its inode, major number, minor number, and read position (pos), so that reading can resume where it left off.
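Each line of the .sincedb file holds exactly those four values, space-separated (the numbers below are illustrative):

# inode  major_number  minor_number  pos
34791479 0 64768 212045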
#-----
# Read log entries from /var/log/messages
[root@node4 conf.d]# cat filesample.conf
input {
    file {
        path => ["/var/log/messages"]
        type => "system"
        start_position => "beginning"
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
#-----
# The log entries are written to standard output
[root@node4 conf.d]# logstash -f /etc/logstash/conf.d/filesample.conf --configtest
[root@node4 conf.d]# logstash -f /etc/logstash/conf.d/filesample.conf | head -n 8
{
"message" => "Mar 29 04:09:26 localhost journal: Runtime journal is using 6.0M .",
"@version" => "1",
"@timestamp" => "2020-04-22T15:09:19.310Z",
"host" => "node4",
"path" => "/var/log/messages",
"type" => "system"
}
udp

udp: reads messages from the network over the UDP protocol. Its required parameter is port, the port to listen on; host specifies the address to listen on.

# Install collectd on node3 and have it send system metrics to node4; node4 listens for node3
# collectd is a daemon that periodically collects system and application performance metrics
#----
[root@node3 ~]# yum -y install collectd
# /etc/collectd.conf is the config file; each LoadPlugin line enables a plugin (i.e. which data to collect)
[root@node3 ~]# grep -v "^#" /etc/collectd.conf | grep -v "^$"
Hostname "node3"
LoadPlugin syslog
LoadPlugin cpu
LoadPlugin df
LoadPlugin interface
LoadPlugin load
LoadPlugin memory
LoadPlugin network
# Some plugins need extra configuration; 192.168.3.114 is node4
<Plugin network>
    <Server "192.168.3.114" "25826">    # whom to send to, and the (UDP) port to send on
    </Server>
</Plugin>
Include "/etc/collectd.d"
[root@node3 ~]# systemctl start collectd
[root@node3 ~]# systemctl status collectd
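To verify that the collectd datagrams actually reach node4, you can watch the port with tcpdump (assuming it is installed):

[root@node4 ~]# tcpdump -n -i any udp port 25826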
#---- on node4
[root@node4 conf.d]# cat udpsample.conf
input {
    udp {
        port => 25826
        codec => collectd {}
        type => "collectd"
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
#----
[root@node4 conf.d]# logstash -f /etc/logstash/conf.d/udpsample.conf
Logstash startup completed
{
"host" => "node3",
"@timestamp" => "2020-04-22T15:55:08.119Z",
"plugin" => "df",
"plugin_instance" => "run-user-0",
"collectd_type" => "df_complex",
"type_instance" => "free",
"value" => 190799872.0,
"@version" => "1",
"type" => "collectd"
}
{
"host" => "node3",
"@timestamp" => "2020-04-22T15:55:08.119Z",
"type_instance" => "nice",
"plugin" => "cpu",
"plugin_instance" => "0",
"collectd_type" => "cpu",
"value" => 62,
"@version" => "1",
"type" => "collectd"
}
......
redis

redis: reads data from Redis; both Redis channels (pub/sub) and lists are supported.
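A minimal sketch of a redis input; the host and key values are assumptions, chosen to mirror the redis output example at the end of this post:

input {
    redis {
        host => "127.0.0.1"
        port => 6379
        data_type => "list"            # "list" (BLPOP) or "channel" (SUBSCRIBE)
        key => "logstash-nginxlog"     # list key / channel name to read from
    }
}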
(2) filter

grok

grok: parses and structures text; it is currently the go-to choice in Logstash for turning unstructured log data into structured, queryable data.
grok works by combining predefined regular expressions to match and split text and map the pieces onto named fields. It is typically used to preprocess log data
and can handle syslog, Apache, Nginx, and similar log formats.
grok ships with about 120 predefined patterns (ready to use); see the following file:
/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-0.3.0/patterns/grok-patterns
You can also define your own: either append them to the file above, or create a dedicated file under /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-0.3.0/patterns.
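Each entry in a patterns file is simply a name followed by a regular expression, one definition per line; a hypothetical example:

# NAME regex
MYAPP_REQID [0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}

It could then be referenced in a match as %{MYAPP_REQID:request_id}.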
Example:
[root@node4 conf.d]# cat groksample.conf
input {
    stdin {}
}
filter {
    grok {
        match => { "message" => "%{IP:clientip} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
# In %{IP:clientip}, IP is the pattern used for matching and clientip is the field name shown in the output
%{IP:clientip} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}
Check & start:
[root@node4 conf.d]# logstash -f /etc/logstash/conf.d/groksample.conf --configtest
[root@node4 conf.d]# logstash -f /etc/logstash/conf.d/groksample.conf
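Typing a line in the shape the pattern expects, e.g. 55.3.244.1 GET /index.html 15824 0.043 (the sample line from the grok documentation), should yield an event roughly like:

{
    "message" => "55.3.244.1 GET /index.html 15824 0.043",
    "clientip" => "55.3.244.1",
    "method" => "GET",
    "request" => "/index.html",
    "bytes" => "15824",
    "duration" => "0.043",
    ......
}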
Custom grok patterns:
grok patterns are written as regular expressions; their metacharacters differ little from those of other regex-based tools such as awk/sed/grep/pcre.
A custom pattern for nginx logs:
# Append a model for parsing nginx logs to the end of the file
[root@node4 ~]# vim /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-0.3.0/patterns/grok-patterns
#nginx
NGUSERNAME [a-zA-Z\.\@\-\+_\%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{NOTSPACE:http_x_forwarded_for}
# note: each definition must stay on a single line in the patterns file; the brackets around the timestamp must be escaped (\[ \])
#----
# Install nginx on node4 and generate some access-log entries
yum -y install nginx
systemctl start nginx
# Visit http://192.168.3.114/ in a browser to generate log entries
#----
[root@node4 conf.d]# cat nginxsample.conf
input {
    file {
        path => ["/var/log/nginx/access.log"]
        type => "nginxlog"
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{NGINXACCESS}" } # the custom pattern can be referenced by name directly
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
Check & start:
[root@node4 conf.d]# logstash -f /etc/logstash/conf.d/nginxsample.conf --configtest
[root@node4 conf.d]# logstash -f /etc/logstash/conf.d/nginxsample.conf
The nginx log entries are now parsed into structured fields:
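An event for one access-log line looks roughly like the following (the values are illustrative; the field names come from the NGINXACCESS pattern above):

{
    "message" => "192.168.3.1 - - [22/Apr/2020:23:30:11 +0800] \"GET / HTTP/1.1\" 200 3700 \"-\" \"Mozilla/5.0\" \"-\"",
    "clientip" => "192.168.3.1",
    "remote_user" => "-",
    "timestamp" => "22/Apr/2020:23:30:11 +0800",
    "verb" => "GET",
    "request" => "/",
    "httpversion" => "1.1",
    "response" => "200",
    "bytes" => "3700",
    "referrer" => "\"-\"",
    "agent" => "\"Mozilla/5.0\"",
    ......
}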
(3) output

stdout: writes events to standard output (used in the examples above with codec => rubydebug).
elasticsearch: sends events to Elasticsearch for storage and indexing.
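This post does not show an elasticsearch output, but a minimal sketch would look like the following (the address is an assumption, and option names vary across versions; Logstash 1.x uses host/cluster instead of hosts):

output {
    elasticsearch {
        hosts => ["192.168.3.114:9200"]             # assumed Elasticsearch address
        index => "logstash-%{type}-%{+YYYY.MM.dd}"  # one index per type per day
    }
}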
redis

Send the collected nginx logs into Redis:
Install Redis on node4:
[root@node4 ~]# yum -y install redis
In /etc/redis.conf, set: bind 0.0.0.0
[root@node4 ~]# systemctl start redis.service
Logstash configuration file:
cd /etc/logstash/conf.d
[root@node4 conf.d]# vim nglogredissample.conf
input {
    file {
        path => ["/var/log/nginx/access.log"]
        type => "nginxlog"
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{NGINXACCESS}" }
    }
}
output {
    redis {
        port => "6379"                # Redis port
        host => ["127.0.0.1"]         # Redis host
        data_type => "list"           # push each event onto a list
        key => "logstash-%{type}"     # list key; %{type} expands to the event's type field
    }
}
# Check & start
[root@node4 conf.d]# logstash -f ./nglogredissample.conf --configtest
[root@node4 conf.d]# logstash -f ./nglogredissample.conf    # now reload the nginx page a few times and the log entries flow into Redis
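To confirm the events landed in Redis, query the list with redis-cli; the key name logstash-nginxlog follows from key => "logstash-%{type}" with type => "nginxlog":

[root@node4 ~]# redis-cli llen logstash-nginxlog        # number of queued events
[root@node4 ~]# redis-cli lrange logstash-nginxlog 0 0  # peek at the first event (a JSON document)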
Original article: https://www.cnblogs.com/weiyiming007/p/12764662.html