Installing ELK
Install the Java 1.8 environment
Unpack the JDK binary tarball:
tar xf jdk-8u121-linux-x64.tar.gz
ll
mkdir /work/opt -p
mv jdk1.8.0_121 /work/opt/jdk
ll /work/opt/jdk/
chown -R root.root /work/opt
vim /etc/profile    //append the following:
export JAVA_HOME=/work/opt/jdk
export JAVA_BIN=$JAVA_HOME/bin
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=$JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_BIN:$PATH
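After appending these lines, reload the profile and confirm the JDK is picked up (the exact version string depends on your JDK build):
source /etc/profile
java -version    # should report something like: java version "1.8.0_121"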
Install elasticsearch-5.3.0
tar xf elasticsearch-5.3.0.tar.gz
mv elasticsearch-5.3.0 /work/opt/elasticsearch
ubuntu@ip-172-31-1-79:/work/opt/elasticsearch/config$ egrep -v '#|^$' elasticsearch.yml
cluster.name: lvnian-elk
node.name: lvnian-elk-node1
path.data: /data #elasticsearch is started as a non-root user, so this directory must be owned by that user
path.logs: /work/opt/elasticsearch/logs #likewise must be owned by the non-root user
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
ubuntu@ip-172-31-1-79:/work/opt/elasticsearch/config$
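Elasticsearch refuses to start as root, so before launching it create the data and log directories and hand them, together with the install tree, to the unprivileged user; the ubuntu user from the prompts above is assumed here:
mkdir -p /data /work/opt/elasticsearch/logs
chown -R ubuntu:ubuntu /data /work/opt/elasticsearch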
nohup /work/opt/elasticsearch/bin/elasticsearch >> /tmp/elasticsearch.log &
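Once it is up, a quick curl against port 9200 on the ES host should return a JSON banner carrying the cluster name configured above:
curl http://127.0.0.1:9200/
# expect JSON containing "cluster_name" : "lvnian-elk"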
## Installing the head plugin for ES 5.x:
(elasticsearch 5.x provides no direct plugin-install method for head, but the plugin author documents a standalone setup on the project's GitHub page)
Download the Node.js binary tarball:
wget https://nodejs.org/dist/v6.2.0/node-v6.2.0-linux-x64.tar.gz
Unpack it:
tar xf node-v6.2.0-linux-x64.tar.gz -C /work/opt/
mv /work/opt/node-v6.2.0-linux-x64 /work/opt/node    # rename so the NODE_HOME path below matches
Set the environment variables:
vim /etc/profile:
export NODE_HOME=/work/opt/node/
export PATH=$PATH:$NODE_HOME/bin
root@ip-172-31-1-79:/work/source# node --version
v6.10.1
npm config set registry https://registry.npm.taobao.org    //point npm at a registry mirror to speed up downloads
cd /work/opt/node/lib/node_modules
npm install grunt    //the two warn messages shown can be ignored
Test that grunt works:
$ grunt --version
grunt-cli v1.2.0
grunt v1.0.1
Install the head plugin:
Download it: git clone git://github.com/mobz/elasticsearch-head.git
cd /home/stt/elasticsearch-head
npm install    (Tip: if the network is a bottleneck, placing a pre-downloaded phantomjs package in /tmp/phantomjs/ works just as well; create the directory and upload the package there yourself)
Edit elasticsearch-head/_site/app.js
// change localhost to the server's IP
Find:
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://localhost:9200";
Change it to:
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.8.116:9200";
Edit elasticsearch-head/Gruntfile.js:
connect: {
    server: {
        options: {
            hostname: "0.0.0.0", //add this line
            port: 9100,
            base: '.',
            keepalive: true
        }
    }
}
Start the service (in the background):
grunt server &    //must be run from /home/stt/elasticsearch-head, since grunt here is not installed globally
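head should now be listening on port 9100; a quick check (replace the IP with your own server's) before opening the same URL in a browser:
curl -I http://192.168.8.116:9100/    # expect HTTP/1.1 200 OK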
## Install Logstash (install it on each server whose logs you want to collect; logstash reads the logs and ships them to elasticsearch):
Logstash also needs the Java 1.8 environment.
tar xf logstash-5.3.0.tar.gz
mkdir /work/opt
mv logstash-5.3.0 /work/opt/
cd /work/opt/
vim /work/opt/logstash-5.3.0/conf/central.conf    # (create the conf directory first if needed; this config handles FILE-based log input, a simple example to build on later)
input {
    file {
        path => "/tmp/*.log"
    }
}
output {
    elasticsearch {
        hosts => "192.168.8.116:9200"
        index => "nginx-access"
    }
    stdout {
        codec => rubydebug
    }
}
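Before pointing this at real traffic, it is worth letting logstash validate the file; Logstash 5.x can parse a config and exit:
/work/opt/logstash-5.3.0/bin/logstash -f /work/opt/logstash-5.3.0/conf/central.conf --config.test_and_exit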
## Install Kibana:
Unpack the tarball:
tar zxf kibana-5.1.1-linux-x86_64.tar.gz -C /home/stt/server/
vim config/kibana.yml    //edit:
server.port: 5601    //just uncomment this line; keep the default port
server.host: "0.0.0.0"    //listen on all interfaces so other machines can reach kibana
elasticsearch.url: "http://127.0.0.1:9200"    //set this to the URL that actually reaches elasticsearch
Start the service (in the background):
/home/stt/server/kibana-5.1.1-linux-x86_64/bin/kibana &
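Kibana takes a few seconds to come up; port 5601 should then answer on the Kibana host:
curl -I http://127.0.0.1:5601/    # expect an HTTP response once startup finishes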
Install nginx as a reverse proxy:
apt-get install nginx
The nginx reverse-proxy configuration file is as follows:
## File name: kibana.conf
upstream backend {
    server 172.31.6.155:5601;
}
server {
    listen 80;
    server_name kibana.lvnian.co;
    access_log /tmp/kibana-access.log;
    error_log /tmp/kibana-error.log;
    location / {
        # pass the Host header and the client's real IP upstream
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # disable proxy buffering
        proxy_buffering off;
        # address of the proxied backend
        proxy_pass http://backend;
    }
}
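After dropping kibana.conf into the nginx configuration directory (for example /etc/nginx/conf.d/, adjust to your layout), test and reload:
nginx -t && nginx -s reload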
The following logstash configuration reads the nginx access and error logs and ships them to elasticsearch. Run it with:
nohup /work/opt/logstash-5.3.0/bin/logstash -f /work/opt/logstash-5.3.0/conf/elk-nginx-log.conf &
File name: elk-nginx-log.conf
input {
    file {
        path => "/data/log/nginx/*.log"
        start_position => "beginning"
    }
}
filter {
    if [path] =~ "access" {
        mutate { replace => { type => "nginx_access" } }
        ruby {
            init => "@kname = ['http_x_forwarded_for','time_local','request','status','body_bytes_sent','request_body','content_length','http_referer','http_user_agent','http_cookie','remote_addr','hostname','upstream_addr','upstream_response_time','request_time']"
            code => "
                new_event = LogStash::Event.new(Hash[@kname.zip(event.get('message').split('|'))])
                new_event.remove('@timestamp')
                event.append(new_event)
            "
        }
        if [request] {
            ruby {
                init => "@kname = ['method','uri','verb']"
                code => "
                    new_event = LogStash::Event.new(Hash[@kname.zip(event.get('request').split(' '))])
                    new_event.remove('@timestamp')
                    event.append(new_event)
                "
            }
            if [uri] {
                ruby {
                    init => "@kname = ['url_path','url_args']"
                    code => "
                        new_event = LogStash::Event.new(Hash[@kname.zip(event.get('uri').split('?'))])
                        new_event.remove('@timestamp')
                        event.append(new_event)
                    "
                }
                kv {
                    prefix => "url_"
                    source => "url_args"
                    field_split => "& "
                    remove_field => [ "url_args","uri","request" ]
                }
            }
        }
        mutate {
            convert => [
                "body_bytes_sent" , "integer",
                "content_length", "integer",
                "upstream_response_time", "float",
                "request_time", "float"
            ]
        }
        date {
            match => [ "time_local", "dd/MMM/yyyy:HH:mm:ss Z" ]
            locale => "en"
        }
    }
    else if [path] =~ "error" {
        mutate { replace => { type => "nginx_error" } }
        grok {
            match => { "message" => "(?<datetime>\d\d\d\d/\d\d/\d\d \d\d:\d\d:\d\d) \[(?<errtype>\w+)\] \S+: \*\d+ (?<errmsg>[^,]+), (?<errinfo>.+)$" }
        }
        mutate {
            rename => [ "host", "fromhost" ]
            gsub => [ "errmsg", "too large body: \d+ bytes", "too large body" ]
        }
        if [errinfo]
        {
            ruby {
                code => "
                    new_event = LogStash::Event.new(Hash[event.get('errinfo').split(', ').map{|l| l.split(': ')}])
                    new_event.remove('@timestamp')
                    event.append(new_event)
                "
            }
        }
        grok {
            # match => { "request" => '"%{WORD:verb} %{URIPATH:urlpath}(?:\?%{NGX_URIPARAM:urlparam})?(?: HTTP/%{NUMBER:httpversion})"' }
            match => { "request" => '"%{WORD:verb} %{URIPATH:urlpath}?(?: HTTP/%{NUMBER:httpversion})"' }
            patterns_dir => ["/etc/logstash/patterns"]
            # remove_field => [ "message", "errinfo", "request" ]
        }
    }
    else {
        mutate { replace => { type => "random_logs" } }
    }
}
output {
    elasticsearch {
        hosts => "172.31.1.79:9200"
        #index => "logstash-nginx"
        index => "logstash-%{type}-%{+YYYY.MM.dd}"
        document_type => "%{type}"
        flush_size => 20000
        idle_flush_time => 10
        sniffing => true
        template_overwrite => true
    }
    # stdout {
    #     codec => rubydebug
    # }
}
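As with the simpler config earlier, validating the file before starting saves a restart cycle:
/work/opt/logstash-5.3.0/bin/logstash -f /work/opt/logstash-5.3.0/conf/elk-nginx-log.conf --config.test_and_exit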
To use the logstash configuration above, the nginx log format must be changed to match; it should look like this:
log_format elk "$http_x_forwarded_for | $time_local | $request | $status | $body_bytes_sent | "
"$request_body | $content_length | $http_referer | $http_user_agent | "
"$http_COOKIE | $remote_addr | $hostname | $upstream_addr | $upstream_response_time | $request_time|$gzip_ratio";
access_log /data/log/nginx/access.log elk;
error_log /data/log/nginx/error.log;
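Reload nginx, generate a request or two, and confirm the daily index shows up in elasticsearch (its name follows the logstash-%{type}-%{+YYYY.MM.dd} pattern from the output section):
nginx -s reload
curl -s http://localhost/ > /dev/null
curl 'http://172.31.1.79:9200/_cat/indices?v' | grep nginx
# expect a line such as logstash-nginx_access-<today's date>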
The following Python script sends a WeChat alert when the number of 502 errors in the log exceeds a threshold within a set time window.
#!/usr/bin/env python
# -*- coding:utf-8 -*-
# Author: gaogd
import datetime
import time

import weixin_alter as weixin
from elasticsearch import Elasticsearch

now_day = time.strftime('%Y.%m.%d', time.localtime(time.time()))
es = Elasticsearch([{'host': '172.3.11.179', 'port': 9200}])
index = 'logstash-nginx_access-%s' % now_day
# count 502 responses seen in the last 15 minutes
body = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"status": "502"}},
                {
                    "range": {
                        "@timestamp": {
                            "gt": "now-15m",
                            "lt": "now"
                        }
                    }
                },
            ]
        }
    },
    "aggs": {
        "group_by_source": {"terms": {"field": "params.source"}}
    },
    "size": 0
}
res = es.search(index=index, body=body)
num = res['hits']['total']
# print '502 error count:', num
if int(num) > 20:
    # read the suppression counter (renamed from `time` so the time module is not shadowed)
    with open('./alter', 'r') as f:
        counter = f.read()
    if len(counter) == 0 or int(counter) == 0 or int(counter) >= 15:
        print u'502 alert! The log shows %s 502 errors in the last 15 minutes\n %s' % (num, datetime.datetime.now())
        content = u'502 alert!!!\nThe frontend log shows 502 errors in the last 15 minutes\ncount: %s' % num
        weixin.WeixinSend(str(content))
        with open('./alter', 'w+') as f:
            f.write('1')
    elif 0 < int(counter) < 15:
        # alert already sent recently: bump the counter so a persistent problem re-alerts once it reaches 15
        counter = int(counter) + 1
        print '---------->', counter
        with open('./alter', 'w+') as f:
            f.write(str(counter))
        exit()
else:
    with open('./alter', 'r') as f:
        num1 = f.read()
    if num1 and int(num1) > 0:
        print u'502 alert recovered!\nThe log shows fewer than 20 502 errors in the last 15 minutes\ncurrent count: %s\n %s' % (num, datetime.datetime.now())
        content = u'502 alert recovered!!!\nFewer than 20 502 errors in the last 15 minutes\ncurrent count: %s' % num
        weixin.WeixinSend(str(content))
        with open('./alter', 'w+') as f:
            f.write('0')
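The query window is 15 minutes, so a matching cron cadence is natural; a minimal crontab entry, assuming the script is saved as /work/scripts/check_502.py and the ./alter state file lives in that directory, might look like:
*/15 * * * * cd /work/scripts && python check_502.py >> /tmp/check_502.log 2>&1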