Running this command creates the shared, system-level settings: global templates, user roles, ingest pipelines, and any other settings that will be applied later. There is not much to explain here. For this we need an es shell script:
#!/bin/bash
URL=$1
USERNAME=$2
PASSWORD=$3
VARI_ABLE=$4
FLAG=$5
CURL_LOG='tempfile'
API_NAME=''
JSON_FILENAME="${VARI_ABLE}"

function pipeline(){
    API_NAME="_ingest/pipeline/${VARI_ABLE}"
    _execute
}

function template(){
    API_NAME="_template/${VARI_ABLE}"
    _execute
}

function policy(){
    API_NAME="_ilm/policy/${VARI_ABLE}"
    JSON_FILENAME='policy'
    _execute
}

function _execute(){
    httpcode=$(curl -k --cacert cert/elastic-ca.pem -H'Content-Type: application/json' -H'WWW-Authenticate: Basic realm=security charset=UTF-8' -s -u "${USERNAME}:${PASSWORD}" -XPUT "${URL}/${API_NAME}" -d "@template/${JSON_FILENAME}.json" -w %{http_code} -o "$CURL_LOG")
    #echo "$httpcode"
    if [ "$httpcode" = "200" ]; then
        rm "$CURL_LOG"
        exit 0
    else
        cat "$CURL_LOG" > /dev/stderr
        rm "$CURL_LOG"
        exit 1
    fi
}
if [ "$FLAG" = "pipeline" ]; then
    pipeline
elif [ "$FLAG" = "template" ]; then
    template
elif [ "$FLAG" = "policy" ]; then
    policy
else
    echo "invalid parameter"
fi
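The heart of `_execute` is the curl trick of diverting the response body to a temp file with `-o` while printing only the HTTP status via `-w %{http_code}`. A minimal offline sketch of that flow, with a stub standing in for curl so it can run without a cluster (names are hypothetical):

```shell
#!/bin/sh
# Sketch of the status-code handling in _execute above.
CURL_LOG=$(mktemp)

fake_curl() {
    # curl -o "$CURL_LOG" would write the response body here ...
    echo '{"acknowledged":true}' > "$CURL_LOG"
    # ... while -w %{http_code} prints only the status code to stdout
    printf '200'
}

httpcode=$(fake_curl)
if [ "$httpcode" = "200" ]; then
    rm -f "$CURL_LOG"
    echo "success"
else
    cat "$CURL_LOG" >&2   # surface the error body, then clean up
    rm -f "$CURL_LOG"
    exit 1
fi
```

Because the body never touches stdout, the command substitution captures only the status code, which makes the success check a plain string comparison.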
With the corresponding JSON files prepared under template/, the script can apply the matching policy, template, or pipeline settings.
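For the policy branch, template/policy.json has to hold an ILM policy body. A minimal illustrative example — the rollover and retention thresholds here are made up and should be tuned to your log volume:

```json
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50gb", "max_age": "7d" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```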
We also need a kibana shell script:
#!/bin/bash
URL=$1
USERNAME=$2
PASSWORD=$3
INDEXPATTERN=$4
FLAG=$5
function create(){
    curl -k --cacert cert/elastic-ca.pem -H'Content-Type: application/json' -H'kbn-xsrf: true' -s -u "${USERNAME}:${PASSWORD}" -XPOST "${URL}/api/saved_objects/index-pattern/${INDEXPATTERN}?overwrite=true" -d'
    {
        "attributes":
        {
            "title": "'"$INDEXPATTERN"'",
            "timeFieldName": "@timestamp"
        }
    }
    '
}

function delete(){
    curl -k --cacert cert/elastic-ca.pem -H'Content-Type: application/json' -H'kbn-xsrf: true' -s -u "${USERNAME}:${PASSWORD}" -XDELETE "${URL}/api/saved_objects/index-pattern/${INDEXPATTERN}"
}

if [ "$FLAG" = "create" ]; then
    create
elif [ "$FLAG" = "delete" ]; then
    delete
else
    echo "invalid parameter"
fi
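The fiddly part of create() is splicing the shell variable into the JSON body (the `"'"$INDEXPATTERN"'"` quoting dance). The same payload can be built more readably with printf; a small sketch, with an example index-pattern name:

```shell
#!/bin/sh
# Build the saved-object body that create() posts to Kibana.
build_payload() {
    printf '{"attributes":{"title":"%s","timeFieldName":"@timestamp"}}' "$1"
}

build_payload "prod-app-*"
```

The function's output could then be passed to curl with `-d "$(build_payload "$INDEXPATTERN")"`, avoiding nested quote juggling inside the heredoc-style literal.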
Creating the index
This command is the core of the tool. Picture the scenario that relies entirely on "artificial intelligence" (of the human-powered kind): an ops engineer logs onto the server, manually adds a directory to the watch list, does a restart, then goes on to create the kibana index-pattern — and if index lifecycle management is not used, it might end right there. So, you may ask, how do the indices ever get deleted? With a timer that periodically deletes any index older than a certain age! This is a true story. That approach does work, after a fashion, but it is not recommended. The create-index command therefore has to automate away that whole manual process and, combined with index lifecycle management, set up every configuration item the flow needs — then it is complete. The key code for creating the index follows:
public static async Task CreateIndexAsync(IndexDto dto)
{
    var suffix = dto.Environment + "-" + dto.IndexName;
    // alias
    var aliasName = suffix + "-alias";
    // ILM policy
    var policyId = suffix + "-policy";
    // index template
    var templateName = suffix + "-template";
    // index name: date-math style for roll-up type 0, otherwise numeric rollover style
    var indexName = dto.RollUpType == 0 ? "<" + suffix + "-{now/d}-1>" : suffix + "-000001";
    // kibana index-pattern
    var kibanaIndexPattern = suffix + "-*";
    // 1. create the lifecycle policy first
    var getLifecycleRequest = new GetLifecycleRequest(policyId);
    var lifeCycleGetResponse = await EsClient.Client.IndexLifecycleManagement.GetLifecycleAsync(getLifecycleRequest);
    if (lifeCycleGetResponse.ApiCall.HttpStatusCode == 404)
    {
        var result = await EsScriptClient.ExecutePolicyAsync(new PolicyDto(policyId));
        if (!result.Item1)
        {
            LogCenterManager.Log(LogLevel.Error, $"Failed to create index lifecycle policy {policyId}, response: {result.Item2}", nameof(CreateIndexAsync));
            return;
        }
        LogCenterManager.Log(LogLevel.Info, $"Index lifecycle policy {policyId} created", nameof(CreateIndexAsync));
    }
    // 2. then create the template with order 1, taking precedence over the shared template
    var indexExistsTemplateRequest = new IndexTemplateExistsRequest(templateName);
    if (!(await EsClient.Client.Indices.TemplateExistsAsync(indexExistsTemplateRequest)).Exists)
    {
        var putIndexTemplateRequest = new PutIndexTemplateRequest(templateName)
        {
            Order = 1,
            IndexPatterns = new[] { $"{suffix}-*" },
            Settings = new IndexSettings(new Dictionary<string, object>
            {
                {"lifecycle.name", policyId},
                {"lifecycle.rollover_alias", aliasName}
            })
        };
        var putTemplateResponse = await EsClient.Client.Indices.PutTemplateAsync(putIndexTemplateRequest);
        if (!putTemplateResponse.IsValid)
        {
            LogCenterManager.Log(LogLevel.Error, $"Failed to create index template {templateName}", nameof(CreateIndexAsync), putTemplateResponse.OriginalException);
            return;
        }
        LogCenterManager.Log(LogLevel.Info, $"Index template {templateName} created, applying lifecycle.name: {policyId}, lifecycle.rollover_alias: {aliasName}", nameof(CreateIndexAsync));
    }
    // 3. create the initial write index
    if (!(await EsClient.Client.Indices.ExistsAsync(indexName)).Exists)
    {
        var createIndexResponse = await EsClient.Client.Indices.CreateAsync(indexName, request =>
        {
            return request.Aliases(aliases => aliases.Alias(aliasName, alias => alias.IsWriteIndex()));
        });
        if (!createIndexResponse.IsValid)
        {
            LogCenterManager.Log(LogLevel.Error, $"Failed to create index {indexName}", nameof(CreateIndexAsync), createIndexResponse.OriginalException);
            return;
        }
    }
    // 4. create the kibana index-pattern
    var createKibanaIndexPatternResponse = await KibanaClient.CreateIndexPatternAsync(new IndexPatternDto(kibanaIndexPattern, dto.Flag));
    if (!createKibanaIndexPatternResponse.Item1)
    {
        LogCenterManager.Log(LogLevel.Error, $"Failed to create kibana index-pattern {kibanaIndexPattern}, response: {createKibanaIndexPatternResponse.Item2}", nameof(CreateIndexAsync));
        return;
    }
    // 5. generate the filebeat config file
    var filebeatInputFile = Path.Combine(Consts.FilebeatOutputFilePath, dto.IndexName + ".yml");
    var contents = await File.ReadAllTextAsync(Consts.FilebeatFilePath, Encoding.UTF8);
    contents = contents
        .Replace("${current_path}", filebeatInputFile)
        .Replace("${create_time}", DateTime.Now.ToString("yyyy-MM-dd"))
        .Replace("${log_type}", dto.IndexName)
        .Replace("${logName}", dto.IndexName.Replace("-", "."))
        .Replace("${source}", dto.IndexName.Split('-')[0]);
    await File.WriteAllTextAsync(filebeatInputFile, contents);
    LogCenterManager.Log(LogLevel.Info, Environment.NewLine + contents + Environment.NewLine + Environment.NewLine + $"Index {dto.IndexName} created; double-check the generated {dto.IndexName}.yml by hand (contents above), and if it looks right, run the -push command to push it to the server", nameof(CreateIndexAsync));
}
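For reference, step 3 above — the NEST CreateAsync call with IsWriteIndex — corresponds to a plain PUT of the initial index (here `prod-app-000001`, an illustrative name) whose body marks it as the write target of the rollover alias:

```json
{
  "aliases": {
    "prod-app-alias": { "is_write_index": true }
  }
}
```

Because the template created in step 2 carries `lifecycle.name` and `lifecycle.rollover_alias`, every index matching the pattern picks up the ILM policy automatically, and rollover keeps pointing the alias at the newest write index.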
File push
Why would we need such a command? It automatically uploads the edited yml file into the directory that filebeat watches. It is a simple convenience; leaving it out is also fine, as long as ops knows exactly where to place the file. The file's contents are as follows:
# auto-generated
# ${current_path} ${create_time}
- type: log
  enabled: true
  paths:
    - /var/logs/${logName}/*.log
  fields_under_root: true
  fields:
    log_type: ${log_type}
    source: ${source}
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: false
  json.message_key: 'message'
  multiline.pattern: '^\{"date":"\d{4}-\d{1,2}-\d{1,2}'
  multiline.negate: true
  multiline.match: after
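The ${...} placeholders above are what step 5 of CreateIndexAsync substitutes. The same substitution can be sketched in shell with sed — the index name and the one-line template here are illustrative:

```shell
#!/bin/sh
# Mimic step 5 of CreateIndexAsync: fill the filebeat template placeholders.
index_name="prod-app-web"
template='paths: /var/logs/${logName}/*.log log_type: ${log_type} source: ${source}'

log_name=$(echo "$index_name" | tr '-' '.')   # dto.IndexName.Replace("-", ".")
source_name=${index_name%%-*}                 # dto.IndexName.Split('-')[0]

result=$(echo "$template" | sed \
    -e "s|\${log_type}|$index_name|" \
    -e "s|\${logName}|$log_name|" \
    -e "s|\${source}|$source_name|")
echo "$result"
```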
Once "create index" has substituted each of the variables above one by one, a yml file ready for filebeat to watch is in hand; it only needs minor tweaks before use. For example, since our applications are all deployed on linux, we can use a script like the following to implement the simple file-push requirement:
#!/bin/bash
USERNAME=$1
SOURCENAME=$2
TARGETNAME=$3
TARGETDIR="/usr/local/filebeat/input.d"
ssh ${USERNAME} "mkdir -p ${TARGETDIR}"
scp template/output/${SOURCENAME}.yml ${USERNAME}:${TARGETDIR}/${TARGETNAME}.yml
r=$?
if [ "$r" = 0 ]; then
    echo "success"
else
    echo "error:$r"
fi
Closing thoughts
This article walked through the core steps of building the automation tool, with the key code included along the way. On analysis, it simply moves the steps we used to perform by hand into a tool — no particularly deep or obscure techniques are involved. Still, as the saying goes, understanding and doing are two different things. Clever as you are, get your hands moving! The next article will cover the settings in filebeat.