
How to log JS errors from a client into Kibana?


I have a web application backed by Node.js, and a logstash/elasticsearch/kibana stack to handle system logs (access_error.log, messages.log, etc.).


Right now I also need to record all JavaScript client-side errors into Kibana. What is the best way to do this?


EDIT: I have to add additional information to this question. @Jackie Xu provided a partial solution to my problem, and as follows from my comment:


I'm most interested in getting the server-side error handling right. I don't think writing each error to a file is efficient. I'm looking for best practices to make it more performant.


I need to handle JS error records on the server side more efficiently than just writing them to a file. Could you suggest some ways to improve server-side logging performance?


4 Answers

#1 (35 votes)

When you say client, I'm assuming here that you mean a logging client and not a web client.


First, make it a habit to log your errors in a common format. Logstash likes consistency, so if you're putting text and JSON in the same output log, you will run into issues. Hint: log in JSON. It's awesome and incredibly flexible.

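As a concrete sketch of the "one JSON document per line" habit (the field names `level`, `time`, and `msg` are illustrative choices, not anything Logstash mandates — only consistency matters):

```javascript
// Sketch: emit one self-describing JSON object per line.
function logLine(level, msg, extra) {
  var record = Object.assign({
    level: level,
    time: new Date().toISOString(),
    msg: msg
  }, extra);
  // Logstash's json codec expects exactly one document per line
  return JSON.stringify(record);
}

console.log(logLine('error', 'upload failed', { userId: 42 }));
```

Mixing lines like this with free-form text in the same file is exactly what causes the parsing problems mentioned above.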

The overall process will go like this:


  1. Error occurs in your app
  2. Log the error to a file, a socket, or over the network
  3. Tell Logstash how to get (input) that error (i.e. from a file, listening over the network, etc.)
  4. Tell Logstash to send (output) the error to Elasticsearch (which can be running on the same machine)

In your app, try using the bunyan logger for Node: https://github.com/trentm/node-bunyan


Node app (index.js):

var bunyan = require('bunyan');
var log = bunyan.createLogger({
  name: 'myapp',
  streams: [{
    level: 'info',
    stream: process.stdout // log INFO and above to stdout
  }, {
    level: 'error',
    path: '/var/log/myapp-error.log' // log ERROR and above to a file
  }]
});

// Log stuff like this
log.info({status: 'started'}, 'foo bar message');

// Also, in Express you can catch all errors like this
app.use(function (err, req, res, next) {
  log.error(err);
  res.status(500).send('An error occurred');
});

Then you need to configure Logstash to read those JSON log files and send them to Elasticsearch/Kibana. Make a file called myapp.conf and try the following:


Logstash config (myapp.conf):

# Input can read from many places, but here we're just reading the app error log
input {
    file {
        type => "my-app"
        path => [ "/var/log/myapp/*.log" ]
        codec => "json"
    }   
}

# Output can go many places, here we send to elasticsearch (pick one below)
output {

  elasticsearch {
    # Do this if elasticsearch is running somewhere else
    host => "your.elasticsearch.hostname"
    # Do this if elasticsearch is running on the same machine
    host => "localhost"
    # Do this if you want to run an embedded elastic search in logstash
    embedded => true   
  }

}

Then start/restart logstash as such: bin/logstash agent -f myapp.conf web


Then go to http://your-elasticsearch-host:9292 (the web argument above starts the bundled Kibana interface on that port) to see the logs coming in.


#2 (3 votes)

You would have to catch all client side errors first (and send these to your server):


window.onerror = function (message, url, lineNumber) {

    // Send error to server for storage
    yourAjaxImplementation('http://domain.com/error-logger/', {
        lineNumber: lineNumber,
        message: message,
        url: url
    });

    // Allow default error handling; return true to suppress it
    return false;

};

Afterwards you can use NodeJS to write these error messages to a log. Logstash can collect these, and then you can use Kibana to visualise.


Note that, according to Mozilla, window.onerror doesn't appear to fire for every error. You might want to switch to something like Sentry (if you don't want to pay, you can get the source directly from GitHub).

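One hedged workaround, if you stay hand-rolled rather than adopting Sentry, is to wrap critical entry points in try/catch so errors that window.onerror misses are still reported. The `report` callback here is a stand-in for whatever transport your handler uses:

```javascript
// Wrap a function so exceptions are reported before propagating.
function guarded(fn, report, context) {
  return function () {
    try {
      return fn.apply(this, arguments);
    } catch (err) {
      report({
        message: err.message,
        stack: err.stack,
        context: context
      });
      throw err; // do not swallow the error
    }
  };
}

// Usage sketch (names are illustrative):
// button.onclick = guarded(handleClick, sendToServer, 'checkout-button');
```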

#3 (1 vote)

Logging errors through the default built-in file logging allows your errors to be preserved, and it also allows your kernel to optimize the writes for you.


If you really think that it is not fast enough (do you get that many errors?), you could just put them into Redis.


Logstash has a Redis pub/sub input, so you can store the errors in Redis and Logstash will pull them out and, in your case, store them in Elasticsearch.

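The buffering idea behind the Redis suggestion can be sketched without Redis itself: accumulate records in memory and hand them off in batches, instead of doing one write (or one RPUSH) per error. Names and sizes below are illustrative:

```javascript
// Collect error records and flush them in batches.
// `flush` is a placeholder: in practice it could RPUSH the batch
// onto a Redis list that Logstash's redis input then consumes.
function createErrorBuffer(flush, maxSize) {
  var pending = [];
  return {
    add: function (record) {
      pending.push(record);
      if (pending.length >= maxSize) this.flushNow();
    },
    flushNow: function () {
      if (pending.length === 0) return;
      var batch = pending;
      pending = [];
      flush(batch);
    }
  };
}

// Usage sketch: flush every 50 errors (and on a timer, in a real app)
var buffer = createErrorBuffer(function (batch) {
  console.log('flushing ' + batch.length + ' errors');
}, 50);
```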

I'm presuming Logstash/ES are on another server; otherwise there really is no point in doing this. ES has to store the data on disk as well, and that is not nearly as efficient as writing a log file.


Whatever solution you go with, you'll want to persist the data, e.g. write it to disk. Appending to a single (log) file is highly efficient, and when preserving data the only way to handle more is to shard it across multiple disks/nodes.


#4 (1 vote)

If I understand correctly, the problem you have is not about sending your logs back to the server (@Jackie-xu provided some hints for that), but rather about how to send them to Elasticsearch most efficiently.


Actually, the vast majority of users of the classic Logstash/Elasticsearch/Kibana stack are used to having an application that logs into a file, then using Logstash's file input to parse that file and send the result to Elasticsearch. Since @methai gave a good explanation of it, I won't go any further down that path.


But what I would like to point out is:


You are not forced to use Logstash.
Actually Logstash's main role is to collect the logs, parse them to identify their structure and recurring fields, and finally output them in a JSON format so that they can be sent to Elasticsearch. But since you are already manipulating JavaScript on the client side, one can easily imagine talking directly to the Elasticsearch server. For example, once you have caught a JavaScript exception, you could do the following:


var xhr = new XMLHttpRequest();
// POST the error as a document to an (illustrative) index/type endpoint
xhr.open("POST", "http://your-elasticsearch-host:9200/js-errors/error", true);
xhr.setRequestHeader("Content-Type", "application/json");
var data = {
    lineNumber: lineNumber,
    message: message,
    url: url
};
xhr.send(JSON.stringify(data));

By doing this, you are talking directly from the client to the Elasticsearch server. I can't imagine a simpler or faster way to do it (but note that this is just theory, I never tried it myself, so reality could be more complex, especially if you want special fields like timestamps to be generated ;)). In a production context you will probably have security concerns, and probably a proxy server between the client and the ES server, but the principle is there.


If you absolutely want to use Logstash, you are not forced to use a file input.
If, for the purpose of harmonizing, doing the same as everyone else, or using advanced Logstash parsing configuration, you want to stick with Logstash, you should take a look at all the alternative inputs to the basic file input. For example, I used to use a pipe myself, with a process in charge of collecting the logs and writing them to standard output. It is also possible to read from an open TCP socket, and much more; you can even add your own.

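For example, a TCP input can be sketched like this (the port number is an arbitrary choice; the json codec matches the one-JSON-document-per-line format used earlier):

```
# Hypothetical alternative to the file input: listen on a TCP socket.
# Anything that writes one JSON document per line to this port
# (e.g. the Node app itself) will be picked up by Logstash.
input {
    tcp {
        port => 5000
        codec => "json"
    }
}
```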

