Author: 寒风中挣扎_427 | Source: Internet | 2023-09-06 09:26
Many of you have probably seen commercial products that show per-URL response times, aggregated and displayed in real time. Think of it as slow-query logging for web endpoints, much like SQL slow-query logs. If you have wanted this feature but had no idea how to implement it, this article walks through one way to do it.
0x0
nginx already measures per-request response time; it only needs a small modification, for example to record responses slower than 1000ms (1 second) separately and then push those records into a database. Note that I do not recommend writing the records straight into the database, because a slow database can become a burden on nginx itself; write to a file first and sync indirectly. The change is a small edit to the log module, in ngx_http_log_module.c, typically located at nginx-1.17.9/src/http/modules/ngx_http_log_module.c.
At around line 838, find the ngx_http_log_request_time function and modify it as follows:
static u_char *
ngx_http_log_request_time(ngx_http_request_t *r, u_char *buf,
    ngx_http_log_op_t *op)
{
    ngx_time_t      *tp;
    ngx_msec_int_t   ms;

    /* wall-clock time for the slow-log line */
    time_t           t = time(NULL);
    struct tm       *loc_time = localtime(&t);

    u_char           slow_log[2048];
    char             server_name[256];
    int              logfd;

    tp = ngx_timeofday();

    /* elapsed request time in milliseconds, the same calculation
     * nginx uses for $request_time */
    ms = (ngx_msec_int_t) ((tp->sec - r->start_sec) * 1000
                           + (tp->msec - r->start_msec));
    ms = ngx_max(ms, 0);

    ngx_memzero(slow_log, sizeof(slow_log));
    ngx_sprintf(slow_log,
                "%04d/%02d/%02d %02d:%02d:%02d %V %V?%V waste time %T.%03M\n",
                loc_time->tm_year + 1900, loc_time->tm_mon + 1,
                loc_time->tm_mday, loc_time->tm_hour, loc_time->tm_min,
                loc_time->tm_sec, &r->headers_in.server, &r->uri, &r->args,
                (time_t) ms / 1000, ms % 1000);

    /* copy the Host header into a NUL-terminated C string for strcmp() */
    memset(server_name, 0, sizeof(server_name));
    snprintf(server_name, sizeof(server_name), "%.*s",
             (int) r->headers_in.server.len, r->headers_in.server.data);

    logfd = open("/var/log/nginx/nginx_slow.log",
                 O_RDWR | O_CREAT | O_APPEND, S_IRUSR | S_IWUSR);

    if (logfd == -1) {
        ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                      "can not open file: /var/log/nginx/nginx_slow.log");

    } else {
        /* only record requests slower than 1 second, and skip the
         * grafana.sshfortress.com virtual host */
        if (ms > 1000
            && strcmp("grafana.sshfortress.com", server_name) != 0)
        {
            write(logfd, slow_log, ngx_strlen(slow_log));
        }

        close(logfd);
    }

    return ngx_sprintf(buf, "%T.%03M", (time_t) ms / 1000, ms % 1000);
}
Then recompile and install:
# ./configure --prefix=/usr/local/nginx1.17.9
# make -j4 ; make install
# mkdir -p /var/log/nginx; chmod -R 777 /var/log/nginx
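Before switching over, you can sanity-check the rebuilt binary and configuration (assuming the default sbin path under the prefix above): nginx -V prints the version and configure arguments, nginx -t validates the config.
# /usr/local/nginx1.17.9/sbin/nginx -V
# /usr/local/nginx1.17.9/sbin/nginx -t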
A simple server block that proxies to the backend:
server {
    listen *:80;
    server_name slow.sshfortress.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:3000;
    }
}
Start nginx, and whenever a request takes longer than 1 second a line will appear in /var/log/nginx/nginx_slow.log. But that only records the slow requests; we still want to sort them and visualize them. The records could have been written straight into MySQL, but I chose not to: if MySQL responds slowly it would drag down nginx's own responses, whereas writing to disk never has that problem. Next, let's implement a simple way to sync the data into the database.
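Given the format string in the modified function, a line in nginx_slow.log looks roughly like this (the host, URI and timing are made-up example values):
2023/09/06 09:26:01 slow.sshfortress.com /api/report?id=42 waste time 1.234
Note the field positions: the server name is field 3, the URL is field 4 and the elapsed time is field 7, which is what the sync script below relies on.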
0x01
First, create a table to receive the synced records:
CREATE TABLE `nginx_slow` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`date` datetime NOT NULL,
`server_name` varchar(255) NOT NULL,
`url` varchar(255) NOT NULL,
`waste_time` decimal(11,3) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;
Then implement a small tool that syncs the data; a shell script is enough. The code is as follows:
#!/bin/bash
# insert_mysql.sh - follow the slow log and insert each new line into MySQL
set -x

if [ $# != 1 ]
then
    echo "Usage: insert_mysql.sh /var/log/nginx/nginx_slow.log"
    exit 1
fi

# log line fields: $1 date, $2 time, $3 server_name, $4 url, $7 waste_time
tail -n 1 -f "${1}" | while read var
do
    value=$(echo "$var"  | awk '{print $3}')
    value2=$(echo "$var" | awk '{print $4}')
    value3=$(echo "$var" | awk '{print $7}')
    echo "$value $value2 $value3"
    mysql -h 127.0.0.1 -usuper -pxxxxxxxxx -e \
        "use nginx; INSERT INTO nginx_slow (date, server_name, url, waste_time) VALUES (now(), '${value}', '${value2}', ${value3});"
done
Run the script and new slow-log lines will be written into the database as they arrive. Simple, isn't it?
# ./insert_mysql.sh /var/log/nginx/nginx_slow.log
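To keep the sync running after you log out, one option (among others, such as a systemd unit) is to start the script in the background with nohup; the log path is the same example path used above:
# nohup ./insert_mysql.sh /var/log/nginx/nginx_slow.log >/tmp/insert_mysql.log 2>&1 &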
0x02
Finally, configure a panel in Grafana showing the top 20 slowest endpoints of the current day. The query is:
select n.url, avg(n.waste_time) as avg_waste_time
from nginx_slow n
WHERE n.`date` > curdate()
group by 1
order by 2 desc
limit 20
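If you also want a quick look at the single slowest requests (rather than the per-URL averages the Grafana panel shows), a spot-check against the same table from the command line could look like this, reusing the connection options from the sync script:
# mysql -h 127.0.0.1 -usuper -p -e "use nginx; SELECT date, server_name, url, waste_time FROM nginx_slow WHERE date > curdate() ORDER BY waste_time DESC LIMIT 20;"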
The resulting Grafana panel looks like this: