Byte Stream's Blog

Collecting MySQL Slow Query Logs

1. The Slow Query Log

MySQL's slow query log records statements whose response time exceeds a threshold: any SQL statement that runs longer than the value of long_query_time is written to the slow query log. long_query_time defaults to 10, meaning statements that run for more than 10 seconds get logged.

MySQL supports two destinations for the slow query log:

  1. FILE: records are written to a log file;
  2. TABLE: records are written to the slow_log table in the mysql database;

The destination is configured via the log_output variable.
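All three of these are ordinary MySQL system variables, so you can inspect the current values before changing anything (the comment lists the stock defaults):

```sql
-- Defaults: slow_query_log = OFF, long_query_time = 10, log_output = FILE
SHOW VARIABLES
WHERE Variable_name IN ('slow_query_log', 'long_query_time', 'log_output');
```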

2. MySQL Configuration

We want slow queries recorded to both the file and the table, and any query taking longer than 3 seconds treated as slow, so we configure MySQL as follows:

SET GLOBAL slow_query_log  = 'ON';
SET GLOBAL long_query_time = 3;
SET GLOBAL log_output      = 'FILE,TABLE';
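With both destinations enabled, it helps to know where each copy lives. The FILE copy goes wherever slow_query_log_file points; the TABLE copy is backed by a CSV storage engine file under the data directory:

```sql
-- slow_query_log_file is the FILE destination; the TABLE destination
-- is backed by <datadir>/mysql/slow_log.CSV (CSV storage engine)
SHOW VARIABLES LIKE 'slow_query_log_file';
SHOW VARIABLES LIKE 'datadir';
```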

3. Filebeat Configuration

Filebeat collects the CSV file that backs the slow_log table, i.e. the mysql/slow_log.CSV file under the MySQL data directory.

filebeat:
  prospectors:
    -
      paths:
        - /usr/local/var/mysql/mysql/slow_log.CSV
      document_type: mysql-slow
  registry_file: /etc/registry/mark
output:
  logstash:
    hosts: ["logstash:5044"]
shipper:
  name: db
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB

4. Logstash Configuration

Logstash needs a filter to parse the collected MySQL slow query log entries. The configuration is as follows:

filter {
  if [type] == "mysql-slow" {

    # mysql escapes double quotes with backslashes, but
    # ruby expects pairs of double quotes
    mutate { gsub => [ "message", '\\"', '""' ] }

    csv {
      columns => [ "start_time", "user_host", "query_time", "lock_time",
                   "rows_sent", "rows_examined", "db", "last_insert_id",
                   "insert_id", "server_id", "sql_text" ]
    }

    # convert the numeric fields to integers
    mutate { convert => [ "rows_sent", "integer" ] }
    mutate { convert => [ "rows_examined", "integer" ] }
    mutate { convert => [ "last_insert_id", "integer" ] }
    mutate { convert => [ "insert_id", "integer" ] }
    mutate { convert => [ "server_id", "integer" ] }

    # strip the microseconds from start_time
    grok {
      match => [ "start_time", "(?<logtime>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:%{MINUTE}:%{INT})" ]
      #remove_field => [ "start_time" ]
    }

    # use logtime as the event @timestamp
    date {
      match => [ "logtime", "yyyy-MM-dd HH:mm:ss" ]
      #remove_field => [ "logtime" ]
    }

    # normalize query_time from HH:mm:ss.SSSSSS to seconds
    ruby { code => "event['query_time'] = event['query_time'] ? event['query_time'].split(':').inject(0){|a, m| a = a * 60 + m.to_f} : 0" }

    # normalize lock_time from HH:mm:ss to seconds
    ruby { code => "event['lock_time'] = event['lock_time'] ? event['lock_time'].split(':').inject(0){|a, m| a = a * 60 + m.to_f} : 0" }

  }
}
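The two trickiest steps above are the quote fix-up before the csv filter and the duration normalization in the ruby filters. This standalone Ruby sketch (not part of the Logstash pipeline; the sample line is made up) shows what each one does:

```ruby
require 'csv'

# MySQL escapes embedded double quotes as \", but Ruby's CSV parser
# expects them doubled ("") -- the same rewrite as the mutate/gsub above.
line  = 'db1,"select \\"name\\" from t"'
fixed = line.gsub('\\"', '""')
row   = CSV.parse_line(fixed)   # ["db1", "select \"name\" from t"]

# Turn an HH:mm:ss.SSSSSS duration into seconds, exactly like the
# ruby filters do for query_time and lock_time.
def to_seconds(t)
  t ? t.split(':').inject(0) { |a, m| a * 60 + m.to_f } : 0
end

puts to_seconds('00:01:30.500000')  # 90.5
```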

Now run a SQL statement that exceeds the threshold, query Kibana, and you can see the collected slow query log entries, with the important fields already parsed out.
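With the 3-second threshold configured above, a simple way to generate a test entry is SLEEP; the TABLE destination can then be checked directly:

```sql
-- SLEEP(4) exceeds the 3s threshold, so it lands in the slow log
SELECT SLEEP(4);

-- Inspect the TABLE destination directly
SELECT start_time, query_time, rows_examined, sql_text
FROM mysql.slow_log
ORDER BY start_time DESC
LIMIT 1;
```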

References

  1. https://dev.mysql.com/doc/refman/5.7/en/log-destinations.html
  2. http://www.cnblogs.com/kerrycode/p/5593204.html
  3. http://blog.smoker.cc/docker/20160325.html