#### Building a logging system with ELK (Elasticsearch, Logstash, Kibana) + Kafka
This article introduces how to build a logging system with ELK (Elasticsearch, Logstash, Kibana) plus Kafka. The main demonstration uses Spring AOP to collect application logs and send them to Logstash through Kafka; Logstash writes the logs into Elasticsearch, so Elasticsearch ends up holding the log data; finally, Kibana is used to display the log data stored in Elasticsearch and to do real-time chart analysis on it.
When I first started on projects, I was used to writing logs to log files with log4j. Later, when a project had high-availability requirements, we deployed the web application across multiple machines; if log4j were still used to record logs, there would be N log directories on N machines (and a full ELK setup would normally require installing a log-collecting client such as Logstash on every machine). Finding a log then becomes very troublesome: you do not know which server a particular user's error log was written on. Later I thought of writing the logs directly into a database. Although that solves the inconvenience of finding exception information, it has two defects:
1. There are a lot of log records; a single table is not enough, and you end up sharding databases and tables.
2. The logs depend on the database connection: if the database is unavailable, the logs are lost. To avoid losing logs you would have to write them locally first and then synchronize them to the database once the connection comes back, which makes the processing rather complicated.
The current practice: ELK
Fortunately, there is a solution like ELK that addresses the problems above. First, Elasticsearch is used to store the log data; it can be treated as storage of practically unlimited capacity, because Elasticsearch scales out well. Then there is Logstash, which can be understood as the data pipeline that feeds external log data into Elasticsearch; its input channels include Kafka, log files, Redis and so on, and it is compatible with many log formats. The last part is Kibana, which is mainly used for data display: with so much log data stored in Elasticsearch, Kibana is how we look at it, and more importantly it can build many kinds of charts (histograms, line charts, etc.) to visualize the log data.
ELK function division
Logstash handles log ingestion: it accepts logs from the application systems and writes them into Elasticsearch. Logstash supports many kinds of log channels: consuming from Kafka, tailing log directories, monitoring and reading log data in Redis, and so on.
Elasticsearch stores the log data; it scales out easily and can hold as much log data as needed.
Kibana presents the log data stored in Elasticsearch: data display and report dashboards, and it is real-time.
Software installed in this setup: Elasticsearch, Kibana, Logstash, Kafka, ZooKeeper.
It is best to use ELK version 5 or above. Elasticsearch must be run by an ordinary (non-root) user, and you need to pay attention to system parameters such as the maximum number of open files. Other than that, installation is basically just unpacking the archives.
Modify system parameters
vi /etc/security/limits.conf
* soft nproc 65536
* hard nproc 65536
* soft nofile 65536
* hard nofile 65536
vi /etc/sysctl.conf
vm.max_map_count= 262144
sysctl -p
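These are the kernel and ulimit settings that Elasticsearch checks at startup. After logging in again (the limits.conf changes only apply to new sessions), a quick sanity check, assuming a standard Linux shell:
ulimit -n                 # should now report 65536
sysctl vm.max_map_count   # should now report 262144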
#### Elasticsearch configuration
vim elasticsearch.yml
node.name: elk1
path.data: /data/elk_data
path.logs: /data/elk_data
network.host: 192.168.0.181
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
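Elasticsearch refuses to run as root, which is why an ordinary user is needed, as mentioned above. A minimal sketch, assuming the archive was unpacked to /opt/es/elasticsearch and using the data path from elasticsearch.yml (the paths and the elk user name are only examples):
useradd elk
mkdir -p /data/elk_data
chown -R elk:elk /opt/es/elasticsearch /data/elk_data
su - elk -c '/opt/es/elasticsearch/bin/elasticsearch -d'   # -d starts Elasticsearch as a daemon
curl http://192.168.0.181:9200                             # should return cluster name and version info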
#### Installing the elasticsearch-head plugin
Because the head plugin is essentially a Node.js project, you need to install Node and use npm to install its dependency packages (npm can be understood as the Maven of the Node.js world).
Download node-v6.10.3-linux-x64.tar.gz from the Node.js download page and copy node-v6.10.3-linux-x64.tar.gz to the CentOS machine; the example directory is /opt/es/.
cd /opt/es/
tar -xzvf node-v6.10.3-linux-x64.tar.gz
ln -s /opt/es/node-v6.10.3-linux-x64/bin/node /usr/local/bin/node
ln -s /opt/es/node-v6.10.3-linux-x64/bin/npm /usr/local/bin/npm
cd /opt/es/node-v6.10.3-linux-x64/bin/
Test whether the installation is successful
npm -v
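With Node and npm in place, the head plugin itself can be fetched and started; this is also why http.cors was enabled in elasticsearch.yml above. A sketch assuming the mobz/elasticsearch-head project from GitHub is cloned under /opt/es/ (its web UI listens on port 9100 by default):
cd /opt/es/
git clone https://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install         # install the plugin's dependencies
npm run start &     # start the head UI, then open http://192.168.0.181:9100 and connect it to http://192.168.0.181:9200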
#### Kibana configuration
vim kibana.yml
server.port: 5601
server.host: "192.168.0.181"
elasticsearch.url: "http://192.168.0.181:9200"
kibana.index: ".kibana"
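Kibana is then started from its install directory; a sketch assuming it was unpacked under /opt/es/kibana (adjust the path to your layout):
cd /opt/es/kibana
nohup bin/kibana > kibana.out 2>&1 &    # run Kibana in the background
# then open http://192.168.0.181:5601 in a browser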
#### Kafka configuration
vim server.properties
broker.id=0
#port=9092
# host.name=192.168.0.181
listeners=PLAINTEXT://192.168.0.181:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/logs/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.181:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
#advertised.host.name=192.168.0.181
#advertised.port=9092
advertised.listeners=PLAINTEXT://192.168.0.181:9092
replica.fetch.max.bytes=4194304
message.max.bytes=4000000
compression.codec=snappy
max.partition.fetch.bytes=4194304
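Before topics can be created, ZooKeeper and the Kafka broker must be running. A minimal sketch using the scripts shipped with the Kafka distribution, assuming the bundled single-node ZooKeeper is used and the commands are run from the Kafka install directory:
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties   # single-node ZooKeeper on port 2181
bin/kafka-server-start.sh -daemon config/server.properties          # start the broker configured above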
Add the topic:
./kafka-topics.sh --create --topic app-log --replication-factor 1 --partitions 1 --zookeeper localhost:2181
./kafka-topics.sh --list --zookeeper localhost:2181
Test the topic with a console producer and consumer:
./kafka-console-producer.sh --broker-list 192.168.0.181:9092 --topic app-log
./kafka-console-consumer.sh --zookeeper 192.168.0.181:2181 --topic app-log --from-beginning   # view the output
#### Logstash configuration
Add a configuration file; when starting Logstash, the -f option specifies which configuration file to use.
cat kafka.conf
input {
  kafka {
    bootstrap_servers => "192.168.0.181:9092"
    group_id => "app-log"
    topics => ["app-log"]
    consumer_threads => 5
    decorate_events => false
    auto_offset_reset => "earliest"   # earliest = start from the beginning; latest = only new messages
  }
}
filter {
  json {
    source => "message"
  }
  if [severity] == "DEBUG" {
    drop {}
  }
}
output {
  elasticsearch {
    action => "index"                  # the operation to perform on Elasticsearch
    hosts => "192.168.0.181:9200"      # Elasticsearch host, can be an array
    # index => "applog"                # a fixed index name
    index => "applog-%{+YYYY.MM.dd}"   # the index to write data to, one per day
  }
}
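Then start Logstash against this file; a sketch assuming Logstash was unpacked under /opt/es/logstash and kafka.conf sits in its config directory:
cd /opt/es/logstash
nohup bin/logstash -f config/kafka.conf > logstash.out 2>&1 &   # -f points at the pipeline configuration above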
#### Spring Cloud application log configuration
On the application side, the Spring Cloud services that collect logs through AOP only need to know the Kafka broker address and the topic to write to, for example:
log:
  brokerList:
    url: 192.168.0.181:9092
  topic: app-log
  console:
    level: DEBUG
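Once the application is producing log messages to the app-log topic and Logstash is indexing them, you can check end to end that the daily indices are being created and filled; a quick verification against Elasticsearch, using the host and index pattern configured above:
curl 'http://192.168.0.181:9200/_cat/indices/applog-*?v'          # list the applog-* indices and their document counts
curl 'http://192.168.0.181:9200/applog-*/_search?pretty&size=1'   # look at a single indexed log document
In Kibana, create an index pattern for applog-* and the log data can then be browsed and charted in real time.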