ELK + Zookeeper + Kafka: Spring Cloud Log Collection Configuration Guide


This article describes how to build a logging system with ELK (elasticsearch, logstash, kibana) plus kafka. The main flow demonstrated is: collect logs with Spring AOP, send them to logstash through kafka, and have logstash write them into elasticsearch. Once the log data is in elasticsearch, kibana is used to display it and to do real-time chart analysis.

In my early projects I was used to writing logs to log files with log4j. Later, when projects gained high-availability requirements, we deployed the web tier across multiple machines. If log4j is still used, there are N log directories spread over N machines; with a conventional setup you would have to install logstash or some other log-collecting client on every machine, and tracking down a log becomes very troublesome because you do not know which server a given user's error was written on. Later I tried writing logs directly to a database. That makes it convenient to look up error information, but it has two defects:

1. There are a lot of log records; a single table is not enough, so you have to split the database and shard the tables.

2. The logs are written over a database connection; if the database is down, the logs are lost. To avoid that, you would first have to write the log locally and then synchronize it to the database once the connection is back, which makes the processing rather complicated.

The current practice: ELK

Fortunately, a solution like ELK solves the problems above. First, elasticsearch is used to store the log data; you can treat it as a system that can hold a practically unlimited number of records, because elasticsearch scales out well. Next, logstash can be understood as the data interface that feeds external log data into elasticsearch; its input channels include kafka, log files, redis and more, and it is compatible with many log formats. The last part is kibana, which is mainly for data display: with so much log data stored in elasticsearch, kibana is how we look at it, and more importantly it can build all kinds of charts, such as histograms and line charts, to visualize the log data.


ELK function division

Logstash handles log intake: it accepts logs from the application systems and writes them to elasticsearch. Logstash supports many kinds of log channels: it can read from kafka, watch log directories, monitor and read log data in redis, and so on.

Elasticsearch stores the log data; it is easy to scale out and can hold as much log data as needed.

Kibana works on the log data stored in elasticsearch: data display and reporting, in real time.

Software installed this time: es, kibana, logstash, kafka, zookeeper.
It is best to use ELK version 5 or above. es has to be run as a non-root user, and you need to pay attention to system parameters such as the maximum number of open files. Apart from that, the applications basically just need to be unzipped.
Modify system parameters

```
vi /etc/security/limits.conf
* soft nproc 65536
* hard nproc 65536
* soft nofile 65536
* hard nofile 65536

vi /etc/sysctl.conf
vm.max_map_count=262144

sysctl -p
```
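A quick check (not in the original post) that the parameters took effect; the expected values are the ones set above, and the ulimit values show up in a new login session of the user that will run es:

```
# kernel parameter applied by sysctl -p
sysctl vm.max_map_count      # expect: vm.max_map_count = 262144
# per-user limits from limits.conf (re-login first)
ulimit -n                    # open files, expect 65536
ulimit -u                    # max user processes, expect 65536
```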


es:

vim elasticsearch.yml

```
node.name: elk1
path.data: /data/elk_data
path.logs: /data/elk_data
network.host: 192.168.0.181
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
```
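Once elasticsearch has been started as the non-root user, a quick reachability check against the host and port configured above (a verification step, not part of the original walkthrough):

```
curl http://192.168.0.181:9200
curl http://192.168.0.181:9200/_cluster/health?pretty
```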
Because the head plugin is essentially a nodejs project, you need to install node and use npm to install its dependency packages (npm can be understood as the maven of the nodejs world).

Download node-v6.10.3-linux-x64.tar.gz from the nodejs download page.

Copy node-v6.10.3-linux-x64.tar.gz to the centos machine; the example directory is /opt/es/.

```
cd /opt/es/
tar -xzvf node-v6.10.3-linux-x64.tar.gz
ln -s /opt/es/node-v6.10.3-linux-x64/bin/node /usr/local/bin/node
ln -s /opt/es/node-v6.10.3-linux-x64/bin/npm /usr/local/bin/npm
cd /opt/es/node-v6.10.3-linux-x64/bin/
```

Test whether the installation is successful

npm -v
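With node and npm working, the head plugin itself can be installed. The original post does not show this step, so the following is only a sketch using the standard mobz/elasticsearch-head repository and its default grunt-based start command; adjust the paths to your setup:

```
# fetch the head plugin sources and install its npm dependencies
cd /opt/es/
git clone https://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
# start the head UI, which listens on port 9100 by default
npm run start &
```

The http.cors.* settings in elasticsearch.yml above are what allow the head UI to connect to es from the browser.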


kibana:


vim kibana.yml

```
server.port: 5601
server.host: "192.168.0.181"
elasticsearch.url: "http://192.168.0.181:9200"
kibana.index: ".kibana"
```
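After starting kibana, a quick check (again, just a verification sketch) that it is up and can reach elasticsearch:

```
curl http://192.168.0.181:5601/api/status
```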


kafka:


vim server.properties

```
broker.id=0
#port=9092
#host.name=192.168.0.181
listeners=PLAINTEXT://192.168.0.181:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/logs/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.181:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
#advertised.host.name=192.168.0.181
#advertised.port=9092
advertised.listeners=PLAINTEXT://192.168.0.181:9092
replica.fetch.max.bytes=4194304
message.max.bytes=4000000
compression.codec=snappy
max.partition.fetch.bytes=4194304
```
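With the broker configured, zookeeper and then kafka are started from the kafka bin directory. The paths below follow the /data/kafka layout used by the restart script later in this document; adjust them to your installation:

```
# start zookeeper first, then the kafka broker, logging under /data/kafka/bin
cd /data/kafka/bin
sh ./zookeeper-server-start.sh ../config/zookeeper.properties > /data/kafka/bin/zookeeper.log &
sleep 10
sh ./kafka-server-start.sh ../config/server.properties > /data/kafka/bin/kafka.log &
```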

Add topic

```
./kafka-topics.sh --create --topic app-log --replication-factor 1 --partitions 1 --zookeeper localhost:2181
./kafka-topics.sh --list --zookeeper localhost:2181
```

Test the topic with the console producer and consumer:

```
./kafka-console-producer.sh --broker-list 192.168.0.181:9092 --topic app-log
./kafka-console-consumer.sh --zookeeper 192.168.0.181:2181 --topic app-log --from-beginning    # view the output
```



logstash:

Add the configuration file; when starting logstash, -f specifies the configuration file to use.

```
[[emailprotected] config]# cat kafka.conf
input {
  kafka {
    bootstrap_servers => "192.168.0.181:9092"
    group_id => "app-log"
    topics => ["app-log"]
    consumer_threads => 5
    decorate_events => false
    auto_offset_reset => "earliest"   # "latest" = start from the newest offset; "earliest" = from the oldest
  }
}

filter {
  json {
    source => "message"
  }
  if [severity] == "DEBUG" {
    drop {}
  }
}

output {
  elasticsearch {
    action => "index"                  # the operation on ES
    hosts => "192.168.0.181:9200"      # ElasticSearch host, can be an array
    # index => "applog"                # the index to write data to
    index => "applog-%{+YYYY.MM.dd}"
  }
}
```
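Before leaving it running in the background, it is worth validating the pipeline file; a minimal sketch using logstash's built-in config test flag and the same paths as the restart script further below:

```
cd /data/logstash/bin
# check the syntax of kafka.conf without starting the pipeline
./logstash -f ../config/kafka.conf --config.test_and_exit
# start logstash in the background with this pipeline
sh ./logstash -f ../config/kafka.conf > /data/logstash/bin/logstash.log &
```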


Spring Cloud log configuration

Add the kafka broker address and the app-log topic to the application's configuration, for example in application.yml:


```
log:
  brokerList:
    url: 192.168.0.181:9092
  topic: app-log
  console:
    level: DEBUG
```
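Once the application is producing to the app-log topic, you can confirm end to end that documents are landing in the daily applog-* indices written by the logstash output above (a verification step, not part of the original post):

```
# list the daily indices created by logstash
curl '192.168.0.181:9200/_cat/indices/applog-*?v'
# fetch one sample log document
curl '192.168.0.181:9200/applog-*/_search?size=1&pretty'
```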


[The original post shows the application-side logging configuration, which references ${logLevel}, and the resulting Kibana views only as screenshots; they are not reproduced here.]

The following script deletes old indices and restarts kafka and logstash; it is meant to be run regularly:

```
#!/bin/bash
#author:silence_ltx
#date:20180809
PATH=/data/xxxx-node/bin:/data/xxxx-java/bin:/data/elasticsearch/bin:/data/kibana/bin:/usr/local/python/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin

# delete the applog index from two days ago
date=`date +%Y.%m.%d -d "2 days ago"`
echo $date
/usr/bin/curl -XDELETE "192.168.0.181:9200/applog-${date}"

# stop logstash, then stop kafka (found via its 9092 listener)
ps -ef | grep logstash | grep -v grep | awk '{print $2}' | xargs kill -9
netstat -nltp | grep 9092 | awk '{print $NF}' | awk -F "/" '{print $1}' | xargs kill -9
sleep 30

# remove kafka segment files for app-log older than two days, then restart kafka and logstash
find /data/logs/kafka-logs/app-log-0/ -mtime +2 -exec rm -rf {} \;
cd /data/kafka/bin
sh ./kafka-server-start.sh ../config/server.properties > /data/kafka/bin/kafka.log &
sleep 30
cd /data/logstash/bin
sh ./logstash -f ../config/kafka.conf > /data/logstash/bin/logstash.log &
```
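The post does not show how the script is scheduled; one simple option is a root crontab entry, for example daily at 03:00 (the path /data/scripts/clean_es_kafka.sh is a hypothetical placeholder for wherever the script above is saved):

```
# crontab -e (as root)
0 3 * * * /bin/bash /data/scripts/clean_es_kafka.sh >> /data/logs/clean_es_kafka.log 2>&1
```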
