Installation and deployment of a ZooKeeper + Kafka cluster

Preparation work

Upload zookeeper-3.4.6.tar.gz, scala-2.11.4.tgz, kafka_2.9.2-0.8.1.1.tgz, and slf4j-1.7.6.zip to the /usr/local directory.

ZooKeeper cluster setup

Unzip the zookeeper installation package

# tar -zxvf zookeeper-3.4.6.tar.gz

Delete the original compressed package

# rm -rf zookeeper-3.4.6.tar.gz

Rename

# mv zookeeper-3.4.6 zk

Configure zookeeper-related environment variables

# vi /etc/profile

export ZOOKEEPER_HOME=/usr/local/zk

export PATH=$PATH:$ZOOKEEPER_HOME/bin

# source /etc/profile

If basic commands such as `ls` stop working after sourcing /etc/profile, the PATH variable was overwritten instead of appended to. Restore it with:

export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
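The root cause of a broken shell here is assigning PATH instead of appending to it. A minimal sketch of the safe pattern, using the paths from the steps above:

```shell
# Append to PATH rather than overwriting it, so existing commands keep resolving
export ZOOKEEPER_HOME=/usr/local/zk
export PATH=$PATH:$ZOOKEEPER_HOME/bin

# Sanity check: the new entry is present and ls still works
echo "$PATH" | grep -q "/usr/local/zk/bin" && ls / > /dev/null && echo ok
```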

Modify zookeeper configuration file

# cd /usr/local/zk/conf
# cp zoo_sample.cfg zoo.cfg
# vi zoo.cfg
Modify:
dataDir=/usr/local/zk/data
Add:
server.0=eshop-cache01:2888:3888
server.1=eshop-cache02:2888:3888
server.2=eshop-cache03:2888:3888
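Collected in one place, the resulting zoo.cfg can be sketched as below. The tickTime/initLimit/syncLimit values are the stock defaults from zoo_sample.cfg; the scratch directory is only so the example is self-contained — on a real node the file is /usr/local/zk/conf/zoo.cfg:

```shell
# Write the assembled zoo.cfg to a scratch dir for illustration;
# on a real node this file lives at /usr/local/zk/conf/zoo.cfg
ZK_CONF=$(mktemp -d)
cat > "$ZK_CONF/zoo.cfg" <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zk/data
clientPort=2181
server.0=eshop-cache01:2888:3888
server.1=eshop-cache02:2888:3888
server.2=eshop-cache03:2888:3888
EOF
grep -c '^server\.' "$ZK_CONF/zoo.cfg"   # prints 3
```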

Create the ZooKeeper data directory

# cd /usr/local/zk
# mkdir data
# cd data
# vi myid    (write the identification number 0 as the file's only line)

The other cluster nodes repeat the configuration above; the only difference is the identification number, which should be 1, 2, and so on.
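The per-node myid assignment can be scripted. A sketch, assuming each node's hostname matches the server.N entries in zoo.cfg; the scratch data dir is only for illustration — on a real node use /usr/local/zk/data:

```shell
# Map this host's name to its server.N identification number
case "$(hostname)" in
  eshop-cache01) id=0 ;;
  eshop-cache02) id=1 ;;
  eshop-cache03) id=2 ;;
  *)             id=0 ;;   # fallback so the sketch runs anywhere
esac

DATA_DIR=$(mktemp -d)      # real nodes: DATA_DIR=/usr/local/zk/data
echo "$id" > "$DATA_DIR/myid"
cat "$DATA_DIR/myid"
```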

Start zookeeper

# zkServer.sh start

Check the status of zookeeper

# zkServer.sh status

Kafka cluster setup

scala installation

Unzip the scala installation package

# tar -zxvf scala-2.11.4.tgz

Rename

# mv scala-2.11.4 scala

Configure scala environment variables

# vi /etc/profile
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:$SCALA_HOME/bin
# source /etc/profile

Check the version of scala

# scala -version

kafka installation

Unzip the kafka installation package

# tar -zxvf kafka_2.9.2-0.8.1.1.tgz

Rename

# mv kafka_2.9.2-0.8.1.1 kafka

Configure kafka


# vi /usr/local/kafka/config/server.properties
broker.id=0    (cluster nodes use increasing integers: 0, 1, 2 …)
zookeeper.connect=192.168.27.11:2181,192.168.27.12:2181,192.168.27.13:2181
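For reference, a minimal server.properties for node 0 might then look like the fragment below; the 9092 listener port and the log.dirs path are assumptions for illustration, not taken from the text above:

```properties
broker.id=0
port=9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=192.168.27.11:2181,192.168.27.12:2181,192.168.27.13:2181
```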

kafka installation problem

Fixing Kafka's "Unrecognized VM option 'UseCompressedOops'" error

# vi /usr/local/kafka/bin/kafka-run-class.sh 
if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then
  KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true"
fi

Simply remove -XX:+UseCompressedOops from that line.
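The edit can also be done non-interactively. A sketch using sed; the scratch copy is only so the example is self-contained — point F at the real kafka-run-class.sh:

```shell
# Work on a scratch copy for illustration; on a real node:
# F=/usr/local/kafka/bin/kafka-run-class.sh
F=$(mktemp)
echo 'KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseCompressedOops -XX:+UseParNewGC"' > "$F"

# Strip the flag the JVM does not recognize
sed -i 's/ -XX:+UseCompressedOops//' "$F"
cat "$F"   # the option is gone
```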

Fixing Kafka startup failures caused by insufficient physical memory

# vi /usr/local/kafka/bin/kafka-server-start.sh
export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"

Install slf4j

Unzip the slf4j compressed package

# unzip slf4j-1.7.6.zip

Copy slf4j-nop-1.7.6.jar to the libs directory of kafka

# cp /usr/local/slf4j-1.7.6/slf4j-nop-1.7.6.jar /usr/local/kafka/libs

Repeat the steps above to install Kafka on the other cluster nodes. The only difference is broker.id in server.properties, which must be a different integer on each node: 1, 2, and so on.
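Setting each node's broker.id can likewise be scripted. A sketch, where ID is chosen per node (0, 1, 2 …) and the scratch file stands in for /usr/local/kafka/config/server.properties:

```shell
ID=1              # choose per node: 0, 1, 2 …
PROPS=$(mktemp)   # real nodes: PROPS=/usr/local/kafka/config/server.properties
echo 'broker.id=0' > "$PROPS"

# Rewrite the broker.id line in place with this node's ID
sed -i "s/^broker\.id=.*/broker.id=$ID/" "$PROPS"
grep '^broker.id=' "$PROPS"   # prints broker.id=1
```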

Start kafka

# nohup /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &

View startup log

# cat nohup.out

Check whether Kafka is started successfully

# jps

After completing the installation of each node, check whether the Kafka cluster is successfully built

// Create a test topic
# /usr/local/kafka/bin/kafka-topics.sh --zookeeper 192.168.27.11:2181,192.168.27.12:2181,192.168.27.13:2181 --topic test --replication-factor 1 --partitions 1 --create

// Create a producer
# /usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.27.11:9092,192.168.27.12:9092,192.168.27.13:9092 --topic test

// Create a consumer
# /usr/local/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.27.11:2181,192.168.27.12:2181,192.168.27.13:2181 --topic test --from-beginning
