Building a Kafka cluster with ZooKeeper

This post does not cover any theory; it simply walks through, in detail, the process of setting up a Kafka and ZooKeeper cluster, which requires three Linux servers.

  • Java environment variable settings
  • ZooKeeper cluster construction
  • Kafka cluster construction

Java environment variable settings

Java environment variables must be set on each of the three servers.

Java is installed here from the downloaded tarball:

Download the JDK package, unzip it under /usr/local/, and rename the directory to jdk. The next step is to add the Java commands to the Linux environment variables.
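For reference, the unpacking steps might look like the following (the tarball name and the extracted directory name are assumptions based on the JDK 1.7.0_79 shown later; adjust them to the package you actually downloaded):

#Unpack the JDK tarball into /usr/local and rename the extracted directory to jdk
tar zxvf jdk-7u79-linux-x64.tar.gz -C /usr/local/
cd /usr/local
mv jdk1.7.0_79 jdk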

[[email protected] jdk]# cat /etc/profile.d/java.sh 
JAVA_HOME=/usr/local/jdk
JAVA_BIN=/usr/local/jdk/bin
JRE_HOME=/usr/local/jdk/jre
PATH=$PATH:/usr/local/jdk/bin:/usr/local/jdk/jre/bin
CLASSPATH=/usr/local/jdk/jre/lib:/usr/local/jdk/lib:/usr/local/jdk/jre/lib/charsets.jar
[[email protected] jdk]#
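Note that the script as shown only assigns the variables; PATH is already exported by the shell, but if other services need to see JAVA_HOME or JRE_HOME it is common to export them as well, for example (an optional addition, not part of the original script):

#Append to /etc/profile.d/java.sh so that child processes inherit the variables
export JAVA_HOME JAVA_BIN JRE_HOME PATH CLASSPATH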

#Then execute the source command to load the newly added java.sh script!
[[email protected] bin]# source /etc/profile.d/java.sh

#Verify whether the java environment variables were set successfully
[[email protected] bin]# java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
[[email protected] bin]# echo $JAVA_HOME          #Check whether JAVA_HOME is the path set above
/usr/local/jdk/jdk
[[email protected] bin]#

If the output above looks normal, the Java environment has been set up!

ZooKeeper cluster construction

A ZooKeeper cluster is called an ensemble (group). ZooKeeper uses a consensus protocol, so it is recommended that each ensemble contain an odd number of nodes, because ZooKeeper can only process external requests while a majority of the nodes in the ensemble are available. In other words, an ensemble of 3 nodes tolerates 1 failed node, and an ensemble of 5 nodes tolerates 2 failed nodes. [It is not recommended to run more than 7 nodes in an ensemble; because of the consensus protocol, too many nodes will reduce its performance.]
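As a quick illustration of the majority rule above (a sketch, not from the original post), the quorum size for an ensemble of N nodes is floor(N/2) + 1:

#Print quorum size and tolerated failures for typical ensemble sizes
for n in 3 5 7; do
    quorum=$(( n / 2 + 1 ))
    echo "ensemble of $n nodes: quorum $quorum, tolerates $(( n - quorum )) failed nodes"
done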

Each server in the ensemble needs a myid file in its data directory to indicate its own ID.

Here ZooKeeper is installed from the official release tarball:

[[email protected] src]# tar zxvf zookeeper-3.4.6.tar.gz -C ../          #Unpack
[[email protected] src]# cd ../
[[email protected] local]# mv zookeeper-3.4.6 zookeeper                  #Rename
[[email protected] local]# cd zookeeper
[[email protected] zookeeper]# ls
bin        CHANGES.txt  contrib     docs             ivy.xml  LICENSE.txt  README_packaging.txt  recipes  zookeeper-3.4.6.jar      zookeeper-3.4.6.jar.md5   zookeeper.out
build.xml  conf         dist-maven  ivysettings.xml  lib      NOTICE.txt   README.txt            src      zookeeper-3.4.6.jar.asc  zookeeper-3.4.6.jar.sha1
[[email protected] zookeeper]# mkdir -p /data/zookeeper/data             #Create the data directory
[[email protected] zookeeper]# mkdir -p /data/zookeeper/logs             #Create the log directory
[[email protected] zookeeper]# cd conf
[[email protected] conf]# ls
configuration.xsl  log4j.properties  zoo_sample.cfg
[[email protected] conf]# mv zoo_sample.cfg zoo.cfg     #Rename the configuration file; it is recommended to back up the original first

The configuration file of zookeeper is as follows:

# The number of milliseconds of each tick
tickTime=2000                      #Unit is milliseconds
# The number of ticks that the initial
# synchronization phase can take
initLimit=10                       #Upper limit (in ticks) for followers to establish the initial connection to the leader
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5                        #Upper limit (in ticks) that a follower may be out of sync with the leader
#initLimit and syncLimit are both multiples of tickTime; the actual time is the value multiplied by tickTime.
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/zookeeper/data       #Data directory
dataLogDir=/data/zookeeper/logs    #Log directory
# the port at which the clients will connect
clientPort=2181                    #Port zookeeper listens on for clients
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3

# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

#zookeeper ensemble (group) settings
server.1=10.0.102.179:2888:3888
server.2=10.0.102.204:2888:3888
server.3=10.0.102.214:2888:3888

#The format of the ensemble setting is: server.X=hostname:peerPort:leaderPort
X: the ID of the server. It must be an integer, but it does not have to start from 0 or be consecutive.
peerPort: TCP port used for communication between nodes.
leaderPort: TCP port used for leader election.

Clients only need to connect to the ensemble through clientPort; communication between ensemble nodes uses all three ports.

[All of the work above just unpacks the tarball, modifies the configuration file, and creates the data and log directories!

The next step is to repeat these operations on the other two machines, or simply copy the files to the corresponding locations on the other two machines with scp, as sketched below.]
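A minimal scp sketch, assuming the same paths are used on all three nodes (the IPs are the ones from zoo.cfg above):

#Copy the prepared zookeeper directory to the other two nodes
scp -r /usr/local/zookeeper 10.0.102.204:/usr/local/
scp -r /usr/local/zookeeper 10.0.102.214:/usr/local/
#Create the data and log directories on the other two nodes as well
ssh 10.0.102.204 "mkdir -p /data/zookeeper/data /data/zookeeper/logs"
ssh 10.0.102.214 "mkdir -p /data/zookeeper/data /data/zookeeper/logs"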

Like a MySQL cluster, each server in the cluster needs a unique identifier; for ZooKeeper this is the myid file!

[[email protected] data]# pwd
/data/zookeeper/data
[[email protected] data]# cat myid
1

#Create a myid file under the data directory on each node and write a single integer into it; this integer indicates the ID of the current server.
#This ID must match the X of the corresponding server.X entry in the configuration file above. [I once had them inconsistent and the node failed to start; after making them match, it started successfully.]

The myid files on the other two nodes are as follows:

[[email protected] data]# pwd
/data/zookeeper/data
[[email protected] data]# cat myid
2

[[email protected] data]# pwd
/data/zookeeper/data
[[email protected] data]# cat myid
3
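The myid files can be created with a single echo on each node; run the line that matches the server.X entry for that node's IP in zoo.cfg (a sketch assuming the mapping shown above):

echo 1 > /data/zookeeper/data/myid        #on 10.0.102.179 (server.1)
echo 2 > /data/zookeeper/data/myid        #on 10.0.102.204 (server.2)
echo 3 > /data/zookeeper/data/myid        #on 10.0.102.214 (server.3)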

Then start the zookeeper cluster:

#All three servers are started this way. A zookeeper.out log file is generated in the directory the command is run from; if an error is reported, check that file!
[[email protected] bin]# pwd
/usr/local/zookeeper/bin
[[email protected] bin]# ./zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

#Note that after starting you must check whether the process exists; sometimes no error is reported but the server did not actually start!
[[email protected] bin]# netstat -alntp | grep 2181          #Check whether the listening port exists
tcp        0      0 :::2181                     :::*                        LISTEN      27962/java

After all three servers are started, perform the following verification:

[[email protected] bin]# ./zkCli.sh --server10.0.102.204
Connecting to localhost:2181
2018-12-21 10:54:38,286 [myid:] - INFO [main:[email protected]100] - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2018-12-21 10:54:38,289 [myid:] - INFO [main:[email protected]100] - Client environment:host.name=test3
2018-12-21 10:54:38,289 [myid:] - INFO [main:[email protected]100] - Client environment:java.version=1.7.0_79
2018-12-21 10:54:38,291 [myid:] - INFO [main:[email protected]100] - Client environment:java.vendor=Oracle Corporation
2018-12-21 10:54:38,291 [myid:] - INFO [main:[email protected]100] - Client environment:java.home=/usr/local/jdk/jdk1.7.0_79/jre
2018-12-21 10:54:38,291 [myid:] - INFO [main:[email protected]] - Client environment:java.class.path=/usr/local/zookeeper/bin/../build/classes:/usr/local/zookeeper/bin/../build/lib/*.jar:/usr/local/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/local/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/usr/local/zookeeper/bin/../lib/netty-3.7.0.Final.jar:/usr/local/zookeeper/bin/../lib/log4j-1.2.16.jar:/usr/local/zookeeper/bin/../lib/jline-0.9.94.jar:/usr/local/zookeeper/bin/../zookeeper-3.4.6.jar:/usr/local/zookeeper/bin/../src/java/lib/*.jar:/usr/local/zookeeper/bin/../conf:
2018-12-21 10:54:38,291 [myid:] - INFO [main:[email protected]100] - Client environment:java.library.path=/usr/local/mysql/lib:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2018-12-21 10:54:38,291 [myid:] - INFO [main:[email protected]100] - Client environment:java.io.tmpdir=/tmp
2018-12-21 10:54:38,292 [myid:] - INFO [main:[email protected]100] - Client environment:java.compiler=
2018-12-21 10:54:38,292 [myid:] - INFO [main:[email protected]100] - Client environment:os.name=Linux
2018-12-21 10:54:38,292 [myid:] - INFO [main:[email protected]100] - Client environment:os.arch=amd64
2018-12-21 10:54:38,292 [myid:] - INFO [main:[email protected]100] - Client environment:os.version=2.6.32-504.el6.x86_64
2018-12-21 10:54:38,292 [myid:] - INFO [main:[email protected]100] - Client environment:user.name=root
2018-12-21 10:54:38,292 [myid:] - INFO [main:[email protected]100] - Client environment:user.home=/root
2018-12-21 10:54:38,292 [myid:] - INFO [main:[email protected]100] - Client environment:user.dir=/usr/local/zookeeper/bin
2018-12-21 10:54:38,293 [myid:] - INFO [main:[email protected]438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=[email protected]
Welcome to ZooKeeper!
2018-12-21 10:54:38,315 [myid:] - INFO [main-SendThread(localhost:2181):[email protected]975] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2018-12-21 10:54:38,320 [myid:] - INFO [main-SendThread(localhost:2181):[email protected]852] - Socket connection established to localhost/127.0.0.1:2181, initiating session
2018-12-21 10:54:38,330 [myid:] - INFO [main-SendThread(localhost:2181):[email protected]1235] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x367ce7db1fa0003, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0]


#If the output above appears, the zookeeper cluster has been built successfully
#Note: because the --server option was mistyped (zkCli.sh expects -server host:port), the client actually connected to localhost:2181 here.
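In addition to connecting with zkCli.sh, each node's role can be checked with zkServer.sh status; one node should report itself as leader and the others as followers (expected behaviour, output not captured from this setup):

#Run on each node from /usr/local/zookeeper/bin
./zkServer.sh status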

Build Kafka cluster

Kafka is again installed from the release tarball. The process is similar to ZooKeeper: unpack, modify the configuration file, and create the corresponding message (log) directory.

[[email protected] src]# tar -zxvf kafka_2.10-0.10.0.0.tgz -C ../         #Unpack
[[email protected] src]# cd ../
[[email protected] local]# mv kafka_2.10-0.10.0.0 kafka                   #Rename
[[email protected] local]# cd kafka/
[[email protected] kafka]# ls
bin  config  libs  LICENSE  logs  NOTICE  site-docs
[[email protected] kafka]# cd config/
[[email protected] config]# ls          #kafka has many configuration files, but for now we only need to pay attention to server.properties
connect-console-sink.properties    connect-file-sink.properties    connect-standalone.properties  producer.properties     zookeeper.properties
connect-console-source.properties  connect-file-source.properties  consumer.properties            server.properties
connect-distributed.properties     connect-log4j.properties        log4j.properties               tools-log4j.properties

The configuration file, with the comments removed, is as follows:

broker.id=3                        #A unique integer for each kafka server
port=9092
host.name=10.0.102.179
auto.create.topics.enable=false
delete.topic.enable=true
num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/data/kafka/logs

num.partitions=1

num.recovery.threads.per.data.dir=1

log.retention.hours=168

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

zookeeper.connect=10.0.102.179:2181,10.0.102.204:2181,10.0.102.214:2181
zookeeper.connection.timeout.ms=6000

Description of configuration file parameters:

  • broker.id: Every broker needs an identifier, represented by broker.id. The default is 0; it can be set to any integer, and it must be unique within the Kafka cluster.
  • port: The port Kafka listens on; the default is 9092.
  • zookeeper.connect: The ZooKeeper address used to store broker metadata. The format is host:port, with multiple addresses separated by commas; the three addresses above form one ZooKeeper ensemble. If the ZooKeeper node the broker is currently connected to goes down, the broker can connect to another server in the ensemble.
  • log.dirs: Kafka keeps all messages on disk; the directories that store these log segments are specified by log.dirs. It is a comma-separated list of local filesystem paths. If multiple paths are specified, the broker stores all log segments of the same partition under the same path and places new partitions following a "least used" rule. Note that the broker adds new partitions to the path holding the fewest partitions, not the path with the most free disk space.
  • num.recovery.threads.per.data.dir: the number of threads per log directory that Kafka uses to handle log segments in the following three situations:
    • When the server starts normally, to open each partition's log segments.
    • When the server restarts after a crash, to check and truncate each partition's log segments.
    • When the server shuts down normally, to close log segments.

By default, each log directory uses only one thread. Because these threads are only used when the server starts and shuts down, it is reasonable to configure a large number of them to parallelize the work. Especially for a server with many partitions, parallel recovery after a crash can save a lot of time. The configured number applies per log directory specified by log.dirs: for example, if this value is set to 8 and log.dirs specifies 3 paths, 24 threads are used in total.

  • auto.create.topics.enable: By default, Kafka automatically creates a topic in the following situations:
    • When a producer starts writing messages to the topic.
    • When a consumer starts reading messages from the topic.
    • When any client requests metadata for the topic.

     Automatic topic creation can be confusing, so it is usually better to set this parameter to false!

The Kafka log directory was specified above, so the message directory needs to be created:

[[email protected] ~]# mkdir -p /data/kafka/logs

As with ZooKeeper, unpack the files, create the log directory, and copy everything to the other two servers (a sketch follows); remember to modify broker.id on the other two servers.
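A minimal sketch of the copy step, assuming the same paths on every node (edit server.properties on each target afterwards):

#Copy the prepared kafka directory to the other two nodes and create the message directory there
scp -r /usr/local/kafka 10.0.102.204:/usr/local/
scp -r /usr/local/kafka 10.0.102.214:/usr/local/
ssh 10.0.102.204 "mkdir -p /data/kafka/logs"
ssh 10.0.102.214 "mkdir -p /data/kafka/logs"
#Then change broker.id (and host.name) in /usr/local/kafka/config/server.properties on each node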

Then start kafka:

[[email protected] bin]# ls 
connect-distributed.sh     kafka-consumer-groups.sh             kafka-reassign-partitions.sh   kafka-simple-consumer-shell.sh   zookeeper-server-start.sh
connect-standalone.sh      kafka-consumer-offset-checker.sh     kafka-replay-log-producer.sh   kafka-topics.sh                  zookeeper-server-stop.sh
kafka-acls.sh              kafka-consumer-perf-test.sh          kafka-replica-verification.sh  kafka-verifiable-consumer.sh     zookeeper-shell.sh
kafka-configs.sh           kafka-mirror-maker.sh                kafka-run-class.sh             kafka-verifiable-producer.sh
kafka-console-consumer.sh  kafka-preferred-replica-election.sh  kafka-server-start.sh          windows
kafka-console-producer.sh  kafka-producer-perf-test.sh          kafka-server-stop.sh           zookeeper-security-migration.sh

#The bin directory contains many scripts. Kafka ships with its own ZooKeeper startup script, but building a standalone ZooKeeper cluster is recommended.

[[email protected] bin]# ./kafka-server-start.sh /usr/local/kafka/config/server.properties 1>/dev/null 2>&1 &
[1] 8757
[[email protected] bin]#

#By default Kafka writes its standard output to the screen; the command above keeps Kafka running in the background.

#After starting, check whether port 9092 is being listened on.
[[email protected] bin]# netstat -lntp |grep 9092
tcp        0      0 ::ffff:10.0.102.204:9092    :::*                        LISTEN      8757/java           
[[email protected] bin]#

After all three servers are started, the following tests can be performed:

[[email protected] bin]# ./kafka-console-producer.sh --broker-list 10.0.102.179:9092 --topci science       #Create a topic (the option is mistyped as --topci, so it is rejected)
topci is not a recognized option
[[email protected] bin]#
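Topics are normally created with kafka-topics.sh rather than the console producer; since auto.create.topics.enable was set to false above, the topic has to exist before producing to it. A minimal sketch (the partition and replication-factor values are illustrative):

#Create the topic explicitly against the zookeeper ensemble
./kafka-topics.sh --zookeeper 10.0.102.179:2181 --create --topic science --partitions 1 --replication-factor 1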

#Create a producer and send messages
[[email protected] bin]# ./kafka-console-producer.sh --broker-list 10.0.102.179:9092 --topic science
test message
first
#Create a consumer to receive the messages sent by the producer
[[email protected] bin]# ./kafka-console-consumer.sh --zookeeper 10.0.102.179:2181 --topic science --from-beginning
test message

first

#You can see that Kafka is a first-in, first-out message queue

If the message flow above is verified successfully, the cluster has been built successfully.

[[email protected] bin]# ./kafka-topics.sh --zookeeper 10.0.102.179:2181 --describe --topic science         #View the topic's details
Topic:science  PartitionCount:1  ReplicationFactor:1  Configs:
  Topic: science  Partition: 0  Leader: 2  Replicas: 2  Isr: 2
[[email protected] bin]#

 

A note about an exception: during testing, the following error kept appearing at first:

[[email protected] bin]# ./kafka-console-producer.sh  --broker-list localhost:9092 --topic lianxi
test1
[2018-12-20 17:33:27,112] ERROR Error when sending message to topic lianxi with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

This problem took a long time to figure out; in the end, after downgrading Java from 1.8 to 1.7, it worked!

The versions of the components in the working setup are as follows:

[[email protected] ~]# java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)

#kafka has no version command of its own; check the jars it ships with instead
[[email protected] libs]# pwd
/usr/local/kafka/libs
[[email protected] libs]# ls kafka_2.10-0.10.0.0.jar
kafka_2.10-0.10.0.0.jar

#2.10 is the Scala version; the following 0.10.0.0 is the Kafka version

#The zookeeper installation directory contains jars carrying the version number
[[email protected] zookeeper]# ls
bin        CHANGES.txt  contrib     docs             ivy.xml  LICENSE.txt  README_packaging.txt  recipes  zookeeper-3.4.6.jar      zookeeper-3.4.6.jar.md5
build.xml  conf         dist-maven  ivysettings.xml  lib      NOTICE.txt   README.txt            src      zookeeper-3.4.6.jar.asc  zookeeper-3.4.6.jar.sha1
[[email protected] zookeeper]#        #The zookeeper version is 3.4.6

