
JEESZ Kafka Cluster Installation

2017-11-08

Abstract: Installing a three-broker Kafka cluster for the JEESZ framework on service1, service2, and service3: unpack Kafka 0.8.1.1 on each server, configure server.properties, start the brokers, and verify the cluster with a console producer and consumer.

1. Create a kafka directory under the root directory (on service1, service2, and service3):

[root@localhost /]# mkdir kafka

2. Upload kafka_2.9.2-0.8.1.1.tgz to the /software directory on service1 via Xshell.
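If Xshell is not available, any SCP/SFTP client does the same job. A minimal sketch run from a local machine, assuming service1 is 192.168.2.211 (the address used for its broker below) and a hypothetical local path:

scp /path/to/kafka_2.9.2-0.8.1.1.tgz root@192.168.2.211:/software/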

3. Remote-copy /software/kafka_2.9.2-0.8.1.1.tgz from service1 to service2 and service3:

[root@localhost software]# scp -r /software/kafka_2.9.2-0.8.1.1.tgz root@192.168.2.212:/software/

[root@localhost software]# scp -r /software/kafka_2.9.2-0.8.1.1.tgz root@192.168.2.213:/software/

4. Copy /software/kafka_2.9.2-0.8.1.1.tgz to the /kafka/ directory (run on service1, service2, and service3):

[root@localhost software]# cp /software/kafka_2.9.2-0.8.1.1.tgz /kafka/

5. Extract kafka_2.9.2-0.8.1.1.tgz (on service1, service2, and service3):

[root@localhost /]# cd /kafka/

[root@localhost kafka]# tar -zxvf kafka_2.9.2-0.8.1.1.tgz
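Optionally (not part of the original steps), a version-independent symlink such as a hypothetical /kafka/current keeps later paths stable if the Kafka version ever changes:

[root@localhost kafka]# ln -s /kafka/kafka_2.9.2-0.8.1.1 /kafka/current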

6. Create the Kafka message-log directory (on service1, service2, and service3):

[root@localhost kafka]# mkdir kafkaLogs

7. Edit the Kafka configuration file (on service1, service2, and service3):

[root@localhost /]# cd /kafka/kafka_2.9.2-0.8.1.1/

[root@localhost kafka_2.9.2-0.8.1.1]# cd config/

[root@localhost config]# ls

consumer.properties  log4j.properties  producer.properties  server.properties  test-log4j.properties  tools-log4j.properties  zookeeper.properties

[root@localhost config]# vi server.properties

# Licensed to the Apache Software Foundation (ASF) under one or more

# contributor license agreements.  See the NOTICE file distributed with

# this work for additional information regarding copyright ownership.

# The ASF licenses this file to You under the Apache License, Version 2.0

# (the "License"); you may not use this file except in compliance with

# the License.  You may obtain a copy of the License at

#

#    http://www.apache.org/licenses/LICENSE-2.0

#

# Unless required by applicable law or agreed to in writing, software

# distributed under the License is distributed on an "AS IS" BASIS,

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

# See the License for the specific language governing permissions and

# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.

broker.id=0  --unique ID; must be a different integer on each broker (e.g. 0, 1, 2 on service1/2/3)

############################# Socket Server Settings #############################

# The port the socket server listens on

port=19092  --the TCP port this broker exposes to clients (default 9092)

# Hostname the broker will bind to. If not set, the server will bind to all interfaces

host.name=192.168.2.213  --set this to each broker's own IP. The setting is normally commented out; enable it and use an IP address rather than a hostname. If DNS resolution fails, file handles leak: Kafka is fast enough that a single topic partition can handle 100,000+ messages per second, so even a one-in-ten-thousand DNS failure rate leaks about 10 handles per second, and the process soon exceeds the Linux open-file limit and starts throwing errors. Binding to an IP avoids DNS resolution altogether.
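# (A related sanity check, run from a shell rather than set in this file:
# "ulimit -n" prints the per-process open-file limit, and it can be raised
# in /etc/security/limits.conf. A hedged aside, not part of the original steps.)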

# Hostname the broker will advertise to producers and consumers. If not set, it uses the

# value for "host.name" if configured.  Otherwise, it will use the value returned from

# java.net.InetAddress.getCanonicalHostName().

#advertised.host.name=

# The port to publish to ZooKeeper for clients to use. If this is not set,

# it will publish the same port that the broker binds to.

#advertised.port=

# The number of threads handling network requests

num.network.threads=2   --number of threads the broker uses for network requests; usually left at the default

# The number of threads doing disk I/O

num.io.threads=8  --number of disk I/O threads; must be larger than the number of directories in log.dirs

# The send buffer (SO_SNDBUF) used by the socket server

socket.send.buffer.bytes=1048576  --send buffer: outgoing messages are staged here and sent in batches once enough accumulate

# The receive buffer (SO_RCVBUF) used by the socket server

socket.receive.buffer.bytes=1048576  --receive buffer: incoming messages accumulate here before being written to disk

# The maximum size of a request that the socket server will accept (protection against OOM)

socket.request.max.bytes=104857600   --maximum size of a single request sent to Kafka; must not exceed the Java heap size
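# (Context for the heap caveat above: the broker heap is set via
# KAFKA_HEAP_OPTS in bin/kafka-server-start.sh, which in the 0.8.x scripts
# defaults to -Xmx1G -Xms1G.)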

############################# Log Basics #############################

# A comma separated list of directories under which to store log files

log.dirs=/kafka/kafkaLogs  --multiple directories can be separated with commas
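# For example (hypothetical paths), partitions can be spread across disks:
# log.dirs=/data1/kafkaLogs,/data2/kafkaLogs
# (num.io.threads above must then be sized to match the directory count)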

# The default number of log partitions per topic. More partitions allow greater

# parallelism for consumption, but this will also result in more files across

# the brokers.

num.partitions=2  --default number of partitions per topic

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync

# the OS cache lazily. The following configurations control the flush of data to disk.

# There are a few important trade-offs here:

#    1. Durability: Unflushed data may be lost if you are not using replication.

#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.

#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.

# The settings below allow one to configure the flush policy to flush data after a period of time or

# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk

#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush

#log.flush.interval.ms=1000
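# For example, uncommenting both settings above forces a flush after every
# 10000 messages or after 1 second, whichever comes first, trading some
# throughput for durability as described above.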

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can

# be set to delete segments after a period of time, or after a given size has accumulated.

# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens

# from the end of the log.

# The minimum age of a log file to be eligible for deletion

log.retention.hours=168

message.max.bytes=5048576   --maximum size of a single message the broker accepts

default.replication.factor=2  --default replication factor; with only one copy a message is lost if its broker fails, so set it to 2 so that a replica on another broker can take over

replica.fetch.max.bytes=5048576 --maximum bytes a replica fetch request pulls per partition; keep it at least message.max.bytes so large messages can still be replicated

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining

# segments don't drop below log.retention.bytes.

#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.

log.segment.bytes=536870912  --maximum size of a log segment file; when it is reached, a new segment is rolled

# The interval at which log segments are checked to see if they can be deleted according

# to the retention policies

log.retention.check.interval.ms=60000

# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.

# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.

log.cleaner.enable=false  --log compaction is not used here

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).

# This is a comma separated host:port pairs, each corresponding to a zk

# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".

# You can also append an optional chroot string to the urls to specify the

# root directory for all kafka znodes.

zookeeper.connect=192.168.2.211:2181,192.168.2.212:2181,192.168.2.213:2181   --the ZooKeeper ensemble addresses

# Timeout in ms for connecting to zookeeper

zookeeper.connection.timeout.ms=1000000
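Before starting the brokers, it is worth confirming that each server can reach ZooKeeper. A quick sketch using ZooKeeper's four-letter "ruok" command, assuming nc (netcat) is installed; a healthy server answers "imok":

[root@localhost /]# echo ruok | nc 192.168.2.211 2181
imok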

8. Start the Kafka service (from the bin directory, on each server):

[root@localhost bin]# ./kafka-server-start.sh -daemon ../config/server.properties

[root@localhost bin]# jps

27413 Kafka

27450 Jps

17884 QuorumPeerMain
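Optionally, confirm the broker is listening on the port configured above (19092). A sketch assuming net-tools is installed:

[root@localhost bin]# netstat -tlnp | grep 19092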

9. Verify the Kafka cluster by creating a topic:

[root@localhost bin]# ./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 1 --topic test

Created topic "test".
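To check how the partition and its two replicas were placed across the brokers, describe the topic (the leader and replica ids shown depend on each server's broker.id):

[root@localhost bin]# ./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test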

10. Start a console producer on service1 (the broker port must match the port set in server.properties; with port=19092 above, the address is 192.168.2.211:19092):

[root@localhost bin]# ./kafka-console-producer.sh --broker-list 192.168.2.211:19092 --topic test

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".

SLF4J: Defaulting to no-operation (NOP) logger implementation

SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

11. Start a console consumer on service2:

[root@localhost bin]# ./kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".

SLF4J: Defaulting to no-operation (NOP) logger implementation

SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

12. Send a message from the producer: hello jeesz

[root@localhost bin]# ./kafka-console-producer.sh --broker-list 192.168.2.211:19092 --topic test

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".

SLF4J: Defaulting to no-operation (NOP) logger implementation

SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

hello jeesz

13. The message is received in the consumer:

[root@localhost bin]# ./kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".

SLF4J: Defaulting to no-operation (NOP) logger implementation

SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

hello jeesz
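When testing is finished, each broker can be stopped cleanly with the bundled script, run from the same bin directory on each server:

[root@localhost bin]# ./kafka-server-stop.sh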

If you would like to learn more about the framework or its source code, add QQ: 2042849237.

More detailed source code reference: http://minglisoft.cn/technology
