Kafka Cluster Setup

qq17545293 · 1 month ago

Copyright notice: this is an original article by the author and may not be reproduced without permission. http://blog.csdn.net/qq_17545293/article/details/81603388

Environment preparation

  • Kafka is written in Scala and runs on the JVM, so a JDK must be installed before installing Kafka.
    # yum install java-1.8.0-openjdk* -y

  • Kafka depends on ZooKeeper, so install ZooKeeper first
    # wget http://mirror.bit.edu.cn/apache/zookeeper/stable/zookeeper-3.4.12.tar.gz
    # tar -zxvf zookeeper-3.4.12.tar.gz
    # cd zookeeper-3.4.12
    # cp conf/zoo_sample.cfg conf/zoo.cfg
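    The copied zoo_sample.cfg configures a standalone ZooKeeper. Since the Kafka brokers below point at all three machines, the three ZooKeepers should form one ensemble, which needs `server.N` entries in zoo.cfg plus a matching `myid` file on each host. A minimal sketch using the article's three IPs (the `/tmp/zk-demo` paths are placeholders for this illustration; in practice use the real `dataDir`):

    ```shell
    # Sketch: a three-node ensemble zoo.cfg (IPs from the article; paths are demo placeholders)
    DATA_DIR=/tmp/zk-demo/data
    mkdir -p "$DATA_DIR"
    cat > /tmp/zk-demo/zoo.cfg <<'EOF'
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/tmp/zk-demo/data
    clientPort=2181
    # one line per ensemble member: server.<id>=<host>:<peer-port>:<election-port>
    server.1=192.168.50.136:2888:3888
    server.2=192.168.50.137:2888:3888
    server.3=192.168.50.138:2888:3888
    EOF
    # each host also needs a myid file whose number matches its server.<id>;
    # node 1 (192.168.50.136) is shown here
    echo 1 > "$DATA_DIR/myid"
    ```

    On nodes 2 and 3 only the `myid` contents change (2 and 3 respectively); zoo.cfg is identical on all three hosts.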

  • Configure environment variables for the JDK and ZooKeeper

    # vim /etc/profile

    Add the following to the profile file:

    
    # JDK
    
    export JAVA_HOME=/program/jdk1.8.0_181    # JDK installation directory
    export JRE_HOME=${JAVA_HOME}/jre
    export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
    
    # ZooKeeper
    
    export ZOOKEEPER_HOME=/program/zookeeper-3.4.12    # ZooKeeper installation directory
    export PATH=${JAVA_HOME}/bin:$ZOOKEEPER_HOME/bin:$PATH

    Save the file, then run source /etc/profile so the changes take effect in the current shell.
  • Start ZooKeeper
    # bin/zkServer.sh start conf/zoo.cfg
    # bin/zkCli.sh
    # ls / # list the znodes under the ZK root

Kafka cluster installation

First prepare three CentOS 7 machines, each with the JDK and ZooKeeper configured as above and ZooKeeper started. The three machines used here have the IPs 192.168.50.136, 192.168.50.137, and 192.168.50.138.

Now start the setup:

  • Download and extract the Kafka package
    # wget http://mirrors.tuna.tsinghua.edu.cn/apache/kafka/1.1.0/kafka_2.11-1.1.0.tgz
    # tar -zxvf kafka_2.11-1.1.0.tgz
    # cd kafka_2.11-1.1.0

  • Edit server.properties
    # vim config/server.properties

    
    ############################# Server Basics #############################
    
    broker.id=0
    
    ############################# Socket Server Settings #############################
    
    # If not set, listeners defaults to PLAINTEXT://your.host.name:9092
    listeners=PLAINTEXT://192.168.50.136:9092
    
    ############################# Log Basics #############################
    
    log.dirs=/tmp/kafka-logs
    # Allow the delete-topic command to actually delete topics
    delete.topic.enable=true
    
    ############################# Log Retention Policy #############################
    
    # The minimum age of a log file to be eligible for deletion
    
    # Cleanup policy: delete old segments outright
    log.cleanup.policy=delete
    # Retain logs for 3 days
    log.retention.hours=72
    # Maximum size of a single log segment file, here 1 GB
    log.segment.bytes=1073741824
    
    # The interval at which log segments are checked to see if they can be deleted according
    # to the retention policies
    log.retention.check.interval.ms=300000
    
    ############################# Zookeeper #############################
    
    # ZooKeeper ensemble connection string -- this setting is critical
    zookeeper.connect=192.168.50.136:2181,192.168.50.137:2181,192.168.50.138:2181
    
    # Timeout in ms for connecting to zookeeper
    zookeeper.connection.timeout.ms=6000
    • broker.id must be different on each node; set it to 0, 1, and 2 respectively
    • listeners must use the current node's own IP; on the three nodes configure:
      PLAINTEXT://192.168.50.136:9092
      PLAINTEXT://192.168.50.137:9092
      PLAINTEXT://192.168.50.138:9092
    • zookeeper.connect is identical on all three nodes
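Since only broker.id and listeners differ between the three nodes, the per-node configs can be stamped out from one template with sed. A small sketch (the `/tmp/kafka-demo` paths and the minimal template are illustrative; in practice the template would be the full server.properties shown above):

```shell
# Sketch: generate the three per-broker config files from one template
mkdir -p /tmp/kafka-demo
cat > /tmp/kafka-demo/template.properties <<'EOF'
broker.id=0
listeners=PLAINTEXT://192.168.50.136:9092
zookeeper.connect=192.168.50.136:2181,192.168.50.137:2181,192.168.50.138:2181
EOF
i=0
for ip in 192.168.50.136 192.168.50.137 192.168.50.138; do
  # rewrite the two node-specific settings; everything else is copied unchanged
  sed -e "s/^broker.id=.*/broker.id=$i/" \
      -e "s#^listeners=.*#listeners=PLAINTEXT://$ip:9092#" \
      /tmp/kafka-demo/template.properties > "/tmp/kafka-demo/server-$i.properties"
  i=$((i+1))
done
```

Each generated server-N.properties is then copied to the matching node as config/server.properties.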

Starting the service

  • Stop the firewall: systemctl stop firewalld
  • Start-script syntax: sh bin/kafka-server-start.sh [-daemon] config/server.properties
    The path to server.properties is a mandatory argument; -daemon runs the broker as a background process, otherwise the service stops as soon as the SSH client disconnects.
  • Stop the service: sh bin/kafka-server-stop.sh

Inspecting the Kafka znodes in ZooKeeper

Go into the ZooKeeper directory and use the ZooKeeper client to look at the directory tree:
# sh bin/zkCli.sh
# ls / # list the Kafka-related znodes under the ZK root

[zk: localhost:2181(CONNECTED) 0] ls /
[xx, cluster, controller, brokers, zookeeper, admin, isr_change_notification, log_dir_event_notification, controller_epoch, tl, xiaoxiao0000000000, consumers, latest_producer_id_block, config]

# ls /brokers/ids # list the registered Kafka broker ids

[zk: localhost:2181(CONNECTED) 1] ls /brokers/ids
[0, 1, 2]

Check which broker is currently the cluster controller:

[zk: localhost:2181(CONNECTED) 2] get /controller
{"version":1,"brokerid":2,"timestamp":"1534041018150"}
cZxid = 0x500000037
ctime = Sat Aug 11 22:30:18 EDT 2018
mZxid = 0x500000037
mtime = Sat Aug 11 22:30:18 EDT 2018
pZxid = 0x500000037
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x100009622c60000
dataLength = 54
numChildren = 0
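The JSON on the first line of that output identifies the controller: here brokerid 2, i.e. the node configured with broker.id=2. For scripting (e.g. a health check), the id can be pulled out of that JSON with sed; a small sketch using the exact string from the output above:

```shell
# Sketch: extract the controller's broker id from the /controller znode data
# (JSON copied verbatim from the zkCli output above)
CONTROLLER_JSON='{"version":1,"brokerid":2,"timestamp":"1534041018150"}'
BROKER_ID=$(printf '%s' "$CONTROLLER_JSON" | sed 's/.*"brokerid":\([0-9]*\).*/\1/')
echo "$BROKER_ID"   # 2
```

In a live cluster the JSON would come from `get /controller` via zkCli.sh rather than a hard-coded string.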