
HBase 1.2.1 Cluster Deployment, and Kafka


I. Install the HBase Cluster

1. Download HBase

Index of /dist/hbase/1.2.1 (apache.org): https://archive.apache.org/dist/hbase/1.2.1/

2. Install HBase

tar -zxvf /export/software/hbase-1.2.1-bin.tar.gz -C /export/servers/

3. Modify the configuration file hbase-env.sh

In the conf directory under the HBase installation directory, run the "vim hbase-env.sh" command to edit the HBase configuration file hbase-env.sh and set the parameters HBase needs at runtime.

# Specify the JDK installation directory
export JAVA_HOME=/export/servers/jdk1.8.0_161
# Do not use HBase's built-in ZooKeeper
export HBASE_MANAGES_ZK=false

These settings ship commented out; uncomment them and point them at your own directories to avoid errors later on.

In the same file, also comment out the two JVM option lines shown in the original screenshot.
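The screenshot is not reproduced here, but in the stock hbase-env.sh shipped with HBase 1.2.1 the two lines in question are, in all likelihood, the PermSize JVM options below; JDK 8 removed the permanent generation, so leaving them enabled only produces warnings:

# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"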

4. Modify the configuration file hbase-site.xml

In the conf directory under the HBase installation directory, run the "vi hbase-site.xml" command to edit the HBase configuration file hbase-site.xml and set the HBase parameters. Note that the hbase.rootdir value must agree with the fs.defaultFS setting in Hadoop's core-site.xml.


<configuration>
    <!-- Directory shared by every HRegionServer in the cluster, where HBase persists its data on HDFS -->
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://hadoop1/hbase</value>
    </property>
    <!-- Run HBase in distributed mode -->
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <!-- Addresses of the ZooKeeper servers -->
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
    </property>
</configuration>

5. Modify the configuration file regionservers

In the conf directory under the HBase installation directory, run the "vi regionservers" command to edit the HBase configuration file regionservers, listing the hostnames of the servers that will run HRegionServer:

hadoop2

hadoop3

6. Copy the Hadoop configuration files into HBase's conf directory

Go to the etc/hadoop directory under the Hadoop installation directory and copy the configuration files core-site.xml and hdfs-site.xml into the conf directory of the HBase installation, so that HBase can read Hadoop's core and HDFS settings at startup.

cd /export/servers/hadoop/etc/hadoop/

cp hdfs-site.xml /export/servers/hbase-1.2.1/conf/

cp core-site.xml /export/servers/hbase-1.2.1/conf/

7. Configure a backup HMaster

In the conf directory under the HBase installation directory, run the "vim backup-masters" command to create the backup-HMaster configuration file backup-masters, entering the hostname of the server that will run the backup HMaster, hadoop2:

hadoop2

8. Distribute the HBase installation directory

To configure the other servers in the HBase cluster quickly, distribute the HBase installation directory from hadoop1 to hadoop2 and hadoop3:

scp -r /export/servers/hbase-1.2.1/ hadoop2:/export/servers/
scp -r /export/servers/hbase-1.2.1/ hadoop3:/export/servers/

9. Configure HBase environment variables

On each of hadoop1, hadoop2, and hadoop3, run the "vi /etc/profile" command to edit the system environment file profile and add the HBase variables:

export HBASE_HOME=/export/servers/hbase-1.2.1
export PATH=$PATH:$HBASE_HOME/bin

After editing profile, save and exit, then run the "source /etc/profile" command to reload the environment so the changes take effect.
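As a quick sanity check (a sketch, assuming the variables above are in place), the hbase command should now resolve on every machine:

hbase version    # should report HBase 1.2.1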

II. Start the HBase Cluster

Make sure the Hadoop cluster and the ZooKeeper cluster are already running:

start-all.sh

zkServer.sh start # without a cluster-management script, run this on every machine

start-hbase.sh


  
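The screenshots are not reproduced here, but broadly this is what a successful start looks like given the configuration above. Running jps on each machine should show the following (a sketch; exact daemon lists depend on what else is running):

jps
# hadoop1: HMaster, plus the Hadoop and ZooKeeper daemons
# hadoop2: HMaster (the backup) and HRegionServer
# hadoop3: HRegionServer

The HMaster web UI should also be reachable at http://hadoop1:16010 (the default port in HBase 1.x).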

III. Install the Kafka Cluster

1. Download the Kafka package and upload it to the server. The official download page:

Apache Kafka downloads: https://kafka.apache.org/downloads

2. Install Kafka

tar -zxvf /export/software/kafka_2.11-2.0.0.tgz -C /export/servers/

3. Modify the configuration file server.properties

In the config directory under the Kafka installation directory, run the "vim server.properties" command to edit the Kafka configuration file server.properties and set the relevant Kafka parameters.

The five settings that matter most are broker.id, port, log.dirs, zookeeper.connect, and host.name; each is flagged with a comment in the listing below.

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
# Globally unique broker id
broker.id=0
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
# Port the broker listens on; producers and consumers connect to it here
port=9092
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
# Directory where Kafka stores its topic data (log segments)
log.dirs=/export/data/kafka
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=2
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 (such as 3) is recommended to ensure availability.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
log.cleaner.enable=true
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# Brokers use ZooKeeper to store metadata
zookeeper.connect=hadoop1:2181,hadoop2:2181,hadoop3:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
delete.topic.enable=true
# Hostname of this machine
host.name=hadoop1
############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
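A side note on the settings above: in Kafka 2.0 the port and host.name properties are deprecated and are only consulted when listeners is not set. The same effect can be had with the newer listeners property, e.g. (hostname per broker):

listeners=PLAINTEXT://hadoop1:9092

The values in this tutorial still work; listeners is simply what the current documentation recommends.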

4. Add environment variables

Append the following to /etc/profile on hadoop1 and run "source /etc/profile" (step 5 below distributes the file to the other machines):

export KAFKA_HOME=/export/servers/kafka_2.11-2.0.0

export PATH=$PATH:$KAFKA_HOME/bin

5. Distribute the files

scp -r /export/servers/kafka_2.11-2.0.0/ hadoop2:/export/servers/

scp -r /export/servers/kafka_2.11-2.0.0/ hadoop3:/export/servers/

scp /etc/profile hadoop2:/etc/profile

scp /etc/profile hadoop3:/etc/profile
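Copying /etc/profile into place does not load it into existing shells; on hadoop2 and hadoop3 the file still has to be sourced once:

source /etc/profile    # run on hadoop2 and on hadoop3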

6. Modify the configuration on the other brokers

On hadoop2 and hadoop3, edit server.properties in the Kafka config directory: change the broker.id parameter to "1" and "2" respectively, and change host.name to the machine's own hostname (hadoop2 and hadoop3):

# Globally unique broker id: set to 1 on hadoop2 and 2 on hadoop3
broker.id=0

# Hostname of this machine: set to hadoop2 or hadoop3 accordingly
host.name=hadoop1
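For scripting the change, a minimal sketch using sed, assuming the installation path used above (adjust if yours differs):

# On hadoop2:
sed -i 's/^broker.id=0/broker.id=1/' /export/servers/kafka_2.11-2.0.0/config/server.properties
sed -i 's/^host.name=hadoop1/host.name=hadoop2/' /export/servers/kafka_2.11-2.0.0/config/server.properties
# On hadoop3, substitute broker.id=2 and host.name=hadoop3 instead.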

IV. Start the Kafka Cluster

The ZooKeeper cluster must be running before the Kafka cluster starts. On each of the three machines, run:

zkServer.sh start

Once the ZooKeeper service is up, go to the Kafka root directory and run:

bin/kafka-server-start.sh config/server.properties
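The command above runs the broker in the foreground and occupies the terminal. kafka-server-start.sh also accepts a -daemon flag that starts the broker in the background, which is often more convenient for a three-node setup:

bin/kafka-server-start.sh -daemon config/server.properties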


Then open another terminal tab and check whether the service started.

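With the screenshot unavailable, the checks amount to this: jps on each machine should list a Kafka process, and topic creation should succeed against the ZooKeeper quorum configured above (a sketch; the topic name "test" is arbitrary):

jps    # should show a Kafka process on hadoop1, hadoop2, and hadoop3
bin/kafka-topics.sh --zookeeper hadoop1:2181 --create --topic test --partitions 2 --replication-factor 2
bin/kafka-topics.sh --zookeeper hadoop1:2181 --list    # should print: test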
