Fully distributed HBase cluster setup

Note: the following is my personal understanding, based on online materials and work experience. If anything is incorrect, please point it out. Thank you!

1. HBase installation modes

  ①Single-machine installation: does not depend on Hadoop HDFS and is ready to use once configured. The advantage is that it is easy to test; the disadvantage is that it cannot store data in a distributed way.

  ②Pseudo-distributed installation: a single host simulates the real environment.

  ③Fully distributed installation: built across multiple hosts (or virtual machines).

2. Preparations for building a fully distributed HBase cluster

  ①Install Hadoop + JDK + ZooKeeper (a 3-node ZooKeeper cluster; the installation is covered in my other blog posts)

  ②Prepare the HBase installation package

  ③Prepare three nodes:

    192.168.144.130 (pseudo-distributed Hadoop is already installed here, as described in my other blog posts, plus JDK+zookeeper+HBase)

    192.168.144.132 (JDK+zookeeper+HBase)

    192.168.144.134 (JDK+zookeeper+HBase)

3. Overview of the build steps

①Turn off the firewall, modify the hostname, set up hosts file mapping, and configure password-free login (all 3 nodes must do this)

Turn off the firewall

  service iptables stop     (temporary shutdown)

  chkconfig iptables off    (permanent shutdown)
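
For reference, a minimal sketch of this step on CentOS 6 (this assumes the iptables service is in use; on systems with firewalld the commands differ):

  # stop the firewall for the current session
  service iptables stop
  # prevent it from starting on boot
  chkconfig iptables off
  # optional: confirm it is disabled for all runlevels
  chkconfig --list iptables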

Modify hostname

  vim /etc/sysconfig/network 

NETWORKING=yes
HOSTNAME=hadoopalone   # the hostname; configure the three nodes as hadoopalone, hadoop02, and hadoop03 respectively
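
To apply the new hostname without waiting for a reboot, it can also be set for the current session (a sketch for one node, CentOS 6 style; hadoop02 is just one of the three names above):

  # set the hostname of the running system
  hostname hadoop02
  # verify
  hostname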

hosts file mapping

  vim /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.144.133 hadoopalone   # map each node's IP to its hostname
192.168.144.131 hadoop02
192.168.144.132 hadoop03
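
After editing /etc/hosts on all three nodes, a quick sanity check is to ping each hostname once (a minimal sketch; adjust the names if yours differ):

  ping -c 1 hadoopalone
  ping -c 1 hadoop02
  ping -c 1 hadoop03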

Password-free login

    #ssh-keygen, just press Enter all the way.


    #ssh-copy-id root@hadoopalone (repeat for each of the three hostnames in turn; see the sketch below)

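Putting the two commands together, a minimal sketch of the password-free login setup, run on each node in turn (this assumes you operate as root and use the hostnames configured above):

    # generate a key pair once per node (press Enter at every prompt)
    ssh-keygen
    # copy the public key to all three nodes, including this one
    for host in hadoopalone hadoop02 hadoop03; do
        ssh-copy-id root@$host
    done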

②Get and decompress the HBase installation package

     download address: http://hbase.apache.org/downloads.html


     I take 0.98.17 as an example here: tar -xvf hbase-xxxxx 
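
As a sketch, assuming the 0.98.17 tarball is named hbase-0.98.17-hadoop2-bin.tar.gz (the exact filename depends on the build you download) and has already been uploaded to the node:

     # extract the tarball; the extracted directory name may vary with the build
     tar -xvf hbase-0.98.17-hadoop2-bin.tar.gz
     # optional: rename the extracted directory to something shorter
     mv hbase-0.98.17-hadoop2 hbase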

③Modify the configuration file—hbase-env.sh

    vim conf/hbase-env.sh (remember, after the modification: source hbase-env.sh)

    #modify JAVA_HOME=XXXX

    #Modify how HBase and ZooKeeper work together: HBase uses its own built-in ZooKeeper by default. To use an external ZooKeeper cluster, the built-in one must be disabled first: export HBASE_MANAGES_ZK=false
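
A sketch of the two relevant lines in conf/hbase-env.sh (the JDK path is only an example; use the JAVA_HOME of your own installation):

    # point HBase at the JDK installed on this node (example path)
    export JAVA_HOME=/home/software/jdk1.7.0_51
    # use the external ZooKeeper cluster instead of the bundled one
    export HBASE_MANAGES_ZK=false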

④Modify the configuration file—hbase-site.xml

    Configure HBase for fully distributed mode:

 <property>
     <name>hbase.rootdir</name>
     <value>hdfs://hadoopalone:9000/hbase</value>
 </property>
 <property>
     <name>hbase.cluster.distributed</name>
     <value>true</value>
 </property>
 <property>
     <name>hbase.zookeeper.quorum</name>
     <value>hadoopalone:2181,hadoop02:2181,hadoop03:2181</value>
 </property>

⑤Configure the region servers

  vim conf/regionservers

   each hostname goes on its own line

  Enter the following:

  hadoopalone

  hadoop02

  hadoop03

⑥Remotely copy the configured HBase directory to the other two nodes
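
A sketch using scp, assuming HBase was extracted to /home/software/hbase (a hypothetical path; substitute your own) and that password-free login is already working:

    # copy the configured HBase directory to the other two nodes
    scp -r /home/software/hbase root@hadoop02:/home/software/
    scp -r /home/software/hbase root@hadoop03:/home/software/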


⑦Start ZooKeeper, Hadoop, and HBase in turn

4. Building the HBase cluster step by step

①Turn off the firewall, modify the hostname, set up hosts file mapping, and configure password-free login (all 3 nodes must do this)

Turn off the firewall


Edit host name

  vim /etc/sysconfig/network (remember to reboot after the modification)

The host name of the first node is hadoopalone


  The host name of the second node is hadoop02


   The host name of the third node is hadoop03


hosts file ip mapping

   select a node: vim /etc/hosts


Password-free login

    ①ssh-keygen, then press Enter all the way.

    ②ssh-copy-id root@hadoopalone, ssh-copy-id root@hadoop02, ssh-copy-id root@hadoop03

     each node executes the above commands in turn:


②Upload the ZooKeeper and HBase installation packages to the 3 nodes and decompress them

  Upload via rz command

  Unzip: tar -xvf zookeeper-xxxxx and tar -xvf hbase-xxxxx

   Each node now has HBase and ZooKeeper


③Modify the configuration file—hbase-env.sh

   Modify JAVA_HOME


   Modify the HBase/ZooKeeper coordination mode (after modifying, run source hbase-env.sh for it to take effect)


④Modify the configuration file—hbase-site.xml (use the properties listed in step ④ of section 3)


⑤Configure the region servers

  vim conf/regionservers


⑥Remotely copy the configured HBase directory to the other two nodes

   Make sure the JDK installation path is the same on all nodes


⑦Start ZooKeeper, Hadoop, and HBase in turn

Start ZooKeeper

    On each node, run sh zkServer.sh start to start ZooKeeper, then check its status with sh zkServer.sh status (a combined sketch follows the per-node status notes below)

     hadoopalone node zookeeper is: follower


    hadoop02 node zookeeper is: leader


    hadoop03 node zookeeper is: follower

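Putting this together, a minimal sketch for starting and checking ZooKeeper on every node from one session (assuming zkServer.sh is on each node's PATH and you connect as root):

    # start ZooKeeper and report its role on every node
    for host in hadoopalone hadoop02 hadoop03; do
        ssh root@$host "zkServer.sh start && zkServer.sh status"
    done
    # zkServer.sh status prints a line such as "Mode: leader" or "Mode: follower"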

Start pseudo-distributed Hadoop

     I already built pseudo-distributed Hadoop on hadoopalone earlier. On the hadoopalone node, run start-all.sh to start Hadoop.

     Use jps to check whether the pseudo-distributed Hadoop processes started successfully (a rough reference listing follows below).

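As a rough reference (the exact list depends on the Hadoop version and configuration), jps on hadoopalone should show processes along these lines:

    # typical jps output for a pseudo-distributed Hadoop 2.x node (process IDs omitted)
    jps
    NameNode
    DataNode
    SecondaryNameNode
    ResourceManager
    NodeManager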

Start HBase

Go to the HBase bin directory on the hadoopalone node and run sh start-hbase.sh to start the cluster, then use jps to check whether an HMaster process is present.


   On the other nodes, use jps to check; an HRegionServer process indicates the HBase cluster is up. You can also run sh start-hbase.sh on one of those nodes: its HMaster becomes a backup master, while the HMaster that was started first acts as the active master.

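A quick way to confirm the cluster is healthy is the status command in the HBase shell (a sketch; run it from the HBase bin directory of any node):

    # open the HBase shell
    sh hbase shell
    # at the hbase prompt, ask for a cluster summary
    status
    # the output should look roughly like: 3 servers, 0 dead, ... average load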

That completes the HBase cluster setup. If you run into problems, feel free to discuss them in the comments. Thank you.

