Hadoop stand-alone / pseudo-distributed installation and configuration on Linux

Step 1: Create a hadoop user and grant it sudo privileges

(1) On a freshly installed Linux system (CentOS-7-x86_64-DVD-1708.iso), the initial user is root rather than hadoop, so a hadoop user needs to be added. Execute the following command to check whether a hadoop user already exists.

$cat /etc/passwd | grep hadoop

(2) If the hadoop user does not exist, continue with (3) to create it; if it already exists, skip to (4).

(3) Create the user hadoop. (If the system does not have the sudo command, install it first with yum install sudo.)

$sudo useradd -m hadoop -s /bin/bash

(4) Set the password to hadoop (the password is not echoed as you type).

$sudo passwd hadoop

(5) Grant sudo privileges to the hadoop user. (Running sudo as hadoop may fail with the error "hadoop is not in the sudoers file. This incident will be reported." For a solution, see: https://blog.csdn.net/haijiege/article/details/79630187 )

$sudo usermod -aG wheel hadoop (on CentOS, members of the wheel group may use sudo; on Debian/Ubuntu the equivalent is $sudo adduser hadoop sudo)
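If the "not in the sudoers file" error still appears, the hadoop user can also be added to /etc/sudoers directly. A minimal sketch (run as root; visudo checks the syntax on save):

$su root

$visudo

Then add the following line below root ALL=(ALL) ALL :

hadoop ALL=(ALL) ALL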

(6) Restart the computer and log in as the hadoop user.

$reboot

Step 2: Set the cluster node's hostname and add a domain-name mapping.

(1) Write the node name into the /etc/hostname file.

$sudo vi /etc/hostname

(2) Write the node's IP address and hostname into /etc/hosts to complete the domain-name mapping.

$sudo vi /etc/hosts

For example: 172.17.67.10 master
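Before rebooting, the mapping can already be verified, for example:

$ping -c 3 master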


(3) Restart the computer.

$reboot
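After the reboot, confirm the new name took effect:

$hostname #should print the name written in /etc/hostname, e.g. master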

Step 3: Configure SSH login permissions

(1) Install SSH on the node.

Check the ssh installation packages: rpm -qa | grep ssh
Check whether the ssh service is running: ps -ef | grep ssh

If the rpm output lists the openssh packages and the ps output shows a running sshd process, SSH is already installed.

If not installed, enter the command:

$sudo yum install openssh-server (this guide uses CentOS; on Debian/Ubuntu the command would be sudo apt-get install openssh-server)

(2) Generate public and private keys on the node.

$ssh-keygen -t rsa (press Enter at each prompt to accept the defaults)

The .ssh directory is automatically created under ~/, containing id_rsa (the private key) and id_rsa.pub (the public key); the authorized_keys file is created in the next step.

(3) Append the node's public key to the .ssh/authorized_keys file.

$cd ~/.ssh

$cat ./id_rsa.pub >> ./authorized_keys

(4) Test SSH password-free login; it should no longer prompt for a password.

$ssh localhost
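If a password is still requested (common on CentOS, where sshd requires strict permissions on these files), tightening the permissions usually fixes it:

$chmod 700 ~/.ssh

$chmod 600 ~/.ssh/authorized_keys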


After the test succeeds, you can execute the exit command to end the remote login.

Step 4: Install the Java environment

(1) Create a jvm directory in the directory /usr/lib, and change the owner of the directory to the hadoop user.

$sudo mkdir /usr/lib/jvm/

$sudo chown -R hadoop:hadoop /usr/lib/jvm

(2) Use the tar command to extract the jdk-8u121-linux-x64.tar.gz file into the directory /usr/lib/jvm.

$cd ~/

$sudo tar -zxvf jdk-8u121-linux-x64.tar.gz -C /usr/lib/jvm/

(3) Configure the JDK environment variables and make them take effect.

①Use the vi command to open the user’s configuration file .bashrc.

$vi ~/.bashrc (sudo is not needed to edit your own configuration file)

②Add the following content to the file:

export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_121

export JRE_HOME=$JAVA_HOME/jre

export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH

export CLASSPATH=$CLASSPATH:.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib

③Make the environment variables take effect and verify that the JDK is installed successfully.

$ source ~/.bashrc #reload the environment variables

$ java -version #if the Java version information is printed, the installation succeeded


Step 5: Pseudo-distributed cluster installation and configuration

(1) Use the tar command to extract the hadoop-2.7.3.tar.gz file into the directory /usr/local and rename it to hadoop.

$cd ~/ #Enter the directory where the hadoop-2.7.3.tar.gz file is located

$sudo tar -zxvf hadoop-2.7.3.tar.gz -C /usr/local

$cd /usr/local #Enter /usr/local to view the decompression result

$ls #after extraction the directory is named hadoop-2.7.3

$sudo mv ./hadoop-2.7.3 ./hadoop #to simplify later commands, rename the folder to hadoop

(2) Change the owner of the directory /usr/local/hadoop to the hadoop user.

$ sudo chown -R hadoop:hadoop /usr/local/hadoop

(3) Modify the environment variables and make them take effect.

①Modify environment variables

$vi ~/.bashrc #open the user configuration file .bashrc and append the following Hadoop settings

export HADOOP_HOME=/usr/local/hadoop

export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export YARN_HOME=/usr/local/hadoop

export YARN_CONF_DIR=${YARN_HOME}/etc/hadoop

②Make the environment variables take effect:

$source ~/.bashrc
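To confirm the new variables are visible in the current shell, for example:

$echo $HADOOP_HOME #should print /usr/local/hadoop

$/usr/local/hadoop/bin/hadoop version #prints the Hadoop version if the unpacking succeeded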

(4) Configure the Hadoop pseudo-distributed environment; pseudo-distribution requires modifying the following four configuration files.

$cd /usr/local/hadoop/etc/hadoop

$vi filename (replace filename with each of the files below in turn)

①Configure JAVA_HOME in hadoop-env.sh

export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_121

②Configure JAVA_HOME in yarn-env.sh

export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_121

③Modify the core-site.xml file.

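The original screenshot is unavailable; a typical pseudo-distributed core-site.xml (the hostname master and the tmp path below are illustrative values consistent with the rest of this guide) looks like this:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>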

④ Modify the hdfs-site.xml file.

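That screenshot is also unavailable; a typical pseudo-distributed hdfs-site.xml sets the replication factor to 1 and places the NameNode/DataNode directories under hadoop.tmp.dir (values illustrative):

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/data</value>
  </property>
</configuration>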

(5) Format the NameNode.

$cd /usr/local/hadoop

$bin/hdfs namenode -format

Note: If you modify the configuration files after formatting, you must reformat the NameNode, and before doing so you need to delete the tmp, dfs, and logs folders.
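A sketch of that cleanup, assuming the paths used in this guide:

$cd /usr/local/hadoop

$rm -rf ./tmp ./dfs ./logs #with the configuration above, dfs lives under tmp; remove whichever of these exist

$bin/hdfs namenode -format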

(6) Start the Hadoop services

$cd /usr/local/hadoop

$sbin/start-dfs.sh

$sbin/start-yarn.sh

(7) Verify whether the installation succeeded.

①Execute the jps command to view the running services.

The $sbin/start-all.sh command can also be used to start all Hadoop services at once.

If everything started correctly, jps should list NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager, and Jps.
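As a further smoke test, you can create and list a directory in HDFS (run from /usr/local/hadoop; the path is illustrative):

$bin/hdfs dfs -mkdir -p /user/hadoop

$bin/hdfs dfs -ls /user

The web interfaces are also available at the Hadoop 2.x default ports: the NameNode UI at http://master:50070 and the ResourceManager UI at http://master:8088.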

A more detailed write-up: https://www.cnblogs.com/hopelee/p/7049819.html
