Installing a Hadoop Cluster on CentOS 7

Prepare three virtual machines with the IPs 192.168.220.10 (master), 192.168.220.11 (slave1), and 192.168.220.12 (slave2).

Download jdk-6u45-linux-x64.bin and hadoop-1.2.1-bin.tar.gz and place them in the /usr/local/src/ directory.

Install the JDK (on each virtual machine)

1. Enter the /usr/local/src/ directory and execute ./jdk-6u45-linux-x64.bin

2. Modify ~/.bashrc, adding these three lines at the end:

export JAVA_HOME=/usr/local/src/jdk1.6.0_45
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin


3. Apply the environment variables by executing source ~/.bashrc
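Steps 2 and 3 can be scripted; the sketch below defaults to a throwaway temp file so it is safe to dry-run, and the RC variable should point at ~/.bashrc on the real machines.

```shell
# Append the three JDK variables and load them into the current shell.
# RC defaults to a temp file for a safe dry run; use RC=~/.bashrc for real.
RC="${RC:-$(mktemp)}"
cat >> "$RC" <<'EOF'
export JAVA_HOME=/usr/local/src/jdk1.6.0_45
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin
EOF
# Load the new variables into the current shell.
. "$RC"
echo "JAVA_HOME is now $JAVA_HOME"
```

Afterwards, `java -version` should report 1.6.0_45 on each machine.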

Install Hadoop

Install Hadoop on the 192.168.220.10 (master) machine.

1. Go to the /usr/local/src/ directory and unpack the archive: execute tar -zxf hadoop-1.2.1-bin.tar.gz

2. Modify the configuration files (in /usr/local/src/hadoop-1.2.1/conf):

masters file

master


slaves file

slave1
slave2


core-site.xml file

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/src/hadoop-1.2.1/tmp</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://192.168.220.10:9000</value>
    </property>
</configuration>

mapred-site.xml file

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>http://192.168.220.10:9001</value>
    </property>
</configuration>

hdfs-site.xml file

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
</configuration>

(Note: this cluster has only two datanodes, so a replication factor of 3 leaves blocks under-replicated; a value of 2 would match the actual cluster.)

hadoop-env.sh file, add a line at the end

export JAVA_HOME=/usr/local/src/jdk1.6.0_45


3. Copy the /usr/local/src/hadoop-1.2.1 directory to the 192.168.220.11 and 192.168.220.12 machines
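The copy in step 3 can be sketched as a loop; this is an illustrative sketch, not the article's exact commands. It assumes the root account on the slaves, and since passwordless SSH is configured later, scp will prompt for passwords at this point. With DRY_RUN=1 (the default here) the commands are only printed.

```shell
# Push the configured Hadoop tree to both slaves.
# DRY_RUN=1 (default) prints the commands instead of running them.
DRY_RUN="${DRY_RUN:-1}"
for host in 192.168.220.11 192.168.220.12; do
  cmd="scp -r /usr/local/src/hadoop-1.2.1 root@${host}:/usr/local/src/"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
done
```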

Configure the hostname

Configure the hostname of 192.168.220.10 as master

1. Execute hostname master

2. Modify the /etc/hostname file:

master


Similarly, set the hostname of 192.168.220.11 to slave1 and the hostname of 192.168.220.12 to slave2.

Configure the hosts file

Add the following lines to the end of the /etc/hosts file on all three machines:

192.168.220.10 master
192.168.220.11 slave1
192.168.220.12 slave2
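The mappings above can be staged in a temp file first and reviewed before installing; a minimal sketch (the actual append to /etc/hosts is left commented out so the sketch is harmless to run):

```shell
# Stage the three cluster mappings, review them, then append on each machine.
TMP_HOSTS=$(mktemp)
cat > "$TMP_HOSTS" <<'EOF'
192.168.220.10 master
192.168.220.11 slave1
192.168.220.12 slave2
EOF
cat "$TMP_HOSTS"
# cat "$TMP_HOSTS" >> /etc/hosts   # run this on each of the three machines
```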

Configure SSH

1. Execute ssh-keygen on 192.168.220.10; this creates a .ssh directory under ~ containing id_rsa and id_rsa.pub

2. Copy id_rsa.pub to authorized_keys

cp id_rsa.pub authorized_keys

3. Execute ssh-keygen on 192.168.220.11 and 192.168.220.12 respectively

4. Append the contents of id_rsa.pub from 192.168.220.11 and 192.168.220.12 to the authorized_keys file on 192.168.220.10, so that it looks like this:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9mGRhFOdcoHw9GUnKQmqThNKpsyah93Dtq/d8RICGWIHDRJ3GXd0sEcb743ejwbuCMmtlhheXcU0FuyA6Cm0jvMyvDfaPKArtxl6KT7Z93uC0VDCXDRomueux81HAIVjc7ZqlXwVeYs1LITxEeJykKlFOXvK7JexWhWGdMMADwxbFMbaNsZ9EwRxcFLFtNg65FQ+u8CIV9KR3D02kemwLCsP+xiRcgs+wirQPm5JM+2cJoLsVQBz3Hk335IsEhc1Xb9Cralo8Tt8gh/ho8K/1pVjvyW1b0LkP9HGNdwVYD9wkWdEJRkryLXBEXpjk4xu+riF+N4rOzJD [email protected]
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDn79fdfR/NjzPVD3NPj1vBBfQdVOrv7jeb4UJCOsd7xioPRiz8gOQnOmhu5C+GchbyGA+tg5pXwnNJTOO2wn32U4lOPndW0okN/wqyN4vgq/taJi7JgY/8rneBiGaIIdNIy/pAGlMwb53Qn766adetMhsxYMD2l4uxmbVVjzCRb8QP5EsAYTmmFOODzJsPm70uF3j1Q8zGavYg0wFSYR/yECQns4DBSuBJNxdGY6PskBXqurahwi5yaR3vWV1Ix4wtB6BYuQomEnGdzOSfrBMZ/yc5tXo0xmEfY7wFkize6z9Pm2E3oDoMR18YkwT1Cz6fHikVILA9cldtL [email protected]
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCydYCvASzCZggks4hMqOcYSGLO2eAvocWezNOMwspTfpJ105Jumb/vf5h6cRZeckq56IvhSV6t6mytk4pZoZjjZPSmWvCwLtMRMPShNbA3BYtj5V3WRKV8ZcMrNdD//U7iHHoJm57vI/m+XO42YSYjPw7JDkb8Ij9b6zgI3fyvbSSYeXb451PlyJLHdxIzRMAaZDSbAML9e7EO8VJB9Wf9bXpow4+VeP33it3kgMNUlHQtyqduSwYGxVVtGsUTJkxnuRsbWeeA1/pp8MNFKUgBTMALTVHByglgZqwGcbblJxsG832PIZNRECIFqorm6odftjnT4DR7/0yR [email protected]

5. Copy the authorized_keys file from 192.168.220.10 to the 192.168.220.11 and 192.168.220.12 machines

Once this is done, the three machines can SSH to each other without a password.
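The key aggregation in steps 2-5 amounts to concatenating every machine's id_rsa.pub into one shared authorized_keys file and distributing it. A minimal local sketch with stand-in key lines (not real keys) shows the shape of that file:

```shell
# Demonstrate the aggregation with stand-in key lines: every node's
# id_rsa.pub becomes one line of the shared authorized_keys file.
DIR=$(mktemp -d)
echo "ssh-rsa FAKEKEYMASTER root@master" > "$DIR/master.pub"
echo "ssh-rsa FAKEKEYSLAVE1 root@slave1" > "$DIR/slave1.pub"
echo "ssh-rsa FAKEKEYSLAVE2 root@slave2" > "$DIR/slave2.pub"
cat "$DIR"/*.pub > "$DIR/authorized_keys"
wc -l < "$DIR/authorized_keys"   # one line per machine
```

On CentOS 7, ssh-copy-id root@host (from the openssh-clients package, if installed) achieves the same per-pair setup as the manual copy.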

The Hadoop cluster is now configured; let's use it.

Format the namenode on 192.168.220.10

Go to the /usr/local/src/hadoop-1.2.1/bin directory and execute

./hadoop namenode -format

The following output appears:

19/08/04 15:15:21 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = master/192.168.220.10
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.6.0_45
************************************************************/
19/08/04 15:15:21 INFO util.GSet: Computing capacity for map BlocksMap
19/08/04 15:15:21 INFO util.GSet: VM type = 64-bit
19/08/04 15:15:21 INFO util.GSet: 2.0% max memory = 1013645312
19/08/04 15:15:21 INFO util.GSet: capacity = 2^21 = 2097152 entries
19/08/04 15:15:21 INFO util.GSet: recommended=2097152, actual=2097152
19/08/04 15:15:22 INFO namenode.FSNamesystem: fsOwner=root
19/08/04 15:15:22 INFO namenode.FSNamesystem: supergroup=supergroup
19/08/04 15:15:22 INFO namenode.FSNamesystem: isPermissionEnabled=true
19/08/04 15:15:22 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
19/08/04 15:15:22 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
19/08/04 15:15:22 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
19/08/04 15:15:22 INFO namenode.NameNode: Caching file names occuring more than 10 times
19/08/04 15:15:23 INFO common.Storage: Image file /usr/local/src/hadoop-1.2.1/tmp/dfs/name/current/fsimage of size 110 bytes saved in 0 seconds.
19/08/04 15:15:23 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/usr/local/src/hadoop-1.2.1/tmp/dfs/name/current/edits
19/08/04 15:15:23 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/usr/local/src/hadoop-1.2.1/tmp/dfs/name/current/edits
19/08/04 15:15:23 INFO common.Storage: Storage directory /usr/local/src/hadoop-1.2.1/tmp/dfs/name has been successfully formatted.
19/08/04 15:15:23 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.220.10
************************************************************/

Start the daemons with ./start-all.sh (still in the bin directory), then use jps to view the processes on the master:

[root@master bin]# jps

19905 JobTracker
19650 NameNode
19821 SecondaryNameNode
20202 Jps

Check the processes on 192.168.220.11:

9289 DataNode
9493 Jps
9391 TaskTracker

Check the processes on 192.168.220.12:

6823 DataNode
6923 TaskTracker
7057 Jps

Now test it:

Execute ./hadoop fs -ls /

drwxr-xr-x   - root supergroup          0 2019-08-04 15:15 /usr

Upload a file: execute ./hadoop fs -put /root/w.txt /

View the file: execute ./hadoop fs -cat /w.txt, which displays

 ddd

Success
