Deploying Hadoop 3 on a Mac (pseudo-distributed)

Environment information

  1. Operating system: macOS Mojave 10.14.6
  2. JDK: 1.8.0_211, installed at /Library/Java/JavaVirtualMachines/jdk1.8.0_211.jdk/Contents/Home (a quick way to verify this is shown after the list)
  3. Hadoop: 3.2.1
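
To double-check that the JDK above is the one the system actually uses, the two commands below can be run in a terminal; /usr/libexec/java_home is macOS's built-in tool for locating installed JDKs (this check is an addition for convenience, not part of the original article):

# print the version of the active JDK
java -version
# print the home directory of the installed 1.8 JDK;
# this is the path that hadoop-env.sh's JAVA_HOME will point to later
/usr/libexec/java_home -v 1.8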

Open SSH

In "System Preferences" -> "Sharing", enable the "Remote Login" service; this switches on the built-in SSH server.
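
If you prefer the terminal, Remote Login can also be toggled from the command line; a minimal sketch (an addition here, and it requires an administrator password):

# turn on the Remote Login (ssh) service
sudo systemsetup -setremotelogin on
# confirm the current state
sudo systemsetup -getremotelogin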

Password-free login

  1. Execute the following command to create an SSH key pair:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

Press Enter through all the prompts; this generates the id_rsa and id_rsa.pub files in the ~/.ssh directory.

  2. Execute the following command to append the public key to the SSH authorized_keys file, so that logging in to localhost over ssh no longer requires a password:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  3. Try an ssh login; this time no password is needed:
Last login: Sun Oct 13 21:44:17 on ttys000
(base) zhaoqindeMBP:~ zhaoqin$ ssh localhost
Last login: Sun Oct 13 21:48:57 2019
(base) zhaoqindeMBP:~ zhaoqin$
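
If ssh still prompts for a password, the usual culprit is overly open permissions on the key files; tightening them as below is standard practice (this check is an addition, not one of the original steps):

# sshd ignores keys whose files are group- or world-accessible
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys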

Download Hadoop

  1. Download Hadoop from: http://hadoop.apache.org/releases.html
  2. Unpack the downloaded hadoop-3.2.1.tar.gz; here it is extracted to ~/software/hadoop-3.2.1/ (the commands are sketched below).
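
For reference, the download and extraction can also be done entirely from the terminal; a minimal sketch, assuming the Apache archive mirror hosts the release at the usual path:

# download the 3.2.1 release (the mirror URL is an example and may differ)
curl -O https://archive.apache.org/dist/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz
# unpack into ~/software/
mkdir -p ~/software
tar -xzf hadoop-3.2.1.tar.gz -C ~/software/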

If you only need Hadoop's standalone (local) mode, nothing more is required at this point, but standalone mode has no HDFS, so the next step is the pseudo-distributed setup.
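
As a quick check of standalone mode, the MapReduce example jar bundled with the distribution can be run directly against the local filesystem; a minimal sketch, assuming the standard 3.2.1 layout (this test is an addition, not part of the original article):

cd ~/software/hadoop-3.2.1
# Hadoop needs JAVA_HOME, either exported here or set in etc/hadoop/hadoop-env.sh
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)
# use the bundled config files as sample input
mkdir input
cp etc/hadoop/*.xml input
# run the grep example shipped with the distribution; results go to ./output
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar grep input output 'dfs[a-z.]+'
cat output/*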

Pseudo-distributed mode setup

Enter the directory hadoop-3.2.1/etc/hadoop and make the following changes:


  1. Open the hadoop-env.sh file and add the JAVA_HOME setting:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_211.jdk/Contents/Home
  2. Open the core-site.xml file and change the configuration node to the following content:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
  3. Open the hdfs-site.xml file and change the configuration node to the following content:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
  4. Open the mapred-site.xml file and change the configuration node to the following content:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
  5. Open the yarn-site.xml file and change the configuration node to the following content:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
</configuration>
  6. Execute the following command in the directory hadoop-3.2.1/bin to initialize HDFS:
./hdfs namenode -format

After the initialization is successful, the following information can be seen:

2019-10-13 22:13:32,468 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2019-10-13 22:13:32,473 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2019-10-13 22:13:32,474 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at zhaoqindeMBP/192.168.50.12
************************************************************/
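
Optionally, before starting the daemons, the Hadoop directories can be added to the shell profile so the bin and sbin commands below work from any directory; a small sketch of what could go in ~/.bash_profile (a convenience addition, not part of the original steps):

# point HADOOP_HOME at the unpacked distribution and put its tools on the PATH
export HADOOP_HOME=~/software/hadoop-3.2.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin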

Start

  1. Enter the directory hadoop-3.2.1/sbin and execute ./start-dfs.sh to start HDFS:
(base) zhaoqindeMBP:sbin zhaoqin$ ./start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [zhaoqindeMBP]
zhaoqindeMBP: Warning: Permanently added 'zhaoqindembp,192.168.50.12' (ECDSA) to the list of known hosts.
2019-10-13 22:28:30,597 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

The warning above does not affect normal use.

  2. In a browser, open localhost:9870 to see the HDFS web page.
  3. Enter the directory hadoop-3.2.1/sbin and execute ./start-yarn.sh to start YARN:
(base) zhaoqindeMBP:sbin zhaoqin$ ./start-yarn.sh
Starting resourcemanager
Starting nodemanagers
  4. In a browser, open localhost:8088 to see the YARN web page.
  5. Execute the jps command to view all Java processes; under normal circumstances the following can be seen (a quick HDFS smoke test is sketched after this list):
(base) zhaoqindeMBP:sbin zhaoqin$ jps
2161 NodeManager
1825 SecondaryNameNode
2065 ResourceManager
1591 NameNode
2234 Jps
1691 DataNode
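
The smoke test mentioned above creates a directory on HDFS and copies a file into it using the standard hdfs dfs commands; it is an extra verification step added here (not from the original article) and assumes it is run from the hadoop-3.2.1/bin directory, with the directory names being only examples:

# create a home directory for the current user on HDFS
./hdfs dfs -mkdir -p /user/$(whoami)
# upload a local file, then list it back
./hdfs dfs -put ../etc/hadoop/core-site.xml /user/$(whoami)/
./hdfs dfs -ls /user/$(whoami)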

At this point, the deployment, configuration, and startup of the Hadoop 3 pseudo-distributed environment are complete.

Stop the Hadoop services

Enter the directory hadoop-3.2.1/sbin and execute ./stop-all.sh to shut down all Hadoop services:

(base) zhaoqindeMBP:sbin zhaoqin$ ./stop-all.sh
WARNING: Stopping all Apache Hadoop daemons as zhaoqin in 10 seconds.
WARNING: Use CTRL-C to abort.
Stopping namenodes on [localhost]
Stopping datanodes
Stopping secondary namenodes [zhaoqindeMBP]
2019-10-13 22:49:00,941 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping nodemanagers
Stopping resourcemanager
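
If you prefer to stop the two layers separately, the same sbin directory also contains per-layer scripts; this alternative is not mentioned above, but the scripts ship with the distribution:

# stop YARN first, then HDFS
./stop-yarn.sh
./stop-dfs.sh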

The above is the entire process of deploying Hadoop 3 on a Mac; I hope it serves as a useful reference.

Welcome to follow the official account: programmer Xinchen
