Official document:
https://hadoop.apache.org/docs/r2.9.2/hadoop-project-dist/hadoop-common/SingleCluster.html
Configure passwordless SSH login for communication between the NameNode and the DataNode:
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Verify ssh: you should now be able to log in without entering a password. After logging in, run exit to leave the session.
ssh localhost
exit
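If ssh still prompts for a password, overly open permissions on ~/.ssh are the usual cause, since sshd ignores a group- or world-writable authorized_keys; tightening them is a standard fix (not a step from the guide above):
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys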
etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.3.127:8020</value>
    </property>
</configuration>
etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <!-- HDFS namespace metadata is stored on the NameNode -->
    <property>
        <name>dfs.name.dir</name>
        <value>file:/home/hdfs/name</value>
    </property>
    <!-- Physical storage location of data blocks on the DataNode -->
    <property>
        <name>dfs.data.dir</name>
        <value>file:/home/hdfs/data</value>
    </property>
</configuration>
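Once both files are in place, the linked official guide continues by formatting the filesystem and starting the HDFS daemons:
bin/hdfs namenode -format
sbin/start-dfs.sh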
Open the required ports:
firewall-cmd --add-port=8020/tcp --permanent
firewall-cmd --add-port=50010/tcp --permanent
firewall-cmd --add-port=50070/tcp --permanent
firewall-cmd --reload
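To confirm the rules survived the reload, firewalld can list the currently open ports (a quick sanity check, not part of the original steps):
firewall-cmd --list-ports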
1. java.lang.IllegalArgumentException: URI has an authority component
This error is reported when executing `bin/hdfs namenode -format`. Check that the paths in hdfs-site.xml are correct: the values must use a single slash after the scheme (file:/home/...); a double slash (file://home/...) makes "home" the URI's authority component and triggers exactly this exception.
<property>
    <name>dfs.name.dir</name>
    <value>file:/home/hdfs/name</value> <!-- HDFS namespace metadata, stored on the NameNode -->
</property>
<property>
    <name>dfs.data.dir</name>
    <value>file:/home/hdfs/data</value> <!-- physical storage location of data blocks on the DataNode -->
</property>
2. java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset.
Unzip hadoop-2.9.2.tar.gz to D:\app\ on the Windows client, then point hadoop.home.dir at it:
System.setProperty("hadoop.home.dir", "D:\\app\\hadoop-2.9.2");
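The property has to be set before the first Hadoop filesystem call, because the check runs in a static initializer. A minimal sketch of the ordering (path and address taken from this article; the class name is illustrative):
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HdfsHomeDirFix {
    public static void main(String[] args) throws Exception {
        // Must run before any FileSystem call, or Hadoop probes HADOOP_HOME and fails
        System.setProperty("hadoop.home.dir", "D:\\app\\hadoop-2.9.2");
        FileSystem fs = FileSystem.get(new URI("hdfs://192.168.3.127:8020"), new Configuration());
        fs.close();
    }
}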
3. java.io.FileNotFoundException: Could not locate Hadoop executable: D:\app\hadoop-2.9.2\bin\winutils.exe
Download winutils.exe and place it under {HADOOP_HOME}\bin\.
4. Permission denied: user=xxx, access=WRITE, inode="/":root:supergroup:drwxr-xr-x
/**
 * To resolve the unauthorized-access error, pass the Linux user name
 * of the remote Hadoop deployment when obtaining the FileSystem.
 */
private static final String USER = "root";
fileSystem = FileSystem.get(new URI(HDFS_PATH), configuration, USER);
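For context, a minimal self-contained client applying this fix (HDFS_PATH is assumed to be the fs.defaultFS address from core-site.xml above; the class name is illustrative):
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteAsRoot {
    private static final String HDFS_PATH = "hdfs://192.168.3.127:8020";
    // Linux user that owns the HDFS tree on the remote cluster
    private static final String USER = "root";

    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        FileSystem fileSystem = FileSystem.get(new URI(HDFS_PATH), configuration, USER);
        // Succeeds now that requests are issued as "root" rather than the local Windows user
        fileSystem.mkdirs(new Path("/hello-hadoop"));
        fileSystem.close();
    }
}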
5. java.net.ConnectException: Connection timed out: no further information and org.apache.hadoop.ipc.RemoteException: File /hello-hadoop.md could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
# Open the DataNode port: the client writes blocks directly to the DataNode on 50010, so opening 8020 alone is not enough
firewall-cmd --add-port=50010/tcp --permanent
firewall-cmd --reload
6. No FileSystem for scheme "hdfs"
The hdfs:// scheme is implemented by DistributedFileSystem, which ships in hadoop-hdfs, so make sure the project declares all three Hadoop artifacts:
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>${org.apache.hadoop.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>${org.apache.hadoop.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${org.apache.hadoop.version}</version>
</dependency>
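If the error still appears when running from a shaded (fat) jar, the usual cause is the META-INF/services FileSystem registrations of hadoop-common and hadoop-hdfs overwriting each other during the merge; a common workaround, not from the original article, is to name the implementation explicitly on the Configuration object before use:
configuration.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());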
If you have any questions, please leave a comment to discuss.
Technical Exchange Group: 282575808
Disclaimer: original article; reprinting without permission is prohibited!