The previous post covered the components and mechanisms of HDFS. This one walks through how to install, configure, and start HDFS.
The setup described here is pseudo-distributed (everything runs on a single server).
1. Download and extract Hadoop 1.2.1
Download:
wget http://mirrors.ustc.edu.cn/apache/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
Extract:
tar -zxvf hadoop-1.2.1.tar.gz
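Optionally, you can export HADOOP_HOME and put its bin/ directory on the PATH so the hadoop commands can be run from anywhere. This is just a convenience sketch; it assumes the tarball was extracted into the current directory, and the rest of this post keeps using full paths:

```shell
# Assumes hadoop-1.2.1 was extracted into the current directory;
# adjust the path if you unpacked it somewhere else.
export HADOOP_HOME="$PWD/hadoop-1.2.1"
export PATH="$PATH:$HADOOP_HOME/bin"
echo "HADOOP_HOME=$HADOOP_HOME"
</imports>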
2. Configure Hadoop
Open the environment file:
vi hadoop-1.2.1/conf/hadoop-env.sh
and point JAVA_HOME at your JDK:
# The java implementation to use. Required.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
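A quick way to confirm that the JAVA_HOME you just configured actually points at a JDK — the path below mirrors the hadoop-env.sh line above; substitute your own installation path:

```shell
# Same path as in hadoop-env.sh above; change it to match your machine.
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
if [ -x "$JAVA_HOME/bin/java" ]; then
  "$JAVA_HOME/bin/java" -version
else
  echo "JAVA_HOME looks wrong: $JAVA_HOME/bin/java not found"
fi
```

If the path is wrong, the Hadoop daemons will refuse to start later, so it is worth catching here.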
Edit core-site.xml:
vi hadoop-1.2.1/conf/core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
Edit hdfs-site.xml:
vi hadoop-1.2.1/conf/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
Edit mapred-site.xml:
vi hadoop-1.2.1/conf/mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
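Before moving on, it can be worth checking that the three edited files are still well-formed XML, since a stray tag makes the daemons fail at startup. A small sketch using python3's bundled XML parser (assuming python3 is installed; files that don't exist yet are skipped):

```shell
# Parse each *-site.xml; count how many were checked and how many failed.
checked=0; broken=0
for f in hadoop-1.2.1/conf/core-site.xml \
         hadoop-1.2.1/conf/hdfs-site.xml \
         hadoop-1.2.1/conf/mapred-site.xml; do
  [ -f "$f" ] || continue   # skip files that don't exist yet
  checked=$((checked+1))
  python3 -c 'import sys, xml.dom.minidom as m; m.parse(sys.argv[1])' "$f" \
    2>/dev/null || broken=$((broken+1))
done
echo "checked=$checked broken=$broken"
```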
Edit masters (in Hadoop 1.x this file lists the host that runs the SecondaryNameNode):
vi hadoop-1.2.1/conf/masters
localhost
3. Passwordless SSH login
Generate the key pair:
ssh-keygen -t rsa -P ""
Append the public key to authorized_keys:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Test it:
ssh localhost
If no password prompt appears, passwordless SSH is configured correctly.
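If ssh localhost still prompts for a password, the most common cause is file permissions: sshd refuses to use authorized_keys when ~/.ssh or the file itself is group- or world-writable. A fix sketch (anything missing is skipped):

```shell
# Tighten permissions so sshd will accept the key.
if [ -d ~/.ssh ]; then chmod 700 ~/.ssh; fi
if [ -f ~/.ssh/authorized_keys ]; then chmod 600 ~/.ssh/authorized_keys; fi
```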
4. Start HDFS
Before the very first start, format the NameNode (this initializes its on-disk metadata; do not rerun it later, or the existing HDFS metadata is wiped):
hadoop-1.2.1/bin/hadoop namenode -format
Then start the daemons:
hadoop-1.2.1/bin/start-dfs.sh
Check whether the startup succeeded:
jps
If NameNode, DataNode, and SecondaryNameNode all appear in the output, HDFS is running.
Alternatively, visit the server's public IP address on port 50070 in a browser; if the NameNode status page loads, HDFS is up. (If the server sits behind a firewall or security group, port 50070 must be open.)
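As a final sanity check beyond jps and the web UI, a short smoke test against the running cluster: create a directory, upload a local file, and list it back out of HDFS. The /test path here is just an example name:

```shell
# Requires the daemons started above to be running.
hadoop-1.2.1/bin/hadoop fs -mkdir /test
hadoop-1.2.1/bin/hadoop fs -put /etc/hosts /test/
hadoop-1.2.1/bin/hadoop fs -ls /test
```

If the listing shows the uploaded file, reads and writes through the NameNode and DataNode are both working.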