- Java environment installation
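A minimal sketch of the JDK install, matching the JAVA_HOME used later in hadoop-env.sh (the archive name jdk-8u101-linux-x64.tar.gz is an assumption; use the tarball you actually downloaded):
$ tar -zxvf jdk-8u101-linux-x64.tar.gz -C /home/cloud/
$ vi /etc/profile
export JAVA_HOME=/home/cloud/jdk1.8.0_101
export PATH=$JAVA_HOME/bin:$PATH
$ source /etc/profile
$ java -version //verify the installation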
- With three hosts available, set a static IP on each CentOS machine, change each hostname (see the sketch below), and edit the
hosts file
vi /etc/hosts
192.168.31.xxx master
192.168.31.xxx slave1
192.168.31.xxx slave2
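On CentOS 7, for example, the hostname can be set persistently with hostnamectl (a sketch; run the matching command on each machine, or edit /etc/sysconfig/network instead on CentOS 6):
# hostnamectl set-hostname master //on the master machine
# hostnamectl set-hostname slave1 //on the first slave
# hostnamectl set-hostname slave2 //on the second slave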
- Passwordless SSH login setup for the cluster
--- Run on master, slave1, and slave2
$ ssh-keygen -t rsa //press Enter through every prompt
--- On the master server, run the following to distribute the ~/.ssh/id_rsa.pub
public key as the authorized credential to ~/.ssh/ on master, slave1, and
slave2
# ssh-copy-id -i ~/.ssh/id_rsa.pub master
# ssh-copy-id -i ~/.ssh/id_rsa.pub slave1
# ssh-copy-id -i ~/.ssh/id_rsa.pub slave2
After the setup, test with # ssh localhost; the first login shows a prompt like:
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is a2:44:5f:79:00:c9:17:3b:b4:b5:47:cf:66:be:c4:0d.
Are you sure you want to continue connecting (yes/no)?
Type yes once and the prompt will not appear again. (This step is required.)
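Alternatively, the host keys can be pre-accepted non-interactively with ssh-keyscan (a sketch; run on each node so no first-login "yes" is ever needed):
$ ssh-keyscan master slave1 slave2 localhost >> ~/.ssh/known_hosts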
--- From master, logging in to the other Linux servers now succeeds without a password
//no password prompt
# ssh slave1
or
# ssh slave2
- Hadoop fully distributed cluster configuration and startup
- Step 1: Install Hadoop
Upload the Hadoop archive hadoop-2.6.4.tar.gz to the server (any path you like)
$ tar -zxvf hadoop-2.6.4.tar.gz -C /home/cloud/
- Step 2: Configure Hadoop
$ cd /home/cloud/hadoop-2.6.4/etc/hadoop
Setting up hadoop-2.6.4 requires editing several configuration files
File 0
$ vi /home/cloud/hadoop-2.6.4/etc/hadoop/slaves
This is where the slave-node hostnames are set (this file lists the slave nodes; hostnames only, one per line)
master #for now, let master act as both namenode and datanode
slave1 #datanode1
slave2 #datanode2
File 1: vi /home/cloud/hadoop-2.6.4/etc/hadoop/hadoop-env.sh
#line 27 (use the JDK version you installed; the install path is your choice)
export JAVA_HOME=/home/cloud/jdk1.8.0_101
File 2: vi /home/cloud/hadoop-2.6.4/etc/hadoop/core-site.xml
(add the following content)
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/cloud/hadoop-2.6.4/temp</value>
<description>A base for other temporary directories.</description>
</property>
</configuration>
File 3: vi /home/cloud/hadoop-2.6.4/etc/hadoop/hdfs-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/cloud/hadoop-2.6.4/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/cloud/hadoop-2.6.4/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
Create the directories /home/cloud/hadoop-2.6.4/dfs/name and /home/cloud/hadoop-2.6.4/dfs/data, as shown below
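A minimal sketch, which also creates the temp directory referenced by hadoop.tmp.dir in core-site.xml:
$ mkdir -p /home/cloud/hadoop-2.6.4/dfs/name
$ mkdir -p /home/cloud/hadoop-2.6.4/dfs/data
$ mkdir -p /home/cloud/hadoop-2.6.4/temp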
File 4: mapred-site.xml (obtained by renaming the template)
# rename mapred-site.xml.template (under /home/cloud/hadoop-2.6.4/etc/hadoop/)
$ mv mapred-site.xml.template mapred-site.xml
$ vi mapred-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:50030</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>4096</value>
</property>
</configuration>
File 5: vi /home/cloud/hadoop-2.6.4/etc/hadoop/yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
</configuration>
- Step 3: Add Hadoop to the environment variables
$ vim /etc/profile
#hadoop
export HADOOP_HOME=/home/cloud/hadoop-2.6.4
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
$ source /etc/profile
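To confirm the PATH change took effect, the version can be checked from any directory:
$ hadoop version //should report Hadoop 2.6.4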
- Step 4: Copy the configured Hadoop package to the other Linux hosts
# scp -r hadoop-2.6.4 slave1:/home/cloud/
# scp -r hadoop-2.6.4 slave2:/home/cloud/
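Note that the slaves also need the JDK and the environment variables from the earlier steps; a minimal sketch, assuming identical paths on every node:
# scp -r /home/cloud/jdk1.8.0_101 slave1:/home/cloud/
# scp -r /home/cloud/jdk1.8.0_101 slave2:/home/cloud/
# scp /etc/profile slave1:/etc/
# scp /etc/profile slave2:/etc/
(then run source /etc/profile on each slave)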
- Step 5: Format the namenode (this initializes the namenode)
The first time Hadoop is started, the master node must be formatted
# hdfs namenode -format //the older "hadoop namenode -format" still works but is deprecated
Do not format the namenode on later startups, or HDFS data will be lost
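If a reformat ever becomes unavoidable, clear the old storage directories on every node first (a sketch; this destroys all HDFS data):
$ rm -rf /home/cloud/hadoop-2.6.4/dfs/name/* /home/cloud/hadoop-2.6.4/dfs/data/*
$ rm -rf /home/cloud/hadoop-2.6.4/temp/*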
- Step 6: Start Hadoop
$ cd /home/cloud/hadoop-2.6.4/sbin/ #if the Hadoop environment variables are set, the next line can be run from any directory
$ start-all.sh
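start-all.sh is deprecated in Hadoop 2.x; an equivalent form using the current scripts:
$ start-dfs.sh //starts the NameNode, SecondaryNameNode, and DataNodes
$ start-yarn.sh //starts the ResourceManager and NodeManagers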
The Hadoop setup ends here; next, verify that it succeeded
Verify with the jps command
If output like the following appears, startup succeeded
[root@xxxxx ]# jps
6417 DataNode
7207 NodeManager
6920 ResourceManager
7258 Jps
6235 NameNode
6700 SecondaryNameNode
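The slave nodes can be checked the same way over SSH (a sketch; each slave should show a DataNode and a NodeManager; if jps is not found in the non-interactive shell, log in and run it directly):
$ ssh slave1 jps
$ ssh slave2 jps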
Open a browser and go to master:50070 for the HDFS details (the YARN web UI configured above is at master:8088)
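As a final smoke test, report the cluster state and round-trip a file through HDFS (a sketch; the paths and file name are arbitrary):
$ hdfs dfsadmin -report //should list the live DataNodes
$ echo hello > /tmp/hello.txt
$ hdfs dfs -mkdir -p /test
$ hdfs dfs -put /tmp/hello.txt /test/
$ hdfs dfs -cat /test/hello.txt //prints "hello" if HDFS is healthy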
- Step 7: Stop Hadoop
$ cd /home/cloud/hadoop-2.6.4/sbin/ #if the Hadoop environment variables are set, the next line can be run from any directory
$ stop-all.sh