Note: what people usually mean by "ssh" here is passwordless (key-based) SSH login, a convenience so you do not have to retype the password every time you access another node. It is configured as follows:
. On every machine, run ssh-keygen -t rsa and press Enter three times (i.e. accept the defaults and set an empty passphrase).
. Then on every machine, run cd ~/.ssh and copy id_rsa.pub into authorized_keys,
i.e. run cp id_rsa.pub authorized_keys
. Then append slave0's and slave1's public keys to the master node's authorized_keys,
i.e. run ssh-copy-id -i master on each of the two slave nodes slave0 and slave1
. Then copy the merged file back to slave0 and slave1 (so that every node's authorized_keys holds the keys of all three nodes),
i.e. run on the master node: scp ~/.ssh/authorized_keys slave0:~/.ssh/
scp ~/.ssh/authorized_keys slave1:~/.ssh/
That completes the setup.
Quick test: on master, run ssh slave0; if you land on the slave0 node without being
prompted for a password, passwordless login works.
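The key-distribution steps above can be simulated locally without any real hosts. This is a minimal sketch: the three directories stand in for each node's ~/.ssh, and the key lines are placeholders, not real keys; ssh-copy-id and scp are replaced by plain cat/cp to show which files end up where.

```shell
# Simulate the authorized_keys merge for master, slave0, slave1.
set -e
work=$(mktemp -d)
for n in master slave0 slave1; do
  mkdir -p "$work/$n"
  # Placeholder public key line, standing in for each node's id_rsa.pub.
  echo "ssh-rsa AAAA...key-of-$n $n" > "$work/$n/id_rsa.pub"
  # Step 1: each node seeds authorized_keys with its own key.
  cp "$work/$n/id_rsa.pub" "$work/$n/authorized_keys"
done
# Step 2: both slaves append their keys to the master (what ssh-copy-id does).
cat "$work/slave0/id_rsa.pub" "$work/slave1/id_rsa.pub" >> "$work/master/authorized_keys"
# Step 3: the master's merged file is pushed back to both slaves (scp in the notes).
cp "$work/master/authorized_keys" "$work/slave0/authorized_keys"
cp "$work/master/authorized_keys" "$work/slave1/authorized_keys"
wc -l < "$work/slave1/authorized_keys"   # every node now holds all three keys
```

After the three steps, each node's authorized_keys contains three key lines, which is exactly the state the notes describe.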
hadoop-0.20.2 configuration files:
core-site.xml
<property>
<name>fs.default.name</name>
<value>hdfs://cMaster:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/joe/cloudData</value>
</property>
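These <property> blocks are fragments; in the actual file they sit inside the <configuration> root element. A complete minimal core-site.xml with the same values would look like:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://cMaster:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/joe/cloudData</value>
  </property>
</configuration>
```

The same wrapping applies to the hdfs-site.xml, mapred-site.xml, and yarn-site.xml fragments below.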
hdfs-site.xml
<property>
<name>dfs.name.dir</name>
<value>/home/joe/hdfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/joe/hdfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value></value>
</property>
mapred-site.xml
<property>
<name>mapred.job.tracker</name>
<value>cMaster:</value>
</property>
hadoop-0.20.2 cluster commands:
Upload local files to HDFS: [rio@cMaster hadoop-0.20.2]#bin/hadoop dfs -put /home/rio/input/* /in
Run WordCount on the data: [rio@cMaster hadoop-0.20.2]#bin/hadoop jar hadoop-0.20.2-examples.jar
wordcount /in /out/wc-01
hadoop-2.2.0 configuration files:
core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://cMaster:8020</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/joe/cloudData</value>
</property>
yarn-site.xml
<property>
<name>yarn.resourcemanager.hostname</name>
<value>cMaster</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
mapred-site.xml (note: rename mapred-site.xml.template to mapred-site.xml)
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
hadoop-2.2.0 daemon start commands:
Start (or stop) all services: [rio@cMaster hadoop-2.2.0]#sbin/start-all.sh
[rio@cMaster hadoop-2.2.0]#sbin/stop-all.sh
Format the NameNode: [rio@cMaster hadoop-2.2.0]#bin/hdfs namenode -format (note: format only once)
Start the namenode on the master node: [rio@cMaster hadoop-2.2.0]#sbin/hadoop-daemon.sh start namenode
Start the resourcemanager on the master node: [rio@cMaster hadoop-2.2.0]#sbin/yarn-daemon.sh start resourcemanager
Start the datanode on each slave node: [rio@cMaster hadoop-2.2.0]#sbin/hadoop-daemon.sh start datanode
Start the nodemanager on each slave node: [rio@cMaster hadoop-2.2.0]#sbin/yarn-daemon.sh start nodemanager
Start the job history server (for log/history queries): [rio@cMaster hadoop-2.2.0]#sbin/mr-jobhistory-daemon.sh start historyserver
Check whether the daemons are running: [rio@cMaster hadoop-2.2.0]#/usr/java/jdk1.7.0_71/bin/jps
hadoop-2.2.0 cluster commands:
Create a directory: [rio@cMaster hadoop-2.2.0]#bin/hdfs dfs -mkdir /in
Delete files and directories: [rio@cMaster hadoop-2.2.0]#bin/hdfs dfs -rm -r /out/input
Upload local files to HDFS: [rio@cMaster hadoop-2.2.0]#bin/hdfs dfs -put /home/rio/input/* /in
View files in HDFS: [rio@cMaster hadoop-2.2.0]#bin/hdfs dfs -cat /in/*
[rio@cMaster hadoop-2.2.0]#bin/hdfs dfs -cat /out/wc-01/*
Run WordCount on the data: [rio@cMaster hadoop-2.2.0]#bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar
wordcount /in /out/wc-01
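For intuition about what the WordCount job writes to /out/wc-01 (one line per word with its count), the same result can be reproduced locally for a small input with standard shell tools; input.txt here is a hypothetical local file standing in for the files uploaded to /in.

```shell
# Local equivalent of WordCount's word-frequency output for a tiny input.
printf 'hello hadoop\nhello world\n' > input.txt
# Split on spaces into one word per line, then count duplicates.
tr -s ' ' '\n' < input.txt | sort | uniq -c | sort -rn
# "hello" appears twice; "hadoop" and "world" once each.
```

This is only an illustration of the computation; the real job runs the map and reduce phases in parallel across the datanodes.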