1. Grant executable permission on the archive:
chmod u+x hadoop-2.7.1.tar.gz
2. Configure the environment variables (as shown in the figure):
vi /etc/profile
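The figure with the profile contents is not included here; a minimal sketch of the additions to /etc/profile, assuming Hadoop is unpacked at /usr/local/hadoop-2.7.1 and the JDK at /usr/local/jdk1.8.0 (both paths are examples, adjust to your machine):

```shell
# Example install locations -- adjust to your environment.
export JAVA_HOME=/usr/local/jdk1.8.0
export HADOOP_HOME=/usr/local/hadoop-2.7.1
# Put the Hadoop bin/ and sbin/ scripts and the JDK on the PATH.
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin
```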
3. Reload the file so the changes take effect:
source /etc/profile
4. Edit the four main configuration files — core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml (plus hadoop-env.sh and yarn-env.sh):
core-site.xml:
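The original contents were given as a screenshot; a minimal single-node sketch is below. The address hdfs://localhost:9000 is an assumption — replace it with your NameNode host and port:

```xml
<configuration>
  <!-- Default filesystem URI; localhost:9000 assumes a single-node setup. -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```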
hdfs-site.xml:
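Again only a screenshot existed; a minimal sketch for a single-node cluster, where a replication factor of 1 is the usual choice:

```xml
<configuration>
  <!-- One copy of each block is enough on a single-node cluster. -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```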
mapred-site.xml:
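The screenshot is missing here too; the standard setting that routes MapReduce jobs through YARN, sketched minimally:

```xml
<configuration>
  <!-- Run MapReduce jobs on the YARN resource manager. -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```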
yarn-site.xml:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
hadoop-env.sh
yarn-env.sh
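The edits to these two scripts were also shown only as figures. The usual change in both hadoop-env.sh and yarn-env.sh is to hard-code JAVA_HOME; the path below is an example, not the original value:

```shell
# In hadoop-env.sh and yarn-env.sh: point JAVA_HOME at your JDK install.
# The path is an example -- substitute your actual JDK location.
export JAVA_HOME=/usr/local/jdk1.8.0
```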
5. Format the NameNode:
bin/hdfs namenode -format
6. Start the HDFS daemons:
sbin/start-dfs.sh
7. Start YARN:
sbin/start-yarn.sh
8. Create two directory levels in one command:
bin/hdfs dfs -mkdir -p /user/derek
9. Upload files to the target directory (the configuration files under etc/hadoop in the Hadoop install directory):
bin/hdfs dfs -put etc/hadoop/*.xml /user/derek
10. Run the grep example job against the uploaded files:
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar grep /user/derek /user/output 'dfs[a-z.]+'
11. Stop HDFS and YARN:
sbin/stop-dfs.sh
sbin/stop-yarn.sh