Installing and Configuring CDH

Install and configure CDH in pseudo-distributed mode. This assumes Java and SSH have already been set up.

1.
Download hadoop-2.6.0-cdh5.9.0, copy it to /opt/, and extract it (a command sketch follows);
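
A minimal sketch of this step, assuming the tarball was downloaded into the current working directory; the exact archive filename is an assumption and may differ:

# Copy the CDH tarball to /opt/ and extract it there
# (archive name assumed; adjust to the file you actually downloaded)
cp hadoop-2.6.0-cdh5.9.0.tar.gz /opt/
cd /opt
tar -zxvf hadoop-2.6.0-cdh5.9.0.tar.gz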

2.
Go to /opt/hadoop-2.6.0-cdh5.9.0/etc/hadoop/ and add the following to hadoop-env.sh:

export JAVA_HOME=/opt/jdk1.8.0_121
export HADOOP_HOME=/opt/hadoop-2.6.0-cdh5.9.0
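
As a quick sanity check that JAVA_HOME points at a working JDK (the path is taken from the export above):

# Should print the JDK version; if it fails, JAVA_HOME is wrong
/opt/jdk1.8.0_121/bin/java -version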

Next, edit the configuration file core-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://192.168.1.104:9000</value>
    </property>
</configuration>

Note that hadoop.tmp.dir should be set explicitly rather than left at its default: the default location is under /tmp/, which is wiped when the machine reboots, leaving Hadoop unable to run until the NameNode is reformatted.
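
To avoid permission problems, this directory can be created ahead of time. A sketch under the assumption that Hadoop runs as a user named hadoop (adjust the owner to the account you actually use):

# Pre-create the hadoop.tmp.dir location from core-site.xml above.
# The hadoop:hadoop owner is an assumption; use the account that runs Hadoop.
sudo mkdir -p /home/hadoop/tmp
sudo chown -R hadoop:hadoop /home/hadoop/tmp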

hdfs-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/opt/hdfs/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/opt/hdfs/data</value>
    </property>
    <property>
        <name>dfs.tmp.dir</name>
        <value>/opt/hdfs/tmp</value>
    </property>
</configuration>
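
The NameNode and DataNode directories configured above can likewise be created in advance. A sketch under the same assumption that Hadoop runs as the hadoop user:

# Pre-create the dfs.name.dir, dfs.data.dir, and dfs.tmp.dir locations from hdfs-site.xml.
# The hadoop:hadoop owner is an assumption; use the account that runs Hadoop.
sudo mkdir -p /opt/hdfs/name /opt/hdfs/data /opt/hdfs/tmp
sudo chown -R hadoop:hadoop /opt/hdfs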

mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapred.job.tracker</name>
        <value>hdfs://192.168.1.104:9001</value>
    </property>
</configuration>
3.
Append the following to /etc/profile:

export HADOOP_HOME=/opt/hadoop-2.6.0-cdh5.9.0
export PATH=$PATH:$HADOOP_HOME/bin

Then run:

source /etc/profile

to make the settings take effect.
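
To confirm that the PATH change works and that the configuration files are being picked up, two quick checks can be run (hdfs getconf prints the effective value of a configuration key):

# Should print the Hadoop/CDH version, proving the binaries are on the PATH
hadoop version
# Should print 1, the value set in hdfs-site.xml above
hdfs getconf -confKey dfs.replication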

4.
Run the command:

hadoop namenode -format

to format the NameNode. If the output reports success, the NameNode has been formatted correctly.
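
After a successful format, the NameNode metadata should appear under the dfs.name.dir configured in hdfs-site.xml, which can be verified with a simple listing:

# The format step creates the metadata directory (VERSION, fsimage, etc.)
ls /opt/hdfs/name/current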

5.
Go to /opt/hadoop-2.6.0-cdh5.9.0/sbin and run:

./start-all.sh

to start Hadoop. To check whether the startup succeeded, run:

jps

If the output includes the Hadoop daemon processes (NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager, in addition to Jps itself), the startup succeeded.

You can also open http://localhost:50070 in a browser to check the NameNode web UI and confirm that HDFS started successfully.
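
As a final smoke test, a basic HDFS operation confirms the file system is usable; the directory name below is only an example:

# Create a directory in HDFS and list the root to verify read/write access
hdfs dfs -mkdir -p /user/hadoop
hdfs dfs -ls /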
