Hadoop Ecosystem

Apache Hadoop

Hadoop standalone mode (Docker version)

Install Hadoop
- docker run -i -t -p 50070:50070 -p 9000: ...

Access Hadoop
- NodeManager: http://192.168.31.11:8042/node
- YARN application management: http://192.168.31.11:8088/cluster
- NameNode web UI: http://192.168.31.11:50070/
- DataNode: http://192.168.31.11:50075/
- SecondaryNameNode: http://192.168.31.11:50090/

Enter the container
- docker ps shows the container info; container ID: c825d4428a07
- Attach to it: docker attach c825d4428a07
- Or open a shell in it: docker exec -it c825d4428a07 /bin/bash

Exit the container
- Ctrl+C exits and also stops the container
- Ctrl+P then Ctrl+Q detaches without stopping the container

Restart the container
- Typing exit inside the container stops it
- docker ps -a shows the container still exists
- docker start -ia c825d4428a07 starts it again and attaches

Test an application
- docker exec -it c825d4428a07 /bin/bash
- cd /usr/local/hadoop-2.6.0/share/hadoop ...
- ../../../bin/hadoop jar ./hadoop-mapredu ...
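The test commands above are truncated in the original notes. The following is a minimal sketch of what a full run might look like, assuming the examples jar sits under share/hadoop/mapreduce inside the container; the jar version and the HDFS input/output paths are assumptions, not copied from the original.

```sh
# Inside the container (paths and jar version are assumptions).
cd /usr/local/hadoop-2.6.0/share/hadoop/mapreduce

# Create some input in HDFS (the /user/root/input path is hypothetical).
../../../bin/hdfs dfs -mkdir -p /user/root/input
../../../bin/hdfs dfs -put ../../../etc/hadoop/*.xml /user/root/input

# Run the bundled wordcount example and print part of the result.
../../../bin/hadoop jar ./hadoop-mapreduce-examples-2.6.0.jar wordcount /user/root/input /user/root/output
../../../bin/hdfs dfs -cat /user/root/output/part-r-00000 | head
```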
Hadoop cluster mode (Docker version)

Install Hadoop
- docker run -it -p 8088:8088 -p 9820:9820 ...
- docker run -it -p 18088:8088 -p 19820:98 ...
- docker run -it -p 28088:8088 -p 29820:98 ...
- Ctrl+P then Ctrl+Q detaches without stopping the containers

Configure the cluster

What has to be configured
- All of the files to configure live under /opt/hadoop/hadoop-3.1.0/etc/hadoop ...

Set the hostnames
- Master: hostnamectl set-hostname master
- Slaves: hostnamectl set-hostname slave1 and hostnamectl set-hostname slave2

Configure /etc/hosts
- vi /etc/hosts
  172.17.0.2      master
  172.17.0.3      s ...
- Copy the file to the master and the slaves:
  scp /etc/hosts master:/etc/hosts
  scp /etc/hosts slave1:/etc/hosts
  scp /etc/hosts slave2:/etc/hosts

Configure passwordless SSH login
- Generate a key on the master: ssh-keygen
- Copy the key to the slaves: ssh-copy-id root@slave1 and ssh-copy-id root@slave2

Configure the system environment variables
- vim ~/.bashrc
  JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjd ...
- Apply the changes: source ~/.bashrc
- Verify with hadoop version; if a version is printed, the configuration is correct
- Copy to the slaves:
  scp ~/.bashrc slave1:~/.bashrc
  scp ~/.bashrc slave2:~/.bashrc
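A sketch of what the ~/.bashrc additions might look like so that hadoop version works on every node. The exact JDK directory and the Hadoop install path are assumptions (the scp commands later in these notes use /usr/local/hadoop).

```sh
# ~/.bashrc additions (sketch; adjust both paths to the actual install).
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk   # assumed JDK directory
export HADOOP_HOME=/usr/local/hadoop               # assumed Hadoop install path
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```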
Configure the workers file
- vim $HADOOP_HOME/etc/hadoop/workers
- Delete the original contents and add:
  slave1
  slave2

Create the dfs directories
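A short sketch of the two steps above, run on the master and assuming $HADOOP_HOME is set as in the ~/.bashrc sketch; the dfs/name and dfs/data directories match the paths that are wiped later before a re-format.

```sh
# Rewrite the workers file with the two slave nodes.
cat > $HADOOP_HOME/etc/hadoop/workers <<'EOF'
slave1
slave2
EOF

# Create the dfs directories referenced elsewhere in these notes.
mkdir -p $HADOOP_HOME/dfs/name $HADOOP_HOME/dfs/data
```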
Configure hadoop-env.sh
- vi etc/hadoop/hadoop-env.sh
- Add: export JAVA_HOME=/usr/lib/jvm/java-1. ...

Configure yarn-env.sh
- vi etc/hadoop/yarn-env.sh
- Add: export JAVA_HOME=/usr/lib/jvm/java-1. ...
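The export lines above are truncated; this is a sketch of appending them to both env scripts, with the JDK path being the same assumption as before.

```sh
# Append JAVA_HOME to both env scripts (the JDK path is an assumed example).
echo 'export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk' >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh
echo 'export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk' >> $HADOOP_HOME/etc/hadoop/yarn-env.sh
```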
Configure core-site.xml
- vi etc/hadoop/core-site.xml

Configure hdfs-site.xml
- vi etc/hadoop/hdfs-site.xml

Configure mapred-site.xml
- vi etc/hadoop/mapred-site.xml
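The original notes do not show the contents of these three files. Below is a minimal sketch; every value is an assumption chosen to fit the rest of these notes (master as the NameNode host, port 9820 as mapped by the docker run commands, the dfs/name and dfs/data directories under an assumed /usr/local/hadoop install, and two DataNodes).

```sh
# Minimal site files (sketch; all values are assumptions, adjust as needed).
cd $HADOOP_HOME/etc/hadoop

cat > core-site.xml <<'EOF'
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9820</value>
    </property>
</configuration>
EOF

cat > hdfs-site.xml <<'EOF'
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>
EOF

cat > mapred-site.xml <<'EOF'
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
EOF
```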
Configure yarn-site.xml
- vi etc/hadoop/yarn-site.xml
- Properties to set (a full sketch follows below):
  - yarn.resourcemanager.hostname: the host that runs the YARN ResourceManager
  - yarn.nodemanager.aux-services: how the reducers fetch data
  - yarn.nodemanager.vmem-check-enabled: ignore the virtual-memory check
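A sketch of yarn-site.xml assembled from the three properties listed above; the values (ResourceManager on master, mapreduce_shuffle as the aux service, virtual-memory check disabled) are the usual choices implied by the descriptions, not copied from the original.

```sh
# yarn-site.xml sketch built from the three properties above.
cat > $HADOOP_HOME/etc/hadoop/yarn-site.xml <<'EOF'
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
</configuration>
EOF
```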
Copy to the other nodes
- scp -rp /usr/local/hadoop slave1:/usr/lo ...
- scp -rp /usr/local/hadoop slave2:/usr/lo ...

Format the NameNode
- Doing this on the master is enough: hadoop namenode -format
- Note: to reformat the NameNode, first delete all of the old NameNode and DataNode files:
  rm -rf $HADOOP_HOME/dfs/data/*
  rm -rf $HADOOP_HOME/dfs/name/*

Start and stop the services
- start-all.sh
- stop-all.sh

Verify the services
- Run jps on the master
- Run jps on the slaves

Test an application
- hadoop jar hadoop-mapreduce-examples-3.2 ...
- hadoop jar hadoop-mapreduce-examples-3.2 ...

Hadoop cluster mode (virtual machine version)

Apache Flume™

Official site
- https://flume.apache.org/

Download
- wget https://downloads.apache.org/flume/ ...

Reference
- https://blog.csdn.net/weixin_38231448/ar ...

Install
- Install JDK 1.8+ and configure the environment variables
- Install Flume: [root@CentOSA ~]# tar -zxf apache-flume ...

Configuration
- Agent configuration (a minimal sketch appears at the end of these notes)

Apache Ambari

Official site
- https://ambari.apache.org/
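For the Flume "Agent configuration" step above, here is a minimal agent sketch in the style of the official quick start; the file name example.conf, the agent name a1, and the netcat source, memory channel, and logger sink are all assumptions rather than the configuration used in the original notes.

```sh
# Run from inside the unpacked apache-flume-* directory.
cat > conf/example.conf <<'EOF'
# One source, one channel, one sink.
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

a1.channels.c1.type = memory

a1.sinks.k1.type = logger

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
EOF

# Start the agent and watch incoming events logged to the console.
bin/flume-ng agent --conf conf --conf-file conf/example.conf --name a1 \
    -Dflume.root.logger=INFO,console
```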