Summary
Compiled with considerable effort; I built this setup twice and verified that it works.
I used a 2-core/4 GB Alibaba Cloud server running CentOS 7.7. I hit quite a few pitfalls along the way, and I hope this article saves you from them.
Installing Docker on CentOS 7.7
Check the kernel version (log in as root)
uname -a
Update the yum packages to their latest versions
yum update
Install the required packages
yum install -y yum-utils device-mapper-persistent-data lvm2
Configure the yum repository
sudo yum-config-manager --add-repo /linux/centos/docker-ce.repo
Create a working directory
cd /mnt
mkdir docker
cd docker
List every Docker version available in the repositories and pick a specific one to install
yum list docker-ce --showduplicates | sort -r
Install Docker with yum install docker-ce-<version>, for example:
yum install docker-ce-18.06.3.ce
Start Docker and enable it at boot
systemctl start docker
systemctl enable docker
Verify the installation (output showing both a Client and a Server section means Docker is installed and running)
docker version
Installing Docker on Ubuntu 18.04
Create a working directory
cd /mnt
mkdir docker
cd docker
Download the packages
wget /docker-ce/linux/ubuntu/dists/bionic/pool/stable/amd64/containerd.io_1.2.6-3_amd64.deb
wget /docker-ce/linux/ubuntu/dists/bionic/pool/stable/amd64/docker-ce-cli_19.03.9~3-0~ubuntu-bionic_amd64.deb
wget /docker-ce/linux/ubuntu/dists/bionic/pool/stable/amd64/docker-ce_19.03.9~3-0~ubuntu-bionic_amd64.deb
Install
sudo dpkg -i *.deb
Start the service
service docker start
Building the server and Hadoop images
Pull the CentOS image
docker pull centos
List the local images
docker images
Install SSH
cd /mnt/docker
mkdir ssh
cd ssh
vi Dockerfile
Content:
FROM centos
MAINTAINER dys
RUN yum install -y openssh-server sudo
RUN sed -i 's/UsePAM yes/UsePAM no/g' /etc/ssh/sshd_config
RUN yum install -y openssh-clients
RUN echo "root:1234" | chpasswd
RUN echo "root ALL=(ALL) ALL" >> /etc/sudoers
RUN ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key
RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
RUN mkdir /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Save and exit
Build the image and name it centos7-ssh
docker build -t="centos7-ssh" .
Start three containers from the centos7-ssh image
docker run -d --name=centos7.ssh centos7-ssh
docker run -d --name=centos7.ssh2 centos7-ssh
docker run -d --name=centos7.ssh3 centos7-ssh
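The three near-identical docker run commands can also be generated in a loop. A small sketch (the script path /tmp/start-ssh-containers.sh is just an example): it writes the commands to a file for review instead of running them directly.

```shell
# Generate the three container-start commands into a reviewable script.
# Run "sh /tmp/start-ssh-containers.sh" afterwards to actually start them.
for i in "" 2 3; do
    echo "docker run -d --name=centos7.ssh$i centos7-ssh"
done > /tmp/start-ssh-containers.sh
cat /tmp/start-ssh-containers.sh
```

This dry-run pattern makes it easy to check container names before anything is created.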
Building the Hadoop image
Create a directory
cd /mnt/docker
mkdir hadoop
cd hadoop
Download the Hadoop and JDK archives
# Download Hadoop (used when building the image)
wget https://mirrors./apache/hadoop/common/hadoop-2.9.2/hadoop-2.9.2.tar.gz
# Download the JDK (used when building the image)
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%%2F; oraclelicense=accept-securebackup-cookie" "/otn-pub/java/jdk/8u141-b15/336fa29ff2bb4ef291e347e091f7f4a7/jdk-8u141-linux-x64.tar.gz"
Edit the Dockerfile
vi Dockerfile
Content:
FROM centos7-ssh
ADD jdk-8u141-linux-x64.tar.gz /usr/local/
RUN mv /usr/local/jdk1.8.0_141 /usr/local/jdk1.8
ENV JAVA_HOME /usr/local/jdk1.8
ENV PATH $JAVA_HOME/bin:$PATH
ADD hadoop-2.9.2.tar.gz /usr/local
RUN mv /usr/local/hadoop-2.9.2 /usr/local/hadoop
ENV HADOOP_HOME /usr/local/hadoop
ENV PATH $HADOOP_HOME/bin:$PATH
RUN yum install -y which sudo
Save and exit
Run the build
docker build -t="hadoop" .
Run the containers
docker run --name hadoop0 --hostname hadoop0 -d -P -p 50070:50070 -p 8088:8088 hadoop
docker run --name hadoop1 --hostname hadoop1 -d -P hadoop
docker run --name hadoop2 --hostname hadoop2 -d -P hadoop
Setting up the Hadoop cluster
Configure the ll alias
vim ~/.bashrc
Add the following line:
alias ll='ls -l'
Save and exit, then reload it
source ~/.bashrc
Install vim and net-tools
yum install net-tools
yum install vim
Edit /etc/hosts on every node
Use ifconfig to look up each container's IP and substitute your own addresses
172.18.0.5 hadoop0
172.18.0.6 hadoop1
172.18.0.7 hadoop2
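As a sketch, the three mappings can be appended in one heredoc. Here a demo file path stands in for /etc/hosts, and the IPs are the example values above; on a real node, point HOSTS_FILE at /etc/hosts and use the addresses ifconfig reports.

```shell
# Append the cluster host mappings (demo file; on a real node use /etc/hosts
# and substitute the IPs that ifconfig reports on your containers).
HOSTS_FILE=/tmp/hosts.demo
cat >> "$HOSTS_FILE" <<'EOF'
172.18.0.5 hadoop0
172.18.0.6 hadoop1
172.18.0.7 hadoop2
EOF
grep hadoop "$HOSTS_FILE"
```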
Set the time zone
rm -rf /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
Passwordless SSH login
Run the following on every node; when prompted, type nothing and just press Enter each time
ssh-keygen
The following commands will ask for the password set earlier (1234)
ssh-copy-id -i /root/.ssh/id_rsa -p 22 root@hadoop0
ssh-copy-id -i /root/.ssh/id_rsa -p 22 root@hadoop1
ssh-copy-id -i /root/.ssh/id_rsa -p 22 root@hadoop2
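Once the keys are copied, a quick way to confirm passwordless login is a BatchMode loop: BatchMode makes ssh fail instead of prompting for a password. A sketch to run inside hadoop0 (the result file path is just an example):

```shell
# Verify key-based login to every node; prints OK or NOT working per host.
for h in hadoop0 hadoop1 hadoop2; do
    if ssh -o BatchMode=yes -o ConnectTimeout=3 -p 22 "root@$h" true 2>/dev/null; then
        echo "$h: passwordless login OK"
    else
        echo "$h: passwordless login NOT working"
    fi
done | tee /tmp/ssh-check.txt
```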
Install and configure Hadoop
Run on the master (hadoop0):
cd /usr/local/hadoop
mkdir tmp hdfs
mkdir hdfs/data hdfs/name
Configure core-site.xml
vim /usr/local/hadoop/etc/hadoop/core-site.xml
Add inside the <configuration> block:
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop0:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
</property>
<property>
    <name>io.file.buffer.size</name>
    <value>131702</value>
</property>
Configure hdfs-site.xml
vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
Add inside the <configuration> block:
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/hdfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/hdfs/data</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop0:9001</value>
</property>
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
Configure mapred-site.xml
This file does not exist by default; copy it from mapred-site.xml.template
cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
Edit the file
vim /usr/local/hadoop/etc/hadoop/mapred-site.xml
Add inside the <configuration> block:
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop0:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop0:19888</value>
</property>
Configure yarn-site.xml
vim /usr/local/hadoop/etc/hadoop/yarn-site.xml
Add inside the <configuration> block:
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.auxservices.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop0:8032</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop0:8030</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop0:8031</value>
</property>
<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop0:8033</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop0:8088</value>
</property>
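Before copying these files to the other nodes, it is worth checking that every edited file is still well-formed XML. A sketch, assuming python3 is available (any XML validator works just as well); files that do not exist are simply skipped:

```shell
# Report whether each Hadoop config file parses as XML.
check_xml() {
    python3 -c 'import sys, xml.dom.minidom as m; m.parse(sys.argv[1])' "$1" \
        && echo "$1: well-formed"
}

HADOOP_CONF="${HADOOP_CONF:-/usr/local/hadoop/etc/hadoop}"
for f in core-site hdfs-site mapred-site yarn-site; do
    # Skip files that are not present on this machine.
    [ -f "$HADOOP_CONF/$f.xml" ] && check_xml "$HADOOP_CONF/$f.xml"
done
true
```

A stray unclosed tag here is a common reason for the NameNode refusing to start later.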
Configure slaves
vim /usr/local/hadoop/etc/hadoop/slaves
Delete the existing content and add:
hadoop1
hadoop2
Configure hadoop-env.sh
vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh
Find export JAVA_HOME=${JAVA_HOME} and change it to the absolute path of your JAVA_HOME
export JAVA_HOME=/usr/local/jdk1.8
Copy the files to hadoop1 and hadoop2
scp -r /usr/local/hadoop hadoop1:/usr/local
scp -r /usr/local/hadoop hadoop2:/usr/local
Set the Hadoop environment variables
Run on every node:
vim ~/.bashrc
Add:
export PATH=$PATH:/usr/local/hadoop/bin:/usr/local/hadoop/sbin
Save and exit, then reload
source ~/.bashrc
Starting Hadoop
Start Hadoop on the master; the daemons on the slave nodes are started automatically
Initialize (format the NameNode)
hdfs namenode -format
Start the daemons
hadoop-daemon.sh start namenode
hadoop-daemon.sh start datanode
start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh start historyserver
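After start-up, jps (shipped with the JDK) shows which daemons came up. With this layout the master typically runs NameNode, DataNode, SecondaryNameNode, ResourceManager and JobHistoryServer, while the slaves run DataNode and NodeManager. A guarded sketch (the result file path is just an example):

```shell
# List running Java daemons; falls back to a note where jps is unavailable.
{ command -v jps >/dev/null && jps || echo "jps not on PATH (run this inside a cluster node)"; } \
    | tee /tmp/jps-check.txt
```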
Testing
If you are also on an Alibaba Cloud server, adjust the security group in the console: only port 22 is open by default, so ports 50070 and 8088 must be opened as well.
[Screenshot: ports opened in the security group]
Open in a browser:
http://<server IP>:50070/
http://<server IP>:8088/
HDFS operations
hdfs dfs -mkdir -p /usr/local/hadoop/input
hdfs dfs -put /usr/local/hadoop/etc/hadoop/kms*.xml /usr/local/hadoop/input
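The upload can also be confirmed from the command line by listing the target directory. A guarded sketch (the hdfs client only exists inside the cluster nodes; elsewhere it just prints a note):

```shell
# List the uploaded files; prints a note where the hdfs client is unavailable.
if command -v hdfs >/dev/null; then
    hdfs dfs -ls /usr/local/hadoop/input
else
    echo "hdfs client not on PATH; run this inside hadoop0"
fi | tee /tmp/hdfs-ls.txt
```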
Open http://<server IP>:50070/ and check the uploaded files on the file-browser page
[Screenshot: verifying HDFS]
MapReduce operations
hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar grep /usr/local/hadoop/input /usr/local/hadoop/output 'dfs[a-z.]+'
[Screenshot: verifying MapReduce]