
[Cloud Computing] Deploying a Hadoop Cluster in Docker on a VMware Virtual Machine and Publishing It to Alibaba Cloud and Docker Hub

Date: 2020-06-20 01:55:33


Contents

1. Environment preparation

2. Install Docker

3. Install CentOS 7 in Docker

Note: fixing the problem that systemd cannot be used in the Docker CentOS image

4. Configure Java and Hadoop in the CentOS 7 containers

5. Package the Docker images and push them to Docker Hub and Alibaba Cloud

6. Run the WordCount example

1. Environment preparation

VMware virtual machine running CentOS 7

Hadoop version: 3.1.1

2. Install Docker

(1) Reference:

Detailed steps for installing Docker on CentOS 7 (a pitfall-free tutorial) - Tencent Cloud Community: /developer/article/1701451. Before installing Docker, a note on the setup: this is CentOS 7, and the officially recommended Linux kernel is 3.10 or later (3.8+ reportedly also works).
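The linked guide boils down to Docker's standard CE repository route. A minimal sketch of those steps (commands per Docker's official CentOS instructions):

# Install the repo tooling and register the Docker CE repository
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install Docker Engine, start it, and enable it at boot
yum install -y docker-ce docker-ce-cli containerd.io
systemctl start docker
systemctl enable docker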

(2) Verify the installation

[root@localhost master]# docker version
Client: Docker Engine - Community
 Version:           20.10.14
 API version:       1.41
 Go version:        go1.16.15
 Git commit:        a224086
 Built:             Thu Mar 24 01:49:57
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.14
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.15
  Git commit:       87a90dc
  Built:            Thu Mar 24 01:48:24
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.5.11
  GitCommit:        3df54a852345ae127d1fa3092b95168e4a88e2f8
 runc:
  Version:          1.0.3
  GitCommit:        v1.0.3-0-gf46b6ba
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

3. Install CentOS 7 in Docker

Create three containers: Dmaster, Dslave1, and Dslave2.

(1) Pull the CentOS 7 image in Docker and create the containers

Note: fixing the problem that systemd cannot be used in the Docker CentOS image

Failed to get D-Bus connection: Operation not permitted — fixing the problem that systemd cannot be used in the Docker CentOS image, Alibaba Cloud developer community: /article/688670

Create the Dockerfile

mkdir /usr/local/centos-systemd
cd /usr/local/centos-systemd
vim dockerfile

FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
    rm -f /lib/systemd/system/multi-user.target.wants/*; \
    rm -f /etc/systemd/system/*.wants/*; \
    rm -f /lib/systemd/system/local-fs.target.wants/*; \
    rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
    rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
    rm -f /lib/systemd/system/basic.target.wants/*; \
    rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]

Save and exit with ESC, :wq
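The RUN line strips every sysinit unit except systemd-tmpfiles-setup; together with running /usr/sbin/init as PID 1 and the read-only /sys/fs/cgroup volume mounted at run time, this is what lets systemctl manage services (such as sshd) inside the container.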

Build the image and tag it as centos-systemd:7

cd /usr/local/centos-systemd
docker build --rm -t centos-systemd:7 .

Use the centos-systemd:7 image to create the three containers

# docker run -d -i -t --privileged=true -p 8080:8080 -p 50070:50070 --name Dmaster00 -v /sys/fs/cgroup:/sys/fs/cgroup:ro centos-systemd:7
# docker run -d -i -t --privileged=true --name Dslave1 -v /sys/fs/cgroup:/sys/fs/cgroup:ro centos-systemd:7
# docker run -d -i -t --privileged=true --name Dslave2 -v /sys/fs/cgroup:/sys/fs/cgroup:ro centos-systemd:7

List all containers

[root@localhost master]# docker ps -a

Start the containers

[root@localhost master]# docker start 51a073e4762e   # Dmaster
[root@localhost master]# docker start c55c8394429f   # Dslave1
[root@localhost master]# docker start ae94c4a316ce   # Dslave2

Enter each container

[root@localhost master]# docker exec -it 51a073e4762e /bin/bash   # Dmaster
[root@localhost master]# docker exec -it c55c8394429f /bin/bash   # Dslave1
[root@localhost master]# docker exec -it ae94c4a316ce /bin/bash   # Dslave2

At this point the three containers Dmaster, Dslave1, and Dslave2 are up.

Check the installed CentOS 7 version

[root@290ca80f1a3c /]# cat /etc/redhat-release

Install some basic packages (items 5 and 6 are optional)

# 1 Install vim for easier file editing
yum install -y vim
# 2 Install lsof for easier inspection of port usage
yum install -y lsof
# 3 Install wget
yum install -y wget
# 4 Install tree
yum install -y tree
# 5 Python development headers
yum install -y python-devel
# 6 C/C++ build environment
yum install -y gcc gcc-c++
yum install -y zlib
yum install -y zlib-devel
# Note: build-essential is a Debian package name and is not available via yum;
# the CentOS equivalent is `yum groupinstall "Development Tools"`
yum install -y tcl tk gettext

With those packages installed, the base setup of all three containers is complete.

4. Configure Java and Hadoop in the CentOS 7 containers

(1) Configure IPs and passwordless SSH login

# vim /etc/sysconfig/network

HOSTNAME=Dmaster     # on Dmaster
HOSTNAME=Dslave1     # on Dslave1
HOSTNAME=Dslave2     # on Dslave2

Configure /etc/hosts

vim /etc/hosts

Add the container IPs:

172.17.0.2 Dmaster
172.17.0.3 Dslave1
172.17.0.4 Dslave2
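The addresses above follow Docker's default bridge allocation order, so they may differ in your environment; they can be confirmed from the VM host (a quick check, using the container names created earlier):

# Print each container's IP on the default bridge network
docker inspect -f '{{.NetworkSettings.IPAddress}}' Dmaster00 Dslave1 Dslave2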

Set up passwordless login to the node itself

# 1 Install the openssl/SSH packages with yum
yum -y install passwd openssl openssh-server openssh-clients
mkdir /var/run/sshd/
# 2 Edit the configuration
vim /etc/ssh/sshd_config +39
## Around lines 38-45, modify or add the following three settings
PermitRootLogin yes
RSAAuthentication yes
PubkeyAuthentication yes

# 3 Starting and stopping the sshd service
## 3.1 Start
systemctl start sshd.service
## 3.2 Check the sshd service status
systemctl status sshd
## 3.3 Stop
systemctl stop sshd.service
# 4 Enable sshd at boot
systemctl enable sshd.service
# 5 Generate the SSH key pair
ssh-keygen -t rsa
# 6 Check the SSH service
lsof -i:22

Generate the key files

# cd ~/.ssh/
# rm -rf *
# ssh-keygen -t rsa

Append the public key to the authorized list and fix the file's permissions

# cat id_rsa.pub >> authorized_keys
# chmod 600 ./authorized_keys

Test passwordless login to the node itself.
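For example (assuming sshd is running and the key was authorized as above):

# Should open a shell without asking for a password
ssh localhost
exit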

Set the root password

[root@DMaster .ssh]# yum install passwd
[root@DMaster .ssh]# passwd

Rename the public key and distribute it to Dslave1 and Dslave2

# cd ~/.ssh/
# mv id_rsa.pub id_rsa_189.pub
# scp id_rsa_189.pub Dslave1:~/.ssh/
# scp id_rsa_189.pub Dslave2:~/.ssh/

Dslave1's key must likewise be distributed to Dmaster and Dslave2, and Dslave2's key to Dmaster and Dslave1.

Dslave1

# cd ~/.ssh/
# mv id_rsa.pub id_rsa_190.pub
# scp id_rsa_190.pub Dmaster:~/.ssh/
# scp id_rsa_190.pub Dslave2:~/.ssh/

Dslave2

# cd ~/.ssh/
# mv id_rsa.pub id_rsa_191.pub
# scp id_rsa_191.pub Dmaster:~/.ssh/
# scp id_rsa_191.pub Dslave1:~/.ssh/

Then, on all three nodes, append the keys to the SSH authorized list

cat id_rsa_189.pub >> authorized_keys
cat id_rsa_190.pub >> authorized_keys
cat id_rsa_191.pub >> authorized_keys
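To confirm the full mesh works, a loop like this can be run on each node (a sketch; hostnames as written into /etc/hosts earlier):

# Print each remote hostname; no password prompt should appear
for host in Dmaster Dslave1 Dslave2; do
    ssh -o BatchMode=yes "$host" hostname
done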

(2) Configure Java

Create a java folder under /home

[root@2c50f7885def /]# cd home
[root@2c50f7885def home]# mkdir java
[root@2c50f7885def home]# cd java

Then use the docker cp command to copy the Java installer from the CentOS host into Dmaster.
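A sketch of that copy, run on the VM host; the source path here is an assumption, so adjust it to wherever the RPM actually sits:

# Copy the JDK installer from the host into the Dmaster container
docker cp /root/jdk-8u181-linux-x64.rpm Dmaster00:/home/java/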

Then install Java

[root@2c50f7885def java]# rpm -ivh jdk-8u181-linux-x64.rpm

Configure the environment variables

[root@2c50f7885def /]# vim /etc/profile

JAVA_HOME=/usr/java/jdk1.8.0_181-amd64
JRE_HOME=/usr/java/jdk1.8.0_181-amd64/jre
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
# Export the variables so child processes see them
export JAVA_HOME JRE_HOME PATH CLASSPATH

[root@2c50f7885def /]# source /etc/profile
[root@2c50f7885def /]# echo $PATH

# vim ~/.bashrc
export JAVA_HOME=/usr/java/jdk1.8.0_181-amd64
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
# source ~/.bashrc
# echo $JAVA_HOME
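A quick sanity check that the JDK is wired up:

# Both should succeed if the variables above took effect
java -version
echo $JAVA_HOME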

Java must be installed in all three containers.

(3) Configure Hadoop

As with Java, create a hadoop folder under /home and use docker cp to upload the Hadoop tarball.

Then extract it

[root@2c50f7885def hadoop]# tar -zxvf hadoop-3.1.1.tar.gz -C /usr/local
[root@2c50f7885def hadoop]# cd /usr/local
[root@2c50f7885def local]# mv ./hadoop-3.1.1 ./hadoop

Again, Hadoop must be installed in all three containers; see the sketch below for one way to distribute the tarball.
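Since passwordless SSH is already set up, one option is to push the tarball from Dmaster rather than repeating docker cp (a sketch, assuming the tarball sits in /home/hadoop as above):

# From Dmaster, copy the tarball to both slaves, then repeat the extraction there
scp /home/hadoop/hadoop-3.1.1.tar.gz Dslave1:/home/hadoop/
scp /home/hadoop/hadoop-3.1.1.tar.gz Dslave2:/home/hadoop/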

Then configure the environment variables

# vim ~/.bashrc
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_HOME=$HADOOP_INSTALL
# source ~/.bashrc
# hdfs

The following Hadoop configuration is done only on Dmaster.

First, change into the directory that holds the configuration files.

Configure hadoop-env.sh

# cd /usr/local/hadoop/etc/hadoop/
# vim hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_181-amd64
export HADOOP_HOME=/usr/local/hadoop
export HDFS_NAMENODE_USER="root"
export HDFS_DATANODE_USER="root"
export HDFS_SECONDARYNAMENODE_USER="root"
export YARN_RESOURCEMANAGER_USER="root"
export YARN_NODEMANAGER_USER="root"

Configure core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://Dmaster:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>
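Note that fs.default.name is the deprecated predecessor of fs.defaultFS; Hadoop 3 still honors it, but fs.defaultFS is the current name for the same setting.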

Configure hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop/hdfs/name/</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop/hdfs/data/</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>Dmaster:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>Dmaster:50090</value>
  </property>
</configuration>

Configure yarn-site.xml

<configuration>
  <property>
    <!-- The YARN master node -->
    <name>yarn.resourcemanager.hostname</name>
    <value>Dmaster</value>
  </property>
  <property>
    <!-- The computation framework's auxiliary shuffle service -->
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

Configure mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.application.classpath</name>
    <value>$HADOOP_HOME/etc/*,$HADOOP_HOME/etc/hadoop/*,$HADOOP_HOME/lib/*,$HADOOP_HOME/share/hadoop/common/*,$HADOOP_HOME/share/hadoop/common/lib/*,$HADOOP_HOME/share/hadoop/mapreduce/*,$HADOOP_HOME/share/hadoop/mapreduce/lib/*,$HADOOP_HOME/share/hadoop/hdfs/*,$HADOOP_HOME/share/hadoop/hdfs/lib/*,$HADOOP_HOME/share/hadoop/yarn/*,$HADOOP_HOME/share/hadoop/yarn/lib/*</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>Dmaster:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>Dmaster:19888</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>mapred.job.tracker.http.address</name>
    <value>Dmaster:50030</value>
  </property>
  <property>
    <name>mapred.task.tracker.http.address</name>
    <value>Dmaster:50060</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>Dmaster:9001</value>
    <description>Host or IP and port of the JobTracker</description>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
  </property>
</configuration>

Configure workers

# vim workers
Dslave1
Dslave2

Copy the configuration files from Dmaster over those on the other cluster nodes

# scp hadoop-env.sh root@Dslave1:/usr/local/hadoop/etc/hadoop/
# scp core-site.xml root@Dslave1:/usr/local/hadoop/etc/hadoop/
# scp hdfs-site.xml root@Dslave1:/usr/local/hadoop/etc/hadoop/
# scp mapred-site.xml root@Dslave1:/usr/local/hadoop/etc/hadoop/
# scp yarn-site.xml root@Dslave1:/usr/local/hadoop/etc/hadoop/
# scp workers root@Dslave1:/usr/local/hadoop/etc/hadoop/
# scp hadoop-env.sh root@Dslave2:/usr/local/hadoop/etc/hadoop/
# scp core-site.xml root@Dslave2:/usr/local/hadoop/etc/hadoop/
# scp hdfs-site.xml root@Dslave2:/usr/local/hadoop/etc/hadoop/
# scp mapred-site.xml root@Dslave2:/usr/local/hadoop/etc/hadoop/
# scp yarn-site.xml root@Dslave2:/usr/local/hadoop/etc/hadoop/
# scp workers root@Dslave2:/usr/local/hadoop/etc/hadoop/

Create the directories referenced by the Hadoop configuration on each of the three nodes

mkdir /usr/hadoop
mkdir /usr/hadoop/tmp
mkdir /usr/local/hadoop/hdfs
mkdir /usr/local/hadoop/hdfs/name
mkdir /usr/local/hadoop/hdfs/data

Format the NameNode

# hdfs namenode -format

Start Hadoop (on Dmaster)

# cd /usr/local/hadoop
# sbin/start-all.sh
# jps
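If everything started cleanly, jps on Dmaster should list NameNode, SecondaryNameNode, and ResourceManager, while Dslave1 and Dslave2 should each show DataNode and NodeManager.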

5. Package the Docker images and push them to Docker Hub and Alibaba Cloud

(1) Package with docker commit

# Package Dmaster (Docker repository names must be lowercase)
docker commit -m="docker_hadoop" -a="wangchen" 2c50f7885def dmaster:1.5
# Package Dslave1
docker commit -m="docker_hadoop" -a="wangchen" c55c8394429f dslave1:1.5
# Package Dslave2
docker commit -m="docker_hadoop" -a="wangchen" ae94c4a316ce dslave2:1.5

(2) Push to Docker Hub

Taking Dmaster as an example

Log in:

[root@localhost master]# docker login

Tag the image with the repository name, and always include a version tag; otherwise it defaults to latest:

[root@localhost master]# docker tag 25e22d5d8268 allenbayer/dmaster:1.0

Push the image, again with the version tag, or the push will not go through:

[root@localhost master]# docker push allenbayer/dmaster:1.0

(3) Push to Alibaba Cloud

Taking Dmaster as an example

Create an image repository in the Alibaba Cloud console

Log in to the Alibaba Cloud Docker Registry

$ docker login --username=aliyun6469631569 -

The login username is your full Alibaba Cloud account name; the password is the one set when the service was activated.

Push the image to the Registry

$ docker login --username=aliyun6469631569 -
$ docker tag [ImageId] -/centos_hadoop6564/dmaster:[image version]
$ docker push -/centos_hadoop6564/dmaster:[image version]

Replace [ImageId], [image version], and the registry endpoint (abbreviated as - above) with your actual image information.


6. Run the WordCount example

Create an input folder in HDFS

hadoop fs -mkdir /input

Create a file named input01.txt to serve as the input.
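For instance (the contents here are an arbitrary assumption):

# Create a one-line sample input file
echo "hello hadoop hello docker" > input01.txt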

Upload it to the input folder just created in HDFS

hadoop fs -put input01.txt /input/

Change into the directory that holds the example jars:

[root@2c50f7885def hadoop]# cd /usr/local/hadoop/share/hadoop/mapreduce

Run wordcount

hadoop jar hadoop-mapreduce-examples-3.1.1.jar wordcount /input /output

View the results

hadoop fs -cat /output/part-r-00000
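For the one-line sample suggested above, the counts would come back sorted by word:

docker	1
hadoop	1
hello	2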

Done.
