Posted by 知者何南 on 2023-02-21 21:29:31

Building a Distributed HA Runtime Environment for Hadoop and Spark

Author: Qin Biao, JD Logistics

As the saying goes, to do a good job one must first sharpen one's tools. Before diving deep into big data technologies, hand-building your own local Hadoop and Spark runtime environment from zero to one is of real value for further study of the big data ecosystem. This article takes a developer's perspective and walks through the setup hands-on; rather than dwelling on background knowledge, it uses recent Hadoop and Spark releases and guides you through the environment build step by step.
1. Runtime Environment Overview

(1) Software packages and tool versions:

Technology / Tool | Version | Notes
Hadoop | hadoop-3.3.4.tar.gz |
VirtualBox | 6.0.0 r127566 | Virtual machine software (recommended)
CentOS | centos7.3 |
JDK | jdk-8u212-linux-x64.tar.gz | 1.8.0_212
Zookeeper | zookeeper-3.4.10.tar.gz |
FileZilla | FileZilla_3.34.0 | File transfer tool (recommended)
MobaXterm | MobaXterm_Portable_v10.9 | SSH client (recommended)
Idea | IDEA COMMUNITY 2019.1.4 | IDE for development (recommended)

(2) Deployment layout:

Hostname | IP | Processes
master | 192.168.0.20 | QuorumPeerMain, NameNode, DataNode, ResourceManager, NodeManager, JournalNode, DFSZKFailoverController, Master
slave1 | 192.168.0.21 | QuorumPeerMain, NameNode, DataNode, ResourceManager, NodeManager, JournalNode, DFSZKFailoverController, Master, Worker
slave2 | 192.168.0.22 | QuorumPeerMain, NameNode, DataNode, JournalNode, NodeManager, Worker

(3) Process overview (1 means the process runs on that host, 0 means it does not):

Process | Role | master | slave1 | slave2
QuorumPeerMain | ZooKeeper process | 1 | 1 | 1
NameNode | Hadoop master node | 1 | 1 | 0
DataNode | Hadoop data node | 1 | 1 | 1
ResourceManager | YARN management process | 1 | 1 | 0
NodeManager | YARN worker process | 1 | 1 | 1
JournalNode | NameNode sync process | 1 | 1 | 1
DFSZKFailoverController | NameNode monitoring process | 1 | 1 | 0
Master | Spark master node | 1 | 1 | 0
Worker | Spark worker node | 1 | 1 | 1

2. Preparing the Base System Environment

Step 1: Install Linux in the virtual machines (omitted)
Install the CentOS 7 operating system in VirtualBox
Step 2: Basic CentOS 7 configuration
(1) Configure the host's hostname
Command: vim /etc/hostname
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/7641bb94259b41908a3c7453a819a6a8~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=jVT4ARnqdFddLFbDEt2EGXxTUsM%3D
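The screenshot shows the file being edited; equivalently, on CentOS 7 the hostname can be set with hostnamectl. A sketch (run the matching line on the matching machine):
hostnamectl set-hostname master   # on the 192.168.0.20 machine
hostnamectl set-hostname slave1   # on the 192.168.0.21 machine
hostnamectl set-hostname slave2   # on the 192.168.0.22 machine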
(2) Configure hosts; command: vim /etc/hosts
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/7c2af2ec07db49f88d13e5cc7d271270~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=PQI3iE6TMQ4wVxS1xF6Msr1xS78%3D
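The exact contents are in the screenshot; based on the host plan in section 1, the entries appended to /etc/hosts on all three machines would be:
192.168.0.20 master
192.168.0.21 slave1
192.168.0.22 slave2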
(3) Install the JDK
Commands:
rpm -qa | grep java    (check whether a JDK was already installed via rpm)
java -version    (check the JDK version on the current PATH)
1) Upload the package with FileZilla and unpack it: tar -zxvf jdk-8u212-linux-x64.tar.gz
2) Full path of the bin directory: /usr/local/jdk/jdk1.8.0_212/bin
3) vim /etc/profile to configure the JDK environment variables
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/2aa9d6d041be4f43876ef988ff167225~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=X%2FcQ2tHxabz2c%2B1UkGb0AI7ctQs%3D
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/fc7d283a248a41b3a62cffa2a487c029~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=M005u54RokId28SReo2eLPuKN1g%3D
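The screenshots show the profile edits; a minimal sketch of the JDK section, using the bin path given above:
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_212
export PATH=$PATH:$JAVA_HOME/bin
After saving, run source /etc/profile and verify with java -version.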
(4) Clone the host:
1) Use VirtualBox's clone feature to create the other two hosts
2) Command: vi /etc/sysconfig/network-scripts/ifcfg-eth0, and set the appropriate network settings
3) The IPs of the three hosts are 192.168.0.20/21/22
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/cf4b3dd3b4d0460594b43a3ddf763c0f~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=ilYtdnPQXNRKMCOJA3p8r2S5nCY%3D
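A sketch of the static-IP portion of ifcfg-eth0, shown here for slave1; IPADDR changes per host, and the GATEWAY value is an assumption to adjust to your own network:
TYPE=Ethernet
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.0.21      # .20 / .21 / .22 depending on the host
NETMASK=255.255.255.0
GATEWAY=192.168.0.1      # assumption: your LAN gateway
Apply the change with systemctl restart network.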
(5) Configure passwordless SSH login between the three hosts (omitted)
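For reference, since this step is omitted above, a minimal sketch assuming everything runs as root:
# on each of the three hosts:
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for h in master slave1 slave2; do ssh-copy-id root@$h; done
Afterwards, ssh slave1 and the like should log in without prompting for a password.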
(6) Install ZooKeeper
1) Upload the package with FileZilla and unpack zookeeper-3.4.10.tar.gz
2) Full path of the bin directory: /usr/local/zookeeper/zookeeper-3.4.10/bin
3) vim /etc/profile to configure the ZooKeeper environment variables
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/cf49399875e54e09867a41c8373178af~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=0h1lnRykRzAzunLtJpZpN6DXsnU%3D
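A sketch of the ZooKeeper section of /etc/profile, matching the bin path above:
export ZOOKEEPER_HOME=/usr/local/zookeeper/zookeeper-3.4.10
export PATH=$PATH:$ZOOKEEPER_HOME/bin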
4) Edit the ZooKeeper configuration file under zookeeper-3.4.10/conf/
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/188e08ce593c412789cb69be3bbf99bd~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=OdibH9sErQlvfyeHs8ice0pw%2Fy4%3D
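The screenshot shows the author's exact file; a typical zoo.cfg along these lines (the ports are ZooKeeper defaults, and dataDir matches the data directory created in step 6 below). Copy zoo_sample.cfg to zoo.cfg, then edit:
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/usr/local/zookeeper/zookeeper-3.4.10/data
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888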
5) Run commands to copy the configuration from the master node to the other two nodes
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/0dad78c677304a16b5bf13bc973718e0~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=h3tszl0IvIgKP8BwzBcIImwnwZI%3D
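A sketch of the copy commands, following the same scp pattern used later for Scala:
scp -r /usr/local/zookeeper root@slave1:/usr/local/
scp -r /usr/local/zookeeper root@slave2:/usr/local/
scp /etc/profile root@slave1:/etc/profile
scp /etc/profile root@slave2:/etc/profile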
6) On every machine, create a data directory under the ZooKeeper directory and a myid file inside it; the master host stores the value 1, slave1 stores 2, and slave2 stores 3
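A sketch of this step:
# on master:
mkdir -p /usr/local/zookeeper/zookeeper-3.4.10/data
echo 1 > /usr/local/zookeeper/zookeeper-3.4.10/data/myid
# on slave1: echo 2 > /usr/local/zookeeper/zookeeper-3.4.10/data/myid
# on slave2: echo 3 > /usr/local/zookeeper/zookeeper-3.4.10/data/myid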
7) On every machine, run: zkServer.sh start to start ZooKeeper; the process name is QuorumPeerMain
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/7fa24ec1f97f40caba251ab931dd70b4~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=femEVSTD%2BV7yhqOeUhuqx3TpVKI%3D
3. Installing and Deploying Hadoop

3.1 Installing Hadoop

1) Upload the package with FileZilla and unpack it: tar -zxvf hadoop-3.3.4.tar.gz
2) Full path of the bin directory: /usr/local/hadoop/hadoop-3.3.4/bin
3) vim /etc/profile to configure the Hadoop environment variables
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/7bcab0e746704821a40ab8ebc5af477e~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=gnvs6DbQMqH0jYZpPUvw5nxoikA%3D
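A minimal sketch of the Hadoop section of /etc/profile:
export HADOOP_HOME=/usr/local/hadoop/hadoop-3.3.4
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin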
4) Modify six configuration files in total: hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml, and workers
File 1: hadoop-env.sh; add the JDK environment variable
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/8fd314e245174bc1804ac2bc9e155b0d~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=fjuuC8Nv0dWcEId%2BYhBjwUugZUg%3D
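A sketch of the additions, assuming (as the scp commands elsewhere suggest) that everything runs as root; Hadoop 3.x refuses to start daemons as root unless the *_USER variables are declared:
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_212
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root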
File 2: core-site.xml; configure the temporary directory and the ZooKeeper settings
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/6de9bdee12574c3592f70b263332575d~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=u5gWOgxjze48IQ%2FZzRBUGxd4I88%3D
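The screenshot has the author's exact values; a typical HA core-site.xml along these lines (the nameservice name mycluster and the tmp path are placeholders, and mycluster must match hdfs-site.xml):
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/hadoop-3.3.4/tmp</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>master:2181,slave1:2181,slave2:2181</value>
  </property>
</configuration>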
File 3: hdfs-site.xml; configure the HDFS settings
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/45e8f5f1a01049618238ee1e9c9c7b39~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=9QJIjUJChaCK%2BR2%2B6r2WmF5k9oc%3D
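A sketch of an HA hdfs-site.xml consistent with the rest of this guide (master:9000 matches the HDFS URL used in the Spark example, and the 50070 web ports match the addresses used in section 3.2; mycluster and the local paths are placeholders):
<configuration>
  <property><name>dfs.nameservices</name><value>mycluster</value></property>
  <property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>master:9000</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>slave1:9000</value></property>
  <property><name>dfs.namenode.http-address.mycluster.nn1</name><value>master:50070</value></property>
  <property><name>dfs.namenode.http-address.mycluster.nn2</name><value>slave1:50070</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://master:8485;slave1:8485;slave2:8485/mycluster</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/usr/local/hadoop/hadoop-3.3.4/journal</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
  <property><name>dfs.client.failover.proxy.provider.mycluster</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.ha.fencing.methods</name><value>sshfence</value></property>
  <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/root/.ssh/id_rsa</value></property>
</configuration>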
File 4: mapred-site.xml; configure the MapReduce and DFS permission settings
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/37dce1880e93409b8a224931d81a3a23~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=YvJfMqET4MMQGggUpmo8Qc4AaL8%3D
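The screenshot carries the author's full file; the essential entry is the framework selector, sketched below (the permission-related settings mentioned in the caption are only visible in the screenshot, so they are not reproduced here):
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>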
File 5: yarn-site.xml; configure YARN resource scheduling
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/393789a6fa904187addd3a7d3d40d59c~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=lEY1NVdqY5kEFEEYckFx8m7DUcE%3D
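A sketch of an HA yarn-site.xml consistent with ResourceManager running on master and slave1 (the cluster-id and rm ids are placeholders):
<configuration>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
  <property><name>yarn.resourcemanager.ha.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.cluster-id</name><value>yarn-cluster</value></property>
  <property><name>yarn.resourcemanager.ha.rm-ids</name><value>rm1,rm2</value></property>
  <property><name>yarn.resourcemanager.hostname.rm1</name><value>master</value></property>
  <property><name>yarn.resourcemanager.hostname.rm2</name><value>slave1</value></property>
  <property><name>yarn.resourcemanager.zk-address</name><value>master:2181,slave1:2181,slave2:2181</value></property>
</configuration>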
File 6: the workers file lists the current worker node names; copy it to every virtual machine
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/06fec6ebf3184974823b06ed49658f57~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=3a1ga0hAnUVgkrYK5IaGeHCdtOQ%3D
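Per the process table in section 1, DataNode and NodeManager run on all three hosts, so the workers file would contain:
master
slave1
slave2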
3.2 Starting Hadoop

1) Use the command: hadoop-daemon.sh start journalnode to start the JournalNode process (run on every node)
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/7b575a23e02f475d9e68bd5d1536be6f~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=yPa9ZivAOWVUuCfFfelm7D44sVY%3D
2) Use the command: hadoop-daemon.sh start namenode to start the NameNode process (run on master and slave1)
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/85528ef5b7cb4151aaa592a64e4f9d6d~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=9IRpNJFz85VYMMZ4riEWUmUkEg0%3D
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/eb37f8770706440e9e7c0835efb59b98~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=4RSRCG%2FEexTxUmWF6w8l3glwF54%3D
3) Use the command: hadoop-daemon.sh start datanode to start the DataNode process on every node
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/0669e02ec38640a8bd17e81fa8819028~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=QgYTZIjq0RJ7t9ln%2BcQFv8W5pPA%3D
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/a213dfd1e31545788d6976559e26d8c9~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=ubmEGX6Br6z42UwKAdWcaHtNx6g%3D
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/331680903020468ea5ac00b3177957bc~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=JfZhtF%2FxfCrJnXFyiA53LxYHt5E%3D
4) Use the command: start-yarn.sh on master to start YARN
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/402b1cbc70dd4da3aea7f516023d4bc3~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=hT3dXLncz2JKj%2BWRGxKT8GZtVYw%3D
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/a2fe8e014e3f49d7bd605fa2fd14a3a4~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=F968qiiNAIHPBAQKItPISmy6Xao%3D
5) Use the command: hdfs zkfc -formatZK to create the HA znodes in ZooKeeper
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/55f9a8677ee44d839f226c2eddcc1578~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=n9LaYCaNKJp5C1w4xg%2B1UzvjmJM%3D
6) Use the command: hadoop-daemon.sh start zkfc to start the DFSZKFailoverController process; per the process table in section 1, run it on master and slave1
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/b593127abae3405bb8690bb8a506c19f~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=e0DBEoxt%2F2gMGDLf0nEHQJitjD0%3D
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/d1010283eab743b5a60b8294fda9f073~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=Sp7A8F9LMFgSNHNR5ye82daANv8%3D
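Putting steps 1) through 6) together, a first-time start sequence would look like the sketch below. The NameNode format and standby bootstrap commands are assumptions for a brand-new cluster; they are not spelled out above but are the standard way to initialize an HA pair:
# on every node: start ZooKeeper and the JournalNode
zkServer.sh start
hadoop-daemon.sh start journalnode
# on master (first start only, assumption): format HDFS, then start the NameNode
hdfs namenode -format
hadoop-daemon.sh start namenode
# on slave1 (first start only, assumption): copy the metadata, then start the standby NameNode
hdfs namenode -bootstrapStandby
hadoop-daemon.sh start namenode
# on every node
hadoop-daemon.sh start datanode
# on master
start-yarn.sh
hdfs zkfc -formatZK          # first start only
# on master and slave1
hadoop-daemon.sh start zkfc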
a. Access the HDFS web UI
http://192.168.0.20:50070 (here 192.168.0.20 is the active NameNode)
http://192.168.0.21:50070 (here 192.168.0.21 is the standby NameNode)
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/35a10f490de74f2eafe8c124e8c7cd41~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=5RKvyaU84ofIPsJUpemEL9%2B8Y6A%3D
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/e03eab79039944d9abd0df317288c776~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=RxbzLR6j19i%2BS11tlFrD3bCZ4%2Bo%3D
3.3 Verifying HDFS usage

Use the command: hdfs dfs -ls / to list files in HDFS
Use the command: hdfs dfs -mkdir /input to create a directory in HDFS
Use the command: hdfs dfs -put ./test.txt /input to upload a local file to the given HDFS directory
Use the command: hdfs dfs -get /input/test.txt ./tmp to copy a file from HDFS to a local directory
Use the command: hdfs dfs -text /input/test.txt to view a text file stored in HDFS
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/9ec00aacff4f4bac8d1c52135e09534f~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=WLajlX8UlVcVYOlyLjwQ0StnZyo%3D
Browse the HDFS directory from the web UI
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/db4c510288184032b4a9b53b610566a9~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=N1yRd%2FXCuNNUPFUTNKBC0LsrZI8%3D
3.4 Verifying MapReduce with the wordcount example

(1) First upload a test2.txt file containing some text to HDFS
(2) Run wordcount on the test2.txt file in HDFS and write the result to a new HDFS directory; command:
hadoop jar /usr/local/hadoop/hadoop-3.3.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.4.jar wordcount /input/test2.txt /out
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/dd3929a499ed4b83ad3dfb16e12cfaa5~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=1CRxkCx7isYk1a8oxb2xJgKYI8c%3D
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/2520cf4f08e243e6bedccfc1e433b36b~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=2bosX7fob%2F9UB3binGw9pZSIVd4%3D
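End to end, the run looks like this sketch (the sample text is made up):
echo "hello hadoop hello spark" > test2.txt
hdfs dfs -put ./test2.txt /input
hadoop jar /usr/local/hadoop/hadoop-3.3.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.4.jar wordcount /input/test2.txt /out
hdfs dfs -cat /out/part-r-00000    # view the word counts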
4. Installing and Deploying Spark

4.1 Installing Scala

(1) Install Scala
Upload the Scala package and unpack it. Use the command:
scala -version    (checks the Scala version on the current PATH)
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/118c394e2406404286310d47056f3ab2~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=QmSh3%2BxFFhCRmtJkX1SqpTP6ngM%3D
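A sketch of the matching /etc/profile section; the exact directory name depends on the Scala version you unpack (Spark 3.3.1 is built against Scala 2.12, so scala-2.12.15 here is an assumption):
export SCALA_HOME=/usr/local/scala/scala-2.12.15
export PATH=$PATH:$SCALA_HOME/bin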
(2) Copy the Scala directory and the environment variables to the other two machines
Use the commands (shown for slave1; repeat for slave2):
scp -r /usr/local/scala root@slave1:/usr/local/
scp /etc/profile root@slave1:/etc/profile
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/ab451a520acb4e5e8e175b216de65bd7~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=EEixuVPZl6pTGEvWOaEGFQ9ZQmE%3D
4.2 Installing Spark

(1) Upload the Spark package, unpack it, and edit the configuration file
Command: vim /usr/local/spark/spark-3.3.1/conf/spark-env.sh
(2) Create the workers file and list the worker host names in it (a sketch of both files follows)
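A sketch of the two files. The HA options are an assumption: with Master processes on both master and slave1 (see section 1), Spark standalone normally coordinates them through ZooKeeper via SPARK_DAEMON_JAVA_OPTS:
# conf/spark-env.sh
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_212
export HADOOP_CONF_DIR=/usr/local/hadoop/hadoop-3.3.4/etc/hadoop
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=master:2181,slave1:2181,slave2:2181"

# conf/workers (Worker runs on all three hosts per the process table)
master
slave1
slave2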
4.3 Starting Spark

(1) On master, start Spark from the Spark installation directory
Commands:
cd /usr/local/spark/spark-3.3.1/sbin
./start-all.sh
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/dfa640d1c7bc446faeaaf2378462426d~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=MCKzgGOP7hsH3AEOpbvq0jItbUw%3D
(2) In the same directory on slave1, start the standby Master process
Command: ./start-master.sh
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/f6cea85829d34d4faa340274fc7c67c4~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=6xj70GCZEY4fFu%2BbdfV%2BizX1TMM%3D
(3) Access the Spark web UI
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/0399adf8a08346d989c6c625f7284497~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=BTe9%2BFyvnv8kDahx%2BL50Cj6r8YY%3D
4.4 Verifying Spark with the wordcount example

(1) Run:
cd /usr/local/spark/spark-3.3.1/bin
./spark-shell --master spark://master:7077
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/3a017e76cb6e4ef4ad8d47aa659a7003~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=Huo3PXeQnxK8izfrdtmhXhW4vCw%3D
(2) Read the data from HDFS, run the hand-written wordcount, and write the result back to HDFS; command:
sc.textFile("hdfs://master:9000/input/test2.txt")
  .flatMap(_.split(" "))            // split each line into words
  .map(word => (word, 1))           // pair every word with a count of 1
  .reduceByKey(_ + _)               // sum the counts per word
  .map(pair => (pair._2, pair._1))  // swap to (count, word) for sorting
  .sortByKey(false)                 // sort by count, descending
  .map(pair => (pair._2, pair._1))  // swap back to (word, count)
  .saveAsTextFile("hdfs://master:9000/spark_out")
The double swap around sortByKey is the usual trick for sorting a pair RDD by its values.
(3) Output:
https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/f1d0678a42034a46af741dbed88242ef~tplv-tt-shrink:640:0.image?traceid=20230221100619815351C599FB90BF64ED&x-expires=2147483647&x-signature=XYHqkp8VmgnhoMChS6xgifp2%2BMs%3D
5. Afterword

Big data technology evolves rapidly, driven by the business and industrial transformations that Internet technology has enabled. People's growing demand for convenience, digitalization, and intelligence in daily life and production has caused data to grow explosively and pushed big data technology to keep reinventing itself. Developers of the new era need a solid grounding in big data fundamentals to keep pace. This article is only a primer: it aims to help beginners approach big data from a hands-on angle and build a development environment of their own, as a base for further study and accomplishment.
