数据人与超自然意识 posted on 2024-7-24 09:13:22

Installing Hadoop 3.3.6 and configuring hbase-2.5.5-hadoop3x and zookeeper-3.8.3



Preliminary setup

VM setup

https://img-blog.csdnimg.cn/img_convert/4c15a9e2e775d85a4b9eb67241fd63c7.png
https://img-blog.csdnimg.cn/img_convert/9218ce920839afa00d8ba6527bd1a38f.png
Create the virtual machines (hadoop1, hadoop2, hadoop3)

https://img-blog.csdnimg.cn/img_convert/b6f727afece57bc6ee512c94058d4653.png
During installation, it is recommended to set the root password to 1234 to make the later steps easier.
https://img-blog.csdnimg.cn/img_convert/d864a4776eab8d6dd385ad1c5972bbbd.png
https://img-blog.csdnimg.cn/img_convert/41310c31c6e66e9dce2f381c4492fa4c.png
Linux preliminary setup (configure all three machines)

1. Set the hostname

Taking hadoop3 as an example:
hostnamectl set-hostname hadoop3
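Correspondingly, on the other two machines:
hostnamectl set-hostname hadoop1
hostnamectl set-hostname hadoop2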
2. Set a static IP

vim /etc/sysconfig/network-scripts/ifcfg-ens33
https://img-blog.csdnimg.cn/img_convert/891a9baa0b10b5c6b56ec0cd72ee8196.png
https://img-blog.csdnimg.cn/img_convert/7120fad56da2af104119336f2665992e.png
hadoop1 192.168.88.201
hadoop2 192.168.88.202
hadoop3 192.168.88.203
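A minimal sketch of the edits, taking hadoop1 as an example (the GATEWAY and DNS1 values are assumptions for a typical VMware NAT network on 192.168.88.0/24; match them to your own VM settings):
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.88.201
NETMASK=255.255.255.0
GATEWAY=192.168.88.2
DNS1=192.168.88.2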
Finally, run
service network restart
to restart the network service.
3. Connect with a remote terminal tool (all three machines)

https://img-blog.csdnimg.cn/img_convert/ead662ed38a49aebf43b8245ad9cd6d7.png
https://img-blog.csdnimg.cn/img_convert/a582cba8fbaf2f89724a0f4cd9979bde.png
https://img-blog.csdnimg.cn/img_convert/2573c50be154936436105b31e6991ccd.png
4. Host name mapping

Windows:

C:\Windows\System32\drivers\etc
Edit the hosts file in this directory.
https://img-blog.csdnimg.cn/img_convert/6da6f3896455ae356874eab994598243.png
https://img-blog.csdnimg.cn/direct/58f0574eed1e45b2a58c0f7a3bbedaac.png
Opening it with VS Code is recommended so that the changes can actually be saved.
Linux (all three machines):

vim /etc/hosts
https://img-blog.csdnimg.cn/img_convert/4bed385b9d38dfc4b9b9681608f65c5d.png
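Given the addresses above, the entries to append are:
192.168.88.201 hadoop1
192.168.88.202 hadoop2
192.168.88.203 hadoop3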
5. Set up passwordless SSH login (all three machines)

Passwordless login for root

1. On every machine, run ssh-keygen -t rsa -b 4096 and press Enter through every prompt.

https://img-blog.csdnimg.cn/img_convert/7d0ea09c538dac4b6700759c2076d0f9.png
2. On every machine, run:

ssh-copy-id hadoop1
ssh-copy-id hadoop2
ssh-copy-id hadoop3
https://img-blog.csdnimg.cn/img_convert/9aaf8b9c581de07a06c30b958c186833.png
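To confirm it works, logging in across machines should no longer prompt for a password, e.g.:
ssh hadoop2
exit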
Passwordless login for the hadoop user

Create the hadoop user and set up passwordless login for it.
1. On every machine, run useradd hadoop to create the hadoop user.

2. On every machine, run passwd hadoop and set the hadoop user's password to 1234.

https://img-blog.csdnimg.cn/img_convert/d0820a8da5716f2033947c83f46ddbfb.png
3. On every machine, switch to the hadoop user with su - hadoop and run ssh-keygen -t rsa -b 4096 to create its SSH key.

https://img-blog.csdnimg.cn/img_convert/9ca800f3b84c046c2730f352c002792f.png
4. On every machine, run:

ssh-copy-id hadoop1
ssh-copy-id hadoop2
ssh-copy-id hadoop3
6. Disable the firewall and SELinux (all three machines)

1.

systemctl stop firewalld

systemctl disable firewalld
https://img-blog.csdnimg.cn/img_convert/a401ba2fe09e47b9154c552c43218384.png
2.

vim /etc/sysconfig/selinux
https://img-blog.csdnimg.cn/img_convert/2f9bee9ab6bef7ce24d2e39da257ab25.png
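The change to make in that file is the SELINUX line, typically:
SELINUX=disabled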
Once that is set, run init 6 to reboot.
https://img-blog.csdnimg.cn/img_convert/0c7abfef8556884e349300838d07818f.png
3.

Run the following on all three Linux machines.

Install the ntp package:
yum install -y ntp

Update the time zone:
rm -f /etc/localtime; sudo ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Sync the time:
ntpdate -u ntp.aliyun.com

Start the ntp service and enable it at boot:
systemctl start ntpd

systemctl enable ntpd
https://img-blog.csdnimg.cn/img_convert/787adc7d2722f07a98c58d679341215a.png
Take snapshot 1 of all three VMs.
https://img-blog.csdnimg.cn/img_convert/8066600384479603849f44094e630368.png
Thank you for reading this far, but what follows is not the best material: I wrote the configuration files below when I was young and full of enthusiasm, without considering many things such as performance and how the installation should be laid out.
Please read this article instead; I consider it the best explanation of the topic.
Article link
Environment setup

1. jdk1.8 Java Downloads | Oracle
2. hadoop-3.3.6 Apache Hadoop
3. hbase-2.5.5.hadoop3x Index of /dist/hbase/2.5.5 (apache.org)
4. zookeeper-3.9.1 Apache ZooKeeper (the commands below use the apache-zookeeper-3.9.1-bin tarball)
Important: everything below is configured as the root user; the hadoop user is granted the corresponding permissions afterwards.
It is recommended to finish all of the configuration in one go, then grant permissions, reload the configuration files, and finally distribute everything.
https://img-blog.csdnimg.cn/img_convert/e072303ef847c2a93e04a1269d2dc3fc.png
https://img-blog.csdnimg.cn/img_convert/8f64f09a00d00e72c9f2944f22ebe49f.png
jdk

Create the folder used to deploy the JDK; everything is installed under /export/server:

cd /

mkdir export

cd export

mkdir server
https://img-blog.csdnimg.cn/img_convert/6d9b07bef4baf4e8c6a51e21fa0314ca.png
https://img-blog.csdnimg.cn/img_convert/6daad08b1379101961e10ad2139f22ef.png
Extract the JDK archive:

tar -zxvf jdk-8u321-linux-x64.tar.gz -C /export/server
Create the JDK symlink.

https://img-blog.csdnimg.cn/img_convert/493410666d13360c7cd53320d7ff2fd1.png
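The command, inferred from the screenshot (the extracted directory name matches the tarball above):
ln -s /export/server/jdk1.8.0_321 /export/server/jdk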
Set the JAVA_HOME environment variable and add $JAVA_HOME/bin to the PATH environment variable.

vim /etc/profile


export JAVA_HOME=/export/server/jdk
export PATH=$PATH:$JAVA_HOME/bin
https://img-blog.csdnimg.cn/img_convert/a710efd7aa97c42dc47792d0244e8709.png
Apply the environment variables:

source /etc/profile

Remove the system's bundled java program:
rm -f /usr/bin/java
Symlink our own java in its place:
ln -s /export/server/jdk/bin/java /usr/bin/java
Verify:

https://img-blog.csdnimg.cn/img_convert/51d5797ce3aa3723c44d6498233e9b88.png
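For example:
java -version
should now report version 1.8.0_321.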
Distribution

First create the target folder on hadoop2 and hadoop3.

https://img-blog.csdnimg.cn/img_convert/1deebb3e592f09abcd3f071f6fa74f13.png
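The folder to create, matching hadoop1:
mkdir -p /export/server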
Distribute the JDK:

cd /export/server/

scp -r jdk1.8.0_321/ hadoop2:`pwd`

scp -r jdk1.8.0_321/ hadoop3:`pwd`

cd /etc

scp -r profile hadoop2:`pwd`

scp -r profile hadoop3:`pwd`
On hadoop2 and hadoop3:

source /etc/profile

Recreate the jdk symlink (scp copied only the real directory, not the link), then point /usr/bin/java at it:
ln -s /export/server/jdk1.8.0_321 /export/server/jdk
rm -f /usr/bin/java
ln -s /export/server/jdk/bin/java /usr/bin/java
https://img-blog.csdnimg.cn/img_convert/10f0c9e248679a56f4dcb05f5fd57196.png
hadoop

Upload and extract

https://img-blog.csdnimg.cn/img_convert/f88d4ea0ad572066d55cd397b9ed1d0e.png
cd /export/server

tar -zxvf hadoop-3.3.6.tar.gz

ln -s hadoop-3.3.6 hadoop
https://img-blog.csdnimg.cn/img_convert/ca612c49c5106d6f97fff30ea6660297.png
Hadoop configuration (the files below live under /export/server/hadoop/etc/hadoop/)

workers

hadoop1

hadoop2

hadoop3
https://img-blog.csdnimg.cn/img_convert/6428a1375f8c386f9accb9c53f26596e.png
hdfs-site.xml

<property>
    <name>dfs.namenode.http-address</name>
    <value>0.0.0.0:9870</value>
    <description>The address and the base port where the dfs namenode web ui will listen on.</description>
</property>
<property>
    <name>dfs.datanode.data.dir.perm</name>
    <value>700</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/nn</value>
</property>
<property>
    <name>dfs.namenode.hosts</name>
    <value>hadoop1,hadoop2,hadoop3</value>
</property>
<property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
</property>
<property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/dn</value>
</property>
core-site.xml

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1:8020</value>
</property>
<property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
</property>
hadoop-env.sh

export JAVA_HOME=/export/server/jdk
export HADOOP_HOME=/export/server/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_LOG_DIR=$HADOOP_HOME/logs
yarn-site.xml

<!-- Site specific YARN configuration properties -->
<property>
    <name>yarn.log.server.url</name>
    <value>http://hadoop1:19888/jobhistory/logs</value>
    <description></description>
</property>

<property>
    <name>yarn.web-proxy.address</name>
    <value>hadoop1:8089</value>
    <description>proxy server hostname and port</description>
</property>


<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
    <description>Configuration to enable or disable log aggregation</description>
</property>

<property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/tmp/logs</value>
    <description>Configuration to enable or disable log aggregation</description>
</property>


<!-- Site specific YARN configuration properties -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop1</value>
    <description></description>
</property>

<property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    <description></description>
</property>

<property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/nm-local</value>
    <description>Comma-separated list of paths on the local filesystem where intermediate data is written.</description>
</property>

<property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/data/nm-log</value>
    <description>Comma-separated list of paths on the local filesystem where logs are written.</description>
</property>

<property>
    <name>yarn.nodemanager.log.retain-seconds</name>
    <value>10800</value>
    <description>Default time (in seconds) to retain log files on the NodeManager Only applicable if log-aggregation is disabled.</description>
</property>

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>Shuffle service that needs to be set for Map Reduce applications.</description>
</property>

<!-- Whether to enable the Timeline service -->
<property>
    <name>yarn.timeline-service.enabled</name>
    <value>true</value>
</property>
<!-- Host of the Timeline web service, accessed on port 8188 -->
<property>
    <name>yarn.timeline-service.hostname</name>
    <value>hadoop1</value>
</property>
<!-- Whether the ResourceManager publishes metrics to the Timeline service -->
<property>
    <name>yarn.system-metrics-publisher.enabled</name>
    <value>false</value>
</property>
mapred-site.xml

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description></description>
</property>

<property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop1:10020</value>
    <description></description>
</property>


<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop1:19888</value>
    <description></description>
</property>


<property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/data/mr-history/tmp</value>
    <description></description>
</property>

<property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/data/mr-history/done</value>
    <description></description>
</property>
<property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
Environment variable setup

vim /etc/profile


export HADOOP_HOME=/export/server/hadoop

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
https://img-blog.csdnimg.cn/img_convert/f579e798bef8f09affafc0daca0916c5.png
Distribute Hadoop to hosts 2 and 3

Send Hadoop:

cd /export/server/

scp -r hadoop-3.3.6/ hadoop2:`pwd`

scp -r hadoop-3.3.6/ hadoop3:`pwd`
Send the environment variables:

cd /etc

scp -r profile hadoop2:`pwd`

scp -r profile hadoop3:`pwd`
Other setup

Create the symlink on hadoop2 and hadoop3:
cd /export/server/

ln -s hadoop-3.3.6/ hadoop
https://img-blog.csdnimg.cn/img_convert/05a715d47186a95178c1614d0fd373d5.png
Reload the environment variables:

source /etc/profile

hadoop version
https://img-blog.csdnimg.cn/img_convert/f54c219b4c89f174e5b652bb64f62de7.png
Hadoop permission setup

Run on all three hosts as root to grant the hadoop user the relevant permissions:
mkdir -p /data/nn

mkdir -p /data/dn

chown -R hadoop:hadoop /data

chown -R hadoop:hadoop /export

https://img-blog.csdnimg.cn/img_convert/fac9e3c9e7043d8004d9e81ef8f6df9a.png
Take snapshot 2.

Format and start

1. Switch to the hadoop user:

su - hadoop
2. Format the NameNode (run this once, and only on hadoop1, which hosts the NameNode):

hdfs namenode -format
3. Start!!!

Start everything at once:
start-all.sh
Or start the components separately:
start-dfs.sh

start-yarn.sh
https://img-blog.csdnimg.cn/img_convert/f12982ba03e1458f776a6d250d5d8a14.png
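If everything came up, running jps on each node should show the daemons; with this configuration, hadoop1 should list roughly NameNode, SecondaryNameNode, DataNode, ResourceManager and NodeManager, while hadoop2 and hadoop3 list DataNode and NodeManager:
jps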
View the web UIs

https://img-blog.csdnimg.cn/img_convert/aefc7992a6a257b127b11711fd6b1c2e.png
https://img-blog.csdnimg.cn/img_convert/ba0a8a61d7b7cec11d352a543c23cf69.png
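With the configuration above, the NameNode UI is at http://hadoop1:9870, and the YARN ResourceManager UI is at its default address, http://hadoop1:8088.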
zookeeper

Upload and extract

https://img-blog.csdnimg.cn/img_convert/0422cf9155e30fc34bcbac23443b8bc4.png
cd /export/server/

tar -zxvf apache-zookeeper-3.9.1-bin.tar.gz

ln -s apache-zookeeper-3.9.1-bin zookeeper

rm -rf apache-zookeeper-3.9.1-bin.tar.gz
https://img-blog.csdnimg.cn/img_convert/9355aca1aa993416970db97b5f46986b.png
Configuration

cd /export/server/zookeeper/conf/

cp zoo_sample.cfg zoo.cfg
# Edit the zoo.cfg config file: point the dataDir line at the data directory below and append the server list
vim zoo.cfg
dataDir=/export/server/zookeeper/zkData

server.2=hadoop1:2888:3888
server.1=hadoop2:2888:3888
server.3=hadoop3:2888:3888
https://img-blog.csdnimg.cn/direct/cf92052831f24039a9306210eb30c32d.png
Create the data directory and the myid file inside it (myid must sit in dataDir):
cd ..

mkdir zkData

cd zkData

vim myid
https://img-blog.csdnimg.cn/img_convert/bb6a0bdc01117d346bc9fdb72f35d923.png
https://img-blog.csdnimg.cn/direct/5d0dc0d4ede6461ba9d3d69160c0623d.png
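Per the server.N lines in zoo.cfg above, hadoop1's id is 2, so on hadoop1 the file contains just the digit 2 (e.g. echo 2 > myid); hadoop2 and hadoop3 are changed to 1 and 3 after distribution, as shown below.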
Distribution and environment variables

Environment variables:

vim /etc/profile
export ZOOKEEPER_HOME=/export/server/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin
https://img-blog.csdnimg.cn/img_convert/403c32d6f492e4317c014ee774dfc9c6.png
Distribution:

cd /etc

scp -r profile hadoop2:`pwd`

scp -r profile hadoop3:`pwd`

cd /export/server/

scp -r apache-zookeeper-3.9.1-bin/ hadoop2:`pwd`

scp -r apache-zookeeper-3.9.1-bin/ hadoop3:`pwd`
On hadoop2 and hadoop3, create the symlink (from /export/server):
ln -s apache-zookeeper-3.9.1-bin/ zookeeper
Edit the myid file on hadoop2 and hadoop3

cd /export/server/zookeeper/zkData/
https://img-blog.csdnimg.cn/img_convert/a3017c68a56f1794c54b9c6a1edd280c.png
On hadoop1, set it to 2
On hadoop2, set it to 1
On hadoop3, set it to 3
https://img-blog.csdnimg.cn/img_convert/c6397520f7e3c88bf2cc39d079a07b4f.png
Reload the profile

source /etc/profile

Grant the permissions again:
chown -R hadoop:hadoop /export
Start (run on all three machines)

su - hadoop

zkServer.sh start
Check the status:

zkServer.sh status
https://img-blog.csdnimg.cn/img_convert/06c25b668870b602f0797d8529c16dec.png
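If the ensemble formed correctly, one node reports Mode: leader and the other two report Mode: follower.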
hbase

Upload and extract

https://img-blog.csdnimg.cn/img_convert/f2ea70bde398a1b77f00ae5b1291c8e0.png
tar -zxvf hbase-2.5.5-hadoop3-bin.tar.gz

ln -s hbase-2.5.5-hadoop3 hbase

rm -rf hbase-2.5.5-hadoop3-bin.tar.gz
https://img-blog.csdnimg.cn/img_convert/4fd57ed91c7359d3017fdbe8ec2538be.png
Configuration

cd /export/server/hbase/conf/

mkdir -p /data/hbase/logs
hbase-env.sh

export JAVA_HOME=/export/server/jdk
export HBASE_MANAGES_ZK=false
regionservers

https://img-blog.csdnimg.cn/img_convert/b8b5bc01f49c9cbcaa61326befadaf84.png
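As with the Hadoop workers file, the regionservers file in the screenshot presumably lists the three hosts:
hadoop1
hadoop2
hadoop3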
backup-masters

vim backup-masters
https://img-blog.csdnimg.cn/direct/3ae0972adafd4e66a672fccba868198b.png
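A sketch of its content, assuming hadoop2 is to run as the standby HMaster (the actual host is whatever the screenshot shows):
hadoop2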
hbase-site.xml

<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop1,hadoop2,hadoop3</value>
</property>
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop1:8020/hbase</value>
</property>

<property>
    <name>hbase.wal.provider</name>
    <value>filesystem</value>
</property>
Distribution, permissions and environment variables

Environment variables:

vim /etc/profile
export HBASE_HOME=/export/server/hbase
export PATH=$PATH:$HBASE_HOME/bin
https://img-blog.csdnimg.cn/img_convert/299ba90df1741cf81a90162d9a939847.png
Distribution:

cd /export/server

scp -r hbase-2.5.5-hadoop3/ hadoop2:`pwd`

scp -r hbase-2.5.5-hadoop3/ hadoop3:`pwd`

Create the symlink on hadoop2 and hadoop3 (in /export/server):

ln -s hbase-2.5.5-hadoop3/ hbase

cd /etc

scp -r profile hadoop2:`pwd`

scp -r profile hadoop3:`pwd`

source /etc/profile
Permissions (run on all machines)

chown -R hadoop:hadoop /export
chown -R hadoop:hadoop /data

Start

su - hadoop

start-hbase.sh
https://img-blog.csdnimg.cn/img_convert/d8b2d611dc238945256d40762322be7b.png
https://img-blog.csdnimg.cn/img_convert/6327f50c73a1f640af25006a2ee44636.png
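If startup succeeded, jps should additionally show HMaster on hadoop1 (and on the backup master) plus HRegionServer on each region server, and the HBase web UI is reachable at its default address, http://hadoop1:16010.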
