
Title: Deploying Hive 4.0.0 (latest version) on Linux

Author: 忿忿的泥巴坨    Time: 2024-9-27 00:10
Prerequisites
  Hadoop 3.4.0 (latest version) cluster deployment on Linux - CSDN blog
  Deploying a MySQL 8 database on Linux - CSDN blog
  Official site: Apache Hive

Major change: in Hive 4.0.0 the Hive CLI has been deprecated and replaced by Beeline, so starting Hive 4.0.0 drops you into the Beeline command-line interface by default rather than the Hive CLI.

1. Download the installation package: apache-hive-4.0.0-bin.tar.gz

Download location: Index of /hive/hive-4.0.0

2. Extract the archive

Upload apache-hive-4.0.0-bin.tar.gz to the /usr/local/soft/ directory on the Linux system.
  cd /usr/local/soft/
  tar -zxvf apache-hive-4.0.0-bin.tar.gz
3. Modify the system environment variables

  vim /etc/profile
Add the following:
  export HIVE_HOME=/usr/local/soft/apache-hive-4.0.0-bin
  export PATH=$PATH:$HADOOP_HOME/sbin:$HIVE_HOME/bin
Apply the changes:
  source /etc/profile
4. Modify the Hive environment variables

  cd /usr/local/soft/apache-hive-4.0.0-bin/bin/
Edit the hive-config.sh file:
  vi hive-config.sh
Add the following:
  export JAVA_HOME=/usr/local/soft/jdk1.8.0_381
  export HIVE_HOME=/usr/local/soft/apache-hive-4.0.0-bin
  export HADOOP_HOME=/usr/local/soft/hadoop-3.4.0
  export HIVE_CONF_DIR=/usr/local/soft/apache-hive-4.0.0-bin/conf



5. Copy the Hive configuration file

  cd /usr/local/soft/apache-hive-4.0.0-bin/conf/
  cp hive-default.xml.template hive-site.xml
6. Modify the Hive configuration file, finding and editing the corresponding entries

You can also simply replace the whole <configuration> section with the following:
  <configuration>
    <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.cj.jdbc.Driver</value>
      <description>Driver class name for a JDBC metastore</description>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>root</value>
      <description>Username to use against metastore database</description>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>root123</value>
      <description>password to use against metastore database</description>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://192.168.1.5:3306/hive?useUnicode=true&amp;characterEncoding=utf8&amp;useSSL=false&amp;serverTimezone=GMT</value>
      <description>
        JDBC connect string for a JDBC metastore.
        To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
        For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
      </description>
    </property>
    <property>
      <name>datanucleus.schema.autoCreateAll</name>
      <value>true</value>
      <description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once. To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
    </property>
    <property>
      <name>hive.metastore.schema.verification</name>
      <value>false</value>
      <description>
        Enforce metastore schema version consistency.
        True: Verify that version information stored in is compatible with one from Hive jars.  Also disable automatic
              schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
              proper metastore schema migration. (Default)
        False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
      </description>
    </property>
    <property>
      <name>hive.exec.local.scratchdir</name>
      <value>/usr/local/soft/apache-hive-4.0.0-bin/tmp/${user.name}</value>
      <description>Local scratch space for Hive jobs</description>
    </property>
    <property>
      <name>system:java.io.tmpdir</name>
      <value>/usr/local/soft/apache-hive-4.0.0-bin/iotmp</value>
      <description/>
    </property>
    <property>
      <name>hive.downloaded.resources.dir</name>
      <value>/usr/local/soft/apache-hive-4.0.0-bin/tmp/${hive.session.id}_resources</value>
      <description>Temporary local directory for added resources in the remote file system.</description>
    </property>
    <property>
      <name>hive.querylog.location</name>
      <value>/usr/local/soft/apache-hive-4.0.0-bin/tmp/${system:user.name}</value>
      <description>Location of Hive run time structured log file</description>
    </property>
    <property>
      <name>hive.server2.logging.operation.log.location</name>
      <value>/usr/local/soft/apache-hive-4.0.0-bin/tmp/${system:user.name}/operation_logs</value>
      <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
    </property>
    <property>
      <name>hive.metastore.db.type</name>
      <value>mysql</value>
      <description>
        Expects one of [derby, oracle, mysql, mssql, postgres].
        Type of database used by the metastore. Information schema &amp; JDBCStorageHandler depend on it.
      </description>
    </property>
    <property>
      <name>hive.cli.print.current.db</name>
      <value>true</value>
      <description>Whether to include the current database in the Hive prompt.</description>
    </property>
    <property>
      <name>hive.cli.print.header</name>
      <value>true</value>
      <description>Whether to print the names of the columns in query output.</description>
    </property>
    <property>
      <name>hive.metastore.warehouse.dir</name>
      <value>/user/hive/warehouse</value>
      <description>location of default database for the warehouse</description>
    </property>
    <property>
      <name>hive.metastore.uris</name>
      <value>thrift://192.168.1.11:9083</value>
    </property>
    <property>
      <name>hive.metastore.event.db.notification.api.auth</name>
      <value>false</value>
    </property>
    <property>
      <name>hive.server2.thrift.bind.host</name>
      <value>node11</value>
    </property>
    <property>
      <name>hive.server2.thrift.port</name>
      <value>10000</value>
    </property>
  </configuration>
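The configuration above points several scratch and log locations at tmp/ and iotmp/ under the Hive install directory; as a small precautionary sketch, those directories can be created up front (paths taken from the values above):
  mkdir -p /usr/local/soft/apache-hive-4.0.0-bin/tmp
  mkdir -p /usr/local/soft/apache-hive-4.0.0-bin/iotmp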
7. Upload the MySQL driver package to the /usr/local/soft/apache-hive-4.0.0-bin/lib/ directory

Driver package: mysql-connector-java-8.0.15.zip; unzip it and take the jar file from inside.
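A minimal sketch of this step, assuming the archive unpacks into a folder of the same name and the jar inside is named mysql-connector-java-8.0.15.jar (check the actual names after unzipping):
  cd /usr/local/soft/
  unzip mysql-connector-java-8.0.15.zip
  # Copy the connector jar into Hive's lib directory
  cp mysql-connector-java-8.0.15/mysql-connector-java-8.0.15.jar /usr/local/soft/apache-hive-4.0.0-bin/lib/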
8. Make sure the MySQL instance has a database named hive with its character set set to latin1; otherwise dropping Hive tables can hang.
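A sketch of creating that database from the shell, using the MySQL host and credentials configured in hive-site.xml above (adjust if yours differ):
  # Create the metastore database with the latin1 character set
  mysql -h 192.168.1.5 -uroot -proot123 \
    -e "CREATE DATABASE IF NOT EXISTS hive DEFAULT CHARACTER SET latin1;"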

9. Initialize the metastore database

  schematool -dbType mysql -initSchema
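As an optional sanity check after initialization, schematool can report the schema version it finds in MySQL:
  schematool -dbType mysql -info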
10. Make sure Hadoop is started

Run the following command on node11:
  start-all.sh
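You can confirm the Hadoop daemons are up with jps; the exact process list depends on which roles run on node11, so treat this as a rough check:
  jps
  # Expect processes such as NameNode, DataNode, ResourceManager and NodeManager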
11. Start the services

Reminder of the major change: in Hive 4.0.0 the Hive CLI has been deprecated in favor of Beeline, so starting Hive drops you into the Beeline command-line interface by default rather than the Hive CLI.

Before connecting to the Hive service with the Beeline command line, make sure the following services are started and configured:

- Hadoop: Hive depends on Hadoop to run, so make sure the Hadoop services are started and the relevant parameters in their configuration files are correct. Command: start-all.sh
- Hive Metastore: the Hive metadata storage service; make sure the Metastore service is started and its address is correctly configured for Beeline. Command: hive --service metastore
- HiveServer2: the Hive query service; make sure the HiveServer2 service is started and its address is correctly configured for Beeline. Command: hive --service hiveserver2
Start the metastore service:

  hive --service metastore
Or (this way you do not need to open another shell afterwards):
  hive --service metastore 2>&1 &

Start the HiveServer2 service (in a new shell window):

  hiveserver2

or

  hive --service hiveserver2
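Once both services are running, a quick way to confirm they are listening on the expected ports (9083 for the Metastore and 10000 for HiveServer2, per the hive-site.xml above):
  ss -tlnp | grep -E '9083|10000'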
12. Start the Beeline client

In a new window, run the hive or beeline command.

Then enter:
  !connect jdbc:hive2://node11:10000

Or start the client directly with:
  beeline -u jdbc:hive2://node11:10000 -n root
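As a quick sanity check, the same connection can be exercised non-interactively to list databases (a sketch reusing the node11/root values from the command above):
  beeline -u jdbc:hive2://node11:10000 -n root -e "show databases;"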

FAQ

Exception in thread "main" MetaException(message:JDOFatalInternalException: Index/candidate part #0 for `CTLGS` already set
Root cause: org.datanucleus.exceptions.NucleusException: Index/candidate part #0 for `CTLGS` already set)

 

Solution:
  cp /usr/local/soft/hadoop-3.4.0/share/hadoop/hdfs/lib/guava-27.0-jre.jar /usr/local/soft/apache-hive-4.0.0-bin/lib/
Or check whether Hadoop's core-site.xml contains the following:
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
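If core-site.xml had to be modified, restart Hadoop so the proxyuser settings take effect (same scripts as in step 10):
  stop-all.sh
  start-all.sh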
Beeline fails to start

Solution: try connecting again a few times.




