which: no hbase in (/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/hadoop/.local/bin:/home/hadoop/bin:/home/hadoop/bin:/opt/module/jdk1.8.0_144/bin:/opt/module/hadoop-3.3.0/bin:/opt/module/hadoop-3.3.0/sbin:/opt/module/apache-hive-2.1.1-bin/bin:/opt/module/sqoop/bin:/opt/module/azkaban-2.5.0/azkaban-web-2.5.0/bin:/opt/module/azkaban-2.5.0/azkaban-executor-2.5.0/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/module/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/module/hadoop-3.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = 7fa59ebc-f38c-42eb-a01b-f2369cdd5432
Logging initialized using configuration in jar:file:/opt/module/apache-hive-2.1.1-bin/lib/hive-common-3.0.0.jar!/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /tmp/hive/hadoop/7fa59ebc-f38c-42eb-a01b-f2369cdd5432. Name node is in safe mode.
The reported blocks 219 needs additional 1 blocks to reach the threshold 0.9990 of total blocks 221.
The minimum number of live datanodes is not required. Safe mode will be turned off automatically once the thresholds have been reached. NamenodeHostName:node100
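Because the NameNode is still in safe mode, you can either wait for the reported blocks to reach the threshold or take it out of safe mode by hand. A minimal sketch, assuming the hadoop user on node100 has the hdfs client on its PATH:

# Check whether the NameNode is still in safe mode
hdfs dfsadmin -safemode get
# Leave safe mode manually instead of waiting for the block threshold
hdfs dfsadmin -safemode leave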
4. Check the Hive runtime log
<property>
    <name>hive.querylog.location</name>
    <value>${system:java.io.tmpdir}/${system:user.name}</value>
    <description>Location of Hive run time structured log file</description>
</property>
On my machine that resolves to /tmp/hadoop/hive.log. The error log shows "Error starting HiveServer2 on attempt 2, will retry in 60000ms": the second attempt to start HiveServer2 failed and it retries 60 seconds later. It retried many times, and each attempt generates a new session ID, which is why so many session IDs were printed on the console. The cause of the failure is the same as before: the reported blocks have not reached the threshold, so the NameNode is still in safe mode.
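To watch the retries as they happen, you can follow the log file directly. A minimal sketch, assuming the log directory resolves to /tmp/hadoop as above:

# Follow HiveServer2's retry attempts in real time
tail -f /tmp/hadoop/hive.log
# Or list only the failed start attempts
grep 'Error starting HiveServer2' /tmp/hadoop/hive.log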
2024-09-06T18:56:53,217 INFO [main] server.HiveServer2: Starting HiveServer2
2024-09-06T18:56:53,319 INFO [main] SessionState: Hive Session ID = a4fe191c-6fe4-4963-adc8-1c11c7cb3b8e
2024-09-06T18:56:53,367 INFO [main] server.HiveServer2: Shutting down HiveServer2
2024-09-06T18:56:53,367 INFO [main] server.HiveServer2: Stopping/Disconnecting tez sessions.
2024-09-06T18:56:53,367 WARN [main] server.HiveServer2: Error starting HiveServer2 on attempt 2, will retry in 60000ms
java.lang.RuntimeException: Error applying authorization policy on hive configuration: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /tmp/hive/hadoop/a4fe191c-6fe4-4963-adc8-1c11c7cb3b8e. Name node is in safe mode.
The reported blocks 219 needs additional 1 blocks to reach the threshold 0.9990 of total blocks 221.
The minimum number of live datanodes is not required. Safe mode will be turned off automatically once the thresholds have been reached. NamenodeHostName:node100
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1570)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1557)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3406)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1161)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:739)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:532)
tcp        0      0 192.168.5.100:8033      0.0.0.0:*               LISTEN      2299/java
However, grepping for the port number alone does work, though the last column no longer shows which application it belongs to:
[hadoop@node100 ~]$ netstat -nltp|grep 10000
(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
tcp6       0      0 :::10000                :::*                    LISTEN      -
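As the netstat warning says, the PID/Program name column is hidden for sockets owned by other users. A minimal sketch of how to see which process holds port 10000, assuming root (or sudo) access on node100:

# Re-run as root so the PID/Program name column is populated for every socket
sudo netstat -nltp | grep 10000
# ss reports the same information and is the modern replacement for netstat
sudo ss -nltp | grep 10000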