Title: CDH Hive cluster slow create/drop issue: analyzing the just-over-200s behavior
Author: 天津储鑫盛钢材现货供应商    Posted: 2024-10-11 10:33

Symptom:
CREATE TABLE test911 (SN String,PN_CODE String);
Total time spent in this metastore function was greater than 1000ms : createTable_(Table, )=200091
create/drop operations on the Hive cluster consistently take just over 200 seconds.
Analysis:
HMS pushes notification events to Sentry in near real time. When a large batch of notifications needs to be processed and the background sync thread cannot keep up, the resulting message backlog produces this anomaly.
The anomaly does not prevent normal use of the cluster; it only makes create, drop, and similar operations slow, each waiting 200s. The purpose of the wait is to let Sentry catch up to the latest notification ID.
Looking at the metadata both services use for notification sync: the SENTRY_HMS_NOTIFICATION_ID table in the Sentry metastore stops advancing, while the NOTIFICATION_SEQUENCE table in the Hive metastore keeps advancing. In other words, Sentry lags behind in consuming notifications from the HMS side. Hive HMS then raises a canary alert, and the create database operations above all take roughly 200.xx seconds. That timeout is controlled by the sentry.notification.sync.timeout.ms parameter (200s), which also explains why create/drop operations take just over 200 seconds.
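To make the timing concrete: the behavior described above amounts to HMS blocking each DDL until Sentry's counter reaches the event ID the DDL just produced, or until the sync timeout expires. The sketch below only illustrates that mechanism and is not Sentry's actual code; wait_for_sentry_sync, get_sentry_id, and target_event_id are hypothetical names standing in for the SENTRY_HMS_NOTIFICATION_ID counter and the HMS event ID.

import time

def wait_for_sentry_sync(get_sentry_id, target_event_id,
                         timeout_ms=200_000, poll_ms=500):
    """Illustrative model (not Sentry's code): block until Sentry's counter
    reaches the event id produced by this DDL, or until the timeout expires."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if get_sentry_id() >= target_event_id:
            return True        # Sentry caught up: the DDL returns promptly
        time.sleep(poll_ms / 1000.0)
    return False               # backlog never drained: the DDL pays ~timeout_ms

When the backlog keeps growing, the caught-up branch is never reached, so every create/drop pays the full timeout, matching the constant 200.xx s observed above.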
Solutions:
1. Lower the sentry.notification.sync.timeout.ms parameter appropriately
The parameter defaults to 200s. Lowering it shortens the wait for create/drop/alter and similar operations; when the backlog is small, this option lets Sentry work through the messages on its own.
In Cloudera Manager, change the Sentry service configuration: sentry.notification.sync.timeout.ms = 10000
After restarting the Sentry service with the new value, the HMS canary timeout dropped to just over 10s, showing the parameter took effect.
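A simple way to confirm the change took effect is to time a DDL round trip before and after the restart. The following is a rough sketch assuming beeline is on the PATH and that JDBC_URL is adjusted to your HiveServer2 address; both are environment-specific assumptions.

import subprocess
import time

JDBC_URL = "jdbc:hive2://your-hs2-host:10000"  # assumption: replace with your HiveServer2 URL

def timed_ddl(sql):
    """Run one DDL statement through beeline and return its wall-clock time."""
    start = time.monotonic()
    subprocess.run(["beeline", "-u", JDBC_URL, "-e", sql], check=True)
    return time.monotonic() - start

print("create: %.1fs" % timed_ddl("CREATE TABLE test_sync_check (id STRING)"))
print("drop:   %.1fs" % timed_ddl("DROP TABLE test_sync_check"))
# While the backlog persists, both times should now hover just above 10s
# (the new sentry.notification.sync.timeout.ms) rather than ~200s.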
2. Manually update the record
Check SENTRY_HMS_NOTIFICATION_ID in the Sentry metastore:
mysql> use sentry;
mysql> select * from SENTRY_HMS_NOTIFICATION_ID order by NOTIFICATION_ID desc limit 2;
+-----------------+
| NOTIFICATION_ID |
+-----------------+
|        34044370 |
|        34044369 |
+-----------------+
Check NOTIFICATION_SEQUENCE in the Hive metastore:
mysql> use metastore;
mysql> select * from NOTIFICATION_SEQUENCE limit 1;
+--------+---------------+
| NNI_ID | NEXT_EVENT_ID |
+--------+---------------+
|      1 |      34047898 |
+--------+---------------+
Whenever Sentry lags behind Hive like this, HMS is bound to raise the canary alert.
If the backlog is too large and Sentry would take too long to consume it, it may never catch up to the latest HMS ID. In that case you can choose to discard the pending notifications (a scripted version of this fix is sketched after the session below):
mysql> use sentry;
mysql> insert into SENTRY_HMS_NOTIFICATION_ID values(34047898);
mysql> select * from SENTRY_HMS_NOTIFICATION_ID order by NOTIFICATION_ID desc limit 2;
+-----------------+
| NOTIFICATION_ID |
+-----------------+
|        34047898 |
|        34044575 |
+-----------------+
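For repeatability, the two mysql sessions above can be combined into one small script. This is a sketch under explicit assumptions: both schemas live on the same MySQL instance, the PyMySQL driver is installed, and the connection parameters are placeholders. Note that any HMS events in the skipped range will never be applied to Sentry, so run it in a quiet window (or with the Sentry service stopped).

import pymysql

# placeholders: adjust host/user/password for your metastore MySQL instance
conn = pymysql.connect(host="mysql-host", user="root", password="***")
try:
    with conn.cursor() as cur:
        # read the latest event id from the Hive metastore
        cur.execute("SELECT NEXT_EVENT_ID FROM metastore.NOTIFICATION_SEQUENCE LIMIT 1")
        next_event_id = cur.fetchone()[0]
        # advance Sentry's counter to that id, discarding the backlog
        cur.execute(
            "INSERT INTO sentry.SENTRY_HMS_NOTIFICATION_ID (NOTIFICATION_ID) VALUES (%s)",
            (next_event_id,),
        )
    conn.commit()
    print("Sentry notification id advanced to", next_event_id)
finally:
    conn.close()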
Detailed execution trace of the Hive SQL (HMS log):
2024-09-13 09:12:49,886 DEBUG org.apache.hadoop.security.UserGroupInformation: [pool-8-thread-200]: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
2024-09-13 09:12:49,887 DEBUG org.apache.hadoop.hive.ql.log.PerfLogger: [pool-8-thread-200]:
2024-09-13 09:12:49,887 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-8-thread-200]: 200: source:10.11.16.36 create_table: Table(tableName:test912, dbName:test, owner:root, createTime:1726189969, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:sn, type:string, comment:null), FieldSchema(name:pn_code, type:string, comment:null)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[], parameters:{totalSize=0, numRows=0, rawDataSize=0, COLUMN_STATS_ACCURATE={"BASIC_STATS":"true"}, numFiles=0, numFilesErasureCoded=0}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{root=[PrivilegeGrantInfo(privilege:INSERT, createTime:-1, grantor:root, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:SELECT, createTime:-1, grantor:root, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:UPDATE, createTime:-1, grantor:root, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:DELETE, createTime:-1, grantor:root, grantorType:USER, grantOption:true)]}, groupPrivileges:null, rolePrivileges:null), temporary:false, ownerType:USER)
2024-09-13 09:12:49,887 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-8-thread-200]: ugi=root ip=10.11.16.36
cmd=source:10.11.16.36 create_table: Table(tableName:test912, dbName:test, owner:root, createTime:1726189969, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:sn, type:string, comment:null), FieldSchema(name:pn_code, type:string, comment:null)],
location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[], parameters:{totalSize=0, numRows=0, rawDataSize=0, COLUMN_STATS_ACCURATE={"BASIC_STATS":"true"}, numFiles=0, numFilesErasureCoded=0}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{root=[PrivilegeGrantInfo(privilege:INSERT, createTime:-1, grantor:root, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:SELECT, createTime:-1, grantor:root, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:UPDATE, createTime:-1, grantor:root, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:DELETE, createTime:-1, grantor:root, grantorType:USER, grantOption:true)]}, groupPrivileges:null, rolePrivileges:null), temporary:false, ownerType:USER)
2024-09-13 09:12:49,888 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Open transaction: count = 1, isActive = true at:
sun.reflect.GeneratedMethodAccessor76.invoke(Unknown Source)
2024-09-13 09:12:49,888 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Open transaction: count = 2, isActive = true at:
org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.start(ObjectStore.java:3067)
2024-09-13 09:12:49,888 DEBUG org.apache.hadoop.hive.metastore.MetastoreDirectSqlUtils: [pool-8-thread-200]: Direct SQL query in 0.156529ms + 0.001045ms, the query is [SET @@session.sql_mode=ANSI_QUOTES]
2024-09-13 09:12:49,888 TRACE org.apache.hadoop.hive.metastore.MetaStoreDirectSql: [pool-8-thread-200]: getDatabase: query instantiated : select "DB_ID", "NAME", "DB_LOCATION_URI", "DESC", "OWNER_NAME", "OWNER_TYPE", "CREATE_TIME" FROM "DBS" where "NAME" = ? with param [test]
2024-09-13 09:12:49,889 TRACE org.apache.hadoop.hive.metastore.MetaStoreDirectSql: [pool-8-thread-200]: getDatabase: query2 instantiated : select "PARAM_KEY", "PARAM_VALUE" FROM "DATABASE_PARAMS" WHERE "DB_ID" = ? AND "PARAM_KEY" IS NOT NULL with param [253191]
2024-09-13 09:12:49,889 DEBUG org.apache.hadoop.hive.metastore.MetaStoreDirectSql: [pool-8-thread-200]: getDatabase: directsql returning db test locn[hdfs://nameservice1/user/hive/warehouse/test.db] desc [null] owner [hive] ownertype [USER]
2024-09-13 09:12:49,889 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Using direct SQL optimization.
2024-09-13 09:12:49,889 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Commit transaction: count = 1, isactive true at:
org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.commit(ObjectStore.java:3160)
2024-09-13 09:12:49,889 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: db details for db test retrieved using
SQL in 1.599198ms
2024-09-13 09:12:49,889 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Open transaction: count = 2, isActive = true at:
org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:1270)
2024-09-13 09:12:49,889 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Open transaction: count = 3, isActive = true at:
org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:1457)
2024-09-13 09:12:49,890 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Commit transaction: count = 2, isactive true at:
org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:1471)
2024-09-13 09:12:49,890 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Commit transaction: count = 1, isactive true at:
org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:1272)
2024-09-13 09:12:49,890 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Open transaction: count = 2, isActive = true at:
org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.start(ObjectStore.java:3067)
2024-09-13 09:12:49,890 DEBUG org.apache.hadoop.hive.metastore.MetastoreDirectSqlUtils: [pool-8-thread-200]: Direct SQL query in 0.126999ms + 6.9E-4ms, the query is [SET @@session.sql_mode=ANSI_QUOTES]
2024-09-13 09:12:49,890 TRACE org.apache.hadoop.hive.metastore.MetaStoreDirectSql: [pool-8-thread-200]: getDatabase: query instantiated : select "DB_ID", "NAME", "DB_LOCATION_URI", "DESC", "OWNER_NAME", "OWNER_TYPE", "CREATE_TIME" FROM "DBS" where "NAME" = ? with param [test]
2024-09-13 09:12:49,890 TRACE org.apache.hadoop.hive.metastore.MetaStoreDirectSql: [pool-8-thread-200]: getDatabase: query2 instantiated : select "PARAM_KEY", "PARAM_VALUE" FROM "DATABASE_PARAMS" WHERE "DB_ID" = ? AND "PARAM_KEY" IS NOT NULL with param [253191]
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hive.metastore.MetaStoreDirectSql: [pool-8-thread-200]: getDatabase: directsql returning db test locn[hdfs://nameservice1/user/hive/warehouse/test.db] desc [null] owner [hive] ownertype [USER]
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Using direct SQL optimization.
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Commit transaction: count = 1, isactive true at:
org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.commit(ObjectStore.java:3160)
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: db details for db test retrieved using
SQL in 0.717265ms
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.fs.FileSystem: [pool-8-thread-200]: Looking for FS supporting hdfs
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.fs.FileSystem: [pool-8-thread-200]: looking for configuration option fs.hdfs.impl
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.fs.FileSystem: [pool-8-thread-200]: Looking in service filesystems for implementation class
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.fs.FileSystem: [pool-8-thread-200]: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf: [pool-8-thread-200]: dfs.client.use.legacy.blockreader.local = false
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf: [pool-8-thread-200]: dfs.client.read.shortcircuit = false
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf: [pool-8-thread-200]: dfs.client.domain.socket.data.traffic = false
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf: [pool-8-thread-200]: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hdfs.DFSClient: [pool-8-thread-200]: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
2024-09-13 09:12:49,892 TRACE org.apache.hadoop.security.SecurityUtil: [pool-8-thread-200]: Name lookup for IT-CDH-Node01 took 0 ms.
2024-09-13 09:12:49,892 TRACE org.apache.hadoop.security.SecurityUtil: [pool-8-thread-200]: Name lookup for IT-CDH-Node02 took 0 ms.
2024-09-13 09:12:49,892 DEBUG org.apache.hadoop.hdfs.HAUtilClient: [pool-8-thread-200]: No HA service delegation token found for logical URI hdfs://nameservice1/user/hive/warehouse/test.db/test912
2024-09-13 09:12:49,892 DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf: [pool-8-thread-200]: dfs.client.use.legacy.blockreader.local = false
2024-09-13 09:12:49,892 DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf: [pool-8-thread-200]: dfs.client.read.shortcircuit = false
2024-09-13 09:12:49,892 DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf: [pool-8-thread-200]: dfs.client.domain.socket.data.traffic = false
2024-09-13 09:12:49,892 DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf: [pool-8-thread-200]: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
2024-09-13 09:12:49,892 DEBUG org.apache.hadoop.io.retry.RetryUtils: [pool-8-thread-200]: multipleLinearRandomRetry = null
2024-09-13 09:12:49,892 DEBUG org.apache.hadoop.ipc.Client: [pool-8-thread-200]: getting client out of cache: org.apache.hadoop.ipc.Client@a4c506
2024-09-13 09:12:49,892 DEBUG org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil: [pool-8-thread-200]: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
2024-09-13 09:12:49,893 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Call -> IT-CDH-Node01/10.11.16.35:8020: getFileInfo {src: "/user/hive/warehouse/test.db/test912"}
2024-09-13 09:12:49,893 DEBUG org.apache.hadoop.ipc.Client: [pool-8-thread-200]: The ping interval is 60000 ms.
2024-09-13 09:12:49,893 DEBUG org.apache.hadoop.ipc.Client: [pool-8-thread-200]: Connecting to IT-CDH-Node01/10.11.16.35:8020
2024-09-13 09:12:49,895 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Exception <- IT-CDH-Node01/10.11.16.35:8020: getFileInfo {org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1962)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1421)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3055)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1151)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:940)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
}
2024-09-13 09:… org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1962)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1421)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3055)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1151)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:940)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1499) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1445) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1355) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at com.sun.proxy.$Proxy34.getFileInfo(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:875) ~[hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_202]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_202]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at com.sun.proxy.$Proxy35.getFileInfo(Unknown Source) [?:?]
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1630) [hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1496) [hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1493) [hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1508) [hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:476) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1548) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1615) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at sun.reflect.GeneratedMethodAccessor173.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_202]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_202]
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at com.sun.proxy.$Proxy28.create_table_with_environment_context(Unknown Source) [?:?]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:10993) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:10977) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_202]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_202]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_202]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_202]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_202]
2024-09-13 09:12:49,898 DEBUG org.apache.hadoop.io.retry.RetryInvocationHandler: [pool-8-thread-200]: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1962)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1421)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3055)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1151)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:940)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
, while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over IT-CDH-Node01/10.11.16.35:8020. Trying to failover immediately.
org.apache.hadoop.ipc.RemoteException: Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1962)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1421)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3055)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1151)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:940)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1499) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1445) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1355) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at com.sun.proxy.$Proxy34.getFileInfo(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:875) ~[hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_202]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_202]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at com.sun.proxy.$Proxy35.getFileInfo(Unknown Source) [?:?]
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1630) [hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1496) [hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1493) [hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1508) [hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:476) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1548) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1615) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at sun.reflect.GeneratedMethodAccessor173.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_202]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_202]
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at com.sun.proxy.$Proxy28.create_table_with_environment_context(Unknown Source) [?:?]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:10993) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:10977) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_202]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_202]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_202]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_202]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_202]