Azkaban job fails with java.lang.RuntimeException: The root scratch dir: /tmp/hive

2016-03-23 levy-linux
Category: HADOOP

An Azkaban job failed with the following error:

23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR - Exception in thread "main" org.apache.hive.service.cli.HiveSQLException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwxr-xr-x
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:231)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:222)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:459)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:178)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at java.sql.DriverManager.getConnection(DriverManager.java:571)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at java.sql.DriverManager.getConnection(DriverManager.java:215)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at com.geo.gdata.common.HiveClientUtils.getConnection(HiveClientUtils.java:36)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at com.geo.gdata.hive.DataToHive.loadDataToTab(DataToHive.java:59)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at com.geo.gdata.chain.KafkaToHiveTab.main(KafkaToHiveTab.java:74)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at java.lang.reflect.Method.invoke(Method.java:606)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR - Caused by: java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwxr-xr-x
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:444)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hive.service.cli.session.HiveSessionImpl.<init>(HiveSessionImpl.java:116)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hive.service.cli.session.HiveSessionImplwithUGI.<init>(HiveSessionImplwithUGI.java:47)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hive.service.cli.session.SessionManager.openSession(SessionManager.java:260)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:175)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:322)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:235)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1253)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1238)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at java.lang.Thread.run(Thread.java:745)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR - Caused by: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwxr-xr-x
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:529)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:478)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:430)
23-03-2016 08:16:14 CST analyzer-kafka2hdfs_new ERROR -     ... 15 more
23-03-2016 08:16:15 CST analyzer-kafka2hdfs_new INFO - Process completed unsuccessfully in 73 seconds.
23-03-2016 08:16:15 CST analyzer-kafka2hdfs_new ERROR - Job run failed!
23-03-2016 08:16:15 CST analyzer-kafka2hdfs_new ERROR - azkaban.jobExecutor.utils.process.ProcessFailureException
23-03-2016 08:16:15 CST analyzer-kafka2hdfs_new INFO - Finishing job analyzer-kafka2hdfs_new at 1458692175992 with status FAILED

Analysis:
The error message itself points to permissions: /tmp/hive on HDFS must be writable, but its current permissions are rwxr-xr-x (755), so only the directory's owner can write to it. Searching related reports confirms this is a permission problem.

Solution:
Make /tmp/hive on HDFS world-writable (777).

Update the permissions of the /tmp/hive HDFS directory with either of the following equivalent commands:
hadoop fs -chmod 777 /tmp/hive
hdfs dfs -chmod 777 /tmp/hive
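To see concretely what the change above does, here is a minimal sketch on an ordinary local directory (illustrative only; the real fix targets HDFS): mode 755 (rwxr-xr-x) leaves the write bit set only for the owner, while 777 grants it to group and others as well.

```shell
# Sketch: compare permission bits 755 vs 777 on a throwaway local
# directory. /tmp/hive itself lives on HDFS; this is just the same
# permission model applied locally.
d=$(mktemp -d)

chmod 755 "$d"
perm_before=$(stat -c '%a' "$d")   # rwxr-xr-x: only the owner may write

chmod 777 "$d"
perm_after=$(stat -c '%a' "$d")    # rwxrwxrwx: everyone may write

echo "before=$perm_before after=$perm_after"
rm -rf "$d"
```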

If that does not help, you can instead remove /tmp/hive on both the local filesystem and HDFS:

hadoop fs -rm -r /tmp/hive
rm -rf /tmp/hive

Only temporary files are kept in this location, so deleting it is safe: Hive recreates the directory with the proper permissions when it is next needed.
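For completeness: the scratch-dir location is controlled by Hive's hive.exec.scratchdir property, which defaults to /tmp/hive. If a world-writable /tmp/hive is undesirable on a shared cluster, the directory can be relocated in hive-site.xml. A sketch (the path /user/hive/scratch is an example, not from this post):

```xml
<!-- hive-site.xml: relocate the root scratch dir (example path) -->
<property>
  <name>hive.exec.scratchdir</name>
  <value>/user/hive/scratch</value>
</property>
```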
