Channel: Hortonworks » All Topics

Kerberos security in HDP, "GSS initiate failed" for the "hdfs" user


Replies: 7

I’m trying to enable security in HDP 2.0, deployed using Ambari 1.4.0 (from the developers’ repository), on a virtual machine, in a single-node cluster.
I have a problem with the Kerberos TGT.
I tried to execute the following 2 commands (taken from error messages from Puppet):

[root@dev01 ~]# /usr/bin/kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs
[root@dev01 ~]# su hdfs -c "hadoop --config /etc/hadoop/conf fs -mkdir -p /mapred"
13/06/25 10:14:04 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
13/06/25 10:14:04 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
13/06/25 10:14:04 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
mkdir: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "dev01.hortonworks.com/192.168.56.101"; destination host is: "dev01.hortonworks.com":8020;
[root@dev01 ~]#

The keytab file (/etc/security/keytabs/hdfs.headless.keytab) is in place; the 1st command finished OK, but the 2nd command did not work.

Then I tried:
[root@dev01 ~]# kinit -R
kinit: Ticket expired while renewing credentials

It looks like the ticket expired immediately after kinit.
Then I tried to check:

[root@dev01 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs@EXAMPLE.COM

Valid starting Expires Service principal
06/25/13 10:13:46 06/26/13 10:13:46 krbtgt/EXAMPLE.COM@EXAMPLE.COM
renew until 06/25/13 10:13:46
[root@dev01 ~]#

But it looks like the ticket is valid, as far as I understand.
Now I don’t understand what’s going on with Kerberos TGT here.

Here is the Kerberos config (/etc/krb5.conf):
——–
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
default_realm = EXAMPLE.COM
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true

[realms]
EXAMPLE.COM = {
kdc = dev01.hortonworks.com
admin_server = dev01.hortonworks.com
}

[domain_realm]
.hortonworks.com = EXAMPLE.COM
dev01.hortonworks.com = EXAMPLE.COM
——–

Can somebody help me?
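One hedged observation, not taken from the error output itself: Kerberos credential caches are per user, so a kinit done as root lands in root's cache (/tmp/krb5cc_0, as the klist above shows) and is not visible after su to hdfs. A minimal sketch of the check, reusing the keytab and principal from the commands above (adjust if yours differ):

# Obtain the TGT inside the hdfs user's own session, then retry the HDFS command.
su - hdfs -c "/usr/bin/kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs && klist"
su - hdfs -c "hadoop --config /etc/hadoop/conf fs -mkdir -p /mapred"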


sqoop export problem


Replies: 1

Hello,
we are having trouble doing a Sqoop export from an HCatalog table to a MySQL table. It seems like there is some Hadoop version incompatibility when generating the Sqoop job. Does anyone have a clue on this?

sqoop export -D mapreduce.job.queuename=q_restitution --connect jdbc:mysql://xx.xx.xx.xx/test --username hive --password hive --table ihm_ano --hcatalog-database 'hive_temp' --hcatalog-table 'ihm_ano' --verbose

14/06/10 14:56:41 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4.2.0.6.1-102
..
14/06/10 14:56:57 INFO client.RMProxy: Connecting to ResourceManager at x.x.x.x/xx.xx.xx.xx:8050
14/06/10 14:56:57 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 55683 for u__etl on xx.xx.xx.xx:8020
14/06/10 14:56:57 INFO security.TokenCache: Got dt for hdfs://x.x.x.x:8020; Kind: HDFS_DELEGATION_TOKEN, Service: xx.xx.xx.xx:8020, Ident: (HDFS_DELEGATION_TOKEN token 55683 for u__etl)
14/06/10 14:57:01 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/u__etl/.staging/job_1400766215563_0542
14/06/10 14:57:01 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@63c78e57
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
at org.apache.hcatalog.mapreduce.HCatBaseInputFormat.getSplits(HCatBaseInputFormat.java:101)
at org.apache.sqoop.mapreduce.hcat.SqoopHCatExportFormat.getSplits(SqoopHCatExportFormat.java:56)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:491)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:508)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:392)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
at org.apache.sqoop.mapreduce.ExportJobBase.doSubmitJob(ExportJobBase.java:296)
at org.apache.sqoop.mapreduce.ExportJobBase.runJob(ExportJobBase.java:273)
at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:405)
at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:828)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:81)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:100)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.ru
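A hedged note rather than an answer: the trace goes through the old org.apache.hcatalog.* classes, which were built against the Hadoop 1 mapred API, so the IncompatibleClassChangeError usually points at an HCatalog build that does not match the Hadoop 2 install. A quick way to see which HCatalog jars are on the node (paths assume a standard HDP layout; adjust as needed):

ls /usr/lib/hcatalog/share/hcatalog/*.jar 2>/dev/null
ls /usr/lib/hive-hcatalog/share/hcatalog/*.jar 2>/dev/null
hadoop version
sqoop version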

Error with Hive : Could not initialize class org.openx.data.jsonserde.objectinsp


Replies: 1

http://hortonworks.com/hadoop-tutorial/how-to-refine-and-visualize-sentiment-data/

I have followed the same steps given in the above link, but while executing hiveddl.sql, I get the following error.

Executed in PuTTY:

Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. org.apache.hadoop.hive.serde2.objectinspector.primitive.AbstractPrimitiveJavaObjectInspector.<init>(Lorg/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils$PrimitiveTypeEntry;)V

Executed from the Hue shell:

Driver returned: 1. Errors: OK
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Could not initialize class org.openx.data.jsonserde.objectinspector.JsonObjectInspectorFactory

Please help.
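A hedged check, not taken from the tutorial itself: a "Could not initialize class" failure on the OpenX SerDe typically means the SerDe jar was not added in the same Hive session that runs the DDL, or that the jar build does not match the Hive version. A minimal sketch (the jar path is illustrative; point it at the json-serde jar that ships with the tutorial download):

# Add the SerDe jar and run the tutorial DDL in the same Hive CLI session.
hive -e "ADD JAR /tmp/json-serde-jar-with-dependencies.jar; SOURCE /tmp/hiveddl.sql;"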

Trouble with Ambari SSH


Replies: 2

Hello guys!

I’m trying my first Ambari on EC2 and I cannot get the server to register the hosts. I have changed the hostnames and the /etc/hosts file for my cluster. I am able to ssh from the Ambari server box to the client hosts, but Ambari fails. The server logs show me:

INFO:root:BootStrapping hosts ['hadoop1', 'hadoop2', 'hadoop3'] using /usr/lib/python2.6/site-packages/ambari_server cluster primary OS: redhat6 with user ‘ec2-user’ sshKey File /var/run/ambari-server/bootstrap/18/sshKey password File null using tmp dir /var/run/ambari-server/bootstrap/18 ambari: ab; server_port: 8080; ambari version: 1.6.0
INFO:root:Executing parallel bootstrap
ERROR:root:ERROR: Bootstrap of host hadoop1 fails because previous action finished with non-zero exit code (1)
ERROR MESSAGE: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
lost connection

STDOUT:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
lost connection

ERROR:root:ERROR: Bootstrap of host hadoop2 fails because previous action finished with non-zero exit code (1)
ERROR MESSAGE: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
lost connection

STDOUT:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
lost connection

ERROR:root:ERROR: Bootstrap of host hadoop3 fails because previous action finished with non-zero exit code (1)
ERROR MESSAGE: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
lost connection

STDOUT:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
lost connection

INFO:root:Finished parallel bootstrap

I turned on ssh debugging on one of the client boxes and I see

Jun 12 02:48:25 ip-10-211-11-25 sshd[1588]: debug1: PAM: initializing for "ec2-user"
Jun 12 02:48:25 ip-10-211-11-25 sshd[1588]: debug1: PAM: setting PAM_RHOST to "10.211.20.248"
Jun 12 02:48:25 ip-10-211-11-25 sshd[1588]: debug1: PAM: setting PAM_TTY to "ssh"
Jun 12 02:48:25 ip-10-211-11-25 sshd[1589]: Connection closed by 10.211.20.248

I can ssh as ec2-user from the server box but the registering process cannot.
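A hedged way to narrow this down: the bootstrap log shows the exact key file Ambari uploads for the attempt (/var/run/ambari-server/bootstrap/18/sshKey), so testing that file by hand from the Ambari server box separates a key mismatch from an Ambari problem:

# Try the exact key Ambari used for this bootstrap attempt.
sudo ssh -i /var/run/ambari-server/bootstrap/18/sshKey -o StrictHostKeyChecking=no ec2-user@hadoop1 hostname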

History server issue accessing logs


Replies: 0

Hello,

when I try to access logs from MapReduce jobs, I am getting the following error:

org.apache.hadoop.yarn.webapp.WebAppException: /octopus.svs.usa.hp.com:19888/jobhistory/logs/clownfish.svs.usa.hp.com:45454/container_1394028045311_0004_01_000001/container_1394028045311_0004_01_000001/hdfs: controller for octopus.svs.usa.hp.com:19888 not found
at org.apache.hadoop.yarn.webapp.Router.resolveDefault(Router.java:232)

I’ve found this question raised several times, but so far no solution.

The problem gets multiplied when using the next level of tools like Oozie, since it seems that Oozie takes job progress information from the history server.

Has anybody found a proper solution to this problem?

Thanks
Jakub
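A hedged workaround while the web UI issue persists, assuming YARN log aggregation is enabled: the aggregated container logs can be pulled from the command line instead, using the application id visible in the URL above.

# Fetch the aggregated logs for the application referenced in the failing URL.
yarn logs -applicationId application_1394028045311_0004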

Sandbox – Pig Basic Tutorial example is not working


Replies: 43

Hi, I just tried the following Pig Basic Tutorial example, which is not working:

a = LOAD 'nyse_stocks' USING org.apache.hcatalog.pig.HCatLoader();
b = FILTER a BY stock_symbol == 'IBM';
c = group b all;
d = FOREACH c GENERATE AVG(b.stock_volume);
dump d;

When I tried the syntax check, the following logs were captured:

2013-03-17 14:35:28,456 [main] INFO org.apache.pig.Main – Apache Pig version 0.10.1.21 (rexported) compiled Jan 10 2013, 04:00:42
2013-03-17 14:35:28,459 [main] INFO org.apache.pig.Main – Logging error messages to: /home/sandbox/hue/pig_1363556128447.log
2013-03-17 14:35:41,945 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine – Connecting to hadoop file system at: file:///
2013-03-17 14:35:45,555 [main] ERROR org.apache.pig.tools.grunt.Grunt – ERROR 1070: Could not resolve org.apache.hcatalog.pig.HCatLoader using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Details at logfile: /home/sandbox/hue/pig_1363556128447.log

Please do the needful to resolve this issue. Thank you!

Regards,
Sankar
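A hedged note on ERROR 1070: it usually just means the HCatalog jars are not on Pig's classpath. From a sandbox shell, HDP's Pig wrapper typically picks them up via the -useHCatalog switch (the Hue Pig editor has an equivalent argument/checkbox):

# Run the script with HCatalog support so org.apache.hcatalog.pig.HCatLoader resolves.
# The script path is illustrative.
pig -useHCatalog /home/sandbox/nyse_avg.pig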

Does hive over hbase support wildcard query?


Replies: 0

Does hive over hbase support wildcard query?
I have tried "select * from foo where a like '%sth%'". It failed with the following:
Job Submission failed with exception 'java.lang.RuntimeException(Unexpected residual predicate (data like '1'))'
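A hedged experiment rather than a confirmed fix: the "residual predicate" error is raised while pushing the LIKE predicate down into the HBase storage handler, so running the same query with predicate pushdown disabled shows whether pushdown is the culprit:

hive -e "set hive.optimize.ppd=false; select * from foo where a like '%sth%';"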

Ambari corrupts rpmdb


Replies: 4

Hi,
I have set up an HDP 2.0 cluster using 5 virtual nodes (OS: CentOS 6.0 on VirtualBox). When I stop and restart ALL processes using Ambari, most of the time I get failures for a few daemons (randomly). I see the following error(s) in the logs. As a workaround I have to manually run "rm -f /var/lib/rpm/__db.00*" and restart the processes. After this fix, the processes start normally.
What I have observed is that every time, Ambari tries to install the packages on the nodes (or at least checks whether they are available), and somehow this corrupts the rpmdb.

Please suggest whether this behavior is due to environment settings or to a flaw in the cluster setup/configuration. If there is an inherent problem with Ambari, is there any workaround/fix?

ERROR ==>

err: /Stage[1]/Hdp::Snappy::Package/Hdp::Package[snappy]/Hdp::Package::Process_pkg[snappy]/Package[snappy]/ensure: change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y install snappy' returned 1: rpmdb: Thread/process 1757/139873492805376 failed: Thread died in Berkeley DB library
error: db3 error(-30974) from dbenv->failchk: DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db3 - (-30974)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:

Error: rpmdb open failed
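For reference, the manual recovery described above plus the standard rpm rebuild step, collected in one place (a sketch of the cleanup, not a root-cause fix):

# Clear the stale Berkeley DB environment files and rebuild the rpm database.
rm -f /var/lib/rpm/__db.00*
rpm --rebuilddb
yum clean all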


HDP Sandbox 2.1 MSSQL Failures


Replies: 0

I have been banging my head against the wall on this for an entire day. I am using the HDP VMware sandbox to evaluate the possibility of a Hadoop platform. I am totally able to use sqoop to ‘list-tables’ and ‘list-databases’, but every time I attempt an ‘import’ with the following:

sqoop import --connect "jdbc:sqlserver://xxx.xxx.xxx.xxx:1433;databasename=mydatabase;" --username svc_hadoop --password 'hadoop' --verbose --table my_table

I am getting the following errors:

14/06/12 12:45:02 INFO mapreduce.Job:  map 0% reduce 0%
14/06/12 12:45:12 INFO mapreduce.Job: Task Id : attempt_1402597795499_0009_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.UnsupportedOperationException: Java Runtime Environment (JRE) version 1.7 is not supported by this driver. Use the sqljdbc4.jar class library, which provides support for JDBC 4.0.
	at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:167)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:726)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)

To my knowledge, and following the instructions in the link here, I have configured the sqljdbc4.jar and sqljdbc.jar drivers and my MSSQL environment correctly. Any help would be awesome; I can’t seem to find any information on it at all.
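A hedged check based on the error text itself: the message comes from the JDBC 3.0 driver (sqljdbc.jar), which refuses JRE 7, so the task JVMs are apparently picking up that jar instead of sqljdbc4.jar. Verifying which driver jars Sqoop ships to the cluster (paths assume the sandbox layout):

ls /usr/lib/sqoop/lib/ | grep -i sqljdbc
# If both jars are present, moving the old one aside and re-running the import is worth a test:
# mv /usr/lib/sqoop/lib/sqljdbc.jar /tmp/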

Ambari Server After Changing Host Names and IP


Replies: 5

I use AWS and sometimes have to stop and start instances. To keep things simple, I spun up just one instance and have all components installed on it. Jobs were running and it was all good. But I have to stop it to save money. It’s an Ambari install.

When I stop and then start the instance, the hostname and IP address change. The Ambari Server web UI shows my prior hostname and IP address, the values that I used when I successfully installed the components. I updated the IP information in the three config files core-site.xml, hdfs-site.xml, and mapred-site.xml to the new IP.

How do I update the hostname and IP values in Ambari Server so I can use it after I restart my AWS instance?
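A hedged first check after the restart, before touching Ambari itself: compare what the box now reports with what it was registered as, and confirm which Ambari server hostname the agent is configured to reach (the ini path is the standard Ambari 1.x location):

hostname -f
grep -i hostname /etc/ambari-agent/conf/ambari-agent.ini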

NO DATA – after selecting the Table in the ODBC Excel Menu


Replies: 1

Hi everyone,

after successfully connecting Excel 2011 on Mac via ODBC to the virtual machine, I can successfully select all the tables from the Sandbox.
But when I try to get data I always get “NO DATA SETS OUTPUT” at the bottom, even though I am sure there are data sets in the sandbox for this table and the fields are correctly selected (e.g. batting_data.*).

When I try to click directly on “return data”, I get an error message:

[Hortonworks][HiveODBC] (35) Error from Hive: error code: 40000, error message: Error while compiling statement: FAILED: HiveAccessControlException Permission denied. Principal [name=sandbox, type=USER] does not have following privileges on Object …..

Anyone having the same problem or can help please?
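A hedged sketch, assuming the SQL-standard authorization named in the error is what rejects the query: granting SELECT on the table to the sandbox user from an admin Hive session (the table name is taken from the post; adjust as needed):

hive -e "GRANT SELECT ON TABLE batting_data TO USER sandbox;"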

Unable to connect Sandbox to Tableau or ODBC drivers


Replies: 2

Hello all,

Being new to Hadoop, I recently installed the Hortonworks Sandbox on my Windows 7 Premium 64bit machine. However, even after following the tutorials for Tableau, Excel and ODBC, I am unable to connect. Any help would be appreciated!

I have tried to be as detailed with my computing environment as possible:

SYSTEM:
Virtual Box + Hortonworks Sandbox (latest)
Able to connect with browser on 127.0.0.1:8000, nyse_stocks table created and registered with HCAT, Hive queries on nyse_stocks successful.
Virtual OS: Red Hat 64bit
Windows 7 Home Premium 64bit
Firewall: OFF
ODBC: Both, 32bit and 64bit installed
RAM: 1800MB allocated to VirtualBox. I currently only have 4GB on my laptop and am awaiting my 2nd 4GB stick. The system is a little sluggish but my RAM should arrive soon…

NETWORK SETTINGS:
1. Virtual Box Preference:
NATNetwork CIDR: 10.0.2.0/24 (default settings, with NAT enabled); no port forwarding set
Host Only:
Adapter: 192.168.56.1 Mask: 255.255.255.0
DHCP: 192.168.56.1 Mask: 255.255.255.0 Bounds 192.168.56.101-110
2. Sandbox Settings:
Adapter 1: NAT enabled
Adapter 2: Host only adapter

3. ~ifconfig results:
Eth1: 192.168.56.101
Lo: 127.0.0.1

ISSUE:
1.Testing the 64 bit ODBC connection from Administrative Tools:
Host: 192.168.56.101
Port: 10000 (also tried 8000, 8888)
Database default, Hive Server 2, Username: hue (no password)
RESULT:
Driver Version: V1.4.5.1005 Running connectivity tests…
Attempting connection Failed to establish connection
SQLSTATE: HY000[Hortonworks][HiveODBC] (34) Error from Hive: connect() failed: errno = 10060.
TESTS COMPLETED WITH ERROR.

2.Testing the 32 bit ODBC and attempting to connect with Tableau:

ODBC Settings:
Server Type: Hive Server 2
Mechanism: User Name
Username: sandbox

Tableau Settings:
Using Hortonworks Hive
Server Name: tried 192.168.56.101:10000 and 127.0.0.1:8000
Authentication: User Name (sandbox)
RESULT:
Unable to connect to the ODBC Data Source. Check that the necessary drivers are installed and that the connection properties are valid.
[Hortonworks][HiveODBC] (34) Error from Hive: connect() failed: errno = 10060.
Unable to connect to the server "192.168.56.101". Check that the server is running and that you have access privileges to the requested database.
Unable to connect to the server. Check that the server is running and that you have access privileges to the requested database.
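errno 10060 is a plain TCP connection timeout, so a hedged pair of checks before touching driver settings: confirm HiveServer2 is actually listening inside the VM, and confirm the host-only address is reachable from Windows on that port:

# Inside the sandbox VM:
netstat -tlnp | grep 10000
# From the Windows host (the Telnet client feature must be enabled):
telnet 192.168.56.101 10000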

JSON SERDE not working in hive 0.13.0


Replies: 0

Hi, I am testing on Hive 0.13.0. I am trying to create an external table with SerDe jars, and I am giving the following for ROW FORMAT (trying each one in turn):

ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.JsonSerde'
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'

But all of them return errors. Below are the errors:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Cannot validate serde: com.cloudera.hive.serde.JSONSerDe

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. org.apache.hadoop.hive.serde2.objectinspector.primitive.AbstractPrimitiveJavaObjectInspector.<init>(Lorg/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils$PrimitiveTypeEntry;)V

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Cannot validate serde: org.apache.hadoop.hive.contrib.serde2.JsonSerde

I am unable to understand what to do now. Any help, please.
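A hedged minimal test, assuming the jars were added with ADD JAR in the same session: these SerDes generally need a build compiled against the Hive version in use, so trying one on a throw-away table isolates classpath/version problems from schema problems (the jar path and table are illustrative):

hive -e "
ADD JAR /tmp/json-serde-jar-with-dependencies.jar;
CREATE EXTERNAL TABLE json_probe (id INT, msg STRING)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION '/tmp/json_probe';
"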

oozie with derby not able to access database


Replies: 0

I have a new Derby database for Oozie in HDP 2.1, upgraded from HDP 2.0 after upgrading Ambari 1.4 to 1.6. When I try to start Oozie, I get the following message in the Derby log:
2014-06-13 01:18:14.680 GMT Thread[main,5,main] Cleanup action starting
java.sql.SQLException: Failed to start database '/hadoop/oozie/data/oozie-db' with class loader WebappClassLoader
context: /oozie
delegate: false
repositories:
/WEB-INF/classes/
----------> Parent Classloader:
org.apache.catalina.loader.StandardClassLoader@71adff7c
, see the next exception for details.
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)

Ambari 1.4.2 – No Data: There was no data available.


Replies: 11

Hi, I installed the latest HDP 2.0.6 stack from Ambari 1.4.2 on CentOS 6.5. Everything went well, except that I had to patch the /usr/lib/python2.6/site-packages/ambari_server/os_type_check.sh file with the patch from 1.4.3.

Ganglia is working 100% and I see all statistics in the Ganglia Web UI, but Ambari 1.4.2 doesn’t show these values. :( All services in Ambari are green, and I see that the Ganglia server and all monitors are up and working. Does somebody have the same issue?

My Ganglia server is running on the master host, where the Ambari server is as well. When I list packages with yum…

[root@hm hdp]# yum list |grep ambari
ambari-agent.x86_64 1.4.2.104-1 @Updates-ambari-1.4.2.104
ambari-log4j.noarch 1.4.2.104-1 @Updates-ambari-1.4.2.104
ambari-server.noarch 1.4.2.104-1 @Updates-ambari-1.4.2.104
hdp_mon_nagios_addons.noarch 1.4.2.104-1.el6 @Updates-ambari-1.4.2.104
hdp_mon_ganglia_addons.noarch 1.4.2.104-1.el6 Updates-ambari-1.4.2.104

…I see that the hdp_mon_ganglia_addons.noarch package is not installed; isn’t that the issue?

Please help! I’m in the testing process before a production release and buying support from Hortonworks.
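Based only on the yum listing above (the installed packages carry the @ repo prefix, while hdp_mon_ganglia_addons does not), a hedged first step is simply installing the missing add-on package and restarting the Ganglia-related services:

yum install -y hdp_mon_ganglia_addons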


Delete and re-add a Host


Replies: 1

Hi,
I’m wondering, did someone ever try to delete a host and then re-add it? I kind of killed one host and tried to re-add it after a complete reinstall of Linux. Unfortunately the hostname is still the same due to company network restrictions. Trying to add the host again, I noticed that Ambari doesn’t really delete the host. There must be some data left, since sometimes I still see the old number of hosts, and the host sometimes even pops up in the host list with 0 components.

Since my cluster is pretty new, I think I’ll just reinstall Ambari completely, but I’m still wondering why I can’t just delete a host properly. I restarted Ambari and the server node multiple times, but it still has some data left from the old host…
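A hedged option before a full reinstall: Ambari also exposes host removal through its REST API, which sometimes clears entries the UI leaves behind. The URL, cluster name, host name, and credentials below are illustrative placeholders:

curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
  http://ambari.example.com:8080/api/v1/clusters/mycluster/hosts/badhost.example.com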

PANIC on VMWare Workstation 7.1.3


Replies: 0

I have downloaded and imported the VM to my Windows 7 machine running VMWare Workstation 7.1.3. I have followed the instructions.

When I boot the VM, I get this:

PANIC: early exception 0d rip 10:ffffffff8103eb79 error 0 cr2 0

Any ideas?

Thanks, Steve

Oozie Workflow Shell Action Permission Denied on yarn.nodemanager.local-dirs


Replies: 1

I am trying to run an Oozie Workflow Shell Action from Hue. I had some HDFS permission issues, which I could fix easily by adding properties to the Oozie XML. I have been stuck for 6 hours trying to get around a local filesystem permission issue. I’m using HDP 2.0.6 installed with Ambari, plus Hue installed as described in the documentation. When I submit a workflow with only a Shell Action in it as the hue user (a member of the hadoop group), I get the error log below:


ACTION[0000010-140526124040148-oozie-oozi-W@ChargingVariables] Launcher exception: Cannot run program "charging_related_calculations" (in directory "/space/hadoop/yarn/local/usercache/hue/appcache/application_1402668961478_0005/container_1402668961478_0005_01_000002"): error=13, Permission denied
java.io.IOException: Cannot run program "charging_related_calculations" (in directory "/space/hadoop/yarn/local/usercache/hue/appcache/application_1402668961478_0005/container_1402668961478_0005_01_000002"): error=13, Permission denied

Every time after I submit the job, the permissions on the directory /space/hadoop/yarn/local/usercache/hue/appcache are automatically changed to 710 and the owner of the directory is yarn:hadoop, so the hadoop group has only the execute right on the appcache directory that is created. I am sure about this because I watched all three nodes and saw that the shell script is copied under the above directory on a random node.

I am running the Oozie Workflow as the hue user, and hue is a member of the hadoop Linux group. I observed all the local folders and files being copied to the temporary appcache. I even have a copy of the folders, which I immediately cp -R’d when the tmp folders were created, and I can share them. I also executed the job and got the weird error below. Any idea?

2014-06-13 22:13:23,106 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: tex655.tnhdpdemo/10.35.36.55:42020. Already tried 49 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1 SECONDS)
2014-06-13 22:13:23,110 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.net.ConnectException: Call From tex655.tnhdpdemo/10.35.36.55 to tex655.tnhdpdemo:42020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1351)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.i
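A hedged check on the error=13 itself (separate from the appcache ownership question): the launcher runs the file named in the Shell Action's exec element from the container directory, so the copy distributed via the workflow's file element has to be executable and should start with a shebang. The paths below are illustrative:

# Check the script as it sits in the workflow's HDFS workspace, then fix and re-upload if needed.
hdfs dfs -ls /user/hue/oozie/workspaces/myworkflow/charging_related_calculations
head -1 charging_related_calculations        # should be a shebang such as #!/bin/bash
chmod +x charging_related_calculations
hdfs dfs -put -f charging_related_calculations /user/hue/oozie/workspaces/myworkflow/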

Unable to access HWI page


Replies: 0

I have installed successfully, and all the necessary services are running, including the HWI service, on Windows.
But when I try to access the page http://<SERVER_NAME>:9999, I get the following message:
HTTP ERROR: 404
Problem accessing /. Reason:
NOT_FOUND

Has anybody been able to successfully launch HWI? If so, what additional configuration is necessary?
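A hedged check for the 404: the HWI servlet only deploys if the war file configured by hive.hwi.war.file actually exists, so confirming those properties (names are from the Hive HWI documentation) and the war's presence is a reasonable first step. Shown as generic shell; on Windows the equivalents would be findstr/dir:

grep -A1 "hive.hwi" /etc/hive/conf/hive-site.xml
ls $HIVE_HOME/lib/hive-hwi-*.war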

Hive JDBC Connection hangs


Replies: 0

Hi

I recently installed the HDP Sandbox and I created the following Java program to open a connection:

import java.sql.Connection;
import java.sql.DriverManager;

public class PruebaHive {
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws Exception {
        // Register the HiveServer2 JDBC driver
        Class.forName(driverName);

        // Open a connection to HiveServer2 on the sandbox, then close it
        Connection connection =
            DriverManager.getConnection("jdbc:hive2://192.168.182.128:10000", "", "");

        connection.close();
    }
}

The program hangs while obtaining the connection. I restarted the entire server and I still get the same behavior.
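A hedged way to separate a network problem from a driver problem: test the same host and port with Beeline (which speaks the same HiveServer2 protocol), either inside the sandbox or from any machine that has Beeline installed:

beeline -u "jdbc:hive2://192.168.182.128:10000/default"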
