Channel: Hortonworks » All Topics

Server error: 500 Status code received

Replies: 3

Hello,

While installing the latest version of Ambari (1.6.0) I got this error: 500 status code received on GET method for API…
with final error message:
Error message: org.apache.ambari.server.controller.spi.SystemException: Error loading deferred resources

The only services that keep running are Ganglia, the NameNode, the Hive Metastore, the MySQL Server, and the ZooKeeper server on one node.
Is this a known issue? Will there be a fix available soon, please?

Kind regards,
Nico.
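
For anyone hitting the same thing: the HTTP 500 on that API call usually comes with a full stack trace in the Ambari server log, which narrows the cause down much faster than the UI message. A quick way to look, assuming the default install location:

less /var/log/ambari-server/ambari-server.log

and search for SystemException or the timestamp of the failed request.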


Ambari Server Proxy Authentication

Replies: 6

I am trying to set up an HDP 2 cluster with Ambari. Access to the internet is via a proxy with authentication. All hosts have a yum.conf with the proper proxy settings, and yum can access all repos without problems. Unfortunately, the Ambari server seems to call HTTP URLs directly. I found a hint for Ambari proxy configuration here: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.1/bk_using_Ambari_book/content/Configure-Ambari-Server-for-Internet-Proxy.html
Unfortunately, proxy authentication is not mentioned. Is there a way to tell Ambari about proxy authentication, or do I have to use local repositories?
Any help greatly appreciated. Best regards,
Andreas
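
For reference, the page linked above configures the proxy by adding JVM properties to AMBARI_JVM_ARGS in /var/lib/ambari-server/ambari-env.sh. A sketch that extends the same mechanism with credentials; note that the http.proxyUser/http.proxyPassword properties are not guaranteed to be honored by every HTTP client Ambari uses, so treat this as something to try rather than a confirmed fix (hostname, port, and credentials are placeholders):

export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=3128 -Dhttp.proxyUser=myuser -Dhttp.proxyPassword=mypassword"

Restart with ambari-server restart afterwards. If the proxy still rejects the requests, setting up local repositories is the reliable fallback.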

hdfs.audit.logger=INFO,console

Replies: 5

Hi,

Not sure if I'm pasting this to the right section of the forum. I would like to change hdfs.audit.logger=INFO,console in log4j.properties to hdfs.audit.logger=WARNING,console. If I change it manually on all master and slave nodes, the log4j.properties files are overwritten on the next service start and my setting is lost.

Please advise where to change it. Thank you.
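
Assuming the cluster is managed by Ambari: the log4j.properties files are regenerated from the template Ambari stores, so the change has to be made in Ambari rather than on the individual nodes. In the Ambari web UI the HDFS log4j template is typically under HDFS > Configs > Advanced hdfs-log4j (the exact section name can vary by Ambari version); change the line there to

hdfs.audit.logger=WARN,console

save, and restart the HDFS services so the file is rewritten everywhere. Note that log4j's level is spelled WARN, not WARNING.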

Distcp exclude a dir from copying

Replies: 0

I have tens of directories under /user, and we are in the process of copying data between two clusters. I would like to exclude one directory that holds 12 TB of data. I didn't find a way in the Apache docs to exclude a directory, so I'm wondering whether one of you might be able to help.
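
One possible workaround, assuming the DistCp version on the cluster has no exclude option: instead of copying /user wholesale, build a list of the subdirectories you do want and pass it with -f (which takes a file containing source paths). A sketch, run from the source cluster, with placeholder host names and the excluded directory called /user/bigdir:

hdfs dfs -ls /user | awk '{print $NF}' | grep '^/user/' | grep -v '^/user/bigdir$' > srclist
hdfs dfs -put srclist /tmp/srclist
hadoop distcp -f hdfs://source-nn:8020/tmp/srclist hdfs://dest-nn:8020/user

Newer DistCp releases also added a -filters option for exclusion patterns, but it may not be available on the version you are running.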

Sandbox 2.1 slower than 2.0, any update?

Replies: 0

Hi,

The HDP Sandbox 2.1 (with Hadoop 2.4.0) is about 30% slower than Sandbox 2.0 (Hadoop 2.2.0). I use the Hyper-V image with exactly the same settings (6 GB RAM allocated, Hyper-V Server 2012 R2). The slowness is consistently reproducible with Pig, Hive, and Java MapReduce jobs.

I also noticed that Hive failed to start (Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient); I had to restart the Sandbox several times to get it to start correctly. Is there any update available for Sandbox 2.1? Is it OK if I do a "yum update"?

Thanks in advance for any help.

Could not perform authorization operation

Replies: 0

Hi Friends,
I'm not able to run any Oozie job on my cluster; I'm getting this:

Error submitting workflow a1 – admin
E0501: Could not perform authorization operation, User: oozie is not allowed to impersonate admin

I also modified and added the following, but still no luck.

sudo vim ./etc/hadoop/conf.empty/core-site.xml

<property>
<name>hadoop.proxyuser.oozie.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.oozie.hosts</name>
<value>*</value>
</property>

sudo vim ./etc/oozie/conf.dist/oozie-site.xml

<property>
<name>oozie.service.ProxyUserService.proxyuser.oozie.hosts</name>
<value>*</value>
</property>

<property>
<name>oozie.service.ProxyUserService.proxyuser.oozie.groups</name>
<value>*</value>
</property>

Any idea how to fix this?

Thanks in advance,
Patrick
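
In case it helps anyone with the same E0501: the hadoop.proxyuser.* settings in core-site.xml only take effect once the NameNode and ResourceManager reload them, and the Oozie server needs a restart after changing oozie-site.xml. A sketch of the usual sequence on a layout like the one above:

sudo -u hdfs hdfs dfsadmin -refreshSuperUserGroupsConfiguration
sudo -u yarn yarn rmadmin -refreshSuperUserGroupsConfiguration
# then restart the Oozie server (via Ambari if it manages it, or oozied.sh if started by hand)

Also double-check that the files edited are the ones the running daemons actually use; conf.empty and conf.dist are often just template directories, with /etc/hadoop/conf and /etc/oozie/conf pointing somewhere else.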

oozie workflow fails on submit from hue

Replies: 1

I have a shared workflow that works fine when run from Hue as the owner of the workflow. But if I launch it from another user account, I get the following error:

Failed to access deployment directory.

AccessControlException: Permission denied: user=zing, access=READ_EXECUTE, inode="/user/hue/oozie/workspaces/_mapred_-oozie-2614-1394040961.07":mapred:hue:drwx--x--x (error 403)

The Python code /usr/lib/hue/desktop/libs/liboozie/src/liboozie/submittion.py doesn't try to create the directory using a "doas" call; rather, it has the owner (mapred in this case) create the deployment directory, and then tries to access the directory (to copy files there for submission) as the current user.

This looks like a bug to me, or perhaps I misunderstand the intent of Hue and what it does with permissions. The Python code hardcodes the permissions of the created directory to 711 (why?). If Hue doesn't want anyone but the owner to be able to submit a workflow, why doesn't it just check that first before creating an empty directory that then just gets left lying there?

I would expect that a shared workflow could be submitted by anyone who has permission to read it.

Note that even though it is a shared workflow, I'm also unable to copy it from Hue.
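
A possible stop-gap while it behaves this way, assuming you have access to an HDFS superuser: loosen the generated workspace directory so other users can read it, e.g.

sudo -u hdfs hdfs dfs -chmod -R 755 /user/hue/oozie/workspaces/_mapred_-oozie-2614-1394040961.07

That only papers over the hardcoded 711, though; sharing workflows cleanly would still need a change in Hue's submission code or its permission handling.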

Hive server2 and Hive meta store not starting

Replies: 3

Hi,

I am trying to set up a single-node Hadoop instance. I have the Ambari server and Hadoop running on the same server. All the services are up, but when I try to start HiveServer2 or the Hive Metastore from Ambari, I get the error below.

Python script has been killed due to timeout

Is there a place where I can configure the timeout parameter for the script?
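
One thing worth trying, assuming the failure really is the Ambari agent's script timeout rather than an underlying Hive problem: raise the task timeout in /etc/ambari-server/conf/ambari.properties and restart the Ambari server. The property below is the one recent Ambari releases use (older versions may not honor it), and 1200 seconds is just an example value:

agent.task.timeout=1200

ambari-server restart

If the start still fails with a longer timeout, the Hive Metastore log on that host usually shows the real error, often the metastore database not being reachable.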


finding sandbox on virtual box

Replies: 1

I have installed VirtualBox and tried to import the Sandbox through the Import Appliance menu. Nothing is there to import. I did restart the computer. What am I missing? The instructions do not tell you that you need to install the Sandbox separately!

sandbox.hortonworks.com:8042

Replies: 1

I tried to run the Spark example from the document http://hortonworks.com/wp-content/uploads/2014/05/SparkTechnicalPreview.pdf and everything works fine. On http://127.0.0.1:8088/cluster/app/application_1400281252440_0034 I can see the application, but when I click on logs it tries to connect to http://sandbox.hortonworks.com:8042/node/containerlogs/container_1400281252440_0034_01_000001/hue, where sandbox.hortonworks.com probably cannot be resolved from my local host (Windows). Changing sandbox.hortonworks.com to 127.0.0.1 doesn't really help. I set sandbox.hortonworks.com in the Windows etc/hosts file, but it didn't help. Do you have any idea?
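
For that link to work, two things generally have to be true on the Windows host, assuming the Sandbox runs with NAT networking: the name sandbox.hortonworks.com must resolve to where the forwarded ports live, and port 8042 (the NodeManager web UI) must actually be forwarded to the VM. A sketch:

# C:\Windows\System32\drivers\etc\hosts (edit as Administrator)
127.0.0.1   sandbox.hortonworks.com

plus a port-forwarding rule in the VM's network settings mapping host port 8042 to guest port 8042. The Sandbox images typically forward 8000, 8080, and 8088 out of the box, but not necessarily 8042, which would explain why the resolved name still fails.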

VMWARE Sandbox WEB UI

Replies: 1

Hi,
I hope this is the right way to ask. I have set up the Sandbox on VMware ESXi and it all seems fine. I have set a local IP, can SSH to it, and can reach the basic web page over HTTP.

However, I cannot reach the web UI at http://IP_Address:8000.

I am not sure what is wrong; all network connectivity from my local laptop to the server seems just fine.

Hope I can get some tips or help.

Apologies if this is not the best place to ask. I am super excited to get started.

Thank you very much in advance.
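
If it helps to narrow things down, two quick checks from an SSH session inside the VM (a sketch, assuming the CentOS-based Sandbox image):

curl -I http://localhost:8000     # is Hue actually listening on 8000?
service iptables status           # is the guest firewall filtering the port?

If the curl succeeds inside the VM but the port is unreachable from the laptop, the culprit is usually the guest firewall or something in the ESXi/vSwitch path rather than the Sandbox itself.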

What is the debug port for the Sandbox?

Replies: 1

I wanted to do some CPU profiling on the Sandbox using JProfiler.
What is the debug port?
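
There isn't a single profiling port; each Hadoop daemon on the Sandbox is its own JVM. The usual approach is to attach the JProfiler agent to the specific daemon you want to profile by extending its JVM options in hadoop-env.sh, e.g. (a sketch; the agent path and port are placeholders, and the agent library has to be copied into the VM first):

export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS -agentpath:/opt/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849"

Then restart that daemon and point JProfiler at the VM's address and the chosen port, which also needs to be reachable (or forwarded) from the host.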

Unable to enter Sandbox: http://127.0.0.1:8000

Replies: 2

Hi,

I have seen this issue reported in earlier threads and tried everything that was recommended, but could not find a resolution. Hence this post.

I am using Oracle VirtualBox 4.3.12, Windows 7, and Sandbox 2.1. After a multitude of attempts, I was able to run the virtual image. I also got to the registration form, but when I try to "dive in" at http://127.0.0.1:8000, it's just not working. Pinging 127.0.0.1 works, but typing it in the browser does not. I have tried Firefox 29.0.1, IE 8, and Chrome 35. I have updated the network settings to use a host-only adapter.

Can someone please suggest what I should do? Responses are much appreciated.

Thanks,
AssortedOrb
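
One thing to note about the setup described: with a host-only adapter the Sandbox gets its own address (typically 192.168.56.x) and is not reachable via 127.0.0.1 at all; http://127.0.0.1:8000 only works when the VM uses NAT with a port-forwarding rule for port 8000. So either browse to http://<host-only-IP>:8000 using the address shown on the VM console after boot, or switch the adapter back to NAT and make sure a forwarding rule exists, e.g. (a sketch, run on the Windows host with the VM powered off; the VM name is a guess):

VBoxManage modifyvm "Hortonworks Sandbox 2.1" --natpf1 "hue,tcp,,8000,,8000"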

can't establish connection to 127.0.0.1:8000

Replies: 2

I have tried several things but cannot get the Sandbox to work. I get this error every time. Can anyone provide some guidance or suggestions?

Newer Hue 3.6?

Replies: 0

I saw that HDP only ships Hue 2.3; when will a more recent Hue be available? I saw the new Hue on gethue.com and it looks really good.


Network Error

Replies: 0

When I type in the address to connect to the browser GUI for the Sandbox, I get the following connection error:
“Network Error (tcp_error)

A communication error occurred: “Operation timed out”
The Web Server may be down, too busy, or experiencing other problems preventing it from responding to requests. You may wish to try again at a later time.”

Why is it failing to connect?
(I've already viewed similar topics on this forum and tried restarting the Sandbox, with the same error. I don't think it's a firewall issue, but how would I check or fix it if it is?)
I'm running Sandbox 2.1 on VMware Player 6.0.2 on a 32-bit Windows 7 machine.
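
To the firewall question specifically: the firewall that usually matters here is the one inside the guest, not Windows. From an SSH session into the Sandbox (the login details are shown on the VM console) you can check it and, for a quick test, stop it; a sketch for the CentOS 6 based image:

service iptables status
service iptables stop

If the page loads once iptables is stopped, add an ACCEPT rule for the ports you need rather than leaving the firewall off permanently.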

Unable to View Namenode from webGUI

Replies: 0

I installed VirtualBox 4.2+ and Sandbox 2.1 on my Windows 8 machine. When I try to access the GUI, the home page opens up (http://192.168.56.101:8000), but the other ports are not accessible; I get the error "Oops! Google Chrome could not connect".

Why do I get this issue? What am I missing?

Do I need to edit the hdfs-site.xml file? For instance, I cannot access the NameNode via the GUI. The relevant property in hdfs-site.xml is below; do I need to edit it, and how?

<property>
<name>dfs.namenode.http-address</name>
<value>sandbox.hortonworks.com:50070</value>
</property>

Thanks in advance, please help.
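
A likely explanation, assuming 192.168.56.101 itself is reachable: the links from the home page redirect to sandbox.hortonworks.com (for example the NameNode UI on port 50070), and that name does not resolve on the Windows machine. Rather than editing hdfs-site.xml, try mapping the name on the host:

# C:\Windows\System32\drivers\etc\hosts (edit as Administrator)
192.168.56.101   sandbox.hortonworks.com

then browse to http://sandbox.hortonworks.com:50070. Editing dfs.namenode.http-address should not be necessary.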

Install Hadoop client

Replies: 0

Hi,

How do I install the Hadoop client on a Linux machine that is not part of the Hadoop cluster? I searched Google, but everything I found was for Windows; I couldn't find anything for Linux.

Thanks,
Tarun.
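
A sketch of the usual approach on an RPM-based machine, assuming the same HDP version and OS family as the cluster: point the machine at the HDP repository, install the client packages, and copy the cluster's client configuration onto it. The repo URL below is illustrative; use the one matching your HDP release:

wget -nv http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.1.2.0/hdp.repo -O /etc/yum.repos.d/hdp.repo
yum install hadoop-client
# then copy /etc/hadoop/conf (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml)
# from any cluster node so the client knows where the NameNode and ResourceManager are

Alternatively, Ambari can manage such an edge node directly: add the host to the cluster and assign it only the client components.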
