Channel: Hortonworks » All Topics

Ambari server installation error


Replies: 3

I’m trying to install Ambari (and later HDP 2) on RHEL 6.5.

As per the documentation, I followed these steps:

1. Download the Ambari repository file to a directory on your installation host.
wget -nv http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/ambari.repo -O /etc/yum.repos.d/ambari.repo

Output:

2015-02-10 14:45:28 URL:http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/ambari.repo [472/472] -> “/etc/yum.repos.d/ambari.repo” [1]

The ambari.repo file is as follows:

[ambari-1.x]
name=Ambari 1.x
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/GA
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

[Updates-ambari-1.7.0]
name=ambari-1.7.0 - Updates
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

2. Confirm that the repository is configured by checking the repo list.
yum repolist

Output:

Loaded plugins: product-id, rhnplugin, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
This system is receiving updates from RHN Classic or RHN Satellite.
http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: [Errno 12] Timeout on http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: (28, ‘connect() timed out!’)
Trying other mirror.
repo id repo name status
Updates-ambari-1.7.0 ambari-1.7.0 – Updates 0
ambari-1.x Ambari 1.x 0
scania-rhel-x86_64-rhev-agent-6-server Scania Enterprise Virt Agent (v.6 Server for x86_64) 0
scania-rhel-x86_64-server-6 Scania RHEL (v. 6 for 64-bit x86_64) 0
scania-rhel-x86_64-server-custom-6 Scania RHEL Custom (v. 6 for 64-bit x86_64) 0
scania-rhel-x86_64-server-optional-6 Scania RHEL Server Optional (v. 6 64-bit x86_64) 0
scania-rhel-x86_64-server-supplementary-6 Scania RHEL Server Supplementary (v. 6 64-bit x86_64) 0
scania-rhn-tools-rhel-x86_64-server-6 Scania RHN Tools for RHEL (v. 6 for 64-bit x86_64) 0
repolist: 0

As expected, the third step failed:

3. yum install ambari-server

Loaded plugins: product-id, rhnplugin, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
This system is receiving updates from RHN Classic or RHN Satellite.
http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: [Errno 12] Timeout on http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: (28, ‘connect() timed out!’)
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: Updates-ambari-1.7.0. Please verify its path and try again

What am I missing?
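
A pattern worth noting in the output above: yum times out against public-repo-1.hortonworks.com even though the earlier wget succeeded, which often means the shell's proxy settings are visible to wget but not to yum. A minimal sketch of the usual fix, assuming a proxy at proxy.example.com:8080 (hypothetical address):

# make yum proxy-aware; wget may already pick up http_proxy from the shell
echo "proxy=http://proxy.example.com:8080" >> /etc/yum.conf
# rebuild the metadata cache and retry
yum clean all
yum repolist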


Ambari installation – JDK issue


Replies: 0

I’m using RHEL 6.5 and proceeded as per the latest documentation.

I’m getting an error (the first time as well as on successive attempts) regarding the JDK installation during the Ambari Server setup.

ambari-server setup
Using python /usr/bin/python2.6
Setup ambari-server
Checking SELinux…
SELinux status is ‘enabled’
SELinux mode is ‘permissive’
WARNING: SELinux is set to ‘permissive’ mode and temporarily disabled.
OK to continue [y/n] (y)? y
Ambari-server daemon is configured to run under user ‘root’. Change this setting [y/n] (n)? n
Adjusting ambari-server permissions and ownership…
Checking firewall…
Checking JDK…
[1] – Oracle JDK 1.7 + Java Cryptography Extension (JCE) Policy Files 7
[2] – Oracle JDK 1.6 + Java Cryptography Extension (JCE) Policy Files 6
[3] – Custom JDK
==============================================================================
Enter choice (1):1
JDK already exists, using /var/lib/ambari-server/resources/jdk-7u67-linux-x64.tar.gz
Installing JDK to /usr/jdk64
Installation of JDK has failed: ‘Fatal exception: Installation of JDK returned exit code 2, exit code 2′

JDK found at /var/lib/ambari-server/resources/jdk-7u67-linux-x64.tar.gz. Would you like to re-download the JDK [y/n] (y)? y
jdk-7u67-linux-x64.tar.gz… 100% (135.8 MB of 135.8 MB)
Successfully re-downloaded JDK distribution to /var/lib/ambari-server/resources/jdk-7u67-linux-x64.tar.gz
Installing JDK to /usr/jdk64
Installation of JDK was failed: ‘Fatal exception: Installation of JDK returned exit code 2, exit code 2′

ERROR: Exiting with exit code 1.
REASON: Downloading or installing JDK failed: ‘Fatal exception: Unable to install JDK. Please remove JDK, file found at /var/lib/ambari-server/resources/jdk-7u67-linux-x64.tar.gz and re-run Ambari Server setup, exit code 1′. Exiting.
What am I missing?
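
Exit code 2 from the JDK step usually comes from tar failing on the downloaded archive: a truncated or corrupt tarball, or no room left under /usr, are common culprits. A quick diagnostic sketch, reusing the paths from the output above:

# verify the archive is a readable gzip tarball
tar -tzf /var/lib/ambari-server/resources/jdk-7u67-linux-x64.tar.gz > /dev/null && echo "archive OK"
# confirm there is space to unpack the JDK under /usr/jdk64
df -h /usr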

Pig statement throws error


Replies: 1

Hi,

I am trying to execute the following Pig script to find the average of the stock_volume field:

a = LOAD 'default.nyse_stocks' USING org.apache.hive.hcatalog.pig.HCatLoader();
b = FILTER a BY stock_symbol == 'IBM';
c = group b all;
d = foreach c generate AVG(b.stock_volume);
dump d;

I receive the following error:
2015-02-10 16:14:19,123 [main] INFO org.apache.pig.Main – Apache Pig version 0.14.0.2.2.0.0-2041 (rexported) compiled Nov 19 2014, 15:24:46
2015-02-10 16:14:19,123 [main] INFO org.apache.pig.Main – Logging error messages to: /hadoop/yarn/local/usercache/hue/appcache/application_1423510712708_0012/container_1423510712708_0012_01_000002/pig_1423584859121.log
2015-02-10 16:14:21,803 [main] INFO org.apache.pig.impl.util.Utils – Default bootup file /home/yarn/.pigbootup not found
2015-02-10 16:14:22,611 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine – Connecting to hadoop file system at: hdfs://sandbox.hortonworks.com:8020
2015-02-10 16:14:26,932 [main] ERROR org.apache.pig.tools.grunt.Grunt – ERROR 1070: Could not resolve org.apache.hive.hcatalog.pig.HCatLoader using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Details at logfile: /hadoop/yarn/local/usercache/hue/appcache/application_1423510712708_0012/container_1423510712708_0012_01_000002/pig_1423584859121.log

Please help; I am new to Pig.

Thanks,
PSB
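
ERROR 1070 generally means the HCatalog jars are not on Pig's classpath, so the HCatLoader class cannot be resolved. Assuming the script above is saved as stocks.pig (hypothetical file name), running it with the -useHCatalog flag usually resolves this:

# -useHCatalog pulls the HCatalog/Hive jars onto Pig's classpath
pig -useHCatalog stocks.pig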

Add Host Wizard stuck on failed install, cannot add new host


Replies: 12

Earlier today I tried to add a new host to my Ambari-managed cluster. I managed to mess a few things up, and neither the DataNode nor the JobTracker could install properly. Deciding I needed to wipe the node and start over, I did so. I’m working on Amazon EC2, so my old IP address is long gone by now.

My problem is that I cannot get the Add New Host wizard unstuck from the old install. It’s on the Install, Start and Test screen and won’t allow me to click Next, and there’s no back button! I even tried rebooting the entire cluster to get it unstuck, but when it comes back up it puts me back on this screen. Does anybody know how to get past this? Please help; I really don’t want to have to scrap the cluster and install from scratch because of this.
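
One commonly cited workaround is to remove the half-installed host through the Ambari REST API instead of the wizard. A sketch, assuming Ambari listens on port 8080 with the default admin account; MYCLUSTER and old-host.example.com are placeholders for the real cluster name and the dead host's FQDN:

# drop the stuck host so the wizard stops tracking the failed install
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
  http://localhost:8080/api/v1/clusters/MYCLUSTER/hosts/old-host.example.com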

Sandbox Pig Tutorial


Replies: 1

Hi, I am running into the following error while trying out the introductory Pig tutorial:

ls: cannot access /hadoop/yarn/local/usercache/hue/appcache/application_1420342244376_0002/container_1420342244376_0002_01_000002/hive.tar.gz/hive/lib/slf4j-api-*.jar: No such file or directory

It would be really helpful if I could have a step-by-step explanation of how to resolve this. Note that I tried to implement the solution provided here:

http://idavit.blogspot.mx/2014/12/como-no-morir-en-el-intento-primer.html

but the copyToLocal command reports that there is no such file as /apps/webhcat/hive.tar.gz.

When I do a hadoop fs -ls from the command line, nothing is displayed. I do not know what is wrong.

The Hive part of the tutorial worked just fine. Also, here is my Pig script:

a = LOAD 'default.stocks' USING org.apache.hive.hcatalog.pig.HCatLoader();
b = group a BY stock_symbol;
c = group b all;
d = foreach c generate stock_symbol, AVG(c.stock_volume);
dump d;

I am also passing -useHCatalog as a Pig argument in the arguments field.
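
On the empty listing: hadoop fs -ls with no path lists the current user's HDFS home directory, which may simply be empty; that alone does not indicate a problem. Listing the root and the WebHCat area explicitly is more informative:

# list the HDFS root rather than the (possibly empty) user home directory
hadoop fs -ls /
# check whether the hive.tar.gz the job expects actually exists
hadoop fs -ls /apps/webhcat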

Ambari-Server sync-ldap not working


Replies: 0

I am trying to sync to my LDAP server using the following command:

ambari-server sync-ldap --users user.txt --groups group.txt

I get the following error:

ERROR: Exiting with exit code 1.
REASON: Sync event creation failed. Error details: <urlopen error [Errno 111] Connection refused>

Note: I can query the LDAP server using ldapsearch.

Here is my ambari-server properties file:

authentication.ldap.managerDn=cn=user1,ou=service,ou=service accounts,dc=company,dc=com
ulimit.open.files=10000
server.connection.max.idle.millis=900000
bootstrap.script=/usr/lib/python2.6/site-packages/ambari_server/bootstrap.py
server.version.file=/var/lib/ambari-server/resources/version
api.authenticate=true
jdk1.6.url=http://public-repo-1.hortonworks.com/ARTIFACTS/jdk-6u31-linux-x64.bin
server.persistence.type=local
client.api.ssl.key_name=https.key
authentication.ldap.useSSL=false
authentication.ldap.groupMembershipAttr=member
ambari-server.user=root
webapp.dir=/usr/lib/ambari-server/web
agent.threadpool.size.max=25
client.security=ldap
client.api.ssl.port=8443
authentication.ldap.usernameAttribute=sAMAccountName
jce.name=UnlimitedJCEPolicyJDK7.zip
jce_policy1.6.url=http://public-repo-1.hortonworks.com/ARTIFACTS/jce_policy-6.zip
jce_policy1.7.url=http://public-repo-1.hortonworks.com/ARTIFACTS/UnlimitedJCEPolicyJDK7.zip
java.home=/usr/jdk64/jdk1.7.0_67
server.jdbc.postgres.schema=ambari
jdk.name=jdk-7u67-linux-x64.tar.gz
authentication.ldap.groupNamingAttr=cn
api.ssl=true
client.api.ssl.cert_name=https.crt
authentication.ldap.bindAnonymously=false
recommendations.dir=/var/run/ambari-server/stack-recommendations
server.os_type=redhat6
resources.dir=/var/lib/ambari-server/resources
custom.action.definitions=/var/lib/ambari-server/resources/custom_action_definitions
authentication.ldap.groupObjectClass=group
authentication.ldap.userObjectClass=*
server.execution.scheduler.maxDbConnections=5
bootstrap.setup_agent.script=/usr/lib/python2.6/site-packages/ambari_server/setupAgent.py
server.http.session.inactive_timeout=1800
server.execution.scheduler.misfire.toleration.minutes=480
security.server.keys_dir=/var/lib/ambari-server/keys
stackadvisor.script=/var/lib/ambari-server/resources/scripts/stack_advisor.py
server.tmp.dir=/var/lib/ambari-server/tmp
server.execution.scheduler.maxThreads=5
metadata.path=/var/lib/ambari-server/resources/stacks
server.fqdn.service.url=http://169.254.169.254/latest/meta-data/public-hostname
bootstrap.dir=/var/run/ambari-server/bootstrap
server.stages.parallel=true
authentication.ldap.baseDn=dc=company,dc=com
authentication.ldap.primaryUrl=server1.company.com:389
ambari.ldap.isConfigured=true
authentication.ldap.secondaryUrl=server2.company.com:389
agent.task.timeout=900
client.threadpool.size.max=25
jdk1.7.url=http://public-repo-1.hortonworks.com/ARTIFACTS/jdk-7u67-linux-x64.tar.gz
server.jdbc.user.passwd=/etc/ambari-server/conf/password.dat
server.execution.scheduler.isClustered=false
authentication.ldap.managerPassword=/etc/ambari-server/conf/ldap-password.dat
server.jdbc.user.name=ambari
server.jdbc.database=postgres
server.jdbc.database_name=ambari
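
"Connection refused" from sync-ldap points at the Ambari server rather than at LDAP: sync-ldap talks to the local Ambari REST API, so it fails this way when ambari-server itself is not running, which would also explain why ldapsearch still works. A quick check, assuming the default API port 8080:

# confirm the server process is up and its API is answering
ambari-server status
curl -u admin:admin http://localhost:8080/api/v1/clusters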

Project Seeks to Light Up Networks


Replies: 0

Dartmouth University researchers are shining a new light on using "smart spaces" in ambient room lighting to transmit both data and human gestures.

Datanode Denied communication with Namenode.


Replies: 1

I have installed a Hadoop cluster on Ubuntu 14.04 with an individual VM for each node: a fully distributed cluster with 1 namenode, 1 secondary namenode, 1 zookeeper, 7 datanodes and 1 hbase node, all on Ubuntu 14.04, with passwordless SSH set up between them. I am able to start all required processes on the respective nodes individually, but the datanodes cannot communicate with the namenode. Only 1 datanode out of 7 appears on the namenode’s web UI (port 50070), and it shows the IP address of the host machine on which all the VMs run, not the IP address of the actual VM on which the datanode started. I tried manually adding entries to the /etc/hosts files of all the VMs as well as the host machine, but I still get the same error.

I am seeing the error below:

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool BP-.. (Datanode Uuid null) service to ../..:8020 Datanode denied communication with namenode because the host is not in the include-list: DatanodeRegistration(.., datanodeUuid=.., infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-..;nsid=191289458;c=0)
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:887)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:4514)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1017)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:92)
at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28057)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
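
The phrase "not in the include-list" means the NameNode is consulting a dfs.hosts include file, or, given the NAT symptom described (every DataNode resolving back to the host machine's address), registering hosts it does not recognize. If an include file is in use, each DataNode's FQDN must be listed in it and the NameNode told to re-read it. A sketch, assuming the include file referenced by dfs.hosts lives at /etc/hadoop/conf/dfs.include (hypothetical path):

# one DataNode FQDN per line in the include file
echo "datanode1.example.com" >> /etc/hadoop/conf/dfs.include
# make the NameNode re-read the include/exclude lists without a restart
hdfs dfsadmin -refreshNodes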


Unable to get past step 9 – Hive Metastore start fails


Replies: 12

The error is that the Hive Metastore does not start, and I see this:

stderr: /var/lib/ambari-agent/data/errors-85.txt

2014-08-13 17:04:13,454 – Error while executing command ‘start':
Traceback (most recent call last):
File “/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py”, line 111, in execute
method(env)
File “/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_metastore.py”, line 42, in start
self.configure(env) # FOR SECURITY
File “/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_metastore.py”, line 37, in configure
hive(name=’metastore’)
File “/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive.py”, line 108, in hive
not_if = check_schema_created_cmd
File “/usr/lib/python2.6/site-packages/resource_management/core/base.py”, line 148, in __init__
self.env.run()
File “/usr/lib/python2.6/site-packages/resource_management/core/environment.py”, line 149, in run
self.run_action(resource, action)
File “/usr/lib/python2.6/site-packages/resource_management/core/environment.py”, line 115, in run_action
provider_action()
File “/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py”, line 239, in action_run
raise ex
Fail: Execution of ‘export HIVE_CONF_DIR=/etc/hive/conf.server ; /usr/lib/hive/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]’ returned 1. Metastore connection URL: jdbc:mysql://hadoop.monicoinc.local/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
*** schemaTool failed ***

stdout: /var/lib/ambari-agent/data/output-85.txt (the actual text of this file is the command output)
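
"Failed to get schema version" from schematool is usually a MySQL-side problem: wrong credentials, missing grants for the hive user from this host, or a half-initialized schema. Two checks that narrow it down, reusing the connection details from the log above:

# can the hive user actually reach the metastore database?
mysql -u hive -p -h hadoop.monicoinc.local hive -e 'status'
# what schema version, if any, does schematool see?
export HIVE_CONF_DIR=/etc/hive/conf.server
/usr/lib/hive/bin/schematool -dbType mysql -info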

Spark SQL ODBC on HDP for Windows


Replies: 0

Hello,
We are a pure-play consulting company building a Big Data offering based on HDInsight and HDP. We are exploring the capabilities of Spark as a ROLAP engine for SSAS, but we cannot find a way to activate the ODBC server on a Windows cluster. There is no start-thriftserver.sh command available for Windows.

Does anybody know if there is a way to make this work?

Thanks!!
Francisco
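
For what it's worth, start-thriftserver.sh is only a thin wrapper that launches a Spark class, so one untested possibility is to invoke that class directly through spark-class on Windows. A sketch, assuming a Spark build with Hive support and %SPARK_HOME% set (both are assumptions):

REM launch the JDBC/ODBC Thrift server directly, bypassing the missing .sh wrapper
%SPARK_HOME%\bin\spark-class.cmd org.apache.spark.sql.hive.thriftserver.HiveThriftServer2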

Installation of HDP 2.2 with a sudo root user


Replies: 0

Hi All,
We are facing the following issue :
Background :
We are trying to install HDP 2.2 via Ambari. Our security team would not give us root access to the cluster, so we requested a generic ID (username: hdpcluster) with sudo root access. The security team is not willing to leave a generic ID in place, citing accountability reasons; however, they are willing to provide the generic ID (hdpcluster) with sudo root access for the duration of the installation only.
We have personal IDs with sudo root access, but we are not using them, as personal IDs are temporary.

Questions:
1) If we install HDP via Ambari with this generic ID, and after some time the ID (hdpcluster) is revoked, or its privileges are reduced, will this affect the functioning of the HDP cluster?
2) What are the best practices for installing HDP in such a scenario?

Thanks.
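
On question 1, broadly: once installed, HDP services run as their own service accounts (hdfs, yarn, hive and so on) and the Ambari agent runs as a root daemon, so revoking or reducing the installing ID's sudo access afterwards should not by itself stop a running cluster; Ambari will, however, need root (or a configured non-root setup) again for future host additions and upgrades. A sketch of the kind of time-bounded sudoers entry such installs typically use; the file path and account are hypothetical:

# /etc/sudoers.d/hdpcluster - passwordless sudo for the provisioning account,
# to be removed once installation is complete
hdpcluster ALL=(ALL) NOPASSWD: ALL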

Cannot retrieve repository metadata (repomd.xml)


Replies: 10

Hello,

Error: Cannot retrieve repository metadata (repomd.xml) for repository: HDP-UTILS-1.1.0.16. Please verify its path and try again.

I have verified that the path does contain the repomd.xml file.

I am able to "wget http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.16/repos/centos6/repodata/repomd.xml" successfully from all of my nodes.

Full log output from the install wizard:
STDOUT

STDERR
scp /usr/lib/python2.6/site-packages/ambari_server/os_type_check.sh done for host MY_HOST_NAME.com, exitcode=0
Copying os type check script finished
Running os type check…
STDOUT
Cluster primary OS type is redhat6 and local OS type is redhat6

STDERR
Connection to MY_HOST_NAME.com closed.
SSH command execution finished for host MY_HOST_NAME.com, exitcode=0
Running os type check finished
STDOUT
sudo-1.7.4p5-7.el6.x86_64

STDERR
Connection to MY_HOST_NAME.com closed.
SSH command execution finished for host MY_HOST_NAME.com, exitcode=0
Checking ‘sudo’ package finished
Copying repo file to ‘tmp’ folder…
STDOUT

STDERR
scp /etc/yum.repos.d/ambari.repo done for host MY_HOST_NAME.com, exitcode=0
Moving file to repo dir…
STDOUT

STDERR
Connection to MY_HOST_NAME.com closed.
SSH command execution finished for host MY_HOST_NAME.com, exitcode=0
Copying setup script file…
STDOUT

STDERR
scp /usr/lib/python2.6/site-packages/ambari_server/setupAgent.py done for host MY_HOST_NAME.com, exitcode=0
Copying files finished
Running setup agent…
STDOUT
http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.16/repos/centos6/repodata/repomd.xml: [Errno 12] Timeout on http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.16/repos/centos6/repodata/repomd.xml: (28, ‘connect() timed out!’)
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: HDP-UTILS-1.1.0.16. Please verify its path and try again
{'exitstatus': 1, 'log': ('Loaded plugins: rhnplugin
', None)}

STDERR
Connection to MY_HOST_NAME.com closed.
SSH command execution finished for host MY_HOST_NAME.com, exitcode=1
Setting up agent finished
ERROR: Bootstrap of host MY_HOST_NAME.com fails because previous action finished with non-zero exit code (1)
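
Since wget succeeds where yum times out, a useful first step is to compare proxy configuration between the two: wget honors the http_proxy environment variable, while yum only reads proxy settings from its own configuration (see the proxy= fix sketched in the first thread above). A quick diagnostic:

# is a proxy set in the shell environment that wget would pick up?
env | grep -i proxy
# does yum have a matching proxy= line of its own?
grep -i proxy /etc/yum.conf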

Hue URL returning AttributeError


Replies: 0

I am trying to configure Hue for my cluster and I am seeing this error:
Request Method: GET
Request URL: http://hdp21nn01:8000/about/
Django Version: 1.2.3
Exception Type: AttributeError
Exception Value:
‘str’ object has no attribute ‘get’
Exception Location: /usr/lib/hue/desktop/core/src/desktop/lib/conf.py in _get_data_and_presence, line 131
Python Executable: /usr/bin/python2.6
Python Version: 2.6.6
Python Path: [‘/usr/lib/hue/build/env/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/pip-0.6.3-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/Babel-0.9.6-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/BabelDjango-0.2.2-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/Mako-0.7.2-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/Markdown-2.0.3-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/MarkupSafe-0.9.3-py2.6-linux-x86_64.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/MySQL_python-1.2.3c1-py2.6-linux-x86_64.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/Paste-1.7.2-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/PyYAML-3.09-py2.6-linux-x86_64.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/Pygments-1.3.1-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/South-0.7-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/Spawning-0.9.6-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/avro-1.5.0-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/configobj-4.6.0-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/django_auth_ldap-1.0.7-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/django_extensions-0.5-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/django_nose-0.5-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/elementtree-1.2.6_20050316-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/enum-0.4.4-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/eventlet-0.9.14-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/greenlet-0.3.1-py2.6-linux-x86_64.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/happybase-0.6-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/kerberos-1.1.1-py2.6-linux-x86_64.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/lockfile-0.8-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/lxml-3.3.5-py2.6-linux-x86_64.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/moxy-1.0.0-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/pam-0.1.3-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/pyOpenSSL-0.13-py2.6-linux-x86_64.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/pycrypto-2.6-py2.6-linux-x86_64.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/pysqlite-2.5.5-py2.6-linux-x86_64.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/python_daemon-1.5.1-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/python_ldap-2.3.13-py2.6-linux-x86_64.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/pytidylib-0.2.1-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/requests-2.2.1-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/requests_kerberos-0.4-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/sasl-0.1.1-py2.6-linux-x86_64.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/sh-1.08-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/simplejson-2.0.9-py2.6-linux-x86_64.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/threadframe-0.2-py2.6-linux-x86_64.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/thrift-0.9.0-py2.6-linux-x86_64.egg’, 
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/urllib2_kerberos-0.1.6-py2.6.egg’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages/xlrd-0.9.0-py2.6.egg’, ‘/usr/lib/hue/desktop/core/src’, ‘/usr/lib/hue/desktop/libs/hadoop/src’, ‘/usr/lib/hue/desktop/libs/liboozie/src’, ‘/usr/lib/hue/build/env/lib/python2.6/site-packages’, ‘/usr/lib/hue/apps/about/src’, ‘/usr/lib/hue/apps/beeswax/src’, ‘/usr/lib/hue/apps/filebrowser/src’, ‘/usr/lib/hue/apps/hcatalog/src’, ‘/usr/lib/hue/apps/help/src’, ‘/usr/lib/hue/apps/jobbrowser/src’, ‘/usr/lib/hue/apps/jobsub/src’, ‘/usr/lib/hue/apps/oozie/src’, ‘/usr/lib/hue/apps/pig/src’, ‘/usr/lib/hue/apps/proxy/src’, ‘/usr/lib/hue/apps/useradmin/src’, ‘/usr/lib/hue/build/env/bin’, ‘/usr/lib64/python26.zip’, ‘/usr/lib64/python2.6′, ‘/usr/lib64/python2.6/plat-linux2′, ‘/usr/lib64/python2.6/lib-tk’, ‘/usr/lib64/python2.6/lib-old’, ‘/usr/lib64/python2.6/lib-dynload’, ‘/usr/lib64/python2.6/site-packages’, ‘/usr/lib64/python2.6/site-packages/gst-0.10′, ‘/usr/lib64/python2.6/site-packages/gtk-2.0′, ‘/usr/lib64/python2.6/site-packages/webkit-1.0′, ‘/usr/lib/python2.6/site-packages’, ‘/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info’, ‘/usr/lib/hue/apps/beeswax/src/beeswax/../../gen-py’, ‘/usr/lib/hue/apps/jobbrowser/src/jobbrowser/../../gen-py’, ‘/usr/lib/hue/apps/proxy/src/proxy/../../gen-py’]
Server time: Tue, 10 Feb 2015 16:52:36 -0800

Has anyone else encountered this?

Thank you in advance.
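
This particular AttributeError in desktop/lib/conf.py tends to be triggered by a malformed hue.ini: a plain key=value placed where Hue expects a nested section leaves the config framework holding a string where it wants a dict. It is worth re-checking the bracket nesting of any recently edited sections; a sketch of the expected shape, with a hypothetical value:

[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # nesting depth matters; a flat key at the wrong level can break conf.py
      fs_defaultfs=hdfs://hdp21nn01:8020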


Ambari host registration failing


Replies: 1

Hi,

I started straight away with the Ambari installation and am now proceeding to set up a 4-node (RHEL) cluster. These are the steps:

1. On the 'Select Stack' page under 'Advanced Repository Options', I checked only 'redhat6', which showed '400: Bad request' for http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0 and http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6. I then checked 'Skip Repository Base URL validation' and proceeded.
2. Then I added the hostnames and the id_rsa file (of the host where Ambari is running, which will also be used as the NameNode) and clicked Next.
3. Three hosts (non-Ambari) failed earlier than the other one; the following is the log for one of those:

==========================
Creating target directory…
==========================

Command start time 2015-02-11 16:03:55

Connection to l1033lab.sss.se.scania.com closed.
SSH command execution finished
host=l1033lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:56

==========================
Copying common functions script…
==========================

Command start time 2015-02-11 16:03:56

scp /usr/lib/python2.6/site-packages/ambari_commons
host=l1033lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:56

==========================
Copying OS type check script…
==========================

Command start time 2015-02-11 16:03:56

scp /usr/lib/python2.6/site-packages/ambari_server/os_check_type.py
host=l1033lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:56

==========================
Running OS type check…
==========================

Command start time 2015-02-11 16:03:56
Cluster primary/cluster OS type is redhat6 and local/current OS type is redhat6

Connection to l1033lab.sss.se.scania.com closed.
SSH command execution finished
host=l1033lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:57

==========================
Checking ‘sudo’ package on remote host…
==========================

Command start time 2015-02-11 16:03:57
sudo-1.8.6p3-12.el6.x86_64

Connection to l1033lab.sss.se.scania.com closed.
SSH command execution finished
host=l1033lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:58

==========================
Copying repo file to ‘tmp’ folder…
==========================

Command start time 2015-02-11 16:03:58

scp /etc/yum.repos.d/ambari.repo
host=l1033lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:58

==========================
Moving file to repo dir…
==========================

Command start time 2015-02-11 16:03:58

Connection to l1033lab.sss.se.scania.com closed.
SSH command execution finished
host=l1033lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:58

==========================
Copying setup script file…
==========================

Command start time 2015-02-11 16:03:58

scp /usr/lib/python2.6/site-packages/ambari_server/setupAgent.py
host=l1033lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:59

==========================
Running setup agent script…
==========================

Command start time 2015-02-11 16:03:59
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: [Errno 12] Timeout on http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: (28, ‘connect() timed out!’)
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: Updates-ambari-1.7.0. Please verify its path and try again
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: [Errno 12] Timeout on http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: (28, ‘connect() timed out!’)
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: Updates-ambari-1.7.0. Please verify its path and try again
/bin/sh: /usr/sbin/ambari-agent: No such file or directory
{'exitstatus': 1, 'log': ('', None)}

Connection to l1033lab.sss.se.scania.com closed.
SSH command execution finished
host=l1033lab.sss.se.scania.com, exitcode=1
Command end time 2015-02-11 16:05:00

ERROR: Bootstrap of host l1033lab.sss.se.scania.com fails because previous action finished with non-zero exit code (1)
ERROR MESSAGE: tcgetattr: Invalid argument
Connection to l1033lab.sss.se.scania.com closed.

STDOUT: This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: [Errno 12] Timeout on http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: (28, ‘connect() timed out!’)
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: Updates-ambari-1.7.0. Please verify its path and try again
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: [Errno 12] Timeout on http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: (28, ‘connect() timed out!’)
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: Updates-ambari-1.7.0. Please verify its path and try again
/bin/sh: /usr/sbin/ambari-agent: No such file or directory
{'exitstatus': 1, 'log': ('', None)}

Connection to l1033lab.sss.se.scania.com closed.

****************************** The last one to fail (where Ambari runs) had the following log: ******************************

==========================
Creating target directory…
==========================

Command start time 2015-02-11 16:03:55

Connection to l1032lab.sss.se.scania.com closed.
SSH command execution finished
host=l1032lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:56

==========================
Copying common functions script…
==========================

Command start time 2015-02-11 16:03:56

scp /usr/lib/python2.6/site-packages/ambari_commons
host=l1032lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:56

==========================
Copying OS type check script…
==========================

Command start time 2015-02-11 16:03:56

scp /usr/lib/python2.6/site-packages/ambari_server/os_check_type.py
host=l1032lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:56

==========================
Running OS type check…
==========================

Command start time 2015-02-11 16:03:56
Cluster primary/cluster OS type is redhat6 and local/current OS type is redhat6

Connection to l1032lab.sss.se.scania.com closed.
SSH command execution finished
host=l1032lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:57

==========================
Checking ‘sudo’ package on remote host…
==========================

Command start time 2015-02-11 16:03:57
sudo-1.8.6p3-12.el6.x86_64

Connection to l1032lab.sss.se.scania.com closed.
SSH command execution finished
host=l1032lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:58

==========================
Copying repo file to ‘tmp’ folder…
==========================

Command start time 2015-02-11 16:03:58

scp /etc/yum.repos.d/ambari.repo
host=l1032lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:58

==========================
Moving file to repo dir…
==========================

Command start time 2015-02-11 16:03:58

Connection to l1032lab.sss.se.scania.com closed.
SSH command execution finished
host=l1032lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:58

==========================
Copying setup script file…
==========================

Command start time 2015-02-11 16:03:58

scp /usr/lib/python2.6/site-packages/ambari_server/setupAgent.py
host=l1032lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:59

==========================
Running setup agent script…
==========================

Command start time 2015-02-11 16:03:59
Automatic Agent registration timed out (timeout = 300 seconds). Check your network connectivity and retry registration, or use manual agent registration.

****************************** Now I did a wget from all the hosts, successfully: ******************************

[root@l1034lab ~]# wget http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml
--2015-02-11 16:10:15-- http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml
Resolving proxyseso.scania.com… 138.106.57.2
Connecting to proxyseso.scania.com|138.106.57.2|:8080… connected.
Proxy request sent, awaiting response… 200 OK
Length: 2983 (2.9K) [text/xml]
Saving to: “repomd.xml”

100%[===================================================================================>] 2,983 --.-K/s in 0s

2015-02-11 16:10:15 (227 MB/s) – “repomd.xml” saved [2983/2983]

Are there mandatory steps before one can install Ambari and proceed? What's wrong here?

Thanks and regards
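
Two separate problems are visible in these logs: the same yum timeout as in the earlier repository threads (the successful wget above goes through proxyseso.scania.com, so yum most likely needs that proxy configured too), and a bootstrap that then cannot find /usr/sbin/ambari-agent because the package never installed. Once yum can reach the repository, manual agent registration is an alternative to the SSH bootstrap. A sketch, with ambari.server.fqdn as a placeholder:

# install the agent by hand on each host
yum install ambari-agent
# point the agent at the Ambari server
sed -i 's/hostname=localhost/hostname=ambari.server.fqdn/' /etc/ambari-agent/conf/ambari-agent.ini
ambari-agent start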



Unable to track job


Replies: 0

Hi All,

I have a cluster where the HDFS client is firewalled off from the rest of the Hadoop cluster. I have been able to secure most access; however, I am getting errors when monitoring a job. It appears the NodeManager uses a dynamic port each time, which makes the firewall difficult to configure. Is there a way to restrict the range of ports that the tracking URL is bound to?

The output I receive is as follows:

15/02/11 08:15:22 INFO fs.TestDFSIO: TestDFSIO.1.7
15/02/11 08:15:22 INFO fs.TestDFSIO: nrFiles = 500
15/02/11 08:15:22 INFO fs.TestDFSIO: nrBytes (MB) = 1000.0
15/02/11 08:15:22 INFO fs.TestDFSIO: bufferSize = 1000000
15/02/11 08:15:22 INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
15/02/11 08:15:23 INFO fs.TestDFSIO: creating control file: 1048576000 bytes, 500 files
15/02/11 08:15:36 INFO fs.TestDFSIO: created control files for: 500 files
15/02/11 08:15:37 INFO impl.TimelineClientImpl: Timeline service address: http://0.0.0.0:8188/ws/v1/timeline/
15/02/11 08:15:37 INFO client.RMProxy: Connecting to ResourceManager at node10.test.local/192.168.0.10:8050
15/02/11 08:15:37 INFO impl.TimelineClientImpl: Timeline service address: http://0.0.0.0:8188/ws/v1/timeline/
15/02/11 08:15:37 INFO client.RMProxy: Connecting to ResourceManager at node10.test.local/192.168.0.10:8050
15/02/11 08:15:38 INFO mapred.FileInputFormat: Total input paths to process : 500
15/02/11 08:15:38 INFO mapreduce.JobSubmitter: number of splits:500
15/02/11 08:15:38 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1423324247753_0011
15/02/11 08:15:38 INFO impl.YarnClientImpl: Submitted application application_1423324247753_0011
15/02/11 08:15:38 INFO mapreduce.Job: The url to track the job: http://node10.test.lo…24247753_0011/
15/02/11 08:15:38 INFO mapreduce.Job: Running job: job_1423324247753_0011
15/02/11 08:16:03 INFO ipc.Client: Retrying connect to server: node1.test.local/10.10.10.17:33463. Already tried 0 time(s); maxRetries=3
15/02/11 08:16:23 INFO ipc.Client: Retrying connect to server: node1.test.local/10.10.10.17:33463. Already tried 1 time(s); maxRetries=3
15/02/11 08:16:43 INFO ipc.Client: Retrying connect to server: node1.test.local/10.10.10.17:33463. Already tried 2 time(s); maxRetries=3

node1.test.local runs the Ambari DATANODE and NODEMANAGER roles.

The distro is Hortonworks 2.1.1. Please note that I have seen this page, http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.1/bk_reference/content/reference_chap2.html, but it does not seem to make any reference to such a port range.

Any help will be much appreciated.

Thanks

Charlie
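
There is a client-side property aimed at exactly this firewalled case: the dynamic port being retried (33463 above) belongs to the MapReduce ApplicationMaster's client service, and its bind range can be pinned in mapred-site.xml. A sketch; the range itself is an arbitrary choice to match your firewall rules:

    <property>
      <name>yarn.app.mapreduce.am.job.client.port-range</name>
      <value>50100-50200</value>
    </property>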

Storm cannot connect to HDFS in HA mode


Replies: 1

Hi,

I’m using HDP 2.1.3 and trying to get a Storm bolt to connect to HDFS to perform some writes, but it is failing with an UnknownHostException:

java.lang.RuntimeException: Error preparing HdfsBolt: java.net.UnknownHostException: tmm
	at org.apache.storm.hdfs.bolt.AbstractHdfsBolt.prepare(AbstractHdfsBolt.java:86)
	at backtype.storm.daemon.executor$fn__5371$fn__5383.invoke(executor.clj:689)
	at backtype.storm.util$async_loop$fn__1015.invoke(util.clj:434)
	at clojure.lang.AFn.run(AFn.java:24)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: tmm
	at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
	at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:240)
	at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:144)
	at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:579)
	at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:524)
	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
	at org.apache.storm.hdfs.bolt.HdfsBolt.doPrepare(HdfsBolt.java:83)
	at org.apache.storm.hdfs.bolt.AbstractHdfsBolt.prepare(AbstractHdfsBolt.java:82)
	... 4 more
Caused by: java.net.UnknownHostException: tmm
	... 17 more

As the exception suggests, my HDFS URL is hdfs://tmm and it works fine when using the hdfs tool (e.g. hdfs dfs -ls hdfs://tmm).

Going through the stack trace, it seems that NameNodeProxies.createProxy cannot get a failover proxy provider class, even though it is configured like this in hdfs-site.xml:

    <property>
      <name>dfs.nameservices</name>
      <value>tmm</value>
    </property>

    <property>
      <name>dfs.client.failover.proxy.provider.tmm</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

Is there anything I can do to troubleshoot this? Any ideas? :)

Thanks,

Max
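
The UnknownHostException usually means the Storm workers never see the HA client settings: the hdfs command works because it reads the local /etc/hadoop/conf, but the topology's worker JVMs resolve hdfs://tmm from whatever configuration is on their own classpath. One commonly suggested fix is to bundle core-site.xml and hdfs-site.xml (with the dfs.nameservices and failover-provider entries) into the topology jar. A sketch, assuming the two files are in the current directory and the jar is named mytopology.jar (hypothetical name):

# add the HA client configs to the jar so every worker can resolve 'tmm'
jar uf mytopology.jar core-site.xml hdfs-site.xml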

Can I use HDP 1.3.10 with vanilla Hadoop 1.2 clients?


Replies: 0

I’m attempting to get an application with a bundled Apache Hadoop 1.2.1 client to work with an HDP 1.3.10 cluster. The same application works fine with earlier versions of Hortonworks.

When I try to use my application, I get an error like "java.lang.IllegalArgumentException: No enum const class org.apache.hadoop.hdfs.protocol.DatanodeInfo$AdminStates".

Is it possible that the bug fix listed below on the 1.3.10 patch list has broken compatibility with the vanilla Hadoop 1.2 client?

HDFS HADOOP-10627 BUG-14148 DataNode must support HTTPS on HDP 1.3.x

Thank you,
Ben

HiveServer2 configuration


Replies: 0

After downloading the Sandbox and changing a couple of configuration files, I got rid of most of the configuration errors except the Beeswax one. I am not sure where to make the required changes. The Sandbox was downloaded for a VMware environment.

Beeswax (Hive UI): The application won't work without a running HiveServer2.
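
Beeswax reaches Hive through HiveServer2, so the two things to confirm are that HiveServer2 is actually running and that hue.ini points at the right host and port (the hive_server_host and hive_server_port keys in the [beeswax] section). A quick check from the Sandbox shell, assuming the default port 10000:

# is anything listening on HiveServer2's default port?
netstat -tlnp | grep 10000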
