Channel: Hortonworks » All Topics

Error in starting App Timeline Server


Replies: 0

I am getting the following error when I start the YARN App Timeline Server on a new installation. Please help.

Below are the details:

cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.3 (Santiago)

echo $JAVA_HOME
/usr/java/jdk1.7.0_67

find / -iname 'leveldbjn*jar'
/usr/hdp/2.2.0.0-2041/hbase/lib/leveldbjni-all-1.8.jar
/usr/hdp/2.2.0.0-2041/slider/lib/leveldbjni-all-1.8.jar
/usr/hdp/2.2.0.0-2041/oozie/libtools/leveldbjni-all-1.8.jar
/usr/hdp/2.2.0.0-2041/oozie/libserver/leveldbjni-all-1.8.jar
/usr/hdp/2.2.0.0-2041/oozie/oozie-server/webapps/oozie/WEB-INF/lib/leveldbjni-all-1.8.jar
/usr/hdp/2.2.0.0-2041/hadoop/client/leveldbjni-all-1.8.jar
/usr/hdp/2.2.0.0-2041/hadoop/client/leveldbjni-all.jar
/usr/hdp/2.2.0.0-2041/hadoop-yarn/lib/leveldbjni-all-1.8.jar
/usr/hdp/2.2.0.0-2041/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar
/usr/hdp/2.2.0.0-2041/falcon/client/lib/leveldbjni-all-1.8.jar

stderr:
2015-03-16 10:26:10,169 – Error while executing command ‘restart':
Traceback (most recent call last):
File “/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py”, line 123, in execute
method(env)
File “/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py”, line 233, in restart
self.start(env)
File “/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/application_timeline_server.py”, line 42, in start
service(‘timelineserver’, action=’start’)
File “/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/service.py”, line 59, in service
initial_wait=5
File “/usr/lib/python2.6/site-packages/resource_management/core/base.py”, line 148, in __init__
self.env.run()
File “/usr/lib/python2.6/site-packages/resource_management/core/environment.py”, line 149, in run
self.run_action(resource, action)
File “/usr/lib/python2.6/site-packages/resource_management/core/environment.py”, line 115, in run_action
provider_action()
File “/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py”, line 241, in action_run
raise ex
Fail: Execution of 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1' returned 1.
stdout:
2015-03-16 10:26:02,652 – Group[‘hadoop’] {‘ignore_failures': False}
2015-03-16 10:26:02,653 – Modifying group hadoop
2015-03-16 10:26:02,694 – Group[‘nobody’] {‘ignore_failures': False}
2015-03-16 10:26:02,695 – Modifying group nobody
2015-03-16 10:26:02,725 – Group[‘users’] {‘ignore_failures': False}
2015-03-16 10:26:02,725 – Modifying group users
2015-03-16 10:26:02,758 – Group[‘nagios’] {‘ignore_failures': False}
2015-03-16 10:26:02,759 – Modifying group nagios
2015-03-16 10:26:02,786 – Group[‘knox’] {‘ignore_failures': False}
2015-03-16 10:26:02,787 – Modifying group knox
2015-03-16 10:26:02,817 – User[‘nobody’] {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’nobody’]}
2015-03-16 10:26:02,817 – Modifying user nobody
2015-03-16 10:26:02,844 – User[‘hive’] {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-03-16 10:26:02,844 – Modifying user hive
2015-03-16 10:26:02,875 – User[‘oozie’] {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’users’]}
2015-03-16 10:26:02,876 – Modifying user oozie
2015-03-16 10:26:02,902 – User[‘nagios’] {‘gid': ‘nagios’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-03-16 10:26:02,903 – Modifying user nagios
2015-03-16 10:26:02,932 – User[‘ambari-qa’] {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’users’]}
2015-03-16 10:26:02,932 – Modifying user ambari-qa
2015-03-16 10:26:02,962 – User[‘flume’] {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-03-16 10:26:02,963 – Modifying user flume
2015-03-16 10:26:02,988 – User[‘hdfs’] {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-03-16 10:26:02,988 – Modifying user hdfs
2015-03-16 10:26:03,015 – User[‘knox’] {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-03-16 10:26:03,015 – Modifying user knox
2015-03-16 10:26:03,041 – User[‘storm’] {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-03-16 10:26:03,041 – Modifying user storm
2015-03-16 10:26:03,068 – User[‘mapred’] {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-03-16 10:26:03,069 – Modifying user mapred
2015-03-16 10:26:03,095 – User[‘hbase’] {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-03-16 10:26:03,095 – Modifying user hbase
2015-03-16 10:26:03,121 – User[‘tez’] {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’users’]}
2015-03-16 10:26:03,122 – Modifying user tez
2015-03-16 10:26:03,147 – User[‘zookeeper’] {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-03-16 10:26:03,147 – Modifying user zookeeper
2015-03-16 10:26:03,173 – User[‘kafka’] {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-03-16 10:26:03,173 – Modifying user kafka
2015-03-16 10:26:03,237 – User[‘falcon’] {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-03-16 10:26:03,238 – Modifying user falcon
2015-03-16 10:26:03,274 – User[‘sqoop’] {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-03-16 10:26:03,274 – Modifying user sqoop
2015-03-16 10:26:03,301 – User[‘yarn’] {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-03-16 10:26:03,301 – Modifying user yarn
2015-03-16 10:26:03,329 – User[‘hcat’] {‘gid': ‘hadoop’, ‘ignore_failures': False, ‘groups': [u’hadoop’]}
2015-03-16 10:26:03,330 – Modifying user hcat
2015-03-16 10:26:03,356 – File[‘/var/lib/ambari-agent/data/tmp/changeUid.sh’] {‘content': StaticFile(‘changeToSecureUid.sh’), ‘mode': 0555}
2015-03-16 10:26:03,358 – Execute[‘/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 2>/dev/null’] {‘not_if': ‘test $(id -u ambari-qa) -gt 1000′}
2015-03-16 10:26:03,385 – Skipping Execute[‘/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 2>/dev/null’] due to not_if
2015-03-16 10:26:03,386 – File[‘/var/lib/ambari-agent/data/tmp/changeUid.sh’] {‘content': StaticFile(‘changeToSecureUid.sh’), ‘mode': 0555}
2015-03-16 10:26:03,387 – Execute[‘/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/hadoop/hbase 2>/dev/null’] {‘not_if': ‘test $(id -u hbase) -gt 1000′}
2015-03-16 10:26:03,412 – Skipping Execute[‘/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/hadoop/hbase 2>/dev/null’] due to not_if
2015-03-16 10:26:03,413 – Directory[‘/etc/hadoop/conf.empty’] {‘owner': ‘root’, ‘group': ‘root’, ‘recursive': True}
2015-03-16 10:26:03,413 – Link[‘/etc/hadoop/conf’] {‘not_if': ‘ls /etc/hadoop/conf’, ‘to': ‘/etc/hadoop/conf.empty’}
2015-03-16 10:26:03,437 – Skipping Link[‘/etc/hadoop/conf’] due to not_if
2015-03-16 10:26:03,451 – File[‘/etc/hadoop/conf/hadoop-env.sh’] {‘content': InlineTemplate(…), ‘owner': ‘hdfs’}
2015-03-16 10:26:03,463 – Execute[‘/bin/echo 0 > /selinux/enforce’] {‘only_if': ‘test -f /selinux/enforce’}
2015-03-16 10:26:03,487 – Skipping Execute[‘/bin/echo 0 > /selinux/enforce’] due to only_if
2015-03-16 10:26:03,487 – Directory[‘/var/log/hadoop’] {‘owner': ‘root’, ‘group': ‘hadoop’, ‘mode': 0775, ‘recursive': True}
2015-03-16 10:26:03,489 – Directory[‘/var/run/hadoop’] {‘owner': ‘root’, ‘group': ‘root’, ‘recursive': True}
2015-03-16 10:26:03,489 – Directory[‘/tmp/hadoop-hdfs’] {‘owner': ‘hdfs’, ‘recursive': True}
2015-03-16 10:26:03,494 – File[‘/etc/hadoop/conf/commons-logging.properties’] {‘content': Template(‘commons-logging.properties.j2′), ‘owner': ‘hdfs’}
2015-03-16 10:26:03,496 – File[‘/etc/hadoop/conf/health_check’] {‘content': Template(‘health_check-v2.j2′), ‘owner': ‘hdfs’}
2015-03-16 10:26:03,497 – File[‘/etc/hadoop/conf/log4j.properties’] {‘content': ‘…’, ‘owner': ‘hdfs’, ‘group': ‘hadoop’, ‘mode': 0644}
2015-03-16 10:26:03,502 – File[‘/etc/hadoop/conf/hadoop-metrics2.properties’] {‘content': Template(‘hadoop-metrics2.properties.j2′), ‘owner': ‘hdfs’}
2015-03-16 10:26:03,502 – File[‘/etc/hadoop/conf/task-log4j.properties’] {‘content': StaticFile(‘task-log4j.properties’), ‘mode': 0755}
2015-03-16 10:26:03,680 – Execute['export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-yarn-client/sbin/yarn-daemon.sh --config /etc/hadoop/conf stop timelineserver'] {'user': 'yarn'}
2015-03-16 10:26:03,765 – File[‘/var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid’] {‘action': [‘delete’]}
2015-03-16 10:26:03,767 – Directory[‘/var/run/hadoop-yarn/yarn’] {‘owner': ‘yarn’, ‘group': ‘hadoop’, ‘recursive': True}
2015-03-16 10:26:03,768 – Directory[‘/var/log/hadoop-yarn/yarn’] {‘owner': ‘yarn’, ‘group': ‘hadoop’, ‘recursive': True}
2015-03-16 10:26:03,769 – Directory[‘/var/run/hadoop-mapreduce/mapred’] {‘owner': ‘mapred’, ‘group': ‘hadoop’, ‘recursive': True}
2015-03-16 10:26:03,770 – Directory[‘/var/log/hadoop-mapreduce/mapred’] {‘owner': ‘mapred’, ‘group': ‘hadoop’, ‘recursive': True}
2015-03-16 10:26:03,771 – Directory[‘/var/log/hadoop-yarn’] {‘owner': ‘yarn’, ‘ignore_failures': True, ‘recursive': True}
2015-03-16 10:26:03,771 – XmlConfig[‘core-site.xml’] {‘group': ‘hadoop’, ‘conf_dir': ‘/etc/hadoop/conf’, ‘mode': 0644, ‘configuration_attributes': …, ‘owner': ‘hdfs’, ‘configurations': …}
2015-03-16 10:26:03,789 – Generating config: /etc/hadoop/conf/core-site.xml
2015-03-16 10:26:03,790 – File[‘/etc/hadoop/conf/core-site.xml’] {‘owner': ‘hdfs’, ‘content': InlineTemplate(…), ‘group': ‘hadoop’, ‘mode': 0644, ‘encoding': ‘UTF-8′}
2015-03-16 10:26:03,791 – Writing File[‘/etc/hadoop/conf/core-site.xml’] because contents don’t match
2015-03-16 10:26:03,791 – XmlConfig[‘mapred-site.xml’] {‘group': ‘hadoop’, ‘conf_dir': ‘/etc/hadoop/conf’, ‘mode': 0644, ‘configuration_attributes': …, ‘owner': ‘yarn’, ‘configurations': …}
2015-03-16 10:26:03,802 – Generating config: /etc/hadoop/conf/mapred-site.xml
2015-03-16 10:26:03,802 – File[‘/etc/hadoop/conf/mapred-site.xml’] {‘owner': ‘yarn’, ‘content': InlineTemplate(…), ‘group': ‘hadoop’, ‘mode': 0644, ‘encoding': ‘UTF-8′}
2015-03-16 10:26:03,804 – Writing File[‘/etc/hadoop/conf/mapred-site.xml’] because contents don’t match
2015-03-16 10:26:03,804 – Changing owner for /etc/hadoop/conf/mapred-site.xml from 509 to yarn
2015-03-16 10:26:03,804 – XmlConfig[‘yarn-site.xml’] {‘group': ‘hadoop’, ‘conf_dir': ‘/etc/hadoop/conf’, ‘mode': 0644, ‘configuration_attributes': …, ‘owner': ‘yarn’, ‘configurations': …}
2015-03-16 10:26:03,815 – Generating config: /etc/hadoop/conf/yarn-site.xml
2015-03-16 10:26:03,815 – File[‘/etc/hadoop/conf/yarn-site.xml’] {‘owner': ‘yarn’, ‘content': InlineTemplate(…), ‘group': ‘hadoop’, ‘mode': 0644, ‘encoding': ‘UTF-8′}
2015-03-16 10:26:03,817 – Writing File[‘/etc/hadoop/conf/yarn-site.xml’] because contents don’t match
2015-03-16 10:26:03,818 – XmlConfig[‘capacity-scheduler.xml’] {‘group': ‘hadoop’, ‘conf_dir': ‘/etc/hadoop/conf’, ‘mode': 0644, ‘configuration_attributes': …, ‘owner': ‘yarn’, ‘configurations': …}
2015-03-16 10:26:03,829 – Generating config: /etc/hadoop/conf/capacity-scheduler.xml
2015-03-16 10:26:03,829 – File[‘/etc/hadoop/conf/capacity-scheduler.xml’] {‘owner': ‘yarn’, ‘content': InlineTemplate(…), ‘group': ‘hadoop’, ‘mode': 0644, ‘encoding': ‘UTF-8′}
2015-03-16 10:26:03,830 – Writing File[‘/etc/hadoop/conf/capacity-scheduler.xml’] because contents don’t match
2015-03-16 10:26:03,830 – Changing owner for /etc/hadoop/conf/capacity-scheduler.xml from 506 to yarn
2015-03-16 10:26:03,830 – Directory[‘/hadoop/yarn/timeline’] {‘owner': ‘yarn’, ‘group': ‘hadoop’, ‘recursive': True}
2015-03-16 10:26:03,831 – File[‘/etc/hadoop/conf/yarn.exclude’] {‘owner': ‘yarn’, ‘group': ‘hadoop’}
2015-03-16 10:26:03,834 – File[‘/etc/security/limits.d/yarn.conf’] {‘content': Template(‘yarn.conf.j2′), ‘mode': 0644}
2015-03-16 10:26:03,836 – File[‘/etc/security/limits.d/mapreduce.conf’] {‘content': Template(‘mapreduce.conf.j2′), ‘mode': 0644}
2015-03-16 10:26:03,841 – File[‘/etc/hadoop/conf/yarn-env.sh’] {‘content': InlineTemplate(…), ‘owner': ‘yarn’, ‘group': ‘hadoop’, ‘mode': 0755}
2015-03-16 10:26:03,843 – File[‘/etc/hadoop/conf/mapred-env.sh’] {‘content': InlineTemplate(…), ‘owner': ‘hdfs’}
2015-03-16 10:26:03,845 – File[‘/etc/hadoop/conf/taskcontroller.cfg’] {‘content': Template(‘taskcontroller.cfg.j2′), ‘owner': ‘hdfs’}
2015-03-16 10:26:03,846 – XmlConfig[‘mapred-site.xml’] {‘owner': ‘mapred’, ‘group': ‘hadoop’, ‘conf_dir': ‘/etc/hadoop/conf’, ‘configuration_attributes': …, ‘configurations': …}
2015-03-16 10:26:03,857 – Generating config: /etc/hadoop/conf/mapred-site.xml
2015-03-16 10:26:03,857 – File[‘/etc/hadoop/conf/mapred-site.xml’] {‘owner': ‘mapred’, ‘content': InlineTemplate(…), ‘group': ‘hadoop’, ‘mode': None, ‘encoding': ‘UTF-8′}
2015-03-16 10:26:03,858 – Changing owner for /etc/hadoop/conf/mapred-site.xml from 516 to mapred
2015-03-16 10:26:03,859 – XmlConfig[‘capacity-scheduler.xml’] {‘owner': ‘hdfs’, ‘group': ‘hadoop’, ‘conf_dir': ‘/etc/hadoop/conf’, ‘configuration_attributes': …, ‘configurations': …}
2015-03-16 10:26:03,869 – Generating config: /etc/hadoop/conf/capacity-scheduler.xml
2015-03-16 10:26:03,870 – File[‘/etc/hadoop/conf/capacity-scheduler.xml’] {‘owner': ‘hdfs’, ‘content': InlineTemplate(…), ‘group': ‘hadoop’, ‘mode': None, ‘encoding': ‘UTF-8′}
2015-03-16 10:26:03,871 – Changing owner for /etc/hadoop/conf/capacity-scheduler.xml from 516 to hdfs
2015-03-16 10:26:03,872 – File['/var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid'] {'action': ['delete'], 'not_if': 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1'}
2015-03-16 10:26:03,902 – Execute['ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-yarn-client/sbin/yarn-daemon.sh --config /etc/hadoop/conf start timelineserver'] {'not_if': 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1', 'user': 'yarn'}
2015-03-16 10:26:05,028 – Execute['ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1'] {'initial_wait': 5, 'not_if': 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1', 'user': 'yarn'}
2015-03-16 10:26:10,169 – Error while executing command ‘restart':
Traceback (most recent call last):
File “/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py”, line 123, in execute
method(env)
File “/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py”, line 233, in restart
self.start(env)
File “/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/application_timeline_server.py”, line 42, in start
service(‘timelineserver’, action=’start’)
File “/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/service.py”, line 59, in service
initial_wait=5
File “/usr/lib/python2.6/site-packages/resource_management/core/base.py”, line 148, in __init__
self.env.run()
File “/usr/lib/python2.6/site-packages/resource_management/core/environment.py”, line 149, in run
self.run_action(resource, action)
File “/usr/lib/python2.6/site-packages/resource_management/core/environment.py”, line 115, in run_action
provider_action()
File “/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py”, line 241, in action_run
raise ex
Fail: Execution of 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1' returned 1.

The ApplicationHistoryServer log shows the following error:
‘jenkins’ on 2014-11-19T19:42Z
STARTUP_MSG: java = 1.7.0_67
************************************************************/
2015-03-16 10:31:39,084 INFO applicationhistoryservice.ApplicationHistoryServer (SignalLogger.java:register(91)) – registered UNIX signal handlers for [TERM, HUP, INT]
2015-03-16 10:31:39,767 WARN util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) – Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
2015-03-16 10:31:40,032 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(111)) – loaded properties from hadoop-metrics2.properties
2015-03-16 10:31:40,092 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(376)) – Scheduled snapshot period at 60 second(s).
2015-03-16 10:31:40,092 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) – ApplicationHistoryServer metrics system started
2015-03-16 10:31:40,125 FATAL applicationhistoryservice.ApplicationHistoryServer (ApplicationHistoryServer.java:launchAppHistoryServer(160)) – Error starting ApplicationHistoryServer
java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no leveldbjni32-1.8 in java.library.path, no leveldbjni-1.8 in java.library.path, no leveldbjni in java.library.path, /tmp/libleveldbjni-32-1-2153690905433719857.8: libstdc++.so.6: cannot open shared object file: No such file or directory]
at org.fusesource.hawtjni.runtime.Library.doLoad(Library.java:182)
at org.fusesource.hawtjni.runtime.Library.load(Library.java:140)
at org.fusesource.leveldbjni.JniDBFactory.<clinit>(JniDBFactory.java:48)
at org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore.serviceInit(LeveldbTimelineStore.java:202)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.serviceInit(ApplicationHistoryServer.java:99)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.launchAppHistoryServer(ApplicationHistoryServer.java:157)
at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.main(ApplicationHistoryServer.java:167)
2015-03-16 10:31:40,129 INFO util.ExitUtil (ExitUtil.java:terminate(124)) – Exiting with status -1
2015-03-16 10:31:40,131 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(210)) – Stopping ApplicationHistoryServer metrics system…
2015-03-16 10:31:40,132 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(216)) – ApplicationHistoryServer metrics system stopped.
2015-03-16 10:31:40,132 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(605)) – ApplicationHistoryServer metrics system shutdown complete.
2015-03-16 10:31:40,132 INFO applicationhistoryservice.ApplicationHistoryServer (StringUtils.java:run(659)) – SHUTDOWN_MSG:
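The last reason listed in the UnsatisfiedLinkError ("libstdc++.so.6: cannot open shared object file") points at a missing C++ runtime for the leveldbjni native library that hawtjni extracted to /tmp, and the "-32-" in that temp file name suggests a 32-bit code path. A few hedged checks on RHEL 6 (the yum step is an assumption based only on that error text, not a confirmed diagnosis):

file /usr/java/jdk1.7.0_67/bin/java    # is the JDK a 32-bit or 64-bit build?
ldconfig -p | grep libstdc++           # which libstdc++ runtimes the linker can see
rpm -q libstdc++ libstdc++.i686
# If only the 32-bit JNI path is being tried and the 32-bit runtime is absent,
# installing it (or switching to a 64-bit JDK) is a plausible fix:
yum install -y libstdc++.i686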


Integrating Nutch with Hortonworks Sandbox


Replies: 0

I am trying to crawl the web, preferably with Nutch. I could not find any references on whether Hortonworks supports Nutch out of the box.

Has anyone integrated Nutch with Hortonworks HDP or the HDP Sandbox?
Once I can integrate it on the Sandbox, I would like to use it on HDP in dev/production environments.

Please share your experience.

Thank you in advance.

Regards,
Arpan

Hive Not Equal Issue


Replies: 1

Any guidance on why the following query works correctly with “=” but returns an error when I change it to “!=”? See the last line.

select count(*)
from projected_inventory_finished_goods a join zsnp_projinv b
on (
a.projected_inventory_date = to_date(b.start_date)
and cast(a.material_id as bigint) = cast(b.product as bigint)
and cast(a.plant_id as bigint) = cast(b.location as bigint)
and cast(a.projected_inventory_quantity as bigint) != cast(b.projinv_cs as bigint))
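For what it's worth, Hive releases of this vintage generally accept only equality conditions inside the JOIN … ON clause, so a != comparison usually has to move into a WHERE clause. A hedged rewrite of the same query, which keeps the inner-join semantics:

hive -e "
select count(*)
from projected_inventory_finished_goods a join zsnp_projinv b
  on (a.projected_inventory_date = to_date(b.start_date)
      and cast(a.material_id as bigint) = cast(b.product as bigint)
      and cast(a.plant_id as bigint) = cast(b.location as bigint))
where cast(a.projected_inventory_quantity as bigint) != cast(b.projinv_cs as bigint)"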

Failure registering hosts


Replies: 2

I am trying to set up the Ambari server and hosts on Amazon EC2 Linux instances (using passwordless SSH), and it fails while registering the hosts. I am using the FQDN (private DNS) to register (as ec2-user) through the cluster setup process.

The error i see on the ambari-server is the following :

Host {private DNS of the host} doesn’t exist in database
01:48:05,319 ERROR [qtp855747015-23] AbstractResourceProvider:280 – Caught AmbariException when creating a resource
org.apache.ambari.server.AmbariException: Host ip-172-31-36-244.us-west-2.compute.internal doesn’t exist in database
at org.apache.ambari.server.actionmanager.ActionDBAccessorImpl.persistActions(ActionDBAccessorImpl.java:242)

I am using the latest version of OpenSSL, SELinux is turned off, and the machines can communicate with each other without issues. I can see that the ambari-agent is installed on the host. This is the error on the agent:

INFO 2015-02-10 01:46:00,121 NetUtil.py:48 – Connecting to {Server private DNS}:8440/connection_info
INFO 2015-02-10 01:46:00,205 security.py:49 – Server require two-way SSL authentication. Use it instead of one-way…
INFO 2015-02-10 01:46:00,206 security.py:175 – Server certicate exists, ok
INFO 2015-02-10 01:46:00,206 security.py:183 – Agent key exists, ok
INFO 2015-02-10 01:46:00,206 security.py:191 – Agent certificate exists, ok
INFO 2015-02-10 01:46:00,206 security.py:93 – SSL Connect being called.. connecting to the server
INFO 2015-02-10 01:46:00,285 security.py:77 – SSL connection established. Two-way SSL authentication completed successfully.
ERROR 2015-02-10 01:46:00,290 Controller.py:117 – Cannot register host with not supported os type, hostname={host private DNS}, serverOsType=redhat7, agentOsType=redhat7
INFO 2015-02-10 01:46:00,290 Controller.py:320 – Registration response from {Server private DNS} was FAILED
INFO 2015-02-10 01:46:00,290 main.py:55 – signal received, exiting.
INFO 2015-02-10 01:46:00,290 ProcessHelper.py:39 – Removing pid file
INFO 2015-02-10 01:46:00,290 ProcessHelper.py:46 – Removing temp files

Any suggestions on what is wrong here?
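The agent line worth chasing is "Cannot register host with not supported os type … serverOsType=redhat7, agentOsType=redhat7": the installed Ambari version does not recognise RHEL 7 as a supported OS, and RHEL 7 support only arrived in later Ambari releases. A hedged check of what is actually installed on both sides:

# On the Ambari server and on each host:
rpm -q ambari-server ambari-agent
cat /etc/redhat-release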

Regards,
V.

Hive UDF function request problem


Replies: 2

Hello,
I have a list of values for a specific field (state) sorted by date. I want to display only the lines whose state has changed since the previous date:
Example input:

date state
2013-01-15 04:15:07.602 ON
2013-01-15 05:15:08.502 ON
2013-01-15 06:15:08.502 OFF
2013-01-15 07:15:08.502 ON
2013-01-15 08:15:08.502 ON
...

Output expected

date state
2013-01-15 04:15:07.602 ON
2013-01-15 06:15:08.502 OFF
2013-01-15 07:15:08.502 ON

My hiveql query is like this

select date, state from demo_bd where statechanged(state) sort by date

“statechanged” is my Java UDF, which returns true only if the current state is different from the previous one. The function works fine in Java.
My problem is that it seems to work for the first few hundred values, then it fails, and sometimes (not every time) I get the same state for two adjacent dates…
I really don’t see where the problem comes from. Is it related to the way and order in which Hive processes the data?
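A UDF that remembers the previous row is sensitive to how Hive splits and orders rows across tasks, which would explain the intermittent results. A hedged alternative, assuming Hive 0.11 or later, is the LAG window function, which makes the "compare with the previous row" explicit instead of relying on processing order:

hive -e '
select `date`, state
from (
  select `date`, state,
         lag(state) over (order by `date`) as prev_state
  from demo_bd
) t
where prev_state is null or state <> prev_state
order by `date`'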

Any help is really appreciated.

Thank you.

Query on partition with where clause based on unix_timestamp()


Replies: 2

As the number of partitions grows, the query takes longer and longer.

The table has a partition by datestamp (bigint)

The following where clause touches upon all 82 partitions:
WHERE datestamp=cast(from_unixtime(unix_timestamp(),'yyyyMMdd') as bigint)

15/03/16 09:21:53 INFO mapred.FileInputFormat: Total input paths to process : 82

…whereas the following only touches the one partition:
WHERE datestamp=20150316

15/03/16 09:23:06 INFO input.FileInputFormat: Total input paths to process : 1
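unix_timestamp() is treated as non-deterministic, so the planner cannot fold the expression into a constant at compile time and therefore cannot prune partitions. A hedged workaround is to compute the literal outside Hive and pass it in (the table name and variable here are placeholders):

ds=$(date +%Y%m%d)
hive --hiveconf ds="$ds" -e 'select count(*) from my_table where datestamp = ${hiveconf:ds}'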

Error: HDP 2.1 Installation nagios-plugins-1.4.9-1.x86_64 failed


Replies: 3

File “/usr/lib/python2.6/site-packages/resource_management/core/shell.py”, line 90, in _call
raise Fail(err_msg)
Fail: Execution of ‘/usr/bin/yum -d 0 -e 0 -y install hdp_mon_nagios_addons’ returned 1. Error: Package: nagios-plugins-1.4.9-1.x86_64 (HDP-UTILS-1.1.0.17)
Requires: libssl.so.10(libssl.so.10)(64bit)
Error: Package: nagios-plugins-1.4.9-1.x86_64 (HDP-UTILS-1.1.0.17)
Requires: libcrypto.so.10(libcrypto.so.10)(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

The Ambari installation fails with the above error. I found that libcrypto.so.10 and libssl.so.10 are installed on the node.
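yum is complaining about the RPM capabilities shown in the error, which is not quite the same thing as the files being present on disk. A hedged way to see which package, if any, actually provides those capabilities:

rpm -q --whatprovides 'libssl.so.10(libssl.so.10)(64bit)' 'libcrypto.so.10(libcrypto.so.10)(64bit)'
ldconfig -p | grep -E 'libssl\.so\.10|libcrypto\.so\.10'
yum list installed 'openssl*'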

running Hadoop with HBase: org.apache.hadoop.hbase.client.HTable.(Lorg/apa


Replies: 1

I’m trying to make a mapreduce program on Hadoop using HBase. I’m using Hadoop 2.5.1 with HBase 0.98.10.1.

The program compiles successfully and is packaged into a jar file. But when I try to run the jar using “hadoop jar”, it fails with this error:

“org.apache.hadoop.hbase.client.HTable.(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String”.

Here is the line of code I used to initiate the HTable.

HBaseConfiguration config = new HBaseConfiguration();
HTable table = new HTable(config, "Test");
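A truncated signature error like this for HTable's constructor usually means the HBase jars on the runtime classpath do not match the ones the job was compiled against. A hedged way to check the cluster version and launch with the cluster's own HBase jars on the classpath (the jar and class names below are placeholders):

hbase version                               # confirm which HBase the cluster actually runs
export HADOOP_CLASSPATH=$(hbase classpath)  # put the cluster's HBase jars on the client classpath
hadoop jar myjob.jar my.package.MyDriver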

Thanks in advance for any help.


Unable to get past step 9 – Hive Metastore start – fails


Replies: 13

Error is that the Hive Metastore does not start and I see this:

stderr: /var/lib/ambari-agent/data/errors-85.txt

2014-08-13 17:04:13,454 – Error while executing command ‘start':
Traceback (most recent call last):
File “/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py”, line 111, in execute
method(env)
File “/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_metastore.py”, line 42, in start
self.configure(env) # FOR SECURITY
File “/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_metastore.py”, line 37, in configure
hive(name=’metastore’)
File “/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive.py”, line 108, in hive
not_if = check_schema_created_cmd
File “/usr/lib/python2.6/site-packages/resource_management/core/base.py”, line 148, in __init__
self.env.run()
File “/usr/lib/python2.6/site-packages/resource_management/core/environment.py”, line 149, in run
self.run_action(resource, action)
File “/usr/lib/python2.6/site-packages/resource_management/core/environment.py”, line 115, in run_action
provider_action()
File “/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py”, line 239, in action_run
raise ex
Fail: Execution of 'export HIVE_CONF_DIR=/etc/hive/conf.server ; /usr/lib/hive/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]' returned 1. Metastore connection URL: jdbc:mysql://hadoop.monicoinc.local/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
*** schemaTool failed ***

stdout: /var/lib/ambari-agent/data/output-85.txt – (actual text of this file is output)
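schematool is failing before it can read a schema version, which usually comes down to the JDBC connection or an empty/partial metastore database. A hedged way to test the same connection and ask schematool what it sees, outside of Ambari (hostname and credentials are the ones already shown in the error above):

mysql -h hadoop.monicoinc.local -u hive -p -e 'show databases;'   # prompts for the hive DB password
export HIVE_CONF_DIR=/etc/hive/conf.server
/usr/lib/hive/bin/schematool -dbType mysql -info -userName hive -passWord '<password>'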

What creates /user/ directories


Replies: 0

Hi folks, I am using a third-party app which connects to Hadoop as one user and then impersonates another for HDFS and MapReduce2 jobs. The problem is that it wants to write some data into the /user/<username> directory. That would be fine if I only had a few users, but my users are anyone within an Active Directory group. I can log in as hdfs, create the user home directory, and chown it to that user, but there may be loads of them, and I don’t know them all in advance.

So is there something which will automatically create these user directories?

Is this something that Apache Ranger (née Argus) does in its user synchronisation?
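As far as I know there is nothing in core HDFS of this era that creates /user/<username> automatically, and Ranger's usersync populates Ranger's own user list rather than HDFS home directories. A hedged workaround is a small script run as the hdfs user, for example from cron (the AD group name is a placeholder, and getent is assumed to resolve the group through SSSD/LDAP on this host):

for u in $(getent group hadoop_users_ad | cut -d: -f4 | tr ',' ' '); do
  hdfs dfs -test -d /user/$u || {
    hdfs dfs -mkdir -p /user/$u        # create the home dir only if it is missing
    hdfs dfs -chown $u:hdfs /user/$u
    hdfs dfs -chmod 700 /user/$u
  }
done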

Thanks all

Data size on HDFS


Replies: 1

Hi,

I’m sorry to ask this dummy question, but I didn’t manage to find any practical answer to it.

I would like to estimate the storage size of my data once it is uploaded to HDFS.
Imagine a 1 GB file uploaded to HDFS with 3 replicas and no compression. Does this imply that I will need 3 GB on my data nodes?

Is the file size shown by the Hue file browser taking replication factor into account? What about hdfs dfs -du -s -h ?
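Yes: with a replication factor of 3, a 1 GB file occupies roughly 3 GB of raw datanode capacity, while the size most listings report is the logical 1 GB as written. A hedged way to see both numbers (the path is a placeholder):

hdfs dfs -du -s -h /data/myfile            # logical size as written (about 1 GB here)
hdfs dfs -stat %r /data/myfile             # replication factor of that file (e.g. 3)
hdfs dfsadmin -report | grep 'DFS Used'    # raw usage across the cluster, counting all replicas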

Thank you.

Regards,
Orlando

Services failed to start during install & post install URGENT!


Replies: 0

I’ve made several attempts at installing a 4-node cluster using Ambari 1.7 and HDP 2.1 on RHEL 6.5.
All the applications install, but they always fail when starting the services.
As a result, basically none of the applications start after a reboot.
My experience is that RHEL manages services under /etc/init.d/<servicename>,
but the only service that gets created is ambari-server.
The documentation

http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.2.1/bk_reference/content/reference_chap3_1.html

says to execute these commands…
How are those supposed to be automated?

My install goes perfectly until the “failed service” warning, and basically nothing works except Ambari.

How is this supposed to work? I’m thinking it should be totally automatic…
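Components deployed by Ambari are not registered as /etc/init.d services; they are started by the ambari-agent on instruction from the Ambari server, which is why only ambari-server shows up there. One hedged way to script a "start everything" after a reboot is the Ambari REST API (the host, cluster name, and credentials below are placeholders):

curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start All Services"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://ambari-host:8080/api/v1/clusters/MYCLUSTER/services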

Tutorial: Real time Data Ingestion in HBase & Hive using Storm Bolt


Replies: 7

Hi there,
Is anybody experiencing problems with this tutorial?
I submitted the topology, and all the services for Storm and Kafka are started.
I then issued the command to start the ‘TruckEventsProducer’ Kafka producer. I can see events being produced and logs sent to the screen.
But the data is not being persisted. The Kafka spout is not producing anything (when I view the Storm UI, the KafkaSpout emitted counter is not updating… it stays at 0). When I check the log files for the worker task for the TruckEventProcessor in /var/log/storm…
I see the following

13:01:45 b.s.d.worker [INFO] Launching worker for truck-event-processor-1-1422017673 on 8c75249c-e8e9-4d31-9908-579f25c4fb88:6701 with id 48ed2969-334c-479c-9b03-2a31053fa65c
13:01:45 b.s.d.worker [ERROR] Error on initialization of server mk-worker
java.io.IOException: No such file or directory

I’ve tried resubmitting this topology several times and I always get the error.
I also made sure I cleaned out storm.local.dir (/hadoop/storm) before each run.
I was able to get everything working in “Ingesting and processing Realtime events with Apache Storm”:
the topology submitted for that tutorial (tutorial 2) was fine, but the one submitted for this exercise, tutorial 3 (storm jar target/Tutorial-1.0-SNAPSHOT.jar com.hortonworks.tutorials.tutorial3.TruckEventProcessingTopology), doesn’t seem to process anything.
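"java.io.IOException: No such file or directory" at worker launch often points at storm.local.dir being missing or unwritable on one of the supervisor nodes rather than at the topology itself. A few hedged checks to run on each supervisor node (the storm user and log file name are assumptions about a typical HDP layout):

ls -ld /hadoop/storm
sudo -u storm touch /hadoop/storm/.write_test && echo writable   # can the Storm daemon user write here?
tail -n 50 /var/log/storm/supervisor.log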

Anybody have any ideas please
Thank you

Hive 0.14


Replies: 0

Hi All,

In the link below, we say “The major compaction will request the lowest in-flight transaction id and re-write the base with the merged transaction that are less than it.”

http://hortonworks.com/blog/adding-acid-to-apache-hive/

I am curious to know: by rewrite, do we mean the entire base file is replaced? If so, is there any write-up around this to show how exactly it is done?
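Yes, a major compaction writes a brand-new base directory containing the merged rows; readers switch to it once it is complete, and the old base and delta directories are removed afterwards by the cleaner. A hedged illustration against a throwaway ACID table (the table name and warehouse path are placeholders):

hdfs dfs -ls /apps/hive/warehouse/t        # before: base_0000005/ plus delta_.../ directories
hive -e "ALTER TABLE t COMPACT 'major'"    # request a major compaction
hive -e "SHOW COMPACTIONS"                 # watch it move from initiated to ready for cleaning
hdfs dfs -ls /apps/hive/warehouse/t        # after: a single new base_NNNNNNN/ directory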

Regards,
Sam

Getting errors in Storm tutorial


Replies: 1

I am trying to implement bolts to send data from Storm to hive and hbase as shown by tutorial (http://hortonworks.com/hadoop-tutorial/real-time-data-ingestion-hbase-hive-using-storm-bolt/ )
However, I am getting error(s):
hbaseBolt:
java.lang.RuntimeException: Error retrievinging connection and access to HBase Tables at com.hortonworks.tutorials.tutorial3.TruckHBaseBolt.prepare(TruckHBaseBolt.java:76) at backtype.storm.daemon.e

hdfsBolt:
java.lang.NoClassDefFoundError: org/htrace/Trace at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:214) at com.sun.proxy.$Proxy9.create(Unknown Source) at sun.reflect

The above errors are seen in the Storm UI. Kafka producer is producing the data, but the bolts are unable to process it.
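The NoClassDefFoundError for org/htrace/Trace suggests the htrace-core jar, which the Hadoop 2.6 HDFS client depends on, is not packaged into the topology jar that gets shipped to the workers. A hedged check on the jar that was submitted (if these come back empty, the Maven shade/assembly configuration likely needs to include those dependencies):

jar tf target/Tutorial-1.0-SNAPSHOT.jar | grep -i htrace
jar tf target/Tutorial-1.0-SNAPSHOT.jar | grep -c 'org/apache/hadoop/hbase'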


Ranger user-sync with LDAP authentication – error


Replies: 1

Hello
We are trying to connect Ranger to our LDAP server (Microsoft Active Directory).
We filled install.properties file with all the correct values:
SYNC_SOURCE = ldap
SYNC_LDAP_URL = ldap://<LDAP_FQDN:389>
SYNC_LDAP_BIND_DN = cn=<USER>,ou=Users,dc=<domain_name>,dc=local
SYNC_LDAP_BIND_PASSWORD = password

However, after running setup.sh and starting the user-sync service the usersync.log shows:
“ERROR UserGroupSync [UnixUserSyncThread] – Failed to initialize UserGroup source/sink. Will retry after 300000 milliseconds. Error details:
javax.naming.AuthenticationException: [LDAP: error code 49 – 80090308: LdapErr: DSID-0C0903C5, comment: AcceptSecurityContext error, data 52e, v2580^@]”

The error suggests it’s a credentials issue; however, the error remains no matter which user & password we provide (and they are 100% correct).
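LDAP error 49 with data 52e is Active Directory's "invalid credentials" code, so it is worth proving the exact same bind outside Ranger; note also that in many AD domains the default container is CN=Users rather than OU=Users, which is easy to get wrong in the bind DN. A hedged test with ldapsearch (from openldap-clients), using the same values as install.properties:

ldapsearch -x -H 'ldap://<LDAP_FQDN>:389' \
  -D 'cn=<USER>,ou=Users,dc=<domain_name>,dc=local' -w '<password>' \
  -b 'dc=<domain_name>,dc=local' '(sAMAccountName=<USER>)' dn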

Any ideas ?

Adi

Has anyone succeeded in LDAP authentication within Ranger?


Replies: 0

Hello,

I am trying to use LDAP (instead of Unix authentication) for importing users/groups from my existing LDAP server. There are no error messages in the log files, but the users/groups from LDAP aren't synced into Ranger!?

So my question is if anyone has been successful with LDAP integration in Ranger?

HDP Streaming Job with C# Mapper/Reducer and Configuration File


Replies: 0

I installed HDP 2.2 on Windows Server 2012 R2 Build 9600.

I am struggling to execute a C# mapper/reducer with a configuration file and an external DLL. My reducer needs to read an external configuration file and also call an external DLL.

I tried the following

attempt 1)
hadoop jar c:\hdp\hadoop-2.6.0.2.2.0.0-2041\share\hadoop\tools\lib\hadoop-streaming-2.6.0.2.2.0.0-2041.jar -conf hdfs://host_name:8020/user/hadoop/App/Reducer.exe.config -input /user/hadoop/Input -output /user/hadoop/Output -mapper Mapper.exe -reducer Reducer.exe -file C:\Test\Mapper.exe -file C:\Test\Reducer.exe

mapper runs, but reducer fails. The log file reads
==============================================================================================================================================
2015-03-17 17:17:15,788 FATAL [IPC Server handler 0 on 55054] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1426543033330_0007_r_000000_0 – exited : java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 255
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:322)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:535)
at org.apache.hadoop.streaming.PipeReducer.close(PipeReducer.java:134)
at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:237)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

2015-03-17 17:17:15,788 INFO [IPC Server handler 0 on 55054] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from attempt_1426543033330_0007_r_000000_0: Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 255
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:322)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:535)
at org.apache.hadoop.streaming.PipeReducer.close(PipeReducer.java:134)
at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:237)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

2015-03-17 17:17:15,815 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1426543033330_0007_r_000000_0: Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 255
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:322)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:535)
at org.apache.hadoop.streaming.PipeReducer.close(PipeReducer.java:134)
at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:237)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
========================================================================================================

attempt 2)

hadoop jar c:\hdp\hadoop-2.6.0.2.2.0.0-2041\share\hadoop\tools\lib\hadoop-streaming-2.6.0.2.2.0.0-2041.jar -conf /user/hadoop/App/Reducer.exe.config -input /user/hadoop/Input -output /user/hadoop/Output -mapper Mapper.exe -reducer Reducer.exe -file C:\Test\Mapper.exe -file C:\Test\Reducer.exe

Mapper runs, but reducer fails with the same error message.

I am wondering what is the correct way/syntax to include the configuration file in the streaming job.
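-conf expects a Hadoop XML configuration file, not a .NET app.config, which would explain why the reducer still cannot find its settings. A hedged alternative is to ship Reducer.exe.config (and the external DLL) with -file so they land in the task's working directory next to Reducer.exe, where the .NET runtime looks for them (the DLL name below is a placeholder):

hadoop jar c:\hdp\hadoop-2.6.0.2.2.0.0-2041\share\hadoop\tools\lib\hadoop-streaming-2.6.0.2.2.0.0-2041.jar ^
  -input /user/hadoop/Input -output /user/hadoop/Output ^
  -mapper Mapper.exe -reducer Reducer.exe ^
  -file C:\Test\Mapper.exe -file C:\Test\Reducer.exe ^
  -file C:\Test\Reducer.exe.config -file C:\Test\External.dll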

Thanks.

Ambari 1.6 GUI Stuck on "Loading"


Replies: 0

Hi ,

I just upgraded my stack from HDP 2.1 to 2.2, and now logging on to server:8080/#/main/dashboard/metrics hangs on “loading”.

GET /api/v1/stacks2/HDP/versions/2.2/stackServices?fields=StackServices/comments,StackServices/service_version,serviceComponents/*&_=1426673414334 HTTP/1.1
Host: myserver:8080
Connection: keep-alive
Accept: application/json, text/javascript, */*; q=0.01
X-Requested-With: XMLHttpRequest
X-Requested-By: X-Requested-By
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.89 Safari/537.36
Referer: http://myserver:8080/
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-GB,en;q=0.8,es;q=0.6
Cookie: AMBARISESSIONID=phj1o89zi8riw9ajwi1qv59t

The response is:

{
"status" : 404,
"message" : "Parent Stack Version resource doesn't exist. Stack data, stackName=HDP, stackVersion=2.2"
}

Not sure if related:

ls /var/lib/ambari-server/resources/stacks/HDP/
1.2.0 1.3 1.3.2 1.3.3 2.0.5 2.0.6.GlusterFS 2.1.GlusterFS
1.2.1 1.3.0 1.3.2.GlusterFS 2.0 2.0.6 2.1

Any ideas?

Regards,
D
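The 404 is consistent with the directory listing above: there is no 2.2 stack definition under the Ambari server's resources, and the HDP 2.2 stack definition ships with Ambari 1.7 and later rather than 1.6. A hedged check on the server:

rpm -q ambari-server
ls /var/lib/ambari-server/resources/stacks/HDP/ | grep '^2\.2'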

Insert Arabic data


Replies: 2

I cannot insert Arabic data into Hive; an exception with a Unicode error appears.
I tried changing the Python file to utf-8,
but it was useless. What can I do? I'm stuck!!! :(
