Channel: Hortonworks » All Topics

Does HDP 2.x support Ubuntu 14.04?


Replies: 0

Does HDP 2.x support Ubuntu 14.04?
Why does HDP 2.3 not support Ubuntu 14.04?


Java Error while connecting to Hadoop cluster


Replies: 0

Hi,

We are running a Java program on AIX to connect to the Hadoop cluster (HDFS) and we are getting the error shown in the log below.
Please check and let us know if you have any questions.

Connection String:
————————–
hdfs://znlhacdq0002.amer.zurich.corp:10010/tmp/HDFSCheck.txt

Log:
——
Exception in thread "main" java.lang.NoClassDefFoundError: org.htrace.Trace
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:214)
at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:768)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:88)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:618)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2007)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1136)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1132)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1132)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1423)
at HDFSCheck.main(HDFSCheck.java:213)
Caused by: java.lang.ClassNotFoundException: org.htrace.Trace
at java.net.URLClassLoader.findClass(URLClassLoader.java:599)
at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:760)
at java.lang.ClassLoader.loadClass(ClassLoader.java:728)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:325)
at java.lang.ClassLoader.loadClass(ClassLoader.java:707)
… 17 more
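
For reference, a NoClassDefFoundError for org.htrace.Trace usually indicates that the htrace-core jar shipped alongside the Hadoop client libraries is missing from the program's classpath. Below is a minimal sketch of the kind of connectivity check being run; the class name, URI and file path are taken from the post, everything else is an assumption:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HDFSCheck {
    public static void main(String[] args) throws Exception {
        // NameNode URI and test file taken from the connection string above
        String uri = "hdfs://znlhacdq0002.amer.zurich.corp:10010";
        Configuration conf = new Configuration();
        // FileSystem.get needs hadoop-common, hadoop-hdfs and their dependencies
        // (including htrace-core) on the classpath of the AIX JVM
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        try {
            Path path = new Path("/tmp/HDFSCheck.txt");
            System.out.println(path + " exists: " + fs.exists(path));
        } finally {
            fs.close();
        }
    }
}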

Thanks & Regards,
Magesh Vadivel

Java error connecting to Hadoop cluster (Hive) to test connectivity from AIX


Replies: 0

Hi,

We are running a Java program on AIX to test connectivity from the AIX server to the Hadoop cluster (Hive, running on Linux servers),
and we are getting the error shown in the log below.

Please check and let us know if you have questions.

Full connection string is:
—————————-
jdbc:hive2://znlhacdq0002.amer.zurich.corp:10010/default;principal=hive/znlhacdq0002.amer.zurich.corp@ZHDPDEV.COM

Error Log:
———-

Exception in thread "main" java.sql.SQLException: Could not create secure connection to jdbc:hive2://znlhacdq0002.amer.zurich.corp:10010/default;principal=hive/znlhacdq0002.amer.zurich.corp@ZHDPDEV.COM: Failed to open client transport
at org.apache.hive.jdbc.HiveConnection.createBinaryTransport(HiveConnection.java:404)
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:187)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:163)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at HiveCheck.main(HiveCheck.java:227)
Caused by: javax.security.sasl.SaslException: Failed to open client transport [Caused by java.io.IOException: Could not instantiate SASL transport]
at org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:60)
at org.apache.hive.jdbc.HiveConnection.createBinaryTransport(HiveConnection.java:362)
… 4 more
Caused by: java.io.IOException: Could not instantiate SASL transport
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:224)
at org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:56)
… 5 more
Caused by: javax.security.sasl.SaslException: Failure to initialize security context [Caused by org.ietf.jgss.GSSException, major code: 13, minor code: 0
major string: Invalid credentials
minor string: SubjectCredFinder: no JAAS Subject]
at com.ibm.security.sasl.gsskerb.GssKrb5Client.<init>(GssKrb5Client.java:131)
at com.ibm.security.sasl.gsskerb.FactoryImpl.createSaslClient(FactoryImpl.java:53)
at javax.security.sasl.Sasl.createSaslClient(Sasl.java:362)
at org.apache.thrift.transport.TSaslClientTransport.<init>(TSaslClientTransport.java:72)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:216)
… 6 more
Caused by: org.ietf.jgss.GSSException, major code: 13, minor code: 0
major string: Invalid credentials
minor string: SubjectCredFinder: no JAAS Subject
at com.ibm.security.jgss.i18n.I18NException.throwGSSException(I18NException.java:8)
at com.ibm.security.jgss.mech.krb5.db.run(db.java:15)
at java.security.AccessController.doPrivileged(AccessController.java:330)
at com.ibm.security.jgss.mech.krb5.y.c(y.java:260)
at com.ibm.security.jgss.mech.krb5.y.a(y.java:176)
at com.ibm.security.jgss.mech.krb5.y.a(y.java:278)
at com.ibm.security.jgss.mech.krb5.y.<init>(y.java:241)
at com.ibm.security.jgss.mech.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:19)
at com.ibm.security.jgss.GSSManagerImpl.createMechCredential(GSSManagerImpl.java:75)
at com.ibm.security.jgss.GSSCredentialImpl.add(GSSCredentialImpl.java:109)
at com.ibm.security.jgss.GSSCredentialImpl.<init>(GSSCredentialImpl.java:162)
at com.ibm.security.jgss.GSSManagerImpl.createCredential(GSSManagerImpl.java:11)
at com.ibm.security.jgss.GSSContextImpl.a(GSSContextImpl.java:17)
at com.ibm.security.jgss.GSSContextImpl.<init>(GSSContextImpl.java:1)
at com.ibm.security.jgss.GSSManagerImpl.createContext(GSSManagerImpl.java:27)
at com.ibm.security.sasl.gsskerb.GssKrb5Client.<init>(GssKrb5Client.java:110)
… 10 more
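
For reference, the GSS error "SubjectCredFinder: no JAAS Subject" generally means the JVM has no Kerberos login (JAAS Subject) at the moment the SASL handshake is attempted. Below is a minimal sketch of a keytab-based login before opening the JDBC connection; the JDBC URL and realm are taken from the post, while the principal name and keytab path are assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class HiveCheck {
    public static void main(String[] args) throws Exception {
        // Tell the Hadoop security layer to use Kerberos
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Log in from a keytab so a JAAS Subject exists for the GSS/SASL handshake
        // (principal and keytab path below are placeholders)
        UserGroupInformation.loginUserFromKeytab(
                "someuser@ZHDPDEV.COM", "/path/to/someuser.keytab");

        Class.forName("org.apache.hive.jdbc.HiveDriver");
        String url = "jdbc:hive2://znlhacdq0002.amer.zurich.corp:10010/default;"
                + "principal=hive/znlhacdq0002.amer.zurich.corp@ZHDPDEV.COM";
        Connection con = DriverManager.getConnection(url);
        System.out.println("Connected: " + !con.isClosed());
        con.close();
    }
}

On the IBM JDK used on AIX, the same login can also be configured through a JAAS login configuration file instead of the keytab call, but a sketch like the one above is usually the simplest way to confirm the Kerberos setup.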

Thanks & Regards,
Magesh Vadivel

HiveServer2 doesn't start: "ascii codec can't encode character"


Replies: 0

Hi! I made a cluster with a NameNode, a Secondary NameNode, and 3 DataNodes. I installed HDP via Ambari + Hue, and now I am configuring XA Secure policies for HDFS, Hive and HBase. It works fine for every component except Hive. The problem is that when I change hive.security.authorization to true (in Ambari -> Hive configs), HiveServer2 fails to start with this error:

File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 115, in action_create
fp.write(content)
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2013' in position 990: ordinal not in range(128)

I tried to edit that Python file, but any change I make only makes things worse. It apparently tries to encode a Unicode character with the wrong codec while writing it to a file, but I am not much of a programmer and I don't know how to fix it correctly. I also can't figure out which file it is writing, where it is, or what it contains.

When I set security authorization to false, the server starts but crashes in ~3 minutes with an error:

12:02:43,523 ERROR [pool-1-thread-648] JMXPropertyProvider:540 - Caught exception getting JMX metrics : Server returned HTTP response code: 500 for URL: http://localhost.localdomain:8745/api/cluster/summary
12:02:50,604 INFO [qtp677995254-4417] HeartBeatHandler:428 - State of service component HIVE_SERVER of service HIVE of cluster testING has changed from STARTED to INSTALLED at host localhost.localdomain
12:02:53,624 ERROR [pool-1-thread-668] JMXPropertyProvider:540 - Caught exception getting JMX metrics : Read timed out

Any suggestions? Thank you in advance

Problem adding Ranger service


Replies: 5

Has anyone encountered the same?

stderr: /var/lib/ambari-agent/data/errors-226.txt

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 74, in <module>
RangerAdmin().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 214, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 37, in install
self.configure(env)
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 70, in configure
setup_ranger()
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/setup_ranger.py", line 31, in setup_ranger
content = DownloadSource(params.driver_curl_source)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 108, in action_create
content = self._get_content()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 150, in _get_content
return content()
File "/usr/lib/python2.6/site-packages/resource_management/core/source.py", line 50, in __call__
return self.get_content()
File "/usr/lib/python2.6/site-packages/resource_management/core/source.py", line 181, in get_content
web_file = opener.open(req)
File "/usr/lib64/python2.6/urllib2.py", line 397, in open
response = meth(req, response)
File "/usr/lib64/python2.6/urllib2.py", line 510, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib64/python2.6/urllib2.py", line 435, in error
return self._call_chain(*args)
File "/usr/lib64/python2.6/urllib2.py", line 369, in _call_chain
result = func(*args)
File "/usr/lib64/python2.6/urllib2.py", line 518, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 404: Not Found
stdout: /var/lib/ambari-agent/data/output-226.txt

2015-05-03 15:27:27,657 – u”Directory[‘/var/lib/ambari-agent/data/tmp/AMBARI-artifacts/’]” {‘recursive': True}
2015-05-03 15:27:28,042 – u”File[‘/var/lib/ambari-agent/data/tmp/AMBARI-artifacts//UnlimitedJCEPolicyJDK7.zip’]” {‘content': DownloadSource(‘http://leo-cent66-h2.voyager.test:8080/resources//UnlimitedJCEPolicyJDK7.zip’)}
2015-05-03 15:27:28,211 – Not downloading the file from http://leo-cent66-h2.voyager.test:8080/resources//UnlimitedJCEPolicyJDK7.zip, because /var/lib/ambari-agent/data/tmp/UnlimitedJCEPolicyJDK7.zip already exists
2015-05-03 15:27:29,921 – u”Group[‘hwc2-users’]” {‘ignore_failures': False}
2015-05-03 15:27:29,923 – Modifying group hwc2-users
2015-05-03 15:27:30,800 – u”Group[‘ranger’]” {‘ignore_failures': False}
2015-05-03 15:27:30,801 – Modifying group ranger
2015-05-03 15:27:30,886 – u”Group[‘hwc2-hadoop’]” {‘ignore_failures': False}
2015-05-03 15:27:30,887 – Modifying group hwc2-hadoop
2015-05-03 15:27:30,930 – u”Group[‘hwc2-spark’]” {‘ignore_failures': False}
2015-05-03 15:27:30,931 – Modifying group hwc2-spark
2015-05-03 15:27:31,015 – u”Group[‘hwc2-knox’]” {‘ignore_failures': False}
2015-05-03 15:27:31,015 – Modifying group hwc2-knox
2015-05-03 15:27:31,099 – u”User[‘hwc2-zookeeper’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:31,099 – Modifying user hwc2-zookeeper
2015-05-03 15:27:31,264 – u”User[‘hwc2-ams’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:31,265 – Modifying user hwc2-ams
2015-05-03 15:27:31,310 – u”User[‘root’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:31,310 – Modifying user root
2015-05-03 15:27:31,360 – u”User[‘hwc2-hbase’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:31,360 – Modifying user hwc2-hbase
2015-05-03 15:27:31,405 – u”User[‘hwc2-storm’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:31,406 – Modifying user hwc2-storm
2015-05-03 15:27:31,491 – u”User[‘hwc2-sqoop’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:31,491 – Modifying user hwc2-sqoop
2015-05-03 15:27:31,537 – u”User[‘hwc2-mapred’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:31,537 – Modifying user hwc2-mapred
2015-05-03 15:27:31,583 – u”User[‘ranger’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:31,584 – Modifying user ranger
2015-05-03 15:27:31,629 – u”User[‘hwc2-kafka’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:31,629 – Modifying user hwc2-kafka
2015-05-03 15:27:31,714 – u”User[‘hwc2-hdfs’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:31,715 – Modifying user hwc2-hdfs
2015-05-03 15:27:31,760 – u”User[‘hwc2-yarn’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:31,761 – Modifying user hwc2-yarn
2015-05-03 15:27:31,806 – u”User[‘hwc2-flume’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:31,806 – Modifying user hwc2-flume
2015-05-03 15:27:31,851 – u”User[‘hwc2-ambari-qa’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-users’]}
2015-05-03 15:27:31,852 – Modifying user hwc2-ambari-qa
2015-05-03 15:27:31,897 – u”User[‘hwc2-oozie’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-users’]}
2015-05-03 15:27:31,897 – Modifying user hwc2-oozie
2015-05-03 15:27:31,942 – u”User[‘hwc2-tez’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-users’]}
2015-05-03 15:27:31,943 – Modifying user hwc2-tez
2015-05-03 15:27:31,988 – u”User[‘rangerlogger’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:31,988 – Modifying user rangerlogger
2015-05-03 15:27:32,033 – u”User[‘hwc2-falcon’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:32,034 – Modifying user hwc2-falcon
2015-05-03 15:27:32,079 – u”User[‘hwc2-hive’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:32,079 – Modifying user hwc2-hive
2015-05-03 15:27:32,124 – u”User[‘hwc2-spark’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:32,125 – Modifying user hwc2-spark
2015-05-03 15:27:32,170 – u”User[‘hwc2-hcat’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:32,171 – Modifying user hwc2-hcat
2015-05-03 15:27:32,216 – u”User[‘hwc2-knox’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:32,216 – Modifying user hwc2-knox
2015-05-03 15:27:32,261 – u”User[‘rangeradmin’]” {‘gid': ‘hwc2-hadoop’, ‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’]}
2015-05-03 15:27:32,262 – Modifying user rangeradmin
2015-05-03 15:27:32,307 – u”File[‘/var/lib/ambari-agent/data/tmp/changeUid.sh’]” {‘content': StaticFile(‘changeToSecureUid.sh’), ‘mode': 0555}
2015-05-03 15:27:32,617 – u”Execute[‘/var/lib/ambari-agent/data/tmp/changeUid.sh hwc2-ambari-qa /tmp/hadoop-hwc2-ambari-qa,/tmp/hsperfdata_hwc2-ambari-qa,/home/hwc2-ambari-qa,/tmp/hwc2-ambari-qa,/tmp/sqoop-hwc2-ambari-qa’]” {‘not_if': ‘(test $(id -u hwc2-ambari-qa) -gt 1000) || (false)’}
2015-05-03 15:27:32,741 – Skipping u”Execute[‘/var/lib/ambari-agent/data/tmp/changeUid.sh hwc2-ambari-qa /tmp/hadoop-hwc2-ambari-qa,/tmp/hsperfdata_hwc2-ambari-qa,/home/hwc2-ambari-qa,/tmp/hwc2-ambari-qa,/tmp/sqoop-hwc2-ambari-qa’]” due to not_if
2015-05-03 15:27:32,741 – u”Directory[‘/hadoop/hbase’]” {‘owner': ‘hwc2-hbase’, ‘recursive': True, ‘mode': 0775, ‘cd_access': ‘a’}
2015-05-03 15:27:33,153 – u”File[‘/var/lib/ambari-agent/data/tmp/changeUid.sh’]” {‘content': StaticFile(‘changeToSecureUid.sh’), ‘mode': 0555}
2015-05-03 15:27:33,444 – u”Execute[‘/var/lib/ambari-agent/data/tmp/changeUid.sh hwc2-hbase /home/hwc2-hbase,/tmp/hwc2-hbase,/usr/bin/hwc2-hbase,/var/log/hwc2-hbase,/hadoop/hbase’]” {‘not_if': ‘(test $(id -u hwc2-hbase) -gt 1000) || (false)’}
2015-05-03 15:27:33,489 – Skipping u”Execute[‘/var/lib/ambari-agent/data/tmp/changeUid.sh hwc2-hbase /home/hwc2-hbase,/tmp/hwc2-hbase,/usr/bin/hwc2-hbase,/var/log/hwc2-hbase,/hadoop/hbase’]” due to not_if
2015-05-03 15:27:33,490 – u”Group[‘hwc2-hdfs’]” {‘ignore_failures': False}
2015-05-03 15:27:33,490 – Modifying group hwc2-hdfs
2015-05-03 15:27:33,534 – u”User[‘hwc2-hdfs’]” {‘ignore_failures': False, ‘groups': [u’hwc2-hadoop’, ‘hwc2-hadoop’, ‘hwc2-hdfs’, u’hwc2-hdfs’]}
2015-05-03 15:27:33,535 – Modifying user hwc2-hdfs
2015-05-03 15:27:33,580 – u”Directory[‘/etc/hadoop’]” {‘mode': 0755}
2015-05-03 15:27:33,737 – u”Directory[‘/etc/hadoop/conf.empty’]” {‘owner': ‘root’, ‘group': ‘hwc2-hadoop’, ‘recursive': True}
2015-05-03 15:27:33,893 – u”Link[‘/etc/hadoop/conf’]” {‘not_if': ‘ls /etc/hadoop/conf’, ‘to': ‘/etc/hadoop/conf.empty’}
2015-05-03 15:27:33,983 – Skipping u”Link[‘/etc/hadoop/conf’]” due to not_if
2015-05-03 15:27:34,001 – u”File[‘/etc/hadoop/conf/hadoop-env.sh’]” {‘content': InlineTemplate(…), ‘owner': ‘hwc2-hdfs’, ‘group': ‘hwc2-hadoop’}
2015-05-03 15:27:34,342 – u”Repository[‘HDP-2.2′]” {‘base_url': ‘http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.2.4.2′, ‘action': [‘create’], ‘components': [u’HDP’, ‘main’], ‘repo_template': ‘repo_suse_rhel.j2′, ‘repo_file_name': ‘HDP’, ‘mirror_list': None}
2015-05-03 15:27:34,392 – u”File[‘/etc/yum.repos.d/HDP.repo’]” {‘content': Template(‘repo_suse_rhel.j2′)}
2015-05-03 15:27:34,662 – u”Repository[‘HDP-UTILS-1.1.0.20′]” {‘base_url': ‘http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6′, ‘action': [‘create’], ‘components': [u’HDP-UTILS’, ‘main’], ‘repo_template': ‘repo_suse_rhel.j2′, ‘repo_file_name': ‘HDP-UTILS’, ‘mirror_list': None}
2015-05-03 15:27:34,667 – u”File[‘/etc/yum.repos.d/HDP-UTILS.repo’]” {‘content': Template(‘repo_suse_rhel.j2′)}
2015-05-03 15:27:34,959 – u”Package[‘unzip’]” {}
2015-05-03 15:28:06,480 – Skipping installing existent package unzip
2015-05-03 15:28:06,480 – u”Package[‘curl’]” {}
2015-05-03 15:28:32,198 – Skipping installing existent package curl
2015-05-03 15:28:32,198 – u”Package[‘hdp-select’]” {}
2015-05-03 15:29:01,599 – Skipping installing existent package hdp-select
2015-05-03 15:29:01,628 – u”Directory[‘/var/lib/ambari-agent/data/tmp/AMBARI-artifacts/’]” {‘recursive': True}
2015-05-03 15:29:02,294 – u”File[‘/var/lib/ambari-agent/data/tmp/AMBARI-artifacts//jdk-7u67-linux-x64.tar.gz’]” {‘content': DownloadSource(‘http://leo-cent66-h2.voyager.test:8080/resources//jdk-7u67-linux-x64.tar.gz’)}
2015-05-03 15:29:02,423 – Not downloading the file from http://leo-cent66-h2.voyager.test:8080/resources//jdk-7u67-linux-x64.tar.gz, because /var/lib/ambari-agent/data/tmp/jdk-7u67-linux-x64.tar.gz already exists
2015-05-03 15:29:33,744 – u”Directory[‘/usr/jdk64′]” {}
2015-05-03 15:29:33,912 – u”Execute[‘(‘chmod’, ‘a+x’, u’/usr/jdk64′)’]” {‘not_if': ‘test -e /usr/jdk64/jdk1.7.0_67/bin/java’, ‘sudo': True}
2015-05-03 15:29:33,963 – Skipping u”Execute[‘(‘chmod’, ‘a+x’, u’/usr/jdk64′)’]” due to not_if
2015-05-03 15:29:33,964 – u”Execute[‘mkdir -p /var/lib/ambari-agent/data/tmp/jdk && cd /var/lib/ambari-agent/data/tmp/jdk && tar -xf /var/lib/ambari-agent/data/tmp/AMBARI-artifacts//jdk-7u67-linux-x64.tar.gz && ambari-sudo.sh cp -rp /var/lib/ambari-agent/data/tmp/jdk/* /usr/jdk64′]” {‘not_if': ‘test -e /usr/jdk64/jdk1.7.0_67/bin/java’}
2015-05-03 15:29:34,007 – Skipping u”Execute[‘mkdir -p /var/lib/ambari-agent/data/tmp/jdk && cd /var/lib/ambari-agent/data/tmp/jdk && tar -xf /var/lib/ambari-agent/data/tmp/AMBARI-artifacts//jdk-7u67-linux-x64.tar.gz && ambari-sudo.sh cp -rp /var/lib/ambari-agent/data/tmp/jdk/* /usr/jdk64′]” due to not_if
2015-05-03 15:29:34,008 – u”Execute[‘(‘chgrp’, ‘-R’, u’hwc2-hadoop’, u’/usr/jdk64/jdk1.7.0_67′)’]” {‘sudo': True}
2015-05-03 15:29:34,381 – u”Execute[‘(‘chown’, ‘-R’, ‘root’, u’/usr/jdk64/jdk1.7.0_67′)’]” {‘sudo': True}
2015-05-03 15:29:36,507 – u”Package[‘ranger_2_2_*-admin’]” {}
2015-05-03 15:30:06,977 – Skipping installing existent package ranger_2_2_*-admin
2015-05-03 15:30:06,978 – u”Package[‘ranger_2_2_*-usersync’]” {}
2015-05-03 15:30:09,686 – Skipping installing existent package ranger_2_2_*-usersync
2015-05-03 15:30:09,720 – Checking DB connection
5.1.73
2015-05-03 15:30:10,024 – Checking DB connection DONE
2015-05-03 15:30:10,025 – u”File[‘/var/lib/ambari-agent/data/tmp/mysql-connector-java.jar’]” {‘content': DownloadSource(‘http://leo-cent66-h2.voyager.test:8080/resources//mysql-jdbc-driver.jar’)}
2015-05-03 15:30:10,210 – Downloading the file from http://leo-cent66-h2.voyager.test:8080/resources//mysql-jdbc-driver.jar
2015-05-03 15:30:10,494 – Command: /usr/bin/hdp-select status ranger-admin > /tmp/tmplaGW3M
Output: ranger-admin – 2.2.4.2-2

Hive update: NullPointerException while using IF statement


Replies: 0

I have a table in Hive in which I have to update certain records. I am using Hive version 0.13. I did a bit of googling and found that I can use an IF expression with INSERT OVERWRITE to do this, but after running the query it throws a NullPointerException.

Here is my Employee table:

1 emp1
2 emp2
3 emp3
4 emp4
5 emp5

I created another table, employee_incr, with the same schema as employee and ran this query to get the updated records.

insert overwrite table employee_incr select employee.id,employee.ename,if(employee.id="1",12,employee.id ) as employee.id from employee;

Here is trace from Hive.

2015-07-24 09:55:05,351 INFO [main]: session.SessionState (SessionState.java:start(361)) - No Tez session required at this point. hive.execution.engine=mr.
2015-07-24 09:55:05,391 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(108)) - <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
2015-07-24 09:55:05,392 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(108)) - <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
2015-07-24 09:55:05,392 INFO [main]: ql.Driver (Driver.java:checkConcurrency(159)) - Concurrency mode is disabled, not creating a lock manager
2015-07-24 09:55:05,395 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(108)) - <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
2015-07-24 09:55:05,422 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(108)) - <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
2015-07-24 09:55:05,426 INFO [main]: parse.ParseDriver (ParseDriver.java:parse(185)) - Parsing command: insert overwrite table employee_incr select employee.id,employee.ename,if(employee.id=1,12,employee.id ) as employee.id from employee
2015-07-24 09:55:05,709 ERROR [main]: ql.Driver (SessionState.java:printError(547)) - FAILED: NullPointerException null
java.lang.NullPointerException
at org.apache.hadoop.hive.ql.parse.HiveParser.regularBody(HiveParser.java:37646)
at org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpressionBody(HiveParser.java:36884)
at org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpression(HiveParser.java:36760)
at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1338)
at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1036)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:199)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:409)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:323)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:980)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1045)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:916)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:906)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:359)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:743)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:686)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

Pig UDF using Java to create a graph in Neo4j


Replies: 0

I have a simple create-node REST API client for Neo4j written in Java. It works as plain Java, but when I try to turn the Java code into a Pig UDF it gives me the following exception. I have tried to look for the log file mentioned by the job, but the log file isn't available. I have added System.out statements in the Java code, but they are only printed to the Hue stdout. Can anyone shed some light on the issue, please?

2015-07-24 06:29:43,127 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2015-07-24 06:29:43,180 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias createdNodes. Backend error : org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception while executing [POUserFunc (Name: POUserFunc(com.graph.CreateGraph)[chararray] - scope-23 Operator Key: scope-23) children: null at []]: com.sun.jersey.api.client.ClientHandlerException: java.net.ConnectException: Connection refused
Details at logfile: /hadoop/yarn/local/usercache/hue/appcache/application_1437635399969_0056/container_e03_1437635399969_0056_01_000002/pig_1437719137636.log
2015-07-24 06:29:43,312 [main] INFO org.apache.pig.Main - Pig script completed in 4 minutes, 7 seconds and 111 milliseconds (247111 ms)
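
Since the java.net.ConnectException is reported as a backend error, the REST call is being made inside the Pig tasks on the worker nodes rather than on the machine where the script was submitted, so the Neo4j endpoint must be reachable from every node. A minimal sketch of what such an EvalFunc-based UDF might look like is below; the class name com.graph.CreateGraph appears in the log, but everything else is an assumption:

import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

// Minimal skeleton of the UDF; the actual Jersey REST call to Neo4j would go
// where the comment is. Everything except the class name is illustrative.
public class CreateGraph extends EvalFunc<String> {
    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0) {
            return null;
        }
        String nodeName = (String) input.get(0);
        // The Neo4j REST call happens here. It runs inside the map/reduce task,
        // so exceptions and System.out output end up in the YARN container logs
        // (the "Details at logfile" path above), not in the client console.
        return "created:" + nodeName;
    }
}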

can't create new directories in HDFS


Replies: 2

I am trying to follow the tutorial and can't create a new directory in HDFS while following the geolocation/truck demo. Any help will be highly appreciated.

thanks,
Yolanda.


MSI Crashes


Replies: 1

Hi Team,

I am trying to install HDP 2.2.6 on Windows Server 2012, running the installer as administrator. It starts fine but crashes after the "gathering the system information" step.
Any idea if I am missing any prerequisites? I have installed Python 2.7 and Microsoft 2010, and have Java 1.7 on my machine.

Regards

HDP 2.2 installation failure on Windows Server 2008 R2


Replies: 8

HDP 2.2 installation is failing on Windows Server 2008 R2. I have followed the instructions provided in the "Quick Start Guide for Single Node HDP Installation" in HDP_Man_Up_v22_Win.pdf, but the installation still fails. I am using SQL Server 2012 for the DB.

Please help in this regard.

The error message is as below:

See the end of this message for details on invoking
just-in-time (JIT) debugging instead of this dialog box.

************** Exception Text **************
System.ComponentModel.Win32Exception (0x80004005): The specified executable is not a valid application for this OS platform.
at System.Diagnostics.Process.StartWithShellExecuteEx(ProcessStartInfo startInfo)
at GUI.forma.ping(String host, String failed)
at GUI.forma.Validate_Hosts()
at GUI.forma.Validate_fields(String mode, String path)
at GUI.forma.Install_Click(Object sender, EventArgs e)
at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
at System.Windows.Forms.Control.WndProc(Message& m)
at System.Windows.Forms.ButtonBase.WndProc(Message& m)
at System.Windows.Forms.Button.WndProc(Message& m)
at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)

************** Loaded Assemblies **************
mscorlib
Assembly Version: 4.0.0.0
Win32 Version: 4.0.30319.34209 built by: FX452RTMGDR
CodeBase: file:///E:/Windows/Microsoft.NET/Framework64/v4.0.30319/mscorlib.dll
—————————————-
GUI
Assembly Version: 1.0.0.0
Win32 Version: 1.0.0.0
CodeBase: file:///C:/Temp/MSI6A78.tmp
—————————————-
System.Windows.Forms
Assembly Version: 4.0.0.0
Win32 Version: 4.0.30319.34209 built by: FX452RTMGDR
CodeBase: file:///E:/Windows/Microsoft.Net/assembly/GAC_MSIL/System.Windows.Forms/v4.0_4.0.0.0__b77a5c561934e089/System.Windows.Forms.dll
—————————————-
System.Drawing
Assembly Version: 4.0.0.0
Win32 Version: 4.0.30319.34209 built by: FX452RTMGDR
CodeBase: file:///E:/Windows/Microsoft.Net/assembly/GAC_MSIL/System.Drawing/v4.0_4.0.0.0__b03f5f7f11d50a3a/System.Drawing.dll
—————————————-
System
Assembly Version: 4.0.0.0
Win32 Version: 4.0.30319.34209 built by: FX452RTMGDR
CodeBase: file:///E:/Windows/Microsoft.Net/assembly/GAC_MSIL/System/v4.0_4.0.0.0__b77a5c561934e089/System.dll
—————————————-
System.Core
Assembly Version: 4.0.0.0
Win32 Version: 4.0.30319.34209 built by: FX452RTMGDR
CodeBase: file:///E:/Windows/Microsoft.Net/assembly/GAC_MSIL/System.Core/v4.0_4.0.0.0__b77a5c561934e089/System.Core.dll
—————————————-
System.DirectoryServices
Assembly Version: 4.0.0.0
Win32 Version: 4.0.30319.34209 built by: FX452RTMGDR
CodeBase: file:///E:/Windows/Microsoft.Net/assembly/GAC_MSIL/System.DirectoryServices/v4.0_4.0.0.0__b03f5f7f11d50a3a/System.DirectoryServices.dll
—————————————-

************** JIT Debugging **************
To enable just-in-time (JIT) debugging, the .config file for this
application or computer (machine.config) must have the
jitDebugging value set in the system.windows.forms section.
The application must also be compiled with debugging
enabled.

For example:

<configuration>
<system.windows.forms jitDebugging="true" />
</configuration>

When JIT debugging is enabled, any unhandled exception
will be sent to the JIT debugger registered on the computer
rather than be handled by this dialog box.

Ambari 2.1 cannot restart HDP services after upgrading from Ambari 2.0


Replies: 1

Hi,
I have just upgraded from Ambari 2.0.1 to Ambari 2.1 after the announcement that HDP 2.3 is ready for the enterprise.
The upgrade of Ambari and Ambari Metrics was very easy and smooth. However, after the upgrade, Ambari displays the icon to restart the services with stale configs. I tried to restart the services, but all of them failed with this error:

Caught an exception while executing custom service command: <class 'ambari_agent.AgentException.AgentException'>: 'Broken data in given range, expected - m-n or m, got : /default-rack:0-21'; 'Broken data in given range, expected - m-n or m, got : /default-rack:0-21'

I have 22 servers, all of them running HDP version 2.2.4.4-16.
I wonder if the error comes from the new rack awareness feature?
Has anyone had the same problem? Thanks for helping!
Sincerely.

hadoop-httpfs install issues in HDP 2.2


Replies: 10

In HDP 2.2, when installing 'hadoop-httpfs' from RPM, the installation is not complete.

A few things appear to be missing from the RPM:
- Does not contain or link anything into /etc
- The RPM scripts are missing or commented

===========

Reference:

/etc/ populated in HDP 2.1:
# rpm -qlp hadoop-httpfs-2.4.0.2.1.1.0-385.el6.x86_64.rpm | grep ^/etc | wc -l
16

/etc/ not populated in HDP 2.2:
# rpm -qlp hadoop_2_2_0_0_2041-httpfs-2.6.0.2.2.0.0-2041.el6.x86_64.rpm | grep ^/etc | wc -l
0

Working RPM scripts in HDP 2.1:

# rpm -q --scripts -p hadoop-httpfs-2.4.0.2.1.1.0-385.el6.x86_64.rpm
preinstall scriptlet (using /bin/sh):
getent group httpfs >/dev/null   || groupadd -r httpfs
getent passwd httpfs >/dev/null || /usr/sbin/useradd --comment "Hadoop HTTPFS" --shell /bin/bash -M -r -g httpfs -G httpfs --home /var/run/hadoop-httpfs httpfs
postinstall scriptlet (using /bin/sh):
alternatives --install /etc/hadoop-httpfs/conf hadoop-httpfs-conf /etc/hadoop-httpfs/conf.empty 10
alternatives --install /etc/hadoop-httpfs/tomcat-deployment hadoop-tomcat-deployment /etc/hadoop-httpfs/tomcat-deployment.dist 10
chkconfig --add hadoop-httpfs
preuninstall scriptlet (using /bin/sh):
if [ $1 = 0 ]; then
  service hadoop-httpfs stop > /dev/null 2>&1
  chkconfig --del hadoop-httpfs
  alternatives --remove hadoop-httpfs-conf /etc/hadoop-httpfs/conf.empty || :
  alternatives --remove hadoop-tomcat-deployment /etc/hadoop-httpfs/tomcat-deployment.dist || :
fi
postuninstall scriptlet (using /bin/sh):
if [ $1 -ge 1 ]; then
  service hadoop-httpfs condrestart >/dev/null 2>&1
fi

Not working install scripts in HDP 2.2 (postinstall is missing):

# rpm -q --scripts -p hadoop_2_2_0_0_2041-httpfs-2.6.0.2.2.0.0-2041.el6.x86_64.rpm
preinstall scriptlet (using /bin/sh):
getent group httpfs >/dev/null   || groupadd -r httpfs
getent passwd httpfs >/dev/null || /usr/sbin/useradd --comment "Hadoop HTTPFS" --shell /bin/bash -M -r -g httpfs -G httpfs --home /var/run/hadoop/httpfs httpfs
preuninstall scriptlet (using /bin/sh):
#if [ $1 = 0 ]; then
  #service hadoop-httpfs stop > /dev/null 2>&1
  #chkconfig --del hadoop-httpfs
  #alternatives --remove hadoop-httpfs-conf /usr/hdp/2.2.0.0-2041/etc/hadoop-httpfs/conf.empty || :
  #alternatives --remove hadoop-tomcat-deployment /usr/hdp/2.2.0.0-2041/etc/hadoop-httpfs/tomcat-deployment.dist || :
#fi

#%postun httpfs
#if [ $1 -ge 1 ]; then
  #service hadoop-httpfs condrestart >/dev/null 2>&1
#fi

Common installation type of Ambari server


Replies: 1

What's the common Hadoop multi-node installation layout with Ambari? Do users install ambari-server on one of the Hadoop nodes, or is ambari-server installed by itself on a separate node?

Thanks

HDP Slow after update


Replies: 1

Hi everyone,

In Ambari, when I started all the services, everything was done in less than 15 minutes.

After a "yum update" during July 11-12, Ambari now needs more than 3 hours to restart all the services (even after restarting the machine).

Do you know what could be happening?

Thank you.

Ranger User sync - Unix


Replies: 1

Hi,

I don't see my newly created UNIX users in the Ranger portal.

I see the following errors in the Ranger usersync logs (/var/log/ranger/usersync):

29 Jun 2015 15:40:11 ERROR PasswordValidator [Thread-4] - Response [FAILED: unable to validate due to error javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake] for user: null
javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:946)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1312)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:882)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:102)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:154)
at java.io.BufferedReader.readLine(BufferedReader.java:317)
at java.io.BufferedReader.readLine(BufferedReader.java:382)
at com.xasecure.authentication.PasswordValidator.run(PasswordValidator.java:58)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(InputRecord.java:482)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:927)

I'd appreciate anyone's help on this.


Permission Denied for Hive Query


Replies: 5

Using a newly installed sandbox and Hue with user id hue, I am attempting to select from the supplied sample file:

select * from sample_07 limit 10
..
fails with:

Error occurred executing hive query: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [hue] does not have [USE] privilege on [default]

The HDFS file is owned by user hue. How does one grant the USE privilege on the default database?
The same error occurs using Beeline. What is needed to use these tools to query the sample tables?

Completely uninstalling HDP 2.2/2.3


Replies: 2

Hi, all!

I am new to all of this Hadoop business, FYI, so take this with a grain of salt.

I installed HDP 2.2 on CentOS 6.6 using Ambari last week. It went well! I only got a couple errors, mostly due to port conflicts and insufficient disk space where I was trying to put the files. I got that straightened out and started playing with things.

I knew I was going to plug Solr into this at some point and noticed that 2.3 supports HDP Search (Solr, right?). So, because installing HDP 2.2 with Ambari was relatively painless, I thought, why not install 2.3??

I decided to do a complete uninstall/install since I didn’t have any real data I needed to preserve. Well, 2.3 didn’t go well at all. I was getting errors all over the place. I never was able to start any services. I decided to back out of 2.3 and go back to 2.2.

That was a couple of hours ago. :-) I only just now got an HDP 2.2 cluster installed with HDFS and ZooKeeper running. I was able to do that only by gradually, manually removing things I thought might be causing conflicts. Here is a GitHub gist with the shell script I was using to remove remnants of previous installations:

https://gist.github.com/hourback/085500397bb2588964c5

I wish it were clearer how to completely reset the environment in order to reinstall HDP. Now I'm wondering if my HDP 2.3 installation failed because HDP 2.3 is simply too new, as I originally thought, or because I didn't completely remove HDP 2.2, as I now suspect. I've already lost an entire day to this, and I don't know if HDP Search is worth upgrading to HDP 2.3 at this point, since I'm just getting going with Hadoop, period. Anyway, I guess I'll keep going with HDP 2.2 and see how far that gets me. I'd be interested in knowing if anyone else has had a similar experience.

Have a great weekend,
Ali

Ambari 2.1 alerts & YARN HA


Replies: 0

There seems to be a bug in Ambari 2.1 alerts where it's trying to monitor the standby resource manager & node manager in a YARN HA setup, and then reports the alert status "UNKNOWN" with the response status "HTTP 200 response (metrics unavailable)" for that node.

This shows up in an API call to ambari:
/api/v1/clusters/mycluster/alerts?format=summary

"UNKNOWN" : {
"count" : 3,
"original_timestamp" : 1437770735278,
"maintenance_count" : 0
}

and also eventually causes the "ambari alert" to trigger.

I confirmed that failing over YARN HA to the secondary node causes the alerts to migrate as well (and now Ambari is alarming on the new secondary).

Only one reducer when inserting into an ORC dynamically partitioned table


Replies: 2

Hi,

I am running a 10-node HDP 2.2 cluster,
using Tez and YARN.
The Hive version is 0.14.

I have a 90 million row table stored in a 10 GB plain-text CSV file.

When trying to insert into an ORC partitioned table using the statement:

"insert overwrite table 2h2 partition (dt) select *,TIME_STAMP from 2h_tmp;"

dt is the dynamic partition key.

Tez allocates only one reducer to the job, which results in a 6-hour run.

I expect about 120 partitions to be created.

How can I increase number of reducers to speed up this job?

Is this related to https://issues.apache.org/jira/browse/HIVE-7158? It is marked as resolved for Hive 0.14.

I am running with the default values:

hive.tez.auto.reducer.parallelism

Default Value: false
Added In: Hive 0.14.0 with HIVE-7158

hive.tez.max.partition.factor

Default Value: 2
Added In: Hive 0.14.0 with HIVE-7158

hive.tez.min.partition.factor

Default Value: 0.25
Added In: Hive 0.14.0 with HIVE-7158

and hive.exec.dynamic.partition=true;
hive.exec.dynamic.partition.mode=nonstrict;

Ambari - HDFS view "500 Server Error"


Replies: 2

Hi everyone,

I've recently installed the latest version of HDP with the help of Ambari on a 3-machine CentOS cluster, with only the HDFS, Hive, HBase, Tez, Pig, MapR2, YARN and ZooKeeper modules. I've been trying to set up the HDFS view in the Ambari web app, and so far it has been going smoothly: I can set up permissions and create directories.
But when I try uploading a file into a directory that the current user normally has access to, I'm met with a "500 Server error" pop-up as soon as I press the upload button. The only message showing up in the DataNode logs is an End Of File Exception, which is apparently related to some Nagios check that doesn't affect functionality (original thread here). Thus, I'm thinking it is an Ambari error, not a Hadoop one.

Any leads on this?
