Channel: Hortonworks » All Topics

HDP 2.2 – HiveServer2 crashes


Replies: 3

We have HiveServer2 crashing about once every two days on average.

We connect over JDBC using auth=noSasl and hive.server2.authentication=nosasl (according to

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.2.0/Comparing_Beeline_and_the_Hive_CLI_v22/index.html#Item1.1

There are many kinds of errors in hiveserver2.log related to Java heap space (a hedged heap-size sketch follows the excerpts below):

ERROR [BoneCP-pool-watch-thread]: bonecp.CustomThreadFactory (CustomThreadFactory.java:uncaughtException(69)) – Uncaught Exception in thread BoneCP-pool-watch-thread java.lang.OutOfMemoryError: Java heap space
______________________________________________________
WARN [HiveServer2-Handler-Pool: Thread-30]: thrift.ThriftCLIService (ThriftCLIService.java:ExecuteStatement(407)) – Error executing statement:
java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:84)
….
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
______________________________________________________
ERROR [BoneCP-pool-watch-thread]: bonecp.BoneCP (BoneCP.java:obtainInternalConnection(292)) – Failed to acquire connection to jdbc:mysql://my-host/hive?createDatabaseIfNotExist=true. Sleeping for 7000 ms. Attempts left: 4 java.sql.SQLException: java.lang.OutOfMemoryError: Java heap space at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1073)
______________________________________________________
ERROR [CuratorFramework-0]: imps.CuratorFrameworkImpl (CuratorFrameworkImpl.java:logError(534)) – Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
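
Recurring OutOfMemoryError entries like the ones above usually mean the HiveServer2 JVM heap is too small for the session and connection load it carries, so raising the heap and restarting HiveServer2 is the usual first step. A minimal, hedged sketch of that change; the names below are the common HDP 2.2 hive-env.sh settings (Hive > Configs > Advanced hive-env in Ambari) and may differ on your install, and the values are illustrative:

export HADOOP_HEAPSIZE=4096   # MB of heap for Hive services, including HiveServer2
# optionally capture a heap dump on the next OOM to see what is filling the heap:
export HADOOP_OPTS="$HADOOP_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/hive"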


Ambari 1.7 Confirm Hosts error


Replies: 0

I installed Ambari 1.7 on CentOS 6.6 and get this error at the Confirm Hosts step:
==========================
Creating target directory…
==========================

Command start time 2015-03-08 13:41:37

Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
SSH command execution finished
host=hadoopcluster.master, exitcode=255
Command end time 2015-03-08 13:41:37

ERROR: Bootstrap of host hadoopcluster.master fails because previous action finished with non-zero exit code (255)
ERROR MESSAGE: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

STDOUT:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
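
Permission denied (publickey,gssapi-keyex,gssapi-with-mic) at the Confirm Hosts step usually means the private key pasted into the Install Wizard does not match an authorized key for root on the target host. A hedged sketch of the usual key setup, assuming root is the SSH user configured in the wizard:

# run on the Ambari server host
ssh-keygen -t rsa                            # accept defaults; empty passphrase
ssh-copy-id root@hadoopcluster.master        # appends the public key to root's authorized_keys
ssh root@hadoopcluster.master hostname       # must succeed without a password prompt
# then paste the matching PRIVATE key (~/.ssh/id_rsa) into the wizard's host registration page and retry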

Optimization in Hive: Does Hive update dynamically the graph structure of tasks?


Replies: 0

When I look at the Hive source code (latest version 1.0.0):

In the 'compile' phase, each SQL query generates a list of dependent tasks (a graph of tasks).

My question: in the 'execution' phase, does Hive dynamically change or update the task graph? And what about the configuration files?

In other words, does Hive use static optimization or dynamic optimization?
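
The task graph produced by the compile phase can be inspected with EXPLAIN before anything runs, which makes it easy to compare the compiled plan against what actually executes. A hedged sketch (the table and query are placeholders):

hive -e "EXPLAIN EXTENDED SELECT dept, COUNT(*) FROM employees GROUP BY dept;"
# prints STAGE DEPENDENCIES and STAGE PLANS, i.e. the static graph of tasks that the
# compile phase hands to the execution phase for this query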

Memory configuration for 1 NN (16 GB RAM) and 4 DN (8 GB RAM)


Replies: 4

I have set up an HDP cluster using Ambari 1.7 and can process 50 GB of data (2,750 tasks) on it.
The problem comes when I run another script over 50 GB that produces about 4,300 tasks: the job fails with a heap-size error.
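
With 8 GB DataNodes, heap failures at higher task counts usually come from the per-container and ApplicationMaster heap settings rather than from the total data volume. A hedged sketch of the YARN/MapReduce2 properties involved (set them through Ambari; the values below are only illustrative for an 8 GB node, not a recommendation):

yarn.nodemanager.resource.memory-mb=6144        # memory YARN may allocate per 8 GB node
yarn.scheduler.maximum-allocation-mb=6144
mapreduce.map.memory.mb=1536
mapreduce.map.java.opts=-Xmx1228m               # roughly 80% of the map container
mapreduce.reduce.memory.mb=3072
mapreduce.reduce.java.opts=-Xmx2457m
yarn.app.mapreduce.am.resource.mb=3072          # the ApplicationMaster container
yarn.app.mapreduce.am.command-opts=-Xmx2457m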


Ambari-Server sync-ldap not working


Replies: 2

I am trying to sync with my LDAP server using the following command:

ambari-server sync-ldap --users user.txt --groups group.txt

I get the following errors:

ERROR: Exiting with exit code 1.
REASON: Sync event creation failed. Error details: <urlopen error [Errno 111] Connection refused>

Note: I can query the LDAP server using ldapsearch.
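
The <urlopen error [Errno 111] Connection refused> comes from sync-ldap calling the Ambari server's own REST API, not from LDAP, which is why ldapsearch still works. A hedged checklist, assuming the API is on the default port 8080 and default admin credentials:

ambari-server status                                        # the server must be running for sync-ldap
curl -u admin:admin http://localhost:8080/api/v1/clusters   # should return JSON, not "Connection refused"
ambari-server restart                                       # pick up the LDAP properties if setup-ldap was just run
ambari-server sync-ldap --users user.txt --groups group.txt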

Here is my ambari-server properties file

authentication.ldap.managerDn=cn=user1,ou=service,ou=service accounts,dc=company,dc=com
ulimit.open.files=10000
server.connection.max.idle.millis=900000
bootstrap.script=/usr/lib/python2.6/site-packages/ambari_server/bootstrap.py
server.version.file=/var/lib/ambari-server/resources/version
api.authenticate=true
jdk1.6.url=http://public-repo-1.hortonworks.com/ARTIFACTS/jdk-6u31-linux-x64.bin
server.persistence.type=local
client.api.ssl.key_name=https.key
authentication.ldap.useSSL=false
authentication.ldap.groupMembershipAttr=member
ambari-server.user=root
webapp.dir=/usr/lib/ambari-server/web
agent.threadpool.size.max=25
client.security=ldap
client.api.ssl.port=8443
authentication.ldap.usernameAttribute=sAMAccountName
jce.name=UnlimitedJCEPolicyJDK7.zip
jce_policy1.6.url=http://public-repo-1.hortonworks.com/ARTIFACTS/jce_policy-6.zip
jce_policy1.7.url=http://public-repo-1.hortonworks.com/ARTIFACTS/UnlimitedJCEPolicyJDK7.zip
java.home=/usr/jdk64/jdk1.7.0_67
server.jdbc.postgres.schema=ambari
jdk.name=jdk-7u67-linux-x64.tar.gz
authentication.ldap.groupNamingAttr=cn
api.ssl=true
client.api.ssl.cert_name=https.crt
authentication.ldap.bindAnonymously=false
recommendations.dir=/var/run/ambari-server/stack-recommendations
server.os_type=redhat6
resources.dir=/var/lib/ambari-server/resources
custom.action.definitions=/var/lib/ambari-server/resources/custom_action_definitions
authentication.ldap.groupObjectClass=group
authentication.ldap.userObjectClass=*
server.execution.scheduler.maxDbConnections=5
bootstrap.setup_agent.script=/usr/lib/python2.6/site-packages/ambari_server/setupAgent.py
server.http.session.inactive_timeout=1800
server.execution.scheduler.misfire.toleration.minutes=480
security.server.keys_dir=/var/lib/ambari-server/keys
stackadvisor.script=/var/lib/ambari-server/resources/scripts/stack_advisor.py
server.tmp.dir=/var/lib/ambari-server/tmp
server.execution.scheduler.maxThreads=5
metadata.path=/var/lib/ambari-server/resources/stacks
server.fqdn.service.url=http://169.254.169.254/latest/meta-data/public-hostname
bootstrap.dir=/var/run/ambari-server/bootstrap
server.stages.parallel=true
authentication.ldap.baseDn=dc=company,dc=com
authentication.ldap.primaryUrl=server1.company.com:389
ambari.ldap.isConfigured=true
authentication.ldap.secondaryUrl=server2.company.com:389
agent.task.timeout=900
client.threadpool.size.max=25
jdk1.7.url=http://public-repo-1.hortonworks.com/ARTIFACTS/jdk-7u67-linux-x64.tar.gz
server.jdbc.user.passwd=/etc/ambari-server/conf/password.dat
server.execution.scheduler.isClustered=false
authentication.ldap.managerPassword=/etc/ambari-server/conf/ldap-password.dat
server.jdbc.user.name=ambari
server.jdbc.database=postgres
server.jdbc.database_name=ambari

Install failures due to efi permissions


Replies: 2

I have been encountering multiple installation failures related to failed attempts to chmod or chown files in the /boot/efi file system.

I'm a little confused here: I've read that the EFI file system is always FAT, which isn't POSIX-compliant, so I didn't think you could chmod or chown files on it at all.

This is how /boot/efi is mounted on my system:

[root@compute000 ~]# ssh compute005 ‘grep “/boot/efi” /etc/mtab’
/dev/sda1 /boot/efi vfat rw,uid=0,gid=0,umask=0077,shortname=winnt 0 0
[root@compute000 ~]# ssh compute005 ‘fdisk -l /dev/sda’

Disk /dev/sda: 300.0 GB, 300000000000 bytes
255 heads, 63 sectors/track, 36472 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/sda1 1 36473 292968749+ ee GPT

WARNING: GPT (GUID Partition Table) detected on ‘/dev/sda’! The util fdisk doesn’t support GPT. Use GNU Parted.

[root@compute000 ~]# ssh compute005 ‘parted /dev/sda print’
Model: IBM-ESXS AL13SEB300 (scsi)
Disk /dev/sda: 300GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 1049kB 53.5MB 52.4MB fat16 boot
2 53.5MB 322MB 268MB ext3
3 322MB 34.1GB 33.8GB linux-swap(v1)
4 34.1GB 300GB 266GB ext4

Take Falcon for example. The output from the Falcon install shows:

2015-03-05 11:03:30,463 – Changing permission for /boot/efi/hadoop/falcon/data/lineage/graphdb from 700 to 775
2015-03-05 11:03:30,463 – Changing owner for /boot/efi/hadoop/falcon/data/lineage/graphdb from 0 to falcon

But those commands won’t work on this /boot/efi file system:

[root@compute000 ~]# ssh compute005 ‘ls -la /boot/efi/hadoop/falcon’
total 6
drwx—— 3 root root 2048 Mar 5 10:56 .
drwx—— 3 root root 2048 Mar 5 10:56 ..
drwx—— 3 root root 2048 Mar 5 10:56 data
[root@compute000 ~]# ssh compute005 ‘chmod 775 /boot/efi/hadoop/falcon’
[root@compute000 ~]# ssh compute005 ‘ls -la /boot/efi/hadoop/falcon’
total 6
drwx—— 3 root root 2048 Mar 5 10:56 .
drwx—— 3 root root 2048 Mar 5 10:56 ..
drwx—— 3 root root 2048 Mar 5 10:56 data
[root@compute000 ~]# ssh compute005 ‘chown falcon /boot/efi/hadoop/falcon’
chown: changing ownership of `/boot/efi/hadoop/falcon': Operation not permitted

I guess there are two possible solutions for this, right?

1) HDP needs to use a different file system because it can’t work with /boot/efi on my cluster the way it expects to.
2) I need to install my nodes differently in a way that permissions in the /boot/efi file system can be modified with commands like chmod and chown.

I’m not sure which is more work for me, but I’m open to suggestions for either approach.

Or an option #3 if anyone has one…

Thanks,

Nate
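
On option 2: vfat has no per-file ownership or mode bits, so chmod/chown can never take effect there; the apparent permissions are fixed for the whole file system at mount time. Relocating the Hadoop/Falcon directories off /boot/efi (option 1) is almost certainly the saner fix, but if something must live under that mount, the only lever is the mount options, e.g. in /etc/fstab (the umask value is illustrative):

/dev/sda1  /boot/efi  vfat  rw,uid=0,gid=0,umask=0022,shortname=winnt  0  0
# umask=0022 makes every file appear as 755 root:root; there is still no way to give
# individual paths such as /boot/efi/hadoop/falcon a different owner or mode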

java.lang.UnsatisfiedLinkError when running hbase shell


Replies: 1

Hi Folks,

I get a java.lang.UnsatisfiedLinkError when running the HBase shell. Any suggestions? I saw someone had this problem a while ago and fixed it by upgrading JRuby somewhere. I am using HDP 2.2, a recent Red Hat Linux, and Kerberos.
One thing that instantly worries me is that I have /tmp mounted noexec, and "native lib" suggests there are .so files to be loaded.
Can I tell it to use a different temp directory? (A hedged workaround sketch follows the stack trace below.)

UAT [mclinta@bruathdp004 ~]$ hbase shell
2015-03-09 13:57:12,632 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
java.lang.RuntimeException: java.lang.UnsatisfiedLinkError: /tmp/jffi821701048168284535.tmp: /tmp/jffi821701048168284535.tmp: failed to map segment from shared object: Operation not permitted
at com.kenai.jffi.Foreign$InValidInstanceHolder.getForeign(Foreign.java:90)
at com.kenai.jffi.Foreign.getInstance(Foreign.java:95)
at com.kenai.jffi.Library.openLibrary(Library.java:151)
at com.kenai.jffi.Library.getCachedInstance(Library.java:125)
at com.kenai.jaffl.provider.jffi.Library.loadNativeLibraries(Library.java:66)
at com.kenai.jaffl.provider.jffi.Library.getNativeLibraries(Library.java:56)
at com.kenai.jaffl.provider.jffi.Library.getSymbolAddress(Library.java:35)
at com.kenai.jaffl.provider.jffi.Library.findSymbolAddress(Library.java:45)
at com.kenai.jaffl.provider.jffi.AsmLibraryLoader.generateInterfaceImpl(AsmLibraryLoader.java:188)
at com.kenai.jaffl.provider.jffi.AsmLibraryLoader.loadLibrary(AsmLibraryLoader.java:110)
at com.kenai.jaffl.provider.jffi.Provider.loadLibrary(Provider.java:31)
at com.kenai.jaffl.provider.jffi.Provider.loadLibrary(Provider.java:25)
at com.kenai.jaffl.Library.loadLibrary(Library.java:76)
at org.jruby.ext.posix.POSIXFactory$LinuxLibCProvider$SingletonHolder.<clinit>(POSIXFactory.java:108)
at org.jruby.ext.posix.POSIXFactory$LinuxLibCProvider.getLibC(POSIXFactory.java:112)
at org.jruby.ext.posix.BaseNativePOSIX.<init>(BaseNativePOSIX.java:30)
at org.jruby.ext.posix.LinuxPOSIX.<init>(LinuxPOSIX.java:17)
at org.jruby.ext.posix.POSIXFactory.loadLinuxPOSIX(POSIXFactory.java:70)
at org.jruby.ext.posix.POSIXFactory.loadPOSIX(POSIXFactory.java:31)
at org.jruby.ext.posix.LazyPOSIX.loadPOSIX(LazyPOSIX.java:29)
at org.jruby.ext.posix.LazyPOSIX.posix(LazyPOSIX.java:25)
at org.jruby.ext.posix.LazyPOSIX.isatty(LazyPOSIX.java:159)
at org.jruby.RubyIO.tty_p(RubyIO.java:1897)
at org.jruby.RubyIO$i$0$0$tty_p.call(RubyIO$i$0$0$tty_p.gen:65535)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135)
at org.jruby.ast.CallNoArgNode.interpret(CallNoArgNode.java:63)
at org.jruby.ast.IfNode.interpret(IfNode.java:111)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:147)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:183)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135)
at org.jruby.ast.VCallNode.interpret(VCallNode.java:86)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:169)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:191)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:302)
at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:144)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:148)
at org.jruby.RubyClass.newInstance(RubyClass.java:822)
at org.jruby.RubyClass$i$newInstance.call(RubyClass$i$newInstance.gen:65535)
at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrNBlock.call(JavaMethod.java:249)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135)
at usr.hdp.$2_dot_2_dot_0_dot_0_minus_2041.hbase.bin.hirb.__file__(/usr/hdp/2.2.0.0-2041/hbase/bin/hirb.rb:110)
at usr.hdp.$2_dot_2_dot_0_dot_0_minus_2041.hbase.bin.hirb.load(/usr/hdp/2.2.0.0-2041/hbase/bin/hirb.rb)
at org.jruby.Ruby.runScript(Ruby.java:697)
at org.jruby.Ruby.runScript(Ruby.java:690)
at org.jruby.Ruby.runNormally(Ruby.java:597)
at org.jruby.Ruby.runFromMain(Ruby.java:446)
at org.jruby.Main.doRunFromMain(Main.java:369)
at org.jruby.Main.internalRun(Main.java:258)
at org.jruby.Main.run(Main.java:224)
at org.jruby.Main.run(Main.java:208)
at org.jruby.Main.main(Main.java:188)
Caused by: java.lang.UnsatisfiedLinkError: /tmp/jffi821701048168284535.tmp: /tmp/jffi821701048168284535.tmp: failed to map segment from shared object: Operation not permitted
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1965)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1890)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1851)
at java.lang.Runtime.load0(Runtime.java:795)
at java.lang.System.load(System.java:1062)
at com.kenai.jffi.Init.loadFromJar(Init.java:164)
at com.kenai.jffi.Init.load(Init.java:78)
at com.kenai.jffi.Foreign$InstanceHolder.getInstanceHolder(Foreign.java:49)
at com.kenai.jffi.Foreign$InstanceHolder.<clinit>(Foreign.java:45)
at com.kenai.jffi.Foreign.getInstance(Foreign.java:95)
at com.kenai.jffi.Internals.getErrnoSaveFunction(Internals.java:44)
at com.kenai.jaffl.provider.jffi.StubCompiler.getErrnoSaveFunction(StubCompiler.java:68)
at com.kenai.jaffl.provider.jffi.StubCompiler.<clinit>(StubCompiler.java:18)
at com.kenai.jaffl.provider.jffi.AsmLibraryLoader.generateInterfaceImpl(AsmLibraryLoader.java:146)
… 50 more
Foreign.java:90:in `getForeign': java.lang.RuntimeException: java.lang.UnsatisfiedLinkError: /tmp/jffi821701048168284535.tmp: /tmp/jffi821701048168284535.tmp: failed to map segment from shared object: Operation not permitted
from Foreign.java:95:in `getInstance’
from Library.java:151:in `openLibrary’
from Library.java:125:in `getCachedInstance’
from Library.java:66:in `loadNativeLibraries’
from Library.java:56:in `getNativeLibraries’
from Library.java:35:in `getSymbolAddress’
from Library.java:45:in `findSymbolAddress’
from DefaultInvokerFactory.java:51:in `createInvoker’
from Library.java:27:in `getInvoker’
from NativeInvocationHandler.java:90:in `createInvoker’
from NativeInvocationHandler.java:74:in `getInvoker’
from NativeInvocationHandler.java:110:in `invoke’
from null:-1:in `isatty’
from BaseNativePOSIX.java:300:in `isatty’
from LazyPOSIX.java:159:in `isatty’
from RubyIO.java:1897:in `tty_p’
from RubyIO$i$0$0$tty_p.gen:65535:in `call’
from CachingCallSite.java:292:in `cacheAndCall’
from CachingCallSite.java:135:in `call’
from CallNoArgNode.java:63:in `interpret’
from IfNode.java:111:in `interpret’
from NewlineNode.java:104:in `interpret’
from ASTInterpreter.java:74:in `INTERPRET_METHOD’
from InterpretedMethod.java:147:in `call’
from DefaultMethod.java:183:in `call’
from CachingCallSite.java:292:in `cacheAndCall’
from CachingCallSite.java:135:in `call’
from VCallNode.java:86:in `interpret’
from NewlineNode.java:104:in `interpret’
from BlockNode.java:71:in `interpret’
from ASTInterpreter.java:74:in `INTERPRET_METHOD’
from InterpretedMethod.java:169:in `call’
from DefaultMethod.java:191:in `call’
from CachingCallSite.java:302:in `cacheAndCall’
from CachingCallSite.java:144:in `callBlock’
from CachingCallSite.java:148:in `call’
from RubyClass.java:822:in `newInstance’
from RubyClass$i$newInstance.gen:65535:in `call’
from JavaMethod.java:249:in `call’
from CachingCallSite.java:292:in `cacheAndCall’
from CachingCallSite.java:135:in `call’
from /usr/hdp/2.2.0.0-2041/hbase/bin/hirb.rb:110:in `__file__’
from /usr/hdp/2.2.0.0-2041/hbase/bin/hirb.rb:-1:in `load’
from Ruby.java:697:in `runScript’
from Ruby.java:690:in `runScript’
from Ruby.java:597:in `runNormally’
from Ruby.java:446:in `runFromMain’
from Main.java:369:in `doRunFromMain’
from Main.java:258:in `internalRun’
from Main.java:224:in `run’
from Main.java:208:in `run’
from Main.java:188:in `main’
Caused by:
ClassLoader.java:-2:in `load': java.lang.UnsatisfiedLinkError: /tmp/jffi821701048168284535.tmp: /tmp/jffi821701048168284535.tmp: failed to map segment from shared object: Operation not permitted
from ClassLoader.java:1965:in `loadLibrary1′
from ClassLoader.java:1890:in `loadLibrary0′
from ClassLoader.java:1851:in `loadLibrary’
from Runtime.java:795:in `load0′
from System.java:1062:in `load’
from Init.java:164:in `loadFromJar’
from Init.java:78:in `load’
from Foreign.java:49:in `getInstanceHolder’
from Foreign.java:45:in `<clinit>’
from Foreign.java:95:in `getInstance’
from Internals.java:44:in `getErrnoSaveFunction’
from StubCompiler.java:68:in `getErrnoSaveFunction’
from StubCompiler.java:18:in `<clinit>’
from AsmLibraryLoader.java:146:in `generateInterfaceImpl’
from AsmLibraryLoader.java:110:in `loadLibrary’
from Provider.java:31:in `loadLibrary’
from Provider.java:25:in `loadLibrary’
from Library.java:76:in `loadLibrary’
from POSIXFactory.java:108:in `<clinit>’
from POSIXFactory.java:112:in `getLibC’
from BaseNativePOSIX.java:30:in `<init>’
from LinuxPOSIX.java:17:in `<init>’
from POSIXFactory.java:70:in `loadLinuxPOSIX’
from POSIXFactory.java:31:in `loadPOSIX’
from LazyPOSIX.java:29:in `loadPOSIX’
from LazyPOSIX.java:25:in `posix’
from LazyPOSIX.java:159:in `isatty’
from RubyIO.java:1897:in `tty_p’
from RubyIO$i$0$0$tty_p.gen:65535:in `call’
from CachingCallSite.java:292:in `cacheAndCall’
from CachingCallSite.java:135:in `call’
from CallNoArgNode.java:63:in `interpret’
from IfNode.java:111:in `interpret’
from NewlineNode.java:104:in `interpret’
from ASTInterpreter.java:74:in `INTERPRET_METHOD’
from InterpretedMethod.java:147:in `call’
from DefaultMethod.java:183:in `call’
from CachingCallSite.java:292:in `cacheAndCall’
from CachingCallSite.java:135:in `call’
from VCallNode.java:86:in `interpret’
from NewlineNode.java:104:in `interpret’
from BlockNode.java:71:in `interpret’
from ASTInterpreter.java:74:in `INTERPRET_METHOD’
from InterpretedMethod.java:169:in `call’
from DefaultMethod.java:191:in `call’
from CachingCallSite.java:302:in `cacheAndCall’
from CachingCallSite.java:144:in `callBlock’
from CachingCallSite.java:148:in `call’
from RubyClass.java:822:in `newInstance’
from RubyClass$i$newInstance.gen:65535:in `call’
from JavaMethod.java:249:in `call’
from CachingCallSite.java:292:in `cacheAndCall’
from CachingCallSite.java:135:in `call’
from /usr/hdp/2.2.0.0-2041/hbase/bin/hirb.rb:110:in `__file__’
from /usr/hdp/2.2.0.0-2041/hbase/bin/hirb.rb:-1:in `load’
from Ruby.java:697:in `runScript’
from Ruby.java:690:in `runScript’
from Ruby.java:597:in `runNormally’
from Ruby.java:446:in `runFromMain’
from Main.java:369:in `doRunFromMain’
from Main.java:258:in `internalRun’
from Main.java:224:in `run’
from Main.java:208:in `run’
from Main.java:188:in `main’
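
The jffi*.tmp file in the trace is a native library that gets extracted to /tmp and then mmap'd, which a noexec /tmp forbids. A hedged workaround, assuming jffi honours the JVM's java.io.tmpdir (it extracts through the standard temp-file machinery): point the shell's JVM at a temp directory on a mount without noexec.

mkdir -p /var/tmp/hbase-tmp      # any writable directory on a non-noexec mount
export HBASE_OPTS="$HBASE_OPTS -Djava.io.tmpdir=/var/tmp/hbase-tmp"
hbase shell
# if hbase-env.sh overwrites HBASE_OPTS, add the -Djava.io.tmpdir=... flag there instead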


Graphite.sink metrics


Replies: 0

How can I change the . (dot) separator to another character when sending metrics from Hadoop to Graphite? Graphite creates a new directory for each dot. Is it possible to change the dot to another character?

Regards ///Ulf Fernholm

Installed 4 node cluster – services not starting Ambari 1.7 HDP 2.1


Replies: 0

CentOS 6.5
Ambari 1.7, HDP 2.1
4-node physical cluster
Followed the documentation exactly, from

http://docs.hortonworks.com/HDPDocuments/Ambari-1.7.0.0/AMBARI_DOC_SUITE/index.html#Item3.14

All products installed, but very few if any services have started.
How does HDP start services? I'm on CentOS, so I would expect a system service; I do see the ambari-server and ambari-agent services, but that's it.

Also, I don't see any type of $HADOOP_HOME or similar variables set.

Please tell me how these services are supposed to get started. (A hedged REST-API sketch follows below.)
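
Ambari does not register the Hadoop daemons as init.d services and does not export HADOOP_HOME-style variables into login shells; the ambari-agent on each host starts the processes when the server tells it to. A hedged way to kick everything off outside the web UI, assuming default admin credentials, port 8080, and a cluster named MyCluster:

curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start All Services"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://<ambari-host>:8080/api/v1/clusters/MyCluster/services
# equivalent of Services > Start All in the Ambari UI; progress shows up in the background operations list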

SSL Enablement


Replies: 0

I’m following the directions here:

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.2.0/HDP_Security_Guide_v22/index.html#Item1.3.4.4

and I’m seeing this error when I enable HTTPS and restart HDFS:
STARTUP_MSG: java = 1.7.0_67
************************************************************/
2015-03-09 19:10:50,250 INFO datanode.DataNode (SignalLogger.java:register(91)) – registered UNIX signal handlers for [TERM, HUP, INT]
2015-03-09 19:10:50,328 WARN common.Util (Util.java:stringAsURI(56)) – Path /hadoop/hadoop/hdfs/data should be specified as a URI in configuration files. Please update hdfs configuration.
2015-03-09 19:10:50,703 INFO security.UserGroupInformation (UserGroupInformation.java:loginUserFromKeytab(938)) – Login successful for user dn/ldevawshdp0002.cedargatepartners.pvc@CEDARGATEPARTNERS.PVC using keytab file /etc/security/keytabs/dn.service.keytab
2015-03-09 19:10:50,850 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(111)) – loaded properties from hadoop-metrics2.properties
2015-03-09 19:10:50,877 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:start(195)) – Sink ganglia started
2015-03-09 19:10:50,934 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(376)) – Scheduled snapshot period at 10 second(s).
2015-03-09 19:10:50,934 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) – DataNode metrics system started
2015-03-09 19:10:50,938 INFO datanode.DataNode (DataNode.java:<init>(403)) – File descriptor passing is enabled.
2015-03-09 19:10:50,938 INFO datanode.DataNode (DataNode.java:<init>(414)) – Configured hostname is ldevawshdp0002.cedargatepartners.pvc
2015-03-09 19:10:50,947 INFO datanode.DataNode (DataNode.java:startDataNode(1049)) – Starting DataNode with maxLockedMemory = 0
2015-03-09 19:10:50,965 INFO datanode.DataNode (DataNode.java:initDataXceiver(848)) – Opened streaming server at /0.0.0.0:1019
2015-03-09 19:10:50,968 INFO datanode.DataNode (DataXceiverServer.java:<init>(76)) – Balancing bandwith is 6250000 bytes/s
2015-03-09 19:10:50,968 INFO datanode.DataNode (DataXceiverServer.java:<init>(77)) – Number threads for balancing is 5
2015-03-09 19:10:50,972 INFO datanode.DataNode (DataXceiverServer.java:<init>(76)) – Balancing bandwith is 6250000 bytes/s
2015-03-09 19:10:50,972 INFO datanode.DataNode (DataXceiverServer.java:<init>(77)) – Number threads for balancing is 5
2015-03-09 19:10:50,974 INFO datanode.DataNode (DataNode.java:initDataXceiver(863)) – Listening on UNIX domain socket: /var/lib/hadoop-hdfs/dn_socket
2015-03-09 19:10:51,053 INFO mortbay.log (Slf4jLog.java:info(67)) – Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-03-09 19:10:51,057 INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) – Http request log for http.requests.datanode is not defined
2015-03-09 19:10:51,069 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(699)) – Added global filter ‘safety’ (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-03-09 19:10:51,072 INFO http.HttpServer2 (HttpServer2.java:addFilter(677)) – Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2015-03-09 19:10:51,072 INFO http.HttpServer2 (HttpServer2.java:addFilter(684)) – Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-03-09 19:10:51,072 INFO http.HttpServer2 (HttpServer2.java:addFilter(684)) – Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-03-09 19:10:51,089 INFO http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(603)) – addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-03-09 19:10:51,095 WARN mortbay.log (Slf4jLog.java:warn(76)) – java.lang.NullPointerException
2015-03-09 19:10:51,095 INFO http.HttpServer2 (HttpServer2.java:start(830)) – HttpServer.start() threw a non Bind IOException
java.io.IOException: !JsseListener: java.lang.NullPointerException
at org.mortbay.jetty.security.SslSocketConnector.newServerSocket(SslSocketConnector.java:531)
at org.apache.hadoop.security.ssl.SslSocketConnectorSecure.newServerSocket(SslSocketConnectorSecure.java:46)
at org.mortbay.jetty.bio.SocketConnector.open(SocketConnector.java:73)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:886)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:827)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:663)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1057)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:415)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2268)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2155)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2202)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2378)
at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)

Anybody have any ideas?
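
A JsseListener NullPointerException from HttpServer2.start() is typically the HTTPS listener failing to load its keystore, i.e. ssl-server.xml missing on that DataNode or pointing at a keystore/password that doesn't match. A hedged check; the paths below are the common HDP defaults and may differ on your cluster:

grep -A1 'ssl.server.keystore' /etc/hadoop/conf/ssl-server.xml   # location, password and keypassword must all be set
keytool -list -keystore /etc/security/serverKeys/keystore.jks    # must open with the configured password
ls -l /etc/security/serverKeys/                                  # keystore must be readable by the hdfs service user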

Hive ORC format


Replies: 1

Hi,

I created a text file with following values

1,2,3
2,3,4
3,4,5

then created a Hive managed table t with the STORED AS ORC clause, and loaded the managed table with the text file created above. When I query in the Hive shell with select * from t, I get an exception stating the ORC file is malformed.

How can I convert the text file to ORC without using any intermediate table or nested queries? Is that possible, or is there another way to convert my text file to the ORC file format?

Regards,
Sandeep
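
ORC files are only produced by writes that go through the ORC writer, so LOAD DATA of a plain text file into a STORED AS ORC table leaves it malformed; the rows have to be rewritten. The usual pattern needs a text-format staging table, but it can be an EXTERNAL table pointing at the existing file so nothing is copied. A hedged sketch with placeholder column names and HDFS location:

CREATE EXTERNAL TABLE t_text (a INT, b INT, c INT)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  LOCATION '/user/sandeep/t_text';                        -- directory holding the text file
CREATE TABLE t_orc STORED AS ORC AS SELECT * FROM t_text; -- rewrites the rows in ORC format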

Nagios Dependency


Replies: 0

File “/usr/lib/python2.6/site-packages/resource_management/core/shell.py”, line 90, in _call
raise Fail(err_msg)
Fail: Execution of ‘/usr/bin/yum -d 0 -e 0 -y install hdp_mon_nagios_addons’ returned 1. Error: Package: nagios-plugins-1.4.9-1.x86_64 (HDP-UTILS-1.1.0.17)
Requires: libssl.so.10(libssl.so.10)(64bit)
Error: Package: nagios-plugins-1.4.9-1.x86_64 (HDP-UTILS-1.1.0.17)
Requires: libcrypto.so.10(libcrypto.so.10)(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

Are there any updates on libssl.so.10 and libcrypto.so.10? How can I install them manually on RHEL 6?

Thanks in advance
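
On RHEL/CentOS 6, libssl.so.10 and libcrypto.so.10 are provided by the stock openssl 1.0.x package rather than by a separate library RPM, so the usual fix is simply installing or updating openssl. A hedged sketch:

yum provides '*/libssl.so.10' '*/libcrypto.so.10'   # shows the providing package (normally openssl 1.0.x)
yum install -y openssl                              # then retry: yum install hdp_mon_nagios_addons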

oozie workflow fails for hive query of ORC table


Replies: 2

We have a Hive script to handle incremental updates for a big table. The script essentially merges the incremental table with the base table and then uses the result to overwrite the base table. The base table is an ORC table. The script works without any problem when executed from the Hive shell, but when we use it in an Oozie workflow via a Hive action, it fails: in the second stage of the MapReduce job, all the reduce attempts fail with the following error:
“TaskAttempt killed because it ran on unusable node”

We also tried using the Shell action of the Oozie workflow to execute the script. It also fails, with a different error (it complains that it can't find the metadata file job.splitmetainfo, which is generated automatically by MapReduce). How can we make it work in an Oozie workflow? Thanks a lot in advance.
==================================================================
Job init failed : org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://qa1-sjc001-031.i.jasperwireless.com:8020/user/hdfs/.staging/job_1424730037228_0101/job.splitmetainfo
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1568)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1432)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1390)
at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:996)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:138)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1289)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1057)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1500)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1496)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1429)
Caused by: java.io.FileNotFoundException: File does not exist: hdfs://qa1-sjc001-031.i.jasperwireless.com:8020/user/hdfs/.staging/job_1424730037228_0101/job.splitmetainfo
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.mapreduce.split.SplitMetaInfoReader.readSplitMetaInfo(SplitMetaInfoReader.java:51)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1563)

HDP2.2 Hue 2.6.1, issues with log tab in Hive Editor (beeswax)


Replies: 1

I installed HDP 2.2 using Ambari 1.7 and manually installed and configured Hue as instructed on this page:

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.2.0/HDP_Man_Install_v22/index.html#Item1.14

Hue is up and running, but I noticed several regressions from the previous version (I was using HDP 2.1 and Hue 2.3).

The Hive editor (Beeswax) log is not working well. In a lot of cases, command execution shows no logs at all, e.g. CREATE TABLE and other DDL. For example, when I create a table from a query, the log tab stays blank while the query is running.

From some preliminary searching on Google, it seems this version of Hue connects to HiveServer2, which did not provide a getLog API.

I also see some posts saying one can manually compile and build the latest Hue 3.7 and install it against HDP 2.2, but I am not sure whether that will solve my problem.

Below is the component version displayed on my hue home page.
Component Version
Hue 2.6.1-2041
HDP 2.2.0
Hadoop 2.6.0
Pig 0.14.0
Hive-Hcatalog 0.14.0
Oozie 4.1.0
Ambari 1.7-169
HBase 0.98.4


Execute on Tez


Replies: 1

Using Hue on the HDP 2.2 sandbox, I can see the checkbox "Execution on Tez" in the left panel of the Query Editor in Beeswax. However, after I installed HDP 2.2 through Ambari on my Red Hat cluster and then installed the latest Hue manually, I do not see the same checkbox in Beeswax. Is there a switch in a configuration file that I missed? (A hedged property sketch follows below.)
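
The checkbox is only a Hue/Beeswax UI convenience; underneath, it flips a Hive session property, so Tez can still be used without it. A hedged sketch, assuming Tez is installed and allowed by the HiveServer2 that Hue talks to:

SET hive.execution.engine=tez;     -- paste above the query in Beeswax, or set it cluster-wide
                                   -- under Hive > Configs in Ambari
SELECT COUNT(*) FROM some_table;   -- placeholder query; now runs as a Tez DAG instead of MapReduce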

JA017: Unknown hadoop job


Replies: 0

Hi,
I am getting this error when trying to run a Pig job. I am trying to schedule it on a cluster I set up using HDP 2.2 and Ambari 1.7:

JA017: Unknown hadoop job [job_1425653747034_0055] associated with action [0000015-150306202423135-oozie-oozi-W@pig-node]. Failing this action!

Thanks,
Vikash

Unable to run the custom hook script error


Replies: 0

I added a custom service using the steps described here: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=38571133. That was successful, but afterwards I was not able to start the cluster. I then removed the custom services I had added, but I am still unable to start the cluster.
My configuration is Ambari 1.7 with HDP 2.2 on CentOS 6.4, on an Amazon AWS multi-node cluster.

It ends with the following exception:
Fail: Execution of ‘groupadd ”’ returned 3. groupadd: ” is not a valid group name
Error: Error: Unable to run the custom hook script [‘/usr/bin/python2.6′, ‘/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py’, ‘ANY’, ‘/var/lib/ambari-agent/data/command-1486.json’, ‘/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY’, ‘/var/lib/ambari-agent/data/structured-out-1486.json’, ‘INFO’, ‘/var/lib/ambari-agent/data/tmp’]
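
groupadd '' means the before-ANY hook received an empty group name, which usually points at a user/group property left behind (and now empty) by the removed custom service. A hedged way to inspect the live configuration, assuming the configs.sh helper shipped with Ambari 1.7 (path, options, and config type may differ; <cluster-name> is a placeholder):

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin get localhost <cluster-name> cluster-env
# look for *_user / *_group values that are empty strings (also check the service *-env config types)
# and correct them with the same script's "set" action, then retry the start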

NameNode not running, port conflict


Replies: 9

Hi there!

I finished my installation of HDP with Ambari earlier (I'm running Ambari 1.4.2) and the installation went fine, but some services didn't start during the wizard, especially HDFS (the NameNode is not running). So I used the dashboard to start it manually, but it failed again.
In the NameNode logs I found this line:

2013-12-23 17:44:09,784 INFO http.HttpServer (HttpServer.java:start(690)) – HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: <host>:50070

I thought of the classic cause, but netstat tells me that nothing is using :50070.

I tried to change the port to 50071 in the Configs panel of the dashboard. Again, the same problem: the port is reported as already in use.

Do you have any idea how to solve this? (A hedged diagnostic sketch follows below.)

Thanks =)
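
"Port in use" while netstat shows nothing on the port often means the problem is the bind address rather than the port: the hostname configured for the NameNode HTTP address resolving to an IP that does not exist on this machine, or a half-started NameNode left over from earlier attempts. A hedged diagnostic pass (run as root on the NameNode host):

netstat -tlnp | grep 50070                  # confirm nothing listens; -p needs root to show PIDs
lsof -i :50070                              # second opinion on listeners
hostname -f; getent hosts "$(hostname -f)"  # the resolved IP must belong to this host (compare with: ip addr)
ps -ef | grep -i namenode                   # look for a stale NameNode process from previous start attempts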

Oozie and Hive Config won't load in Ambari
