Channel: Hortonworks » All Topics

Manual installation failed


Replies: 0

Hi,

I started with Ambari, but it failed while installing the nodes, so I am going the manual installation route. I have installed HDFS across 3 VMs, as recommended.

I started my NameNode (VM 1, 3 GB RAM), Secondary NameNode (VM 2, 2 GB RAM), and DataNode (VM 3, 2 GB RAM).
Now it is time to validate: http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.3.1/bk_installing_manually_book/content/rpm-chap4.html
But even after formatting the NameNode, when I try to access http://$namenode.full.hostname:50070, I get “host not reachable”.

I have created an SSH key and permanently added the other two hosts, and the hosts files are configured as well. I also tried switching off the firewall, but nothing works. I need help: I want to get Hortonworks running and practice MapReduce. Any suggestions are welcome.
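A quick way to narrow this down, before touching Hadoop configs, is to check that the NameNode host resolves and that port 50070 is actually bound. A minimal sketch, assuming CentOS/RHEL-style VMs (the hostname below is a placeholder for your NameNode's FQDN):

# On the NameNode VM: is the web UI port listening, and on which address?
netstat -tlnp | grep 50070
# Is the firewall really off?
service iptables status
# From the machine running the browser: does the name resolve, and does the port answer?
ping -c 3 namenode.full.hostname
telnet namenode.full.hostname 50070

If netstat shows the port bound only to 127.0.0.1, point dfs.http.address in hdfs-site.xml at the host's real hostname rather than localhost.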

Thanks.


services not running after successful installation of HDP in Windows Server 2012


Replies: 0

I installed HDP 2.1 on Windows Server 2012, did all the required configuration along with the corrections mentioned at http://www.sqlskills.com/blogs/bobb/installing-running-hdp-2-1-windows/ , and then executed the smoke tests.

Can you please help identify the root cause of the failed components listed below?

Please note that the “HWI” and “GATEWAY” services will not run, and I have not been able to find the cause so far. This might be the reason for the 4 component failures.

Please find attached the smoke test result.
Component Status
HCATALOG Failed
Hive Failed
WebHCat Failed
Knox Failed
Hadoop Passed
Tez Passed
PIG Passed
HiveServer2 Passed
Sqoop Passed
Oozie Passed
Mahout Passed
HBase Passed
Zookeeper Passed
Phoenix Passed
Falcon Passed
Storm Passed
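For the failed components, it may help to see which Windows services are actually up before rerunning the smoke tests. A sketch using standard Windows commands (the service name “hwi” below is an assumption; substitute the names shown in services.msc on your install):

:: List every service that is currently running (run from an elevated cmd prompt)
net start
:: Query one specific service, then try to start it and watch the error it returns
sc query hwi
net start hwi

The error that net start prints for HWI and GATEWAY is usually more specific than the smoke-test summary.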

Beeline not able to connect


Replies: 0

Hi,
I am not able to connect using Beeline. Here is the command and its output:

C:\hdp\hive-0.13.0.2.1.1.0-1621\bin>beeline
Beeline version 0.13.0.2.1.1.0-1621 by Apache Hive
beeline> !connect jdbc:hive2://kchjbdsrv07:10000 hive hive org.apache.hive.jdbc.HiveDriver
Connecting to jdbc:hive2://kchjbdsrv07:10000
Error: Could not open connection to jdbc:hive2://kchjbdsrv07:10000: java.net.ConnectException: Connection refused: connect (state=08S01,code=0)
0: jdbc:hive2://kchjbdsrv07:10000> show tables;
No current connection
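“Connection refused” means nothing was listening on port 10000 of kchjbdsrv07 when Beeline connected, so the later “No current connection” follows from the failed !connect. A quick check to run on the kchjbdsrv07 host itself (a sketch):

C:\>netstat -an | find "10000"

If nothing is bound there, start the HiveServer2 service first and then retry the !connect line.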

Thanks in advance.

Beeswax ADD JAR


Replies: 0

I am probably trying something silly here, but I have used Flume to load Twitter data into HDFS on the sandbox and now want to create a table in Hive to read the data. I have put a SerDe JAR in /usr/lib/hive/lib.

When I use the browser-based Beeswax query tool, I get an error when I type:
add jar /usr/lib/hive/lib/json-serde-1.1.6-SNAPSHOT-jar-with-dependencies.jar;

The error is:
Error occurred executing hive query: OK FAILED: ParseException line 1:0 cannot recognize input near ‘add’ ‘jar’ ‘/’

Is it just that ADD JAR is not supported in the editor, or am I doing something silly?
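If the editor refuses the statement, one way to confirm the JAR itself is fine is to run the same commands from the Hive CLI on the sandbox (a sketch, using the path from the post):

hive
hive> ADD JAR /usr/lib/hive/lib/json-serde-1.1.6-SNAPSHOT-jar-with-dependencies.jar;
hive> LIST JARS;

If that works, the limitation is in the Beeswax editor's parsing rather than in the SerDe setup.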

Many thanks for any help
Peter

Elephant-bird to analyse Tweets


Replies: 3

Hello, I want to use Twitter's Elephant Bird to analyze tweets in their original JSON format, without having to save them in another format such as CSV.

I have built Elephant Bird and wrote the following simple script to load tweets from a file, following some examples I saw:


REGISTER /user/rmrodriguez/jar/json-simple-1.1.jar;
REGISTER /user/rmrodriguez/jar/elephant-bird-pig-4.4.jar;
REGISTER /user/rmrodriguez/jar/elephant-bird-core-4.4.jar;
REGISTER /user/rmrodriguez/jar/google-collections-1.0.jar;

A = LOAD 'tweets.20131201-215958.json' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad');

tweets = FOREACH A GENERATE (CHARARRAY)$0#'id' AS id;

DUMP tweets;

and I get the following error:

2013-12-19 07:55:55,364 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
2013-12-19 07:55:55,367 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2013-12-19 07:55:55,370 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2998: Unhandled internal error. com/twitter/elephantbird/util/HadoopCompat
Details at logfile: /hadoop/yarn/local/usercache/rmrodriguez/appcache/application_1387366430472_0012/container_1387366430472_0012_01_000002/pig_1387457753323.log

Does anyone with Elephant Bird experience know the cause of this error, or can you suggest another way to load tweets in JSON format?
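The missing class, com/twitter/elephantbird/util/HadoopCompat, lives outside the core and pig JARs: in Elephant Bird 4.4 the Hadoop compatibility classes ship as a separate artifact. A sketch of the likely fix, mirroring the REGISTER lines above (verify the exact JAR name in your build):

REGISTER /user/rmrodriguez/jar/elephant-bird-hadoop-compat-4.4.jar;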

Greetings,
Rod

Shark 0.9.1 on HDP 2.1


Replies: 4

I am trying to install Shark 0.9.1 on HDP 2.1 (Tech Preview).
I get this message when I start the shell: “No HADOOP_HOME specified. Shark will run in local-mode”.
I also cannot seem to initiate sharkserver2. Is there a config step missing?
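Shark reads its environment from conf/shark-env.sh, so the local-mode warning usually means HADOOP_HOME is not set there. A sketch, assuming a typical HDP 2.1 layout (verify the paths on your cluster):

# In shark-0.9.1/conf/shark-env.sh
export HADOOP_HOME=/usr/lib/hadoop
export HIVE_HOME=/usr/lib/hive

With HADOOP_HOME picked up, the shell should stop falling back to local mode, which is typically also what blocks sharkserver2 from coming up.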

Dependency Issues with HDP 1.3.3


Replies: 2

I’m noticing the following yum errors when running yum update on a CentOS 6.5 box, where Ambari Server is installed:

Error: Package: rrdtool-1.4.5-4.5.1.x86_64 (HDP-UTILS-1.1.0.16)
Requires: dejavu
Error: Package: yum-metadata-parser-1.1.2-119.6.x86_64 (HDP-UTILS-1.1.0.16)
Requires: python = 2.6
Installed: python-2.6.6-52.el6.x86_64 (@centos6_updates_x86_64)
python = 2.6.6-52.el6
Available: python-2.6.6-36.el6.i686 (centos6_base_x86_64)
python = 2.6.6-36.el6
Available: python-2.6.6-37.el6_4.i686 (centos6_updates_x86_64)
python = 2.6.6-37.el6_4
Available: python-2.6.6-51.el6.x86_64 (base)
python = 2.6.6-51.el6
Error: Package: rrdtool-1.4.5-4.5.1.x86_64 (HDP-UTILS-1.1.0.16)
Requires: perl = 5.10.0
Installed: 4:perl-5.10.1-136.el6.x86_64 (@base)
perl = 4:5.10.1-136.el6
Available: 4:perl-5.10.1-129.el6.x86_64 (centos6_base_x86_64)
perl = 4:5.10.1-129.el6
Available: 4:perl-5.10.1-130.el6_4.x86_64 (centos6_updates_x86_64)
perl = 4:5.10.1-130.el6_4
Available: 4:perl-5.10.1-131.el6_4.x86_64 (centos6_updates_x86_64)
perl = 4:5.10.1-131.el6_4
Error: Package: python-rrdtool-1.4.5-4.5.1.x86_64 (HDP-UTILS-1.1.0.16)
Requires: python = 2.6
Installed: python-2.6.6-52.el6.x86_64 (@centos6_updates_x86_64)
python = 2.6.6-52.el6
Available: python-2.6.6-36.el6.i686 (centos6_base_x86_64)
python = 2.6.6-36.el6
Available: python-2.6.6-37.el6_4.i686 (centos6_updates_x86_64)
python = 2.6.6-37.el6_4
Available: python-2.6.6-51.el6.x86_64 (base)
python = 2.6.6-51.el6
Error: Package: rrdtool-1.4.5-4.5.1.x86_64 (HDP-UTILS-1.1.0.16)
Requires: libxcb-xlib.so.0()(64bit)

As you can see, I have python and perl installed; they just happen to be newer versions than what is required. That shouldn't be a problem, should it?

As for dejavu, why isn’t that made available via the HDP-UTILS repo?

Thoughts?
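Reading the errors, the problem is not that python and perl are too old but that the HDP-UTILS rrdtool build demands exact versions (perl = 5.10.0, python = 2.6) plus a library (libxcb-xlib.so.0) that CentOS 6.5 does not ship, so the newer installed versions can never satisfy it. “dejavu” is likewise a package name from another distro family; CentOS ships its DejaVu fonts under different names. A hedged workaround sketch, keeping the general update away from the broken packages:

yum update --exclude="*rrdtool*" --exclude=yum-metadata-parser
# See what CentOS itself offers for the dejavu requirement:
yum list dejavu\*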

Heartbeat lost for all Ambari services


Replies: 0

Hi,
I have installed Apache Ambari 1.6.0 on CentOS 6.5 (VirtualBox). It was working fine, but now every service shows a lost heartbeat (the last time, I had not shut the system down properly). I tried restarting ambari-agent and ambari-server, but no luck. One more thing: “ssh host_name” is not working; it times out. Please help.
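Since “ssh host_name” times out, the heartbeat loss is almost certainly a networking problem rather than an Ambari one; VirtualBox guests often come back from a hard power-off with a different IP. A sketch of checks, in order (host_name is your own hostname):

# Does the hostname still resolve to the address the VM actually has?
ping -c 3 host_name
ip addr show
cat /etc/hosts
# Once ssh works again, restart the agent and watch its log:
ambari-agent restart
tail -n 50 /var/log/ambari-agent/ambari-agent.log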


Sandbox 2.1 ODBC Not Working


Replies: 6

I have created an ODBC connection to the Sandbox 2.1 but I am not able to pull any data. The error I get is: ODBC--call failed. [Hortonworks][HiveODBC] [35] Error from Hive: error code: '40000' error message: 'Error while compiling statement: FAILED: HiveAccessControlException Permission denied. Principal [name=hue, type=USER] does not have following privileges on Object [type=TABLE_OR_VIEW, name=<table name>]: [SELECT]', [#35]

The same setup works when I connect to the Sandbox 2.0; I have used it many times for moving data, mostly to MS Access.

I have tried different user names in the ODBC connection, such as root, hdfs, and hadoop. I have also tried various permissions on the table itself via table > view > view file location > change permissions. None of this solved it.
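The exception is a HiveAccessControlException, which comes from HiveServer2's SQL-standard authorization in Sandbox 2.1, so HDFS file permissions and the ODBC user name will not clear it; the principal being checked is hue. A sketch of granting what the message asks for (run in Beeline or the Hive shell as a user with admin rights; the table name is a placeholder):

GRANT SELECT ON TABLE your_table TO USER hue;

After the grant, the same DSN that worked against Sandbox 2.0 should return rows again.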

Any help would be appreciated.

Thanks.

Ambari Hosts confirmation


Replies: 0

I get the following errors during my ambari-server deployment:
==========================
Copying common functions script…
==========================

scp /usr/lib/python2.6/site-packages/common_functions
host=hdp-cluster-dfossouo-1.novalocal, exitcode=0

==========================
Copying OS type check script…
==========================

scp /usr/lib/python2.6/site-packages/ambari_server/os_check_type.py
host=hdp-cluster-dfossouo-1.novalocal, exitcode=0

==========================
Running OS type check…
==========================
Cluster primary/cluster OS type is redhat6 and local/current OS type is redhat6

Connection to hdp-cluster-dfossouo-1.novalocal closed.
SSH command execution finished
host=hdp-cluster-dfossouo-1.novalocal, exitcode=0

==========================
Checking ‘sudo’ package on remote host…
==========================
sudo-1.8.6p3-12.el6.x86_64

Connection to hdp-cluster-dfossouo-1.novalocal closed.
SSH command execution finished
host=hdp-cluster-dfossouo-1.novalocal, exitcode=0

==========================
Copying repo file to ‘tmp’ folder…
==========================

scp /etc/yum.repos.d/ambari.repo
host=hdp-cluster-dfossouo-1.novalocal, exitcode=0

==========================
Moving file to repo dir…
Bootstrap timed out
==========================
[sudo] password for osadmin:

Connection to hdp-cluster-dfossouo-1.novalocal closed.
SSH command execution finished
host=hdp-cluster-dfossouo-1.novalocal, exitcode=1

==========================
Copying setup script file…
==========================

scp /usr/lib/python2.6/site-packages/ambari_server/setupAgent.py
host=hdp-cluster-dfossouo-1.novalocal, exitcode=0

ERROR: Bootstrap of host hdp-cluster-dfossouo-1.novalocal fails because previous action finished with non-zero exit code (1)
ERROR MESSAGE: Execute of ‘<bound method Bootstrap.copyNeededFiles of <Bootstrap(Thread-1, started daemon 140221148915456)>>’ failed
STDOUT: Try to execute ‘<bound method Bootstrap.copyNeededFiles of <Bootstrap(Thread-1, started daemon 140221148915456)>>’
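The “[sudo] password for osadmin:” prompt right before “Bootstrap timed out” is the tell: the bootstrap is blocking on an interactive sudo password until it times out. Registering hosts over SSH as a non-root user needs passwordless sudo for that user. A sketch of the usual fix, applied with visudo on each target host (osadmin is the user shown in the log):

osadmin ALL=(ALL) NOPASSWD: ALL
Defaults:osadmin !requiretty

After that, retrying the host registration should get past the “Moving file to repo dir” step.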

Hortonworks Sandbox + HDP Security – Version for VMWare?


Replies: 3

Hello,

Will you be releasing a version of the sandbox with HDP security for VMWare?

tez classpath error


Replies: 0

Hello

I installed and enabled Tez, but when I run a query through Hue I get the following error:

14/07/15 08:35:24 ERROR thrift.ProcessFunction: Internal error processing query
java.lang.NoClassDefFoundError: org/apache/tez/dag/api/SessionNotRunning
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:354)
at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState.initialize(BeeswaxServiceImpl.java:303)
at com.cloudera.beeswax.BeeswaxServiceImpl$2.run(BeeswaxServiceImpl.java:832)
at com.cloudera.beeswax.BeeswaxServiceImpl$2.run(BeeswaxServiceImpl.java:828)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
at com.cloudera.beeswax.BeeswaxServiceImpl.doWithState(BeeswaxServiceImpl.java:777)

I have set $TEZ_CONF_DIR and $TEZ_JARS, and added $TEZ_JARS to $HADOOP_CLASSPATH.

I have added the tez.lib.uris property to tez-site.xml:

<property>
<name>tez.lib.uris</name>
<value>hdfs://172.31.83.135:8020/apps/tez/,hdfs://172.31.83.135:8020/apps/tez/lib/</value>
</property>

And I have copied the files into the HDFS directory

# hadoop fs -ls /apps/tez
Found 10 items
drwxr-xr-x - hdfs Users 0 2014-07-15 09:26 /apps/tez/conf
drwxr-xr-x - hdfs Users 507 2014-07-15 09:26 /apps/tez/lib
-rw-r--r-- 1 hdfs Users 748258 2014-07-15 09:26 /apps/tez/tez-api-0.4.0.2.1.2.0-402.jar
-rw-r--r-- 1 hdfs Users 29852 2014-07-15 09:26 /apps/tez/tez-common-0.4.0.2.1.2.0-402.jar
-rw-r--r-- 1 hdfs Users 980895 2014-07-15 09:26 /apps/tez/tez-dag-0.4.0.2.1.2.0-402.jar
-rw-r--r-- 1 hdfs Users 242117 2014-07-15 09:26 /apps/tez/tez-mapreduce-0.4.0.2.1.2.0-402.jar
-rw-r--r-- 1 hdfs Users 195639 2014-07-15 09:26 /apps/tez/tez-mapreduce-examples-0.4.0.2.1.2.0-402.jar
-rw-r--r-- 1 hdfs Users 110397 2014-07-15 09:26 /apps/tez/tez-runtime-internals-0.4.0.2.1.2.0-402.jar
-rw-r--r-- 1 hdfs Users 348568 2014-07-15 09:26 /apps/tez/tez-runtime-library-0.4.0.2.1.2.0-402.jar
-rw-r--r-- 1 hdfs Users 2620 2014-07-15 09:26 /apps/tez/tez-tests-0.4.0.2.1.2.0-402.jar

Is there another step I am missing to get Hue/Beeswax to see the Tez JARs on the classpath?
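One thing to note: exporting $HADOOP_CLASSPATH in a login shell does not reach the Beeswax server, which is its own long-running JVM with the environment it was started with. A sketch of making the setting global, assuming HDP 2.1 default paths (adjust to your install):

# In /etc/hadoop/conf/hadoop-env.sh, so every Hadoop client JVM inherits it:
export TEZ_CONF_DIR=/etc/tez/conf
export TEZ_JARS=/usr/lib/tez
export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:${TEZ_CONF_DIR}:${TEZ_JARS}/*:${TEZ_JARS}/lib/*

Then restart Hue/Beeswax so it picks the new classpath up.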

Thanks

Sandbox – Pig Basic Tutorial example is not working


Replies: 48

Hi, I just tried the following Pig Basic Tutorial script, which is not working:

a = LOAD 'nyse_stocks' USING org.apache.hcatalog.pig.HCatLoader();
b = FILTER a BY stock_symbol == 'IBM';
c = group b all;
d = FOREACH c GENERATE AVG(b.stock_volume);
dump d;

When I ran the syntax check, the following log was captured:

2013-03-17 14:35:28,456 [main] INFO org.apache.pig.Main - Apache Pig version 0.10.1.21 (rexported) compiled Jan 10 2013, 04:00:42
2013-03-17 14:35:28,459 [main] INFO org.apache.pig.Main - Logging error messages to: /home/sandbox/hue/pig_1363556128447.log
2013-03-17 14:35:41,945 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
2013-03-17 14:35:45,555 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1070: Could not resolve org.apache.hcatalog.pig.HCatLoader using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Details at logfile: /home/sandbox/hue/pig_1363556128447.log
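ERROR 1070 means Pig itself cannot see the HCatalog classes: the HCatalog JARs are not on Pig's classpath until you ask for them. On the sandbox, the documented way is the -useHCatalog flag (a sketch from a shell on the sandbox):

pig -useHCatalog

and then run the script from the grunt prompt; in the Hue Pig editor, the equivalent is usually adding -useHCatalog to the script's Pig arguments.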

Please help me resolve this issue. Thank you!

Regards,
Sankar

HDP 2.1.3 upgrade overwrites hue database


Replies: 0

I upgraded to HDP 2.1.3 yesterday and noticed that the update process upgraded Hue and also overwrote the /var/lib/hue/desktop.db database.

That file says it's owned by the 2.1.3 Hue package:
# rpm -qf /var/lib/hue/desktop.db
hue-common-2.3.4.2.1.3.0-563.el6.x86_64

I had to restore that file from a backup to avoid losing the info in hue.

Is this a bug? I would have expected it to not overwrite that database file if it existed.

(I don’t see this in the release notes / known issues for HDP 2.1.3, and there is no mention of it in the “upgrading from 2.1.2 to 2.1.3” doc.)
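Until that is confirmed either way, a defensive copy before any Hue upgrade costs little. A sketch:

service hue stop
cp -a /var/lib/hue/desktop.db /var/lib/hue/desktop.db.$(date +%Y%m%d)
# run the upgrade, then compare the two files before restarting Hue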

OOZIE tutorial


Replies: 0

Hello all,
I have run Sqoop imports from SQL Server to HDFS and exports back again, and I have also run Hive jobs independently. How can I schedule the same with Oozie? Where can I find a step-by-step procedure for scheduling a simple job with Oozie?
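The short version of the Oozie workflow cycle, as a sketch (paths, hosts, and file names here are placeholders, not defaults):

# 1. Write a workflow.xml containing a sqoop (or hive) action and put it in HDFS
hadoop fs -mkdir -p /user/me/sqoop-wf
hadoop fs -put workflow.xml /user/me/sqoop-wf/
# 2. Point a local job.properties at that directory:
#    oozie.wf.application.path=hdfs://namenode:8020/user/me/sqoop-wf
# 3. Submit and monitor
oozie job -oozie http://oozie-host:11000/oozie -config job.properties -run
oozie job -oozie http://oozie-host:11000/oozie -info <job-id>

For recurring schedules, the same workflow gets wrapped in an Oozie coordinator; the examples shipped with Oozie cover both cases.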


Sqoop job through oozie


Replies: 2

I have created a Sqoop job called TeamMemsImportJob, which pulls data from SQL Server into Hive.
I can execute the Sqoop job from the Unix command line by running the following command:

sqoop job --exec TeamMemsImportJob

If I create an Oozie job with the actual Sqoop import command in it, it runs through fine.
However, if I create the Oozie job and run the saved Sqoop job through it, I get the following error:

oozie job -config TeamMemsImportJob.properties -run

>>> Invoking Sqoop command line now >>>

4273 [main] WARN org.apache.sqoop.tool.SqoopTool – $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
4329 [main] INFO org.apache.sqoop.Sqoop – Running Sqoop version: 1.4.4.2.1.1.0-385
5172 [main] ERROR org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage – Cannot restore job: TeamMemsImportJob
5172 [main] ERROR org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage – (No such job)
5172 [main] ERROR org.apache.sqoop.tool.JobTool – I/O error performing job operation: java.io.IOException: Cannot restore missing job TeamMemsImportJob
at org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage.read(HsqldbJobStorage.java:256)
at org.apache.sqoop.tool.JobTool.execJob(JobTool.java:198)

It looks as if Oozie cannot find the job. However, I can see the job listed:

[root@sandbox ~]# sqoop job --list
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/06/25 08:12:08 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4.2.1.1.0-385
Available jobs:
TeamMemsImportJob
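What is happening: saved Sqoop jobs live in a private HSQLDB metastore under the home directory of the user who created them, on the machine where they were created, while the Oozie launcher runs the Sqoop action as a different user on an arbitrary cluster node, so “No such job” is expected. A sketch of the usual fix, a shared metastore both sides can reach (the host is a placeholder; 16000 is the sqoop metastore default port, and your original import arguments go after the --):

# Run a shared metastore
sqoop metastore &
# Create and exec the job against it explicitly
sqoop job --meta-connect jdbc:hsqldb:hsql://metastore-host:16000/sqoop --create TeamMemsImportJob -- import ...
sqoop job --meta-connect jdbc:hsqldb:hsql://metastore-host:16000/sqoop --exec TeamMemsImportJob

The same --meta-connect argument then goes into the Sqoop command inside the Oozie action.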

Can someone please help me out with this?

Thanks,
Colman

HBase Error – This server is in the failed servers list


Replies: 3

Hi,
I am using Hortonworks HDP 2.1 Beta on CentOS/RHEL 6.2 and trying to run a simple HBase Java program.
According to the jps command, all the services are running fine:
[root@localhost ~]# jps
28508 HRegionServer
2423 SecondaryNameNode
3389 NodeManager
3570 JobHistoryServer
32362 Jps
2328 NameNode
18379 QuorumPeerMain
2671 DataNode
4219 org.eclipse.equinox.launcher_1.2.0.v20110502.jar
28379 HMaster
3138 ResourceManager

Below is the Java program:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// minimal wrapper class so the snippet compiles
public class HBaseMasterCheck {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        try {
            // Connects via ZooKeeper and asks whether the HMaster is up
            HBaseAdmin hbase = new HBaseAdmin(conf);
            boolean flag = hbase.isMasterRunning();
            System.out.print("ok: " + flag);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Once I run the program, it shows the following output:
13/10/10 11:17:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=60000 watcher=hconnection-0x6564dbd5
13/10/10 11:17:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6564dbd5 connecting to ZooKeeper ensemble=127.0.0.1:2181
13/10/10 11:17:15 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost.localdomain/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
13/10/10 11:17:15 INFO zookeeper.ClientCnxn: Socket connection established to localhost.localdomain/127.0.0.1:2181, initiating session
13/10/10 11:17:15 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost.localdomain/127.0.0.1:2181, sessionid = 0x141a029bd47004e, negotiated timeout = 40000
13/10/10 11:17:16 INFO client.HConnectionManager$HConnectionImplementation: getMaster attempt 1 of 35 failed; retrying after sleep of 100, exception=com.google.protobuf.ServiceException: java.io.IOException: Could not set up IO Streams
13/10/10 11:17:16 INFO client.HConnectionManager$HConnectionImplementation: getMaster attempt 2 of 35 failed; retrying after sleep of 200, exception=com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: localhost.localdomain/127.0.0.1:60000
Here is the hbase-site.xml:

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://127.0.0.1:8020/apps/hbase</value>
</property>
<property>
  <name>hbase.master.info.bindAddress</name>
  <value>127.0.0.1</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>127.0.0.1</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
Here is the /etc/hosts file
127.0.0.1 localhost.localdomain localhost
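Given that output, two quick checks on the host may help localize it (a sketch):

# Is the HMaster RPC port actually listening, and on which address?
netstat -tlnp | grep 60000
# Is ZooKeeper answering on the quorum address the client uses?
echo stat | nc 127.0.0.1 2181

If port 60000 is bound to an address other than the 127.0.0.1 the client is being handed, the hostname/IP mapping in /etc/hosts is the usual suspect.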

Please let me know why this error is happening.
Thanks,
Aparna


Potential misconfiguration detected. Fix and restart Hue.


Replies: 0

I have been trying to get Hue running on an EC2 cluster. The File Browser gives me this error:
WebHdfsException at /filebrowser/
<urlopen error [Errno 111] Connection refused>

I have updated the ini file with my local server DNS name, but the error still occurs. On the potential misconfiguration page I see this:

hadoop.hdfs_clusters.default.webhdfs_url Current value: http://iocalserverurl.internal:50070/webhdfs/v1/
Failed to access filesystem root
hcatalog.templeton_url Current value: http://localhost:50111/templeton/v1/
Oozie Editor/Dashboard The app won’t work without a running Oozie server
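“Connection refused” on the WebHDFS URL means nothing answered on port 50070 at that address, so either the host in hue.ini is wrong or WebHDFS is off. Two checks from the Hue host (a sketch; the ini path is the usual HDP default, and note that the URL above reads “iocalserverurl”, with an i, which is worth re-checking in hue.ini):

curl -i "http://iocalserverurl.internal:50070/webhdfs/v1/?op=LISTSTATUS"
grep webhdfs_url /etc/hue/conf/hue.ini

Also make sure hdfs-site.xml sets dfs.webhdfs.enabled to true, since Hue's File Browser only speaks WebHDFS (or HttpFS).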

HDP Setup Failure: Hadoop Password Creation


Replies: 6

I am installing Hadoop on Windows 8.1 in a VMware Workstation instance. During HDP setup I hit an error I cannot get past: “Hadoop user password does not meet minimum system requirements defined in the system policies”. I have run secpol.msc to make sure the Windows 8.1 password policy is very basic, requiring only 4 characters with no complexity or expiration rules. Even with these very low requirements, no password I use seems to satisfy the HDP installer. I have also set the policies higher and used complex passwords, but that made no difference either. I downloaded all the code yesterday for this install, so I don't believe anything is out of date.

It must be simple, but I just cannot see what I have missed.
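One thing worth checking is which policy the installer is actually being evaluated against; the local secpol.msc view can be overridden by domain or group policy. A sketch from an elevated cmd prompt (both are standard Windows commands):

net accounts
gpupdate /force

If net accounts still shows tight requirements after your edits, the policy is coming from somewhere other than the local security policy. Otherwise it may simply be the installer's own validation, and trying a password that also satisfies typical Windows complexity defaults (upper/lower case, digit, symbol, 8+ characters) is a cheap test.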
