Channel: Hortonworks » All Topics
Viewing all 5121 articles

Can’t connect to Hive from Tableau 8.1


Replies: 0

I am trying to connect to Hive from Tableau 8.1. I installed the HDP Hive ODBC driver, and when I test the connection from the ODBC Administrator it works fine:
——————————————-
Driver Version: V1.4.5.1005
Running connectivity tests…
Attempting connection
Connection established
Disconnecting from server
TESTS COMPLETED SUCCESSFULLY!
——————————————-

but when I select the Hortonworks Hadoop Hive connection in Tableau, type the server’s IP address in step 1 (leaving the port untouched), and click Connect in step 2, I receive the message below:

The drivers necessary to connect to this database server are not properly installed. Visit http://www.tableausoftware.com/drivers to download driver setup files.
[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified

Unable to connect to the server “192.168.24.128”. Check that the server is running and that you have access privileges to the requested database.
Unable to connect to the server. Check that the server is running and that you have access privileges to the requested database.

No problems connecting via Excel 2013.

Can someone please help?

Thanks!
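One thing worth checking (an editorial sketch, not a confirmed diagnosis): Tableau 8.1 is a 32-bit application, so it loads the 32-bit ODBC driver, while the Administrative Tools shortcut on 64-bit Windows opens the 64-bit ODBC Administrator. A DSN that tests fine in one can be invisible to the other, producing exactly the "Data source name not found" error above.

```shell
:: Assumption: 64-bit Windows. Open the 32-bit ODBC Administrator explicitly
:: and verify the Hortonworks Hive driver and DSN appear there:
C:\Windows\SysWOW64\odbcad32.exe
```

If the driver is only listed in the 64-bit Administrator, installing the 32-bit variant of the driver may resolve it.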


Installing Flume


Replies: 3

Hi everyone,

First of all, I am totally new to Hadoop, Linux, and Hortonworks, so forgive me if I ask some stupid questions!

Basically, I have installed the Hortonworks Sandbox successfully and would now like to install Flume to load in some Twitter data (as in Tutorial 12). However, when I follow the instructions in Tutorial 12 the installation does not work – I receive a PYCURL ERROR 6 message (“can’t find a valid base URL for repo: base”) when I execute the yum install command.

Also, when I ping google.com or any other site I receive an “unknown host” error, so I’m guessing the sandbox shell is not configured to access the internet. Any help you can give me to resolve this would be great.

Thanks!
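Since both yum and ping fail on name resolution, a hedged first step (assuming the VirtualBox sandbox with a NAT adapter and a CentOS 6 guest) is to check DNS inside the guest:

```shell
# "unknown host" usually means DNS is not set up inside the guest
cat /etc/resolv.conf                           # should list a reachable nameserver
echo "nameserver 8.8.8.8" >> /etc/resolv.conf  # temporary workaround (Google public DNS)
service network restart
ping -c 2 google.com                           # re-test name resolution
```

If ping then succeeds, re-running the yum install from Tutorial 12 should get past the PYCURL error.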

Ambari Installation from tarballs?


Replies: 0

Hi All,
I am evaluating Ambari for managing my cluster. I have the following questions:
1. Is RPM the only way to install Ambari? Is there a way to install manually using tarballs – something along the lines of http://goo.gl/DTIsRL?
2. How can I make sure that Ambari doesn’t become a single point of failure?
3. Is it possible to configure Ambari to point to an external database? If so, are there instructions to configure the database for Ambari?

Thanks,
Karthick
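For question 3, a hedged sketch (assuming a recent Ambari 1.x release): `ambari-server setup` offers an advanced database configuration option for pointing Ambari at an external database, and ships flags for registering the JDBC driver. The jar path below is an example, not a guaranteed location.

```shell
# Interactive setup: choose the advanced database configuration option when prompted
ambari-server setup

# Register a JDBC driver for an external database (path is an assumption)
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
```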

Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver") error !


Replies: 0

Hi Friends,
I am trying to connect to a MS SQL Server instance from a simple .java app and pull some files, but I get this error:
java.lang.ClassNotFoundException: com.microsoft.sqlserver.jdbc.SQLServerDriver

any experience on how to fix this ?

Thanks,
Patrick
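A minimal sketch of the usual fix: `ClassNotFoundException` for the driver class almost always means the SQL Server JDBC jar is not on the classpath at runtime. The jar name below is an assumption; use whichever jar ships with your driver download.

```shell
# Compile, then run with the driver jar on the classpath
javac App.java
java -cp ".:sqljdbc4.jar" App    # on Windows the separator is ";" -> ".;sqljdbc4.jar"
```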

ERROR 1070: Could not resolve org.apache.hcatalog.pig.HCatLoader


Replies: 2

Hi,

When I run the Pig command below on HDP 2.1 for Windows, it fails with ERROR 1070: Could not resolve org.apache.hcatalog.pig.HCatLoader. Please advise what could be wrong.

grunt> sample_01 = LOAD 'hivesmoke' using org.apache.hcatalog.pig.HCatLoader();

The log file shows the following:

Caused by:
<line 1, column 39> pig script failed to validate: org.apache.pig.backend.executionengine.ExecException: ERROR 1070: Could not resolve org.apache.hcatalog.pig.HCatLoader using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
at org.apache.pig.parser.LogicalPlanBuilder.validateFuncSpec(LogicalPlanBuilder.java:1299)
at org.apache.pig.parser.LogicalPlanBuilder.buildFuncSpec(LogicalPlanBuilder.java:1284)
at org.apache.pig.parser.LogicalPlanGenerator.func_clause(LogicalPlanGenerator.java:5162)
at org.apache.pig.parser.LogicalPlanGenerator.load_clause(LogicalPlanGenerator.java:3519)
at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1629)
at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:1106)
at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:564)
at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:421)
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:188)
… 10 more
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 1070: Could not resolve org.apache.hcatalog.pig.HCatLoader using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
at org.apache.pig.impl.PigContext.resolveClassName(PigContext.java:653)
at org.apache.pig.parser.LogicalPlanBuilder.validateFuncSpec(LogicalPlanBuilder.java:1296)
… 18 more

thanks,
nemo
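A hedged sketch of the common fix: the HCatalog jars are not on Pig's classpath by default, so the class cannot be resolved. On HDP builds of Pig, the `-useHCatalog` switch adds them when launching:

```shell
# Launch Pig with the HCatalog jars on its classpath
# (script.pig is a placeholder name for the script containing the LOAD statement)
pig -useHCatalog script.pig
```

Alternatively, REGISTER statements pointing at the HCatalog and Hive jars at the top of the script achieve the same thing, though the switch is less path-dependent.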

How to install Mahout using Ambari server


Replies: 0

I have created a Hadoop cluster with 3 slaves and 1 master using Ambari server (Hortonworks). I need to install Mahout 0.9 on the master machine in order to run Mahout jobs on the cluster. How do I do that?

I am using ambari 1.5.1 and HDP 2.1.

127.0.0.1:8888 does not bring up the welcome screen


Replies: 1

Hi All,

Port 8888 is supposed to display a welcome screen as described in the instruction manual. However, I see a different screen on port 8888. The screen is titled “HDP 2.1 Technical Preview” and has the following subheadings:
1. Get started with Hadoop
2. Try New Features
3. Dive right in

All of the above sections link to the Hortonworks website.

My VirtualBox adapter is set to NAT. I would have attached a screenshot if that option were available here.

Please help me.

Thanks,
Satish

Create sandbox on a VMware cluster


Replies: 1

Is it possible to run the 2.1 sandbox inside a VMware cluster? I have built a Windows 7 workstation in VMware and installed the sandbox. I receive this error when I try to boot the sandbox: “The kernel requires an x86-64 CPU, but only detected an i686 CPU. Unable to boot – please use a kernel appropriate for your CPU.” The CPU is x64 and I have tried multiple ESX hosts.

Steve
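A hedged editorial note: this is a nested-virtualization setup (a 64-bit sandbox VM inside a Windows 7 VM on ESXi), and the i686 error typically means the outer VM does not expose hardware virtualization (VT-x/AMD-V) to its own guests. One sketch of the fix, assuming an ESXi 5.1+ host:

```shell
# Add to the Windows 7 VM's .vmx file (or tick "Expose hardware assisted
# virtualization to the guest OS" in the vSphere VM settings), then power-cycle:
vhv.enable = "TRUE"
```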


Unable to ssh into sandbox using putty


Replies: 1

Hey,

I have downloaded the sandbox for VMware Workstation and have Workstation 10 running. I am able to see the sandbox page in the browser at 192.168.124.128, but I can’t ssh into my sandbox. I tried 127.0.0.1:2222 and 192.168.124.128:2222; both return the same “Connection refused” error.

Please help me resolve this problem. Thank you in advance.
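One likely culprit, sketched here as an assumption: in PuTTY the port goes in its own field (host `127.0.0.1`, port `2222`) rather than being appended as `host:port` – typing `127.0.0.1:2222` into the host field makes PuTTY try port 22 on a bad hostname. The OpenSSH equivalent of the correct settings is:

```shell
# Connect to the sandbox's forwarded SSH port; root is the documented sandbox user
ssh root@127.0.0.1 -p 2222
```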

HDP2 Sandbox won't work


Replies: 1

I downloaded and installed the Hortonworks Sandbox HDP 2.0 on my machine and can run it successfully. The issue is that the IP address it gives isn’t working in my browsers. It keeps reporting firewall issues, but I have all my firewalls disabled and it still won’t connect.

ifconfig hortonworks

$
0
0

Replies: 1

Good morning. I’m a beginner with Big Data and Hortonworks. I’ve tried to run the Hortonworks Sandbox with VirtualBox. With the command ifconfig I get this result:
eth0 inet addr:10.0.2.15. Is it normal to have such an IP address? Why don’t I have an address like 192.168.…?
No address of the form 10.0.2.15:xxxxx works in my Chrome browser. Thanks for your help.
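For reference (a hedged note, assuming VirtualBox's default NAT networking): 10.0.2.15 is the address VirtualBox NAT normally assigns to the guest, so it is expected, and it is not directly routable from the host. The guest is reached through port-forwarding rules on the host instead, e.g.:

```shell
# Forward host port 8888 to guest port 8888 (VM name is an assumption)
VBoxManage modifyvm "Hortonworks Sandbox" --natpf1 "hue,tcp,,8888,,8888"
# then browse to http://127.0.0.1:8888 on the host
```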

Need guidance to set up a Hadoop environment using Hortonworks with 3 virtual servers


Replies: 1

Hi. I need guidance to set up a Hadoop environment using Hortonworks on Windows. I would like to have 3 virtual servers: 1 master and 2 slaves. The slaves should be able to take in 3 TB of data. What should the specifications of the virtual servers be, and which version of the Hortonworks Sandbox should I use? Also, how would I go about rescaling if required in the future?

I would also appreciate it if anyone could point me to a quick reference/study guide on the same.

Thanks.

Storm Tutorial for Beginners


Replies: 0

Can anybody help me with writing a program and deploying and running it on Storm? I need a step-by-step tutorial with program code and commands. I don’t think I need a tutorial for installing and configuring Storm.

I also need to learn the program structure for a Python program and how to run that program on Storm.

I have seen many tutorials, but they are not for beginners; there is a lot of missing information about how to start, how to write a program, where to define the topology, and how to run it.

A quick, detailed response would be highly appreciated.

Thankyou

Regards
-Shahid Ansari
0082-10-4680-5051

Cluster install fails with 404


Replies: 0

After lots of trial and error, I’ve gotten to the point where Ambari is trying to install my first node. After a bit, it fails with:
Running setup agent...
STDOUT
http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: Updates-ambari-1.x. Please verify its path and try again
http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"

I’m all sorts of confused. I can browse to that URL, for what that’s worth. But where is it even coming from? The base URL for HDP 2.0.6 on CentOS 6.x is:

http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.0.6.0/

What’s going on?
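A hedged sketch of where to look: the failing URL is not the HDP base URL at all but the Ambari repository, which comes from the ambari.repo file Ambari places on each node during bootstrap. On the node being installed:

```shell
# Check which baseurl yum is actually using for the Ambari repo
cat /etc/yum.repos.d/ambari.repo

# Clear cached metadata and re-test the repositories
yum clean all
yum repolist
```

If the baseurl in that file differs from what works in a browser, correcting it there (or in the repo options during Ambari setup) is the usual fix.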

Sandbox – Pig Basic Tutorial example is not working


Replies: 42

Hi, I just tried the following Pig Basic Tutorial script, which is not working:

a = LOAD 'nyse_stocks' USING org.apache.hcatalog.pig.HCatLoader();
b = FILTER a BY stock_symbol == 'IBM';
c = group b all;
d = FOREACH c GENERATE AVG(b.stock_volume);
dump d;

When I tried the syntax check, the following log entries were captured:

2013-03-17 14:35:28,456 [main] INFO org.apache.pig.Main – Apache Pig version 0.10.1.21 (rexported) compiled Jan 10 2013, 04:00:42
2013-03-17 14:35:28,459 [main] INFO org.apache.pig.Main – Logging error messages to: /home/sandbox/hue/pig_1363556128447.log
2013-03-17 14:35:41,945 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine – Connecting to hadoop file system at: file:///
2013-03-17 14:35:45,555 [main] ERROR org.apache.pig.tools.grunt.Grunt – ERROR 1070: Could not resolve org.apache.hcatalog.pig.HCatLoader using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Details at logfile: /home/sandbox/hue/pig_1363556128447.log

Please advise how to resolve this issue. Thank you!

Regards,
Sankar


Sandbox error


Replies: 0

I am running a Hive SELECT query and I am getting the following error:
Driver returned: 2. Errors: OK
Query ID = hue_20140610205757_f1316eda-f798-41f5-96aa-adbf99aa97a7
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1402454115376_0005, Tracking URL = N/A
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1402454115376_0005
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2014-06-10 20:57:37,368 Stage-1 map = 0%, reduce = 0%
2014-06-10 20:58:11,643 Stage-1 map = 100%, reduce = 100%
Ended Job = job_1402454115376_0005 with errors
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

Sandbox and 32-bit OS


Replies: 8

Hi,

I have installed “VirtualBox 4.3.12 for Windows hosts x86/amd64” on my computer, which runs 32-bit Windows 7 Professional.
I downloaded “Hortonworks+Sandbox+2.0+VirtualBox.ova”.
I installed the VM, and when I started it I got the following error:
“this kernel requires an x86-64 cpu but only detected an i686 cpu”.

I thought this sandbox would work in 32 bits too, no?
How can I run the sandbox on my 32-bit system? Is it possible?

Thanks in advance for your help.
regards,

Kévin.

Pig with hbase errors


Replies: 6

Hi,

I’m trying to do a test with Pig and HBase. The same test works within grunt.
The simple test reads from a text file and saves to an HBase table, but it always gives the same error:

ERROR org.apache.pig.tools.grunt.Grunt – ERROR 2998: Unhandled internal error. org/apache/hadoop/hbase/filter/Filter

Is it a problem with libraries?
How can I fix it?

I have these versions:
Hue 2.3.0-101
HDP 2.0.6
Hadoop 2.2.0
HCatalog 0.12.0
Pig 0.12.0
Hive 0.12.0
Oozie 4.0.0
Ambari 1.4.3
HBase 0.96.1
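A hedged sketch, not a confirmed fix: ERROR 2998 on `org/apache/hadoop/hbase/filter/Filter` usually means the HBase jars are missing from Pig's classpath when the script is launched outside grunt. One common workaround on HDP-style installs:

```shell
# Put the HBase jars on Pig's classpath before launching
# (load_to_hbase.pig is a placeholder script name)
export PIG_CLASSPATH=$(hbase classpath)
pig -x mapreduce load_to_hbase.pig
```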

Beeswax and Hcat "Timed Out" Errors


Replies: 0

All of the sandbox features work except Beeswax and HCat.
When I click on either of these I get a “Timed Out” error message.

Here are the log file entries for the HCat “Timed Out” error:

HCat Timed Out

[11/Jun/2014 06:48:00] middleware DEBUG No desktop_app known for request.
[11/Jun/2014 06:48:00] access INFO 192.168.106.134 frank – “GET /hcatalog/ HTTP/1.0″
[11/Jun/2014 06:48:00] views DEBUG Getting database name from cookies
[11/Jun/2014 06:48:00] thrift_util DEBUG Thrift call: <class ‘hive_metastore.ThriftHiveMetastore.Client’>.get_all_databases(args=(), kwargs={})
[11/Jun/2014 06:48:10] thrift_util WARNING Not retrying thrift call get_all_databases due to socket timeout
[11/Jun/2014 06:48:10] thrift_util INFO Thrift saw a socket error: timed out
[11/Jun/2014 06:48:10] middleware INFO Processing exception: timed out (code THRIFTSOCKET): None: Traceback (most recent call last):
File “/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/core/handlers/base.py”, line 100, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File “/usr/lib/hue/apps/hcatalog/src/hcatalog/views.py”, line 53, in index
return show_tables(request, database=database)
File “/usr/lib/hue/apps/hcatalog/src/hcatalog/views.py”, line 92, in show_tables
databases = db.get_databases()
File “/usr/lib/hue/apps/beeswax/src/beeswax/server/dbms.py”, line 92, in get_databases
return self.client.get_databases()
File “/usr/lib/hue/apps/beeswax/src/beeswax/server/beeswax_lib.py”, line 124, in get_databases
return self.meta_client.get_all_databases()
File “/usr/lib/hue/desktop/core/src/desktop/lib/thrift_util.py”, line 302, in wrapper
raise StructuredException(‘THRIFTSOCKET’, str(e), data=None, error_code=502)
StructuredException: timed out (code THRIFTSOCKET): None

[11/Jun/2014 06:48:13] middleware DEBUG No desktop_app known for request.
[11/Jun/2014 06:48:13] access INFO 192.168.106.134 frank – “GET /about/ HTTP/1.0″
[11/Jun/2014 06:48:16] access WARNING 192.168.106.134 frank – “GET /logs HTTP/1.0″
[11/Jun/2014 06:48:17] access WARNING 192.168.106.134 frank – “POST /logs HTTP/1.0″
[11/Jun/2014 06:48:21] access WARNING 192.168.106.134 frank – “GET /download_logs HTTP/1.0″
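A hedged reading of the traceback above: the timeout happens in the Thrift call `get_all_databases()`, which points at the Hive metastore service rather than at Hue itself. A sketch of what to check on the sandbox (service names vary by HDP version):

```shell
# Is anything listening on the metastore's default Thrift port (9083)?
netstat -tlnp | grep 9083

# If not, check/restart the metastore; one of these usually applies
service hive-metastore status || /etc/init.d/hive-metastore status
```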

Local repository won't work (The requested URL returned error: 403 Forbidden)


Replies: 1

Hello,

I am trying to configure Ambari without access to the internet, and I have successfully installed and configured an FTP server to work as a repository mirror.
I’ve looked around, and after a lot of configuration and re-configuration I’m still not able to get past the “403” error.
The following steps were followed:
1. installed vsftpd
2. created a repo in /var/ftp/pub/hdp/HDP-UTILS-1.1.0.17
3. ran createrepo in the ../HDP-UTILS-1.1.0.17/repos/centos6/repodata/
4. chmoded -R to 775
5. added a new repo to /etc/yum.repos.d/hdp.conf (triple checked the baseurl, it is correct, copy pasting it in a “curl baseurl” will work)
6. disabled iptables, selinux
7. able to curl,wget,navigate in a browser to ftp://fqdn-hostname/pub/hdp/HDP-UTILS-1.1.0.17/repos/centos6/repodata/repomd.xml
8. ran yum clean all
9. yum list gives me this error: ftp://fqdn-hostname/pub/hdp/HDP-UTILS-1.1.0.17/repos/centos6/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden"

I’m trying to find out why I still get the 403 when running the yum install command. Do you have any suggestions?

PS: I actually went into Python and, using the pycurl module, called the URL; it gets an answer from the baseurl used in yum.repos.d.

Thanks!
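A hedged sketch of one angle to check, assuming vsftpd on CentOS 6 with the repo under /var/ftp/pub/hdp: yum fetches over anonymous FTP, which can behave differently from an interactive curl session. Anonymous access requires every directory in the path to be world-executable and every file world-readable, with anonymous access enabled in vsftpd:

```shell
# Confirm anonymous access settings in vsftpd
grep -E '^(anonymous_enable|anon_world_readable_only)' /etc/vsftpd/vsftpd.conf

# List anything in the repo tree that would block the anonymous user
find /var/ftp/pub/hdp -type d ! -perm -o=rx -print   # dirs missing o+rx
find /var/ftp/pub/hdp -type f ! -perm -o=r -print    # files missing o+r
```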


