Channel: Hortonworks » All Topics

Not connecting to any website from Sandbox


Replies: 7

I have installed Sandbox 2.2.4 on Oracle VirtualBox 4.3, and am able to use HCatalog, Hive, Pig, etc.

But while doing the required setup for enabling Ambari, I am not able to connect to any website from the Sandbox. Connecting to github.com is actually required for adding the Vagrant box.

Here is the issue that I am facing:
[root@sandbox ~]# ping http://www.google.com
ping: unknown host http://www.google.com

Please let me know how to resolve this issue.
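A minimal troubleshooting sketch, assuming the VM's DNS is simply unset (the 8.8.8.8 resolver address is an assumption; any reachable resolver works). Note also that ping expects a bare hostname, not a URL, so retest with ping www.google.com first:

# Check whether any nameserver is configured inside the sandbox
cat /etc/resolv.conf

# Test raw IP connectivity, bypassing DNS entirely
ping -c 3 8.8.8.8

# If the IP replies but hostnames do not resolve, add a resolver (assumed address)
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
ping -c 3 www.google.com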

Thanks.

Regards,
Uma Nadipalli


One mapper for a CSV import – ORC export


Replies: 0

I have a small HDP 2.3 cluster (5 nodes) that I’ve set up to get some experience with Hive + Tez. I have one or two tables that I’m creating using what I believe is a fairly simple DDL, as follows:

CREATE TABLE SomeTable_csv(valueA int, valueB int, valueC timestamp)
ROW FORMAT
DELIMITED FIELDS TERMINATED BY ','
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION '/some/long/path'
TBLPROPERTIES ("skip.header.line.count"="1");

And then I load a single file into that table. I then try to turn it into a table backed by an ORC file, so I do something similar:

CREATE TABLE SomeTable(valueA int, valueB int, valueC timestamp)
STORED AS ORC
LOCATION '/some/long/path';

INSERT OVERWRITE TABLE SomeTable
 SELECT valueA,valueB,valueC from SomeTable_csv;

Apparently this produces a Tez task that has 1 mapper and 1 reducer and takes hours to run (the input file is about 50 GB). I expected it to make a reasonable attempt to use more mappers, since it’s a simple mapping process, possibly aligning to the HDFS block size. Any hints about what I can do to get the process to break up into more than one mapper? I can split the input file before importing it into HDFS, but it seems like that’s what the task should be doing.
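If the file is splittable, Tez derives mapper parallelism from split grouping, so shrinking the grouping sizes is the usual first knob; a hedged sketch follows (the 128 MB / 256 MB values are illustrative assumptions, not recommendations). Also check the input's compression first: a single gzip-compressed file is not splittable and will always get exactly one mapper.

# Rerun the ORC load with smaller Tez grouping so more mappers are created
hive -e "
SET hive.execution.engine=tez;
SET tez.grouping.min-size=134217728;  -- 128 MB (illustrative)
SET tez.grouping.max-size=268435456;  -- 256 MB (illustrative)
INSERT OVERWRITE TABLE SomeTable
SELECT valueA, valueB, valueC FROM SomeTable_csv;
"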

Thanks

Ambari host registration failing


Replies: 5

Hi,

I started straight away with the Ambari installation and am now proceeding to set up a 4-node (RHEL) cluster. These are the steps:

1. On the 'Select Stack' page under 'Advanced Repository Options', I checked only 'redhat6', which shows '400: Bad Request' for http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0 and http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6. Then I checked 'Skip Repository Base URL validation' and proceeded.
2. Then I added the hostnames and the id_rsa file (of the host where Ambari is running, which will also be used as the NN) and clicked Next.
3. Three hosts (non-Ambari) failed earlier than the other one; the following is the log for one of them:

==========================
Creating target directory…
==========================

Command start time 2015-02-11 16:03:55

Connection to l1033lab.sss.se.scania.com closed.
SSH command execution finished
host=l1033lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:56

==========================
Copying common functions script…
==========================

Command start time 2015-02-11 16:03:56

scp /usr/lib/python2.6/site-packages/ambari_commons
host=l1033lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:56

==========================
Copying OS type check script…
==========================

Command start time 2015-02-11 16:03:56

scp /usr/lib/python2.6/site-packages/ambari_server/os_check_type.py
host=l1033lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:56

==========================
Running OS type check…
==========================

Command start time 2015-02-11 16:03:56
Cluster primary/cluster OS type is redhat6 and local/current OS type is redhat6

Connection to l1033lab.sss.se.scania.com closed.
SSH command execution finished
host=l1033lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:57

==========================
Checking 'sudo' package on remote host…
==========================

Command start time 2015-02-11 16:03:57
sudo-1.8.6p3-12.el6.x86_64

Connection to l1033lab.sss.se.scania.com closed.
SSH command execution finished
host=l1033lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:58

==========================
Copying repo file to 'tmp' folder…
==========================

Command start time 2015-02-11 16:03:58

scp /etc/yum.repos.d/ambari.repo
host=l1033lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:58

==========================
Moving file to repo dir…
==========================

Command start time 2015-02-11 16:03:58

Connection to l1033lab.sss.se.scania.com closed.
SSH command execution finished
host=l1033lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:58

==========================
Copying setup script file…
==========================

Command start time 2015-02-11 16:03:58

scp /usr/lib/python2.6/site-packages/ambari_server/setupAgent.py
host=l1033lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:59

==========================
Running setup agent script…
==========================

Command start time 2015-02-11 16:03:59
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: [Errno 12] Timeout on http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: (28, 'connect() timed out!')
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: Updates-ambari-1.7.0. Please verify its path and try again
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: [Errno 12] Timeout on http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: (28, 'connect() timed out!')
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: Updates-ambari-1.7.0. Please verify its path and try again
/bin/sh: /usr/sbin/ambari-agent: No such file or directory
{'exitstatus': 1, 'log': ('', None)}

Connection to l1033lab.sss.se.scania.com closed.
SSH command execution finished
host=l1033lab.sss.se.scania.com, exitcode=1
Command end time 2015-02-11 16:05:00

ERROR: Bootstrap of host l1033lab.sss.se.scania.com fails because previous action finished with non-zero exit code (1)
ERROR MESSAGE: tcgetattr: Invalid argument
Connection to l1033lab.sss.se.scania.com closed.

STDOUT: This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: [Errno 12] Timeout on http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: (28, 'connect() timed out!')
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: Updates-ambari-1.7.0. Please verify its path and try again
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: [Errno 12] Timeout on http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml: (28, 'connect() timed out!')
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: Updates-ambari-1.7.0. Please verify its path and try again
/bin/sh: /usr/sbin/ambari-agent: No such file or directory
{'exitstatus': 1, 'log': ('', None)}

Connection to l1033lab.sss.se.scania.com closed.

****************************** The last one to fail (where Ambari runs) had the following log: ******************************

==========================
Creating target directory…
==========================

Command start time 2015-02-11 16:03:55

Connection to l1032lab.sss.se.scania.com closed.
SSH command execution finished
host=l1032lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:56

==========================
Copying common functions script…
==========================

Command start time 2015-02-11 16:03:56

scp /usr/lib/python2.6/site-packages/ambari_commons
host=l1032lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:56

==========================
Copying OS type check script…
==========================

Command start time 2015-02-11 16:03:56

scp /usr/lib/python2.6/site-packages/ambari_server/os_check_type.py
host=l1032lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:56

==========================
Running OS type check…
==========================

Command start time 2015-02-11 16:03:56
Cluster primary/cluster OS type is redhat6 and local/current OS type is redhat6

Connection to l1032lab.sss.se.scania.com closed.
SSH command execution finished
host=l1032lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:57

==========================
Checking 'sudo' package on remote host…
==========================

Command start time 2015-02-11 16:03:57
sudo-1.8.6p3-12.el6.x86_64

Connection to l1032lab.sss.se.scania.com closed.
SSH command execution finished
host=l1032lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:58

==========================
Copying repo file to 'tmp' folder…
==========================

Command start time 2015-02-11 16:03:58

scp /etc/yum.repos.d/ambari.repo
host=l1032lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:58

==========================
Moving file to repo dir…
==========================

Command start time 2015-02-11 16:03:58

Connection to l1032lab.sss.se.scania.com closed.
SSH command execution finished
host=l1032lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:58

==========================
Copying setup script file…
==========================

Command start time 2015-02-11 16:03:58

scp /usr/lib/python2.6/site-packages/ambari_server/setupAgent.py
host=l1032lab.sss.se.scania.com, exitcode=0
Command end time 2015-02-11 16:03:59

==========================
Running setup agent script…
==========================

Command start time 2015-02-11 16:03:59
Automatic Agent registration timed out (timeout = 300 seconds). Check your network connectivity and retry registration, or use manual agent registration.

****************************** Now I did a wget from all the hosts successfully: ******************************

[root@l1034lab ~]# wget http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml
--2015-02-11 16:10:15-- http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0/repodata/repomd.xml
Resolving proxyseso.scania.com… 138.106.57.2
Connecting to proxyseso.scania.com|138.106.57.2|:8080… connected.
Proxy request sent, awaiting response… 200 OK
Length: 2983 (2.9K) [text/xml]
Saving to: "repomd.xml"

100%[===================================================================================>] 2,983 --.-K/s in 0s

2015-02-11 16:10:15 (227 MB/s) - "repomd.xml" saved [2983/2983]

Are some steps mandatory before one can install Ambari and proceed? What’s wrong here?
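One thing stands out above: the wget only succeeds because it goes through proxyseso.scania.com:8080, while yum on the hosts tries to connect directly and times out. A hedged sketch that points yum at the same proxy (run on every host; the host/port are copied from the wget output):

# Give yum the proxy that wget is already using
echo "proxy=http://proxyseso.scania.com:8080" >> /etc/yum.conf

# Confirm the Ambari repo is now reachable, then retry host registration
yum clean all
yum repolist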

Thanks and regards

Problems with adding a job in Oozie


Replies: 0

Hi there,

I have been struggling with some Oozie errors for over a month with no luck. Can you help me with these disasters?

When I submit a MapReduce job in Oozie, I get this error:

JA017: Could not lookup launched hadoop Job ID [job_local152843681_0009] which was associated with action [0000009-150711083342968-oozie-root-W@mapreduce-f660]. Failing this action!
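For what it's worth, the job_local prefix in the job ID means the action ran with Hadoop's local job runner rather than on YARN, so Oozie cannot find the job on the cluster; that usually points at mapreduce.framework.name not resolving to yarn in the configuration the launcher sees. A sketch of the check, assuming the usual HDP config path:

# Verify the framework setting the Oozie launcher should inherit
grep -A 1 mapreduce.framework.name /etc/hadoop/conf/mapred-site.xml
# The <value> element on the next line should read: yarn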

Best,

HDP 2.0 – SparkR (Spark 1.4)

RAM of HDP 2.3 Sandbox


Replies: 0

I used the sandbox with 8 GB RAM and it ran well, but HDP 2.3 hangs when I try to start HBase from the default HDP Sandbox. I upgraded to 16 GB and it runs well; looking at the RAM afterwards, it used 11 GB (including running Spark).

HBase icon in HDP 2.3 Sandbox


Replies: 1

I got this: Connection failed: [Errno 111] Connection refused to sandbox.hortonworks.com:16030

I thought running HBase would not need the URL.
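Port 16030 is the HBase RegionServer info port, so the refused connection just means HBase is not running; the sandbox does not start HBase by default. A hedged sketch via the Ambari REST API, where admin/admin and the cluster name Sandbox are the sandbox defaults and may differ on your install:

# Ask Ambari to start the HBase service (credentials and cluster name are assumptions)
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Start HBase"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://sandbox.hortonworks.com:8080/api/v1/clusters/Sandbox/services/HBASE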

Sandbox in Vagrant


Replies: 0

I want to make a Vagrant edition of the sandbox.

Is it possible?
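It should be: the sandbox is distributed as a VirtualBox VM, and Vagrant can package an existing VirtualBox machine into a box. A hedged sketch (the VM name in quotes is an assumption; use the exact name shown in your VirtualBox manager):

# Package the imported sandbox VM as a Vagrant box, then bring it up
vagrant package --base "Hortonworks Sandbox with HDP 2.3" --output hdp-sandbox.box
vagrant box add hdp-sandbox hdp-sandbox.box
vagrant init hdp-sandbox && vagrant up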


Q: Tutorial HIVE error

Hive jdbc connection error


Replies: 4

I am trying to execute a SQL query on Hive from Java through JDBC. I run the following Java program on the same machine that hosts the Hive database:

import java.sql.SQLException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.DriverManager;

public class HiveJdbcClient {
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws SQLException {
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
            System.exit(1);
        }
        Connection con = DriverManager.getConnection("jdbc:hive2://", "hive", "123456");
        Statement stmt = con.createStatement();
        String tableName = "test2";
        stmt.executeQuery("drop table " + tableName);
        ResultSet res = stmt.executeQuery("create table " + tableName + " (key int, value string)");
        // show tables
        String sql = "show tables '" + tableName + "'";
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        if (res.next()) {
            System.out.println(res.getString(1));
        }
        // describe table
        sql = "describe " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1) + "\t" + res.getString(2));
        }

        // load data into table
        // NOTE: filepath has to be local to the hive server
        // NOTE: /tmp/a.txt is a ctrl-A separated file with two fields per line
        String filepath = "/tmp/a.txt";
        sql = "load data local inpath '" + filepath + "' into table " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);

        // select * query
        sql = "select * from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(String.valueOf(res.getInt(1)) + "\t" + res.getString(2));
        }

        // regular hive query
        sql = "select count(1) from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1));
        }
    }
}

However, it gives the following error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hive/service/cli/thrift/TCLIService$Iface
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:104)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at HiveJdbcClient.main(HiveJdbcClient.java:18)
Caused by: java.lang.ClassNotFoundException: org.apache.hive.service.cli.thrift.TCLIService$Iface
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
… 4 more

I run the Java code using the command:
java -cp “/usr/lib/hive/lib/hive-exec-0.11.0.1.3.0.0-107.jar:/usr/lib/hive/lib/hive-jdbc-0.11.0.1.3.0.0-107.jar:/usr/lib/hive/lib/hive-metastore-0.11.0.1.3.0.0-107.jar:/usr/lib/hive/lib/antlr-runtime-3.4.jar:/usr/lib/hive/lib/derby-10.4.2.0.jar:/usr/lib/hive/lib/jdo2-api-2.3-ec.jar:/usr/lib/hive/lib/jpox-core-1.2.2.jar.zip:/usr/lib/hive/lib/jpox-core-1.2.2.jar.zip:/usr/lib/hadoop/hadoop-core-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/conf:.:/usr/lib/hcatalog/share/hcatalog/hcatalog-core.jar” HiveJdbcClient
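The missing class org.apache.hive.service.cli.thrift.TCLIService$Iface ships in the hive-service jar, which is not on the classpath above (the HiveServer2 JDBC driver also pulls in libthrift and commons-logging at runtime). A hedged sketch that sidesteps listing jars one by one with a classpath wildcard, assuming the same /usr/lib/hive/lib layout shown in the command:

# Put every Hive jar on the classpath instead of picking jars individually
java -cp "/usr/lib/hive/lib/*:/usr/lib/hadoop/hadoop-core-1.2.0.1.3.0.0-107.jar:/usr/lib/hadoop/conf:." HiveJdbcClient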

ERROR: Ambari server upgrade failed.


Replies: 4

Hi,

I have an issue with the upgrade from Ambari 2.0.1 to 2.1.0. I changed the JDK from 1.6 to 1.7 via "ambari-server setup". When I run "ambari-server upgrade", the result is:

Using python /usr/bin/python2.6
Upgrading ambari-server
Updating properties in ambari.properties …
WARNING: Can not find ambari.properties.rpmsave file from previous version, skipping import of settings
Fixing database objects owner
Upgrading database schema
ERROR: Error executing schema upgrade, please check the server logs.
ERROR: Ambari server upgrade failed. Please look at /var/log/ambari-server/ambari-server.log, for more details.
ERROR: Exiting with exit code 11.
REASON: Schema upgrade failed.

When I look at /var/log/ambari-server/ambari-server.log:

23 Jul 2015 11:37:15,724 INFO [main] Configuration:594 – Reading password from existing file
23 Jul 2015 11:37:15,738 INFO [main] Configuration:864 – Hosts Mapping File null
23 Jul 2015 11:37:15,738 INFO [main] HostsMap:60 – Using hostsmap file null
23 Jul 2015 11:37:16,229 INFO [main] ControllerModule:185 – Detected POSTGRES as the database type from the JDBC URL
23 Jul 2015 11:37:17,669 INFO [main] ControllerModule:558 – Binding and registering notification dispatcher class org.apache.ambari.server.notifications.dispatchers.AlertScriptDispatcher
23 Jul 2015 11:37:17,677 INFO [main] ControllerModule:558 – Binding and registering notification dispatcher class org.apache.ambari.server.notifications.dispatchers.EmailDispatcher
23 Jul 2015 11:37:17,731 INFO [main] ControllerModule:558 – Binding and registering notification dispatcher class org.apache.ambari.server.notifications.dispatchers.SNMPDispatcher
23 Jul 2015 11:37:20,479 INFO [main] SchemaUpgradeHelper:277 – Upgrading schema to target version = 2.1.0
23 Jul 2015 11:37:20,500 INFO [main] SchemaUpgradeHelper:286 – Upgrading schema from source version = 2.0.1
23 Jul 2015 11:37:20,502 INFO [main] SchemaUpgradeHelper:150 – Upgrade path: [{ upgradeCatalog: sourceVersion = 2.0.0, targetVersion = 2.1.0 }, { upgradeCatalog: sourceVersion = null, targetVersion = 2.1.0 }]
23 Jul 2015 11:37:20,502 INFO [main] SchemaUpgradeHelper:185 – Executing DDL upgrade…
23 Jul 2015 11:37:20,502 INFO [main] DBAccessorImpl:691 – Executing query: ALTER SCHEMA ambari OWNER TO "ambari";
23 Jul 2015 11:37:20,503 INFO [main] DBAccessorImpl:691 – Executing query: ALTER ROLE "ambari" SET search_path to 'ambari';
23 Jul 2015 11:37:20,506 INFO [main] DBAccessorImpl:691 – Executing query: ALTER TABLE alert_current ALTER COLUMN latest_text TYPE TEXT
23 Jul 2015 11:37:20,516 INFO [main] DBAccessorImpl:691 – Executing query: ALTER TABLE alert_history ALTER COLUMN alert_text TYPE TEXT
23 Jul 2015 11:37:20,582 INFO [main] DBAccessorImpl:691 – Executing query: UPDATE hosts SET host_id = 1 WHERE host_name = 'hm1.clapix.com'
23 Jul 2015 11:37:20,585 INFO [main] DBAccessorImpl:691 – Executing query: UPDATE hosts SET host_id = 2 WHERE host_name = 'hm2.clapix.com'
23 Jul 2015 11:37:20,586 INFO [main] DBAccessorImpl:691 – Executing query: UPDATE hosts SET host_id = 3 WHERE host_name = 'hs1.clapix.com'
23 Jul 2015 11:37:20,586 INFO [main] DBAccessorImpl:691 – Executing query: UPDATE hosts SET host_id = 4 WHERE host_name = 'hs2.clapix.com'
23 Jul 2015 11:37:20,587 INFO [main] DBAccessorImpl:691 – Executing query: UPDATE hosts SET host_id = 5 WHERE host_name = 'hs3.clapix.com'
23 Jul 2015 11:37:20,588 INFO [main] DBAccessorImpl:691 – Executing query: UPDATE hosts SET host_id = 6 WHERE host_name = 'hs4.clapix.com'
23 Jul 2015 11:37:20,589 INFO [main] DBAccessorImpl:691 – Executing query: UPDATE hosts SET host_id = 7 WHERE host_name = 'hs5.clapix.com'
23 Jul 2015 11:37:20,590 INFO [main] DBAccessorImpl:691 – Executing query: UPDATE hosts SET host_id = 8 WHERE host_name = 'hs6.clapix.com'
23 Jul 2015 11:37:20,590 INFO [main] DBAccessorImpl:691 – Executing query: UPDATE hosts SET host_id = 9 WHERE host_name = 'hs7.clapix.com'
23 Jul 2015 11:37:20,592 WARN [main] AbstractUpgradeCatalog:140 – Sequence host_id_seq already exists, skipping
23 Jul 2015 11:37:20,592 INFO [main] DBAccessorImpl:691 – Executing query: ALTER TABLE hosts ALTER COLUMN host_id TYPE BIGINT
23 Jul 2015 11:37:20,621 WARN [main] DBAccessorImpl:288 – FK hostcomponentstate_host_name not found for table hostcomponentstate
23 Jul 2015 11:37:20,621 WARN [main] DBAccessorImpl:748 – Constraint hostcomponentstate_host_name from hostcomponentstate table not found, nothing to drop
23 Jul 2015 11:37:20,640 WARN [main] DBAccessorImpl:288 – FK fk_hostcomponentstate_host_name not found for table hostcomponentstate
23 Jul 2015 11:37:20,640 WARN [main] DBAccessorImpl:748 – Constraint fk_hostcomponentstate_host_name from hostcomponentstate table not found, nothing to drop
23 Jul 2015 11:37:20,657 WARN [main] DBAccessorImpl:288 – FK hstcmponentdesiredstatehstname not found for table hostcomponentdesiredstate
23 Jul 2015 11:37:20,658 WARN [main] DBAccessorImpl:748 – Constraint hstcmponentdesiredstatehstname from hostcomponentdesiredstate table not found, nothing to drop
23 Jul 2015 11:37:20,673 WARN [main] DBAccessorImpl:288 – FK fk_hostcomponentdesiredstate_host_name not found for table hostcomponentdesiredstate
23 Jul 2015 11:37:20,673 WARN [main] DBAccessorImpl:748 – Constraint fk_hostcomponentdesiredstate_host_name from hostcomponentdesiredstate table not found, nothing to drop
23 Jul 2015 11:37:20,690 WARN [main] DBAccessorImpl:288 – FK fk_host_role_command_host_name not found for table host_role_command
23 Jul 2015 11:37:20,690 WARN [main] DBAccessorImpl:748 – Constraint FK_host_role_command_host_name from host_role_command table not found, nothing to drop
23 Jul 2015 11:37:20,704 WARN [main] DBAccessorImpl:288 – FK fk_hoststate_host_name not found for table hoststate
23 Jul 2015 11:37:20,704 WARN [main] DBAccessorImpl:748 – Constraint FK_hoststate_host_name from hoststate table not found, nothing to drop
23 Jul 2015 11:37:20,718 WARN [main] DBAccessorImpl:288 – FK fk_host_version_host_name not found for table host_version
23 Jul 2015 11:37:20,718 WARN [main] DBAccessorImpl:748 – Constraint FK_host_version_host_name from host_version table not found, nothing to drop
23 Jul 2015 11:37:20,732 WARN [main] DBAccessorImpl:288 – FK fk_cghm_hname not found for table configgrouphostmapping
23 Jul 2015 11:37:20,732 WARN [main] DBAccessorImpl:748 – Constraint FK_cghm_hname from configgrouphostmapping table not found, nothing to drop
23 Jul 2015 11:37:20,746 WARN [main] DBAccessorImpl:288 – FK fk_configgrouphostmapping_host_name not found for table configgrouphostmapping
23 Jul 2015 11:37:20,746 WARN [main] DBAccessorImpl:748 – Constraint fk_configgrouphostmapping_host_name from configgrouphostmapping table not found, nothing to drop
23 Jul 2015 11:37:20,759 WARN [main] DBAccessorImpl:288 – FK fk_krb_pr_host_hostname not found for table kerberos_principal_host
23 Jul 2015 11:37:20,759 WARN [main] DBAccessorImpl:748 – Constraint FK_krb_pr_host_hostname from kerberos_principal_host table not found, nothing to drop
23 Jul 2015 11:37:20,773 WARN [main] DBAccessorImpl:288 – FK fk_kerberos_principal_host_host_name not found for table kerberos_principal_host
23 Jul 2015 11:37:20,774 WARN [main] DBAccessorImpl:748 – Constraint fk_kerberos_principal_host_host_name from kerberos_principal_host table not found, nothing to drop
23 Jul 2015 11:37:20,789 WARN [main] DBAccessorImpl:288 – FK fk_krb_pr_host_principalname not found for table kerberos_principal_host
23 Jul 2015 11:37:20,789 WARN [main] DBAccessorImpl:748 – Constraint FK_krb_pr_host_principalname from kerberos_principal_host table not found, nothing to drop
23 Jul 2015 11:37:20,806 WARN [main] DBAccessorImpl:288 – FK fk_hostconfmapping_host_name not found for table hostconfigmapping
23 Jul 2015 11:37:20,807 WARN [main] DBAccessorImpl:748 – Constraint FK_hostconfmapping_host_name from hostconfigmapping table not found, nothing to drop
23 Jul 2015 11:37:20,820 WARN [main] DBAccessorImpl:288 – FK clusterhostmapping_host_name not found for table clusterhostmapping
23 Jul 2015 11:37:20,820 WARN [main] DBAccessorImpl:748 – Constraint ClusterHostMapping_host_name from ClusterHostMapping table not found, nothing to drop
23 Jul 2015 11:37:20,833 WARN [main] DBAccessorImpl:288 – FK fk_clusterhostmapping_host_name not found for table clusterhostmapping
23 Jul 2015 11:37:20,833 WARN [main] DBAccessorImpl:748 – Constraint fk_clusterhostmapping_host_name from ClusterHostMapping table not found, nothing to drop
23 Jul 2015 11:37:20,846 WARN [main] DBAccessorImpl:288 – FK clusterhostmapping_cluster_id not found for table clusterhostmapping
23 Jul 2015 11:37:20,846 WARN [main] DBAccessorImpl:748 – Constraint ClusterHostMapping_cluster_id from ClusterHostMapping table not found, nothing to drop
23 Jul 2015 11:37:20,859 INFO [main] DBAccessorImpl:422 – Foreign Key constraint FK_clhostmapping_cluster_id already exists, skipping
23 Jul 2015 11:37:20,863 INFO [main] DBAccessorImpl:691 – Executing query: ALTER TABLE hosts DROP CONSTRAINT hosts_pkey
23 Jul 2015 11:37:20,869 ERROR [main] DBAccessorImpl:697 – Error executing query: ALTER TABLE hosts DROP CONSTRAINT hosts_pkey
org.postgresql.util.PSQLException: ERROR: cannot drop constraint hosts_pkey on table hosts because other objects depend on it
Detail: constraint fk_hostconfigmapping_host_name on table hostconfigmapping depends on index hosts_pkey
Hint: Use DROP … CASCADE to drop the dependent objects too.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
at org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:694)
at org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:771)
at org.apache.ambari.server.upgrade.UpgradeCatalog210.executeHostsDDLUpdates(UpgradeCatalog210.java:425)
at org.apache.ambari.server.upgrade.UpgradeCatalog210.executeDDLUpdates(UpgradeCatalog210.java:190)
at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:526)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:190)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:291)
23 Jul 2015 11:37:20,871 ERROR [main] SchemaUpgradeHelper:192 – Upgrade failed.
org.postgresql.util.PSQLException: ERROR: cannot drop constraint hosts_pkey on table hosts because other objects depend on it
Detail: constraint fk_hostconfigmapping_host_name on table hostconfigmapping depends on index hosts_pkey
Hint: Use DROP … CASCADE to drop the dependent objects too.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
at org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:694)
at org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:771)
at org.apache.ambari.server.upgrade.UpgradeCatalog210.executeHostsDDLUpdates(UpgradeCatalog210.java:425)
at org.apache.ambari.server.upgrade.UpgradeCatalog210.executeDDLUpdates(UpgradeCatalog210.java:190)
at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:526)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:190)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:291)
23 Jul 2015 11:37:20,871 ERROR [main] SchemaUpgradeHelper:308 – Exception occurred during upgrade, failed
org.apache.ambari.server.AmbariException: ERROR: cannot drop constraint hosts_pkey on table hosts because other objects depend on it
Detail: constraint fk_hostconfigmapping_host_name on table hostconfigmapping depends on index hosts_pkey
Hint: Use DROP … CASCADE to drop the dependent objects too.
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:193)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:291)
Caused by: org.postgresql.util.PSQLException: ERROR: cannot drop constraint hosts_pkey on table hosts because other objects depend on it
Detail: constraint fk_hostconfigmapping_host_name on table hostconfigmapping depends on index hosts_pkey
Hint: Use DROP … CASCADE to drop the dependent objects too.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
at org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:694)
at org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:771)
at org.apache.ambari.server.upgrade.UpgradeCatalog210.executeHostsDDLUpdates(UpgradeCatalog210.java:425)
at org.apache.ambari.server.upgrade.UpgradeCatalog210.executeDDLUpdates(UpgradeCatalog210.java:190)
at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:526)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:190)
… 1 more
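The PSQLException's hint is the lead here: the leftover foreign key fk_hostconfigmapping_host_name still depends on hosts_pkey, which blocks the catalog upgrade. A hedged sketch of a manual repair in the Ambari Postgres database, assuming the default ambari database/user names; take a backup first, since this edits the schema by hand:

# Back up the Ambari database before touching anything
pg_dump -U ambari ambari > ambari-backup.sql

# Drop the stale FK the upgrade trips over, then rerun the upgrade
psql -U ambari -d ambari -c "ALTER TABLE hostconfigmapping DROP CONSTRAINT fk_hostconfigmapping_host_name;"
ambari-server upgrade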

Kafka: connect and produce data from outside the sandbox


Replies: 1

Hi, I’m not able to connect to Apache Kafka from outside the sandbox.

I tried disabling the firewall, but I am still not able to produce data to the Kafka broker.
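Two things usually matter for reaching a sandbox broker from outside: the broker must advertise an address the external machine can reach, and the broker port must be forwarded or bridged by the VM. A hedged sketch, where 192.168.56.101 is a placeholder for the sandbox's host-reachable IP and 6667 is the HDP default broker port:

# In the broker's server.properties, advertise an externally reachable address
advertised.host.name=192.168.56.101

# Then, from the outside machine, test with the console producer
kafka-console-producer.sh --broker-list 192.168.56.101:6667 --topic test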
Thanks

Installing Hadoop In Windows 8


Replies: 1

Hi All,
I am new to Hadoop. Please guide me through installing Hadoop on my Windows machine.

Also, I have the questions below:
1. Is JDK 1.6 mandatory for Hadoop? Currently I have only 1.7 and 1.8.
2. What is the minimum configuration required for Hadoop? My machine is an i3 with 4 GB RAM; I hope this is enough.

Adding a database in Ambari Hive


Replies: 2

I see default and xademo. How do you create a database in Ambari?
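A minimal sketch from the sandbox shell (mydb is an example name); the new database then appears next to default and xademo in the Ambari Hive view:

# Create a Hive database and confirm it shows up
hive -e "CREATE DATABASE mydb; SHOW DATABASES;"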

Looking for Community Contributions


Replies: 1

Hi all,

We’re looking for community contributions to the Sandbox tutorials! If you have a great lesson, a tip, or a tutorial on connecting the Sandbox to another tool or application (Eclipse, Maven, BI tools, R, etc.), we’d love to see your contributions. You can contribute through GitHub: https://github.com/hortonworks/hadoop-tutorials . This is your chance to share your knowledge and gain external recognition for your expertise. And there might even be a bit of Hortonworks swag for your efforts. https://twitter.com/HadoopFred/status/398950825164562433


Integrating Hadoop security (Kerberos) with Active directory


Replies: 0

I am trying to enable Hadoop security in a Windows environment with Active Directory. The machines running Hadoop are in domain A and the Kerberos users/principals are in domain B. Trust is enabled between domain A and domain B (I am able to log in to machines in domain A using accounts in domain B). A few questions about this:

1. Do I need to run the Hadoop services under the user account (domain B) or can I run them as a machine local account such as Local System?
2. If I run them as Local System, how does the preauthentication to KDC take place? Is there a way to configure the credentials to use for KDC preauthentication?


Sqoop and Flume


Replies: 1

Hello,
Can anyone share the training links available for Sqoop and Flume for the Hortonworks Sandbox? I checked the Hortonworks tutorial page but could not find anything for Sqoop or Flume. There are trainings for Pig and Hive.

Thanks,
Phani.

Not able to load file through Flume


Replies: 0

Hi All,

I am new to Hadoop and we are using the Hortonworks Sandbox 2.3 for hands-on work. I am trying to load a sample file (.csv) from a directory into HDFS using Flume.
But nothing is happening, and I am not getting any error either. The source of the file is /var/log/flumeSpool and the destination is /var/log/flumeSpool/%y-%m-%d. I am not sure if I have configured the sink correctly.

Kindly suggest; below is the Flume configuration file I used.

# Agent Name is agent-1 and reading log file from /var/log/apache/flumeSpool
agent-1.sources = src-1
agent-1.sinks = snk-1
agent-1.channels = ch-1

# configure of source
agent-1.sources.src-1.type = spooldir
agent-1.sources.src-1.channels = ch-1
agent-1.sources.src-1.spoolDir = /var/log/flumeSpool
agent-1.sources.src-1.fileHeader = true

# configure of sink
agent-1.sinks.snk-1.type = hdfs
agent-1.sinks.snk-1.hdfs.path = /var/log/flumeSpool/%y-%m-%d
agent-1.sinks.snk-1.hdfs.filePrefix = Flight-
agent-1.sinks.snk-1.hdfs.round = true
agent-1.sinks.snk-1.hdfs.roundValue = 10
agent-1.sinks.snk-1.hdfs.roundUnit = minute

# configure of channel
agent-1.channels.ch-1.type = memory

# Flow from sources to channels
agent-1.sources.src-1.channels = ch-1

# Flow from channels to sink
agent-1.sinks.snk-1.channel = ch-1
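Two details worth checking against this config: the sink path normally needs a full HDFS scheme, and escape sequences like %y-%m-%d require a timestamp header on each event, which the spooling-directory source does not add by itself. A hedged sketch (the namenode address and the config file name are assumptions):

# Fully qualify the sink path and let the sink stamp events itself
agent-1.sinks.snk-1.hdfs.path = hdfs://sandbox.hortonworks.com:8020/var/log/flumeSpool/%y-%m-%d
agent-1.sinks.snk-1.hdfs.useLocalTimeStamp = true

# Start the agent under the matching name and watch its log for errors
flume-ng agent --conf /etc/flume/conf --conf-file agent-1.conf --name agent-1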

PUT webhdfs url


Replies: 0

Hello Gurus,

1) I am new to Hadoop. Lately I observed that to open a file this URL can be used: http://<host>:50070/webhdfs/v1/KS/TEST_FILE.txt?op=OPEN

Similarly, how can I store (PUT) a file using a URL like the "OPEN" URL above?

My source location is C:\HTEMP.
The HDFS location to put the file into is /KS.

2) What exactly happens when a PUT is executed through the URL? I am looking for some kind of explanation like: when this URL is run, it picks up a particular .jar file from a particular location, etc.
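For reference, WebHDFS writes a file in two steps: the first PUT to the NameNode returns a 307 redirect with a DataNode URL in the Location header, and the second PUT sends the actual bytes there. A sketch with curl, reusing the host and /KS path from the question (run from the machine holding C:\HTEMP):

# Step 1: ask the NameNode where to write; no data is sent yet
curl -i -X PUT "http://<host>:50070/webhdfs/v1/KS/TEST_FILE.txt?op=CREATE"

# Step 2: PUT the file to the DataNode URL returned in the Location header
curl -i -X PUT -T C:\HTEMP\TEST_FILE.txt "<datanode-url-from-Location-header>"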

Thanks,
Solic


