
Tez Jars unpacked from tar.gz?


Replies: 0

So I have Tez working – or so I thought – until I tried Hive+Tez launched from Oozie. Then I get

Error: Could not find or load main class org.apache.tez.dag.app.DAGAppMaster

That seems to be the entire log.
My understanding is that I need to set tez.lib.uris both in the Tez config file and in the workflow XML, e.g.

<property>
<name>tez.lib.uris</name>
<value>hdfs:///hdp/apps/2.2.0.0-2041/tez/tez.tar.gz,hdfs:///user/oozie/share/lib/</value>
</property>

I'm not sure whether I need hdfs:///user/oozie/share/lib/, but the tar.gz file does not seem to be getting unpacked; otherwise, why would I be getting that error?
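Before digging further, it is worth verifying that the archive really exists at the configured URI; a quick sketch (the local tarball path is an assumption about a stock HDP 2.2 layout, so adjust as needed):

# Verify the Tez tarball exists where tez.lib.uris points
hdfs dfs -ls /hdp/apps/2.2.0.0-2041/tez/tez.tar.gz
# If it is missing, upload it from the local install
hdfs dfs -put /usr/hdp/2.2.0.0-2041/tez/lib/tez.tar.gz /hdp/apps/2.2.0.0-2041/tez/

If the archive is present, the next suspect is the Oozie launcher not seeing the same tez-site.xml, since the DAGAppMaster class can only come from the unpacked archive on the job's classpath.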

Any ideas of things to try?


Not connecting to any website from Sandbox


Replies: 10

I have installed Sandbox 2.2.4 on Oracle VirtualBox 4.3 and am able to use HCatalog, Hive, Pig, etc.

But while doing the required setup for enabling Ambari, I am not able to connect to any website from the Sandbox. Connecting to github.com is required in order to add the Vagrant box.

Here is the issue that I am facing:
[root@sandbox ~]# ping http://www.google.com
ping: unknown host http://www.google.com
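A couple of quick checks, assuming the VM uses NAT networking and should inherit DNS from the host:

# ping takes a bare hostname, not a URL
ping -c 3 www.google.com
# Check which nameserver the guest is using
cat /etc/resolv.conf
# As a temporary test, point the guest at a public resolver
echo "nameserver 8.8.8.8" >> /etc/resolv.conf

If ping by IP address works but ping by name fails, the problem is DNS inside the guest rather than connectivity.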

Please let me know how to resolve this issue.

Thanks.

Regards,
Uma Nadipalli

Pig runtime error


Replies: 0

Hello All,

When running a Pig query from a Java program on my host Mac machine, I get the error below. I have an HDP virtual machine running on my Mac. Is it because my Mac user is different from my VM user? The Java code is also given below:

package com.redhat.aml.pig;

import java.io.IOException;
import java.util.Properties;

import org.apache.pig.ExecType;
import org.apache.pig.PigServer;
import org.apache.pig.backend.executionengine.ExecException;

public class GenerateCustomerProfile {

    public static void main(String[] args) throws ExecException, IOException {
        Properties props = new Properties();
        props.setProperty("fs.default.name", "hdfs://localhost:8020");
        System.out.println("Step 1");
        //props.setProperty("mapred.job.tracker", "<jobtracker-hostname>:<port>");
        PigServer pigServer = new PigServer(ExecType.MAPREDUCE, props);
        System.out.println("Step 2");
        try {
            runMyQuery(pigServer, "/user/aml-demo/trans.txt", "/user/aml-demo/account.txt");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void runMyQuery(PigServer pigServer, String trans, String account) throws IOException {
        System.out.println("Step 3");
        pigServer.registerQuery("transaction = load '" + trans + "' using PigStorage(',') as (TransactionID:int,AccountNo:int,FirstName:chararray,LastName:chararray,Amount:int,TransactionType:chararray,FromZipCode:chararray,ToZipCode:chararray,IPAddress:chararray,DeviceLocation:chararray,Country:chararray,State:chararray);");
        System.out.println("Step 4");
        pigServer.registerQuery("account = load '" + account + "' using PigStorage(',') as (AccountNo:int, FirstName:chararray, LastName:chararray, Street:chararray, City:chararray, State:chararray, ZipCode:chararray, Occupation:chararray, Age:int, Sex:chararray, MaritalStatus:chararray, AccountType:chararray);");
        System.out.println("Step 5");
        pigServer.registerQuery("C = foreach account generate AccountNo as id, ZipCode, Occupation;");
        System.out.println("Step 6");
        pigServer.registerQuery("jnd = join transaction by AccountNo, C by id;");
        System.out.println("Step 7");
        pigServer.registerQuery("D = group jnd by (C::ZipCode, transaction::TransactionType, C::Occupation);");
        System.out.println("Step 8");
        pigServer.registerQuery("E = foreach D generate flatten(group) as (zip,Transaction,occupation), SUM($1.Amount) as total_spent, COUNT(jnd) as numOfTransactions, AVG($1.Amount) as avg;");
        System.out.println("Step 9");
        pigServer.openIterator("E");
        System.out.println("Step 10");
        //pigServer.store("E", "/user/aml-demo/idout");
    }

}

org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias E
at org.apache.pig.PigServer.openIterator(PigServer.java:935)
at com.redhat.aml.pig.GenerateCustomerProfile.runMyQuery(GenerateCustomerProfile.java:42)
at com.redhat.aml.pig.GenerateCustomerProfile.main(GenerateCustomerProfile.java:21)
Caused by: org.apache.pig.PigException: ERROR 1002: Unable to store alias E
at org.apache.pig.PigServer.storeEx(PigServer.java:1038)
at org.apache.pig.PigServer.store(PigServer.java:997)
at org.apache.pig.PigServer.openIterator(PigServer.java:910)
… 2 more
Caused by: org.apache.pig.backend.hadoop.executionengine.JobCreationException: ERROR 2017: Internal error creating job configuration.
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:1010)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:323)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:196)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:304)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1390)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1375)
at org.apache.pig.PigServer.storeEx(PigServer.java:1034)
… 4 more
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/temp680828053/tmp-1259370403/pig-0.15.0.jar could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1551)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3117)
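The root cause is the last exception in the chain: the HDFS client on the Mac reaches the namenode on localhost:8020 but cannot write a block to the datanode ("1 datanode(s) running and 1 node(s) are excluded"), which with a VM usually points to an unreachable datanode address rather than a user mismatch. A check worth trying (the -fs URI mirrors the fs.default.name set in the code above):

# Confirm the datanode is registered and note the address it advertises
hdfs dfsadmin -fs hdfs://localhost:8020 -report

If the datanode advertises an address that is private to the VM, forwarding its data transfer port (50010 by default) in the VM settings, or setting the client-side property dfs.client.use.datanode.hostname=true, are the usual workarounds; treat both as suggestions to verify rather than a confirmed fix for this setup.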

Ambari server upgrade 1.7 > 2.0.1 fails


Replies: 1

Upgraded from HDP 2.2.0 to 2.2.4 via the manual process last month. Just getting around to upgrading the Ambari server from 1.7.0 to 2.0.1; the yum upgrade succeeds, but the subsequent 'ambari-server upgrade' fails. Red Hat Linux 6.6.

ambari-server upgrade
Using python /usr/bin/python2.6
Upgrading ambari-server
Updating properties in ambari.properties …
Fixing database objects owner
Upgrading database schema
ERROR: Error executing schema upgrade, please check the server logs.
ERROR: Ambari server upgrade failed. Please look at /var/log/ambari-server/ambari-server.log, for more details.
ERROR: Exiting with exit code 11.
REASON: Schema upgrade failed.

/var/log/ambari-server/ambari-server.log
15 Jul 2015 17:26:04,039 INFO [main] Configuration:527 – Reading password from existing file
15 Jul 2015 17:26:04,067 INFO [main] Configuration:747 – Hosts Mapping File null
15 Jul 2015 17:26:04,067 INFO [main] HostsMap:60 – Using hostsmap file null
15 Jul 2015 17:26:04,941 INFO [main] ControllerModule:173 – Detected POSTGRES as the database type from the JDBC URL
15 Jul 2015 17:26:08,660 INFO [main] SchemaUpgradeHelper:231 – Upgrading schema to target version = 2.0.1
15 Jul 2015 17:26:08,694 INFO [main] SchemaUpgradeHelper:240 – Upgrading schema from source version = 1.7.0
15 Jul 2015 17:26:08,697 INFO [main] SchemaUpgradeHelper:147 – Upgrade path: [{ upgradeCatalog: sourceVersion = 1.7.0, targetVersion = 2.0.1 }]
15 Jul 2015 17:26:08,697 INFO [main] SchemaUpgradeHelper:180 – Executing DDL upgrade…
15 Jul 2015 17:26:08,698 INFO [main] DBAccessorImpl:547 - Executing query: ALTER SCHEMA ambari OWNER TO "ambari";
15 Jul 2015 17:26:08,699 INFO [main] DBAccessorImpl:547 - Executing query: ALTER ROLE "ambari" SET search_path to 'ambari';
15 Jul 2015 17:26:08,792 INFO [main] DBAccessorImpl:381 – Foreign Key constraint FK_cluster_version_cluster_id already exists, skipping
15 Jul 2015 17:26:08,823 INFO [main] DBAccessorImpl:381 – Foreign Key constraint FK_cluster_version_repovers_id already exists, skipping
15 Jul 2015 17:26:08,921 INFO [main] DBAccessorImpl:547 – Executing query: ALTER TABLE host_version ADD CONSTRAINT FK_host_version_host_name FOREIGN KEY (host_name) REFERENCES hosts (host_name)
15 Jul 2015 17:26:08,925 ERROR [main] DBAccessorImpl:553 – Error executing query: ALTER TABLE host_version ADD CONSTRAINT FK_host_version_host_name FOREIGN KEY (host_name) REFERENCES hosts (host_name)
org.postgresql.util.PSQLException: ERROR: there is no unique constraint matching given keys for referenced table “hosts”
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
at org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:550)
at org.apache.ambari.server.orm.DBAccessorImpl.addFKConstraint(DBAccessorImpl.java:371)
at org.apache.ambari.server.orm.DBAccessorImpl.addFKConstraint(DBAccessorImpl.java:337)
at org.apache.ambari.server.upgrade.UpgradeCatalog200.prepareRollingUpgradesDDL(UpgradeCatalog200.java:285)
at org.apache.ambari.server.upgrade.UpgradeCatalog200.executeDDLUpdates(UpgradeCatalog200.java:124)
at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:371)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:185)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:245)
15 Jul 2015 17:26:08,937 WARN [main] DBAccessorImpl:373 – Add FK constraint failed, constraintName = FK_host_version_host_name, tableName = host_version
15 Jul 2015 17:26:08,937 ERROR [main] SchemaUpgradeHelper:187 – Upgrade failed.
org.postgresql.util.PSQLException: ERROR: there is no unique constraint matching given keys for referenced table “hosts”
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
at org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:550)
at org.apache.ambari.server.orm.DBAccessorImpl.addFKConstraint(DBAccessorImpl.java:371)
at org.apache.ambari.server.orm.DBAccessorImpl.addFKConstraint(DBAccessorImpl.java:337)
at org.apache.ambari.server.upgrade.UpgradeCatalog200.prepareRollingUpgradesDDL(UpgradeCatalog200.java:285)
at org.apache.ambari.server.upgrade.UpgradeCatalog200.executeDDLUpdates(UpgradeCatalog200.java:124)
at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:371)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:185)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:245)
15 Jul 2015 17:26:08,940 ERROR [main] SchemaUpgradeHelper:258 – Exception occured during upgrade, failed
org.apache.ambari.server.AmbariException: ERROR: there is no unique constraint matching given keys for referenced table “hosts”
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:188)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:245)
Caused by: org.postgresql.util.PSQLException: ERROR: there is no unique constraint matching given keys for referenced table “hosts”
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
at org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:550)
at org.apache.ambari.server.orm.DBAccessorImpl.addFKConstraint(DBAccessorImpl.java:371)
at org.apache.ambari.server.orm.DBAccessorImpl.addFKConstraint(DBAccessorImpl.java:337)
at org.apache.ambari.server.upgrade.UpgradeCatalog200.prepareRollingUpgradesDDL(UpgradeCatalog200.java:285)
at org.apache.ambari.server.upgrade.UpgradeCatalog200.executeDDLUpdates(UpgradeCatalog200.java:124)
at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:371)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:185)
… 1 more
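The PSQLException is the real failure: the upgrade DDL wants to add FK_host_version_host_name, but the hosts table has no unique constraint or primary key on host_name for the foreign key to reference. Before retrying, it may help to look at the actual table definition (database and schema names assume Ambari's default embedded Postgres setup):

# Back up first, then inspect the hosts table definition
sudo -u postgres pg_dump ambari > /tmp/ambari-before-upgrade.sql
sudo -u postgres psql ambari -c '\d ambari.hosts'

If host_name really has no unique constraint, comparing against a healthy 1.7.0 schema should show what needs to be restored before the schema upgrade can proceed.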

MetricsPropertyProvider:185 – Error getting timeline metrics


Replies: 0

ambari-server.log shows "MetricsPropertyProvider:185 - Error getting timeline metrics. Can not connect to collector, socket error." about every 20 seconds.
It only appears while I am logged in to the Ambari web UI.
If I log out of the web UI, it stops.
Any help or information would be great.
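One check worth making first (port and endpoint assume a default Ambari Metrics Collector setup, so treat them as assumptions):

# On the host running the Metrics Collector, confirm the service is listening
netstat -tlnp | grep 6188
# Probe the web service Ambari polls for timeline metrics
curl http://localhost:6188/ws/v1/timeline/metrics

If nothing is listening on 6188, the collector itself is down and restarting it from Ambari is the next step.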
Thanks.

[URGENT] Re-Enable Kerberos after changing Realm fails


Replies: 0

Hi,
I had to change the Kerberos realm and therefore I did:
– disable Kerberos in Ambari
– reconfigure Kerberos to reflect the new realm
– recreated base principals for new realm
– tried to re-enable Kerberos in Ambari

It seems I have run into bug https://issues.apache.org/jira/browse/AMBARI-10930,
because Ambari still wants to create keytabs/principals with the old realm.
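If it is that bug, a first diagnostic step is to see which realm Ambari has stored in its Kerberos descriptor (the artifacts endpoint is standard Ambari REST; credentials and cluster name below are placeholders):

# Dump the stored Kerberos descriptor and look for references to the old realm
curl -u admin:admin -H 'X-Requested-By: ambari' http://localhost:8080/api/v1/clusters/CLUSTER_NAME/artifacts/kerberos_descriptor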

Please provide a workaround or a step-by-step solution for getting Kerberos enabled again.

Thanks, Gerd

Permissions error with beeswax


Replies: 0

I get an exception when trying to perform any operation on any database/table that I have set up in Hive, whenever I access it through the Hue/Beeswax interface. This is the exception I get:

Exception Type: QueryServerException at /beeswax/table/default/archive
Exception Value: Bad status for request TExecuteStatementReq(confOverlay={}, sessionHandle=TSessionHandle(sessionId=THandleIdentifier(secret='\x9e\xec=\xbdc\x96Fc\x9a\xa9%Z\xf9?>\x96', guid='\x1f\x94DW\xac{Fa\xaatk\xab\xe8.\xc3\t')), runAsync=False, statement='USE default'):
TExecuteStatementResp(status=TStatus(errorCode=40000, errorMessage='Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [hue] does not have [USE] privilege on [default]', sqlState='42000', infoMessages=['*org.apache.hive.service.cli.HiveSQLException:Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [hue] does not have [USE] privilege on [default]:17:16', 'org.apache.hive.service.cli.operation.Operation:toSQLException:Operation.java:314', 'org.apache.hive.service.cli.operation.SQLOperation:prepare:SQLOperation.java:111', 'org.apache.hive.service.cli.operation.SQLOperation:runInternal:SQLOperation.java:180', 'org.apache.hive.service.cli.operation.Operation:run:Operation.java:256', 'org.apache.hive.service.cli.session.HiveSessionImpl:executeStatementInternal:HiveSessionImpl.java:376', 'org.apache.hive.service.cli.session.HiveSessionImpl:executeStatement:HiveSessionImpl.java:357', 'org.apache.hive.service.cli.CLIService:executeStatement:CLIService.java:257', 'org.apache.hive.service.cli.thrift.ThriftCLIService:ExecuteStatement:ThriftCLIService.java:401', 'org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement:getResult:TCLIService.java:1313', 'org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement:getResult:TCLIService.java:1298', 'org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39', 'org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39', 'org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56', 'org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:206', 'java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1145', 'java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:615', 'java.lang.Thread:run:Thread.java:745', '*org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAccessControlException:Permission denied: user [hue] does not have [USE] privilege on [default]:23:7', 'com.xasecure.authorization.hive.authorizer.XaSecureHiveAuthorizer:checkPrivileges:XaSecureHiveAuthorizer.java:254', 'org.apache.hadoop.hive.ql.Driver:doAuthorizationV2:Driver.java:727', 'org.apache.hadoop.hive.ql.Driver:doAuthorization:Driver.java:520', 'org.apache.hadoop.hive.ql.Driver:compile:Driver.java:457', 'org.apache.hadoop.hive.ql.Driver:compile:Driver.java:305', 'org.apache.hadoop.hive.ql.Driver:compileInternal:Driver.java:1069', 'org.apache.hadoop.hive.ql.Driver:compileAndRespond:Driver.java:1063', 'org.apache.hive.service.cli.operation.SQLOperation:prepare:SQLOperation.java:109'], statusCode=3), operationHandle=None)

However, if I launch Hive as the Hue user over SSH then everything works fine, so I’m not sure what would be different when using it through the UI since it’s still the same user.
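The stack trace hints at why the CLI-over-SSH path behaves differently: the denial comes from XaSecureHiveAuthorizer, i.e. Ranger/XA Secure authorization enforced inside HiveServer2, while the Hive CLI bypasses HiveServer2 entirely. A way to confirm (the beeline connection string assumes a default HiveServer2 host and port):

# Reproduce outside Hue: same user, same HiveServer2 path
beeline -u jdbc:hive2://localhost:10000 -n hue -e 'use default;'

If beeline fails the same way, the fix would be a Ranger/XA Secure policy granting the hue user (or the end users Hue impersonates) the USE privilege on the default database.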

Download of HDP 2.3 VMware and VirtualBox image files is corrupted


Replies: 0

I tried to download the HDP OVA file for both VirtualBox and VMware; both indicated that the file was corrupted when I tried to import it after the download finished.

I checked the file: it is about 2.3 GB, but the download page says it should be 7.3 GB. Is anything wrong?
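A size mismatch that large usually means a truncated download rather than a bad image; worth checking before re-importing (the filename below is a placeholder for whatever the download page gave you):

# Size and checksum should match what the download page publishes
ls -lh HDP_2.3_Sandbox.ova
md5sum HDP_2.3_Sandbox.ova

Re-downloading with a client that can resume (e.g. wget -c <url>) tends to avoid the truncation.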


Ambari 2.1 LDAP: error code 12


Replies: 0

Hi,
After an upgrade from Ambari 2.0 to 2.1, the LDAP sync no longer works; I get this error:


# ambari-server sync-ldap --groups groups.txt
Using python /usr/bin/python2.6
Syncing with LDAP...
Enter Ambari Admin login: admin
Enter Ambari Admin password:
Syncing specified users and groups...ERROR: Exiting with exit code 1.
REASON: Caught exception running LDAP sync. [LDAP: error code 12 - Unavailable Critical Extension]; nested exception is javax.naming.OperationNotSupportedException: [LDAP: error code 12 - Unavailable Critical Extension]; remaining name dc=******

More info in “/var/log/ambari-server/ambari-server.log” :

31 Jul 2015 15:54:12,385 INFO [pool-6-thread-1] AmbariLdapDataPopulator:612 - Reloading properties
31 Jul 2015 15:54:12,400 INFO [pool-6-thread-1] LdapTemplate:1262 - The returnObjFlag of supplied SearchControls is not set but a ContextMapper is used - setting flag to true
31 Jul 2015 15:54:12,602 FATAL [pool-6-thread-1] AbstractRequestControlDirContextProcessor:186 - No matching response control found for paged results - looking for 'class javax.naming.ldap.PagedResultsResponseControl
31 Jul 2015 15:54:12,603 ERROR [pool-6-thread-1] LdapSyncEventResourceProvider:429 - Caught exception running LDAP sync.
org.springframework.ldap.OperationNotSupportedException: [LDAP: error code 12 - Unavailable Critical Extension]; nested exception is javax.naming.OperationNotSupportedException: [LDAP: error code 12 - Unavailable Critical Extension]; remaining name dc=******

See below for the log on my LDAP server.

This LDAP query works fine:

[...]
[31/Jul/2015:16:53:22 +0200] conn=360563 op=1 msgId=2 - SRCH base="dc=*******" scope=2 filter="(&(objectClass=posixGroup)(cn=group1))" attrs=ALL
[31/Jul/2015:16:53:22 +0200] conn=360563 op=1 msgId=2 - RESULT err=0 tag=101 nentries=1 etime=0.001200
[...]

But this one doesn't work, which is strange: there are no attributes in the filter?!

[...]
[31/Jul/2015:16:53:22 +0200] conn=360564 op=1 msgId=2 - SRCH base="dc=*********" scope=2 filter="(&(objectClass=posixAccount)(|(dn=oozie)(uid=oozie)))", unsupported critical extension
[31/Jul/2015:16:53:22 +0200] conn=360564 op=1 msgId=2 - RESULT err=12 tag=101 nentries=0 etime=0.000280
[...]
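LDAP error code 12 means the server rejected a critical control it does not support; Ambari 2.1's sync requests the paged-results control (RFC 2696), which would explain why this query dies while the plain group search succeeds. A way to confirm outside Ambari (host and base DN are placeholders; -E 'pr=500/noprompt' requests the same control):

ldapsearch -x -H ldap://LDAP_HOST:389 -b "dc=example,dc=com" -E 'pr=500/noprompt' "(&(objectClass=posixAccount)(uid=oozie))"

If that reproduces err=12, the options are enabling simple paged results on the directory server or finding an Ambari setting to disable pagination during sync.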

Could you please help me? :) It worked fine in Ambari 2.0; my LDAP server is Oracle Directory Server Enterprise Edition 11g.

Regards,
STAMS

Failing to add the HBase service to the cluster via Ambari


Replies: 2

Trying to add the HBase service through the Ambari Add Service wizard, but it fails to install both the HBase Master and the RegionServer, giving the following error.

2015-07-28 15:09:33,275 - Error while executing command 'any':
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 214, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py", line 30, in hook
setup_users()
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py", line 85, in setup_users
cd_access="a",
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 165, in action_create
sudo.makedirs(path, self.resource.mode or 0755)
File "/usr/lib/python2.6/site-packages/resource_management/core/sudo.py", line 43, in makedirs
shell.checked_call(["mkdir", "-p", path], sudo=True)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
return function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 82, in checked_call
return _call(command, logoutput, True, cwd, env, preexec_fn, user, wait_for_finish, timeout, path, sudo, on_new_line)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 199, in _call
raise Fail(err_msg)
Fail: Execution of 'mkdir -p /etc/resolv.conf/hadoop/hbase' returned 1. mkdir: cannot create directory `/etc/resolv.conf': Not a directory
Error: Error: Unable to run the custom hook script ['/usr/bin/python2.6', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-1156.json', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-1156.json', 'INFO', '/var/lib/ambari-agent/data/tmp']
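The interesting part is the command that failed: mkdir -p /etc/resolv.conf/hadoop/hbase. Some directory property has ended up with /etc/resolv.conf as its base path, so inspecting the configs the wizard is about to deploy is a reasonable first step (configs.sh ships with the Ambari server; the hostname and cluster name below are placeholders):

# Look for the stray /etc/resolv.conf value in the hbase configs
/var/lib/ambari-server/resources/scripts/configs.sh get AMBARI_HOST CLUSTER_NAME hbase-site | grep resolv

Correcting whichever property carries that value and retrying the install should get past the hook failure.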

Installing HDP 2.3 on a single node only


Replies: 0

Hi, all.

Just wondering if there are any problems with installing HDP 2.3 from scratch on a single CentOS 6.6 node for prototyping. I’ve been able to install HDP 2.2 from scratch without any problem. However, multiple attempts to upgrade to HDP 2.3 or to install HDP 2.3 from scratch have failed.

TIA,
Ali

Cannot Start Ambari Server on the box


Replies: 0

Hello everyone, can someone help? I downloaded the Sandbox 2.3 and was able to start the Ambari agent, but not the Ambari server; it comes up with an error.
[root@sandbox ~]# ambari-server start
Using python /usr/bin/python2.6
Starting ambari-server
Ambari Server running with administrator privileges.
Running initdb: This may take upto a minute.
About to start PostgreSQL
ERROR: Exiting with exit code 3.
REASON: Unable to start PostgreSQL server. Status stopped. . Exiting
[root@sandbox ~]#
[root@sandbox ~]# ambari-agent start
Verifying Python version compatibility…
Using python /usr/bin/python2.6
Checking for previously running Ambari Agent…
Starting ambari-agent
Verifying ambari-agent process status…
Ambari Agent successfully started
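Since the real failure is PostgreSQL refusing to start, its own log is the place to look; a sketch (paths assume the sandbox's default embedded Postgres layout):

# Check why PostgreSQL would not start
service postgresql status
tail -50 /var/lib/pgsql/data/pg_log/postgresql-*.log

A common culprit is a stale postmaster.pid left over from an unclean shutdown; removing it (only after confirming no postgres process is running) usually lets the server start.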

Hortonworks Hive ODBC Driver data source testing failed


Replies: 3

Hi there,

We have installed HDP version 1.1.0-160 with the following parameters:
msiexec /i "e:\hdp\hdp-1.1.0-160.winpkg.msi" /lv "e:\hdp\logs\hdp.log" HDP_LAYOUT="e:\hdp\clusterproperties.txt" HDP_DIR="e:\hdp\hadoop" DESTROY_DATA="no"

A small two-node cluster (master and slave) has been created on Windows Server 2012 64-bit, and the cluster configuration is as follows:

#Log directory
HDP_LOG_DIR=e:\hdp\logs
#Data directory
HDP_DATA_DIR=e:\hdp\data
#Hosts
NAMENODE_HOST=hdpmaster
SECONDARY_NAMENODE_HOST=hdpmaster
JOBTRACKER_HOST=hdpmaster
HIVE_SERVER_HOST=hdpmaster
OOZIE_SERVER_HOST=hdpmaster
TEMPLETON_HOST=hdpmaster
SLAVE_HOSTS=workernode1
#Database host
DB_FLAVOR=derby
DB_HOSTNAME=hdpmaster
#Hive properties
HIVE_DB_NAME=hive
HIVE_DB_USERNAME=hive
HIVE_DB_PASSWORD=hive
#Oozie properties
OOZIE_DB_NAME=oozie
OOZIE_DB_USERNAME=oozie
OOZIE_DB_PASSWORD=oozie
Driver Version: V1.2.0.1005

We have also installed MS Excel 2013, MS SQL Server 2012, and HortonworksHiveODBC64 on the master node.

An HDP DSN has been set up with the following details:
Host:hdpmaster
Port:10000
Database:default
Hive Server Type: Hive Server 1
Authentication: No Authentication
Besides that, we have started all services, including the Apache Hadoop hiveserver, and executing the command hive -h hdpmaster -p 10000 gives the following prompt:
[hdpmaster:10000] hive>
However, the connectivity test results are as follows:
Driver Version: V1.2.0.1005

Running connectivity tests…

Attempting connection
Failed to establish connection
SQLSTATE: HY000[Hortonworks][Hardy] (22) Error from ThriftHiveClient: No more data to read.

TESTS COMPLETED WITH ERROR.

If we set Hive Server Type to Hive Server 2 in the DSN setup,

the results are:
Driver Version: V1.2.0.1005

Running connectivity tests…

Attempting connection
Failed to establish connection
SQLSTATE: HY000[Hortonworks][Hardy] (34) Error from Hive: No more data to read..

TESTS COMPLETED WITH ERROR.
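Since the interactive hive prompt connects, the port is reachable, and the question becomes which Thrift service is actually on 10000; a quick check from the master node (Windows command, matching the environment above):

netstat -ano | findstr :10000

HiveServer1 and HiveServer2 speak incompatible Thrift protocols, and a DSN pointed at the wrong type can fail with exactly this kind of "No more data to read" error, so matching the DSN's Hive Server Type to whichever hiveserver service was actually started is the first thing to rule out.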

Please help me out.

Thanks
Mahabubur Rahaman
Orion Informatics Ltd,
Dhaka, Bangladesh

Sqoop and Flume


Replies: 2

Hello,
Can anyone share the training links available for Sqoop and Flume on the Hortonworks Sandbox? I checked the Hortonworks tutorial page but could not find anything for Sqoop or Flume; there are tutorials for Pig and Hive.

Thanks,
Phani.

SLES 11.1 Ambari Registration Fails


Replies: 0

Hi there, I am unable to complete the Ambari "registration" step for a single-node HDP cluster on SLES 11.1.
The SSH key setup is OK; the Ambari server and agent are installed and running, and start|stop|status return OK.
I have included the registration log output, the ambari-server log, the ambari-agent log, and the bootstrap log below, as all of these seem relevant.

The "ambari-updates" zypper repository refresh doesn't work (not found), but the other repositories work OK. Since ambari-server and ambari-agent downloaded fine, I presume the failure of this repository connection is not an issue:

cd /etc/zypp/repos.d
rm ambari*
wget http://public-repo-1.hortonworks.com/ambari/suse11/1.x/GA/ambari.repo
--2015-08-02 16:06:45-- http://public-repo-1.hortonworks.com/ambari/suse11/1.x/GA/ambari.repo
Resolving proxy...
Connecting to proxy... connected.
Proxy request sent, awaiting response... 200 OK
Length: 745 [application/octet-stream]
Saving to: `ambari.repo'
2015-08-02 16:06:45 (46.3 MB/s) - `ambari.repo' saved [745/745]

# zypper clean
All repositories have been cleaned up.
# zypper refresh
Repository 'Hortonworks Data Platform Utils Version - HDP-UTILS-1.1.0.16' is up to date.
Repository 'REP6' is up to date.
Repository 'ambari-1.x - Updates' is invalid.
[Updates-ambari-1.x|http://public-repo-1.hortonworks.com/ambari/suse11/1.x/updates] Repository type can't be determined.
Please check if the URIs defined for this repository are pointing to a valid repository.
Skipping repository 'ambari-1.x - Updates' because of the above error.
Repository 'Ambari 1.x' is up to date.
Some of the repositories have not been refreshed because of an error.

The registration log for the session (Chrome or Firefox) is: =================================

Error building the cache:
[|] Repository type can’t be determined.
Verifying Python version compatibility…
Using python /usr/bin/python2.6
Checking for previously running Ambari Agent…
ERROR: ambari-agent already running
Check /var/run/ambari-agent/ambari-agent.pid for PID.
(‘hostname: ok apjhana01.XXX.XXX.corp
ip: ok XX.XX.XXX.XX (…masked)
cpu: ok Intel(R) Xeon(R) CPU X7560 @ 2.27GHz
(…etc etc)
memory: ok 252.279 GB
disks: ok
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 1.1T 642G 375G 64% /
devtmpfs 127G 120K 127G 1% /dev
tmpfs 127G 248K 127G 1% /dev/shm
os: ok Welcome to SUSE Linux Enterprise Server 11 SP1 (x86_64) – Kernel %r (%t).
iptables: ok
Chain INPUT (policy ACCEPT 235M packets, 110G bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 234M packets, 110G bytes)
pkts bytes target prot opt in out source destination
selinux: UNAVAILABLE
yum: UNAVAILABLE
rpm: ok rpm-4.4.2.3-37.16.37
openssl: ok openssl-0.9.8h-30.27.11
curl: ok curl-7.19.0-11.24.25
wget: ok wget-1.11.4-1.15.1
net-snmp: UNAVAILABLE
net-snmp-utils: UNAVAILABLE
ntpd: UNAVAILABLE
ruby: ok ruby-1.8.7.p72-5.24.2
puppet: ok puppet-0.24.8-1.3.5
nagios: UNAVAILABLE
ganglia: UNAVAILABLE
passenger: UNAVAILABLE
hadoop: UNAVAILABLE
yum_repos: UNAVAILABLE
zypper_repos: ok
2 | HDP-UTILS-1.1.0.16 | Hortonworks Data Platform Utils Version – HDP-UTILS-1.1.0.16 | Yes | No
‘, None)
(‘INFO 2015-08-02 14:36:37,342 NetUtil.py:44 – DEBUG:: Connecting to the following url https://apjhana01.XXX.XXX.corp:4080/cert/ca
INFO 2015-08-02 14:36:37,343 NetUtil.py:58 – Failed to connect to https://apjhana01.XXX.XXX.corp:4080/cert/ca due to [Errno 111] Connection refused
INFO 2015-08-02 14:36:37,343 NetUtil.py:77 – Server at https://apjhana01.XXX.XXX.corp:4080 is not reachable, sleeping for 10 seconds…
INFO 2015-08-02 14:36:45,203 main.py:51 – signal received, exiting.
INFO 2015-08-02 14:36:53,390 shell.py:50 – Killing stale processes
INFO 2015-08-02 14:36:53,391 shell.py:58 – Killed stale processes
INFO 2015-08-02 14:36:53,391 main.py:141 – Connecting to the server at: https://apjhana01.XXX.XXX.corp:8440
INFO 2015-08-02 14:36:53,391 NetUtil.py:68 – DEBUG: Trying to connect to the server at https://apjhana01.sin.XXX.corp:8440
INFO 2015-08-02 14:36:53,392 NetUtil.py:44 – DEBUG:: Connecting to the following url https://apjhana01.sin.XXX.corp:8440/cert/ca
INFO 2015-08-02 14:41:27,032 NetUtil.py:58 – Failed to connect to https://apjhana01.XXX.XXX.corp:8440/cert/ca due to [Errno 104] Connection reset by peer
INFO 2015-08-02 14:41:27,032 NetUtil.py:77 – Server at https://apjhana01.sin.XXX.corp:8440 is not reachable, sleeping for 10 seconds…
INFO 2015-08-02 14:41:37,043 NetUtil.py:44 – DEBUG:: Connecting to the following url https://apjhana01.sin.XXX.corp:8440/cert/ca
INFO 2015-08-02 14:41:37,044 NetUtil.py:58 – Failed to connect to https://apjhana01.sin.XXX.corp:8440/cert/ca due to [Errno 111] Connection refused
INFO 2015-08-02 14:41:37,044 NetUtil.py:77 – Server at https://apjhana01.sin.XXX.corp:8440 is not reachable, sleeping for 10 seconds…
INFO 2015-08-02 14:41:43,917 main.py:51 – signal received, exiting.
INFO 2015-08-02 14:42:46,560 shell.py:50 – Killing stale processes
INFO 2015-08-02 14:42:46,566 shell.py:58 – Killed stale processes
INFO 2015-08-02 14:42:46,566 main.py:141 – Connecting to the server at: https://apjhana01.sin.XXX.corp:8440
INFO 2015-08-02 14:42:46,566 NetUtil.py:68 – DEBUG: Trying to connect to the server at https://apjhana01.sin.XXX.corp:8440
INFO 2015-08-02 14:42:46,567 NetUtil.py:44 – DEBUG:: Connecting to the following url https://apjhana01.sin.XXX.corp:8440/cert/ca
‘, None)

STDERR
Connection to apjhana01.sin.XXX.corp closed.
Registering with the server…
Registration with the server failed.

ambari-server.log =======================================

19:59:45,262 INFO Configuration:288 – Web App DIR test /usr/lib/ambari-server/web
19:59:45,270 INFO CertificateManager:65 – Initialization of root certificate
19:59:45,270 INFO CertificateManager:69 – Certificate exists:true
19:59:45,364 INFO AmbariServer:290 – ********* Initializing Meta Info **********
19:59:45,885 INFO AmbariServer:300 – ********* Initializing Clusters **********
19:59:45,886 INFO AmbariServer:304 – ********* Current Clusters State *********
19:59:45,886 INFO AmbariServer:305 – Clusters=[ ]
19:59:45,886 INFO AmbariServer:307 – ********* Initializing ActionManager **********
19:59:45,886 INFO AmbariServer:309 – ********* Initializing Controller **********
19:59:45,890 INFO AmbariManagementControllerImpl:124 – Initializing the AmbariManagementControllerImpl
19:59:45,895 INFO Server:266 – jetty-7.6.7.v20120910
19:59:45,970 INFO ContextHandler:744 – started o.e.j.s.ServletContextHandler{/,file:/usr/lib/ambari-server/web/}
19:59:48,613 INFO AbstractConnector:338 – Started SelectChannelConnector@0.0.0.0:8080
19:59:48,614 INFO Server:266 – jetty-7.6.7.v20120910
19:59:48,616 INFO ContextHandler:744 – started o.e.j.s.ServletContextHandler{/,null}
19:59:49,673 INFO SslContextFactory:300 – Enabled Protocols [SSLv2Hello, SSLv3, TLSv1] of [SSLv2Hello, SSLv3, TLSv1]
19:59:49,681 INFO AbstractConnector:338 – Started SslSelectChannelConnector@0.0.0.0:8440
19:59:49,751 INFO SslContextFactory:300 – Enabled Protocols [SSLv2Hello, SSLv3, TLSv1] of [SSLv2Hello, SSLv3, TLSv1]
19:59:49,757 WARN AbstractConnector:335 – insufficient threads configured for SslSelectChannelConnector@0.0.0.0:8441
19:59:49,758 INFO AbstractConnector:338 – Started SslSelectChannelConnector@0.0.0.0:8441
19:59:49,758 INFO AmbariServer:324 – ********* Started Server **********
19:59:49,759 INFO ActionManager:61 – Starting scheduler thread
19:59:49,759 INFO AmbariServer:327 – ********* Started ActionManager **********
20:00:25,613 INFO AmbariLocalUserDetailsService:62 – Loading user by name: admin
20:00:26,633 INFO ClusterControllerImpl:92 – Using resource provider org.apache.ambari.server.controller.internal.UserResourceProvider for request type User
20:00:26,984 INFO PersistKeyValueService:82 – Looking for keyName CLUSTER_CURRENT_STATUS
20:03:33,584 INFO BootStrapImpl:97 – BootStrapping hosts apjhana01.sin.XXX.corp:
20:03:33,591 INFO BSRunner:166 – Host= apjhana01.sin.XXX.corp bs=/usr/lib/python2.6/site-packages/ambari_server/bootstrap.py requestDir=/var/run/ambari-server/bootstrap/1 keyfile=/var/run/ambari-server/bootstrap/1/sshKey server=apjhana01.sin.XXX.corp
20:03:33,607 INFO BSRunner:196 – Kicking off the scheduler for polling on logs in /var/run/ambari-server/bootstrap/1
20:03:33,608 INFO BSRunner:200 – Bootstrap output, log=/var/run/ambari-server/bootstrap/1/bootstrap.err /var/run/ambari-server/bootstrap/1/bootstrap.out
20:03:33,610 INFO BSHostStatusCollector:55 – Request directory /var/run/ambari-server/bootstrap/1
20:03:33,610 INFO BSHostStatusCollector:62 – HostList for polling on [apjhana01.sin.XXX.corp]
20:03:33,786 INFO BSRunner:212 – Script log Mesg

Ambari-agent.log ==============================

INFO 2015-08-02 14:36:53,391 main.py:141 – Connecting to the server at: https://apjhana01.sin.XXX.corp:8440
INFO 2015-08-02 14:36:53,391 NetUtil.py:68 – DEBUG: Trying to connect to the server at https://apjhana01.sin.XXX.corp:8440
INFO 2015-08-02 14:36:53,392 NetUtil.py:44 – DEBUG:: Connecting to the following url https://apjhana01.sin.XXX.corp:8440/cert/ca
INFO 2015-08-02 14:41:27,032 NetUtil.py:58 – Failed to connect to https://apjhana01.sin.XXX.corp:8440/cert/ca due to [Errno 104] Connection reset by peer
INFO 2015-08-02 14:41:27,032 NetUtil.py:77 – Server at https://apjhana01.sin.XXX.corp:8440 is not reachable, sleeping for 10 seconds…

/var/run/ambari-server/bootstrap/apjhana01.sin.XXX.corp.log ==================

Verifying Python version compatibility…
Using python /usr/bin/python2.6
Checking for previously running Ambari Agent…
tput: No value for $TERM and no -T specified
ERROR: ambari-agent already running
tput: No value for $TERM and no -T specified
Check /var/run/ambari-agent/ambari-agent.pid for PID.
(‘hostname: ok apjhana01.sin.XXX.corp\nip: ok 10.32.241.20\ncpu: ok Intel(R) Xeon(R) CPU X7560 @ 2.27GHz\nIntel(R) Xeon(R) CPU X7560 @ 2.27GHz\nIntel(R) Xeon(R) CPU X7560 @ 2.27GHz\nIntel(R) Xeon(R) CPU (..etc etc) \nmemory: ok 252.279 GB\ndisks: ok\n Filesystem Size Used Avail Use% Mounted on\n/dev/sda2 1.1T 642G 375G 64% /\ndevtmpfs 127G 120K 127G 1% /dev\ntmpfs 127G 248K 127G 1% /dev/shm\nos: ok Welcome to SUSE Linux Enterprise Server 11 SP1 (x86_64) – Kernel %r (%t).\niptables: ok\n Chain INPUT (policy ACCEPT 240M packets, 112G bytes)\n pkts bytes target prot opt in out source destination \n\nChain FORWARD (policy ACCEPT 0 packets, 0 bytes)\n pkts bytes target prot opt in out source destination \n\nChain OUTPUT (policy ACCEPT 240M packets, 112G bytes)\n pkts bytes target prot opt in out source destination\nselinux: UNAVAILABLE\nyum: UNAVAILABLE\nrpm: ok rpm-4.4.2.3-37.16.37\nopenssl: ok openssl-0.9.8h-30.27.11\ncurl: ok curl-7.19.0-11.24.25\nwget: ok wget-1.11.4-1.15.1\nnet-snmp: UNAVAILABLE\nnet-snmp-utils: UNAVAILABLE\nntpd: UNAVAILABLE\nruby: ok ruby-1.8.7.p72-5.24.2\npuppet: ok puppet-0.24.8-1.3.5\nnagios: UNAVAILABLE\nganglia: UNAVAILABLE\npassenger: UNAVAILABLE\nhadoop: UNAVAILABLE\nyum_repos: UNAVAILABLE\nzypper_repos: ok\n 2 | HDP-UTILS-1.1.0.16 | Hortonworks Data Platform Utils Version – HDP-UTILS-1.1.0.16 | Yes | No\n’, None)
(‘INFO 2015-08-02 15:10:10,683 NetUtil.py:68 – DEBUG: Trying to connect to the server at https://10.32.241.20:8440\nINFO 2015-08-02 15:10:10,683 NetUtil.py:44 – DEBUG:: Connecting to the following url https://10.32.241.20:8440/cert/ca\nINFO 2015-08-02 15:16:19,429 main.py:51 – signal received, exiting.\nINFO 2015-08-02 15:16:29,784 shell.py:50 – Killing stale processes\nINFO 2015-08-02 15:16:29,784 shell.py:58 – Killed stale processes\nINFO 2015-08-02 15:16:29,784 main.py:141 – Connecting to the server at: https://10.32.241.20:8440\nINFO 2015-08-02 15:16:29,785 NetUtil.py:68 – DEBUG: Trying to connect to the server at https://10.32.241.20:8440\nINFO 2015-08-02 15:16:29,785 NetUtil.py:44 – DEBUG:: Connecting to the following url https://10.32.241.20:8440/cert/ca\nINFO 2015-08-02 15:16:59,875 main.py:51 – signal received, exiting.\nINFO 2015-08-02 15:17:08,950 shell.py:50 – Killing stale processes\nINFO 2015-08-02 15:17:08,950 shell.py:58 – Killed stale processes\nINFO 2015-08-02 15:17:08,950 main.py:141 – Connecting to the server at: https://10.32.241.20:8440\nINFO 2015-08-02 15:17:08,951 NetUtil.py:68 – DEBUG: Trying to connect to the server at https://10.32.241.20:8440\nINFO 2015-08-02 15:17:08,951 NetUtil.py:44 – DEBUG:: Connecting to the following url https://10.32.241.20:8440/cert/ca\nINFO 2015-08-02 15:38:12,679 NetUtil.py:58 – Failed to connect to https://10.32.241.20:8440/cert/ca due to [Errno 104] Connection reset by peer\nINFO 2015-08-02 15:38:12,679 NetUtil.py:77 – Server at https://10.32.241.20:8440 is not reachable, sleeping for 10 seconds…\nINFO 2015-08-02 15:38:22,688 NetUtil.py:44 – DEBUG:: Connecting to the following url https://10.32.241.20:8440/cert/ca\nINFO 2015-08-02 15:38:22,689 NetUtil.py:58 – Failed to connect to https://10.32.241.20:8440/cert/ca due to [Errno 111] Connection refused\nINFO 2015-08-02 15:38:22,689 NetUtil.py:77 – Server at https://10.32.241.20:8440 is not reachable, sleeping for 10 seconds…\nINFO 2015-08-02 15:38:32,699 NetUtil.py:44 – DEBUG:: Connecting to the following url https://10.32.241.20:8440/cert/ca\n’, None)

STDERR
tcgetattr: Invalid argument
Connection to apjhana01.sin.XXX.corp closed.
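Two things in the logs stand out. First, the agent alternates between "Connection reset by peer" and "Connection refused" on port 8440, which suggests the server side is dropping connections or restarting rather than a name-resolution problem. Second, the ambari-server log warns "insufficient threads configured for SslSelectChannelConnector@0.0.0.0:8441", and 8441 is the agent registration port. A couple of checks worth running on the node (the hostname is the masked one from the logs above):

# Confirm the handshake ports respond at all
openssl s_client -connect apjhana01.sin.XXX.corp:8440 </dev/null
wget --no-check-certificate -O /tmp/ca.crt https://apjhana01.sin.XXX.corp:8440/cert/ca

If the connection is reset during the SSL handshake, raising the Ambari server's agent thread pool (client.threadpool.size.max in ambari.properties) is a plausible next step given the insufficient-threads warning, though treat that as a suggestion to verify rather than a confirmed fix.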


HDP 2.3 on Ubuntu??


Replies: 2

When will HDP 2.3 add Ubuntu to supported platforms?

Processing a doc file in MapReduce


Replies: 0

I have a document file which consists of both text and some images (graphs, pictures, etc.). How can I process this file in MapReduce? I am looking for a MapReduce program to process it.

Which machines need Hive Client Libs


Replies: 0

So if I am launching a normal Hive client, I need the client libraries and config on my local machine; I don't actually need them on any of my cluster worker nodes.

Now if I launch the same client through Oozie, which machines need the client libraries and config? Is it "all of them"? Is it just the Oozie server machine? Or should I handle this entirely through the Oozie workflow packaging mechanism?
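For reference, with a stock setup the answer tends to be "none of the worker nodes": the Oozie launcher runs on an arbitrary node and pulls the Hive jars from the Oozie sharelib in HDFS, with configs passed via the workflow. A way to see what the sharelib already provides (host/port are placeholders, and this assumes your Oozie version supports the shareliblist command):

# List the hive jars the Oozie server will ship with hive actions
oozie admin -oozie http://OOZIE_HOST:11000/oozie -shareliblist hive

So the local client install is only needed where humans run the CLI; the workflow packaging mechanism covers job-specific extras.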

Thanks

hive-action or hive2-action in oozie


Replies: 2

As I understand it, hive-action does not connect to HiveServer2; instead it is itself a normal Hive client and talks to HCatalog directly. This matters a lot more when you have Kerberos security.
Now, if I want it to connect to HiveServer2, I need the new hive2-action.
But hive2-action is not offered by default in HDP 2.2.0.
In fact, I have Oozie 4.1, while https://issues.apache.org/jira/browse/OOZIE-1457 tells me that it was properly implemented in Oozie 4.2.

So are there clear guidelines on how and when to pick hive-action rather than hive2-action?
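For comparison, the distinguishing element once hive2-action is available is the JDBC URL pointing at HiveServer2; a trimmed sketch of the action body (the schema version and placeholders are assumptions, and this is not a complete action definition):

<hive2 xmlns="uri:oozie:hive2-action:0.1">
<jdbc-url>jdbc:hive2://HS2_HOST:10000/default</jdbc-url>
<script>myscript.q</script>
</hive2>

With Kerberos, hive2-action additionally needs a HiveServer2 credential declared in the workflow so the launcher can obtain a delegation token, which is part of why the choice between the two actions matters.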

HDFS disk utilization mechanics


Replies: 0

1) I notice that HDFS disk utilization is not proportionate across nodes having different disk sizes. For example, if nodes 1-10 have 2 GB drives and nodes 11-15 have 3 GB drives, I would expect larger (and therefore proportionate) usage of space on the 3 GB drives, keeping the %usage about the same regardless of disk size. Instead, I observe that usage is uniform across the different sized drives, causing the smaller drives to fill up faster. Is this the default behavior of HDFS? Is there a way to change this, say via Ambari, and how?

2) Also, if there were n disks per node supporting HDFS and I wanted to reduce that to n-1 disks per node, I would expect this could be done dynamically. In a cluster with a replication factor of 3, I would think I should be able to decommission a disk on a node at the Linux level, wait for HDFS to reconcile and make the third copy elsewhere, and then proceed to decommission a disk on the second node, and so on, all while the cluster is live. Is this possible? If yes, is there an easier way to do it via Ambari?
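On (1): by default the namenode places blocks without regard to remaining space (subject only to a minimum free-space check), and within a datanode the default volume-choosing policy is round-robin; there is a shipped alternative, AvailableSpaceVolumeChoosingPolicy, that biases new blocks toward emptier disks, though it balances volumes within a node rather than across nodes. The across-node tool is the balancer. On (2): hot-swapping data directories exists in newer Hadoop 2.x releases. A sketch of both (the port and threshold are placeholders; verify support in your HDP version before relying on either):

# Spread blocks across nodes until utilization is within 10% of the average
hdfs balancer -threshold 10
# After removing a disk from dfs.datanode.data.dir, ask the datanode to
# reload its config without a restart (requires reconfiguration support)
hdfs dfsadmin -reconfig datanode DN_HOST:50020 start

The decommission-one-disk-at-a-time approach also works on a live cluster: stop the datanode, remove the directory from its config, restart, and let replication recover before moving to the next node.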
