Channel: Hortonworks » All Topics

Beeswax/Hive query error


Replies: 8

While trying to create a table in Beeswax I'm getting the following error:
Error occurred executing hive query: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [hue] does not have [USE] privilege on [default]

Please I need some help!
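This error comes from the XaSecure/Ranger Hive authorizer rather than from HDFS. A minimal way to reproduce it outside Hue, assuming HiveServer2 listens on the default localhost:10000:

$ beeline -u "jdbc:hive2://localhost:10000/default" -n hue -e "USE default"
# The same HiveAccessControlException here would confirm it is a policy problem:
# with XaSecure/Ranger authorization in place, the usual fix is a Ranger policy
# granting the hue user access to the default database, not a SQL GRANT.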

Diesher


HDP-2.3.0.0-centos7-rpm.tar.gz is wrong


Replies: 0

HDP-2.3.0.0-centos7-rpm.tar.gz is the same as HDP-2.3.0.0-centos6-rpm.tar.gz
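A byte-level comparison would make the claim concrete; a minimal check, assuming both archives sit in the current directory:

$ md5sum HDP-2.3.0.0-centos6-rpm.tar.gz HDP-2.3.0.0-centos7-rpm.tar.gz
# identical digests would confirm the centos7 tarball is a copy of the centos6 one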

Impossible to run a Pig script from the UI


Replies: 0

When I try to run a Pig script with the Pig view I get this error:
File does not exist: /user/admin/pig/jobs/riskfactorpig_04-08-2015-11-32-12/stdout
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1820)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1791)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1704)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:587)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:365)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2081)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2077)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2075)

Any idea?
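The NameNode is saying the job's stdout file was never written to HDFS, which usually means the view's working directory under /user/admin is missing or not writable. A hedged check, assuming the Pig view runs jobs as the admin user:

$ sudo -u hdfs hdfs dfs -ls /user/admin/pig/jobs    # does the jobs directory even exist?
$ sudo -u hdfs hdfs dfs -mkdir -p /user/admin       # create the user directory if missing
$ sudo -u hdfs hdfs dfs -chown -R admin:hadoop /user/admin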

Running a Spark SQL in Cluster Mode


Replies: 1

We installed the Ambari server with the HDP-2.2.6.0 package. The Spark history server is running, but we are not able to see the master web UI. We open the URL below in a browser and get the message below.
URL: http://<IP_Address>:18080/

1.2.1 History Server
Yarn Application History Server: http://<IP_Address>:8188/
No completed applications found!

Did you specify the correct logging directory? Please verify your setting of spark.history.fs.logDirectory and whether you have the permissions to access it.
It is also possible that your application did not run to completion or did not stop the SparkContext.
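That last message is usually the actual cause: the history server is reachable but finds no completed applications in its event log directory. A hedged way to check, with paths that assume a default HDP Spark layout:

$ grep -E 'spark.eventLog|spark.history.fs.logDirectory' /etc/spark/conf/spark-defaults.conf
$ sudo -u spark hdfs dfs -ls /spark-history    # or whichever directory the config names
# applications only show up here after they finish and call sc.stop()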

Permissions error with beeswax


Replies: 1

I get an exception when trying to perform any operation on any database/table that I have set up in Hive when I access it through the Hue/Beeswax interface. This is the exception that I get.

Exception Type: QueryServerException at /beeswax/table/default/archive
Exception Value: Bad status for request TExecuteStatementReq(confOverlay={}, sessionHandle=TSessionHandle(sessionId=THandleIdentifier(secret=’\x9e\xec=\xbdc\x96Fc\x9a\xa9%Z\xf9?>\x96′, guid=’\x1f\x94DW\xac{Fa\xaatk\xab\xe8.\xc3\t’)), runAsync=False, statement=’USE default’):
TExecuteStatementResp(status=TStatus(errorCode=40000, errorMessage=’Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [hue] does not have [USE] privilege on [default]’, sqlState=’42000′, infoMessages=[‘*org.apache.hive.service.cli.HiveSQLException:Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [hue] does not have [USE] privilege on [default]:17:16′, ‘org.apache.hive.service.cli.operation.Operation:toSQLException:Operation.java:314′, ‘org.apache.hive.service.cli.operation.SQLOperation:prepare:SQLOperation.java:111′, ‘org.apache.hive.service.cli.operation.SQLOperation:runInternal:SQLOperation.java:180′, ‘org.apache.hive.service.cli.operation.Operation:run:Operation.java:256′, ‘org.apache.hive.service.cli.session.HiveSessionImpl:executeStatementInternal:HiveSessionImpl.java:376′, ‘org.apache.hive.service.cli.session.HiveSessionImpl:executeStatement:HiveSessionImpl.java:357′, ‘org.apache.hive.service.cli.CLIService:executeStatement:CLIService.java:257′, ‘org.apache.hive.service.cli.thrift.ThriftCLIService:ExecuteStatement:ThriftCLIService.java:401′, ‘org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement:getResult:TCLIService.java:1313′, ‘org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement:getResult:TCLIService.java:1298′, ‘org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39′, ‘org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39′, ‘org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56′, ‘org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:206′, ‘java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1145′, ‘java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:615′, ‘java.lang.Thread:run:Thread.java:745′, ‘*org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAccessControlException:Permission denied: user [hue] does not have [USE] privilege on [default]:23:7′, ‘com.xasecure.authorization.hive.authorizer.XaSecureHiveAuthorizer:checkPrivileges:XaSecureHiveAuthorizer.java:254′, ‘org.apache.hadoop.hive.ql.Driver:doAuthorizationV2:Driver.java:727′, ‘org.apache.hadoop.hive.ql.Driver:doAuthorization:Driver.java:520′, ‘org.apache.hadoop.hive.ql.Driver:compile:Driver.java:457′, ‘org.apache.hadoop.hive.ql.Driver:compile:Driver.java:305′, ‘org.apache.hadoop.hive.ql.Driver:compileInternal:Driver.java:1069′, ‘org.apache.hadoop.hive.ql.Driver:compileAndRespond:Driver.java:1063′, ‘org.apache.hive.service.cli.operation.SQLOperation:prepare:SQLOperation.java:109′], statusCode=3), operationHandle=None)

However, if I launch Hive as the Hue user over SSH then everything works fine, so I’m not sure what would be different when using it through the UI since it’s still the same user.
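One real difference: the hive shell over SSH talks to the metastore directly and never passes through HiveServer2, while Hue submits through HiveServer2, where the XaSecure/Ranger authorizer runs. Reproducing the Hue path from the shell (assuming HiveServer2 on localhost:10000) should show the same denial:

$ beeline -u "jdbc:hive2://localhost:10000/default" -n hue -e "USE default"
# if this is denied while the plain hive CLI works, the gap is the HiveServer2
# authorization plugin, and the fix is a policy for the hue user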

Authentication Problem with Knox


Replies: 0

Hi,
I’m trying to set up Apache Knox. To authenticate I use LDAP. I installed HDP with Ambari on CentOS. After following the handbook, I try to access HDFS with this command:
curl -i -k -u guest:guest-password "https://localhost:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS"
but I always receive this response:
HTTP/1.1 401 Unauthorized
WWW-Authenticate: BASIC realm="application"
Content-Length: 0
Server: Jetty(8.1.14.v20131031)

I don’t understand how to authenticate the user. The guest user and its password are in users.ldif.
Can someone help me?
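If the gateway is pointed at Knox's bundled demo LDAP, that LDAP must actually be running for the BIND to succeed. A hedged sketch, assuming a standard HDP layout where conf/users.ldif holds the demo accounts:

$ cd /usr/hdp/current/knox-server
$ su -l knox -c "/usr/hdp/current/knox-server/bin/ldap.sh start"   # start the demo LDAP
$ curl -iku guest:guest-password "https://localhost:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS"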
Thanks

Ambari 2.1 LDAP: error code 12


Replies: 4

Hi,
After an upgrade from Ambari 2.0 to 2.1, the LDAP sync is not working any more; I get this error:


# ambari-server sync-ldap --groups groups.txt
Using python /usr/bin/python2.6
Syncing with LDAP...
Enter Ambari Admin login: admin
Enter Ambari Admin password:
Syncing specified users and groups...ERROR: Exiting with exit code 1.
REASON: Caught exception running LDAP sync. [LDAP: error code 12 - Unavailable Critical Extension]; nested exception is javax.naming.OperationNotSupportedException: [LDAP: error code 12 - Unavailable Critical Extension]; remaining name dc=******

More info in “/var/log/ambari-server/ambari-server.log” :

31 Jul 2015 15:54:12,385 INFO [pool-6-thread-1] AmbariLdapDataPopulator:612 - Reloading properties
31 Jul 2015 15:54:12,400 INFO [pool-6-thread-1] LdapTemplate:1262 - The returnObjFlag of supplied SearchControls is not set but a ContextMapper is used - setting flag to true
31 Jul 2015 15:54:12,602 FATAL [pool-6-thread-1] AbstractRequestControlDirContextProcessor:186 - No matching response control found for paged results - looking for 'class javax.naming.ldap.PagedResultsResponseControl
31 Jul 2015 15:54:12,603 ERROR [pool-6-thread-1] LdapSyncEventResourceProvider:429 - Caught exception running LDAP sync.
org.springframework.ldap.OperationNotSupportedException: [LDAP: error code 12 - Unavailable Critical Extension]; nested exception is javax.naming.OperationNotSupportedException: [LDAP: error code 12 - Unavailable Critical Extension]; remaining name dc=******

See below the log in my LDAP server.

This LDAP query works fine:

[...]
[31/Jul/2015:16:53:22 +0200] conn=360563 op=1 msgId=2 - SRCH base="dc=*******" scope=2 filter="(&(objectClass=posixGroup)(cn=group1))" attrs=ALL
[31/Jul/2015:16:53:22 +0200] conn=360563 op=1 msgId=2 - RESULT err=0 tag=101 nentries=1 etime=0.001200
[...]

But this one doesn’t work, which is strange; there are no unusual attributes in the filter:

[...]
[31/Jul/2015:16:53:22 +0200] conn=360564 op=1 msgId=2 - SRCH base="dc=*********" scope=2 filter="(&(objectClass=posixAccount)(|(dn=oozie)(uid=oozie)))", unsupported critical extension
[31/Jul/2015:16:53:22 +0200] conn=360564 op=1 msgId=2 - RESULT err=12 tag=101 nentries=0 etime=0.000280
[...]

Please, could you help me? :) It worked fine in Ambari 2.0; my LDAP server is Oracle Directory Server Enterprise Edition 11g.
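For what it's worth, the FATAL line above shows Ambari 2.1 sending the RFC 2696 paged-results control with its searches, and err=12 ("unsupported critical extension") is the directory refusing that control. A hedged way to reproduce it with OpenLDAP's ldapsearch (host, base DN and bind DN are placeholders):

$ ldapsearch -H ldap://ldap-host:389 -D "cn=admin" -W -b "dc=example,dc=com" \
    -E 'pr=500/noprompt' "(objectClass=posixAccount)"
# err=12 on this query, but not on the same query without -E, would pin the
# problem on the server's paged-results support rather than on Ambari's filter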

Regards,
STAMS

ClassNotFoundException XaSecure


Replies: 0

Hi!
I have a problem with Beeswax (the Hive UI). When I try to run any query I get this error:

Error occurred executing hive query: java.lang.RuntimeException: hive.semantic.analyzer.hook Class not found:com.xasecure.authorization.hive.hooks.XaSecureSemanticAnalyzerHook FAILED: ClassNotFoundException com.xasecure.authorization.hive.hooks.XaSecureSemanticAnalyzerHook

I am following the tutorial on this page:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.5/bk_HDPSecure_Admin/content/ch_XA-conf-hive.html
Any suggestions? Thanks in advance.
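The hook class lives in the XaSecure Hive agent jar, so the first thing to verify is that the jar the guide installs is actually on Hive's classpath wherever this query runs. Hedged checks, assuming the HDP 2.1 paths from that guide:

$ grep -B1 -A2 'hive.semantic.analyzer.hook' /etc/hive/conf/hive-site.xml
$ ls /usr/lib/hive/lib/ | grep -i xasecure    # is the agent jar installed on this node?
# if the property is set but the jar is missing here, either finish the agent
# install on this node or remove the hook property from that hive-site.xml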


Hive Username/Password


Replies: 0

Hi guys, I am attempting to connect to the Hive server and keep getting:
HiveAccessControlException Permission denied: user [hue] does not have [CREATE] privilege on [default/testHiveDriverTable]

Not sure what username/password combination I should use.

I have Hortonworks running on a virtual machine. The code I am using to connect to the Hive server is:
Connection con = DriverManager.getConnection("jdbc:hive2://192.168.0.6:10000/default", "hue", "");
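With the default hive.server2.authentication=NONE, the password is not checked at all, so this looks like an authorization failure (XaSecure/Ranger), not a login failure. The same connection is easy to try from the shell:

$ beeline -u "jdbc:hive2://192.168.0.6:10000/default" -n hue -p ""
# if CREATE is denied for hue, connect as a user that a policy grants CREATE
# on the default database, or add such a policy for hue (hedged suggestion)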

[CRIT] oozie does not start on HDP 2.3 with default database settings


Replies: 2

The Oozie server does not start right after install. I suppose the installer failed to deploy the JARs needed to connect to the default MySQL database. I will try to put the JAR on HDFS manually, but this looks more like a bug.

Message is:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server.py", line 181, in <module>
OozieServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 218, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 459, in restart
self.start(env)
File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server.py", line 57, in start
self.configure(env)
File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server.py", line 51, in configure
oozie(is_server=True)
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py", line 154, in oozie
oozie_server_specific()
File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py", line 224, in oozie_server_specific
download_database_library_if_needed()
File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py", line 313, in download_database_library_if_needed
content = DownloadSource(params.driver_curl_source))
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 157, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 90, in action_create
content = self._get_content()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 127, in _get_content
return content()
File "/usr/lib/python2.6/site-packages/resource_management/core/source.py", line 51, in __call__
return self.get_content()
File "/usr/lib/python2.6/site-packages/resource_management/core/source.py", line 195, in get_content
raise Fail("Failed to download file from {0} due to HTTP error: {1}".format(self.url, str(ex)))
resource_management.core.exceptions.Fail: Failed to download file from http://mymachine:8080/resources//mysql-jdbc-driver.jar due to HTTP error: HTTP Error 404: Not Found
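The 404 means the Ambari server was never given a MySQL driver to serve from its /resources/ endpoint, so registering one on the Ambari server host should unblock the start (paths assume the stock mysql-connector-java package):

$ yum install -y mysql-connector-java
$ ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
# then retry the Oozie start; the agent re-downloads the jar from the server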

[CRIT] Yarn config cannot be accessed in Ambari 2.1.0 using Postgresql


Replies: 4

I installed the latest HDP 2.3 two days ago. Clicking on the YARN service and then on Configs has absolutely no effect, so it is impossible to see, let alone modify, the YARN configuration. All other configs are fine, and all services except Oozie (again Oozie, not starting) are up and running, including YARN with the initial configuration we did not change at install. The cluster does work (we already ran some Pig, Hive and Spark jobs), but we cannot access the YARN config to tune the cluster (for example, we need to put a full hostname in yarn.log.server.url!).
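Until the UI is fixed, the YARN config is still reachable through the Ambari REST API; a hedged sketch using the helper script Ambari ships, where the admin password, cluster name and history-server hostname are placeholders:

$ /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
    get localhost MYCLUSTER yarn-site
$ /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
    set localhost MYCLUSTER yarn-site yarn.log.server.url "http://historyserver.fqdn.example:19888/jobhistory/logs"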

Error: file not found on local file system


Replies: 0

Hi,

I am trying to run a shell action that watches for a file on the edge node and, if found, puts it in an HDFS location. The input directory is passed as an argument to the shell script through job.properties.

I have specified the file system URI as file:///home/.. but it is still not picking up the location. I need help with this issue.
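An Oozie shell action runs on an arbitrary NodeManager, not on the edge node, so a file:///home/... path only resolves if the file happens to exist on whichever node gets the container; that is the usual reason these watchers report "file not found". A minimal sketch of such a script, assuming the directory arrives as $1 from job.properties:

#!/bin/bash
INPUT="${1#file://}"                 # strip the scheme; $1 comes from job.properties (assumption)
if [ -f "$INPUT" ]; then
  hdfs dfs -put -f "$INPUT" /data/landing/    # hypothetical HDFS target directory
else
  echo "file not found on $(hostname): $INPUT" >&2    # shows which node actually ran the action
  exit 1
fi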

Thanks

Will HDFS be packaged along with Hortonworks HDP?

Flumeagent service cannot start


Replies: 2

SOLUTION:
– Navigate to %FLUME_HOME%\bin and locate the flumeagent.xml file. If it does not exist, locate the flumeservice.xml file and rename it to flumeagent.xml.
– Once the file is renamed, go to Windows services and restart the flumeagent service.

Ranger: YARN Plugin not working


Replies: 0

Hi,

I’m having trouble with the Ranger YARN plugin on HDP 2.3. The plugin installs fine and I can create policies with no issues.

However, what I’m finding is that the policies don’t seem to work. I originally had the capacity scheduler set up with custom queues. There were no specific ACLs on those queues, so anyone could submit to and administer them.

After installing the Ranger YARN plugin, I would expect the capacity-scheduler.xml ACLs to be overruled by the Ranger policies. I would also expect that once the policies are in place, no one could submit to the queues unless explicitly allowed by a policy.

My problem is that even with policies in place, any user can still submit to whichever queue they want. Basically, neither the ACLs nor the policies appear to work. If I retrospectively change capacity-scheduler.xml to restrict queues to certain users, other users can still submit to the queue. I’m completely stumped as to how to get the Ranger YARN plugin to work.

Am I doing it wrong? I’ve tried to search for whatever documentation I can find for the YARN plugin, but there is basically nothing on the net.

Happy to provide my config files if required.
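One hedged thing to rule out first: YARN only consults an authorizer (Ranger included) when ACLs are enabled at all, and with yarn.acl.enable left false every submit is allowed regardless of policies. Paths assume the usual HDP client config:

$ grep -A1 'yarn.acl.enable' /etc/hadoop/conf/yarn-site.xml            # must be true
$ grep -A1 'yarn.authorization-provider' /etc/hadoop/conf/yarn-site.xml
# the provider should name the Ranger YARN authorizer class if the plugin is active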

Thanks!


oozie pig job error


Replies: 0

JA002: Unauthorized connection for super-user: oozie from IP 127.0.0.1
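JA002 is HDFS refusing to let the oozie user impersonate the job's submitter. A hedged check of the proxyuser settings the NameNode actually sees:

$ hdfs getconf -confKey hadoop.proxyuser.oozie.hosts    # must cover the Oozie host, or be *
$ hdfs getconf -confKey hadoop.proxyuser.oozie.groups   # must cover the user's group, or be *
# the error names IP 127.0.0.1, so if hosts lists FQDNs only, a request that
# arrives via localhost will not match it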

HBase Thrift Server with Kerberos


Replies: 1

I am trying to use the Hue (v3.8.1) HBase browser. This seems to need to talk to the HBase Thrift server. Ambari 1.7 does not seem to start it by default, so I have to do this:

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.3/bk_installing_manually_book/content/ch06s05.html

which basically says to run /usr/bin/hbase thrift start

Now I have a Kerberized cluster, and the error I get from that is “Running in secure mode, but config doesn’t have a keytab”.
Any idea what I am missing?

As user hbase:
$ kinit -k -t /etc/security/keytabs/hbase.service.keytab hbase/my_machine_fqdn@MYREGION
$ /usr/bin/hbase thrift start
2015-08-05 13:20:37,134 INFO [main] util.VersionInfo: HBase 0.98.4.2.2.0.0-2041-hadoop2
2015-08-05 13:20:37,135 INFO [main] util.VersionInfo: Subversion git://ip-10-0-0-5.ec2.internal/grid/0/jenkins/workspace/HDP-champlain-centos6/bigtop/build/hbase/rpm/BUILD/hbase-0.98.4.2.2.0.0 -r 18e3e58ae6ca5ef5e9c60e3129a1089a8656f91d
2015-08-05 13:20:37,135 INFO [main] util.VersionInfo: Compiled by jenkins on Wed Nov 19 15:10:28 EST 2014
2015-08-05 13:20:37,490 INFO [main] thrift.ThriftServerRunner: Using default thrift server type
2015-08-05 13:20:37,490 INFO [main] thrift.ThriftServerRunner: Using thrift server type threadpool
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.0.0-2041/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.0.0-2041/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
Exception in thread "main" java.io.IOException: Running in secure mode, but config doesn't have a keytab
at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:236)
at org.apache.hadoop.hbase.security.User$SecureHadoopUser.login(User.java:360)
at org.apache.hadoop.hbase.security.User.login(User.java:227)
at org.apache.hadoop.hbase.security.UserProvider.login(UserProvider.java:113)
at org.apache.hadoop.hbase.thrift.ThriftServerRunner.<init>(ThriftServerRunner.java:273)
at org.apache.hadoop.hbase.thrift.ThriftServer.doMain(ThriftServer.java:92)
at org.apache.hadoop.hbase.thrift.ThriftServer.main(ThriftServer.java:231)

(Running HDP 2.2.0 on RHEL 6, but moving to HDP 2.3 soon)
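The Thrift server does its own Kerberos login and ignores the ticket cache from kinit, so it needs keytab settings of its own in hbase-site.xml. A hedged sketch; the two property names are the standard HBase ones, and the values are examples for this cluster:

#   hbase.thrift.keytab.file = /etc/security/keytabs/hbase.service.keytab
#   hbase.thrift.kerberos.principal = hbase/_HOST@MYREGION
$ grep -A1 'hbase.thrift' /etc/hbase/conf/hbase-site.xml   # verify both are present
$ su -l hbase -c "/usr/bin/hbase thrift start"             # then start again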

HDP-friendly way of adding HBase sink for Flume

Yarn Sqoop job failing


Replies: 0

We are trying to run a Sqoop import in a shell script in Oozie via YARN. We are running HDP 2.3.0. The Sqoop job runs fine from the command prompt, but through Oozie it fails because the job.splitmetainfo file is not found. I'm guessing there is a misconfiguration somewhere, since Sqoop does run from the command prompt. Any pointers would be much appreciated.

2015-08-06 05:13:39,851 INFO [Thread-53] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: Setting job diagnostics to Job init failed : org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://xxx/user/root/.staging/job_1438837181977_0008/job.splitmetainfo
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1568)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1432)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1390)
at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:996)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:138)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1312)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1080)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1519)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1515)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1448)
Caused by: java.io.FileNotFoundException: File does not exist: hdfs://ixxx/user/root/.staging/job_1438837181977_0008/job.splitmetainfo
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1309)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
at org.apache.hadoop.mapreduce.split.SplitMetaInfoReader.readSplitMetaInfo(SplitMetaInfoReader.java:51)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1563)
… 17 more
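A hedged first check is whether the staging path the AM complains about ever exists, and which user's staging directory the Oozie-launched job actually writes to; a shell action does not automatically run as root the way the command-prompt test did:

$ sudo -u hdfs hdfs dfs -ls /user/root/.staging              # path from the stack trace above
$ sudo -u hdfs hdfs dfs -ls /user/yarn/.staging 2>/dev/null  # hypothetical alternate user
# a job that stages under one user while the AM reads under another would
# produce exactly this FileNotFoundException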

SLES 11.1 Ambari Registration Fails


Replies: 2

Hi there – I am unable to complete the Ambari “registration” step for a single-node HDP cluster on SLES 11.1.
The SSH key setup is OK; the Ambari server and agent are installed and running; start|stop|status return OK.
I have included the registration log output, the ambari-server log, the ambari-agent log and the bootstrap log below, as all of these seem relevant.

The “ambari-updates” zypper repository refresh doesn’t work (not found), but the other repositories work OK. Since ambari-server and ambari-agent downloaded OK, I presume the failure of this repository connection is not an issue:

cd /etc/zypp/repos.d
rm ambari*
wget http://public-repo-1.hortonworks.com/ambari/suse11/1.x/GA/ambari.repo
--2015-08-02 16:06:45-- http://public-repo-1.hortonworks.com/ambari/suse11/1.x/GA/ambari.repo
Resolving proxy...
Connecting to proxy... connected.
Proxy request sent, awaiting response... 200 OK
Length: 745 [application/octet-stream]
Saving to: `ambari.repo'
2015-08-02 16:06:45 (46.3 MB/s) - `ambari.repo' saved [745/745]

# zypper clean
All repositories have been cleaned up.
# zypper refresh
Repository 'Hortonworks Data Platform Utils Version - HDP-UTILS-1.1.0.16' is up to date.
Repository 'REP6' is up to date.
Repository 'ambari-1.x - Updates' is invalid.
[Updates-ambari-1.x|http://public-repo-1.hortonworks.com/ambari/suse11/1.x/updates] Repository type can't be determined.
Please check if the URIs defined for this repository are pointing to a valid repository.
Skipping repository 'ambari-1.x - Updates' because of the above error.
Repository 'Ambari 1.x' is up to date.
Some of the repositories have not been refreshed because of an error.

The registration log for the session (Chrome or Firefox) is: =================================

Error building the cache:
[|] Repository type can’t be determined.
Verifying Python version compatibility…
Using python /usr/bin/python2.6
Checking for previously running Ambari Agent…
ERROR: ambari-agent already running
Check /var/run/ambari-agent/ambari-agent.pid for PID.
(‘hostname: ok apjhana01.XXX.XXX.corp
ip: ok XX.XX.XXX.XX (…masked)
cpu: ok Intel(R) Xeon(R) CPU X7560 @ 2.27GHz
(…etc etc)
memory: ok 252.279 GB
disks: ok
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 1.1T 642G 375G 64% /
devtmpfs 127G 120K 127G 1% /dev
tmpfs 127G 248K 127G 1% /dev/shm
os: ok Welcome to SUSE Linux Enterprise Server 11 SP1 (x86_64) – Kernel %r (%t).
iptables: ok
Chain INPUT (policy ACCEPT 235M packets, 110G bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 234M packets, 110G bytes)
pkts bytes target prot opt in out source destination
selinux: UNAVAILABLE
yum: UNAVAILABLE
rpm: ok rpm-4.4.2.3-37.16.37
openssl: ok openssl-0.9.8h-30.27.11
curl: ok curl-7.19.0-11.24.25
wget: ok wget-1.11.4-1.15.1
net-snmp: UNAVAILABLE
net-snmp-utils: UNAVAILABLE
ntpd: UNAVAILABLE
ruby: ok ruby-1.8.7.p72-5.24.2
puppet: ok puppet-0.24.8-1.3.5
nagios: UNAVAILABLE
ganglia: UNAVAILABLE
passenger: UNAVAILABLE
hadoop: UNAVAILABLE
yum_repos: UNAVAILABLE
zypper_repos: ok
2 | HDP-UTILS-1.1.0.16 | Hortonworks Data Platform Utils Version – HDP-UTILS-1.1.0.16 | Yes | No
‘, None)
(‘INFO 2015-08-02 14:36:37,342 NetUtil.py:44 – DEBUG:: Connecting to the following url https://apjhana01.XXX.XXX.corp:4080/cert/ca
INFO 2015-08-02 14:36:37,343 NetUtil.py:58 – Failed to connect to https://apjhana01.XXX.XXX.corp:4080/cert/ca due to [Errno 111] Connection refused
INFO 2015-08-02 14:36:37,343 NetUtil.py:77 – Server at https://apjhana01.XXX.XXX.corp:4080 is not reachable, sleeping for 10 seconds…
INFO 2015-08-02 14:36:45,203 main.py:51 – signal received, exiting.
INFO 2015-08-02 14:36:53,390 shell.py:50 – Killing stale processes
INFO 2015-08-02 14:36:53,391 shell.py:58 – Killed stale processes
INFO 2015-08-02 14:36:53,391 main.py:141 – Connecting to the server at: https://apjhana01.XXX.XXX.corp:8440
INFO 2015-08-02 14:36:53,391 NetUtil.py:68 – DEBUG: Trying to connect to the server at https://apjhana01.sin.XXX.corp:8440
INFO 2015-08-02 14:36:53,392 NetUtil.py:44 – DEBUG:: Connecting to the following url https://apjhana01.sin.XXX.corp:8440/cert/ca
INFO 2015-08-02 14:41:27,032 NetUtil.py:58 – Failed to connect to https://apjhana01.XXX.XXX.corp:8440/cert/ca due to [Errno 104] Connection reset by peer
INFO 2015-08-02 14:41:27,032 NetUtil.py:77 – Server at https://apjhana01.sin.XXX.corp:8440 is not reachable, sleeping for 10 seconds…
INFO 2015-08-02 14:41:37,043 NetUtil.py:44 – DEBUG:: Connecting to the following url https://apjhana01.sin.XXX.corp:8440/cert/ca
INFO 2015-08-02 14:41:37,044 NetUtil.py:58 – Failed to connect to https://apjhana01.sin.XXX.corp:8440/cert/ca due to [Errno 111] Connection refused
INFO 2015-08-02 14:41:37,044 NetUtil.py:77 – Server at https://apjhana01.sin.XXX.corp:8440 is not reachable, sleeping for 10 seconds…
INFO 2015-08-02 14:41:43,917 main.py:51 – signal received, exiting.
INFO 2015-08-02 14:42:46,560 shell.py:50 – Killing stale processes
INFO 2015-08-02 14:42:46,566 shell.py:58 – Killed stale processes
INFO 2015-08-02 14:42:46,566 main.py:141 – Connecting to the server at: https://apjhana01.sin.XXX.corp:8440
INFO 2015-08-02 14:42:46,566 NetUtil.py:68 – DEBUG: Trying to connect to the server at https://apjhana01.sin.XXX.corp:8440
INFO 2015-08-02 14:42:46,567 NetUtil.py:44 – DEBUG:: Connecting to the following url https://apjhana01.sin.XXX.corp:8440/cert/ca
‘, None)

STDERR
Connection to apjhana01.sin.XXX.corp closed.
Registering with the server…
Registration with the server failed.
OK
Licensed under the Apache License, Version 2.0.
See third-party tools/resources that Ambari uses and their respective authors

ambari-server.log =======================================

19:59:45,262 INFO Configuration:288 – Web App DIR test /usr/lib/ambari-server/web
19:59:45,270 INFO CertificateManager:65 – Initialization of root certificate
19:59:45,270 INFO CertificateManager:69 – Certificate exists:true
19:59:45,364 INFO AmbariServer:290 – ********* Initializing Meta Info **********
19:59:45,885 INFO AmbariServer:300 – ********* Initializing Clusters **********
19:59:45,886 INFO AmbariServer:304 – ********* Current Clusters State *********
19:59:45,886 INFO AmbariServer:305 – Clusters=[ ]
19:59:45,886 INFO AmbariServer:307 – ********* Initializing ActionManager **********
19:59:45,886 INFO AmbariServer:309 – ********* Initializing Controller **********
19:59:45,890 INFO AmbariManagementControllerImpl:124 – Initializing the AmbariManagementControllerImpl
19:59:45,895 INFO Server:266 – jetty-7.6.7.v20120910
19:59:45,970 INFO ContextHandler:744 – started o.e.j.s.ServletContextHandler{/,file:/usr/lib/ambari-server/web/}
19:59:48,613 INFO AbstractConnector:338 – Started SelectChannelConnector@0.0.0.0:8080
19:59:48,614 INFO Server:266 – jetty-7.6.7.v20120910
19:59:48,616 INFO ContextHandler:744 – started o.e.j.s.ServletContextHandler{/,null}
19:59:49,673 INFO SslContextFactory:300 – Enabled Protocols [SSLv2Hello, SSLv3, TLSv1] of [SSLv2Hello, SSLv3, TLSv1]
19:59:49,681 INFO AbstractConnector:338 – Started SslSelectChannelConnector@0.0.0.0:8440
19:59:49,751 INFO SslContextFactory:300 – Enabled Protocols [SSLv2Hello, SSLv3, TLSv1] of [SSLv2Hello, SSLv3, TLSv1]
19:59:49,757 WARN AbstractConnector:335 – insufficient threads configured for SslSelectChannelConnector@0.0.0.0:8441
19:59:49,758 INFO AbstractConnector:338 – Started SslSelectChannelConnector@0.0.0.0:8441
19:59:49,758 INFO AmbariServer:324 – ********* Started Server **********
19:59:49,759 INFO ActionManager:61 – Starting scheduler thread
19:59:49,759 INFO AmbariServer:327 – ********* Started ActionManager **********
20:00:25,613 INFO AmbariLocalUserDetailsService:62 – Loading user by name: admin
20:00:26,633 INFO ClusterControllerImpl:92 – Using resource provider org.apache.ambari.server.controller.internal.UserResourceProvider for request type User
20:00:26,984 INFO PersistKeyValueService:82 – Looking for keyName CLUSTER_CURRENT_STATUS
20:03:33,584 INFO BootStrapImpl:97 – BootStrapping hosts apjhana01.sin.XXX.corp:
20:03:33,591 INFO BSRunner:166 – Host= apjhana01.sin.XXX.corp bs=/usr/lib/python2.6/site-packages/ambari_server/bootstrap.py requestDir=/var/run/ambari-server/bootstrap/1 keyfile=/var/run/ambari-server/bootstrap/1/sshKey server=apjhana01.sin.XXX.corp
20:03:33,607 INFO BSRunner:196 – Kicking off the scheduler for polling on logs in /var/run/ambari-server/bootstrap/1
20:03:33,608 INFO BSRunner:200 – Bootstrap output, log=/var/run/ambari-server/bootstrap/1/bootstrap.err /var/run/ambari-server/bootstrap/1/bootstrap.out
20:03:33,610 INFO BSHostStatusCollector:55 – Request directory /var/run/ambari-server/bootstrap/1
20:03:33,610 INFO BSHostStatusCollector:62 – HostList for polling on [apjhana01.sin.XXX.corp]
20:03:33,786 INFO BSRunner:212 – Script log Mesg

Ambari-agent.log ==============================

INFO 2015-08-02 14:36:53,391 main.py:141 – Connecting to the server at: https://apjhana01.sin.XXX.corp:8440
INFO 2015-08-02 14:36:53,391 NetUtil.py:68 – DEBUG: Trying to connect to the server at https://apjhana01.sin.XXX.corp:8440
INFO 2015-08-02 14:36:53,392 NetUtil.py:44 – DEBUG:: Connecting to the following url https://apjhana01.sin.XXX.corp:8440/cert/ca
INFO 2015-08-02 14:41:27,032 NetUtil.py:58 – Failed to connect to https://apjhana01.sin.XXX.corp:8440/cert/ca due to [Errno 104] Connection reset by peer
INFO 2015-08-02 14:41:27,032 NetUtil.py:77 – Server at https://apjhana01.sin.XXX.corp:8440 is not reachable, sleeping for 10 seconds…

/var/run/ambari-server/bootstrap/apjhana01.sin.XXX.corp.log ==================

Verifying Python version compatibility…
Using python /usr/bin/python2.6
Checking for previously running Ambari Agent…
tput: No value for $TERM and no -T specified
ERROR: ambari-agent already running
tput: No value for $TERM and no -T specified
Check /var/run/ambari-agent/ambari-agent.pid for PID.
(‘hostname: ok apjhana01.sin.XXX.corp\nip: ok 10.32.241.20\ncpu: ok Intel(R) Xeon(R) CPU X7560 @ 2.27GHz\nIntel(R) Xeon(R) CPU X7560 @ 2.27GHz\nIntel(R) Xeon(R) CPU X7560 @ 2.27GHz\nIntel(R) Xeon(R) CPU (..etc etc) \nmemory: ok 252.279 GB\ndisks: ok\n Filesystem Size Used Avail Use% Mounted on\n/dev/sda2 1.1T 642G 375G 64% /\ndevtmpfs 127G 120K 127G 1% /dev\ntmpfs 127G 248K 127G 1% /dev/shm\nos: ok Welcome to SUSE Linux Enterprise Server 11 SP1 (x86_64) – Kernel %r (%t).\niptables: ok\n Chain INPUT (policy ACCEPT 240M packets, 112G bytes)\n pkts bytes target prot opt in out source destination \n\nChain FORWARD (policy ACCEPT 0 packets, 0 bytes)\n pkts bytes target prot opt in out source destination \n\nChain OUTPUT (policy ACCEPT 240M packets, 112G bytes)\n pkts bytes target prot opt in out source destination\nselinux: UNAVAILABLE\nyum: UNAVAILABLE\nrpm: ok rpm-4.4.2.3-37.16.37\nopenssl: ok openssl-0.9.8h-30.27.11\ncurl: ok curl-7.19.0-11.24.25\nwget: ok wget-1.11.4-1.15.1\nnet-snmp: UNAVAILABLE\nnet-snmp-utils: UNAVAILABLE\nntpd: UNAVAILABLE\nruby: ok ruby-1.8.7.p72-5.24.2\npuppet: ok puppet-0.24.8-1.3.5\nnagios: UNAVAILABLE\nganglia: UNAVAILABLE\npassenger: UNAVAILABLE\nhadoop: UNAVAILABLE\nyum_repos: UNAVAILABLE\nzypper_repos: ok\n 2 | HDP-UTILS-1.1.0.16 | Hortonworks Data Platform Utils Version – HDP-UTILS-1.1.0.16 | Yes | No\n’, None)
(‘INFO 2015-08-02 15:10:10,683 NetUtil.py:68 – DEBUG: Trying to connect to the server at https://10.32.241.20:8440\nINFO 2015-08-02 15:10:10,683 NetUtil.py:44 – DEBUG:: Connecting to the following url https://10.32.241.20:8440/cert/ca\nINFO 2015-08-02 15:16:19,429 main.py:51 – signal received, exiting.\nINFO 2015-08-02 15:16:29,784 shell.py:50 – Killing stale processes\nINFO 2015-08-02 15:16:29,784 shell.py:58 – Killed stale processes\nINFO 2015-08-02 15:16:29,784 main.py:141 – Connecting to the server at: https://10.32.241.20:8440\nINFO 2015-08-02 15:16:29,785 NetUtil.py:68 – DEBUG: Trying to connect to the server at https://10.32.241.20:8440\nINFO 2015-08-02 15:16:29,785 NetUtil.py:44 – DEBUG:: Connecting to the following url https://10.32.241.20:8440/cert/ca\nINFO 2015-08-02 15:16:59,875 main.py:51 – signal received, exiting.\nINFO 2015-08-02 15:17:08,950 shell.py:50 – Killing stale processes\nINFO 2015-08-02 15:17:08,950 shell.py:58 – Killed stale processes\nINFO 2015-08-02 15:17:08,950 main.py:141 – Connecting to the server at: https://10.32.241.20:8440\nINFO 2015-08-02 15:17:08,951 NetUtil.py:68 – DEBUG: Trying to connect to the server at https://10.32.241.20:8440\nINFO 2015-08-02 15:17:08,951 NetUtil.py:44 – DEBUG:: Connecting to the following url https://10.32.241.20:8440/cert/ca\nINFO 2015-08-02 15:38:12,679 NetUtil.py:58 – Failed to connect to https://10.32.241.20:8440/cert/ca due to [Errno 104] Connection reset by peer\nINFO 2015-08-02 15:38:12,679 NetUtil.py:77 – Server at https://10.32.241.20:8440 is not reachable, sleeping for 10 seconds…\nINFO 2015-08-02 15:38:22,688 NetUtil.py:44 – DEBUG:: Connecting to the following url https://10.32.241.20:8440/cert/ca\nINFO 2015-08-02 15:38:22,689 NetUtil.py:58 – Failed to connect to https://10.32.241.20:8440/cert/ca due to [Errno 111] Connection refused\nINFO 2015-08-02 15:38:22,689 NetUtil.py:77 – Server at https://10.32.241.20:8440 is not reachable, sleeping for 10 seconds…\nINFO 2015-08-02 15:38:32,699 NetUtil.py:44 – DEBUG:: Connecting to the following url https://10.32.241.20:8440/cert/ca\n’, None)

STDERR
tcgetattr: Invalid argument
Connection to apjhana01.sin.XXX.corp closed.
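Given that the agent alternates between “connection reset by peer” and “connection refused” on port 8440, and the wget transcript above shows a proxy in the path, two hedged checks on the node itself:

$ netstat -tlnp | grep -E ':8440|:8441'        # is ambari-server really listening on its agent ports?
$ echo | openssl s_client -connect apjhana01.sin.XXX.corp:8440 2>&1 | head -5
$ export no_proxy=apjhana01.sin.XXX.corp       # keep agent traffic off the proxy (assumption)
# a proxy that intercepts the 8440 TLS handshake could produce exactly this
# reset/refused pattern even though plain HTTP via curl and wget works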
