Channel: Hortonworks » All Topics

Ambari 2.1 on CentOS 7 fails to create a cluster


Replies: 1

I am trying to set up a cluster on my localhost. The operation fails with the registration log below. My confusion is that, at the bottom of the log, it refers to the IP address 198.105.244.20:8080 for my Ambari server. I have no idea where that IP is coming from, because I am only trying to use my internal network.

Does anyone have any ideas to point me in the right direction?

Many thanks!

==========================
Creating target directory…
==========================

Command start time 2015-07-28 00:37:10

Connection to localhost closed.
SSH command execution finished
host=localhost, exitcode=0
Command end time 2015-07-28 00:37:10

==========================
Copying common functions script…
==========================

Command start time 2015-07-28 00:37:10

scp /usr/lib/python2.6/site-packages/ambari_commons
host=localhost, exitcode=0
Command end time 2015-07-28 00:37:11

==========================
Copying OS type check script…
==========================

Command start time 2015-07-28 00:37:11

scp /usr/lib/python2.6/site-packages/ambari_server/os_check_type.py
host=localhost, exitcode=0
Command end time 2015-07-28 00:37:11

==========================
Running OS type check…
==========================

Command start time 2015-07-28 00:37:11
Cluster primary/cluster OS family is redhat7 and local/current OS family is redhat7

Connection to localhost closed.
SSH command execution finished
host=localhost, exitcode=0
Command end time 2015-07-28 00:37:11

==========================
Checking ‘sudo’ package on remote host…
==========================

Command start time 2015-07-28 00:37:11
sudo-1.8.6p7-13.el7.x86_64

Connection to localhost closed.
SSH command execution finished
host=localhost, exitcode=0
Command end time 2015-07-28 00:37:13

==========================
Copying repo file to ‘tmp’ folder…
==========================

Command start time 2015-07-28 00:37:13

scp /etc/yum.repos.d/ambari.repo
host=localhost, exitcode=0
Command end time 2015-07-28 00:37:13

==========================
Moving file to repo dir…
==========================

Command start time 2015-07-28 00:37:13

Connection to localhost closed.
SSH command execution finished
host=localhost, exitcode=0
Command end time 2015-07-28 00:37:13

==========================
Changing permissions for ambari.repo…
==========================

Command start time 2015-07-28 00:37:13

Connection to localhost closed.
SSH command execution finished
host=localhost, exitcode=0
Command end time 2015-07-28 00:37:13

==========================
Copying setup script file…
==========================

Command start time 2015-07-28 00:37:13

scp /usr/lib/python2.6/site-packages/ambari_server/setupAgent.py
host=localhost, exitcode=0
Command end time 2015-07-28 00:37:14

==========================
Running setup agent script…
==========================

Command start time 2015-07-28 00:37:14
Host registration aborted. Ambari Agent host cannot reach Ambari Server '198.105.244.20:8080'. Please check the network connectivity between the Ambari Agent host and the Ambari Server

Connection to localhost closed.
SSH command execution finished
host=localhost, exitcode=1
Command end time 2015-07-28 00:39:21

ERROR: Bootstrap of host localhost fails because previous action finished with non-zero exit code (1)
ERROR MESSAGE: Connection to localhost closed.

STDOUT: Host registration aborted. Ambari Agent host cannot reach Ambari Server '198.105.244.20:8080'. Please check the network connectivity between the Ambari Agent host and the Ambari Server

Connection to localhost closed.

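The address in the failure message is not an internal one, so one quick check (offered purely as a hedged sketch, not a confirmed fix) is whether the Ambari server hostname configured for the agent resolves to that address through an external DNS service instead of /etc/hosts. The hostname below is a placeholder; use the value of the hostname entry in /etc/ambari-agent/conf/ambari-agent.ini.

import socket

# Placeholder: substitute the Ambari server hostname the agent is configured with.
ambari_server_host = "ambari.example.com"

print("Agent FQDN: %s" % socket.getfqdn())
print("Ambari server %s resolves to %s"
      % (ambari_server_host, socket.gethostbyname(ambari_server_host)))

# If the second line prints 198.105.244.20, the name is being answered by an
# outside resolver (some ISPs redirect unknown names to addresses like this);
# adding the real internal address to /etc/hosts on every host, or fixing DNS,
# should keep registration on the internal network.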


Freeing up "Non DFS" space


Replies: 0

I am trying to load our data into Hadoop HDFS. After some test runs, when I check the Hadoop web UI, I realise that a lot of space is consumed under the heading “Non-DFS used”. In fact, “Non-DFS used” is more than “DFS used”, so almost half the cluster is consumed by non-DFS data.

Even after reformatting the namenode and restarting, this “Non-DFS” space is not freed up.

I am also not able to find the directory under which this “Non-DFS” data is stored, so that I can manually delete those files.

I have read many threads online from people stuck at exactly the same issue, but none got a definitive answer.

Is it so difficult to empty this “Non-DFS” space? Or should I not be deleting it? How can I free up this space?
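For what it is worth, “Non-DFS used” is simply the space on the DataNode volumes occupied by anything other than HDFS block files, which is why reformatting the NameNode does not release it. The sketch below only illustrates the idea; the data directory path is a made-up example standing in for an entry from dfs.datanode.data.dir.

import os

data_dir = "/hadoop/hdfs/data"   # hypothetical dfs.datanode.data.dir entry

def tree_size(path):
    # Roughly what "du -s" would report for the DataNode's block storage.
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass
    return total

st = os.statvfs(data_dir)
capacity = st.f_blocks * st.f_frsize    # size of the filesystem holding the volume
remaining = st.f_bavail * st.f_frsize   # space still free on that filesystem
dfs_used = tree_size(data_dir)          # bytes actually held by HDFS blocks

# Approximation of the web UI figure: whatever sits on the same mount but
# outside the HDFS data directory (logs, YARN local dirs, OS files, ...).
non_dfs_used = capacity - remaining - dfs_used
print("Non-DFS used on this volume: %.1f GB" % (non_dfs_used / 1e9))

Freeing the space therefore means deleting or relocating the non-HDFS files on those mounts (or giving dfs.datanode.data.dir a dedicated partition), not deleting anything inside HDFS.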

HDP for Azure – Sandbox root login


Replies: 4

Hi – ‘hadoop’ as the password for the root login on HDP on Azure (shell access) doesn’t seem to work. Can you please help me with the shell root password for Azure?

PySpark support for external machine learning libraries


Replies: 2

I am working on a machine learning use case for which I need to find the classification probabilities. Since Spark MLlib has no facility for providing probabilities, I plan to call external machine learning libraries like scikit-learn, sofia-ml and xgboost from the Spark Python API, and would like to know if this approach is possible, considering Spark works on RDDs whereas the aforementioned libraries work with data in different formats like arrays, matrices, dataframes, etc.
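For what it is worth, this pattern is generally workable: a fitted scikit-learn model can be broadcast to the executors and applied with mapPartitions, converting each partition of the RDD into a NumPy array before scoring. The sketch below is only an illustration under stated assumptions: sc is the SparkContext provided by the PySpark shell, and the two-feature toy data is made up.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Fit a toy model on the driver; in practice it could be trained on a sample
# collected from the cluster.
X_train = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y_train = np.array([0, 1, 1, 0])
model = LogisticRegression().fit(X_train, y_train)

# Ship the fitted model to the executors once.
bc_model = sc.broadcast(model)

def score_partition(rows):
    # Materialise one partition as a NumPy array so scikit-learn can score it.
    batch = np.array(list(rows))
    if batch.size == 0:
        return iter([])
    # predict_proba returns the per-class probabilities MLlib does not expose here.
    return iter(bc_model.value.predict_proba(batch).tolist())

features = sc.parallelize([[0.2, 0.9], [0.8, 0.1], [0.5, 0.5]], 2)
print(features.mapPartitions(score_partition).collect())

The main constraints are that scikit-learn and NumPy must be installed on every worker node, and that each partition has to fit in executor memory once converted to an array.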

Error when running a Pig job from Java code in Eclipse


Replies: 3

When I run a Pig query from a Java client, I get the error below.
I am using the HDP 2.3 VM on my machine. It would be very helpful if you could help me resolve this issue.

at com.redhat.aml.pig.GenerateCustomerProfile.main(GenerateCustomerProfile.java:17)
>> Caused by: org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4

My pom.xml is given below:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.rhc.aml</groupId>
<artifactId>aml-pig-profile</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>aml-customerProfile</name>

<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<hadoop.version>2.7.0</hadoop.version>
<shade.version>2.4</shade.version>
<jar.plugin.version>2.4</jar.plugin.version>
</properties>

<dependencies>
<dependency>
<groupId>org.apache.pig</groupId>
<artifactId>pig</artifactId>
<version>0.15.0</version>
</dependency>

<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-core</artifactId>
<version>1.2.1</version>
</dependency>

<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs</artifactId>
<version>2.7.0</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-yarn-api</artifactId>
<version>2.7.1</version>
</dependency>
</dependencies>
</project>

REST API to Start Services


Replies: 1

Hi,

I am exploring the Ambari REST API to start/stop services and in the process I tried starting FALCON remotely using the following command:

curl -u admin:admin -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Start FALCON via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' http://<ambari_server_ip>:8080/api/v1/clusters/bda_master/services/FALCON

The command did not succeed and encountered the following error:

HTTP/1.0 504 Gateway Timeout
Server: Zscaler/5.0
Content-Type: text/html
Connection: close

I can ssh or scp into the server easily, and port 8080 is open, as I can remotely browse the Ambari page. Can someone help me resolve this issue?

Regards,
Gaurav
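One detail worth noting in the response is the Server: Zscaler header, which suggests the request was answered by a web proxy on the path rather than by Ambari itself. Purely as a hedged sketch (not a confirmed resolution), the same call can be retried while ignoring any proxy environment settings, here with Python's requests library; the host placeholder, cluster name and credentials are the ones from the post.

import json
import requests

url = ("http://<ambari_server_ip>:8080"
       "/api/v1/clusters/bda_master/services/FALCON")
payload = {
    "RequestInfo": {"context": "Start FALCON via REST"},
    "Body": {"ServiceInfo": {"state": "STARTED"}},
}

session = requests.Session()
session.trust_env = False   # ignore http_proxy/https_proxy so the call goes straight to Ambari
session.auth = ("admin", "admin")

resp = session.put(url, headers={"X-Requested-By": "ambari"},
                   data=json.dumps(payload), timeout=30)
print("%s %s" % (resp.status_code, resp.text))

# A 202 Accepted with a request id means Ambari took the start request; a
# proxy-branded error again would point at the network path, not the API call.

If the direct call works, the original curl should too once it bypasses the proxy for the Ambari host.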

Internet Access on HyperV


Replies: 0

I’ve downloaded Sandbox 2.3 (Windows HyperV) and followed the install notes.
As the box boots it reports “eth0 does not seem to be present”. The box does not have internet connectivity, which makes it hard to follow the tutorials.
Is there an additional step required to set the sandbox up to have internet access?

HDP 2.3 Sandbox root password


Replies: 0

I am running the HDP Sandbox 2.3 on a private VMWare vCloud.
I cannot connect to the console with user/pwd root/hadoop.
Neither from the vCloud console, nor from Windows 7 via putty.exe, nor from Linux via SSH on port 22 (port 2222 does not work for me).

The answer is always “Permission denied, please try again.”

Thanks for the help,
Andreas


Issue trying to add Ambari View


Replies: 2

Hi,
I am working with Ambari 1.7.0 on the IBM IOP distribution. I am trying to install the HDFS Files view. I am running into this error:
500 HdfsApi connection failed. Check “webhdfs.url” property

Stack trace:
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.web.WebHdfsFileSystem not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1720)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2415)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
at org.apache.ambari.view.filebrowser.HdfsApi$1.run(HdfsApi.java:67)
at org.apache.ambari.view.filebrowser.HdfsApi$1.run(HdfsApi.java:65)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.ambari.view.filebrowser.HdfsApi.<init>(HdfsApi.java:65)
at org.apache.ambari.view.filebrowser.HdfsService.getApi(HdfsService.java:76)
at org.apache.ambari.view.filebrowser.FileOperationService.listdir(FileOperationService.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)

I found exactly the same question asked on the Ambari mailing list:
http://mail-archives.apache.org/mod_mbox/ambari-user/201504.mbox/%3CCAFGA2c3KD8stSg5grVn9ZbZVoFS-RR6zwK8HWZsf+4YsxUL_Xw@mail.gmail.com%3E

However, that one has not been answered either.
Could anybody please suggest a solution, or at least elaborate on the issue?

Thanks
Vinayak
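Separately from the ClassNotFoundException itself, the error text asks to check the “webhdfs.url” property, so (as a hedged illustration only, not the thread's resolution) it can help to confirm that the URL given to the view really is a live WebHDFS endpoint. The NameNode address and user below are placeholders; LISTSTATUS is a standard WebHDFS REST operation.

import json
import urllib2

# Placeholder NameNode HTTP address; webhdfs.url would then be
# webhdfs://namenode.example.com:50070
namenode_http = "http://namenode.example.com:50070"
url = namenode_http + "/webhdfs/v1/?op=LISTSTATUS&user.name=ambari-qa"

listing = json.load(urllib2.urlopen(url, timeout=10))
for entry in listing["FileStatuses"]["FileStatus"]:
    print("%s\t%s" % (entry["pathSuffix"], entry["type"]))

# If this listing works, the URL itself is reachable and the remaining problem
# is the WebHdfsFileSystem class missing from the view's classpath.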

ODBC drivers for HDP 2.3 on Windows 7 and Mac OS


Replies: 1

I cannot seem to find them as of July 23, 2015. Can someone send me a direct link?

Error when installing packages for HDP 2.3


Replies: 0

When trying to install the packages for HDP 2.3, I am getting a timeout after 900 seconds, with the error “Package Manager failed to install packages. Error: (4, ‘Interrupted system call’)”. This is occurring on all nodes of my cluster. I am currently running HDP 2.2.4.2-2 on CentOS 6. I have posted the error below. Are there any steps that I could take to try to resolve this?

2015-07-29 14:57:12,736 – Caught signal 15, will handle it gracefully. Compute the actual version if possible before exiting.
2015-07-29 14:57:12,794 – Package Manager failed to install packages. Error: (4, ‘Interrupted system call’)
Traceback (most recent call last):
File “/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py”, line 233, in install_packages
skip_repos=[self.REPO_FILE_NAME_PREFIX + “*”] if OSCheck.is_redhat_family() else [])
File “/usr/lib/python2.6/site-packages/resource_management/core/base.py”, line 157, in __init__
self.env.run()
File “/usr/lib/python2.6/site-packages/resource_management/core/environment.py”, line 152, in run
self.run_action(resource, action)
File “/usr/lib/python2.6/site-packages/resource_management/core/environment.py”, line 118, in run_action
provider_action()
File “/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py”, line 45, in action_install
self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File “/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py”, line 49, in install_package
shell.checked_call(cmd, sudo=True, logoutput=self.get_logoutput())
File “/usr/lib/python2.6/site-packages/resource_management/core/shell.py”, line 70, in inner
result = function(command, **kwargs)
File “/usr/lib/python2.6/site-packages/resource_management/core/shell.py”, line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File “/usr/lib/python2.6/site-packages/resource_management/core/shell.py”, line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File “/usr/lib/python2.6/site-packages/resource_management/core/shell.py”, line 240, in _call
ready, _, _ = select.select(read_set, [], [], 1)
error: (4, ‘Interrupted system call’)
Traceback (most recent call last):
File “/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py”, line 312, in <module>
InstallPackages().execute()
File “/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py”, line 218, in execute
method(env)
File “/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py”, line 162, in actionexecute
raise Fail(“Failed to distribute repositories/install packages”)
resource_management.core.exceptions.Fail: Failed to distribute repositories/install packages

Python script has been killed due to timeout after waiting 900 secs

Ambari checks/problems – hung up


Replies: 0

After checking the hosts during install, the screen never proceeds. It’s stuck at “Please wait while the hosts are being checked for potential problems…”

Anybody see this?

Failing to add the HBASE service to the cluster via Ambari


Replies: 1

I am trying to add the HBASE service through the Ambari Add Service wizard, but it is failing to install both the HBase master and slave. It gives the following error.

2015-07-28 15:09:33,275 – Error while executing command ‘any':
Traceback (most recent call last):
File “/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py”, line 214, in execute
method(env)
File “/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py”, line 30, in hook
setup_users()
File “/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py”, line 85, in setup_users
cd_access=”a”,
File “/usr/lib/python2.6/site-packages/resource_management/core/base.py”, line 148, in __init__
self.env.run()
File “/usr/lib/python2.6/site-packages/resource_management/core/environment.py”, line 152, in run
self.run_action(resource, action)
File “/usr/lib/python2.6/site-packages/resource_management/core/environment.py”, line 118, in run_action
provider_action()
File “/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py”, line 165, in action_create
sudo.makedirs(path, self.resource.mode or 0755)
File “/usr/lib/python2.6/site-packages/resource_management/core/sudo.py”, line 43, in makedirs
shell.checked_call([“mkdir”, “-p”, path], sudo=True)
File “/usr/lib/python2.6/site-packages/resource_management/core/shell.py”, line 70, in inner
return function(command, **kwargs)
File “/usr/lib/python2.6/site-packages/resource_management/core/shell.py”, line 82, in checked_call
return _call(command, logoutput, True, cwd, env, preexec_fn, user, wait_for_finish, timeout, path, sudo, on_new_line)
File “/usr/lib/python2.6/site-packages/resource_management/core/shell.py”, line 199, in _call
raise Fail(err_msg)
Fail: Execution of 'mkdir -p /etc/resolv.conf/hadoop/hbase' returned 1. mkdir: cannot create directory `/etc/resolv.conf': Not a directory
Error: Error: Unable to run the custom hook script ['/usr/bin/python2.6', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-1156.json', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-1156.json', 'INFO', '/var/lib/ambari-agent/data/tmp']

No Data Available after Table Creation.

HiveServer2 select Permission Error on Write


Replies: 0

Hi everyone,

After upgrading from HDP 2.1 to HDP 2.2, HiveServer2 queries run through beeline fail with an error.
HDP is in secure mode.

Sample query:
beeline -u"jdbc:hive2://host.example.com:10000/dbname;principal=hive/host.example.com@EXAMPLE.COM" -e "select * from testtable limit 10;"

result:
Error: Error while compiling statement: FAILED: HiveException java.security.AccessControlException: Permission denied: user=rboyko, access=WRITE, inode="/apps/hive/warehouse/dbname.db/testtable":dbname:dbname:drwxr-x---
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:185)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6812)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6794)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6719)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAccess(FSNamesystem.java:9546)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkAccess(NameNodeRpcServer.java:1637)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.checkAccess(ClientNamenodeProtocolServerSideTranslatorPB.java:1433)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033) (state=42000,code=40000)

But when I try this select through the “hive -e” command, it works normally.

Main hiveserver2 and metastore configurations:

hive.server2.allow.user.substitution=true
hive.server2.authentication=KERBEROS
hive.server2.authentication.kerberos.keytab=/etc/security/keytabs/hive.service.keytab
hive.server2.authentication.kerberos.principal=hive/_HOST@EXAMPLE.COM
hive.server2.authentication.spnego.keytab=/etc/security/keytabs/spnego.service.keytab
hive.server2.authentication.spnego.principal=HTTP/_HOST@EXAMPLE.COM
hive.server2.logging.operation.enabled=true
hive.server2.logging.operation.log.location=${system:java.io.tmpdir}/${system:user.name}/operation_logs

hive.metastore.kerberos.keytab.file=/etc/security/keytabs/hive.service.keytab
hive.metastore.kerberos.principal=hive/_HOST@EXAMPLE.COM
hive.metastore.pre.event.listeners=org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener
hive.metastore.warehouse.dir=/apps/hive/warehouse
hive.security.metastore.authenticator.manager=org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator
hive.security.metastore.authorization.auth.reads=true

I can’t understand why HiveServer2 is trying to write into the table directory when I perform a simple select.

Thanks for answers!


Ambari 2.1 error loading web UI


Replies: 1

Dear all,

I’ve upgraded my Ambari 1.7 to 2.1 with success.
However, the web UI has started to give me a loading error (shown below).
It appears to happen when I set the parameter dfs.http.policy to HTTPS_ONLY. With HTTP_AND_HTTPS or HTTP_ONLY I don’t have any error.
The only solution I’ve found to reload the Ambari web UI is to reset ambari-server, which is not a long-term solution, or even a short-term one.

The HTTP log indicates a JMX error.
I also noticed that Ambari didn’t take the HTTPS modification into account: when I want to start the namenode, I see that the web UI tries to:

Connection failed to http://***********:50090 (Execution of 'curl -k --negotiate -u : -b /var/lib/ambari-agent/data/tmp/cookies/275cbc46-ffae-4524-bc29-6896c0b565e5 -c /var/lib/ambari-agent/data/tmp/cookies/275cbc46-ffae-4524-bc29-6896c0b565e5 -w '%{http_code}' http://******************:50090 --connect-timeout 5 --max-time 7 -o /dev/null' returned 7. curl: (7) couldn't connect to host
000)
It should instead curl port 50470 with the HTTPS protocol.

Thank you in advance for your feedback.

ERROR From the Web-ui:
500 status code received on GET method for API: /api/v1/clusters/APA_CLUSTER_1_NODE/components/?ServiceComponentInfo/component_name=FLUME_HANDLER|ServiceComponentInfo/component_name=APP_TIMELINE_SERVER|ServiceComponentInfo/category=MASTER&fields=ServiceComponentInfo/service_name,host_components/HostRoles/host_name,host_components/HostRoles/state,host_components/HostRoles/maintenance_state,host_components/HostRoles/stale_configs,host_components/HostRoles/ha_state,host_components/HostRoles/desired_admin_state,host_components/metrics/jvm/memHeapUsedM,host_components/metrics/jvm/HeapMemoryMax,host_components/metrics/jvm/HeapMemoryUsed,host_components/metrics/jvm/memHeapCommittedM,host_components/metrics/mapred/jobtracker/trackers_decommissioned,host_components/metrics/cpu/cpu_wio,host_components/metrics/rpc/RpcQueueTime_avg_time,host_components/metrics/dfs/FSNamesystem/*,host_components/metrics/dfs/namenode/Version,host_components/metrics/dfs/namenode/LiveNodes,host_components/metrics/dfs/namenode/DeadNodes,host_components/metrics/dfs/namenode/DecomNodes,host_components/metrics/dfs/namenode/TotalFiles,host_components/metrics/dfs/namenode/UpgradeFinalized,host_components/metrics/dfs/namenode/Safemode,host_components/metrics/runtime/StartTime,host_components/processes/HostComponentProcess,host_components/metrics/hbase/master/IsActiveMaster,host_components/metrics/hbase/master/MasterStartTime,host_components/metrics/hbase/master/MasterActiveTime,host_components/metrics/hbase/master/AverageLoad,host_components/metrics/master/AssignmentManger/ritCount,metrics/api/v1/cluster/summary,metrics/api/v1/topology/summary,host_components/metrics/yarn/Queue,host_components/metrics/yarn/ClusterMetrics/NumActiveNMs,host_components/metrics/yarn/ClusterMetrics/NumLostNMs,host_components/metrics/yarn/ClusterMetrics/NumUnhealthyNMs,host_components/metrics/yarn/ClusterMetrics/NumRebootedNMs,host_components/metrics/yarn/ClusterMetrics/NumDecommissionedNMs&minimal_response=true

Error message: Server Error

MSI Crashes


Replies: 3

Hi Team,

I am trying to install HDP 2.2.6 on Windows Server 2012. I am installing it as administrator. It starts fine but crashes after the “gathering the system information” step.
Any idea if I am missing any prerequisites? I have installed Python 2.7, Microsoft 2010, and have Java 1.7 on my machine.

Regards

Installing Hadoop In Windows 8


Replies: 3

Hi All,
I am new to Hadoop. Please guide me on installing Hadoop on my Windows machine.

I also have the questions below:
1. Is JDK 1.6 mandatory for Hadoop? Currently I have only 1.7 and 1.8.
2. What is the minimum configuration required for Hadoop? Currently my machine is an i3 with 4 GB RAM; I hope this is enough.

How to migrate from Apache Hadoop 2 to Hortonworks


Replies: 0

Good day, I am planning to switch from Apache Hadoop 2 HDFS to Hortonworks.
I want to know the procedure to migrate data such as HDFS files, DataNode metadata, and NameNode metadata with minimal downtime.

I have the following setup: 4 DataNodes, 2 NameNodes in HA with ZooKeeper and QJM, and an HA ResourceManager.

Some sites mention using DistCp, but I am not sure what to extract from the older HDFS to the new one.
Also, when setting up Hortonworks, do you need Ambari with it?

Thanks & Regards,

Surinder

HDP 2.3 on Ubuntu??


Replies: 1

When will HDP 2.3 add Ubuntu to supported platforms?
