
Creating Vectors from SequenceFile in Mahout


Replies: 1

I'm using Mahout 0.9 (installed on HDP 2.2) for topic discovery (the LDA algorithm). I have my text file stored in the directory inputraw and execute the following commands in order.

command#1:

mahout seqdirectory -i inputraw -o output-directory -c UTF-8

command#2:

mahout seq2sparse -i output-directory -o output-vector-str -wt tf -ng 3 --maxDFPercent 40 -ow -nv

command#3:

mahout rowid -i output-vector-str/tf-vectors/ -o output-vector-int

command#4:

mahout cvb -i output-vector-int/matrix -o output-topics -k 1 -mt output-tmp -x 10 -dict output-vector-str/dictionary.file-0

After executing the second command, as expected, it creates a bunch of subfolders and files under output-vector-str (named df-count, dictionary.file-0, frequency.file-0, tf-vectors, tokenized-documents and wordcount). The sizes of these files all look reasonable considering the size of my input file; however, the file under tf-vectors is very small, in fact only 118 bytes.

Since tf-vectors is the input to the third command, the third command also generates a very small file. Does anyone know:

1- Why is the file under the tf-vectors folder so small? There must be something wrong.

2- Starting from the first command, all the generated files have a strange encoding and are not human-readable. Is this expected?
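For what it's worth, the outputs of seqdirectory and seq2sparse are Hadoop SequenceFiles, which are binary by design; they can be inspected with Mahout's seqdumper utility. A minimal sketch, assuming the default part file name part-r-00000:

mahout seqdumper -i output-vector-str/tf-vectors/part-r-00000 | head -50
mahout seqdumper -i output-vector-str/dictionary.file-0 | head -50

If the tf-vectors dump shows no vectors at all, the problem is upstream (e.g. tokenization or the DF pruning settings) rather than in the rowid step.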

Besides,


Error creating Table in HCatalog


Replies: 3

Hi,

I installed Hadoop (HDP 2.2) with Ambari 1.7. While creating a table following this webpage – http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.2.0/HBase_Import_v22/index.html#Item1.1 –

I am getting this error –

hcat -f simple.ddl

15/01/08 22:11:23 WARN conf.HiveConf: HiveConf of name hive.optimize.mapjoin.mapreduce does not exist
15/01/08 22:11:23 WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
15/01/08 22:11:23 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
15/01/08 22:11:23 WARN conf.HiveConf: HiveConf of name hive.auto.convert.sortmerge.join.noconditionaltask does not exist
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.0.0-2041/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.0.0-2041/hive/lib/hive-jdbc-0.14.0.2.2.0.0-2041-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
FAILED: SemanticException Cannot find class 'org.apache.hcatalog.hbase.HBaseHCatStorageHandler'
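One possible direction, offered as a hedged guess rather than a confirmed fix: in Hive 0.14 the old org.apache.hcatalog.* classes were deprecated in favor of org.apache.hive.hcatalog.*, so the handler class the tutorial names may simply not ship under that package in HDP 2.2. An HBase-backed table can often be declared with Hive's native HBase handler instead. A sketch of what simple.ddl might look like in that style (table and column names are placeholders):

CREATE TABLE simple_hbase_table (id STRING, value STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,d:value')
TBLPROPERTIES ('hbase.table.name' = 'simple_hbase_table');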

Ambari


Replies: 0

Ambari hangs when attempting to install the HDP 2.2 STORM service.
It appears that a URL map file is incorrect.

I believe HortonWorks needs to fix this file:

http://public-repo-1.hortonworks.com/HDP/hdp_urlinfo.json

This file returns the following content; the HDP-2.2 entries are missing hdp.repo at the end.

{
  "HDP-2.2": {
    "latest": {
      "centos5": "http://public-repo-1.hortonworks.com/HDP/centos5/2.x/GA/2.2.0.0/",
      "centos6": "http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0/",
      "suse11": "http://public-repo-1.hortonworks.com/HDP/suse11sp3/2.x/GA/2.2.0.0/",
      "ubuntu12": "http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.x/GA/2.2.0.0/"
    }
  },

http://public-repo-1.hortonworks.com/HDP/centos5/2.x/GA/2.2.0.0/ returns:
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>HDP/centos5/2.x/GA/2.2.0.0/</Key>
<RequestId>3120A2173594E43D</RequestId>
<HostId>F43TTS4STm/jqjppMkVcxVbbF2oxceObChX9NXNI24/VHsoA/GsjHYk/axqXRTdKsx7liXG6T7Q=</HostId>
</Error>

http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0/hdp.repo – is a valid URL.
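A quick way to verify which repo paths actually resolve is to request just the headers with curl (a minimal sketch):

curl -I http://public-repo-1.hortonworks.com/HDP/centos5/2.x/GA/2.2.0.0/hdp.repo
curl -I http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0/hdp.repo

A 200 response indicates a valid hdp.repo; a 404 matches the NoSuchKey error above.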

This seems to be such a simple change. Hope it gets fixed soon.
Cheers,
Greg.

HDP2.1 HDFS NameNode Issue


Replies: 2

Hi, we are running a 10-node HDP 2.1 cluster. We are facing issues where the NameNode often goes down, and some other services go down as well. Can you please help me resolve the issue?
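As a first step for anyone hitting this, the NameNode log usually records why it stopped; a hedged sketch of where to look on a typical HDP install (the exact file name depends on the service user and hostname):

tail -n 200 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log | grep -iE 'fatal|error|exception'

Culprits frequently reported on these forums include JVM heap exhaustion and full disks under dfs.namenode.name.dir, both of which show up clearly in this log.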

Hive Query with a user input


Replies: 0

Dear Sir,
I want to make use of 'user input' in a Hive query, similar to taking input from an HTML page and validating it on a JSP page. Is there any way we can do this in Hive?
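One common approach is to pass the user's value in from the calling application or shell via -hiveconf and reference it with ${hiveconf:...} substitution (a minimal sketch; the table and variable names are made up for illustration):

hive -hiveconf user_id=42 -e 'SELECT * FROM customers WHERE id = ${hiveconf:user_id};'

A JSP/servlet front end would collect and validate the value, then run the query over JDBC against HiveServer2 (ideally with a parameterized PreparedStatement) rather than through the CLI.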

Secondary namenode on Hadoop 2.x (Hortonworks)


Replies: 0

Hi everyone.
I'm learning Hadoop 2.x technology (Hortonworks, Cloudera) and notice that in the Hadoop 2.x architecture there is no Secondary NameNode; it was replaced by the Standby NameNode.
1) Is the **Secondary NameNode** deprecated? Because (as I understand it) the Standby NameNode's functions differ from the Secondary NameNode's functions.
2) Can I build Hadoop without a Secondary NameNode and without a Standby NameNode, without loss of performance?

Thank you

Does Hive (0.14.0) support an update query based on a join?


Replies: 5

I am using the Hortonworks distribution of Hive (0.14.0). I have created an ORC table with buckets and also an external table. I am able to issue a standalone update query on the ORC table successfully, i.e. update HiveExample set commission = 50000 where id = 19;

But when I try to do a join-based update using the ORC table (HiveExample) and the external table (HiveTest), Hive throws an error:

UPDATE HiveExample INNER JOIN HiveTest ON HiveExample.id = HiveTest.id SET HiveExample.Commissionflag = 'F'

Error occurred executing hive query: Error while compiling statement: FAILED: ParseException line 2:1 mismatched input 'JOIN' expecting SET near 'HiveTest' in update statement

It would be very helpful if someone could confirm whether this type of query is supported.

Note: I have gone through the Apache documentation https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Update which says:

The value assigned must be an expression that Hive supports in the select clause. Thus arithmetic operators, UDFs, casts, literals, etc. are supported. Subqueries are not supported.

Thanks in advance
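For reference, the Hive 0.14 grammar accepts only UPDATE <table> SET ... WHERE ...; neither joins nor subqueries are allowed in an UPDATE, which is exactly what the ParseException reflects. A commonly suggested workaround, sketched under the assumption that rewriting the rows into a staging table is acceptable (HiveExample_staged is a hypothetical table, and the column list assumes only the columns mentioned in the post):

INSERT OVERWRITE TABLE HiveExample_staged
SELECT e.id,
       e.commission,
       CASE WHEN t.id IS NOT NULL THEN 'F' ELSE e.Commissionflag END AS Commissionflag
FROM HiveExample e
LEFT OUTER JOIN HiveTest t ON e.id = t.id;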

Change admin PW + SSL config simply does not work


Replies: 0

Hi,
I am running Hue 2.6.1-2041 with HDP 2.2.0 installed on RHEL 6.

Everything works great except

a) I want to change the admin user's password. When I go to user admin or "profile" and change the password, I get no errors, but no matter what, the password never changes.

b) I read the instructions on how to configure SSL for Hue, but this does not work either. I set up a cert/key and tried uncommenting the sections in hue.ini + hue_httpd.conf, and neither of these works. Sometimes Hue won't start at all, or it starts but I can't connect.

Help?
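For part (b), the SSL settings Hue itself reads live under the [desktop] section of hue.ini; a minimal sketch, assuming Hue's built-in server terminates SSL, with placeholder paths:

[desktop]
ssl_certificate=/etc/hue/conf/hue-cert.pem
ssl_private_key=/etc/hue/conf/hue-key.pem

If Apache (hue_httpd.conf) is meant to terminate SSL instead, the hue.ini entries should stay commented out; configuring both at once is one plausible source of the won't-start symptom described above.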


Bad substitution error with spark on HDP 2.2


Replies: 2

Hi,

I'm facing an issue with running Spark on YARN.

YARN is installed through Ambari: HDP v2.2.0.0-2041, Spark 1.2.
After submitting a Spark job through YARN, I get this error message:
Stack trace: ExitCodeException exitCode=1: /hdp/hadoop/yarn/local/usercache/leads_user/appcache/application_1420759015115_0012/container_1420759015115_0012_02_000001/launch_container.sh: line 27: $PWD:$PWD/__spark__.jar:$HADOOP_CONF_DIR:/usr/hdp/current/hadoop-client/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/usr/hdp/current/hadoop-yarn-client/*:/usr/hdp/current/hadoop-yarn-client/lib/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure:$PWD/__app__.jar:$PWD/*: bad substitution

at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

I've followed the instructions given in the technical preview.

I've set the mentioned configurations in the spark-defaults.conf file inside the conf folder. I've also checked with verbose logging that it is picking up those parameters, but I'm still getting the same error.

In verbose mode it prints the following:

System properties:
spark.executor.memory -> 3G
SPARK_SUBMIT -> true
spark.executor.extraJavaOptions -> -Dhdp.version=2.2.0.0-2041 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:-HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/hdfs/heapDump/ -XX:+UseCompressedOops
spark.app.name -> com.xxx.xxx.xxxxxx
spark.driver.extraJavaOptions -> -Dhdp.version=2.2.0.0-2041
spark.yarn.am.extraJavaOptions -> -Dhdp.version=2.2.0.0-2041
spark.master -> yarn-cluster

Any idea what could be the problem here?
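One step from the HDP 2.2 Spark technical preview that is easy to miss, offered here as a hedged suggestion: the YARN-side classpath in launch_container.sh references ${hdp.version}, so the property must be defined for the container launch itself, not only via the spark-defaults.conf entries shown above. The tech preview does this with a java-opts file in Spark's conf directory (run from the Spark installation directory):

echo "-Dhdp.version=2.2.0.0-2041" > conf/java-opts

If the error persists, another workaround some users report is replacing ${hdp.version} with the literal version string in mapreduce.application.classpath in mapred-site.xml.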

What is a Client in HDP installations


Replies: 0

Hello,

after I installed a node (DataNode, NameNode, Ganglia components), the client installation failed. What is a client?
The node runs perfectly.

Thank you for your response

Tutorial: Real-time Data Ingestion in HBase & Hive using Storm Bolt


Replies: 10

Hi there
Is anybody experiencing problems with this tutorial?
I submitted the topology;
all the services for Storm and Kafka are started.
I then issued the command to start the 'TruckEventsProducer' Kafka producer, and I can see events being produced and logs sent to the screen.
But the data is not being persisted. The Kafka spout is not producing anything (when I view it in the Storm UI, the KafkaSpout emitted counter is not updating… it stays at 0). When I check the log files for the worker task for the TruckEventProcessor in /var/log/storm…
I see the following

13:01:45 b.s.d.worker [INFO] Launching worker for truck-event-processor-1-1422017673 on 8c75249c-e8e9-4d31-9908-579f25c4fb88:6701 with id 48ed2969-334c-479c-9b03-2a31053fa65c
13:01:45 b.s.d.worker [ERROR] Error on initialization of server mk-worker
java.io.IOException: No such file or directory

I've tried resubmitting this topology several times and I always get the error.
I also made sure I cleaned out storm.local.dir (/hadoop/storm) before each run.
I was able to get everything working in “Ingesting and processing Realtime events with Apache Storm”
The topology submitted for tutorial 2 was fine, but the one submitted for this exercise, tutorial 3 (storm jar target/Tutorial-1.0-SNAPSHOT.jar com.hortonworks.tutorials.tutorial3.TruckEventProcessingTopology), doesn't seem to process.

Anybody have any ideas, please?
Thank you
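For the 'No such file or directory' during worker launch, one thing worth ruling out, as a hedged suggestion: if storm.local.dir was wiped by hand, the supervisor may be left without a directory it can recreate and own. Assuming Storm runs as the storm user:

mkdir -p /hadoop/storm
chown -R storm:storm /hadoop/storm

Then restart the supervisors before resubmitting the topology.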

In Hive, how to get the query result in a variable (hiveconf)


Replies: 0

Hi All,

I am executing an HQL file through a shell command. My query output should be stored in a hiveconf variable so that I can use the variable in another query.
Basically I am from a SQL Server background. In SQL Server we would do it like this:

Declare @cnt as int;
select @cnt=count(1) from table;

so that @cnt can be used in several places.

I don't know how to achieve this. Can anyone help me?

Thanks
venkadesan
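A common pattern for this is to run the first query with the CLI in silent mode, capture its output in a shell variable, and pass it back in via -hiveconf (a minimal sketch; the table names and the script name are placeholders):

cnt=$(hive -S -e "SELECT count(1) FROM some_table;")
hive -hiveconf cnt="${cnt}" -f other_query.hql

where other_query.hql references the value as ${hiveconf:cnt}. Hive itself has no way to assign a query result to a variable within one session, so the round trip through the shell is the usual workaround.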

cannot turn off maintenance mode


Replies: 0

Does anyone know how to turn off maintenance mode on a host or service?
I have a 3-node cluster. Currently, maintenance mode is turned on for the NodeManager and YARN client on all nodes,
and for the App Timeline Server / ResourceManager on the master node.
I have tried several times by selecting “turn off maintenance mode” for these hosts/services.
The following message appears :

Maintenance Mode has been turned off. It may take a few minutes for the alerts to be enabled.

But the maintenance mode icon still appears next to the host/service.
Please let me know how to fix this.
Thanks
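If the UI toggle keeps failing, maintenance mode can also be set through the Ambari REST API; a hedged sketch for a single host (the cluster name, host name, credentials, and server address are placeholders):

curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Turn off maintenance mode"},"Body":{"Hosts":{"maintenance_state":"OFF"}}}' \
  http://ambari-server:8080/api/v1/clusters/MyCluster/hosts/node1.example.com

The same maintenance_state field exists on service resources (e.g. .../services/YARN with "ServiceInfo":{"maintenance_state":"OFF"}).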

In HDP 2.2 Apache Spark is not installed


Replies: 0

Hi ,
I have successfully installed HDP 2.2 (automatic Ambari install) on CentOS 6.6. It seems the Apache Spark service is not listed/installed in HDP 2.2.
As per this link http://hortonworks.com/hadoop/spark/ I had the understanding that Apache Spark 1.2 is included in HDP 2.2. Can you please advise if that is not the case, and when and in what HDP version Apache Spark will be installed as part of the Hadoop ecosystem?

Regards,
Pal

Hortonworks and VMware HVE


Replies: 0

Hi,

Is anyone able to confirm whether Hortonworks HDP 2.x supports Hadoop Virtualization Extensions (HVE)? I am unable to find any clear documentation, so any advice would be much appreciated.

Thanks

Charlie


Ambari with LDAP


Replies: 0

Hi,
In the ambari.properties file I have added the LDAP properties as mentioned at http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.2.4/bk_using_Ambari_book/content/ambari-chap2-2a.html
After starting ambari-server I am unable to log in using an LDAP user; I am getting the error below:

04:20:20,862 INFO [qtp908151269-325] AmbariLocalUserDetailsService:62 – Loading user by name: username
04:20:20,864 INFO [qtp908151269-325] AmbariLocalUserDetailsService:67 – user not found
04:20:20,864 INFO [qtp908151269-325] AmbariLdapAuthenticationProvider:146 – Reloading properties
04:20:20,864 INFO [qtp908151269-325] AmbariLdapAuthenticationProvider:94 – LDAP Properties changed – rebuilding Context
04:20:20,865 INFO [qtp908151269-325] AbstractContextSource:330 – Property 'userDn' not set – anonymous context will be used for read-write operations
04:20:20,865 INFO [qtp908151269-325] FilterBasedLdapUserSearch:89 – SearchBase not set. Searches will be performed from the root: ou=Users,dc=xxx,dc=yyy

In my config I have mentioned the following:
authentication.ldap.baseDn="OU=Users,DC=xxx,DC=yyy"
authentication.ldap.usernameAttribute=sAMAccountName

Any idea what I am doing wrong??
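One thing worth checking, as a hedged observation: values in ambari.properties are taken literally, so the double quotes around the baseDn value become part of the DN. Also, the 'Property userDn not set' and 'anonymous context' log lines suggest the bind settings are missing. A sketch of a fuller LDAP block for Ambari 1.x against Active Directory (hostnames, DNs, and the password file path are placeholders):

authentication.ldap.primaryUrl=ldap.example.com:389
authentication.ldap.useSSL=false
authentication.ldap.bindAnonymously=false
authentication.ldap.managerDn=CN=ambari-bind,OU=Users,DC=xxx,DC=yyy
authentication.ldap.managerPassword=/etc/ambari-server/conf/ldap-password.dat
authentication.ldap.baseDn=OU=Users,DC=xxx,DC=yyy
authentication.ldap.usernameAttribute=sAMAccountName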

Managing files within .har

$
0
0

Replies: 2

Hello experts,

I'm currently evaluating Hadoop for file storage and reporting, but I'm stuck on the following detail: the source files that will be ingested into HDFS are quite small, so because of the NameNode small-files problem I decided to store them in a Hadoop Archive. However, I can't find any options to manage the files inside the .har file, for instance moving files, inserting new files, deleting files, and so on.

Does anyone have any idea how I can achieve this?

Best regards,
Clóvis.
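For context, a .har is write-once: the archive cannot be modified in place. The usual approach is to copy the contents out, apply the changes, and build a new archive. A minimal sketch with placeholder paths:

hdfs dfs -cp har:///user/clovis/archives/files.har/* /user/clovis/staging/
hadoop archive -archiveName files-v2.har -p /user/clovis staging /user/clovis/archives
hdfs dfs -ls har:///user/clovis/archives/files-v2.har

If frequent updates are expected, HBase or SequenceFiles are often suggested instead of HAR as a remedy for the small-files problem.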

Hadoop HA setup : not able to connect to zookeeper


Replies: 0

I am trying to set up Hadoop HA following the article below:

http://hashprompt.blogspot.in/2015/01/fully-distributed-hadoop-cluster.html

After the configuration, when I try to run

hdfs zkfc -formatZK

I get the following error.

15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/hadoop-2.6.0/lib/native
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:os.version=3.13.0-32-generic
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:user.name=huser
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/huser
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/hadoop-2.6.0/sbin
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=mo-4594ddc63.mo.sap.corp:2181,mo-6dd5bf8b8.mo.sap.corp:2181,mo-e7b2822cb.mo.sap.corp:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@4d9e68d0
15/03/30 12:18:14 INFO zookeeper.ClientCnxn: Opening socket connection to server mo-4594ddc63.mo.sap.corp/10.97.155.65:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/30 12:18:14 INFO zookeeper.ClientCnxn: Socket connection established to mo-4594ddc63.mo.sap.corp/10.97.155.65:2181, initiating session
15/03/30 12:18:14 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
15/03/30 12:18:15 INFO zookeeper.ClientCnxn: Opening socket connection to server mo-e7b2822cb.mo.sap.corp/10.97.136.84:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/30 12:18:15 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
15/03/30 12:18:15 INFO zookeeper.ClientCnxn: Opening socket connection to server mo-6dd5bf8b8.mo.sap.corp/10.97.156.12:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/30 12:18:15 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
15/03/30 12:18:17 INFO zookeeper.ClientCnxn: Opening socket connection to server mo-4594ddc63.mo.sap.corp/10.97.155.65:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/30 12:18:17 INFO zookeeper.ClientCnxn: Socket connection established to mo-4594ddc63.mo.sap.corp/10.97.155.65:2181, initiating session
15/03/30 12:18:17 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
15/03/30 12:18:17 INFO zookeeper.ClientCnxn: Opening socket connection to server mo-e7b2822cb.mo.sap.corp/10.97.136.84:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/30 12:18:17 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
15/03/30 12:18:18 INFO zookeeper.ClientCnxn: Opening socket connection to server mo-6dd5bf8b8.mo.sap.corp/10.97.156.12:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/30 12:18:18 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
15/03/30 12:18:19 ERROR ha.ActiveStandbyElector: Connection timed out: couldn’t connect to ZooKeeper in 5000 milliseconds
15/03/30 12:18:19 INFO zookeeper.ClientCnxn: Opening socket connection to server mo-4594ddc63.mo.sap.corp/10.97.155.65:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/30 12:18:19 INFO zookeeper.ClientCnxn: Socket connection established to mo-4594ddc63.mo.sap.corp/10.97.155.65:2181, initiating session
15/03/30 12:18:20 INFO zookeeper.ZooKeeper: Session: 0x0 closed
15/03/30 12:18:20 INFO zookeeper.ClientCnxn: EventThread shut down
15/03/30 12:18:20 FATAL ha.ZKFailoverController: Unable to start failover controller. Unable to connect to ZooKeeper quorum at mo-4594ddc63.mo.sap.corp:2181,mo-6dd5bf8b8.mo.sap.corp:2181,mo-e7b2822cb.mo.sap.corp:2181. Please check the configured value for ha.zookeeper.quorum and ensure that ZooKeeper is running.

After the ZooKeeper installation (for which I followed http://rajsyrus.blogspot.sg/2014/04/configuring-hadoop-high-availability.html), I started the ZooKeeper service on each node with

./zkServer.sh start

but when I then check its status using

./zkServer.sh status

the following result appears:

JMX enabled by default
Using config: /home/huser/zookeeper-3.4.6/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.

This suggests it is not running properly.

Content of zoo.cfg

# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/huser/zookeeper/data/
dataLogDir=/home/huser/zookeeper/log/
server.1=mo-4594ddc63.mo.sap.corp:2888:3888
server.2=mo-6dd5bf8b8.mo.sap.corp:2888:3888
server.3=mo-e7b2822cb.mo.sap.corp:2888:3888
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

Content of core-site.xml

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://auto-ha</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>mo-4594ddc63.mo.sap.corp:2181,mo-6dd5bf8b8.mo.sap.corp:2181,mo-e7b2822cb.mo.sap.corp.hadoop.lab:2181</value>
</property>
</configuration>

Content of hdfs-site.xml

<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///hdfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///hdfs/data</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>auto-ha</value>
</property>
<property>
<name>dfs.ha.namenodes.auto-ha</name>
<value>nn01,nn02</value>
</property>
<property>
<name>dfs.namenode.rpc-address.auto-ha.nn01</name>
<value>mo-4594ddc63.mo.sap.corp:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.auto-ha.nn01</name>
<value>mo-4594ddc63.mo.sap.corp:50070</value>
</property>
<property>
<name>dfs.namenode.rpc-address.auto-ha.nn02</name>
<value>mo-6dd5bf8b8.mo.sap.corp:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.auto-ha.nn02</name>
<value>mo-6dd5bf8b8.mo.sap.corp:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://mo-4594ddc63.mo.sap.corp:8485;mo-6dd5bf8b8.mo.sap.corp:8485;mo-e7b2822cb.mo.sap.corp:8485/auto-ha</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/hdfs/journalnode</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/huser/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled.auto-ha</name>
<value>true</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>mo-4594ddc63.mo.sap.corp:2181,mo-6dd5bf8b8.mo.sap.corp:2181,mo-e7b2822cb.mo.sap.corp:2181</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.auto-ha</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
</configuration>

Any pointer toward resolving the error would be a great help.

Regards,
Subhankar
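Two checks that usually narrow this kind of failure down, offered as hedged suggestions. First, with a multi-server zoo.cfg each node needs a myid file in its dataDir whose number matches its server.N line; if it is missing, the server exits shortly after start, which would match both the "Error contacting service" status and the refused client connections:

echo 1 > /home/huser/zookeeper/data/myid   # use 2 and 3 on the other two nodes

Second, the four-letter-word commands show whether a server is actually serving:

echo ruok | nc mo-4594ddc63.mo.sap.corp 2181   # a healthy server replies "imok"

Startup errors typically land in zookeeper.out in the directory from which zkServer.sh was launched. Note also that ha.zookeeper.quorum in the core-site.xml above lists mo-e7b2822cb.mo.sap.corp.hadoop.lab, a suffix the other entries lack; that inconsistency is worth double-checking.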

Storm and Kafka appender not working


Replies: 0


I am trying to execute an example Storm trending topology that uses a Kafka log appender. I am using this code: https://github.com/alvinhenrick/log-kafka-storm

But when I execute my appender, the IM messages don't appear on my desktop. :( I have tried many possibilities without finding any solution…

My Storm logs show this:

2015-03-30 12:03:03 s.k.KafkaUtils [WARN] No data found in Kafka Partition partition_0
2015-03-30 12:03:55 s.k.t.ZkBrokerReader [INFO] brokers need refreshing because 60000ms have expired
2015-03-30 12:03:55 s.k.DynamicBrokersReader [INFO] Read partition info from zookeeper: GlobalPartitionInformation{partitionMap={0=sandbox.hortonworks.com:6667, 1=sandbox.hortonworks.com:6667}}
2015-03-30 12:04:03 s.k.KafkaUtils [WARN] No data found in Kafka Partition partition_0

2015-03-27 18:27:46 o.a.z.ClientCnxn [INFO] Opening socket connection to server 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2015-03-27 18:27:46 o.a.z.ClientCnxn [INFO] Socket connection established to 127.0.0.1/127.0.0.1:2181, initiating session
2015-03-27 18:27:46 o.a.z.ClientCnxn [INFO] Session establishment complete on server 127.0.0.1/127.0.0.1:2181, sessionid = 0x14c4be104f62176, negotiated timeout = 20000
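One way to split the problem, as a hedged suggestion: check whether the appender's messages are reaching Kafka at all, independently of Storm, using the console consumer (the topic name is whatever the appender is configured to write to; the path assumes an HDP 2.2 sandbox):

/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper sandbox.hortonworks.com:2181 --topic <your-topic> --from-beginning

If nothing arrives, the appender side is at fault; if messages do arrive, the "No data found in Kafka Partition partition_0" warnings point at the spout's topic/partition configuration instead.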

Thanks in advance…

HDP2.2.3?


Replies: 0

Just curious, what’s going on with HDP2.2.3? It was released on ~3/20 (showing up on the docs site, but without release notes), then disappeared from the website, then was released again a few days later (this time with release notes), and is now missing again?
