I have already installed a 2-node Hadoop MapReduce cluster. I am using Hadoop 2.2.0 running on Fedora 18.
I modified /etc/hosts to:
192.168.34.68 master
192.168.34.148 slave
and this is my XML configuration:
core-site.xml
name : fs.default.name
value: hdfs://master:9000
name : hadoop.http.staticuser.user
value : hduser
name : hadoop.user.group.static.mapping.overrides
value : hduser=hadoop
name : hadoop.tmp.dir
value : home/hduser/App/hadoop-2.2.0/tmp
============
hdfs-site.xml
name : dfs.replication
value : 2
name : dfs.namenode.name.dir
value : file:/home/hduser/App/hadoop-2.2.0/hdfs/namenode
name : dfs.datanode.data.dir
value : file:/home/hduser/App/hadoop-2.2.0/hdfs/datanode
name : dfs.permissions
value : false
============
mapred-site.xml
name : mapreduce.framework.name
value : yarn
name : mapreduce.jobhistory.address
value : master:10020
name : mapreduce.jobhistory.webapp.address
value : master:19888
name : mapred.job.tracker
value : hdfs://master:9001
================
yarn-site.xml
name : yarn.resourcemanager.resource-tracker.address
value : master:8031
name : yarn.resourcemanager.scheduler.address
value : master:8030
name : yarn.resourcemanager.scheduler.class
value : org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
name : yarn.resourcemanager.address
value : 0.0.0.0:8032
name : yarn.nodemanager.local-dirs
value : ${hadoop.tmp.dir}/nodemanager/local
name : yarn.nodemanager.address
value : 0.0.0.0:8034
name : yarn.nodemanager.remote-app-log-dir
value : ${hadoop.tmp.dir}/nodemanager/remote
name : yarn.nodemanager.log-dirs
value : ${hadoop.tmp.dir}/nodemanager/logs
name : yarn.nodemanager.aux-services
value : mapreduce_shuffle
name : yarn.nodemanager.aux-services.mapreduce.shuffle.class
value : org.apache.hadoop.mapred.ShuffleHandler
===================
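For reference, each name/value pair above sits in a `<property>` element inside the file's `<configuration>` root. For example, the first two core-site.xml entries (values copied from the listing above) look like this:

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>hduser</value>
  </property>
</configuration>
```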
Then I checked which services are running on my cluster with jps. This is the jps output from the master:
31941 NameNode
32071 DataNode
32250 ResourceManager
29387 JobHistoryServer
2588 Jps
32364 NodeManager
and this is the output from the slave:
2069 DataNode
2282 NodeManager
8158 Main
3231 Jps
On the HDFS monitoring page I see 2 live DataNodes (just to make sure HDFS is running well).
My cluster successfully runs the sample WordCount job. Then I tried to run my own application, and suddenly I got this error:
14/08/20 12:10:56 INFO mapreduce.Job: Job job_1408511379607_0001 failed with state FAILED due to: Application application_1408511379607_0001 failed 2 times due to Error launching appattempt_1408511379607_0001_000002. Got exception: org.apache.hadoop.yarn.exceptions.NMNotYetReadyException: Rejecting new containers as NodeManager has not yet connected with ResourceManager
Obviously, I can SSH passwordlessly and ping each node. I have also disabled the firewall.
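Since the NMNotYetReadyException means the NodeManager has not registered with the ResourceManager, I also wanted to rule out a plain connectivity problem. This is just a quick sketch I used (the `port_open` helper is my own; the hostname `master` and the ports come from my yarn-site.xml above) to check from the slave that the ResourceManager's RPC ports are reachable:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Run on the slave: ResourceManager ports from yarn-site.xml above
    # (scheduler 8030, resource-tracker 8031, RM address 8032).
    for port in (8030, 8031, 8032):
        print("master:%d reachable: %s" % (port, port_open("master", port)))
```

All three ports report reachable from the slave, so plain TCP connectivity does not seem to be the issue.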
Did I configure Hadoop incorrectly, or is something wrong with my environment?