Do I need to start Hadoop on the slave also?

+2 votes
298 views

I have installed hadoop-2.2.0 on two machines, one master and one slave, and then started the Hadoop services on the master machine:

[hadoop@master ~]$./start-dfs.sh  
[hadoop@master ~]$./start-yarn.sh  
[hadoop@master ~]$./mr-jobhistory-daemon.sh start historyserver   

My question is whether I need to start the Hadoop services on the slave machine again as well, i.e.

[hadoop@slave ~]$./start-dfs.sh  
[hadoop@slave ~]$./start-yarn.sh  
[hadoop@slave ~]$./mr-jobhistory-daemon.sh start historyserver            
posted Feb 17, 2014 by Amit Mishra


1 Answer

+1 vote

No, you don't need to. The start scripts run on the master start all required daemons on the slave machines over SSH (using the slaves file), so you only run them once. Check that all daemons are running with the jps command on each machine.
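For example, a quick check might look like the following (assuming the master hosts the NameNode, ResourceManager, and JobHistoryServer, and the slave is listed in the slaves file; the exact daemon list depends on your configuration):

[hadoop@master ~]$ jps      # typically shows NameNode, SecondaryNameNode, ResourceManager, JobHistoryServer
[hadoop@slave ~]$ jps       # typically shows DataNode, NodeManager

If any of these daemons are missing, check the corresponding log file under the Hadoop logs directory on that machine.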

answered Feb 17, 2014 by Jagan Mishra
Similar Questions
+2 votes

Suppose we change the default block size to 32 MB and the replication factor to 1, and the Hadoop cluster consists of 4 DNs. The input data size is 192 MB. Now I want to place the data on the DNs as follows: DN1 and DN2 each contain 2 blocks (32+32 = 64 MB), and DN3 and DN4 each contain 1 block (32 MB). Is this possible, and how can it be accomplished?
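A minimal sketch of the block-size and replication part, assuming a standard Hadoop 2.x client (the file name input.dat and the HDFS path /data/ are placeholders). Note that the default HDFS block placement policy does not let you choose which DataNode receives which block, so pinning blocks to DN1-DN4 exactly as described would require a custom BlockPlacementPolicy:

# Write a 192 MB file with a 32 MB block size and replication factor 1
# (dfs.blocksize and dfs.replication are standard Hadoop 2.x properties)
[hadoop@master ~]$ hdfs dfs -D dfs.blocksize=33554432 -D dfs.replication=1 -put input.dat /data/input.dat
# Verify how the 6 resulting blocks were actually distributed across the DNs
[hadoop@master ~]$ hdfs fsck /data/input.dat -files -blocks -locations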

+1 vote

How can I track a job failure on a node, or a list of nodes, using the YARN APIs? I can get the list of long-running jobs using the YARN client API, but I need to go further, down to the AM, NM, and task attempts for map or reduce.
Say I have a job that has been running for a long time (about 4 hours), possibly because of some task failures.

Please provide the sequence of APIs, or any reference.
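Not an authoritative answer, but one possible sequence using the standard YARN CLI and the ResourceManager REST API (the host name, port, and application IDs below are placeholders; the applicationattempt and container subcommands need a reasonably recent 2.x release, and yarn logs requires log aggregation to be enabled):

# 1. Find long-running applications
[hadoop@master ~]$ yarn application -list -appStates RUNNING
# 2. Drill down from the application to its attempts and containers
[hadoop@master ~]$ yarn applicationattempt -list application_1423500000000_0001
[hadoop@master ~]$ yarn container -list appattempt_1423500000000_0001_000001
# 3. Pull aggregated logs to inspect failed task attempts
[hadoop@master ~]$ yarn logs -applicationId application_1423500000000_0001
# The same information is exposed by the ResourceManager REST API, e.g.:
[hadoop@master ~]$ curl http://resourcemanager:8088/ws/v1/cluster/apps?states=RUNNING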

+2 votes

Has anyone seen these errors before? Please help.

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: xxxxx.com:50010:DataXceiver error processing WRITE_BLOCK operation  src: /xxxxxxxx:39000 dst: /xxxxxx:50010

java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:167)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:604)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:126)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:72)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:225)
at java.lang.Thread.run(Thread.java:745)
2015-01-11 04:13:21,846 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write packet to mirror took 657ms (threshold=300ms)
...