What happens when a datanode fails in Hadoop framework?

+1 vote
981 views
What happens when a datanode fails in Hadoop framework?
posted Dec 4, 2014 by Amit Kumar Pandey


1 Answer

0 votes

OK, I got what you are asking. The DFS client gets a list of datanodes from the namenode where it is supposed to write a block (say A) of a file. The DFS client then iterates over that list of datanodes and writes block A to those locations. If the block write fails on the first datanode, it abandons the block and asks the namenode for a new set of datanodes where it can attempt the write again.

The sample code below shows how this works:

private DatanodeInfo[] nextBlockOutputStream(String client) throws IOException {
    //----- other code ------
    do {
        hasError = false;
        lastException = null;
        errorIndex = 0;
        retry = false;
        nodes = null;
        success = false;

        long startTime = System.currentTimeMillis();

        // Ask the namenode to allocate the next block and the datanodes to write it to.
        lb = locateFollowingBlock(startTime);
        block = lb.getBlock();
        accessToken = lb.getBlockToken();
        nodes = lb.getLocations();

        //
        // Connect to first DataNode in the list.
        //
        success = createBlockOutputStream(nodes, clientName, false);

        if (!success) {
            // Setting up the pipeline failed: give the block back to the namenode.
            LOG.info("Abandoning block " + block);
            namenode.abandonBlock(block, src, clientName);

            // Connection failed.  Let's wait a little bit and retry.
            retry = true;
            try {
                if (System.currentTimeMillis() - startTime > 5000) {
                    LOG.info("Waiting to find target node: " + nodes[0].getName());
                }
                Thread.sleep(6000);
            } catch (InterruptedException iex) {
            }
        }
    } while (retry && --count >= 0);

    if (!success) {
        throw new IOException("Unable to create new block.");
    }
    return nodes;
}
answer Dec 29, 2014 by Kali Mishra
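
From the application's point of view this recovery happens entirely inside the DFS client: user code just writes through the FileSystem API, and the retry against a new set of datanodes is transparent. Below is a minimal, illustrative sketch of such a write (not part of the original answer; the path and payload are made up for the example):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        // Assumes fs.defaultFS points at the cluster (e.g. via core-site.xml on the classpath).
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical target path used only for this illustration.
        Path path = new Path("/tmp/block-write-demo.txt");

        // The DFS client behind this stream chooses the datanodes; if a block
        // write fails it abandons the block and asks the namenode for a new
        // set of datanodes, as in nextBlockOutputStream() above.
        try (FSDataOutputStream out = fs.create(path, true)) {
            out.writeBytes("hello hdfs\n");
        }
    }
}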
Similar Questions
+2 votes

I am running hadoop-2.4.0 cluster. Each datanode has 10 disks, directories for 10 disks are specified in dfs.datanode.data.dir.

A few days ago, I modified dfs.datanode.data.dir of a datanode () to reduce disks, so two disks were excluded from dfs.datanode.data.dir. After the datanode was restarted, I expected that the namenode would update the block locations. In other words, I thought the namenode should remove that datanode from the block locations associated with the blocks which were stored on the excluded disks, but the namenode didn't update the block locations...

In my understanding, a datanode sends a block report to the namenode when it starts, so the namenode should update the block locations immediately.

Is this a bug? Could anyone please explain?
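
One way to see what the namenode currently reports (not part of the original question) is to look at the block locations of a file that had replicas on the removed disks, either with hdfs fsck <path> -files -blocks -locations or programmatically. A minimal sketch, assuming a hypothetical path:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical file that had replicas on the excluded disks.
        Path path = new Path("/data/sample-file");
        FileStatus status = fs.getFileStatus(path);

        // Block locations as the namenode currently reports them.
        BlockLocation[] locations = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation location : locations) {
            System.out.println(location);  // offset, length and datanode hosts
        }
    }
}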

+4 votes

I am using a Hadoop cluster with 9 nodes. I would like to know what the basic datanode configuration in a Hadoop cluster is.

I am using the following configuration on the Namenode and Datanodes.

RAM = 4 GB, Cores = 4, Disks = 8 (total 16 GB storage space)

I am running sample sort and word count jobs to check the Hadoop network performance.
Is the configuration I have chosen right?

+1 vote

I have a job running very slowly. When I examine the cluster, I find my hdfs user using 170m of swap through the top command; that user runs the datanode daemon. The ps output shows the following info. There are two -Xmx values, and I do not know which value is the real one, 1000m or 10240m.

# ps -ef|grep 2853
root      2095  1937  0 15:06 pts/4    00:00:00 grep 2853
hdfs      2853     1  5 Nov07 ?        1-22:34:22 /usr/java/jdk1.7.0_45/bin/java -Dproc_datanode -Xmx1000m -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-hdfs-datanode-ch14.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Xmx10240m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/hadoop-hdfs/gc-ch14-datanode.log -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
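
For what it is worth, when a HotSpot JVM is given the same option twice, the last occurrence on the command line normally takes effect, so -Xmx10240m should win here. One way to confirm this from inside a process (not part of the original post) is to print the effective maximum heap; a minimal sketch:

public class EffectiveHeap {
    public static void main(String[] args) {
        // Runtime.maxMemory() reflects whichever -Xmx value actually took effect
        // (the reported number can be slightly below -Xmx because of survivor space).
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Effective max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}

Running it as java -Xmx1000m -Xmx10240m EffectiveHeap should report a heap close to 10240 MB, i.e. the later flag wins.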
0 votes

I have a basic question regarding HDFS file reads. I want to know what happens when the following steps are followed:

  1. Client opens the file for reading and starts reading the file.
  2. In the meantime, someone deletes the file and the file moves to the trash folder

Will Step 1 succeed? I feel that, since the client has already opened the file and the file still exists in .Trash, the client should continue to read the file.
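
A minimal sketch of that scenario (not from the original question), assuming a small test file already exists in HDFS; the delete goes through the Trash API so the file is moved to .Trash rather than removed outright:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class ReadWhileDeleted {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical file assumed to exist for this illustration.
        Path path = new Path("/tmp/read-demo.txt");

        // Step 1: open the file and start reading it.
        FSDataInputStream in = fs.open(path);
        byte[] buffer = new byte[4096];
        int firstRead = in.read(buffer);

        // Step 2: in the meantime, "someone" moves the file to trash.
        Trash.moveToAppropriateTrash(fs, path, conf);

        // Does the already-open stream keep working?
        int secondRead = in.read(buffer);
        System.out.println("Read " + firstRead + " bytes, then " + secondRead + " bytes");

        in.close();
    }
}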

...