
HDFS federation configuration

+1 vote
497 views

We tried setting up HDFS NameNode federation with 2 NameNodes, and I am facing a few issues.

Can anyone help me understand the points below?
1) How can we assign different namespaces to different NameNodes? Where exactly do we need to configure this?
2) After formatting each NameNode with one cluster ID, do we need to set this cluster ID in hdfs-site.xml?
3) I am getting an exception saying the data dir is already locked by one of the NameNodes, but when I don't specify data.dir, no exception is shown.

So what could be the issue?
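For point 1, a federated setup is normally declared in hdfs-site.xml by listing the nameservices and giving each NameNode its own addresses, along the lines of the sketch below (the nameservice IDs, hostnames, and ports are placeholder values, not from the original post):

```xml
<!-- hdfs-site.xml: two-NameNode federation sketch.
     ns1/ns2 and the example.com hosts are placeholders. -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>nn1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1</name>
    <value>nn1.example.com:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>nn2.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns2</name>
    <value>nn2.example.com:50070</value>
  </property>
</configuration>
```

For point 2, the cluster ID is passed at format time (`hdfs namenode -format -clusterId <cluster_id>`) and is recorded in each NameNode's VERSION file; it is not set in hdfs-site.xml. For point 3, that lock exception usually suggests two NameNodes are pointing at the same local storage directory, so check that each one has its own directory configured.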

posted Jan 23, 2014 by Bob Wise

Thanks. I followed the link and it is clear now, but the client-side configuration is not covered in the doc.
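On the client side, a common approach with federation is a ViewFs mount table in core-site.xml, so one logical namespace fans out to the per-NameNode namespaces. A minimal sketch, with placeholder mount-table name, paths, and hosts:

```xml
<!-- core-site.xml: client-side ViewFs mount-table sketch.
     clusterX, the mount points, and the hosts are placeholders. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>viewfs://clusterX</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.clusterX.link./user</name>
    <value>hdfs://nn1.example.com:8020/user</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.clusterX.link./data</name>
    <value>hdfs://nn2.example.com:8020/data</value>
  </property>
</configuration>
```

Alternatively, a client can simply set fs.defaultFS to one of the nameservices and address the other by full hdfs:// URI.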

Similar Questions
0 votes

Our Hadoop cluster uses HDFS Federation, but when I run the following command to report the HDFS status:

$ ./hdfs dfsadmin -report
report: FileSystem viewfs://nsX/ is not an HDFS file system
Usage: hdfs dfsadmin [-report] [-live] [-dead] [-decommissioning]

it tells me that viewfs is not an HDFS file system. How can I report the HDFS status?
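Since dfsadmin talks to a single NameNode, one way around this (a sketch, with a placeholder hostname) is to bypass the viewfs default and point the command at each NameNode in turn using the generic -fs option:

$ hdfs dfsadmin -fs hdfs://nn1.example.com:8020 -report
$ hdfs dfsadmin -fs hdfs://nn2.example.com:8020 -report

Repeating this per NameNode gives a per-namespace report of the federated cluster.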

0 votes

I was trying to implement a Hadoop/Spark audit tool, but I hit a problem: I can't get the input file location and file name. I can get the username, IP address, time, and user command from hdfs-audit.log, but when I submit a MapReduce job I can't see the input file location in either the Hadoop logs or the Hadoop ResourceManager.

Does Hadoop have an API or log that contains this info through some configuration? If so, what should I configure?
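The src= field of hdfs-audit.log does carry the path each command touched, so one option is to correlate job submissions with audit entries. A minimal parsing sketch, assuming the default FSNamesystem audit line format (the sample line, user, and path below are made-up values):

```python
import re

# One line in the shape hdfs-audit.log typically uses; the user, IP,
# and path are made-up sample values for illustration.
SAMPLE = ('2014-01-23 10:15:30,123 INFO FSNamesystem.audit: allowed=true '
          'ugi=alice (auth:SIMPLE) ip=/10.0.0.1 cmd=open '
          'src=/data/input/part-00000 dst=null perm=null')

AUDIT_RE = re.compile(
    r'allowed=(?P<allowed>\S+)\s+'
    r'ugi=(?P<user>\S+).*?'
    r'ip=/(?P<ip>\S+)\s+'
    r'cmd=(?P<cmd>\S+)\s+'
    r'src=(?P<src>\S+)')

def parse_audit_line(line):
    """Return the fields of one audit line as a dict, or None if the
    line does not look like an audit entry."""
    m = AUDIT_RE.search(line)
    return m.groupdict() if m else None

fields = parse_audit_line(SAMPLE)
print(fields['cmd'], fields['src'])  # open /data/input/part-00000
```

Filtering such entries by user, time window, and cmd=open around a job's submission time is one way to recover which input files a job read.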

0 votes

I have a basic question regarding HDFS file reads. I want to know what happens when the following steps occur:

  1. Client opens the file for reading and starts reading the file.
  2. In the meantime, someone deletes the file and the file moves to the trash folder.

Will step 1 succeed? I feel that, since the client has already opened the file and the file still exists in .Trash, the client should continue to read the file.
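The intuition above matches how the trash works: a delete with trash enabled is essentially a rename into the user's .Trash directory, so the blocks are not reclaimed immediately and an open reader that already holds block locations can keep reading. A local-filesystem analogy (not HDFS itself, and assuming a POSIX system) shows the same open-then-delete behavior:

```python
import os
import tempfile

# POSIX analogy: an already-open handle keeps working after the file is
# unlinked, much as an open HDFS client can keep reading after a delete.
path = os.path.join(tempfile.mkdtemp(), 'sample.txt')
with open(path, 'w') as f:
    f.write('hello hdfs')

reader = open(path)   # step 1: client opens the file
os.remove(path)       # step 2: someone deletes it mid-read
data = reader.read()  # the read still succeeds
reader.close()
print(data)  # hello hdfs
```

In HDFS the analogy is only approximate: if the trash is bypassed or emptied and the blocks are actually reclaimed, a reader that has not yet fetched those blocks will fail.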

...