Why doesn't Hadoop support updates and append?

+3 votes
407 views
Why doesn't Hadoop support updates and append?
posted Nov 29, 2016 by Shyam


1 Answer

0 votes

By default, Hadoop is meant for write-once, read-many access, so in-place updates are not supported. Hadoop 2.x supports the append operation, but Hadoop 1.x does not.

answer Dec 1, 2016 by Karthick.c
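
For reference, a minimal sketch (not part of the original answer) of how append can be used through the Hadoop FileSystem Java API on a 2.x cluster; the file path below is a hypothetical example and fs.defaultFS is assumed to point at the cluster.

// Minimal sketch: create a file once, then append to it on Hadoop 2.x.
// On Hadoop 1.x the append() call is not supported.
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAppendExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/tmp/append-demo.txt");   // hypothetical path

        // Write-once: create the file if it does not exist yet.
        if (!fs.exists(file)) {
            try (OutputStream out = fs.create(file)) {
                out.write("first line\n".getBytes(StandardCharsets.UTF_8));
            }
        }

        // Append is available in Hadoop 2.x; existing data is not modified.
        try (OutputStream out = fs.append(file)) {
            out.write("appended line\n".getBytes(StandardCharsets.UTF_8));
        }

        fs.close();
    }
}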
Similar Questions
+1 vote

I want to know how to install and configure Apache Hadoop, and what programming paradigm to use for working with it.

+2 votes

I am running a hadoop-2.4.0 cluster. Each datanode has 10 disks, and the directories for the 10 disks are specified in dfs.datanode.data.dir.

A few days ago, I modified dfs.datanode.data.dir of a datanode () to reduce the number of disks, so two disks were excluded from dfs.datanode.data.dir. After the datanode was restarted, I expected that the namenode would update the block locations. In other words, I thought the namenode should remove it from the block locations associated with blocks which were stored on the excluded disks, but the namenode didn't update the block locations...

In my understanding, a datanode sends a block report to the namenode when it starts, so the namenode should have updated the block locations immediately.

Is this a bug? Could anyone please explain?
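
Not part of the original question, but one way to check what the namenode currently reports is to query block locations for a file through the FileSystem API. A minimal Java sketch follows; the file path is a hypothetical example.

// Minimal sketch: print the hosts the namenode reports for each block of a file,
// to verify whether block locations were updated after the datanode restart.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationCheck {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        Path file = new Path("/data/sample.txt");        // hypothetical path
        FileStatus status = fs.getFileStatus(file);

        // Ask the namenode where it thinks each block of the file lives.
        BlockLocation[] locations =
                fs.getFileBlockLocations(status, 0, status.getLen());

        for (BlockLocation loc : locations) {
            System.out.println("offset " + loc.getOffset()
                    + " hosts " + String.join(",", loc.getHosts()));
        }

        fs.close();
    }
}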

+2 votes

I am trying to run Nutch 2.2.1 on a Hadoop 2-node cluster. My Hadoop cluster is running fine and I have successfully added the input and output directories to HDFS. But when I run

$HADOOP_HOME/bin/hadoop jar /nutch/apache-nutch-2.2.1.job org.apache.nutch.crawl.Crawler urls -dir crawl -depth 3 -topN 5

I am getting something like:

INFO input.FileInputFormat: Total input paths to process : 0

Which, I understand, means that Hadoop cannot locate the input files. The job then ends, for obvious reasons, with a NullPointerException.

Can someone help me out?
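
Not part of the original question, but a minimal sketch of how one might verify that the seed directory actually exists and is non-empty on HDFS, which is the usual cause of "Total input paths to process : 0". The "urls" path matches the directory named in the command above and is resolved against the user's HDFS home directory; adjust as needed.

// Minimal sketch: list the contents of the input directory on HDFS.
// If the listing is empty or the path is missing, FileInputFormat finds 0 paths.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class InputPathCheck {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        Path input = new Path("urls");   // hypothetical seed directory
        if (!fs.exists(input)) {
            System.out.println(input + " does not exist on HDFS");
            return;
        }

        for (FileStatus status : fs.listStatus(input)) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }

        fs.close();
    }
}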

...