
Partition file by content in HDFS

+1 vote
349 views

When a user uploads a file from the local disk to HDFS, can I make it partition the file into blocks based on its content?

Meaning, if I have a file with one integer column, can I say that I want an HDFS block to hold only the even numbers?
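
For context: HDFS itself splits an uploaded file into fixed-size blocks by byte offset, not by content, so a split like this would have to be done on the client side before or during the upload. A minimal sketch of that idea in Java, assuming the standard org.apache.hadoop.fs.FileSystem API; the file names are hypothetical:

  import java.io.BufferedReader;
  import java.io.FileReader;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class ContentPartitionUpload {
      public static void main(String[] args) throws Exception {
          FileSystem fs = FileSystem.get(new Configuration());
          // Hypothetical local input and HDFS output paths
          try (BufferedReader in = new BufferedReader(new FileReader("numbers.txt"));
               FSDataOutputStream even = fs.create(new Path("/data/even.txt"));
               FSDataOutputStream odd = fs.create(new Path("/data/odd.txt"))) {
              String line;
              while ((line = in.readLine()) != null) {
                  long n = Long.parseLong(line.trim());
                  // Route each record by its value rather than by byte offset
                  (n % 2 == 0 ? even : odd).writeBytes(line + "\n");
              }
          }
      }
  }

Each of the two resulting HDFS files then gets its own blocks, which is as close as HDFS gets to "a block with only even numbers".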

posted May 13, 2014 by Sanketi Garg

Similar Questions
+1 vote

I have a use case that requires transferring input files from remote storage using the SCP protocol (via the jSch jar). To optimize it, I have pre-loaded all my input files into HDFS and modified the use case so that it copies the required files from HDFS. So when the TaskTrackers run, each copies the required input files from HDFS to its local directory.

All my TaskTrackers are also DataNodes, and I can see that my use case now runs faster. The only modification in my application is that files are copied from HDFS instead of transferred using SCP. My use case also involves parallel operations (run in the TaskTrackers) that do a lot of file transfers; all of these transfers are now replaced with HDFS copies.

Can anyone tell me why the HDFS transfer is faster, as I witnessed? Is it because it uses TCP/IP? Can anyone give me reasonable explanations for the decrease in time?
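
For reference, the HDFS-side copy described above is a single FileSystem call; when the TaskTracker machine also hosts a replica of the blocks, the read is served by the co-located DataNode rather than by a remote machine, which is one plausible source of the speedup. A sketch in Java with hypothetical paths:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class FetchInput {
      public static void main(String[] args) throws Exception {
          FileSystem fs = FileSystem.get(new Configuration());
          // Copy one input file from HDFS to the task's local working directory
          fs.copyToLocalFile(new Path("/inputs/part-0001.dat"),
                             new Path("/tmp/task/part-0001.dat"));
      }
  }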

0 votes

I was trying to implement a Hadoop/Spark audit tool, but I hit a problem: I can't get the input file location and file name. I can get the username, IP address, time, and user command from hdfs-audit.log, but when I submit a MapReduce job, I can't see the input file location either in the Hadoop logs or in the Hadoop ResourceManager.

Does Hadoop have an API or a log that contains this info, perhaps through some configuration? If it does, what should I configure?
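
One possible hook, shown as a sketch rather than a confirmed answer: a MapReduce job's input paths are stored in its configuration under mapreduce.input.fileinputformat.inputdir and can be read back from the Job object (the input directory here is hypothetical):

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

  public class ShowInputPaths {
      public static void main(String[] args) throws Exception {
          Job job = Job.getInstance();
          FileInputFormat.addInputPath(job, new Path("/data/in"));
          // getInputPaths reads the same value that lives in the job config
          // under "mapreduce.input.fileinputformat.inputdir"
          for (Path p : FileInputFormat.getInputPaths(job)) {
              System.out.println("input: " + p);
          }
      }
  }

Whether that property also ends up in a log that an audit tool can consume would still need to be confirmed.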

0 votes

I have a basic question regarding HDFS file reads. I want to know what happens when the following steps are followed:

  1. A client opens the file for reading and starts reading it.
  2. In the meantime, someone deletes the file and it moves to the trash folder.

Will step 1 succeed? I feel that since the client has already opened the file and the file still exists in .Trash, the client should be able to continue reading it.
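
The scenario is easy to reproduce in Java; a sketch, assuming trash is enabled (fs.trash.interval > 0) and using a hypothetical path. With trash enabled, the "delete" is really a rename into /user/<name>/.Trash, so the blocks are not reclaimed while the reader holds the stream:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.fs.Trash;

  public class ReadWhileTrashed {
      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          FileSystem fs = FileSystem.get(conf);
          Path file = new Path("/data/big.txt");    // hypothetical

          FSDataInputStream in = fs.open(file);     // step 1: open and start reading
          byte[] buf = new byte[4096];
          in.read(buf);

          // step 2: "delete" the file; this renames it into .Trash
          Trash.moveToAppropriateTrash(fs, file, conf);

          while (in.read(buf) != -1) { /* the already-open stream keeps reading */ }
          in.close();
      }
  }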

0 votes

Suppose there are 10 HDFS blocks to be copied from one machine to another, but the other machine can copy only 7.5 blocks. Is there a possibility for the blocks to be broken down at the time of replication?

+4 votes

My requirement is a typical data warehouse and ETL requirement. I need to accomplish the following:

1) Daily insert of transaction records into a Hive table or an HDFS file. This table or file is not big (approximately 10 records per day), and I don't want to partition the table/file.

A few articles mention that we need to load into a staging table in Hive first and then insert like the below:

insert overwrite table finaltable select * from staging;

I am not getting this logic. How should I populate the staging table daily?
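
One common pattern, sketched here over Hive JDBC (the HiveServer2 URL and the incoming file path are hypothetical; staging and finaltable are the tables from the question): overwrite the staging table with each day's extract, then append from staging rather than overwriting finaltable:

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.Statement;

  public class DailyLoad {
      public static void main(String[] args) throws Exception {
          try (Connection c = DriverManager.getConnection(
                   "jdbc:hive2://hiveserver:10000/default", "etl", "");
               Statement s = c.createStatement()) {
              // Replace yesterday's staged rows with today's extract
              s.execute("LOAD DATA INPATH '/etl/incoming/txn.csv' "
                      + "OVERWRITE INTO TABLE staging");
              // Append the staged rows; the INSERT OVERWRITE from the
              // articles would instead rebuild finaltable from scratch
              s.execute("INSERT INTO TABLE finaltable SELECT * FROM staging");
          }
      }
  }

With roughly 10 rows a day, appending from staging keeps finaltable intact and avoids partitioning entirely.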

...