
Unable to compile hadoop source code

+3 votes
1,319 views

I checked out the source code from https://svn.apache.org/repos/asf/hadoop/common/trunk/ and tried to compile it with mvn. I am compiling on Mac OS X Mavericks.

It failed at the following stage

[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [5.017s] 
[INFO] Apache Hadoop Common .............................. FAILURE [1:39.797s] 
[INFO] Apache Hadoop NFS ................................. SKIPPED 
[INFO] Apache Hadoop Common Project ...................... SKIPPED
[INFO] ------------------------------------------------------------------------ 
[ERROR] Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: protoc --version did not return a version -> [Help 1] 
[ERROR]  
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. 
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
posted Jan 7, 2014 by Mandeep Sehgal

The error message says that the build needs protoc to be installed; search for protobuf on the web.

2 Answers

+1 vote

Try reading the build instructions for Hadoop: http://svn.apache.org/repos/asf/hadoop/common/trunk/BUILDING.txt

For your problem, protoc is not set in your PATH. After setting it, check that the protobuf version is 2.5.
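For example, a quick check (assuming protobuf was installed under the usual /usr/local prefix; adjust the path to wherever you installed it):

$ which protoc
$ protoc --version                    # should print something like: libprotoc 2.5.0
$ export PATH=/usr/local/bin:$PATH    # only needed if protoc is not found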

answer Jan 7, 2014 by Kumar Mitrasen
+1 vote

Download protobuf from http://code.google.com/p/protobuf/downloads/list, then build and install it:

$ ./configure
$ make
$ make check
$ make install
Then compile the Hadoop source code.
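For instance, once protoc is on the PATH, something like the following should rebuild Hadoop from the top of the checkout (see BUILDING.txt for the full list of profiles such as -Pdist; the directory name is just a placeholder):

$ cd hadoop-trunk                     # hypothetical checkout directory
$ mvn clean install -DskipTests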

answer Jan 7, 2014 by Jagan Mishra
Similar Questions
+1 vote

I downloaded the Hadoop source code from GitHub, but after importing the files into Eclipse some of the classes and packages are missing, and I am not able to find those files online.

Please help me get all the required files in one go, and point me to a link describing which files I need to import into Eclipse.
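One common cause of missing classes (an illustration, not something stated in the question) is that parts of the Hadoop tree are generated during the build, e.g. from protobuf definitions, so a plain checkout will not resolve in Eclipse until Maven has run once. A rough sketch, assuming the maven-eclipse-plugin is acceptable for your setup:

$ mvn clean install -DskipTests       # generates the missing sources
$ mvn eclipse:eclipse -DskipTests     # writes .project/.classpath files for Eclipse

Then import the projects via File > Import > Existing Projects into Workspace.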

+1 vote

I have a Hadoop cluster set up on Amazon EC2. When I try to access the application logs through the Web UI, I get "page can't be displayed".

Cluster configuration: my NameNode is mapped to a static Elastic IP on EC2. The other datanodes' public IPs change every day because we stop the cluster during non-working hours.

Observation: when I try to view the logs, it picks one of the datanodes' private IPs and I get "page can't be displayed".
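One common workaround for this kind of setup (an illustration, not something stated in the question) is to tunnel the browser through the NameNode so that the datanodes' private addresses resolve from inside EC2; the key file and user name below are placeholders:

$ ssh -i mykey.pem -N -D 8157 ec2-user@<namenode-elastic-ip>

and then configure the browser to use a SOCKS proxy on localhost:8157.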

0 votes

Our Hadoop cluster is using HDFS Federation, but when I use the following command to report the HDFS status:

$ ./hdfs dfsadmin -report
report: FileSystem viewfs://nsX/ is not an HDFS file system
Usage: hdfs dfsadmin [-report] [-live] [-dead] [-decommissioning]

It tells me that viewfs is not an HDFS file system. How can I report the HDFS status in this case?
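Because viewfs:// is only a client-side mount table, dfsadmin has to be pointed at one concrete namenode per namespace. A sketch, assuming a hypothetical namenode address nn1.example.com:8020 and relying on the generic -fs option that Hadoop's ToolRunner-based commands accept:

$ ./hdfs dfsadmin -fs hdfs://nn1.example.com:8020 -report

Repeat for each namenode in the federation.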

0 votes

I want to ask: what is the best way to implement a job that imports files into HDFS?

I have an external system offering data through a REST API. My goal is to have a job running in Hadoop, periodically (maybe started by cron?), that checks the REST API for new data.

It would also be nice if this job could run on multiple data nodes. But unlike all the MapReduce examples I found, my job looks for new or changed data on an external interface and compares it with the data that already exists.

This is a conceptual example of the job:

  1. The job asks the REST API whether there are new files.
  2. If so, the job imports the first file in the list.
  3. It checks whether the file already exists.
  4. If not, the job imports the file.
  5. If it does, the job compares the data with the data already stored.
  6. If the data has changed, the job updates the file.
  7. If more files exist, the job continues with step 2;
  8. otherwise it ends.

Can anybody give me a little help on how to start? (It's the first job I'm writing...)
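As a starting point, here is a minimal single-machine sketch of the loop described above (not a MapReduce job, so it does not run across data nodes); the REST endpoint, paths, and file-listing format are all hypothetical, and the script could be scheduled with cron:

#!/bin/bash
# import.sh - hypothetical sketch; assumes the API returns one file name per line
API=http://example.com/api/files      # hypothetical REST endpoint
HDFS_DIR=/data/imports                # hypothetical target directory in HDFS

for f in $(curl -s "$API"); do
  curl -s "$API/$f" -o "/tmp/$f"                      # step 2: fetch the file
  if ! hdfs dfs -test -e "$HDFS_DIR/$f"; then
    hdfs dfs -put "/tmp/$f" "$HDFS_DIR/$f"            # steps 3-4: not stored yet, import it
  else
    hdfs dfs -cat "$HDFS_DIR/$f" > "/tmp/$f.stored"   # step 5: fetch the stored copy
    cmp -s "/tmp/$f" "/tmp/$f.stored" || hdfs dfs -put -f "/tmp/$f" "$HDFS_DIR/$f"   # step 6: update if changed
  fi
  rm -f "/tmp/$f" "/tmp/$f.stored"
done

For running it in a distributed and scheduled way, a workflow tool such as Apache Oozie, or rewriting the comparison step as a MapReduce job, would be the next step.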

...