Hadoop Java API to get maximum-am-resource-percent?

0 votes
301 views

Is there a Java API for getting yarn.scheduler.capacity.maximum-am-resource-percent for a particular queue?

Unfortunately, QueueInfo doesn't seem to contain this info, so something like this wouldn't work:

import java.util.Properties;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Cluster;
import org.apache.hadoop.mapreduce.QueueInfo;
Configuration conf = new Configuration(...);
Cluster cluster = new Cluster(conf);
QueueInfo queueInfo = cluster.getQueue("default");
Properties p = queueInfo.getProperties(); // This is empty! :(
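
The only workaround I can think of (just a sketch, assuming capacity-scheduler.xml is available on the client classpath and that the standard CapacityScheduler property names are used) is to read the setting straight from the Configuration instead of going through QueueInfo:

import org.apache.hadoop.conf.Configuration;

Configuration schedConf = new Configuration();
schedConf.addResource("capacity-scheduler.xml");   // assumed to be on the classpath

// Cluster-wide value, which defaults to 0.1 when unset.
float clusterDefault = schedConf.getFloat(
        "yarn.scheduler.capacity.maximum-am-resource-percent", 0.1f);
// Per-queue override for the "default" queue falls back to the cluster-wide value.
float defaultQueue = schedConf.getFloat(
        "yarn.scheduler.capacity.root.default.maximum-am-resource-percent", clusterDefault);

The per-queue key (yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent) overrides the cluster-wide one, but it is read from the scheduler configuration file rather than exposed through the queue objects.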
posted Dec 22, 2016 by anonymous


Similar Questions
+2 votes

Did anyone get this error before? Please help.

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: xxxxx.com:50010:DataXceiver error processing WRITE_BLOCK operation  src: /xxxxxxxx:39000 dst: /xxxxxx:50010

java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:167)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:604)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:126)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:72)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:225)
at java.lang.Thread.run(Thread.java:745)
2015-01-11 04:13:21,846 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write packet to mirror took 657ms (threshold=300ms)
0 votes

I can't find any information on whether it is possible, or how difficult it would be, to install Hadoop as a single node on Windows 8 running Oracle Java 8. The tutorial on Hadoop 2 on Windows mentions neither Windows 8 nor Java 8.

Is there anything known about this?

+2 votes

I have a working version of Java 7 installed and can execute Java programs on the workstation. When I start HDFS, the startup process aborts with a message that JAVA_HOME is not set.

OS: Ubuntu 13.04 (Raring Ringtail)
Hadoop version: 2.1.1-beta
Java version: java-7-openjdk-amd64
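
(For reference: the daemon start scripts read JAVA_HOME from etc/hadoop/hadoop-env.sh, so a common fix, assuming the usual Ubuntu install path for the package listed above, is to set it there explicitly rather than relying on the login shell: export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64)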

0 votes

I want to ask: what is the best way to implement a job that imports files into HDFS?

I have an external system offering data through a REST API. My goal is to have a job running in Hadoop that periodically (maybe started by cron?) checks the REST API for new data.

It would also be nice if this job could run on multiple data nodes. But unlike all the MapReduce examples I have found, my job looks for new or changed data from an external interface and compares it with the data already stored.

This is a conceptual example of the job:

  1. The job asks the REST API whether there are new files.
  2. If so, the job imports the first file in the list.
  3. It checks whether that file already exists in HDFS.
  4. If not, the job imports the file.
  5. If yes, the job compares the new data with the data already stored.
  6. If the data has changed, the job updates the file.
  7. If more files exist, the job continues with step 2.
  8. Otherwise it ends.

Can anybody give me a little help on how to start? (It's the first job I have written...)
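
As a starting point, here is a minimal single-process sketch of the loop described above; it does not use MapReduce or run on multiple nodes. Everything on the REST side is assumed (hypothetical): an endpoint that returns one file name per line, and serves each file's content under that name. The HDFS side uses only the standard FileSystem API.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: poll a (hypothetical) REST API and mirror its files into HDFS.
public class RestToHdfsImporter {

    private static final String API_BASE = "http://example.com/api/files"; // assumed endpoint
    private static final Path TARGET_DIR = new Path("/imports");           // assumed HDFS dir

    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Step 1: ask the REST API for the list of available files (assumed: one name per line).
        String[] names = new String(readAll(new URL(API_BASE).openStream()), "UTF-8").split("\n");

        for (String name : names) {                          // steps 2 and 7: walk the list
            if (name.trim().isEmpty()) continue;
            byte[] remote = readAll(new URL(API_BASE + "/" + name.trim()).openStream());
            Path target = new Path(TARGET_DIR, name.trim());

            if (!fs.exists(target)) {                        // steps 3 and 4: not there yet, import
                write(fs, target, remote);
            } else {                                         // step 5: compare with the stored copy
                byte[] stored = readAll(fs.open(target));
                if (!Arrays.equals(stored, remote)) {        // step 6: changed, overwrite
                    write(fs, target, remote);
                }
            }
        }                                                    // step 8: done
        fs.close();
    }

    private static void write(FileSystem fs, Path target, byte[] data) throws IOException {
        FSDataOutputStream out = fs.create(target, true);    // true = overwrite existing file
        try { out.write(data); } finally { out.close(); }
    }

    private static byte[] readAll(InputStream in) throws IOException {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            byte[] b = new byte[8192];
            int n;
            while ((n = in.read(b)) != -1) buf.write(b, 0, n);
            return buf.toByteArray();
        } finally { in.close(); }
    }
}

For the periodic part, running this from cron is probably the simplest start; distributing the per-file work across nodes (for example as a map-only job) can be layered on later.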

...