How can we install Hadoop 1.2.1 on RHEL 7/6/5 or on CentOS?

+1 vote
982 views
posted May 7, 2015 by Amit Kumar Pandey


1 Answer

+1 vote
 
Best answer

Step 1. Install Java
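
Hadoop 1.2.1 needs a working JDK (Java 6 or 7). This guide assumes a JDK 1.7.0_75 installed under /opt/jdk1.7.0_75 (see Step 5.4); as a simpler alternative, the distribution's OpenJDK packages also work (a sketch; the exact package name varies by RHEL/CentOS release):

# yum install java-1.7.0-openjdk-devel
# java -version

Whichever JDK you choose, note its install path; you will need it for JAVA_HOME in Step 5.4.
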
Step 2. Create User Account

Now create a dedicated system user account for the Hadoop installation, i.e. a user named hadoop:

# useradd hadoop
# passwd hadoop

Step 3. Configure Key-Based Login
Hadoop requires the hadoop user to be able to SSH to itself without a password. The following commands enable key-based login for the hadoop user:

# su - hadoop
$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
$ exit
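
Before continuing, you can confirm that key-based login works. The first connection may ask you to accept the host key, but it should not prompt for a password (a quick check, run from root):

# su - hadoop
$ ssh localhost 'echo key-based login works'
$ exit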

Step 4. Download and Extract Hadoop Source
Download the latest available Hadoop release from the official Apache site (version 1.2.1 is used here), then follow the steps below:

# mkdir /opt/hadoop
# cd /opt/hadoop/
# wget http://apache.mesi.com.ar/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
# tar -xzf hadoop-1.2.1.tar.gz
# mv hadoop-1.2.1 hadoop
# chown -R hadoop /opt/hadoop
# cd /opt/hadoop/hadoop/
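
Optionally (this step is not in the original write-up), add Hadoop's bin directory to the hadoop user's PATH so the hadoop commands can be run without the bin/ prefix. HADOOP_PREFIX is used here rather than HADOOP_HOME, since Hadoop 1.x prints a deprecation warning for the latter:

# su - hadoop
$ echo 'export HADOOP_PREFIX=/opt/hadoop/hadoop' >> ~/.bashrc
$ echo 'export PATH=$PATH:$HADOOP_PREFIX/bin' >> ~/.bashrc
$ exit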

Step 5. Configure Hadoop

First, edit the Hadoop configuration files and make the following changes.

5.1 Edit core-site.xml

# vim conf/core-site.xml
# Add the following inside the <configuration> tag
<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000/</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>

5.2 Edit hdfs-site.xml

# vim conf/hdfs-site.xml
# Add the following inside the <configuration> tag
<property>
    <name>dfs.data.dir</name>
    <value>/opt/hadoop/hadoop/dfs/name/data</value>
    <final>true</final>
</property>
<property>
    <name>dfs.name.dir</name>
    <value>/opt/hadoop/hadoop/dfs/name</value>
    <final>true</final>
</property>
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>

5.3 Edit mapred-site.xml

# vim conf/mapred-site.xml
# Add the following inside the <configuration> tag
<property>
        <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
</property>

5.4 Edit hadoop-env.sh

# vim conf/hadoop-env.sh
export JAVA_HOME=/opt/jdk1.7.0_75
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
Set the JAVA_HOME path according to where Java is installed on your system.
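
If you are unsure of the correct JAVA_HOME value, one way to find it (assuming java is on your PATH) is to resolve the real location of the java binary; JAVA_HOME is that path minus the trailing /jre/bin/java (or /bin/java):

# readlink -f $(which java)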

Next, format the NameNode:

# su - hadoop
$ cd /opt/hadoop/hadoop
$ bin/hadoop namenode -format
13/06/02 22:53:48 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = srv1.tecadmin.net/192.168.1.90
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473; compiled by 'hortonfo' on Mon May  6 06:59:37 UTC 2013
STARTUP_MSG:   java = 1.7.0_75
************************************************************/
13/06/02 22:53:48 INFO util.GSet: Computing capacity for map BlocksMap
13/06/02 22:53:48 INFO util.GSet: VM type       = 32-bit
13/06/02 22:53:48 INFO util.GSet: 2.0% max memory = **********
13/06/02 22:53:48 INFO util.GSet: capacity      = 2^22 = 4194304 entries
13/06/02 22:53:48 INFO util.GSet: recommended=4194304, actual=4194304
13/06/02 22:53:49 INFO namenode.FSNamesystem: fsOwner=hadoop
13/06/02 22:53:49 INFO namenode.FSNamesystem: supergroup=supergroup
13/06/02 22:53:49 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/06/02 22:53:49 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/06/02 22:53:49 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/06/02 22:53:49 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
13/06/02 22:53:49 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/06/02 22:53:49 INFO common.Storage: Image file of size 112 saved in 0 seconds.
13/06/02 22:53:49 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/opt/hadoop/hadoop/dfs/name/current/edits
13/06/02 22:53:49 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/opt/hadoop/hadoop/dfs/name/current/edits
13/06/02 22:53:49 INFO common.Storage: Storage directory /opt/hadoop/hadoop/dfs/name has been successfully formatted.
13/06/02 22:53:49 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at srv1.tecadmin.net/192.168.1.90
************************************************************/

Step 6. Start Hadoop Services
Use the following command to start all Hadoop services:

$ bin/start-all.sh

Step 7. Test and Access Hadoop Services
Use the jps command to check whether all services started correctly.
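
On a healthy single-node setup, jps should list the five Hadoop 1.x daemons alongside Jps itself. The process IDs below are only illustrative, not output captured from this install:

$ jps
26049 NameNode
26132 DataNode
26218 SecondaryNameNode
26305 JobTracker
26392 TaskTracker
26480 Jps

You can also browse the built-in web interfaces on the Hadoop 1.x default ports: the NameNode UI at http://localhost:50070/ and the JobTracker UI at http://localhost:50030/. As a final smoke test, create a directory in HDFS and list the root:

$ bin/hadoop fs -mkdir /test
$ bin/hadoop fs -ls /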

Credit: http://tecadmin.net/setup-hadoop-single-node-cluster-on-centos-redhat/

answer May 7, 2015 by Salil Agrawal
Similar Questions
+2 votes

I'm having an issue getting a CentOS 6.6 install to work on a 3 TB dual-hard-drive system in RAID 0. I'm hoping that someone here can help.

I install as normal, but on reboot it comes to a GRUB prompt. Going into the system via Linux rescue, I see that most of the kernel-related files haven't been installed.

I asked the maker of the server, and he said they have noticed this happen recently. One workaround is to put the kernel files on a thumb drive and point the OS to look for them there.

I have yet to try it, but is there a better way to deal with this issue that anyone else has found?

+2 votes

I tried to install CentOS 7 on a new system. It works.

However, I'm noticing small things:
1. system-config-network-tui is not installed and yum cannot find it. I realized its replacement is nmtui.
2. What about the firewall? I can't figure out what replaces system-config-firewall-tui.

0 votes

I have my product binary built on RHEL 5.5 and my customer is running 6.2. I just want to know about the compatibility between these two versions. Can we ship the binary to the customer?

Thanks

+1 vote

I want to install path.py in my Python 3.4 environment on a CentOS 5 box. My /usr/local/bin/ contains:

easy_install-3.4 
python3.4  
etc. 

We are behind a proxy server and I tried this:

# /usr/local/bin/easy_install-3.4 path.py 

Searching for path.py
Reading https://pypi.python.org/simple/path.py/
Download error on https://pypi.python.org/simple/path.py/: hostname '172.29.68.1' doesn't match either of 'www.python.org', 'python.org', 'pypi.python.org', 'docs.python.org', 'testpypi.python.org', 'bugs.python.org', 'wiki.python.org', 'hg.python.org', 'mail.python.org', 'packaging.python.org', 'pythonhosted.org', 'www.pythonhosted.org', 'test.pythonhosted.org', 'us.pycon.org', 'id.python.org' --
Some packages may not be found!

Couldn't find index page for 'path.py' (maybe misspelled?). Am I better off using pip or easy_install? And if easy_install, how can I fix the above error?

...