
Hadoop Components Installation on Cluster

by Attune World Wide

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failures.

For the base Hadoop setup, please refer to my previous blog post, Hadoop Cluster Installation.

Now we are going to install the Hadoop components in a cluster environment.

HBase Installation

You will notice that setting up a multi-node HBase cluster is much like setting up a Hadoop cluster. We define the HRegionServers (slaves) in the file $HBASE_HOME/conf/regionservers, and this needs to be done on the HMaster (master) node. On all master and slave machines, hbase-site.xml should refer to the master machine's IP address in place of localhost or master.

Ideally this should be enough to get your HBase cluster running. A quick review for the two-node setup, on the master machine: the regionservers file should look like

master
slave

Then add the following two properties to both machines' hbase-site.xml, which tell HBase about the ZooKeeper instance we just set up.

hbase.zookeeper.property.clientPort: property from ZooKeeper's config zoo.cfg; the port at which the clients will connect.
hbase.zookeeper.quorum: comma-separated list of servers in the ZooKeeper quorum.
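The XML for these two properties was stripped from this post; a minimal sketch of what it likely looked like (the hostnames master and slave are placeholders for your own machines):

```xml
<!-- hbase-site.xml: point HBase at the standalone ZooKeeper quorum -->
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>master,slave</value>
</property>
```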

That's it! Now give it a spin. First make sure nothing is running by checking jps, or stop everything first. Then try the following commands on the master machine, which should start everything on the slave as well.

$ start-dfs.sh       #starts the HDFS
$ start-mapred.sh    #starts the mapred daemons
$ zkServer.sh start  #starts our own zookeeper
$ start-hbase.sh     #starts the hbase cluster

Test the setup using jps. On the master:

23143 Jps
22985 HRegionServer
22817 HMaster
22767 QuorumPeer
5750 SecondaryNameNode
5399 NameNode
5838 JobTracker
5567 DataNode
6006 TaskTracker

On Slave:

5613 Jps
5797 HRegionServer
3243 DataNode
8274 TaskTracker


We will set up a separate ZooKeeper of our own rather than use the one that HBase provides, because it is always a good idea to keep components separated; it will also be helpful later when you manage ZooKeeper yourself, since it is a fail-fast process.

If you have already set up HBase on a single node as described in the previous guide, you will have an HQuorumPeer process running, which is the internal ZooKeeper provided by HBase. Let's get rid of it and use a fresh ZooKeeper by downloading it. Use the following commands.

ZooKeeper Installation

$ cd /usr/local
$ # download zookeeper-x.x.x.tar.gz from an Apache mirror into /usr/local first
$ tar zxf zookeeper-x.x.x.tar.gz
$ mv zookeeper-x.x.x zookeeper
$ chown -R hduser:hduser zookeeper

Now add an entry for ZooKeeper in your .bashrc file:

export ZK_HOME=/usr/local/zookeeper
export PATH=$PATH:[your recent entries]:$ZK_HOME/bin

To configure ZooKeeper, go to its conf directory.

You may or may not find zoo.cfg there; if not, do a cp zoo_sample.cfg zoo.cfg.

You don't really need to edit it, but you may want to change dataDir to something more durable than the default location under /tmp.
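The example value was lost from this post; a minimal zoo.cfg sketch, where the data directory path is a placeholder you should adjust:

```
tickTime=2000
dataDir=/usr/local/zookeeper/data
clientPort=2181
```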


This should be enough to run our ZooKeeper on port 2181 (the default). Now let's get rid of the one that HBase starts. This is easily done in $HBASE_HOME/conf/hbase-env.sh:

export HBASE_MANAGES_ZK=false

ZooKeeper is now installed successfully.

Sqoop Integration with Hadoop

  1. Get Sqoop from the Apache Sqoop downloads page.
  2. P.S.: I have tested Sqoop with Hadoop 1.1.2, so I downloaded sqoop-1.4.4.bin__hadoop-1.0.0.tar.gz.
  3. Extract it.
  4. Now you need to update the bundled hsqldb jar to hsqldb-2.2.8.jar.
  5. Now you need to copy the MySQL connector jar into $SQOOP_HOME/lib/.
  6. You will also need to copy sqoop-1.4.4.jar to $HADOOP_HOME/lib.
  7. Go to the bin directory (/../sqoop-1.4.4.bin__hadoop-1.0.0/bin).
  8. Type the command below.
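The steps above can be sketched as shell commands; the install path and the MySQL connector jar filename are assumptions for illustration, so substitute your own:

```shell
# extract sqoop and set SQOOP_HOME (install path is an assumption)
tar zxf sqoop-1.4.4.bin__hadoop-1.0.0.tar.gz -C /usr/local
export SQOOP_HOME=/usr/local/sqoop-1.4.4.bin__hadoop-1.0.0

# copy the MySQL connector jar into sqoop's lib (jar name is an assumption)
cp mysql-connector-java-5.1.26-bin.jar $SQOOP_HOME/lib/

# make sqoop's own jar visible to hadoop
cp $SQOOP_HOME/sqoop-1.4.4.jar $HADOOP_HOME/lib/

cd $SQOOP_HOME/bin
```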

To run against a MySQL database, you first need to place the MySQL connector jar in Sqoop's lib directory (/../sqoop-1.4.4.bin__hadoop-1.0.0/lib).

E.g. for MySQL:

./sqoop import --connect <Your JDBC URL>/<Your Database Name> --table <Your table name> --username <User Name> -P
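A filled-in version of the command, where the host, database, table, and user names are made-up placeholders:

```shell
# import the MySQL table "employees" from database "testdb" into HDFS;
# -P prompts for the password interactively
./sqoop import --connect jdbc:mysql://master:3306/testdb \
  --table employees --username hduser -P
```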


The HBase cluster setup is now ready to run MapReduce tasks, and Sqoop can also perform imports and exports of data from MySQL to HDFS and from HDFS to MySQL.
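For the HDFS-to-MySQL direction, a sketch of the corresponding export command; the table name and HDFS path are made-up placeholders:

```shell
# export the files under /user/hduser/employees back into the MySQL table
./sqoop export --connect jdbc:mysql://master:3306/testdb \
  --table employees --export-dir /user/hduser/employees \
  --username hduser -P
```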
