
Elastic MapReduce (Sahara): Introduction, Integration with Swift, and Installation

Today’s blog will walk you through an introduction to Sahara, how it integrates with Swift, and the steps to install Sahara on your system. Elastic MapReduce (Sahara) is one of the components of OpenStack, and with this component we continue our discussion of OpenStack components in this series!

Sahara:

The objective of Sahara is to give users a simple way to provision Hadoop clusters by specifying a few options such as the Hadoop version, the cluster topology, hardware details of the nodes, and so on. Once the user fills in these parameters, Sahara deploys the cluster in a few minutes.

Sahara also offers ways to scale an already provisioned cluster by adding or removing worker nodes on demand. This solution addresses the following use cases:

  1. Fast provisioning of Hadoop clusters on OpenStack for development and Quality Assurance (QA).
  2. Utilization of the unused, general-purpose compute capacity of an OpenStack IaaS cloud.
  3. "Analytics as a Service" for ad-hoc or bursty analytic workloads, similar to AWS EMR.

The main features of Sahara are:

  1. As mentioned earlier, Sahara is designed as a component of OpenStack.
  2. It is managed through a REST API, with a user interface available as part of the OpenStack Dashboard.
  3. It supports different Hadoop versions and distributions through:
     • a pluggable system of Hadoop installation engines, and
     • integration with vendor-specific management tools such as Apache Ambari or the Cloudera Management Console.
  4. It provides predefined templates of Hadoop configurations, with the ability to modify the parameters.

Sahara interacts with the following OpenStack components:

  • Horizon: –

It provides the graphical user interface, giving the ability to use all of Sahara's features.

  • Keystone: –

It authenticates users and provides the security tokens used to work with OpenStack, thereby limiting a user's capabilities in Sahara to his or her OpenStack privileges.

  • Nova: –

Nova provisions the virtual machines required for the Hadoop cluster.

  • Glance: –

The virtual machine images for Hadoop are stored in Glance; each image contains a pre-installed operating system and Hadoop. Pre-installed Hadoop gives a significant advantage in node start-up time.

  • Swift: –

It may be used as a storage repository for the data processed by Hadoop jobs; an example of how jobs reference data in Swift is shown below.
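For instance, a Hadoop job on a Sahara cluster typically addresses Swift data through swift:// URLs. The container name, paths, and jar file below are hypothetical, and the provider suffix (here .sahara) depends on how the Hadoop Swift file system is configured on your cluster:

 # hypothetical container and paths; the ".sahara" suffix is the configured provider name
 $ hadoop fs -ls swift://my-container.sahara/input/
 $ hadoop jar wordcount.jar WordCount swift://my-container.sahara/input swift://my-container.sahara/output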

Generic Workflow of Sahara:

Sahara offers two levels of abstraction for the API and the user interface, based on the two use cases: cluster provisioning and Analytics as a Service.

For fast cluster provisioning, the generic workflow is as follows (a sketch of the corresponding API call is given after this list):

  1. Select the Hadoop version.
  2. Select a base image, with or without pre-installed Hadoop:
     • Base images without pre-installed Hadoop are supported through the pluggable installation engines that integrate with vendor tooling.
  3. Define the cluster configuration, including the size and topology of the cluster, and set the various Hadoop parameters such as the heap size:
     • Configurable templates are provided to simplify the configuration of these Hadoop parameters.
  4. Provision the cluster: Sahara provisions the virtual machines and installs and configures Hadoop.
  5. Perform operations on the cluster: worker nodes can be added or removed as needed.
  6. Terminate the cluster when it is no longer required.

For Analytics as a Service, the generic workflow is as follows:

  1. Select one of the predefined Hadoop versions.
  2. Configure the job:
     • Choose the job type: Pig, Hive, jar file, etc.
     • Provide the job script source or the location of the jar file.
     • Choose the input and output data locations (initially, only Swift will be supported).
     • Choose the log location.
  3. Set a limit on the cluster size.
  4. Execute the job:
     • All cluster provisioning and job execution happen transparently to the user.
     • The cluster is removed automatically once the job is finished.
  5. Get the results of the computation (for example, from Swift).
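To make the cluster-provisioning workflow concrete, here is a rough sketch of how a cluster-creation request might be sent to Sahara's REST API with curl. It assumes Sahara listens on its default port 8386 on a host called controller, that a cluster template and an image were registered beforehand, and that the request fields follow the v1.1 API; the names and IDs below are placeholders, so treat this as an illustration rather than a definitive recipe:

 # placeholder host, project ID, template ID, and image ID
 $ TOKEN=$(openstack token issue -f value -c id)
 $ curl -s -X POST http://controller:8386/v1.1/$PROJECT_ID/clusters \
     -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
     -d '{
           "name": "demo-cluster",
           "plugin_name": "vanilla",
           "hadoop_version": "2.7.1",
           "cluster_template_id": "<cluster-template-uuid>",
           "default_image_id": "<image-uuid>"
         }'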

Integration of Sahara with Swift:

As discussed in the previous blog, Swift is the standard object storage service in an OpenStack environment, the equivalent of Amazon S3. As a rule, it is deployed on bare-metal machines. It is natural to expect Hadoop running on OpenStack to process the data stored there, and a few enhancements help with this task.

The first is a Hadoop file system implementation for Swift: with HADOOP-8545 in place, Hadoop jobs can work with Swift just as they do with HDFS.

The second is a Swift change, Change I6b1ba25b (merged), which implements the ability to list the endpoints for an object, account, or container. This makes it possible to integrate Swift with software that relies on data-locality information to avoid network overhead.
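To give an idea of what this integration looks like in practice, the sketch below passes the Swift file system settings to a Hadoop tool on the command line. The property names follow the fs.swift.service.<provider> convention of the hadoop-openstack module, but the provider name (sahara), the Keystone URL, the credentials, and the container are placeholder assumptions; on a Sahara-provisioned cluster these values are normally configured for you:

 # placeholder Keystone URL, credentials, and container
 $ hadoop distcp \
     -D fs.swift.service.sahara.auth.url=http://controller:5000/v2.0/tokens \
     -D fs.swift.service.sahara.tenant=demo \
     -D fs.swift.service.sahara.username=demo \
     -D fs.swift.service.sahara.password=secret \
     hdfs:///user/demo/output swift://my-container.sahara/backup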

Pluggable Installation and Monitoring:

In addition to the monitoring capabilities provided by vendor-specific Hadoop management tooling, Sahara provides pluggable integration with external monitoring systems such as Nagios or Zabbix.

Both the deployment and monitoring tools can be installed on stand-alone virtual machines, which allows a single instance to manage and monitor several clusters at once.

Architecture of Sahara:

The Sahara architecture consists of the following components:

  • Cluster Configuration Manager:

It holds all of the business logic.

  • Auth component:

It is responsible for the authentication and authorization of the client.

  • Data Access Layer or DAL:

The DAL persists Sahara's internal models in the database.

  • Provisioning of Virtual Machines:

This component is responsible for the interaction with Nova and Glance.

  • Installation:

A plugin mechanism is responsible for installing Hadoop on the provisioned virtual machines; management solutions such as Apache Ambari or the Cloudera Management Console can also be used to install Hadoop on the provisioned machines.

  • REST API:

It exposes Sahara's functionality through a REST API (a sketch of such a call is given after this list).

  • Python Sahara Client:

Like other OpenStack components, Sahara has its own Python client.

  • Sahara Pages:

The graphical user interface for Sahara is located in Horizon (the OpenStack Dashboard).
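As a small illustration of the REST API component, the request below lists the Hadoop plugins known to a Sahara deployment. The endpoint path follows the v1.1 API layout, and the host name, port, and project ID are placeholders; consult the API reference of your Sahara release for the exact URLs:

 # placeholder host and project ID
 $ TOKEN=$(openstack token issue -f value -c id)
 $ curl -s -H "X-Auth-Token: $TOKEN" http://controller:8386/v1.1/$PROJECT_ID/plugins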

Installation of Sahara:

Before moving on with the installation, note that Sahara should be installed in a way that keeps your system in a consistent state. For this purpose, we recommend one of the following three alternatives:

  1. Install Sahara through Fuel
  2. Install Sahara through RDO Havana+
  3. Install Sahara into a virtual environment.

Let’s discuss each of these installation methods:

To install Sahara with Fuel:

  1. Begin the installation and configuration of OpenStack by following the Quickstart.
  2. Start the Sahara service during the installation.

To install Sahara with RDO:

  1. Begin the installation and configuration of OpenStack by following the Quickstart.
  2. Install the sahara-api service using Yum:

$ yum install openstack-sahara

  • Then configure the sahara-api service as needed. The configuration file is located at:

/etc/sahara/sahara.conf

  • Create the database schema as follows:

 $ sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head

  • Start the sahara-api service:

 $ service openstack-sahara-api start

To install Sahara into a virtual environment:

First of all, you will need to install several packages using your operating system's package manager; the exact packages depend on the operating system you are using.

For Ubuntu, run the following command:

$ sudo apt-get install python-setuptools python-virtualenv python-dev

For Fedora, run the following command:

$ sudo yum install gcc python-setuptools python-virtualenv python-devel

For CentOS, run the following commands:

$ sudo yum install gcc python-setuptools python-devel

$ sudo easy_install pip

$ sudo pip install virtualenv

  • Create a virtual environment for Sahara:

 $ virtualenv sahara-venv

The above command installs a Python virtual environment into the sahara-venv directory in your current working directory. This command does not require superuser privileges and can be run in any directory to which the current user has write permission.

  • You can install the latest version of Sahara from PyPI as follows:

 $ sahara-venv/bin/pip install sahara

Alternatively, you can download a Sahara archive from http://tarballs.openstack.org/sahara/ and install it using pip:

 $ sahara-venv/bin/pip install 'http://tarballs.openstack.org/sahara/sahara-master.tar.gz'

Important Note:

Remember that sahara-master.tar.gz contains the latest changes and therefore may not be stable. We recommend browsing http://tarballs.openstack.org/sahara/ and choosing a recent stable release of Sahara instead.

  • Once the installation is finished, create a configuration file from the sample config located at sahara-venv/share/sahara/sahara.conf.sample-basic:

 $ mkdir sahara-venv/etc

 $ cp sahara-venv/share/sahara/sahara.conf.sample-basic sahara-venv/etc/sahara.conf

Then make the required changes in sahara-venv/etc/sahara.conf.
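As a rough orientation, a minimal configuration usually needs at least a database connection string. The snippet below is illustrative only: the MySQL URL is a placeholder, and the option names for the OpenStack credentials differ between Sahara releases, so follow the comments in the sample file:

 [database]
 # placeholder URL; point this at your own database
 connection = mysql://sahara:sahara-password@localhost/sahara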

  • If you are using Sahara with a MySQL database and want to store big job binaries in Sahara's internal database, you should increase the maximum allowed packet size. Edit my.cnf and change the parameter as follows:

 …

 [mysqld]

 …
 max_allowed_packet = 256M

After that, restart the MySQL server, for example:
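The exact restart command depends on your distribution and init system; on a SysV-style Fedora or CentOS setup it might look like the following (the service name is an assumption, on Ubuntu it is typically mysql):

 # service name may be mysqld (Fedora/CentOS) or mysql (Ubuntu)
 $ sudo service mysqld restart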

  • Then create the database schema as follows:

$ sahara-venv/bin/sahara-db-manage --config-file sahara-venv/etc/sahara.conf upgrade head

  • Finally, start the Sahara API service as follows:

 $ sahara-venv/bin/sahara-api --config-file sahara-venv/etc/sahara.conf
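If you want to verify that the API is responding, you can query it over HTTP. This assumes the default Sahara API port 8386 and a service running on the local machine; it typically returns a small JSON document describing the available API versions:

 # assumes the default Sahara API port 8386 on the local machine
 $ curl http://localhost:8386/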

That completes the installation of Sahara!

That’s all for today! Please do not forget to leave a comment in the comment section below. Thank you for reading the blog. See you soon with another interesting blog!
