Optimisation of a hadoop cluster based on SDN in cloud computing for big data applications
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.

Big data has received a great deal of attention from many sectors, including academia, industry and government. The Hadoop framework has emerged to support its storage and analysis using the MapReduce programming model. However, this framework is a complex system with more than 150 configuration parameters, some of which can exert a considerable effect on the performance of a Hadoop job. Optimally tuning the Hadoop parameters is a difficult and time-consuming task. In this thesis, an optimisation approach is presented to improve the performance of the Hadoop framework by setting the values of the Hadoop parameters automatically. Specifically, genetic programming is used to construct a fitness function that represents the interrelations among the Hadoop parameters. Then, a genetic algorithm is employed to search for the optimum, or near-optimum, values of the Hadoop parameters. A Hadoop cluster is configured on two servers at Brunel University London to evaluate the performance of the proposed optimisation approach. The experimental results show that the performance of a Hadoop MapReduce job on 20 GB with the WordCount application is improved by 69.63% and 30.31% when compared to the default settings and the state of the art, respectively, while on the TeraSort application it is improved by 73.39% and 55.93%. For further optimisation, SDN is also employed to improve the performance of a Hadoop job. The experimental results show that the performance of a Hadoop job in an SDN-based network on 50 GB is improved by 32.8% when compared to a traditional network, while on the TeraSort application the improvement for 50 GB is on average 38.7%. An effective computing platform is also presented in this thesis to support solar irradiation data analytics. It is built on RHIPE to provide fast analysis and calculation for solar irradiation datasets.
The performance of RHIPE is compared with the R language in terms of accuracy, scalability and speedup. The speedup of RHIPE is evaluated by Gustafson's Law, which is revised to enhance the performance of parallel computation on intensive irradiation datasets in a cluster computing environment such as Hadoop. The performance of the proposed work is evaluated using a Hadoop cluster on the Microsoft Azure cloud, and the experimental results show that RHIPE provides considerable improvements over the R language. Finally, an effective SDN-based routing algorithm to improve the performance of a Hadoop job in a large-scale cluster in a data centre network is presented. The proposed algorithm improves the performance of a Hadoop job during the shuffle phase by allocating efficient paths for each shuffling flow, according to the network resource demand of each flow as well as their size and number. Furthermore, it is also employed to allocate alternative paths for each shuffling flow in the case of link failure. This algorithm is evaluated on two network topologies, namely fat-tree and leaf-spine, built with the EstiNet emulator software. The experimental results show that the proposed approach improves the performance of a Hadoop job in a data centre network.

Funder: Ministry of Higher Education and Scientific Research.
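The abstract above evaluates RHIPE's speedup via a revised form of Gustafson's Law. As a reference point, the standard (unrevised) law can be sketched as follows; the serial fraction and node count used here are illustrative assumptions, not figures from the thesis:

```python
def gustafson_speedup(n_nodes: int, serial_fraction: float) -> float:
    """Scaled speedup under Gustafson's Law: S(N) = s + (1 - s) * N,
    where s is the serial fraction of the workload and N is the
    number of workers (e.g. Hadoop nodes)."""
    s = serial_fraction
    return s + (1.0 - s) * n_nodes

# Illustrative values: with a 10% serial fraction, 8 nodes give a
# scaled speedup of about 7.3 rather than the ideal 8.
print(round(gustafson_speedup(8, 0.1), 2))
```

Unlike Amdahl's Law, which fixes the problem size, Gustafson's Law assumes the workload grows with the cluster, which is the natural regime for ever-growing irradiation datasets on Hadoop.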
Performance modelling and optimization for video-analytic algorithms in a cloud-like environment using machine learning
CCTV cameras produce a large amount of video surveillance data per day, and
analysing it requires significant computing resources that often need to be
scalable. The emergence of the Hadoop distributed processing framework has had
a significant impact on various data-intensive applications, as distributed
processing increases the capability of the applications it serves. Hadoop is
an open-source implementation of the MapReduce programming model. It automates
the creation of tasks for each function, distributes data, parallelises
execution and handles machine failures, relieving users from the complexity of
managing the underlying processing so that they can focus on building their
applications. It is noted that in a practical deployment the challenge of a
Hadoop-based architecture is that it requires several scalable machines for
effective processing, which in turn adds hardware investment cost to the
infrastructure. Although a cloud infrastructure offers scalable and elastic
utilisation of resources, where users can scale up or scale down the number of
Virtual Machines (VMs) on demand, a user such as a CCTV system operator
intending to use a public cloud would aspire to know what cloud resources
(i.e. number of VMs) need to be deployed
so that the processing can be done in the fastest (or within a known time
constraint) and the most cost effective manner. Often such resources will also
have to satisfy practical, procedural and legal requirements. The capability to
model a distributed processing architecture where the resource requirements can
be effectively and optimally predicted will thus be a useful tool, if
available. In the literature there is no clear and comprehensive modelling
framework that provides proactive resource allocation mechanisms to satisfy a
user's target requirements, especially for a processing-intensive application
such as video analytics.
In this thesis, with the aim of closing the above research gap, novel research
is first initiated by understanding the current legal practices and
requirements of implementing a video surveillance system within a distributed
processing and data storage environment, since the legal validity of data
gathered or processed within such a system is vital for a distributed system's
applicability in such domains.
Subsequently the thesis presents a comprehensive framework for the performance
modelling and optimisation of resource allocation in deploying a scalable
distributed video analytics application in a Hadoop-based framework, running
on a virtualised cluster of machines.
The proposed modelling framework investigates the use of several machine
learning algorithms, such as decision trees (M5P, RepTree), linear regression,
the Multi-Layer Perceptron (MLP) and an ensemble bagging classifier, to model
and predict the execution time of video analytics jobs based on
infrastructure-level as well as job-level parameters. Further, to allocate
resources under constraints and obtain optimal performance in terms of job
execution time, we propose a Genetic Algorithm (GA) based optimisation
technique.
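Both abstracts rely on a genetic algorithm to search a configuration space for minimal job execution time (Hadoop parameters in the first thesis, cloud VM resources here). A minimal, self-contained sketch of the idea follows; the stand-in performance model, parameter ranges and GA settings are illustrative assumptions, not values from either thesis:

```python
import random

# Illustrative stand-in for a learned performance model: predicted job
# execution time (s) as a function of VM count and input split size (MB).
# In the thesis this role is played by a trained ML model (e.g. MLP).
def predicted_time(num_vms: float, split_mb: float) -> float:
    return 3600.0 / num_vms + 0.5 * num_vms + 0.2 * abs(split_mb - 128)

BOUNDS = [(1, 64), (32, 512)]  # (num_vms, split_mb) search ranges

def ga_minimise(generations=60, pop_size=40, mut_rate=0.2, seed=1):
    rng = random.Random(seed)
    # Random initial population within the constraint bounds.
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: predicted_time(*ind))
        survivors = pop[: pop_size // 2]  # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]  # uniform crossover
            for i, (lo, hi) in enumerate(BOUNDS):  # Gaussian mutation, clamped
                if rng.random() < mut_rate:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, (hi - lo) * 0.1)))
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda ind: predicted_time(*ind))
    return best, predicted_time(*best)

best, t = ga_minimise()
print(f"best config: {best[0]:.1f} VMs, {best[1]:.0f} MB splits -> {t:.1f}s")
```

The GA never evaluates a real job; it queries the (cheap) predictive model, which is what makes proactive resource allocation feasible before any VMs are provisioned.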
Experimental results are provided to demonstrate the proposed framework's
capability to successfully predict the job execution time of a given video
analytics task based on infrastructure- and input-data-related parameters, and
its ability to determine the minimum job execution time given constraints on
these parameters.
Given the above, the thesis contributes to the state of the art in the design,
implementation, performance analysis and optimisation of distributed video
analytics
- …