
    FDMC: Framework for Decision Making in Cloud for Efficient Resource Management

    Effective resource management is one of the critical success factors for a precise virtualization process in cloud computing in the presence of dynamic user demands. A review of the existing research on resource management in the cloud shows that there is still large scope for enhancement: existing techniques do not fully exploit the capabilities of virtual machines when performing resource allocation. This paper presents FDMC, a Framework for Decision Making in Cloud, which gives VMs better capability to perform resource allocation. The contribution of FDMC is a joint operation of VMs that ensures faster task processing and thereby withstands increasing traffic. The study outcome was compared with existing systems, and FDMC was found to perform better in terms of task allocation time, number of cores wasted, amount of storage wasted, and communication cost.
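    The abstract does not spell out FDMC's decision rule, but its evaluation metrics (cores wasted, storage wasted, communication cost) suggest a placement score over candidate VMs. Below is a minimal, hypothetical Python sketch of such a rule; the VM fields and the pick_vm helper are illustrative assumptions, not FDMC's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    free_cores: int
    free_storage_gb: int

def pick_vm(vms, task_cores, task_storage_gb, comm_cost):
    """Choose the VM that can host the task while wasting the fewest
    cores and storage, penalised by a per-VM communication cost."""
    feasible = [v for v in vms
                if v.free_cores >= task_cores
                and v.free_storage_gb >= task_storage_gb]
    if not feasible:
        return None
    return min(feasible, key=lambda v:
               (v.free_cores - task_cores)              # cores wasted
               + (v.free_storage_gb - task_storage_gb)  # storage wasted
               + comm_cost.get(v.name, 0))              # network penalty

vms = [VM("vm1", 8, 100), VM("vm2", 4, 40)]
print(pick_vm(vms, task_cores=2, task_storage_gb=30,
              comm_cost={"vm1": 5, "vm2": 1}).name)  # -> vm2
```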

    Real-time big data processing for anomaly detection : a survey

    The advent of connected devices and the omnipresence of the Internet have paved the way for intruders to attack networks, leading to cyber-attacks, financial loss, information theft in healthcare, and cyber war. Hence, network security analytics has become an important area of concern and has lately gained intensive attention among researchers, specifically in the domain of network anomaly detection, which is considered crucial for network security. However, preliminary investigations have revealed that the existing approaches to detecting network anomalies are not effective enough, particularly in real time. The inefficacy of current approaches is mainly due to the amassment of massive volumes of data through connected devices. It is therefore crucial to propose a framework that effectively handles real-time big data processing and detects anomalies in networks. In this regard, this paper attempts to address the issue of detecting anomalies in real time. Accordingly, it surveys the state-of-the-art real-time big data processing technologies related to anomaly detection and the vital characteristics of the associated machine learning algorithms. The paper begins with an explanation of the essential context and a taxonomy of real-time big data processing, anomaly detection, and machine learning algorithms, followed by a review of big data processing technologies. Finally, the identified research challenges of real-time big data processing in anomaly detection are discussed. © 2018 Elsevier Ltd.
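    The survey covers technologies rather than a single algorithm, but a concrete feel for real-time detection over a stream helps frame it. The following is a minimal sketch of a rolling z-score detector; the class name, window size, and threshold are illustrative choices, not drawn from the paper.

```python
from collections import deque
import math

class StreamingZScoreDetector:
    """Flag a reading as anomalous when it deviates from the rolling
    mean by more than `threshold` standard deviations."""
    def __init__(self, window=100, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, x):
        anomalous = False
        if len(self.window) >= 10:   # warm up before judging
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > self.threshold:
                anomalous = True
        self.window.append(x)
        return anomalous

det = StreamingZScoreDetector()
stream = [10.0, 10.2] * 25 + [55.0]            # last value is an outlier
print([x for x in stream if det.observe(x)])   # -> [55.0]
```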

    Storage Solutions for Big Data Systems: A Qualitative Study and Comparison

    Big data systems development is full of challenges in view of the variety of application areas and domains that this technology promises to serve. Typically, the fundamental design decisions in big data systems design include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies into an optimized solution to a specific real-world problem, big data systems are no exception. As far as storage is concerned, the primary facet is the storage infrastructure, and NoSQL appears to be the technology that fulfills its requirements. However, every big data application has different data characteristics, and thus its data fits a different data model. This paper presents a feature and use-case analysis and comparison of the four main data models, namely document-oriented, key-value, graph, and wide-column. Moreover, a feature analysis of 80 NoSQL solutions is provided, elaborating on the criteria and points that a developer must consider while making a choice. Typically, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution, which brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings, and possible use cases of the available big data file formats for Hadoop, which is the foundation for most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage, and their challenges and future prospects are also discussed.
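    To make the four data models concrete, the sketch below expresses one small fact under each model. The record layouts and the store names in the comments are illustrative only; each real NoSQL system has its own API.

```python
# One "user follows user" fact expressed under the four NoSQL data models.

# Document-oriented (e.g. MongoDB-style): self-contained JSON document.
document = {"_id": "u42", "name": "Ada", "follows": ["u7", "u13"]}

# Key-value (e.g. Redis-style): opaque value behind a single key.
key_value = {"user:u42:follows": "u7,u13"}

# Wide-column (e.g. Cassandra-style): row key -> column family -> columns.
wide_column = {"u42": {"profile": {"name": "Ada"},
                       "follows": {"u7": 1, "u13": 1}}}

# Graph (e.g. Neo4j-style): explicit nodes and typed edges.
graph = {"nodes": ["u42", "u7", "u13"],
         "edges": [("u42", "FOLLOWS", "u7"), ("u42", "FOLLOWS", "u13")]}

# A graph store answers "who does u42 follow?" by walking edges:
print([dst for src, rel, dst in graph["edges"]
       if src == "u42" and rel == "FOLLOWS"])  # -> ['u7', 'u13']
```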

    A state-of-the-art optimization method for analyzing the tweets of an earthquake-prone region

    With the increase in accumulated data and usage of the Internet, social media platforms such as Twitter have become fundamental tools for accessing all kinds of information. Processing and preparing Twitter data and eliminating unnecessary information are therefore rapidly gaining importance. In particular, it is very important to analyze this information and make it available in emergencies such as disasters. In the proposed study, an earthquake of magnitude Mw = 6.8 that occurred on January 24, 2020, in Elazig province, Turkey, is analyzed in detail. Tweets under twelve hashtags are clustered separately using the Social Spider Optimization (SSO) algorithm with some modifications. The sum of intra-cluster distances (SICD) is used to measure the performance of the proposed clustering algorithm. In addition, SICD, which assigns each new solution to its nearest node, is formulated as an integer programming model and solved with the GUROBI package on the test datasets. The optimal results are gathered and compared with the results of the proposed SSO. In the study, the center tweets with optimal results are found using the modified SSO. Moreover, the results of the proposed SSO algorithm are compared with the K-means clustering technique, the most popular clustering technique, and the proposed SSO algorithm gives better results. In this way, the general situation of society after an earthquake can be deduced in order to provide moral and material support.
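    The SICD objective the paper optimizes can be stated compactly: each point is assigned to its nearest cluster center and the distances are summed, so a lower SICD means tighter clusters. A minimal sketch follows; the toy embeddings are illustrative, since the abstract does not give the exact tweet representation used.

```python
import math

def sicd(points, centers):
    """Sum of intra-cluster distances: each point is assigned to its
    nearest center and the point-to-center distances are accumulated."""
    total = 0.0
    for p in points:
        total += min(math.dist(p, c) for c in centers)
    return total

tweets = [(0.1, 0.2), (0.0, 0.3), (5.0, 5.1), (4.9, 5.0)]  # toy embeddings
centers = [(0.05, 0.25), (4.95, 5.05)]
print(round(sicd(tweets, centers), 3))  # lower SICD = tighter clusters
```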

    Load Balancer using Whale-Earthworm Optimization for Efficient Resource Scheduling in the IoT-Fog-Cloud Framework

    The Cloud-Fog environment is useful in offering optimized services to customers in their daily routine tasks. With the exponential usage of IoT devices, data is generated at huge scale. Service providers use optimization-based scheduling approaches to optimally allocate the scarce resources of the Fog computing environment and meet job deadlines. This study introduces the Whale-EarthWorm Optimization method (WEOA), a powerful hybrid optimization method for improving resource management in the Cloud-Fog environment. If only the Earthworm or the Whale optimization method is used, it is difficult to strike a balance between exploration and exploitation: the Earthworm technique can be inefficient because of its exploration and additional overhead, whereas the Whale algorithm's exploitation may leave scope for improvement in finding optimal solutions. This research introduces an efficient task allocation method as a novel load balancer. It leverages an enhanced exploration phase inspired by the Earthworm algorithm and an improved exploitation phase inspired by the Whale algorithm to manage the optimization process. It shows a notable performance enhancement, with a 6% reduction in response time, a 2% decrease in cost, and a 2% improvement in makespan over EEOA. Furthermore, compared to other approaches such as h-DEWOA, CSDEO, CSPSO, and BLEMO, the proposed method achieves remarkable results, with response time reductions of up to 82%, cost reductions of up to 75%, and makespan improvements of up to 80%.
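    The abstract does not give WEOA's update equations, so the following is only a generic sketch of the hybrid idea it describes: roughly half of the moves perturb an agent at random (Earthworm-style exploration) and the rest pull it toward the current best with a shrinking coefficient (Whale-style exploitation). All parameter values are illustrative.

```python
import random

def hybrid_optimize(cost, dim, n_agents=20, iters=200, bounds=(0.0, 1.0)):
    """Generic exploration/exploitation loop in the spirit of the
    Earthworm (explore) + Whale (exploit) hybrid described above."""
    lo, hi = bounds
    clamp = lambda v: min(hi, max(lo, v))
    agents = [[random.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_agents)]
    best = min(agents, key=cost)
    for t in range(iters):
        a = 2.0 * (1 - t / iters)              # shrinks from 2 to 0 over the run
        for i, x in enumerate(agents):
            if random.random() < 0.5:          # Earthworm-like exploration
                cand = [clamp(v + random.gauss(0, 0.1)) for v in x]
            else:                              # Whale-like exploitation
                cand = [clamp(b - a * random.random() * (b - v))
                        for v, b in zip(x, best)]
            if cost(cand) < cost(x):           # greedy replacement
                agents[i] = cand
        best = min(agents + [best], key=cost)
    return best

# Toy use: minimise squared distance of a 3-dim vector from 0.2.
sol = hybrid_optimize(lambda x: sum((v - 0.2) ** 2 for v in x), dim=3)
print([round(v, 2) for v in sol])              # values near 0.2
```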

    Artificial Intelligence Models for Scheduling Big Data Services on the Cloud

    The widespread adoption of Internet of Things (IoT) applications in many critical sectors (e.g., healthcare, unmanned autonomous systems) and the huge volumes of data generated by such applications have led to an unprecedented reliance on the cloud computing platform to store and process these data. Moreover, cloud providers receive massive waves of demand on their storage and computing resources. To help providers deal with such demands without sacrificing performance, the concept of cloud automation has recently arisen to improve performance and reduce the manual effort involved in managing cloud computing workloads. However, several challenges must be taken into consideration in order to guarantee optimal performance for big data storage and analytics in cloud computing environments. In this context, this thesis proposes a smart scheduling model as an automated big data task scheduling approach for cloud computing environments. The scheduling model combines Deep Reinforcement Learning (DRL), Federated Learning (FL), and Transfer Learning (TL) to automatically predict the IoT devices to which each incoming big data task should be scheduled, so as to improve performance and reduce execution cost. Furthermore, the long execution time and data shortage problems are addressed by introducing an FL-based solution that also ensures privacy preservation and reduces training and data complexity. The motivation of this thesis stems from four main observations/research gaps drawn from our literature reviews and/or experiments: (1) most existing cloud-based scheduling solutions consider the scheduling problem only from the task-priority viewpoint, which increases the amount of wasted resources in the presence of malicious or compromised IoT devices; (2) existing scheduling solutions in the domain of cloud and edge computing are still ineffective at making real-time decisions concerning resource allocation and management in cloud systems; (3) it is quite difficult to schedule tasks or learning models from servers in areas that are far from the objects and IoT devices, which entails significant delay and response time in transmitting data; and (4) none of the existing scheduling solutions has yet addressed dynamic task scheduling automation in complex and large-scale edge computing settings. This thesis addresses the scheduling challenges of the cloud and edge computing environment. To this end, we argue that trust should be an integral part of the decision-making process and therefore design a trust establishment mechanism between the edge server and IoT devices. The trust mechanism aims to detect IoT devices that over-utilize or under-utilize their resources. Thereafter, we design a smart scheduling algorithm that automates the scheduling of large-scale workloads onto edge cloud computing resources while taking into account the trust scores, task waiting times, and energy levels of the IoT devices to make appropriate scheduling decisions. Finally, we apply our scheduling strategy in the healthcare domain to investigate its applicability in a real-world scenario (COVID-19).
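    The thesis schedules with a DRL policy, which cannot be reproduced from the abstract alone; as a stand-in, the sketch below shows how the three stated inputs (trust score, task waiting time, energy level) might feed a single scheduling decision. The weights, the trust threshold, and the device fields are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    trust: float      # 0..1, from the trust establishment mechanism
    waiting_s: float  # current queue waiting time, in seconds
    energy: float     # remaining energy fraction, 0..1

def schedule(devices, w_trust=0.5, w_wait=0.3, w_energy=0.2):
    """Pick the device with the best weighted score; distrusted
    devices (trust below 0.3) are excluded outright."""
    trusted = [d for d in devices if d.trust >= 0.3]
    if not trusted:
        return None
    max_wait = max(d.waiting_s for d in trusted) or 1.0
    return max(trusted, key=lambda d:
               w_trust * d.trust
               + w_wait * (1 - d.waiting_s / max_wait)  # shorter queue is better
               + w_energy * d.energy)

devs = [Device("cam-1", 0.9, 4.0, 0.8),
        Device("gw-2", 0.6, 0.5, 0.9),
        Device("node-3", 0.2, 0.1, 1.0)]  # excluded: low trust
print(schedule(devs).name)                # -> gw-2
```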

    A Comprehensive Survey on Resource Management in Internet of Things, Journal of Telecommunications and Information Technology, 2020, no. 4

    Efficient resource management is a challenging task in distributed systems such as the Internet of Things, fog, edge, and cloud computing. In this work, we present a broad overview of the Internet of Things ecosystem and of the challenges related to managing its resources. We also investigate the need for efficient resource management and the guidelines suggested by Standards Development Organizations. Additionally, this paper contains a comprehensive survey of the individual phases of the resource management process, focusing on resource modeling, resource discovery, resource estimation, and resource allocation approaches, grouped by performance parameters or metrics as well as by architecture type. The paper also presents the architecture of a generic resource management enabler. Furthermore, we present open issues concerning resource management, pointing out directions for future research related to the Internet of Things.
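    To make the four surveyed phases concrete, the sketch below chains them into one toy pipeline. The function names and the first-fit allocation policy are illustrative assumptions; the survey describes the phases generically rather than prescribing an implementation.

```python
# The survey's four phases chained as one pipeline: resources are
# modelled, discovered against a query, their capacity estimated,
# and finally allocated. All names here are illustrative.

def model(raw):                   # resource modelling: normalise descriptions
    return [{"id": r[0], "type": r[1], "cpu": r[2]} for r in raw]

def discover(resources, rtype):   # resource discovery: filter by type
    return [r for r in resources if r["type"] == rtype]

def estimate(resource, demand):   # resource estimation: does demand fit?
    return resource["cpu"] >= demand

def allocate(resources, demand):  # resource allocation: first fit
    for r in resources:
        if estimate(r, demand):
            return r["id"]
    return None

raw = [("edge-1", "edge", 2), ("cloud-1", "cloud", 16), ("edge-2", "edge", 8)]
print(allocate(discover(model(raw), "edge"), demand=4))  # -> edge-2
```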