
    Towards green computing in wireless sensor networks: controlled mobility-aided balanced tree approach

    Virtualization technology has revolutionized the mobile network and is widely used in 5G innovation. It is a way of computing that allows dynamic leasing of server capabilities in the form of services such as SaaS, PaaS, and IaaS. The proliferation of these services among users has led to the establishment of large-scale cloud data centers that consume an enormous amount of electrical energy, resulting in high metered bill costs and a large carbon footprint. In this paper, we propose three heuristic models, namely Median Migration Time (MeMT), Smallest Void Detection (SVD) and Maximum Fill (MF), that can reduce energy consumption with minimal variation in the negotiated SLAs. Specifically, we derive the cost of running a cloud data center, the cost optimization problem and the resource utilization optimization problem. A power consumption model is developed for the cloud computing environment, focusing on the linear relationship between power consumption and resource utilization. A virtual machine migration technique is considered, focusing on a synchronization-oriented shorter stop-and-copy phase. Complete operational steps are developed as algorithms for the energy-aware heuristic models, including MeMT, SVD and MF. To evaluate the proposed heuristic models, we conduct experiments using PlanetLab server data of ten days and synthetic workload data collected randomly from a similar number of VMs as employed in the PlanetLab servers. Through the evaluation process, we deduce that the proposed approaches can significantly reduce energy consumption, total VM migrations, and host shutdowns while maintaining high system performance.
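    As a rough illustration of the linear power model and the Median Migration Time heuristic described above, the Python sketch below estimates host power from CPU utilization and picks, from a set of candidate VMs, the one whose estimated migration time is the median. The power figures, bandwidth, and VM sizes are hypothetical assumptions for illustration, not values from the paper.

```python
# Minimal sketch of the ideas described above (all numbers are hypothetical).

def host_power(utilization, p_idle=70.0, p_max=250.0):
    """Linear power model: power grows linearly with CPU utilization (0..1)."""
    return p_idle + (p_max - p_idle) * utilization

def median_migration_time_vm(vms, bandwidth_mbps=1000.0):
    """MeMT-style selection: choose the VM whose estimated migration time
    (RAM size / available bandwidth) is the median among the candidates."""
    ordered = sorted(vms, key=lambda vm: vm["ram_mb"] / bandwidth_mbps)
    return ordered[len(ordered) // 2]

if __name__ == "__main__":
    vms = [{"name": "vm1", "ram_mb": 512},
           {"name": "vm2", "ram_mb": 2048},
           {"name": "vm3", "ram_mb": 1024}]
    print(host_power(0.6))                # 178.0 W at 60% utilization under this model
    print(median_migration_time_vm(vms))  # VM with the median RAM/bandwidth ratio
```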

    Evolutionary algorithm-based multi-objective task scheduling optimization model in cloud environments

    Optimizing task scheduling in a distributed heterogeneous computing environment, which is a nonlinear multi-objective NP-hard problem, plays a critical role in decreasing service response time and cost, and boosting Quality of Service (QoS). This paper considers four conflicting objectives, namely minimizing task transfer time, task execution cost, power consumption, and task queue length, to develop a comprehensive multi-objective optimization model for task scheduling. The model reduces costs from both the customer and provider perspectives by considering execution and power costs. We evaluate the model by applying two multi-objective evolutionary algorithms, namely Multi-Objective Particle Swarm Optimization (MOPSO) and Multi-Objective Genetic Algorithm (MOGA). To implement the proposed model, we extend the CloudSim toolkit by using MOPSO and MOGA as its task scheduling algorithms, which determine the optimal task arrangement among VMs. The simulation results show that the proposed multi-objective model finds optimal trade-off solutions among the four conflicting objectives, which significantly reduces job response time and makespan. The model not only increases QoS but also decreases the cost to providers. From our experimental results, we find that MOPSO is a faster and more accurate evolutionary algorithm than MOGA for solving such problems.
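    To make the notion of trade-off solutions over the four objectives concrete, here is a minimal Pareto-dominance sketch in Python; the objective vectors and their values are made-up placeholders, and the actual MOPSO/MOGA machinery evaluated in the paper is not reproduced here.

```python
# Minimal sketch: Pareto dominance over four minimization objectives
# (transfer time, execution cost, power consumption, queue length).
# All numbers below are made-up placeholders.

def dominates(a, b):
    """True if schedule a is no worse than b in every objective
    and strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated objective vectors."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

if __name__ == "__main__":
    candidates = [
        (12.0, 0.8, 150.0, 4),   # (transfer time, cost, power, queue length)
        (10.0, 0.9, 160.0, 5),
        (15.0, 1.2, 170.0, 6),   # dominated by the first vector
    ]
    print(pareto_front(candidates))  # the first two vectors survive
```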

    Implicit Study of Techniques and Tools for Data Analysis of Complex Sensory Data

    The utility and contribution of applications in Wireless Sensor Networks (WSNs) have been experienced by users for more than a decade. However, with the passage of time, it has been found that there is massive growth in data generation even in WSNs. Sensors of small size, with limited battery life and minimal computational capability, cannot handle and process such a massive stream of complex data efficiently. Although various mining techniques are practiced today, such tools and techniques cannot be used efficiently to analyze such complex and massively growing data. This paper therefore discusses the generation of large data and the issues with existing research techniques by reviewing the literature and frequently used tools. The study finally outlines the significant research gap that calls for data analytical tools capable of extracting knowledge from complex sensory data.

    Couplers for linking environmental models: scoping study and potential next steps

    This report scopes out which couplers are available in the hydrology and atmospheric modelling fields. The work reported here examines both dynamic runtime and one-way file-based coupling. Based on a review of the peer-reviewed literature and other open sources, there is a plethora of coupling technologies and standards relating to file formats. The available approaches have been evaluated against criteria developed as part of the DREAM project. Based on these investigations, the following recommendations are made:
    • The most promising dynamic coupling technologies for use within BGS are OpenMI 2.0 and CSDMS (either 1.0 or 2.0).
    • Investigate the use of workflow engines: Trident and Pyxis, the latter as part of the TSB/AHRC project “Confluence”.
    • Include the database standards CSW and GDAL, and use the data formats of the climate community, the NetCDF and CF standards (see the sketch below).
    • Develop a “standard” composition consisting of two process models and a 3D geological model, all linked to data stored in the BGS corporate database and in flat file format. Web Feature Services should be included in these compositions.
    There is also a need to investigate other approaches in different disciplines: the Loss Modelling Framework, OASIS-LMF, is the best candidate.
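    As a small illustration of the one-way, file-based coupling route via the climate community's NetCDF/CF formats mentioned in the list above, the sketch below reads a gridded variable from one model's output so it could be handed to a downstream model; the file name, variable name, and use of the netCDF4 library are assumptions for illustration, not recommendations from the report.

```python
# Minimal sketch of one-way file-based coupling via NetCDF/CF
# (file name and variable name are hypothetical).
from netCDF4 import Dataset

def read_coupled_field(path, var_name):
    """Read a CF-style gridded variable and its units from a NetCDF file."""
    with Dataset(path, mode="r") as ds:
        var = ds.variables[var_name]
        data = var[:]                       # masked array holding the field values
        units = getattr(var, "units", "")   # CF convention: 'units' attribute
    return data, units

if __name__ == "__main__":
    rainfall, units = read_coupled_field("upstream_model_output.nc", "precipitation")
    print(rainfall.shape, units)            # pass the array on to the downstream model
```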

    Fog of everything: energy-efficient networked computing architectures, research challenges, and a case study

    Fog computing (FC) and the Internet of Everything (IoE) are two emerging technological paradigms that, to date, have been considered stand-alone. However, because of their complementary features, we expect that their integration can foster a number of computing- and network-intensive pervasive applications under the incoming realm of the future Internet. Motivated by this consideration, the goal of this position paper is fivefold. First, we review the technological attributes and platforms proposed in the current literature for the stand-alone FC and IoE paradigms. Second, by leveraging some use cases as illustrative examples, we point out that the integration of the FC and IoE paradigms may give rise to opportunities for new applications in the realms of the IoE, Smart City, Industry 4.0, and Big Data Streaming, while introducing new open issues. Third, we propose a novel technological paradigm, the Fog of Everything (FoE) paradigm, that integrates FC and IoE, and we then detail the main building blocks and services of the corresponding technological platform and protocol stack. Fourth, as a proof of concept, we present the simulated energy-delay performance of a small-scale FoE prototype, namely the V-FoE prototype, and compare it with that of a benchmark technological platform, the V-D2D one, which exploits only device-to-device links to establish inter-thing 'ad hoc' communication. Last, we point out the position of the proposed FoE paradigm over a spectrum of seemingly related recent research projects.
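    The kind of energy-delay comparison made between the V-FoE and V-D2D set-ups can be pictured with a toy model like the one below, where each offloading option is scored by transmission-plus-processing delay and the corresponding energy; every parameter and formula here is a hypothetical placeholder, not the simulation model used in the paper.

```python
# Toy energy-delay model for offloading a task either to a fog node (FoE-style)
# or over a device-to-device link (D2D-style). All parameters are hypothetical.

def energy_delay(size_mbit, rate_mbps, tx_power_w, cpu_cycles, cps, cpu_power_w):
    """Return (energy in joules, delay in seconds) for transmit-then-process."""
    tx_delay = size_mbit / rate_mbps
    proc_delay = cpu_cycles / cps
    energy = tx_power_w * tx_delay + cpu_power_w * proc_delay
    return energy, tx_delay + proc_delay

if __name__ == "__main__":
    # Fog node: faster CPU but a slightly slower radio link than a nearby D2D peer.
    fog = energy_delay(8.0, 20.0, 1.0, 2e9, 4e9, 10.0)
    d2d = energy_delay(8.0, 50.0, 0.5, 2e9, 1e9, 2.0)
    print("fog (J, s):", fog)   # lower delay, higher processing power draw
    print("d2d (J, s):", d2d)   # cheaper link, slower peer processing
```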

    Prototype of machine learning “as a service” for CMS physics in signal vs background discrimination

    Big volumes of data are collected and analysed by the LHC experiments at CERN. The success of these scientific challenges is ensured by a great amount of computing power and storage capacity, operated over high-performance networks, within very complex LHC computing models on the LHC Computing Grid infrastructure. Now in Run-2 data taking, the LHC has an ambitious and broad experimental programme for the coming decades: it includes large investments in detector hardware, and it similarly requires commensurate investment in R&D in software and computing to acquire, manage, process, and analyse the sheer amounts of data to be recorded in the High-Luminosity LHC (HL-LHC) era. The new rise of Artificial Intelligence, related to the current Big Data era, to technological progress, and to the democratization and efficient allocation of resources at affordable cost through cloud solutions, is posing new challenges but also offering extremely promising techniques, not only for the commercial world but also for scientific enterprises such as HEP experiments. Machine Learning and Deep Learning are rapidly evolving approaches to characterising and describing data, with the potential to radically change how data are reduced and analysed, also at the LHC. This thesis aims at contributing to the construction of a Machine Learning “as a service” solution for CMS physics needs, namely an end-to-end data service that serves trained Machine Learning models to the CMS software framework. Towards this ambitious goal, this thesis contributes, first, a proof of concept of a first prototype of such an infrastructure and, second, a specific physics use case: signal versus background discrimination in the study of CMS all-hadronic top quark decays, carried out with scalable Machine Learning techniques.
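    As a minimal illustration of the signal-versus-background discrimination task mentioned above, the following sketch trains a gradient-boosted classifier on synthetic event features and reports an ROC AUC; the features, dataset, and choice of scikit-learn are assumptions for illustration and do not reflect the thesis's actual pipeline or the CMS software stack.

```python
# Minimal signal-vs-background classification sketch on synthetic data
# (features and model choice are illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Two made-up event features, shifted for "signal" (label 1) vs "background" (label 0).
signal = rng.normal(loc=[1.0, 0.5], scale=1.0, size=(n, 2))
background = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n, 2))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, scores))  # discrimination power of the toy model
```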

    A planetary nervous system for social mining and collective awareness

    We present a research roadmap for a Planetary Nervous System (PNS), capable of sensing and mining the digital breadcrumbs of human activities and unveiling the knowledge hidden in big data for addressing the big questions about social complexity. We envision the PNS as a globally distributed, self-organizing, techno-social system for answering analytical questions about the status of world-wide society, based on three pillars: social sensing, social mining, and the idea of trust networks and privacy-aware social mining. We discuss the ingredients of the science and technology necessary to build the PNS upon the three mentioned pillars, beyond the limitations of their respective state of the art. Social sensing is aimed at developing better methods for harvesting big data from the techno-social ecosystem and making them available for mining, learning and analysis at a suitably high abstraction level. Social mining is the problem of discovering patterns and models of human behaviour from the sensed data across the various social dimensions by data mining, machine learning and social network analysis. Trusted networks and privacy-aware social mining are aimed at creating a new deal around the questions of privacy and data ownership, empowering individual persons with full awareness of, and control over, their own personal data, so that users may allow access to and use of their data for their own good and the common good. The PNS will provide a goal-oriented knowledge discovery framework, made of technology and people, able to configure itself to the aim of answering questions about the pulse of global society. Given an analytical request, the PNS activates a process composed of a variety of interconnected tasks exploiting the social sensing and mining methods within the transparent ecosystem provided by the trusted network. The PNS we foresee is the key tool for individual and collective awareness in the knowledge society. We need such a tool for everyone to become fully aware of how powerful a knowledge of our society we can achieve by leveraging our wisdom as a crowd, and how important it is that everybody participates both as a consumer and as a producer of this social knowledge, for it to become a trustable, accessible, safe and useful public good.
    Seventh Framework Programme (European Commission) (grant agreement No. 284709)