
    Mechanisms for improving information quality in smartphone crowdsensing systems

    Given its potential for a large variety of real-life applications, smartphone crowdsensing has recently gained tremendous attention from the research community. Smartphone crowdsensing is a paradigm that allows ordinary citizens to participate in large-scale sensing surveys through user-friendly applications installed on their smartphones. In this way, fine-grained sensing information is obtained from smartphone users without deploying fixed, expensive infrastructure and with negligible maintenance costs. Existing smartphone sensing systems depend entirely on the participants' willingness to submit up-to-date and accurate information about the events being monitored. It therefore becomes paramount to scalably and effectively determine, enforce, and optimize the information quality of the sensing reports submitted by participants. To this end, mechanisms to improve information quality in smartphone crowdsensing systems were designed in this work. First, the FIRST framework is presented, a reputation-based mechanism that leverages the concept of mobile trusted participants to determine and improve the information quality of collected data. Second, the problem of maximizing the likelihood of successful execution of sensing tasks when participants with uncertain mobility execute them is mathematically modeled and studied. Two incentive mechanisms based on game and auction theory are then proposed to solve this problem efficiently and scalably. Experimental results demonstrate that the mechanisms developed in this thesis outperform the existing state of the art in improving information quality in smartphone crowdsensing systems --Abstract, page iii
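
    The abstract does not spell out how FIRST computes reputations; as a purely illustrative aid, the sketch below shows one common reputation-weighted aggregation and update loop. All function names, thresholds, and update rules here are assumptions for demonstration, not the thesis's actual mechanism.

```python
# Illustrative sketch only: a simple reputation-weighted aggregation and
# update loop, NOT the actual FIRST algorithm described in the thesis.

def aggregate_reports(reports, reputation):
    """Fuse sensed values, weighting each participant by current reputation."""
    total_weight = sum(reputation[p] for p, _ in reports)
    if total_weight == 0:
        return None
    return sum(reputation[p] * value for p, value in reports) / total_weight

def update_reputation(reports, reputation, estimate, tolerance=0.1, step=0.05):
    """Reward reports close to the fused estimate, penalize outliers."""
    for participant, value in reports:
        if abs(value - estimate) <= tolerance * max(abs(estimate), 1e-9):
            reputation[participant] = min(1.0, reputation[participant] + step)
        else:
            reputation[participant] = max(0.0, reputation[participant] - step)
    return reputation

# Example: three participants report a temperature reading for one event.
reputation = {"a": 0.9, "b": 0.5, "c": 0.2}
reports = [("a", 21.0), ("b", 21.4), ("c", 35.0)]   # "c" is an outlier
estimate = aggregate_reports(reports, reputation)
reputation = update_reputation(reports, reputation, estimate)
print(estimate, reputation)
```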

    Scheduling in cloud and fog architecture: identification of limitations and suggestion of improvement perspectives

    Application execution requirements in cloud and fog architectures are generally heterogeneous in terms of device and application contexts. Scheduling these requirements on such architectures is an optimization problem with multiple constraints. Despite countless efforts, task scheduling in these architectures continues to present enticing challenges, leading to the question of how tasks are routed between physical devices, fog nodes, and the cloud. In fog, due to the density and heterogeneity of devices, scheduling is very complex, and few studies in the literature have addressed it, whereas scheduling in the cloud has been widely studied. Nonetheless, many surveys address this issue from the perspective of service providers or focus on optimizing application quality of service (QoS) levels, ignoring contextual information at the level of devices and end users and their user experience. In this paper, we conducted a systematic review of the literature on task scheduling algorithms in existing cloud and fog architectures, studied and discussed their limitations, and explored and suggested some perspectives for improvement. Calouste Gulbenkian Foundation, PhD scholarship No.234242, 2019.

    Big Data and Large-scale Data Analytics: Efficiency of Sustainable Scalability and Security of Centralized Clouds and Edge Deployment Architectures

    One of the significant shifts of next-generation computing technologies will certainly be in the development of Big Data (BD) deployment architectures. Apache Hadoop, the BD landmark, has evolved into a widely deployed BD operating system. Its new features include a federation structure and many associated frameworks, which give Hadoop 3.x the maturity to serve different markets. This dissertation addresses two leading issues involved in exploiting BD and the large-scale data analytics realm using the Hadoop platform: (i) scalability, which directly affects system performance and overall throughput, addressed using portable Docker containers; and (ii) security, which spreads the adoption of data protection practices among practitioners, addressed using access controls. An Enhanced MapReduce Environment (EME), an OPportunistic and Elastic Resource Allocation (OPERA) scheduler, a BD Federation Access Broker (BDFAB), and a Secure Intelligent Transportation System (SITS) with a multi-tier architecture for data streaming to the cloud are the main contributions of this thesis study

    An optimized cost-based data allocation model for heterogeneous distributed computing systems

    Continuous attempts have been made to improve the flexibility and effectiveness of distributed computing systems. Extensive effort in the fields of connectivity technologies, network programs, high-performance processing components, and storage helps to improve results. However, concerns such as slow response, long execution time, and long completion time have been identified as stumbling blocks that hinder performance and require additional attention. These defects increase the total system cost and make the data allocation procedure for a geographically dispersed setup difficult. The load-based architectural model has been strengthened to improve data allocation performance. To do this, an abstract job model is employed, and a data query file containing input data is processed on a directed acyclic graph. Jobs are executed on the processing engine with the lowest execution cost, and the system's total cost is calculated as the sum of the communication, computation, and network costs. The total cost of the system is then reduced using a swarm intelligence algorithm. In heterogeneous distributed computing systems, the suggested approach attempts to reduce the system's total cost and improve data distribution. According to simulation results, the technique efficiently lowers total system cost and optimizes partitioned data allocation
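
    As a rough illustration of the cost model and swarm-based search described above, the sketch below sums toy computation, communication, and network costs for a fragment-to-node allocation and minimizes that total with a generic particle swarm. The random cost matrices, random-key encoding, and PSO parameters are assumptions for demonstration, not the paper's actual formulation.

```python
# Hedged sketch: a toy cost model (computation + communication + network) and a
# random-key particle swarm searching for a low-cost allocation of data
# fragments to processing nodes. Illustrative only.
import random

NUM_FRAGMENTS, NUM_NODES = 6, 3
comp_cost = [[random.uniform(1, 5) for _ in range(NUM_NODES)] for _ in range(NUM_FRAGMENTS)]
comm_cost = [[random.uniform(0.5, 2) for _ in range(NUM_NODES)] for _ in range(NUM_FRAGMENTS)]
net_cost = [random.uniform(0.1, 1) for _ in range(NUM_NODES)]  # per-node network overhead

def total_cost(assignment):
    """Sum computation, communication, and network costs for one allocation."""
    cost = sum(comp_cost[f][n] + comm_cost[f][n] for f, n in enumerate(assignment))
    cost += sum(net_cost[n] for n in set(assignment))
    return cost

def decode(position):
    """Random-key decoding: map a real-valued position to one node per fragment."""
    return [min(NUM_NODES - 1, max(0, int(x))) for x in position]

def pso(iters=200, swarm=20, w=0.7, c1=1.4, c2=1.4):
    pos = [[random.uniform(0, NUM_NODES) for _ in range(NUM_FRAGMENTS)] for _ in range(swarm)]
    vel = [[0.0] * NUM_FRAGMENTS for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_cost = [total_cost(decode(p)) for p in pos]
    g = pbest_cost.index(min(pbest_cost))
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(NUM_FRAGMENTS):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(NUM_NODES - 1e-9, max(0.0, pos[i][d] + vel[i][d]))
            cost = total_cost(decode(pos[i]))
            if cost < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], cost
                if cost < gbest_cost:
                    gbest, gbest_cost = pos[i][:], cost
    return decode(gbest), gbest_cost

print(pso())  # best fragment-to-node assignment found and its total cost
```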

    Towards Energy-Efficient, Fault-Tolerant, and Load-Balanced Mobile Cloud

    Recent advances in mobile technologies have enabled a new computing paradigm in which large amounts of data are generated and accessed from mobile devices. However, running resource-intensive applications (e.g., video/image storage and processing or MapReduce-type workloads) on a single mobile device remains out of reach, since it requires large computation and storage capabilities. Computer scientists overcome this issue by exploiting the abundant computation and storage resources of the traditional cloud to enhance the capabilities of end-user mobile devices. Nevertheless, designs that rely on remote cloud services sometimes overlook the resources (e.g., storage, communication, and processing) available on the mobile devices themselves. In particular, when remote cloud services are unavailable (due to service provider or network issues), these smart devices become unusable. For mobile devices deployed in an infrastructureless network where nodes can move, join, or leave the network dynamically, the challenges of energy efficiency, reliability, and load balance are still largely unexplored. This research investigates these challenges and proposes solutions for deploying mobile applications in such environments. In particular, we focus on a distributed data storage and data processing framework for the mobile cloud. The proposed mobile cloud computing (MCC) framework provides data storage and data processing services to MCC applications such as video storage and processing or MapReduce-type workloads. These services keep the mobile cloud energy-efficient, fault-tolerant, and load-balanced by intelligently allocating and managing the stored data and processing tasks while accounting for the limited resources on mobile devices. When considering load balance, the framework also incorporates the heterogeneous characteristics of the mobile cloud, in which nodes may have varying energy, communication, and processing capabilities. All the designs are built on the k-out-of-n computing theoretical foundation. The novel formulations produce a reliability-compliant, energy-efficient data storage solution and a deadline-compliant, energy-efficient job scheduler. From the promising outcomes of this research, a future in which the mobile cloud offers real-time computation capabilities in complex environments such as disaster relief or war zones is certainly not far
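
    To make the k-out-of-n foundation concrete, the sketch below estimates the probability that data spread across n mobile nodes stays recoverable as long as at least k of them remain reachable. The per-node availability values are illustrative assumptions; the thesis's actual formulations additionally optimize energy and deadlines.

```python
# Minimal sketch of the k-out-of-n idea: data (or a task) remains recoverable
# as long as at least k of the n chosen nodes are reachable. Node availability
# values below are illustrative assumptions.

def k_out_of_n_reliability(availabilities, k):
    """Probability that at least k of the n (independent) nodes are available."""
    # dp[j] = probability that exactly j of the nodes processed so far are up
    dp = [1.0] + [0.0] * len(availabilities)
    for p in availabilities:
        for j in range(len(dp) - 1, 0, -1):
            dp[j] = dp[j] * (1 - p) + dp[j - 1] * p
        dp[0] *= (1 - p)
    return sum(dp[k:])

# Example: 5 heterogeneous mobile nodes; any 3 must be reachable to reconstruct the data.
nodes = [0.95, 0.90, 0.80, 0.85, 0.70]
print(k_out_of_n_reliability(nodes, k=3))
```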

    Cloud computing: survey on energy efficiency

    Cloud computing is today's most emphasized Information and Communications Technology (ICT) paradigm and is directly or indirectly used by almost every online user. However, such great significance comes with the support of a great infrastructure that includes large data centers comprising thousands of server units and other supporting equipment. Their share in power consumption amounts to between 1.1% and 1.5% of the total electricity use worldwide and is projected to rise even further. Such alarming numbers demand rethinking the energy efficiency of these infrastructures. However, before making any changes to infrastructure, an analysis of the current status is required. In this article, we perform a comprehensive analysis of the infrastructure supporting the cloud computing paradigm with regard to energy efficiency. First, we define a systematic approach for analyzing the energy efficiency of the most important data center domains, including server and network equipment, as well as cloud management systems and appliances consisting of software utilized by end users. Second, we utilize this approach to analyze the available scientific and industrial literature on state-of-the-art practices in data centers and their equipment. Finally, we extract existing challenges and highlight future research directions

    Optimizations for Energy-Aware, High-Performance and Reliable Distributed Storage Systems

    With the decreasing cost and widespread use of commodity hard drives, it has become possible to build very large-scale storage systems at lower expense. However, as we approach exabyte-scale storage systems, maintaining important features such as energy efficiency, performance, reliability, and usability becomes increasingly difficult. Despite the decreasing cost of storage systems, their energy consumption still needs to be addressed in order to retain cost-effectiveness; any improvements in a storage system can be outweighed by high energy costs. On the other hand, large-scale storage systems can benefit more from object storage features for improved performance and usability. One area of concern is the metadata performance bottleneck of applications reading large directories or creating a large number of files. Similarly, computation on big data, where data needs to be transferred between compute and storage clusters, adversely affects I/O performance. As storage systems become larger and more complex, transferring data between remote compute and storage tiers becomes impractical. Furthermore, storage systems typically implement reliability at the file system or client level, an approach that is not always practical in terms of performance. Lastly, object storage features are usually tailored to specific use cases, which makes them harder to use in other contexts. In this thesis, we present several approaches to enhance the energy efficiency, performance, reliability, and usability of large-scale storage systems. To begin with, we improve energy efficiency by moving I/O load to a subset of the storage nodes with energy-aware node allocation methods and turning off the unused nodes, while preserving load balance on demand. To address the metadata performance issue associated with large create workloads and directory reads, we represent directories with object storage collections and implement lazy creation of objects. Similarly, in-situ computation on large-scale data is enabled by using object storage features to integrate a computational framework with the existing object storage layer, eliminating the need to transfer data between compute and storage silos. We then present parity-based redundancy using object storage features to achieve reliability with less performance impact. Finally, unified storage brings object storage features together to meet the needs of distinct use cases, such as cloud storage, big data, or high-performance computing, and alleviates the unnecessary fragmentation of storage resources. We evaluate each proposed approach thoroughly and validate its effectiveness in improving the energy efficiency, performance, reliability, and usability of a large-scale storage system
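
    To illustrate the intuition behind parity-based redundancy, the sketch below builds a single XOR parity block over equal-sized data chunks and rebuilds one lost chunk from the survivors. It is a generic example of the technique, not the thesis's object-storage implementation.

```python
# Hedged sketch of the parity idea: one XOR parity block lets any single lost
# data block be reconstructed, at far less storage overhead than replication.
# Generic illustration only.

def make_parity(blocks):
    """XOR equal-length data blocks into one parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving_blocks, parity):
    """Rebuild the single missing data block from the survivors plus parity."""
    return make_parity(list(surviving_blocks) + [parity])

data = [b"obj-chunk-A", b"obj-chunk-B", b"obj-chunk-C"]
parity = make_parity(data)
# Lose the second chunk, then rebuild it from the other chunks and the parity:
rebuilt = recover([data[0], data[2]], parity)
assert rebuilt == data[1]
```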