
    Consensus-Based Data Management within Fog Computing For the Internet of Things

    The Internet of Things (IoT) infrastructure forms a gigantic network of interconnected and interacting devices. This infrastructure involves a new generation of service delivery models, more advanced data management and policy schemes, sophisticated data analytics tools, and effective decision-making applications. IoT technology brings automation to a new level, wherein nodes can communicate and make autonomous decisions without human intervention. IoT-enabled solutions generate and process enormous volumes of heterogeneous data exchanged among billions of nodes, resulting in Big Data congestion, data management and storage issues, and various inefficiencies. Fog Computing aims to solve these data management issues by placing intelligent computational components and storage closer to the data sources. Often, an IoT-enabled infrastructure is shared among many users with varying requirements, yet sharing resources, sharing operational costs and collective decision making (consensus) among many stakeholders are frequently neglected. This research addresses an essential requirement for adaptive, autonomous and consensus-based Fog computational solutions able to support distributed, in-network schemes and policies that meet the requirements of many users. In this work, innovative consensus-based computational solutions are investigated. The proposed solutions aim to correlate and organise data for effective management and decision making in the Fog. Instead of individual decision making, the algorithms aggregate several decisions into a consensus decision representing a collective agreement, benefiting from the individuals' varied knowledge and meeting multiple stakeholders' requirements. To validate the proposed solutions, a hybrid research methodology was employed, including the design of a test-bed and the execution of several experiments. To investigate the effectiveness of the paradigm, three experiments were designed and validated. Real-life sensor data and synthetic statistical data were collected, processed and analysed. Bayesian Machine Learning models and analytics were used to consolidate the design and evaluate the performance of the algorithms. In the Fog environment, the first scenario tests the Aggregation by Distribution algorithm; the solution contributes to achieving notable data-delivery efficiency with minimal loss in precision. The second scenario validates the merits of the approach in predicting the activities of high-mobility IoT applications. The third scenario tests applications related to smart-home IoT. All proposed consensus algorithms use statistical analysis to support effective decision making in the Fog and enable data aggregation for optimal storage, data transmission, processing and analytics. The final results of all experiments showed that all the implemented consensus approaches surpass the individual ones on different performance measures. Formal results also showed that the paradigm is a good fit for many IoT environments and can suit different scenarios when data analysis is applied to correlate data with the design. Finally, the design demonstrates that Fog Computing can compete with Cloud Computing in terms of accuracy, with the added benefit of locality.
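
    The core aggregation idea can be illustrated with a small sketch: several node-level decisions, each carrying a confidence score, are combined into a single consensus decision by confidence-weighted voting. This is a minimal illustration under assumptions, not the thesis's Aggregation by Distribution algorithm; the votes and weights are invented for the example.

```python
# Hedged sketch: aggregate individual node decisions into a consensus
# decision via confidence-weighted voting. Node votes, confidences and
# the voting rule are illustrative assumptions, not the thesis algorithm.
from collections import defaultdict

def consensus_decision(votes):
    """votes: list of (decision_label, confidence) pairs from fog nodes."""
    scores = defaultdict(float)
    for label, confidence in votes:
        scores[label] += confidence          # accumulate weighted support
    return max(scores, key=scores.get)       # label with highest total support

# Example: three sensor nodes classify the same event.
votes = [("anomaly", 0.9), ("normal", 0.6), ("anomaly", 0.4)]
print(consensus_decision(votes))             # -> "anomaly" (0.9 + 0.4 > 0.6)
```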

    Security architecture for Fog-To-Cloud continuum system

    Nowadays, with the number of devices connected to the Internet increasing rapidly, cloud computing cannot handle all real-time processing. Fog computing therefore emerged to provide data processing, filtering, aggregation, storage, networking and computation closer to the users. Fog computing provides real-time processing with lower latency than the cloud. However, fog computing did not emerge to compete with the cloud but to complement it; hence a hierarchical Fog-to-Cloud (F2C) continuum system was introduced. The F2C system enables collaboration between distributed fogs and the centralized cloud. In F2C systems, one of the main challenges is security. The traditional cloud is not suitable as the security provider for an F2C system because it constitutes a single point of failure, and the growing number of devices at the edge of the network raises scalability issues. Furthermore, traditional cloud security mechanisms cannot be applied to fog devices, whose computational power is lower than the cloud's. On the other hand, using fog nodes as security providers for the edge of the network raises Quality of Service (QoS) issues, because security algorithms consume a large share of a fog device's computational power. Some security solutions exist for fog computing, but they do not consider the hierarchical fog-to-cloud characteristics, which can lead to insecure collaboration between fog and cloud. In this thesis, the security considerations, attacks, challenges, requirements and existing solutions are analyzed and reviewed in depth. Finally, a decoupled security architecture is proposed to provide the demanded security in a hierarchical and distributed fashion with less impact on the QoS.
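
    To make the decoupling idea concrete, here is a minimal sketch, under stated assumptions, of moving security functions out of the central cloud: a per-area fog security controller, holding a key delegated by the cloud, issues and verifies short-lived device tokens locally, avoiding a cloud round trip. All names (FogSecurityController, issue_token) are hypothetical and not taken from the thesis design.

```python
# Hedged sketch of a decoupled F2C security layer: a lightweight security
# controller at each fog aggregation point issues short-lived tokens, so
# edge devices authenticate locally instead of via the central cloud.
# All class/method names are illustrative assumptions, not the thesis design.
import hmac, hashlib, time

class FogSecurityController:
    def __init__(self, area_key: bytes):
        self.area_key = area_key             # per-area key delegated by the cloud

    def issue_token(self, device_id: str, ttl: int = 300) -> tuple[str, int]:
        expiry = int(time.time()) + ttl
        msg = f"{device_id}|{expiry}".encode()
        tag = hmac.new(self.area_key, msg, hashlib.sha256).hexdigest()
        return tag, expiry                   # verified locally, no cloud round trip

    def verify(self, device_id: str, tag: str, expiry: int) -> bool:
        if time.time() > expiry:
            return False                     # token expired
        msg = f"{device_id}|{expiry}".encode()
        expected = hmac.new(self.area_key, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

ctrl = FogSecurityController(area_key=b"cloud-delegated-area-key")
tag, exp = ctrl.issue_token("sensor-42")
assert ctrl.verify("sensor-42", tag, exp)    # local, low-latency verification
```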

    IoT and Machine Learning for Process Optimization in Agrofood Industry

    Industry 4.0 encompasses the main technological innovations in the fields of automation, control and information technology, supported by the technological concepts of the Internet of Things (IoT), cloud systems and cyber-physical systems, making processes increasingly efficient and autonomous. In this context, we propose to study and optimize the desolventizing process of a vegetable oil production plant belonging to a supply chain in the agrofood industry. This chain contains a set of devices and sensors that allow the process to be monitored in real time, providing the capability to develop predictive models that optimize the extraction of the commercial solvent from the final product. We intend to demonstrate that transforming the operation of this supply chain improves the efficiency and effectiveness of the associated processes, reducing the operational costs of solvent extraction while obtaining a final product that complies with quality and safety standards and parameters. External supervisors: Vasco Miguel Pires, Raquel Peixoto de Oliveira
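
    As a rough illustration of the kind of predictive modelling described above, the following sketch trains a regression model on synthetic process-sensor readings to predict residual solvent in the final product. The feature names (steam temperature, pressure, meal flow) and the data-generating relationship are assumptions made for illustration, not the plant's real variables or the work's actual model.

```python
# Hedged sketch: fit a regression model on synthetic process-sensor
# readings to predict residual solvent in the final product. Features
# and data are illustrative assumptions, not the plant's real variables.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Illustrative features: steam temperature (C), pressure (bar), meal flow (t/h).
X = rng.normal(loc=[105.0, 1.2, 30.0], scale=[5.0, 0.1, 3.0], size=(n, 3))
# Synthetic target: residual solvent (ppm) falls with temperature, rises with flow.
y = 500 - 3.0 * X[:, 0] + 8.0 * X[:, 2] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")
```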

    Artificial Intelligence based Anomaly Detection of Energy Consumption in Buildings: A Review, Current Trends and New Perspectives

    Enormous amounts of data are produced every day by sub-meters and smart sensors installed in residential buildings. If leveraged properly, these data could assist end-users, energy producers and utility companies in detecting anomalous power consumption and understanding the causes of each anomaly. Anomaly detection can therefore stop a minor problem from becoming overwhelming. Moreover, it aids better decision making to reduce wasted energy and promote sustainable, energy-efficient behavior. In this regard, this paper is an in-depth review of existing artificial-intelligence-based anomaly detection frameworks for building energy consumption. Specifically, an extensive survey is presented, in which a comprehensive taxonomy is introduced to classify existing algorithms by the modules and parameters they adopt, such as machine learning algorithms, feature extraction approaches, anomaly detection levels, computing platforms and application scenarios. To the best of the authors' knowledge, this is the first review article that discusses anomaly detection in building energy consumption. Important findings, along with domain-specific problems, difficulties and challenges that remain unresolved, are thoroughly discussed, including the absence of: (i) precise definitions of anomalous power consumption, (ii) annotated datasets, (iii) unified metrics to assess the performance of existing solutions, (iv) platforms for reproducibility and (v) privacy preservation. Insights into current research trends are then discussed, to widen the applications and effectiveness of anomaly detection technology, before future directions attracting significant attention are derived. This article serves as a comprehensive reference for understanding the current technological progress in artificial-intelligence-based anomaly detection of energy consumption.
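
    As a concrete baseline of the kind of method the survey classifies, the sketch below runs an unsupervised detector (an Isolation Forest) over synthetic sub-metered power readings and flags the injected consumption spikes. The data and parameters are illustrative; the surveyed frameworks differ widely in features, granularity and model choice.

```python
# Hedged sketch: unsupervised anomaly detection on sub-metered power
# readings with an Isolation Forest. Data is synthetic; real frameworks
# differ in features, detection level and model choice.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal_load = rng.normal(loc=1.5, scale=0.3, size=(480, 1))   # kW, 1-min samples
spikes = rng.normal(loc=6.0, scale=0.5, size=(5, 1))          # anomalous draws
readings = np.vstack([normal_load, spikes])

detector = IsolationForest(contamination=0.01, random_state=1)
labels = detector.fit_predict(readings)                       # -1 marks anomalies
print(f"flagged {np.sum(labels == -1)} anomalous readings out of {len(readings)}")
```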

    Supervised Machine Learning in SAS Viya: Development of a Supervised Machine Learning pipeline in SAS Viya for comparison with a pipeline developed in Python

    Internship report presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Business Analytics. This internship report details the development of a supervised ML pipeline in SAS Viya, a cloud-based environment composed of several solutions for importing, managing and transforming data and for building and deploying predictive models into production environments. As a practical case study, this report showcases the SAS Viya features and capabilities that can be offered to the end-user. A comparison with a similar supervised ML pipeline in Python was made to highlight both tools' advantages and disadvantages. Analytical tasks were employed to demonstrate which supervised ML techniques can be used in each technology. Furthermore, it was shown that, depending on the experience and knowledge of the end-user, both SAS Viya and Jupyter Notebook/Python can produce satisfactory results, the latter being more suited to data scientists with some experience in programming and ML, while SAS Viya is a better fit for employees who are just getting started in the ML field, thanks to its point-and-click user interface. On the other hand, building a supervised ML pipeline in SAS Viya can be more straightforward than in Jupyter Notebook/Python, since the code is already developed and the process automated, and pipeline templates are made available to the user. However, due to its open-source nature, Python has more supervised ML techniques available in Jupyter Notebook. This report shows that these two solutions can complement each other: SAS Viya offers good visualizations for data exploration, while Jupyter Notebook/Python can be dedicated to data transformation and predictive model development.
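
    A minimal example of the Python side of such a comparison might look like the sketch below: a scikit-learn Pipeline chaining a data-transformation step and a predictive model, trained and scored on a bundled dataset. The dataset and steps are illustrative stand-ins, not the internship's actual data or pipeline configuration.

```python
# Hedged sketch of a minimal supervised ML pipeline in Python of the kind
# the report compares against SAS Viya; the dataset and steps are
# illustrative, not the internship's actual data or configuration.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

pipeline = Pipeline([
    ("scale", StandardScaler()),                  # data transformation step
    ("model", LogisticRegression(max_iter=1000)), # predictive model step
])
pipeline.fit(X_train, y_train)
print(f"test accuracy: {pipeline.score(X_test, y_test):.3f}")
```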

    EGI user forum 2011 : book of abstracts


    Improving efficiency, usability and scalability in a secure, resource-constrained web of things


    A framework for the dynamic management of Peer-to-Peer overlays

    Peer-to-Peer (P2P) applications have been associated with inefficient operation, interference with other network services and large operational costs for network providers. This thesis presents a framework which can help ISPs address these issues by means of intelligent management of peer behaviour. The proposed approach involves limited control of P2P overlays without interfering with the fundamental characteristics of peer autonomy and decentralised operation. At the core of the management framework lies the Active Virtual Peer (AVP). Essentially intelligent peers operated by the network providers, the AVPs interact with the overlay from within, minimising redundant or inefficient traffic, enhancing overlay stability and facilitating the efficient and balanced use of available peer and network resources. They offer an "insider's" view of the overlay and permit the management of P2P functions in a compatible and non-intrusive manner. AVPs can support multiple P2P protocols and coordinate to perform functions collectively. To account for the multi-faceted nature of P2P applications and to allow the incorporation of modern techniques and protocols as they appear, the framework is based on a modular architecture. Core modules for overlay control and transit-traffic minimisation are presented. Towards the latter, a number of suitable P2P content caching strategies are proposed. Using a purpose-built P2P network simulator and small-scale experiments, it is demonstrated that the introduction of AVPs inside the network can significantly reduce inter-AS traffic, minimise costly multi-hop flows, increase overlay stability and load balancing, and offer improved peer transfer performance.
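
    One generic way an AVP-style node could cache popular content inside the ISP, sketched below under assumptions, is a least-frequently-used (LFU) cache: items served locally on a hit never cross an inter-AS link. This is a simplified illustration of the caching idea, not one of the thesis's specific strategies.

```python
# Hedged sketch: a least-frequently-used (LFU) content cache such as an
# Active Virtual Peer might run to keep popular items inside the ISP and
# cut inter-AS transit. One generic illustration, not the thesis design.
from collections import Counter

class LFUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = {}                # content_id -> payload
        self.hits = Counter()          # content_id -> access frequency

    def get(self, content_id):
        if content_id in self.store:
            self.hits[content_id] += 1
            return self.store[content_id]   # served locally, no transit hop
        return None                         # miss: fetch from the remote overlay

    def put(self, content_id, payload):
        if len(self.store) >= self.capacity:
            # Evict the least frequently accessed item.
            coldest, _ = min(self.hits.items(), key=lambda kv: kv[1])
            del self.store[coldest], self.hits[coldest]
        self.store[content_id] = payload
        self.hits[content_id] += 1

cache = LFUCache(capacity=2)
cache.put("chunk-a", b"...")
cache.get("chunk-a")                   # popular item stays cached locally
```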

    The Zenith attack: vulnerabilities and countermeasures

    In this paper we identify and define Zenith attacks, a new class of attacks on content-distribution systems which seek to expose the popularity (i.e. access frequency) of individual items of content. As the access pattern to most real-world content exhibits Zipf-like characteristics, there is a small set of dominating items which account for the majority of accesses. Identifying such items enables an adversary to perform follow-up adversarial actions targeting them, including mounting denial-of-service attacks, deploying censorship mechanisms, and eavesdropping on or prosecuting the host or recipient. We instantiate a Zenith attack on the Kademlia and Chord structured overlay networks and quantify the cost of such an attack. As a countermeasure to these attacks we propose Crypsis, a system to conceal the lookup frequency of individual keys through aggregation over ranges of the keyspace. Crypsis provides provable security guarantees for concealment of lookup frequency while maintaining logarithmic routing and state bounds. National Science Foundation (0735974, 0820138, 0952145, 1012798)
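
    A simplified simulation of the underlying intuition: per-key lookup counts under a Zipf-like workload expose the dominating items, while counts aggregated over keyspace ranges blur individual keys together. This sketches only the aggregation idea, not Crypsis's actual mechanism or its security guarantees; all parameters are invented.

```python
# Hedged sketch of the range-aggregation intuition behind Crypsis:
# Zipf-like per-key lookup counts expose hot items, whereas counts
# aggregated per keyspace range conceal individual key frequencies.
# Simplified illustration, not the actual system.
import numpy as np

rng = np.random.default_rng(2)
keyspace = 2**16
num_keys = 1000
keys = rng.integers(0, keyspace, size=num_keys)

# Zipf-like access pattern: a few keys dominate the lookups.
ranks = np.arange(1, num_keys + 1)
probs = (1.0 / ranks) / np.sum(1.0 / ranks)
lookups = rng.choice(keys, size=50_000, p=probs)

# Per-key counts reveal the dominating items...
per_key = np.unique(lookups, return_counts=True)[1]
print("max single-key share:", per_key.max() / lookups.size)

# ...while counts aggregated over keyspace ranges blur them together.
num_ranges = 16
range_counts, _ = np.histogram(lookups, bins=num_ranges, range=(0, keyspace))
print("per-range shares:", np.round(range_counts / lookups.size, 3))
```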