270 research outputs found
Real-time Adaptive Detection and Recovery against Sensor Attacks in Cyber-physical Systems
Cyber-physical systems (CPSs) utilize computation to control physical objects in real-world environments, and an increasing number of CPS-based applications have been designed for life-critical purposes. Sensor attacks, which manipulate sensor readings to deceive CPSs into performing dangerous actions, can result in severe consequences. The urgent need to defend against such attacks has motivated significant research into reactive defenses. In this dissertation, we present an adaptive detection method capable of identifying sensor attacks before the system reaches unsafe states. Once the attacks are detected, a recovery approach that we propose can guide the physical plant to a desired safe state before a safety deadline.
Existing detection approaches tend to minimize detection delay and false alarms simultaneously, despite a clear trade-off between these two metrics. We argue that attack detection should dynamically balance these metrics according to the physical system's current state. In line with this argument, we propose an adaptive sensor attack detection system comprising three components: an adaptive detector, a detection deadline estimator, and a data logger. This system can adapt the detection delay, and thus the false alarm rate, in real time to meet a varying detection deadline, thereby improving usability. We implement our detection system and validate it using multiple CPS simulators and a reduced-scale autonomous vehicle testbed.
After identifying sensor attacks, it is essential to extend the benefits of attack detection. In this dissertation, we investigate how to eliminate the impact of these attacks and propose novel real-time recovery methods for securing CPSs. Initially, we target sensor attack recovery in linear CPSs. By employing formal methods, we reconstruct state estimates and calculate a conservative safety deadline. With these constraints, we formulate the recovery problem as either a linear programming or a quadratic programming problem. By solving this problem, we obtain a recovery control sequence that can smoothly steer the physical system back to a target state set before the safety deadline and maintain the system state within the set once reached. Subsequently, to make recovery practical for complex CPSs, we adapt our recovery method to nonlinear systems and explore the use of uncorrupted sensors to alleviate uncertainty accumulation. Finally, we implement our approach and demonstrate its effectiveness and efficiency through an extensive set of experiments. For linear CPSs, we evaluate the approach using 5 CPS simulators and 3 types of sensor attacks. For nonlinear CPSs, we assess our method on 3 nonlinear benchmarks.
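As a rough illustration of the recovery formulation above, the following sketch computes a minimum-energy control sequence for a scalar linear plant x[k+1] = a*x[k] + b*u[k]. It is only the unconstrained special case of the quadratic-programming recovery, with invented plant parameters and helper names; the dissertation additionally handles constraints, state reconstruction under attack, and multi-dimensional dynamics.

```python
# Minimal sketch (not the dissertation's implementation): drive the state
# of a scalar linear plant from x0 to x_goal in exactly N steps with the
# minimum-energy control sequence, i.e. a tiny unconstrained QP.
def recovery_sequence(a, b, x0, x_goal, N):
    # x[N] = a^N * x0 + sum_k a^(N-1-k) * b * u[k], so x[N] is linear in u.
    g = [a ** (N - 1 - k) * b for k in range(N)]   # influence of each u[k]
    residual = x_goal - a ** N * x0                # what the controls must supply
    gg = sum(gi * gi for gi in g)
    # Minimum-norm solution of g . u = residual (minimizes sum of u[k]^2).
    return [gi * residual / gg for gi in g]

def simulate(a, b, x0, u):
    # Roll the plant forward under the computed control sequence.
    x = x0
    for uk in u:
        x = a * x + b * uk
    return x
```

Applying the returned sequence in simulation lands the state exactly on the target after N steps, which is the essence of the "recovery control sequence" described above.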
Energy-Sustainable IoT Connectivity: Vision, Technological Enablers, Challenges, and Future Directions
Technology solutions must effectively balance economic growth, social equity,
and environmental integrity to achieve a sustainable society. Notably, although
the Internet of Things (IoT) paradigm constitutes a key sustainability enabler,
critical issues such as the increasing maintenance operations, energy
consumption, and manufacturing/disposal of IoT devices have long-term negative
economic, societal, and environmental impacts and must be efficiently
addressed. This calls for self-sustainable IoT ecosystems requiring minimal
external resources and intervention, effectively utilizing renewable energy
sources, and recycling materials whenever possible, thus encompassing energy
sustainability. In this work, we focus on energy-sustainable IoT during the
operation phase, although our discussions sometimes extend to other
sustainability aspects and IoT lifecycle phases. Specifically, we provide a
fresh look at energy-sustainable IoT and identify energy provision, transfer,
and energy efficiency as the three main energy-related processes whose
harmonious coexistence pushes toward realizing self-sustainable IoT systems.
Their main related technologies, recent advances, challenges, and research
directions are also discussed. Moreover, we overview relevant performance
metrics to assess the energy-sustainability potential of a certain technique,
technology, device, or network and list some target values for the next
generation of wireless systems. Overall, this paper offers insights that are
valuable for advancing sustainability goals for present and future generations.
Comment: 25 figures, 12 tables, submitted to IEEE Open Journal of the Communications Society
Dutkat: A Privacy-Preserving System for Automatic Catch Documentation and Illegal Activity Detection in the Fishing Industry
United Nations' Sustainable Development Goal 14 aims to conserve and sustainably use the oceans and their resources for the benefit of people and the planet. This includes protecting marine ecosystems, preventing pollution and overfishing, and increasing scientific understanding of the oceans. Achieving this goal will help ensure the health and well-being of marine life and the millions of people who rely on the oceans for their livelihoods. To ensure sustainable fishing practices, it is important to have a system in place for automatic catch documentation.
This thesis presents our research on the design and development of Dutkat, a privacy-preserving, edge-based system for catch documentation and detection of illegal activities in the fishing industry. Utilising machine learning techniques, Dutkat can analyse large amounts of data and identify patterns that may indicate illegal activities such as overfishing or illegal discard of catch. Additionally, the system can assist in catch documentation by automating the process of identifying and counting fish species, thus reducing potential human error and increasing efficiency. Specifically, our research has consisted of the development of various components of the Dutkat system, evaluation through experimentation, exploration of existing data, and the organization of machine learning competitions. We have also designed the system from a compliance-by-design perspective to ensure that it complies with data protection laws and regulations such as the GDPR. Our goal with Dutkat is to promote sustainable fishing practices, in line with Sustainable Development Goal 14, while simultaneously protecting the privacy and rights of fishing crews.
A prototype of the data quality pipeline of the Online Observation Quality System of ASTRI-Mini Array telescope system
Gamma-ray astronomy investigates the physics of the universe and the characteristics of celestial objects through gamma rays. Gamma rays are the most energetic part of the electromagnetic spectrum, emitted in some of the brightest events in the universe, such as pulsars, quasars, and supernova remnants. Gamma rays can be observed with satellites or ground-based telescopes. The latter detect gamma rays in the very high energy range with the indirect Cherenkov technique. When highly energetic photons enter Earth's atmosphere, they generate air showers, cascades of particles whose fast motion produces elusive flashes of blue Cherenkov light in the sky.
This thesis discusses the research conducted at the Astrophysics and Space Science Observatory of Bologna in collaboration with ASTRI Mini-Array, the international project for ground-based gamma-ray astrophysics led by INAF. The focus is on the Online Observation Quality System (OOQS), which conducts a quick-look analysis during telescope observations. The Cherenkov Camera Data Quality Checker is the OOQS component that performs real-time quality checks on the data acquired from the nine Cherenkov Cameras at high frequency, up to 1000 Hz, with a total bandwidth of 148 MB/s. The thesis presents the implementation of the OOQS-Pipeline, a software prototype that receives scientific packets from a Cherenkov Camera, performs quality analysis, and stores the results. The pipeline consists of three main applications: Kafka-Consumer, DQ-Analysis, and DQ-Aggregator. The pipeline was tested on a server with performance similar to that of the servers at the Array Observing Site, and the results indicate that it can sustain the maximum data flow produced by the cameras. Overall, the thesis makes an important contribution to the ASTRI Mini-Array project: the development of the first version of the OOQS-Pipeline, which will maximize observation time with quality data passing the verification thresholds.
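A back-of-the-envelope check of the data rates quoted above: the camera count, event rate, and aggregate bandwidth are taken from the abstract, while the per-event packet size is a derived estimate, not an OOQS specification.

```python
# Figures from the abstract (nine cameras, up to 1000 Hz, 148 MB/s total).
N_CAMERAS = 9
EVENT_RATE_HZ = 1000          # per-camera acquisition rate (upper bound)
TOTAL_BANDWIDTH_MB_S = 148    # aggregate input to the OOQS pipeline

# Each camera contributes roughly 16.4 MB/s, so at 1000 Hz a single
# scientific packet is on the order of 16 KB (a derived estimate).
per_camera_mb_s = TOTAL_BANDWIDTH_MB_S / N_CAMERAS
approx_packet_kb = per_camera_mb_s * 1000 / EVENT_RATE_HZ
```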
Leveraging spot instances for resource provisioning in serverless computing
Cloud computing has become a dominating paradigm across the IT industry. However, keeping cloud costs under control is a major challenge for organizations. One option to save costs is using spot instances: virtual machines that have highly discounted prices at the expense of lower reliability and availability.
Serverless computing is a paradigm that allows developers to build and deploy applications in the cloud without provisioning or managing backend infrastructure. Function as a Service (FaaS) is the prevalent delivery model of this paradigm, which allows developers to execute functions in the cloud as a response to a request or an event. The developer focuses only on the code, and the cloud provider handles the execution and scaling of the functions. This is convenient for developers, but comes with some limitations and can become very expensive at scale.
This thesis investigates leveraging spot instances for running serverless functions, potentially achieving both higher flexibility and lower costs than commercial FaaS solutions. For this purpose, we present a system design, suitable for applications that tolerate some execution latency, and implement it on Google Cloud Platform. Our implementation is compared against Google Cloud Run, a service that offers similar functionality.
Our system achieves significant cost savings: assuming a function execution time of two minutes, our system matches the price of the Cloud Run solution at around 8,000 requests per month, and at, for example, 20,000 requests per month, its cost is less than half of Cloud Run's. However, one important design decision is that a spot instance is provisioned on the fly for every request. While this introduces latency, our evaluation confirmed that it causes no significant reduction in reliability.
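The reported crossover can be reproduced with a purely illustrative cost model: the free-tier size and per-request prices below are invented for the sketch and are not actual Google Cloud pricing; they are chosen only so that the two cost curves meet at 8,000 requests and diverge by more than 2x at 20,000.

```python
# Hypothetical pricing constants (illustrative only, not real GCP prices).
FREE_TIER = 6000      # assumed free FaaS requests per month
RUN_PRICE = 1.0       # assumed cost units per billed FaaS request
SPOT_PRICE = 0.25     # assumed cost units per request served on a spot VM

def cloud_run_cost(requests_per_month):
    # FaaS billing: nothing until the free tier is exhausted, then per request.
    return max(0, requests_per_month - FREE_TIER) * RUN_PRICE

def spot_cost(requests_per_month):
    # Spot-based system: one VM provisioned per request, so cost is linear.
    return requests_per_month * SPOT_PRICE
```

With these assumed numbers, the FaaS option is cheaper below the break-even point (its free tier covers low volumes), while the spot-based system pulls ahead as volume grows, mirroring the shape of the result above.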
Multi-FedLS: a Framework for Cross-Silo Federated Learning Applications on Multi-Cloud Environments
Federated Learning (FL) is a distributed Machine Learning (ML) technique that
can benefit from cloud environments while preserving data privacy. We propose
Multi-FedLS, a framework that manages multi-cloud resources, reducing execution
time and financial costs of Cross-Silo Federated Learning applications by using
preemptible VMs, cheaper than on-demand ones but that can be revoked at any
time. Our framework encloses four modules: Pre-Scheduling, Initial Mapping,
Fault Tolerance, and Dynamic Scheduler. This paper extends our previous work
[brum2022sbac] by formally describing the Multi-FedLS resource manager
framework and its modules. Experiments were conducted with three Cross-Silo FL
applications on CloudLab, and a proof-of-concept confirms that Multi-FedLS can
be executed on a multi-cloud environment composed of AWS and GCP, two commercial cloud
providers. Results show that the problem of executing Cross-Silo FL
applications in multi-cloud environments with preemptible VMs can be
efficiently resolved using a mathematical formulation, fault tolerance
techniques, and a simple heuristic to choose a new VM in case of revocation.
Comment: In review by Journal of Parallel and Distributed Computing
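A sketch of the kind of "simple heuristic" mentioned above, assuming the goal is to pick the cheapest available VM type that still meets a client's resource demand after a revocation; the catalog format, field names, and VM entries are hypothetical, not Multi-FedLS's actual interface.

```python
# Hypothetical replacement-VM heuristic: among VM types that satisfy the
# FL client's CPU and memory requirements, choose the cheapest one.
def pick_replacement(vm_catalog, min_vcpus, min_ram_gb):
    candidates = [vm for vm in vm_catalog
                  if vm["vcpus"] >= min_vcpus and vm["ram_gb"] >= min_ram_gb]
    if not candidates:
        return None  # no feasible VM type available in this cloud/region
    return min(candidates, key=lambda vm: vm["price_per_h"])

# Invented catalog for illustration (names and prices are made up).
catalog = [
    {"name": "small",  "vcpus": 2, "ram_gb": 8,  "price_per_h": 0.03},
    {"name": "medium", "vcpus": 4, "ram_gb": 16, "price_per_h": 0.10},
    {"name": "large",  "vcpus": 8, "ram_gb": 32, "price_per_h": 0.08},
]
```

For a client needing 4 vCPUs and 16 GB, this picks "large" here, since it is both feasible and cheaper than "medium"; a greedy rule like this keeps rescheduling decisions fast when a preemptible VM is revoked mid-training.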
Analysis of Embedded Controllers Subject to Computational Overruns
Microcontrollers have become an integral part of modern everyday embedded systems, such as smart bikes, cars, and drones. Typically, microcontrollers operate under real-time constraints, which require the timely execution of programs on resource-constrained hardware. As embedded systems become increasingly complex, microcontrollers run the risk of violating their timing constraints, i.e., overrunning the program deadlines. Breaking these constraints can cause severe damage to both the embedded system and the humans interacting with the device. Therefore, it is crucial to analyse embedded systems properly to ensure that they do not pose any significant danger if the microcontroller overruns a few deadlines. However, there are very few tools available for assessing the safety and performance of embedded control systems when considering the implementation of the microcontroller. This thesis aims to fill this gap in the literature by presenting five papers on the analysis of embedded controllers subject to computational overruns. Details about the real-time operating system's implementation are included in the analysis, such as what happens to the controller's internal state representation when the timing constraints are violated. The contribution includes theoretical and computational tools for analysing the embedded system's stability, performance, and real-time properties. The embedded controller is analysed under three different types of timing violations: blackout events (when no control computation is completed during long periods), weakly-hard constraints (when the number of deadline overruns is constrained over a window), and stochastic overruns (when violations of timing constraints are governed by a probabilistic process). These scenarios are combined with different implementation policies to reduce the gap between the analysis and its practical applicability.
The analyses are further validated with a comprehensive experimental campaign performed on both a set of physical processes and multiple simulations. In conclusion, the findings of this thesis reveal that the effect deadline overruns have on the embedded system heavily depends on the implementation details and the system's dynamics. Additionally, the stability analysis of embedded controllers subject to deadline overruns is typically conservative, implying that additional insights can be gained by also analysing the system's performance.
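The weakly-hard scenario above can be made concrete with a small check of an (m, K) constraint: at most m deadline misses in any window of K consecutive jobs. This is a generic illustration of the constraint class, not the thesis's analysis tooling.

```python
# Check a weakly-hard (m, K) constraint over a job trace.
# hits is a list of booleans: True = deadline met, False = deadline missed.
def satisfies_weakly_hard(hits, m, K):
    if len(hits) < K:
        # Trace shorter than one window: just bound the total misses.
        return sum(not h for h in hits) <= m
    # Every window of K consecutive jobs may contain at most m misses.
    return all(sum(not h for h in hits[i:i + K]) <= m
               for i in range(len(hits) - K + 1))
```

For example, a trace with two misses three jobs apart violates (1, 3) but satisfies (2, 3); analyses like those in the thesis then ask whether the closed-loop system remains stable for every trace the constraint admits.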
Enabling Scalability: Graph Hierarchies and Fault Tolerance
In this dissertation, we explore approaches to two techniques for building scalable algorithms. First, we look at different graph problems. We show how to exploit the input graph's inherent hierarchy to obtain scalable graph algorithms. The second technique takes a step back from concrete algorithmic problems. Here, we consider the case of node failures in large distributed systems and present techniques to quickly recover from them.
In the first part of the dissertation, we investigate how hierarchies in graphs can be used to scale algorithms to large inputs. We develop algorithms for three graph problems based on two approaches to build hierarchies. The first approach reduces instance sizes for NP-hard problems by applying so-called reduction rules. These rules can be applied in polynomial time. They either find parts of the input that can be solved in polynomial time, or they identify structures that can be contracted (reduced) into smaller structures without loss of information for the specific problem. After solving the reduced instance using an exponential-time algorithm, these previously contracted structures can be uncontracted to obtain an exact solution for the original input. In addition to a simple preprocessing procedure, reduction rules can also be used in branch-and-reduce algorithms where they are successively applied after each branching step to build a hierarchy of problem kernels of increasing computational hardness. We develop reduction-based algorithms for the classical NP-hard problems Maximum Independent Set and Maximum Cut. The second approach is used for route planning in road networks where we build a hierarchy of road segments based on their importance for long distance shortest paths. By only considering important road segments when we are far away from the source and destination, we can substantially speed up shortest path queries.
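One classical example of such a reduction rule, in the spirit of the branch-and-reduce approach described above, is the degree-one rule for Maximum Independent Set: a vertex with at most one neighbour is always in some maximum independent set, so it can be taken and removed together with its neighbour. This minimal sketch is illustrative; the dissertation's algorithms use a much richer rule set.

```python
# Exhaustively apply the degree-one reduction rule for Maximum Independent
# Set. adj maps each vertex to its set of neighbours; returns the reduced
# graph (the problem kernel) and the vertices safely added to the solution.
def degree_one_reduce(adj):
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    chosen = set()
    while True:
        v = next((u for u in adj if len(adj[u]) <= 1), None)
        if v is None:
            return adj, chosen                    # no rule applies: kernel reached
        # A vertex of degree <= 1 belongs to some maximum independent set:
        # take it, then delete it and its (at most one) neighbour.
        chosen.add(v)
        for u in adj[v] | {v}:
            for w in adj.pop(u):
                if w in adj:
                    adj[w].discard(u)
```

On a path graph 1-2-3-4 the rule alone solves the instance exactly (it selects {1, 3} and leaves an empty kernel); on harder inputs the remaining kernel is handed to the exponential-time search, as the branch-and-reduce scheme above describes.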
In the second part of this dissertation, we take a step back from concrete graph problems and look at more general problems in high performance computing (HPC). Here, due to the ever increasing size and complexity of HPC clusters, we expect hardware and software failures to become more common in massively parallel computations. We present two techniques for applications to recover from failures and resume computation. Both techniques are based on in-memory storage of redundant information and a data distribution that enables fast recovery. The first technique can be used for general purpose distributed processing frameworks: We identify data that is redundantly available on multiple machines and only introduce additional work for the remaining data that is only available on one machine. The second technique is a checkpointing library engineered for fast recovery using a data distribution method that achieves balanced communication loads. Both our techniques have in common that they work in settings where computation after a failure is continued with fewer machines than before. This is in contrast to many previous approaches that, in particular for checkpointing, focus on systems that keep spare resources available to replace failed machines.
Overall, we present different techniques that enable scalable algorithms. While some of these techniques are specific to graph problems, we also present tools for fault-tolerant algorithms and applications in a distributed setting. To show that these can be helpful in many different domains, we evaluate them for graph problems and other applications such as phylogenetic tree inference.