UMSL Bulletin 2023-2024
The 2023-2024 Bulletin and Course Catalog for the University of Missouri-St. Louis.
Challenges in the Design and Implementation of IoT Testbeds in Smart-Cities: A Systematic Review
Advancements in wireless communication and the increased accessibility of low-cost sensing and data processing IoT technologies have increased the research and development of urban monitoring systems. Most smart city research projects rely on deploying proprietary IoT testbeds for indoor and outdoor data collection. Such testbeds typically rely on a three-tier architecture composed of the Endpoint, the Edge, and the Cloud. Managing the system's operation whilst addressing the security and privacy challenges that emerge, such as data privacy controls, network security, and security updates on the devices, is difficult. This work presents a systematic study of the challenges of developing, deploying and managing urban monitoring testbeds, as experienced in a series of urban monitoring research projects, followed by an analysis of the relevant literature. By identifying the challenges in the various projects and organising them under the levels of the V-model development lifecycle, we provide a reference guide for future projects. Understanding the challenges early on will help current and future smart-city IoT research projects reduce implementation time and deliver secure and resilient testbeds.
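As a concrete reading of the three-tier layout described above, the following is a minimal sketch of data flowing from Endpoint to Edge to Cloud; the class names, the batching policy, and the use of plain function calls in place of real network transport are illustrative assumptions, not taken from the reviewed testbeds.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    sensor_id: str
    value: float

class Endpoint:
    """Tier 1: a sensing node producing raw readings."""
    def __init__(self, sensor_id):
        self.sensor_id = sensor_id

    def sample(self, value):
        return Reading(self.sensor_id, value)

class Edge:
    """Tier 2: buffers and aggregates locally before uplink,
    reducing traffic and exposure of raw per-sensor data."""
    def __init__(self, batch_size=3):
        self.batch_size = batch_size
        self.buffer = []

    def ingest(self, reading):
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            batch_avg = mean(r.value for r in self.buffer)
            self.buffer.clear()
            return batch_avg          # ready for uplink
        return None                   # keep buffering

class Cloud:
    """Tier 3: long-term storage and analytics."""
    def __init__(self):
        self.store = []

    def upload(self, aggregate):
        self.store.append(aggregate)

endpoint, edge, cloud = Endpoint("pm25-01"), Edge(), Cloud()
for v in [12.0, 14.5, 13.1, 15.2]:
    aggregate = edge.ingest(endpoint.sample(v))
    if aggregate is not None:
        cloud.upload(aggregate)
print(cloud.store)  # [13.2]: one batched average reached the cloud tier
```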
Rigorous Experimentation For Reinforcement Learning
Scientific fields make advancements by leveraging the knowledge created by others to push the boundary of understanding. The primary tool in many fields for generating knowledge is empirical experimentation. Although common, generating accurate knowledge from empirical experiments is often challenging due to inherent randomness in execution and confounding variables that can obscure the correct interpretation of the results. As such, researchers must hold themselves and others to a high degree of rigor when designing experiments. Unfortunately, most reinforcement learning (RL) experiments lack this rigor, making the knowledge generated from experiments dubious. This dissertation proposes methods to address central issues in RL experimentation.
Evaluating the performance of an RL algorithm is the most common type of experiment in the RL literature. Yet most performance evaluations are incapable of answering a specific research question and produce misleading results. Thus, the first issue we address is how to create a performance evaluation procedure that holds up to scientific standards.
Despite the prevalence of performance evaluation, these types of experiments produce limited knowledge, e.g., they can only show how well an algorithm worked and not why, and they require significant amounts of time and computational resources. As an alternative, this dissertation proposes that scientific testing, the process of conducting carefully controlled experiments designed to further the knowledge and understanding of how an algorithm works, should be the primary form of experimentation.
Lastly, this dissertation provides a case study using policy gradient methods, showing how scientific testing can replace performance evaluation as the primary form of experimentation. In doing so, this dissertation aims to motivate others in the field to adopt more rigorous experimental practices.
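To make the evaluation point above concrete, here is a minimal sketch of a multi-seed performance evaluation with bootstrap confidence intervals, one common ingredient of rigorous RL reporting; the run_trial function, the number of seeds, and the metric are hypothetical stand-ins for whatever algorithm and environment are under study.

```python
import random
import statistics

def run_trial(seed: int) -> float:
    """Hypothetical stand-in for training an RL agent with this seed
    and returning its final performance (e.g. mean episodic return)."""
    rng = random.Random(seed)
    return 100.0 + rng.gauss(0.0, 15.0)  # noisy outcome, as in real runs

def bootstrap_ci(scores, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean score."""
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(scores, k=len(scores)))
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Many independent seeds, fixed before any results are seen, so that
# run-to-run randomness is measured rather than cherry-picked.
scores = [run_trial(seed) for seed in range(30)]
lo, hi = bootstrap_ci(scores)
print(f"mean={statistics.mean(scores):.1f}, 95% CI=({lo:.1f}, {hi:.1f})")
```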
A Digital Delay Model Supporting Large Adversarial Delay Variations
Dynamic digital timing analysis is a promising alternative to analog simulations for verifying particularly timing-critical parts of a circuit. A necessary prerequisite is a digital delay model, which allows one to accurately predict the input-to-output delay of a given transition in the input signal(s) of a gate. Since all existing digital delay models for dynamic digital timing analysis are deterministic, however, they cannot cover delay fluctuations caused by PVT variations, aging and analog signal noise. The only exception known to us is the -IDM introduced by Függer et al. at DATE'18, which allows adding (very) small adversarially chosen delay variations to the deterministic involution delay model without endangering its faithfulness. In this paper, we show that it is possible to extend the range of allowed delay variations so significantly that realistic PVT variations and aging are covered by the resulting extended -IDM.
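To give a flavour of the mechanism just described, here is a toy Python sketch of a single-history delay channel in which each transition's delay is a nominal delta(T) plus an adversarially bounded perturbation; the exponential delay function, the simplified cancellation rule, and the use of random sampling in place of a true adversary are assumptions for illustration, not the model from the paper.

```python
import math
import random

def make_exp_delay(d_inf, d_min, tau):
    """Illustrative delay function delta(T): the delay approaches d_inf
    when the input arrives long after the previous output transition
    (large T) and shrinks towards d_min for rapid successions."""
    def delta(T):
        return d_inf - (d_inf - d_min) * math.exp(-T / tau)
    return delta

def channel(input_times, delta, eps, seed=0):
    """Single-history channel: an input transition at time t is delayed
    by delta(T) plus a perturbation in [-eps, +eps], where T is the time
    since the previous output transition. Transitions whose output would
    not come after the previous one are dropped, a simplified stand-in
    for pulse cancellation."""
    rng = random.Random(seed)
    outputs, last_out = [], float("-inf")
    for t in input_times:
        d = delta(t - last_out) + rng.uniform(-eps, eps)
        out = t + d
        if out <= last_out:
            continue  # cancelled pulse
        outputs.append(out)
        last_out = out
    return outputs

delta = make_exp_delay(d_inf=2.0, d_min=0.2, tau=1.0)
# The rapid second transition is cancelled; the others pass through.
print(channel([0.0, 0.5, 4.0], delta, eps=0.1))
```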
On the road with RTLola: Testing real driving emissions on your phone
This paper is about shipping runtime verification to the masses. It presents the crucial technology enabling everyday car owners to monitor the behaviour of their cars in the wild. Concretely, we present an Android app that deploys RTLola runtime monitors for the purpose of diagnosing automotive exhaust emissions. For this, it harnesses the availability of cheap Bluetooth adapters for the On-Board Diagnostics (OBD) ports that are ubiquitous in cars nowadays. The app is a central piece in a set of tools and services we have developed for black-box analysis of automotive vehicles. We detail its use in the context of real driving emission (RDE) tests and report on sample runs that helped identify violations of the regulatory framework currently valid in the European Union.
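As a rough illustration of what such a monitor computes, here is a minimal Python sketch of a distance-normalised emissions check over a stream of OBD-style samples; the sample fields, the hypothetical NOx mass-flow input, the 80 mg/km example threshold, and the omission of the full RDE windowing rules are all simplifying assumptions rather than the RTLola specification used by the app.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    dt_s: float        # seconds since the previous sample
    speed_kmh: float   # vehicle speed read from the OBD port
    nox_mg_s: float    # instantaneous NOx mass flow (hypothetical input)

def monitor(samples, limit_mg_per_km=80.0):
    """Online check of distance-normalised NOx emissions.
    Yields a violation as soon as the running average exceeds the
    limit; real RDE evaluation uses more elaborate windowing."""
    nox_mg, dist_km = 0.0, 0.0
    for i, s in enumerate(samples):
        nox_mg += s.nox_mg_s * s.dt_s
        dist_km += s.speed_kmh * s.dt_s / 3600.0
        if dist_km > 0 and nox_mg / dist_km > limit_mg_per_km:
            yield i, nox_mg / dist_km

trip = [Sample(1.0, 50.0, 1.0), Sample(1.0, 50.0, 2.5), Sample(1.0, 50.0, 3.0)]
for idx, level in monitor(trip):
    print(f"sample {idx}: {level:.0f} mg/km exceeds the limit")
```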
A model of actors and grey failures
Existing models for the analysis of concurrent processes tend to focus on fail-stop failures, where processes are either working or permanently stopped, and their state (working/stopped) is known. In fact, systems are often affected by grey failures: failures that are latent, possibly transient, and may affect the system in subtle ways that later lead to major issues (such as crashes, limited availability, or overload). We introduce a model of actor-based systems with grey failures, based on two interlinked layers: an actor model, given as an asynchronous process calculus with discrete time, and a failure model that represents failure patterns to inject into the system. Our failure model captures not only fail-stop node and link failures, but also grey failures (e.g., partial and transient ones). We give a behavioural equivalence relation based on weak barbed bisimulation to compare systems on the basis of their ability to recover from failures, and on this basis we define some desirable properties of reliable systems. By doing so, we reduce the problem of checking reliability properties of systems to the problem of checking bisimulation.
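To picture the two-layer setup, here is a toy Python sketch in which a discrete-time scheduler drives actors while a separate failure pattern silently drops messages on one link for a bounded interval; the actor API, the clock granularity, and the concrete failure pattern are illustrative assumptions, not the calculus from the paper.

```python
from collections import deque

class Actor:
    def __init__(self, name):
        self.name, self.inbox = name, deque()

    def receive(self, msg, send):
        if msg == "ping":
            send(self.name, "client", "pong")  # toy reply to a fixed peer

class System:
    """Two interlinked layers: actors exchanging messages over links,
    and a failure model that decides, per discrete time step, whether
    a link silently drops a message (a transient grey failure)."""
    def __init__(self, actors, link_down):
        self.actors = {a.name: a for a in actors}
        self.link_down = link_down   # (src, dst, time) -> bool
        self.time, self.delivered = 0, []

    def send(self, src, dst, msg):
        if self.link_down(src, dst, self.time):
            return                   # latent: no one observes the drop
        self.actors[dst].inbox.append(msg)
        self.delivered.append((self.time, src, dst, msg))

    def step(self):
        for actor in self.actors.values():
            if actor.inbox:
                actor.receive(actor.inbox.popleft(), self.send)
        self.time += 1

# Transient grey failure: client->server messages are lost at times 1-2.
flaky = lambda src, dst, t: (src, dst) == ("client", "server") and 1 <= t <= 2
client, server = Actor("client"), Actor("server")
system = System([client, server], flaky)
for _ in range(4):
    system.send("client", "server", "ping")  # the client retries each step
    system.step()
print(system.delivered)  # pings at t=0 and t=3 get through; replies follow
```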
Performance, memory efficiency and programmability: the ambitious triptych of combining vertex-centricity with HPC
The field of graph processing has grown significantly due to the flexibility and wide applicability of the graph data structure. In the meantime, so has interest from the community in developing new approaches to graph processing applications. In 2010, Google introduced the vertex-centric programming model through their framework Pregel. This consists of expressing computation from the perspective of a vertex, whilst inter-vertex communications are achieved via data exchanges along incoming and outgoing edges, using the message-passing abstraction provided. Pregel's high-level programming interface, designed around a set of simple functions, provides ease of programmability to the user. The aim is to enable the development of graph processing applications without requiring expertise in optimisation or parallel programming. Such challenges are instead abstracted from the user and offloaded to the underlying framework. However, fine-grained synchronisation, unpredictable memory access patterns and multiple sources of load imbalance make it difficult to implement the vertex-centric model efficiently on high-performance computing platforms without sacrificing programmability.
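To make the programming model concrete, here is a minimal sketch of a synchronous vertex-centric engine computing single-source shortest paths, in the spirit of Pregel's compute/message/superstep loop; the function names and the tiny example graph are illustrative choices, not the iPregel API.

```python
INF = float("inf")

def pregel_sssp(graph, source):
    """Toy synchronous vertex-centric engine for single-source shortest
    paths. graph maps each vertex to a list of (neighbour, edge_weight).
    Each superstep, every active vertex combines its incoming messages,
    and any vertex whose value improved sends updates along its outgoing
    edges. Execution halts when no messages remain in flight."""
    value = {v: INF for v in graph}
    messages = {source: [0.0]}                 # kick off the source
    while messages:
        next_messages = {}
        for v, incoming in messages.items():   # "compute" per active vertex
            candidate = min(incoming)
            if candidate < value[v]:
                value[v] = candidate
                for neighbour, weight in graph[v]:
                    next_messages.setdefault(neighbour, []).append(candidate + weight)
        messages = next_messages               # superstep barrier
    return value

graph = {
    "a": [("b", 1.0), ("c", 4.0)],
    "b": [("c", 2.0)],
    "c": [],
}
print(pregel_sssp(graph, "a"))  # {'a': 0.0, 'b': 1.0, 'c': 3.0}
```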
This research focuses on combining vertex-centricity and High-Performance Computing (HPC), resulting in the development of a shared-memory framework, iPregel, which demonstrates that performance and memory efficiency similar to those of non-vertex-centric approaches can be achieved while preserving the programmability benefits of vertex-centricity. Non-volatile memory is then explored to extend single-node capabilities, and multiple versions of iPregel are implemented to experiment with various data movement strategies.
Distributed-memory parallelism is then investigated to overcome the resource limitations of single-node processing. A second framework, named DiP, ports iPregel's applicable optimisations to distributed memory and prioritises performance and high scalability.
This research has resulted in a set of techniques and optimisations illustrated through the shared-memory framework iPregel and the distributed-memory framework DiP. The former closes a gap of several orders of magnitude in both performance and memory efficiency, and is even able to process a graph of 750 billion edges using non-volatile memory. The latter proves that this competitiveness can also be scaled beyond a single node, enabling the processing of the largest graph generated in this research, comprising 1.6 trillion edges. Most importantly, both frameworks achieve these performance and capability gains whilst preserving programmability, which is the cornerstone of the vertex-centric programming model. This research therefore demonstrates that, by combining vertex-centricity and HPC, it is possible to reconcile performance, memory efficiency and programmability.