Local Reasoning about Probabilistic Behaviour for Classical-Quantum Programs
Verifying the functional correctness of programs with both classical and
quantum constructs is a challenging task. The presence of probabilistic
behaviour entailed by quantum measurements and unbounded while loops complicates
the verification task greatly. We propose a new quantum Hoare logic for local
reasoning about probabilistic behaviour by introducing distribution formulas to
specify probabilistic properties. We show that the proof rules in the logic are
sound with respect to a denotational semantics. To demonstrate the
effectiveness of the logic, we formally verify the correctness of non-trivial
quantum algorithms including the HHL and Shor's algorithms.
Comment: 27 pages. arXiv admin note: text overlap with arXiv:2107.0080
Resilient and Scalable Forwarding for Software-Defined Networks with P4-Programmable Switches
Traditional networking devices support only fixed features and limited configurability.
Network softwarization leverages programmable software and hardware platforms to remove those limitations.
In this context, the concept of programmable data planes allows operators to directly program the packet processing pipeline of networking devices and to create custom control plane algorithms.
This flexibility enables the design of novel networking mechanisms where the status quo struggles to meet the high demands of next-generation networks like 5G, the Internet of Things, cloud computing, and Industry 4.0.
P4 is the most popular technology to implement programmable data planes.
However, programmable data planes, and in particular, the P4 technology, emerged only recently.
Thus, P4 support for some well-established networking concepts is still lacking and several issues remain unsolved due to the different characteristics of programmable data planes in comparison to traditional networking.
The research of this thesis focuses on two open issues of programmable data planes.
First, it develops resilient and efficient forwarding mechanisms for the P4 data plane, as there are no satisfying state-of-the-art best practices yet.
Second, it enables BIER in high-performance P4 data planes.
BIER is a novel, scalable, and efficient transport mechanism for IP multicast traffic which so far has only very limited support on high-performance forwarding platforms.
The main results of this thesis are published as eight peer-reviewed publications and one post-publication peer-reviewed publication. The results cover the development of suitable resilience mechanisms for P4 data planes, the development and implementation of resilient BIER forwarding in P4, and extensive evaluations of all developed and implemented mechanisms. Furthermore, the results contain a comprehensive P4 literature study.
Two more peer-reviewed papers contain additional content that is not directly related to the main results.
They implement congestion avoidance mechanisms in P4 and develop a scheduling concept to find cost-optimized load schedules based on day-ahead forecasts.
Developing a high-performance soil fertility status prediction voting ensemble using brute exhaustive optimization in automated multiprecision weights of hybrid classifiers
A Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in Information and Communication Science and Engineering of the Nelson Mandela African Institution of Science and Technology

With the advent of machine learning (ML) techniques, various algorithms have been applied in previous studies to develop models for predicting soil fertility status. However, these models use varying fertility target classes, and variations have been reported in their predictive performance. As a result, practical application of these models for obtaining the most accurate predictions may be hindered. While the weighted voting ensemble (WVE) ML technique can improve soil fertility status prediction by aggregating individual models' predictions, guaranteeing that optimal WVE weight assignments are found is challenging. Although a brute exhaustive search procedure can be applied to this task, the use of automatically generated, precise classifier weight combinations as search spaces for successful optimization has not been explored. This research aims to develop a high-performance soil fertility status prediction voting ensemble using brute exhaustive optimization in automated 1EXP(-)Z+ multi-precision weights of hybrid classifiers. Soil chemical properties and ML algorithms for modeling soil fertility status were identified. Base hybrid ML classification models for predicting soil fertility status were evaluated using Tanzania as a case study. Finally, the base hybrid ML WVE models were optimized using a novel search-space generation algorithm developed for the brute exhaustive search procedure, which guarantees that the optimal solution is found. The research was designed using the design science research methodology: the unsupervised K-means algorithm with a knee-detection method was applied to find the optimal number of soil fertility status target classes, and supervised learning algorithms were applied to model classifiers for those optimal classes. Three soil fertility target classes were identified by the clustering technique. The model achieved a predictive accuracy of 98.93% on test data, with AUCs of 82%, 83%, and 87% for the low, medium, and high soil fertility target classes, respectively. While these performances are higher than those of models in previous studies, 92% correct classification was also obtained on validation against external, unseen laboratory-tested soil results. Therefore, soil testing laboratories and farmers should consider using the model to manage soil fertility smartly, which may lead to improved crop growth and productivity. The government could set agriculture-related policies that require the use of the model by farmers, with the provision of agricultural input subsidies. Future work could develop an integrated real-time web and mobile application for providing farmers with soil fertility status information.
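The brute exhaustive WVE optimization described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the dissertation's actual algorithm: the function name, the fixed 0.1-step weight grid, and the toy inputs are all assumptions made for illustration.

```python
# Hypothetical sketch of brute-force weight search for a weighted voting
# ensemble (WVE). The 0.1-step grid stands in for the dissertation's
# multi-precision search spaces; names and data are illustrative only.
from itertools import product

import numpy as np


def brute_force_wve_weights(probas, y_true, step=0.1):
    """Exhaustively search classifier weights on a fixed-precision grid.

    probas : list of (n_samples, n_classes) probability arrays, one per
             base classifier.
    y_true : (n_samples,) true class labels.
    step   : grid resolution for each weight in [0, 1].
    """
    grid = np.arange(0.0, 1.0 + step, step)
    best_weights, best_acc = None, -1.0
    for weights in product(grid, repeat=len(probas)):
        if sum(weights) == 0:  # skip the degenerate all-zero combination
            continue
        # Weighted soft vote: sum of weighted class probabilities.
        combined = sum(w * p for w, p in zip(weights, probas))
        acc = np.mean(np.argmax(combined, axis=1) == y_true)
        if acc > best_acc:
            best_acc, best_weights = acc, weights
    return best_weights, best_acc


# Toy example: two "classifiers" on three samples with two classes.
p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
p2 = np.array([[0.6, 0.4], [0.7, 0.3], [0.1, 0.9]])
y = np.array([0, 1, 1])
w, acc = brute_force_wve_weights([p1, p2], y)
```

Because every grid point is visited, the search is guaranteed to return the best weight combination expressible at the chosen precision, at the cost of runtime that grows exponentially with the number of base classifiers.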
Jornadas Nacionales de Investigación en Ciberseguridad: proceedings of the 8th Jornadas Nacionales de Investigación en Ciberseguridad: Vigo, 21–23 June 2023
Jornadas Nacionales de Investigación en Ciberseguridad (8ª. 2023. Vigo). atlanTTic. AMTEGA: Axencia para a modernización tecnolóxica de Galicia. INCIBE: Instituto Nacional de Ciberseguridad.
Abstractions for Probabilistic Programming to Support Model Development
Probabilistic programming is a recent advancement in probabilistic modeling whereby we can express a model as a program with little concern for the details of probabilistic inference.
Probabilistic programming thereby provides a clean and powerful abstraction to its users, letting even non-experts develop clear and concise models that can leverage state-of-the-art computational inference algorithms. This model-as-program representation also presents a unique opportunity: we can apply methods from the study of programming languages directly onto probabilistic models. By developing techniques to analyze, transform, or extend the capabilities of probabilistic programs, we can immediately improve the workflow of probabilistic modeling and benefit all of its applications throughout science and industry.
The aim of this dissertation is to support an ideal probabilistic modeling workflow by addressing two limitations of probabilistic programming: that a program can only represent one model; and that the structure of the model that it represents is often opaque to users and to the compiler. In particular, I make the following primary contributions:
(1) I introduce Multi-Model Probabilistic Programming: an extension of probabilistic programming whereby a program can represent a network of interrelated models. This new representation allows users to construct and leverage spaces of models in the same way that probabilistic programs do for individual models. Multi-Model Probabilistic Programming lets us visualize and navigate solution spaces, track and document model development paths, and audit modeler degrees of freedom to mitigate issues like p-hacking. It also provides an efficient computational foundation for the automation of model-space applications like model search, sensitivity analysis, and ensemble methods.
I give a formal language specification and semantics for Multi-Model Probabilistic Programming built on the Stan language, I provide algorithms for the fundamental model-space operations along with proofs of correctness and efficiency, and I present a prototype implementation, with which I demonstrate a variety of practical applications.
(2) I present a method for automatically transforming probabilistic programs into semantically related forms by using static analysis and constraint solving to recover the structure of their underlying models. In particular, I automate two general model transformations that are required for diagnostic checks which are important steps of a model-building workflow. Automating these transformations frees the user from manually rewriting their models, thereby avoiding potential correctness and efficiency issues.
(3) I present a probabilistic program analysis tool, “Pedantic Mode”, that automatically warns users about potential statistical issues with the model described by their program. “Pedantic Mode” uses specialized static analysis methods to decompose the structure of the underlying model.
Lastly, I discuss future work in these areas, such as advanced model-space algorithms and other general-purpose model transformations. I also discuss how these ideas may fit into future modeling workflows as these technologies mature.
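The model-as-program abstraction described in this dissertation can be illustrated with a minimal sketch. This is a hypothetical toy in plain Python, not the dissertation's Stan-based system: a model is written once as a log-density program, and a generic inference routine works on it without knowing its internals.

```python
# Toy illustration of the model-as-program idea (hypothetical sketch,
# not the dissertation's Stan-based system). The model is a log-density
# function; generic grid approximation performs the inference.
import math


def coin_model(theta, data):
    """Beta(1,1) prior with a Bernoulli likelihood, as a log-density."""
    if not 0.0 < theta < 1.0:
        return float("-inf")
    heads = sum(data)
    tails = len(data) - heads
    return heads * math.log(theta) + tails * math.log(1.0 - theta)


def grid_posterior_mean(model, data, n=10_000):
    """Generic inference: normalize the model's density on a grid."""
    grid = [(i + 0.5) / n for i in range(n)]
    weights = [math.exp(model(t, data)) for t in grid]
    z = sum(weights)
    return sum(t * w for t, w in zip(grid, weights)) / z


data = [1, 1, 0, 1]  # three heads, one tail
mean = grid_posterior_mean(coin_model, data)
# The exact posterior here is Beta(4, 2), whose mean is 4/6.
```

Because the model is an ordinary program value, it can be passed to different inference routines, analyzed, or transformed, which is the property that the dissertation's contributions (multi-model programs, automated transformations, "Pedantic Mode") build on at much larger scale.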
LIPIcs, Volume 277, GIScience 2023, Complete Volume
12th International Conference on Geographic Information Science: GIScience 2023, September 12–15, 2023, Leeds, UK
Toward a formal theory for computing machines made out of whatever physics offers: extended version
Approaching limitations of digital computing technologies have spurred
research in neuromorphic and other unconventional approaches to computing. Here
we argue that if we want to systematically engineer computing systems that are
based on unconventional physical effects, we need guidance from a formal theory
that is different from the symbolic-algorithmic theory of today's computer
science textbooks. We propose a general strategy for developing such a theory,
and within that general view, a specific approach that we call "fluent
computing". In contrast to Turing, who modeled computing processes from a
top-down perspective as symbolic reasoning, we adopt the scientific paradigm of
physics and model physical computing systems bottom-up by formalizing what can
ultimately be measured in any physical substrate. This leads to an
understanding of computing as the structuring of processes, while classical
models of computing systems describe the processing of structures.Comment: 76 pages. This is an extended version of a perspective article with
the same title that will appear in Nature Communications soon after this
manuscript goes public on arxi
High Frequency Physiological Data Quality Modelling in the Intensive Care Unit
Intensive care medicine is a resource-intensive environment in which technical and clinical decision making relies on rapidly assimilating a huge amount of categorical and time-series physiologic data. These signals arrive at variable frequencies and with variable quality. Intensive care clinicians rely on high-frequency measurements of the patient's physiologic state to assess critical illness and the response to therapies. Physiological waveforms have the potential to reveal details about the patient state in very fine resolution, and can assist, augment, or even automate decision making in intensive care. However, these high-frequency time-series physiologic signals pose many challenges for modelling. They contain noise, artefacts, and systematic timing errors, all of which can affect the quality and accuracy of models being developed and the reproducibility of results. In this context, the central theme of this thesis is to model the process of data collection in an intensive care environment from a statistical, metrological, and biosignals engineering perspective, with the aim of identifying, quantifying, and, where possible, correcting errors introduced by the data collection systems. Three aspects of physiological measurement were explored in detail: measurement of blood oxygenation, measurement of blood pressure, and measurement of time. A literature review of sources of errors and uncertainty in timing systems used in intensive care units was undertaken. A signal alignment algorithm was developed and applied to approximately 34,000 patient-hours of simultaneously collected electroencephalography and physiological waveforms recorded at the bedside using two different medical devices.
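One common approach to the kind of signal alignment described above is cross-correlation lag estimation; the thesis's actual algorithm may differ, so the following is a hedged sketch under that assumption, with illustrative names and toy data.

```python
# Hedged sketch of cross-correlation lag estimation, one common way to
# align two recordings of the same physiology made by devices with a
# clock offset. Not necessarily the thesis's algorithm.
import numpy as np


def estimate_delay(a, b):
    """Estimate the delay d (in samples) such that b[n] ~= a[n - d]."""
    # Z-score both signals so amplitude differences do not bias the peak.
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    # np.correlate(a, b, "full")[j] = sum_n a[n + k] * b[n], k = j - (len(b) - 1).
    corr = np.correlate(a, b, mode="full")
    k = np.argmax(corr) - (len(b) - 1)  # lag with maximum correlation
    return -k  # peak at k = -d, so the delay of b relative to a is -k


# Toy example: b is a copy of a delayed by 5 samples.
rng = np.random.default_rng(0)
a = rng.standard_normal(200)
b = np.roll(a, 5)
delay = estimate_delay(a, b)
```

Once the lag is estimated, one stream can be shifted by that many samples before the two devices' waveforms are merged, which is the essence of correcting systematic timing errors between bedside monitors.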