6,013 research outputs found
Intelligent Match Merging to Prevent Obfuscation Attacks on Software Plagiarism Detectors
Due to the rising number of computer science students, instructors rely on current source-code plagiarism detection tools to prevent students from submitting plagiarized programming assignments. While these token-based plagiarism detectors are inherently resilient against simple obfuscations, recently published obfuscation tools allow students to effortlessly modify their submissions to evade detection. The advent of ChatGPT has raised additional concerns about its obfuscation capabilities and the need for effective counter-strategies. Existing defense mechanisms against obfuscation are often limited by their specificity to certain attacks or their dependence on particular programming languages, requiring tedious and error-prone re-implementation. In response to this challenge, this work introduces a novel defense mechanism against automatic obfuscation attacks called match merging. It exploits the fact that obfuscation attacks modify the token sequence to split up matches between two submissions, so that the broken matches are discarded by the plagiarism detector. Match merging reverses the effect of these attacks by intelligently merging neighboring matches based on a heuristic designed to minimize false positives. The resilience of our method against classic obfuscation attacks is demonstrated through evaluations on various real-world datasets, including student assignments and programming competitions, across six different attack scenarios. Moreover, it significantly improves detection performance against AI-based obfuscation.
What sets this mechanism apart is its independence from language and attack, while its minimal runtime overhead makes it seamlessly compatible with other defense mechanisms.
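The merging step described above can be sketched as follows; the (start_a, start_b, length) match representation and the gap/length thresholds are illustrative assumptions, not the paper's actual heuristic.

```python
def merge_neighboring_matches(matches, max_gap=2, min_len=5):
    """Merge adjacent token matches separated by small gaps.

    `matches` is a list of (start_a, start_b, length) token-index tuples;
    `max_gap` and `min_len` are illustrative knobs, not the paper's
    actual heuristic."""
    merged = []
    for start_a, start_b, length in sorted(matches):
        if merged:
            a, b, n = merged[-1]
            gap_a = start_a - (a + n)
            gap_b = start_b - (b + n)
            if 0 <= gap_a <= max_gap and 0 <= gap_b <= max_gap:
                # bridge the small break an obfuscation edit introduced
                merged[-1] = (a, b, start_a + length - a)
                continue
        merged.append((start_a, start_b, length))
    # drop fragments that are still too short to count as matches
    return [m for m in merged if m[2] >= min_len]
```

Merging first, then filtering by minimum length, is what lets long matches survive even after an attack has chopped them into sub-threshold pieces.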
A clinical decision support system for detecting and mitigating potentially inappropriate medications
Background: Medication errors are a leading cause of preventable harm to patients. In older adults, the impact of ageing on the therapeutic effectiveness and safety of drugs is a significant concern, especially for those over 65. Consequently, certain medications called Potentially Inappropriate Medications (PIMs) can be dangerous in the elderly and should be avoided. Tackling PIMs by health professionals and patients can be time-consuming and error-prone, as the criteria underlying the definition of PIMs are complex and subject to frequent updates. Moreover, the criteria are not available in a representation that health systems can interpret and reason with directly.
Objectives: This thesis aims to demonstrate the feasibility of using an ontology/rule-based approach in a clinical knowledge base to identify potentially inappropriate medications (PIMs). In addition, it shows how constraint solvers can be used effectively to suggest alternative medications and administration schedules to resolve or minimise the undesirable side effects of PIMs.
Methodology: To address these objectives, we propose a novel integrated approach using formal rules to represent the PIMs criteria and inference engines to perform the reasoning presented in the context of a Clinical Decision Support System (CDSS). The approach aims to detect, solve, or minimise undesirable side-effects of PIMs through an ontology (knowledge base) and inference engines incorporating multiple reasoning approaches.
Contributions: The main contribution lies in the framework to formalise PIMs, including the steps required to define guideline requisites, create inference rules to detect inappropriate medications, and propose alternative drugs. No formalisation of the selected guideline (Beers Criteria) can be found in the literature, and hence this thesis provides a novel ontology for it. Moreover, our process of minimising undesirable side effects offers a novel approach that enhances and optimises the drug rescheduling process, providing a more accurate way to minimise the effect of drug interactions in clinical practice.
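As a rough illustration of the rule-based detection idea (the thesis itself uses an ontology and inference engines, not plain Python dicts, and the rules below are simplified stand-ins for actual Beers Criteria entries):

```python
# Hypothetical, simplified encoding of Beers-style PIM rules.
PIM_RULES = [
    {"drug": "diazepam", "min_age": 65,
     "reason": "long-acting benzodiazepine: sedation and fall risk"},
    {"drug": "ketorolac", "min_age": 65,
     "reason": "NSAID: gastrointestinal bleeding risk in older adults"},
]

def detect_pims(patient_age, medications):
    """Return (drug, reason) pairs flagged as potentially inappropriate."""
    flagged = []
    for rule in PIM_RULES:
        if patient_age >= rule["min_age"] and rule["drug"] in medications:
            flagged.append((rule["drug"], rule["reason"]))
    return flagged
```

Representing each criterion as data rather than hard-coded logic is what makes frequent guideline updates tractable, which is the motivation for the ontology-based encoding.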
Design and Validation of Cyber-Physical Systems Through Co-Simulation: The Voronoi Tessellation Use Case
This paper reports on the use of co-simulation techniques to build prototypes of co-operative autonomous robotic cyber-physical systems. Designing such systems involves a mission-specific planner algorithm, a control algorithm to drive an agent performing its task, and a plant model to simulate the agent dynamics. An application aimed at positioning a swarm of unmanned aerial vehicles (drones) in a bounded area, exploiting a Voronoi tessellation algorithm developed in this work, is taken as a case study. The paper shows how co-simulation allows testing the complex system at the design phase using models created with different languages and tools. The paper then reports on how the adopted co-simulation platform enables control parameter calibration by exploiting design space exploration technology. The INTO-CPS co-simulation platform, compliant with the Functional Mock-up Interface standard to exchange dynamic simulation models using various languages, was used in this work. The different software modules were written in Modelica, C, and Python. In particular, the latter was used to implement an original variant of the Voronoi algorithm to tessellate a convex polygonal region, by means of dummy points added at appropriate positions outside the bounding polygon. A key contribution of this case study is that it demonstrates how an accurate simulation of a cooperative drone swarm requires modeling the physical plant together with the high-level coordination algorithm. The coupling of co-simulation and design space exploration has been demonstrated to support control parameter calibration to optimize energy consumption and convergence time to the target positions of the drone swarm. From a practical point of view, this makes it possible to test the ability of the swarm to self-deploy in space in order to achieve optimal detection coverage and allow unmanned aerial vehicles in a swarm to coordinate with each other.
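One standard way to bound Voronoi cells with dummy points is to mirror each site across every edge of the convex polygon; the sketch below assumes that reflection-based placement, which may differ from the variant actually developed in the paper.

```python
def reflect_across_edge(p, a, b):
    """Reflect point p across the line through edge (a, b)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy  # foot of the perpendicular
    return (2 * fx - px, 2 * fy - py)

def dummy_points(points, polygon):
    """Mirror every site across every polygon edge, so the Voronoi cells
    of the original sites are clipped to the polygon."""
    dummies = []
    n = len(polygon)
    for p in points:
        for i in range(n):
            dummies.append(reflect_across_edge(p, polygon[i], polygon[(i + 1) % n]))
    return dummies
```

With the mirrored dummies added, each original site's Voronoi cell boundary along a polygon edge coincides with that edge, which bounds the otherwise infinite outer cells.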
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Barrier-Based Test Synthesis for Safety-Critical Systems Subject to Timed Reach-Avoid Specifications
We propose an adversarial, time-varying test-synthesis procedure for
safety-critical systems without requiring specific knowledge of the underlying
controller steering the system. From a broader test and evaluation context,
determination of difficult tests of system behavior is important as these tests
would elucidate problematic system phenomena before these mistakes can engender
problematic outcomes, e.g. loss of human life in autonomous cars, costly
failures for airplane systems, etc. Our approach builds on existing,
simulation-based work in the test and evaluation literature by offering a
controller-agnostic test-synthesis procedure that provides a series of
benchmark tests with which to determine controller reliability. To achieve
this, our approach codifies the system objective as a timed reach-avoid
specification. Then, by coupling control barrier functions with this class of
specifications, we construct an instantaneous difficulty metric whose minimizer
corresponds to the most difficult test at that system state. We use this
instantaneous difficulty metric in a game-theoretic fashion, to produce an
adversarial, time-varying test-synthesis procedure that does not require
specific knowledge of the system's controller, but can still provably identify
realizable and maximally difficult tests of system behavior. Finally, we
develop this test-synthesis procedure for both continuous and discrete-time
systems and showcase our test-synthesis procedure on simulated and hardware
examples
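The instantaneous difficulty metric can be caricatured in discrete time as picking, among candidate test inputs, the one that minimizes a barrier margin at the next state; the names, dynamics, and barrier below are toy assumptions, not the paper's construction.

```python
def hardest_test_input(state, candidate_inputs, barrier, dynamics):
    """Pick the test input minimizing the barrier margin at the next state,
    i.e. the most difficult test at this state (controller-agnostic: only
    the plant model and barrier are needed)."""
    return min(candidate_inputs, key=lambda u: barrier(dynamics(state, u)))

# Toy 1-D example: safe set |x| <= 1 encoded by barrier h(x) = 1 - x**2.
dynamics = lambda x, u: x + 0.1 * u
barrier = lambda x: 1.0 - x ** 2
u_star = hardest_test_input(0.5, [-1.0, 0.0, 1.0], barrier, dynamics)
```

Iterating this pointwise minimization over time yields an adversarial, time-varying test sequence without ever inspecting the controller itself, mirroring the game-theoretic use described above.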
A Comprehensive Survey on Applications of Transformers for Deep Learning Tasks
The Transformer is a deep neural network that employs a self-attention mechanism
to comprehend the contextual relationships within sequential data. Unlike
conventional neural networks or updated versions of Recurrent Neural Networks
(RNNs) such as Long Short-Term Memory (LSTM), transformer models excel in
handling long dependencies between input sequence elements and enable parallel
processing. As a result, transformer-based models have attracted substantial
interest among researchers in the field of artificial intelligence. This can be
attributed to their immense potential and remarkable achievements, not only in
Natural Language Processing (NLP) tasks but also in a wide range of domains,
including computer vision, audio and speech processing, healthcare, and the
Internet of Things (IoT). Although several survey papers have been published
highlighting the transformer's contributions in specific fields, architectural
differences, or performance evaluations, there is still a significant absence
of a comprehensive survey paper encompassing its major applications across
various domains. Therefore, we undertook the task of filling this gap by
conducting an extensive survey of proposed transformer models from 2017 to
2022. Our survey encompasses the identification of the top five application
domains for transformer-based models, namely: NLP, Computer Vision,
Multi-Modality, Audio and Speech Processing, and Signal Processing. We analyze
the impact of highly influential transformer-based models in these domains and
subsequently classify them based on their respective tasks using a proposed
taxonomy. Our aim is to shed light on the existing potential and future
possibilities of transformers for enthusiastic researchers, thus contributing
to the broader understanding of this groundbreaking technology
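The self-attention mechanism at the heart of the transformer can be sketched in a few lines; this single-head, pure-Python version omits the learned projections and multi-head machinery of real models.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(Q, K, V):
    """Scaled dot-product attention over lists of vectors: each query is
    compared with every key, and the resulting weights mix the values.
    All positions are processed independently, hence the parallelism."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out
```

Because every query attends to every key directly, the path length between any two sequence positions is constant, which is why transformers handle long-range dependencies better than RNNs.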
Towards Efficient Explainability of Schedulability Properties in Real-Time Systems
The notion of efficient explainability was recently introduced in the context of hard real-time scheduling: a claim that a real-time system is schedulable (i.e., that it will always meet all deadlines during run-time) is defined to be efficiently explainable if there is a proof of such schedulability that can be verified by a polynomial-time algorithm. We further explore this notion by (i) classifying a variety of common schedulability analysis problems according to whether they are efficiently explainable or not; and (ii) developing strategies for dealing with those determined not to be efficiently explainable, primarily by identifying practically meaningful sub-problems that are efficiently explainable.
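A concrete instance of an efficiently explainable claim: for implicit-deadline periodic tasks under EDF, total utilization at most 1 is necessary and sufficient for schedulability, so the task parameters themselves form a proof checkable in linear time. (This is an illustrative example, not the paper's formal definition.)

```python
from fractions import Fraction

def verify_edf_certificate(tasks):
    """Polynomial-time verification of a schedulability claim for
    implicit-deadline periodic tasks under EDF: sum the utilizations
    C_i / T_i and check the total does not exceed 1. Exact rational
    arithmetic avoids floating-point rounding in the proof check."""
    utilization = sum(Fraction(c, t) for c, t in tasks)  # (wcet, period)
    return utilization <= 1
```

The verifier runs in O(n) time, so any "schedulable" claim in this setting is efficiently explainable in the sense defined above.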
Morpheus: Automated Safety Verification of Data-Dependent Parser Combinator Programs
Parser combinators are a well-known mechanism used for the compositional construction of parsers, and have been shown to be particularly useful in writing parsers for rich grammars with data-dependencies and global state. Verifying applications written using them, however, has proven to be challenging, in large part because of the inherently effectful nature of the parsers being composed and the difficulty in reasoning about the arbitrarily rich data-dependent semantic actions that can be associated with parsing actions. In this paper, we address these challenges by defining a parser combinator framework called Morpheus, equipped with abstractions for defining composable effects tailored for parsing and semantic actions, and a rich specification language used to define safety properties over the constituent parsers comprising a program. Even though its abstractions yield many of the same expressivity benefits as other parser combinator systems, Morpheus is carefully engineered to yield a substantially more tractable automated verification pathway. We demonstrate its utility in verifying a number of realistic, challenging parsing applications, including several cases that involve non-trivial data-dependent relations.
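A minimal sketch of what "data-dependent" means here (far simpler than Morpheus, and not its API): a monadic bind lets the parser chosen next depend on an earlier parse result, as in a length-prefixed field.

```python
# A parser maps (text, pos) to (value, new_pos), or None on failure.

def digit(text, pos):
    """Parse a single decimal digit."""
    if pos < len(text) and text[pos].isdigit():
        return int(text[pos]), pos + 1
    return None

def take(n):
    """Parser that consumes exactly n characters."""
    def parser(text, pos):
        if pos + n <= len(text):
            return text[pos:pos + n], pos + n
        return None
    return parser

def bind(p, f):
    """Sequence two parsers; the second is chosen from the first's value,
    which is what makes the grammar data-dependent."""
    def parser(text, pos):
        r = p(text, pos)
        if r is None:
            return None
        value, pos = r
        return f(value)(text, pos)
    return parser

# A length-prefixed field: one digit n, then exactly n characters.
length_prefixed = bind(digit, take)
```

This dependence of later parsers on earlier results is exactly what escapes context-free reasoning and makes verification of such programs hard.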
Nonlocal games and their device-independent quantum applications
Device-independence is a property of certain protocols that allows one to ensure their proper execution given only classical interaction with devices and assuming the correctness of the laws of physics. This scenario describes the most general form of cryptographic security, in which no trust is placed in the hardware involved; indeed, one may even take it to have been prepared by an adversary.
Many quantum tasks have been shown to admit device-independent protocols by augmentation with "nonlocal games". These are games in which noncommunicating parties jointly attempt to fulfil some conditions imposed by a referee. We introduce examples of such games and examine the optimal strategies of players who are allowed access to different possible shared resources, such as entangled quantum states. We then study their role in self-testing, private random number generation, and secure delegated quantum computation. Hardware imperfections are naturally incorporated in the device-independent scenario as adversarial, and we thus also perform noise robustness analysis where feasible.
We first study a generalization of the Mermin–Peres magic square game to arbitrary rectangular dimensions. After exhibiting some general properties, these "magic rectangle" games are fully characterized in terms of their optimal win probabilities for quantum strategies. We find that for m×n magic rectangle games with dimensions m,n ≥ 3, there are quantum strategies that win with certainty, while for dimensions 1×n quantum strategies do not outperform classical strategies. The final case of dimensions 2×n is richer, and we give upper and lower bounds that both outperform the classical strategies. As an initial usage scenario, we apply our findings to quantum certified randomness expansion to find noise tolerances and rates for all magic rectangle games. To do this, we use our previous results to obtain the winning probabilities of games with a distinguished input for which the devices give a deterministic outcome and follow the analysis of C. A. Miller and Y. Shi [SIAM J. Comput. 46, 1304 (2017)].
Self-testing is a method to verify that one has a particular quantum state from purely classical statistics. For practical applications, such as device-independent delegated verifiable quantum computation, it is crucial that one self-tests multiple Bell states in parallel while keeping the quantum capabilities required of one side to a minimum. We use our 3×n magic rectangle games to obtain a self-test for n Bell states where one side needs only to measure single-qubit Pauli observables. The protocol requires small input sizes [constant for Alice and O(log n) bits for Bob] and is robust with robustness O(n^(5/2)√Δ), where Δ is the closeness of the ideal (perfect) correlations to those observed. To achieve the desired self-test, we introduce a one-side-local quantum strategy for the magic square game that wins with certainty, we generalize this strategy to the family of 3×n magic rectangle games, and we supplement these nonlocal games with extra check rounds (of single and pairs of observables).
Finally, we introduce a device-independent two-prover scheme in which a classical verifier can use a simple untrusted quantum measurement device (the client device) to securely delegate a quantum computation to an untrusted quantum server. To do this, we construct a parallel self-testing protocol to perform device-independent remote state preparation of n qubits and compose this with the unconditionally secure universal verifiable blind quantum computation (VBQC) scheme of J. F. Fitzsimons and E. Kashefi [Phys. Rev. A 96, 012303 (2017)]. Our self-test achieves a multitude of desirable properties for the application we consider, giving rise to practical and fully device-independent VBQC. It certifies parallel measurements of all cardinal and intercardinal directions in the XY-plane as well as the computational basis, uses few input questions (of size logarithmic in n for the client and a constant number communicated to the server), and requires only single-qubit measurements to be performed by the client device
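The classical value of the 3×3 magic square game underlying this family (8/9) is small enough to confirm by brute force over deterministic strategies:

```python
from itertools import product

def magic_square_classical_value():
    """Brute-force the classical value of the 3x3 Mermin-Peres magic
    square game: Alice answers a row with even parity, Bob a column
    with odd parity, and they win iff they agree on the shared cell."""
    even_rows = [t for t in product((0, 1), repeat=3) if sum(t) % 2 == 0]
    odd_cols = [t for t in product((0, 1), repeat=3) if sum(t) % 2 == 1]
    best = 0
    # A deterministic strategy fixes one answer per question for each player.
    for alice in product(even_rows, repeat=3):   # one triple per row
        for bob in product(odd_cols, repeat=3):  # one triple per column
            wins = sum(alice[r][c] == bob[c][r]
                       for r in range(3) for c in range(3))
            best = max(best, wins)
    return best / 9
```

No deterministic pair of tables satisfies all nine cells (the parity constraints are globally inconsistent), so at most 8 of the 9 uniformly chosen questions can be won classically, while the quantum strategy wins with certainty.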
An Empathetic Design Framework for Humanity-Centered AI: A preventative approach to developing more holistic, reliable, and ethical ML products
Machine Learning (ML), a subset of Artificial Intelligence (AI), has grown rapidly over the last decade, evolving through the intersection of the needs of businesses and individuals together with the combined, exponential increase in computing power, data availability, and network infrastructure.
The rise of ML products and services has led to advances in vital sectors including healthcare, finance, automotive, security, and more. These include expediting enhanced diagnosis in patients, strengthening cybersecurity measures, automating manufacturing, and enabling new technologies like self-driving vehicles, robotics, digital assistants, and so-called "chatbots". However, the rise in the development of AI-enabled products and services has not been all positive. In parallel, there have been numerous documented instances of harmful impacts on individuals, communities, and the broader society.
This project focuses on understanding and mitigating negative, unforeseen, and even unconscious consequences of AI/ML by interrogating the presence of bias in the Machine Learning Operations (MLOps) process. Our approach is to better identify and address vulnerabilities at specific phases in the development of an ML product or service. Using strategic foresight methods, this project explores emerging AI trends and develops an array of possible future scenarios, through which bias and other areas of concern are studied to better understand their potential impacts.
As a product of this investigation, we develop an Empathetic Design Framework (EDF), employing a set of lenses and a toolkit that can be effortlessly incorporated into an ML cross-functional team's agile practice in a bid to better identify ML risks and weaknesses, and reduce the occurrence of negative future scenarios.
Finally, this research aims to identify appropriate and impactful insertion points within the MLOps process for utilizing the EDF to mitigate negative potential biases during the ML life cycle
- …