889 research outputs found
Language Design for Reactive Systems: On Modal Models, Time, and Object Orientation in Lingua Franca and SCCharts
Reactive systems play a crucial role in the embedded domain. They continuously interact with their environment, handle concurrent operations, and are commonly expected to provide deterministic behavior to enable application in safety-critical systems. In this context, language design is a key aspect, since carefully tailored language constructs can aid in addressing the challenges faced in this domain, as illustrated by the various concurrency models that prevent the known pitfalls of regular threads. Today, many languages exist in this domain and often provide unique characteristics that make them specifically fit for certain use cases. This thesis revolves around two distinctive languages: the actor-oriented polyglot coordination language Lingua Franca and the synchronous statecharts dialect SCCharts. While they take different approaches in providing reactive modeling capabilities, they share clear similarities in their semantics and complement each other in design principles. This thesis analyzes and compares key design aspects in the context of these two languages. For three particularly relevant concepts, it provides and evaluates lean and seamless language extensions that are carefully aligned with the fundamental principles of the underlying language. Specifically, Lingua Franca is extended toward coordinating modal behavior, while SCCharts receives a timed automaton notation with an efficient execution model using dynamic ticks and an extension toward the object-oriented modeling paradigm.
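The dynamic-tick execution model mentioned for the timed SCCharts extension can be sketched as an event queue that jumps straight to the next deadline instead of ticking at a fixed rate. This is a minimal illustrative sketch in Python; the class and its API are hypothetical, not actual SCCharts tooling:

```python
import heapq
import itertools

class DynamicTickScheduler:
    """Sketch of the 'dynamic ticks' idea: rather than polling at a fixed
    rate, each tick advances time directly to the earliest pending deadline."""

    def __init__(self):
        self._queue = []               # min-heap of (time, seq, action)
        self._seq = itertools.count()  # tie-breaker for equal deadlines

    def schedule(self, time, action):
        heapq.heappush(self._queue, (time, next(self._seq), action))

    def run(self, until):
        now = 0
        while self._queue and self._queue[0][0] <= until:
            now, _, action = heapq.heappop(self._queue)
            action(now)                # an action may schedule new deadlines
        return now
```

Because the scheduler sleeps exactly until the next relevant event, no empty ticks are executed between deadlines, which is the efficiency argument behind dynamic ticks.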
Morpheus: Automated Safety Verification of Data-Dependent Parser Combinator Programs
Parser combinators are a well-known mechanism for the compositional construction of parsers, and have been shown to be particularly useful in writing parsers for rich grammars with data dependencies and global state. Verifying applications written using them, however, has proven challenging, in large part because of the inherently effectful nature of the parsers being composed and the difficulty of reasoning about the arbitrarily rich data-dependent semantic actions that can be associated with parsing actions. In this paper, we address these challenges by defining a parser combinator framework called Morpheus, equipped with abstractions for defining composable effects tailored for parsing and semantic actions, and a rich specification language used to define safety properties over the constituent parsers comprising a program. Even though its abstractions yield many of the same expressivity benefits as other parser combinator systems, Morpheus is carefully engineered to yield a substantially more tractable automated verification pathway. We demonstrate its utility in verifying a number of realistic, challenging parsing applications, including several cases that involve non-trivial data-dependent relations.
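The data-dependent parsing that Morpheus targets can be illustrated with a toy combinator library in which a later parser depends on an earlier parse result. This is a minimal Python sketch with a hypothetical API, not the Morpheus framework itself:

```python
def char_where(pred):
    """Parser that consumes one character satisfying pred.
    A parser maps (string, position) to (value, new position) or None."""
    def parse(s, i):
        if i < len(s) and pred(s[i]):
            return s[i], i + 1
        return None
    return parse

def take(n):
    """Parser that consumes exactly n characters."""
    def parse(s, i):
        if i + n <= len(s):
            return s[i:i + n], i + n
        return None
    return parse

def bind(p, f):
    """Sequencing where the second parser is built from the first result --
    the 'data dependency' that makes automated verification hard."""
    def parse(s, i):
        r = p(s, i)
        if r is None:
            return None
        value, j = r
        return f(value)(s, j)
    return parse

# Length-prefixed field: one digit determines how many payload chars follow.
digit = char_where(str.isdigit)
length_prefixed = bind(digit, lambda d: take(int(d)))
```

Here `length_prefixed("3abcX", 0)` yields `("abc", 4)`: the grammar of the payload depends on a value parsed earlier, which is exactly the kind of property a Morpheus-style specification would constrain.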
Object detection and localization: an application inspired by RobotAtFactory using machine learning
Mestrado de dupla diplomação com a UTFPR - Universidade Tecnológica Federal do Paraná (double-degree master's dissertation with UTFPR). The evolution of artificial intelligence and digital cameras has made the transformation of the real world into its digital image more accessible and widely used, so that information can be analyzed with algorithms. The detection and localization of objects is a crucial task in several applications, such as surveillance, autonomous robotics, intelligent transportation systems, and others. Based on this, this work implements a system that can find objects and estimate their location (distance and angle) through the acquisition and analysis of images. The motivation is the possible problems that may be introduced in future versions of the robotics competition RobotAtFactory Lite: for example, obstruction of the path marked by the printed lines, requiring the robot to deviate, or boxes positioned somewhere other than the initial warehouses, so that the robot does not know their previous location and has to find them somehow. To this end, different machine-learning methods were analyzed for object detection, using feature extraction and neural networks, as well as for object localization, based on the pinhole model and triangulation. These techniques were combined through Python programming on a module based on a Raspberry Pi Model B and a Raspi Cam Rev 1.3, achieving the goal of the work: it was possible to find the objects and obtain an estimate of their relative position. In the future, in a possible implementation together with a robot, this data can be used to find objects and perform tasks.
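The pinhole-model localization described above can be sketched as follows. This is hypothetical Python; the function name and calibration values are illustrative, not the dissertation's code:

```python
import math

def locate_object(bbox_center_x, bbox_width_px, image_width_px,
                  focal_length_px, real_width_m):
    """Estimate distance and bearing of a detected object from its bounding
    box using the pinhole camera model (calibration values are assumed)."""
    # Pinhole model: distance = focal_length * real_width / apparent_width
    distance = focal_length_px * real_width_m / bbox_width_px
    # Horizontal angle of the object relative to the optical axis
    offset_px = bbox_center_x - image_width_px / 2
    angle = math.atan2(offset_px, focal_length_px)
    return distance, angle
```

An object of known width appearing smaller in the image is farther away, and its horizontal pixel offset from the image center gives the bearing; both follow directly from similar triangles in the pinhole model.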
2017 GREAT Day Program
SUNY Geneseo’s Eleventh Annual GREAT Day. https://knightscholar.geneseo.edu/program-2007/1011/thumbnail.jp
Robust and Uncertainty-Aware Software Vulnerability Detection Using Bayesian Recurrent Neural Networks
Software systems are prone to code defects or vulnerabilities, exposing them to cyberattacks such as hacking, identity breaches, and information leakage that can lead to system failure. Vulnerabilities in software systems have severe societal implications, including threats to public safety, financial damage, and even risks to national security. Identifying and mitigating software vulnerabilities is critical to protect organizations and societies from potential threats. Machine learning algorithms have been employed to automatically detect and classify potential vulnerabilities in software source code. However, these algorithms are not robust to noise or malicious attacks and cannot quantify uncertainty in the model’s output. Quantifying uncertainty in the vulnerability detection mechanism can inform the user of possible noise or perturbation in the source code and holds promise for the safe deployment of trustworthy algorithms in real-world security applications. We develop a robust software vulnerability detection framework using Bayesian Recurrent Neural Networks (Bayesian SVD). The proposed models detect source code vulnerabilities and simultaneously learn uncertainty in output predictions. The proposed Bayesian SVD adopts variational inference and optimizes the variational posterior distribution defined over the model parameters using the evidence lower bound (ELBO). Within each state, the first two moments of the variational distribution are transmitted through the recurrent layers. At the SVD models’ output, the predictive distribution’s mean indicates the vulnerability class, while the covariance matrix captures the uncertainty information.
Extensive experiments on benchmark datasets reveal (1) the robustness of the proposed models under noisy conditions and malicious attacks compared to the deterministic counterpart and (2) significantly higher uncertainty when the model encounters high levels of natural noise or malicious attacks, which serves as a warning for safe handling.
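The idea of obtaining both a prediction and its uncertainty from a variational posterior can be sketched with Monte Carlo weight sampling. Note this is an illustrative simplification in plain NumPy; the paper instead propagates the first two moments analytically through the recurrent layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def bayesian_predict(x, weight_mean, weight_std, n_samples=100):
    """Sample weights from an assumed Gaussian variational posterior
    N(weight_mean, weight_std^2) and report the predictive mean (the
    prediction) and variance (the uncertainty) across samples."""
    logits = []
    for _ in range(n_samples):
        w = rng.normal(weight_mean, weight_std)  # one posterior draw
        logits.append(x @ w)                     # forward pass with drawn w
    logits = np.array(logits)
    return logits.mean(axis=0), logits.var(axis=0)
```

With a sharply peaked posterior (small `weight_std`) the predictive variance collapses; perturbed or out-of-distribution inputs tend to widen it, which is the warning signal the abstract describes.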
CARLA+: An Evolution of the CARLA Simulator for Complex Environment Using a Probabilistic Graphical Model
In an urban and uncontrolled environment, the presence of mixed traffic of autonomous vehicles, classical vehicles, vulnerable road users (e.g., pedestrians), and unprecedented dynamic events makes it challenging for the classical autonomous vehicle to navigate the traffic safely. Therefore, the realization of collaborative autonomous driving has the potential to improve road safety and traffic efficiency. However, an obvious challenge in this regard is how to define, model, and simulate the environment in a way that captures the dynamics of a complex urban environment. Therefore, in this research, we first define the dynamics of the envisioned environment, capturing the dynamics relevant to the complex urban setting and specifically highlighting the challenges that are unaddressed and within the scope of collaborative autonomous driving. To this end, we model the dynamic urban environment leveraging a probabilistic graphical model (PGM). To develop the proposed solution, a realistic simulation environment is required. Among the available simulators, CARLA (Car Learning to Act) is one of the most prominent and provides rich features and environments; however, it still falls short on a few fronts: for example, it cannot fully capture the complexity of an urban environment. Moreover, classical CARLA mainly relies on manual code and multiple conditional statements, and it provides no pre-defined way to do things automatically based on the dynamic simulation environment. Hence, there is a need to extend off-the-shelf CARLA with more sophisticated settings that can model the required dynamics. In this regard, we comprehensively design, develop, and implement an extension of classical CARLA, referred to as CARLA+, for the complex environment by integrating the PGM framework. It provides a unified framework to automate the behavior of different actors leveraging PGMs.
Instead of manually catering to each condition, CARLA+ enables the user to automate the modeling of different dynamics of the environment. To validate the proposed CARLA+, experiments with different settings were designed and conducted. The experimental results demonstrate that CARLA+ is flexible enough to allow users to model various scenarios, ranging from simple controlled models to complex models learned directly from real-world data. In the future, we plan to extend CARLA+ by allowing for more configurable parameters and more flexibility in the type of probabilistic networks and models one can choose. The open-source code of CARLA+ is made publicly available for researchers.
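The PGM-driven automation can be illustrated by ancestral sampling through a toy conditional model in place of hand-written conditionals. This is hypothetical Python with made-up probabilities and variable names, not the CARLA+ code:

```python
import random

# Toy conditional tables: weather -> pedestrian density -> vehicle speed.
# (Illustrative values; CARLA+ configures or learns such dependencies via a PGM.)
P_WEATHER = {"clear": 0.7, "rain": 0.3}
P_DENSITY = {"clear": {"low": 0.6, "high": 0.4},
             "rain":  {"low": 0.8, "high": 0.2}}
SPEED_LIMIT = {"low": 50, "high": 30}  # km/h chosen from the sampled density

def sample(dist, rng):
    """Draw one value from a discrete distribution given as {value: prob}."""
    r, acc = rng.random(), 0.0
    for value, p in dist.items():
        acc += p
        if r <= acc:
            return value
    return value  # guard against floating-point rounding

def sample_scene(rng):
    """Ancestral sampling through the graphical model: each actor setting is
    drawn conditioned on its parents instead of nested if/else code."""
    weather = sample(P_WEATHER, rng)
    density = sample(P_DENSITY[weather], rng)
    return {"weather": weather, "density": density,
            "target_speed": SPEED_LIMIT[density]}
```

The point of the design is that adding a new dependency means editing a table, not threading another condition through every actor's control logic.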
Evaluation Methodologies in Software Protection Research
Man-at-the-end (MATE) attackers have full control over the system on which the attacked software runs, and try to break the confidentiality or integrity of assets embedded in the software. Both companies and malware authors want to prevent such attacks. This has driven an arms race between attackers and defenders, resulting in a plethora of different protection and analysis methods. However, it remains difficult to measure the strength of protections, because MATE attackers can reach their goals in many different ways and no universally accepted evaluation methodology exists. This survey systematically reviews the evaluation methodologies of papers on obfuscation, a major class of protections against MATE attacks. For 572 papers, we collected 113 aspects of their evaluation methodologies, ranging from sample set types and sizes, through sample treatment, to the measurements performed. We provide detailed insights into how the academic state of the art evaluates both the protections and the analyses applied to them. In summary, there is a clear need for better evaluation methodologies. We identify nine challenges for software protection evaluations, which represent threats to the validity, reproducibility, and interpretation of research results in the context of MATE attacks.
Cybersecurity: Past, Present and Future
The digital transformation has created a new digital space known as cyberspace. This new cyberspace has improved the workings of businesses, organizations, governments, society as a whole, and the day-to-day life of individuals. With these improvements come new challenges, and one of the main challenges is security. The security of this new cyberspace is called cybersecurity. Cyberspace has created new technologies and environments such as cloud computing, smart devices, IoT, and several others. To keep pace with these advancements in cyber technologies, there is a need to expand research and develop new cybersecurity methods and tools to secure these domains and environments. This book is an effort to introduce the reader to the field of cybersecurity, highlight current issues and challenges, and provide future directions to mitigate or resolve them. The main specializations of cybersecurity covered in this book are software security, hardware security, the evolution of malware, biometrics, cyber intelligence, and cyber forensics. We must learn from the past, evolve our present, and improve the future. Based on this objective, the book covers the past, present, and future of these main specializations of cybersecurity. The book also examines upcoming areas of research in cyber intelligence, such as hybrid augmented and explainable artificial intelligence (AI). Human-AI collaboration can significantly increase the performance of a cybersecurity system. Interpreting and explaining machine learning models, i.e., explainable AI, is an emerging field of study and has considerable potential to improve the role of AI in cybersecurity.
Comment: Author's copy of the book published under ISBN: 978-620-4-74421-
Measuring the impact of COVID-19 on hospital care pathways
Care pathways in hospitals around the world reported significant disruption during the recent COVID-19 pandemic, but measuring the actual impact is more problematic. Process mining can be useful for hospital management to measure the conformance of real-life care to what might be considered normal operations. In this study, we aim to demonstrate that process mining can be used to investigate process changes associated with complex disruptive events. We studied perturbations to accident and emergency (A&E) and maternity pathways in a UK public hospital during the COVID-19 pandemic. Coincidentally, the hospital had implemented a Command Centre approach for patient-flow management, affording an opportunity to study both the planned improvement and the disruption due to the pandemic. Our study proposes and demonstrates a method for measuring and investigating the impact of such planned and unplanned disruptions affecting hospital care pathways. We found that during the pandemic, both A&E and maternity pathways had measurable reductions in the mean length of stay and a measurable drop in the percentage of pathways conforming to normative models. There were no distinctive patterns in the monthly mean values of length of stay or conformance throughout the phases of the installation of the hospital's new Command Centre approach. Due to a deficit in the available A&E data, the findings for A&E pathways could not be interpreted.
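The conformance measurement used in such studies can be sketched with a simple order-preserving fitness score. This is an illustrative Python stand-in for the alignment-based conformance checking used in process mining; the activity names are hypothetical:

```python
def trace_fitness(trace, normative):
    """Fraction of the normative pathway recoverable, in order, from the
    observed trace (longest common subsequence / normative length)."""
    m, n = len(trace), len(normative)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if trace[i] == normative[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n] / n

def conformance_rate(traces, normative, threshold=1.0):
    """Percentage of observed pathways whose fitness meets the threshold --
    the kind of aggregate the study tracks month by month."""
    conforming = sum(trace_fitness(t, normative) >= threshold for t in traces)
    return 100.0 * conforming / len(traces)
```

A drop in this rate during the pandemic months, relative to a pre-pandemic baseline, is the kind of measurable signal the abstract refers to.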
Towards an Unsupervised Bayesian Network Pipeline for Explainable Prediction, Decision Making and Discovery
An unsupervised learning pipeline for discrete Bayesian networks is proposed to facilitate prediction, decision making, discovery of patterns, and transparency in challenging real-world AI applications, and to contend with data limitations. We explore methods for discretizing data, and notably apply the pipeline to the prediction and prevention of preterm birth.
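The discretization and conditional-probability-table estimation at the core of such a pipeline can be sketched as follows. This is hypothetical Python; equal-width binning is just one of the discretization choices such a pipeline might explore:

```python
from collections import Counter

def discretize(values, n_bins=3):
    """Equal-width binning of a continuous variable into n_bins labels
    (an assumed, simple discretization strategy)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0   # guard against constant columns
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

def cpt(child, parent):
    """Estimate P(child | parent) from discretized data as the conditional
    probability table of one edge in a discrete Bayesian network."""
    pair_counts = Counter(zip(parent, child))
    parent_counts = Counter(parent)
    return {(p, c): n / parent_counts[p] for (p, c), n in pair_counts.items()}
```

Once every variable is discrete, each network edge reduces to a table like this, which is what makes the resulting model inspectable: a clinician can read the estimated conditional probabilities directly.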