
    RML: Runtime Monitoring Language

    Runtime verification is a relatively new software verification technique that aims to prove the correctness of a specific run of a program, rather than statically verifying the code. The program is instrumented to collect all the relevant information, and the resulting trace of events is inspected by a monitor that verifies its compliance with a specification of the expected properties of the system under scrutiny. Many languages exist for formally expressing the expected behavior of a system, with different design choices and degrees of expressivity. This thesis presents RML, a specification language designed for runtime verification, with the goal of being completely modular and independent of the instrumentation and of the kind of system being monitored. RML is highly expressive and allows one to express complex, parametric, non-context-free properties concisely. RML is compiled down to TC, a lower-level calculus that is fully formalized with a deterministic, rewriting-based semantics. To evaluate the approach, an open-source implementation has been developed and exercised on several examples with Node.js programs. Benchmarks show that the monitors automatically generated from RML specifications can verify complex properties effectively and efficiently.
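    The RML pipeline itself is formalized in the thesis; as a rough illustration of the underlying idea of runtime verification with parametric properties, the sketch below checks an event trace against an invented resource-usage property in Python. The event names and the property are assumptions for illustration only, not RML syntax or its TC semantics.

        # Minimal sketch of a runtime monitor over an event trace
        # (illustrative only; not RML or its TC calculus).
        # Invented property: resources are written to or closed only while
        # open, and every opened resource is eventually closed. The property
        # is parametric in the resource id.
        def monitor(trace):
            open_resources = set()
            for i, (event, rid) in enumerate(trace):
                if event == "open":
                    open_resources.add(rid)
                elif event in ("write", "close"):
                    if rid not in open_resources:
                        return f"violation at event {i}: {event} on non-open resource {rid}"
                    if event == "close":
                        open_resources.remove(rid)
            if open_resources:
                return f"violation: resources never closed: {open_resources}"
            return None  # the trace complies with the property

        print(monitor([("open", 1), ("write", 1), ("close", 1)]))  # None
        print(monitor([("open", 1), ("close", 1), ("write", 1)]))  # violation

    In a real deployment the trace would be produced by instrumenting the monitored program (e.g., a Node.js application), and the monitor would be generated automatically from an RML specification rather than written by hand.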

    Owl Eyes: Spotting UI Display Issues via Visual Understanding

    The Graphical User Interface (GUI) provides a visual bridge between a software application and its end users, through which they can interact with each other. With advances in technology and aesthetics, the visual effects of GUIs have become more and more attractive. However, this GUI complexity poses a great challenge to GUI implementation. According to our pilot study of crowdtesting bug reports, display issues such as text overlap, blurred screens, and missing images frequently occur during GUI rendering on different devices, due to software or hardware compatibility problems. They negatively influence app usability and result in poor user experience. To detect these issues, we propose OwlEye, a novel approach based on deep learning for modelling the visual information of GUI screenshots. OwlEye can detect GUIs with display issues and also locate the region of the issue in the given GUI, guiding developers to fix the bug. We manually construct a large-scale labelled dataset of 4,470 GUI screenshots with UI display issues and develop a heuristics-based data augmentation method to boost OwlEye's performance. The evaluation demonstrates that OwlEye achieves 85% precision and 84% recall in detecting UI display issues, and 90% accuracy in localizing them. We also evaluate OwlEye with popular Android apps on Google Play and F-Droid, and successfully uncover 57 previously undetected UI display issues, 26 of which have been confirmed or fixed so far. Accepted to the 35th IEEE/ACM International Conference on Automated Software Engineering (ASE 2020).
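    OwlEye's exact architecture is described in the paper; as a hedged sketch of the general technique it builds on, a convolutional classifier over screenshot images, the following PyTorch snippet shows the shape of such a model. The class name, layer sizes, and input dimensions are assumptions for illustration, not the authors' actual network.

        # Illustrative sketch (not the authors' network): a small CNN that
        # classifies a GUI screenshot as "clean" vs "has display issue".
        import torch
        import torch.nn as nn

        class ScreenshotClassifier(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.head = nn.Sequential(
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2)
                )

            def forward(self, x):  # x: (batch, 3, height, width) screenshots
                return self.head(self.features(x))

        model = ScreenshotClassifier()
        logits = model(torch.randn(1, 3, 224, 224))  # one random fake screenshot
        print(logits.shape)  # torch.Size([1, 2]): scores for clean / issue

    Localizing the region of an issue, as OwlEye does, would additionally require saliency or attention information on top of such a classifier.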

    Bacatá: Notebooks for DSLs, Almost for Free

    Context: Computational notebooks are a contemporary style of literate programming, in which users communicate and transfer knowledge by interleaving executable code, output, and prose in a single rich document. A Domain-Specific Language (DSL) is an artificial software language tailored to a particular application domain. DSL users are usually domain experts who may not have a software engineering background and, as a consequence, might not be familiar with Integrated Development Environments (IDEs). The development of tools that offer different interfaces for interacting with a DSL is therefore relevant. Inquiry: However, the resources available to DSL designers are limited, and we would like to leverage the tools used to interact with general-purpose languages in the context of DSLs. Computational notebooks are an example of such tools. Our main question is: what is an efficient and effective method of designing and implementing notebook interfaces for DSLs? By addressing this question we might be able to speed up the development of DSL tools and ease the interaction between end users and DSLs. Approach: In this paper, we present Bacatá, a mechanism for generating notebook interfaces for DSLs in a language-parametric fashion. We designed this mechanism so that language engineers can reuse as many language components (e.g., language processors, type checkers, code generators) as possible. Knowledge: Our results show that notebook interfaces generated by Bacatá require little manual configuration. A few considerations and caveats, related to language design aspects, must still be addressed by language engineers. With Bacatá, creating a notebook for a DSL becomes a matter of writing the code that wires existing language components in the Rascal language workbench to the Jupyter platform. Grounding: We evaluate Bacatá by generating functional computational notebook interfaces for three non-trivial DSLs: a small subset of Halide (a DSL for digital image processing), SweeterJS (an extended version of JavaScript), and QL (a DSL for questionnaires). To show that generating notebook implementations is preferable to implementing them manually, we measured and compared the number of Source Lines of Code (SLOC) reused from existing implementations of those languages. Importance: The adoption of notebooks by novice programmers and end users has made them very popular in several domains, such as exploratory programming, data science, data journalism, and machine learning. In (data) science, it is essential to make results reproducible as well as understandable, yet notebooks are typically only available for general-purpose languages. This paper opens up the notebook metaphor for DSLs, improving the end-user experience when interacting with code and increasing DSL adoption.
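    Bacatá itself generates this wiring for languages defined in the Rascal workbench; as a rough hand-written analogue of what a notebook interface for a DSL involves, the sketch below uses Jupyter's documented wrapper-kernel API in Python, with evaluate_dsl standing in for a hypothetical existing DSL interpreter.

        # Hand-written analogue of the wiring Bacatá generates: a Jupyter
        # wrapper kernel that forwards notebook cells to a DSL interpreter.
        from ipykernel.kernelbase import Kernel

        def evaluate_dsl(code):
            # Hypothetical stand-in: call the DSL's existing interpreter here.
            return f"evaluated: {code!r}"

        class DslKernel(Kernel):
            implementation = "demo-dsl"
            implementation_version = "0.1"
            language_info = {"name": "demo-dsl", "mimetype": "text/plain",
                             "file_extension": ".dsl"}
            banner = "Demo DSL kernel"

            def do_execute(self, code, silent, store_history=True,
                           user_expressions=None, allow_stdin=False):
                output = evaluate_dsl(code)
                if not silent:
                    # Publish the interpreter's output to the notebook front end.
                    self.send_response(self.iopub_socket, "stream",
                                       {"name": "stdout", "text": output})
                return {"status": "ok", "execution_count": self.execution_count,
                        "payload": [], "user_expressions": {}}

        if __name__ == "__main__":
            from ipykernel.kernelapp import IPKernelApp
            IPKernelApp.launch_instance(kernel_class=DslKernel)

    The point of Bacatá is that this boilerplate, plus richer features such as syntax highlighting and completion, comes almost for free once the language components already exist in the language workbench.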

    A Review of Verification and Validation for Space Autonomous Systems

    Purpose of Review: The deployment of hardware (e.g., robots, satellites) to space is a costly and complex endeavor. It is extremely important that on-board systems are verified and validated through a variety of verification and validation techniques, especially in the case of autonomous systems. In this paper, we discuss a number of approaches from the literature that are relevant to, or directly applied in, the verification and validation of systems in space, with an emphasis on autonomy. Recent Findings: Despite advances in individual verification and validation techniques, there is still a lack of approaches that combine different forms of verification to obtain system-wide verification of modular autonomous systems. Summary: This systematic review of the literature covers the current advances in formal methods for static verification (model checking and theorem proving) and runtime verification, the progress achieved so far in the verification of machine learning, an overview of the landscape in software testing, and the importance of compositional verification in modular systems. In particular, we report the use of these techniques for the verification and validation of systems in space, with an emphasis on autonomy, as well as more general techniques (such as those from the aeronautical domain) that have been shown to have potential value for the verification and validation of autonomous systems in space.

    Towards Live Refactoring to Patterns

    MODELLING & SIMULATION HYBRID WARFARE Researches, Models and Tools for Hybrid Warfare and Population Simulation

    The Hybrid Warfare phenomenon, the subject of the current research, has been framed by the work of Professor Agostino Bruzzone (University of Genoa) and Professor Erdal Cayirci (University of Stavanger), who in June 2016 created a dedicated Exploratory Team to investigate the subject; the team was endorsed by the NATO Modelling & Simulation Group (a panel of the NATO Science & Technology Organization) and established with the participation of the author. The author contributed to ET43 by introducing meaningful insights from "Fight by the Minutes: Time and the Art of War" (1994), written by Lieutenant Colonel (Rtd.) Robert Leonhard, US Army. In that work, Leonhard extensively develops the concept that "time", rather than the geometry of the battlefield or firepower, is the critical factor to tackle in military operations and, by extension, in Hybrid Warfare. This critical reflection on time, in both its quantitative and qualitative dimensions in a hybrid confrontation, is addressed and studied in SIMCJOH, a piece of software built around challenges that literally force the user to "fight by the minutes", echoing the core concept of the eponymous work. Hybrid Warfare, which by definition and purpose aims to keep the military commitment of both aggressor and defender at a minimum, can profit enormously from turning a wide variety of non-military tools into weapons, as in the phenomenon of the "weaponization of mass migrations" examined in the "Dies Irae" simulation architecture. Since migration is a very sensitive and divisive issue for public opinion in many European countries, cynically leveraging a humanitarian emergency caused by an exogenous, induced migration could produce a high level of political and social destabilization, which in turn favours the concurrent actions carried out with other hybrid tools. Other kinds of disruption are already available in the arsenal of Hybrid Warfare, such as cyber threats and information campaigns led by troll factories to spread fake or altered news. From this perspective, the author examines how the TREX (Threat network simulation for REactive eXperience) simulator offers insights into a hybrid scenario characterized by an intense level of social disruption brought about by cyber-attacks and the systematic faking of news. Furthermore, the rising discipline of "Strategic Engineering", as envisaged by Professor Agostino Bruzzone, when matched with the operational requirements for countering Hybrid Threats, brings another innovative and powerful tool into the professional toolkit of military and civilian personnel in the Defence and Homeland Security sectors. Hybrid is not the new war. What is new is brought by globalization, paired with the transition to the information age and rising geopolitical tensions, which have put new emphasis on hybrid hostilities that manifest themselves in a contemporary way. Hybrid Warfare is a deliberate choice of an aggressor: while militarily weak nations can resort to it to re-balance the odds, militarily strong nations appreciate its inherent effectiveness coupled with the denial of direct responsibility, thus circumventing the rules of the International Community (IC).
To be successful, Hybrid Warfare should consist of a highly coordinated, judicious mix of diverse and dynamic combinations of regular forces, irregular forces (even criminal elements), cyber disruption, and so on, all aimed at achieving effects across the entire DIMEFIL/PMESII-PT spectrum. However, the owner of the strategy, i.e. the aggressor, can keep his Hybrid Warfare at a diplomatically feasible level by keeping the threshold of impunity as high as possible and decreasing the willingness of the defender; the model of capacity, willingness and threshold proposed by Cayirci, Bruzzone and Gunneriusson (2016) therefore remains critical to comprehending Hybrid Warfare, and its dynamic nature is able to capture the evanescent, blurred line between Hybrid Warfare and Conventional Warfare. In such a contest, time is the critical factor, because it is hard for the aggressor to foresee how long he can keep up such a strategy without risking either retaliation from the International Community or the depletion of resources across his own DIMEFIL/PMESII-PT spectrum. A similar argument applies to the defender: if he is not able to cope with Hybrid Threats (i.e. he takes no action), time works against him; if he is, he can start to develop a counter-narrative and deploy physical countermeasures. This can nevertheless lead, in the medium to long term, to an escalation, unforeseen by both attacker and defender, into a large, conventional armed conflict. The performance of operations requiring more than kinetic effects drove the development of DIMEFIL/PMESII-PT models, and this in turn drove the development of Human Social Culture Behavior (HSCB) modelling, which should stand at the core of Hybrid Warfare modelling and simulation efforts. Multi-layer models are fundamental for evaluating strategies and supporting decisions, and conditions are currently favourable for implementing models of Hybrid Warfare, such as Dies Irae, SIMCJOH and TREX, in order to further develop tools and war-games for studying new tactics, conducting collective training, and supporting decision making and analysis planning. The proposed approach is based on the idea of creating a mosaic of HLA-interoperable simulators that can be combined like tiles to cover an extensive part of Hybrid Warfare, giving users an interactive and intuitive environment based on the "Modelling interoperable Simulation and Serious Game" (MS2G) approach. From this point of view, the impressive capabilities achieved by IA-CGF in human behavior modelling to support population simulation, as well as their native HLA structure, suggest adopting them as the core engine in this application field. It is necessary to highlight, however, that when modelling DIMEFIL/PMESII-PT domains the researcher has to be aware of the bias introduced by the fact that political and social "science" in particular are accompanied by, and built around, value judgements. From this perspective, the models proposed by Cayirci, Bruzzone and Gunneriusson (2016) and by Balaban & Mielniczek (2018) are indeed a courageous attempt to import the mathematical and statistical instruments and methodologies of the pure, hard sciences into the domain of particularly poorly understood phenomena (social, political, and to a lesser degree economic; Hartley, 2016).
Nevertheless, merely using the instruments and methodology of the hard sciences is not enough to achieve objectivity, and it is in this respect that representations of Hybrid Warfare mechanics could meet their limit: the inputs to the equations that represent Hybrid Warfare are not physical data observed during a scientific experiment, but rather observations of reality that implicitly and explicitly embed a value judgement, which can lead to a biased output. Such a value judgement is subjective, not objective as in the mathematical and physical sciences; when this is not well understood and managed by academics and researchers, it can introduce distortions, unacceptable for the purposes of science, which could also be used to reinforce a mainstream narrative containing a so-called "truth" that lies within the boundaries of politics rather than science. These observations on the subjectivity of the social sciences versus the objectivity of the pure sciences are nothing new, but they suggest the need to examine the problem from a new perspective, less philosophical and more inclined toward practical application. The suggestion the author makes here is that the Verification and Validation process, in particular the methodology used by Professor Bruzzone in the V&V of SIMCJOH (2016) and the one described in the Modelling & Simulation User Risk Methodology (MURM) developed by Pandolfini, Youngblood et al. (2018), could be applied to evaluate whether there is a bias and to what extent, or at least to make explicit the value judgements adopted in developing DIMEFIL/PMESII-PT models. Such V&V research is, however, outside the scope of the present work, even though it is an offspring of it, and for this reason the author would like to pursue this particular subject further in the future. The theoretical discussion of Hybrid Warfare is then completed by addressing the need to establish a new discipline, Strategic Engineering, made necessary by a political and economic environment that allocates diminishing resources to Defence and Homeland Security (at least in Europe). Strategic Engineering can successfully address its challenges only when coupled with an understanding and management of the fourth dimension of military and hybrid operations: time. For the reasons above, and as elaborated by Leonhard and extensively discussed in the present work, addressing the concerns posed by the time dimension is necessary for success in any military or hybrid confrontation. The SIMCJOH project, examined from this perspective, proved that the simulator is able to address the fourth dimension of military and non-military confrontation. In operations, time is the most critical factor during execution, and this was successfully transferred into the simulator; as such, SIMCJOH can be viewed both as a training tool and as a dynamic generator of events for MEL/MIL execution during any exercise. In conclusion, the SIMCJOH project successfully faces new and challenging aspects, and allowed the study and development of new simulation models to support decision makers, commanders and their staffs.
Finally, the question posed by Leonhard regarding recognition of the importance of time management in military operations, and nowadays in hybrid conflict, has not yet been answered; the author believes, however, that Modelling and Simulation tools and techniques can provide the safe "tank" in which innovative and advanced scientific solutions can be tested, exploiting the advantage of doing so in a synthetic environment.

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications" (cHiPSet). Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to afford a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. High Performance Computing, on the other hand, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest to these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.