
    Formalization and Validation of Safety-Critical Requirements

    The validation of requirements is a fundamental step in the development process of safety-critical systems. In safety-critical applications such as aerospace, avionics, and railways, the use of formal methods is of paramount importance both for requirements and for design validation. Nevertheless, while many formal techniques have been conceived and applied for the verification of designs, research on formal methods for requirements validation is not yet mature. The main obstacles are that, on the one hand, the correctness of requirements is not formally defined and, on the other hand, the formalization and validation of requirements usually demand a strong involvement of domain experts. We report on a methodology and a series of techniques that we developed for the formalization and validation of high-level requirements for safety-critical applications. The main ingredients are a very expressive formal language and automatic satisfiability procedures. The language combines first-order, temporal, and hybrid logic. The satisfiability procedures are based on model checking and satisfiability modulo theories. We applied this technology to the validation of railway requirements within an industrial project.
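The consistency side of such validation can be illustrated with a toy sketch. The requirements, variables (`speed`, `limit`), and bounds below are hypothetical examples, not taken from the paper, and a brute-force search over a finite domain stands in for the SMT-based satisfiability procedures the authors actually use:

```python
def consistent(reqs, domain):
    """Return True if some assignment (s, l) in the domain satisfies all requirements."""
    return any(all(r(s, l) for r in reqs) for s in domain for l in domain)

# Hypothetical requirements over (speed, limit):
r1 = lambda s, l: s <= l      # R1: the speed never exceeds the limit
r2 = lambda s, l: l == 30     # R2: the limit in station areas is 30
r3 = lambda s, l: s >= 40     # R3: (buggy) minimum operating speed is 40

D = range(0, 101)
assert consistent([r1, r2], D)          # R1 and R2 are jointly satisfiable
assert not consistent([r1, r2, r3], D)  # adding R3 makes the set inconsistent
```

In a real toolchain the finite enumeration is replaced by an SMT solver, which handles unbounded domains and richer theories.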

    Cosmic voids detection without density measurements

    Cosmic voids are effective cosmological probes to discriminate among competing world models. Their identification is generally based on density or geometry criteria that, because of their very nature, are prone to shot noise. We propose two void finders that are based on dynamical criteria to select voids in Lagrangian coordinates and minimise the impact of sparse sampling. The first approach exploits the Zel'dovich approximation to trace back in time the orbits of galaxies located in voids and their surroundings; the second uses the observed galaxy-galaxy correlation function to relax the objects' spatial distribution to homogeneity and isotropy. In both cases voids are defined as regions of negative velocity divergence, which can be regarded as sinks of the back-in-time streamlines of the mass tracers. To assess the performance of our methods we used a CoDECS dark matter halo mock catalogue and compared the results with those obtained with the ZOBOV void finder. We find that the void divergence profiles are less scattered than the density ones and, therefore, their stacking constitutes a more accurate cosmological probe. The significance of the divergence signal in the central part of voids obtained from both our finders is 60% higher than for the overdensity profiles in the ZOBOV case. The ellipticity of the stacked void measured in the divergence field is closer to unity, as expected, than what is found when using halo positions. Our void finders are therefore complementary to existing methods and should contribute to improving the accuracy of void-based cosmological tests. Comment: 12 pages, 18 figures, accepted for publication in MNRAS
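The divergence-based definition of voids can be sketched on a synthetic field (this is an illustration, not the authors' pipeline): compute the velocity divergence on a grid and flag cells where it is negative.

```python
import numpy as np

# Synthetic 2D back-in-time velocity field with a single sink at the origin,
# standing in for a void centre: v = -(x, y), so div v = -2 everywhere.
n = 64
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
vx, vy = -X, -Y

dx = x[1] - x[0]
div = np.gradient(vx, dx, axis=0) + np.gradient(vy, dx, axis=1)

# Candidate void cells: negative velocity divergence (sinks of the flow)
void_mask = div < 0
```

On real tracer catalogues the velocity field must first be reconstructed (e.g. via the Zel'dovich approximation), which is where the two proposed finders differ.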

    Towards Automatic Digitalization of Railway Engineering Schematics

    Relay-based Railway Interlocking Systems (RRIS) carry out critical functions to control stations. Despite being based on old and hard-to-maintain electro-mechanical technology, RRIS are still pervasive. A powerful CAD modeling and analysis approach based on symbolic logic has recently been proposed to support the re-engineering of relay diagrams into more maintainable computer-based technologies. However, the legacy engineering drawings that need to be digitized consist of large, hand-drawn diagrams dating back several decades. Manually transforming such diagrams into the format of the CAD tool is labor-intensive and error-prone, effectively a bottleneck in the reverse-engineering process. In this paper, we tackle the problem of automatic digitalization of RRIS schematics into the corresponding CAD format with an integrative Artificial Intelligence approach. Deep learning-based methods, segment detection, and clustering techniques for the automated digitalization of engineering schematics are used to detect and classify the single elements of the diagram. These elementary elements can then be aggregated into more complex objects leveraging the domain ontology. First results on the method's capability to automatically reconstruct the engineering schematics are presented.
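The aggregation of detected primitives into larger diagram objects can be sketched with a toy endpoint-clustering step. This is a stand-in for illustration only (the paper combines deep learning detectors with clustering guided by a domain ontology); segments are hypothetical pairs of endpoints:

```python
from itertools import combinations

def close(p, q, tol=2.0):
    """True if two endpoints are within tol pixels of each other."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tol ** 2

def cluster_segments(segments, tol=2.0):
    """Group segments whose endpoints touch, via union-find."""
    parent = list(range(len(segments)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for (i, a), (j, b) in combinations(enumerate(segments), 2):
        if any(close(p, q, tol) for p in a for q in b):
            parent[find(i)] = find(j)      # union the two groups

    groups = {}
    for i in range(len(segments)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

segs = [((0, 0), (10, 0)), ((10, 0), (10, 10)), ((50, 50), (60, 50))]
print(cluster_segments(segs))  # → [[0, 1], [2]]
```

Each resulting group of touching segments is a candidate for classification as a single schematic element.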

    The formation and evolution of early-type galaxies : solid results and open questions

    The most recent results and some of the open key questions on the evolution of early-type galaxies are reviewed in the general cosmological context of massive galaxy formation. Comment: 8 pages, invited review at the workshop "Probing Stellar Populations out to the Distant Universe", Cefalù (Italy), September 7 - 19, 200

    From Informal Safety-Critical Requirements to Property-Driven Formal Validation

    Most of the efforts in formal methods have historically been devoted to comparing a design against a set of requirements. The validation of the requirements themselves, however, has often been disregarded, and it can be considered a largely open problem that poses several challenges. The first challenge stems from the fact that requirements are often written in natural language and may thus contain a high degree of ambiguity. Despite the progress in Natural Language Processing techniques, the task of understanding a set of requirements cannot be fully automated and must be carried out by domain experts, who are typically not familiar with formal languages. Furthermore, in order to retain a direct connection with the informal requirements, the formalization cannot follow standard model-based approaches. The second challenge lies in the formal validation of the requirements. On one hand, it is not even clear what the correctness criteria, or the high-level properties that the requirements must fulfill, are. On the other hand, the expressivity of the language used in the formalization may go beyond the theoretical and/or practical capacity of state-of-the-art formal verification. To address these issues, we propose a new methodology that consists of a chain of steps, each supported by a specific tool. The main steps are the following. First, the informal requirements are split into basic fragments, which are classified into categories, and dependency and generalization relationships among them are identified. Second, the fragments are modeled using a visual language such as UML. The UML diagrams are both syntactically restricted (in order to guarantee a formal semantics) and enriched with a highly controlled natural language (to allow for modeling static and temporal constraints).
    Third, an automatic formal analysis phase iterates over the modeled requirements, combining several complementary techniques: checking consistency; verifying whether the requirements entail desirable properties; verifying whether the requirements are consistent with selected scenarios; diagnosing inconsistencies by identifying inconsistent cores; identifying vacuous requirements; constructing multiple explanations by enabling fault-tree analysis for particular fault models; and verifying whether the specification is realizable.
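The property-entailment check in the analysis phase reduces to a satisfiability question: the requirements entail a property iff requirements ∧ ¬property is unsatisfiable. A brute-force sketch over a finite domain (the numeric requirements below are hypothetical; real tools use model checking and SMT instead of enumeration):

```python
def entails(reqs, prop, domain):
    """reqs entail prop iff no point in the domain satisfies all reqs but not prop."""
    return not any(all(r(x) for r in reqs) and not prop(x) for x in domain)

# Hypothetical numeric requirements on a quantity x
reqs = [lambda x: x >= 10, lambda x: x <= 50]
D = range(0, 101)

assert entails(reqs, lambda x: x >= 0, D)       # entailed property
assert not entails(reqs, lambda x: x >= 20, D)  # not entailed: x = 10 is a witness
```

The witness found in the failing case plays the same role as the counterexample traces produced by the formal analysis tools.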

    Symbolic Model Checking and Safety Assessment of Altarica models

    Altarica is a language used to describe critical systems. In this paper we present a novel approach to the analysis of Altarica models, based on a translation into an extended version of NuSMV. This approach opens up the possibility to carry out functional verification and safety assessment with symbolic techniques. An experimental evaluation on a set of industrial case studies demonstrates the advantages of the approach over currently available tools.
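To give a flavour of what such a translation produces, here is a simplified sketch that emits a minimal NuSMV module for a component with a permanent failure mode. The component and event names are hypothetical, and this is not the actual translator described in the paper:

```python
def failure_mode_to_smv(name, failure_event):
    """Emit a minimal NuSMV module: the component starts nominal and
    transitions permanently to 'failed' when the failure event fires."""
    return "\n".join([
        f"MODULE {name}",
        "VAR",
        "  state : {nominal, failed};",
        f"  {failure_event} : boolean;",
        "ASSIGN",
        "  init(state) := nominal;",
        "  next(state) := case",
        f"    {failure_event} : failed;",
        "    TRUE : state;",
        "  esac;",
    ])

print(failure_mode_to_smv("Valve", "stuck_closed"))
```

Once the model is in NuSMV form, both functional properties and safety-assessment artefacts (e.g. fault trees) can be computed symbolically.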

    Disentangling interacting dark energy cosmologies with the three-point correlation function

    We investigate the possibility of constraining coupled dark energy (cDE) cosmologies using the three-point correlation function (3PCF). Making use of the CoDECS N-body simulations, we study the statistical properties of cold dark matter (CDM) haloes for a variety of models, including a fiducial ΛCDM scenario and five models in which dark energy (DE) and CDM mutually interact. We measure both the halo 3PCF, ζ(θ), and the reduced 3PCF, Q(θ), at different scales (2 < r [Mpc/h] < 40) and redshifts (0 ≤ z ≤ 2). In all cDE models considered in this work, Q(θ) appears flat at small scales (for all redshifts) and at low redshifts (for all scales), while it builds up the characteristic V-shape anisotropy at increasing redshifts and scales. With respect to the ΛCDM predictions, cDE models show lower (higher) values of the halo 3PCF for perpendicular (elongated) configurations. The effect is also scale-dependent, with differences between ΛCDM and cDE models that increase at large scales. We use these measurements to estimate the halo bias, which is in fair agreement with the one computed from the two-point correlation function (2PCF). The main advantage of using both the 2PCF and the 3PCF is to break the bias-σ8 degeneracy. Moreover, we find that our bias estimates are approximately independent of the assumed strength of the DE coupling. This study demonstrates the power of higher-order clustering analysis in discriminating between alternative cosmological scenarios, for both present and forthcoming galaxy surveys such as BOSS and Euclid. Comment: 13 pages, 11 figures. Accepted for publication in MNRAS
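The reduced 3PCF is not defined in the abstract, but in the standard Groth–Peebles form it normalizes ζ by products of two-point functions over the triangle's sides, which is what makes it sensitive to bias and σ8 separately. A one-line sketch, with made-up values for one triangle configuration:

```python
def reduced_3pcf(zeta, xi12, xi13, xi23):
    """Reduced three-point correlation function (Groth & Peebles form):
    Q = zeta / (xi12*xi13 + xi12*xi23 + xi13*xi23)."""
    return zeta / (xi12 * xi13 + xi12 * xi23 + xi13 * xi23)

# Illustrative (made-up) values for one triangle configuration
q = reduced_3pcf(zeta=0.06, xi12=0.2, xi13=0.2, xi23=0.1)  # → 0.75
```

For linear bias b, ζ scales as b³ and the denominator as b⁴, so Q ∝ 1/b; combining Q with the 2PCF (∝ b²σ8²) is what breaks the bias-σ8 degeneracy.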

    Validating Domains and Plans for Temporal Planning via Encoding into Infinite-State Linear Temporal Logic

    Temporal planning is an active research area of Artificial Intelligence because of its many applications, ranging from robotics to logistics and beyond. Traditionally, authors focused on the automatic synthesis of plans given a formal representation of the domain and of the problem. However, the effectiveness of such techniques is limited by the complexity of the modeling phase: it is hard to produce a correct model for the planning problem at hand. In this paper, we present a technique to simplify the creation of correct models by leveraging formal-verification tools for automatic validation. We start from the ANML language, a very expressive language for temporal planning problems that has been recently presented. We chose ANML because of its usability and readability. Then, we present a sound-and-complete formal encoding of the language into Linear Temporal Logic over predicates with infinite-state variables. Thanks to this reduction, we enable the formal verification of several relevant properties of the planning problem, providing useful feedback to the modeler.
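The kind of property such an encoding enables can be illustrated on a finite plan trace: a toy finite-trace check of G(p → F q), "whenever p holds, q eventually holds", in plain Python. The state names are hypothetical, and the paper's encoding targets infinite-state LTL discharged by a real model checker, not this enumeration:

```python
def eventually(trace, i, pred):
    """F pred from position i of a finite trace."""
    return any(pred(s) for s in trace[i:])

def always_implies_eventually(trace, p, q):
    """Check G(p -> F q) over a finite trace (finite-trace semantics)."""
    return all((not p(s)) or eventually(trace, i, q)
               for i, s in enumerate(trace))

# Hypothetical plan trace: the 'goal' action must eventually follow 'start'
good = [{"start": True, "goal": False},
        {"start": False, "goal": False},
        {"start": False, "goal": True}]
bad = [{"start": True, "goal": False},
       {"start": False, "goal": False}]

assert always_implies_eventually(good, lambda s: s["start"], lambda s: s["goal"])
assert not always_implies_eventually(bad, lambda s: s["start"], lambda s: s["goal"])
```

A failing check of this kind is exactly the sort of feedback that tells the modeler the domain, rather than the planner, is at fault.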