
    Automated Refinement Of Hierarchical Object Graphs

    Object graphs help explain the runtime structure of a system. To make object graphs convey design intent, one insight is to use abstraction by hierarchy, i.e., to show objects that are implementation details as children of architecturally relevant objects from the application domain. But additional information is needed to express this object hierarchy, using ownership type qualifiers in the code. Adding qualifiers after the fact involves manual overhead and requires developers to switch between adding qualifiers in the code and looking at abstract object graphs to understand the object structures that the qualifiers describe. We propose an approach where developers express their design intent by refining an object graph directly, while an inference analysis infers valid qualifiers in the code. We present, formalize, and implement the inference analysis. Novel features of the inference analysis compared to closely related work include a larger set of qualifiers to support less restrictive object hierarchy (logical containment) in addition to strict hierarchy (strict encapsulation), as well as object uniqueness and object borrowing. A separate extraction analysis then uses these qualifiers and extracts an updated object graph. We evaluate the approach on two subject systems. One of the subject systems is reproduced from an experiment using related techniques and another ownership type system, which enables a meaningful comparison. For the other subject system, we use its documentation to pick refinements that express design intent. We compute metrics on the refinements (how many attempts each subject system required) and classify them by their type. We also compute metrics on the inferred qualifiers and on the object graphs to enable quantitative comparison. Moreover, we qualitatively compare the hierarchical object graphs with the flat object graphs and with each other, highlighting how they express design intent. Finally, we confirm that the approach can infer valid qualifiers from the refinements such that the extracted object graphs reflect their design intent.
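    The qualifier vocabulary the abstract mentions (strict encapsulation, logical containment, uniqueness, borrowing) can be pictured as source annotations. The Java sketch below is a minimal illustration; the annotation names @Owned, @PartOf, @Unique, and @Lent are hypothetical stand-ins, not the paper's concrete syntax.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.ArrayList;
import java.util.List;

@Retention(RetentionPolicy.SOURCE) @interface Owned {}   // strict encapsulation
@Retention(RetentionPolicy.SOURCE) @interface PartOf {}  // logical containment
@Retention(RetentionPolicy.SOURCE) @interface Unique {}  // sole reference to the object
@Retention(RetentionPolicy.SOURCE) @interface Lent {}    // temporarily borrowed reference

class Waypoint {}
class GeoMap {}

class Route {
    // Implementation detail: drawn as a strictly encapsulated child of Route.
    private final @Owned List<Waypoint> points = new ArrayList<>();
    // Logically contained: shown under Route in the graph, but still shareable.
    private @PartOf GeoMap map;

    void add(@Unique Waypoint w) { points.add(w); }       // caller gives up its reference
    void draw(@Lent GeoMap canvas) { /* borrowed for this call only */ }
}
```

    An extraction analysis reading such qualifiers can then nest `points` strictly inside `Route` while drawing `map` under `Route` without hiding it from other objects.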

    Interactive Refinement Of Hierarchical Object Graphs

    Developers need to understand the runtime structure of object-oriented code, and abstract object graphs can help. To extract abstract object graphs that convey design intent in the form of object hierarchy, additional information is needed to express this hierarchy in the code using ownership types. Adding ownership type qualifiers after the fact, however, involves manual overhead and requires developers to switch between adding qualifiers in the code and looking at abstract object graphs to understand the object structures that the qualifiers describe. We describe an approach where developers express their design intent by refining an object graph directly, while an inference analysis infers valid qualifiers in the code. A separate extraction analysis then uses these qualifiers and extracts an updated object graph. We implement and test the approach on several small test cases and confirm its feasibility.
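    For a sense of what "refining the object graph directly" could look like as input to such an inference analysis, here is a hypothetical sketch; the record and enum names are invented for illustration and do not come from the paper.

```java
public class RefinementDemo {
    enum Kind { OWNED, PART_OF }  // strict encapsulation vs. logical containment

    // One refinement: "show `child` as a child of `newParent` in the graph".
    record Refinement(String child, String newParent, Kind kind) {}

    public static void main(String[] args) {
        // e.g. push the listeners list inside the EventBus object it belongs to;
        // the inference analysis would then look for qualifiers making this
        // parent/child edge valid, or report that none exist.
        Refinement r = new Refinement("listeners:ArrayList", "bus:EventBus", Kind.OWNED);
        System.out.println(r);
    }
}
```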

    Predicate Abstraction for Linked Data Structures

    We present Alias Refinement Types (ART), a new approach to the verification of correctness properties of linked data structures. While there are many techniques for checking that a heap-manipulating program adheres to its specification, they often require that the programmer annotate the behavior of each procedure, for example, in the form of loop invariants and pre- and post-conditions. Although predicate abstraction would be an attractive abstract domain for performing invariant inference, existing techniques are not able to reason about the heap with enough precision to verify functional properties of data-structure-manipulating programs. In this paper, we propose a technique that lifts predicate abstraction to the heap by factoring the analysis of data structures into two orthogonal components: (1) Alias Types, which reason about the physical shape of heap structures, and (2) Refinement Types, which use simple predicates from an SMT-decidable theory to capture the logical or semantic properties of the structures. We prove ART sound by translating types into separation logic assertions, thus translating typing derivations in ART into separation logic proofs. We evaluate ART by implementing a tool that performs type inference for an imperative language, and empirically show, using a suite of data-structure benchmarks, that ART requires only 21% of the annotations needed by other state-of-the-art verification techniques.
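    ART's predicates are discharged statically, but the kind of property they capture can be shown as a runtime invariant. The Java sketch below (all names illustrative) states the "sorted list" refinement that an insert must preserve; ART would prove the assert at compile time via alias and refinement types, whereas here it is merely re-checked dynamically.

```java
// All names are illustrative. Run with `java -ea` to enable the assert.
class SortedList {
    static class Node { int val; Node next; Node(int v) { val = v; } }
    Node head;

    // Refinement-type reading of "sorted": each value is <= its successor's.
    boolean sorted() {
        for (Node n = head; n != null && n.next != null; n = n.next)
            if (n.val > n.next.val) return false;
        return true;
    }

    // Insertion must preserve the refinement.
    void insert(int v) {
        Node n = new Node(v);
        if (head == null || v <= head.val) { n.next = head; head = n; }
        else {
            Node cur = head;
            while (cur.next != null && cur.next.val < v) cur = cur.next;
            n.next = cur.next;
            cur.next = n;
        }
        assert sorted();
    }

    public static void main(String[] args) {
        SortedList l = new SortedList();
        l.insert(3); l.insert(1); l.insert(2);
        System.out.println(l.sorted());  // true
    }
}
```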

    CHORUS Deliverable 3.4: Vision Document

    The goal of the CHORUS Vision Document is to create a high-level vision of audio-visual search engines in order to give guidance to future R&D work in this area and to highlight trends and challenges in this domain. The vision of CHORUS is strongly connected to the CHORUS Roadmap Document (D2.3). A concise document integrating the outcomes of the two deliverables will be prepared for the end of the project (NEM Summit).

    Linking Data Sovereignty and Data Economy: Arising Areas of Tension

    In the emerging information economy, data evolves into an essential asset, and personal data in particular is used for data-driven business models. However, companies frequently leverage personal data without considering individuals’ data sovereignty. Therefore, we strive to strengthen individuals’ position in data ecosystems by combining concepts of data sovereignty and data economy. Our research design comprises a design thinking approach that iteratively generates, validates, and refines such concepts. As a result, we identified ten areas of tension that arise when linking data sovereignty and data economy. Subsequently, we propose initial solutions to resolve these tensions and thus contribute to knowledge about the development of fair data ecosystems benefiting both individuals’ sovereignty and companies’ access to data.

    Spatially Stratified and Multi-Stage Approach for National Land Cover Mapping Based on Sentinel-2 Data and Expert Knowledge

    Costa, H., Benevides, P., Moreira, F. D., Moraes, D., & Caetano, M. (2022). Spatially Stratified and Multi-Stage Approach for National Land Cover Mapping Based on Sentinel-2 Data and Expert Knowledge. Remote Sensing, 14(8), 1-21. [1865]. https://doi.org/10.3390/rs14081865

    This research was funded by Fundação para a Ciência e a Tecnologia (FCT) through projects IPSTERS (DSAIPA/AI/0100/2018), foRESTER (PCIF/SSI/0102/2017), and SCAPEFIRE (PCIF/MOS/0046/2017), and by Compete2020 (POCI-05-5762-FSE-000368), supported by the European Social Fund. The APC was funded by project foRESTER (PCIF/SSI/0102/2017).

    Portugal is building a land cover monitoring system to deliver land cover products annually for its mainland territory. This paper presents the methodology developed to produce a prototype relative to 2018 as the first land cover map of the future annual map series (COSsim). A total of thirteen land cover classes are represented, including the most important tree species in Portugal. The mapping approach developed includes two levels of spatial stratification based on landscape dynamics. Strata are analysed independently at the higher level, while nested sublevels can share data and procedures. Multiple stages of analysis are implemented in which subsequent stages improve the outputs of precedent stages. The goal is to adjust mapping to the local landscape and tackle specific problems or divide complex mapping tasks into several parts. Supervised classification of Sentinel-2 time series and post-classification analysis with expert knowledge were performed throughout four stages. The overall accuracy of the map is estimated at 81.3% (±2.1) at the 95% confidence level. Higher thematic accuracy was achieved in southern Portugal, and expert knowledge significantly improved the quality of the map.
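    As a reminder of where a figure like 81.3% (±2.1) typically comes from, the sketch below computes an overall accuracy and a 95% confidence half-width from validation counts, assuming simple random sampling; the paper's stratified design would use stratum-weighted estimators instead, and the counts here are made up.

```java
public class AccuracyCI {
    public static void main(String[] args) {
        long correct = 813, total = 1000;       // illustrative counts only
        double p = (double) correct / total;    // overall accuracy
        // Normal-approximation half-width at 95% confidence.
        double half = 1.96 * Math.sqrt(p * (1 - p) / total);
        System.out.printf("OA = %.1f%% (±%.1f)%n", 100 * p, 100 * half);
    }
}
```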

    Compasses, beauty queens and other PCs: Pictorial metaphors in computer advertisements

    Computer advertisements make extensive use of pictorial metaphors. The model proposed in Forceville (1996) is used as a starting point to analyze 27 advertisements in PC Magazine, July/August 1999 (American edition) that contain a pictorial metaphor. The aim is twofold: (1) to further contribute to the theory of pictorial metaphor by testing the model against a new corpus; (2) to make an inventory of the source domains used in the metaphors, and thereby to make some observations about the ways in which representations of computer technology interact with our daily lives.

    Linearizability with Ownership Transfer

    Linearizability is a commonly accepted notion of correctness for libraries of concurrent algorithms. Unfortunately, it assumes a complete isolation between a library and its client, with interactions limited to passing values of a given data type. This is inappropriate for common programming languages, where libraries and their clients can communicate via the heap, transferring the ownership of data structures, and can even run in a shared address space without any memory protection. In this paper, we present the first definition of linearizability that lifts this limitation and establish an Abstraction Theorem: while proving a property of a client of a concurrent library, we can soundly replace the library by its abstract implementation related to the original one by our generalisation of linearizability. This allows abstracting from the details of the library implementation while reasoning about the client. We also prove that linearizability with ownership transfer can be derived from the classical one if the library does not access some of the data structures transferred to it by the client.
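    The heap-based communication the paper targets is easy to exhibit in Java: the queue below is linearizable in the classical sense, yet ownership of the byte array moves from producer to consumer through the heap, which classical linearizability does not account for. This is only an illustration of the setting, not the paper's formalism.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class TransferDemo {
    static final ConcurrentLinkedQueue<byte[]> channel = new ConcurrentLinkedQueue<>();

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            byte[] buf = {1, 2, 3};
            channel.add(buf);
            // Ownership of buf is transferred: mutating buf after this point
            // would race with the consumer, even though the queue itself is
            // a linearizable data structure.
        });
        Thread consumer = new Thread(() -> {
            byte[] buf;
            while ((buf = channel.poll()) == null) Thread.onSpinWait();
            buf[0] = 42; // safe: the consumer is now the unique owner
        });
        producer.start(); consumer.start();
        producer.join(); consumer.join();
    }
}
```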

    Caching, crashing & concurrency - verification under adverse conditions

    The formal development of large-scale software systems is a complex and time-consuming effort. Generally, its main goal is to prove the functional correctness of the resulting system. This goal becomes significantly harder to reach when the verification must be performed under adverse conditions. When aiming for a realistic system, the implementation must be compatible with the “real world”: it must work with existing system interfaces, cope with uncontrollable events such as power cuts, and offer competitive performance by using mechanisms like caching or concurrency. The Flashix project is an example of such a development, in which a fully verified file system for flash memory has been developed. The project is a long-term team effort and resulted in a sequential, functionally correct, and crash-safe implementation after its first project phase. This thesis continues the work by performing modular extensions to the file system with performance-oriented mechanisms that mainly involve caching and concurrency, always considering crash-safety.

    As a first contribution, this thesis presents a modular verification methodology for destructive heap algorithms. The approach simplifies the verification by separating reasoning about specifics of heap implementations, like pointer aliasing, from reasoning about conceptual correctness arguments.

    The second contribution of this thesis is a novel correctness criterion for crash-safe, cached, and concurrent file systems. A natural criterion for crash-safety is defined in terms of system histories, matching the behavior of fine-grained caches using complex synchronization mechanisms that reorder operations.

    The third contribution comprises methods for verifying functional correctness and crash-safety of caching mechanisms and concurrency in file systems. A reference implementation for crash-safe caches of high-level data structures is given, and a strategy for proving crash-safety is demonstrated and applied. A compatible concurrent implementation of the top layer of file systems is presented, using a mechanism for the efficient management of fine-grained file locking, and a concurrent version of garbage collection is realized. Both concurrency extensions are proven to be correct by applying atomicity refinement, a methodology for proving linearizability.

    Finally, this thesis contributes a new iteration of executable code for the Flashix file system. With the efficiency extensions introduced in this thesis, Flashix covers all performance-oriented concepts of realistic file system implementations and achieves competitiveness with state-of-the-art flash file systems.
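    To make the crash-safety idea concrete, here is a toy write-back cache (not Flashix's verified design, and all names invented): writes are buffered and persisted in issue order, so a crash mid-flush leaves the storage holding a prefix of the write history, which is the shape of criterion the thesis defines over system histories.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

class WriteBackCache {
    record Write(long block, byte[] data) {}

    private final ArrayDeque<Write> pending = new ArrayDeque<>(); // FIFO write log
    private final Map<Long, byte[]> disk = new HashMap<>();       // stand-in for storage

    synchronized void write(long block, byte[] data) {
        pending.addLast(new Write(block, data)); // buffered, not yet persistent
    }

    synchronized byte[] read(long block) {
        Iterator<Write> it = pending.descendingIterator(); // newest pending write wins
        while (it.hasNext()) {
            Write w = it.next();
            if (w.block() == block) return w.data();
        }
        return disk.get(block);
    }

    // Persisting in issue order means a crash mid-flush leaves `disk` holding
    // a prefix of the write history.
    synchronized void flush() {
        while (!pending.isEmpty()) {
            Write w = pending.pollFirst();
            disk.put(w.block(), w.data());
        }
    }

    public static void main(String[] args) {
        WriteBackCache c = new WriteBackCache();
        c.write(0, new byte[]{1});
        c.write(1, new byte[]{2});
        System.out.println(c.read(0)[0]); // 1, served from the cache
        c.flush();                        // now durable on the "disk"
    }
}
```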