    Enhanced Parallel Generation of Tree Structures for the Recognition of 3D Images

    Segmentations of a digital object based on a connectivity criterion at the n-xel or sub-n-xel level are useful tools in topological image analysis and recognition. Working with cell complex analogues of digital objects, an example of this kind of segmentation is the one obtained from the combinatorial representation called a Homological Spanning Forest (HSF, for short), which, informally, classifies the cells of the complex as belonging to regions containing the maximal number of cells sharing the same homological information (algebraic homology with coefficients in a field). We design here a parallel method for computing an HSF (using homology with coefficients in Z/2Z) of a 3D digital object. If this object is contained in a 3D image of m1 × m2 × m3 voxels, the theoretical time complexity of the method is near O(log(m1 + m2 + m3)), under the assumption that a processing element is available for each voxel. A prototype implementation validating our results has been written, and several synthetic, random and medical three-dimensional images have been used for testing. The experiments allow us to assert that the number of iterations needed to find the homological information deviates only slightly from the theoretical estimate. Ministerio de Economía y Competitividad MTM2016-81030-
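
    The logarithmic iteration count reported here is characteristic of parallel pointer-jumping schemes for building spanning forests. Purely as a loose illustration (a minimal C sketch of pointer jumping, not the authors' HSF algorithm), each element repeatedly replaces its parent with its grandparent, so every tree in the forest collapses to its root in O(log n) synchronous rounds:

        #include <stdio.h>

        #define N 8

        /* Pointer jumping: parent[i] converges to the root of i's tree in
           O(log N) rounds; in a truly parallel setting each array element
           would be handled by its own processing element. */
        int main(void) {
            int parent[N] = {0, 0, 1, 2, 4, 4, 5, 6};  /* roots point to themselves */
            int changed = 1, rounds = 0;

            while (changed) {
                changed = 0;
                int next[N];
                for (int i = 0; i < N; i++) {      /* conceptually parallel loop */
                    next[i] = parent[parent[i]];   /* jump to the grandparent */
                    if (next[i] != parent[i]) changed = 1;
                }
                for (int i = 0; i < N; i++) parent[i] = next[i];
                rounds++;
            }

            printf("converged after %d rounds\n", rounds);
            for (int i = 0; i < N; i++) printf("root(%d) = %d\n", i, parent[i]);
            return 0;
        }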

    Verifying multi-threaded software using SMT-based context-bounded model checking

    We describe and evaluate three approaches to model checking multi-threaded software with shared variables and locks, using bounded model checking based on Satisfiability Modulo Theories (SMT) and our modelling of the synchronization primitives of the Pthread library. In the lazy approach, we generate all possible interleavings and call the SMT solver on each of them individually, until we either find a bug or have systematically explored all interleavings. In the schedule recording approach, we encode all possible interleavings into one single formula and then exploit the high speed of the SMT solvers. In the underapproximation and widening approach, we reduce the state space by abstracting the number of interleavings from the proofs of unsatisfiability generated by the SMT solvers. In all three approaches, we bound the number of context switches allowed among threads in order to reduce the number of interleavings explored. We implemented these approaches in ESBMC, our SMT-based bounded model checker for ANSI-C programs. Our experiments show that ESBMC can analyze larger problems and substantially reduce the verification time compared to state-of-the-art techniques that use iterative context-bounding algorithms or counterexample-guided abstraction refinement.
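
    For a sense of the inputs such a checker consumes, the following is a minimal illustrative Pthreads program (not one of the paper's benchmarks): a shared counter guarded by a lock, where deleting the lock yields an interleaving-dependent assertion failure of exactly the kind a context-bounded search is designed to expose.

        #include <pthread.h>
        #include <assert.h>

        /* Shared state whose thread interleavings a bounded checker explores. */
        static int counter = 0;
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        static void *worker(void *arg) {
            (void)arg;
            pthread_mutex_lock(&lock);   /* without this lock, the read-modify-write */
            counter++;                   /* below can interleave and lose an update  */
            pthread_mutex_unlock(&lock);
            return NULL;
        }

        int main(void) {
            pthread_t t1, t2;
            pthread_create(&t1, NULL, worker, NULL);
            pthread_create(&t2, NULL, worker, NULL);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            assert(counter == 2);  /* holds with the lock; may fail without it */
            return 0;
        }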

    Property-Driven Fence Insertion using Reorder Bounded Model Checking

    Modern architectures provide weaker memory consistency guarantees than sequential consistency. These weaker guarantees allow programs to exhibit behaviours where the program statements appear to have executed out of program order. Fortunately, modern architectures provide memory barriers (fences) to enforce the program order between a pair of statements if needed. Due to the intricate semantics of weak memory models, the placement of fences is challenging even for experienced programmers. Too few fences lead to bugs, whereas overuse of fences results in performance degradation. This motivates automated placement of fences. Tools that restore sequential consistency in the program may insert more fences than necessary for the program to be correct. Therefore, we propose a property-driven technique that introduces "reorder-bounded exploration" to identify the smallest number of program locations for fence placement. We implemented our technique on top of CBMC; however, in principle, our technique is generic enough to be used with any model checker. Our experimental results show that our technique is faster and solves more instances of relevant benchmarks compared to earlier approaches. Comment: 18 pages, 3 figures, 4 algorithms. Version change reason: new set of results and publication-ready version of FM 201
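
    To make the fence-placement problem concrete, here is a classic store/load (Dekker-style) idiom in C11; the program and the fence placement are an illustration, not one of the paper's benchmarks. Without the fences, a weak memory model allows both loads to return 0; a full fence between each store and the following load restores the sequentially consistent outcomes.

        #include <stdatomic.h>
        #include <pthread.h>
        #include <stdio.h>

        /* Dekker-style idiom: without fences, store-load reordering allows
           both r1 and r2 to be 0, which sequential consistency forbids. */
        static atomic_int x = 0, y = 0;
        static int r1, r2;

        static void *t1(void *a) {
            (void)a;
            atomic_store_explicit(&x, 1, memory_order_relaxed);
            atomic_thread_fence(memory_order_seq_cst);   /* the fence a tool would insert */
            r1 = atomic_load_explicit(&y, memory_order_relaxed);
            return NULL;
        }

        static void *t2(void *a) {
            (void)a;
            atomic_store_explicit(&y, 1, memory_order_relaxed);
            atomic_thread_fence(memory_order_seq_cst);
            r2 = atomic_load_explicit(&x, memory_order_relaxed);
            return NULL;
        }

        int main(void) {
            pthread_t a, b;
            pthread_create(&a, NULL, t1, NULL);
            pthread_create(&b, NULL, t2, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            printf("r1=%d r2=%d\n", r1, r2);  /* r1 == 0 && r2 == 0 is now impossible */
            return 0;
        }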

    Addressing a source of trouble outside of the repair space

    A body of research in conversation analysis has identified a range of structurally provided positions in which sources of trouble in talk-in-interaction can be addressed using repair. These practices are contained within what Schegloff (1992) calls the repair space. In this paper, I examine a rare instance in which a source of trouble is not resolved within the repair space and comes to be addressed outside of it. The practice by which this occurs is a post-completion account; that is, an account produced after the possible completion of the sequence containing a source of trouble. Unlike fourth position repair, the final repair position available within the repair space, this account is not made in preparation for a revised response to the trouble-source turn. Its more restrictive aim, rather, is to circumvent an ongoing difference between the parties involved. I argue that because the trouble is addressed in this manner, and in this particular position, the repair space can be considered as being limited to the sequence in which a source of trouble originates.

    Virtual Prototyping for Dynamically Reconfigurable Architectures using Dynamic Generic Mapping

    This paper presents a virtual prototyping methodology for Dynamically Reconfigurable (DR) FPGAs. The methodology is based around a library of VHDL image processing components and allows the rapid prototyping and algorithmic development of low-level image processing systems. For the effective modelling of dynamically reconfigurable designs, a new technique named Dynamic Generic Mapping is introduced. This method allows efficient representation of dynamic reconfiguration without needing any additional components to model the reconfiguration process, giving the designer more flexibility in modelling dynamic configurations than other methodologies. Models created using this technique can then be simulated and targeted to a specific technology using the same code. The technique is demonstrated through the realisation of modules for a motion tracking system targeted to a DR environment, RIFLE-62.
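
    Dynamic Generic Mapping itself is a VHDL-level modelling technique; purely as a software analogy (a C sketch, not the paper's method), modelling reconfiguration without a dedicated reconfiguration component can be pictured as rebinding a function pointer that stands for the currently configured processing module:

        #include <stdio.h>

        /* A processing "module" interface: one image-processing stage. */
        typedef int (*stage_fn)(int pixel);

        static int threshold_stage(int pixel) { return pixel > 128 ? 255 : 0; }
        static int invert_stage(int pixel)    { return 255 - pixel; }

        int main(void) {
            stage_fn stage = threshold_stage;     /* initial configuration */
            printf("threshold(200) = %d\n", stage(200));

            stage = invert_stage;                 /* "reconfigure" by rebinding; no
                                                     extra controller object needed */
            printf("invert(200)    = %d\n", stage(200));
            return 0;
        }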

    A framework for investigating the interaction in information retrieval

    To increase retrieval effectiveness, information retrieval systems must offer better support to users in their information seeking activities. To achieve this, one major concern is to obtain a better understanding of the nature of the interaction between a user and an information retrieval system. For this, we need a means of analysing the interaction in information retrieval, so as to compare the interaction processes within and across information retrieval systems. We present a framework for investigating the interaction between users and information retrieval systems. The framework is based on channel theory, a theory of information and its flow, which provides an explicit ontology that can be used to represent any aspect of the interaction process. The developed framework allows for the investigation of the interaction in information retrieval at the desired level of abstraction. We use the framework to investigate the interaction in relevance feedback and in standard web search.
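
    The framework itself is conceptual, but the relevance feedback it is applied to has a standard concrete form. As an illustration only (the classic Rocchio update, which is not part of the paper), a C sketch of query reweighting that moves the query vector toward judged-relevant documents and away from non-relevant ones:

        #include <stdio.h>

        #define DIM 4  /* toy term-vector dimensionality */

        /* Rocchio update: q' = a*q + b*mean(relevant) - c*mean(nonrelevant). */
        static void rocchio(double q[DIM], const double rel[DIM],
                            const double nonrel[DIM],
                            double a, double b, double c) {
            for (int i = 0; i < DIM; i++) {
                q[i] = a * q[i] + b * rel[i] - c * nonrel[i];
                if (q[i] < 0.0) q[i] = 0.0;  /* negative weights are usually clipped */
            }
        }

        int main(void) {
            double q[DIM]      = {1.0, 0.5, 0.0, 0.0};  /* current query */
            double rel[DIM]    = {0.8, 0.9, 0.7, 0.0};  /* centroid of relevant docs */
            double nonrel[DIM] = {0.0, 0.0, 0.0, 0.9};  /* centroid of non-relevant docs */

            rocchio(q, rel, nonrel, 1.0, 0.75, 0.15);   /* common parameter choice */
            for (int i = 0; i < DIM; i++) printf("q[%d] = %.3f\n", i, q[i]);
            return 0;
        }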

    CONFLLVM: A Compiler for Enforcing Data Confidentiality in Low-Level Code

    We present an instrumenting compiler for enforcing data confidentiality in low-level applications (e.g. those written in C) in the presence of an active adversary. In our approach, the programmer marks secret data by writing lightweight annotations on top-level definitions in the source code. The compiler then uses a static flow analysis coupled with efficient runtime instrumentation, a custom memory layout, and custom control-flow integrity checks to prevent data leaks even in the presence of low-level attacks. We have implemented our scheme as part of the LLVM compiler. We evaluate it on the SPEC micro-benchmarks for performance, and on larger, real-world applications (including OpenLDAP, which is around 300 KLoC) for the programmer overhead required to restructure the application when protecting sensitive data such as passwords. We find that the performance overheads introduced by our instrumentation are moderate (12% on average on SPEC), and the programmer effort to port OpenLDAP is only about 160 LoC. Comment: Technical report for CONFLLVM: A Compiler for Enforcing Data Confidentiality in Low-Level Code, appearing at EuroSys 201
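
    As a rough hand-written approximation of the idea (the tag and the check below are illustrative stand-ins, not CONFLLVM's actual annotations or instrumentation, which are compiler-enforced), secret data can be pictured as carrying a taint flag that every public sink must consult:

        #include <stdio.h>

        /* Illustrative only: a manual "taint" tag standing in for the
           compiler-inserted tracking that an instrumenting compiler automates. */
        typedef struct {
            char data[64];
            int  is_secret;  /* set where the programmer would write an annotation */
        } buffer_t;

        /* Every sink (here, stdout) checks the tag before releasing data. */
        static int send_public(const buffer_t *b) {
            if (b->is_secret) {
                fprintf(stderr, "blocked: secret data reached a public sink\n");
                return -1;
            }
            puts(b->data);
            return 0;
        }

        int main(void) {
            buffer_t password = { "hunter2", 1 };   /* marked secret */
            buffer_t greeting = { "hello",   0 };

            send_public(&greeting);   /* allowed */
            send_public(&password);   /* rejected at the sink */
            return 0;
        }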