
    A Tool for Integer Homology Computation: Lambda-AT Model

    In this paper, we formalize the notion of lambda-AT-model (where λ is a non-null integer) for a given chain complex, which allows the computation of homological information in the integer domain while avoiding the use of the Smith Normal Form of the boundary matrices. We present an algorithm for computing such a model, obtaining the Betti numbers, the prime numbers p involved in the invariant factors of the torsion subgroup of homology, the number of invariant factors that are a power of p, and a set of representative cycles of generators of homology mod p, for each p. Moreover, we establish the minimum valid lambda for such a construction, which cuts down the computational costs related to the torsion subgroup. The tools described here are useful for determining topological information of nD structured objects such as simplicial, cubical or simploidal complexes, and are applicable to extracting such information from digital pictures.
    Comment: Journal Image and Vision Computing, Volume 27 Issue 7, June, 200
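    As a point of reference for the quantities named in this abstract (Betti numbers and homology mod p), the sketch below computes Betti numbers of a small chain complex over Z/pZ directly from boundary-matrix ranks. This is a minimal illustration, not the paper's lambda-AT construction; the helper names (`rank_mod_p`, `betti_numbers_mod_p`) and the example complex are assumptions of mine.

```python
# Illustrative only: Betti numbers over Z/pZ via rank-nullity, not the lambda-AT model.
import numpy as np

def rank_mod_p(M, p):
    """Rank of an integer matrix over the field Z/pZ (Gaussian elimination mod p)."""
    A = np.array(M, dtype=np.int64) % p
    n_rows, n_cols = A.shape
    r = 0
    for c in range(n_cols):
        pivot = next((i for i in range(r, n_rows) if A[i, c]), None)
        if pivot is None:
            continue
        A[[r, pivot]] = A[[pivot, r]]
        A[r] = (A[r] * pow(int(A[r, c]), -1, p)) % p   # scale pivot row so pivot = 1
        for i in range(n_rows):
            if i != r and A[i, c]:
                A[i] = (A[i] - A[i, c] * A[r]) % p      # eliminate column c elsewhere
        r += 1
        if r == n_rows:
            break
    return r

def betti_numbers_mod_p(boundaries, dims, p):
    """betti_k over Z/pZ = dim C_k - rank(d_k) - rank(d_{k+1}).

    boundaries[k] is the matrix of d_k : C_k -> C_{k-1} (boundaries[0] is None);
    dims[k] is the number of k-cells."""
    ranks = [0] * (len(dims) + 1)
    for k in range(1, len(dims)):
        ranks[k] = rank_mod_p(boundaries[k], p)
    return [dims[k] - ranks[k] - ranks[k + 1] for k in range(len(dims))]

# Example: a hollow triangle (3 vertices, 3 edges), homotopy equivalent to a circle.
d1 = [[-1,  0, -1],
      [ 1, -1,  0],
      [ 0,  1,  1]]                 # columns are edge boundaries in the vertex basis
print(betti_numbers_mod_p([None, d1], dims=[3, 3], p=2))   # -> [1, 1]
```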

    Synthesis of Topological Quantum Circuits

    Topological quantum computing has recently proven itself to be a very powerful model when considering large-scale, fully error-corrected quantum architectures. In addition to its robust nature under hardware errors, it is a software-driven method of error-corrected computation, with the hardware responsible only for creating a generic quantum resource (the topological lattice). Computation in this scheme is achieved by the geometric manipulation of holes (defects) within the lattice. Interactions between logical qubits (quantum gate operations) are implemented by using particular arrangements of the defects, such as braids and junctions. We demonstrate that junction-based topological quantum gates allow a highly regular and structured implementation of large CNOT (controlled-NOT) gate networks, which ultimately form the basis of the error-corrected primitives that must be used in an error-corrected algorithm. We present a number of heuristics to optimise the area of the resulting structures and therefore the number of required hardware resources.
    Comment: 7 Pages, 10 Figures, 1 Table
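    To make the resource-counting flavour of the abstract concrete, here is a purely hypothetical toy model of my own, not the paper's braid-space heuristics: a CNOT network is recorded as (control, target) pairs on a single row of logical qubits, and a naive "area" proxy is the number of greedily packed time steps times the width of the qubit row.

```python
# Hypothetical toy layout model: pack CNOTs into time steps and estimate an area proxy.
from typing import List, Tuple

def schedule_cnots(cnots: List[Tuple[int, int]]) -> List[List[Tuple[int, int]]]:
    """Greedy list scheduling: a gate may only join the most recent layer, and only
    if its control-target span on the qubit row does not overlap any gate there."""
    layers: List[List[Tuple[int, int]]] = []
    for c, t in cnots:
        lo, hi = min(c, t), max(c, t)
        if layers and all(hi < min(a, b) or lo > max(a, b) for a, b in layers[-1]):
            layers[-1].append((c, t))
        else:
            layers.append([(c, t)])
    return layers

def naive_area(cnots: List[Tuple[int, int]], n_qubits: int) -> int:
    """Area proxy: schedule depth times the width of the qubit row."""
    return len(schedule_cnots(cnots)) * n_qubits

# Example: a small CNOT network on 4 logical qubits.
network = [(0, 1), (2, 3), (0, 2), (1, 3)]
print(naive_area(network, n_qubits=4))   # 3 layers x 4 qubits = 12
```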

    A Bayesian approach to constrained single- and multi-objective optimization

    This article addresses the problem of derivative-free (single- or multi-objective) optimization subject to multiple inequality constraints. Both the objective and constraint functions are assumed to be smooth, non-linear and expensive to evaluate. As a consequence, the number of evaluations that can be used to carry out the optimization is very limited, as in complex industrial design optimization problems. The method we propose to overcome this difficulty has its roots in both the Bayesian and the multi-objective optimization literatures. More specifically, an extended domination rule is used to handle objectives and constraints in a unified way, and a corresponding expected hyper-volume improvement sampling criterion is proposed. This new criterion is naturally adapted to the search for a feasible point when none is available, and reduces to existing Bayesian sampling criteria, namely the classical Expected Improvement (EI) criterion and some of its constrained/multi-objective extensions, as soon as at least one feasible point is available. The calculation and optimization of the criterion are performed using Sequential Monte Carlo techniques. In particular, an algorithm similar to the subset simulation method, which is well known in the field of structural reliability, is used to estimate the criterion. The method, which we call BMOO (for Bayesian Multi-Objective Optimization), is compared to state-of-the-art algorithms for single- and multi-objective constrained optimization.
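    For reference, the classical Expected Improvement criterion that the abstract cites as a special case has the standard closed form EI(x) = (f_best - μ)Φ(z) + σφ(z) with z = (f_best - μ)/σ under a Gaussian posterior. The sketch below implements that textbook formula for unconstrained minimization, not the BMOO expected hyper-volume improvement criterion itself.

```python
# Standard (textbook) Expected Improvement for minimization under a Gaussian posterior.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI at a candidate point with posterior mean `mu`, posterior std `sigma`,
    and current best observed value `f_best` (minimization convention)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    improvement = f_best - mu
    safe_sigma = np.where(sigma > 0, sigma, 1.0)          # avoid division by zero
    z = np.where(sigma > 0, improvement / safe_sigma, 0.0)
    ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)
    # With zero posterior uncertainty, EI degenerates to the positive part of the gap.
    return np.where(sigma > 0, ei, np.maximum(improvement, 0.0))

# Example: posterior N(0.2, 0.1^2) at a candidate, best observation so far 0.3.
print(expected_improvement(0.2, 0.1, 0.3))   # ~0.108
```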

    Dynamics of trimming the content of face representations for categorization in the brain

    To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential, related to stimulus encoding, and the parietal P300, involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion and report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions to dynamically converge onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g. the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g. the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170 to leave only the detailed information important for perceptual decisions over the P300.
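    Since the abstract relies on classification image techniques, here is a generic sketch of the underlying reverse-correlation idea, not the authors' analysis pipeline: stimuli are revealed through random sampling masks, and the masks from correct trials are contrasted with those from incorrect trials to show which image regions drive categorization. The mask-based setup and all names are illustrative assumptions.

```python
# Generic reverse-correlation / classification-image sketch (illustrative assumptions).
import numpy as np

def classification_image(masks, correct):
    """masks: (n_trials, H, W) array of per-trial sampling masks (or noise fields);
    correct: boolean array marking trials with a correct response.
    Returns mean mask on correct trials minus mean mask on incorrect trials."""
    masks = np.asarray(masks, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)

# Toy example: 200 random masks over a 32x32 image, with responses more often
# correct when the upper-left region (a stand-in for "the eyes") is revealed.
rng = np.random.default_rng(0)
masks = rng.random((200, 32, 32))
correct = masks[:, :8, :8].mean(axis=(1, 2)) > 0.5
ci = classification_image(masks, correct)
print(ci[:8, :8].mean(), ci[16:, 16:].mean())   # the upper-left region stands out
```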