    Analytic cell decomposition and analytic motivic integration

    The main results of this paper are a Cell Decomposition Theorem for Henselian valued fields with analytic structure in an analytic Denef-Pas language, and its application to analytic motivic integrals and analytic integrals over \mathbb{F}_q((t)) of sufficiently large characteristic. To accomplish this, we introduce a general framework for Henselian valued fields K with analytic structure, and we investigate the structure of analytic functions in one variable, defined on annuli over K. We also prove that, after parameterization, definable analytic functions are given by terms. The results in this paper pave the way for a theory of \emph{analytic} motivic integration and \emph{analytic} motivic constructible functions in the line of R. Cluckers and F. Loeser [\emph{Fonctions constructibles et int\'egration motivique I}, Comptes rendus de l'Acad\'emie des Sciences, {\bf 339} (2004) 411-416].

    Improving the numerical stability of fast matrix multiplication

    Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than those of the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this paper that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve the accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. Finally, we benchmark performance and demonstrate our improved numerical accuracy.
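The fast algorithms the abstract refers to replace the classical 8 recursive block products with fewer multiplications at the cost of extra additions, which is where the slightly weaker error bounds come from. A minimal sketch of Strassen's 7-multiplication recursion (not the paper's implementation; the power-of-two dimensions and the cutoff value are illustrative assumptions):

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's recursion for square matrices whose dimension is a
    power of two; below `cutoff` we fall back to the classical product,
    the usual practical compromise."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    # The seven recursive products replacing the classical eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((128, 128))
B = rng.standard_normal((128, 128))
err = np.max(np.abs(strassen(A, B) - A @ B))
```

In practice the cutoff is tuned so the recursion stops once the classical product is faster, which also limits the growth of the rounding error that the analyses in the paper bound.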

    Notes on bordered Floer homology

    This is a survey of bordered Heegaard Floer homology, an extension of the Heegaard Floer invariant HF-hat to 3-manifolds with boundary. Emphasis is placed on how bordered Heegaard Floer homology can be used for computations. Based on lectures at the Contact and Symplectic Topology Summer School in Budapest, July 2012.

    Quantum Analogue Computing

    We briefly review what a quantum computer is, what it promises to do for us, and why it is so hard to build one. Among the first applications anticipated to bear fruit is quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data is encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data is encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous variable quantum computing (CVQC), becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future. To appear in the Visions 2010 issue of Phil. Trans. Roy. Soc.

    Quadratic Word Equations with Length Constraints, Counter Systems, and Presburger Arithmetic with Divisibility

    Word equations are a crucial element in the theoretical foundation of constraint solving over strings, which have received a lot of attention in recent years. A word equation relates two words over string variables and constants. Its solution amounts to a function mapping variables to constant strings that equate the left and right hand sides of the equation. While the problem of solving word equations is decidable, the decidability of the problem of solving a word equation with a length constraint (i.e., a constraint relating the lengths of words in the word equation) has remained a long-standing open problem. In this paper, we focus on the subclass of quadratic word equations, i.e., those in which each variable occurs at most twice. We first show that the length abstractions of solutions to quadratic word equations are in general not Presburger-definable. We then describe a class of counter systems with Presburger transition relations which capture the length abstraction of a quadratic word equation with regular constraints. We provide an encoding of the effect of a simple loop of the counter systems in the theory of existential Presburger Arithmetic with divisibility (PAD). Since PAD is decidable, we get a decision procedure for quadratic word equations with length constraints for which the associated counter system is \emph{flat} (i.e., all nodes belong to at most one cycle). We show a decidability result (in fact, also an NP algorithm with a PAD oracle) for a recently proposed NP-complete fragment of word equations called regular-oriented word equations, together with length constraints. Decidability holds when the constraints are additionally extended with regular constraints with a 1-weak control structure.
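To make the definitions concrete, the following is a purely illustrative sketch (not the paper's decision procedure): a quadraticity check, plus a brute-force enumeration of a bounded fragment of the length abstraction of a toy equation. Variables are written as uppercase letters and constants as lowercase; the equation, alphabet, and bounds are all invented for the example.

```python
from itertools import product

def is_quadratic(lhs, rhs, variables):
    # Quadratic: each variable occurs at most twice across both sides.
    eq = lhs + rhs
    return all(eq.count(v) <= 2 for v in variables)

def words_up_to(alphabet, max_len):
    # All constant words over `alphabet` of length 0..max_len.
    out = [""]
    for k in range(1, max_len + 1):
        out += ["".join(p) for p in product(alphabet, repeat=k)]
    return out

def substitute(side, assignment):
    # Replace each variable letter by its assigned constant word.
    return "".join(assignment.get(c, c) for c in side)

def length_abstraction(lhs, rhs, variables, alphabet="ab", max_len=2):
    # Record the tuple of variable lengths for every solving assignment
    # up to max_len; the full length abstraction is generally infinite
    # and, as the paper shows, in general not Presburger-definable.
    sols = set()
    for vals in product(words_up_to(alphabet, max_len), repeat=len(variables)):
        a = dict(zip(variables, vals))
        if substitute(lhs, a) == substitute(rhs, a):
            sols.add(tuple(len(v) for v in vals))
    return sorted(sols)

# The commutation-style equation XaY = YaX is quadratic:
# X and Y each occur exactly twice.
quadratic = is_quadratic("XaY", "YaX", "XY")
lengths = length_abstraction("XaY", "YaX", ["X", "Y"])
```

For instance X = "a", Y = "aa" solves XaY = YaX (both sides are "aaaa"), so the pair (1, 2) appears in the bounded length abstraction.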

    Ultrasonic attenuation measurements at very high SNR: correlation, information theory and performance

    This paper describes a system for ultrasonic wave attenuation measurements which is based on pseudo-random binary codes as transmission signals combined with on-the-fly correlation for received signal detection. The apparatus can receive signals in the nanovolt range against a noise background in the order of hundreds of microvolts and an analogue to digital converter (ADC) bit-step also in the order of hundreds of microvolts. Very high signal to noise ratios (SNRs) are achieved without recourse to coherent averaging with its associated requirement for long acquisition times. The system works by a process of dithering – in which very low amplitude received signals enter the dynamic range of the ADC by 'riding' on electronic noise at the system input. The amplitude of this 'useful noise' has to be chosen with care for an optimised design. The process of optimisation is explained on the basis of classical information theory and is achieved through a simple noise model. The performance of the system is examined for different transmitted code lengths and gain settings in the receiver chain. Experimental results are shown to verify the expected operation when the system is applied to a very highly attenuating material – an aerated slurry.
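The dithering-plus-correlation principle lends itself to a small numerical sketch. Everything below is illustrative, not taken from the apparatus described: a tiny echo of a ±1 pseudo-random code is buried in noise several times larger than itself, coarsely quantised by a simulated ADC whose step also exceeds the echo amplitude, and still recovered by correlating against the known code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical numbers chosen only to illustrate the principle.
N = 4095                                  # transmitted code length
code = rng.choice([-1.0, 1.0], size=N)    # stand-in for a pseudo-random binary code
delay, amplitude = 300, 1e-3              # unknown arrival time; tiny echo
noise_rms = 5e-3                          # electronic noise, 5x the echo amplitude
adc_step = 1e-2                           # ADC bit-step, 10x the echo amplitude

received = noise_rms * rng.standard_normal(2 * N)
received[delay:delay + N] += amplitude * code

# Without noise the echo would sit entirely below one ADC step and vanish;
# the noise "dithers" it across quantisation levels so its mean survives.
digitised = adc_step * np.round(received / adc_step)

# On-the-fly correlation concentrates the echo energy into one lag
# (peak ~ amplitude * N) while the noise only grows like sqrt(N).
corr = np.correlate(digitised, code, mode="valid")
estimated_delay = int(np.argmax(np.abs(corr)))
```

The processing gain scales with the code length, which is why the paper examines performance across different transmitted code lengths; the noise amplitude must be matched to the ADC step for the dither to work, which is the optimisation the noise model addresses.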

    Identification of the human factors contributing to maintenance failures in a petroleum operation

    Objective: This research aimed to identify the most frequently occurring human factors contributing to maintenance-related failures within a petroleum industry organization. Commonality between failures will assist in understanding reliability in maintenance processes, thereby preventing accidents in high-hazard domains. Background: Methods exist for understanding the human factors contributing to accidents. Their application in a maintenance context has mainly been advanced in aviation and nuclear power. Maintenance in the petroleum industry provides a different context for investigating the role that human factors play in influencing outcomes. It is therefore worth investigating the contributing human factors to improve our understanding of both human factors in reliability and the factors specific to this domain. Method: Detailed analyses were conducted of maintenance-related failures (N = 38) in a petroleum company using structured interviews with maintenance technicians. The interview structure was based on the Human Factor Investigation Tool (HFIT), which in turn was based on Rasmussen’s model of human malfunction. Results: A mean of 9.5 factors per incident was identified across the cases investigated. The three most frequent human factors contributing to the maintenance failures were found to be assumption (79% of cases), design and maintenance (71%), and communication (66%). Conclusion: HFIT proved to be a useful instrument for identifying the pattern of human factors that recurred most frequently in maintenance-related failures. Application: The high frequency of failures attributed to assumptions and communication demonstrated the importance of problem-solving abilities and organizational communication in a domain where maintenance personnel have a high degree of autonomy and a wide geographical distribution.

    Jewish Immigrants in Israel: Disintegration Within Integration?

    In her chapter, ‘Disintegration within integration’, Amandine Desille examines more recent transformations of Israel’s Law of Return – the Israeli immigration policy which provides the (imagined) repatriation of Diaspora Jews to Israel – in a context of liberalisation of the Israeli economy and the devolution of power to local authorities. Today, new immigrants follow two paths of ‘integration’: ‘direct absorption’, where immigrants are granted benefits while being free to settle wherever they find fit; and ‘community absorption’, where immigrants are placed in ‘absorption centres’ and see their entitlements conditioned by residence, religious observance and more. Those two paths are ‘ethnicised’ in the sense that they depend on country of origin – Western immigrants, considered as economically useful, benefit from direct absorption and a more pluralist attitude of local governments, while immigrants from Africa and Asia are the objects of an assimilationist policy. This situation of ‘(dis)integration’ within what is supposed to be an inclusive immigrant policy for all Jews shows the extent to which new criteria of perceived economic performance limit the integration of specific segments of newcomers. The rescaling of immigration and immigrant policies to subnational governments, although it has introduced a more multicultural approach, antagonistic to the assimilationist ideology at work in Israel, has not enabled an alternative policy framework which is more accommodating to all.

    Compositional Satisfiability Solving in Separation Logic

    We introduce a novel decision procedure for the satisfiability problem in array separation logic combined with general inductively defined predicates and arithmetic. Our proposal differentiates itself from existing works by solving satisfiability through compositional reasoning. First, following Fermat’s method of infinite descent, it infers for every inductive definition a “base” that precisely characterises its satisfiability. It then utilises these bases to derive such a base for any formula in which the inductive predicates occur. In particular, we identify an expressive decidable fragment for this compositional approach. We have implemented the proposal in a tool and evaluated it over challenging problems. The experimental results show that compositional satisfiability solving is efficient and that our tool is effective and efficient when compared with existing solvers.