    On palimpsests in neural memory: an information theory viewpoint

    The finite capacity of neural memory and the reconsolidation phenomenon suggest it is important to be able to update stored information as in a palimpsest, where new information overwrites old information. Moreover, changing information in memory is metabolically costly. In this paper, we suggest that information-theoretic approaches may inform the fundamental limits in constructing such a memory system. In particular, we define malleable coding, which considers not only representation length but also ease of representation update, thereby encouraging some form of recycling to convert an old codeword into a new one. Malleability cost is the difficulty of synchronizing compressed versions, and malleable codes are of particular interest when representing information and modifying the representation are both expensive. We examine the tradeoff between compression efficiency and malleability cost, under a malleability metric defined with respect to a string edit distance. This introduces a metric topology to the compressed domain. We characterize the exact set of achievable rates and malleability as the solution of a subgraph isomorphism problem. This is all done within the optimization approach to biology framework. Accepted manuscript.
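
    Since the malleability metric is defined with respect to a string edit distance, a small illustration may help. The sketch below is an illustration only, not the paper's coding scheme: it measures the cost of converting an old codeword into a new one as a Levenshtein edit distance, normalized by codeword length. Both the normalization and the example codewords are assumptions made here.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: insertions, deletions, and substitutions
    each cost 1."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))          # distances from the empty prefix of a
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # delete a[i-1]
                        dp[j - 1] + 1,                      # insert b[j-1]
                        prev + (a[i - 1] != b[j - 1]))      # substitute
            prev = cur
    return dp[n]

def malleability_cost(old_code: str, new_code: str) -> float:
    """Edit distance per output symbol; this normalization is an
    assumption made for illustration, not the paper's definition."""
    return edit_distance(old_code, new_code) / max(len(new_code), 1)

print(malleability_cost("1011001", "1011101"))  # one substitution -> low cost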

    A Gray Code for the Shelling Types of the Boundary of a Hypercube

    We consider two shellings of the boundary of the hypercube equivalent if one can be transformed into the other by an isometry of the cube. We observe that a class of indecomposable permutations, bijectively equivalent to standard double occurrence words, may be used to encode one representative from each equivalence class of the shellings of the boundary of the hypercube. These permutations thus encode the shelling types of the boundary of the hypercube. We construct an adjacent transposition Gray code for this class of permutations. Our result is a signed variant of King's result showing that there is a transposition Gray code for indecomposable permutations.
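
    For readers unfamiliar with these objects, the hedged sketch below pairs a standard indecomposability test with the classic Steinhaus-Johnson-Trotter enumeration, the textbook example of an adjacent transposition Gray code on all permutations. It is illustrative only: simply filtering the SJT order by indecomposability does not preserve the Gray-code property on the subclass, which is what makes the paper's construction nontrivial.

```python
def is_indecomposable(p):
    """True if no proper prefix of p (a permutation of 1..n, stored in a
    0-indexed list) is itself a permutation of {1..k}."""
    largest = 0
    for k, v in enumerate(p[:-1], start=1):
        largest = max(largest, v)
        if largest == k:            # prefix of length k maps onto {1..k}
            return False
    return True

def sjt(n):
    """Steinhaus-Johnson-Trotter: yield all permutations of 1..n so that
    consecutive permutations differ by one adjacent transposition."""
    perm = list(range(1, n + 1))
    dirs = [-1] * n                 # every element starts looking left
    yield perm[:]
    while True:
        # locate the largest "mobile" element (points at a smaller neighbor)
        mobile = -1
        for i, v in enumerate(perm):
            j = i + dirs[i]
            if 0 <= j < n and perm[j] < v and (mobile < 0 or v > perm[mobile]):
                mobile = i
        if mobile < 0:
            return                  # no mobile element: enumeration done
        i, j = mobile, mobile + dirs[mobile]
        perm[i], perm[j] = perm[j], perm[i]   # the adjacent transposition
        dirs[i], dirs[j] = dirs[j], dirs[i]
        for k, v in enumerate(perm):          # larger elements turn around
            if v > perm[j]:
                dirs[k] = -dirs[k]
        yield perm[:]

print([p for p in sjt(4) if is_indecomposable(p)])
```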

    Harmonious Hilbert curves and other extradimensional space-filling curves

    This paper introduces a new way of generalizing Hilbert's two-dimensional space-filling curve to arbitrary dimensions. The new curves, called harmonious Hilbert curves, have the unique property that for any d' < d, the d-dimensional curve is compatible with the d'-dimensional curve with respect to the order in which the curves visit the points of any d'-dimensional axis-parallel space that contains the origin. Similar generalizations to arbitrary dimensions are described for several variants of Peano's curve (the original Peano curve, the coil curve, the half-coil curve, and the Meurthe curve). The d-dimensional harmonious Hilbert curves and the Meurthe curves have neutral orientation: as compared to the curve as a whole, arbitrary pieces of the curve have each of d! possible rotations with equal probability. Thus one could say these curves are 'statistically invariant' under rotation, unlike the Peano curves, the coil curves, the half-coil curves, and the familiar generalization of Hilbert curves by Butz and Moore. In addition, prompted by an application in the construction of R-trees, this paper shows how to construct a 2d-dimensional generalized Hilbert or Peano curve that traverses the points of a certain d-dimensional diagonally placed subspace in the order of a given d-dimensional generalized Hilbert or Peano curve. Pseudocode is provided for comparison operators based on the curves presented in this paper. Comment: 40 pages, 10 figures, pseudocode included.
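
    As a concrete, hedged example of a comparison operator based on a space-filling curve, the sketch below covers only the textbook two-dimensional case (the standard point-to-index conversion for the classic Hilbert curve, not the harmonious or higher-dimensional curves constructed in the paper) and uses it as a sort key, as one would when bulk-loading an R-tree.

```python
def hilbert_index(order: int, x: int, y: int) -> int:
    """Position of grid point (x, y) along the classic 2-D Hilbert curve
    on a (2**order) x (2**order) grid (standard xy2d conversion)."""
    n = 1 << order
    d = 0
    s = n >> 1
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:               # rotate/flip the quadrant so the next
            if rx == 1:           # level sees a consistently oriented curve
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s >>= 1
    return d

# Comparison operator: order points by their position along the curve,
# e.g. as a bulk-loading order for an R-tree.
points = [(5, 1), (2, 7), (0, 0), (6, 6)]
points.sort(key=lambda p: hilbert_index(3, *p))
print(points)
```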

    Identification of quasi-optimal regions in the design space using surrogate modeling

    The use of Surrogate Based Optimization (SBO) is widespread in engineering design to find optimal performance characteristics of expensive simulations (forward analysis: from input to optimal output). However, often the practitioner knows a priori the desired performance and is interested in finding the associated input parameters (reverse analysis: from desired output to input). A popular method to solve such reverse (inverse) problems is to minimize the error between the simulated performance and the desired goal. However, there might be multiple quasi-optimal solutions to the problem. In this paper, the authors propose a novel method to efficiently solve inverse problems and to sample Quasi-Optimal Regions (QORs) in the input (design) space more densely. The development of this technique, based on the probability of improvement criterion and kriging models, is driven by a real-life problem from bio-mechanics, i.e., determining the elasticity of the (rabbit) tympanic membrane, a membrane that converts acoustic sound waves into vibrations of the middle ear ossicular bones.
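
    Under a kriging (Gaussian process) model with predictive mean mu(x) and standard deviation sigma(x) for the mismatch between simulated and desired performance, the probability of improving on the current best mismatch T has the closed form Phi((T - mu(x)) / sigma(x)). A minimal sketch, with made-up predictive values standing in for a fitted kriging model:

```python
import math

def probability_of_improvement(mu: float, sigma: float, best: float) -> float:
    """P[f(x) < best] under a Gaussian predictive distribution N(mu, sigma^2),
    e.g. a kriging model of the mismatch |simulated - desired|."""
    if sigma <= 0.0:                      # no predictive uncertainty left
        return float(mu < best)
    z = (best - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Toy acquisition step: mu/sigma would come from a kriging model fitted to
# previously simulated designs (the values here are invented).
candidates = [(0.12, 0.05), (0.30, 0.20), (0.11, 0.01)]
best_error = 0.15
scores = [probability_of_improvement(m, s, best_error) for m, s in candidates]
print(scores, "-> evaluate candidate", scores.index(max(scores)))
```

    Candidates whose probability of improvement stays non-negligible across iterations are exactly those inside quasi-optimal regions, so ranking new samples by this score densifies the QORs.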

    Complexity Theory, Game Theory, and Economics: The Barbados Lectures

    This document collects the lecture notes from my mini-course "Complexity Theory, Game Theory, and Economics," taught at the Bellairs Research Institute of McGill University, Holetown, Barbados, February 19--23, 2017, as the 29th McGill Invitational Workshop on Computational Complexity. The goal of this mini-course is twofold: (i) to explain how complexity theory has helped illuminate several barriers in economics and game theory; and (ii) to illustrate how game-theoretic questions have led to new and interesting complexity theory, including several recent breakthroughs. It consists of two five-lecture sequences: the Solar Lectures, focusing on the communication and computational complexity of computing equilibria; and the Lunar Lectures, focusing on applications of complexity theory in game theory and economics. No background in game theory is assumed. Comment: Revised v2 from December 2019 corrects some errors in, and adds some recent citations to, v1. Revised v3 corrects a few typos in v2.

    Stochastic Parameter Estimation of Poroelastic Processes Using Geomechanical Measurements

    Understanding the structure and material properties of hydrologic systems is important for a number of applications, including carbon dioxide injection for geological carbon storage or enhanced oil recovery, monitoring of hydraulic fracturing projects, mine dewatering, environmental remediation and managing geothermal reservoirs. These applications require a detailed knowledge of the geologic systems being impacted, in order to optimize their operation and safety. In order to evaluate, monitor and manage such hydrologic systems, a stochastic estimation framework was developed which is capable of characterizing the structure and physical parameters of the subsurface. This software framework uses a set of stochastic optimization algorithms to calibrate a heterogeneous subsurface flow model to available field data, and to construct an ensemble of models which represent the range of system states that would explain this data. Many of these systems, such as oil reservoirs, are deep and hydraulically isolated from the shallow subsurface, making near-surface fluid pressure measurements uninformative. Near-surface strainmeter, tiltmeter and extensometer signals were therefore evaluated in terms of their potential information content for calibrating poroelastic flow models. Such geomechanical signals are caused by mechanical deformation, and therefore travel through hydraulically impermeable rock much more quickly than pore-pressure signals do. A numerical geomechanics model was therefore developed using Geocentric, which couples subsurface flow and elastic deformation equations to simulate geomechanical signals (e.g. pressure, strain, tilt and displacement) given a set of model parameters. A high-performance cluster computer performs this computationally expensive simulation for each set of parameters, and compares the simulation results to measured data in order to evaluate the likelihood of each model. The set of data-model comparisons is then used to estimate each unknown parameter, as well as the uncertainty of each parameter estimate. This uncertainty can be influenced by limitations in the measured dataset such as random noise, instrument drift, and the number and location of sensors, as well as by conceptual model errors and false underlying assumptions. In this study we find that strain measurements taken from the shallow subsurface can be used to estimate the structure and material parameters of geologic layers much deeper in the subsurface. This can significantly reduce the drilling and installation costs of monitoring wells, as well as the risk of puncturing or fracturing a target reservoir. These parameter estimates were also used to develop an ensemble of calibrated hydromechanical models which can predict the range of system behavior and inform decision-making on the management of an aquifer or reservoir.
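
    The calibration loop described here (simulate, compare with measured signals through a likelihood, keep or discard the parameter set, accumulate an ensemble) can be sketched without the expensive coupled simulation. The toy below substitutes a cheap exponential-decay forward model for Geocentric and a random-walk Metropolis sampler for the framework's stochastic optimization algorithms; every numeric value in it is invented for illustration.

```python
import math, random

def forward_model(params, times):
    """Cheap stand-in for the coupled flow/deformation simulation:
    a toy exponential pressure-decay response."""
    amp, rate = params
    return [amp * math.exp(-rate * t) for t in times]

def log_likelihood(simulated, measured, noise_std):
    """Gaussian log-likelihood of the measured signals given model output."""
    return -0.5 * sum(((s - m) / noise_std) ** 2
                      for s, m in zip(simulated, measured))

def metropolis(measured, times, noise_std, n_steps=5000, step=0.05):
    """Random-walk Metropolis sampler: returns an ensemble of parameter
    sets consistent with the data, from which point estimates and
    uncertainties can be read off."""
    params = [1.0, 1.0]
    ll = log_likelihood(forward_model(params, times), measured, noise_std)
    ensemble = []
    for _ in range(n_steps):
        proposal = [p + random.gauss(0.0, step) for p in params]
        ll_prop = log_likelihood(forward_model(proposal, times),
                                 measured, noise_std)
        if ll_prop >= ll or random.random() < math.exp(ll_prop - ll):
            params, ll = proposal, ll_prop        # accept the proposal
        ensemble.append(params[:])
    return ensemble

# Synthetic "measured" data from known parameters plus noise:
times = [0.1 * i for i in range(50)]
measured = [v + random.gauss(0.0, 0.02)
            for v in forward_model([2.0, 0.5], times)]
ensemble = metropolis(measured, times, noise_std=0.02)
amps = sorted(p[0] for p in ensemble[1000:])      # discard burn-in
print("estimated amplitude ~", amps[len(amps) // 2])
```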

    Combinatorial Structures in Hypercubes


    Center for Space Microelectronics Technology

    The 1990 technical report of the Jet Propulsion Laboratory Center for Space Microelectronics Technology summarizes the technical accomplishments, publications, presentations, and patents of the center during 1990. The report lists 130 publications, 226 presentations, and 87 new technology reports and patents.

    The Data Big Bang and the Expanding Digital Universe: High-Dimensional, Complex and Massive Data Sets in an Inflationary Epoch

    Recent and forthcoming advances in instrumentation, and giant new surveys, are creating astronomical data sets that are not amenable to the methods of analysis familiar to astronomers. Traditional methods are often inadequate not merely because of the size in bytes of the data sets, but also because of the complexity of modern data sets. Mathematical limitations of familiar algorithms and techniques in dealing with such data sets create a critical need for new paradigms for the representation, analysis and scientific visualization (as opposed to illustrative visualization) of heterogeneous, multiresolution data across application domains. Some of the problems presented by the new data sets have been addressed by other disciplines such as applied mathematics, statistics and machine learning, and the resulting methods have been utilized by other sciences such as space-based geosciences. Unfortunately, valuable results pertaining to these problems are mostly to be found only in publications outside of astronomy. Here we offer brief overviews of a number of concepts, techniques and developments, some "old" and some new. These are generally unknown to most of the astronomical community, but are vital to the analysis and visualization of complex datasets and images. In order for astronomers to take advantage of the richness and complexity of the new era of data, and to be able to identify, adopt, and apply new solutions, the astronomical community needs a certain degree of awareness and understanding of the new concepts. One of the goals of this paper is to help bridge the gap between applied mathematics, artificial intelligence and computer science on the one side and astronomy on the other. Comment: 24 pages, 8 figures, 1 table. Accepted for publication in Advances in Astronomy, special issue "Robotic Astronomy".