
    Prediction and Power in Molecular Sensors: Uncertainty and Dissipation When Conditionally Markovian Channels Are Driven by Semi-Markov Environments

    Sensors often serve at least two purposes: predicting their input and minimizing dissipated heat. However, determining whether a particular sensor has evolved or been designed to be accurate and efficient is difficult. This difficulty arises partly because the two functional constraints are at cross purposes and partly because quantifying the predictive performance of even in silico sensors can require prohibitively long simulations. To circumvent these difficulties, we develop expressions for the predictive accuracy and thermodynamic costs of the broad class of conditionally Markovian sensors subject to unifilar hidden semi-Markov (memoryful) environmental inputs. Predictive metrics include the instantaneous memory and the mutual information between the present sensor state and the input's future, while dissipative metrics include power consumption and the nonpredictive information rate. Success in deriving these formulae relies heavily on identifying the environment's causal states, the input's minimal sufficient statistics for prediction. Using these formulae, we study the simplest nontrivial biological sensor model---that of a Hill molecule, characterized by the number of ligands that bind simultaneously, the sensor's cooperativity. When energetic rewards are proportional to total predictable information, the closest cooperativity that optimizes the total energy budget generally depends hysteretically on the environment's past. In this way, the sensor gains robustness to environmental fluctuations. Given the simplicity of the Hill molecule, such hysteresis will likely be found in more complex predictive sensors as well. That is, adaptations that only locally optimize biochemical parameters for prediction and dissipation can lead to sensors that "remember" the past environment.
    Comment: 21 pages, 4 figures, http://csc.ucdavis.edu/~cmg/compmech/pubs/piness.ht
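
    As a toy illustration of the quantities named above, the sketch below (Python, with illustrative parameters that are not taken from the paper) computes the Hill binding probability for a given cooperativity n and the mutual information, in bits, between a binary ligand environment and the binary bound/unbound sensor state. It treats the environment as i.i.d. rather than as the semi-Markov, memoryful input analyzed in the paper, so it is only a minimal stand-in for the predictive metrics described there.

    # Minimal sketch (not from the paper): a two-state ligand environment driving a
    # Hill-type sensor, and the mutual information between sensor state and the
    # current input. Concentrations, dissociation constant, and environment
    # statistics below are illustrative assumptions.
    import numpy as np

    def hill_binding_prob(conc, K=1.0, n=2):
        """Probability the Hill molecule is bound at ligand concentration conc,
        with cooperativity n (number of ligands that bind simultaneously)."""
        return conc**n / (K**n + conc**n)

    def sensor_input_mutual_info(p_high=0.5, c_low=0.5, c_high=2.0, K=1.0, n=2):
        """Mutual information (bits) between a binary environment (low/high ligand
        concentration) and the binary bound/unbound sensor state."""
        p_env = np.array([1 - p_high, p_high])
        p_bound = np.array([hill_binding_prob(c_low, K, n),
                            hill_binding_prob(c_high, K, n)])
        # Joint distribution over (environment, sensor state).
        joint = np.stack([p_env * (1 - p_bound), p_env * p_bound], axis=1)
        p_sensor = joint.sum(axis=0)
        mi = 0.0
        for i in range(2):
            for j in range(2):
                if joint[i, j] > 0:
                    mi += joint[i, j] * np.log2(joint[i, j] / (p_env[i] * p_sensor[j]))
        return mi

    if __name__ == "__main__":
        for n in (1, 2, 4, 8):
            print(f"cooperativity n={n}: I(env; sensor) = {sensor_input_mutual_info(n=n):.3f} bits")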

    The Origins of Computational Mechanics: A Brief Intellectual History and Several Clarifications

    The principal goal of computational mechanics is to define pattern and structure so that the organization of complex systems can be detected and quantified. Computational mechanics developed from efforts in the 1970s and early 1980s to identify strange attractors as the mechanism driving weak fluid turbulence via the method of reconstructing attractor geometry from measurement time series and, in the mid-1980s, to estimate equations of motion directly from complex time series. In providing a mathematical and operational definition of structure, it addressed weaknesses of these early approaches to discovering patterns in natural systems. Since then, computational mechanics has led to a range of results from theoretical physics and nonlinear mathematics to diverse applications---from closed-form analysis of Markov and non-Markov stochastic processes that are ergodic or nonergodic and their measures of information and intrinsic computation, to complex materials and deterministic chaos and intelligence in Maxwellian demons, to quantum compression of classical processes and the evolution of computation and language. This brief review clarifies several misunderstandings and addresses concerns recently raised regarding early works in the field (1980s). We show that misguided evaluations of the contributions of computational mechanics are groundless and stem from a lack of familiarity with its basic goals and from a failure to consider its historical context. For all practical purposes, its modern methods and results largely supersede the early works. This not only renders recent criticism moot and shows the solid ground on which computational mechanics stands but, most importantly, shows the significant progress achieved over three decades and points to the many intriguing and outstanding challenges in understanding the computational nature of complex dynamic systems.
    Comment: 11 pages, 123 citations; http://csc.ucdavis.edu/~cmg/compmech/pubs/cmr.ht

    A model for prediction of spatial farm structure

    Spatial microstructure and its change over time are recorded for Norwegian farm firms. Relatively strong correlations between geographically close neighbors are expected, either because growing farms swallow the smaller ones or because they are affected by some spatially related unobserved factors. Strong correlations over time are also expected because of prevalent family farming. The paper proposes a state-of-the-art Markov chain model to predict the spatial and temporal microstructure, taking account of both non-stationarity and spatio-temporal correlations by means of techniques from non-linear state-space modeling and Gaussian Markov random fields. The model and the complete data set then form a device with which one can investigate the consequences of ignoring spatial and/or temporal correlations, both with complete data and with more sparsely sampled data, like FADN panels or USDA's repeated cross-sections (ARMS).
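
    The sketch below (Python) is a deliberately stripped-down stand-in for the kind of model described above: it estimates a row-stochastic transition matrix between farm size classes from two observation years and projects the size-class distribution forward. The size classes, synthetic data, and projection horizon are assumptions for illustration only; the paper's model additionally handles non-stationarity and spatial correlation through non-linear state-space techniques and Gaussian Markov random fields, none of which appear here.

    # Minimal sketch (an assumption, not the paper's model): a plain Markov chain
    # over farm size classes, with the transition matrix estimated from paired
    # observations and used to project the size-class distribution forward.
    import numpy as np

    def estimate_transition_matrix(size_class_t0, size_class_t1, n_classes):
        """Row-stochastic transition matrix from paired size-class observations."""
        counts = np.zeros((n_classes, n_classes))
        for i, j in zip(size_class_t0, size_class_t1):
            counts[i, j] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

    def project_distribution(initial_dist, P, n_steps):
        """Project the farm size-class distribution n_steps periods ahead."""
        dist = np.asarray(initial_dist, dtype=float)
        for _ in range(n_steps):
            dist = dist @ P
        return dist

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t0 = rng.integers(0, 3, size=200)                         # synthetic classes: 0=small, 1=medium, 2=large
        t1 = np.clip(t0 + rng.integers(-1, 2, size=200), 0, 2)    # synthetic follow-up observation
        P = estimate_transition_matrix(t0, t1, 3)
        print(project_distribution([0.5, 0.3, 0.2], P, n_steps=5))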

    Beyond the Spectral Theorem: Spectrally Decomposing Arbitrary Functions of Nondiagonalizable Operators

    Nonlinearities in finite dimensions can be linearized by projecting them into infinite dimensions. Unfortunately, the linear operator techniques that one would then use often simply fail, since the operators cannot be diagonalized. This curse is well known. It also occurs for finite-dimensional linear operators. We circumvent it by developing a meromorphic functional calculus that can decompose arbitrary functions of nondiagonalizable linear operators in terms of their eigenvalues and projection operators. It extends the spectral theorem for normal operators to a much wider class, including circumstances in which poles and zeros of the function coincide with the operator spectrum. By allowing the direct manipulation of individual eigenspaces of nonnormal and nondiagonalizable operators, the new theory avoids spurious divergences. As such, it yields novel insights and closed-form expressions across several areas of physics in which nondiagonalizable dynamics are relevant, including memoryful stochastic processes, open nonunitary quantum systems, and far-from-equilibrium thermodynamics. The technical contributions include the first full treatment of arbitrary powers of an operator. In particular, we show that the Drazin inverse, previously only defined axiomatically, can be derived as the negative-one power of singular operators within the meromorphic functional calculus, and we give a general method to construct it. We provide new formulae for constructing projection operators and delineate the relations between projection operators, eigenvectors, and generalized eigenvectors. By way of illustrating its application, we explore several rather distinct examples.
    Comment: 29 pages, 4 figures, expanded historical citations; http://csc.ucdavis.edu/~cmg/compmech/pubs/bst.ht
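
    The sketch below (Python with NumPy and SciPy) shows the flavor of such a decomposition on the smallest nondiagonalizable example, a single 2x2 Jordan block; it is an illustration of the general idea, not the paper's construction. For a Jordan block A with eigenvalue lam, the projection onto the sole generalized eigenspace is the identity, N = A - lam*I is nilpotent with N^2 = 0, and f(A) = f(lam)*P + f'(lam)*N, which the code checks against the matrix exponential.

    # Minimal sketch: evaluating a function of a nondiagonalizable operator from
    # its eigenvalue, projection operator, and nilpotent part.
    import numpy as np
    from scipy.linalg import expm

    lam = 0.5
    A = np.array([[lam, 1.0],
                  [0.0, lam]])          # Jordan block: nondiagonalizable

    P = np.eye(2)                       # projection onto the sole generalized eigenspace
    N = A - lam * P                     # nilpotent part, N @ N == 0

    # f(A) = f(lam)*P + f'(lam)*N with f = exp (so f' = exp as well).
    f_A_calculus = np.exp(lam) * P + np.exp(lam) * N

    print(np.allclose(f_A_calculus, expm(A)))   # True: matches the matrix exponential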

    On human motion prediction using recurrent neural networks

    Human motion modelling is a classical problem at the intersection of graphics and computer vision, with applications spanning human-computer interaction, motion synthesis, and motion prediction for virtual and augmented reality. Following the success of deep learning methods in several computer vision tasks, recent work has focused on using deep recurrent neural networks (RNNs) to model human motion, with the goal of learning time-dependent representations that perform tasks such as short-term motion prediction and long-term human motion synthesis. We examine recent work, with a focus on the evaluation methodologies commonly used in the literature, and show that, surprisingly, state-of-the-art performance can be achieved by a simple baseline that does not attempt to model motion at all. We investigate this result, and analyze recent RNN methods by looking at the architectures, loss functions, and training procedures used in state-of-the-art approaches. We propose three changes to the standard RNN models typically used for human motion, which result in a simple and scalable RNN architecture that obtains state-of-the-art performance on human motion prediction.
    Comment: Accepted at CVPR 1
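
    A sketch of the kind of trivial baseline the abstract alludes to is given below (Python): a zero-motion predictor that simply repeats the last observed pose for every future frame. The array shapes, the joint-angle representation, the synthetic data, and the error metric are illustrative assumptions, not the paper's benchmark protocol.

    # Minimal sketch: a baseline that does not model motion at all.
    import numpy as np

    def zero_velocity_baseline(observed_poses, horizon):
        """observed_poses: (T, D) array of past poses (e.g., joint angles).
        Returns a (horizon, D) prediction that repeats the last observed frame."""
        last_pose = observed_poses[-1]
        return np.tile(last_pose, (horizon, 1))

    def mean_pose_error(predicted, ground_truth):
        """Per-frame Euclidean error, a crude stand-in for the angle metrics used in the literature."""
        return np.linalg.norm(predicted - ground_truth, axis=1).mean()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        history = np.cumsum(0.01 * rng.standard_normal((50, 54)), axis=0)              # slow synthetic motion
        future = history[-1] + np.cumsum(0.01 * rng.standard_normal((25, 54)), axis=0)
        pred = zero_velocity_baseline(history, horizon=25)
        print(f"baseline error: {mean_pose_error(pred, future):.4f}")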

    Efficient semiparametric estimation and model selection for multidimensional mixtures

    In this paper, we consider nonparametric multidimensional finite mixture models and we are interested in the semiparametric estimation of the population weights. Here, the i.i.d. observations are assumed to have at least three coordinates that are independent given the population. We approximate the semiparametric model by projecting the conditional distributions on step functions associated with some partition. Our first main result is that if we refine the partition slowly enough, the associated sequence of maximum likelihood estimators of the weights is asymptotically efficient, and the posterior distribution of the weights, when using a Bayesian procedure, satisfies a semiparametric Bernstein-von Mises theorem. We then propose a cross-validation-like procedure to select the partition in a finite horizon. Our second main result is that the proposed procedure satisfies an oracle inequality. Numerical experiments on simulated data illustrate our theoretical results.
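
    The sketch below (Python) illustrates the binning idea on a fixed partition, with assumed synthetic data: each of three conditionally independent coordinates is discretized into equal-width bins, and the population weights of a two-population mixture are estimated by EM on the resulting finite multinomial model, as a rough stand-in for the maximum likelihood step. The partition refinement, the Bayesian analysis, and the cross-validation selection from the paper are not shown.

    # Minimal sketch (illustrative, not the paper's estimator): binned EM for the
    # weights of a 2-population mixture with 3 conditionally independent coordinates.
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic data: true weights (0.3, 0.7); coordinates independent given the population.
    n, true_w = 5000, np.array([0.3, 0.7])
    labels = rng.choice(2, size=n, p=true_w)
    x = np.where(labels[:, None] == 0,
                 rng.normal(0.0, 1.0, (n, 3)),
                 rng.normal(1.5, 1.0, (n, 3)))

    # Fixed partition: equal-width bins per coordinate over the observed range.
    n_bins = 8
    edges = [np.linspace(x[:, d].min(), x[:, d].max(), n_bins + 1) for d in range(3)]
    binned = np.stack([np.clip(np.digitize(x[:, d], edges[d][1:-1]), 0, n_bins - 1)
                       for d in range(3)], axis=1)

    # EM on the binned model: weights w[k] and per-coordinate bin probabilities q[k, d, b].
    w = np.array([0.5, 0.5])
    q = rng.dirichlet(np.ones(n_bins), size=(2, 3))
    for _ in range(200):
        # E-step: responsibilities from the product of per-coordinate bin probabilities.
        log_lik = np.log(w) + sum(np.log(q[:, d, binned[:, d]]).T for d in range(3))
        log_lik -= log_lik.max(axis=1, keepdims=True)
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights and binned conditional distributions.
        w = resp.mean(axis=0)
        for k in range(2):
            for d in range(3):
                counts = np.bincount(binned[:, d], weights=resp[:, k], minlength=n_bins) + 1e-12
                q[k, d] = counts / counts.sum()

    # Estimated weights are recovered up to relabeling of the populations.
    print("estimated weights:", np.round(w, 3), " true weights:", true_w)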