
    Improved Compressive Sensing Of Natural Scenes Using Localized Random Sampling

    Compressive sensing (CS) theory demonstrates that using uniformly-random sampling, rather than uniformly-spaced sampling, often yields higher quality image reconstructions. Given that the structure of a sampling protocol has such a profound impact on reconstruction quality, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, localized random sampling consistently yields higher quality image reconstructions. In addition, we argue that the optimal parameter choice for localized random CS is stable across diverse natural images and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging.
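
    To make the sampling scheme concrete, here is a minimal sketch of one localized measurement, assuming a Gaussian falloff for the distance-dependent inclusion probability; the abstract does not fix the kernel, so its shape and the `sigma` parameter are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def localized_random_measurement(image_shape, sigma=3.0, rng=None):
    """One localized measurement row: pick a random center pixel, then
    include each nearby pixel with probability decaying in its distance
    from the center (Gaussian falloff assumed here for illustration)."""
    rng = rng or np.random.default_rng()
    h, w = image_shape
    cy, cx = rng.integers(h), rng.integers(w)
    ys, xs = np.mgrid[0:h, 0:w]
    dist2 = (ys - cy) ** 2 + (xs - cx) ** 2
    prob = np.exp(-dist2 / (2 * sigma ** 2))   # inclusion probability
    mask = rng.random((h, w)) < prob           # sample nearby pixels
    return mask.astype(float).ravel()          # one row of the sensing matrix

# Stack rows to form the sensing matrix A, then reconstruct with any
# standard CS solver (e.g., l1 minimization on y = A @ x).
A = np.stack([localized_random_measurement((32, 32)) for _ in range(200)])
```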

    Efficient Image Processing Via Compressive Sensing Of Integrate-And-Fire Neuronal Network Dynamics

    Integrate-and-fire (I&F) neuronal networks are ubiquitous in diverse image processing applications, including image segmentation and visual perception. While conventional I&F network image processing requires the number of nodes composing the network to equal the number of image pixels driving the network, we determine whether I&F dynamics can accurately transmit image information when there are significantly fewer nodes than network input-signal components. Although compressive sensing (CS) theory facilitates the recovery of images from very few samples through linear signal processing, it does not address whether similar signal recovery techniques facilitate reconstructions from measurements of the nonlinear dynamics of an I&F network. In this paper, we present a new framework for recovering the sparse inputs of nonlinear neuronal networks via compressive sensing. By recovering both one-dimensional inputs and two-dimensional images resembling natural stimuli, we demonstrate that input information can be well preserved through nonlinear I&F network dynamics even when the number of network-output measurements is significantly smaller than the number of input-signal components. This work suggests an important extension of CS theory potentially useful for improving the processing of medical or natural images through I&F network dynamics and for understanding the transmission of stimulus information across the visual system.
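
    The linear recovery step that CS supplies can be sketched as basis pursuit. The toy example below, with hypothetical dimensions and a random measurement matrix standing in for the I&F network's input-output map, recovers a sparse input by l1 minimization; the nonlinear network dynamics themselves are not simulated here:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Recover a sparse x from y = A @ x by l1 minimization, posed as a
    linear program over the split x = xp - xn with xp, xn >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                 # minimize sum(xp) + sum(xn) = ||x||_1
    A_eq = np.hstack([A, -A])          # enforce A @ (xp - xn) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

# Toy demo: 40 output measurements of a 100-dimensional input with 5
# nonzero components, mixed through random weights (all sizes illustrative).
rng = np.random.default_rng(0)
n, m, k = 100, 40, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.random(k) + 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = basis_pursuit(A, A @ x)
print("max recovery error:", np.max(np.abs(x - x_hat)))
```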

    Non-Vacuous Generalization Bounds at the ImageNet Scale: A PAC-Bayesian Compression Approach

    Modern neural networks are highly overparameterized, with the capacity to substantially overfit the training data. Nevertheless, these networks often generalize well in practice. It has also been observed that trained networks can often be "compressed" to much smaller representations. The purpose of this paper is to connect these two empirical observations. Our main technical result is a generalization bound for compressed networks based on the compressed size. Combined with off-the-shelf compression algorithms, the bound leads to state-of-the-art generalization guarantees; in particular, we provide the first non-vacuous generalization guarantees for realistic architectures applied to the ImageNet classification problem. As additional evidence connecting compression and generalization, we show that the compressibility of models that tend to overfit is limited: we establish an absolute limit on expected compressibility as a function of expected generalization error, where the expectations are over the random choice of training examples. The bounds are complemented by empirical results showing that an increase in overfitting implies an increase in the number of bits required to describe a trained network. (Comment: 16 pages, 1 figure. Accepted at ICLR 2019.)
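
    The flavor of the compression-to-generalization link can be illustrated with the classic Occam bound, which is simpler and looser than the paper's PAC-Bayesian result; all numbers below are illustrative, not the paper's:

```python
import math

def occam_bound(train_error, code_bits, m, delta=0.05):
    """Classic Occam-style bound: for a hypothesis that a prefix-free code
    describes in `code_bits` bits, with probability >= 1 - delta over m
    i.i.d. training examples,
        test_error <= train_error
                      + sqrt((code_bits*ln2 + ln(1/delta)) / (2m)).
    This only illustrates why smaller compressed size tightens the
    guarantee; the paper's PAC-Bayesian bound is tighter and handles
    stochastic compression."""
    slack = math.sqrt((code_bits * math.log(2) + math.log(1 / delta))
                      / (2 * m))
    return train_error + slack

# E.g., a network compressed to 100 KB, evaluated on ~1.2M examples
# (illustrative figures): the 0-1 error bound stays below 1, non-vacuous.
print(occam_bound(train_error=0.1, code_bits=100e3 * 8, m=1.2e6))
```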

    Meta-Analysis of Life Cycle Assessment Studies on Solar Photovoltaic Systems

    Greenhouse gas emissions are an increasingly severe problem, and world energy demand grows substantially every year. Countries worldwide are turning to renewable energy as the solution to future energy demand. Although solar energy holds only about a 1% share of the renewable energy market, it has grown rapidly in recent years, since the energy arriving from the sun is tens of times greater than the energy available from fossil fuels. There are three main photovoltaic technologies: crystalline silicon solar cells, thin-film solar cells, and polymer solar cells. Crystalline silicon is the first-generation technology and makes up about 90% of the solar energy market; thin-film is the second-generation technology and makes up about 10%. The goals of this thesis are 1) to evaluate the efficiency of each solar technology; 2) to compare the cumulative energy demand (CED) of the solar modules of each technology; 3) to compare the energy return on investment (EROI) of each technology; 4) to determine the energy demand of the balance of system for all technologies; and 5) to show how the different generations of solar technology have evolved over time through the relation between efficiency, cumulative energy demand, and energy return on investment. To accomplish these goals, the thesis uses a meta-analysis method: we collect all studies on solar energy that pass the criteria we set, then evaluate CED and EROI by applying our own harmonization method to each data set.
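
    For reference, EROI in such studies is the lifetime electricity a system delivers divided by its cumulative energy demand. A minimal sketch of the calculation, with every input value hypothetical rather than drawn from the thesis:

```python
def eroi(rated_power_kw, yield_kwh_per_kw_yr, performance_ratio,
         lifetime_yr, ced_kwh):
    """Energy return on investment: lifetime electricity delivered divided
    by the cumulative energy demand (CED) of producing and installing the
    system. All example numbers below are illustrative assumptions."""
    lifetime_output = (rated_power_kw * yield_kwh_per_kw_yr
                       * performance_ratio * lifetime_yr)
    return lifetime_output / ced_kwh

# A hypothetical 1 kW crystalline-silicon system: 1700 kWh/kW-yr solar
# yield, 0.75 performance ratio, 30-year life, 6000 kWh CED.
print(eroi(1.0, 1700, 0.75, 30, 6000))   # ~6.4
```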

    Compressive Sensing Inference Of Neuronal Network Connectivity In Balanced Neuronal Dynamics

    Determining the structure of a network is of central importance to understanding its function in both neuroscience and applied mathematics. However, recovering the structural connectivity of neuronal networks remains a fundamental challenge both theoretically and experimentally. While neuronal networks operate in certain dynamical regimes, which may influence their connectivity reconstruction, there is widespread experimental evidence of a balanced neuronal operating state in which strong excitatory and inhibitory inputs are dynamically adjusted such that neuronal voltages primarily remain near resting potential. Utilizing the dynamics of model neurons in such a balanced regime in conjunction with the ubiquitous sparse connectivity structure of neuronal networks, we develop a compressive sensing theoretical framework for efficiently reconstructing network connections by measuring individual neuronal activity in response to a relatively small ensemble of random stimuli injected over a short time scale. By tuning the network dynamical regime, we determine that the highest fidelity reconstructions are achievable in the balanced state. We hypothesize that the balanced dynamics observed in vivo may therefore be a result of evolutionary selection for optimal information encoding, and we expect the methodology developed here to generalize to alternative model networks as well as experimental paradigms.
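
    The recovery step can be caricatured as sparse regression: assuming, purely for illustration and not as the paper's model, that a neuron's response in the balanced regime is approximately linear in the other neurons' activity, its sparse row of incoming weights can be estimated by l1-regularized regression from fewer stimuli than neurons:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy recovery of one neuron's incoming weights w (sparse) from responses
# r = activity @ w + noise across random stimuli. All dimensions, the
# linear response model, and the regularization strength are assumptions.
rng = np.random.default_rng(1)
n_neurons, n_stimuli, k = 80, 60, 6
w_true = np.zeros(n_neurons)
w_true[rng.choice(n_neurons, k, replace=False)] = rng.random(k) + 0.5
activity = rng.standard_normal((n_stimuli, n_neurons))  # per-stimulus rates
responses = activity @ w_true + 0.01 * rng.standard_normal(n_stimuli)

w_hat = Lasso(alpha=0.01, max_iter=10000).fit(activity, responses).coef_
print("largest weight error:", np.max(np.abs(w_hat - w_true)))
```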

    Extending Transactional Memory with Atomic Deferral

    This paper introduces atomic deferral, an extension to TM that allows programmers to move long-running or irrevocable operations out of a transaction while maintaining serializability: the transaction and its deferred operation appear to execute atomically from the perspective of other transactions. Thus, programmers can adapt lock-based programs to exploit TM with relatively little effort and without sacrificing scalability by atomically deferring the problematic operations. We demonstrate this with several use cases for atomic deferral, as well as an in-depth analysis of its use on the PARSEC dedup benchmark, where we show that atomic deferral enables TM to be competitive with well-designed lock-based code.
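
    A minimal lock-based caricature of the semantics, in which the helper name and locking scheme are assumptions for illustration; the paper's mechanism integrates with a real TM runtime rather than coarse locks:

```python
import threading

# Caricature of atomic deferral: the transaction body runs inside the TM
# (modeled here by one short critical section), while the long-running or
# irrevocable operation is deferred past commit. A per-resource lock is
# taken before commit and released only after the deferred operation
# finishes, so the pair appears atomic to any transaction touching that
# resource, while unrelated transactions proceed.
tm_lock = threading.Lock()          # stand-in for the TM commit protocol

def atomic_with_deferral(body, deferred_op, resource_lock):
    with tm_lock:                   # "transaction": short, in-memory work
        body()
        resource_lock.acquire()     # pin the affected resource pre-commit
    try:
        deferred_op()               # long-running op, outside the TM
    finally:
        resource_lock.release()     # other users of the resource resume
```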