
    Statistical physics-based reconstruction in compressed sensing

    Compressed sensing is triggering a major evolution in signal acquisition. It consists of sampling a sparse signal at a low rate and later using computational power for its exact reconstruction, so that only the necessary information is measured. Currently used reconstruction techniques are, however, limited to acquisition rates larger than the true density of the signal. We design a new procedure which is able to reconstruct the signal exactly with a number of measurements that approaches the theoretical limit in the limit of large systems. It is based on the joint use of three essential ingredients: a probabilistic approach to signal reconstruction, a message-passing algorithm adapted from belief propagation, and a careful design of the measurement matrix inspired by the theory of crystal nucleation. The performance of this new algorithm is analyzed by statistical physics methods. The obtained improvement is confirmed by numerical studies of several cases. (20 pages, 8 figures, 3 tables. Related codes and data are available at http://aspics.krzakala.or)
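    As a point of reference for the acquisition rates discussed above, the standard convex baseline that such message-passing schemes improve on is L1 basis pursuit, which already recovers a sparse signal exactly from relatively few random measurements. The sketch below is illustrative only: the dimensions, the plain Gaussian measurement matrix, and the use of SciPy's linear-programming solver are assumptions, not the paper's seeded construction or algorithm.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Illustrative sizes (assumed): signal length N, measurements P, non-zeros K
N, P, K = 100, 50, 5
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

A = rng.standard_normal((P, N)) / np.sqrt(P)  # plain Gaussian matrix, not seeded
y = A @ x_true

# Basis pursuit: min ||x||_1 s.t. A x = y, as an LP via the split x = u - v
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]

print("max reconstruction error:", np.max(np.abs(x_hat - x_true)))
```

    With P well above the L1 phase-transition line the recovery is exact up to solver tolerance; the point of the paper's procedure is to push the required number of measurements down toward the true density of the signal.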

    Theoretical proposal for a biosensing approach based on a linear array of immobilized gold nanoparticles

    We propose a sensing mechanism for the detection of analytes that can be specifically recognized. The sensor is based on closely-spaced chains of functionalized gold nanoparticles (NPs) immobilized on a waveguide surface, with the signal detected by evanescent waveguide absorption spectroscopy. The localized surface plasmon spectrum of a linear array of closely-spaced, hemispherical gold NPs is calculated using the discrete dipole approximation. The plasmon band is found to broaden to a nanowire-like spectrum when a dielectric coating is put on the particles and the light polarization is along the NP chain. The origin of this broadening is shown to be the polarization-dependent overlap of the evanescent fields of adjacent NPs upon application of the dielectric coating. These features suggest a mechanism for biosensing with an improved sensitivity compared with traditional NP biosensor methods.

    Control of surface plasmon resonances in dielectrically coated proximate gold nanoparticles immobilized on a substrate

    We present experimental and theoretical results for the changes in the optical-plasmon resonance of gold-nanoparticle dimers immobilized on a surface when coated with an organic dielectric material. The plasmon band of a nanoparticle dimer shifts to a longer wavelength when the distance between neighboring particles is decreased, and a well-separated second peak appears. This phenomenon is called cross-talk. We find that an organic coating lets cross-talk start at larger separation distances than for uncoated dimers by bridging the gap between immobilized nanoparticles (creating optical clusters). We study this optical clustering effect as a function of the polarization of the applied light, of the inter-particle distance, of the surrounding environment, and of the optical properties of the coating layer. Theoretical discrete-dipole approximation calculations support the experimental absorption spectroscopy results for gold nanoparticles on glass substrates and on optical waveguides.
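    The red-shift of the dimer plasmon with decreasing gap can be illustrated with a toy coupled-dipole model. This is only a sketch in arbitrary units, not the discrete-dipole approximation used in the paper: the Lorentzian polarizability, resonance energy, damping, and the 2/d^3 longitudinal coupling term are all assumed model ingredients.

```python
import numpy as np

# Toy model: two identical point dipoles with a Lorentzian polarizability,
# coupled along the inter-particle axis (arbitrary units, assumed parameters).
w0, gamma, a0 = 2.3, 0.1, 1.0     # resonance, damping, oscillator strength
w = np.linspace(1.0, 3.0, 2000)   # probe frequency grid

def alpha(w):
    # single-particle Lorentzian polarizability
    return a0 / (w0**2 - w**2 - 1j * gamma * w)

def pair_extinction(w, d):
    # bright longitudinal mode of the pair; g ~ 2/d^3 is the near-field coupling
    g = 2.0 / d**3
    a_eff = alpha(w) / (1.0 - g * alpha(w))
    return w * np.imag(a_eff)     # extinction ~ frequency * Im(polarizability)

for d in (4.0, 2.0, 1.5):
    peak = w[np.argmax(pair_extinction(w, d))]
    print(f"gap d = {d}: peak at {peak:.3f}")
```

    The peak moves to lower frequency (longer wavelength) as d shrinks, which is the qualitative fingerprint of cross-talk; in the experiment, the dielectric coating effectively strengthens the coupling at a given gap, letting the effect set in at larger separations.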

    A typical reconstruction limit of compressed sensing based on Lp-norm minimization

    We consider the problem of reconstructing an $N$-dimensional continuous vector $\mathbf{x}$ from $P$ constraints which are generated by its linear transformation, under the assumption that the number of non-zero elements of $\mathbf{x}$ is typically limited to $\rho N$ ($0 \le \rho \le 1$). Problems of this type can be solved by minimizing a cost function with respect to the $L_p$-norm $||\mathbf{x}||_p = \lim_{\epsilon \to +0} \sum_{i=1}^N |x_i|^{p+\epsilon}$, subject to the constraints, under an appropriate condition. For several $p$, we assess a typical-case limit $\alpha_c(\rho)$, which represents a critical relation between $\alpha = P/N$ and $\rho$ for successfully reconstructing the original vector by minimization in typical situations in the limit $N, P \to \infty$ while keeping $\alpha$ finite, utilizing the replica method. For $p = 1$, $\alpha_c(\rho)$ is considerably smaller than its worst-case counterpart, which has been rigorously derived in the existing information-theory literature. (12 pages, 2 figures)
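    The $L_p$ cost in the definition above can be checked numerically: as $p \to 0$ it counts the non-zero entries (the $L_0$ count), while $p = 1$ gives the usual $L_1$ norm. A minimal sketch, using a small finite eps in place of the $\epsilon \to +0$ limit:

```python
import numpy as np

def lp_cost(x, p, eps=1e-12):
    # sum_i |x_i|^(p + eps); eps stands in for the epsilon -> +0 limit
    return np.sum(np.abs(x) ** (p + eps))

x = np.array([0.0, 3.0, 0.0, -4.0, 0.0])

print(lp_cost(x, 0))  # ~2: the number of non-zero entries (L0 count)
print(lp_cost(x, 1))  # ~7: the L1 norm, 3 + 4
print(lp_cost(x, 2))  # ~25: the squared L2 norm, 9 + 16
```

    Minimizing the p = 0 cost directly is combinatorial; the paper's point is to quantify, via the replica method, how many measurements per unknown the tractable p = 1 relaxation typically needs.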

    Probabilistic Reconstruction in Compressed Sensing: Algorithms, Phase Diagrams, and Threshold Achieving Matrices

    Compressed sensing is a signal processing method that acquires data directly in a compressed form. This allows one to make fewer measurements than what was considered necessary to record a signal, enabling faster or more precise measurement protocols in a wide range of applications. Using an interdisciplinary approach, we have recently proposed in [arXiv:1109.4424] a strategy that allows compressed sensing to be performed at acquisition rates approaching the theoretical optimal limits. In this paper, we give a more thorough presentation of our approach and introduce many new results. We present the probabilistic approach to reconstruction and discuss its optimality and robustness. We detail the derivation of the message-passing algorithm for reconstruction and the expectation-maximization learning of signal-model parameters. We further develop the asymptotic analysis of the corresponding phase diagrams with and without measurement noise, for different distributions of signals, and discuss the best possible reconstruction performance regardless of the algorithm. We also present new efficient seeding matrices, test them on synthetic data, and analyze their performance asymptotically. (42 pages, 37 figures, 3 appendices)
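    The message-passing reconstruction described above (approximate message passing with expectation-maximization learning) has a much simpler relative, iterative soft-thresholding (ISTA), which shares the same iterate-then-threshold structure. The sketch below uses ISTA as a stand-in only; the problem sizes, regularization weight, and step size are illustrative assumptions, not the paper's algorithm or parameters.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=1e-3, iters=5000):
    # ISTA for min_x 0.5 ||y - A x||^2 + lam ||x||_1
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

rng = np.random.default_rng(1)
N, P, K = 200, 100, 10                       # illustrative sizes (assumed)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
A = rng.standard_normal((P, N)) / np.sqrt(P)
y = A @ x_true

x_hat = ista(A, y)
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```

    AMP adds an Onsager correction term to the residual update, which is what allows its behavior to be tracked exactly by the state-evolution/phase-diagram analysis developed in the paper; ISTA lacks this term but conveys the basic structure.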

    The Progression in Developing Genomic Resources for Crop Improvement

    Sequencing technologies have rapidly evolved over the past two decades, and new technologies are continually being developed and commercialized. Emerging sequencing technologies aim to generate more data with fewer inputs and at lower cost. This has also translated into an increase in the number and type of corresponding applications in genomics, alongside enhanced computational capacities (both hardware and software). As the DNA sequencing landscape has evolved, bioinformatics research teams have also evolved to accommodate the increasingly demanding techniques used to combine and interpret data, leading many researchers to move from the lab to the computer. The rich history of DNA sequencing has paved the way for new insights and the development of new analysis methods. Understanding and learning from past technologies can help with the progress of future applications. This review focuses on the evolution of sequencing technologies, their significant enabling role in generating plant genome assemblies and downstream applications, and the parallel development of bioinformatics tools and skills, filling the gap in data analysis techniques.

    Multiscale Computations on Neural Networks: From the Individual Neuron Interactions to the Macroscopic-Level Analysis

    We show how the Equation-Free approach for multi-scale computations can be exploited to systematically study the dynamics of neural interactions on a random regular connected graph under a pairwise representation perspective. Using an individual-based microscopic simulator as a black-box coarse-grained timestepper, and with the aid of simulated annealing, we compute the coarse-grained equilibrium bifurcation diagram and analyze the stability of the stationary states, sidestepping the necessity of obtaining explicit closures at the macroscopic level. We also exploit the scheme to perform a rare-events analysis by estimating an effective Fokker-Planck equation describing the evolving probability density function of the corresponding coarse-grained observables.

    Optimal control theory for unitary transformations

    The dynamics of a quantum system driven by an external field are well described by a unitary transformation generated by a time-dependent Hamiltonian. The inverse problem of finding the field that generates a specific unitary transformation is the subject of this study. The unitary transformation, which can represent an algorithm in a quantum computation, is imposed on a subset of quantum states embedded in a larger Hilbert space. Optimal control theory (OCT) is used to solve the inversion problem irrespective of the initial input state. A unified formalism, based on the Krotov method, is developed, leading to a new scheme. The schemes are compared for the inversion of a two-qubit Fourier transform using as registers the vibrational levels of the $X^1\Sigma^+_g$ electronic state of Na$_2$. Raman-like transitions through the $A^1\Sigma^+_u$ electronic state induce the transitions. Light fields are found that are able to implement the Fourier transform within a picosecond time scale. Such fields can be obtained by pulse-shaping techniques applied to a femtosecond pulse. Of the schemes studied, the square-modulus scheme converges fastest. A study of the implementation of the $Q$-qubit Fourier transform in the Na$_2$ molecule was carried out for up to 5 qubits. The classical computation effort required to obtain the algorithm with a given fidelity is estimated to scale exponentially with the number of levels. The observed moderate scaling of the pulse intensity with the number of qubits in the transformation is rationalized. (32 pages, 6 figures)

    An investigation of the mechanisms for strength gain or loss of geopolymer mortar after exposure to elevated temperature

    When fly ash-based geopolymer mortars were exposed to a temperature of 800 °C, it was found that the strength after the exposure sometimes decreased, but at other times increased. This paper shows that the ductility of the mortars correlates strongly with this strength gain/loss behaviour. Specimens prepared with two different fly ashes, with strengths ranging from 5 to 60 MPa, were investigated. Results indicate that the strength losses decrease with increasing ductility, with even strength gains at high levels of ductility. This correlation is attributed to the fact that mortars with high ductility have a high capacity to accommodate thermal incompatibilities. It is believed that two opposing processes occur in the mortars: (1) further geopolymerisation and/or sintering at elevated temperatures, leading to strength gain; and (2) damage to the mortar because of thermal incompatibility arising from non-uniform temperature distribution. Strength gain or loss occurs depending on which process dominates.