59,322 research outputs found

    PEAR: PEriodic And fixed Rank separation for fast fMRI

    In functional MRI (fMRI), faster acquisition via undersampling of data can improve the spatial-temporal resolution trade-off and increase statistical robustness through increased degrees of freedom. High-quality reconstruction of fMRI data from undersampled measurements requires proper modeling of the data. We present an fMRI reconstruction approach based on modeling the fMRI signal as a sum of periodic and fixed-rank components, for improved reconstruction from undersampled measurements. We decompose the fMRI signal into a component which has a fixed rank and a component consisting of a sum of periodic signals which is sparse in the temporal Fourier domain. Data reconstruction is performed by solving a constrained problem that enforces a fixed, moderate rank on one of the components, and a limited number of temporal frequencies on the other. Our approach is coined PEAR - PEriodic And fixed Rank separation for fast fMRI. Experimental results include a purely synthetic simulation, a simulation with real timecourses, and retrospective undersampling of a real fMRI dataset. Evaluation was performed both quantitatively and visually against ground truth, comparing PEAR to two additional recent methods for fMRI reconstruction from undersampled measurements. Results demonstrate PEAR's improvement in estimating the timecourses and activation maps over the compared methods at acceleration ratios of R=8, 16 (for simulated data) and R=6.66, 10 (for real data). PEAR results in reconstruction with higher fidelity than a fixed-rank-based model or a conventional Low-rank+Sparse algorithm. We have shown that splitting the functional information between the components leads to better modeling of fMRI than state-of-the-art methods.
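    To make the model concrete, here is a minimal sketch of the separation step the abstract describes, assuming a fully sampled voxels-by-frames matrix for simplicity; it is not the authors' implementation (the paper's reconstruction additionally handles the undersampling operator), and the names pear_separation, rank and n_freqs are illustrative. The sketch alternates a rank-truncated SVD (the fixed-rank component) with hard thresholding in the temporal Fourier domain (the periodic component).

    ```python
    import numpy as np

    def pear_separation(X, rank, n_freqs, n_iters=50):
        """Split a voxels-by-frames matrix X into L (fixed rank) plus
        P (a sum of periodic signals, sparse in the temporal Fourier
        domain) by simple alternating projections."""
        P = np.zeros_like(X)
        for _ in range(n_iters):
            # Fixed-rank component: rank-truncated SVD of the residual.
            U, s, Vt = np.linalg.svd(X - P, full_matrices=False)
            L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            # Periodic component: keep only the n_freqs strongest
            # temporal frequencies of the remaining residual.
            F = np.fft.fft(X - L, axis=1)
            power = np.abs(F).sum(axis=0)  # total power per frequency
            mask = np.zeros(X.shape[1], dtype=bool)
            mask[np.argsort(power)[-n_freqs:]] = True
            P = np.real(np.fft.ifft(F * mask, axis=1))
        return L, P
    ```

    In the full undersampled problem, a data-consistency step on the measured k-t samples would typically be interleaved with these two projections.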

    Network Geometry Inference using Common Neighbors

    We introduce and explore a new method for inferring hidden geometric coordinates of nodes in complex networks based on the number of common neighbors between the nodes. We compare this approach to the HyperMap method, which is based only on the connections (and disconnections) between the nodes, i.e., on the links that the nodes have (or do not have). We find that for high-degree nodes the common-neighbors approach yields a more accurate inference than the link-based method, unless heuristic periodic adjustments (or "correction steps") are used in the latter. The common-neighbors approach is computationally intensive, requiring $O(t^4)$ running time to map a network of $t$ nodes, versus $O(t^3)$ in the link-based method. But we also develop a hybrid method with $O(t^3)$ running time, which combines the common-neighbors and link-based approaches, and explore a heuristic that reduces its running time further to $O(t^2)$, without significant reduction in the mapping accuracy. We apply this method to the Autonomous Systems (AS) Internet, and reveal how soft communities of ASes evolve over time in the similarity space. We further demonstrate the method's predictive power by forecasting future links between ASes. Taken altogether, our results advance our understanding of how to efficiently and accurately map real networks to their latent geometric spaces, which is an important necessary step towards understanding the laws that govern the dynamics of nodes in these spaces, and the fine-grained dynamics of network connections.
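    As a toy illustration of the quantity this method is built on (not the paper's maximum-likelihood embedding), the common-neighbor count of every node pair can be read off the square of the adjacency matrix; in the latent-geometry picture, a larger count indicates a smaller angular (similarity) distance. The name common_neighbor_counts is illustrative.

    ```python
    import numpy as np

    def common_neighbor_counts(A):
        """Common-neighbor count for every node pair, given a symmetric
        0/1 adjacency matrix A: (A @ A)[i, j] counts length-2 paths,
        i.e. neighbors shared by i and j."""
        CN = A @ A
        np.fill_diagonal(CN, 0)  # diagonal holds degrees, not pair counts
        return CN

    # Toy usage: the pair sharing the most neighbors is inferred to be
    # closest in the angular (similarity) dimension.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 0],
                  [0, 1, 0, 0]])
    CN = common_neighbor_counts(A)
    i, j = np.unravel_index(np.argmax(CN), CN.shape)
    print(f"most similar pair: ({i}, {j}), {CN[i, j]} common neighbors")
    ```

    Ranking pairs by this count is only the raw signal; turning it into actual coordinates is where the paper's $O(t^4)$-to-$O(t^2)$ machinery comes in.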

    Drift Correction Methods for Gas Chemical Sensors in Artificial Olfaction Systems: Techniques and Challenges

    In this chapter the authors introduce the main challenges faced when developing drift correction techniques and give an in-depth overview of the state-of-the-art methodologies proposed in the scientific literature, underlining the pros and cons of these techniques and focusing on the challenges that remain open.

    Scalable Neural Network Decoders for Higher Dimensional Quantum Codes

    Machine learning has the potential to become an important tool in quantum error correction, as it allows the decoder to adapt to the error distribution of a quantum chip. An additional motivation for using neural networks is the fact that they can be evaluated by dedicated hardware which is very fast and consumes little power. Machine learning has been previously applied to decode the surface code. However, these approaches are not scalable, as the training has to be redone for every system size, which becomes increasingly difficult. In this work, the existence of local decoders for higher dimensional codes leads us to use a low-depth convolutional neural network to locally assign a likelihood of error on each qubit. For noiseless syndrome measurements, numerical simulations show that the decoder has a threshold of around 7.1% when applied to the 4D toric code. When the syndrome measurements are noisy, the decoder performs better for larger code sizes when the error probability is low. We also give theoretical and numerical analysis to show how a convolutional neural network differs from the 1-nearest-neighbor algorithm, which is a baseline machine learning method.
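    A minimal sketch of the kind of local decoder described above, assuming PyTorch; LocalDecoder, hidden and depth are illustrative names, and the lattice is 2D for brevity (the paper's 4D toric code would need 4D convolutions, which PyTorch does not provide out of the box). Because the network is fully convolutional, the same trained weights can be applied to any lattice size, which is the property that makes such decoders scalable.

    ```python
    import torch
    import torch.nn as nn

    class LocalDecoder(nn.Module):
        """Low-depth, fully convolutional net: syndrome lattice in,
        per-site error likelihood out (2D stand-in for the 4D case)."""
        def __init__(self, hidden=32, depth=3):
            super().__init__()
            layers, c_in = [], 1
            for _ in range(depth):
                # Circular padding matches the torus' periodic boundary.
                layers += [nn.Conv2d(c_in, hidden, kernel_size=3, padding=1,
                                     padding_mode="circular"),
                           nn.ReLU()]
                c_in = hidden
            layers.append(nn.Conv2d(c_in, 1, kernel_size=1))  # one logit per site
            self.net = nn.Sequential(*layers)

        def forward(self, syndrome):                  # (batch, 1, L, L) in {0, 1}
            return torch.sigmoid(self.net(syndrome))  # error probability per site

    decoder = LocalDecoder()
    syndrome = torch.randint(0, 2, (4, 1, 8, 8)).float()
    print(decoder(syndrome).shape)  # torch.Size([4, 1, 8, 8])
    ```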