95,150 research outputs found

    A survey of exemplar-based texture synthesis

    Full text link
    Exemplar-based texture synthesis is the process of generating, from an input sample, new texture images of arbitrary size that are perceptually equivalent to the sample. The two main approaches are statistics-based methods and patch re-arrangement methods. In the first class, a texture is characterized by a statistical signature; then, random sampling conditioned on this signature produces genuinely different texture images. The second class boils down to a clever "copy-paste" procedure, which stitches together large regions of the sample. Hybrid methods try to combine ideas from both approaches to avoid their respective drawbacks. The recent approaches using convolutional neural networks fit into this classification, some being statistical and others performing patch re-arrangement in the feature space. They produce impressive syntheses on various kinds of textures. Nevertheless, we found that most real textures are organized at multiple scales, with global structures revealed at coarse scales and highly varying details at finer ones. Thus, when confronted with large natural images of textures, the results of state-of-the-art methods degrade rapidly, and the problem of modeling them remains wide open.
    Comment: v2: Added comments and typo fixes. New section added to describe FRAME. New method presented: CNNMR
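
    As an illustration of the patch re-arrangement ("copy-paste") idea described in this abstract, the sketch below tiles an output image with patches copied from the exemplar, choosing each patch so that its left overlap matches what has already been placed. It is a simplified sketch only (no seam optimisation, no multi-scale handling); the patch size, overlap width and SSD matching criterion are assumptions made for the example, not the survey's method.

```python
import numpy as np

def quilt(exemplar, out_size, patch=32, overlap=8, n_candidates=200, seed=0):
    """Tile an out_size x out_size image with patches copied from the exemplar."""
    rng = np.random.default_rng(seed)
    step = patch - overlap
    H, W = exemplar.shape[:2]
    out = np.zeros((out_size, out_size) + exemplar.shape[2:], dtype=exemplar.dtype)
    for y in range(0, out_size - patch + 1, step):
        for x in range(0, out_size - patch + 1, step):
            # draw random candidate source positions in the exemplar
            ys = rng.integers(0, H - patch + 1, n_candidates)
            xs = rng.integers(0, W - patch + 1, n_candidates)
            if x == 0:
                best = 0                      # first column: copy any candidate
            else:
                # keep the candidate whose left strip best matches (SSD) the
                # overlap region already written to the output
                target = out[y:y + patch, x:x + overlap].astype(float)
                errs = [np.sum((exemplar[i:i + patch, j:j + overlap].astype(float)
                                - target) ** 2) for i, j in zip(ys, xs)]
                best = int(np.argmin(errs))
            i, j = ys[best], xs[best]
            out[y:y + patch, x:x + patch] = exemplar[i:i + patch, j:j + patch]
    return out
```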

    Transient Information Flow in a Network of Excitatory and Inhibitory Model Neurons: Role of Noise and Signal Autocorrelation

    Get PDF
    We investigate the performance of sparsely connected networks of integrate-and-fire neurons for ultra-short-term information processing. We exploit the fact that the population activity of networks with balanced excitation and inhibition can switch from an oscillatory firing regime to a state of asynchronous irregular firing or quiescence, depending on the rate of external background spikes. We find that, in terms of information buffering, the network performs best for a moderate, non-zero amount of noise. Analogous to the phenomenon of stochastic resonance, the performance decreases for higher and lower noise levels. The optimal amount of noise corresponds to the transition zone between a quiescent state and a regime of stochastic dynamics. This provides a potential explanation of the role of non-oscillatory population activity in a simplified model of cortical micro-circuits.
    Comment: 27 pages, 7 figures, to appear in J. Physiology (Paris) Vol. 9
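
    A minimal sketch of the kind of model this abstract discusses is given below: a sparsely connected network of excitatory and inhibitory leaky integrate-and-fire neurons driven by external Poisson background spikes, with the instantaneous population rate recorded over time. All parameter values (connection probability, synaptic weights, external rate, time constants) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def simulate(n_exc=800, n_inh=200, p_conn=0.1, ext_rate_khz=2.0,
             t_sim_ms=500.0, dt=0.1, tau_m=20.0, v_th=20.0, v_reset=0.0,
             w_exc=0.2, g=5.0, seed=0):
    """Leaky integrate-and-fire E/I network with Poisson background drive."""
    rng = np.random.default_rng(seed)
    n = n_exc + n_inh
    conn = rng.random((n, n)) < p_conn               # conn[i, j]: synapse j -> i
    w_col = np.where(np.arange(n) < n_exc, w_exc, -g * w_exc)
    W = conn * w_col                                 # inhibition scaled by factor g
    v = rng.uniform(0.0, v_th, n)                    # random initial potentials
    spikes = np.zeros(n, dtype=bool)
    pop_rate = []
    for _ in range(int(t_sim_ms / dt)):
        # external background spikes (rate in kHz => expected count per dt in ms)
        ext = rng.poisson(ext_rate_khz * dt, n) * w_exc
        rec = W @ spikes                             # recurrent input from last step
        v += dt * (-v / tau_m) + rec + ext           # Euler step of the LIF dynamics
        spikes = v >= v_th
        v[spikes] = v_reset
        pop_rate.append(spikes.mean() / (dt * 1e-3)) # instantaneous rate in Hz
    return np.array(pop_rate)

rates = simulate()
print("mean population rate: %.1f Hz" % rates.mean())
```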

    An artificial neural network for dimensions and cost modelling of internal micro-channels fabricated in PMMA using Nd:YVO4 laser

    Get PDF
    For micro-channel fabrication using laser micro-machining, estimation techniques are normally used to develop an approach for evaluating the system behaviour. Design of Experiments (DoE) and Artificial Neural Networks (ANNs) are two methodologies that can be used as estimation techniques. These techniques help in finding a set of laser processing parameters that provides the required micro-channel dimensions, and in finding optimal solutions in terms of reduced product development time, power consumption and cost. In this work, an integrated methodology is presented in which the ANN training experiments were obtained from the statistical DoE software to improve the developed ANN models. A 3³ factorial design of experiments was used to obtain the experimental set. Laser power, P; pulse repetition frequency, PRF; and sample translation speed, U, were the ANN inputs. The channel width and the operating cost per metre of the produced micro-channel were the measured responses. Four ANN models were developed for internal micro-channels machined in PMMA using an Nd:YVO4 laser. These models varied in the selection and quantity of the training data set and were constructed using a multi-layered, feed-forward structure with the back-propagation algorithm. The responses were adequately estimated by the ANN models within the set micro-machining parameter limits. Moreover, the effect of changing the selection and quantity of the training data on the approximation capability of the developed ANN model was discussed.
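
    The sketch below illustrates the general DoE-plus-ANN workflow this abstract describes: build the 3³ full-factorial set of laser settings, then fit a small feed-forward network to a measured response such as channel width. The factor levels are assumed for the example and the response column is a synthetic placeholder, not the paper's experimental data.

```python
from itertools import product
import numpy as np
from sklearn.neural_network import MLPRegressor

# assumed factor levels for power P (W), pulse repetition frequency PRF (kHz)
# and translation speed U (mm/s) -- three levels each => 27 experimental runs
P_levels, PRF_levels, U_levels = [0.5, 1.0, 1.5], [20, 40, 60], [1, 3, 5]
X = np.array(list(product(P_levels, PRF_levels, U_levels)), dtype=float)

# placeholder response: in the real study this column would hold the measured
# channel widths (and, for the cost models, the operating cost per metre)
rng = np.random.default_rng(0)
width_um = 50 + 30 * X[:, 0] - 2 * X[:, 2] + rng.normal(0, 2, len(X))

# multi-layered, feed-forward network; gradients computed by back-propagation
model = MLPRegressor(hidden_layer_sizes=(8, 8), activation="tanh",
                     solver="lbfgs", max_iter=5000, random_state=0)
model.fit(X, width_um)
print("predicted width for P=1.0 W, PRF=40 kHz, U=3 mm/s:",
      model.predict([[1.0, 40.0, 3.0]])[0])
```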

    Learning Domain-Specific Word Embeddings from Sparse Cybersecurity Texts

    Full text link
    Word embedding is a Natural Language Processing (NLP) technique that automatically maps words from a vocabulary to vectors of real numbers in an embedding space. It has been widely used in recent years to boost the performance of a variety of NLP tasks such as Named Entity Recognition, Syntactic Parsing and Sentiment Analysis. Classic word embedding methods such as Word2Vec and GloVe work well when they are given a large text corpus. When the input texts are sparse, as in many specialized domains (e.g., cybersecurity), these methods often fail to produce high-quality vectors. In this paper, we describe a novel method to train domain-specific word embeddings from sparse texts. In addition to domain texts, our method also leverages diverse types of domain knowledge such as domain vocabulary and semantic relations. Specifically, we first propose a general framework to encode diverse types of domain knowledge as text annotations. Then we develop a novel Word Annotation Embedding (WAE) algorithm to incorporate diverse types of text annotations in word embedding. We have evaluated our method on two cybersecurity text corpora: a malware description corpus and a Common Vulnerability and Exposure (CVE) corpus. Our evaluation results have demonstrated the effectiveness of our method in learning domain-specific word embeddings.
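
    To give a concrete feel for the general idea of enriching sparse domain text with annotations before training embeddings, the sketch below inserts domain-knowledge labels as extra tokens next to the words they describe and trains a skip-gram model on the augmented corpus. This is an illustration only: it does not reproduce the paper's Word Annotation Embedding (WAE) algorithm, and the tiny corpus and annotation scheme are invented for the example.

```python
from gensim.models import Word2Vec

corpus = [
    "the trojan opens a backdoor and contacts its command server".split(),
    "the worm exploits a buffer overflow vulnerability to spread".split(),
]
# assumed domain vocabulary -> annotation label (e.g. from a malware ontology)
annotations = {"trojan": "MALWARE", "worm": "MALWARE",
               "backdoor": "CAPABILITY", "vulnerability": "WEAKNESS"}

# inject annotation tokens right after the words they annotate
augmented = [
    [tok for w in sent
         for tok in ((w, annotations[w]) if w in annotations else (w,))]
    for sent in corpus
]

# skip-gram embeddings trained on the annotation-augmented sentences
model = Word2Vec(sentences=augmented, vector_size=50, window=5,
                 min_count=1, sg=1, epochs=200, seed=1)
print(model.wv.most_similar("trojan", topn=3))  # nearby words in embedding space
```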

    Perspectives on Multi-Level Dynamics

    Get PDF
    As Physics did in previous centuries, there is currently a common dream of extracting generic laws of nature in economics, sociology and neuroscience by restricting the description of phenomena to a minimal set of variables and parameters, linked together by causal equations of evolution whose structure may reveal hidden principles. This requires a huge reduction of dimensionality (number of degrees of freedom) and a change in the level of description. Beyond the mere necessity of developing accurate techniques affording this reduction, there is the question of the correspondence between the initial system and the reduced one. In this paper, we offer a perspective towards a common framework for discussing and understanding multi-level systems exhibiting structures at various spatial and temporal levels. We propose a common foundation and illustrate it with examples from different fields. We also point out the difficulties in constructing such a general setting and its limitations.