
    A Polynomial Translation of pi-calculus FCPs to Safe Petri Nets

    We develop a polynomial translation from finite control pi-calculus processes to safe low-level Petri nets. To our knowledge, this is the first such translation. It is natural in that there is a close correspondence between the control flows, it enjoys a bisimulation result, and it is suitable for practical model checking. (To appear in the special issue on best papers of CONCUR'12 of Logical Methods in Computer Science.)
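    A minimal sketch of the target formalism may help fix ideas: a 1-safe (low-level) Petri net is one in which no reachable marking ever places more than one token on a place. The Python sketch below is purely illustrative and is not the paper's construction; the class, methods and example net are invented for this sketch.

        # Illustrative 1-safe Petri net: places, transitions with preset/postset,
        # a firing rule, and an exhaustive check that no firing ever puts a
        # second token on an already-marked place. Not the paper's translation.
        class PetriNet:
            def __init__(self, places, transitions):
                # transitions: name -> (preset, postset), each a frozenset of places
                self.places = set(places)
                self.transitions = dict(transitions)

            def fire(self, marking, name):
                pre, post = self.transitions[name]
                assert pre <= marking, f"{name} is not enabled"
                return (marking - pre) | post  # consume input tokens, produce output tokens

            def is_safe(self, initial_marking):
                seen, stack = set(), [frozenset(initial_marking)]
                while stack:
                    m = stack.pop()
                    if m in seen:
                        continue
                    seen.add(m)
                    for name, (pre, post) in self.transitions.items():
                        if pre <= m:                  # transition enabled
                            if (m - pre) & post:      # would create a second token
                                return False
                            stack.append(frozenset((m - pre) | post))
                return True

        # Example: a single transition moving a token from p1 to p2 is trivially safe.
        net = PetriNet({"p1", "p2"}, {"t1": (frozenset({"p1"}), frozenset({"p2"}))})
        print(net.is_safe({"p1"}))  # True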

    Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review

    The paper characterizes classes of functions for which deep learning can be exponentially better than shallow learning. Deep convolutional networks are a special case of these conditions, though weight sharing is not the main reason for their exponential advantage.
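    As a concrete illustration of the kind of function class in question, the sketch below builds a hierarchically compositional function of 8 variables out of bivariate constituents arranged as a binary tree; the review's argument is that deep networks whose graph mirrors such a tree escape the curse of dimensionality that a shallow, generic 8-variate approximation suffers. The constituent function h and the tree depth here are invented for illustration only.

        import numpy as np

        def h(a, b):
            # an arbitrary smooth bivariate constituent function
            return np.tanh(a * b + a - b)

        def compositional_f(x):
            # x has 8 entries; compose pairwise up a binary tree of depth 3,
            # i.e. f(x) = h(h(h(x1,x2), h(x3,x4)), h(h(x5,x6), h(x7,x8)))
            level = np.asarray(x, dtype=float)
            while level.size > 1:
                level = np.array([h(level[i], level[i + 1])
                                  for i in range(0, level.size, 2)])
            return level[0]

        print(compositional_f(np.random.rand(8)))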

    Incremental construction of LSTM recurrent neural network

    Long Short-Term Memory (LSTM) is a recurrent neural network that uses structures called memory blocks to allow the network to remember significant events far back in the input sequence, in order to solve long-time-lag tasks where other RNN approaches fail. In this work we have performed experiments using LSTM networks extended with growing abilities, which we call GLSTM. Four methods of training growing LSTMs have been compared. These methods include cascade and fully connected hidden layers, as well as two different levels of freezing previous weights in the cascade case. GLSTM has been applied to a forecasting problem in a biomedical domain, where the input/output behavior of five controllers of the Central Nervous System has to be modelled. We have compared GLSTM results against other neural network approaches and against our previous work applying conventional LSTM to the task at hand.
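    The abstract's notions of cascade growth and weight freezing can be illustrated with a short, hypothetical PyTorch sketch; this is not the authors' GLSTM code, and the class, method names and sizes are invented.

        import torch
        import torch.nn as nn

        class GrowingLSTM(nn.Module):
            """Toy 'growing' LSTM: new blocks are stacked in cascade and earlier
            weights can be frozen, loosely mirroring the training schemes compared
            in the abstract."""
            def __init__(self, n_inputs, n_outputs, hidden_size=8):
                super().__init__()
                self.hidden_size = hidden_size
                self.blocks = nn.ModuleList(
                    [nn.LSTM(n_inputs, hidden_size, batch_first=True)])
                self.readout = nn.Linear(hidden_size, n_outputs)

            def grow(self, freeze_previous=True):
                # Optionally freeze everything trained so far, then add a new
                # cascade block fed by the previous block's hidden sequence.
                if freeze_previous:
                    for p in self.blocks.parameters():
                        p.requires_grad = False
                self.blocks.append(
                    nn.LSTM(self.hidden_size, self.hidden_size, batch_first=True))
                self.readout = nn.Linear(self.hidden_size, self.readout.out_features)

            def forward(self, x):
                h = x
                for lstm in self.blocks:
                    h, _ = lstm(h)
                return self.readout(h[:, -1])  # predict from the last time step

        # Usage: train, grow a block (freezing earlier weights), train again.
        model = GrowingLSTM(n_inputs=3, n_outputs=5)
        y = model(torch.randn(4, 20, 3))   # (batch, time, features)
        model.grow(freeze_previous=True)
        y = model(torch.randn(4, 20, 3))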

    Neural Network Parametrization of Deep-Inelastic Structure Functions

    We construct a parametrization of deep-inelastic structure functions which retains information on experimental errors and correlations, and which does not introduce any theoretical bias while interpolating between existing data points. We generate a Monte Carlo sample of pseudo-data configurations and we train an ensemble of neural networks on them. This effectively provides us with a probability measure in the space of structure functions, within the whole kinematic region where data are available. This measure can then be used to determine the value of the structure function, its error, point-to-point correlations and generally the value and uncertainty of any function of the structure function itself. We apply this technique to the determination of the structure function F_2 of the proton and deuteron, and a precision determination of the isotriplet combination F_2[p-d]. We discuss in detail these results, check their stability and accuracy, and make them available in various formats for applications. (Comment: LaTeX, 43 pages, 22 figures. v2: final version, published in JHEP; Sect. 5.2 and Fig. 9 improved, a few typos corrected and other minor improvements. v3: some inconsequential typos in Tab. 1 and Tab. 5 corrected. Neural parametrization available at http://sophia.ecm.ub.es/f2neura)
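    The Monte Carlo replica strategy described above can be sketched in a few lines; the sketch below uses made-up data and scikit-learn's MLPRegressor rather than the authors' setup, and it ignores experimental correlations when generating replicas, so it illustrates the idea only.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Made-up stand-ins for the kinematic variable, the true structure
        # function, the measured data and its (uncorrelated) errors.
        x = np.linspace(0.01, 0.9, 25)
        f_true = 1.5 * x**-0.2 * (1 - x)**3
        sigma = 0.05 * f_true
        data = f_true + rng.normal(0.0, sigma)

        # Generate pseudo-data replicas and train one small network per replica.
        n_rep = 50
        ensemble = []
        for _ in range(n_rep):
            pseudo = data + rng.normal(0.0, sigma)
            net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
            net.fit(x.reshape(-1, 1), pseudo)
            ensemble.append(net)

        # The ensemble acts as a probability measure over curves: its mean gives
        # central values, its spread the uncertainties, and its covariance the
        # point-to-point correlations.
        grid = np.linspace(0.01, 0.9, 100).reshape(-1, 1)
        preds = np.array([net.predict(grid) for net in ensemble])  # (n_rep, n_grid)
        central = preds.mean(axis=0)
        error = preds.std(axis=0)
        corr = np.corrcoef(preds.T)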