
    Understanding Vector-Valued Neural Networks and Their Relationship with Real and Hypercomplex-Valued Neural Networks

    Full text link
    Despite the many successful applications of deep learning models for multidimensional signal and image processing, most traditional neural networks process data represented by (multidimensional) arrays of real numbers. The intercorrelation between feature channels is usually expected to be learned from the training data, requiring numerous parameters and careful training. In contrast, vector-valued neural networks are conceived to process arrays of vectors and naturally consider the intercorrelation between feature channels. Consequently, they usually have fewer parameters and often undergo more robust training than traditional neural networks. This paper presents a broad framework for vector-valued neural networks, referred to as V-nets. In this context, hypercomplex-valued neural networks are regarded as vector-valued models with additional algebraic properties. Furthermore, this paper explains the relationship between vector-valued and traditional neural networks: a vector-valued neural network can be obtained by placing restrictions on a real-valued model so that it accounts for the intercorrelation between feature channels. Finally, we show how V-nets, including hypercomplex-valued neural networks, can be implemented in current deep-learning libraries as real-valued networks.
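
    A minimal sketch (not the authors' implementation) of that last point: a quaternion-valued dense layer, a common hypercomplex-valued model, written as a constrained real-valued layer in PyTorch. The class name, initialization, and input layout are illustrative assumptions; the point is only the Hamilton-product weight sharing.

    import torch
    import torch.nn as nn

    class QuaternionLinear(nn.Module):
        """Quaternion-valued dense layer realized with real tensors.

        The quaternion weight w = a + b*i + c*j + d*k acts on a quaternion
        input via the Hamilton product, which corresponds to a structured,
        weight-shared real matrix: the layer couples the four feature
        components by construction instead of learning the coupling.
        """

        def __init__(self, in_features: int, out_features: int):
            super().__init__()
            # One real tensor per quaternion component of the weight.
            self.a = nn.Parameter(0.1 * torch.randn(out_features, in_features))
            self.b = nn.Parameter(0.1 * torch.randn(out_features, in_features))
            self.c = nn.Parameter(0.1 * torch.randn(out_features, in_features))
            self.d = nn.Parameter(0.1 * torch.randn(out_features, in_features))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x has shape (..., 4 * in_features), components stacked as (r, i, j, k).
            r, i, j, k = x.chunk(4, dim=-1)
            # Hamilton product w * x, written out over the four real components.
            out_r = r @ self.a.T - i @ self.b.T - j @ self.c.T - k @ self.d.T
            out_i = r @ self.b.T + i @ self.a.T - j @ self.d.T + k @ self.c.T
            out_j = r @ self.c.T + i @ self.d.T + j @ self.a.T - k @ self.b.T
            out_k = r @ self.d.T - i @ self.c.T + j @ self.b.T + k @ self.a.T
            return torch.cat([out_r, out_i, out_j, out_k], dim=-1)

    Because the same four real tensors are reused across the four output components, the layer couples feature channels by construction while holding roughly a quarter of the parameters of an unconstrained real layer of the same size.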

    A unifying Petri net model of non-interference and non-deducibility information flow security

    No full text
    In this paper we introduce FIFO Information Flow Nets (FIFN) as a model for describing information flow security properties. The FIFN is based on Petri nets and derives from the work described in [Var89], [Var90] and [Rou86]. Using this model, we present the information flow security properties Non-Interference between Places (which corresponds to Non-Interference) and Non-Deducibility on Views (which corresponds to Non-Deducibility on Inputs). We then consider a very general composition operation and show that neither Non-Interference between Places nor Non-Deducibility on Views is preserved under it. This leads us to a new definition of information flow security, referred to as Feedback Non-Deducibility on Views, which we show is preserved under the composition operation. Finally, we show some similarities between this property and the notion of Non-Deducibility on Strategies.
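
    The paper defines FIFNs and their security properties formally; purely as a rough intuition, the sketch below models the distinguishing feature of the net class in Python: places that behave as FIFO queues rather than multisets. The class, its firing rule, and the place/transition names are illustrative assumptions, not the paper's definitions.

    from collections import deque

    class FifoNet:
        """Toy net whose places are FIFO queues.

        In an ordinary place/transition net a place is a multiset of
        tokens; here each place preserves arrival order, so a firing
        transition always consumes the oldest token.
        """

        def __init__(self):
            self.places = {}       # place name -> deque of tokens (FIFO order)
            self.transitions = {}  # transition name -> (input places, output places)

        def add_place(self, name):
            self.places[name] = deque()

        def add_transition(self, name, inputs, outputs):
            self.transitions[name] = (list(inputs), list(outputs))

        def enabled(self, t):
            inputs, _ = self.transitions[t]
            return all(self.places[p] for p in inputs)

        def fire(self, t):
            # Consume the oldest token from each input place, then emit the
            # consumed values, as one tuple, into each output place.
            if not self.enabled(t):
                raise ValueError(f"transition {t!r} is not enabled")
            inputs, outputs = self.transitions[t]
            consumed = tuple(self.places[p].popleft() for p in inputs)
            for p in outputs:
                self.places[p].append(consumed)
            return consumed

    # Hypothetical two-level usage: a transition moving a token from a
    # high-level place to a low-level one is the kind of flow the
    # paper's non-interference properties are meant to rule out.
    net = FifoNet()
    net.add_place("high")
    net.add_place("low")
    net.add_transition("leak", inputs=["high"], outputs=["low"])
    net.places["high"].append("secret")
    net.fire("leak")  # the "secret" token is now observable in place "low"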

    Session Types in Abelian Logic

    Full text link
    There was a PhD student who said: "I found a pair of wooden shoes. I put a coin in the left one and a key in the right one. The next morning, I found those objects in the opposite shoes." We do not claim that such shoes exist, but we propose a similar programming abstraction in the context of typed lambda calculi. The result, which we call the Amida calculus, extends Abramsky's linear lambda calculus LF and characterizes Abelian logic.
    Comment: In Proceedings PLACES 2013, arXiv:1312.221
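
    The Amida calculus itself is a typed lambda calculus, not a concurrency library; purely as an illustration of the "wooden shoes" behaviour, the sketch below mimics a pair of endpoints where whatever is put into one side is later taken from the other. All names here are hypothetical.

    import queue

    class ShoePair:
        """Toy model of the abstract's wooden shoes: two endpoints such
        that whatever is put into one side comes out of the other,
        built from a pair of dual FIFO channels.
        """

        def __init__(self):
            self._left_to_right = queue.Queue()
            self._right_to_left = queue.Queue()

        def left(self):
            return _Endpoint(self._left_to_right, self._right_to_left)

        def right(self):
            return _Endpoint(self._right_to_left, self._left_to_right)

    class _Endpoint:
        def __init__(self, out_q, in_q):
            self._out, self._in = out_q, in_q

        def put(self, value):
            # A value placed in this shoe...
            self._out.put(value)

        def take(self):
            # ...is found in the opposite shoe "the next morning".
            return self._in.get()

    # Usage: the coin and the key swap sides overnight.
    pair = ShoePair()
    left, right = pair.left(), pair.right()
    left.put("coin")
    right.put("key")
    assert right.take() == "coin"
    assert left.take() == "key"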