
    Synthesis of Stochastic Flow Networks

    A stochastic flow network is a directed graph with incoming edges (inputs) and outgoing edges (outputs); tokens enter through the input edges, travel stochastically through the network, and can exit through the output edges. Each node in the network is a splitter: a token that enters a node through an incoming edge exits on one of the outgoing edges according to a predefined probability distribution. Stochastic flow networks can easily be implemented by DNA-based chemical reactions, with promising applications in molecular computing and stochastic computing. In this paper, we address a fundamental synthesis question: given a finite set of possible splitters and an arbitrary rational probability distribution, design a stochastic flow network such that every token that enters the input edge exits the outputs with the prescribed probability distribution. The problem of probability transformation dates back to von Neumann's 1951 work and was followed, among others, by Knuth and Yao in 1976. Most existing work has focused on the "simulation" of target distributions; in this paper, we design optimal-sized stochastic flow networks for "synthesizing" target distributions. We show that when each splitter has two outgoing edges and is unbiased, an arbitrary rational probability $\frac{a}{b}$ with $a \leq b \leq 2^n$ can be realized by a stochastic flow network of size $n$, and that this size is optimal. Compared to other stochastic systems, feedback (cycles in the network) strongly improves the expressibility of stochastic flow networks.
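    For contrast with the "synthesis" construction above, the classic "simulation" approach in the spirit of Knuth and Yao can be sketched in a few lines: realize a Bernoulli event of probability $\frac{a}{b}$ with unbiased flips by lazily comparing the binary expansion of a uniform random number against that of $\frac{a}{b}$. The Python function below is a hedged illustration of that idea, not the paper's flow-network construction.

```python
import random

def bernoulli_from_fair_flips(a, b, flip=lambda: random.getrandbits(1)):
    """Return 1 with probability a/b (0 <= a <= b), using only fair coin flips.

    Lazily compares the binary expansion of a uniform U in [0, 1) against
    the binary expansion of a/b; the first differing bit decides the outcome,
    so on average only about two flips are needed.
    """
    num = a  # numerator of the remaining threshold; invariant: 0 <= num <= b
    while True:
        u_bit = flip()                  # next bit of U, from one unbiased splitter
        num *= 2                        # next bit of a/b's binary expansion
        t_bit = 1 if num >= b else 0
        if t_bit:
            num -= b
        if u_bit != t_bit:              # first differing bit: U < a/b iff u_bit < t_bit
            return 1 if u_bit < t_bit else 0
        # bits agree so far: keep flipping

# Example: estimate Pr[1] for a/b = 3/7 (should be close to 0.4286)
samples = [bernoulli_from_fair_flips(3, 7) for _ in range(100_000)]
print(sum(samples) / len(samples))
```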

    On the Complexity of Branching Games with Regular Conditions

    Infinite-duration games with regular conditions are one of the crucial tools in the areas of verification and synthesis. In this paper we consider a branching variant of such games: the game contains branching vertices that split the play into two independent sub-games, so a play has the form of an infinite tree. The winner of the play is determined by a winning condition specified as a set of infinite trees. Games of this kind were used by Mio to provide a game semantics for the probabilistic mu-calculus; he used winning conditions defined in terms of parity games on trees. In this work we consider a more general class of winning conditions, namely those definable by finite automata on infinite trees. Our games can be seen as a branching-time variant of stochastic games on graphs. We address the question of determinacy of a branching game and the problem of computing the optimal game value for each of the players. We consider both the stochastic and non-stochastic variants of the games. The questions under consideration are parametrised by the family of strategies we allow: mixed, behavioural, or pure. We prove that in general, branching games are not determined under mixed strategies. This holds even for topologically simple winning conditions (differences of two open sets) and non-stochastic arenas. Nevertheless, we show that the games become determined under mixed strategies if we restrict the winning conditions to open sets of trees. We prove that the problem of comparing the game value to a rational threshold is undecidable for branching games with regular conditions in all non-trivial stochastic cases. In the non-stochastic cases we provide exact bounds on the complexity of the problem. The only case left open is the 0-player stochastic case, i.e. the problem of computing the measure of a given regular language of infinite trees.
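    As a hedged illustration of the arena model described above (an assumed encoding, not code from the paper), the sketch below fixes a finite arena whose vertices are either owned by a player, resolved stochastically, or branching; unrolling a play from such an arena yields an infinite tree, and a winning condition is a set of such trees.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Dict, List

class Kind(Enum):
    PLAYER_0 = auto()   # maximizer chooses the successor
    PLAYER_1 = auto()   # minimizer chooses the successor
    RANDOM   = auto()   # successor drawn from a fixed distribution
    BRANCH   = auto()   # play splits into two independent sub-plays

@dataclass
class Vertex:
    kind: Kind
    successors: List[str] = field(default_factory=list)  # BRANCH: exactly two roots
    probs: List[float] = field(default_factory=list)     # used only for RANDOM vertices

@dataclass
class Arena:
    vertices: Dict[str, Vertex]
    initial: str

# A tiny arena: the initial vertex branches, each branch is a fair coin flip
# between two player-owned sink vertices.
toy = Arena(
    vertices={
        "root": Vertex(Kind.BRANCH, successors=["coin", "coin"]),
        "coin": Vertex(Kind.RANDOM, successors=["p0", "p1"], probs=[0.5, 0.5]),
        "p0": Vertex(Kind.PLAYER_0, successors=["p0"]),
        "p1": Vertex(Kind.PLAYER_1, successors=["p1"]),
    },
    initial="root",
)
```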

    Methods for Machine Learning System Design

    Doctoral dissertation, Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2016 (advisor: Kiyoung Choi). Machine learning has attracted attention because intelligence such as recognition, decision making, and recommendation is a useful capability in industrial, medical, transportation, entertainment, and other systems that humans need to interact with. As machine learning techniques are applied extensively to various areas, the need for more robust algorithms and more efficient hardware has grown. In order to develop an efficient machine learning system, we have worked from high-level algorithms down to low-level hardware logic; the main focus of our work is on ensemble machine learning and stochastic computing (SC).
    The first contribution combines multiple components, i.e., multiple feature extractors (FE) and multiple classifiers, for pattern recognition. An ensemble of multiple components is one approach to constructing a more accurate classifier: it can handle difficult problems where a single classifier easily makes a wrong decision due to lack of training or parameter optimization, and combining the decisions of the participating classifiers statistically reduces the risk of a wrong decision. We suggest a hierarchical ensemble framework of multiple feature extractors and multiple classifiers (MFMC).
    The second contribution constructs efficient hardware building blocks for machine learning in order to reduce system complexity and generate highly area- and energy-efficient logic, exploiting the property that machine learning systems do not require exact computation. We select stochastic computing (SC), an alternative paradigm to conventional binary arithmetic, which can boost efficiency in terms of area, power, and error tolerance while relaxing the accuracy of computation.
    The third contribution combines machine learning and stochastic computing, focusing on deep learning, and presents an efficient DNN design based on stochastic computing. Observing that directly adopting stochastic computing in DNNs raises challenges including random error fluctuation, range limitation, and accumulation overhead, we address these problems by removing near-zero weights, applying weight scaling, and integrating the activation function with the accumulator. The approach allows an easy implementation of early decision termination with a fixed hardware design by exploiting the progressive-precision characteristic of stochastic computing, which was not easy with existing approaches (a minimal sketch of this bit-stream arithmetic follows the table of contents below). Experimental results show that our approach outperforms conventional binary logic in terms of gate area, latency, and power consumption.
    Table of contents:
    1. Introduction
       1.1 Hierarchical Ensemble Learning Framework
       1.2 Hardware Building Block for Machine Learning By Using Stochastic Computing
           1.2.1 Dynamic energy-accuracy trade-off using stochastic computing in deep neural networks
    2. A Design Framework for Hierarchical Ensemble of Multiple Feature Extractors and Multiple Classifiers
       2.1 Introduction
       2.2 Related work
       2.3 Proposed hierarchical ensemble system
           2.3.1 Local Mapping Block and Global Mapping Block
           2.3.2 Complexity comparison according to composition of LMB
           2.3.3 Motivation for differentiating local and global mappings
           2.3.4 Reinforcement learning for LMB
           2.3.5 Construction of Bayesian network from GMB
       2.4 Experimental results
           2.4.1 Measure of effectiveness for WMV and RL
           2.4.2 Pedestrian detection dataset
           2.4.3 Comparison between GMB and AdaBoost
           2.4.4 UCI Multiple Features dataset
           2.4.5 LMB selection
           2.4.6 Discussion
       2.5 Conclusion
    3. Synthesis of Efficient Stochastic Logic for Many-Variable Expressions
       3.1 Introduction
       3.2 Related Work
       3.3 SC Logic Synthesis for Multivariate Expressions
           3.3.1 Probabilistic Logic
           3.3.2 Definitions
           3.3.3 Overview of the Proposed Method
           3.3.4 Direct Synthesis vs. Kernel-based Synthesis
           3.3.5 SC Kernel
           3.3.6 Prime SC Kernel
           3.3.7 iSC Kernel
           3.3.8 Relationship Between iSC Kernels
           3.3.9 Hybrid Scheme
           3.3.10 Cost Function
           3.3.11 SC Synthesis Algorithm
       3.4 Experimental Results
           3.4.1 Performance of SC Logic Synthesis Algorithm
           3.4.2 Quality of Synthesis Results
           3.4.3 Comparison of Accuracy
       3.5 Conclusion
    4. An Energy-Efficient Random Number Generator for Stochastic Circuits
       4.1 Introduction
       4.2 Background
           4.2.1 Preliminaries
           4.2.2 Shortcomings of Conventional Approaches
       4.3 Proposed Stochastic Number Generator
           4.3.1 Overview of the Proposed SNG
           4.3.2 Even-distribution Encoding
           4.3.3 Inter-group Randomization
           4.3.4 Proposed Building Block for Bit Shuffling
           4.3.5 Intra-group Randomization
       4.4 Experimental Results
           4.4.1 Accuracy of Generated Stochastic Bit Stream
           4.4.2 Area, Delay, Power, Energy and SCC Average
           4.4.3 Energy Efficiency When Operated under Maximal Precision
       4.5 Conclusion
    5. Approximate De-randomizer for Stochastic Circuits
       5.1 Introduction
       5.2 Proposed Approximate Parallel Counter
           5.2.1 Analysis for Gate Count in 1-layer Approximate PC
           5.2.2 Analysis for Error in 1-layer Approximate PC
       5.3 Experimental Results
       5.4 Conclusion
    6. Dynamic Energy-Accuracy Trade-off Using Stochastic Computing in Deep Neural Networks
       6.1 Introduction
       6.2 Background
       6.4 DNN Using Stochastic Circuit
           6.4.1 Overview of the Proposed DNN using SC
           6.4.2 Removing Near-Zero Weights
           6.4.3 Applying Weight Scaling
           6.4.4 Activation Function with Accumulation
       6.5 Early Decision Termination
           6.5.1 Moving Average Tracking Output Trends
       6.6 Experimental Results
           6.6.1 Accuracy of DNN Using SC
           6.6.2 Effectiveness of Early Decision Termination
           6.6.3 Comparison of Synthesis Results
       6.7 Conclusion
    7. Conclusion
    Bibliography
    Abstract (in Korean)
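    The stochastic-computing building block that runs through Chapters 3 to 6 represents a value in [0, 1] as the fraction of 1s in a random bit stream, so a single AND gate multiplies two independent streams and longer streams give progressively higher precision, which is what early decision termination exploits. The snippet below is a hedged, software-level sketch of that encoding and of progressive precision, not the dissertation's hardware design.

```python
import random

def to_stream(p, length, rng):
    """Unipolar SC encoding: each bit is 1 with probability p (0 <= p <= 1)."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(xs, ys):
    """One AND gate per bit multiplies two independent unipolar streams."""
    return [x & y for x, y in zip(xs, ys)]

def decode(bits):
    """De-randomize: the value is the fraction of 1s observed so far."""
    return sum(bits) / len(bits)

a, b, n = 0.75, 0.5, 4096
product = sc_multiply(to_stream(a, n, random.Random(1)),
                      to_stream(b, n, random.Random(2)))

# Progressive precision: the running estimate approaches a * b = 0.375 as more
# bits arrive, so a confident decision can be terminated early.
for prefix in (64, 256, 1024, 4096):
    print(prefix, round(decode(product[:prefix]), 3))
```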

    MAA*: A Heuristic Search Algorithm for Solving Decentralized POMDPs

    We present multi-agent A* (MAA*), the first complete and optimal heuristic search algorithm for solving decentralized partially observable Markov decision problems (DEC-POMDPs) with finite horizon. The algorithm is suitable for computing optimal plans for a cooperative group of agents that operate in a stochastic environment, such as multi-robot coordination, network traffic control, or distributed resource allocation. Solving such problems effectively is a major challenge in the area of planning under uncertainty. Our solution is based on a synthesis of classical heuristic search and decentralized control theory. Experimental results show that MAA* has significant advantages. We introduce an anytime variant of MAA* and conclude with a discussion of promising extensions, such as an approach to solving infinite-horizon problems. (Appears in Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, UAI 2005.)
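    As a hedged sketch of the search strategy described above, the skeleton below performs best-first (A*-style) search over partial joint policies, expanding the node with the largest admissible upper bound first and keeping the best complete policy found so far as an anytime incumbent. The helpers expand, exact_value, and upper_bound, and the depth attribute on policies, are hypothetical placeholders for the DEC-POMDP-specific pieces; this is not the authors' implementation.

```python
import heapq

def maa_star_skeleton(root_policy, expand, exact_value, upper_bound, horizon):
    """Best-first search over partial joint policies up to a given horizon.

    expand(p)      -> one-step-deeper partial joint policies (hypothetical helper)
    exact_value(p) -> value of the steps already fixed by p
    upper_bound(p) -> admissible (optimistic) bound on the remaining steps
    """
    best_full, best_value = None, float("-inf")
    # heapq is a min-heap, so negate f-values to pop the most promising node first.
    frontier = [(-(exact_value(root_policy) + upper_bound(root_policy)), 0, root_policy)]
    counter = 1  # tie-breaker so policy objects are never compared directly
    while frontier:
        neg_f, _, policy = heapq.heappop(frontier)
        if -neg_f <= best_value:
            break  # no remaining node can beat the incumbent: it is optimal
        for child in expand(policy):
            if child.depth == horizon:          # complete joint policy
                value = exact_value(child)
                if value > best_value:
                    best_full, best_value = child, value  # anytime incumbent improves
            else:                               # partial policy: keep searching
                f = exact_value(child) + upper_bound(child)
                if f > best_value:
                    heapq.heappush(frontier, (-f, counter, child))
                    counter += 1
    return best_full, best_value
```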

    Stochastic and Optimal Distributed Control for Energy Optimization and Spatially Invariant Systems

    Improving the energy efficiency and grid responsiveness of buildings requires sensing, computing, and communication to enable stochastic decision-making and distributed operation. Optimal control synthesis plays a significant role in dealing with the complexity and uncertainty associated with energy systems. This dissertation studies the general area of complex networked systems that consist of interconnected components and usually operate in uncertain environments; specifically, it develops tools based on stochastic and optimal distributed control to overcome these challenges and improve the sustainability of electric energy systems. The first tool is a unifying stochastic control approach for improving energy efficiency while meeting probabilistic constraints; it is applied to demonstrate energy-efficiency improvement in buildings and to improve the operational efficiency of virtualized web servers. Although all of the optimization in this technique takes the form of convex optimization, it relies heavily on semidefinite programming (SP), and a generic SP solver can handle only up to hundreds of variables. For a large-scale system, existing off-the-shelf algorithms may therefore not be an appropriate tool for optimal control, so in the sequel we exploit optimization in a distributed way. The second tool is a concrete study of optimal distributed control for spatially invariant systems. Spatial invariance means the dynamics of the system do not vary as we translate along some spatial axis. The optimal $H_2$ decentralized control problem is solved by computing an orthogonal projection onto a class of Youla parameters with a decentralized structure. Optimal $H_\infty$ performance is posed as a distance minimization in a general $L_\infty$ space from a vector function to a subspace with a mixed $L_\infty$ and $H_\infty$ structure. In this framework, the dual and pre-dual formulations lead to finite-dimensional convex optimizations that approximate the optimal solution within the desired accuracy. Furthermore, a mixed $L_2$/$H_\infty$ synthesis problem for spatially invariant systems is formulated to trade off transient performance against robustness. Finally, we address a more general networked system, namely the non-Markovian decentralized stochastic control problem, using the stochastic maximum principle via Malliavin calculus.
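    The remark that the stochastic control tool relies heavily on semidefinite programming, and that generic solvers handle only up to hundreds of variables, can be made concrete with a small feasibility problem. The snippet below is a hedged illustration using the off-the-shelf cvxpy modeling layer rather than the dissertation's own formulation: it checks a Lyapunov linear matrix inequality certifying stability of a small linear system, the kind of matrix-inequality problem whose variable count grows quadratically with the system dimension.

```python
import numpy as np
import cvxpy as cp

# A small stable linear system x_dot = A x (eigenvalues in the left half-plane).
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
n = A.shape[0]

# Lyapunov LMI: find P > 0 with A^T P + P A < 0.
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
problem = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
problem.solve()

print(problem.status)  # "optimal" if the LMI is feasible, i.e. stability is certified
print(P.value)         # Lyapunov certificate; the matrix variable alone has n*(n+1)/2 entries
```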