9 research outputs found

    The Complexity of Testing Properties of Simple Games

    Simple games cover voting systems in which a single alternative, such as a bill or an amendment, is pitted against the status quo. A simple game, or yes-no voting system, is a set of rules that specifies exactly which collections of ``yea'' votes yield passage of the issue at hand; a collection of ``yea'' voters forms a winning coalition. We are interested in performing a complexity analysis of problems on such games depending on the game representation. We consider four natural explicit representations: winning, losing, minimal winning, and maximal losing coalitions. We first analyze the computational complexity of obtaining one representation of a simple game from another, and show that in some cases this transformation can be done in polynomial time while in others it requires exponential time. The second question is to classify the complexity of testing whether a game is simple or weighted; we show that for all four representations both problems can be solved in polynomial time. Finally, we provide results on the complexity of testing whether a simple game or a weighted game is of a special type. In this way, we analyze strongness, properness, decisiveness and homogeneity, which are desirable properties for a simple game to fulfill. Comment: 18 pages, LaTeX file
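
    To make the objects concrete, here is a minimal sketch (not code from the paper) of two of the representations it compares: the same three-player game given once as a weighted game by a quota and weights, and once explicitly by its minimal winning coalitions. Function names such as is_winning_weighted are invented for the example.

```python
from itertools import combinations

# Weighted representation: a coalition wins iff its total weight reaches the quota.
def is_winning_weighted(coalition, weights, quota):
    return sum(weights[p] for p in coalition) >= quota

# Minimal-winning representation: a coalition wins iff it contains
# at least one minimal winning coalition.
def is_winning_minimal(coalition, minimal_winning):
    members = set(coalition)
    return any(m <= members for m in minimal_winning)

# Example: three players with weights (2, 1, 1) and quota 3.
weights, quota = {0: 2, 1: 1, 2: 1}, 3
minimal_winning = [{0, 1}, {0, 2}]   # worked out by hand for this tiny game

players = list(weights)
for r in range(len(players) + 1):
    for coalition in combinations(players, r):
        assert is_winning_weighted(coalition, weights, quota) == \
               is_winning_minimal(coalition, minimal_winning)
```

    Switching between such explicit coalition lists is exactly the kind of representation transformation whose cost the paper analyzes; for larger games these lists can themselves be exponentially long.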

    Approximate In-memory computing on RERAMs

    Computing systems have seen tremendous growth over the past few decades in their capabilities, efficiency, and deployment use cases. This growth has been driven by progress in lithography techniques and improvements in synthesis tools, architectures and power management. However, there is a growing disparity between computing power and the demands on modern computing systems. The standard von Neumann architecture has separate data storage and data processing locations and therefore suffers from a memory-processor communication bottleneck, commonly referred to as the 'memory wall'. The relatively slower progress of memory technology compared with processing units has continued to exacerbate the memory wall problem. As feature sizes in the CMOS logic family shrink further, quantum tunneling effects become more prominent. Simultaneously, chip transistor density is already so high that not all transistors can be powered up at the same time without violating temperature constraints, a phenomenon characterized as dark silicon. Coupled with this, leakage currents also increase with smaller feature sizes, resulting in the breakdown of Dennard scaling. These challenges cannot be met without fundamental changes in current computing paradigms. One viable solution is in-memory computing, where computation and storage are performed alongside each other. A number of emerging memory fabrics such as ReRAMs, STT-RAMs, and PCM RAMs are capable of performing logic in memory. ReRAMs possess high storage density, extremely low power consumption and a low cost of fabrication; these advantages are due to the simple nature of their basic constituent elements, which allows nano-scale fabrication. We use flow-based computing on ReRAM crossbars, which exploits the natural sneak paths in those crossbars. Another concurrent development in computing is the maturation of domains that are error resilient while being highly data and power intensive, including machine learning, pattern recognition, computer vision, image processing, and networking. This shift in the nature of computing workloads has given weight to the idea of approximate computing, in which device efficiency is improved by sacrificing tolerable amounts of accuracy in computation. We present a mathematically rigorous foundation for the synthesis of approximate logic and its mapping to ReRAM crossbars using search-based and graphical methods.
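
    As a loose illustration of the flow-based idea (not the synthesis procedure of the thesis), the sketch below models a crossbar as a grid of crosspoints, each programmed with a Boolean literal; the computed function is 1 exactly when a conducting sneak path of satisfied crosspoints joins the source row to the sink row. The crossbar layout and helper names are made up for the example.

```python
from collections import deque

def evaluate_crossbar(crossbar, assignment):
    """crossbar[i][j] is a literal such as 'a', '!a', '1' (always on) or '0' (always off).
    Returns True iff conducting crosspoints connect row 0 to the last row."""
    def conducts(lit):
        if lit in ('0', '1'):
            return lit == '1'
        if lit.startswith('!'):
            return not assignment[lit[1:]]
        return assignment[lit]

    n_rows, n_cols = len(crossbar), len(crossbar[0])
    # Nodes are rows ('r', i) and columns ('c', j); a conducting crosspoint links them.
    start, goal = ('r', 0), ('r', n_rows - 1)
    seen, queue = {start}, deque([start])
    while queue:
        kind, idx = queue.popleft()
        if (kind, idx) == goal:
            return True
        if kind == 'r':
            nbrs = [('c', j) for j in range(n_cols) if conducts(crossbar[idx][j])]
        else:
            nbrs = [('r', i) for i in range(n_rows) if conducts(crossbar[i][idx])]
        for node in nbrs:
            if node not in seen:
                seen.add(node)
                queue.append(node)
    return False

# Tiny example: a crossbar whose only sneak path realises f(a, b) = a AND b.
xbar = [['a', '0'],
        ['1', 'b'],
        ['0', '1']]
print(evaluate_crossbar(xbar, {'a': True, 'b': True}))   # True
print(evaluate_crossbar(xbar, {'a': True, 'b': False}))  # False
```

    In this picture, approximate synthesis would deliberately choose crosspoint labels whose path function agrees with the target function on only most inputs, in exchange for a smaller or denser crossbar.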

    Truth Table Minimization of Computational Models

    Complexity theory offers a variety of concise computational models for computing Boolean functions - branching programs, circuits, decision trees and ordered binary decision diagrams, to name a few. A natural question that arises in this context with respect to any such model is: given a function f:{0,1}^n \to {0,1}, can we compute the optimal complexity of computing f in the computational model in question (according to some desirable measure)? A critical issue regarding this question is how exactly f is given, since a more elaborate description of f allows the algorithm to use more computational resources. Among the possible representations are black-box access to f (as in computational learning theory), a representation of f in the desired computational model, or a representation of f in some other model. One might conjecture that if f is given as its complete truth table (i.e., a list of f's values on each of its 2^n possible inputs), the most elaborate description conceivable, then the optimal complexity in any computational model can be computed efficiently, since the algorithm computing it may run in poly(2^n) time. Several recent studies show that this is far from the truth - some models admit simple and efficient algorithms that yield the desired result, others are believed to be hard, and for some models the problem remains open. In this thesis we discuss the computational complexity of this question for several common types of computational models. We present several new hardness results and efficient algorithms, as well as new proofs and extensions of known theorems, for variants of decision trees, formulas and branching programs.
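
    For a concrete instance of the question, the brute-force sketch below (not one of the algorithms or hardness results of the thesis) computes the minimum decision-tree depth of a function given by its complete truth table, recursing over variable restrictions; since there are at most 3^n restrictions, it runs in time polynomial in the 2^n-sized input.

```python
from functools import lru_cache
from itertools import product

def optimal_decision_tree_depth(truth_table, n):
    """truth_table maps each n-bit input tuple to 0 or 1.
    Returns the minimum depth of a decision tree computing that function."""
    @lru_cache(maxsize=None)
    def depth(restriction):
        fixed = dict(restriction)
        # Values the function takes on inputs consistent with the restriction.
        values = {truth_table[x] for x in product((0, 1), repeat=n)
                  if all(x[i] == v for i, v in fixed.items())}
        if len(values) <= 1:              # constant subfunction: a leaf suffices
            return 0
        best = n                          # trivial upper bound: query every variable
        for i in range(n):
            if i in fixed:
                continue
            # Query variable i; the tree is as deep as its worse branch, plus one.
            worst_branch = max(depth(restriction + ((i, b),)) for b in (0, 1))
            best = min(best, 1 + worst_branch)
        return best
    return depth(())

# Example: f(x0, x1, x2) = x0 XOR x1 has optimal depth 2 (x2 is irrelevant).
tt = {x: x[0] ^ x[1] for x in product((0, 1), repeat=3)}
print(optimal_decision_tree_depth(tt, 3))  # 2
```

    As the abstract notes, for other models and measures the analogous truth-table problem ranges from efficiently solvable to believed-hard or still open.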

    Modeling classification problems with selection of an optimal algorithm

    This work presents an analytical review of modern methods for constructing classifiers. The mathematical foundations of model building are given for the following methods: principal component analysis (PCA), decision trees (CART), feedforward neural networks (FF NN), the naive Bayes classifier (NB), and support vector machines (SVM). A custom data classification algorithm based on the k-nearest-neighbours method is constructed. A software package is developed that includes cross-validation, implementations of the considered methods, and selection of the final method using a greedy algorithm.
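
    The following minimal sketch (not the software package described above) illustrates the core of the described approach: a k-nearest-neighbours classifier evaluated by simple k-fold cross-validation. All names and the toy data are invented for the example.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points (Euclidean distance)."""
    order = sorted(range(len(train_X)), key=lambda i: math.dist(train_X[i], x))
    votes = Counter(train_y[i] for i in order[:k])
    return votes.most_common(1)[0][0]

def cross_validate(X, y, k=3, folds=5):
    """Average accuracy of the k-NN classifier over contiguous folds."""
    fold_size = len(X) // folds
    accuracies = []
    for f in range(folds):
        lo, hi = f * fold_size, (f + 1) * fold_size
        test_X, test_y = X[lo:hi], y[lo:hi]
        train_X, train_y = X[:lo] + X[hi:], y[:lo] + y[hi:]
        correct = sum(knn_predict(train_X, train_y, x, k) == t
                      for x, t in zip(test_X, test_y))
        accuracies.append(correct / len(test_X))
    return sum(accuracies) / len(accuracies)

# Toy data: two well-separated classes.
X = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.3), (0.0, 0.2), (0.3, 0.1),
     (5.0, 5.1), (5.2, 5.0), (5.1, 5.3), (5.0, 5.2), (5.3, 5.1)]
y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(cross_validate(X, y, k=3, folds=5))
```

    In the same spirit, the greedy model-selection step mentioned in the abstract would compare such cross-validated scores across the candidate methods (PCA-based pipelines, CART, FF NN, NB, SVM, k-NN) and keep the best one.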
