
    Enabling More Accurate and Efficient Structured Prediction

    Machine learning practitioners often face a fundamental trade-off between expressiveness and computation time: on average, more accurate, expressive models tend to be more computationally intensive at both training and test time. While this trade-off is always applicable, it is acutely present in the setting of structured prediction, where the joint prediction of multiple output variables often creates two primary, inter-related bottlenecks: inference and feature computation time. In this thesis, we address this trade-off at test time by presenting frameworks that enable more accurate and efficient structured prediction by addressing each of the bottlenecks specifically. First, we develop a framework based on a cascade of models, where the goal is to control test-time complexity even as features are added that increase inference time (even exponentially). We call this framework Structured Prediction Cascades (SPC); we develop SPC in the context of exact inference and then extend the framework to handle the approximate case. Next, we develop a framework for the setting where the feature computation is explicitly the bottleneck, in which we learn to selectively evaluate features within an instance of the model. This second framework is referred to as Dynamic Structured Model Selection (DMS), and is once again developed for a simpler, restricted model before being extended to handle a much more complex setting. For both cases, we evaluate our methods on several benchmark datasets, and we find that it is possible to dramatically improve the efficiency and accuracy of structured prediction.
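    The cascade idea can be illustrated with a small sketch, not the thesis's SPC implementation: a cheap per-position model prunes labels against a max/mean threshold, and a richer pairwise model then runs exact Viterbi only over the surviving labels. All names, the pruning rule, and the toy scores below are assumptions made for illustration.

    ```python
    # Minimal cascade sketch for sequence labeling (illustrative, not SPC itself):
    # a cheap unary model prunes per-position labels, then a richer model runs
    # Viterbi restricted to the labels that survived pruning.
    import numpy as np

    PRUNE_ALPHA = 0.5  # interpolates between max and mean score when thresholding

    def prune_labels(unary, alpha=PRUNE_ALPHA):
        """Keep, per position, the labels whose cheap score clears the threshold."""
        thresh = alpha * unary.max(axis=1) + (1 - alpha) * unary.mean(axis=1)
        return [np.flatnonzero(unary[t] >= thresh[t]) for t in range(len(unary))]

    def viterbi_pruned(unary, pairwise, kept):
        """Exact Viterbi over the pruned lattice only."""
        T = len(unary)
        best = {(0, y): unary[0, y] for y in kept[0]}
        back = {}
        for t in range(1, T):
            for y in kept[t]:
                s, yp = max((best[(t - 1, yp)] + pairwise[yp, y], yp) for yp in kept[t - 1])
                best[(t, y)] = s + unary[t, y]
                back[(t, y)] = yp
        y = max(kept[T - 1], key=lambda l: best[(T - 1, l)])
        path = [y]
        for t in range(T - 1, 0, -1):
            y = back[(t, y)]
            path.append(y)
        return path[::-1]

    # Toy run: 5 positions, 4 labels; for brevity the "rich" model reuses the
    # cheap unary scores and only adds pairwise transition scores.
    rng = np.random.default_rng(0)
    cheap_unary = rng.normal(size=(5, 4))
    rich_pairwise = rng.normal(size=(4, 4))
    kept = prune_labels(cheap_unary)
    print(viterbi_pruned(cheap_unary, rich_pairwise, kept))
    ```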

    An FPGA based approach for Černý conjecture falsification

    A synchronizing sequence for an automaton is a special input sequence that sends all states of the automaton to the same state. J. Černý conjectured that the length of the shortest synchronizing sequence of an automaton with n states cannot be greater than (n-1)², which is known today as the Černý conjecture. This half-century-old conjecture is still open and is considered the longest-standing open problem in the combinatorial theory of finite state automata. One research line that has been pursued in the literature is to check whether the conjecture holds for a fixed number of states n, by considering all automata with n states and checking whether any of them falsifies the conjecture. This is a computationally intensive task, even for automata with up to a dozen states and only two input symbols. To accelerate the search, parallel computation approaches using multicore CPUs have been tried before. In this thesis, we study the use of FPGAs to accelerate the search for an automaton falsifying the Černý conjecture. We present a design to calculate the minimum-length synchronizing sequence of a finite state automaton. The proposed design exploits the parallel computing capability of hardware while optimizing time performance.
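    For intuition, a shortest synchronizing sequence can be computed by breadth-first search over subsets of states in the power automaton. The sketch below is a plain software illustration, not the FPGA design from the thesis; it checks the classical Černý automaton with n = 4 states, whose shortest reset word is known to have length (4-1)² = 9.

    ```python
    # BFS over subsets of states: the shortest synchronizing word corresponds to
    # the shortest path from the full state set to any singleton. delta[q][symbol]
    # gives the next state of the deterministic automaton.
    from collections import deque

    def shortest_synchronizing_word(delta, n_states, alphabet):
        start = frozenset(range(n_states))
        seen = {start: ""}
        queue = deque([start])
        while queue:
            subset = queue.popleft()
            if len(subset) == 1:                      # all states merged: synchronized
                return seen[subset]
            for a in alphabet:
                nxt = frozenset(delta[q][a] for q in subset)
                if nxt not in seen:
                    seen[nxt] = seen[subset] + a
                    queue.append(nxt)
        return None                                   # automaton is not synchronizing

    # Černý automaton C_4: letter "a" merges states 0 and 1, letter "b" is a cycle.
    n = 4
    delta = {i: {"a": (1 if i == 0 else i), "b": (i + 1) % n} for i in range(n)}
    word = shortest_synchronizing_word(delta, n, "ab")
    print(word, len(word))  # expected length (4-1)^2 = 9
    ```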

    A general framework integrating techniques for scheduling under uncertainty

    In recent years, a number of research investigations on task planning and scheduling under uncertainty have been conducted. This research domain comprises a large number of models, resolution techniques, and systems, and it is difficult to compare them since the existing terminologies are incomplete. We have, however, identified general families of approaches that can be used to structure the literature along three orthogonal axes. This new classification of the state of the art is based on the way decisions are taken. In addition, we propose a generation-and-execution model for scheduling under uncertainty that combines these three families of approaches. This model is an automaton that develops when the current schedule is no longer executable or when particular conditions are met. The third part of this thesis concerns the experimental study we conducted. On top of ILOG Solver and Scheduler, we implemented a software prototype in C++, directly instantiated from our generation-and-execution model. We present new probabilistic scheduling problems and a constraint-based approach combined with simulation to solve some instances of them.
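    As a rough illustration of the generation-and-execution loop, and not the ILOG-based C++ prototype described above, the sketch below executes a schedule under uncertain durations and regenerates a schedule over the remaining tasks whenever the next task no longer fits before a deadline. The task data, the deadline, and the greedy stand-in "solver" are all assumptions.

    ```python
    # Execute-monitor-regenerate loop: the automaton branches (regenerates a
    # schedule) when the current schedule is no longer executable.
    import random

    def regenerate(tasks):
        # Stand-in for re-solving with a constraint-based scheduler: greedily
        # order the remaining tasks by nominal duration.
        return sorted(tasks, key=lambda t: t["nominal"])

    def executable(task, elapsed, deadline):
        return elapsed + task["nominal"] <= deadline

    def execute(tasks, deadline=20.0):
        schedule, elapsed, done = list(tasks), 0.0, []   # offline schedule = given order
        while schedule:
            if not executable(schedule[0], elapsed, deadline):
                # Branch point: the current schedule is no longer executable,
                # so a new schedule is generated over what remains.
                schedule = regenerate(schedule)
                if not executable(schedule[0], elapsed, deadline):
                    break  # nothing that remains fits before the deadline
            task = schedule.pop(0)
            # Simulated actual duration deviates from its nominal value (the uncertainty).
            elapsed += task["nominal"] * random.uniform(0.8, 1.5)
            done.append(task["name"])
        return done, round(elapsed, 1)

    print(execute([{"name": f"t{i}", "nominal": d} for i, d in enumerate([6, 3, 5, 2])]))
    ```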

    A Phase Change Memory and DRAM Based Framework For Energy-Efficient and High-Speed In-Memory Stochastic Computing

    Convolutional Neural Networks (CNNs) have proven to be highly effective in various fields related to Artificial Intelligence (AI) and Machine Learning (ML). However, the significant computational and memory requirements of CNNs make their processing highly compute- and memory-intensive. In particular, the multiply-accumulate (MAC) operation, which is a fundamental building block of CNNs, requires an enormous number of arithmetic operations. As the input dataset size increases, the traditional processor-centric von Neumann computing architecture becomes ill-suited for CNN-based applications. This results in exponentially higher latency and energy costs, making the processing of CNNs highly challenging. To overcome these challenges, researchers have explored the Processing-In-Memory (PIM) technique, which involves placing the processing unit inside or near the memory unit. This approach reduces data migration length and utilizes the internal memory bandwidth at the memory chip level. However, developing a reliable PIM-based system with minimal hardware modifications and design complexity remains a significant challenge. The proposed solution in this report suggests utilizing different memory technologies, such as Dynamic RAM (DRAM) and phase change memory (PCM), with stochastic arithmetic and minimal add-on logic. Stochastic computing is a technique that uses random numbers to perform arithmetic operations instead of traditional binary representation. This technique reduces the hardware requirements of CNN arithmetic operations, making it possible to implement them with minimal add-on logic. The report details the workflow for performing the arithmetic operations used by CNNs, including MAC, activation, and floating-point functions. The proposed solution includes designs for a scalable Stochastic Number Generator (SNG), a DRAM CNN accelerator, a non-volatile memory (NVM) class PCRAM-based CNN accelerator, and DRAM-based stochastic-to-binary conversion (StoB) for in-situ deep learning. These designs use stochastic computing to reduce the hardware requirements of CNN arithmetic operations and enable energy- and time-efficient processing of CNNs. The report also identifies future research directions for the proposed designs, including in-situ PCRAM-based SNG, ODIN (A Bit-Parallel Stochastic Arithmetic Based Accelerator for In-Situ Neural Network Processing in Phase Change RAM), ATRIA (Bit-Parallel Stochastic Arithmetic Based Accelerator for In-DRAM CNN Processing), and AGNI (In-Situ, Iso-Latency Stochastic-to-Binary Number Conversion for In-DRAM Deep Learning), and presents initial findings for these ideas. In summary, the proposed solution offers a comprehensive approach to addressing the challenges of processing CNNs, and the proposed designs have the potential to significantly improve the energy and time efficiency of CNNs. Using stochastic computing and different memory technologies enables the development of reliable PIM-based systems with minimal hardware modifications and design complexity, providing a promising path for the future of CNN-based applications.
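    The stochastic-computing principle these designs rely on can be shown in a few lines of software; the following is a conceptual sketch only, not the DRAM or PCM hardware. A value in [0, 1] is encoded as a random bitstream whose fraction of 1s equals the value, so an AND gate multiplies two values and a multiplexer performs scaled addition, which together realize a MAC step. The bitstream length and toy operands are assumptions.

    ```python
    # Unipolar stochastic computing: multiplication via AND, scaled addition via MUX.
    import numpy as np

    rng = np.random.default_rng(42)
    N = 4096  # bitstream length; accuracy improves as N grows

    def to_stream(x, n=N):
        """Encode x in [0, 1] as a bitstream where each bit is 1 with probability x."""
        return rng.random(n) < x

    def from_stream(bits):
        return bits.mean()

    def sc_multiply(a, b):
        # AND of two independent streams has P(1) = a * b.
        return to_stream(a) & to_stream(b)

    def sc_scaled_add(bits_a, bits_b):
        # A MUX with a fair random select outputs a stream encoding (a + b) / 2.
        sel = rng.random(len(bits_a)) < 0.5
        return np.where(sel, bits_a, bits_b)

    # One MAC step as used in a CNN dot product: result encodes (w1*x1 + w2*x2) / 2.
    w1, x1, w2, x2 = 0.8, 0.5, 0.4, 0.9
    mac = sc_scaled_add(sc_multiply(w1, x1), sc_multiply(w2, x2))
    print(from_stream(mac), (w1 * x1 + w2 * x2) / 2)  # stochastic estimate vs exact
    ```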

    Towards identifying salient patterns in genetic programming individuals

    This thesis addresses the problem of offline identification of salient patterns in genetic programming individuals. It discusses the main issues related to automatic pattern identification systems, namely that such systems should (a) help in understanding the final solutions of the evolutionary run, (b) give insight into the course of evolution, and (c) be helpful in optimizing future runs. Moreover, it proposes an algorithm, the Extended Pattern Growing Algorithm ([E]PGA), to extract, filter, and sort the identified patterns so that they fulfill as many of the following criteria as possible: (a) they are representative of the evolutionary run and/or search space, (b) they are human-friendly, and (c) their numbers are within reasonable limits. The results are demonstrated on six problems from different domains.
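    A highly simplified view of pattern extraction, not the thesis's [E]PGA itself, is to enumerate subtrees across a population of genetic-programming individuals, count their support, and keep the frequent, non-trivial ones sorted by how often they occur. The toy expression trees and thresholds below are assumptions.

    ```python
    # Count subtree patterns across GP individuals represented as nested tuples,
    # e.g. ("+", ("*", "x", "x"), "1"), then filter and sort by support.
    from collections import Counter

    def subtrees(tree):
        yield tree
        if isinstance(tree, tuple):
            for child in tree[1:]:
                yield from subtrees(child)

    def salient_patterns(population, min_support=2):
        counts = Counter(p for ind in population for p in subtrees(ind))
        keep = [(p, c) for p, c in counts.items()
                if c >= min_support and isinstance(p, tuple)]   # drop bare leaves
        return sorted(keep, key=lambda pc: pc[1], reverse=True)

    population = [
        ("+", ("*", "x", "x"), "1"),
        ("-", ("*", "x", "x"), ("+", "x", "1")),
        ("*", ("+", "x", "1"), ("*", "x", "x")),
    ]
    for pattern, support in salient_patterns(population):
        print(support, pattern)   # ("*", "x", "x") appears in all three individuals
    ```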

    Oblivious Sensor Fusion via Secure Multi-Party Combinatorial Filter Evaluation

    This thesis examines the problem of fusing data from several sensors, potentially distributed throughout an environment, in order to consolidate readings into a single coherent view. We consider the setting where sensor units do not wish others to know their specific sensor streams. Standard methods for handling this fusion make no guarantees about what a curious observer may learn. Motivated by applications where data sources may only choose to participate if given privacy guarantees, we introduce a fusion approach that limits what can be inferred. Our approach is to form an aggregate stream, oblivious to the underlying sensor data, and to evaluate a combinatorial filter on that stream. This is achieved via secure multi-party computational techniques built on cryptographic primitives, which we extend and apply to the problem of fusing discrete sensor signals. We prove that the extensions preserve security under the semi-honest adversary model. Though the approach enables several applications of potential interest, we specifically consider a target tracking case study as a running example. Finally, we report on a basic proof-of-concept implementation demonstrating that the approach can operate in practice; we analyze the empirical running times of the components in the architecture and suggest directions for future improvement.
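    In the clear, i.e., without the secure multi-party machinery the thesis actually uses, a combinatorial filter is simply a finite-state machine driven by the aggregate stream of discrete sensor events. The toy target-tracking filter below, with invented room and event labels, shows the kind of object being evaluated; it makes no attempt at obliviousness.

    ```python
    # A combinatorial filter as a finite-state machine over fused sensor events:
    # two rooms joined by a doorway whose beam sensor reports crossing direction.
    TRANSITIONS = {
        ("room_A", "cross_to_B"): "room_B",
        ("room_B", "cross_to_A"): "room_A",
    }

    def run_filter(start, fused_stream):
        state = start
        for event in fused_stream:
            state = TRANSITIONS.get((state, event), state)  # irrelevant events are ignored
        return state

    # Aggregate stream formed by interleaving events from several sensor units.
    stream = ["cross_to_B", "noise", "cross_to_A", "cross_to_B"]
    print(run_filter("room_A", stream))  # -> room_B
    ```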

    Recommender Systems

    The ongoing rapid expansion of the Internet greatly increases the necessity of effective recommender systems for filtering the abundant information. Extensive research on recommender systems is conducted by a broad range of communities, including social and computer scientists, physicists, and interdisciplinary researchers. Despite substantial theoretical and practical achievements, unification and comparison of different approaches are lacking, which impedes further advances. In this article, we review recent developments in recommender systems and discuss the major challenges. We compare and evaluate available algorithms and examine their roles in future developments. In addition to algorithms, physical aspects are described to illustrate macroscopic behavior of recommender systems. Potential impacts and future directions are discussed. We emphasize that recommendation has great scientific depth and combines diverse research fields, which makes it of interest to physicists as well as interdisciplinary researchers.
    Comment: 97 pages, 20 figures (to appear in Physics Reports).
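    As a concrete instance of one algorithm family such a review compares, the sketch below implements user-based collaborative filtering with cosine similarity on a toy rating matrix. It is illustrative only and not drawn from the article, which surveys many further methods (item-based filtering, spectral and diffusion approaches, hybrid schemes).

    ```python
    # User-based collaborative filtering: predict a missing rating from the ratings
    # of the k most similar users who rated the item, weighted by cosine similarity.
    import numpy as np

    # Rows are users, columns items; 0 means "not rated". Toy data.
    R = np.array([[5, 3, 0, 1],
                  [4, 0, 0, 1],
                  [1, 1, 0, 5],
                  [0, 1, 5, 4]], dtype=float)

    def cosine_sim(M):
        norm = np.linalg.norm(M, axis=1, keepdims=True)
        norm[norm == 0] = 1.0
        U = M / norm
        return U @ U.T

    def predict(R, user, item, k=2):
        sims = cosine_sim(R)[user]
        raters = np.flatnonzero(R[:, item] > 0)            # users who rated this item
        top = raters[np.argsort(sims[raters])[::-1][:k]]   # k most similar raters
        if len(top) == 0 or sims[top].sum() == 0:
            return R[R > 0].mean()                          # fall back to the global mean
        return float(sims[top] @ R[top, item] / sims[top].sum())

    print(round(predict(R, user=1, item=2), 2))  # predicted rating of item 2 for user 1
    ```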