16 research outputs found

    A General Approach to Boolean Function Decomposition and its Application in FPGA-Based Synthesis

    An effective logic synthesis procedure based on parallel and serial decomposition of a Boolean function is presented in this paper. The decomposition, carried out as the very first step of the synthesis process, is based on an original representation of the function by a set of r-partitions over the set of minterms. Two different decomposition strategies, serial and parallel, are exploited by striking a balance between the two. The presented procedure can be applied to completely or incompletely specified, single- or multiple-output functions and is suitable for different types of FPGAs, including XILINX, ACTEL and ALGOTRONIX devices. The results of the benchmark experiments presented in the paper show that, in several cases, our method produces circuits of significantly reduced complexity compared to the solutions reported in the literature.
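    A minimal sketch of the classical serial-decomposition test that such procedures generalize: f(B, F) admits a simple disjoint decomposition f = h(g(B), F), with a single-output g, exactly when fixing the bound set B yields at most two distinct residual functions of the free set F. All names below are ours, and this check is far simpler than the paper's r-partition representation.

        from itertools import product

        def residual_rows(f, n_bound, n_free):
            # One row per bound-set assignment; each row is the residual
            # function of the free variables under that assignment.
            rows = []
            for b in product([0, 1], repeat=n_bound):
                rows.append(tuple(f(*b, *fr) for fr in product([0, 1], repeat=n_free)))
            return rows

        def has_simple_serial_decomposition(f, n_bound, n_free):
            # f = h(g(bound), free) exists iff there are at most 2 distinct rows.
            return len(set(residual_rows(f, n_bound, n_free))) <= 2

        # Example: f(a, b, c) = (a AND b) XOR c decomposes with g = a AND b.
        f = lambda a, b, c: (a & b) ^ c
        print(has_simple_serial_decomposition(f, n_bound=2, n_free=1))  # True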

    Implementation of Large Neural Networks Using Decomposition

    The article presents methods of dealing with very large data in the domain of neural networks. The decomposition of neural networks is introduced and its efficiency is demonstrated by the authors' experiments. Examinations of the effectiveness of argument reduction in this field are also presented. The authors indicate that decomposition is capable of reducing the size and the complexity of the learned data, and thus makes the learning process faster or, when dealing with large data, possible at all. According to the authors' experiments, in some cases argument reduction makes the learning process harder.
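    A hedged illustration of the two ideas combined here, argument reduction followed by training smaller networks per output, using scikit-learn on tabular data; the one-column-at-a-time redundancy test and all helper names are ours, not the authors' procedure.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def removable(X, y, j):
            # Column j is redundant (given the others) if the labels are
            # still a function of the remaining columns. Dropping several
            # columns found redundant one at a time is a simplification.
            rest = np.delete(X, j, axis=1)
            seen = {}
            for row, label in zip(map(tuple, rest), y):
                if seen.setdefault(row, label) != label:
                    return False
            return True

        def train_decomposed(X, Y):
            # One small network per output column, each on reduced inputs,
            # instead of a single large network on everything.
            models = []
            for k in range(Y.shape[1]):
                cols = [j for j in range(X.shape[1]) if not removable(X, Y[:, k], j)]
                cols = cols or list(range(X.shape[1]))  # fall back if all look redundant
                net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000)
                net.fit(X[:, cols], Y[:, k])
                models.append((cols, net))
            return models

        # Example: two outputs over four binary inputs; each ignores some inputs.
        rng = np.random.default_rng(0)
        X = rng.integers(0, 2, size=(200, 4))
        Y = np.stack([X[:, 0] ^ X[:, 1], X[:, 1] & X[:, 2]], axis=1)
        models = train_decomposed(X, Y)
        print([cols for cols, _ in models])  # reduced input sets per output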

    Division-based versus general decomposition-based multiple-level logic synthesis

    During the last decade, many different approaches have been proposed to solve the multiple-level synthesis problem with different minimum functionally complete systems of primitive logic blocks. The most popular of them is the division-based approach. However, modern microelectronic technology provides a large variety of building blocks which differ considerably from those typically considered. The traditional methods are therefore not suitable for synthesis with many modern building blocks. Furthermore, they often fail to find global optima for complex designs and leave some important design aspects unconsidered. Some of their weaknesses can be eliminated without leaving the paradigm they are based on; others are more fundamental. A paradigm which enables efficient exploitation of the opportunities created by microelectronic technology is the general decomposition paradigm. The aim of this paper is to analyze and compare the general decomposition approach and the division-based approach. The most important advantages of the general decomposition approach are its generality (any network of any building blocks can be considered) and totality (all important design aspects can be considered), as well as its natural handling of incompletely specified functions. In many cases, the general decomposition approach gives much better results than the traditional approaches.
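    For concreteness, the division-based paradigm contrasted here factors a sum-of-products f as f = d*q + r via algebraic (weak) division. A minimal sketch, with cubes represented as frozensets of literals (a toy encoding of ours, not any particular tool's):

        def algebraic_divide(f, d):
            # f, d: lists of cubes; a cube is a frozenset of literals.
            partial_quotients = []
            for cd in d:
                partial_quotients.append({c - cd for c in f if cd <= c})
            quotient = set.intersection(*partial_quotients) if partial_quotients else set()
            covered = {q | cd for q in quotient for cd in d}
            remainder = [c for c in f if c not in covered]
            return quotient, remainder

        # f = ab + ac + ad + e divided by d = b + c + d gives q = a, r = e,
        # i.e. f = a(b + c + d) + e.
        f = [frozenset('ab'), frozenset('ac'), frozenset('ad'), frozenset('e')]
        d = [frozenset('b'), frozenset('c'), frozenset('d')]
        print(algebraic_divide(f, d))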

    Second Generation General System Theory: Perspectives in Philosophy and Approaches in Complex Systems

    Following the classical work of Norbert Wiener, Ross Ashby, Ludwig von Bertalanffy and many others, the concept of System has been elaborated in different disciplinary fields, allowing interdisciplinary approaches in areas such as Physics, Biology, Chemistry, Cognitive Science, Economics, Engineering, Social Sciences, Mathematics, Medicine, Artificial Intelligence, and Philosophy. The new challenge of Complexity and Emergence has made the concept of System even more relevant to the study of problems with high contextuality. This Special Issue focuses on the nature of new problems arising from the study and modelling of complexity, their possible common aspects, properties and approaches (already partially considered by different disciplines), as well as on new, possibly unitary, theoretical frameworks. This Special Issue aims to give fresh impetus to systems research in cases where the detection and correction of mistakes require the development of new knowledge. This book contains contributions presenting new approaches and results, problems and proposals. The context is an interdisciplinary framework dealing, in order, with electronic engineering problems; the problem of the observer; transdisciplinarity; problems of organised complexity; theoretical incompleteness; design of digital systems in a user-centred way; reaction networks as a framework for systems modelling; emergence of a stable system in reaction networks; emergence at the fundamental systems level; and behavioural realization of memoryless functions.

    36th International Symposium on Theoretical Aspects of Computer Science: STACS 2019, March 13-16, 2019, Berlin, Germany


    Computer Aided Verification

    This open access two-volume set, LNCS 13371 and 13372, constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers are organized in the following topical sections. Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.

    Some representation learning tasks and the inspection of their models

    Today, the field of machine learning knows a wide range of tasks with a wide range of supervision sources, ranging from traditional classification tasks with neatly labeled data, over data with noisy labels, to data with no labels at all, where we have to rely on other forms of supervision, such as self-supervision.

    In the first part of this thesis, we design machine learning tasks for applications where we do not immediately have access to neatly labeled training data. First, we design unsupervised representation learning tasks for training embedding models for mathematical expressions that allow retrieval of related formulae. We train convolutional neural networks, transformer models and graph neural networks to embed formulas from scientific articles into a real-valued vector space using contextual similarity tasks as well as self-supervised tasks. We base our studies on a novel dataset of over 28 million formulae that we have extracted from scientific articles published on arXiv.org. We represent the formulas in different input formats (images, sequences or trees) depending on the embedding model. We compile an evaluation dataset with annotated search queries from several different disciplines and showcase the usefulness of our approach for deploying a search engine for mathematical expressions.

    Second, we investigate machine learning tasks in astrophysics. Prediction models are currently trained on simulated data, with hand-crafted features, using multiple single-task models. In contrast, we build a single multi-task convolutional neural network that works directly on telescope images and uses convolution layers to learn suitable feature representations automatically. We design loss functions for each task and propose a novel way to combine the different loss functions to account for their different scales and behaviors. Next, we explore another form of supervision that does not rely on simulated training data but learns from actual telescope recordings. Through the framework of noisy label learning, we propose an approach for learning gamma-hadron classifiers that outperforms existing classifiers trained on simulated, fully labeled data. Our method is general: it can be used for training models in scenarios that fit our noise assumption of class-conditional label noise with exactly one known noise probability.

    In the second part of this work, we develop methods to inspect models and gain trust in their decisions. We focus on large, non-linear models that can no longer be understood in their entirety through plain inspection of their trainable parameters. We investigate three approaches for establishing trust in models. First, we propose a method to highlight influential input nodes for similarity computations performed by graph neural networks. We test this approach with our embedding models for retrieval of related formulas and show that it can help understand the similarity scores computed by the models. Second, we investigate explanation methods that provide explanations based on the training process that produced the model. This way, we provide explanations that are not merely an approximation of the computation of the prediction function, but an investigation into why the model learned to produce an output grounded in the actual data. We propose two different methods for tracking the training process and show how they can be easily implemented within existing deep learning frameworks. Third, we contribute a method to verify the adversarial robustness of random forest classifiers. Our method is based on knowledge distillation of a random forest model into a decision tree model. We bound the approximation error of using the decision tree as a proxy model for the given random forest and use these bounds to provide guarantees on the adversarial robustness of the random forest. Consequently, our robustness guarantees are approximate, but we can provably control the quality of our results using a hyperparameter.
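    A hedged sketch of the distillation step behind the third contribution, using scikit-learn: fit a single decision tree to mimic a random forest, then use it as a proxy for robustness probing. The thesis additionally bounds the tree/forest approximation error to make the guarantee provable; the random-sampling probe and all parameter choices below are ours, for illustration only.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
        forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

        # Distillation: train the tree on the forest's own predictions,
        # over the data plus jittered copies to densify coverage.
        rng = np.random.default_rng(0)
        X_aug = np.vstack([X, X + rng.normal(0, 0.1, X.shape)])
        tree = DecisionTreeClassifier(max_depth=8).fit(X_aug, forest.predict(X_aug))
        print("proxy fidelity:", (tree.predict(X) == forest.predict(X)).mean())

        def probably_robust(model, x, eps, trials=500):
            # Crude probe: does any sampled point in the L-inf ball of
            # radius eps around x flip the proxy's prediction?
            noise = rng.uniform(-eps, eps, size=(trials, x.size))
            preds = model.predict(x + noise)
            return bool((preds == model.predict(x.reshape(1, -1))[0]).all())

        print(probably_robust(tree, X[0], eps=0.05))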

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway's Life according to a general MR streaming pattern. We chose Life because it is simple enough to serve as a testbed for MR's applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms' performance on Amazon's Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous Life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
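    A hedged sketch of one discrete-Life generation in the MR streaming style the paper builds on: live cells arrive one per line as "row,col", the map phase scatters neighbor credits, and the reduce phase applies the birth/survival rule. Real streaming jobs run map and reduce as separate processes with a shuffle in between, and the paper's strip-partitioning optimization keys on whole row strips rather than single cells; the naive cell-keyed version below is ours.

        import sys
        from collections import defaultdict

        def mapper(lines):
            # Each live cell marks itself and sends one neighbor credit
            # to each of its 8 neighbors.
            for line in lines:
                r, c = map(int, line.split(','))
                yield (r, c), 'alive'
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        if (dr, dc) != (0, 0):
                            yield (r + dr, c + dc), 1

        def reducer(pairs):
            # Birth on exactly 3 credits; survival on 2 credits if alive.
            tally = defaultdict(lambda: [0, False])
            for key, val in pairs:
                if val == 'alive':
                    tally[key][1] = True
                else:
                    tally[key][0] += val
            for (r, c), (n, alive) in sorted(tally.items()):
                if n == 3 or (alive and n == 2):
                    print(f"{r},{c}")

        if __name__ == '__main__':
            # In-process stand-in for the map -> shuffle -> reduce pipeline.
            reducer(mapper(sys.stdin))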