
    Automatic generation of hardware Tree Classifiers

    Machine Learning is growing in popularity and spreading across different fields for various applications. Due to this trend, machine learning algorithms are being run and evaluated on different hardware platforms to obtain high test accuracy and throughput. FPGAs are a well-suited hardware platform for machine learning because of their re-programmability and lower power consumption. However, programming FPGAs for machine learning algorithms requires substantial engineering time and effort compared to a software implementation. We propose a software-assisted design flow to program FPGAs for machine learning algorithms using our hardware library. The hardware library is highly parameterized and accommodates tree classifiers. As of now, our library consists of the components required to implement decision trees and random forests. The whole automation is wrapped in a Python script that takes the user from the first step of having a dataset and design choices to the last step of having hardware description code for the trained machine learning model.
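    As a minimal sketch of the kind of automation this abstract describes, the snippet below walks a trained scikit-learn decision tree and emits a nested if/else block in a hardware-description style. The emitted pseudo-HDL and the function name are illustrative assumptions, not the authors' hardware library.

```python
# Sketch: emit a hardware-style if/else tree from a trained decision tree.
# The pseudo-Verilog output format is a made-up illustration.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

def tree_to_hdl(clf, node=0, depth=1):
    """Recursively emit nested if/else comparisons for each split node."""
    t = clf.tree_
    pad = "  " * depth
    if t.children_left[node] == -1:  # leaf: output the majority class
        return f"{pad}label = {t.value[node].argmax()};\n"
    cond = f"feature[{t.feature[node]}] <= {t.threshold[node]:.4f}"
    return (f"{pad}if ({cond}) begin\n"
            + tree_to_hdl(clf, t.children_left[node], depth + 1)
            + f"{pad}end else begin\n"
            + tree_to_hdl(clf, t.children_right[node], depth + 1)
            + f"{pad}end\n")

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(tree_to_hdl(clf))  # nested comparisons, one per tree split
```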

    Affective games: a multimodal classification system

    Affective gaming is a relatively new field of research that exploits human emotions to influence gameplay for an enhanced player experience. Changes in a player's psychology are reflected in their behaviour and physiology, hence the recognition of such variation is a core element of affective games. Complementary sources of affect offer more reliable recognition, especially in contexts where one modality is partial or unavailable. As multimodal recognition systems, affect-aware games are subject to the practical difficulties met by traditional trained classifiers. In addition, inherent game-related challenges in terms of data collection and performance arise while attempting to sustain an acceptable level of immersion. Most existing scenarios employ sensors that offer limited freedom of movement, resulting in less realistic experiences. Recent advances now offer technology that allows players to communicate more freely and naturally with the game and, furthermore, to control it without the use of input devices. However, the affective game industry is still in its infancy and needs to catch up with the current life-like level of adaptation provided by graphics and animation.
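    A hedged sketch of the decision-level fusion idea behind multimodal affect recognition follows: two per-modality classifiers whose probabilities are averaged, with fallback to a single modality when the other is missing. The feature dimensions, affect labels, and fusion weights are illustrative assumptions, not the paper's system.

```python
# Sketch: late fusion of two affect modalities with a missing-modality fallback.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_physio = rng.normal(size=(200, 8))  # e.g. heart rate / skin conductance features
X_behav = rng.normal(size=(200, 5))   # e.g. input timing / movement features
y = rng.integers(0, 2, size=200)      # e.g. 0 = calm, 1 = stressed (toy labels)

clf_p = LogisticRegression().fit(X_physio, y)
clf_b = LogisticRegression().fit(X_behav, y)

def fused_predict(xp, xb, w_p=0.5):
    """Weighted average of per-modality class probabilities; if one modality
    is None, fall back to the other (the 'partial modality' case)."""
    if xp is None:
        return clf_b.predict_proba(xb).argmax(axis=1)
    if xb is None:
        return clf_p.predict_proba(xp).argmax(axis=1)
    proba = w_p * clf_p.predict_proba(xp) + (1 - w_p) * clf_b.predict_proba(xb)
    return proba.argmax(axis=1)

print(fused_predict(X_physio[:5], X_behav[:5]))
```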

    Building an Emulation Environment for Cyber Security Analyses of Complex Networked Systems

    Computer networks are undergoing phenomenal growth, driven by the rapidly increasing number of nodes constituting them. At the same time, the number of security threats on Internet and intranet networks is constantly growing, and the testing and experimentation of cyber defense solutions require separate test environments that best emulate the complexity of a real system. Such environments support the deployment and monitoring of complex mission-driven network scenarios, thus enabling the study of cyber defense strategies under realistic and controllable traffic and attack scenarios. In this paper, we propose a methodology that combines techniques of network and security assessment with cloud technologies to build an emulation environment whose degree of affinity with actual reference networks or planned systems is adjustable. As a byproduct, starting from a specific case study, we collected a dataset consisting of complete network traces comprising benign and malicious traffic, which is feature-rich and publicly available.
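    To make the notion of programmatic network emulation concrete, here is a minimal sketch using Mininet, a common open-source emulator. This is an assumption-laden stand-in: the paper's methodology builds on cloud technologies, not necessarily Mininet, and the star topology here is invented for illustration.

```python
# Sketch: emulate a small star network and generate benign background traffic.
# Requires a Linux host with Mininet installed, run as root; not the paper's setup.
from mininet.net import Mininet
from mininet.topo import Topo

class StarTopo(Topo):
    def build(self, n=3):
        switch = self.addSwitch('s1')
        for i in range(n):
            host = self.addHost('h%d' % (i + 1))
            self.addLink(host, switch)  # one link per host to the central switch

net = Mininet(topo=StarTopo())
net.start()
net.pingAll()  # all-pairs pings as simple benign traffic
net.stop()
```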

    Towards an Achievable Performance for the Loop Nests

    Numerous code optimization techniques, including loop nest optimizations, have been developed over the last four decades. Loop optimization techniques transform loop nests to improve the performance of the code on a target architecture, including by exposing parallelism. Finding and evaluating an optimal, semantics-preserving sequence of transformations is a complex problem: the sequence is guided by heuristics and/or analytical models, and there is no way of knowing how close it gets to optimal performance or whether there is any headroom for improvement. This paper makes two contributions. First, it uses a comparative analysis of loop optimizations/transformations across multiple compilers to determine how much headroom may exist for each compiler. Second, it presents an approach to characterize loop nests based on their hardware performance counter values, together with a Machine Learning approach that predicts which compiler will generate the fastest code for a given loop nest. The prediction is made both for auto-vectorized, serial compilation and for auto-parallelization. The results show that the headroom for state-of-the-art compilers ranges from 1.10x to 1.42x for serial code and from 1.30x to 1.71x for auto-parallelized code. These results are based on the Machine Learning predictions.
    Comment: Accepted at the 31st International Workshop on Languages and Compilers for Parallel Computing (LCPC 2018)
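    A minimal sketch of the second contribution, under stated assumptions, might look as follows: a classifier mapping a loop nest's hardware performance counter values to the compiler predicted to produce the fastest code. The counter features, compiler labels, and random data are placeholders, not the paper's feature set or model.

```python
# Sketch: predict the fastest compiler for a loop nest from counter features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Rows: loop nests; columns: e.g. cache misses, branch mispredicts, FLOPs, ...
X = rng.random((300, 6))
# Label: index of the fastest compiler per loop nest (e.g. 0, 1, 2 for three compilers)
y = rng.integers(0, 3, size=300)

model = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(model, X, y, cv=5).mean())  # estimated prediction accuracy
```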

    Mining State-Based Models from Proof Corpora

    Interactive theorem provers have been used extensively to reason about various software/hardware systems and mathematical theorems. The key challenge when using an interactive prover is that finding a suitable sequence of proof steps that will lead to a successful proof requires a significant amount of human intervention. This paper presents an automated technique that takes examples of successful proofs as input and infers an Extended Finite State Machine as output. This can in turn be used to generate proofs of new conjectures. Our preliminary experiments show that the inferred models are generally accurate (they contain few false-positive sequences) and that representing existing proofs in this way can be very useful when guiding new ones.
    Comment: To appear at the Conferences on Intelligent Computer Mathematics 201
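    The sketch below illustrates the flavor of this idea under simplifying assumptions: it learns observed transitions between proof steps from successful proofs and flags candidate sequences containing unseen transitions. Real EFSM inference also learns guards over data, and the tactic names here are made up.

```python
# Sketch: a state-machine-like model mined from successful proof step sequences.
from collections import defaultdict

proofs = [
    ["induct", "simp", "auto"],
    ["induct", "simp", "blast"],
    ["intro", "simp", "auto"],
]

# States are "last tactic applied"; collect every observed transition.
transitions = defaultdict(set)
for proof in proofs:
    prev = "START"
    for step in proof:
        transitions[prev].add(step)
        prev = step

def accepts(candidate):
    """True if every consecutive step pair was observed in the training proofs."""
    prev = "START"
    for step in candidate:
        if step not in transitions[prev]:
            return False
        prev = step
    return True

print(accepts(["induct", "simp", "blast"]))  # True: all transitions were seen
print(accepts(["auto", "induct"]))           # False: no proof started with "auto"
```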

    Microservices and Machine Learning Algorithms for Adaptive Green Buildings

    In recent years, the use of services for Open Systems development has consolidated and strengthened. Advances in the Service Science and Engineering (SSE) community, promoted by the reinforcement of Web Services and Semantic Web technologies and the emergence of new Cloud computing techniques, such as the proliferation of microservices solutions, have allowed software architects to experiment with and develop new ways of building open computer systems that adapt at runtime. Home automation, intelligent buildings, robotics, and graphical user interfaces are some of the environments in which it is suitable to apply these innovative trends. This paper presents a schema for the adaptation of Dynamic Computer Systems (DCS) using interdisciplinary techniques from model-driven engineering, service engineering, and soft computing. The proposal manages an orchestrated microservices schema for adapting component-based software architectural systems at runtime. This schema has been developed as a three-layer adaptive transformation process supported by a rule-based decision-making service implemented by means of Machine Learning (ML) algorithms. The experimental development was carried out at the Solar Energy Research Center (CIESOL), applying the proposed microservices schema to adapt home architectural systems in Green Buildings.
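    As a hedged sketch of what an ML-backed decision-making microservice could look like, the snippet below wraps a small decision tree in an HTTP endpoint. The sensor features, action labels, and /decide endpoint are hypothetical; the paper's CIESOL deployment and three-layer schema are far richer.

```python
# Sketch: a rule-based decision microservice backed by a trained ML model.
from flask import Flask, jsonify, request
from sklearn.tree import DecisionTreeClassifier

# Toy training data: [indoor temp C, solar irradiance W/m2, occupancy flag]
X = [[21, 100, 0], [29, 800, 1], [26, 600, 1], [19, 50, 0]]
y = ["idle", "close_blinds", "ventilate", "heat"]
model = DecisionTreeClassifier().fit(X, y)

app = Flask(__name__)

@app.route("/decide", methods=["POST"])
def decide():
    # Expects JSON such as {"temp": 27, "irradiance": 700, "occupancy": 1}
    reading = request.get_json()
    features = [[reading["temp"], reading["irradiance"], reading["occupancy"]]]
    return jsonify(action=model.predict(features)[0])

if __name__ == "__main__":
    app.run(port=5000)
```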

    A committee machine gas identification system based on dynamically reconfigurable FPGA

    This paper proposes a gas identification system based on a committee machine (CM) classifier, which combines various gas identification algorithms to obtain a unified decision with improved accuracy. The CM combines five different classifiers: K nearest neighbors (KNN), multilayer perceptron (MLP), radial basis function (RBF), Gaussian mixture model (GMM), and probabilistic principal component analysis (PPCA). Experiments on real sensor data proved the effectiveness of our system, with improved accuracy over the individual classifiers. Due to the computationally intensive nature of the CM, its implementation requires significant hardware resources. In order to overcome this problem, we propose a novel time-multiplexed hardware implementation using a dynamically reconfigurable field programmable gate array (FPGA) platform. The processing is divided into three stages: sampling and preprocessing, pattern recognition, and decision. Dynamic FPGA reconfiguration is used to implement the system in a sequential manner, thus using limited hardware resources of the FPGA chip. The system was successfully tested for a combustible gas identification application using our in-house tin-oxide gas sensors.
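    A hedged software sketch of the committee machine idea follows: majority voting over heterogeneous classifiers. It uses scikit-learn stand-ins (an RBF-kernel SVM in place of an RBF network; the GMM and PPCA members are omitted), so it approximates rather than reproduces the paper's five-member committee.

```python
# Sketch: majority-vote committee over heterogeneous classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Toy stand-in for gas sensor feature vectors with three gas classes.
X, y = make_classification(n_samples=400, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

committee = VotingClassifier(
    estimators=[("knn", KNeighborsClassifier(n_neighbors=5)),
                ("mlp", MLPClassifier(max_iter=1000, random_state=0)),
                ("rbf", SVC(kernel="rbf"))],
    voting="hard",  # unified decision by majority vote
)
committee.fit(X_tr, y_tr)
print("committee accuracy:", committee.score(X_te, y_te))
```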

    AROMA: Automatic Generation of Radio Maps for Localization Systems

    WLAN localization has recently become an active research field. Due to the wide deployment of WLANs, WLAN localization provides ubiquitous coverage and adds to the value of the wireless network by providing the location of its users without any additional hardware. However, WLAN localization systems usually require constructing a radio map, which is a major barrier to their deployment. The radio map stores information about the signal strength from different signal strength streams at selected locations in the site of interest. Typical construction of a radio map involves measurements and calibration, making it a tedious and time-consuming operation. In this paper, we present the AROMA system, which automatically constructs accurate active and passive radio maps for both device-based and device-free WLAN localization systems. AROMA has three main goals: high accuracy, low computational requirements, and minimum user overhead. To achieve high accuracy, AROMA uses 3D ray tracing enhanced with the uniform theory of diffraction (UTD) to model the electric field behavior and the human shadowing effect. AROMA also automates a number of routine tasks, such as importing building models and sampling the area of interest, to reduce the user's overhead. Finally, AROMA uses a number of optimization techniques to reduce the computational requirements. We present our system architecture and describe the details of its different components that allow AROMA to achieve its goals. We evaluate AROMA in two different testbeds. Our experiments show that the predicted signal strength differs from the measurements by a maximum average absolute error of 3.18 dBm, achieving a maximum localization error of 2.44 m for both the device-based and device-free cases.
    Comment: 14 pages, 17 figures
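    The snippet below illustrates only the overall radio map workflow (sample grid points, predict received signal strength per access point). AROMA itself uses 3D ray tracing with UTD; this sketch substitutes a much simpler log-distance path loss model, and the AP positions and parameters are invented.

```python
# Sketch: build a coarse radio map with a log-distance path loss model.
import numpy as np

def rss_dbm(ap_xy, point_xy, tx_power_dbm=0.0, n=3.0, d0=1.0, pl0_db=40.0):
    """RSS at point_xy from an AP at ap_xy via PL(d) = PL(d0) + 10*n*log10(d/d0)."""
    d = max(np.hypot(*np.subtract(point_xy, ap_xy)), d0)
    return tx_power_dbm - (pl0_db + 10.0 * n * np.log10(d / d0))

aps = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]   # hypothetical AP positions (meters)
xs, ys = np.meshgrid(np.arange(0, 11), np.arange(0, 9))

# Radio map: one predicted RSS value per AP at every sampled grid location.
radio_map = {
    (x, y): [round(rss_dbm(ap, (x, y)), 1) for ap in aps]
    for x, y in zip(xs.ravel(), ys.ravel())
}
print(radio_map[(5, 4)])  # predicted RSS from each AP at grid point (5, 4)
```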