161 research outputs found

    Distributed learning automata-based scheme for classification using novel pursuit scheme

    Author's accepted manuscript. Available from 03/03/2021. This is a post-peer-review, pre-copyedit version of an article published in Applied Intelligence. The final authenticated version is available online at: http://dx.doi.org/10.1007/s10489-019-01627-w

    Evaluating prediction models for electricity consumption

    This paper presents a system for visualizing electricity consumption data along with the implementation of a pattern recognition approach for peak prediction. Various classification algorithms and machine learning techniques are tested and discussed; in particular, Support Vector Machine (SVM), Gaussian Mixture Model (GMM) and hierarchical classifiers. Most notably, the best algorithms are able to detect 70% of the peaks occurring within the next 24 hours. Also, various ways of correlating energy consumption are considered in the present context. Finally, a few directions for future work are discussed.
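    A minimal, hypothetical sketch of the kind of setup this abstract describes: an SVM trained on lagged consumption features to flag whether a peak occurs within the next 24 hours. The synthetic data, the lag window, and the peak threshold are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch only: peak prediction framed as binary classification with an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Synthetic hourly consumption: daily cycle plus noise and occasional spikes.
hours = np.arange(24 * 200)
load = 10 + 3 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
load += (rng.random(hours.size) < 0.02) * rng.uniform(5, 10, hours.size)

window = 48                          # hours of history used as features (assumed)
horizon = 24                         # predict peaks within the next 24 hours
threshold = np.quantile(load, 0.95)  # assumed definition of a "peak"

X, y = [], []
for t in range(window, load.size - horizon):
    X.append(load[t - window:t])                          # lagged consumption
    y.append(int(load[t:t + horizon].max() > threshold))  # peak ahead?
X, y = np.array(X), np.array(y)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, shuffle=False)

clf = SVC(kernel="rbf", class_weight="balanced")  # handle class imbalance
clf.fit(X_train, y_train)

# Recall corresponds to the "share of peaks detected" figure quoted above.
print("peak recall:", recall_score(y_test, clf.predict(X_test)))
```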

    Predicting Source Code Quality with Static Analysis and Machine Learning

    This paper investigates whether it is possible to predict source code quality based on static analysis and machine learning. The proposed approach includes an Eclipse plugin and uses a combination of peer review/human rating, static code analysis, and classification methods. Public data and student hand-ins in programming courses are used as training data. Based on this training data, new and uninspected source code can be accurately classified as “well written” or “badly written”. This is a step towards feedback in an interactive environment without peer assessment.
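    A small illustrative sketch of this general idea: a classifier trained on static-analysis metrics with labels taken from human ratings. The metric names, values, and model choice here are placeholders, not the paper's actual feature set or classifier.

```python
# Sketch only: classify source files as "well written" vs "badly written"
# from static-analysis metrics, with labels from peer review / human rating.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-file metrics: [cyclomatic complexity, avg method length,
# comment ratio, number of static-analysis warnings]
X_train = np.array([
    [ 3, 12, 0.25,  1],   # rated well written  -> label 1
    [ 4, 15, 0.30,  0],
    [18, 80, 0.02, 14],   # rated badly written -> label 0
    [22, 95, 0.01, 20],
])
y_train = np.array([1, 1, 0, 0])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# A new, uninspected file described by the same metrics.
new_file = np.array([[6, 20, 0.15, 3]])
label = "well written" if clf.predict(new_file)[0] == 1 else "badly written"
print(label)
```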

    A Comparison Between Tsetlin Machines and Deep Neural Networks in the Context of Recommendation Systems

    Recommendation Systems (RSs) are ubiquitous in modern society and are one of the largest points of interaction between humans and AI. Modern RSs are often implemented using deep learning models, which are infamously difficult to interpret. This problem is particularly exacerbated in the context of recommendation scenarios, as it erodes the user's trust in the RS. In contrast, the newly introduced Tsetlin Machines (TM) possess some valuable properties due to their inherent interpretability. TMs are still fairly young as a technology. As no RS has been developed for TMs before, it has become necessary to perform some preliminary research regarding the practicality of such a system. In this paper, we develop the first RS based on TMs to evaluate its practicality in this application domain. This paper compares the viability of TMs with other machine learning models prevalent in the field of RS. We train and investigate the performance of the TM compared with a vanilla feed-forward deep learning model. These comparisons are based on model performance, interpretability/explainability, and scalability. Further, we provide some benchmark performance comparisons to similar machine learning solutions relevant to RSs. Comment: Accepted to NLDL 202
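    A rough sketch of the comparison setup under stated assumptions: recommendation framed as binary click prediction over binarized user/item features (the input form a Tsetlin Machine requires). Only a vanilla feed-forward baseline is shown; the TM side would come from a separate library (e.g. pyTsetlinMachine), which is not assumed to be installed here, and the data-generating rule is entirely synthetic.

```python
# Sketch only: feed-forward baseline for click prediction on binarized features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_interactions, n_bits = 5000, 32
X = rng.integers(0, 2, size=(n_interactions, n_bits))  # binarized user/item features
# Synthetic "relevant item" rule so both model families have something to learn.
y = ((X[:, 0] & X[:, 3]) | X[:, 7]).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Vanilla feed-forward model, comparable in spirit to the DNN baseline above.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
mlp.fit(X_tr, y_tr)
print("feed-forward accuracy:", accuracy_score(y_te, mlp.predict(X_te)))
```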

    Development of a Simulator for Prototyping Reinforcement Learning based Autonomous Cars

    Autonomous driving is a research field that has received attention in recent years, with increasing applications of reinforcement learning (RL) algorithms. It is impractical to train an autonomous vehicle thoroughly in the physical space, i.e., the so-called ’real world’; therefore, simulators are used in almost all training of autonomous driving algorithms. There are numerous autonomous driving simulators, very few of which are specifically targeted at RL. RL-based cars are challenging due to the variety of reward functions available. There is a lack of simulators addressing many central RL research tasks within autonomous driving, such as scene understanding, localization and mapping, planning and driving policies, and control, which have diverse requirements and goals. It is, therefore, challenging to prototype new RL projects with different simulators, especially when there is a need to examine several reward functions at once. This paper introduces a modified simulator based on the Udacity simulator, made for autonomous cars using RL. It provides reward functions, along with sensors, to form a baseline implementation for RL-based vehicles. The modified simulator also resets the vehicle when it gets stuck or is in a non-terminating loop, making it more reliable. Overall, the paper seeks to simplify the prototyping of new systems and the testing of different RL-based systems.
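    An illustrative sketch, not the modified simulator's actual API: one possible reward function and a "stuck vehicle" reset check of the kind the abstract describes. The telemetry field names and thresholds are assumptions.

```python
# Sketch only: a simple reward and reset rule for an RL-based simulated car.
from dataclasses import dataclass

@dataclass
class Telemetry:
    speed: float                 # current speed (m/s)
    cte: float                   # cross-track error from lane centre (m)
    steps_since_progress: int    # steps without meaningful forward movement

def reward(t: Telemetry, max_cte: float = 2.0) -> float:
    """Reward forward speed, penalise deviation from the lane centre."""
    if abs(t.cte) > max_cte:     # off the track: strong penalty
        return -10.0
    return t.speed * (1.0 - abs(t.cte) / max_cte)

def should_reset(t: Telemetry, patience: int = 200) -> bool:
    """Reset when the car is stuck or looping without making progress."""
    return t.steps_since_progress > patience

# Example step: a slow car drifting slightly from the centre line.
print(reward(Telemetry(speed=4.0, cte=0.5, steps_since_progress=10)))
print(should_reset(Telemetry(speed=0.0, cte=0.1, steps_since_progress=500)))
```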