
    Neural PCA and Maximum Likelihood Hebbian Learning on the GPU

    This study introduces a novel fine-grained parallel implementation of a neural principal component analysis (neural PCA) variant and of the Maximum Likelihood Hebbian Learning (MLHL) network, designed for modern many-core graphics processing units (GPUs). The parallel implementation, together with the computational experiments conducted to evaluate the speedup achieved on the GPU, is presented and discussed. The evaluation was done on a well-known artificial data set, the 2D bars data set.
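
    As an illustration of the kind of learning rule being parallelised, below is a minimal CPU sketch in NumPy of an MLHL-style Hebbian update (feedforward activation, feedback residual, residual-shaped weight change). It is not the paper's GPU implementation; on a GPU the arrays would typically be handled by a device array library such as CuPy, and the learning rate, exponent p and epoch count shown here are illustrative assumptions.

    import numpy as np

    def mlhl_train(X, n_components, lr=0.01, p=1.5, epochs=50, seed=0):
        """Maximum Likelihood Hebbian Learning, illustrative sketch.

        X: (n_samples, n_features) zero-mean data matrix.
        p: exponent of the assumed residual distribution (p = 2 gives a
           neural-PCA-like rule).
        """
        rng = np.random.default_rng(seed)
        W = rng.normal(scale=0.1, size=(n_components, X.shape[1]))
        for _ in range(epochs):
            for x in X:
                y = W @ x                 # feedforward activation
                e = x - W.T @ y           # feedback residual
                # ML Hebbian update: residual shaped by sign(e) * |e|^(p-1)
                W += lr * np.outer(y, np.sign(e) * np.abs(e) ** (p - 1))
        return W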

    A Continual Learning System with Self Domain Shift Adaptation for Fake News Detection

    Detecting fake news is currently one of the critical challenges facing modern societies. The problem is particularly relevant, as disinformation is readily used for political warfare but can also cause significant harm to the health of citizens, for example by promoting false claims about the harmfulness of selected therapies. One way to combat disinformation is to treat fake news detection as a machine learning task. This paper presents such an approach, which additionally addresses an important problem related to the non-stationary characteristics of fake news. We generated a data stream with simulated domain shift based on two popular benchmark datasets dedicated to the fake news classification problem (Kaggle Fake News and Constraint@AAAI2021–COVID19 Fake News Detection). The proposed learning system works in a Continual Learning (CL) framework and integrates self domain shift adaptation into the machine learning scheme. The method was built following state-of-the-art techniques, which include Word2Vec as a feature extractor and an LSTM model as a classifier. The performance of the approach was evaluated over the generated data stream. The benefit of our approach is shown by the results, where the accuracy gain with respect to a CL approach without domain adaptation is significant.
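
    To make the feature-extractor/classifier pairing concrete, here is a minimal Python sketch using gensim's Word2Vec and a Keras LSTM. The tokenisation, sequence length, vector size and placeholder texts are illustrative assumptions, and the continual-learning loop with self domain shift adaptation is omitted.

    import numpy as np
    import tensorflow as tf
    from gensim.models import Word2Vec

    texts = ["example fake headline", "example real headline"]   # placeholder data
    labels = np.array([1, 0])

    # Word2Vec feature extractor over naive whitespace tokens.
    w2v = Word2Vec([t.lower().split() for t in texts],
                   vector_size=50, window=5, min_count=1, epochs=20)

    def embed(batch, max_len=100):
        """Map each text to a (max_len, 50) sequence of word vectors."""
        out = np.zeros((len(batch), max_len, w2v.vector_size), dtype=np.float32)
        for i, text in enumerate(batch):
            for j, tok in enumerate(text.lower().split()[:max_len]):
                if tok in w2v.wv:
                    out[i, j] = w2v.wv[tok]
        return out

    # LSTM classifier over the word-vector sequences.
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(100, 50)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(embed(texts), labels, epochs=3, verbose=0)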

    Deep Reinforcement Learning Tf-Agent-Based Object Tracking With Virtual Autonomous Drone in a Game Engine

    The recent development of object-tracking frameworks has affected the performance of many manufacturing and industrial services, such as product delivery, autonomous driving systems, security systems, the military, transportation and retail industries, smart cities, healthcare systems, agriculture, etc. Achieving accurate results in real physical environments and conditions remains quite challenging for actual object tracking. However, the process can be studied using simulation techniques or platforms in order to evaluate a model's performance under different simulation conditions and weather changes. This paper presents a target-tracking approach based on reinforcement learning integrated with TensorFlow-Agent (tf-agent) to accomplish the tracking process in AirSim Blocks, a simulation platform built on the Unreal game engine. The productivity of such platforms becomes apparent when experimenting in virtual-reality conditions with virtual drone agents and fine-tuning to achieve the best or desired performance. In this paper, the tf-agent drone learns how to track an object through a deep reinforcement learning process that controls the actions, states, and tracking by receiving sequential frames from a simple Blocks environment. The tf-agent model is trained in the AirSim Blocks environment to adapt to the environment and the objects present in the simulation, and is then tested and evaluated with respect to tracking accuracy and speed. We tested and compared two approaches, DQN and PPO trackers, and report the results in terms of stability, rewards, and numerical performance.
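
    For orientation, the following is a minimal sketch of wiring up a DQN agent with the TF-Agents library; the observation and action specs, network sizes and learning rate are assumptions for illustration, and the AirSim Blocks environment wrapper and data-collection loop are omitted, so this is not the paper's actual configuration.

    import tensorflow as tf
    from tf_agents.agents.dqn import dqn_agent
    from tf_agents.networks import q_network
    from tf_agents.specs import tensor_spec
    from tf_agents.trajectories import time_step as ts
    from tf_agents.utils import common

    # Assumed specs: an 84x84 grayscale frame and 4 discrete drone motions.
    obs_spec = tf.TensorSpec(shape=(84, 84, 1), dtype=tf.float32, name="frame")
    action_spec = tensor_spec.BoundedTensorSpec(
        shape=(), dtype=tf.int32, minimum=0, maximum=3, name="move")
    time_step_spec = ts.time_step_spec(obs_spec)

    # Convolutional Q-network mapping frames to action values.
    q_net = q_network.QNetwork(obs_spec, action_spec,
                               conv_layer_params=[(32, 8, 4), (64, 4, 2)],
                               fc_layer_params=(128,))

    agent = dqn_agent.DqnAgent(
        time_step_spec,
        action_spec,
        q_network=q_net,
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        td_errors_loss_fn=common.element_wise_squared_loss,
        train_step_counter=tf.Variable(0))
    agent.initialize()
    # Training would alternate between collecting (frame, action, reward)
    # trajectories from the AirSim Blocks environment and calling agent.train().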

    Dominant Air Pollution Source Determination in the Vicinity of Coking Plant Based on Statistical Data Analysis

    The goal of the article is to present the statistical-analysis-based determination of the dominant air pollution sources at one of the Ostrava pollution monitoring stations. The statistical analyses were based on correlation analysis and quantile-based time pattern analysis. These analyses were able to prove that benzene and toluene pollution is dominantly caused by the same pollution sources. The time pattern analysis then proved the dominance of a nearby coking plant. Time pattern analyses also proved traffic to be the dominant source of NO2 pollution, and PM10 pollution to be a mix of traffic, heating and industrial sources.
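
    As an illustration of the two analysis steps, here is a short Python/pandas sketch that computes a pollutant correlation matrix and an hour-of-day profile of above-90th-percentile episodes; the CSV file and column names are placeholders, not the study's actual data.

    import pandas as pd

    df = pd.read_csv("station_hourly.csv", parse_dates=["timestamp"])  # placeholder file

    # 1) Correlation analysis: a strong benzene-toluene correlation points
    #    to a shared dominant source.
    print(df[["benzene", "toluene", "NO2", "PM10"]].corr(method="pearson"))

    # 2) Quantile-based time pattern: hour-of-day counts of episodes above
    #    the 90th percentile; a rush-hour peak, for instance, suggests traffic.
    for pollutant in ["benzene", "NO2", "PM10"]:
        high = df[df[pollutant] > df[pollutant].quantile(0.9)]
        print(pollutant)
        print(high["timestamp"].dt.hour.value_counts().sort_index())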