27 research outputs found

    A comparison among deep learning techniques in an autonomous driving context

    Nowadays, artificial intelligence is one of the research fields receiving ever more attention. The growth in computational power available to researchers and developers is reviving the full potential that had been expressed only theoretically at the dawn of Artificial Intelligence. Among all the fields of Artificial Intelligence, the one currently attracting the greatest interest is autonomous driving. Many car manufacturers and the most prestigious American universities are investing increasing resources in this technology. Surveying and describing the broad spectrum of technologies available for autonomous driving is part of the comparison carried out in this work. The case study centres on a company that, starting from scratch, would like to build an autonomous driving system with no data, in a short time, and using only sensors of its own making. Starting from classical algorithms and neural networks, the study progresses to algorithms such as A3C in order to cover the full range of possibilities. The selected technologies are compared in two experiments. The first is a pure computer-vision experiment using DeepTesla, comparing traditional computer-vision techniques, CNNs, and CNNs combined with LSTMs; the goal is to identify which algorithm performs best when processing images alone. The second is an experiment on CARLA, a simulator based on Unreal Engine, in which the results obtained in the simulated environment with CNNs combined with LSTMs are compared with those obtained with A3C; the goal is to determine whether these techniques can drive autonomously using the data provided by the simulator. The comparison aims to identify the weaknesses and possible future improvements of each of the proposed algorithms, in order to find a feasible solution that delivers excellent results in a short time.
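As a point of reference for the A3C comparison above, the sketch below shows the n-step discounted return and advantage estimates that actor-critic methods such as A3C are built on. The rewards, value estimates and discount factor are illustrative placeholders, not values from the thesis experiments.

```python
# Minimal sketch of the n-step return and advantage estimates used by
# actor-critic methods such as A3C. All numbers below are synthetic.

def n_step_returns(rewards, bootstrap_value, gamma=0.99):
    """Discounted n-step returns, bootstrapping from the value of the last state."""
    returns = []
    R = bootstrap_value
    for r in reversed(rewards):
        R = r + gamma * R
        returns.append(R)
    returns.reverse()
    return returns

def advantages(returns, values):
    """Advantage = return - critic's value estimate; this drives the policy update."""
    return [R - v for R, v in zip(returns, values)]

rewards = [0.0, 0.0, 1.0]   # e.g. reward only when the car stays on the road
values  = [0.5, 0.6, 0.7]   # critic estimates V(s_t) for each visited state
rets = n_step_returns(rewards, bootstrap_value=0.0, gamma=0.9)
advs = advantages(rets, values)
```

In A3C these advantages weight the policy-gradient update in each asynchronous worker; here they are computed once for a single three-step rollout.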

    Deep learning for video game playing

    In this article, we review recent Deep Learning advances in the context of how they have been applied to play different types of video games such as first-person shooters, arcade games, and real-time strategy games. We analyze the unique requirements that different game genres pose to a deep learning system and highlight important open challenges in the context of applying these machine learning methods to video games, such as general game playing, dealing with extremely large decision spaces and sparse rewards

    Response Time to Hazard: The Role of Attention, Decision Making and Emotions on Expectations in Real-life and Virtual Driving

    The aim of the present research is to study the role of the human factor in a driving ability salient for road accident prevention: reaction time to hazard. Reaction times (RTs) have been investigated since the origins of experimental psychology; however, when applied to driving, the established values have become obsolete due to the specific conditions in which the reaction unfolds, modern traffic, and interaction with advanced automatic driving systems and devices. The influence of expectations, response urgency, risk perception, cognitive load and measurement conditions on the processes that determine RTs has been steadily demonstrated in the literature. The present research addresses the impact of these psychological factors on RTs while driving. In particular, data recorded in real-life (ecological) driving conditions are used to a) study the influence of expectations on the attentional, emotional and decision-making processes involved in responding to hazards, and b) assess the influence of virtual settings and simulators with different levels of realism on the psychological processes that determine RTs. A specific task manipulating drivers' expectations was created to assess the influence of attention and decision-making on RTs in the different contexts. Results show significant differences across the phases that compose the RT under the different conditions. Driving simulators proved to have relative, but not absolute, validity with respect to the processes activated in ecological conditions; they nevertheless recreated and coherently modified hazard-detection and response-selection processes as a function of hazard predictability, making them useful tools for driver training. The research provides information on the cognitive and emotional processes involved in driving that is useful for accident reconstruction, road safety and prevention.

    Motor imagery based EEG features visualization for BCI applications

    Over recent years, the use of electroencephalography (EEG) in state-of-the-art brain-computer interface (BCI) technology has broadened to augment quality of life, in both medical and non-medical applications. For medical applications, the availability of real-time data for processing, which could be used as command signals to control robotic devices, is limited to specific platforms. This paper focuses on the possibility of analysing and visualizing EEG signal features using the OpenViBE acquisition platform in offline mode, apart from its default real-time processing capability, and on the options available for processing data offline. We employed the OpenViBE platform to acquire EEG signals, pre-process them and extract features for a BCI system. For testing purposes, we analysed and tried to visualize EEG data offline by developing scenarios, using a method for quantification of event-related (de)synchronization (ERD/ERS) patterns, as well as the built-in signal processing algorithms available in the OpenViBE Designer toolbox. The acquired data were based on deployment of the standard Graz BCI experimental protocol, used for foot kinaesthetic motor imagery (KMI). Results clearly reflect that OpenViBE is a streaming tool that encourages online processing and analysis of EEG data, in contrast to analysis or visualization of data in offline, or global, mode. For offline analysis and visualization of data, other relevant platforms are discussed. In online execution of BCI, OpenViBE is a potential tool for the control of wearable lower-limb devices, robotic vehicles and rehabilitation equipment. Other applications include remote control of mechatronic devices, or driving of passenger cars by human thoughts.
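The ERD/ERS quantification mentioned above reduces, in its classic band-power form, to a percentage power change of an activity window relative to a reference (baseline) window. The sketch below assumes synthetic sample values and a pre-filtered band, not the paper's actual recordings.

```python
# Hedged sketch of classic band-power ERD/ERS quantification: percentage
# power change of an activity window relative to a baseline window.
# Negative values indicate desynchronization (ERD), positive values
# synchronization (ERS). All sample values are synthetic.

def band_power(samples):
    """Mean squared amplitude of a (band-pass filtered) signal segment."""
    return sum(x * x for x in samples) / len(samples)

def erd_ers_percent(reference, activity):
    """(A - R) / R * 100, with R = baseline power and A = activity power."""
    R = band_power(reference)
    A = band_power(activity)
    return (A - R) / R * 100.0

baseline = [1.0, -1.0, 1.0, -1.0]   # synthetic mu-band segment before imagery
imagery  = [0.5, -0.5, 0.5, -0.5]   # power drop during kinaesthetic motor imagery
erd = erd_ers_percent(baseline, imagery)   # a 75% power decrease, i.e. ERD
```

In an OpenViBE scenario the filtering, squaring and averaging steps would each be a processing box; this condenses them into two functions for clarity.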

    Static and Dynamic Affordance Learning in Vision-based Direct Perception for Autonomous Driving

    The recent development in autonomous driving involves high-level computer vision and detailed road scene understanding. Today, most autonomous vehicles use the mediated perception approach for path planning and control, which relies heavily on high-definition 3D maps and real-time sensors. Recent research efforts aim to substitute the massive HD maps with coarse road attributes. In this thesis, we follow the direct perception-based method to train a deep neural network for affordance learning in autonomous driving. The goal and the main contributions of this thesis are twofold. Firstly, to develop the affordance learning model based on freely available Google Street View panoramas and OpenStreetMap road vector attributes. Driving scene understanding can be achieved by learning affordances from the images captured by car-mounted cameras. Such scene understanding by learning affordances may be useful for corroborating base-maps such as HD maps, so that the required data storage space is minimized and available for real-time processing. We compare the capability in road attribute identification between human volunteers and the trained model by experimental evaluation. The results indicate that this method could act as a cheaper way to collect training data for autonomous driving. The cross-validation results also indicate the effectiveness of the trained model. Secondly, we propose a scalable and affordable data collection framework named I2MAP (image-to-map annotation proximity algorithm) for autonomous driving systems. We built an automated labeling pipeline covering both vehicle dynamics and static road attributes. The data collected and annotated under our framework are suitable for direct perception and end-to-end imitation learning. Our benchmark consists of 40,000 images with more than 40 affordance labels under various daytime and weather conditions, including very challenging heavy snow. We train and evaluate a ConvNet-based traffic flow prediction model for driver warning and suggestion under low visibility conditions.
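The core of a proximity-based auto-labeling pipeline in the spirit of I2MAP can be sketched as below: each camera frame, tagged with a position, receives the attribute of the nearest map feature. The function names, the distance threshold and the flat-earth distance are simplifying assumptions for illustration, not the thesis implementation (which would use geodetic distances on real map data).

```python
import math

# Illustrative proximity annotation: label an image position with the road
# attribute of the closest map feature, or None if nothing is close enough.
# Names, threshold and planar distance are assumptions, not the thesis code.

def nearest_attribute(position, map_features, max_dist=25.0):
    """Return the attribute of the closest map feature within max_dist metres."""
    best, best_d = None, float("inf")
    for feature_pos, attribute in map_features:
        d = math.dist(position, feature_pos)   # planar approximation
        if d < best_d:
            best, best_d = attribute, d
    return best if best_d <= max_dist else None

features = [((0.0, 0.0), "speed_limit_30"), ((100.0, 0.0), "stop_sign")]
label = nearest_attribute((95.0, 3.0), features)   # closest to the stop sign
```

The threshold guards against mislabeling frames taken far from any mapped feature, which is the main failure mode of purely distance-based annotation.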

    Deep learning based approaches for imitation learning.

    Imitation learning refers to an agent's ability to mimic a desired behaviour by learning from observations. The field is rapidly gaining attention due to recent advances in computational and communication capabilities as well as rising demand for intelligent applications. The goal of imitation learning is to describe the desired behaviour by providing demonstrations rather than instructions. This enables agents to learn complex behaviours with general learning methods that require minimal task-specific information. However, imitation learning faces many challenges. The objective of this thesis is to advance the state of the art in imitation learning by adopting deep learning methods to address two major challenges of learning from demonstrations. The first is representing the demonstrations in a manner that is adequate for learning. We propose novel Convolutional Neural Network (CNN) based methods to automatically extract feature representations from raw visual demonstrations and learn to replicate the demonstrated behaviour. This alleviates the need for task-specific feature extraction and provides a general learning process that is adequate for multiple problems. The second challenge is generalizing a policy over situations unseen in the training demonstrations. This is a common problem because demonstrations typically show the best way to perform a task and do not offer any information about recovering from suboptimal actions. Several methods are investigated to improve the agent's generalization ability based on its initial performance. Our contributions in this area are threefold. Firstly, we propose an active data aggregation method that queries the demonstrator in situations of low confidence. Secondly, we investigate combining learning from demonstrations and reinforcement learning; a deep reward shaping method is proposed that learns a potential reward function from demonstrations. Finally, memory architectures in deep neural networks are investigated to provide context to the agent when taking actions; using recurrent neural networks captures the dependencies within the state-action sequences taken by the agent. The experiments are conducted in simulated environments on 2D and 3D navigation tasks that are learned from raw visual data, as well as a 2D soccer simulator. The proposed methods are compared to state-of-the-art deep reinforcement learning methods. The results show that deep learning architectures can learn suitable representations from raw visual data and effectively map them to atomic actions. The proposed methods for addressing generalization show improvements over using supervised learning or reinforcement learning alone. The results are thoroughly analysed to identify the benefits of each approach and the situations in which it is most suitable.
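The reward shaping idea above is usually grounded in potential-based shaping, where a potential function Phi adds F(s, s') = gamma * Phi(s') - Phi(s) to the environment reward without changing the optimal policy. The sketch below uses a toy lookup table for Phi; in the thesis the potential is learned from demonstrations by a deep network, which this does not attempt to reproduce.

```python
# Minimal sketch of potential-based reward shaping. Phi is a toy lookup
# here; a learned potential (e.g. from demonstrations) would replace it.

def shaped_reward(env_reward, phi_s, phi_next, gamma=0.99):
    """Environment reward plus the shaping term gamma * Phi(s') - Phi(s)."""
    return env_reward + gamma * phi_next - phi_s

# Toy potential: states closer to the goal get higher potential.
phi = {"far": 0.0, "near": 0.5, "goal": 1.0}

# Moving from "far" to "near" earns a positive shaping bonus even when the
# environment itself gives no reward, densifying an otherwise sparse signal.
r = shaped_reward(0.0, phi["far"], phi["near"], gamma=1.0)
```

Because the shaping term telescopes along any trajectory, it accelerates learning without altering which policy is optimal.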

    Enhancing player experience in computer games: A computational intelligence approach

    Doctor of Philosophy (Ph.D.)

    Automating Game-design and Game-agent Balancing through Computational Intelligence

    Game design has been a staple of human ingenuity and innovation for as long as games have been around. From sports, such as football, to applying game mechanics to the real world, such as reward schemes in shops, games have impacted the world in surprising ways. The process of developing games can, and should, be aided by automated systems, as machines have proven capable of finding innovative ways of complementing human intuition and inventiveness. When man and machine cooperate, better products are created and the world has only to benefit. This research seeks to find, test and assess methods of applying genetic algorithms (GAs) to human-led game balancing tasks. From tweaking difficulty to optimising pacing, to directing an intelligent agent's behaviour, all these can benefit from an evolutionary approach and save a game designer many hours, if not days, of work based on trial and error. Furthermore, to improve the speed of any developed GAs, predictive models have been designed to aid the evolutionary process in finding better solutions faster. While these techniques could be applied to a wider variety of tasks, they have been tested almost exclusively on game balance problems. The major contributions are in defining the main challenges of game balance from an academic perspective, proposing solutions for better cooperation between the academic and the industrial side of games, as well as technical improvements to genetic algorithms applied to these tasks. Results have been positive, with success found in both academic publications and industrial cooperation.
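A genetic algorithm applied to a balance task, in the spirit of the work above, can be reduced to the toy sketch below: evolving a single difficulty parameter so a simulated win rate lands near 50%. The win-rate model is a made-up stand-in for real playtest data, and the operators (truncation selection, Gaussian mutation) are one simple choice among many.

```python
import random

# Toy GA for a balance task: find the difficulty whose simulated win rate
# is closest to 0.5. The fitness model is illustrative, not playtest data.

def fitness(difficulty):
    """Closer simulated win rate to 0.5 = fitter; harder game -> lower win rate."""
    win_rate = 1.0 / (1.0 + difficulty)
    return -abs(win_rate - 0.5)

def evolve(pop_size=20, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 5.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                  # truncation selection
        children = [max(0.0, p + rng.gauss(0, 0.2))     # Gaussian mutation
                    for p in rng.choices(parents, k=pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()   # should approach difficulty ~ 1.0, where win rate is 0.5
```

A predictive model of the kind the thesis describes would replace calls to an expensive game simulation inside `fitness`, letting the GA evaluate many more candidates per generation.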