
    Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods

    This paper presents a Deep Learning approach for traffic sign recognition systems. Several classification experiments are conducted over publicly available traffic sign datasets from Germany and Belgium using a Deep Neural Network which comprises Convolutional layers and Spatial Transformer Networks. These experiments are designed to measure the impact of diverse factors, with the end goal of designing a Convolutional Neural Network that improves the state of the art in traffic sign classification. First, different adaptive and non-adaptive stochastic gradient descent optimisation algorithms such as SGD, SGD-Nesterov, RMSprop and Adam are evaluated. Subsequently, multiple combinations of Spatial Transformer Networks placed at distinct positions within the main neural network are analysed. The proposed Convolutional Neural Network reaches an accuracy of 99.71% on the German Traffic Sign Recognition Benchmark, outperforming previous state-of-the-art methods while also being more efficient in terms of memory requirements. Ministerio de Economía y Competitividad TIN2017-82113-C2-1-R; Ministerio de Economía y Competitividad TIN2013-46801-C4-1-
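
    A minimal PyTorch sketch of the kind of setup the abstract describes: a small CNN with one Spatial Transformer module and the four optimisers named above. The layer sizes, STN placement and hyper-parameters are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    """Learns an affine transform that re-samples its input feature map."""
    def __init__(self, in_channels):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=7), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, 32), nn.ReLU(),
            nn.Linear(32, 6),
        )
        # Initialise the localisation head to the identity transform.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

class SignNet(nn.Module):
    def __init__(self, num_classes=43):          # GTSRB has 43 sign classes
        super().__init__()
        self.stn = SpatialTransformer(3)          # one STN, placed at the input here
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):                         # x: (N, 3, 32, 32)
        return self.classifier(self.features(self.stn(x)))

model = SignNet()
# The optimisers compared in the paper; learning rates below are placeholders,
# and a real comparison would train a fresh copy of the model per optimiser.
optimisers = {
    "SGD": torch.optim.SGD(model.parameters(), lr=0.01),
    "SGD-Nesterov": torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, nesterov=True),
    "RMSprop": torch.optim.RMSprop(model.parameters(), lr=0.001),
    "Adam": torch.optim.Adam(model.parameters(), lr=0.001),
}
```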

    The hippocampus and cerebellum in adaptively timed learning, recognition, and movement

    The concepts of declarative memory and procedural memory have been used to distinguish two basic types of learning. A neural network model suggests how such memory processes work together as recognition learning, reinforcement learning, and sensory-motor learning take place during adaptive behaviors. To coordinate these processes, the hippocampal formation and cerebellum each contain circuits that learn to adaptively time their outputs. Within the model, hippocampal timing helps to maintain attention on motivationally salient goal objects during variable task-related delays, and cerebellar timing controls the release of conditioned responses. This property is part of the model's description of how cognitive-emotional interactions focus attention on motivationally valued cues, and how this process breaks down due to hippocampal ablation. The model suggests that the hippocampal mechanisms that help to rapidly draw attention to salient cues could prematurely release motor commands were the release of these commands not adaptively timed by the cerebellum. The model hippocampal system modulates cortical recognition learning without actually encoding the representational information that the cortex encodes. These properties avoid the difficulties faced by several models that propose a direct hippocampal role in recognition learning. Learning within the model hippocampal system controls adaptive timing and spatial orientation. Model properties hereby clarify how hippocampal ablations cause amnesic symptoms and difficulties with tasks that combine task delays, novelty detection, and attention towards goal objects amid distractions. When these model recognition, reinforcement, sensory-motor, and timing processes work together, they suggest how the brain can accomplish conditioning of multiple sensory events to delayed rewards, as during serial compound conditioning. Air Force Office of Scientific Research (F49620-92-J-0225, F49620-86-C-0037, 90-0128); Advanced Research Projects Agency (ONR N00014-92-J-4015); Office of Naval Research (N00014-91-J-4100, N00014-92-J-1309, N00014-92-J-1904); National Institute of Mental Health (MH-42900
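
    The abstract gives no equations, but the core idea of adaptively timed gating can be illustrated with a small numerical sketch in the spirit of spectral-timing models: a bank of units peaking at different delays learns weights from when reinforcement arrives, so the summed output comes to peak near the trained delay. All constants below are invented for illustration and are not the authors' model.

```python
import numpy as np

dt = 0.01
t = np.arange(0, 2.0, dt)                     # 2 s following stimulus onset
peaks = np.linspace(0.1, 1.9, 40)             # each unit peaks at a different delay
basis = np.exp(-((t[None, :] - peaks[:, None]) ** 2) / (2 * 0.05 ** 2))

reward_time = 0.8                             # reinforcement arrives 0.8 s after onset
reward = (np.abs(t - reward_time) < dt).astype(float)

w = np.zeros(len(peaks))
lr = 0.5
for _ in range(50):                           # repeated conditioning trials
    # Each unit's weight grows in proportion to its activity at reward time.
    w += lr * basis @ reward * dt

gated_output = w @ basis                      # adaptively timed response
print("response peaks at t =", t[np.argmax(gated_output)], "s")
```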

    Intrusion Detection Systems Using Adaptive Regression Splines

    The past few years have witnessed a growing recognition of intelligent techniques for the construction of efficient and reliable intrusion detection systems. Due to increasing incidents of cyber attacks, building effective intrusion detection systems (IDS) is essential for protecting information systems security, and yet it remains an elusive goal and a great challenge. In this paper, we report a performance analysis between Multivariate Adaptive Regression Splines (MARS), neural networks and support vector machines. The MARS procedure builds flexible regression models by fitting separate splines to distinct intervals of the predictor variables. A brief comparison of different neural network learning algorithms is also given.
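
    A hedged sketch of the kind of side-by-side comparison described, using a synthetic stand-in for intrusion-detection features rather than the paper's data or settings. MARS is not part of scikit-learn; the third-party py-earth package provides an Earth estimator that could be added to the dictionary below if it is installed.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic binary "attack vs. normal" data as a placeholder for IDS features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Neural network (MLP)": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    "SVM (RBF kernel)": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```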

    An Active Pattern Recognition Architecture for Mobile Robots

    An active, attentionally-modulated recognition architecture is proposed for object recognition and scene analysis. The proposed architecture forms part of navigation and trajectory planning modules for mobile robots. Key characteristics of the system include movement planning and execution based on environmental factors and internal goal definitions. Real-time implementation of the system is based on a space-variant representation of the visual field, as well as an optimal visual processing scheme utilizing separate and parallel channels for the extraction of boundaries and stimulus qualities. A spatial and temporal grouping module (VWM) allows for scene scanning, multi-object segmentation, and featural/object priming. VWM is used to modulate a trajectory formation module capable of redirecting the focus of spatial attention. Finally, an object recognition module based on adaptive resonance theory is interfaced through VWM to the visual processing module. The system is capable of using information from different modalities to disambiguate sensory input. Defense Advanced Research Projects Agency (90-0083); Office of Naval Research (N00014-92-J-1309); Consejo Nacional de Ciencia y Tecnología (63462
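
    One concrete ingredient mentioned above is the space-variant representation of the visual field. Below is a minimal NumPy sketch of log-polar sampling as one assumed form of space variance; image size, ring and wedge counts are arbitrary choices, and this is not the authors' implementation.

```python
import numpy as np

def log_polar_sample(image, n_rings=32, n_wedges=64):
    """Re-sample a square grayscale image onto a log-polar grid centred on the image."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx)
    # Ring radii grow exponentially: fine resolution at the centre, coarse at the periphery.
    radii = max_r ** (np.arange(1, n_rings + 1) / n_rings)
    angles = np.linspace(0, 2 * np.pi, n_wedges, endpoint=False)
    ys = cy + radii[:, None] * np.sin(angles)[None, :]
    xs = cx + radii[:, None] * np.cos(angles)[None, :]
    return image[ys.round().astype(int), xs.round().astype(int)]

img = np.random.rand(128, 128)          # stand-in for a camera frame
print(log_polar_sample(img).shape)      # (32, 64) space-variant "cortical" map
```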

    Goal-recognition-based adaptive brain-computer interface for navigating immersive robotic systems

    © 2017 IOP Publishing Ltd. Objective. This work proposes principled strategies for self-adaptation in EEG-based brain-computer interfaces (BCIs) as a way out of the bandwidth bottleneck resulting from the considerable mismatch between the low-bandwidth interface and the bandwidth-hungry application, and as a way to enable fluent and intuitive interaction in embodiment systems. The main focus is on inferring the hidden target goals of users while navigating in a remote environment, as a basis for possible adaptations. Approach. To reason about possible user goals, a general user-agnostic Bayesian update rule is devised and recursively applied upon the arrival of evidence, i.e. user input and user gaze. Experiments were conducted with healthy subjects within robotic embodiment settings to evaluate the proposed method. These experiments varied along three factors: the type of robot/environment (simulated or physical), the type of interface (keyboard or BCI), and the way goal recognition (GR) is used to guide a simple shared control (SC) driving scheme. Main results. Our results show that the proposed GR algorithm is able to track and infer the hidden user goals with relatively high precision and recall. Further, the realized SC driving scheme benefits from the output of the GR system and is able to reduce the user effort needed to accomplish the assigned tasks. Although the BCI requires higher effort than the keyboard conditions, most subjects were able to complete the assigned tasks, and the proposed GR system is additionally shown to handle the uncertainty in user input during SSVEP-based interaction. The SC application of the belief vector indicates that the benefits of the GR module are more pronounced for BCIs than for the keyboard interface. Significance. Being based on intuitive heuristics that model the behavior of the general population during the execution of navigation tasks, the proposed GR method can be used without prior tuning for individual users. The proposed methods can be easily integrated in devising more advanced SC schemes and/or strategies for automatic BCI self-adaptation.
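
    The recursive Bayesian update over hidden user goals can be sketched in a few lines. The goal set, likelihood table and observation stream below are invented placeholders, not the paper's user-input and gaze evidence models.

```python
import numpy as np

goals = ["door", "desk", "window"]
belief = np.full(len(goals), 1.0 / len(goals))   # uniform prior over candidate goals

def likelihood(observation, goal):
    """P(observation | goal): how consistent a steering command is with heading
    towards that goal (a stand-in for user input / gaze evidence)."""
    agreement = {"door":   {"left": 0.7, "right": 0.2, "forward": 0.1},
                 "desk":   {"left": 0.2, "right": 0.2, "forward": 0.6},
                 "window": {"left": 0.1, "right": 0.7, "forward": 0.2}}
    return agreement[goal][observation]

for obs in ["left", "left", "forward"]:          # stream of arriving evidence
    belief *= np.array([likelihood(obs, g) for g in goals])
    belief /= belief.sum()                       # normalise: posterior becomes the next prior
    print(obs, dict(zip(goals, belief.round(3))))
```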

    Using qualitative research methods to inform user centred design of an innovative assistive technology device

    The SPECS project aims to develop a speech-driven device that will allow the home environment to be controlled (for example, turning the lights or television on or off). The device will be targeted at older people and people with disabilities and will be sensitive to disordered speech. Current environmental control systems (ECS) work using either a switch interface or speech recognition software that does not comprehend disordered speech well. Switch-interface systems are often slow and complicated to use, and uptake of the available speech recognition system has been poor. A significant proportion of people requiring electronic assistive technology (EAT) have dysarthria, a motor speech disorder, associated with their physical disability. Speech control of EAT is seen as desirable for such people, but machine recognition of dysarthric speech is a difficult problem due to the variability of their articulatory output. Other work on large-vocabulary adaptive speech recognition systems and speaker-dependent recognisers has not provided a solution for severely dysarthric speech. Building on the work of the STARDUST project, our goal is to develop and implement speech recognition as a viable control interface for people with severe physical disability and severe dysarthria. The SPECS project is funded by the Health Technology Devices Programme of the Department of Health.

    Driver recognition using gaussian mixture models and decision fusion techniques

    In this paper we present our research in driver recognition. The goal of this study is to investigate the performance of different classifier fusion techniques in a driver recognition scenario. We use only driving behavior signals, such as brake and accelerator pedal pressure, engine RPM, vehicle speed, and steering wheel angle, to identify driver identities. We modeled each driver using Gaussian Mixture Models, obtained posterior probabilities of the identities, and combined these scores using different fixed and trainable (adaptive) fusion methods. We observed error rates as low as 0.35% in the recognition of 100 drivers using trainable combiners. We conclude that the fusion of multi-modal classifier results is very successful in the biometric recognition of a person in a car setting.
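
    A hedged sketch of the general recipe described (one Gaussian Mixture Model per driver, with simple score-level fusion by summing frame log-likelihoods). The synthetic "driving behaviour" features and mixture sizes are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_drivers, n_features = 5, 4                 # e.g. pedal pressures, RPM, speed, steering angle
train = {d: rng.normal(loc=d, scale=1.0, size=(200, n_features)) for d in range(n_drivers)}

# Enrol each driver with a separate Gaussian Mixture Model.
models = {d: GaussianMixture(n_components=3, random_state=0).fit(X) for d, X in train.items()}

def identify(segment):
    """Score a test segment against every driver model; fuse per-frame scores by
    summing log-likelihoods (a simple fixed fusion rule)."""
    scores = {d: m.score_samples(segment).sum() for d, m in models.items()}
    return max(scores, key=scores.get)

test = rng.normal(loc=2, scale=1.0, size=(50, n_features))   # unseen segment from driver 2
print("identified driver:", identify(test))
```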

    Design of a new method for detection of occupancy in the smart home using an FBG sensor

    This article introduces a new way of using a fibre Bragg grating (FBG) sensor for detecting the presence and number of occupants in the monitored space in a smart home (SH). CO2 sensors are used to determine the CO2 concentration of the monitored rooms in an SH, and they can also be used for occupancy recognition of the monitored spaces. To determine the presence of occupants in the monitored rooms of the SH, a newly devised method of CO2 prediction, by means of an artificial neural network (ANN) with a scaled conjugate gradient (SCG) algorithm, is applied to measurements of typical operational technical quantities (indoor temperature, indoor relative humidity, and CO2 concentration in the SH). The goal of the experiments is to verify the possibility of using the FBG sensor to unambiguously detect the number of occupants in the selected room (R104) and, at the same time, to harness the newly proposed method of CO2 prediction with ANN SCG for recognition of the SH occupancy status and the spatial location (rooms R104, R203, and R204) of an occupant. The designed experiments also verify the possibility of using a minimum number of sensors for measuring the non-electric quantities of indoor temperature and indoor relative humidity, and the possibility of monitoring the presence of occupants in the SH using CO2 prediction by means of the ANN SCG method with ANN learning on data obtained from only one room (R203). The prediction accuracy exceeded 90% in certain experiments. The uniqueness and innovativeness of the described solution lie in the integrated multidisciplinary application of technological procedures (the BACnet technology controlling the SH, FBG sensors) and mathematical methods (ANN prediction with the SCG algorithm, adaptive filtration with an LMS algorithm) employed for recognising the number of persons and the occupancy of selected monitored rooms of the SH.
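
    A minimal sketch of the CO2-prediction step only (not the FBG sensing or BACnet parts): an MLP regressor predicting the next CO2 reading from indoor temperature, indoor relative humidity and the previous CO2 reading. scikit-learn does not ship a scaled conjugate gradient solver, so 'lbfgs' stands in for SCG here, and the data below are synthetic placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
temp = rng.normal(22, 2, n)                     # indoor temperature [deg C]
hum = rng.normal(40, 5, n)                      # indoor relative humidity [%]
co2_prev = rng.normal(600, 100, n)              # previous CO2 reading [ppm]
# Synthetic next-step CO2, loosely coupled to the inputs, as a placeholder target.
co2_next = 0.9 * co2_prev + 5 * (temp - 22) + 2 * (hum - 40) + rng.normal(0, 10, n)

X = np.column_stack([temp, hum, co2_prev])
X_tr, X_te, y_tr, y_te = train_test_split(X, co2_next, test_size=0.3, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs", max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```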