20 research outputs found

    Mobile Robot Sensor Fusion with Fuzzy ARTMAP

    Full text link
    The raw sensory input available to a mobile robot suffers from a variety of shortcomings. Sensor fusion can yield a percept more veridical than is available from any single sensor input. In this project, the fuzzy ARTMAP neural network is used to fuse sonar and visual sonar on a B14 mobile robot. The neural network learns to associate specific sensory inputs with a corresponding distance metric. Once trained, the network yields predictions of range to obstacles that are more accurate than those provided by either sensor type alone. This improvement in accuracy holds across all distances and angles of approach tested. Defense Advanced Research Projects Agency, Office of Naval Research, Navy Research Laboratory (ONR-00014-96-1-0772, ONR-00014-95-1-0409, ONR-00014-95-0657)
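
    To make the category-learning idea concrete, the following is a minimal, hedged sketch of a fuzzy-ART-style associator that maps a fused sensory vector to a distance label. It omits ARTMAP's match tracking; the class name TinyFuzzyART and all parameter values are invented for illustration, and this is not the project's implementation.

```python
# Minimal fuzzy-ART-style category learning with one label per category,
# illustrating (not reproducing) how sensory vectors can be associated
# with range estimates. Parameters and names are illustrative.
import numpy as np

class TinyFuzzyART:
    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.weights, self.labels = [], []   # one weight vector and label per category

    @staticmethod
    def _complement_code(x):
        x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
        return np.concatenate([x, 1.0 - x])  # standard complement coding

    def train(self, x, label):
        i = self._complement_code(x)
        # Rank categories by the choice function T_j = |i ^ w_j| / (alpha + |w_j|)
        order = sorted(
            range(len(self.weights)),
            key=lambda j: -np.minimum(i, self.weights[j]).sum()
                          / (self.alpha + self.weights[j].sum()))
        for j in order:
            if self.labels[j] != label:
                continue
            match = np.minimum(i, self.weights[j]).sum() / i.sum()
            if match >= self.rho:            # vigilance test passed: update the category
                self.weights[j] = (self.beta * np.minimum(i, self.weights[j])
                                   + (1 - self.beta) * self.weights[j])
                return j
        self.weights.append(i.copy())        # otherwise commit a new category
        self.labels.append(label)
        return len(self.weights) - 1

    def predict(self, x):
        i = self._complement_code(x)
        scores = [np.minimum(i, w).sum() / (self.alpha + w.sum()) for w in self.weights]
        return self.labels[int(np.argmax(scores))]

# Hypothetical usage: normalized sonar and visual features fused into one input vector.
net = TinyFuzzyART()
net.train([0.2, 0.3], label="0.5 m")
net.train([0.8, 0.7], label="2.0 m")
print(net.predict([0.25, 0.35]))             # -> "0.5 m"
```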

    Neural Sensor Fusion for Spatial Visualization on a Mobile Robot

    Full text link
    An ARTMAP neural network is used to integrate visual information and ultrasonic sensory information on a B14 mobile robot. Training samples for the neural network are acquired without human intervention. Sensory snapshots are retrospectively associated with the distance to the wall, provided by on-board odometry as the robot travels in a straight line. The goal is to produce a more accurate measure of distance than is provided by the raw sensors. The neural network effectively combines sensory sources both within and between modalities. The improved distance percept is used to produce occupancy grid visualizations of the robot's environment. The maps produced point to specific problems of raw sensory information processing and demonstrate the benefits of using a neural network system for sensor fusion. Office of Naval Research and Naval Research Laboratory (00014-96-1-0772, 00014-95-1-0409, 00014-95-0657)
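
    As an illustration of the occupancy-grid step described above, here is a hedged sketch of a log-odds grid updated from a single fused range reading; the grid resolution, log-odds increments, and poses are assumptions, not values from the paper.

```python
# Minimal occupancy-grid sketch: cells along a range reading's ray are
# updated as free, the cell at the measured distance as occupied.
import numpy as np

RES = 0.1                                   # metres per cell (assumed)
L_FREE, L_OCC = -0.4, 0.9                   # log-odds increments (assumed)
grid = np.zeros((100, 100))                 # log-odds map, initially unknown (0)

def update(grid, x, y, heading, rng, max_rng=5.0):
    """Integrate one fused range reading taken from pose (x, y, heading)."""
    hit = rng < max_rng                     # a return shorter than max range marks an obstacle
    steps = int(round(min(rng, max_rng) / RES))
    for k in range(steps + 1):
        cx = int(round((x + k * RES * np.cos(heading)) / RES))
        cy = int(round((y + k * RES * np.sin(heading)) / RES))
        if not (0 <= cx < grid.shape[0] and 0 <= cy < grid.shape[1]):
            return
        grid[cx, cy] += L_OCC if (hit and k == steps) else L_FREE

def probabilities(grid):
    return 1.0 / (1.0 + np.exp(-grid))      # convert log-odds to occupancy probability

update(grid, x=5.0, y=5.0, heading=0.0, rng=1.2)
print(probabilities(grid)[50:64, 50])       # along the ray: free cells, occupied endpoint, unknown
```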

    The evolution of representation in simple cognitive networks

    Get PDF
    Representations are internal models of the environment that can provide guidance to a behaving agent, even in the absence of sensory information. It is not clear how representations are developed and whether or not they are necessary or even essential for intelligent behavior. We argue here that the ability to represent relevant features of the environment is the expected consequence of an adaptive process, give a formal definition of representation based on information theory, and quantify it with a measure R. To measure how R changes over time, we evolve two types of networks, an artificial neural network and a network of hidden Markov gates, to solve a categorization task using a genetic algorithm. We find that the capacity to represent increases during evolutionary adaptation, and that agents form representations of their environment during their lifetime. This ability allows the agents to act on sensory inputs in the context of their acquired representations and enables complex and context-dependent behavior. We examine which concepts (features of the environment) our networks are representing, how the representations are logically encoded in the networks, and how they form as an agent behaves to solve a task. We conclude that R should be able to quantify the representations within any cognitive system, and should be predictive of an agent's long-term adaptive success. (36 pages, 10 figures, one table)
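
    The measure R is given a formal information-theoretic definition in the paper. One common form such a measure can take is the conditional mutual information between environment states and internal (memory) states given the sensor states; the sketch below estimates that quantity from discrete samples and is offered only as an illustration, not as the paper's exact definition of R.

```python
# Hedged sketch: estimate I(E; M | S), the mutual information between
# environment states E and internal states M conditioned on sensor states S,
# from jointly sampled discrete symbols.
from collections import Counter
from math import log2

def conditional_mutual_information(samples):
    """samples: iterable of (e, m, s) tuples of hashable symbols."""
    samples = list(samples)
    n = len(samples)
    p_ems = Counter(samples)
    p_es  = Counter((e, s) for e, m, s in samples)
    p_ms  = Counter((m, s) for e, m, s in samples)
    p_s   = Counter(s for e, m, s in samples)
    total = 0.0
    for (e, m, s), c in p_ems.items():
        p_joint = c / n
        # I(E;M|S) = sum p(e,m,s) log [ p(e,m,s) p(s) / (p(e,s) p(m,s)) ]
        total += p_joint * log2((p_joint * (p_s[s] / n)) /
                                ((p_es[(e, s)] / n) * (p_ms[(m, s)] / n)))
    return total

# Toy data: the internal state m tracks the environment e even though the sensor s is blank.
data = [(0, 0, 'blank'), (1, 1, 'blank'), (0, 0, 'blank'), (1, 1, 'blank')]
print(conditional_mutual_information(data))   # 1.0 bit in this toy case
```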

    Development of Safe and Secure Control Software for Autonomous Mobile Robots

    Get PDF

    Colour, texture, and motion in level set based segmentation and tracking

    Get PDF
    This paper introduces an approach for the extraction and combination of different cues in a level set based image segmentation framework. Apart from the image grey value or colour, we suggest adding its spatial and temporal variations, which may provide important further characteristics. It often turns out that the combination of colour, texture, and motion makes it possible to distinguish object regions that cannot be separated by any one cue alone. We propose a two-step approach. In the first stage, the input features are extracted and enhanced by applying coupled nonlinear diffusion. This ensures coherence between the channels and deals with outliers. We use a nonlinear diffusion technique that is closely related to total variation flow but strictly edge enhancing. The resulting features are then employed for a vector-valued front propagation based on level sets and statistical region models that approximate the distributions of each feature. The application of this approach to two-phase segmentation is followed by an extension to the tracking of multiple objects in image sequences.
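
    A generic form of the vector-valued, region-based level set evolution described above (a sketch consistent with the abstract, not necessarily the paper's exact functional) is:

```latex
% Two-phase, vector-valued region competition: phi > 0 inside, phi < 0 outside;
% f_i are the feature channels (colour, texture, motion), p_{1,i} and p_{2,i}
% their estimated densities inside and outside, and nu weights the length penalty.
\[
\frac{\partial \phi}{\partial t}
  = \delta(\phi)\left[
      \sum_{i=1}^{N} \log \frac{p_{1,i}\big(f_i(\mathbf{x})\big)}{p_{2,i}\big(f_i(\mathbf{x})\big)}
      + \nu \, \mathrm{div}\!\left(\frac{\nabla \phi}{\lvert \nabla \phi \rvert}\right)
    \right]
\]
```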

    Synesthetic Sensor Fusion via a Cross-Wired Artificial Neural Network.

    Get PDF
    The purpose of this interdisciplinary study was to examine the behavior of two artificial neural networks cross-wired based on the synesthesia cross-wiring hypothesis. Motivation for the study was derived from psychology, robotics, and artificial neural networks, with potential application in mobile autonomous robotics, where sensor fusion is a current research topic. This model of synesthetic sensor fusion does not exhibit synesthetic responses. However, it was observed that cross-wiring two independent networks does not change the functionality of the individual networks, but does allow the inputs to one network to partially determine the outputs of the other network in some cases. Specifically, there are measurable influences of network A on network B, and yet network B retains its ability to respond independently.
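
    A hedged sketch of the cross-wiring idea follows; the architectures, random weights, and cross-wiring gain are invented. Hidden activity of network A is injected into the hidden layer of network B, so A's input can partially shape B's output while B still responds on its own.

```python
# Two small networks; B can optionally receive A's hidden activity as an extra drive.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

class Net:
    def __init__(self, n_in, n_hidden, n_out):
        self.w1 = rng.normal(size=(n_in, n_hidden))
        self.w2 = rng.normal(size=(n_hidden, n_out))

    def forward(self, x, cross_input=None, gain=0.3):
        drive = x @ self.w1
        if cross_input is not None:
            drive = drive + gain * cross_input   # cross-wired contribution from the other net
        h = sigmoid(drive)
        return h, sigmoid(h @ self.w2)

net_a, net_b = Net(4, 6, 2), Net(3, 6, 2)
x_a, x_b = rng.random(4), rng.random(3)

h_a, y_a = net_a.forward(x_a)                        # A responds independently
_,  y_b_alone = net_b.forward(x_b)                   # B alone
_,  y_b_cross = net_b.forward(x_b, cross_input=h_a)  # B with A's hidden activity wired in

print("B alone:      ", y_b_alone)
print("B cross-wired:", y_b_cross)                   # A's input now partly shapes B's output
```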

    Sensor Data Analysis for Advanced User Interfaces

    Get PDF
    The thesis deals with the creation of a user interface based on multiple input signals, i.e. a multimodal interface. It first discusses the advantages of this approach to communicating with devices. The work also surveys the levels at which data fusion can be performed and different approaches to structuring the architecture of a system for multimodal data processing. An important part is the design of the system itself: a distributed architecture using software agents to process the inputs was chosen for the resulting interface, and from the methods studied for data integration, hybrid fusion based on a dialog-driven and unification strategy was selected. The result should be an interface for controlling a media center and for interaction with other devices around the user.
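
    As a small illustration of the unification half of the hybrid fusion strategy mentioned above (slot names and values are invented, and the dialog-driven part is not shown), two partial modality frames can be merged as follows.

```python
# Hedged sketch of unification-style fusion of partial "frames" produced by
# two modality agents (e.g. speech and gesture).
def unify(frame_a, frame_b):
    """Merge two partial interpretations; fail (return None) on conflicting slots."""
    merged = dict(frame_a)
    for slot, value in frame_b.items():
        if slot in merged and merged[slot] != value:
            return None                    # conflict: the frames cannot be unified
        merged[slot] = value
    return merged

speech  = {"action": "play", "media": "movie"}   # hypothetical output of the speech agent
gesture = {"target_device": "tv"}                # hypothetical output of the gesture agent
print(unify(speech, gesture))   # {'action': 'play', 'media': 'movie', 'target_device': 'tv'}
```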

    Algorithms for sensor validation and multisensor fusion

    Get PDF
    Existing techniques for sensor validation and sensor fusion are often based on analytical sensor models. Such models can be arbitrarily complex, and consequently Gaussian distributions are often assumed, generally with a detrimental effect on overall system performance. A holistic approach has therefore been adopted in order to develop two novel and complementary approaches to sensor validation and fusion based on empirical data. The first uses the Nadaraya-Watson kernel estimator to provide competitive sensor fusion. The new algorithm is shown to reliably detect and compensate for bias errors, spike errors, hardover faults, drift faults and erratic operation affecting up to three of the five sensors in the array. The inherent smoothing action of the kernel estimator provides effective noise cancellation, and the fused result is more accurate than the single 'best' sensor. A genetic algorithm has been used to optimise the Nadaraya-Watson fuser design. The second approach uses analytical redundancy to provide an on-line sensor status output μH ∈ [0, 1], where μH = 1 indicates that the sensor output is valid and μH = 0 that the sensor has failed. This fuzzy measure is derived from change-detection parameters based on spectral analysis of the sensor output signal. The validation scheme can reliably detect a wide range of sensor fault conditions. An appropriate context-dependent fusion operator can then be used to perform competitive, cooperative or complementary sensor fusion, with a status output from the fuser providing a useful qualitative indication of the status of the sensors used to derive the fused result. The operation of both schemes is illustrated using data obtained from an array of thick film metal oxide pH sensor electrodes. An ideal pH electrode will sense only the activity of hydrogen ions; however, the selectivity of the metal oxide device is worse than that of the conventional glass electrode. The use of sensor fusion can therefore reduce measurement uncertainty by combining readings from multiple pH sensors having complementary responses. The array can be conveniently fabricated by screen printing sensors using different metal oxides onto a single substrate.
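
    The first approach rests on the Nadaraya-Watson kernel estimator; below is its generic regression form as a hedged sketch, with a Gaussian kernel, bandwidth, and toy five-sensor data that are assumptions rather than the thesis's actual design.

```python
# Generic Nadaraya-Watson kernel regression: a kernel-weighted average of
# training outputs, with weights decaying with distance from the query point.
import numpy as np

def nadaraya_watson(x_query, x_train, y_train, bandwidth=0.5):
    """Return the kernel-weighted average of y_train for the query x_query."""
    d = np.linalg.norm(np.atleast_2d(x_train) - np.atleast_1d(x_query), axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
    return float(np.dot(w, y_train) / np.sum(w))

# Toy example: readings from a five-sensor array (inputs) mapped to a reference value,
# then queried with a new reading pattern containing one outlying channel.
x_train = np.array([[7.0, 7.1, 6.9, 7.0, 7.2],
                    [4.0, 4.1, 3.9, 4.0, 4.2]])
y_train = np.array([7.0, 4.0])
print(nadaraya_watson(np.array([7.1, 7.0, 7.2, 9.5, 7.1]), x_train, y_train))  # ~7.0
```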

    Toward Building A Social Robot With An Emotion-based Internal Control

    Get PDF
    In this thesis, we aim at modeling some aspects of the functional role of emotions in an autonomous embodied agent. We begin by describing our robotic prototype, Cherry, a robot tasked with being a tour guide and an office assistant for the Computer Science Department at the University of Central Florida. Cherry did not have a formal emotion representation of internal states, but did have the ability to express emotions through her multimodal interface. The thesis presents the results of a survey we performed via our social informatics approach, where we found that: (1) the idea of having emotions in a robot was warmly accepted by Cherry's users, and (2) the intended users were pleased with our initial interface design and functionalities. Guided by these results, we transferred our previous code to a human-height and more robust robot, Petra the PeopleBot, where we began to build a formal emotion mechanism and representation for internal states to correspond to the external expressions of Cherry's interface. We describe our overall three-layered architecture and propose the design of the sensory-motor level (the first layer of the three-layered architecture), inspired on the one hand by the Multilevel Process Theory of Emotion and on the other by hybrid robotic architectures. The sensory-motor level receives and processes incoming stimuli with fuzzy logic and produces emotion-like states without any further willful planning or learning. We discuss how Petra has been equipped with sonar and vision for obstacle avoidance, as well as vision for face recognition, which are used when she roams around the hallway to engage in social interactions with humans. We hope that the sensory-motor level in Petra can serve as a foundation for further work in modeling the three-layered architecture of the Emotion State Generator.
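
    To illustrate the kind of mapping the sensory-motor level performs, here is a hedged fuzzy-logic sketch from stimuli to emotion-like activations; the membership functions, rules, and state names ("fear", "interest") are invented and are not taken from the thesis.

```python
# Fuzzy mapping from two stimuli (obstacle distance, face-detection confidence)
# to emotion-like activations, using Mamdani-style min/max combination.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sensory_motor_level(obstacle_dist_m, face_confidence):
    near      = tri(obstacle_dist_m, 0.0, 0.0, 1.0)   # degree to which an obstacle is close
    face_seen = tri(face_confidence, 0.2, 1.0, 1.8)   # degree of confident face detection
    # Rules: "near obstacle -> fear"; "face seen AND not near -> interest" (AND = min)
    fear     = near
    interest = min(face_seen, 1.0 - near)
    return {"fear": round(fear, 2), "interest": round(interest, 2)}

print(sensory_motor_level(obstacle_dist_m=0.3, face_confidence=0.9))
```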

    Computational modeling of the brain limbic system and its application in control engineering

    Get PDF
    This study deals with various aspects of modeling the learning processes within the brain limbic system and with using the resulting model for different applications in control engineering. It is a multi-aspect research effort which requires not only a background in control engineering but also basic knowledge of some biomorphic systems. The main focus is on biological systems involved in emotional processes. In mammals, a part of the brain called the limbic system is mainly responsible for emotional processes; therefore, general brain emotional processes and specific aspects of the limbic system are reviewed in the early parts of this study. Next, we describe the development of a computational model of the limbic system based on these concepts. Since the focus of this study is on the application of the model in engineering systems and not on the biological concepts, the model established is not very complicated and does not include all components of the limbic system. In fact, we aim to develop a model that captures the minimal, basic properties of the limbic system, chiefly the Amygdala-Orbitofrontal Cortex system. The main chapter of this thesis, Chapter IV, shows the use of the Brain Emotional Learning (BEL) model in different control and signal-fusion applications. The main effort is focused on applying the model to control systems, where the model acts as the controller block. Furthermore, the application of the model to signal fusion is also considered, and simulation results support its applicability. Finally, we study different analytical aspects of the model, including the behavior of the system during the adaptation phase and the stability of the system. For the first issue, we simplify the model (e.g. remove the nonlinearities) to develop mathematical formulations for the behavior of the system. To study stability, we use the cell-to-cell mapping algorithm, which reveals the stability conditions of the system in different representations. The thesis finishes with concluding remarks and some topics for future research in this field.
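
    For reference, a hedged sketch of the commonly published form of the brain emotional learning (amygdala-orbitofrontal) update rules follows; the gains, stimulus vector, and reward signal are illustrative, and the thesis's model may differ in detail.

```python
# Brain Emotional Learning sketch: amygdala gains learn only upward toward the
# reward, while orbitofrontal gains correct the overall emotional output.
import numpy as np

class BEL:
    def __init__(self, n_inputs, alpha=0.2, beta=0.2):
        self.v = np.zeros(n_inputs)        # amygdala gains
        self.w = np.zeros(n_inputs)        # orbitofrontal (inhibitory) gains
        self.alpha, self.beta = alpha, beta

    def step(self, s, reward):
        """s: stimulus vector; reward: emotional reinforcement signal."""
        a = self.v * s                     # amygdala node outputs
        o = self.w * s                     # orbitofrontal node outputs
        e = a.sum() - o.sum()              # model output (emotional response)
        self.v += self.alpha * s * max(0.0, reward - a.sum())   # excitatory, one-way learning
        self.w += self.beta * s * (e - reward)                  # corrective learning
        return e

bel = BEL(n_inputs=2)
for _ in range(50):                        # drive the output toward a target reward of 1.0
    out = bel.step(s=np.array([0.5, 1.0]), reward=1.0)
print(round(out, 3))                       # close to 1.0 after adaptation
```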