5 research outputs found

    A perception pipeline exploiting trademark databases for service robots


    Inspired by nature: timescale-free and grid-free event-based computing with spiking neural networks

    Computer vision is enjoying huge success in visual processing applications such as facial recognition, object identification, and navigation. Most of these studies work with traditional cameras, which produce frames at predetermined fixed time intervals. Real-life visual stimuli, however, are generated when changes occur in the environment and are irregular in timing. Biological visual neural systems operate on these changes and are hence free from any fixed timescales related to the timing of events in visual input.

    Inspired by biological systems, neuromorphic devices provide a new way to record visual data. These devices typically have parallel arrays of sensors which operate asynchronously. They have particular potential for robotics due to their low latency, efficient use of bandwidth and low power requirements. There are a variety of neuromorphic devices for detecting different sensory information; this thesis focuses on using the Dynamic Vision Sensor (DVS) for visual data collection.

    Event-based sensory inputs are generated on demand as changes happen in the environment. There are no systematic timescales in these activities, and the asynchronous nature of the sensors adds to the irregularity of time intervals between events, making event-based data timescale-free. Although vision sensors generally arrange their sensing elements as a grid, events in the real world exist in continuous space. Biological systems are not restricted to grid-based sampling, and it is an open question whether event-based data could similarly take advantage of grid-free processing algorithms. Studying visual data in a timescale-free and grid-free way, fundamentally different from traditional video data that is sampled at fixed time intervals and dense and rigid in space, requires conceptual viewpoints and methods of computation not typically employed in existing studies.

    Bio-inspired computing involves computational components that mimic, or at least take inspiration from, how nature works. This fusion of engineering and biology often provides insights into complex computational problems. Artificial neural networks, a computing paradigm inspired by how our brains work, have been studied widely with visual data. This thesis uses a type of artificial neural network, event-based spiking neural networks, as the basic framework to process event-based visual data.

    Building upon spiking neural networks, this thesis introduces two methods that process event-based data according to the principles of being timescale-free and grid-free. The first method preprocesses events as distributions of Gaussian-shaped spatiotemporal volumes, and then introduces a new neuron model with time-delayed dendrites and dendritic and axonal computation as the main building blocks of the spiking neural network to perform long-term predictions. Gaussians are used for simplicity. This Gaussian-based method is shown in this thesis to outperform a commonly used iterative prediction paradigm on DVS data.
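    As a rough, hypothetical sketch of this kind of representation (not the thesis's implementation; the function name, kernel widths and data below are assumptions), each event can be placed as a Gaussian-shaped volume in continuous space-time, so the resulting density is queryable at arbitrary points rather than on a fixed pixel grid or frame clock:

        # Hypothetical sketch: DVS events as Gaussian spatiotemporal volumes that
        # can be queried at arbitrary (grid-free, timescale-free) points.
        import numpy as np

        def event_density(events, query_points, sigma_xy=2.0, sigma_t=0.01):
            """Sum of Gaussian kernels centred on events.

            events: (N, 3) array with columns (x, y, t).
            query_points: (M, 3) array of continuous space-time points to evaluate.
            sigma_xy, sigma_t: spatial and temporal kernel widths (illustrative values).
            """
            ev = events[:, None, :]        # (N, 1, 3)
            q = query_points[None, :, :]   # (1, M, 3)
            d_xy = np.sum((ev[..., :2] - q[..., :2]) ** 2, axis=-1)  # squared spatial distance
            d_t = (ev[..., 2] - q[..., 2]) ** 2                      # squared temporal distance
            k = np.exp(-d_xy / (2 * sigma_xy ** 2) - d_t / (2 * sigma_t ** 2))
            return k.sum(axis=0)           # density at each query point, shape (M,)

        # Three events queried at two arbitrary, off-grid locations.
        events = np.array([[10.0, 12.0, 0.001], [10.5, 12.2, 0.003], [40.0, 8.0, 0.002]])
        queries = np.array([[10.2, 12.1, 0.002], [25.0, 10.0, 0.002]])
        print(event_density(events, queries))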
    The second method involves a new concept for processing event-based data based on the “light cone” idea in physics. Starting from a given point in real space at a given time, a light cone is the set of points in spacetime reachable without exceeding the speed of light, and these points trace out spacetime trajectories called world lines. The light cone concept is applied to DVS data. As an object moves with respect to the DVS, the events it generates are related by their speeds relative to the DVS. An observer can calculate possible world lines for each point but has no access to the correct one. The idea of a “motion cone” is introduced to refer to the distribution of possible world lines for an event. Motion cones provide a novel theory for the early stages of visual processing. Instead of spatial clustering, world lines produce a new representation determined by a speed-based clustering of events. A novel spiking neural network model with dendritic connections based on motion cones is proposed, with the ability to predict future motion patterns over long time horizons.

    Freedom from timescales and from fixed grid sizes is a fundamental characteristic of neuromorphic event-based data, but few algorithms to date exploit its potential. Focusing on the inter-event relationships in the continuous spatiotemporal volume can preserve these features during processing. This thesis presents two examples of incorporating timescale-free and grid-free principles into algorithm development and examines their performance on real-world DVS data. These new concepts and models contribute to the neuromorphic computation field by providing new ways of thinking about event-based representations and their associated algorithms. They also have the potential to stimulate rethinking of representations in the early stages of an event-based vision system. To aid algorithm development, a benchmarking data set has been collated, ranging from simple environment changes recorded by a stationary camera to complex, environmentally rich navigation performed by mobile robots. Studies conducted in this thesis use examples from this benchmarking data set, which is also made available to the public.
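    As a rough illustration of the speed-based relation underlying motion cones (not the thesis's model; the function names, data and speed bound below are hypothetical), the sketch pairs DVS events whenever the world line joining them implies a speed within a plausible bound, so events are grouped by consistent motion rather than by spatial proximity alone:

        # Hypothetical sketch: relate events by the speed of the world line joining
        # them, keeping only pairs that fall inside a "motion cone" of plausible speeds.
        import numpy as np

        def implied_speed(e1, e2):
            """Speed of the straight world line joining two events given as (x, y, t)."""
            dt = abs(e2[2] - e1[2])
            if dt == 0:
                return np.inf
            return np.hypot(e2[0] - e1[0], e2[1] - e1[1]) / dt

        def speed_based_pairs(events, v_max):
            """Index pairs whose connecting world line stays within speed v_max (pixels/s)."""
            pairs = []
            for i in range(len(events)):
                for j in range(i + 1, len(events)):
                    v = implied_speed(events[i], events[j])
                    if v <= v_max:
                        pairs.append((i, j, v))
            return pairs

        events = [(10.0, 12.0, 0.000), (12.0, 12.0, 0.002), (50.0, 40.0, 0.001)]
        print(speed_based_pairs(events, v_max=2000.0))  # only motion-consistent events are linked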

    In situ Distributed Genetic Programming: An Online Learning Framework for Resource Constrained Networked Devices

    This research presents In situ Distributed Genetic Programming (IDGP) as a framework for distributively evolving logic while attempting to maintain acceptable average performance on highly resource-constrained embedded networked devices. The framework is motivated by the proliferation of devices employing microcontrollers with communications capability and the absence of online learning approaches that can evolve programs for them. Swarm robotics, Internet of Things (IoT) devices including smartphones, and arguably the most constrained of embedded systems, Wireless Sensor Network (WSN) motes, all possess the capabilities necessary for the distributed evolution of logic - specifically sensing, computation, actuation and communication. Genetic programming (GP) is a mechanism that can evolve logic for these devices using their “native” logic representation (i.e. programs), so technically GP could evolve any behaviour that can be coded on the device.

    IDGP is designed, implemented, demonstrated and analysed as a framework for evolving logic via genetic programming on highly resource-constrained networked devices in real-world environments while achieving acceptable average performance. Designed with highly resource-constrained devices in mind, IDGP provides a guide for those wishing to implement genetic programming on such systems. Furthermore, an implementation on mote-class devices is demonstrated to evolve logic for a time-varying sense-compute-act problem and for another problem requiring the evolution of primitive communications. Distributed evolution of logic is also achieved by employing the Island Model architecture, and a comparison of individual and distributed evolution (with the same and with slightly different goals) is presented. This demonstrates the advantage of leveraging the fact that such devices often reside within networks of devices experiencing similar conditions.

    Since GP is a population-based metaheuristic which relies on the diversity of the population to achieve learning, many, if not most, programs within the population exhibit poor performance. As such, the average observed performance (pool fitness) of a population using the standard GP learning mechanism is unlikely to be acceptable for online learning scenarios. This is suspected to be the reason why no previous attempts have been made to deploy standard GP as an online learning approach. Nonetheless, the benefits of GP for evolving logic on such devices are compelling and motivated the design of a novel satisficing heuristic called Fitness Importance (FI). FI is a population-based heuristic used to bias the evaluation of candidate solutions such that an “acceptable” average fitness (AAF) is achieved while retaining ongoing, though diminished, learning capacity. This trade-off motivated further investigation into whether dynamically adjusting the average performance in response to the AAF would be superior to a constant, balanced performing-versus-learning approach. Dynamic and constant strategies were compared on a simple problem in which the AAF target was changed during evolution, revealing that dynamically tracking the AAF target can yield a higher success rate in meeting it.

    The combination of IDGP and FI offers a novel approach for achieving online learning with GP on highly resource-constrained embedded systems. Furthermore, it simultaneously considers the acceptable average performance of the system, which may change during the operational lifetime.
    This approach could be applied to swarm and cooperative robot systems, WSN motes or IoT devices, allowing them to cooperatively learn and adapt their logic locally to meet dynamic performance requirements.
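    The Fitness Importance heuristic itself is defined in the thesis; as a rough illustration of the satisficing idea it addresses (spending evaluation effort on unproven programs only when the observed pool fitness has slack above an acceptable average fitness target), the hypothetical sketch below biases which candidate program is run next on a device. The names and the selection rule are assumptions for illustration, not the thesis's algorithm.

        # Hypothetical sketch of a satisficing selection rule: learn (evaluate an
        # unproven program) only while the observed pool fitness stays above an
        # acceptable-average-fitness (AAF) target; otherwise exploit a proven program.
        import random

        def select_for_evaluation(population, aaf_target):
            """population: list of dicts {'program': ..., 'fitness': float or None}."""
            unproven = [p for p in population if p['fitness'] is None]
            proven = [p for p in population if p['fitness'] is not None]
            if not proven:
                return random.choice(unproven) if unproven else None
            pool_fitness = sum(p['fitness'] for p in proven) / len(proven)
            if unproven and pool_fitness > aaf_target:
                return random.choice(unproven)   # slack above target: spend a run on learning
            return max(proven, key=lambda p: p['fitness'])  # otherwise keep performance acceptable

        population = [{'program': 'p0', 'fitness': 0.9},
                      {'program': 'p1', 'fitness': 0.4},
                      {'program': 'p2', 'fitness': None}]
        print(select_for_evaluation(population, aaf_target=0.7))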

    Analysing and comparing problem landscapes for black-box optimization via length scale
