
    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that challenge traditional cameras, such as those demanding low latency, high speed, or high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
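
    To make the event data model described above concrete, the following is a minimal sketch: each event is a (timestamp, x, y, polarity) tuple, and summing polarities per pixel yields a simple "event frame". The field names, sensor size, and accumulation step are illustrative assumptions, not taken from any particular camera's SDK.

```python
import numpy as np

# Each event carries a timestamp (microseconds), a pixel location, and
# the sign (polarity) of the brightness change. These field names are
# illustrative, not from a specific camera SDK.
event_dtype = np.dtype([("t", np.int64), ("x", np.uint16),
                        ("y", np.uint16), ("p", np.int8)])

def accumulate_events(events, height, width):
    """Sum event polarities per pixel to form a simple 'event frame'."""
    frame = np.zeros((height, width), dtype=np.int32)
    np.add.at(frame, (events["y"], events["x"]), events["p"])
    return frame

# Three synthetic events on a hypothetical 480x640 sensor.
events = np.array([(1, 10, 20, 1), (5, 10, 20, 1), (9, 11, 20, -1)],
                  dtype=event_dtype)
frame = accumulate_events(events, height=480, width=640)  # frame[20, 10] == 2
```

    Accumulating events over a time window like this is one of the simplest ways to bridge event streams and conventional frame-based algorithms; most of the methods surveyed in the paper operate on richer representations.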

    Energy-efficient data acquisition for accurate signal estimation in wireless sensor networks

    Long-term monitoring of an environment is a fundamental requirement for most wireless sensor networks. Because sensor nodes have a limited energy budget, prolonging their lifetime is essential to permit long-term monitoring. Furthermore, many applications require sensor nodes to obtain an accurate estimate of a point-source signal (for example, an animal call or seismic activity). Commonly, multiple sensor nodes simultaneously sample and then cooperate to estimate the event signal. Selecting the cooperating nodes carefully is important to reduce the estimation error while conserving the network's energy. In this paper, we present a novel method for sensor data acquisition and signal estimation that considers estimation accuracy, energy conservation, and energy balance. The method uses the concept of 'virtual clusters' to form groups of sensor nodes with the same spatial and temporal properties. Two algorithms provide this functionality: the 'distributed formation' algorithm automatically forms and classifies the virtual clusters, and the 'round-robin sample scheme' schedules the virtual clusters to sample the event signals in turn. The estimation error and the energy consumption of the method, used with a generalized sensing model, are evaluated through analysis and simulation. The results show that the method achieves improved signal estimation while reducing and balancing energy consumption.
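
    The round-robin idea can be illustrated with a short sketch: virtual clusters take turns being the active samplers, so energy drain is spread across the network rather than concentrated on a few nodes. The cluster contents, the noise model, and the estimate() fusion step below are hypothetical placeholders, not the paper's distributed formation algorithm.

```python
import random
from itertools import cycle

# Hypothetical virtual clusters: groups of nodes with similar spatial and
# temporal properties, formed here by hand for illustration.
virtual_clusters = [["n1", "n2", "n3"], ["n4", "n5"], ["n6", "n7", "n8"]]

def estimate(samples):
    """Placeholder fusion step: average the cooperating nodes' samples."""
    return sum(samples) / len(samples)

def round_robin_sampling(event_signals, clusters):
    """Let clusters take turns sampling, spreading energy drain evenly."""
    schedule = cycle(clusters)
    for signal in event_signals:
        active = next(schedule)  # only this cluster wakes up for the event
        # Each active node observes the signal with some sensing noise.
        samples = [signal + random.gauss(0.0, 0.1) for _ in active]
        yield estimate(samples)

estimates = list(round_robin_sampling([1.0, 1.2, 0.9], virtual_clusters))
```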

    Survey on Various Aspects of Clustering in Wireless Sensor Networks Employing Classical, Optimization, and Machine Learning Techniques

    A wide range of academic scholars, engineers, and scientific and technological communities are interested in the energy utilization of Wireless Sensor Networks (WSNs). Extensive research is under way in areas such as scalability, coverage, energy efficiency, data communication, connectivity, load balancing, security, reliability, and network lifespan. Researchers continue to search for affordable methods that improve on existing solutions through new techniques, protocols, concepts, and algorithms in this domain, and review studies typically offer a complete, accessible entry point to these problems. Motivated by this, and by the effect of clustering on reducing energy consumption, this article surveys clustering techniques across various aspects of wireless sensor networks. The main contribution of this paper is a succinct overview of clustering.

    Asynchronous glutamate release is enhanced in low release efficacy synapses and dispersed across the active zone

    The balance between fast synchronous and delayed asynchronous release of neurotransmitters plays a major role in defining the computational properties of neuronal synapses and in regulating neuronal network activity. However, how this balance is tuned at the single-synapse level remains poorly understood. Here, using the fluorescent glutamate sensor SF-iGluSnFR, we image quantal vesicular release in tens to hundreds of individual synaptic outputs from single pyramidal cells with 4 ms temporal and 75 nm spatial resolution. We find that the ratio between synchronous and asynchronous synaptic vesicle exocytosis varies extensively among synapses supplied by the same axon, and that the synchronicity of release is reduced at low-release-probability synapses. We further demonstrate that asynchronous exocytosis sites are more widely distributed within the release area than synchronous sites. Together, our results reveal a universal relationship between the two major functional properties of synapses: the timing and the overall efficacy of neurotransmitter release.
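
    A schematic way to quantify synchronicity per synapse is to classify each release event by its latency after the most recent stimulus. The 5 ms synchronous window and the toy data in the sketch below are hypothetical illustrations of that idea, not the paper's actual analysis pipeline.

```python
# Hypothetical latency cutoff separating synchronous from asynchronous
# release; the real boundary depends on the experimental protocol.
SYNC_CUTOFF_MS = 5.0

def release_synchronicity(stimulus_times_ms, release_times_ms):
    """Fraction of release events falling inside the synchronous window."""
    if not release_times_ms:
        return 0.0
    synchronous = 0
    for r in release_times_ms:
        preceding = [s for s in stimulus_times_ms if s <= r]
        # Latency is measured from the most recent preceding stimulus.
        if preceding and (r - max(preceding)) <= SYNC_CUTOFF_MS:
            synchronous += 1
    return synchronous / len(release_times_ms)

# Stimuli at 0 and 100 ms; releases at 2, 4, and 130 ms -> ratio 2/3.
ratio = release_synchronicity([0.0, 100.0], [2.0, 4.0, 130.0])
```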

    Cognitive visual tracking and camera control

    Cognitive visual tracking is the process of observing and understanding the behaviour of a moving person. This paper presents an efficient solution to extract, in real time, high-level information from an observed scene and to generate the most appropriate commands for a set of pan-tilt-zoom (PTZ) cameras in a surveillance scenario. Such a high-level feedback control loop, which is the main novelty of our work, serves to reduce uncertainties in the observed scene and to maximize the amount of information extracted from it. It is implemented with a distributed camera system, using SQL tables as virtual communication channels and Situation Graph Trees for knowledge representation, inference, and high-level camera control. A set of experiments in a surveillance scenario shows the effectiveness of our approach and its potential for real applications of cognitive vision.
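
    The "SQL tables as virtual communication channels" idea can be sketched minimally: one process writes scene observations into a table, and a controller process polls unhandled rows and derives PTZ commands from them. The schema, column names, and centring control law below are hypothetical assumptions, since the paper's actual schema is not given here.

```python
import sqlite3

# In-memory database standing in for the shared database that links the
# distributed camera processes.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE observations (
                    id INTEGER PRIMARY KEY,
                    target_x REAL,
                    target_y REAL,
                    handled INTEGER DEFAULT 0)""")

def publish_observation(x, y):
    """Tracker process: write a normalised target position to the channel."""
    conn.execute("INSERT INTO observations (target_x, target_y) VALUES (?, ?)",
                 (x, y))
    conn.commit()

def poll_and_control():
    """Controller process: consume unhandled rows and emit PTZ commands."""
    rows = conn.execute("SELECT id, target_x, target_y FROM observations "
                        "WHERE handled = 0").fetchall()
    for oid, x, y in rows:
        pan, tilt = x - 0.5, y - 0.5  # placeholder centring control law
        print(f"PTZ command: pan={pan:+.2f}, tilt={tilt:+.2f}")
        conn.execute("UPDATE observations SET handled = 1 WHERE id = ?", (oid,))
    conn.commit()

publish_observation(0.7, 0.4)
poll_and_control()
```

    Using a database table as the channel decouples producers from consumers: processes can run on different machines, and unconsumed observations persist until the controller polls them.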

    Multi-sensor fusion for human-robot interaction in crowded environments

    For the challenges associated with the ageing population, robot assistants are becoming a promising solution. Human-Robot Interaction (HRI) allows a robot to understand the intentions of humans in an environment and react accordingly. This thesis proposes HRI techniques to facilitate the transition of robots from lab-based research to real-world environments. The HRI aspects addressed in this thesis are illustrated in the following scenario: an elderly person, engaged in conversation with friends, wishes to attract a robot's attention. This composite task consists of many problems. The robot must detect and track the subject in a crowded environment. To engage with the user, it must track their hand movement. Knowledge of the subject's gaze would ensure that the robot doesn't react to the wrong person. Understanding the subject's group participation would enable the robot to respect existing human-human interaction. Many existing solutions to these problems are too constrained for natural HRI in crowded environments: some require initial calibration or static backgrounds, while others deal poorly with occlusions, illumination changes, or real-time operation requirements. This work proposes algorithms that fuse multiple sensors to remove these restrictions and increase accuracy over the state of the art. The main contributions of this thesis are: a hand and body detection method, with a probabilistic algorithm for their real-time association when multiple users and hands are detected in crowded environments; an RGB-D sensor-fusion hand tracker, which increases position and velocity accuracy by combining a depth-image-based hand detector with Monte-Carlo updates using colour images (see the sketch below); a sensor-fusion gaze estimation system, combining IR and depth cameras on a mobile robot to give better accuracy than traditional visual methods, without the constraints of traditional IR techniques; and a group detection method, based on sociological concepts of static and dynamic interactions, which incorporates real-time gaze estimates to enhance detection accuracy.
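
    The Monte-Carlo fusion in the hand tracker can be illustrated with a generic particle-filter sketch: particles seeded by a depth-based detection are diffused by a motion model and re-weighted by a colour-image cue. The motion model, likelihood, and all numeric values below are placeholder assumptions, not the thesis's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, motion_noise=5.0):
    """Constant-position motion model with Gaussian diffusion (pixels)."""
    return particles + rng.normal(0.0, motion_noise, particles.shape)

def colour_likelihood(particles, colour_detection):
    """Placeholder: weight particles by distance to a colour-based cue."""
    d = np.linalg.norm(particles - colour_detection, axis=1)
    return np.exp(-0.5 * (d / 10.0) ** 2)

def update(particles, colour_detection):
    """Re-weight by the colour cue and resample (Monte-Carlo update)."""
    w = colour_likelihood(particles, colour_detection)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Depth-based detector seeds 200 particles near pixel (320, 240); the
# colour image then refines the estimate.
particles = rng.normal([320.0, 240.0], 3.0, size=(200, 2))
particles = update(predict(particles), np.array([325.0, 238.0]))
estimate = particles.mean(axis=0)  # fused hand position estimate
```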

    Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications

    Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review, covering the period 2002-2013, of machine learning methods that have been used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges. (Accepted for publication in IEEE Communications Surveys and Tutorials.)