    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of those changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that challenge traditional cameras, such as low-latency, high-speed, and high-dynamic-range settings. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
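The event encoding described above (time, location, and sign of each brightness change) can be illustrated with a minimal sketch. The `Event` type and the accumulation window are illustrative assumptions, not the survey's code; summing polarities over a time window is just one common way to turn an event stream into an image-like representation.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float   # timestamp in seconds (event cameras resolve microseconds)
    x: int     # pixel column
    y: int     # pixel row
    p: int     # polarity: +1 brightness increase, -1 decrease

def accumulate(events, width, height, t0, t1):
    """Sum event polarities per pixel over [t0, t1) to form a simple
    brightness-change image, one common event representation."""
    img = [[0] * width for _ in range(height)]
    for e in events:
        if t0 <= e.t < t1:
            img[e.y][e.x] += e.p
    return img

events = [Event(1e-6, 2, 1, +1), Event(3e-6, 2, 1, +1), Event(4e-6, 0, 0, -1)]
frame = accumulate(events, width=4, height=3, t0=0.0, t1=1e-3)
print(frame[1][2])  # → 2 (two positive events at pixel (2, 1))
```

Note that, unlike a conventional frame, nothing is recorded at pixels where brightness did not change, which is the source of the sensor's sparsity and low power draw.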

    Development of Multi-Robotic Arm System for Sorting System Using Computer Vision

    This paper develops a multi-robotic arm system and a stereo vision system to sort objects into the correct positions according to size and shape attributes. The robotic arm system consists of one master and three slave robots associated with three conveyor belts. Each robotic arm is controlled by a robot controller based on a microcontroller. A master controller is used for the vision system and for communicating with the slave robotic arms via the Modbus RTU protocol over an RS485 serial interface. The stereo vision system determines the 3D coordinates of each object. Instead of rebuilding the entire disparity map, which is computationally expensive, the centroids of the objects in the two images are calculated to determine the depth value. The 3D coordinates of the object are then computed using the pinhole camera model. Objects are picked up and placed on a conveyor branch according to their shape. The conveyor transports the object to the location of the slave robot. Based on the size attribute that the slave robot receives from the master, the object is picked and placed in the correct position. Experimental results reveal the effectiveness of the system. The system can be used in industrial processes to reduce the required time and improve the performance of the production line.
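The centroid-based shortcut in the abstract can be sketched as follows. Assuming a rectified stereo pair, depth follows the pinhole relation Z = f · B / d, where d is the horizontal disparity between the matched centroids, f is the focal length in pixels, and B is the baseline. All calibration values below (focal length, baseline, principal point) are hypothetical, chosen only to make the arithmetic concrete; the paper's actual parameters are not given in the abstract.

```python
def centroid(points):
    """Centroid of a list of (x, y) pixel coordinates of one object's blob."""
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def triangulate(cl, cr, f, baseline, cx, cy):
    """Depth and 3D position from matched centroids in a rectified stereo
    pair, using the pinhole model: Z = f * B / disparity."""
    disparity = cl[0] - cr[0]   # left-minus-right horizontal shift, in pixels
    if disparity <= 0:
        raise ValueError("non-positive disparity: bad match or rectification")
    z = f * baseline / disparity
    x = (cl[0] - cx) * z / f    # back-project through the principal point
    y = (cl[1] - cy) * z / f
    return x, y, z

# Hypothetical blob pixels and calibration, for illustration only.
left_blob = [(310, 238), (314, 242), (312, 240)]
right_blob = [(290, 238), (294, 242), (292, 240)]
cl, cr = centroid(left_blob), centroid(right_blob)
x, y, z = triangulate(cl, cr, f=800.0, baseline=0.06, cx=320.0, cy=240.0)
print(round(z, 3))  # Z = 800 * 0.06 / 20 = 2.4 (metres, since B is in metres)
```

Computing one disparity per object centroid instead of a dense disparity map is what makes this approach cheap enough for a microcontroller-class master controller.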

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    Get PDF
    No abstract available